Dataset schema:
- content: string (length 85 to 101k)
- title: string (length 0 to 150)
- question: string (length 15 to 48k)
- answers: list
- answers_scores: list
- non_answers: list
- non_answers_scores: list
- tags: list
- name: string (length 35 to 137)
Q: Are there any alternatives to py2exe? Are there any alternatives to py2exe? A: pyInstaller is cross-platform and very powerful, with many third-party packages (matplotlib, numpy, PyQT4, ...) specially supported "out of the box", support for eggs, code-signing on Windows (and a couple other Windows-only goodies, optional binary packing... the works!-) The one big issue: the last "released" version, 1.3, is ages-old -- you absolutely must install the SVN trunk version, svn co http://svn.pyinstaller.org/trunk pyinstaller (or the 1.4 pre-release, but I haven't tested that one). A fair summary of its capabilities as of 6 months ago is here (in English, despite the Italian URL;-). A: cx_Freeze is cross-platform and does the same, or you could use py2app, which works on Mac only. A: Here's a list of them. Py2exe PyInstaller cx_Freeze bbfreeze py2app You might also consider Nuitka, which compiles Python to native code. A: bbfreeze claims to work on Windows and UNIX, but not on OS X. It doesn't seem to be actively developed anymore, though.
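To make the cx_Freeze option concrete, a minimal freeze script looks like the sketch below; the entry-point name myapp.py and the metadata are hypothetical placeholders, not from the question.

```python
# Minimal cx_Freeze setup script -- a sketch, not a drop-in config.
# "myapp.py" is a hypothetical entry point; adjust to your project.
from cx_Freeze import setup, Executable

setup(
    name="myapp",
    version="0.1",
    description="Example frozen application",
    executables=[Executable("myapp.py")],
)
```

Running python setup.py build then produces a build/ directory containing the frozen executable.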
Are there any alternatives to py2exe?
Are there any alternatives to py2exe?
[ "pyInstaller is cross-platform and very powerful, with many third-party packages (matplotlib, numpy, PyQT4, ...) specially supported \"out of the box\", support for eggs, code-signing on Windows (and a couple other Windows-only goodies, optional binary packing... the works!-) The one big issue: the last \"released\" version, 1.3, is ages-old -- you absolutely must install the SVN trunk version, svn co http://svn.pyinstaller.org/trunk pyinstaller (or the 1.4 pre-release, but I haven't tested that one). A fair summary of its capabilities as of 6 months ago is here (in English, despite the Italian URL;-).\n", "cx_Freeze is cross-platform and does the same, or you could use py2app, which works on mac only.\n", "Here's a list of them.\n\nPy2exe\nPyInstaller\ncx_Freeze\nbbfreeze\npy2app\n\nYou might also consider Nuitka, which compiles python to native code.\n", "bbfreeze claims to works on Windows and UNIX, but not on OS X. It doesn't seem to be actively developed anymore, though.\n" ]
[ 59, 25, 20, 6 ]
[]
[]
[ "py2exe", "python" ]
stackoverflow_0001689086_py2exe_python.txt
Q: AND in Python's slicing with modulus How can you fix the code? I am trying to have i % 3 == 1 and i != 16 unsuccessfully by data = "8|9|8|9|8|9|8|9|9|8|9|8|9|8|9|8" arra = map(int,data.split("|")) arra = sum(arra[1::3 and != 16]) for i in range(0, len(arra), 16)] | |---// Problem here A: Try this: arra = sum(a for i,a in enumerate(arra) if i %3==1 and i != 16) For this kind of complex work, slice notation won't really do. But why do you assign back to arra? You wipe out your original list of values. A: Slices don't work like that. Paul McGuire has the correct code: arra = sum(x for i, x in enumerate(arra) if i % 3 == 1 and i != 16) It's also not clear from your code what the point of the for i in range(0, len(arra), 16)] is supposed to be. What are you trying to accomplish?
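For reference, the accepted approach runs as-is on the question's sample data; a minimal runnable sketch:

```python
data = "8|9|8|9|8|9|8|9|9|8|9|8|9|8|9|8"
arra = list(map(int, data.split("|")))

# Slice notation can't express the extra "i != 16" condition, so pair
# each value with its index via enumerate and filter explicitly.
total = sum(a for i, a in enumerate(arra) if i % 3 == 1 and i != 16)
print(total)  # indices 1, 4, 7, 10, 13 -> 9 + 8 + 9 + 9 + 8 = 43
```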
AND in Python's slicing with modulus
How can you fix the code? I am trying to have i % 3 == 1 and i != 16 unsuccessfully by data = "8|9|8|9|8|9|8|9|9|8|9|8|9|8|9|8" arra = map(int,data.split("|")) arra = sum(arra[1::3 and != 16]) for i in range(0, len(arra), 16)] | |---// Problem here
[ "Try this:\narra = sum(a for i,a in enumerate(arra) if i %3==1 and i != 16)\n\nFor this kind of complex work, slice notation wont really do. But why do you assign back to arra? You wipe out your original list of values.\n", "Slices don't work like that.\nPaul McGuire has the correct code:\narra = sum(x for i, x in enumerate(arra) if i % 3 == 1 and i != 16)\n\nIt's also not clear from your code what the point of the for i in range(0, len(arra), 16)] is supposed to be. What are you trying to accomplish?\n" ]
[ 6, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001690203_python.txt
Q: Sending a retrieved SMS using Python I am writing a Python script to read an SMS from the SIM memory, buffer it and send the same SMS to another number. I am executing this script on a Telit GM862-GPS. The script I have written is: import MDM MDM.send('AT+CMGF=1\r', 10) # Changing to Text mode MDM.send('AT+CMGR=1\r',0) # Reading SMS at index 1 a = MDM.receive(10) # Receiving as string MDM.send('AT+CMGS="Phone no.", 129', 0) #selecting a particular no. MDM.send(a, 0) # sending retrieved SMS MDM.sendbyte(0x1A, 0) # sending Ctrl Z But I am facing this problem: After executing the "AT+CMGR=1\r" command, the script doesn't execute the commands after that. I have checked this by putting a simple AT command to change some value after the "Read SMS" command, and that value doesn't get changed. I don't know for what weird reason it's doing this. It would be really helpful if someone can help me out with this. Regards Update @ Paid nerd: Yes..that's a timeout value @ Jaime: An SMS exists in the SIM memory and it does show the SMS at index 1. The only problem I am getting is that it doesn't execute the commands which come after the "AT+CMGR" or "AT+CMGL" command. @ Foresto: I tried adding "\n" at the end but it doesn't execute the Python statements after the Read SMS statement. A: It looks like your program is waiting for a response that never arrives. That sort of thing is typical when a device doesn't think you have sent a complete command yet. I don't know the protocol you're using to communicate with that device, but it looks like a Hayes AT command set. Is it possible the device is expecting a newline character instead of or in addition to the carriage returns you're sending? For example: 'AT+CMGF=1\r\n' Also, I don't know what your MDM object is, but could it be buffering your commands (not actually sending them) until you call a flush() method or something similar?
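Two details commonly bite in flows like this: AT+CMGS needs its own terminating carriage return (the script above omits it), and the modem should be polled until the ">" prompt or final OK actually arrives instead of assuming fixed timing. The sketch below illustrates the idea only; the phone number is a placeholder, and the timeout units and string handling of MDM.receive are Telit-firmware-specific, so verify both against the module's documentation.

```python
import MDM

def send_and_wait(cmd, token, tries=20):
    # Send an AT command, then poll the modem until `token` appears in
    # the accumulated response (or we run out of attempts). Older Telit
    # firmware ships an old Python, so keep string handling simple.
    MDM.send(cmd, 0)
    response = ''
    for _ in range(tries):
        response = response + MDM.receive(10)
        if response.find(token) != -1:
            break
    return response

send_and_wait('AT+CMGF=1\r', 'OK')             # text mode
sms = send_and_wait('AT+CMGR=1\r', 'OK')       # read slot 1
send_and_wait('AT+CMGS="+1234567890"\r', '>')  # wait for the ">" prompt
MDM.send(sms, 0)                               # message body
MDM.sendbyte(0x1A, 0)                          # Ctrl-Z ends the message
```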
Sending a retrieved SMS using Python
I am writing a Python script to read an SMS from the SIM memory, buffer it and send the same SMS to another number. I am executing this script on a Telit GM862-GPS. The script I have written is: import MDM MDM.send('AT+CMGF=1\r', 10) # Changing to Text mode MDM.send('AT+CMGR=1\r',0) # Reading SMS at index 1 a = MDM.receive(10) # Receiving as string MDM.send('AT+CMGS="Phone no.", 129', 0) #selecting a particular no. MDM.send(a, 0) # sending retrieved SMS MDM.sendbyte(0x1A, 0) # sending Ctrl Z But I am facing this problem: After executing the "AT+CMGR=1\r" command, the script doesn't execute the commands after that. I have checked this by putting a simple AT command to change some value after the "Read SMS" command, and that value doesn't get changed. I don't know for what weird reason it's doing this. It would be really helpful if someone can help me out with this. Regards Update @ Paid nerd: Yes..that's a timeout value @ Jaime: An SMS exists in the SIM memory and it does show the SMS at index 1. The only problem I am getting is that it doesn't execute the commands which come after the "AT+CMGR" or "AT+CMGL" command. @ Foresto: I tried adding "\n" at the end but it doesn't execute the Python statements after the Read SMS statement.
[ "It looks like your program is waiting for a response that never arrives. That sort of thing is typical when a device doesn't think you have sent a complete command yet.\nI don't know the protocol you're using to communicate with that device, but it looks like a Hayes AT command set. Is it possible the device is expecting a newline character instead of or in addition to the carriage returns you're sending? For example:\n'AT+CMGF=1\\r\\n'\nAlso, I don't know what your MDM object is, but could it be buffering your commands (not actually sending them) until you call a flush() method or something similar?\n" ]
[ 1 ]
[]
[]
[ "python", "sms" ]
stackoverflow_0001689638_python_sms.txt
Q: Why should we prefer to store serialized data rather than raw code in the DB? If we have some code (a data structure) which should be stored in the DB, someone always suggests that we store the serialized data, not the raw code string. So I'm not so sure why we should prefer the serialized data. Give a simple instance (in Python): we've got a field which will store a Python dict, like { "name" : "BMW", "category":"car", "cost" : "200000"} so we can serialize it using pickle (a Python module) and then store the pickled data to a DB field. Or we can store the dict string directly to the DB without serializing. Since we need to convert the string back to Python data, the two approaches are both easy to do, by using pickle.loads and exec respectively. So which should be preferred? And why? Is it because exec is much slower than pickle? Or some other reasons? Thanks. A: Or we can store the dict string directly to DB without serializing. There is no such thing as "the dict string". There are many ways to serialize a dict into a string; you may be thinking of repr, possibly of eval as the way to get the dict back (you mention exec, but that's simply absurd: what statement would you execute...?! I think you probably mean eval). They're different serialization methods with their tradeoffs, and in many cases the tradeoffs tend to favor pickling (cPickle, for speed, with protocol -1 meaning "the best you can do", usually). Performance is surely an issue, e.g., in terms of size of what you're storing...: $ python -c 'import cPickle; d=dict.fromkeys(range(99), "banana"); print len(repr(d))' 1376 $ python -c 'import cPickle; d=dict.fromkeys(range(99), "banana"); print len(cPickle.dumps(d,-1))' 412 ...why would you want to store 1.4 KB rather than 0.4 KB each time you serialize a dict like this one...?-) Edit: since some suggest Json, it's worth pointing out that json takes 1574 bytes here -- even bulkier than bulky repr! As for speed... $ python -mtimeit -s'import cPickle; d=dict.fromkeys(range(99), "chocolate")' 'eval(repr(d))' 1000 loops, best of 3: 706 usec per loop $ python -mtimeit -s'import cPickle; d=dict.fromkeys(range(99), "chocolate")' 'cPickle.loads(cPickle.dumps(d, -1))' 10000 loops, best of 3: 70.2 usec per loop ...why take 10 times longer? What's the upside that would justify paying such a hefty price? Edit: json takes 2.7 milliseconds -- almost forty times slower than cPickle. Then there's generality -- not every serializable object can properly round-trip with repr and eval, while pickling is much more general. E.g.: $ python -c'def f(): pass d={23:f} print d == eval(repr(d))' Traceback (most recent call last): File "<string>", line 3, in <module> File "<string>", line 1 {23: <function f at 0x241970>} ^ SyntaxError: invalid syntax vs $ python -c'import cPickle def f(): pass d={"x":f} print d == cPickle.loads(cPickle.dumps(d, -1))' True Edit: json is even less general than repr in terms of round-trips. So, comparing the two serialization approaches (pickling vs repr/eval), we see: pickling is way more general, it can be e.g. 10 times faster, and take up e.g. 3 times less space in your database. What compensating advantages do you envisage for repr/eval...? BTW, I see some answers mention security, but that's not a real point: pickling is insecure too (the security issue with eval'ing untrusted strings may be more obvious, but unpickling an untrusted string is also insecure, though in subtler and darker ways). Edit: json is more secure. 
Whether that's worth the huge cost in size, speed and generality is a tradeoff worth pondering. In most cases it won't be. A: Both storing as a string and using pickle are serialization strategies. Pickle is more flexible in what it can store and can be more compact. Both strategies, eval (which is what you would use over exec in this instance) and pickle.loads, are insecure—both of these can run arbitrary Python code. Better would be to use a serialization format like JSON (json module in 2.6, simplejson 3rd party module pre-2.6), which isn't tied specifically to being read by Python and won't execute arbitrary code if there ends up being data you do not expect in your database. Further, while the pickle formats are subject to changing (and you're losing data!), a standard like JSON is not going to change on you in a backward-incompatible way. A: I've preferred using a standard serialization format like JSON for storing this kind of data in the database. It makes it possible for consumers of the data to be written in languages other than Python, it's basically human readable, and it's more easily query-able with SQL than pickled objects. A: If I had to choose between serializing the data into something like JSON or storing a pickled data structure, I'd choose the JSON option every time. Other than the security issues everyone else is mentioning, portability is the biggest reason to not store a native Python object in the database. There may be a requirement in the future to port your system to some other language, and storing a pickled Python object would make that rather difficult. Also, other applications may need to hit the data you're storing, but I can't speak to specific instances since I don't know your situation. Also, if your system needed to do any kind of filtering, storing data in a JSON string still wouldn't be your best option. If you can, and there are a set number of fields, I would be very tempted to split them into atomic elements. It would make searching and filtering a lot easier and more efficient. A: The question is what does serialization gain you? I bet the people recommending that you store the serialized data think you will save time because you don't need to mess around with SQL queries to construct Python objects. But there are some significant trade-offs in storing your data as serialized blobs, such as: You lose referential integrity checking The data format you choose may not work well for different access patterns. How will you get all cars that cost more than $20,000 efficiently, if all of the data is stored inside serialized objects? What will you do if your object model changes significantly? You lose interoperability with other languages if you use a native Python serialization format You have to write support code to load data with a non-native Python serialization format You can't use 3rd party tools for doing reporting on your data The list goes on and on, make sure you are okay with these trade-offs. A: There's always a danger in exec that someone will somehow pass in a string with some nasty code. It might never be the case in your application but in general, it's a big problem, and using built-in serialization avoids it. Another reason for using built-in serialization is that it makes it obvious what you're trying to do in your code. If you just fetch and exec, someone might not understand your actual intent. A: Firstly, fetch and exec will leave your application vulnerable to code injection. 
If someone entered "System(rm -r /);" in your name field, you would lose most of your files on a *nix system when you read in the data. The second reason is portability and upgradability. "Pickled" objects will work on any Python platform with any Python release -- Guido promised! Thirdly, "pickling" will automagically handle special characters and weird code pages. So there will be no problems if your users enter line feeds or semicolons. A: Serialization means less worrying. When you marshal your data using some known serialization — Pickle, JSON, Google's Protocol Buffers — you can trust that the data you retrieve later is the data you stored earlier. Limit capability. If you're storing static data, why open up the possibility of letting the code be executed? It's unnecessary. Imagine the complication which will occur if, one year from now, another programmer starts adding functions and module imports to this "static" data. A: If there is any possibility of manipulating this data in the DB or creating reports from it, I would seriously consider unpacking it onto a table. A simple table with name, key and value columns would give you all the power of your relational database. Depending on edits it might even perform better than fetch->modify->dump.
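To make the JSON alternative from the answers concrete, the round-trip for the question's sample dict is short (json is in the standard library from Python 2.6; use simplejson before that):

```python
import json

record = {"name": "BMW", "category": "car", "cost": "200000"}

blob = json.dumps(record)    # a plain string, safe to put in a DB column
restored = json.loads(blob)  # parsed back without executing any code
assert restored == record
```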
Why should we prefer to store serialized data rather than raw code in the DB?
If we have some code (a data structure) which should be stored in the DB, someone always suggests that we store the serialized data, not the raw code string. So I'm not so sure why we should prefer the serialized data. Give a simple instance (in Python): we've got a field which will store a Python dict, like { "name" : "BMW", "category":"car", "cost" : "200000"} so we can serialize it using pickle (a Python module) and then store the pickled data to a DB field. Or we can store the dict string directly to the DB without serializing. Since we need to convert the string back to Python data, the two approaches are both easy to do, by using pickle.loads and exec respectively. So which should be preferred? And why? Is it because exec is much slower than pickle? Or some other reasons? Thanks.
[ "\nOr we can store the dict string\n directly to DB without serializing.\n\nThere is no such thing as \"the dict string\". There are many ways to serialize a dict into a string; you may be thinking of repr, possibly as eval as the way to get the dict back (you mention exec, but that's simply absurd: what statement would you execute...?! I think you probably mean eval). They're different serialization methods with their tradeoffs, and in many cases the tradeoffs tend to favor pickling (cPickle, for speed, with protocol -1 meaning \"the best you can do\", usually).\nPerformance is surely an issue, e.g., in terms of size of what you're storing...:\n$ python -c 'import cPickle; d=dict.fromkeys(range(99), \"banana\"); print len(repr(d))'\n1376\n$ python -c 'import cPickle; d=dict.fromkeys(range(99), \"banana\"); print len(cPickle.dumps(d,-1))'\n412\n\n...why would you want to store 1.4 KB rather than 0.4 KB each time you serialize a dict like this one...?-)\nEdit: since some suggest Json, it's worth pointing out that json takes 1574 bytes here -- even bulkier than bulky repr!\nAs for speed...\n$ python -mtimeit -s'import cPickle; d=dict.fromkeys(range(99), \"chocolate\")' 'eval(repr(d))'\n1000 loops, best of 3: 706 usec per loop\n$ python -mtimeit -s'import cPickle; d=dict.fromkeys(range(99), \"chocolate\")' 'cPickle.loads(cPickle.dumps(d, -1))'\n10000 loops, best of 3: 70.2 usec per loop\n\n...why take 10 times longer? What's the upside that would justify paying such a hefty price?\nEdit: json takes 2.7 milliseconds -- almost forty times slower than cPickle.\nThen there's generality -- not every serializable object can properly round-trip with repr and eval, while pickling is much more general. E.g.:\n$ python -c'def f(): pass\nd={23:f}\nprint d == eval(repr(d))'\nTraceback (most recent call last):\n File \"<string>\", line 3, in <module>\n File \"<string>\", line 1\n {23: <function f at 0x241970>}\n ^\nSyntaxError: invalid syntax\n\nvs\n$ python -c'import cPickle\ndef f(): pass\nd={\"x\":f}\nprint d == cPickle.loads(cPickle.dumps(d, -1))'\nTrue\n\nEdit: json is even less general than repr in terms of round-trips.\nSo, comparing the two serialization approaches (pickling vs repr/eval), we see: pickling is way more general, it can be e.g. 10 times faster, and take up e.g. 3 times less space in your database.\nWhat compensating advantages do you envisage for repr/eval...?\nBTW, I see some answers mention security, but that's not a real point: pickling is insecure too (the security issue with eval`ing untrusted strings may be more obvious, but unpickling an untrusted string is also insecure, though in subtler and darker ways).\nEdit: json is more secure. Whether that's worth the huge cost in size, speed and generality, is a tradeoff worth pondering. In most cases it won't be.\n", "Both storing as a string and using pickle are serialization strategies. Pickle is more flexible in what it can store and can be more compact. Both strategies, eval (which is what you would use over exec in this instance) and pickle.loads are insecure—both of these can run arbitrary Python code.\nBetter would be to use a serialization format like JSON (json module in 2.6, simplejson 3rd party module pre-2.6), which isn't tied specifically to being read by Python and won't execute arbitrary code if there ends up being data you do not expect in your database. 
Further, while the pickle formats are subject to changing (and you're losing data!), a standard like JSON is not going to change on you in a backward-incompatible way.\n", "I've preferred using a standard serialization format like JSON for storing this kind of data in the database. It makes it possible for consumers of the data to be written in languages other than Python, it's basically human readable, and it's more easily query-able with SQL than pickled objects.\n", "If I had to choose between serializing the data into something like JSON or storing a pickled data structure, I'd choose the JSON option every time. Other than the security issues everyone else is mentioning, portability is the biggest reason to not store a native Python object in the database. There may be a requirement in the future to port your system to some other language, and storing a pickled Python object would make that rather difficult. Also, other applications may need to hit the data you're storing, but I can't speak to specific instances since I don't know your situation.\nAlso, if your system needed to do any kind of filtering, storing data in a JSON string still wouldn't be your best option. If you can, and there are a set number of fields, I would be very tempted to split them into atomic elements. It would make searching and filtering a lot easier and more efficient.\n", "The question is what does serialization gain you?\nI bet the people recommending that you store the serialized data think you will save time because you don't need to mess around with SQL queries to construct Python objects. But there are some significant trade-offs in storing your data as serialized blobs, such as:\n\nYou lose referential integrity checking\nThe data format you choose may not work well for different access patterns. How will you get all cars that cost more than $20,000 efficiently, if all of the data is stored inside serialized objects?\nWhat will you do if your object model changes significantly?\nYou lose interoperability with other languages if you use a native Python serialization format\nYou have to write support code to load data with a non-native Python serialization format\nYou can't use 3rd party tools for doing reporting on your data\n\nThe list goes on and on, make sure you are okay with these trade-offs.\n", "There's always a danger in exec that someone will somehow pass in a string with some nasty code. It might never be the case in your application but in general, it's a big problem, and using built-in serialization avoids it.\nAnother reason for using built-in serialization is that it makes it obvious what you're trying to do in your code. If you just fetch and exec, someone might not understand your actual intent.\n", "Firstly, fetch and exec will leave your application vulnerable to code injection.\nIf someone entered \"System(rm -r /);\" in your name field, you would lose most of your files on a *nix system when you read in the data.\nThe second reason is portability and upgradability. \"Pickled\" objects will work on any Python platform with any Python release -- Guido promised!\nThirdly, \"pickling\" will automagically handle special characters and weird code pages. So there will be no problems if your users enter line feeds or semicolons.\n", "Serialization means less worrying. When you marshal your data using some known serialization — Pickle, JSON, Google's Protocol Buffers — you can trust that the data you retrieve later is the data you stored earlier.\nLimit capability. 
If you're storing static data, why open up the possibility of letting the code be executed? It's unnecessary. Imagine the complication which will occur if, one year from now, another programmer starts adding functions and module imports to this \"static\" data.\n", "If there is any possibility of manipulating this data in the DB or creating reports from it, I would seriously consider unpacking it onto a table. A simple table with name, key and value columns would give you all the power of your relational database. Depending on edits it might even perform better than fetch->modify->dump.\n" ]
[ 10, 3, 3, 3, 2, 1, 0, 0, 0 ]
[]
[]
[ "database", "python", "serialization" ]
stackoverflow_0001685330_database_python_serialization.txt
Q: How do I fetch an XML document and parse it with Python twisted? I want a fast way to grab a URL and parse it while streaming. Ideally this should be super fast. My language of choice is Python. I have an intuition that twisted can do this but I'm at a loss to find an example. A: If you need to handle HTTP responses in a streaming fashion, there are a few options. You can do it via downloadPage: from xml.sax import make_parser from twisted.web.client import downloadPage class StreamingXMLParser: def __init__(self): self._parser = make_parser() def write(self, bytes): self._parser.feed(bytes) def close(self): self._parser.feed('', True) parser = StreamingXMLParser() d = downloadPage(url, parser) # d fires when the response is completely received This works because downloadPage writes the response body to the file-like object passed to it. Here, passing in an object with write and close methods satisfies that requirement, but incrementally parses the data as XML instead of putting it on a disk. Another approach is to hook into things at the HTTPPageGetter level. HTTPPageGetter is the protocol used internally by getPage. class StreamingXMLParsingHTTPClient(HTTPPageGetter): def connectionMade(self): HTTPPageGetter.connectionMade(self) self._parser = make_parser() def handleResponsePart(self, bytes): self._parser.feed(bytes) def handleResponseEnd(self): self._parser.feed('', True) self.handleResponse(None) # Whatever you pass to handleResponse will be the result of the Deferred below. factory = HTTPClientFactory(url) factory.protocol = StreamingXMLParsingHTTPClient reactor.connectTCP(host, port, factory) d = factory.deferred # d fires when the response is completely received Finally, there will be a new HTTP client API soon. Since this isn't part of any release yet, it's not as directly useful as the previous two approaches, but it's somewhat nicer, so I'll include it to give you an idea of what the future will bring. :) The new API lets you specify a protocol to receive the response body. So you'd do something like this: class StreamingXMLParser(Protocol): def __init__(self): self.done = Deferred() def connectionMade(self): self._parser = make_parser() def dataReceived(self, bytes): self._parser.feed(bytes) def connectionLost(self, reason): self._parser.feed('', True) self.done.callback(None) from twisted.web.client import Agent from twisted.internet import reactor agent = Agent(reactor) d = agent.request('GET', url, headers, None) def cbRequest(response): # You can look at the response headers here if you like. protocol = StreamingXMLParser() response.deliverBody(protocol) return protocol.done d.addCallback(cbRequest) # d fires when the response is fully received and parsed A: You only need to parse a single URL? Then don't worry. Use urllib2 to open the connection and pass the file handle into ElementTree. Variations you can try would be to use ElementTree's incremental parser or to use iterparse, but that depends on what your real requirements are. There's "super fast" but there's also "fast enough." It's only when you start having multiple simultaneous connections where you should look at Twisted or multithreading.
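For the single-URL case in the second answer, the incremental ElementTree route looks roughly like this (a sketch; the URL and the 'item'/'title' tag names are hypothetical):

```python
import urllib2
from xml.etree.cElementTree import iterparse

# iterparse consumes the file-like HTTP response directly and yields
# elements as their closing tags arrive, so parsing overlaps download.
response = urllib2.urlopen('http://example.com/feed.xml')
for event, elem in iterparse(response):
    if elem.tag == 'item':             # hypothetical element of interest
        print(elem.findtext('title'))
        elem.clear()                   # discard parsed children to bound memory
```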
How do I fetch an XML document and parse it with Python twisted?
I want a fast way to grab a URL and parse it while streaming. Ideally this should be super fast. My language of choice is Python. I have an intuition that twisted can do this but I'm at a loss to find an example.
[ "If you need to handle HTTP responses in a streaming fashion, there are a few options.\nYou can do it via downloadPage:\nfrom xml.sax import make_parser\nfrom twisted.web.client import downloadPage\n\nclass StreamingXMLParser:\n def __init__(self):\n self._parser = make_parser()\n\n def write(self, bytes):\n self._parser.feed(bytes)\n\n def close(self):\n self._parser.feed('', True)\n\nparser = StreamingXMLParser()\nd = downloadPage(url, parser)\n# d fires when the response is completely received\n\nThis works because downloadPage writes the response body to the file-like object passed to it. Here, passing in an object with write and close methods satisfies that requirement, but incrementally parses the data as XML instead of putting it on a disk. \nAnother approach is to hook into things at the HTTPPageGetter level. HTTPPageGetter is the protocol used internally by getPage.\nclass StreamingXMLParsingHTTPClient(HTTPPageGetter):\n def connectionMade(self):\n HTTPPageGetter.connectionMade(self)\n self._parser = make_parser()\n\n def handleResponsePart(self, bytes):\n self._parser.feed(bytes)\n\n def handleResponseEnd(self):\n self._parser.feed('', True)\n self.handleResponse(None) # Whatever you pass to handleResponse will be the result of the Deferred below.\n\nfactory = HTTPClientFactory(url)\nfactory.protocol = StreamingXMLParsingHTTPClient\nreactor.connectTCP(host, port, factory)\nd = factory.deferred\n# d fires when the response is completely received\n\nFinally, there will be a new HTTP client API soon. Since this isn't part of any release yet, it's not as directly useful as the previous two approaches, but it's somewhat nicer, so I'll include it to give you an idea of what the future will bring. :) The new API lets you specify a protocol to receive the response body. So you'd do something like this:\nclass StreamingXMLParser(Protocol):\n def __init__(self):\n self.done = Deferred()\n\n def connectionMade(self):\n self._parser = make_parser()\n\n def dataReceived(self, bytes):\n self._parser.feed(bytes)\n\n def connectionLost(self, reason):\n self._parser.feed('', True)\n self.done.callback(None)\n\nfrom twisted.web.client import Agent\nfrom twisted.internet import reactor\n\nagent = Agent(reactor)\nd = agent.request('GET', url, headers, None)\ndef cbRequest(response):\n # You can look at the response headers here if you like.\n protocol = StreamingXMLParser()\n response.deliverBody(protocol)\n return protocol.done\nd.addCallback(cbRequest) # d fires when the response is fully received and parsed\n\n", "You only need to parse a single URL? Then don't worry. Use urllib2 to open the connection and pass the file handle into ElementTree.\nVariations you can try would be to use ElementTree's incremental parser or to use iterparse, but that depends on what your real requirements are. There's \"super fast\" but there's also \"fast enough.\"\nIt's only when you start having multiple simultaneous connections where you should look at Twisted or multithreading.\n" ]
[ 7, 0 ]
[]
[]
[ "python", "twisted", "xml" ]
stackoverflow_0001659380_python_twisted_xml.txt
Q: Is TCP Guaranteed to arrive in order? If I send two TCP messages, do I need to handle the case where the latter arrives before the former? Or is it guaranteed to arrive in the order I send it? I assume that this is not a Twisted-specific example, because it should conform to the TCP standard, but if anyone familiar with Twisted could provide a Twisted-specific answer for my own peace of mind, that'd be appreciated :-) A: As long as the two messages were sent on the same TCP connection, order will be maintained. If multiple connections are opened between the same pair of processes, you may be in trouble. Regarding Twisted, or any other asynchronous event system: I expect you'll get the dataReceived messages in the order that bytes are received. However, if you start pushing work off onto deferred calls, you can, erm... "twist" your control flow beyond recognition. A: TCP is connection-oriented and offers its Clients in-order delivery. Of course this applies to the connection level: individual connections are independent. You should note that normally we refer to "TCP streams" and "UDP messages". Whatever Client library you use (e.g. Twisted), the underlying TCP connection is independent of it. TCP will deliver the "protocol messages" in order to your client. By "protocol message" I refer of course to the protocol you use on the TCP layer. Further, note that I/O operations are async in nature and very dependent on system load, plus compounding network delays and losses, so you cannot rely on message ordering between TCP connections. A: TCP "guarantees" that a receiver will receive the reconstituted stream of bytes as it was originally sent by the sender. However, between the TCP send/receive endpoints (i.e., the physical network), the data can be received out of order, it can be fragmented, it can be corrupted, and it can even be lost. TCP accounts for these problems using a handshake mechanism that causes bad packets to be retransmitted. The TCP stack on the receiver places these packets in the order in which they were transmitted so that when you read from your TCP socket, you receive the data as it was originally sent. When you call the doRead method in Twisted, the data is read from the socket up to the size of the buffer. This data may represent a single message, a partial message, or multiple messages. It is up to you to extract the messages from the buffer, but you are guaranteed that the bytes are in their transmitted order at this point. Sorry for muddying the waters with my earlier post... A: TCP is a stream, UDP is a message. You're mixing up terms. For TCP it is true that the stream will arrive in the same order as it was sent. There are no distinct messages in TCP; bytes appear as they arrive, and interpreting them as messages is up to you.
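Since TCP hands you an ordered byte stream rather than discrete messages, applications layer their own framing on top. A common sketch is a length prefix (the helper names here are illustrative, not a standard API):

```python
import struct

def send_msg(sock, payload):
    # Prefix each message with a 4-byte big-endian length so the
    # receiver can split the ordered byte stream back into messages.
    sock.sendall(struct.pack('!I', len(payload)) + payload)

def recv_exact(sock, n):
    # TCP preserves byte order but not chunk boundaries: keep reading
    # until exactly n bytes have accumulated.
    chunks = []
    while n > 0:
        chunk = sock.recv(n)
        if not chunk:
            raise EOFError('connection closed mid-message')
        chunks.append(chunk)
        n -= len(chunk)
    return b''.join(chunks)

def recv_msg(sock):
    (length,) = struct.unpack('!I', recv_exact(sock, 4))
    return recv_exact(sock, length)
```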
Is TCP Guaranteed to arrive in order?
If I send two TCP messages, do I need to handle the case where the latter arrives before the former? Or is it guaranteed to arrive in the order I send it? I assume that this is not a Twisted-specific example, because it should conform to the TCP standard, but if anyone familiar with Twisted could provide a Twisted-specific answer for my own peace of mind, that'd be appreciated :-)
[ "As long as the two messages were sent on the same TCP connection, order will be maintained. If multiple connections are opened between the same pair of processes, you may be in trouble.\nRegarding Twisted, or any other asynchronous event system: I expect you'll get the dataReceived messages in the order that bytes are received. However, if you start pushing work off onto deferred calls, you can, erm... \"twist\" your control flow beyond recognition.\n", "TCP is connection-oriented and offers its Clients in-order delivery. Of course this applies to the connection level: individual connections are independent.\nYou should note that normally we refer to \"TCP streams\" and \"UDP messages\".\nWhatever Client library you use (e.g. Twisted), the underlying TCP connection is independent of it. TCP will deliver the \"protocol messages\" in order to your client. By \"protocol message\" I refer of course to the protocol you use on the TCP layer.\nFurther note that I/O operation are async in nature and very dependent on system load + also compounding network delays & losses, you cannot rely on message ordering between TCP connections.\n", "TCP \"guarantees\" that a receiver will receive the reconstituted stream of bytes as it was originally sent by the sender. However, between the TCP send/receive endpoints (i.e., the physical network), the data can be received out of order, it can be fragmented, it can be corrupted, and it can even be lost. TCP accounts for these problems using a handshake mechanism that causes bad packets to be retransmitted. The TCP stack on the receiver places these packets in the order in which they were transmitted so that when you read from your TCP socket, you are receive the data as it was originally sent.\nWhen you call the doRead method in Twisted, the data is read from the socket up to the size of the buffer. This data may represent a single message, a partial message, or multiple messages. It is up to you to extract the messages from the buffer, but you are guaranteed that the bytes are in their transmitted order at this point.\nSorry for muddying the waters with my earlier post...\n", "TCP is a stream, UDP is a message. You're mixing up terms. For TCP it is true that the stream will arrive in the same order as it was send. There are no distict messages in TCP, bytes appear as they arrive, interpreting them as messages is up to you.\n" ]
[ 54, 25, 20, 8 ]
[]
[]
[ "protocols", "python", "tcp", "twisted" ]
stackoverflow_0001691179_protocols_python_tcp_twisted.txt
Q: Location of Sphinx sources for my notes - WARNING: document isn't included in any toctree How can you fix Sphinx's warning at the bottom? I am trying to have my Python notes in Sphinx. I have my notes in separate files at the same directory level as the index.rst. I get the following warnings after building the HTML. The warning: /home/heo/S_codes/trig_functions.rst:: WARNING: document isn't included in any toctree The complete message when building: sudo sphinx-build -b html ./ _build/html Running Sphinx v0.6.2 loading pickled environment... done building [html]: targets for 0 source files that are out of date updating environment: 1 added, 2 changed, 0 removed reading sources... [100%] trig_functions /home/heo/S_codes/databooklet.rst:1: (WARNING/2) malformed hyperlink target. /home/heo/S_codes/index.rst:11: (ERROR/3) Error in "toctree" directive: invalid option block. .. toctree:: :numbered: :glob: * databooklet.rst trig_functions.rst /home/heo/S_codes/trig_functions.rst:11: (ERROR/3) Unexpected indentation. looking for now-outdated files... none found pickling environment... done checking consistency... /home/heo/S_codes/databooklet.rst:: WARNING: document isn't included in any toctree /home/heo/S_codes/trig_functions.rst:: WARNING: document isn't included in any toctree done preparing documents... done writing output... [100%] trig_functions writing additional files... genindex search copying static files... done dumping search index... done dumping object inventory... done build succeeded, 6 warnings. A: Are you aware of Sphinx's documentation? https://www.sphinx-doc.org Specifically, read about the toctree directive: https://www.sphinx-doc.org/en/master/usage/restructuredtext/directives.html#directive-toctree You can have as many files as you want. Via toctree you can create a single document from many parts. Please actually read this: http://sphinx.pocoo.org/concepts.html#document-names Since the reST source files can have different extensions (some people like .txt, some like .rst – the extension can be configured with source_suffix) and different OSes have different path separators, Sphinx abstracts them: all "document names" are relative to the source directory, the extension is stripped, and path separators are converted to slashes. All values, parameters and suchlike referring to "documents" expect such a document name. Examples for document names are index, library/zipfile, or reference/datamodel/types. Note that there is no leading slash. Since you globbed with *, you do not need to list your files. If you want to list your files, please actually read and follow the above rules.
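On the "invalid option block" error in the log: directive options must sit directly under the toctree line, followed by a blank line, and entries are document names without the .rst extension. A corrected version of the directive from the log would be:

```rst
.. toctree::
   :numbered:
   :glob:

   *
```

With the glob pattern * the explicit databooklet and trig_functions entries are redundant; if you do list them, write the document names without extensions. Once index.rst builds cleanly, the two "isn't included in any toctree" warnings disappear because the glob pulls both files in.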
Location of Sphinx sources for my notes - WARNING: document isn't included in any toctree
How can you fix Sphinx's warning at the bottom? I am trying to have my Python notes in Sphinx. I have my notes in separate files at the same directory level as the index.rst. I get the following warnings after building the HTML. The warning: /home/heo/S_codes/trig_functions.rst:: WARNING: document isn't included in any toctree The complete message when building: sudo sphinx-build -b html ./ _build/html Running Sphinx v0.6.2 loading pickled environment... done building [html]: targets for 0 source files that are out of date updating environment: 1 added, 2 changed, 0 removed reading sources... [100%] trig_functions /home/heo/S_codes/databooklet.rst:1: (WARNING/2) malformed hyperlink target. /home/heo/S_codes/index.rst:11: (ERROR/3) Error in "toctree" directive: invalid option block. .. toctree:: :numbered: :glob: * databooklet.rst trig_functions.rst /home/heo/S_codes/trig_functions.rst:11: (ERROR/3) Unexpected indentation. looking for now-outdated files... none found pickling environment... done checking consistency... /home/heo/S_codes/databooklet.rst:: WARNING: document isn't included in any toctree /home/heo/S_codes/trig_functions.rst:: WARNING: document isn't included in any toctree done preparing documents... done writing output... [100%] trig_functions writing additional files... genindex search copying static files... done dumping search index... done dumping object inventory... done build succeeded, 6 warnings.
[ "Are you aware of Sphinx's documentation? https://www.sphinx-doc.org\nSpecifically, read about the toctree directive: https://www.sphinx-doc.org/en/master/usage/restructuredtext/directives.html#directive-toctree\nYou can have as many files as you want. Via toctree you can create a single document from many parts.\n\nPlease actually read this: http://sphinx.pocoo.org/concepts.html#document-names\n\nSince the reST source files can have\n different extensions (some people like\n .txt, some like .rst – the extension\n can be configured with source_suffix)\n and different OSes have different path\n separators, Sphinx abstracts them: all\n “document names” are relative to the\n source directory, the extension is\n stripped, and path separators are\n converted to slashes. All values,\n parameters and suchlike referring to\n “documents” expect such a document\n name.\nExamples for document names are index,\n library/zipfile, or\n reference/datamodel/types. Note that\n there is no leading slash.\n\nSince you globbed with *, you do not need to list your files.\nIf you want to list your files, please actually read and follow the above rules.\n" ]
[ 6 ]
[]
[]
[ "glob", "python", "python_sphinx", "toctree" ]
stackoverflow_0001690757_glob_python_python_sphinx_toctree.txt
Q: Is there any benefit to returning a hash constructed with dict rather than just using the curly braces syntax? In some Python code I've read I keep noticing this code: return dict(somekey=somevalue) Does that have any benefit over: return {somekey:somevalue} I tend to say no, since both objects will belong to the same dict type, but I may be wrong. A: >>> def foo(): return dict(a=1) ... >>> def bar(): return {'a':1} ... >>> import dis >>> dis.dis(foo) 1 0 LOAD_GLOBAL 0 (dict) 3 LOAD_CONST 1 ('a') 6 LOAD_CONST 2 (1) 9 CALL_FUNCTION 256 12 RETURN_VALUE >>> dis.dis(bar) 1 0 BUILD_MAP 1 3 LOAD_CONST 1 (1) 6 LOAD_CONST 2 ('a') 9 STORE_MAP 10 RETURN_VALUE >>> import timeit >>> timeit.Timer(foo).timeit() 0.76093816757202148 >>> timeit.Timer(bar).timeit() 0.31897807121276855 There is no functional difference, but the latter is more efficient. A: They are semantically identical. The dict( param=value, ... ) notation limits your keys to strings which are valid Python identifiers. The dict( sequence-of-2-tuples ) is effectively the same as {}. The {} notation places no limits on the keys, except that they must be hashable objects. There are performance differences. A: {somekey:somevalue} can only be used as a literal; it doesn't allow any looping inside the brackets. dict takes **kwargs, which means you can for example do dict( **locals() ). dict(somekey=somevalue) is just a nice shortcut when you don't want to type quotes and braces. There is a third dict constructor too, dict([(key,val),(key2,val2)]). It's the most powerful and is often used with zip. A: Python is dynamic -- so you can redefine what 'dict' means in any of your scopes, while Python's syntax is absolutely nonprogrammable. Thus the Python parser can conclude, when it sees the curly braces, that it must build a dict. A dict(..) expression must always be evaluated like any other function call: look up the name, build the argument tuple, etc. In effect, using container literals like the {} curly braces is the closest you can come to static type declarations in Python. I think this has influenced the Python 3 decision to introduce set literals like s = {1,2,3}. It does happen that programmers reassign Python builtins! I think this is mostly by mistake, and mostly limited to local variables (and Python's namespaces make sure it can't do too much harm!) Here is a Google Code Search for code reassigning dict. I think the strangest example is dict = dict(); by that point, it should be obvious what you are doing! That it's possible to do this doesn't mean you should do it. Yes, for example thinking that dict should really be called hash in Python and swapping out the other builtin called hash is not something you should do: hash, checksum = dict, hash :-) An example of code that does this is found right in the Python standard library. That's right, here is line 92 of shelve.py: def __init__(self, dict, protocol=None, writeback=False): self.dict = dict if protocol is None: protocol = 0 self._protocol = protocol self.writeback = writeback self.cache = {} This is a very typical example; dict is used as a method argument, and no-one notices and it does no harm, since the method is very short. Use syntax highlighting for builtins to catch this, is my suggestion. A: The curly braces are a syntactic nicety, so the only benefit is that they can more clearly express the structure of the dict. A: dict(somekey=somevalue) is exactly the same as {'somekey': somevalue}. 
(Your two examples, then, aren't quite equivalent—in the first case the key is the string 'somekey' and in the second it is whatever somekey is.) It is generally nicer just to use dict literals unless you are avoiding typing all the quotes.
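One practical consequence noted above is worth a quick sketch: the keyword form restricts keys to valid identifiers, while the literal accepts any hashable key.

```python
d1 = dict(name='BMW', cost=200000)                     # keys must be identifiers
d2 = {'name': 'BMW', 42: 'answer', (1, 2): 'a point'}  # any hashable key

# The keyword form can't even be parsed with a non-identifier key:
try:
    eval("dict(42='answer')")
except SyntaxError:
    print('keyword keys must be valid identifiers')
```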
Is there any benefit to returning a hash constructed with dict rather than just using the curly braces syntax?
In some Python code I've read I keep noticing this code: return dict(somekey=somevalue) Does that have any benefit over: return {somekey:somevalue} I tend to say no, since both objects will belong to the same dict type, but I may be wrong.
[ "\n>>> def foo(): return dict(a=1)\n...\n>>> def bar(): return {'a':1}\n...\n>>> import dis\n>>> dis.dis(foo)\n 1 0 LOAD_GLOBAL 0 (dict)\n 3 LOAD_CONST 1 ('a')\n 6 LOAD_CONST 2 (1)\n 9 CALL_FUNCTION 256\n 12 RETURN_VALUE\n>>> dis.dis(bar)\n 1 0 BUILD_MAP 1\n 3 LOAD_CONST 1 (1)\n 6 LOAD_CONST 2 ('a')\n 9 STORE_MAP\n 10 RETURN_VALUE\n>>> import timeit\n>>> timeit.Timer(foo).timeit()\n0.76093816757202148\n>>> timeit.Timer(bar).timeit()\n0.31897807121276855\n\nThere is no functional difference, but the latter is more efficient.\n", "They are semantically identical. \nThe dict( param=value, ... ) notation limits your keys to strings which are valid python identifiers.\nThe dict( sequence-of-2-tuples ) is effectively the same as {}.\nThe {} notation places no limits on the keys. Except that they be hashable objects.\nThere are performance differences.\n", "{somekey:somevalue} can only be used as a literal, it doesn't allow any looping inside the brackets.\ndict takes **kwargs, which means you can for example do dict( **locals() ). dict(somekey=somevalue) is just a nice shortcut when you don't want to type quotes and braces. \nThere is a third dict constructor too, dict([(key,val),(key2,val2)]). Its the most powerful and often used with zip\n", "Python is dynamic -- so you can redefine what 'dict' means in any of your scopes, while Python's syntax is absolutely nonprogrammable. Thus the Python parser can conclude, when it sees the curly braces, that it must build a dict. A dict(..) expression must always be evaluated like any other function call; lookup the name, build argument tuple etc.\nIn effect, using container literals like the {} curly braces is the closest you can come to static type declarations in Python.\nI think this has influenced the Python 3 decision to introduce set literals like s = {1,2,3}.\n\nIt does happen that programmers reassign python builtins! I think this is mostly by mistake, and mostly limited to local variables (and Python's namespaces make sure it can't do too much harm!) Here is a Google Code Search for code reassigning dict. I think the strangest example is dict = dict(); by that point, it should be obvious what you are doing!\nThat it's possible to do this doesn't mean you should do it. Yes, for example thinking that dict should really be called hash in python and swapping out the other builtin called hash is not something you should do:\nhash, checksum = dict, hash\n\n:-)\nAn example of code that does this is found right in the Python standard library. That's right, here is from line 92 of shelve.py:\ndef __init__(self, dict, protocol=None, writeback=False):\n self.dict = dict\n if protocol is None:\n protocol = 0\n self._protocol = protocol\n self.writeback = writeback\n self.cache = {}\n\nThis is a very typical example; dict is used as a method argument, and no-one notices and it does no harm, since the method is very short. Use syntax highlighting for builtins to catch this, is my suggestion.\n", "The curly braces are a syntaxtic nicity, so the only benefit is can more clearly express the structure of the dict. \n", "dict(somekey=somevalue) is exactly the same as {'somekey': somevalue}. (Your two examples, then, aren't quite quivalent—in the first case the key is the string 'somekey' as the key and in the second it is whatever somekey is.) It is generally nicer just to use dict literals unless you are avoiding typing all the quotes.\n" ]
[ 15, 5, 2, 2, 1, 1 ]
[]
[]
[ "dictionary", "hash", "python" ]
stackoverflow_0001690517_dictionary_hash_python.txt
Q: optimizing this django code? I'm having some performance issues because I'm making a lot of query calls that I'm not sure how to reduce. user_item_rel_set is an m2m relation between user and items showing how much a user paid for a particular item. Each item can have multiple users and buyers, and I'm trying to get the m2m relation for a particular user. # find anything that you bought or used and how much you paid for it u = User.objects.get(id=self.uid) t = self.list.filter(user_item_rel__user__exact=u) y = self.list.filter(buyer_item_rel__buyer__exact=u) items = t | y items = items.distinct() u = User.objects.get(id=self.uid) for t in items: try: t.price = t.user_item_rel_set.get(user=u).payment_amount except: t.price = -1 * t.buyer_item_rel_set.get(buyer=u).payment_amount return items and at another instance for i in new_list: if str(i.tag) not in x: x[str(i.tag)] = 0 if houseMode == 0: x[str(i.tag)] += float(i.user_item_rel_set.get(user__id__exact=self.uid).payment_amount) else: x[str(i.tag)] += float(i.price) A: Some additional code from your model would help, because it's hard to see what the 'items' queryset contains. I will try to help anyway... Because you've modeled a relationship between users and items, there is no need to iterate over every item in that queryset when you can simply select the subset that is interesting to you. Again, I'm having a bit of difficulty following your application logic, but I think your queries can be reduced to something of this nature: # Find all the items where this user is the "user" user_items = items.filter(user_item_rel_set__user=u) # Find all the items where this user is the "buyer" buyer_items = items.filter(user_item_rel_set__buyer=u) I don't quite follow why you are assigning these values to 't.price' in the loop or I would expand on that code. If that doesn't help your performance, I recommend dumping your SQL queries to the console so you can see exactly what's going on behind the ORM. In logic like this, it shouldn't take more than a handful of SQL statements to arrive at your calculation. Furthermore, it is generally a bad idea to use floating point datatypes (float) anywhere in proximity to a monetary value. Floating point datatypes are generally for scientific applications where performance is more important than precision. If you're dealing with money, precision is almost always more important than performance, so you use a datatype capable of exact representation like decimal.Decimal everywhere. Edit Given the comments, I recommend starting your query with the "relationship" object instead of the Item. Since your sample doesn't tell me the name of that class, I will assume it's called UserItem: from django.db.models import Q from decimal import Decimal price = Decimal('0') # Get all UserItems where this user is the user or buyer interesting_items = UserItem.objects.filter((Q(user=u) | Q(buyer=u))) for ii in interesting_items: if ii.user == u: price += ii.payment_amount elif ii.buyer == u: price -= ii.payment_amount else: assert False, "Oops, this shouldn't happen" # Do something with 'price'... The Django "Q" facility lets you get a little more granular with your queries. If you need to filter based on some attribute of the item, throw that in there too. The part that still confuses me in your examples is why you are assigning 'price' to the item object when it is clear that many users will share that item. 
Edit 2 You can also use the aggregation API to let the DBMS compute the sum if that's all you're interested in: from django.db.models import Sum buyer_price = UserItem.objects.filter(item=i, user=u).aggregate( Sum('payment_amount'))['payment_amount__sum']
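To follow the advice about dumping the SQL behind the ORM: with DEBUG enabled, Django records every executed statement, which makes the query count visible.

```python
from django.db import connection

# With settings.DEBUG = True, Django appends each executed statement
# to connection.queries; inspect it after running the view logic to
# count how many queries the ORM actually issued.
for q in connection.queries:
    print(q['sql'])
```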
optimizing this django code?
I'm having some performance issues because I'm making a lot of query calls that I'm not sure how to reduce. user_item_rel_set is an m2m relation between user and items showing how much a user paid for a particular item. Each item can have multiple users and buyers, and I'm trying to get the m2m relation for a particular user. # find anything that you bought or used and how much you paid for it u = User.objects.get(id=self.uid) t = self.list.filter(user_item_rel__user__exact=u) y = self.list.filter(buyer_item_rel__buyer__exact=u) items = t | y items = items.distinct() u = User.objects.get(id=self.uid) for t in items: try: t.price = t.user_item_rel_set.get(user=u).payment_amount except: t.price = -1 * t.buyer_item_rel_set.get(buyer=u).payment_amount return items and at another instance for i in new_list: if str(i.tag) not in x: x[str(i.tag)] = 0 if houseMode == 0: x[str(i.tag)] += float(i.user_item_rel_set.get(user__id__exact=self.uid).payment_amount) else: x[str(i.tag)] += float(i.price)
[ "Some additional code from your model would help, because it's hard to see what the 'items' queryset contains.\nI will try to help anyway...\nBecause you've modeled a relationship between users and items, there is no need to iterate over every item in that queryset when you can simply select the subset that are interesting to you.\nAgain, I'm having a bit of difficulty following your application logic, but I think your queries can be reduced to something of this nature:\n# Find all the items where this user is the \"user\"\nuser_items = items.filter(user_item_rel_set__user=u)\n\n# Find all the items where this user is the \"buyer\"\nbuyer_items = items.filter(user_item_rel_set__buyer=u)\n\nI don't quite follow why you are assigning these values to 't.price' in the loop or I would expand on that code.\nIf that doesn't help your performance, I recommend dumping your SQL queries to the console so you can see exactly what's going on behind the ORM. In logic like this, it shouldn't take more than a handful of SQL statements to arrive at your calculation. \nFurthermore, it is generally a bad idea to use floating point datatypes (float) anywhere in proximity to a monetary value. Floating point datatypes are generally for scientific applications where performance is more important than precision. If you're dealing with money, precision is almost always more important than performance, so you use a datatype capable of exact representation like decimal.Decimal everywhere.\nEdit\nGiven the comments, I recommend starting your query with the \"relationship\" object instead of the Item. Since your sample doesn't tell me the name of that class, I will assume it's called UserItem:\nfrom django.db.models import Q\nfrom decimal import Decimal\n\nprice = Decimal('0')\n\n# Get all UserItems where this user is the user or buyer\ninteresting_items = UserItem.objects.filter((Q(user=u) | Q(buyer=u)))\nfor ii in interesting_items:\n if ii.user == u:\n price += ii.payment_amount\n elif ii.buyer == u:\n price -= ii.payment_amount\n else:\n assert False, \"Oops, this shouldn't happen\"\n\n# Do something with 'price'...\n\nThe Django \"Q\" facility lets you get a little more granular with your queries. If you need to filter based on some attribute of the item, throw that in there too. \nThe part that still confuses me in your examples, is why are you assigning 'price' to the item object when it is clear that many users will share that item. \nEdit 2\nYou can also use the aggregation API to let the DBMS compute the sum if that's all you're interested in:\nfrom django.db.models import Sum\nbuyer_price = UserItem.objects.filter(item=i, user=u).aggregate(\n Sum('payment_amount'))['payment_amount__sum']\n\n" ]
[ 1 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001691207_django_python.txt
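A hedged sketch of the batched approach from the answer above: fetch every relation touching the user in one query, index it in a dict, and let the per-item loop do plain dict lookups instead of further queries. The UserItem name and its user/buyer/item/payment_amount fields are the same assumptions the answer makes, not names confirmed by the question.

from collections import defaultdict
from decimal import Decimal
from django.db.models import Q

# One query for all of this user's relations, with the related item prefetched.
rels = UserItem.objects.filter(Q(user=u) | Q(buyer=u)).select_related('item')

price_by_item = defaultdict(Decimal)
for rel in rels:
    amount = Decimal(str(rel.payment_amount))
    # Payments made as "user" count positively, payments made as "buyer" negatively.
    price_by_item[rel.item_id] += amount if rel.user_id == u.id else -amount

for item in items:
    # A dict lookup per item instead of one query per item.
    item.price = price_by_item.get(item.id, Decimal('-1'))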
Q: How to implement Symfony Partials or Components in Django? I've been developing in the Symfony framework for quite some time, but now I have to work with Django and I'm having problems doing something like Symfony's "component" or "partial" in Django. That said, here is my goal: I have a webpage with lots of small widgets, all of which need their own logic - located in a "views.py" I guess. But, how do I tell Django to call all this logic and render it all as one webpage? A: It sounds like what you're looking for is something like custom template tags... You can write your own set of tags that process custom logic and return template chunks that are reusable in a very widget-like way. A: Assuming you are going to be using the components in different places on different pages I would suggest trying {% include "foo.html" %}. One of the (several) downsides of the Django templating language is that there is no concept of macros, so you need to be very consistent in the names of values in the context you pass to your main template so that the included template finds things it's looking for. Alternatively, in the view you can invoke the template engine for each component and save the result in a value passed in the context. Then in the main template simply use the value in the context. I'm not fond of either of these approaches. The more complex your template needs become the more you may want to look at Jinja2. (And, no, I don't buy the Django Party Line about 'template designers' -- never saw one in my life.)
How to implement Symfony Partials or Components in Django?
I've been developing in the Symfony framework for quite some time, but now I have to work with Django and I'm having problems doing something like Symfony's "component" or "partial" in Django. That said, here is my goal: I have a webpage with lots of small widgets, all of which need their own logic - located in a "views.py" I guess. But, how do I tell Django to call all this logic and render it all as one webpage?
[ "It sounds like what you're looking for is something like custom template tags...\nYou can write your own set of tags that process custom logic and return template chunks that are reusable in a very widget-like way.\n", "Assuming you are going to be using the components in different places on different pages I would suggest trying {% include \"foo.html\" %}. One of the (several) downsides of the Django templating language is that there is no concept of macros, so you need to be very consistent in the names of values in the context you pass to your main template so that the included template finds things it's looking for.\nAlternatively, in the view you can invoke the template engine for each component and save the result in a value passed in the context. Then in the main template simply use the value in the context.\nI'm not fond of either of these approaches. The more complex your template needs become the more you may want to look at Jinja2. (And, no, I don't buy the Django Party Line about 'template designers' -- never saw one in my life.)\n" ]
[ 3, 1 ]
[]
[]
[ "django", "python", "symfony1" ]
stackoverflow_0001691400_django_python_symfony1.txt
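Since both answers point at custom template tags, here is a minimal inclusion-tag sketch of the "widget with its own logic" idea, the closest Django analogue to a Symfony component. The NewsItem model and template path are hypothetical; register.inclusion_tag is standard Django API, and the module lives in an app's templatetags package:

from django import template
from myapp.models import NewsItem   # hypothetical model

register = template.Library()

@register.inclusion_tag('widgets/latest_news.html')
def latest_news(count=5):
    # The widget's logic lives here instead of in every page view.
    return {'items': NewsItem.objects.order_by('-created')[:count]}

A page template then pulls the widget in with {% load %} on the module name followed by {% latest_news 3 %}, so the page's own view stays free of widget logic.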
Q: Organizing Python projects with shared packages What is the best way to organize and develop a project composed of many small scripts sharing one (or more) larger Python libraries? We have a bunch of programs in our repository that all use the same libraries stored in the same repository. So in other words, a layout like trunk libs python utilities projects projA projB When the official runs of our programs are done, we want to record what version of the code was used. For our C++ executables, things are simple because as long as the working copy is clean at compile time, everything is fine. (And since we get the version number programmatically, it must be a working copy, not an export.) For Python scripts, things are more complicated. The problem is that, often one project (e.g. projA) will be running, and projB will need to be updated. This could cause the working copy revision to appear mixed to projA during runtime. (The code takes hours to run, and can be used as inputs for processes that take days to run, hence the strong traceability goal.) My current workaround is, if necessary, check out another copy of the trunk to a different location, and run off there. But then I need to remember to change my PYTHONPATH to point to the second version of lib/python, not the one in the first tree. There's not likely to be a perfect answer. But there must be a better way. Should we be using subversion keywords to store the revision number, which would allow the data user to export files? Should we be using virtualenv? Should we be going more towards a packaging and installation mechanism? Setuptools is the standard, but I've read mixed things about it, and it seems designed for non-developer end users (of which we have none). A: The much better solution involves not storing all your projects and their shared dependencies in the same repository. Use one repository for each project, and externals for the shared libraries. Make use of tags in the shared library repositories, so consumer projects may use exactly the version they need in their external. Edit: (just copying this from my comment) use virtualenv if you need to provide isolated runtime environments for the different apps on the same server. Then each environment can contain a unique version of the library it needs. A: If I'm understanding your question properly, then you definitely want virtualenv. Add in some virtualenvwrapper goodness to make it that much better.
Organizing Python projects with shared packages
What is the best way to organize and develop a project composed of many small scripts sharing one (or more) larger Python libraries? We have a bunch of programs in our repository that all use the same libraries stored in the same repository. So in other words, a layout like trunk libs python utilities projects projA projB When the official runs of our programs are done, we want to record what version of the code was used. For our C++ executables, things are simple because as long as the working copy is clean at compile time, everything is fine. (And since we get the version number programmatically, it must be a working copy, not an export.) For Python scripts, things are more complicated. The problem is that, often one project (e.g. projA) will be running, and projB will need to be updated. This could cause the working copy revision to appear mixed to projA during runtime. (The code takes hours to run, and can be used as inputs for processes that take days to run, hence the strong traceability goal.) My current workaround is, if necessary, check out another copy of the trunk to a different location, and run off there. But then I need to remember to change my PYTHONPATH to point to the second version of lib/python, not the one in the first tree. There's not likely to be a perfect answer. But there must be a better way. Should we be using subversion keywords to store the revision number, which would allow the data user to export files? Should we be using virtualenv? Should we be going more towards a packaging and installation mechanism? Setuptools is the standard, but I've read mixed things about it, and it seems designed for non-developer end users (of which we have none).
[ "The much better solution involves not storing all your projects and their shared dependencies in the same repository.\nUse one repository for each project, and externals for the shared libraries.\nMake use of tags in the shared library repositories, so consumer projects may use exactly the version they need in their external.\nEdit: (just copying this from my comment) use virtualenv if you need to provide isolated runtime environments for the different apps on the same server. Then each environment can contain a unique version of the library it needs.\n", "If I'm understanding your question properly, then you definitely want virtualenv. Add in some virtualenvwrapper goodness to make it that much better.\n" ]
[ 2, 1 ]
[]
[]
[ "code_organization", "python", "svn", "version_control" ]
stackoverflow_0001691495_code_organization_python_svn_version_control.txt
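For the traceability goal above, a small sketch of recording the working-copy revision programmatically at the start of an official run. It assumes the Subversion command-line tools are on the PATH; svnversion's exact output markers can vary by version:

import subprocess

def working_copy_revision(path='.'):
    # svnversion prints e.g. "4168", "4123:4168" (mixed) or "4168M" (modified).
    out = subprocess.Popen(['svnversion', path],
                           stdout=subprocess.PIPE).communicate()[0]
    rev = out.strip()
    if ':' in rev or rev.endswith('M') or rev.endswith('S'):
        raise RuntimeError('refusing official run from a mixed/modified tree: %r' % rev)
    return rev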
Q: Modulus in Python's slicing How can you fix the following code? I want to get the slice of elements that are i mod 5 == 1. data = "8|9|8|9|8|9|8|9|9|8|9|8|9|8|9|8" arra = map(int,data.split("|")) sums += [sum(arra[i % 5==1:(i + 4) % 5==1]) // Problem here for i in range(0, len(arra), 4)] A: sums += sum(arra[1::5]) And it's spelled array. ;-) A: It's sums = sum(arra[1::5]) If you use +=, Python expects that the name sums is already accessible: Traceback (most recent call last): File "", line 1, in sums += sum(arra[1::5]) NameError: name 'sums' is not defined
Modulus in Python's slicing
How can you fix the following code? I want to get the slice of elements that are i mod 5 == 1. data = "8|9|8|9|8|9|8|9|9|8|9|8|9|8|9|8" arra = map(int,data.split("|")) sums += [sum(arra[i % 5==1:(i + 4) % 5==1]) // Problem here for i in range(0, len(arra), 4)]
[ "sums += sum(arra[1::5])\n\nAnd it's spelled array. ;-)\n", "It's\nsums = sum(arra[1::5])\n\nIf you use +=, Python spects that the name sums is alreadey accesible:\nTraceback (most recent call last):\n File \"\", line 1, in \n sums += sum(arra[1::5])\nNameError: name 'sums' is not defined\n" ]
[ 6, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001689984_python.txt
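A quick demonstration that the extended slice in the answers is equivalent to spelling out the i mod 5 == 1 test from the question:

data = "8|9|8|9|8|9|8|9|9|8|9|8|9|8|9|8"
arra = [int(x) for x in data.split("|")]

by_slice = sum(arra[1::5])                                  # start at index 1, step 5
by_test = sum(x for i, x in enumerate(arra) if i % 5 == 1)  # the explicit condition
assert by_slice == by_test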
Q: python cProfile and profile modules skip functions Basically the cProfile module skips some functions when I run it, and the normal profile module produces this error. The debugged program raised the exception unhandled AssertionError "('Bad call', ('objects/controller/StageController.py', 9, '__init__'), <frame object at 0x9bbc104>, <frame object at 0x9bb438c>, <frame object at 0x9bd0554>, <frame object at 0x9bcf2f4>)" File: /usr/lib/python2.6/profile.py, Line: 301 I've done all the searches and I can't find anything. How do I make them work properly? @ yk4ever StageController.py class starts like this: class StageControl(ObjectControl): def __init__(self, canvas_name): ObjectControl.__init__(self, canvas_name,"stage_object") self.model = StageModel() self.variables() self.make_stage() self.overrides() the "Bad call" error above seems to dislike this class A: I've found the problem: Psyco. The 'ObjectControl' class which my 'StageControl' inherited has a simple: import psyco psyco.full() INSIDE the class, which caused the error; hence only the methods in classes that inherited 'ObjectControl' caused the profiler to fail. I read somewhere it was a good idea to import psyco only where it was necessary; turns out that's a bad idea. I had used psyco for a time until I came across Cython, but for some reason left the psyco import statements lying around long enough to bork the profiler. I've since dumped psyco. The moral of the story is: just stick with Cython; at the end of the day nothing beats C. A: Python Bug #1117670 seems to describe the same issue. A minimal test script to reproduce that similar problem is also attached there. The bug has been marked as fixed. See msg24185 in the above Python Bug report for a workaround which can be used on Python 2.4. Which Python version do you use?
python cProfile and profile modules skip functions
Basically the cProfile module skips some functions when I run it, and the normal profile module produces this error. The debugged program raised the exception unhandled AssertionError "('Bad call', ('objects/controller/StageController.py', 9, '__init__'), <frame object at 0x9bbc104>, <frame object at 0x9bb438c>, <frame object at 0x9bd0554>, <frame object at 0x9bcf2f4>)" File: /usr/lib/python2.6/profile.py, Line: 301 I've done all the searches and I can't find anything. How do I make them work properly? @ yk4ever StageController.py class starts like this: class StageControl(ObjectControl): def __init__(self, canvas_name): ObjectControl.__init__(self, canvas_name,"stage_object") self.model = StageModel() self.variables() self.make_stage() self.overrides() the "Bad call" error above seems to dislike this class
[ "I've found the problem. Psyco\nthe 'ObjectControl' class which my 'StageControl' inherited has a simple:\nimport psyco\npsyco.full()\n\nINSIDE the class, which caused the error hence only the methods in the classes which inherited 'ObjectControl', caused the profiler to fail. i read somewhere it was a good idea to import psyco only where it was necessary, turns out thats a bad idea.\nI had used psyco for a time until i came across cython, but for some reason left the psyco import statements lying around long enough to bork the profiler. ive since dumped psyco.\nthe moral of the story is: just stick with cython, at the end of the day nothing beats C.\n", "Python Bug #1117670 seems to describe the same issue. A minimal test script to reproduce that similar problem also attached there. The bug has been marked as fixed.\nSee msg24185 in the above Python Bug report for a workaround which can be used on Python 2.4.\nWhich Python version do you use?\n" ]
[ 2, 0 ]
[]
[]
[ "profiler", "python" ]
stackoverflow_0001688412_profiler_python.txt
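Since the accepted answer traces the "Bad call" assertion to psyco-compiled frames, a hedged sketch of the workaround: keep psyco disabled while profiling, and scope the profiler to a single callable with cProfile's runcall so only your own code is measured:

import cProfile
import pstats

PROFILING = True
if not PROFILING:
    import psyco   # psyco-compiled frames confuse profile/cProfile
    psyco.full()

def run():
    pass   # the code you actually want timed

profiler = cProfile.Profile()
profiler.runcall(run)
pstats.Stats(profiler).sort_stats('cumulative').print_stats(20)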
Q: What is the default chunker for NLTK toolkit in Python? I am using their default POS tagging and default tokenization, and it seems sufficient. I'd like their default chunker too. I am reading the NLTK toolkit book, but it does not seem like they have a default chunker? A: You can get out of the box named entity chunking with the nltk.ne_chunk() method. It takes a list of POS tagged tuples: nltk.ne_chunk([('Barack', 'NNP'), ('Obama', 'NNP'), ('lives', 'NNS'), ('in', 'IN'), ('Washington', 'NNP')]) results in: Tree('S', [Tree('PERSON', [('Barack', 'NNP')]), Tree('ORGANIZATION', [('Obama', 'NNP')]), ('lives', 'NNS'), ('in', 'IN'), Tree('GPE', [('Washington', 'NNP')])]) It identifies Barack as a person, but Obama as an organization. So, not perfect. A: I couldn't find a default chunker/shallow parser either. Although the book describes how to build and train one with example features. Coming up with additional features to get good performance shouldn't be too difficult. See Chapter 7's section on Training Classifier-based Chunkers.
What is the default chunker for NLTK toolkit in Python?
I am using their default POS tagging and default tokenization, and it seems sufficient. I'd like their default chunker too. I am reading the NLTK toolkit book, but it does not seem like they have a default chunker?
[ "You can get out of the box named entity chunking with the nltk.ne_chunk() method. It takes a list of POS tagged tuples:\nnltk.ne_chunk([('Barack', 'NNP'), ('Obama', 'NNP'), ('lives', 'NNS'), ('in', 'IN'), ('Washington', 'NNP')])\nresults in:\nTree('S', [Tree('PERSON', [('Barack', 'NNP')]), Tree('ORGANIZATION', [('Obama', 'NNP')]), ('lives', 'NNS'), ('in', 'IN'), Tree('GPE', [('Washington', 'NNP')])])\nIt identifies Barack as a person, but Obama as an organization. So, not perfect.\n", "I couldn't find a default chunker/shallow parser either. Although the book describes how to build and train one with example features. Coming up with additional features to get good performance shouldn't be too difficult.\nSee Chapter 7's section on Training Classifier-based Chunkers.\n" ]
[ 9, 8 ]
[]
[]
[ "chunking", "nlp", "nltk", "python" ]
stackoverflow_0001687510_chunking_nlp_nltk_python.txt
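Putting the pieces named above together: the default tokenizer, the default POS tagger, and ne_chunk as the closest thing to a default chunker. This assumes the relevant NLTK data packages have already been fetched with nltk.download():

import nltk

sentence = "Barack Obama lives in Washington"
tokens = nltk.word_tokenize(sentence)   # default tokenization
tagged = nltk.pos_tag(tokens)           # default POS tagging
tree = nltk.ne_chunk(tagged)            # named-entity chunking
print tree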
Q: Parsing output of apt-get install for progress bar I'm working on a simple GUI Python script to do some simple tasks on a system. Some of that work involves apt-get install to install some packages. While this is going on, I want to display a progress bar that should update with the progress of the download, using the little percentage shown in apt-get's interface in the terminal. BUT! I can't find a way to get the progress info. Piping or redirecting the output of apt-get just gives static lines that show the "completed download" message for each package, and same for reading via subprocess.Popen() in my script. How can I read from apt-get's output to get the percentages of the file downloaded? A: Instead of parsing the output of the apt-get, you can use python-apt to install packages. AFAIK it also has modules for reporting the progress. A: As I've often said, use pexpect, not subprocess etc, to run sub-processes when you need to get their continuous output. pexpect fools the subprocess into believing it's running on a terminal, so the subprocess will provide just the kind of output it would give on a real terminal... and you can catch it and transform it into any kind of fancy output you want!-)
Parsing output of apt-get install for progress bar
I'm working on a simple GUI Python script to do some simple tasks on a system. Some of that work involves apt-get install to install some packages. While this is going on, I want to display a progress bar that should update with the progress of the download, using the little percentage shown in apt-get's interface in the terminal. BUT! I can't find a way to get the progress info. Piping or redirecting the output of apt-get just gives static lines that show the "completed download" message for each package, and same for reading via subprocess.Popen() in my script. How can I read from apt-get's output to get the percentages of the file downloaded?
[ "Instead of parsing the output of the apt-get, you can use python-apt to install packages. AFAIK it also has modules for reporting the progress.\n", "As I've often said, use pexpect, not subprocess etc, to run sub-processes when you need to get their continuous output. pexpect fools the subprocess into believing it's running on a terminal, so the subprocess will provide just the kind of output it would give on a real terminal... and you can catch it and transform it into any kind of fancy output you want!-)\n" ]
[ 6, 3 ]
[]
[]
[ "apt_get", "popen", "progress_bar", "python", "subprocess" ]
stackoverflow_0001692082_apt_get_popen_progress_bar_python_subprocess.txt
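A hedged sketch of the pexpect suggestion: spawn apt-get on a pseudo-terminal and scrape percentages from its live output. The percentage regex is an assumption about apt-get's progress format and may need tuning, and update_progress_bar stands in for whatever callback the GUI provides:

import pexpect

child = pexpect.spawn('apt-get install -y some-package')
while True:
    i = child.expect([r'(\d+)%', pexpect.EOF, pexpect.TIMEOUT], timeout=300)
    if i != 0:
        break   # finished, or no output arrived within the timeout
    update_progress_bar(int(child.match.group(1)))   # hypothetical GUI callback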
Q: Open web page with custom cookies in Python For example, I have cookies my_cookies = {'name': 'Albert', 'uid': '654897897564'} and I want to open the page http://website.com opener = urllib2.build_opener(urllib2.HTTPCookieProcessor()) opener.addheaders.append(('User-agent', 'Mozilla/5.0 (compatible)')) opener.open('http://website.com').read() How can I do this with my predefined cookies? A: You just need a few more steps: import urllib2 import cookielib cp = urllib2.HTTPCookieProcessor() cj = cp.cookiejar # see cookielib.Cookie documentation for options description cj.set_cookie(cookielib.Cookie(0, 'a_cookie', 'a_value', '80', False, 'domain', True, False, '/path', True, False, None, False, None, None, None)) opener = urllib2.build_opener(urllib2.HTTPHandler(), cp) opener.addheaders.append(('User-agent', 'Mozilla/5.0 (compatible)')) opener.open('http://website.com').read()
Open web page with custom cookies in Python
For example, I have cookies my_cookies = {'name': 'Albert', 'uid': '654897897564'} and I want to open the page http://website.com opener = urllib2.build_opener(urllib2.HTTPCookieProcessor()) opener.addheaders.append(('User-agent', 'Mozilla/5.0 (compatible)')) opener.open('http://website.com').read() How can I do this with my predefined cookies?
[ "You just need a few more steps:\nimport urllib2\nimport cookielib\n\ncp = urllib2.HTTPCookieProcessor()\ncj = cp.cookiejar\n\n# see cookielib.Cookie documentation for options description\ncj.set_cookie(cookielib.Cookie(0, 'a_cookie', 'a_value',\n '80', False, 'domain', True, False, '/path',\n True, False, None, False, None, None, None))\nopener = urllib2.build_opener(urllib2.HTTPHandler(),\n cp)\nopener.addheaders.append(('User-agent', 'Mozilla/5.0 (compatible)'))\nopener.open('http://website.com').read()\n\n" ]
[ 8 ]
[]
[]
[ "cookies", "python", "urllib2" ]
stackoverflow_0001692396_cookies_python_urllib2.txt
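If the cookies never need updating from the server's Set-Cookie responses, a simpler (if cruder) alternative to building cookielib.Cookie objects is to send a raw Cookie header; note there is no domain/path/expiry handling at all with this approach:

import urllib2

my_cookies = {'name': 'Albert', 'uid': '654897897564'}
cookie_header = '; '.join('%s=%s' % item for item in my_cookies.items())

opener = urllib2.build_opener()
opener.addheaders.append(('Cookie', cookie_header))
opener.addheaders.append(('User-agent', 'Mozilla/5.0 (compatible)'))
page = opener.open('http://website.com').read()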
Q: What is a scripting engine? I've seen here that what sets a programming language apart from a scripting language is the scripting engine. But I don't understand how it works, so I don't know the difference. For example, I see code in Java calling methods in imported libraries, but it doesn't seem "different enough" from Python or Ruby code - both are scripting languages, right? I guess this also has to do with the procedural and object oriented paradigms, but in the end, I can't see why they are classified the way they are. EDIT: About a scripting engine being an interpreter... Isn't Java an interpreted language? I know there's the compiled bytecode, but still, it doesn't make sense to me. A: There is no hard and fast line between a "scripting language" and a "programming language". Properties of "scripting languages" tend to include: garbage-collected memory manager, with no need to explicitly allocate and free objects ability to simply execute commands, without a bunch of boilerplate code. Java is usually used as a counter-example of this. In Python you can simply say print("Hello, world!") but in Java you need a lot more syntax (the example here is seven lines of code). Related to the above, usually in a "scripting language" you don't have to explicitly declare variables, and you rarely need to declare types of variables. Some scripting languages (such as Javascript) will coerce types with wild abandon, and others (such as Python) are strongly typed and raise exceptions on type mismatches. no need for an explicit compile or link step; you just write code and run it. (A "scripting language" can still be Just-In-Time compiled internally; Python does this, for example.) Beyond these basics, a "scripting language" can range from something primitive and trivial, like the "batch" language in MS-DOS, on up to an expressive and powerful language like Python, Ruby, etc. A: You've basically discovered that the distinction between a scripting language and a "non-scripting" language is pretty artificial. Python can be compiled to JVM bytecode (with Jython), and I believe Ruby also can -- then the "engine" running the Python or Ruby code in question will be a JVM, the same "engine" that runs Java code (or Scala code, etc etc). Similarly with .NET and IronPython (or IronRuby) -- then the "engine" is Microsoft's CLR, just as for C#, Boo, and so on. Languages said to be "scripting" are often dynamically typed ones... but I've never heard the term used for other important dynamically typed languages such as Smalltalk, Mozart/OZ, or Erlang...;-). A: I know you have accepted an answer, however there is some ambiguity. When referring to a scripting engine, we typically mean a small embedded language that sits within a template and generates textual output or documents. For example Freemarker and Velocity are often referred to as scripting engines. Erb would sit here too, but oddly is not referred to as a scripting engine that often. A scripting language generally needs no compile step, therefore can be run more simply as, or from, a shell script. This includes things like awk, perl, tcl, python, ruby and so on. These languages typically need to be terse and type safety is often optional. Windows supports a number of languages in its scripting host facilities. This exposes scripting languages to various components within Windows. So then fully compiled languages such as Java may well run as bytecode and could be considered as interpreted; however the point is that there is an explicit compile step; there is no interpreter (with the Sun JRE anyway) that provides a runtime executable environment for java code. Other languages such as VBA are embedded, and many of the languages above can be embedded. Embedded languages could be referred to as a scripting engine for the host application. In my mind a scripting engine interprets programmatic instructions and in turn instructs a larger host application or system. The instructions are executed immediately without concern for any remaining instructions. Many Lisps have no distinction between data and code, possibly compiling dynamically at runtime. The interpret, compile and execute steps are available to the Lisp programmer to be manipulated as programmers manipulate data in other languages. A: Probably the closest thing to what you are talking about is an interpreter: In computer science, an interpreter normally means a computer program that executes, i.e. performs, instructions written in a programming language. While interpretation and compilation are the two principal means by which programming languages are implemented, these are not fully distinct categories, one of the reasons being that most interpreting systems also perform some translation work, just like compilers. Basically an interpreter (or scripting engine if you prefer) is the component that is responsible for turning a script into machine code at execution time (as opposed to a compiler which creates machine code prior to execution time). A: "Scripting language" might be called a colloquialism. The term is not well defined, and you will see some disagreement about which languages are scripting languages. It is sometimes useful for conveying a vague idea of the properties of a language (See steveha's answer). "Scripting language" might also refer to a particular use of a language. For example, a piece of software might use Lua as its scripting language -- the language used by the end user to automate (or "script") complex tasks. A: One useful distinction between scripting / interpreted languages and compiled languages is that you can typically embed a scripting language's interpreter in a compiled project, such as a game engine.
What is a scripting engine?
I've seen here that what sets a programming language apart from a scripting language is the scripting engine. But I don't understand how it works, so I don't know the difference. For example, I see code in Java calling methods in imported libraries, but it doesn't seem "different enough" from Python or Ruby code - both are scripting languages, right? I guess this also has to do with the procedural and object oriented paradigms, but in the end, I can't see why they are classified the way they are. EDIT: About a scripting engine being an interpreter... Isn't Java an interpreted language? I know there's the compiled bytecode, but still, it doesn't make sense to me.
[ "There is no hard and fast line between a \"scripting language\" and a \"programming language\".\nProperties of \"scripting languages\" tend to include:\n\ngarbage-collected memory manager, with no need to explicitly allocate and free objects\nability to simply execute commands, without a bunch of boilerplate code. Java is usually used as a counter-example of this. In Python you can simply say print(\"Hello, world!\") but in Java you need a lot more syntax (the example here is seven lines of code).\nRelated to the above, usually in a \"scripting language\" you don't have to explicitly declare variables, and you rarely need to declare types of variables. Some scripting languages (such as Javascript) will coerce types with wild abandon, and others (such as Python) are strongly typed and raise exceptions on type mismatches.\nno need for an explicit compile or link step; you just write code and run it. (A \"scripting language\" can still be Just-In-Time compiled internally; Python does this, for example.)\n\nBeyond these basics, a \"scripting language\" can range from something primitive and trivial, like the \"batch\" language in MS-DOS, on up to an expressive and powerful language like Python, Ruby, etc.\n", "You've basically discovered that the distinction between a scripting language and a \"non-scripting\" language is pretty artificial. Python can be compiled to JVM bytecode (with Jython), and I believe Ruby also can -- then the \"engine\" running the Python or Ruby code in question will be a JVM, the same \"engine\" that runs Java code (or Scala code, etc etc). Similarly with .NET and IronPython (or IronRuby) -- then the \"engine\" is Microsoft's CLR, just as for C#, Boo, and so on. Languages said to be \"scripting\" are often dynamically typed ones... but I've never heard the term used for other important dynamically typed languages such as Smalltalk, Mozart/OZ, or Erlang...;-).\n", "I know you have accepted an answer, however there is some amgiguity.\nWhen referring to a scripting engine, we typically mean a small embedded language that sits within a template and generates textual output or documents. For example Freemarker and Velocity are often referred to as scripting engines. Erb would sit here too, but oddly is not referred to as a scripting engine that often.\nA scripting language generally needs no compile step, therefore can be run more simply as a, or, from a shell script. This includes things like awk, perl, tcl, python, ruby and so on. These languages typically need to be terse and type safety is often optional. Windows supports a number of languages in it's scripting host facilities. This exposes scripting languages to various components within Windows.\nSo then fully compiled languages such as Java may well run as bytecode and could be considered as interpreted, however the point is that there is an explicit compile step, there is no interpreter (with the Sun JRE anyway) that provides a runtime executable environment for java code.\nOther languages such as VBA are embedded, many of the languages above can be embedded. Embedded languages could be referred too as a scripting engine for the host application.\nIn my mind a scripting engine interprets programmatic instructions and in turn instructs a larger host application or system. The instructions are executed immediately without concern for any remaining instructions.\nMany Lisps have no distinction between data and code, possibly compiling dynamically at runtime. 
The interpret, compile and execute steps are available to the Lisp programmer to be manipulated as programmers manipulate data in other languages.\n", "Probably the closest thing to what you are talking about is an interpreter:\n\nIn computer science, an interpreter\n normally means a computer program that\n executes, i.e. performs, instructions\n written in a programming language.\n While interpretation and compilation\n are the two principal means by which\n programming languages are implemented,\n these are not fully distinct\n categories, one of the reasons being\n that most interpreting systems also\n perform some translation work, just\n like compilers.\n\nBasically an intepreter (or scripting engine if you prefer) is the component that is responsible for turning a script into machine code at execution time (as opposed to a compiler which creates machine code prior to execution time).\n", "\"Scripting language\" might be called a colloquialism. The term is not well defined, and you will see some disagreement about which languages are scripting languages. It is sometimes useful for conveying a vague idea of the properties of a language (See steveha's answer).\n\"Scripting language\" might also refer to a particular use of a language. For example, a piece of software might use Lua as its scripting language -- the language used by the end user to automate (or \"script\") complex tasks.\n", "One useful distinction between scripting / interpreted languages and compiled languages is that you can typically embed a scripting language's interpreter in a compiled project, such as a game engine.\n" ]
[ 12, 6, 3, 2, 0, 0 ]
[]
[]
[ "java", "python", "ruby", "scripting" ]
stackoverflow_0001691201_java_python_ruby_scripting.txt
Q: Python internationalization, local setting independent I need the return value of a strftime() call to be in a language different from the one set on my local machine/OS. Is it possible to choose the language of the return value? A: For solid i18n/L10N, usable by a server which must serve different localizations within the same run, I keep recommending PyICU, the Python layer on top of ICU, the International Components for Unicode open-source package. Other approaches tend to be pretty limited and fragile:-(. A: Try the babel library: http://babel.edgewall.org/wiki/Documentation/dates.html
Python internationalization, local setting independent
I need the return value of a strftime() call to be in a language different from the one set on my local machine/OS. Is it possible to choose the language of the return value?
[ "For solid i18n/L10N, usable by a server which must serve different localizations within the same run, I keep recommending PyICU, the Python layer on top of ICU, the International Components for Unicode open-source package. Other approaches tend to be pretty limited and fragile:-(.\n", "Try the babel library: http://babel.edgewall.org/wiki/Documentation/dates.html\n" ]
[ 1, 0 ]
[]
[]
[ "internationalization", "python" ]
stackoverflow_0001690857_internationalization_python.txt
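Two concrete routes for the answers above. Babel formats per call without touching process state, which is what makes it safe when one server must produce several locales; the stdlib locale route changes global state and its locale names are platform-dependent:

import datetime
from babel.dates import format_date

today = datetime.date.today()
print format_date(today, format='long', locale='fr_FR')   # e.g. "13 novembre 2009"
print format_date(today, format='long', locale='de_DE')

# stdlib alternative: process-wide state, fragile in servers
import locale
locale.setlocale(locale.LC_TIME, 'fr_FR.UTF-8')   # the name varies by platform
print today.strftime('%d %B %Y')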
Q: Agile Software Development in Python I have been trying to learn a cross platform language with a fast learning curve, and so it seemed obvious Python was the logical choice. I've never programmed before but I have been reading on pragmatic programming and agile development for quite some time. The question comes, "What is the single best choice to create a desktop software that is built heavily in python and can handle flexibility of SQL injections, along with rich interface reporting?" e.g. SQL Alchemy, ReportLabs. I have been looking into pyHed found in sourceforge.net. However, it's in an early development stage and is still not well documented. I checked out Titanium Desktop from Appcelerator and the concept seems exciting, but it's not in stable condition yet. Any suggestions, comments or ideas of what is currently being used? Or new technologies out there now? A: For cross-platform GUI-based desktop software, my preference is Qt -- solid, mature, rich, great tools, strong underlying event-like approach (signals and slots). Having Nokia behind it doesn't hurt, of course. The mature Python interface to that is PyQt, but if the alternative of GPL or for-pay licenses is a problem for you, PySide is on the horizon (nowhere as mature as PyQt at this time, but by the time GPL'ing your software could possibly be a problem, PySide should be definitely ready for you;-). PySide is also sponsored by Nokia, according to this. Beyond your choice of frameworks for GUI-based cross-platform desktop app development, of course, lie many, many other choices of tools and approaches -- but they're less crucial for solo development than they are for effective team cooperation, so, until teamwork is in prospect for you, it won't hurt to use whatever tools you find simplest (e.g., svn rather than a DVCS: I strongly recommend a DVCS such as hg, git or bazaar for team use, but for a solo developer I guess svn is still quite acceptable, and perhaps simpler to install and use). A: There are many answers to your question because you raise a number of issues: Agile development is methodology and has very little to do with the language or software platform. It is more a set of principles around which software teams organize themselves. Refer to the works of Kent Beck for a bit more detail. Do you have an existing Python code base? If you do have an existing Python code base you could get relatively far with pyHed. Otherwise you could look at something like Java Swing or C#. But really you might want to consider moving the application to a web platform - that seems to be the direction almost all desktop apps are heading. Django is a well known Python framework. Or any number of the Java, C#, Ruby platforms if it strikes your fancy. The jquery JavaScript framework is a good tool to provide rich Web interfaces. A: you could have a look at the camelot framework http://www.conceptive.be/projects/camelot/ It provides a pyqt gui on top of sqlalchemy mapped classes. If you have questions, you can always post on our mailing list. Erik A: For what it's worth, last week with no previous experience in Python itself I managed to build a basic MVC app in about 4 days. I used wxPython & wxGlade. I think that if you know what your functional needs are, with a bit of googling & a bunch of reading other people's code, you can produce very usable stuff in a very short time. http://www.wxpython.org/ A: You might want to check out http://dabodev.com/ too. I have no personal experience with it, just know of its existence and that there are a couple of enthusiasts. I would recommend that you don't concentrate too much on Agile or XP coding, especially when you start out; good old big design up front will save your skin before you burn it with heedless hacking. That being said, I usually start coding a prototype/proof-of-concept before I actually design it and consequently write unit-tests for the first release. But the most important advice I would like to give you is: keep yourself motivated and happy :-) A: I'm one of the developers of pyHed. We know that pyHed's documentation is not very good yet, but we are working very hard on it (it's the main thread of the 1.1 version). Having any doubt about pyHed, please contact us in our forum; your question will be answered immediately... Vinicius Berni - pyHed Team
Agile Software Development in Python
I have been trying to learn a cross platform language with a fast learning curve, and so it seemed obvious Python was the logical choice. I've never programmed before but I have been reading on pragmatic programming and agile development for quite some time. The question comes, "What is the single best choice to create a desktop software that is built heavily in python and can handle flexibility of SQL injections, along with rich interface reporting?" e.g. SQL Alchemy, ReportLabs. I have been looking into pyHed found in sourceforge.net. However, it's in an early development stage and is still not well documented. I checked out Titanium Desktop from Appcelerator and the concept seems exciting, but it's not in stable condition yet. Any suggestions, comments or ideas of what is currently being used? Or new technologies out there now?
[ "For cross-platform GUI-based desktop software, my preference is Qt -- solid, mature, rich, great tools, strong underlying event-like approach (signals and slots). Having Nokia behind it doesn't hurt, of course.\nThe mature Python interface to that is PyQt, but if the alternative of GPL or for-pay licenses is a problem for you, PySide is on the horizon (nowhere as mature as PyQt at this time, but by the time GPL'ing your software could possibly be a problem, PySide should be definitely ready for you;-). PySide is also sponsored by Nokia, according to this.\nBeyond your choice of frameworks for GUI-based cross-platform desktop app development, of course, lie many, many other choices of tools and approaches -- but they're less crucial for solo development than they are for effective team cooperation, so, until teamwork is in prospect for you, it won't hurt to use whatever tools you find simplest (e.g., svn rather than a DVCS: I strongly recommend a DVCS such as hg, git or bazaar for team use, but for a solo developer I guess svn is still quite acceptable, and perhaps simpler to install and use).\n", "There are many answers to your question because you raise a number of issues:\nAgile development is methodology and has very little to do with the language or software platform. It is more a set of principles around which software teams organize themselves. Refer to the works of Kent Beck for a bit more detail.\nDo you have an existing Python code base? If you do have an existing Python code base you could get relatively far with pyHed. Otherwise you could look at something like Java Swing or C#. \nBut really you might want to consider moving the application to a web platform - that seems to be the direction almost all desktop apps are heading. Django is well known Python framework. Or any number of the Java, C#, Ruby platforms if it strikes your fancy.\nThe jquery JavaScript framework is a good tool to provide rich Web interfaces.\n", "you could have a look at the camelot framework http://www.conceptive.be/projects/camelot/\nIt provides a pyqt gui on top of sqlalchemy mapped classes. If you have questions, you can always\npost on our mailing list.\nErik\n", "For what is worth, last week with no previous experience in python itself.. I managed to build a basic MVC app in about 4 days.. I used wxpython & wxglade \nI think that if you know what your functional needs are, with a bit of googling & a bunch of reading other peoples code, you can produce very usable stuff in a very short time.\nhttp://www.wxpython.org/\n", "You might want to checkout http://dabodev.com/ too, I have no personally experience with it, just know of it existence and that there are a couple of enthusiasts.\nI would recommend that you don't concentrate too much about Agile or XP coding, especially when you start out, good old big design up front will save your skin before you burn it with headless hacking. \nThat being said, I usually start coding a prototype/proof-of-concept before I actually design it and consequently write unit-tests for the first release.\nBut the most important advice I would like to give you is, keep yourself motivated and happy :-)\n", "I'm one of the developers of pyHed. We know that pyHed's documentation is not very good yet, but we are working very hard in it (it's the main thread of 1.1 version).\nHaving any doubt about pyHed please contact us in our forum, your question will be answered imediately...\nVinicius Berni - pyHed Team\n" ]
[ 3, 2, 2, 1, 0, 0 ]
[]
[]
[ "database", "frameworks", "open_source", "python" ]
stackoverflow_0001390868_database_frameworks_open_source_python.txt
Q: python or bash - adding " at beginning of line and ", at end of line I have a text file with something like first line line nr 2 line three etc And I want to generate "first line", "line nr 2", "line three", I wonder how to do this in python or maybe in bash if it's easier/quicker. I know there is different code for opening a file and different for reading only one line in python(?) but I'm not sure which option to use in this case and, more importantly, how to add these characters. Any advice would help. A: sed 's/.*/"&",/' A: For the reference, in case someone wants to do the same thing using python. There is a handy module fileinput that could be used like this: import fileinput import sys, os for line in fileinput.input(inplace=True): sys.stdout.write('"%s",%s' % (line.rstrip(os.linesep), os.linesep)) Then run this as a script: python myscript.py file1 file2 file3 That will change the files inplace for you. A: Be a true unix geek: use sed! sed 's/^/"/; s/$/",/;' < your_text_file If you want to escape existing double quotes with backslashes, use 's/"/\\"/g; s/^/"/; s/$/",/;' as the pattern. sed is ideally suited for this type of task. Check out a ludicrously long list of examples. A: there is no need to construct a regular expression (with backreferencing) for this task. It's an expensive operation since you are not going to change something in the line. Easiest way is just to print them out. awk '{print "\042"$0"\042,"}' file Results on operation on a big file: $ head -5 file this is line this is line this is line this is line this is line $ wc -l < file 9545088 $ time awk '{print "\042"$0"\042,"}' file >/dev/null real 0m15.574s user 0m15.327s sys 0m0.172s $ time sed 's/.*/"&",/' file > /dev/null real 0m31.717s user 0m31.465s sys 0m0.157s $ time perl -p -e 's/^(.*)$/\"$1\",/g' file >/dev/null real 0m36.576s user 0m36.006s sys 0m0.360s A: A number of easy ways to do it... A simple perl oneliner: perl -pi -e 's/^(.*)$/\"$1\",/g' /path/to/your/file To explain a bit, the regex ^(.*)$ grabs everything (the (.*)) between the start of the line (^) and the end of the line ($), then uses the $1 match group variable to reconstruct it with the quotes and comma. A: In Bash: while read line do echo "\"${line}\"," done < inputfile A: Python for line in open("file"): line=line.strip() print '"%s",' % line A: sh + awk are nice here too... #!/bin/sh for FILE in "$@" do awk '{print "\"" $0 "\","}' < $FILE > $FILE.tmp mv $FILE.tmp $FILE done A: In vi: :%s/^\(.*\)$/"\1",/g
python or bash - adding " at beginning of line and ", at end of line
I have a text file with something like first line line nr 2 line three etc And I want to generate "first line", "line nr 2", "line three", I wonder how to do this in python or maybe in bash if it's easier/quicker. I know there is different code for opening a file and different for reading only one line in python(?) but I'm not sure which option to use in this case and, more importantly, how to add these characters. Any advice would help.
[ "sed 's/.*/\"&\",/'\n\n", "For the reference, in case someone wants to do the same thing using python. There is a handy module fileinput that could be used like this:\nimport fileinput\nimport sys, os\n\nfor line in fileinput.input(inplace=True):\n sys.stdout.write('\"%s\",%s' % (line.rstrip(os.linesep), os.linesep))\n\nThen run this as a script:\npython myscript.py file1 file2 file3\n\nThat will change the files inplace for you.\n", "Be a true unix geek: use sed!\nsed 's/^/\"/; s/$/\",/;' < your_text_file\n\nIf you want to escape existing double quotes with backslashes, use 's/\"/\\\\\"/g; s/^/\"/; s/$/\",/;' as the pattern.\nsed is ideally suited for this type of task. Check out a ludicrously long list of examples.\n", "there is no need to construct regular expression(with backreferencing) for this task. Its an expensive operation since you are not going to change something in the line. Easiest way is just to print them out.\n awk '{print \"\\042\"$0\"\\042,\"}' file \n\nResults on operation on a big file:\n$ head -5 file\nthis is line\nthis is line\nthis is line\nthis is line\nthis is line\n$ wc -l < file\n9545088\n\n$ time awk '{print \"\\042\"$0\"\\042,\"}' file >/dev/null\n\nreal 0m15.574s\nuser 0m15.327s\nsys 0m0.172s\n\n$ time sed 's/.*/\"&\",/' file > /dev/null\n\nreal 0m31.717s\nuser 0m31.465s\nsys 0m0.157s\n\n$ time perl -p -e 's/^(.*)$/\\\"$1\\\",/g' file >/dev/null\n\nreal 0m36.576s\nuser 0m36.006s\nsys 0m0.360s\n\n", "A number of easy ways to do it... \nA simple perl oneliner: \nperl -pi -e 's/^(.*)$/\\\"$1\\\",/g' /path/to/your/file\n\nTo explain a bit, the regex ^(.*)$ grabs everything (the (.*)) between the start of the line (^) and the end of the line ($), then uses the $1 match group variable to reconstruct it with the quotes and comma.\n", "In Bash:\nwhile read line\n do\n echo \"\\\"${line}\\\",\"\ndone < inputfile\n\n", "Python\nfor line in open(\"file\"):\n line=line.strip()\n print '\"%s\",' % line\n\n", "sh + awk are nice here too...\n!/bin/sh\nfor FILE in \"$@\"\ndo\n awk '{print \"\\\" $0 \"\\\",\"}' < $FILE > $FILE.tmp\n mv $FILE.tmp $FILE\ndone\n\n", "In vi:\n:%s/^\\(.*\\)$/\"\\1\",/g\n\n" ]
[ 8, 6, 6, 5, 4, 1, 1, 0, 0 ]
[]
[]
[ "bash", "linux", "python" ]
stackoverflow_0001688952_bash_linux_python.txt
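One answer above notes that pre-existing double quotes may need escaping; a sketch combining that with the in-place fileinput approach already shown:

import fileinput
import sys

for line in fileinput.input(inplace=True):
    line = line.rstrip('\n')
    # Escape backslashes first, then embedded quotes, as the sed variant does.
    line = line.replace('\\', '\\\\').replace('"', '\\"')
    sys.stdout.write('"%s",\n' % line)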
Q: How to profile a Django custom management command exclusively I would like to profile a custom management command that is relatively CPU intensive (renders an image using PIL). When I use the following command I get all sorts of Django modules (admin, ORM etc) in my profiling results: python -m cProfile manage.py testrender I have removed all imports that can potentially import Django but I am guessing the following is the culprit: from django.core.management.base import BaseCommand, CommandError Is there a way to filter out cProfile results? (only filenames are shown, no paths) Or, is there any other way to exclude/include respective modules/packages from profiling? A: I solved this problem the following way: from cProfile import Profile from django.core.management.base import BaseCommand class Command(BaseCommand): ... def _handle(self, *args, **options): # Actual code I want to profile pass def handle(self, *args, **options): if options['profile']: profiler = Profile() profiler.runcall(self._handle, *args, **options) profiler.print_stats() else: self._handle(*args, **options) This way profiling statistics are gathered within the scope of _handle. So instead of: python -m cProfile manage.py testrender I'll have to run: python manage.py testrender --profile which is even better. A: Separate the PIL functionality into its own function/class in its own module, and import it from your management command. Then you can test/profile the PIL functionality independently of Django. A: In case I can't find any answers, Gprof2Dot as explained here can be an acceptable hack. It doesn't filter out modules I'm not interested in, but hopefully it will make it easier to inspect the results by visually separating my code and Django modules.
How to profile a Django custom management command exclusively
I would like to profile a custom management command that is relatively CPU intensive (renders an image using PIL). When I use the following command I get all sorts of Django modules (admin, ORM etc) in my profiling results: python -m cProfile manage.py testrender I have removed all imports that can potentially import Django but I am guessing the following is the culprit: from django.core.management.base import BaseCommand, CommandError Is there a way to filter out cProfile results? (only filenames are shown, no paths) Or, is there any other way to exclude/include respective modules/packages from profiling?
[ "I solved this problem the following way:\nfrom cProfile import Profile\nfrom django.core.management.base import BaseCommand\n\n\nclass Command(BaseCommand):\n ...\n\n def _handle(self, *args, **options):\n # Actual code I want to profile\n pass\n\n def handle(self, *args, **options):\n if options['profile']:\n profiler = Profile()\n profiler.runcall(self._handle, *args, **options)\n profiler.print_stats()\n else:\n self._handle(*args, **options)\n\nThis way profiling statistics are gathered within the scope of _handle. So instead of:\npython -m cProfile manage.py testrender\n\nI'll have to run:\npython manage.py testrender --profile\n\nwhich is even better.\n", "Separate the PIL functionality into its own function/class in its own module, and import it from your management command. Then you can test/profile the PIL functionality independently of Django.\n", "If I can't find any answers. Gprof2Dot as explained here can be an acceptable hack.\nIt doesn't filter out modules I'm not interested, but hopefully it will make it easier to inspect the results visually seperating my code and Django modules.\n" ]
[ 19, 1, 0 ]
[]
[]
[ "django", "profiling", "python" ]
stackoverflow_0001687125_django_profiling_python.txt
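One detail the accepted answer leaves implicit: the --profile flag has to be declared on the command, or Django will reject it. On the Django 1.x series of that era this was done with optparse's make_option; newer versions use an add_arguments(self, parser) method instead:

from optparse import make_option
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    option_list = BaseCommand.option_list + (
        make_option('--profile', action='store_true', default=False,
                    help='Profile the command with cProfile.'),
    )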
Q: How do I check if it's the homepage in a Plone website using ZPT? I want to change my website's header only if it's not the homepage. Is there a tal:condition expression for that? I've been reading this and can't find what I'm looking for... thanks! A: The best way is to use two really handy plone views that are intended just for this purpose. The interface that defines them is at https://svn.plone.org/svn/plone/plone.app.layout/trunk/plone/app/layout/globals/interfaces.py, in case you want to check it out. <tal:block tal:define="our_url context/@@plone_context_state/canonical_object_url; home_url context/@@plone_portal_state/portal_url;" tal:condition="python:our_url == home_url"> HERE GOES YOUR STUFF </tal:block> The great thing about @@plone_context_state and @@plone_portal_state is that they handle all sorts of weird edge cases. context/@@plone_context_state/canonical_object_url also returns the right, most basic, object's url even when you're viewing the default page in the portal root with a query string appended :-) A: I use something similar to ax: <tal:block define="global currentUrl request/getURL" condition="python: u'home' not in str(currentUrl)"> <!-- whatever --> </tal:block> A: how about something like <tal:condition="python: request.URLPATH0 == '/index_html' ...>`? see TALES Built-in Names and the Zope API Reference for more choices.
How do I check if it's the homepage in a Plone website using ZPT?
I want to change my website's header only if it's not the homepage. Is there a tal:condition expression for that? I've been reading this and can't find what I'm looking for... thanks!
[ "The best way is to use two really handy plone views that are intended just for this purpose. The interface that defines them is at https://svn.plone.org/svn/plone/plone.app.layout/trunk/plone/app/layout/globals/interfaces.py, in case you want to check it out.\n<tal:block\n tal:define=\"our_url context/@@plone_context_state/canonical_object_url;\n home_url context/@@plone_portal_state/portal_url;\"\n tal:condition=\"python:our_url == home_url\">\nHERE GOES YOUR STUFF\n</tal:block>\n\nThe great thing about @@plone_context_state and @@plone_portal_state is that they handle all sorts of weird edge cases. context/@@plone_context_state/canonical_object_url also returns the right, most basic, object's url even when you're viewing the default page in the portal root with a query string appended :-)\n", "I use something similar to ax:\n<tal:block define=\"global currentUrl request/getURL\" condition=\"python: u'home' not in str(currentUrl)\">\n\n<!-- whatever -->\n\n</tal:block>\n\n", "how about something like <tal:condition=\"python: request.URLPATH0 == '/index_html' ...>`? see TALES Built-in Names and the Zope API Reference for more choices.\n" ]
[ 6, 1, 0 ]
[]
[]
[ "plone", "python", "template_tal", "zope", "zpt" ]
stackoverflow_0000651009_plone_python_template_tal_zope_zpt.txt
Q: CSS parser + XHTML generator, advice needed Guys, I need to develop a tool which would meet the following requirements: Input: XHTML document with CSS rules within head section. Output: XHTML document with CSS rules computed in tag attributes The best way to illustrate the behavior I want is as follows. Example input: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html> <head> <style type="text/css" media="screen"> .a { color: red; } p { font-size: 12px; } </style> </head> <body> <p class="a">Lorem Ipsum</p> <div class="a"> <p>Oh hai</p> </div> </body> </html> Example output: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html> <body> <p style="color: red; font-size: 12px;">Lorem Ipsum</p> <div style="color: red;"> <p style="font-size: 12px;">Oh hai</p> </div> </body> </html> What tools/libraries will fit best for such a task? I'm not sure if BeautifulSoup and cssutils are capable of doing this. Python is not a requirement. Any recommendations will be highly appreciated. A: Try premailer code.dunae.ca/premailer.web More info: campaignmonitor.com A: While I do not know any specific tool to do this, here is the basic approach I would take: Load as xml document Extract the css classes and styles from document For each pair of css class and style   Construct xpath query from css class   For each matching node     Set the style attribute for that class Remove style node from document Convert document to string A: There is a premailer python package on Pypi A: Depends on how complicated your CSS is going to be. If it's a simple matter of elements ("p {}", "a {}"), IDs/Classes (#test {}), then probably easiest to use regular expressions. You'd have to have one to find all the style definitions and then parse them, then use more regular expressions to find instances of tags that match. For example, if you found you had a style for A tags, you could use a regular expression like: <a\b[^>]*>(.*?)</a> To get them, then you'd have to do a replace to add the style. Of course you'd want the regex to accept the tag as a parameter (the A tag in this case). If you got into child selection or anything more than just root elements and ID/classes this could get messy fast. Consider just defining the styles inline to begin with?
CSS parser + XHTML generator, advice needed
Guys, I need to develop a tool which would meet the following requirements: Input: XHTML document with CSS rules within head section. Output: XHTML document with CSS rules computed in tag attributes The best way to illustrate the behavior I want is as follows. Example input: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html> <head> <style type="text/css" media="screen"> .a { color: red; } p { font-size: 12px; } </style> </head> <body> <p class="a">Lorem Ipsum</p> <div class="a"> <p>Oh hai</p> </div> </body> </html> Example output: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html> <body> <p style="color: red; font-size: 12px;">Lorem Ipsum</p> <div style="color: red;"> <p style="font-size: 12px;">Oh hai</p> </div> </body> </html> What tools/libraries will fit best for such a task? I'm not sure if BeautifulSoup and cssutils are capable of doing this. Python is not a requirement. Any recommendations will be highly appreciated.
[ "Try premailer\ncode.dunae.ca/premailer.web\nMore info:\ncampaignmonitor.com\n", "While I do not know any specific tool to do this, here is the basic approach I would take: \nLoad as xml document \nExtract the css classes and styles from document \nFor each pair of css class and style \n  Construct xpath query from css class \n  For each matching node\n    Set the style attribute for that class \nRemove style node from document\nConvert document to string\n", "There is a premailer python package on Pypi\n", "Depends on how complicated your CSS is going to be. If it's a simple matter of elements (\"p {}\", \"a {}\"), IDs/Classes (#test {}), then probably easiest to use regular expressions. You'd have to have one to find all the style definitions and then parse them, then use more regular expressions to find instances of tags that match.\nFor, for example, if you found you had a style for A tags, you could use a regular expression like:\n<a\\b[^>]*>(.*?)</a>\n\nTo get them, then you'd have to do a replace to add the style. Of course you'd want the regex to accept the tag as a parameter (the A tag in this case).\nIf you got into child selection or anything more than just root elements and ID/classes this could get messy fast. \nConsider just defining the styles inline to begin with?\n" ]
[ 3, 1, 1, 0 ]
[]
[]
[ "css", "parsing", "python", "xhtml" ]
stackoverflow_0000781382_css_parsing_python_xhtml.txt
Q: Move or copy an entity to another kind Is there a way to move an entity to another kind in appengine? Say you have a kind defined, and you want to keep a record of deleted entities of that kind. But you want to separate the storage of live objects and archived objects. Kinds are basically just serialized dicts in the bigtable anyway. And maybe you don't need to index the archive in the same way as the live data. So how would you make a move or copy of an entity of one kind to another kind? A: Unless someone's written utilities for this kind of thing, the way to go is to read from one and write to the other kind! A: No - once created, the kind is a part of the entity's immutable key. You need to create a new entity and copy everything across. One way to do this would be to use the low-level google.appengine.api.datastore interface, which treats entities as dicts.
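To make the low-level suggestion concrete, here is a sketch of copying an entity into an archive kind and deleting the original; 'ArchivedSession' is a hypothetical name for the target kind:

from google.appengine.api import datastore

def archive(entity):
    # The kind lives inside the immutable key, so a fresh entity is needed;
    # Entity subclasses dict, so the properties copy across with update().
    archived = datastore.Entity('ArchivedSession')  # assumed archive kind name
    archived.update(entity)
    datastore.Put(archived)
    datastore.Delete(entity.key())
    return archived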
Move or copy an entity to another kind
Is there a way to move an entity to another kind in appengine? Say you have a kind defined, and you want to keep a record of deleted entities of that kind. But you want to separate the storage of live objects and archived objects. Kinds are basically just serialized dicts in the bigtable anyway. And maybe you don't need to index the archive in the same way as the live data. So how would you make a move or copy of an entity of one kind to another kind?
[ "Unless someone's written utilities for this kind of thing, the way to go is to read from one and write to the other kind!\n", "No - once created, the kind is a part of the entity's immutable key. You need to create a new entity and copy everything across. One way to do this would be to use the low-level google.appengine.api.datastore interface, which treats entities as dicts.\n" ]
[ 1, 1 ]
[]
[]
[ "archive", "bigtable", "google_app_engine", "indexing", "python" ]
stackoverflow_0001693815_archive_bigtable_google_app_engine_indexing_python.txt
Q: Using md5 on BeautifulSoup result I'm trying to use the md5 algorithm on web pages to avoid seeing duplicates. Is there an easy way to convert the result from beautifulsoup into a string which is digestible by md5? Many thanks A: Just turn it into a string with str: from BeautifulSoup import BeautifulSoup doc = "<html><h1>Heading</h1><p>Text" soup = BeautifulSoup(doc) str(soup) (from the docs)
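Putting the two halves together — str() to flatten the parse tree, hashlib for the digest (the page-fetching part is assumed):

import hashlib
from BeautifulSoup import BeautifulSoup

html = "<html><h1>Heading</h1><p>Text"  # e.g. a fetched page body
digest = hashlib.md5(str(BeautifulSoup(html))).hexdigest()
print digest  # compare against digests of pages already seen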
Using md5 on BeautifulSoup result
I'm trying to use the md5 algorithm on web pages to avoid seeing duplicates. Is there an easy way to convert the result from beautifulsoup into a string which is digestible by md5? Many thanks
[ "Just turn it into a string with str:\nfrom BeautifulSoup import BeautifulSoup\ndoc = \"<html><h1>Heading</h1><p>Text\"\nsoup = BeautifulSoup(doc)\n\nstr(soup)\n\n(from the docs)\n" ]
[ 4 ]
[]
[]
[ "beautifulsoup", "md5", "python" ]
stackoverflow_0001694061_beautifulsoup_md5_python.txt
Q: Finding a logic bug in converting Python code to PHP The input 7|12|1|14|2|13|8|11|16|3|10|5|9|6|15|4 returns 0 from the PHP code, while 1 from the Python code: 1 means that the sums of the 4x4 magic square are the same, while 0 means the reverse. The Python code is correct. The problem of the PHP code seems to be in the function divide's for-loop, since PHP gives too many sums. How does the logic of the PHP code differ from Python's? PHP $data = "7|12|1|14|2|13|8|11|16|3|10|5|9|6|15|4"; $array = explode("|", $data); # Calculate the unique sums of the four figures # @return array int function divide ( $array ) { $sum = array_map('array_sum', array_chunk($array, 4)); $apu_a = array(); for ( $i = 0; $i < count( $sum ); $i++ ) { if ( $i % 5 == 0 ) $apu_a []= $array[$i]; } $sum []= array_sum( $apu_a ); $apu_a = array(); for ( $i = 0; $i < count( $sum ); $i++ ) { if ( $i % 3 == 0 and $i != 15 and $i != 0 ) $apu_a []= $array[$i]; } $sum []= array_sum( $apu_a ); $result = array_unique($sum); return $result; } Python data = "7|12|1|14|2|13|8|11|16|3|10|5|9|6|15|4" lista = map(int,data.split("|")) def divide( lista ): summat = [sum(lista[i:i+4]) for i in range(0,len(lista),4)] summat += [sum(lista[0::5]) for i in range(0, len(lista), 16)] summat += [sum(a for i,a in enumerate(lista) if i %3==0 and i != 15 and i != 0)] return set(summat) A: The problem is in your PHP line: for ( $i = 0; $i < count( $sum ); $i++ ) { you wanted: for ( $i = 0; $i < count( $array ); $i++ ) { Fix it in both places, and you get the right answer. BTW: You are only checking the main diagonals, but you can also check the other wrapping diagonals, and as Greg points out, you never sum the columns. A: In your Python code, you have: summat += [sum(lista[0::5]) for i in range(0, len(lista), 16)] I'm pretty sure this isn't doing what you intend it to do. len(lista) is 16, so this range is range(0, 16, 16) which is [0]. Then, you're not even using i in the left hand side of the list comprehension, you're just summing the values along the main diagonal. If this is intended to sum along the main diagonals, you can replace this and the following line with the simpler: summat += [sum(lista[0::5])] summat += [sum(lista[3:15:3])] Finally, you are not calculating the sums of the columns in your magic square at all. You would need something like: summat += [sum(lista[i::4]) for i in range(4)] A: Look to Ned for your answer; this is just to note PHP improvements: $s = 0; for ( $i = 0; $i < count( $array ); $i += 5 ) { $s += $array[$i]; } $sum[]=$s; $s=0; for ( $i = 3; $i < 15; $i+=3 ) { $s += $array[$i]; } $sum[]=$s;
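For reference, a corrected Python divide() that checks rows, columns and both main diagonals (the column check is missing from the versions above, as the answers point out):

def divide(lista):
    n = 4
    summat = [sum(lista[i:i + n]) for i in range(0, n * n, n)]  # rows
    summat += [sum(lista[i::n]) for i in range(n)]              # columns
    summat.append(sum(lista[0::n + 1]))                         # main diagonal
    summat.append(sum(lista[n - 1:n * n - 1:n - 1]))            # anti-diagonal
    return set(summat)

data = "7|12|1|14|2|13|8|11|16|3|10|5|9|6|15|4"
print len(divide(map(int, data.split("|")))) == 1  # True: every sum is 34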
Finding a logic bug in converting Python code to PHP
The input 7|12|1|14|2|13|8|11|16|3|10|5|9|6|15|4 returns 0 from the PHP code, while 1 from the Python code: 1 means that the sums of the 4x4 magic square are the same, while 0 means the reverse. The Python code is correct. The problem of the PHP code seems to be in the function divide's for-loop, since PHP gives too many sums. How does the logic of the PHP code differ from Python's? PHP $data = "7|12|1|14|2|13|8|11|16|3|10|5|9|6|15|4"; $array = explode("|", $data); # Calculate the unique sums of the four figures # @return array int function divide ( $array ) { $sum = array_map('array_sum', array_chunk($array, 4)); $apu_a = array(); for ( $i = 0; $i < count( $sum ); $i++ ) { if ( $i % 5 == 0 ) $apu_a []= $array[$i]; } $sum []= array_sum( $apu_a ); $apu_a = array(); for ( $i = 0; $i < count( $sum ); $i++ ) { if ( $i % 3 == 0 and $i != 15 and $i != 0 ) $apu_a []= $array[$i]; } $sum []= array_sum( $apu_a ); $result = array_unique($sum); return $result; } Python data = "7|12|1|14|2|13|8|11|16|3|10|5|9|6|15|4" lista = map(int,data.split("|")) def divide( lista ): summat = [sum(lista[i:i+4]) for i in range(0,len(lista),4)] summat += [sum(lista[0::5]) for i in range(0, len(lista), 16)] summat += [sum(a for i,a in enumerate(lista) if i %3==0 and i != 15 and i != 0)] return set(summat)
[ "The problem is in your PHP line:\nfor ( $i = 0; $i < count( $sum ); $i++ ) {\n\nyou wanted:\nfor ( $i = 0; $i < count( $array ); $i++ ) {\n\nFix it in both places, and you get the right answer.\nBTW: You are only checking the main diagonals, but you can also check the other wrapping diagonals, and as Greg points out, you never sum the columns.\n", "In your Python code, you have:\n summat += [sum(lista[0::5]) for i in range(0, len(lista), 16)]\n\nI'm pretty sure this isn't doing what you intend it to do. len(lista) is 16, so this range is range(0, 16, 16) which is [0]. Then, you're not even using i in the left hand side of the list comprehension, you're just summing the values along the main diagonal. If this is intended to sum along the main diagonals, you can replace this and the following line with the simpler:\n summat += [sum(lista[0::5])]\n summat += [sum(lista[3:15:3])]\n\nFinally, you are not calculating the sums of the columns in your magic square at all. You would need something like:\n summat += [sum(lista[i::4]) for i in range(4)]\n\n", "Look to Ned for your answer; this is just to note PHP improvements:\n $s = 0;\n for ( $i = 0; $i < count( $array ); $i += 5 ) {\n $s += $array[$i];\n }\n $sum[]=$s;\n\n $s=0;\n for ( $i = 3; $i < 15; $i+=3 ) {\n $s += $array[$i];\n }\n $sum[]=$s;\n\n" ]
[ 2, 1, 1 ]
[]
[]
[ "php", "python" ]
stackoverflow_0001693945_php_python.txt
Q: Django: Ordering objects by their children's attributes Consider the models: class Author(models.Model): name = models.CharField(max_length=200, unique=True) class Book(models.Model): pub_date = models.DateTimeField() author = models.ForeignKey(Author) Now suppose I want to order all the books by, say, their pub_date. I would use order_by('pub_date'). But what if I want a list of all authors ordered according to who most recently published books? It's really very simple when you think about it. It's essentially: The author on top is the one who most recently published a book The next one is the one who published books not as new as the first, So on etc. I could probably hack something together, but since this could grow big, I need to know that I'm doing it right. Help appreciated! Edit: Lastly, would the option of just adding a new field to each one to show the date of the last book and just updating that the whole time be better? A: from django.db.models import Max Author.objects.annotate(max_pub_date=Max('books__pub_date')).order_by('-max_pub_date') this requires that you use django 1.1 and I assumed you will add a 'related_name' to your author field in Book model, so it will be called by Author.books instead of Author.book_set. It's much more readable. A: Or, you could play around with something like this: Author.objects.filter(book__pub_date__isnull=False).order_by('-book__pub_date') A: def remove_duplicates(seq): seen = {} result = [] for item in seq: if item in seen: continue seen[item] = 1 result.append(item) return result # Get the authors of the most recent books query_result = Books.objects.order_by('pub_date').values('author') # Strip the keys from the result set and remove duplicate authors recent_authors = remove_duplicates(query_result.values()) A: Building on ayaz's solution, what about: Author.objects.filter(book__pub_date__isnull=False).distinct().order_by('-book__pub_date') A: Lastly, would the option of just adding a new field to each one to show the date of the last book and just updating that the whole time be better? Actually it would! This is a normal denormalization practice and can be done like this: class Author(models.Model): name = models.CharField(max_length=200, unique=True) latest_pub_date = models.DateTimeField(null=True, blank=True) def update_pub_date(self): try: self.latest_pub_date = self.book_set.order_by('-pub_date')[0] self.save() except IndexError: pass # no books yet! class Book(models.Model): pub_date = models.DateTimeField() author = models.ForeignKey(Author) def save(self, **kwargs): super(Book, self).save(**kwargs) self.author.update_pub_date() def delete(self): super(Book, self).delete() self.author.update_pub_date() This is the third common option you have besides two already suggested: doing it in SQL with a join and grouping getting all the books to the Python side and removing duplicates Both these options choose to compute pub_dates from normalized data at the time when you read them. Denormalization does this computation for each author at the time when you write new data. The idea is that most web apps do reads more often than writes so this approach is preferable. One of the perceived downsides of this is that basically you have the same data in different places and it requires you to keep it in sync. It horrifies database people to death usually :-). But this is usually not a problem until you use your ORM model to work with data (which you probably do anyway). In Django it's the app that controls the database, not the other way around. Another (more realistic) downside is that with the naive code that I've shown massive book updates may be way slower since they ping authors for updating their data on each update no matter what. This is usually solved by having a flag to temporarily disable calling update_pub_date and calling it manually afterwards. Basically, denormalized data requires more maintenance than normalized.
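Typical usage of the annotate() queryset from the first answer, here assuming the default reverse name 'book' (i.e. no related_name set on the ForeignKey):

from django.db.models import Max

authors = (Author.objects
           .annotate(last_pub=Max('book__pub_date'))
           .order_by('-last_pub'))
for author in authors:
    print author.name, author.last_pub  # most recently publishing author first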
Django: Ordering objects by their children's attributes
Consider the models: class Author(models.Model): name = models.CharField(max_length=200, unique=True) class Book(models.Model): pub_date = models.DateTimeField() author = models.ForeignKey(Author) Now suppose I want to order all the books by, say, their pub_date. I would use order_by('pub_date'). But what if I want a list of all authors ordered according to who most recently published books? It's really very simple when you think about it. It's essentially: The author on top is the one who most recently published a book The next one is the one who published books not as new as the first, So on etc. I could probably hack something together, but since this could grow big, I need to know that I'm doing it right. Help appreciated! Edit: Lastly, would the option of just adding a new field to each one to show the date of the last book and just updating that the whole time be better?
[ "from django.db.models import Max\nAuthor.objects.annotate(max_pub_date=Max('books__pub_date')).order_by('-max_pub_date')\n\nthis requires that you use django 1.1\nand i assumed you will add a 'related_name' to your author field in Book model, so it will be called by Author.books instead of Author.book_set. its much more readable.\n", "Or, you could play around with something like this:\nAuthor.objects.filter(book__pub_date__isnull=False).order_by('-book__pub_date')\n", " def remove_duplicates(seq): \n seen = {}\n result = []\n for item in seq:\n if item in seen: continue\n seen[item] = 1\n result.append(item)\n return result\n\n\n# Get the authors of the most recent books\nquery_result = Books.objects.order_by('pub_date').values('author')\n# Strip the keys from the result set and remove duplicate authors\nrecent_authors = remove_duplicates(query_result.values())\n\n", "Building on ayaz's solution, what about:\n Author.objects.filter(book__pub_date__isnull=False).distinct().order_by('-book__pub_date')\n", "\nLastly, would the option of just adding a new field to each one to show the date of the last book and just updating that the whole time be better?\n\nActually it would! This is a normal denormalization practice and can be done like this:\nclass Author(models.Model):\n name = models.CharField(max_length=200, unique=True)\n latest_pub_date = models.DateTimeField(null=True, blank=True)\n\n def update_pub_date(self):\n try:\n self.latest_pub_date = self.book_set.order_by('-pub_date')[0]\n self.save()\n except IndexError:\n pass # no books yet!\n\nclass Book(models.Model):\n pub_date = models.DateTimeField()\n author = models.ForeignKey(Author)\n\n def save(self, **kwargs):\n super(Book, self).save(**kwargs)\n self.author.update_pub_date()\n\n def delete(self):\n super(Book, self).delete()\n self.author.update_pub_date()\n\nThis is the third common option you have besides two already suggested:\n\ndoing it in SQL with a join and grouping\ngetting all the books to Python side and remove duplicates\n\nBoth these options choose to compute pub_dates from a normalized data at the time when you read them. Denormalization does this computation for each author at the time when you write new data. The idea is that most web apps do reads most often than writes so this approach is preferable.\nOne of the perceived downsides of this is that basically you have the same data in different places and it requires you to keep it in sync. It horrifies database people to death usually :-). But this is usually not a problem until you use your ORM model to work with dat (which you probably do anyway). In Django it's the app that controls the database, not the other way around.\nAnother (more realistic) downside is that with the naive code that I've shown massive books update may be way slower since they ping authors for updating their data on each update no matter what. This is usually solved by having a flag to temporarily disable calling update_pub_date and calling it manually afterwards. Basically, denormalized data requires more maintenance than normalized.\n" ]
[ 3, 1, 0, 0, 0 ]
[]
[]
[ "database", "django", "django_models", "python", "sql" ]
stackoverflow_0001692322_database_django_django_models_python_sql.txt
Q: How to access the current table in Numbers? How do I access the current table in Numbers using py-appscript? For posterity, the program I created using this information clears all the cells of the current table and returns the selection to cell A1. I turned it into a Service using a python Run Shell Script in Automator and attached it to Numbers. from appscript import * Numbers = app('Numbers') current_table = None for sheet in Numbers.documents.first.sheets(): for table in sheet.tables(): if table.selection_range(): current_table = table if current_table: for cell in current_table.cells(): cell.value.set('') current_table.selection_range.set(to=current_table.ranges[u'A1']) It was used to clear large tables of numbers that I used for temporary calculations. A: >>> d = app('Numbers').documents.first() # reference to current top document EDIT: There doesn't seem to be a straight-forward single reference to the current table but it looks like you can find it by searching the current first document's sheets for a table with a non-null selection_range, so something like this: >>> nu = app('Numbers') >>> for sheet in nu.documents.first.sheets(): ... for table in sheet.tables(): ... if table.selection_range(): ... print table.name()
How to access the current table in Numbers?
How do I access the current table in Numbers using py-appscript? For posterity, the program I created using this information clears all the cells of the current table and returns the selection to cell A1. I turned it into a Service using a python Run Shell Script in Automator and attached it to Numbers. from appscript import * Numbers = app('Numbers') current_table = None for sheet in Numbers.documents.first.sheets(): for table in sheet.tables(): if table.selection_range(): current_table = table if current_table: for cell in current_table.cells(): cell.value.set('') current_table.selection_range.set(to=current_table.ranges[u'A1']) It was used to clear large tables of numbers that I used for temporary calculations.
[ ">>> d = app('Numbers').documents.first() # reference to current top document\n\nEDIT: There doesn't seem to be a straight-forward single reference to the current table but it looks like you can find it by searching the current first document's sheets for a table with a non-null selection_range, so something like this:\n>>> nu = app('Numbers')\n>>> for sheet in nu.documents.first.sheets():\n... for table in sheet.tables():\n... if table.selection_range():\n... print table.name()\n\n" ]
[ 2 ]
[]
[]
[ "iwork", "py_appscript", "python", "sourceforge_appscript" ]
stackoverflow_0001694060_iwork_py_appscript_python_sourceforge_appscript.txt
Q: Reasons to prefer zope 3 over grok I am familiar with zope 2 and think that zope 3 is superior in many ways, as far as I've used it (i.e. primarily with Five). Now I'm considering diving deeper into zope 3. Would you recommend going even one step further and using grok instead, and if so, why? (and if not, why not? :) A: A good resource is http://plone.org/products/dexterity/documentation/manual/five.grok/referencemanual-all-pages . Plone is probably the biggest piece of software that uses zope3, so the fact that plone uses grok's way of configuring zope3 counts for something. I'd definitely recommend going one step further and using grok. The underlying functionality (the so-called "zope component architecture") is the same, it is basically only the way it is configured that is different. With grok, the configuration happens in your python files instead of in xml (.zcml) files. Much more comfortable, especially when you need to figure out what happens where. Important to keep in mind: you can mix them up virtually at will. It is "just" the configuration (and some defaults) that are different. So an event handler registered by using grok will react just fine to an event that is configured with pure zope3 .zcml files!
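To make the contrast concrete, here is a minimal (hypothetical) grok component — the registration that a plain zope 3 package would put in a .zcml file happens through the class statement and grok directives instead:

import grok

class Document(grok.Model):
    pass

class Index(grok.View):
    grok.context(Document)  # roughly the 'for' attribute of a zcml registration

    def render(self):
        return u'Hello from a grokked view'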
Reasons to prefer zope 3 over grok
I am familiar with zope 2 and think that zope 3 is superior in many ways, as far as I've used it (i.e. primarily with Five). Now I'm considering diving deeper into zope 3. Would you recommend going even one step further and using grok instead, and if so, why? (and if not, why not? :)
[ "A good resource is http://plone.org/products/dexterity/documentation/manual/five.grok/referencemanual-all-pages . Plone is probably the biggest piece of software that uses zope3, so the fact that plone uses grok's way of configuring zope3 counts for something.\nI'd definitively recommend going one step further and to use grok. The underlying functionality (the so-called \"zope component architecture\") is the same, is is basically only the way it is configured that is different.\nWith grok, the configuration happens in your python files instead of in xml (.zcml) files. Much more comfortable, especially when you need to figure out what happens where.\nImportant to keep in mind: you can mix them up virtually at will. It is \"just\" the configuration (and some defaults) that are different. So an event handler registered by using grok will react just fine to an event that is configured with pure zope3 .zcml files!\n" ]
[ 5 ]
[]
[]
[ "grok", "python", "zope" ]
stackoverflow_0001694309_grok_python_zope.txt
Q: Difference between ^ Operator in JS and Python I need to port some JS code which involves Math.random()*2147483648)^(new Date).getTime(). It looks like for smaller numbers the python function and the JS function are equivalent, but with large numbers like this, the values end up entirely different. Python: >>> 2147483647 ^ 1257628307380 1257075044427 Javascript: > 2147483647 ^ 1257628307380 -1350373301 How can I get the Javascript value from python? A: Python has unlimited-precision integers, while Javascript is using a 32-bit integer. You can manually apply a 32-bit limit to get the result you want: def xor32bit(a, b): m = (a ^ b) % (2**32) if m >= (2**31): m -= 2**32 return m A: Easiest way would be to use ctypes to get the same overflow behavior as Javascript: >>> import ctypes >>> ctypes.c_int(1257075044427) c_long(-1350373301) To get the value: >>> ctypes.c_int(1257075044427).value -1350373301
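Putting the helper back into the original expression — a quick check that the 32-bit wrap reproduces the Javascript result, plus a sketch of the ported expression itself (xor32bit as defined in the first answer):

import time, random

print xor32bit(2147483647, 1257628307380)  # -1350373301, matching Javascript
# JS: Math.random()*2147483648 ^ (new Date).getTime()
seed = xor32bit(int(random.random() * 2147483648), int(time.time() * 1000))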
Difference between ^ Operator in JS and Python
I need to port some JS code which involves Math.random()*2147483648)^(new Date).getTime(). It looks like for smaller numbers the python function and the JS function are equivalent, but with large numbers like this, the values end up entirely different. Python: >>> 2147483647 ^ 1257628307380 1257075044427 Javascript: > 2147483647 ^ 1257628307380 -1350373301 How can I get the Javascript value from python?
[ "Python has unlimited-precision integers, while Javascript is using a 32-bit integer. You can manually apply a 32-bit limit to get the result you want:\ndef xor32bit(a, b):\n m = (a ^ b) % (2**32)\n if m > (2**16):\n m -= 2**32\n return m\n\n", "Easiest way would be to use ctypes to get the same overflow behavior as Javascript:\n>>> import ctypes\n>>> ctypes.c_int(1257075044427)\nc_long(-1350373301)\n\nTo get the value:\n>>> ctypes.c_int(1257075044427).value\n-1350373301\n\n" ]
[ 7, 4 ]
[]
[]
[ "bit_manipulation", "javascript", "python", "xor" ]
stackoverflow_0001694507_bit_manipulation_javascript_python_xor.txt
Q: How do you generate random unique identifiers in a multi process and multi thread environment? Every solution I come up with is not thread safe. def uuid(cls,db): u = hexlify(os.urandom(8)).decode('ascii') db.execute('SELECT sid FROM sessions WHERE sid=?',(u,)) if db.fetch(): u=cls.uuid(db) else: db.execute('INSERT INTO sessions (sid) VALUES (?)',(u,)) return u A: import os, threading, Queue def idmaker(aqueue): while True: u = hexlify(os.urandom(8)).decode('ascii') aqueue.put(u) idqueue = Queue.Queue(2) t = threading.Thread(target=idmaker, args=(idqueue,)) t.daemon = True t.start() def idgetter(): return idqueue.get() Queue is often the best way to synchronize threads in Python -- that's frequent enough that when designing a multi-thread system your first thought should be "how could I best do this with Queues". The underlying idea is to dedicate a thread to entirely "own" a shared resource or subsystem, and have all other "worker" threads access the resource only by gets and/or puts on Queues used by that dedicated thread (Queue is intrinsically threadsafe). Here, we make an idqueue with a length of only 2 (we don't want the id generation to go wild, making a lot of ids beforehand, which wastes memory and exhausts the entropy pool -- not sure if 2 is optimal, but the sweet spot is definitely going to be a pretty small integer;-), so the id generator thread will block when trying to add the third one, and wait until some space opens in the queue. idgetter (which could also be simply defined by a top-level assignment, idgetter = idqueue.get) will normally find an id already there and waiting (and make space for the next one!) -- if not, it intrinsically blocks and waits, waking up as soon as the id generator has placed a new id in the queue. A: Your algorithm is OK (thread safe as far as your DB API module is safe) and probably is the best way to go. It will never give you a duplicate (assuming you have PRIMARY or UNIQUE key on sid), but you have a negligibly small chance to get IntegrityError exception on INSERT. But your code doesn't look good. It's better to use a loop with a limited number of attempts instead of recursion (which in case of some error in the code could become infinite): for i in range(MAX_ATTEMPTS): sid = os.urandom(8).decode('hex') db.execute('SELECT COUNT(*) FROM sessions WHERE sid=?', (sid,)) if not db.fetchone()[0]: # You can catch IntegrityError here and continue, but there are reasons # to avoid this. db.execute('INSERT INTO sessions (sid) VALUES (?)', (sid,)) break else: raise RuntimeError('Failed to generate unique session ID') You can raise the number of random characters read to make the chance of failure even smaller. base64.urlsafe_b64encode() is your friend if you'd like to make SID shorter, but then you have to ensure your database uses case-sensitive comparison for this column (MySQL's VARCHAR is not suitable unless you set binary collation for it, but VARBINARY is OK). A: I'm suggesting just a small modification to the accepted answer by Denis: for i in range(MAX_ATTEMPTS): sid = os.urandom(8).decode('hex') try: db.execute('INSERT INTO sessions (sid) VALUES (?)', (sid,)) except IntegrityError: continue break else: raise RuntimeError('Failed to generate unique session ID') We simply attempt the insert without explicitly checking for the generated ID. The insert will very rarely fail, so we most often only have to make the one database call, instead of two. This will improve efficiency by making fewer database calls, without compromising thread-safety (as this will effectively be handled by the database engine). A: If you require thread safety why not put your random number generator in a function that uses a shared lock: import threading lock = threading.Lock() def get_random_number(lock): with lock: print "This can only be done by one thread at a time" If all the threads calling get_random_number use the same lock instance, then only one of them at a time can create a random number. Of course you have also just created a bottleneck in your application with this solution. There are other solutions depending on your requirements such as creating blocks of unique identifiers then consuming them in parallel. A: No need to call the database I'd think: >>> import uuid # make a UUID based on the host ID and current time >>> uuid.uuid1() UUID('a8098c1a-f86e-11da-bd1a-00112444be1e') From this page. A: I'd start with a thread-unique ID and (somehow) concatenate that with a thread-local counter, then feed it through a cryptographic hash algorithm. A: If you absolutely need to verify uid against the database and avoid race conditions, use transactions: BEGIN TRANSACTION SELECT COUNT(*) FROM sessions WHERE sid=%s INSERT INTO sessions (sid,...) VALUES (%s,...) COMMIT A: Is there not a unique piece of data in each thread? It is difficult for me to imagine two threads with exactly the same data. Though I don't discount the possibility. In the past when I have done things of this nature there is usually something unique about the thread. User name or client name or something of that nature. The solution for me was to concatenate the UserName, for example, and the current time in milliseconds then hash that string and get a hex digest of the hash. This gives one a nice string that is always the same length. There is a really remote possibility that two different John Smiths (or whatever) in two threads generate the id in the same millisecond. If that possibility makes one nervous then the locking route as mentioned may be needed. As was already mentioned there are already routines to get a GUID. I personally like fiddling with hash functions so I have rolled my own in the way mentioned on large multi threaded systems with success. It is ultimately up to you to decide if you really have threads with duplicate data. Be sure to choose a good hashing algorithm. I have used md5 successfully but I have read that it is possible to generate a hash collision with md5 though I have never done it. Lately I have been using sha1. A: mkdtemp should be thread-safe, simple and secure: def uuid(): import tempfile,os _tdir = tempfile.mkdtemp(prefix='uuid_') _uuid = os.path.basename(_tdir) os.rmdir(_tdir) return _uuid
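For completeness, a quick illustrative demonstration of the Queue-based generator from the first answer being shared by several worker threads — idgetter and idqueue as defined there:

import threading

def worker(name):
    for _ in range(3):
        print name, idgetter()  # blocks briefly if the queue is momentarily empty

threads = [threading.Thread(target=worker, args=('worker-%d' % i,))
           for i in range(4)]
for t in threads: t.start()
for t in threads: t.join()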
How do you generate random unique identifiers in a multi process and multi thread environment?
Every solution I come up with is not thread safe. def uuid(cls,db): u = hexlify(os.urandom(8)).decode('ascii') db.execute('SELECT sid FROM sessions WHERE sid=?',(u,)) if db.fetch(): u=cls.uuid(db) else: db.execute('INSERT INTO sessions (sid) VALUES (?)',(u,)) return u
[ "import os, threading, Queue\n\ndef idmaker(aqueue):\n while True:\n u = hexlify(os.urandom(8)).decode('ascii')\n aqueue.put(u)\n\nidqueue = Queue.Queue(2)\n\nt = threading.Thread(target=idmaker, args=(idqueue,))\nt.daemon = True\nt.start()\n\ndef idgetter():\n return idqueue.get()\n\nQueue is often the best way to synchronize threads in Python -- that's frequent enough that when designing a multi-thread system your first thought should be \"how could I best do this with Queues\". The underlying idea is to dedicate a thread to entirely \"own\" a shared resource or subsystem, and have all other \"worker\" threads access the resource only by gets and/or puts on Queues used by that dedicated thread (Queue is intrinsically threadsafe).\nHere, we make an idqueue with a length of only 2 (we don't want the id generation to go wild, making a lot of ids beforehand, which wastes memory and exhausts the entropy pool -- not sure if 2 is optimal, but the sweet spot is definitely going to be a pretty small integer;-), so the id generator thread will block when trying to add the third one, and wait until some space opens in the queue. idgetter (which could also be simply defined by a top-level assignment, idgetter = idqueue.get) will normally find an id already there and waiting (and make space for the next one!) -- if not, it intrinsically blocks and waits, waking up as soon as the id generator has placed a new id in the queue.\n", "Your algorithm is OK (thread safe as far as your DB API module is safe) and probably is the best way to go. It will never give you duplicate (assuming you have PRIMARY or UNIQUE key on sid), but you have a neglectfully small chance to get IntegrityError exception on INSERT. But your code doesn't look good. It's better to use a loop with limited number of attempts instead of recursion (which in case of some error in the code could become infinite):\nfor i in range(MAX_ATTEMPTS):\n sid = os.urandom(8).decode('hex')\n db.execute('SELECT COUNT(*) FROM sessions WHERE sid=?', (sid,))\n if not db.fetchone()[0]:\n # You can catch IntegrityError here and continue, but there are reasons\n # to avoid this.\n db.execute('INSERT INTO sessions (sid) VALUES (?)', (sid,))\n break\nelse:\n raise RuntimeError('Failed to generate unique session ID')\n\nYou can raise the number of random characters read used to make chance to fail even smaller. base64.urlsafe_b64encode() is your friend if you'd like to make SID shorter, but then you have to insure your database uses case-sensitive comparison for this columns (MySQL's VARCHAR is not suitable unless you set binary collation for it, but VARBINARY is OK).\n", "I'm suggesting just a small modification to the accepted answer by Denis:\nfor i in range(MAX_ATTEMPTS):\n sid = os.urandom(8).decode('hex')\n try:\n db.execute('INSERT INTO sessions (sid) VALUES (?)', (sid,))\n except IntegrityError:\n continue\n break\nelse:\n raise RuntimeError('Failed to generate unique session ID')\n\nWe simply attempt the insert without explicitly checking for the generated ID. 
The insert will very rarely fail, so we most often only have to make the one database call, instead of two.\nThis will improve efficiency by making fewer database calls, without compromising thread-safety (as this will effectively be handled by the database engine).\n", "If you require thread safety why not put you random number generator a function that uses a shared lock:\nimport threading\nlock = threading.Lock()\ndef get_random_number(lock)\n with lock:\n print \"This can only be done by one thread at a time\"\n\nIf all the threads calling get_random_number use the same lock instance, then only one of them at time can create a random number. \nOf course you have also just created a bottle neck in your application with this solution. There are other solutions depending on your requirements such as creating blocks of unique identifiers then consuming them in parallel.\n", "No need to call the database I'd think:\n>>> import uuid\n\n# make a UUID based on the host ID and current time\n>>> uuid.uuid1()\nUUID('a8098c1a-f86e-11da-bd1a-00112444be1e')\n\nFrom this page.\n", "I'd start with a thread-unique ID and (somehow) concatenate that with a thread-local counter, then feed it through a cryptographic hash algorithm.\n", "If you absolutely need to verify uid against database and avoid race conditions, use transactions:\nBEGIN TRANSACTION\nSELECT COUNT(*) FROM sessions WHERE sid=%s\nINSERT INTO sessions (sid,...) VALUES (%s,...)\nCOMMIT\n\n", "Is there not a unique piece of data in each thread? It is difficult for me to imagine two threads with exactly the same data. Though I don't discount the possibility.\nIn the past when I have done things of this nature there is usually something unique about the thread. User name or client name or something of that nature. The solution for me was to concatenate the UserName, for example, and the current time in milliseconds then hash that string and get a hex digest of the hash. This gives one a nice string that is always the same length.\nThere is a really remote possibility that two different John Smith's (or whatever) in two threads generate the id in the same millisecond. If that possibility makes one nervous then the locking route as mentioned may be needed. \nAs was already mentioned there are already routines to get a GUID. I personally like fiddling with hash functions so I have rolled my own in the way mentioned on large multi threaded systems with success.\nIt is ultimately up to you to decide if you really have threads with duplicate data. Be sure to choose a good hashing algorithm. I have used md5 successfully but I have read that it is possible to generate a hash collision with md5 though I have never done it. Lately I have been using sha1.\n", "mkdtemp should be thread-safe,simple and secure :\ndef uuid():\n import tempfile,os\n _tdir = tempfile.mkdtemp(prefix='uuid_')\n _uuid = os.path.basename(_tdir)\n os.rmdir(_tdir)\n return _uuid\n\n" ]
[ 5, 3, 3, 2, 1, 1, 0, 0, 0 ]
[]
[]
[ "mod_wsgi", "python", "sql" ]
stackoverflow_0001687344_mod_wsgi_python_sql.txt
Q: How to set selection for a table in Numbers? How do you set the selection for a table in Numbers using py-appscript? This seems like it should be really simple to do but the solution is frustratingly evasive. I can get the current selection: current_table.selection_range and I can get its cells: current_table.selection_range.cells() but trying to set() either of them gets an angry appscript error. A: Looks like something like this works: >>> current_table.selection_range.set(to=current_table.ranges[u'B3:C10']) Note, looking at Numbers' script dictionary in AppleScript Editor or with ASDictionary, the property selection_range is defined as class range. So that's a clue that you need to come up with a reference of type range to set it.
How to set selection for a table in Numbers?
How do you set the selection for a table in Numbers using py-appscript? This seems like it should be really simple to do but the solution is frustratingly evasive. I can get the current selection: current_table.selection_range and I can get its cells: current_table.selection_range.cells() but trying to set() either of them gets an angry appscript error.
[ "Looks like something like this works:\n>>> current_table.selection_range.set(to=current_table.ranges[u'B3:C10'])\n\nNote, looking at Number's script dictionary in AppleScript Editor or with ASDictionary, the property selection_range is defined as class range. So that's a clue that you need to come up with a reference of type range to set it.\n" ]
[ 3 ]
[]
[]
[ "iwork", "py_appscript", "python", "sourceforge_appscript" ]
stackoverflow_0001694478_iwork_py_appscript_python_sourceforge_appscript.txt
Q: How can I build the Boost.Python example on Ubuntu 9.10? I am using Ubuntu 9.10 beta, whose repositories contain boost 1.38. I would like to build the hello-world example. I followed the instructions here (http://www.boost.org/doc/libs/1_40_0/libs/python/doc/tutorial/doc/html/python/hello.html), found the example project, and issued the "bjam" command. I have installed bjam and boost-build. I get the following output: Jamroot:18: in modules.load rule python-extension unknown in module Jamfile</usr/share/doc/libboost1.38-doc/examples/libs/python/example>. /usr/share/boost-build/build/project.jam:312: in load-jamfile /usr/share/boost-build/build/project.jam:68: in load /usr/share/boost-build/build/project.jam:170: in project.find /usr/share/boost-build/build-system.jam:248: in load /usr/share/boost-build/kernel/modules.jam:261: in import /usr/share/boost-build/kernel/bootstrap.jam:132: in boost-build /usr/share/doc/libboost1.38-doc/examples/libs/python/example/boost-build.jam:7: in module scope I do not know enough about Boost (this is an exploratory exercise for myself) to understand why the python-extension macro in the included Jamroot is not valid. I am running this example from the install directory, so I have not altered the Jamroot's use-project setting. As a side question, if I were to just willy-nilly start a project in an arbitrary directory, how would I write my jamroot? A: The problem comes from using the Ubuntu package instead of boost compiled from source. You have to edit your Jamroot to tell it to use the global libboost-python, instead of looking for the lib in a relative boost source tree. In summary, you should have these lines at the beginning of your Jamroot: using python ; lib libboost_python : : <name>boost_python ; project : requirements <library>libboost_python ; It was reported as a bug on Debian and corrected at least on lenny with libboost-python1.40 ...mostly. The example in libboost_python still refers to boost_python-mt instead of boost_python, and /usr/lib/libboost_python.so exists but /usr/lib/libboost_python-mt.so does not. Hopefully Ubuntu will soon have the same fix and the next user won't stumble on this... I know the answer to your question because I had the exact same problem not long ago.
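Once the Jamroot fix above is in place and bjam has built the extension, the Python side of the tutorial's example is just the following (hello_ext and greet are the names the Boost.Python tutorial uses; adjust if your module differs):

import hello_ext
print hello_ext.greet()  # 'hello, world' in the tutorial example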
How can I build the Boost.Python example on Ubuntu 9.10?
I am using Ubuntu 9.10 beta, whose repositories contain boost 1.38. I would like to build the hello-world example. I followed the instructions here (http://www.boost.org/doc/libs/1_40_0/libs/python/doc/tutorial/doc/html/python/hello.html), found the example project, and issued the "bjam" command. I have installed bjam and boost-build. I get the following output: Jamroot:18: in modules.load rule python-extension unknown in module Jamfile</usr/share/doc/libboost1.38-doc/examples/libs/python/example>. /usr/share/boost-build/build/project.jam:312: in load-jamfile /usr/share/boost-build/build/project.jam:68: in load /usr/share/boost-build/build/project.jam:170: in project.find /usr/share/boost-build/build-system.jam:248: in load /usr/share/boost-build/kernel/modules.jam:261: in import /usr/share/boost-build/kernel/bootstrap.jam:132: in boost-build /usr/share/doc/libboost1.38-doc/examples/libs/python/example/boost-build.jam:7: in module scope I do not know enough about Boost (this is an exploratory exercise for myself) to understand why the python-extension macro in the included Jamroot is not valid. I am running this example from the install directory, so I have not altered the Jamroot's use-project setting. As a side question, if I were to just willy-nilly start a project in an arbitrary directory, how would I write my jamroot?
[ "The problem comes from using Ubuntu package instead of boost compiled from source. You have to edit you Jamroot to say it to use global libboost-python, instead of looking for lib in relative boost source tree.\nSummarily you should have these lines at the beginning of your Jamroot:\nusing python ;\nlib libboost_python : : <name>boost_python ;\nproject : requirements <library>libboost_python ;\n\nIt was reported as a bug on Debian and corrected at least on lenny with libboost-python1.40 ...mostly. The example in libboost_python still refers to boost_python-mt instead of boost_python, but /usr/lib/libboost_python.so exists but not /usr/lib/libboost_python-mt.so.\nHopefully Ubuntu will soon have the same fix and the next user won't stumble on this... I know the answer to your question because I did had the exact same problem not long ago.\n" ]
[ 4 ]
[]
[]
[ "boost", "python", "ubuntu" ]
stackoverflow_0001569490_boost_python_ubuntu.txt
Q: Converting a list of tuples into a dict I have a list of tuples like this: [ ('a', 1), ('a', 2), ('a', 3), ('b', 1), ('b', 2), ('c', 1), ] I want to iterate through this keying by the first item, so, for example, I could print something like this: a 1 2 3 b 1 2 c 1 How would I go about doing this without keeping an item to track whether the first item is the same as I loop around the tuples? This feels rather messy (plus I have to sort the list to start with)... A: l = [ ('a', 1), ('a', 2), ('a', 3), ('b', 1), ('b', 2), ('c', 1), ] d = {} for x, y in l: d.setdefault(x, []).append(y) print d produces: {'a': [1, 2, 3], 'c': [1], 'b': [1, 2]} A: Slightly simpler... from collections import defaultdict fq = defaultdict(list) for n, v in myList: fq[n].append(v) print(fq) # defaultdict(<type 'list'>, {'a': [1, 2, 3], 'c': [1], 'b': [1, 2]}) A: A solution using groupby from itertools import groupby l = [('a',1), ('a', 2),('a', 3),('b', 1),('b', 2),('c', 1),] [(label, [v for l,v in value]) for (label, value) in groupby(l, lambda x:x[0])] Output: [('a', [1, 2, 3]), ('b', [1, 2]), ('c', [1])] groupby(l, lambda x:x[0]) gives you an iterator that contains ['a', [('a', 1), ...], c, [('c', 1)], ...] A: I would just do the basic answer = {} for key, value in list_of_tuples: if key in answer: answer[key].append(value) else: answer[key] = [value] If it's this short, why use anything complicated. Of course if you don't mind using setdefault that's okay too. A: Print list of tuples grouping by the first item This answer is based on the @gommen one. #!/usr/bin/env python from itertools import groupby from operator import itemgetter L = [ ('a', 1), ('a', 2), ('a', 3), ('b', 1), ('b', 2), ('c', 1), ] key = itemgetter(0) L.sort(key=key) #NOTE: use `L.sort()` if you'd like second items to be sorted too for k, group in groupby(L, key=key): print k, ' '.join(str(item[1]) for item in group) Output: a 1 2 3 b 1 2 c 1
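And to get from any of the dict-building answers back to the exact output asked for (d as built by the setdefault/defaultdict versions):

for key in sorted(d):
    print key, ' '.join(str(v) for v in d[key])
# a 1 2 3
# b 1 2
# c 1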
Converting a list of tuples into a dict
I have a list of tuples like this: [ ('a', 1), ('a', 2), ('a', 3), ('b', 1), ('b', 2), ('c', 1), ] I want to iterate through this keying by the first item, so, for example, I could print something like this: a 1 2 3 b 1 2 c 1 How would I go about doing this without keeping an item to track whether the first item is the same as I loop around the tuples? This feels rather messy (plus I have to sort the list to start with)...
[ "l = [\n('a', 1),\n('a', 2),\n('a', 3),\n('b', 1),\n('b', 2),\n('c', 1),\n]\n\nd = {}\nfor x, y in l:\n d.setdefault(x, []).append(y)\nprint d\n\nproduces:\n{'a': [1, 2, 3], 'c': [1], 'b': [1, 2]}\n\n", "Slightly simpler...\nfrom collections import defaultdict\n\nfq = defaultdict(list)\nfor n, v in myList:\n fq[n].append(v)\n \nprint(fq) # defaultdict(<type 'list'>, {'a': [1, 2, 3], 'c': [1], 'b': [1, 2]})\n\n", "A solution using groupby\nfrom itertools import groupby\nl = [('a',1), ('a', 2),('a', 3),('b', 1),('b', 2),('c', 1),]\n[(label, [v for l,v in value]) for (label, value) in groupby(l, lambda x:x[0])]\n\nOutput:\n[('a', [1, 2, 3]), ('b', [1, 2]), ('c', [1])]\n\ngroupby(l, lambda x:x[0]) gives you an iterator that contains\n['a', [('a', 1), ...], c, [('c', 1)], ...]\n\n", "I would just do the basic\n\nanswer = {}\nfor key, value in list_of_tuples:\n if key in answer:\n answer[key].append(value)\n else:\n answer[key] = [value]\n\nIf it's this short, why use anything complicated. Of course if you don't mind using setdefault that's okay too.\n", "Print list of tuples grouping by the first item\nThis answer is based on the @gommen one.\n#!/usr/bin/env python\n\nfrom itertools import groupby\nfrom operator import itemgetter\n\nL = [\n('a', 1),\n('a', 2),\n('a', 3),\n('b', 1),\n('b', 2),\n('c', 1),\n]\n\nkey = itemgetter(0)\nL.sort(key=key) #NOTE: use `L.sort()` if you'd like second items to be sorted too\nfor k, group in groupby(L, key=key):\n print k, ' '.join(str(item[1]) for item in group)\n\nOutput:\na 1 2 3\nb 1 2\nc 1\n\n" ]
[ 46, 39, 11, 3, 3 ]
[]
[]
[ "dictionary", "iteration", "list", "python", "tuples" ]
stackoverflow_0000261655_dictionary_iteration_list_python_tuples.txt
Q: generating javascript string in python I have strings stored in Python variables, and I am outputting HTML that contains JavaScript, and I need to create JavaScript variables. For example, in Python: title = "What's your name?" I use Cheetah to generate the HTML. Cheetah code: var title = '$title'; How do I escape this correctly so that a correct JavaScript variable is created? Actual HTML output needed: var title = 'What\'s your name?'; A: You probably want JSON: import simplejson as sj print sj.dumps('What\'s your name?') # => '"What\'s your name?"' Don't generate js with cheetah, there are libraries.
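Concretely, with simplejson (or the stdlib json module on Python 2.6+) the Cheetah side needs no quoting of its own, because dumps() emits the surrounding quotes and any escapes itself; the variable names here are illustrative:

import simplejson as json

title = "What's your name?"
js_title = json.dumps(title)  # a complete JS string literal, quotes included

# The Cheetah template then uses the pre-encoded value unquoted:
#   var title = $js_title;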
generating javascript string in python
I have strings stored in Python variables, and I am outputting HTML that contains JavaScript, and I need to create JavaScript variables. For example, in Python: title = "What's your name?" I use Cheetah to generate the HTML. Cheetah code: var title = '$title'; How do I escape this correctly so that a correct JavaScript variable is created? Actual HTML output needed: var title = 'What\'s your name?';
[ "You probably want JSON:\nimport simplejson as sj\nprint sj.dumps('What\\'s your name?') # => '\"What\\'s your name?\"'\n\nDon't generate js with cheetah, there are libraries.\n" ]
[ 4 ]
[ "Either just do title = title.replace(\"'\", \"\\\\'\") in Python before the title value gets to Cheetah, or add a custom filter to Cheetah for the purpose and invoke it in the template. Doing it on the Python side of things seems simpler, though.\n" ]
[ -2 ]
[ "cheetah", "javascript", "python" ]
stackoverflow_0001694939_cheetah_javascript_python.txt
Q: "".join(reversed(val)) vs val[::-1]...which is pythonic? So according to the Zen of Python ... Explicit is better than implicit...Sparse is better than dense...Readability counts...but then again Flat is better than nested...so then which is pythonic? val = "which is pythonic?" print("".join(reversed(val))) or print(val[::-1]) I'm just a Java programmer learning Python so I find this pythonic stuff interesting since there is no analog in the Java world AFAIK. A: My wife Anna has nicknamed x[::-1] "the Martian Smiley" -- I mostly bow to her (and her long experience in training &c, and studies in human psychology &c), when it comes to judging what's easy and natural for most people, and she absolutely loves the martial smiley. "Just walk it backwards" -- how much more direct, and high-abstraction, than the detailed specification of "reverse it and then join it back"! Also, python -mtimeit is often a good judge of what's Pythonic: top Pythonistas, over the years, have of course tended to optimize what they most often needed and used, so a very substantial performance difference tells you what "goes with the grain" of the language and its top practitioners. And by that score, the martian smiley beats the detailed spec hands-down...: $ python -mtimeit '"".join(reversed("hello there!"))' 100000 loops, best of 3: 4.06 usec per loop $ python -mtimeit '"hello there!"[::-1]' 1000000 loops, best of 3: 0.392 usec per loop order-of-magnitude performance differences just don't leave that much room for doubt!-) A: The second one (in my opinion) is more Pythonic as it is simpler, shorter, and clearer. For more info on what is Pythonic I would recommend The Zen of Python: Beautiful is better than ugly. Explicit is better than implicit. Simple is better than complex. Complex is better than complicated. Flat is better than nested. Sparse is better than dense. Readability counts. Special cases aren't special enough to break the rules. Although practicality beats purity. Errors should never pass silently. Unless explicitly silenced. In the face of ambiguity, refuse the temptation to guess. There should be one-- and preferably only one --obvious way to do it. Although that way may not be obvious at first unless you're Dutch. Now is better than never. Although never is often better than right now. If the implementation is hard to explain, it's a bad idea. If the implementation is easy to explain, it may be a good idea. Namespaces are one honking great idea -- let's do more of those! In my opinion your second example satisfies these tenets: Beautiful is better than ugly. Simple is better than complex. Sparse is better than dense. Readability counts. A: With a string, as you have, I would go with the first option, since it makes it clear that the result you want is a string. For a list or other iterable/slicable, I'd go with the second. An added benefit of the first form is it will work if val is not actually a string, but some other iterable of characters (e.g. a generator of some kind). In some cases, this makes a big difference. A: First, if you are a beginner, don't worry what is more pythonic or not. This smell language wars, and you will eventually find your own opinion anyway. Just use the way you think is more readable/simpler for you and you will find the light, I think. That said, I agree with Alex's excellent answer (as he is always right) and I would add an additional comment why you should prefer the second method - the first will only work correctly if val is a string.
"".join(reversed(val)) vs val[::-1]...which is pythonic?
So according to the Zen of Python ... Explicit is better than implicit...Sparse is better than dense...Readability counts...but then again Flat is better than nested...so then which is pythonic? val = "which is pythonic?" print("".join(reversed(val))) or print(val[::-1]) I'm just a Java programmer learning Python so I find this pythonic stuff interesting since there is no analog in the Java world AFAIK.
[ "My wife Anna has nicknamed x[::-1] \"the Martian Smiley\" -- I mostly bow to her (and her long experience in training &c, and studies in human psychology &c), when it comes to judging what's easy and natural for most people, and she absolutely loves the martial smiley. \"Just walk it backwards\" -- how much more direct, and high-abstraction, than the detailed specification of \"reverse it and then join it back\"!\nAlso, python -mtimeit is often a good judge of what's Pythonic: top Pythonistas, over the years, have of course tended to optimize what they most often needed and used, so a very substantial performance difference tells you what \"goes with the grain\" of the language and its top practitioners. And by that score, the martian smiley beats the detailed spec hands-down...:\n$ python -mtimeit '\"\".join(reversed(\"hello there!\"))'\n100000 loops, best of 3: 4.06 usec per loop\n$ python -mtimeit '\"hello there!\"[::-1]'\n1000000 loops, best of 3: 0.392 usec per loop\n\norder-of-magnitude performance differences just don't leave that much room for doubt!-)\n", "The second one (in my opinion) is more Pythonic as it is simpler, shorter, and clearer.\nFor more info on what is Pythonic I would recommend The Zen of Python:\n\nBeautiful is better than ugly.\n Explicit is better than implicit.\n Simple is better than complex.\n Complex is better than complicated.\n Flat is better than nested.\n Sparse is better than dense.\n Readability counts.\n Special cases aren't special enough to break the rules.\n Although practicality beats purity.\n Errors should never pass silently.\n Unless explicitly silenced.\n In the face of ambiguity, refuse the temptation to guess.\n There should be one-- and preferably only one --obvious way to\n do it.\n Although that way may not be obvious at first unless you're Dutch.\n Now is better than never.\n Although never is often better than right now.\n If the implementation is hard to explain, it's a bad idea.\n If the implementation is easy to explain, it may be a good idea.\n Namespaces are one honking great idea -- let's do more of those! \n\nIn my opinion your second example satisfies these tenets:\n\nBeautiful is better than ugly.\n Simple is better than complex.\n Sparse is better than dense.\n Readability counts. \n\n", "With a string, as you have, I would go with the first option, since it makes it clear that the result you want is a string. For a list or other iterable/slicable, I'd go with the second.\nAn added benefit of the first form is it will work if val is not actually a string, but some other iterable of characters (e.g. a generator of some kind). In some cases, this makes a big difference.\n", "First, if you are a beginner, don't worry what is more pythonic or not. This smell language wars, and you will eventually find your own opinion anyway. Just use the way you think is more readable/simpler for you and you will find the light, I think.\nThat said, I agree with Alex's excellent answer (as he is always right) and I would add an additional comment why you should prefer the second method - the first will only work correctly if val is a string.\n" ]
[ 45, 4, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001695385_python.txt
Q: Joomla and XMLRPC How do I get started with XML-RPC in Joomla? I've been looking around for documentation and finding nothing... I'd like to connect to a Joomla server (after enabling the Core Joomla XML-RPC plugin), and be able to do things like log in and add an article, and tweak all the parameters of the article if possible. My xml-rpc client implementation will be in python. A: the book "Mastering Joomla 1.5 Extension and Framework Development" has a nice explanation of that. Joomla has a few XML-RPC plugins that let you do a few things, like the blogger API interface. (plugins/xmlrpc/blogger.php) You should create your own XML-RPC plugin to do the custom things you want.
Joomla and XMLRPC
How do I get started with XML-RPC in Joomla? I've been looking around for documentation and finding nothing... I'd like to connect to a Joomla server (after enabling the Core Joomla XML-RPC plugin), and be able to do things like log in and add an article, and tweak all the parameters of the article if possible. My xml-rpc client implementation will be in python.
[ "the book \"Mastering Joomla 1.5 Extension and Framework Development\" has a nice explanation of that.\nJoomla has a fex XML-RPC plugins that let you do a few things, like the blogger API interface. (plugins/xmlrpc/blogger.php)\nYou should create your own XML-RPC plugin to do the custom things you want.\n" ]
[ 3 ]
[]
[]
[ "joomla", "python", "xml_rpc" ]
stackoverflow_0001694205_joomla_python_xml_rpc.txt
Q: How to create a 3-dimensional matrix in NumPy, like MATLAB's a(:,:,:) How do I create a 3-dimensional matrix in NumPy, like MATLAB's a(:,:,:)? I am trying to convert MATLAB code that creates a 3D matrix to Python using numpy.array, and I don't know how to create a 3D matrix/array in NumPy. A: a=np.empty((2,3,5)) creates a 2x3x5 array. (There is also np.zeros if you want the values initialized.) You can also reshape existing arrays: a=np.arange(30).reshape(2,3,5) np.arange(30) creates a 1-d array with values from 0..29. The reshape() method returns an array containing the same data with a new shape.
How to create a 3-dimensional matrix in NumPy, like MATLAB's a(:,:,:)
How do I create a 3-dimensional matrix in NumPy, like MATLAB's a(:,:,:)? I am trying to convert MATLAB code that creates a 3D matrix to Python using numpy.array, and I don't know how to create a 3D matrix/array in NumPy.
[ "a=np.empty((2,3,5))\n\ncreates a 2x3x5 array. (There is also np.zeros if you want the values initialized.)\nYou can also reshape existing arrays:\na=np.arange(30).reshape(2,3,5)\n\nnp.arange(30) creates a 1-d array with values from 0..29. The reshape() method returns an array containing the same data with a new shape.\n" ]
[ 69 ]
[]
[]
[ "matlab", "numpy", "python" ]
stackoverflow_0001696135_matlab_numpy_python.txt
Q: Are all HttpError in python subclasses of IOError In our code we catch IOError and log it before reraising. I am getting a "connection reset by peer", but nothing in the logs. Is "connection reset by peer" a subclass of IOError in python? ..... File "/usr/lib/python2.5/httplib.py", line 1047, in readline s = self._read() File "/usr/lib/python2.5/httplib.py", line 1003, in _read buf = self._ssl.read(self._bufsize) error: (104, 'Connection reset by peer') A: The stack trace you pasted looks like some Exception of class error with arguments (104, 'Connection reset by peer'). So it looks like it's not a HTTPError exception at all. It looks to me like it's actually a socket.error. This class is indeed a subclass of IOError since Python 2.6. But I guess that's not your question, since you are asking about HttpError exceptions. Can you rephrase your question to clarify your assumptions and expectations? Comment from usawaretech: How are you finding out it is a socket error? My code is something like: try:risky_code(); except IOError: logger.debug('...'); raise; As I am assuming that HttpError is a subclass of IOError, when I get that exception, I am assuming that it will be logged. There is nothing in my logs I guess it is a socket.error because I used the index of the standard library documentation, and because I encountered this error before. What version of Python are you using? I guess it's Python 2.5 or earlier. If your intent is to log and re-raise exceptions, it would be a better idea to use a bare except: try: risky_code() except: logger.debug(...) raise Also, you can find the module where the exception class was defined using exception.__module__.
Are all HttpError in python subclasses of IOError
In our code we catch IOError and log it before reraising. I am getting a "connection reset by peer", but nothing in the logs. Is "connection reset by peer" a subclass of IOError in python? ..... File "/usr/lib/python2.5/httplib.py", line 1047, in readline s = self._read() File "/usr/lib/python2.5/httplib.py", line 1003, in _read buf = self._ssl.read(self._bufsize) error: (104, 'Connection reset by peer')
[ "The stack trace you pasted looks like some Exception of class error with arguments (104, 'Connection reset by peer).\nSo it looks like it's not a HTTPError exception at all. It looks to me like it's actually a socket.error. This class is indeed a subclass of IOError since Python 2.6.\nBut I guess that's not your question, since you are asking about HttpError exceptions. Can you rephrase your question to clarify your assumptions and expectations?\nComment from usawaretech:\n\nHow are you finding out it is a socket\n error? MY code is something like:\n try:risky_code(); except IOError:\n logger.debug('...'); raise; As I am\n assuming that HttpError is a subclass\n of IOError, when I get that exception,\n I am assuming that it be logged. There\n is nothing in my logs\n\nI guess it is a socket.error because I used the index of the standard library documentation, and because I encountered this error before.\nWhat version of Python are you using? I guess it's Python 2.5 or earlier.\nIf your intent is to log and re-raise exceptions, it would be a better idea to use a bare except:\ntry:\n risky_code()\nexcept:\n logger.debug(...)\n raise\n\nAlso, you can find the module where the exception class was defined using exception.__module__.\n" ]
[ 2 ]
[]
[]
[ "ioerror", "python" ]
stackoverflow_0001696195_ioerror_python.txt
Q: Searching values of a list in another List using Python I'm trying to find a sublist of a list. Meaning if list1 say [1,5] is in list2 say [1,4,3,5,6] then it should return True. What I have so far is this: for nums in l1: if nums in l2: return True else: return False This would be true but I'm trying to return True only if list1 is in list2 in the respective order. So if list2 is [5,2,3,4,1], it should return False. I was thinking along the lines of comparing the index values of list1 using < but I'm not sure. A: try: last_found = -1 for num in L1: last_found = L2.index(num, last_found + 1) return True except ValueError: return False The index method of list L2 returns the position at which the first argument (num) is found in the list; called, like here, with a second arg, it starts looking in the list at that position. If index does not find what it's looking for, it raises a ValueError exception. So, this code uses this approach to look for each item num of L1, in order, inside L2. The first time it needs to start looking from position 0; each following time, it needs to start looking from the position just after the last one where it found the previous item, i.e. last_found + 1 (so at the start we must set last_found = -1 to start looking from position 0 the first time). If every item in L1 is found this way (i.e. it's found in L2 after the position where the previous item was found), then the two lists meet the given condition and the code returns True. If any item of L1 is ever not-found, the code catches the resulting ValueError exception and just returns False. A different approach would be to use iterators over the two lists, that can be formed with the iter built-in function. You can "advance" an iterator by calling built-in next on it; this will raise StopIteration if there is no "next item", i.e., the iterator is exhausted. You can also use for on the iterator for a somewhat smoother interface, where applicable. The low-level approach using the iter/next idea: i1 = iter(L1) i2 = iter(L2) while True: try: lookfor = next(i1) except StopIteration: # no more items to look for == all good! return True while True: try: maybe = next(i2) except StopIteration: # item lookfor never matched == nope! return False if maybe == lookfor: break or, a bit higher-level: i1 = iter(L1) i2 = iter(L2) for lookfor in i1: for maybe in i2: if maybe == lookfor: break else: # item lookfor never matched == nope! return False # no more items to look for == all good! return True In fact, the only crucial use of iter here is to get i2 -- having the inner loop as for maybe in i2 guarantees the inner loop won't start looking from the beginning every time, but, rather, it will keep looking where it last left off. The outer loop might as well be for lookfor in L1:, since it has no "restarting" issue. Key, here, is the else: clause of loops, which triggers if, and only if, the loop was not interrupted by break but rather exited naturally. Working further on this idea we are again reminded of the in operator, which also can be made to continue where it last left off simply by using an iterator. Big simplification: i2 = iter(L2) for lookfor in L1: if lookfor not in i2: return False # no more items to look for == all good! return True But now we recognize that is exactly the pattern abstracted by the short-circuiting any and all built-in "short-circuiting accumulator" functions, so...: i2 = iter(L2) return all(lookfor in i2 for lookfor in L1) which I believe is just about as simple as you can get.
The only non-elementary bit left here is: you need to use an iter(L2) explicitly, just once, to make sure the in operator (intrinsically an inner loop) doesn't restart the search from the beginning but rather continues each time from where it last left off. A: This question looks a bit like homework and for this reason I'd like to take the time and discuss what may be going wrong with the snippet shown in the question. Although you are using a word in its plural form, for the nums variable, you need to understand that Python will use this variable to store ONE item from l1 at a time, and go through the block of code in this "for block", one time for each different item. The result of your current snippet will therefore be to exit upon the very first iteration, with either True or False depending if by chance the first items in the list happen to match. Edit: Yes, A1, exactly as you said: the logic exits with True after the first iteration. This is because of the "return" when nums is found in l2. If you were to do nothing in the "found" case, the loop the logic would proceed with finishing whatever logic in the block (none here) and it would then start the next iteration. Therefore it would only exit with a "False" return value, in the case when an item from l1 is not found l2 (indeed after the very first such not-found item). Therefore your logic is almost correct (if it were to do nothing in the "found case"), the one thing missing would be to return "True", systematically after the for loop (since if it didn't exit with a False value within the loop, then all items of l2 are in l1...). There are two ways to modify the code so it does nothing for the "found case". - by using pass, which is a convenient way to instruct Python to do nothing; "pass" is typically used when "something", i.e. some action is syntactically required but we don't want anything done, but it can also be used when debugging etc. - by rewriting the test as a "not in" instead if nums not in l2: return False #no else:, i.e. do nothing at all if found Now... Getting into more details. There may be a flaw in your program (with the suggested changes), that is that it would consider l1 to be a sublist of l2, even if l1 had say 2 items with value say 5 whereby l2 only had one such value. I'm not sure if that kind of consideration is part of the problem (possibly the understanding is that both lists are "sets", with no possible duplicate items). If duplicates were allowed however, you would have to complicate the logic somewhat (a possible approach would be to intitially make a copy of l2 and each time "nums" is find in the l2 copy, to remove this item. Another consideration is that maybe a list can only be said to be a sublist if its items are found the same order as the items in the other list... Again it all depends on the way the problem is defined... BTW some of the solutions proposed, like Alex Martelli's are written in such fashion because they solve the problem in a way that the order of items with the lists matter. A: I think this solution is the fastest, since it iterates only once, albeit on the longer list and exits before finishing the iteration if a match is found. (Edit: However, it is not as succinct or as fast as Alex's latest solution) def ck(l1,l2): i,j = 0,len(l1) for e in l2: if e == l1[i]: i += 1 if i == j: return True return False An improvement was suggested by Anurag Uniyal (see comment) and is reflected in the showdown below. 
Here are some speed results for a range of list size ratios (List l1 is a 10-element list containing random values from 1-10. List l2 ranges from 10-1000 in length (and also contain random values from 1-10). Code that compares run times and plots the results: import random import os import pylab import timeit def paul(l1,l2): i = 0 j = len(l1) try: for e in l2: if e == l1[i]: i += 1 except IndexError: # thanks Anurag return True return False def jed(list1, list2): try: for num in list1: list2 = list2[list2.index(num):] except: return False else: return True def alex(L1,L2): # wow! i2 = iter(L2) return all(lookfor in i2 for lookfor in L1) from itertools import dropwhile from operator import ne from functools import partial def thc4k_andrea(l1, l2): it = iter(l2) try: for e in l1: dropwhile(partial(ne, e), it).next() return True except StopIteration: return False ct = 100 ss = range(10,1000,100) nms = 'paul alex jed thc4k_andrea'.split() ls = dict.fromkeys(nms) for nm in nms: ls[nm] = [] setup = 'import test_sublist as x' for s in ss: l1 = [random.randint(1,10) for i in range(10)] l2 = [random.randint(1,10) for i in range(s)] for nm in nms: stmt = 'x.'+nm+'(%s,%s)'%(str(l1),str(l2)) t = timeit.Timer(setup=setup, stmt=stmt).timeit(ct) ls[nm].append( t ) pylab.clf() for nm in nms: print len(ss), len(ls[nm]) pylab.plot(ss,ls[nm],label=nm) pylab.legend(loc=0) pylab.xlabel('length of l2') pylab.ylabel('time') pylab.savefig('cmp_lsts.png') os.startfile('cmp_lsts.png') results: A: This should be easy to understand and avoid corner case nicely as you don't need to work with indexes: def compare(l1, l2): it = iter(l2) for e in l1: try: while it.next() != e: pass except StopIteration: return False return True it tries to compare each element of l1 to the next element in l2. if there is no next element (StopIteration) it returns false (it visited the whole l2 and didn't find the current e) else it found it, so it returns true. For faster execution it may help to put the try block outside the for: def compare(l1, l2): it = iter(l2) try: for e in l1: while it.next() != e: pass except StopIteration: return False return True A: I have a feeling this is more intensive than Alex's answer, but here was my first thought: def test(list1, list2): try: for num in list1: list2 = list2[list2.index(num):] except: return False else: return True Edit: Just tried it. His is faster. It's close. Edit 2: Moved try/except out of the loop (this is why others should look at your code). Thanks, gnibbler. A: I have a hard time seeing questions like this and not wishing that Python's list handling was more like Haskell's. This seems a much more straightforward solution than anything I could come up with in Python: contains_inorder :: Eq a => [a] -> [a] -> Bool contains_inorder [] _ = True contains_inorder _ [] = False contains_inorder (x:xs) (y:ys) | x == y = contains_inorder xs ys | otherwise = contains_inorder (x:xs) ys A: The ultra-optimized version of Andrea's solution: from itertools import dropwhile from operator import ne from functools import partial def compare(l1, l2): it = iter(l2) try: for e in l1: dropwhile(partial(ne, e), it).next() return True except StopIteration: return False This can be written even more functional style: def compare(l1,l2): it = iter(l2) # any( True for .. ) because any([0]) is False, which we don't want here return all( any(True for _ in dropwhile(partial(ne, e), it)) for e in l1 )
Searching values of a list in another List using Python
I'm trying to find a sublist of a list. Meaning if list1 say [1,5] is in list2 say [1,4,3,5,6] then it should return True. What I have so far is this: for nums in l1: if nums in l2: return True else: return False This would be true but I'm trying to return True only if list1 is in list2 in the respective order. So if list2 is [5,2,3,4,1], it should return False. I was thinking along the lines of comparing the index values of list1 using < but I'm not sure.
[ "try:\n last_found = -1\n for num in L1:\n last_found = L2.index(num, last_found + 1)\n return True\nexcept ValueError:\n return False\n\nThe index method of list L2 returns the position at which the first argument (num) is found in the list; called, like here, with a second arg, it starts looking in the list at that position. If index does not find what it's looking for, it raises a ValueError exception.\nSo, this code uses this approach to look for each item num of L1, in order, inside L2. The first time it needs to start looking from position 0; each following time, it needs to start looking from the position just after the last one where it found the previous item, i.e. last_found + 1 (so at the start we must set last_found = -1 to start looking from position 0 the first time).\nIf every item in L1 is found this way (i.e. it's found in L2 after the position where the previous item was found), then the two lists meet the given condition and the code returns True. If any item of L1 is ever not-found, the code catches the resulting ValueError exception and just returns False.\nA different approach would be to use iterators over the two lists, that can be formed with the iter built-in function. You can \"advance\" an iterator by calling built-in next on it; this will raise StopIteration if there is no \"next item\", i.e., the iterator is exhausted. You can also use for on the iterator for a somewhat smoother interface, where applicable. The low-level approach using the iter/next idea:\ni1 = iter(L1)\ni2 = iter(L2)\nwhile True:\n try:\n lookfor = next(i1)\n except StopIteration:\n # no more items to look for == all good!\n return True\n while True:\n try:\n maybe = next(i2)\n except StopIteration:\n # item lookfor never matched == nope!\n return False\n if maybe == lookfor:\n break\n\nor, a bit higher-level:\ni1 = iter(L1)\ni2 = iter(L2)\nfor lookfor in i1:\n for maybe in i2:\n if maybe == lookfor:\n break\n else:\n # item lookfor never matched == nope!\n return False\n# no more items to look for == all good!\nreturn True \n\nIn fact, the only crucial use of iter here is to get i2 -- having the inner loop as for maybe in i2 guarantees the inner loop won't start looking from the beginning every time, but, rather, it will keep looking where it last left off. The outer loop might as well for for lookfor in L1:, since it has no \"restarting\" issue. \nKey, here, is the else: clause of loops, which triggers if, and only if, the loop was not interrupted by break but rather exited naturally.\nWorking further on this idea we are again reminded of the in operator, which also can be made to continue where it last left off simply by using an iterator. Big simplification:\ni2 = iter(L2)\nfor lookfor in L1:\n if lookfor not in i2:\n return False\n# no more items to look for == all good!\nreturn True \n\nBut now we recognize that is exactly the patter abstracted by the short-circuiting any and all built-in \"short-circuiting accumulator\" functions, so...:\ni2 = iter(L2)\nreturn all(lookfor in i2 for lookfor in L1)\n\nwhich I believe is just about as simple as you can get. 
The only non-elementary bit left here is: you need to use an iter(L2) explicitly, just once, to make sure the in operator (intrinsically an inner loop) doesn't restart the search from the beginning but rather continues each time from where it last left off.\n", "This question looks a bit like homework and for this reason I'd like to take the time and discuss what may be going wrong with the snippet shown in the question.\nAlthough you are using a word in its plural form, for the nums variable, you need to understand that Python will use this variable to store ONE item from l1 at a time, and go through the block of code in this \"for block\", one time for each different item.\nThe result of your current snippet will therefore be to exit upon the very first iteration, with either True or False depending if by chance the first items in the list happen to match.\nEdit: Yes, A1, exactly as you said: the logic exits with True after the first iteration. This is because of the \"return\" when nums is found in l2.\nIf you were to do nothing in the \"found\" case, the logic would proceed with finishing whatever logic in the block (none here) and it would then start the next iteration. Therefore it would only exit with a \"False\" return value, in the case when an item from l1 is not found in l2 (indeed after the very first such not-found item). Therefore your logic is almost correct (if it were to do nothing in the \"found case\"), the one thing missing would be to return \"True\", systematically after the for loop (since if it didn't exit with a False value within the loop, then all items of l1 are in l2...).\nThere are two ways to modify the code so it does nothing for the \"found case\".\n - by using pass, which is a convenient way to instruct Python to do nothing; \"pass\" is typically used when \"something\", i.e. some action is syntactically required but we don't want anything done, but it can also be used when debugging etc.\n - by rewriting the test as a \"not in\" instead\nif nums not in l2:\n    return False\n#no else:, i.e. do nothing at all if found\n\nNow... Getting into more details.\nThere may be a flaw in your program (with the suggested changes), that is that it would consider l1 to be a sublist of l2, even if l1 had say 2 items with value say 5 whereby l2 only had one such value. I'm not sure if that kind of consideration is part of the problem (possibly the understanding is that both lists are \"sets\", with no possible duplicate items). If duplicates were allowed however, you would have to complicate the logic somewhat (a possible approach would be to initially make a copy of l2 and each time \"nums\" is found in the l2 copy, to remove this item.\nAnother consideration is that maybe a list can only be said to be a sublist if its items are found in the same order as the items in the other list... Again it all depends on the way the problem is defined... BTW some of the solutions proposed, like Alex Martelli's, are written in such fashion because they solve the problem in a way where the order of items within the lists matters.\n", "I think this solution is the fastest, since it iterates only once, albeit on the longer list and exits before finishing the iteration if a match is found. 
(Edit: However, it is not as succinct or as fast as Alex's latest solution)\ndef ck(l1,l2):\n i,j = 0,len(l1)\n for e in l2:\n if e == l1[i]:\n i += 1\n if i == j:\n return True\n return False\n\nAn improvement was suggested by Anurag Uniyal (see comment) and is reflected in the showdown below.\nHere are some speed results for a range of list size ratios (List l1 is a 10-element list containing random values from 1-10. List l2 ranges from 10-1000 in length (and also contain random values from 1-10).\nCode that compares run times and plots the results:\nimport random\nimport os\nimport pylab\nimport timeit\n\ndef paul(l1,l2):\n i = 0\n j = len(l1)\n try:\n for e in l2:\n if e == l1[i]:\n i += 1\n except IndexError: # thanks Anurag\n return True\n return False\n\ndef jed(list1, list2):\n try:\n for num in list1:\n list2 = list2[list2.index(num):]\n except: return False\n else: return True\n\ndef alex(L1,L2): # wow!\n i2 = iter(L2)\n return all(lookfor in i2 for lookfor in L1)\n\nfrom itertools import dropwhile\nfrom operator import ne\nfrom functools import partial\n\ndef thc4k_andrea(l1, l2):\n it = iter(l2)\n try:\n for e in l1:\n dropwhile(partial(ne, e), it).next()\n return True\n except StopIteration:\n return False\n\n\nct = 100\nss = range(10,1000,100)\nnms = 'paul alex jed thc4k_andrea'.split()\nls = dict.fromkeys(nms)\nfor nm in nms:\n ls[nm] = []\n\nsetup = 'import test_sublist as x'\nfor s in ss:\n l1 = [random.randint(1,10) for i in range(10)]\n l2 = [random.randint(1,10) for i in range(s)]\n for nm in nms:\n stmt = 'x.'+nm+'(%s,%s)'%(str(l1),str(l2))\n t = timeit.Timer(setup=setup, stmt=stmt).timeit(ct)\n ls[nm].append( t )\n\npylab.clf()\nfor nm in nms:\n print len(ss), len(ls[nm])\n pylab.plot(ss,ls[nm],label=nm)\n pylab.legend(loc=0)\n\n pylab.xlabel('length of l2')\n pylab.ylabel('time')\n\npylab.savefig('cmp_lsts.png')\nos.startfile('cmp_lsts.png')\n\nresults:\n\n", "This should be easy to understand and avoid corner case nicely as you don't need to work with indexes:\ndef compare(l1, l2):\n it = iter(l2)\n for e in l1:\n try:\n while it.next() != e: pass\n except StopIteration: return False\n return True\n\nit tries to compare each element of l1 to the next element in l2.\nif there is no next element (StopIteration) it returns false (it visited the whole l2 and didn't find the current e) else it found it, so it returns true.\nFor faster execution it may help to put the try block outside the for:\ndef compare(l1, l2):\n it = iter(l2)\n try: \n for e in l1:\n while it.next() != e: pass\n except StopIteration: return False\n return True\n\n", "I have a feeling this is more intensive than Alex's answer, but here was my first thought:\ndef test(list1, list2):\n try:\n for num in list1:\n list2 = list2[list2.index(num):]\n except: return False\n else: return True\n\nEdit: Just tried it. His is faster. It's close.\nEdit 2: Moved try/except out of the loop (this is why others should look at your code). Thanks, gnibbler.\n", "I have a hard time seeing questions like this and not wishing that Python's list handling was more like Haskell's. 
This seems a much more straightforward solution than anything I could come up with in Python:\ncontains_inorder :: Eq a => [a] -> [a] -> Bool\ncontains_inorder [] _ = True\ncontains_inorder _ [] = False\ncontains_inorder (x:xs) (y:ys) | x == y = contains_inorder xs ys\n | otherwise = contains_inorder (x:xs) ys\n\n", "The ultra-optimized version of Andrea's solution:\nfrom itertools import dropwhile\nfrom operator import ne\nfrom functools import partial\n\ndef compare(l1, l2):\n it = iter(l2)\n try:\n for e in l1:\n dropwhile(partial(ne, e), it).next()\n return True\n except StopIteration:\n return False\n\nThis can be written even more functional style:\ndef compare(l1,l2):\n it = iter(l2)\n # any( True for .. ) because any([0]) is False, which we don't want here\n return all( any(True for _ in dropwhile(partial(ne, e), it)) for e in l1 )\n\n" ]
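For readers who just want something to paste in, a quick usage sketch of the accepted one-liner wrapped as a function (the names are arbitrary):

    def is_sublist_in_order(L1, L2):
        i2 = iter(L2)
        return all(x in i2 for x in L1)

    print(is_sublist_in_order([1, 5], [1, 4, 3, 5, 6]))  # True
    print(is_sublist_in_order([1, 5], [5, 2, 3, 4, 1]))  # False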
[ 7, 2, 2, 1, 0, 0, 0 ]
[]
[]
[ "list", "python" ]
stackoverflow_0001695452_list_python.txt
Q: How to read .bin files? I exported a .bin file from RealFlow 4 and now need to be able to read it in Python, to make an importer. How do these files work? A: Here you go: import struct class Particle: """A single particle. Attributes added in BinFile.""" pass class BinFile: """Parse and store the contents of a RealFlow .bin file.""" def __init__(self, fname): self.bindata = open(fname, "rb").read() self.off = 0 self.verify = self.peel("=i")[0] assert self.verify == 0xfabada self.name = self.string(250) (self.version, self.scale, self.fluid_type, self.simtime, self.frame_number, self.fps, self.num_particles, self.radius) = self.peel("=hfifiiif") self.pressure = self.peel("=fff") self.speed = self.peel("=fff") self.temperature = self.peel("=fff") if self.version >= 7: self.emitter_position = self.peel("=fff") self.emitter_rotation = self.peel("=fff") self.emitter_scale = self.peel("=fff") self.particles = [self.peel_particle() for i in range(self.num_particles)] def peel_particle(self): """Read one particle from the file.""" p = Particle() p.position = self.peel("=fff") p.velocity = self.peel("=fff") p.force = self.peel("=fff") if self.version >= 9: p.vorticity = self.peel("=fff") if self.version >= 3: p.normal = self.peel("=fff") if self.version >= 4: p.neighbors = self.peel("=i")[0] if self.version >= 5: p.texture = self.peel("=fff") p.infobits = self.peel("=h")[0] (p.age, p.isolation_time, p.viscosity, p.density, p.pressure, p.mass, p.temperature, p.id) = self.peel("=fffffffi") print p.id, p.neighbors, p.position return p def peel(self, fmt): """Read some struct data from `self.bindata`.""" data = struct.unpack_from(fmt, self.bindata, self.off) self.off += struct.calcsize(fmt) return data def string(self, length): s = self.bindata[self.off:self.off+length].split("\0")[0] self.off += length return s b = BinFile("Circle0100001.bin") print "Name:", b.name print "Particles:", b.num_particles print "Position of first particle", b.particles[0].position When run on your sample data, it prints: Name: Circle01 Particles: 1066 Position of first particle (-1.7062506675720215, 4.9283280372619629, -6.4365010261535645) A: You need to know how the data is coded in the file. If you have this information, you can use the struct package to convert the binary data to something that can be used in python. I hope it helps A: Thanks guys, I've found the file. Here I uploaded it, and included a particle file for you. http://www.mediafire.com/download.php?xujqjghkcim
How to read .bin files?
I exported a .bin file from RealFlow 4 and now need to be able to read it in Python, to make an importer. How do these files work?
[ "Here you go:\nimport struct\n\nclass Particle:\n \"\"\"A single particle. Attributes added in BinFile.\"\"\"\n pass\n\nclass BinFile:\n \"\"\"Parse and store the contents of a RealFlow .bin file.\"\"\"\n def __init__(self, fname):\n self.bindata = open(fname, \"rb\").read()\n self.off = 0\n\n self.verify = self.peel(\"=i\")[0]\n assert self.verify == 0xfabada\n self.name = self.string(250)\n\n (self.version, self.scale, self.fluid_type, self.simtime, self.frame_number,\n self.fps, self.num_particles, self.radius) = self.peel(\"=hfifiiif\")\n self.pressure = self.peel(\"=fff\")\n self.speed = self.peel(\"=fff\")\n self.temperature = self.peel(\"=fff\")\n if self.version >= 7:\n self.emitter_position = self.peel(\"=fff\")\n self.emitter_rotation = self.peel(\"=fff\")\n self.emitter_scale = self.peel(\"=fff\")\n\n self.particles = [self.peel_particle() for i in range(self.num_particles)]\n\n def peel_particle(self):\n \"\"\"Read one particle from the file.\"\"\"\n p = Particle()\n p.position = self.peel(\"=fff\")\n p.velocity = self.peel(\"=fff\")\n p.force = self.peel(\"=fff\")\n if self.version >= 9:\n p.vorticity = self.peel(\"=fff\")\n if self.version >= 3:\n p.normal = self.peel(\"=fff\")\n if self.version >= 4:\n p.neighbors = self.peel(\"=i\")[0]\n if self.version >= 5:\n p.texture = self.peel(\"=fff\")\n p.infobits = self.peel(\"=h\")[0]\n (p.age, p.isolation_time, p.viscosity, p.density, p.pressure, p.mass,\n p.temperature, p.id) = self.peel(\"=fffffffi\")\n print p.id, p.neighbors, p.position\n return p\n\n def peel(self, fmt):\n \"\"\"Read some struct data from `self.bindata`.\"\"\"\n data = struct.unpack_from(fmt, self.bindata, self.off)\n self.off += struct.calcsize(fmt)\n return data\n\n def string(self, length):\n s = self.bindata[self.off:self.off+length].split(\"\\0\")[0]\n self.off += length\n return s\n\nb = BinFile(\"Circle0100001.bin\")\nprint \"Name:\", b.name\nprint \"Particles:\", b.num_particles\nprint \"Position of first particle\", b.particles[0].position\n\nWhen run on your sample data, it prints:\nName: Circle01\nParticles: 1066\nPosition of first particle (-1.7062506675720215, 4.9283280372619629, -6.4365010261535645)\n\n", "You need to know how the data is coded in the file. If you have this information, you can use the struct package to convert the binary data to something that can be used in python.\nI hope it helps\n", "Thanks guys, I've found the file. Here I uploaded it, and included a particle file for you.\nhttp://www.mediafire.com/download.php?xujqjghkcim\n" ]
[ 5, 2, 0 ]
[]
[]
[ "binary", "file", "python" ]
stackoverflow_0001696165_binary_file_python.txt
Q: What is the return value of subprocess.call()? I am not sure what the return value of subprocess.call() means. Can I safely assume a zero value will always mean that the command executed successfully? Is the return value equivalent to the exit status of a shell command? For example, will the following piece of code work for virtually any command on Linux? cmd = "foo.txt > bar.txt" ret = subprocess.call(cmd, shell=True) if ret != 0: if ret < 0: print "Killed by signal", -ret else: print "Command failed with return code", ret else: print "SUCCESS!!" Please enlighten me :-) A: Yes, subprocess.call returns the "actual process return code". You can check the official documentation of subprocess.call and subprocess.Popen.returncode A: It is the return code, but keep in mind it's up to the author of the subprocess what the return code means. There is a strong culture of 0 meaning success, but there's nothing enforcing it. A: You are at the mercy of the commands that you call. Consider this: test.py #!/usr/bin/env python success=False if not success: exit() Then running your code (with cmd='test.py') will result in SUCCESS!! merely because test.py does not conform to the convention of returning a non-zero value when it is not successful.
What is the return value of subprocess.call()?
I am not sure what the return value of subprocess.call() means. Can I safely assume a zero value will always mean that the command executed successfully? Is the return value equivalent to the exit status of a shell command? For example, will the following piece of code work for virtually any command on Linux? cmd = "foo.txt > bar.txt" ret = subprocess.call(cmd, shell=True) if ret != 0: if ret < 0: print "Killed by signal", -ret else: print "Command failed with return code", ret else: print "SUCCESS!!" Please enlighten me :-)
[ "Yes, Subprocess.call returns \"actual process return code\". \nYou can check official documentation of Subprocess.call and Subprocess.Popen.returncode\n", "It is the return code, but keep in mind it's up to the author of the subprocess what the return code means. There is a strong culture of 0 meaning success, but there's nothing enforcing it. \n", "You are at the mercy of the commands that you call. \nConsider this:\ntest.py\n#!/usr/bin/env python\nsuccess=False\nif not success:\n exit()\n\nThen running your code (with cmd='test.py') will result in \nSUCCESS!!\nmerely because test.py does not conform to the convention of returning a non-zero value when it is not successful.\n" ]
[ 38, 13, 5 ]
[]
[]
[ "linux", "python" ]
stackoverflow_0001696998_linux_python.txt
Q: Django/SQL: keeping track of who read what in a forum I'm working on a not-so-big project in django that will among other things incorporate a forum system. I have most of the system in a more or less functioning state, but I'm still missing a feature to mark unread threads for the users when there are new posts. The thing is I can't really think of a way to properly store that information. My first idea was to create another model that will store a list of threads with changes in them for each user. Something with one ForeignKey(User) and one ForeignKey(Thread) and just keep adding new entries each time a thread is posted or a post is added to a thread. But then, I'm not sure how well that would scale with say several hundred threads after a while and maybe 50-200 users. So add 200 rows for each new post for the users who aren't logged on? Sounds like a lot. How do other forum systems do it anyway? And how can I implement a system to work these things out in Django? Thanks! A: You're much better off storing the "read" bit, not the "unread" bit. And you can store them not as relational data, but in a giant bit-blob. Then you don't have to modify the read data at all when new posts are added, only when a user reads posts. A: You might also simply store the last time a user was reading a particular forum. Any posts that have been updated since that date are new. You'll only be storing one additional piece of information per user as opposed to a piece of information per post per user.
Django/SQL: keeping track of who read what in a forum
I'm working on a not-so-big project in django that will among other things incorporate a forum system. I have most of the system in a more or less functioning state, but I'm still missing a feature to mark unread threads for the users when there are new posts. The thing is I can't really think of a way to properly store that information. My first idea was to create another model that will store a list of threads with changes in them for each user. Something with one ForeignKey(User) and one ForeignKey(Thread) and just keep adding new entries each time a thread is posted or a post is added to a thread. But then, I'm not sure how well that would scale with say several hundred threads after a while and maybe 50-200 users. So add 200 rows for each new post for the users who aren't logged on? Sounds like a lot. How do other forum systems do it anyway? And how can I implement a system to work these things out in Django? Thanks!
[ "You're much better off storing the \"read\" bit, not the \"unread\" bit. And you can store them not as relational data, but in a giant bit-blob. Then you don't have to modify the read data at all when new posts are added, only when a user reads posts.\n", "You might also simply store the last time a user was reading a particular forum. Any posts that have been updated since that date are new. You'll only be storing one additional piece of information per user as opposed to a piece of information per post per user.\n" ]
[ 2, 1 ]
[]
[]
[ "database", "database_design", "django", "python", "sql" ]
stackoverflow_0001697045_database_database_design_django_python_sql.txt
Q: Clean input strings without using the django Form classes Is there a recommended way of using Django to clean an input string without going through the Django form system? That is, I'm writing code that delivers form input via AJAX so I'm skipping the whole Form model django offers. But I do want to clean the input prior to submission to the database. A: Django Form models aren't just about rendering forms, they're more about processing and sanitizing form (GET/POST) input, which is what you want to do. When the POST or GET data from your AJAX request reaches your server it's essentially indistinguishable from form data. I would advocate creating a Form model that is a model of your AJAX request. Think of an example POST: POST /login.jsp HTTP/1.1 Host: www.mysite.com User-Agent: Mozilla/4.0 Content-Length: 27 Content-Type: application/x-www-form-urlencoded userid=joe&password=guessme That could have come from an AJAX request OR a form, by the time it hits your server it doesn't really matter! Sure they're called Form models because that's usually where GET or POST data comes from, but it doesn't have to be from a form :) If you create a Form model to represent your AJAX request you get all the hooks and sanitization that come with it and it's all a little more "django-esque". Update regarding your comment: I imagine you'd have multiple form classes. Obviously I don't know how your system is designed, but I'll provide what advice I can. Like you said, you'll be using this to sanitize your data so you'll want to define your Form classes based on the data you're sending. For example, if I have an AJAX request that submits a comment with Name, Email and CommentBody data that would be one Form class. If I have another AJAX request that posts a new article that sends Title, Author and ArticleBody that would be another Form class. Not all your AJAX requests will necessarily need a Form, if you have an AJAX call that votes up a comment you probably wouldn't treat that as a form, since (I'm guessing) you wouldn't need to sanitize any data.
Clean input strings without using the django Form classes
Is there a recommended way of using Django to clean an input string without going through the Django form system? That is, I'm writing code that delivers form input via AJAX so I'm skipping the whole Form model django offers. But I do want to clean the input prior to submission to the database.
[ "Django Form models aren't just about rendering forms, they're more about processing and sanitizing form (GET/POST) input, which is what you want to do. When the POST or GET data from your AJAX request reaches your server it's essentially indistinguishable from form data. I would advocate creating a Form model that is a model of your AJAX request. \nThink of an example POST:\nPOST /login.jsp HTTP/1.1\nHost: www.mysite.com\nUser-Agent: Mozilla/4.0\nContent-Length: 27\nContent-Type: application/x-www-form-urlencoded\n\nuserid=joe&password=guessme\n\nThat could have come from an AJAX request OR a form, by the time it hits your server it doesn't really matter! Sure they're called Form models because that's usually where GET or POST data comes from, but it doesn't have to be from a form :)\nIf you create a Form model to represent your AJAX request you get all the hooks and sanitization that come with it and it's all a little more \"django-esque\". \nUpdate regarding your comment:\nI imagine you'd have multiple form classes. Obviously I don't know how your system is designed, but I'll provide what advice I can.\nLike you said, you'll be using this to sanitize your data so you'll want to define your Form classes based on the data you're sending. For example, if I have an AJAX request that submits a comment with Name, Email and CommentBody data that would be one Form class. If I have another AJAX request that posts a new article that sends Title, Author and ArticleBody that would be another Form class.\nNot all your AJAX requests will necessarily need a Form, if you have an AJAX call that votes up a comment you probably wouldn't treat that as a form, since (I'm guessing) you wouldn't need to sanitize any data.\n" ]
[ 16 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001697508_django_python.txt
Q: How to open a pdb file in Python? I've got an e-book (viewable with iSilo) in a pdb file. Is it possible to read its contents with Python (perl, ruby, php)? A: PDB was designed for Palm Pilots in the very beginning. iSilo started kind of as a universal reader on PDAs, so they've also implemented the PDB format. For python, you can use the Python PalmDB library.
How to open a pdb file in Python?
I've got an e-book (viewable with iSilo) in a pdb file. Is it possible to read its contents with Python (perl, ruby, php)?
[ "PDB was designed for Palm pilots in the very beginning. isilo started kind of as a universal reader on PDAs so they've also implemented the PDB protocol.\nFor python, you can use the Python PalmDB library. \n" ]
[ 4 ]
[]
[]
[ "palmdb", "pdb_palm", "python" ]
stackoverflow_0001697739_palmdb_pdb_palm_python.txt
Q: How to parse a custom string using optparse? How to parse a custom string using optparse, instead of command-line arguments? I want to parse a string that I get from using raw_input(). How can I use optparse for that? A: optparse expects a list of values that have been broken up shell-style (which is what argv[1:] is). To accomplish the same starting with a string, try this: parser = optparse.OptionParser() # Set up your OptionParser inp = raw_input("Enter some crap: ") try: (options, args) = parser.parse_args(shlex.split(inp)) except: # Error handling. The optional argument to parse_args is where you substitute in your converted string. Be advised that shlex.split can raise an exception, as can parse_args. When you're dealing with input from the user, it's wise to expect both cases. A: Use the shlex module to split the input first. >>> import shlex >>> shlex.split(raw_input()) this is "a test" of shlex ['this', 'is', 'a test', 'of', 'shlex']
How to parse a custom string using optparse?
How to parse a custom string using optparse, instead of command-line arguments? I want to parse a string that I get from using raw_input(). How can I use optparse for that?
[ "optparse expects a list of values that have been broken up shell-style (which is what argv[1:] is). To accomplish the same starting with a string, try this:\nparser = optparse.OptionParser()\n# Set up your OptionParser\n\ninp = raw_input(\"Enter some crap: \")\n\ntry: (options, args) = parser.parse_args(shlex.split(inp))\nexcept:\n # Error handling.\n\nThe optional argument to parse_args is where you substitute in your converted string.\nBe advised that shlex.split can exception, as can parse_args. When you're dealing with input from the user, it's wise to expect both cases.\n", "Use the shlex module to split the input first.\n>>> import shlex\n>>> shlex.split(raw_input())\nthis is \"a test\" of shlex\n['this', 'is', 'a test', 'of', 'shlex']\n\n" ]
[ 9, 4 ]
[]
[]
[ "optparse", "python" ]
stackoverflow_0001697761_optparse_python.txt
Q: Why does str.lstrip strip an extra character? >>> path = "/Volumes/Users" >>> path.lstrip('/Volume') 's/Users' >>> path.lstrip('/Volumes') 'Users' >>> I expected the output of path.lstrip('/Volumes') to be '/Users' A: lstrip is character-based, it removes all characters from the left end that are in that string. To verify this, try this: "/Volumes/Users".lstrip("semuloV/") # also returns "Users" Since / is part of the string, it is removed. You need to use slicing instead: if s.startswith("/Volumes"): s = s[8:] Or, on Python 3.9+ you can use removeprefix: s = s.removeprefix("/Volumes") A: Strip is character-based. If you are trying to do path manipulation you should have a look at os.path >>> os.path.split("/Volumes/Users") ('/Volumes', 'Users') A: The argument passed to lstrip is taken as a set of characters! >>> ' spacious '.lstrip() 'spacious ' >>> 'www.example.com'.lstrip('cmowz.') 'example.com' See also the documentation You might want to use str.replace() str.replace(old, new[, count]) # e.g. '/Volumes/Home'.replace('/Volumes', '' ,1) Return a copy of the string with all occurrences of substring old replaced by new. If the optional argument count is given, only the first count occurrences are replaced. For paths, you may want to use os.path.split(). It returns a tuple of the path's elements. >>> os.path.split('/home/user') ('/home', '/user') To your problem: >>> path = "/vol/volume" >>> path.lstrip('/vol') 'ume' The example above shows how lstrip() works. It removes '/vol' starting from the left. Then it starts again... So, in your example, it fully removed '/Volumes' and started removing '/'. It only removed the '/' as there was no 'V' following this slash. HTH A: lstrip doc says: Return a copy of the string S with leading whitespace removed. If chars is given and not None, remove characters in chars instead. If chars is unicode, S will be converted to unicode before stripping So you are removing every character that is contained in the given string, including both 's' and '/' characters. A: Here is a primitive version of lstrip (that I wrote) that might help clear things up for you: def lstrip(s, chars): for i in range(len(s)): if s[i] not in chars: return s[i:] return '' Thus, you can see that every character in chars is removed until a character that is not in chars is encountered. Once that happens, the deletion stops and the rest of the string is simply returned.
Why does str.lstrip strip an extra character?
>>> path = "/Volumes/Users" >>> path.lstrip('/Volume') 's/Users' >>> path.lstrip('/Volumes') 'Users' >>> I expected the output of path.lstrip('/Volumes') to be '/Users'
[ "lstrip is character-based, it removes all characters from the left end that are in that string.\nTo verify this, try this:\n\"/Volumes/Users\".lstrip(\"semuloV/\") # also returns \"Users\"\n\nSince / is part of the string, it is removed.\nYou need to use slicing instead:\nif s.startswith(\"/Volumes\"):\n s = s[8:]\n\nOr, on Python 3.9+ you can use removeprefix:\ns = s.removeprefix(\"/Volumes\")\n\n", "Strip is character-based. If you are trying to do path manipulation you should have a look at os.path\n>>> os.path.split(\"/Volumes/Users\")\n('/Volumes', 'Users')\n\n", "The argument passed to lstrip is taken as a set of characters!\n>>> ' spacious '.lstrip()\n'spacious '\n>>> 'www.example.com'.lstrip('cmowz.')\n'example.com'\n\nSee also the documentation\nYou might want to use str.replace()\nstr.replace(old, new[, count])\n# e.g.\n'/Volumes/Home'.replace('/Volumes', '' ,1)\n\nReturn a copy of the string with all occurrences of substring old replaced by new. If the optional argument count is given, only the first count occurrences are replaced.\nFor paths, you may want to use os.path.split(). It returns a list of the paths elements.\n>>> os.path.split('/home/user')\n('/home', '/user')\n\nTo your problem:\n>>> path = \"/vol/volume\"\n>>> path.lstrip('/vol')\n'ume'\n\nThe example above shows, how lstrip() works. It removes '/vol' starting form left. Then, is starts again...\nSo, in your example, it fully removed '/Volumes' and started removing '/'. It only removed the '/' as there was no 'V' following this slash.\nHTH\n", "lstrip doc says:\nReturn a copy of the string S with leading whitespace removed.\nIf chars is given and not None, remove characters in chars instead.\nIf chars is unicode, S will be converted to unicode before stripping\n\nSo you are removing every character that is contained in the given string, including both 's' and '/' characters.\n", "Here is a primitive version of lstrip (that I wrote) that might help clear things up for you:\ndef lstrip(s, chars):\n for i in range len(s):\n char = s[i]\n if not char in chars:\n return s[i:]\n else:\n return lstrip(s[i:], chars)\n\nThus, you can see that every occurrence of a character in chars is is removed until a character that is not in chars is encountered. Once that happens, the deletion stops and the rest of the string is simply returned\n" ]
[ 34, 17, 16, 1, 0 ]
[]
[]
[ "python", "string" ]
stackoverflow_0001687171_python_string.txt
Q: Matching tags in BeautifulSoup I'm trying to count the number of tags in the 'soup' from a beautifulsoup result. I'd like to use a regular expression but am having trouble. The code I've tried is as follows: reg_exp_tag = re.compile("<[^>*>") tags = re.findall(reg_exp_tag, soup(cast as a string)) but re will not allow reg_exp_tag, giving an unexpected end of regular expression error. Any help would be much appreciated! Thanks A: If you've already parsed the HTML with BeautifulSoup, why parse it again? Try this: num_tags = len(soup.findAll()) A: Shouldn't that be "<[^>]*>" instead of "<[^>*>"? (the class needs to be closed with a ])
Matching tags in BeautifulSoup
I'm trying to count the number of tags in the 'soup' from a beautifulsoup result. I'd like to use a regular expression but am having trouble. The code I've tried is as follows: reg_exp_tag = re.compile("<[^>*>") tags = re.findall(reg_exp_tag, soup(cast as a string)) but re will not allow reg_exp_tag, giving an unexpected end of regular expression error. Any help would be much appreciated! Thanks
[ "If you've already parsed the HTML with BeautifulSoup, why parse it again? Try this:\nnum_tags = len(soup.findAll())\n\n", "Shouldn't that be \"<[^>]*>\" instead of \"<[^>*>\"?\n(the class needs to be closed with a ])\n" ]
[ 4, 1 ]
[]
[]
[ "beautifulsoup", "python", "regex" ]
stackoverflow_0001697774_beautifulsoup_python_regex.txt
Q: How to create Django FormWizard for one Model? I have a Django Model with many fields which the user must fill. If I create one ModelForm for this Model, it will be too big for a single form. I want to split it using FormWizard. I think it's possible first to create forms dynamically and then create a FormWizard using them. Is this a good approach, or is there any better way? A: To me it seems fine. The approach for creating partial forms is written in the docs. In short: class PartialAuthorForm(ModelForm): class Meta: model = Author fields = ('name', 'title') class PartialAuthorForm(ModelForm): class Meta: model = Author exclude = ('birth_date',) A dynamic way of doing this would be: def gimme_my_form(field_tuple): class MyForm(ModelForm): class Meta: model = MyModel fields = field_tuple return MyForm You can also parametrize the model in the same way, if you need to.
How to create Django FormWizard for one Model?
I have a Django Model with many fields which the user must fill. If I create one ModelForm for this Model, it will be too big for a single form. I want to split it using FormWizard. I think it's possible first to create forms dynamically and then create a FormWizard using them. Is this a good approach, or is there any better way?
[ "To me it seems fine.\nThe approach for creating partial forms is written in the docs.\nIn short:\nclass PartialAuthorForm(ModelForm):\n class Meta:\n model = Author\n fields = ('name', 'title')\n\nclass PartialAuthorForm(ModelForm):\n class Meta:\n model = Author\n exclude = ('birth_date',)\n\nDynamic way of doing this would be:\ndef gimme_my_form(field_tuple):\n class MyForm(ModelForm):\n class Meta:\n model = MyModel\n fields = field_tuple\n return MyForm\n\nEventually you can also parametrize the model this way.\n" ]
[ 2 ]
[]
[]
[ "django", "django_forms", "formwizard", "python" ]
stackoverflow_0001697866_django_django_forms_formwizard_python.txt
Q: Which database should I use to store records, and how should I use it? I'm developing an application that will store a sizeable number of records. These records will be something like (URL, date, title, source, {optional data...}) As this is a client-side app, I don't want to use a database server, I just want the info stored into files. I want the files to be readable from various languages (at least python and C++), so something language specific like python's pickle is out of the game. I am seeing two possibilities: sqlite and BerkeleyDB. As my use case is clearly not relational, I am tempted to go with BerkeleyDB, however I don't really know how I should use it to store my records, as it only stores key/value pairs. Is my reasoning correct? If so, how should I use BDB to store my records? Can you link me to relevant info? Or am I missing a better solution? A: I am seeing two possibilities: sqlite and BerkeleyDB. As my use case is clearly not relational, I am tempted to go with BerkeleyDB, however I don't really know how I should use it to store my records, as it only stores key/value pairs. What you are describing is exactly what relational is about, even if you only need one table. SQLite will probably make this very easy to do. EDIT: The relational model doesn't have anything to do with relationships between tables. A relation is a subset of the Cartesian product of other sets. For instance, the Cartesian product of the real numbers, real numbers, and real numbers (yes, all three the same) produces 3d coordinate space, and you could define a relation upon that space with a formula, say x*y = z. Each possible set of coordinates (x0,y0,z0) is either in the relation if it satisfies the given formula, or else it is not. A relational database uses this concept with a few additional requirements. First, and most important, the size of the relation must be finite. The product relation given above doesn't satisfy that requirement, because there are infinitely many 3-tuples that satisfy the formula. There are a number of other considerations that have more to do with what is practical or useful on real computers solving real problems. A better way of thinking about the problem is to think about where each type of persistence mechanism specifically works better than the other. You already recognize that a relational solution makes sense when you have many separate datasets (tables) that must support relationships between them (foreign key constraints), which is almost impossible to enforce with a key-value store. Another real advantage to relational is the way it makes rich, ad-hoc queries possible with the use of proper indexes. This is a consequence of the database layer actually understanding the data that it is representing. A key-value store has its own set of advantages. One of the more important ones is the way that key-value stores scale out. It is no coincidence that memcached, couchdb, hadoop all use key-value storage, because it is easy to distribute key-value lookup across multiple servers. Another area where key-value storage works well is when the key or value is opaque, such as when the stored item is encrypted, only to be readable by its owner. 
To drive this point home, that a Relational database works well even when you just don't need more than one table, consider the following (not original) SELECT t1.actor1 FROM workswith AS t1, workswith AS t2, workswith AS t3, workswith AS t4, workswith AS t5, workswith AS t6 WHERE t1.actor2 = t2.actor1 AND t2.actor2 = t3.actor1 AND t3.actor2 = t4.actor1 AND t4.actor2 = t5.actor1 AND t5.actor2 = t6.actor1 AND t6.actor2 = "Kevin Bacon"; Which, obviously, uses a single table (workswith) to compute every actor with a Bacon number of 6. A: BerkeleyDB is good, also look at the *DBM incarnations (e.g. GDBM). The big question though is: for what do you need to search? Do you need to search by that URL, by a range of URLs or the dates you list? It is also quite possible to keep groups of records as simple files in the local filesystem, grouped by dates or search terms, &c. Answering the "search" question is the best place to start. As for the key/value thingy, what you need to ensure is that the KEY itself is well defined for your lookups. If, for example, you need to look up by dates sometimes and by title at other times, you will need to maintain a "record" row, and then possibly 2 or more "index" rows making reference to the original record. You can model nearly anything in a key/value store. A: Personally I would use sqlite anyway. It has always just worked for me (and for others I work with). When your app grows and you suddenly do want to do something a little more sophisticated, you won't have to rewrite. On the other hand, I've seen various comments on the Python dev list about Berkeley DB that suggest it's less than wonderful; you only get dict-style access (what if you want to select certain date ranges or titles instead of URLs); and it's not even in Python 3's standard set of libraries. A: What about MongoDB? I haven't tried it yet, but it seems interesting. A: If you're only going to use a single field to look up records, a simple key-value store would be a good choice. Store that single field (or any other unique ID) as your key, serialize each record as a string (using JSON or similar), and store that string as the value. Berkeley DB is certainly a reasonable choice for a key-value store, but there are many alternatives to choose from: http://en.wikipedia.org/wiki/Dbm If you want to look up records by any of several fields, SQLite might be easiest for development purposes. You'll be writing queries in SQL but you won't have to maintain a database server. All the multi-key machinery is already written for you. If you really want to avoid SQL or squeeze every bit of performance out of your data store, and you want multi-key access, consider a layer of extra logic on top of a key-value store. It is possible to build column-like behavior on top of key-value stores by serializing your records and inserting the "column" values of each record as additional keys whose values contain the "primary" key of your record. (You're effectively using the key-value store as both a dictionary of records and a dictionary of indexes for finding those records.) Google's App Engine does something like this. You can do this yourself or use one of various document-oriented databases that will do it for you. For some interesting reading, try googling "nosql". http://www.google.com/search?&q=nosql A: OK, so you say just storing the data? You really only need a DB for retrieval, lookup, summarising, etc. So, for storing, just use simple text files and append lines. 
Compress the data if you need to, and use delimiters between fields - just about any language will be able to read such files. If you do want to retrieve, then focus on your retrieval needs: by date, by key, which keys, etc. If you want a simple client side, then you need a simple client db. SQLite is far easier than BDB, but look at things like Sybase Advantage (very fast and free for local clients but not open-source) or VistaDB or Firebird... but all will require local config/setup/maintenance. If you go with local XML, a 'sizable' number of records will give you some unnecessarily bloated file sizes!
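As a minimal, hedged sketch of the single-table SQLite approach several answers recommend (the file name and column layout are assumptions matching the record shape in the question):

import sqlite3

conn = sqlite3.connect('records.db')   # a plain local file, no server required
conn.execute("CREATE TABLE IF NOT EXISTS records "
             "(url TEXT PRIMARY KEY, date TEXT, title TEXT, source TEXT, extra TEXT)")
conn.execute("CREATE INDEX IF NOT EXISTS idx_date ON records (date)")
conn.execute("INSERT OR REPLACE INTO records VALUES (?, ?, ?, ?, ?)",
             ("http://example.com/", "2009-11-08", "A title", "feed", None))
conn.commit()
# Rich, ad-hoc lookups come for free once the column is indexed:
for row in conn.execute("SELECT url, title FROM records WHERE date >= ?", ("2009-01-01",)):
    print row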
Which database should I use to store records, and how should I use it?
I'm developing an application that will store a sizeable number of records. These records will be something like (URL, date, title, source, {optional data...}) As this is a client-side app, I don't want to use a database server, I just want the info stored into files. I want the files to be readable from various languages (at least python and C++), so something language specific like python's pickle is out of the game. I am seeing two possibilities: sqlite and BerkeleyDB. As my use case is clearly not relational, I am tempted to go with BerkeleyDB, however I don't really know how I should use it to store my records, as it only stores key/value pairs. Is my reasoning correct? If so, how should I use BDB to store my records? Can you link me to relevant info? Or am I missing a better solution?
[ "\nI am seeing two possibilities: sqlite\n and BerkeleyDB. As my use case is\n clearly not relational, I am tempted\n to go with BerkeleyDB, however I don't\n really know how I should use it to\n store my records, as it only stores\n key/value pairs.\n\nWhat you are describing is exactly what relational is about, even if you only need one table. SQLite will probably make this very easy to do.\nEDIT: The relational model doesn't have anything to do with relationships between tables. A relation is a subset of the Cartesian product of other sets. For instance, the cartesian product of the Real numbers, Real Numbers, and Real numbers (Yes, all three the same) produce 3d coordinate space, and you could define a relation upon that space with a formula, say x*y = z. each possible set of coordinates (x0,y0,z0) are either in the relation if they satisfy the given formula, or else they are not. \nA relational database uses this concept with a few additional requirements. First, and most important, the size of the relation must be finite. The product relation given above doesn't satisfy that requirement, because there are infinitely many 3-tuples that satisfy the formula. There are a number of other considerations that have more to do with what is practical or useful on real computers solving real problems.\nA better way of thinking about the problem is to think about where each type of persistence mechanism specifically works better than the other. You already recognize that a relational solution makes sense when you have many separate datasets (tables) that must support relationships between them (foreign key constraints), which is almost impossible to enforce with a key-value store. Another real advantage to relational is the way it makes rich, ad-hoc queries possible with the use of proper indexes. This is a consequence of the database layer actually understanding the data that it is representing. \nA key-value store has it's own set of advantages. One of the more important is the way that key-value stores scale out. It is no consequence that memcached, couchdb, hadoop all use key-value storage, because it is easy to distribute key-value lookup across multiple servers. Another area that key-value storage works well is when the key or value is opaque, such as when the stored item is encrypted, only to be readable by it's owner.\n\nTo drive this point home, that a Relational database works well even when you just don't need more than one table, consider the following (not original)\nSELECT t1.actor1 \nFROM workswith AS t1, \n workswith AS t2, \n workswith AS t3, \n workswith AS t4, \n workswith AS t5,\n workswith AS t6\nWHERE t1.actor2 = t2.actor1 AND\n t2.actor2 = t3.actor1 AND\n t3.actor2 = t4.actor1 AND\n t4.actor2 = t5.actor1 AND\n t5.actor2 = t6.actor1 AND\n t6.actor2 = \"Kevin Bacon\";\n\nWhich, obviously uses a single table: workswith to compute every actor with a bacon number of 6\n", "BerkeleyDB is good, also look at the *DBM incarnations (e.g. GDBM). The big question though is: for what do you need to search? Do you need to search by that URL, by a range of URLs or the dates you list? \nIt is also quite possible to keep groups of records as simple files in the local filesystem, grouped by dates or search terms, &c.\nAnswering the \"search\" question is the biggest start.\nAs for the key/value thingy, what you need to ensure is that the KEY itself is well defined as for your lookups. 
If for example you need to lookup by dates sometimes and others by title, you will need to maintain a \"record\" row, and then possibly 2 or more \"index\" rows making reference to the original record. You can model nearly anything in a key/value store.\n", "Personally I would use sqlite anyway. It has always just worked for me (and for others I work with). When your app grows and you suddenly do want to do something a little more sophisticated, you won't have to rewrite.\nOn the other hand, I've seen various comments on the Python dev list about Berkely DB that suggest it's less than wonderful; you only get dict-style access (what if you want to select certain date ranges or titles instead of URLs); and it's not even in Python 3's standard set of libraries.\n", "What about MongoDB? I haven't tried it yet, but it seems interesting.\n", "If you're only going to use a single field to look up records, a simple key-value store would be a good choice. Store that single field (or any other unique ID) as your key, serialize each record as a string (using JSON or similar), and store that string as the value. Berkeley DB is certainly a reasonable choice for a key-value store, but there are many alternatives to choose from:\nhttp://en.wikipedia.org/wiki/Dbm\nIf you want to look up records by any of several fields, SQLite might be easiest for development purposes. You'll be writing queries in SQL but you won't have to maintain a database server. All the multi-key machinery is already written for you.\nIf you really want to avoid SQL or squeeze every bit of performance out of your data store, and you want multi-key access, consider a layer of extra logic on top of a key-value store. It is possible to build column-like behavior on top of key-value stores by serializing your records and inserting the \"column\" values of each record as additional keys whose values contain the \"primary\" key of your record. (You're effectively using the key-value store as both a dictionary of records and a dictionary of indexes for finding those records.) Google's App Engine does something like this. You can do this yourself or use one of various document-oriented databases that will do it for you. For some interesting reading, try googling \"nosql\".\nhttp://www.google.com/search?&q=nosql\n", "Ok, so you say just storing the data..? You really only need a DB for retrieval, lookup, summarising, etc. So, for storing, just use simple text files and append lines. Compress the data if you need to, use delims between fields - just about any language will be able to read such files. If you do want to retrieve, then focus on your retrieval needs, by date, by key, which keys, etc. If you want simple client side, then you need simple client db. SQLite is far easier than BDB, but look at things like Sybase Advantage (very fast and free for local clients but not open-source) or VistaDB or firebird... but all will require local config/setup/maintenance. If you go local XML for a 'sizable' number of records will give you some unnecessarily bloated file-sizes..!\n" ]
[ 4, 2, 2, 1, 1, 0 ]
[]
[]
[ "c++", "database", "persistence", "python" ]
stackoverflow_0001697153_c++_database_persistence_python.txt
Q: Selective merge of two or more data files I have an executable whose input is contained in an ASCII file with format: $ GENERAL INPUTS $ PARAM1 = 123.456 PARAM2=456,789,101112 PARAM3(1)=123,456,789 PARAM4 = 1234,5678,91011E2 PARAM5(1,2)='STRING','STRING2' $ NEW INSTANCE NEW(1)=.TRUE. PAR1=123 [More data here] $ NEW INSTANCE NEW(2)=.TRUE. [etcetera] In other words, some general inputs, and some parameter values for a number of new instances. The declaration of parameters is irregular; some numbers are separated by commas, others are in scientific notation, others are inside quotes, the spacing is not constant, etc. The evaluation of some scenarios requires that I take the input of one "master" data file and copy the parameter data of, say, instances 2 through 6 to another data file which may already contain data for said instances (in which case data should be overwritten) and possibly others (data which should be left unchanged). I have written a Flex lexer and a Bison parser; together they can eat a data file and store the parameters in memory. If I use them to open both files (master and "scenario"), it should not be too hard to selectively write to a third, new file the desired parameters (as in "general input from 'scenario'; instances 1 through 5 from 'master'; instances 6 through 9 from 'scenario'; ..."), save it, and delete the original scenario file. Other information: (1) the files are highly sensitive, it is very important that the user is completely shielded from altering the master file; (2) the files are of manageable size (from 500K to 10M). I have learned that what I can do in ten lines of code, some fellow here can do in two. How would you approach this problem? A Pythonic answer would make me cry. Seriously. A: If you're already able to parse this format (I'd have tried it with pyParsing, but if you already have a working flex/bison solution, that will be just fine), and the parsed data fit well in memory, then you're basically there. You can represent what you read from each file as a simple object with a dict for "general input" and a list of dicts, one per instance (or probably better a dict of instances, with the keys being the instance-numbers, which may give you a bit more flexibility). Then, as you mentioned, you just selectively "update" (add or overwrite) some of the instances copied from the master into the scenario, write the new scenario file, replace the old one with it. To use the flex/bison code with Python you have several options -- make it into a DLL/so and access it with ctypes, or call it from a cython-coded extension, a SWIG wrapper, a Python C-API extension, or SIP, Boost, etc etc. Suppose that, one way or another, you have a parser primitive that (e.g.) accepts an input filename, reads and parses that file, and returns a list of 2-string tuples, each of which is either of the following: (paramname, paramvalue) ('$$$$', 'General Inputs') ('$$$$', 'New Instance') just using '$$$$' as a kind of arbitrary marker. Then for the object representing all that you've read from a file you might have: import re instidre = re.compile(r'NEW\((\d+)\)') class Afile(object): def __init__(self, filename): self.filename = filename self.geninput = dict() self.instances = dict() def feed_data(self, listoftuples): it = iter(listoftuples) assert next(it) == ('$$$$', 'General Inputs') for name, value in it: if name == '$$$$': break self.geninput[name] = value else: # no instances at all! 
return currinst = dict() for name, value in it: if name == '$$$$': self.finish_inst(currinst) currinst = dict() continue mo = instidre.match(name) if mo: assert value == '.TRUE.' name = '$$$INSTID$$$' value = mo.group(1) currinst[name] = value self.finish_inst(currinst) def finish_inst(self, adict): instid = adict.pop('$$$INSTID$$$') assert instid not in self.instances self.instances[instid] = adict Sanity checking might be improved a bit, diagnosing anomalies more precisely, but net of error cases I think this is roughly what you want. The merging just requires doing foo.instances[instid] = bar.instances[instid] for the required values of instid, where foo is the Afile instance for the scenario file and bar is the one for the master file -- that will overwrite or add as required. I'm assuming that to write out the newly changed scenario file you don't need to repeat all the formatting quirks the specific inputs might have (if you do, then those quirks will need to be recorded during parsing together with names and values), so simply looping on sorted(foo.instances) and writing each out also in sorted order (after writing the general stuff also in sorted order, and with appropriate $ this and that marker lines, and with proper translation of the '$$$INSTID$$$' entry, etc) should suffice.
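A hedged sketch of the merge step described above, reusing the Afile class; parse_file() is a hypothetical stand-in for the flex/bison parser primitive that returns the list of 2-string tuples:

master = Afile('master.dat')
master.feed_data(parse_file('master.dat'))
scenario = Afile('scenario.dat')
scenario.feed_data(parse_file('scenario.dat'))
for instid in ('2', '3', '4', '5', '6'):   # instances to copy from the master
    if instid in master.instances:
        scenario.instances[instid] = master.instances[instid]   # overwrite or add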
Selective merge of two or more data files
I have an executable whose input is contained in an ASCII file with format: $ GENERAL INPUTS $ PARAM1 = 123.456 PARAM2=456,789,101112 PARAM3(1)=123,456,789 PARAM4 = 1234,5678,91011E2 PARAM5(1,2)='STRING','STRING2' $ NEW INSTANCE NEW(1)=.TRUE. PAR1=123 [More data here] $ NEW INSTANCE NEW(2)=.TRUE. [etcetera] In other words, some general inputs, and some parameter values for a number of new instances. The declaration of parameters is irregular; some numbers are separated by commas, others are in scientific notation, others are inside quotes, the spacing is not constant, etc. The evaluation of some scenarios requires that I take the input of one "master" data file and copy the parameter data of, say, instances 2 through 6 to another data file which may already contain data for said instances (in which case data should be overwritten) and possibly others (data which should be left unchanged). I have written a Flex lexer and a Bison parser; together they can eat a data file and store the parameters in memory. If I use them to open both files (master and "scenario"), it should not be too hard to selectively write to a third, new file the desired parameters (as in "general input from 'scenario'; instances 1 through 5 from 'master'; instances 6 through 9 from 'scenario'; ..."), save it, and delete the original scenario file. Other information: (1) the files are highly sensitive, it is very important that the user is completely shielded from altering the master file; (2) the files are of manageable size (from 500K to 10M). I have learned that what I can do in ten lines of code, some fellow here can do in two. How would you approach this problem? A Pythonic answer would make me cry. Seriously.
[ "If you're already able to parse this format (I'd have tried it with pyParsing, but if you already have a working flexx/bison solution, that will be just fine), and the parsed data fit well in memory, then you're basically there. You can represent what you read from each file as a simple object with a dict for \"general input\" and a list of dicts, one per instance (or probably better a dict of instances, with the keys being the instance-numbers, which may give you a bit more flexibility). Then, as you mentioned, you just selectively \"update\" (add or overwrite) some of the instances copied from the master into the scenario, write the new scenario file, replace the old one with it.\nTo use the flexx/bison code with Python you have several options -- make it into a DLL/so and access it with ctypes, or call it from a cython-coded extension, a SWIG wrapper, a Python C-API extension, or SIP, Boost, etc etc.\nSuppose that, one way or another, you have a parser primitive that (e.g.) accepts an input filename, reads and parses that file, and returns a list of 2-string tuples, each of which is either of the following:\n\n(paramname, paramvalue)\n('$$$$', 'General Inputs')\n('$$$$', 'New Instance')\n\njust using '$$$$' as a kind of arbitrary marker. Then for the object representing all that you've read from a file you might have:\nimport re\n\ninstidre = re.compile(r'NEW\\((\\d+)\\)')\n\nclass Afile(object):\n\n def __init__(self, filename):\n self.filename = filename\n self.geninput = dict()\n self.instances = dict()\n\n def feed_data(self, listoftuples):\n it = iter(listoftuples)\n assert next(it) == ('$$$$', 'General Inputs')\n for name, value in it:\n if name == '$$$$': break\n self.geninput[name] = value\n else: # no instances at all!\n return\n currinst = dict()\n for name, value in it:\n if name == '$$$$':\n self.finish_inst(currinst)\n currinst = dict()\n continue\n mo = instidre.match(name)\n if mo:\n assert value == '.TRUE.'\n name = '$$$INSTID$$$'\n value = mo.group(1)\n currinst[name] = value\n self.finish_inst(currinst)\n\n def finish_inst(self, adict):\n instid = dict.pop('$$$INSTID$$$')\n assert instid not in self.instances\n self.instances[instid] = adict\n\nSanity checking might be improved a bit, diagnosing anomalies more precisely, but net of error cases I think this is roughly what you want.\nThe merging just requires doing foo.instances[instid] = bar.instances[instid] for the required values of instid, where foo is the Afile instance for the scenario file and bar is the one for the master file -- that will overwrite or add as required.\nI'm assuming that to write out the newly changed scenario file you don't need to repeat all the formatting quirks the specific inputs might have (if you do, then those quirks will need to be recorded during parsing together with names and values), so simply looping on sorted(foo.instances) and writing each out also in sorted order (after writing the general stuff also in sorted order, and with appropriate $ this and that marker lines, and with proper translation of the '$$$INSTID$$$' entry, etc) should suffice.\n" ]
[ 1 ]
[]
[]
[ "bison", "flex_lexer", "parsing", "python", "text_files" ]
stackoverflow_0001698188_bison_flex_lexer_parsing_python_text_files.txt
Q: Scipy loadmat only load integers? It seems I can only load everything as uint8 type, just with the following two lines, import scipy.io X1=scipy.io.loadmat('one.mat') all double precision numbers get transformed. I believe the creators of scipy are aware of the fact that floating-point numbers are much more common... So, what should I do? Thank you! A: What level matfile are you trying to read? According to the docs, v4 (Level 1.0), v6 and v7 to 7.2 matfiles are supported. You will need an HDF5 python library to read matlab 7.3 format mat files. Because scipy does not supply one, we do not implement the HDF5 / 7.3 interface here. For the supported levels, variables should be reloaded with the dtype with which they were saved; if you'd rather load them as matlab would, add mat_dtype=True at the end of the parameters with which you call loadmat.
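For example, a minimal sketch of the mat_dtype suggestion (assuming one.mat is a supported pre-7.3 mat-file):

import scipy.io
X1 = scipy.io.loadmat('one.mat', mat_dtype=True)   # floats come back as double, as in MATLAB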
Scipy loadmat only load integers?
It seems I can only load everything as uint8 type, just with the following two lines, import scipy.io X1=scipy.io.loadmat('one.mat') all double precision numbers get transformed. I believe the creators of scipy are aware of the fact that floating-point numbers are much more common... So, what should I do? Thank you!
[ "What level matfile are you trying to read? According to the docs,\n\nv4 (Level 1.0), v6 and v7 to 7.2\n matfiles are supported.\nYou will need an HDF5 python library\n to read matlab 7.3 format mat files.\n Because scipy does not supply one, we\n do not implement the HDF5 / 7.3\n interface here.\n\nFor the supported levels, variables should be reloaded with the dtype with which they were saved; if you'd rather load them as matlab would, add mat_dtype=True at the end of the parameters with which you call loadmat.\n" ]
[ 1 ]
[]
[]
[ "mat_file", "python", "scipy" ]
stackoverflow_0001698223_mat_file_python_scipy.txt
Q: How to tame the location of third party contributions in Django I have a django project which is laid out like this... myproject apps media templates django registration sorl typogrify I'd like to change it to this... myproject apps media templates site-deps django registration sorl typogrify When I attempt it the 'site-dependencies' all break. Is there a way to implement this structure? I tried adding site-deps to the PYTHONPATH without joy... A: This looks like a job for virtualenv. A Primer on virtualenv Working with Virtualenv Using a Virtualenv Sandbox Tools of the Modern Python Hacker: Virtualenv, Fabric and Pip A: PYTHONPATH searches in the order that the paths are listed PythonPath "[ '/myproject', '/myproject/site-deps' ] + sys.path" is not the same as PythonPath "[ '/myproject/site-deps', '/myproject' ] + sys.path" The former order fails; perhaps because it figures it's already looked at the site-deps and there's no point in looking again. The latter order works. A: Make sure that site-dependencies, django, registration, sorl, and typogrify all have __init__.py files in them. A: How are you importing the packages under site-dependencies? Slightly off topic to your question, but I never liked the "default" project layout for Django so I have a script that lays my projects out like so: myproject/ apps/ vendor/ vendor/django/ config/__init__.py config/urls.py config/settings/ config/settings/__init__.py config/settings/base.py config/settings/hostname.py templates/ media/ script/manage.py The included manage.py is tweaked to add config, apps and vendor to python path ('myproject' itself is not in the python path) and to import config/settings/hostname.py as the settings module (where hostname would be the actual host name of the computer). Any 3rd party apps go in vendor (eg, django itself) and apps for this project go in the apps directory. It's a bit unconventional, but I like the setup.
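A minimal sketch of the path setup implied by the ordering point above, e.g. near the top of manage.py (PROJECT_ROOT is just an assumed name here):

import os, sys
PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, PROJECT_ROOT)                              # myproject itself
sys.path.insert(0, os.path.join(PROJECT_ROOT, 'site-deps'))   # searched first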
How to tame the location of third party contributions in Django
I have a django project which is laid out like this... myproject apps media templates django registration sorl typogrify I'd like to change it to this... myproject apps media templates site-deps django registration sorl typogrify When I attempt it the 'site-dependencies' all break. Is there a way to implement this structure? I tried adding site-deps to the PYTHONPATH without joy...
[ "This looks like a job for virtualenv.\n\nA Primer on virtualenv\nWorking with Virtualenv\nUsing a Virtualenv Sandbox\nTools of the Modern Python Hacker: Virtualenv, Fabric and Pip\n\n", "PYTHONPATH searches in the order that the paths are listed\nPythonPath \"[ '/myproject', '/myproject/site-deps' ] + sys.path\"\n\nis not the same as\nPythonPath \"[ '/myproject/site-deps', '/myproject' ] + sys.path\"\n\nThe former order fails; perhaps because it figures it's already looked at the site-deps and there's no point in looking again.\nThe latter order works.\n", "Make sure that site-dependencies, django, registration, sorl, and typogrify all have __init__.py files in them.\n", "How are you importing the packages under site-dependencies?\nSlightly off topic to your question, but I never liked the \"default\" project layout for Django so I have a script that lays my projects out like so:\nmyproject/\n apps/\n\n vendor/\n vendor/django/\n\n config/__init__.py\n config/urls.py\n config/settings/\n config/settings/__init__.py\n config/settings/base.py\n config/settings/hostname.py\n\n templates/\n media/\n\n script/manage.py\n\nThe included manage.py is tweaked to add config, apps and vendor to python path ('myproject' itself is not in the python path) and to import config/settings/hostname.py as the settings module (where hostname would be the actual host name of the computer). Any 3rd party apps go in vendor (eg, django itself) and apps for this project go in the apps directory.\nIt's a bit unconventional, but I like the setup.\n" ]
[ 3, 1, 0, 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001309606_django_python.txt
Q: win32com and PAMIE web page open timeout Currently I'm making a crawler script. One problem is that sometimes, if I open a web page with PAMIE, the page can't open and hangs forever. Is there any method to close PAMIE's IE or win32com's IE, for example if the web page doesn't respond or finish loading within 10 seconds or so? Thanks in advance A: Just use, to initialize your PAMIE instance, PAMIE(timeOut=100) or whatever. The units of measure for timeOut are "tenths of a second" (!); the default is 3000 (300 seconds, i.e., 5 minutes); with 100 as I suggested, you'd time out after 10 seconds as you request. (You can pass the timeOut= parameter even when you're initializing with a URL, but in that case the timeout will only be active after the initial navigation). A: I think what you are looking for is somewhere to set the timeout on your request. I would suggest looking into the documentation on PAMIE.
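A hedged sketch of the timeout idea; the module and method names here are assumptions that vary between PAMIE releases, so check your version:

from PAM30 import PAMIE              # module/class names differ across PAMIE versions
ie = PAMIE(timeOut=100)              # timeOut is in tenths of a second: 100 -> 10 seconds
ie.navigate('http://example.com/')   # gives up after the timeout instead of hanging forever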
win32com and PAMIE web page open timeout
Currently I'm making a crawler script. One problem is that sometimes, if I open a web page with PAMIE, the page can't open and hangs forever. Is there any method to close PAMIE's IE or win32com's IE, for example if the web page doesn't respond or finish loading within 10 seconds or so? Thanks in advance
[ "Just use, to initialize your PAMIE instance, PAMIE(timeOut=100) or whatever. The units of measure for timeOut are \"tenths of a second\" (!); the default is 3000 (300 seconds, i.e., 5 minutes); with 300 as I suggested, you'd time out after 10 seconds as you request.\n(You can pass the timeOut= parameter even when you're initializing with a URL, but in that case the timeout will only be active after the initial navigation).\n", "I think what you are looking for is somewhere to set the timeout on your request. I would suggest looking into the documentation on PAMIE.\n" ]
[ 2, 0 ]
[]
[]
[ "multithreading", "pamie", "python", "time" ]
stackoverflow_0001698362_multithreading_pamie_python_time.txt
Q: What is the fastest template system for Python? Jinja2 and Mako are both apparently pretty fast. How do these compare to (the less featured but probably good enough for what I'm doing) string.Template? A: Here are the results of the popular template engines for rendering a 10x1000 HTML table. Python 2.6.2 on a 3GHz Intel Core 2:
Kid template 696.89 ms
Kid template + cElementTree 649.88 ms
Genshi template + tag builder 431.01 ms
Genshi tag builder 389.39 ms
Django template 352.68 ms
Genshi template 266.35 ms
ElementTree 180.06 ms
cElementTree 107.85 ms
StringIO 41.48 ms
Jinja 2 36.38 ms
Cheetah template 34.66 ms
Mako Template 29.06 ms
Spitfire template 21.80 ms
Tenjin 18.39 ms
Spitfire template -O1 11.86 ms
cStringIO 5.80 ms
Spitfire template -O3 4.91 ms
Spitfire template -O2 4.82 ms
generator concat 4.06 ms
list concat 3.99 ms
generator concat optimized 2.84 ms
list concat optimized 2.62 ms
The benchmark is based on code from Spitfire performance tests with some added template engines and added iterations to increase accuracy. The list and generator concat at the end are hand coded Python to get a feel for the upper limit of performance achievable by compiling to Python bytecode. The optimized versions use string interpolation in the inner loop. But before you run out to switch your template engine, make sure it matters. You'll need to be doing some pretty heavy caching and really optimized code before the differences between the compiling template engines start to matter. For most applications good abstraction facilities, compatibility with design tools, familiarity and other things matter much much more. A: From the jinja2 docs, it seems that string.Template is the fastest if that's all you need. Without a doubt you should try to remove as much logic from templates as possible. But templates without any logic mean that you have to do all the processing in the code which is boring and stupid. A template engine that does that is shipped with Python and called string.Template. Comes without loops and if conditions and is by far the fastest template engine you can get for Python. A: If you can throw caching in the mix (like memcached) then choose based on features and ease of use rather than optimization. I use Mako because I like the syntax and features. Fortunately it is one of the fastest as well. A: In general you will have to do profiling to answer that question, as it depends on how you use the templates and what for. string.Template is the fastest, but so primitive it can hardly be called a template in the same breath as the other templating systems, as it only does string replacements, and has no conditions or loops, making it pretty useless in practice.
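For reference, the string.Template engine mentioned above really is this simple to use:

from string import Template
t = Template('Hello, $name! You have $count new messages.')
print t.substitute(name='World', count=3)   # -> Hello, World! You have 3 new messages.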
What is the fastest template system for Python?
Jinja2 and Mako are both apparently pretty fast. How do these compare to (the less featured but probably good enough for what I'm doing) string.Template?
[ "Here are the results of the popular template engines for rendering a 10x1000 HTML table.\nPython 2.6.2 on a 3GHz Intel Core 2\n\nKid template 696.89 ms\nKid template + cElementTree 649.88 ms\nGenshi template + tag builder 431.01 ms\nGenshi tag builder 389.39 ms\nDjango template 352.68 ms\nGenshi template 266.35 ms\nElementTree 180.06 ms\ncElementTree 107.85 ms\nStringIO 41.48 ms\nJinja 2 36.38 ms\nCheetah template 34.66 ms\nMako Template 29.06 ms\nSpitfire template 21.80 ms\nTenjin 18.39 ms\nSpitfire template -O1 11.86 ms\ncStringIO 5.80 ms\nSpitfire template -O3 4.91 ms\nSpitfire template -O2 4.82 ms\ngenerator concat 4.06 ms\nlist concat 3.99 ms\ngenerator concat optimized 2.84 ms\nlist concat optimized 2.62 ms\n\nThe benchmark is based on code from Spitfire performance tests with some added template engines and added iterations to increase accuracy. The list and generator concat at the end are hand coded Python to get a feel for the upper limit of performance achievable by compiling to Python bytecode. The optimized versions use string interpolation in the inner loop.\nBut before you run out to switch your template engine, make sure it matters. You'll need to be doing some pretty heavy caching and really optimized code before the differences between the compiling template engines starts to matter. For most applications good abstraction facilities, compatibility with design tools, familiarity and other things matter much much more.\n", "From the jinja2 docs, it seems that string.Template is the fastest if that's all you need.\n\nWithout a doubt you should try to\n remove as much logic from templates as\n possible. But templates without any\n logic mean that you have to do all the\n processing in the code which is boring\n and stupid. A template engine that\n does that is shipped with Python and\n called string.Template. Comes without\n loops and if conditions and is by far\n the fastest template engine you can\n get for Python.\n\n", "If you can throw caching in the mix (like memcached) then choose based on features and ease of use rather than optimization.\nI use Mako because I like the syntax and features. Fortunately it is one of the fastest as well.\n", "In general you will have to do profiling to answer that question, as it depends on how you use the templates and what for.\nstring.Template is the fastest, but so primitive it can hardly be called a template in the same breath as the other templating systems, as it only does string replacements, and has no conditions or loops, making it pretty useless in practice.\n" ]
[ 104, 9, 3, 1 ]
[ "I think Cheetah might be the fastest, as it's implemented in C.\n" ]
[ -4 ]
[ "django_templates", "jinja2", "mako", "python", "template_engine" ]
stackoverflow_0001324238_django_templates_jinja2_mako_python_template_engine.txt
Q: What if setuptools isn't installed? I'm just learning the art of writing a setup.py file for my project. I see there's lots of talk about setuptools, which is supposed to be superior to distutils. There's one thing though that I fail to understand, and I didn't see it addressed in any tutorial I've read about this: What if setuptools isn't installed? I understand it's not part of the standard library, so how can you assume the person who wants to install your program will have it installed? A: The standard way to distribute packages with setuptools includes an ez_setup.py script which will automatically download and install setuptools itself - on Windows I believe it will actually install an executable for easy_install. You can get this from the standard setuptools/easy_install distribution. A: In most libraries I ever installed for Python, a warning appears: "You have to install setuptools". You could do it as well, I think; you could add a link so the user doesn't have to search the internet for it. A: I have used setuptools to compile many Python scripts that I have written into Windows EXEs. However, it has always been my understanding (from experience) that the computer running the compiled EXE does not need to have setup tools installed. Hope that helps A: You can't assume it's installed. There are ways around that, you can fall back to distutils (but then why have setuptools in the first place) or you can install setuptools in setup.py (but I think that's evil). Use setuptools only if you need it. When it comes to setuptools vs distribute, they are compatible, and choosing one over the other is mainly up to the user. The setup.py is identical. A: You can download Windows EXE installers and a Linux RPM from here http://pypi.python.org/pypi/setuptools Then, once you have setuptools in place you can use the easy_install command to both download and install new packages. Because easy_install also automatically downloads and installs dependencies, you might want to set up virtualenv before you actually use it. That way you can decide whether or not you want to install a bunch of packages into your system's default Python install. Yes, this means that your users will have to have setuptools installed in order for them to use it. Of course, you could take the setuptools installers, rename them, and package them up with something like NSIS and distribute that to your users. The fact is, that you have to install something, so if you don't want to put your application in the installer, you can package up setuptools instead. A: I would say it depends on what kind of user you are addressing. If they are simply users and not Python programmers, or if they are basic programmers, using setuptools might be a little bit too much at first. For those the distutils is perfect. For clients, I would definitely stick to distutils. For more enthusiast programmers the setuptools would be fine. Somehow, it also depends on how you want to distribute updates, and how often. For example, do the users have an access to the Internet without a nasty proxy setup by their company that would block setuptools? - We do have one and it's an extra step to configure and make it work on every workstation.
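As a hedged sketch of the ez_setup bootstrap mentioned in the first answer (ship ez_setup.py alongside setup.py; the project metadata below is just a placeholder):

# setup.py
from ez_setup import use_setuptools
use_setuptools()   # downloads and installs setuptools if it is missing

from setuptools import setup

setup(name='myproject',
      version='0.1',
      packages=['myproject'])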
What if setuptools isn't installed?
I'm just learning the art of writing a setup.py file for my project. I see there's lots of talk about setuptools, which is supposed to be superior to distutils. There's one thing though that I fail to understand, and I didn't see it addressed in any tutorial I've read about this: What if setuptools isn't installed? I understand it's not part of the standard library, so how can you assume the person who wants to install your program will have it installed?
[ "The standard way to distribute packages with setuptools includes an ez_setup.py script which will automatically download and install setuptools itself - on Windows I believe it will actually install an executable for easy_install. You can get this from the standard setuptools/easy_install distribution.\n", "In most librarys I ever installed for python, a warning apears \"You have to install setuptools\". You could do it as well I think, you could add a link so the user don't have to search the internet for it.\n", "I have used setuptools to compile many python scripts that I have written into windows EXEs. However, it has always been my understanding (from experience) that the computer running the compiled EXE does not need to have setup tools installed.\nHope that helps\n", "You can't assume it's installed. There are ways around that, you can fall back to distutils (but then why have setuptools in the first place) or you can install setuptools in setup.py (but I think that's evil).\nUse setuptools only if you need it.\nWhen it comes to setuptools vs distrubute, they are compatible, and choosing one over the other is mainly up to the user. The setup.py is identical.\n", "You can download Windows EXE installers and a Linux RPM from here\nhttp://pypi.python.org/pypi/setuptools\nThen, once you have setuptools in place you can use the easy_install command to both download and install new packages. Because easy_install, also automatically downloads and installs dependencies, you might want to set up virtualenv before you actually use it. That way you can decide whether or not you want to install a bunch of packages into your system's default Python install.\nYes, this means that your users will have to have setuptools installed in order for them to use it. Of course, you could take the setuptools installers, rename them, and package them up with like NSIS and distribute that to your users. The fact is, that you have to install something, so if you don't want to put your application in the installer, you can package up setuptools instead.\n", "I would say it depends on what kind of user you are addressing.\nIf they are simply users and not Python programmers, or if they are basic programmers, using setuptools might be a little bit too much at first. For those the distutils is perfect.\nFor clients, I would definitely stick to distutils.\nFor more enthusiast programmers the setuptools would be fine.\nSomehow, it also depends on how you want to distribute updates, and how often. For example, do the users have an access to the Internet without a nasty proxy setup by their company that would block setuptools? - We do have one and it's an extra step to configure and make it work on every workstation.\n" ]
[ 4, 2, 2, 1, 0, 0 ]
[]
[]
[ "deployment", "distutils", "python", "setuptools" ]
stackoverflow_0001666482_deployment_distutils_python_setuptools.txt
Q: BeautifulSoup is omitting body of page BeautifulSoup newbie... Need help Here is the code sample... from mechanize import Browser from BeautifulSoup import BeautifulSoup mec = Browser() #url1 = "http://www.wines.com/catalog/index.php?cPath=21" url2 = "http://www.wines.com/catalog/product_info.php?products_id=4866" page = mec.open(url2) html = page.read() soup = BeautifulSoup(html) print soup.prettify() When I use url1 I get a nice dump of the page. When I use url2 (the one I need), I get output without the body. <!doctype html public "-//W3C//DTD HTML 4.01 Transitional//EN"> <html dir="LTR" lang="en"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title> 2005 Jordan Cabernet Sauvignon Sonoma 2005 </title> </head> </html> Any ideas? A: Yes. The HTML is bad. Step 1a, print soup.prettify() and see where it stops indenting correctly. Step 1b (if 1a doesn't work). Just run the raw page through any HTML prettifier. I use BBEdit for things that confuse Beautiful Soup. Look closely at the HTML. There will be some kind of horrible error. Misplaced " characters are common. Also, the CSS background-image when given as a style has bad quotes. <tag style="background-image:url("something")"> Note the "improper" quotes. You'll need to write a regex to find and fix these. Step 2. Write a "massage" regular expression and function to fix this. Read the http://www.crummy.com/software/BeautifulSoup/documentation.html#Sanitizing%20Bad%20Data%20with%20Regexps section for how to create a markup massage. Here's what I often use import re # Fix background-image:url("some URI") # to replace the quotes with &quot; background_image = re.compile(r'background-image:url\("([^"]+)"\)') def fix_background_image( match ): return 'background-image:url(&quot;%s&quot;)' % ( match.group(1) ) # Fix <img src="some URI name="someString""> -- note the out-of-place quotes bad_img = re.compile( r'src="([^ ]+) name="([^"]+)""' ) def fix_bad_img( match ): return 'src="%s" name="%s"' % ( match.group(1), match.group(2) ) fix_style_quotes = [ (background_image, fix_background_image), (bad_img, fix_bad_img), ] A: It seems to be getting tripped up by this bad tag: <META NAME="description" CONTENT="$49 at Wines.com "Deep red. Red- and blackcurrant, cherry and menthol on the nose, with subtle vanilla, cola and tobacco notes adding complexity. Tightly wound red berry and bitter cherry flavors are framed by dusty..."> Clearly here they have failed to escape a quote inside the attribute value (uh-oh... site might be vulnerable to cross-site scripting?), and that's making the parser think the rest of the content of the page is all in attribute values. (It would take another " or a > inside one of the real attribute values to make it realise the mistake, I think.) Unfortunately this is quite a tricky error to fix up after. You could try a slightly different parser, perhaps? eg. try Soup 3.0.x instead of 3.1.x if you're using that version, or vice-versa. Or try html5lib. A: The HTML is indeed horrible :-) BeautifulSoup 3.0.7 is much better at handling malformed HTML than the current version. The project website warns: "Currently the 3.0.x series is better at parsing bad HTML than the 3.1 series."... and there's a great page devoted to the reason why, which boils down to the fact that SGMLParser was removed in Python 3, and BS 3.1.x was written to be convertible to Py3k. 
The good news is that you can still download 3.0.7a (the last version), which on my machine parses the url you mentioned perfectly: http://www.crummy.com/software/BeautifulSoup/download/3.x/ A: Running on the HTML in question a validator shows 116 errors -- just too many to track down which one BeautifulSoup is proving unable to recover from, I guess:-( html5lib seems to survive the ordeal of parsing this horror page, and leaves a lot of stuff in (the prettify has just about all of the original page, it seems to me, when you use html5lib's parser to produce a BeautifulSoup object). Hard to say if the resulting parse tree will do what you need, since we don't really know what that is;-). Note: I've installed html5lib right from the hg clone sources (just python setup.py install from the html5lib/python directory), since the last official release is a bit long in the tooth.
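As a hedged sketch, the massage list from the first answer would be fed to BeautifulSoup 3.0.x roughly like this (html being the page source read with mechanize, as in the question):

import re
from BeautifulSoup import BeautifulSoup
# fix_style_quotes is the list of (regex, function) pairs defined above
soup = BeautifulSoup(html, markupMassage=fix_style_quotes)
print soup.prettify()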
BeautifulSoup is omitting body of page
BeautifulSoup newbie... Need help Here is the code sample... from mechanize import Browser from BeautifulSoup import BeautifulSoup mec = Browser() #url1 = "http://www.wines.com/catalog/index.php?cPath=21" url2 = "http://www.wines.com/catalog/product_info.php?products_id=4866" page = mec.open(url2) html = page.read() soup = BeautifulSoup(html) print soup.prettify() When I use url1 I get a nice dump of the page. When I use url2 (the one I need), I get output without the body. <!doctype html public "-//W3C//DTD HTML 4.01 Transitional//EN"> <html dir="LTR" lang="en"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title> 2005 Jordan Cabernet Sauvignon Sonoma 2005 </title> </head> </html> Any ideas?
[ "Yes. The HTML is bad. \nStep 1a, print soup.prettify() and see where it stops indenting correctly. \nStep 1b (if 1a doesn't work). Just print the raw through any HTML prettifying. I use BBEdit for things that confuse Beautiful Soup.\nLook closely at the HTML. There will be some kind of horrible error. Misplaced \" characters is common. Also, the CSS background-image when given as a style has bad quotes.\n<tag style=\"background-image:url(\"something\")\"> \n\nNote the \"improper\" quotes. You'll need to write an Regex to find and fix these.\nStep 2. Write a \"massage\" regular expression and function to fix this.\nRead the http://www.crummy.com/software/BeautifulSoup/documentation.html#Sanitizing%20Bad%20Data%20with%20Regexps section for how to create a markup massage. \nHere's what I often use\n# Fix background-image:url(\"some URI\")\n# to replace the quotes with &quote;\nbackground_image = re.compile(r'background-image:url\\(\"([^\"]+)\"\\)')\ndef fix_background_image( match ):\n return 'background-image:url(&quote;%s&quote;)' % ( match.group(1) )\n# Fix <img src=\"some URI name=\"someString\"\"> -- note the out-of-place quotes\nbad_img = re.compile( r'src=\"([^ ]+) name=\"([^\"]+)\"\"' )\ndef fix_bad_img( match ):\n return 'src=\"%s\" name=\"%s\"' % ( match.group(1), match.group(2) )\nfix_style_quotes = [\n (background_image, fix_background_image),\n (bad_img, fix_bad_img),\n]\n\n", "It seems to be getting tripped up by this bad tag:\n<META NAME=\"description\" CONTENT=\"$49 at Wines.com \"Deep red. Red- and blackcurrant, cherry and menthol on the nose, with subtle vanilla, cola and tobacco notes adding complexity. Tightly wound red berry and bitter cherry flavors are framed by dusty...\">\n\nClearly here they have failed to escape a quote inside the attribute value (uh-oh... site might be vulnerable to cross-site scripting?), and that's making the parser think the rest of the content of the page is all in attribute values. (It would take another \" or a > inside one of the real attribute values to make it realise the mistake, I think.)\nUnfortunately this is quite a tricky error to fix up after. You could try a slightly different parser, perhaps? eg. try Soup 3.0.x instead of 3.1.x if you're using that version, or vice-versa. Or try html5lib.\n", "The HTML is indeed horrible :-) BeautifulSoup 3.0.7 is much better at handling malformed HTML than the current version. The project website warns: \"Currently the 3.0.x series is better at parsing bad HTML than the 3.1 series.\"... and there's a great page devoted to the reason why, which boils down to the fact that SGMLParser was removed in Python 3, and BS 3.1.x was written to be convertible to Py3k.\nThe good news is that you can still download 3.0.7a (the last version), which on my machine parses the url you mentioned perfectly: http://www.crummy.com/software/BeautifulSoup/download/3.x/\n", "Running on the HTML in question a validator shows 116 errors -- just too many to track down which one BeautifulSoup is proving unable to recover from, I guess:-( \nhtml5lib seems to survive the ordeal of parsing this horror page, and leaves a lot of stuff in (the prettify has just about all of the original page, it seems to me, when you use html5lib's parser to produce a BeautifulSoup object). 
Hard to say if the resulting parse tree will do what you need, since we don't really know what that is;-).\nNote: I've installed html5lib right from the hg clone sources (just python setup.py install from the html5lib/python directory), since the last official release is a bit long in the tooth.\n" ]
[ 2, 2, 1, 0 ]
[]
[]
[ "beautifulsoup", "python", "screen_scraping" ]
stackoverflow_0001698627_beautifulsoup_python_screen_scraping.txt
Q: Pythonic way to find a regular expression match Is there a more succinct/correct/pythonic way to do the following: url = "http://0.0.0.0:3000/authenticate/login" re_token = re.compile("<[^>]*authenticity_token[^>]*value=\"([^\"]*)") for line in urllib2.urlopen(url): if re_token.match(line): token = re_token.findall(line)[0] break I want to get the value of the input tag named "authenticity_token" from an HTML page: <input name="authenticity_token" type="hidden" value="WTumSWohmrxcoiDtgpPRcxUMh/D9m7O7T6HOhWH+Yw4=" /> A: Could you use Beautiful Soup for this? The code would essentially look something like this: import urllib2 from BeautifulSoup import BeautifulSoup url = "http://0.0.0.0:3000/authenticate/login" page = urllib2.urlopen(url) soup = BeautifulSoup(page) token = soup.find("input", { 'name': 'authenticity_token'}) Something like that should work. I didn't test this but you can read the documentation to get it exact. A: You don't need the findall call. Instead use: m = re_token.match(line) if m: token = m.group(1) .... I second the recommendation of BeautifulSoup over regular expressions though. A: there's nothing "pythonic" about using regex. If you don't want to use BeautifulSoup (which you should ideally), just use Python's excellent string manipulation capabilities: for line in open("file"): line=line.strip() if "<input name" in line and "value=" in line: item=line.split() for i in item: if "value" in i: print i output $ more file <input name="authenticity_token" type="hidden" value="WTumSWohmrxcoiDtgpPRcxUMh/D9m7O7T6HOhWH+Yw4=" /> $ python script.py value="WTumSWohmrxcoiDtgpPRcxUMh/D9m7O7T6HOhWH+Yw4=" A: As to why you shouldn't use regular expressions to search HTML, there are two main reasons. The first is that HTML is defined recursively, and regular expressions, which compile into stackless state machines, don't do recursion. You can't write a regular expression that can tell, when it encounters an end tag, what start tag it encountered on its way to that tag it belongs to; there's nowhere to save that information. The second is that parsing HTML (which BeautifulSoup does) normalizes all kinds of things that are allowable in HTML and that you're probably not going to ever consider in your regular expressions. To pick a trivial example, what you're trying to parse: <input name="authenticity_token" type="hidden" value="xxx"/> could just as easily be: <input name='authenticity_token' type="hidden" value="xxx"/> or <input type = "hidden" value = "xxx" name = 'authenticity_token' /> or any one of a hundred other permutations that I'm not thinking about right now.
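Putting the pieces together, a compact version of the regex approach (assuming the attribute order shown in the question's tag) might be:

import re
import urllib2

url = "http://0.0.0.0:3000/authenticate/login"
html = urllib2.urlopen(url).read()
m = re.search(r'name="authenticity_token"[^>]*value="([^"]*)"', html)
token = m.group(1) if m else None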
Pythonic way to find a regular expression match
Is there a more succinct/correct/pythonic way to do the following: url = "http://0.0.0.0:3000/authenticate/login" re_token = re.compile("<[^>]*authenticity_token[^>]*value=\"([^\"]*)") for line in urllib2.urlopen(url): if re_token.match(line): token = re_token.findall(line)[0] break I want to get the value of the input tag named "authenticity_token" from an HTML page: <input name="authenticity_token" type="hidden" value="WTumSWohmrxcoiDtgpPRcxUMh/D9m7O7T6HOhWH+Yw4=" />
[ "Could you use Beautiful Soup for this? The code would essentially look something like so:\nfrom BeautifulSoup import BeautifulSoup\nurl = \"hhttp://0.0.0.0:3000/authenticate/login\"\npage = urlli2b.urlopen(page)\nsoup = BeautifulSoup(page)\ntoken = soup.find(\"input\", { 'name': 'authenticity_token'})\n\nSomething like that should work. I didn't test this but you can read the documentation to get it exact.\n", "You don't need the findall call. Instead use:\nm = re_token.match(line)\nif m:\n token = m.group(1)\n ....\n\nI second the recommendation of BeautifulSoup over regular expressions though.\n", "there's nothing \"pythonic\" with using regex. If you don't want to use BeautifulSoup(which you should ideally), just use Python's excellent string manipulation capabilities\nfor line in open(\"file\"):\n line=line.strip()\n if \"<input name\" in line and \"value=\" in line:\n item=line.split()\n for i in item:\n if \"value\" in i:\n print i\n\noutput\n$ more file\n<input name=\"authenticity_token\" type=\"hidden\" value=\"WTumSWohmrxcoiDtgpPRcxUMh/D9m7O7T6HOhWH+Yw4=\" />\n$ python script.py\nvalue=\"WTumSWohmrxcoiDtgpPRcxUMh/D9m7O7T6HOhWH+Yw4=\"\n\n", "As to why you shouldn't use regular expressions to search HTML, there are two main reasons.\nThe first is that HTML is defined recursively, and regular expressions, which compile into stackless state machines, don't do recursion. You can't write a regular expression that can tell, when it encounters an end tag, what start tag it encountered on its way to that tag it belongs to; there's nowhere to save that information.\nThe second is that parsing HTML (which BeautifulSoup does) normalizes all kinds of things that are allowable in HTML and that you're probably not going to ever consider in your regular expressions. To pick a trivial example, what you're trying to parse:\n<input name=\"authenticity_token\" type=\"hidden\" value=\"xxx\"/>\n\ncould just as easily be:\n<input name='authenticity_token' type=\"hidden\" value=\"xxx\"/>\n\nor\n<input type = \"hidden\" value = \"xxx\" name = 'authenticity_token' />\n\nor any one of a hundred other permutations that I'm not thinking about right now.\n" ]
[ 6, 1, 1, 0 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0001698217_python_regex.txt
Q: Numpy, problem with long arrays I have two arrays (a and b) with n integer elements in the range (0,N). (Typo correction: the arrays have 2^n integers, where the largest integer takes the value N = 3^n.) I want to calculate the sum of every combination of elements in a and b (sum_ij = a_i + b_j for all i,j). Then take modulus N (sum_ij = sum_ij % N), and finally calculate the frequency of the different sums. In order to do this fast with numpy, without any loops, I tried to use the meshgrid and the bincount function. A,B = numpy.meshgrid(a,b) A = A + B A = A % N A = numpy.reshape(A,A.size) result = numpy.bincount(A) Now, the problem is that my input arrays are long. And meshgrid gives me MemoryError when I use inputs with 2^13 elements. I would like to calculate this for arrays with 2^15-2^20 elements (that is, n in the range 15 to 20). Are there any clever tricks to do this with numpy? Any help will be highly appreciated. -- jon A: Try chunking it. Your meshgrid is an NxN matrix; block that up into a 10x10 grid of N/10 x N/10 pieces, compute the 100 partial bin counts, and add them up at the end. This only uses ~1% as much memory as doing the whole thing. A: Edit in response to jonalm's comment: jonalm: N~3^n not n~3^N. N is max element in a and n is number of elements in a. n is ~ 2^20. If N is ~ 3^n then N is ~ 3^(2^20) > 10^(500207). Scientists estimate (http://www.stormloader.com/ajy/reallife.html) that there are only around 10^87 particles in the universe. So there is no (naive) way a computer can handle an int of size 10^(500207). jonalm: I am however a bit curious about the pv() function you define. (I do not manage to run it as text.find() is not defined (guess it's in another module)). How does this function work and what is its advantage? pv is a little helper function I wrote to debug the value of variables. It works like print() except when you say pv(x) it prints both the literal variable name (or expression string), a colon, and then the variable's value. If you put #!/usr/bin/env python import traceback def pv(var): (filename,line_number,function_name,text)=traceback.extract_stack()[-2] print('%s: %s'%(text[text.find('(')+1:-1],var)) x=1 pv(x) in a script you should get x: 1 The modest advantage of using pv over print is that it saves you typing. Instead of having to write print('x: %s'%x) you can just slap down pv(x) When there are multiple variables to track, it's helpful to label the variables. I just got tired of writing it all out. The pv function works by using the traceback module to peek at the line of code used to call the pv function itself. (See http://docs.python.org/library/traceback.html#module-traceback) That line of code is stored as a string in the variable text. text.find() is a call to the usual string method find(). For instance, if text='pv(x)' then text.find('(') == 2 # The index of the '(' in string text text[text.find('(')+1:-1] == 'x' # Everything in between the parentheses For the worked example below I'm assuming N ~ n**(1./3) (matching the code, so that N stays manageable) and n ~ 2**20. The idea is to work modulo N. This cuts down on the size of the arrays. The second idea (important when n is huge) is to use numpy ndarrays of 'object' type because if you use an integer dtype you run the risk of overflowing the size of the maximum integer allowed. #!/usr/bin/env python import traceback import numpy as np def pv(var): (filename,line_number,function_name,text)=traceback.extract_stack()[-2] print('%s: %s'%(text[text.find('(')+1:-1],var)) You can change n to be 2**20, but below I show what happens with small n so the output is easier to read. 
n=100 N=int(np.exp(1./3*np.log(n))) pv(N) # N: 4 a=np.random.randint(N,size=n) b=np.random.randint(N,size=n) pv(a) pv(b) # a: [1 0 3 0 1 0 1 2 0 2 1 3 1 0 1 2 2 0 2 3 3 3 1 0 1 1 2 0 1 2 3 1 2 1 0 0 3 # 1 3 2 3 2 1 1 2 2 0 3 0 2 0 0 2 2 1 3 0 2 1 0 2 3 1 0 1 1 0 1 3 0 2 2 0 2 # 0 2 3 0 2 0 1 1 3 2 2 3 2 0 3 1 1 1 1 2 3 3 2 2 3 1] # b: [1 3 2 1 1 2 1 1 1 3 0 3 0 2 2 3 2 0 1 3 1 0 0 3 3 2 1 1 2 0 1 2 0 3 3 1 0 # 3 3 3 1 1 3 3 3 1 1 0 2 1 0 0 3 0 2 1 0 2 2 0 0 0 1 1 3 1 1 1 2 1 1 3 2 3 # 3 1 2 1 0 0 2 3 1 0 2 1 1 1 1 3 3 0 2 2 3 2 0 1 3 1] wa holds the number of 0s, 1s, 2s, 3s in a wb holds the number of 0s, 1s, 2s, 3s in b wa=np.bincount(a) wb=np.bincount(b) pv(wa) pv(wb) # wa: [24 28 28 20] # wb: [21 34 20 25] result=np.zeros(N,dtype='object') Think of a 0 as a token or chip. Similarly for 1,2,3. Think of wa=[24 28 28 20] as meaning there is a bag with 24 0-chips, 28 1-chips, 28 2-chips, 20 3-chips. You have a wa-bag and a wb-bag. When you draw a chip from each bag, you "add" them together and form a new chip. You "mod" the answer (modulo N). Imagine taking a 1-chip from the wb-bag and adding it with each chip in the wa-bag. 1-chip + 0-chip = 1-chip 1-chip + 1-chip = 2-chip 1-chip + 2-chip = 3-chip 1-chip + 3-chip = 4-chip = 0-chip (we are mod'ing by N=4) Since there are 34 1-chips in the wb bag, when you add them against all the chips in the wa=[24 28 28 20] bag, you get 34*24 1-chips 34*28 2-chips 34*28 3-chips 34*20 0-chips This is just the partial count due to the 34 1-chips. You also have to handle the other types of chips in the wb-bag, but this shows you the method used below: for i,count in enumerate(wb): partial_count=count*wa pv(partial_count) shifted_partial_count=np.roll(partial_count,i) pv(shifted_partial_count) result+=shifted_partial_count # partial_count: [504 588 588 420] # shifted_partial_count: [504 588 588 420] # partial_count: [816 952 952 680] # shifted_partial_count: [680 816 952 952] # partial_count: [480 560 560 400] # shifted_partial_count: [560 400 480 560] # partial_count: [600 700 700 500] # shifted_partial_count: [700 700 500 600] pv(result) # result: [2444 2504 2520 2532] This is the final result: 2444 0s, 2504 1s, 2520 2s, 2532 3s. # This is a test to make sure the result is correct. # This uses a very memory intensive method. # c is too huge when n is large. if n>1000: print('n is too large to run the check') else: c=(a[:]+b[:,np.newaxis]) c=c.ravel() c=c%N result2=np.bincount(c) pv(result2) assert(all(r1==r2 for r1,r2 in zip(result,result2))) # result2: [2444 2504 2520 2532] A: Check your math, that's a lot of space you're asking for: 2^20*2^20 = 2^40 = 1 099 511 627 776 If each of your elements was just one byte, that's already one terabyte of memory. Add a loop or two. This problem is not suited to maxing out your memory and minimizing your computation.
Numpy, problem with long arrays
I have two arrays (a and b) with n integer elements in the range (0,N). typo: arrays with 2^n integers where the largest integer takes the value N = 3^n I want to calculate the sum of every combination of elements in a and b (sum_ij = a_i + b_j for all i,j). Then take modulus N (sum_ij = sum_ij % N), and finally calculate the frequency of the different sums. In order to do this fast with numpy, without any loops, I tried to use the meshgrid and bincount functions. A,B = numpy.meshgrid(a,b) A = A + B A = A % N A = numpy.reshape(A,A.size) result = numpy.bincount(A) Now, the problem is that my input arrays are long. And meshgrid gives me MemoryError when I use inputs with 2^13 elements. I would like to calculate this for arrays with 2^15-2^20 elements (that is, n in the range 15 to 20). Are there any clever tricks to do this with numpy? Any help will be highly appreciated. -- jon
[ "try chunking it. your meshgrid is an NxN matrix, block that up to 10x10 N/10xN/10 and just compute 100 bins, add them up at the end. this only uses ~1% as much memory as doing the whole thing.\n", "Edit in response to jonalm's comment:\n\njonalm: N~3^n not n~3^N. N is max element in a and n is number of\n elements in a.\n\nn is ~ 2^20. If N is ~ 3^n then N is ~ 3^(2^20) > 10^(500207).\nScientists estimate (http://www.stormloader.com/ajy/reallife.html) that there are only around 10^87 particles in the universe. So there is no (naive) way a computer can handle an int of size 10^(500207). \n\njonalm: I am however a bit curios about the pv() function you define. (I\n do not manage to run it as text.find() is not defined (guess its in another\n module)). How does this function work and what is its advantage?\n\npv is a little helper function I wrote to debug the value of variables. It works like\nprint() except when you say pv(x) it prints both the literal variable name (or expression string), a colon, and then the variable's value.\nIf you put \n#!/usr/bin/env python\nimport traceback\ndef pv(var):\n (filename,line_number,function_name,text)=traceback.extract_stack()[-2]\n print('%s: %s'%(text[text.find('(')+1:-1],var))\nx=1\npv(x)\n\nin a script you should get\nx: 1\n\nThe modest advantage of using pv over print is that it saves you typing. Instead of having to\nwrite\nprint('x: %s'%x)\n\nyou can just slap down\npv(x)\n\nWhen there are multiple variables to track, it's helpful to label the variables.\nI just got tired of writing it all out.\nThe pv function works by using the traceback module to peek at the line of code \nused to call the pv function itself. (See http://docs.python.org/library/traceback.html#module-traceback) That line of code is stored as a string in the variable text. \ntext.find() is a call to the usual string method find(). For instance, if \ntext='pv(x)'\n\nthen\ntext.find('(') == 2 # The index of the '(' in string text\ntext[text.find('(')+1:-1] == 'x' # Everything in between the parentheses\n\nI'm assuming n ~ 3^N, and n~2**20\nThe idea is to work module N. This cuts down on the size of the arrays.\nThe second idea (important when n is huge) is to use numpy ndarrays of 'object' type because if you use an integer dtype you run the risk of overflowing the size of the maximum integer allowed. \n#!/usr/bin/env python\nimport traceback\nimport numpy as np\n\ndef pv(var):\n (filename,line_number,function_name,text)=traceback.extract_stack()[-2]\n print('%s: %s'%(text[text.find('(')+1:-1],var))\n\nYou can change n to be 2**20, but below I show what happens with small n \nso the output is easier to read.\nn=100\nN=int(np.exp(1./3*np.log(n)))\npv(N)\n# N: 4\n\na=np.random.randint(N,size=n)\nb=np.random.randint(N,size=n)\npv(a)\npv(b)\n# a: [1 0 3 0 1 0 1 2 0 2 1 3 1 0 1 2 2 0 2 3 3 3 1 0 1 1 2 0 1 2 3 1 2 1 0 0 3\n# 1 3 2 3 2 1 1 2 2 0 3 0 2 0 0 2 2 1 3 0 2 1 0 2 3 1 0 1 1 0 1 3 0 2 2 0 2\n# 0 2 3 0 2 0 1 1 3 2 2 3 2 0 3 1 1 1 1 2 3 3 2 2 3 1]\n# b: [1 3 2 1 1 2 1 1 1 3 0 3 0 2 2 3 2 0 1 3 1 0 0 3 3 2 1 1 2 0 1 2 0 3 3 1 0\n# 3 3 3 1 1 3 3 3 1 1 0 2 1 0 0 3 0 2 1 0 2 2 0 0 0 1 1 3 1 1 1 2 1 1 3 2 3\n# 3 1 2 1 0 0 2 3 1 0 2 1 1 1 1 3 3 0 2 2 3 2 0 1 3 1]\n\nwa holds the number of 0s, 1s, 2s, 3s in a\nwb holds the number of 0s, 1s, 2s, 3s in b\nwa=np.bincount(a)\nwb=np.bincount(b)\npv(wa)\npv(wb)\n# wa: [24 28 28 20]\n# wb: [21 34 20 25]\nresult=np.zeros(N,dtype='object')\n\nThink of a 0 as a token or chip. 
Similarly for 1,2,3.\nThink of wa=[24 28 28 20] as meaning there is a bag with 24 0-chips, 28 1-chips, 28 2-chips, 20 3-chips.\nYou have a wa-bag and a wb-bag. When you draw a chip from each bag, you \"add\" them together and form a new chip. You \"mod\" the answer (modulo N).\nImagine taking a 1-chip from the wb-bag and adding it with each chip in the wa-bag.\n1-chip + 0-chip = 1-chip\n1-chip + 1-chip = 2-chip\n1-chip + 2-chip = 3-chip\n1-chip + 3-chip = 4-chip = 0-chip (we are mod'ing by N=4)\n\nSince there are 34 1-chips in the wb bag, when you add them against all the chips in the wa=[24 28 28 20] bag, you get\n34*24 1-chips\n34*28 2-chips\n34*28 3-chips\n34*20 0-chips\n\nThis is just the partial count due to the 34 1-chips. You also have to handle the other\ntypes of chips in the wb-bag, but this shows you the method used below:\nfor i,count in enumerate(wb):\n partial_count=count*wa\n pv(partial_count)\n shifted_partial_count=np.roll(partial_count,i)\n pv(shifted_partial_count)\n result+=shifted_partial_count\n# partial_count: [504 588 588 420]\n# shifted_partial_count: [504 588 588 420]\n# partial_count: [816 952 952 680]\n# shifted_partial_count: [680 816 952 952]\n# partial_count: [480 560 560 400]\n# shifted_partial_count: [560 400 480 560]\n# partial_count: [600 700 700 500]\n# shifted_partial_count: [700 700 500 600]\n\npv(result) \n# result: [2444 2504 2520 2532]\n\nThis is the final result: 2444 0s, 2504 1s, 2520 2s, 2532 3s.\n# This is a test to make sure the result is correct.\n# This uses a very memory intensive method.\n# c is too huge when n is large.\nif n>1000:\n print('n is too large to run the check')\nelse:\n c=(a[:]+b[:,np.newaxis])\n c=c.ravel()\n c=c%N\n result2=np.bincount(c)\n pv(result2)\n assert(all(r1==r2 for r1,r2 in zip(result,result2)))\n# result2: [2444 2504 2520 2532]\n\n", "Check your math, that's a lot of space you're asking for:\n2^20*2^20 = 2^40 = 1 099 511 627 776\nIf each of your elements was just one byte, that's already one terabyte of memory.\nAdd a loop or two. This problem is not suited to maxing out your memory and minimizing your computation.\n" ]
[ 7, 2, 1 ]
[]
[]
[ "math", "numpy", "python" ]
stackoverflow_0001697557_math_numpy_python.txt
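A compact, runnable restatement of the counting idea worked through in the long answer above -- this is only a sketch, and it assumes (as that answer does) that only the histogram of sums mod N is needed, plus a numpy recent enough to support bincount's minlength argument:

import numpy as np

def mod_sum_counts(a, b, N):
    # Histogram of (a_i + b_j) % N over all pairs in O(n + N*N) work,
    # never materializing the n-by-n meshgrid.
    wa = np.bincount(a % N, minlength=N)   # residue counts for a
    wb = np.bincount(b % N, minlength=N)   # residue counts for b
    result = np.zeros(N, dtype=object)     # object dtype: plain Python ints, no overflow
    for r in range(N):
        # each b-element with residue r shifts a's histogram by r slots (mod N)
        result += wb[r] * np.roll(wa, r)
    return result

# sanity check against the brute-force meshgrid version on a small input
a = np.random.randint(0, 7, size=50)
b = np.random.randint(0, 7, size=50)
brute = np.bincount(((a[:, None] + b) % 7).ravel(), minlength=7)
assert list(mod_sum_counts(a, b, 7)) == list(brute)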
Q: "Slice lists" and "the ellipsis" in Python; slicing lists and lists of lists with lists of slices Original question: Can someone tell me how to use "slice lists" and the "ellipsis"? When are they useful? Thanks. Here's what the language definition says about "slice_list" and "ellipsis"; Alex Martelli's answer points out their origin, which is not what I had envisioned. [http://docs.python.org/reference/expressions.html#tok-slicing][1] 5.3.3. Slicings extended_slicing ::= primary "[" slice_list "]" slice_list ::= slice_item ("," slice_item)* [","] slice_item ::= expression | proper_slice | ellipsis ellipsis ::= "..." [1]: http://docs.python.org/reference/expressions.html#tok-slicing In case anyone (as I was) is looking for ways to attack a list (or a list of lists) with a list of slices, here are 5 ways to get a list of elements from a list that are selected by a list of slices and 2 ways to do the same thing to a list of lists, in that case applying one slice per list. The output's in a comment at the end. I find h5, the example that uses nested for loops, the hardest to understand if meaningful variable names aren't used (updated). #!/usr/bin/env python import itertools puz = [(i + 100) for i in range(40)] puz1 = list( puz) puz2 = [(i + 200) for i in range(40)] puz3 = [(i + 300) for i in range(40)] puzs = [puz1,puz2,puz3] sa = slice( 0,1,1) sb = slice( 30,39,4) sc = slice( -1, -15,-5) ss = [sa,sb,sc] def mapfunc( a,b): return a[b] f = map( mapfunc,[puz] * len(ss),ss) print "f = ", f #same as g below g = [ puz[i] for i in ss ] print "g = ",g #same as f, above h1 = [ i for i in itertools.chain( puz[sa],puz[sb],puz[sc]) ] print "h1 = ", h1 #right h2 = [ i for i in itertools.chain( *(map( mapfunc,[puz] * len(ss),ss))) ] print "h2 = ",h2 #right h3 = [ i for i in itertools.chain( *f) ] print "h3 = ",h3 #right h4 = [ i for i in itertools.chain( *g) ] print "h4 = ", h4 #also right h5 = [] for slice_object in ss: for list_element in puz[slice_object]: h5.append( list_element) print "h5 = ", h5 #right, too print "==============================" hh1 = [ i for i in itertools.chain( *(map( mapfunc,puzs,ss))) ] print "hh1 = ",hh1 #right puz_s_pairs = zip( puzs,ss) #print "puz_s_pairs = ",puz_s_pairs hh2 = [ i for i in itertools.chain( *(map( mapfunc,*zip( *puz_s_pairs)))) ] print "hh2 = ",hh2 #right ''' >>> execfile(r'D:/cygwin/home/usr01/wrk/py/pyexpts/list_of_slices_of_list.02.py') f = [[100], [130, 134, 138], [139, 134, 129]] g = [[100], [130, 134, 138], [139, 134, 129]] h1 = [100, 130, 134, 138, 139, 134, 129] h2 = [100, 130, 134, 138, 139, 134, 129] h3 = [100, 130, 134, 138, 139, 134, 129] h4 = [100, 130, 134, 138, 139, 134, 129] h5 = [100, 130, 134, 138, 139, 134, 129] ============================== hh1 = [100, 230, 234, 238, 339, 334, 329] hh2 = [100, 230, 234, 238, 339, 334, 329] ''' A: Slice lists and ellipsis were originally introduced in Python to supply nice syntax sugar for the precedessor of numpy (good old Numeric). If you're using numpy (no reason to go back to any of its predecessors!-) you should of course use them; if for whatever strange reason you're doing your own implementation of super-flexible multi-dimensional arrays, you'll definitely want to study the way numpy uses them and probably imitate it closely (it is pretty well designed after all). I can't think of good uses beyond multi-dimensional arrays. A: Numpy uses them to implement array slicing. A: I'm not too sure about ellipsis, so I will not address that, lest I give you a bad answer. 
Here goes list slicing: I hope you know that list indices begin at 0. l = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] Indexing into a list: l[0] >>> 0 l[5] >>> 5 Slicing a list. The first index is included, but not the last: l[0:5] >>> [0, 1, 2, 3, 4] l[2:5] >>> [2, 3, 4] Return the whole list as ONE slice: l[:] >>> [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] Get a slice of the list containing every element including and after the 3rd index: l[3:] >>> [3, 4, 5, 6, 7, 8, 9] Get a slice of the list containing every element up to but not including the 5th index: l[:5] >>> [0, 1, 2, 3, 4] Here is something that you would not expect Python to do: l[5:18] # note: there is no 18th index in this list >>> [5, 6, 7, 8, 9]
"Slice lists" and "the ellipsis" in Python; slicing lists and lists of lists with lists of slices
Original question: Can someone tell me how to use "slice lists" and the "ellipsis"? When are they useful? Thanks. Here's what the language definition says about "slice_list" and "ellipsis"; Alex Martelli's answer points out their origin, which is not what I had envisioned. [http://docs.python.org/reference/expressions.html#tok-slicing][1] 5.3.3. Slicings extended_slicing ::= primary "[" slice_list "]" slice_list ::= slice_item ("," slice_item)* [","] slice_item ::= expression | proper_slice | ellipsis ellipsis ::= "..." [1]: http://docs.python.org/reference/expressions.html#tok-slicing In case anyone (as I was) is looking for ways to attack a list (or a list of lists) with a list of slices, here are 5 ways to get a list of elements from a list that are selected by a list of slices and 2 ways to do the same thing to a list of lists, in that case applying one slice per list. The output's in a comment at the end. I find h5, the example that uses nested for loops, the hardest to understand if meaningful variable names aren't used (updated). #!/usr/bin/env python import itertools puz = [(i + 100) for i in range(40)] puz1 = list( puz) puz2 = [(i + 200) for i in range(40)] puz3 = [(i + 300) for i in range(40)] puzs = [puz1,puz2,puz3] sa = slice( 0,1,1) sb = slice( 30,39,4) sc = slice( -1, -15,-5) ss = [sa,sb,sc] def mapfunc( a,b): return a[b] f = map( mapfunc,[puz] * len(ss),ss) print "f = ", f #same as g below g = [ puz[i] for i in ss ] print "g = ",g #same as f, above h1 = [ i for i in itertools.chain( puz[sa],puz[sb],puz[sc]) ] print "h1 = ", h1 #right h2 = [ i for i in itertools.chain( *(map( mapfunc,[puz] * len(ss),ss))) ] print "h2 = ",h2 #right h3 = [ i for i in itertools.chain( *f) ] print "h3 = ",h3 #right h4 = [ i for i in itertools.chain( *g) ] print "h4 = ", h4 #also right h5 = [] for slice_object in ss: for list_element in puz[slice_object]: h5.append( list_element) print "h5 = ", h5 #right, too print "==============================" hh1 = [ i for i in itertools.chain( *(map( mapfunc,puzs,ss))) ] print "hh1 = ",hh1 #right puz_s_pairs = zip( puzs,ss) #print "puz_s_pairs = ",puz_s_pairs hh2 = [ i for i in itertools.chain( *(map( mapfunc,*zip( *puz_s_pairs)))) ] print "hh2 = ",hh2 #right ''' >>> execfile(r'D:/cygwin/home/usr01/wrk/py/pyexpts/list_of_slices_of_list.02.py') f = [[100], [130, 134, 138], [139, 134, 129]] g = [[100], [130, 134, 138], [139, 134, 129]] h1 = [100, 130, 134, 138, 139, 134, 129] h2 = [100, 130, 134, 138, 139, 134, 129] h3 = [100, 130, 134, 138, 139, 134, 129] h4 = [100, 130, 134, 138, 139, 134, 129] h5 = [100, 130, 134, 138, 139, 134, 129] ============================== hh1 = [100, 230, 234, 238, 339, 334, 329] hh2 = [100, 230, 234, 238, 339, 334, 329] '''
[ "Slice lists and ellipsis were originally introduced in Python to supply nice syntax sugar for the precedessor of numpy (good old Numeric). If you're using numpy (no reason to go back to any of its predecessors!-) you should of course use them; if for whatever strange reason you're doing your own implementation of super-flexible multi-dimensional arrays, you'll definitely want to study the way numpy uses them and probably imitate it closely (it is pretty well designed after all). I can't think of good uses beyond multi-dimensional arrays.\n", "Numpy uses them to implement array slicing.\n", "I'm not too sure about ellipsis, so I will not address that, lest I give you a bad answer.\nHere goes list slicing:\nI hope you know that list indeces begin at 0.\nl = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n\nIndexing into a list:\nl[0]\n>>> 0\n\nl[5]\n>>> 5\n\nSlicing a list. The first index is included, but not the last:\nl[0:5]\n>>> [0, 1, 2, 3, 4]\n\nl[2:5]\n>>> [2, 3, 4]\n\nReturn the whole list as ONE slice:\nl[:]\n>>> [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n\nGet a slice of the list containing every element including and after the 3rd index:\nl[3:]\n>>> [3, 4, 5, 6, 7, 8, 9]\n\nGet a slice of the list containing every element upto but not including the 5th index:\nl[:5]\n>>> [0, 1, 2, 3, 4]\n\nHere is something that you would not expect python to do:\nl[5:18] # note: there is no 18th index in this list\n>>> [5, 6, 7, 8, 9]\n\n" ]
[ 11, 3, 1 ]
[]
[]
[ "list", "python", "python_itertools", "slice" ]
stackoverflow_0001698753_list_python_python_itertools_slice.txt
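Since both answers point to numpy as the natural home of these features, here is a minimal sketch of what the slice-list syntax and the ellipsis actually buy you there (the array contents are arbitrary):

import numpy as np

arr = np.arange(24).reshape(2, 3, 4)   # a small 3-D array

# a slice list: several slice_items separated by commas
sub = arr[0, 1:3, ::2]                 # plane 0, rows 1-2, every other column

# the ellipsis stands for "as many full slices as needed"
last = arr[..., 0]                     # same as arr[:, :, 0]
first = arr[0, ...]                    # same as arr[0, :, :]

# behind the syntax: a[i, j] is a[(i, j)], and ... is the Ellipsis object
assert (arr[..., 0] == arr[(Ellipsis, 0)]).all()
assert (arr[0, 1:3] == arr[(0, slice(1, 3))]).all()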
Q: Python data structure recommendation? I currently have a structure that is a dict: each value is a list that contains numeric values. Each of these numeric lists contains what (to borrow a SQL idiom) you could call a primary key: the first three values, which are a year, a player identifier, and a team identifier. This is the key for the dict. So you can get a unique row by passing a value in for the year, player ID, and team ID like so: statline = stats[(2001, 'SEA', 'suzukic01')] Which yields something like [305, 20, 444, 330, 45] I'd like to alter this data structure to be quickly summed by any of these three keys: so you could easily slice the totals for a given index in the numeric lists by passing in ONE of year, player ID, and team ID, and then the index. I want to be able to do something like hr_total = stats[year=2001, idx=3] Where that idx of 3 corresponds to the third column in the numeric list(s) that would be retrieved. Any ideas? A: Put your data into SQLite, and use its relational engine to do the work. You can create an in-memory database and not even have to touch the disk. A: Read up on Data Warehousing. Any book. Read up on Star Schema Design. Any book. Seriously. You have several dimensions: Year, Player, Team. You have one fact: score. You want to have a structure like this. You then want to create a set of dimension indexes like this. years = collections.defaultdict( list ) players = collections.defaultdict( list ) teams = collections.defaultdict( list ) Your fact table can be a collections.namedtuple. You can use something like this. class ScoreFact( object ): def __init__( self, year, player, team, score ): self.year= year self.player= player self.team= team self.score= score years[self.year].append( self ) players[self.player].append( self ) teams[self.team].append( self ) Now you can find all items in a given dimension value. It's a simple list attached to a dimension value. years['2001'] are all scores for the given year. players['SEA'] are all scores for the given player. etc. You can simply use sum() to add them up. A multi-dimensional query is something like this. [ x for x in players['SEA'] if x.year == '2001' ] A: The syntax stats[year=2001, idx=3] is invalid Python and there is no way you can make it work with those square brackets and "keyword arguments"; you'll need to have a function or method call in order to accept keyword arguments. So, say we make it a function, to be called like wells(stats, year=2001, idx=3). I imagine the idx argument is mandatory (which is very peculiar given the call, but you give no indication of what it could possibly mean to omit idx) and exactly one of year, playerid, and teamid must be there. With your current data structure, wells can already be implemented: def wells(stats, year=None, playerid=None, teamid=None, idx=None): if idx is None: raise ValueError('idx must be specified') specifiers = [(i, x) for i, x in enumerate((year, playerid, teamid)) if x is not None] if len(specifiers) != 1: raise ValueError('Exactly one of year, playerid, teamid must be given') ikey, keyv = specifiers[0] return sum(v[idx] for k, v in stats.iteritems() if k[ikey]==keyv) of course, this is O(N) in the size of stats -- it must examine every entry in it. Please measure correctness and performance with this simple implementation as a baseline.
An alternative solution (much speedier in use, but requiring much time for preparation) is to put three dicts of lists (one each for year, playerid, teamid) to the side of stats, each entry indicating (or copying, but I think indicating by full key may suffice) all entries of stats that match that ikey / keyv pair. But it's not clear at this time whether this implementation may not be premature, so please try first with the simple-minded idea!-) A: def getSum(d, year, idx): total = 0 for key in d.keys(): if key[0] == year: total += d[key][idx] return total This should get you started. I have made the assumption in this code that ONLY year will be asked for, but it should be easy enough for you to manipulate this to check for other parameters as well. Cheers
Python data structure recommendation?
I currently have a structure that is a dict: each value is a list that contains numeric values. Each of these numeric lists contains what (to borrow a SQL idiom) you could call a primary key: the first three values, which are a year, a player identifier, and a team identifier. This is the key for the dict. So you can get a unique row by passing a value in for the year, player ID, and team ID like so: statline = stats[(2001, 'SEA', 'suzukic01')] Which yields something like [305, 20, 444, 330, 45] I'd like to alter this data structure to be quickly summed by any of these three keys: so you could easily slice the totals for a given index in the numeric lists by passing in ONE of year, player ID, and team ID, and then the index. I want to be able to do something like hr_total = stats[year=2001, idx=3] Where that idx of 3 corresponds to the third column in the numeric list(s) that would be retrieved. Any ideas?
[ "Put your data into SQLite, and use its relational engine to do the work. You can create an in-memory database and not even have to touch the disk.\n", "Read up on Data Warehousing. Any book. \nRead up on Star Schema Design. Any book. Seriously. \nYou have several dimensions: Year, Player, Team. \nYou have one fact: score\nYou want to have a structure like this.\nYou then want to create a set of dimension indexes like this.\nyears = collections.defaultdict( list )\nplayers = collections.defaultdict( list )\nteams = collections.defaultdict( list )\n\nYour fact table can be this a collections.namedtuple. You can use something like this.\nclass ScoreFact( object ):\n def __init__( self, year, player, team, score ):\n self.year= year\n self.player= player\n self.team= team\n self.score= score\n years[self.year].append( self )\n players[self.player].append( self )\n teams[self.team].append( self )\n\nNow you can find all items in a given dimension value. It's a simple list attached to a dimension value. \nyears['2001'] are all scores for the given year.\n\nplayers['SEA'] are all scores for the given player.\n\netc. You can simply use sum() to add them up. A multi-dimensional query is something like this.\n[ x for x in players['SEA'] if x.year == '2001' ]\n\n", "The syntax stats[year=2001, idx=3] is invalid Python and there is no way you can make it work with those square brackets and \"keyword arguments\"; you'll need to have a function or method call in order to accept keyword arguments.\nSo, say we make it a function, to be called like wells(stats, year=2001, idx=3). I imagine the idx argument is mandatory (which is very peculiar given the call, but you give no indication of what could possibly mean to omit idx) and exactly one of year, playerid, and teamid must be there.\nWith your current data structure, wells can already be implemented:\ndef wells(stats, year=None, playerid=None, teamid=None, idx=None):\n if idx is None: raise ValueError('idx must be specified')\n specifiers = [(i, x) for x in enumerate((year, playerid, teamid)) if x is not None]\n if len(specifiers) != 2:\n raise ValueError('Exactly one of year, playerid, teamid, must be given')\n ikey, keyv = specifiers[0]\n return sum(v[idx] for k, v in stats.iteritems() if k[ikey]==keyv)\n\nof course, this is O(N) in the size of stats -- it must examine every entry in it. Please measure correctness and performance with this simple implementation as a baseline. An alternative solutions (much speedier in use, but requiring much time for preparation) is to put three dicts of lists (one each for year, playerid, teamid) to the side of stats, each entry indicating (or copying, but I think indicating by full key may suffice) all entries of stats that match that that ikey / keyv pair. But it's not clear at this time whether this implementation may not be premature, so please try first with the simple-minded idea!-)\n", "def getSum(d, year, idx):\n sum = 0\n for key in d.keys():\n if key[0] == year:\n sum += d[key][idx]\n return sum\n\nThis should get you started. I have made the assumption in this code, that ONLY year will be asked for, but it should be easy enough for you to manipulate this to check for other parameters as well\nCheers\n" ]
[ 4, 4, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001698734_python.txt
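The accepted answer recommends SQLite without showing any code; a minimal sketch of that route could look like the following (the table layout and column names c0..c4 are invented for illustration):

import sqlite3

conn = sqlite3.connect(':memory:')     # in-memory: the disk is never touched
conn.execute('CREATE TABLE stats (year INTEGER, team TEXT, player TEXT, '
             'c0 INTEGER, c1 INTEGER, c2 INTEGER, c3 INTEGER, c4 INTEGER)')
conn.execute('INSERT INTO stats VALUES (2001, ?, ?, 305, 20, 444, 330, 45)',
             ('SEA', 'suzukic01'))

# "sum column 3 for a given year" is one query; the other two keys work the same way
hr_total = conn.execute('SELECT SUM(c3) FROM stats WHERE year = ?',
                        (2001,)).fetchone()[0]
team_total = conn.execute('SELECT SUM(c3) FROM stats WHERE team = ?',
                          ('SEA',)).fetchone()[0]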
Q: Activate virtualenv via os.system() I'm writing a Python-based shell script to boilerplate a Django app with virtualenv, pip, and fabric. Should be straightforward enough, but it appears that I'm not able to activate and run commands in the virtualenv through the shell script. os.system('virtualenv %s --no-site-packages' % project_name) os.system('source %s/bin/activate' % project_name) os.system('easy_install pip') When running, this errors out: $ startproject+ -s false sample New python executable in sample/bin/python Installing setuptools............done. /testing Searching for pip Best match: pip 0.4 Processing pip-0.4-py2.6.egg pip 0.4 is already the active version in easy-install.pth Installing pip script to /usr/local/bin error: /usr/local/bin/pip: Permission denied Obviously the source line isn't being run, but why? Is it a concurrency/threading issue, or something deeper with virtualenv? Thanks! A: Each call to os.system runs the command in a new subshell, which has the same properties as the original python process. Try putting the commands into one string separated by semicolons. A: Just don't use "source activate" at all. It does nothing but alter your shell PATH to put the virtualenv's bin directory first. I presume your script knows the directory of the virtualenv it has just created; all you have to do is call _virtualenv_dir_/bin/easy_install by full path. Or _virtualenv_dir_/bin/python for running any other python script within the virtualenv. A: Each os.system call creates a new process. You'll need to ensure that the activate and the easy_install are run in the same os.system or subprocess call. A: You could also install virtualenvwrapper, and use the postmkvirtualenv hook. I use it to automatically bring in fresh copies of pip and IPython into virtualenvs I create (as I don't want it using my system IPython). I also use it to copy pythonw into the virtualenv, otherwise wx-based stuff won't work. Looks like this: easy_install pip pip install -I ipython cd ~/bin python install_pythonw.py ${VIRTUAL_ENV}
Activate virtualenv via os.system()
I'm writing a Python-based shell script to boilerplate a Django app with virtualenv, pip, and fabric. Should be straightforward enough, but it appears that I'm not able to activate and run commands in the virtualenv through the shell script. os.system('virtualenv %s --no-site-packages' % project_name) os.system('source %s/bin/activate' % project_name) os.system('easy_install pip') When running, this errors out: $ startproject+ -s false sample New python executable in sample/bin/python Installing setuptools............done. /testing Searching for pip Best match: pip 0.4 Processing pip-0.4-py2.6.egg pip 0.4 is already the active version in easy-install.pth Installing pip script to /usr/local/bin error: /usr/local/bin/pip: Permission denied Obviously the source line isn't being run, but why? Is it a concurrency/threading issue, or something deeper with virtualenv? Thanks!
[ "Each call to os.system runs the command in a new subshell, which has the same properties as the original python process.\nTry putting the commands into one string separated by semicolons.\n", "Just don't use \"source activate\" at all. It does nothing but alter your shell PATH to put the virtualenv's bin directory first. I presume your script knows the directory of the virtualenv it has just created; all you have to do is call _virtualenv_dir_/bin/easy_install by full path. Or _virtualenv_dir_/bin/python for running any other python script within the virtualenv.\n", "Each os.system call creates a new process. You'll need to ensure that the activate and the easy_install are run in the same os.system or subprocess call.\n", "You could also install virtualenvwrapper, and use the postmkvirtualenv hook. I use it to automatically bring in fresh copies of pip and IPython into virtualenvs I create (as I don't want it using my system IPython). I also use it to copy pythonw into the virtualenv, otherwise wx-based stuff won't work. Looks like this:\neasy_install pip\npip install -I ipython\ncd ~/bin\npython install_pythonw.py ${VIRTUAL_ENV}\n\n" ]
[ 11, 6, 5, 3 ]
[]
[]
[ "django", "fabric", "pip", "python", "virtualenv" ]
stackoverflow_0001691076_django_fabric_pip_python_virtualenv.txt
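Putting the answers together (never "source activate" from os.system; call the virtualenv's own binaries by full path, one subprocess each), the boilerplate script might be reworked like this sketch -- the POSIX bin/ layout is assumed:

import os
import subprocess

def make_env(project_name):
    subprocess.check_call(['virtualenv', '--no-site-packages', project_name])
    # the env's own easy_install puts pip *inside* the env,
    # so the root-owned /usr/local/bin is never touched
    easy_install = os.path.join(project_name, 'bin', 'easy_install')
    subprocess.check_call([easy_install, 'pip'])
    # later commands run through the env's python the same way
    env_python = os.path.join(project_name, 'bin', 'python')
    subprocess.check_call([env_python, '-c', 'import sys; print(sys.prefix)'])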
Q: UnicodeDecodeError with Django's request.FILES I have the following code in the view call.. def view(request): body = u"" for filename, f in request.FILES.items(): body = body + 'Filename: ' + filename + '\n' + f.read() + '\n' On some cases I get UnicodeDecodeError: 'ascii' codec can't decode byte 0xf0 in position 7470: ordinal not in range(128) What am I doing wrong? (I am using Django 1.1.) Thank you. A: Django has some utilities that handle this (smart_unicode, force_unicode, smart_str). Generally you just need smart_unicode. from django.utils.encoding import smart_unicode def view(request): body = u"" for filename, f in request.FILES.items(): body = body + 'Filename: ' + filename + '\n' + smart_unicode(f.read()) + '\n' A: you are appending f.read() directly to unicode string, without decoding it, if the data you are reading from file is utf-8 encoded use utf-8, else use whatever encoding it is in. decode it first and then append to body e.g. data = f.read().decode("utf-8") body = body + 'Filename: ' + filename + '\n' + data + '\n' A: Anurag's answer is correct. However another problem here is you can't for certain know the encoding of the files that users upload. It may be useful to loop over a tuple of the most common ones till you get the correct one: encodings = ('windows-xxx', 'iso-yyy', 'utf-8',) for e in encodings: try: data = f.read().decode(e) break except UnicodeDecodeError: pass A: If you are not in control of the file encoding for files that can be uploaded , you can guess what encoding a file is in using the Universal Encoding Detector module chardet.
UnicodeDecodeError with Django's request.FILES
I have the following code in the view call.. def view(request): body = u"" for filename, f in request.FILES.items(): body = body + 'Filename: ' + filename + '\n' + f.read() + '\n' On some cases I get UnicodeDecodeError: 'ascii' codec can't decode byte 0xf0 in position 7470: ordinal not in range(128) What am I doing wrong? (I am using Django 1.1.) Thank you.
[ "Django has some utilities that handle this (smart_unicode, force_unicode, smart_str). Generally you just need smart_unicode.\nfrom django.utils.encoding import smart_unicode\ndef view(request):\n body = u\"\" \n for filename, f in request.FILES.items():\n body = body + 'Filename: ' + filename + '\\n' + smart_unicode(f.read()) + '\\n'\n\n", "you are appending f.read() directly to unicode string, without decoding it, if the data you are reading from file is utf-8 encoded use utf-8, else use whatever encoding it is in.\ndecode it first and then append to body e.g.\ndata = f.read().decode(\"utf-8\")\nbody = body + 'Filename: ' + filename + '\\n' + data + '\\n'\n\n", "Anurag's answer is correct. However another problem here is you can't for certain know the encoding of the files that users upload. It may be useful to loop over a tuple of the most common ones till you get the correct one:\nencodings = ('windows-xxx', 'iso-yyy', 'utf-8',)\nfor e in encodings:\n try:\n data = f.read().decode(e)\n break\n except UnicodeDecodeError:\n pass\n\n", "If you are not in control of the file encoding for files that can be uploaded , you can guess what encoding a file is in using the Universal Encoding Detector module chardet.\n" ]
[ 7, 4, 4, 3 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001699126_django_python.txt
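For the chardet suggestion in the last answer, the sketch below shows the shape of that API; the 'replace' error handler is an extra safeguard of mine so the view never crashes on undetectable bytes:

import chardet

def to_unicode(raw_bytes):
    guess = chardet.detect(raw_bytes)        # e.g. {'encoding': 'utf-8', 'confidence': 0.99}
    encoding = guess['encoding'] or 'utf-8'  # detect() can return None for the encoding
    return raw_bytes.decode(encoding, 'replace')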
Q: regex in for loop How do you use a regex with a for loop in Python example data abc 1 xyz 0 abc 2 xyz 1 abc 3 xyz 2 How do you write regex for something like below for i in range(1, 3): re.match(abc +i xyz +(i-1)) A: This substitutes i into the first %s and i-1 into the second %s re.match("abc %s xyz %s"%(i,i-1), data) another way to write it would be re.match("abc "+str(i)+" xyz "+str(i-1), data) A: You can't make a single regex that includes math expressions which are evaluated at regex matching time. However, you can dynamically generate regex expressions, using the usual Python string formatting techniques: import re example_data = """ this line will not match abc 1 xyz 0 this line will not match abc 2 xyz 1 abc 2 xyz 2 will not match abc 3 xyz 2 """ for i in range(1, 4): pattern = "abc %d xyz %d" % (i, (i - 1)) match_group = re.search(pattern, example_data) if match_group: print match_group.group(0) This will print: abc 1 xyz 0 abc 2 xyz 1 abc 3 xyz 2 It might be a better idea to do as abyx suggested, and make a single regex pattern with several match groups, and do the math based on the substrings captured by the match groups: import re example_data = """ this line will not match abc 1 xyz 0 this line will not match abc 2 xyz 1 abc 2 xyz 2 will not match abc 3 xyz 2 """ s_pattern = "abc (\d+) xyz (\d+)" pat = re.compile(s_pattern) # note that you can pre-compile the single pattern # you cannot do that with the dynamic patterns for match_group in re.finditer(pat, example_data): n1 = int(match_group.group(1)) n2 = int(match_group.group(2)) if n1 > 0 and n1 == n2 + 1: print match_group.group(0) This also will print: abc 1 xyz 0 abc 2 xyz 1 abc 3 xyz 2
regex in for loop
How do you use a regex with a for loop in Python? Example data: abc 1 xyz 0 abc 2 xyz 1 abc 3 xyz 2 How do you write a regex for something like the below? for i in range(1, 3): re.match(abc +i xyz +(i-1))
[ "This substitutes i into the first %s and i-1 into the second %s\nre.match(\"abc %s xyz %s\"%(i,i-1), data)\n\nanother way to write it would be\nre.match(\"abc \"+str(i)+\" xyz \"+str(i-1), data)\n\n", "You can't make a single regex that includes math expressions which are evaluated at regex matching time. However, you can dynamically generate regex expressions, using the usual Python string formatting techniques:\nimport re\n\nexample_data = \"\"\"\nthis line will not match\nabc 1 xyz 0\nthis line will not match\nabc 2 xyz 1\nabc 2 xyz 2 will not match\nabc 3 xyz 2\n\"\"\"\n\nfor i in range(1, 4):\n pattern = \"abc %d xyz %d\" % (i, (i - 1))\n match_group = re.search(pattern, example_data)\n if match_group:\n print match_group.group(0)\n\nThis will print:\nabc 1 xyz 0\nabc 2 xyz 1\nabc 3 xyz 2\n\nIt might be a better idea to do as abyx suggested, and make a single regex pattern with several match groups, and do the math based on the substrings captured by the match groups:\nimport re\n\nexample_data = \"\"\"\nthis line will not match\nabc 1 xyz 0\nthis line will not match\nabc 2 xyz 1\nabc 2 xyz 2 will not match\nabc 3 xyz 2\n\"\"\"\ns_pattern = \"abc (\\d+) xyz (\\d+)\"\n\npat = re.compile(s_pattern)\n# note that you can pre-compile the single pattern\n# you cannot do that with the dynamic patterns\n\nfor match_group in re.finditer(pat, example_data):\n n1 = int(match_group.group(1))\n n2 = int(match_group.group(2))\n if n1 > 0 and n1 == n2 + 1:\n print match_group.group(0)\n\nThis also will print:\nabc 1 xyz 0\nabc 2 xyz 1\nabc 3 xyz 2\n\n" ]
[ 3, 2 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0001699467_python_regex.txt
Q: Python file manipulation Assume I have such folders rootfolder | / \ \ 01 02 03 .... | 13_itemname.xml So under my rootfolder, each directory represents a month like 01 02 03 and under these directories I have items with their creation hour and item name such as 16_item1.xml, 24_item1.xml etc.; as you may guess, there are several items and each xml is created every hour. Now I want to do two things: I need to generate a list of item names for a month, i.e. for 01 I have item1, item2 and item3 inside. I need to filter each item, such as for item1: I want to read each from 01_item1.xml to 24_item1.xml. How can I achieve this in Python in an easy way? A: Here are two methods doing what you ask (if I understood it properly). One with regex, one without. You choose which one you prefer ;) One bit which may seem like magic is the "setdefault" line. For an explanation, see the docs. I leave it as "an exercise to the reader" to understand how it works ;) from os import listdir from os.path import join DATA_ROOT = "testdata" def folder_items_no_regex(month_name): # dict holding the items (assuming ordering is irrelevant) items = {} # 1. Loop through all filenames in said folder for file in listdir( join( DATA_ROOT, month_name ) ): date, name = file.split( "_", 1 ) # skip files that were not possible to split on "_" if not date or not name: continue # ignore non-.xml files if not name.endswith(".xml"): continue # cut off the ".xml" extension name = name[0:-4] # keep a list of filenames items.setdefault( name, set() ).add( file ) return items def folder_items_regex(month_name): import re # The pattern: # 1. match the beginning of line "^" # 2. capture 1 or more digits ( \d+ ) # 3. match the "_" # 4. capture any character (as few as possible ): (.*?) # 5. match ".xml" # 6. match the end of line "$" pattern = re.compile( r"^(\d+)_(.*?)\.xml$" ) # dict holding the items (assuming ordering is irrelevant) items = {} # 1. Loop through all filenames in said folder for file in listdir( join( DATA_ROOT, month_name ) ): match = pattern.match( file ) if not match: continue date, name = match.groups() # keep a list of filenames items.setdefault( name, set() ).add( file ) return items if __name__ == "__main__": from pprint import pprint data = folder_items_no_regex( "02" ) print "--- The dict ---------------" pprint( data ) print "--- The items --------------" pprint( sorted( data.keys() ) ) print "--- The files for item1 ---- " pprint( sorted( data["item1"] ) ) data = folder_items_regex( "02" ) print "--- The dict ---------------" pprint( data ) print "--- The items --------------" pprint( sorted( data.keys() ) ) print "--- The files for item1 ---- " pprint( sorted( data["item1"] ) ) A: Assuming that item names have a fixed length prefix and suffix (i.e., a 3 character prefix such as '01_' and a 4 character suffix of '.xml'), you could solve the first part of the problem like this: names = set(name[3:-4] for name in os.listdir('01') if name.endswith('.xml')) That will get you unique item names. To filter each item, simply look for files that end with that item's name and sort it if required.
item_suffix = '_item2.xml' filtered = sorted(name for name in os.listdir('01') if name.endswith(item_suffix)) A: Not sure exactly what you want to do, but here are some pointers that might be useful creating filenames ("%02d" means pad left with zeros) foldernames = ["%02d"%i for i in range(1,13)] filenames = ["%02d"%i for i in range(1,24)] use os.path.join for building up complex paths instead of string concatenation os.path.join(foldername,filename) os.path.exists for checking whether a file exists first if os.path.exists(newname): print "file already exists" for listing directory contents, use glob from glob import glob xmlfiles = glob("*.xml") use shutil for higher level operations like creating folders, renaming files shutil.move(oldname,newname) basename to get a file name from a full path filename = os.path.basename(fullpath)
Python file manipulation
Assume I have such folders rootfolder | / \ \ 01 02 03 .... | 13_itemname.xml So under my rootfolder, each directory represents a month like 01 02 03 and under these directories I have items with their creation hour and item name such as 16_item1.xml, 24_item1.xml etc.; as you may guess, there are several items and each xml is created every hour. Now I want to do two things: I need to generate a list of item names for a month, i.e. for 01 I have item1, item2 and item3 inside. I need to filter each item, such as for item1: I want to read each from 01_item1.xml to 24_item1.xml. How can I achieve this in Python in an easy way?
[ "Here are two methods doing what you ask (if I understood it properly). One with regex, one without. You choose which one you prefer ;)\nOne bit which may seem like magic is the \"setdefault\" line. For an explanation, see the docs. I leave it as \"an exercise to the reader\" to understand how it works ;)\nfrom os import listdir\nfrom os.path import join\n\nDATA_ROOT = \"testdata\"\n\ndef folder_items_no_regex(month_name):\n\n # dict holding the items (assuming ordering is irrelevant)\n items = {}\n\n # 1. Loop through all filenames in said folder\n for file in listdir( join( DATA_ROOT, month_name ) ):\n date, name = file.split( \"_\", 1 )\n\n # skip files that were not possible to split on \"_\"\n if not date or not name:\n continue\n\n # ignore non-.xml files\n if not name.endswith(\".xml\"):\n continue\n\n # cut off the \".xml\" extension\n name = name[0:-4]\n\n # keep a list of filenames\n items.setdefault( name, set() ).add( file )\n\n return items\n\ndef folder_items_regex(month_name):\n\n import re\n\n # The pattern:\n # 1. match the beginnning of line \"^\"\n # 2. capture 1 or more digits ( \\d+ )\n # 3. match the \"_\"\n # 4. capture any character (as few as possible ): (.*?)\n # 5. match \".xml\"\n # 6. match the end of line \"$\"\n pattern = re.compile( r\"^(\\d+)_(.*?)\\.xml$\" )\n\n # dict holding the items (assuming ordering is irrelevant)\n items = {}\n\n # 1. Loop through all filenames in said folder\n for file in listdir( join( DATA_ROOT, month_name ) ):\n\n match = pattern.match( file )\n if not match:\n continue\n\n date, name = match.groups()\n\n # keep a list of filenames\n items.setdefault( name, set() ).add( file )\n\n return items\nif __name__ == \"__main__\":\n from pprint import pprint\n\n data = folder_items_no_regex( \"02\" )\n\n print \"--- The dict ---------------\"\n pprint( data )\n\n print \"--- The items --------------\"\n pprint( sorted( data.keys() ) )\n\n print \"--- The files for item1 ---- \"\n pprint( sorted( data[\"item1\"] ) )\n\n\n data = folder_items_regex( \"02\" )\n\n print \"--- The dict ---------------\"\n pprint( data )\n\n print \"--- The items --------------\"\n pprint( sorted( data.keys() ) )\n\n print \"--- The files for item1 ---- \"\n pprint( sorted( data[\"item1\"] ) )\n\n", "Assuming that item names have a fixed length prefix and suffix (ie, a 3 character prefix such as '01_' and a 4 character suffix of '.xml'), you could solve the first part of the problem like this:\nnames = set(name[3:-4] for name in os.listdir('01') if name.endswith('.xml')]\n\nThat will get you unique item names.\nTo filter each item, simply look for files that end with that item's name and sort it if required.\nitem_suffix = '_item2.xml'\nfiltered = sorted(name for name in os.listdir('01') if name.endswith(item_suffix))\n\n", "Not sure exactly what you want to do, but here are some pointers that might be useful\n\ncreating filenames (\"%02d\" means pad left with zeros)\nfoldernames = [\"%02d\"%i for i in range(1,13)]\nfilenames = [\"%02d\"%i for i in range(1,24)]\n\nuse os.path.join for building up complex paths instead of string concatenation\nos.path.join(foldername,filename)\n\n\nos.path.exists for checking whether a file exists first\nif os.path.exists(newname):\n print \"file already exists\"\n\n\nfor listing directory contents, use glob\nfrom glob import glob\nxmlfiles = glob(\"*.xml\")\n\n\nuse shutil for higher level operations like creating folders, renaming files\nshutil.move(oldname,newname)\n\nbasename to get a file name from a full path\nfilename 
= os.path.basename(fullpath)\n" ]
[ 5, 0, 0 ]
[]
[]
[ "directory", "file", "pattern_matching", "python" ]
stackoverflow_0001699552_directory_file_pattern_matching_python.txt
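Combining the pointers from the last answer into one runnable piece (directory layout as described in the question; the helper name is made up):

import os
from collections import defaultdict

def items_for_month(rootfolder, month):
    # Map item name -> sorted list of that item's hourly xml paths for one month.
    folder = os.path.join(rootfolder, month)       # e.g. rootfolder/01
    items = defaultdict(list)
    for name in sorted(os.listdir(folder)):
        if not name.endswith('.xml'):
            continue
        hour, _, item = name[:-4].partition('_')   # '13_itemname' -> ('13', '_', 'itemname')
        if hour.isdigit() and item:
            items[item].append(os.path.join(folder, name))
    return items

january = items_for_month('rootfolder', '01')
print(sorted(january.keys()))    # requirement 1: the item names for a month
print(january['item1'])          # requirement 2: item1's files, hour by hour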
Q: How to retrieve a directory of files from a remote server? If I have a directory on a remote web server that allows directory browsing, how would I go about fetching all those files listed there from my other web server? I know I can use urllib2.urlopen to fetch individual files, but how would I get a list of all the files in that remote directory? A: If the webserver has directory browsing enabled, it will return an HTML document with links to all the files. You could parse the HTML document and extract all the links. This would give you the list of files. You can use the HTMLParser class to extract the elements you're interested in. Something like this will work: from HTMLParser import HTMLParser import urllib class AnchorParser(HTMLParser): def handle_starttag(self, tag, attrs): if tag == 'a': for key, value in attrs: if key == 'href': print value parser = AnchorParser() data = urllib.urlopen('http://somewhere').read() parser.feed(data) A: Why don't you use curl or wget to recursively download the given page, and limit it to 1 level? You will save all the trouble of writing the script. e.g. something like wget -H -r --level=1 -k -p www.yourpage/dir
How to retrieve a directory of files from a remote server?
If I have a directory on a remote web server that allows directory browsing, how would I go about fetching all those files listed there from my other web server? I know I can use urllib2.urlopen to fetch individual files, but how would I get a list of all the files in that remote directory?
[ "If the webserver has directory browsing enabled, it will return a HTML document with links to all the files. You could parse the HTML document and extract all the links. This would give you the list of files.\nYou can use the HTMLParser class to extract the elements you're interested in. Something like this will work:\nfrom HTMLParser import HTMLParser\nimport urllib\n\nclass AnchorParser(HTMLParser):\n def handle_starttag(self, tag, attrs):\n if tag =='a':\n for key, value in attrs.iteritems()):\n if key == 'href':\n print value\n\nparser = AnchorParser()\ndata = urllib.urlopen('http://somewhere').read()\nparser.feed(data)\n\n", "Why don't you use curl or wget to recursively download the given page, and limit it upto 1 level. You will save all the trouble of writing the script.\ne.g. something like\nwget -H -r --level=1 -k -p www.yourpage/dir\n\n" ]
[ 5, 2 ]
[]
[]
[ "directory", "file", "python", "screen_scraping" ]
stackoverflow_0001699634_directory_file_python_screen_scraping.txt
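A variant of the first answer's parser that collects the links instead of printing them and resolves relative hrefs against the directory URL -- a sketch only, using the Python 2 module names the answer uses (note that handle_starttag receives attrs as a list of (name, value) pairs):

from HTMLParser import HTMLParser
import urllib
import urlparse

class LinkCollector(HTMLParser):
    def __init__(self):
        HTMLParser.__init__(self)   # old-style class in Python 2, so no super()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            for key, value in attrs:
                if key == 'href':
                    self.links.append(value)

def list_remote_dir(url):
    parser = LinkCollector()
    parser.feed(urllib.urlopen(url).read())
    # directory listings usually emit relative hrefs; make them absolute
    return [urlparse.urljoin(url, href) for href in parser.links]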
Q: django, uni_form and python's __init__() function - how to pass arguments to a form? I'm having a bit of difficulty understanding how the python __init__( ) function works. What I'm trying to do is create a new form in django, and use the uni_form helper to display the form in a custom manner using fieldsets, however I'm passing an argument to the form that should slightly change the layout of the form and I can't figure out how to make this work. Here's my code: class MyForm(forms.Form): name = forms.CharField(label=_("Your name"), max_length=100, widget=forms.TextInput()) city = forms.CharField(label=_("Your city"), max_length=100, widget=forms.TextInput()) postal_code = forms.CharField(label=_("Postal code"), max_length=7, widget=forms.TextInput(), required=False) def __init__(self, city, *args, **kwargs): super(MyForm, self).__init__(*args, **kwargs) if city == "Vancouver": self.canada = True if self.canada: # create an extra uni_form fieldset that shows the postal code field else: # create the form without the postal code field However, the reason this isn't working for me. self.canada never seems to have any value outside of __init__, and therefore even though I passed that argument to the function I can't use the value in my class. I found a workaround for this, which is to create the form entirely inside __init__ using self.fields, but this is ugly. How do I use self.canada outside of __init__? A: You have misunderstood the way classes work in Python. You're trying to run code inside a class but outside of any function, which is unlikely to work, especially if it depends on something that happens inside __init__. That code will be evaluated when the class is first imported, whereas __init__ happens when each form is instantiated. The best way would surely be to include the fieldsets in the form, but simply don't display them when canada is True. Your __init__ code can set those fields to required=False dependent on that value, so you don't get validation errors.
django, uni_form and python's __init__() function - how to pass arguments to a form?
I'm having a bit of difficulty understanding how Python's __init__() function works. What I'm trying to do is create a new form in Django, and use the uni_form helper to display the form in a custom manner using fieldsets; however, I'm passing an argument to the form that should slightly change the layout of the form and I can't figure out how to make this work. Here's my code: class MyForm(forms.Form): name = forms.CharField(label=_("Your name"), max_length=100, widget=forms.TextInput()) city = forms.CharField(label=_("Your city"), max_length=100, widget=forms.TextInput()) postal_code = forms.CharField(label=_("Postal code"), max_length=7, widget=forms.TextInput(), required=False) def __init__(self, city, *args, **kwargs): super(MyForm, self).__init__(*args, **kwargs) if city == "Vancouver": self.canada = True if self.canada: # create an extra uni_form fieldset that shows the postal code field else: # create the form without the postal code field However, this isn't working for me: self.canada never seems to have any value outside of __init__, and therefore even though I passed that argument to the function I can't use the value in my class. I found a workaround for this, which is to create the form entirely inside __init__ using self.fields, but this is ugly. How do I use self.canada outside of __init__?
[ "You have misunderstood the way classes work in Python. You're trying to run code inside a class but outside of any function, which is unlikely to work, especially if it depends on something that happens inside __init__. That code will be evaluated when the class is first imported, whereas __init__ happens when each form is instantiated.\nThe best way would surely be to include the fieldsets in the form, but simply don't display them when canada is True. Your __init__ code can set those fields to required=False dependent on that value, so you don't get validation errors.\n" ]
[ 4 ]
[]
[]
[ "django", "django_forms", "init", "python" ]
stackoverflow_0001700043_django_django_forms_init_python.txt
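A minimal sketch of what the answer describes: declare every field on the class, and let __init__ drop or tighten the ones the city argument rules out (field names taken from the question; this is one plausible shape, not the only one):

from django import forms

class MyForm(forms.Form):
    name = forms.CharField(max_length=100)
    city = forms.CharField(max_length=100)
    postal_code = forms.CharField(max_length=7, required=False)

    def __init__(self, city, *args, **kwargs):
        super(MyForm, self).__init__(*args, **kwargs)
        self.canada = (city == 'Vancouver')      # plain instance attribute, usable anywhere
        if self.canada:
            self.fields['postal_code'].required = True
        else:
            del self.fields['postal_code']       # non-Canadian form has no such field

# a template (or a uni_form layout helper) can then branch on form.canada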
Q: Easy Python ASync. Precompiler? imagine you have an io heavy function like this: def getMd5Sum(path): with open(path) as f: return md5(f.read()).hexdigest() Do you think Python is flexible enough to allow code like this (notice the $): def someGuiCallback(filebutton): ... path = filebutton.getPath() md5sum = $getMd5Sum() showNotification("Md5Sum of file: %s" % md5sum) ... To be executed something like this: def someGuiCallback_1(filebutton): ... path = filebutton.getPath() Thread(target=someGuiCallback_2, args=(path,)).start() def someGuiCallback_2(path): md5sum = getMd5Sum(path) glib.idle_add(someGuiCallback_3, md5sum) def someGuiCallback_3(md5sum): showNotification("Md5Sum of file: %s" % md5sum) ... (glib.idle_add just pushes a function onto the queue of the main thread) I've thought about using decoraters, but they don't allow me to access the 'content' of the function after the call. (the showNotification part) I guess I could write a 'compiler' to change the code before execution, but it doesn't seam like the optimal solution. Do you have any ideas, on how to do something like the above? A: You can use import hooks to achieve this goal... PEP 302 - New Import Hooks PEP 369 - Post Import Hooks ... but I'd personally view it as a little bit nasty. If you want to go down that route though, essentially what you'd be doing is this: You add an import hook for an extension (eg ".thpy") That import hook is then responsible for (essentially) passing some valid code as a result of the import. That valid code is given arguments that effectively relate to the file you're importing. That means your precompiler can perform whatever transformations you like to the source on the way in. On the downside: Whilst using import hooks in this way will work, it will surprise the life out of any maintainer or your code. (Bad idea IMO) The way you do this relies upon imputil - which has been removed in python 3.0, which means your code written this way has a limited lifetime. Personally I wouldn't go there, but if you do, there's an issue of the Python Magazine where doing this sort of thing is covered in some detail, and I'd advise getting a back issue of that to read up on it. (Written by Paul McGuire, April 2009 issue, probably available as PDF). Specifically that uses imputil and pyparsing as it's example, but the principle is the same. A: How about something like this: def performAsync(asyncFunc, notifyFunc): def threadProc(): retValue = asyncFunc() glib.idle_add(notifyFunc, retValue) Thread(target=threadProc).start() def someGuiCallback(filebutton): path = filebutton.getPath() performAsync( lambda: getMd5Sum(path), lambda md5sum: showNotification("Md5Sum of file: %s" % md5sum) ) A bit ugly with the lambdas, but it's simple and probably more readable than using precompiler tricks. A: Sure you can access function code (already compiled) from decorator, disassemble and hack it. You can even access the source of module it's defined in and recompile it. But I think this is not necessary. 
Below is an example using a decorated generator, where the yield statement serves as a delimiter between the synchronous and asynchronous parts: from threading import Thread import hashlib def async(gen): def func(*args, **kwargs): it = gen(*args, **kwargs) result = it.next() Thread(target=lambda: list(it)).start() return result return func @async def test(text): # synchronous part (empty in this example) yield # Use "yield value" if you need to return a meaningful value # asynchronous part[s] digest = hashlib.md5(text).hexdigest() print digest
Easy Python ASync. Precompiler?
Imagine you have an IO-heavy function like this: def getMd5Sum(path): with open(path) as f: return md5(f.read()).hexdigest() Do you think Python is flexible enough to allow code like this (notice the $): def someGuiCallback(filebutton): ... path = filebutton.getPath() md5sum = $getMd5Sum() showNotification("Md5Sum of file: %s" % md5sum) ... To be executed something like this: def someGuiCallback_1(filebutton): ... path = filebutton.getPath() Thread(target=someGuiCallback_2, args=(path,)).start() def someGuiCallback_2(path): md5sum = getMd5Sum(path) glib.idle_add(someGuiCallback_3, md5sum) def someGuiCallback_3(md5sum): showNotification("Md5Sum of file: %s" % md5sum) ... (glib.idle_add just pushes a function onto the queue of the main thread) I've thought about using decorators, but they don't allow me to access the 'content' of the function after the call. (the showNotification part) I guess I could write a 'compiler' to change the code before execution, but it doesn't seem like the optimal solution. Do you have any ideas on how to do something like the above?
[ "You can use import hooks to achieve this goal...\n\nPEP 302 - New Import Hooks\nPEP 369 - Post Import Hooks\n\n... but I'd personally view it as a little bit nasty.\nIf you want to go down that route though, essentially what you'd be doing is this:\n\nYou add an import hook for an extension (eg \".thpy\")\nThat import hook is then responsible for (essentially) passing some valid code as a result of the import.\nThat valid code is given arguments that effectively relate to the file you're importing.\nThat means your precompiler can perform whatever transformations you like to the source on the way in.\n\nOn the downside:\n\nWhilst using import hooks in this way will work, it will surprise the life out of any maintainer or your code. (Bad idea IMO)\nThe way you do this relies upon imputil - which has been removed in python 3.0, which means your code written this way has a limited lifetime.\n\nPersonally I wouldn't go there, but if you do, there's an issue of the Python Magazine where doing this sort of thing is covered in some detail, and I'd advise getting a back issue of that to read up on it. (Written by Paul McGuire, April 2009 issue, probably available as PDF).\nSpecifically that uses imputil and pyparsing as it's example, but the principle is the same.\n", "How about something like this:\ndef performAsync(asyncFunc, notifyFunc):\n def threadProc():\n retValue = asyncFunc()\n glib.idle_add(notifyFunc, retValue)\n Thread(target=threadProc).start()\n\ndef someGuiCallback(filebutton):\n path = filebutton.getPath()\n performAsync(\n lambda: getMd5Sum(path),\n lambda md5sum: showNotification(\"Md5Sum of file: %s\" % md5sum)\n )\n\nA bit ugly with the lambdas, but it's simple and probably more readable than using precompiler tricks.\n", "Sure you can access function code (already compiled) from decorator, disassemble and hack it. You can even access the source of module it's defined in and recompile it. But I think this is not necessary. Below is an example using decorated generator, where yield statement serves as a delimiter between synchronous and asynchronous parts:\nfrom threading import Thread\nimport hashlib\n\ndef async(gen):\n def func(*args, **kwargs):\n it = gen(*args, **kwargs)\n result = it.next()\n Thread(target=lambda: list(it)).start()\n return result\n return func\n\n@async\ndef test(text):\n # synchronous part (empty in this example)\n yield # Use \"yield value\" if you need to return meaningful value\n # asynchronous part[s]\n digest = hashlib.md5(text).hexdigest()\n print digest\n\n" ]
[ 2, 1, 0 ]
[]
[]
[ "asynchronous", "compiler_construction", "python", "syntax" ]
stackoverflow_0001696152_asynchronous_compiler_construction_python_syntax.txt
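The decorator-and-generator pattern in the last answer can be exercised without a GUI at all. Below is a minimal usage sketch in the same spirit; notify() is a hypothetical stand-in for showNotification, and in a real GTK program the call made on the worker thread would have to be marshalled back with glib.idle_add rather than invoked directly:

from threading import Thread
import hashlib

def async(gen):
    # Run the generator up to its first yield synchronously,
    # then finish the rest of it on a worker thread.
    def func(*args, **kwargs):
        it = gen(*args, **kwargs)
        result = it.next()
        Thread(target=lambda: list(it)).start()
        return result
    return func

def notify(msg):                # hypothetical stand-in for showNotification
    print "NOTIFY:", msg

@async
def checksum(path):
    yield                       # code above the yield runs in the caller's thread
    with open(path) as f:       # code below it runs on the worker thread
        digest = hashlib.md5(f.read()).hexdigest()
    notify("Md5Sum of file: %s" % digest)

checksum("/etc/hosts")          # returns immediately; notify() fires later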
Q: Python XPath Result displaying only [] Hey I have just started to use Python recently and I want to use it with a bit of xPath, the thing is when I print the result of the query I only get [] and I don't know why =S import libxml2, urllib doc = libxml2.parseDoc(urllib.urlopen("http://www.domain.com/").read()) result = doc.xpathEval("//th//td[(((count(preceding-sibling::*) + 1) = 2) and parent::*)]//a") if result != []: print result elif result == "": print "null" else: print result doc.freeDoc() I get no error whatsoever just a []. What could it be? also is there any better documentation for libxml2 than the one here since I find it reaaaally confusing =S Edit I changed the code, so now I get more than the [] I get the following output which should be related to the non-validity of the html I'm trying to parse (but it's not mine so I can't modify it). Any ideas on to how to tell Python to be more forgiving with that fact? ^ Entity: line 3552: parser error : Premature end of data in tag tr line 209 ^ Entity: line 3552: parser error : Premature end of data in tag tbody line 208 ^ Entity: line 3552: parser error : Premature end of data in tag table line 207 ^ Entity: line 3552: parser error : Premature end of data in tag input line 206 ^ Entity: line 3552: parser error : Premature end of data in tag input line 205 ^ Entity: line 3552: parser error : Premature end of data in tag form line 204 ^ Entity: line 3552: parser error : Premature end of data in tag table line 99 ^ Entity: line 3552: parser error : Premature end of data in tag div line 98 ^ Entity: line 3552: parser error : Premature end of data in tag body line 96 ^ Entity: line 3552: parser error : Premature end of data in tag html line 3 ^ Traceback (most recent call last): File "C:\Python26\lib\site-packages\libxml2.py", line 1263, in parseDoc if ret is None:raise parserError('xmlParseDoc() failed') libxml2.parserError: xmlParseDoc() failed It's actually a longer list but there's no point in placing it all here, since all errors are due to invalid html. A: It could be that your XPath doesn't select any elements. For example, you are looking for td's inside th's, but those elements are peers, and shouldn't nest. Why do you say (count(preceding-sibling::*) + 1) = 2 instead of count(preceding-sibling::*) = 1? If you use a simpler XPath, do you get the results you expect? A: Are you confusing th and tr? Change your th to tr. A: Side note: Where does all that unnecessary complexity in your XPath come from? This: //th//td[(((count(preceding-sibling::*) + 1) = 2) and parent::*)]//a is equivalent to: //th//td[count(preceding-sibling::*) = 1)]//a and very probably even to: //th/td[2]//a
Python XPath Result displaying only []
Hey I have just started to use Python recently and I want to use it with a bit of xPath, the thing is when I print the result of the query I only get [] and I don't know why =S import libxml2, urllib doc = libxml2.parseDoc(urllib.urlopen("http://www.domain.com/").read()) result = doc.xpathEval("//th//td[(((count(preceding-sibling::*) + 1) = 2) and parent::*)]//a") if result != []: print result elif result == "": print "null" else: print result doc.freeDoc() I get no error whatsoever just a []. What could it be? also is there any better documentation for libxml2 than the one here since I find it reaaaally confusing =S Edit I changed the code, so now I get more than the [] I get the following output which should be related to the non-validity of the html I'm trying to parse (but it's not mine so I can't modify it). Any ideas on to how to tell Python to be more forgiving with that fact? ^ Entity: line 3552: parser error : Premature end of data in tag tr line 209 ^ Entity: line 3552: parser error : Premature end of data in tag tbody line 208 ^ Entity: line 3552: parser error : Premature end of data in tag table line 207 ^ Entity: line 3552: parser error : Premature end of data in tag input line 206 ^ Entity: line 3552: parser error : Premature end of data in tag input line 205 ^ Entity: line 3552: parser error : Premature end of data in tag form line 204 ^ Entity: line 3552: parser error : Premature end of data in tag table line 99 ^ Entity: line 3552: parser error : Premature end of data in tag div line 98 ^ Entity: line 3552: parser error : Premature end of data in tag body line 96 ^ Entity: line 3552: parser error : Premature end of data in tag html line 3 ^ Traceback (most recent call last): File "C:\Python26\lib\site-packages\libxml2.py", line 1263, in parseDoc if ret is None:raise parserError('xmlParseDoc() failed') libxml2.parserError: xmlParseDoc() failed It's actually a longer list but there's no point in placing it all here, since all errors are due to invalid html.
[ "It could be that your XPath doesn't select any elements. For example, you are looking for td's inside th's, but those elements are peers, and shouldn't nest.\nWhy do you say (count(preceding-sibling::*) + 1) = 2 instead of count(preceding-sibling::*) = 1?\nIf you use a simpler XPath, do you get the results you expect?\n", "Are you confusing th and tr? Change your th to tr.\n", "Side note: Where does all that unnecessary complexity in your XPath come from? This:\n\n//th//td[(((count(preceding-sibling::*) + 1) = 2) and parent::*)]//a\n\nis equivalent to:\n\n//th//td[count(preceding-sibling::*) = 1)]//a\n\nand very probably even to:\n\n//th/td[2]//a\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "libxml2", "python", "xpath" ]
stackoverflow_0001694427_libxml2_python_xpath.txt
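Since all of those parser errors come from feeding real-world HTML to libxml2's strict XML parser, the usual fix is to switch to its forgiving HTML parser. A minimal sketch, assuming the Python binding exposes htmlParseDoc the same way it exposes parseDoc (check your libxml2 version), and using the simplified //tr/td[2]//a expression suggested in the answers:

import libxml2, urllib

html = urllib.urlopen("http://www.domain.com/").read()
# The HTML parser recovers from unclosed tags instead of aborting.
doc = libxml2.htmlParseDoc(html, None)
result = doc.xpathEval("//tr/td[2]//a")
print [node.content for node in result]
doc.freeDoc()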
Q: Editing both sides of M2M in Admin Page First I'll lay out what I'm trying to achieve in case there's a different way to go about it! I want to be able to edit both sides of an M2M relationship (preferably on the admin page although if needs be it could be on a normal page) using any of the multi select interfaces. The problem obviously comes with the reverse side, as the main side (where the relationship is defined) works just fine automagically. I have tried some of the advice here to get an inline to appear and that works but it's not a very nice interface. The advice I got on the django mailing list was to use a custom ModelForm. I've got as far as getting a multiselect box to appear but it doesn't seem to be "connected" to anything as it does not start with anything selected and does not save any changes that are made. Here are the appropriate snippets of code: #models.py class Tag(models.Model): name = models.CharField(max_length=200) class Project(models.Model): name = models.CharField(max_length=200) description = models.TextField() tags = models.ManyToManyField(Tag, related_name='projects') #admin.py class TagForm(ModelForm): fields = ('name', 'projects') projects = ModelMultipleChoiceField(Project.objects.all(), widget=SelectMultiple()) class Meta: model = Tag class TagAdmin(admin.ModelAdmin): fields = ('name', 'projects') form = TagForm Any help would be much appreciated, either getting the code above to work or by providing a better way to do it! DavidM A: The reason why nothing happens automatically is that the "projects" field is not a part of the Tag model. Which means you have to do all the work yourself. Something like (in TagForm): def __init__(self, *args, **kwargs): super(TagForm, self).__init__(*args, **kwargs) if 'instance' in kwargs: self.fields['projects'].initial = self.instance.project_set.all() def save(self, *args, **kwargs): super(TagForm, self).save(*args, **kwargs) self.instance.project_set.clear() for project in self.cleaned_data['projects']: self.instance.project_set.add(project) Note that the code is untested so you might need to tweak it some to get it to work.
Editing both sides of M2M in Admin Page
First I'll lay out what I'm trying to achieve in case there's a different way to go about it! I want to be able to edit both sides of an M2M relationship (preferably on the admin page although if needs be it could be on a normal page) using any of the multi select interfaces. The problem obviously comes with the reverse side, as the main side (where the relationship is defined) works just fine automagically. I have tried some of the advice here to get an inline to appear and that works but it's not a very nice interface. The advice I got on the django mailing list was to use a custom ModelForm. I've got as far as getting a multiselect box to appear but it doesn't seem to be "connected" to anything as it does not start with anything selected and does not save any changes that are made. Here are the appropriate snippets of code: #models.py class Tag(models.Model): name = models.CharField(max_length=200) class Project(models.Model): name = models.CharField(max_length=200) description = models.TextField() tags = models.ManyToManyField(Tag, related_name='projects') #admin.py class TagForm(ModelForm): fields = ('name', 'projects') projects = ModelMultipleChoiceField(Project.objects.all(), widget=SelectMultiple()) class Meta: model = Tag class TagAdmin(admin.ModelAdmin): fields = ('name', 'projects') form = TagForm Any help would be much appreciated, either getting the code above to work or by providing a better way to do it! DavidM
[ "The reason why nothing happens automatically is that the \"projects\" field is not a part of the Tag model. Which means you have to do all the work yourself. Something like (in TagForm):\ndef __init__(self, *args, **kwargs):\n super(TagForm, self).__init__(*args, **kwargs)\n if 'instance' in kwargs:\n self.fields['projects'].initial = self.instance.project_set.all()\n\ndef save(self, *args, **kwargs):\n super(TagForm, self).save(*args, **kwargs)\n self.instance.project_set.clear()\n for project in self.cleaned_data['projects']:\n self.instance.project_set.add(project)\n\nNote that the code is untested so you might need to tweek it some to get it to work.\n" ]
[ 2 ]
[]
[]
[ "django", "django_admin", "django_forms", "m2m", "python" ]
stackoverflow_0001700202_django_django_admin_django_forms_m2m_python.txt
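Pulling the answer's pieces into one place, an assembled TagForm might look like the sketch below. It is untested, like the answer it extends, and it assumes the form is saved with commit=True so the instance has a primary key. Note also that because the question declares related_name='projects' on the ManyToManyField, the reverse manager is tag.projects rather than the default tag.project_set used in the answer, so adjust to whichever name your models actually expose (model imports omitted):

from django import forms

class TagForm(forms.ModelForm):
    projects = forms.ModelMultipleChoiceField(
        queryset=Project.objects.all(), required=False)

    class Meta:
        model = Tag

    def __init__(self, *args, **kwargs):
        super(TagForm, self).__init__(*args, **kwargs)
        if self.instance.pk:
            self.fields['projects'].initial = self.instance.projects.all()

    def save(self, *args, **kwargs):
        instance = super(TagForm, self).save(*args, **kwargs)
        instance.projects.clear()
        for project in self.cleaned_data['projects']:
            instance.projects.add(project)
        return instance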
Q: How to unquote URL quoted UTF-8 strings in Python thestring = urllib.quote(thestring.encode('utf-8')) This will encode it. How to decode it? A: What about backtonormal = urllib.unquote(thestring) A: If you mean to decode a string from utf-8, you can first transform the string to unicode and then to any other encoding you would like (or leave it in unicode), like this: unicodethestring = unicode(thestring, 'utf-8') latin1thestring = unicodethestring.encode('latin-1','ignore') 'ignore' meaning that if you encounter a character that is not in the latin-1 character set you ignore this character.
How to unquote URL quoted UTF-8 strings in Python
thestring = urllib.quote(thestring.encode('utf-8')) This will encode it. How to decode it?
[ "What about\nbacktonormal = urllib.unquote(thestring)\n\n", "if you mean to decode a string from utf-8, you can first transform the string to unicode and then to any other encoding you would like (or leave it in unicode), like this\nunicodethestring = unicode(thestring, 'utf-8')\nlatin1thestring = unicodethestring.encode('latin-1','ignore')\n\n'ignore' meaning that if you encounter a character that is not in the latin-1 character set you ignore this character.\n" ]
[ 6, 2 ]
[]
[]
[ "python" ]
stackoverflow_0001700427_python.txt
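A round trip makes the pairing explicit: quote operates on UTF-8 bytes, so unquote hands back bytes that still need .decode('utf-8') to become a unicode string again.

import urllib

thestring = u'caf\xe9 50\u20ac'
quoted = urllib.quote(thestring.encode('utf-8'))
print quoted                                    # caf%C3%A9%2050%E2%82%AC
restored = urllib.unquote(quoted).decode('utf-8')
assert restored == thestring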
Q: How to close not responsive Win32 Internet Explorer COM interface? Actually this is not a hang status; I mean it's a slow response, so in that case I would like to close IE and restart from the start. Closing is no problem; the problem is how to set a timeout. For example, if I set 15 sec and the webpage does not open in less than 15 sec, I want to close it and restart from the start. Is this possible with the IE COM interface? It's really hard to find a solution. Paul, I used the following code to check whether a webpage is completely open or not. But as I mentioned, it is not working well, because IE.Navigate looks like it hangs or does not respond. while ie.ReadyState != 4: time.sleep(0.5) A: To avoid the blocking problem, use the IE COM object in a thread. Here is a simple but powerful example demonstrating how you can use a thread and the IE COM object together. You can improve it for your purpose. This example starts a thread and uses a queue to communicate with the main thread; in the main thread the user can add URLs to the queue, and the IE thread visits them one by one - after it finishes one URL, IE visits the next. As the IE COM object is being used in a thread, you need to call CoInitialize from threading import Thread from Queue import Queue from win32com.client import Dispatch import pythoncom import time class IEThread(Thread): def __init__(self): Thread.__init__(self) self.queue = Queue() def run(self): ie = None # as IE Com object will be used in thread, do CoInitialize pythoncom.CoInitialize() try: ie = Dispatch("InternetExplorer.Application") ie.Visible = 1 while 1: url = self.queue.get() print "Visiting...",url ie.Navigate(url) while ie.Busy: time.sleep(0.1) except Exception,e: print "Error in IEThread:",e if ie is not None: ie.Quit() ieThread = IEThread() ieThread.start() while 1: url = raw_input("enter url to visit:") if url == 'q': break ieThread.queue.put(url)
How to close not responsive Win32 Internet Explorer COM interface?
Actually this is not a hang status; I mean it's a slow response, so in that case I would like to close IE and restart from the start. Closing is no problem; the problem is how to set a timeout. For example, if I set 15 sec and the webpage does not open in less than 15 sec, I want to close it and restart from the start. Is this possible with the IE COM interface? It's really hard to find a solution. Paul, I used the following code to check whether a webpage is completely open or not. But as I mentioned, it is not working well, because IE.Navigate looks like it hangs or does not respond. while ie.ReadyState != 4: time.sleep(0.5)
[ "To avoid blocking problem use IE COM object in a thread.\nHere is a simple but powerful example demonstrating how can you use thread and IE com object together. You can improve it for your purpose.\nThis example starts a thread a uses a queue to communicate with main thread, in main thread user can add urls to queue, and IE thread visits them one by one, after he finishes one url, IE visits next. As IE COM object is being used in a thread you need to call Coinitialize\nfrom threading import Thread\nfrom Queue import Queue\nfrom win32com.client import Dispatch\nimport pythoncom\nimport time\n\nclass IEThread(Thread):\n def __init__(self):\n Thread.__init__(self)\n self.queue = Queue()\n\n def run(self):\n ie = None\n # as IE Com object will be used in thread, do CoInitialize\n pythoncom.CoInitialize()\n try:\n ie = Dispatch(\"InternetExplorer.Application\")\n ie.Visible = 1\n while 1:\n url = self.queue.get()\n print \"Visiting...\",url\n ie.Navigate(url)\n while ie.Busy:\n time.sleep(0.1)\n except Exception,e:\n print \"Error in IEThread:\",e\n\n if ie is not None:\n ie.Quit()\n\n\nieThread = IEThread()\nieThread.start()\nwhile 1:\n url = raw_input(\"enter url to visit:\")\n if url == 'q':\n break\n ieThread.queue.put(url)\n\n" ]
[ 0 ]
[]
[]
[ "python", "win32com" ]
stackoverflow_0001700551_python_win32com.txt
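For the 15-second limit the question asks about, the busy-wait only needs a deadline. A rough sketch along those lines is below; note that a truly hung IE instance may also refuse to Quit cleanly, in which case killing the iexplore.exe process is the only remaining option:

import time
from win32com.client import Dispatch

def fetch_with_timeout(url, timeout=15.0):
    ie = Dispatch("InternetExplorer.Application")
    ie.Navigate(url)
    deadline = time.time() + timeout
    while ie.ReadyState != 4:           # 4 == READYSTATE_COMPLETE
        if time.time() > deadline:
            ie.Quit()                   # give up so the caller can retry
            return False
        time.sleep(0.5)
    return True

while not fetch_with_timeout("http://www.example.com/"):
    pass                                # restart from the start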
Q: IPC between Python and C# I want to pass data between a Python and a C# application in Windows (I want the channel to be bi-directional). In fact I wanna pass a struct containing data about a network packet that I've captured with C# (SharpPcap) to the Python app and then send back a modified packet to the C# program. What do you propose? (I'd rather it be a fast method) My searches so far revealed that I can use these technologies, but I don't know which: JSON-RPC Use WCF (run the project under IronPython using Ironclad) WCF (use Python for .NET) A: Use JSON-RPC because the experience that you gain will have more practical use. JSON is widely used in web applications written in all of the dozen or so most popular languages. A: Why not use a simple socket communication, or if you wish you can start a simple http server, and/or do json-rpc over it.
IPC between Python and C#
I want to pass data between a Python and a C# application in Windows (I want the channel to be bi-directional). In fact I wanna pass a struct containing data about a network packet that I've captured with C# (SharpPcap) to the Python app and then send back a modified packet to the C# program. What do you propose? (I'd rather it be a fast method) My searches so far revealed that I can use these technologies, but I don't know which: JSON-RPC Use WCF (run the project under IronPython using Ironclad) WCF (use Python for .NET)
[ "Use JSON-RPC because the experience that you gain will have more practical use. JSON is widely used in web applications written in all of the dozen or so most popular languages.\n", "Why not use a simple socket communication, or if you wish you can start a simple http server, and/or do json-rpc over it.\n" ]
[ 2, 2 ]
[]
[]
[ "bidirectional", "c#", "ipc", "python", "rpc" ]
stackoverflow_0001700228_bidirectional_c#_ipc_python_rpc.txt
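A minimal sketch of the plain-socket route on the Python side, using newline-delimited JSON; the port, framing, and field names here are arbitrary illustration choices, not anything mandated by SharpPcap or .NET:

import json, socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 9009))
srv.listen(1)
conn, addr = srv.accept()
reader = conn.makefile()
for line in reader:                     # one JSON document per line from C#
    packet = json.loads(line)
    packet["modified"] = True           # stand-in for the real packet logic
    conn.sendall(json.dumps(packet) + "\n")

The C# side would open a TcpClient to the same port and serialize its packet struct to one JSON document per line.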
Q: Is it possible to redefine reverse in a Django project? I have some custom logic that needs to be executed every single time a URL is reversed, even for third-party apps. My project is a multitenant web app, and the tenant is identified based on the URL. There isn't a single valid URL that doesn't include a tenant identifier. I already have a wrapper function around reverse, but now I need a way to tell every installed app to use it. The wrapper around reverse uses a thread-local to inject the identifier into the resulting URL. I could write this function as a decorator on reverse, but I don't know where to do the actual decoration. Moderately Firm Constraint: I'm already using 3 3rd-party apps, and I'll probably add more. A solution should not require me to modify the source code of all these third-party apps. I don't relish the idea of maintaining patches on top of multiple 3rd-party source trees if there is an easier way. I can make the documentation abundantly clear that reverse has been decorated. The Original Question: Where could I make such a change that guarantees it would apply to every invocation of reverse? Possible Alternate Question: What's a better way of making sure that every URL—including those generated by 3rd-party apps—gets the tenant identifier? BTW, I'm open to a better way to handle any of this except the embedding of the tenant-id in the URL; that decision is pretty set in stone right now. Thanks. A: The only way so that Django's reverse is replaced by ur_reverse is django.core.urlresolvers.reverse = ur_reverse or if you like decorator syntactic sugar django.core.urlresolvers.reverse = ur_reverse_decorator(django.core.urlresolvers.reverse ) which I would not advise (and many will shout), unless you are not willing to change every usage of reverse with ur_reverse
Is it possible to redefine reverse in a Django project?
I have some custom logic that needs to be executed every single time a URL is reversed, even for third-party apps. My project is a multitenant web app, and the tenant is identified based on the URL. There isn't a single valid URL that doesn't include a tenant identifier. I already have a wrapper function around reverse, but now I need a way to tell every installed app to use it. The wrapper around reverse uses a thread-local to inject the identifier into the resulting URL. I could write this function as a decorator on reverse, but I don't know where to do the actual decoration. Moderately Firm Constraint: I'm already using 3 3rd-party apps, and I'll probably add more. A solution should not require me to modify the source code of all these third-party apps. I don't relish the idea of maintaining patches on top of multiple 3rd-party source trees if there is an easier way. I can make the documentation abundantly clear that reverse has been decorated. The Original Question: Where could I make such a change that guarantees it would apply to every invocation of reverse? Possible Alternate Question: What's a better way of making sure that every URL—including those generated by 3rd-party apps—gets the tenant identifier? BTW, I'm open to a better way to handle any of this except the embedding of the tenant-id in the URL; that decision is pretty set in stone right now. Thanks.
[ "only way so that django reverse is replaced by ur_reverse is\ndjango.core.urlresolvers.reverse = ur_reverse\n\nor if you like decorator syntactic sugar\ndjango.core.urlresolvers.reverse = ur_reverse_decorator(django.core.urlresolvers.reverse )\n\nwhich i would not advice(and many will shout), unless you are not willing to change every usage of reverse with ur_reverse\n" ]
[ 5 ]
[]
[]
[ "decorator", "django", "monkeypatching", "python", "reverse" ]
stackoverflow_0001700577_decorator_django_monkeypatching_python_reverse.txt
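A sketch of that monkeypatch combined with the thread-local the question mentions is below. The tenant attribute and URL shape are placeholders, and one real caveat applies: any module that already did from django.core.urlresolvers import reverse before the patch runs keeps a reference to the old function, so apply the patch as early as possible (e.g. at settings import time):

import threading
from django.core import urlresolvers

_state = threading.local()              # middleware sets _state.tenant per request
_plain_reverse = urlresolvers.reverse

def tenant_reverse(*args, **kwargs):
    url = _plain_reverse(*args, **kwargs)
    tenant = getattr(_state, "tenant", None)
    return "/%s%s" % (tenant, url) if tenant else url

urlresolvers.reverse = tenant_reverse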
Q: embed python in matlab mex file on os x I'm trying to embed Python into a MATLAB mex function on OS X. I've seen references that this can be done (eg here) but I can't find any OS X specific information. So far I can successfully build an embedded Python (so my linker flags must be OK) and I can also build example mex files without any trouble and with the default options: jm-g26b101:mex robince$ cat pytestnomex.c #include <Python/Python.h> int main() { Py_Initialize(); PyRun_SimpleString("print 'hello'"); Py_Finalize(); return 0; } jm-g26b101:mex robince$ gcc -arch i386 pytestnomex.c -I/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5 -L/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/config -ldl -lpython2.5 jm-g26b101:mex robince$ ./a.out hello But when I try to build a mex file that embeds Python I run into a problem with undefined symbol main. Here is my mex function: #include <Python.h> #include <mex.h> void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray*prhs[]) { mexPrintf("hello1\n"); Py_Initialize(); PyRun_SimpleString("print 'hello from python'"); Py_Finalize(); } Here are the mex compilation steps: jm-g26b101:mex robince$ gcc -c -I/Applications/MATLAB_R2009a.app/extern/include -I/Applications/MATLAB_R2009a.app/simulink/include -DMATLAB_MEX_FILE -arch i386 -I/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5 -DMX_COMPAT_32 -O2 -DNDEBUG "pytest.c" jm-g26b101:mex robince$ gcc -O -arch i386 -L/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/config -ldl -lpython2.5 -o "pytest.mexmaci" pytest.o -L/Applications/MATLAB_R2009a.app/bin/maci -lmx -lmex -lmat -lstdc++ Undefined symbols: "_main", referenced from: start in crt1.10.6.o ld: symbol(s) not found collect2: ld returned 1 exit status I've tried playing around with arch settings (I added -arch i386 it to try and keep everything 32bit - I am using the python.org 32 bit 2.5 build), and the order of the linker flags, but haven't been able to get anywhere. Can't find much online either. Does anyone have any ideas of how I can get this to build? [EDIT: should probably add I'm on OS X 10.6.1 with MATLAB 7.8 (r2009a), Python 2.5.4 (python.org) - I've tried both gcc-4.0 and gcc-4.2 (apple)] A: I think I found the answer - by including the mysterious apple linker flags: -undefined dynamic_lookup -bundle I was able to get it built and it seems to work OK. I'd be very interested if anyone has any references about these flags or library handling on OS X in general. Now I see them I remember being bitten by the same thing in the past - yet I'm unable to find any documentation on what they actually do and why/when they should be needed.
embed python in matlab mex file on os x
I'm trying to embed Python into a MATLAB mex function on OS X. I've seen references that this can be done (eg here) but I can't find any OS X specific information. So far I can successfully build an embedded Python (so my linker flags must be OK) and I can also build example mex files without any trouble and with the default options: jm-g26b101:mex robince$ cat pytestnomex.c #include <Python/Python.h> int main() { Py_Initialize(); PyRun_SimpleString("print 'hello'"); Py_Finalize(); return 0; } jm-g26b101:mex robince$ gcc -arch i386 pytestnomex.c -I/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5 -L/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/config -ldl -lpython2.5 jm-g26b101:mex robince$ ./a.out hello But when I try to build a mex file that embeds Python I run into a problem with undefined symbol main. Here is my mex function: #include <Python.h> #include <mex.h> void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray*prhs[]) { mexPrintf("hello1\n"); Py_Initialize(); PyRun_SimpleString("print 'hello from python'"); Py_Finalize(); } Here are the mex compilation steps: jm-g26b101:mex robince$ gcc -c -I/Applications/MATLAB_R2009a.app/extern/include -I/Applications/MATLAB_R2009a.app/simulink/include -DMATLAB_MEX_FILE -arch i386 -I/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5 -DMX_COMPAT_32 -O2 -DNDEBUG "pytest.c" jm-g26b101:mex robince$ gcc -O -arch i386 -L/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/config -ldl -lpython2.5 -o "pytest.mexmaci" pytest.o -L/Applications/MATLAB_R2009a.app/bin/maci -lmx -lmex -lmat -lstdc++ Undefined symbols: "_main", referenced from: start in crt1.10.6.o ld: symbol(s) not found collect2: ld returned 1 exit status I've tried playing around with arch settings (I added -arch i386 it to try and keep everything 32bit - I am using the python.org 32 bit 2.5 build), and the order of the linker flags, but haven't been able to get anywhere. Can't find much online either. Does anyone have any ideas of how I can get this to build? [EDIT: should probably add I'm on OS X 10.6.1 with MATLAB 7.8 (r2009a), Python 2.5.4 (python.org) - I've tried both gcc-4.0 and gcc-4.2 (apple)]
[ "I think I found the answer - by including the mysterious apple linker flags:\n-undefined dynamic_lookup -bundle\n\nI was able to get it built and it seems to work OK. I'd be very interested if anyone has any references about these flags or library handling on OS X in general. Now I see them I remember being bitten by the same thing in the past - yet I'm unable to find any documentation on what they actually do and why/when they should be needed.\n" ]
[ 4 ]
[]
[]
[ "macos", "matlab", "mex", "python", "python_embedding" ]
stackoverflow_0001700628_macos_matlab_mex_python_python_embedding.txt
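For reference, splicing those flags into the failing link step from the question gives something like the following (untested here; -bundle builds a loadable bundle, which needs no main, and -undefined dynamic_lookup defers unresolved symbols to load time):

gcc -O -arch i386 -bundle -undefined dynamic_lookup -L/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/config -ldl -lpython2.5 -o "pytest.mexmaci" pytest.o -L/Applications/MATLAB_R2009a.app/bin/maci -lmx -lmex -lmat -lstdc++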
Q: How to install django-haystack using buildout I'm trying to convert a current Django project in development to use zc.buildout. So far, I've got all the bits figured out except for Haystack. The Haystack source is available on GitHub, but I don't want to force users to install git. A suitable alternative seems to be to fetch a tarball from here. That tarball contains a setuptools setup.py, and it seems like it should be so easy to get buildout to install it. Halp! A: I figured this one out, without posting it to PyPI. (There is actually no tagged release version of django-haystack, so posting it to PyPI seems unclean. It's something the maintainer should and probably will handle better themselves.) The relevant section is as follows: [haystack] recipe = collective.recipe.distutils url = http://github.com/ephelon/django-haystack/tarball/master I had to create a fork of the project to remove zip_safe=False from setup.py. Once I'd done that the above works flawlessly, even the redirect sent by the above url. A: This currently works for me without forking. [django-haystack] recipe = zerokspot.recipe.git repository = git://github.com/toastdriven/django-haystack.git as_egg = true [whoosh] recipe = zerokspot.recipe.git repository = git://github.com/toastdriven/whoosh.git branch = haystacked as_egg = true Make sure you add the locations to your extra-paths. A: Well, if you don't want to install GIT, you can't check it out. So then you have to download a release. But there aren't any. In theory, find-links directly to the distribution should work. In this case it won't, probably because you don't link to the file, but to a page that generates the file from the trunk. So that option was out. So, you need to bribe somebody to make a release, or make one yourself. You can make a release and stick it in a file server somewhere, and then use find-links in the buildout to point to the right place. Or, since nobody else seems to have released Haystack to PyPI, you can do it! (But be nice and tell the developers and give them manager rights to the package as well). A: It seems they've fixed the package to work from the tarball. James' fork is not working right now, but you can use the same recipe passing it the standard url: [haystack] recipe = collective.recipe.distutils url = http://github.com/toastdriven/django-haystack/tarball/master This worked for me and is 100% hack free.
How to install django-haystack using buildout
I'm trying to convert a current Django project in development to use zc.buildout. So far, I've got all the bits figured out except for Haystack. The Haystack source is available on GitHub, but I don't want to force users to install git. A suitable alternative seems to be to fetch a tarball from here. That tarball contains a setuptools setup.py, and it seems like it should be so easy to get buildout to install it. Halp!
[ "I figured this one out, without posting it to PyPI. (There is no actually tagged release version of django-haystack, so posting to to PyPI seems unclean. It's something the maintainer should and probably will handle better themselves.)\nThe relevant section is as follows:\n[haystack]\nrecipe = collective.recipe.distutils\nurl = http://github.com/ephelon/django-haystack/tarball/master\n\nI had to create a fork of the project to remove zip_safe=False from setup.py. Once I'd done that the above works flawlessly, even the redirect sent by the above url.\n", "This currently works for me without forking.\n[django-haystack]\nrecipe = zerokspot.recipe.git\nrepository = git://github.com/toastdriven/django-haystack.git\nas_egg = true\n\n[whoosh]\nrecipe = zerokspot.recipe.git\nrepository = git://github.com/toastdriven/whoosh.git\nbranch = haystacked\nas_egg = true\n\nMake sure you add the locations to your extra-paths.\n", "Well, if you don't want to install GIT, you can't check it out. So then you have to download a release. But there aren't any. In theory, find-links directly to the distribution should work. In this case it wont, probably because you don't link to the file, but to a page that generates the file from the trunk. So that option was out. \nSo, you need to bribe somebody to make a release, or make one yourself. You can make a release and stick it in a file server somewhere, and then use find-links in the buildout to point to the right place. \nOr, since nobody else seems to have released Haystack to PyPI, you can do it! (But be nice and tell the developers and give them manager rights to the package as well).\n", "It seems they've fixed the package to work from the tarball. James' fork is not working right now, but you can use the same recipe passing it the standard url:\n[haystack]\nrecipe = collective.recipe.distutils\nurl = http://github.com/toastdriven/django-haystack/tarball/master\n\nThis worked for me and is 100% hack free.\n" ]
[ 4, 2, 1, 0 ]
[]
[]
[ "buildout", "python" ]
stackoverflow_0001134946_buildout_python.txt
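One step the answers leave implicit is wiring the checkout into the interpreter's path. A rough sketch of the surrounding buildout.cfg, assuming djangorecipe and that zerokspot.recipe.git places its clone under src/ relative to the buildout root (verify where your recipe versions actually put it):

[buildout]
parts = django-haystack whoosh django

[django]
recipe = djangorecipe
project = myproject
extra-paths =
    ${buildout:directory}/src/django-haystack
    ${buildout:directory}/src/whoosh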
Q: Environment on google Appengine Does someone have an idea how to get the environment variables on Google App Engine? I'm trying to write a simple script that shall use the Client-IP (for Authentication) and a parameter (geturl or so) from the URL (e.g. http://thingy.appspot.dom/index?geturl=www.google.at) I read that I should be able to get the Client-IP via "request.remote_addr" but I seem to lack 'request' even though I imported webapp from google.appengine.ext. Many thanks in advance, Birt A: To answer the actual question from the title of your post, assuming you're still wondering: to get environment variables, simply import os and the environment is available in os.environ. A: In short, assuming you're using webapp: you can get the client ip address via self.request.remote_addr and the parameter with self.request.get("geturl") See the Handling Forms with webapp section of the tutorial. A: Are you using webapp or doing CGI-style? The webapp request class is documented at the appengine docs.
Environment on google Appengine
Does someone have an idea how to get the environment variables on Google App Engine? I'm trying to write a simple script that shall use the Client-IP (for Authentication) and a parameter (geturl or so) from the URL (e.g. http://thingy.appspot.dom/index?geturl=www.google.at) I read that I should be able to get the Client-IP via "request.remote_addr" but I seem to lack 'request' even though I imported webapp from google.appengine.ext. Many thanks in advance, Birt
[ "To answer the actual question from the title of your post, assuming you're still wondering: to get environment variables, simple import os and the environment is available in os.environ.\n", "In short, assuming you're using webapp: you can get the client ip address via self.request.remote_addr and the parameter with self.request.get(\"geturl\")\nSee the Handling Forms with webapp section of the tutorial.\n", "Are you using webapp or doing CGI-style? The webapp request class is documented at the appengine docs.\n" ]
[ 3, 2, 1 ]
[]
[]
[ "google_app_engine", "python" ]
stackoverflow_0001700441_google_app_engine_python.txt
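Tying the answers together: request only exists as an attribute of a handler instance, while os.environ carries the same data CGI-style. A minimal webapp handler showing both lookups:

import os
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app

class Index(webapp.RequestHandler):
    def get(self):
        client_ip = self.request.remote_addr    # or os.environ['REMOTE_ADDR']
        target = self.request.get("geturl")     # ?geturl=www.google.at
        self.response.out.write("%s asked for %s" % (client_ip, target))

application = webapp.WSGIApplication([("/index", Index)])
run_wsgi_app(application)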
Q: Where is Python's "best ASCII for this Unicode" database? I have some text that uses Unicode punctuation, like left double quote, right single quote for apostrophe, and so on, and I need it in ASCII. Does Python have a database of these characters with obvious ASCII substitutes so I can do better than turning them all into "?" ? A: Unidecode looks like a complete solution. It converts fancy quotes to ascii quotes, accented latin characters to unaccented and even attempts transliteration to deal with characters that don't have ASCII equivalents. That way your users don't have to see a bunch of ? when you had to pass their text through a legacy 7-bit ascii system. >>> from unidecode import unidecode >>> print unidecode(u"\u5317\u4EB0") Bei Jing http://www.tablix.org/~avian/blog/archives/2009/01/unicode_transliteration_in_python/ A: In my original answer, I also suggested unicodedata.normalize. However, I decided to test it out and it turns out it doesn't work with Unicode quotation marks. It does a good job translating accented Unicode characters, so I'm guessing unicodedata.normalize is implemented using the unicode.decomposition function, which leads me to believe it probably can only handle Unicode characters that are combinations of a letter and a diacritical mark, but I'm not really an expert on the Unicode specification, so I could just be full of hot air... In any event, you can use unicode.translate to deal with punctuation characters instead. The translate method takes a dictionary of Unicode ordinals to Unicode ordinals, thus you can create a mapping that translates Unicode-only punctuation to ASCII-compatible punctuation: 'Maps left and right single and double quotation marks' 'into ASCII single and double quotation marks' >>> punctuation = { 0x2018:0x27, 0x2019:0x27, 0x201C:0x22, 0x201D:0x22 } >>> teststring = u'\u201Chello, world!\u201D' >>> teststring.translate(punctuation).encode('ascii', 'ignore') '"hello, world!"' You can add more punctuation mappings if needed, but I don't think you necessarily need to worry about handling every single Unicode punctuation character. If you do need to handle accents and other diacritical marks, you can still use unicodedata.normalize to deal with those characters. A: Interesting question. Google helped me find this page which describes using the unicodedata module as follows: import unicodedata unicodedata.normalize('NFKD', title).encode('ascii','ignore')
Where is Python's "best ASCII for this Unicode" database?
I have some text that uses Unicode punctuation, like left double quote, right single quote for apostrophe, and so on, and I need it in ASCII. Does Python have a database of these characters with obvious ASCII substitutes so I can do better than turning them all into "?" ?
[ "Unidecode looks like a complete solution. It converts fancy quotes to ascii quotes, accented latin characters to unaccented and even attempts transliteration to deal with characters that don't have ASCII equivalents. That way your users don't have to see a bunch of ? when you had to pass their text through a legacy 7-bit ascii system.\n>>> from unidecode import unidecode\n>>> print unidecode(u\"\\u5317\\u4EB0\")\nBei Jing \n\nhttp://www.tablix.org/~avian/blog/archives/2009/01/unicode_transliteration_in_python/\n", "In my original answer, I also suggested unicodedata.normalize. However, I decided to test it out and it turns out it doesn't work with Unicode quotation marks. It does a good job translating accented Unicode characters, so I'm guessing unicodedata.normalize is implemented using the unicode.decomposition function, which leads me to believe it probably can only handle Unicode characters that are combinations of a letter and a diacritical mark, but I'm not really an expert on the Unicode specification, so I could just be full of hot air... \nIn any event, you can use unicode.translate to deal with punctuation characters instead. The translate method takes a dictionary of Unicode ordinals to Unicode ordinals, thus you can create a mapping that translates Unicode-only punctuation to ASCII-compatible punctuation:\n'Maps left and right single and double quotation marks'\n'into ASCII single and double quotation marks'\n>>> punctuation = { 0x2018:0x27, 0x2019:0x27, 0x201C:0x22, 0x201D:0x22 }\n>>> teststring = u'\\u201Chello, world!\\u201D'\n>>> teststring.translate(punctuation).encode('ascii', 'ignore')\n'\"hello, world!\"'\n\nYou can add more punctuation mappings if needed, but I don't think you necessarily need to worry about handling every single Unicode punctuation character. If you do need to handle accents and other diacritical marks, you can still use unicodedata.normalize to deal with those characters.\n", "Interesting question. \nGoogle helped me find this page which descibes using the unicodedata module as the following:\nimport unicodedata\nunicodedata.normalize('NFKD', title).encode('ascii','ignore')\n\n" ]
[ 90, 26, 21 ]
[]
[]
[ "ascii", "python", "unicode" ]
stackoverflow_0000816285_ascii_python_unicode.txt
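The two stdlib answers combine naturally as a small fallback for when Unidecode is not available: translate the punctuation first, then strip accents with NFKD. The punctuation table below is deliberately tiny and would need extending for real text:

import unicodedata

PUNCT = {0x2018: u"'", 0x2019: u"'", 0x201C: u'"', 0x201D: u'"',
         0x2013: u"-", 0x2014: u"--"}

def to_ascii(text):
    text = text.translate(PUNCT)
    return unicodedata.normalize("NFKD", text).encode("ascii", "ignore")

print to_ascii(u"\u201cna\xefve\u201d \u2018quoted\u2019")   # "naive" 'quoted'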
Q: Programmatically adding give-aways/freebies to an online store Our business currently has an online store and recently we've been offering free specials to our customers. Right now, we simply display the special and give the buyer a notice stating we will add the extra free items to their order after they checkout. Of course, it'd be nice to automate this entire process. I've been mulling over a few ideas, mainly creating a Discount model (I'm using Django in this case, but this is more of a logic question) and having that model have a variety of flags and product lists so I could create an instance like so: Discount( description="Get one free pair of bands when you buy two pairs of shoes.", valid_products=[BigProductA, BigProductB], received_products=[FreebieProductA, FreebieProductB], special_in_intervals=2, # Whenever the user buys 2, give one for free ) This logic kind of works. I can then take a look at what is in their cart and test against the existing Discounts in the model and see if they apply for anything. The biggest problem with this is it can get very messy especially if you have multiple specials going on and I just don't see it working out too well. Unfortunately, that's really my best idea for this right now. So, I come to ask you guys: What do you think is the best approach for this? I'm not looking for code, just some ideas of logic and ways to do this. :) Thanks in advance! A: Welcome to hell. Stay a while. ;) Ahem. Discounts are a mess, so it's not surprising that you feel tainted by having to work with them. From a design point of view, the testing should be part of the Discount instance, i.e. there should be an appliesTo(cart) method and an apply(cart) method. The first tells you whether a discount applies, the second one actually applies the discount. I suggest that the apply() method doesn't change the "user part" of the cart but instead modifies extra fields, so you can easily reset the cart (drop all discounts) and run the process again. This way, you can cleanly implement the two types of discounts that appear most often: "Get X for free, when buying Y" and "get a rebate of X% if you buy for Y $$$". Since you don't change the original figures, you can easily apply multiple discounts and rebates. I also suggest to back this up with a whole lot of unit tests to make sure the whole thing behaves as you expect. Otherwise the next discount might be your last. :) A: I don't quite get the question - but if you select DISTINCT (I'm writing "pseudo logic" in SQL) all free items that match the items in the cart, and then if you wish to give only one or n of them - SELECT TOP(n) DISTINCT from tblFREE where freebeid in (select freebdid from tbl itemsfreebe where items in (Select Items from CART where **** Freebe givaway LOGIC***)) freebe giveaway logic is the generic placeholder that should always evaluate for true or false: like where (select count(*) from cart >2) so if the logic works - you'll get items in the list, and if not - you'll get nothing. you can move this logic to your code and run only the first part of the "query" in the DB... logic can be used with AND or OR with other logics.... once the user accepts the offer - you add the list to the cart, and should raise a flag that the discount/freebee was applied - so it won't happen twice... I wonder what it means that it's easier to SQL it than to say it :-) I hope that targets your question...
Programmatically adding give-aways/freebies to an online store
Our business currently has an online store and recently we've been offering free specials to our customers. Right now, we simply display the special and give the buyer a notice stating we will add the extra free items to their order after they checkout. Of course, it'd be nice to automate this entire process. I've been mulling over a few ideas, mainly creating a Discount model (I'm using Django in this case, but this is more of a logic question) and having that model have a variety of flags and product lists so I could create an instance like so: Discount( description="Get one free pair of bands when you buy two pairs of shoes.", valid_products=[BigProductA, BigProductB], received_products=[FreebieProductA, FreebieProductB], special_in_intervals=2, # Whenever the user buys 2, give one for free ) This logic kind of works. I can then take a look at what is in their cart and test against the existing Discounts in the model and see if they apply for anything. The biggest problem with this is it can get very messy especially if you have multiple specials going on and I just don't see it working out too well. Unfortunately, that's really my best idea for this right now. So, I come to ask you guys: What do you think is the best approach for this? I'm not looking for code, just some ideas of logic and ways to do this. :) Thanks in advance!
[ "Welcome to hell. Stay a while. ;) Ahem.\nDiscounts are a mess, so it's not surprising that you feel tainted by having to work with them. From a design point of view, the testing should be part of the Discount instance, i.e. there should be an appliesTo(cart) method and an apply(cart) method. The first tells you whether a discount applies, the second one actually applies the discount. I suggest that the apply() method doesn't change the \"user part\" of the cart but instead modifies extra fields, so you can easily reset the cart (drop all discounts) and run the process again.\nThis way, you can cleanly implement the two types of discounts that appear most often: \"Get X for free, when buying Y\" and \"get a rebate of X% if you buy for Y $$$\". Since you don't change the original figures, you can easily apply multiple discounts and rebates.\nI also suggest to back this up with a whole lot of unit tests to make sure the whole thing behaves as you expect. Otherwise the next discount might be your last. :)\n", "I don't quiet get the question - but if you select DISTINCT (I'm writing \"pseudo logic\" in SQL) all free items that match the items in the car , and then if you wish to give only one or n of them - \nSELECT TOP(n) DISTINCT from tblFREE where freebeid in (select freebdid from tbl itemsfreebe where items in (Select Items from CART where **** Freebe givaway LOGIC***))\nfreebe giveaway logic is the generic placeholder that should always evaluate for true or false:\nlike where (select count(*) from cart >2)\nso if the logic works - you'll get items in the list, and if not - you'll get nothing.\nyou can move this logic to your code and run only the first part of the \"query\" in the DB...\nlogic can be used with AND or OR with other logics....\nonce the user accept the offer - you add the list to the cart, and should rais a flag that the discount/freebee was applied - so it won't happen twice...\nI wonder what does it means that it easier to SQL it than to say it :-)\nI hope that targets your question...\n" ]
[ 2, 0 ]
[]
[]
[ "django", "e_commerce", "logic", "python" ]
stackoverflow_0001701956_django_e_commerce_logic_python.txt
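Following the first answer's split between testing and applying, a rough framework-agnostic sketch is below; the class and cart attributes are invented for illustration, and the extras list is the "extra fields" idea that keeps resets cheap:

class Discount(object):
    def __init__(self, description, valid_products, received_products, interval):
        self.description = description
        self.valid_products = set(valid_products)
        self.received_products = received_products
        self.interval = interval                    # buy this many, get freebies

    def applies_to(self, cart):
        return self._qualifying(cart) >= self.interval

    def apply(self, cart):
        # Only touch cart.extras, never the items the user chose.
        for _ in range(self._qualifying(cart) // self.interval):
            cart.extras.extend(self.received_products)

    def _qualifying(self, cart):
        return sum(1 for item in cart.items
                   if item.product in self.valid_products)

cart.extras = []                                    # reset, then re-run everything
for discount in discounts:
    if discount.applies_to(cart):
        discount.apply(cart)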
Q: Python: complex list comprehensions where one var depends on another (x for x in t[1] for t in tests) I want to do something like: all = [ x for x in t[1] for t in tests ] tests looks like: [ ("foo",[a,b,c]), ("bar",[d,e,f]) ] So I want to have the result all = [a,b,c,d,e,f] My code does not work, Python says: UnboundLocalError: local variable 't' referenced before assignment Is there any simple way to do that? A: It should work the other way around: all = [x for t in tests for x in t[1]] A: When in doubt, don't use list comprehensions. Try import this in your Python shell and read the second line: Explicit is better than implicit This type of compounding of list comprehensions will puzzle a lot of Python programmers so at least add a comment to explain that you are removing strings and flattening the remaining list. Do use list comprehensions where they are clear and easy to understand, and especially, do use them when they are idiomatic, i.e. commonly used because they are the most efficient or elegant way to express something. For instance, this Python Idioms article gives the following example: result = [3*d.Count for d in data if d.Count > 4] It is clear, simple and straightforward. Nesting list comprehensions is not too bad if you pay attention to formatting, and perhaps add a comment because the braces help the reader to decompose the expression. But the solution that was accepted for this problem is too complex and confusing in my opinion. It oversteps the bounds and makes the code unreadable for too many people. It is better to unroll at least one iteration into a for loop. A: If all you are doing is adding together some lists, try the sum builtin, using [] as a starting value: all = sum((t[1] for t in tests), []) A: That looks like a reduce to me. Unfortunately Python does not offer any syntactic sugar for reduce, so we have to use lambda: reduce(lambda x, y: x+y[1], tests, [])
Python: complex list comprehensions where one var depends on another (x for x in t[1] for t in tests)
I want to do something like: all = [ x for x in t[1] for t in tests ] tests looks like: [ ("foo",[a,b,c]), ("bar",[d,e,f]) ] So I want to have the result all = [a,b,c,d,e,f] My code does not work, Python says: UnboundLocalError: local variable 't' referenced before assignment Is there any simple way to do that?
[ "It should work the other way around:\nall = [x for t in tests for x in t[1]]\n\n", "When in doubt, don't use list comprehensions.\nTry import this in your Python shell and read the second line:\nExplicit is better than implicit\n\nThis type of compounding of list comprehensions will puzzle a lot of Python programmers so at least add a comment to explain that you are removing strings and flattening the remaining list.\nDo use list comprehensions where they are clear and easy to understand, and especially, do use them when they are idiomatic, i.e. commonly used because they are the most efficient or elegant way to express something. For instance, this Python Idioms article gives the following example:\nresult = [3*d.Count for d in data if d.Count > 4]\n\nIt is clear, simple and straightforward. Nesting list comprehensions is not too bad if you pay attention to formatting, and perhaps add a comment because the braces help the reader to decompose the expression. But the solution that was accepted for this problem is too complex and confusing in my opinion. It oversteps the bounds and makes the code unreadable for too many people. It is better to unroll at least one iteration into a for loop.\n", "If all you are doing is adding together some lists, try the sum builtin, using [] as a starting value:\nall = sum((t[1] for t in tests), [])\n\n", "That looks like a reduce to me. Unfortunately Python does not offer any syntactic sugar for reduce, so we have to use lambda:\nreduce(lambda x, y: x+y[1], tests, [])\n\n" ]
[ 17, 5, 2, 1 ]
[]
[]
[ "list_comprehension", "python" ]
stackoverflow_0001700113_list_comprehension_python.txt
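The accepted one-liner reads in exactly the same order as the equivalent nested for loops, which is the easiest way to remember the syntax; itertools.chain is a common alternative for this flattening job:

from itertools import chain

tests = [("foo", [1, 2, 3]), ("bar", [4, 5, 6])]

flat = []
for t in tests:              # the outer loop comes first in the comprehension too
    for x in t[1]:
        flat.append(x)

assert flat == [x for t in tests for x in t[1]]
assert flat == list(chain(*(pair[1] for pair in tests)))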
Q: PyGreSQL vs psycopg2 What is the difference between these two APIs? Which one is faster and more reliable using the Python DB API? Upd: I see two psql drivers for Django. The first one is psycopg2. What is the second one? PyGreSQL? A: For what it's worth, django uses psycopg2. A: "PyGreSQL is written in Python only, easy to deploy but slower." PyGreSQL contains a C-coded module, too. I haven't done speed tests, but they're not likely to be much different, as the real work will happen inside the database server. A: Licensing may be an issue for you. PyGreSQL is MIT license. Psycopg2 is GPL license. (as long as you are accessing psycopg2 in normal ways from Python, with no internal API, and no direct C calls, this shouldn't cause you any headaches, and you can release your code under whatever license you like - but I am not a lawyer). A: Psycopg2 doesn't have much documentation but the code in the examples directory is very helpful. Also it's thread safety level 2, meaning multiple threads can share the module and connections but not the cursors. python dbi pep A: psycopg2 is partly written in C so you can expect a performance gain, but, on the other hand, it is a bit harder to install. PyGreSQL is written in Python only, easy to deploy but slower.
PyGreSQL vs psycopg2
What is the difference between these two APIs? Which one is faster and more reliable using the Python DB API? Upd: I see two psql drivers for Django. The first one is psycopg2. What is the second one? PyGreSQL?
[ "For what it's worth, django uses psycopg2.\n", "\"PyGreSQL is written in Python only, easy to deployed but slower.\"\nPyGreSQL contains a C-coded module, too. I haven't done speed tests, but they're not likely to be much different, as the real work will happen inside the database server.\n", "Licensing may be an issue for you. PyGreSQL is MIT license. Psycopg2 is GPL license.\n(as long as you are accessing psycopg2 in normal ways from Python, with no internal API, and no direct C calls, this shouldn't cause you any headaches, and you can release your code under whatever license you like - but I am not a lawyer).\n", "Psycopg2 doesn't have much documentation but the code in the examples directory is very helpful.\nAlso it's thread safety level 2, meaning multiple threads can share the module and connections but not the cursors.\npython dbi pep\n", "psycopg2 is partly written in C so you can expect a performance gain, but on the other hand, a bit harder to install. PyGreSQL is written in Python only, easy to deployed but slower.\n" ]
[ 5, 4, 2, 2, 0 ]
[]
[]
[ "postgresql", "python" ]
stackoverflow_0000413228_postgresql_python.txt
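Whichever driver you pick, code written against DB-API 2.0 stays almost identical, which keeps switching cheap. A minimal sketch with invented connection parameters (the commented alternative assumes PyGreSQL's pgdb module accepts keyword arguments in the usual DB-API style):

import psycopg2                  # or: import pgdb   (PyGreSQL's DB-API module)

conn = psycopg2.connect("dbname=test user=me")
# conn = pgdb.connect(database='test', user='me')
cur = conn.cursor()
cur.execute("SELECT name FROM users WHERE id = %s", (42,))
print cur.fetchone()
conn.commit()
cur.close()
conn.close()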
Q: What's going on with python 3k? Since I'm not strictly a Python developer, please don't flame me just for the question. I'm wondering about Python 3k, which, from my point of view, might be some kind of misconception, or quite an irrelevant step forward (I'm taking into account the 2.6 and 3k releases, which came almost one after another). Before the flames start, I'll explain my position on this topic and state a couple of facts from my work environment. I work in a cutting edge market data solutions company; we use mainly functional languages for high-throughput data analysis. But we also use python as a second technology for smaller tasks, scripts, process management & monitoring. Some of my colleagues write more serious production applications based on python technologies, but: ALL of our customers use python 2.6, because of the above we have a quite strong 2.6 toolset and internal/external support, We still plan to develop 2.6 apps. Every small tool development is also based on the 2.6 platform. Additionally, what I observe: at this point any new linux distribution (in our infrastructure) has python 2.6 on board, most third party modules are developed for the 2.6 version, a great part of the resources on the network is also dedicated to 2.6 (I know that you can port most of this to 3.1.x, but it's too big an overhead here.) I know that Py 3k is still growing and there is already python 3.1.1, but no one "cares" (in my environment). I've a strong feeling that Python 3k overheads are stopping it from moving this great technology into a new dimension. A: The OP appears to be surprised that a minor-release upgrade (which added some nifty features, basically broke zero existing code, and allowed trivial rebuilds of all existing third party libraries) happened overnight at their organization, while a major-release upgrade (requiring much more effort especially from the point of view of third-party library authors) didn't and won't. I think I mostly feel surprised at their surprise;-). Even minor-release upgrades don't happen instantly in most large projects and organizations; for example, App Engine is still using Python 2.5 (apparently, upgrading its specialized "sandboxed" Python runtime and all it relies on is not a zero-effort proposition, so they prefer to keep putting their energy towards adding engine features instead) -- so I believe are implementations such as Jython and PyPy (I think IronPython's in the process of migrating to 2.6, but the current production version is still 2.5). Totally new projects starting today should seriously consider starting with Python 3; for example, Allison Randal's pynie (Python for Parrot) project made exactly that choice (and, I think, it was the correct choice in their situation). Migrating existing projects is a harder proposition, and mostly depends on what third-party components the existing project depends on (if a new project intrinsically depends on some functionality that's only available in 3rd party libraries for 2.6, not 3.1, then the new project will probably also have to stick with the 2.6 version for the time being, of course). Third party libraries that are under active development will probably come out with Python 3 versions gradually (for example, gmpy did so relatively recently). Once enough such third party libraries are available, the chance that a missing library inhibits migrating an existing project (or, even more, starting a new project) using Python 3 starts going down pretty rapidly. This makes Python 3 ports feasible. 
At some point, some attractive functionality will become available in Python 3 only (for example, if and when pynie releases, that might be the case for a Parrot implementation affording smooth interoperability with Perl &c), and that will provide a strong motivation for some projects to go 3-only (pragmatically stronger than pure issues of language quality). Even then, some sufficiently large projects and organizations will stick with Python 2 for a long time, and you can confidently expect that at some time a 2.7 will exist (possibly one or two more after that, but that's harder to predict). Hey, I sharply remember that throughout the '90s in most large projects and organizations "Fortran" still meant Fortran 77 (in fact in some places -- not many, 30+ years after that Fortran version's first release -- that's still true today!)... for all the advantages of Fortran 90 and later versions, migration costs (esp. in terms of various compilers, libraries, tools) were just perceived as being too high a price to pay for the advantages of the new language version. That's just inevitable when a language acquires a large installed base and a rich ecosystem of third party tools and libraries, as Python 2 now has. No reason for surprise!-) A: Py3k is a good and necessary step in the evolution of the Python language. I like it much better than 2.x, and code in it almost exclusively. I'm fortunately not dependent on third-party libraries, few of which are there for Py3k yet. But they will be. There is nothing wrong with using 2.6 - 2.7 will come and be even more of a bridge to 3.x, and you can start generating Py3k-like code already that will be trivial to port to 3.x once your favorite third-party library is there. I think I read somewhere that Py2k will be around for several more years. A: What exactly is the question here? You observe that there's a newer version of the platform you currently use. You have good reason for staying with the version you have. This pattern is common across very many development organisations, you simply cannot chase every new version of the stuff you use. Nor can you stay where you are forever, eventually you will need to migrate. There are forces that may drive you to migrate, for example The new version has some very major new feature you would really benefit from (eg. annotations in Java 5) Support for a current platform (or dependent library) is being withdrawn Meanwhile, you could attempt to future-proof your code so that migrating is easier. Are you familiar with the level of change required to move up? A: Since Python 3K is not backwards compatible, you can expect that it will take more time to be accepted in corporate environments. Python has a huge code base and many important third-party libraries. One must wait for all dependencies to be converted to Python 3K before adopting it. This could be a really slow process. From what I understand, the creators of Python expected this to happen, but they thought it would be worse not to make the changes and just let the language "die". A: Python 3.x is not yet aimed at production use, but it will get there in time. As a language and a runtime, Python 3.x is fine. The limitation is that a lot of important third party libraries and tools are still in the process of being ported and tested. It is expected that the transition to Python 3.x will take up to five years. So, if you are not a library developer, you really don't need to care about Python 3.x for the time being. 
Also, it is not expected of current Python users to do the switch to Python 3.x anytime soon. A: There are a lot of libraries which depend on other libraries in the Python ecosystem. Nose is used to test numpy, and scipy requires numpy, and lots of scientific libraries require scipy. Also, none of the library developers can start porting until their dependencies are ported. It's going to take a long time for the whole chain to port across. A lot of the pragmatists are hoping that python 2.7 and 2.8 will gradually deprecate the bits that won't work in python 3 (introduce a print function called print_function, then deprecate the print statement, then rename print_function to print ...), so eventually python 2 will merge with python 3. A: Here is a video where Guido van Rossum, at the Py4Science meeting, talks about Python 3. http://fperez.org/py4science/2009_guido_ucb/index.html
What's going on with python 3k?
Since I'm not strictly a python developer, please don't flame me just for the question. I'm wondering about Python 3k, which, from my point of view, might be some kind of misconception, or quite an irrelevant step forward (I'm taking into account the 2.6 and 3k releases, which came almost one after another). Before the flames start, I'll explain my position on this topic and state a couple of facts from my work environment. I work in a cutting-edge market data solutions company; we use mainly functional languages for high-throughput data analysis. But we also use python as a second technology for smaller tasks, scripts, process management & monitoring. Some of my colleagues write more serious production applications based on python technologies, but: ALL of our customers use python 2.6, because of the above we have quite a strong 2.6 toolset and internal/external support, We still plan to develop 2.6 apps. Every small tool development is also based on the 2.6 platform. Additionally, what I observe: at this point any new linux distribution (in our infrastructure) has python 2.6 on board, most third party modules are developed for the 2.6 version, a great part of the resources on the network is also dedicated to 2.6 (I know that you can port most of this to 3.1.x, but it's too big an overhead here.) I know that Py3k is still growing and there is already python 3.1.1, but no one "cares" (in my environment). I have a strong feeling that Python 3k's migration overheads are stopping this great technology from moving into a new dimension.
[ "The OP appears to be surprised that a minor-release upgrade (which added some nifty features, basically broke zero existing code, and allowed trivial rebuilds of all existing third party libraries) happened overnight at their organization, while a major-release upgrade (requiring much more effort especially from the point of view of third-party library authors) didn't and won't. I think I mostly feel surprised at their surprise;-).\nEven minor-release upgrades don't happen instantly in most large projects and organizations; for example, App Engine is still using Python 2.5 (apparently, upgrading its specialized \"sandboxed\" Python runtime and all it relies on is not a zero-effort proposition, so they prefer to keep putting their energy towards adding engine features instead) -- so I believe are implementations such as Jython and PyPy (I think IronPython's in the process of migrating to 2.6, but the current production version is still 2.5).\nTotally new projects starting today should seriously consider starting with Python 3; for example, Allison Randal's pynie (Python for Parrot) project made exactly that choice (and, I think, it was the correct choice in their situation). Migrating existing projects is a harder proposition, and mostly depends on what third-party components the existing project depends on (if a new project intrinsically depends on some functionality that's only available in 3rd party libraries for 2.6, not 3.1, then the new project will probably also have to stick with the 2.6 version for the time being, of course).\nThird party libraries that are under active development will probably come out with Python 3 version gradually (for example, gmpy did so relatively recently). Once enough such third party libraries are available, the chance that a missing library inhibits migrating an existing project (or, even more, starting a new project) using Python 3 starts going down pretty rapidly. This makes Python 3 ports feasible. At some point, some attractive functionality will become available in Python 3 only (for example, if and when pynie releases, that might be the case for a Parrot implementation affording smooth interoperability with Perl &c), and that will provide a strong motivation for some projects to go 3-only (pragmatically stronger than pure issues of language quality).\nEven then, some sufficiently large projects and organizations will stick with Python 2 for a long time, and you can confidently expect that at some time a 2.7 will exist (possibly one or two more after that, but that's harder to predict). Hey, I sharply remember that throughout the '90s in most large projects and organizations \"Fortran\" still meant Fortran 77 (in fact in some places -- not many, 30+ years after than Fortran version's first release -- that's still true today!)... for all the advantages of Fortran 90 and later versions, migration costs (esp. in terms of various compilers, libraries, tools) were just perceived as being too high a price to pay for the advantages of the new language version. That's just inevitable when a language acquires a large installed base and a rich ecosystem of third party tools and libraries, as Python 2 now has. No reason for surprise!-)\n", "Py3k is a good and necessary step in the evolution of the Python language. I like it much better than 2.x, and code in it almost exclusively. I'm fortunately not dependent on third-party libraries, few of which are there for Py3k yet. 
But they will be.\nThere is nothing wrong with using 2.6 - 2.7 will come and be even more of a bridge to 3.x, and you can start generating Py3k-like code already that will be trivial to port to 3.x once your favorite third-party library is there. I think I read somewhere that Py2k will be around for several more years. \n", "What exactly is the question here? You observe that there's a newer version of the platform you currently use. You have good reason for staying with the version you have. This pattern is common across very many development organisations, you simply cannot chase every new version of the stuff yiou use. Nor can you stay where you are for ever, eventually you will need to migrate. \nThere are forces that may drive you to migrate, for example\n\nThe new version has some very major new feature you would really benefit from (eg. annotations in Java 5)\nSupport for a current platform (or dependent library) is being withdrawn\n\nMeanwhile, you could attempt to future-proof your code so that migrating is easier. Are you familiar with the level of change required to move up?\n", "Since Python 3K is not backwards compatible, you can expect that it will take more time to be accepted in corporate environments. Python has a huge code base and many important third libraries. One must wait for all dependencies to be converted to Python 3K before adopting it. This could be a real slow process.\nFrom what I understand, the creators of Python expected this to happen, but they thought it will be worse not to make the changes and just let the language \"die\".\n", "Python 3.x is not yet aimed at production use, but it will get there in time.\nAs a language and a runtime, Python 3.x is fine. The limitation is that a lot of important third party libraries and tools are still in the process of being ported and tested.\nIt is expected that the transition to Python 3.x will take up to five years.\nSo, if you are not library developer, you really don't need to care about Python 3.x for the time being.\nAlso, it is not expected of current Python users to do the switch to Python 3.x anytime soon.\n", "There are a lot of libraries which depend on other libraries in the Python ecosystem. Nose is used to test numpy, and scipy requires numpy, and lots of scientific libraries require scipy. Also, none of the library developers can start porting until their dependencies are ported. It's going to take a long time for whole chain to port across. \nA lot of the pragmatists are hoping that python 2.7 and 2.8 will gradually depreciate the bits that won't work in python 3 (introduce a print function called print_function, then depreciate the print statement, then rename print_function to print ...), so eventually python 2 will merge with python 3.\n", "Here is a video there Guido van Rossum, at the Py4Science meeting, talks about Python 3.\nhttp://fperez.org/py4science/2009_guido_ucb/index.html\n" ]
[ 9, 4, 3, 3, 2, 0, 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0001700569_python_python_3.x.txt
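To make the "bridge to 3.x" advice from the entry above concrete, here is a minimal sketch of Python 2.6 code written in a Py3k-like style; it relies only on the stock __future__ machinery that shipped with 2.6, nothing beyond that is assumed:

    from __future__ import print_function, division, unicode_literals

    # print is a function, as in Python 3
    print("migrating", "gradually", sep=" ")

    # true division, as in Python 3
    print(7 / 2)    # 3.5
    print(7 // 2)   # 3 (floor division, same in 2.x and 3.x)

Code written this way runs unchanged on 2.6 and needs far less work from 2to3 later.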
Q: Is there a way to manually register a user with a py-transport server-side? I'm trying to write some scripts to migrate my users to ejabberd, but the only way that's been suggested for me to register a user with a transport is to have them use their client and discover the service. Certainly there is a way, right? A: Go through once for each transport and register yourself. Capture the XMPP packets. Dump the transport registration data from your current system into a csv file, xml file, or something else whose structure you know. Write a script using jabberpy, xmpppy, pyxmpp, or whatever, and emulate each of your users registering with the transports. One issue is you may have to be connected to the Internet for the transports to come online. Then you're going live with someone else's account. If you can't get their current password data for your jabber server, set it all to a default and then migrate it back after your transport registration.
Is there a way to manually register a user with a py-transport server-side?
I'm trying to write some scripts to migrate my users to ejabberd, but the only way that's been suggested for me to register a user with a transport is to have them use their client and discover the service. Certainly there is a way, right?
[ "\nGo through once for each transport\nand register yourself. Capture the\nXMPP packets. \nDump the transport\nregistration data from your current\nsystem into a csv file, xml file, or\nsomething else you can know the\nstructure.\nWrite a script\nusing jabberpy, xmpppy, pyxmpp, or\nwhatever, and emulate each of your\nusers registering with the\ntransports.\n\nOne issue is you may have to be connected to the Internet for the transports to come online. Then you're going live with someone else's account. If you can't get their current password data for your jabber server, set it all to a default and then migrate it back after your transport registration.\n" ]
[ 0 ]
[]
[]
[ "ejabberd", "python", "xmpp" ]
stackoverflow_0000667510_ejabberd_python_xmpp.txt
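A rough sketch of the scripted-registration idea from the entry above, using xmpppy; the server name, transport JID, and registration field names are placeholders, and a real transport may expect different fields (a jabber:iq:register query would reveal them first):

    import xmpp

    SERVER = 'example.org'            # placeholder: your ejabberd host
    TRANSPORT = 'msn.example.org'     # placeholder: the transport's JID

    def register_with_transport(user, password, legacy_user, legacy_pass):
        jid = xmpp.JID('%s@%s' % (user, SERVER))
        client = xmpp.Client(jid.getDomain(), debug=[])
        if not client.connect():
            raise IOError('could not connect to ' + SERVER)
        if not client.auth(jid.getNode(), password):
            raise IOError('could not authenticate as ' + user)
        # jabber:iq:register with the transport, as a client would do it
        ok = xmpp.features.register(client, TRANSPORT,
                                    {'username': legacy_user,
                                     'password': legacy_pass})
        client.disconnect()
        return ok

Loop this over the dumped registration data, resetting passwords to a default first if the originals are unavailable, as the answer suggests.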
Q: Something disturbing about PyDev content assist I created a simple class in Python as follows, from UserDict import UserDict class Person(UserDict): def __init__(self,personName=None): UserDict.__init__(self) self["name"]=personName In another module I try to instantiate an object of class Person and print its __doc__ and __class__ attributes: import Person p = Person.Person("me") print p.__doc__ print p.__class__ It bothers me to think that __doc__ and __class__ are not in the list of attributes of an instantiated object when I use content assist in Eclipse: (screenshot: http://img171.imageshack.us/img171/5169/pydevcontentassist.png) Why does this happen? In Java, Eclipse shows the complete list of attributes and methods and this helps me a lot in development sometimes when I don't want to look at the Java Docs. I just figure things out using content assist. A: Not sure if anyone outside of the PyDev development team can really help you here, as this basically boils down to a feature question/request. I'd suggest creating an item on their Feature Request tracker or their bug tracker. A: EDIT: Your class Person is a so-called old-style class because it is subclassed from the UserDict class, an old-style class. There are fundamental differences between old-style and new-style (i.e. classes that subclass from object) in the availability and treatment of special attributes. In particular, dir() of an instance of an old-style class does not return __class__, whereas dir() of new-style class instances does, and, undoubtedly, PyDev is displaying the results of dir(): >>> class OldStyle: pass ... >>> os = OldStyle(); os.__class__; dir(os) <class __main__.OldStyle at 0x100412cb0> ['__doc__', '__module__'] >>> class NewStyle(object): pass ... >>> ns = NewStyle(); ns.__class__; dir(ns) <class '__main__.NewStyle'> ['__class__', '__delattr__', '__dict__', '__doc__', '__format__', '__getattribute__', '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__'] As described in recent Python Standard Library documentation, the need for UserDict has largely gone away since, with the introduction of new-style classes in Python 2.2, it is now possible to subclass directly from built-in types like dict. There are other disadvantages of using old-style classes and they have been removed entirely in Python 3, along with the UserDict module. You could get the benefits now, and get better info in PyDev, by changing the Person class to subclass directly from dict.
Something disturbing about PyDev content assist
I created a simple class in Python as follows, from UserDict import UserDict class Person(UserDict): def __init__(self,personName=None): UserDict.__init__(self) self["name"]=personName In another module I try to instantiate an object of class Person and print its __doc__ and __class__ attributes: import Person p = Person.Person("me") print p.__doc__ print p.__class__ It bothers me to think that __doc__ and __class__ are not in the list of attributes of an instantiated object when I use content assist in Eclipse: (screenshot: http://img171.imageshack.us/img171/5169/pydevcontentassist.png) Why does this happen? In Java, Eclipse shows the complete list of attributes and methods and this helps me a lot in development sometimes when I don't want to look at the Java Docs. I just figure things out using content assist.
[ "Not sure if anyone outside of the PyDev development team can really help you here, as this basically boils down to a feature question/request.\nI'd suggest creating an item on their Feature Request tracker or their bug tracker.\n", "EDIT:\nYour class Person is a so-called old-style class because it is subclassed from the UserDict class, an old-style class. There are fundamental differences between old-style and new-style (i.e. classes that subclass from object) in the availability and treatment of special attributes. In particular, dir() of an instance of an old-style class does not return __class__, whereas dir() of new-style class instances do, and, undoubtedly, PyDev is displaying the results of dir():\n>>> class OldStyle: pass\n... \n>>> os = OldStyle(); os.__class__; dir(os)\n<class __main__.OldStyle at 0x100412cb0>\n['__doc__', '__module__']\n>>> class NewStyle(object): pass\n... \n>>> ns = NewStyle(); ns.__class__; dir(ns)\n<class '__main__.NewStyle'>\n['__class__', '__delattr__', '__dict__', '__doc__', '__format__', '__getattribute__', '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__']\n\nAs described in recent Python Standard Library documentation, the need for UserDict has largely gone away since, with the introduction of new-style classes in Python 2.2, it is now possible to subclass directly from built-in types like dict. There are other disadvantages of using old-style classes and they have been removed entirely in Python 3, along with the UserDict module. You could get the benefits now, and get better info in PyDev, by changing the Person class to subclass directly from dict.\n" ]
[ 1, 1 ]
[]
[]
[ "eclipse", "pydev", "python" ]
stackoverflow_0001702255_eclipse_pydev_python.txt
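A minimal sketch of the fix proposed at the end of the entry above: subclassing dict directly makes Person a new-style class, so dir() (and hence PyDev's content assist) reports the special attributes:

    class Person(dict):
        def __init__(self, name=None):
            dict.__init__(self)
            self["name"] = name

    p = Person("me")
    print p.__class__              # <class '__main__.Person'>
    print '__class__' in dir(p)    # True for new-style instances
    print '__doc__' in dir(p)      # True (its value is None: no docstring defined)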
Q: What do the brackets mean in python: table[r][pos+i]? This is the full code: def checkRow(table, r, pos, word): # done for you! for i in range(0, len(word)): if table[r][pos+i] != word[i]: return False return True I know the brackets mean the index value (in this case r, an index into table) but what does a bracket next to another bracket mean? (table[r][pos+i]) A: It means that the value of table[r] is another array (an array within an array), which you are indexing into with [pos+i]. So it's the equivalent of: foo = table[r] if foo[pos+i] != word[i]: A: table[r][pos+i] To get the (pos+i)-th character of the string table[r] A: If the r dimension was length 2 and the pos dimension was length 3, the table could be represented this way: | pos+0 | pos+1 | pos+2 | ---------------------------- r+0| ???1 | ???2 | ???3 | ---------------------------- r+1| ???4 | ???5 | ???6 | ---------------------------- Where the ??? represent the data at table[r][pos+i]. table[r] returns all values in a row. Note: Many programming languages don't have an easy way to get columns. e.g., C will give an error instead of returning a column when given table[][pos+1].
What do the brackets mean in python: table[r][pos+i]?
This is the full code: def checkRow(table, r, pos, word): # done for you! for i in range(0, len(word)): if table[r][pos+i] != word[i]: return False return True I know the brackets mean the index value (in this case r, an index into table) but what does a bracket next to another bracket mean? (table[r][pos+i])
[ "It means that the value of table[r] is another array (an array within an array), which you are indexing into with [pos+i]. So it's the equivalent of:\nfoo = table[r]\nif foo[pos+i] != word[i]:\n\n", "table[r][pos+i]\nTo get the pos+i character of the string table[r]\n", "If r was length 2 and the pos was length 3 the table could be represented this way:\n | pos+0 | pos+1 | pos+2 |\n----------------------------\nr+0| ???1 | ???2 | ???3 |\n----------------------------\nr+1| ???4 | ???5 | ???6 |\n----------------------------\n\nWhere the ??? represent the data at table[r][pos+i].\ntable[r] returns all values in a row. \nNote: Many programming languages don't have an easy way to get columns. IE: C will give an error instead of returning a column when given table[][pos+1]. \n" ]
[ 1, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001699342_python.txt
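A tiny self-contained illustration of the double indexing discussed above, using rows of strings like the word-search table implied by checkRow:

    table = ["cathat",
             "dogfox"]     # table[r] is one row (here, a string)

    row = table[1]         # "dogfox"
    print row[3]           # "f" -- fourth character of that row
    print table[1][3]      # "f" -- the same lookup written in one step

    # The same idea applies to a list of lists:
    grid = [[10, 20, 30],
            [40, 50, 60]]
    print grid[0][2]       # 30: row 0, column 2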
Q: What is the suggested seed value to use with random.seed()? Simple enough question: I'm using the python random module to generate random integers. I want to know: what is the suggested value to use with the random.seed() function? Currently I am letting this default to the current time, but this is not ideal. It seems like a string literal constant (similar to a password) would also not be ideal/strong. Suggestions? Thanks, -aj UPDATE: The reason I am generating random integers is for generation of test data. The numbers do not need to be reproducible. A: According to the documentation for random.seed: If x is omitted or None, current system time is used; current system time is also used to initialize the generator when the module is first imported. If randomness sources are provided by the operating system, they are used instead of the system time (see the os.urandom() function for details on availability). If you don't pass something to seed, it will try to use operating-system provided randomness sources instead of the time, which is always a better bet. This saves you a bit of work, and is about as good as it's going to get. Regarding availability, the docs for os.urandom tell us: On a UNIX-like system this will query /dev/urandom, and on Windows it will use CryptGenRandom. Cross-platform random seeds are the big win here; you can safely omit a seed and trust that it will be random enough on almost every platform you'll use Python on. Even if Python falls back to the time, there's probably only a millisecond window (or less) to guess the seed. I don't think you'll run into any trouble using the current time anyway -- even then, it's only a fallback. A: For most cases using current time is good enough. Occasionally you need to use a fixed number to generate pseudo random numbers for comparison purposes. A: Setting the seed is for repeatability, not security. If anything, you make the system less secure by having a fixed seed than one that is constantly changing. A: Perhaps it is not a problem in your case, but one problem with using the system time as the seed is that someone who knows roughly when your system was started may be able to guess your seed (by trial) after seeing a few numbers from the sequence. e.g., don't use system time as the seed for your online poker game A: If you are using random for generating test data I would like to suggest that reproducibility can be important. Just think of a use case: for data set X you get some weird behaviour (e.g. a crash). Turns out that data set X shows some feature that was not so apparent from the other data sets Y and Z and uncovers a bug which had escaped your test suites. Knowing the seed is then useful, so that you can precisely reproduce the bug and fix it.
What is the suggested seed value to use with random.seed()?
Simple enough question: I'm using the python random module to generate random integers. I want to know: what is the suggested value to use with the random.seed() function? Currently I am letting this default to the current time, but this is not ideal. It seems like a string literal constant (similar to a password) would also not be ideal/strong. Suggestions? Thanks, -aj UPDATE: The reason I am generating random integers is for generation of test data. The numbers do not need to be reproducible.
[ "According to the documentation for random.seed:\n\nIf x is omitted or None, current system time is used; current system time is also used to initialize the generator when the module is first imported. If randomness sources are provided by the operating system, they are used instead of the system time (see the os.urandom() function for details on availability).\n\nIf you don't pass something to seed, it will try to use operating-system provided randomness sources instead of the time, which is always a better bet. This saves you a bit of work, and is about as good as it's going to get. Regarding availability, the docs for os.urandom tell us:\n\nOn a UNIX-like system this will query /dev/urandom, and on Windows it will use CryptGenRandom.\n\nCross-platform random seeds are the big win here; you can safely omit a seed and trust that it will be random enough on almost every platform you'll use Python on. Even if Python falls back to the time, there's probably only a millisecond window (or less) to guess the seed. I don't think you'll run into any trouble using the current time anyway -- even then, it's only a fallback.\n", "For most cases using current time is good enough. Occasionally you need to use a fixed number to generate pseudo random numbers for comparison purposes.\n", "Setting the seed is for repeatability, not security. If anything, you make the system less secure by having a fixed seed than one that is constantly changing.\n", "Perhaps it is not a problem in your case, but ont problem with using the system time as the seed is that someone who knows roughly when your system was started may be able to guess your seed (by trial) after seeing a few numbers from the sequence.\neg, don't use system time as the seed for your online poker game \n", "If you are using random for generating test data I would like to suggest that reproducibility can be important.\nJust think to an use case: for data set X you get some weird behaviour (eg crash). Turns out that data set X shows some feature that was not so apparent from the other data sets Y and Z and uncovers a bug which had escapend your test suites. Now knowing the seed is useful so that you can precisely reproduce the bug and you can fix it.\n" ]
[ 14, 5, 3, 1, 0 ]
[]
[]
[ "python", "random" ]
stackoverflow_0001703012_python_random.txt
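Putting the advice above into a short sketch: omit the argument for test-data generation (letting Python seed from OS randomness where available, per the docs quoted above), and pass a fixed value only when a run must be reproducible:

    import random

    random.seed()                                    # OS randomness if available, else time
    data = [random.randint(0, 100) for _ in range(5)]

    # Debugging a failure seen with "random" data: a fixed seed replays it exactly.
    random.seed(42)
    first = [random.randint(0, 9) for _ in range(3)]
    random.seed(42)
    second = [random.randint(0, 9) for _ in range(3)]
    assert first == second                           # same seed, same sequence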
Q: Basic Python loop question - problem reading a list from a text file I'm trying to read a list of items from a text file and format with square brackets and separators like this: ['item1','item2', .... 'last_item'] but I'm having trouble with the beginning and end item for which I always get: ...,'last_item','], so I do not want the last ,' to be there. In python I've written: out_list = "['" for line in open(file_in): out_list += line #append the item to the list out_accession_list += "','" #add the separator out_accession_list += "]" #add the final closed bracket return out_list I realize that this is a basic loop question, but I can't think of the best way to do it. Should I use a try/finally statement, should it be a while loop, or should I count the number of lines first and then use a loop with a range? Help much appreciated. Thanks, John A: Read in all your lines and use the string.join() method to join them together. lines = open(file_in).readlines() out_list = "['" + "','".join(lines) + "']" Additionally, join() can take any sequence, so reading the lines isn't necessary. The above code can be simplified as: out_list = "['" + "','".join(open(file_in)) + "']" A: out_list = [] for line in open(file_in): out_list.append("'" + line + "'") return "[" + ",".join(out_list) + "]" A: You can "right strip" the "," from the result before adding the last "]", e.g. using string.rstrip(",") A: Or result = "['" for line in open(file_in): if len(result) > 2: result += "','" result += line result += "']" return result A: Your desired output format is exactly Python's standard printable representation of a list. So an easy solution is to read the file, create a list with each line as a string element (stripping the end-of-line of each), and call the Python built-in function repr to produce a string representation of the list: >>> repr([line.rstrip() for line in open(file_in)]) "['item1', 'item2', 'item3']" A: def do(filepath): out = [] for line in open(filepath, 'r'): out.append("'" + line.strip() + "'") return out.__str__()
Basic Python loop question - problem reading a list from a text file
I'm trying to read a list of items from a text file and format with square brackets and separators like this: ['item1','item2', .... 'last_item'] but I'm having trouble with the beginning and end item for which I always get: ...,'last_item','], so I do not want the last ,' to be there. In python I've written: out_list = "['" for line in open(file_in): out_list += line #append the item to the list out_accession_list += "','" #add the separator out_accession_list += "]" #add the final closed bracket return out_list I realize that this is a basic loop question, but I can't think of the best way to do it. Should I use a try/finally statement, should it be a while loop, or should I count the number of lines first and then use a loop with a range? Help much appreciated. Thanks, John
[ "Read in all your lines and use the string.join() method to join them together.\nlines = open(file_in).readlines()\n\nout_list = \"['\" + \"','\".join(lines) + \"']\"\n\nAdditionally, join() can take any sequence, so reading the lines isn't necessary. The above code can be simplified as:\nout_list = \"['\" + \"','\".join(open(file_in)) + \"']\"\n\n", "out_list = []\nfor line in open(file_in):\n out_list.append(\"'\" + line + \"'\")\nreturn \"[\" + \",\".join(out_list) + \"]\"\n\n", "You \"right strip\" for \",\" the result before adding the last \"]\".\ne.g. use the string.rstrip(\",\")\n", "Or\nresult = \"['\"\nfor line in open(file_in):\n if len(result) > 0:\n result += \"','\" \n result += line\nresult += \"']\"\nreturn result\n\n", "Your desired output format is exactly Python's standard printable representation of a list. So an easy solution is to read the file, create a list with each line as a string element (stripping the end-of-line of each), and call the Python built-in function repr to produce a string representation of the list:\n>>> repr([line.rstrip() for line in open(file_in)])\n\"['item1', 'item2', 'item3']\"\n\n", "def do(filepath):\n out = []\n for line in open(filepath, 'r'):\n out.append(\"'\" + line.strip() + \"'\")\n return out.__str__()\n\n" ]
[ 4, 1, 0, 0, 0, 0 ]
[]
[]
[ "loops", "python" ]
stackoverflow_0001703471_loops_python.txt
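One pitfall worth adding to the answers above: lines read from a file keep their trailing newlines, so stripping them before joining avoids stray line breaks inside the brackets. A combined sketch:

    def format_items(file_in):
        items = [line.strip() for line in open(file_in)]
        return "['" + "','".join(items) + "']"

    # A file with item1, item2, item3 on separate lines yields:
    # ['item1','item2','item3']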
Q: IPython in unbuffered mode Is there a way to run IPython in unbuffered mode? The same way as python -u gives unbuffered IO for the standard python shell A: From Python's man page: -u Force stdin, stdout and stderr to be totally unbuffered. On systems where it matters, also put stdin, stdout and stderr in binary mode. Note that there is internal buffering in xreadlines(), readlines() and file-object iterators ("for line in sys.stdin") which is not influenced by this option. To work around this, you will want to use "sys.stdin.readline()" inside a "while 1:" loop. If using unbuffered mode interferes with readline, it certainly would interfere even more with all the magic editing and auto-completion that iPython gives you. I'm inclined to suspect that's why that command line option isn't in iPython. A: Try: python -u `which ipython` Not sure if it will work, though.
IPython in unbuffered mode
Is there a way to run IPython in unbuffered mode? The same way as python -u gives unbuffered IO for the standard python shell
[ "From Python's man page:\n -u Force stdin, stdout and stderr to be totally unbuffered. On systems\n where it matters, also put stdin, stdout and stderr in binary mode.\n Note that there is internal buffering in xreadlines(), readlines()\n and file-object iterators (\"for line in sys.stdin\") which is not\n influenced by this option. To work around this, you will want to use\n \"sys.stdin.readline()\" inside a \"while 1:\" loop.\n\nIf using unbuffered mode interferes with readline, it certainly would interfere even more with all the magic editing and auto-completion that iPython gives you. I'm inclined to suspect that's why that command line option isn't in iPython.\n", "Try:\npython -u `which ipython`\n\nNot sure if it will work, though.\n" ]
[ 2, 0 ]
[]
[]
[ "ipython", "python" ]
stackoverflow_0001578592_ipython_python.txt
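Where patching the interpreter invocation as above is awkward, an explicit flush after each write gets much of the unbuffered behaviour from inside the session; a minimal sketch:

    import sys

    def say(msg):
        sys.stdout.write(msg + '\n')
        sys.stdout.flush()    # push the text out now, regardless of buffering

    say('progress: step 1 done')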
Q: Python threading test not working EDIT I solved the issue by forking the process instead of using threads. From the comments and links in the comments, I don't think threading is the right move here. Thanks everyone for your assistance. FINISHED EDIT I haven't done much with threading before. I've created a few simple example "Hello World" scripts but nothing that actually did any work. To help me grasp it, I wrote a simple script using the binaries from Nagios to query services like HTTP. This script works, although with a timeout of 1 second, if I have 10 services that time out, the script will take over 10 seconds to run. What I want to do is run all checks in parallel to each other. This should reduce the time it takes to complete. I'm currently getting segfaults but not all the time. Strangely at the point where I check the host in the processCheck function, I can print out all hosts. Just after checking the host though, the hosts variable only prints one or two of the hosts in the set. I have a feeling it's a namespace issue but I'm not sure how to resolve it. I've posted the entire code here sans the MySQL db but a result from the service_list view looks like: (6543L, 'moretesting.com', 'smtp') (6543L, 'moretesting.com', 'ping') (6543L, 'moretesting.com', 'http') from commands import getstatusoutput import MySQLdb import threading import Queue import time def checkHost(x, service): command = {} command['http'] = './plugins/check_http -t 1 -H ' command['smtp'] = './plugins/check_smtp -t 1 -H ' cmd = command[service] cmd += x retval = getstatusoutput(cmd) if retval[0] == 0: return 0 else: return retval[1] def fetchHosts(): hostList = [] cur.execute('SELECT veid, hostname, service from service_list') for row in cur.fetchall(): hostList.append(row) return hostList def insertFailure(veid, hostname, service, error): query = 'INSERT INTO failures (veid, hostname, service, error) ' query += "VALUES ('%s', '%s', '%s', '%s')" % (veid, hostname, service, error) cur.execute(query) cur.execute('COMMIT') def processCheck(host): #If I print the host tuple here I get all hosts/services retval = checkHost(host[1], host[2]) #If I print the host tuple here, I get one host maybe two if retval != 0: try: insertFailure(host[0], host[1], host[2], retval) except: pass else: try: #if service is back up, remove old failure entry query = "DELETE FROM failures WHERE veid='%s' AND service='%s' AND hostname='%s'" % (host[0], host[2], host[1]) cur.execute(query) cur.execute('COMMIT') except: pass return 0 class ThreadClass(threading.Thread): def __init__(self, queue): threading.Thread.__init__(self) self.queue = queue def run(self): processCheck(queue.get()) time.sleep(1) def main(): for host in fetchHosts(): queue.put(host) t = ThreadClass(queue) t.setDaemon(True) t.start() if __name__ == '__main__': conn = MySQLdb.connect('localhost', 'root', '', 'testing') cur = conn.cursor() queue = Queue.Queue() main() conn.close() A: The MySQL DB driver isn't thread safe. You're using the same cursor concurrently from all threads. Try creating a new connection in each thread, or create a pool of connections that the threads can use (e.g. keep them in a Queue, each thread gets a connection, and puts it back when it's done). A: You should be constructing and populating your queue first. When the entire queue is constructed and populated, then you should construct a number of threads which then, each in a loop, poll the queue and process an item on the queue. 
A: You realize Python doesn't do true multi-threading as you would expect on a multi-core processor: See Here And Here Don't expect those 10 things to take 1 second each. Besides, even in true multi-threading there is a little overhead associated with the threads. I'd like to add that this isn't a slur against Python.
Python threading test not working
EDIT I solved the issue by forking the process instead of using threads. From the comments and links in the comments, I don't think threading is the right move here. Thanks everyone for your assistance. FINISHED EDIT I haven't done much with threading before. I've created a few simple example "Hello World" scripts but nothing that actually did any work. To help me grasp it, I wrote a simple script using the binaries from Nagios to query services like HTTP. This script works, although with a timeout of 1 second, if I have 10 services that time out, the script will take over 10 seconds to run. What I want to do is run all checks in parallel to each other. This should reduce the time it takes to complete. I'm currently getting segfaults but not all the time. Strangely at the point where I check the host in the processCheck function, I can print out all hosts. Just after checking the host though, the hosts variable only prints one or two of the hosts in the set. I have a feeling it's a namespace issue but I'm not sure how to resolve it. I've posted the entire code here sans the MySQL db but a result from the service_list view looks like: (6543L, 'moretesting.com', 'smtp') (6543L, 'moretesting.com', 'ping') (6543L, 'moretesting.com', 'http') from commands import getstatusoutput import MySQLdb import threading import Queue import time def checkHost(x, service): command = {} command['http'] = './plugins/check_http -t 1 -H ' command['smtp'] = './plugins/check_smtp -t 1 -H ' cmd = command[service] cmd += x retval = getstatusoutput(cmd) if retval[0] == 0: return 0 else: return retval[1] def fetchHosts(): hostList = [] cur.execute('SELECT veid, hostname, service from service_list') for row in cur.fetchall(): hostList.append(row) return hostList def insertFailure(veid, hostname, service, error): query = 'INSERT INTO failures (veid, hostname, service, error) ' query += "VALUES ('%s', '%s', '%s', '%s')" % (veid, hostname, service, error) cur.execute(query) cur.execute('COMMIT') def processCheck(host): #If I print the host tuple here I get all hosts/services retval = checkHost(host[1], host[2]) #If I print the host tuple here, I get one host maybe two if retval != 0: try: insertFailure(host[0], host[1], host[2], retval) except: pass else: try: #if service is back up, remove old failure entry query = "DELETE FROM failures WHERE veid='%s' AND service='%s' AND hostname='%s'" % (host[0], host[2], host[1]) cur.execute(query) cur.execute('COMMIT') except: pass return 0 class ThreadClass(threading.Thread): def __init__(self, queue): threading.Thread.__init__(self) self.queue = queue def run(self): processCheck(queue.get()) time.sleep(1) def main(): for host in fetchHosts(): queue.put(host) t = ThreadClass(queue) t.setDaemon(True) t.start() if __name__ == '__main__': conn = MySQLdb.connect('localhost', 'root', '', 'testing') cur = conn.cursor() queue = Queue.Queue() main() conn.close()
[ "The MySQL DB driver isn't thread safe. You're using the same cursor concurrently from all threads.\nTry creating a new connection in each thread, or create a pool of connections that the threads can use (e.g. keep them in a Queue, each thread gets a connection, and puts it pack when it's done).\n", "You should be constructing and populating your queue first. When the entire queue is constructed and populated, then you should construct a number of threads which then, each in a loop, polls the queue and processes an item on the queue.\n", "You realize Python doesn't do true multi-threading as you would expect on a multi-core processor:\nSee Here\nAnd Here\nDon't expect those 10 things to take 1 second each. Besides, even in true multi-threading there is a little overhead associated with the threads. I'd like to add that this isn't a slur against Python.\n" ]
[ 8, 0, 0 ]
[]
[]
[ "multithreading", "python" ]
stackoverflow_0001704085_multithreading_python.txt
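To make the connection-per-thread fix from the accepted answer above concrete, here is a rough sketch; process_check and fetch_hosts are hypothetical stand-ins for the poster's own functions:

    import threading
    import Queue
    import MySQLdb

    def worker(queue):
        # Each thread opens its own connection; MySQLdb connections and
        # cursors must not be shared across threads.
        conn = MySQLdb.connect('localhost', 'root', '', 'testing')
        cur = conn.cursor()
        while True:
            try:
                host = queue.get_nowait()
            except Queue.Empty:
                break
            process_check(cur, host)   # hypothetical per-host check using this cursor
            conn.commit()
        conn.close()

    queue = Queue.Queue()
    for host in fetch_hosts():         # hypothetical: populate the queue up front
        queue.put(host)

    threads = [threading.Thread(target=worker, args=(queue,)) for _ in range(5)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()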
Q: Python (and Django) best import practices Out of the various ways to import code, are there some ways that are preferable to use, compared to others? This link http://effbot.org/zone/import-confusion.htm in short states that from foo.bar import MyClass is not the preferred way to import MyClass under normal circumstances or unless you know what you are doing. (Rather, a better way would look like: import foo.bar as foobaralias and then in the code, to access MyClass use foobaralias.MyClass ) In short, it seems that the above-referenced link is saying it is usually better to import everything from a module, rather than just parts of the module. However, that article I linked is really old. I've also heard that it is better, at least in the context of Django projects, to instead only import the classes you want to use, rather than the whole module. It has been said that this form helps avoid circular import errors or at least makes the django import system less fragile. It was pointed out that Django's own code seems to prefer "from x import y" over "import x". Assuming the project I am working on doesn't use any special features of __init__.py ... (all of our __init__.py files are empty), what import method should I favor, and why? A: First, and primary, rule of imports: never ever use from foo import *. The article is discussing the issue of cyclical imports, which still exists today in poorly-structured code. I dislike cyclical imports; their presence is a strong sign that some module is doing too much, and needs to be split up. If for whatever reason you need to work with code with cyclical imports which cannot be re-arranged, import foo is the only option. For most cases, there's not much difference between import foo and from foo import MyClass. I prefer the second, because there's less typing involved, but there's a few reasons why I might use the first: The module and class/value have different names. It can be difficult for readers to remember where a particular import is coming from, when the imported value's name is unrelated to the module. Good: import myapp.utils as utils; utils.frobnicate() Good: import myapp.utils as U; U.frobnicate() Bad: from myapp.utils import frobnicate You're importing a lot of values from one module. Save your fingers, and reader's eyes. Bad: from myapp.utils import frobnicate, foo, bar, baz, MyClass, SomeOtherClass, # yada yada A: For me, it's dependent on the situation. If it's a uniquely named method/class (i.e., not process() or something like that), and you're going to use it a lot, then save typing and just do from foo import MyClass. If you're importing multiple things from one module, it's probably better to just import the module, and do module.bar, module.foo, module.baz, etc., to keep the namespace clean. You also said It has been said that this form helps avoid circular import errors or at least makes the django import system less fragile. It was pointed out that Django's own code seems to prefer "from x import y" over "import x". I don't see how one way or the other would help prevent circular imports. The reason is that even when you do from x import y, ALL of x is imported. Only y is brought into the current namespace, but the entire module x is processed. 
Try out this example: In test.py, put the following: def a(): print "a" print "hi" def b(): print "b" print "bye" Then in 'runme.py', put: from test import b b() Then just do python runme.py You'll see the following output: hi bye b So everything in test.py was run, even though you only imported b A: The advantage of the latter is that the origin of MyClass is more explicit. The former puts MyClass in the current namespace so the code can just use MyClass unqualified. So it's less obvious to someone reading the code where MyClass is defined.
Python (and Django) best import practices
Out of the various ways to import code, are there some ways that are preferable to use, compared to others? This link http://effbot.org/zone/import-confusion.htm in short states that from foo.bar import MyClass is not the preferred way to import MyClass under normal circumstances or unless you know what you are doing. (Rather, a better way would look like: import foo.bar as foobaralias and then in the code, to access MyClass use foobaralias.MyClass ) In short, it seems that the above-referenced link is saying it is usually better to import everything from a module, rather than just parts of the module. However, that article I linked is really old. I've also heard that it is better, at least in the context of Django projects, to instead only import the classes you want to use, rather than the whole module. It has been said that this form helps avoid circular import errors or at least makes the django import system less fragile. It was pointed out that Django's own code seems to prefer "from x import y" over "import x". Assuming the project I am working on doesn't use any special features of __init__.py ... (all of our __init__.py files are empty), what import method should I favor, and why?
[ "First, and primary, rule of imports: never ever use from foo import *.\nThe article is discussing the issue of cyclical imports, which still exists today in poorly-structured code. I dislike cyclical imports; their presence is a strong sign that some module is doing too much, and needs to be split up. If for whatever reason you need to work with code with cyclical imports which cannot be re-arranged, import foo is the only option.\nFor most cases, there's not much difference between import foo and from foo import MyClass. I prefer the second, because there's less typing involved, but there's a few reasons why I might use the first:\n\nThe module and class/value have different names. It can be difficult for readers to remember where a particular import is coming from, when the imported value's name is unrelated to the module.\n\nGood: import myapp.utils as utils; utils.frobnicate()\nGood: import myapp.utils as U; U.frobnicate()\nBad: from myapp.utils import frobnicate\n\nYou're importing a lot of values from one module. Save your fingers, and reader's eyes.\n\nBad: from myapp.utils import frobnicate, foo, bar, baz, MyClass, SomeOtherClass, # yada yada\n\n\n", "For me, it's dependent on the situation. If it's a uniquely named method/class (i.e., not process() or something like that), and you're going to use it a lot, then save typing and just do from foo import MyClass. \nIf you're importing multiple things from one module, it's probably better to just import the module, and do module.bar, module.foo, module.baz, etc., to keep the namespace clean.\nYou also said \n\nIt has been said that this form helps avoid circular import errors or at least makes the django import system less fragile. It was pointed out that Django's own code seems to prefer \"from x import y\" over \"import x\".\n\nI don't see how one way or the other would help prevent circular imports. The reason is that even when you do from x import y, ALL of x is imported. Only y is brought into the current namespace, but the entire module x is processed. Try out this example:\nIn test.py, put the following:\ndef a():\n print \"a\"\n\nprint \"hi\"\n\ndef b():\n print \"b\"\n\nprint \"bye\"\n\nThen in 'runme.py', put:\nfrom test import b\n\nb()\n\nThen just do python runme.py\nYou'll see the following output:\nhi\nbye\nb\n\nSo everything in test.py was run, even though you only imported b\n", "The advantage of the latter is that the origin of MyClass is more explicit. The former puts MyClass in the current namespace so the code can just use MyClass unqualified. So it's less obvious to someone reading the code where MyClass is defined.\n" ]
[ 13, 6, 2 ]
[]
[]
[ "django", "python", "python_import" ]
stackoverflow_0001704058_django_python_python_import.txt
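A minimal demonstration of why "import foo is the only option" under a cycle, as the first answer above puts it: during a circular import the half-initialized module object already exists, but its names may not be bound yet. Two hypothetical files:

    # a.py
    import b                 # binds the module object; attributes resolved later

    def from_a():
        return b.from_b()    # looked up lazily, at call time, when b is complete

    # b.py
    import a                 # safe for the same reason

    def from_b():
        return "b"

    # Replacing "import a" in b.py with "from a import from_a" can raise
    # ImportError: while a.py is still mid-import, from_a is not yet defined.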
Q: Speeding Up the First Page Load in django When I update the code on my website I (naturally) restart my apache instance so that the changes will take effect. Unfortunately the first page served by each apache instance is quite slow while it loads everything into RAM for the first time (5-7 sec for this particular site). Subsequent requests only take 0.5 - 1.5 seconds so I would like to eliminate this effect for my users. Is there a better way to get everything loaded into RAM than to do a wget x times (where x is the number of apache instances defined by ServerLimit in my http.conf)? Writing a restart script that restarts apache and runs wget 5 times seems kind of hacky to me. Thanks! A: The default for Apache/mod_wsgi is to only load application code on the first request to a process which requires that application. So, the first step is to configure mod_wsgi to preload your code when the process starts and not only on the first request. This can be done in mod_wsgi 2.X using the WSGIImportScript directive. Presuming daemon mode, which is the better option anyway, this means you would have something like: # Define process group. WSGIDaemonProcess django display-name=%{GROUP} # Mount application. WSGIScriptAlias / /usr/local/django/mysite/apache/django.wsgi # Ensure application preloaded on process start. Must specify the # process group and application group (Python interpreter) to use. WSGIImportScript /usr/local/django/mysite/apache/django.wsgi \ process-group=django application-group=%{GLOBAL} <Directory /usr/local/django/mysite/apache> # Ensure application runs in same process group and application # group as was preloaded into on process start. WSGIProcessGroup django WSGIApplicationGroup %{GLOBAL} Order deny,allow Allow from all </Directory> When you have made a code change, instead of touching the WSGI script file, which is only checked on the next request, send a SIGINT signal to the processes in the daemon process group instead. With the 'display-name' option to WSGIDaemonProcess you can identify the processes by using a BSD style 'ps' program. With 'display-name' set to '%{GROUP}', the 'ps' output should show '(wsgi:django)' as the process name. Identify the process ID and do: kill -SIGINT pid Swap 'pid' with the actual process ID. If there is more than one process in the daemon process group, send the signal to all of them. Not sure if 'killall' can be used to do this in one step. I had problems doing it on MacOS X. In mod_wsgi 3.X the configuration can be simpler; you can use instead: # Define process group. WSGIDaemonProcess django display-name=%{GROUP} # Mount application and designate which process group and # application group (Python interpreter) to run it in. As # process group and application group named, this will have # side effect of preloading application on process start. WSGIScriptAlias / /usr/local/django/mysite/apache/django.wsgi \ process-group=django application-group=%{GLOBAL} <Directory /usr/local/django/mysite/apache> Order deny,allow Allow from all </Directory> That is, there is no need to use a separate WSGIImportScript directive, as you can specify the process group and application group as arguments to WSGIScriptAlias instead, with the side effect that it will preload the application. A: How are you running Django (mod_python vs mod_wsgi)? If you're running mod_wsgi (in daemon mode), restarting Apache isn't necessary to reload your application. All you need to do is update the mtime of your wsgi script (which is done easily with touch). 
mod_wsgi's documentation has a pretty thorough explanation of the process: ReloadingSourceCode
Speeding Up the First Page Load in django
When I update the code on my website I (naturally) restart my apache instance so that the changes will take effect. Unfortunately the first page served by each apache instance is quite slow while it loads everything into RAM for the first time (5-7 sec for this particular site). Subsequent requests only take 0.5 - 1.5 seconds so I would like to eliminate this effect for my users. Is there a better way to get everything loaded into RAM than to do a wget x times (where x is the number of apache instances defined by ServerLimit in my http.conf)? Writing a restart script that restarts apache and runs wget 5 times seems kind of hacky to me. Thanks!
[ "The default for Apache/mod_wsgi is to only load application code on first request to a process which requires that applications. So, first step is to configure mod_wsgi to preload your code when the process starts and not only the first request. This can be done in mod_wsgi 2.X using the WSGIImportScript directive.\nPresuming daemon mode, which is better option anyway, this means you would have something like:\n# Define process group.\n\nWSGIDaemonProcess django display-name=%{GROUP}\n\n# Mount application.\n\nWSGIScriptAlias / /usr/local/django/mysite/apache/django.wsgi\n\n# Ensure application preloaded on process start. Must specify the\n# process group and application group (Python interpreter) to use.\n\nWSGIImportScript /usr/local/django/mysite/apache/django.wsgi \\\n process-group=django application-group=%{GLOBAL}\n\n<Directory /usr/local/django/mysite/apache>\n\n # Ensure application runs in same process group and application\n # group as was preloaded into on process start.\n\n WSGIProcessGroup django\n WSGIApplicationGroup %{GLOBAL}\n\n Order deny,allow\n Allow from all\n</Directory>\n\nWhen you have made a code change, instead of touch the WSGI script file, which is only checked on the next request, send a SIGINT signal to the processes in the daemon process group instead.\nWith the 'display-name' option to WSGIDaemonProcess you can identify which processes by using BSD style 'ps' program. With 'display-name' set to '%{GROUP}', the 'ps' output should show '(wsgi:django)' as process name. Identify the process ID and do:\nkill -SIGINT pid\n\nSwap 'pid' with actual process ID. If more than one process in daemon process group, send signal to all of them.\nNot sure if 'killall' can be used to do this in one step. I had problem with doing it on MacOS X.\nIn mod_wsgi 3.X the configuration can be simpler and can use instead:\n# Define process group.\n\nWSGIDaemonProcess django display-name=%{GROUP}\n\n# Mount application and designate which process group and\n# application group (Python interpreter) to run it in. As\n# process group and application group named, this will have\n# side effect of preloading application on process start.\n\nWSGIScriptAlias / /usr/local/django/mysite/apache/django.wsgi \\\n process-group=django application-group=%{GLOBAL}\n\n<Directory /usr/local/django/mysite/apache>\n Order deny,allow\n Allow from all\n</Directory>\n\nThat is, no need to use separate WSGIImportScript directive as can specific process group and application group as arguments to WSGIScriptAlias instead with side effect that it will preload application.\n", "How are you running Django (mod_python vs mod_wsgi)? \nIf you're running mod_wsgi (in daemon mode), restarting Apache isn't necessary to reload your application. All you need to do is update the mtime of your wsgi script (which is done easily with touch). \nmod_wsgi's documentation has a pretty thorough explanation of the process: \nReloadingSourceCode\n" ]
[ 32, 3 ]
[]
[]
[ "django", "mod_wsgi", "pageload", "performance", "python" ]
stackoverflow_0001702562_django_mod_wsgi_pageload_performance_python.txt
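If a warm-up step is still wanted after a restart (the wget idea from the question above), it fits in a few lines of Python; the URL and process count are assumptions to adjust, and, like wget, repeated hits are not guaranteed to land on distinct Apache processes:

    import urllib2

    SITE = 'http://localhost/'    # placeholder: your site's root URL
    PROCESSES = 5                 # placeholder: expected process count

    for _ in range(PROCESSES):
        try:
            urllib2.urlopen(SITE).read()   # first hit loads the code into a process
        except urllib2.URLError:
            pass                           # a failed warm-up request is harmless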
Q: Why is Maya 2009 TreeView control giving a syntax error on drag? I'm using the TreeView control in Maya 2009 but I'm getting a syntax error on drag and drop. My code is as follows (simplified for brevity): class View(event.Dispatcher): def __init__(self): self.window = cmds.window() tree_view = cmds.treeView( numberOfButtons=1, allowReparenting=True, dragAndDropCommand=self.tree_view_onDrag ) cmds.showWindow(self.window) def tree_view_onDrag(self, dropped_items, old_parents, old_indexes, new_parent, new_indexes, item_before, item_after, *args): print "worked" When I drag and drop an element, the following command is executed in the console: <bound method View.tree_view_onDrag of {"layer 3"} {""} {1} "layer 1" {0} "" "layer 2"; And I get the following error: // Error: <bound method View.tree_view_onDrag of {"layer 3"} {""} {1} "layer 1" {0}€ // // Error: Line 1.1: Syntax error // EDIT: It turns out that the issues I was having were due to the treeView still implementing MEL function calls on most of its event callbacks. The errors above are being thrown by the MEL interpreter as it attempts to feed arguments to a command name. A: See http://download.autodesk.com/us/maya/2009help/CommandsPython/treeView.html: dragAndDropCommand is a STRING -- you're passing a bound method, Maya's using its repr. I'm not sure, but I suspect that string should name a top-level (module-level) function, not a bound method. A: As of Maya 2010 the treeView widget appears to still require a string name of a mel procedure to be used for some of its callbacks, but not for others. For example, the dragCallback and dropCallback do work as expected, but the selectCommand and others don't. Many other widgets do accept a python function for their callbacks. Even though the docs list the arguments for some treeView callbacks as strings, it isn't stated that the string must be a mel procedure name, and it is certainly inconsistent.
Why is Maya 2009 TreeView control giving a syntax error on drag?
I'm using the TreeView control in Maya 2009 but I'm getting a syntax error on drag and drop. My code is as follows (simplified for brevity): class View(event.Dispatcher): def __init__(self): self.window = cmds.window() tree_view = cmds.treeView( numberOfButtons=1, allowReparenting=True, dragAndDropCommand=self.tree_view_onDrag ) cmds.showWindow(self.window) def tree_view_onDrag(self, dropped_items, old_parents, old_indexes, new_parent, new_indexes, item_before, item_after, *args): print "worked" When I drag and drop an element, the following command is executed in the console: <bound method View.tree_view_onDrag of {"layer 3"} {""} {1} "layer 1" {0} "" "layer 2"; And I get the following error: // Error: <bound method View.tree_view_onDrag of {"layer 3"} {""} {1} "layer 1" {0}€ // // Error: Line 1.1: Syntax error // EDIT: It turns out that the issues I was having were due to the treeView still implementing MEL function calls on most of its event callbacks. The errors above are being thrown by the MEL interpreter as it attempts to feed arguments to a command name.
[ "See http://download.autodesk.com/us/maya/2009help/CommandsPython/treeView.html: dragAndDropCommand is a STRING -- you're passing a bound method, Maya's using its repr. I'm not sure, but I suspect that string should name a top-level (module-level) function, not a bound method.\n", "As of Maya 2010 the treeView widget appears to still require a string name of a mel procedure to be used for some of its callbacks, but not for others. For example, the dragCallback and dropCallback do work as expected, but the selectCommand and others don't. Many other widgets do accept a python function for their callbacks. Even though the docs list the arguments for some treeView callbacks as strings, it isn't stated that the string must be a mel procedure name, and it is certainly inconsistent.\n" ]
[ 1, 0 ]
[]
[]
[ "maya", "python", "syntax_error", "treeview" ]
stackoverflow_0000820697_maya_python_syntax_error_treeview.txt
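Given the answers above, a common workaround sketch for Maya widgets that insist on MEL callback names: define a global MEL procedure that forwards to a module-level Python function, and pass that procedure's name as the dragAndDropCommand string. The MEL proc's argument list below simply mirrors the parameters named in the question and the module name is hypothetical; check the treeView docs for the exact signature your Maya version expects.

import maya.cmds as cmds
import maya.mel as mel

def tree_view_on_drag(*args):
    # Module-level Python callback the MEL wrapper forwards to.
    print "drag handled"

# Hypothetical MEL forwarder; adjust the argument list to match the docs.
mel.eval('''
global proc myTreeViewDrag(string $items[], string $oldParents[],
                           int $oldIndexes[], string $newParent,
                           int $newIndexes[], string $before, string $after)
{
    python("my_view_module.tree_view_on_drag()");
}
''')

tree_view = cmds.treeView(numberOfButtons=1, allowReparenting=True,
                          dragAndDropCommand="myTreeViewDrag")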
Q: python: xml.etree.ElementTree, removing "namespaces" I like the way ElementTree parses XML, in particular the XPath feature. I have XML output from an application with nested tags. I'd like to access these tags by name without specifying the namespace. Is it possible? For example: root.findall("/molpro/job") instead of: root.findall("{http://www.molpro.net/schema/molpro2006}molpro/{http://www.molpro.net/schema/molpro2006}job") A: At least with lxml2, it's possible to reduce this overhead somewhat: root.findall("/n:molpro/n:job", namespaces=dict(n="http://www.molpro.net/schema/molpro2006")) A: You could write your own function to wrap the nasty-looking bits, for example: def my_xpath(doc, ns, xp): num = xp.count('/') new_xp = xp.replace('/', '/{%s}') ns_tup = (ns,) * num return doc.findall(new_xp % ns_tup) namespace = 'http://www.molpro.net/schema/molpro2006' my_xpath(root, namespace, '/molpro/job') Not that much fun, I admit, but at least you will be able to read your xpath expressions.
python: xml.etree.ElementTree, removing "namespaces"
I like the way ElementTree parses XML, in particular the XPath feature. I have XML output from an application with nested tags. I'd like to access these tags by name without specifying the namespace. Is it possible? For example: root.findall("/molpro/job") instead of: root.findall("{http://www.molpro.net/schema/molpro2006}molpro/{http://www.molpro.net/schema/molpro2006}job")
[ "At least with lxml2, it's possible to reduce this overhead somewhat:\nroot.findall(\"/n:molpro/n:job\",\n namespaces=dict(n=\"http://www.molpro.net/schema/molpro2006\"))\n\n", "You could write your own function to wrap the nasty looking bits for example:\ndef my_xpath(doc, ns, xp);\n num = xp.count('/')\n new_xp = xp.replace('/', '/{%s}')\n ns_tup = (ns,) * num\n doc.findall(new_xp % ns_tup)\n\nnamespace = 'http://www.molpro.net/schema/molpro2006'\nmy_xpath(root, namespace, '/molpro/job')\n\nNot that much fun I admit but a least you will be able to read your xpath expressions.\n" ]
[ 8, 5 ]
[]
[]
[ "python", "xml" ]
stackoverflow_0001703882_python_xml.txt
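A further sketch in the same spirit: if the document only uses a single namespace, you can strip the '{uri}' prefixes once after parsing and then use plain tag names everywhere. The filename is illustrative, and note that this discards the namespace information.

import xml.etree.ElementTree as ET

def strip_namespaces(tree):
    # Rewrite '{uri}tag' into plain 'tag' in place.
    for el in tree.getiterator():   # tree.iter() on ElementTree 1.3+
        if '}' in el.tag:
            el.tag = el.tag.split('}', 1)[1]
    return tree

root = strip_namespaces(ET.parse('molpro_output.xml')).getroot()
jobs = root.findall('job')          # root is <molpro>; no namespace noise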
Q: Abandoned Apache process, how long will it go on? So let's say there's a server process that takes way too long. The client complains that it "times out." Correct me if I'm wrong, but this particular timeout could have to do with apache's timeout setting, but not necessarily. I believe this to be the case because when testing the page in question we couldn't get it to time out reliably - mostly the browser would just spin for as long as it took. The timeout setting would come into effect if there were issues with the connection to the client, as described in the documentation. But if the connection was fine, it would be up to the client to close the connection (I believe). I assumed this also meant that if the client closed their browser, Apache would hit the timeout limit (in my case, 300 seconds), and kill the process. This doesn't seem to be the case. Here's how I tested it: I added a while loop to some code on the server: too_long = 2000 tstart = time.time() f = open('/tmp/timeout_test.txt', 'w') while True: time.sleep(100) elapsed = time.time() - tstart f.write('Loop, %s elapsed\n' % elapsed) if elapsed > too_long: break I then opened the web page to launch that loop, and ran netstat on the server: ~$ netstat -np | grep ESTAB | grep apache tcp 0 0 10.102.123.6:443 10.102.119.101:53519 ESTABLISHED 16534/apache2 tcp 0 0 127.0.0.1:60299 127.0.0.1:5432 ESTABLISHED 16534/apache2 (that's me at 10.102.119.101, the server is at 10.102.123.6) I then closed my browser and reran that netstat line: ~% netstat -np | grep ESTAB | grep apache tcp 0 0 127.0.0.1:60299 127.0.0.1:5432 ESTABLISHED 16534/apache2 My connection disappeared, but the server was still in the loop, as I could confirm by running: ~% lsof | grep timeout apache2 16534 www-data 14w REG 8,1 0 536533 /tmp/timeout_test.txt meaning the apache process still had that file open. For the next 2000 seconds, when I ran: ~% cat /tmp/timeout_test.txt I got nothing. After 2000 seconds the netstat line produced nothing at all, and the tmp file was filled out with the output from the while loop. So it seems that the Apache process just does what it was asked to do, regardless of the client connection? And what is that loopback connection about? A: Correct. In a C apache module you can add a check like: /* r is the 'request_rec' object from apache */ if (r->connection->aborted) { /* stop processing and return */ } to verify that the client is still connected. Probably the python interface has something similar. As for the loopback connection, it is a connection to a postgresql database kept open for as long as that loop is running.
Abandoned Apache process, how long will it go on?
So let's say there's a server process that takes way too long. The client complains that it "times out." Correct me if I'm wrong, but this particular timeout could have to do with apache's timeout setting, but not necessarily. I believe this to be the case because when testing the page in question we couldn't get it to time out reliably - mostly the browser would just spin for as long as it took. The timeout setting would come into effect if there were issues with the connection to the client, as described in the documentation. But if the connection was fine, it would be up to the client to close the connection (I believe). I assumed this also meant that if the client closed their browser, Apache would hit the timeout limit (in my case, 300 seconds), and kill the process. This doesn't seem to be the case. Here's how I tested it: I added a while loop to some code on the server: too_long = 2000 tstart = time.time() f = open('/tmp/timeout_test.txt', 'w') while True: time.sleep(100) elapsed = time.time() - tstart f.write('Loop, %s elapsed\n' % elapsed) if elapsed > too_long: break I then opened the web page to launch that loop, and ran netstat on the server: ~$ netstat -np | grep ESTAB | grep apache tcp 0 0 10.102.123.6:443 10.102.119.101:53519 ESTABLISHED 16534/apache2 tcp 0 0 127.0.0.1:60299 127.0.0.1:5432 ESTABLISHED 16534/apache2 (that's me at 10.102.119.101, the server is at 10.102.123.6) I then closed my browser and reran that netstat line: ~% netstat -np | grep ESTAB | grep apache tcp 0 0 127.0.0.1:60299 127.0.0.1:5432 ESTABLISHED 16534/apache2 My connection disappeared, but the server was still in the loop, as I could confirm by running: ~% lsof | grep timeout apache2 16534 www-data 14w REG 8,1 0 536533 /tmp/timeout_test.txt meaning the apache process still had that file open. For the next 2000 seconds, when I ran: ~% cat /tmp/timeout_test.txt I got nothing. After 2000 seconds the netstat line produced nothing at all, and the tmp file was filled out with the output from the while loop. So it seems that the Apache process just does what it was asked to do, regardless of the client connection? And what is that loopback connection about?
[ "Correct. In a C apache module you can add a check like:\n/* r is the 'request_rec' object from apache */\nif (r->connection->aborted) {\n /* stop processing and return */\n}\n\nto verify that the client is still connected. Probably the python interface has something similar.\nAs for the loopback connection, it is a connection to a postgresql database kept open for as long as that loop is running.\n" ]
[ 1 ]
[]
[]
[ "apache", "netstat", "python", "timeout" ]
stackoverflow_0001704710_apache_netstat_python_timeout.txt
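A hedged Python-side sketch of the same check for mod_python, which the answer only guesses at: the connection object exposes an aborted flag mirroring conn_rec->aborted. Note that Apache typically only notices a vanished client when it attempts I/O on the connection, so writing periodically (and catching the failed write) is the practical way to detect the disconnect; verify the attribute against your mod_python version.

from mod_python import apache
import time

def handler(req):
    req.content_type = 'text/plain'
    deadline = time.time() + 2000
    while time.time() < deadline:
        time.sleep(1)                         # stands in for a slice of real work
        try:
            req.write('still working...\n')   # a failed write flags the abort
        except IOError:
            return apache.OK                  # client closed the connection
        if req.connection.aborted:            # belt and braces
            return apache.OK
    return apache.OK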
Q: Find images with similar color palette with Python Suppose there are 10,000 JPEG and PNG images in a gallery. How can you find all images with color palettes similar to a selected image's, sorted by descending similarity? A: Build a color histogram for each image. Then when you want to match an image to the collection, simply order the list by how close each histogram is to your selected image's histogram. The number of buckets will depend on how accurate you want to be. The type of data combined to make a bucket will define how you prioritize your search. For example, if you are most interested in hue, then you can define which bucket each individual pixel of the image goes into as: def bucket_from_pixel(r, g, b): hue = hue_from_rgb(r, g, b) # [0, 360) return (hue * NUM_BUCKETS) / 360 If you also want a general matcher, then you can pick the bucket based upon the full RGB value. Using PIL, you can use the built-in histogram function. The closeness of two histograms can be calculated using any distance measure you want. For example, an L1 distance could be: hist_sel = normalize(sel.histogram()) hist = normalize(o.histogram()) # These normalized histograms should be stored dist = sum(abs(a - b) for a, b in zip(hist_sel, hist)) an L2 would be: dist = sqrt(sum((a - b) ** 2 for a, b in zip(hist_sel, hist))) Normalize just forces the sum of the histogram to equal some constant value (1.0 works fine). This is important so that large images can be correctly compared to small images. If you're going to use L1 distances, then you should use an L1 measure in normalize. If L2, then L2.
Find images with similar color palette with Python
Suppose there are 10,000 JPEG and PNG images in a gallery. How can you find all images with color palettes similar to a selected image's, sorted by descending similarity?
[ "Build a color histogram for each image. Then when you want to match an image to the collection, simply order the list by how close their histogram is to your selected image's histogram.\nThe number of buckets will depend on how accurate you want to be. The type of data combined to make a bucket will define how you prioritize your search.\nFor example, if you are most interested in hue, then you can define which bucket your each individual pixel of the image goes into as:\ndef bucket_from_pixel(r, g, b):\n hue = hue_from_rgb(r, g, b) # [0, 360)\n return (hue * NUM_BUCKETS) / 360\n\nIf you also want a general matcher, then you can pick the bucket based upon the full RGB value.\nUsing PIL, you can use the built-in histogram function. The \"closeness\" histograms can be calculated using any distance measure you want. For example, an L1 distance could be:\nhist_sel = normalize(sel.histogram())\nhist = normalize(o.histogram()) # These normalized histograms should be stored\n\ndist = sum([abs(x) for x in (hist_sel - hist)])\n\nan L2 would be:\ndist = sqrt(sum([x*x for x in (hist_sel - hist)]))\n\nNormalize just forces the sum of the histogram to equal some constant value (1.0 works fine). This is important so that large images can be correctly compared to small images. If you're going to use L1 distances, then you should use an L1 measure in normalize. If L2, then L2.\n" ]
[ 11 ]
[]
[]
[ "colors", "image", "python" ]
stackoverflow_0001704793_colors_image_python.txt
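An end-to-end sketch of the ranking idea, assuming PIL; the normalization and L1 distance follow the answer above, the file paths are illustrative, and ascending distance corresponds to descending similarity.

from PIL import Image

def normalized_hist(path):
    hist = Image.open(path).convert('RGB').histogram()  # 768 buckets (256 per channel)
    total = float(sum(hist))
    return [h / total for h in hist]

def l1(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def rank_gallery(selected_path, gallery_paths):
    target = normalized_hist(selected_path)
    scored = sorted((l1(target, normalized_hist(p)), p) for p in gallery_paths)
    return [p for dist, p in scored]   # most similar first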
Q: List of values to a sound file I'm trying to engineer in python a way of transforming a list of integer values between 0-255 into representative equivalent tones from 1500-2200Hz. Timing information (at 1200Hz) is given by the (-1),(-2) and (-3) values. I have created a function that generates a .wav file, and I call this function with the parameters of each tone. I need to create a 'stream' either by concatenating lots of individual tones into one output file or creating some way of running through the entire list and creating a separate file. Or by some other crazy function I don't know of.... The timing information will vary in duration but the information bits (0-255) will all be fixed length. A sample of the list is shown below: [-2, 3, 5, 7, 7, 7, 16, 9, 10, 21, 16, -1, 19, 13, 8, 8, 0, 5, 9, 21, 19, 11, -1, 11, 16, 19, 5, 21, 34, 39, 46, 58, 50, -1, 35, 46, 17, 28, 23, 19, 8, 2, 13, 12, -1, 9, 6, 8, 11, 2, 3, 2, 13, 14, 42, -1, 35, 41, 46, 55, 73, 69, 56, 47, 45, 26, -1, -3] The current solution I'm thinking of involves opening the file, checking the next value in the list using an 'if' statement to check whether the bit is timing (-ve) and if not: run an algorithm to see what freq needs to be generated and add the tone to the output file. Continue until -3 or end of list. Can anyone advise on how this complete output file might be created, or offer any suggestions? I'm new to programming so please be gentle. Thanks in advance A: Looks like you're trying to reinvent the wheel, be careful... If you want to generate music from arrays then you can have a look at pyaudiere, a simple wrapper upon the audiere library. See the docs for how to open an array, but it should look like this: import audiere d = audiere.open_device() s = d.open_array(buff,fs) s.play() the documentation for this call is: open_array(buffer, fs) : Opens a sound buffer for playback, and returns an OutputStream object for it. The buffer should be a NumPy array of Float32's with one or two columns for mono or stereo playback. The second parameter is the sampling frequency. Values outside the range +-1 will be clipped. A: All you need to do is just append the new data on the end of your wav file. So if you haven't closed your file, just keep writing to it, or if you have, reopen it in append mode (w = open(myfile, 'ab')) and write the new data. For this to sound reasonable without clicks and such, the trick will be to make the waveform continuous from one frequency to the next. Assuming that you're using sine waves of the same amplitude, you need to start each sine wave with the same phase that you ended the previous one with. You can do this either by playing with the length of the waveform to make sure you always end at, say, zero phase and then always start at zero phase, or by explicitly including the phase in the sine wave. A: I'm not sure I understand exactly what you're asking, but I'll try to answer. I wouldn't mess around with the low-level WAV format if I didn't have to. Just use Audiolab for this. Initialize an empty song NumPy 1-D array Open your file of numbers Use the if statement as you said, to detect if it's a negative or positive number Generate a "snippet" of tone according to your formula (which I don't really understand from the description). First generate a timebase with something like t = linspace(0,1,num=48000) Then generate the tone with something like a = sin(2*pi*f*t) Concatenate the snippet onto the rest of the array with something like song = concatenate((song,a)) Loop through the file to create and concatenate each snippet Write to a WAV file using something like wavwrite(song, 'filename.wav', fs, enc) Did you think up this format of tones and timing yourself or is it something created by others?
List of values to a sound file
I'm trying to engineer in python a way of transforming a list of integer values between 0-255 into representative equivalent tones from 1500-2200Hz. Timing information (at 1200Hz) is given by the (-1),(-2) and (-3) values. I have created a function that generates a .wav file, and I call this function with the parameters of each tone. I need to create a 'stream' either by concatenating lots of individual tones into one output file or creating some way of running through the entire list and creating a separate file. Or by some other crazy function I don't know of.... The timing information will vary in duration but the information bits (0-255) will all be fixed length. A sample of the list is shown below: [-2, 3, 5, 7, 7, 7, 16, 9, 10, 21, 16, -1, 19, 13, 8, 8, 0, 5, 9, 21, 19, 11, -1, 11, 16, 19, 5, 21, 34, 39, 46, 58, 50, -1, 35, 46, 17, 28, 23, 19, 8, 2, 13, 12, -1, 9, 6, 8, 11, 2, 3, 2, 13, 14, 42, -1, 35, 41, 46, 55, 73, 69, 56, 47, 45, 26, -1, -3] The current solution I'm thinking of involves opening the file, checking the next value in the list using an 'if' statement to check whether the bit is timing (-ve) and if not: run an algorithm to see what freq needs to be generated and add the tone to the output file. Continue until -3 or end of list. Can anyone advise on how this complete output file might be created, or offer any suggestions? I'm new to programming so please be gentle. Thanks in advance
[ "Looks like you're trying to reinvent the wheel, be careful...\nIf you want to generate music from arrays then you can have a look at pyaudiere, a simple wrapper upon the audiere library. See the docs for how to open an array but it looks should like this : \nimport audiere\nd = audiere.open_device()\ns = d.open_array(buff,fs)\ns.play()\n\nthe documentation for this call is:\nopen_array(buffer, fs) :\nOpens a sound buffer for playback, and returns an OutputStream object for it. The buffer should be a NumPy array of Float32's with one or two columns for mono of stereo playback. The second parameter is the sampling frequency. Values outside the range +-1 will be clipped. \n", "All you need to do is just append the new data on the end of your wav file. So if you haven't closed your file, just keep writing to it, or if you have, reopen it in append mode (w = open(myfile, 'ba')) and write the new data.\nFor this to sound reasonable without clicks and such, the trick will be to make the waveform continuous from one frequency to the next. Assuming that you're using sine waves of the same amplitude, you need to start each sine wave with the same phase that you ended the previous. You can do this either by playing with the length of the waveform to make sure you always end at, say, zero phase and then always start at zero phase, or by explicitly including the phase in the sine wave. \n", "I'm not sure I understand exactly what you're asking, but I'll try to answer.\nI wouldn't mess around with the low-level WAV format if I didn't have to. Just use Audiolab for this.\n\nInitialize an empty song NumPy 1-D array\nOpen your file of numbers\nUse the if statement as you said, to detect if it's a negative or positive number\nGenerate a \"snippet\" of tone according to your formula (which I don't really understand from the description).\n\n\nFirst generate a timebase with something like t = linspace(0,1,num=48000)\nThen generate the tone with something like a = sin(2*pi*f*t)\n\nConcatenate the snippet onto the rest of the array with something like song = concatenate((song,a))\nLoop through the file to create and concatenate each snippet\nWrite to a WAV file using something like wavwrite(song, 'filename.wav', fs, enc)\n\nDid you think up this format of tones and timing yourself or is it something created by others?\n" ]
[ 2, 0, 0 ]
[]
[]
[ "audio", "python" ]
stackoverflow_0001118266_audio_python.txt
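A sketch tying the question's mapping together with the phase-continuity advice above, using numpy and the stdlib wave module. The linear 0-255 to 1500-2200Hz mapping, the tone durations, and the sample rate are assumptions; -3 is treated as end-of-stream per the question.

import math
import struct
import wave
import numpy as np

FS = 44100
BIT_LEN, TIMING_LEN = 0.05, 0.1            # seconds; illustrative durations

def freq_for(value):
    if value < 0:                           # timing markers -> 1200 Hz
        return 1200.0
    return 1500.0 + value * (2200.0 - 1500.0) / 255.0

def render(values, path='stream.wav'):
    phase, chunks = 0.0, []
    for v in values:
        dur = TIMING_LEN if v < 0 else BIT_LEN
        f = freq_for(v)
        t = np.arange(int(FS * dur)) / float(FS)
        chunks.append(np.sin(phase + 2 * math.pi * f * t))
        phase += 2 * math.pi * f * dur      # carry phase into the next tone
        if v == -3:                         # end-of-stream marker
            break
    samples = np.concatenate(chunks)
    w = wave.open(path, 'wb')
    w.setparams((1, 2, FS, 0, 'NONE', 'not compressed'))
    w.writeframes(struct.pack('<%dh' % len(samples),
                              *[int(s * 32767) for s in samples]))
    w.close()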
Q: How can I reduce memory usage of a Twisted server? I wrote an audio broadcasting server with Python/Twisted. It works fine, but its memory usage grows too fast! I think that's because some users' networks might not be good enough to download the audio in time. My audio server broadcasts audio data to each listener's client; if some of them can't download the audio in time, my server keeps the audio data until the listeners have received it. What's more, since it is a broadcasting server, it receives audio data and sends it to different clients, and I think Twisted copies that data into a separate buffer for each client, even though it is the same audio piece. I want to reduce the memory usage, so I need to know when the audio has been received by each client, so that I can decide when to drop slow clients. But I have no idea how to achieve that with Twisted. Does anyone have an idea? And what else can I do to reduce memory usage? Thanks. Victor Lin. A: You didn't say, but I'm going to assume that you're using TCP. It would be hard to write a UDP-based system which had ever increasing memory because of clients who can't receive data as fast as you're trying to send it. TCP has built-in flow control capabilities. If a receiver cannot read data as fast as you'd like to send it, this information will be made available to you and you can send more slowly. The way this works with the BSD socket API is that a send(2) call will block or will return 0 to indicate it cannot add any bytes to the send buffer. The way it works in Twisted is by a system called "producers and consumers". The gist of this system is that you register a producer with a consumer. The producer calls write on the consumer repeatedly. When the consumer cannot keep up, it calls pauseProducing on the producer. When the consumer is again ready for more data, it calls resumeProducing on the producer. You can read about this system in more detail in the producer/consumer howto, part of Twisted's documentation.
How can I reduce memory usage of a Twisted server?
I wrote an audio broadcasting server with Python/Twisted. It works fine, but its memory usage grows too fast! I think that's because some users' networks might not be good enough to download the audio in time. My audio server broadcasts audio data to each listener's client; if some of them can't download the audio in time, my server keeps the audio data until the listeners have received it. What's more, since it is a broadcasting server, it receives audio data and sends it to different clients, and I think Twisted copies that data into a separate buffer for each client, even though it is the same audio piece. I want to reduce the memory usage, so I need to know when the audio has been received by each client, so that I can decide when to drop slow clients. But I have no idea how to achieve that with Twisted. Does anyone have an idea? And what else can I do to reduce memory usage? Thanks. Victor Lin.
[ "You didn't say, but I'm going to assume that you're using TCP. It would be hard to write a UDP-based system which had ever increasing memory because of clients who can't receive data as fast as you're trying to send it.\nTCP has built-in flow control capabilities. If a receiver cannot read data as fast as you'd like to send it, this information will be made available to you and you can send more slowly. The way this works with the BSD socket API is that a send(2) call will block or will return 0 to indicate it cannot add any bytes to the send buffer. The way it works in Twisted is by a system called \"producers and consumers\". The gist of this system is that you register a producer with a consumer. The producer calls write on the consumer repeatedly. When the consumer cannot keep up, it calls pauseProducing on the producer. When the consumer is again ready for more data, it calls resumeProducing on the producer.\nYou can read about this system in more detail in the producer/consumer howto, part of Twisted's documentation.\n" ]
[ 2 ]
[ "Make sure you're using Python's garbage collector and then go through and delete variables you aren't using.\n" ]
[ -5 ]
[ "memory_management", "python", "twisted" ]
stackoverflow_0001697009_memory_management_python_twisted.txt
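A minimal push-producer sketch of the pattern the answer describes, using Twisted's IPushProducer as covered in the howto; the class name and the drop-on-pause policy are illustrative choices.

from zope.interface import implements
from twisted.internet.interfaces import IPushProducer

class AudioFeed(object):
    implements(IPushProducer)

    def __init__(self, transport):
        self.paused = False
        self.transport = transport
        transport.registerProducer(self, True)   # True = streaming/push producer

    def send(self, chunk):
        if self.paused:
            return False        # slow client: let the caller skip this chunk
        self.transport.write(chunk)
        return True

    def pauseProducing(self):
        # Called by the transport when the client's send buffer backs up.
        self.paused = True

    def resumeProducing(self):
        # Called when the client has caught up again.
        self.paused = False

    def stopProducing(self):
        self.paused = True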
Q: Can I use Django 1.1 with django-search-lucene for full-text searching, and if so, what resources/links/docs can I reference to get it up and running? A little background: I want to use Django Search with Lucene I have Django 1.1 w/ Python 2.5 installed MySQL 5.1 is being used My local machine is running Windows Vista x64, but we will deploy to Red Hat Linux Yes, I wish that right about now I was running Linux. A: I would recommend Apache SOLR, which is built on top of Lucene. The primary advantage is that it exposes an easy-to-use API, and can return a native Python object. Here is an example of how to call it from Python: import ast import urllib import urllib2 def solr_search(): params = urllib.urlencode({ "rows": "100", "fl": "id,name,score,address,city,state,zip", "wt": "python", "q": "+name:Foo +city:Boston" }) request = urllib2.urlopen(urllib2.Request("http://localhost:8983/solr/select", params)) response = ast.literal_eval(request.read()) request.close() # the Python response writer nests the hits under the 'response' key return response["response"]["docs"]
Can I use Django 1.1 with django-search-lucene for full-text searching, and if so, what resources/links/docs can I reference to get it up and running?
A little background: I want to use Django Search with Lucene I have Django 1.1 w/ Python 2.5 installed MySQL 5.1 is being used My local machine is running Windows Vista x64, but we will deploy to Red Hat Linux Yes, I wish that right about now I was running Linux.
[ "I would recommend Apache SOLR, which is built on top of Lucene. The primary advantage is that it exposes an easy to use API, and can return a native Python object. Here is an example of how to call it from Python:\nparams = urllib.urlencode({ \n \"rows\": \"100\", \n \"fl\": \"id,name,score,address,city,state,zip\", \n \"wt\": \"python\", \n \"q\": \"+name:Foo +city:Boston\"\n}) \n\nrequest = urllib2.urlopen(urllib2.Request(\"http://locahost:8983/solr/select\", params))\nresponse = ast.literal_eval(request.read())\nrequest.close() \nreturn response[\"docs\"] \n\n" ]
[ 3 ]
[]
[]
[ "django", "django_search_lucene", "python" ]
stackoverflow_0001704278_django_django_search_lucene_python.txt
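A brief, illustrative use of the function above; the printed fields simply match the fl list in the query.

for doc in solr_search():
    print doc["name"], doc["score"]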
Q: Organize a python library as plugins I would like to create a library, say foolib, but to keep different subpackages separated, so as to have barmodule, bazmodule, all under the same foolib main package. In other words, I want the client code to be able to do import foolib.barmodule import foolib.bazmodule but to distribute barmodule and bazmodule as two independent entities. Replace module with package as well... ba[rz]module can be a full-fledged library with complex content. The reason behind this choice is manifold: I would like a user to install only barmodule if he needs to. I would like to keep the modules relatively independent and lightweight. But I would like to keep them under a common namespace. jQuery has a similar structure with the plugins. Is it feasible in python with the standard setuptools and install procedure? A: You may be looking for namespace packages. See also PEP 382. A: Yes, simply create a foolib directory, add an __init__.py to it, and make each sub-module a .py file. /foolib barmodule.py bazmodule.py then you can import them like so: from foolib import barmodule barmodule.some_function()
Organize a python library as plugins
I would like to create a library, say foolib, but to keep different subpackages separated, so as to have barmodule, bazmodule, all under the same foolib main package. In other words, I want the client code to be able to do import foolib.barmodule import foolib.bazmodule but to distribute barmodule and bazmodule as two independent entities. Replace module with package as well... ba[rz]module can be a full-fledged library with complex content. The reason behind this choice is manifold: I would like a user to install only barmodule if he needs to. I would like to keep the modules relatively independent and lightweight. But I would like to keep them under a common namespace. jQuery has a similar structure with the plugins. Is it feasible in python with the standard setuptools and install procedure?
[ "You may be looking for namespace packages. See also PEP 382.\n", "Yes, simply create a foolib directory, add an __init__.py to it, and make each sub-module a .py file.\n/foolib\n barmodule.py\n bazmodule.py\n\nthen you can import them like so:\nfrom foolib import barmodule\nbarmodule.some_function()\n\n" ]
[ 3, 0 ]
[]
[]
[ "module", "python" ]
stackoverflow_0001705665_module_python.txt
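A sketch of the setuptools namespace-package layout the first answer points to: every independently distributed piece ships the same trivial foolib/__init__.py and declares the shared namespace in its setup.py. The package names and version are illustrative.

# foolib/__init__.py -- shipped identically in every distribution:
__import__('pkg_resources').declare_namespace(__name__)

# setup.py for the barmodule distribution:
from setuptools import setup, find_packages

setup(
    name='foolib.barmodule',
    version='0.1',
    packages=find_packages(),
    namespace_packages=['foolib'],
)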
Q: Python Form Processing alternatives django.forms is very nice, and does almost exactly what I want to do on my current project, but unfortunately, Google App Engine makes most of the rest of Django unusable, and so packing it along with the app seems kind of silly. I've also discovered FormAlchemy, which is an SQLAlchemy analog to Django forms, and I intend to explore that fully, but its relationship with SQLAlchemy suggests that it may also give me some trouble. Is there any HTML Forms processing library for python that I haven't considered? A: I've grown to love WTForms, it's simple, straightforward, and very flexible. It's part of my django-free web stack. It's completely standalone, and carries over the good parts of django's form libraries, while imho doing some things much better. A: I'm not sure what you mean by "making most of the rest of Django unusable" and especially by "packing it along with the app". Are you familiar with the docs? If you just do as they suggest, i.e. import os os.environ['DJANGO_SETTINGS_MODULE'] = 'settings' from google.appengine.dist import use_library use_library('django', '1.1') don't django.forms and the rest of Django work for you (once you upload your app to Google)? As the docs also explain, Django versions 1.0 and later are not included in the SDK. To test your app with a newer version of Django on your computer, you must download and install Django from the Django website. but You do not need to add the newer Django library to your application directory. i.e. you don't have to "pack it along"; it's already made available on Google's servers by Google to your app engine application. (A few third-party apps that depend on relational features, admin especially, don't work -- but your own Django app, written using the App Engine data modeling libraries, will be fine!-). A: You can also have a look at formencode; it is generic enough that you may fit it into GAE. A: Is there a more specific reason you don't want to use django.forms? I've quite successfully used bits and pieces of django all by themselves without trouble in several projects. As an aside, there are several patches that make django sortof work in app-engine, though I assume you've considered and discarded them.
Python Form Processing alternatives
django.forms is very nice, and does almost exactly what I want to do on my current project, but unfortunately, Google App Engine makes most of the rest of Django unusable, and so packing it along with the app seems kind of silly. I've also discovered FormAlchemy, which is an SQLAlchemy analog to Django forms, and I intend to explore that fully, but its relationship with SQLAlchemy suggests that it may also give me some trouble. Is there any HTML Forms processing library for python that I haven't considered?
[ "I've grown to love WTForms, it's simple, straightforward, and very flexible. It's part of my django-free web stack. \nIt's completely standalone, and carries over the good parts of django's form libraries, while imho having some things much better.\n", "I'm not sure what you mean by \"making most of the rest of Django unusable\" and especially by \"packing it along with the app\". Are you familiar with the docs? If you just do as they suggest, i.e.\nimport os\nos.environ['DJANGO_SETTINGS_MODULE'] = 'settings'\n\nfrom google.appengine.dist import use_library\nuse_library('django', '1.1')\n\ndon't django.forms and the rest of Django work for you (once you upload your app to Google)?\nAs the docs also explain,\n\nDjango versions 1.0 are later are not\n included in the SDK. To test your app\n with a newer version of Django on your\n computer, you must download and\n install Django from the Django\n website.\n\nbut\n\nYou do not need to add the newer\n Django library to your application\n directory.\n\ni.e. you don't have to \"pack it along\"; it's already made available on Google's servers by Google to your app engine application. (A few third-party apps that depend on relational features, admin especially, don't work -- but your own Django app, written using he App Engine data modeling libraries, will be fine!-).\n", "You can also have a look at formencode, it is generic enough so that you may fit it in GAE.\n", "Is there a more specific reason you don't want to use django.forms? I've quite successfully used bits and pieces of django all by themselves without trouble in several projects.\nAs an aside, there are several patches that make django sortof work in app-engine, though I assume you've considered and discarded them.\n" ]
[ 13, 3, 2, 1 ]
[]
[]
[ "forms", "google_app_engine", "python" ]
stackoverflow_0001705217_forms_google_app_engine_python.txt
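A minimal WTForms sketch, independent of Django, to make the first answer concrete; the field names are illustrative, request.POST stands in for whatever multidict your framework provides, and save()/render() are hypothetical helpers.

from wtforms import Form, TextField, IntegerField, validators

class SignupForm(Form):
    username = TextField('Username', [validators.Length(min=3, max=25)])
    age = IntegerField('Age', [validators.NumberRange(min=13)])

def handle(request):
    form = SignupForm(request.POST)
    if request.method == 'POST' and form.validate():
        save(form.username.data, form.age.data)   # hypothetical persistence
    return render('signup.html', form=form)       # hypothetical renderer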