content (string, 85-101k chars) | title (string, 0-150 chars) | question (string, 15-48k chars) | answers (list) | answers_scores (list) | non_answers (list) | non_answers_scores (list) | tags (list) | name (string, 35-137 chars)
Q:
Python2.6 Backtrack/Ubuntu wxPython
I wrote a Python app and it needs Python 2.6. I'm trying to get it to run in BackTrack 4, which is a pen-testing Linux distro based on Debian/Ubuntu. I've managed to install Python 2.6 alongside Python 2.5. Now I'm trying to install wxPython for 2.6 from the repos, but I can't get it to install for Python 2.6 rather than 2.5. Is there some way I can set a flag to specify which Python installation to target? Or do I just need to install it from source?
A:
There are pre-built versions of Python, wxWidgets, and wxPython in the Ubuntu packages.
You don't need to build from source (unless you have special reasons); you can install them from the following links.
http://packages.ubuntu.com/jaunty/python2.6
http://packages.ubuntu.com/jaunty/libwxgtk2.8-0
http://packages.ubuntu.com/jaunty/python-wxgtk2.8
Also, wxPython 2.8 is recommended, though you can still find 2.6.
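To confirm which interpreter a packaged wxPython landed on, here is a minimal check (a sketch, not from the original answer: run it under each interpreter, e.g. python2.5 check_wx.py and python2.6 check_wx.py):
import sys
try:
    import wx
    print sys.version[:3], 'has wxPython', wx.version()
except ImportError:
    print sys.version[:3], 'has no wxPython'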
answers_scores: [1] | tags: python, ubuntu, wxpython | name: stackoverflow_0001837558_python_ubuntu_wxpython.txt
Q:
Are there any pure Python, BSD-ish open source SVG libraries?
I'm looking for a pure Python library to help put together SVG images. It doesn't need to be fast.
I know pySVG exists, but I'm not interested in a GPL library (and I can't use GPL libraries for this particular project).
Basic SVG elements aren't especially complicated, so I suppose I could roll my own, but I'd rather participate in an existing project than go off on my own.
Thanks!
A:
I wrote a library that I use to generate XHTML and SVG, and I posted it here. It's pretty easy: each element has a corresponding all-uppercase class, the __init__ positional parameters are child elements, and keyword parameters are attributes. The Transformable class, inherited by PATH and RECT, has additional functions for building attribute values.
Have fun with it; I'm posting this as free to use for anything you want, and if you make some improvements I would appreciate them. If there's more interest I'll make it a SourceForge or Google Code project.
[edit: remove the import of the DebugContents class and the _debugContents attribute in XML.py, that's part of a large debugging framework.]
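A hypothetical usage sketch based on that description (the module name, class names, and call signatures are assumptions for illustration, not the library's verified API):
from xhtmlsvg import SVG, RECT, PATH  # hypothetical module name

drawing = SVG(
    RECT(x=10, y=10, width=100, height=50, fill='blue'),
    PATH(d='M 10 10 L 110 60', stroke='black'),
    width=200, height=100,
)
print drawing  # would serialize to an SVG document string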
A:
If your backend is OpenGL, I recommend the Squirtle SVG library, which draws SVG images using pyglet.
answers_scores: [2, 1] | tags: python, svg | name: stackoverflow_0001837262_python_svg.txt
Q:
Invalid Token when using Octal numbers
I'm a beginner in Python and I'm trying to use an octal number in my script, but when I try it, I get this error:
>>> a = 010
SyntaxError: invalid token (<pyshell#0>, line 1)
>>> 01
SyntaxError: invalid token (<pyshell#1>, line 1)
Is there something wrong with my code? I'm using Python 3 (and reading a Python 2.2 book).
A:
Try 0o10. This is a Python 3 change, not a PyShell quirk.
PEP 3127 says:
octal literals must now be specified
with a leading "0o" or "0O" instead of
"0";
http://www.python.org/dev/peps/pep-3127/
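A quick interactive confirmation of the new literal form:
>>> a = 0o10
>>> a
8
>>> oct(64)
'0o100'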
answers_scores: [83] | tags: octal, python, python_3.x, syntax | name: stackoverflow_0001837874_octal_python_python_3.x_syntax.txt
Q:
Pulling HTML from a Webpage in Java
I want to pull the entire HTML source of a webpage in Java (or Python or PHP, if it is easier in those languages). I wish only to view the HTML and scan through it with a few methods, not edit or manipulate it in any way, and I would rather not write it to a new file unless there is no other way. Are there any library classes or methods that do this? If not, is there any way of going about this?
A:
In Java:
URL url = new URL("http://stackoverflow.com");
URLConnection connection = url.openConnection();
InputStream stream = connection.getInputStream();
// ... read stream like any file stream
This code is good for scripting purposes and internal use. I would argue against using it in production, though: it doesn't handle timeouts or failed connections.
I would recommend the HttpClient library for production use. It supports authentication, redirect handling, threading, pooling, etc.
A:
In Python:
import urllib
# Get a file-like object for the Python Web site's home page.
f = urllib.urlopen("http://www.python.org")
# Read from the object, storing the page's contents in 's'.
s = f.read()
f.close()
Please see Python and HTML Processing for more details.
A:
Maybe you should also consider an alternative, like running a standard utility such as wget or curl from the command line to fetch the site tree into a local directory tree, then doing your scanning (in Java, Python, whatever) on the local copy. That should be simpler than implementing all of the boring stuff like error handling and argument parsing yourself.
If you want to fetch all pages in a site, wget and curl alone don't know how to harvest links from HTML pages; an alternative is to use an open source web crawler.
answers_scores: [5, 2, 0] | tags: html, java, pull, python, webpage | name: stackoverflow_0001837471_html_java_pull_python_webpage.txt
Q:
A Friendship relationship question
I am having difficulties with listing this type of data. The scenario is as follows:
user1 adds a friend called user2
user2 confirms that user1 is his friend
What should happen is that user2 and user1 each see the other's name in their friends list. What's happening now is that I am able to add user2 to user1's friends list, but user1 cannot see user2 in his/her list. My question is: how do I get user1 to show up in user2's list, and user2 to show up in user1's list, once a user has confirmed the friendship? I was thinking of utilizing the confirmation status in the model, and because user1's and user2's ids are both in the confirmed relationship, I don't see any integrity issues here. Any tips?
Friendship model:
class Friendship(models.Model):
    NOT_CONFIRMED = 1
    PENDING = 2
    CONFIRMED = 3
    STATUS_CHOICES = (
        (NOT_CONFIRMED, 'Not Confirmed'),
        (PENDING, 'Pending'),
        (CONFIRMED, 'Confirmed'),
    )
    from_friend = models.ForeignKey(User, related_name='friend_set')
    to_friend = models.ForeignKey(User, related_name='to_friend_set')
    confirmed = models.IntegerField(choices=STATUS_CHOICES,
                                    default=NOT_CONFIRMED)

    class Meta:
        unique_together = (('to_friend', 'from_friend'),)

    def __unicode__(self):
        return '%s, %s' % (self.from_friend.username,
                           self.to_friend.username)
Views to render the friendships (as you can see, I have been playing with the filtering):
@login_required
def friends_list(request, username):
    user = get_object_or_404(User, username=username)
    #friends = [friendship for friendship in
    #           user.friend_set.filter(Q(confirmed=2) | Q(confirmed=3))]
    friends = Friendship.objects.filter(
        Q(to_friend=user) | Q(confirmed=3)
    )
    # get friends' latest 10 shows
    friend_shows = Show.objects.filter(user__in=friends).order_by('-id')
    return render_to_response('habit/friends_list.html', {
        'user': request.user,
        'friends': friends,
        'shows': friend_shows[:10],
        'username': username,
    })
A:
It's not obvious that your friendship model is associative. Is my friends list the list of all friendships in which I am the from_friend? Or is it all friendships in which I am either the from_friend or the to_friend?
If it's the former, then every friend relationship is represented by two Friendship objects, one indicating that you're my friend, and one indicating that I'm your friend. Whenever a user makes a request, you'll be adding two friendships. The status will indicate whether the person making the request is the from_friend or the to_friend. And when a request is confirmed or rejected, you'll update both friendships to reflect this.
If it's the latter, then every friend relationship is represented by a single Friendship object. In this case, your object design will need to record which friend in the relationship made the request, so that when you examine the object you can tell which user can turn the request into a confirmed friendship.
A:
Based upon this page: http://docs.djangoproject.com/en/dev/topics/db/queries/
The following code:
friends = Friendship.objects.filter(
    Q(to_friend=user) | Q(confirmed=3)
)
equates to: to_friend = user OR confirmed = 3, which probably isn't what you want, based on your description.
This looks closer to what you want:
friends = Friendship.objects.filter(
    Q(to_friend=user) | Q(from_friend=user),
    Q(confirmed=3)
)
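For reference, the same filter written a bit more explicitly (a sketch; it assumes the Friendship model from the question and uses its CONFIRMED constant instead of the bare 3):
from django.db.models import Q

# confirmed friendships involving `user`, in either direction
friends = Friendship.objects.filter(
    Q(to_friend=user) | Q(from_friend=user),
    confirmed=Friendship.CONFIRMED,
)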
answers_scores: [1, 0] | tags: django, models, python | name: stackoverflow_0001836176_django_models_python.txt
Q:
Django-based skill implementation
I'm working on an RPG using Django and am considering different options for implementing part of the skill system.
Say I have a base skill class ie, something like:
class Skill(models.Model):
    name = models.CharField()
    cost = models.PositiveIntegerField()
    # blah blah blah
What would be some approaches to implementing specific skills? The first option that comes to mind is:
1) Each skill extends the Skill class and overrides specific functions.
I'm not sure how this would work in Django. It seems like having a DB table for each skill would be overkill. Could the child class be abstract while the Skill class has an entry? That doesn't sound right. How about using a proxy class?
What are some other options? I'd like to avoid a scripted approach in favor of a pure Django approach.
A:
Perhaps you might consider separating a skill and its associated effect. More than likely, skills will end up having one or more effects associated with them, and an effect could potentially be used by multiple skills.
For example, an effect could be "Does N frost damage to current target". That effect could be used by the skills "Blizzard Bolt", "Frost Blast", and "Icy Nova".
models.py
class Skill(models.Model):
    name = models.CharField()
    cost = models.PositiveIntegerField()
    effects = models.ManyToManyField(Effect)

class Effect(models.Model):
    description = models.CharField()
    action = models.CharField()

    # Each Django model has a ContentType. So you could store the
    # contenttypes of the Player, Enemy, and Breakable models, for example
    objects_usable_on = models.ManyToManyField(ContentType)

    def do_effect(self, **kwargs):
        # self.action contains the dotted path of the Python function to
        # execute, for example 'effects.spells.frost_damage'. A call looks
        # like: effect.do_effect(damage=50, target=target), and damage=50
        # gets passed to effects.spells.frost_damage as a keyword argument.
        # (__import__ on a dotted path returns the top-level package, so
        # split off the function name and fetch it with getattr.)
        module_name, func_name = self.action.rsplit('.', 1)
        module = __import__(module_name, fromlist=[func_name])
        action = getattr(module, func_name)
        action(**kwargs)
effects/spells.py
def frost_damage(**kwargs):
    target = kwargs['target']
    if 'damage' in kwargs:
        target.life -= kwargs['damage']
    if target.life <= 0:
        pass  # etc. etc.
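A short usage sketch of that dispatch (the lookup value and variable names are hypothetical):
# assume an Effect row whose action column holds 'effects.spells.frost_damage'
effect = Effect.objects.get(description='Does N frost damage to current target')
effect.do_effect(damage=50, target=current_target)  # current_target assumed in scope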
A:
I'm kind of tired (it's late here in Sweden), so I'm sorry if I misunderstood, but the first thing that popped into my head was extra fields on many-to-many relationships.
A:
I would set up some inheritance.
class BaseSkill(models.Model):
    name = models.CharField()
    cost = models.PositiveIntegerField()
    type = models.CharField()
    ....

class FireSkill(BaseSkill):
    burn_time = models.PositiveIntegerField()

    def save(self):
        self.type = 'fire_skill'
        return super(FireSkill, self).save()

class IceSkill(BaseSkill):
    freeze_time = models.PositiveIntegerField()

    def save(self):
        self.type = 'ice_skill'
        return super(IceSkill, self).save()
The advantage of this is that when you just want to list a player's skills, you only need to work with the BaseSkill class. If a vendor is selling skills, you only need to list prices from the BaseSkill class. When you need the more detailed attributes of a skill, it is easy to use the type to access them. E.g., if you have skill = BaseSkill.objects.get(pk=1), you can access the ice skill with skill.ice_skill.freeze_time, or more generally getattr(skill, skill.type).field_name.
A:
When I've seen this, there have been two classes: one for the Skill as an abstract entity (e.g. a skill in speaking Swedish, or a skill in Excel development), and then the actual skills possessed by a person, with a foreign key to the Skill.
A:
You could also use a single table and save the inner model, based off the object, in a pickled field.
answers_scores: [5, 1, 1, 1, 0] | tags: django, python | name: stackoverflow_0001836881_django_python.txt
Q:
mysqldb build error
I remember installing Python + Django + MySQL + MySQLdb on my 32-bit Mac with Leopard 10.5.7.
I tried the same procedure on Mac Snow Leopard, but unfortunately ran into a lot of errors.
I don't know, but something weird is happening. Please look at the error log:
Amit-Vermas-MacBook:mysql-python-1.2.2 amitverma$ python setup.py build
running build
running build_py
copying MySQLdb/release.py -> build/lib.macosx-10.3-i386-2.5/MySQLdb
running build_ext
building '_mysql' extension
creating build/temp.macosx-10.3-i386-2.5
gcc-4.0 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -O3 -Dversion_info=(1,2,2,'final',0) -D__version__=1.2.2 -I/usr/local/mysql/include -I/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5 -c _mysql.c -o build/temp.macosx-10.3-i386-2.5/_mysql.o -g -Os -arch x86_64 -fno-common -D_P1003_1B_VISIBLE -DSIGNAL_WITH_VIO_CLOSE -DSIGNALS_DONT_BREAK_READ -DIGNORE_SIGHUP_SIGQUIT -DDONT_DECLARE_CXA_PURE_VIRTUAL
In file included from /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:57,
from pymemcompat.h:10,
from _mysql.c:29:
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/pyport.h:761:2: error: #error "LONG_BIT definition appears wrong for platform (bad gcc/glibc config?)."
In file included from _mysql.c:35:
/usr/local/mysql/include/my_config.h:1050:1: warning: "HAVE_WCSCOLL" redefined
In file included from /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:8,
from pymemcompat.h:10,
from _mysql.c:29:
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/pyconfig.h:721:1: warning: this is the location of the previous definition
In file included from _mysql.c:35:
/usr/local/mysql/include/my_config.h:1168:1: warning: "SIZEOF_LONG" redefined
In file included from /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:8,
from pymemcompat.h:10,
from _mysql.c:29:
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/pyconfig.h:811:1: warning: this is the location of the previous definition
In file included from _mysql.c:35:
/usr/local/mysql/include/my_config.h:1177:1: warning: "SIZEOF_PTHREAD_T" redefined
In file included from /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:8,
from pymemcompat.h:10,
from _mysql.c:29:
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/pyconfig.h:820:1: warning: this is the location of the previous definition
error: command 'gcc-4.0' failed with exit status 1
Amit-Vermas-MacBook:mysql-python-1.2.2 amitverma$
A:
This is my personal makefile rule for that
MYSQLDB_VERSION=1.2.3c1
MYSQLDB_TARGET=$(BUILD_FLAGS_DIR)/mysqldb
MYSQLDB_PACKAGE=MySQL-python-$(MYSQLDB_VERSION).tar.gz
MYSQLDB_PACKAGE_URL=http://downloads.sourceforge.net/project/mysql-python/mysql-python-test/$(MYSQLDB_VERSION)/$(MYSQLDB_PACKAGE)

.PHONY: mysqldb mysqldb-download
mysqldb: $(MYSQLDB_TARGET)
mysqldb-download: $(DOWNLOAD_DIR)/$(MYSQLDB_PACKAGE)

$(MYSQLDB_TARGET): $(INIT_TARGET) $(MYSQLDB_DEPS) $(DOWNLOAD_DIR)/$(MYSQLDB_PACKAGE)
        -rm -rf $(UNPACK_DIR)/MySQL-python-$(MYSQLDB_VERSION)
        tar -m -C $(UNPACK_DIR) -xzvf $(DOWNLOAD_DIR)/$(MYSQLDB_PACKAGE)
        -cd $(UNPACK_DIR)/MySQL-python-$(MYSQLDB_VERSION); \
        for patch in $(PATCH_DIR)/mysqldb-$(MYSQLDB_VERSION)_$(ARCH)_*; \
        do patch -p1 < $$patch; \
        done
        cd $(UNPACK_DIR)/MySQL-python-$(MYSQLDB_VERSION); export CC="gcc -m64" FC="g95 -m64" CPPFLAGS="-I$(RUNTIME_DIR)/include" CFLAGS="-m64 -I$(RUNTIME_DIR)/include" LD_LIBRARY_PATH=$(RUNTIME_DIR)/lib64:$(RUNTIME_DIR)/lib:$$LD_LIBRARY_PATH PATH=$(RUNTIME_DIR)/bin:$$PATH PYTHONPATH=$(RUNTIME_DIR)/lib/python2.5/site-packages/; $(RUNTIME_DIR)/bin/python2.5 setup.py install --prefix=$(RUNTIME_DIR)
        touch $(MYSQLDB_TARGET)

$(DOWNLOAD_DIR)/$(MYSQLDB_PACKAGE):
        for package in $(MYSQLDB_PACKAGE_URL); \
        do \
            echo -n "Downloading $$package... "; \
            cd $(DOWNLOAD_DIR); curl -L -O $$package; \
            echo "done"; \
        done
        touch $@

ALL_RUNTIME_TARGETS+=$(MYSQLDB_TARGET)
ALL_DOWNLOAD_TARGETS+=$(DOWNLOAD_DIR)/$(MYSQLDB_PACKAGE)
And a patch
$ more mysqldb-1.2.3c1_x86_64-apple-darwin10_patch-000
diff -Naur MySQL-python-1.2.3c1/setup.py MySQL-python-1.2.3c1.new/setup.py
--- MySQL-python-1.2.3c1/setup.py       2008-10-18 02:12:31.000000000 +0200
+++ MySQL-python-1.2.3c1.new/setup.py   2009-10-08 22:59:05.000000000 +0200
@@ -13,6 +13,8 @@
     from setup_windows import get_config

 metadata, options = get_config()
+options["extra_compile_args"].remove("-arch")
+options["extra_compile_args"].remove("x86_64")
 metadata['ext_modules'] = [Extension(sources=['_mysql.c'], **options)]
 metadata['long_description'] = metadata['long_description'].replace(r'\n', '')
 setup(**metadata)
And it works for me. I cannot guarantee anything, but maybe you will find some interesting hints inside.
Please note that I am using a custom-built compiler (for outdated reasons too ugly to delve into).
A:
The most likely explanation is that you are trying to link a 64-bit version of the MySQL libraries with a 32-bit-only version of Python (currently, all of the python.org installers for OS X are 32-bit only). (You can verify that by using the file command on the library files in /usr/local/mysql/).
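You can also check the interpreter's bitness from Python itself with only the stdlib (prints 64 on a 64-bit build, 32 on a 32-bit one):
import struct
print struct.calcsize("P") * 8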
Some solutions:

- use the Apple-supplied python2.6 on Snow Leopard, which is 64-bit
- install a 32-bit version of the MySQL libraries
- install a complete solution using MacPorts: install the base MacPorts infrastructure and then install the MySQLdb adapter for python 2.6 (or 2.5), which will also install all necessary dependencies, including a new python and MySQL client libraries that should all work together correctly (and be able to be updated by MacPorts):

sudo port install py26-mysql  # or py25-mysql
For using MySQL with python on OS X, I recommend the last solution, that is, unless you really enjoy and have the time to do package management and installation. It will likely save you a lot of trouble over the long run.
P.S. MacPorts includes ports of django and PIL as well:
sudo port install py26-django py26-pil
EDIT:
To go the MacPorts route, follow the instructions I gave here to remove the effects of a python.org installer python. DO NOT attempt to delete or modify the Apple-installed Python files in /usr/bin or /System/Library; they are part of OS X. Then follow the instructions cited above to install MacPorts. In order to avoid interference with Apple- or third-party installs, MacPorts installs all of its files into a completely separate directory structure rooted at /opt/local. Thus, you will need to modify your .bash_profile to add /opt/local/bin to your $PATH. If you want the MacPorts versions to be found first, add something like:
export PATH="/opt/local/bin:${PATH}"
When you start a new terminal session, you should find the MacPorts python2.6 at python2.6. If you also want to make the command python point there:
$ sudo port install python_select
$ sudo python_select -l
Available versions:
current none python26 python26-apple
$ sudo python_select python26
A:
The following blog post helped me compile MySQLdb 1.2.2 on the Mac:
http://www.mangoorange.com/2008/08/01/installing-python-mysqldb-122-on-mac-os-x/
However, later on I tried MySQLdb 1.2.3c1 and didn't have any problems compiling it out of the box. 1.2.2 is several years old and causes deprecation warnings on Python 2.6. I would just switch to 1.2.3c1 and see if that works for you.
1.2.3c1 is the latest version on PyPI.
A:
It looks like you need to reinstall/update XCode (build tools)
answers_scores: [1, 1, 0, 0] | tags: django, macos, osx_snow_leopard, python | name: stackoverflow_0001835658_django_macos_osx_snow_leopard_python.txt
Q:
Get new dict from old dict with condition?
I have a dictionary (a dictionary of dictionaries):
old_dict = {
'1':{'A':1, 'check': 0, 'AA':2, 'AAA':3 , 'status':0},
'2':{'A':11,'check': 0, 'AA':22, 'AAA':33 ,'status':1},
'3':{'A':111,'check': 0, 'AA':222, 'AAA':333 ,'status':0},
'4':{'A':1111,'check': 1, 'AA':2222, 'AAA':3333 ,'status':0},
}
I want to get a new dictionary with the entries where before['check'] != 0 and before['status'] != 0, so it will be:
new_dict = {
'1':{'A':1, 'check': 0, 'AA':2, 'AAA':3 , 'status':0},
'3':{'A':111,'check': 0, 'AA':222, 'AAA':333 ,'status':0},
}
If it were a list, I would do it like this:
outputdata = [d for d in data if d[1] == ' 0' and d[6] == ' 0']
I have tried
for field in old_dict.values():
    if field['check'] != 0 and field['status'] != 0:
        line = field['A'] + field['AA'] + field['AAA']
        # write this line to file
How do I do this with a dictionary?
Could you help me?
A:
Dropping the [] from yu_sha's answer avoids constructing a dummy list:
new_dict = dict((k, v) for k, v in old_dict.iteritems()
                if v["status"] != 0 and v["check"] != 0)
or
new_dict = dict(item for item in old_dict.iteritems()
                if item[1]["status"] != 0 and item[1]["check"] != 0)
A:
new_dict = {}
for k, d in old_dict.iteritems():
    if d['check'] == 0 and d['status'] == 0:
        new_dict[k] = d
A:
dict([x for x in old_dict.iteritems() if x[1]['status'] == 0 and x[1]['check'] == 0])
old_dict.iteritems() returns a list of (key, value) pairs, which you then filter and convert back to a dictionary.
A:
outputdata = dict([d for d in data.iteritems() if d[1][1] == ' 0' and d[1][6]==' 0'])
A:
Not very interesting; I don't know why I'm answering it.
In [17]: dict([(k, v) for k, v in old_dict.items() if v['check'] == 0 and v['status'] == 0])
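For what it's worth, on Python 2.7+ or Python 3 the same filter reads naturally as a dict comprehension (with items() in place of iteritems() on Python 3):
new_dict = {k: v for k, v in old_dict.items()
            if v['check'] == 0 and v['status'] == 0}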
answers_scores: [5, 5, 3, 1, 0] | tags: dictionary, python | name: stackoverflow_0001838549_dictionary_python.txt
Q:
Html Agility Pack for python
Is there any module for Python similar to Html Agility Pack?
If not, can anyone recommend an alternative?
Thanks in advance!
A:
Try Beautiful Soup.
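For a flavor of it, a minimal sketch (using the Beautiful Soup 3 import that was current at the time; the HTML is made up):
from BeautifulSoup import BeautifulSoup  # Beautiful Soup 3

html = '<html><body><p class="greet">Hello</p><a href="/x">x</a></body></html>'
soup = BeautifulSoup(html)
print soup.find('p').string                   # Hello
print [a['href'] for a in soup.findAll('a')]  # ['/x']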
answers_scores: [11] | tags: python | name: stackoverflow_0001838637_python.txt
Q:
How to split a string in Python?
I have read the documentation but don't fully understand how to do it.
I understand that I need to have some kind of identifier in the string so that the function can find where to split it (unless I can target the first space in the sentence?).
So for example how would I split:
"Sico87 is an awful python developer" to "Sico87" and "is an awful Python developer"?
The strings are retrieved from a database (if this does matter).
A:
Use the split method on strings:
>>> "Sico87 is an awful python developer".split(' ', 1)
['Sico87', 'is an awful python developer']
How it works:
Every string is an object. String objects have certain methods defined on them, such as split in this case. You call them using obj.<methodname>(<arguments>).
The first argument to split is the character that separates the individual substrings. In this case that is a space, ' '.
The second argument is the number of times the split should be performed. In your case that is 1. Leaving out this second argument applies the split as often as possible:
>>> "Sico87 is an awful python developer".split(' ')
['Sico87', 'is', 'an', 'awful', 'python', 'developer']
Of course you can also store the substrings in separate variables instead of a list:
>>> a, b = "Sico87 is an awful python developer".split(' ', 1)
>>> a
'Sico87'
>>> b
'is an awful python developer'
But do note that this will cause trouble if certain inputs do not contain spaces:
>>> a, b = "string_without_spaces".split(' ', 1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: need more than 1 value to unpack
A:
Use partition(' '), which always returns a tuple of three items: the first part up until the separator, the separator itself, and then the part after it. Slots in the tuple that are not applicable are still there, just set to empty strings.
Examples:
"Sico87 is an awful python developer".partition(' ') returns ('Sico87', ' ', 'is an awful python developer')
"Sico87 is an awful python developer".partition(' ')[0] returns 'Sico87'
An alternative, trickier way is to use split(' ', 1), which works similarly but returns a variable number of items: a list of one or two items, the first being the first word up until the delimiter and the second being the rest of the string (if there is any).
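A quick interactive check of those partition claims:
>>> "Sico87 is an awful python developer".partition(' ')
('Sico87', ' ', 'is an awful python developer')
>>> "no_spaces".partition(' ')  # separator absent: trailing slots are empty strings
('no_spaces', '', '')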
answers_scores: [19, 14] | tags: python, string | name: stackoverflow_0001838674_python_string.txt
Q:
python web app logging through pipe? (performance concerned)
I'm writing a web app in Python with web.py, and I want to implement my own logging system. I'd like to log detailed information about each request that comes to Python (static files are handled by the web servers).
Currently I'm thinking about writing the logs to a pipe. On the other side, there should be cronolog.
My main concern is performance: how does the time and resources consumed in piping the logs compare to the normal processing of a request (fewer than 5 database queries, and page generation from templates)?
Or are there other, better approaches? I don't want to write the log file from Python because tens of processes will be started by FastCGI.
A:
Pipes are one of the fastest I/O mechanisms available. It's just a shared buffer. Nothing more. If the receiving end of your pipe is totally overwhelmed, you may have an issue. But you have no evidence of that right now.
If you have tens of processes started by FastCGI, each can have its own independent log file. That's the ideal situation: use Python logging -- make each process have a unique log file.
In the rare event that you need to examine all log files, cat them together for analysis.
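A minimal sketch of that per-process setup with the stdlib logging module (the filename pattern is an assumption):
import logging
import os

logging.basicConfig(
    filename='app.%d.log' % os.getpid(),  # one file per FastCGI process
    level=logging.INFO,
    format='%(asctime)s %(levelname)s %(message)s',
)
logging.info('request handled')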
answers_scores: [1] | tags: logging, pipe, python | name: stackoverflow_0001839348_logging_pipe_python.txt
Q:
Compressing UTF-8(or other 8-bit encoding) to 7 or fewer bits
I wish to take a file encoded in UTF-8 that doesn't use more than 128 different characters, and move it to a 7-bit encoding to save 1/8 of the space. For example, if I have a 16 MB text file that only uses the first 128 (ASCII) characters, I would like to shave off the extra bit to reduce the file to 14 MB.
How would I go about doing this?
There doesn't seem to be an existing free or proprietary program to do so, so I was thinking I might try to make a simple (if inefficient) one.
The basic idea is to make a mapping from the current hex/decimal/binary values used for each character to the 128 values I would have in the seven-bit encoding, then scan through the file and write each modified value to a new file.
So if the file looked like (I'll use a decimal example because I try not to think in hex):
127 254 025 212 015 015 132...
It would become
001 002 003 004 005 005 006
If 127 mapped to 001, 254 mapped to 005, etc.
I'm not entirely sure on a couple things, though.
Would this be enough to actually shorten the file size? I have a bad feeling this would simply leave an extra 0 on the binary string: 11011001 might get mapped to 01000001 rather than 1000001, and I won't actually save space.
If this would happen, how do I get rid of the zero?
How do I open the file to read/write in binary/decimal/hex rather than just text?
I've mostly worked with Python, but I can muddle through C if I must.
Thank you.
A:
Just use gzip compression, and save 60-70% with 0% effort!
A:
Do you understand that files are divided into bytes? Thus, if you did that, you'd have 7 bits of the first letter in byte 1, plus 1 bit of the second letter; then in byte 2 you'd have 6 bits of the second letter and 2 bits of the third, and so on. It would look like this:
|AAAAAAAB|BBBBBBCC|CCCCCDDD|DDDDEEEE|EEEFFFFF|FF...
\------/ \------/ \------/ \------/ \------/
byte byte byte byte byte
A:
Your idea is on the right track, but needs some development. If you're interested in this kind of data compression, you may want to investigate Huffman coding. This is a simple data compression technique that is used in many real-world situations.
I can recommend The Data Compression Book by Mark Nelson which is a great introduction to data compression techniques.
A:
Your idea is unlikely to work as written. If you write the byte 0x05 into a file, the whole byte gets written, all 8 bits of it, with leading zeros. To actually accomplish what you need, you can encode each 8 bytes in 7 bytes (since you only need 8*7 bits to encode 8 values). One approach is to keep the 7 values in the 7 low bits of their bytes, and spread the bits of the 8th value over the 7 MSBits.
As for Python, opening a file in binary write mode is open(filename, 'wb'). You'll also have to learn about bit operations to pack bytes as described above.
Just a small example:
>>> a = 0x03
>>> b = 0x59
>>> c = ((a & 0x1) << 7) | b
>>> hex(c)
'0xd9'
>>>
This places the lowest bit of a into the MSBit of c and the rest of c is the value of b.
I'm sure you can take it from here.
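To make the byte-packing concrete, here is a small sketch of the scheme described above, operating on lists of integer byte values (turning them into actual file bytes is left to taste):
def pack8(values):
    # values: a sequence of 8 ints, each < 128
    last = values[7]
    # byte i carries value i in its low 7 bits, and bit i of the
    # 8th value in its MSBit
    return [(((last >> i) & 1) << 7) | values[i] for i in range(7)]

def unpack7(packed):
    # inverse of pack8: 7 packed bytes back into 8 values
    values = [b & 0x7F for b in packed]
    last = sum((((b >> 7) & 1) << i) for i, b in enumerate(packed))
    return values + [last]

assert unpack7(pack8(list(range(65, 73)))) == list(range(65, 73))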
A:
"this would simply leave an extra 0 on the binary string--11011001 might get mapped to 01000001 rather than 1000001, and I won't actually save space."
Correct. Your plan will do nothing.
non_answers (score -2): "What you need is UTF-7. Edit: UTF-7 has the advantage of bloating 'only' special characters, so if special characters are rare in the input, you get far less bytes than by just converting UTF-8 to 7 bit. That's what UTF-7 is for."
answers_scores: [18, 6, 3, 1, 0] | tags: c, compression, python, utf_8 | name: stackoverflow_0001837686_c_compression_python_utf_8.txt
Q:
How to grab live text from a URL?
I'm trying to grab all the data (text) coming from a URL which is constantly sending text. I tried using PHP, but that would mean having the script running the whole time, which it isn't really made for (I think). So I ended up using a BASH script.
At the moment I use wget (I couldn’t get CURL to output the text to a file)
wget --tries=0 --retry-connrefused http://URL/ --output-document=./output.txt
So wget seems to be working pretty well, apart from one thing, every time I re-start the script wget will clear the output.txt file and start filling it again, which isn’t what I want. Is there a way to tell wget to append to the txt file?
Also, is this the best way to capture the live stream of data?
Should I use a different language like Python or …?
A:
You can do wget --tries=0 --retry-connrefused $URL -O - >> output.txt.
Explanation: the parameters -O is short for --output-document, and a dash - means standard output.
The line command > file means "write the output of command to file", and command >> file means "append the output of command to file", which is what you want.
A:
Curl doesn't follow redirects by default and outputs nothing if there is a redirect. I always specify the --location option just in case. If you want to use curl, try:
curl http://example.com --location --silent >> output.txt
The --silent option turns off the progress indicator.
A:
You could try this:
while true
do
wget -q -O - http://example.com >> filename # -O - outputs to the screen
sleep 2 # sleep 2 sec
done
A:
curl http://URL/ >> output.txt
the >> redirects the output from curl to output.txt, appending to any data already there. (If it was just > output.txt - that would overwrite the contents of output.txt each time you ran it).
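If you do end up trying the Python route the question mentions, a minimal sketch using only the standard library (the URL is a placeholder, and reconnect logic is left out) that appends each chunk to the file as it arrives:
import urllib2

response = urllib2.urlopen('http://example.com/stream')  # placeholder URL
out = open('output.txt', 'ab')        # 'a' = append, like >> above
while True:
    chunk = response.read(1024)
    if not chunk:                     # server closed the connection
        break
    out.write(chunk)
    out.flush()
out.close()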
|
How to grab live text from a URL?
|
Im trying to grab all data(text) coming from a URL which is constantly sending text, I tried using PHP but that would mean having the script running the whole time which it isn’t really made for (I think). So I ended up using a BASH script.
At the moment I use wget (I couldn’t get CURL to output the text to a file)
wget --tries=0 --retry-connrefused http://URL/ --output-document=./output.txt
So wget seems to be working pretty well, apart from one thing, every time I re-start the script wget will clear the output.txt file and start filling it again, which isn’t what I want. Is there a way to tell wget to append to the txt file?
Also, is this the best way to capture the live stream of data?
Should I use a different language like Python or …?
|
[
"You can do wget --tries=0 --retry-connrefused $URL -O - >> output.txt.\nExplanation: the parameters -O is short for --output-document, and a dash - means standard output. \nThe line command > file means write \"write output of command to file\", and command >> file means \"append output of command to file\" which is what you want.\n",
"Curl doesn't follow redirects by default and outputs nothing if there is a redirect. I always specify the --location option just in case. If you want to use curl, try:\ncurl http://example.com --location --silent >> output.txt\n\nThe --silent option turns off the progress indicator.\n",
"You could try this: \n\nwhile true \ndo \nwget -q -O - http://example.com >> filename # -O - outputs to the screen \nsleep 2 # sleep 2 sec \ndone\n",
"curl http://URL/ >> output.txt\nthe >> redirects the output from curl to output.txt, appending to any data already there. (If it was just > output.txt - that would overwrite the contents of output.txt each time you ran it).\n"
] |
[
4,
1,
0,
0
] |
[] |
[] |
[
"bash",
"keep_alive",
"php",
"python",
"wget"
] |
stackoverflow_0001839120_bash_keep_alive_php_python_wget.txt
|
Q:
Calculating point of intersection based on angle and speed
I have a vector consisting of a point, speed and direction. We will call this vector R. And another vector that only consists of a point and a speed. No direction. We will call this one T.
Now, what I am trying to do is to find the shortest intersection point of these two vectors. Since T has no direction, this is proving to be difficult. I was able to create a formula that works in CaRMetal, but I cannot get it working in Python.
Can someone suggest a more efficient way to solve this problem? Or solve my existing formula for X?
Formula:
(source: bja888.com)
Key:
(source: bja888.com)
Where o or k is the speed difference between vectors. R.speed / T.speed
A:
My math could be a bit rusty, but try this:
p and q are the position vectors, d and e are the direction vectors. After time t, you want them to be at the same place:
(1) p+t*d = q+t*e
Since you want the direction vector e, write it like this
(2) e = (p-q)/t + d
Now you don't need the time t, which you can calculate using your speed constraint s (otherwise you could just travel to the other point directly):
The direction vector e has to be of the length s, so
(3) e1^2 + e2^2 = s^2
After some equation solving you end up with
(4)
I) a = sum((p-q)^2)/(s^2 - sum(d^2))
II) b = 2*sum(d*(p-q))/(s^2 - sum(d^2))
III) c = -1
IV) a + b*t + c*t^2 = 0
The sum goes over your vector components (2 in 2d, 3 in 3d)
The last one is a quadratic formula which you should be able to solve on your own ;-)
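A sketch of that derivation in Python (2-D; variable names follow the equations above, and the function name is made up):
import math

def intercept_direction(p, q, d, s):
    """p, q: positions; d: the known direction (velocity) vector;
    s: the other point's speed.  Returns the direction vector e,
    or None if no interception is possible."""
    px, py = p[0] - q[0], p[1] - q[1]              # p - q
    denom = float(s * s - (d[0] ** 2 + d[1] ** 2)) # s^2 - sum(d^2)
    if denom == 0:
        return None                    # equal speeds: degenerate case
    a = (px ** 2 + py ** 2) / denom
    b = 2 * (d[0] * px + d[1] * py) / denom
    disc = b * b + 4 * a               # from a + b*t - t^2 = 0
    if disc < 0:
        return None
    t = (b + math.sqrt(disc)) / 2.0
    if t <= 0:
        return None
    return (px / t + d[0], py / t + d[1])          # e = (p-q)/t + d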
A:
Let's assume that the first point,
A, has zero speed. In this case, it
should be very simple to find the
direction which will give the
fastest intersection.
Now, A does have a speed. We can force it to have zero speed by deducting its speed vector from the vector of B. Now we can solve as we did in 1.
Just a rough idea that came to mind...
Some more thoughts:
If A is standing still, then the direction B needs to travel in is directly towards A. This gives us the direction in the coordinate system in which A is standing still. Let's call it d.
Now we only need to convert the direction B needs to travel from the coordinate system in which A is still to the coordinate system in which A is moving at the given speed and direction, d2.
This is simply vector addition. d3 = d - d2
We can now find the direction of d3.
And a bit more formal:
A is stationary:
Sb = speed of B, known, scalar
alpha = atan2( a_y-b_y, a_x-b_x )
Vb_x = Sb * cos(alpha)
Vb_y = Sb * sin(alpha)
A moves at speed Sa, direction beta:
Vb_x' = Sb * cos(alpha) + Sa * cos(beta)
Vb_y' = Sb * sin(alpha) + Sa * sin(beta)
alpha' = atan2( Vb_y', Vb_x' )
Haven't tested the above, but it looks reasonable at first glance...
A:
In nature hunters use the constant bearing decreasing range algorithm to catch prey.
I like the explanation of how bats do this.
We need to define a few more terms.
Point A - the position associated with vector R.
Point B - the position associated with vector T.
Vector AB - the vector from point A to point B
Angle beta - the angle between vector R and vector AB.
Angle theta - the angle between vector T and vector AB
The formula is usually given as
theta = asin( |R| * sin(beta) / |T| )
where
beta = acos( AB.x*R.x + AB.y*R.y )
You don't want to use this directly, since asin and acos only return angles in a limited range (-PI/2 to PI/2 for asin, 0 to PI for acos).
beta = atan2( R.y, R.x ) - atan2( AB.y, AB.x )
x = |R| * sin(beta) / |T|
y = 1 + sqrt( 1 - x*x )
theta = 2*atan2( x, y )
Of course, if x > 1, R is too fast and the intersection doesn't exist.
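For example, a direct transcription of the above (a sketch; positions and velocities are (x, y) tuples, and the sign of theta depends on your angle conventions):
import math

def lead_angle(A, B, R, T_speed):
    """A: position tied to vector R;  B: position of the pursuer;
    T_speed: the pursuer's (scalar) speed.
    Returns theta, the angle between vector T and vector AB."""
    ABx, ABy = B[0] - A[0], B[1] - A[1]            # vector AB
    R_speed = math.hypot(R[0], R[1])               # |R|
    beta = math.atan2(R[1], R[0]) - math.atan2(ABy, ABx)
    x = R_speed * math.sin(beta) / T_speed
    if abs(x) > 1:
        return None                                # R is too fast
    y = 1 + math.sqrt(1 - x * x)
    return 2 * math.atan2(x, y)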
|
Calculating point of intersection based on angle and speed
|
I have a vector consisting of a point, speed and direction. We will call this vector R. And another vector that only consists of a point and a speed. No direction. We will call this one T.
Now, what I am trying to do is to find the shortest intersection point of these two vectors. Since T has no direction, this is proving to be difficult. I was able to create a formula that works in CaRMetal but I can not get it working in python.
Can someone suggest a more efficient way to solve this problem? Or solve my existing formula for X?
Formula:
(source: bja888.com)
Key:
(source: bja888.com)
Where o or k is the speed difference between vectors. R.speed / T.speed
|
[
"My math could be a bit rusty, but try this:\np and q are the position vectors, d and e are the direction vectors. After time t, you want them to be at the same place:\n(1) p+t*d = q+t*e \nSince you want the direction vector e, write it like this\n(2) e = (p-q)/t + d\nNow you don't need the time t, which you can calculate using your speed constraint s (otherwise you could just travel to the other point directly):\nThe direction vector e has to be of the length s, so \n(3) e12 + e22 = s2 \nAfter some equation solving you end up with\n(4) \nI) a = sum(p-q)/(s2-sum(d2))\nII) b = 2*sum(d*(p-q))/(s2-sum(d2))\nIII) c = -1\nIV) a + b*t + c*t2 = 0 \nThe sum goes over your vector components (2 in 2d, 3 in 3d)\nThe last one is a quadratic formula which you should be able to solve on your own ;-)\n",
"\nLet's assume that the first point,\nA, has zero speed. In this case, it\nshould be very simple to find the\ndirection which will give the\nfastest intersection.\nNow, A does have a speed. We can force it to have zero speed by deducting it's speed vector from the vector of B. Now we can solve as we did in 1.\n\nJust a rough idea that came to mind...\nSome more thoughts:\nIf A is standing still, then the direction B need to travel in is directly towards A. This gives us the direction in the coordinate system in which A is standing still. Let's call it d.\nNow we only need to convert the direction B needs to travel from the coordinate system in which A is still to the coordinate system in which A is moving at the given speed and direction, d2. \nThis is simply vector addition. d3 = d - d2\n We can now find the direction of d3.\nAnd a bit more formal:\nA is stationary:\nSb = speed of B, known, scalar\nalpha = atan2( a_y-b_y, a_x-b_x )\nVb_x = Sb * cos(alpha)\nVb_y = Sb * sin(alpha)\nA moves at speed Sa, direction beta:\nVb_x' = Sb * cos(alpha) + Sa * cos(beta)\nVb_y' = Sb * sin(alpha) + Sa * sin(beta)\nalpha' = atan2( Vb_y', Vb_x' )\nHaven't tested the above, but it looks reasonable at first glance...\n",
"In nature hunters use the constant bearing decreasing range algorithm to catch prey.\nI like the explanation of how bats do this link text\nWe need to define a few more terms.\nPoint A - the position associated with vector R.\nPoint B - the position associated with vector T.\nVector AB - the vector from point A to point B\nAngle beta - the angle between vector R and vector AB.\nAngle theta - the angle between vector T and vector AB\n\nThe formula is usually given as\ntheta = asin( |R| * sin(beta) / |T| )\n\nwhere\nbeta = acos( AB.xR.x + AB.yR.y )\nYou don't want to use this directly, since asin and acos only return angles between -PI/2 to PI/2.\nbeta = atan2( R.y, R.x ) - atan2( AB.y, AB.x )\nx = |R| * sin(beta) / |T|\ny = 1 + sqrt( 1 - x*x )\ntheta = 2*atan2( y, x )\n\nOf course if x > 1 R is too fast and intersection doesn't exist\nEG\n"
] |
[
1,
0,
0
] |
[
"OK, if I understand you right, you have\nR = [ xy0, v, r ]\n T = [ xy1, v ]\nIf you are concerned about the shortest intersection point, this will be achieved when your positions are identical, and in an Euclidean space this also forces the direction of the second \"thing\" being perpendicular to the first. You can write down the equations for this and solve them easily.\n"
] |
[
-2
] |
[
"intersection",
"math",
"performance",
"python",
"vector"
] |
stackoverflow_0001839567_intersection_math_performance_python_vector.txt
|
Q:
Python split value of a string
I am working on a site in Python built on the back of Django (awesome framework, can't get my head around Python). I'm looking to split a string that is returned from a database, and I want it to be split where the first space occurs, so I tried something like this:
{{product.name.split(' ' ,1)}}
This did not work and I get this stack trace:
Environment:
Request Method: GET
Request URL: http://website.co.uk/products/
Django Version: 1.1.1
Python Version: 2.5.2
Installed Applications:
['django.contrib.auth',
'django.contrib.admin',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sites',
'website.news',
'website.store_locator',
'website.css_switch',
'website.professional',
'website.contact',
'website.shop',
'tinymce',
'captcha']
Installed Middleware:
('django.middleware.common.CommonMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware')
Template error:
In template /var/www/website/src/website/shop/templates/category.html, error at line 6
Could not parse the remainder: '(' ',1)' from 'product.name.split(' ',1)'
1 : {% extends "shopbase.html" %}
2 : {% block pageid %}shop{%endblock%}
3 : {% block right-content %}
4 : <div class="products">
5 : <form method="post" action="{% url category category.slug %}">
6 : {% for product in category.products.all %}
7 : <div class="{% cycle 'clear' '' '' %}">
8 : <img src="{{MEDIA_URL}}{{product.mini_thumbnail}}" alt="{{product.name}}" class="thumbnail"/>
9 : <div class="prod-details">
10 : <h3><a href="{% url shop.views.product category.slug product.slug %}">{{product.name.split(' ',1)}}</a></h3>
11 : <p class="strap">{{ product.strap }}</p>
12 : <ul>
13 : <li class="price">£{{product.price}}</li>
14 : <li class="quantity">
15 : <select name="quantity_{{product.id}}">
16 : <option label="1" value="1">1</option>
Traceback:
File "/usr/lib/python2.5/site-packages/django/core/handlers/base.py" in get_response
92. response = callback(request, *callback_args, **callback_kwargs)
File "/var/www/website/src/website/shop/views.py" in home
716. return category(request, Category.objects.root_category(), data={'pageclass':'stylers'})
File "/var/www/website/src/website/shop/views.py" in category
738. return render_to_response(template, data, RequestContext(request))
File "/usr/lib/python2.5/site-packages/django/shortcuts/__init__.py" in render_to_response
20. return HttpResponse(loader.render_to_string(*args, **kwargs), **httpresponse_kwargs)
File "/usr/lib/python2.5/site-packages/django/template/loader.py" in render_to_string
103. t = get_template(template_name)
File "/usr/lib/python2.5/site-packages/django/template/loader.py" in get_template
82. template = get_template_from_string(source, origin, template_name)
File "/usr/lib/python2.5/site-packages/django/template/loader.py" in get_template_from_string
90. return Template(source, origin, name)
File "/usr/lib/python2.5/site-packages/django/template/__init__.py" in __init__
168. self.nodelist = compile_string(template_string, origin)
File "/usr/lib/python2.5/site-packages/django/template/__init__.py" in compile_string
189. return parser.parse()
File "/usr/lib/python2.5/site-packages/django/template/__init__.py" in parse
285. compiled_result = compile_func(self, token)
File "/usr/lib/python2.5/site-packages/django/template/loader_tags.py" in do_extends
169. nodelist = parser.parse()
File "/usr/lib/python2.5/site-packages/django/template/__init__.py" in parse
285. compiled_result = compile_func(self, token)
File "/usr/lib/python2.5/site-packages/django/template/loader_tags.py" in do_block
147. nodelist = parser.parse(('endblock', 'endblock %s' % block_name))
File "/usr/lib/python2.5/site-packages/django/template/__init__.py" in parse
285. compiled_result = compile_func(self, token)
File "/usr/lib/python2.5/site-packages/django/template/defaulttags.py" in do_for
688. nodelist_loop = parser.parse(('empty', 'endfor',))
File "/usr/lib/python2.5/site-packages/django/template/__init__.py" in parse
266. filter_expression = self.compile_filter(token.contents)
File "/usr/lib/python2.5/site-packages/django/template/__init__.py" in compile_filter
358. return FilterExpression(token, self)
File "/usr/lib/python2.5/site-packages/django/template/__init__.py" in __init__
538. raise TemplateSyntaxError("Could not parse the remainder: '%s' from '%s'" % (token[upto:], token))
Exception Type: TemplateSyntaxError at /products/
Exception Value: Could not parse the remainder: '(' ',1)' from 'product.name.split(' ',1)'
I assume this means that I can't do what I want to do straight in the template? This is where my problem comes in, as I don't know where to do it in my views. I was hoping someone could point me in the correct direction.
This is my view,
def home(request):
"""
The home page, borrows functionality from category
"""
return category(request, Category.objects.root_category(), data={'pageclass':'stylers'})
def category(request, category_slug, template='category.html', data={}):
"""
Display a category in the store
"""
import re
category = get_object_or_404(Category, slug=category_slug)
basket = Basket(request)
if request.method == "POST":
for k, v in request.POST.iteritems():
match = re.match('^add_to_basket_([0-9])+$',k)
if match:
id = match.group(1)
basket.add_item(Product.objects.get(id=id), request.POST['quantity_%s' % id])
break
return HttpResponseRedirect(reverse('basket'))
data['category'] = category
return render_to_response(template, data, RequestContext(request))
def product(request, category_slug, product_slug):
"""
Display a product in the store, nominated by product_slug, that is in the
category nominated by category_slug
"""
data = {'pageclass':'irons'}
category = get_object_or_404(Category, slug=category_slug)
product = get_object_or_404(Product, slug=product_slug)
basket = Basket(request)
if request.method == "POST":
basket.add_item(product, request.POST['quantity'])
return HttpResponseRedirect(reverse('basket'))
data['category'] = category
data['product'] = product
return render_to_response('product.html', data, RequestContext(request))
If my model is needed I can post this up no problem.
I really hope someone can help me.
Thanks very much
A:
You can call methods using the {{ }} -- but the method can't require any arguments.
What I would do in this case is add a method on your model that performs the desired behavior. Example:
class Product(models.Model):
...
def get_first_name(self):
if self.name:
return self.name.partition(' ')[0]
return None
Then, in your template, you can call it as {{ product.get_first_name }}.
Your other option would be to write a custom filter, which would be marginally more complicated. See the Django documentation on custom template tags for more information.
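For reference, the filter version would look roughly like this (the module and filter names here are made up; the custom-template-tags docs above explain where the templatetags package goes):
# yourapp/templatetags/text_extras.py
from django import template

register = template.Library()

@register.filter
def first_word(value):
    """Return everything before the first space."""
    return value.partition(' ')[0]
Then in the template: {% load text_extras %} once at the top, and {{ product.name|first_word }} where you need it.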
|
Python split value of a string
|
I am working on a site in Python built on the back of Django(awesome framework, cant get my head around python), I looking to split a string that is returned from a database and I want it to be split when the first space occurs so I tried something like this,
{{product.name.split(' ' ,1)}}
This did not work and I get this stacktrace,
Environment:
Request Method: GET
Request URL: http://website.co.uk/products/
Django Version: 1.1.1
Python Version: 2.5.2
Installed Applications:
['django.contrib.auth',
'django.contrib.admin',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sites',
'website.news',
'website.store_locator',
'website.css_switch',
'website.professional',
'website.contact',
'website.shop',
'tinymce',
'captcha']
Installed Middleware:
('django.middleware.common.CommonMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware')
Template error:
In template /var/www/website/src/website/shop/templates/category.html, error at line 6
Could not parse the remainder: '(' ',1)' from 'product.name.split(' ',1)'
1 : {% extends "shopbase.html" %}
2 : {% block pageid %}shop{%endblock%}
3 : {% block right-content %}
4 : <div class="products">
5 : <form method="post" action="{% url category category.slug %}">
6 : {% for product in category.products.all %}
7 : <div class="{% cycle 'clear' '' '' %}">
8 : <img src="{{MEDIA_URL}}{{product.mini_thumbnail}}" alt="{{product.name}}" class="thumbnail"/>
9 : <div class="prod-details">
10 : <h3><a href="{% url shop.views.product category.slug product.slug %}">{{product.name.split(' ',1)}}</a></h3>
11 : <p class="strap">{{ product.strap }}</p>
12 : <ul>
13 : <li class="price">£{{product.price}}</li>
14 : <li class="quantity">
15 : <select name="quantity_{{product.id}}">
16 : <option label="1" value="1">1</option>
Traceback:
File "/usr/lib/python2.5/site-packages/django/core/handlers/base.py" in get_response
92. response = callback(request, *callback_args, **callback_kwargs)
File "/var/www/website/src/website/shop/views.py" in home
716. return category(request, Category.objects.root_category(), data={'pageclass':'stylers'})
File "/var/www/website/src/website/shop/views.py" in category
738. return render_to_response(template, data, RequestContext(request))
File "/usr/lib/python2.5/site-packages/django/shortcuts/__init__.py" in render_to_response
20. return HttpResponse(loader.render_to_string(*args, **kwargs), **httpresponse_kwargs)
File "/usr/lib/python2.5/site-packages/django/template/loader.py" in render_to_string
103. t = get_template(template_name)
File "/usr/lib/python2.5/site-packages/django/template/loader.py" in get_template
82. template = get_template_from_string(source, origin, template_name)
File "/usr/lib/python2.5/site-packages/django/template/loader.py" in get_template_from_string
90. return Template(source, origin, name)
File "/usr/lib/python2.5/site-packages/django/template/__init__.py" in __init__
168. self.nodelist = compile_string(template_string, origin)
File "/usr/lib/python2.5/site-packages/django/template/__init__.py" in compile_string
189. return parser.parse()
File "/usr/lib/python2.5/site-packages/django/template/__init__.py" in parse
285. compiled_result = compile_func(self, token)
File "/usr/lib/python2.5/site-packages/django/template/loader_tags.py" in do_extends
169. nodelist = parser.parse()
File "/usr/lib/python2.5/site-packages/django/template/__init__.py" in parse
285. compiled_result = compile_func(self, token)
File "/usr/lib/python2.5/site-packages/django/template/loader_tags.py" in do_block
147. nodelist = parser.parse(('endblock', 'endblock %s' % block_name))
File "/usr/lib/python2.5/site-packages/django/template/__init__.py" in parse
285. compiled_result = compile_func(self, token)
File "/usr/lib/python2.5/site-packages/django/template/defaulttags.py" in do_for
688. nodelist_loop = parser.parse(('empty', 'endfor',))
File "/usr/lib/python2.5/site-packages/django/template/__init__.py" in parse
266. filter_expression = self.compile_filter(token.contents)
File "/usr/lib/python2.5/site-packages/django/template/__init__.py" in compile_filter
358. return FilterExpression(token, self)
File "/usr/lib/python2.5/site-packages/django/template/__init__.py" in __init__
538. raise TemplateSyntaxError("Could not parse the remainder: '%s' from '%s'" % (token[upto:], token))
Exception Type: TemplateSyntaxError at /products/
Exception Value: Could not parse the remainder: '(' ',1)' from 'product.name.split(' ',1)'
I assume this means that I can do what I want to do straight in the template? This is where my problem comes as I dont know where to do it in my views, I was hoping some could point me in the correct direction?
This is my view,
def home(request):
"""
The home page, borrows functionality from category
"""
return category(request, Category.objects.root_category(), data={'pageclass':'stylers'})
def category(request, category_slug, template='category.html', data={}):
"""
Display a category in the store
"""
import re
category = get_object_or_404(Category, slug=category_slug)
basket = Basket(request)
if request.method == "POST":
for k, v in request.POST.iteritems():
match = re.match('^add_to_basket_([0-9])+$',k)
if match:
id = match.group(1)
basket.add_item(Product.objects.get(id=id), request.POST['quantity_%s' % id])
break
return HttpResponseRedirect(reverse('basket'))
data['category'] = category
return render_to_response(template, data, RequestContext(request))
def product(request, category_slug, product_slug):
"""
Display a product in the store, nominated by product_slug, that is in the
category nominated by category_slug
"""
data = {'pageclass':'irons'}
category = get_object_or_404(Category, slug=category_slug)
product = get_object_or_404(Product, slug=product_slug)
basket = Basket(request)
if request.method == "POST":
basket.add_item(product, request.POST['quantity'])
return HttpResponseRedirect(reverse('basket'))
data['category'] = category
data['product'] = product
return render_to_response('product.html', data, RequestContext(request))
If my model is needed I can post this up no problem.
I really hope some can help me.
Thanks very much
|
[
"You can call methods using the {{ }} -- but the method can't require any attributes. \nWhat I would do in this case is add a method on your model that performs the desired behavior. Example:\nclass Product(models.Model):\n ...\n def get_first_name(self):\n if self.name:\n return self.name.partition(' ')[0]\n\n return None\n\nThen, in your template, you can call it as {{ product.get_first_name }}.\nYour other option would be to write a custom filter, which would be marginally more complicated. See the Django documentation on custom template tags for more information.\n"
] |
[
11
] |
[] |
[] |
[
"django",
"python",
"string"
] |
stackoverflow_0001840165_django_python_string.txt
|
Q:
Automate interaction with a webpage in python
I want to automate interaction with a webpage. I've been using pycurl up till now, but eventually the webpage will use Javascript, so I'm looking for alternatives. A typical interaction is "open the page, search for some text, click on a link (which opens a form), fill out the form and submit".
We're deploying on Google App Engine, if that makes a difference.
Clarification: we're deploying the webpage on App Engine, but the interaction is run on a separate machine. So Selenium seems like it's the best choice.
A:
Twill and mechanize don't do Javascript, and Qt and Selenium can't run on App Engine ((1)), which only supports pure Python code. I do not know of any pure-Python Javascript interpreter, which is what you'd need to deploy a JS-supporting scraper on App Engine:-(.
Maybe there's something in Java, which would at least allow you to deploy on (the Java version of) App Engine? App Engine app versions in Java and Python can use the same datastore, so you could keep some part of your app in Python... just not the part that needs to understand Javascript. Unfortunately I don't know enough about the Java / AE environment to suggest any specific package to try.
((1)): to clarify, since there seems to be a misunderstanding that has gotten so far as to get me downvoted: if you run Selenium or other scrapers on a different computer, you can of course target a site deployed in App Engine (it doesn't matter how the website you're targeting is deployed, what programming language[s] it uses, etc, etc, as long as it's a website you can access [[real website: flash, &c, may likely be different]]). How I read the question is, the OP is looking for ways to have the scraping run as part of an App Engine app -- that is the problematic part, not where you (or somebody else;-) runs the site being scraped!
A:
What about Selenium? (http://seleniumhq.org)
A:
Have you tried using QtWebKit with PyQt? You can load a specific URL and read the content from Python. You could then search for URLs and use WebKit again to access them. I think all of this can be done with some basic Django (assuming you are using Django on GAE) view testing which will test the response code. Here's a sample QtWebKit PyQt script to get you started if you want to do it the GUI way:
import sys
import time
from PyQt4.QtCore import *
from PyQt4.QtGui import *
from PyQt4.QtWebKit import *
app = QApplication(sys.argv)
web = QWebView()
settings = web.settings()
settings.setAttribute(QWebSettings.PluginsEnabled, True)
settings.setAttribute(QWebSettings.JavaEnabled, True)
settings.setAttribute(QWebSettings.JavascriptCanOpenWindows, True)
settings.setAttribute(QWebSettings.JavascriptCanAccessClipboard, True)
settings.setAttribute(QWebSettings.DeveloperExtrasEnabled, True)
settings.setAttribute(QWebSettings.ZoomTextOnly, True)
settings.setOfflineStoragePath('.')
settings.setIconDatabasePath (".")
url = 'http://stackoverflow.com'
web.load(QUrl(url))
web.show()
sys.exit(app.exec_())
A:
Check out mechanize. It should be able to handle your "typical interaction" pretty easily. Another option might be Selenium, but I've never used it personally.
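For the non-Javascript parts, the "open, click a link, fill the form, submit" interaction might look roughly like this with mechanize (the URL, link text and field name are placeholders):
import mechanize

br = mechanize.Browser()
br.open('http://example.com/page')          # placeholder URL
br.follow_link(text_regex='some text')      # the link that opens the form
br.select_form(nr=0)                        # first form on the page
br['field_name'] = 'some value'             # placeholder field name
response = br.submit()
print response.read()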
A:
twill is very lightweight but works well.
|
Automate interaction with a webpage in python
|
I want to automate interaction with a webpage. I've been using pycurl up til now but eventually the webpage will use javascript so I'm looking for alternatives . A typical interaction is "open the page, search for some text, click on a link (which opens a form), fill out the form and submit".
We're deploying on Google App engine, if that makes a difference.
Clarification: we're deploying the webpage on appengine. But the interaction is run on a separate machine. So selenium seems like it's the best choice.
|
[
"Twill and mechanize don't do Javascript, and Qt and Selenium can't run on App Engine ((1)), which only supports pure Python code. I do not know of any pure-Python Javascript interpreter, which is what you'd need to deploy a JS-supporting scraper on App Engine:-(.\nMaybe there's something in Java, which would at least allow you to deploy on (the Java version of) App Engine? App Engine app versions in Java and Python can use the same datastore, so you could keep some part of your app in Python... just not the part that needs to understand Javascript. Unfortunately I don't know enough about the Java / AE environment to suggest any specific package to try.\n((1)): to clarify, since there seems to be a misunderstanding that has gotten so far as to get me downvoted: if you run Selenium or other scrapers on a different computer, you can of course target a site deployed in App Engine (it doesn't matter how the website you're targeting is deployed, what programming language[s] it uses, etc, etc, as long as it's a website you can access [[real website: flash, &c, may likely be different]]). How I read the question is, the OP is looking for ways to have the scraping run as part of an App Engine app -- that is the problematic part, not where you (or somebody else;-) runs the site being scraped!\n",
"What about Selenium? (http://seleniumhq.org)\n",
"Did you try using QtWebKit with PyQt, you can load a specific url and read the content from Python. You could then search for urls and use Webkit again to access it. I think all those can be done with some basic Django(assuming you are using Django on GAE) view testing which will test the response code. Here's a sample QtWebKit PyQt code to get your started if you want to do it the GUI way:\nimport sys\nimport time\n\nfrom PyQt4.QtCore import *\nfrom PyQt4.QtGui import *\nfrom PyQt4.QtWebKit import *\n\napp = QApplication(sys.argv)\n\nweb = QWebView()\n\nsettings = web.settings()\nsettings.setAttribute(QWebSettings.PluginsEnabled, True)\nsettings.setAttribute(QWebSettings.JavaEnabled, True)\nsettings.setAttribute(QWebSettings.JavascriptCanOpenWindows, True)\nsettings.setAttribute(QWebSettings.JavascriptCanAccessClipboard, True)\nsettings.setAttribute(QWebSettings.DeveloperExtrasEnabled, True)\nsettings.setAttribute(QWebSettings.ZoomTextOnly, True)\n\n\n\nsettings.setOfflineStoragePath('.')\nsettings.setIconDatabasePath (\".\")\n\nurl = 'http://stackoverflow.com'\n\nweb.load(QUrl(url))\n\nweb.show()\n\nsys.exit(app.exec_())\n\n",
"Check out mechanize. It should be able to handle your \"typical interaction\" pretty easily. Another option might be Selenium, but I've never used it personally.\n",
"twill is very lightweight but works well.\n"
] |
[
6,
4,
1,
0,
0
] |
[] |
[] |
[
"google_app_engine",
"pycurl",
"python"
] |
stackoverflow_0001836987_google_app_engine_pycurl_python.txt
|
Q:
How do I call a property setter from __init__
I have the following chunk of python code:
import hashlib
class User:
def _set_password(self, value):
self._password = hashlib.sha1(value).hexdigest()
def _get_password(self):
return self._password
password = property(
fset = _set_password,
fget = _get_password)
def __init__(self, user_name, password):
self.password = password
u = User("bob", "password1")
print(u.password)
This should in theory print out the SHA1 of the password, however setting self.password from the constructor ignores the defined property and just sets the value to "password1". The value of "password1" is then read by the print statement.
I know this is something down to password being defined on the class versus the instance but I'm not sure how to represent it correctly so it works. Any help would be appreciated.
A:
A property is a descriptor, and descriptors only work on new-style classes. Try:
class User(object): ...
instead of:
class User: ...
A good guide to descriptors can be found here.
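Applied to the snippet above, the fix is a one-word change:
import hashlib

class User(object):                  # new-style class: the property now works
    def _set_password(self, value):
        self._password = hashlib.sha1(value).hexdigest()

    def _get_password(self):
        return self._password

    password = property(fset=_set_password, fget=_get_password)

    def __init__(self, user_name, password):
        self.password = password

u = User("bob", "password1")
print(u.password)                    # prints the SHA1 hex digest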
|
How do I call a property setter from __init__
|
I have the following chunk of python code:
import hashlib
class User:
def _set_password(self, value):
self._password = hashlib.sha1(value).hexdigest()
def _get_password(self):
return self._password
password = property(
fset = _set_password,
fget = _get_password)
def __init__(self, user_name, password):
self.password = password
u = User("bob", "password1")
print(u.password)
This should in theory print out the SHA1 of the password, however setting self.password from the constructor ignores the defined property and just sets the value to "password1". The value of "password1" is then read by the print statement.
I know this is something down to password being defined on the class versus the instance but I'm not sure how to represent it correctly so it works. Any help would be appreciated.
|
[
"A property is a descriptor, and descriptors only work on new-style classes. Try:\nclass User(object): ...\n\ninstead of:\nclass User: ...\n\nA good guide to descriptors can be found here.\n"
] |
[
14
] |
[] |
[] |
[
"init",
"new_style_class",
"python",
"setter"
] |
stackoverflow_0001840628_init_new_style_class_python_setter.txt
|
Q:
python X.509 asymmetric encryption
I'm trying to understand how certificates and asymmetric encryption work. I'm looking for a Python library with which I can import public or private CA-signed certificates and automatically encrypt or decrypt messages in string format. I viewed the crypto library embedded in the Python source, but I don't know how to use the hex modulus and exponent in this part of the certificate:
RSA Public Key: (1024 bit)
Modulus (1024 bit):
00:b4:31:98:0a:c4:bc:62:c1:88:aa:dc:b0:c8:bb:
33:35:19:d5:0c:64:b9:3d:41:b2:96:fc:f3:31:e1:
66:36:d0:8e:56:12:44:ba:75:eb:e8:1c:9c:5b:66:
70:33:52:14:c9:ec:4f:91:51:70:39:de:53:85:17:
16:94:6e:ee:f4:d5:6f:d5:ca:b3:47:5e:1b:0c:7b:
c5:cc:2b:6b:c1:90:c3:16:31:0d:bf:7a:c7:47:77:
8f:a0:21:c7:4c:d0:16:65:00:c1:0f:d7:b8:80:e3:
d2:75:6b:c1:ea:9e:5c:5c:ea:7d:c1:a1:10:bc:b8:
e8:35:1c:9e:27:52:7e:41:8f
Exponent: 65537 (0x10001)
Is there anyone who can help me?
Thanks
A:
MeTooCrypto:
M2Crypto is the most complete Python wrapper for OpenSSL featuring RSA, DSA, DH, HMACs, message digests, symmetric ciphers (including AES); SSL functionality to implement clients and servers; HTTPS extensions to Python's httplib, urllib, and xmlrpclib; unforgeable HMAC'ing AuthCookies for web session management; FTP/TLS client and server; S/MIME; ZServerSSL: A HTTPS server for Zope and ZSmime: An S/MIME messenger for Zope. M2Crypto can also be used to provide SSL for Twisted.
For an example of loading and manipulating PKI keys, see the test source, test_rsa.py. The test source directory contains keys in .pem format, and those are used by the code.
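A rough sketch of what that flow looks like with M2Crypto (file names are placeholders; treat this as a starting point and prefer the test suite above for authoritative usage):
from M2Crypto import RSA, X509

# Encrypt with the public key from a CA-signed certificate
cert = X509.load_cert('their_cert.pem')            # placeholder file
rsa_pub = cert.get_pubkey().get_rsa()
ciphertext = rsa_pub.public_encrypt('secret message',
                                    RSA.pkcs1_oaep_padding)

# Decrypt with the matching private key
rsa_priv = RSA.load_key('my_key.pem')              # placeholder file
plaintext = rsa_priv.private_decrypt(ciphertext,
                                     RSA.pkcs1_oaep_padding)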
|
python X.509 asymmetric encryption
|
I'm trying to understand how certificate and asymmetric encryption works. I'm looking for a python library where i can import public or private ca signed certificates and automatically encrypt or decrypt message in string format, i viewed the crypto library embedded in python source, but i don't know how to use the hex modulus and exponent in this part of the certificate:
RSA Public Key: (1024 bit)
Modulus (1024 bit):
00:b4:31:98:0a:c4:bc:62:c1:88:aa:dc:b0:c8:bb:
33:35:19:d5:0c:64:b9:3d:41:b2:96:fc:f3:31:e1:
66:36:d0:8e:56:12:44:ba:75:eb:e8:1c:9c:5b:66:
70:33:52:14:c9:ec:4f:91:51:70:39:de:53:85:17:
16:94:6e:ee:f4:d5:6f:d5:ca:b3:47:5e:1b:0c:7b:
c5:cc:2b:6b:c1:90:c3:16:31:0d:bf:7a:c7:47:77:
8f:a0:21:c7:4c:d0:16:65:00:c1:0f:d7:b8:80:e3:
d2:75:6b:c1:ea:9e:5c:5c:ea:7d:c1:a1:10:bc:b8:
e8:35:1c:9e:27:52:7e:41:8f
Exponent: 65537 (0x10001)
is there anyone can help me?
thanks
|
[
"MeTooCrypto:\n\nM2Crypto is the most complete Python wrapper for OpenSSL featuring RSA, DSA, DH, HMACs, message digests, symmetric ciphers (including AES); SSL functionality to implement clients and servers; HTTPS extensions to Python's httplib, urllib, and xmlrpclib; unforgeable HMAC'ing AuthCookies for web session management; FTP/TLS client and server; S/MIME; ZServerSSL: A HTTPS server for Zope and ZSmime: An S/MIME messenger for Zope. M2Crypto can also be used to provide SSL for Twisted. \n\nFor an example of loading and manipulating PKI keys, see the test source, test_rsa.py. The test source directory contains keys in .pem format, and those are used by the code.\n"
] |
[
3
] |
[] |
[] |
[
"encryption",
"encryption_asymmetric",
"python",
"x509"
] |
stackoverflow_0001840720_encryption_encryption_asymmetric_python_x509.txt
|
Q:
Python3: ssl cert information
I have been trying to get information regarding expired SSL certificates using Python 3, and it would be nice to be able to get as verbose a workup as possible. Any takers?
So far I have been trying to use urllib.request to get this info (to no avail); does this strike anyone as foolish?
I have seen some examples of similar work using older versions of python, but nothing using v3.
http://objectmix.com/python/737581-re-urllib-getting-ssl-certificate-info.html
http://www.mail-archive.com/python-list@python.org/msg208150.html
A:
The 3.1.1 documentation for SSL has an example.
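The gist of that example, adapted to print the expiry date (the host and CA bundle path are placeholders):
import socket, ssl

sock = socket.create_connection(('example.com', 443))
s = ssl.wrap_socket(sock, cert_reqs=ssl.CERT_REQUIRED,
                    ca_certs='/path/to/ca_bundle.pem')
cert = s.getpeercert()        # empty dict unless cert_reqs is CERT_REQUIRED
print(cert['notAfter'])       # e.g. 'Feb 16 16:54:50 2013 GMT'
print(cert['subject'])
s.close()
Note that wrap_socket raises ssl.SSLError during the handshake if the certificate has already expired, so an expired certificate shows up as an exception rather than in the returned dict.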
|
Python3: ssl cert information
|
I have been trying to get information regarding expired ssl certificates using python 3 but it would be nice to be able to get as verbose a workup as possible. any takers?
So far i have been trying to use urllib.request to get this info (to no avail), does this strike anyone as foolish?
I have seen some examples of similar work using older versions of python, but nothing using v3.
http://objectmix.com/python/737581-re-urllib-getting-ssl-certificate-info.html
http://www.mail-archive.com/python-list@python.org/msg208150.html
|
[
"The 3.1.1 documentation for SSL has an example.\n"
] |
[
1
] |
[] |
[] |
[
"certificate",
"python",
"ssl",
"urllib"
] |
stackoverflow_0001840725_certificate_python_ssl_urllib.txt
|
Q:
Can urllib2 make HTTP/1.1 requests?
EDIT:
This question is invalid. Turns out a transparent proxy was making an onward HTTP 1.0 request even though urllib/httplib was indeed making a HTTP 1.1 request originally.
ORIGINAL QUESTION:
By default urllib2.urlopen always makes a HTTP 1.0 request.
Is there any way to get it to talk HTTP 1.1 ?
A:
Why do you think it's not already using http 1.1? Have you tried something like...:
>>> import urllib2
>>> urllib2._opener.handlers[1].set_http_debuglevel(100)
>>> urllib2.urlopen('http://mit.edu').read()[:10]
connect: (mit.edu, 80)
send: 'GET / HTTP/1.1
(etc, etc)? This should show it's sending a 1.1 GET request already.
A:
urllib2 uses httplib to make HTTP requests. My Python 2.6.4 definitely uses HTTP/1.1 in httplib, although it can handle responses from a 1.1, 1.0 or 0.9 server. As far back as 2.3, this appears to be the case (and possibly back to 1.5)
However, if it is required to tunnel through a proxy, it will send a request like this:
CONNECT host:port HTTP/1.0
And that /1.0 string is hard-coded.
What version of python are you using, and how are you using urllib2?
|
Can urllib2 make HTTP/1.1 requests?
|
EDIT:
This question is invalid. Turns out a transparent proxy was making an onward HTTP 1.0 request even though urllib/httplib was indeed making a HTTP 1.1 request originally.
ORIGINAL QUESTION:
By default urllib2.urlopen always makes a HTTP 1.0 request.
Is there any way to get it to talk HTTP 1.1 ?
|
[
"Why do you think it's not already using http 1.1? Have you tried something like...:\n>>> import urllib2\n>>> urllib2._opener.handlers[1].set_http_debuglevel(100)\n>>> urllib2.urlopen('http://mit.edu').read()[:10]\nconnect: (mit.edu, 80)\nsend: 'GET / HTTP/1.1\n\n(etc, etc)? This should show it's sending a 1.1 GET request already.\n",
"urllib2 uses httplib to make HTTP requests. My Python 2.6.4 definitely uses HTTP/1.1 in httplib, although it can handle responses from a 1.1, 1.0 or 0.9 server. As far back as 2.3, this appears to be the case (and possibly back to 1.5)\nHowever, if it is required to tunnel through a proxy, it will send a request like this:\nCONNECT host:port HTTP/1.0\n\nAnd that /1.0 string is hard-coded.\nWhat version of python are you using, and how are you using urllib2?\n"
] |
[
12,
3
] |
[] |
[] |
[
"http",
"python",
"urllib2"
] |
stackoverflow_0001840965_http_python_urllib2.txt
|
Q:
How can urllib2 / httplib talk HTTP 1.1 for HTTPS connections via a Squid proxy?
When I use urllib2 to make an HTTP 1.1 connection via a Squid proxy, Squid makes the onward connection in HTTP 1.0.
How can I persuade Squid to talk 1.1 to the destination server?
A:
After dealing with this problem for an entire afternoon, I found the solution. So please excuse me for answering my own question, but it would be great if someone else finds this useful and it saves them the pain.
In order to get Squid to have a HTTP 1.1 conversation with the destination server, the original request to it must be done via HTTP CONNECT. This is documented in the bug http://bugs.python.org/issue1424152.
There is a fix for py3k and it has been backported to Python 3.1 and 2.6.
If you are rocking a Python 2.5 or 2.4 installation, then you can download a patched version of httplib.py and urllib2.py here http://pypi.python.org/pypi/httpsproxy_urllib2. Simply replace your existing versions, or drop these 2 files into your project.
|
How can urllib2 / httplib talk HTTP 1.1 for HTTPS connections via a Squid proxy?
|
When I use urllib2 to make a HTTP 1.1 connection via a squid proxy, squid makes a new ongoing connection in HTTP 1.0.
How can I persuade Squid to talk 1.1 to the destination server?
|
[
"After dealing with this problem for an entire afternoon, i found the solution. So please excuse me answering my own question, but it would be great if someone else finds this useful and it saves them the pain.\nIn order to get Squid to have a HTTP 1.1 conversation with the destination server, the original request to it must be done via HTTP CONNECT. This is documented in the bug http://bugs.python.org/issue1424152.\nThere is a fix for py3k and it has been backported to Python 3.1 and 2.6.\nIf you are rocking a Python 2.5 or 2.4 installation, then you can download a patched version of httplib.py and urllib2.py here http://pypi.python.org/pypi/httpsproxy_urllib2. Simply replace your existing versions, or drop these 2 files into your project.\n"
] |
[
3
] |
[] |
[] |
[
"https",
"proxy",
"python",
"urllib"
] |
stackoverflow_0001841730_https_proxy_python_urllib.txt
|
Q:
Using 'try' vs. 'if' in Python
Is there a rationale for deciding which of the try or if constructs to use when testing whether a variable has a value?
For example, there is a function that returns either a list or doesn't return a value. I want to check result before processing it. Which of the following would be more preferable and why?
result = function();
if (result):
for r in result:
#process items
or
result = function();
try:
for r in result:
# Process items
except TypeError:
pass;
Related discussion:
Checking for member existence in Python
A:
You often hear that Python encourages EAFP style ("it's easier to ask for forgiveness than permission") over LBYL style ("look before you leap"). To me, it's a matter of efficiency and readability.
In your example (say that instead of returning a list or an empty string, the function were to return a list or None), if you expect that 99 % of the time result will actually contain something iterable, I'd use the try/except approach. It will be faster if exceptions really are exceptional. If result is None more than 50 % of the time, then using if is probably better.
To support this with a few measurements:
>>> import timeit
>>> timeit.timeit(setup="a=1;b=1", stmt="a/b") # no error checking
0.06379691968322732
>>> timeit.timeit(setup="a=1;b=1", stmt="try:\n a/b\nexcept ZeroDivisionError:\n pass")
0.0829463709378615
>>> timeit.timeit(setup="a=1;b=0", stmt="try:\n a/b\nexcept ZeroDivisionError:\n pass")
0.5070195056614466
>>> timeit.timeit(setup="a=1;b=1", stmt="if b!=0:\n a/b")
0.11940114974277094
>>> timeit.timeit(setup="a=1;b=0", stmt="if b!=0:\n a/b")
0.051202772912802175
So, whereas an if statement always costs you, it's nearly free to set up a try/except block. But when an Exception actually occurs, the cost is much higher.
Moral:
It's perfectly OK (and "pythonic") to use try/except for flow control,
but it makes the most sense when Exceptions are actually exceptional.
From the Python docs:
EAFP
Easier to ask for forgiveness than
permission. This common Python coding
style assumes the existence of valid
keys or attributes and catches
exceptions if the assumption proves
false. This clean and fast style is
characterized by the presence of many
try and except statements. The
technique contrasts with the LBYL
style common to many other languages
such as C.
A:
Your function should not return mixed types (i.e. list or empty string). It should return a list of values or just an empty list. Then you wouldn't need to test for anything, i.e. your code collapses to:
for r in function():
# process items
A:
Please ignore my solution if the code I provide is not obvious at first glance and you have to read the explanation after the code sample.
Can I assume that the "no value returned" means the return value is None? If yes, or if the "no value" is False boolean-wise, you can do the following, since your code essentially treats "no value" as "do not iterate":
for r in function() or ():
# process items
If function() returns something that's not True, you iterate over the empty tuple, i.e. you don't run any iterations. This is essentially LBYL.
A:
Generally, the impression I've gotten is that exceptions should be reserved for exceptional circumstances. If the result is expected never to be empty (but might be, if, for instance, a disk crashed, etc), the second approach makes sense. If, on the other hand, an empty result is perfectly reasonable under normal conditions, testing for it with an if statement makes more sense.
I had in mind the (more common) scenario:
# keep access counts for different files
file_counts={}
...
# got a filename somehow
if filename not in file_counts:
file_counts[filename]=0
file_counts[filename]+=1
instead of the equivalent:
...
try:
file_counts[filename]+=1
except KeyError:
file_counts[filename]=1
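For this particular counting pattern the standard library also offers a third option that sidesteps the if-vs-try choice entirely (not mentioned above, but worth knowing):
from collections import defaultdict

file_counts = defaultdict(int)    # missing keys start at 0
...
# got a filename somehow
file_counts[filename] += 1        # no if or try needed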
A:
Which of the following would be more preferable and why?
Look Before You Leap is preferable in this case. With the exception approach, a TypeError could occur anywhere in your loop body and it'd get caught and thrown away, which is not what you want and will make debugging tricky.
(I agree with Brandon Corfman though: returning None for ‘no items’ instead of an empty list is broken. It's an unpleasant habit of Java coders that should not be seen in Python. Or Java.)
A:
Your second example is broken - the code will never throw a TypeError exception since you can iterate through both strings and lists. Iterating through an empty string or list is also valid - it will execute the body of the loop zero times.
A:
bobince wisely points out that wrapping the second case can also catch TypeErrors in the loop, which is not what you want. If you do really want to use a try though, you can test if it's iterable before the loop
result = function();
try:
it = iter(result)
except TypeError:
pass
else:
for r in it:
#process items
As you can see, it's rather ugly. I don't suggest it, but it should be mentioned for completeness.
A:
As far as performance is concerned, using a try block for code that normally doesn't raise exceptions is faster than using an if statement every time. So, the decision depends on the probability of exceptional cases.
|
Using 'try' vs. 'if' in Python
|
Is there a rationale to decide which one of try or if constructs to use, when testing variable to have a value?
For example, there is a function that returns either a list or doesn't return a value. I want to check result before processing it. Which of the following would be more preferable and why?
result = function();
if (result):
for r in result:
#process items
or
result = function();
try:
for r in result:
# Process items
except TypeError:
pass;
Related discussion:
Checking for member existence in Python
|
[
"You often hear that Python encourages EAFP style (\"it's easier to ask for forgiveness than permission\") over LBYL style (\"look before you leap\"). To me, it's a matter of efficiency and readability.\nIn your example (say that instead of returning a list or an empty string, the function were to return a list or None), if you expect that 99 % of the time result will actually contain something iterable, I'd use the try/except approach. It will be faster if exceptions really are exceptional. If result is None more than 50 % of the time, then using if is probably better.\nTo support this with a few measurements:\n>>> import timeit\n>>> timeit.timeit(setup=\"a=1;b=1\", stmt=\"a/b\") # no error checking\n0.06379691968322732\n>>> timeit.timeit(setup=\"a=1;b=1\", stmt=\"try:\\n a/b\\nexcept ZeroDivisionError:\\n pass\")\n0.0829463709378615\n>>> timeit.timeit(setup=\"a=1;b=0\", stmt=\"try:\\n a/b\\nexcept ZeroDivisionError:\\n pass\")\n0.5070195056614466\n>>> timeit.timeit(setup=\"a=1;b=1\", stmt=\"if b!=0:\\n a/b\")\n0.11940114974277094\n>>> timeit.timeit(setup=\"a=1;b=0\", stmt=\"if b!=0:\\n a/b\")\n0.051202772912802175\n\nSo, whereas an if statement always costs you, it's nearly free to set up a try/except block. But when an Exception actually occurs, the cost is much higher.\nMoral:\n\nIt's perfectly OK (and \"pythonic\") to use try/except for flow control,\nbut it makes sense most when Exceptions are actually exceptional. \n\nFrom the Python docs:\n\nEAFP\nEasier to ask for forgiveness than\n permission. This common Python coding\n style assumes the existence of valid\n keys or attributes and catches\n exceptions if the assumption proves\n false. This clean and fast style is\n characterized by the presence of many\n try and except statements. The\n technique contrasts with the LBYL\n style common to many other languages\n such as C.\n\n",
"Your function should not return mixed types (i.e. list or empty string). It should return a list of values or just an empty list. Then you wouldn't need to test for anything, i.e. your code collapses to:\nfor r in function():\n # process items\n\n",
"Please ignore my solution if the code I provide is not obvious at first glance and you have to read the explanation after the code sample.\nCan I assume that the \"no value returned\" means the return value is None? If yes, or if the \"no value\" is False boolean-wise, you can do the following, since your code essentially treats \"no value\" as \"do not iterate\":\nfor r in function() or ():\n # process items\n\nIf function() returns something that's not True, you iterate over the empty tuple, i.e. you don't run any iterations. This is essentially LBYL.\n",
"Generally, the impression I've gotten is that exceptions should be reserved for exceptional circumstances. If the result is expected never to be empty (but might be, if, for instance, a disk crashed, etc), the second approach makes sense. If, on the other hand, an empty result is perfectly reasonable under normal conditions, testing for it with an if statement makes more sense.\nI had in mind the (more common) scenario:\n# keep access counts for different files\nfile_counts={}\n...\n# got a filename somehow\nif filename not in file_counts:\n file_counts[filename]=0\nfile_counts[filename]+=1\n\ninstead of the equivalent:\n...\ntry:\n file_counts[filename]+=1\nexcept KeyError:\n file_counts[filename]=1\n\n",
"\nWhich of the following would be more preferable and why?\n\nLook Before You Leap is preferable in this case. With the exception approach, a TypeError could occur anywhere in your loop body and it'd get caught and thrown away, which is not what you want and will make debugging tricky.\n(I agree with Brandon Corfman though: returning None for ‘no items’ instead of an empty list is broken. It's an unpleasant habit of Java coders that should not be seen in Python. Or Java.)\n",
"Your second example is broken - the code will never throw a TypeError exception since you can iterate through both strings and lists. Iterating through an empty string or list is also valid - it will execute the body of the loop zero times.\n",
"bobince wisely points out that wrapping the second case can also catch TypeErrors in the loop, which is not what you want. If you do really want to use a try though, you can test if it's iterable before the loop\nresult = function();\ntry:\n it = iter(result)\nexcept TypeError:\n pass\nelse:\n for r in it:\n #process items\n\nAs you can see, it's rather ugly. I don't suggest it, but it should be mentioned for completeness.\n",
"As far as the performance is concerned, using try block for code that normally\ndoesn’t raise exceptions is faster than using if statement everytime. So, the decision depends on the probability of excetional cases.\n"
] |
[
329,
17,
13,
7,
5,
4,
3,
1
] |
[
"As a general rule of thumb, you should never use try/catch or any exception handling stuff to control flow. Even though behind the scenes iteration is controlled via the raising of StopIteration exceptions, you still should prefer your first code snippet to the second.\n"
] |
[
-6
] |
[
"python"
] |
stackoverflow_0001835756_python.txt
|
Q:
How can I organize each scraped item into a csv row?
What is the best way to organize scraped data into a CSV? More specifically, each item is in this form:
url
"firstName middleInitial, lastName - level - word1 word2 word3, & wordN practice officeCity."
JD, schoolName, date
Example:
http://www.examplefirm.com/jang
"Joe E. Ang - partner - privatization mergers, media & technology practice New York."
JD, University of Chicago Law School, 1985
I want to put this item in this form:
(http://www.examplefirm.com/jang, Joe, E., Ang, partner, privatization mergers, media & technology, New York, University of Chicago Law School, 1985)
so that I can write it into a csv file to import to a django db.
What would be the best way of doing this?
Thank you.
A:
There's really no shortcut on this. Line 1 is easy: just assign it to url. Line 3 can probably be split on , without any ill effects, but line 2 will have to be manually parsed. What do you know about word1-wordN? Are you sure "practice" will never be a "word"? Are you sure the words are only one word long? Can they be quoted? Can they contain dashes?
Then I would parse out the beginning and end bits, so you're left with a list of words, split it by commas and/or & (is there a consistent comma before &? Your format says yes, but your example says no.) If there are a variable number of words, you don't want to inline them in your tuple like that, because you don't know how to get them out. Create a list from your words, and add that as one element of the tuple.
>>> tup = (url, first, middle, last, rank, words, city, school, year)
>>> tup
('http://www.examplefirm.com/jang', 'Joe', 'E.', 'Ang', 'partner',
['privatization mergers', 'media & technology'], 'New York',
'University of Chicago Law School', '1985')
More specifically? You're on your own there.
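If it helps, writing that tuple out with the csv module might look like this (the output file name is a placeholder; the variable-length word list is joined into a single cell so every row has the same number of columns):
import csv

f = open('scraped.csv', 'wb')       # 'wb' for the csv module on Python 2
writer = csv.writer(f)
writer.writerow(['url', 'first', 'middle', 'last', 'rank',
                 'practice_areas', 'city', 'school', 'year'])
url, first, middle, last, rank, words, city, school, year = tup
writer.writerow([url, first, middle, last, rank,
                 '; '.join(words), city, school, year])
f.close()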
|
How can I organize each scraped item into a csv row?
|
What is the best way to organize scraped data into a csv? More specifically each item is in this form
url
"firstName middleInitial, lastName - level - word1 word2 word3, & wordN practice officeCity."
JD, schoolName, date
Example:
http://www.examplefirm.com/jang
"Joe E. Ang - partner - privatization mergers, media & technology practice New York."
JD, University of Chicago Law School, 1985
I want to put this item in this form:
(http://www.examplefirm.com/jang, Joe, E., Ang, partner, privatization mergers, media & technology, New York, University of Chicago Law School, 1985)
so that I can write it into a csv file to import to a django db.
What would be the best way of doing this?
Thank you.
|
[
"There's really no short cut on this. Line 1 is easy. Just assign it to url. Line 3 can probably be split on , without any ill effects, but line 2 will have to be manually parsed. What do you know about word1-wordN? Are you sure \"practice\" will never be a \"word\". Are you sure the words are only one word long? Can they be quoted? Can they contain dashes? \nThen I would parse out the beginning and end bits, so you're left with a list of words, split it by commas and/or & (is there a consistent comma before &? Your format says yes, but your example says no.) If there are a variable number of words, you don't want to inline them in your tuple like that, because you don't know how to get them out. Create a list from your words, and add that as one element of the tuple.\n>>> tup = (url, first, middle, last, rank, words, city, school, year)\n>>> tup\n('http://www.examplefirm.com/jang', 'Joe', 'E.', 'Ang', 'partner', \n['privatization mergers', 'media & technology'], 'New York', \n'University of Chicago Law School', '1985')\n\nMore specifically? You're on your own there.\n"
] |
[
2
] |
[] |
[] |
[
"csv",
"django",
"python"
] |
stackoverflow_0001841903_csv_django_python.txt
|
Q:
Is there a tuple data structure in Python
I want to have a 3-item combination like tag, name, and a list of values (array). What is the best possible data structure to store such things?
Currently I am using a dictionary, but it only allows 2 items, though it makes for easy traversal using
for k, v in dict.iteritems():
can we have something similar like:
for k, v, x in tuple.iteritems():
A:
Python tutorial on data structures, see section 5.3 "Tuples and sequences"
However, if you want to use "name" to index the data, you probably want a dictionary that has the string name as key and a (tag, [list, of, values]) tuple as value, e.g.
d =
{ "foo" : ("dog", [1,2,3,4]),
"bar" : ("cat", [4,5,6,7,8,9]),
"moo" : ("cow", [4,5,7,8,9,1,3,4,65])
}
for name,(tag,values) in d.items():
do_something()
this way also d["foo"] will work, just like for any other dictionary.
A:
Why not just use a list of tuples (yes, this is a data type in Python, like lists, but immutable):
mylistoftuples = [(1, 2, 3), (2, "three", 4), (3, 4, [1, 2, 3, 4, 5])]
for k, v, x in mylistoftuples:
print k, v, x
A:
You can consider the collections.namedtuple type to create tuple-like objects that have fields accessible by attribute lookup.
collections.namedtuple(typename, field_names[, verbose])
Returns a new tuple subclass named typename. The new subclass is used to create tuple-like objects that have fields accessible by attribute lookup as well as being indexable and iterable. Instances of the subclass also have a helpful docstring (with typename and field_names) and a helpful __repr__() method which lists the tuple contents in a name=value format.
>>> import collections
>>> mytup = collections.namedtuple('mytup', ['tag','name', 'values'])
>>> e1 = mytup('tag1','great',[1,'two',3])
>>> e1
mytup(tag='tag1', name='great', values=[1, 'two', 3])
>>> e1.values
[1, 'two', 3]
>>>
Building on other answers, an example of filtering a list of mytup objects:
>>> tlist = [mytup("foo", "dog", [1,2,3,4]),
mytup("bar","cat", [4,5,6,7,8,9]), mytup("moo","cow", [4,5,7,8,9,1,3,4,65])]
>>> tlist
[mytup(tag='foo', name='dog', values=[1, 2, 3, 4]),
mytup(tag='bar', name='cat', values=[4, 5, 6, 7, 8, 9]),
mytup(tag='moo', name='cow', values=[4, 5, 7, 8, 9, 1, 3, 4, 65])]
>>> [t for t in tlist if t.tag == 'bar']
[mytup(tag='bar', name='cat', values=[4, 5, 6, 7, 8, 9])]
>>>
Namedtuple objects can, of course, be used in other structures (e.g a dict), as mentioned in other answers. The advantage is, obviously, that the fields are named, and code using them is clearer.
A:
Here's a comment to @gimel's answer:
>>> import collections
>>> T = collections.namedtuple("T", 'tag name values')
>>> from itertools import starmap
>>> list(starmap(T, [("a", "b", [1,2]), ("c", "d",[3,4])]))
[T(tag='a', name='b', values=[1, 2]), T(tag='c', name='d', values=[3, 4])]
A:
You can have an array of 3-item tuples.
arr = [ (1,2,3), (4,5,6), (7,8,9)]
for (k, v, x) in arr:
# do stuff
|
Is there a tuple data structure in Python
|
I want to have a 3-item combination like tag, name, and a list of values (array). What is the best possible data structure to store such things?
Currently I am using a dictionary, but it only allows 2 items, though it makes for easy traversal using
for k, v in dict.iteritems():
can we have something similar like:
for k, v, x in tuple.iteritems():
|
[
"Python tutorial on data structutres see section 5.3 \"Tuples and sequences\"\nhowever, if you want to use \"name\" to index the data, you probably want to use a dictionary that has the string name as key and values are tuple of (tag, [list, of, values]) e.g.\n d = \n { \"foo\" : (\"dog\", [1,2,3,4]),\n \"bar\" : (\"cat\", [4,5,6,7,8,9]),\n \"moo\" : (\"cow\", [4,5,7,8,9,1,3,4,65])\n }\n\n for name,(tag,values) in d.items():\n do_something()\n\nthis way alsod[\"foo\"] will work, just like for any other dictionary.\n",
"why not just use a list of tuples (yes, this is a data type in python, like lists, but immutable):\nmylistoftuples = [(1, 2, 3), (2, \"three\", 4), (3, 4, [1, 2, 3, 4, 5])]\nfor k, v, x in mylistoftuples:\n print k, v, x\n\n",
"You can consider the collections.namedtuple type to create tuple-like objects that have fields accessible by attribute lookup.\n\ncollections.namedtuple(typename, field_names[, verbose])\nReturns a new tuple subclass named typename. The new subclass is used to create tuple-like objects that have fields accessible by attribute lookup as well as being indexable and iterable. Instances of the subclass also have a helpful docstring (with typename and field_names) and a helpful __repr__() method which lists the tuple contents in a name=value format.\n\n>>> import collections\n>>> mytup = collections.namedtuple('mytup', ['tag','name', 'values'])\n>>> e1 = mytup('tag1','great',[1,'two',3])\n>>> e1\nmytup(tag='tag1', name='great', values=[1, 'two', 3])\n>>> e1.values\n[1, 'two', 3]\n>>> \n\nBuilding on other answers, an example of filtering a list of mytup objects:\n>>> tlist = [mytup(\"foo\", \"dog\", [1,2,3,4]),\n mytup(\"bar\",\"cat\", [4,5,6,7,8,9]), mytup(\"moo\",\"cow\", [4,5,7,8,9,1,3,4,65])]\n>>> tlist\n[mytup(tag='foo', name='dog', values=[1, 2, 3, 4]),\nmytup(tag='bar', name='cat', values=[4, 5, 6, 7, 8, 9]),\nmytup(tag='moo', name='cow', values=[4, 5, 7, 8, 9, 1, 3, 4, 65])]\n>>> [t for t in tlist if t.tag == 'bar']\n[mytup(tag='bar', name='cat', values=[4, 5, 6, 7, 8, 9])]\n>>> \n\nNamedtuple objects can, of course, be used in other structures (e.g a dict), as mentioned in other answers. The advantage is, obviously, that the fields are named, and code using them is clearer.\n",
"Here's a comment to @gimel's answer:\n>>> import collections\n>>> T = collections.namedtuple(\"T\", 'tag name values')\n>>> from itertools import starmap\n>>> list(starmap(T, [(\"a\", \"b\", [1,2]), (\"c\", \"d\",[3,4])]))\n[T(tag='a', name='b', values=[1, 2]), T(tag='c', name='d', values=[3, 4])]\n\n",
"You can have an array of 3-item tuples.\narr = [ (1,2,3), (4,5,6), (7,8,9)]\nfor (k, v, x) in arr:\n # do stuff\n\n"
] |
[
8,
4,
4,
2,
0
] |
[] |
[] |
[
"python",
"tuples"
] |
stackoverflow_0001831218_python_tuples.txt
|
Q:
How do I detect if Python is running as a 64-bit application?
Possible Duplicate:
How do I determine if my python shell is executing in 32bit or 64bit mode?
I'm doing some work with the windows registry. Depending on whether you're running python as 32-bit or 64-bit, the key value will be different. How do I detect if Python is running as a 64-bit application as opposed to a 32-bit application?
Note: I'm not interested in detecting 32-bit/64-bit Windows - just the Python platform.
A:
import platform
platform.architecture()
From the Python docs:
Queries the given executable (defaults
to the Python interpreter binary) for
various architecture information.
Returns a tuple (bits, linkage) which
contain information about the bit
architecture and the linkage format
used for the executable. Both values
are returned as strings.
A:
While it may work on some platforms, be aware that platform.architecture is not always a reliable way to determine whether python is running in 32-bit or 64-bit. In particular, on some OS X multi-architecture builds, the same executable file may be capable of running in either mode, as the example below demonstrates. The quickest safe multi-platform approach is to test sys.maxsize on Python 2.6, 2.7, Python 3.x.
$ arch -i386 /usr/local/bin/python2.7
Python 2.7.9 (v2.7.9:648dcafa7e5f, Dec 10 2014, 10:10:46)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import platform, sys
>>> platform.architecture(), sys.maxsize
(('64bit', ''), 2147483647)
>>> ^D
$ arch -x86_64 /usr/local/bin/python2.7
Python 2.7.9 (v2.7.9:648dcafa7e5f, Dec 10 2014, 10:10:46)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import platform, sys
>>> platform.architecture(), sys.maxsize
(('64bit', ''), 9223372036854775807)
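In practice the check reduces to a one-liner; a minimal sketch:
import sys

is_64bit = sys.maxsize > 2**32  # True under a 64-bit interpreter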
|
How do I detect if Python is running as a 64-bit application?
|
Possible Duplicate:
How do I determine if my python shell is executing in 32bit or 64bit mode?
I'm doing some work with the windows registry. Depending on whether you're running python as 32-bit or 64-bit, the key value will be different. How do I detect if Python is running as a 64-bit application as opposed to a 32-bit application?
Note: I'm not interested in detecting 32-bit/64-bit Windows - just the Python platform.
|
[
"import platform\nplatform.architecture()\n\nFrom the Python docs:\n\nQueries the given executable (defaults\n to the Python interpreter binary) for\n various architecture information.\nReturns a tuple (bits, linkage) which\n contain information about the bit\n architecture and the linkage format\n used for the executable. Both values\n are returned as strings.\n\n",
"While it may work on some platforms, be aware that platform.architecture is not always a reliable way to determine whether python is running in 32-bit or 64-bit. In particular, on some OS X multi-architecture builds, the same executable file may be capable of running in either mode, as the example below demonstrates. The quickest safe multi-platform approach is to test sys.maxsize on Python 2.6, 2.7, Python 3.x.\n$ arch -i386 /usr/local/bin/python2.7\nPython 2.7.9 (v2.7.9:648dcafa7e5f, Dec 10 2014, 10:10:46)\n[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import platform, sys\n>>> platform.architecture(), sys.maxsize\n(('64bit', ''), 2147483647)\n>>> ^D\n$ arch -x86_64 /usr/local/bin/python2.7\nPython 2.7.9 (v2.7.9:648dcafa7e5f, Dec 10 2014, 10:10:46)\n[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import platform, sys\n>>> platform.architecture(), sys.maxsize\n(('64bit', ''), 9223372036854775807)\n\n"
] |
[
226,
70
] |
[] |
[] |
[
"64_bit",
"python"
] |
stackoverflow_0001842544_64_bit_python.txt
|
Q:
Timer object in Django
I'm using Django to create a registration/payment application for a limited number of products. I want to create a timer so that when a user chooses to purchase a product, that product will then be displayed as "already taken". But if the user does not go through with the purchase and the timer runs out, the product goes back to status "available". If the user completes the purchase, the timer should cancel, leaving the product's status permanently as "already taken".
I've tried using Python's dictionary to instantiate Python Timer objects on the fly, but about 30% of the time, I get a "key error" when it's time to cancel the Timer.
Please. Could someone give me an idea on the proper way to do this in Django?
Thanks very much!
Mark
A:
I would not use a timer for such a situation because it adds complexity. What will happen to your timers if the process is restarted? That's the case with some hosting providers who restart the process on a regular basis. Maybe you are facing a similar situation.
I would add a 'taken_time' DateTimeField to your table to record when the product is taken and use it to decide whether the product is available or not.
The list of available products would then contain the 'available' products plus the 'taken' ones whose 'taken_time' is older than (current time - accepted delay).
I think you don't really need to change the status with an external process as long as you manage it correctly in your business logic.
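A minimal sketch of that approach (the model and field names are assumptions, not your actual schema):
from datetime import datetime, timedelta
from django.db import models

ACCEPTED_DELAY = timedelta(minutes=3)

class Product(models.Model):
    taken_time = models.DateTimeField(null=True, blank=True)
    purchased = models.BooleanField(default=False)

    def is_available(self):
        # Free if never taken, or if the reservation expired unpurchased.
        if self.purchased:
            return False
        if self.taken_time is None:
            return True
        return self.taken_time < datetime.now() - ACCEPTED_DELAY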
I hope it helps
PS: Sorry for my english! Quite late in France now and difficult for me to find simple way to explain my view :-)
A:
You should not use in-process/in-memory timer objects, since your Django app can run in multiple processes...
I suggest creating these "timers" in the database (like an "expire" datetime field on your product model) and using a cron job that marks expired objects as available.
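A minimal sketch of what the cron job could run (the status and expire field names, app, and model are assumptions):
from datetime import datetime
from myapp.models import Product  # hypothetical app and model

def release_expired():
    # Free up reservations whose timer ran out without a purchase.
    Product.objects.filter(status='taken',
                           expire__lt=datetime.now()).update(status='available')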
A:
Fork it. Have your model use three states for a product: available, taken, and purchased. When a user initiates an action that marks the product as taken, spawn a new process that will mark the product back to available in 3 minutes if it is not purchased.
|
Timer object in Django
|
I'm using Django to create a registration/payment application for a limited number of products. I want to create a timer so that when a user chooses to purchase a product, that product will then be displayed as "already taken". But if the user does not go through with the purchase and the timer runs out, the product goes back to status "available". If the user completes the purchase, the timer should cancel, leaving the product's status permanently as "already taken".
I've tried using Python's dictionary to instantiate Python Timer objects on the fly, but about 30% of the time, I get a "key error" when it's time to cancel the Timer.
Please. Could someone give me an idea on the proper way to do this in Django?
Thanks very much!
Mark
|
[
"I would not use a timer for such situation because it can give some complexity. What will happen to your timers if the process is restarted. That't the case with some internet providers who restart the process on a regular basis. Maybe you are facing a similar situation.\nI would add a 'taken_time' datetimefield to your table to know when the product is taken and use it in order to know if the product is available or not\nThe list of available products would be filled with the 'available' products and the 'taken' if their 'taken_time' is lower than (current time - accepted delay)\nI think you don't really need to change the status with an external process as long as you manage it correctly in your business logic.\nI hope it helps\nPS: Sorry for my english! Quite late in France now and difficult for me to find simple way to explain my view :-)\n",
"You should not use some in-process/memory timer objects since your django app can run in multiple processes...\nI suggest create this \"timers\" in database (like \"expire\" datetime field for your product model) and use cron job that will mark expired objects to be available \n",
"Fork it. Have your model use three states for a product: available, taken, and purchased. When a user initiates an action that marks the product as taken, spawn a new process that will mark the product to available in 3 minutes if it is not purchased.\n"
] |
[
6,
1,
-2
] |
[] |
[] |
[
"django",
"python",
"timer"
] |
stackoverflow_0001842593_django_python_timer.txt
|
Q:
How to do PyS60 development on OS X
Is it possible to do PyS60 development on Mac OS X? There is an XCode-plugin for Symbian C++ -development, but I don't know whether I can create Python-apps for my Nokia phone with that. I'm talking about a more thorough SDK experience than just editing files with Textmate/Emacs and copying them over to the device.
A:
I'd recommend you add PuTools to your development environment. It allows you to easily sync files between the phone and the computer, and gives you a remote shell with more functions than the default Bluetooth shell.
The "official" PuTools instructions are written for Windows machines, but the tools definitely does work on the Mac as well. These instructions should help.
(As a new user, I can only post one link. If you're looking for the original PuTools website, it's an easy Google search. Good luck! )
EDIT: A warning if you're using PyS60 v2.x on your Symbian device: Unfortunately PuTools hasn't been updated for PyS60 v2. :(
A:
Well, with Python on the phone all you need is a way to upload the scripts; using MWS is the simplest way. MWS supports WebDAV for upload, and one can also use obexftp and Bluetooth to drop the scripts in the right place.
One can also wrap them in SIS files in theory, but I haven't done that myself yet.
A:
I use the Komodo Edit 5 editor on the Mac and point it to the Nokia appfwui classes; the editor will then autocomplete the Nokia PyS60 APIs for you.
I also use the steps given below to copy the script onto the device to test it (as the emulator is not runnable on Mac OS X):
http://discussion.forum.nokia.com/forum/showthread.php?t=116771
A:
The S60 emulator runs only under Windows, so Mac owners run it inside a Windows emulator/virtual machine. Heard that it works great.
|
How to do PyS60 development on OS X
|
Is it possible to do PyS60 development on Mac OS X? There is an XCode-plugin for Symbian C++ -development, but I don't know whether I can create Python-apps for my Nokia phone with that. I'm talking about a more thorough SDK experience than just editing files with Textmate/Emacs and copying them over to the device.
|
[
"I'd recommend you add PuTools to your development environment. It allows you to easily sync files between the phone and the computer, and gives you a remote shell with more functions than the default Bluetooth shell.\nThe \"official\" PuTools instructions are written for Windows machines, but the tools definitely does work on the Mac as well. These instructions should help.\n(As a new user, I can only post one link. If you're looking for the original PuTools website, it's an easy Google search. Good luck! )\nEDIT: A warning if you're using PyS60 v2.x on your Symbian device: Unfortunately PuTools hasn't been updated for PyS60 v2. :(\n",
"Well, with python on phone all you need to do is be able to upload the scripts, and use MWS that's the simplest way. MWS supports webdav for upload, also one can use obexftp and bluetooth to drop the scripts in the right place.\nOne can also wrap them in SIS files in theory, but I haven't done that myself yet.\n",
"I use the komodo edit 5 editor on the mac and point it to the nokia appfwui classes, then the editor will autcomplete the Nokia Pys60 apis for you.\nI also use the steps given below to copy the script onto the device to test it (as the emulator is not runnable on mac os x)\nhttp://discussion.forum.nokia.com/forum/showthread.php?t=116771\n",
"S60 emulator runs only under Windows, so Mac owners run it under emulator. Heard that it works great.\n"
] |
[
3,
1,
1,
0
] |
[] |
[] |
[
"nokia",
"pys60",
"python",
"s60"
] |
stackoverflow_0000790915_nokia_pys60_python_s60.txt
|
Q:
Python's string.maketrans works at home but fails on Google App Engine
I have this code in Google AppEngine (Python SDK):
from string import maketrans
intab = u"ÀÁÂÃÄÅàáâãäåÒÓÔÕÖØòóôõöøÈÉÊËèéêëÇçÌÍÎÏìíîïÙÚÛÜùúûüÿÑñ".encode('latin1')
outtab = u"aaaaaaaaaaaaooooooooooooeeeeeeeecciiiiiiiiuuuuuuuuynn".encode('latin1')
logging.info(len(intab))
logging.info(len(outtab))
trantab = maketrans(intab, outtab)
When I run the code in the interactive console I have no problem, but when I try it in GAE I get the following error:
raise ValueError, "maketrans arguments must have same length"
ValueError: maketrans arguments must have same length
INFO 2009-12-03 20:04:02,904 dev_appserver.py:3038] "POST /backendsavenew HTTP/1.1" 500 -
INFO 2009-12-03 20:08:37,649 admin.py:112] 106
INFO 2009-12-03 20:08:37,651 admin.py:113] 53
ERROR 2009-12-03 20:08:37,653 init.py:388] maketrans arguments must have same length
I can't figure out why intab is doubled in size.
The python file with the code is saved as UTF-8.
Thanks in advance for any help.
A:
string.maketrans and string.translate do not work for Unicode strings. Your call to string.maketrans will implicitly convert the Unicode you gave it to an encoding like utf-8. In utf-8, å takes up more space than ASCII a. string.maketrans sees len(str(argument)), which is different for your two strings.
There is a Unicode translate, but for your use case (convert Unicode to ASCII because some part of your system cannot deal with Unicode) you should use http://pypi.python.org/pypi/Unidecode. Unidecode is very smart about transliterating Unicode characters to sensible ASCII, covering many more characters than in your example.
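For example, a minimal use of Unidecode (assuming the package is installed):
from unidecode import unidecode

print unidecode(u"ÀÁÂÃ òóôõ Ññ")  # -> 'AAAA oooo Nn'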
You should save your Python code as utf-8, but make sure you add the magic so Python doesn't have to assume you used the system's default encoding. This line should be the first or second line of your Python files:
# -*- coding: utf-8 -*-
There are many advantages to processing text as Unicode instead of binary strings. This is the Unicode way to do what you are trying to do:
intab = u"ÀÁÂÃÄÅàáâãäåÒÓÔÕÖØòóôõöøÈÉÊËèéêëÇçÌÍÎÏìíîïÙÚÛÜùúûüÿÑñ"
outtab = u"aaaaaaaaaaaaooooooooooooeeeeeeeecciiiiiiiiuuuuuuuuynn"
trantab = dict((ord(a), b) for a, b in zip(intab, outtab))
translated = intab.translate(trantab)
translated == outtab # True
See also Where is Python's "best ASCII for this Unicode" database?
See also How do I get str.translate to work with Unicode strings?
A:
Maybe you could use iso-8859-1 encoding for your file instead of utf-8
# -*- coding: iso-8859-1 -*-
from string import maketrans
import logging
intab = "ÀÁÂÃÄÅàáâãäåÒÓÔÕÖØòóôõöøÈÉÊËèéêëÇçÌÍÎÏìíîïÙÚÛÜùúûüÿÑñ"
outtab = "aaaaaaaaaaaaooooooooooooeeeeeeeecciiiiiiiiuuuuuuuuynn"
logging.info(len(intab))
logging.info(len(outtab))
trantab = maketrans(intab, outtab)
Remember to select iso-8859-1 in your text editor while saving this python source file.
|
Python's string.maketrans works at home but fails on Google App Engine
|
I have this code in Google AppEngine (Python SDK):
from string import maketrans
intab = u"ÀÁÂÃÄÅàáâãäåÒÓÔÕÖØòóôõöøÈÉÊËèéêëÇçÌÍÎÏìíîïÙÚÛÜùúûüÿÑñ".encode('latin1')
outtab = u"aaaaaaaaaaaaooooooooooooeeeeeeeecciiiiiiiiuuuuuuuuynn".encode('latin1')
logging.info(len(intab))
logging.info(len(outtab))
trantab = maketrans(intab, outtab)
When I run the code in the interactive console I have no problem, but when I try it in GAE I get the following error:
raise ValueError, "maketrans arguments must have same length"
ValueError: maketrans arguments must have same length
INFO 2009-12-03 20:04:02,904 dev_appserver.py:3038] "POST /backendsavenew HTTP/1.1" 500 -
INFO 2009-12-03 20:08:37,649 admin.py:112] 106
INFO 2009-12-03 20:08:37,651 admin.py:113] 53
ERROR 2009-12-03 20:08:37,653 init.py:388] maketrans arguments must have same length
I can't figure out why intab is doubled in size.
The python file with the code is saved as UTF-8.
Thanks in advance for any help.
|
[
"string.maketrans and string.translate do not work for Unicode strings. Your call to string.maketrans will implictly convert the Unicode you gave it to an encoding like utf-8. In utf-8 å takes up more space than ASCII a. string.maketrans sees len(str(argument)) which is different for your two strings.\nThere is a Unicode translate, but for your use case (convert Unicode to ASCII because some part of your system cannot deal with Unicode) you should use http://pypi.python.org/pypi/Unidecode. Unidecode is very smart about transliterating Unicode characters to sensible ASCII, covering many more characters than in your example.\nYou should save your Python code as utf-8, but make sure you add the magic so Python doesn't have to assume you used the system's default encoding. This line should be the first or second line of your Python files:\n# -*- coding: utf-8 -*-\n\nThere are many advantages to processing text as Unicode instead of binary strings. This is the Unicode way to do what you are trying to do:\nintab = u\"ÀÁÂÃÄÅàáâãäåÒÓÔÕÖØòóôõöøÈÉÊËèéêëÇçÌÍÎÏìíîïÙÚÛÜùúûüÿÑñ\"\nouttab = u\"aaaaaaaaaaaaooooooooooooeeeeeeeecciiiiiiiiuuuuuuuuynn\"\ntrantab = dict((ord(a), b) for a, b in zip(intab, outtab))\ntranslated = intab.translate(trantab)\ntranslated == outtab # True\n\nSee also Where is Python's \"best ASCII for this Unicode\" database?\nSee also How do I get str.translate to work with Unicode strings?\n",
"Maybe you could use iso-8859-1 encoding for your file instead of utf-8\n# -*- coding: iso-8859-1 -*-\nfrom string import maketrans \nimport logging\n\nintab = \"ÀÁÂÃÄÅàáâãäåÒÓÔÕÖØòóôõöøÈÉÊËèéêëÇçÌÍÎÏìíîïÙÚÛÜùúûüÿÑñ\"\nouttab = \"aaaaaaaaaaaaooooooooooooeeeeeeeecciiiiiiiiuuuuuuuuynn\"\nlogging.info(len(intab))\nlogging.info(len(outtab))\ntrantab = maketrans(intab, outtab)\n\nRemember to select iso-8859-1 in your text editor while saving this python source file.\n"
] |
[
15,
1
] |
[] |
[] |
[
"google_app_engine",
"internationalization",
"python",
"translation"
] |
stackoverflow_0001842692_google_app_engine_internationalization_python_translation.txt
|
Q:
parsing large compressed xml files, python
file = BZ2File(SOME_FILE_PATH)
p = xml.parsers.expat.ParserCreate()
p.Parse(file)
Here's code that tries to parse an XML file compressed with bz2. Unfortunately it fails with a message:
TypeError: Parse() argument 1 must be string or read-only buffer, not bz2.BZ2File
Is there a way to parse on the fly compressed bz2 xml files?
Note: p.Parse(file.read()) is not an option here. I want to parse a file which is larger than available memory, so I need to have a stream.
A:
Just use p.ParseFile(file) instead of p.Parse(file).
Parse() takes a string, ParseFile() takes a file handle, and reads the data in as required.
Ref: http://docs.python.org/library/pyexpat.html#xml.parsers.expat.xmlparser.ParseFile
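A minimal sketch combining this with BZ2File (the file name and handler are assumptions):
import bz2
import xml.parsers.expat

def start_element(name, attrs):
    print name  # handle each element as it streams in

p = xml.parsers.expat.ParserCreate()
p.StartElementHandler = start_element
p.ParseFile(bz2.BZ2File('huge.xml.bz2'))  # reads incrementally, never the whole file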
A:
Use .read() on the file object to read in the entire file as a string, and then pass that to Parse?
file = BZ2File(SOME_FILE_PATH)
p = xml.parsers.expat.ParserCreate()
p.Parse(file.read())
A:
Can you pass in an mmap()'ed file? That should take care of automatically paging the needed parts of the file in, and avoid memory overflow. Of course, if expat builds a parse tree, it might still run out of memory.
http://docs.python.org/library/mmap.html
Memory-mapped file objects behave like both strings and like file objects. Unlike normal string objects, however, these are mutable. You can use mmap objects in most places where strings are expected; for example, you can use the re module to search through a memory-mapped file.
|
parsing large compressed xml files, python
|
file = BZ2File(SOME_FILE_PATH)
p = xml.parsers.expat.ParserCreate()
p.Parse(file)
Here's code that tries to parse an XML file compressed with bz2. Unfortunately it fails with a message:
TypeError: Parse() argument 1 must be string or read-only buffer, not bz2.BZ2File
Is there a way to parse on the fly compressed bz2 xml files?
Note: p.Parse(file.read()) is not an option here. I want to parse a file which is larger than available memory, so I need to have a stream.
|
[
"Just use p.ParseFile(file) instead of p.Parse(file).\nParse() takes a string, ParseFile() takes a file handle, and reads the data in as required.\nRef: http://docs.python.org/library/pyexpat.html#xml.parsers.expat.xmlparser.ParseFile\n",
"Use .read() on the file object to read in the entire file as a string, and then pass that to Parse?\nfile = BZ2File(SOME_FILE_PATH)\np = xml.parsers.expat.ParserCreate()\np.Parse(file.read())\n\n",
"Can you pass in an mmap()'ed file? That should take care of automatically paging the needed parts of the file in, and avoid memory overflow. Of course if expat builts a parse tree, it might still run out of memory.\nhttp://docs.python.org/library/mmap.html\n\nMemory-mapped file objects behave like both strings and like file objects. Unlike normal string objects, however, these are mutable. You can use mmap objects in most places where strings are expected; for example, you can use the re module to search through a memory-mapped file. \n\n"
] |
[
5,
1,
0
] |
[] |
[] |
[
"bzip",
"compression",
"python"
] |
stackoverflow_0001843009_bzip_compression_python.txt
|
Q:
Hadoop MapReduce job on file containing HTML tags
I have a bunch of large HTML files and I want to run a Hadoop MapReduce job on them to find the most frequently used words. I wrote both my mapper and reducer in Python and used Hadoop streaming to run them.
Here is my mapper:
#!/usr/bin/env python
import sys
import re
import string
def remove_html_tags(in_text):
'''
Remove any HTML tags that are found.
'''
global flag
in_text=in_text.lstrip()
in_text=in_text.rstrip()
in_text=in_text+"\n"
if flag==True:
in_text="<"+in_text
flag=False
if re.search('^<',in_text)!=None and re.search('(>\n+)$', in_text)==None:
in_text=in_text+">"
flag=True
p = re.compile(r'<[^<]*?>')
in_text=p.sub('', in_text)
return in_text
# input comes from STDIN (standard input)
global flag
flag=False
for line in sys.stdin:
# remove leading and trailing whitespace, set to lowercase and remove HTMl tags
line = line.strip().lower()
line = remove_html_tags(line)
# split the line into words
words = line.split()
# increase counters
for word in words:
# write the results to STDOUT (standard output);
# what we output here will be the input for the
# Reduce step, i.e. the input for reducer.py
#
# tab-delimited; the trivial word count is 1
if word =='': continue
for c in string.punctuation:
word= word.replace(c,'')
print '%s\t%s' % (word, 1)
Here is my reducer:
#!/usr/bin/env python
from operator import itemgetter
import sys
# maps words to their counts
word2count = {}
# input comes from STDIN
for line in sys.stdin:
# remove leading and trailing whitespace
line = line.strip()
# parse the input we got from mapper.py
word, count = line.split('\t', 1)
# convert count (currently a string) to int
try:
count = int(count)
word2count[word] = word2count.get(word, 0) + count
except ValueError:
pass
sorted_word2count = sorted(word2count.iteritems(),
key=lambda(k,v):(v,k),reverse=True)
# write the results to STDOUT (standard output)
for word, count in sorted_word2count:
print '%s\t%s'% (word, count)
Whenever I just pipe a small sample string like 'hello world hello hello world ...' I get the proper output of a ranked list. However, when I try to use a small HTML file, and try using cat to pipe the HTML into my mapper, I get the following error (input2 contains some HTML code):
rohanbk@hadoop:~$ cat input2 | /home/rohanbk/mapper.py | sort | /home/rohanbk/reducer.py
Traceback (most recent call last):
File "/home/rohanbk/reducer.py", line 15, in <module>
word, count = line.split('\t', 1)
ValueError: need more than 1 value to unpack
Can anyone explain why I'm getting this? Also, what is a good way to debug a MapReduce job program?
A:
You can reproduce the bug even with just:
echo "hello - world" | ./mapper.py | sort | ./reducer.py
The issue is here:
if word =='': continue
for c in string.punctuation:
word= word.replace(c,'')
If word is a single punctuation mark, as would be the case for the above input (after it is split), then it is converted to an empty string. So, just move the check for an empty string to after the replacement.
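Applied to the mapper's inner loop, the fix looks like this:
for c in string.punctuation:
    word = word.replace(c, '')
if word == '':  # now checked after stripping punctuation
    continue
print '%s\t%s' % (word, 1)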
|
Hadoop MapReduce job on file containing HTML tags
|
I have a bunch of large HTML files and I want to run a Hadoop MapReduce job on them to find the most frequently used words. I wrote both my mapper and reducer in Python and used Hadoop streaming to run them.
Here is my mapper:
#!/usr/bin/env python
import sys
import re
import string
def remove_html_tags(in_text):
'''
Remove any HTML tags that are found.
'''
global flag
in_text=in_text.lstrip()
in_text=in_text.rstrip()
in_text=in_text+"\n"
if flag==True:
in_text="<"+in_text
flag=False
if re.search('^<',in_text)!=None and re.search('(>\n+)$', in_text)==None:
in_text=in_text+">"
flag=True
p = re.compile(r'<[^<]*?>')
in_text=p.sub('', in_text)
return in_text
# input comes from STDIN (standard input)
global flag
flag=False
for line in sys.stdin:
# remove leading and trailing whitespace, set to lowercase and remove HTMl tags
line = line.strip().lower()
line = remove_html_tags(line)
# split the line into words
words = line.split()
# increase counters
for word in words:
# write the results to STDOUT (standard output);
# what we output here will be the input for the
# Reduce step, i.e. the input for reducer.py
#
# tab-delimited; the trivial word count is 1
if word =='': continue
for c in string.punctuation:
word= word.replace(c,'')
print '%s\t%s' % (word, 1)
Here is my reducer:
#!/usr/bin/env python
from operator import itemgetter
import sys
# maps words to their counts
word2count = {}
# input comes from STDIN
for line in sys.stdin:
# remove leading and trailing whitespace
line = line.strip()
# parse the input we got from mapper.py
word, count = line.split('\t', 1)
# convert count (currently a string) to int
try:
count = int(count)
word2count[word] = word2count.get(word, 0) + count
except ValueError:
pass
sorted_word2count = sorted(word2count.iteritems(),
key=lambda(k,v):(v,k),reverse=True)
# write the results to STDOUT (standard output)
for word, count in sorted_word2count:
print '%s\t%s'% (word, count)
Whenever I just pipe a small sample string like 'hello world hello hello world ...' I get the proper output of a ranked list. However, when I try to use a small HTML file, and try using cat to pipe the HTML into my mapper, I get the following error (input2 contains some HTML code):
rohanbk@hadoop:~$ cat input2 | /home/rohanbk/mapper.py | sort | /home/rohanbk/reducer.py
Traceback (most recent call last):
File "/home/rohanbk/reducer.py", line 15, in <module>
word, count = line.split('\t', 1)
ValueError: need more than 1 value to unpack
Can anyone explain why I'm getting this? Also, what is a good way to debug a MapReduce job program?
|
[
"You can reproduce the bug even with just:\necho \"hello - world\" | ./mapper.py | sort | ./reducer.py\n\nThe issue is here:\nif word =='': continue\nfor c in string.punctuation:\n word= word.replace(c,'')\n\nIf word is a single punctuation mark, as would be the case for the above input (after it is split), then it is converted to an empty string. So, just move the check for an empty string to after the replacement.\n"
] |
[
1
] |
[] |
[] |
[
"hadoop",
"mapreduce",
"python"
] |
stackoverflow_0001842747_hadoop_mapreduce_python.txt
|
Q:
Django ManyToMany relation add() error
I've got a model that looks like this,
class PL(models.Model):
locid = models.AutoField(primary_key=True)
mentionedby = models.ManyToManyField(PRT)
class PRT(models.Model):
tid = ..
The resulting many to many table in mysql is formed as,
+------------------+------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+------------------+------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| PL_id | int(11) | NO | MUL | NULL | |
| PRT_id | bigint(64) | NO | MUL | NULL | |
+------------------+------------+------+-----+---------+----------------+
Now, if pl is an object of PL and prt that of PRT, then doing
pl.mentionedby.add(prt)
gives me an error
Incorrect integer value: 'PRT object'
for column 'prt_id' at row 1"
whereas
pl.mentionedby.add(prt.tid)
works fine - with one caveat.
I can see all the elements in pl.mentionedby.all(), but I can't go to a mentioned PRT object and see its prt.mentionedby_set.all().
Does anyone know why this happens? Whats the best way to fix it?
Thanks!
A:
Adding prt directly should work on first try. How are you retrieving pl and prt? Assuming you have some data in your database, try those commands from the Django shell and see if it works. There seems to be some missing information from the question. After running python manage.py shell:
from yourapp.models import PL
pl = PL.objects.get(id=1)
prt = PRT.objects.get(id=1)
pl.mentionedby.add(prt)
A:
Are these the complete models? I can only assume that something's been overriden somewhere, that probably shouldn't have been.
Can you post the full code?
|
Django ManyToMany relation add() error
|
I've got a model that looks like this,
class PL(models.Model):
locid = models.AutoField(primary_key=True)
mentionedby = models.ManyToManyField(PRT)
class PRT(models.Model):
tid = ..
The resulting many to many table in mysql is formed as,
+------------------+------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+------------------+------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| PL_id | int(11) | NO | MUL | NULL | |
| PRT_id | bigint(64) | NO | MUL | NULL | |
+------------------+------------+------+-----+---------+----------------+
Now, if pl is an object of PL and prt that of PRT, then doing
pl.mentionedby.add(prt)
gives me an error
Incorrect integer value: 'PRT object'
for column 'prt_id' at row 1"
whereas
pl.mentionedby.add(prt.tid)
works fine - with one caveat.
I can see all the elements in pl.mentionedby.all(), but I can't go to a mentioned PRT object and see its prt.mentionedby_set.all().
Does anyone know why this happens? Whats the best way to fix it?
Thanks!
|
[
"Adding prt directly should work on first try. How are you retrieving pl and prt? Assuming you have some data in your database, try those commands from the Django shell and see if it works. There seems to be some missing information from the question. After running python manage.py shell:\nfrom yourapp.models import PL\npl = PL.objects.get(id=1)\nprt = PRT.objects.get(id=1)\npl.mentionedby.add(prt)\n\n",
"Are these the complete models? I can only assume that something's been overriden somewhere, that probably shouldn't have been.\nCan you post the full code?\n"
] |
[
6,
0
] |
[] |
[] |
[
"django",
"django_models",
"many_to_many",
"python"
] |
stackoverflow_0001226290_django_django_models_many_to_many_python.txt
|
Q:
Check whether a connection exists to a remote host using paramiko
I'm using a single paramiko.SSHClient() object for executing a command on a remote machine. When I use ssh.exec_command(cmd) and the connection to the remote host is lost, ssh.exec_command hangs.
Is there a way to check for connection existence before ssh.exec_command()?
A:
If you have a long running SSH connection, you may want to use the Keep Alive parameter via Transport.set_keepalive.
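A minimal sketch (host, credentials, and the 30-second interval are assumptions):
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('host.example.com', username='user', password='secret')
ssh.get_transport().set_keepalive(30)  # send a keepalive packet every 30 seconds
stdin, stdout, stderr = ssh.exec_command('uptime')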
A:
As an alternate possibility, maybe execnet would work. It wraps the command line ssh command instead, so it's definitely not the paramiko approach... just a thought.
|
Check whether a connection exists to a remote host using paramiko
|
I'm using a single paramiko.SSHClient() object for executing a command on a remote machine. When I use ssh.exec_command(cmd) and the connection to the remote host is lost, ssh.exec_command hangs.
Is there a way to check for connection existence before ssh.exec_command()?
|
[
"If you have a long running SSH connection, you may want to use the Keep Alive parameter via Transport.set_keepalive.\n",
"As an alternate possibility, maybe execnet would work. It wraps the command line ssh command instead, so it's definitely not the paramiko approach... just a though.\n"
] |
[
2,
0
] |
[] |
[] |
[
"paramiko",
"python",
"ssh"
] |
stackoverflow_0001796106_paramiko_python_ssh.txt
|
Q:
how to fix or make an exception for this error
I'm writing code that gets image URLs from arbitrary web pages; the code is in Python and uses BeautifulSoup and httplib2.
When I run the code, I get the following error:
Look me http://movies.nytimes.com (this line is printed by the code)
Traceback (most recent call last):
File "main.py", line 103, in <module>
visit(initialList,profundidad)
File "main.py", line 98, in visit
visit(dodo[indice], bottom -1)
File "main.py", line 94, in visit
getImages(w)
File "main.py", line 34, in getImages
iSoupList = BeautifulSoup(response, parseOnlyThese=SoupStrainer('img'))
File "/usr/local/lib/python2.6/dist-packages/BeautifulSoup.py", line 1499, in __init__
BeautifulStoneSoup.__init__(self, *args, **kwargs)
File "/usr/local/lib/python2.6/dist-packages/BeautifulSoup.py", line 1230, in __init__
self._feed(isHTML=isHTML)
File "/usr/local/lib/python2.6/dist-packages/BeautifulSoup.py", line 1263, in _feed
self.builder.feed(markup)
File "/usr/lib/python2.6/HTMLParser.py", line 108, in feed
self.goahead(0)
File "/usr/lib/python2.6/HTMLParser.py", line 148, in goahead
k = self.parse_starttag(i)
File "/usr/lib/python2.6/HTMLParser.py", line 226, in parse_starttag
endpos = self.check_for_whole_start_tag(i)
File "/usr/lib/python2.6/HTMLParser.py", line 301, in check_for_whole_start_tag
self.error("malformed start tag")
File "/usr/lib/python2.6/HTMLParser.py", line 115, in error
raise HTMLParseError(message, self.getpos())
HTMLParser.HTMLParseError: malformed start tag, at line 942, column 118
Can someone explain how to fix this, or how to make an exception handler for the error?
A:
Are you using latest version of BeautifulSoup?
This seems to be a known issue of version 3.1.x, because it started using a new parser (HTMLParser, instead of SGMLParser) that is much worse at processing malformed HTML. You can find more information about this on the BeautifulSoup website.
As a quick solution, you can simply use an older version (3.0.7a).
A:
To catch that error specifically, change your code to look like this:
from HTMLParser import HTMLParseError

try:
    iSoupList = BeautifulSoup(response, parseOnlyThese=SoupStrainer('img'))
except HTMLParseError:
    # Do something intelligent here
    pass
Here's some more reading on Python's try except blocks:
http://docs.python.org/tutorial/errors.html
A:
I got that error when I had the string =& in my HTML document. When I replaced that string (in my case with =and) then I no longer received that parsing error.
|
how to fix or make an exception for this error
|
I'm writing code that gets image URLs from arbitrary web pages; the code is in Python and uses BeautifulSoup and httplib2.
When I run the code, I get the following error:
Look me http://movies.nytimes.com (this line is printed by the code)
Traceback (most recent call last):
File "main.py", line 103, in <module>
visit(initialList,profundidad)
File "main.py", line 98, in visit
visit(dodo[indice], bottom -1)
File "main.py", line 94, in visit
getImages(w)
File "main.py", line 34, in getImages
iSoupList = BeautifulSoup(response, parseOnlyThese=SoupStrainer('img'))
File "/usr/local/lib/python2.6/dist-packages/BeautifulSoup.py", line 1499, in __init__
BeautifulStoneSoup.__init__(self, *args, **kwargs)
File "/usr/local/lib/python2.6/dist-packages/BeautifulSoup.py", line 1230, in __init__
self._feed(isHTML=isHTML)
File "/usr/local/lib/python2.6/dist-packages/BeautifulSoup.py", line 1263, in _feed
self.builder.feed(markup)
File "/usr/lib/python2.6/HTMLParser.py", line 108, in feed
self.goahead(0)
File "/usr/lib/python2.6/HTMLParser.py", line 148, in goahead
k = self.parse_starttag(i)
File "/usr/lib/python2.6/HTMLParser.py", line 226, in parse_starttag
endpos = self.check_for_whole_start_tag(i)
File "/usr/lib/python2.6/HTMLParser.py", line 301, in check_for_whole_start_tag
self.error("malformed start tag")
File "/usr/lib/python2.6/HTMLParser.py", line 115, in error
raise HTMLParseError(message, self.getpos())
HTMLParser.HTMLParseError: malformed start tag, at line 942, column 118
Can someone explain how to fix this, or how to make an exception handler for the error?
|
[
"Are you using latest version of BeautifulSoup?\nThis seems a known issue of version 3.1.x, because it started using a new parser (HTMLParser, instead of SGMLParser) that is much worse at processing malformed HTML. You can find more information about this on BeautifulSoup website.\nAs a quick solution, you can simply use an older version (3.0.7a).\n",
"To catch that error specifically, change your code to look like this:\ntry:\n iSoupList = BeautifulSoup(response, parseOnlyThese=SoupStrainer('img'))\n\nexcept HTMLParseError:\n #Do something intelligent here\n\nHere's some more reading on Python's try except blocks:\nhttp://docs.python.org/tutorial/errors.html\n",
"I got that error when I had the string =& in my HTML document. When I replaced that string (in my case with =and) then I no longer received that parsing error.\n"
] |
[
4,
2,
0
] |
[] |
[] |
[
"beautifulsoup",
"httplib2",
"python"
] |
stackoverflow_0001100029_beautifulsoup_httplib2_python.txt
|
Q:
starting my own threads within python paste
I'm writing a web application using pylons and paste. I have some work I want to do after an HTTP request is finished (send some emails, write some stuff to the db, etc) that I don't want to block the HTTP request on.
If I start a thread to do this work, is that OK? I always see this stuff about paste killing off hung threads, etc. Will it kill my threads which are doing work?
What else can I do here? Is there a way I can make the request return but have some code run after it's done?
Thanks.
A:
You could use a thread approach (maybe setting the Thead.daemon property would help--but I'm not sure).
However, I would suggest looking into a task queuing system. You can place a task on a queue (which is very fast), then a listener can handle the tasks asynchronously, allowing the HTTP request to return quickly. There are two task queues that I know of for Django:
Django Queue Service
Celery
You could also consider using an more "enterprise" messaging solution, such as RabbitMQ or ActiveMQ.
Edit: previous answer with some good pointers.
A:
I think the best solution is a messaging system because it can be configured to not lose the task if the Pylons process goes down. I would always use processes over threads, especially in this case. If you are using Python 2.6+, use the built-in multiprocessing, or you can always install the processing module, which you can find on PyPI (I can't post a link because I am a new user).
A:
Take a look at gearman, it was specifically made for farming out tasks to 'workers' to handle. They can even handle it in a different language entirely. You can come back and ask if the task was completed, or just let it complete. That should work well for many tasks.
If you absolutely need to ensure it was completed, I'd suggest queuing tasks in a database or somewhere persistent, then have a separate process that runs through it ensuring each one gets handled appropriately.
A:
To answer your basic question directly, you should be able to use threads just as you'd like. The "killing hung threads" part is paste cleaning up its own threads, not yours.
There are other packages that might help, etc, but I'd suggest you start with simple threads and see how far you get. Only then will you know what you need next.
(Note, "Thread.daemon" should be mostly irrelevant to you here. Setting that true will ensure a thread you start will not prevent the entire process from exiting. Doing so would mean, however, that if the process exited "cleanly" (as opposed to being forced to exit) your thread would be terminated even if it wasn't done its work. Whether that's a problem, and how you handle things like that, depend entirely on your own requirements and design.
|
starting my own threads within python paste
|
I'm writing a web application using pylons and paste. I have some work I want to do after an HTTP request is finished (send some emails, write some stuff to the db, etc) that I don't want to block the HTTP request on.
If I start a thread to do this work, is that OK? I always see this stuff about paste killing off hung threads, etc. Will it kill my threads which are doing work?
What else can I do here? Is there a way I can make the request return but have some code run after it's done?
Thanks.
|
[
"You could use a thread approach (maybe setting the Thead.daemon property would help--but I'm not sure). \nHowever, I would suggest looking into a task queuing system. You can place a task on a queue (which is very fast), then a listener can handle the tasks asynchronously, allowing the HTTP request to return quickly. There are two task queues that I know of for Django:\n\nDjango Queue Service \nCelery\n\nYou could also consider using an more \"enterprise\" messaging solution, such as RabbitMQ or ActiveMQ.\nEdit: previous answer with some good pointers.\n",
"I think the best solution is messaging system because it can be configured to not loose the task if the pylons process goes down. I would always use processes over threads especially in this case. If you are using python 2.6+ use the built in multiprocessing or you can always install the processing module which you can find on pypi (I can't post link because of I am a new user). \n",
"Take a look at gearman, it was specifically made for farming out tasks to 'workers' to handle. They can even handle it in a different language entirely. You can come back and ask if the task was completed, or just let it complete. That should work well for many tasks.\nIf you absolutely need to ensure it was completed, I'd suggest queuing tasks in a database or somewhere persistent, then have a separate process that runs through it ensuring each one gets handled appropriately.\n",
"To answer your basic question directly, you should be able to use threads just as you'd like. The \"killing hung threads\" part is paste cleaning up its own threads, not yours.\nThere are other packages that might help, etc, but I'd suggest you start with simple threads and see how far you get. Only then will you know what you need next.\n(Note, \"Thread.daemon\" should be mostly irrelevant to you here. Setting that true will ensure a thread you start will not prevent the entire process from exiting. Doing so would mean, however, that if the process exited \"cleanly\" (as opposed to being forced to exit) your thread would be terminated even if it wasn't done its work. Whether that's a problem, and how you handle things like that, depend entirely on your own requirements and design.\n"
] |
[
0,
0,
0,
0
] |
[] |
[] |
[
"paste",
"pylons",
"python"
] |
stackoverflow_0001604079_paste_pylons_python.txt
|
Q:
Any python "compiler" that can statically link the python2x.dll dependency?
It's my understanding that py2exe can only dynamically link a python2x.dll file. Are there any Python "compilers" out there that can package it all into one standalone .exe file for easier portability?
If so or if not, which is the best compiler z0mg!
A:
If you check the bottom of the py2exe SingleFileExecutable wiki page you'll see that it can create one-file executables. They do include the DLL inside, but you shouldn't notice that. I believe it works with a freakish hack that intercepts the LoadLibrary calls to allow them to read from elsewhere in the .exe file, but again you shouldn't notice that. We've used it before... it works.
A:
PyInstaller claims to be able to create a single-executable that's user-friendly. Perhaps that would meet your needs. I've never used it.
A:
py2exe can package it all into a single executable, without needing any Python installation on the target system. It may include python2x.dll inside, but to the end user that hardly matters.
A:
From what I understand, it is possible to statically link python into an executable, but then you lose your ability to load other dynamic modules (.pyd files) like os and zlib and math. Unless you are able to statically compile those as well into your main program.
And as far as I know, the only compiler that can do this is the C compiler that is compiling python from source. :)
I'm not sure its worth the effort at all.
Better to just use py2exe and create a directory of files that can be zipped and shipped.
|
Any python "compiler" that can statically link the python2x.dll dependency?
|
It's my understanding that py2exe can only dynamically link a python2x.dll file. Are there any Python "compilers" out there that can package it all into one standalone .exe file for easier portability?
If so or if not, which is the best compiler z0mg!
|
[
"If you check the bottom of the py2exe SingleFileExecutable wiki page you'll see that it can create one-file executables. They do include the DLL inside, but you shouldn't notice that. I believe it works with a freakish hack that intercepts the LoadLibrary calls to allow them to read from elsewhere in the .exe file, but again you shouldn't notice that. We've used it before... it works.\n",
"PyInstaller claims to be able to create a single-executable that's user-friendly. Perhaps that would meet your needs. I've never used it.\n",
"py2exe can package it all in single executable, without needing any python installation on target system, it may include python2x.dll with it, but for the end user how does it matter\n",
"From what I understand, it is possible to statically link python into an executable, but then you lose your ability to load other dynamic modules (.pyd files) like os and zlib and math. Unless you are able to statically compile those as well into your main program.\nAnd as far as I know, the only compiler that can do this is the C compiler that is compiling python from source. :)\nI'm not sure its worth the effort at all.\nBetter just use p2exe and create a directory of files that can be zipped and shipped.\n"
] |
[
2,
1,
0,
0
] |
[] |
[] |
[
"compilation",
"dll",
"py2exe",
"python"
] |
stackoverflow_0001720893_compilation_dll_py2exe_python.txt
|
Q:
several lexers for one parser with PLY?
I'm trying to implement a python parser using PLY for the Kconfig language used to generate the configuration options for the linux kernel.
There's a keyword called source which performs an inclusion, so what I do is that when the lexer encounters this keyword, I change the lexer state to create a new lexer which is going to lex the sourced file:
def t_begin_source(t):
r'source '
t.lexer.begin('source')
def t_source_path(t):
r'[^\n]+\n+'
t.lexer.begin('INITIAL')
global path
source_lexer = lex.lex(errorlog=lex.NullLogger())
source_file_name = (path + t.value.strip(' \"\n'))
sourced_file = file(path + t.value.strip(' \"\n')).read()
source_lexer.input(sourced_file)
while True:
tok = source_lexer.token()
if not tok:
break
Somewhere else I have this line
lexer = lex.lex(errorlog=lex.NullLogger())
This is the "main" or "root" lexer which is going to be called by the parser.
My problem is that I don't know how to tell the parser to use a different lexer or to tell the "source_lexer" to return something...
Maybe the clone function should be used...
Thanks
A:
I don't know about the details of PLY, but in other systems like this that I've built, it made the most sense to have a single lexer which managed the stack of include files. So the lexer would return a unified stream of tokens, opening and closing include files as they were encountered.
A:
By an interesting coincidence a link from the same Google search that led me to this question explains how to write your own lexer for a PLY parser. The post explains it simply and well, but it's a matter of four instance variables and single token method.
A:
OK,
so what I've done is build a list of all the tokens before the actual parsing.
The parser no longer calls the lexer because you can override the getToken function used by the parser using the tokenfunc parameter when calling the parse function.
result = yacc.parse(kconfig,debug=1,tokenfunc=my_function)
and my function which is now the function called to get the next token iterates over the list of tokens previously built.
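A minimal sketch of such a token function (the names are assumptions):
token_list = []
token_index = 0

def my_function():
    # Hand the parser the next pre-collected token, or None when exhausted.
    global token_index
    if token_index >= len(token_list):
        return None
    tok = token_list[token_index]
    token_index += 1
    return tok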
As for the lexing, when I encounter a source keyword, I clone my lexer and change the input to include the sourced file.
def sourcing_file(source_file_name):
print "SOURCE FILE NAME " , source_file_name
sourced_file = file(source_file_name).read()
source_lexer = lexer.clone()
source_lexer.input(sourced_file)
print 'END OF SOURCING FILE'
while True:
tok = source_lexer.token()
if not tok:
break
token_list.append(tok)
|
several lexers for one parser with PLY?
|
I'm trying to implement a python parser using PLY for the Kconfig language used to generate the configuration options for the linux kernel.
There's a keyword called source which performs an inclusion, so what I do is that when the lexer encounters this keyword, I change the lexer state to create a new lexer which is going to lex the sourced file:
def t_begin_source(t):
r'source '
t.lexer.begin('source')
def t_source_path(t):
r'[^\n]+\n+'
t.lexer.begin('INITIAL')
global path
source_lexer = lex.lex(errorlog=lex.NullLogger())
source_file_name = (path + t.value.strip(' \"\n'))
sourced_file = file(path + t.value.strip(' \"\n')).read()
source_lexer.input(sourced_file)
while True:
tok = source_lexer.token()
if not tok:
break
Somewhere else I have this line
lexer = lex.lex(errorlog=lex.NullLogger())
This is the "main" or "root" lexer which is going to be called by the parser.
My problem is that I don't know how to tell the parser to use a different lexer or to tell the "source_lexer" to return something...
Maybe the clone function should be used...
Thanks
|
[
"I don't know about the details of PLY, but in other systems like this that I've built, it made the most sense to have a single lexer which managed the stack of include files. So the lexer would return a unified stream of tokens, opening and closing include files as they were encountered.\n",
"By an interesting coincidence a link from the same Google search that led me to this question explains how to write your own lexer for a PLY parser. The post explains it simply and well, but it's a matter of four instance variables and single token method.\n",
"Ok,\nso what i've done is building a list of all the tokens, which is built before the actual parsing.\nThe parser no longer calls the lexer because you can override the getToken function used by the parser using the tokenfunc parameter when calling the parse function.\nresult = yacc.parse(kconfig,debug=1,tokenfunc=my_function)\n\nand my function which is now the function called to get the next token iterates over the list of tokens previously built.\nConsidering the lexing, when I encounter a source keyword, I clone my lexer and change the input to include the file.\ndef sourcing_file(source_file_name):\n print \"SOURCE FILE NAME \" , source_file_name\n sourced_file = file(source_file_name).read()\n source_lexer = lexer.clone()\n source_lexer.input(sourced_file)\n print 'END OF SOURCING FILE'\n\n while True:\n tok = source_lexer.token()\n if not tok:\n break\n token_list.append(tok)\n\n"
] |
[
2,
2,
0
] |
[] |
[] |
[
"lexer",
"ply",
"python"
] |
stackoverflow_0001718067_lexer_ply_python.txt
|
Q:
Sending movie stream from client to server. The python server should play the stream on the fly
I would like to simulate some kind of camera on a UAV. The camera should provide a live stream, and send the stream over a network connection to a server. The server should be able to play the stream on the fly for me to see.
I was thinking the client (UAV) could just read a movie file and send it to the server. But how can the server show the file on the fly? I suppose the simplest way would be to use xine or MPlayer to show the movie? But how?
This is to be done in python and GNU/Linux. The client and server is both located on the same machine.
The main issue is to get the server to play the file on the fly, before it has the whole file available. Any ideas?
EDIT: The server and client are connected with a standard TCP/IP connection. The video feed is not alone on the connection.
A:
try 'webcam'
sudo apt-get install webcam
on debian.
It will grab images from a USB camera and put them in a jpg file in /var/www/; then you make an HTML page that auto-refreshes as fast as possible and points to the image file.
I know it's not a very elegant solution, but it's the only one I know of.
If you really are bent on writing it yourself, You will need to read data from /dev/video0 (probably) and maybe encode it according to a standard video format, open a socket with the client process, and write the video data to the socket. There are some rules for the proper way to stream data over a socket though.
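A rough, untested sketch of that socket idea, skipping the encoding step and just relaying a file's bytes (the file name, port, and the choice of mplayer reading from stdin are all illustrative assumptions):
import socket
import subprocess

# Client side: push the movie file over TCP in chunks.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(('localhost', 9999))
movie = open('movie.avi', 'rb')
while True:
    chunk = movie.read(4096)
    if not chunk:
        break
    client.sendall(chunk)
client.close()

# Server side: accept the connection and pipe the bytes into a
# player that can read from stdin ("mplayer -" is one option).
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('', 9999))
server.listen(1)
conn, addr = server.accept()
player = subprocess.Popen(['mplayer', '-'], stdin=subprocess.PIPE)
while True:
    data = conn.recv(4096)
    if not data:
        break
    player.stdin.write(data)
player.stdin.close()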
|
Sending movie stream from client to server. The python server should play the stream on the fly
|
I would like to simulate some kind of camera on a UAV. The camera should provide a live stream, and send the stream over a network connection to a server. The server should be able to play the stream on the fly for me to see.
I was thinking the client (UAV) could just read a movie file and send it to the server. But how can the server show the file on the fly? I suppose the simplest way would be to use xine or MPlayer to show the movie? But how?
This is to be done in python and GNU/Linux. The client and server is both located on the same machine.
The main issue is to get the server to play the file on the fly, before it has the whole file available. Any ideas?
EDIT: The server and client are connected with a standard TCP/IP connection. The video feed is not alone on the connection.
|
[
"try 'webcam'\nsudo apt-get install webcam\n\non debian.\nIt will grab images from a USB camera and put them in a jpg file in /var/www/ then you make an html page that auto-refreshes as fast as possible that points to the image file.\nI know its not a very elegant solution but its the only one I know of.\nIf you really are bent on writing it yourself, You will need to read data from /dev/video0 (probably) and maybe encode it according to a standard video format, open a socket with the client process, and write the video data to the socket. There are some rules for the proper way to stream data over a socket though.\n"
] |
[
0
] |
[] |
[] |
[
"movie",
"network_programming",
"python",
"streaming"
] |
stackoverflow_0001842583_movie_network_programming_python_streaming.txt
|
Q:
Strings are wrapped in b'...'
import urllib.request
#name = input("What is your screenname? ");
name = "zezima"
page = urllib.request.urlopen('http://hiscore.runescape.com/index_lite.ws?player=' + name)
page = page.readlines()
skills = []
for line in page:
skills += [line]
print(skills)
Outputs:
[[b'478,2372,1224928266\n'], [b'458,99,59502162\n'], [b'262,99,56673986\n'], [b'1355,99,39565273\n'], [b'227,99,61315106\n'], [b'260,99,37119213\n'], [b'502,99,14155051\n'], [b'27,99,63829007\n'], [b'5,99,200000000\n'], [b'2084,99,22203776\n'], [b'60,99,113793712\n'], [b'567,99,27240251\n'], [b'26,99,117368919\n'], [b'259,99,18094553\n'], [b'172,99,17565654\n'], [b'727,99,15789879\n'], [b'8,99,57473883\n'], [b'1420,99,13592712\n'], [b'5,99,200000000\n'], [b'640,99,18641664\n'], [b'413,99,19465433\n'], [b'745,99,16113397\n'], [b'7540,99,13187561\n'], [b'708,99,13229510\n'], [b'6230,95,9007564\n'], [b'2724,1944\n'], [b'-1,-1\n'], [b'-1,-1\n'], [b'-1,-1\n'], [b'-1,-1\n']]
I was wondering why each item in the list above has a b in front of it.
Shouldn't the list look like this?:
[['478,2372,1224928266\n'], ['458,99,59502162\n'], ['262,99,56673986\n']...]
A:
"b" means byte array.
You can refer to this question for a solution.
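For example, to get plain strings instead, decode each line (assuming the page is ASCII or UTF-8 encoded):
for line in page:
    skills.append(line.decode('utf-8').strip())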
|
Strings are wrapped in b'...'
|
import urllib.request
#name = input("What is your screenname? ");
name = "zezima"
page = urllib.request.urlopen('http://hiscore.runescape.com/index_lite.ws?player=' + name)
page = page.readlines()
skills = []
for line in page:
skills += [line]
print(skills)
Outputs:
[[b'478,2372,1224928266\n'], [b'458,99,59502162\n'], [b'262,99,56673986\n'], [b'1355,99,39565273\n'], [b'227,99,61315106\n'], [b'260,99,37119213\n'], [b'502,99,14155051\n'], [b'27,99,63829007\n'], [b'5,99,200000000\n'], [b'2084,99,22203776\n'], [b'60,99,113793712\n'], [b'567,99,27240251\n'], [b'26,99,117368919\n'], [b'259,99,18094553\n'], [b'172,99,17565654\n'], [b'727,99,15789879\n'], [b'8,99,57473883\n'], [b'1420,99,13592712\n'], [b'5,99,200000000\n'], [b'640,99,18641664\n'], [b'413,99,19465433\n'], [b'745,99,16113397\n'], [b'7540,99,13187561\n'], [b'708,99,13229510\n'], [b'6230,95,9007564\n'], [b'2724,1944\n'], [b'-1,-1\n'], [b'-1,-1\n'], [b'-1,-1\n'], [b'-1,-1\n']]
I was wondering why each item in the list above has a b in front of it.
Shouldn't the list look like this?:
[['478,2372,1224928266\n'], ['458,99,59502162\n'], ['262,99,56673986\n']...]
|
[
"\"b\" means byte array.\nYou can refer to this question for a solution.\n"
] |
[
6
] |
[] |
[] |
[
"python",
"python_3.x"
] |
stackoverflow_0001844034_python_python_3.x.txt
|
Q:
User authentication in Django
I learned how to authenticate users in Django months ago, but I've since upgraded and am having some problems, so it occurred to me this morning that I may not have been doing it correctly from the start and decided to ask.
In my project's urls.py file I've got ^accounts/login/$ and ^accounts/logout/$ both wired up to the built-in login() and logout() views (at django.contrib.auth.views) and ^accounts/profile/$ is connected to a view I've written, called "start_here" whose contents are basically this:
def start_here(request):
if request.user:
user_obj = request.user
else:
user_obj = None
is_auth = False
if request.user.is_authenticated():
is_auth = True
return render_to_response("profile.html", {'auth': is_auth,'user': user_obj,})
Now, "profile.html" extends a master template, called master.html, inside which is a "navbar" block whose contents are supposed to change if 'auth' == True (snippet below)
{% block navbar %}
{% if auth %}
<a href="">Link A</a>
<a href="">Link B</a>
<a href="">Link C</a>
<a href="">Link D</a>
<a href="">Link E</a>
<a href="">Link F</a>
<a href="/accounts/logout/">Logout</a>
{% else %}
<a href="/accounts/login/">Login</a>
{% endif %}
{% endblock %}
My problem is that when I log in, and it redirects to /accounts/profile, the navbar doesn't display Links A-F + Logout, it displays only "login". It doesn't work the way I expect it to unless I manually copy-paste the above block into profile.html. When calling render_to_response(), does the context I provide get passed to the parent template as well as the child?
Full source to master and profile.html: http://dpaste.com/hold/128784/
I don't see anything suspect in the code.
A:
This answer is tangential, but Jim's suggestion to use RequestContext is so good I want to explicitly explain how to do it.
You can reduce your start_here function to
from django.template import RequestContext
def start_here(request):
return render_to_response("profile.html", {},
context_instance=RequestContext(request))
By using RequestContext, user is automatically added to the context. Instead of using
{% if auth %}
use
{% if user.is_authenticated %}
A:
Yes the context you pass in render_to_response() is passed to the named templates and ALL the templates it includes or inherits from.
You should look into Using RequestContext
Another thing to check...
Just making sure:
your profile template begins with
{% extends 'master.html' %}
A:
In order to make sure Django correctly identifies users, you need to make sure it is properly enabled in your settings module. Specifically, you need to make sure that the SessionMiddleware and AuthenticationMiddleware modules are enabled in your settings.MIDDLEWARE_CLASSES. Also be sure that auth is in your installed apps and you have run syncdb since enabling it.
If you have not taken the above steps, then Django will not be able to detect when users have logged in and perform request setup properly.
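For reference, a sketch of the relevant settings.py entries (names as they appear in Django 1.x; your project will likely list more middleware and apps):
MIDDLEWARE_CLASSES = (
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    # ... your other middleware ...
)

INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    # ... your apps ...
)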
|
User authentication in Django
|
I learned how to authenticate users in Django months ago, but I've since upgraded and am having some problems, so it occurred to me this morning that I may not have been doing it correctly from the start and decided to ask.
In my project's urls.py file I've got ^accounts/login/$ and ^accounts/logout/$ both wired up to the built-in login() and logout() views (at django.contrib.auth.views) and ^accounts/profile/$ is connected to a view I've written, called "start_here" whose contents are basically this:
def start_here(request):
if request.user:
user_obj = request.user
else:
user_obj = None
is_auth = False
if request.user.is_authenticated():
is_auth = True
return render_to_response("profile.html", {'auth': is_auth,'user': user_obj,})
Now, "profile.html" extends a master template, called master.html, inside which is a "navbar" block whose contents are supposed to change if 'auth' == True (snippet below)
{% block navbar %}
{% if auth %}
<a href="">Link A</a>
<a href="">Link B</a>
<a href="">Link C</a>
<a href="">Link D</a>
<a href="">Link E</a>
<a href="">Link F</a>
<a href="/accounts/logout/">Logout</a>
{% else %}
<a href="/accounts/login/">Login</a>
{% endif %}
{% endblock %}
My problem is that when I log in, and it redirects to /accounts/profile, the navbar doesn't display Links A-F + Logout, it displays only "login". It doesn't work the way I expect it to unless I manually copy-paste the above block into profile.html. When calling render_to_response(), does the context I provide get passed to the parent template as well as the child?
Full source to master and profile.html: http://dpaste.com/hold/128784/
I don't see anything suspect in the code.
|
[
"This answer is tangential, but Jim's suggestion to use RequestContext is so good I want to explicitly explain how to do it.\nYou can reduce your start_here function to \nfrom django.template import RequestContext\n\ndef start_here(request):\n return render_to_response(\"profile.html\", {},\n context_instance=RequestContext(request))\n\nBy using RequestContext, user is automatically added to the context. Instead of using \n{% if auth %}\n\nuse\n{% if user.is_authenticated %}\n\n",
"Yes the context you pass in render_to_response() is passed to the named templates and ALL the templates it includes or inherits from.\nYou should look into Using RequestContext\nAnother thing to check...\nJust making sure:\nyour profile template begins with\n{% extends 'master.html' %}\n\n",
"In order to make sure django correctly identifies users, you need to make sure it is properly enabled in your settings module. specifically, you need to make sure that the SessionMiddleware and AuthenticationMiddleware modules are enabled in your settings.MIDDLEWARE_CLASSES. also be sure that auth is in your installed apps and you have run syncdb since enabling it.\nIf you have not taken the above steps, then django will not be able to detect when users have logged in and perform request setup properly. \n"
] |
[
6,
1,
1
] |
[] |
[] |
[
"authentication",
"django",
"django_templates",
"python"
] |
stackoverflow_0001843535_authentication_django_django_templates_python.txt
|
Q:
How to get back to the for loop after exception handling
I am ready to run this code but before I want to fix the exception handling:
for l in bios:
OpenThisLink = url + l
try:
response = urllib2.urlopen(OpenThisLink)
except urllib2.HTTPError:
pass
bio = response.read()
item = re.search('(JD)(.*?)(\d+)', bio)
....
As suggested here, I added the try...except but now if a page doesn't open I get this error:
bio = response.read()
NameError: name 'response' is not defined
So the program continues to execute. Instead I want it to go back to the for loop and try the next url. I tried break instead of pass but that ends the program. Any suggestions?
A:
Use continue instead of break.
The statement pass is a no-op (meaning that it doesn't do anything). The program just continues to the next statement, which is why you get an error.
break exits the loop and continues running from the next statement immediately after the loop. In this case, there are no more statements, which is why your program terminates.
continue restarts the loop but with the next item. This is exactly what you want.
A:
Try is actually way more powerful than that. You can use the else block here too:
try:
stuff
except Exception:
print "oh no a exception"
else:
print "oh yay no exception"
finally:
print "leaving the try block"
A:
You are getting that error because when the exception is thrown, the response variable doesn't exist. If you want to leave the code the way you have it, bind response to None at the top of each iteration so the name always exists, then check it before calling read:
response = None   # reset at the top of each loop iteration
...
if response:
    bio = response.read()
    ...
having said that I agree with Mark that continue is a better suggestion than pass
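Putting the suggestions together, the loop from the question would read:
for l in bios:
    OpenThisLink = url + l
    try:
        response = urllib2.urlopen(OpenThisLink)
    except urllib2.HTTPError:
        continue  # skip this url, move on to the next one
    bio = response.read()
    item = re.search('(JD)(.*?)(\d+)', bio)
    ...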
|
How to get back to the for loop after exception handling
|
I am ready to run this code but before I want to fix the exception handling:
for l in bios:
OpenThisLink = url + l
try:
response = urllib2.urlopen(OpenThisLink)
except urllib2.HTTPError:
pass
bio = response.read()
item = re.search('(JD)(.*?)(\d+)', bio)
....
As suggested here, I added the try...except but now if a page doesn't open I get this error:
bio = response.read()
NameError: name 'response' is not defined
So the program continues to execute. Instead I want it to go back to the for loop and try the next url. I tried break instead of pass but that ends the program. Any suggestions?
|
[
"Use continue instead of break.\nThe statement pass is a no-op (meaning that it doesn't do anything). The program just continues to the next statement, which is why you get an error.\nbreak exits the loops and continues running from the next statement immediately after the loop. In this case, there are no more statements, which is why your program terminates.\ncontinue restarts the loop but with the next item. This is exactly what you want.\n",
"Try is actually way more powerful than that. You can use the else block here too:\ntry:\n stuff\nexcept Exception:\n print \"oh no a exception\"\nelse:\n print \"oh yay no exception\"\nfinally:\n print \"leaving the try block\"\n\n",
"you are getting that error because when the exception is thrown the response variable doesn't exist. If you want to leave the code how you have it you will need to check that response exists before calling read\nif response:\n bio = response.read()\n ...\n\nhaving said that I agree with Mark that continue is a better suggestion than pass\n"
] |
[
53,
19,
1
] |
[] |
[] |
[
"exception_handling",
"python"
] |
stackoverflow_0001843659_exception_handling_python.txt
|
Q:
Facebook login without opening another window/popup
Is it possible for web application that is created by the same owner as facebook application to have access to facebook application without going through a explicit session opening exercise?
Most of the work is done on server side and I need to access facebook application directly from backend server. Each time the website loads I do not want user to go through the facebook connect experience as data to be displayed does not require his facebook profile/data access.
Let me know if it's possible.
Although it's not related to the language, I would be grateful if help is provided keeping Python in mind. Thx
A:
The opening of a popup window for Facebook auth is the way Facebook set up their authentication for Facebook Connect.
I don't think they offer another way of authenticating users, and I doubt you'd be able to work around/circumvent this method without breaking their terms of use.
Sorry I don't have better news for you :/
|
Facebook login without opening another window/popup
|
Is it possible for web application that is created by the same owner as facebook application to have access to facebook application without going through a explicit session opening exercise?
Most of the work is done on server side and I need to access facebook application directly from backend server. Each time the website loads I do not want user to go through the facebook connect experience as data to be displayed does not require his facebook profile/data access.
Let me know if it's possible.
Although it's not related to the language, I would be grateful if help is provided keeping Python in mind. Thx
|
[
"The opening of a window for facebook auth is the way facebook set up their authentication for facebook connect.\nI don't think they offer another way of authenticating users, and I doubt you'd be able to work-around/circumvent this method without breaking their terms of use\nSorry I don't have better news for you :/\n"
] |
[
0
] |
[] |
[] |
[
"facebook",
"python"
] |
stackoverflow_0001842360_facebook_python.txt
|
Q:
How do I get urllib2 to log ALL transferred bytes
I'm writing a web-app that uses several 3rd party web APIs, and I want to keep track of the low-level requests and responses for ad hoc analysis. So I'm looking for a recipe that will get Python's urllib2 to log all bytes transferred via HTTP. Maybe a sub-classed Handler?
A:
Well, I've found how to set up the built-in debugging mechanism of the library:
import logging, urllib2, sys
hh = urllib2.HTTPHandler()
hsh = urllib2.HTTPSHandler()
hh.set_http_debuglevel(1)
hsh.set_http_debuglevel(1)
opener = urllib2.build_opener(hh, hsh)
logger = logging.getLogger()
logger.addHandler(logging.StreamHandler(sys.stdout))
logger.setLevel(logging.NOTSET)
But I'm still looking for a way to dump all the information transferred.
A:
This looks pretty tricky to do. There are no hooks in urllib2, urllib, or httplib (which this builds on) for intercepting either input or output data.
The only thing that occurs to me, other than switching tactics to use an external tool (of which there are many, and most people use such things), would be to write a subclass of socket.socket in your own new module (say, "capture_socket") and then insert that into httplib using "import capture_socket; import httplib; httplib.socket = capture_socket". You'd have to copy all the necessary references (anything of the form "socket.foo" that is used in httplib) into your own module, but then you could override things like recv() and sendall() in your subclass to do what you like with the data.
Complications would likely arise if you were using SSL, and I'm not sure whether this would be sufficient or if you'd also have to make your own socket._fileobject as well. It appears doable though, and perusing the source in httplib.py and socket.py in the standard library would tell you more.
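To make the idea concrete, here is a rough, untested sketch of such a capture_socket module. Note two caveats: httplib reads response bodies through makefile(), so those bytes bypass recv() here and a complete version would need to wrap the returned file object too, and rebinding socket.socket this way affects every user of the socket module in the process.
# capture_socket.py -- illustrative sketch only
import socket

_RealSocket = socket.socket   # grab the original before any patching

class LoggingSocket(object):
    """Proxy that logs bytes moving through a real socket."""
    def __init__(self, *args, **kwargs):
        self._sock = _RealSocket(*args, **kwargs)

    def sendall(self, data, *flags):
        print ">>>", repr(data)           # log outgoing bytes
        return self._sock.sendall(data, *flags)

    def recv(self, *args):
        data = self._sock.recv(*args)
        print "<<<", repr(data)           # log incoming bytes
        return data

    def __getattr__(self, name):
        # forward everything else (connect, close, makefile, ...)
        return getattr(self._sock, name)

# rebinding socket.socket makes httplib construct LoggingSockets
socket.socket = LoggingSocket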
|
How do I get urllib2 to log ALL transferred bytes
|
I'm writing a web-app that uses several 3rd party web APIs, and I want to keep track of the low-level requests and responses for ad hoc analysis. So I'm looking for a recipe that will get Python's urllib2 to log all bytes transferred via HTTP. Maybe a sub-classed Handler?
|
[
"Well, I've found how to setup the built-in debugging mechanism of the library:\nimport logging, urllib2, sys\n\nhh = urllib2.HTTPHandler()\nhsh = urllib2.HTTPSHandler()\nhh.set_http_debuglevel(1)\nhsh.set_http_debuglevel(1)\nopener = urllib2.build_opener(hh, hsh)\nlogger = logging.getLogger()\nlogger.addHandler(logging.StreamHandler(sys.stdout))\nlogger.setLevel(logging.NOTSET)\n\nBut I'm still looking for a way to dump all the information transferred.\n",
"This looks pretty tricky to do. There are no hooks in urllib2, urllib, or httplib (which this builds on) for intercepting either input or output data.\nThe only thing that occurs to me, other than switching tactics to use an external tool (of which there are many, and most people use such things), would be to write a subclass of socket.socket in your own new module (say, \"capture_socket\") and then insert that into httplib using \"import capture_socket; import httplib; httplib.socket = capture_socket\". You'd have to copy all the necessary references (anything of the form \"socket.foo\" that is used in httplib) into your own module, but then you could override things like recv() and sendall() in your subclass to do what you like with the data.\nComplications would likely arise if you were using SSL, and I'm not sure whether this would be sufficient or if you'd also have to make your own socket._fileobject as well. It appears doable though, and perusing the source in httplib.py and socket.py in the standard library would tell you more.\n"
] |
[
12,
2
] |
[] |
[] |
[
"http",
"logging",
"python",
"urllib2"
] |
stackoverflow_0001170744_http_logging_python_urllib2.txt
|
Q:
Remove a sublayout in qt?
In PyQt 4.5, I have a layout inside another layout. I'd like to remove the sublayout from its parent, and hide it. I can say parent_layout.removeItem(child_layout) to remove the layout from its parent, but it still shows on the widget. I can't find any way to hide it in one step, as QLayout doesn't have a hide() method like QWidget does.
A:
The easy solution would be to have an interior widget, not an interior layout. You could assign the layout you desire to the widget, then just remove/hide the widget when you want to do so. A good rule of thumb is if you just want to arrange how widgets appear, then use a layout; if you want to hide/show them as a group, use a widget.
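A minimal sketch of that approach (PyQt4 names; child_layout and parent_layout are the objects from the question):
# Wrap the sublayout in a QWidget so it can be hidden as a group.
container = QtGui.QWidget()
container.setLayout(child_layout)
parent_layout.addWidget(container)

# Later, to hide and remove the whole group in one go:
container.hide()
parent_layout.removeWidget(container)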
A:
With some help from flupke on #qt, I came up with:
for i in range(0, child_layout.count()):
child_layout.itemAt(i).widget().hide()
parent_layout.removeItem(child_layout)
Which assumes all the child layout's children are widgets. Is there a simpler solution?
|
Remove a sublayout in qt?
|
In PyQt 4.5, I have a layout inside another layout. I'd like to remove the sublayout from its parent, and hide it. I can say parent_layout.removeItem(child_layout) to remove the layout from its parent, but it still shows on the widget. I can't find any way to hide it in one step, as QLayout doesn't have a hide() method like QWidget does.
|
[
"The easy solution would be to have an interior widget, not an interior layout. You could assign the layout you desire to the widget, then just remove/hide the widget when you want to do so. A good rule of thumb is if you just want to arrange how widgets appear, then use a layout; if you want to hide/show them as a group, use a widget.\n",
"With some help from flupke on #qt, I came up with:\nfor i in range(0, child_layout.count()):\n child_layout.itemAt(i).widget().hide()\nparent_layout.removeItem(child_layout)\n\nWhich assumes all the child layout's children are widgets. Is there a simpler solution?\n"
] |
[
4,
1
] |
[] |
[] |
[
"pyqt",
"python",
"qt4"
] |
stackoverflow_0001844630_pyqt_python_qt4.txt
|
Q:
Windows impersonation for WMI calls via python?
I'm using PyWin32 to make WMI calls to the system in python from my django web application. My goal is to allow users to add printers to the system via a web interface. To do this, I'm using win32print.AddPrinterConnection.
This works well running the development server under my user account. I can add all the printers I want. However, eventually, this will need to run under apache which runs as the LocalSystem account.
This is problematic for two reasons:
The LocalSystem account has no network privileges at all, and this is a network printer. The AddPrinterConnection WMI call eventually makes a COM call that will be disallowed.
The LocalSystem account has no access to the domain these printers are on. They require a domain account to access.
Therefore, I've come to the conclusion that I need to impersonate domain user(s) to accomplish this task. I've done so using the code found here:
http://code.activestate.com/recipes/81402/
This seems to work as I'm able to verify that I've successfully impersonated the calling code. Unfortunately, after impersonation I always get this error from the win32print.AddPrinterConnection API call:
Exception Type: error
Exception Value: (2, 'AddPrinterConnection', 'The system cannot find the file specified.')
Do you have any idea why this may be?
Thanks a bunch! Pete
Update
Playing around, I noticed that the AddPrinterConnection API call completes successfully if the user that I'm impersonating is currently logged into the system. Once I log that user out and retry the command while impersonating that user, I get the error stated above.
What is going on here?
A:
I can't help with the specific problem, but I do know that if I had to work with WMI stuff on Windows, with Python, I would definitely reach for Tim Golden's Python WMI module instead of pywin32. Perhaps in the documentation/cookbook or Google searches using that module you can find a solution.
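For instance, Tim Golden's module lets you connect to a remote machine with explicit credentials, which may sidestep the impersonation dance entirely (the machine name and credentials below are placeholders; note that WMI refuses explicit credentials for the local machine):
import wmi

c = wmi.WMI(computer="printserver",
            user=r"DOMAIN\someuser",
            password="secret")
for printer in c.Win32_Printer():
    print printer.Caption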
|
Windows impersonation for WMI calls via python?
|
I'm using PyWin32 to make WMI calls to the system in python from my django web application. My goal is to allow users to add printers to the system via a web interface. To do this, I'm using win32print.AddPrinterConnection.
This works well running the development server under my user account. I can add all the printers I want. However, eventually, this will need to run under apache which runs as the LocalSystem account.
This is problematic for two reasons:
The LocalSystem account has no network privileges at all, and this is a network printer. The AddPrinterConnection WMI call eventually makes a COM call that will be disallowed.
The LocalSystem account has no access to the domain these printers are on. They require a domain account to access.
Therefore, I've come to the conclusion that I need to impersonate domain user(s) to accomplish this task. I've done so using the code found here:
http://code.activestate.com/recipes/81402/
This seems to work as I'm able to verify that I've successfully impersonated the calling code. Unfortunately, after impersonation I always get this error from the win32print.AddPrinterConnection API call:
Exception Type: error
Exception Value: (2, 'AddPrinterConnection', 'The system cannot find the file specified.')
Do you have any idea why this may be?
Thanks a bunch! Pete
Update
Playing around, I noticed that the AddPrinterConnection API call completes successfully if the user that I'm impersonating is currently logged into the system. Once I log that user out and retry the command while impersonating that user, I get the error stated above.
What is going on here?
|
[
"I can't help with the specific problem, but I do know that if I had to work with WMI stuff on Windows, with Python, I would definitely reach for Tim Golden's Python WMI module instead of pywin32. Perhaps in the documentation/cookbook or Google searches using that module you can find a solution.\n"
] |
[
0
] |
[] |
[] |
[
"django",
"python",
"pywin32",
"wmi"
] |
stackoverflow_0001464894_django_python_pywin32_wmi.txt
|
Q:
AJAX upload in Python (WSGI) without Flash/Silverlight, with progress bar
I am looking for a pure Javascript/Python upload example, that uses server polling instead of client-side SWF to display upload progress (like the one on rapidshare.com for example)
Currently, website is running on the standalone wsgi server included with Werkzeug framework, but may be moved to mod_wsgi if the load increases.
I've tried the gp.fileupload middleware, but can't get it to work. Examples on their website won't work either :|
Website already uses Glow library for other misc client-side stuff, but there is no specific upload-related functionality in it.
A:
If you don't have support on the web side to track the size of the temporary file (or in-memory buffer) of the uploading data as it arrives, I don't know how you'll do this. Some of the popular web servers have special support for this, mostly experimental, but it's not widely supported and what you're trying to do is pretty awkward in general. I've researched this recently and it's pretty poorly supported all around.
|
AJAX upload in Python (WSGI) without Flash/Silverlight, with progress bar
|
I am looking for a pure Javascript/Python upload example, that uses server polling instead of client-side SWF to display upload progress (like the one on rapidshare.com for example)
Currently, website is running on the standalone wsgi server included with Werkzeug framework, but may be moved to mod_wsgi if the load increases.
I've tried the gp.fileupload middleware, but can't get it to work. Examples on their website won't work either :|
Website already uses Glow library for other misc client-side stuff, but there is no specific upload-related functionality in it.
|
[
"If you don't have support on the web side to track the size of the temporary file (or in-memory buffer) of the uploading data as it arrives, I don't know how you'll do this. Some of the popular web servers have special support for this, mostly experimental, but it's not widely supported and what you're trying to do is pretty awkward in general. I've researched this recently and it's pretty poorly supported all around.\n"
] |
[
0
] |
[] |
[] |
[
"ajax",
"progress",
"python",
"upload",
"werkzeug"
] |
stackoverflow_0001284922_ajax_progress_python_upload_werkzeug.txt
|
Q:
Regular expression to replace with XML node
I'm using Python to write a regular expression for replacing parts of the string with an XML node.
The source string looks like:
Hello
REPLACE(str1) this is to replace
REPLACE(str2) this is to replace
And the result string should be like:
Hello
<replace name="str1"> this is to replace </replace>
<replace name="str2"> this is to replace </replace>
Can anyone help me?
A:
What makes your problem a little bit tricky is that you want to match inside of a multiline string. You need to use the re.MULTILINE flag to make that work.
Then, you need to match some groups inside your source string, and use those groups in the final output. Here is code that works to solve your problem:
import re
s_pat = "^\s*REPLACE\(([^)]+)\)(.*)$"
pat = re.compile(s_pat, re.MULTILINE)
s_input = """\
Hello
REPLACE(str1) this is to replace
REPLACE(str2) this is to replace"""
def mksub(m):
return '<replace name="%s">%s</replace>' % m.groups()
s_output = re.sub(pat, mksub, s_input)
The only tricky part is the regular expression pattern. Let's look at it in detail.
^ matches the start of a string. With re.MULTILINE, this matches the start of a line within a multiline string; in other words, it matches right after a newline in the string.
\s* matches optional whitespace.
REPLACE matches the literal string "REPLACE".
\( matches the literal string "(".
( begins a "match group".
[^)] means "match any character but a ")".
+ means "match one or more of the preceding pattern.
) closes a "match group".
\) matches the literal string ")"
(.*) is another match group containing ".*".
$ matches the end of a string. With re.MULTILINE, this matches the end of a line within a multiline string; in other words, it matches a newline character in the string.
. matches any character, and * means to match zero or more of the preceding pattern. Thus .* matches anything, up to the end of the line.
So, our pattern has two "match groups". When you run re.sub() it will make a "match object" which will be passed to mksub(). The match object has a method, .groups(), that returns the matched substrings as a tuple, and that gets substituted in to make the replacement text.
EDIT: You actually don't need to use a replacement function. You can put the special string \1 inside the replacement text, and it will be replaced by the contents of match group 1. (Match groups count from 1; the special match group 0 corresponds to the entire string matched by the pattern.) The only tricky part of the \1 string is that \ is special in strings. In a normal string, to get a \, you need to put two backslashes in a row, like so: "\\1". But you can use a Python "raw string" to conveniently write the replacement pattern. Doing so you get this:
import re
s_pat = "^\s*REPLACE\(([^)]+)\)(.*)$"
pat = re.compile(s_pat, re.MULTILINE)
s_repl = r'<replace name="\1">\2</replace>'
s_input = """\
Hello
REPLACE(str1) this is to replace
REPLACE(str2) this is to replace"""
s_output = re.sub(pat, s_repl, s_input)
A:
Here is an excellent tutorial on how to write regular expressions in Python.
A:
Here is a solution using pyparsing. I know you specifically asked about a regex solution, but if your requirements change, you might find it easier to expand a pyparsing parser. Or a pyparsing prototype solution might give you a little more insight into the problem leading toward a regex or other final implementation.
src = """\
Hello
REPLACE(str1) this is to replace
REPLACE(str2) this is to replace
"""
from pyparsing import Suppress, Word, alphas, alphanums, restOfLine
LPAR,RPAR = map(Suppress,"()")
ident = Word(alphas, alphanums)
replExpr = "REPLACE" + LPAR + ident("name") + RPAR + restOfLine("body")
replExpr.setParseAction(
lambda toks : '<replace name="%(name)s">%(body)s </replace>' % toks
)
print replExpr.transformString(src)
In this case, you create the expression to be matched with pyparsing, define a parse action to do the text conversion, and then call transformString to scan through the input source to find all the matches, apply the parse action to each match, and return the resulting output. The parse action serves a similar function to mksub in @steveha's solution.
In addition to the parse action, pyparsing also supports naming individual elements of the expression - I used "name" and "body" to label the two parts of interest, which are represented in the re solution as groups 1 and 2. You can name groups in an re, the corresponding re would look like:
s_pat = "^\s*REPLACE\((?P<name>[^)]+)\)(?P<body>.*)$"
Unfortunately, to access these groups by name, you have to invoke the group() method on the re match object, you can't directly do the named string interpolation as in my lambda parse action. But this is Python, right? We can wrap that callable with a class that will give us dict-like access to the groups by name:
class CallableDict(object):
def __init__(self,fn):
self.fn = fn
def __getitem__(self,name):
return self.fn(name)
def mksub(m):
return '<replace name="%(name)s">%(body)s</replace>' % CallableDict(m.group)
s_output = re.sub(pat, mksub, s_input)
Using CallableDict, the string interpolation in mksub can now call m.group for each field, by making it look like we are retrieving the ['name'] and ['body'] elements of a dict.
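For completeness: re.sub replacement strings can also reference named groups directly with \g<name>, which avoids the wrapper entirely when no extra processing is needed (pat here being the named-group pattern above):
s_repl = r'<replace name="\g<name>">\g<body></replace>'
s_output = re.sub(pat, s_repl, s_input)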
A:
Maybe like this ?
import re
mystr = """Hello
REPLACE(str1) this is to replace
REPLACE(str2) this is to replace"""
prog = re.compile(r'REPLACE\((.*?)\)\s(.*)')
for line in mystr.split("\n"):
    print prog.sub(r'<replace name="\1">\2</replace>', line)
A:
Something like this should work:
import re,sys
f = open( sys.argv[1], 'r' )
for i in f:
g = re.match( r'REPLACE\((.*)\)(.*)', i )
if g is None:
print i
else:
print '<replace name=\"%s\">%s</replace>' % (g.group(1),g.group(2))
f.close()
A:
import re
a="""Hello
REPLACE(str1) this is to replace
REPLACE(str2) this is to replace"""
regex = re.compile(r"^REPLACE\(([^)]+)\)\s+(.*)$", re.MULTILINE)
b = re.sub(regex, r'<replace name="\1">\2</replace>', a)
print b
will do the replace in one line.
|
Regular expression to replace with XML node
|
I'm using Python to write a regular expression for replacing parts of the string with an XML node.
The source string looks like:
Hello
REPLACE(str1) this is to replace
REPLACE(str2) this is to replace
And the result string should be like:
Hello
<replace name="str1"> this is to replace </replace>
<replace name="str2"> this is to replace </replace>
Can anyone help me?
|
[
"What makes your problem a little bit tricky is that you want to match inside of a multiline string. You need to use the re.MULTILINE flag to make that work.\nThen, you need to match some groups inside your source string, and use those groups in the final output. Here is code that works to solve your problem:\nimport re\n\n\ns_pat = \"^\\s*REPLACE\\(([^)]+)\\)(.*)$\"\npat = re.compile(s_pat, re.MULTILINE)\n\ns_input = \"\"\"\\\nHello\nREPLACE(str1) this is to replace\nREPLACE(str2) this is to replace\"\"\"\n\n\ndef mksub(m):\n return '<replace name=\"%s\">%s</replace>' % m.groups()\n\n\ns_output = re.sub(pat, mksub, s_input)\n\nThe only tricky part is the regular expression pattern. Let's look at it in detail.\n^ matches the start of a string. With re.MULTILINE, this matches the start of a line within a multiline string; in other words, it matches right after a newline in the string.\n\\s* matches optional whitespace.\nREPLACE matches the literal string \"REPLACE\".\n\\( matches the literal string \"(\".\n( begins a \"match group\".\n[^)] means \"match any character but a \")\".\n+ means \"match one or more of the preceding pattern.\n) closes a \"match group\".\n\\) matches the literal string \")\"\n(.*) is another match group containing \".*\".\n$ matches the end of a string. With re.MULTILINE, this matches the end of a line within a multiline string; in other words, it matches a newline character in the string.\n. matches any character, and * means to match zero or more of the preceding pattern. Thus .* matches anything, up to the end of the line.\nSo, our pattern has two \"match groups\". When you run re.sub() it will make a \"match object\" which will be passed to mksub(). The match object has a method, .groups(), that returns the matched substrings as a tuple, and that gets substituted in to make the replacement text.\nEDIT: You actually don't need to use a replacement function. You can put the special string \\1 inside the replacement text, and it will be replaced by the contents of match group 1. (Match groups count from 1; the special match group 0 corresponds the the entire string matched by the pattern.) The only tricky part of the \\1 string is that \\ is special in strings. In a normal string, to get a \\, you need to put two backslashes in a row, like so: \"\\\\1\" But you can use a Python \"raw string\" to conveniently write the replacement pattern. Doing so you get this:\nimport re\ns_pat = \"^\\s*REPLACE\\(([^)]+)\\)(.*)$\"\npat = re.compile(s_pat, re.MULTILINE)\n\ns_repl = r'<replace name=\"\\1\">\\2</replace>'\n\ns_input = \"\"\"\\\nHello\nREPLACE(str1) this is to replace\nREPLACE(str2) this is to replace\"\"\"\n\n\ns_output = re.sub(pat, s_repl, s_input)\n\n",
"Here is an excellent tutorial on how to write regular expressions in Python.\n",
"Here is a solution using pyparsing. I know you specifically asked about a regex solution, but if your requirements change, you might find it easier to expand a pyparsing parser. Or a pyparsing prototype solution might give you a little more insight into the problem leading toward a regex or other final implementation.\nsrc = \"\"\"\\\nHello\nREPLACE(str1) this is to replace\nREPLACE(str2) this is to replace\n\"\"\"\n\nfrom pyparsing import Suppress, Word, alphas, alphanums, restOfLine\n\nLPAR,RPAR = map(Suppress,\"()\")\nident = Word(alphas, alphanums)\nreplExpr = \"REPLACE\" + LPAR + ident(\"name\") + RPAR + restOfLine(\"body\")\nreplExpr.setParseAction(\n lambda toks : '<replace name=\"%(name)s\">%(body)s </replace>' % toks\n )\n\nprint replExpr.transformString(src)\n\nIn this case, you create the expression to be matched with pyparsing, define a parse action to do the text conversion, and then call transformString to scan through the input source to find all the matches, apply the parse action to each match, and return the resulting output. The parse action serves a similar function to mksub in @steveha's solution.\nIn addition to the parse action, pyparsing also supports naming individual elements of the expression - I used \"name\" and \"body\" to label the two parts of interest, which are represented in the re solution as groups 1 and 2. You can name groups in an re, the corresponding re would look like:\ns_pat = \"^\\s*REPLACE\\((?P<name>[^)]+)\\)(?P<body>.*)$\"\n\nUnfortunately, to access these groups by name, you have to invoke the group() method on the re match object, you can't directly do the named string interpolation as in my lambda parse action. But this is Python, right? We can wrap that callable with a class that will give us dict-like access to the groups by name:\nclass CallableDict(object):\n def __init__(self,fn):\n self.fn = fn\n def __getitem__(self,name):\n return self.fn(name)\n\ndef mksub(m): \n return '<replace name=\"%(name)s\">%(body)s</replace>' % CallableDict(m.group)\n\ns_output = re.sub(pat, mksub, s_input)\n\nUsing CallableDict, the string interpolation in mksub can now call m.group for each field, by making it look like we are retrieving the ['name'] and ['body'] elements of a dict.\n",
"Maybe like this ?\nimport re\n\nmystr = \"\"\"Hello\nREPLACE(str1) this is to replace\nREPLACE(str2) this is to replace\"\"\"\n\nprog = re.compile(r'REPLACE\\((.*?)\\)\\s(.*)')\n\nfor line in mystr.split(\"\\n\"):\n print prog.sub(r'< replace name=\"\\1\" > \\2',line)\n\n",
"Something like this should work:\nimport re,sys\n\nf = open( sys.argv[1], 'r' )\nfor i in f:\n g = re.match( r'REPLACE\\((.*)\\)(.*)', i )\n if g is None:\n print i\n else:\n print '<replace name=\\\"%s\\\">%s</replace>' % (g.group(1),g.group(2))\nf.close()\n\n",
"import re\n\na=\"\"\"Hello\nREPLACE(str1) this is to replace\nREPLACE(str2) this is to replace\"\"\"\n\nregex = re.compile(r\"^REPLACE\\(([^)]+)\\)\\s+(.*)$\", re.MULTILINE)\n\nb=re.sub(regex, r'< replace name=\"\\1\" > \\2 < /replace >', a)\n\nprint b\n\nwill do the replace in one line.\n"
] |
[
5,
4,
1,
0,
0,
0
] |
[] |
[] |
[
"python",
"regex"
] |
stackoverflow_0001842608_python_regex.txt
|
Q:
exec() bytecode with arbitrary locals?
Suppose I want to execute code, for example
value += 5
inside a namespace of my own (so the result is essentially mydict['value'] += 5). There's a function exec(), but I have to pass a string there:
exec('value += 5', mydict)
and passing statements as strings seems strange (e.g. it's not colorized that way).
Can it be done like:
def block():
value += 5
???(block, mydict)
? The obvious candidate for last line was exec(block.__code__, mydict), but no luck: it raises UnboundLocalError about value. I believe it basically executes block(), not the code inside block, so assignments aren't easy – is that correct?
Of course, another possible solution would be to disassemble block.__code__...
FYI, I got the question because of this thread. Also, this is why some (me undecided) call for new syntax
using mydict:
value += 5
Note how this doesn't throw an error but doesn't change mydict either:
def block(value = 0):
value += 5
block(**mydict)
A:
You can pass bytecode instead of a string to exec, you just need to make the right bytecode for the purpose:
>>> bytecode = compile('value += 5', '<string>', 'exec')
>>> mydict = {'value': 23}
>>> exec(bytecode, mydict)
>>> mydict['value']
28
Specifically, ...:
>>> import dis
>>> dis.dis(bytecode)
1 0 LOAD_NAME 0 (value)
3 LOAD_CONST 0 (5)
6 INPLACE_ADD
7 STORE_NAME 0 (value)
10 LOAD_CONST 1 (None)
13 RETURN_VALUE
the load and store instructions must be of the _NAME persuasion, and this compile makes them so, while...:
>>> def f(): value += 5
...
>>> dis.dis(f.func_code)
1 0 LOAD_FAST 0 (value)
3 LOAD_CONST 1 (5)
6 INPLACE_ADD
7 STORE_FAST 0 (value)
10 LOAD_CONST 0 (None)
13 RETURN_VALUE
...code in a function is optimized to use the _FAST versions, and those don't work on a dict passed to exec. If you started somehow with a bytecode using the _FAST instructions, you could patch it to use the _NAME kind instead, e.g. with bytecodehacks or some similar approach.
A:
Use the global keyword to force dynamic scoping on any variables you want to modify from within the block:
def block():
global value
value += 5
mydict = {"value": 42}
exec(block.__code__, mydict)
print(mydict["value"])
A:
Here is a crazy decorator to create such a block that uses "custom locals". In reality it is a quick hack that turns all variable access inside the function into global access, and evaluates the result with the custom locals dictionary as the environment.
import dis
import functools
import types
import string
def withlocals(func):
"""Decorator for executing a block with custom "local" variables.
The decorated function takes one argument: its scope dictionary.
>>> @withlocals
... def block():
... counter += 1
... luckynumber = 88
>>> d = {"counter": 1}
>>> block(d)
>>> d["counter"]
2
>>> d["luckynumber"]
88
"""
def opstr(*opnames):
return "".join([chr(dis.opmap[N]) for N in opnames])
translation_table = string.maketrans(
opstr("LOAD_FAST", "STORE_FAST"),
opstr("LOAD_GLOBAL", "STORE_GLOBAL"))
c = func.func_code
newcode = types.CodeType(c.co_argcount,
0, # co_nlocals
c.co_stacksize,
c.co_flags,
c.co_code.translate(translation_table),
c.co_consts,
c.co_varnames, # co_names, name of global vars
(), # co_varnames
c.co_filename,
c.co_name,
c.co_firstlineno,
c.co_lnotab)
@functools.wraps(func)
def wrapper(mylocals):
return eval(newcode, mylocals)
return wrapper
if __name__ == '__main__':
import doctest
doctest.testmod()
This is just a monkey-patching adaption of someone's brilliant recipe for a goto decorator
A:
From S.Lott's comment above I think I get the idea for an answer using the creation of a new class.
class _(metaclass=change(mydict)):
value += 1
...
where change returns a metaclass whose __prepare__ reads the dictionary and whose __new__ updates it.
For reuse, the snippet below would work, but it's kind of ugly:
def increase_value(d):
    class _(metaclass=change(d)):
value += 1
...
increase_value(mydict)
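To make that sketch concrete, here is one way change could be written in Python 3 (illustrative only; __prepare__ seeds the class body from the dict, __new__ writes the results back):
def change(d):
    class ChangeMeta(type):
        @classmethod
        def __prepare__(mcls, name, bases):
            return dict(d)  # seed the class body namespace from d
        def __new__(mcls, name, bases, namespace):
            # write the (possibly modified) names back into d
            d.update((k, v) for k, v in namespace.items()
                     if not k.startswith('__'))
            return super().__new__(mcls, name, bases, {})
    return ChangeMeta

mydict = {'value': 41}
class _(metaclass=change(mydict)):
    value += 1
print(mydict['value'])  # 42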
|
exec() bytecode with arbitrary locals?
|
Suppose I want to execute code, for example
value += 5
inside a namespace of my own (so the result is essentially mydict['value'] += 5). There's a function exec(), but I have to pass a string there:
exec('value += 5', mydict)
and passing statements as strings seems strange (e.g. it's not colorized that way).
Can it be done like:
def block():
value += 5
???(block, mydict)
? The obvious candidate for last line was exec(block.__code__, mydict), but no luck: it raises UnboundLocalError about value. I believe it basically executes block(), not the code inside block, so assignments aren't easy – is that correct?
Of course, another possible solution would be to disassemble block.__code__...
FYI, I got the question because of this thread. Also, this is why some (me undecided) call for new syntax
using mydict:
value += 5
Note how this doesn't throw an error but doesn't change mydict either:
def block(value = 0):
value += 5
block(**mydict)
|
[
"You can pass bytecode instead of a string to exec, you just need to make the right bytecode for the purpose:\n>>> bytecode = compile('value += 5', '<string>', 'exec')\n>>> mydict = {'value': 23}\n>>> exec(bytecode, mydict)\n>>> mydict['value']\n28\n\nSpecifically, ...:\n>>> import dis\n>>> dis.dis(bytecode)\n 1 0 LOAD_NAME 0 (value)\n 3 LOAD_CONST 0 (5)\n 6 INPLACE_ADD \n 7 STORE_NAME 0 (value)\n 10 LOAD_CONST 1 (None)\n 13 RETURN_VALUE \n\nthe load and store instructions must be of the _NAME persuasion, and this compile makes them so, while...:\n>>> def f(): value += 5\n... \n>>> dis.dis(f.func_code)\n 1 0 LOAD_FAST 0 (value)\n 3 LOAD_CONST 1 (5)\n 6 INPLACE_ADD \n 7 STORE_FAST 0 (value)\n 10 LOAD_CONST 0 (None)\n 13 RETURN_VALUE \n\n...code in a function is optimized to use the _FAST versions, and those don't work on a dict passed to exec. If you started somehow with a bytecode using the _FAST instructions, you could patch it to use the _NAME kind instead, e.g. with bytecodehacks or some similar approach.\n",
"Use the global keyword to force dynamic scoping on any variables you want to modify from within the block:\ndef block():\n global value\n value += 5\n\nmydict = {\"value\": 42}\nexec(block.__code__, mydict)\nprint(mydict[\"value\"])\n\n",
"Here is a crazy decorator to create such a block that uses \"custom locals\". In reality it is a quick hack to turn all variable access inside the function to global access, and evaluate the result with the custom locals dictionary as environment.\nimport dis\nimport functools\nimport types\nimport string\n\ndef withlocals(func):\n \"\"\"Decorator for executing a block with custom \"local\" variables.\n\n The decorated function takes one argument: its scope dictionary.\n\n >>> @withlocals\n ... def block():\n ... counter += 1\n ... luckynumber = 88\n\n >>> d = {\"counter\": 1}\n >>> block(d)\n >>> d[\"counter\"]\n 2\n >>> d[\"luckynumber\"]\n 88\n \"\"\"\n def opstr(*opnames):\n return \"\".join([chr(dis.opmap[N]) for N in opnames])\n\n translation_table = string.maketrans(\n opstr(\"LOAD_FAST\", \"STORE_FAST\"),\n opstr(\"LOAD_GLOBAL\", \"STORE_GLOBAL\"))\n\n c = func.func_code\n newcode = types.CodeType(c.co_argcount,\n 0, # co_nlocals\n c.co_stacksize,\n c.co_flags,\n c.co_code.translate(translation_table),\n c.co_consts,\n c.co_varnames, # co_names, name of global vars\n (), # co_varnames\n c.co_filename,\n c.co_name,\n c.co_firstlineno,\n c.co_lnotab)\n\n @functools.wraps(func)\n def wrapper(mylocals):\n return eval(newcode, mylocals)\n return wrapper\n\nif __name__ == '__main__':\n import doctest\n doctest.testmod()\n\nThis is just a monkey-patching adaption of someone's brilliant recipe for a goto decorator\n",
"From S.Lott's comment above I think I get the idea for an answer using creation of new class. \nclass _(__metaclass__ = change(mydict)):\n value += 1\n ...\n\nwhere change is a metaclass whose __prepare__ reads dictionary and whose __new__ updates dictionary. \nFor reuse, the snippet below would work, but it's kind of ugly:\ndef increase_value(d):\n class _(__metaclass__ = change(d)):\n value += 1\n ...\n\nincrease_value(mydict)\n\n"
] |
[
7,
3,
3,
0
] |
[] |
[] |
[
"bytecode",
"codeblocks",
"python",
"python_3.x"
] |
stackoverflow_0001280100_bytecode_codeblocks_python_python_3.x.txt
|
Q:
Problem with the size of the arrows on a vector field plot
I would like to know how I can make it so the arrows have a length of sqrt(2) (as their x and y coordinates each have length 1). As you can see in the picture shown below, they seem quite small. Thanks!
from pylab import *
from numpy import ma
X = (0, 0, 0)
Y = (0, 1, 2)
quiver(X, Y, (1, 1, 1), (1, 1, 1))
#axis([0, 1, 0, 3])
show()
alt text http://img18.imageshack.us/img18/6971/arrowsi.png
A:
This will produce arrows sqrt(2) in proportion to the y axis.
pylab.quiver(X,Y,(1,1,1),(1,1,1), scale=2**.5, units='y')
It was in the documentation.
Here is an example.
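Applied to the code from the question, that would be:
from pylab import *

X = (0, 0, 0)
Y = (0, 1, 2)
quiver(X, Y, (1, 1, 1), (1, 1, 1), scale=2**.5, units='y')
show()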
|
Problem with the size of the arrows on a vector field plot
|
I would like to know how I can make it so the arrows have a length of sqrt(2) (as their x and y coordinates each have length 1). As you can see in the picture shown below, they seem quite small. Thanks!
from pylab import *
from numpy import ma
X = (0, 0, 0)
Y = (0, 1, 2)
quiver(X, Y, (1, 1, 1), (1, 1, 1))
#axis([0, 1, 0, 3])
show()
alt text http://img18.imageshack.us/img18/6971/arrowsi.png
|
[
"This will produce arrows sqrt(2) in proportion to the y axis.\npylab.quiver(X,Y,(1,1,1),(1,1,1), scale=2**.5, units='y')\n\nIt was in the documentation.\nHere is an example.\n\n"
] |
[
1
] |
[] |
[] |
[
"matplotlib",
"python"
] |
stackoverflow_0001844467_matplotlib_python.txt
|
Q:
What modules ought I to consider in Python if I wish to use CGI sessions?
Given that I know no web frameworks in Python and would like to keep it Very Simple at the moment (as I am Very Stupid), for what is a prototype of sketchy longevity, are there any streamlined, simple, "batteries-included" modules for this? (It is also too early in my Python career to evaluate frameworks, select one, and learn it.) I see a module named "Cookie," which could serve as a foundation, but nothing session-specific.
I'm familiar with the basic session concepts, having used them in classic ASP and gotten into the nuts-and-bolts of them in Perl, but I am not seeing a lot for Python. Beaker looks interesting, but then the documentation seems to require middleware with WSGI and I'm back to the frameworks problem.
I've found an old recipe on ActiveState for sessions, which could obviously use some buffing up. The information being held is not anything anyone would mind having been grabbed, so while I am normally quite security conscious, I would be willing to be a little bit more lax with this prototype.
Or is this a "roll-your-own" problem?
I will be using Python 2.6 on IIS 7.0.
A:
I think web2py (a web framework) is easy enough for you; it is the simplest approach to making a website or web service. It will also be easier than understanding Cookie or the other web-related Python modules.
You can start a session, by just typing:
session.your_session_name = "blabla" # or whatever you want to store
To make a cookie, just look here.
In web2py you don't have to configure anything. Just download it and start web2py.py. (You must have Python 2.6 or later installed.) You can also find some examples and a web-slide.
The Python Cookie module does nothing more than hold some values in a dictionary-like object, but I think you have to store it on your hard disk yourself.
A:
CherryPy is worth looking into. Yes it is a framework, and yes it requires WSGI, but it is extremely lightweight compared to other more robust alternatives.
There is another question that was answered on SO that gives a brief example on how to manage sessions with CherryPy. As you can see it makes it very easy to get up and running quickly.
Lastly, here is a little document about setting up IIS for use with CherryPy.
A:
WSGI is not a framework, nor does it require that you choose one -- it's THE standard way to run any Python web app framework on any Python-supporting web server, including a CGI one. If you have a WSGI application named app, and want to run it on CGI, see the docs and use wsgiref.handlers.CGIHandler().run(app), as the docs say.
So, you can perfectly well use Beaker via WSGI (on top of CGI) -- e.g., take the example in Beaker's docs and just add (the needed imports and) the run call above (using the wsgi_app object that example constructs, plus of course a session.save call where needed, as, again, the Beaker docs explain right afterwards).
Rich or heavy frameworks have their place but so do lightweight, flexible components like Beaker -- and WSGI middleware is a great way to leverage such components without requiring any "framework-y" arrangements, just good old WSGI (on top of CGI or anything else).
BTW, the best way to run WSGI on IIS might be isapi-wsgi (I can only say "might" because I have no IIS installation on which to test it;-). But as long as you code to WSGI (with any framework or with none at all), that will only be an optimization -- your application won't change (net of what handler's run or equivalent method you need to call;-) whether it's running on CGI, IIS via ISAPI, Google App Engine, or any other server-and-interface-thereto combination
|
What modules ought I to consider in Python if I wish to use CGI sessions?
|
Given that I know no web frameworks in Python and would like to keep it Very Simple at the moment (as I am Very Stupid), for what is a prototype of sketchy longevity, are there any streamlined, simple, "batteries-included" modules for this? (It is also too early in my Python career to evaluate frameworks, select one, and learn it.) I see a module named "Cookie," which could serve as a foundation, but nothing session-specific.
I'm familiar with the basic session concepts, having used them in classic ASP and gotten into the nuts-and-bolts of them in Perl, but I am not seeing a lot for Python. Beaker looks interesting, but then the documentation seems to require middleware with WSGI and I'm back to the frameworks problem.
I've found an old recipe on ActiveState for sessions, which could obviously use some buffing up. The information being held is not anything anyone would mind having been grabbed, so while I am normally quite security conscious, I would be willing to be a little bit more lax with this prototype.
Or is this a "roll-your-own" problem?
I will be using Python 2.6 on IIS 7.0.
|
[
"I think the web2py (web framework) is easy enough for you. I think it is the simplest approach of making a website or webservice. It will be also easier, than to understand Cookie or the other modules of python related to web-things.\nYou can start a session, by just typing:\nsession.your_session_name = \"blabla\" # or whatever you want to store\n\nTo make a cookie, just look here.\nIn web2py you don't have to configure anything. Just download it and start web2py.py. (you must have python 2.6 < installed.) You can also find some examples and a web-slide.\nThe Python Cookie module does nothing more than to hold some values in a dictonary-like object, but I think you have to store it yourself on your harddisk.\n",
"CherryPy is worth looking into. Yes it is a framework, and yes it requires WSGI, but it is extremely lightweight compared to other more robust alternatives. \nThere is another question that was answered on SO that gives a brief example on how to manage sessions with CherryPy. As you can see it makes it very easy to get up and running quickly. \nLastly, here is a little document about setting up IIS for use with CherryPy.\n",
"WSGI is not a framework, nor does it require that you choose one -- it's THE standard way to run any Python web app framework on any Python-supporting web server, including a CGI one. If you have a WSGI application named app, and want to run it on CGI, see the docs and use wsgiref.handlers.CGIHandler().run(app), as the docs say.\nSo, you can perfectly well use Beaker via WSGI (on top of CGI) -- e.g., take the example in Beaker's docs and just add (the needed imports and) the run call above (using the wsgi_app object that example constructs, plus of course a session.save and as well needed as, again, the Beaker docs explain right afterwards).\nRich or heavy frameworks have their place but so do lightweight, flexible components like Beaker -- and WSGI middleware is a great way to leverage such components without requiring any \"framework-y\" arrangements, just good old WSGI (on top of CGI or anything else).\nBTW, the best way to run WSGI on IIS might be isapi-wsgi (I can only say \"might\" because I have no IIS installation on which to test it;-). But as long as you code to WSGI (with any framework or with none at all), that will only be an optimization -- your application won't change (net of what handler's run or equivalent method you need to call;-) whether it's running on CGI, IIS via ISAPI, Google App Engine, or any other server-and-interface-thereto combination \n"
] |
[
3,
0,
0
] |
[] |
[] |
[
"python",
"session"
] |
stackoverflow_0001843022_python_session.txt
|
Q:
Programmatically pass unknown number of parameters in Python
I'd like an easy way to plug in a function and arguments to be executed later so I can have a kind of wrapper around it.
In this case, I'd like to use it to benchmark, example:
def timefunc(fn, *args):
start = time.clock()
fn(args)
stop = time.clock()
return stop - start
How do I pass arguments where the number of parameters aren't known? How do I find the number of parameters? How would I even call the function once I knew the number of parameters?
A:
Just call the function like this:
fn(*args)
A:
MatrixFrog's answer is the correct one, but just to complete the picture. For finding out the number of arguments simply call len, because args is a tuple:
import time
def timefunc(fn, *args):
start = time.clock()
print len(args), type(args)
fn(*args)
stop = time.clock()
return stop - start
def myfoo(a, b):
c = a + b
return c
timefunc(myfoo, 5, 6)
The print statement inside timefunc prints:
2 <type 'tuple'>
Since args is a tuple, you can access it like any other tuple.
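If the wrapped function may also take keyword arguments, the same star syntax extends naturally; a small sketch reusing the myfoo defined above:
import time

def timefunc(fn, *args, **kwargs):
    # *args gathers positional arguments, **kwargs gathers keyword ones
    start = time.clock()
    fn(*args, **kwargs)  # unpack both when calling
    stop = time.clock()
    return stop - start

timefunc(myfoo, 5, b=6)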
|
Programmatically pass unknown number of parameters in Python
|
I'd like an easy way to plug in a function and arguments to be executed later so I can have a kind of wrapper around it.
In this case, I'd like to use it to benchmark, example:
def timefunc(fn, *args):
start = time.clock()
fn(args)
stop = time.clock()
return stop - start
How do I pass arguments where the number of parameters aren't known? How do I find the number of parameters? How would I even call the function once I knew the number of parameters?
|
[
"Just call the function like this:\nfn(*args)\n\n",
"MatrixFrog's answer is the correct one, but just to complete the picture. For finding out the number of arguments simply call len, because args is a tuple:\nimport time\n\ndef timefunc(fn, *args):\n start = time.clock()\n print len(args), type(args)\n fn(*args)\n stop = time.clock()\n return stop - start\n\n\ndef myfoo(a, b):\n c = a + b\n return c\n\n\ntimefunc(myfoo, 5, 6)\n\nThe print statement inside timefunc prints:\n2 <type 'tuple'>\n\nSince args is a tuple, you can access it like any other tuple. \n"
] |
[
8,
2
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001844705_python.txt
|
Q:
'setup.py test' egg install location?
I have all the eggs my project requires pre-downloaded in a directory, and I would like setuptools to only install packages from that directory.
In my setup.cfg I have:
[easy_install]
allow_hosts = None
find_links = ../../setup
I run python setup.py develop and it finds and installs all the packages correctly.
For testing, I have an additional requirement, specified in setup.py.
tests_require=["pinocchio==0.2"],
This egg also resides locally in the ../../setup directory.
I run python setup.py test and it sees the dependency and finds the egg in ../../setup just fine. However, the egg gets installed to my current directory instead of the site-packages directory with the rest of the eggs.
I've tried specifying the install-dir both in setup.cfg and on the command line and neither seemed to work for the tests command.
I could just add the dependency to the install_requires section, but I'd like to keep what is required for installation and tests separate if possible.
How can I keep the dependency in the tests_require section, but have it installed to the site-packages directory?
A:
Just looking at the source code (setuptools/command/test.py), it looks like setup.py test is not supposed to install anything permanently, by design (it is testing, so why put anything in site-packages?). It uses fetch_build_egg (setuptools/dist.py) to get the eggs, which actually does a local easy_install. I suspect you can't trivially make test do what you want.
Notes/ideas:
My experience with setuptools is that there are bugs in it and undocumented behavior. (One especially nasty trip-up I found was that it wouldn't enter softlinked directories, when distutils would.)
I'd recommend either A) not doing this :), B) manually installing the file by calling easy_install package, or C) looking into the setuptools system and maybe adding your own command. It isn't too difficult to understand, and knowing it will help a lot when you hit future setuptools hiccups.
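For option C, a rough sketch of what such a command could look like (the class name is mine and this is untested against a real project; it simply shells out to easy_install, pointed at the local egg directory from the question, before running the stock test command):
import subprocess
from setuptools.command.test import test as _test

class SitePackagesTest(_test):
    def run(self):
        # install each tests_require entry like a normal install so the
        # egg lands in site-packages instead of the current directory
        for requirement in self.distribution.tests_require or []:
            subprocess.check_call(
                ['easy_install', '-f', '../../setup', requirement])
        _test.run(self)

# and in setup(): cmdclass={'test': SitePackagesTest}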
|
'setup.py test' egg install location?
|
I have all the eggs my project requires pre-downloaded in a directory, and I would like setuptools to only install packages from that directory.
In my setup.cfg I have:
[easy_install]
allow_hosts = None
find_links = ../../setup
I run python setup.py develop and it finds and installs all the packages correctly.
For testing, I have an additional requirement, specified in setup.py.
tests_require=["pinocchio==0.2"],
This egg also resides locally in the ../../setup directory.
I run python setup.py test and it sees the dependency and finds the egg in ../../setup just fine. However, the egg gets installed to my current directory instead of the site-packages directory with the rest of the eggs.
I've tried specifying the install-dir both in setup.cfg and on the command line and neither seemed to work for the tests command.
I could just add the dependency to the install_requires section, but I'd like to keep what is required for installation and tests separate if possible.
How can I keep the dependency in the tests_require section, but have it installed to the site-packages directory?
|
[
"Just looking at the source code (setuptools/command/tests.py), it doesn't look like setup.py test is not supposed to install anything by design (it is testing, so why put anything in site-packages?). It uses fetch_build_egg (setuptools/dist.py) to get the eggs, which actually does a local easy_install. I suspect you can't trivially make test do what you want.\nNotes/ideas:\nMy experience with setuptools is that it there are bugs in it and undocumented behavior. (One especially nasty trip-up I found was that it wouldn't enter softlinked directories, when distutils would). \nI'd recommend either A) not doing this. :), B) manually installing the file by calling easy_install package. or C) looking into the setuptools system and maybe adding your own command. It isn't too difficult to understand, and knowing it will help a lot when you get future setuptools hick-ups.\n"
] |
[
3
] |
[] |
[] |
[
"python",
"setuptools"
] |
stackoverflow_0001843424_python_setuptools.txt
|
Q:
How to set shell variables in subprocess.Popen (in a less ugly way...)
NOTE: This is not the same question as Python: Persistent shell variables in subprocess, as that question is actually about environment variables, not shell variables.
I'm trying to automate a basic benchmark that I'm doing in csh using the shell built-in time command. You can tweak the output of time by setting the variable of the same name in the shell like follows:
set time=(1000000000 "%U %S %E %P %W %X %D %K %M %F %R %I %O %r %s %k %w %c")
However, this is of course not an environment variable, so you can't pass it in the env parameter in the subprocess.Popen constructor (I tried that first...). You can hack it together by abusing (imo, anyhow) the cmd argument to do that, and then your actual work (shortened to display nicely):
time_cmd = 'set time=(1000000000 "%U %S %E %P %c"); time ./some_func'
p = subprocess.Popen(time_cmd, executable="/bin/csh", shell=True, ...)
and that works as you would expect, but it seems not really the right way to go (and can lead to some pretty horrible cmd construction when you want to tweak a bunch of different shell behaviours).
Is there any other way to set shell variables in subprocess other than this kludge?
A:
You have a limited number of options for communicating with a child process (invocation arguments, environment, and streams). Since setting a shell variable is, in a more general sense, modifying a child process's internal state, what you call a "kludge" is actually quite reasonable. I don't know of any csh variants that have a special way to load shell variables.
However, you could neaten it up by writing all your initialisation commands to the shell via a pipe, or even writing them to a file and having them read by csh's source builtin.
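For instance, a minimal sketch of the pipe approach (keeping the question's shortened time format and its ./some_func placeholder):
import subprocess

# -s makes csh read its commands from standard input
p = subprocess.Popen(['/bin/csh', '-s'], stdin=subprocess.PIPE)
# feed the setup line first, then the actual timed command
p.communicate('set time=(1000000000 "%U %S %E %P %c")\n'
              'time ./some_func\n')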
|
How to set shell variables in subprocess.Popen (in a less ugly way...)
|
NOTE: This is not the same question as Python: Persistent shell variables in subprocess, as that question is actually about environment variables, not shell variables.
I'm trying to automate a basic benchmark that I'm doing in csh using the shell built-in time command. You can tweak the output of time by setting the variable of the same name in the shell like follows:
set time=(1000000000 "%U %S %E %P %W %X %D %K %M %F %R %I %O %r %s %k %w %c")
However, this is of course not an environment variable, so you can't pass it in the env parameter in the subprocess.Popen constructor (I tried that first...). You can hack it together by abusing (imo, anyhow) the cmd argument to do that, and then your actual work (shortened to display nicely):
time_cmd = 'set time=(1000000000 "%U %S %E %P %c"); time ./some_func'
p = subprocess.Popen(time_cmd, executable="/bin/csh", shell=True, ...)
and that works as you would expect, but it seems not really the right way to go (and can lead to some pretty horrible cmd construction when you want to tweak a bunch of different shell behaviours).
Is there any other way to set shell variables in subprocess other than this kludge?
|
[
"You have a limited number of options to communicate to a child process (invocation arguments, environment, and streams). Since setting a shell variable is in a more general sense modifying a child process's internal state, what you call a \"kludge\" is actually quite reasonable. I don't know of any csh variants that have a special way to load shell variables.\nHowever, you could neaten it up by writing all your initialisation commands to the shell via a pipe, or even writing them to a file and having them read by csh's source builtin.\n"
] |
[
1
] |
[] |
[] |
[
"freebsd",
"python",
"shell",
"subprocess",
"unix"
] |
stackoverflow_0001845435_freebsd_python_shell_subprocess_unix.txt
|
Q:
Can I CREATE TEMPORARY TABLE in SQLAlchemy without appending to Table._prefixes?
I'd like to create a temporary table in SQLAlchemy. I can build a CREATE TABLE statement with a TEMPORARY clause by calling table._prefixes.append('TEMPORARY') against a Table object, but that's less elegant than table.select().prefix_with() used to add a prefix to data manipulation language expressions.
Is there an equivalent to .prefix_with() for DDL?
A:
No, prefix_with() is defined for SELECT and INSERT only. But a convenient way to add a prefix to the CREATE TABLE statement is passing it into the table definition:
t = Table(
't', metadata,
Column('id', Integer, primary_key=True),
# ...
prefixes=['TEMPORARY'],
)
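Continuing from that snippet, emitting the DDL should then produce the prefixed statement; a quick sketch (the in-memory SQLite engine is purely for illustration):
from sqlalchemy import create_engine

engine = create_engine('sqlite://')
t.create(bind=engine)  # emits: CREATE TEMPORARY TABLE t (id INTEGER NOT NULL, ...)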
|
Can I CREATE TEMPORARY TABLE in SQLAlchemy without appending to Table._prefixes?
|
I'd like to create a temporary table in SQLAlchemy. I can build a CREATE TABLE statement with a TEMPORARY clause by calling table._prefixes.append('TEMPORARY') against a Table object, but that's less elegant than table.select().prefix_with() used to add a prefix to data manipulation language expressions.
Is there an equivalent to .prefix_with() for DDL?
|
[
"No, prefix_with() is defined for SELECT and INSERT only. But convenient way to add prefix to CREATE TABLE statement is passing it into table definition:\nt = Table(\n 't', metadata,\n Column('id', Integer, primary_key=True),\n # ...\n prefixes=['TEMPORARY'],\n)\n\n"
] |
[
11
] |
[] |
[] |
[
"python",
"sqlalchemy",
"temp_tables"
] |
stackoverflow_0001842902_python_sqlalchemy_temp_tables.txt
|
Q:
Bignum implementation that has efficient addition of small integers
I have been using python's native bignums for an algorithm and decided to try and speed it up by converting it to C++. When I used long longs, the C++ was about 100x faster than the python, but when I used GMP bindings in C++, it was only 10x faster than the python (for the same cases that fit in long longs).
Is there a better bignum implementation for doing a large number of small additions? For example, we have a big number N to which we'll be adding a lot of little +1, +21, +1, etc., and every once in a while add another big number M?
A:
The GMP library itself has a fast short integer add to MPZ routine
void mpz_add_ui (mpz_t rop, mpz_t op1, unsigned long int op2)
I don't know whether gmpy uses that, but if it does try adding a normal python int to an mpz vs adding an mpz to mpz and see if it is quicker.
Edit
I tried a bit of benchmarking and found it doesn't make any difference
$ python -m timeit -c 'from gmpy import mpz
> a=mpz(10**1000)' 'a+1'
100000 loops, best of 3: 5.4 usec per loop
$ python -m timeit -c 'from gmpy import mpz
a=mpz(10**1000); b=mpz(1)' 'a+b'
100000 loops, best of 3: 5.5 usec per loop
So I guess gmpy doesn't use mpz_add_ui as I really would expect that to be a lot quicker.
A:
Did you do profiling? Of the whole application, in both Python and C++, so that you know that you really need the additional speed.
Try Python 3k: it now has arbitrary-length integers implemented!
A:
(Note: I help maintain GMPY and I've implemented quite a few optimizations in the most recent release.)
GMPY v1.11 does use mpz_add_ui when adding a small number to an mpz. The newest version of GMPY is also about 25% faster than prior versions when working with small numbers.
With GMPY 1.04
$ py26 -mtimeit -s "import gmpy;a=gmpy.mpz(10**1000)" "a+1"
10000000 loops, best of 3: 0.18 usec per loop
$ py26 -mtimeit -s "import gmpy;a=gmpy.mpz(10**1000);b=gmpy.mpz(1)" "a+b"
10000000 loops, best of 3: 0.153 usec per loop
With GMPY 1.11
$ py26 -mtimeit -s "import gmpy;a=gmpy.mpz(10**1000)" "a+1"
10000000 loops, best of 3: 0.127 usec per loop
$ py26 -mtimeit -s "import gmpy;a=gmpy.mpz(10**1000);b=gmpy.mpz(1)" "a+b"
10000000 loops, best of 3: 0.148 usec per loop
Since it is quicker to convert a Python int to a long and call mpz_add_ui than to convert a Python int to an mpz, there is a moderate performance advantage. I wouldn't be surprised if there is a 10x performance penalty for calling the GMP functions vs. native operations on a long long.
Can you accumulate several of the small numbers into one long long and add them at one time to your large number?
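As a rough sketch of that batching idea (the numbers are illustrative), accumulate the small deltas in a native int and fold them into the mpz only occasionally:
import gmpy

N = gmpy.mpz(10**1000)  # the big running total
pending = 0             # small additions pile up here cheaply

for delta in (1, 21, 1, 7, 1):
    pending += delta    # plain Python int arithmetic, very fast

N += pending            # one big-number addition instead of five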
|
Bignum implementation that has efficient addition of small integers
|
I have been using python's native bignums for an algorithm and decided to try and speed it up by converting it to C++. When I used long longs, the C++ was about 100x faster than the python, but when I used GMP bindings in C++, it was only 10x faster than the python (for the same cases that fit in long longs).
Is there a better bignum implementation for doing a large number of small additions? For example, we have a big number N to which we'll be adding a lot of little +1, +21, +1, etc., and every once in a while add another big number M?
|
[
"The GMP library itself has a fast short integer add to MPZ routine\nvoid mpz_add_ui (mpz_t rop, mpz_t op1, unsigned long int op2)\n\nI don't know whether gmpy uses that, but if it does try adding a normal python int to an mpz vs adding an mpz to mpz and see if it is quicker.\nEdit\nI tried a bit of benchmarking and found it doesn't make any difference\n$ python -m timeit -c 'from gmpy import mpz\n> a=mpz(10**1000)' 'a+1'\n100000 loops, best of 3: 5.4 usec per loop\n\n$ python -m timeit -c 'from gmpy import mpz\na=mpz(10**1000); b=mpz(1)' 'a+b'\n100000 loops, best of 3: 5.5 usec per loop\n\nSo I guess gmpy doesn't use mpz_add_ui as I really would expect that to be a lot quicker.\n",
"Did you do profiling ? Of Python and C++ whole applications. So that you know that you really need that additional speed.\nTry Python 3k it now have any-length integers implemented!\n",
"(Note: I help maintain GMPY and I've implemented quite a few optimizations in the most recent release.)\nGMPY v1.11 does use mpz_add_ui when adding a small number to an mpz. The newest version of GMPY is also about 25% faster than prior versions when working with small numbers.\nWith GMPY 1.04\n$ py26 -mtimeit -s \"import gmpy;a=gmpy.mpz(10**1000)\" \"a+1\"\n10000000 loops, best of 3: 0.18 usec per loop\n$ py26 -mtimeit -s \"import gmpy;a=gmpy.mpz(10**1000);b=gmpy.mpz(1)\" \"a+b\"\n10000000 loops, best of 3: 0.153 usec per loop\n\nWith GMPY 1.11\n$ py26 -mtimeit -s \"import gmpy;a=gmpy.mpz(10**1000)\" \"a+1\"\n10000000 loops, best of 3: 0.127 usec per loop\n$ py26 -mtimeit -s \"import gmpy;a=gmpy.mpz(10**1000);b=gmpy.mpz(1)\" \"a+b\"\n10000000 loops, best of 3: 0.148 usec per loop\n\nSince it is quicker to convert a Python int to a long and call mpz_add_ui than to convert a Python int to an mpz, there is a moderate performance advantage. I wouldn't be surprised if there is a 10x performance penalty for calling the GMP functions vs. native operations on a long long.\nCan you accumulate several of the small numbers into one long long and add them at one time to your large number?\n"
] |
[
2,
0,
0
] |
[] |
[] |
[
"arbitrary_precision",
"bignum",
"c++",
"gmp",
"python"
] |
stackoverflow_0001831212_arbitrary_precision_bignum_c++_gmp_python.txt
|
Q:
Matrices and inverse Matrices in Python
For a project that I am doing, I decompose a graph that I created using NetworkX into an adjacency matrix using the NetworkX adj_matrix() function. However, one of the problems that I have come across is that every single graph that I decompose gives me the following error when I try to find the inverse of the matrix.
str: Traceback (most recent call last):
File "C:\eclipse\plugins\org.python.pydev.debug_1.4.7.2843\pysrc\pydevd_resolver.py", line 179, in _getPyDictionary
attr = getattr(var, n)
File "C:\Python26\lib\site-packages\numpy\core\defmatrix.py", line 519, in getI
return asmatrix(func(self))
File "C:\Python26\lib\site-packages\numpy\linalg\linalg.py", line 355, in inv
return wrap(solve(a, identity(a.shape[0], dtype=a.dtype)))
File "C:\Python26\lib\site-packages\numpy\linalg\linalg.py", line 254, in solve
raise LinAlgError, 'Singular matrix'
LinAlgError: Singular matrix
I tried generating adjacency matrices from 5 different graphs and all of them produced the same error when I tried to find the inverse of the adjacency matrix. The question that I pose is whether there is any way to go from NetworkX graph to matrix. What is my best course of action from here? I realize there are other questions pertaining to matrix inverses, but mine is somewhat limited by the fact that I need the graph adjacency matrix.
A:
Adjacency matrices are not always invertible. There are papers on this subject; I'm not sure whether there is any simple characterization of the corresponding graphs. A pragmatic approach would be to catch the LinAlgError exception in your code (try… except…), and warn when the adjacency matrix is not invertible (and keep performing your calculations otherwise).
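A minimal sketch of that pragmatic approach (the empty graph is used only because its all-zero adjacency matrix is guaranteed to trigger the exception):
import numpy
import networkx as nx

G = nx.empty_graph(3)  # 3 nodes, no edges
A = nx.adj_matrix(G)   # all-zero matrix, hence singular
try:
    A_inv = numpy.linalg.inv(A)
except numpy.linalg.LinAlgError:
    print('adjacency matrix is singular; skipping inversion')
else:
    pass  # continue the calculation with A_inv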
A:
I don't know exactly how networkx produces the adjacency matrix, but there is absolutely no reason for it to be invertible. For example, consider the complete graph (every node is connected to every other), its adjacency matrix is full of ones, and the matrix obviously has 0 as an eigenvalue (as soon as the number of nodes is >= 2 of course...). Or the graph with N nodes and no edges, its adjacency matrix is 0...
What do you want to do? I never had to consider the inverse of the adjacency matrix, but very often the inverse of I - xA for some (small) value of x. Its inverse is

(I - xA)^(-1) = I + xA + x^2 A^2 + ...

which is invertible for some value of x (in fact, as soon as |x| < min(|1/y| for y in eigenvalues of A), I think)... this is because you consider the number of paths in your graph, but putting some decay in it, so it is summable (PageRank, anyone?)
A:
Are you asking for a method to generate graphs whose adjacency matrices are non-singular? It is no fault of networkx's or numpy's that the graphs you generated have adjacency matrices that do not have inverses.
|
Matrices and inverse Matrices in Python
|
For a project that I am doing, I decompose a graph that I created using NetworkX into an adjacency matrix using the NetworkX adj_matrix() function. However, one of the problems that I have come across is that every single graph that I decompose gives me the following error when I try to find the inverse of the matrix.
str: Traceback (most recent call last):
File "C:\eclipse\plugins\org.python.pydev.debug_1.4.7.2843\pysrc\pydevd_resolver.py", line 179, in _getPyDictionary
attr = getattr(var, n)
File "C:\Python26\lib\site-packages\numpy\core\defmatrix.py", line 519, in getI
return asmatrix(func(self))
File "C:\Python26\lib\site-packages\numpy\linalg\linalg.py", line 355, in inv
return wrap(solve(a, identity(a.shape[0], dtype=a.dtype)))
File "C:\Python26\lib\site-packages\numpy\linalg\linalg.py", line 254, in solve
raise LinAlgError, 'Singular matrix'
LinAlgError: Singular matrix
I tried generating adjacency matrices from 5 different graphs and all of them produced the same error when I tried to find the inverse of the adjacency matrix. The question that I pose is whether there is any way to go from NetworkX graph to matrix. What is my best course of action from here? I realize there are other questions pertaining to matrix inverses, but mine is somewhat limited by the fact that I need the graph adjacency matrix.
|
[
"Adjacency matrices are not always invertible. There are papers on this subject; I'm not sure whether there is any simple characterization of the corresponding graphs. A pragmatic approach would be to catch the LinAlgError exception in your code (try… except…), and warn when the adjacency matrix is not invertible (and keep performing your calculations otherwise).\n",
"I don't know exactly how networkx produces the adjacency matrix, but there is absolutely no reason for it to be inversible. For example, consider the complete graph (all nodes are connected to every each other), its adacency matrix is full of ones, and the matrix has obviously 0 as an eigenvalue (as soon as the number of nodes is >= 2 of course...). Or the graph with N Nodes and no edges, its adjacency matrix is 0...\nWhat do you want to do ? I never had to consider the inverse of the adjacency matrix, but very often the inverse of I - x A for some (small) value of x. Its inverse is \n\n(I - x A) ^(-1) = I + xA + x^2 A2 + ...\n\nwhich is inversible for some value of x (in fact, as soon as |x| < max( |1/y| for y in eigenvalues of A) I think)... this is because you consider the number of paths in your graph, but putting some decay in it, so it is summable (Pagerank anyone ?)\n",
"are you asking for a method to generate graphs whose adjacency matrices are non-singular? it is no fault of networkx's or numpy's that the graphs you generated have adjacency matrices that do not have inverses.\n"
] |
[
4,
2,
1
] |
[] |
[] |
[
"matrix",
"matrix_inverse",
"networkx",
"python"
] |
stackoverflow_0001845209_matrix_matrix_inverse_networkx_python.txt
|
Q:
Patterns for dealing with memcache Caching in Django
I have a big Django project with several interrelated apps and a lot of caching in use. It currently has a file which stores cache helper functions. So for example, get_object_x(id) would check the cache for this object and, if it wasn't there, go to the DB, pull it from there and return it, caching it along the way. This same pattern is followed for caching groups of objects, and the file is also used for invalidation methods.
A problem has arisen in the imports between apps though. The app models file has a number of helper functions which we want to use cache for, and the cache_helpers file obviously needs to import the models file.
So my question is: What is a better way of doing this that doesn't expose the code to circular import issues (or maybe just a smarter way in general)? Ideally we could do invalidation in a better, less manual way as well. My guess is that the use of Django Custom Managers and Signals is the best place to start, getting rid of the cache_helpers file altogether, but does anyone have any better suggestions or direction on where to look?
A:
A general Python pattern for avoiding circular imports is to put one set of the imports inside the dependent functions:
# module_a.py
import module_b
def foo():
return "bar"
def bar():
return module_b.baz()
# module_b.py
def baz():
import module_a
return module_a.foo()
As for caching, it sounds like you need a function that looks a bit like this:
def get_cached(model, **kwargs):
timeout = kwargs.pop('timeout', 60 * 60)
key = '%s:%s' % (model, kwargs)
result = cache.get(key)
if result is None:
result = model.objects.get(**kwargs)
cache.set(key, result, timeout)
return result
Now you don't need to create "getbyid" methods for every one of your models. You can do this instead:
blog_entry = get_cached(BlogEntry, pk = 4)
You could write similar functions for dealing with full QuerySets instead of just the single model objects fetched with the .get() method.
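Since the question mentions signals, here is a hedged sketch of pairing that get_cached helper with automatic invalidation; it only covers the pk-based key, and it assumes the key format above stays exactly the same (BlogEntry is the example model from above):
from django.core.cache import cache
from django.db.models.signals import post_save, post_delete

def invalidate(sender, instance, **kwargs):
    # drop the cached copy whenever the row is saved or deleted
    cache.delete('%s:%s' % (sender, {'pk': instance.pk}))

post_save.connect(invalidate, sender=BlogEntry)
post_delete.connect(invalidate, sender=BlogEntry)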
A:
Since you indicated you're caching Django ORM model instances, take a look at django-orm-cache, which provides automated caching of model instances and is smart about when to invalidate the cache.
Your circular imports won't be an issue - all you need to do is extend the models you need to cache from the ormcache.models.CachedModel class instead of Django's django.db.models.Model, and you get caching "for free."
|
Patterns for dealing with memcache Caching in Django
|
I have a big Django project with several interrelated apps and a lot of caching in use. It currently has a file which stores cache helper functions. So for example, get_object_x(id) would check the cache for this object and, if it wasn't there, go to the DB, pull it from there and return it, caching it along the way. This same pattern is followed for caching groups of objects, and the file is also used for invalidation methods.
A problem has arisen in the imports between apps though. The app models file has a number of helper functions which we want to use cache for, and the cache_helpers file obviously needs to import the models file.
So my question is: What is a better way of doing this that doesn't expose the code to circular import issues (or maybe just a smarter way in general)? Ideally we could do invalidation in a better, less manual way as well. My guess is that the use of Django Custom Managers and Signals is the best place to start, getting rid of the cache_helpers file altogether, but does anyone have any better suggestions or direction on where to look?
|
[
"A general Python pattern for avoiding circular imports is to put one set of the imports inside the dependent functions:\n# module_a.py\nimport module_b\n\ndef foo():\n return \"bar\"\n\ndef bar():\n return module_b.baz()\n\n# module_b.py\ndef baz():\n import module_a\n return module_a.foo()\n\nAs for caching, it sounds like you need a function that looks a bit like this:\ndef get_cached(model, **kwargs):\n timeout = kwargs.pop('timeout', 60 * 60)\n key = '%s:%s' % (model, kwargs)\n result = cache.get(key)\n if result is None:\n result = model.objects.get(**kwargs)\n cache.set(key, result, timeout)\n return result\n\nNow you don't need to create \"getbyid\" methods for every one of your models. You can do this instead:\nblog_entry = get_cached(BlogEntry, pk = 4)\n\nYou could write similar functions for dealing with full QuerySets instead of just single model objects using .get() method.\n",
"Since you indicated you're caching Django ORM model instances, take a look at django-orm-cache, which provides automated caching of model instances and is smart about when to invalidate the cache.\nYour circular imports won't be an issue - all you need to do is extend the models you need to cache from the ormcache.models.CachedModel class instead of Django's django.db.models.Model, and you get caching \"for free.\"\n"
] |
[
7,
3
] |
[] |
[] |
[
"django",
"memcached",
"python"
] |
stackoverflow_0001844432_django_memcached_python.txt
|
Q:
Python long multiplication
I'm in need of an algorithm faster than the current normal Python long multiplication.
I tried to find a decent Karatsuba implementation, but I can't.
def main():
a=long(raw_input())
if(a<0):
a=a*-1
a=((a*(a+1)/2)-1)
print(-a)
else:
a=(a*(a+1))/2
print(a)
main()
As you see, it's nothing complicated, just a few multiplications. But it has to handle numbers with up to 100000 digits in under 2.5 sec.
I'd like some snippet of a function or just a link to some implementation of a faster multiplication function, or anything that helps.
A:
I'm the author of the DecInt (Decimal Integer) library so I'll make a few comments.
The DecInt library was specifically designed to work with very large integers that needed to be converted to decimal format. The problem with converting to decimal format is that most arbitrary-precision libraries store values in binary. This is fastest and most efficient for utilizing memory but converting from binary to decimal is usually slow. Python's binary to decimal conversion uses an O(n^2) algorithm and gets slow very quickly.
DecInt uses a large decimal radix (usually 10^250) and stores the very large number in blocks of 250 digits. Converting a very large number to decimal format now runs in O(n).
Naive, or grade school, multiplication has a running time of O(n^2). Python uses Karatsuba multiplication which has running time of O(n^1.585). DecInt uses a combination of Karatsuba, Toom-Cook, and Nussbaumer convolution to get a running time of O(n*ln(n)).
Even though DecInt has much higher overhead, the combination of O(n*ln(n)) multiplication and O(n) conversion will eventually be faster than Python's O(n^1.585) multiplication and O(n^2) conversion.
Since most computations don't require every result to be displayed in decimal format, almost every arbitrary-precision library uses binary since that makes the computations easier. DecInt targets a very small niche. For large enough numbers, DecInt will be faster for multiplication and division than native Python. But if you are after pure performance, a library like GMPY will be the fastest.
I'm glad you found DecInt helpful.
A:
15.9 ms on my slow notebook. It is the print that is slowing you down. Converting binary numbers to decimal is quite slow, and it is a required step of printing them out. If you need to output the number you should try the DecInt that ChristopheD mentioned already.
DecInt will be slower doing the multiply but will make the print much faster
In [34]: a=2**333000
In [35]: len(str(a))
Out[35]: 100243
In [36]: b=2**333001
In [37]: len(str(b))
Out[37]: 100244
In [38]: timeit c=a*b
10 loops, best of 3: 15.9 ms per loop
Here is an example with a slightly modified version of your code. Note that converting a 100000 digit string to a long already takes ~1sec on this computer
In [1]: def f(a):
...: if(a<0):
...: a=a*-1
...: a=((a*(a+1)/2)-1)
...: else:
...: a=(a*(a+1))/2
...: return a
...:
In [2]: a=3**200000
In [3]: len(str(a))
Out[3]: 95425
In [4]: timeit f(a)
10 loops, best of 3: 417 ms per loop
A:
I suggest you get the Sage math tool, which has just about every Python math tool ever made rolled into one package. See if there is a nice fast arbitrary precision math tool in Sage that meets your needs.
A:
You could have a look at the implementation of the DecInt module (a pure Python version is available (Toom-Cook), although it will probably be fastest when using gmpy).
|
Python long multiplication
|
I'm in need of an algorithm faster than the current normal Python long multiplication.
I tried to find a decent Karatsuba implementation, but I can't.
def main():
a=long(raw_input())
if(a<0):
a=a*-1
a=((a*(a+1)/2)-1)
print(-a)
else:
a=(a*(a+1))/2
print(a)
main()
As you see, it's nothing complicated, just a few multiplications. But it has to handle numbers with up to 100000 digits in under 2.5 sec.
I'd like some snippet of a function or just a link to some implementation of a faster multiplication function, or anything that helps.
|
[
"I'm the author of the DecInt (Decimal Integer) library so I'll make a few comments.\nThe DecInt library was specifically designed to work with very large integers that needed to be converted to decimal format. The problem with converting to decimal format is that most arbitrary-precision libraries store values in binary. This is fastest and most efficient for utilizing memory but converting from binary to decimal is usually slow. Python's binary to decimal conversion uses an O(n^2) algorithm and gets slow very quickly. \nDecInt uses a large decimal radix (usually 10^250) and stores the very large number in blocks of 250 digits. Converting a very large number to decimal format now runs in O(n). \nNaive, or grade school, multiplication has a running time of O(n^2). Python uses Karatsuba multiplication which has running time of O(n^1.585). DecInt uses a combination of Karatsuba, Toom-Cook, and Nussbaumer convolution to get a running time of O(n*ln(n)).\nEven though DecInt has much higher overhead, the combination of O(n*ln(n)) multiplication and O(n) conversion will eventually be faster than Python's O(n^1.585) multiplication and O(n^2) conversion. \nSince most computations don't require every result to be displayed in decimal format, almost every arbitrary-precision library uses binary since that makes the computations easier. DecInt targets a very small niche. For large enough numbers, DecInt will be faster for multiplication and division than native Python. But if you are after pure performance, a library like GMPY will be the fastest.\nI'm glad you found DecInt helpful.\n",
"15.9 ms on my slow notebook. It is the print that is slowing you down. Converting to binary numbers to decimal is quite slow, which is a required step of printing it out. If you need to output the number you should try the DecInt ChristopheD mentioned already.\nDecInt will be slower doing the multiply but make the print much faster\nIn [34]: a=2**333000\n\nIn [35]: len(str(a))\nOut[35]: 100243\n\nIn [36]: b=2**333001\n\nIn [37]: len(str(b))\nOut[37]: 100244\n\nIn [38]: timeit c=a*b\n10 loops, best of 3: 15.9 ms per loop\n\nHere is an example with a slightly modified version of your code. Note that converting a 100000 digit string to a long already takes ~1sec on this computer\nIn [1]: def f(a):\n ...: if(a<0):\n ...: a=a*-1\n ...: a=((a*(a+1)/2)-1)\n ...: else:\n ...: a=(a*(a+1))/2\n ...: return a\n ...: \n\nIn [2]: a=3**200000\n\nIn [3]: len(str(a))\nOut[3]: 95425\n\nIn [4]: timeit f(a)\n10 loops, best of 3: 417 ms per loop\n\n",
"I suggest you get the Sage math tool, which has just about every Python math tool ever made rolled into one package. See if there is a nice fast arbitrary precision math tool in Sage that meets your needs.\n",
"You could have a look at the implementation of the DecInt module (pure Python version is available (Toom-Cook) although the fastest it will probably be when using gmpy).\n"
] |
[
27,
9,
2,
2
] |
[] |
[] |
[
"biginteger",
"multiplication",
"python"
] |
stackoverflow_0001835857_biginteger_multiplication_python.txt
|
Q:
Submenu items in Nautilus right-click menu
I am trying to write an extension for Nautilus which adds an item to the menu that appears when you right-click a file (as shown in the image).
However, I would like to add a submenu to my custom menu item.
I downloaded a 'nautilus-python' package which includes examples of how to write extensions for Nautilus (and so far it has turned out to be the best/only documentation I have found). In it is a file called submenu.py, which contains the following:
import nautilus
class ExampleMenuProvider(nautilus.MenuProvider):
def get_file_items(self, window, files):
menuitem = nautilus.MenuItem('ExampleMenuProvider::Foo', 'Foo', '')
submenu = nautilus.Menu()
menuitem.set_submenu(submenu)
menuitem = nautilus.MenuItem('ExampleMenuProvider::Bar','Bar','')
submenu.append_item(menuitem)
return menuitem,
# FIXME: Why isn't this working?
def get_background_items(self, window, file):
submenu = nautilus.Menu()
submenu.append_item(nautilus.MenuItem('ExampleMenuProvider::Bar', 'Bar', ''))
menuitem = nautilus.MenuItem('ExampleMenuProvider::Foo', 'Foo', '')
menuitem.set_submenu(submenu)
return menuitem,
PS: I didn't add "# FIXME: Why isn't this working?"; that is actually included in the example.
The code doesn't work. Even if I take out the second function and leave just the first one, it doesn't work.
Any help would be appreciated, thanks.
A:
I found the solution.
you need an __init__ method declared.
I had to make some variable name changes to the example above
import nautilus
class ExampleMenuProvider(nautilus.MenuProvider):
def __init__(self):
pass
def get_file_items(self, window, files):
submenu = nautilus.Menu()
item = nautilus.MenuItem('Nautilus::sbi','Nau-T','image')
item.set_submenu(submenu)
item_two = nautilus.MenuItem('Nautilus::s','www','image')
submenu.append_item(item_two)
return item,
|
Submenu items in Nautilus right-click menu
|
I am trying to write an extension for Nautilus which adds an item to the menu that appears when you right-click a file (as shown in the image).
However, I would like to add a submenu to my custom menu item.
I downloaded a 'nautilus-python' package which includes examples of how to write extensions for Nautilus (and so far it has turned out to be the best/only documentation I have found). In it is a file called submenu.py, which contains the following:
import nautilus
class ExampleMenuProvider(nautilus.MenuProvider):
def get_file_items(self, window, files):
menuitem = nautilus.MenuItem('ExampleMenuProvider::Foo', 'Foo', '')
submenu = nautilus.Menu()
menuitem.set_submenu(submenu)
menuitem = nautilus.MenuItem('ExampleMenuProvider::Bar','Bar','')
submenu.append_item(menuitem)
return menuitem,
# FIXME: Why isn't this working?
def get_background_items(self, window, file):
submenu = nautilus.Menu()
submenu.append_item(nautilus.MenuItem('ExampleMenuProvider::Bar', 'Bar', ''))
menuitem = nautilus.MenuItem('ExampleMenuProvider::Foo', 'Foo', '')
menuitem.set_submenu(submenu)
return menuitem,
PS: I didn't add "# FIXME: Why isn't this working?"; that is actually included in the example.
The code doesn't work. Even if I take out the second function and leave just the first one, it doesn't work.
Any help would be appreciated, thanks.
|
[
"I found the solution.\n\nyou need a init function declared.\nhad to make some variable name change to the example above\nimport nautilus\n\nclass ExampleMenuProvider(nautilus.MenuProvider):\n def __init__(self):\n pass\n\n def get_file_items(self, window, files):\n submenu = nautilus.Menu()\n\n item = nautilus.MenuItem('Nautilus::sbi','Nau-T','image')\n item.set_submenu(submenu)\n\n item_two = nautilus.MenuItem('Nautilus::s','www','image')\n submenu.append_item(item_two)\n\n return item,\n\n\n"
] |
[
4
] |
[] |
[] |
[
"gnome",
"gtk",
"nautilus",
"python"
] |
stackoverflow_0001845681_gnome_gtk_nautilus_python.txt
|
Q:
Howto Embed , load .swf into Pygame?
How can I load a .swf(flash) file in Pygame?
A:
As far as I know, Pygame does not have a browser widget that supports playing Flash. To get this working, you'll have to embed a window from a web browser that does (e.g. Firefox, IE, Safari, etc.).
Alternatively, you can use Python's C/C++ interface to integrate the GNU Gnash project (an open source Flash player) into your program. Unfortunately Gnash only supports up to Flash 9, and it may be difficult to make your own Gnash UI widget.
A:
Take a look at Gnash, you might find this question relevant.
|
Howto Embed , load .swf into Pygame?
|
How can I load a .swf(flash) file in Pygame?
|
[
"As far as I know, Pygame does not have a browser widget that supports playing flash. To get this working, you'll have to embed a window from a web browser that does (i.e. FireFox, IE, Safari, etc).\nAlternatively, you can use Python's C/C++ interface integrate the GNU Gnash project (an open source flash player) into your program. Unfortunately Gnash only supports up to flash 9, and it may be difficult to make your own Gnash UI widget.\n",
"Take a look at Gnash, you might find this question relevant.\n"
] |
[
0,
0
] |
[] |
[] |
[
"pygame",
"python"
] |
stackoverflow_0001844535_pygame_python.txt
|
Q:
game design - handling bonuses / duck typing - python
I am currently faced with a design problem in my game design, not terrible but it bothers me enough so I want to ask others opinions :-)
I am currently experimenting with pygame, I have developed a little space shooter and now I would like to handle some bonuses.
Right now I have an abstract class Bonus from which derive all the bonuses currently implemented: a "health bonus" which gives back some health to the player, a "death bonus" which drops the player's health to 1.
In my game loop here is what I do (roughly):
def testCollisionBonusBolt():
#bolts are sprites fired by the player that allow him to get the bonuses
collisions = pygame.sprite.groupcollide(bonusesGroup, boltsGroup, True, True)
for col in collisions:
player.bonuses.append(col)
And right after I tell the player to use the bonuses
class Player:
...
def useBonuses(self):
for bonus in self.bonuses:
bonus.use(self)
Until now everything is OK, but I would like to add a "bomb bonus" which, when shot by the player, explodes and kills the enemies in his surroundings.
This "bonus" implements the "use(target)" method of my abstract class Bonus just like the others do, but I feel kind of bad adding such a bonus to the list of the player's bonuses as there should be no relation between them!
The concept behind a bonus is that it is "something that does something to something"; previously the targets were my player class but now it is not as clear...
Of course instead of calling player.useBonuses() after detecting which bonuses were shot I could test the type of the bonuses (using isinstance, for example), but after all the discussions I've read about duck typing and why it's the Pythonic way, I am wondering how I can manage my bonus problem?
Thanks for reading this far, hope you guys can help me out!
regards
A:
If you don't want a certain col to go in player.bonuses but elsewhere, make a toplayer method in the abstract class Bonus with the default implementation:
def toplayer(self, player):
player.bonuses.append(self)
and override it in the bomb bonus class. (You don't have to inherit, as you mention, but there's no trouble in so doing if it gets you some easy functionality reuse.)
So for example a player could have an attribute bomb, normally None, and the bomb bonus class could do:
def toplayer(self, player):
player.bomb = self
And when the time comes to act on all the bonus accrued, it could start with
if player.bomb is not None:
player.bomb.explode(player.position)
or the like.
A:
I don't write in python but here's my recommendation: Load up your player with each type of weapon he can get (even ones through bonuses) and set the ammo on the ones that he gets through bonuses to 0. Then when your player picks up a "bomb bonus" (or whatever) add one ammo to the bomb weapon on the player. That ought to work nicely.
A:
I think you're on the right track - I would say the "Bomb Bonus" is still related to the player object because it affects the enemies around the player. You just need to implement the "Bomb Bonus"'s use() method like this:
class BombBonus(Bonus):
def use(self, player):
assert isinstance(player, Player)
# TODO: find all enemies that are close to the player - assuming you
# have all enemy objects in a list call 'enemies'
global enemies
for enemy in enemies:
if distance(player.position, enemy.position) < 400:
# if the distance between player and an enemy is less than 400
# (change this value to your liking), destroy that enemy.
enemy.explode()
You'll need to work out your own implementation of distance().
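One possible distance(), assuming positions are (x, y) tuples:
import math

def distance(pos_a, pos_b):
    # straight-line (Euclidean) distance between two points
    return math.hypot(pos_a[0] - pos_b[0], pos_a[1] - pos_b[1])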
Developing a game without a detailed plan means that you're often going to have new ideas which almost fit your existing objects, and you will need to choose whether to expand your existing classes with new features to support your new idea, or create a new set of classes because you think the new idea is too different. There's no right and wrong here, just make sure your ideas stay organized in a way that makes sense to you.
A:
I had a bonus system in an asteroid clone I wrote a few years ago. It lives (as much as a dead project can live) on Bitbucket now. I don't think it's as flexible as you seem to be aiming for. But small "bonus" entities spawn and move around; if they collide with an asteroid, the bonus is removed and missed by the player. If the player collides with it, bonus points are awarded.
|
game design - handling bonuses / duck typing - python
|
I am currently faced with a design problem in my game design; it's not terrible, but it bothers me enough that I want to ask others' opinions :-)
I am currently experimenting with pygame, I have developed a little space shooter and now I would like to handle some bonuses.
Right now I have an abstract class Bonus from which derive all the bonuses currently implemented: a "health bonus" which gives back some health to the player, a "death bonus" which drops the player's health to 1.
In my game loop here is what I do (roughly):
def testCollisionBonusBolt():
#bolts are sprites fired by the player that allow him to get the bonuses
collisions = pygame.sprite.groupcollide(bonusesGroup, boltsGroup, True, True)
for col in collisions:
player.bonuses.append(col)
And right after I tell the player to use the bonuses
class Player:
...
def useBonuses(self):
for bonus in self.bonuses:
bonus.use(self)
Until now everything is OK, but I would like to add a "bomb bonus" which, when shot by the player, explodes and kills the enemies in his surroundings.
This "bonus" implements the "use(target)" method of my abstract class Bonus just like the others do, but I feel kind of bad adding such a bonus to the list of the player's bonuses as there should be no relation between them!
The concept behind a bonus is that it is "something that does something to something"; previously the targets were my player class but now it is not as clear...
Of course instead of calling player.useBonuses() after detecting which bonuses were shot I could test the type of the bonuses (using isinstance, for example), but after all the discussions I've read about duck typing and why it's the Pythonic way, I am wondering how I can manage my bonus problem?
Thanks for reading this far, hope you guys can help me out!
regards
|
[
"If you don't want a certain col to go in player.bonuses but elsewhere, make a toplayer method in the abstract class Bonus with the default implementation:\ndef toplayer(self, player):\n player.bonuses.append(self)\n\nand override it in the bomb bonus class. (You don't have to inherit, as you mention, but there's no trouble in so doing if it gets you some easy functionality reuse.)\nSo for example a player could have an attribute bomb, normally None, and the bomb bonus class could do:\ndef toplayer(self, player):\n player.bomb = self\n\nAnd when the time comes to act on all the bonus accrued, it could start with\nif player.bomb is not None:\n player.bomb.explode(player.position)\n\nor the like.\n",
"I don't write in python but here's my recommendation: Load up your player with each type of weapon he can get (even ones through bonuses) and set the ammo on the ones that he gets through bonuses to 0. Then when your player picks up a \"bomb bonus\" (or whatever) add one ammo to the bomb weapon on the player. That ought to work nicely.\n",
"I think you're on the right track - I would say the \"Bomb Bonus\" is still related to the player object because it affects the enemies around the player. You just need to implement the \"Bomb Bonus\"'s use() method like this:\nclass BombBonus(Bonus):\n def use(self, player):\n assert isinstance(player, Player)\n # TODO: find all enemies that are close to the player - assuming you\n # have all enemy objects in a list call 'enemies'\n global enemies\n for enemy in enemies:\n if distance(player.position, enemy.position) < 400:\n # if the distance between player and an enemy is less than 400\n # (change this value to your liking), destroy that enemy.\n enemy.explode()\n\nYou'll need to work out your own implementation of distance().\nDeveloping a game without a detailed plan means that you're often going to have new ideas which almost fit your existing objects, and you will need to choose whether to expand your existing classes with new features to support your new idea, or create a new set of classes because you think the new idea is too different. There's no right and wrong here, just make sure your ideas stay organized in a way that makes sense to you.\n",
"I had a bonus system in an asteroid clone I wrote a few years ago. It lives (as much as a dead project can live) on Bitbucket now. I dont think its as flexible as you seem to be aiming for. But small \"bonus\" entities spawns and move around, if they collide with an asteroid, the bonus is removed and missed by the player. If the player collides with it, bonus points are awarded.\n"
] |
[
1,
1,
0,
0
] |
[] |
[] |
[
"duck_typing",
"pygame",
"python"
] |
stackoverflow_0001085532_duck_typing_pygame_python.txt
|
Q:
Django makes strange multiple GET calls on admin-site
My Django project is making some strange fanned-out GET calls when opening one model from the admin site, and I have no idea where they come from. I will try to provide as much information as possible.
Imagine this model called 'Rating', which holds a reference, i.e. foreign key, to 'Item', 'Usecase' and 'Rater'. So the Item can be rated under a certain case of use by some rater. Furthermore these together should be unique.
Now when I open the list of 'Ratings' on the admin site, Django blows out a couple of strange GET calls, which doesn't happen with the other models. This even happens when there are no ratings. Actually my 'Rating' class is called 'Testfragenbewertung' in German. This is what gets called upon clicking on the model on the admin site:
[04/Dec/2009 13:02:43] "GET /admin/MYAPP/testfragenbewertung/ HTTP/1.1" 200 3739
[04/Dec/2009 13:02:43] "GET /t HTTP/1.1" 404 1913
[04/Dec/2009 13:02:43] "GET /e HTTP/1.1" 404 1913
[04/Dec/2009 13:02:43] "GET /s HTTP/1.1" 404 1913
[04/Dec/2009 13:02:43] "GET /f HTTP/1.1" 404 1913
[04/Dec/2009 13:02:43] "GET /r HTTP/1.1" 404 1913
[04/Dec/2009 13:02:43] "GET /a HTTP/1.1" 404 1913
[04/Dec/2009 13:02:43] "GET /g HTTP/1.1" 404 1913
[04/Dec/2009 13:02:43] "GET /n HTTP/1.1" 404 1913
[04/Dec/2009 13:02:43] "GET / HTTP/1.1" 404 1910
[04/Dec/2009 13:02:43] "GET /j HTTP/1.1" 404 1913
[04/Dec/2009 13:02:43] "GET /t HTTP/1.1" 404 1913
[04/Dec/2009 13:02:44] "GET /e HTTP/1.1" 404 1913
[04/Dec/2009 13:02:44] "GET /s HTTP/1.1" 404 1913
[04/Dec/2009 13:02:44] "GET /t HTTP/1.1" 404 1913
[04/Dec/2009 13:02:44] "GET /f HTTP/1.1" 404 1913
[04/Dec/2009 13:02:44] "GET /r HTTP/1.1" 404 1913
[04/Dec/2009 13:02:44] "GET /a HTTP/1.1" 404 1913
[04/Dec/2009 13:02:44] "GET /g HTTP/1.1" 404 1913
[04/Dec/2009 13:02:44] "GET /e HTTP/1.1" 404 1913
[04/Dec/2009 13:02:44] "GET /n HTTP/1.1" 404 1913
[04/Dec/2009 13:02:44] "GET / HTTP/1.1" 404 1910
[04/Dec/2009 13:02:44] "GET /j HTTP/1.1" 404 1913
[04/Dec/2009 13:02:44] "GET /s HTTP/1.1" 404 1913
Is this supposed to happen? It doesn't with any other model. As you can see, those calls together are somehow the letters of the name of the class, with a few exceptions. Have I overlooked something very stupid, or is it possibly a bug in my Django 1.2 pre-alpha SVN-11782? Thanks for any help or hints.
A:
I'd be interested to see the admin.py for that app. I'd guess there's something wrong in the media definitions for the form - the clue is the 'js' at the end of the list. You're probably using a string somewhere Django is expecting a tuple.
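In other words, the classic mistake looks like the following sketch (a guessed reconstruction -- the real admin.py isn't shown and the file name is made up). Django iterates over the js value, and iterating over a string yields one bogus one-character URL per letter, which is exactly the pattern of GETs in the question:
from django.contrib import admin

class BrokenAdmin(admin.ModelAdmin):
    class Media:
        js = 'testfragen.js'     # WRONG: a bare string is iterated letter by letter

class FixedAdmin(admin.ModelAdmin):
    class Media:
        js = ('testfragen.js',)  # RIGHT: a one-element tuple (note the comma)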
|
Django makes strange multiple GET calls on admin-site
|
My Django project is making some strange fanned-out GET calls when opening one model from the admin site, and I have no idea where they come from. I will try to provide as much information as possible.
Imagine this model called 'Rating', which holds a reference, i.e. foreign key, to 'Item', 'Usecase' and 'Rater'. So the Item can be rated under a certain case of use by some rater. Furthermore these together should be unique.
Now when I open the list of 'Ratings' on the admin site, Django blows out a couple of strange GET calls, which doesn't happen with the other models. This even happens when there are no ratings. Actually my 'Rating' class is called 'Testfragenbewertung' in German. This is what gets called upon clicking on the model on the admin site:
[04/Dec/2009 13:02:43] "GET /admin/MYAPP/testfragenbewertung/ HTTP/1.1" 200 3739
[04/Dec/2009 13:02:43] "GET /t HTTP/1.1" 404 1913
[04/Dec/2009 13:02:43] "GET /e HTTP/1.1" 404 1913
[04/Dec/2009 13:02:43] "GET /s HTTP/1.1" 404 1913
[04/Dec/2009 13:02:43] "GET /f HTTP/1.1" 404 1913
[04/Dec/2009 13:02:43] "GET /r HTTP/1.1" 404 1913
[04/Dec/2009 13:02:43] "GET /a HTTP/1.1" 404 1913
[04/Dec/2009 13:02:43] "GET /g HTTP/1.1" 404 1913
[04/Dec/2009 13:02:43] "GET /n HTTP/1.1" 404 1913
[04/Dec/2009 13:02:43] "GET / HTTP/1.1" 404 1910
[04/Dec/2009 13:02:43] "GET /j HTTP/1.1" 404 1913
[04/Dec/2009 13:02:43] "GET /t HTTP/1.1" 404 1913
[04/Dec/2009 13:02:44] "GET /e HTTP/1.1" 404 1913
[04/Dec/2009 13:02:44] "GET /s HTTP/1.1" 404 1913
[04/Dec/2009 13:02:44] "GET /t HTTP/1.1" 404 1913
[04/Dec/2009 13:02:44] "GET /f HTTP/1.1" 404 1913
[04/Dec/2009 13:02:44] "GET /r HTTP/1.1" 404 1913
[04/Dec/2009 13:02:44] "GET /a HTTP/1.1" 404 1913
[04/Dec/2009 13:02:44] "GET /g HTTP/1.1" 404 1913
[04/Dec/2009 13:02:44] "GET /e HTTP/1.1" 404 1913
[04/Dec/2009 13:02:44] "GET /n HTTP/1.1" 404 1913
[04/Dec/2009 13:02:44] "GET / HTTP/1.1" 404 1910
[04/Dec/2009 13:02:44] "GET /j HTTP/1.1" 404 1913
[04/Dec/2009 13:02:44] "GET /s HTTP/1.1" 404 1913
Is this supposed to happen, given that it doesn't with any other model? As you can see, those calls together spell out the letters of the class name, with a few exceptions. Have I overlooked something very stupid, or is it possibly a bug in my Django 1.2 pre-alpha SVN-11782? Thanks for any help or hints.
|
[
"I'd be interested to see the admin.py for that app. I'd guess there's something wrong in the media definitions for the form - the clue is the 'js' at the end of the list. You're probably using a string somewhere Django is expecting a tuple.\n"
] |
[
2
] |
[] |
[] |
[
"django",
"get",
"python"
] |
stackoverflow_0001846571_django_get_python.txt
|
Q:
Python mechanize loses attributes on second open
This is a really specialized case and I feel awkward asking it; however I'm at wits end working on it.
I need to follow a tracking number through a form and to a results page so I've been using mechanize in python, the link after form submission is embedded in javascript so I can't simply follow_link. What I want to do is to regex out the url and then ask call open() on that, however when I do - I run into some problems.
I can call br.geturl() and br.title() just fine on the target page, but when it comes time to read the source of the page in question, it throws
AttributeError: mechanize._mechanize.Browser instance has no attribute read (perhaps you forgot to .select_form()?)
Is there some way to do this or am I monkey-patching it too much, any advice would be terrific
edit [more code {really ugly just trying to get it to work}]:
cosn="########"
baseurl="http://aaa.com/"
search="thing.do"
br=Browser()
br.open(baseurl+search)
br.select_form('traceForm')
br['consignments']=cosn
req=br.submit()
pars=Soup(req.read())
found_url=re.match(r"javascript:window.location.href = '(?P<url>[\w\d=&?\.]+)", pars.find('td', attrs={'class':'select'})['onclick']).group('url')
br.open(baseurl+found_url)
print br.title() # works
print br.geturl() # works
print br.read() # throws exception
A:
You can't call .read() on the Browser instance itself, because it doesn't have such a method. Browser.response() returns an object that does have a read method, so if you want the body of the response you'd need to do:
response = br.response()
response.read()
For the future, you could use dir(obj) to see the contents of the object obj, be it a browser or anything else.
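As a hedged aside (untested here, but standard mechanize behaviour as far as I know): br.open() also returns the response object directly, so the last lines of the question could be collapsed to:
resp = br.open(baseurl + found_url)
print resp.read()  # resp is the response object, which does have .read()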
|
Python mechanize loses attributes on second open
|
This is a really specialized case and I feel awkward asking it; however, I'm at wits' end working on it.
I need to follow a tracking number through a form to a results page, so I've been using mechanize in Python. The link after form submission is embedded in JavaScript, so I can't simply follow_link. What I want to do is regex out the URL and then call open() on it; however, when I do, I run into some problems.
I can call br.geturl() and br.title() just fine on the target page, but when it comes time to read the source of the page in question, it throws
AttributeError: mechanize._mechanize.Browser instance has no attribute read (perhaps you forgot to .select_form()?)
Is there some way to do this or am I monkey-patching it too much, any advice would be terrific
edit [more code {really ugly just trying to get it to work}]:
cosn="########"
baseurl="http://aaa.com/"
search="thing.do"
br=Browser()
br.open(baseurl+search)
br.select_form('traceForm')
br['consignments']=cosn
req=br.submit()
pars=Soup(req.read())
found_url=re.match(r"javascript:window.location.href = '(?P<url>[\w\d=&?\.]+)", pars.find('td', attrs={'class':'select'})['onclick']).group('url')
br.open(baseurl+found_url)
print br.title() # works
print br.geturl() # works
print br.read() # throws exception
|
[
"You never make first .read method call on Browser instance. That's because it doesn't have such method. The Browswer.response has read method, so if you want to get the body of response you'd need to do:\nresponse = br.response()\nresponse.read()\n\nFor the future, you could use dir(obj) to see the content of the object obj, be it browser or anything else.\n"
] |
[
7
] |
[] |
[] |
[
"mechanize",
"python"
] |
stackoverflow_0001846476_mechanize_python.txt
|
Q:
Getting python.exe path at run time
On Windows, how do I find the path to python.exe from within a script at runtime?
A:
>>> import sys
>>> sys.executable
'C:\\Program Files\\Python31\\pythonw.exe'
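If you only want the directory containing the interpreter, a small follow-up:
>>> import os, sys
>>> os.path.dirname(sys.executable)
'C:\\Program Files\\Python31'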
|
Getting python.exe path at run time
|
On Windows, how do I find the path to python.exe from within a script at runtime?
|
[
">>> import sys\n>>> sys.executable\n'C:\\\\Program Files\\\\Python31\\\\pythonw.exe'\n\n"
] |
[
11
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001846888_python.txt
|
Q:
Pivoting SQLite table, setwise like SQL should be
I have some data. 224,000 rows of it, in a SQLite database. I want to extract time series information from it to feed a data visualisation tool. Essentially, each row in the db is an event that has (among other things not strictly relevant) a time-date group in seconds since the epoch and a name responsible for it. I want to extract how many events each name has for every week in the db.
That's simple enough:
SELECT COUNT(*),
name,
strf("%W:%Y", time, "unixepoch")
FROM events
GROUP BY strf("%W:%Y", time, "unixepoch"), name
ORDER BY time
and we get about six thousand rows of data.
count name week:year
23............ fudge.......23:2009
etc...
But I don't want a row for each name in each week - I want a row for each name, and a column for each week, like this:
Name 23:2009 24:2009 25:2009
fudge........23............6............19
fish.........1.............0............12
etc...
Now, the monitoring process has been running for 69 weeks, and the count of unique names is 502. So clearly, I'm far from keen on any solution that involves hardcoding all the columns and still less the rows. I'm less unkeen on anything that involves iterating over the lot, say with python's executemany(), but I'm willing to accept it if necessary. SQL is meant to be set-wise, dammit.
A:
A good approach in cases like this is not to push SQL to the point where it becomes convoluted and hard to understand and maintain. Let SQL do what it conveniently can and post-process the query results in Python.
Here's a cut-down version of a simple crosstab generator that I wrote. The full version delivers row/column/grand totals.
You'll note that it has built-in "group by" -- the original use-case was for summarising data obtained from Excel files using Python and xlrd.
The row_key and col_key that you supply don't need to be strings as in the example; they can be tuples -- e.g. (year, week) in your case -- or they could be integers -- e.g. you have a mapping of string column name to integer sort key.
import sys
class CrossTab(object):
def __init__(
self,
missing=0, # what to return for an empty cell. Alternatives: '', 0.0, None, 'NULL'
):
self.missing = missing
self.col_key_set = set()
self.cell_dict = {}
self.headings_OK = False
def add_item(self, row_key, col_key, value):
self.col_key_set.add(col_key)
try:
self.cell_dict[row_key][col_key] += value
except KeyError:
try:
self.cell_dict[row_key][col_key] = value
except KeyError:
self.cell_dict[row_key] = {col_key: value}
def _process_headings(self):
if self.headings_OK:
return
self.row_headings = list(sorted(self.cell_dict.iterkeys()))
self.col_headings = list(sorted(self.col_key_set))
self.headings_OK = True
def get_col_headings(self):
self._process_headings()
return self.col_headings
def generate_row_info(self):
self._process_headings()
for row_key in self.row_headings:
row_dict = self.cell_dict[row_key]
row_vals = [row_dict.get(col_key, self.missing) for col_key in self.col_headings]
yield row_key, row_vals
def dump(self, f=None, header=None, footer='', ):
if f is None:
f = sys.stdout
alist = self.__dict__.items()
alist.sort()
if header is not None:
print >> f, header
for attr, value in alist:
print >> f, "%s: %r" % (attr, value)
if footer is not None:
print >> f, footer
if __name__ == "__main__":
data = [
['Rob', 'Morn', 240],
['Rob', 'Aft', 300],
['Joe', 'Morn', 70],
['Joe', 'Aft', 80],
['Jill', 'Morn', 100],
['Jill', 'Aft', 150],
['Rob', 'Aft', 40],
['Rob', 'aft', 5],
['Dozy', 'Aft', 1],
# Dozy doesn't show up till lunch-time
['Nemo', 'never', -1],
]
NAME, TIME, AMOUNT = range(3)
xlate_time = {'morn': "AM", "aft": "PM"}
print
ctab = CrossTab(missing=None, )
# ctab.dump(header='=== after init ===')
for s in data:
ctab.add_item(
row_key=s[NAME],
col_key= xlate_time.get(s[TIME].lower(), "XXXX"),
value=s[AMOUNT])
# ctab.dump(header='=== after add_item ===')
print ctab.get_col_headings()
# ctab.dump(header='=== after get_col_headings ===')
for x in ctab.generate_row_info():
print x
Output:
['AM', 'PM', 'XXXX']
('Dozy', [None, 1, None])
('Jill', [100, 150, None])
('Joe', [70, 80, None])
('Nemo', [None, None, -1])
('Rob', [240, 345, None])
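To connect this to the original query, a hedged sketch of the glue code (the database filename is hypothetical, and note that SQLite's date function is spelled strftime, not strf):
import sqlite3

conn = sqlite3.connect('events.db')  # hypothetical path
ctab = CrossTab(missing=0)
sql = ('SELECT COUNT(*), name, strftime("%W:%Y", time, "unixepoch") AS wk '
       'FROM events GROUP BY wk, name')
for count, name, wk in conn.execute(sql):
    ctab.add_item(row_key=name, col_key=wk, value=count)

print ctab.get_col_headings()
for row_key, row_vals in ctab.generate_row_info():
    print row_key, row_vals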
A:
I would first do your query
SELECT COUNT(*),
name,
strf("%W:%Y", time, "unixepoch")
FROM events
GROUP BY strf("%W:%Y", time, "unixepoch"), name
ORDER BY time
and then do the post-processing with Python.
So you don't have to iterate over 224,000 rows, only over 6,000 rows. You can easily store those 6,000 rows in memory for processing with Python. You could probably store all 224,000 rows in memory too, but that takes quite a lot more memory.
However: new versions of SQLite support the aggregation function group_concat. Maybe you can use this function for pivoting with SQL? I can't try it because I use an older version.
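For what it's worth, a hedged sketch of what group_concat would give you on a new enough SQLite: one row per name, with the weekly counts packed into a single string - close to a pivot, though not real columns (the database filename is hypothetical):
import sqlite3

conn = sqlite3.connect('events.db')  # hypothetical path
sql = """
SELECT name, group_concat(week || '=' || n, ' ')
FROM (SELECT name,
             strftime('%W:%Y', time, 'unixepoch') AS week,
             COUNT(*) AS n
      FROM events
      GROUP BY week, name)
GROUP BY name
"""
for name, weeks in conn.execute(sql):
    print name, weeks  # e.g. fudge 23:2009=23 24:2009=6 ...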
|
Pivoting SQLite table, setwise like SQL should be
|
I have some data. 224,000 rows of it, in a SQLite database. I want to extract time series information from it to feed a data visualisation tool. Essentially, each row in the db is an event that has (among other things not strictly relevant) a time-date group in seconds since the epoch and a name responsible for it. I want to extract how many events each name has for every week in the db.
That's simple enough:
SELECT COUNT(*),
name,
strf("%W:%Y", time, "unixepoch")
FROM events
GROUP BY strf("%W:%Y", time, "unixepoch"), name
ORDER BY time
and we get about six thousand rows of data.
count name week:year
23............ fudge.......23:2009
etc...
But I don't want a row for each name in each week - I want a row for each name, and a column for each week, like this:
Name 23:2009 24:2009 25:2009
fudge........23............6............19
fish.........1.............0............12
etc...
Now, the monitoring process has been running for 69 weeks, and the count of unique names is 502. So clearly, I'm far from keen on any solution that involves hardcoding all the columns and still less the rows. I'm less unkeen on anything that involves iterating over the lot, say with python's executemany(), but I'm willing to accept it if necessary. SQL is meant to be set-wise, dammit.
|
[
"A good approach in cases like this is not to push SQL to the point where it becomes convoluted and hard to understand and maintain. Let SQL do what it conveniently can and post-process the query results in Python.\nHere's a cut-down version of a simple crosstab generator that I wrote. The full version delivers row/column/grand totals.\nYou'll note that it has built-in \"group by\" -- the original use-case was for summarising data obtained from Excel files using Python and xlrd.\nThe row_key and col_key that you supply don't need to be strings as in the example; they can be tuples -- e.g. (year, week) in your case -- or they could be integers -- e.g. you have a mapping of string column name to integer sort key.\nimport sys\n\nclass CrossTab(object):\n\n def __init__(\n self,\n missing=0, # what to return for an empty cell. Alternatives: '', 0.0, None, 'NULL'\n ):\n self.missing = missing\n self.col_key_set = set()\n self.cell_dict = {}\n self.headings_OK = False\n\n def add_item(self, row_key, col_key, value):\n self.col_key_set.add(col_key)\n try:\n self.cell_dict[row_key][col_key] += value\n except KeyError:\n try:\n self.cell_dict[row_key][col_key] = value\n except KeyError:\n self.cell_dict[row_key] = {col_key: value}\n\n def _process_headings(self):\n if self.headings_OK:\n return\n self.row_headings = list(sorted(self.cell_dict.iterkeys()))\n self.col_headings = list(sorted(self.col_key_set))\n self.headings_OK = True\n\n def get_col_headings(self):\n self._process_headings()\n return self.col_headings\n\n def generate_row_info(self):\n self._process_headings()\n for row_key in self.row_headings:\n row_dict = self.cell_dict[row_key]\n row_vals = [row_dict.get(col_key, self.missing) for col_key in self.col_headings]\n yield row_key, row_vals\n\n def dump(self, f=None, header=None, footer='', ):\n if f is None:\n f = sys.stdout\n alist = self.__dict__.items()\n alist.sort()\n if header is not None:\n print >> f, header\n for attr, value in alist:\n print >> f, \"%s: %r\" % (attr, value)\n if footer is not None:\n print >> f, footer\n\nif __name__ == \"__main__\":\n\n data = [\n ['Rob', 'Morn', 240],\n ['Rob', 'Aft', 300],\n ['Joe', 'Morn', 70],\n ['Joe', 'Aft', 80],\n ['Jill', 'Morn', 100],\n ['Jill', 'Aft', 150],\n ['Rob', 'Aft', 40],\n ['Rob', 'aft', 5],\n ['Dozy', 'Aft', 1],\n # Dozy doesn't show up till lunch-time\n ['Nemo', 'never', -1],\n ]\n NAME, TIME, AMOUNT = range(3)\n xlate_time = {'morn': \"AM\", \"aft\": \"PM\"}\n\n print\n ctab = CrossTab(missing=None, )\n # ctab.dump(header='=== after init ===')\n for s in data:\n ctab.add_item(\n row_key=s[NAME],\n col_key= xlate_time.get(s[TIME].lower(), \"XXXX\"),\n value=s[AMOUNT])\n # ctab.dump(header='=== after add_item ===')\n print ctab.get_col_headings()\n # ctab.dump(header='=== after get_col_headings ===')\n for x in ctab.generate_row_info():\n print x\n\nOutput:\n['AM', 'PM', 'XXXX']\n('Dozy', [None, 1, None])\n('Jill', [100, 150, None])\n('Joe', [70, 80, None])\n('Nemo', [None, None, -1])\n('Rob', [240, 345, None])\n\n",
"I would first do your query\nSELECT COUNT(*), \n name, \n strf(\"%W:%Y\", time, \"unixepoch\") \n FROM events \n GROUP BY strf(\"%W:%Y\", time, \"unixepoch\"), name \n ORDER BY time\n\nand then do post processing with python. \nSo you don't have to iterate over 224,000 rows but over 6,000 rows. You can easyly store those 6,000 rows in memory (for processing with Python). I think you can store 224,000 rows in memory too but it takes quite a lot more memory. \nHowever: New versions of sqlite support the aggregation function group_concat. Maybe you can use this function for pivoting with SQL? I can't try because I use an older version. \n"
] |
[
4,
1
] |
[] |
[] |
[
"python",
"sql",
"sqlite",
"visualization"
] |
stackoverflow_0001835391_python_sql_sqlite_visualization.txt
|
Q:
how to find list of modules which depend upon a specific module in python
In order to reduce development time of my Python-based web application, I am trying to use reload() for the modules I have recently modified. The reload() happens through a dedicated web page (part of the development version of the web app) which lists the modules that have been recently modified (i.e. the modified timestamp of the .py file is later than that of the corresponding .pyc file). The full list of modules is obtained from sys.modules (and I filter the list to focus only on those modules which are part of my package).
Reloading individual python files seems to work in some cases and not in other cases. I guess, all the modules which depend on a modified module should be reloaded and the reloading should happen in proper order.
I am looking for a way to get the list of modules imported by a specific module. Is there any way to do this kind of introspection in Python?
I understand that my approach might not be 100% guaranteed and the safest way would be to reload everything, but if a fast approach works for most cases, it would be good enough for development purposes.
Response to comments regarding Django's autoreloader
@Glenn Maynard, thanks, I had read about Django's autoreloader. My web app is based on Zope 3, and with the number of packages and a lot of ZCML-based initializations, a full restart takes about 10 to 30 seconds, or more if the database is bigger. I am attempting to cut down on the time spent during restarts. When I feel I have made a lot of changes, I usually prefer a full restart, but more often I am changing a couple of lines here and there, for which I do not wish to spend so much time. The development setup is completely independent of the production setup, and usually if something goes wrong in a reload it becomes obvious, since the application pages start showing illogical information or throwing exceptions. I am very much interested in exploring whether selective reload would work or not.
A:
So - this answers "find a list of modules which depend on a given one" - rather than how the question was initially phrased, which I answered above.
As it turns out, this is a bit more complex: one has to find the dependency tree for all loaded modules and invert it for each module, while preserving a loading order that would not break things.
I had also posted this to the Brazilian Python wiki at:
http://www.python.org.br/wiki/RecarregarModulos
#! /usr/bin/env python
# coding: utf-8
# Author: João S. O. Bueno
# Copyright (c) 2009 - Fundação CPqD
# License: LGPL V3.0
from types import ModuleType, FunctionType, ClassType
import sys
def find_dependent_modules():
"""gets a one level inversed module dependence tree"""
tree = {}
for module in sys.modules.values():
if module is None:
continue
tree[module] = set()
for attr_name in dir(module):
attr = getattr(module, attr_name)
if isinstance(attr, ModuleType):
tree[module].add(attr)
elif type(attr) in (FunctionType, ClassType):
tree[module].add(attr.__module__)
return tree
def get_reversed_first_level_tree(tree):
"""Creates a one level deep straight dependence tree"""
new_tree = {}
for module, dependencies in tree.items():
for dep_module in dependencies:
if dep_module is module:
continue
if not dep_module in new_tree:
new_tree[dep_module] = set([module])
else:
new_tree[dep_module].add(module)
return new_tree
def find_dependants_recurse(key, rev_tree, previous=None):
"""Given a one-level dependance tree dictionary,
recursively builds a non-repeating list of all dependant
modules
"""
if previous is None:
previous = set()
if not key in rev_tree:
return []
this_level_dependants = set(rev_tree[key])
next_level_dependants = set()
for dependant in this_level_dependants:
if dependant in previous:
continue
tmp_previous = previous.copy()
tmp_previous.add(dependant)
next_level_dependants.update(
find_dependants_recurse(dependant, rev_tree,
previous=tmp_previous,
))
# ensures reloading order on the final list
# by postponing the reload of modules in this level
# that also appear later on the tree
dependants = (list(this_level_dependants.difference(
next_level_dependants)) +
list(next_level_dependants))
return dependants
def get_reversed_tree():
"""
    Returns a dictionary mapping all loaded modules to
    lists of the tree of modules that depend on them, in an order
    that can be used for reloading
"""
tree = find_dependent_modules()
rev_tree = get_reversed_first_level_tree(tree)
compl_tree = {}
for module, dependant_modules in rev_tree.items():
compl_tree[module] = find_dependants_recurse(module, rev_tree)
return compl_tree
def reload_dependences(module):
"""
reloads given module and all modules that
depend on it, directly and otherwise.
"""
tree = get_reversed_tree()
reload(module)
for dependant in tree[module]:
reload(dependant)
This worked nicely in all the tests I made here - but I would not recommend abusing it.
But for updating a running Zope 2 server after editing a few lines of code, I think I would use this myself.
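Usage is then a hedged one-liner (mymodule stands in for whatever module you just edited):
import mymodule
reload_dependences(mymodule)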
A:
You might want to take a look at Ian Bicking's Paste reloader module, which does what you want already:
http://pythonpaste.org/modules/reloader?highlight=reloader
It doesn't give you specifically a list of dependent files (which is only technically possible if the packager has been diligent and properly specified dependencies), but looking at the code will give you an accurate list of modified files for restarting the process.
A:
Some introspection to the rescue:
from types import ModuleType
def find_modules(module, all_mods = None):
if all_mods is None:
all_mods = set([module])
for item_name in dir(module):
item = getattr(module, item_name)
if isinstance(item, ModuleType) and not item in all_mods:
all_mods.add(item)
find_modules(item, all_mods)
return all_mods
This gives you a set with all loaded modules - just call the function with your first module as the sole parameter. You can then iterate over the resulting set, reloading each module, as simply as:
[reload(m) for m in find_modules(<module>)]
|
how to find list of modules which depend upon a specific module in python
|
In order to reduce development time of my Python-based web application, I am trying to use reload() for the modules I have recently modified. The reload() happens through a dedicated web page (part of the development version of the web app) which lists the modules that have been recently modified (i.e. the modified timestamp of the .py file is later than that of the corresponding .pyc file). The full list of modules is obtained from sys.modules (and I filter the list to focus only on those modules which are part of my package).
Reloading individual python files seems to work in some cases and not in other cases. I guess, all the modules which depend on a modified module should be reloaded and the reloading should happen in proper order.
I am looking for a way to get the list of modules imported by a specific module. Is there any way to do this kind of introspection in Python?
I understand that my approach might not be 100% guaranteed and the safest way would be to reload everything, but if a fast approach works for most cases, it would be good enough for development purposes.
Response to comments regarding Django's autoreloader
@Glenn Maynard, thanks, I had read about Django's autoreloader. My web app is based on Zope 3, and with the number of packages and a lot of ZCML-based initializations, a full restart takes about 10 to 30 seconds, or more if the database is bigger. I am attempting to cut down on the time spent during restarts. When I feel I have made a lot of changes, I usually prefer a full restart, but more often I am changing a couple of lines here and there, for which I do not wish to spend so much time. The development setup is completely independent of the production setup, and usually if something goes wrong in a reload it becomes obvious, since the application pages start showing illogical information or throwing exceptions. I am very much interested in exploring whether selective reload would work or not.
|
[
"So - this answers \"Find a list of modules which depend on a given one\" - instead of how the question was initally phrased - which I answered above.\nAs it turns out, this is a bit more complex: One have to find the dependency tree for all loaded modules, and invert it for each module, while preserving a loading order that would not break things.\nI had also posted this to brazillian's python wiki at:\nhttp://www.python.org.br/wiki/RecarregarModulos\n#! /usr/bin/env python\n# coding: utf-8\n\n# Author: João S. O. Bueno\n# Copyright (c) 2009 - Fundação CPqD\n# License: LGPL V3.0\n\n\nfrom types import ModuleType, FunctionType, ClassType\nimport sys\n\ndef find_dependent_modules():\n \"\"\"gets a one level inversed module dependence tree\"\"\"\n tree = {}\n for module in sys.modules.values():\n if module is None:\n continue\n tree[module] = set()\n for attr_name in dir(module):\n attr = getattr(module, attr_name)\n if isinstance(attr, ModuleType):\n tree[module].add(attr)\n elif type(attr) in (FunctionType, ClassType): \n tree[module].add(attr.__module__)\n return tree\n\n\ndef get_reversed_first_level_tree(tree):\n \"\"\"Creates a one level deep straight dependence tree\"\"\"\n new_tree = {}\n for module, dependencies in tree.items():\n for dep_module in dependencies:\n if dep_module is module:\n continue\n if not dep_module in new_tree:\n new_tree[dep_module] = set([module])\n else:\n new_tree[dep_module].add(module)\n return new_tree\n\ndef find_dependants_recurse(key, rev_tree, previous=None):\n \"\"\"Given a one-level dependance tree dictionary,\n recursively builds a non-repeating list of all dependant\n modules\n \"\"\"\n if previous is None:\n previous = set()\n if not key in rev_tree:\n return []\n this_level_dependants = set(rev_tree[key])\n next_level_dependants = set()\n for dependant in this_level_dependants:\n if dependant in previous:\n continue\n tmp_previous = previous.copy()\n tmp_previous.add(dependant)\n next_level_dependants.update(\n find_dependants_recurse(dependant, rev_tree,\n previous=tmp_previous,\n ))\n # ensures reloading order on the final list\n # by postponing the reload of modules in this level\n # that also appear later on the tree\n dependants = (list(this_level_dependants.difference(\n next_level_dependants)) +\n list(next_level_dependants))\n return dependants\n\ndef get_reversed_tree():\n \"\"\"\n Yields a dictionary mapping all loaded modules to\n lists of the tree of modules that depend on it, in an order\n that can be used fore reloading\n \"\"\"\n tree = find_dependent_modules()\n rev_tree = get_reversed_first_level_tree(tree)\n compl_tree = {}\n for module, dependant_modules in rev_tree.items():\n compl_tree[module] = find_dependants_recurse(module, rev_tree)\n return compl_tree\n\ndef reload_dependences(module):\n \"\"\"\n reloads given module and all modules that\n depend on it, directly and otherwise.\n \"\"\"\n tree = get_reversed_tree()\n reload(module)\n for dependant in tree[module]:\n reload(dependant)\n\nThis wokred nicely in all tests I made here - but I would not recoment abusing it.\nBut for updating a running zope2 server after editing a few lines of code, I think I would use this myself.\n",
"You might want to take a look at Ian Bicking's Paste reloader module, which does what you want already:\nhttp://pythonpaste.org/modules/reloader?highlight=reloader\nIt doesn't give you specifically a list of dependent files (which is only technically possible if the packager has been diligent and properly specified dependencies), but looking at the code will give you an accurate list of modified files for restarting the process.\n",
"Some introspection to the rescue:\nfrom types import ModuleType\n\ndef find_modules(module, all_mods = None):\n if all_mods is None:\n all_mods = set([module])\n for item_name in dir(module):\n item = getattr(module, item_name)\n if isinstance(item, ModuleType) and not item in all_mods:\n all_mods.add(item)\n find_modules(item, all_mods)\n return all_mods\n\nThis gives you a set with all loaded modules - just call the function with your first module as a sole parameter. You can then iterate over the resulting set reloading it, as simply as:\n[reload (m) for m in find_modules(<module>)]\n"
] |
[
5,
2,
2
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001827629_python.txt
|
Q:
I'm looking for a cross-platform Python library that can play MP3 and OGG and support ALSA or similar
There are several different audio libraries, however, none of them meet my exact needs:
- It needs to be cross-platform.
- It needs to be able to use the ALSA, PulseAudio or any other common default mixer under Linux.
- It needs to be able to autodetect the sample frequency.
- It needs to be (fairly) simple in usage, if instead someone can give me an extra script that will MAKE it easier for me that's accepted too.
- The only functionality I need is play/pause, seeking is a nice bonus.
- It needs to be able to play MP3 and OGG. No other formats are important to me.
The libraries I've tried so far:
PyGame - doesn't support detecting the song's frequency
PyAudiere - promising, but only loads OSS in Linux, with which I have serious mixing problems
Built-in modules - don't support MP3 and OGG as far as I'm aware
PyMedia - promising, but complicated. Also couldn't find out what mixing devices it uses.
PySonic - relies on a closed-source library.
PyQt4.phonon - promising, but wouldn't play anything. Got the following error:
gst_element_make_from_uri: assertion `gst_uri_is_valid (uri)' failed
Any help on this would be appreciated.
A:
Use gstreamer.
It needs to be cross-platform.
It needs to be able to use the ALSA, PulseAudio or any other common default mixer under Linux.
Check - From gstreamer website:
GStreamer has been ported to a wide
range of operating systems, processors
and compilers. This include but are
not limited to Linux on i86,PPC, ARM
using GCC. Solaris on x86 and SPARC
using both GCC and Forte, MacOSX,
Microsoft Windows using MS Visual
Developer and IBM OS/400.
GStreamer can bridge to other
multimedia frameworks in order to
reuse existing components (e.g.
codecs) and use platform input/output
mechanisms:
Linux/Unix: OpenMAX-IL (via gst-openmax)
Windows: DirectShow
MacOS X: QuickTime
It needs to be able to autodetect the sample frequency.
Okay.
It needs to be (fairly) simple in usage, if instead someone can give me an extra script that will MAKE it easier for me that's accepted too.
Gstreamer has a lot of documentation and examples, and a strong community to give you support.
The only functionality I need is play/pause, seeking is a nice bonus.
It needs to be able to play MP3 and OGG. No other formats are important to me.
Then those requirements are more than covered!
Go get yours!
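For a concrete taste, a minimal hedged sketch using gst-python's playbin element (assumes the GStreamer 0.10 Python bindings are installed; the file path is made up, and .mp3 works too given the right plugins):
import gobject
import pygst
pygst.require("0.10")
import gst

gobject.threads_init()

player = gst.element_factory_make("playbin", "player")
player.set_property("uri", "file:///home/me/song.ogg")
player.set_state(gst.STATE_PLAYING)    # play
# player.set_state(gst.STATE_PAUSED)   # pause
# the seeking bonus: jump to 10 seconds in
# player.seek_simple(gst.FORMAT_TIME, gst.SEEK_FLAG_FLUSH, 10 * gst.SECOND)

gobject.MainLoop().run()  # keep the process alive while playing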
|
I'm looking for a cross-platform Python library that can play MP3 and OGG and support ALSA or similar
|
There are several different audio libraries, however, none of them meet my exact needs:
- It needs to be cross-platform.
- It needs to be able to use the ALSA, PulseAudio or any other common default mixer under Linux.
- It needs to be able to autodetect the sample frequency.
- It needs to be (fairly) simple in usage, if instead someone can give me an extra script that will MAKE it easier for me that's accepted too.
- The only functionality I need is play/pause, seeking is a nice bonus.
- It needs to be able to play MP3 and OGG. No other formats are important to me.
The libraries I've tried so far:
PyGame - doesn't support detecting the song's frequency
PyAudiere - promising, but only loads OSS in Linux, with which I have serious mixing problems
Built-in modules - don't support MP3 and OGG as far as I'm aware
PyMedia - promising, but complicated. Also couldn't find out what mixing devices it uses.
PySonic - relies on a closed-source library.
PyQt4.phonon - promising, but wouldn't play anything. Got the following error:
gst_element_make_from_uri: assertion `gst_uri_is_valid (uri)' failed
Any help on this would be appreciated.
|
[
"Use gstreamer.\n\n\n\nIt needs to be cross-platform.\nIt needs to be able to use the ALSA, PulseAudio or any other common default mixer under Linux.\n\n\nCheck - From gstreamer website:\n\nGStreamer has been ported to a wide\n range of operating systems, processors\n and compilers. This include but are\n not limited to Linux on i86,PPC, ARM\n using GCC. Solaris on x86 and SPARC\n using both GCC and Forte, MacOSX,\n Microsoft Windows using MS Visual\n Developer and IBM OS/400.\nGStreamer can bridge to other\n multimedia frameworks in order to\n reuse existing components (e.g.\n codecs) and use platform input/output\n mechanisms:\n\nLinux/Unix: OpenMAX-IL (via gst-openmax)\nWindows: DirectShow\nMacOS X: QuickTime\n\n\n\n\n\nIt needs to be able to autodetect the sample frequency.\n\n\nOkay.\n\n\nIt needs to be (fairly) simple in usage, if instead someone can give me an extra script that will MAKE it easier for me that's accepted too.\n\n\nGstreamer has a lot of documentation and examples, and a strong community to give you support.\n\n\nThe only functionality I need is play/pause, seeking is a nice bonus.\nIt needs to be able to play MP3 and OGG. No other formats are important to me. \n\n\nThen those are overwhelmed!\nGo get yours!\n"
] |
[
4
] |
[] |
[] |
[
"alsa",
"audio",
"cross_platform",
"python"
] |
stackoverflow_0001847022_alsa_audio_cross_platform_python.txt
|
Q:
Print function calls with a variable delay / Python
The question is, how can I configure the Python debugger to show me in the console what functions are being called?
In order not to see everything flash by, a delay between the function calls would be needed.
A:
If you want to monitor when a few particular functions are being called,
you could use this decorator:
import functools
def trace(f):
@functools.wraps(f)
def wrapper(*arg,**kw):
'''This decorator shows how the function was called'''
arg_str=','.join(['%r'%a for a in arg]+['%s=%s'%(key,kw[key]) for key in kw])
print "%s(%s)" % (f.__name__, arg_str)
return f(*arg, **kw)
return wrapper
You would use it like this:
@trace # <--- decorator your functions with the @trace decorator
def foo(x,y):
# do stuff
When you run your program, every time foo(x,y) is called, you'll see the
function call with the value of its arguments in the console:
foo(y=(0, 1, 2),x=(0, 0, 0))
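Since the question also asks for a delay between calls, here is a hedged variant of the same decorator (the half-second default is arbitrary):
import functools
import time

def trace(delay=0.5):
    def decorator(f):
        @functools.wraps(f)
        def wrapper(*arg, **kw):
            arg_str = ','.join(['%r' % a for a in arg] +
                               ['%s=%s' % (key, kw[key]) for key in kw])
            print "%s(%s)" % (f.__name__, arg_str)
            time.sleep(delay)  # pause so the calls don't flash by
            return f(*arg, **kw)
        return wrapper
    return decorator

@trace(delay=1.0)  # note the parentheses: trace now takes arguments
def foo(x, y):
    pass  # do stuff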
A:
You can use the alternative pydb debugger. You can invoke it with pydb --fntrace --batch <scriptname> to get a function trace.
As for the "flashing-by", use the usual tools like Ctrl-S/Ctrl-Q on an ANSI terminal, or redirect to a file.
|
Print function calls with a variable delay / Python
|
The question is, how can I configure the Python debugger to show me in the console what functions are being called?
In order not to see everything flash by, a delay between the function calls would be needed.
|
[
"If you want to monitor when a few particular functions are being called, \nyou could use this decorator:\nimport functools\ndef trace(f):\n @functools.wraps(f)\n def wrapper(*arg,**kw):\n '''This decorator shows how the function was called'''\n arg_str=','.join(['%r'%a for a in arg]+['%s=%s'%(key,kw[key]) for key in kw])\n print \"%s(%s)\" % (f.__name__, arg_str)\n return f(*arg, **kw)\n return wrapper\n\nYou would use it like this:\n@trace # <--- decorator your functions with the @trace decorator\ndef foo(x,y):\n # do stuff\n\nWhen you run your program, every time foo(x,y) is called, you'll see the\nfunction call with the value of its arguments in the console:\nfoo(y=(0, 1, 2),x=(0, 0, 0))\n\n",
"You can use the alternative pydb debugger. You can invoke it with pydb --fntrace --batch <scriptname> to get a function trace.\nAs for the \"flashing-by\", use the usual tools like Ctrl-S/Ctrl-Q on an ANSI terminal, or redirect to a file.\n"
] |
[
2,
1
] |
[] |
[] |
[
"console",
"debugging",
"python"
] |
stackoverflow_0001847177_console_debugging_python.txt
|
Q:
From a coder's perspective, what kind of project should I choose python over php for where both could do the job?
I've never used python before. I've used php for about 5 years now. I plan to learn python, but I'm not sure what for yet. If I can think of a project that might be better to do in python, I'll use that to learn it.
Edit: just to add this as an important note, I do mean strictly for linux, not multi-platform.
Edit 2: I'm hoping for objective answers, like a specific project, not a general field of projects, etc.
A:
Python is better suited for practically anything that doesn't fall within PHP's specialty domain, which is building websites.
If you want a list of programming projects that you could work on, see this thread:
https://stackoverflow.com/questions/1022738/i-need-a-good-programming-project
A:
PHP for websites. Python for pretty much anything else, such as command-line tools, long-running scripts, daemons, etcetera. If you're writing a PHP script and you're reaching for functions in the posix extension, shared memory or other low-level stuff, then that's generally a sign that Python would be better suited. It's not that PHP can't do it, but Python just does it better and with fewer bugs.
Especially when you're venturing into background daemons for your website you'll want to look at Python. PHP has some garbage collection problems in long running processes such as daemons. Also, some functionality is much easier and clearer in Python (e.g. redirecting STDIN, STDOUT and STDERR. PHP misses posix_dup2()). Also, Python has threads :-)
The only time when I now use PHP background daemons for my websites is when they can re-use significant amounts of code (such as with MVC frameworks like CakePHP).
One more advantage of Python is that there are many, many libraries for it, because it's rather easy to create a Python wrapper for a C library. So, Python has libraries that PHP doesn't have (OpenGL, multimedia, etcetera). So if you're into those areas Python becomes the obvious choice.
A:
I am going to answer based purely on ease of programming and encouragement of good design practices.
I learned to program using Perl and for many years wrote web applications using CGI, which was never pleasant in the first place. I always found that Perl was tedious to get right and way too easy to get wrong.
When I started developing web applications more seriously, I discovered PHP. Going from Perl to PHP was a natural progression because they share many of the same methodologies and syntax. I loved that PHP was so heavily installed and widely used and was much easier to code in than Perl. Except, like Perl, PHP makes it way too easy to do things the wrong way, and ushered in a new era of poorly-written and insecure web applications.
Now... Four years ago I discovered Django, which was in its infancy at the time, and it changed my life. I used this as the motivation to learn Python and have not looked back, not once. I use Python for everything now:
System programming (server
configurations, monitoring, alerting)
Network programming (router/switch
configuration automation, auditing,
syntax checking)
Web applications (self-explanatory)
Python's ease of use, elegant syntax, and the fact that it encourages best practices by default is what turned me on and kept me interested. Give Python a shot, you won't regret it!
A:
A project you want to maintain over any length of time.
I've had to maintain PHP code and there is something about the fact that you can mix HTML and code that makes PHP stuff a nightmare.
Python has a much higher level of abstraction and makes more maintainable code much easier to write and much more importantly for maintenance - read.
All IMHO of course as this is a rather subjective question.
A:
I only use php for the following reasons:
The software I'm using that solves my problems is written in it (like WordPress - don't try to program everything yourself; sometimes there is a lot of good stuff already in PHP);
I need/want to run in a lot of different server configurations (a lot of people use shared hosting and can't install, or don't have support for, Python, but PHP is a default at any web hosting company).
When I can choose (I control the server environment and don't mind factor 2), I choose Python. Just because you can write good code in PHP doesn't mean you should. In PHP I miss a robust language, with good exception handling and a lot of other advantages. The language is clean, and you can even use Google App Engine for free if you want to learn Python.
You can write almost everything you write in Python with PHP. But I dislike PHP because a lot of the time you use a feature and find you have to configure your php.ini, or use an obscure function, usually after a lot of headaches with the feature.
IMHO, at least in my experience, PHP is something you get used to. Python is something you just fall in love with: it just works.
I know these last paragraphs may be personal opinion, but the first two factors aren't. Just choose whichever language you think is worth it in each case.
A:
Anything that requires background processing or any significant amount of code that doesn't just show a user a page. Python is really good as a scripting language, and writing a command line Python script is commonplace; writing a PHP script to do command line work is rare.
A:
My company was contracted to build a web application last year, and the client specified that it should be done in Flex. Now, this application should have been a web application, but we had a unique opportunity to try something new.
We had absolutely no idea what we were doing at the time, but it was a great learning experience. My advice would be to try something new when you get the chance, make mistakes, and continue to learn.
Might be harder if you want to learn Python casually... Try getting someone to pay you for using it.
A:
If you're doing any multi-threaded development, pick Python over PHP.
|
From a coder's perspective, what kind of project should I choose python over php for where both could do the job?
|
I've never used python before. I've used php for about 5 years now. I plan to learn python, but I'm not sure what for yet. If I can think of a project that might be better to do in python, I'll use that to learn it.
Edit: just to add this as an important note, I do mean strictly for linux, not multi-platform.
Edit 2: I'm hoping for objective answers, like a specific project, not a general field of projects, etc.
|
[
"Python is better suited for practically anything that doesn't fall within PHP's specialty domain, which is building websites.\nIf you want a list of programming projects that you could work on, see this thread:\nhttps://stackoverflow.com/questions/1022738/i-need-a-good-programming-project\n",
"PHP for websites. Python for pretty much anything else, such as commandline tools, long-running scripts, daemons, etcetera. If you're writing a PHP script and you're reaching for functions in the posix extenstion, shared memory or other low-level stuff then that's generally a sign that Python would be better suited. It's not that PHP can't do it, but Python just does it better and less buggy.\nEspecially when you're venturing into background daemons for your website you'll want to look at Python. PHP has some garbage collection problems in long running processes such as daemons. Also, some functionality is much easier and clearer in Python (e.g. redirecting STDIN, STDOUT and STDERR. PHP misses posix_dup2()). Also, Python has threads :-)\nThe only time when I now use PHP background daemons for my websites is when they can re-use significant amounts of code (such as with MVC frameworks like CakePHP).\nOne more advantage of Python is that there are many, many libraries for it, because it's rather easy to create a Python wrapper for a C library. So, Python has libraries that PHP doesn't have (OpenGL, multimedia, etcetera). So if you're into those areas Python becomes the obvious choice.\n",
"I am going to answer based purely on ease of programming and encouragement of good design practices.\nI learned to program using Perl and for many years wrote web applications using CGI, which was never pleasant in the first place. I always found that Perl was tedious to get right and way too easy to get wrong.\nWhen I started developing web applications more seriously, I discovered PHP. Going from Perl to PHP was a natural progression because they share many of the same methodologies and syntax. I loved that PHP was so heavily installed and widely used and was much easier to code in than Perl. Except, like Perl, PHP makes it way too easy to do things the wrong way, and ushered in a new era of poorly-written and insecure web applications.\nNow... Four years ago I discovered Django, which was in its infancy at the time, and it changed my life. I used this as the motivation to learn Python and have not looked back, not once. I use Python for everything now:\n\nSystem programming (server\nconfigurations, monitoring, alerting)\nNetwork programming (router/switch\nconfiguration automation, auditing,\nsyntax checking)\nWeb applications (self-explanatory)\n\nPython's ease of use, elegant syntax, and the fact that it encourages best practices by default is what turned me on and kept me interested. Give Python a shot, you won't regret it!\n",
"A project you want to maintain over any length of time.\nI've had to maintain PHP code and there is something about the fact that you can mix HTML and code that makes PHP stuff a nightmare.\nPython has a much higher level of abstraction and makes more maintainable code much easier to write and much more importantly for maintenance - read.\nAll IMHO of course as this is a rather subjective question.\n",
"I only use php for the following reasons:\n\nThe software I'm using that solves my problems is written in it (like Wordpress. Don't try to program everything, sometimes you have a lot of good stuff in php);\nI need/want to run in a lot of different server configurations (a lot of people use shared hosting, and can't install/don't have support to python, but php is a default in any web hosting company);\n\nWhen I can choose (I have server environment control and I don't mind factor 2) , I choose Python. Just because you can write good code in php doesn't mean you should. I miss a robust language, with good Exception handling and a lot of other advantages. The language is clean and you can even use Google App Engine for free if you want to learn Python.\nYou can write almost everything you write in Python with PHP. But I dislike PHP because a lot of times you use a feature, and you know that you have to configurate your php.ini, or use an obscure function just after having a lot of headaches with the feature. \nIMHO, at least in my experience, PHP is something you get used it. And Python is something you just fall in love: because it just works.\nI know these last paragraphs may be a personal opinion, but the first two factors aren't. Just choose the language what you think it's worth in that case.\n",
"Anything that requires background processing or any significant amount of code that doesn't just show a user a page. Python is really good as a scripting language, and writing a command line Python script is commonplace; writing a PHP script to do command line work is rare.\n",
"My company was contracted to build a web application last year, and the client specified that it should be done in Flex. Now, this application should have been a web application, but we had a unique opportunity to try something new.\nWe had absolutely no idea what we were doing at the time, but it was a great learning experience. My advice would be to try something new when you get the chance, make mistakes, and continue to learn.\nMight be harder if you want to learn Python casually... Try getting someone to pay you for using it.\n",
"If you're doing any multi threading development, pick Python over PHP.\n"
] |
[
16,
5,
4,
3,
2,
1,
1,
1
] |
[] |
[] |
[
"linux",
"php",
"python",
"theory"
] |
stackoverflow_0001842208_linux_php_python_theory.txt
|
Q:
Popularity of path hooks (PEP 302 custom import)
My project has the ability to run python functions remotely. Doing so requires transmitting modules a given function utilizes. Determining what to send is conducted via a modified modulefinder.
As I modify the modulefinder to support arbitrary path_hooks, I've started to get the impression that path_hooks are not all that popular. Quick google codesearching seems to only show the ZipImporter using them. I've noticed a minor project using it (and even then, its loader doesn't support the PEP 302 extension of get_code, which is needed by the modified modulefinder).
Has anyone come across or created projects that use custom path_hooks to access source code?
A:
Yes, I've coded some path hooks (for one of the obvious purposes: access modules living in other forms of storage besides the filesystem and zipfiles), but never on an open-source project (and actually never needed to support modulefinder in them). What difficulties are you encountering? While I can't share my original code I think I can share the know-how developed with it (though offhand I can't recall any special difficulties -- it has been a while). As for "popular", I guess they will be in direct proportion to the need to site modules "elsewhere" (e.g. in some form of DB), though of course general "usermode file systems" built e.g. using fuse, macfuse and dokan may also allow this (and offer other advantages in terms of generality -- not sure how performance compares).
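For reference, a hedged skeleton of the PEP 302 pieces involved (purely illustrative - the 'dbpath://' scheme and the DB lookup are made up):
import imp
import sys

class DBImporter(object):
    """A minimal finder/loader for a hypothetical storage scheme."""

    def __init__(self, path):
        if not path.startswith('dbpath://'):
            raise ImportError  # tell Python this hook doesn't handle `path`
        self.path = path

    def find_module(self, fullname, path=None):
        return self if self._lookup_source(fullname) else None

    def load_module(self, fullname):
        source = self._lookup_source(fullname)
        mod = sys.modules.setdefault(fullname, imp.new_module(fullname))
        mod.__file__ = '<%s/%s>' % (self.path, fullname)
        mod.__loader__ = self
        exec source in mod.__dict__
        return mod

    def get_code(self, fullname):  # the PEP 302 extension modulefinder wants
        return compile(self._lookup_source(fullname), fullname, 'exec')

    def _lookup_source(self, fullname):
        return None  # stand-in: fetch the module's source text from the DB

sys.path_hooks.append(DBImporter)
sys.path.append('dbpath://modules')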
|
Popularity of path hooks (PEP 302 custom import)
|
My project has the ability to run python functions remotely. Doing so requires transmitting modules a given function utilizes. Determining what to send is conducted via a modified modulefinder.
As I modify the modulefinder to support arbitrary path_hooks, I've started to get the impression that path_hooks are not all that popular. Quick google codesearching seems to only show the ZipImporter using them. I've noticed a minor project using it (and even then, its loader doesn't support the PEP 302 extension of get_code, which is needed by the modified modulefinder).
Has anyone come across or created projects that use custom path_hooks to access source code?
|
[
"Yes, I've coded some path hooks (for one of the obvious purposes: access modules living in other forms of storage besides the filesystem and zipfiles), but never on an open-source project (and actually never needed to support modulefinder in them). What difficulties are you encountering? While I can't share my original code I think I can share the know-how developed with it (though offhand I can't recall any special difficulties -- it has been a while). As for \"popular\", I guess they will be in direct proportion to the need to site modules \"elsewhere\" (e.g. in some form of DB), though of course general \"usermode file systems\" built e.g. using fuse , macfuse and dokan may also allow this (and offer other advantages in terms of generality -- not sure how performance compares).\n"
] |
[
2
] |
[] |
[] |
[
"path",
"python"
] |
stackoverflow_0001845251_path_python.txt
|
Q:
Speeding up computations with numpy matrices
I have two matrices. Both are filled with zeros and ones. One is a big one (3000 x 2000 elements), and the other is smaller (20 x 20 elements). I am doing something like:
newMatrix = (size of bigMatrix), filled with zeros
l = (a constant)
for y in xrange(0, len(bigMatrix[0])):
for x in xrange(0, len(bigMatrix)):
for b in xrange(0, len(smallMatrix[0])):
for a in xrange(0, len(smallMatrix)):
if (bigMatrix[x, y] == smallMatrix[x + a - l, y + b - l]):
newMatrix[x, y] = 1
Which is being painfully slow. Am I doing anything wrong? Is there a smart way to make this work faster?
edit: Basically I am, for each (x,y) in the big matrix, checking all the pixels of both big matrix and the small matrix around (x,y) to see if they are 1. If they are 1, then I set that value on newMatrix. I am doing a sort of collision detection.
A:
I can think of a couple of optimisations there -
As you are using 4 nested python "for" statements, you are about as slow as you can be.
I can't figure out exactly what you are looking for -
but for one thing, if your big matrix's "1"s density is low, you can certainly use numpy's any() method on bigMatrix's slices to quickly check if there are any set elements there -- you could get a several-fold speed increase:
step = len(smallMatrix[0])
for y in xrange(0, len(bigMatrix[0]), step):
    for x in xrange(0, len(bigMatrix), step):
        if not bigMatrix[x: x+step, y: y+step].any():  # .any() works on 2-D slices
            continue
        (...)
At this point, if you still need to operate on each element, you add another pair of indexes to walk each position inside the step - but I think you get the idea.
Apart from using inner numpy operations like this any() usage, you could certainly add some control-flow code to break off the (b, a) loop when the first matching pixel is found
(like inserting a "break" statement inside your last "if", and another if..break pair for the "b" loop).
I really can't figure out exactly what your intent is - so I can't give you more specific code.
A:
Your example code makes no sense, but the description of your problem sounds like you are trying to do a 2d convolution of a small bitarray over the big bitarray. There's a convolve2d function in scipy.signal package that does exactly this. Just do convolve2d(bigMatrix, smallMatrix) to get the result. Unfortunately the scipy implementation doesn't have a special case for boolean arrays so the full convolution is rather slow. Here's a function that takes advantage of the fact that the arrays contain only ones and zeroes:
import numpy as np
def sparse_convolve_of_bools(a, b):
if a.size < b.size:
a, b = b, a
offsets = zip(*np.nonzero(b))
n = len(offsets)
dtype = np.byte if n < 128 else np.short if n < 32768 else np.int
result = np.zeros(np.array(a.shape) + b.shape - (1,1), dtype=dtype)
for o in offsets:
result[o[0]:o[0] + a.shape[0], o[1]:o[1] + a.shape[1]] += a
return result
On my machine it runs in less than 9 seconds for a 3000x2000 by 20x20 convolution. The running time depends on the number of ones in the smaller array, at about 20 ms per nonzero element.
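To get from the convolution back to the asker's 0/1 newMatrix, a hedged follow-up (mode='same' is an assumption about the desired alignment - it crops the result to bigMatrix's shape):
from scipy.signal import convolve2d
import numpy as np

overlap = convolve2d(bigMatrix, smallMatrix, mode='same')  # counts of overlapping ones
newMatrix = (overlap > 0).astype(np.uint8)                 # 1 wherever any overlap occurred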
A:
If your bits are really packed 8 per byte / 32 per int,
and you can reduce your smallMatrix to 20x16,
then try the following, here for a single row.
(newMatrix[x, y] = 1 when any bit of the 20x16 around x,y is 1?
What are you really looking for?)
python -m timeit -s '
""" slide 16-bit mask across 32-bit pairs bits[j], bits[j+1] """
import numpy as np
bits = np.zeros( 2000 // 16, np.uint16 ) # 2000 bits
bits[::8] = 1
mask = 32+16
nhit = 16 * [0]
def hit16( bits, mask, nhit ):
"""
slide 16-bit mask across 32-bit pairs bits[j], bits[j+1]
bits: long np.array( uint16 )
mask: 16 bits, int
out: nhit[j] += 1 where pair & mask != 0
"""
left = bits[0]
for b in bits[1:]:
pair = (left << 16) | b
if pair: # np idiom for non-0 words ?
m = mask
for j in range(16):
if pair & m:
nhit[j] += 1
# hitposition = jb*16 + j
m <<= 1
left = b
# if any(nhit): print "hit16:", nhit
' \
'
hit16( bits, mask, nhit )
'
# 15 msec per loop, bits[::4] = 1
# 11 msec per loop, bits[::8] = 1
# mac g4 ppc
|
Speeding up computations with numpy matrices
|
I have two matrices. Both are filled with zeros and ones. One is a big one (3000 x 2000 elements), and the other is smaller (20 x 20 elements). I am doing something like:
newMatrix = (size of bigMatrix), filled with zeros
l = (a constant)
for y in xrange(0, len(bigMatrix[0])):
for x in xrange(0, len(bigMatrix)):
for b in xrange(0, len(smallMatrix[0])):
for a in xrange(0, len(smallMatrix)):
if (bigMatrix[x, y] == smallMatrix[x + a - l, y + b - l]):
newMatrix[x, y] = 1
Which is being painfully slow. Am I doing anything wrong? Is there a smart way to make this work faster?
edit: Basically I am, for each (x,y) in the big matrix, checking all the pixels of both big matrix and the small matrix around (x,y) to see if they are 1. If they are 1, then I set that value on newMatrix. I am doing a sort of collision detection.
|
[
"I can think of a couple of optimisations there - \nAs you are using 4 nested python \"for\" statements, you are about as slow as you can be.\nI can't figure out exactly what you are looking for - \nbut for one thing, if your big matrix \"1\"s density is low, you can certainly use python's \"any\" function on bigMtarix's slices to quickly check if there are any set elements there -- you could get a several-fold speed increase there:\nstep = len(smallMatrix[0])\nfor y in xrange(0, len(bigMatrix[0], step)):\n for x in xrange(0, len(bigMatrix), step):\n if not any(bigMatrix[x: x+step, y: y + step]):\n continue\n (...) \n\nAt this point, if still need to interact on each element, you do another pair of indexes to walk each position inside the step - but I think you got the idea.\nApart from using inner Numeric operations like this \"any\" usage, you could certainly add some control flow code to break-off the (b,a) loop when the first matching pixel is found. \n(Like, inserting a \"break\" statement inside your last \"if\" and another if..break pair for the \"b\" loop.\nI really can't figure out exactly what your intent is - so I can't give you more specifc code. \n",
"Your example code makes no sense, but the description of your problem sounds like you are trying to do a 2d convolution of a small bitarray over the big bitarray. There's a convolve2d function in scipy.signal package that does exactly this. Just do convolve2d(bigMatrix, smallMatrix) to get the result. Unfortunately the scipy implementation doesn't have a special case for boolean arrays so the full convolution is rather slow. Here's a function that takes advantage of the fact that the arrays contain only ones and zeroes:\nimport numpy as np\n\ndef sparse_convolve_of_bools(a, b):\n if a.size < b.size:\n a, b = b, a\n offsets = zip(*np.nonzero(b))\n n = len(offsets)\n dtype = np.byte if n < 128 else np.short if n < 32768 else np.int\n result = np.zeros(np.array(a.shape) + b.shape - (1,1), dtype=dtype)\n for o in offsets:\n result[o[0]:o[0] + a.shape[0], o[1]:o[1] + a.shape[1]] += a\n return result\n\nOn my machine it runs in less than 9 seconds for a 3000x2000 by 20x20 convolution. The running time depends on the number of ones in the smaller array, being 20ms per each nonzero element.\n",
"If your bits are really packed 8 per byte / 32 per int,\nand you can reduce your smallMatrix to 20x16,\nthen try the following, here for a single row.\n(newMatrix[x, y] = 1 when any bit of the 20x16 around x,y is 1 ??\nWhat are you really looking for ?)\npython -m timeit -s '\n\"\"\" slide 16-bit mask across 32-bit pairs bits[j], bits[j+1] \"\"\"\n\nimport numpy as np\n\nbits = np.zeros( 2000 // 16, np.uint16 ) # 2000 bits\nbits[::8] = 1\nmask = 32+16\nnhit = 16 * [0]\n\ndef hit16( bits, mask, nhit ):\n \"\"\"\n slide 16-bit mask across 32-bit pairs bits[j], bits[j+1]\n bits: long np.array( uint16 )\n mask: 16 bits, int\n out: nhit[j] += 1 where pair & mask != 0\n \"\"\"\n left = bits[0]\n for b in bits[1:]:\n pair = (left << 16) | b\n if pair: # np idiom for non-0 words ?\n m = mask\n for j in range(16):\n if pair & m:\n nhit[j] += 1\n # hitposition = jb*16 + j\n m <<= 1\n left = b\n # if any(nhit): print \"hit16:\", nhit\n\n' \\\n'\nhit16( bits, mask, nhit )\n'\n\n# 15 msec per loop, bits[::4] = 1\n# 11 msec per loop, bits[::8] = 1\n# mac g4 ppc\n\n"
] |
[
1,
1,
0
] |
[] |
[] |
[
"matrix",
"numpy",
"python"
] |
stackoverflow_0001803054_matrix_numpy_python.txt
|
Q:
Django filename from database with non-ascii characters
I'm trying to create a file dynamically in Django:
response = HttpResponse(mimetype='text/txt')
response['Content-Disposition'] = 'attachment; filename=%s' % filename # UnicodeEncodeError
response.write('text')
return response
If I hardcode the filename it works properly, but if I try to create the filename from DB data that contains non-ascii characters (like ó) I get a UnicodeEncodeError exception. How can I use the DB filename without getting an exception?
A:
from django.utils.encoding import smart_str
...
response['Content-Disposition'] = 'attachment; filename=%s' % smart_str(filename)
A:
You can fix the problem on the Django side but there is no guarantee it will work in all browsers.
See the testcases at http://greenbytes.de/tech/tc2231/.
For more details on this see this question, which links to a snippet to handle most cases.
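For completeness, a sketch (my addition, not from either answer) of the RFC 5987 style those testcases cover -- an ASCII fallback plus a UTF-8 filename* parameter:
# -*- coding: utf-8 -*-
from urllib import quote   # Python 2

def attachment_header(filename):
    """Content-Disposition value with an ASCII fallback plus an RFC 5987
    filename* parameter carrying percent-encoded UTF-8."""
    fallback = filename.encode('ascii', 'replace')
    encoded = quote(filename.encode('utf-8'))
    return "attachment; filename=\"%s\"; filename*=UTF-8''%s" % (fallback, encoded)

print attachment_header(u'canci\xf3n.txt')
# attachment; filename="canci?n.txt"; filename*=UTF-8''canci%C3%B3n.txt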
|
Django filename from database with non-ascii characters
|
I'm trying to create a file dynamically in Django:
response = HttpResponse(mimetype='text/txt')
response['Content-Disposition'] = 'attachment; filename=%s' % filename # UnicodeEncodeError
response.write('text')
return response
If I hardcode the filename it works properly, but if I try to create the filename from DB data that contains non-ascii characters (like ó) I get a UnicodeEncodeError exception. How can I use the DB filename without getting an exception?
|
[
"from django.utils.encoding import smart_str\n...\n\n response['Content-Disposition'] = 'attachment; filename=%s' % smart_str(filename)\n\n",
"You can fix the problem on the Django side but there is no guarantee it will work in all browsers.\nSee the testcases at http://greenbytes.de/tech/tc2231/.\nFor more details on this see this question, which links to a snippet to handle most cases.\n"
] |
[
4,
3
] |
[] |
[] |
[
"django",
"python",
"utf_8"
] |
stackoverflow_0001846497_django_python_utf_8.txt
|
Q:
Catching a drag exit in Qt?
I've got a custom widget descended from QWidget that I want to be able to drop onto, and while the drag is hovering over the widget I'd like to highlight it to provide a little visual feedback to the user. Seems to me the simplest way to do this would be to highlight when dragEnterEvent is called and unhighlight when the drag exits the widget, but how can I catch the drag exit? There doesn't seem to be a dragExitEvent event handler.
A:
Open Assistant on the Index tab, type "drag*event" and you will find dragLeaveEvent. :)
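A minimal PyQt4 sketch (my addition) of the highlight/unhighlight pattern using those two handlers:
from PyQt4 import QtGui

class DropTarget(QtGui.QWidget):
    def __init__(self, parent=None):
        QtGui.QWidget.__init__(self, parent)
        self.setAcceptDrops(True)
        self.setAutoFillBackground(True)

    def dragEnterEvent(self, event):
        event.acceptProposedAction()                       # we will take this drop
        self.setBackgroundRole(QtGui.QPalette.Highlight)   # visual feedback

    def dragLeaveEvent(self, event):
        self.setBackgroundRole(QtGui.QPalette.Window)      # un-highlight

    def dropEvent(self, event):
        self.setBackgroundRole(QtGui.QPalette.Window)
        event.acceptProposedAction()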
|
Catching a drag exit in Qt?
|
I've got a custom widget descended from QWidget that I want to be able to drop onto, and while the drag is hovering over the widget I'd like to highlight it to provide a little visual feedback to the user. Seems to me the simplest way to do this would be to highlight when dragEnterEvent is called and unhighlight when the drag exits the widget, but how can I catch the drag exit? There doesn't seem to be a dragExitEvent event handler.
|
[
"Open Assistant on the Index tab, type \"drag*event\" and you will find dragLeaveEvent. :)\n"
] |
[
6
] |
[] |
[] |
[
"drag_and_drop",
"pyqt",
"python",
"qt"
] |
stackoverflow_0001848430_drag_and_drop_pyqt_python_qt.txt
|
Q:
Can Unittest in Python run a list made of strings?
I am using Selenium to run tests on a website. I have many individual tests I need to run, and want to create a script that will run all of the Python files in a certain folder. I can get the names and import the modules, but once I do this I can't get unittest to run the files. Here is some of the test code I have created. My problem seems to be that once I glob the names they are input as strings, and I can't get away from it.
I want to write one of these files for each folder, or some way of executing all of the folders in a directory. Here is the code I have so far:
## This Module will execute all of the Admin>Vehicles>Add Vehicle (AVHC) Test Cases
import sys, glob, unittest
sys.path.append('/com/inthinc/python/tiwiPro/IE7/AVHC')
AddVehicle_IE7_tests = glob.glob('/com/inthinc/python/tiwipro/IE7/AVHC/*.py')
for i in range( len(AddVehicle_IE7_tests) ):
replaceme = AddVehicle_IE7_tests[i]
withoutpy = replaceme.replace( '.py', '')
withouttree1 = withoutpy.replace( '/com/inthinc/python/tiwipro/IE7/AVHC\\', '' )
exec("import " + withouttree1)
AddVehicle_IE7_tests[i] = withouttree1
sys.path.append('/com/inthinc/python/tiwiPro/FF3/AVHC')
AddVehicle_FF3_tests = glob.glob('/com/inthinc/python/tiwipro/FF3/AVHC/*.py')
for i in range( len(AddVehicle_FF3_tests) ):
replaceme = AddVehicle_FF3_tests[i]
withoutpy = replaceme.replace( '.py', '')
withouttree2 = withoutpy.replace( '/com/inthinc/python/tiwipro/FF3/AVHC\\', '' )
exec("import " + withouttree2)
print withouttree2
if __name__ == '__main__':
print AddVehicle_IE7_tests
unittest.TextTestRunner().run(AddVehicle_IE7_tests)
else:
unittest.TextTestRunner().run(AddVehicle_IE7_tests)
unittest.TextTestRunner().run(AddVehicle_FF3_tests)
print "success"
A:
Although I wouldn't exactly recommend this approach (nor probably what you're trying to do), here's a simple approach that appears to accomplish roughly what you want.
In file "runner.py" (for example, similar to your above):
import glob
import unittest
testfiles = glob.glob('subdir/*.py')
for name in testfiles:
execfile(name)
if __name__ == '__main__':
unittest.main()
In file subdir/file1.py:
class ClassA(unittest.TestCase):
def test01(self):
print self
In file subdir/file2.py:
class ClassB(unittest.TestCase):
def test01(self):
print self
Output when you run "runner.py":
C:\svn\stackoverflow>runner
test01 (__main__.ClassA)
.test01 (__main__.ClassB)
.
----------------------------------------------------------------------
Ran 2 tests in 0.004s
OK
Note that this is basically equivalent to textually including all the test files in the main file, similar to how #include "file.h" works in C. As you can see, the environment (namespace) for the sub-files is that of the file in which execfile() is called, which is why they don't even need to do their own "import unittest" calls. That should be okay if they never need to be run standalone, or you could just include the usual unittest boilerplate in each file if they do need to work on their own. You couldn't have duplicate class names anywhere in the included files, and you may notice other difficulties.
I'm pretty sure, however, that you'd be better off using something like nose or py.test which do this sort of "test collection" much better than unittest.
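That said, for the title question specifically, unittest itself can build a suite straight from a list of strings -- a sketch (my addition; the module names are hypothetical and must be importable from sys.path):
import unittest

names = ['AVHC_test1', 'AVHC_test2']   # dotted module names as plain strings
suite = unittest.TestLoader().loadTestsFromNames(names)
unittest.TextTestRunner(verbosity=2).run(suite)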
A:
## This will execute the tests we need to run
import sys, glob, os, time
def run_all_tests():
sys.path.append('/com/inthinc/python/tiwiPro/usedbyall/run files')
run_all_tests = glob.glob('/com/inthinc/python/tiwipro/usedbyall/run files/*.py')
for i in range( len(run_all_tests) ):
replaceme = run_all_tests[i]
withoutpy = replaceme.replace( '.py', '')
withouttree = withoutpy.replace( '/com/inthinc/python/tiwipro/usedbyall/run files\\', '' )
exec("import " + withouttree)
exec( withouttree + ".run_test()" )
if __name__ == '__main__':
os.system( "taskkill /im java.exe" )
if __name__ == '__main__':
os.startfile( "C:/com/inthinc/python/tiwiPro/usedbyall/start_selenium.bat" )
time.sleep( 10 )
run_all_tests()
This is what I ended up using. I simply added the run_test() method to each test so I can call them externally like a regular method. This works perfectly, and gives me more control of the test. Also I added a short line that will open the selenium RC server, and close it afterwards.
|
Can Unittest in Python run a list made of strings?
|
I am using Selenium to run tests on a website. I have many individual tests I need to run, and want to create a script that will run all of the Python files in a certain folder. I can get the names and import the modules, but once I do this I can't get unittest to run the files. Here is some of the test code I have created. My problem seems to be that once I glob the names they are input as strings, and I can't get away from it.
I want to write one of these files for each folder, or some way of executing all of the folders in a directory. Here is the code I have so far:
## This Module will execute all of the Admin>Vehicles>Add Vehicle (AVHC) Test Cases
import sys, glob, unittest
sys.path.append('/com/inthinc/python/tiwiPro/IE7/AVHC')
AddVehicle_IE7_tests = glob.glob('/com/inthinc/python/tiwipro/IE7/AVHC/*.py')
for i in range( len(AddVehicle_IE7_tests) ):
replaceme = AddVehicle_IE7_tests[i]
withoutpy = replaceme.replace( '.py', '')
withouttree1 = withoutpy.replace( '/com/inthinc/python/tiwipro/IE7/AVHC\\', '' )
exec("import " + withouttree1)
AddVehicle_IE7_tests[i] = withouttree1
sys.path.append('/com/inthinc/python/tiwiPro/FF3/AVHC')
AddVehicle_FF3_tests = glob.glob('/com/inthinc/python/tiwipro/FF3/AVHC/*.py')
for i in range( len(AddVehicle_FF3_tests) ):
replaceme = AddVehicle_FF3_tests[i]
withoutpy = replaceme.replace( '.py', '')
withouttree2 = withoutpy.replace( '/com/inthinc/python/tiwipro/FF3/AVHC\\', '' )
exec("import " + withouttree2)
print withouttree2
if __name__ == '__main__':
print AddVehicle_IE7_tests
unittest.TextTestRunner().run(AddVehicle_IE7_tests)
else:
unittest.TextTestRunner().run(AddVehicle_IE7_tests)
unittest.TextTestRunner().run(AddVehicle_FF3_tests)
print "success"
|
[
"Although I wouldn't exactly recommend this approach (nor probably what you're trying to do), here's a simple approach that appears to accomplish roughly what you want.\nIn file \"runner.py\" (for example, similar to your above):\n import glob\n import unittest\ntestfiles = glob.glob('subdir/*.py')\nfor name in testfiles:\n execfile(name)\n\nif __name__ == '__main__':\n unittest.main()\n\nIn file subdir/file1.py:\nclass ClassA(unittest.TestCase):\n def test01(self):\n print self\n\nIn file subdir/file2.py:\nclass ClassB(unittest.TestCase):\n def test01(self):\n print self\n\nOutput when you run \"runner.py\":\nC:\\svn\\stackoverflow>runner\ntest01 (__main__.ClassA)\n.test01 (__main__.ClassB)\n.\n----------------------------------------------------------------------\nRan 2 tests in 0.004s\n\nOK\n\nNote that this is basically equivalent to textually including all the test files in the main file, similar to how #include \"file.h\" works in C. As you can see, the environment (namespace) for the sub-files is that of the file in which execfile() is called, which is why they don't even need to do their own \"import unittest\" calls. That should be okay if they never need to be run standalone, or you could just include the usual unittest boilerplate in each file if they do need to work on their own. You couldn't have duplicate class names anywhere in the included files, and you may notice other difficulties.\nI'm pretty sure, however, that you'd be better off using something like nose or py.test which do this sort of \"test collection\" much better than unittest.\n",
"## This will execute the tests we need to run\n\nimport sys, glob, os, time\n\ndef run_all_tests():\n sys.path.append('/com/inthinc/python/tiwiPro/usedbyall/run files')\n run_all_tests = glob.glob('/com/inthinc/python/tiwipro/usedbyall/run files/*.py')\n\n for i in range( len(run_all_tests) ):\n replaceme = run_all_tests[i]\n withoutpy = replaceme.replace( '.py', '')\n withouttree = withoutpy.replace( '/com/inthinc/python/tiwipro/usedbyall/run files\\\\', '' )\n exec(\"import \" + withouttree)\n exec( withouttree + \".run_test()\" )\n if __name__ == '__main__':\n os.system( \"taskkill /im java.exe\" )\n\nif __name__ == '__main__':\n os.startfile( \"C:/com/inthinc/python/tiwiPro/usedbyall/start_selenium.bat\" )\n time.sleep( 10 )\n run_all_tests()\n\nThis is what I ended up using. I simply added the run_test() method to each test so I can call them externally like a regular method. This works perfectly, and gives me more control of the test. Also I added a short line that will open the selenium RC server, and close it afterwards.\n"
] |
[
0,
0
] |
[] |
[] |
[
"python",
"selenium_rc"
] |
stackoverflow_0001446317_python_selenium_rc.txt
|
Q:
lxml Changing Unicode Characters
I am using lxml to read through an xml file and change a few details. However, when running it I find that even if I just use lxml to read the file and then write it out again, as below:
fil='iTunes Music Library.XML'
tre=etree.parse(fil)
tre.write('temp.xml')
I find Queensrÿche converted to Queensr&#255;che. Anyone know how to fix this?
A:
Change your last line to:
tre.write('temp.xml', encoding='utf-8')
Otherwise lxml writes XML in ASCII encoding, so it has to escape all non-ASCII characters.
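If you also want the XML declaration to advertise the encoding, write() accepts an xml_declaration flag as well (sketch, my addition):
tre.write('temp.xml', encoding='utf-8', xml_declaration=True)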
|
lxml Changing Unicode Characters
|
I am using lxml to read through an xml file and change a few details. However, when running it I find that even if I just use lxml to read the file and then write it out again, as below:
fil='iTunes Music Library.XML'
tre=etree.parse(fil)
tre.write('temp.xml')
I find Queensrÿche converted to Queensr&#255;che. Anyone know how to fix this?
|
[
"Change your last line to:\ntre.write('temp.xml', encoding='utf-8')\n\nOtherwise lxml writes XML in ASCII encoding, so it have to escape all non-ASCII characters.\n"
] |
[
7
] |
[] |
[] |
[
"lxml",
"python",
"xml"
] |
stackoverflow_0001848371_lxml_python_xml.txt
|
Q:
How can I prevent a Python module from importing itself?
For instance, I want to make an SQLAlchemy plugin for another project. And I want to name that module sqlalchemy.py. The problem with this is that it prevents me from importing sqlalchemy:
#sqlalchemy.py
import sqlalchemy
This will make the module import itself. I've tried this, but it doesn't seem to work:
import sys
#Remove the current directory from the front of sys.path
if not sys.path[0]:
sys.path.pop(0)
import sqlalchemy
Any suggestions?
A:
Edit: as the OP has now mentioned that the issue is one of relative import being preferred to absolute, the simplest solution for the OP's specific problem is to add at the start of the module from __future__ import absolute_import which changes that "preference"/ordering.
The following still applies to the ticklish issue of two clashing absolute imports (which doesn't appear to be what the OP is currently facing...):
Once you've imported a module named x, that module's recorded in sys.modules['x'] -- changing sys.path as you're doing won't alter sys.modules. You'll also need to alter sys.modules directly.
E.g., consider:
$ cat a/foo.py
print __file__; import sys; sys.path.insert(0, "b"); del sys.modules["foo"]; import foo
$ cat b/foo.py
print __file__
$ python2.5 -c'import sys; sys.path.insert(0, "a"); import foo'
a/foo.py
b/foo.py
(running again will use and show the .pyc files instead of the .py ones of course).
Not the cleanest approach, and of course this way the original foo module is, inevitably, not accessible from the outside any more (since its sys.modules entry has been displaced), but you could play further fragile tricks as needed (stash sys.modules["foo"] somewhere before deleting it, after you import the other foo put that module somewhere else and reinstate the original sys.modules["foo"] -- etc, etc), depending on your exact needs. (Of course, avoiding the name clashes in the first place would almost invariably be simpler than waltzing all around them in this way;-).
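For the OP's actual situation the from __future__ fix is tiny -- a sketch (my addition), assuming the clashing sqlalchemy.py lives inside a package and Python 2.5+:
# myproject/plugins/sqlalchemy.py
from __future__ import absolute_import   # absolute imports win (the default in 3.x)
import sqlalchemy                         # the installed package, not this module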
A:
Don't name it sqlalchemy.py ?
Seriously. I think this is the problem absolute imports is supposed to solve. In python 2.5 it should not happen, but I could be wrong
A:
You may just be getting burned by differences between running code in the interactive interpreter and from a file. Remove the test for sys.path[0] being empty (when run from a file, it isn't) and the import should now work as you want.
$ more sqlalchemy.py
import sys
print sys.path[0]
sys.path.pop(0)
import sqlalchemy
print sqlalchemy.__file__
$ python sqlalchemy.py
/Users/nad
/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/sqlalchemy/__init__.pyc
$ python
Python 2.6.4 (r264:75706, Oct 28 2009, 20:34:51)
[GCC 4.0.1 (Apple Inc. build 5493)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys; print repr(sys.path[0])
''
EDIT: the above applies if your main module is sqlalchemy.py. If your module is imported by another module, you'll also have to alter sys.modules as Alex explains.
|
How can I prevent a Python module from importing itself?
|
For instance, I want to make an SQLAlchemy plugin for another project. And I want to name that module sqlalchemy.py. The problem with this is that it prevents me from importing sqlalchemy:
#sqlalchemy.py
import sqlalchemy
This will make the module import itself. I've tried this, but it doesn't seem to work:
import sys
#Remove the current directory from the front of sys.path
if not sys.path[0]:
sys.path.pop(0)
import sqlalchemy
Any suggestions?
|
[
"Edit: as the OP has now mentioned that the issue is one of relative import being preferred to absolute, the simplest solution for the OP's specific problem is to add at the start of the module from __future__ import absolute_import which changes that \"preference\"/ordering.\nThe following still applies to the ticklish issue of two clashing absolute imports (which doesn't appear to be what the OP is currently facing...):\nOnce you've imported a module named x, that module's recorded in sys.modules['x'] -- changing sys.path as you're doing won't alter sys.modules. You'll also need to alter sys.modules directly.\nE.g., consider:\n$ cat a/foo.py\nprint __file__; import sys; sys.path.insert(0, \"b\"); del sys.modules[\"foo\"]; import foo\n$ cat b/foo.py\nprint __file__\n$ python2.5 -c'import sys; sys.path.insert(0, \"a\"); import foo'\na/foo.py\nb/foo.py\n\n(running again will use and show the .pyc files instead of the .py ones of course).\nNot the cleanest approach, and of course this way the original foo module is, inevitably, not accessible from the outside any more (since its sys.modules entry has been displaced), but you could play further fragile tricks as needed (stash sys.modules[\"foo\"] somewhere before deleting it, after you import the other foo put that module somewhere else and reinstate the original sys.modules[\"foo\"] -- etc, etc), depending on your exact needs. (Of course, avoiding the name clashes in the first place would almost invariably be simpler than waltzing all around them in this way;-).\n",
"Don't name it sqlalchemy.py ?\nSeriously. I think this is the problem absolute imports is supposed to solve. In python 2.5 it should not happen, but I could be wrong\n",
"You may just be getting burned by differences between running code in the interactive interpreter and from a file. Remove the test for sys.path[0] being empty (when run from a file, it isn't) and the import should now work as you want.\n$ more sqlalchemy.py\nimport sys\nprint sys.path[0]\nsys.path.pop(0)\nimport sqlalchemy\nprint sqlalchemy.__file__\n$ python sqlalchemy.py \n/Users/nad\n/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/sqlalchemy/__init__.pyc\n$ python\nPython 2.6.4 (r264:75706, Oct 28 2009, 20:34:51) \n[GCC 4.0.1 (Apple Inc. build 5493)] on darwin\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import sys; print repr(sys.path[0])\n''\n\nEDIT: the above applies if your main module is sqlalchemy.py. If your module is imported by another module, you'll also have to alter sys.modules as Alex explains.\n"
] |
[
14,
2,
2
] |
[] |
[] |
[
"import",
"module",
"python"
] |
stackoverflow_0001848268_import_module_python.txt
|
Q:
Can someone explain why scipy.integrate.quad gives different results for equally long ranges while integrating sin(X)?
I am trying to numerically integrate an arbitrary (known when I code) function in my program
using numerical integration methods. I am using Python 2.5.2 along with SciPy's numerical integration package. In order to get a feel for it, I decided to try integrating sin(x) and observed this behavior:
>>> from math import pi
>>> from scipy.integrate import quad
>>> from math import sin
>>> def integrand(x):
... return sin(x)
...
>>> quad(integrand, -pi, pi)
(0.0, 4.3998892617846002e-14)
>>> quad(integrand, 0, 2*pi)
(2.2579473462709165e-16, 4.3998892617846002e-14)
I find this behavior odd because -
1. In ordinary integration, integrating over the full cycle gives zero.
2. In numerical integration, this (1) isn't necessarily the case, because you may just be approximating the total area under the curve.
In any case, either assuming 1 is True or assuming 2 is True, I find the behavior to be inconsistent. Either both integrations (-pi to pi and 0 to 2*pi) should return 0.0 (first value in the tuple is the result and the second is the error) or return 2.257...
Can someone please explain why this is happening? Is this really an inconsistency? Can someone also tell me if I am missing something really basic about numerical methods?
In any case, in my final application, I plan to use the above method to find the arc length of a function. If someone has experience in this area, please advise me on the best policy for doing this in Python.
Edit
Note
I already have the first differential values at all points in the range stored in an array.
Current error is tolerable.
End note
I have read Wikipedia on this. As Dimitry has pointed out, I will be integrating sqrt(1+diff(f(x), x)^2) to get the Arc Length. What I wanted to ask was - is there a better approximation/ Best practice(?) / faster way to do this. If more context is needed, I'll post it separately/ post context here, as you wish.
A:
The quad function is a function from an old Fortran library. It works by judging by the flatness and slope of the function it is integrating how to treat the step size it uses for numerical integration in order to maximize efficiency. What this means is that you may get slightly different answers from one region to the next even if they're analytically the same.
Without a doubt both integrations should return zero. Returning something that is 1/(10 trillion) is pretty close to zero! The slight differences are due to the way quad is rolling over sin and changing its step sizes. For your planned task, quad will be all you need.
EDIT:
For what you're doing I think quad is fine. It is fast and pretty accurate. My final statement is use it with confidence unless you find something that really has gone quite awry. If it doesn't return a nonsensical answer then it is probably working just fine. No worries.
A:
I think it is probably machine precision since both answers are effectively zero.
If you want an answer from the horse's mouth I would post this question on the scipy discussion board
A:
I would say that a number O(10^-14) is effectively zero. What's your tolerance?
It might be that the algorithm underlying quad isn't the best. You might try another method for integration and see if that improves things. A 5th order Runge-Kutta can be a very nice general purpose technique.
It could be just the nature of floating point numbers: "What Every Computer Scientist Should Know About Floating Point Arithmetic".
A:
This output seems correct to me since you have an absolute error estimate here. The integral of sin(x) over a full period (any interval of length 2*pi) should indeed be zero, in both ordinary and numeric integration, and your results are close to that value.
To evaluate arc length you should calculate integral for sqrt(1+diff(f(x), x)^2) function, where diff(f(x), x) is derivative of f(x). See also Arc length
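A quad-based sketch of that arc-length integral (my addition; f(x) = sin(x) and the interval are just examples):
from math import sqrt, cos, pi
from scipy.integrate import quad

# arc length of f(x) = sin(x) on [0, 2*pi]; here diff(f(x), x) = cos(x)
length, abserr = quad(lambda x: sqrt(1.0 + cos(x)**2), 0, 2*pi)
print length, abserr   # about 7.6404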
A:
0.0 == 2.3e-16 (absolute error tolerance 4.4e-14)
Both answers are the same and correct i.e., zero within the given tolerance.
A:
The difference comes from the fact that sin(x)=-sin(-x) exactly even in finite precision. Whereas finite precision only gives sin(x)~sin(x+2*pi) approximately. Sure it would be nice if quad were smart enough to figure this out, but it really has no way of knowing apriori that the integral over the two intervals you give are equivalent or that the the first is a better result.
|
Can someone explain why scipy.integrate.quad gives different results for equally long ranges while integrating sin(X)?
|
I am trying to numerically integrate an arbitrary (known when I code) function in my program
using numerical integration methods. I am using Python 2.5.2 along with SciPy's numerical integration package. In order to get a feel for it, I decided to try integrating sin(x) and observed this behavior:
>>> from math import pi
>>> from scipy.integrate import quad
>>> from math import sin
>>> def integrand(x):
... return sin(x)
...
>>> quad(integrand, -pi, pi)
(0.0, 4.3998892617846002e-14)
>>> quad(integrand, 0, 2*pi)
(2.2579473462709165e-16, 4.3998892617846002e-14)
I find this behavior odd because -
1. In ordinary integration, integrating over the full cycle gives zero.
2. In numerical integration, this (1) isn't necessarily the case, because you may just be approximating the total area under the curve.
In any case, either assuming 1 is True or assuming 2 is True, I find the behavior to be inconsistent. Either both integrations (-pi to pi and 0 to 2*pi) should return 0.0 (first value in the tuple is the result and the second is the error) or return 2.257...
Can someone please explain why this is happening? Is this really an inconsistency? Can someone also tell me if I am missing something really basic about numerical methods?
In any case, in my final application, I plan to use the above method to find the arc length of a function. If someone has experience in this area, please advise me on the best policy for doing this in Python.
Edit
Note
I already have the first differential values at all points in the range stored in an array.
Current error is tolerable.
End note
I have read Wikipedia on this. As Dimitry has pointed out, I will be integrating sqrt(1+diff(f(x), x)^2) to get the Arc Length. What I wanted to ask was - is there a better approximation/ Best practice(?) / faster way to do this. If more context is needed, I'll post it separately/ post context here, as you wish.
|
[
"The quad function is a function from an old Fortran library. It works by judging by the flatness and slope of the function it is integrating how to treat the step size it uses for numerical integration in order to maximize efficiency. What this means is that you may get slightly different answers from one region to the next even if they're analytically the same.\nWithout a doubt both integrations should return zero. Returning something that is 1/(10 trillion) is pretty close to zero! The slight differences are due to the way quad is rolling over sin and changing its step sizes. For your planned task, quad will be all you need.\nEDIT:\nFor what you're doing I think quad is fine. It is fast and pretty accurate. My final statement is use it with confidence unless you find something that really has gone quite awry. If it doesn't return a nonsensical answer then it is probably working just fine. No worries.\n",
"I think it is probably machine precision since both answers are effectively zero.\nIf you want an answer from the horse's mouth I would post this question on the scipy discussion board\n",
"I would say that a number O(10^-14) is effectively zero. What's your tolerance?\nIt might be that the algorithm underlying quad isn't the best. You might try another method for integration and see if that improves things. A 5th order Runge-Kutta can be a very nice general purpose technique.\nIt could be just the nature of floating point numbers: \"What Every Computer Scientist Should Know About Floating Point Arithmetic\".\n",
"This output seems correct to me since you have absolute error estimate here. The integral value of sin(x) is indeed should have value of zero for full period (any interval of 2*pi length) in both ordinary and numeric integration and your results is close to that value.\nTo evaluate arc length you should calculate integral for sqrt(1+diff(f(x), x)^2) function, where diff(f(x), x) is derivative of f(x). See also Arc length\n",
"0.0 == 2.3e-16 (absolute error tolerance 4.4e-14)\n\nBoth answers are the same and correct i.e., zero within the given tolerance.\n",
"The difference comes from the fact that sin(x)=-sin(-x) exactly even in finite precision. Whereas finite precision only gives sin(x)~sin(x+2*pi) approximately. Sure it would be nice if quad were smart enough to figure this out, but it really has no way of knowing apriori that the integral over the two intervals you give are equivalent or that the the first is a better result.\n"
] |
[
10,
6,
6,
4,
3,
2
] |
[] |
[] |
[
"integration",
"numerical_methods",
"python",
"scipy"
] |
stackoverflow_0000581186_integration_numerical_methods_python_scipy.txt
|
Q:
matching stored keywords/phrases in text
I have a database table with around 1000 keywords/phrases (one to four words long) - This table changes rarely, so I could extract the data into something more useful (like a regular expression?) - So this is not finding / guessing at keywords based on natural language processing..
I then have a user inputting some text into a form that I'd like to match against my keywords and phrases.
The program would then store a link to each phrase matched next to the text.
So if we ran the algorithm on this question text against a few phrases that are in here, we'd get a result like so:
{"inputting some text" : 1,
"extract the data" : 1,
"a phrase not here" : 0}
What are my options?
Compile a regular expression
Some sort of SQL query
A third way?
Bearing in mind that there are about 1000 possible phrases...
I'm running Django / Python with MySQL.
edit: I'm currently doing this:
>>> text_input = "This is something with first phrase in and third phrase"
>>> regex = "first phrase|second phrase|third phrase"
>>> p = re.compile(regex, re.I)
>>> p.findall(text_input)
['first phrase','third phrase']
A:
The algorithm for this job is Aho-Corasick ... see the link at the bottom which points to a C-extension for Python.
A:
If I understand you correctly, you have a unique set of strings, that you want to compare an input strings against. In this case you could use set to store both processing results and db values. Comparison then could be done as follows:
>>> db = {'abc', 'def', 'jhi', 'asdf'}
>>> inpt = {'abc', 'tmp'}
>>> db & inpt
{'abc'}
The further conversion to the dictionary is trivial.
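The "trivial" conversion might look like this (my sketch), producing the 0/1 mapping the question asked for:
found = db & inpt
results = dict((phrase, int(phrase in found)) for phrase in inpt)
# {'abc': 1, 'tmp': 0}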
A:
Just a heads up... you may be interested in django's support for regex in queries
Example from the linked django docs:
Entry.objects.get(title__regex=r'^(An?|The) +')
A:
Here is a slight variation on SilentGhost's answer. You load in the keywords line by line and store them in a set. For each keyword found in the user input, increase the corresponding entry in the results.
from StringIO import StringIO          # Python 2
from collections import defaultdict

keyword_file = StringIO("""inputting some text
extract the data
a phrase not here""")

keywords = set(line.strip() for line in keyword_file)

userinput = "a user inputting some text"   # sample input for illustration

results = defaultdict(int)
for phrase in keywords:
    if userinput.find(phrase) != -1:
        results[phrase] += 1

print results
Hope this points you in the right direction. Not entirely sure this is what you were asking but it's my best guess.
Do you care about speed? Why don't you like the method you use now? Does your method work?
A:
Once you've formed your pattern such as (first phrase)|(the second)|(and another) (with the parentheses I indicate) and compiled it into a regular expression object r, a good way to loop on matches and identify which match it was is:
class GroupCounter(object):
def __init__(self, phrases):
self.phrases = phrases
self.counts = [0] * len(phrases)
def __call__(self, mo):
self.counts[mo.lastindex - 1] += 1
return ''
def asdict(self):
return dict(zip(self.phrases, self.counts))
g = GroupCounter(['first phrase', 'the second', 'and another'])
r.sub(g, thetext)
print g.asdict()
It would also be reasonable to have the GroupCounter instance also build the regex object, since it does need the list of phrases in the same order as it appears in the regex itself.
A:
If you have 1000 phrases, and you're searching an input string to find which of those phrases are substrings, you're probably not going to be happy with the performance you get from using a big regular expression. A trie is a bit more work to implement, but it's a lot more efficient: the regular expression a|b|c|d|e does five tests on each character in a given input string, while a trie only does one. You could conceivably also use a lexer, like Plex, that produces a DFA.
Edit:
I appear to be procrastinating this morning. Try this:
class Trie(object):
def __init__(self):
self.children = {}
self.item = None
def add(self, item, remainder=None):
"""Add an item to the trie."""
if remainder == None:
remainder = item
if remainder == "":
self.item = item
else:
ch = remainder[0]
if not self.children.has_key(ch):
self.children[ch] = Trie()
self.children[ch].add(item, remainder[1:])
def find(self, word):
"""Return True if word is an item in the trie."""
if not word:
return self.item is not None  # True only if a stored item ends exactly here
ch = word[0]
if not self.children.has_key(ch):
return False
return self.children[ch].find(word[1:])
def find_words(self, word, results=None):
"""Find all items in the trie that word begins with."""
if results == None:
results = []
if self.item:
results.append(self.item)
if not word:
return results
ch = word[0]
if not self.children.has_key(ch):
return results
return self.children[ch].find_words(word[1:], results)
A quick test (words.txt is the BSD words file, a very handy thing to have around - it contains about 240,000 words):
>>> t = Trie()
>>> with open(r'c:\temp\words.txt', 'r') as f:
for word in f:
t.add(word.strip())
That takes about 15 seconds on my machine. This, however, is almost instantaneous:
>>> s = "I played video games in a drunken haze."
>>> r = []
>>> for i in range(len(s)):
r.extend(t.find_words(s[i:]))
>>> r
['I', 'p', 'play', 'l', 'la', 'lay', 'a', 'ay', 'aye', 'y', 'ye', 'yed', 'e', 'd', 'v', 'video', 'i', 'id', 'ide', 'd', 'de', 'e', 'o', 'g', 'ga', 'gam', 'game', 'a', 'am', 'ame', 'm', 'me', 'e', 'es', 's', 'i', 'in', 'n', 'a', 'd', 'drunk', 'drunken', 'r', 'run', 'u', 'un', 'unken', 'n', 'k', 'ken', 'e', 'en', 'n', 'h', 'ha', 'haze', 'a', 'z', 'e']
Yes, unken is in words.txt. I have no idea why.
Oh, and I did try to compare with regular expressions:
>>> import re
>>> with open(r'c:\temp\words.txt', 'r') as f:
p = "|".join([l.strip() for l in f])
>>> p = re.compile(p)
Traceback (most recent call last):
File "<pyshell#250>", line 1, in <module>
p = re.compile(p)
File "C:\Python26\lib\re.py", line 188, in compile
return _compile(pattern, flags)
File "C:\Python26\lib\re.py", line 241, in _compile
p = sre_compile.compile(pattern, flags)
File "C:\Python26\lib\sre_compile.py", line 529, in compile
groupindex, indexgroup
OverflowError: regular expression code size limit exceeded
|
matching stored keywords/phrases in text
|
I have a database table with around 1000 keywords/phrases (one to four words long) - This table changes rarely, so I could extract the data into something more useful (like a regular expression?) - So this is not finding / guessing at keywords based on natural language processing..
I then have a user inputting some text into a form that I'd like to match against my keywords and phrases.
The program would then store a link to each phrase matched next to the text.
So if we ran the algorithm on this question text against a few phrases that are in here, we'd get a result like so:
{"inputting some text" : 1,
"extract the data" : 1,
"a phrase not here" : 0}
What are my options?
Compile a regular expression
Some sort of SQL query
A third way?
Bearing in mind that there are about 1000 possible phrases...
I'm running Django / Python with MySQL.
edit: I'm currently doing this:
>>> text_input = "This is something with first phrase in and third phrase"
>>> regex = "first phrase|second phrase|third phrase"
>>> p = re.compile(regex, re.I)
>>> p.findall(text_input)
['first phrase','third phrase']
|
[
"The algorithm for this job is Aho-Corasick ... see the link at the bottom whch points to a C-extension for Python.\n",
"If I understand you correctly, you have a unique set of strings, that you want to compare an input strings against. In this case you could use set to store both processing results and db values. Comparison then could be done as follows:\n>>> db = {'abc', 'def', 'jhi', 'asdf'}\n>>> inpt = {'abc', 'tmp'}\n>>> db & inpt\n{'abc'}\n\nThe further conversion to the dictionary is trivial.\n",
"Just a heads up... you may be interested in django's support for regex in queries\nExample from the linked django docs:\nEntry.objects.get(title__regex=r'^(An?|The) +')\n\n",
"Here is a slight variation on SilentGhost's answer. You load in the keywords line by line. store them in a set. for each keyword that you find in the user input increase the corresponding entry in the results. \nkeyword_file = StringIO(\"\"\"inputting some text\n extract the data\n a phrase not here\"\"\")\n\nkeywords = set(line.strip() for line in keyword_file)\n\nresults = defaultdict(int)\nfor phrase in keywords:\n if userinput.find(phrase) != -1:\n results[phrase] += 1\n\nprint results\n\nHope this points you in the right direction. Not entirely sure this is what you were asking but it's my best guess. \nDo you care about speed? Why don't you like the method you use now? Does your method work? \n",
"Once you've formed your pattern such as (first phrase)|(the second)|(and another) (with the parentheses I indicate) and compiled it into a regular expression object r, a good way to loop on matches and identify which match it was is:\nclass GroupCounter(object):\n def __init__(self, phrases):\n self.phrases = phrases\n self.counts = [0] * len(phrases)\n def __call__(self, mo):\n self.counts[mo.lastindex - 1] += 1\n return ''\n def asdict(self):\n return dict(zip(self.phrases, self.counts))\n\ng = GroupCounter(['first phrase', 'the second', 'and another'])\nr.sub(g, thetext)\nprint g.asdict()\n\nIt would also be reasonable to have the GroupCounter instance also build the regex object, since it does need the list of phrases in the same order as it appears in the regex itself.\n",
"If you have 1000 phrases, and you're searching an input string to find which of those phrases are substrings, you're probably not going to be happy with the performance you get from using a big regular expression. A trie is a bit more work to implement, but it's a lot more efficient: the regular expression a|b|c|d|e does five tests on each character in a given input string, while a trie only does one. You could conceivably also use a lexer, like Plex, that produces a DFA.\nEdit:\nI appear to be procrastinating this morning. Try this:\n class Trie(object):\n def __init__(self):\n self.children = {}\n self.item = None\n def add(self, item, remainder=None):\n \"\"\"Add an item to the trie.\"\"\"\n if remainder == None:\n remainder = item\n if remainder == \"\":\n self.item = item\n else:\n ch = remainder[0]\n if not self.children.has_key(ch):\n self.children[ch] = Trie()\n self.children[ch].add(item, remainder[1:])\n def find(self, word):\n \"\"\"Return True if word is an item in the trie.\"\"\"\n if not word:\n return True\n ch = word[0]\n if not self.children.has_key(ch):\n return False\n return self.children[ch].find(word[1:])\n def find_words(self, word, results=None):\n \"\"\"Find all items in the trie that word begins with.\"\"\"\n if results == None:\n results = []\n if self.item:\n results.append(self.item)\n if not word:\n return results\n ch = word[0]\n if not self.children.has_key(ch):\n return results\n return self.children[ch].find_words(word[1:], results)\n\nA quick test (words.txt is the BSD words file, a very handy thing to have around - it contains about 240,000 words):\n>>> t = Trie()\n>>> with open(r'c:\\temp\\words.txt', 'r') as f:\n for word in f:\n t.add(word.strip())\n\nThat takes about 15 seconds on my machine. This, however, is almost instantaneous:\n>>> s = \"I played video games in a drunken haze.\"\n>>> r = []\n>>> for i in range(len(s)):\n r.extend(t.find_words(s[i:]))\n>>> r\n['I', 'p', 'play', 'l', 'la', 'lay', 'a', 'ay', 'aye', 'y', 'ye', 'yed', 'e', 'd', 'v', 'video', 'i', 'id', 'ide', 'd', 'de', 'e', 'o', 'g', 'ga', 'gam', 'game', 'a', 'am', 'ame', 'm', 'me', 'e', 'es', 's', 'i', 'in', 'n', 'a', 'd', 'drunk', 'drunken', 'r', 'run', 'u', 'un', 'unken', 'n', 'k', 'ken', 'e', 'en', 'n', 'h', 'ha', 'haze', 'a', 'z', 'e']\n\nYes, unken is in words.txt. I have no idea why.\nOh, and I did try to compare with regular expressions:\n >>> import re\n >>> with open(r'c:\\temp\\words.txt', 'r') as f:\n p = \"|\".join([l.strip() for l in f])\n\n >>> p = re.compile(p)\n\n Traceback (most recent call last):\n File \"<pyshell#250>\", line 1, in <module>\n p = re.compile(p)\n File \"C:\\Python26\\lib\\re.py\", line 188, in compile\n return _compile(pattern, flags)\n File \"C:\\Python26\\lib\\re.py\", line 241, in _compile\n p = sre_compile.compile(pattern, flags)\n File \"C:\\Python26\\lib\\sre_compile.py\", line 529, in compile\n groupindex, indexgroup\nOverflowError: regular expression code size limit exceeded\n\n"
] |
[
1,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"django",
"mysql",
"python",
"regex"
] |
stackoverflow_0001846833_django_mysql_python_regex.txt
|
Q:
What is the difference between M2Crypto's set_client_CA_list_from_file() and load_verify_info() and when would you use each?
The M2Crypto library has a few CA-related functions on its SSL.Context object, but the documentation is very unclear as to when you would use certain functions and why. In fact, the docs for almost all of them are, "Load CA certs into the context," so it seems possible that they all do the same thing.
There are several examples that use both set_client_CA_list_from_file() and load_verify_info(), but there are also other similar functions like load_client_ca() and load_verify_locations().
I am writing both client and server pieces. What functions should I use and why? What specifically do they do?
Edit:
Looking through the code I see:
# Deprecated.
load_client_CA = load_client_ca = set_client_CA_list_from_file
and
# Deprecated.
load_verify_info = load_verify_locations
So that helps a little. This brings us down to two functions: set_client_CA_list_from_file() and load_verify_locations(). But I still can't quite tell the difference between the two.
A:
If your server requires the client to present a certificate, it can restrict who are the valid issuers of the client certificates by specifying the issuers calling set_client_CA_list_from_file. This is actually pretty rare.
The client specifies who are the valid server certificate issuers by calling load_verify_locations. Almost all clients should do this.
Both client and server can call load_cert to set their own certificate. Servers should almost always do this. Clients should probably do this only if the server requires the client to present a certificate.
I recommend you pick a copy of Network Security with OpenSSL by John Viega, Matt Messier and Pravir Chandra, ISBN 059600270X, which should clarify these issues in more detail.
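A minimal sketch of both contexts (my addition; the certificate file names are placeholders, not from the original answer):
from M2Crypto import SSL

# server side: present our cert, require and verify client certs
srv_ctx = SSL.Context('sslv23')
srv_ctx.load_cert('server-cert.pem', 'server-key.pem')
srv_ctx.set_verify(SSL.verify_peer | SSL.verify_fail_if_no_peer_cert, depth=9)
srv_ctx.load_verify_locations('trusted-client-ca.pem')          # how we verify them
srv_ctx.set_client_CA_list_from_file('trusted-client-ca.pem')   # issuers we advertise

# client side: verify the server against CAs we trust
cli_ctx = SSL.Context('sslv23')
cli_ctx.load_verify_locations('trusted-server-ca.pem')
cli_ctx.set_verify(SSL.verify_peer, depth=9)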
|
What is the difference between M2Crypto's set_client_CA_list_from_file() and load_verify_info() and when would you use each?
|
The M2Crypto library has a few CA-related functions on its SSL.Context object, but the documentation is very unclear as to when you would use certain functions and why. In fact, the docs for almost all of them are, "Load CA certs into the context," so it seems possible that they all do the same thing.
There are several examples that use both set_client_CA_list_from_file() and load_verify_info(), but there are also other similar functions like load_client_ca() and load_verify_locations().
I am writing both client and server pieces. What functions should I use and why? What specifically do they do?
Edit:
Looking through the code I see:
# Deprecated.
load_client_CA = load_client_ca = set_client_CA_list_from_file
and
# Deprecated.
load_verify_info = load_verify_locations
So that helps a little. This brings us down to two functions: set_client_CA_list_from_file() and load_verify_locations(). But I still can't quite tell the difference between the two.
|
[
"If your server requires the client to present a certificate, it can restrict who are the valid issuers of the client certificates by specifying the issuers calling set_client_CA_list_from_file. This is actually pretty rare.\nThe client specifies who are the valid server certificate issuers by calling load_verify_locations. Almost all clients should do this.\nBoth client and server can call load_cert to set their own certificate. Servers should almost always do this. Clients should probably do this only if the server requires the client to present a certificate.\nI recommend you pick a copy of Network Security with OpenSSL by John Viega, Matt Messier and Pravir Chandra, ISBN 059600270X, which should clarify these issues in more detail.\n"
] |
[
2
] |
[] |
[] |
[
"m2crypto",
"python"
] |
stackoverflow_0001848160_m2crypto_python.txt
|
Q:
Decorators on Django Template Filters?
I have a template filter that performs a very simple task and works well, but I would like to use a decorator on it. Unfortunately the decorator causes a nasty django error that doesn't make any sense...
Code that works:
@register.filter(name="has_network")
def has_network(profile, network):
hasnetworkfunc = getattr(profile, "has_%s" % network)
return hasnetworkfunc()
With Decorator (doesn't work):
@register.filter(name="has_network")
@cache_function(30)
def has_network(profile, network):
hasnetworkfunc = getattr(profile, "has_%s" % network)
return hasnetworkfunc()
Here is the error:
TemplateSyntaxError at /
Caught an exception while rendering:
pop from empty list
I have tried setting break points inside the decorator and I am reasonably confident that it is not even being called...
But just in case here is the decorator (I know someone will ask for it)
I replaced the decorator (temporarily) with a mock decorator that does nothing, but I still get the same error
def cache_function(cache_timeout):
def wrapper(fn):
def decorator(*args, **kwargs):
return fn(*args, **kwargs)
return decorator
return wrapper
edit CONFIRMED: It is caused by the decorator taking *args and **kwargs. I assume pop() is being called to ensure filters all take at least one arg?
changing the decorator to this fixes the problem:
def cache_function(cache_timeout):
def wrapper(fn):
def decorator(arg1, arg2):
return fn(arg1, arg2)
return decorator
return wrapper
Unfortunately that ruins the generic nature of the decorator :/ what to do now?
A:
Final Answer: Add an extra argument to the decorator indicating what is being decorated
There may be something more elegant, but this works.
from django.core.cache import cache
from django.db.models.query import QuerySet
try:
from cPickle import dumps
except:
from pickle import dumps
from hashlib import sha1
cache_miss = object()
class CantPickleAQuerySet(Exception): pass
def cache_function(cache_timeout, func_type='generic'):
def wrapper(fn):
def decorator(*args, **kwargs):
try:
cache_identifiers = "%s%s%s%s" % (
fn.__module__,
fn.__name__,
dumps(args),
dumps(kwargs)
)
except Exception, e:
print "Error: %s\nFailed to generate cache key: %s%s" % (e, fn.__module__, fn.__name__)
return fn(*args, **kwargs)
cache_key = sha1(cache_identifiers).hexdigest()
value = cache.get(cache_key, cache_miss)
if value is cache_miss:
value = fn(*args, **kwargs)
if isinstance(value, QuerySet):
raise CantPickleAQuerySet("You can't cache a queryset. But you CAN cache a list! just convert your Queryset (the value you were returning) to a list like so `return list(queryset)`")
try:
cache.set(cache_key, value, cache_timeout)
except Exception, e:
print "Error: %s\nFailed to cache: %s\nvalue: %s" % (e, cache_indentifiers, value)
return value
no_arg2 = object()
def filter_decorator(arg1, arg2=no_arg2):
if arg2 is no_arg2:
return decorator(arg1)
else:
return decorator(arg1, arg2)
if func_type == 'generic':
return decorator
elif func_type == 'filter':
return filter_decorator
return wrapper
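Applying it to the filter from the question then looks like this (sketch):
@register.filter(name="has_network")
@cache_function(30, func_type='filter')
def has_network(profile, network):
    hasnetworkfunc = getattr(profile, "has_%s" % network)
    return hasnetworkfunc()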
|
Decorators on Django Template Filters?
|
I have a template filter that performs a very simple task and works well, but I would like to use a decorator on it. Unfortunately the decorator causes a nasty django error that doesn't make any sense...
Code that works:
@register.filter(name="has_network")
def has_network(profile, network):
hasnetworkfunc = getattr(profile, "has_%s" % network)
return hasnetworkfunc()
With Decorator (doesn't work):
@register.filter(name="has_network")
@cache_function(30)
def has_network(profile, network):
hasnetworkfunc = getattr(profile, "has_%s" % network)
return hasnetworkfunc()
Here is the error:
TemplateSyntaxError at /
Caught an exception while rendering:
pop from empty list
I have tried setting break points inside the decorator and I am reasonably confident that it is not even being called...
But just in case here is the decorator (I know someone will ask for it)
I replaced the decorator (temporarily) with a mock decorator that does nothing, but I still get the same error
def cache_function(cache_timeout):
def wrapper(fn):
def decorator(*args, **kwargs):
return fn(*args, **kwargs)
return decorator
return wrapper
edit CONFIRMED: It is caused by the decorator taking *args and **kwargs. I assume pop() is being called to ensure filters all take at least one arg?
changing the decorator to this fixes the problem:
def cache_function(cache_timeout):
def wrapper(fn):
def decorator(arg1, arg2):
return fn(arg1, arg2)
return decorator
return wrapper
Unfortunately that ruins the generic nature of the decorator :/ what to do now?
|
[
"Final Answer: Add an extra argument to the decorator indicating what is being decorated\nThere may be something more elegant, but this works.\nfrom django.core.cache import cache\nfrom django.db.models.query import QuerySet\ntry:\n from cPickle import dumps\nexcept:\n from pickle import dumps\nfrom hashlib import sha1\n\ncache_miss = object()\n\nclass CantPickleAQuerySet(Exception): pass\n\ndef cache_function(cache_timeout, func_type='generic'):\n def wrapper(fn):\n def decorator(*args, **kwargs):\n try:\n cache_indentifiers = \"%s%s%s%s\" % (\n fn.__module__,\n fn.__name__,\n dumps(args),\n dumps(kwargs)\n )\n except Exception, e:\n print \"Error: %s\\nFailed to generate cache key: %s%s\" % (e, fn.__module__, fn.__name__)\n return fn(*args, **kwargs)\n\n cache_key = sha1(cache_indentifiers).hexdigest()\n\n value = cache.get(cache_key, cache_miss)\n\n if value is cache_miss:\n value = fn(*args, **kwargs)\n\n if isinstance(value, QuerySet):\n raise CantPickleAQuerySet(\"You can't cache a queryset. But you CAN cache a list! just convert your Queryset (the value you were returning) to a list like so `return list(queryset)`\")\n\n try:\n cache.set(cache_key, value, cache_timeout)\n except Exception, e:\n print \"Error: %s\\nFailed to cache: %s\\nvalue: %s\" % (e, cache_indentifiers, value)\n\n return value\n\n no_arg2 = object()\n def filter_decorator(arg1, arg2=no_arg2):\n if arg2 is no_arg2:\n return decorator(arg1)\n else:\n return decorator(arg1, arg2)\n\n if func_type == 'generic':\n return decorator\n\n elif func_type == 'filter':\n return filter_decorator\n\n return wrapper\n\n"
] |
[
0
] |
[] |
[] |
[
"django",
"django_templates",
"filter",
"python",
"templatetags"
] |
stackoverflow_0001849243_django_django_templates_filter_python_templatetags.txt
|
Q:
How to do conditional character replacement within a string
I have a unicode string in Python and basically need to go through, character by character and replace certain ones based on a list of rules. One such rule is that a is changed to ö if a is after n. Also, if there are two vowel characters in a row, they get replaced by one vowel character and :. So if I have the string "natarook", what is the easiest and most efficient way of getting "nötaro:k"? Using Python 2.6 and CherryPy 3.1 if that matters.
edit: two vowels in a row means two of the same vowel (oo, aa, ii)
A:
# -*- coding: utf-8 -*-
def subpairs(s, prefix, suffix):
def sub(i, sentinal=object()):
r = prefix.get(s[i:i+2], sentinal)
if r is not sentinal: return r
r = suffix.get(s[i-1:i+1], sentinal)
if r is not sentinal: return r
return s[i]
s = '\0'+s+'\0'
return ''.join(sub(i) for i in xrange(1,len(s)))
vowels = [(v+v, u':') for v in 'aeiou']
prefix = {}
suffix = {'na':u'ö'}
suffix.update(vowels)
print subpairs('natarook', prefix, suffix)
# prints: nötaro:k
prefix = {'na':u'ö'}
suffix = dict(vowels)
print subpairs('natarook', prefix, suffix)
# prints: öataro:k
A:
"I know, I'll use regular expressions!"
But seriously, regexes are really good for string manipulation.
You could write one per rule, like so:
s/na/nö/g
s/([aeiou])\1/\1:/g
Or you could generate them at runtime from some other source which lists them all.
A:
Focus on easy and correct first, then consider efficiency if profiling indicates it's a bottleneck.
The simple approach is:
prev = None
for ch in string:
if ch == 'a':
if prev == 'n':
...
prev = ch
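Fleshed out, that skeleton could look like this (my sketch; the vowel set and the prev bookkeeping are assumptions about what the OP wants):
# -*- coding: utf-8 -*-
VOWELS = set('aeiou')

def transform(s):
    out = []
    prev = None
    for ch in s:
        if ch == 'a' and prev == 'n':
            out.append(u'\xf6')      # 'a' after 'n' -> 'ö'
        elif ch in VOWELS and ch == prev:
            out.append(u':')         # second of a doubled vowel -> ':'
        else:
            out.append(ch)
        prev = ch                    # track the original character
    return u''.join(out)

print transform(u'natarook')         # prints nötaro:k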
A:
Given your rules, I'd say you really want a simple state machine. Hmm, on second thought, maybe not; you can just look back in the string as you go.
vowel_set = frozenset(['a', 'e', 'i', 'o', 'u', 'ö'])
def fix_the_string(s):
lst = []
for i, ch in enumerate(s):
if ch == 'a' and lst and lst[-1] == 'n':
lst.append('ö')
elif ch in vowel_set and lst and lst[-1] in vowel_set:
lst[-1] = 'a' # "replaced by one vowel character", not sure what you want
lst.append(':')
else:
lst.append(ch)
return "".join(lst)
print fix_the_string("natarook")
EDIT: Now that I've seen the answer by @Anon., I think that's the simplest approach. This might actually be faster once you get a whole bunch of rules in play, as it makes one pass over the string; but maybe not, because the regexp stuff in Python is fast C code.
But simpler is better. Here is actual Python code for the regexp approach:
import re
pat_na = re.compile(r'na')
pat_double_vowel = re.compile(r'([aeiou])[aeiou]')
def fix_the_string(s):
s = re.sub(pat_na, r'nö', s)
s = re.sub(pat_double_vowel, r'\1:', s)
return s
print fix_the_string("natarook") # prints "nötaro:k"
A:
It might be simpler to do this with a handmade list of regular expressions, rather than programmatically generating them. I recommend the following code.
import re
# regsubs is a dictionary of regular expressions as keys,
# and the replacement regexps as values
regsubs = {'na':u'nö',
'([aeiou])\\1': '\\1:'}
def makesubs(s):
for pattern, repl in regsubs.iteritems():
s = re.sub(pattern, repl, s)
return s
print makesubs('natarook')
# prints: nötaro:k
|
How to do conditional character replacement within a string
|
I have a unicode string in Python and basically need to go through, character by character and replace certain ones based on a list of rules. One such rule is that a is changed to ö if a is after n. Also, if there are two vowel characters in a row, they get replaced by one vowel character and :. So if I have the string "natarook", what is the easiest and most efficient way of getting "nötaro:k"? Using Python 2.6 and CherryPy 3.1 if that matters.
edit: two vowels in a row does mean the same vowels (oo, aa, ii)
|
[
"# -*- coding: utf-8 -*-\n\ndef subpairs(s, prefix, suffix):\n def sub(i, sentinal=object()):\n r = prefix.get(s[i:i+2], sentinal)\n if r is not sentinal: return r\n\n r = suffix.get(s[i-1:i+1], sentinal)\n if r is not sentinal: return r\n return s[i]\n\n s = '\\0'+s+'\\0'\n return ''.join(sub(i) for i in xrange(1,len(s)))\n\nvowels = [(v+v, u':') for v in 'aeiou']\n\nprefix = {}\nsuffix = {'na':u'ö'}\nsuffix.update(vowels)\nprint subpairs('natarook', prefix, suffix)\n# prints: nötaro:k\n\nprefix = {'na':u'ö'}\nsuffix = dict(vowels)\nprint subpairs('natarook', prefix, suffix)\n# prints: öataro:k\n\n",
"\"I know, I'll use regular expressions!\"\nBut seriously, regexes are really good for string manipulation.\nYou could write one per rule, like so:\ns/na/nö/g\ns/([aeiou])$1/$1:/g\n\nOr you could generate them at runtime from some other source which lists them all.\n",
"focus on easy and correct first, then consider efficiency if profiling indicates its a bottleneck.\nThe simple approach is:\nprev = None\nfor ch in string:\n if ch == 'a':\n if prev == 'n':\n ...\n prev = ch\n\n",
"Given your rules, I'd say you really want a simple state machine. Hmm, on second thought, maybe not; you can just look back in the string as you go.\nI have a unicode string in Python and basically need to go through, character by character and replace certain ones based on a list of rules. One such rule is that a is changed to ö if a is after n. Also, if there are two vowel characters in a row, they get replaced by one vowel character and :. So if I have the string , what is the easiest and most efficient way of getting \"nötaro:k\"? Using Python 2.6 and CherryPy 3.1 if that matters.\nvowel_set = frozenset(['a', 'e', 'i', 'o', 'u', 'ö'])\n\ndef fix_the_string(s):\n lst = []\n for i, ch in enumerate(s):\n if ch == 'a' and lst and lst[-1] == 'n':\n lst.append('ö')\n else if ch in vowel_set and lst and lst[-1] in vowel_set:\n lst[-1] = 'a' # \"replaced by one vowel character\", not sure what you want\n lst.append(':')\n else\n lst.append(ch)\n return \"\".join(lst)\n\nprint fix_the_string(\"natarook\")\n\nEDIT: Now that I saw the answer by @Anon. I think that's the simplest approach. This might actually be faster once you get a whole bunch of rules in play, as it makes one pass over the string; but maybe not, because the regexp stuff in Python is fast C code.\nBut simpler is better. Here is actual Python code for the regexp approach:\nimport re\npat_na = re.compile(r'na')\npat_double_vowel = re.compile(r'([aeiou])[aeiou]')\n\ndef fix_the_string(s):\n s = re.sub(pat_na, r'nö', s)\n s = re.sub(pat_double_vowel, r'\\1:', s)\n return s\n\nprint fix_the_string(\"natarook\") # prints \"nötaro:k\"\n\n",
"It might be simpler to do with a handmade list of regular expressions, rather than progmatically gererating them. I recommend the following code.\nimport re\n# regsubs is a dictionary of regular expressions as keys, \n# and the replacement regexps as values\nregsubs = {'na':u'nö',\n '([aeiou])\\\\1': '\\\\1:'}\n\ndef makesubs(s):\n for pattern, repl in regsubs.iteritems():\n s = re.sub(pattern, repl, s)\n return s\n\nprint makesubs('natarook')\n# prints: nötaro:k\n\n"
] |
[
7,
2,
2,
1,
1
] |
[] |
[] |
[
"conditional",
"python",
"replace",
"string"
] |
stackoverflow_0001849185_conditional_python_replace_string.txt
|
Q:
How can you detect if two regular expressions overlap in the strings they can match?
I have a container of regular expressions. I'd like to analyze them to determine if it's possible to generate a string that matches more than 1 of them. Short of writing my own regex engine with this use case in mind, is there an easy way in C++ or Python to solve this problem?
A:
There's no easy way.
As long as your regular expressions use only standard features (Perl lets you embed arbitrary code in matching, I think), you can produce from each one a nondeterministic finite-state automaton (NFA) that compactly encodes all the strings that the RE matches.
Given any pair of NFA, it's decidable whether their intersection is empty. If the intersection isn't empty, then some string matches both REs in the pair (and conversely).
The standard decidability proof is to determinize them into DFAs first, and then construct a new DFA whose states are pairs of the two DFAs' states, and whose final states are exactly those in which both states in the pair are final in their original DFA. Alternatively, if you've already shown how to compute the complement of a NFA, then you can (DeMorgan's law style) get the intersection by complement(union(complement(A),complement(B))).
Unfortunately, NFA->DFA involves a potentially exponential size explosion (because states in the DFA are subsets of states in the NFA). From Wikipedia:
Some classes of regular languages can
only be described by deterministic
finite automata whose size grows
exponentially in the size of the
shortest equivalent regular
expressions. The standard example are
here the languages L_k consisting of
all strings over the alphabet {a,b}
whose kth-last letter equals a.
By the way, you should definitely use OpenFST. You can create automata as text files and play around with operations like minimization, intersection, etc. in order to see how efficient they are for your problem. There already exist open source regexp->nfa->dfa compilers (I remember a Perl module); modify one to output OpenFST automata files and play around.
Fortunately, it's possible to avoid the subset-of-states explosion, and intersect two NFA directly using the same construction as for DFA:
if A ->a B (in one NFA, you can go from state A to B outputting the letter 'a')
and X ->a Y (in the other NFA)
then (A,X) ->a (B,Y) in the intersection
(C,Z) is final iff C is final in the one NFA and Z is final in the other.
To start the process off, you start in the pair of start states for the two NFAs e.g. (A,X) - this is the start state of the intersection-NFA. Each time you first visit a state, generate an arc by the above rule for every pair of arcs leaving the two states, and then visit all the (new) states those arcs reach. You'd store the fact that you expanded a state's arcs (e.g. in a hash table) and end up exploring all the states reachable from the start.
If you allow epsilon transitions (that don't output a letter), that's fine:
if A ->epsilon B in the first NFA, then for every state (A,Y) you reach, add the arc (A,Y) ->epsilon (B,Y) and similarly for epsilons in the second-position NFA.
Epsilon transitions are useful (but not necessary) in taking the union of two NFAs when translating a regexp to an NFA; whenever you have alternation regexp1|regexp2|regexp3, you take the union: an NFA whose start state has an epsilon transition to each of the NFAs representing the regexps in the alternation.
Deciding emptiness for an NFA is easy: if you ever reach a final state in doing a depth-first-search from the start state, it's not empty.
This NFA-intersection is similar to finite state transducer composition (a transducer is an NFA that outputs pairs of symbols, that are concatenated pairwise to match both an input and output string, or to transform a given input to an output).
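As a rough illustration of the product construction described above, here is a minimal sketch in Python. The NFA representation is my own choice for this sketch (a dict mapping (state, letter) pairs to sets of successor states), not any particular library's, and epsilon transitions are omitted for brevity:
def intersection_nonempty(nfa1, start1, finals1, nfa2, start2, finals2):
    # Depth-first search over pair states; True iff some string matches both NFAs.
    stack = [(start1, start2)]
    seen = set(stack)
    while stack:
        a, x = stack.pop()
        if a in finals1 and x in finals2:
            return True  # a pair of final states is reachable
        # follow arcs that agree on the letter: (A,X) ->a (B,Y)
        for (state, letter), targets in nfa1.items():
            if state != a:
                continue
            for b in targets:
                for y in nfa2.get((x, letter), ()):
                    if (b, y) not in seen:
                        seen.add((b, y))
                        stack.append((b, y))
    return False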
A:
This regex inverter (written using pyparsing) works with a limited subset of re syntax (no * or + allowed, for instance) - you could invert two re's into two sets, and then look for a set intersection.
|
How can you detect if two regular expressions overlap in the strings they can match?
|
I have a container of regular expressions. I'd like to analyze them to determine if it's possible to generate a string that matches more than 1 of them. Short of writing my own regex engine with this use case in mind, is there an easy way in C++ or Python to solve this problem?
|
[
"There's no easy way.\nAs long as your regular expressions use only standard features (Perl lets you embed arbitrary code in matching, I think), you can produce from each one a nondeterministic finite-state automaton (NFA) that compactly encodes all the strings that the RE matches.\nGiven any pair of NFA, it's decidable whether their intersection is empty. If the intersection isn't empty, then some string matches both REs in the pair (and conversely).\nThe standard decidability proof is to determinize them into DFAs first, and then construct a new DFA whose states are pairs of the two DFAs' states, and whose final states are exactly those in which both states in the pair are final in their original DFA. Alternatively, if you've already shown how to compute the complement of a NFA, then you can (DeMorgan's law style) get the intersection by complement(union(complement(A),complement(B))).\nUnfortunately, NFA->DFA involves a potentially exponential size explosion (because states in the DFA are subsets of states in the NFA). From Wikipedia:\n\nSome classes of regular languages can\n only be described by deterministic\n finite automata whose size grows\n exponentially in the size of the\n shortest equivalent regular\n expressions. The standard example are\n here the languages L_k consisting of\n all strings over the alphabet {a,b}\n whose kth-last letter equals a.\n\nBy the way, you should definitely use OpenFST. You can create automata as text files and play around with operations like minimization, intersection, etc. in order to see how efficient they are for your problem. There already exist open source regexp->nfa->dfa compilers (I remember a Perl module); modify one to output OpenFST automata files and play around.\nFortunately, it's possible to avoid the subset-of-states explosion, and intersect two NFA directly using the same construction as for DFA:\nif A ->a B (in one NFA, you can go from state A to B outputting the letter 'a')\nand X ->a Y (in the other NFA)\nthen (A,X) ->a (B,Y) in the intersection\n(C,Z) is final iff C is final in the one NFA and Z is final in the other.\nTo start the process off, you start in the pair of start states for the two NFAs e.g. (A,X) - this is the start state of the intersection-NFA. Each time you first visit a state, generate an arc by the above rule for every pair of arcs leaving the two states, and then visit all the (new) states those arcs reach. You'd store the fact that you expanded a state's arcs (e.g. in a hash table) and end up exploring all the states reachable from the start.\nIf you allow epsilon transitions (that don't output a letter), that's fine:\nif A ->epsilon B in the first NFA, then for every state (A,Y) you reach, add the arc (A,Y) ->epsilon (B,Y) and similarly for epsilons in the second-position NFA.\nEpsilon transitions are useful (but not necessary) in taking the union of two NFAs when translating a regexp to an NFA; whenever you have alternation regexp1|regexp2|regexp3, you take the union: an NFA whose start state has an epsilon transition to each of the NFAs representing the regexps in the alternation.\nDeciding emptiness for an NFA is easy: if you ever reach a final state in doing a depth-first-search from the start state, it's not empty.\nThis NFA-intersection is similar to finite state transducer composition (a transducer is an NFA that outputs pairs of symbols, that are concatenated pairwise to match both an input and output string, or to transform a given input to an output).\n",
"This regex inverter (written using pyparsing) works with a limited subset of re syntax (no * or + allowed, for instance) - you could invert two re's into two sets, and then look for a set intersection.\n"
] |
[
37,
2
] |
[
"In theory, the problem you describe is impossible.\nIn practice, if you have a manageable number of regular expressions that use a limited subset or of regexp syntax, and/or a limited selection of strings that can be used to match against the container of regular expressions, you might be able to solve it.\nAssuming you're not trying to solve the abstract general case, there might be something you can do to solve a practical application. Perhaps if you provided a representative sample of the regexps, and described the strings you'd be matching with, a heuristic could be created to solve the problem.\n"
] |
[
-1
] |
[
"algorithm",
"c++",
"overlap",
"python",
"regex"
] |
stackoverflow_0001849447_algorithm_c++_overlap_python_regex.txt
|
Q:
how to make import conditionally in Python?
I want to do something like this in C:
#ifdef SOMETHING
do_this();
#endif
But in Python this doesn't jive:
if something:
import module
What am I doing wrong? Is this possible in the first place?
A:
It should work fine:
>>> if False:
... import sys
...
>>> sys
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'sys' is not defined
>>> if True:
... import sys
...
>>> sys
<module 'sys' (built-in)>
A:
In Python there's a built-in feature called "exceptions". Applying it to your needs:
try:
    import <module>     # placeholder for the module name
except ImportError:     # catches the import failure
    raise               # re-raise it (or handle the missing module here)

There are more complex structures, so search the web for more documentation.
A:
If you're getting this:
NameError: name 'something' is not defined
then the problem here is not with the import statement but with the use of something, a variable you apparently haven't initialized. Just make sure it's initialized to either True or False, and it'll work.
A:
In the C construct, the conditional define #ifdef only tests whether "SOMETHING" exists, whereas your Python expression tests whether the value of the expression is true or false; in my opinion these are two very different things. In addition, the C construct is evaluated at compile time.
"something", based on your original question, must be a variable or expression that exists and evaluates to true or false. As other people already pointed out, the problem may be that the "something" variable is not defined. So the "closest equivalent" in Python would be something like:
if 'something' in locals(): # or you can use globals(), depends on your context
import module
or (hacky):
try:
    something
    import module
except (NameError, ImportError):
    pass  # or add code to handle the exception
hth
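A common real-world form of the conditional import is a fallback chain; a minimal sketch using two standard-library modules:
try:
    from cPickle import dumps  # fast C implementation (Python 2)
except ImportError:
    from pickle import dumps   # pure-Python fallback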
|
how to make import conditionally in Python?
|
I want to do something like this in C:
#ifdef SOMETHING
do_this();
#endif
But in Python this doesn't jive:
if something:
import module
What am I doing wrong? Is this possible in the first place?
|
[
"It should work fine:\n>>> if False:\n... import sys\n... \n>>> sys\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nNameError: name 'sys' is not defined\n>>> if True:\n... import sys\n... \n>>> sys\n<module 'sys' (built-in)>\n\n",
"In Python there's a built-in feature called \"Exception\".. Applying to your needs:\ntry:\n\n import <module>\n\nexcept: #Catches every error\n raise #and print error\n\nThere are more complex structures so search on the web for more documentation.\n",
"If you're getting this:\nNameError: name 'something' is not defined\n\nthen the problem here is not with the import statement but with the use of something, a variable you apparently haven't initialized. Just make sure it's initialized to either True or False, and it'll work.\n",
"In the C construct, the conditional define #ifdef tests whether \"SOMETHING\" exists only, where your python expression tests whether the value of the expression is either True or False, in my opinion two very different things, in addition, the C construct is evaluated at compile time.\n\"something\" based in your original question must be a variable or expression that (exists and) evaluates to true or false, as other people already pointed out, the problem may be with that \"something\" variable not being defined. so the \"closest equivalent\" in python would be something like:\nif 'something' in locals(): # or you can use globals(), depends on your context\n import module\n\nor (hacky):\ntry:\n something\n import module\nexcept NameError, ImportError:\n pass # or add code to handle the exception\n\nhth\n"
] |
[
18,
1,
1,
1
] |
[] |
[] |
[
"python",
"python_import"
] |
stackoverflow_0001846158_python_python_import.txt
|
Q:
Pyjamas import statements
I'm starting to use Pyjamas and I'm running into some annoyances. I have to import a lot of stuff to make a script work well. For example, to make a button I need to first
from pyjamas.ui.Button import Button
and then I can use Button. Note that
import pyjamas.ui.Button
and then using Button.Button doesn't work (results in errors when you build to JavaScript, at least in 0.7pre1). Does anyone have a better example of a good way to do the import statements in Pyjamas than what the Pyjamas folks have on their site? Doing things their way is possible, but ugly and overly complicated from my perspective, especially when you want to use a dozen or more ui components.
A:
If you want to be able to say Button.Button, then instead of
import pyjamas.ui.Button
you should write
from pyjamas.ui import Button
Otherwise you need to use pyjamas.ui.Button.Button. What ends up in your namespace is what you have after the import keyword.
|
Pyjamas import statements
|
I'm starting to use Pyjamas and I'm running into some annoyances. I have to import a lot of stuff to make a script work well. For example, to make a button I need to first
from pyjamas.ui.Button import Button
and then I can use Button. Note that
import pyjamas.ui.Button
and then using Button.Button doesn't work (results in errors when you build to JavaScript, at least in 0.7pre1). Does anyone have a better example of a good way to do the import statements in Pyjamas than what the Pyjamas folks have on their site? Doing things their way is possible, but ugly and overly complicated from my perspective, especially when you want to use a dozen or more ui components.
|
[
"If you want to be able to say Button.Button, then instead of\nimport pyjamas.ui.Button\n\nyou should write\nfrom pyjamas.ui import Button\n\nOtherwise you need to use pyjamas.ui.Button.Button. What ends up in your namespace is what you have after the import keyword.\n"
] |
[
5
] |
[] |
[] |
[
"coding_style",
"import",
"pyjamas",
"python"
] |
stackoverflow_0001849909_coding_style_import_pyjamas_python.txt
|
Q:
Do dicts preserve iteration order if they are not modified?
If I have a dictionary in Python, and I iterate through it once, and then again later, is the iteration order guaranteed to be preserved given that I didn't insert, delete, or update any items in the dictionary? (But I might have done look-ups).
A:
Here is what dict.items() documentation says:
dict.items() return a copy of the dictionary’s list of (key, value) pairs.
If items(), keys(), values(), iteritems(), iterkeys(), and itervalues() are called with no intervening modifications to the dictionary, the lists will directly correspond.
I think it's reasonable to assume that item ordering won't change if all you do is iteration.
A:
The standard Python dict, like most implementations, does not preserve ordering, as the items are usually accessed using the key.
However, predictable iteration is sometimes useful, and in Python 3.1 the collections module contains an OrderedDict that is order-preserving with minimal performance overhead.
A:
Yes. There's no randomisation involved. There's an even stronger guarantee -- see here.
A:
collections.OrderedDict will be available in Python 2.7 in addition to Python 3.1.
For Python versions earlier than 2.7, there's collective.ordereddict on PyPI, and Django has its own SortedDict implementation.
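For illustration, a minimal OrderedDict sketch (Python 2.7+ / 3.1+; Python 2 print syntax shown to match the rest of this page):
from collections import OrderedDict

d = OrderedDict()
d['b'] = 1
d['a'] = 2
d['c'] = 3

# iteration order is the insertion order, every time
print d.keys()  # ['b', 'a', 'c']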
A:
It might be preserved in some implementations, but don't count on it, as it is not a part of the Dict spec.
A:
A Python dictionary has no concept of order. So you can't depend on a specific order while iterating.
This is deliberate: since it's a hashmap it's unavoidable if you want 'fast lookups'!
A:
As Christophe said, a dictionary is used to organise key/value pairs because of the fast access time it provides. If your application needs a fixed index, you should look at the other data structures that provide a specific/known order.
Having said that, it should be safe to assume that the order doesn't change unless items are added (there wouldn't be any point to do this expensive operation of reshuffling stuff) etc but, again, don't rely on it.
|
Do dicts preserve iteration order if they are not modified?
|
If I have a dictionary in Python, and I iterate through it once, and then again later, is the iteration order guaranteed to be preserved given that I didn't insert, delete, or update any items in the dictionary? (But I might have done look-ups).
|
[
"Here is what dict.items() documentation says:\n\ndict.items() return a copy of the dictionary’s list of (key, value) pairs.\nIf items(), keys(), values(), iteritems(), iterkeys(), and itervalues() are called with no intervening modifications to the dictionary, the lists will directly correspond.\n\nI think it's reasonable to assume that item ordering won't change if all you do is iteration.\n",
"The standard Python dict like most implementations does not preserve ordering as the items are usually accessed using the key.\nHowever predictable iteration is sometime useful and in Python 3.1 the collections module contains an OrderedDict that is order preserving with minimal performance overhead.\n",
"Yes. There's no randomisation involved. There's an even stronger guarantee -- see here.\n",
"collections.OrderedDict will be available in Python 2.7 in addition to Python 3.1.\nFor Python versions earlier than 2.7, there's collective.ordereddict on PyPI, and Django has its own SortedDict implementation.\n",
"It might be preserved in some implementations, but don't count on it, as it is not a part of the Dict spec.\n",
"A Python dictionary has no concept of order. So you can't depend on a specific order while iterating. \nThis is deliberate: since it's a hashmap it's unavoidable if you want 'fast lookups'!\n",
"As Christophe said, a dictionary is used to organise key/value pairs because of the fast access time it provides. If you application needs a fixed index, you should look at the other data structures that provide a specific/known order.\nHaving said that, it should be safe to assume that the order doesn't change unless items are added (there wouldn't be any point to do this expensive operation of reshuffling stuff) etc but, again, don't rely on it.\n"
] |
[
31,
12,
5,
4,
3,
2,
1
] |
[] |
[] |
[
"algorithm",
"dictionary",
"hash",
"python"
] |
stackoverflow_0001849324_algorithm_dictionary_hash_python.txt
|
Q:
Python bash pipe
I want to pipe a Python script's output to a bash command. What I did so far: I tried to use os.popen(), the subprocess module, and giving a pipe directly, for example
os.popen('echo "P 1 1 591336 4927369 1 321 " | v.in.ascii -zn out=abcx format=standard --overwrite')
but this didn't work. The values "591336" and "4927369" are variables that come from the output of the Python script. When I repeat the echo command and the pipe manually in bash (with these or changed values), it works.
v.in.ascii -zn out=abcx format=standard --overwrite
the above part of the bash command is a part of Grass GIS
Can anyone help me?
A:
You can just use print to output to stdout and pipe the Python process to the next process, e.g.
python myprogram.py | ...
Where myprogram.py might look like:
for x in something:
print dosomething(x)
A:
This works for me:
>>> stdin, stdout = os.popen2("echo %s | grep 'test'" % 'some test param')
>>> print stdout.read()
some test param
>>>
A:
As of Python 2.6, the subprocess module is recommended instead of the deprecated os.popen. Here's an example:
from subprocess import Popen, PIPE
p = Popen(["v.in.ascii", "-zn", "out=abcx", "format=standard", "--overwrite"], stdin=PIPE)
p.stdin.write("P 1 1 591336 4927369 1 321\n")
p.stdin.close()
p.wait() # unless background execution preferred
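Equivalently, Popen.communicate() bundles the write/close/wait sequence into one call:
from subprocess import Popen, PIPE
p = Popen(["v.in.ascii", "-zn", "out=abcx", "format=standard", "--overwrite"],
          stdin=PIPE)
p.communicate("P 1 1 591336 4927369 1 321\n")  # writes to stdin, closes it, waits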
A:
I really like John Paulett's answer.
I think your echo example would work if you used os.system instead of os.popen.
One way to use popen here is like this:
f = os.popen("v.in.ascii -zn out=abcx format=standard --overwrite", 'w')
f.write("P 1 1 591336 4927369 1 321\n")
f.close()
(You have to specify the pipe is for writing.)
|
Python bash pipe
|
I want to pipe a python script's output to a bash script. What i did so far was i tried to use os.popen(), sys.subprocess(), and tried to give a pipe for an example
os.popen('echo "P 1 1 591336 4927369 1 321 " | v.in.ascii -zn out=abcx format=standard --overwrite')
but this didn't work, the values "591336" and "4927369" are the variables which comes as the output of the python script. but when I do this or change the values manually by repeating the echo command and the pipe, it works (in bash).
v.in.ascii -zn out=abcx format=standard --overwrite
the above part of the bash command is a part of Grass GIS
Can anyone help me!
|
[
"You can just use print to output to stdout and pipe the Python process to the next process, e.g.\npython myprogram.py | ...\n\nWhere myprogram.py might look like:\nfor x in something:\n print dosomething(x)\n\n",
"This works for me:\n>>> stdin, stdout = os.popen2(\"echo %s | grep 'test'\" % 'some test param')\n>>> print stdout.read()\nsome test param\n\n>>>\n\n",
"As of Python 2.6, the subprocess module is recommended instead of the deprecated os.popen. Here's an example:\nfrom subprocess import Popen, PIPE\np = Popen([\"v.in.ascii\", \"-zn\", \"out=abcx\", \"format=standard\", \"--overwrite\"], stdin=PIPE)\np.stdin.write(\"P 1 1 591336 4927369 1 321\\n\")\np.stdin.close()\np.wait() # unless background execution preferred\n\n",
"I really like John Paulett's answer.\nI think your echo example would work if you used os.system instead of os.popen.\nOne way to use popen here is like this:\nf = os.popen(\"v.in.ascii -zn out=abcx format=standard --overwrite\", 'w')\nf.write(\"P 1 1 591336 4927369 1 321\\n\")\nf.close()\n\n(You have to specify the pipe is for writing.)\n"
] |
[
6,
1,
1,
0
] |
[] |
[] |
[
"bash",
"pipe",
"python"
] |
stackoverflow_0001847195_bash_pipe_python.txt
|
Q:
Python: TypeError: 'float' object is not callable
I am trying to join 2 strings using this code:
def __get_temp(self):
return float(self.ask('RS'))
def __set_temp(self, temp):
set = ('SS' + repr(temp))
stat = self.ask(set)
return self.check(stat)
temp = property(__get_temp, __set_temp)
Once together, I then send a signal over a serial bus using PyVisa. However, when I try to call the function, I get
Traceback (most recent call last):
File "<pyshell#4>", line 1, in <module>
chil.temp(13)
TypeError: 'float' object is not callable
I've tried looking around for explanation of this error, but none of them make any sense. Anyone know what is going on?
A:
It looks like you are trying to set the property temp, but what you're actually doing is getting the property and then trying to call it as function with the parameter 13. The syntax for setting is:
chil.temp = 13
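For illustration, a minimal self-contained sketch of the same pattern (the Chiller class here is purely made up):
class Chiller(object):
    def __init__(self):
        self._temp = 0.0
    def __get_temp(self):
        return self._temp
    def __set_temp(self, temp):
        self._temp = float(temp)
    temp = property(__get_temp, __set_temp)

chil = Chiller()
chil.temp = 13   # calls the setter
print chil.temp  # calls the getter: 13.0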
|
Python: TypeError: 'float' object is not callable
|
I am trying to join 2 strings using this code:
def __get_temp(self):
return float(self.ask('RS'))
def __set_temp(self, temp):
set = ('SS' + repr(temp))
stat = self.ask(set)
return self.check(stat)
temp = property(__get_temp, __set_temp)
Once together, I then send a signal over a serial bus using PyVisa. However, when I try to call the function, I get
Traceback (most recent call last):
File "<pyshell#4>", line 1, in <module>
chil.temp(13)
TypeError: 'float' object is not callable
I've tried looking around for explanation of this error, but none of them make any sense. Anyone know what is going on?
|
[
"It looks like you are trying to set the property temp, but what you're actually doing is getting the property and then trying to call it as function with the parameter 13. The syntax for setting is:\nchil.temp = 13\n\n"
] |
[
7
] |
[] |
[] |
[
"python",
"types"
] |
stackoverflow_0001850633_python_types.txt
|
Q:
What's the best python idiom for grouping a list of items into groups of a specific max size?
I want to write a function that takes items from a list and groups them into groups of size n.
I.e., for n = 5, [1, 2, 3, 4, 5, 6, 7] would become [[1, 2, 3, 4, 5], [6, 7]].
What's the best python idiomatic way to do this?
A:
You could do this:
[a[x:x+n] for x in range(0, len(a), n)]
(In Python 2, use xrange for efficiency; in Python 3 use range as above.)
A:
I don't know of a good command to do this, but here's a way to do it with a list comprehension:
l = [1,2,3,4,5,6,7]
n = 5
newlist = [l[i:i+n] for i in range(0,len(l),n)]
Edit: as a commenter pointed out, I had accidentally put l[i:i+n] in a list.
A:
Solutions using ranges with steps only work on sequences such as lists and tuples (not iterators). They also aren't as efficient as they can be, since they access the sequence many times instead of iterating over it once.
Here's a version which supports iterators and only iterates over the input once, creating a list of lists:
def blockify(iterator, blocksize):
"""Split the items in the given iterator into blocksize-sized lists.
If the number of items in the iterator doesn't divide by blocksize,
a smaller block containing the remaining items is added to the result.
"""
blocks = []
for index, item in enumerate(iterator):
if index % blocksize == 0:
block = []
blocks.append(block)
block.append(item)
return blocks
And now an iterator version which returns an iterator of tuples, doesn't have a memory overhead, and allows choosing whether to include the remainder. Note that the output can be converted into a list via list(blockify(...)).
from itertools import islice
def blockify(iterator, blocksize, include_remainder=True):
"""Split the items in the given iterator into blocksize-sized tuples.
If the number of items in the iterator doesn't divide by blocksize and
include_remainder is True, a smaller block containing the remaining items
is added to the result; if include_remainder is False the remaining items
are discarded.
"""
iterator = iter(iterator) # we need an actual iterator
while True:
block = tuple(islice(iterator, blocksize))
if len(block) < blocksize:
if len(block) > 0 and include_remainder:
yield block
break
yield block
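For completeness, the classic "grouper" recipe from the itertools documentation does the same job with padding instead of a shorter final block (Python 2 spelling; in Python 3 use itertools.zip_longest):
from itertools import izip_longest

def grouper(n, iterable, fillvalue=None):
    # grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx
    args = [iter(iterable)] * n
    return izip_longest(fillvalue=fillvalue, *args)

print [list(g) for g in grouper(5, [1, 2, 3, 4, 5, 6, 7])]
# [[1, 2, 3, 4, 5], [6, 7, None, None, None]]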
A:
[a[n*k:n*(k+1)] for k in range((len(a)+n-1)//n)]
|
What's the best python idiom for grouping a list of items into groups of a specific max size?
|
I want to write a function that takes items from a list and groups them into groups of size n.
Ie, for n = 5, [1, 2, 3, 4, 5, 6, 7] would become [[1, 2, 3, 4, 5], [6, 7]].
What's the best python idiomatic way to do this?
|
[
"You could do this:\n[a[x:x+n] for x in range(0, len(a), n)]\n\n(In Python 2, use xrange for efficiency; in Python 3 use range as above.)\n",
"I don't know of a good command to do this, but here's a way to do it with a list comprehension:\nl = [1,2,3,4,5,6,7]\nn = 5\nnewlist = [l[i:i+n] for i in range(0,len(l),n)]\n\nEdit: as a commenter pointed out, I had accidentally put l[i:i+n] in a list.\n",
"Solutions using ranges with steps only work on sequences such as lists and tuples (not iterators). They also aren't as efficient as they can be, since they access the sequence many times instead of iterating over it once.\nHere's a version which supports iterators and only iterates over the input once, creating a list of lists:\ndef blockify(iterator, blocksize):\n \"\"\"Split the items in the given iterator into blocksize-sized lists.\n\n If the number of items in the iterator doesn't divide by blocksize,\n a smaller block containing the remaining items is added to the result.\n\n \"\"\"\n blocks = []\n for index, item in enumerate(iterator):\n if index % blocksize == 0:\n block = []\n blocks.append(block)\n block.append(item)\n return blocks\n\nAnd now an iterator version which returns an iterator of tuples, doesn't have a memory overhead, and allows choosing whether to include the remainder. Note that the output can be converted into a list via list(blockify(...)).\nfrom itertools import islice\n\ndef blockify(iterator, blocksize, include_remainder=True):\n \"\"\"Split the items in the given iterator into blocksize-sized tuples.\n\n If the number of items in the iterator doesn't divide by blocksize and\n include_remainder is True, a smaller block containing the remaining items\n is added to the result; if include_remainder is False the remaining items\n are discarded.\n\n \"\"\"\n iterator = iter(iterator) # we need an actual iterator\n while True:\n block = tuple(islice(iterator, blocksize))\n if len(block) < blocksize:\n if len(block) > 0 and include_remainder:\n yield block\n break\n yield block\n\n",
"[a[n*k:n*(k+1)] for k in range(0,len(a)/n+1)]\n\n"
] |
[
2,
1,
1,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001849741_python.txt
|
Q:
Pylucene eclipse plugin
Is there a PyLucene Eclipse plugin, or am I missing something?
I want it for auto-complete. Is the import structure the same as Java Lucene's?
A:
Assuming you're using PyDev, you need to add PyLucene to your import path. This can be done in the preferences pane somewhere under the PyDev section. (Sorry, I don't have PyDev installed on this computer so I can't be more specific.) Not having used PyLucene I can't tell you exactly which path you need to add, but it should be the path where you have PyLucene installed.
|
Pylucene eclipse plugin
|
Is there a Pylucene eclipse plugin? or am I missing something?
I want it for Auto complete. Is the import structure same as java lucene
|
[
"Assuming you're using PyDev, you need to add PyLucene to your import path. This can be done in the preferences pane somewhere under the PyDev section. (Sorry, I don't have PyDev installed on this computer so I can't be more specific.) Not having used PyLucene I can't tell you exactly which path you need to add, but it should be the path where you have PyLucene installed.\n"
] |
[
1
] |
[] |
[] |
[
"java",
"lucene",
"pylucene",
"python"
] |
stackoverflow_0001850717_java_lucene_pylucene_python.txt
|
Q:
Is pickle file of python cross-platform?
I have created a small Python script. I saved the pickle file on Linux, then used it on Windows, and then used it back on Linux, but now that file is not working on Linux, though it works perfectly on Windows.
Is it the case that Python is cross-platform but the pickle file is not?
Is there any solution to this?
A:
Python's pickle is perfectly cross-platform.
This is likely due to EOL (End-Of-Line) differences between Windows and Linux. Make sure to open your pickle files in binary mode both when writing them and when reading them, using open()'s "wb" and "rb" modes respectively.
Note: Passing pickles between different versions of Python can cause trouble, so try to have the same version on both platforms.
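For illustration, a minimal sketch of the binary-mode round trip (the filename is arbitrary):
import pickle

data = {'answer': 42, 'values': [1.5, 2.5]}

# write in binary mode -- 'wb', not 'w'
with open('data.pkl', 'wb') as f:
    pickle.dump(data, f)

# read in binary mode -- 'rb', not 'r'
with open('data.pkl', 'rb') as f:
    restored = pickle.load(f)

print restored == data  # True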
A:
The pickle module supports several different data formats. If you are specifying a particular pickle format instead of using the default (0), you may be running into cross-platform binary file problems. You can use plain ASCII pickle files by specifying protocol 0.
A:
Maybe you don't open the file in binary mode? See this stackoverflow question
A:
Pickle should be cross-platform, there are versioning/protocol issues, (see http://docs.python.org/library/pickle.html#data-stream-format) but in general if you're using the same release of python on your windows and unix boxes, they should be interoperable.
If you're using pickle as a data transport mechanism, you might want to consider less-implementation specific formats for data storage, such as json, xml, csv, yaml, etc.
A:
You could use json instead of pickle. If it can save your data, you know it's cross platform.
A:
One interesting idea to try out is PyON (Python Object Notation). The current version seems to work at least for simple cases according to my tests. There seems to have been some disagreement on mailing lists whether the project is a good idea, though.
|
Is pickle file of python cross-platform?
|
I have created a small python script of mine. I saved the pickle file on Linux and then used it on windows and then again used it back on Linux but now that file is not working on Linux but it is working perfectly on windows.
Is is so that python is coss-platform but the pickle file is not.
Is there any solution to this one???
|
[
"Python's pickle is perfectly cross-platform.\nThis is likely due to EOL (End-Of-Line) differences between Windows and Linux. Make sure to open your pickle files in binary mode both when writing them and when reading them, using open()'s \"wb\" and \"rb\" modes respectively.\nNote: Passing pickles between different versions of Python can cause trouble, so try to have the same version on both platforms.\n",
"The pickle module supports several different data formats. If you are specifying a particular pickle format instead of using the default (0), you may be running into cross-platform binary file problems. You can use plain ASCII pickle files by specifying protocol 0.\n",
"Maybe you don't open the file in binary mode? See this stackoverflow question\n",
"Pickle should be cross-platform, there are versioning/protocol issues, (see http://docs.python.org/library/pickle.html#data-stream-format) but in general if you're using the same release of python on your windows and unix boxes, they should be interoperable.\nIf you're using pickle as a data transport mechanism, you might want to consider less-implementation specific formats for data storage, such as json, xml, csv, yaml, etc.\n",
"You could use json instead of pickle. If it can save your data, you know it's cross platform.\n",
"One interesting idea to try out is PyON (Python Object Notation). The current version seems to work at least for simple cases according to my tests. There seems to have been some disagreement on mailing lists whether the project is a good idea, though.\n"
] |
[
36,
12,
4,
4,
1,
0
] |
[] |
[] |
[
"file_io",
"pickle",
"python"
] |
stackoverflow_0001849523_file_io_pickle_python.txt
|
Q:
Python: Change class type name
How would you change a class type name to something other than classobj?
class bob():
pass
foo = bob
print "%s" % type(foo).__name__
which gets me 'classobj'.
A:
In your example, you've defined foo as a reference to the class definition of bob, not to an instance of bob. The type of an (old-style) class is indeed classobj.
If you instantiate bob, on the other hand, the result will be different:
# example using new-style classes, which are recommended over old-style
class bob(object):
pass
foo = bob()
print type(foo).__name__
'bob'
If you just want to see the name of the bob type without instantiating it, use:
print bob.__name__
'bob'
This works because bob is already a class type, and therefore has a __name__ property that you can query.
A:
class DifferentTypeName(type): pass
class bob:
__metaclass__ = DifferentTypeName
foo = bob
print "%s" % type(foo).__name__
emits DifferentTypeName, as you require. It seems unlikely that this is actually what you want (or need), but, hey, it is the way to do exactly what you so explicitly ask for: change a class's type's name. Assigning a suitable renamed derivative of type to foo.__class__ or bob.__class__ later would also work, so you could encapsulate this into a pretty peculiar function:
def changeClassTypeName(theclass, thename):
theclass.__class__ = type(thename, (type,), {})
changeClassTypeName(bob, 'whatEver')
foo = bob
print "%s" % type(foo).__name__
this emits whatEver.
|
Python: Change class type name
|
How would you change a class type name to something other than classobj?
class bob():
pass
foo = bob
print "%s" % type(foo).__name__
which gets me 'classobj'.
|
[
"In your example, you've defined foo as a reference to the class definition of bob, not to an instance of bob. The type of an (old-style) class is indeed classobj.\nIf you instantiate bob, on the other hand, the result will be different:\n# example using new-style classes, which are recommended over old-style\nclass bob(object):\n pass\n\nfoo = bob()\nprint type(foo).__name__\n'bob'\n\nIf you just want to see the name of the bob type without instantiating it, use:\nprint bob.__name__\n'bob'\n\nThis works because bob is already a class type, and therefore has a __name__ property that you can query.\n",
"class DifferentTypeName(type): pass\n\nclass bob:\n __metaclass__ = DifferentTypeName\n\nfoo = bob\nprint \"%s\" % type(foo).__name__\n\nemits DifferentTypeName, as you require. It seems unlikely that this is actually what you want (or need), but, hey, it is the way to do exactly what you so explicitly ask for: change a class's type's name. Assigning a suitable renamed derivative of type to foo.__class__ or bob.__class__ later would also work, so you could encapsulate this into a pretty peculiar function:\ndef changeClassTypeName(theclass, thename):\n theclass.__class__ = type(thename, (type,), {})\n\nchangeClassTypeName(bob, 'whatEver')\n\nfoo = bob\nprint \"%s\" % type(foo).__name__\n\nthis emits whatEver.\n"
] |
[
14,
12
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001850289_python.txt
|
Q:
PDF Parsing Using Python - extracting formatted and plain texts
I'm looking for a PDF library which will allow me to extract the text from a PDF document. I've looked at PyPDF, and this can extract the text from a PDF document very nicely. The problem with this is that if there are tables in the document, the text in the tables is extracted in-line with the rest of the document text. This can be problematic because it produces sections of text that aren't useful and look garbled (for instance, lots of numbers mashed together).
I'd like to extract the text from a PDF document, excluding any tables and special formatting. Is there a library out there that does this?
A:
You can also take a look at PDFMiner (older versions of Python are supported by earlier PDFMiner releases).
A particular feature of interest in PDFMiner is that you can control how it regroups text parts when extracting them. You do this by specifying the space between lines, words, characters, etc. So, maybe by tweaking this you can achieve what you want (that depends of the variability of your documents). PDFMiner can also give you the location of the text in the page, it can extract data by Object ID and other stuff. So dig in PDFMiner and be creative!
But your problem is really not an easy one to solve because, in a PDF, the text is not continuous, but made from a lot of small groups of characters positioned absolutely in the page. The focus of PDF is to keep the layout intact. It's not content oriented but presentation oriented.
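As an illustration of that grouping control, a minimal sketch with the modern pdfminer.six API (Python 3; the margin values are arbitrary starting points, not recommendations):
from pdfminer.high_level import extract_text
from pdfminer.layout import LAParams

# tune how characters are regrouped into words, lines and boxes
laparams = LAParams(char_margin=2.0,  # max gap between chars on one line
                    line_margin=0.5,  # max gap between lines in one text box
                    word_margin=0.1)  # min gap that separates two words

text = extract_text('document.pdf', laparams=laparams)
print(text)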
A:
That's a difficult problem to solve since visually similar PDFs may have a wildly differing structure depending on how they were produced. In the worst case the library would need to basically act like an OCR. On the other hand, the PDF may contain sufficient structure and metadata for easy removal of tables and figures, which the library can be tailored to take advantage of.
I'm pretty sure there are no open source tools which solve your problem for a wide variety of PDFs, but I remember having heard of commercial software claiming to do exactly what you ask for. I'm sure you'll run into them while googling.
|
PDF Parsing Using Python - extracting formatted and plain texts
|
I'm looking for a PDF library which will allow me to extract the text from a PDF document. I've looked at PyPDF, and this can extract the text from a PDF document very nicely. The problem with this is that if there are tables in the document, the text in the tables is extracted in-line with the rest of the document text. This can be problematic because it produces sections of text that aren't useful and look garbled (for instance, lots of numbers mashed together).
I'd like to extract the text from a PDF document, excluding any tables and special formatting. Is there a library out there that does this?
|
[
"You can also take a look at PDFMiner (or for older versions of Python see PDFMiner and PDFMiner).\nA particular feature of interest in PDFMiner is that you can control how it regroups text parts when extracting them. You do this by specifying the space between lines, words, characters, etc. So, maybe by tweaking this you can achieve what you want (that depends of the variability of your documents). PDFMiner can also give you the location of the text in the page, it can extract data by Object ID and other stuff. So dig in PDFMiner and be creative!\nBut your problem is really not an easy one to solve because, in a PDF, the text is not continuous, but made from a lot of small groups of characters positioned absolutely in the page. The focus of PDF is to keep the layout intact. It's not content oriented but presentation oriented.\n",
"That's a difficult problem to solve since visually similar PDFs may have a wildly differing structure depending on how they were produced. In the worst case the library would need to basically act like an OCR. On the other hand, the PDF may contain sufficient structure and metadata for easy removal of tables and figures, which the library can be tailored to take advantage of.\nI'm pretty sure there are no open source tools which solve your problem for a wide variety of PDFs, but I remember having heard of commercial software claiming to do exactly what you ask for. I'm sure you'll run into them while googling.\n"
] |
[
63,
2
] |
[] |
[] |
[
"information_extraction",
"parsing",
"pdf",
"python",
"text_extraction"
] |
stackoverflow_0001848464_information_extraction_parsing_pdf_python_text_extraction.txt
|
Q:
python web crawler with thread support
These days I'm writing a web crawler script, but one problem is that my internet connection is very slow.
So I was wondering whether it is possible to write a multithreaded web crawler using mechanize or urllib or the like.
If anyone has experience, please share; it would be much appreciated.
I looked on Google, but didn't find much useful information.
Thanks in advance
A:
There's a good, simple example on this Stack Overflow thread.
A:
Practical threaded programming with Python is worth reading.
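For reference, a minimal threaded fetcher using only the Python 2 standard library (Queue + threading + urllib2; the URLs are illustrative):
import threading
import urllib2
from Queue import Queue

urls = Queue()
for url in ["http://example.com/a", "http://example.com/b"]:
    urls.put(url)

def worker():
    while True:
        url = urls.get()
        try:
            html = urllib2.urlopen(url).read()
            print "%s: %d bytes" % (url, len(html))
        finally:
            urls.task_done()

for _ in range(4):  # four concurrent fetchers
    t = threading.Thread(target=worker)
    t.daemon = True
    t.start()

urls.join()  # block until every queued URL has been processed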
A:
Making multiple requests to many websites at the same time will certainly improve your results, since you don't have to wait for a result to arrive before sending new requests.
However threading is just one of the ways to do that (and a poor one, I might add). Don't use threading for that. Just don't wait for the response before sending another request! No need for threading to do that.
A good idea is to use scrapy. It is a fast high-level screen scraping and web crawling framework, used to crawl websites and extract structured data from their pages. It is written in python and can make many concurrent connections to fetch data at the same time (without using threads to do so). It is really fast. You can also study it to see how it is implemented.
|
python web crawler with thread support
|
these day im making some web crawler script, but one of problem is my internet is very slow.
so i was thought whether is it possible webcrawler with multithreading by use mechanize or urllib or so.
if anyone have experience ,share info much appreciate.
i was look for in google ,but not found much useful info.
Thanks in advance
|
[
"There's a good, simple example on this Stack Overflow thread.\n",
"Practical threaded programming with Python is worth reading.\n",
"Making multiple requests to many websites at the same time will certainly improve your results, since you don't have to wait for a result to arrive before sending new requests.\nHowever threading is just one of the ways to do that (and a poor one, I might add). Don't use threading for that. Just don't wait for the response before sending another request! No need for threading to do that.\nA good idea is to use scrapy. It is a fast high-level screen scraping and web crawling framework, used to crawl websites and extract structured data from their pages. It is written in python and can make many concurrent connections to fetch data at the same time (without using threads to do so). It is really fast. You can also study it to see how it is implemented.\n"
] |
[
4,
3,
1
] |
[] |
[] |
[
"multithreading",
"python"
] |
stackoverflow_0001848413_multithreading_python.txt
|
Q:
Troubles installing PyQt4
I'm following this guide.
Python is at C:\Python31
PyQt4 is at C:\Python31\pyqt
sip is at C:\Python31\sip
Qt is at C:\Qt\4.6.0
I followed the instructions in that guide, but when I tried to test it (from PyQt4.Qt import *), it said the module didn't exist. I checked all the files that the guide said should exist, and none of them existed.
What should I do?
Oh:
sip installed fine. from sip import * didn't yield errors, print(SIP_VERSION_STR) output 4.10-snapshot-20091204.
A:
There is a pre-built version already; why do you still need to build it yourself?
http://www.riverbankcomputing.co.uk/static/Downloads/PyQt4/PyQt-Py3.1-gpl-4.6.2-2.exe
|
Troubles installing PyQt4
|
I'm following this guide.
Python is at C:\Python31
PyQt4 is at C:\Python31\pyqt
sip is at C:\Python31\sip
Qt is at C:\Qt\4.6.0
I followed the instructions on that guide, but when I tried to test it (from PyQt4.Qt install *), it said the module didn't exist. I checked all the files that guide said should exist, and none of them existed.
What should I do?
Oh:
sip installed fine. from sip import * didn't yield errors, print(SIP_VERSION_STR) output 4.10-snapshot-20091204.
|
[
"There is pre-built version already, why you still need to build yourself?\nhttp://www.riverbankcomputing.co.uk/static/Downloads/PyQt4/PyQt-Py3.1-gpl-4.6.2-2.exe\n"
] |
[
5
] |
[] |
[] |
[
"pyqt4",
"python",
"python_sip"
] |
stackoverflow_0001851321_pyqt4_python_python_sip.txt
|
Q:
Numpy - show decimal values in array results
How do I get NumPy to keep the decimal places when dividing arrays, instead of truncating them like this?
>>> A = numpy.array([[1,2,3], [4,5,6], [7,8,9]])
>>> C = numpy.array([[7,8,9], [1,2,3], [4,5,6]])
>>> A / C
array([[0, 0, 0],
       [4, 2, 2],
       [1, 1, 1]])
The first row should not be truncated to zero; it should be [0.143, 0.250, 0.333].
A:
To avoid integer division, use numpy.true_divide(A,C). You can also put from __future__ import division at the top of the file to default to this behavior.
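A quick sketch of both options, using the same A and C as above (output shown approximately):
import numpy

A = numpy.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
C = numpy.array([[7, 8, 9], [1, 2, 3], [4, 5, 6]])

print numpy.true_divide(A, C)[0]  # approx. [ 0.143  0.25   0.333]

A_float = A.astype(float)         # or cast once, then divide normally
print (A_float / C)[0]            # same result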
A:
Try converting one of the arrays A or C into an array of floats. For instance:
A = A * 1.0
Then the division will be floating point division.
A:
Numpy arrays may have different types. You may also create a float array, it will always divide correctly:
>>> A = numpy.array ([[1,2,3], [4,5,6], [7,8,9]], dtype=float)
>>> A/2
array([[ 0.5, 1. , 1.5],
[ 2. , 2.5, 3. ],
[ 3.5, 4. , 4.5]])
Notice the dtype= argument to numpy.array
|
Numpy - show decimal values in array results
|
how do I calculate that an array of python numpy or me of all the calculate decimals and not skip like.
>> A = numpy.array ([[1,2,3], [4,5,6], [7,8,9]]).
>> C = numpy.array ([[7,8,9], [1,2,3], [4,5,6]]).
>> A / C
array ([[0, 0, 0],
[4, 2, 2],
[1, 1, 1]])
but in the first vector would not have to be given to absolute zero [0.143, 0.250, 0.333]
|
[
"To avoid integer division, use numpy.true_divide(A,C). You can also put from __future__ import division at the top of the file to default to this behavior.\n",
"Try converting one of the arrays A or C into an array of floats. For instance:\nA = A * 1.0\n\nThen the division will be floating point division.\n",
"Numpy arrays may have different types. You may also create a float array, it will always divide correctly:\n>>> A = numpy.array ([[1,2,3], [4,5,6], [7,8,9]], dtype=float)\n>>> A/2\narray([[ 0.5, 1. , 1.5],\n [ 2. , 2.5, 3. ],\n [ 3.5, 4. , 4.5]])\n\nNotice the dtype= argument to numpy.array\n"
] |
[
6,
4,
1
] |
[] |
[] |
[
"numpy",
"python"
] |
stackoverflow_0001799527_numpy_python.txt
|
Q:
Why is pdb displaying "*** Blank or comment" when I try to set a Break?
I'm working with my Django app. For some reason an element of a list is being assigned incorrectly.
I'm trying to set a break where I think the error is occurring. ( line 20 )
I'm invoking pdb with this line of code:
import pdb; pdb.set_trace()
However, inside the code, I can't seem to set a Break.
(Pdb) b 20
*** Blank or comment
(Pdb) break 20
*** Blank or comment
What am I doing wrong?
A:
pdb is telling you that line 20 of the file you're in doesn't contain code; it's either blank or just contains a comment. Such a line will never actually be executed, so a breakpoint can't be set on it.
Use the 'list' command to see the code of the file you're currently in ('help list' for details on this command), and then set breakpoints on lines which include executable code.
You can also use the 'where' command to see the stack frame, since you might not be in the right file because you're not looking at the level of the stack frame where you think you are. Use 'up' and 'down' to go to the level of the stack where you want to debug.
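For example, a typical sequence looks like this (line 22 is just an illustrative line that contains real code in your file):
(Pdb) where      # confirm which frame and file you are in
(Pdb) list       # show the source around the current line
(Pdb) break 22   # set the breakpoint on an executable line
(Pdb) continue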
|
Why is pdb displaying "*** Blank or comment" when I try to set a Break?
|
I'm working with my Django app. For some reason an element of a list is being assigned incorrectly.
I'm trying to set a break where I think the error is occurring. ( line 20 )
I'm invoking pdb with this line of code:
import pdb; pdb.set_trace()
However, inside the code, I can't seem to set a Break.
(Pdb) b 20
*** Blank or comment
(Pdb) break 20
*** Blank or comment `
What am I doing wrong?
|
[
"pdb is telling you that line 20 of the file you're in doesn't contain code; it's either blank or just contains a comment. Such a line will never actually be executed, so a breakpoint can't be set on it.\nUse the 'list' command to see the code of the file you're currently in ('help list' for details on this command), and then set breakpoints on lines which include executable code.\nYou can also use the 'where' command to see the stack frame, since you might not be in the right file because you're not looking at the level of the stack frame where you think you are. Use 'up' and 'down' to go to the level of the stack where you want to debug.\n"
] |
[
9
] |
[] |
[] |
[
"django",
"pdb",
"python"
] |
stackoverflow_0001852427_django_pdb_python.txt
|
Q:
Ruby Quiz for Python
Is there a blog/forum/listserv that is equivalent to RubyQuiz.com for the Python language?
A:
How about the Python Challenge?
It isn't a weekly challenge, more a fixed set of challenges of increasing difficulty, but it is fun and educational nonetheless. A great way to get to know Python and have fun solving puzzles. Try to do them yourself without cheating to get the most out of it!
A:
At the risk of stating the obvious, why not just do the RubyQuiz examples in Python? Those exercises, as well as others, aren't tied to a language; you're just as well off doing Project Euler problems in Python rather than searching for Python-specific puzzles. A puzzle is a puzzle; a language is just a tool to solve it.
|
Ruby Quiz for Python
|
Is there a blog/forum/listserv that is equivalent to RubyQuiz.com for the Python language?
|
[
"How about the Python Challenge?\nIt isn't a weekly challenge, more a fixed set of challenges of increasing difficulty but it is fun and educational none-the-less. A great way to get to know python and have fun solving puzzles. Try to do them yourself without cheating to get the most out of it!\n",
"At the risk of stating the obvious, why not just do the rubyquiz examples in python. Those exercises as well as others aren't tied to a language - you're just as well off just doing projecteuler problems in python rather than searching for python-specific puzzles. A puzzle is a puzzle a language is just a tool to solve it.\n"
] |
[
2,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001851396_python.txt
|
Q:
Python values with units
I need to keep track of units on float and int values in Python, but I don't want to use an external package like magnitude or others, because I don't need to perform operations on the values. Instead, all I want is to be able to define floats and ints that have a unit attribute (and I don't want to add a new dependency for something this simple). I tried doing:
class floatwithunit(float):
__oldinit__ = float.__init__
def __init__(self, *args, **kwargs):
if 'unit' in kwargs:
self.unit = kwargs.pop('unit')
self.__oldinit__(*args, **kwargs)
But this doesn't work at all:
In [37]: a = floatwithunit(1.,unit=1.)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/Users/tom/<ipython console> in <module>()
TypeError: float() takes at most 1 argument (2 given)
Any suggestions?
A:
You might be looking for something like this:
class UnitFloat(float):
def __new__(self, value, unit=None):
return float.__new__(self, value)
def __init__(self, value, unit=None):
self.unit = unit
x = UnitFloat(35.5, "cm")
y = UnitFloat(42.5)
print x
print x.unit
print y
print y.unit
print x + y
Yields:
35.5
cm
42.5
None
78.0
A:
You need to override __new__ (the "constructor proper", while __init__ is the "initializer"), otherwise float's __new__ gets called with extraneous arguments, which is the cause of the problem you're seeing. You don't need to call float's __init__ (it's a no-op). Here's how I'd code it:
class floatwithunit(float):
def __new__(cls, value, *a, **k):
return float.__new__(cls, value)
def __init__(self, value, *args, **kwargs):
self.unit = kwargs.pop('unit', None)
def __str__(self):
return '%f*%s' % (self, self.unit)
a = floatwithunit(1.,unit=1.)
print a
emitting 1.000000*1.0.
A:
I think you mean
class floatwithunit(float):
rather than
def floatwithunit(float):
A:
Alex Martelli indeed pointed out the root of the problem. I always find __new__ quite confusing, however, so here's a working piece of example code:
(UPDATE: It's been 13 years since this answer was written. This code requires a simple fix to work on recent Python versions (tested with 3.10): Replace the super() calls with float.)
class FloatWithUnit(float):
def __new__(cls, *args, **kwargs):
# avoid error in float.__new__
# the original kwargs (with 'unit') will still be passed to __init__
if 'unit' in kwargs:
kwargs.pop('unit')
return super(FloatWithUnit, cls).__new__(cls, *args, **kwargs)
def __init__(self, *args, **kwargs):
self.unit = kwargs.pop('unit') if 'unit' in kwargs else None
super(FloatWithUnit, self).__init__(*args, **kwargs)
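For convenience, here is a minimal sketch of that fix applied (Python 3 only; this just follows the note above and is otherwise untested):
class FloatWithUnit(float):
    def __new__(cls, *args, **kwargs):
        # float.__new__ must not see the extra 'unit' keyword
        kwargs.pop('unit', None)
        return float.__new__(cls, *args, **kwargs)

    def __init__(self, *args, **kwargs):
        # __init__ receives its own copy of the keyword arguments,
        # so 'unit' is still visible here
        self.unit = kwargs.get('unit')
Note that no explicit float.__init__ call is needed: float is immutable, so its __init__ is a no-op.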
|
Python values with units
|
I need to keep track of units on float and int values in Python, but I don't want to use an external package like magnitude or others, because I don't need to perform operations on the values. Instead, all I want is to be able to define floats and ints that have a unit attribute (and I don't want to add a new dependency for something this simple). I tried doing:
class floatwithunit(float):
__oldinit__ = float.__init__
def __init__(self, *args, **kwargs):
if 'unit' in kwargs:
self.unit = kwargs.pop('unit')
self.__oldinit__(*args, **kwargs)
But this doesn't work at all:
In [37]: a = floatwithunit(1.,unit=1.)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/Users/tom/<ipython console> in <module>()
TypeError: float() takes at most 1 argument (2 given)
Any suggestions?
|
[
"You might be looking for something like this:\nclass UnitFloat(float):\n\n def __new__(self, value, unit=None):\n return float.__new__(self, value)\n\n def __init__(self, value, unit=None):\n self.unit = unit\n\n\nx = UnitFloat(35.5, \"cm\")\ny = UnitFloat(42.5)\n\nprint x\nprint x.unit\n\nprint y\nprint y.unit\n\nprint x + y\n\nYields:\n35.5\ncm\n42.5\nNone\n78.0\n\n",
"You need to override __new__ (the \"constructor proper\", while __init__ is the \"initializer\"), otherwise float's __new__ gets called with extraneous arguments, which is the cause of the problem you're seeing. You don't need to call float's __init__ (it's a no-op). Here's how I'd code it:\nclass floatwithunit(float):\n\n def __new__(cls, value, *a, **k):\n return float.__new__(cls, value)\n\n def __init__(self, value, *args, **kwargs):\n self.unit = kwargs.pop('unit', None)\n\n def __str__(self):\n return '%f*%s' % (self, self.unit)\n\na = floatwithunit(1.,unit=1.)\n\nprint a\n\nemitting 1.000000*1.0.\n",
"I think you mean\nclass floatwithunit(float):\n\nrather than\ndef floatwithunit(float):\n\n",
"Alex Martelli indeed pointed out the root of the problem. I always find __new__ quite confusing, however, so here's a working piece of example code:\n(UPDATE: It's been 13 years since this answer was written. This code requires a simple fix to work on recent Python versions (tested with 3.10): Replace the super() calls with float.)\nclass FloatWithUnit(float):\n def __new__(cls, *args, **kwargs):\n # avoid error in float.__new__\n # the original kwargs (with 'unit') will still be passed to __init__\n if 'unit' in kwargs:\n kwargs.pop('unit')\n return super(FloatWithUnit, cls).__new__(cls, *args, **kwargs)\n\n def __init__(self, *args, **kwargs):\n self.unit = kwargs.pop('unit') if 'unit' in kwargs else None\n super(FloatWithUnit, self).__init__(*args, **kwargs)\n\n"
] |
[
11,
7,
1,
0
] |
[
"It looks like you need to check if kwargs is not None before you try to see if there is a label 'unit'.\nChange your code to\n if kwargs and 'unit' in kwargs:\n\nUpdated answer:\ndon't pass kwargs to __oldinit__\n\n"
] |
[
-2
] |
[
"python"
] |
stackoverflow_0001852720_python.txt
|
Q:
Django application deployment help
I'm using Capistrano to deploy a Django application (it uses Nginx as the web server), using instructions I found at http://akashxav.com/2009/07/11/getting-django-running-on-nginx-and-fastcgi-on-prgmr/ (I had to look at a cached version earlier today) and was wondering about the last command in there, which is
python manage.py runfcgi host=127.0.0.1 port=8081 --settings=settings
I understand at a high level that this is telling the application that we want to run a few instances of the FastCGI binary to serve up this application.
What I was wondering is how is the best way to handle "resetting" this, for lack of a better word. For those who don't know, Capistrano deploys things by creating "releases" directories and then providing a symlink to the latest release.
Since I can do post-deployment tasks (I've done this with CakePHP applications to do things like properly set directory permissions for a caching directory in the application) I was wondering how to turn off the existing processes created by the command above and start up new ones.
I hope I am making sense.
A:
There is a section in the django docs about this
Basically use the pidfile option to manage.py and then write a small shell script to use that pid to kill the existing cgi process if it exists before starting the new one.
Something like this
#!/bin/bash
if [ -f "pidfile" ]; then
kill `cat -- pidfile`
rm -f -- pidfile
fi
exec python manage.py runfcgi host=127.0.0.1 port=8081 pidfile=pidfile --settings=settings
NB FastCGI support is deprecated and will be removed in Django 1.9
|
Django application deployment help
|
I'm using Capistrano to deploy a Django application (it uses Nginx as the web server), using instructions I found at http://akashxav.com/2009/07/11/getting-django-running-on-nginx-and-fastcgi-on-prgmr/ (I had to look at a cached version earlier today) and was wondering about the last command in there, which is
python manage.py runfcgi host=127.0.0.1 port=8081 --settings=settings
I understand at a high level that this is telling the application that we want to run a few instances of the FastCGI binary to serve up this application.
What I was wondering is how is the best way to handle "resetting" this, for lack of a better word. For those who don't know, Capistrano deploys things by creating "releases" directories and then providing a symlink to the latest release.
Since I can do post-deployment tasks (I've done this with CakePHP applications to do things like properly set directory permissions for a caching directory in the application) I was wondering how to turn off the existing processes created by the command above and start up new ones.
I hope I am making sense.
|
[
"There is a section in the django docs about this\nBasically use the pidfile option to manage.py and then write a small shell script to use that pid to kill the existing cgi process if it exists before starting the new one.\nSomething like this\n#!/bin/bash\nif [ -f \"pidfile\" ]; then\n kill `cat -- pidfile`\n rm -f -- pidfile\nfi\nexec python manage.py runfcgi host=127.0.0.1 port=8081 pidfile=pidfile --settings=settings\n\nNB FastCGI support is deprecated and will be removed in Django 1.9\n"
] |
[
1
] |
[] |
[] |
[
"deployment",
"django",
"python"
] |
stackoverflow_0001852693_deployment_django_python.txt
|
Q:
Looking for feedback on my program design
I'm aware that SO is for questions, but overall the aim is to help people learn, so I figured I'd try my hand at sharing some code and asking for feedback on it.
I'm looking to create a program that will rely on random numbers, specifically dice. These will be presented in the form of "2D6", "4D10+3", "2D2 + 3D3" and so on and so forth. I thus set out to create a dice roller module that would be able to accept input like in that form.
It works just fine for what's needed but has a bug for things that probably won't be needed (the docstring at the start of the file should explain). What I am interested in is what people think of my code and if anybody can see ways to improve it.
It is still WIP and I've not started on the unit tests yet.
Link to code
#!/usr/bin/env python3
"""
Created by Teifion Jordan
http://woarl.com
Notes: The roller does not correctly apply * and / signs:
A + B * C is worked out as (A + B) * C, not A + (B * C) as would be correct
"""
import random
import re
import math
class Roller_dict (object):
"""A 'dictionary' that stores rollers, if it's not got that roller it'll make a new one"""
def __init__(self, generator=random.randint):
super(Roller_dict, self).__init__()
self.rollers = {}
# Generator is used to supply a "rigged" random function for testing purposes
self.generator = generator
def __call__(self, constructor):
constructor = constructor.replace(" ", "")
if constructor not in self.rollers:
self.rollers[constructor] = Roller(constructor, self.generator)
return self.rollers[constructor]()
# Regular expressions used by the Roller class
# Compiled here to save time if we need to make lots of Roller objects
pattern_split = re.compile(r"(\+|-|\*|/)")
pattern_constant = re.compile(r"([0-9]*)")
pattern_die = re.compile(r"([0-9]*)[Dd]([0-9]*)")
pattern_sign = re.compile(r"^(\+|-|\*|/)")
class Roller (object):
def __call__(self):
return self.roll()
def __init__(self, constructor, generator=random.randint):
super(Roller, self).__init__()
self.items = []
self.rebuild(constructor)
self.generator = generator
def rebuild(self, constructor):
"""Builds the Roller from a new constructor string"""
# First we need to split it up
c = pattern_split.split(constructor.replace(" ", ""))
# Check for exceptions
if len(c) == 0:
raise Exception('String "%s" did not produce any splits' % constructor)
# Stitch signs back into their sections
parts = []
last_p = ""
for p in c:
if p in "+-*/":
last_p = p
continue
if last_p != "":
p = "%s%s" % (last_p, p)
last_p = ""
parts.append(p)
# We have the parts, now we need to evaluate them into items
for p in parts:
# Look for a sign, default to positive
sign = pattern_sign.search(p)
if sign == None: sign = "+"
else: sign = sign.groups()[0]
# Strip out the sign, we're left with just the pure value
body = p.replace(sign, "")
# Now we find out what our main body is
# Die
value = pattern_die.search(body)
if value != None:
# Sign, Number, Sides
self.items.append(("die", sign, int(value.groups()[0]), int(value.groups()[1])))
continue
# Constant
value = pattern_constant.search(body)
if value != None:
self.items.append(("constant", sign, int(value.groups()[0])))
continue
# No matches
raise Exception('The part string "%s" had no matches' % body)
def roll(self):
"""Rolls the die/dice and returns the result"""
result = 0
for i in self.items:
# Get value
if i[0] == "die": value = self._derive_die(i[2], i[3])
elif i[0] == "constant": value = self._derive_constant(i[2])
else: raise Exception('No handler for item type "%s"' % i[0])
# Apply sign
if i[1] == "+": result += value
elif i[1] == "-": result -= value
elif i[1] == "*": result *= value
elif i[1] == "/": result /= value
return result
def _derive_die(self, number, sides):
result = 0
for n in range(0, number):
result += self.generator(0, sides)
return result
def _derive_constant(self, value):
return value
# Useful for running the tests to make sure that it uses "random" numbers
false_numbers = (int(math.cos(x)*5)+5 for x in range(0,1000))
def false_numbers_func(*args):
    return next(false_numbers)
# If it's main, run unit tests?
if __name__ == '__main__':
r = Roller_dict(false_numbers_func)
print(r("2D6"))
print(r("2D6"))
print(r("2D6"))
A:
I think your approach makes for a lot of complexity: you're trying to solve the hard problem (parsing the input) at the same time you're solving the less-hard problem (doing the dice-rolling). It's easier if you separate the problems.
A class to roll dice is relatively easy to write. Two things I'm doing that you're not: the mapping of signs to operations (using a map means not having to write logic, plus it's reusable), and letting Roller objects be chained together in a simple linked list, so that calling roll on the head of the list rolls all of them and sums up the result.
import random
R = random.Random()
class Roller(object):
# map signs to operations
op = { "+" : lambda a,b: a+b,
"-" : lambda a,b: a-b,
"*" : lambda a,b: a*b,
"/" : lambda a,b: a/b }
def __init__(self, dice, sides, sign=None, modifier=0):
self.dice = dice
self.sides = sides
self.sign = sign
self.modifier = modifier
self.next_sign = None
self.next_roller = None
def roll(self):
self.dice_rolled = [R.randint(1, self.sides) for n in range(self.dice)]
        result = sum(self.dice_rolled)
if self.sign:
result = self.op[self.sign](result, self.modifier)
if self.next_sign and self.next_roller:
result = self.op[self.next_sign](result, self.next_roller.roll())
return result
It's relatively easy to test that. Note that dice_rolled is saved as an attribute so that you can write unit tests more easily.
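By way of illustration, chaining two rollers by hand might look like this (a sketch that assumes the Roller class above, with the names as defined there):
r2 = Roller(1, 4)            # 1D4
r1 = Roller(2, 6, "+", 3)    # 2D6+3
r1.next_sign = "+"
r1.next_roller = r2
print(r1.roll())             # evaluates 2D6+3, then adds the 1D4 roll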
The next step is to figure out how to parse the input. This sort of works:
>>> p = """
(?P<next_sign>[-+*/])?
(?P<dice>[\d]+)
[\s]*D[\s]*
(?P<sides>[\d]+)
# trailing sign and modifier are optional, but if one is present both must be
([\s]*(?P<sign>[-+/*])[\s]*(?P<modifier>[\d]+))?"""
>>> r = re.compile(p, re.VERBOSE+re.IGNORECASE)
>>> m=r.match('2 d 20 +1')
>>> m.group('dice'), m.group('sides'), m.group('sign'), m.group('modifier')
('2', '20', '+', '1')
>>> r.findall('3D6*2-1D4+1*2D6-1')
[('', '3', '6', '*2', '*', '2'), ('-', '1', '4', '+1', '+', '1'), ('*', '2', '6', '-1', '-', '1')]
There's a lexical ambiguity that the syntax allows - 2D6+1D4 gets parsed as 2D6+1 followed by the unmatched D4, and it's not obvious to me how to fix that in the regular expression. Maybe that can be fixed with a negative lookahead assertion.
At any rate, once the regular expression gets fixed, the only thing left to do is process the results of r.findall to create a chain of Roller objects. And make that a class method if you really dig encapsulation.
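For what it's worth, one untested way to add that assertion is to forbid the modifier digits from being followed by a D, so they are left over for the next dice term:
([\s]*(?P<sign>[-+/*])[\s]*(?P<modifier>[\d]+(?![\s]*[Dd])))?
With that change, in 2D6+1D4 the optional modifier group should fail on +1 (because a D follows), match empty instead, and let findall pick up +1D4 as a second term.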
A:
Superficially, there's PEP 8; in particular, the use of 4 spaces for an indent vs. using tab characters.
It also seems like you have a lot of accidental complexity here, but I'd have to chew on it some more to understand. It seems like a simple enough idea that it shouldn't take as much effort to grok as it does.
A:
The pyparsing examples page includes a similar dice expression parser and roller, including these test cases:
D5+2d6*3-5.5+4d6
D5+2d6*3-5.5+4d6.takeHighest(3)
2d6*3-5.5+4d6.minRoll(2).takeHighest(3)
The first 30 lines or so of the script contain the parser, the rest contains an evaluator, including debugging code showing the rolls being rolled.
I realize this is more a "silver platter" answer rather than feedback to your posted code - one thing in common with Robert Rossney's answer is the clear separation of parsing vs. rolling. Perhaps between this and Robert's sample you can glean some tidbits for your own dice roller.
|
Looking for feedback on my program design
|
I'm aware that SO is for questions, but overall the aim is to help people learn, so I figured I'd try my hand at sharing some code and asking for feedback on it.
I'm looking to create a program that will rely on random numbers, specifically dice. These will be presented in the form of "2D6", "4D10+3", "2D2 + 3D3" and so on and so forth. I thus set out to create a dice roller module that would be able to accept input like in that form.
It works just fine for what's needed but has a bug for things that probably won't be needed (the docstring at the start of the file should explain). What I am interested in is what people think of my code and if anybody can see ways to improve it.
It is still WIP and I've not started on the unit tests yet.
Link to code
#!/usr/bin/env python3
"""
Created by Teifion Jordan
http://woarl.com
Notes: The roller does not correctly apply * and / signs:
A + B * C is worked out as (A + B) * C, not A + (B * C) as would be correct
"""
import random
import re
import math
class Roller_dict (object):
"""A 'dictionary' that stores rollers, if it's not got that roller it'll make a new one"""
def __init__(self, generator=random.randint):
super(Roller_dict, self).__init__()
self.rollers = {}
# Generator is used to supply a "rigged" random function for testing purposes
self.generator = generator
def __call__(self, constructor):
constructor = constructor.replace(" ", "")
if constructor not in self.rollers:
self.rollers[constructor] = Roller(constructor, self.generator)
return self.rollers[constructor]()
# Regular expressions used by the Roller class
# Compiled here to save time if we need to make lots of Roller objects
pattern_split = re.compile(r"(\+|-|\*|/)")
pattern_constant = re.compile(r"([0-9]*)")
pattern_die = re.compile(r"([0-9]*)[Dd]([0-9]*)")
pattern_sign = re.compile(r"^(\+|-|\*|/)")
class Roller (object):
def __call__(self):
return self.roll()
def __init__(self, constructor, generator=random.randint):
super(Roller, self).__init__()
self.items = []
self.rebuild(constructor)
self.generator = generator
def rebuild(self, constructor):
"""Builds the Roller from a new constructor string"""
# First we need to split it up
c = pattern_split.split(constructor.replace(" ", ""))
# Check for exceptions
if len(c) == 0:
raise Exception('String "%s" did not produce any splits' % constructor)
# Stitch signs back into their sections
parts = []
last_p = ""
for p in c:
if p in "+-*/":
last_p = p
continue
if last_p != "":
p = "%s%s" % (last_p, p)
last_p = ""
parts.append(p)
# We have the parts, now we need to evaluate them into items
for p in parts:
# Look for a sign, default to positive
sign = pattern_sign.search(p)
if sign == None: sign = "+"
else: sign = sign.groups()[0]
# Strip out the sign, we're left with just the pure value
body = p.replace(sign, "")
# Now we find out what our main body is
# Die
value = pattern_die.search(body)
if value != None:
# Sign, Number, Sides
self.items.append(("die", sign, int(value.groups()[0]), int(value.groups()[1])))
continue
# Constant
value = pattern_constant.search(body)
if value != None:
self.items.append(("constant", sign, int(value.groups()[0])))
continue
# No matches
raise Exception('The part string "%s" had no matches' % body)
def roll(self):
"""Rolls the die/dice and returns the result"""
result = 0
for i in self.items:
# Get value
if i[0] == "die": value = self._derive_die(i[2], i[3])
elif i[0] == "constant": value = self._derive_constant(i[2])
else: raise Exception('No handler for item type "%s"' % i[0])
# Apply sign
if i[1] == "+": result += value
elif i[1] == "-": result -= value
elif i[1] == "*": result *= value
elif i[1] == "/": result /= value
return result
def _derive_die(self, number, sides):
result = 0
for n in range(0, number):
result += self.generator(0, sides)
return result
def _derive_constant(self, value):
return value
# Useful for running the tests to make sure that it uses "random" numbers
false_numbers = (int(math.cos(x)*5)+5 for x in range(0,1000))
def false_numbers_func(*args):
    return next(false_numbers)
# If it's main, run unit tests?
if __name__ == '__main__':
r = Roller_dict(false_numbers_func)
print(r("2D6"))
print(r("2D6"))
print(r("2D6"))
|
[
"I think your approach makes for a lot of complexity: you're trying to solve the hard problem (parsing the input) at the same time you're solving the less-hard problem (doing the dice-rolling). It's easier if you separate the problems.\nA class to roll dice is relatively easy to write. Two things I'm doing that you're not: the mapping of signs to operations (using a map means not having to write logic, plus it's reusable), and letting Roller objects be chained together in a simple linked list, so that calling roll on the head of the list rolls all of them and sums up the result.\nimport random\nR = random.Random()\n\nclass Roller(object):\n # map signs to operations\n op = { \"+\" : lambda a,b: a+b,\n \"-\" : lambda a,b: a-b,\n \"*\" : lambda a,b: a*b,\n \"/\" : lambda a,b: a/b }\n\n def __init__(self, dice, sides, sign=None, modifier=0):\n self.dice = dice\n self.sides = sides\n self.sign = sign\n self.modifier = modifier\n self.next_sign = None\n self.next_roller = None\n\n def roll(self):\n self.dice_rolled = [R.randint(1, self.sides) for n in range(self.dice)]\n result = sum(dice_rolled)\n if self.sign:\n result = self.op[self.sign](result, self.modifier)\n if self.next_sign and self.next_roller:\n result = self.op[self.next_sign](result, self.next_roller.roll())\n return result\n\nIt's relatively easy to test that. Note that dice_rolled is saved as an attribute so that you can write unit tests more easily.\nThe next step is to figure out how to parse the input. This sort of works:\n>>> p = \"\"\"\n(?P<next_sign>[-+*/])?\n(?P<dice>[\\d]+)\n[\\s]*D[\\s]*\n(?P<sides>[\\d]+)\n# trailing sign and modifier are optional, but if one is present both must be\n([\\s]*(?P<sign>[-+/*])[\\s]*(?P<modifier>[\\d]+))?\"\"\"\n>>> r = re.compile(p, re.VERBOSE+re.IGNORECASE)\n>>> m=r.match('2 d 20 +1')\n>>> m.group('dice'), m.group('sides'), m.group('sign'), m.group('modifier')\n('2', '20', '+', '1')\n>>> r.findall('3D6*2-1D4+1*2D6-1')\n[('', '3', '6', '*2', '*', '2'), ('-', '1', '4', '+1', '+', '1'), ('*', '2', '6', '-1', '-', '1')]\n\nThere's a lexical ambiguity that the syntax allows - 2D6+1D4 gets parsed as 2D6+1 followed by the unmatched D4, and it's not obvious to me how to fix that in the regular expression. Maybe that can be fixed with a negative lookahead assertion.\nAt any rate, once the regular expression gets fixed, the only thing left to do is process the results of r.findall to create a chain of Roller objects. And make that a class method if you really dig encapsulation.\n",
"Superficially, there's PEP08; in particular, the use of 4 spaces for an indent vs using tab characters.\nIt also seems like you have a lot of accidental complexity here, but I'd have to chew on it some more to understand. It seems like a simple enough idea that it shouldn't take as much effort to grok as it seems to be.\n",
"The pyparsing examples page includes a similar dice expression parser and roller, including these test cases:\nD5+2d6*3-5.5+4d6\nD5+2d6*3-5.5+4d6.takeHighest(3)\n2d6*3-5.5+4d6.minRoll(2).takeHighest(3)\n\nThe first 30 lines or so of the script contain the parser, the rest contains an evaluator, including debugging code showing the rolls being rolled.\nI realize this is more a \"silver platter\" answer rather than feedback to your posted code - one thing in common with Robert Rossney's answer is the clear separation of parsing vs. rolling. Perhaps between this and Robert's sample you can glean some tidbits for your own dice roller.\n"
] |
[
3,
1,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001849924_python.txt
|
Q:
why Ghost Process appears after kill -9
In my Python script, I first launch a subprocess by subprocess.Popen(). Then later on, I want to kill that subprocess by kill -9 Pid.
What I found is that after the kill is executed, the subprocess is "stopped" because the GUI window of that process disappeared immediately. But when I perform a "ps aux" right after the kill, the same process (with same pid) is still shown in the result. The difference is the command of the process is included in a pair of () like below:
root 30506 0.0 0.0 0 0 s000 Z+ 6:13PM
0:00.00 (sample process)
This breaks my process-detection logic, since the dead process can still be found by ps.
Anyone know why this is happening?
Thanks!
A:
From the manual page of ps:
Z Defunct ("zombie") process,
terminated but not reaped by its
parent.
That means that the parent didn't do a waitpid() for the child that died.
Apart from waitpid(), you can avoid that by using a double fork when executing the child.
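Since the question launches the child with subprocess.Popen, a minimal sketch of reaping it after the kill looks like this (the command line is hypothetical; kill() and wait() are standard Popen methods):
import subprocess

proc = subprocess.Popen(["some-gui-app"])  # hypothetical command
# ... later, when you want it gone ...
proc.kill()   # sends SIGKILL, the same as kill -9
proc.wait()   # reaps the child, so no zombie entry lingers in ps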
A:
I think the -9 signal lets the process try to handle the kill and spend some time on housekeeping. You can try just killing the process without a signal.
Edit: oh, it's actually the -15 signal that lets a process die gracefully. Never mind.
A:
Zombie processes are actually just an entry in the process table. They do not run, they don't consume memory; the entry just stays because the parent hasn't checked their exit code.
You can either do a double fork as Gonzalo suggests, or you can filter out all ps lines with a Z in the S column.
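For example, assuming the STAT column is the eighth field of ps aux output on your system:
ps aux | awk '$8 !~ /^Z/'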
|
why Ghost Process appears after kill -9
|
In my Python script, I first launch a subprocess by subprocess.Popen(). Then later on, I want to kill that subprocess by kill -9 Pid.
What I found is that after the kill is executed, the subprocess is "stopped" because the GUI window of that process disappeared immediately. But when I perform a "ps aux" right after the kill, the same process (with same pid) is still shown in the result. The difference is the command of the process is included in a pair of () like below:
root 30506 0.0 0.0 0 0 s000 Z+ 6:13PM
0:00.00 (sample process)
This breaks my process-detection logic, since the dead process can still be found by ps.
Anyone know why this is happening?
Thanks!
|
[
"From the manual page of ps:\n\nZ Defunct (\"zombie\") process,\n terminated but not reaped by its\n parent.\n\nThat means that the parent didn't do a waitpid() for the child that died.\nApart from waitpid(), you can avoid that by using a double fork when executing the child.\n",
"I think -9 signal lets the process to try to handle kill and spend some time housekeeping. You can try just kill the process without signal.\nEdit: oh, its actually -15 signal, that lets process die gracefully. never mind.\n",
"Zombie processes are actually just an entry in the process table. They do not run, they don't consume memory; the entry just stays because the parent hasn't checked their exit code.\nYou can either do a double fork as Gonzalo suggests, or you can filter out all ps lines with a Z in the S column.\n"
] |
[
4,
0,
0
] |
[] |
[] |
[
"kill",
"process",
"python"
] |
stackoverflow_0001830370_kill_process_python.txt
|