Prompt (string, 10–31k chars) | Chosen (string, 3–29.4k chars) | Rejected (string, 3–51.1k chars) | Title (string, 9–150 chars) | Tags (list, 3–7 items) |
|---|---|---|---|---|
Using libpq on PG 9.1, I am trying to write a query to get the values from the row with the highest index 'my\_id':
```
SELECT my_id, col2, col3
FROM mytable
WHERE my_id = MAX(my_id)
```
That gives me error:
> ERROR: aggregates not allowed in WHERE clause...
How do I write such a query properly? | If your goal is to get the row with the highest my\_id value, then the following query should achieve the same goal.
```
SELECT my_id, col2, col3
FROM mytable
ORDER BY my_id DESC
LIMIT 1
``` | Just order by `my_id` and take only the first record with `limit 1`
```
SELECT my_id, col2, col3
FROM mytable
order by my_id desc
limit 1
```
Another, but less performant, way would be
```
SELECT my_id, col2, col3
FROM mytable
where my_id = (select max(my_id) from mytable)
``` | PostgreSQL, SELECT from max id | [
"",
"sql",
"postgresql",
"greatest-n-per-group",
""
] |
I have a server that has to handle lots of TCP requests from GPRS modules, so I think it is handy to set up something to protect this server from repeated requests from certain IPs.
Now I want to make something (within Python) that will check how many times a certain IP tries to connect, and if it exceeds a given number of tries, that IP will be blocked for a given amount of time (or forever).
I am wondering if there are libraries present to do this, or how I should tackle this problem in my code. | Don't tackle this from your code - this is what a firewall is designed to do.
Using iptables it's trivial:
```
iptables -I INPUT -p tcp --dport $PORT -i eth0 -m state --state NEW -m recent --set
iptables -I INPUT -p tcp --dport $PORT -i eth0 -m state --state NEW -m recent --update --seconds 600 --hitcount 2 -j DROP
```
The above means "drop anything that makes more than 2 connection attempts in 10 minutes at port $PORT" | If you decide you do want to handle this in code, you don't *need* a separate library (although using one will probably be more efficient), you can add something like the following to your connection handler:
```
from collections import defaultdict, deque
from datetime import datetime
floodlog = defaultdict(deque)
def checkForFlood(clientIP):
    """Check how many times clientIP has connected within TIMELIMIT, and block it if more than MAX_CONNECTIONS_PER_TIMELIMIT"""
now = datetime.now()
clientFloodLog = floodlog[clientIP]
clientFloodLog.append(now)
if len(clientFloodLog) > MAX_CONNECTIONS_PER_TIMELIMIT:
        earliestLoggedConnection = clientFloodLog.popleft()
if now - earliestLoggedConnection < TIMELIMIT:
blockIP(clientIP)
``` | Blocking certain ip's if exceeds 'tries per x' | [
"",
"python",
""
] |
If I have a class like below:
```
class Point(object):
def __init__(self, x, y):
self.x = x
self.y = y
```
And have 2 objects:
```
a = Point(1,2)
b = Point(1,2)
```
How can I modify class Point to make `id(a) == id(b)`? | ```
class Point(object):
__cache = {}
def __new__(cls, x, y):
if (x, y) in Point.__cache:
return Point.__cache[(x, y)]
else:
o = object.__new__(cls)
o.x = x
o.y = y
Point.__cache[(x, y)] = o
return o
>>> Point(1, 2)
<__main__.Point object at 0xb6f5d24c>
>>> id(Point(1, 2)) == id(Point(1,2))
True
```
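One subtlety worth noting (an addition to the original answer, using an illustrative class name): when `__new__` returns an already-cached instance, Python still calls `__init__` on it, so any setup placed there reruns on every lookup and should be kept idempotent.

```python
# Hedged sketch (assuming standard CPython semantics; CachedPoint is an
# illustrative name, not from the answer above).
class CachedPoint(object):
    _cache = {}
    init_calls = 0

    def __new__(cls, x, y):
        if (x, y) not in cls._cache:
            cls._cache[(x, y)] = object.__new__(cls)
        return cls._cache[(x, y)]

    def __init__(self, x, y):
        # Runs on every call, even when __new__ returned a cached instance,
        # so keep it idempotent.
        CachedPoint.init_calls += 1
        self.x = x
        self.y = y

a = CachedPoint(1, 2)
b = CachedPoint(1, 2)
print(a is b)                  # True: the second call hit the cache
print(CachedPoint.init_calls)  # 2: __init__ ran again on the cache hit
```

The answer above sidesteps this by doing all setup inside `__new__` and defining no `__init__`.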
When you need a really simple class like `Point`, always consider `collections.namedtuple`
```
from collections import namedtuple
def Point(x, y, _Point=namedtuple('Point', 'x y'), _cache={}):
return _cache.setdefault((x, y), _Point(x, y))
>>> Point(1, 2)
Point(x=1, y=2)
>>> id(Point(1, 2)) == id(Point(1, 2))
True
```
I used a function alongside `namedtuple` because it is simpler IMO but you can easily represent it as a class if needed:
```
class Point(namedtuple('Point', 'x y')):
__cache = {}
def __new__(cls, x, y):
return Point.__cache.setdefault((x, y),
            super(Point, cls).__new__(cls, x, y))
```
As @PetrViktorin noted in his [answer](https://stackoverflow.com/a/16978210/1219006) you should consider the use of a [`weakref.WeakValueDictionary`](http://docs.python.org/2/library/weakref.html#weakref.WeakValueDictionary) so deleted instances of the class (doesn't work with `namedtuple` apparently) don't remain in memory since they remain referenced in the dictionary itself. | You need to have a global dictionary of objects, and get them through a factory function (or a custom `__new__`, see the other answers). Additionally, consider using a [`WeakValueDictionary`](http://docs.python.org/2/library/weakref.html#weakref.WeakValueDictionary) so that you don't unnecessarily fill up memory with objects that are no longer needed.
```
from weakref import WeakValueDictionary
class _Point(object):
def __init__(self, x, y):
self.x = x
self.y = y
# Cache of Point objects the program currently uses
_points = WeakValueDictionary()
def Point(x, y):
"""Create a Point object"""
# Note that this is a function (a "factory function")
# You can also override Point.__new__ instead
try:
return _points[x, y]
except KeyError:
_points[x, y] = point = _Point(x, y)
return point
if __name__ == '__main__':
# A basic demo
print Point(1, 2)
print id(Point(1, 2))
print Point(2, 3) == Point(2, 3)
pt_2_3 = Point(2, 3)
# The Point(1, 2) we created earlier is not needed any more.
    # In current CPython, it will have been garbage collected by now
# (but note that Python makes no guarantees about when objects are deleted)
# If we create a new Point(1, 2), it should get a different id
print id(Point(1, 2))
```
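As an aside (my addition, not part of either answer), the reason a namedtuple can't live in a `WeakValueDictionary` is that namedtuple subclasses tuple with `__slots__ = ()`, so its instances have no `__weakref__` slot and can't be weakly referenced at all:

```python
import weakref
from collections import namedtuple

# namedtuple subclasses tuple and sets __slots__ = (), so its instances
# cannot be the target of a weak reference.
P = namedtuple('P', 'x y')
try:
    weakref.ref(P(1, 2))
    supports_weakref = True
except TypeError:
    supports_weakref = False
print(supports_weakref)  # False
```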
Note that a namedtuple won't work with WeakValueDictionary. | How to make two objects have the same id in python? | [
"",
"python",
"object",
"identity",
""
] |
I am currently writing data from an infinite while loop to an SD Card on a raspberry pi.
```
file = open("file.txt", "w")
while True:
file.write( DATA )
```
It seems that file.txt doesn't always save if the program isn't closed through either a command or a keyboard interrupt. Is there a periodic way to save and make sure the data is being saved? I was considering using
```
open("file.txt", "a")
```
to append to the file, periodically closing it and opening it up again. Would there be a better way to safely store data while running through an infinite while loop? | A file's [`write()`](http://docs.python.org/2/library/stdtypes.html#file.write) method doesn't necessarily write the data to disk. You have to call the [`flush()`](http://docs.python.org/2/library/stdtypes.html#file.flush) method to ensure this happens...
```
file = open("file.txt", "w")
while True:
file.write( DATA )
file.flush()
```
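If the data must also survive an operating-system crash (an aside, not from the original answer), `flush()` only hands the data to the OS; `os.fsync()` additionally asks the kernel to commit its own buffers to the device. A minimal sketch with an illustrative filename:

```python
import os

# flush() empties Python's user-space buffer into the OS page cache;
# os.fsync() then asks the kernel to push that cache to the storage device.
with open("file.txt", "w") as f:
    f.write("DATA\n")
    f.flush()             # data now visible to other processes
    os.fsync(f.fileno())  # data committed to the device, as far as the OS knows
```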
Don't worry too much about the reference to `os.fsync()` - the OS may report the data as written to disk even when it actually hasn't been. | Use a with statement -- it will make sure that the file automatically closes!
```
with open("file.txt", "w") as myFile:
myFile.write(DATA)
```
Essentially, what the with statement will do in this case is this:
```
try:
myFile = open("file.txt", "w")
do_stuff()
finally:
myFile.close()
```
assuring you that the file will be closed, and that the information written to the file will be saved.
More information about the with statement can be found here: [PEP 343](http://www.python.org/dev/peps/pep-0343/) | Save to Text File from Infinite While Loop | [
"",
"python",
"while-loop",
""
] |
To extract only the appropriate rows, I have to do something like Mycolumn in ('x%','f%'), which means I want all the rows where Mycolumn starts with `x` or `f`.
I can use `REGEXP_LIKE(Mycolumn, '^x', 'i')` to extract all rows where Mycolumn starts with `x`. How can I add OR in the regex to tell my function that I also need rows starting with `f`?
Thanks. | You can do this:
```
REGEXP_LIKE(Mycolumn, '^[xf]')
``` | You can use OR to provide both the conditions.
```
SELECT * FROM TABLE_NAME
WHERE REGEXP_LIKE(COLUMN, '^x')
OR REGEXP_LIKE(COLUMN, '^f');
``` | Oracle 10g SQL like using REGEXP_LIKE | [
"",
"sql",
"regex",
"oracle10g",
""
] |
I'm using FuncAnimation in matplotlib's animation module for some basic animation. This function perpetually loops through the animation. Is there a way by which I can pause and restart the animation by, say, mouse clicks? | Here is [a FuncAnimation example](http://scipyscriptrepo.com/wp/?p=9) which I modified to pause on mouse clicks.
Since the animation is driven by a generator function, `simData`, when the global variable `pause` is True, yielding the same data makes the animation appear paused.
The value of `pause` is toggled by setting up an event callback:
```
def onClick(event):
global pause
pause ^= True
fig.canvas.mpl_connect('button_press_event', onClick)
```
---
```
import matplotlib.pyplot as plt
import numpy as np
import matplotlib.animation as animation
pause = False
def simData():
t_max = 10.0
dt = 0.05
x = 0.0
t = 0.0
while t < t_max:
if not pause:
x = np.sin(np.pi*t)
t = t + dt
yield x, t
def onClick(event):
global pause
pause ^= True
def simPoints(simData):
x, t = simData[0], simData[1]
time_text.set_text(time_template%(t))
line.set_data(t, x)
return line, time_text
fig = plt.figure()
ax = fig.add_subplot(111)
line, = ax.plot([], [], 'bo', ms=10)
ax.set_ylim(-1, 1)
ax.set_xlim(0, 10)
time_template = 'Time = %.1f s'
time_text = ax.text(0.05, 0.9, '', transform=ax.transAxes)
fig.canvas.mpl_connect('button_press_event', onClick)
ani = animation.FuncAnimation(fig, simPoints, simData, blit=False, interval=10,
repeat=True)
fig.show()
``` | This works...
```
anim = animation.FuncAnimation(fig, animfunc[,..other args])
#pause
anim.event_source.stop()
#unpause
anim.event_source.start()
``` | stop / start / pause in matplotlib animation | [
"",
"python",
"matplotlib",
"matplotlib-animation",
""
] |
In Qt, you have this routine (among others) in QAbstractItemModel
```
bool insertRows(int row, int count, const QModelIndex &parent = QModelIndex());
```
Which basically instantiates a new QModelIndex every time it is called if parent is not specified.
In python, the meaning of the same line is vastly different: only one QModelIndex would be instantiated and shared at every call
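For readers unfamiliar with this Python behavior, a minimal illustration (my addition, not part of the original question): a default argument expression is evaluated once, when the `def` statement runs, and the resulting object is shared by every call.

```python
# One list is created at definition time and reused across calls.
def append_to(x, acc=[]):
    acc.append(x)
    return acc

print(append_to(1))  # [1]
print(append_to(2))  # [1, 2] -- the same list carried over
```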
The point that is not clear to me is how this difference is handled in PyQt. The [documentation](http://pyqt.sourceforge.net/Docs/PyQt4/qabstractitemmodel.html) seems to be automatically generated from the C++ one, with the result that the default argument instantiation uses the same syntax, but with a completely different meaning, leaving the issue unaddressed.
This problem of course carries on to custom reimplementations in PyQt of the QAbstractItemModel. Should you declare
```
def insertRows(self, row, count, index=QtCore.QModelIndex()):
```
or
```
def insertRows(self, row, count, index=None):
```
and then instantiate a new QModelIndex if index is None? | ```
bool insertRows(int row, int count,
const QModelIndex &parent = QModelIndex());
```
and
```
def insertRows(self, row, count, index=QtCore.QModelIndex()):
```
Both of these examples result in an invalid index instance.
## What is an invalid [QModelIndex](http://qt-project.org/doc/qt-4.8/qmodelindex.html#details)?
> An invalid model index can be constructed with the QModelIndex
> constructor. Invalid indexes are often used as parent indexes when
> referring to top-level items in a model.
## Does [insertRows](http://qt-project.org/doc/qt-4.8/qabstractitemmodel.html#insertRows) need a new invalid instance each time it is called?
> In case of the [insertRows](http://qt-project.org/doc/qt-4.8/qabstractitemmodel.html#insertRows) function the base class implementation
> of this function does nothing and returns false.
The quote means that if you use `QAbstractItemModel` you need to implement `insertRows` yourself.
This means you need to call [beginInsertRows](http://qt-project.org/doc/qt-4.8/qabstractitemmodel.html#beginInsertRows) which takes the parent argument.
**When parent indexes are involved, C++ side of Qt will not care which instance is given**. As long as it is invalid it will mean the current item is in the top level of the model and has no parent.
**`QAbstractItemModel` should not delete any indexes that it hasn't created by it self**.
In C++, the parent argument is passed as `const` reference and thus will not be deleted or changed by `beginInsertRows` function.
Segmentation faults that could occur if the C++ instance was deleted while it was still referenced in Python are your biggest problem, I think.
Now in Python, the argument created in the function definition will usually have a long life span, and **there could be ways for the instance to get deleted *that I am unaware of*, but generally you should be safe.**
If you are worried about this simply create new instance each time.
[index = index or QtCore.QModelIndex()](https://stackoverflow.com/a/17213716/327317)
But for what it's worth, I don't remember having trouble creating index instances in function definitions, and I have done so on several occasions. | This is how I did it in my program, with the begin and end added:
```
def insertRows(self, row, count, index):
    self.beginInsertRows(index, row, row + count - 1)  # beginInsertRows takes (parent, first, last)
""" Your stuff here
"""
self.endInsertRows()
```
Also check out the [pyside docs](https://deptinfo-ensip.univ-poitiers.fr/ENS/pyside-docs/index.html), because they usually do a much better job than the pyqt ones, and are under the LGPL so you can use it for business. IIRC it is the same result, but a different implementation under different licenses. | How are default arguments handled in pyqt? | [
"",
"python",
"qt",
"pyqt",
""
] |
We have a central module that calls an init() function when it loads:
```
import x
import y
import z
def init():
....
init()
if __name__ == '__main__':
...
```
This gets pulled into every one of our application modules with a statement like:
```
if __name__ == '__main__':
import central_module as b
b.do_this()
b.do_that()
```
init() does a number of bad things, notably establishing connections to databases. Because of this, it breaks any unit tests, and the modules I write expect the usual behavior where you import the module and explicitly invoke any initialization.
I've implemented a work-around by adding an INITIALIZE variable up top:
```
#INITIALIZE = True
INITIALIZE = False # for DEV/test
if INITIALIZE:
init()
```
But this requires me to edit that file in order to run my tests or do development, then revert the change when I'm ready to commit and push.
For political reasons, I haven't gotten any traction on just fixing it, with something like:
```
import central_module as b
...
b.init()
b.do_this()
b.do_that()
```
Is there some way that I can more transparently disable that call when the module loads? The problem is that by the time the module is imported, it's already tried to connect to the databases (and failed).
Right now my best idea is: I could move the INITIALIZE variable into a previous import, and in my tests import that, set INITIALIZE to False, then import central\_module.
I'll keep working on the political side (argh), but was wondering if there's a better work-around I could drop in place to disable that init call without disrupting all the existing scripts. | A simple change to your `INITIALIZE` hack would be to have it come from an environment variable. Then you never have to modify code to run those tests, and there's less hackage until you can actually fix the bad init.
```
INITIALIZE = os.environ.get('DO_TERRIBLE_INITIALIZE', False)
if INITIALIZE:
....
```
And the value can be anything, as long as it's set.
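As a quick demonstration (my addition) of why any value works: environment variables are strings, and every non-empty string is truthy in Python, even `'False'`:

```python
import os

# The variable name is the one from the snippet above. Setting it to the
# string 'False' still enables initialization, because the *string* is
# non-empty and therefore truthy.
os.environ['DO_TERRIBLE_INITIALIZE'] = 'False'
INITIALIZE = os.environ.get('DO_TERRIBLE_INITIALIZE', False)
print(bool(INITIALIZE))  # True
```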
```
export DO_TERRIBLE_INITIALIZE=ohgodwhy
``` | Here's my idea, evil as it may be:
1. Open the module as a file and read in the source.
2. Use `ast.parse()` to parse it into a AST.
3. Walk the AST until you find the offending function call, and prune it.
4. Evaluate the modified AST and inject it into a new module created by `imp.new_module()`, and stuff it into `sys.modules`.
5. Commit the crazy hack with a commit message saying that it shouldn't be necessary at all except for the fact that some twit wouldn't know proper initialization if it bit them in the ass. | Work-around for module that performs (bad) initialization | [
"",
"python",
""
] |
I've pored over the Django docs regarding the contenttypes framework several times, and I simply don't understand it well enough to implement generic relations in my project. I've looked for online examples or tutorials regarding the matter, but I can't find a single one. Call me dumb, but I need some help on this one (please don't answer by simply linking to the docs). Based on the lack of resources online, I believe if you answer this question with a thorough example, your answer may be the most helpful example online to date regarding django generic relations (bonus!).
So, my question is: can someone show a simple example of the models and maybe a couple lines of code showing how to interact with instances of a generic model?
---
As inspiration, here is what I believe would be a very common situation:
*A site has media items that are largely treated the same, but are slightly different. For example, let's say there are image and video items, and users can "like" an item or "comment" on an item. The likes and comments should be treated the same, regardless of whether they are posted on an image or video item. So, if there is an ItemView for viewing an image or a video in a user's album, the following kinds of calls would be possible : `mediaitem.comments.all()` or `len(mediaitem.likes.all())` or `comment.user_who_commented`, without having to know which kind of media item it is (image or video).*
I believe you would need **six models** for this:
* Abstract `MediaItem` class
* Two different types of media item: `ImageItem` and `VideoItem`
* Abstract `MediaItemActions` class
* Two different types of actions that can be done towards media items: `Like` and `Comment`
---
If you know how to use this Django feature, please show us a full example! I feel like it would be an extremely powerful tool and am aching to use it in my application. The more explicit, the better. | Your use case sounds very similar to the (now deprecated) [Django comments framework](https://github.com/django/django-contrib-comments). If you check out the [models](https://github.com/django/django-contrib-comments/blob/master/django_comments/models.py), you'll see how to use a generic relation in `BaseCommentAbstractModel`--note that you need all three fields, a `ForeignKey` to `ContentType`, a field to hold the objects' `pk`s, and the `GenericForeignKey` field.
As for how to query for objects by `GenericForeignKey`, you can see some examples in the [template tags](https://github.com/django/django-contrib-comments/blob/master/django_comments/templatetags/comments.py) in that project. See for example the `get_query_set` method in `BaseCommentNode`, which retrieves comments by querying on the content type and pk of the target object.
```
def get_query_set(self, context):
ctype, object_pk = self.get_target_ctype_pk(context)
if not object_pk:
return self.comment_model.objects.none()
qs = self.comment_model.objects.filter(
content_type = ctype,
object_pk = smart_text(object_pk),
site__pk = settings.SITE_ID,
)
``` | I actually have a very similar situation on one of my projects, with various media types.
```
class TaggedItem(models.Model):
tag = models.SlugField()
content_type = models.ForeignKey(ContentType)
object_id = models.PositiveIntegerField()
content_object = generic.GenericForeignKey('content_type', 'object_id')
class ReviewedItem(models.Model):
content_type = models.ForeignKey(ContentType)
object_id = models.PositiveIntegerField()
content_object = generic.GenericForeignKey('content_type', 'object_id')
review = models.ForeignKey("Review")
class CreativeWork(models.Model):
#other fields
keywords = generic.GenericRelation("TaggedItem",null=True, blank=True, default=None)
reviews = generic.GenericRelation("ReviewedItem",null=True, blank=True, default=None)
class MediaObject(CreativeWork):
#fields
class VideoObject(MediaObject):
#fields
class AudioObject(MediaObject):
#fields
```
Every Video or Audio is a MediaObject, which is a CreativeWork.
CreativeWorks have a GenericRelation to tags and Reviews. So now anything can be tagged or reviewed.
All you need is for the 'action' to have a ForeignKey to ContentType.
Then add a GenericRelation to your model. I actually found the Django docs to be pretty helpful :)
But if not, hope this helps. | Django: Example of generic relations using the contenttypes framework? | [
"",
"python",
"django",
"django-models",
"django-contenttypes",
"generic-foreign-key",
""
] |
Suppose I have the directory `pathname`. I can print all the files in that directory by doing
```
glob.glob('*')
```
but that doesn't print out the files in the subdirectories (such as `dir/file1.txt`)
Is it possible to print out all files in this directory as well as all subdirectories? | It's only a little more difficult using `os.walk`
```
import os
for dirpath, dirnames, filenames in os.walk('.'):
for f in filenames:
print os.path.join(dirpath, f)
```
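As an aside (not in the original answer), on Python 3.5+ `glob.glob` can recurse by itself with the `**` pattern and `recursive=True`. A self-contained sketch using a throwaway directory tree:

```python
import glob
import os
import tempfile

# Build a tiny tree so the example is self-contained.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "dir"))
open(os.path.join(root, "file0.txt"), "w").close()
open(os.path.join(root, "dir", "file1.txt"), "w").close()

# "**" matches zero or more directory levels when recursive=True.
found = glob.glob(os.path.join(root, "**", "*.txt"), recursive=True)
print(len(found))  # 2: one file in the root, one in the subdirectory
```

The `os.walk` approach above is still the most portable across Python versions.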
You can also express this as a generator expression
```
for filename in (os.path.join(dp, f) for dp, dn, fn in os.walk('.') for f in fn):
# do something with filename
``` | ```
import os
top = 'F:/some/path/'
for root, dirs, files in os.walk(top, topdown=False):
for name in files:
print name
``` | Print out all files in current directory and subdirectories in Python | [
"",
"python",
"file",
""
] |
I am trying to find all div tags with an id that begins with "post-{a lot of digits here}".
I tried something like this:
```
tree.xpath("//div[starts-with(@id,'post-[0-9]')]")
```
But it does not really work. Is there a way to do this without importing regular expressions in Python? | **[XPath 1.0](http://www.w3.org/TR/xpath) does not support regular expressions**, i.e. the function `starts-with` does not support regular expressions.
Lxml does not support XPath 2.0. You have the following three options:
* Switch to a processor who is able to handle XPath 2.0. You can then use the [fn:matches()](http://www.w3.org/TR/xpath-functions/#func-matches) function.
* Use an XPath 1.0-compliant solution. This is rather ugly, but it works and may in some circumstances be the easiest solution. However, this is not a general solution! It will replace the digits in `@id` with a `-` and match against this, so it would also return true if the original `id` was something like `post--`. Use a character which you know will not occur at this position.
```
tree.xpath("//div[starts-with(translate(@id, '0123456789', '----------'), 'post--')]")
```
* lxml supports the [EXSLT namespaces](http://www.exslt.org/regexp/functions/test/index.html) and you can use the regex functions from there. In my opinion this is the best solution.
```
regexpNS = "http://exslt.org/regular-expressions"
r = tree.xpath("//div[re:test(@id, '^post-[0-9]')]", namespaces={'re': regexpNS})
If you just want to check for an @id that starts with 'post-', the XPath `//div[starts-with(@id,'post-')]` is enough. But if you are looking for an @id that must match 'post-' followed by digits, then you must use the `matches()` function. | lxml find <div> with id='post-[0-9]*' | [
"",
"python",
"xpath",
"lxml",
""
] |
Given a python dictionary and an integer `n`, I need to access the `n`th key. I need to do this repeatedly many times in my project.
I have written a function which does this:
```
def ix(self,dict,n):
count=0
for i in sorted(dict.keys()):
if n==count:
return i
else:
count+=1
```
But the problem is that if the dictionary is huge, the time complexity increases when used repeatedly.
Is there an efficient way to do this? | I guess you wanted to do something like this, but as dictionary don't have any order so the order of keys in `dict.keys` can be anything:
```
def ix(self, dct, n): #don't use dict as a variable name
try:
return list(dct)[n] # or sorted(dct)[n] if you want the keys to be sorted
except IndexError:
print 'not enough keys'
``` | `dict.keys()` returns a list, so all you need to do is `dict.keys()[n]`
But a dictionary is an unordered collection, so the nth element does not make any sense in this context.
> Note: Indexing `dict.keys()` is not supported in python3 | Extract the nth key in a python dictionary? | [
"",
"python",
"performance",
"dictionary",
""
] |
I'm new to SQL Server. I'm trying to convert my Access database to SQL Server.
Here is my problem. I have a table:
```
CarID CustomerName Date Replied StatusID Offer
1 Karl 05-01-2013 1 $1000
1 Karl 05-02-2013 1 $1000
1 Kathy 05-03-2013 1 $1500
1 Karl 05-04-2013 2 $1000
2 Karl 05-04-2013 1 $2000
```
I want to display my data as
```
CarID CustomerName Date Replied StatusID Offer
1 Kathy 05-03-2013 1 $1500
1 Karl 05-04-2013 2 $1000
2 Karl 05-04-2013 1 $2000
```
In MS Access
`I used SELECT CarID, Last(CustomerName) as CustomerName, Max(Date) as Date Replied, Last(StatusID) as StatusID, Last(Offer) as Offer from viewCarOffers GROUP BY CarID`
The problem with SQL Server is that it doesn't have a Last function. I don't have any idea how to get the same result using SQL Server.
I hope someone can help. Thanks. | Try
```
SELECT o.CarID,
o.CustomerName,
o.Date 'Date Replied',
o.StatusID,
o.Offer
FROM
(
SELECT CarID, CustomerName, MAX(Date) Date
FROM viewCarOffers
GROUP BY CarID, CustomerName
) q JOIN viewCarOffers o
ON o.CarID = q.CarID
AND o.CustomerName = q.CustomerName
AND q.Date = o.Date
ORDER BY CarID, CustomerName
```
Output:
```
| CARID | CUSTOMERNAME | DATE REPLIED | STATUSID | OFFER |
------------------------------------------------------------------------
| 1 | Karl | May, 04 2013 00:00:00+0000 | 2 | $1000 |
| 1 | Kathy | May, 03 2013 00:00:00+0000 | 1 | $1500 |
| 2 | Karl | May, 04 2013 00:00:00+0000 | 1 | $2000 |
```
Here is **[SQLFiddle](http://sqlfiddle.com/#!3/a4848/8)** demo | Try this
```
SELECT
CarId,
CustomerName,
DATE,
StatusId,
Offer
FROM
(
SELECT
ROW_NUMBER() OVER ( PARTITION BY carid, CustomerName ORDER BY DATE DESC) ROW, *
FROM viewCarOffers
) vco
WHERE ROW = 1
ORDER BY CarId, offer DESC
``` | SQL Server 2005 Getting the last record of a column | [
"",
"sql",
"sql-server",
"sql-server-2005",
""
] |
I have the following query
```
SELECT t.client_id,
count(t.client_id) as trips_xdays
FROM trips t
JOIN users u ON t.client_id = u.usersid
WHERE t.city_id = 12
AND t.status = 'completed'
AND ( date_trunc('day',t.dropoff_at) - date_trunc('day',u.creationtime) <= 30 days, 0:00:00)
GROUP BY t.client_id
```
and I get an error when I try to constrain the query by <= 30 days, 0:00:00. However, I thought that would be the correct format, since I queried
```
select date_trunc('day',t.dropoff_at) - date_trunc('day',u.creationtime)
from trips t
inner join users u ON t.client_id = u.usersid
```
by itself and it came back with responses in the format of 30 days, 0:00:00
Any suggestions on how to correctly query so I can constrain the query on <= 30 days? | Assuming we are dealing with data type `timestamp`, you can simplify:
```
SELECT t.client_id, count(t.client_id) AS trips_xdays
FROM trips t
JOIN users u ON t.client_id = u.usersid
WHERE t.city_id = 12
AND t.status = 'completed'
AND t.dropoff_at::date < u.creationtime::date + 30
GROUP BY 1;
```
A simple cast to date is shorter and you can just add `integer` to `date`.
Or for a slightly different result and faster execution:
```
...
AND t.dropoff_at < u.creationtime + interval '30 days'
```
The last form can more easily use a plain index. And it measures 30 days *exactly*. | It looks like I was simply forgetting quotes. The query in its working order is:
```
SELECT t.client_id,
count(t.client_id) as trips_xdays
FROM trips t
JOIN users u ON t.client_id = u.usersid
WHERE t.city_id = 12
AND t.status = 'completed'
AND ( date_trunc('day',t.dropoff_at) - date_trunc('day',u.creationtime) <= '30 days, 0:00:00')
GROUP BY t.client_id
``` | Subtracting dates in the WHERE clause | [
"",
"sql",
"postgresql",
"datetime",
""
] |
Apologies if there is already an answer out there for this, I couldn't find it anywhere!
I want to create an SQL query (in Oracle) that displays a list of all A, B, C rows (example below) where there is more than 1 count of D, including nulls.
Say I have 5 columns:
```
A B C D E
1 1 100 A 1
1 1 100 2
1 1 200 A 3
1 1 200 1
2 2 100 A 2
2 2 100 3
2 2 100 B 1
2 2 100 C 2
```
The blanks are null.
I want the following results back, ignoring E altogether:
```
A B C count
1 1 100 2
1 1 200 2
2 2 100 4
```
The problem I have currently is that if I use the following query, it doesn't count the nulls:
```
SELECT A, B, C, count(D)
FROM <TABLE>
GROUP BY A, B, C HAVING COUNT(D) > 1
```
I know that count(\*) does take into account nulls but I have other columns in my table that I don't want to include in my query. | Simple [`COUNT(*)`](http://docs.oracle.com/cd/B28359_01/server.111/b28286/functions032.htm#i82697)` should do it
```
SELECT A, B, C, COUNT(*) count
FROM table1
GROUP BY A, B, C
ORDER BY A, B, C
```
Output:
```
| A | B | C | COUNT |
-----------------------
| 1 | 1 | 100 | 2 |
| 1 | 1 | 200 | 2 |
| 2 | 2 | 100 | 4 |
```
Here is **[SQLFiddle](http://sqlfiddle.com/#!4/786c4/2)** demo
`COUNT(*)` just counts the rows therefore it doesn't care for `NULL`s. On the other hand when you specify the column it will count only rows with non-null values in that column. | If you just want a count, and be sure that it counts every row even if you have no column to count that is reliably not null, just use `SUM(1)` or `COUNT(*)`instead of count on the column;
```
SELECT A, B, C, SUM(1) dcount1, COUNT(*) dcount2
FROM Table1
GROUP BY A, B, C
HAVING SUM(1) > 1
```
This counts every row, no matter if any column is NULL.
[An SQLfiddle to test with](http://sqlfiddle.com/#!4/b756e/6). | SQL count nulls and non nulls | [
"",
"sql",
"oracle",
""
] |
I'm writing a Python function to do the following: add the numbers from each line, so I can then find the average. This is what my file looks like:
```
-2.7858521
-2.8549764
-2.8881847
2.897689
1.6789098
-0.07865
1.23589
2.532461
0.067825
-3.0373958
```
Basically I've written a program with a for loop over each line, incrementing the line counter and converting each line to a float value.
```
counterTot = 0
with open('predictions2.txt', 'r') as infile:
for line in infile:
counterTot += 1
i = float(line.strip())
```
Now is the part where I get a little stuck:
```
totalSum =
mean = totalSum / counterTot
print(mean)
```
As you can tell I'm new to Python, but I find it very handy for text analysis work, so I'm getting into it.
**Extra function**
I was also looking into an extra feature, but it should be a separate function from the one above.
```
counterTot = 0
with open('predictions2.txt', 'r') as infile:
for line in infile:
counterTot += 1
i = float(line.strip())
if i > 3:
i = 3
elif i < -3:
i = -3
```
As you can see from the code, the function decides if a number is bigger than 3 and, if so, makes it 3; if a number is smaller than -3, it makes it -3. But I'm trying to output this to a new file so that it keeps its structure intact. For both situations I would like to keep the decimal places. I can always round the output numbers myself; I just need the numbers intact. | You can use `enumerate` here:
```
with open('predictions2.txt') as f:
tot_sum = 0
for i,x in enumerate(f, 1):
val = float(x)
#do something with val
tot_sum += val #add val to tot_sum
print tot_sum/i #print average
#prints -0.32322842
``` | You can do this without loading the elements into a list by cheekily using `fileinput` and retrieve the line count from that:
```
import fileinput
fin = fileinput.input('your_file')
total = sum(float(line) for line in fin)
print total / fin.lineno()
``` | python program to add all values from each line | [
"",
"python",
""
] |
Is there a native numpy way to convert an array of string representations of booleans, e.g.:
```
['True','False','True','False']
```
To an actual boolean array I can use for masking/indexing? I could do a for loop going through and rebuilding the array but for large arrays this is slow. | You should be able to do a boolean comparison, IIUC, whether the `dtype` is a string or `object`:
```
>>> a = np.array(['True', 'False', 'True', 'False'])
>>> a
array(['True', 'False', 'True', 'False'],
dtype='|S5')
>>> a == "True"
array([ True, False, True, False], dtype=bool)
```
or
```
>>> a = np.array(['True', 'False', 'True', 'False'], dtype=object)
>>> a
array(['True', 'False', 'True', 'False'], dtype=object)
>>> a == "True"
array([ True, False, True, False], dtype=bool)
``` | I've found a method that's even faster than DSM's, taking inspiration from Eric, though the improvement is best seen with smaller lists of values; at very large sizes, the cost of the iteration itself starts to outweigh the advantage of performing the truth testing during creation of the numpy array rather than after. I tested with both `is` and `==` (for situations where the strings are interned versus when they might not be, since `is` would not work with non-interned strings; as `'True'` is probably going to be a literal in the script, it should be interned). While my version with `==` was slower than with `is`, it was still much faster than DSM's version.
Test setup:
```
import timeit
def timer(statement, count):
return timeit.repeat(statement, "from random import choice;import numpy as np;x = [choice(['True', 'False']) for i in range(%i)]" % count)
>>> stateIs = "y = np.fromiter((e is 'True' for e in x), bool)"
>>> stateEq = "y = np.fromiter((e == 'True' for e in x), bool)"
>>> stateDSM = "y = np.array(x) == 'True'"
```
With 1000 items, the faster statements take about 66% the time of DSM's:
```
>>> timer(stateIs, 1000)
[101.77722641656146, 100.74985342340369, 101.47228618107965]
>>> timer(stateEq, 1000)
[112.26464996250706, 112.50754567379681, 112.76057346127709]
>>> timer(stateDSM, 1000)
[155.67689949529995, 155.96820504501557, 158.32394669279802]
```
For smaller string arrays (in the hundreds rather than thousands), the elapsed time is less than 50% of DSM's:
```
>>> timer(stateIs, 100)
[11.947757485669172, 11.927990253608186, 12.057855628259858]
>>> timer(stateEq, 100)
[13.064947253943501, 13.161545451986967, 13.30599035623618]
>>> timer(stateDSM, 100)
[31.270060799078237, 30.941749748808434, 31.253922641324607]
```
A bit over 25% of DSM's when done with 50 items per list:
```
>>> timer(stateIs, 50)
[6.856538342483873, 6.741083326021908, 6.708402786859551]
>>> timer(stateEq, 50)
[7.346079345032194, 7.312723444475523, 7.309259899921017]
>>> timer(stateDSM, 50)
[24.154247576229864, 24.173593700599667, 23.946403452288905]
```
For 5 items, about 11% of DSM's:
```
>>> timer(stateIs, 5)
[1.8826215278058953, 1.850232652068371, 1.8559381315990322]
>>> timer(stateEq, 5)
[1.9252821868467436, 1.894011299061276, 1.894306935199893]
>>> timer(stateDSM, 5)
[18.060974208809057, 17.916322392367874, 17.8379771602049]
``` | Numpy Convert String Representation of Boolean Array To Boolean Array | [
"",
"python",
"numpy",
""
] |
My query is
```
Select
NO,
Timestamp,
Event_type,
Comments
From TableA
Where NO = '12345';
```
but for `Event_type`, I only want to select when `event_type in ('ABC_XYZ','Comment_Change'`)
and `Comments` field must be like `Cause:%` (when `event_type = 'Comment_change'`)
How can I achieve this where condition? | Try:
```
Select
NO,
Timestamp,
Event_type,
Comments
From TableA
Where NO = '12345'
and (event_type = 'ABC_XYZ' or
(event_type = 'Comment_Change' and Comments like 'Cause:%'))
``` | ```
Select
NO,
Timestamp,
Event_type,
Comments
From TableA
Where NO = '12345'
and event_type = 'Comment_Change'
And Comments like 'Cause:%';
```
This seems to be what you asked for. | sql when field2 = 'something' then field3 like | [
"",
"sql",
"oracle",
"oracle11g",
""
] |
I need to extract the location and radius data from a large xml file that is formatted as below and store the data in 2-dimensional ndarray. This is my first time using Python and I can't find anything about the best way to do this.
```
<species name="MyHeterotrophEPS" header="family,genealogy,generation,birthday,biomass,inert,capsule,growthRate,volumeRate,locationX,locationY,locationZ,radius,totalRadius">
0,0,0,0.0,0.0,0.0,77.0645361927206,-0.1001871531330136,-0.0013358287084401814,4.523853439106942,234.14575280979898,123.92820420047076,0.0,0.6259920275663835;
0,0,0,0.0,0.0,0.0,108.5705297969604,-0.1411462759900182,-0.001881950346533576,1.0429122163754276,144.1066875513379,72.24884428367467,0.0,0.7017581019907897;
.
.
.
</species>
```
Edit:I mean "large" by human standards. I am not having any memory issues with it. | You essentially have CSV data in the XML text value.
Use [`ElementTree`](http://docs.python.org/2/library/xml.etree.elementtree.html) to parse the XML, then use [`numpy.genfromtxt()`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.genfromtxt.html) to load that text into an array:
```
from xml.etree import ElementTree as ET
import numpy
tree = ET.parse('yourxmlfilename.xml')
species = tree.find(".//species[@name='MyHeterotrophEPS']")
names = species.attrib['header']
array = numpy.genfromtxt((line.rstrip(';') for line in species.text.splitlines()),
delimiter=',', names=names)
```
Note the generator expression, with a [`str.splitlines()`](http://docs.python.org/2/library/stdtypes.html#str.splitlines) call; this turns the text of the XML element into a sequence of lines, which `.genfromtxt()` is quite happy to receive. We do remove the trailing `;` character from each line.
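In isolation, the clean-up that the generator expression performs looks like this (the strings here are illustrative, not the actual file contents):

```
text = "1,2,3;\n4,5,6;"
# Split into lines and strip the trailing ';' so each line is plain CSV.
cleaned = [line.rstrip(';') for line in text.splitlines()]
print(cleaned)  # ['1,2,3', '4,5,6']
```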
For your sample input (minus the `.` lines), this results in:
```
array([ (0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 77.0645361927206, -0.1001871531330136, -0.0013358287084401814, 4.523853439106942, 234.14575280979898, 123.92820420047076, 0.0, 0.6259920275663835),
(0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 108.5705297969604, -0.1411462759900182, -0.001881950346533576, 1.0429122163754276, 144.1066875513379, 72.24884428367467, 0.0, 0.7017581019907897)],
dtype=[('family', '<f8'), ('genealogy', '<f8'), ('generation', '<f8'), ('birthday', '<f8'), ('biomass', '<f8'), ('inert', '<f8'), ('capsule', '<f8'), ('growthRate', '<f8'), ('volumeRate', '<f8'), ('locationX', '<f8'), ('locationY', '<f8'), ('locationZ', '<f8'), ('radius', '<f8'), ('totalRadius', '<f8')])
``` | If your XML is just that `species` node, it's pretty simple, and Martijn Pieters has already explained it better than I can.
But if you've got a ton of `species` nodes in the document, and it's too large to fit the whole thing into memory, you can use [`iterparse`](http://docs.python.org/3.3/library/xml.etree.elementtree.html#xml.etree.ElementTree.iterparse) instead of `parse`:
```
import numpy as np
import xml.etree.ElementTree as ET
for event, node in ET.iterparse('species.xml'):
if node.tag == 'species':
name = node.attrib['name']
names = node.attrib['header']
csvdata = (line.rstrip(';') for line in node.text.splitlines())
array = np.genfromtxt(csvdata, delimiter=',', names=names)
# do something with the array.
```
This won't help if you just have one super-gigantic `species` node, because even `iterparse` (or similar solutions like a SAX parser) parses one entire node at a time. You'd need to find an XML library that lets you stream the *text* of large nodes, and off the top of my head, I can't think of any stdlib or popular third-party parsers that can do that. | How do I parse some of the data from a large xml file? | [
"",
"python",
"xml",
"python-2.7",
"xml-parsing",
""
] |
I am trying to create a program that checks if an equation produces a whole-number answer, but the equation produces floating-point numbers that won't compare equal to integers. When it gets to the first integer, which is supposed to be 390625, it prints it as 390625.0 and doesn't leave the while loop when it reaches that number.
I'm new to programming so please keep it simple.
```
from myro import *
from math import *
def main():
z = 3
a = 2
b = 2
x = 3
y = 3
lim = 25
c = (a**x + b**y)**(1.0/z)
while int(c) != c:
while z <= lim:
while a <= lim:
while b <= lim:
while x <= lim:
while y <= lim:
c = (a**x + b**y)**(1.0/z)
print a, b, c, x, y, z
y = y + 1
y = 3
print a, b, c, x, y, z
x = x + 1
x = 3
print a, b, c, x, y, z
b = b + 1
b = 3
print a, b, c, x, y, z
a = a + 1
a = 3
print a, b, c, x, y, z
z = z + 1
print "code cycle complete. no numbers meet criteria"
print str(a) + "^" + str(x) + " + " + str(b) + "^" + str(y) + " = " + str(c) + "^" + str(z)
main()
``` | I'm surprised by the fact that everyone jumped in to conclude that the issue is with floating point comparison. You guys should take a look at the full question/code before coming to a conclusion and rushing to answer.
Let's come back to the point. I'm not trying to explain the issues with floating point comparison. I'm not looking at the nested while loop. I'll answer considering the fact that the author needs to break the loop when the calculation results in a whole number.
Felis Vulpes,
You expect the loop to break when 'c' is a whole number. But your condition "int(c) != c" is not checked as often as you think.
1. This will be checked when entering the loop. At that time the value for "c" will be 2.51984209979
2. Next checking will happen only after all the loops inside are finished. At that time, value of c will be 25.7028456664
What you will have to do is to check the value of "c" every time you recalculate it.
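Also note that, because `c` comes from a floating-point root, a tolerance-based check is safer than `int(c) == c`. This helper is an illustration, not part of the original code:

```
def is_whole(value, tol=1e-6):
    # Compare against the nearest integer instead of truncating with int(),
    # so a result like 390624.99999999994 is still recognized as whole.
    return abs(value - round(value)) < tol

print(is_whole(390625.0))            # True
print(is_whole(390624.99999999994))  # True
print(is_whole(2.5198420997897464))  # False
```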
Your code may look like this
```
from myro import *
from math import *
def main():
z = 3
a = 2
b = 2
x = 3
y = 3
lim = 25
c = (a**x + b**y)**(1.0/z)
#while int(c) != c:
while z <= lim:
while a <= lim:
while b <= lim:
while x <= lim:
while y <= lim:
c = (a**x + b**y)**(1.0/z)
print a, b, c, x, y, z
if int(c) == c:
print str(a) + "^" + str(x) + " + " + str(b) + "^" + str(y) + " = " + str(c) + "^" + str(z)
return
y = y + 1
y = 3
print a, b, c, x, y, z
x = x + 1
x = 3
print a, b, c, x, y, z
b = b + 1
b = 3
print a, b, c, x, y, z
a = a + 1
a = 3
print a, b, c, x, y, z
z = z + 1
print "code cycle complete. no numbers meet criteria"
main()
``` | You have to be aware of the way floats are internally represented by your hardware. For example:
```
>>> x = 9999999.99
>>> y = 9999999.9900000002
>>> x == y
True
>>> x
9999999.9900000002
>>> y
9999999.9900000002
```
(this is Python 2.6, Intel CentOS-64bit; result might change depending on your architecture, but you get the idea)
That said, if your result happens to be `100.0`, sure, you'll say that's a whole number. What about `100.000000000000000000001`? Is that the real result of your equation, or some small deviation due to the way floats are represented in your computer's hardware?
You should read this: [Floating Point Arithmetic: Issues and Limitations](http://docs.python.org/2/tutorial/floatingpoint.html#floating-point-arithmetic-issues-and-limitations)
And perhaps consider using the [`decimal`](http://docs.python.org/2/library/decimal.html) package (with some performance tradeoff)
**Update**
If you use the [`decimal`](http://docs.python.org/2/library/decimal.html) package you can use the remainder operator `%` and the [`is_zero()`](http://docs.python.org/2/library/decimal.html#decimal.Decimal.is_zero) method. Example:
```
>>> from decimal import Decimal
>>> x = Decimal('100.00000000000001')
>>> y = Decimal('100.00000000000000')
>>> (x % 1).is_zero()
False
>>> (y % 1).is_zero()
True
``` | How to compare floating point numbers and integers | [
"",
"python",
"while-loop",
"integer",
"point",
"floating",
""
] |
Is there an elegant way to convert between `relativedelta` and `timedelta`?
The use case is getting user input ISO date. Python's `isodate` will return either `isodate.duration.Duration` or `datetime.timedelta`.
We need the features of `relativedelta` (per [What is the difference between "datetime.timedelta" and "dateutil.relativedelta.relativedelta" when working only with days?](https://stackoverflow.com/questions/12433233/what-is-the-difference-between-datetime-timedelta-and-dateutil-relativedelta) -- it does more), so we need to convert both of these types to a `relativedelta`. | Just take the total number of `seconds` and `microseconds`; that's all a `timedelta` object stores:
```
from datetime import timedelta
from dateutil.relativedelta import relativedelta

def to_relativedelta(tdelta):
return relativedelta(seconds=int(tdelta.total_seconds()),
microseconds=tdelta.microseconds)
>>> to_relativedelta(timedelta(seconds=0.3))
relativedelta(microseconds=+300000)
>>> to_relativedelta(timedelta(seconds=3))
relativedelta(seconds=+3)
>>> to_relativedelta(timedelta(seconds=300))
relativedelta(minutes=+5)
>>> to_relativedelta(timedelta(seconds=3000000))
relativedelta(days=+34, hours=+17, minutes=+20)
``` | ```
d = datetime.timedelta(...)
dateutil.relativedelta.relativedelta(seconds=d.total_seconds())
``` | Elegant way to convert python datetime.timedelta to dateutil.relativedelta | [
"",
"python",
"datetime",
"python-datetime",
"python-dateutil",
""
] |
I'm unable to import my module for testing in the way that I'd like. I'm running all of this in a virtualenv on 2.7.2
I have a directory structure like
```
/api
/api
__init__.py
my_module.py
/tests
my_module_test.py
```
I have my PYTHONPATH set to /Path/api/. I CD into /Path/api and run the following
```
py.test tests/my_module_test.py
```
It does not work in the following case:
1. When I have the following at the top of my\_module\_test.py `from api.my_module import my_function`
It does work in the following case:
1. When I have the following at the top of my\_module\_test.py `from my_module import my_function`
Why am I not able to import my module as in case 1? | I use **PYTHONPATH** as
```
PYTHONPATH=`pwd` py.test tests/my_module_test.py
``` | From the py.test documentation, you should install your package first:
```
pip install -e .
``` | Module Import Error running py.test with modules on Path | [
"",
"python",
"python-2.7",
"pythonpath",
"pytest",
""
] |
I have a table which I group by one column, and then I output the different values and the count of these values.
```
activity | sum
-------------
Form | 1
Login | 4
Reg | 3
```
here is an example-code: <http://sqlfiddle.com/#!2/c6faf/2/0>
But I want the different values to become the column names, with one row of values (the counts),
like this:
```
Form | Login | Reg
------------------
1 | 4 | 3
```
I tried the PIVOT operation, but I can't get it working. What am I doing wrong?
here is my code: <http://sqlfiddle.com/#!2/c6faf/35/0>
thanks in advance!
br | Try this:
```
SELECT
SUM(IF(activity = 'Form',1,0)) as 'Form',
SUM(IF(activity = 'Login',1,0)) as 'Login',
SUM(IF(activity = 'Reg',1,0)) as 'Reg'
FROM
tbl_tracking
WHERE
id = 141
AND variant = 2
```
Sql Fiddle [here](http://sqlfiddle.com/#!2/c6faf/22) | Here you go
```
SELECT
MAX(CASE activity WHEN 'Form' THEN mysum ELSE NULL END) AS Form,
MAX(CASE activity WHEN 'Login' THEN mysum ELSE NULL END) AS Login,
MAX(CASE activity WHEN 'Reg' THEN mysum ELSE NULL END) AS Reg
FROM (
SELECT activity, COUNT(activity) AS mysum FROM tbl_tracking
WHERE id = 141 AND variant = 2
GROUP BY activity
)sq
```
But in my opinion such formatting should be done in application layer, not in database layer. | mysql: column values as column name in select result | [
"",
"mysql",
"sql",
"database",
""
] |
I have a problem: how do I remove the spaces between ")", "(" and "/" in SQL? I just want to remove the spaces, NOT the text. How can I do that?
```
For example:-
Sek. 175 (1) (a)/(b) atau Sek. 187B (1) (a)/(b)
AND i want the text to be like this:
Sek.175(1)(a)/(b) atau Sek.187B(1)(a)/(b)
This is my query:
SELECT distinct mhn.id_mohon,
'oleh sebab (' || ku.ruj_kanun || ')' ruj_kanun
FROM mohon mhn, kod_urusan ku, mohon_ruj_luar mrl, pguna pg,
kod_perintah kp
WHERE mhn.id_mohon = :p_id_mohon
AND mhn.kod_urusan = ku.kod(+)
AND mhn.id_mohon = mrl.id_mohon(+)
AND mrl.kod_perintah = kp.kod(+)
AND mhn.dimasuk = pg.id_pguna(+)
AND mhn.kod_urusan = 'PHKK'
```
Anyone know about this? | ```
replace(
regexp_replace(
regexp_replace(
regexp_replace(
string,
'\s([a-zA-Z]+($|\W))', chr(0)||'\1'
),
'((^|\W)[a-zA-Z]+)\s', '\1'||chr(0)
),
'\s'),
chr(0), ' ')
```
[fiddle](http://www.sqlfiddle.com/#!4/7b16d/1) | Definitely not the most effective but this should work
```
REPLACE(REPLACE(column, ' ', ''), 'atau', ' atau ')
``` | How to remove and replace ")" space? | [
"",
"sql",
"oracle",
""
] |
I'm having a problem when submitting a form containing a dynamically populated SelectField. For some reason when Flask tries to validate the CSRF token it always fails when the SelectField is in the form. When I remove the SelectField from the form, it validates the CSRF token successfully.
Has anyone come across this behavior?
**EDIT**
Form:
```
class AddToReportForm(Form):
selectReportField = SelectField(u'Reports',choices=[('test1','test')])
def __init__(self, *args, **kwargs):
"""
Initiates a new user form object
:param args: Python default
:param kwargs: Python default
"""
Form.__init__(self, *args, **kwargs)
def validate(self,id_list):
rv = Form.validate(self)
if not rv:
print False
#Check for the CSRF Token, if it's not there abort.
return False
print True
return True
```
Jinja2:
```
<form method=post name="test">
{{ form.hidden_tag()}}
{{ form.selectReportField }}
<a href="#" onclick="$(this).closest('form').submit()" class="button save">Add to report</a>
</form>
```
Rendering:
```
form = AddToReportForm()
return render_template('random', title='add reports', form=form)
``` | I still can't see any connection between SelectField and CSRF. The `validate` method is a little suspicious and the extra argument would trip the following test case, but as it stands this seems to work just fine:
```
from flask import Flask, render_template_string
from flaskext.wtf import Form, SelectField
app = Flask(__name__)
app.debug = True
app.secret_key = 's3cr3t'
class AddToReportForm(Form):
selectReportField = SelectField(u'Reports', choices=[('test1', 'test')])
@app.route('/test', methods=['GET', 'POST'])
def test():
form = AddToReportForm()
if form.validate_on_submit():
print 'OK'
return render_template_string('''\
<form method=post name="test">
{{ form.hidden_tag()}}
{{ form.selectReportField }}
<input type="submit">
</form>
''', form=form)
app.run(host='0.0.0.0')
``` | Where are you setting SECRET\_KEY? It must be available either in the Form class:
```
class AddToReportForm(Form):
selectReportField = SelectField(u'Reports',choices=[('test1','test')])
SECRET_KEY = "myverylongsecretkey"
def __init__(self, *args, **kwargs):
"""
Initiates a new user form object
:param args: Python default
:param kwargs: Python default
"""
Form.__init__(self, *args, **kwargs)
def validate(self,id_list):
rv = Form.validate(self)
if not rv:
print False
#Check for the CSRF Token, if it's not there abort.
return False
return True
```
or in the application bootstrap:
```
app = Flask(__name__)
app.secret_key = 'myverylongsecretkey'
```
or in the constructor:
```
form = AddToReportForm(secret_key='myverylongsecretkey')
return render_template('random',title='add reports',form=form)
``` | Flask-WTF SelectField with CSRF protection enabled | [
"",
"python",
"flask",
"csrf",
""
] |
I am just a beginner in Django. I use PyDev/Eclipse on Windows 8. First I wrote a "Hello World" program and displayed the string in the browser, but when I changed the code, the change did not appear in the output. Whatever I change, nothing changes in the output. But when I close Eclipse, shut down the computer, restart, and then change the program and run it, the output changes. So to change my program further I again need to restart my computer. What is happening? | You can run it by using the shortcut key [Ctrl + Shift + F9] or by clicking the button inside the blue circle below:
 | try to restart development server | Unable to write code in pydev for django project | [
"",
"python",
"django",
""
] |
Given a set of locations and a single location, find the location from the set which is closest to the single location. It is not about finding a path through nodes; it's about distance in a bird's-eye view.
The locations are a property of a 'node' (it's for a Finite Element software extension). The problem is: this takes too friggin' long, and I'm looking for something quicker. One user has to call this function up to 500 times (with a different single location) on a set of 1 million locations (the set stays the same).
I'd rather not limit the set before doing this calculation; I don't have to query a database or anything. I feel this simple arithmetic should be done in a few ms anyway; I don't get why it takes so long.
```
# excerpt of how LocationByNodeId looks like. 40k keys is a small model, can contain up to a million keys.
node_location_by_nodeId = {43815: (3.2835714285714266, -1.8875000000000068, 0.23571428571420952), 43816: (3.227857142857142, -1.8875000000000068, 0.23571428571421035)}
location_in_space=(1,3,7)
def node_closest_to_location_in_space(location_in_space):
global node_location_by_nodeId
distances = {}
for NodeId in node_location_by_nodeId:
NodeLocation = node_location_by_nodeId[NodeId]
distances[NodeId] = ((NodeLocation[0] - location_in_space[0])**2 +
(NodeLocation[1] - location_in_space[1])**2 +
(NodeLocation[2] - location_in_space[2])**2)
return min(distances, key=distances.get) # I don't really get this statement, i got it from here. Maybe this one is slow?
node_closest_to_location_in_space(location_in_space)
```
edit: solution taken from answers below reduced runtime to 35% of original runtime in big data set (400 calls over a set of 1.2million).
```
closest_node = None
closest_distance = 1e100 # An arbitrary, HUGE, value
x,y,z = location_in_space[:3]
for NodeId, NodeLocation in LocationByNodeId.iteritems():
distance = (NodeLocation[0] - x)**2 + (NodeLocation[1] - y)**2 + (NodeLocation[2] - z)**2
if distance < closest_distance:
closest_distance = distance
closest_node = NodeId
return closest_node
``` | Indexing into your location argument takes time, and location doesn't change for all your million nodes, so lift these invariants out of the for loop:
```
for NodeId, NodeLocation in node_location_by_nodeId.iteritems():
distance = ((NodeLocation[0] - location_in_space[0])**2 +
(NodeLocation[1] - location_in_space[1])**2 +
(NodeLocation[2] - location_in_space[2])**2)
if distance <= closest_distance:
closest_distance = distance
closest_node = NodeId
```
becomes:
```
x,y,z = location_in_space
for NodeId, NodeLocation in node_location_by_nodeId.iteritems():
distance = ((NodeLocation[0] - x)**2 +
(NodeLocation[1] - y)**2 +
(NodeLocation[2] - z)**2)
if distance <= closest_distance:
closest_distance = distance
closest_node = NodeId
```
Now these become simple (and faster) local value references.
You can also try replacing your distance calculation with calls to `math.hypot`, which is implemented in fast C code:
```
from math import hypot
x,y,z = location_in_space
for NodeId, NodeLocation in node_location_by_nodeId.iteritems():
distance = hypot(hypot((NodeLocation[0] - x), (NodeLocation[1] - y)),(NodeLocation[2] - z))
if distance <= closest_distance:
closest_distance = distance
closest_node = NodeId
```
(`hypot` is written to only do 2D distance calculation, so to do 3D you have to call `hypot(hypot(xdist,ydist),zdist)`.) | You cannot run a simple linear search on an unsorted dict and expect it to be fast (at least not very fast).
There are so many algorithms that help you tackle this problem in a much more optimized way.
An [R-Tree](http://en.wikipedia.org/wiki/R-tree), as suggested, is the perfect data structure to store your locations.
You can also look for solutions in this wikipedia page: [Nearest Neighbor Search](http://en.wikipedia.org/wiki/Nearest_neighbor_search) | Given a set of locations and a single location, find the closest location from the set to the single | [
"",
"python",
"performance",
"ironpython",
""
] |
How to get user profile image using Twython?
I see `show_user()` method, but instantiating Twython with api key and secret + oauth token and secret, and calling this method returns 404:
`TwythonError: Twitter API returned a 404 (Not Found), Sorry, that page does not exist`.
Calling same method from Twython instantiated w.o api/oauth keys returns 400: `TwythonAuthError: Twitter API returned a 400 (Bad Request), Bad Authentication data`.
I also tried to `GET` user info from `https://api.twitter.com/1.1/users/show.json?screen_name=USERSCREENNAME`, and got 400 as well.
I would appreciate a working example of an authenticated request to the Twitter API 1.1; I can't find one in the Twitter API reference. | You need to call the show_user method with the screen_name argument:
```
t = Twython(app_key=settings.TWITTER_CONSUMER_KEY,
app_secret=settings.TWITTER_CONSUMER_SECRET,
oauth_token=oauth_token,
oauth_token_secret=oauth_token_secret)
print t.show_user(screen_name=account_name)
``` | I solved my issue in the following way:
```
api = 'https://api.twitter.com/1.1/users/show.json'
args = {'screen_name': account_name}
t = Twython(app_key=settings.TWITTER_CONSUMER_KEY,
app_secret=settings.TWITTER_CONSUMER_SECRET,
oauth_token=token.token,
oauth_token_secret=token.secret)
resp = t.request(api, params=args)
```
this returns a JSON response, see the [twitter docs](https://dev.twitter.com/docs/api/1.1/get/users/show). So in my case, `resp['profile_image_url_https']` gives the URL of the user profile image in Twitter's normal size, which is 48px by 48px. | Twitter API/Twython - show user to get user profile image | [
"",
"python",
"twitter",
"twython",
""
] |
I already create & use prepared statements via libpq (PostgreSQL). I am wondering: is there a way to delete a prepared statement without disconnecting from the database? Or is the best way to achieve this to reconnect & re-prepare?
I am using the libpq of PostgreSQL version 8.4. I searched the 9.2 documentation but could not find anything related to this... | According to the [documentation](http://www.postgresql.org/docs/current/static/libpq-exec.html), `DEALLOCATE` is the only way to delete a prepared statement, emphasis added:
> Prepared statements for use with PQexecPrepared can also be created by
> executing SQL PREPARE statements. Also, although **there is no libpq
> function for deleting a prepared statement**, the SQL DEALLOCATE
> statement can be used for that purpose.
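In plain SQL, that round trip looks like this (the statement name and query are illustrative):

```
PREPARE fetch_row (int) AS SELECT * FROM mytable WHERE id = $1;
EXECUTE fetch_row(1);
DEALLOCATE fetch_row;  -- deletes the prepared statement, no reconnect needed
```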
Presumably they did not bother to expose a C function for this because this would be as simple as:
```
char query[NAMEDATALEN+12];
snprintf(query, sizeof(query), "DEALLOCATE %s", stmtName);
return PQexec(conn, query);
``` | As @Amit mentioned, sometimes Postgres throws "Prepared statement doesn't exist."
Unfortunately, there is no `DEALLOCATE IF EXISTS`, but a simple workaround is:
`DEALLOCATE ALL;`. | How to delete a prepared statement in PostgreSQL? | [
"",
"sql",
"database",
"postgresql",
"prepared-statement",
"libpg",
""
] |
[MySQL Docs](http://dev.mysql.com/doc/refman/5.0/en/insert-speed.html) say :
The size of the table slows down the insertion of indexes by log N, assuming B-tree indexes.
Does this mean that for the insertion of each new row, the insertion speed will be slowed down by a factor of log N, where N, I assume, is the number of rows? Even if I insert all rows in just one query? i.e.:
```
INSERT INTO mytable VALUES (1,1,1), (2,2,2), (3,3,3), .... ,(n,n,n)
```
Where n is ~70,000
I currently have ~1.47 million rows in a table with the following structure :
```
CREATE TABLE mytable (
`id` INT,
`value` MEDIUMINT(5),
`date` DATE,
PRIMARY KEY(`id`,`date`)
) ENGINE = InnoDB
```
When I insert in the above mentioned fashion in a transaction, the commit time taken is ~275 seconds. How can I optimize this, since new data is to be added everyday and the insert time will just keep on slowing down.
Also, is there anything apart from just queries that might help? maybe some configuration settings?
# Possible Method 1 - Removing Indices
I read that removing indices just before insert might help insert speed. And after inserts, I add the index again. But here the only index is primary key, and dropping it won't help much in my opinion. Also, while the primary key is *dropped* , all the select queries will be crippling slow.
~~I do not know of any other possible methods.~~
**Edit :** Here are a few tests on inserting ~60,000 rows in the table with ~1.47 mil rows:
**Using the plain query described above :** 146 seconds
**Using MySQL's LOAD DATA infile :** 145 seconds
**Using MySQL's LOAD DATA infile and splitting the csv files as suggested by David Jashi in his answer:** 136 seconds for 60 files with 1000 rows each, 136 seconds for 6 files with 10,000 rows each
**Removing and re-adding primary key:** key removal took 11 seconds, 0.8 seconds for inserting data, BUT 153 seconds for re-adding the primary key, taking ~165 seconds in total | If you want fast inserts, the first thing you need is proper hardware: a sufficient amount of RAM, an SSD instead of mechanical drives, and a rather powerful CPU.
Since you use InnoDB, what you want is to optimize it since default config is designed for slow and old machines.
[Here's a great read about configuring InnoDB](https://www.percona.com/blog/2007/11/01/innodb-performance-optimization-basics/)
After that, you need to know one thing - and that's how databases do their stuff internally, how hard drives work and so on. I'll simplify the mechanism in the following description:
A transaction means MySQL waits for the hard drive to confirm that it wrote the data. That's why transactions are slow on mechanical drives: they can do 200-400 input-output operations per second. Translated, that means you can get around 200 insert queries per second using InnoDB on a mechanical drive. Naturally, **this is a simplified explanation**, just to outline what's happening; **it's not the full mechanism behind transactions**.
Since a query, especially the one corresponding to size of your table, is relatively small in terms of bytes - you're effectively wasting precious IOPS on a single query.
If you wrap multiple queries (100 or 200 or more, there's no exact number, you have to test) in a single transaction and then commit it - you'll instantly achieve more writes per second.
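The difference is easy to demonstrate even outside MySQL; this sketch uses Python's stdlib `sqlite3` purely to illustrate batching many inserts into one commit (the table and values are made up):

```
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (id INTEGER, value INTEGER, date TEXT)")

rows = [(i, i, "2013-01-01") for i in range(1000)]

# One transaction for the whole batch: the commit (and its disk sync)
# happens once, instead of once per row.
with conn:
    conn.executemany("INSERT INTO mytable VALUES (?, ?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM mytable").fetchone()[0]
print(count)  # 1000
```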
Percona guys are achieving 15k inserts a second on a relatively cheap hardware. Even 5k inserts a second isn't bad. The table such as yours is small, I've done tests on a similar table (3 columns more) and I managed to get to 1 billion records without noticeable issues, using 16gb ram machine with a 240GB SSD (1 drive, no RAID, used for testing purposes).
TL;DR: follow the link above, configure your server, get an SSD, wrap multiple inserts in one transaction, and profit. And don't turn indexing off and then back on; it isn't always applicable, because at some point you will spend processing and IO time building the indexes again. | Dropping the index will surely help anyway. Also consider using `LOAD DATA`. You can find some comparison and benchmarks [here](http://www.mediabandit.co.uk/blog/215_mysql-bulk-insert-vs-load-data)
Also, when constructing the PRIMARY KEY, use fields that come first in your table, sequentially, i.e. switch the places of the second and third fields in the structure. | MySQL optimizing INSERT speed being slowed down because of indices | [
"",
"mysql",
"sql",
"insert",
"indexing",
""
] |
In NumPy, I can get the size (in bytes) of a particular data type by:
```
datatype(...).itemsize
```
or:
```
datatype(...).nbytes
```
For example:
```
np.float32(5).itemsize # 4
np.float32(5).nbytes # 4
```
I have two questions. First, is there a way to get this information *without creating an instance* of the datatype? Second, what's the difference between `itemsize` and `nbytes`? | You need an instance of the `dtype` to get the itemsize, but you shouldn't need an instance of the `ndarray`. (As will become clear in a second, `nbytes` is a property of the array, not the dtype.)
E.g.
```
print np.dtype(float).itemsize
print np.dtype(np.float32).itemsize
print np.dtype('|S10').itemsize
```
As far as the difference between `itemsize` and `nbytes`, `nbytes` is just `x.itemsize * x.size`.
E.g.
```
In [16]: print np.arange(100).itemsize
8
In [17]: print np.arange(100).nbytes
800
``` | Looking at the NumPy C source file, this is the comment:
```
size : int
Number of elements in the array.
itemsize : int
The memory use of each array element in bytes.
nbytes : int
The total number of bytes required to store the array data,
i.e., ``itemsize * size``.
```
So in NumPy:
```
>>> x = np.zeros((3, 5, 2), dtype=np.float64)
>>> x.itemsize
8
```
So `.nbytes` is a shortcut for:
```
>>> np.prod(x.shape)*x.itemsize
240
>>> x.nbytes
240
```
So, to get a base size of a NumPy array without creating an instance of it, you can do this (assuming a 3x5x2 array of doubles for example):
```
>>> np.float64(1).itemsize * np.prod([3,5,2])
240
```
However, important note from the NumPy help file:
```
| nbytes
| Total bytes consumed by the elements of the array.
|
| Notes
| -----
| Does not include memory consumed by non-element attributes of the
| array object.
``` | Size of data type using NumPy | [
"",
"python",
"numpy",
""
] |
If I have an integer like this:
123456789
I would like to return only 12345 - so trimming the number to a length of 5 digits.
Is this possible without first converting to a string? I can't see any built-in function to do this. | ```
import math
def five_digits(number):
    ndigits = int(math.log10(number)) + 1
    try:
        return number // int(10**(ndigits - 5))
    except ZeroDivisionError:
        return number
print five_digits(10000000)
print five_digits(12345678)
print five_digits(12345)
print five_digits(1234) #don't know what you want to do with this one...
``` | For example:
```
number = 123456789
while number >= 10**5:
    number = number // 10

# wrapped in a function for arbitrary n
def trim_to_n(number, n):
    negative = False
    if number < 0:
        negative = True
        number = number * -1
    limit = 10**n
    while number >= limit:
        number = number // 10
    if negative:
        return number * -1
    return number
```
**UPDATED:** For negative values :D
Hope this helps! :) | python trim number without converting to string first | [
"",
"python",
""
] |
I want to automatically generate a list of Tor exit nodes that can reach a certain IP address. I scoured the internet for a while and came across this piece of code from <http://sjoerd-hemminga.com/blog/2012/10/block-tor-exit-nodes-using-iptables/>
```
if [[ -z "$1" ]]; then
    echo Usage: $0 "<your host's ip>"
    exit 1
fi

hostip=$1

for i in $(wget https://check.torproject.org/cgi-bin/TorBulkExitList.py\?ip=$hostip -O- -q |\
           grep -E '^[[:digit:]]+(\.[[:digit:]]+){3}$'); do
    sudo iptables -A INPUT -s "$i" -j DROP
done
```
Can someone please help me understand this code better? Every time I try to run it, it produces errors.
Any alternative answers are welcome, but I would prefer them to be in Python. | ```
import os
import re
import sys
import urllib
if len(sys.argv) != 2:
    print "Usage {} <your host's ip>".format(sys.argv[0])
    sys.exit(1)

hostip = sys.argv[1]

u = urllib.urlopen('https://check.torproject.org/cgi-bin/TorBulkExitList.py?ip=' + hostip)
for ip in u:
    ip = ip.strip()
    if re.match('\d+(\.\d+){3}$', ip):
        #print ip
        os.system('sudo iptables -A INPUT -s "{}" -j DROP'.format(ip))
u.close()
``` | If you have configured Tor to have a control port and server descriptors...
```
ControlPort 9051
CookieAuthentication 1
UseMicrodescriptors 0
```
... then you can easily enumerate the exits using [stem](https://stem.torproject.org/)...
```
from stem.control import Controller
with Controller.from_port(port = 9051) as controller:
    controller.authenticate()

    for desc in controller.get_server_descriptors():
        if desc.exit_policy.is_exiting_allowed():
            print desc.address
```
That said, if your real question is 'how do I do something using every Tor exit' then please don't! Repeatedly making circuits is harmful to the Tor network; for more about this see [stem's FAQ](https://stem.torproject.org/faq.html#how-do-i-request-a-new-identity-from-tor). | Automatically generate list of tor exit nodes | [
"",
"python",
"tor",
""
] |
(I'm using Oracle 11.)
I have a query that looks like this to get me the unique report numbers ...
```
select distinct (REPORT_NUMBERS) from REPORTS
```
Works fine.
Now I want to sort them by their CREATION\_DATE field. I tried this ...
```
select distinct (REPORT_NUMBERS), CREATION_DATE from REPORTS order by CREATION_DATE asc
```
But I get duplicate REPORT\_NUMBERS. I tried this ...
```
select distinct (REPORT_NUMBERS) from REPORTS order by CREATION_DATE asc
```
But that gives me an "ORA-01791: not a SELECTed expression" error.
How can I get the unique list of report numbers ordered by creation date?
Any help is greatly appreciated!
Rob | Since you want them ordered in an ascending way, then this should do:
```
SELECT REPORT_NUMBERS, MIN(CREATION_DATE) MinCreationDate
FROM REPORTS
GROUP BY REPORT_NUMBERS
ORDER BY MinCreationDate
``` | You can GROUP to get MAX/MIN creation date:
```
select REPORT_NUMBERS, MIN(CREATION_DATE)
from REPORTS
GROUP BY REPORT_NUMBERS
```
Min() would be the earliest date, MAX() the most recent. | how to SQL query for unique values but include an extra date field? | [
"",
"sql",
"oracle",
""
] |
I have a string in Python which I am splitting using `split`. Then I access its length and want to pick up the 3rd value.
```
Q = S.split('/')
E = = len(Q)
R = Q[E[2]] // ERROR
```
Any idea how I can access the 3rd value after using `split`? | All of these answers are correct; I just wanted to add a bit of a breakdown of what is going wrong with the original question. When you make the following assignment:
```
E = len(Q)
```
`E` is being set to an integer. When you attempt to execute this:
```
R = Q[E[2]]
```
What you are actually trying to do is index into the `integer` `E`. Since `E` isn't an array (the technical error is `TypeError: 'int' object is not subscriptable`), this is where the program breaks (actually it would break on `E = = len(Q)`, but I'm assuming this is a typo in your question, since you point to an error happening after this line). | Your second line is not needed at all. You can do this a few ways
```
Q = S.split('/')
R = Q[2]
```
Another option is:
```
R = S.split('/')[2]
```
Example output:
```
>>> S = "Hey/there/you/are"
>>> Q = S.split('/')
>>> R = Q[2]
>>> R
'you'
>>> R = S.split('/')[2]
>>> R
'you'
``` | Splitting a string and picking a value | [
"",
"python",
"split",
""
] |
I have a situation where, at some point in my code, I want to trigger a number of timers. The code will keep running, but at some point these functions will trigger and remove an item from a given list, similar (though not exactly) to the code below. The problem is, I want these functions to wait a certain amount of time, and the only way I know how is to use sleep, but that stops all of the code, when I need the first function to keep running. So how can I set a function aside without making everything wait for it? If the answer involves threading, please know that I have very little experience with it and like explanations with pictures and small words.
```
from time import sleep
from datetime import datetime
def func():
    x = 1
    for i in range(20):
        if i % 4 == 0:
            func2()
            print("START", datetime.now())
            x += 1
        else:
            print("continue")

def func2():
    print("go")
    sleep(10)
    print("func 2--------------------------------------", datetime.now())
func()
``` | You need to use threading. <http://docs.python.org/2/library/threading.html>
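For instance, a minimal sketch with `threading.Timer` (the list, delay, and names here are made up for illustration):

```python
import threading

items = ["a", "b", "c"]

def expire(item):
    # runs later on its own thread; the main code keeps going meanwhile
    items.remove(item)

# fire-and-forget: remove "b" after 0.2 seconds
t = threading.Timer(0.2, expire, args=("b",))
t.start()

print("main code keeps running...")
t.join()      # only for the demo, so we wait to see the result
print(items)  # -> ['a', 'c']
```

Each timer fires on its own thread, so the main flow never blocks on the delay.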
You can start functions in their own threads. | I used a background function. It will run in the background, even if you go to another page.
---
You need to import **threading**, and also **time** to use `time.sleep()`:
```
import threading
import time
```
---
I had a function where I wanted to sleep code in the background, here is an example:
```
# This is the one that will sleep, but since you used args on the Thread,
# it will not make mainFunction sleep.
def backgroundFunction(obj):
    theObj = obj
    time.sleep(120)
    # updates the Food to 5 in 2 minutes
    obj["Food"] = 5
    return

def mainFunction():
    obj = {"Food": 4, "Water": 3}
    # Make sure there is a comma in args().
    t1 = threading.Thread(target=backgroundFunction, args=(obj,))
    t1.start()
    return
```
If you used `t1 = threading.Thread(target=backgroundFunction(obj))` it will not be in the background so don't use this, unless you want **mainFunction** to sleep also. | Python, sleep some code not all | [
"",
"python",
"sleep",
"wait",
""
] |
I have an object with a huge list somewhere down in the bowels. The object's dump (e.g. using `print(o)`) doesn't fit on one screen.
But if I could manage to have members that are 1-D lists printed comma-separated on one line, the object would be easier to inspect.
How can this be achieved?
---
EDIT:
Found out that the object I was trying to display had a `__repr__` that explicitly showed its array content in a vertical manner... So this question may be closed. | Found out that the object I was trying to display had a **repr** that explicitly showed its array content in a vertical manner... So this question may be closed.
On regular python lists, arrays *are* printed on one line.
[A snippet to show this.](http://codepad.org/R3BvN8vb) | Most of the time this gives a readable output:
```
import pprint
pprint.pprint(o)
``` | display long lists in member of member of... on one line in python | [
"",
"python",
"list",
"inspect",
""
] |
I have a NumPy array like:
```
a = np.arange(30)
```
I know that I can replace the values located at positions `indices=[2,3,4]` using for instance fancy indexing:
```
a[indices] = 999
```
But how to replace the values at the positions that are not in `indices`? Would be something like below?
```
a[ not in indices ] = 888
``` | I don't know of a clean way to do something like this:
```
mask = np.ones(a.shape,dtype=bool) #np.ones_like(a,dtype=bool)
mask[indices] = False
a[~mask] = 999
a[mask] = 888
```
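An alternative sketch of the same idea, building the mask directly with `np.in1d` (this assumes a 1-D array):

```python
import numpy as np

a = np.arange(30)
indices = [2, 3, 4]

mask = np.in1d(np.arange(a.size), indices)  # True exactly at `indices`
a[mask] = 999
a[~mask] = 888
```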
Of course, if you prefer to use the numpy data-type, you could use `dtype=np.bool_` -- there won't be any difference in the output; it's just a matter of preference really. | Only works for 1d arrays:
```
a = np.arange(30)
indices = [2, 3, 4]
ia = np.indices(a.shape)
not_indices = np.setxor1d(ia, indices)
a[not_indices] = 888
``` | Change the values of a NumPy array that are NOT in a list of indices | [
"",
"python",
"arrays",
"numpy",
"replace",
"multidimensional-array",
""
] |
Here's what I have to do :
I have a text file which has 3 columns: `PID, X, Y`.
Now I have two tables in my database:
* `Table 1` contains 4 columns: `UID, PID, X, Y`
* `Table 2` contains multiple columns, required ones being `UID, X, Y`
I need to update `Table 2` with corresponding X and Y values.
I think we can use `BULK INSERT` for updating `table 1`, then some `WHILE` loop or something.
But I can't figure out the exact approach. | ```
CREATE PROCEDURE [dbo].[BulkInsert]
(
    @PID int,
    @x int,
    @y int
)
AS
BEGIN
SET NOCOUNT ON;
declare @query varchar(max)
CREATE TABLE #TEMP
(
[PID] [int] NOT NULL ,
[x] int NOT NULL,
[y] int NOT NULL,
)
SET @query = 'BULK INSERT #TEMP FROM ''' + PathOfYourTextFile + ''' WITH ( FIELDTERMINATOR = '','',ROWTERMINATOR = ''\n'')'
--print @query
--return
execute(@query)
BEGIN TRAN;
MERGE TableName AS Target
USING (SELECT * FROM #TEMP) AS Source
ON (Target.YourTableId = Source.YourTextFileFieldId)
-- In the above line we are checking if the particular row exists in the table(Table1) then update the Table1 if not then insert the new row in Table-1.
WHEN MATCHED THEN
UPDATE SET
Target.PID= Source.PID, Target.x= Source.x, Target.y= Source.y
WHEN NOT MATCHED BY TARGET THEN
-- Insert statement
```
You can use the above approach to solve your problem. Hope this helps. :) | How are you going to run it? From a stored procedure?
To save some performance, I would have done a `BULK INSERT` into a temp table, then inserted from the temp table into Tables 1 & 2.
It should look like this:
```
INSERT INTO Table1 ( PID, X, Y)
SELECT PID, X, Y
FROM #tempTable
```
Some will tell that temp table are not good, but it really depend - if you file is big, reading it from disk will take time and you don't want to do it twice. | SQl: Update Table from a text File | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
Since I installed the updated version of pandas, every time I type in the name of a dataframe, e.g.
```
df[0:5]
```
To see the first few rows, it gives me a summary of the columns, the number of values in them and the data types instead.
How do I get to see the tabular view instead? (I am using iPython btw).
Thanks in advance! | *Note: To show the top few rows you can also use [`head`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.head.html).*
However, pandas will show the summary view if there are more columns than `display.max_columns` or they are longer than `display.width` (analogously for rows), so you'll need to increase these to show a tabular view. You can change these using set option, for example:
```
pd.options.display.max_columns = 50
```
*See this pandas issue: ["Unintuitive default behavior with wide DataFrames in the IPython notebook"](https://github.com/pydata/pandas/issues/3573).*
Alternatively, to show the first few rows for the first few columns you can use:
```
df.head(5)[df.columns[0:4]]
# alternatively
df.iloc[:5, :4]
``` | Short answer:
If your version is later that 0.11.0, try setting your display width with:
```
pd.set_option('display.width', 200)
```
Long answer:
Granted, the thread is 4 months old, but I was having this same problem last night. An ipython notebook that worked fine at my job would not present html tables at home.
First, check your version:
```
import pandas as pd
pd.__version__
```
Mine comes back with: '0.11.0'
Next, check your settings with:
```
pd.describe_option('display')
```
This will give you a long list of all possible display options and the current default values.
You may have to play with changing the defaults. Go with large values to get it working and then scale back as needed.
Display.width is the important one. I put these at the top with imports.
```
pd.set_option('display.height', 500)
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 500)
pd.set_option('display.width', 200)
``` | Showing Pandas data frame as a table | [
"",
"python",
"pandas",
""
] |
Suppose we have two items missing in a sequence of consecutive integers, and the missing elements lie between the first and last elements. I did write code that accomplishes the task. However, I wanted to make it efficient, using fewer loops if possible. Any help will be appreciated. Also, what about the condition when we have to find more missing items (say close to n/4) instead of 2? I think my code should then be efficient, because I am breaking out of the loop earlier.
```
def missing_elements(L, start, end, missing_num):
    complete_list = range(start, end+1)
    count = 0
    input_index = 0
    for item in complete_list:
        if item != L[input_index]:
            print item
            count += 1
        else:
            input_index += 1
        if count > missing_num:
            break

def main():
    L = [10,11,13,14,15,16,17,18,20]
    start = 10
    end = 20
    missing_elements(L, start, end, 2)

if __name__ == "__main__":
    main()
``` | Assuming that L is a list of integers with no duplicates, you can infer that the part of the list between start and index is completely consecutive if and only if `L[index] == L[start] + (index - start)` and similarly with index and end is completely consecutive if and only if `L[index] == L[end] - (end - index)`. This combined with splitting the list into two recursively gives a sublinear solution.
```
# python 3.3 and up, in older versions, replace "yield from" with yield loop
def missing_elements(L, start, end):
    if end - start <= 1:
        if L[end] - L[start] > 1:
            yield from range(L[start] + 1, L[end])
        return

    index = start + (end - start) // 2

    # is the lower half consecutive?
    consecutive_low = L[index] == L[start] + (index - start)
    if not consecutive_low:
        yield from missing_elements(L, start, index)

    # is the upper part consecutive?
    consecutive_high = L[index] == L[end] - (end - index)
    if not consecutive_high:
        yield from missing_elements(L, index, end)

def main():
    L = [10,11,13,14,15,16,17,18,20]
    print(list(missing_elements(L,0,len(L)-1)))
    L = range(10, 21)
    print(list(missing_elements(L,0,len(L)-1)))

main()
``` | If the input sequence is *sorted*, you could use sets here. Take the start and end values from the input list:
```
def missing_elements(L):
    start, end = L[0], L[-1]
    return sorted(set(range(start, end + 1)).difference(L))
```
This assumes Python 3; for Python 2, use `xrange()` to avoid building a list first.
The `sorted()` call is optional; without it a `set()` is returned of the missing values, with it you get a sorted list.
Demo:
```
>>> L = [10,11,13,14,15,16,17,18,20]
>>> missing_elements(L)
[12, 19]
```
Another approach is by detecting gaps between subsequent numbers; using an older [`itertools` library sliding window recipe](http://docs.python.org/release/2.3.5/lib/itertools-example.html):
```
from itertools import islice, chain
def window(seq, n=2):
    "Returns a sliding window (of width n) over data from the iterable"
    " s -> (s0,s1,...s[n-1]), (s1,s2,...,sn), ... "
    it = iter(seq)
    result = tuple(islice(it, n))
    if len(result) == n:
        yield result
    for elem in it:
        result = result[1:] + (elem,)
        yield result

def missing_elements(L):
    missing = chain.from_iterable(range(x + 1, y) for x, y in window(L) if (y - x) > 1)
    return list(missing)
```
This is a pure O(n) operation, and if you know the number of missing items, you can make sure it only produces those and then stops:
```
def missing_elements(L, count):
    missing = chain.from_iterable(range(x + 1, y) for x, y in window(L) if (y - x) > 1)
    return list(islice(missing, 0, count))
```
This will handle larger gaps too; if you are missing 2 items at 11 and 12, it'll still work:
```
>>> missing_elements([10, 13, 14, 15], 2)
[11, 12]
```
and the above sample only had to iterate over `[10, 13]` to figure this out. | Efficient way to find missing elements in an integer sequence | [
"",
"python",
"indexing",
""
] |
In one of my db tables, there is a column `FinalDate` which stores a date, but the data type is `varchar`, not `datetime`. I would like to write a query that selects distinct `FinalDate` values and groups/displays them like `Jun 2012`, `Jul 2012`.
Values for the `FinalDate` column would be something like below:
```
20120213
20120225
20120218
20120306
20120320
```
So, how can I write a query to select the distinct `FinalDate` values and display them as:
```
Feb 2012
Mar 2012
``` | ```
Declare @a table (d varchar(8))
insert into @a Values ('20120213'),('20120225'),('20120218'),('20120306'),('20120320');
Select FinalDate
from
(
select Distinct
--DateName(Month,d)+' '+CAST(Datepart(yy,d) as Varchar(4)) as FinalDate
SubString(DateName(Month,d),1,3)+' '+CAST(Datepart(yy,d) as Varchar(4)) as FinalDate
,Datepart(yy,d) as yy,Datepart(mm,d) as mm
from
(Select CAST(d as datetime) as d from @a) a
) b
Order by yy,mm
``` | Try this one -
**Query:**
```
DECLARE @temp TABLE (t VARCHAR(8))
INSERT INTO @temp VALUES
('20120213'),
('20120225'),
('20120218'),
('20120306'),
('20120320')
SELECT LEFT(DATENAME(MONTH, t), 3) + ' ' + y
FROM (
SELECT DISTINCT
t = CAST(LEFT(t, 6) + '01' AS DATETIME)
, y = LEFT(t, 4)
FROM @temp
) t
ORDER BY t
```
**Output:**
```
Feb 2012
Mar 2012
``` | select distinct of different date in varchar format and display/group them in month and year only | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a table like this.
```
_id (integer)
event_name(varchar(20))
event_date(timestamp)
```
Here is some sample data given below.
---
```
ID event_date event_name
101 2013-04-24 18:33:37.694818 event_A
102 2013-04-24 20:34:37.000000 event_B
103 2013-04-24 20:40:37.000000 event_A
104 2013-04-25 01:00:00.694818 event_B
105 2013-04-25 12:00:15.694818 event_A
```
I need the data from above table in below format.
```
Date count_eventA count_eventB
2013-04-24 2 1
2013-04-25 1 1
```
Hence, basically, I need the count of each event on each date.
I have tried the below query to get the desired result.
```
SELECT A.date1 AS Date ,
A.count1 AS count_eventA,
B.count2 AS count_eventB,
FROM
(SELECT count(event_name)AS count1,
event_date::date AS date1
FROM tblname
WHERE event_name='event_A'
GROUP BY (event_date::date))AS A
LEFT JOIN
(SELECT count(event_name)AS count1,
event_date::date AS date1
FROM tblname
WHERE event_name='event_B'
GROUP BY (event_date::date))AS B ON A.date1=B.date2
```
Can someone please suggest a better and more optimized query? Or am I following a good approach? | Something along these lines should work:
```
select event_date::date AS Date,
       sum(case when event_name = 'event_A' then 1 else 0 end) AS count_eventA,
       sum(case when event_name = 'event_B' then 1 else 0 end) AS count_eventB
from tblname
group by event_date::date
```
If you have more events you only need to add more `sum(case)` lines :)
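As a quick sanity check of the pattern (sketched here with sqlite3 from Python only because it is easy to run; `CASE` works the same way in PostgreSQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tblname (event_date TEXT, event_name TEXT);
    INSERT INTO tblname VALUES
        ('2013-04-24', 'event_A'),
        ('2013-04-24', 'event_B'),
        ('2013-04-24', 'event_A'),
        ('2013-04-25', 'event_B'),
        ('2013-04-25', 'event_A');
""")

rows = conn.execute("""
    SELECT event_date,
           SUM(CASE WHEN event_name = 'event_A' THEN 1 ELSE 0 END) AS count_eventA,
           SUM(CASE WHEN event_name = 'event_B' THEN 1 ELSE 0 END) AS count_eventB
    FROM tblname
    GROUP BY event_date
    ORDER BY event_date
""").fetchall()

print(rows)  # -> [('2013-04-24', 2, 1), ('2013-04-25', 1, 1)]
```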
The DB engine only runs through the table once to give you the totals, independently of the number of events you want to count; with the original query you would observe a significant delay once the rowcount gets high. | Simpler (and cleaner) than the case syntax:
```
select
event_date::date as Date,
count(event_name = 'event_A' or null) count_eventA,
count(event_name = 'event_B' or null) count_eventB
from t
group by 1
``` | How can i optimize the below query? | [
"",
"sql",
"postgresql",
"postgresql-performance",
"sql-tuning",
""
] |
I have a `filter` in Python3 that I am applying with a lambda function. Here is my function:
```
affy_reader = csv.DictReader(filter(lambda row:
                                        not row[0].startswith('#') and
                                        str(row[0]).isdigit(),
                                    file_pointer),
                             delimiter='\t',
                             fieldnames=affy_column_headers)
```
Is there a way that I can print the value of `row` from within this lambda function? I think I need to do it like this because `row` is only scoped within the lambda. For example, were this a LISP Lambda procedure, I believe I could do something like this:
```
affy_reader = csv.DictReader(filter(lambda row: print(row)
                                        not row[0].startswith('#') and
                                        str(row[0]).isdigit(),
                                    file_pointer),
                             delimiter='\t',
                             fieldnames=affy_column_headers)
```
Because the `print()` is read and executed in-line. Is there some way in Python to do this? Or if not, what is a good way for me to see this value? Thanks!
\*I realize my "LISP" example is Python not LISP. I was just trying to illustrate further what it is I am trying to do. | I don't think you can do it with a `lambda`. Just define a helper function. That way you can also display whether a certain row is or isn't going to be filtered:
```
def filter_plus_print(row):
    result = not row[0].startswith('#') and str(row[0]).isdigit()
    print("Keeping:" if result else "Removing:", row)
    return result
```
and then do
```
affy_reader = csv.DictReader(filter(filter_plus_print, file_pointer),
                             delimiter='\t',
                             fieldnames=affy_column_headers)
``` | General Python 3.x trick to print debug info in a lambda without changing its semantics:
Original:
```
lambda: 4
```
Instrumented:
```
lambda: (print (3), 4) [1]
```
Explanation:
Both arguments of the tuple will be evaluated.
The left one, `print (3)` in the example, relies purely on a side effect, i.e. in this case it will print something. You can call any function here, since any Python function will return a value. This value will be the Python predefined value `None` if a return statement is missing. This doesn't matter, since the return value isn't used anywhere.
The second argument, `4` in the example, can be any expression, including a function call or a call to a functor (an object with overloaded round brackets). This argument is returned by the lambda function by selecting `[1]`, i.e. the second element of the tuple (indexing in Python starts at 0).
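Applied to a `filter` like the one in the question, the trick looks like this (the data here is made up; note the parentheses around the tuple):

```python
rows = ["#header", "12", "abc", "7"]

kept = list(filter(
    lambda row: (print("checking:", row),                        # side effect
                 not row.startswith('#') and row.isdigit())[1],  # real result
    rows))

print(kept)  # -> ['12', '7']
```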
The reason why this works specifically in Python 3.x is that `print` is a "perfectly ordinary" function there, whereas in Python 2.x it was a statement. | Python - Use a print/debug statement within a Lambda | [
"",
"python",
"filter",
"lambda",
""
] |
I'm trying to understand some code and see that the "ON" operator was used in this query (using sql server).
```
SELECT A.*, B.UID
FROM Table1 A Left Outer Join
(SELECT ID FROM Table2) AS B ON A.ID= B.ID
...
```
What exactly does this operator do? | `ON` represents one or more `JOIN` conditions by which we can match records from one table to another.
To understand how joins work visually, read the following:
* [Introduction to JOINs – Basic of JOINs](http://blog.sqlauthority.com/2009/04/13/sql-server-introduction-to-joins-basic-of-joins/)
* [A Visual Explanation of SQL Joins](http://www.codinghorror.com/blog/2007/10/a-visual-explanation-of-sql-joins.html) | It's not an operator - it's a *part* of a `JOIN`, which is part of the [`FROM` clause](http://msdn.microsoft.com/en-us/library/ms177634.aspx)
It's very similar to a `WHERE` clause - except it is intended just to filter the joining of two tables (or rowsets).
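The distinction is easy to see with a tiny experiment (sketched with sqlite3 from Python purely for convenience; the semantics are the same in SQL Server):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (id INTEGER);
    CREATE TABLE b (id INTEGER);
    INSERT INTO a VALUES (1), (2);
    INSERT INTO b VALUES (1);
""")

# Condition in ON: the unmatched row of `a` survives with NULL for `b`.
on_rows = conn.execute(
    "SELECT a.id, b.id FROM a LEFT JOIN b ON a.id = b.id ORDER BY a.id"
).fetchall()
print(on_rows)     # -> [(1, 1), (2, None)]

# Same condition applied in WHERE instead: the NULL row is filtered out,
# so the LEFT JOIN effectively behaves like an INNER JOIN.
where_rows = conn.execute(
    "SELECT a.id, b.id FROM a LEFT JOIN b ON 1=1 WHERE a.id = b.id"
).fetchall()
print(where_rows)  # -> [(1, 1)]
```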
In this case, it's the condition whereby rows from `A` and `B` are matched up. If you had the same conditions in the `WHERE` clause, it would affect the join - a `LEFT JOIN` (here) is allowed to find no matching row in `B` yet still contribute that row from `A` to the result (with `B`s columns being `NULL`). If you place the same condition in `WHERE`, it forces the join to become an `INNER JOIN` instead of a `LEFT JOIN`. | ON operator SQL, what is it? | [
"",
"sql",
"sql-server",
"join",
""
] |
```
SELECT.....
(v1.PCG like 'n0%'
or
v1.PCG like 'n1%'
or
v1.PCG like 'n2%'
or
v1.PCG like 'n3%'
or
v1.PCG like 'n4%'
or
v1.PCG like 'n5%'
or v1.PCG='N63Af1')
and v1.PCG not like 'N2D%'
and v1.PCG not like 'N2E%'
and v1.PCG not like 'N2C%'
and v1.PCG not like 'N2F%'
and v1.PCG not like 'N2J%'
and v1.PCG not like 'N2K%'
and v1.PCG not like 'N2U%'
and v1.PCG not like 'N2GC%'
and v1.PCG not like 'N2GD%'
and v1.PCG not like 'N2GH%'
and v1.PCG not like 'N2GJ%'
and v1.PCG not like 'N2GK%'
) as 'Value Of PN Orders',
from........
```
I have completed my code but am trying to find a more efficient way of doing this. I had a look but cannot find another way... Any suggestions? | `LIKE` supports character classes, so:
```
where v1.PCG like 'n[0-5]%'
``` | This particular case can be worked with wildcards - not really requiring regular expressions:
```
(v1.PCG LIKE 'n[0-5]%' OR v1.PCG='N63Af1')
AND v1.PCG NOT LIKE 'N2[CDEFJKU]%'
AND v1.PCG NOT LIKE 'N2G[CDHJK]%'
```
An alternative that wouldn't change the semantics much, but would read cleaner, would be to use a table variable or CTE to JOIN to:
```
DECLARE @match TABLE (Value varchar(10))
INSERT @match VALUES
('N0%'),
('N1%'),
('N2%'),
('N3%'),
('N4%'),
('N5%'),
('N63Af1')
DECLARE @not TABLE (Value varchar(5))
INSERT @not VALUES
('N2D%'),
('N2E%'),
('N2C%'),
('N2F%'),
('N2J%'),
('N2K%'),
('N2U%'),
('N2GC%'),
('N2GD%'),
('N2GH%'),
('N2GJ%'),
('N2GK%')
SELECT ...
JOIN @match AS m ON
    v1.PCG LIKE m.Value
LEFT OUTER JOIN @not AS n ON
    v1.PCG LIKE n.Value
WHERE
    n.Value IS NULL
```
That's a good trick if your values are user-supplied. In this case, though, I think the wildcard sets win out for ease of use. | Trying to find an alternative to likes and not likes (SQL) | [
"",
"sql",
"sql-server-2008",
""
] |
I'm building software for exam signup and grading:
I need to get data from these two tables:
Exams
```
|------------------------------------------------------------|
| ExamId | ExamTitle | EducationId | ExamDate |
|------------------------------------------------------------|
```
ExamAttempts
```
|-----------------------------------------------------------------------------|
| ExamAttemptId | ExamId | StudentId | Grade | NotPresentCode |
|-----------------------------------------------------------------------------|
```
1. Students attends an education
2. Educations have multiple exams
3. Students have up to 6 attempts per exam
4. Every attempt is graded or marked as not present
Students can sign up for an exam if:
- not passed yet (grade below 2)
- has not used all attempts
I want to list every exam that a student can sign up for.
It may be fairly simple, but I just can't get my head around it and now I'm stuck! I've tried EVERYTHING but haven't got it right yet. This is one of the more hopeless tries I made (!):
```
CREATE PROCEDURE getExamsOpenForSignUp
@EducationId int,
@StudentId int
AS
SELECT ex.*
FROM Exams ex
LEFT JOIN (
SELECT ExamId, COUNT(ExamId) AS NumAttempts
FROM ExamAttempts
WHERE StudentId = @StudentId AND grade < 2 OR grade IS NULL
GROUP BY ExamId
) exGrouped ON ex.ExamId = exGrouped.ExamId
WHERE educationid = @EducationId and exGrouped.ExamId IS NULL OR exGrouped.NumAttempts < 6;
GO
```
What am I doing wrong? Please help... | You need to start with a list of all possibilities of exams and students and then weed out the ones that don't meet the requirements.
```
select driver.StudentId, driver.ExamId
from (select @StudentId as StudentId, e.ExamId
from exams e
where e.EducationId = @EducationId
) driver left outer join
(select ea.ExamId, ea.StudentId
from ExamAttempts ea
group by ea.ExamId, ea.StudentId
having max(grade) >= 2 or -- passed
count(*) >= 6
) NotEligible
on driver.ExamId = NotEligible.ExamId and
driver.StudentId = NotEligible.StudentId
where NotEligible.ExamId is NULL
```
The structure of this query is quite specific. The `driver` table contains all possible combinations. In this case, you have only one student and all exams are in the "education". Then the left join determines which are *not* eligible, based on your two requirements. The final `where` is selecting the non-matches to the not-eligible -- or the exams that are eligible. | Check if this works in your SP:
```
Select EduExams.ExamId
from
(select * from Exams
where Exams.EducationId = @EducationId) EduExams
left outer join
(select * from ExamAttempts
where ExamAttempts.StudentId = @StudentId) StudentAttempts
on EduExams.ExamID = StudentAttempts.ExamId
group by EduExams.ExamId
having count(StudentAttempts.ExamAttemptId) < 6
and ((max(StudentAttempts.Grade) is null) or (max(StudentAttempts.Grade) < 2))
``` | SQL: tricky join | [
"",
"sql",
"join",
""
] |
I'm trying to define a function that returns a list when I specify an object, and returns a list of all the objects in the scene matching \*\_control when I don't specify anything.
That's my function, but it doesn't work:
I'm working with Maya.
```
from maya import cmds
def correct_value(selection):
    if not isinstance(selection, list):
        selection = [selection]
    objs = selection
    return objs
    if not selection:
        objs = cmds.ls('*_control')
        return objs
```
when i don't specify anything it returns an error :
> Error: line 1: TypeError: file line 1: correct\_value()
> takes exactly 1 argument (0 given)
What's wrong? | ```
def correct_value(selection=None):
    if selection is None:  # note that you should check this before
                           # you check whether it is a list or not
        objs = cmds.ls('*_control')
        return objs
    if not isinstance(selection, list):
        selection = [selection]
    objs = selection
    return objs
``` | To handle a default parameter even if it might be `None`
```
def correct_value(*args):
    if not args:
        objs = cmds.ls('*_control')
        return objs
    elif len(args) == 1:
        selection = args
        objs = selection
        return objs
    else:
        raise TypeError  # ...
``` | def function : if nothing selected , return a list of all the objects | [
"",
"python",
"function",
""
] |
I realize this is probably really easy, I want the equivalent of this C code in Python:
```
#include <stdio.h>
#include <string.h>
#define BUFFER 200
int main()
{
    char buffer[BUFFER];

    while (fgets(buffer, BUFFER, stdin))
    {
        if (strstr(buffer, "BEER"))
            printf("Drink up!\n");
    }
    return 0;
}
```
I searched on the web, but I think my brain is fried. | Equivalent way of doing it, though not exactly the same
```
for line in iter(raw_input, ''):
    if 'BEER' in line:
        print('Drink up!')
foo
bar
aBEERc
Drink up!
***blank line***
```
--- | The Pythonic thing to do is a little different:
```
import fileinput
for line in fileinput.input():
if "BEER" in line:
print("Drink up!")
```
But that does a whole lot of things your C code doesn't do, like allowing the user to pass filename arguments to be read in place of stdin, etc.
---
This is a more direct mapping of your code:
```
try:
    while True:
        buffer = input()  # or raw_input, for 2.x
        if "BEER" in buffer:
            print("Drink up!")
except EOFError:
    pass
```
However, it's missing the part where a line of more than 200 characters that includes `"BEER"` in the first 200 characters and then again in the next chunk will print twice, or a 300-character line that has "BEER" starting at position 198 won't print it at all.
---
If that's a problem, you probably want something like this:
```
import sys
while True:
buffer = sys.stdin.readlines(200)
if not buffer:
break
for line in buffer:
        for i in range(0, len(line), 200):
            if "BEER" in line[i:i+200]:
print("Drink up!")
``` | Python I/O and string matching | [
"",
"python",
"io",
"string-matching",
""
] |
I'm trying to write a function that takes as input a list of coefficients (a0, a1, a2, a3.....a n) of a polynomial p(x) and the value x. The function will return p(x), which is the value of the polynomial when evaluated at x.
A polynomial of degree n with coefficient a0, a1, a2, a3........an is the function
```
p(x)= a0+a1*x+a2*x^2+a3*x^3+.....+an*x^n
```
So I'm not sure how to attack the problem. I'm thinking that I will need a range but how can I make it so that it can handle any numerical input for x? I'm not expecting you guys to give the answer, I'm just in need of a little kick start. Do I need a for loop, while loop or could recursive be an option here?
```
def poly(lst, x)
```
I need to iterate over the items in the list, do I use the indices for that, but how can I make it iterate over an unknown number of items?
I'm thinking I can use recursion here:
```
def poly(lst, x):
n = len(lst)
If n==4:
return lst[o]+lst[1]*x+lst[2]*x**2+lst[3]*x**3
elif n==3:
return lst[o]+lst[1]*x+lst[2]*x**2
elif n==2:
return lst[o]+lst[1]*x
elif n==1:
return lst[o]
else:
return lst[o]+lst[1]*x+lst[2]*x**2+lst[3]*x**3+lst[n]*x**n
```
This works for n<=4 but I get a index error: list index out of range for n>4, can't see why though. | simple:
```
def poly(lst, x):
n, tmp = 0, 0
for a in lst:
tmp = tmp + (a * (x**n))
n += 1
return tmp
print poly([1,2,3], 2)
```
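As an aside (not in the original answer), the same accumulation can be written in Python 3 as a one-liner with `enumerate`:

```python
def poly(lst, x):
    # sum a_n * x**n over the coefficient list
    return sum(a * x ** n for n, a in enumerate(lst))

print(poly([1, 2, 3], 2))  # 1 + 2*2 + 3*4 = 17
```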
simple recursion:
```
def poly(lst, x, i = 0):
try:
tmp = lst.pop(0)
except IndexError:
return 0
return tmp * (x ** (i)) + poly(lst, x, i+1)
print poly([1,2,3], 2)
``` | The most efficient way is to evaluate the polynomial backwards using Horner's Rule. Very easy to do in Python:
```
# Evaluate a polynomial in reverse order using Horner's Rule,
# for example: a3*x^3+a2*x^2+a1*x+a0 = ((a3*x+a2)x+a1)x+a0
def poly(lst, x):
total = 0
for a in reversed(lst):
total = total*x+a
return total
``` | Evaluating Polynomial coefficients | [
"",
"python",
"python-3.x",
"iteration",
""
] |
I am trying with this query to get only the rtp.releaseid that have a unique territory=200
```
select rtp.ReleaseId, rtp.TerritoryId from ReleaseTerritoryPrice rtp where
rtp.TerritoryId=200
```
but I guess something is missing , can you please help.
Thanks. | You can use the following with `NOT EXISTS` in a WHERE clause:
```
select rtp1.releaseId, rtp1.territoryId
from ReleaseTerritoryPrice rtp1
where rtp1.territoryId = 200
and not exists (select releaseId
from ReleaseTerritoryPrice t2
where t2.territoryId <> 200
and rtp1.releaseId = t2.releaseId);
```
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/85ae6/2)
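The `NOT EXISTS` pattern can also be exercised in-memory with SQLite (column names taken from the question; the rows below are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE ReleaseTerritoryPrice (releaseId INTEGER, territoryId INTEGER)')
conn.executemany('INSERT INTO ReleaseTerritoryPrice VALUES (?, ?)',
                 [(1, 200),            # release 1: territory 200 only
                  (2, 200), (2, 300),  # release 2: 200 plus another territory
                  (3, 100)])           # release 3: no territory 200 at all
rows = conn.execute("""
    SELECT rtp1.releaseId
    FROM ReleaseTerritoryPrice rtp1
    WHERE rtp1.territoryId = 200
      AND NOT EXISTS (SELECT 1 FROM ReleaseTerritoryPrice t2
                      WHERE t2.territoryId <> 200
                        AND rtp1.releaseId = t2.releaseId)""").fetchall()
# only release 1 has territory 200 exclusively
```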
Or you can use `NOT IN` in a WHERE clause:
```
select rtp1.releaseId, rtp1.territoryId
from ReleaseTerritoryPrice rtp1
where rtp1.territoryId = 200
and rtp1.releaseId not in (select releaseId
from ReleaseTerritoryPrice t2
where t2.territoryId <> 200);
```
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/85ae6/5) | A couple of ways
```
SELECT ReleaseId
FROM ReleaseTerritoryPrice
WHERE TerritoryId = 200
EXCEPT
SELECT ReleaseId
FROM ReleaseTerritoryPrice
WHERE TerritoryId IS NULL OR TerritoryId <> 200
```
Or
```
SELECT ReleaseId
FROM ReleaseTerritoryPrice
GROUP BY ReleaseId
HAVING COUNT(DISTINCT TerritoryId) =1 AND MAX(TerritoryId )=200
``` | sql query that select data that have only one value | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have defined a Button with Tklinter
```
class ventanapp(Tk):
def initialize(self):
self.grid()
self.entry = Entry(self)
msg = Message(text=' SSH - Ingresar IP ->')
self.entry.grid(column=0,row=0,sticky='EW')
button = Button(self,text=u"realizar la conexion",command=ssh.crear_ssh())
button.grid(column=1,row=0)
```
But when I call the `command=ssh.crear_ssh()` from another class
```
class ssh():
def crear_ssh():
        print('PONER ACA EL CODIGO DE LA CONEXION')
```
I receive the error : "global name 'ssh' is not defined"
Any ideas? Thanks! | As commented, the solution was to create an instance of the ssh class before the ventanapp class definition:
ssh = ssh()
button = Button(self,text=u"realizar la conexion",command=lambda: ssh.crear_ssh())
Many thanks to all. | I'm not exactly sure how your project is laid out, but:
```
class ventanapp(Tk):
def initialize(self):
self.grid()
self.entry = Entry(self)
msg = Message(text=' SSH - Ingresar IP ->')
self.entry.grid(column=0,row=0,sticky='EW')
#note the lambda below, so the method doesn't run until the button is clicked
button = Button(self,text=u"realizar la conexion",command=lambda: ssh.crear_ssh)
button.grid(column=1,row=0)
```
Elsewhere in the same file:
```
class ssh():
def crear_ssh(self): #added 'self' to the args
        print('PONER ACA EL CODIGO DE LA CONEXION')
``` | Python Tklinter Button calling a function from another class | [
"",
"python",
"class",
""
] |
I have a common callback for a range of buttons.
Once the callback is called, I want to identify the caller by its previously assigned name.
But I could not figure out in the Tkinter documentation how I could do this. Any ideas?
My program is approx.
```
def callback(event):
event.widget['text'] # 'name' does not work so
# I have to use the label of the button now, but it is a dirty solution.
root.bind("<Button-1>", cb_button)
```
I need the names because I load the GUI from JSON using [pytkgen](https://github.com/tmetsch/pytkgen).
**UPDATE:**
Justin's solution looks nice but why does the following code print always `9` regardless of which button is clicked?
```
def cb_button(i):
print i
buttons = [('btn'+str(i), i) for i in range(10)]
for (b, i) in buttons:
root.button(b, lambda: cb_button(i))
``` | The winfo\_name() method of the widget returns the name of the widget. It is usually an 8 digit number expressed as a string, ie: '40123211' If you have an event object, evt, then use: evt.widget.winfo\_name()
In the body of the question you're asking something different: "How can I tell which button invoked a common callback function"
**Use a lambda function to pass a unique value to a common callback:**
Based on the update to your question, I think I understand the problem. You have a JSON file that is used to create a Tkinter interface via pytkgen. In this JSON file are definitions for several buttons, each of which has been given a unique name. When assigning commands to these buttons, they're all given the same callback, but the callback needs to know which button initiated the call, and you want to do that via the name. Is that correct?
If so, I'm guessing that you're currently creating the callback assignments like this (very generic example, assuming that `root` is the interface root returned by calling `tkgen.gengui.TkJson` with the path to your JSON file):
```
root.button("name1", callback)
root.button("name2", callback)
...
```
This isn't giving you the name that you want, though. One way to pass the name is to create a `lambda` that passes the button name to the callback function. The callback assignments would then look something like this:
```
root.button("name1", lambda:callback("name1"))
root.button("name2", lambda:callback("name2"))
...
```
Then your callback definition could look like this:
```
def callback(name):
if name == "name1":
# Do something in here
elif name == "name2":
# Do something else in here
...
```
UPDATE: If you're creating the buttons inside a loop, the lambda definition will need to be modified to store the desired loop variable(s) as keyword defaults. Otherwise the final value of the loop variable(s) will be applied to all buttons.
```
def cb_button(i):
print i
buttons = [('btn'+str(i), i) for i in range(10)]
for (b, i) in buttons:
root.button(b, lambda x=i: cb_button(x))
```
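The late-binding pitfall and the default-argument fix can be seen without any GUI at all:

```python
# without a default, every lambda closes over the loop variable itself,
# so all of them see its final value
late = [(lambda: i) for i in range(3)]

# binding i as a default argument captures its value at definition time
bound = [(lambda x=i: x) for i in range(3)]

print([f() for f in late])   # every callback reports the last value
print([f() for f in bound])  # each callback keeps its own value
```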
**Add an attribute to the widget object**
Another solution is to add attributes to the button widget object that you can inspect in the callback function. (This example uses Tkinter.)
```
but1 = Tkinter.Button(root, text="Sync", width=10)
but1.bind("<Button-1>", doButton)
but1.myId = "this"
but2 = Tkinter.Button(root, text="Sync", width=10)
but2.bind("<Button-1>", doButton)
but2.myId = "that"
but2.otherStuff = "anything"
def doButton(evt):
if evt.widget.myId == "this":
print("This")
else:
print("That "+evt.widget.otherStuff)
``` | A common way is to use a common function, but not exactly the same callback (through `lambda` closure, as pointed by [Justin's answer](https://stackoverflow.com/a/16973799)).
There are two imperfect alternatives relying on internals of toolkits:
* tkinter provide a private attribute `_name` in widget
* pytkgen root feature a `widgets` dictionary you can iterate to find back your widget names (`name = [k for k, v in root.widgets.iteritems() if v == event.widget][0]`). For better code separation, you can retrieve root from widget with `event.widget._nametowidget('.')`
It is worth noting these solutions would not work with a button's `command`, which does not provide an event to its callback. And binding actions to buttons is preferentially done through
`command` since it implements the usual behavior of a button (action on release, you can abort by leaving the button...). | How can I get the name of a widget in Tkinter? | [
"",
"python",
"tkinter",
""
] |
Say, I have a dictionary of data and their variable names, e.g.
```
dct = {'a': 1, 'b': 2, 'c':3}
```
and I want to retrieve the values associated with a subset of keys. How can I do this without coding too much, i.e. extracting the variables one-by-one? Is there something like
```
a, c = dct['a','c']
``` | You can use a generator expression for this:
```
>>> d = {'a':1,'b':2,'c':3}
>>> a, c = (d[key] for key in ['a', 'c'])
>>> a
1
>>> c
3
```
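(A small wrinkle worth noting: with a single key the unpacking target needs to be a one-element tuple, since the generator still yields a sequence of results:)

```python
d = {'a': 1, 'b': 2, 'c': 3}
(a,) = (d[key] for key in ['a'])  # note the comma: a one-element unpack
```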
If you do this often enough that it's worth wrapping up in a function, that's trivial:
```
>>> def multiget(mapping, *keys):
...     return (mapping[key] for key in keys)
>>> a, c = multiget(d, 'a', 'c')
```
However, if you look carefully, you'll see that `multiget` is just [`operator.itemgetter`](http://docs.python.org/2/library/operator.html#operator.itemgetter) uncurried, and it's usually a lot simpler to just use that:
```
>>> a, c = operator.itemgetter('a', 'c')(d)
``` | For educational purposes:
```
>>> d = {'a':1,'b':2,'c':3}
>>> globals().update((k, d[k]) for k in ('a', 'c'))
>>> a
1
>>> c
3
```
However **never** actually use something like this | Shortcut for unrolling a dictionary | [
"",
"python",
"dictionary",
""
] |
I have a file where they are segmented by "\n" and the number of lines per segment is unknown. A sample of the file looks like this:
```
800004
The London and North-Western's Euston Station was first, but at the eastern end of Euston Road the Great Northern constructed their King's Cross terminal.
Initially the Midland Railway ran into King's Cross but a quarrel over access led them to construct next door to King's Cross their St Pancras terminal, which was topped by a statue of Britannia, a <tag "510285">calculated</> snook-cocking exercise because Britannia was the company emblem of the Midland's hated rival, the London and North-Western.
800005
GROWTH in Malaysia's gross domestic product this year is expected to be 8.5 per cent.
Nearly two percentage points higher than the Treasury's estimate, Bank Negara, the central bank, reported yesterday.
Last year's growth, <tag "510270">calculated</> by the bank, was 8.7 per cent, compared with 7.6 per cent by the Treasury.
800006
He was a Catholic.
When he visited the Pope, even then, he couldn't help <tag "510270">calculating</> the Pope's worldly riches (life-proprietor of the Sistine Chapel, landlord of the Vatican and contents &ellip. ).
```
Is there a simpler way of getting the segments from the textfile?
I've been doing it this way:
```
doc = []
segments = []
for line in open(trainfile):
if line == "\n":
doc.append(segments)
segments = []
else:
segments.append(line.strip())
for i in doc:
print i
``` | Use a generator function:
```
def per_section(it):
section = []
for line in it:
if line.strip():
section.append(line)
else:
yield ''.join(section)
section = []
# yield any remaining lines as a section too
if section:
yield ''.join(section)
```
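Because `per_section` accepts any iterable of lines, it is easy to sanity-check with a plain list (the sample lines below are made up to mirror the question):

```python
def per_section(it):
    # same generator as above
    section = []
    for line in it:
        if line.strip():
            section.append(line)
        else:
            yield ''.join(section)
            section = []
    if section:
        yield ''.join(section)

sample = ["800004\n", "First line.\n", "\n", "800005\n", "Second line.\n"]
sections = list(per_section(sample))
```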
This yields each section, separated by blank lines, as one string:
```
with open(sectionedfile, 'r') as inputfile:
for section in per_section(inputfile):
print section
``` | It seems like `itertools.groupby` would be your friend here:
```
for k,section in groupby(file,key=str.isspace):
if k:
for line in section:
...
``` | How to read a textfile by x number of lines, but x is unknown? | [
"",
"python",
"io",
"text-files",
"newline",
""
] |
I am trying to create a function that returns a SELECTed resultset.
When I call my postgres function like this `select * from tst_dates_func()` I get an error as shown below:
```
ERROR: query has no destination for result data
HINT: If you want to discard the results of a SELECT, use PERFORM instead.
CONTEXT: PL/pgSQL function "tst_dates_func" line 3 at SQL statement
********** Error **********
ERROR: query has no destination for result data
SQL state: 42601
Hint: If you want to discard the results of a SELECT, use PERFORM instead.
Context: PL/pgSQL function "tst_dates_func" line 3 at SQL statement
```
Here is the function I created:
```
CREATE OR REPLACE FUNCTION tst_dates_func()
RETURNS TABLE( date_value date, date_id int, date_desc varchar) as
$BODY$
BEGIN
select a.date_value, a.date_id, a.date_desc from dates_tbl a;
END;
$BODY$
LANGUAGE plpgsql;
```
I am not sure why I am getting the above error. I would like to run `select * from tst_dates_func();`
and get data back. Or further join the result set if needed. What is the problem here? | Do it as plain SQL
```
CREATE OR REPLACE FUNCTION tst_dates_func()
RETURNS TABLE( date_value date, date_id int, date_desc varchar) as
$BODY$
select a.date_value, a.date_id, a.date_desc from dates_tbl a;
$BODY$
LANGUAGE sql;
```
If you really need plpgsql use `return query`
```
CREATE OR REPLACE FUNCTION tst_dates_func()
RETURNS TABLE( date_value date, date_id int, date_desc varchar) as
$BODY$
BEGIN
return query
select a.date_value, a.date_id, a.date_desc from dates_tbl a;
END;
$BODY$
LANGUAGE plpgsql;
``` | In PLPGSQL - use RETURN QUERY
```
CREATE OR REPLACE FUNCTION tst_dates_func()
RETURNS TABLE( date_value date, date_id int, date_desc varchar) as
$BODY$
BEGIN
RETURN QUERY (select a.date_value, a.date_id, a.date_desc from dates_tbl a);
END;
$BODY$
LANGUAGE plpgsql;
``` | Function with SQL query has no destination for result data | [
"",
"sql",
"postgresql",
"plpgsql",
""
] |
I'm building an automated process to produce extensions. Is there a code example of calculating the extension-ID directly and entirely bypassing interaction with the browser?
(I'm answering my own question, below.) | I was only able to find a related article with a Ruby fragment, and it's only available in the IA: <http://web.archive.org/web/20120606044635/http://supercollider.dk/2010/01/calculating-chrome-extension-id-from-your-private-key-233>
Important to know:
1. This depends on a DER-encoded public key (raw binary), not a PEM-encoded key (nice ASCII generated by base64-encoding the DER key).
2. The extension-IDs are base-16, but are encoded using [a-p] (called "mpdecimal"), rather than [0-9a-f].
Using a PEM-encoded public key, follow the following steps:
1. If your PEM-formatted public-key still has the header and footer and is split into multiple lines, reformat it by hand so that you have a single string of characters that excludes the header and footer, and runs together such that every line of the key wraps to the next.
2. Base64-decode the public key to render a DER-formatted public-key.
3. Generate a SHA256 hex-digest of the DER-formatted key.
4. Take the first 32 characters of the hex digest (the first 16 bytes of the hash). You will not need the rest.
5. For each hex character, convert it to its integer value (0-15) and add it to the ASCII code for 'a'.
The following is a Python routine to do this:
```
import hashlib
from base64 import b64decode
def build_id(pub_key_pem):
pub_key_der = b64decode(pub_key_pem)
sha = hashlib.sha256(pub_key_der).hexdigest()
prefix = sha[:32]
reencoded = ""
ord_a = ord('a')
for old_char in prefix:
code = int(old_char, 16)
new_char = chr(ord_a + code)
reencoded += new_char
return reencoded
def main():
pub_key = 'MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCjvF5pjuK8gRaw/2LoRYi37QqRd48B/FeO9yFtT6ueY84z/u0NrJ/xbPFc9OCGBi8RKIblVvcbY0ySGqdmp0QsUr/oXN0b06GL4iB8rMhlO082HhMzrClV8OKRJ+eJNhNBl8viwmtJs3MN0x9ljA4HQLaAPBA9a14IUKLjP0pWuwIDAQAB'
id_ = build_id(pub_key)
print(id_)
if __name__ == '__main__':
main()
```
You're more than welcome to test this against an existing extension and its ID. To retrieve its PEM-formatted public-key:
1. Go into the list of your existing extensions in Chrome. Grab the extension-ID of one.
2. Find the directory where the extension is hosted. On my Windows 7 box, it is: C:\Users<username>\AppData\Local\Google\Chrome\User Data\Default\Extensions<extension ID>
3. Grab the public-key from the manifest.json file under "key". Since the key is already ready to be base64-decoded, you can skip step (1) of the process.
The public-key in the example is from the "Chrome Reader" extension. Its extension ID is "lojpenhmoajbiciapkjkiekmobleogjc".
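Regardless of the key material, the mapping above always yields 32 characters drawn from 'a'-'p'; here is a self-contained sanity check of that shape (the DER blob is fake, only there to exercise the code path):

```python
import hashlib
from base64 import b64decode, b64encode

def build_id(pub_key_pem):
    pub_key_der = b64decode(pub_key_pem)
    sha = hashlib.sha256(pub_key_der).hexdigest()
    # first 32 hex chars, each remapped from 0-f to a-p
    return ''.join(chr(ord('a') + int(ch, 16)) for ch in sha[:32])

# any base64-encoded blob exercises the code path; this is NOT a real key
fake_key = b64encode(b'not a real DER key, just an illustration')
ext_id = build_id(fake_key)
```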
See also:
1. [Google Chrome - Alphanumeric hashes to identify extensions](https://stackoverflow.com/questions/1882981/google-chrome-alphanumeric-hashes-to-identify-extensions/2050916#2050916)
2. <http://blog.roomanna.com/12-14-2010/getting-an-extensions-id> | Starting with Chrome 64, Chrome changed the package format for extensions to the [CRX₃ file format](https://docs.google.com/document/d/1pAVB4y5EBhqufLshWMcvbQ5velk0yMGl5ynqiorTCG4/edit?usp=sharing), which supports multiple signatures and explicitly declares its CRX ID. Extracting the CRX ID from a CRX₃ file requires parsing a protocol buffer.
Here is a small python script for extracting the ID from a CRX₃ file.
This solution should only be used with trusted CRX₃ files or in contexts where security is not a concern: unlike CRX₂, the package format does not restrict what CRX ID a CRX₃ file declares. (In practice, consumers of the file (i.e. Chrome) will place restrictions upon it, such as requiring the file to be signed with at least one key that hashes to the declared CRX ID).
```
import binascii
import string
import struct
import sys
def decode(proto, data):
index = 0
length = len(data)
msg = dict()
while index < length:
item = 128
key = 0
left = 0
while item & 128:
item = data[index]
index += 1
value = (item & 127) << left
key += value
left += 7
field = key >> 3
wire = key & 7
if wire == 0:
item = 128
num = 0
left = 0
while item & 128:
item = data[index]
index += 1
value = (item & 127) << left
num += value
left += 7
continue
elif wire == 1:
index += 8
continue
elif wire == 2:
item = 128
_length = 0
left = 0
while item & 128:
item = data[index]
index += 1
value = (item & 127) << left
_length += value
left += 7
last = index
index += _length
item = data[last:index]
if field not in proto:
continue
msg[proto[field]] = item
continue
elif wire == 5:
index += 4
continue
raise ValueError(
'invalid wire type: {wire}'.format(wire=wire)
)
return msg
def get_extension_id(crx_file):
with open(crx_file, 'rb') as f:
f.read(8); # 'Cr24\3\0\0\0'
data = f.read(struct.unpack('<I', f.read(4))[0])
crx3 = decode(
{10000: "signed_header_data"},
[ord(d) for d in data])
signed_header = decode(
{1: "crx_id"},
crx3['signed_header_data'])
return string.translate(
binascii.hexlify(bytearray(signed_header['crx_id'])),
string.maketrans('0123456789abcdef', string.ascii_lowercase[:16]))
def main():
if len(sys.argv) != 2:
print 'usage: %s crx_file' % sys.argv[0]
else:
print get_extension_id(sys.argv[1])
if __name__ == "__main__":
main()
```
(Thanks to <https://github.com/thelinuxkid/python-protolite> for the protobuf parser skeleton.) | How to programmatically calculate Chrome extension ID? | [
"",
"python",
"google-chrome",
"google-chrome-extension",
"sha256",
""
] |
I want to sort product by discount on certain condition
```
ORDER BY
CASE WHEN @OrderBy = 0
THEN table.id END ASC,
CASE WHEN @Orderby = 2
THEN table.id END ASC,
```
I want to do something like below as I don't have discount column in table
```
CASE WHEN @OrderBy = 4
THEN (100-((table.price/table.oldprice)*100) as discount END ASC
```
but it throws an error - how can I sort by discount? | There are quite a few problems, e.g. You can't alias a calculation field in an order by, and you'll need to escape your table name, fix the `cae`, and count the parenthesis.
Also, since it seems you just want to `CASE` on a single variable, you can move the `@OrderBy` out to the top of the CASE, like so:
```
SELECT * from [table]
ORDER BY
CASE @OrderBy
WHEN 0
THEN [table].id -- ASC
WHEN 2
THEN [table].id * -1 -- DESC
---I want to do something like below as I don't have discount column in table
WHEN 4
THEN (100-([table].price/[table].oldprice)*100)
END
```
[SqlFiddle Here](http://www.sqlfiddle.com/#!3/41859/1)
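The same shape can be exercised in SQLite; the rows and the `?` parameter (standing in for `@OrderBy`) below are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (id INTEGER, price REAL, oldprice REAL)')
conn.executemany('INSERT INTO t VALUES (?, ?, ?)',
                 [(1, 50.0, 100.0), (2, 90.0, 100.0), (3, 75.0, 100.0)])
order_by = 4  # plays the role of @OrderBy
rows = conn.execute("""
    SELECT id FROM t
    ORDER BY CASE ?
        WHEN 0 THEN id
        WHEN 2 THEN id * -1
        WHEN 4 THEN 100 - (price / oldprice) * 100
    END""", (order_by,)).fetchall()
# discounts are 50%, 10%, 25%, so ascending order is id 2, 3, 1
```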
As an aside, if you need to dynamically change the `ASC` or `DESC` of a column, you can use a hack like [this](https://stackoverflow.com/questions/3884884/conditional-sql-order-by-asc-desc-for-alpha-columns) by multiplying by -1.
(Also note that `ORDER BY CASE ... END ASC, CASE ... END ASC` will set the first and then second orderings ... this doesn't seem to make sense given that `@OrderBy` can only have a single value) | It seems to me you need something similar to this
```
select * from TableName where someCondition >100
order by
case when @OrderBy = 'AirlineService'
then AirlineService END desc,
case when @OrderBy = 'SomeMore'
then MyOtherColumn end
GO
```
If you do not have a column then you cannot sort by it. Please read this [Microsoft Link](http://msdn.microsoft.com/en-GB/library/ms188385%28v=sql.105%29.aspx). Please keep in mind it specifies the sort order used on columns returned in a SELECT statement.
Hope it helps. | Sql Order By ... using `Case When` for different Ascending, Descending, and Custom Orders | [
"",
"sql",
"sql-server",
""
] |
I've this query and it's getting error "`The ORDER BY clause is invalid in views, inline functions, derived tables, subqueries, and common table expressions, unless TOP or FOR XML is also specified.`"
I've solved this issue with using `TOP` in the inner query, but some cases I wouldn't measure the record count. Is there any other possibility to sort it out ?
```
select *
from rs_column_lang
where column_id in (
select column_id
from rs_column
order by column_id
)
``` | Having an `order by` in your inner query doesn't make sense. You're checking your outer query for matching records, but you're reading the entire table on your inner query to find the matches. Sorting that inner query serves no purpose, so SQL Server yells at you for it.
Like the other answers indicate, assuming your goal is to sort the results, your `order by` should be part of your outer query, not the inner one.
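For instance, with stand-in tables in SQLite (names borrowed from the question, data invented):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE rs_column (column_id INTEGER)')
conn.execute('CREATE TABLE rs_column_lang (column_id INTEGER, lang TEXT)')
conn.executemany('INSERT INTO rs_column VALUES (?)', [(2,), (1,), (3,)])
conn.executemany('INSERT INTO rs_column_lang VALUES (?, ?)',
                 [(3, 'en'), (1, 'en'), (2, 'de')])
rows = conn.execute("""
    SELECT * FROM rs_column_lang
    WHERE column_id IN (SELECT column_id FROM rs_column)
    ORDER BY column_id""").fetchall()
# the outer ORDER BY sorts the final result; no inner ORDER BY is needed
```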
The reason a `top` would work in the inner query is that it would change the results of the query, depending on what you're ordering by. But just changing the order would not change the results. | Try this
```
select * from rs_column_lang
where column_id in
(
select column_id from rs_column
)
order by column_id
``` | How can use order by column with out using top in sub query | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I want to match possible names from a string. A name should be 2-4 words, each with 3 or more letters, all words capitalized. For example, given this list of strings:
```
Her name is Emily.
I work for Surya Soft.
I sent an email for Ery Wulandari.
Welcome to the Link Building Partner program!
```
I want a regex that returns:
```
None
Surya Soft
Ery Wulandari
Link Building Partner
```
currently here is my code:
```
data = [
'Her name is Emily.',
'I work for Surya Soft.',
'I sent an email for Ery Wulandari.',
'Welcome to the Link Building Partner program!'
]
for line in data:
print re.findall('(?:[A-Z][a-z0-9]{2,}\s+[A-Z][a-z0-9]{2,})', line)
```
It works for the first three lines, but it fails on the last line. | You can use grouping for repeating structure as given below:
```
compiled = re.compile('(?:(([A-Z][a-z0-9]{2,})\s*){2,})')
for line in data:
match = compiled.search(line)
if match:
print match.group()
else:
print None
```
Output:
```
None
Surya Soft
Ery Wulandari
Link Building Partner
``` | You can use:
```
re.findall(r'((?:[A-Z]\w{2,}\s*){2,4})', line)
```
It may add a trailing whitespace that can be trimmed with `.strip()` | Regex to match possible names from a string | [
"",
"python",
"regex",
""
] |
I have two tables classroom and computer and currently computer is a variable in the table classroom
```
CREATE TABLE classroom_tbl
(
room_id INT AUTO_INCREMENT PRIMARY KEY,
teacher_name VARCHAR(30),
subject_name VARCHAR(30),
computer VARCHAR(30)
);
```
and I want to make it so instead of being a VARCHAR in the classroom table the variable computer calls the computer table
```
CREATE TABLE computer_tbl
(
computer_id INT AUTO_INCREMENT PRIMARY KEY,
computer_type VARCHAR(30),
computer_status INT
);
```
Is there any way to do this? I've tried UNION and INNER JOIN but I always get an error that says that my columns are different sizes. Which makes sense because classroom is bigger than computer. Thanks for any help you can give! | I believe you are new to SQL and have some experience in programming. SQL does not have variables like we do have in programming langauages such as C/C++/java. Rather SQL tables relate to each other with relationships such as foreign key relationship. You need to go through SQL tutorials for understanding more about relationships, here is a link to one of those:
<http://www.functionx.com/sql/Lesson11.htm>
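To see what such a relationship buys you, here is a minimal SQLite sketch (table and column names borrowed from the question; the data is invented):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('PRAGMA foreign_keys = ON')  # SQLite only enforces FKs when asked
conn.execute('CREATE TABLE computer_tbl (computer_id INTEGER PRIMARY KEY, computer_type TEXT)')
conn.execute("""CREATE TABLE classroom_tbl (
    room_id INTEGER PRIMARY KEY,
    teacher_name TEXT,
    computer_id INTEGER REFERENCES computer_tbl(computer_id))""")
conn.execute("INSERT INTO computer_tbl VALUES (1, 'desktop')")
conn.execute("INSERT INTO classroom_tbl VALUES (1, 'Smith', 1)")  # computer 1 exists: OK
try:
    conn.execute("INSERT INTO classroom_tbl VALUES (2, 'Jones', 99)")  # no computer 99
    blocked = False
except sqlite3.IntegrityError:
    blocked = True  # the database rejects the dangling reference
```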
In order to use the JOINS you need to have Primary-foreign key relationship between the two tables. You may have to create your table like this:
```
CREATE TABLE classroom_tbl
(
room_id INT AUTO_INCREMENT PRIMARY KEY,
teacher_name VARCHAR(30),
subject_name VARCHAR(30),
computer_id INT REFERENCES computer_tbl(computer_id)
);
``` | Since a given classroom can have many computers in it, but a given computer can only be in one classroom at a time, it makes more sense to have classroom as a foreign key on the computer table, rather than vice versa:
```
CREATE TABLE classroom_tbl
(
room_id INT AUTO_INCREMENT PRIMARY KEY,
teacher_name VARCHAR(30),
subject_name VARCHAR(30)
);
CREATE TABLE computer_tbl
(
computer_id INT AUTO_INCREMENT PRIMARY KEY,
computer_type VARCHAR(30),
computer_status INT,
room_id INT
);
```
To see which computers are in each room, try a query like:
```
select r.room_id, r.teacher_name, r.subject_name,
c.computer_id, c.computer_type, c.computer_status
from classroom_tbl r
left join computer_tbl c on r.room_id = c.room_id
``` | Use a Table as a Variable in SQL | [
"",
"mysql",
"sql",
"database",
""
] |
My brain refuses to cooperate with me today to actually think this through properly so I was hoping to get some feedback: I want to return a single record from each member for their most recent entrance into the system but so far I obviously have only been able to return a single record for the most recent datetime of any member. I know the query isn't quite right but my brain refuses to really cooperate...
The SQL:
```
SELECT
cm.FNAME,
cm.LNAME,
cl.entry_access_point,
cl.date_entered,
cl.res_id,
dbo.HourMinuteSecond(cl.date_entered, getUTCDate())[Day:Hour:Minute:Second]
FROM
cred.members cm, cred.allocate_log cl
WHERE
cm.member_id = cl.member_id AND
cl.date_exited IS NULL AND
cl.evt_id = @eventId AND
date_entered IN (SELECT max(cl.date_entered)
FROM cred.allocate_log cl, cred.members cm
WHERE cl.member_id = cm.member_id)
ORDER BY
cl.date_entered;
``` | Just add rule for member\_id in the subquery:
```
SELECT
cm.FNAME,
cm.LNAME,
cl.entry_access_point,
cl.date_entered,
cl.res_id,
dbo.HourMinuteSecond(cl.date_entered, getUTCDate())[Day:Hour:Minute:Second]
FROM cred.members cm, cred.allocate_log cl
WHERE cm.member_id = cl.member_id AND
cl.date_exited IS NULL AND
cl.evt_id = @eventId AND
date_entered IN (
SELECT max(cl.date_entered)
FROM cred.allocate_log cl, cred.members cms
WHERE cl.member_id = cms.member_id and cms.member_id = cm.member_id)
ORDER BY cl.date_entered
```
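In miniature, the correlated `MAX` subquery pattern looks like this in SQLite (invented rows; only each member's most recent entry survives):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE allocate_log (member_id INTEGER, date_entered TEXT)')
conn.executemany('INSERT INTO allocate_log VALUES (?, ?)',
                 [(1, '2013-01-01'), (1, '2013-03-01'), (2, '2013-02-01')])
rows = conn.execute("""
    SELECT member_id, date_entered FROM allocate_log cl
    WHERE date_entered = (SELECT MAX(date_entered)
                          FROM allocate_log cls
                          WHERE cls.member_id = cl.member_id)
    ORDER BY member_id""").fetchall()
```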
---
But you also need a condition for the `evt_id` in the subquery, and it can be simplified like this:
```
SELECT
cm.FNAME,
cm.LNAME,
cl.entry_access_point,
cl.date_entered,
cl.res_id,
dbo.HourMinuteSecond(cl.date_entered, getUTCDate())[Day:Hour:Minute:Second]
FROM cred.members cm, cred.allocate_log cl
WHERE cm.member_id = cl.member_id AND
cl.date_exited IS NULL AND
cl.evt_id = @eventId AND
date_entered >= ALL (
SELECT cl.date_entered
FROM cred.allocate_log cls
WHERE cls.member_id = cm.member_id AND cls.evt_id = cl.evt_id)
ORDER BY cl.date_entered
```
---
Change the alias for table in the subquery:
```
SELECT
cm.FNAME,
cm.LNAME,
cl.entry_access_point,
cl.date_entered,
cl.res_id,
dbo.HourMinuteSecond(cl.date_entered, getUTCDate())[Day:Hour:Minute:Second]
FROM cred.members cm, cred.allocate_log cl
WHERE cm.member_id = cl.member_id AND
cl.date_exited IS NULL AND
cl.evt_id = @eventId AND
date_entered >= ALL (
SELECT cls.date_entered
FROM cred.allocate_log cls
WHERE cls.member_id = cm.member_id AND cls.evt_id = cl.evt_id)
ORDER BY cl.date_entered
``` | The best way to do this sort of query is using `row_number()`. Also, you should learn to use `join` syntax instead of putting joins in the `where` clause.
```
select t.*
from (SELECT cm.FNAME, cm.LNAME, cl.entry_access_point, cl.date_entered, cl.res_id,
dbo.HourMinuteSecond(cl.date_entered, getUTCDate()) as [Day:Hour:Minute:Second],
ROW_NUMBER() over (partition by cm.member_id order by cl.date_enetered desc) as seqnum
FROM cred.members cm join
cred.allocate_log cl
on cm.member_id = cl.member_id
WHERE cl.date_exited IS NULL AND
cl.evt_id = @eventId
) t
ORDER BY date_entered;
``` | sql select most recent datetime for each member | [
"",
"sql",
"sql-server",
"ssms",
""
] |
**Specs:** Ubuntu 13.04, Python 3.3.1
**General Background:** total beginner to Python;
**Question-specific background:** I'm exhausted trying to solve this problem, and I'm aware that, besides its instructional value for learning Python, this problem is boring and does not in any way make this world a better place :-( So I'd be even more grateful if you could share some guidance on this exhausting problem. But really don't want to waste your time if you are not interested in this kind of problems.
**What I intended to do:** "Calculate the number of basic American coins given a value less than 1 dollar. A penny is worth 1 cent, a nickel is worth 5 cents, a dime is worth 10 cents,
and a quarter is worth 25 cents. It takes 100 cents to make 1 dollar. So given an amount less than 1 dollar (if using floats, convert to integers for this exercise), calculate the number of each type of coin necessary to achieve the amount, maximizing the number of larger denomination coins. For example, given $0.76, or 76 cents, the correct output would be "3 quarters and 1 penny." Output such as "76 pennies" and "2 quarters, 2 dimes, 1 nickel, and 1 penny" are not acceptable."
**What I was able to come up with:**
```
penny = 1
nickel = 5
dime = 10
quarter = 25
i = input("Please enter an amount no more than 1 dollar(in cents): ")
i = int(i)
if i > 100:
print ("Please enter an amount equal or less than 100. ")
elif i >= quarter:
quarter_n = i % quarter
i = i - quarter * quarter_n
if i >= dime:
dime_n = i % dime
i = i - dime * dime_n
if i >= nickel:
nickel_n = i % nickel
i = i - nickel * nickel_n
if i >= penny:
penny_n = i % penny
print (quarter_n,"quarters,",dime_n,"dimes",nickel_n,"nickels",penny_n,"pennies")
else:
if i >= penny:
penny_n = i % penny
print (quarter_n,"quarters,",dime_n,"dimes",penny_n,"pennies")
else:
if i >= nickel:
nickel_n = i % nickel
i = i - nickel * nickel_n
if i >= penny:
penny_n = i % penny
print (quarter_n,"quarters,",nickel_n,"nickels",penny_n,"pennies")
else:
if i >= penny:
penny_n = i % penny
print (quarter_n,"quarters,",penny_n,"pennies")
else:
if i >= dime:
dime_n = i % dime
i = i - dime * dime_n
if i >= nickel:
nickel_n = i % nickel
i = i - nickel * nickel_n
if i >= penny:
penny_n = i % penny
print (dime_n,"dimes",nickel_n,"nickels",penny_n,"pennies")
else:
if i >= penny:
penny_n = i % penny
print (dime_n,"dimes",penny_n,"pennies")
else:
if i >= nickel:
nickel_n = i % nickel
i = i - nickel * nickel_n
if i >= penny:
penny_n = i % penny
print (nickel_n,"nickels",penny_n,"pennies")
else:
if i >= penny:
penny_n = i % penny
print (penny_n,"pennies")
```
This solution, though the best I could come up with, does not work as expected when fed actual input numbers, and I'm unable to figure out why. Besides, I can tell from the sheer size of the code alone that something is wrong. I searched for similar questions but the closest I got was one that dealt with very difficult math which I couldn't understand.
**My question:** I know I can't ask for a complete solution because that's down to me to figure it out. I'll appreciate either a) general pointer on the correct line of thinking b) critiques to my current code/line of thinking so that I might be able to improve it.
Thank you for taking the time, even just reading this! | I think your solution may actually be working if you do a "find and replace" for all the mod operators `%`, switching in integer division `//`.
Say you have `76` cents and want to find the number of quarters. Using `76 % 25` results in `1` whereas `76 // 25` is `3`.
With regard to the code, you should probably be thinking in terms of iterating over the possible coin values rather than a huge `if`, `elif` mess.
Try something like this. The only part that may need some explaining is using `divmod` but its really just a `tuple` of the integer division, modulo result. You can use that to get the number of coins and the new amount, respectively.
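To see what `divmod` gives you, using the 76-cent example amount from the question (a quick check):

```python
# divmod(amount, coin_value) pairs the integer division and the remainder
number_coins, amount = divmod(76, 25)
print(number_coins, amount)  # 3 1
```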
```
def coins_given(amount):
coins = [(25, 'quarter'), (10, 'dime'), (5, 'nickel'), (1, 'penny')]
answer = {}
for coin_value, coin_name in coins:
if amount >= coin_value:
number_coin, amount = divmod(amount, coin_value)
answer[coin_name] = number_coin
return answer
print(coins_given(76))
# {'quarter': 3, 'penny': 1}
``` | I think your algorithm is too complicated;
you don't need all the elifs and elses.
Just check with an if and then modify the remaining amount until you get to zero,
something like this:
```
penny = 1
nickel = 5
dime = 10
quarter = 25
q = 0
d = 0
n = 0
p = 0
i = input("Please enter an amount no more than 1 dollar(in cents): ")
i = int(i)
if i>=25:
q = i/quarter
i %= quarter
if i>=10:
d = i/dime
i%=dime
if i>=5:
n = i/nickel
i %= nickel
if i>0:
p = i/penny
i = 0
print "The coins are %i quarters, %i dimes, %i nickels and %i pennys." %(q , d, n, p)
>>>
Please enter an amount no more than 1 dollar(in cents): 99
The coins are 3 quarters, 2 dimes, 0 nickels and 4 pennys.
>>>
Please enter an amount no more than 1 dollar(in cents): 76
The coins are 3 quarters, 0 dimes, 0 nickels and 1 pennys.
``` | Python-Modulus-Stuck with a coin-for-given-dollar-amount scenario | [
"",
"python",
"python-3.x",
"conditional-statements",
"modulus",
""
] |
Here is the script. I have been attempting to modify the print line ( in bold ).
reverseNames() is from the IPy module. I am unclear how to implement in this example.
print addy.reverseNames() % (addy)
```
#!/usr/bin/env python
import sys
import re
try:
if sys.argv[1:]:
print "File: %s" % (sys.argv[1])
logfile = sys.argv[1]
else:
logfile = raw_input("Please enter a log file to parse, e.g /var/log/secure: ")
try:
file = open(logfile, "r")
ips = []
for text in file.readlines():
text = text.rstrip()
regex = re.findall(r'(?:[\d]{1,3})\.(?:[\d]{1,3})\.(?:[\d]{1,3})\.(?:[\d]{1,3})$',text)
if regex is not None and regex not in ips:
ips.append(regex)
for ip in ips:
outfile = open("/tmp/blocked_ips_test", "a")
addy = "".join(ip)
if addy is not '':
**print "IP: %s" % (addy)**
outfile.write(addy)
outfile.write("\n")
finally:
file.close()
outfile.close()
except IOError, (errno, strerror):
print "I/O Error(%s) : %s" % (errno, strerror)
``` | Since you already capture the individual octets as a list of strings, just reverse the list and join on a dot. If you want the `.in-addr.arpa.` suffix tacked on, that should be trivial.
```
addy = '.'.join(reversed(ip)) + '.in-addr.arpa.'
```
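For example, with a hypothetical list of octet strings (standing in for the `ip` value produced by the regex in the question):

```python
# hypothetical octets, as captured by the question's regex
ip = ['10', '20', '30', '40']
addy = '.'.join(reversed(ip)) + '.in-addr.arpa.'
print(addy)  # 40.30.20.10.in-addr.arpa.
```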
If you do wish to use the [IPy](https://pypi.python.org/pypi/IPy/) API, you need to convert the string representation to an IPy object first.
```
from IPy import IP
i = IP('.'.join(ip))
addy = i.reverseNames()
``` | The ipaddress module in Python standard library also supports IPv6.
```
from ipaddress import ip_address
print(ip_address("127.0.0.1").reverse_pointer)
print(ip_address("2a03:2880:f10a:83:face:b00c::25de").reverse_pointer)
```
```
1.0.0.127.in-addr.arpa
e.d.5.2.0.0.0.0.c.0.0.b.e.c.a.f.3.8.0.0.a.0.1.f.0.8.8.2.3.0.a.2.ip6.arpa
``` | I need to reverse an IP address but am unclear how to use IPy ).reverseNames() | [
"",
"python",
""
] |
The recursive formula for computing the number of ways of choosing `k` items out of a set of `n` items, denoted `C(n,k)`, is:
```
         { 1                        if k = 0
C(n,k) = { 0                        if n < k
         { C(n-1,k-1) + C(n-1,k)    otherwise
```
I’m trying to write a recursive function `C` that computes `C(n,k)` using this recursive formula. As far as I can tell the code I have written should work, but it doesn’t give me the correct answers.
This is my code:
```
def combinations(n,k):
# base case
if k ==0:
return 1
elif n<k:
return 0
# recursive case
else:
return combinations(n-1,k-1)+ combinations(n-1,k)
```
The answers should look like this:
```
>>> c(2, 1)
0
>>> c(1, 2)
2
>>> c(2, 5)
10
```
but I get other numbers... don’t see where the problem is in my code. | I would try reversing the arguments, because as written `n < k`.
I think you mean this:
```
>>> c(2, 1)
2
>>> c(5, 2)
10
``` | Your calls, e.g. `c(2, 5)`, mean that `n=2` and `k=5` (as per your definition of `c` at the top of your question). So `n < k`, and as such the result should be `0`. And that’s exactly what happens with your implementation. All the other examples yield the correct results as well.
Are you sure that the arguments of your example test cases have the correct order? Because they are all `c(k, n)`-calls. So either those calls are wrong, or the order in your definition of `c` is off. | How many combinations are possible? | [
"",
"python",
"recursion",
"python-3.x",
""
] |
I have the following query:
```
SELECT DISTINCT TOP 20
f.id_service as f_id_service,
f.id_city as f_id_city,
f.name as f_name,
f.address as f_address,
f.business as f_business,
f.web as f_web,
f.phone as f_phone,
f.id_firm as f_id_firm
FROM
Firm f
left join
Price p ON p.id_service = f.id_service
AND p.id_city = f.id_city
AND p.id_firm = f.id_firm
WHERE
f.blocked = '0'
AND (f.name LIKE 'авто%'
OR f.phone LIKE 'авто%')
AND (f.phone != '' OR f.address != '')
AND f.id_city = '73041'
AND f.dogovor = '1'
ORDER BY f.name ASC
```
This query shows the TOP 20 rows with unique `f.name`.
When I need to select the next 20 rows with unique f.name, I use this query:
```
SELECT DISTINCT TOP 20
f.id_service AS f_id_service,
f.id_city AS f_id_city,
f.name AS f_name,
f.address AS f_address,
f.business AS f_business,
f.web AS f_web,
f.phone AS f_phone,
f.id_firm AS f_id_firm
FROM
Firm f
LEFT JOIN
Price p ON p.id_service = f.id_service
AND p.id_city = f.id_city
AND p.id_firm = f.id_firm
WHERE
f.name NOT IN (SELECT DISTINCT TOP 20
f.name
FROM
Firm f
WHERE
f.blocked = '0'
                              AND (f.name LIKE 'авто%'
                                   OR f.phone LIKE 'авто%')
AND (f.phone != '' OR f.address != '')
AND f.id_city = '73041'
ORDER BY f.name ASC)
AND f.dogovor = '1'
AND f.blocked = '0'
    AND (f.name LIKE 'авто%'
         OR f.phone LIKE 'авто%')
AND (f.phone != '' OR f.address != '')
AND f.id_city = '73041'
AND f.dogovor = '1'
ORDER BY f.name ASC
```
But I see that the second query still returns rows with `f.name` values that already appeared in the first query's result.
For example:
Result first query (select TOP 20 rows):

Result second query (select NEXT TOP 20 rows):

As you can see in the second image, the second query returns rows that were already in the first image (the result of the first query).
Please tell me where the error is and how to write this correctly. | Try this one -
```
SELECT DISTINCT TOP 20
f.id_service AS f_id_service,
f.id_city AS f_id_city,
f.name AS f_name,
f.[address] AS f_address,
f.business AS f_business,
f.web AS f_web,
f.phone AS f_phone,
f.id_firm AS f_id_firm
FROM dbo.Firm f
WHERE f.blocked = '0'
AND (f.name LIKE 'авто%' OR f.phone LIKE 'авто%')
AND f.phone + f.[address] != ''
AND f.id_city='73041'
AND f.dogovor='1'
ORDER BY f.name
SELECT *
FROM (
SELECT DISTINCT
f.id_service AS f_id_service,
f.id_city AS f_id_city,
f.name AS f_name,
f.[address] AS f_address,
f.business AS f_business,
f.web AS f_web,
f.phone AS f_phone,
f.id_firm AS f_id_firm,
row_id = ROW_NUMBER() OVER (ORDER BY f.name)
FROM dbo.Firm f
WHERE f.blocked = '0'
AND (f.name LIKE 'авто%' OR f.phone LIKE 'авто%')
AND f.phone + f.[address] != ''
AND f.id_city='73041'
AND f.dogovor='1'
) d
WHERE d.row_id BETWEEN 21 AND 40
``` | First off, it might be a language barrier but let's be clear: your first query doesn't display unique f.name but unique combinations of all columns.
Also, using TOP and DISTINCT in such manner is a very bad idea. There is no way to be sure that you will be getting the same results every single time.
Finally, your NOT IN clause is missing `AND f.dogovor='1'`. | How can I page through distinct filtered results? | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
My dictionary with tuple as a key is as follows:
The key represents the x and y coordinates: (x, y).
```
D1 = {(10,12): 23, (8,14): 45, (12, 9): 29}
D2 = {(2, 8) : 67, (12, 10): 23, (14, 8): 56}
```
Now, from the above Dictionary, I would like to perform the following
1. Sort dictionary D1 based on Keys
2. Sort dictionary D2 based on Keys
3. Simply, print the keys and values in D1
4. Simply, print the keys and values in D2
5. Then do the following:
(pseudocode)
```
total = 0
For each key (x, y) in D1,
if D2.has_key((y, x)):
total = total + D1[(x, y)] * D2[(y, x)]
Print total
``` | * (1/2) To sort a dict you have to use `collections.OrderedDict` (because normal dicts aren't sorted)
Code:
```
from collections import OrderedDict
D1 = {(10,12): 23, (8,14): 45, (12, 9): 29}
D1_sorted = OrderedDict(sorted(D1.items()))
```
* (3/4) Code: `print(D1)`
* (5) convert your code to python
```
total = 0
for x, y in D1.keys():
try:
total = total + D1[(x, y)] * D2[(y, x)]
except KeyError:
pass
print(total)
``` | I think you're after (for your pseudo-code):
```
D1 = {(10,12): 23, (8,14): 45, (12, 9): 29}
D2 = {(2, 8) : 67, (12, 10): 23, (14, 8): 56}
total = sum(D1[k] * D2.get(k[::-1], 0) for k in D1.iterkeys())
# 3049
``` | Parse dictionary with Tuple as key | [
"",
"python",
"tuples",
""
] |
I'm trying to return a table with the depth of a node in a hierarchy represented using the nested set model, I'm following this tutorial but the query used in the section 'Finding the depth of Nodes' doesn't work for me: <http://mikehillyer.com/articles/managing-hierarchical-data-in-mysql/>
```
SELECT node.GroupName, (COUNT(parent.GroupName) - 1) AS depth
FROM CompanyGroup AS node,
CompanyGroup AS parent
WHERE node.LeftID BETWEEN parent.LeftID AND parent.RightID
GROUP BY node.GroupName
ORDER BY node.LeftID;
```
Running this query I get the error "*Column 'CompanyGroup.GroupName' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.*"
Can anyone explain why please?
Edit: I quoted the wrong column in the error message, my apologies; the actual error is: "*Column "CompanyGroup.LeftID" is invalid...*" | Try this one -
```
SELECT
node.GroupName
, depth = COUNT(parent.GroupName) - 1
FROM CompanyGroup node
JOIN CompanyGroup parent ON node.LeftID BETWEEN parent.LeftID AND parent.RightID
GROUP BY node.GroupName
ORDER BY MIN(node.LeftID) --<--
```
Or try this -
```
SELECT
node.GroupName
, depth = COUNT(parent.GroupName) - 1
FROM CompanyGroup node
JOIN CompanyGroup parent ON node.LeftID BETWEEN parent.LeftID AND parent.RightID
GROUP BY node.GroupName, node.LeftID
ORDER BY node.LeftID
``` | Are you running a different query? You should not be receiving that particular error, but rather something like `"Column "CompanyGroup.LeftID" is invalid in the ORDER BY..."`
This query should work:
```
SELECT node.GroupName, (COUNT(parent.GroupName) - 1) AS depth
FROM CompanyGroup AS node,
CompanyGroup AS parent
WHERE node.LeftID BETWEEN parent.LeftID AND parent.RightID
GROUP BY node.GroupName
;
```
* [SQL Fiddle Demo](http://sqlfiddle.com/#!3/9fcbf2/2) | Column is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause | [
"",
"sql",
"sql-server",
"sql-server-2005",
""
] |
Background: total beginner to Python; searched about this question but the answer I found was more about "what" than "why";
What I intended to do: Creating a function that takes test score input from the user and output letter grades according to a grade scale/curve; Here is the code:
```
score = input("Please enter test score: ")
score = int(score)
def letter_grade(score):
if 90 <= score <= 100:
print ("A")
elif 80 <= score <= 89:
print ("B")
elif 70 <= score <= 79:
print("C")
elif 60 <= score <= 69:
print("D")
elif score < 60:
print("F")
print (letter_grade(score))
```
This, when executed, returns:
```
Please enter test score: 45
F
None
```
The `None` is not intended. And I found that if I use `letter_grade(score)` instead of `print (letter_grade(score))` , the `None` no longer appears.
The closest answer I was able to find said something like "Functions in python return None unless explicitly instructed to do otherwise". But I did call a function at the last line, so I'm a bit confused here.
So I guess my question would be: what caused the disappearance of `None`? I am sure this is pretty basic stuff, but I wasn't able to find any answer that explains the "behind-the-stage" mechanism. So I'm grateful if someone could throw some light on this. Thank you! | In python the default return value of a function is `None`.
```
>>> def func():pass
>>> print(func())  # print() displays the return value
None
>>> func() #remove print and the returned value is not printed.
>>>
```
So, just use:
`letter_grade(score) #remove the print`
Another alternative is to replace all prints with `return`:
```
def letter_grade(score):
if 90 <= score <= 100:
return "A"
elif 80 <= score <= 89:
return "B"
elif 70 <= score <= 79:
return "C"
elif 60 <= score <= 69:
return "D"
elif score < 60:
return "F"
else:
#This is returned if all other conditions aren't satisfied
return "Invalid Marks"
```
Now use `print()`:
```
>>> print(letter_grade(91))
A
>>> print(letter_grade(45))
F
>>> print(letter_grade(75))
C
>>> print(letter_grade(1000))
Invalid Marks
``` | Functions without a return statement (known as void functions) return None from the function.
Values like None, True and False are not strings: they are special values and keywords in python which are being reserved.
If we get to the end of any function and we have not explicitly executed any return statement, Python will automatically return the value None. For better understanding see the example below. Here **stark** isn't returning anything so the output will be None
```
def stark(): pass
a = stark()
print a
```
the output of above code is:
```
None
``` | Python Script returns unintended "None" after execution of a function | [
"",
"python",
"printing",
"syntax",
"python-3.x",
"nonetype",
""
] |
using C or python (python preferred), How would i encode a binary file to audio that is then outputted though the headphone jack, also how would i decode the audio back to binary using input from the microphone jack, so far i have learned how to covert a text file to binary using python, It would be similar to RTTY communication.
This is so that I can record data onto a cassette tape.
```
import binascii
a = open('/Users/kyle/Desktop/untitled folder/unix commands.txt', 'r')
f = open('/Users/kyle/Desktop/file_test.txt', 'w')
c = a.read()
b = bin(int(binascii.hexlify(c), 16))
f.write(b)
f.close()
``` | From your comments, you want to process the binary data bit by bit, turning each bit into a high or low sound.
You still need to decide exactly what those high and low sounds are, and how long each one sounds for (and whether there's a gap in between, and so on). If you make it slow, like 1/4 of a second per sound, then you're treating them as notes. If you make it very fast, like 1/44100 of a second, you're treating them as samples. The human ear can't hear 44100 different sounds in a second; instead, it hears a single sound at up to 22050Hz.
Once you've made those decisions, there are two parts to your problem.
First, you have to generate a stream of samples—for example, a stream of 44100 16-bit integers for every second. For really simple things, like playing a chunk of a raw PCM file in 44k 16-bit mono format, or generating a square wave, this is trivial. For more complex cases, like playing a chunk of an MP3 file or synthesizing a sound out of sine waves and filters, you'll need some help. The [`audioop`](http://docs.python.org/3/library/audioop.html) module, and a few others in the stdlib, can give you the basics; beyond that, you'll need to search PyPI for appropriate modules.
Second, you have to send that sample stream to the headphone jack. There's no built-in support for this in Python. On some platforms, you can do this just by opening a special file and writing to it. But, more generally, you will need to find a third-party library on PyPI.
The simpler modules work for one particular type of audio system. Mac and Windows each have their own standards, and Linux has a half dozen different ones. There are also some Python modules that talk to higher-level wrappers; you may have to install and set up the wrapper, but once you do that, your code will work on any system.
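If you just want to sanity-check your sample generation before involving any audio library, you can write the stream to a WAV file with the stdlib `wave` module and play it in any media player. This is only a sketch; the `tone.wav` name and the 100-sample period (which gives a 441Hz square wave at a 44100Hz rate, like the example below) are arbitrary choices:

```python
import wave

def square_wave_bytes(period, n_samples):
    # 8-bit unsigned mono samples: half a period loud, half a period silent
    half = period // 2
    return bytes(255 if (i // half) % 2 == 0 else 0 for i in range(n_samples))

# one second of a 441 Hz square wave (44100 samples / 100 samples per cycle)
data = square_wave_bytes(100, 44100)
with wave.open('tone.wav', 'wb') as w:
    w.setnchannels(1)    # mono
    w.setsampwidth(1)    # 8-bit samples
    w.setframerate(44100)
    w.writeframes(data)
```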
---
So, let's work through one really simple example. Let's say you've got PortAudio set up on your system, and you've installed [PyAudio](http://people.csail.mit.edu/hubert/pyaudio/) to talk to it. This code will play square waves of 441Hz and 220.5Hz (just above middle C and low C) for just under 1/4th of a second (just because that's really easy).
```
import binascii
import pyaudio
a = open('/Users/kyle/Desktop/untitled folder/unix commands.txt', 'r')
c = a.read()
b = bin(int(binascii.hexlify(c), 16))
sample_stream = []
high_note = (b'\xFF'*100 + b'\0'*100) * 50
low_note = (b'\xFF'*50 + b'\0'*50) * 100
for bit in b[2:]:
if bit == '1':
sample_stream.extend(high_note)
else:
sample_stream.extend(low_note)
sample_buffer = b''.join(sample_stream)
p = pyaudio.PyAudio()
stream = p.open(format=p.get_format_from_width(8),
channels=1,
rate=44100,
output=True)
stream.write(sample_buffer)
``` | So you want to transmit digital information using audio? Basically you want to implement a [MODEM](http://en.wikipedia.org/wiki/Modem) in software (no matter if it is pure software, it's still called modem).
> A modem (MOdulator-DEModulator) is a device that modulates an analog carrier signal to encode digital information, and also demodulates such a carrier signal to decode the transmitted information. The goal is to produce a signal that can be transmitted easily and decoded to reproduce the original digital data. Modems can be used over any means of transmitting analog signals, from light emitting diodes to radio. [wikipedia]
There are modems everywhere you need to transmit data over an analog media, be it sound, light or radio waves. Your TV remote probably is an infrared modem.
Modems implemented in pure software are called **soft-modems**. Most soft-modems I see in the wild are using some form of [FSK](http://en.wikipedia.org/wiki/Frequency-shift_keying) modulation:
> Frequency-shift keying (FSK) is a frequency modulation scheme in which digital information is transmitted through discrete frequency changes of a carrier wave.[1](http://en.wikipedia.org/wiki/Modem) The simplest FSK is binary FSK (BFSK). BFSK uses a pair of discrete frequencies to transmit binary (0s and 1s) information.[2](http://en.wikipedia.org/wiki/Frequency-shift_keying) With this scheme, the "1" is called the mark frequency and the "0" is called the space frequency. The time domain of an FSK modulated carrier is illustrated in the figures to the right. [wikipedia]
There are very interesting applications for data transmission through atmosphere via sound waves - I guess it is what [shopkick](http://www.shopkick.com/) uses to [verify user presence](http://www.businessinsider.com/heres-shopkicks-special-sauce-a-box-in-every-store-that-verifies-youre-really-there-2010-8).
For Python check the [GnuRadio](http://gnuradio.org/redmine/projects/gnuradio/wiki/WikiStart) project.
For a C library, look at the work of [Steve Underwood](http://www.soft-switch.org/) (but please don't contact him with silly questions). I used his soft-modem to bootstrap a [FAX to email gateway](http://ultranet.com.br/ultranew2/megafax.cfm?tar=megafax) for [Asterisk](http://www.asterisk.org/) (a fax transmission is not much more than a B/W [TIFF](http://en.wikipedia.org/wiki/Tagged_Image_File_Format) file encoded in audio for transmission over a phone line). | encode binary to audio python or C | [
"",
"python",
"c",
"audio",
"binary",
""
] |
I'm using a pandas DataFrame in which one column contains numpy arrays. When trying to sum that column via aggregation I get an error stating 'Must produce aggregated value'.
e.g.
```
import pandas as pd
import numpy as np
DF = pd.DataFrame([[1,np.array([10,20,30])],
[1,np.array([40,50,60])],
[2,np.array([20,30,40])],], columns=['category','arraydata'])
```
This works the way I would expect it to:
```
DF.groupby('category').agg(sum)
```
output:
```
           arraydata
category
1         [50 70 90]
2         [20 30 40]
```
However, since my real data frame has multiple numeric columns, arraydata is not chosen as the default column to aggregate on, and I have to select it manually. Here is one approach I tried:
```
g=DF.groupby('category')
g.agg({'arraydata':sum})
```
Here is another:
```
g=DF.groupby('category')
g['arraydata'].agg(sum)
```
Both give the same output:
```
Exception: must produce aggregated value
```
However if I have a column that uses numeric rather than array data, it works fine. I can work around this, but it's confusing and I'm wondering if this is a bug, or if I'm doing something wrong. I feel like the use of arrays here might be a bit of an edge case and indeed wasn't sure if they were supported. Ideas?
Thanks | One, perhaps more clunky, way to do it would be to iterate over the `GroupBy` object (it generates `(grouping_value, df_subgroup)` tuples). For example, to achieve what you want here, you could do:
```
grouped = DF.groupby("category")
aggregate = list((k, v["arraydata"].sum()) for k, v in grouped)
new_df = pd.DataFrame(aggregate, columns=["category", "arraydata"]).set_index("category")
```
This is very similar to what pandas is doing under the hood anyways [groupby, then do some aggregation, then merge back in], so you aren't really losing out on much.
---
### Diving into the Internals
The problem here is that pandas is checking explicitly that the output *not* be an `ndarray` because it wants to intelligently reshape your array, as you can see in this snippet from `_aggregate_named` where the error occurs.
```
def _aggregate_named(self, func, *args, **kwargs):
result = {}
for name, group in self:
group.name = name
output = func(group, *args, **kwargs)
if isinstance(output, np.ndarray):
raise Exception('Must produce aggregated value')
result[name] = self._try_cast(output, group)
return result
```
My guess is that this happens because `groupby` is explicitly set up to try to intelligently put back together a DataFrame with the same indexes and everything aligned nicely. Since it's rare to have nested arrays in a DataFrame like that, it checks for ndarrays to make sure that you are actually using an aggregate function. In my gut, this feels like a job for `Panel`, but I'm not sure how to transform it perfectly. As an aside, you can sidestep this problem by converting your output to a list, like this:
```
DF.groupby("category").agg({"arraydata": lambda x: list(x.sum())})
```
Pandas doesn't complain, because now you have an array of Python objects. [but this is really just cheating around the typecheck]. And if you want to convert back to array, just apply `np.array` to it.
```
result = DF.groupby("category").agg({"arraydata": lambda x: list(x.sum())})
result["arraydata"] = result["arraydata"].apply(np.array)
```
How you want to resolve this issue really depends on *why* you have columns of `ndarray` and whether you want to aggregate anything else at the same time. That said, you can always iterate over `GroupBy` like I've shown above. | Pandas works much more efficiently if you *don't* do this (e.g. using numeric data, as you suggest). Another alternative is to use a [Panel](http://pandas.pydata.org/pandas-docs/stable/dsintro.html#panel) object for this kind of multidimensional data.
Saying that, this looks like a bug, the Exception is being raised purely because the result is an array:
```
Exception: Must produce aggregated value
In [11]: %debug
> /Users/234BroadWalk/pandas/pandas/core/groupby.py(1511)_aggregate_named()
1510 if isinstance(output, np.ndarray):
-> 1511 raise Exception('Must produce aggregated value')
1512 result[name] = self._try_cast(output, group)
ipdb> output
array([50, 70, 90])
```
If you were to recklessly remove these two lines from the source code it works as expected:
```
In [99]: g.agg(sum)
Out[99]:
arraydata
category
1 [50, 70, 90]
2 [20, 30, 40]
```
*Note: They're almost certainly in there for a reason...* | Pandas: aggregate when column contains numpy arrays | [
"",
"python",
"numpy",
"pandas",
"aggregation",
""
] |
I want to navigate from the root directory to all other directories within and print the same.
Here's my code:
```
#!/usr/bin/python
import os
import fnmatch
for root, dir, files in os.walk("."):
print root
print ""
for items in fnmatch.filter(files, "*"):
print "..." + items
print ""
```
And here's my O/P:
```
.
...Python_Notes
...pypy.py
...pypy.py.save
...classdemo.py
....goutputstream-J9ZUXW
...latest.py
...pack.py
...classdemo.pyc
...Python_Notes~
...module-demo.py
...filetype.py
./packagedemo
...classdemo.py
...__init__.pyc
...__init__.py
...classdemo.pyc
```
Above, `.` and `./packagedemo` are directories.
However, I need to print the O/P in the following manner:
```
A
---a.txt
---b.txt
---B
------c.out
```
Above, `A` and `B` are directories and the rest are files. | This will give you the desired result
```
#!/usr/bin/python
import os
# traverse root directory, and list directories as dirs and files as files
for root, dirs, files in os.walk("."):
path = root.split(os.sep)
print((len(path) - 1) * '---', os.path.basename(root))
for file in files:
print(len(path) * '---', file)
``` | Recursive walk through a directory where you get ALL files from all dirs in the current directory and ALL dirs from the current directory, because the code above isn't as simple as it could be (imho):
```
for root, dirs, files in os.walk(rootFolderPath):
for filename in files:
doSomethingWithFile(os.path.join(root, filename))
for dirname in dirs:
doSomethingWithDir(os.path.join(root, dirname))
``` | Using os.walk() to recursively traverse directories in Python | [
"",
"python",
"os.walk",
""
] |
If I plot a 2D array and contour it, I can get access to the segmentation map via `cs = plt.contour(...); cs.allsegs`, but it's parameterized as a line. I'd like a segmap boolean mask of what's interior to the line, so I can, say, quickly sum everything within that contour.
Many thanks! | I don't think there is a really easy way, mainly because you want to mix raster and vector data. Matplotlib paths fortunately have a way to check if a point is within the path; doing this for all pixels will make a mask, but I think this method can get very slow for large datasets.
```
import matplotlib.patches as patches
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt
import numpy as np
# generate some data
X, Y = np.meshgrid(np.arange(-3.0, 3.0, 0.025), np.arange(-3.0, 3.0, 0.025))
Z1 = mlab.bivariate_normal(X, Y, 1.0, 1.0, 0.0, 0.0)
Z2 = mlab.bivariate_normal(X, Y, 1.5, 0.5, 1, 1)
# difference of Gaussians
Z = 10.0 * (Z2 - Z1)
fig, axs = plt.subplots(1,2, figsize=(12,6), subplot_kw={'xticks': [], 'yticks': [], 'frameon': False})
# create a normal contour plot
axs[0].set_title('Standard contour plot')
im = axs[0].imshow(Z, cmap=plt.cm.Greys_r)
cs = axs[0].contour(Z, np.arange(-3, 4, .5), linewidths=2, colors='red', linestyles='solid')
# get the path from 1 of the contour lines
verts = cs.collections[7].get_paths()[0]
# highlight the selected contour with yellow
axs[0].add_patch(patches.PathPatch(verts, facecolor='none', ec='yellow', lw=2, zorder=50))
# make a mask from it with the dimensions of Z
mask = verts.contains_points(list(np.ndindex(Z.shape)))
mask = mask.reshape(Z.shape).T
axs[1].set_title('Mask of everything within one contour line')
axs[1].imshow(mask, cmap=plt.cm.Greys_r, interpolation='none')
# get the sum of everything within the contour
# the mask is inverted because everything within the contour should not be masked
print np.ma.MaskedArray(Z, mask=~mask).sum()
```
Note that contour lines which 'leave' the plot at different edges by default won't make a path which follows these edges. These lines would need some additional processing.
 | Another way, perhaps more intuitive, is the [`binary_fill_holes`](https://docs.scipy.org/doc/scipy-0.16.1/reference/generated/scipy.ndimage.morphology.binary_fill_holes.html#scipy.ndimage.morphology.binary_fill_holes) function from `scipy.ndimage`.
```
import numpy as np
import scipy
image = np.zeros((512, 512))
image[contour1[:, 0], contour1[:, 1]] = 1
masked_image = scipy.ndimage.morphology.binary_fill_holes(image)
``` | How to go from a contour to an image mask in with Matplotlib | [
"",
"python",
"matplotlib",
"plot",
"mask",
"contour",
""
] |
I have a string like this: "11547QSD". I would like to split it into 2 parts, "11547" and "QSD". I got a hint with the isnumeric() function. I am placing an overview below. Please suggest the best way to split this.
```
str1 = "11547QSD" # is a valid string (in my context)
str2 = "ABC98765" # is a valid string
str3 = "111ABC111" # is not a valid string
if str1.isvalid():
str1_int = str1.integer_part()
str1_str = str1.string_part()
```
Thanks in advance | You can use regular expressions with named groups.
You basically first create regular expressions (I created two, for both cases: digits first or letters first). Then you check if the input matches. If it does, you call `groupdict()` on the resulting match object to get dictionary like `{'digits':'11547', 'letters':'QSD'}`. Then you just use it (I printed it).
Full example following the above advice:
```
>>> import re
>>> checks = [
re.compile(r'^(?P<digits>\d+)(?P<letters>\D+)$'),
re.compile(r'^(?P<letters>\D+)(?P<digits>\d+)$'),
]
>>> inputs = ['11547QSD', 'ABC98765', '111ABC111']
>>> for item in inputs:
for check in checks:
if check.match(item):
print('Digits are {digits}, letters are {letters}'.format(
**check.search(item).groupdict()
))
break
else:
print('%s is incorrect' % (item,))
Digits are 11547, letters are QSD
Digits are 98765, letters are ABC
111ABC111 is incorrect
```
## Shortened version
If you understand the above, you can shorten the code and create the resulting dict (matching string - resulting groups) like that:
```
>>> from itertools import product
>>> {item: check.search(item).groupdict()
        for (item, check) in product(inputs, checks) if check.match(item)}
{'ABC98765': {'digits': '98765', 'letters': 'ABC'},
'11547QSD': {'digits': '11547', 'letters': 'QSD'}}
```
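As an aside, the `isnumeric`/`isdigit` hint from the question can also be followed without a regex; a sketch using `itertools.groupby` (the helper name `split_parts` is illustrative, not from the answer above):

```python
from itertools import groupby

def split_parts(s):
    # Group consecutive characters by whether they are digits
    parts = [''.join(chars) for _, chars in groupby(s, str.isdigit)]
    # A valid string has exactly two runs: one of digits, one of letters
    return parts if len(parts) == 2 else None
```

`split_parts('11547QSD')` gives `['11547', 'QSD']`, while `'111ABC111'` produces three runs and is rejected.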
**Note**:
I used metacharacters `\d` and `\D`. The first basically means "digit", the second means "non-digit". The details on what they mean are [here](http://docs.python.org/2/library/re.html#regular-expression-syntax). | I think regex should be the best solution, an example:
```
import re
re.split(r'(\d+)', '11547QSD')  # ['', '11547', 'QSD'] -- filter out the empty string
```
"",
"python",
"regex",
"string",
""
] |
Suppose I have a string like the following, of varying length, but with the number of "words" always a multiple of 4.
```
9c 75 5a 62 32 3b 3a fe 40 14 46 1c 6e d5 24 de
c6 11 17 cc 3d d7 99 f4 a1 3f 7f 4c
```
I would like to chop them into strings like `9c 75 5a 62` and `32 3b 3a fe`
I could use a regex to match the exact format, but I wonder if there is a more straightforward way to do it, because regex seems like overkill for what should be a simple problem. | A straightforward way of doing this is as follows:
```
wordlist = words.split()
for i in xrange(0, len(wordlist), 4):
    print ' '.join(wordlist[i:i+4])
```
If for some reason you can't make a list of all words (e.g. an infinite stream), you could do this:
```
from itertools import groupby, izip
words = (''.join(g) for k, g in groupby(words, ' '.__ne__) if k)
for g in izip(*[iter(words)] * 4):
    print ' '.join(g)
```
Disclaimer: I didn't come up with this pattern; I found it in a similar topic a while back. It arguably relies on an implementation detail, but it would be quite a bit more ugly when done in a different way. | A slightly functional way of doing it, based on the [itertools grouper recipe](http://docs.python.org/2/library/itertools.html#recipes):
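(For reference, `grouper` is one of the itertools recipes rather than a standard-library function; in Python 3 spelling, where `izip_longest` became `zip_longest`, it is roughly:)

```python
from itertools import zip_longest

def grouper(iterable, n, fillvalue=None):
    # Collect data into fixed-length chunks, padding the last one with fillvalue
    args = [iter(iterable)] * n
    return zip_longest(*args, fillvalue=fillvalue)
```

With that definition, `list(grouper('ABCDEFG', 3, 'x'))` yields `[('A', 'B', 'C'), ('D', 'E', 'F'), ('G', 'x', 'x')]`.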
```
for x in grouper(words.split(), 4):
print ' '.join(x)
``` | Pythonic way to split a line into groups of four words? | [
"",
"python",
"string",
""
] |
What I tried to do:
> Create a calculator application. Write code that will take two numbers
> and an operator in the format: N1 OP N2, where N1 and N2 are floating point or
> integer values, and OP is one of the following: +, -, *, /, %, \**, representing
> addition, subtraction, multiplication, division, modulus/remainder, and
> exponentiation, respectively, and displays the result of carrying out that operation
> on the input operands.
What I was able to come up with:
```
def calculator(n1,op,n2):
    n1 = float(n1)
    n2 = float(n2)
    if op == "+":
        return (n1 + n2)
    elif op == "-":
        return (n1 - n2)
    elif op == "*":
        return (n1 * n2)
    elif op == "/":
        return (n1 / n2)
    elif op == "%":
        return (n1 % n2)
    elif op == "**":
        return (n1 ** n2)
```
It works. But there might be 2 potential improvements:
1. right now one has to use double quotes(`""`) when entering the operator, for example, calculator(3,"+",3). Otherwise the interpreter returns a `SyntaxError` pop-up. I tried to change `if op == "+":` into `if op == +:`, but then the interpreter returns a SyntaxError, highlighting the `:` after the `+`.
2. right now the function converts all kinds of number input into `float()`, even if integer was taken as input. How to let the function itself determine whether the input is integer or floating point, and convert it accordingly?
I read the Documentation on Functions, but it only talked about several types of arguments, and none of them seem to help solving the current problems. I'm sure this is pretty basic stuff, but as a beginner I tried and wasn't able to figure it out. | Quoting the operator is the only way to pass a symbol. It's normal for the language. Python will also figure out the correct type of the result, so no need to convert to float.
Here's your program with slight modifications. Single quotes are more "normal" for Python, no need for () around return values, and throwing exceptions for bad input is standard practice as well:
```
def calculator(n1,op,n2):
    if op == '+':
        return n1 + n2
    elif op == '-':
        return n1 - n2
    elif op == '*':
        return n1 * n2
    elif op == '/':
        return n1 / n2
    elif op == '%':
        return n1 % n2
    elif op == '**':
        return n1 ** n2
    else:
        raise ValueError('invalid operator')
```
Output:
```
>>> calculator(1,'+',2) # note result is int
3
>>> calculator(1,'/',2) # Python 3.x returns float result for division
0.5
>>> calculator(2,'*',2.5) # float result when mixed.
5.0
>>> calculator(2,'x',2.5)
Traceback (most recent call last):
File "<interactive input>", line 1, in <module>
File "C:\Users\metolone\Desktop\x.py", line 15, in calculator
raise ValueError('invalid operator')
ValueError: invalid operator
```
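Since the assignment actually delivers input as a single string in the format "N1 OP N2", here is a hedged sketch of parsing that string, converting each operand to `int` when possible and `float` otherwise (which also addresses point 2 of the question; the names `ops`, `parse_operand`, and `calc` are illustrative):

```python
from operator import add, sub, mul, truediv, mod, pow as pow_

ops = {'+': add, '-': sub, '*': mul, '/': truediv, '%': mod, '**': pow_}

def parse_operand(token):
    # Prefer int; fall back to float for values like "2.5"
    try:
        return int(token)
    except ValueError:
        return float(token)

def calc(expression):
    # expression is a single string like "2 ** 10"
    n1, op, n2 = expression.split()
    return ops[op](parse_operand(n1), parse_operand(n2))
```

`calc('2 ** 10')` returns the int `1024`, while `calc('2.5 * 2')` returns `5.0`.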
Also, building on @Ashwini's answer, you can actually just pass the operator function as op instead of the symbol:
```
from operator import mul,add as plus,truediv as div,sub as minus,pow,mod
def calculator(n1,op,n2):
    return op(n1,n2)
```
Output:
```
>>> calculator(2,plus,4)
6
>>> calculator(2,div,4)
0.5
``` | You can't use `+`,`*`,etc in your code as they are not valid identifiers, but you can use the `operator` module and a dictionary here to reduce your code:
```
from operator import mul,add,div,sub,pow,mod  # Python 2 only -- on Python 3, import truediv (there is no operator.div)
dic = {'+':add, '-':sub, '*':mul, '**':pow, '%':mod, '/':div}
def calculator(n1,op,n2):
    try:
        return dic[op](n1,n2)
    except KeyError:
        return "Invalid Operator"
```
Demo:
```
>>> calculator(3,"**",3)
27
>>> calculator(3,"*",3)
9
>>> calculator(3,"+",3)
6
>>> calculator(3,"/",3)
1
>>> calculator(3,"&",3) # & is not defined in your dict
'Invalid Operator'
``` | Python-How to convert function argument into numeric operator/float()/int()? | [
"",
"python",
"function",
"math",
"python-3.x",
"arguments",
""
] |
I am using IPython and want to run functions from one notebook in another (without cutting and pasting them between different notebooks). Is this possible and reasonably easy to do? | You can connect a qtconsole to the same kernel. Just supply this at startup:
```
ipython qtconsole --existing kernel-0300435c-3d07-4bb6-abda-8952e663ddb7.json
```
Look at the output after starting the notebook for the long string. | Starting your notebook server with:
```
ipython notebook --script
```
will save the notebooks (`.ipynb`) as Python scripts (`.py`) as well, and you will be able to import them.
Or have a look at <http://nbviewer.ipython.org/5491090/>, which contains two notebooks, one executing the other. | Reusing code from different IPython notebooks | [
"",
"python",
"ipython",
"jupyter-notebook",
""
] |
[Using Python3] I have a csv file that has two columns (an email address and a country code; the script actually makes it two columns if that is not the case in the original file, kind of) that I want to split by the value in the second column and output to separate csv files.
```
eppetj@desrfpkwpwmhdc.com us ==> output-us.csv
uheuyvhy@zyetccm.com de ==> output-de.csv
avpxhbdt@reywimmujbwm.com es ==> output-es.csv
gqcottyqmy@romeajpui.com it ==> output-it.csv
qscar@tpcptkfuaiod.com fr ==> output-fr.csv
qshxvlngi@oxnzjbdpvlwaem.com gb ==> output-gb.csv
vztybzbxqq@gahvg.com us ==> output-us.csv
... ... ...
```
Currently my code kind of does this, but instead of writing each email address to the csv it overwrites the email placed before that. Can someone help me out with this?
I am very new to programming and Python and I might not have written the code in the most pythonic way, so I would really appreciate any feedback on the code in general!
Thanks in advance!
Code:
```
import csv
def tsv_to_dict(filename):
    """Creates a reader of a specified .tsv file."""
    with open(filename, 'r') as f:
        reader = csv.reader(f, delimiter='\t') # '\t' implies tab
        email_list = []
        # Checks each list in the reader list and removes empty elements
        for lst in reader:
            email_list.append([elem for elem in lst if elem != '']) # List comprehension
    # Stores the list of lists as a dict
    email_dict = dict(email_list)
    return email_dict

def count_keys(dictionary):
    """Counts the number of entries in a dictionary."""
    return len(dictionary.keys())

def clean_dict(dictionary):
    """Removes all whitespace in keys from specified dictionary."""
    return { k.strip():v for k,v in dictionary.items() } # Dictionary comprehension

def split_emails(dictionary):
    """Splits out all email addresses from dictionary into output csv files by country code."""
    # Creating a list of unique country codes
    cc_list = []
    for v in dictionary.values():
        if not v in cc_list:
            cc_list.append(v)
    # Writing the email addresses to a csv based on the cc (value) in dictionary
    for key, value in dictionary.items():
        for c in cc_list:
            if c == value:
                with open('output-' +str(c) +'.csv', 'w') as f_out:
                    writer = csv.writer(f_out, lineterminator='\r\n')
                    writer.writerow([key])
``` | You can simplify this a lot by using a `defaultdict`:
```
import csv
from collections import defaultdict
emails = defaultdict(list)

with open('email.tsv','r') as f:
    reader = csv.reader(f, delimiter='\t')
    for row in reader:
        if row:
            if '@' in row[0]:
                emails[row[1].strip()].append(row[0].strip()+'\n')

for key,values in emails.items():
    with open('output-{}.csv'.format(key), 'w') as f:
        f.writelines(values)
```
As your separated files are not comma separated, but single columns - you don't need the csv module and can simply write the rows.
The `emails` dictionary contains a key for each country code, and a list of all the matching email addresses. To make sure the email addresses are printed correctly, we remove any whitespace and add a line break (this is so we can use `writelines` later).
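The grouping step on its own (shown here with inline sample data from the question instead of the file) can be sketched as:

```python
from collections import defaultdict

rows = [('eppetj@desrfpkwpwmhdc.com', 'us'),
        ('uheuyvhy@zyetccm.com', 'de'),
        ('vztybzbxqq@gahvg.com', 'us')]

emails = defaultdict(list)
for addr, cc in rows:
    emails[cc].append(addr)  # one list of addresses per country code
```

`emails['us']` now holds both addresses tagged `us`, ready to be written to `output-us.csv`.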
Once the dictionary is populated, it's simply a matter of stepping through the keys to create the files and then writing out the resulting list. | The problem with your code is that it keeps opening the same country output file each time it writes an entry into it, thereby overwriting whatever might have already been there.
A simple way to avoid that is to open all the output files at once for writing and store them in a dictionary keyed by the country code. Likewise, you can have another that associates each country code to a `csv.writer` object for that country's output file.
**Update:** While I agree that Burhan's approach is probably superior, I feel that you got the impression that my earlier answer was excessively long due to all the comments it had -- so here's another version of essentially the same logic, with minimal comments, to let you better discern its reasonably short true length (even with the contextmanager).
```
import csv
from contextlib import contextmanager
@contextmanager  # to manage simultaneous opening and closing of output files
def open_country_csv_files(countries):
    csv_files = {country: open('output-'+country+'.csv', 'w')
                 for country in countries}
    try:
        yield csv_files
    finally:  # close the files even if the with-block raises
        for f in csv_files.values():
            f.close()

with open('email.tsv', 'r') as f:
    email_dict = {row[0]: row[1] for row in csv.reader(f, delimiter='\t') if row}

countries = set(email_dict.values())
with open_country_csv_files(countries) as csv_files:
    csv_writers = {country: csv.writer(csv_files[country], lineterminator='\r\n')
                   for country in countries}
    for email_addr,country in email_dict.items():
        csv_writers[country].writerow([email_addr])
``` | Write key to separate csv based on value in dictionary | [
"",
"python",
"dictionary",
"python-3.x",
""
] |
I have the below code
```
$sql = <<<SQL
SELECT kit_color.color_id, color.color_name FROM kit_color
LEFT JOIN color ON kit_color.color_id = color.color_id
WHERE kit_color.id = %s
SQL;
return sprintf($sql, self::quote($id));
```
The table data is like
```
Table "product_color"
Column1 "color_id"
Column2 "product_id"
Table "kit_color"
Column1 "id"
Column2 "color_id"
Table "color"
Column1 "color_id"
Column2 "color_name"
```
Example data
`product_color, as-1, 86501`
I am able to use the `color_id`, but I need the `product_id` from that `color_id` that is being displayed. How can I do that?
I added the other table that is present. With the above code everything works, except I can't display product_color.product_id; it seems that info is not being pulled. | You could do this:
```
SELECT PC.product_id, KC.color_id, C.color_name
FROM kit_color as KC
LEFT JOIN color as C
ON KC.color_id = C.color_id
LEFT JOIN product_color as PC
ON PC.color_id = C.color_id
WHERE KC.id = %s
``` | I think you want this...
```
SELECT product_color.product_id, kit_color.color_id, color.color_name FROM kit_color
LEFT JOIN color ON kit_color.color_id = color.color_id
LEFT JOIN product_color ON product_color.color_id = color.color_id
WHERE kit_color.id = %s
``` | SQL questions for selecting correct data | [
"",
"sql",
""
] |
Assume I have a schema like:
```
group
-----
id
site
----
id
group_id (optional)
person
------
id
group_id (one of these two must exist
site_id and the other must be null)
device
------
id
group_id (one of these three must exist
site_id and the others must be null)
person_id
```
I don't like this representation, but I'm struggling to find a better one.
The two alternatives I've thought of are:
```
device
------
id
parent_table_name
parent_id
```
(but this is bad because I can't have foreign keys any more)
and:
```
entity
------
id
group
-----
entity_id
site
----
entity_id
link_entity_id (optional)
person
------
entity_id
link_entity_id (optional)
device
------
entity_id
link_entity_id
```
This is also less than perfect. It's really the Django ORMs method of inheritance, where entity is the parent of all the other classes.
Is there a better way of structuring the data, or is SQL just at odds with DAGs?
Is there a way of adding `CONSTRAINT`s to the person and device tables? | The following MySQL structure should normalize just fine. It will make your queries a little more complicated to write for some occasions, but it will make the application more powerful and able to grow exponentially without taking a hit on performance. We have a large MySQL database with many relating tables that hold foreign keys for people to various interviews, notes, and other data that works terrific! One note is that if you use group as a table name remember to use `` marks like:
```
`group`
```
That way MySQL does not reject `INNER JOIN group ON (foo=bar)` with a syntax error by treating `group` as the start of a `GROUP BY` clause. You will also have to put constraints in the front end of your application that would prevent a device being added without a parent, if that is the desired goal. But that is not too hard to do. Anyway, look at the examples and have fun experimenting/programming!
Online Demo: <http://www.sqlfiddle.com/#!2/e9e94/2/0>
Here is the proposed MySQL table structure with the smallest amount of data to account for one instance of each needed case from your question. Copy and paste it into a .sql file and import it into an empty database using phpMyAdmin.
```
-- phpMyAdmin SQL Dump
-- version 3.5.2.2
-- http://www.phpmyadmin.net
--
-- Host: 127.0.0.1
-- Generation Time: Jun 07, 2013 at 08:14 PM
-- Server version: 5.5.27
-- PHP Version: 5.4.7
SET FOREIGN_KEY_CHECKS=0;
SET SQL_MODE="NO_AUTO_VALUE_ON_ZERO";
SET time_zone = "+00:00";
/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8 */;
--
-- Database: `stackoverflow`
--
-- --------------------------------------------------------
--
-- Table structure for table `device`
--
CREATE TABLE IF NOT EXISTS `device` (
`id` int(11) NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=7 ;
--
-- Dumping data for table `device`
--
INSERT INTO `device` (`id`) VALUES
(1),
(2),
(3),
(4),
(5),
(6);
-- --------------------------------------------------------
--
-- Table structure for table `group`
--
CREATE TABLE IF NOT EXISTS `group` (
`id` int(11) NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=9 ;
--
-- Dumping data for table `group`
--
INSERT INTO `group` (`id`) VALUES
(1),
(2),
(3),
(4),
(5),
(6),
(7),
(8);
-- --------------------------------------------------------
--
-- Table structure for table `groups_have_devices`
--
CREATE TABLE IF NOT EXISTS `groups_have_devices` (
`group_id` int(11) NOT NULL,
`device_id` int(11) NOT NULL,
PRIMARY KEY (`group_id`,`device_id`),
KEY `device_id` (`device_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
--
-- Dumping data for table `groups_have_devices`
--
INSERT INTO `groups_have_devices` (`group_id`, `device_id`) VALUES
(4, 6);
-- --------------------------------------------------------
--
-- Table structure for table `groups_have_people`
--
CREATE TABLE IF NOT EXISTS `groups_have_people` (
`group_id` int(11) NOT NULL,
`person_id` int(11) NOT NULL,
PRIMARY KEY (`group_id`,`person_id`),
KEY `person_id` (`person_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
--
-- Dumping data for table `groups_have_people`
--
INSERT INTO `groups_have_people` (`group_id`, `person_id`) VALUES
(1, 2),
(5, 5);
-- --------------------------------------------------------
--
-- Table structure for table `groups_have_sites`
--
CREATE TABLE IF NOT EXISTS `groups_have_sites` (
`group_id` int(11) NOT NULL,
`site_id` int(11) NOT NULL,
PRIMARY KEY (`group_id`,`site_id`),
KEY `site_id` (`site_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
--
-- Dumping data for table `groups_have_sites`
--
INSERT INTO `groups_have_sites` (`group_id`, `site_id`) VALUES
(2, 2),
(3, 4),
(6, 6),
(7, 8);
-- --------------------------------------------------------
--
-- Table structure for table `people_have_devices`
--
CREATE TABLE IF NOT EXISTS `people_have_devices` (
`person_id` int(11) NOT NULL,
`device_id` int(11) NOT NULL,
PRIMARY KEY (`person_id`,`device_id`),
KEY `device_id` (`device_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
--
-- Dumping data for table `people_have_devices`
--
INSERT INTO `people_have_devices` (`person_id`, `device_id`) VALUES
(1, 1),
(2, 2),
(3, 3);
-- --------------------------------------------------------
--
-- Table structure for table `person`
--
CREATE TABLE IF NOT EXISTS `person` (
`id` int(11) NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=7 ;
--
-- Dumping data for table `person`
--
INSERT INTO `person` (`id`) VALUES
(1),
(2),
(3),
(4),
(5),
(6);
-- --------------------------------------------------------
--
-- Table structure for table `site`
--
CREATE TABLE IF NOT EXISTS `site` (
`id` int(11) NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=9 ;
--
-- Dumping data for table `site`
--
INSERT INTO `site` (`id`) VALUES
(1),
(2),
(3),
(4),
(5),
(6),
(7),
(8);
-- --------------------------------------------------------
--
-- Table structure for table `sites_have_devices`
--
CREATE TABLE IF NOT EXISTS `sites_have_devices` (
`site_id` int(11) NOT NULL,
`device_id` int(11) NOT NULL,
PRIMARY KEY (`site_id`,`device_id`),
KEY `device_id` (`device_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
--
-- Dumping data for table `sites_have_devices`
--
INSERT INTO `sites_have_devices` (`site_id`, `device_id`) VALUES
(3, 4),
(4, 5);
-- --------------------------------------------------------
--
-- Table structure for table `sites_have_people`
--
CREATE TABLE IF NOT EXISTS `sites_have_people` (
`site_id` int(11) NOT NULL,
`person_id` int(11) NOT NULL,
PRIMARY KEY (`site_id`,`person_id`),
KEY `person_id` (`person_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
--
-- Dumping data for table `sites_have_people`
--
INSERT INTO `sites_have_people` (`site_id`, `person_id`) VALUES
(1, 1),
(2, 3),
(5, 4),
(6, 6);
--
-- Constraints for dumped tables
--
--
-- Constraints for table `groups_have_devices`
--
ALTER TABLE `groups_have_devices`
ADD CONSTRAINT `groups_have_devices_ibfk_2` FOREIGN KEY (`device_id`) REFERENCES `device` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION,
ADD CONSTRAINT `groups_have_devices_ibfk_1` FOREIGN KEY (`group_id`) REFERENCES `group` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION;
--
-- Constraints for table `groups_have_people`
--
ALTER TABLE `groups_have_people`
ADD CONSTRAINT `groups_have_people_ibfk_2` FOREIGN KEY (`person_id`) REFERENCES `person` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION,
ADD CONSTRAINT `groups_have_people_ibfk_1` FOREIGN KEY (`group_id`) REFERENCES `group` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION;
--
-- Constraints for table `groups_have_sites`
--
ALTER TABLE `groups_have_sites`
ADD CONSTRAINT `groups_have_sites_ibfk_2` FOREIGN KEY (`site_id`) REFERENCES `site` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION,
ADD CONSTRAINT `groups_have_sites_ibfk_1` FOREIGN KEY (`group_id`) REFERENCES `group` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION;
--
-- Constraints for table `people_have_devices`
--
ALTER TABLE `people_have_devices`
ADD CONSTRAINT `people_have_devices_ibfk_2` FOREIGN KEY (`device_id`) REFERENCES `device` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION,
ADD CONSTRAINT `people_have_devices_ibfk_1` FOREIGN KEY (`person_id`) REFERENCES `person` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION;
--
-- Constraints for table `sites_have_devices`
--
ALTER TABLE `sites_have_devices`
ADD CONSTRAINT `sites_have_devices_ibfk_2` FOREIGN KEY (`device_id`) REFERENCES `device` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION,
ADD CONSTRAINT `sites_have_devices_ibfk_1` FOREIGN KEY (`site_id`) REFERENCES `site` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION;
--
-- Constraints for table `sites_have_people`
--
ALTER TABLE `sites_have_people`
ADD CONSTRAINT `sites_have_people_ibfk_2` FOREIGN KEY (`person_id`) REFERENCES `person` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION,
ADD CONSTRAINT `sites_have_people_ibfk_1` FOREIGN KEY (`site_id`) REFERENCES `site` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION;
SET FOREIGN_KEY_CHECKS=1;
/*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;
/*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;
/*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;
```
Here is a query to find all child devices of every group.
```
SELECT
`group`.`id` AS `group_id`,
`device`.`id` AS `device_id`
FROM
`group`
INNER JOIN groups_have_devices
ON (group.id=groups_have_devices.group_id)
INNER JOIN device
ON (groups_have_devices.device_id=device.id)
UNION ALL
SELECT
`group`.`id` AS `group_id`,
`device`.`id` AS `device_id`
FROM
`group`
INNER JOIN groups_have_people
ON (group.id=groups_have_people.group_id)
INNER JOIN person
ON (groups_have_people.person_id=person.id)
INNER JOIN people_have_devices
ON (person.id=people_have_devices.person_id)
INNER JOIN device
ON (people_have_devices.device_id=device.id)
UNION ALL
SELECT
`group`.`id` AS `group_id`,
`device`.`id` AS `device_id`
FROM
`group`
INNER JOIN groups_have_sites
ON (group.id=groups_have_sites.group_id)
INNER JOIN site
ON (groups_have_sites.site_id=site.id)
INNER JOIN sites_have_devices
ON (site.id=sites_have_devices.site_id)
INNER JOIN device
ON (sites_have_devices.device_id=device.id)
UNION ALL
SELECT
`group`.`id` AS `group_id`,
`device`.`id` AS `device_id`
FROM
`group`
INNER JOIN groups_have_sites
ON (group.id=groups_have_sites.group_id)
INNER JOIN site
ON (groups_have_sites.site_id=site.id)
INNER JOIN sites_have_people
ON (site.id=sites_have_people.site_id)
INNER JOIN person
ON (sites_have_people.person_id=person.id)
INNER JOIN people_have_devices
ON (person.id=people_have_devices.person_id)
INNER JOIN device
ON (people_have_devices.device_id=device.id)
ORDER BY
group_id
```
And here is a query to get all devices and their direct parent.
```
SELECT
device.id AS device_id,
person.id AS person_id,
NULL AS site_id,
NULL AS group_id
FROM
device
INNER JOIN people_have_devices
ON (device.id=people_have_devices.device_id)
INNER JOIN person
ON (people_have_devices.person_id=person.id)
UNION ALL
SELECT
device.id AS device_id,
NULL AS person_id,
site.id AS site_id,
NULL AS group_id
FROM
device
INNER JOIN sites_have_devices
ON (device.id=sites_have_devices.device_id)
INNER JOIN site
ON (sites_have_devices.site_id=site.id)
UNION ALL
SELECT
device.id AS device_id,
NULL AS person_id,
NULL AS site_id,
group.id AS group_id
FROM
device
INNER JOIN groups_have_devices
ON (device.id=groups_have_devices.device_id)
INNER JOIN `group`
ON (groups_have_devices.group_id=group.id)
```
You can further get the devices that are **direct** children for a particular person, group, or site like this
```
SELECT
device_id
FROM (
SELECT
device.id AS device_id,
NULL AS person_id,
site.id AS site_id,
NULL AS group_id
FROM
device
INNER JOIN sites_have_devices
ON (device.id=sites_have_devices.device_id)
INNER JOIN site
ON (sites_have_devices.site_id=site.id)
) sub_query
WHERE
sub_query.site_id='3'
``` | This is a typical type/subtype situation. Your second option is better and you could take it one step further. Think in terms of OO programming if you are more familiar with these concepts.
This is how I would classify your entities. "Abstract" entities are in brackets.
```
(Owner) Device
|
+-----------+
| |
(Afffiliation) Person
|
+-------+
| |
Group Site
```
Here's how to read it:
* Types of owners are: a Person, or an Affiliation (I can't find a better name, sorry). A Person "is a" Owner, and an Affiliation "is a" owner.
* Types of Affiliations are: a Group, or a Site
* A Person is affiliated with an Affiliation, either a Group or a Site
* A Device has a Owner, either a Group, or a Site, or a Person
How to translate this into tables:

Now you *could* stick to your first option. MySQL does not support the `CHECK()` syntax for declaring arbitrary constraints, but the same effect can be achieved through the use of triggers, However, the syntax is cumbersome, and the performance is dubious. | Tables with a single parent in one of n other tables | [
"",
"mysql",
"sql",
""
] |
I want to import a function from another file in the same directory.
Usually, one of the following works:
```
from .mymodule import myfunction
```
```
from mymodule import myfunction
```
...but the other one gives me one of these errors:
```
ImportError: attempted relative import with no known parent package
```
```
ModuleNotFoundError: No module named 'mymodule'
```
```
SystemError: Parent module '' not loaded, cannot perform relative import
```
Why is this? | > unfortunately, this module needs to be inside the package, and it also
> needs to be runnable as a script, sometimes. Any idea how I could
> achieve that?
It's quite common to have a layout like this...
```
main.py
mypackage/
__init__.py
mymodule.py
myothermodule.py
```
...with a `mymodule.py` like this...
```
#!/usr/bin/env python3
# Exported function
def as_int(a):
    return int(a)

# Test function for module
def _test():
    assert as_int('1') == 1

if __name__ == '__main__':
    _test()
```
...a `myothermodule.py` like this...
```
#!/usr/bin/env python3
from .mymodule import as_int
# Exported function
def add(a, b):
    return as_int(a) + as_int(b)

# Test function for module
def _test():
    assert add('1', '1') == 2

if __name__ == '__main__':
    _test()
```
...and a `main.py` like this...
```
#!/usr/bin/env python3
from mypackage.myothermodule import add
def main():
    print(add('1', '1'))

if __name__ == '__main__':
    main()
```
...which works fine when you run `main.py` or `mypackage/mymodule.py`, but fails with `mypackage/myothermodule.py`, due to the relative import...
```
from .mymodule import as_int
```
The way you're supposed to run it is by using the -m option and giving the path in the Python module system (rather than in the filesystem)...
```
python3 -m mypackage.myothermodule
```
...but it's somewhat verbose, and doesn't mix well with a shebang line like `#!/usr/bin/env python3`.
An alternative is to avoid using relative imports, and just use...
```
from mypackage.mymodule import as_int
```
Either way, you'll need to run from the parent of `mypackage`, or add that directory to `PYTHONPATH` (either one will ensure that `mypackage` is in the sys.path [module search path](https://docs.python.org/3/library/sys_path_init.html)). Or, if you want it to work "out of the box", you can frob `sys.path` in code first with this...
```
import sys
import os
SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
sys.path.append(os.path.dirname(SCRIPT_DIR))
from mypackage.mymodule import as_int
```
It's kind of a pain, but there's a clue as to why in [an email](http://mail.python.org/pipermail/python-3000/2007-April/006793.html) written by a certain Guido van Rossum...
> I'm -1 on this and on any other proposed twiddlings of the `__main__`
> machinery. The only use case seems to be running scripts that happen
> to be living inside a module's directory, which I've always seen as an
> antipattern. To make me change my mind you'd have to convince me that
> it isn't.
Whether running scripts inside a package is an antipattern or not is subjective, but personally I find it really useful in a package I have which contains some custom wxPython widgets, so I can run the script for any of the source files to display a `wx.Frame` containing only that widget for testing purposes. | # Explanation
From [PEP 328](https://www.python.org/dev/peps/pep-0328/)
> Relative imports use a module's \_\_name\_\_ attribute to determine that
> module's position in the package hierarchy. If the module's name does
> not contain any package information (e.g. it is set to '\_\_main\_\_')
> **then relative imports are resolved as if the module were a top level
> module**, regardless of where the module is actually located on the file
> system.
At some point [PEP 338](https://www.python.org/dev/peps/pep-0338/) conflicted with [PEP 328](https://www.python.org/dev/peps/pep-0328/):
> ... relative imports rely on *\_\_name\_\_* to determine the current
> module's position in the package hierarchy. In a main module, the
> value of *\_\_name\_\_* is always *'\_\_main\_\_'*, so explicit relative imports
> will always fail (as they only work for a module inside a package)
and to address the issue, [PEP 366](https://www.python.org/dev/peps/pep-0366/) introduced the top level variable [`__package__`](https://docs.python.org/3/reference/import.html#__package__):
> By adding a new module level attribute, this PEP allows relative
> imports to work automatically if the module is executed using the *-m*
> switch. A small amount of boilerplate in the module itself will allow
> the relative imports to work when the file is executed by name. [...] When it [the attribute] is present, relative imports will be based on this attribute
> rather than the module *\_\_name\_\_* attribute. [...] When the main module is specified by its filename, then the *\_\_package\_\_* attribute will be set to *None*. [...] **When the import system encounters an explicit relative import in a
> module without \_\_package\_\_ set (or with it set to None), it will
> calculate and store the correct value** (**\_\_name\_\_.rpartition('.')[0]
> for normal modules** and *\_\_name\_\_* for package initialisation modules)
(emphasis mine)
If the `__name__` is `'__main__'`, `__name__.rpartition('.')[0]` returns an empty string. This is why there's an empty string literal in the error description:
```
SystemError: Parent module '' not loaded, cannot perform relative import
```
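The string computation behind that empty name is easy to check by hand:

```python
# For a normal module inside a package, rpartition('.') yields the parent name:
assert 'package.module'.rpartition('.')[0] == 'package'

# For a script run directly, __name__ is '__main__', which has no dot,
# so the computed parent package name is the empty string:
assert '__main__'.rpartition('.')[0] == ''
```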
The relevant part of the CPython's [`PyImport_ImportModuleLevelObject` function](https://hg.python.org/cpython/file/9d65a195246b/Python/import.c#l1494):
```
if (PyDict_GetItem(interp->modules, package) == NULL) {
    PyErr_Format(PyExc_SystemError,
                 "Parent module %R not loaded, cannot perform relative "
                 "import", package);
    goto error;
}
```
CPython raises this exception if it was unable to find `package` (the name of the package) in `interp->modules` (accessible as [`sys.modules`](https://docs.python.org/3/library/sys.html#sys.modules)). Since `sys.modules` is *"a dictionary that maps module names to modules which have already been loaded"*, it's now clear that **the parent module must be explicitly absolute-imported before performing a relative import**.
***Note:*** The patch from the [issue 18018](http://bugs.python.org/issue18018) has added [another `if` block](https://hg.python.org/cpython/file/c4e4886c6052/Python/import.c#l1494), which will be executed **before** the code above:
```
if (PyUnicode_CompareWithASCIIString(package, "") == 0) {
PyErr_SetString(PyExc_ImportError,
"attempted relative import with no known parent package");
goto error;
} /* else if (PyDict_GetItem(interp->modules, package) == NULL) {
...
*/
```
If `package` (same as above) is empty string, the error message will be
```
ImportError: attempted relative import with no known parent package
```
However, you will only see this in Python 3.6 or newer.
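On a recent Python 3 the newer message is easy to reproduce from any top-level script (a small sketch; the exact wording varies a little between versions):

```python
# Attempting an explicit relative import where __package__ is unset or empty
# (e.g. any script run by filename) raises the ImportError described above.
try:
    exec("from . import module")
except ImportError as exc:
    print(exc)  # e.g. "attempted relative import with no known parent package"
```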
# Solution #1: Run your script using -m
Consider a directory (which is a Python [package](https://docs.python.org/3/glossary.html#term-package)):
```
.
├── package
│ ├── __init__.py
│ ├── module.py
│ └── standalone.py
```
All of the files in *package* begin with the same 2 lines of code:
```
from pathlib import Path
print('Running' if __name__ == '__main__' else 'Importing', Path(__file__).resolve())
```
I'm including these two lines *only* to make the order of operations obvious. We can ignore them completely, since they don't affect the execution.
*\_\_init\_\_.py* and *module.py* contain only those two lines (i.e., they are effectively empty).
*standalone.py* additionally attempts to import *module.py* via relative import:
```
from . import module # explicit relative import
```
We're well aware that `/path/to/python/interpreter package/standalone.py` will fail. However, we can run the module with the [`-m` command line option](https://docs.python.org/3/using/cmdline.html?highlight=#cmdoption-m) that will *"search [`sys.path`](https://docs.python.org/3/library/sys.html#sys.path) for the named module and execute its contents as the `__main__` module"*:
```
vaultah@base:~$ python3 -i -m package.standalone
Importing /home/vaultah/package/__init__.py
Running /home/vaultah/package/standalone.py
Importing /home/vaultah/package/module.py
>>> __file__
'/home/vaultah/package/standalone.py'
>>> __package__
'package'
>>> # The __package__ has been correctly set and module.py has been imported.
... # What's inside sys.modules?
... import sys
>>> sys.modules['__main__']
<module 'package.standalone' from '/home/vaultah/package/standalone.py'>
>>> sys.modules['package.module']
<module 'package.module' from '/home/vaultah/package/module.py'>
>>> sys.modules['package']
<module 'package' from '/home/vaultah/package/__init__.py'>
```
`-m` does all the importing stuff for you and automatically sets `__package__`, but you can do that yourself in the
# Solution #2: Set \_\_package\_\_ manually
***Please treat it as a proof of concept rather than an actual solution. It isn't well-suited for use in real-world code.***
[PEP 366](https://www.python.org/dev/peps/pep-0366/) has a workaround to this problem, however, it's incomplete, because setting `__package__` alone is not enough. You're going to need to import at least *N* preceding packages in the module hierarchy, where *N* is the number of parent directories (relative to the directory of the script) that will be searched for the module being imported.
Thus,
1. Add the parent directory of the *Nth* predecessor of the current module to `sys.path`
2. Remove the current file's directory from `sys.path`
3. Import the parent module of the current module using its fully-qualified name
4. Set `__package__` to the fully-qualified name from *3*
5. Perform the relative import
I'll borrow files from the *Solution #1* and add some more subpackages:
```
package
├── __init__.py
├── module.py
└── subpackage
├── __init__.py
└── subsubpackage
├── __init__.py
└── standalone.py
```
This time *standalone.py* will import *module.py* from the *package* package using the following relative import
```
from ... import module # N = 3
```
We'll need to precede that line with the boilerplate code, to make it work.
```
import sys
from pathlib import Path
if __name__ == '__main__' and __package__ is None:
file = Path(__file__).resolve()
parent, top = file.parent, file.parents[3]
sys.path.append(str(top))
try:
sys.path.remove(str(parent))
except ValueError: # Already removed
pass
import package.subpackage.subsubpackage
__package__ = 'package.subpackage.subsubpackage'
from ... import module # N = 3
```
It allows us to execute *standalone.py* by filename:
```
vaultah@base:~$ python3 package/subpackage/subsubpackage/standalone.py
Running /home/vaultah/package/subpackage/subsubpackage/standalone.py
Importing /home/vaultah/package/__init__.py
Importing /home/vaultah/package/subpackage/__init__.py
Importing /home/vaultah/package/subpackage/subsubpackage/__init__.py
Importing /home/vaultah/package/module.py
```
A more general solution wrapped in a function can be found [here](https://gist.github.com/vaultah/d63cb4c86be2774377aa674b009f759a). Example usage:
```
if __name__ == '__main__' and __package__ is None:
import_parents(level=3) # N = 3
from ... import module
from ...module.submodule import thing
```
# Solution #3: Use absolute imports and [setuptools](https://setuptools.readthedocs.io/en/latest/)
The steps are -
1. Replace explicit relative imports with equivalent absolute imports
2. Install `package` to make it importable
For instance, the directory structure may be as follows
```
.
├── project
│ ├── package
│ │ ├── __init__.py
│ │ ├── module.py
│ │ └── standalone.py
│ └── setup.py
```
where *setup.py* is
```
from setuptools import setup, find_packages
setup(
name = 'your_package_name',
packages = find_packages(),
)
```
The rest of the files were borrowed from the *Solution #1*.
Installation will allow you to import the package regardless of your working directory (assuming there'll be no naming issues).
We can modify *standalone.py* to use this advantage (step 1):
```
from package import module # absolute import
```
Change your working directory to `project` and run `/path/to/python/interpreter setup.py install --user` (`--user` installs the package in [your site-packages directory](https://docs.python.org/3/library/site.html#site.USER_SITE)) (step 2):
```
vaultah@base:~$ cd project
vaultah@base:~/project$ python3 setup.py install --user
```
Let's verify that it's now possible to run *standalone.py* as a script:
```
vaultah@base:~/project$ python3 -i package/standalone.py
Running /home/vaultah/project/package/standalone.py
Importing /home/vaultah/.local/lib/python3.6/site-packages/your_package_name-0.0.0-py3.6.egg/package/__init__.py
Importing /home/vaultah/.local/lib/python3.6/site-packages/your_package_name-0.0.0-py3.6.egg/package/module.py
>>> module
<module 'package.module' from '/home/vaultah/.local/lib/python3.6/site-packages/your_package_name-0.0.0-py3.6.egg/package/module.py'>
>>> import sys
>>> sys.modules['package']
<module 'package' from '/home/vaultah/.local/lib/python3.6/site-packages/your_package_name-0.0.0-py3.6.egg/package/__init__.py'>
>>> sys.modules['package.module']
<module 'package.module' from '/home/vaultah/.local/lib/python3.6/site-packages/your_package_name-0.0.0-py3.6.egg/package/module.py'>
```
***Note***: If you decide to go down this route, you'd be better off using [virtual environments](https://docs.python.org/3/library/venv.html) to install packages in isolation.
# Solution #4: Use absolute imports and some boilerplate code
Frankly, the installation is not necessary - you could add some boilerplate code to your script to make absolute imports work.
I'm going to borrow files from *Solution #1* and change *standalone.py*:
1. Add the parent directory of *package* to `sys.path` *before* attempting to import anything from *package* using absolute imports:
```
import sys
from pathlib import Path # if you haven't already done so
file = Path(__file__).resolve()
parent, root = file.parent, file.parents[1]
sys.path.append(str(root))
# Additionally remove the current file's directory from sys.path
try:
sys.path.remove(str(parent))
except ValueError: # Already removed
pass
```
2. Replace the relative import by the absolute import:
```
from package import module # absolute import
```
*standalone.py* runs without problems:
```
vaultah@base:~$ python3 -i package/standalone.py
Running /home/vaultah/package/standalone.py
Importing /home/vaultah/package/__init__.py
Importing /home/vaultah/package/module.py
>>> module
<module 'package.module' from '/home/vaultah/package/module.py'>
>>> import sys
>>> sys.modules['package']
<module 'package' from '/home/vaultah/package/__init__.py'>
>>> sys.modules['package.module']
<module 'package.module' from '/home/vaultah/package/module.py'>
```
I feel that I should warn you: try not to do this, *especially* if your project has a complex structure.
---
As a side note, [PEP 8](https://www.python.org/dev/peps/pep-0008/#imports) recommends the use of absolute imports, but states that in some scenarios explicit relative imports are acceptable:
> Absolute imports are recommended, as they are usually more readable
> and tend to be better behaved (or at least give better error
> messages). [...] However, explicit relative imports are an acceptable
> alternative to absolute imports, especially when dealing with complex
> package layouts where using absolute imports would be unnecessarily
> verbose. | Relative imports in Python 3 | [
"",
"python",
"python-3.x",
"python-import",
""
] |
I took a look at the [Hive Language Manual](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Joins)
and noticed that Hive queries don't support the IN/EXISTS operators; the manual suggests using LEFT SEMI JOIN as a replacement. Do you know the reason why? | This reminds me of the questions Eric Lippert gets of the form "Why doesn't C# have Java feature X?" People don't build a language by starting with another one and removing stuff; they start with nothing and decide what features to implement.
At some point, Hive supported neither IN/EXISTS subqueries nor LEFT SEMI JOIN. Then, someone [suggested they add LEFT SEMI JOIN](https://issues.apache.org/jira/browse/HIVE-870). Now that that's in the language, it takes away some of the reason for implementing IN/EXISTS subqueries, since the two are semantically equivalent. | Hive supports the IN/EXISTS operators, but it does not support subqueries with `in`.
As an alternative, you can use a LEFT SEMI JOIN.
Below is not possible in Hive
```
SELECT a.key, a.value
FROM a
WHERE a.key in
(SELECT b.key
FROM B);
```
The above can be rewritten using left semi join.
```
SELECT a.key, a.val
FROM a LEFT SEMI JOIN b on (a.key = b.key)
```
[check this](https://stackoverflow.com/questions/7677333/how-to-write-subquery-and-use-in-clause-in-hive) | Why Hive query doesn't support IN/EXIST operator? | [
"",
"sql",
"hadoop",
"hive",
""
] |
I need to write a function (say `fun1`) that has one argument, because it will be used in another function (`fun2`). The latter requires a function with a single argument. However, I need to pass other parameters to function `fun1`. How can I do this in Python without using global variables? Or is this the only way?
Addition: If it is important, `fun2` is some optimization function from `scipy.optimize`. Below is an example of passing an additional parameter `c` to function `fun1` using `global`. In the first call, function `fun2` takes `fun1` as `x+1`, but in the second call, `fun1` is `x+2`. I would like to do something similar, but without using `global`. Hopefully, the example clarifies the question. (The example has been changed.)
```
def fun1(x) :
global c
return x + c
def fun2(f1, x) :
return f1(x)
# main program
global c
x0= 1
c= 1; y= fun2(fun1, x0); print(y) # gives 2
c= 2; y= fun2(fun1, x0); print(y) # gives 3
``` | If I've understood your question correctly, there are quite a number of ways to do what you want and avoid using global variables. Here they are.
Given:
```
x0 = 1
def fun2(f1, x):
return f1(x)
```
All of these techniques accomplish your goal:
```
#### #0 -- function attributes
def fun1(x):
return x + fun1.c
fun1.c = 1; y = fun2(fun1, x0); print(y) # --> 2
fun1.c = 2; y = fun2(fun1, x0); print(y) # --> 3
#### #1 -- closure
def fun1(c):
def wrapper(x):
return x + c
return wrapper
y = fun2(fun1(c=1), x0); print(y) # --> 2
y = fun2(fun1(c=2), x0); print(y) # --> 3
#### #2 -- functools.partial object
from functools import partial
def fun1(x, c):
return x + c
y = fun2(partial(fun1, c=1), x0); print(y) # --> 2
y = fun2(partial(fun1, c=2), x0); print(y) # --> 3
#### #3 -- function object (functor)
class Fun1(object):
def __init__(self, c):
self.c = c
def __call__(self, x):
return x + self.c
y = fun2(Fun1(c=1), x0); print(y) # --> 2
y = fun2(Fun1(c=2), x0); print(y) # --> 3
#### #4 -- function decorator
def fun1(x, c):
return x + c
def decorate(c):
def wrapper(f):
def wrapped(x):
return f(x, c)
return wrapped
return wrapper
y = fun2(decorate(c=1)(fun1), x0); print(y) # --> 2
y = fun2(decorate(c=2)(fun1), x0); print(y) # --> 3
```
Note that writing `c=` arguments wasn't always strictly required in the calls -- I just put it in all of the usage examples for consistency and because it makes it clearer how it's being passed. | The fact that the function can be called even without those other parameters suggests that they are optional and have some default value. So you should use *default arguments*.
```
def fun1(foo, bar='baz'):
# do something
```
This way you can call function `fun1('hi')` and `bar` will default to `'baz'`. You can also call it `fun1('hi', 15)`.
If they don't have any reasonable default, you can use `None` as the default value instead.
```
def fun1(foo, bar=None):
if bar is None:
# `bar` argument was not provided
else:
# it was provided
``` | How to pass additional parameters (besides arguments) to a function? | [
"",
"python",
"function",
"arguments",
""
] |
What I'm looking for is behavior like what you would observe when adding a constraint (e.g., unique) to an existing table, validating it against the existing data, then removing the constraint. I don't want the constraint to persist on the table, I just want to express it, validate it on existing data, then move on (or raise an exception of the check fails); **Does Postgresql (9.2) support a way to do this sort of thing directly?** | Create the constraint:
```
alter table t
add constraint the_pk primary key (user_id);
NOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index "the_pk" for table "t"
ERROR: could not create unique index "the_pk"
DETAIL: Key (user_id)=(1) is duplicated.
```
If there was no error drop it:
```
alter table t
drop constraint the_pk;
```
If you don't want to persist it even for some time then do it inside a transaction:
```
begin;
alter table t
add constraint the_pk primary key (user_id);
```
Once satisfied roll the transaction back:
```
rollback;
``` | Setup:
```
CREATE TABLE testu ( id integer );
INSERT INTO testu SELECT generate_series(1,100);
INSERT INTO testu VALUES (1),(10),(20);
```
Demo query to find possible problem rows:
```
regress=> select id, count(id) FROM testu GROUP BY id HAVING count(id) > 1;
id | count
----+-------
20 | 2
1 | 2
10 | 2
(3 rows)
```
Wrap it in `DO` block:
```
DO
$$
BEGIN
PERFORM id FROM testu GROUP BY id HAVING count(id) > 1 LIMIT 1;
IF FOUND THEN
RAISE EXCEPTION 'One or more duplicate IDs in table testu';
END IF;
END;
$$ LANGUAGE plpgsql;
```
If you want to report the individual colliding IDs you might do that by building a string from the results of the query, by looping over the results and raising `NOTICE`s, etc. Lots of options. I'll leave that as an exercise to you, the reader, with the PL/PgSQL documentation. | In Postgresql, can you express and check a constraint without adding it to an entity? | [
"",
"sql",
"postgresql",
"constraints",
"unique-constraint",
""
] |
I have a tricky issue I am struggling with on a mental level.
In our db we have a table showing the UK Holidays for the next few years, and a stored function returns a recordset to my front end.
I have a flag in my recordset called 'deletable' which allows the frontend to decide if a context menu can be shown in the data grid, thus allowing that record to be deleted.
Currently the test (in my stored proc) just checks if the date column has a date from three days ago or more.
```
case when DATEDIFF(d,a.[date],GETDATE()) > 3 then 1 else 0 end as [deletable]
```
How can I modify that to find the previous working date by checking weekends and the Holidays table's 'Holiday' column (which is a Datetime), and see if the [date] column in my recordset row is 3 working days before, taking into account holidays from the Holidays table and weekends?
So if the [date] column is 23rd May, and today's date is 28th May, then that column returns 0, as the 27th was a bank holiday, whereas the next day it would return 1 because there would then be more than 3 working days' difference.
Is there an elegant way to do that?
thanks
Philip | Okay I'm totally refactoring this.
```
declare
@DeletablePeriodStart datetime,
@BusinessDays int
set @DeletablePeriodStart = dateadd(d,0,datediff(d,0,getdate()))
set @BusinessDays = 0
while @BusinessDays < 3
begin
set @DeletablePeriodStart = dateadd(d,-1,@DeletablePeriodStart)
if datepart(dw,@DeletablePeriodStart) not in (1,7) and
not exists (select * from HolidayTable where Holiday = @DeletablePeriodStart)
begin
set @BusinessDays = @BusinessDays + 1
end
end
```
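As a quick sanity check, the same counting idea can be sketched in Python (the function name and holiday set below are illustrative, not part of the stored procedure):

```python
from datetime import date, timedelta

def deletable_period_start(today, holidays, business_days=3):
    # Step back one calendar day at a time, counting only weekdays
    # (Mon-Fri) that are not in the holidays set.
    d = today
    counted = 0
    while counted < business_days:
        d -= timedelta(days=1)
        if d.weekday() < 5 and d not in holidays:
            counted += 1
    return d

# Tuesday 28 May 2013, with Monday 27 May a UK bank holiday:
print(deletable_period_start(date(2013, 5, 28), {date(2013, 5, 27)}))  # 2013-05-22
```

With the bank holiday in the way, 23 May still falls inside the three-business-day window, matching the example in the question.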
This time it doesn't make any assumptions. It runs a quick loop checking whether each day is a valid business day and doesn't stop till it counts three of them. Then later just check whether `a.[date] >= @DeletablePeriodStart` | You should subtract the number of holidays between a.[date] and GETDATE() from the DATEDIFF. Try something like this:
```
case when DATEDIFF(d,a.[date],GETDATE())-(
SELECT COUNT(*) FROM Holidays
WHERE HolidayDate BETWEEN a.[date] AND GETDATE()
)>3 then 1 else 0 end as [deletable]
```
Razvan | calculate 3 working days before in stored proc using Holidays lookup table | [
"",
"sql",
"sql-server-2008",
"t-sql",
"date",
""
] |
How should I "rethrow" an exception, that is, suppose:
* I try something in my code, and unfortunately it fails.
* I try some "clever" workaround, which happens to also fail this time
If I throw the exception from the (failing) workaround, it's going to be pretty darn confusing for the user, so I *think* it may be best to rethrow the original exception (?), *with the descriptive traceback it comes with* (about the *actual* problem)...
Note: the motivating example for this is when calling `np.log(np.array(['1'], dtype=object))`, where it [tries a witty workaround and gives an `AttributeError`](https://github.com/numpy/numpy/issues/1611) (it's "really" a `TypeError`).
One way I can think of is just to re-call the offending function, but this seems dodgy (for one thing, the original function may theoretically exhibit different behaviour the second time it's called):
*Okay this is one **awful** example, but here goes...*
```
def f():
raise Exception("sparrow")
def g():
raise Exception("coconut")
def a():
f()
```
Suppose I did this:
```
try:
a()
except:
# attempt witty workaround
g()
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
<ipython-input-4-c76b7509b315> in <module>()
3 except:
4 # attempt witty workaround
----> 5 g()
6
<ipython-input-2-e641f2f9a7dc> in g()
4
5 def g():
----> 6 raise Exception("coconut")
7
8
Exception: coconut
```
Well, the problem doesn't really lie with the coconut at all, but the sparrow:
```
try:
a()
except:
# attempt witty workaround
try:
g()
except:
# workaround failed, I want to rethrow the exception from calling a()
a() # ideally don't want to call a() again
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
<ipython-input-4-e641f2f9a7dc> in <module>()
19 except:
20 # workaround failed, I want to rethrow the exception from calling a()
---> 21 a() # ideally don't want to call a() again
<ipython-input-3-e641f2f9a7dc> in a()
8
9 def a():
---> 10 f()
11
12
<ipython-input-1-e641f2f9a7dc> in f()
1 def f():
----> 2 raise Exception("sparrow")
3
4
5 def g():
Exception: sparrow
```
Is there a standard way to deal with this, or am I thinking about it completely wrong? | If you want to make it appear to the end user that you never called `g()`, then you need to store the traceback from the first error, call the second function, and then throw the original exception with the original traceback. (Otherwise, in Python 2, a bare `raise` re-raises the second exception rather than the first.) The problem is that there is no 2/3-compatible way to raise with a traceback, so you have to wrap the Python 2 version in an `exec` statement (since it's a `SyntaxError` in Python 3).
Here's a function that lets you do that (I added this to the `pandas` codebase recently):
```
import sys
if sys.version_info[0] >= 3:
def raise_with_traceback(exc, traceback=Ellipsis):
if traceback == Ellipsis:
_, _, traceback = sys.exc_info()
raise exc.with_traceback(traceback)
else:
# this version of raise is a syntax error in Python 3
exec("""
def raise_with_traceback(exc, traceback=Ellipsis):
if traceback == Ellipsis:
_, _, traceback = sys.exc_info()
raise exc, None, traceback
""")
raise_with_traceback.__doc__ = (
"""Raise exception with existing traceback.
If traceback is not passed, uses sys.exc_info() to get traceback."""
)
```
And then you can use it like this (I also changed the Exception types for clarity).
```
def f():
raise TypeError("sparrow")
def g():
raise ValueError("coconut")
def a():
f()
try:
a()
except TypeError as e:
import sys
# save the traceback from the original exception
_, _, tb = sys.exc_info()
try:
# attempt witty workaround
g()
except:
raise_with_traceback(e, tb)
```
And in Python 2, you only see `a()` and `f()`:
```
Traceback (most recent call last):
File "test.py", line 40, in <module>
raise_with_traceback(e, tb)
File "test.py", line 31, in <module>
a()
File "test.py", line 28, in a
f()
File "test.py", line 22, in f
raise TypeError("sparrow")
TypeError: sparrow
```
But in Python 3, it still notes there was an additional exception too, because you are raising within its `except` clause [which flips the order of the errors and makes it much more confusing for the user]:
```
Traceback (most recent call last):
File "test.py", line 38, in <module>
g()
File "test.py", line 25, in g
raise ValueError("coconut")
ValueError: coconut
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "test.py", line 40, in <module>
raise_with_traceback(e, tb)
File "test.py", line 6, in raise_with_traceback
raise exc.with_traceback(traceback)
File "test.py", line 31, in <module>
a()
File "test.py", line 28, in a
f()
File "test.py", line 22, in f
raise TypeError("sparrow")
TypeError: sparrow
```
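On Python 3 only, that chained "During handling of the above exception..." section can be suppressed when re-raising: `raise original from None` sets `__suppress_context__`, hiding the workaround's failure. A sketch (the helper name `call_with_fallback` is illustrative, not from the answer above):

```python
def call_with_fallback(primary, fallback):
    try:
        return primary()
    except Exception as original:
        try:
            return fallback()
        except Exception:
            # Re-raise the first error with its original traceback;
            # `from None` suppresses the chained "During handling..." output.
            raise original from None

def f():
    raise TypeError("sparrow")

def g():
    raise ValueError("coconut")

try:
    call_with_fallback(f, g)
except TypeError as exc:
    print(exc)  # sparrow
```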
If you absolutely want it to look like the `g()` Exception never happened in both Python 2 and Python 3, you need to check that you are out of the `except` clause first:
```
try:
a()
except TypeError as e:
import sys
# save the traceback from the original exception
_, _, tb = sys.exc_info()
handled = False
try:
# attempt witty workaround
g()
handled = True
except:
pass
if not handled:
raise_with_traceback(e, tb)
```
Which gets you the following traceback in Python 2:
```
Traceback (most recent call last):
File "test.py", line 56, in <module>
raise_with_traceback(e, tb)
File "test.py", line 43, in <module>
a()
File "test.py", line 28, in a
f()
File "test.py", line 22, in f
raise TypeError("sparrow")
TypeError: sparrow
```
And this traceback in Python 3:
```
Traceback (most recent call last):
File "test.py", line 56, in <module>
raise_with_traceback(e, tb)
File "test.py", line 6, in raise_with_traceback
raise exc.with_traceback(traceback)
File "test.py", line 43, in <module>
a()
File "test.py", line 28, in a
f()
File "test.py", line 22, in f
raise TypeError("sparrow")
TypeError: sparrow
```
It does add an additional non-useful line of traceback that shows the `raise exc.with_traceback(traceback)` to the user, but it is relatively clean. | Here is something totally nutty that I wasn't sure would work, but it works in both Python 2 and 3. (It does, however, require the exception to be encapsulated in a function...)
```
def f():
print ("Fail!")
raise Exception("sparrow")
def g():
print ("Workaround fail.")
raise Exception("coconut")
def a():
f()
def tryhard():
ok = False
try:
a()
ok = True
finally:
if not ok:
try:
g()
return # "cancels" sparrow Exception by returning from finally
except:
pass
>>> tryhard()
Fail!
Workaround fail.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 4, in tryhard
File "<stdin>", line 2, in a
File "<stdin>", line 3, in f
Exception: sparrow
```
Which is the correct exception and the right stack trace, and with no hackery.
```
>>> def g(): print "Worked around." # workaround is successful in this case
>>> tryhard()
Fail!
Worked around.
>>> def f(): print "Success!" # normal method works
>>> tryhard()
Success!
``` | "Uncatching" an exception in python | [
"",
"python",
"exception",
""
] |
What is the standard way for a blueprint to access the application logger? | Inside the blueprint add:
```
from flask import current_app
```
and when needed, call:
```
current_app.logger.info('grolsh')
``` | Btw, I use this pattern:
```
# core.py
from werkzeug.local import LocalProxy
from flask import current_app
logger = LocalProxy(lambda: current_app.logger)
# views.py
from core import logger
@mod.route("/")
def index():
logger.info("serving index")
...
``` | In Flask: How to access app Logger within Blueprint | [
"",
"python",
"logging",
"flask",
""
] |
Let's say I have 2 entry widgets, 1 option menu (drop-down list) and 1 button in tkinter. How can I set the button widget state to DISABLED until all 3 widgets are filled by the user? This is what I have currently:
```
import Tkinter as tk
root = tk.Tk()
entry1=tk.Entry(root,width=15).grid(row=1,column=1)
entry2=tk.Entry(root,width=15).grid(row=1,column=2)
choices=('a','b','c')
var=tk.StringVar(root)
option=tk.OptionMenu(root,var,*choices)
option.grid(row=1,column=3)
button=tk.Button(root,text="submit")
button.grid(row=1,column=4)
root.mainloop()
```
**--EDIT--**
I tried it this way, but I don't think this is the correct way to do it.
```
import Tkinter as tk
root = tk.Tk()
def myfunction(event):
x=var.get()
y=entry1.get()
z=entry2.get()
print len(x),":",len(y),":",len(z)
if len(y)>0 and len(x)>0 and len(z)>0:
button.config(state='normal')
else:
button.config(state='disabled')
entry1=tk.Entry(root,width=15)
entry1.grid(row=1,column=1)
entry2=tk.Entry(root,width=15)
entry2.grid(row=1,column=2)
choices=('a','b','c')
var=tk.StringVar(root)
option=tk.OptionMenu(root,var,*choices)
option.grid(row=1,column=3)
button=tk.Button(root,text="submit")
button.grid(row=1,column=4)
root.bind("<Enter>", myfunction)
root.mainloop()
``` | Tkinter variables have a method called `trace` to add an observer, so the callback function is called when the value changes. I think it is much more efficient than `root.bind("<Enter>", myfunction)`:
```
import Tkinter as tk
root = tk.Tk()
def myfunction(*args):
x = var.get()
y = stringvar1.get()
z = stringvar2.get()
if x and y and z:
button.config(state='normal')
else:
button.config(state='disabled')
stringvar1 = tk.StringVar(root)
stringvar2 = tk.StringVar(root)
var = tk.StringVar(root)
stringvar1.trace("w", myfunction)
stringvar2.trace("w", myfunction)
var.trace("w", myfunction)
entry1 = tk.Entry(root, width=15, textvariable=stringvar1)
entry1.grid(row=1,column=1)
entry2 = tk.Entry(root, width=15, textvariable=stringvar2)
entry2.grid(row=1,column=2)
choices = ('a','b','c')
option = tk.OptionMenu(root, var, *choices)
option.grid(row=1,column=3)
button = tk.Button(root,text="submit")
button.grid(row=1, column=4)
root.mainloop()
``` | What about `validate` and `validatecommand` properties of Entry ?
```
#!/usr/bin/env python3
import tkinter
class App:
def __init__(self):
self.root = tkinter.Tk()
self.variables = {}
self.entries = {}
self.vcmd = (self.root.register(self.observer), '%W', '%P')
self.make_entry('fname')
self.make_entry('sname')
self.make_submit_button('Send')
def make_entry(self, name):
self.variables[name] = tkinter.StringVar()
self.entries[name] = tkinter.Entry(
self.root,
textvariable=self.variables[name],
validate='all',
validatecommand=self.vcmd)
self.entries[name].pack(side=tkinter.TOP)
def make_submit_button(self, text):
self.submit_button = tkinter.Button(
self.root,
text=text,
state=tkinter.DISABLED)
self.submit_button.pack(side=tkinter.BOTTOM)
def observer(self, id_, value):
id_ = int(id_[1:])
self.update_submit_button_state(entry_exclude=(value and id_ or None))
return True
def update_submit_button_state(self, entry_exclude=None):
if all(
var.get()
for name, var in self.variables.items()
if id(self.entries[name]) != entry_exclude
):
self.submit_button.config(state=tkinter.NORMAL)
else:
self.submit_button.config(state=tkinter.DISABLED)
App().root.mainloop()
``` | Python tkinter disable the button until all the fields are filled | [
"",
"python",
"widget",
"tkinter",
"state",
""
] |
In Python, does either of
```
n**0.5 # or
math.sqrt(n)
```
give an exact result when a number is a perfect square? Specifically, should I worry that when I use
```
int(n**0.5) # instead of
int(n**0.5 + 0.000000001)
```
I might accidentally end up with the number one less than the actual square root due to precision error? | Yes, you should worry:
```
In [11]: int((100000000000000000000000000000000000**2) ** 0.5)
Out[11]: 99999999999999996863366107917975552L
In [12]: int(math.sqrt(100000000000000000000000000000000000**2))
Out[12]: 99999999999999996863366107917975552L
```
*obviously adding the `0.000000001` doesn't help here either...*
As @DSM points out, you can use the [decimal](http://docs.python.org/2/library/decimal.html) library:
```
In [21]: from decimal import Decimal
In [22]: x = Decimal('100000000000000000000000000000000000')
In [23]: (x ** 2).sqrt() == x
Out[23]: True
```
*for numbers over `10**999999999`, provided you keep a check on the precision (configurable), it'll throw an error rather than an incorrect answer...* | As several answers have suggested integer arithmetic, I'll recommend the [gmpy2](https://code.google.com/p/gmpy/) library. It provides functions for checking if a number is a perfect power, calculating integer square roots, and integer square root with remainder.
```
>>> import gmpy2
>>> gmpy2.is_power(9)
True
>>> gmpy2.is_power(10)
False
>>> gmpy2.isqrt(10)
mpz(3)
>>> gmpy2.isqrt_rem(10)
(mpz(3), mpz(1))
```
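On modern Python (3.8 and newer) the standard library itself has an exact integer square root, `math.isqrt`, which sidesteps floating point entirely — it wasn't available when these answers were written:

```python
import math

def is_perfect_square(n):
    # math.isqrt returns floor(sqrt(n)) exactly for any non-negative int,
    # so no floating-point rounding can creep in.
    if n < 0:
        return False
    r = math.isqrt(n)
    return r * r == n

big = 100000000000000000000000000000000000
print(is_perfect_square(big ** 2))      # True
print(is_perfect_square(big ** 2 + 1))  # False
```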
Disclaimer: I maintain gmpy2. | Rounding ** 0.5 and math.sqrt | [
"",
"python",
"math",
"square-root",
""
] |
I have implemented an approach to create a list of dictionaries, but I was wondering if there was a more efficient method of doing so (while preserving the order of elements in both lists):
```
#global variable
dict_agriculture = []
col_name_agriculture = [tuple[0] for tuple in querycurs.description]
rows_agriculture = querycurs.fetchall()
for row in rows_agriculture:
dict1 = dict(zip(col_name_agriculture, list(row)))
dict_agriculture.append(dict1)
``` | You could use a list comprehension here; it's more efficient at creating the final list. Don't use `.fetchall()` when iteration will do, don't use `tuple` as a variable name, and there is no need to call `list` on each row:
```
fields = [tup[0] for tup in querycurs.description]
dict_agriculture = [dict(zip(fields, row)) for row in querycurs]
``` | > I'm using sqlite.
In that case, the most efficient way would be to set [`row_factory`](http://docs.python.org/2/library/sqlite3.html#sqlite3.Connection.row_factory) to the built-in [`Row`](http://docs.python.org/2/library/sqlite3.html#sqlite3.Row) type, which is implemented in C, and will be faster than any equivalent code in Python.
For example...
```
import sqlite3
con = sqlite3.connect(...)
con.row_factory = sqlite3.Row
# Now queries will return objects which act as both a tuple, and a dictionary,
# so you can just do...
querycurs = con.cursor()
querycurs.execute(...)
dict_agriculture = querycurs.fetchall()
``` | Creating list of dictionaries efficiently | [
"",
"python",
""
] |
What would be a good way to make a function return True if, and only if, what is inputted is alphanumeric (contains only letters or numbers), without using methods or import?
So far in class, we only covered `bools`, `while` loops, and `if`-statements, and are only allowed to use what we have learned.
This is what I got
```
def alphanumeric(s):
    i = 0
    while i < len(s):
        if s[i] == 1 or s[i] == 2 or s[i] == 3.....:
            i = i + 1
            return True
        else:
            return False
```
Is there a more concise way to do this without using methods or import?
What I have doesn't work, and returns False no matter what I input.
Resolved. Thank you to all that helped! | You have misunderstood how `==` and `or` work. You need to test each and every character separately:
```
if s[i] == 1 or s[i] == 2 or s[i] == 3 ...:
```
or use an `in` test:
```
if s[i] in (1, 2, 3, 4, ...):
```
except that you are testing *characters* here, not integers, so either turn your character into an integer, or test against digits:
```
if int(s[i]) in (1, 2, 3, 4, ...):
```
or
```
if s[i] in '1234567890':
```
The latter works because strings are sequences too; testing if one character is in the string `'1234567890'` is a valid membership test too.
Since strings are sequences, you can also just loop over them directly:
```
for char in s:
    if char in '1234567890':
```
No need to use a `while` loop and counter there.
This still only tests for digits; to test for letters as well, you could do one of two things:
* Use the `string` module, it has `ascii_letters` and `digits` attributes:
```
import string
for char in s:
    if char in string.ascii_letters + string.digits:
```
* or test if the character is between two known characters:
```
if 'a' <= char <= 'z' or 'A' <= char <= 'Z' or '0' <= char <= '9':
```
and this works because strings are comparable and sortable; they are smaller or larger according to their position in the ASCII code table.
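As a quick, runnable sanity check of those chained comparisons (`is_alnum_char` is an illustrative name, not from the question):

```python
def is_alnum_char(ch):
    # chained comparisons rely on the ordering of characters
    return 'a' <= ch <= 'z' or 'A' <= ch <= 'Z' or '0' <= ch <= '9'

print(is_alnum_char('m'))  # True
print(is_alnum_char('7'))  # True
print(is_alnum_char('!'))  # False
```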
Your next problem is that you are returning `True` too early; you do this for the first match, but you need to test *all* characters. Only return `True` if you didn't find any misses:
```
for char in s:
    if char not in string.ascii_letters + string.digits:
        return False
return True
```
Now we test all characters, return `False` for the first character that is not alphanumeric, but only when the loop has completed, do we return `True`. | You can use the Python comparison operators to test against characters, so you could do something like this:
```
def is_alphanumeric(s):
    for ch in s:
        if not ('a' <= ch <= 'z' or 'A' <= ch <= 'Z' or '0' <= ch <= '9'):
            return False
    return True
```
That will loop through each character in the string and see if it is between 'a' and 'z' or 'A' and 'Z' or '0' and '9'. | Return True if alphanumeric (No methods or import allowed) | [
"",
"python",
"if-statement",
"python-3.x",
"while-loop",
"boolean",
""
] |
Hi, I have an array with X amount of values in it, and I would like to locate the indices of the ten smallest values. In this link they calculated the maximum effectively, [How to get indices of N maximum values in a numpy array?](https://stackoverflow.com/questions/6910641/how-to-get-the-n-maximum-values-in-a-numpy-array)
however I cant comment on links yet so I'm having to repost the question.
I'm not sure which indices I need to change to achieve the minimum and not the maximum values.
This is their code
```
In [1]: import numpy as np
In [2]: arr = np.array([1, 3, 2, 4, 5])
In [3]: arr.argsort()[-3:][::-1]
Out[3]: array([4, 3, 1])
``` | If you call
```
arr.argsort()[:3]
```
It will give you the indices of the 3 smallest elements.
```
array([0, 2, 1], dtype=int64)
```
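For instance, running that on the array from the question (a quick check, assuming NumPy is installed):

```python
import numpy as np

arr = np.array([1, 3, 2, 4, 5])
smallest3 = arr.argsort()[:3]  # indices of the 3 smallest values
print(smallest3)               # [0 2 1]
print(arr[smallest3])          # [1 2 3]
```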
So, for `n`, you should call
```
arr.argsort()[:n]
``` | Since this question was posted, numpy has updated to include a faster way of selecting the smallest elements from an array using [`argpartition`](https://docs.scipy.org/doc/numpy-1.8.0/reference/generated/numpy.argpartition.html). It was first included in Numpy 1.8.
Using [snarly's answer](https://stackoverflow.com/a/23734295/1342354) as inspiration, we can quickly find the `k=3` smallest elements:
```
In [1]: import numpy as np
In [2]: arr = np.array([1, 3, 2, 4, 5])
In [3]: k = 3
In [4]: ind = np.argpartition(arr, k)[:k]
In [5]: ind
Out[5]: array([0, 2, 1])
In [6]: arr[ind]
Out[6]: array([1, 2, 3])
```
This will run in O(n) time because it does not need to do a full sort. If you need your answers sorted (**Note:** in this case the output array was in sorted order but that is not guaranteed) you can sort the output:
```
In [7]: sorted(arr[ind])
Out[7]: array([1, 2, 3])
```
This runs in O(n + k log k) because the sorting takes place on the smaller output list. | I have need the N minimum (index) values in a numpy array | [
"",
"python",
"arrays",
"numpy",
"minimum",
""
] |
If I have a 4-element list like this:
`l = ["AA","BB","CC","DD"]`
I can print it with:
`print "%-2s %-2s %-2s %-2s" % tuple(l)`
The output will be:
`AA BB CC DD`
But what if the list l could be in any length? Is there a way to print the list in the same format with unknown number of elements? | Generate separate snippets and join them:
```
print ' '.join(['%-2s' % (i,) for i in l])
```
Or you can use string multiplication:
```
print ('%-2s ' * len(l))[:-1] % tuple(l)
```
The `[:-1]` removes the extraneous space at the end; you could use `.rstrip()` as well.
Demo:
```
>>> print ' '.join(['%-2s' % (i,) for i in l])
AA BB CC DD
>>> print ' '.join(['%-2s' % (i,) for i in (l + l)])
AA BB CC DD AA BB CC DD
>>> print ('%-2s ' * len(l))[:-1] % tuple(l)
AA BB CC DD
>>> print ('%-2s ' * len(l))[:-1] % tuple(l + l)
AA BB CC DD AA BB CC DD
```
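As an aside (the answer above targets Python 2), on Python 3 the same left-aligned join can be written with an f-string; `<2` is the same "left-align in width 2" format spec:

```python
l = ["AA", "BB", "CC", "DD"]
formatted = ' '.join(f'{item:<2}' for item in l)
print(formatted)  # AA BB CC DD
```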
Timing stats:
```
>>> def joined_snippets(l):
... ' '.join(['%-2s' % (i,) for i in l])
...
>>> def joined_template(l):
... ' '.join(['%-2s' for i in l])%tuple(l)
...
>>> def multiplied_template(l):
... ('%-2s ' * len(l))[:-1] % tuple(l)
...
>>> from timeit import timeit
>>> l = ["AA","BB","CC","DD"]
>>> timeit('f(l)', 'from __main__ import l, joined_snippets as f')
1.3180170059204102
>>> timeit('f(l)', 'from __main__ import l, joined_template as f')
1.080280065536499
>>> timeit('f(l)', 'from __main__ import l, multiplied_template as f')
0.7333378791809082
>>> l *= 10
>>> timeit('f(l)', 'from __main__ import l, joined_snippets as f')
10.041708946228027
>>> timeit('f(l)', 'from __main__ import l, joined_template as f')
5.52706503868103
>>> timeit('f(l)', 'from __main__ import l, multiplied_template as f')
2.8013129234313965
```
The multiplied template option leaves the other options in the dust. | Another approach
```
' '.join(['%-2s' for i in l])%tuple(l)
```
I found this to be more than twice as fast as using a generator expression
```
' '.join('%-2s' for i in l)%tuple(l)
```
This is faster still
```
'%-2s '*len(l)%tuple(l) # leaves an extra trailing space though
``` | python print in format way unknown list length | [
"",
"python",
"list",
"printing",
"format",
""
] |
I have a file "test.txt":
```
this is 1st line
this is 2nd line
this is 3rd line
```
the following code
```
lines = open("test.txt", 'r')
for line in lines:
    print "loop 1:"+line
for line in lines:
    print "loop 2:"+line
```
only prints:
```
loop 1:this is 1st line
loop 1:this is 2nd line
loop 1:this is 3rd line
```
It doesn't print loop2 at all.
Two questions:
1. Is the file object returned by open() an iterable? Is that why it can be used in a for loop?
2. why loop2 doesn't get printed at all? | It is not only an *iterable*, it is an *iterator* [1], which is why it can only traverse the file once. You may reset the file cursor with `.seek(0)` as many have suggested but you should, in most cases, only iterate a file once.
[1]: The "file" object is of class [`TextIOBase`](/https://docs.python.org/3/library/io.html#io.TextIOBase) which is a subclass of [`IoBase`](https://docs.python.org/3/library/io.html#io.IOBase), and `IoBase` supports the [iterator protocol](https://docs.python.org/3/library/stdtypes.html#iterator-types). | Yes, file objects are iterators.
Like all iterators, you can only loop over them *once*, after which the iterator is exhausted. Your file read pointer is at the end of the file. Re-open the file, or use `.seek(0)` to rewind the file pointer if you need to loop again.
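A minimal sketch of the `.seek(0)` rewind, using an in-memory stream so it runs without a real file:

```python
import io

f = io.StringIO("line1\nline2\n")
first_pass = [line.strip() for line in f]
f.seek(0)  # rewind the read cursor to the start of the stream
second_pass = [line.strip() for line in f]
print(first_pass)   # ['line1', 'line2']
print(second_pass)  # ['line1', 'line2']
```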
Alternatively, try to avoid looping over a file twice; extract what you need into another datastructure (list, dictionary, set, heap, etc.) during the first loop. | Is file object in python an iterable | [
"",
"python",
"iterable",
""
] |
How can I click on a random link on a given webpage using the Selenium API for Python?
I'm using python 2.7.
Thanks | find\_elements\_by\_tag\_name() will surely work. There is another option as well: you can use find\_elements\_by\_partial\_link\_text, to which you can pass an empty string.
```
>>> from selenium import webdriver
>>> from random import randint
>>> driver = webdriver.Firefox()
>>> driver.get('http://www.python.org')
>>> links = driver.find_elements_by_partial_link_text('')
>>> l = links[randint(0, len(links)-1)]
>>> l.click()
``` | ```
from random import randint
from selenium import webdriver

driver = webdriver.Firefox()
driver.get('https://www.youtube.com/watch?v=hhR3DwzV2eA')
# store the current url in a variable
current_page = driver.current_url
# create an infinite loop
while True:
    try:
        # find elements using a css selector
        links = driver.find_elements_by_css_selector('.content-link.spf-link.yt-uix-sessionlink.spf-link')
        # choose a random link from the list
        l = links[randint(0, len(links) - 1)]
        # click link
        l.click()
        # check link
        new_page = driver.current_url
        # if the url is the same, keep looping
        if new_page == current_page:
            continue
        else:
            # break the loop if you are on a new url
            break
    except Exception:
        continue
```
Creating a list works, but if, like me, you keep getting a Timeout Error or your webdriver isn't consistently clicking the link, use this code instead.
As an example I used a YouTube video. Say you want to click a recommended video on the right side; there is a unique CSS selector for those links.
The reason why you want to make an infinite loop is that when you make a list of elements, for some reason, Python doesn't do a good job expressing stored elements. Be sure you have a catch-all 'except' because you will get Timeout Errors and such, and you want to force it to click on the random link.
"",
"python",
"selenium",
"random",
"hyperlink",
"html",
""
] |
I need to select rows from a table where only one row exists with that ID.
Example Table, with two columns.
ID first, then Timestamp.
So the first row is ID 1, Timestamp 01/01/2013.
```
ID Timestamp
1 01/01/2013
2 02/02/2013
3 04/02/2013
3 07/02/2013
4 20/03/2013
```
So, for this table I'd only like to return the records with ID 1,2,4 as these all have only 1 row.
I then need to only return these rows if they have a timestamp of more than one month ago. The timestamp is in this format `2012-11-12-22.00.15.541231`
Can anyone help please? | This should work for you:
```
SELECT *
FROM mytable
WHERE id NOT IN (
SELECT id
FROM mytable
GROUP BY id
HAVING COUNT(*) > 1
) AND time_created < current_timestamp - 1 month
``` | You can use a `group by` clause for this output:
**Edit**
adding [Condition](https://stackoverflow.com/questions/16893874/sql-stuck-with-this-one/16894095?noredirect=1#comment24377371_16894095) on datestamp column
```
SELECT *
FROM Your_Table_Name
WHERE Id IN
(SELECT Id FROM Your_Table_Name GROUP BY Id HAVING COUNT(*) = 1)
AND time_created < current_timestamp - 1 month
``` | SQL stuck with this one | [
"",
"sql",
"db2",
""
] |
Just a question about dealing with null values in a query.
For example I have the following table with the following fields and values
```
TABLEX
Column1
1
2
3
4
5
---------
Column2
null
A
B
C
null
```
I'm passing a variableY on a specific procedure. Inside the procedure is a cursor like this
```
CURSOR c_results IS
SELECT * FROM TABLEX where column2 = variableY
```
Now the problem is that variableY can be either null, A, B or C.
If variableY is null I want to select all records where column2 is null, otherwise the records where column2 is either A, B or C.
I cannot use the above cursor/query, because if variableY is null it won't work; the comparison would need to be
```
CURSOR c_results IS
SELECT * FROM TABLEX where column2 IS NULL
```
What cursor/query should I use that will accomodate either null or string variable.
Sorry if my question is a bit confusing. I'm not that good in explaining things. Thanks in advance. | Either produce different SQL depending on the contents of that parameter, or alter your SQL like this:
```
WHERE (column2 = variableY) OR (variableY IS NULL AND column2 IS NULL)
``` | Oracle's Ask Tom says:
```
where decode( col1, col2, 1, 0 ) = 0 -- finds differences
```
or
```
where decode( col1, col2, 1, 0 ) = 1 -- finds sameness - even if both NULL
```
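Either predicate can be exercised quickly from Python's sqlite3 as a sanity check (a sketch; `decode` is Oracle-specific, so this uses the equivalent OR-based NULL-safe predicate from the accepted answer, and the table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tablex (column2 TEXT)")
conn.executemany("INSERT INTO tablex VALUES (?)",
                 [(None,), ("A",), ("B",), ("C",), (None,)])

def rows_for(variable_y):
    # NULL-safe match: plain equality when variable_y is set,
    # an IS NULL test when it is None/NULL
    sql = """SELECT COUNT(*) FROM tablex
             WHERE (column2 = ?) OR (? IS NULL AND column2 IS NULL)"""
    return conn.execute(sql, (variable_y, variable_y)).fetchone()[0]

print(rows_for(None))  # 2 rows where column2 IS NULL
print(rows_for("A"))   # 1
```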
[Safely Comparing NULL Columns as Equal](https://asktom.oracle.com/pls/asktom/f%3Fp%3D100:11:0::::P11_QUESTION_ID:4410543900346652511) | Oracle/PL SQL/SQL null comparison on where clause | [
"",
"sql",
"oracle",
"plsql",
"null",
""
] |
I'm trying to create a 4x3 table **without** methods or for-loops.
I'd like to use what I learned in class, which is booleans, if-statements, and while-loops.
I want it so that if I input `create_table('abcdefghijkl')` it would start from the topmost row of the left column and go down until the end of the column, then start again at the top of the next column, and so on, like displayed below:
```
| a | e | i |
| b | f | j |
| c | g | k |
| d | h | l |
```
Below is what I have so far. It's not complete. How do I add to the function so that after 4 rows down, the string should continue to the next column starting from the top?
I'm racking my brain over this.
All examples I can find online use for loops and methods to create tables such as these, but I'd like to implement the while loop for this one.
Thanks in advance!
```
def create_table(table):
    t = "" + "|" + ""
    i = 0
    while i < 12:
        t = t + " " + "|" + table[i] + " "
        i = i + 1
    print(t)
    return table
``` | Because you are printing to the terminal, you would want to think about printing each horizontal row, rather than each vertical column. Try something like:
```
table = 'abcdefghijkl'
i = 0
while i < 4:
    print("| {} | {} | {} |".format(table[i], table[i+4], table[i+8]))
    i += 1
``` | Think about it in terms of rows instead of columns. You're writing out a row at a time, not a column at a time, so look at the indices of the individual cells in the original list:
```
| 0 | 4 | 8 |
| 1 | 5 | 9 |
| 2 | 6 | 10 |
| 3 | 7 | 11 |
```
Notice each row's cells' indices differ by 4. Find a simple expression for the nth row's cells and the task will become much easier, as you'll essentially be printing out a regular table. | How to create a table without using methods or for-loops? | [
"",
"python",
"python-3.x",
""
] |
I have an application running on my server. The problem with this application is that daily I am getting nearly 10-20 `System.Data.SqlClient.SqlException Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding` errors, from only one of my SPs. Here is my SP,
```
ALTER PROCEDURE [dbo].[Insertorupdatedevicecatalog]
(@OS NVARCHAR(50)
,@UniqueID VARCHAR(500)
,@Longitude FLOAT
,@Latitude FLOAT
,@Culture VARCHAR(10)
,@Other NVARCHAR(200)
,@IPAddress VARCHAR(50)
,@NativeDeviceID VARCHAR(50))
AS
BEGIN
DECLARE @OldUniqueID VARCHAR(500) = '-1';
SELECT @OldUniqueID = [UniqueID] FROM DeviceCatalog WHERE (@NativeDeviceID != '' AND [NativeDeviceID] = @NativeDeviceID);
BEGIN TRANSACTION [Tran1]
BEGIN TRY
IF EXISTS(SELECT 1 FROM DeviceCatalog WHERE [UniqueID] = @UniqueID)
BEGIN
UPDATE DeviceCatalog
SET [OS] = @OS
,[Location] = geography::STGeomFromText('POINT(' + CONVERT(VARCHAR(100 ), @Longitude) + ' ' + CONVERT(VARCHAR(100), @Latitude) + ')', 4326)
,[Culture] = @Culture
,[Other] = @Other
,[Lastmodifieddate] = Getdate()
,[IPAddress] = @IPAddress
WHERE [UniqueID] = @UniqueID;
END
ELSE
BEGIN
INSERT INTO DeviceCatalog
([OS]
,[UniqueID]
,[Location]
,[Culture]
,[Other]
,[IPAddress]
,[NativeDeviceID])
VALUES (@OS
,@UniqueID
,geography::STGeomFromText('POINT(' + CONVERT(VARCHAR(100) ,@Longitude) + ' ' + CONVERT(VARCHAR(100), @Latitude) + ')', 4326)
,@Culture
,@Other
,@IPAddress
,@NativeDeviceID);
IF(@OldUniqueID != '-1' AND @OldUniqueID != @UniqueID)
BEGIN
EXEC DeleteOldDevice @OldUniqueID, @UniqueID;
END
END
COMMIT TRANSACTION [Tran1];
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION [Tran1];
DECLARE @ErrorNumber nchar(5), @ErrorMessage nvarchar(2048);
SELECT
@ErrorNumber = RIGHT('00000' + ERROR_NUMBER(), 5),
@ErrorMessage = @ErrorNumber + ' ' + ERROR_MESSAGE();
RAISERROR (@ErrorMessage, 16, 1);
END CATCH
END
```
Is there any issue with this SP? Why am I getting the Timeout exception only in this SP? Here is the stack trace:
```
System.Data.SqlClient.SqlException (0x80131904): Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning()
at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)
at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async)
at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result)
at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe)
at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
at App.Classes.DBLayer.Execute(SqlCommand command, Boolean executeNonQuery)
at App.Helpers.SQLHelper.GetResult(List`1 parameters, Boolean storedProcedure, String commandText, ResultType type)
at App.Helpers.SQLHelper.ExecuteNonQuery(List`1 parameters, Boolean storedProcedure, String commandText)
at App.Services.DeviceCatalogService.InsertOrUpdateDeviceCatalog(DeviceCatalog deviceCataLog)
at WebApplication1.Handlers.RegisterDevice.ProcessRequest(HttpContext context)
at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)
``` | You need to investigate this on the server side to understand why the execution is timing out. Note that the server has no timeout; the timeout is caused by the default 30 seconds on [`SqlCommand.CommandTimeout`](http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlcommand.commandtimeout.aspx).
A good resource is [Waits and Queues](http://technet.microsoft.com/en-us/library/cc966413.aspx), which is a methodology to diagnose performance bottlenecks with SQL Server. Based on the actual cause of the timeout, proper action can be taken. You must establish first and foremost whether you're dealing with slow execution (a bad plan) or blocking.
If I'd venture a guess, I would say that the unhealthy pattern of `IF EXISTS... UPDATE` is the root cause. This pattern is incorrect and will cause failures under concurrency. Two concurrent transactions executing the `IF EXISTS` simultaneously will both reach the same conclusion and *both* attempt to `INSERT` or `UPDATE`. Depending on the existing constraints in the database you can end up with a deadlock (the lucky case) or with a lost write (the unlucky case). However, only proper investigation would reveal the actual root cause. Could be something totally different, like [auto-growth events](https://www.simple-talk.com/sql/database-administration/sql-server-database-growth-and-autogrowth-settings/).
Your procedure is also incorrectly handling the CATCH block. You must **always** check the [`XACT_STATE()`](http://msdn.microsoft.com/en-us/library/ms189797.aspx) because the transaction may be already rolled back by the time your CATCH block runs. Is also not clear what you expect from naming the transaction, this is a common mistake I see often associated with confusing named transactions with savepoints. For a correct pattern see [Exception Handling and Nested Transactions](http://rusanu.com/2009/06/11/exception-handling-and-nested-transactions/).
**Edit**
Here is a possible way to investigate this:
1. Change the relevant [`CommandTimeout`](http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlcommand.commandtimeout.aspx) to 0 (ie. infinite).
2. Enable the [`blocked process threshold`](http://msdn.microsoft.com/en-us/library/ms181150.aspx), set it to 30 seconds (the former CommandTimeout)
3. Monitor in Profiler for [Blocked Process Report Event](http://msdn.microsoft.com/en-us/library/ms191168.aspx)
4. Start your workload
5. See if the Profiler produces any report events. If it does, they will pinpoint the cause.
These actions will cause a 'blocked process report' event every time you would have gotten a timeout, if the timeout was caused by blocking. Your application will continue to wait until the blocking is removed; if the blocking is caused by a [live-lock](http://www.codinghorror.com/blog/2008/12/the-problem-with-logging.html) then it will wait forever. | Add this line to your connection string:
```
Connect Timeout=200; pooling='true'; Max Pool Size=200
```
You can set `myCom.CommandTimeout = 200` also
If a large amount of data is involved, you can also increase the timeout from 200 to 600 seconds.
Edit this in web.config as well.
Follow [THIS](http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlcommand.commandtimeout.aspx) document.
"",
"asp.net",
"sql",
"sql-server",
"ado.net",
"sql-server-2012",
""
] |
I have various error-checking methods, mainly just value or type checking, and I want to give the user a chance to fix the problem so they don't lose a bunch of information regarding what the program is doing.
At this point, I just have this:
```
def foo(Option1, Option2):
    if Option1 >= 0 and Option1 <= 100:
        pass
    else:
        e = ('Hey this thing doesn\'t work')
        raise ValueError(e)
```
and then later in the program that is calling it, I have
```
except ValueError as e:
print(e)
```
I want to find out which method raised the problem so that I can give the user a chance to try again, with a prompt or something right around where the print(e) statement is. Any ideas?
Edit:
Basically I would like my except code to look something like this
```
except ValueError as e:
# print the error
# get what method the error was raised in
# method = the_method_from_above
# prompt user for new value
# send command to the method using the new value
``` | You *can* do this with some introspection, but you *shouldn't*.
The following code will let you call the function in which the exception was raised, but there's almost certainly a better way to do whatever it is you're trying to achieve...
```
import sys
def foo(x):
    print('foo(%r)' % x)
    if not (0 <= x <= 100):
        raise ValueError
def main():
    try:
        foo(-1)
    except ValueError:
        tb = sys.exc_info()[2]
        while tb.tb_next is not None:
            tb = tb.tb_next
        funcname = tb.tb_frame.f_code.co_name
        func = globals()[funcname]
        func(50)
if __name__ == '__main__':
    main()
```
...which prints out...
```
foo(-1)
foo(50)
``` | You can use the traceback module to provide stack trace information about exceptions.
```
import traceback
...
try:
    pass
except ValueError as e:
    print("Error {0}".format(e))
    traceback.print_exc()
``` | How do I get the method that my exception was raised in? | [
"",
"python",
"python-2.7",
""
] |