| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
Let's say I have the following list
```
[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18]
```
I want to find all possible sublists of a certain length that don't contain one particular number, without losing the order of the numbers.
For example all possible sublists with length 6 without the 12 are:
```
[1,2,3,4,5,6]
[2,3,4,5,6,7]
[3,4,5,6,7,8]
[4,5,6,7,8,9]
[5,6,7,8,9,10]
[6,7,8,9,10,11]
[13,14,15,16,17,18]
```
The problem is that I want to do this on a very big list, so I need the quickest way.
Update with my method:
```
oldlist = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18]
newlist = []
length = 6
exclude = 12
for i in range(len(oldlist)):
    if length + i > len(oldlist):
        break
    else:
        newlist.append(oldlist[i:(i+length)])
for sub in newlist[:]:  # iterate over a copy, since we remove from newlist
    if exclude in sub:
        newlist.remove(sub)
```
I know it's not the best method, that's why I need a better one. | A straightforward, non-optimized solution would be
```
result = [sublist for sublist in
          (lst[x:x+size] for x in range(len(lst) - size + 1))
          if item not in sublist
          ]
```
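For example, with the list from the question (binding the names `lst`, `size` and `item`, which the snippet above leaves to the caller):

```python
lst = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18]
size, item = 6, 12

result = [sublist for sublist in
          (lst[x:x+size] for x in range(len(lst) - size + 1))
          if item not in sublist
          ]
print(result[0])    # [1, 2, 3, 4, 5, 6]
print(result[-1])   # [13, 14, 15, 16, 17, 18]
print(len(result))  # 7 sublists, matching the question's expected output
```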
An optimized version:
```
result = []
start = 0
while start < len(lst):
    try:
        end = lst.index(item, start)
    except ValueError:
        end = len(lst)
    result.extend(lst[x+start:x+start+size] for x in range(end - start - size + 1))
    start = end + 1
``` | Use [`itertools.combinations`](http://docs.python.org/2/library/itertools.html#itertools.combinations):
```
import itertools
mylist = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18]
def contains_sublist(lst, sublst):
    n = len(sublst)
    return any((sublst == lst[i:i+n]) for i in xrange(len(lst)-n+1))
print [i for i in itertools.combinations(mylist,6) if 12 not in i and contains_sublist(mylist, list(i))]
```
Prints:
```
[(1, 2, 3, 4, 5, 6), (2, 3, 4, 5, 6, 7), (3, 4, 5, 6, 7, 8), (4, 5, 6, 7, 8, 9), (5, 6, 7, 8, 9, 10), (6, 7, 8, 9, 10, 11), (13, 14, 15, 16, 17, 18)]
``` | Find all possible sublists of a list | [
"",
"python",
""
] |
I've just started learning Python and I'm trying to create a small brute-force program that will ask for an input from the user (a password), brute-force it, then check if it matches.
My problem: I am stuck in the computation of the loop (as you'll see in the source code)
Thanks for your help.
**Source code:**
```
L1=['a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y','z']
L2=['A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z']
L3=['0','1','2','3','4','5','6','7','8','9']
L4=L1+L2+L3
user=input("Enter your secret password here (maximum 4 characters): ")
sum=""
for i in range(0, len(L4)):
    sum = L4[i] + L4[i+1]
    print(sum)
    if sum == user:
        print("your cracked password is :", sum)
        break
``` | You can use `itertools.product` here:
```
>>> from string import letters, digits
>>> strs = letters + digits
>>> from itertools import product
def pwd_checker(pwd):
    if 0 < len(pwd) < 5:
        for i in xrange(1, 5):
            for per in product(strs, repeat=i):
                if "".join(per) == pwd:
                    print 'your password is', "".join(per)
                    return
    else:
        print "Password's length must be between 1 to 4"
...
>>> pwd_checker('a')
your password is a
>>> pwd_checker('ab12')
your password is ab12
>>> pwd_checker('foo')
your password is foo
>>> pwd_checker('Ab1c')
your password is Ab1c
>>> pwd_checker('aaaa')
your password is aaaa
>>> pwd_checker('BaBa')
your password is BaBa
``` | This is a list of all possible passwords:
```
import itertools

list(itertools.chain(
    *[map(lambda x: ''.join(x), itertools.product(L4, repeat=i)) for i in range(5)])
)
```
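A quick sanity check of that construction (added here; a tiny three-character alphabet stands in for `L4` so the list stays small):

```python
import itertools

L4 = ['a', 'b', '0']  # stand-in for the full 62-character alphabet
passwords = list(itertools.chain(
    *[map(lambda x: ''.join(x), itertools.product(L4, repeat=i)) for i in range(3)])
)
print(passwords[:5])   # ['', 'a', 'b', '0', 'aa']
print(len(passwords))  # 1 + 3 + 9 = 13 candidates
```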
You can loop through it or use `index` (and catch `ValueError`) | Python Brute forcing (Very basic) | [
"",
"python",
"python-3.x",
""
] |
given the following Python command:
```
os.system('snmpget -v 3 -u initial 172.17.171.113 1.3.6.1.2.1.1.5.0')
a = raw_input()  # in order to be able to see the result. I don't want the window to close at the end.
```
How can I read the result of this command-line command?
I tried `string = os.read(os.system, 10)`,
but it isn't working... any ideas?
(I am using Windows XP) | Well, usually I'm using:
```
import os
output = os.popen("snmpget -v 3 -u initial 172.17.171.113 1.3.6.1.2.1.1.5.0").read()
``` | ```
import subprocess
output, error = subprocess.Popen('snmpget -v 3 -u initial 172.17.171.113 1.3.6.1.2.1.1.5.0'.split(), stdout=subprocess.PIPE).communicate()
```
OR
```
import subprocess
output = subprocess.check_output('snmpget -v 3 -u initial 172.17.171.113 1.3.6.1.2.1.1.5.0'.split())
```
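A self-contained variant for testing (an added sketch; the `snmpget` call is replaced by a trivial command run through the Python interpreter itself, so it works without SNMP installed):

```python
import subprocess
import sys

# Any command works; the interpreter printing "hello" is a portable stand-in.
output = subprocess.check_output([sys.executable, '-c', 'print("hello")'])
print(output.strip())  # b'hello' (bytes on Python 3)
```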
`output` contains the command output. | How can I read the Command Line feedback from Python os.system()? | [
"",
"python",
""
] |
I have difficulty in using the Flask-Login framework for authentication. I have looked through the documentation as thoroughly as possible but apparently I am missing something obvious.
```
class User():
    def __init__(self, userid=None, username=None, password=None):
        self.userid = userid
        self.username = username
        self.password = password

    def is_authenticated(self):
        return True

    def is_active(self):
        return True

    def is_anonymous(self):
        return False

    def get_id(self):
        return unicode(self.userid)

    def __repr__(self):
        return '<User %r>' % self.username
def find_by_username(username):
    try:
        data = app.mongo.db.users.find_one_or_404({'username': username})
        user = User()
        user.userid = data['_id']
        user.username = data['username']
        user.password = data['password']
        return user
    except HTTPException:
        return None

def find_by_id(userid):
    try:
        data = app.mongo.db.users.find_one_or_404({'_id': userid})
        user = User(data['_id'], data['username'], data['password'])
        return user
    except HTTPException:
        return None
```
The above is my User class located in `users/models.py`
```
login_manager = LoginManager()
login_manager.init_app(app)
login_manager.login_view = 'users.login'
@login_manager.user_loader
def load_user(userid):
    return find_by_id(userid)
```
The above is my user loader.
```
@mod.route('/login/', methods=['GET', 'POST'])
def login():
    form = LoginForm()
    if form.validate_on_submit():
        pw_hash = hashlib.md5(form.password.data).hexdigest()
        user = find_by_username(form.username.data)
        if user is not None:
            if user.password == pw_hash:
                if login_user(user):
                    flash('Logged in successfully.')
                    return redirect(request.args.get('next') or url_for('users.test'))
                else:
                    flash('Error')
            else:
                flash('Username or password incorrect')
        else:
            flash('Username or password incorrect')
    return render_template('users/login.html', form=form)
```
There is no apparent error message, but when trying to access any views decorated with `@login_required`, it redirects me to the login form. As best as I can tell, the `login_user` function isn't actually working, although it returns `True` when I call it. Any advice appreciated. | After stepping through a debugger for a while, I finally fixed the problem.
The key issue is that I was attempting to use the `_id` parameter from the MongoDB collection as the userid. I did not realize that the `_id` parameter was an `ObjectID` type instead of a string or unicode which I needed.
```
def find_by_username(username):
    try:
        data = app.mongo.db.users.find_one_or_404({'username': username})
        user = User(unicode(data['_id']), data['username'], data['password'])
        return user
    except HTTPException:
        return None

def find_by_id(userid):
    try:
        data = app.mongo.db.users.find_one_or_404({'_id': ObjectId(userid)})
        user = User(unicode(data['_id']), data['username'], data['password'])
        return user
    except HTTPException:
        return None
```
Modifying the two functions appropriately fixed this error. | If you've verified it's not your `login_user` function, then that leaves your `find_by_id` function.
The source code for the `user_loader` says:
> The function you set should take a user ID (a `unicode`) and return a user object, or `None` if the user does not exist.
Your `find_by_id` function uses `find_one_or_404` which raises an eyebrow. I'd add some extra debugging around that function, add some prints, or logging to show it's being called, with the correct unicode id, and that it's returning a `User` object, or `None`.
Hopefully that'll get you closer to narrowing down the problem. | Error with flask-login | [
"",
"python",
"python-2.7",
"flask",
"flask-login",
""
] |
Generally speaking, what should the unary `+` do in Python?
I'm asking because, so far, I have never seen a situation like this:
```
+obj != obj
```
Where `obj` is a generic object implementing `__pos__()`.
So I'm wondering: why do `+` and `__pos__()` exist? Can you provide a real-world example where the expression above evaluates to `True`? | I believe that Python operators were inspired by C, where the `+` operator was introduced for symmetry (and also some useful hacks, see comments).
In weakly typed languages such as PHP or Javascript, + tells the runtime to coerce the value of the variable into a number. For example, in Javascript:
```
+"2" + 1
=> 3
"2" + 1
=> '21'
```
Python is strongly typed, so strings don't work as numbers, and, as such, don't implement an unary plus operator.
It is certainly possible to implement an object for which `+obj != obj`:
```
>>> class Foo(object):
...     def __pos__(self):
...         return "bar"
...
>>> +Foo()
'bar'
>>> obj = Foo()
>>> +"a"
Traceback (most recent call last):
  ...
TypeError: bad operand type for unary +: 'str'
```
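Another standard-library case where `+obj != obj` genuinely holds (an addition to the answer's examples): `collections.Counter` defines unary plus to drop non-positive counts.

```python
from collections import Counter

c = Counter(a=2, b=-1, d=0)
print(+c)        # Counter({'a': 2}) -- negative and zero counts removed
print(+c != c)   # True
```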
As for an example for which it actually makes sense, check out the
[surreal numbers](https://en.wikipedia.org/wiki/Surreal_number "surreal numbers"). They are a superset of the reals which includes
infinitesimal values (+ epsilon, - epsilon), where epsilon is
a positive value which is smaller than any other positive number, but
greater than 0; and infinite ones (+ infinity, - infinity).
You could define `epsilon = +0`, and `-epsilon = -0`.
While `1/0` is still undefined, `1/epsilon = 1/+0` is `+infinity`, and `1/-epsilon` = `-infinity`. It is
nothing more than taking limits of `1/x` as `x` approaches `0` from the right (+) or from the left (-).
As `0` and `+0` behave differently, it makes sense that `0 != +0`. | Here's a "real-world" example from the `decimal` package:
```
>>> from decimal import Decimal
>>> obj = Decimal('3.1415926535897932384626433832795028841971')
>>> +obj != obj # The __pos__ function rounds back to normal precision
True
>>> obj
Decimal('3.1415926535897932384626433832795028841971')
>>> +obj
Decimal('3.141592653589793238462643383')
``` | What's the purpose of the + (pos) unary operator in Python? | [
"",
"python",
""
] |
Trying to get the raw data of the HTTP response content in `requests` in Python. I am interested in forwarding the response through another channel, which means that ideally the content should be as pristine as possible.
What would be a good way to do this? | If you are using a `requests.get` call to obtain your HTTP response, you can use the `raw` attribute of the response. Here is the code from the [`requests` docs](http://requests.readthedocs.org/en/latest/user/quickstart/#raw-response-content). The `stream=True` parameter in the `requests.get` call is required for this to work.
```
>>> r = requests.get('https://github.com/timeline.json', stream=True)
>>> r.raw
<requests.packages.urllib3.response.HTTPResponse object at 0x101194810>
>>> r.raw.read(10)
'\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x03'
``` | After `requests.get()`, you can use `r.content` to extract the raw Byte-type content.
```
r = requests.get('https://yourweb.com', stream=True)
r.content
``` | How to get the raw content of a response in requests with Python? | [
"",
"python",
"http",
"web",
"request",
"python-requests",
""
] |
Is there any trick in PostgreSQL to make a value match every possible value, a kind of "catch-all" value, an anti-NULL?
Right now, my best idea is to choose a "catchall" keyword and force a match in my queries.
```
WITH cities AS (SELECT * FROM (VALUES('USA','New York'),
('USA','San Francisco'),
('Canada','Toronto'),
('Canada','Quebec')
)x(country,city)),
zones AS (SELECT * FROM (VALUES('USA East','USA','New York'),
('USA West','USA','San Francisco'),
('Canada','Canada','catchall')
)x(zone,country,city))
SELECT z.zone, c.country, c.city
FROM cities c,zones z
WHERE c.country=z.country
AND z.city IN (c.city,'catchall');
zone | country | city
----------+---------+---------------
USA East | USA | New York
USA West | USA | San Francisco
Canada | Canada | Toronto
Canada | Canada | Quebec
```
If a new canadian town was inserted in the "cities" table, the "zones" table would automatically recognize it as part of the 'Canada' zone.
The above query satisfies the functionality I'm looking for, but it feels awkward and prone to errors if repeated multiple times in a wide database.
Is this the proper way to do it, is there a better way, or am I asking the wrong question ?
Thanks a lot for your answers! | Personally, I think that `NULL` makes a better choice for this:
```
select z.zone, c.country, c.city
from cities c join
zones z
on c.country = z.country and
(c.city = z.city or z.city is null);
```
or even:
```
select z.zone, c.country, c.city
from cities c join
zones z
on c.country = z.country and
c.city = coalesce(z.city, c.city);
```
As per Denis, Postgres seems to be smart enough to use an index on the first query for both `country` and `city`.
If you have indexes on both `zone(country)` and `zone(country, city)`, you could also do a two-part join:
```
select coalesce(zcc.zone, zc.zone) as zone, c.country, c.city
from cities c left join
     zones zcc
     on c.country = zcc.country and
        c.city = zcc.city join
     zones zc
     on c.country = zc.country and
        zc.city is null;
```
Although a bit more complicated, both joins should be able to use appropriate indexes. | I doubt that there is something like that. A simple way, but not very elegant, efficient or flexible, is to define your function `cities_equals(text t1, text t2)` which returns true if both are not null and ( t1 == t2 OR t1 == 'catchall' OR t2 == 'catchall') | Value that matches every possible value (except null of course)? | [
"",
"sql",
"postgresql",
""
] |
I am expecting some number as output from the code below, but I am not getting it.
I am new to python but started coding with PHP.
Sorry if I've gone wrong somewhere. Thanks.
```
# By Websten from forums
#
# Given your birthday and the current date, calculate your age in days.
# Compensate for leap days.
# Assume that the birthday and current date are correct dates (and no time travel).
# Simply put, if you were born 1 Jan 2012 and todays date is 2 Jan 2012
# you are 1 day old.
#
# Hint
# A whole year is 365 days, 366 if a leap year.
def nextDay(year, month, day):
    """Simple version: assume every month has 30 days"""
    if day < 30:
        return year, month, day + 1
    else:
        if month == 12:
            return year + 1, 1, 1
        else:
            return year, month + 1, 1
def daysBetweenDates(year1, month1, day1, year2, month2, day2):
    """Returns the number of days between year1/month1/day1
    and year2/month2/day2. Assumes inputs are valid dates
    in Gregorian calendar, and the first date is not after
    the second."""
    num = 0
    # YOUR CODE HERE!
    yearx = year1
    monthx = month1
    dayx = day1
    while ((year2 >= year1) and (month2 >= month1) and (day2 >= day1)):
        yearx, monthx, dayx = nextDay(yearx, monthx, dayx)
        num = num + 1
    num = '5'
    return num
print daysBetweenDates(2012,9,30,2012,10,30)
``` | You need to change the line:
> while ((year2 >= year1 ) and ( month2 >= month1 ) and ( day2 >= day1 ) ) :
to:
> while ((year2 >= yearx ) and ( month2 >= monthx ) and ( day2 >= dayx ) ) :
because you are not changing the value of month1 in your code but that of monthx.
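Putting the fix together, a minimal corrected sketch (an addition; it keeps the question's 30-day simplification and compares `(year, month, day)` tuples so the loop terminates correctly):

```python
def nextDay(year, month, day):
    """Simple version: assume every month has 30 days."""
    if day < 30:
        return year, month, day + 1
    if month == 12:
        return year + 1, 1, 1
    return year, month + 1, 1

def daysBetweenDates(y1, m1, d1, y2, m2, d2):
    num = 0
    while (y1, m1, d1) < (y2, m2, d2):  # tuple comparison: year, then month, then day
        y1, m1, d1 = nextDay(y1, m1, d1)
        num += 1
    return num

print(daysBetweenDates(2012, 9, 30, 2012, 10, 30))  # 30
```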
Also, I think your while loop will break when dayx is greater than day2, so your measurement will be off by 1. | I have never mastered the while statement in Python, but I think that is your infinite loop: it is always true that day2 > day1, etc. So that condition remains true, and therefore you are stuck with num increasing.
What does happen - do you get any error message?
if I were doing this I would set functions to determine
1. if the years are the same
2. if the years are the same then calculate the days between them
3. if the years are not the same calculate the number of days between the first date and the end of the year for that particular year
4. Calculate the number of days between the beginning of the year of the second date to the second date
5. Calculate the number of years difference between the end of the first year and the beginning of the second year and convert that into days
It may be clunky but it should get you home | My code is not giving output , I expected some number | [
"",
"python",
"loops",
"python-3.x",
"while-loop",
""
] |
I occasionally use Python string formatting. This can be done like so:
```
print('int: %i. Float: %f. String: %s' % (54, 34.434, 'some text'))
```
But, this can also be done like this:
```
print('int: %r. Float: %r. String: %r' % (54, 34.434, 'some text'))
```
As well as using %s:
```
print('int: %s. Float: %s. String: %s' % (54, 34.434, 'some text'))
```
My question is therefore: why would I ever use anything other than %r or %s? The other options (%i and %f) simply seem useless to me, so I'm just wondering why anybody would ever use them?
[edit] Added the example with %s | For floats, the value of `repr` and `str` can vary:
```
>>> num = .2 + .1
>>> 'Float: %f. Repr: %r Str: %s' % (num, num, num)
'Float: 0.300000. Repr: 0.30000000000000004 Str: 0.3'
```
Using `%r` for strings will result in quotes around it:
```
>>> 'Repr:%r Str:%s' % ('foo','foo')
"Repr:'foo' Str:foo"
```
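Width and precision control are the other reason the numeric codes exist (an added illustration):

```python
print('%05d' % 42)        # 00042  -- zero-padded to width 5
print('%8.2f' % 3.14159)  # "    3.14" -- width 8, two decimal places
```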
You should always use `%f` for floats and `%d` for integers. | @AshwiniChaudhary answered your question concerning old string formatting, if you were to use new string formatting you would do the following:
```
>>> 'Float: {0:f}. Repr: {0!r} Str: {0!s}'.format(.2 + .1)
'Float: 0.300000. Repr: 0.30000000000000004 Str: 0.3'
```
Actually, `!s` is the default so you don't need it; the last format can simply be `{0}`.
"",
"python",
"string-formatting",
""
] |
I have to make a GUI for some testing teams. I have been asked to do it in Python, but when I Google, all I see is about IronPython.
I was also asked not to use Visual Studio because it is too expensive for the company. So if you have any idea of how to avoid that, I would be very happy.
I am still new to Python and programming overall, so please no overly advanced solutions.
If you have any questions just ask.
*GUI part: which toolkit would you use for Windows and Mac (mostly Windows)? I would like some drag and drop so I don't waste too much time making the display part.* | Python is the name of a programming language, there are various implementations of it:
* **CPython**: the standard Python interpreter, written in C
* **Jython**: Python interpreter for Java
* **IronPython**: Python interpreter for the .NET framework
* **PyPy**: Python interpreter written in Python
All of them are free (in the sense of not having to buy a license to use them), and can be used to create GUI programs. It really depends what you want to do and which OS you use.
There are various GUI frameworks/bindings for Python: Tkinter, PyGtk, PyQt, WinForms/WPF (IronPython) and the Java UI frameworks.
You also don't have to use Visual Studio for compiling .NET languages, there are open source alternatives like [MonoDevelop](http://monodevelop.com/). | IronPython is a implementation of Python running on .NET - however it is not the implementation that is in general referred to when someone mentions Python - that would be cPython: [Website for (normal) cPython](http://www.python.org/).
Now as to creating a UI - there are many ways that you can use to create a UI in Python.
If you only want to use what is available in a normal installation you could use the TK bindings: [TKInter](http://docs.python.org/3.3/library/tkinter.html). [This wiki entry](http://wiki.python.org/moin/TkInter) holds a wealth of information about getting started with TKInter.
Apart from TKInter there are bindings to many popular frameworks like QT, GTK and more (see [here](http://wiki.python.org/moin/GuiProgramming) for a list). | Python vs Iron Python | [
"",
"python",
"ironpython",
""
] |
I have following code:
```
print "my name is [%s], I like [%s] and I ask question on [%s]" % ("xxx", "python", "stackoverflow")
```
I want to split this LONG line into multiple lines:
```
print
"my name is [%s]"
", I like [%s] "
"and I ask question on [%s]"
% ("xxx", "python", "stackoverflow")
```
Can you please provide the right syntax? | Use implied line continuation by putting everything within parentheses. This is the method recommended in [Python's Style Guide (PEP 8)](http://www.python.org/dev/peps/pep-0008/#id12):
```
print ("my name is [%s]"
", I like [%s] "
"and I ask question on [%s]"
% ("xxx", "python", "stackoverflow"))
```
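Capturing the same expression in a variable shows the concatenated result (a small added check):

```python
message = ("my name is [%s]"
           ", I like [%s] "
           "and I ask question on [%s]"
           % ("xxx", "python", "stackoverflow"))
print(message)  # my name is [xxx], I like [python] and I ask question on [stackoverflow]
```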
This works because the Python interpreter will concatenate adjacent string literals, so `"foo" 'bar'` becomes `'foobar'`. | An alternative style:
```
print "my name is [%s], I like [%s] and I ask question on [%s]" % (
"xxx", "python", "stackoverflow")
``` | python split statement into multiple lines | [
"",
"python",
""
] |
I'm trying to access a dictionary by using a variable that I've set previously.
```
myvar1 = 1
mydict = {'1': ['apple', 20, 80, 40],
'2': ['orange', 81, 400, 100]}
myvar2 = mydict[myvar1][0]
print(myvar2)
```
This gives me KeyError: 1
I've also tried
```
myvar2 = mydict['myvar1'][0]
```
This gives me KeyError: 'myvar1'
Clearly I am missing something basic. | `myvar1` is an integer and `mydict` uses strings as keys. Either change `myvar1` to be a string or change the dictionary to use integers as keys.
```
myvar1 = 1
mydict = {1: ['apple', 20, 80, 40],
2: ['orange', 81, 400, 100]}
``` | Try:
```
myvar1 = '1'
```
in your first statement. The `1` you're using is an integer, not a string as in `'1'`:
```
In [1]: 1=='1'
Out[1]: False
In [2]: str(1)=='1'
Out[2]: True
``` | Python: Accessing dictionary using variable as the key | [
"",
"python",
"dictionary",
""
] |
I am new to Python programming.
I'd like to know how I can enhance the features of built-in functions ([monkey patching](http://en.wikipedia.org/wiki/Monkey_patch)).
For example,
I know the [sum()](http://docs.python.org/release/3.1.5/library/functions.html#sum) built-in function works only on numeric items:
```
>>> sum([4,5,6,7]) #22
```
I would like to make the sum function also accept a list of string items, as shown below, for example:
```
>>> sum(['s','t','a','c','k']) # 'stack'
```
Thanks in advance | Not monkey patching exactly; I just re-defined `sum` to make it work for strings as well.
```
>>> import __builtin__
>>> def sum(seq, start=0):
...     if all(isinstance(x, str) for x in seq):
...         return "".join(seq)
...     else:
...         return __builtin__.sum(seq, start)
...
>>> sum([4,5,6,7])
22
>>> sum(['s','t','a','c','k'])
'stack'
``` | You can't really "monkeypatch" a function the way you can a class, object, module, etc.
Those other things all ultimately come down to a collection of attributes, so replacing one attribute with a different one, or adding a new one, is both easy and useful. Functions, on the other hand, are basically atomic things.\*
You can, of course, monkeypatch the builtins module by replacing the `sum` function. But I don't think that's what you were asking. (If you were, see below.)
Anyway, you can't patch `sum`, but you can write a new function, with the same name if you want, (possibly with a wrapper around the original function—which, you'll notice, is exactly what decorators do).
---
But there is really no way to use `sum(['s','t','a','c','k'])` to do what you want, because `sum` by default starts off with 0 and adds things to it. And you can't add a string to 0.\*\*
Of course you can always pass an explicit `start` instead of using the default, but you'd have to change your calling code to send the appropriate `start`. In some cases (e.g., where you're sending a literal list display) it's pretty obvious; in other cases (e.g., in a generic function) it may not be. That still won't work here, because `sum(['s','t','a','c','k'], '')` will just raise a `TypeError` (try it and read the error to see why), but it will work in other cases.
But there is no way to avoid having to know an appropriate starting value with `sum`, because that's how `sum` works.
If you think about it, `sum` is conceptually equivalent to:
```
def sum(iterable, start=0):
    return reduce(operator.add, iterable, start)
```
The only real problem here is that `start`, right? `reduce` allows you to leave off the start value, and it will start with the first value in the iterable:
```
>>> reduce(operator.add, ['s', 't', 'a', 'c', 'k'])
'stack'
```
That's something `sum` can't do. But, if you really want to, you can redefine `sum` so it *can*:
```
>>> def sum(iterable):
...     return reduce(operator.add, iterable)
```
… or:
```
>>> sentinel = object()
>>> def sum(iterable, start=sentinel):
...     if start is sentinel:
...         return reduce(operator.add, iterable)
...     else:
...         return reduce(operator.add, iterable, start)
```
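A quick exercise of that sentinel version (an added sketch; written for Python 3, where `reduce` lives in `functools`, and named `my_sum` here to avoid shadowing the builtin):

```python
from functools import reduce
import operator

sentinel = object()

def my_sum(iterable, start=sentinel):
    # With no explicit start, seed the reduction with the first element.
    if start is sentinel:
        return reduce(operator.add, iterable)
    return reduce(operator.add, iterable, start)

print(my_sum(['s', 't', 'a', 'c', 'k']))  # stack
print(my_sum([4, 5, 6, 7]))               # 22
```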
But note that this `sum` will be much slower on integers than the original one, and it will raise a `TypeError` instead of returning `0` on an empty sequence, and so on.
---
If you really do want to monkeypatch the builtins (as opposed to just defining a new function with a new name, or a new function with the same name in your module's `globals()` that shadows the builtin), here's an example that works for Python 3.1+, as long as your modules are using normal globals dictionaries (which they will be unless you're running in an embedded interpreter or an `exec` call or similar):
```
import builtins
builtins.sum = _new_sum
```
In other words, same as monkeypatching any other module.
In 2.x, the module is called `__builtin__`. And the rules for how it's accessed through globals changed somewhere around 2.3 and again in 3.0. See [`builtins`](http://docs.python.org/3/library/builtins.html)/[`__builtin__`](http://docs.python.org/2/library/__builtin__.html) for details.
---
\* Of course that isn't *quite* true. A function has a name, a list of closure cells, a doc string, etc. on top of its code object. And even the code object is a sequence of bytecodes, and you can use `bytecodehacks` or hard-coded hackery on that. Except that `sum` is actually a builtin-function, not a function, so it doesn't even have code accessible from Python… Anyway, it's close enough for most purposes to say that functions are atomic things.
\*\* Sure, you *could* convert the string to some subclass that knows how to add itself to integers (by ignoring them), but really, you don't want to. | how to enhance the features of builtin functions in python? | [
"",
"python",
"python-2.7",
"python-3.x",
""
] |
I have a script that writes database information to a csv based on a SQL query I wrote. I was recently tasked with modifying the query to return only rows where the `DateTime` field has a date that is newer then Jan. 1 of this year. The following query does not work:
```
$startdate = "01/01/2013 00:00:00"
SELECT Ticket, Description, DateTime
FROM [table]
WHERE ((Select CONVERT (VARCHAR(10), DateTime,105) as [DD-MM-YYYY])>=(Select CONVERT (VARCHAR(10), $startdate,105) as [DD-MM-YYYY]))"
```
The format of the DateTime field in the database is in the same format as the $startdate variable. What am I doing wrong? Is my query incorrectly formatted? Thanks. | ```
DECLARE @startdate datetime
SELECT @startdate = '01/01/2013'
SELECT Ticket, Description, DateTime
FROM [table]
WHERE DateTime >= @startdate
``` | ```
DECLARE @startdate datetime
SELECT @startdate = '01/01/2013'
SELECT Ticket, Description, DateTime
FROM [table]
WHERE DateTime >= @startdate
``` | How can I obtain only MSSQL rows that have been created since Jan 1st? | [
"",
"sql",
"sql-server",
""
] |
I am trying to write a record into a MySQL DB where I have defined table `jobs` as:
```
CREATE TABLE jobs(
job_id INT NOT NULL AUTO_INCREMENT,
job_title VARCHAR(300) NOT NULL,
job_url VARCHAR(400) NOT NULL,
job_location VARCHAR(150),
job_salary_low DECIMAL(25) DEFAULT(0),
job_salary_high DECIMAL(25) DEFAULT(0),
company VARCHAR(150),
job_posted DATE,
PRIMARY KEY ( job_id )
);
```
The code I am testing with is:
```
cur.execute("INSERT INTO jobs VALUES(DEFAULT, '"+jobTitle+"','"
+jobHref+"',"+salaryLow+","+salaryHigh+",'"+company+"',"
+jobPostedAdjusted+"','"+salaryCurrency+"';")
print(cur.fetchall())
```
The errors that I am getting are:
```
pydev debugger: starting
Traceback (most recent call last):
File "C:\Users\Me\AppData\Local\Aptana Studio 3\plugins\org.python.pydev_2.7.0.2013032300\pysrc\pydevd.py", line 1397, in <module>
debugger.run(setup['file'], None, None)
File "C:\Users\Me\AppData\Local\Aptana Studio 3\plugins\org.python.pydev_2.7.0.2013032300\pysrc\pydevd.py", line 1090, in run
pydev_imports.execfile(file, globals, locals) #execute the script
File "C:\Users\Me\Documents\Aptana Studio 3 Workspace\PythonScripts\PythonScripts\testFind.py", line 25, in <module>
+jobPostedAdjusted+"','"+salaryCurrency+"';")
TypeError: cannot concatenate 'str' and 'float' objects
```
What is the best way to insert this record? Thanks. | > What is the best way to insert this record?
Use `%s` placeholders, and pass your parameters as a separate list, then `MySQLdb` does all the parameter interpolation for you.
For example...
```
params = [jobTitle,
jobHref,
salaryLow,
salaryHigh,
company,
jobPostedAdjusted,
salaryCurrency]
cur.execute("INSERT INTO jobs VALUES(DEFAULT, %s, %s, %s, %s, %s, %s, %s)", params)
```
This also protects you from SQL injection.
---
**Update**
> I have `print(cur.fetchall())` after the `cur.execute....` When the
> code is run, it prints empty brackets such as `()`.
`INSERT` queries don't return a result set, so `cur.fetchall()` will return an empty list.
> When I interrogate the DB from the terminal I can see nothing has been changed.
If you're using a transactional storage engine like InnoDB, you have explicitly commit the transaction, with something like...
```
conn = MySQLdb.connect(...)
cur = conn.cursor()
cur.execute("INSERT ...")
conn.commit()
```
If you want to `INSERT` lots of rows, it's much faster to do it in a single transaction...
```
conn = MySQLdb.connect(...)
cur = conn.cursor()
for i in range(100):
    cur.execute("INSERT ...")
conn.commit()
```
...because InnoDB (by default) will sync the data to disk after each call to `conn.commit()`.
> Also, does the `commit;` statement have to be in somewhere?
The `commit` statement is interpreted by the MySQL client, not the server, so you won't be able to use it with `MySQLdb`, but it ultimately does the same thing as the `conn.commit()` line in the previous example. | The error message says it all:
> TypeError: cannot concatenate 'str' and 'float' objects
Python does not automagically convert floats to strings. Your best friend here *could* be [`format`](http://docs.python.org/2/library/stdtypes.html#str.format):
```
>>> "INSERT INTO jobs VALUES(DEFAULT, '{}', '{}')".format("Programmer", 35000.5)
"INSERT INTO jobs VALUES(DEFAULT, 'Programmer', '35000.5')"
```
*But*, please note that insertion of user provided data in a SQL string without any precautions might lead to [SQL Injection](http://en.wikipedia.org/wiki/SQL_injection)! Beware... That's why `execute` provide its own way of doing, protecting you from that risk. Something like that:
```
>>> cursor.execute("INSERT INTO jobs VALUES(DEFAULT, %s, %s)", ("Programmer", 35000.5))
```
For a complete discussion about this, search the web. For example <http://love-python.blogspot.fr/2010/08/prevent-sql-injection-in-python-using.html>
---
And, btw, the float type is mostly for scientific calculation. But it is usually not suitable for monetary values, due to rounding errors (that's why your table use a `DECIMAL` column, and not `FLOAT` one, I assume). For *exact* values, Python provide the [`decimal`](http://docs.python.org/2/library/decimal.html#module-decimal) type. You should take a look at it. | Inserting into MySQL from Python - Errors | [
"",
"python",
"mysql",
""
] |
I have two tables like this:
```
Occupied Subject
+----------+-----------+ +----+---------+
| idClass | idSubject | | id | Name |
+----------+-----------+ +----+---------+
| 1 | 1 | | 1 | German |
| 1 | 2 | | 2 | English |
| 2 | 3 | | 3 | Math |
+----------+-----------+ +----+---------+
```
Now I want to get the *id* and the *Name* from all subjects which a special class occupied. I tried with this SQL statement:
```
SELECT S._id ,
S.Name
FROM Subject S
WHERE S._id = ( SELECT O.idSubject
FROM Occupied O
WHERE O.idClass = '1' -- '1' is variable and represents the special class
)
```
But I only get this result from the database:
```
+----+---------+
| id | Name |
+----+---------+
| 2 | English |
+----+---------+
```
So I lost the *German* row. Where is my mistake? | I think you could use an [`INNER JOIN`](http://dev.mysql.com/doc/refman/5.0/en/join.html) query
```
SELECT S._id ,
S.Name
FROM Subject S
INNER JOIN Occupied O ON O.idSubject = S._id
WHERE O.idClass='1';
```
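The join can be sanity-checked with an in-memory sqlite3 stand-in (schema and data copied from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Subject (_id INTEGER, Name TEXT)")
cur.execute("CREATE TABLE Occupied (idClass INTEGER, idSubject INTEGER)")
cur.executemany("INSERT INTO Subject VALUES (?, ?)",
                [(1, "German"), (2, "English"), (3, "Math")])
cur.executemany("INSERT INTO Occupied VALUES (?, ?)",
                [(1, 1), (1, 2), (2, 3)])

rows = cur.execute("""
    SELECT S._id, S.Name
    FROM Subject S
    INNER JOIN Occupied O ON O.idSubject = S._id
    WHERE O.idClass = 1
    ORDER BY S._id
""").fetchall()
print(rows)  # [(1, 'German'), (2, 'English')] -- the German row is no longer lost
```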
As Charles stated, this can be faster than a subquery, depending on the database vendor and SQL distribution. | change equals to **IN**
```
SELECT S._id ,
S.Name
FROM Subject S
WHERE S._id IN ( SELECT O.idSubject
FROM Occupied O
WHERE O.idClass = '1'
)
``` | Lost one row in SQL statement | [
"",
"sql",
""
] |
How can I select rows from a DataFrame based on values in some column in Pandas?
In SQL, I would use:
```
SELECT *
FROM table
WHERE column_name = some_value
``` | To select rows whose column value equals a scalar, `some_value`, use `==`:
```
df.loc[df['column_name'] == some_value]
```
To select rows whose column value is in an iterable, `some_values`, use `isin`:
```
df.loc[df['column_name'].isin(some_values)]
```
Combine multiple conditions with `&`:
```
df.loc[(df['column_name'] >= A) & (df['column_name'] <= B)]
```
Note the parentheses. Due to Python's [operator precedence rules](https://docs.python.org/3/reference/expressions.html#operator-precedence), `&` binds more tightly than `<=` and `>=`. Thus, the parentheses in the last example are necessary. Without the parentheses
```
df['column_name'] >= A & df['column_name'] <= B
```
is parsed as
```
df['column_name'] >= (A & df['column_name']) <= B
```
which results in a [Truth value of a Series is ambiguous error](https://stackoverflow.com/questions/36921951/truth-value-of-a-series-is-ambiguous-use-a-empty-a-bool-a-item-a-any-o).
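For the two-sided range there is also `Series.between`, which builds the same mask (inclusive on both ends by default) and needs no extra parentheses:

```python
import pandas as pd

df = pd.DataFrame({'column_name': [1, 5, 10, 15, 20]})
A, B = 5, 15

# Same as (df['column_name'] >= A) & (df['column_name'] <= B)
mask = df['column_name'].between(A, B)
print(df.loc[mask, 'column_name'].tolist())  # [5, 10, 15]
```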
---
To select rows whose column value *does not equal* `some_value`, use `!=`:
```
df.loc[df['column_name'] != some_value]
```
The `isin` returns a boolean Series, so to select rows whose value is *not* in `some_values`, negate the boolean Series using `~`:
```
df = df.loc[~df['column_name'].isin(some_values)] # .loc is not in-place replacement
```
---
For example,
```
import pandas as pd
import numpy as np
df = pd.DataFrame({'A': 'foo bar foo bar foo bar foo foo'.split(),
                   'B': 'one one two three two two one three'.split(),
                   'C': np.arange(8), 'D': np.arange(8) * 2})
print(df)
# A B C D
# 0 foo one 0 0
# 1 bar one 1 2
# 2 foo two 2 4
# 3 bar three 3 6
# 4 foo two 4 8
# 5 bar two 5 10
# 6 foo one 6 12
# 7 foo three 7 14
print(df.loc[df['A'] == 'foo'])
```
yields
```
A B C D
0 foo one 0 0
2 foo two 2 4
4 foo two 4 8
6 foo one 6 12
7 foo three 7 14
```
---
If you have multiple values you want to include, put them in a
list (or more generally, any iterable) and use `isin`:
```
print(df.loc[df['B'].isin(['one','three'])])
```
yields
```
A B C D
0 foo one 0 0
1 bar one 1 2
3 bar three 3 6
6 foo one 6 12
7 foo three 7 14
```
---
Note, however, that if you wish to do this many times, it is more efficient to
make an index first, and then use `df.loc`:
```
df = df.set_index(['B'])
print(df.loc['one'])
```
yields
```
A C D
B
one foo 0 0
one bar 1 2
one foo 6 12
```
or, to include multiple values from the index use `df.index.isin`:
```
df.loc[df.index.isin(['one','two'])]
```
yields
```
A C D
B
one foo 0 0
one bar 1 2
two foo 2 4
two foo 4 8
two bar 5 10
one foo 6 12
``` | There are several ways to select rows from a Pandas dataframe:
1. **Boolean indexing (`df[df['col'] == value]`)**
2. **Positional indexing (`df.iloc[...]`)**
3. **Label indexing (`df.xs(...)`)**
4. **`df.query(...)` API**
Below I show you examples of each, with advice when to use certain techniques. Assume our criterion is column `'A'` == `'foo'`
(Note on performance: For each base type, we can keep things simple by using the Pandas API or we can venture outside the API, usually into NumPy, and speed things up.)
---
**Setup**
The first thing we'll need is to identify a condition that will act as our criterion for selecting rows. We'll start with the OP's case `column_name == some_value`, and include some other common use cases.
Borrowing from @unutbu:
```
import pandas as pd, numpy as np
df = pd.DataFrame({'A': 'foo bar foo bar foo bar foo foo'.split(),
                   'B': 'one one two three two two one three'.split(),
                   'C': np.arange(8), 'D': np.arange(8) * 2})
```
---
# **1. Boolean indexing**
... Boolean indexing requires finding the true value of each row's `'A'` column being equal to `'foo'`, then using those truth values to identify which rows to keep. Typically, we'd name this series, an array of truth values, `mask`. We'll do so here as well.
```
mask = df['A'] == 'foo'
```
We can then use this mask to slice or index the data frame
```
df[mask]
A B C D
0 foo one 0 0
2 foo two 2 4
4 foo two 4 8
6 foo one 6 12
7 foo three 7 14
```
This is one of the simplest ways to accomplish this task and if performance or intuitiveness isn't an issue, this should be your chosen method. However, if performance is a concern, then you might want to consider an alternative way of creating the `mask`.
---
# **2. Positional indexing**
Positional indexing (`df.iloc[...]`) has its use cases, but this isn't one of them. In order to identify where to slice, we first need to perform the same boolean analysis we did above. This leaves us performing one extra step to accomplish the same task.
```
mask = df['A'] == 'foo'
pos = np.flatnonzero(mask)
df.iloc[pos]
A B C D
0 foo one 0 0
2 foo two 2 4
4 foo two 4 8
6 foo one 6 12
7 foo three 7 14
```
# **3. Label indexing**
*Label* indexing can be very handy, but in this case, we are again doing more work for no benefit
```
df.set_index('A', append=True, drop=False).xs('foo', level=1)
A B C D
0 foo one 0 0
2 foo two 2 4
4 foo two 4 8
6 foo one 6 12
7 foo three 7 14
```
# **4. `df.query()` API**
*`pd.DataFrame.query`* is a very elegant/intuitive way to perform this task, but is often slower. **However**, if you pay attention to the timings below, for large data, the query is very efficient. More so than the standard approach and of similar magnitude as my best suggestion.
```
df.query('A == "foo"')
A B C D
0 foo one 0 0
2 foo two 2 4
4 foo two 4 8
6 foo one 6 12
7 foo three 7 14
```
---
My preference is to use the `Boolean` `mask`
Actual improvements can be made by modifying how we create our `Boolean` `mask`.
**`mask` alternative 1**
*Use the underlying NumPy array and forgo the overhead of creating another `pd.Series`*
```
mask = df['A'].values == 'foo'
```
I'll show more complete time tests at the end, but just take a look at the performance gains we get using the sample data frame. First, we look at the difference in creating the `mask`
```
%timeit mask = df['A'].values == 'foo'
%timeit mask = df['A'] == 'foo'
5.84 µs ± 195 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
166 µs ± 4.45 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
```
Evaluating the `mask` with the NumPy array is ~ 30 times faster. This is partly due to NumPy evaluation often being faster. It is also partly due to the lack of overhead necessary to build an index and a corresponding `pd.Series` object.
Next, we'll look at the timing for slicing with one `mask` versus the other.
```
mask = df['A'].values == 'foo'
%timeit df[mask]
mask = df['A'] == 'foo'
%timeit df[mask]
219 µs ± 12.3 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
239 µs ± 7.03 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
```
The performance gains aren't as pronounced. We'll see if this holds up over more robust testing.
---
**`mask` alternative 2**
We could have reconstructed the data frame as well. There is a big caveat when reconstructing a dataframe—you must take care of the `dtypes` when doing so!
Instead of `df[mask]` we will do this
```
pd.DataFrame(df.values[mask], df.index[mask], df.columns).astype(df.dtypes)
```
If the data frame is of mixed type, which our example is, then when we get `df.values` the resulting array is of `dtype` `object` and consequently, all columns of the new data frame will be of `dtype` `object`. Thus requiring the `astype(df.dtypes)` and killing any potential performance gains.
```
%timeit df[m]
%timeit pd.DataFrame(df.values[mask], df.index[mask], df.columns).astype(df.dtypes)
216 µs ± 10.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
1.43 ms ± 39.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
```
However, if the data frame is not of mixed type, this is a very useful way to do it.
Given
```
np.random.seed([3,1415])
d1 = pd.DataFrame(np.random.randint(10, size=(10, 5)), columns=list('ABCDE'))
d1
A B C D E
0 0 2 7 3 8
1 7 0 6 8 6
2 0 2 0 4 9
3 7 3 2 4 3
4 3 6 7 7 4
5 5 3 7 5 9
6 8 7 6 4 7
7 6 2 6 6 5
8 2 8 7 5 8
9 4 7 6 1 5
```
---
```
%%timeit
mask = d1['A'].values == 7
d1[mask]
179 µs ± 8.73 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
```
Versus
```
%%timeit
mask = d1['A'].values == 7
pd.DataFrame(d1.values[mask], d1.index[mask], d1.columns)
87 µs ± 5.12 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
```
We cut the time in half.
---
**`mask` alternative 3**
@unutbu also shows us how to use `pd.Series.isin` to account for each element of `df['A']` being in a set of values. This evaluates to the same thing if our set of values is a set of one value, namely `'foo'`. But it also generalizes to include larger sets of values if needed. Turns out, this is still pretty fast even though it is a more general solution. The only real loss is in intuitiveness for those not familiar with the concept.
```
mask = df['A'].isin(['foo'])
df[mask]
A B C D
0 foo one 0 0
2 foo two 2 4
4 foo two 4 8
6 foo one 6 12
7 foo three 7 14
```
However, as before, we can utilize NumPy to improve performance while sacrificing virtually nothing. We'll use `np.in1d`
```
mask = np.in1d(df['A'].values, ['foo'])
df[mask]
A B C D
0 foo one 0 0
2 foo two 2 4
4 foo two 4 8
6 foo one 6 12
7 foo three 7 14
```
---
**Timing**
I'll include other concepts mentioned in other posts as well for reference.
*Code Below*
Each *column* in this table represents a different length data frame over which we test each function. Each cell shows the relative time taken, with the fastest function given a base index of `1.0`.
```
res.div(res.min())
10 30 100 300 1000 3000 10000 30000
mask_standard 2.156872 1.850663 2.034149 2.166312 2.164541 3.090372 2.981326 3.131151
mask_standard_loc 1.879035 1.782366 1.988823 2.338112 2.361391 3.036131 2.998112 2.990103
mask_with_values 1.010166 1.000000 1.005113 1.026363 1.028698 1.293741 1.007824 1.016919
mask_with_values_loc 1.196843 1.300228 1.000000 1.000000 1.038989 1.219233 1.037020 1.000000
query 4.997304 4.765554 5.934096 4.500559 2.997924 2.397013 1.680447 1.398190
xs_label 4.124597 4.272363 5.596152 4.295331 4.676591 5.710680 6.032809 8.950255
mask_with_isin 1.674055 1.679935 1.847972 1.724183 1.345111 1.405231 1.253554 1.264760
mask_with_in1d 1.000000 1.083807 1.220493 1.101929 1.000000 1.000000 1.000000 1.144175
```
You'll notice that the fastest times seem to be shared between `mask_with_values` and `mask_with_in1d`.
```
res.T.plot(loglog=True)
```
[](https://i.stack.imgur.com/ljeTd.png)
**Functions**
```
def mask_standard(df):
    mask = df['A'] == 'foo'
    return df[mask]

def mask_standard_loc(df):
    mask = df['A'] == 'foo'
    return df.loc[mask]

def mask_with_values(df):
    mask = df['A'].values == 'foo'
    return df[mask]

def mask_with_values_loc(df):
    mask = df['A'].values == 'foo'
    return df.loc[mask]

def query(df):
    return df.query('A == "foo"')

def xs_label(df):
    return df.set_index('A', append=True, drop=False).xs('foo', level=-1)

def mask_with_isin(df):
    mask = df['A'].isin(['foo'])
    return df[mask]

def mask_with_in1d(df):
    mask = np.in1d(df['A'].values, ['foo'])
    return df[mask]
```
---
**Testing**
```
res = pd.DataFrame(
    index=[
        'mask_standard', 'mask_standard_loc', 'mask_with_values', 'mask_with_values_loc',
        'query', 'xs_label', 'mask_with_isin', 'mask_with_in1d'
    ],
    columns=[10, 30, 100, 300, 1000, 3000, 10000, 30000],
    dtype=float
)
for j in res.columns:
    d = pd.concat([df] * j, ignore_index=True)
    for i in res.index:
        stmt = '{}(d)'.format(i)
        setp = 'from __main__ import d, {}'.format(i)
        res.at[i, j] = timeit(stmt, setp, number=50)
```
---
**Special Timing**
Looking at the special case when we have a single non-object `dtype` for the entire data frame.
*Code Below*
```
spec.div(spec.min())
10 30 100 300 1000 3000 10000 30000
mask_with_values 1.009030 1.000000 1.194276 1.000000 1.236892 1.095343 1.000000 1.000000
mask_with_in1d 1.104638 1.094524 1.156930 1.072094 1.000000 1.000000 1.040043 1.027100
reconstruct 1.000000 1.142838 1.000000 1.355440 1.650270 2.222181 2.294913 3.406735
```
Turns out, reconstruction isn't worth it past a few hundred rows.
```
spec.T.plot(loglog=True)
```
[](https://i.stack.imgur.com/K1bNc.png)
**Functions**
```
np.random.seed([3,1415])
d1 = pd.DataFrame(np.random.randint(10, size=(10, 5)), columns=list('ABCDE'))
def mask_with_values(df):
    mask = df['A'].values == 'foo'
    return df[mask]

def mask_with_in1d(df):
    mask = np.in1d(df['A'].values, ['foo'])
    return df[mask]

def reconstruct(df):
    v = df.values
    mask = np.in1d(df['A'].values, ['foo'])
    return pd.DataFrame(v[mask], df.index[mask], df.columns)

spec = pd.DataFrame(
    index=['mask_with_values', 'mask_with_in1d', 'reconstruct'],
    columns=[10, 30, 100, 300, 1000, 3000, 10000, 30000],
    dtype=float
)
```
**Testing**
```
for j in spec.columns:
    d = pd.concat([df] * j, ignore_index=True)
    for i in spec.index:
        stmt = '{}(d)'.format(i)
        setp = 'from __main__ import d, {}'.format(i)
        spec.at[i, j] = timeit(stmt, setp, number=50)
``` | How do I select rows from a DataFrame based on column values? | [
"",
"python",
"pandas",
"dataframe",
""
] |
If I have a database table `t1` that has the following data:
How can I select just those columns that contain the term "false"
```
a | b | c | d
------+-------+------+------
1 | 2 | 3 | 4
a | b | c | d
5 | 4 | 3 | 2
true | false | true | true
1 | 2 | 3 | 4
a | b | c | d
5 | 4 | 3 | 2
true | false | true | true
1 | 2 | 3 | 4
a | b | c | d
5 | 4 | 3 | 2
true | false | true | true
```
Thanks | I'm afraid the best way to do this is to write some procedural code to select all rows from the table, and at each row, examine the data to see which values match your condition. Remember the column containing the matching value by adding its name to a set.
When you are done with the full, painful table scan, your set will contain the column names that you need.
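Client-side, that scan could look like this (a pure-Python sketch over rows already fetched as dictionaries; the sample rows are made up from the question's table):

```python
rows = [
    {"a": "1", "b": "2", "c": "3", "d": "4"},
    {"a": "true", "b": "false", "c": "true", "d": "true"},
    {"a": "5", "b": "4", "c": "3", "d": "2"},
]

matching_columns = set()
for row in rows:                  # the full, painful table scan
    for column, value in row.items():
        if value == "false":      # the condition we are probing for
            matching_columns.add(column)

print(sorted(matching_columns))  # ['b']
```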
[PL/pgSQL docs](http://www.postgresql.org/docs/8.1/static/plpgsql.html)
I would be surprised if there were a way to select columns via some kind of meta-like SQL query, but one never knows with PostgreSQL. Sometimes you have to do things procedurally, and this *sounds* like one of those cases. | Assuming that you want to select everything from any column that contains "false", which means you want to select everything from column `b` in this case:
Even if that were possible somehow - although I cannot think of a solution off the top of my head - it would be a serious violation of the relational concept. When you do a select, you are always supposed to know beforehand - which means at design time, not at run time - which, and especially how many columns you are supposed to get. Data specific column counts are not the relational database way. | Postgres selecting only columns that meet a condition | [
"",
"sql",
"postgresql",
""
] |
This is what I got so far. This game can have multiple players up to len(players).
I'd like it to keep prompting the different players each time to make their move.
So, for example, if there are 3 players A B C, if it was player's A turn, I want the next player to be player B, then the next player to be C, and then loop back to player A.
Using only while loops, if statements, and booleans if possible.
PLAYERS = 'ABCD'
```
def next_gamer(gamer):
    count = 0
    while count < len(GAMERS):
        if gamer == GAMERS[0]:
            return GAMERS[count + 1]
        if gamer == GAMERS[-1]
            return GAMER[0]
        count = count + 1
``` | If you can't use the .index() method, I think what you are trying to do is this:
```
def next_gamer(gamer):
    count = 0
    while count < len(GAMERS) - 1:  # stop before the last player
        if gamer == GAMERS[count]:
            return GAMERS[count + 1]
        count = count + 1
    return GAMERS[0]                # the last player wraps around to the first
``` | My solution to your previous question, involving generators :
```
turn = 0

def gameIsFinished():
    global turn
    turn = turn + 1
    return turn > 10

def cycle(iterable):
    while True:
        for item in iterable:
            if gameIsFinished():
                return
            yield item

for player in cycle([1,2,3]):
    print player
``` | How to create a function that loops back to the first player from the last player? | [
"",
"python",
"while-loop",
""
] |
I have the following type of data:
The data is segmented into "frames" and each frame has a start and stop "gpstime". Within each frame are a bunch of points with a "gpstime" value.
There is a frames model that has a frame\_name,start\_gps,stop\_gps,...
Let's say I have a list of gpstime values and want to find the corresponding frame\_name for each.
I could just do a loop...
```
framenames = [frames.objects.filter(start_gps__lte=gpstime[idx],stop_gps__gte=gpstime[idx]).values_list('frame_name',flat=True) for idx in range(len(gpstime))]
```
This will give me a list of 'frame\_name', one for each gpstime. This is what I want. However this is very slow.
What I want to know: Is there a better way to perform this lookup to get a framename for each gpstime that is more efficient than iterating over the list? This list could get fairly large.
Thanks!
**EDIT: Frames model**
```
class frames(models.Model):
    frame_id = models.AutoField(primary_key=True)
    frame_name = models.CharField(max_length=20)
    start_gps = models.FloatField()
    stop_gps = models.FloatField()

    def __unicode__(self):
        return "%s" % (self.frame_name)
``` | If I understand correctly, gpstime is a list of the times, and you want to produce a list of framenames with one for each gpstime. Your current way of doing this is indeed very slow because it makes a db query for each timestamp. You need to minimize the number of db hits.
The answer that comes first to my head uses numpy. Note that I'm not making any extra assumptions here. If your gpstime list can be sorted, i.e. the ordering does not matter, then it could be done much faster.
Try something like this:
```
from numpy import array
frame_start_times = array(frames.objects.all().values_list('start_gps', flat=True))
frame_end_times = array(frames.objects.all().values_list('stop_gps', flat=True))
frame_names = array(frames.objects.all().values_list('frame_name', flat=True))
frame_names_for_times = []
for time in gpstime:
    frame_mask = (frame_start_times < time) & (frame_end_times > time)
    frame_names_for_times.append(frame_names[frame_mask].tostring())
```
EDIT:
Since the list is sorted, you can use `.searchsorted()`:
```
from numpy import array as a
gpstimes=a([151,152,153,190,649,652,920,996])
starts=a([100,600,900,1000])
ends=a([180,650,950,1000])
names=a(['a','b','c','d',])
names_for_times=[]
for time in gpstimes:
    start_pos = starts.searchsorted(time)
    end_pos = ends.searchsorted(time)
    if start_pos - 1 == end_pos:
        print time, names[end_pos]
    else:
        print str(time) + ' was not within any frame'
``` | The best way to speed things up is to add indexes to those fields:
```
start_gps = models.FloatField(db_index=True)
stop_gps = models.FloatField(db_index=True)
```
and then run `manage.py syncdb`. | Django lte/gte query on a list | [
"",
"python",
"django",
"performance",
"list",
"filter",
""
] |
Table:
```
locations
---------
contract_id INT
order_position INT # I want to find any bad data in this column
```
I'd like to find any duplicate `order_position`s within a given `contract_id`.
So for example any cases like if two `locations` rows had a `contract_id` of **"24845"** and both had an `order_position` of **"3".**
I'm using MySQL 5.1. | A HAVING clause can show you instances where duplicates exist:
```
SELECT *
FROM locations
GROUP BY contract_id, order_position
HAVING COUNT(*) > 1
``` | ```
select contract_id, order_position
from locations
group by contract_id, order_position
having count(*) > 1
``` | How could I write a query to find duplicate values within grouped rows? | [
"",
"mysql",
"sql",
""
] |
I have a `JSON` file that has the following structure:
```
{
"name":[
{
"someKey": "\n\n some Value "
},
{
"someKey": "another value "
}
],
"anotherName":[
{
"anArray": [
{
"key": " value\n\n",
"anotherKey": " value"
},
{
"key": " value\n",
"anotherKey": "value"
}
]
}
]
}
```
Now I want to `strip` off all the whitespace and newlines for every value in the `JSON` file. Is there some way to iterate over each element of the dictionary and the nested dictionaries and lists? | > Now I want to strip off all the whitespace and newlines for every value in the JSON file
Using `pkgutil.simplegeneric()` to create a helper function `get_items()`:
```
import json
import sys
from pkgutil import simplegeneric
@simplegeneric
def get_items(obj):
    while False:  # no items, a scalar object
        yield None

@get_items.register(dict)
def _(obj):
    return obj.items()  # json object. Edit: iteritems() was removed in Python 3

@get_items.register(list)
def _(obj):
    return enumerate(obj)  # json array

def strip_whitespace(json_data):
    for key, value in get_items(json_data):
        if hasattr(value, 'strip'):  # json string
            json_data[key] = value.strip()
        else:
            strip_whitespace(value)  # recursive call
data = json.load(sys.stdin) # read json data from standard input
strip_whitespace(data)
json.dump(data, sys.stdout, indent=2)
```
Note: [`functools.singledispatch()`](http://docs.python.org/3.4/library/functools#functools.singledispatch) function (Python 3.4+) would allow to use `collections`' [`MutableMapping/MutableSequence`](http://docs.python.org/3/library/collections.abc.html#collections-abstract-base-classes) instead of `dict/list` here.
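A sketch of the same helper on Python 3.4+ using `functools.singledispatch` (behaviour is identical; the demo below runs on a small inline document instead of stdin):

```python
import json
from functools import singledispatch

@singledispatch
def get_items(obj):
    return iter(())              # scalar object: nothing to iterate

@get_items.register(dict)
def _(obj):
    return obj.items()           # json object

@get_items.register(list)
def _(obj):
    return enumerate(obj)        # json array

def strip_whitespace(json_data):
    for key, value in get_items(json_data):
        if hasattr(value, 'strip'):        # json string
            json_data[key] = value.strip()
        else:
            strip_whitespace(value)        # recursive call

data = json.loads('{"name": [{"someKey": "\\n\\n some Value "}]}')
strip_whitespace(data)
print(data)  # {'name': [{'someKey': 'some Value'}]}
```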
### [Output](http://ideone.com/Vfc6UW)
```
{
"anotherName": [
{
"anArray": [
{
"anotherKey": "value",
"key": "value"
},
{
"anotherKey": "value",
"key": "value"
}
]
}
],
"name": [
{
"someKey": "some Value"
},
{
"someKey": "another value"
}
]
}
``` | Parse the file using [JSON](http://docs.python.org/2/library/json.html):
```
import json
text = text.replace('\n', '')  # do your cleanup here (assuming `text` holds the file contents)
data = json.loads(text)
```
then walk through the resulting data structure. | How to remove whitespaces and newlines from every value in a JSON file? | [
"",
"python",
"json",
"strip",
""
] |
In NumPy, I can generate a boolean array like this:
```
>>> arr = np.array([1, 2, 1, 2, 3, 6, 9])
>>> arr > 2
array([False, False, False, False, True, True, True], dtype=bool)
```
How can we chain comparisons together? For example:
```
>>> 6 > arr > 2
array([False, False, False, False, True, False, False], dtype=bool)
```
Attempting to do so results in the error message
> ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() | AFAIK the closest you can get is to use `&`, `|`, and `^`:
```
>>> arr = np.array([1, 2, 1, 2, 3, 6, 9])
>>> (2 < arr) & (arr < 6)
array([False, False, False, False, True, False, False], dtype=bool)
>>> (2 < arr) | (arr < 6)
array([ True, True, True, True, True, True, True], dtype=bool)
>>> (2 < arr) ^ (arr < 6)
array([ True, True, True, True, False, True, True], dtype=bool)
```
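If more than two conditions need chaining, `np.logical_and.reduce` folds a list of masks together. A sketch:

```python
import numpy as np

arr = np.array([1, 2, 1, 2, 3, 6, 9])
# Equivalent to (arr > 1) & (arr < 6) & (arr % 2 == 1), any number of masks
mask = np.logical_and.reduce([arr > 1, arr < 6, arr % 2 == 1])
print(mask)       # [False False False False  True False False]
print(arr[mask])  # [3]
```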
I don't think you'll be able to get `a < b < c`-style chaining to work. | You can use the numpy logical operators to do something similar.
```
>>> arr = np.array([1, 2, 1, 2, 3, 6, 9])
>>> arr > 2
array([False, False, False, False, True, True, True], dtype=bool)
>>>np.logical_and(arr>2,arr<6)
Out[5]: array([False, False, False, False, True, False, False], dtype=bool)
``` | NumPy chained comparison with two predicates | [
"",
"python",
"arrays",
"numpy",
"boolean-expression",
""
] |
I'm busy building up a catalog site for a client of mine, and need to tweak the search a bit.
The catalog contains a whole bunch of products. Each product can hold a single itemnumber, multiple itemnumbers, or an interval of itemnumbers. To clarify that a bit I've listed a couple of examples beneath.
**EXAMPLE 1)**
*multiple itemnumbers*
itemnumber = 100, 105, 109, 200
**EXAMPLE 2)**
*an interval of itemnumbers*
itemnumber = 100 - 110
**EXAMPLE 3)**
*A combination*
itemnumber = 100 - 110, 220, 300 - 310, 400, 401
> **My question is therefore:**
>
> is there a syntax that allows me to check intervals between two
> numbers separated with ' - '?
>
> If **yes**, any suggestions on how to build up a query that allows me
> to implement.
>
> If **no**, any directions you would recommend?
---
**Additional info**
The site is built in WordPress - where itemnumber is a custom meta field. At the moment I've hooked into `pre_posts` and added the following - also pasted in pastebin for readability [**pastebin**](http://pastebin.com/95y7EUHg)
`$where .= " OR ID IN ( SELECT post_id FROM {$wpdb->postmeta} WHERE meta_value LIKE '%" . $wp_query->query_vars['s'] . "%' AND ( {$wpdb->posts}.ID=post_id AND {$wpdb->posts}.post_status!='inherit' AND ( {$wpdb->posts}.post_type='produkt' ) ) )";`
The above code simply checks whether the product's meta fields contain the searched word - not specific enough.
--- | It's impossible to use a relational logic on an intentionally denormalized database like evil "Wordpress custom meta field" approach.
So, the best you can do is to perform 2 queries:
* One to get all the numbers
* then expand all intervals in PHP (with array\_fill(), range() or whatever) to create a regular comma-separated list
* then pass the resulting list to the second query via IN()
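The expansion step might look like this (sketched in Python for brevity; the PHP version with `range()`/`array_merge()` is analogous):

```python
def expand_itemnumbers(spec):
    """Expand '100 - 110, 220' style strings into a flat list of ints."""
    numbers = []
    for part in spec.split(','):
        part = part.strip()
        if '-' in part:
            low, high = (int(p) for p in part.split('-'))
            numbers.extend(range(low, high + 1))  # intervals are inclusive
        else:
            numbers.append(int(part))
    return numbers

print(expand_itemnumbers('100 - 102, 220'))  # [100, 101, 102, 220]
```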
As a benefit you will get much faster execution | Replace "-" with "AND" and use BETWEEN keyword to get the records:
**Where Column\_Name Between 100 AND 110** | SQL select if number is between interval | [
"",
"sql",
"wordpress",
"mysqli",
"intervals",
""
] |
I have two tables:
**products**
```
id name
1 Product 1
2 Product 2
3 Product 3
```
**products\_sizes**
```
id size_id product_id
1 1 1
2 2 1
3 1 2
4 3 2
5 3 3
```
So product 1 has two sizes: 1, 2. Product 2 has two sizes: 1, 3. Product 3 has one size: 3.
What I want to do is build a query that pulls back the products that have both size 1 and size 3 (i.e. Product 2). I can easily create a query that pulls back the products that have **either** size 1 or size 3:
```
select `products`.id, `products_sizes`.`size_id`
from `products` inner join `products_sizes` on `products`.`id` = `products_sizes`.`product_id`
where products_sizes.size_id IN (1, 3)
group by products.id
```
When I run this query, I get back Product 1, Product 2, and Product 3.
Just to reiterate, I'd like to only get back Product 2. I've tried using the HAVING clause, messing around with $id IN GROUP\_CONCAT(...) but I haven't been able to get anything to work. Thanks in advance, guys. | Since you want both 1 and 3, you need to `COUNT` the `size_id`s that are `IN (1, 3)` and require the result to be 2:
```
SELECT p.id AS id, p.name AS name
FROM products p, products_sizes s
WHERE p.id = s.product_id AND s.size_id IN (1, 3)
GROUP BY s.product_id
HAVING COUNT(DISTINCT s.size_id) = 2;
```
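The query can also be verified against an in-memory sqlite3 stand-in loaded with the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE products (id INTEGER, name TEXT)")
cur.execute("CREATE TABLE products_sizes (id INTEGER, size_id INTEGER, product_id INTEGER)")
cur.executemany("INSERT INTO products VALUES (?, ?)",
                [(1, "Product 1"), (2, "Product 2"), (3, "Product 3")])
cur.executemany("INSERT INTO products_sizes VALUES (?, ?, ?)",
                [(1, 1, 1), (2, 2, 1), (3, 1, 2), (4, 3, 2), (5, 3, 3)])

rows = cur.execute("""
    SELECT p.id, p.name
    FROM products p, products_sizes s
    WHERE p.id = s.product_id AND s.size_id IN (1, 3)
    GROUP BY s.product_id
    HAVING COUNT(DISTINCT s.size_id) = 2
""").fetchall()
print(rows)  # [(2, 'Product 2')] -- only the product with BOTH sizes
```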
Check out the [**demo here**](http://sqlfiddle.com/#!2/be28a/8). Let me know if it works for you. | This one might work for you in MySQL, if group\_concat works the same way it does in PostgreSQL:
```
select
product_id
from products
left join ( select group_concat(cast(id as char(5)), ',') as agg1 from sz where id in (1, 3) group by size_id ) as qagg1 on 1=1
left join ( select products_sizes.product_id product_id, group_concat(cast(products_sizes.size_id as char(5)), ',') agg2 from sz where products_sizes.size_id IN (1, 3) group by products_sizes.product_id ) as qagg2 on 1=1
where qagg1.agg1 = qagg2.agg2
group by product_id
```
This is the original query, tested in PostgreSQL:
```
select
product_id
from pr
left join ( select string_agg(cast(id as char(5)), ',') as agg1 from sz where id in (1, 3) group by size_id ) as qagg1 on 1=1
left join ( select sz.product_id product_id, string_agg(cast(sz.size_id as char(5)), ',') agg2 from sz where sz.size_id IN (1, 3) group by sz.product_id ) as qagg2 on 1=1
where qagg1.agg1 = qagg2.agg2
group by product_id
``` | Multiple conditions on the joined table when using group by | [
"",
"mysql",
"sql",
"join",
"group-by",
""
] |
Right now I'm importing a fairly large `CSV` as a dataframe every time I run the script. Is there a good solution for keeping that dataframe constantly available in between runs so I don't have to spend all that time waiting for the script to run? | The easiest way is to [pickle](https://docs.python.org/3/library/pickle.html) it using [`to_pickle`](http://pandas.pydata.org/pandas-docs/stable/io.html#pickling):
```
df.to_pickle(file_name) # where to save it, usually as a .pkl
```
Then you can load it back using:
```
df = pd.read_pickle(file_name)
```
*Note: before 0.11.1 `save` and `load` were the only way to do this (they are now deprecated in favor of `to_pickle` and `read_pickle` respectively).*
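A minimal round trip, written to a temporary directory so it cleans up after itself:

```python
import os
import tempfile

import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3], 'b': ['x', 'y', 'z']})

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, 'df.pkl')
    df.to_pickle(path)           # save
    df2 = pd.read_pickle(path)   # load

print(df.equals(df2))  # True
```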
---
Another popular choice is to use [HDF5](http://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables) ([pytables](http://www.pytables.org)) which offers [very fast](https://stackoverflow.com/questions/16628329/hdf5-and-sqlite-concurrency-compression-i-o-performance) access times for large datasets:
```
import pandas as pd
store = pd.HDFStore('store.h5')
store['df'] = df # save it
store['df'] # load it
```
*More advanced strategies are discussed in the [cookbook](http://pandas-docs.github.io/pandas-docs-travis/#pandas-powerful-python-data-analysis-toolkit).*
---
Since 0.13 there's also [msgpack](http://pandas.pydata.org/pandas-docs/stable/io.html#msgpack-experimental) which may be be better for interoperability, as a faster alternative to JSON, or if you have python object/text-heavy data (see [this question](https://stackoverflow.com/q/30651724/1240268)). | Although there are already some answers I found a nice comparison in which they tried several ways to serialize Pandas DataFrames: [Efficiently Store Pandas DataFrames](http://matthewrocklin.com/blog/work/2015/03/16/Fast-Serialization).
They compare:
* pickle: original ASCII data format
* cPickle, a C library
* pickle-p2: uses the newer binary format
* json: standardlib json library
* json-no-index: like json, but without index
* msgpack: binary JSON alternative
* CSV
* hdfstore: HDF5 storage format
In their experiment, they serialize a DataFrame of 1,000,000 rows with the two columns tested separately: one with text data, the other with numbers. Their disclaimer says:
> You should not trust that what follows generalizes to your data. You should look at your own data and run benchmarks yourself
The source code for the test which they refer to is available [online](https://gist.github.com/mrocklin/4f6d06a2ccc03731dd5f). Since this code did not work directly I made some minor changes, which you can get here: [serialize.py](https://gist.github.com/agoldhoorn/ee3bec427dec5bfabb2c)
I got the following results:
[](https://i.stack.imgur.com/T9JEL.png)
They also mention that with the conversion of text data to [categorical](http://pandas.pydata.org/pandas-docs/version/0.15.2/generated/pandas.core.categorical.Categorical.html) data the serialization is much faster. In their test about 10 times as fast (also see the test code).
**Edit**: The higher times for pickle than CSV can be explained by the data format used. By default [`pickle`](https://docs.python.org/2/library/pickle.html#data-stream-format) uses a printable ASCII representation, which generates larger data sets. As can be seen from the graph however, pickle using the newer binary data format (version 2, `pickle-p2`) has much lower load times.
Some other references:
* In the question [Fastest Python library to read a CSV file](https://softwarerecs.stackexchange.com/questions/7463/fastest-python-library-to-read-a-csv-file) there is a very detailed [answer](https://softwarerecs.stackexchange.com/a/7510/18147) which compares different libraries to read csv files with a benchmark. The result is that for reading csv files [`numpy.fromfile`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.fromfile.html) is the fastest.
* Another [serialization test](https://gist.github.com/justinfx/3174062)
shows [msgpack](https://pypi.org/project/msgpack), [ujson](https://pypi.python.org/pypi/ujson), and cPickle to be the quickest in serializing. | How to reversibly store and load a Pandas dataframe to/from disk | [
"",
"python",
"pandas",
"dataframe",
""
] |
I was looking at the option of embedding Python into Fortran to add Python functionality to my existing Fortran 90 code. I know that it can be done the other way around by extending Python with Fortran using the f2py from NumPy. But, I want to keep my super optimized main loop in Fortran and add python to do some additional tasks / evaluate further developments before I can do it in Fortran, and also to ease up code maintenance. I am looking for answers for the following questions:
1. Is there a library that already exists from which I can embed Python into Fortran? (I am aware of f2py and it does it the other way around)
2. How do we take care of data transfer from Fortran to Python and back?
3. How can we have callback functionality implemented? (Let me describe the scenario a bit....I have my main\_fortran program in Fortran, that calls the Func1\_Python module in Python. Now, from this Func1\_Python, I want to call another function...say Func2\_Fortran in Fortran)
4. What would be the impact of embedding the interpreter of Python inside Fortran in terms of performance....like loading time, running time, sending data (a large array in double precision) across etc.
Thanks a lot in advance for your help!!
Edit1: I want to set the direction of the discussion right by adding some more information about the work I am doing. I am into scientific computing stuff. So, I would be working a lot on huge arrays / matrices in double precision and doing floating point operations. So, there are very few options other than Fortran really to do the work for me. The reason I want to include Python in my code is that I can use NumPy for doing some basic computations if necessary and extend the capabilities of the code with minimal effort. For example, I can use several libraries available to link between Python and some other package (say OpenFoam using the PyFoam library). | ## 1. Don't do it
I know that you're wanting to add Python code inside a Fortan program, instead of having a Python program with Fortran extensions. My first piece of advice is to not do this. Fortran is faster than Python at array arithmetic, but Python is easier to write than Fortran, it's easier to extend Python code with OOP techniques, and Python may have access to libraries that are important to you. You mention having a super-optimized main loop in Fortran; Fortran is great for super-optimized *inner* loops. The logic for passing a Fortran array around in a Python program with Numpy is much more straightforward than what you would have to do to correctly handle a Python object in Fortran.
When I start a scientific computing project from scratch, I always write first in Python, identify performance bottlenecks, and translate those into Fortran. Being able to test faster Fortran code against validated Python code makes it easier to show that the code is working correctly.
Since you have existing code, extending the Python code with a module made in Fortran will require refactoring, but this process should be straightforward. Separate the initialization code from the main loop, break the loop into logical pieces, wrap each of these routines in a Python function, and then your main Python code can call the Fortran subroutines and interleave these with Python functions as appropriate. In this process, you may be able to preserve a lot of the optimizations you have in your main loop in Fortran. F2PY is a reasonably standard tool for this, so it won't be tough to find people who can help you with whatever problems will arise.
## 2. System calls
If you absolutely must have Fortran code calling Python code, instead of the other way around, the simplest way to do this is to just have the Fortran code write some data to disk, and run the Python code with a `SYSTEM` or `EXECUTE_COMMAND_LINE`. If you use `EXECUTE_COMMAND_LINE`, you can have the Python code output its result to stdout, and the Fortran code can read it as character data; if you have a lot of output (e.g., a big matrix), it would make more sense for the Python code to output a file that the Fortran code then reads. Disk read/write overhead could wind up being prohibitively significant for this. Also, you would have to write Fortran code to output your data, Python code to read it, Python code to output it again, and Fortran code to re-input the data. This code should be straightforward to write and test, but keeping these four parts in sync as you edit the code may turn into a headache.
(This approach is tried in [this Stack Overflow question](https://stackoverflow.com/questions/2805244/how-to-compile-python-scripts-for-use-in-fortran?rq=1))
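As an illustration of the Python side of such a file-based exchange (the file names and the doubling operation are made up; the Fortran program would write the input file and then invoke the script via `EXECUTE_COMMAND_LINE`):

```python
import os
import tempfile

def process(in_path, out_path):
    # Read a whitespace-separated matrix written by the Fortran side ...
    with open(in_path) as f:
        rows = [[float(v) for v in line.split()] for line in f if line.strip()]
    # ... do some work on it (here: just double every entry) ...
    result = [[2.0 * v for v in row] for row in rows]
    # ... and write it back out for Fortran to read.
    with open(out_path, "w") as f:
        for row in result:
            f.write(" ".join(repr(v) for v in row) + "\n")

# Demonstrate a round trip through the disk.
tmp = tempfile.mkdtemp()
in_path, out_path = os.path.join(tmp, "in.txt"), os.path.join(tmp, "out.txt")
with open(in_path, "w") as f:
    f.write("1 2\n3 4\n")
process(in_path, out_path)
with open(out_path) as f:
    roundtrip = [[float(v) for v in line.split()] for line in f]
```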
## 3. Embedding Python in C in Fortran
There is no way that I know of to directly pass a Python object in memory to Fortran. However, Fortran code can call C code, and C code can have Python embedded in it. (See the [Python tutorial on extending and embedding](http://docs.python.org/2/extending/).) In general, extending Python (like I recommend in point 1) is preferable to embedding it in C/C++. (See [Extending Vs. Embedding: There is Only One Correct Decision](http://twistedmatrix.com/users/glyph/rant/extendit.html).) Getting this to work will be a nightmare, because any communication problems between Python and Fortran could happen between Python and C, or between C and Fortran. I don't know if anyone is actually embedding Python in C in Fortran, and so getting help will be difficult. | I have developed the library [`Forpy`](https://github.com/ylikx/forpy) that allows you to use Python in Fortran (embedding).
It uses Fortran C interoperability to call Python C API functions.
While I agree that extending (using Fortran in Python) is often preferable, embedding has its uses:
* Large, existing Fortran codes might need a substantial amount of refactoring before
they can be used from Python - here embedding can save development time
* Replacing a part of an existing code with a Python implementation
* Temporarily embedding Python to experiment with a given Fortran code:
for example to test alternative algorithms or to extract intermediary results
Besides embedding, `Forpy` also supports extending Python.
With `Forpy` you can write a Python extension module entirely in Fortran.
An advantage to existing tools such as `f2py` is that you can use Python datatypes
(e. g. to write a function that takes a Python list as argument or a function that returns a Python dict).
Working with existing, possibly legacy, Fortran codes is often very challenging and I
think that developers should have tools at their disposal both for embedding and extending Python. | Embed Python into Fortran | [
"",
"python",
"fortran",
"embed",
""
] |
Please what's wrong with my code:
```
import datetime
d = "2013-W26"
r = datetime.datetime.strptime(d, "%Y-W%W")
print(r)
```
It displays "2013-01-01 00:00:00". Thanks. | A week number is not enough to generate a date; you need a day of the week as well. Add a default:
```
import datetime
d = "2013-W26"
r = datetime.datetime.strptime(d + '-1', "%Y-W%W-%w")
print(r)
```
The `-1` and `-%w` pattern tells the parser to pick the Monday in that week. This outputs:
```
2013-07-01 00:00:00
```
`%W` uses Monday as the first day of the week. While you can pick your own weekday, you may get unexpected results if you deviate from that.
See the [`strftime()` and `strptime()` behaviour](http://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior) section in the documentation, footnote 4:
> When used with the `strptime()` method, `%U` and `%W` are only used in calculations when the day of the week and the year are specified.
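For ISO 8601 week numbers specifically, a quick sketch of the `%G-W%V-%u` variant (note how the Monday of ISO week 26 differs from the `%W`-based result above):

```python
from datetime import datetime

# %G = ISO year, %V = ISO week, %u = ISO weekday (Monday=1); Python 3.6+
iso = datetime.strptime("2013-W26-1", "%G-W%V-%u")
print(iso)  # 2013-06-24 00:00:00
```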
Note, if your week number is a [ISO week date](https://en.wikipedia.org/wiki/ISO_week_date), you'll want to use `%G-W%V-%u` instead! Those directives require Python 3.6 or newer. | In Python 3.8 there is the handy `datetime.date.fromisocalendar`:
```
>>> from datetime import date
>>> date.fromisocalendar(2020, 1, 1) # (year, week, day of week)
datetime.date(2019, 12, 30)
```
In older Python versions (3.7 and below) the calculation can use the information from `datetime.date.isocalendar` to figure out ISO 8601-compliant weeks:
```
from datetime import date, timedelta
def monday_of_calenderweek(year, week):
first = date(year, 1, 1)
base = 1 if first.isocalendar()[1] == 1 else 8
return first + timedelta(days=base - first.isocalendar()[2] + 7 * (week - 1))
```
Both also work with `datetime.datetime`. | Get date from week number | [
"",
"python",
"datetime",
"strptime",
""
] |
I am trying to write a SQL query with GROUP BY so that I can get columns from the same row based on a condition on one column; I cannot use an aggregate function on those columns.
e.g Employee table
```
EmpId data1 data2 data3 reg_date
--------------------------------------
1 1 2 2 2013/06/12
1 5 6 7 2013/06/13
```
I want group by EmpId And want All other data where reg\_date is maximum.
```
SELECT EmpId,data1,data2,data3,reg_date FROM Employee
GROUP BY EmpId
```
Obviously this will give error because it needs aggregate function for data1,data2,data3 and reg\_date to decide which value out of two to select.
But can I use the MAX function for reg\_date so that all the data fields are selected for that max date? | Try using `Row_Number` with `Partition By`; it tends to be the best approach in cases like this:
```
Select EmpId,data1,data2,data3,reg_date from
(
SELECT Row_Number() Over(Partition By EmpId Order by reg_date desc) as Row, EmpId,data1,data2,data3,reg_date FROM Employee
) t where t.Row=1;
``` | The query below should roughly do what you need. The first select gets the empID in combination with the latest reg\_date. The second select gets all the data in the table, and in the final select we join the 2 datasets on the criteria listed, which should get you the empID, the max reg\_date and the values for data1, data2, data3 that correspond.
```
;with
GetDataSet1 as
(
SELECT EmpId,MAX(reg_date) "MaxRegDate"
FROM Employee
GROUP BY EmpId
)
,GetDataSet2 as
(
SELECT EmpId,data1,data2,data3,reg_date
FROM Employee
)
select *
FROM GetDataSet1 a
JOIN GetDataSet2 b on a.EmpID = b.EmpID AND a.MaxRegDate = b.reg_date
``` | sql query to select columns by condition on specific column(with out aggregate function) with group by | [
"",
"sql",
""
] |
I am currently trying to create a basic Gmail client, and I am trying to share my login details between two classes, `Authen` and `Application`. I need to access the credits entered in the `Authen` class, but I can't quite figure out how to do so. Here is my current code, in the class `Application`:
```
Username_mine = passcheck.Create_Widgets.Username.get()
Password_mine = passcheck.Create_Widgets.Password.get()
contents = self.Body.get("0.0", END)
FROM = "Unknown"
subject = self.Subject_Entry.get()
recipients = self.To_Entry.get()
server = smtplib.SMTP('smtp.gmail.com:587')
server.ehlo()
server.starttls()
server.login(Username_mine, Password_mine)
server.sendmail(FROM, recipients, contents)
```
When I run the code, I get the error:
```
Exception in Tkinter callback
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk/Tkinter.py", line 1410, in __call__
return self.func(*args)
File "/Users/BigKids/Desktop/Coding/Python 2/Email/Email Send GUI V1.py", line 112, in send_cmd
Username_mine = passcheck.Create_Widgets.Username.get()
AttributeError: 'function' object has no attribute 'Username'
```
Thank you in advance for the help.
**Edit** Here's my code:
```
#Email Send v3 Program. It has a basic GUI to send emails.
from Tkinter import *
import smtplib
import string
import random
def random_char():
char_select = random.randrange(52)
char_choice = string.ascii_letters[char_select]
return char_choice
class Authen(Frame):
"""Holds authentication code and basic GUI stuff"""
def __init__(self, master):
"""Start Authen GUI"""
Frame.__init__(self, master)
self.grid()
self.Create_Widgets()
def Create_Widgets(self):
"""Spawns the widgets"""
self.Usertext = Label(self, text = "Username: ")
self.Usertext.grid(row = 0, column = 0)
self.Username = Entry(self)
self.Username.grid(row = 0, column = 1)
self.Passtext = Label (self, text = "Password: ")
self.Passtext.grid(row = 1, column = 0)
self.Password = Entry (self, show = "*")
self.Password.grid(row = 1, column = 1)
self.Submit = Button(self, text = "Submit Credits",
command = self.authen_credits)
self.Submit.grid()
def authen_credits(self):
"""Backbone of authen process"""
Username_mine = self.Username.get()
Password_mine = self.Password.get()
server = smtplib.SMTP('smtp.gmail.com:587')
server.ehlo()
server.starttls()
while True:
try:
server.login(Username_mine,Password_mine)
#make my screen dimensions work
w = 500
h = 1000
app = Application()
app.title("SMTP Mail Client")
app.geometry("%dx%d" % (w, h))
break
except smtplib.SMTPAuthenticationError:
print "Login Failed"
class Application(Toplevel):
"""Toplevel Frame, holds sendmail stuff"""
def __init__(self):
"""Start the toplevel Frame"""
Toplevel.__init__(self)
self.grid()
self.entryFun()
self.mainFun()
def entryFun(self):
"""Holds entry Frame"""
self.entryFrame = Frame(self)
self.entryFrame.grid(row = 0, column = 0, sticky = W)
self.To_Lbl = Label(self.entryFrame, text = "TO: ")
self.To_Lbl.grid(row = 0, column = 0, sticky = W)
self.To_Entry = Entry(self.entryFrame)
self.To_Entry.grid(row = 0, column = 1, sticky = W)
self.Subject_Lbl = Label(self.entryFrame, text = "Subject: ")
self.Subject_Lbl.grid(row = 1, column = 0, sticky = W)
self.Subject_Entry = Entry(self.entryFrame)
self.Subject_Entry.grid(row = 1, column = 1, sticky = W)
def mainFun(self):
"""Holds body of thing"""
self.entryFrame = Frame(self)
self.entryFrame.grid(row = 1, column = 0, sticky = W)
self.Body = Text(self.entryFrame, width = 75, height = 50, relief = GROOVE, bd = 5)
self.Body.grid()
self.send_mail = Button(self, text = "Send Mail",
command = self.send_cmd)
self.send_mail.grid()
def send_cmd(self):
Username_mine = passcheck.Create_Widgets.Username.get()
Password_mine = passcheck.Create_Widgets.Password.get()
contents = self.Body.get("0.0", END)
FROM = "Unknown"
subject = self.Subject_Entry.get()
recipients = self.To_Entry.get()
server = smtplib.SMTP('smtp.gmail.com:587')
server.ehlo()
server.starttls()
server.login()
server.sendmail(FROM, recipients, contents)
root = Tk()
root.title("Login")
root.geometry("500x500")
passcheck = Authen(root)
root.mainloop()
``` | You're getting the error because `Create_Widgets()` is a method of the `Authen` class, so the `self.Username = Entry(self)` inside it makes `Username` an attribute of the instance, not of the method. Given that, and assuming `passcheck` is such an instance, I think you need to write:
```
Username_mine = passcheck.Username.get()
``` | Are you sure that Username is an actual member of the **function** object?
(make sure that you are using **self.Username** instead of **Username** in the class's methods) | How to share Variables between Classes in Python | [
"",
"python",
"python-2.7",
"smtp",
""
] |
If I have a dictionary such as the one below, what is the best way to
1. Pick all those items which have a partial key `1d`
2. Then delete those items based on that key
`c = {('1d','f1'):1.5,('1w','f1'):1.2,('1d','f2'):1.4}`
Thanks | ```
>>> c = {('1d','f1'):1.5,('1w','f1'):1.2,('1d','f2'):1.4}
>>> {k: v for k, v in c.iteritems() if '1d' not in k}
{('1w', 'f1'): 1.2}
```
In py2.x use `c.iteritems()` as it returns an iterator, for py3.x you can use `c.items()`.
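If you would rather delete the matching items from the original dict in place than build a new one, materialize the matching keys first (mutating a dict while iterating over it raises a `RuntimeError` in Python 3):

```python
c = {('1d', 'f1'): 1.5, ('1w', 'f1'): 1.2, ('1d', 'f2'): 1.4}
for k in [k for k in c if '1d' in k]:  # list the keys before mutating
    del c[k]
print(c)  # {('1w', 'f1'): 1.2}
```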
Note that `c.items()` will work in both versions. | ```
>>> c = {('1d','f1'):1.5,('1w','f1'):1.2,('1d','f2'):1.4}
>>> {k: v for k, v in c.items() if k[0] != '1d'}
{('1w', 'f1'): 1.2}
```
For the general case, use `if '1d' not in k`. In Python 2.x, use `dict.iteritems` instead (it returns an iterator rather than a list). | python pick and delete by item using dict comprehension | [
"",
"python",
"dictionary",
""
] |
As a warning, I'm still a bit inexperienced in python
I'm trying to perform the transitive reduction of directed graph using the networkx library. I've figured out an algorithm but I'm having trouble implementing it. After a quick search, I found algorithms similar to mine in other stack exchange questions but no demonstrations of how to actually code the algorithm.
Here's my algorithm:
```
For X in Nodes
For Y in Nodes
For z in Nodes
if (x,y) != (y,z) and (x,y) != (x,z)
if edges xy and yz are in Graph
delete xz
```
Here's my attempt at expressing this in Python:
```
G = graph
N = G.Nodes()
for x in N:
for y in N:
for z in N:
if (x,y) != (y,z) and (x,y) != (x,z):
if (x,y) and (y,z) in G:
G.remove_edge(x,z)
```
I don't think I'm properly calling every permutation of edges in the network and was thinking of trying to use itertools. Even if I had every possible permutation, I don't know how to implement the algorithm with that information.
Any help would be wonderful. Thanks! | The following seems to work, at least for the sample data I provided. If you have a specific case that doesn't it'd be helpful to see it.
```
import random
import pprint
class Graph:
nodes = []
edges = []
removed_edges = []
def remove_edge(self,x,y):
e = (x,y)
try:
self.edges.remove(e)
print("Removed edge %s" % str(e))
self.removed_edges.append(e)
except:
print("Attempted to remove edge %s, but it wasn't there" % str(e))
def Nodes(self):
return self.nodes
# Sample data
def __init__(self):
self.nodes = [1,2,3,4,5]
self.edges = [
(1,2),
(1,3),
(1,4),
(1,5),
(2,4),
(3,4),
(3,5),
(4,5),
]
G = Graph()
N = G.Nodes()
for x in N:
for y in N:
for z in N:
#print("(%d,%d,%d)" % (x,y,z))
if (x,y) != (y,z) and (x,y) != (x,z):
if (x,y) in G.edges and (y,z) in G.edges:
G.remove_edge(x,z)
print("Removed edges:")
pprint.pprint(G.removed_edges)
print("Remaining edges:")
pprint.pprint(G.edges)
```
Output:
```
Removed edge (1, 4)
Attempted to remove edge (1, 4), but it wasn't there
Removed edge (1, 5)
Attempted to remove edge (2, 5), but it wasn't there
Removed edge (3, 5)
Removed edges:
[(1, 4), (1, 5), (3, 5)]
Remaining edges:
[(1, 2), (1, 3), (2, 4), (3, 4), (4, 5)]
``` | If I read the source of the `tred` utility (which calculates the transitive reduction of a graph in dot format as part of the graphviz utilities) correctly, then the algorithm it uses is the following: Go through all vertices of the graph and for each vertex, do a DFS on each of its children. For each vertex traversed that way remove any edge from the original parent vertex to that vertex.
I implemented that algorithm using [networkx](http://networkx.github.io/) like this:
```
g = nx.read_graphml("input.dot")
for n1 in g.nodes_iter():
if g.has_edge(n1, n1):
g.remove_edge(n1, n1)
for n2 in g.successors(n1):
for n3 in g.successors(n2):
for n4 in nx.dfs_preorder_nodes(g, n3):
if g.has_edge(n1, n4):
g.remove_edge(n1, n4)
nx.write_graphml(g, "output.dot")
```
The nesting level suggests a horrible complexity but what it actually does is to already perform the first two steps of the DFS which will be carried out later. Doing it that way avoids checking whether the current node is the direct successor of the parent or not in the DFS part (because the DFS result of networkx includes the vertex the traversal was started from). So an alternative formulation would be the following, which better shows the complexity of this algorithm but is also a tiny bit slower because of the additional check in the innermost loop:
```
g = nx.read_graphml("input.dot")
for n1 in g.nodes_iter():
c = set(g.successors(n1))
for n2 in nx.dfs_preorder_nodes(g, n1):
if n2 in c:
continue
if g.has_edge(n1, n2):
g.remove_edge(n1, n2)
nx.write_graphml(g, "output.dot")
```
The operations `has_edge` and `remove_edge` use Python dictionaries and thus are `O(1)` on average. The worst case time complexity of a DFS is `O(E)` with `E` being the number of edges in the graph. Since a DFS is carried out `N` times (with `N` being the number of vertices) the time complexity of above algorithm is `O(NE)`.
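The same DFS-based pruning can be sketched without networkx, using a plain adjacency dict (a toy illustration under the assumption that the graph is a DAG, not the code above):

```python
def transitive_reduction(succ):
    # succ maps each node to the set of its direct successors;
    # the dict is reduced in place.
    for n1, direct in succ.items():
        reachable = set()
        stack = list(direct)
        while stack:
            node = stack.pop()
            for child in succ.get(node, ()):
                if child not in reachable:  # reachable via >= 2 hops
                    reachable.add(child)
                    stack.append(child)
        direct -= reachable  # drop edges implied by a longer path
    return succ

g = {1: {2, 3, 4}, 2: {4}, 3: {4}, 4: set()}
transitive_reduction(g)
print(g)  # edge 1->4 is gone: it is implied by 1->2->4
```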
Another funny observation is how the above Python code seems to be orders of magnitude faster than the `tred` utility from the graphviz suite. I have no idea why this is. Here is a comparison of run times on an acyclic graph with 482 vertices and 9656 edges:
* tred: 58.218 seconds
* python: 4.720 seconds
I also tried another benchmark on a graph with 14956 vertices and 190993 edges but while the Python code from above finished in 11 minutes, `tred` is still running after 2 hours of run time at the time of this writing.
EDIT: 7 days and 3 hours later, `tred` is still churning along on that graph. I'm giving up now. | I'm trying to perform the transitive reduction of directed graph in Python | [
"",
"python",
"graph-theory",
"networkx",
""
] |
Very simple question, but it is a curiosity for me...
Say we have a list of items which are strings. If we call the built-in function `str` on each element in the list, that would seem to be redundant since the items are already strings. What would happen under the hood, specifically for Python but interested in other languages as well. Would the interpreter already see that the item is a string and not call the `str` function? Or would it do it anyway and return a string, and what would a string of a string mean? | When invoking `str` on a string object, the underlying function `__str__(self)` will be executed. Whether to return the original object (the python string case) or a copy depends on the implementation of the function.
Generally speaking, the language itself does little or nothing to special-case redundant calls; the program itself decides the behavior (i.e., whether the named function is defined on the object; if not, an error is raised).
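Concretely, in CPython calling `str` on an exact `str` hands back the very same object, while a user-defined `__str__` decides what gets returned (the `Wrapper` class here is a made-up illustration):

```python
class Wrapper:
    def __init__(self, value):
        self.value = value
    def __str__(self):
        # builds a brand-new string on every call (the "copy" case)
        return "Wrapper(%s)" % self.value

s = "already a string"
print(str(s) is s)        # True: CPython returns the object itself
print(str(Wrapper("x")))  # Wrapper(x)
```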
Let's think about it another way. If you have some knowledge of C++: C++ has copy constructors, and the situation is similar there, in that whether you get a deep copy or a shallow copy depends on your implementation. | Testing is simple:
```
>>> a = 'a'
>>> id(a)
4298835424
>>> id(str(a))
4298835424
>>> a is str(a)
True
```
It is the identical object.
Tested and works with CPython 2.7, 3.3, PyPy 1.9, IronPython 2.0
Does **NOT** work with Jython 2.5:
```
Jython 2.5.2 (Release_2_5_2:7206, Mar 2 2011, 23:12:06)
[Java HotSpot(TM) 64-Bit Server VM (Apple Inc.)] on java1.6.0_45
Type "help", "copyright", "credits" or "license" for more information.
>>> a = 'a'
>>> id(a)
2
>>> id(str(a))
3
>>> id(str(a))
4
>>> a is str(a)
False
``` | What happens in redundant function calls? | [
"",
"python",
""
] |
Please help me to understand BBS algorithm. I did this implementation:
```
class EmptySequenseError(Exception):
pass
class BlumBlumShub(object):
def __init__(self, length):
self.length = length
self.primes = e(1000) # Primes obtained by my own Sieve of Eratosthenes implementation.
def get_primes(self):
out_primes = []
while len(out_primes) < 2:
curr_prime = self.primes.pop()
if curr_prime % 4 == 3:
out_primes.append(curr_prime)
return out_primes
def set_random_sequence(self):
p, q = self.get_primes()
m = p * q
self.random_sequence = [((x+1)**2)%m for x in range(self.length)]
def get_random_sequence(self):
if self.random_sequence:
return self.random_sequence
raise EmptySequenseError("Set random sequence before get it!")
```
And I have several questions. First, I do not want to use the `random` library; it is too naive. My sequence is increasing, so it is not really random. **How do I prevent the returned sequence from increasing?** And I do not understand this part of the algorithm description:
> At each step of the algorithm, some output is derived from *xn+1*; the output is commonly either the bit parity of *xn+1* or one or more of the least significant bits of *xn+1*.
**Please explain to me what does it mean?**
### Edit summary:
* The algorithm is corrected.
* Quote substituted to en.wikipedia quote. | ```
for x in range(self.length):
self.random_sequence.append((x ** 2) % m)
```
Just generates `[(x ** 2) % m for x in range(self.length)]`, which is roughly *xn+1* = *n*^2 mod *M*.
The algorithm is *supposed* to be: *xn+1* = *xn*^2 mod *M*
Do you see where your version is different?
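To make the fix concrete, here is a hedged sketch of the corrected generator (p = 11 and q = 23 are tiny illustrative primes, both congruent to 3 mod 4; real use needs large primes and a seed coprime to M), emitting the least significant bit of each *xn+1*:

```python
def bbs_bits(seed, p, q, n):
    # p and q must both be primes congruent to 3 modulo 4
    m = p * q
    x = seed
    bits = []
    for _ in range(n):
        x = (x * x) % m     # x_{n+1} = x_n ** 2 mod M
        bits.append(x & 1)  # output only the least significant bit
    return bits

print(bbs_bits(3, 11, 23, 3))  # [1, 1, 0]
```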
---
As for the quote - you don't say where it's from, but [Wikipedia](http://en.wikipedia.org/wiki/Blum_Blum_Shub) has:
> At each step of the algorithm, some output is derived from *xn+1*; the output is commonly either the bit parity of *xn+1* or one or more of the least significant bits of *xn+1*.
It means that *xn+1* is the *seed* for the next iteration, but *not* the pseudo-random number returned. Instead, the return value is derived from *xn+1* by counting its bit parity (this yields either 0 or 1 each iteration), or by taking only some number of top bits. | Blum Blum Shub is described in [Chapter Five](http://cacr.uwaterloo.ca/hac/about/chap5.pdf) of the Handbook of Applied Cryptography, Section 5.5.2. There is a lot of helpful stuff about random number generation in that chapter. | Understanding Blum Blum Shub algorithm. (Python implementation) | [
"",
"python",
"algorithm",
"cryptography",
""
] |
I have a form and want that the *user* field should not be displayed.
models.py
```
class Questions(models.Model):
user=models.ForeignKey(User)
category=models.ForeignKey(Categories)
question=models.TextField(max_length=500)
created_date=models.DateTimeField('date created')
def __unicode__(self):
return "%s%s" %(self.question,self.created_date)
```
form.py
```
class QuestionsForm(ModelForm):
class Meta:
model = Questions
widgets = {
'question': TextInput(attrs={'placeholder': 'Add question here'})
}
exclude=('user',)
```
I tried *exclude* but the field still appears.
How do I do it?
Thanks in advance. | Not sure why it isn't working, actually:
Just run through a few minor checks:
1. `exclude` should be part of `class Meta`
2. Try changing `exclude` from a tuple to a list (not sure if it helps)
3. Or, instead of `exclude`, try giving `fields = (#some fields names, )`
Hope this works. | Use this in views:
```
form.fields['name_of_field'].widget = forms.HiddenInput()
```
or you can handle it in HTML with a hidden input tag:
```
<input type='hidden'>
``` | Prevent a form field from displaying | [
"",
"python",
"html",
"django",
"aptana",
""
] |
Just when I thought I had understood how Python lists work...
```
>>> a = [1,2,3]
>>> b = a[:]
>>> b
[1,2,3]
>>> b[1]=100
>>> b
[1,100,3]
>>> a
[1,2,3]
```
So far, so good. I am initializing b with the contents of a, so that b points to a different object. As a consequence, changes in b don't affect a.
Now take a look at this other example:
```
>>> a = [[1,2,3],[4,5,6],[7,8,9]]
>>> b = a[:][:]
>>> b
[[1,2,3],[4,5,6],[7,8,9]]
>>> b[1][1] = 100
>>> b
[[1,2,3],[4,100,6],[7,8,9]]
>>> a
[[1,2,3],[4,100,6],[7,8,9]]
```
Why has the change in b affected a this time? What is different from the previous example? | The slicing operation `x[:]` makes a *shallow copy*. That means the outer list is different, but it contains the exact same elements. Assume `a = [[1]]`:
```
b = a[:] # is the same as:
b = [x for x in a]
>>> a[0] is b[0]
True
```
The double slicing (`[:][:]`) is doing nothing more than that - again:
```
b = a[:][:] # is the same as:
b = [y for y in [x for x in a]]
>>> a[0] is b[0]
True
```
A shallow copy of a shallow copy is, well, a shallow copy.
So `b` is still a shallow copy of `a` - these lists are different objects, but they contain identical elements. Then you mutate the inner list in `b`, but it is the same list in `a`.
2-level shallow copy can be done with `b=[x[:] for x in a]`. It extends the same way for n-level copy.
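The 2-level copy in action:

```python
a = [[1, 2], [3, 4]]
b = [x[:] for x in a]    # copies the outer list and each inner list
b[0][0] = 99
print(a[0][0], b[0][0])  # 1 99: the original is no longer affected
```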
By the way, the term "shallow copy" has the same meaning for any class or container.
If you want a real deep copy, you should consider using [deep copy](http://docs.python.org/2/library/copy.html). | Slicing with `[:]` makes a shallow copy; the inner objects are not copied, so if they are mutable and you modify them, the modification will be visible in both lists.
Use a list comprehension:
```
b = [i[:] for i in a]
```
or `copy.deepcopy()`:
```
import copy
b = copy.deepcopy(a)
``` | Copying python lists | [
"",
"python",
"list",
"deep-copy",
""
] |
I am looking to design a database that keeps track of every set of changes so that I can refer back to them in the future. So for example:
```
Database A
+==========+========+==========+
| ID | Name | Property |
1 Kyle 30
```
If I change the row's 'property' field to 50, it should update the row to:
```
1 Kyle 50
```
But should save the fact that the row's property was 30 at some point in time. Then if the row is again updated to be 70:
```
1 Kyle 70
```
Both facts that the row's property was 50 and 70 should be preserved, such that with some query I could retrieve:
```
1 Kyle 30
1 Kyle 50
```
It should recognize that these were the "same entries" just at different points in time.
Edit: This history will need to be presented to the user at some point in time so ideally, there should be an understanding of which rows belong to the same "revision cluster"
What is the best way to approach the design of this database? | One way is to have a `MyTableNameHistory` for every table in your database, and make its schema identical to the schema of table `MyTableName`, except that the Primary Key of the History table has one additional column named `effectiveUtc` as a DateTime. For example, if you have a table named `Employee`,
```
Create Table Employee
{
employeeId integer Primary Key Not Null,
firstName varChar(20) null,
lastName varChar(30) Not null,
HireDate smallDateTime null,
DepartmentId integer null
}
```
Then the History table would be
```
Create Table EmployeeHistory
{
employeeId integer Not Null,
effectiveUtc DateTime Not Null,
firstName varChar(20) null,
lastName varChar(30) Not null,
HireDate smallDateTime null,
DepartmentId integer null,
Primary Key (employeeId , effectiveUtc)
}
```
Then, you can put a trigger on Employee table, so that every time you insert, update, or delete anything in the Employee table, a new record is inserted into the EmployeeHistory table with the exact same values for all the regular fields, and current UTC datetime in the effectiveUtc column.
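A minimal sketch of such a trigger, using SQLite via Python's `sqlite3` (SQLite syntax, only the UPDATE case, and a trimmed column list; SQL Server/Oracle trigger syntax will differ):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Employee (
    employeeId INTEGER PRIMARY KEY,
    lastName   TEXT NOT NULL
);
CREATE TABLE EmployeeHistory (
    employeeId   INTEGER NOT NULL,
    effectiveUtc TEXT NOT NULL,
    lastName     TEXT NOT NULL,
    PRIMARY KEY (employeeId, effectiveUtc)
);
CREATE TRIGGER Employee_History
AFTER UPDATE ON Employee
BEGIN
    INSERT INTO EmployeeHistory (employeeId, effectiveUtc, lastName)
    VALUES (NEW.employeeId, strftime('%Y-%m-%d %H:%M:%f', 'now'), NEW.lastName);
END;
""")
conn.execute("INSERT INTO Employee VALUES (1, 'Smith')")
conn.execute("UPDATE Employee SET lastName = 'Jones' WHERE employeeId = 1")
history = conn.execute("SELECT employeeId, lastName FROM EmployeeHistory").fetchall()
print(history)  # [(1, 'Jones')]
```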
Then, to find the values at any point in the past, you just select the record from the history table whose effectiveUtc value is the latest one prior to the as-of datetime you are interested in.
```
Select * from EmployeeHistory h
Where EmployeeId = @EmployeeId
And effectiveUtc =
(Select Max(effectiveUtc)
From EmployeeHistory
Where EmployeeId = h.EmployeeId
And effectiveUtc < @AsOfUtcDate)
``` | To add onto [Charles' answer](https://stackoverflow.com/a/17075789/2479481), I would use an [Entity-Attribute-Value model](https://en.wikipedia.org/wiki/Entity%E2%80%93attribute%E2%80%93value_model) instead of creating a different history table for every other table in your database.
Basically, you would create ***one*** `History` table like so:
```
Create Table History
{
tableId varChar(64) Not Null,
recordId varChar(64) Not Null,
changedAttribute varChar(64) Not Null,
newValue varChar(64) Not Null,
effectiveUtc DateTime Not Null,
Primary Key (tableId , recordId , changedAttribute, effectiveUtc)
}
```
Then you would create a `History` record any time you *create* or *modify* data in one of your tables.
To follow your example, when you add 'Kyle' to your `Employee` table, you would create two records (one for each non-id attribute), and then you would create a new record every time a property changes:
```
History
+==========+==========+==================+==========+==============+
| tableId | recordId | changedAttribute | newValue | effectiveUtc |
| Employee | 1 | Name | Kyle | N |
| Employee | 1 | Property | 30 | N |
| Employee | 1 | Property | 50 | N+1 |
| Employee | 1 | Property | 70 | N+2 |
```
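A small runnable sketch of how such rows replay into an "as of" state (the `as_of` helper is illustrative, not part of the answer):

```python
# Hypothetical replay helper: reconstruct a record's state "as of" a time
# from EAV-style History rows (values taken from the table above).
history = [
    ("Employee", 1, "Name",     "Kyle", 1),  # (tableId, recordId, attr, newValue, effectiveUtc)
    ("Employee", 1, "Property", "30",   1),
    ("Employee", 1, "Property", "50",   2),
    ("Employee", 1, "Property", "70",   3),
]

def as_of(table, record, t):
    state = {}
    for table_id, record_id, attr, value, eff in sorted(history, key=lambda r: r[4]):
        if table_id == table and record_id == record and eff <= t:
            state[attr] = value  # later writes overwrite earlier ones
    return state

print(as_of("Employee", 1, 2))  # {'Name': 'Kyle', 'Property': '50'}
```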
Alternatively, as [a\_horse\_with\_no\_name](https://stackoverflow.com/users/330315/a-horse-with-no-name) suggested in [this comment](https://stackoverflow.com/questions/17075577/database-design-with-change-history/42589834#comment72312103_42589834), if you don't want to store a new `History` record for every field change, you can store grouped changes (such as changing `Name` to 'Kyle' and `Property` to 30 in the same update) as a single record. In this case, you would need to express the collection of changes in JSON or some other blob format. This would merge the `changedAttribute` and `newValue` fields into one (`changedValues`). For example:
```
History
+==========+==========+================================+==============+
| tableId | recordId | changedValues | effectiveUtc |
| Employee | 1 | { Name: 'Kyle', Property: 30 } | N |
```
This is perhaps more difficult than creating a History table for every other table in your database, but it has multiple benefits:
* adding new fields to tables in your database won't require adding the same fields to another table
* fewer tables used
* It's easier to correlate updates to different tables over time
One architectural benefit of this design is that you are decoupling the concerns of your app and your history/audit capabilities. This design would work just as well as a microservice using a relational or even NoSQL database that is separate from your application database. | Database Design with Change History | [
"",
"sql",
"postgresql",
"change-tracking",
""
] |
So I have a SQL table like below. This table is a bunch of SMS messages between people. I want to get all the threads that exist. This basically means the last message between two people. How do I do this?
```
-------------------------------------------------------
| sender_id | receiver_id | message | time |
-------------------------------------------------------
| 123 | 456 | hi | 4/17/2013 |
--------------------------------------------------------
| 123 | 111 | hi | 4/18/2013 |
--------------------------------------------------------
| 123 | 555 | hi | 4/19/2013 |
--------------------------------------------------------
| 555 | 123 | hi | 4/20/2013 |
--------------------------------------------------------
| 444 | 333 | hi | 4/21/2013 |
--------------------------------------------------------
| 123 | 555 | hi | 4/22/2013 |
--------------------------------------------------------
| 777 | 123 | hi | 4/23/2013 |
--------------------------------------------------------
```
I would like to get the response rows as below for **user = 123**. Notice how the sender\_id and receiver\_id as a whole are unique. Meaning a message that Joe sent to Bob is in the same thread as one that Bob sent to Joe.
```
-------------------------------------------------------
| sender_id | receiver_id | message | time |
-------------------------------------------------------
| 123 | 456 | hi | 4/17/2013 |
--------------------------------------------------------
| 123 | 111 | hi | 4/18/2013 |
--------------------------------------------------------
| 123 | 555 | hi | 4/22/2013 |
--------------------------------------------------------
| 777 | 123 | hi | 4/23/2013 |
--------------------------------------------------------
``` | An easier to read version, which correctly handles date ordering (as shown in the question) and utilizes indexes:
```
SELECT sender_id, receiver_id, message, time FROM
(
SELECT sender_id, receiver_id, message, time
FROM myTable
WHERE sender_id = 123 OR receiver_id = 123
ORDER BY time DESC
) a
GROUP BY (CASE WHEN sender_id = 123 THEN receiver_id
ELSE sender_id END);
```
[SQL fiddle](http://sqlfiddle.com/#!2/5946c/9/0). | The user 123 was only an example; I think the more general query is needed here.
This solution avoids time-consuming joins; the only assumption is a maximum of 10,000 users (easily extendable):
```
SELECT sender_id, receiver_id, message, MAX(time),
IF(sender_id<receiver_id, sender_id*10000+receiver_id, receiver_id*10000+sender_id) as thread_id
FROM messages
GROUP BY thread_id
ORDER BY MAX(time) DESC
```
<http://sqlfiddle.com/#!2/c65d3/30>
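The `IF(sender_id<receiver_id, ...)` expression is just pair normalisation; the same idea can be checked in plain Python (sample rows abridged from the question):

```python
# Keep only the latest message per normalised (sender, receiver) pair.
messages = [
    (123, 456, "hi", "2013-04-17"),
    (123, 555, "hi", "2013-04-19"),
    (555, 123, "hi", "2013-04-20"),
    (123, 555, "hi", "2013-04-22"),
]

threads = {}
for sender, receiver, text, when in messages:
    key = tuple(sorted((sender, receiver)))  # (123, 555) for both directions
    if key not in threads or when > threads[key][3]:
        threads[key] = (sender, receiver, text, when)

print(sorted(threads))  # [(123, 456), (123, 555)]
```

Both directions of the 123/555 conversation collapse into one thread, and only the newest row survives.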
UPDATE:
This version is free of the limitation on the number of users:
```
SELECT sender_id, receiver_id, message, MAX(time),
IF(sender_id<receiver_id, CONCAT(sender_id,receiver_id), CONCAT(receiver_id,sender_id)) as thread_id
FROM messages
GROUP BY thread_id
ORDER BY MAX(time) DESC
```
<http://sqlfiddle.com/#!2/c65d3/31> | SQL distinct for two column | [
"",
"mysql",
"sql",
""
] |
I have a format string that I am creating dynamically based on user input. I am collecting the arguments for the format string in a list, and I'd like to know how to unpack them at the end of the format string. I've seen some questions that seem related to this, but I'm very new to Python and I can't seem to apply their solutions to my case.
The idea of what I'd like to do is:
```
my_string = "string contents" % tuple([for item in var_array])
```
Of course this isn't valid syntax but hopefully it describes what I am trying to do: I'd like to unpack var\_array as my list of arguments without knowing the length of var\_array ahead of time. How could I do this?
**Edit:**
I'll attempt to better explain my problem. I'm creating a format string by concatenating an unknown number of format strings. Thus my final string will have a variable number of %s and therefore a variable number of args that are collected in a list.
For example:
```
"My format string might have %s or %s arguments" % ([arg_list])  # here arg_list has 2 arguments
"But my format string might also have %s or %s or %s arguments." % ([arg_list])  # here arg_list has 3 arguments
```
The length of the format string, and the number of arguments are variable based on user input so I want to be able to tack on a list of args at the end of the final string. Is this possible? | Here is an approach that goes from arguments to a formatted string (error checking is still needed):
```
>>> def StartDance(*args):
return "%d, %d, %d, %d!" % tuple(args)
>>> StartDance(5, 6, 7, 8)
'5, 6, 7, 8!'
```
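To tie this back to the original question, an argument list of unknown length can always be applied with `% tuple(...)`, even when the format string itself is built at run time (the names below are illustrative):

```python
# Build a format string sized to the argument list, then unpack the list.
arg_list = ["red", "green", "blue"]
fmt = "My string might have " + " or ".join(["%s"] * len(arg_list)) + " arguments"
result = fmt % tuple(arg_list)
print(result)  # My string might have red or green or blue arguments
```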
Here is a more robust solution to error checking but I'm presenting it as a separate answer considering how much extra complexity it is adding:
```
>>> def StartDance(*args):
return (", ".join(["%d"] * len(args))+"!") % tuple(args)
>>> StartDance(5, 6, 7, 8)
'5, 6, 7, 8!'
>>> StartDance(5, 6, 7, 8, 9, 10)
'5, 6, 7, 8, 9, 10!'
>>> StartDance(1)
'1!'
```
And here is a function returning a list which is being unpacked as arguments only to have these arguments treated as a list (Python is fun :)
```
>>> StartDance(*range(5,9))
'5, 6, 7, 8!'
``` | Assuming what you want to make into a string supports the [`str`](http://docs.python.org/2/library/functions.html#str) builtin, you can do:
```
def join_args(*args):
return " ".join([str(x) for x in args])
print(join_args(1,2,3,4,5,6))
print(join_args('1','2','3','4'))
```
Out:
```
1 2 3 4 5 6
1 2 3 4
```
You could also use the following for a more flexible string:
```
def join_args(fmstr, *args):
return fmstr.format(*args)
print(join_args("One: {} Two: {} Three: {} Four: {}", 1,2,3,4))
```
Out:
```
One: 1 Two: 2 Three: 3 Four: 4
```
Just make sure there are an equal number of args and `{}`. | Python Unpack Argument List for Format String | [
"",
"python",
""
] |
This would save me a lot of code, but I'm not sure how to implement it. I would like to set my variable "totalfactors" to the result of a for loop iterating through a dictionary and performing a product operation ([Capital Pi Notation](http://en.wikipedia.org/wiki/Multiplication#Capital_Pi_notation)). So I would think I would write this like:
```
totalfactors = for x in dictionary: dictionary[x]*totalfactors
```
I know I could write this out in a couple lines like:
```
totalfactors = 1
for pf in apfactors:
totalfactors *= (apfactors[pf]+1)
```
Any help would be quite useful! Thanks | You could use the [functional](http://en.wikipedia.org/wiki/Fold_%28higher-order_function%29) built-in [`reduce`](http://docs.python.org/2/library/functions.html#reduce). It will repeatedly (or recursively) apply a function - here an anonymous lambda - on a list of values, building up some aggregate:
```
>>> reduce(lambda x, y: x * (y + 1), [1, 2, 3])
12
```
which would be equivalent to:
```
>>> (1 * (2 + 1)) * (3 + 1)
12
```
If you need another initial value, you can pass it as the last argument to reduce:
```
>>> reduce(lambda x, y: x * (y + 1), [1, 2, 3], 10)
240
>>> (((10 * (1 + 1)) * (2 + 1)) * (3 + 1))
240
```
Like @DSM points out in the comment, you probably want:
```
>>> reduce(lambda x, y: x * (y + 1), [1, 2, 3], 1) # initializer is 1
```
which can be written more succinctly with the [operator](http://docs.python.org/2/library/operator.html) module and a [generator expression](http://docs.python.org/2/reference/expressions.html#generator-expressions) as:
```
>>> from operator import mul
>>> reduce(mul, (v + 1 for v in d.values()))
```
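A side note beyond the answer above: on Python 3, `reduce` is no longer a builtin and must be imported from `functools`. A hedged sketch of the same product:

```python
from functools import reduce  # reduce is not a builtin in Python 3
from operator import mul

d = {'a': 1, 'b': 2, 'c': 3}
total = reduce(mul, (v + 1 for v in d.values()), 1)
print(total)  # (1+1) * (2+1) * (3+1) = 24
```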
---
I would have guessed that the generator variant is faster, but on 2.7 it seems it is not (at least for very small dictionaries):
```
In [10]: from operator import mul
In [11]: d = {'a' : 1, 'b' : 2, 'c' : 3}
In [12]: %timeit reduce(lambda x, y: x * (y + 1), d.values(), 1)
1000000 loops, best of 3: 1 us per loop
In [13]: %timeit reduce(mul, (v + 1 for v in d.values()))
1000000 loops, best of 3: 1.23 us per loop
``` | Sounds like you may want to look into doing a reduce(). For example:
```
>>> d={'a':1,'b':2,'c':3,'d':4}
>>> reduce(lambda x,y: x*y, d.values())
24
``` | Setting a variable equal to a returned value from a for loop in Python | [
"",
"python",
"variables",
"for-loop",
""
] |
This question has been asked before, but the fast answers that I have seen also remove the trailing spaces, which I don't want.
```
" a bc "
```
should become
```
" a bc "
```
I have
```
text = re.sub(' +', " ", text)
```
but am hoping for something faster. The suggestion that I have seen (and which won't work) is
```
' '.join(text.split())
```
Note that I will be doing this to lots of smaller texts so just checking for a trailing space won't be so great. | Just a small rewrite of the suggestion up there, but just because something has a small fault doesn't mean you should assume it won't work.
You could easily do something like:
```
front_space = lambda x:x[0]==" "
trailing_space = lambda x:x[-1]==" "
" "*front_space(text)+' '.join(text.split())+" "*trailing_space(text)
``` | FWIW, some timings
```
$ python -m timeit -s 's=" a bc "' 't=s[:]' "while ' ' in t: t=t.replace(' ', ' ')"
1000000 loops, best of 3: 1.05 usec per loop
$ python -m timeit -s 'import re;s=" a bc "' "re.sub(' +', ' ', s)"
100000 loops, best of 3: 2.27 usec per loop
$ python -m timeit -s 's=" a bc "' "''.join((s[0],' '.join(s[1:-1].split()),s[-1]))"
1000000 loops, best of 3: 0.592 usec per loop
$ python -m timeit -s 'import re;s=" a bc "' "re.sub(' {2,}', ' ', s)"
100000 loops, best of 3: 2.34 usec per loop
$ python -m timeit -s 's=" a bc "' '" "+" ".join(s.split())+" "'
1000000 loops, best of 3: 0.387 usec per loop
``` | Python fastest way to remove multiple spaces in a string | [
"",
"python",
"regex",
""
] |
OK this is a Python question:
We a have a dictionary:
```
my_dict = {
('John', 'Cell3', 5): 0,
('Mike', 'Cell2', 6): 1,
('Peter', 'Cell1', 6): 0,
('John', 'Cell1', 4): 5,
('Mike', 'Cell2', 1): 4,
('Peter', 'Cell1', 8): 9
}
```
How do you make another dictionary which has only the key/value pairs that have the name "Peter" in them?
Does it help if you turn this dictionary to a list of tuples of tuples, by
```
tupled = my_dict.items()
```
and then turn it back to a dictionary again?
How do you solve this with list comprehension?
Thanks in advance! | Try this, using the [dictionary comprehensions](https://stackoverflow.com/questions/7276511/are-there-dictionary-comprehensions-in-python-problem-with-function-returning) available in Python 2.7 or newer:
```
{ k:v for k,v in my_dict.items() if 'Peter' in k }
```
Alternatively, if we're certain that the name will always be in the first position, we can do this, which is a bit faster:
```
{ k:v for k,v in my_dict.items() if k[0] == 'Peter' }
```
If you're using an older version of Python, we can get an equivalent result using generator expressions and the right parameters for the `dict()` constructor:
```
dict((k,v) for k,v in my_dict.items() if k[0] == 'Peter')
```
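A self-contained check of the first comprehension against the question's data:

```python
my_dict = {
    ('John', 'Cell3', 5): 0,
    ('Mike', 'Cell2', 6): 1,
    ('Peter', 'Cell1', 6): 0,
    ('John', 'Cell1', 4): 5,
    ('Mike', 'Cell2', 1): 4,
    ('Peter', 'Cell1', 8): 9,
}
peters = {k: v for k, v in my_dict.items() if k[0] == 'Peter'}
print(peters)  # {('Peter', 'Cell1', 6): 0, ('Peter', 'Cell1', 8): 9}
```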
Anyway, the result is as expected:
```
=> {('Peter', 'Cell1', 8): 9, ('Peter', 'Cell1', 6): 0}
``` | For any name:
```
def select(d, name):
xs = {}
for e in d:
if e[0].lower() == name.lower(): xs[e] = d[e]
return xs
d = {('Alice', 'Cell3', 3): 9,
('Bob', 'Cell2', 6): 8,
('Peter', 'Cell1', 6): 0,
('Alice', 'Cell1', 6): 4,
('Bob', 'Cell2', 0): 4,
('Peter', 'Cell1', 8): 8
}
print select(d, 'peter')
>>>{('Peter', 'Cell1', 8): 8, ('Peter', 'Cell1', 6): 0}
``` | Extracting key/value pair from dictionary | [
"",
"python",
"list",
"dictionary",
"tuples",
""
] |
I am working on NMEA processing for GPS trackers, where I am processing the data as a list of values in this way:
```
"""
information package example
41719.285,A,1623.5136,S,07132.9184,W,017.8,203.5,040613,,,A*6B|1.6|2375|1010|0000,0000|02CC000A138E96D6|11|0029560C
"""
gprmc, hdop, altitude, state, ad, baseid, csq, journey = information.split('|')
ptime, gpsindicator, lttd, ns, lgtd, ew, speed, course, pdate, dd, checksum = gprmc.split(',')
```
Then sometimes the data packages are bigger but still well formed; this is because some customers re-configure devices with more data than they need, which makes my program crash. So I am looking for a way to keep my code from crashing in these cases. | Use slices:
```
gprmc, hdop, altitude, state, ad, baseid, csq, journey = information.split('|')[:8]
data = gprmc.split(',')
ptime, gpsindicator, lttd, ns, lgtd, ew, speed, course, pdate, dd = data[:10]
checksum = data[-1]
```
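A runnable check of the slicing version against the sample sentence from the question, with two extra trailing fields appended to mimic the re-configured devices:

```python
information = ("41719.285,A,1623.5136,S,07132.9184,W,017.8,203.5,040613,,,A*6B"
               "|1.6|2375|1010|0000,0000|02CC000A138E96D6|11|0029560C|extra|fields")

# Take only the first 8 pipe-separated fields; anything beyond is ignored.
gprmc, hdop, altitude, state, ad, baseid, csq, journey = information.split('|')[:8]
data = gprmc.split(',')
ptime, gpsindicator, lttd, ns, lgtd, ew, speed, course, pdate, dd = data[:10]
checksum = data[-1]
print(hdop, speed, checksum)  # 1.6 017.8 A*6B
```

The extra `|extra|fields` tail no longer causes an unpacking error.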
In Python 3.x you can use a wildcard:
```
gprmc, hdop, altitude, state, ad, baseid, csq, journey, *_ = information.split('|')
(ptime, gpsindicator, lttd, ns, lgtd,
ew, speed, course, pdate, dd, *_, checksum) = gprmc.split(',')
``` | A bit of a side-answer, as I am somewhat wont to give...
If you are interested in effective GPS data (particularly NMEA 0183) parsing using Python, you may be interested in twisted.positioning: a branch I'm trying to land in twisted, which handles all the seriously gnarly stuff you need to do to get useful data out of a GPS device.
Alternatively, you may be interested in gpsd, to fill the same role. Eventually, twisted.positioning will get a gpsd provider, so that you can write the same code but have it fed data through gpsd. Or, if you're so inclined, you could get positioning data from other places -- the interface is quite general. | list received is bigger than expected | [
"",
"python",
"iterable-unpacking",
""
] |
I have to migrate an old SQL statement to SQL Server 2008. The old statement still has the old join operator "\*=". Of course I could set the DB compatibility level, but I don't like to do this :-)
My question: I am not sure if I am doing the following correctly, and I can't test the old statement anymore. Could someone please check the syntax of my new SQL statement?
old original statement :
```
select * from A,B where A.field *= B.field
```
My guess:
```
SELECT * from A
LEFT JOIN B
ON A.field=B.field;
```
Or is it the opposite way?
```
SELECT * from B
LEFT JOIN A
ON B.field=A.field;
```
Thanks and regards | ```
SELECT * from A
LEFT JOIN B
ON A.field=B.field;
```
This is the correct one. | Your first guess is correct. This query:
```
select * from A,B where A.field *= B.field
```
and this query:
```
SELECT * from A
LEFT JOIN B
ON A.field=B.field;
```
Produce the same results - provided that the original query was well-behaved with other predicates in the `WHERE` clause - the whole reason why the old syntax is deprecated is that it could produce "interesting" results in some cases - so if you were relying on those, you won't get them. | left Join old syntax migration | [
"",
"sql",
"sql-server-2008",
""
] |
I am trying to come up with a simple, performant query for the following problem:
Let's say there are several entities (items) which all have a unique ID. The entities have a variable set of attributes (properties), which therefore have been moved to a separate table:
```
T_Items_Props
=======================
Item_ID Prop_ID Value
-----------------------
101 1 'abc'
101 2 '123'
102 1 'xyz'
102 2 '123'
102 3 '102'
... ... ...
```
Now I want to search for an item, that matches some specified search-criteria, like this:
```
<<Pseudo-SQL>>
SELECT Item_Id(s)
FROM T_Items_Props
WHERE Prop 1 = 'abc'
AND Prop 2 = '123'
...
AND Prop n = ...
```
This would be fairly easy if I had a table like `Items(Id, Prop_1, Prop_2, ..., Prop_n)`. Then I could do a simple `SELECT` where the search criteria could simply (even programmatically) be inserted in the `WHERE`-clause, but in this case I would have to do something like:
```
SELECT t1.Item_ID
FROM T_Items_Props t1
, T_Items_Props t2
, ...
, T_Items_Props tn -- (depending on how many properties to compare)
AND t1.Item_ID = t2.Item_ID
AND t1.Prop_ID = 1 AND t1.Value = 'abc'
AND t2.Prop_ID = 2 AND t2.Value = '123'
...
AND tn.Prop_ID = n AND tn.Value = ...
```
Is there a better/simpler/faster way to do this? | I've offered this in a previous post of similar querying intentions. The user could have 2 criteria one time, and five criteria another and wanted an easy way to build the SQL command. To simplify the need of having to add FROM tables and update the WHERE clause, you can simplify by doing joins and put that criteria right at the join level... So, each criteria is it's own set added to the mix.
```
SELECT
t1.Item_ID
FROM
T_Items_Props t1
JOIN T_Items_Props t2
on t1.Item_ID = t2.Item_ID
AND t2.Prop_ID = 2
AND t2.Value = '123'
JOIN T_Items_Props t3
on t1.Item_ID = t3.Item_ID
AND t3.Prop_ID = 6
AND t3.Value = 'anything'
JOIN T_Items_Props t4
on t1.Item_ID = t4.Item_ID
AND t4.Prop_ID = 15
AND t4.Value = 'another value'
WHERE
t1.Prop_ID = 1
AND t1.Value = 'abc'
```
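Because the JOIN blocks are uniform, the whole statement can be generated with a loop. A hedged Python sketch (`build_query` is a hypothetical helper, and real code should use bind parameters rather than string interpolation):

```python
# Generate the join-per-criterion statement; criteria is a list of
# (prop_id, value) pairs supplied at run time.
def build_query(criteria):
    first_prop, first_value = criteria[0]
    joins = []
    for i, (prop_id, value) in enumerate(criteria[1:], start=2):
        joins.append(
            "JOIN T_Items_Props t{0} ON t1.Item_ID = t{0}.Item_ID "
            "AND t{0}.Prop_ID = {1} AND t{0}.Value = '{2}'".format(i, prop_id, value)
        )
    return ("SELECT t1.Item_ID FROM T_Items_Props t1 "
            + " ".join(joins)
            + " WHERE t1.Prop_ID = {0} AND t1.Value = '{1}'".format(first_prop, first_value))

sql = build_query([(1, 'abc'), (2, '123')])
print(sql)
```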
Notice the primary query will always start with a minimum of the "T1" property/value criteria, but then, notice the JOIN clauses... they are virtually the same so it is very easy to implement via a loop... Just keep aliasing the T2, T3, T4... as needed. This will start with any items that meet the T1 criteria, but then also require all the rest to be found too. | To make the query more readable, you could do something like:
```
SELECT
t1.Item_ID
FROM
T_Items_Props t1
where convert(varchar(10), t1.Prop_ID) + ';' + t1.Value in (
'1;abc',
'2;123',
...
)
```
*NOTE:* This assumes that your IDs will not have more than 10 digits. It might also slow your query down, due to the extra type conversion and string concatenation. | Optimizing tricky SQL search query | [
"",
"sql",
"optimization",
"query-optimization",
"sql-optimization",
""
] |
I have some data in a list of lists. I'm trying to print it out evenly in a table with `format`; however, different lengths really throw it off. Is there any way to do this simply? Or do I have to do something awkward like calculate the maximum of each column and pad the other ones with spaces?
```
table_data = [['vlan1', '0013.F200.0058', '192.168.2.2'],
['vlan20', '0013.F200.0058', '192.168.30.2'],
['vlan20', '0010.600a.7026', '192.168.30.4'],
['vlan20', '70ca.9b99.6a82', '192.168.30.1'],
['vlan100', '0013.F200.0058', '192.168.110.2']]
for i in table_data:
interface,mac,ip = i
print "Internet {} {:>18s} {:>7s} {:>8s}".format(ip, mac,'ARPA' ,interface)
Protocol Address Hardware Addr Type Interface
Internet 192.168.2.2 0013.F200.0058 ARPA vlan1
Internet 192.168.30.2 0013.F200.0058 ARPA vlan20
Internet 192.168.30.4 0010.600a.7026 ARPA vlan20
Internet 192.168.30.1 70ca.9b99.6a82 ARPA vlan20
Internet 192.168.110.2 0013.F200.0058 ARPA vlan100
``` | I think using tabulations ('\t') should do the trick.
```
print "Internet\t{}\t{:>18s}\t{:>7s}\t{:>8s}".format(ip, mac,'ARPA' ,interface)
```
I tested it via the terminal and it seems to work, aligning it right. | Just fiddling with offsets gives me this:
```
print "Internet {:>16}{:>16}{:>8}{:>10}".format(ip, mac, 'ARPA' ,interface)
```
Output:
```
Internet 192.168.2.2 0013.F200.0058 ARPA vlan1
Internet 192.168.110.2 0013.F200.0058 ARPA vlan100
Internet 192.168.30.2 0013.F200.0058 ARPA vlan50
Internet 192.168.30.4 0010.600a.7026 ARPA vlan20
Internet 192.168.30.1 70ca.9b99.6a82 ARPA vlan2
``` | printing table from list of data | [
"",
"python",
"list",
""
] |
I was wondering: would it be possible to access dict values with incomplete keys (as long as there is no more than one entry matching a given string)? For example:
```
my_dict = {'name': 'Klauss', 'age': 26, 'Date of birth': '15th july'}
print my_dict['Date']
>> '15th july'
```
Is this possible? How could it be done? | You can't do this directly with `dict[keyword]`. You have to iterate through the `dict`, match each key against the keyword, and return the corresponding value if the keyword is found.
This is going to be an `O(N)` operation.
```
>>> my_dict = {'name': 'Klauss', 'age': 26, 'Date of birth': '15th july'}
>>> next(v for k,v in my_dict.items() if 'Date' in k)
'15th july'
```
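One caveat the answer doesn't mention: `next` raises `StopIteration` when nothing matches, unless you pass a default. A quick sketch:

```python
my_dict = {'name': 'Klauss', 'age': 26, 'Date of birth': '15th july'}
# The second argument to next() is returned when the generator is empty.
match = next((v for k, v in my_dict.items() if 'Date' in k), None)
missing = next((v for k, v in my_dict.items() if 'Phone' in k), None)
print(match, missing)  # 15th july None
```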
To get all such values use a list comprehension:
```
>>> [ v for k, v in my_dict.items() if 'Date' in k]
['15th july']
```
Use `str.startswith` if you want only those values whose keys start with 'Date':
```
>>> next( v for k, v in my_dict.items() if k.startswith('Date'))
'15th july'
>>> [ v for k, v in my_dict.items() if k.startswith('Date')]
['15th july']
``` | Not the best solution; it can be improved (override `__getitem__`):
```
class mydict(dict):
def __getitem__(self, value):
keys = [k for k in self.keys() if value in k]
key = keys[0] if keys else None
return self.get(key)
my_dict = mydict({'name': 'Klauss', 'age': 26, 'Date of birth': '15th july'})
print(my_dict['Date'])  # returns 15th july
``` | Accessing Python dict values with the key start characters | [
"",
"python",
"dictionary",
""
] |
I am running through some examples of '**nested for loops**' and the below code is given as an example. But isn't this technically a *for* loop?
```
xLimits = range(getWidth(pic))
for x in xLimits:
pixel = getPixelAt(pic, x, 0)
setColor(pixel, green)
```
Wouldn't a nested one look something like this?
```
for x in xLimits:
for y in yLimits:
code
code
code
```
Can someone either agree or disagree with me?
I know this probably doesn't matter, but if I am not looking at this the right way, I would like to know why. | A nested loop (of any kind) is a loop within a loop. It's important to realize that the inner loop will be re-run for *every* iteration of the outer loop.
For example:
```
for i in xrange(3):
for j in xrange(2):
print 'i={0} j={1}'.format(i,j)
```
Output:
```
i=0 j=0
i=0 j=1
i=1 j=0 <-- inner loop restarts
i=1 j=1
i=2 j=0 <-- inner loop restarts
i=2 j=1
```
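The same pattern as a runnable check; the total number of iterations is the product of the two loop lengths:

```python
# Flatten the nested loop into a list of (i, j) pairs.
pairs = [(i, j) for i in range(3) for j in range(2)]
print(len(pairs))   # 3 * 2 = 6, the inner loop runs once per outer iteration
print(pairs[:3])    # [(0, 0), (0, 1), (1, 0)]
```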
So your understanding is totally correct. The first example is not a nested loop, while the second example is.
You could *possibly* consider calling a function with a `for` loop, from within a `for` loop, a "nested for loop", although I would never call it that:
```
def foo(r):
for i in r:
do_something()
for x in xrange(20):
foo( xrange(x) )
``` | A nested for loop is a for loop inside another for loop, as you think. The first example you gave is not a nested for loop, but the second one is. | What is a nested for loop? | [
"",
"python",
"for-loop",
"nested-loops",
""
] |
I am running into an issue where I have a need to run a Query which should get some rows from a main table, and have an indicator if the key of the main table exists in a subtable (relation one to many).
The query might be something like this:
```
select a.index, (select count(1) from second_table b where a.index = b.index)
from first_table a;
```
This way I would get the result I want (0 = no dependent records in second\_table, otherwise there are some), but I'm running a subquery for each record I get from the database. I need to get such an indicator for at least three similar tables, and the main query is already an inner join between at least two tables...
My question is if there is some really efficient way to handle this. I have thought of keeping a record in a new column in "first\_table", but the DB admin doesn't allow triggers, and keeping track of it in code is too risky.
What would be a nice approach to solve this?
The application of this query will be for two things:
1. Indicate that at least one row in second\_table exists for a given row in first\_table. This is to show an indicator in a list. If no row in the second table exists, I won't turn on this indicator.
2. To search for all rows in first\_table which have at least one row in second\_table, or which don't have rows in the second table.
Another option I just found:
```
select a.index, b.index
from first_table a
left join (select distinct(index) as index from second_table) b on a.index = b.index
```
This way I will get null for b.index if it doesn't exist (the display can be adapted later; I'm concerned about query performance here).
The final objective of this question is to find a proper design approach for this kind of case. It happens often; a real application could be a POS system that shows all clients and has one icon in the list as an indicator of whether the client has open orders. | Try using EXISTS; I suppose for such a case it might be better than joining tables. On my Oracle DB it gives a slightly better execution time than the sample query, but this may be DB-specific.
```
SELECT first_table.ID, CASE WHEN EXISTS (SELECT * FROM second_table WHERE first_table.ID = second_table.ID) THEN 1 ELSE 0 END FROM first_table
``` | Why not try this one:
```
select a.index,count(b.[table id])
from first_table a
left join second_table b
on a.index = b.index
group by a.index
``` | How to efficiently retrieve data in one to many relationships | [
"",
"sql",
"db2",
"sql-execution-plan",
""
] |
I have a procedure whose input is comma-separated, like '1,2,3'.
I would like to query like
```
SELECT * FROM PERSON WHERE PERSON_ID IN(1,2,3).
```
Please note that PERSON\_ID is integer. | I've seen this type of question so often I posted a blog on it [here.](http://sqlstudies.com/2013/04/08/how-do-i-use-a-variable-in-an-in-clause/)
Basically you have three options (to the best of my knowledge)
The `LIKE` version that Gordon Lindoff suggested.
Using a split function like so.
```
DECLARE @InList varchar(100)
SET @InList = '1,2,3,4'
SELECT MyTable.*
FROM MyTable
JOIN DelimitedSplit8K (@InList,',') SplitString
ON MyTable.Id = SplitString.Item
```
Or using dynamic SQL.
```
DECLARE @InList varchar(100)
SET @InList = '1,2,3,4'
DECLARE @sql nvarchar(1000)
SET @sql = 'SELECT * ' +
'FROM MyTable ' +
'WHERE Id IN ('+@InList+') '
EXEC sp_executesql @sql
``` | Because `contains` seems like overkill (it is designed for fuzzy searching and uses a full text index), because `charindex()` is not standard SQL, and I abhor answers where `varchar` does not have length, let me give an alternative:
```
SELECT *
FROM PERSON
WHERE ','+@SearchList+',' like '%,'+cast(PERSON_ID as varchar(255))+',%';
```
The concatenation of commas for `@SearchList` makes sure that all values are surrounded by delimiters. These are then put around the particular value, to prevent `1` from matching `10`.
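The same delimiter-wrapping idea can be checked outside SQL; in Python terms (illustrative only):

```python
# Wrap both the list and each candidate value in commas, so '1' cannot
# accidentally match inside '10'.
search_list = "1,2,3"
wrapped = "," + search_list + ","
ids = [1, 10, 2, 30, 3]
matches = [i for i in ids if ("," + str(i) + ",") in wrapped]
print(matches)  # [1, 2, 3] -- 10 and 30 do not match
```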
Note that this will *not* be particularly efficient, because it will require a full table scan. | How to split a comma separated data | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
My table has a column with **a JSON string that has nested objects** (so a simple REPLACE function cannot solve this problem). For example, like this: `{'name':'bob', 'blob': {'foo':'bar'}, 'age': 12}`. What is the easiest query to append a value to the end of the JSON string? So for the example, I want the end result to look like this: `{'name':'bob', 'blob': {'foo':'bar'}, 'age': 12, 'gender': 'male'}`. The solution should be generic enough to work for any JSON values. | What about this:
```
UPDATE table SET table_field1 = CONCAT(table_field1,' This will be added.');
```
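The brace surgery the approach relies on (drop the final `}`, append the new pair, re-close) can be sketched in plain Python; this is illustrative only, and a real solution should parse the JSON instead of manipulating the string:

```python
row = "{'name':'bob', 'blob': {'foo':'bar'}, 'age': 12}"
# Drop the closing brace, append the new pair, close again; the string
# equivalent of the SQL SUBSTRING/CONCAT approach.
appended = row[:-1].rstrip() + ", 'gender': 'male'}"
print(appended)
```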
**EDIT:**
I personally would have done the manipulation with a language like PHP before inserting it. Much easier. Anyway, is this what you want? This should work provided the JSON fragment being added is in the format `{'key':'value'}`:
```
UPDATE table
SET col = CONCAT_WS(",", SUBSTRING(col, 1, CHAR_LENGTH(col) - 1),SUBSTRING('newjson', 2));
``` | I think you can use [`REPLACE`](http://dev.mysql.com/doc/refman/5.0/en/replace.html) function to achieve this
```
UPDATE table
SET column = REPLACE(column, '{\'name\':\'bob\', \'blob\': {\'foo\':\'bar\'}, \'age\': 12}', '{\'name\':\'bob\', \'blob\': {\'foo\':\'bar\'}, \'age\': 12, \'gender\': \'male\'}')
```
Take care to properly escape all quotes inside the JSON.
Regarding your request about nested JSON, I think you can just remove the last character of the string with the [`SUBSTRING`](http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_substring) function and then append whatever you need with `CONCAT`:
```
UPDATE table
SET column = CONCAT(SUBSTRING(column, 0, -1), 'newjsontoappend')
``` | MySQL query to append key:value to JSON string | [
"",
"mysql",
"sql",
""
] |
I'm trying to parse all files in a folder and write the filenames in a CSV using Python. The code I used is
```
import os, csv
f=open("C:/Users/Amber/weights.csv",'r+')
w=csv.writer(f)
for path, dirs, files in os.walk("C:/Users/Amber/Creator"):
for filename in files:
w.writerow(filename)
```
The result I'm getting in the CSV has individual characters in separate columns rather than the entire filename. How do I fix that? | ```
import os, csv
f=open("C:/Users/Amber/weights.csv",'r+')
w=csv.writer(f)
for path, dirs, files in os.walk("C:/Users/Amber/Creator"):
for filename in files:
w.writerow([filename])
``` | `writerow()` expects a sequence argument:
```
import os, csv
with open("C:/Users/Amber/weights.csv", 'w') as f:
writer = csv.writer(f)
for path, dirs, files in os.walk("C:/Users/Amber/Creator"):
for filename in files:
writer.writerow([filename])
``` | Writing filenames from a folder into a csv | [
"",
"python",
"csv",
""
] |
I need to compare two different columns in a mysql `WHERE` statement. Within a view that I have created I have a column called `E_ID` and one called `event`. I need to `SELECT` from the view where the `E_ID` != `$E_ID` when `event` = `$event`.
So if `$event` = status and `$E_ID` = 1, the statement would select anything that does not have both the same event and the same `E_ID`, but it could return data that has the same `E_ID` and a different event.
Okay, let's have some examples. Let's say I have these as my two variables:
```
$E_ID = '1';
$event = 'status';
```
Okay and now in the table is this;
```
E_ID event
1 status
2 status
3 status
1 gig
3 track
5 gig
```
As you can see, the first row contains the data set in the variables, so we don't want to return that. But the problem lies in `E_ID`, as the IDs can be the same if the event is different. So I want to return everything that does not have the `E_ID` of `1` when the `event` is `status`. It can, however, return data with the same E\_ID and a different event.
What should be returned is this;
```
E_ID event
2 status
3 status
1 gig
3 track
5 gig
```
As you can see everything is returned but the row that has the data set in the variables.
Here's my query so far.
```
SELECT * FROM stream_view WHERE E_ID != '$E_ID'
```
Not really sure where to start, so I have struggled to figure it out myself. | I think the OP is more interested in this:
```
SELECT * FROM
stream_view
WHERE (event = '$event'
AND E_ID <> '$E_ID')
OR event<> '$event';
``` | Do you mean like this?
```
SELECT * FROM stream_view WHERE E_ID != '$E_ID' AND event = '$event'
``` | Comparing multiple columns in mySQL | [
"",
"mysql",
"sql",
""
] |
I have the following situation: I am working on several projects which make use of library modules that I have written. The library modules contain several classes and functions. In each project, some subset of the code of the libraries is used.
However, when I publish a project for other users, I only want to give away the code that is used by that project rather than the whole modules. This means I would like, for a given project, to remove unused library functions from the library code (i.e. create a new reduced library). Is there any tool that can do this automatically?
**EDIT**
Some clarifications/replies:
1. Regarding the "you should not do this in general" replies: The bottom line is that in practice, before I publish a project, I manually go through the library modules and remove unused code. As we are all programmers, we know that there is no reason to do something manually when you could easily explain to a computer how to do it. So practically, writing such a program is possible and should not even be too difficult (yes, it may not be super general). My question was whether someone knows if such a tool exists, before I start thinking about implementing it myself. Also, any thoughts about implementing this are welcome.
2. I do not want to simply hide all my code. If I would have wanted to do that I would have probably not used Python. In fact, I want to publish the source code, but only the code which is relevant to the project in question.
3. Regarding the "you are legally protected" comments: In my specific case, the legal/license protection does not help me. Also, the problem here is more general than someone stealing the code. For example, it could be for the sake of clarity: if someone needs to use/develop the code, you don't want dozens of irrelevant functions to be included.
That said, if you really want to go the way you describe in your question, to my knowledge there's no tool that does exactly what you say, because it's an uncommon usage pattern.
But I don't think it would be hard to code a tool that does it using [rope](http://rope.sourceforge.net/). It does static and dynamic code analysis (so you can find what imported objects are being used thus guess what is not used from your imported modules), and also gives many refactoring tools to move or remove code.
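As a rough illustration of the static-analysis idea, here is a minimal sketch built only on the standard `ast` module rather than rope; it deliberately ignores imports, methods, `getattr`-style dynamic access and name collisions, so treat it as a starting point, not a tool:

```python
import ast

def unused_top_level_functions(source):
    """Crude heuristic: top-level function names that are never loaded
    as a Name anywhere in the module."""
    tree = ast.parse(source)
    defined = {node.name for node in tree.body
               if isinstance(node, ast.FunctionDef)}
    # a function that is only defined, never called or referenced,
    # never appears as an ast.Name node
    used = {node.id for node in ast.walk(tree)
            if isinstance(node, ast.Name)}
    return defined - used

src = "def keep():\n    pass\n\ndef drop():\n    pass\n\nkeep()\n"
print(sorted(unused_top_level_functions(src)))  # ['drop']
```

Anything this flags would still need a human check before deletion, for exactly the dynamic-usage reasons mentioned above.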
Anyway, I think to be able to really make a tool that find accurately all code that is being used in your current code, you need to make a full unit test coverage of your code or you shall be really methodical in how you import your module's code (using only `from foo import bar` and avoiding chaining imports between the modules). | I agree with @zmo - one way to avoid future problems like this is to plan ahead and make your code as modular as possible. I would have suggested putting the classes and functions in much smaller files. This would mean that for every project you make, you would have to hand-select which of these smaller files to include. I'm not sure if that's feasible with the size of your projects right now. But for future projects it's a practice you may consider. | Is there a tool that removes functions that are not used in Python? | [
"",
"python",
""
] |
I have two tables, both with same data:
```
IP address | count
```
I need to combine the two tables into new one that contains data from both original tables.
* IF there is a matching record in both tables, their **count** should be added.
* IF there is a record that exists only in one table it gets copied over to the new table.
Let the first table be called ip\_data\_january, the second ip\_data\_february, and the one I am trying to create ip\_data\_yearly. Thanks in advance.
```
INSERT INTO IP_Data_Yearly
SELECT IP_Address, SUM(Count)
FROM (
SELECT IP_Address, Count FROM IP_Data_January
UNION ALL SELECT IP_Address, Count FROM IP_Data_February
) IPCombined
GROUP BY IP_Address
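-- UNION ALL (not UNION) is deliberate: it keeps duplicate rows from the
-- two months so that SUM() adds every count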
``` | 1st insert only new ip addresses (with count starting at zero)
```
insert into ip_data_yearly (ip_address, count)
(select distinct ip_address, '0' from jan_table
where ip_address not in (select ip_address from ip_data_yearly));
```
2nd update the count
```
update ip_data_yearly y
set count = count +
(select count(j.ip_address) from jan_table j where j.ip_address = y.ip_address);
```
..
3rd do this for all months | SQL query to join two tables, adding new records if records missing | [
"",
"mysql",
"sql",
""
] |
How do I ensure sub-processes are stopped when I stop Supervisord?
I'm using Supervisord to run two Celery workers. The command for each worker is:
```
command=/usr/local/myapp/src/manage.py celery worker --concurrency=1 --loglevel=INFO
```
When I start supervisord, they run perfectly, but when I run `sudo service supervisord stop`, it reports that my workers have stopped:
```
celery_1: stopped
celery_2: stopped
```
However, when I run `ps aux | grep celery` I see they're still running:
```
www-data 27667 0.0 1.5 318420 64360 ? S 10:45 0:00 manage.py celery worker --concurrency=1 --loglevel=INFO
www-data 27668 0.0 1.5 318420 318416 ? S 10:45 0:00 manage.py celery worker --concurrency=1 --loglevel=INFO
``` | I believe stopping the supervisor service stops the supervisord daemon itself, not the child processes (your celery workers) that it manages.
`supervisorctl stop all` should allow you to stop the workers, and also allow you to start/restart them
<http://supervisord.org/running.html#running-supervisorctl> | The reason I ended up here is that when I was using supervisor to start and stop my celery processes, the celery workers were not stopping, which lead to an accumulation of workers.
I tried various settings in supervisor, such as stopsignal, and killasgroup. None of them stopped the workers.
At this point, it appears that the celery process does not send signals to the workers when it is stopped.
What I ended up doing was to add:
```
ps auxww | grep 'celery worker' | awk '{print $2}' | xargs kill -9
```
to my scripts. This is from [the celery docs](http://celery.readthedocs.org/en/latest/userguide/workers.html#id2)
At the end of this command, I tried kill -TERM instead of kill -9, hoping that would kill the processes more gracefully. But that always left one process. | Stopping Supervisor doesn't stop Celery workers | [
"",
"python",
"django-celery",
"supervisord",
""
] |
I'm trying to optimize or change the SQL to work with inner joins rather than independent calls
Database: one invoice can have many payment records and order (products) records
Original:
```
SELECT
InvoiceNum,
(SELECT SUM(Orders.Cost) FROM Orders WHERE Orders.Invoice = InvoiceNum and Orders.Returned <> 1 GROUP BY Orders.Invoice) as vat_only,
(SELECT SUM(Orders.Vat) FROM Orders WHERE Orders.Invoice = InvoiceNum and Orders.Returned <> 1 GROUP BY Orders.Invoice) as sales_prevat,
(SELECT SUM(pay.Amount) FROM Payments as pay WHERE Invoices.InvoiceNum = pay.InvoiceNum ) as income
FROM
Invoices
WHERE
InvoiceYear = currentyear
```
I'm sure we can do this another way by grouping and joining tables together. When I tried the SQL statement below, I wasn't getting the same number (count) of records... I think it relates to the type of join or what it joins on, but I still couldn't get it working after 3 hrs of looking at the screen.
So far I got to...
```
SELECT
Invoices.InvoiceNum,
Sum(Orders.Cost) AS SumOfCost,
Sum(Orders.VAT) AS SumOfVAT,
SUM(distinct Payments.Amount) as money
FROM
Invoices
LEFT JOIN
Orders ON Orders.Invoice = Invoices.InvoiceNum
LEFT JOIN
Payments ON Invoices.InvoiceNum = Payments.InvoiceNum
WHERE
Invoices.InvoiceYear = 11
AND Orders.Returned <> 1
GROUP BY
Invoices.InvoiceNum
```
Sorry for the bad English, and I'm not sure what to search for to find out if it's already been answered here :D
Thanks in advance for all the help | Assuming that both payments and orders can contain more than one record per invoice you will need to do your aggregates in a subquery to avoid cross joining:
```
SELECT Invoices.InvoiceNum, o.Cost, o.VAT, p.Amount
FROM Invoices
LEFT JOIN
( SELECT Invoice, Cost = SUM(Cost), VAT = SUM(VAT)
FROM Orders
WHERE Orders.Returned <> 1
GROUP BY Invoice
) o
ON o.Invoice = Invoices.InvoiceNum
LEFT JOIN
( SELECT InvoiceNum, Amount = SUM(Amount)
FROM Payments
GROUP BY InvoiceNum
) P
ON P.InvoiceNum = Invoices.InvoiceNum
WHERE Invoices.InvoiceYear = 11;
```
**ADDENDUM**
To expand on the `CROSS JOIN` comment, imagine this data for an Invoice (1)
**Orders**
```
Invoice Cost VAT
1 15.00 3.00
1 10.00 2.00
```
**Payments**
```
InvoiceNum Amount
1 15.00
1 10.00
```
When you join these tables as you did:
```
SELECT Orders.*, Payments.Amount
FROM Invoices
LEFT JOIN Orders
ON Orders.Invoice = Invoices.InvoiceNum
LEFT JOIN Payments
ON Invoices.InvoiceNum = Payments.InvoiceNum;
```
You end up with:
```
Orders.Invoice Orders.Cost Orders.Vat Payments.Amount
1 15.00 3.00 15.00
1 10.00 2.00 15.00
1 15.00 3.00 10.00
1 10.00 2.00 10.00
```
i.e. every combination of payments/orders, so for each invoice you would get many more rows than required, which distorts your totals. So even though the original data had £25 of payments, this doubles to £50 because of the two records in the order table. This is why each table needs to be aggregated individually, using DISTINCT would not work in the case there was more than one payment/order for the same amount on a single invoice.
---
One final point with regard to optimisation, you should probably index your tables, If you run the query and display the actual execution plan SSMS will suggest indexes for you, but at a guess the following should improve the performance:
```
CREATE NONCLUSTERED INDEX IX_Orders_InvoiceNum ON Orders (Invoice) INCLUDE(Cost, VAT, Returned);
CREATE NONCLUSTERED INDEX IX_Payments_InvoiceNum ON Payments (InvoiceNum) INCLUDE(Amount);
```
This should allow both subqueries to only use the index on each table, with no bookmark loopup/clustered index scan required. | Your problem is that an order has multiple lines for an invoice and it has multiple payments on an invoice (sometimes). This causes a cross product effect for a given order. You fix this by pre-summarizing the tables.
A related problem is that a plain inner `join` would drop invoices that have no payments, so you need `left outer join`.
```
select i.InvoiceNum, osum.cost, osum.vat, psum.income
from Invoices i left outer join
(select o.Invoice, sum(o.Cost) as cost, sum(o.vat) as vat
from orders o
where Returned <> 1
group by o.Invoice
) osum
on osum.Invoice = i.InvoiceNum left outer join
(select p.InvoiceNum, sum(p.Amount) as income
from Payments p
group by p.InvoiceNum
) psum
on psum.InvoiceNum = i.InvoiceNum
where i.InvoiceYear = year(getdate())
```
Two comments: Is the key field for `orders` really `Invoice` or is it also `InvoiceNum`? Also, do you have a field `Invoice.InvoiceYear`? Or do you want `year(i.InvoiceDate)` in the `where` clause? | SQL rewrite to optimize | [
"",
"sql",
"sql-server",
"optimization",
""
] |
I'd like to make a switch from Windows to Linux (Ubuntu) writing my python programs but I just can't get things to work. Here's the problem: I can see that there are quite the number of modules pre-installed (like numpy, pandas, matplotlib, etc.) in Ubuntu. They sit nicely in the /host/Python27/Lib/site-packages directory. But when I write a test python script and try to execute it, it gives me an ImportError whenever I try to import a module (for instance `import numpy as np` gives me `ImportError: No module named numpy`). When I type `which python` in the commandline I get the `/usr/bin/python` path. I think I might need to change things related to the python path, but I don't know how to do that. | You can use the following command in your terminal to see what folders are in your `PYTHONPATH`.
```
python -c "import sys, pprint; pprint.pprint(sys.path)"
```
I'm guessing `/host/Python27/Lib/site-packages` won't be in there (it doesn't sound like a normal python path; how did you install these packages?).
If you want to add folders to your `PYTHONPATH` then use the following:
```
export PYTHONPATH=$PYTHONPATH:/host/Python27/Lib/site-packages
```
Personally here are some recommendations for developing with Python:
1. Use [`virtualenv`](http://pypi.python.org/pypi/virtualenv). It is a very powerful tool that creates sandboxed python environments so you can install modules and keep them separate from the main interpreter.
2. Use [`pip`](https://pypi.python.org/pypi/pip) - When you've created a `virtualenv`, and activated it you can use `pip install` to install packages for you. e.g. `pip install numpy` will install numpy into your virtual environment and will be accessible from only this virtualenv. This means you can also install different versions for testing etc. Very powerful. I would recommend using `pip` to install your python packages over using ubuntu `apt-get install` as you are more likely to get the newer versions of modules (`apt-get` relies on someone packaging the latest versions of your python libraries and may not be available for as many libraries as `pip`).
3. When writing python scripts that you will make executable (`chmod +x my_python_script.py`) make sure you put `#!/usr/bin/env python` at the top as this will pick up the python interpreter in your virtual environment. If you don't (and put `#!/usr/bin/python`) then running `./my_python_script.py` will always use the system python interpreter. | `/host/Python27/Lib/site-packages` is not a default python directory on linux installations as far as I am aware.
The normal python installation (and python packages) should be found under `/usr/lib` or `/usr/lib64` depending on your processor architecture.
If you want to check where python is searching in addition to these directories you can use a terminal with the following command:
```
echo $PYTHONPATH
```
If the `/host/Python27/Lib/site-packages` path is not listed, attempt to use the following command and try it again:
```
export PYTHONPATH=$PYTHONPATH:/host/Python27/Lib/site-packages
```
If this should work and you do not want to write this in a terminal every time you want to use these packages, simply put it into a file called `.bashrc` in your `home` folder (normally `/home/<username>`). | Python ImportError while module is installed [Ubuntu] | [
"",
"python",
"linux",
"ubuntu",
""
] |
**What I was trying to do:**
> Take a string and append a backwards copy of that string, making a
> palindrome
**What I came up with:**
```
# take an input string
a = input('Please enter a string: ')
a = list(a)
# read the string backwards
b = list(reversed(a))
# append the backward-ordered string to the original string, and print this new string
c = a + b
c = str(c)
print(c)
```
**Question:** When run, this script takes a string, for example "test", and returns `['t', 'e', 's', 't', 't', 's', 'e', 't']`; I'm confused about this result since I explicitly converted `c`, the result of concatenating `a` and `b`, to a string (`c = str(c)`). I know I must have missed some basic stuff here, but I wasn't able to figure out what. Could someone throw some light on this? Thank you!
**And** would anyone care to elaborate on why my `c = str(c)` didn't work? Thanks! | The problem with saying `c = str(c)` is that applying `str` to a list simply gives a string representation *of that list* - so, for instance, `str([1,2,3])` yields the string `'[1, 2, 3]'`.
The easiest way to make a list of strings in to a string is to use the `str.join()` method. Given a string `s` and a list `a` of strings, running `s.join(a)` returns a string formed by joining the elements of `a`, using `s` as the glue.
For instance:
```
a = ['h','e','l','l','o']
print( ''.join(a) ) # Prints: hello
```
Or:
```
a = ['Hello', 'and', 'welcome']
print( ' '.join(a) ) # Prints: Hello and welcome
```
Finally:
```
a = ['555','414','2799']
print( '-'.join(a) ) # Prints: 555-414-2799
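# applied to the question: ''.join(a + b) gives 'testtset' for the input 'test'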
``` | It's worth understanding how to use `join`—and nrpeterson's answer does a great job explaining that.
But it's *also* worth knowing how not to create problems for yourself to solve.
Ask yourself why you've called `a = list(a)`. You're trying to convert a string to a sequence of characters, right? But a string is *already* a sequence of characters. You can call `reversed` on it, you can loop over it, you can slice it, etc. So, this is unnecessary.
And, if you've left `a` as a string, the slice `a[::-1]` is *also* a string.
That means your whole program can reduce to this:
```
a = input('Please enter a string: ')
# read the string backwards
b = a[::-1]
# append the backward-ordered string to the original string, and print this new string
c = a + b
print(c)
```
Or, more simply:
```
a = input('Please enter a string: ')
print(a + a[::-1])
``` | Python-Unable to convert a list to string | [
"",
"python",
"string",
"list",
"python-3.x",
"type-conversion",
""
] |
I am scraping 23770 webpages with a pretty simple web scraper using `scrapy`. I am quite new to scrapy and even python, but managed to write a spider that does the job. It is, however, really slow (it takes approx. 28 hours to crawl the 23770 pages).
I have looked on the `scrapy` webpage and the mailing lists and `stackoverflow`, but I can't seem to find generic recommendations for writing fast crawlers understandable for beginners. Maybe my problem is not the spider itself, but the way i run it. All suggestions welcome!
I have listed my code below, if it's needed.
```
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from scrapy.item import Item, Field
import re
class Sale(Item):
Adresse = Field()
Pris = Field()
Salgsdato = Field()
SalgsType = Field()
KvmPris = Field()
Rum = Field()
Postnummer = Field()
Boligtype = Field()
Kvm = Field()
Bygget = Field()
class HouseSpider(BaseSpider):
name = 'House'
allowed_domains = ["http://boliga.dk/"]
start_urls = ['http://www.boliga.dk/salg/resultater?so=1&type=Villa&type=Ejerlejlighed&type=R%%C3%%A6kkehus&kom=&amt=&fraPostnr=&tilPostnr=&iPostnr=&gade=&min=&max=&byggetMin=&byggetMax=&minRooms=&maxRooms=&minSize=&maxSize=&minsaledate=1992&maxsaledate=today&kode=&p=%d' %n for n in xrange(1, 23770, 1)]
def parse(self, response):
hxs = HtmlXPathSelector(response)
sites = hxs.select("id('searchresult')/tr")
items = []
for site in sites:
item = Sale()
item['Adresse'] = site.select("td[1]/a[1]/text()").extract()
item['Pris'] = site.select("td[2]/text()").extract()
item['Salgsdato'] = site.select("td[3]/text()").extract()
Temp = site.select("td[4]/text()").extract()
Temp = Temp[0]
m = re.search('\r\n\t\t\t\t\t(.+?)\r\n\t\t\t\t', Temp)
if m:
found = m.group(1)
item['SalgsType'] = found
else:
item['SalgsType'] = Temp
item['KvmPris'] = site.select("td[5]/text()").extract()
item['Rum'] = site.select("td[6]/text()").extract()
item['Postnummer'] = site.select("td[7]/text()").extract()
item['Boligtype'] = site.select("td[8]/text()").extract()
item['Kvm'] = site.select("td[9]/text()").extract()
item['Bygget'] = site.select("td[10]/text()").extract()
items.append(item)
return items
```
Thanks! | Here's a collection of things to try:
* use latest scrapy version (if not using already)
* check if non-standard middlewares are used
* try to increase `CONCURRENT_REQUESTS_PER_DOMAIN`, `CONCURRENT_REQUESTS` settings ([docs](http://doc.scrapy.org/en/latest/topics/settings.html#concurrent-requests))
* turn off logging `LOG_ENABLED = False` ([docs](http://doc.scrapy.org/en/latest/topics/settings.html#log-enabled))
* try `yield`ing an item in a loop instead of collecting items into the `items` list and returning them
* use local cache DNS (see [this thread](https://stackoverflow.com/questions/12427451/how-do-i-improve-scrapys-download-speed))
* check if this site is using download threshold and limits your download speed (see [this thread](https://stackoverflow.com/questions/13505194/scrapy-crawling-speed-is-slow-60-pages-min#comment18491083_13505194))
* log cpu and memory usage during the spider run - see if there are any problems there
* try run the same spider under [scrapyd](http://scrapyd.readthedocs.org/en/latest/) service
* see if [grequests](https://github.com/kennethreitz/grequests) + [lxml](http://lxml.de/) will perform better (ask if you need any help with implementing this solution)
* try running `Scrapy` on `pypy`, see [Running Scrapy on PyPy](https://stackoverflow.com/questions/31029362/running-scrapy-on-pypy)
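For instance, the concurrency and logging tweaks above might look like this in the project's `settings.py` (the values are illustrative, not tuned for any particular site):

```python
# settings.py -- illustrative values only; tune against your target site
CONCURRENT_REQUESTS = 32             # Scrapy's default is 16
CONCURRENT_REQUESTS_PER_DOMAIN = 16  # Scrapy's default is 8
LOG_ENABLED = False                  # logging adds measurable overhead
```

Raising concurrency only pays off if the site and your bandwidth can keep up, so increase it gradually and watch for errors.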
Hope that helps. | Looking at your code, I'd say most of that time is spent in network requests rather than processing the responses. All of the tips @alecxe provides in his answer apply, but I'd suggest the `HTTPCACHE_ENABLED` setting, since it caches the requests and avoids doing it a second time. It would help on following crawls and even offline development. See more info in the docs: <http://doc.scrapy.org/en/latest/topics/downloader-middleware.html#module-scrapy.contrib.downloadermiddleware.httpcache> | Speed up web scraper | [
"",
"python",
"performance",
"web-scraping",
"scrapy",
""
] |
I have a table A which consists of more than 7k records. Now I am creating a new table B. In my new table B I need to copy only 1000 records from table A, which has more than 7000 records.
No condition applies; it may be any thousand records from the 7000. | In SQL Server
```
SELECT top 1000 *
INTO newTableName
FROM oldTableName;
```
In MySQL
```
CREATE TABLE newTableName AS
SELECT * FROM oldTableName LIMIT 1000;
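-- note: with no ORDER BY, which 1000 rows you get is arbitrary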
``` | ```
INSERT INTO TABLEB(Col1, Col2, .... colN)
SELECT TOP 1000 Col1, Col2, .... colN FROM TABLEA
``` | how to copy top 1000 records from 7000 records in existing table to other new table | [
"",
"sql",
"sql-server-2005",
"database-table",
""
] |
I have an array of integers, and I need to transform it into a string.
```
[1,2,3,4] => '\x01\x02\x03\x04'
```
What function can I use for it? I tried with str(), but it returns '1234'.
```
string = ""
for val in [1,2,3,4]:
string += str(val) # '1234'
``` | `''.join([chr(x) for x in [1, 2, 3, 4]])` | You can convert a `list` of small numbers directly to a [`bytearray`](http://docs.python.org/3/library/functions.html#bytearray):
> If it is an iterable, it must be an iterable of integers in the range 0 <= x < 256, which are used as the initial contents of the array.
And you can convert a `bytearray` directly to a `str` (2.x) or `bytes` (3.x, or 2.6+).
In fact, in 3.x, you can even convert the list straight to [`bytes`](http://docs.python.org/3/library/functions.html#bytes) without going through `bytearray`:
> constructor arguments are interpreted as for bytearray().
So:
```
str(bytearray([1,2,3,4])) # 2.6-2.7 only
bytes(bytearray([1,2,3,4])) # 2.6-2.7, 3.0+
bytes([1,2,3,4]) # 3.0+ only
```
If you really want a string in 3.x, as opposed to a byte string, you need to decode it:
```
bytes(bytearray([1,2,3,4])).decode('ascii')
```
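To make those relationships concrete, here is a quick check (Python 3 semantics assumed):

```python
data = [1, 2, 3, 4]
s = ''.join(chr(x) for x in data)  # text string: '\x01\x02\x03\x04'
b = bytes(data)                    # byte string: b'\x01\x02\x03\x04'
print(s == b.decode('ascii'))      # True
```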
See [Binary Sequence Types](http://docs.python.org/3/library/stdtypes.html#binaryseq) in the docs for more details. | Integer array to string in Python | [
"",
"python",
"converters",
""
] |
Hello, I am trying to make my program check for certain words in the user input. For example: the user types "add the numbers 6+6". The program has a dictionary and compares the words in it to the words in the user input (in this example, "add"). If the word "add" is in the user input, it then checks for numbers and math symbols (here "6+6") and outputs the answer.
I have tried:
```
if test == "add":
do something
```
but this will not work unless the word "add" is all by itself. Any help is very much appreciated. | It will only work in cases like `add 6+6` or `6+6 add` or `add <some_text> 6+6`, etc.
```
string = input()
if 'add' in string:
string = string.split('+')
no1 = int(string[0].split()[-1])
no2 = int(string[1].split()[0])
print(no1 + no2)
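# e.g. for the input 'add the numbers 6+6' this prints 12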
``` | You can loop through the input words and check them in your dictionary like
```
for word in user_input.split():
    if word in dic:
        # keyword found -- handle it here
        ...
``` | how to get certain words from user input | [
"",
"python",
"user-input",
""
] |
I'm following a tutorial at codecademy.com, and for some reason I can't understand, my program isn't returning the expected value, but is returning the value "none" instead.
I can't understand why. Would you mind taking a look?
The dictionaries I am using are:
```
lloyd = { "name": "Lloyd",
"homework": [90.0, 97.0, 75.0, 92.0],
"quizzes": [88.0, 40.0, 94.0],
"tests": [75.0, 90.0] }
alice = { "name": "Alice",
"homework": [100.0, 92.0, 98.0, 100.0],
"quizzes": [82.0, 83.0, 91.0],
"tests": [89.0, 97.0] }
tyler = { "name": "Tyler",
"homework": [0.0, 87.0, 75.0, 22.0],
"quizzes": [0.0, 75.0, 78.0],
"tests": [100.0, 100.0] }
```
And I tried the following functions:
```
def average(x):
return sum(x)/len(x)
def get_average(x):
a = (sum(x['homework'])/len(x['homework']) * 0.1 +
sum(x['quizzes'])/len(x['quizzes']) * 0.3 +
sum(x['tests'])/len(x['tests']) * 0.6)
return a
def get_letter_grade(score):
if score >= 90:
return "A"
elif score <= 80 and score < 90:
return "B"
elif score <= 70 and score < 80:
return "C"
elif score <= 60 and score < 70:
return "D"
elif score < 60:
return "F"
print get_letter_grade(get_average(lloyd))
``` | Your comparison logic is broken. This:
```
elif score <= 80 and score < 90:
```
says "if the score is less than or equal to 80, *and* less than 90"... So if the score is 80.55, that will never be true. You mean to say "if the score is *more than* 80 and less than 90".
The usual way to write that in Python is like this:
```
elif 80 <= score < 90:
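    # i.e. score >= 80 and score < 90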
``` | There is a mistake in your `get_letter_grade` function: you used `<=` where you meant `>=` three times. A fixed (and slightly neater) version:
```
def get_letter_grade(score):
if score >= 90:
return "A"
elif 80 <= score < 90:
return "B"
elif 70 <= score < 80:
return "C"
elif 60 <= score < 70:
return "D"
elif score < 60:
return "F"
``` | Function returning none when trying to get data from dictionaries | [
"",
"python",
""
] |
For a given database
```
Product(maker, model, type)
PC(code, model, speed, ram, hd, cd, price)
```
The question is:
*Find the printer makers which also produce PCs with the lowest RAM and the highest-speed processor among PCs with the lowest RAM. Result set: maker.*
Let's split the query!
* *Find the printer makers which also produce PCs*
> SELECT DISTINCT maker from product Group By maker,type HAVING type
> IN('Printer','PC')
I think this is wrong because IN('Printer','PC') acts like an OR, not an AND
* *PCs with the lowest RAM*
> SELECT model,speed FROM pc WHERE ram=(SELECT MIN(ram) FROM pc) as lowestRam
* *the highest-speed processor among PCs with the lowest RAM*
> ```
> WHERE
> lowestRam.speed=(SELECT MAX(speed) FROM pc WHERE ram=(SELECT MIN(ram) FROM pc))
> ```
query itself!
```
SELECT DISTINCT maker FROM
(SELECT DISTINCT model,speed FROM pc WHERE ram=(SELECT MIN(ram) FROM pc)) as lowestRam
INNER JOIN product
ON product.model=lowestRam.model
WHERE
lowestRam.speed=(SELECT max(speed) FROM pc WHERE ram=(SELECT MIN(ram) FROM pc))
Group By maker,type HAVING type IN( 'Printer' ,'PC')
```
Unfortunately when I submit the query on checking site, it produces 1 extra incorrect result :(
Question comes from [Link](http://www.sql-ex.ru/learn_exercises.php?LN=25).
There are 2 steps of query verification. The 2nd step shows only the difference between user and correct result :( | ```
SELECT DISTINCT maker FROM
(SELECT DISTINCT model,speed FROM pc WHERE ram=(SELECT MIN(ram) FROM pc)) as lowestRam
INNER JOIN product
ON product.model=lowestRam.model AND
lowestRam.speed=(SELECT max(speed) FROM pc WHERE ram=(SELECT MIN(ram) FROM pc))
Group By maker HAVING maker IN( SELECT DISTINCT maker FROM Product WHERE type='Printer')
``` | This should resolve your problem :
```
select distinct maker from Product
where type = 'Printer' and maker in (select maker
from Product join PC on Product.model = PC.model
where ram = (select min(ram) from PC) and speed = (select max(speed) from PC
where ram = (select min(ram) from PC)))
``` | value is type of a and b | [
"",
"sql",
""
] |
By default Celery sends all tasks to the 'celery' queue, but you can change this behavior by adding an extra parameter:
```
@task(queue='celery_periodic')
def recalc_last_hour():
log.debug('sending new task')
recalc_hour.delay(datetime(2013, 1, 1, 2)) # for example
```
Scheduler settings:
```
CELERYBEAT_SCHEDULE = {
'installer_recalc_hour': {
'task': 'stats.installer.tasks.recalc_last_hour',
'schedule': 15 # every 15 sec for test
},
}
CELERYBEAT_SCHEDULER = "djcelery.schedulers.DatabaseScheduler"
```
Run worker:
```
python manage.py celery worker -c 1 -Q celery_periodic -B -E
```
This scheme doesn't work as expected: this worker sends periodic tasks to the 'celery' queue, not to 'celery\_periodic'. How can I fix that?
P.S. celery==3.0.16 | I found a solution for this problem:
1) First of all, I changed the way of configuring periodic tasks. I used the **@periodic\_task** decorator like this:
```
@periodic_task(run_every=crontab(minute='5'),
queue='celery_periodic',
options={'queue': 'celery_periodic'})
def recalc_last_hour():
dt = datetime.utcnow()
prev_hour = datetime(dt.year, dt.month, dt.day, dt.hour) \
- timedelta(hours=1)
log.debug('Generating task for hour %s', str(prev_hour))
recalc_hour.delay(prev_hour)
```
2) I wrote **celery\_periodic** twice in the params to **@periodic\_task**:
* **queue='celery\_periodic'** option is used when you invoke the task from code (.delay or .apply\_async)
* **options={'queue': 'celery\_periodic'}** option is used when *celery beat* invokes it.
I'm sure the same thing is possible if you configure periodic tasks with the CELERYBEAT\_SCHEDULE variable.
UPD. This solution is correct for both DB-based and file-based storage for **CELERYBEAT\_SCHEDULER**. | Periodic tasks are sent to queues by celery beat where you can do everything you do with the Celery API. Here is the list of configurations that comes with celery beat:
<https://celery.readthedocs.org/en/latest/userguide/periodic-tasks.html#available-fields>
In your case:
```
CELERYBEAT_SCHEDULE = {
'installer_recalc_hour': {
'task': 'stats.installer.tasks.recalc_last_hour',
'schedule': 15, # every 15 sec for test
'options': {'queue' : 'celery_periodic'}, # options are mapped to apply_async options
},
}
``` | How to send periodic tasks to specific queue in Celery | [
"",
"python",
"celery",
"django-celery",
""
] |
I'm just fiddling with a simulation of [Mendel's First Law of Inheritance](http://rosalind.info/problems/iprb/).
Before I can let the critters mate and analyze the outcome, the population has to be generated, i.e., a list has to be filled with varying numbers of three different types of tuples without unpacking them.
While trying to get familiar with [itertools](http://docs.python.org/2/library/itertools.html) (I'll need combinations later in the mating part), I came up with the following solution:
```
import itertools
k = 2
m = 3
n = 4
hd = ('A', 'A') # homozygous dominant
het = ('A', 'a') # heterozygous
hr = ('a', 'a') # homozygous recessive
fhd = itertools.repeat(hd, k)
fhet = itertools.repeat(het, m)
fhr = itertools.repeat(hr, n)
population = [x for x in fhd] + [x for x in fhet] + [x for x in fhr]
```
which would result in:
```
[('A', 'A'), ('A', 'A'), ('A', 'a'), ('A', 'a'), ('A', 'a'), ('A', 'a'), ('A', 'a'), ('A', 'a'), ('A', 'a')]
```
Is there a more reasonable, pythonic or memory-saving way to build the final list, e.g. without generating the lists for the three types of individuals first? | You could use `itertools.chain` to combine the iterators:
```
population = list(itertools.chain(fhd, fhet, fhr))
```
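As a quick sanity check (using the question's `k`, `m`, `n` and genotype tuples), the chained version produces the same population as concatenating the lists:

```python
import itertools

k, m, n = 2, 3, 4
hd, het, hr = ('A', 'A'), ('A', 'a'), ('a', 'a')

# chain() walks each repeat() iterator in turn; list() materializes the result
population = list(itertools.chain(itertools.repeat(hd, k),
                                  itertools.repeat(het, m),
                                  itertools.repeat(hr, n)))

assert population == [hd] * k + [het] * m + [hr] * n
print(len(population))  # → 9
```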
Though I would say there's no need to use `itertools.repeat` when you could simply do `[hd] * k`. Indeed, I would approach this simulation as follows:
```
pops = (20, 30, 44)
alleles = (('A', 'A'), ('A', 'a'), ('a', 'a'))
population = [a for n, a in zip(pops, alleles) for _ in range(n)]
```
or perhaps
```
allele_freqs = ((20, ('A', 'A')),
(30, ('A', 'a')),
(44, ('a', 'a')))
population = [a for n, a in allele_freqs for _ in range(n)]
``` | This should work I suppose.
```
pops = [2,3,4]
alleles = [('A','A'), ('A', 'a'), ('a','a')]
out = [pop*[allele] for pop, allele in zip(pops,alleles)]
print [item for sublist in out for item in sublist]
```
I have put the code on [CodeBunk](http://codebunk.com/bunk#-Ix2sl_uPsDtELj8pXJ3) so you could run it too. | Populate list with tuples | [
"",
"python",
"bioinformatics",
"rosalind",
""
] |
I am making a database with Access 2007. I have a form for the call center to enter customer info: Name, Address, Phone Number, etc.
There is a field for credit card numbers and, while we are supposed to enter them as the first 4 and last 4 numbers, i.e. 1234xxxxxxxx4321,
I want to make sure that if they do enter the full number, it keeps the first and last 4 digits but changes the other characters to "x" when the field loses focus. Could anyone point me in the right direction on how to do this?
Thanks in advance for all help in this matter. | All you need is something like this in your form code.
```
Private Sub txtCC_LostFocus()
txtCC.Text = Left(txtCC, 4) & String(8, "x") & Right(txtCC, 4)
End Sub
```
Then what you see is what will get stored in the DB, i.e. 1234xxxxxxxx4321.
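The masking itself is plain string surgery; for illustration, here is the same Left/String/Right logic sketched in Python (the `mask_cc` helper is invented for this example, not part of the Access form):

```python
def mask_cc(number):
    # keep the first and last 4 characters, x out the middle 8
    return number[:4] + "x" * 8 + number[-4:]

print(mask_cc("1234567887654321"))  # → 1234xxxxxxxx4321
```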
I'm going to assume you don't want to actually keep the whole CC# in your DB. That is a huge no-no unless you spend massive time & money to meet PCI compliance. Here's some info on PCI: <http://www.pcicomplianceguide.org/pcifaqs.php> | If you are only storing the first 4 and last 4 digits then the following works. Modify the function validCreditCardNumber() to have whatever checks you want to apply.
```
Function validCreditCardNumber(creditCardNumber) As Boolean
If Len(creditCardNumber) = 12 Then
validCreditCardNumber = True
Else
validCreditCardNumber = False
End If
End Function
Private Sub cbxCreditCardNumber_LostFocus()
If validCreditCardNumber(cbxCreditCardNumber) Then
cbxCreditCardNumber.Text = Left(cbxCreditCardNumber.Text, 4) & "xxxxxxxx" & Right(cbxCreditCardNumber.Text, 4)
End If
End Sub
```
If you want to store the entire number but only hide the digits from the screen then I think a input masks are what you are looking for. | Changing TextBox fields on Lost Focus | [
"",
"sql",
"vba",
"ms-access-2007",
""
] |
I have a simple question. In MySQL, given a row "n", how can we order rows by id (for example), but start from row "n+1" and end at row "n-1"?
Thanks!
**EDIT:** I omitted to specify that I am looking for the query in MySQL.
From an answer below, here is an example:
```
ID
---
1
2
3
4 <--N
5
6
```
I want Desired Results ordered as follows
```
5 <--N + 1
6
1
2
3 <--N - 1
``` | So you mean, for a table
```
ID
---
1
2
3
4 <--N
5
6
```
You want Desired Results ordered as follows?
```
5 <--N + 1
6
1
2
3 <--N - 1
```
If so
```
SELECT ID
FROM T
WHERE ID <> 4
ORDER BY CASE WHEN ID > 4 THEN 0 ELSE 1 END, ID
``` | Assuming table `MyTable` with integer column `N`:
```
SELECT *
from MyTable
where Id between N-1 and N+1
order by N desc
``` | SQL Ordering Data from row "n+1" to "n-1" | [
"",
"mysql",
"sql",
"select",
"sql-order-by",
""
] |
According to [this conversion table](http://docs.python.org/2/library/json.html#py-to-json-table), Python ints get written as JSON numbers when serialized using the JSON module--as I would expect and desire.
I have a dictionary with an integer key and integer value:
```
>>> d = {1:2}
>>> type(d.items()[0][0])
<type 'int'>
>>> type(d.items()[0][1])
<type 'int'>
```
When I use the json module to serialize this to a JSON string, the value is written as a number, but the key is written as a string:
```
>>> json.dumps(d)
'{"1": 2}'
```
This isn't the behavior I want, and it seems particularly broken since it breaks json.dumps/json.loads round-tripping:
```
>>> d == json.loads(json.dumps(d))
False
```
Why does this happen, and is there a way I can force the key to be written as a number? | The simple reason is that [JSON does not allow integer keys.](http://json.org/)
```
object
{}
{ members }
members
pair
pair , members
pair
string : value # Keys *must* be strings.
```
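A quick demonstration of the coercion, plus one way to restore integer keys on load (the `intify_keys` hook is illustrative, not the only option):

```python
import json

d = {1: 2}
s = json.dumps(d)
print(s)  # → {"1": 2}

def intify_keys(obj):
    # convert any key that looks like an integer back to int
    return {int(k) if k.lstrip('-').isdigit() else k: v
            for k, v in obj.items()}

restored = json.loads(s, object_hook=intify_keys)
assert restored == d  # round-trip works again
```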
As to how to get around this limitation - you will first need to ensure that the receiving implementation can handle the technically-invalid JSON. Then you can either replace all of the quote marks or use a custom serializer. | If you really want to, you can check keys for being convertible to integers again using:
```
def pythonify(json_data):
    for key, value in json_data.items():  # items() returns a list in Python 2, so it is safe to mutate the dict while looping
if isinstance(value, list):
value = [ pythonify(item) if isinstance(item, dict) else item for item in value ]
elif isinstance(value, dict):
value = pythonify(value)
try:
newkey = int(key)
del json_data[key]
key = newkey
        except (TypeError, ValueError):  # int() raises ValueError for non-numeric keys
pass
json_data[key] = value
return json_data
``` | Why do int keys of a python dict turn into strings when using json.dumps? | [
"",
"python",
"json",
"serialization",
""
] |
I just started learning programming, so I thought I would write code for summation of numbers using recursion in Python. I encountered this problem: how can I make `add = 0` static? Can someone please help, since my answer is coming out as 0?
```
def sum(n):
add = 0
if n > 0:
add = add + n
sumf (n-1)
print add
sum(10)
``` | If I'm reading your question correctly, you aren't really solving the problem with recursion here. You need to do something like this instead:
```
def sum(n):
if n == 0:
return n
return n + sum(n - 1)
```
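As a quick check of the definition (renamed `rec_sum` here so it doesn't shadow the built-in `sum`):

```python
def rec_sum(n):
    # base case: summing down from 0 gives 0
    if n == 0:
        return n
    # recursive case: n plus the sum of everything below it
    return n + rec_sum(n - 1)

print(rec_sum(10))  # → 55
```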
`sum(n - 1)` will return `(n - 1) + sum(n - 2)`, so `sum(n)` ends up returning `n + (n - 1) + sum(n - 2)`. It'll keep expanding until `n` is `0`, and at that point, you have the sum of all the numbers from `0` to `n`. | I think I see what you are asking. You can try to emulate a static variable like this in python:
```
def test():
test.static_variable += 1
print test.static_variable
test.static_variable = 0
test()
``` | how to make a variable static in python | [
"",
"python",
""
] |
Is there any way to use the mapping function or something better to replace values in an entire dataframe?
I only know how to perform the mapping on series.
I would like to replace the strings in the 'tesst' and 'set' columns with a number,
for example set = 1, test = 2.
Here is an example of my dataset (the original dataset is very large):
```
ds_r
respondent brand engine country aware aware_2 aware_3 age tesst set
0 a volvo p swe 1 0 1 23 set set
1 b volvo None swe 0 0 1 45 set set
2 c bmw p us 0 0 1 56 test test
3 d bmw p us 0 1 1 43 test test
4 e bmw d germany 1 0 1 34 set set
5 f audi d germany 1 0 1 59 set set
6 g volvo d swe 1 0 0 65 test set
7 h audi d swe 1 0 0 78 test set
8 i volvo d us 1 1 1 32 set set
```
Final result should be
```
ds_r
respondent brand engine country aware aware_2 aware_3 age tesst set
0 a volvo p swe 1 0 1 23 1 1
1 b volvo None swe 0 0 1 45 1 1
2 c bmw p us 0 0 1 56 2 2
3 d bmw p us 0 1 1 43 2 2
4 e bmw d germany 1 0 1 34 1 1
5 f audi d germany 1 0 1 59 1 1
6 g volvo d swe 1 0 0 65 2 1
7 h audi d swe 1 0 0 78 2 1
8 i volvo d us 1 1 1 32 1 1
``` | What about [`DataFrame.replace`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.replace.html)?
```
In [9]: mapping = {'set': 1, 'test': 2}
In [10]: df.replace({'set': mapping, 'tesst': mapping})
Out[10]:
Unnamed: 0 respondent brand engine country aware aware_2 aware_3 age \
0 0 a volvo p swe 1 0 1 23
1 1 b volvo None swe 0 0 1 45
2 2 c bmw p us 0 0 1 56
3 3 d bmw p us 0 1 1 43
4 4 e bmw d germany 1 0 1 34
5 5 f audi d germany 1 0 1 59
6 6 g volvo d swe 1 0 0 65
7 7 h audi d swe 1 0 0 78
8 8 i volvo d us 1 1 1 32
tesst set
0 2 1
1 1 2
2 2 1
3 1 2
4 2 1
5 1 2
6 2 1
7 1 2
8 2 1
```
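For intuition, the per-column mapping that `replace` applies here can be sketched in plain Python (no pandas; the toy rows are invented for illustration):

```python
mapping = {'set': 1, 'test': 2}
rows = [{'tesst': 'set', 'set': 'set'},
        {'tesst': 'test', 'set': 'set'}]

# mimic df.replace({'set': mapping, 'tesst': mapping}):
# swap values through the mapping only in the named columns
replaced = [{col: mapping.get(val, val) if col in ('tesst', 'set') else val
             for col, val in row.items()}
            for row in rows]

print(replaced)  # → [{'tesst': 1, 'set': 1}, {'tesst': 2, 'set': 1}]
```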
As @Jeff pointed out in the comments, in pandas versions < 0.11.1, manually tack `.convert_objects()` onto the end to properly convert tesst and set to `int64` columns, in case that matters in subsequent operations. | I know this is old, but adding for those searching as I was. Create a dataframe in pandas, df in this code
```
ip_addresses = df.source_ip.unique()
ip_dict = dict(zip(ip_addresses, range(len(ip_addresses))))
```
That will give you a dictionary map of the ip addresses without having to write it out. | python pandas replacing strings in dataframe with numbers | [
"",
"python",
"replace",
"dataframe",
"pandas",
""
] |
I'm writing this query in SQL :
```
select MAX(AVG(salary) ) from employees group by department_id;
```
First I will get the groups by `department_id`, but what will happen next?
```
EmployeeId DepartmentId Salary
1 1 10
2 1 30
3 2 30
4 2 40
5 2 20
6 3 40
7 3 50
```
after grouping
```
DepartmentId AVG(Salary)
1 (10+30)/2 = 20
2 (30+40+20)/3 = 30
3 (40+50)/2= 45
```
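Those intermediate averages are easy to verify with an in-memory SQLite database (a sketch only — the nested `MAX(AVG(...))` form from the question is Oracle-specific, so the subquery version is used here):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE employees (employee_id INT, department_id INT, salary INT)')
conn.executemany('INSERT INTO employees VALUES (?, ?, ?)',
                 [(1, 1, 10), (2, 1, 30), (3, 2, 30), (4, 2, 40),
                  (5, 2, 20), (6, 3, 40), (7, 3, 50)])

# inner query: one average per department; outer query: the largest of them
row = conn.execute(
    'SELECT MAX(avg_sal) FROM '
    '(SELECT AVG(salary) AS avg_sal FROM employees GROUP BY department_id)'
).fetchone()
print(row[0])  # → 45.0
```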
So the query below will return 45, the maximum of the average salaries (which belongs to departmentId 3):
```
SELECT MAX(x.avg)
FROM ( SELECT AVG(salary)as avg FROM employees group by department_id)x;
``` | Most likely, depending on your RDBMS, this will need to be done with a sub-query
```
select max(AveragesByDept.avgSalary)
from (
select avgSalary=avg(salary)
from employees
group by department_id
) AveragesByDept
``` | Nested aggregate functions, Max(Avg()), in SQL | [
"",
"sql",
""
] |
I have a variable that contains a city, or city plus punctuation and then Post Code. I am using a statement to remove punctuation (and other stray) characters. It looks like this:
```
for c in "!@#%&*()[]{}/?<>,.":
jobLocationCleaned = string.replace(jobLocationUnclean, c, "")
jobLocation = jobLocationCleaned
# Add to list
params.append(jobLocation)
print(jobLocation)
```
However, when using Debug I can see the code step through and do the job it is supposed to, yet when it comes to the `print` statement it prints the address before it was cleaned, i.e. the `for` loop has no effect.
Why is this? | On each iteration you are doing the replacement on `jobLocationUnclean` but assigning the result to `jobLocationClean`. Since you use the same unclean starting point on each iteration only the last iteration will have any affect on the result. Try changing your code to the following:
```
jobLocation = jobLocationUnclean
for c in "!@#%&*()[]{}/?<>,.":
jobLocation = jobLocation.replace(c, "")
params.append(jobLocation)
print(jobLocation)
```
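Running the corrected loop on a sample value (invented here for illustration) shows the accumulation:

```python
job_location_unclean = "York?! (North)"

job_location = job_location_unclean
for c in "!@#%&*()[]{}/?<>,.":
    # feed the partially-cleaned string back in on every pass
    job_location = job_location.replace(c, "")

print(job_location)  # → York North
```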
Note that I also made two other minor modifications: I just use `jobLocation` and got rid of `jobLocationCleaned` because it is unnecessary, and instead of `string.replace(jobLocation, c, "")` I used `jobLocation.replace(c, "")`. This is the recommended way to call string functions, directly on the object rather than from the string module. | In the loop, you never use the result of the previous iteration; you use the original string instead. This is the root of your problem.
"",
"python",
"for-loop",
""
] |
I have some tables:
### ws\_shop\_product
```
CREATE TABLE `ws_shop_product` (
`product_id` int(10) unsigned NOT NULL AUTO_INCREMENT
`product_title` varchar(255) COLLATE utf8_general_ci DEFAULT NULL,
PRIMARY KEY (`product_id`)
) ENGINE=MyISAM AUTO_INCREMENT=14499 DEFAULT CHARSET=utf8 COLLATE=utf8_general_ci
```
### ws\_system\_admin
```
CREATE TABLE `ws_system_admin` (
`admin_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`admin_username` varchar(255) NOT NULL,
`admin_password` char(40) NOT NULL,
PRIMARY KEY (`admin_id`)
) ENGINE=MyISAM AUTO_INCREMENT=14 DEFAULT CHARSET=utf8;
```
### ws\_shop\_product-updated
```
CREATE TABLE `ws_shop_product-updated` (
`updated_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`updated_product` int(10) unsigned DEFAULT NULL,
`updated_admin` int(10) unsigned DEFAULT NULL,
`updated_date` datetime DEFAULT NULL,
PRIMARY KEY (`updated_id`),
KEY `updated_product` (`updated_product`),
KEY `updated_admin` (`updated_admin`)
) ENGINE=MyISAM AUTO_INCREMENT=42384 DEFAULT CHARSET=utf8 COLLATE=utf8_general_ci
```
---
Whenever a `product` has been changed in the CMS, one row
will insert into the `ws_shop_product-updated` which keep
the `admin` ID, `product` ID and `date`.
Some data:
```
product_id product_title
---------- -------------
1 iPhone 5
```
```
updated_product updated_admin updated_date
--------------- ------------- ------------
1 301 2013-04-13 00:00:00
1 302 2013-04-15 00:00:00
1 303 2013-04-16 00:00:00
```
**Now my question is: How can I fetch products with latest update information?**
```
product_id product_title latest_admin_id latest_date
---------- ------------- --------------- -----------
1 iPhone 5 303 2013-04-16 00:00:00
``` | Because you are using mysql, you can use mysql's special (and non-portable) group by functionality to produce this fairly simple query:
```
SELECT * FROM (
SELECT p.product_id, p.product_title,
u.updated_admin latest_admin_id, u.updated_date latest_date
FROM ws_shop_product p
LEFT JOIN `ws_shop_product-updated` u ON u.updated_product = p.product_id
ORDER BY u.updated_date DESC) x
GROUP BY 1
```
This query will return all products, even if they don't have a row in the "updated" table - returning `null` values for the "latest" columns when there's no such row.
The reason this works is that (for mysql only) when not all non-aggregated columns are named in the group by, mysql returns the *first* row found for each unique combination of values of columns named in the group by clause. By ordering the data in the subquery latest-first, the first row found for each product\_id will be the latest. | You could use a query like this:
```
SELECT
p.product_id,
p.product_title,
u.updated_admin latest_admin_id,
u.updated_date latest_date
FROM
`ws_shop_product-updated` u INNER JOIN ws_shop_product p
ON u.updated_product = p.product_id
WHERE
(u.updated_product, u.updated_date) IN
(SELECT updated_product, MAX(updated_date)
FROM `ws_shop_product-updated`
GROUP BY product_id)
```
Please see fiddle [here](http://sqlfiddle.com/#!2/a950a/1).
The subquery will return the maximum updated\_date for each product, the outer query will return all columns of all rows that have the maximum updated\_date for every product. | How to fetch a row which is related to multiple rows in another table? | [
"",
"mysql",
"sql",
"select",
""
] |
I'm getting an error: ORA-01791: not a SELECTed expression when I try to run this query. I am able to run it when the SELECT is comma-separated (AH.NAME, REPLACE(A.ACTIVE_DC,',','/'), etc.) but with the ||','|| I am not able to get it to work. How can I get this query to run? Thanks!
```
SELECT DISTINCT
AH.NAME_1||','||
REPLACE(A.ACTIVE_DC,',','/')||','||
REPLACE(A.PASSIVE_DC,',','/')||','||
REPLACE(H.ENVIRONMENT,',','/')||','||
REPLACE(REPLACE(S.NAME,',','-'),'_x','-x')||','||
H.FULL_NAME||','||
H.PRIMARY_IP||','||
H.COMPLIANCE||','||
H.OS
FROM
HOST H
FULL OUTER JOIN
APP_HOST AH ON
AH.ID_2 = H.ID
FULL OUTER JOIN
HOST_SVR HS ON
HS.ID_1 = H.ID
FULL OUTER JOIN
APP A ON
AH.ID_1 = A.ID
FULL OUTER JOIN
SVR S ON
HS.ID_2 = S.ID
WHERE S.NAME IS NOT NULL
ORDER BY
AH.NAME_1,
REPLACE(REPLACE(S.NAME,',','-'),'_x','-x'),
H.FULL_NAME
``` | Since you aren't trying to order in the sequence the fields appear in the concatenation, you can't just `order by 1` or repeat the whole string. You can use a subquery though:
```
SELECT RESULT
FROM (
SELECT DISTINCT AH.NAME_1,
REPLACE(REPLACE(S.NAME,',','-'),'_x','-x') AS NAME,
H.FULL_NAME,
AH.NAME_1||','||
REPLACE(A.ACTIVE_DC,',','/')||','||
REPLACE(A.PASSIVE_DC,',','/')||','||
REPLACE(H.ENVIRONMENT,',','/')||','||
REPLACE(REPLACE(S.NAME,',','-'),'_x','-x')||','||
H.FULL_NAME||','||
H.PRIMARY_IP||','||
H.COMPLIANCE||','||
H.OS AS RESULT
FROM
HOST H
FULL OUTER JOIN
APP_HOST AH ON
AH.ID_2 = H.ID
FULL OUTER JOIN
HOST_SVR HS ON
HS.ID_1 = H.ID
FULL OUTER JOIN
APP A ON
AH.ID_1 = A.ID
FULL OUTER JOIN
SVR S ON
HS.ID_2 = S.ID
WHERE S.NAME IS NOT NULL
)
ORDER BY
NAME_1,
NAME,
FULL_NAME
``` | You need to place the complex expression from the `select` part into the `order by` part, or conversely add the `REPLACE(REPLACE(S.NAME,',','-'),'_x','-x')` expression to the selection list:
```
SELECT DISTINCT
AH.NAME||','||
REPLACE(A.ACTIVE_DC,',','/')||','||
REPLACE(A.PASSIVE_DC,',','/')||','||
REPLACE(H.ENVIRONMENT,',','/')||','||
REPLACE(REPLACE(S.NAME,',','-'),'_x','-x')||','||
H.FULL_NAME||','||
H.PRIMARY_IP||','||
H.COMPLIANCE||','||
H.OS
FROM
HOST H
FULL OUTER JOIN
APP_HOST AH ON
AH.ID_2 = H.ID
FULL OUTER JOIN
HOST_SVR HS ON
HS.ID_1 = H.ID
FULL OUTER JOIN
APP A ON
AH.ID_1 = A.ID
FULL OUTER JOIN
SVR S ON
HS.ID_2 = S.ID
WHERE S.NAME IS NOT NULL
ORDER BY
-- same as selected
AH.NAME||','||
REPLACE(A.ACTIVE_DC,',','/')||','||
REPLACE(A.PASSIVE_DC,',','/')||','||
REPLACE(H.ENVIRONMENT,',','/')||','||
REPLACE(REPLACE(S.NAME,',','-'),'_x','-x')||','||
H.FULL_NAME||','||
H.PRIMARY_IP||','||
H.COMPLIANCE||','||
H.OS
```
The query above produces the wrong result from the task's point of view because the sorting order differs from the initial query.
So use the 2nd option:
```
SELECT DISTINCT
AH.NAME||','||
REPLACE(A.ACTIVE_DC,',','/')||','||
REPLACE(A.PASSIVE_DC,',','/')||','||
REPLACE(H.ENVIRONMENT,',','/')||','||
REPLACE(REPLACE(S.NAME,',','-'),'_x','-x')||','||
H.FULL_NAME||','||
H.PRIMARY_IP||','||
H.COMPLIANCE||','||
H.OS,
-- next from order by
AH.NAME_1,
REPLACE(REPLACE(S.NAME,',','-'),'_x','-x'),
H.FULL_NAME
FROM
HOST H
FULL OUTER JOIN
APP_HOST AH ON
AH.ID_2 = H.ID
FULL OUTER JOIN
HOST_SVR HS ON
HS.ID_1 = H.ID
FULL OUTER JOIN
APP A ON
AH.ID_1 = A.ID
FULL OUTER JOIN
SVR S ON
HS.ID_2 = S.ID
WHERE S.NAME IS NOT NULL
ORDER BY
AH.NAME_1,
REPLACE(REPLACE(S.NAME,',','-'),'_x','-x'),
H.FULL_NAME
```
The second variant works because all `order by` expressions are included in the constructed string and can't produce more distinct values than the initial variant.
But if you want to produce a result set with only one field, then you must do it in 2 steps with a subquery: distinct on the first step, sort without distinct on the second:
```
select
full_string
from (
SELECT DISTINCT
(
AH.NAME||','||
REPLACE(A.ACTIVE_DC,',','/')||','||
REPLACE(A.PASSIVE_DC,',','/')||','||
REPLACE(H.ENVIRONMENT,',','/')||','||
REPLACE(REPLACE(S.NAME,',','-'),'_x','-x')||','||
H.FULL_NAME||','||
H.PRIMARY_IP||','||
H.COMPLIANCE||','||
H.OS
) as full_string,
-- next from order by
AH.NAME_1,
REPLACE(REPLACE(S.NAME,',','-'),'_x','-x') S_NAME_REPLACE,
    H.FULL_NAME
FROM
HOST H
FULL OUTER JOIN
APP_HOST AH ON
AH.ID_2 = H.ID
FULL OUTER JOIN
HOST_SVR HS ON
HS.ID_1 = H.ID
FULL OUTER JOIN
APP A ON
AH.ID_1 = A.ID
FULL OUTER JOIN
SVR S ON
HS.ID_2 = S.ID
WHERE S.NAME IS NOT NULL
)
ORDER BY
NAME_1,
S_NAME_REPLACE,
FULL_NAME
```
It works because there are no aggregate expressions in the final query.
Illustration for that can be found at [this SQLFiddle](http://www.sqlfiddle.com/#!4/d41d8/12305). | Query ORA-01791 Error due to ||','|| | [
"",
"sql",
"oracle",
""
] |
How do I convert scientific notation to a floating point number?
Here is an example of what I want to avoid:
```
Python 2.7.3 (default, Apr 14 2012, 08:58:41) [GCC] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> a=[78.40816326530613, 245068094.16326532]
>>> print a[0]/a[1]
3.19944395589e-07
>>> print float(a[0]/a[1])
3.19944395589e-07
>>> print float(a[0])/float(a[1])
3.19944395589e-07
``` | The scientific notation is just a convenient way of printing a floating point number. When there are a lot of leading zeros as in your example, the scientific notation might be easier to read.
In order to print a specific number of digits after a decimal point, you can specify a format string with print:
```
print 'Number is: %.8f' % (float(a[0]/a[1]))
```
Or you can use `format()` as in the other answers. | Use string formatting:
```
>>> "{:.50f}".format(float(a[0]/a[1]))
'0.00000031994439558937568872208504280885144055446290'
``` | Convert scientific notation to decimal - python | [
"",
"python",
"scientific-notation",
"floating-point-conversion",
""
] |
Scenario:
```
>>> a=' Hello world'
index = 3
```
In this case the index of "H" is 3. But I need a more general method such that, for any value the string variable 'a' takes, I can find the index of the first non-whitespace character.
Alternative scenario:
```
>>> a='\tHello world'
index = 1
``` | If you mean the first non-whitespace character, I'd use something like this ...
```
>>> a=' Hello world'
>>> len(a) - len(a.lstrip())
3
```
Another one which is a little fun:
```
>>> sum(1 for _ in itertools.takewhile(str.isspace,a))
3
```
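Both versions agree on the two scenarios from the question:

```python
import itertools

def first_non_ws(s):
    # length lost to lstrip() == number of leading whitespace characters
    return len(s) - len(s.lstrip())

for a in ('   Hello world', '\tHello world'):
    assert first_non_ws(a) == sum(1 for _ in itertools.takewhile(str.isspace, a))
    print(first_non_ws(a))  # → 3, then 1
```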
But I'm willing to bet that the first version is faster as it does essentially this exact loop, only in C -- Of course, it needs to construct a new string when it's done, but that's essentially free.
---
For completeness, if the string is empty or composed entirely of whitespace, both of these will return `len(a)` (which is invalid if you try to index with it...)
```
>>> a = "foobar"
>>> a[len(a)]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IndexError: string index out of range
``` | Using `regex`:
```
>>> import re
>>> a=' Hello world'
>>> re.search(r'\S',a).start()
3
>>> a='\tHello world'
>>> re.search(r'\S',a).start()
1
>>>
```
Function to handle the cases when the string is empty or contains only white spaces:
```
>>> def func(strs):
... match = re.search(r'\S',strs)
... if match:
... return match.start()
... else:
... return 'No character found!'
...
>>> func('\t\tfoo')
2
>>> func(' foo')
3
>>> func(' ')
'No character found!'
>>> func('')
'No character found!'
``` | how to find the index of the first non-whitespace character in a string in python? | [
"",
"python",
"string",
""
] |
the following code:
```
data = {"url": 'http://test.com/unsub/' + request.user.pk}
print(data)
```
Gives me this error:
> TypeError: cannot concatenate 'str' and 'long' objects
Unsure why? | It's probably because `request.user.pk` is an integer and not a string, and you can't concatenate strings and integers (or long integers).
Use this instead (I'm an old-fashioned guy, I prefer the old syntax):
```
data = {"url": "http://test.com/unsub/%d" % request.user.pk}
``` | It's because pk is an integer and the other one is a string:
```
data = {"url": 'http://test.com/unsub/' + str(request.user.pk)}
print(data)
``` | TypeError: cannot concatenate 'str' and 'long' objects | [
"",
"python",
"django",
""
] |
I am trying to highlight exactly what changed between two dataframes.
Suppose I have two Python Pandas dataframes:
```
"StudentRoster Jan-1":
id Name score isEnrolled Comment
111 Jack 2.17 True He was late to class
112 Nick 1.11 False Graduated
113 Zoe 4.12 True
"StudentRoster Jan-2":
id Name score isEnrolled Comment
111 Jack 2.17 True He was late to class
112 Nick 1.21 False Graduated
113 Zoe 4.12 False On vacation
```
My goal is to output an HTML table that:
1. Identifies rows that have changed (could be int, float, boolean, string)
2. Outputs rows with same, OLD and NEW values (ideally into an HTML table) so the consumer can clearly see what changed between two dataframes:
```
"StudentRoster Difference Jan-1 - Jan-2":
id Name score isEnrolled Comment
112 Nick was 1.11| now 1.21 False Graduated
113 Zoe 4.12 was True | now False was "" | now "On vacation"
```
I suppose I could do a row by row and column by column comparison, but is there an easier way? | The first part is similar to Constantine, you can get the boolean of which rows are empty\*:
```
In [21]: ne = (df1 != df2).any(1)
In [22]: ne
Out[22]:
0 False
1 True
2 True
dtype: bool
```
Then we can see which entries have changed:
```
In [23]: ne_stacked = (df1 != df2).stack()
In [24]: changed = ne_stacked[ne_stacked]
In [25]: changed.index.names = ['id', 'col']
In [26]: changed
Out[26]:
id col
1 score True
2 isEnrolled True
Comment True
dtype: bool
```
*Here the first entry is the index and the second is the column which has been changed.*
```
In [27]: difference_locations = np.where(df1 != df2)
In [28]: changed_from = df1.values[difference_locations]
In [29]: changed_to = df2.values[difference_locations]
In [30]: pd.DataFrame({'from': changed_from, 'to': changed_to}, index=changed.index)
Out[30]:
from to
id col
1 score 1.11 1.21
2 isEnrolled True False
Comment None On vacation
```
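The "which cells changed, from what to what" idea can also be sanity-checked without pandas (plain dicts standing in for one row of each frame):

```python
row1 = {'score': 1.11, 'isEnrolled': False, 'Comment': None}
row2 = {'score': 1.21, 'isEnrolled': False, 'Comment': 'On vacation'}

# one (column, from, to) triple per changed cell, like the stacked frame above
changes = [(col, row1[col], row2[col])
           for col in row1 if row1[col] != row2[col]]

print(sorted(changes))  # → [('Comment', None, 'On vacation'), ('score', 1.11, 1.21)]
```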
\* Note: it's important that `df1` and `df2` share the same index here. To overcome this ambiguity, you can ensure you only look at the shared labels using `df1.index & df2.index`, but I think I'll leave that as an exercise. | ## Highlighting the difference between two DataFrames
It is possible to use the DataFrame style property to highlight the background color of the cells where there is a difference.
**Using the example data from the original question**
The first step is to concatenate the DataFrames horizontally with the `concat` function and distinguish each frame with the `keys` parameter:
```
df_all = pd.concat([df.set_index('id'), df2.set_index('id')],
axis='columns', keys=['First', 'Second'])
df_all
```
[](https://i.stack.imgur.com/1s6UD.png)
It's probably easier to swap the column levels and put the same column names next to each other:
```
df_final = df_all.swaplevel(axis='columns')[df.columns[1:]]
df_final
```
[](https://i.stack.imgur.com/ntNuz.png)
Now, its much easier to spot the differences in the frames. But, we can go further and use the `style` property to highlight the cells that are different. We define a custom function to do this which you can see in [this part of the documentation](http://pandas.pydata.org/pandas-docs/stable/style.html#Building-Styles).
```
def highlight_diff(data, color='yellow'):
attr = 'background-color: {}'.format(color)
other = data.xs('First', axis='columns', level=-1)
return pd.DataFrame(np.where(data.ne(other, level=0), attr, ''),
index=data.index, columns=data.columns)
df_final.style.apply(highlight_diff, axis=None)
```
[](https://i.stack.imgur.com/72obZ.png)
This will highlight cells that both have missing values. You can either fill them or provide extra logic so that they don't get highlighted. | Compare two DataFrames and output their differences side-by-side | [
"",
"python",
"pandas",
"dataframe",
""
] |
Consider 2 dictionaries,
```
d1 = {'current': [[1369713600, 29], [1369756800, 47], [1369800000, 34]],
'curr_total': 110,
'past': [[1368417600, 2], [1368460800, 70], [1368504000, 10]],
'past_total': 82}
d2 = {'current': [(1369713601, 23), (1369756800, 87), (1369800000, 7)],
'curr_total': 117,
'past': [(1368417600, 100), (1368460800, 7), (1368504000, 9)],
'past_total': 116}
```
* In first dict, some values are integers (totals) and others are list of lists
* In 2nd dict, some values are integers (totals) and others are list of tuples.
* In both dictionaries, there are common keys like 'current','cure\_total' and as such.
* For same key in both dictionaries, first value of list/tuples can be same or otherwise.
Based on this, I need a dictionary like this.
```
f = {'current': [[1369713600, 29], [1369713601, 23], [1369756800, 134], [1369800000, 41]],
'curr_total': 227,
'past': [[1368417600, 102], [1368460800, 77], [1368504000, 19]],
'past_total': 198}
```
My attempt to do this is:
```
for i in fo,pl:
for j in i:
if total.get(j):
total[j] += i[j]
else:
total[j] = i[j]
```
But I found out that, if the first value is the same, the 2nd values of the corresponding lists and tuples are not aggregated. Is my way too naive? Can someone suggest a more pythonic and efficient way to do this? | Instead of returning a list of lists for `'current'` and `'past'`, I am using a dict here, as it is a more appropriate data structure for this case.
In case you still want them returned as lists, you can use `total['current'].items()` or `total['past'].items()`
```
from itertools import chain
d1 = {'current': [[1369713600, 29], [1369756800, 47], [1369800000, 34]],
'curr_total': 110,
'past': [[1368417600, 2], [1368460800, 70], [1368504000, 10]],
'past_total': 82}
d2 = {'current': [(1369713601, 23), (1369756800, 87), (1369800000, 7)],
'curr_total': 117,
'past': [(1368417600, 100), (1368460800, 7), (1368504000, 9)],
'past_total': 116}
total = {}
for k,v in chain(d1.iteritems() ,d2.iteritems()):
if isinstance(v, list):
for k1, v1 in v:
dic = total.setdefault(k,{})
dic[k1] = dic.get(k1,0) + v1
else:
total[k] = total.get(k,0) + v
#convert the dicts to list
for k in total:
if isinstance(total[k], dict):
total[k] = total[k].items()
print total
```
**Output:**
```
{'current': [(1369713600, 29), (1369756800, 134), (1369800000, 41), (1369713601, 23)],
'past': [(1368417600, 102), (1368460800, 77), (1368504000, 19)],
'curr_total': 227,
'past_total': 198
}
``` | ```
for i in fo,pl:
for j in i:
if total.get(j):
total[j] += i[j]
else:
total[j] = i[j]
```
is a good start, but as you pointed out, you'll end up with lists such as:
```
f['current'] = [[1369713600, 29], [1369756800, 47], [1369800000, 34], (1369713601, 23), (1369756800, 87), (1369800000, 7)]
```
Such a list can be reduced by:
```
from itertools import groupby

l = map(list, f['current'])
res = []
for k, v in groupby(sorted(l), lambda x: x[0]):
res.append([k, sum(map(lambda x: x[1], v))])
``` | python merging dictionary of lists of lists/tuples | [
"",
"python",
"list",
"dictionary",
""
] |
I have the following table named 'flt'
You can see the duplicates are identified by 3 columns only `(flight, fltno, stad)`... I don't care about what is in `col1` and `col2`, but I should be able to show them in the query.
So.. you can see `ids 8, 3 and 10` are **duplicates**.
I want to write a pure SQL query... that can do the following:
1) the `duplicate count` column.. which basically counts how many records there are that match the `flight, fltno, stad` of the currently selected row.
2) the `"duplicate rank"` column which orders the duplicates.. 1 means first record, 2 means this is the 2nd record and 3 means this is the 3rd record. You can see `ba 104` has 2 records in total... and it is ranked 1 and 2.
3) from the resulting (possibly editable) query.. I should be able to filter out (using where) all the duplicate ranks that are `> 1`... then able to delete those records.
So.. `id 8, 3 and 10 are > 1`.. and I should be able to delete them within this query... by clicking on the row and pressing the delete key.
If the condition 3 is not entirely achievable.. please give me the best way possible. Thanks.
 | This SQL will get you the results as per your question, however it won't work as part of a DELETE query, I suggest SELECTING from this query into a temporary table and then running a DELETE query from that : )
```
SELECT A.id, A.flight, A.fltno, A.stad, A.col1, A.col2, B.concount AS [duplicate count], (SELECT Count(C.id) FROM tblfit As C WHERE C.flight&C.fltno&C.stad=A.concat AND C.id <= A.id) AS [duplicate rank]
FROM (SELECT tblfit.*, [flight] & [fltno] & [stad] AS concat
FROM tblfit) AS A,
(SELECT [flight] & [fltno] & [stad] AS concat, Count([concat]) AS concount
FROM tblfit
GROUP BY [flight] & [fltno] & [stad]) AS B
WHERE A.concat = B.concat;
``` | I added a column to the table, where the value is always 1, called
```
countValue
```
Then the first query for duplicate count is
```
SELECT tableA.flight, tableA.fltno, tableA.stad, Sum(tableA.countValue) AS duplicateCount
FROM tableA
GROUP BY tableA.flight, tableA.fltno, tableA.stad;
```
Then the second query for duplicate rank (ranked by id number) is
```
SELECT (SELECT Count(*)+1 FROM tableA WHERE id < temp.id AND stad = temp.stad AND flight = temp.flight AND fltno = temp.fltno) AS flightRank, temp.id, temp.flight, temp.fltno, temp.stad
FROM tableA AS temp;
```
Then you can join them
```
SELECT tableA.id, tableA.flight, tableA.fltno, tableA.stad, tableA.col1, tableA.col2, queryCounts.duplicateCount, queryRanking.flightRank
FROM (tableA INNER JOIN queryRanking ON tableA.id = queryRanking.id) INNER JOIN queryCounts ON (tableA.stad = queryCounts.stad) AND (tableA.fltno = queryCounts.fltno) AND (tableA.flight = queryCounts.flight);
```
Then regarding the delete query read this thread since you need to delete using joins
[How to delete in MS Access when using JOIN's?](https://stackoverflow.com/questions/5585732/how-to-delete-in-ms-access-when-using-joins) | Access 2010 SQL - Show Duplicate Records in Order for eventual Deletion (pure SQL Solution pls) | [
"",
"sql",
"ms-access",
"duplicates",
"ms-access-2010",
"records",
""
] |
I know dictionaries are not meant to be used this way, so there is no built-in function to help do this, but I need to delete every entry in my dictionary that has a specific value.
so if my dictionary looks like:
```
'NameofEntry1': '0'
'NameofEntry2': 'DNC'
...
```
I need to delete (probably pop) all the entries that have the value DNC; there are multiple in the dictionary. | Modifying the original dict:
```
for k,v in your_dict.items():
if v == 'DNC':
del your_dict[k]
```
or create a new dict using dict comprehension:
```
your_dict = {k:v for k,v in your_dict.items() if v != 'DNC'}
```
From the [docs](http://docs.python.org/2/library/stdtypes.html#dict.iteritems) on `iteritems()`,`iterkeys()` and `itervalues()`:
> Using `iteritems()`, `iterkeys()` or `itervalues()` while adding or
> deleting entries in the dictionary may raise a `RuntimeError` or fail
> to iterate over all entries.
Same applies to the normal `for key in dict:` loop.
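A runnable sketch of the snapshot idea that is safe on both Python 2 and 3 (sample data mirrors the question):

```python
d = {'NameofEntry1': '0', 'NameofEntry2': 'DNC', 'NameofEntry3': 'DNC'}
# list() takes a snapshot of the items, so deleting inside the loop is safe
for k, v in list(d.items()):
    if v == 'DNC':
        del d[k]
print(d)  # {'NameofEntry1': '0'}
```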
In Python 3 this is applicable to `dict.keys()`, `dict.values()` and `dict.items()`. | You just need to make sure that you aren't modifying the dictionary while you are iterating over it, or else you will get `RuntimeError: dictionary changed size during iteration`.
So you need to iterate over a copy of the keys and values (for `d`, use `d.items()` in 2.x or `list(d.items())` in 3.x):
```
>>> d = {'NameofEntry1': '0', 'NameofEntry2': 'DNC'}
>>> for k,v in d.items():
... if v == 'DNC':
... del d[k]
...
>>> d
{'NameofEntry1': '0'}
``` | Remove a dictionary key that has a certain value | [
"",
"python",
""
] |
I have a dictionary of the format :
```
d[key] = [(val1, (Flag1, Flag2)),
(val2, (Flag1, Flag2)),
(val3, (Flag1, Flag2))]
```
I want to make it :
```
d[key] = [(val1, Flag1),
(val2, Flag1),
(val3, Flag1)]
```
How can I do it? | This should do it:
```
d[key] = [(x, y[0]) for x,y in d[key]]
```
Simple version:
```
new_val = []
for x, y in d[key]:
# In each iteration, x is assigned the val and `y` is assigned (Flag1, Flag2)
# now append a new tuple containing x and y[0] (the first item of that tuple)
new_val.append((x, y[0]))
d[key] = new_val #reassign the new list to d[key]
```
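With the structure from the question (placeholder values), the transformation behaves like this:

```python
d = {'key': [('val1', ('Flag1', 'Flag2')),
             ('val2', ('Flag1', 'Flag2')),
             ('val3', ('Flag1', 'Flag2'))]}
# Keep each value, but only the first flag from its tuple
d['key'] = [(x, y[0]) for x, y in d['key']]
print(d['key'])  # [('val1', 'Flag1'), ('val2', 'Flag1'), ('val3', 'Flag1')]
```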
To modify the whole dictionary:
```
dic = { k: [(x, y[0]) for x,y in v] for k,v in dic.items()}
```
In Python 2.x you can use `dic.iteritems()`, as it returns an iterator; `dic.items()` will work on both Python 2.x and 3.x. | Using `tuple` unpacking:
```
d[key] = [(x, y) for (x, (y, z)) in d[key]]
``` | How to change a tuple within a value of a dictionary? | [
"",
"python",
"dictionary",
"tuples",
""
] |
A `User` `has_many` `Posts`. I want to retrieve the latest Post for each day (using `created_at`), ignoring other posts that may have been written earlier. Another way to pose this question might be to ask for each top-salary-earning employee by department - same thing I think.
How do I write this query in Rails (4.0 preferably)? I think it has something to do with `group` and `maximum` but I can't seem to get it. Is there a way to do it without resorting to SQL?
To clarify, what I'd like returned is an array of post objects that are the last ones written on their respective date.
Thanks! | It's not very clean, but this works in Rails 3 (taken from a Book model in my case), using PostgreSQL syntax for truncating `created_at` to the date:
```
max_created_at_list = Book.select("max(created_at) as created_at").group("date_trunc('day',created_at)")
last_books = Book.where(:created_at => max_created_at_list)
```
... or just:
```
Book.where(:created_at =>Book.select("max(created_at) as created_at").group("date_trunc('day',created_at)"))
```
You'd want an index on `created_at` for large data sets, and either `created_at` constrained to not null at the database level or an "is not null" predicate if the RDBMS you use does not index nulls (e.g. Oracle). | Something like this. You can convert this to AREL syntax as needed:
```
SELECT posts.created_at, *
FROM posts
INNER JOIN (
SELECT MAX(created_at) AS max_order_date FROM posts
GROUP BY DATE(posts.created_at)
) AS last_postings ON last_postings.max_order_date = posts.created_at
ORDER BY DATE(created_at) DESC
LIMIT 10
```
AREL syntax might be:
```
join_sql = <<-SQL
INNER JOIN (
SELECT MAX(created_at) AS max_order_date FROM posts
GROUP BY DATE(posts.created_at)
) AS last_postings ON last_postings.max_order_date = posts.created_at
SQL
Post.joins(join_sql).order('DATE(created_at) DESC')
```
Remove the `LIMIT` as it suits you. | Rails query - latest Post for each day | [
"",
"sql",
"ruby-on-rails",
""
] |
Which one should I use to maximize performance? `os.path.isfile(path)` or `open(path)`? | Testing helps. `os.path.isfile` is quite a bit faster than `open`:
```
In [475]: %timeit open('test_test.txt')
10000 loops, best of 3: 47.9 us per loop
In [476]: %timeit os.path.isfile('test_test.txt')
100000 loops, best of 3: 6.21 us per loop
```
But look at the run times. You need to open or check for **a lot** of files to have any practical impact on total run time for most applications. | Mike has shown that `isfile()` is faster, but there are two more things to consider:
1. `isfile()` only tests if a file exists -- it doesn't tell you anything about read or write permissions! It is very rare to *just* want to know whether or not a file exists, you often want to test if you can do something with it. `open()` will tell you this.
2. Pythonic code generally prefers an EAFP ([Easier to Ask Forgiveness than Permission](https://stackoverflow.com/questions/6092992/why-is-it-easier-to-ask-forgiveness-than-permission-in-python-but-not-in-java)) style, where you try to do things and catch exceptions if you can't. (The opposite is LBYL -- Look Before You Leap, which is common in Java and C, among other languages.)
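A minimal sketch of the EAFP style (the filename and helper here are hypothetical):

```python
def read_if_possible(path):
    # EAFP: attempt the operation and handle failure,
    # instead of checking isfile() first and then opening
    try:
        with open(path) as f:
            return f.read()
    except (IOError, OSError):  # missing file, no permission, ...
        return None

print(read_if_possible('no_such_file_12345.txt'))  # None
```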
Both these two points suggest you might be better off using `open()` unless you are *really* really pressed for performance. | checking if file exists: performance of isfile Vs open(path) | [
"",
"python",
"file",
"exists",
""
] |
I have a query which returns a result set like this:
```
Proj | release | releaseDt
1 | 2 | 1/2/2013
1 | 1 | 4/5/2012
2 | 1 | [null]
3 | 1 | 22/2/2013
1 | 3 | [null]
3 | 2 | [null]
```
I need to sort this by `releaseDt`, but I need to have all the records for that `Proj` together.
After sorting, the result should be something like this:
```
Proj | release | releaseDt
1 | 2 | 1/2/2013
1 | 1 | 4/5/2012
1 | 3 | [null]
3 | 1 | 22/2/2013
3 | 2 | [null]
2 | 1 | [null]
```
How can I do this with SQL Server? | You want to sort by the earliest release date for a project and then by the release date within a project.
You can get the earliest date using a window function, and then use that for sorting:
```
select t.Proj, t.release, t.releaseDt
from (select t.*, min(releasedt) over (partition by proj) as minrdt
from t
) t
order by t.minrdt, t.proj, t.releaseDt
``` | All you need to do is sort your table twice: first by `Proj`, then by `releaseDt`:
```
SELECT *
FROM mytable
ORDER BY
Proj ASC,
releaseDt DESC
``` | Sorting and ordering by two columns | [
"",
"sql",
"sql-server",
"t-sql",
"sql-order-by",
""
] |
I have a simple program which looks for all compressed folders in a directory, targets one compressed file, gets an Excel file located inside the compressed file, and moves it to another location (it does this for every Excel file, across however many compressed folders there are):
```
path = 'C:\Users\me\Documents\Extract'
new_path = 'C:\Users\me\Documents\Test'
i = 0
for folder in os.listdir(path):
path_to_folder = os.path.join(path, folder)
zfile = zipfile.ZipFile(os.path.join(path, folder))
for name in zfile.namelist():
if name.endswith('.xls'):
new_name = str(i)+'_'+name
new_path = os.path.join(new_path, new_name)
zfile.close()
# os.rename(path_to_folder, new_path)  # ERROR HERE
shutil.move(path_to_folder, new_path)  # AND ERROR HERE
i += 1
```
I have tried two ways to move the Excel file, `os.rename` and `shutil.move`. I keep getting the error:
> WindowsError: [Error 32] The process cannot access the file because it is being used by another process.
I don't understand why this error persists, since I have closed every folder. | ```
path = 'C:\Users\me\Documents\Extract'
destination_path = 'C:\Users\me\Documents\Test'
i = 0
for folder in os.listdir(path):
path_to_zip_file = os.path.join(path, folder)
zfile = zipfile.ZipFile(path_to_zip_file)
for name in zfile.namelist():
if name.endswith('.xls'):
new_name = str(i)+'_'+name
new_path = os.path.join(destination_path, new_name)
# This is obviously going to fail because we just opened it
shutil.move(path_to_zip_file, new_path)
i += 1
zfile.close()
```
Changed some of the variable names in your code snippet. Do you see your problem now? You're trying to move the zip file that *your* process has open. You'll need to copy the `.xls` file to your destination using the zipfile module. | If you are on a Windows computer, go to the Task Manager and hit the Processes tab. Scroll down to anything that says python and end the process. You may have had Python running with something else. Then try running your Python program again and it should work. | Error 32, Python, file being used by another process | [
"",
"python",
""
] |
```
>>> str(1.41421356237309504880168872420969807856967187537694807317667973799073247846210703885038753432764157273501384623091229702)
'1.41421356237'
```
Is there a way I can make str() record more digits of the number into the string? I don't understand why it truncates by default. | Try this:
```
>>> from decimal import *
>>> Decimal('1.41421356237309504880168872420969807856967187537694807317667973799073247846210703885038753432764157273501384623091229702')
Decimal('1.41421356237309504880168872420969807856967187537694807317667973799073247846210703885038753432764157273501384623091229702')
```
The `float` literal is truncated by default to fit in the space made available for it (i.e. it's not because of `str`):
```
>>> 1.41421356237309504880168872420969807856967187537694807317667973799073247846210703885038753432764157273501384623091229702
1.4142135623730951
```
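To actually compute something like √2 to many digits with `decimal`, raise the context precision first (a sketch):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # 50 significant digits
root2 = Decimal(2).sqrt()
print(root2)  # 1.4142135623730950488016887242096980785696718753769
```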
If you need more decimal places use [`decimal`](http://docs.python.org/2/library/decimal.html) instead. | Python's floating point numbers use double precision only, which is 64 bits. They simply cannot represent (significantly) more digits than you're seeing.
If you need more, have a look at the built-in [decimal](http://docs.python.org/2/library/decimal.html#module-decimal) module, or the [mpmath](http://code.google.com/p/mpmath/) package. | More Digits in Irrational Numbers | [
"",
"python",
"string",
"square-root",
""
] |
I have a query like below:
```
select
a.id, a.title, a.description
from
my_table_name as a
where
a.id in (select id from another_table b where b.id = 1)
```
My question is: is there any way I can avoid the subquery in the WHERE clause and express it in the FROM clause instead, without compromising performance? | You may use `INNER JOIN` as:
```
select
a.id, a.title, a.description
from
my_table_name as a INNER JOIN another_table as b ON (a.id = b.id and b.id = 1)
```
Or
```
select
a.id, a.title, a.description
from
my_table_name as a INNER JOIN another_table as b ON a.id = b.id
where b.id = 1
```
> Both the queries may not return the same value for you. You may choose whatever works for you. Please use this as a starting point and not as a copy-paste code. | Both of the answers given so far are incorrect in the general case (though the database may have unique constraints which ensure they are correct in a specific case)
If `another_table` might have multiple rows with the same `id` then the `INNER JOIN` will bring back duplicates that are not present in the `IN` version. Trying to remove them with `DISTINCT` can change the semantics if the columns from `my_table_name` themselves have duplicates.
A general rewrite would be
```
SELECT a.id,
a.title,
a.description
FROM my_table_name AS a
JOIN (SELECT DISTINCT id
FROM another_table
WHERE id = 1) AS b
ON b.id = a.id
```
The performance characteristics of this rewrite are implementation-dependent. | Write correlated subquery in a WHERE Clause as join | [
"",
"sql",
""
] |
From the same table, what would be a good way to do two different selects and generate a Boolean based on the results?
**testable**
```
pKey | prodStatus
-----------------
1 | 0
2 | 0
```
**I'm trying to do this**
```
select count(pKey) from testable where pKey = 1 and prodStatus = 0
along with
select count(pKey) from testable where pKey = 2 and prodStatus = 0
If both results were 1 then `true` else `false`
```
Right now I do this using PHP and a lot of code, because I've no idea how it's done purely in SQL and something like this is completely beyond me. How can I do something like this in SQL itself? | ```
SELECT SUM(CASE WHEN pKey = 1 THEN 1 ELSE 0 END)
= SUM(CASE WHEN pKey = 2 THEN 1 ELSE 0 END)
FROM testable
WHERE prodStatus = 0
```
Based on Barmar's answer, except (1) he has `AND` where I think you want an `=` and (2) not all DBs allow the implicit conversion of boolean to 1/0. This is more portable, and will work if someone using a different DB lands here with Google.
The query planner *may* be smart enough to optimize two queries with subselects into just one pass over the table, but I wouldn't bet on it.
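For what it's worth, the conditional-aggregation idea can be sanity-checked with Python's in-memory SQLite (a sketch; SQLite happens to render the boolean comparison as 1/0):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
    CREATE TABLE testable (pKey INTEGER, prodStatus INTEGER);
    INSERT INTO testable VALUES (1, 0), (2, 0);
""")
result = con.execute("""
    SELECT SUM(CASE WHEN pKey = 1 THEN 1 ELSE 0 END)
         = SUM(CASE WHEN pKey = 2 THEN 1 ELSE 0 END)
    FROM testable
    WHERE prodStatus = 0
""").fetchone()[0]
print(result)  # 1, i.e. true
```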
**[edit]** left out the `END` in first version. | Will this work for you?
```
SELECT (
select count(pKey) = 1 from testable where pKey = 1 and prodStatus = 0
) AND (
select count(pKey) = 1 from testable where pKey = 2 and prodStatus = 0
)
```
Check [Demo](http://www.sqlfiddle.com/#!2/fb3cc/2) | Return boolen from two counts on same table | [
"",
"mysql",
"sql",
""
] |
I'm doing the following, and it works, but it is not elegant. I'm wondering if anyone has any insight as to a prettier way to do this.
Let's say I have n booleans that the user will set to be true or false. For each boolean, there is an associated function `do_something` that takes as input the next unprocessed column in a multidimensional list called myList. The following code does just this:
```
i=0
if boolean1:
do_something1([a[i] for a in myList])
i+=1
if boolean2:
do_something2([a[i] for a in myList])
i+=1
if boolean3:
do_something3([a[i] for a in myList])
i+=1
...
```
Even though it works, something in me tells me that there is a much more elegant way to do it. It seems like it might be better to push the columns onto a queue and then pop them off when one of the booleans is satisfied. But that seems like a whole lot of overhead. Does anyone have any suggestions? | If you can put the booleans and functions into a list, you could do:
```
processing = zip(booleans, functions)
for i, action in enumerate([p for (b,p) in processing if b]):
action([a[i] for a in myList])
``` | Transpose `myList` first (flip rows into columns) and then iterate over it.
```
for boolean, row in zip(booleans, zip(*myList)):
if boolean:
process(row)
```
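A toy run of this approach, where `process` simply records the column it receives (sketch):

```python
myList = [[1, 2, 3],
          [4, 5, 6]]            # columns are (1, 4), (2, 5), (3, 6)
booleans = [True, False, True]
seen = []
process = seen.append           # stand-in for the real do_something

for boolean, row in zip(booleans, zip(*myList)):
    if boolean:
        process(row)
print(seen)  # [(1, 4), (3, 6)]
```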
The inner `zip` is part of the idiom `zip(*l)`, which transposes a list. | What is the pythonic way to get the next unprocessed list column? | [
"",
"python",
"list",
""
] |
I'm just trying to write a simple program to allow me to parse some of the following XML.
So far, in following the examples, I am not getting the results I'm looking for.
I encounter many of these XML files and I generally want the info after a handful of tags.
What's the best way, using ElementTree, to search for `<Id>` and grab whatever info is in that tag? I was trying things like
```
for Reel in root.findall('Reel'):
... id = Reel.findtext('Id')
... print id
```
Is there a way just to look for every instance of `<Id>` and grab the urn: etc that comes after it? Some code that traverses everything and looks for `<what I want>` and so on.
This is a very truncated version of what I usually deal with.
This didn't get what I wanted at all. Is there an easy way just to match `<what I want>` in any XML file and get the contents of that tag, or do I need to know the structure of the XML well enough to know its relation to Root/child etc?
```
<Reel>
<Id>urn:uuid:632437bc-73f9-49ca-b687-fdb3f98f430c</Id>
<AssetList>
<MainPicture>
<Id>urn:uuid:46afe8a3-50be-4986-b9c8-34f4ba69572f</Id>
<EditRate>24 1</EditRate>
<IntrinsicDuration>340</IntrinsicDuration>
<EntryPoint>0</EntryPoint>
<Duration>340</Duration>
<FrameRate>24 1</FrameRate>
<ScreenAspectRatio>2048 858</ScreenAspectRatio>
</MainPicture>
<MainSound>
<Id>urn:uuid:1fce0915-f8c7-48a7-b023-36e204a66ed1</Id>
<EditRate>24 1</EditRate>
<IntrinsicDuration>340</IntrinsicDuration>
<EntryPoint>0</EntryPoint>
<Duration>340</Duration>
</MainSound>
</AssetList>
</Reel>
```
@Mata that worked perfectly, but when I tried to use that for different values on another XML file I fell flat on my face. For instance, what about this section of a file? I couldn't post the whole thing, unfortunately. What if I want to grab what comes after KeyId?
```
<?xml version="1.0" encoding="UTF-8" standalone="no" ?><DCinemaSecurityMessage xmlns="http://www.digicine.com/PROTO-ASDCP-KDM-20040311#" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:enc="http://www.w3.org/2001/04/xmlenc#">
<!-- Generated by Wailua Version 0.3.20 -->
<AuthenticatedPublic Id="ID_AuthenticatedPublic">
<MessageId>urn:uuid:7bc63f4c-c617-4d00-9e51-0c8cd6a4f59e</MessageId>
<MessageType>http://www.digicine.com/PROTO-ASDCP-KDM-20040311#</MessageType>
<AnnotationText>SPIDERMAN-3_FTR_S_EN-XX_US-13_51_4K_PH_20070423_DELUXE ~ KDM for Quvis-10010.pem</AnnotationText>
<IssueDate>2007-04-29T04:13:43-00:00</IssueDate>
<Signer>
<dsig:X509IssuerName>dnQualifier=BzC0n/VV/uVrl2PL3uggPJ9va7Q=,CN=.deluxe-admin-c,OU=.mxf-j2c.ca.cinecert.com,O=.ca.cinecert.com</dsig:X509IssuerName>
<dsig:X509SerialNumber>10039</dsig:X509SerialNumber>
</Signer>
<RequiredExtensions>
<Recipient>
<X509IssuerSerial>
<dsig:X509IssuerName>dnQualifier=RUxyQle0qS7qPbcNRFBEgVjw0Og=,CN=SM.QuVIS.com.001,OU=QuVIS Digital Cinema,O=QuVIS.com</dsig:X509IssuerName>
<dsig:X509SerialNumber>363</dsig:X509SerialNumber>
</X509IssuerSerial>
<X509SubjectName>CN=SM MD LE FM.QuVIS_CinemaPlayer-3d_10010,OU=QuVIS,O=QuVIS.com,dnQualifier=3oBfjTfx1me0p1ms7XOX\+eqUUtE=</X509SubjectName>
</Recipient>
<CompositionPlaylistId>urn:uuid:336263da-e4f1-324e-8e0c-ebea00ff79f4</CompositionPlaylistId>
<ContentTitleText>SPIDERMAN-3_FTR_S_EN-XX_US-13_51_4K_PH_20070423_DELUXE</ContentTitleText>
<ContentKeysNotValidBefore>2007-04-30T05:00:00-00:00</ContentKeysNotValidBefore>
<ContentKeysNotValidAfter>2007-04-30T10:00:00-00:00</ContentKeysNotValidAfter>
<KeyIdList>
<KeyId>urn:uuid:9851b0f6-4790-0d4c-a69d-ea8abdedd03d</KeyId>
<KeyId>urn:uuid:8317e8f3-1597-494d-9ed8-08a751ff8615</KeyId>
<KeyId>urn:uuid:5d9b228d-7120-344c-aefc-840cdd32bbfc</KeyId>
<KeyId>urn:uuid:1e32ccb2-ab0b-9d43-b879-1c12840c178b</KeyId>
<KeyId>urn:uuid:44d04416-676a-2e4f-8995-165de8cab78d</KeyId>
<KeyId>urn:uuid:906da0c1-b0cb-4541-b8a9-86476583cdc4</KeyId>
<KeyId>urn:uuid:0fe2d73a-ebe3-9844-b3de-4517c63c4b90</KeyId>
<KeyId>urn:uuid:862fa79a-18c7-9245-a172-486541bef0c0</KeyId>
<KeyId>urn:uuid:aa2f1a88-7a55-894d-bc19-42afca589766</KeyId>
<KeyId>urn:uuid:59d6eeff-cd56-6245-9f13-951554466626</KeyId>
<KeyId>urn:uuid:14a13b1a-76ba-764c-97d0-9900f58af53e</KeyId>
<KeyId>urn:uuid:ccdbe0ae-1c3f-224c-b450-947f43bbd640</KeyId>
<KeyId>urn:uuid:dcd37f10-b042-8e44-bef0-89bda2174842</KeyId>
<KeyId>urn:uuid:9dd7103e-7e5a-a840-a15f-f7d7fe699203</KeyId>
</KeyIdList>
</RequiredExtensions>
<NonCriticalExtensions/>
</AuthenticatedPublic>
<AuthenticatedPrivate Id="ID_AuthenticatedPrivate"><enc:EncryptedKey xmlns:enc="http://www.w3.org/2001/04/xmlenc#">
<enc:EncryptionMethod Algorithm="http://www.w3.org/2001/04/xmlenc#rsa-oaep-mgf1p">
<ds:DigestMethod xmlns:ds="http://www.w3.org/2000/09/xmldsig#" Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
</enc:EncryptionMethod>
``` | The expression `Reel.findtext('Id')` only matches direct children of `Reel`. If you want to find all `Id` tags in your xml document, you can just use:
```
ids = [id.text for id in Reel.findall(".//Id")]
```
This would give you a list of the text content of all `Id` tags that are descendants of `Reel`.
---
edit:
Your updated example uses namespaces, in this case `KeyId` is in the default namespace (`http://www.digicine.com/PROTO-ASDCP-KDM-20040311#`), so to search for it you need to include it in your search:
```
from xml.etree import ElementTree
doc = ElementTree.parse('test.xml')
nsmap = {'ns': 'http://www.digicine.com/PROTO-ASDCP-KDM-20040311#'}
ids = [id.text for id in doc.findall(".//ns:KeyId", namespaces=nsmap)]
print(ids)
...
```
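The same namespace-aware search works on an in-memory document too; a self-contained sketch with a trimmed-down `KeyIdList`:

```python
from xml.etree import ElementTree

xml = """<KeyIdList xmlns="http://www.digicine.com/PROTO-ASDCP-KDM-20040311#">
  <KeyId>urn:uuid:9851b0f6-4790-0d4c-a69d-ea8abdedd03d</KeyId>
  <KeyId>urn:uuid:8317e8f3-1597-494d-9ed8-08a751ff8615</KeyId>
</KeyIdList>"""
root = ElementTree.fromstring(xml)
nsmap = {'ns': 'http://www.digicine.com/PROTO-ASDCP-KDM-20040311#'}
ids = [el.text for el in root.findall(".//ns:KeyId", namespaces=nsmap)]
print(len(ids))  # 2
```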
The xpath subset `ElementTree` supports is rather limited. If you want more complete support, you should use [`lxml`](http://lxml.de) instead; its [xpath support](http://lxml.de/xpathxslt.html#xpath) is much more complete.
For example, using xpath to search for all `KeyId` tags (ignoring namespaces) and returning their text content directly:
```
from lxml import etree
doc = etree.parse('test.xml')
ids = doc.xpath(".//*[local-name()='KeyId']/text()")
print(ids)
...
``` | It sounds like [XPath](http://docs.python.org/2/library/xml.etree.elementtree.html#elementtree-xpath) might be right up your alley - it will let you query your XML document for exactly what you're looking for, as long as you know the structure. | XML parsing in Python using Python 2 or 3 | [
"",
"python",
""
] |
I am attempting to write a command line minesweeper clone, and I am having some trouble with the mine generator code.
```
def genWorld(size, mines):
world = []
currBlock = []
for i in range(size):
for j in range(size):
currBlock.append("x")
world.append(currBlock)
currBlock = []
for i in range(mines):
while True:
row = randint(0, size)
col = randint(0, size)
if world[row[col]] != "M":
break
world[row[col]] = "M"
printWorld(world)
```
When I run this, I get the error:
```
Traceback (most recent call last):
File "C:\Python33\minesweeper.py", line 28, in <module>
genWorld(9, 10)
File "C:\Python33\minesweeper.py", line 23, in genWorld
if world[row[col]] != "M":
TypeError: 'int' object is not subscriptable
```
I'd imagine this means that I am referencing the list in the wrong way, but how would I go about doing this correctly? | You give `row` an integer value. `row[col]` then tries to access an element of that integer, which gives an error. I think what you want is `world[row][col]`. | You probably want `world[row][col]`, as `world[row]` gives a list, then `[col]` selects an element from that list. | What is the correct way to reference items in multi-dimensional lists? | [
"",
"python",
"multidimensional-array",
"python-3.x",
""
] |
I have a table formatted as follows:
```
mysql> select pattern,trunks,cost from sms_prices where pattern=1;
+---------+--------------+-------+
| pattern | trunks | cost |
+---------+--------------+-------+
| 1 | Vitelity | 0.099 |
| 1 | Plivo | 0.012 |
| 1 | Twilio | 0.012 |
+---------+--------------+-------+
3 rows in set (0.00 sec)
```
My question is:
Considering this table has another 700+ entries, with 3-4 entries for the same pattern, how do I select DISTINCT(pattern) ordering by cost ASC?
I tried this:
```
mysql> select DISTINCT pattern,cost,trunks from sms_prices where pattern=1 order by cost;
+---------+-------+--------------+
| pattern | cost | trunks |
+---------+-------+--------------+
| 1 | 0.012 | Plivo |
| 1 | 0.012 | Twilio |
| 1 | 0.099 | Vitelity |
+---------+-------+--------------+
3 rows in set (0.00 sec)
mysql>
```
But as you can see it still gives me the same 3 results.
If I only select a single DISTINCT row, it gives me a single entry:
```
mysql> select DISTINCT pattern from sms_prices where pattern=1 order by cost;
+---------+
| pattern |
+---------+
| 1 |
+---------+
1 row in set (0.00 sec)
```
But I don't know which entry this is, so the result is useless.
Please help with a query that would return a single result per `pattern`, with the smallest `cost`
Thanks! | Perhaps this may not be what you wanted but:
```
SELECT pattern, cost, trunks
FROM sms_prices
WHERE cost = (select min(cost) from sms_prices where pattern = 1)
GROUP BY pattern;
```
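The cheapest-trunk-per-pattern idea can also be sketched with Python's built-in SQLite (sample data from the question). Note this relies on SQLite's documented bare-column behaviour with `MIN()`; MySQL handles bare columns differently, so treat it as an illustration, and on a cost tie either trunk may come back:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
    CREATE TABLE sms_prices (pattern INTEGER, trunks TEXT, cost REAL);
    INSERT INTO sms_prices VALUES
        (1, 'Vitelity', 0.099), (1, 'Plivo', 0.012), (1, 'Twilio', 0.012);
""")
# One row per pattern; the bare trunks column comes from a MIN(cost) row
rows = con.execute(
    "SELECT pattern, trunks, MIN(cost) FROM sms_prices GROUP BY pattern"
).fetchall()
print(rows)  # e.g. [(1, 'Plivo', 0.012)]
```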
Regards | Won't this work?
```
select pattern, min(cost) mincost
from sms_prices
where whatever
group by pattern
``` | MySQL query to group by and use distinct row | [
"",
"mysql",
"sql",
"distinct",
""
] |
In the queries I stumble upon, each date is converted with the `to_date` function before any comparison. Sometimes it causes a "literal does not match format string" error, which may have nothing to do with the format; the cause was explained here:
[ORA-01861: literal does not match format string](https://stackoverflow.com/questions/1387917/ora-01861-literal-does-not-match-format-string)
My question is: is it really necessary to use date conversion? Why is it converted in the first place before applying any logical comparison? | Oracle does not store dates as, well, dates. The problem is that there might be a time on the dates that would cause them to be unequal. (You can see the documentation [here](http://docs.oracle.com/cd/B19306_01/server.102/b14220/datatype.htm#i1847) for information about the date data type.)
In general, we think that "2013-01-01" is equal to "2013-01-01". However, the first date might be "2013-01-01 01:00:00" and the second "2013-01-01 02:02:02". And they would not be equal. To make matters worse, they may look the same when they are printed out.
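The same pitfall is easy to demonstrate with Python datetimes, as a loose analogy to Oracle's DATE type carrying a time part:

```python
from datetime import datetime

# Both values are "2013-01-01" at the date level, but hidden time
# parts make them unequal; truncating (like Oracle's trunc()) fixes it.
a = datetime(2013, 1, 1, 1, 0, 0)
b = datetime(2013, 1, 1, 2, 2, 2)
print(a == b)                # False
print(a.date() == b.date())  # True
```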
You don't actually have to convert the dates to strings in order to do such comparisons. You can also use the `trunc()` function. Such a transformation of the data is insurance against "invisible" time components of the data interfering with comparisons. | You should really be storing dates as actual dates (or timestamps). If you have strings representing dates, you will often need to convert them using to\_date (with a specified format, not relying on default formats). It really depends on what comparisons/date functionality you want. You're getting errors because you hit a value that does not conform to your specified format. This is also a good reason to specify a column as DATE to store dates. For example,
```
select to_date('123', 'MM-DD-YYYY') from dual;
```
will throw an ORA-01861. So you may have 99.9% of the rows as MM-DD-YYYY, but the 0.1% will cause you headaches.
Anyway, if you cleanup those strings, you can do much more using to\_date and date functions. For example:
```
select
(last_day(to_date('02-05-2009', 'MM-DD-YYYY')) - to_date('01-15-1998', 'MM-DD-YYYY')) as days_between_dates
from dual;
```
Not fun to do that with strings. Or maybe just find the most recent date:
```
select greatest( to_date('02-05-2009', 'MM-DD-YYYY'), to_date('12-01-1988', 'MM-DD-YYYY')) from dual;
```
using string comparison would give wrong answer:
```
select greatest('02-05-2009', '12-01-1988') from dual;
```
Just a few examples, but much better to treat dates as dates, not strings. | Oracle - Should I convert dates with to_date before comparison? | [
"",
"sql",
"oracle",
""
] |
```
with open("movies.txt") as infile:
for line in infile:
list1 = [ ]
for temp in line.split(':'):
list1.append(temp)
if (list1[0] == 'product/productId'):
if(list1[1] != product):
product = list1[1]
f1=open(list1[1],'w')
elif(list1[0] == 'review/text'):
if (list1[1] != product):
f1.write(list1[1] + os.linesep)
```
I keep getting the IOError, which disappears as soon as I use `for line in filename` instead of `with open(filename) as file:`. Help please.
I have already tried all the solutions on this page [Read large text files in Python, line by line without loading it in to memory](https://stackoverflow.com/questions/6475328/read-large-text-files-in-python-line-by-line-without-loading-it-in-to-memory) but to no avail.
When I use this code it works perfectly fine...
```
for line in file_contents('movies.txt').splitlines():
list1 = [ ]
for temp in line.split(":"):
list1.append(temp)
for temp2 in line.split(":"):
list1.append(temp2)
if (list1[1] != product):
if (list1[0] == 'product/productId'):
product = list1[1]
f1 = open(list1[1],'w')
elif(list1[0] == 'review/text'):
f1.write(list1[1] + os.linesep)
```
but I have to use the first code that I posted. | As you are reading lines from a file, you are getting the trailing newline character `\n`. You can see this in the traceback that you posted, and I assume that this is where the problem is coming from.
Use `.strip()` to remove unwanted whitespace and newline characters before trying to open the file. You may also have to provide the full path to the file you wish to work on rather than just the file name. | Likely because `list1[1]` is invalid as a filename, e.g. it is an empty string. | how do i avoid this error" invalid mode w or filename" in python 2.7? | [
"",
"python",
"python-2.7",
"ioerror",
""
] |
```
class AlarmBox(Widget):
hour = ["12","1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11"]
tensMin = ["0", "1", "2", "3", "4", "5"]
onesMin = ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"]
day = ["AM", "PM"]
txt_inpt = ObjectProperty(None)
def print1(self):
self.txt_inpt.text("HI")
XXXXXXX
```
How do I call print1 within the object?
I tried the following at XXXXXXX:
1. self.print1()
2. self.print1(self)
3. print1(self)
4. print1()
5. c = AlarmBox()
6. c.print1()
In Java you can do:
this.print1() or print1()! | At the outermost level (same indent level as `class AlarmBox`), you can declare code that is not part of that class:
```
c = AlarmBox()
c.print1()
```
The problem was that your code at `XXXXXX` was within the class. | You can do this in Python as well, but you need to execute your code at some point:
```
class AlarmBox(Widget):
hour = ["12","1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11"]
tensMin = ["0", "1", "2", "3", "4", "5"]
onesMin = ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"]
day = ["AM", "PM"]
txt_inpt = ObjectProperty(None)
def print1(self):
self.txt_inpt.text("HI")
# XXXXXXX
def print1_caller(self):
self.print1()
```
XXXXX is not a place to execute code; it's a place to define class member variables and methods. | Python Method Call | [
"",
"python",
"class",
"function",
"methods",
""
] |
Is there anything in the catalog views in SQL Server that will give you a list of the stored procedures, like `select name from sys.procedures`, that will also tell you how many parameters each procedure takes? IntelliSense will tell you the names of the parameters, and Visual Studio knows as well. Is there a better way to get a result set of stored procedures and the number (and/or name) of the parameters each procedure takes?
I suppose one could do this if they parsed `sp_helptext 'sprocName'`, but I would think there is a better way to do it.
Did a little more digging and came up with
```
;with x
as
(
SELECT
p.name AS Parameter,
t.name AS [Type],
sp.name
FROM sys.procedures sp
JOIN sys.parameters p
ON sp.object_id = p.object_id
JOIN sys.types t
ON p.system_type_id = t.system_type_id
)
select name,Parameter,COUNT(*)
from x
group by name,Parameter
```
which'll do the job. | You can get at the parameter info like so:
```
declare @ProcedureName sysname;
set @ProcedureName = '[dbo].[YourProcedureName]';
select sp.name,
spa.name,
spa.is_output
from sys.procedures sp
join sys.parameters spa on
sp.object_id = spa.object_id
where sp.object_id = object_id(@ProcedureName);
``` | Try this one out :
```
select * from information_schema.parameters
where specific_name='your_procedure_name'
``` | Programmatically determine how many parameters in a stored procedure | [
"",
".net",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I have dict object something similar to like this,
```
topo = {
'name' : 'm0',
'children' : [{
'name' : 'm1',
'children' : []
}, {
'name' : 'm2',
'children' : []
}, {
'name' : 'm3',
'children' : []
}]
}
```
Now, i want to insert one more dict object let's say,
```
{
'name' : 'ABC',
'children' : []
}
```
as child of dict named "m2" inside m2's children array.
Could you please suggest how I should go about it?
Should I go for a separate data structure implementation? | I would suggest you first convert it to a data structure like this:
```
topo = {
'm0' : {
'm1' : {},
'm2' : {},
'm3' : {},
},
}
```
That is, you have made every value for the 'name' key be a key in a dictionary, and every value for the 'children' key be the value for that key, and changed it to a dictionary instead of a list.
Now you don't need to assume beforehand the index position where m2 is found. You do need to know that m2 is inside m0, but then you can simply say
```
topo['m0']['m2']['ABC'] = {}
```
You can convert between formats with this code:
```
def verbose_to_compact(verbose):
return { item['name']: verbose_to_compact(item['children']) for item in verbose }
def compact_to_verbose(compact):
return [{'name':key, 'children':compact_to_verbose(value)} for key, value in compact.items()]
```
Call them like this
```
compact_topo = verbose_to_compact([topo]) # function expects list; make one-item list
verbose_topo = compact_to_verbose(compact_topo)[0] # function returns list; extract the single item
```
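Putting the pieces together on the question's sample data, including the 'ABC' insertion (a self-contained sketch that repeats the two helpers):

```python
topo = {'name': 'm0',
        'children': [{'name': 'm1', 'children': []},
                     {'name': 'm2', 'children': []},
                     {'name': 'm3', 'children': []}]}

def verbose_to_compact(verbose):
    return {item['name']: verbose_to_compact(item['children']) for item in verbose}

def compact_to_verbose(compact):
    return [{'name': k, 'children': compact_to_verbose(v)} for k, v in compact.items()]

compact = verbose_to_compact([topo])
compact['m0']['m2']['ABC'] = {}          # insert the new node under m2
verbose = compact_to_verbose(compact)[0]
print(verbose['children'][1])  # {'name': 'm2', 'children': [{'name': 'ABC', 'children': []}]}
```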
I am assuming the format you have is the direct interpretation of some file format. You can read it in that way, convert it, work with it in the compact format, and then just convert it back when you need to write it out to a file again. | Your issue is a common tree structure; consider using <http://pythonhosted.org/ete2/tutorial/tutorial_trees.html> and populating each node with your dict value (don't reinvent the wheel). | Python: tree like implementation of dict datastructure | [
"",
"python",
"dictionary",
""
] |