Prompt stringlengths 10 31k | Chosen stringlengths 3 29.4k | Rejected stringlengths 3 51.1k | Title stringlengths 9 150 | Tags listlengths 3 7 |
|---|---|---|---|---|
Let's consider the following table:
```
ID Score
1 95
2 100
3 88
4 100
5 73
```
I am a total SQL noob, but how do I return the scores featured under both IDs 2 and 4?
So it should return 100, since it's featured under both ID 2 and ID 4 | This is an example of a "sets-within-sets" query. I recommend aggregation with the `having` clause, because it is the most flexible approach.
```
select score
from t
group by score
having sum(id = 2) > 0 and -- has id = 2
sum(id = 4) > 0 -- has id = 4
```
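As a quick sanity check of the query above, here is a sketch using Python's built-in `sqlite3` (where `id = 2` likewise evaluates to 0/1 inside `SUM`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, score INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(1, 95), (2, 100), (3, 88), (4, 100), (5, 73)])

# scores that appear under id 2 AND under id 4
rows = con.execute("""
    SELECT score
    FROM t
    GROUP BY score
    HAVING SUM(id = 2) > 0 AND SUM(id = 4) > 0
""").fetchall()
print(rows)  # [(100,)] -- the only score shared by ids 2 and 4
```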
What this is doing is aggregating by score. Then the first part of the `having` clause (`sum(id = 2)`) is counting up how many "2"s there are per score. The second is counting up how many "4"s. Only scores that have at least one "2" and at least one "4" are returned. | ```
SELECT score
FROM t
WHERE id in (2, 4)
HAVING COUNT(*) = 2 /* replace this with the number of IDs */
```
This selects the rows with ID 2 and 4. The `HAVING` clause then ensures that we found both rows; if either is missing, the count will be less than 2.
This assumes that `id` is a unique column. | How to return rows that have the same column values in MySql | [
"",
"mysql",
"sql",
"relational-division",
""
] |
I have 2 tables
One table with questions
```
ID Description
== ===========
1 Some Question
2 Some Question
3 Some Question
4 Some Question
```
And another one with the answers of every user to each question
```
ID_USER ID_QUESTION ANSWER
======= =========== =========
1 2 a
1 1 b
1 3 d
2 1 e
2 4 a
3 4 c
3 2 a
```
As you can see, it is possible that a user does not answer a question, and this is my problem.
I am currently trying to find which questions each user did not answer.
I'd like to have something like this
```
ID_USER ID_MISSING_QUESTION
======= ===================
1 4
2 3
2 2
3 1
3 3
```
I can easily find the missing questions for a single user, but I can't do that for every user since they are quite numerous.
Thanks Ayoye | Quick and Dirty:
```
SELECT TB_USER.ID, TB_QUESTION.ID AS "Q_ID" FROM TB_USER, TB_QUESTION
minus
SELECT ID_USER, ID_QUESTION FROM tb_answer
```
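Oracle's `MINUS` is spelled `EXCEPT` in most other engines; a sketch with Python's `sqlite3` checking the logic against the sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tb_user (id INTEGER);
    CREATE TABLE tb_question (id INTEGER);
    CREATE TABLE tb_answer (id_user INTEGER, id_question INTEGER);
    INSERT INTO tb_user VALUES (1), (2), (3);
    INSERT INTO tb_question VALUES (1), (2), (3), (4);
    INSERT INTO tb_answer VALUES (1,2),(1,1),(1,3),(2,1),(2,4),(3,4),(3,2);
""")

# every (user, question) pair, minus the pairs that were answered
missing = con.execute("""
    SELECT tb_user.id, tb_question.id
    FROM tb_user CROSS JOIN tb_question
    EXCEPT
    SELECT id_user, id_question FROM tb_answer
    ORDER BY 1, 2
""").fetchall()
print(missing)  # [(1, 4), (2, 2), (2, 3), (3, 1), (3, 3)]
```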
[Sql Fiddle Demo here.](http://sqlfiddle.com/#!4/7bc0d/16/0) | You should post the SQL statement(s) you tried before expecting a full answer; otherwise someone might think you want to let others write all the code for you...
Nevertheless, instead of a plain JOIN, use a `FULL OUTER JOIN`, or a `LEFT OUTER JOIN` resp. `RIGHT OUTER JOIN` depending on the table ordering in your SQL statement (which you did not post yet), and filter with `IS NULL`.
"",
"sql",
"oracle",
"plsql",
""
] |
Basically, I've read in several places that `socket.recv()` will return whatever it can read, or an empty string signalling that the other side has shut down (the official docs don't even mention what it returns when the connection is shut down... great!). This is all fine and dandy for blocking sockets, since we know that `recv()` only returns when there actually is something to receive, so when it returns an empty string, it **MUST** mean the other side has closed the connection, right?
Okay, fine, but what happens when my socket is non-blocking?? I have searched a bit (maybe not enough, who knows?) and can't figure out how to tell when the other side has closed the connection using a non-blocking socket. There seems to be no method or attribute that tells us this, and comparing the return value of `recv()` to the empty string seems absolutely useless... is it just me having this problem?
As a simple example, let's say my socket's timeout is set to 1.2342342 (whatever non-negative number you like here) seconds and I call `socket.recv(1024)`, but the other side doesn't send anything during that 1.2342342 second period. The `recv()` call will return an empty string and I have no clue as to whether the connection is still standing or not... | In the case of a non-blocking socket that has no data available, recv will throw the `socket.error` exception and the value of the exception will have the errno of either EAGAIN or EWOULDBLOCK. Example:
```
import sys
import socket
import fcntl, os
import errno
from time import sleep
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('127.0.0.1',9999))
fcntl.fcntl(s, fcntl.F_SETFL, os.O_NONBLOCK)
while True:
try:
msg = s.recv(4096)
except socket.error, e:
err = e.args[0]
if err == errno.EAGAIN or err == errno.EWOULDBLOCK:
sleep(1)
print 'No data available'
continue
else:
# a "real" error occurred
print e
sys.exit(1)
else:
        # got a message, do something :)
        pass
```
The situation is a little different in the case where you've enabled non-blocking behavior via a time out with [`socket.settimeout(n)`](https://docs.python.org/2.7/library/socket.html#socket.socket.settimeout) or [`socket.setblocking(False)`](https://docs.python.org/2.7/library/socket.html#socket.socket.setblocking). In this case a `socket.error` is still raised, but in the case of a time out, the accompanying value of the exception is always a string set to 'timed out'. So, to handle this case you can do:
```
import sys
import socket
from time import sleep
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('127.0.0.1',9999))
s.settimeout(2)
while True:
try:
msg = s.recv(4096)
except socket.timeout, e:
err = e.args[0]
# this next if/else is a bit redundant, but illustrates how the
# timeout exception is setup
if err == 'timed out':
sleep(1)
print 'recv timed out, retry later'
continue
else:
print e
sys.exit(1)
except socket.error, e:
# Something else happened, handle error, exit, etc.
print e
sys.exit(1)
else:
if len(msg) == 0:
print 'orderly shutdown on server end'
sys.exit(0)
else:
            # got a message, do something :)
            pass
```
As indicated in the comments, this is also a more portable solution since it doesn't depend on OS-specific functionality to put the socket into non-blocking mode.
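On Python 3 the same distinctions can be observed without any OS-specific calls: no data on a non-blocking socket raises `BlockingIOError`, while an orderly shutdown by the peer yields `b''`. A sketch using `socket.socketpair()`:

```python
import select
import socket

a, b = socket.socketpair()
a.setblocking(False)

# connection open but nothing sent yet: recv raises (EAGAIN/EWOULDBLOCK)
try:
    a.recv(1024)
    state = "got data"
except BlockingIOError:
    state = "would block"

b.sendall(b"hello")
select.select([a], [], [], 1.0)   # wait until the socket is readable
msg = a.recv(1024)                # data was pending, returned immediately

b.close()
select.select([a], [], [], 1.0)
eof = a.recv(1024)                # b'' -- the peer closed the connection
a.close()
print(state, msg, eof)
```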
See [recv(2)](http://linux.die.net/man/2/recv) and [python socket](https://docs.python.org/2.7/library/socket.html) for more details. | It is simple: if `recv()` returns 0 bytes, [you will not receive any more data on this connection. Ever.](http://docs.python.org/2/howto/sockets.html) You still might be able to send.
It means that your non-blocking socket has to raise an exception (it might be system-dependent) if no data is available but the connection is still alive (the other end may still send).
"",
"python",
"sockets",
"nonblocking",
"recv",
""
] |
I'm new to Boto, and I'm trying to use it to insert a Python dictionary into Amazon DynamoDB. I must be missing something, because the "dynamizer" (encoder) does not seem to support None values. This is a problem because the source data has tons of nulls in it. I could go through each row and delete out all of the key/value items where the value is None, but somehow I feel like a package as sophisticated as Boto should take care of that for me. I'm simply trying to insert one row like so:
```
conn = DynamoDBConnection(region=RegionInfo(endpoint="dynamodb.us-west-2.amazonaws.com"))
dest = Table('d_company', connection=conn)
data = {"company_id":99999, "company_name":None}
dest.put_item(data)
```
...and this gives me the error:
```
Error
Traceback (most recent call last):
File "TestDynamoDB.py", line 37, in testPutIntoDynamoDB
dest.put_item(data)
File "C:\Python27\lib\site-packages\boto\dynamodb2\table.py", line 452, in put_item
return item.save(overwrite=overwrite)
File "C:\Python27\lib\site-packages\boto\dynamodb2\items.py", line 362, in save
final_data = self.prepare_full()
File "C:\Python27\lib\site-packages\boto\dynamodb2\items.py", line 265, in prepare_full
final_data[key] = self._dynamizer.encode(value)
File "C:\Python27\lib\site-packages\boto\dynamodb\types.py", line 228, in encode
dynamodb_type = self._get_dynamodb_type(attr)
File "C:\Python27\lib\site-packages\boto\dynamodb\types.py", line 220, in _get_dynamodb_type
return get_dynamodb_type(attr)
File "C:\Python27\lib\site-packages\boto\dynamodb\types.py", line 110, in get_dynamodb_type
raise TypeError(msg)
TypeError: Unsupported type "<type 'NoneType'>" for value "None"
```
What am I doing wrong? | You are not doing anything wrong. While [boto](http://docs.pythonboto.org/en/latest/index.html) is sophisticated indeed, you have to remember that it does not have knowledge of your business logic.
For example, there are at least a few ways someone can think of saving `None` to DynamoDB database:
* as "None" string
* as empty string
* don't save at all - remember, it is NoSQL database, attributes are not required
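For the last option, a minimal sketch (plain Python, no boto required) of dropping `None`-valued attributes before calling `put_item`:

```python
data = {"company_id": 99999, "company_name": None}

# keep only the attributes that actually have a value
clean = {k: v for k, v in data.items() if v is not None}
print(clean)  # {'company_id': 99999}
```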
The best way to decide is in your code: if your data could be `None`, don't add it to the dictionary. | Attribute values in DynamoDB can be neither empty strings nor empty sets. While I discovered this empirically, the most direct reference I see to this is here:
<http://awsdocs.s3.amazonaws.com/dynamodb/latest/dynamodb-dg.pdf>
So the second bullet suggested by the accepted answer will not work.
The third bullet of the accepted answer is the best approach. From a design perspective, it more closely aligns with the NoSQL paradigm and will likely provide *some* degree of efficiency over attempting to identify a None/NULL representation for each data type and storing it. This paradigm then manifests itself in your logic as checking existence/membership of the key (if/then or try/except, depending on the scenario) as opposed to checking the key for a "None/NULL equivalent" value.
If you truly seek to store attributes with a value that you equate to NULL/None, I'd recommend establishing a unique/proprietary value for accomplishing this within your application, something more identifiable than, in the case of a string, just 'None'.
(I'd rather have simply commented on the existing answer, but my status as a new user on stackoverflow evidently prevents me from doing so...hope this wasn't poor etiquette...) | Inserting None values into DynamoDB using Boto | [
"",
"python",
"nosql",
"boto",
"amazon-dynamodb",
""
] |
Imagine that I have a column like this:
```
Var: 1, 1, , 3, 2
-
Name: Ben, John, Josh, Bill
```
How can I select entries with the same `VAR` column value? Like, if I want entries with value `1` in the `VAR` column, it will give me `Ben` and `Josh`. | This will give you the records of every `VAR` value that occurs in multiple records.
```
SELECT a.*
FROM TableName a
WHERE EXISTS
(
SELECT 1
FROM TableName b
WHERE a.Var = b.Var
GROUP BY Var
HAVING COUNT(*) > 1
)
```
* [SQLFiddle Demo](http://www.sqlfiddle.com/#!2/af3a1e/2)
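A sketch of the `EXISTS` variant with Python's `sqlite3`, using sample data where only `Var = 1` is duplicated:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE TableName (Var INTEGER, Name TEXT)")
con.executemany("INSERT INTO TableName VALUES (?, ?)",
                [(1, 'Ben'), (1, 'Josh'), (2, 'John'), (3, 'Bill')])

# rows whose Var value occurs more than once in the table
rows = con.execute("""
    SELECT a.*
    FROM TableName a
    WHERE EXISTS (
        SELECT 1 FROM TableName b
        WHERE a.Var = b.Var
        GROUP BY Var
        HAVING COUNT(*) > 1)
""").fetchall()
print(rows)  # [(1, 'Ben'), (1, 'Josh')]
```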
Another way to solve this is by using `JOIN`,
```
SELECT a.*
FROM TableName a
INNER JOIN
(
SELECT Var
FROM TableName b
GROUP BY Var
HAVING COUNT(*) > 1
) b ON a.Var = b.Var
```
* [SQLFiddle Demo](http://www.sqlfiddle.com/#!2/af3a1e/3)
But it adds a little confusion when you add this line: *"..if I want entries with value 1 in the VAR column, it will give me: Ben and Josh"* -- Do you want to specify `VAR` or not? [Like this demo <<](http://www.sqlfiddle.com/#!2/af3a1e/4) | Try
```
SELECT name
FROM table1
WHERE var IN (SELECT MIN(var)
FROM table1
GROUP BY var
HAVING COUNT(*) > 1)
```
Here is **[SQLFiddle](http://sqlfiddle.com/#!2/c42fae/3)** demo. | SQL Select Entries with same Column Values | [
"",
"sql",
""
] |
There are two ways to authenticate a user using Django Auth LDAP
1. Search/Bind and
2. Direct Bind.
The first one involves connecting to the LDAP server either anonymously or with a fixed account and searching for the distinguished name of the authenticating user. Then we can attempt to bind again with the user’s password.
The second method is to derive the user’s DN from his username and attempt to bind as the user directly.
I want to be able to do a direct bind using the userid (sAMAccountName) and password of the user who is trying to gain access to the application. Please let me know if there is a way to achieve this? At the moment, I cannot seem to make this work due to the problem explained below.
In my case, the DN of users in LDAP is of the following format
```
**'CN=Steven Jones,OU=Users,OU=Central,OU=US,DC=client,DC=corp'**
```
This basically translates to 'CN=FirstName LastName,OU=Users,OU=Central,OU=US,DC=client,DC=corp'
This is preventing me from using Direct Bind, as the **sAMAccountName** of the user is **sjones** and this is the parameter that corresponds to the user name (%user), and I can't figure out a way to frame a proper **AUTH\_LDAP\_USER\_DN\_TEMPLATE** to derive the user's DN.
Due to the above explained problem, I am using Search/Bind for now but this requires me to have a fixed user credential to be specified in **AUTH\_LDAP\_BIND\_DN** and **AUTH\_LDAP\_BIND\_PASSWORD**.
Here is my current **settings.py** configuration
```
AUTH_LDAP_SERVER_URI = "ldap://10.5.120.161:389"
AUTH_LDAP_BIND_DN='CN=Steven Jones,OU=Users,OU=Central,OU=US,DC=client,DC=corp'
AUTH_LDAP_BIND_PASSWORD='fga.1234'
#AUTH_LDAP_USER_DN_TEMPLATE = 'CN=%(user)s,OU=Appl Groups,OU=Central,OU=US,DC=client,DC=corp'
AUTH_LDAP_USER_SEARCH = LDAPSearchUnion(
LDAPSearch("OU=Users, OU=Central,OU=US,DC=client,DC=corp",ldap.SCOPE_SUBTREE, "(sAMAccountName=%(user)s)"),
LDAPSearch("OU=Users,OU=Regional,OU=Locales,OU=US,DC=client,DC=corp",ldap.SCOPE_SUBTREE, "(sAMAccountName=%(user)s)"),
)
AUTH_LDAP_USER_ATTR_MAP = {"first_name": "givenName", "last_name": "sn","email":"mail"}
AUTH_LDAP_GROUP_SEARCH = LDAPSearch("CN=GG_BusinessApp_US,OU=Appl Groups,OU=Central,OU=US,DC=client,DC=corp",ldap.SCOPE_SUBTREE, "(objectClass=groupOfNames)")
AUTH_LDAP_GROUP_TYPE = GroupOfNamesType()
AUTH_LDAP_REQUIRE_GROUP = 'CN=GG_BusinessApp_US,OU=Appl Groups,OU=Central,OU=US,DC=client,DC=corp'
```
Looking forward for some guidance from the wonderful folks in here. | I had the same issue.
I ran across ticket 21 in the now-deleted bitbucket repository. (`cant-bind-and-search-on-activedirectory`). The issues were not migrated to [their github](https://github.com/django-auth-ldap/django-auth-ldap/issues), but the author brought up a way to change the library files for `django-auth-ldap` so that it could do a direct bind.
It came down to changing `<python library path>/django_auth_ldap/backend.py` to include two lines in `_authenticate_user_dn`:
```
if sticky and ldap_settings.AUTH_LDAP_USER_SEARCH:
self._search_for_user_dn()
```
I was able to get this to work on my local machine that was running Arch Linux 3.9.8-1-ARCH, but I was unable to replicate it on the dev server running Ubuntu 13.04.
Hopefully this can help. | (This is actually a comment to @amethystdragon's answer, but it's a bunch of code, so posting as a separate answer.) The problem still seems to exist with django\_auth\_ldap 1.2.5. Here's an updated patch. If you don't want or can't modify the source code, monkey-patching is possible. Just put this code to eg. end of `settings.py`. (And yes, I know monkey-patching is ugly.)
```
import ldap
from django_auth_ldap import backend
def monkey(self, password):
"""
Binds to the LDAP server with the user's DN and password. Raises
AuthenticationFailed on failure.
"""
if self.dn is None:
raise self.AuthenticationFailed("failed to map the username to a DN.")
try:
sticky = self.settings.BIND_AS_AUTHENTICATING_USER
self._bind_as(self.dn, password, sticky=sticky)
#### The fix -->
if sticky and self.settings.USER_SEARCH:
self._search_for_user_dn()
#### <-- The fix
except ldap.INVALID_CREDENTIALS:
raise self.AuthenticationFailed("user DN/password rejected by LDAP server.")
backend._LDAPUser._authenticate_user_dn = monkey
``` | Django Auth LDAP - Direct Bind using sAMAccountName | [
"",
"python",
"django",
"active-directory",
"django-authentication",
"django-auth-ldap",
""
] |
I am using this query to rename the database:
```
ALTER DATABASE BOSEVIKRAM MODIFY NAME = [BOSEVIKRAM_Deleted]
```
But it shows an error when executing:
> Msg 5030, Level 16, State 2, Line 1
> The database could not be exclusively locked to perform the operation.
Is anything wrong with my query? | You could try setting the database to single user mode.
<https://stackoverflow.com/a/11624/2408095>
```
use master
ALTER DATABASE BOSEVIKRAM SET SINGLE_USER WITH ROLLBACK IMMEDIATE
ALTER DATABASE BOSEVIKRAM MODIFY NAME = [BOSEVIKRAM_Deleted]
ALTER DATABASE BOSEVIKRAM_Deleted SET MULTI_USER
``` | 1. Set the database to single-user mode:
```
ALTER DATABASE dbName
SET SINGLE_USER WITH ROLLBACK IMMEDIATE
```
2. Try to rename the database:
```
ALTER DATABASE dbName MODIFY NAME = NewName
```
3. Set the database back to multi-user mode:
```
ALTER DATABASE NewName
SET MULTI_USER WITH ROLLBACK IMMEDIATE
``` | Error on renaming database in SQL Server 2008 R2 | [
"",
"sql",
"sql-server",
"sql-server-2008-r2",
""
] |
In my database table (MySQL), there is a column with `1` and `0` to represent `true` and `false` respectively.
But in `SELECT`, I need to replace it for `true` or `false` in order to print in a GridView.
How do I make my `SELECT` query to do this?
In my current table:
```
id | name | hide
1 | Paul | 1
2 | John | 0
3 | Jessica | 1
```
I need it to show thereby:
```
id | name | hide
1 | Paul | true
2 | John | false
3 | Jessica | true
``` | You have a number of choices:
1. Join with a domain table holding the `TRUE`/`FALSE` *Boolean* values.
2. Use a `CASE` expression ([as pointed out in this answer](https://stackoverflow.com/a/16753165/2308745)):
```
SELECT CASE WHEN hide = 0 THEN FALSE ELSE TRUE END FROM
```
Or if *Boolean* is not supported:
```
SELECT CASE WHEN hide = 0 THEN 'false' ELSE 'true' END FROM
``` | I got the solution
```
SELECT
CASE status
WHEN 'VS' THEN 'validated by subsidiary'
WHEN 'NA' THEN 'not acceptable'
WHEN 'D' THEN 'delisted'
ELSE 'validated'
END AS STATUS
FROM SUPP_STATUS
```
This uses a `CASE` expression.
It is another way to manipulate the selected value, for more than two options. | SQL How to replace values of select return? | [
"",
"mysql",
"sql",
"select",
""
] |
What techniques/modules would you use to parse specific string sections? Given lines of the type:
```
field 1: dog field 2: first comment: outstanding
field 1: cat field 2: comment: some comment about the cat
```
The field names always end with a colon, the field values can be blank, and the fields are separated by spaces only. I just want access to the field values. I know how I would do this using regular expressions, but I'm sure there are more elegant ways to do this with Python. | This looks like a fixed-width format to me.
If so, you could do this:
```
data={}
ss=((0,19),(20,41),(42,80))
with open('/tmp/p.txt','r') as f:
for n,line in enumerate(f):
fields={}
for i,j in ss:
field=line[i:j]
t=field.split(':')
fields[t[0].strip()]=t[1].strip()
data[n]=fields
print data
```
Prints:
```
{0: {'comment': 'outstanding', 'field 2': 'first', 'field 1': 'dog'}, 1: {'comment': 'some comment about the cat', 'field 2': '', 'field 1': 'cat'}}
```
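The same idea in Python 3 with `str.partition`, on self-contained sample lines (the column width `w = 21` is an assumption for these examples, not a general rule):

```python
w = 21  # assumed fixed column width for the sample lines below
lines = [
    "field 1: dog".ljust(w) + "field 2: first".ljust(w) + "comment: outstanding",
    "field 1: cat".ljust(w) + "field 2:".ljust(w) + "comment: some comment about the cat",
]
slices = ((0, w), (w, 2 * w), (2 * w, 100))

rows = []
for line in lines:
    fields = {}
    for i, j in slices:
        # partition splits on the first colon only, so values may contain colons
        name, _, value = line[i:j].partition(':')
        fields[name.strip()] = value.strip()
    rows.append(fields)

print(rows[0]['comment'])        # outstanding
print(repr(rows[1]['field 2']))  # '' (blank value)
```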
If you want a list:
```
data=[]
ss=((0,19),(20,41),(42,80))
with open('/tmp/p.txt','r') as f:
for n,line in enumerate(f):
fields={}
for i,j in ss:
field=line[i:j]
t=field.split(':')
fields[t[0].strip()]=t[1].strip()
data.append(fields)
```
In either case, then you can access:
```
>>> data[0]['comment']
'outstanding'
``` | Something like this:
```
>>> with open("abc") as f:
lis = []
for line in f:
lis.append(dict( map(str.strip, x.split(":")) for x in line.split(" "*8)))
...
>>> lis
[{'comment': 'outstanding', 'field 2': 'first', 'field 1': 'dog'},
{'comment': 'some comment about the cat', 'field 2': '', 'field 1': 'cat'}
]
>>> lis[0]['comment'] #access 'comment' field on line 1
'outstanding'
>>> lis[1]['field 2'] # access 'field 2' on line 2
''
``` | Parsing string sections | [
"",
"python",
""
] |
I have built my query here thus far:
```
SELECT POLICYNUMBER, FPLAN, FEFFYY, FEFFMM, FEFFDD, FINSTP, CLIENTNUM, FIRSTNAME,
MIDNAME, LASTNAME, BIRTHDATE FROM PFCASBENE
INNER JOIN CMRELATN ON POLICYNUMBER = KEYFIELD1
INNER JOIN CMPERSON ON CLIENTNUM = CLIENTID
WHERE FPSFLG='I' OR FPSFLG='P' ORDER BY CLIENTNUM ASC
```
The FINSTP field has a single character for an insurance type code.
EDIT: POLICYNUMBER, FPLAN, FEFFYY, FEFFMM, FEFFDD, FINSTP are fields in PFCASBENE
CLIENTNUM is a field in CMRELATN and FIRSTNAME,
MIDNAME, LASTNAME, BIRTHDATE are from CMPERSON. I should have said that before.
I want to return the results if the client **only** has policies where FINSTP='F'. If they have other policies with FINSTP = 'X', 'V', etc., then I don't want any of the client's records in the results.
This query returns multiple rows since a client can have multiple policies. I can get results if I put FINSTP='F' in the WHERE clause, but that's not what I want. That returns all policies that are 'F'. I'm not sure what else I need to add to the where clause to tune this query to what I need it to do.
This is for a DB2 on an AS/400 system.
Any help would be greatly appreciated!
* Josh | ```
SELECT POLICYNUMBER, FPLAN, FEFFYY, FEFFMM, FEFFDD, FINSTP, CLIENTNUM, FIRSTNAME,
MIDNAME, LASTNAME, BIRTHDATE FROM PFCASBENE
INNER JOIN CMRELATN ON POLICYNUMBER = KEYFIELD1
INNER JOIN CMPERSON ON CLIENTNUM = CLIENTID
WHERE (FPSFLG='I' OR FPSFLG='P') AND CLIENTNUM not in (
SELECT CLIENTNUM FROM PFCASBENE
INNER JOIN CMRELATN ON POLICYNUMBER = KEYFIELD1
INNER JOIN CMPERSON ON CLIENTNUM = CLIENTID
WHERE FINSTP <> 'F')
ORDER BY CLIENTNUM ASC
```
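The `NOT IN` exclusion can be checked in miniature with Python's `sqlite3` (table and data are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE policy (clientnum INTEGER, finstp TEXT)")
con.executemany("INSERT INTO policy VALUES (?, ?)",
                [(1, 'F'), (1, 'F'), (2, 'F'), (2, 'X'), (3, 'V')])

# clients with no policy other than 'F'
rows = con.execute("""
    SELECT DISTINCT clientnum FROM policy
    WHERE clientnum NOT IN (
        SELECT clientnum FROM policy WHERE finstp <> 'F')
""").fetchall()
print(rows)  # [(1,)] -- only client 1 holds exclusively 'F' policies
```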
The sub-select returns every client that has at least one record with FINSTP not equal to 'F'; the outer query then excludes those clients entirely, even their rows where FINSTP is 'F'. A client whose policies are *only* 'F' is never excluded.
I'm basing this off of the OP stating:
I want to return the results if the client ***only*** has policies where FINSTP='F'. If they have other policies with FINSTP = 'X', 'V', etc., then I don't want any of the client's records in the results. | This should do it:
```
SELECT POLICYNUMBER, FPLAN, FEFFYY, FEFFMM, FEFFDD, FINSTP, CLIENTNUM, FIRSTNAME,
MIDNAME, LASTNAME, BIRTHDATE
FROM PFCASBENE
JOIN CMRELATN ON POLICYNUMBER = KEYFIELD1
JOIN CMPERSON ON CLIENTNUM = CLIENTID
WHERE FPSFLG IN ('I', 'P')
AND CLIENTNUM NOT IN (
SELECT CLIENTNUM
FROM PFCASBENE
JOIN CMRELATN ON POLICYNUMBER = KEYFIELD1
JOIN CMPERSON ON CLIENTNUM = CLIENTID
WHERE FINSTP != 'F'
)
ORDER BY CLIENTNUM ASC
```
I improved your main query by using `FPSFLG IN ('I', 'P')` rather than `FPSFLG 'I' OR FPSFLG = 'P'`, which apart from being neater, is more likely to use an index (if one exists on that column).
I would have coded this as a left join, but I have no idea which columns come from which tables - too much guess work, and this query will work. | SQL select query to return only if a record has a field of a single value | [
"",
"sql",
"select",
"db2",
""
] |
I was trying to create a simple random number guessing game. The problem is that even if I type the correct number, it replies with 'The number is less than'. Can somebody provide me a solution for this one?
Thanks in advance
```
import random
import sys
numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9]
user = raw_input('Guess The Number\n Pick between 1 - 10\n >>> ')
try:
int(user)
except:
print "Numbers Only !"
sys.exit(0)
number = random.choice(numbers)
int(number)
for i in range(0, 4):
if number == user:
print 'You Won!'
if user > number:
print 'The number is less than', user
user = raw_input('>>> ')
try:
int(user)
except:
print "Numbers Only !"
if user < number:
print 'The number is bigger than', user
user = raw_input('>>> ')
int(user)
print "The Number was", number
``` | The biggest problem is that you're not saving the conversion to `int` so you're using the guess as the string the user entered. You need to save it by doing `user = int(raw_input('>>>'))`
There are other ways you can improve this code, however. You repeat yourself a bit, and you don't need `random.choice`, you can use `random.randrange(1, 10)`
You shouldn't just write a bare `except:`. You want to catch only the exceptions you are looking for. The particular exception you are looking for is a `ValueError`
Additionally, I suggest you let the user try again when they enter something that's not a number. You can wrap the whole thing up in its own function.
```
import random
def get_user_num(msg='>>> '):
"""Print the msg parameter as a prompt for the user to enter a number. If
they enter an invalid string, reprompt them until they enter a number.
"""
while True:
try:
return int(raw_input(msg)) # save the conversion to int
except ValueError: # only except the error you're actually looking for
print 'Numbers Only!'
# 'from 1-9' is probably better than 'between 1-10'
user = get_user_num('Guess The Number\n Pick from 1-9\n>>> ')
number = random.randrange(1, 10) # <- numbers list is unnecessary
#int(number) # this conversion was never needed, it was already a number
for _ in range(4): # you don't need (0, 4), 0 is assumed
if number == user:
print 'You Won!' # the correct number has been guessed
break # exit the loop once the number has been correctly guessed
elif user > number:
print 'The number is less than', user
elif user < number:
print 'The number is bigger than', user
# Don't repeat yourself, put this outside the `if`s
user = get_user_num()
else:
#only print the answer when it wasn't guessed correctly
print "The Number was", number
``` | When you convert to int(user), you aren't saving a new int to user. So user still remains a string.
What you need to do is
```
user = int(user)
```
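The pitfall in miniature (Python 3 shown; under Python 2 the `str`/`int` comparison silently compared the *types*, with numbers sorting before strings, which is why the game always answered 'less than'):

```python
user = "5"              # raw_input()/input() always returns a str
number = 5
print(user == number)   # False: the string '5' is not the integer 5

user = int(user)        # save the conversion back into the variable
print(user == number)   # True
```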
By the way, this is for all of the places where you use int(user) | Number Guessing Game in Python | [
"",
"python",
"random",
""
] |
Using the remote api (`remote_api_shell.py`) works fine on the production server. However, it only works on the development server when the development server is serving on `localhost`. It does not work when the server is running on specific IP (for example, `dev_appserver.py --host=192.168.0.1`).
This is using the Python SDK. I'm kinda sure this worked on version `1.7.5`. It does not work on `1.7.6` or `1.8.0`.
Here's a specific case:
Run the server and let it bind to the default address (`localhost:8080`):
```
/path/to/dev_appserver.py myapp/app.yaml
INFO 2013-05-25 19:11:15,071 sdk_update_checker.py:244] Checking for updates to the SDK.
INFO 2013-05-25 19:11:15,323 api_server.py:152] Starting API server at: http://localhost:39983
INFO 2013-05-25 19:11:15,403 dispatcher.py:98] Starting server "default" running at: http://localhost:8080
INFO 2013-05-25 19:11:15,405 admin_server.py:117] Starting admin server at: http://localhost:8000
```
Start the remote API shell, and it works fine:
```
$ ./remote_api_shell.py -s localhost:8080
Email: x@x
Password:
App Engine remote_api shell
Python 2.7.2+ (default, Jul 20 2012, 22:15:08)
[GCC 4.6.1]
The db, ndb, users, urlfetch, and memcache modules are imported.
dev~furloughfun>
```
However, if you start the server with a specified host:
```
/path/to/dev_appserver.py --host=192.168.0.1 myapp/app.yaml
INFO 2013-05-25 19:11:53,304 sdk_update_checker.py:244] Checking for updates to the SDK.
INFO 2013-05-25 19:11:53,554 api_server.py:152] Starting API server at: http://localhost:44650
INFO 2013-05-25 19:11:53,633 dispatcher.py:98] Starting server "default" running at: http://192.168.0.1:8080
INFO 2013-05-25 19:11:53,634 admin_server.py:117] Starting admin server at: http://localhost:8000
```
Notice it says `Starting API server at: http://localhost:44650` even though the content is served at `http://192.168.0.1:8080`. Is this indicative that you can **only** run the remote api on localhost? Perhaps for security reasons?
Also, when you try the `remote_api_shell.py` now, you can only log in with a valid account (no bogus accounts allowed) and it immediately errors and terminates.
The console errors end with:
```
urllib2.HTTPError: HTTP Error 200: OK
```
and the local development server outputs:
```
INFO 2013-05-25 19:24:06,674 server.py:528] "GET /_ah/remote_api?rtok=90927106532 HTTP/1.1" 401 57
```
Anyone know what's going on here?
Is it impossible to access the remote API other than on `localhost`?
Is it impossible to access the remote API (even on `localhost`) if your content is served on a specific IP? | It seems that the API server doesn't have an option to set its host. `dev_appserver.py` has options to set the host & port for the content and admin servers, but for the API server only the port (the `api_port` option). Example:
```
dev_appserver.py --host=192.168.5.92
--admin_host 192.168.5.92 --admin_port 9000
--api_port 7000 .
```
Running this reports:
```
api_server.py:153] Starting API server at: http://localhost:7000
dispatcher.py:164] Starting server "default" running at: http://192.168.5.92:8080
admin_server.py:117] Starting admin server at: http://192.168.5.92:9000
```
Looking into the source of the GAE dev_appserver, the caller of the `api_server` method that starts the server is the module `devappserver2.py`, at this line:
```
apis = api_server.APIServer('localhost', options.api_port,
configuration.app_id)
```
You can see the hardcoded host name `localhost`.
If you find no good workaround, I would suggest *patching* `devappserver2.py` by introducing a new option and [reporting the issue with the patch attached](http://code.google.com/p/googleappengine/wiki/FilingIssues?tm=3). | Now, at least from version 1.9.27 of the SDK, the option `--api_host` helps with that. | How do I make Google App Engine python SDK Remote API work with local development server when server is bound to specific IP (not localhost)? | [
"",
"python",
"google-app-engine",
""
] |
I have a class like this containing one or more numeric elements.
```
class Foo:
# ... other methods ...
def _update(self, f):
# ... returns a new Foo() object based on transforming
# one or more data members with a function f()
def __add__(self, k):
return self._update(lambda x: x.__add__(k))
def __radd__(self, k):
return self._update(lambda x: x.__radd__(k))
def __sub__(self, k):
return self._update(lambda x: x.__sub__(k))
def __rsub__(self, k):
return self._update(lambda x: x.__rsub__(k))
def __mul__(self, k):
return self._update(lambda x: x.__mul__(k))
def __rmul__(self, k):
return self._update(lambda x: x.__rmul__(k))
def __div__(self, k):
return self._update(lambda x: x.__div__(k))
def __rdiv__(self, k):
return self._update(lambda x: x.__rdiv__(k))
# I want to add other numeric methods also
```
Is there any way to generalize this for all the [numeric methods](http://docs.python.org/2/reference/datamodel.html#numeric-types), without having to do this for each and every one of them?
I just want to delegate any method in the list of numeric methods. | You want to use the [`operator` module](http://docs.python.org/2/library/operator.html) here, together with a (short) list of binary numeric operator names, without the underscores for compactness:
```
import operator
numeric_ops = 'add div floordiv mod mul pow sub truediv'.split()
def delegated_arithmetic(handler):
def add_op_method(op, cls):
op_func = getattr(operator, op)
def delegated_op(self, k):
getattr(self, handler)(lambda x: op_func(x, k))
setattr(cls, '__{}__'.format(op), delegated_op)
def add_reflected_op_method(op, cls):
op_func = getattr(operator, op)
def delegated_op(self, k):
getattr(self, handler)(lambda x: op_func(k, x))
setattr(cls, '__r{}__'.format(op), delegated_op)
def decorator(cls):
for op in numeric_ops:
add_op_method(op, cls)
add_reflected_op_method(op, cls) # reverted operation
add_op_method('i' + op, cls) # in-place operation
return cls
return decorator
```
Now just decorate your class:
```
@delegated_arithmetic('_update')
class Foo:
# ... other methods ...
def _update(self, f):
# ... returns a new Foo() object based on transforming
# one or more data members with a function f()
```
The decorator takes the name you wanted to delegate the call to to make this a little more generic.
Demo:
```
>>> @delegated_arithmetic('_update')
... class Foo(object):
... def _update(self, f):
... print 'Update called with {}'.format(f)
... print f(10)
...
>>> foo = Foo()
>>> foo + 10
Update called with <function <lambda> at 0x107086410>
20
>>> foo - 10
Update called with <function <lambda> at 0x107086410>
0
>>> 10 + foo
Update called with <function <lambda> at 0x107086410>
20
>>> 10 - foo
Update called with <function <lambda> at 0x107086410>
0
``` | Use a class decorator:
```
def add_numerics(klass):
for numeric_fn in ['add','radd','sub','rsub','mul','rmul','div','rdiv']:
dunder_fn = '__{}__'.format(numeric_fn)
        # bind dunder_fn as a default argument, otherwise every generated
        # method would use the last value of the loop variable
        setattr(klass, dunder_fn, lambda self, k, dunder_fn=dunder_fn: self._update(lambda x: getattr(x, dunder_fn)(k)))
return klass
@add_numerics
class Foo:
def _update(self, f):
# ...
return Foo()
``` | python: generalizing delegating a method | [
"",
"python",
"magic-methods",
""
] |
I am implementing an algorithm which requires me to look at non-overlapping consecutive submatrices within a (strictly two-dimensional) numpy array. For example, for the 12 by 12
```
>>> a = np.random.randint(20, size=(12, 12)); a
array([[ 4, 0, 12, 14, 3, 8, 14, 12, 11, 18, 6, 6],
[15, 13, 2, 18, 15, 15, 16, 2, 9, 16, 6, 4],
[18, 18, 3, 8, 1, 15, 14, 13, 13, 13, 7, 0],
[ 1, 9, 3, 6, 0, 4, 3, 15, 0, 9, 11, 12],
[ 5, 15, 5, 6, 4, 4, 18, 13, 10, 17, 11, 8],
[13, 17, 8, 15, 17, 12, 7, 1, 13, 15, 0, 18],
[ 2, 1, 11, 12, 3, 16, 11, 9, 10, 15, 4, 16],
[19, 11, 10, 7, 10, 19, 7, 13, 11, 9, 17, 8],
[14, 14, 17, 0, 0, 0, 11, 1, 10, 14, 2, 7],
[ 6, 15, 6, 7, 15, 19, 2, 4, 6, 16, 0, 3],
[ 5, 10, 7, 5, 0, 8, 5, 8, 9, 14, 4, 3],
[17, 2, 0, 3, 15, 10, 14, 1, 0, 7, 16, 2]])
```
and looking at 3x3 submatrices, I would want the first 3x3 submatrix to be from the upper left corner:
```
>>> a[0:3, 0:3]
array([[ 4, 0, 12],
[15, 13, 2],
[18, 18, 3]])
```
The next along to be given by `a[0:3, 3:6]` and so on. It doesn't matter if the last such set of indices in each row or column runs off the end of the array - numpy's behavior of simply giving the portion within the slice that exists is sufficient.
I want a way to generate these slice indices programmatically for arbitrarily sized matrices and submatrices. I currently have this:
```
size = 3
x_max = a.shape[0]
xcoords = range(0, x_max, size)
xcoords = zip(xcoords, xcoords[1:])
```
and similarly to generate `y_coords`, so that the series of indices is given by `itertools.product(xcoords, ycoords)`.
My question is: is there a more direct way to do this, perhaps using [`numpy.mgrid`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.mgrid.html#numpy.mgrid) or some other numpy technique? | # Getting the indexes
Here's a quick way to get a specific `size x size` block:
```
base = np.arange(size) # Just the base set of indexes
row = 1 # Which block you want
col = 0
block = a[base[:, np.newaxis] + row * size, base + col * size]
```
If you wanted you could build up matrices similar to your `xcoords` like:
```
y, x = np.mgrid[0:a.shape[0]/size, 0:a.shape[1]/size]
y_coords = y[..., np.newaxis] * size + base
x_coords = x[..., np.newaxis] * size + base
```
Then you could access a block like this:
```
block = a[y_coords[row, col][:, np.newaxis], x_coords[row, col]]
```
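A quick self-check of those index arrays (illustrative only; it uses a deterministic 12×12 array and floor division so the sketch also runs under Python 3):

```python
import numpy as np

a = np.arange(144).reshape(12, 12)  # stand-in for the 12x12 example
size = 3
base = np.arange(size)
y, x = np.mgrid[0:a.shape[0] // size, 0:a.shape[1] // size]
y_coords = y[..., np.newaxis] * size + base
x_coords = x[..., np.newaxis] * size + base
row, col = 1, 0
block = a[y_coords[row, col][:, np.newaxis], x_coords[row, col]]
assert (block == a[3:6, 0:3]).all()  # block (1, 0) is rows 3:6, cols 0:3
```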
---
# Getting the blocks directly
If you just want to get the blocks (and not the indexes of the block entries), I'd use [`np.split`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.split.html) (twice):
```
blocks = map(lambda x : np.split(x, a.shape[1]/size, 1), # Split the columns
np.split(a, a.shape[0]/size, 0)) # Split the rows
```
then you have a 2D list of `size x size` blocks:
```
>>> blocks[0][0]
array([[ 4, 0, 12],
[15, 13, 2],
[18, 18, 3]])
>>> blocks[1][0]
array([[ 1, 9, 3],
[ 5, 15, 5],
[13, 17, 8]])
```
You could then make this a numpy array and use the same indexing style as above:
```
>>> blocks = np.array(blocks)
>>> blocks.shape
(4, 4, 3, 3)
``` | You can use the one-liner:
```
r = 3
c = 3
lenr = a.shape[0]/r
lenc = a.shape[1]/c
np.array([a[i*r:(i+1)*r,j*c:(j+1)*c] for (i,j) in np.ndindex(lenr,lenc)]).reshape(lenr,lenc,r,c)
``` | Indices of fixed size sub-matrices of numpy array | [
"",
"python",
"arrays",
"numpy",
""
] |
I need a little help understanding this code:
```
x = [1,2,3]
y = [4,5,6]
x = y
print x
print y
y.remove(4)
print x
print y
```
When I remove `4` from the list `[4,5,6]`, both the `x` and `y` variables will point to the same `[5,6]` list in memory am I right?
If so, then how can I make `x` equal to a replica of the `y` list so that `x` and `y` both point to different places in memory yet those places both hold the same value? (As opposed to them both pointing to the same place in memory as seen above) | For lists in particular, you can make a shallow copy by taking a slice of the whole list:
```
x = y[:]
```
This isn't guaranteed to work for arbitrary sliceable objects (e.g., numpy array slices don't create a new array), so it can be useful to use the builtin [`copy`](http://docs.python.org/3.3/library/copy.html) module:
```
import copy
x = copy.copy(y)
```
can be expected to work for an arbitrary `y`. | Just clone the list:
```
x = list(y)
``` | How can I make x=y in terms of value and not memory? | [
"",
"python",
""
] |
UPDATE: This is a problem I am having with the 1.8.0 App Engine SDK on a fresh install of OS X 10.8.3.
First up - there's a bunch of questions on SO with a similar title. I've checked them out, and I don't believe they answer my question. Mostly they recommend getting libsqlite3-dev and rebuilding python to get \_sqlite3.so, but that's already where it should be:
```
$ find / -name _sqlite3.so
Password:
...
/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/lib-dynload/_sqlite3.so
/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload/_sqlite3.so
/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload/_sqlite3.so
```
The actual code that that causes app engine SDK to try to load that module is:
```
remaining = TaskSetElement.all().filter('taskSet', ts_key).filter('complete', False).count()
```
Here's the SDK stack trace:
```
File "~/dev/myApp/myApp/task.py", line 90, in completeTaskSetElement
remaining = TaskSetElement.all().filter('taskSet', ts_key).filter('complete', False).count()
File "~/dev/GAE/google_appengine/google/appengine/ext/db/__init__.py", line 2133, in count
result = raw_query.Count(limit=limit, **kwargs)
File "~/dev/GAE/google_appengine/google/appengine/api/datastore.py", line 1698, in Count
batch = self.GetBatcher(config=config).next()
File "~/dev/GAE/google_appengine/google/appengine/datastore/datastore_query.py", line 2754, in next
return self.next_batch(self.AT_LEAST_ONE)
File "~/dev/GAE/google_appengine/google/appengine/datastore/datastore_query.py", line 2791, in next_batch
batch = self.__next_batch.get_result()
File "~/dev/GAE/google_appengine/google/appengine/api/apiproxy_stub_map.py", line 604, in get_result
return self.__get_result_hook(self)
File "/Users/colin/dev/GAE/google_appengine/google/appengine/datastore/datastore_query.py", line 2528, in __query_result_hook
self._batch_shared.conn.check_rpc_success(rpc)
File "~/dev/GAE/google_appengine/google/appengine/datastore/datastore_rpc.py", line 1222, in check_rpc_success
rpc.check_success()
File "~/dev/GAE/google_appengine/google/appengine/api/apiproxy_stub_map.py", line 570, in check_success
self.__rpc.CheckSuccess()
File "/Users/colin/dev/GAE/google_appengine/google/appengine/api/apiproxy_rpc.py", line 156, in _WaitImpl
self.request, self.response)
File "~/dev/GAE/google_appengine/google/appengine/ext/remote_api/remote_api_stub.py", line 200, in MakeSyncCall
self._MakeRealSyncCall(service, call, request, response)
File "~/dev/GAE/google_appengine/google/appengine/ext/remote_api/remote_api_stub.py", line 234, in _MakeRealSyncCall
raise pickle.loads(response_pb.exception())
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pickle.py", line 1382, in loads
return Unpickler(file).load()
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pickle.py", line 858, in load
dispatch[key](self)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pickle.py", line 1090, in load_global
klass = self.find_class(module, name)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pickle.py", line 1124, in find_class
__import__(module)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/sqlite3/__init__.py", line 24, in
from dbapi2 import *
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/sqlite3/dbapi2.py", line 27, in
from _sqlite3 import *
File "~/dev/GAE/google_appengine/google/appengine/tools/devappserver2/python/sandbox.py", line 856, in load_module
raise ImportError('No module named %s' % fullname)
ImportError: No module named _sqlite3
```
I've got a bunch of datastore code prior to this line that's executing fine. I get the same problem running dev\_appserver.py directly from the command line or in eclipse with pydev.
From the command line, everything looks good:
```
$ python
Python 2.7.2 (default, Oct 11 2012, 20:14:37)
[GCC 4.2.1 Compatible Apple Clang 4.0 (tags/Apple/clang-418.0.60)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import sqlite3
>>> import _sqlite3
>>> import sys
>>> print(sys.path)
['', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python27.zip', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages', '/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-old', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload', '/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/PyObjC', '/Library/Python/2.7/site-packages']
>>>
```
This code snippet (running in app engine SDK) removes the app engine datastore code from the equation:
```
...
logging.info("Python Version: %s" % sys.version)
logging.info(filter(lambda p: 'lib-dynload' in p, sys.path))
import sqlite3
...
```
It outputs this:
```
INFO 2013-05-26 05:55:12,055 main.py:38] Python Version: 2.7.2 (default, Oct 11 2012, 20:14:37)
[GCC 4.2.1 Compatible Apple Clang 4.0 (tags/Apple/clang-418.0.60)]
INFO 2013-05-26 05:55:12,055 main.py:40] ['/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload']
ERROR 2013-05-26 05:55:12,058 cgi.py:121] Traceback (most recent call last):
File "main.py", line 42, in
import sqlite3
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/sqlite3/__init__.py", line 24, in
from dbapi2 import *
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/sqlite3/dbapi2.py", line 27, in
from _sqlite3 import *
File "~/dev/GAE/google_appengine/google/appengine/tools/devappserver2/python/sandbox.py", line 856, in load_module
raise ImportError('No module named %s' % fullname)
ImportError: No module named _sqlite3
```
Any ideas what the problem is? Thanks,
Colin | It looks like adding `'_sqlite3'` to the `_WHITE_LIST_C_MODULES` list at line 742 in sandbox.py (which lives at `/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/devappserver2/python` on my machine) has worked for me for now.
To my uneducated eye it looks like `CModuleImportHook` is incorrectly blocking the import of \_sqlite3 and that modifying any one of the checks it has to return `None` removes the error. Someone with more nous - please expand on what I've said or correct me!
That change may allow you to import \_sqlite3 in your own code, which would be a mistake. Perhaps it needs a way to limit the import to dbapi2.py? | From your stack trace, it shows you are trying to import sqlite in your main.py.
Why are you doing this?
Importing sqlite is not supported in App Engine. sqlite is implemented with a binary library, and you can't just import any old binary. In addition, the filesystem is read-only, so you couldn't write to an sqlite db. | OS X AppEngine - ImportError: No module named _sqlite3 | [
"",
"python",
"google-app-engine",
""
] |
Code:
```
>>> a = 1
>>> b = 2
>>> l = [a, b]
>>> l[1] = 4
>>> l
[1, 4]
>>> l[1]
4
>>> b
2
```
What I want to see happen instead is that when I set l[1] equal to 4, the variable b is changed to 4.
I'm guessing that when dealing with primitives, they are copied by value, not by reference. Often I see people having problems with objects and needing to understand deep copies and such. I basically want the opposite. I want to be able to store a *reference* to the primitive in the list, then be able to assign new values to that variable either by using its actual variable name `b` or its reference in the list `l[1]`.
Is this possible? | You can't really do that in Python, but you can come close by making the variables `a` and `b` refer to mutable container objects instead of immutable numbers:
```
>>> a = [1]
>>> b = [2]
>>> lst = [a, b]
>>> lst
[[1], [2]]
>>> lst[1][0] = 4 # changes contents of second mutable container in lst
>>> lst
[[1], [4]]
>>> a
[1]
>>> b
[4]
``` | There are no 'primitives' in Python. Everything is an object, even numbers. Numbers in Python are immutable objects. So, to have a reference to a number such that 'changes' to the 'number' are 'seen' through multiple references, the reference must be through e.g. a single element list or an object with one property.
(This works because lists and objects are mutable and a change to what number they hold is seen through all references to it)
e.g.
```
>>> a = [1]
>>> b = a
>>> a
[1]
>>> b
[1]
>>> a[0] = 2
>>> a
[2]
>>> b
[2]
``` | Store reference to primitive type in Python? | [
"",
"python",
"reference",
""
] |
I have a bit of a tricky situation. I have a column that contains a pipe delimited set of numbers in numerous rows in a table. For example:
```
Courses
-------------------
1|2
1|2|3
1|2|8
10
11
11|12
```
What I want to achieve is to return rows where the number only appears once in my output.
Ideally, I want to try and carry this out using SQL rather than having to carry out checks at a web application level. Carrying out a DISTINCT does not achieve what I want.
The desired output would be:
```
Courses
-------------------
1
2
3
8
10
11
12
```
I would appreciate it if anyone could guide me in the right direction.
Thanks. | Please try:
```
declare @tbl as table(Courses nvarchar(max))
insert into @tbl values
('1|2'),
('1|2|3'),
('1|2|8'),
('10'),
('11'),
('11|12')
select * from @tbl
SELECT
DISTINCT CAST(Split.a.value('.', 'VARCHAR(100)') AS INT) AS CVS
FROM
(
SELECT CAST ('<M>' + REPLACE(Courses, '|', '</M><M>') + '</M>' AS XML) AS CVS
FROM @tbl
) AS A CROSS APPLY CVS.nodes ('/M') AS Split(a)
ORDER BY 1
``` | Try this one -
```
SET NOCOUNT ON;
DECLARE @temp TABLE
(
string VARCHAR(500)
)
DECLARE @Separator CHAR(1)
SELECT @Separator = '|'
INSERT INTO @temp (string)
VALUES
('1|2'),
('1|2|3'),
('1|2|8'),
('10'),
('11'),
('11|12')
-- 1. XML
SELECT p.value('(./s)[1]', 'VARCHAR(500)')
FROM (
SELECT field = CAST('<r><s>' + REPLACE(t.string, @Separator, '</s></r><r><s>') + '</s></r>' AS XML)
FROM @temp t
) d
CROSS APPLY field.nodes('/r') t(p)
-- 2. CTE
;WITH a AS
(
SELECT
start_pos = 1
, end_pos = CHARINDEX(@Separator, t.string)
, t.string
FROM @temp t
UNION ALL
SELECT
end_pos + 1
, CHARINDEX(@Separator, string, end_pos + 1)
, string
FROM a
WHERE end_pos > 0
)
SELECT d.name
FROM (
SELECT
name = SUBSTRING(
string
, start_pos
, ABS(end_pos - start_pos)
)
FROM a
) d
WHERE d.name != ''
``` | Return Distinct Rows That Contain The Same Value/Character In SQL | [
"",
"sql",
"sql-server",
""
] |
I have 3 tables as per below:
```
CREATE TABLE USER_STATUS ("UID" varchar2(7), "STAT_ID" varchar2(11)) ;
INSERT ALL
INTO USER_STATUS ("UID", "STAT_ID") VALUES ('UID_001', 'STAT_ID_001')
INTO USER_STATUS ("UID", "STAT_ID") VALUES ('UID_001', NULL)
INTO USER_STATUS ("UID", "STAT_ID") VALUES ('UID_001', NULL)
INTO USER_STATUS ("UID", "STAT_ID") VALUES ('UID_002', 'STAT_ID_002')
INTO USER_STATUS ("UID", "STAT_ID") VALUES ('UID_002', NULL)
INTO USER_STATUS ("UID", "STAT_ID") VALUES ('UID_003', 'STAT_ID_003')
SELECT * FROM dual;
CREATE TABLE STATUS_LKUP ("LKUP_ID" varchar2(11), "STAT_CODE" varchar2(11), "STAT_ID" varchar2(11), "STATUS" varchar2(20));
INSERT ALL
INTO STATUS_LKUP ("LKUP_ID", "STAT_CODE", "STAT_ID", "STATUS") VALUES ('LKUP_ID_001', 'ST_CODE_001', 'STAT_ID_001', 'Processing')
INTO STATUS_LKUP ("LKUP_ID", "STAT_CODE", "STAT_ID", "STATUS") VALUES ('LKUP_ID_002', 'ST_CODE_002', 'STAT_ID_001', 'Processing')
INTO STATUS_LKUP ("LKUP_ID", "STAT_CODE", "STAT_ID", "STATUS") VALUES ('LKUP_ID_003', 'ST_CODE_003', 'STAT_ID_001', 'Processing')
INTO STATUS_LKUP ("LKUP_ID", "STAT_CODE", "STAT_ID", "STATUS") VALUES ('LKUP_ID_004', 'ST_CODE_004', 'STAT_ID_001', 'Processing')
INTO STATUS_LKUP ("LKUP_ID", "STAT_CODE", "STAT_ID", "STATUS") VALUES ('LKUP_ID_005', 'ST_CODE_011', 'STAT_ID_001', 'Issue')
INTO STATUS_LKUP ("LKUP_ID", "STAT_CODE", "STAT_ID", "STATUS") VALUES ('LKUP_ID_006', 'ST_CODE_012', 'STAT_ID_001', 'Issue')
INTO STATUS_LKUP ("LKUP_ID", "STAT_CODE", "STAT_ID", "STATUS") VALUES ('LKUP_ID_007', 'ST_CODE_013', 'STAT_ID_001', 'Issue')
INTO STATUS_LKUP ("LKUP_ID", "STAT_CODE", "STAT_ID", "STATUS") VALUES ('LKUP_ID_008', 'ST_CODE_014', 'STAT_ID_001', 'Issue')
INTO STATUS_LKUP ("LKUP_ID", "STAT_CODE", "STAT_ID", "STATUS") VALUES ('LKUP_ID_009', 'ST_CODE_015', 'STAT_ID_001', 'Issue')
INTO STATUS_LKUP ("LKUP_ID", "STAT_CODE", "STAT_ID", "STATUS") VALUES ('LKUP_ID_010', 'ST_CODE_021', 'STAT_ID_001', 'Done')
INTO STATUS_LKUP ("LKUP_ID", "STAT_CODE", "STAT_ID", "STATUS") VALUES ('LKUP_ID_011', 'ST_CODE_022', 'STAT_ID_001', 'Done')
INTO STATUS_LKUP ("LKUP_ID", "STAT_CODE", "STAT_ID", "STATUS") VALUES ('LKUP_ID_012', 'ST_CODE_031', 'STAT_ID_001', 'Started')
INTO STATUS_LKUP ("LKUP_ID", "STAT_CODE", "STAT_ID", "STATUS") VALUES ('LKUP_ID_013', 'ST_CODE_032', 'STAT_ID_001', 'Started')
INTO STATUS_LKUP ("LKUP_ID", "STAT_CODE", "STAT_ID", "STATUS") VALUES ('LKUP_ID_014', 'ST_CODE_002', 'STAT_ID_002', 'Processing (Sent)')
INTO STATUS_LKUP ("LKUP_ID", "STAT_CODE", "STAT_ID", "STATUS") VALUES ('LKUP_ID_015', 'ST_CODE_004', 'STAT_ID_002', 'Processing (Waiting)')
INTO STATUS_LKUP ("LKUP_ID", "STAT_CODE", "STAT_ID", "STATUS") VALUES ('LKUP_ID_016', 'ST_CODE_014', 'STAT_ID_002', 'Issue in Prod')
INTO STATUS_LKUP ("LKUP_ID", "STAT_CODE", "STAT_ID", "STATUS") VALUES ('LKUP_ID_017', 'ST_CODE_012', 'STAT_ID_002', 'Issue in Prod')
INTO STATUS_LKUP ("LKUP_ID", "STAT_CODE", "STAT_ID", "STATUS") VALUES ('LKUP_ID_018', 'ST_CODE_021', 'STAT_ID_002', 'Done')
SELECT * FROM dual;
CREATE TABLE CORE ("CORE_ID" varchar2(11), "STAT_CODE" varchar2(11));
INSERT ALL
INTO CORE ("CORE_ID", "STAT_CODE") VALUES ('CORE_ID_001', 'ST_CODE_001')
INTO CORE ("CORE_ID", "STAT_CODE") VALUES ('CORE_ID_002', 'ST_CODE_012')
INTO CORE ("CORE_ID", "STAT_CODE") VALUES ('CORE_ID_003', 'ST_CODE_021')
INTO CORE ("CORE_ID", "STAT_CODE") VALUES ('CORE_ID_004', 'ST_CODE_012')
INTO CORE ("CORE_ID", "STAT_CODE") VALUES ('CORE_ID_005', 'ST_CODE_012')
INTO CORE ("CORE_ID", "STAT_CODE") VALUES ('CORE_ID_006', 'ST_CODE_021')
INTO CORE ("CORE_ID", "STAT_CODE") VALUES ('CORE_ID_007', 'ST_CODE_001')
INTO CORE ("CORE_ID", "STAT_CODE") VALUES ('CORE_ID_008', 'ST_CODE_003')
INTO CORE ("CORE_ID", "STAT_CODE") VALUES ('CORE_ID_009', 'ST_CODE_012')
INTO CORE ("CORE_ID", "STAT_CODE") VALUES ('CORE_ID_010', 'ST_CODE_021')
INTO CORE ("CORE_ID", "STAT_CODE") VALUES ('CORE_ID_011', 'ST_CODE_001')
INTO CORE ("CORE_ID", "STAT_CODE") VALUES ('CORE_ID_013', 'ST_CODE_004')
SELECT * FROM dual;
```
Check this -> **[Oracle SQL Fiddle](http://sqlfiddle.com/#!4/05aa7)**
The tables are created in an Oracle DB. Now, based on the user's UID passed in, I need to retrieve their Cores with Statuses as per below:
**[Click here to view Expected Results](https://i.stack.imgur.com/Rth9K.png)**
So far I have tried to retrieve results, but I am not able to join them.
```
SELECT STLK.STAT_CODE, STLK.STATUS FROM STATUS_LKUP STLK WHERE STLK.STAT_ID IN (SELECT USRST.STAT_ID FROM USER_STATUS USRST WHERE USRST.UID = 'UID_001');
```
Please help.
FYI: This is not a homework assignment. Actual tables are complicated and these are just converted for better explanation.
Thank you in advance. | A fairly straightforward join:
```
SELECT sl."STATUS", sl."STAT_CODE", c."CORE_ID"
FROM USER_STATUS us
JOIN STATUS_LKUP sl
ON us."STAT_ID" = sl."STAT_ID"
JOIN CORE c
ON c."STAT_CODE" = sl."STAT_CODE"
WHERE "UID" = 'UID_001'
ORDER BY "STATUS", "LKUP_ID"
```
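The same two-join logic can be mirrored in plain Python as a sanity check (illustrative only, using a tiny subset of the sample rows):

```python
# tiny in-memory subset of the three tables
user_status = [('UID_001', 'STAT_ID_001')]
status_lkup = [('LKUP_ID_001', 'ST_CODE_001', 'STAT_ID_001', 'Processing'),
               ('LKUP_ID_010', 'ST_CODE_021', 'STAT_ID_001', 'Done')]
core = [('CORE_ID_001', 'ST_CODE_001'), ('CORE_ID_003', 'ST_CODE_021')]

# USER_STATUS -> STATUS_LKUP on STAT_ID, then STATUS_LKUP -> CORE on STAT_CODE
rows = [(status, stat_code, core_id)
        for uid, u_stat in user_status if uid == 'UID_001'
        for _lkup, stat_code, s_stat, status in status_lkup if s_stat == u_stat
        for core_id, c_code in core if c_code == stat_code]

assert rows == [('Processing', 'ST_CODE_001', 'CORE_ID_001'),
                ('Done', 'ST_CODE_021', 'CORE_ID_003')]
```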
[An SQLfiddle to test with](http://sqlfiddle.com/#!4/05aa7/25). | Try
```
SELECT l.STATUS,
l.STAT_CODE,
c.CORE_ID
FROM STATUS_LKUP l JOIN CORE c
ON l.STAT_CODE = c.STAT_CODE JOIN USER_STATUS u
ON l.STAT_ID = u.STAT_ID
WHERE u."UID" = 'UID_002'
ORDER BY l.STATUS DESC, l.STAT_CODE
```
**[SQLFiddle](http://sqlfiddle.com/#!4/05aa7/26)** | Oracle Join SQL Quest | [
"",
"sql",
"oracle",
"join",
""
] |
I have this code to print some strings to a text file, but I need Python to ignore every empty item, so it doesn't print empty lines.
I wrote this code, which is simple, but should do the trick:
```
lastReadCategories = open('c:/digitalLibrary/' + connectedUser + '/lastReadCategories.txt', 'w')
for category in lastReadCategoriesList:
if category.split(",")[0] is not "" and category is not None:
lastReadCategories.write(category + '\n')
print(category)
else: print("/" + category + "/")
lastReadCategories.close()
```
I can see no problem with it, yet, python keeps printing the empty items to the file. All categories are written in this notation: "category,timesRead", that's why I ask python to see if the first string before the comma is not empty. Then I see if the whole item is not empty (is not None). In theory I guess it should work, right?
P.S.: I've already tried asking the if to check if 'category' is not "" and is not " ", still, the same result. | rstrip() your category before writing it back to file
```
lastReadCategories = open('c:/digitalLibrary/' + connectedUser +'/lastReadCategories.txt', 'w')
for category in lastReadCategoriesList:
if category.split(",")[0] is not "" and category is not None:
lastReadCategories.write(category.rstrip() + '\n')
print(category.rstrip())
else: print("/" + category + "/")
lastReadCategories.close()
```
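The underlying cause is trailing newlines in the list entries; `rstrip()` removes them before the empty-field test and the write (a small illustrative check):

```python
# entries read from a file often keep their trailing newline,
# which is what produced the "empty" lines in the output file:
assert 'B,1\n'.rstrip() == 'B,1'
# genuinely empty entries still show an empty first field after splitting:
assert ',3'.split(',')[0] == ''
```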
I was able to test it with your sample list provided (without writing it to file):
```
lastReadCategoriesList = ['A,52', 'B,1\n', 'C,50', ',3']
for category in lastReadCategoriesList:
if category.split(",")[0] is not "" and category is not None:
print(category.rstrip())
else: print("/" + category + "/")
>>> ================================ RESTART ================================
>>>
A,52
B,1
C,50
/,3/
>>>
``` | Test for boolean truth instead, and reverse your test so that you are certain that `.split()` will work in the first place, `None.split()` would throw an exception:
```
if category is not None and category.split(",")[0]:
```
The empty string is 'false-y', there is no need to test it against anything.
You could even just test for:
```
if category and not category.startswith(','):
```
for the same end result.
From comments, it appears you have newlines cluttering up your data. Strip those away when testing:
```
for category in lastReadCategoriesList:
category = category.rstrip('\n')
if category and not category.startswith(','):
lastReadCategories.write(category + '\n')
print(category)
else: print("/{}/".format(category))
```
Note that you can simply alter `category` inside the loop; this avoids having to call `.rstrip()` multiple times. | Python not ignoring empty items in list | [
"",
"python",
"list",
"for-loop",
""
] |
This is my query that works:
```
Select AsOfDate, Family, Type, DocID, Title, Date1, Date2, Date3, Stat1, Stat2, Stat3
FROM DocumentationData
WHERE Type = @Type
AND Family = @Family
AND AsOfDate = (SELECT Max(AsOfDate)
FROM DocumentationData
WHERE AsOfDate <= @CurrentDate )
```
I want to add a condition: an additional constraint of Usage = 'Active' if @ActiveOnly is true.
This is what I am trying, but it results in an error:
```
Select AsOfDate, Family, Type, DocID, Title, Date1, Date2, Date3, Stat1, Stat2, Stat3
FROM DocumentationData
WHERE Type = @Type
AND Family = @Family
AND AsOfDate = (IF (@ActiveOnly = 'TRUE')
BEGIN
SELECT Max(AsOfDate)
FROM DocumentationData
WHERE AsOfDate <= @CurrentDate
AND Usage = 'Active'
END
ELSE
BEGIN
SELECT Max(AsOfDate)
FROM DocumentationData
WHERE AsOfDate <= @CurrentDate
END
)
``` | I think you should be able to do this without using `IF...THEN...ELSE`. You were close w/ your first query attempt, just apply similar `IF` logic using additional clauses to the `WHERE` in your subquery.
```
SELECT AsOfDate, Family, Type, DocID, Title, Date1, Date2, Date3, Stat1, Stat2, Stat3
FROM DocumentationData
WHERE Type = @Type
AND Family = @Family
AND AsOfDate = (SELECT Max(AsOfDate)
FROM DocumentationData
WHERE AsOfDate <= @CurrentDate
AND ( ( @ActiveOnly = 'TRUE' AND Usage = 'Active')
OR (@ActiveOnly <> 'TRUE')
)
)
``` | I think you need a THEN in there.
```
AND AsOfDate = IF (@ActiveOnly = 'TRUE')
THEN
SELECT Max(AsOfDate)
FROM DocumentationData
WHERE AsOfDate <= @CurrentDate
AND Usage = 'Active'
ELSE
BEGIN
SELECT Max(AsOfDate)
FROM DocumentationData
WHERE AsOfDate <= @CurrentDate
END IF
``` | Unable to add an IF statement to SQL query | [
"",
"sql",
"azure-sql-database",
""
] |
I've found [this library](https://github.com/pythonforfacebook/facebook-sdk/), which seems to be the official one, and then [found this](https://stackoverflow.com/questions/10488913/how-to-obtain-a-user-access-token-in-python), but every time I find an answer, half of it is links to the [Facebook API Documentation](https://developers.facebook.com/docs/reference/php/facebook-getAccessToken/), which talks about JavaScript or PHP and how to extract the token from links!
How do I make it in a simple Python script?
NB: What I really don't understand is why we need a library at all, when we could use `urllib` and a regex to extract the information. | Javascript and PHP can be used as web development languages. You need a web front end for the user to grant permission so that you can obtain the access token.
Rephrased: **You cannot obtain the access token programmatically, there must be manual user interaction**
In Python it will involve setting up a web server, for example a script to update feed using facepy
```
import web
from facepy import GraphAPI
from urlparse import parse_qs
url = ('/', 'index')
app_id = "YOUR_APP_ID"
app_secret = "APP_SECRET"
post_login_url = "http://0.0.0.0:8080/"
class index:
    def GET(self):
        user_data = web.input(code=None)
        if not user_data.code:
            dialog_url = ( "http://www.facebook.com/dialog/oauth?" +
                           "client_id=" + app_id +
                           "&redirect_uri=" + post_login_url +
                           "&scope=publish_stream" )
            # send the user to Facebook's login dialog first
            return "<script>top.location.href='" + dialog_url + "'</script>"
        else:
            graph = GraphAPI()
            response = graph.get(
                path='oauth/access_token',
                client_id=app_id,
                client_secret=app_secret,
                redirect_uri=post_login_url,
                code=user_data.code  # the code Facebook appended to the redirect
            )
            data = parse_qs(response)
            graph = GraphAPI(data['access_token'][0])
            graph.post(path='me/feed', message='Your message here')
            return 'Posted to your feed.'

if __name__ == '__main__':
    web.application(url, globals()).run()
``` | Not sure if this helps anyone, but I was able to get an oauth\_access\_token by following this code.
```
from facepy import utils
app_id = 134134134134 # must be integer
app_secret = "XXXXXXXXXXXXXXXXXX"
oauth_access_token = utils.get_application_access_token(app_id, app_secret)
```
Hope this helps. | How to get Facebook access token using Python library? | [
"",
"python",
"facebook",
"facebook-graph-api",
""
] |
I have a small GUI test with a "Start" button and a Progress bar. The desired behavior is:
* Click Start
* Progressbar oscillates for 5 seconds
* Progressbar stops
The observed behavior is the "Start" button freezes for 5 seconds, then a Progressbar is displayed (no oscillation).
Here is my code so far:
```
class GUI:
def __init__(self, master):
self.master = master
self.test_button = Button(self.master, command=self.tb_click)
self.test_button.configure(
text="Start", background="Grey",
padx=50
)
self.test_button.pack(side=TOP)
def progress(self):
self.prog_bar = ttk.Progressbar(
self.master, orient="horizontal",
length=200, mode="indeterminate"
)
self.prog_bar.pack(side=TOP)
def tb_click(self):
self.progress()
self.prog_bar.start()
# Simulate long running process
t = threading.Thread(target=time.sleep, args=(5,))
t.start()
t.join()
self.prog_bar.stop()
root = Tk()
root.title("Test Button")
main_ui = GUI(root)
root.mainloop()
```
Based on the information from Bryan Oakley [here](https://stackoverflow.com/questions/10847626/tkinter-program-freezing-durint-the-execution-of-a-function), I understand that I need to use threads. I tried creating a thread, but I'm guessing that since the thread is started from within the main thread, it doesn't help.
I had the idea to place the logic portion in a different class, and instantiate the GUI from within that class, similar to the example code by A. Rodas [here](https://stackoverflow.com/questions/15323574/how-to-connect-a-progress-bar-to-a-function/15323917#15323917).
My question:
I can't figure out how to code it so that this command:
```
self.test_button = Button(self.master, command=self.tb_click)
```
calls a function that is located in the other class. Is this a Bad Thing to do or is it even possible? How would I create a 2nd class that can handle the self.tb\_click? I tried following along to A. Rodas' example code which works beautifully. But I cannot figure out how to implement his solution in the case of a Button widget that triggers an action.
If I should instead handle the thread from within the single GUI class, how would one create a thread that doesn't interfere with the main thread? | When you join the new thread in the main thread, it will wait until the thread finishes, so the GUI will block even though you are using multithreading.
If you want to place the logic portion in a different class, you can subclass Thread directly, and then start a new object of this class when you press the button. The constructor of this subclass of Thread can receive a Queue object, and then you will be able to communicate with the GUI part. So my suggestion is:
1. Create a Queue object in the main thread
2. Create a new thread with access to that queue
3. Check periodically the queue in the main thread
Then you have to solve the problem of what happens if the user clicks two times the same button (it will spawn a new thread with each click), but you can fix it by disabling the start button and enabling it again after you call `self.prog_bar.stop()`.
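That enable/disable guard boils down to a small piece of state handling (an illustrative, tkinter-free sketch; in the real GUI the two branches would call `self.test_button.config(state="disabled")` and `state="normal"`):

```python
class StartGuard:
    """Ignore clicks while a background task is still running."""
    def __init__(self):
        self.running = False

    def try_start(self):
        if self.running:       # button is "disabled": swallow the click
            return False
        self.running = True    # here the GUI would disable the button
        return True

    def finish(self):
        self.running = False   # here the GUI would re-enable the button

guard = StartGuard()
assert guard.try_start() is True    # first click starts the task
assert guard.try_start() is False   # a second click is ignored
guard.finish()
assert guard.try_start() is True    # after stop(), clicking works again
```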
```
import queue
class GUI:
# ...
def tb_click(self):
self.progress()
self.prog_bar.start()
self.queue = queue.Queue()
ThreadedTask(self.queue).start()
self.master.after(100, self.process_queue)
def process_queue(self):
try:
msg = self.queue.get_nowait()
# Show result of the task if needed
self.prog_bar.stop()
except queue.Empty:
self.master.after(100, self.process_queue)
class ThreadedTask(threading.Thread):
def __init__(self, queue):
super().__init__()
self.queue = queue
def run(self):
time.sleep(5) # Simulate long running process
self.queue.put("Task finished")
``` | I will submit the basis for an alternate solution. It is not specific to a Tk progress bar per se, but it can certainly be implemented very easily for that.
Here are some classes that allow you to run other tasks in the background of Tk, update the Tk controls when desired, and not lock up the gui!
Here's class TkRepeatingTask and BackgroundTask:
```
import threading
class TkRepeatingTask():
    def __init__( self, tkRoot, taskFuncPointer, frequencyMillis ):
        self.__tk_ = tkRoot
        self.__func_ = taskFuncPointer
        self.__freq_ = frequencyMillis
self.__isRunning_ = False
def isRunning( self ) : return self.__isRunning_
def start( self ) :
self.__isRunning_ = True
self.__onTimer()
def stop( self ) : self.__isRunning_ = False
def __onTimer( self ):
if self.__isRunning_ :
self.__func_()
self.__tk_.after( self.__freq_, self.__onTimer )
class BackgroundTask():
def __init__( self, taskFuncPointer ):
self.__taskFuncPointer_ = taskFuncPointer
self.__workerThread_ = None
self.__isRunning_ = False
def taskFuncPointer( self ) : return self.__taskFuncPointer_
def isRunning( self ) :
return self.__isRunning_ and self.__workerThread_.isAlive()
def start( self ):
if not self.__isRunning_ :
self.__isRunning_ = True
self.__workerThread_ = self.WorkerThread( self )
self.__workerThread_.start()
def stop( self ) : self.__isRunning_ = False
class WorkerThread( threading.Thread ):
def __init__( self, bgTask ):
threading.Thread.__init__( self )
self.__bgTask_ = bgTask
def run( self ):
try :
self.__bgTask_.taskFuncPointer()( self.__bgTask_.isRunning )
except Exception as e: print repr(e)
self.__bgTask_.stop()
```
Here's a Tk test which demos the use of these. Just append this to the bottom of the module with those classes in it if you want to see the demo in action:
```
def tkThreadingTest():
    from tkinter import Tk, Label, Button, StringVar
    from time import sleep

    class UnitTestGUI:

        def __init__( self, master ):
            self.master = master
            master.title( "Threading Test" )

            self.testButton = Button(
                self.master, text="Blocking", command=self.myLongProcess )
            self.testButton.pack()

            self.threadedButton = Button(
                self.master, text="Threaded", command=self.onThreadedClicked )
            self.threadedButton.pack()

            self.cancelButton = Button(
                self.master, text="Stop", command=self.onStopClicked )
            self.cancelButton.pack()

            self.statusLabelVar = StringVar()
            self.statusLabel = Label( master, textvariable=self.statusLabelVar )
            self.statusLabel.pack()

            self.clickMeButton = Button(
                self.master, text="Click Me", command=self.onClickMeClicked )
            self.clickMeButton.pack()

            self.clickCountLabelVar = StringVar()
            self.clickCountLabel = Label( master, textvariable=self.clickCountLabelVar )
            self.clickCountLabel.pack()

            self.threadedButton = Button(
                self.master, text="Timer", command=self.onTimerClicked )
            self.threadedButton.pack()

            self.timerCountLabelVar = StringVar()
            self.timerCountLabel = Label( master, textvariable=self.timerCountLabelVar )
            self.timerCountLabel.pack()

            self.timerCounter_ = 0
            self.clickCounter_ = 0

            self.bgTask = BackgroundTask( self.myLongProcess )
            self.timer = TkRepeatingTask( self.master, self.onTimer, 1 )

        def close( self ) :
            print( "close" )
            try: self.bgTask.stop()
            except: pass
            try: self.timer.stop()
            except: pass
            self.master.quit()

        def onThreadedClicked( self ):
            print( "onThreadedClicked" )
            try: self.bgTask.start()
            except: pass

        def onTimerClicked( self ) :
            print( "onTimerClicked" )
            self.timer.start()

        def onStopClicked( self ) :
            print( "onStopClicked" )
            try: self.bgTask.stop()
            except: pass
            try: self.timer.stop()
            except: pass

        def onClickMeClicked( self ):
            print( "onClickMeClicked" )
            self.clickCounter_ += 1
            self.clickCountLabelVar.set( str(self.clickCounter_) )

        def onTimer( self ) :
            print( "onTimer" )
            self.timerCounter_ += 1
            self.timerCountLabelVar.set( str(self.timerCounter_) )

        def myLongProcess( self, isRunningFunc=None ) :
            print( "starting myLongProcess" )
            for i in range( 1, 10 ):
                try:
                    if not isRunningFunc() :
                        self.onMyLongProcessUpdate( "Stopped!" )
                        return
                except : pass
                self.onMyLongProcessUpdate( i )
                sleep( 1.5 )  # simulate doing work
            self.onMyLongProcessUpdate( "Done!" )

        def onMyLongProcessUpdate( self, status ) :
            print( "Process Update: %s" % (status,) )
            self.statusLabelVar.set( str(status) )

    root = Tk()
    gui = UnitTestGUI( root )
    root.protocol( "WM_DELETE_WINDOW", gui.close )
    root.mainloop()

if __name__ == "__main__":
    tkThreadingTest()
```
Two important points I'll stress about BackgroundTask:
1) The function you run in the background task needs to take a function pointer that it will both invoke and respect, which allows the task to be cancelled midway through - if possible.
2) You need to make sure the background task is stopped when you exit your application. That thread will still run even if your gui is closed if you don't address that! | Tkinter: How to use threads to preventing main event loop from "freezing" | [
"",
"python",
"multithreading",
"tkinter",
"progress-bar",
"event-loop",
""
] |
Good day! I need to convert XML using XSLT in Python. I have sample code in PHP.
How can I implement this in Python, or where can I find something similar? Thank you!
```
$xmlFileName = dirname(__FILE__)."example.fb2";
$xml = new DOMDocument();
$xml->load($xmlFileName);
$xslFileName = dirname(__FILE__)."example.xsl";
$xsl = new DOMDocument;
$xsl->load($xslFileName);
// Configure the transformer
$proc = new XSLTProcessor();
$proc->importStyleSheet($xsl); // attach the xsl rules
echo $proc->transformToXML($xml);
``` | Using [lxml](https://lxml.de/index.html),
```
import lxml.etree as ET
dom = ET.parse(xml_filename)
xslt = ET.parse(xsl_filename)
transform = ET.XSLT(xslt)
newdom = transform(dom)
print(ET.tostring(newdom, pretty_print=True))
``` | [LXML](https://lxml.de/index.html) is a widely used high performance library for XML processing in python based on libxml2 and libxslt - it includes facilities for [XSLT as well](https://lxml.de/xpathxslt.html#xslt). | How to transform an XML file using XSLT in Python? | [
"",
"python",
"xml",
"xslt",
"converters",
""
] |
I'm looking for the most efficient and pythonic (mainly efficient) way to update a dictionary but keep the old values if an existing key is present. For example...
```
myDict1 = {'1': ('3', '2'), '3': ('2', '1'), '2': ('3', '1')}
myDict2 = {'4': ('5', '2'), '5': ('2', '4'), '2': ('5', '4')}
myDict1.update(myDict2) gives me the following....
{'1': ('3', '2'), '3': ('2', '1'), '2': ('5', '4'), '5': ('2', '4'), '4': ('5', '2')}
```
notice how the key '2' exists in both dictionaries and used to have the values ('3', '1'), but now it has the values from its key in myDict2, ('5', '4')?
Is there a way to update the dictionary efficiently so that the key '2' ends up with the values ('3', '1', '5', '4')? #in no particular order
Thanks in advance | I think the most effective way to do it would be something like this:
```
for k, v in myDict2.items():  # use iteritems() on Python 2
    myDict1[k] = myDict1.get(k, ()) + v
```
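Applied to the dictionaries from the question, the loop merges the tuples for the shared key:

```python
myDict1 = {'1': ('3', '2'), '3': ('2', '1'), '2': ('3', '1')}
myDict2 = {'4': ('5', '2'), '5': ('2', '4'), '2': ('5', '4')}

for k, v in myDict2.items():
    myDict1[k] = myDict1.get(k, ()) + v

print(myDict1['2'])  # ('3', '1', '5', '4')
```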
But there isn't an `update` equivalent for what you're looking to do, unfortunately. | What is wrong with 2 in-place update operations?
```
myDict2.update(myDict1)
myDict1.update(myDict2)
```
Explanation:
The first update will overwrite the already existing keys with the values from myDict1, and insert all key value pairs in myDict2 which don't exist.
The second update will overwrite the already existing keys in myDict1 with values from myDict2, which are actually the values from myDict1 itself due to the 1st operation. Any new key value pairs inserted will be from the original myDict2.
This of course assumes that you don't care about preserving myDict2.
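For reference, here is what the two in-place updates actually produce for the question's data — note that the shared key keeps myDict1's original tuple rather than concatenating both:

```python
myDict1 = {'1': ('3', '2'), '3': ('2', '1'), '2': ('3', '1')}
myDict2 = {'4': ('5', '2'), '5': ('2', '4'), '2': ('5', '4')}

myDict2.update(myDict1)   # common keys now hold myDict1's values
myDict1.update(myDict2)   # copy everything back, picking up the new keys

print(myDict1['2'])       # ('3', '1')
```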
Update: With python3, you can do this without having to touch myDict2
```
myDict1 = {**myDict1, **myDict2, **myDict1}
```
which would actually be same as
```
myDict1 = {**myDict2, **myDict1}
```
Output
```
{'1': ('3', '2'), '3': ('2', '1'), '2': ('3', '1'), '4': ('5', '2'), '5': ('2', '4')}
``` | Updating a python dictionary while adding to existing keys? | [
"",
"python",
"performance",
"dictionary",
""
] |
According to [this answer](https://stackoverflow.com/a/4782649/2403580), a function call is a statement, but in the course I'm following in Coursera they say a function call is an expression.
*So this is my guess: a function call does something, that's why it's a statement, but after it's called it evaluates and passes a value, which makes it also an expression.*
Is a function call an expression? | A call is [an expression](http://docs.python.org/2/reference/expressions.html#calls); it is listed in the Expressions reference documentation.
If it was a statement, you could not use it as part of an expression; statements *can* contain expressions, but not the other way around.
As an example, `return expression` is a statement; it *uses* expressions to determine its behaviour; the result of the expression is what the current function returns. You can use a call as part of that expression:
```
return some_function()
```
You cannot, however, use `return` as part of a call:
```
some_function(return)
```
That would be a syntax error.
It is the `return` that 'does something'; it ends the function and returns the result of the expression. It's not the expression itself that makes the function return.
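Because a call is an expression, it composes freely with other expressions — for instance as an operand of arithmetic, or as an argument to another call:

```python
# Each call below evaluates to a value that the enclosing expression consumes
total = max(len('hello'), abs(-3)) + min(2, 4)   # 5 + 2
print(total)   # 7
```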
If a python call was *not* an expression, you'd never be able to mix calls and other [expression atoms](http://docs.python.org/2/reference/expressions.html#atoms) into more complex expressions. | Function calls are expressions, so you can use them in statements such as `print` or `=`:
```
x = len('hello') # The expression evaluates to 5
print hex(48) # The expression evaluates to '0x30'
```
In Python, functions are objects just like integers and strings. Like other objects, functions have attributes and methods. What makes functions callable is the `__call__` method:
```
>>> def square(x):
        return x * x
>>> square(10)
100
>>> square.__call__(10)
100
```
The parentheses dispatch to the `__call__` method.
This is at the core of how Python works. Hope you've found these insights helpful. | Is a function call an expression in python? | [
"",
"python",
"function",
"syntax",
"expression",
"call",
""
] |
I have a query whose result looks like the following
```
saledate Amount TRVal
20/05/2013 $6250.78 4
21/05/2013 $4562.23 4
22/05/2013 $565.32 6
23/05/2013 $85.36 8
24/05/2013 $56.36 5
```
I want the Average of Amount like
```
saledate Amount TRVal AvgVal
20/05/2013 $6250.78 4 2304.01
21/05/2013 $4562.23 4 2304.01
22/05/2013 $565.32 6 2304.01
23/05/2013 $85.36 8 2304.01
24/05/2013 $56.36 5 2304.01
```
I know how to calculate the average value, but I want it included in the result of the query.
The query I am using is
```
Select ISNULL([SaleDate],'Totals') AS [Totals], TempSaleDate, Domestic, Export, Import, TotalJobs,
'$' + CAST(CAST(Value AS DECIMAL(10,2)) AS VARCHAR(15)) Value,
'$' + CAST(CAST(ValueNOGST AS DECIMAL(10,2)) AS VARCHAR(15)) ValueNOGST,
Cancelled,
'$' + CAST(CAST(CancelledValue AS DECIMAL(10,2)) AS VARCHAR(15)) CancelledValue,
'$' + CAST(CAST(CancelledValueNOGST AS DECIMAL(10,2)) AS VARCHAR(15)) CancelledValueNOGST,
'$' + CAST(CAST(TotalValue AS DECIMAL(10,2)) AS VARCHAR(15)) TotalValue,
'$' + CAST(CAST(TotalValueNOGST AS DECIMAL(10,2)) AS VARCHAR(15)) TotalValueNOGST,
(select AVG(TotalValue) from sales) as FFF,
TotalGST, TotalValue+TotalGST TotalWithNOGSTCheck
FROM (
select convert(varchar(10), sales.saledate, 103) [SaleDate],max(sales.SaleDate) [TempSaleDate], SUM(sales.Domestic) [Domestic], SUM(sales.Export) [Export], SUM(sales.Import) [Import],
(SUM(sales.Domestic) + SUM(sales.Export) + SUM(sales.Import)) AS TotalJobs,
SUM(sales.Value) [Value], SUM(sales.ValueNoGST) [ValueNOGST],
Sum(sales.Cancelled) [Cancelled],
sum(sales.cancelledValue) [CancelledValue],
sum(sales.CancelledValueNOGST) [CancelledValueNOGST],
SUM(sales.totalValue) [TotalValue],
SUM(sales.TotalValueNOGST) [TotalValueNOGST],
SUM(sales.FGST) [FreightGST],SUM(sales.WGST) [WarrantyGST],SUM(sales.TGST) [TotalGST]
from
(
select TOP 100 PERCENT max(j.SaleDate) SaleDate,
case when max(oc.Code) = 'AU' and max(dc.Code) = 'AU' then 1 else 0 end [Domestic],
case when max(oc.Code) = 'AU' and max(dc.Code) <> 'AU' then 1 else 0 end [Export],
case when max(oc.Code) <> 'AU' and max(dc.Code) = 'AU' then 1 else 0 end [Import],
1 [Total],
MAX(charges.FreightGst) [FGST],
MAX(charges.warrantygst) [WGST],
MAX(charges.totalgst) [TGST],
max(ic.Total-charges.totalgst) [Value],
max(ic.Total) [ValueNoGST],
case when max(c.CancelDate) is not null then 1 else 0 end [Cancelled],
case when max(c.CancelDate) is not null then max(ic.Total) else 0 end [CancelledValueNOGST],
case when max(c.CancelDate) is null then max(ic.Total) else 0 end [TotalValueNOGST],
case when max(c.CancelDate) is not null then max(ic.Total-charges.totalgst) else 0 end [CancelledValue],
case when max(c.CancelDate) is null then max(ic.Total-charges.totalgst) else 0 end [TotalValue]
from invoices i
left join Jobs j on i.JobKey = j.JobKey
inner join tasks t on j.jobkey = t.jobkey
inner join Consignments c on t.TaskKey = c.consignmentkey
inner join places op on c.originplacekey = op.placekey
inner join places dp on c.destinationplacekey = dp.placekey
inner join places oC on dbo.ParentPlaceKey(c.originPlaceKey) = oc.placekey
inner join places dC on dbo.ParentPlaceKey(c.destinationplacekey) = dc.placekey
left join (select consignmentKey, sum(Value) [Value] from ConsignmentItems ci group by consignmentkey ) ci on ci.ConsignmentKey = c.ConsignmentKey
left join (
select invoicekey,
sum(case when ci.ChargeItemKey = 'FRT_SLL' then oc.Value else 0 end) [Freight],
sum(case when ci.ChargeItemKey = 'WTY_SLL' then oc.Value else 0 end) [Warranty],
sum(case when ci.ChargeType = 4 then oc.Value else 0 end) [Total]
from InvoiceCharges ic
left join OptionCharges oc on ic.OptionChargeKey = oc.OptionChargeKey
left join ChargeItems ci on oc.ChargeItemKey = ci.ChargeItemKey
group by invoicekey
) ic on ic.invoicekey = i.InvoiceKey
left join (
select OptionKey [OptionKey],
sum(case when ci1.ChargeItemKey = 'FRT_TX1' then oc1.Value else 0 end) [FreightGst],
sum(case when ci1.ChargeItemKey = 'WTY_TX1' then oc1.Value else 0 end) [WarrantyGst],
sum(case when ci1.ChargeType = 3 then oc1.Value else 0 end) [TotalGst]
from OptionCharges oc1
left join ChargeItems ci1 on oc1.ChargeItemKey = ci1.ChargeItemKey
group by optionkey
) charges on charges.OptionKey = c.SelectedOptionKey
where
j.SaleDate >= '20-May-2013'
and
j.operationalstorekey = dbo.StoreCode('AU-WEB')
and j.saledate is not null and SelectedOptionKey is not null
group by j.jobkey
) sales
group by convert(varchar(10), sales.saledate, 103) WITH ROLLUP
) AS SalesData order by TempSaleDate
```
I tried to add
```
(SELECT avg(TotalValue) FROM SalesData) as avgVal
```
but throws `invalid object name SalesData`.
Not sure what I am doing wrong. | ```
SELECT saledate, Amount, TRVal, (SELECT avg(amount) FROM tableA) as AvgVal FROM tableA
```
or
```
SELECT saledate, Amount, TRVal, avgVal FROM tableA
CROSS JOIN (SELECT avg(amount) as AvgVal FROM tableA) x
```
**Edited to add:**
You have to replace `SalesData` with the name of the table you're querying (which you removed from your example):
```
FROM(
-- whatever name you put here is the one you need in place of SalesData
)
```
**Edited to add:**
Your FROM clause is complicated, and SQL Server provides the WITH clause (common table expressions, available since SQL Server 2005) for just such cases:
```
;WITH thisTable AS (
-- put your FROM clause in here
)
SELECT /* list of fields */, (SELECT AVG(amount) FROM thisTable) as AvgVal
FROM thisTable
```
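As a minimal, self-contained illustration of this pattern (sketched here with SQLite through Python's sqlite3 module purely for demonstration — the CTE-plus-scalar-subquery shape is the same idea as the T-SQL above, using the sample data from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (saledate TEXT, amount REAL)")
con.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("20/05/2013", 6250.78), ("21/05/2013", 4562.23),
     ("22/05/2013", 565.32), ("23/05/2013", 85.36),
     ("24/05/2013", 56.36)],
)

# Name the base query once, then attach the overall average to every row
rows = con.execute("""
    WITH thisTable AS (SELECT saledate, amount FROM sales)
    SELECT saledate, amount,
           (SELECT AVG(amount) FROM thisTable) AS AvgVal
    FROM thisTable
""").fetchall()

for row in rows:
    print(row)   # every row carries the same AvgVal, 2304.01
```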
So for you:
```
; WITH thisTable AS (
select convert(varchar(10), sales.saledate, 103) [SaleDate],max(sales.SaleDate) [TempSaleDate],
SUM(sales.Domestic) [Domestic], SUM(sales.Export) [Export], SUM(sales.Import) [Import],
(SUM(sales.Domestic) + SUM(sales.Export) + SUM(sales.Import)) AS TotalJobs,
SUM(sales.Value) [Value], SUM(sales.ValueNoGST) [ValueNOGST],
Sum(sales.Cancelled) [Cancelled],
sum(sales.cancelledValue) [CancelledValue],
sum(sales.CancelledValueNOGST) [CancelledValueNOGST],
SUM(sales.totalValue) [TotalValue],
SUM(sales.TotalValueNOGST) [TotalValueNOGST],
SUM(sales.FGST) [FreightGST],SUM(sales.WGST) [WarrantyGST],SUM(sales.TGST) [TotalGST]
from
(
select TOP 100 PERCENT max(j.SaleDate) SaleDate,
case when max(oc.Code) = 'AU' and max(dc.Code) = 'AU' then 1 else 0 end [Domestic],
case when max(oc.Code) = 'AU' and max(dc.Code) <> 'AU' then 1 else 0 end [Export],
case when max(oc.Code) <> 'AU' and max(dc.Code) = 'AU' then 1 else 0 end [Import],
1 [Total],
MAX(charges.FreightGst) [FGST],
MAX(charges.warrantygst) [WGST],
MAX(charges.totalgst) [TGST],
max(ic.Total-charges.totalgst) [Value],
max(ic.Total) [ValueNoGST],
case when max(c.CancelDate) is not null then 1 else 0 end [Cancelled],
case when max(c.CancelDate) is not null then max(ic.Total) else 0 end [CancelledValueNOGST],
case when max(c.CancelDate) is null then max(ic.Total) else 0 end [TotalValueNOGST],
case when max(c.CancelDate) is not null then max(ic.Total-charges.totalgst) else 0 end [CancelledValue],
case when max(c.CancelDate) is null then max(ic.Total-charges.totalgst) else 0 end [TotalValue]
from invoices i
left join Jobs j on i.JobKey = j.JobKey
inner join tasks t on j.jobkey = t.jobkey
inner join Consignments c on t.TaskKey = c.consignmentkey
inner join places op on c.originplacekey = op.placekey
inner join places dp on c.destinationplacekey = dp.placekey
inner join places oC on dbo.ParentPlaceKey(c.originPlaceKey) = oc.placekey
inner join places dC on dbo.ParentPlaceKey(c.destinationplacekey) = dc.placekey
left join (select consignmentKey, sum(Value) [Value] from ConsignmentItems ci group by consignmentkey ) ci on ci.ConsignmentKey = c.ConsignmentKey
left join (
select invoicekey,
sum(case when ci.ChargeItemKey = 'FRT_SLL' then oc.Value else 0 end) [Freight],
sum(case when ci.ChargeItemKey = 'WTY_SLL' then oc.Value else 0 end) [Warranty],
sum(case when ci.ChargeType = 4 then oc.Value else 0 end) [Total]
from InvoiceCharges ic
left join OptionCharges oc on ic.OptionChargeKey = oc.OptionChargeKey
left join ChargeItems ci on oc.ChargeItemKey = ci.ChargeItemKey
group by invoicekey
) ic on ic.invoicekey = i.InvoiceKey
left join (
select OptionKey [OptionKey],
sum(case when ci1.ChargeItemKey = 'FRT_TX1' then oc1.Value else 0 end) [FreightGst],
sum(case when ci1.ChargeItemKey = 'WTY_TX1' then oc1.Value else 0 end) [WarrantyGst],
sum(case when ci1.ChargeType = 3 then oc1.Value else 0 end) [TotalGst]
from OptionCharges oc1
left join ChargeItems ci1 on oc1.ChargeItemKey = ci1.ChargeItemKey
group by optionkey
) charges on charges.OptionKey = c.SelectedOptionKey
where
j.SaleDate >= '20-May-2013'
and
j.operationalstorekey = dbo.StoreCode('AU-WEB')
and j.saledate is not null and SelectedOptionKey is not null
group by j.jobkey
) sales
group by convert(varchar(10), sales.saledate, 103) WITH ROLLUP
)
/* end of definition of thisTable */
/* here's your select */
Select ISNULL([SaleDate],'Totals') AS [Totals], TempSaleDate, Domestic, Export, Import, TotalJobs,
'$' + CAST(CAST(Value AS DECIMAL(10,2)) AS VARCHAR(15)) Value,
'$' + CAST(CAST(ValueNOGST AS DECIMAL(10,2)) AS VARCHAR(15)) ValueNOGST,
Cancelled,
'$' + CAST(CAST(CancelledValue AS DECIMAL(10,2)) AS VARCHAR(15)) CancelledValue,
'$' + CAST(CAST(CancelledValueNOGST AS DECIMAL(10,2)) AS VARCHAR(15)) CancelledValueNOGST,
'$' + CAST(CAST(TotalValue AS DECIMAL(10,2)) AS VARCHAR(15)) TotalValue,
'$' + CAST(CAST(TotalValueNOGST AS DECIMAL(10,2)) AS VARCHAR(15)) TotalValueNOGST,
/* here's the AVG() */
(select AVG(TotalValue) from thisTable) as FFF,
TotalGST, TotalValue+TotalGST TotalWithNOGSTCheck
FROM thisTable
Use a windowed AVG on an aggregate SUM.
```
SELECT
DATEADD(dd, DATEDIFF(dd, 0, saledate), 0)
, SUM(something) AS Amount
, ?? AS TRVal -- no idea what it is
, AVG(SUM(something)) OVER () AS AvgVal
FROM MyTable -- or whatever
GROUP BY DATEDIFF(dd, 0, saledate)
``` | How to calculate average of a column and then include it in a select query in SQL | [
"",
"sql",
"sql-server-2008",
""
] |
My application: I am trying to rotate an image (using OpenCV and Python)

At the moment I have developed the below code which rotates an input image, padding it with black borders, giving me A. What I want is B - the largest possible area crop window within the rotated image. I call this the axis-aligned boundED box.
This is essentially the same as [Rotate and crop](https://stackoverflow.com/questions/16255037/rotate-and-crop), however I cannot get the answer on that question to work. Additionally, that answer is apparently only valid for square images. My images are rectangular.
Code to give A:
```
import cv2
import numpy as np

def getTranslationMatrix2d(dx, dy):
    """
    Returns a numpy affine transformation matrix for a 2D translation of
    (dx, dy)
    """
    return np.matrix([[1, 0, dx], [0, 1, dy], [0, 0, 1]])

def rotateImage(image, angle):
    """
    Rotates the given image about its centre
    """

    image_size = (image.shape[1], image.shape[0])
    image_center = tuple(np.array(image_size) / 2)

    rot_mat = np.vstack([cv2.getRotationMatrix2D(image_center, angle, 1.0), [0, 0, 1]])
    trans_mat = np.identity(3)

    w2 = image_size[0] * 0.5
    h2 = image_size[1] * 0.5

    rot_mat_notranslate = np.matrix(rot_mat[0:2, 0:2])

    tl = (np.array([-w2, h2]) * rot_mat_notranslate).A[0]
    tr = (np.array([w2, h2]) * rot_mat_notranslate).A[0]
    bl = (np.array([-w2, -h2]) * rot_mat_notranslate).A[0]
    br = (np.array([w2, -h2]) * rot_mat_notranslate).A[0]

    x_coords = [pt[0] for pt in [tl, tr, bl, br]]
    x_pos = [x for x in x_coords if x > 0]
    x_neg = [x for x in x_coords if x < 0]

    y_coords = [pt[1] for pt in [tl, tr, bl, br]]
    y_pos = [y for y in y_coords if y > 0]
    y_neg = [y for y in y_coords if y < 0]

    right_bound = max(x_pos)
    left_bound = min(x_neg)
    top_bound = max(y_pos)
    bot_bound = min(y_neg)

    new_w = int(abs(right_bound - left_bound))
    new_h = int(abs(top_bound - bot_bound))
    new_image_size = (new_w, new_h)

    new_midx = new_w * 0.5
    new_midy = new_h * 0.5

    dx = int(new_midx - w2)
    dy = int(new_midy - h2)

    trans_mat = getTranslationMatrix2d(dx, dy)
    affine_mat = (np.matrix(trans_mat) * np.matrix(rot_mat))[0:2, :]
    result = cv2.warpAffine(image, affine_mat, new_image_size, flags=cv2.INTER_LINEAR)

    return result
``` | So, after investigating many claimed solutions, I have finally found a method that works; The answer by [Andri](https://stackoverflow.com/users/671973/andri) and [Magnus Hoff](https://stackoverflow.com/users/2971/magnus-hoff) on [Calculate largest rectangle in a rotated rectangle](https://stackoverflow.com/questions/5789239/calculate-largest-rectangle-in-a-rotated-rectangle#7519376).
The below Python code contains the method of interest - `largest_rotated_rect` - and a short demo.
```
import math
import cv2
import numpy as np

def rotate_image(image, angle):
    """
    Rotates an OpenCV 2 / NumPy image about its centre by the given angle
    (in degrees). The returned image will be large enough to hold the entire
    new image, with a black background
    """

    # Get the image size
    # No that's not an error - NumPy stores image matrices backwards
    image_size = (image.shape[1], image.shape[0])
    image_center = tuple(np.array(image_size) / 2)

    # Convert the OpenCV 3x2 rotation matrix to 3x3
    rot_mat = np.vstack(
        [cv2.getRotationMatrix2D(image_center, angle, 1.0), [0, 0, 1]]
    )

    rot_mat_notranslate = np.matrix(rot_mat[0:2, 0:2])

    # Shorthand for below calcs
    image_w2 = image_size[0] * 0.5
    image_h2 = image_size[1] * 0.5

    # Obtain the rotated coordinates of the image corners
    rotated_coords = [
        (np.array([-image_w2,  image_h2]) * rot_mat_notranslate).A[0],
        (np.array([ image_w2,  image_h2]) * rot_mat_notranslate).A[0],
        (np.array([-image_w2, -image_h2]) * rot_mat_notranslate).A[0],
        (np.array([ image_w2, -image_h2]) * rot_mat_notranslate).A[0]
    ]

    # Find the size of the new image
    x_coords = [pt[0] for pt in rotated_coords]
    x_pos = [x for x in x_coords if x > 0]
    x_neg = [x for x in x_coords if x < 0]

    y_coords = [pt[1] for pt in rotated_coords]
    y_pos = [y for y in y_coords if y > 0]
    y_neg = [y for y in y_coords if y < 0]

    right_bound = max(x_pos)
    left_bound = min(x_neg)
    top_bound = max(y_pos)
    bot_bound = min(y_neg)

    new_w = int(abs(right_bound - left_bound))
    new_h = int(abs(top_bound - bot_bound))

    # We require a translation matrix to keep the image centred
    trans_mat = np.matrix([
        [1, 0, int(new_w * 0.5 - image_w2)],
        [0, 1, int(new_h * 0.5 - image_h2)],
        [0, 0, 1]
    ])

    # Compute the tranform for the combined rotation and translation
    affine_mat = (np.matrix(trans_mat) * np.matrix(rot_mat))[0:2, :]

    # Apply the transform
    result = cv2.warpAffine(
        image,
        affine_mat,
        (new_w, new_h),
        flags=cv2.INTER_LINEAR
    )

    return result

def largest_rotated_rect(w, h, angle):
    """
    Given a rectangle of size wxh that has been rotated by 'angle' (in
    radians), computes the width and height of the largest possible
    axis-aligned rectangle within the rotated rectangle.

    Original JS code by 'Andri' and Magnus Hoff from Stack Overflow
    Converted to Python by Aaron Snoswell
    """

    quadrant = int(math.floor(angle / (math.pi / 2))) & 3
    sign_alpha = angle if ((quadrant & 1) == 0) else math.pi - angle
    alpha = (sign_alpha % math.pi + math.pi) % math.pi

    bb_w = w * math.cos(alpha) + h * math.sin(alpha)
    bb_h = w * math.sin(alpha) + h * math.cos(alpha)

    gamma = math.atan2(bb_w, bb_w) if (w < h) else math.atan2(bb_w, bb_w)

    delta = math.pi - alpha - gamma

    length = h if (w < h) else w

    d = length * math.cos(alpha)
    a = d * math.sin(alpha) / math.sin(delta)

    y = a * math.cos(gamma)
    x = y * math.tan(gamma)

    return (
        bb_w - 2 * x,
        bb_h - 2 * y
    )

def crop_around_center(image, width, height):
    """
    Given a NumPy / OpenCV 2 image, crops it to the given width and height,
    around its centre point
    """

    image_size = (image.shape[1], image.shape[0])
    image_center = (int(image_size[0] * 0.5), int(image_size[1] * 0.5))

    if(width > image_size[0]):
        width = image_size[0]

    if(height > image_size[1]):
        height = image_size[1]

    x1 = int(image_center[0] - width * 0.5)
    x2 = int(image_center[0] + width * 0.5)
    y1 = int(image_center[1] - height * 0.5)
    y2 = int(image_center[1] + height * 0.5)

    return image[y1:y2, x1:x2]

def demo():
    """
    Demos the largest_rotated_rect function
    """

    image = cv2.imread("lenna_rectangle.png")
    image_height, image_width = image.shape[0:2]

    cv2.imshow("Original Image", image)

    print("Press [enter] to begin the demo")
    print("Press [q] or Escape to quit")

    key = cv2.waitKey(0)
    if key == ord("q") or key == 27:
        exit()

    for i in np.arange(0, 360, 0.5):
        image_orig = np.copy(image)
        image_rotated = rotate_image(image, i)
        image_rotated_cropped = crop_around_center(
            image_rotated,
            *largest_rotated_rect(
                image_width,
                image_height,
                math.radians(i)
            )
        )

        key = cv2.waitKey(2)
        if(key == ord("q") or key == 27):
            exit()

        cv2.imshow("Original Image", image_orig)
        cv2.imshow("Rotated Image", image_rotated)
        cv2.imshow("Cropped Image", image_rotated_cropped)

    print("Done")

if __name__ == "__main__":
    demo()
```

Simply place [this image](https://i.stack.imgur.com/vWdWB.png) (cropped to demonstrate that it works with non-square images) in the same directory as the above file, then run it. | The math behind this solution/implementation is equivalent to [this solution of an analagous question](https://stackoverflow.com/questions/5789239/calculate-largest-rectangle-in-a-rotated-rectangle#7519376), but the formulas are simplified and avoid singularities. This is python code with the same interface as `largest_rotated_rect` from the other solution, but giving a bigger area in almost all cases (always the proven optimum):
```
import math

def rotatedRectWithMaxArea(w, h, angle):
    """
    Given a rectangle of size wxh that has been rotated by 'angle' (in
    radians), computes the width and height of the largest possible
    axis-aligned rectangle (maximal area) within the rotated rectangle.
    """
    if w <= 0 or h <= 0:
        return 0, 0

    width_is_longer = w >= h
    side_long, side_short = (w, h) if width_is_longer else (h, w)

    # since the solutions for angle, -angle and 180-angle are all the same,
    # it suffices to look at the first quadrant and the absolute values of sin,cos:
    sin_a, cos_a = abs(math.sin(angle)), abs(math.cos(angle))
    if side_short <= 2. * sin_a * cos_a * side_long or abs(sin_a - cos_a) < 1e-10:
        # half constrained case: two crop corners touch the longer side,
        # the other two corners are on the mid-line parallel to the longer line
        x = 0.5 * side_short
        wr, hr = (x / sin_a, x / cos_a) if width_is_longer else (x / cos_a, x / sin_a)
    else:
        # fully constrained case: crop touches all 4 sides
        cos_2a = cos_a * cos_a - sin_a * sin_a
        wr, hr = (w * cos_a - h * sin_a) / cos_2a, (h * cos_a - w * sin_a) / cos_2a

    return wr, hr
```
Here is a comparison of the function with the other solution:
```
>>> wl,hl = largest_rotated_rect(1500,500,math.radians(20))
>>> print (wl,hl),', area=',wl*hl
(828.2888697391496, 230.61639227890998) , area= 191016.990904
>>> wm,hm = rotatedRectWithMaxArea(1500,500,math.radians(20))
>>> print (wm,hm),', area=',wm*hm
(730.9511000407718, 266.044443118978) , area= 194465.478358
```
With angle `angle` in `[0,pi/2[` the bounding box of the rotated image (width `w`, height `h`) has these dimensions:
* width `w_bb = w*cos_a + h*sin_a`
* height `h_bb = w*sin_a + h*cos_a`
If `w_r`, `h_r` are the computed optimal width and height of the cropped image, then the insets from the bounding box are:
* in horizontal direction: `(w_bb-w_r)/2`
* in vertical direction: `(h_bb-h_r)/2`
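Numerically, the insets can be checked against the 1500×500, 20° comparison above (the `wr`, `hr` values are hard-coded here from that comparison so the snippet stands alone):

```python
import math

# Example values from the comparison above: a 1500x500 image rotated by 20 degrees
w, h = 1500, 500
angle = math.radians(20)
wr, hr = 730.9511000407718, 266.044443118978  # rotatedRectWithMaxArea(w, h, angle)

sin_a, cos_a = abs(math.sin(angle)), abs(math.cos(angle))
w_bb = w * cos_a + h * sin_a    # bounding box width of the rotated image
h_bb = w * sin_a + h * cos_a    # bounding box height

inset_x = (w_bb - wr) / 2       # horizontal inset of the crop window
inset_y = (h_bb - hr) / 2       # vertical inset

print(round(wr * hr, 3))        # maximal crop area, matching the comparison above
```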
**Proof:**
Looking for the axis aligned rectangle between two parallel lines that has maximal area is an optimization problem with one parameter, e.g. `x` as in this figure:

Let `s` denote the distance between the two parallel lines (it will turn out to be the shorter side of the rotated rectangle). Then the sides `a`, `b` of the sought-after rectangle have a constant ratio with `x`, `s-x`, resp., namely x = a sin α and (s-x) = b cos α:

So maximizing the area `a*b` means maximizing `x*(s-x)`. Because of "theorem of height" for right-angled triangles we know `x*(s-x) = p*q = h*h`. Hence the maximal area is reached at `x = s-x = s/2`, i.e. the two corners E, G between the parallel lines are on the mid-line:

This solution is only valid if this maximal rectangle fits into the rotated rectangle. Therefore the diagonal `EG` must not be longer than the other side `l` of the rotated rectangle. Since
EG = AF + DH = s/2·(cot α + tan α) = s/(2·sin α·cos α) = s/sin 2α
we have the condition s ≤ l·sin 2α, where s and l are the shorter and longer sides of the rotated rectangle.
In the case s > l·sin 2α the parameter `x` must be smaller (than s/2), such that all corners of the sought-after rectangle are each on a side of the rotated rectangle. This leads to the equation
x·cot α + (s−x)·tan α = l
giving x = sin α·(l·cos α − s·sin α)/cos 2α. From a = x/sin α and b = (s−x)/cos α we get the formulas used above.
"",
"python",
"algorithm",
"opencv",
"aabb",
""
] |
I have table t1.
```
table t1
id post_id tags
1 null a
2 1 null
3 1 null
4 null b
```
I want to update tags where post\_id = id.
I tried a query it is giving me zero output.
post\_id is always null when tags exists and tags is always null when post\_id exists
```
update t1 set tags = tags where post_id = id;
```
Can u guys frame it properly for me. Please help me | `update t1 a join t1 b on a.id = b.post_id set b.tag = a.tag` | ```
update t1 set tags = tags where post_id = id;
```
No record will be updated here, because `tags = tags` sets the column to its own value. | Update column from same table column | [
"",
"mysql",
"sql",
""
] |
Is it possible to yield the following using comprehension, I have tried getting both values a,b etc.. but the only way I know is through indexing and when I do that I get string index out of range.
```
path = ['a', 'b', 'c', 'd', 'e']
```
--
```
a, b
b, c
c, d
d, e
``` | You can use `zip` here:
```
>>> lis = ['a', 'b', 'c', 'd', 'e']
>>> for x,y in zip(lis,lis[1:]):
...     print(x, y)
...
a b
b c
c d
d e
``` | [`itertools` pairwise recipe](http://docs.python.org/2/library/itertools.html#recipes) works on any iterable
```
from itertools import tee

def pairwise(iterable):
    "s -> (s0,s1), (s1,s2), (s2, s3), ..."
    a, b = tee(iterable)
    next(b, None)
    return zip(a, b)  # use izip(a, b) on Python 2
path = ['a', 'b', 'c', 'd', 'e']
>>> for x, y in pairwise(path):
        print(x, y)
a b
b c
c d
d e
>>> list(pairwise(path))
[('a', 'b'), ('b', 'c'), ('c', 'd'), ('d', 'e')]
``` | getting two values using comprehension | [
"",
"python",
"list-comprehension",
""
] |
I've been staring at this for a moment and think I'm not perceiving the obvious.
The resulting display is 1064 (mysql reference says it's a syntax error)
```
$query = "INSERT INTO members ( id , username , password , all , articles ) VALUES ( ";
$query .= "'' , " ;
$query .= $username . "' , '" ;
$query .= $password . "' , '" ;
$query .= $allVals . "' , ";
$query .= "'' );";
$result = mysqli_query($con, $query);
if (mysqli_errno($con)){
echo mysqli_errno($con);
echo mysqli_connect_error($con);
}
```
I should note that $allVals is an encoded JSON object.
What's wrong with **my query**? | It looks like there is a single quote after `$username`, but not before:
```
$query = "INSERT INTO members ( id , username , password , all , articles ) VALUES ( ";
$query .= "'' , '" ; //missed the quote here
$query .= $username . "' , '" ;
$query .= $password . "' , '" ;
$query .= $allVals . "' , ";
$query .= "'' );";
$result = mysqli_query($con, $query);
if (mysqli_errno($con)){
echo mysqli_errno($con);
echo mysqli_connect_error($con);
}
``` | ```
$query .= "'' , " ;
```
You miss here a single-quote.
```
$query .= "'' , '" ;
```
Should do the job.
I'd also consider using prepared statements to better see where your syntax error may be; when you try to build your query like this, it will probably be more difficult to debug.
```
$stmt = $con->prepare("INSERT INTO members ( id , username , password , all , articles ) VALUES ( '', ?, ?, ?, '')");
$stmt->bind_param("sss", $username, $password, $allVals);
$stmt->execute();
/* ... */
``` | Head busting MYSQL error 1064 What's Wrong with my Syntax? | [
"",
"mysql",
"sql",
"syntax",
"mysql-error-1064",
""
] |
I have a problem when running this code:
```
>>> from selenium import webdriver
>>> driver = webdriver.firefox()
Traceback (most recent call last):
File "<pyshell#19>", line 1, in <module>
driver = webdriver.firefox()
TypeError: 'module' object is not callable
```
I have searched for the problem and found some results, but unfortunately they didn't work. So how can I solve this?
thanks. | You have made a typo.
```
webdriver.Firefox()
```
Note the capital F. | the same goes for other browsers!
e.g.
```
webdriver.chrome Vs. webdriver.Chrome
```
(it's even harder to notice this!)
thanks so much for the help! ;) | TypeError: 'module' object is not callable ( when importing selenium ) | [
"",
"python",
"firefox",
"selenium",
"webdriver",
"typeerror",
""
] |
I have made a simple n-body simulator and I plot/animate the movement with the following code:
```
for i in range(N):
[...]
x = [ Rbod[j][0], Rbod[j][0]]
y = [ Rbod[j][1], Rbod[j][1]]
#print(R1, V1, A1, F12)
if i%10 == 0:
print(i)
pylab.ion()
pylab.scatter( x, y, c=(j/nbodies,j/nbodies,j/nbodies) )
pylab.axis([-400, 400, -400, 400])
pylab.draw()
```
Now I would really like to save the animation as a gif. Is this possible? The internet vaguely said that it was but not on how to do it with `pylab`.
Example of 4 body interaction:
 | I solved it by using [ffmpeg](http://www.ffmpeg.org/), a conversion programme ran trough the commandline.
So first I save all the seperate pictures and then make them into a avi and the avi into a gif.
```
print(i)
#pylab.ion()
pylab.scatter( x, y, c=(j/nbodies,j/nbodies,j/nbodies) )
pylab.axis([-400, 400, -400, 400])
#pylab.draw()
pylab.savefig('picture'+str(i))
os.chdir('C://Users/Alex')
subprocess.call(['ffmpeg', '-i', 'picture%d0.png', 'output.avi'])
subprocess.call(['ffmpeg', '-i', 'output.avi', '-t', '5', 'out.gif'])
``` | Check out this tutorial on animation in matplotlib. <http://nbviewer.ipython.org/urls/raw.github.com/jakevdp/matplotlib_pydata2013/master/notebooks/05_Animations.ipynb>
A quick search shows how to create an animation as mp4, you can then convert it with a third-party tool to a format you desire.
```
from matplotlib import animation
# call the animator.
...
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=200, interval=20, blit=True)
...
anim.save('basic_animation.mp4', fps=30)
``` | Animated graph to Animated Gif (Python) | [
"",
"python",
"matplotlib",
"plot",
"animated-gif",
""
] |
I have a long list of float numbers ranging from 1 to 5, called "average", and I want to return the list of indices for elements that are smaller than a or larger than b
```
def find(lst,a,b):
result = []
for x in lst:
if x<a or x>b:
i = lst.index(x)
result.append(i)
return result
matches = find(average,2,4)
```
But surprisingly, the output for "matches" has a lot of repetitions in it, e.g. `[2, 2, 10, 2, 2, 2, 19, 2, 10, 2, 2, 42, 2, 2, 10, 2, 2, 2, 10, 2, 2, ...]`.
Why is this happening? | You are using `.index()` which will only find the **first** occurrence of your value in the list. So if you have a value 1.0 at index 2, and at index 9, then `.index(1.0)` will *always* return `2`, no matter how many times `1.0` occurs in the list.
Use [`enumerate()`](http://docs.python.org/2/library/functions.html#enumerate) to add indices to your loop instead:
```
def find(lst, a, b):
result = []
for i, x in enumerate(lst):
if x<a or x>b:
result.append(i)
return result
```
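For example, with a small made-up list (note the duplicated 1.5, which `lst.index()` would have mishandled), each matching index now appears exactly once; the function is repeated here so the snippet is self-contained:

```python
def find(lst, a, b):
    # enumerate pairs each value with its own index, so duplicates are fine
    result = []
    for i, x in enumerate(lst):
        if x < a or x > b:
            result.append(i)
    return result

print(find([3.0, 1.5, 4.5, 2.5, 1.5], 2, 4))  # [1, 2, 4]
```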
You can collapse this into a list comprehension:
```
def find(lst, a, b):
return [i for i, x in enumerate(lst) if x<a or x>b]
``` | if you're doing a lot of this kind of thing you should consider using `numpy`.
```
In [56]: import random, numpy
In [57]: lst = numpy.array([random.uniform(0, 5) for _ in range(1000)]) # example list
In [58]: a, b = 1, 3
In [59]: numpy.flatnonzero((lst > a) & (lst < b))[:10]
Out[59]: array([ 0, 12, 13, 15, 18, 19, 23, 24, 26, 29])
```
In response to Seanny123's question, I used this timing code:
```
import numpy, timeit, random
a, b = 1, 3
lst = numpy.array([random.uniform(0, 5) for _ in range(1000)])
def numpy_way():
numpy.flatnonzero((lst > 1) & (lst < 3))[:10]
def list_comprehension():
[e for e in lst if 1 < e < 3][:10]
print timeit.timeit(numpy_way)
print timeit.timeit(list_comprehension)
```
The numpy version is over 60 times faster. | Finding the indices of matching elements in list in Python | [
"",
"python",
"list",
"indexing",
"find",
""
] |
```
class C:
def __init__(self,n,x):
self.n = n
self.x = x
a = C('a',1)
b = C('b',2)
c = C('c',3)
classList = [b,a,c]
for q in classList: print q.n,
classList.sort(lambda a,b: long(a.x - b.x))
for q in classList: print q.n,
```
Running the code above produces the error `TypeError: comparison function must return int, not long`.
Is there another clean way to sort class objects by certain class variables? | Use the built-in `cmp` function: `cmp(a.x, b.x)`
By the way, you can also utilize the `key` parameter of `sort`:
`classList.sort(key=lambda c: c.x)`
which is faster.
According to [wiki.python.org](http://wiki.python.org/moin/HowTo/Sorting/):
> This technique is fast because the key function is called exactly once
> for each input record. | I don't think you need `long`
```
class C:
def __init__(self,n,x):
self.n = n
self.x = x
a = C('a',1)
b = C('b',2)
c = C('c',3)
classList = [b,a,c]
for q in classList: print q.n,
classList.sort(lambda a,b: a.x - b.x)
for q in classList: print q.n,
```
Output:
```
b a c a b c
``` | comparison function must return int, not long | [
"",
"python",
"sorting",
""
] |
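The `key` form from the accepted answer is also the only one that survives in Python 3, where comparison functions were removed from `sort()` entirely; a self-contained sketch:

```python
class C:
    def __init__(self, n, x):
        self.n = n
        self.x = x

class_list = [C('b', 2), C('a', 1), C('c', 3)]
class_list.sort(key=lambda c: c.x)  # no cmp function, no int/long worries
print([c.n for c in class_list])  # ['a', 'b', 'c']
```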
Python AST nodes have `lineno` and `col_offset` attributes, which indicate the beginning of respective code range. Is there an easy way to get also the end of the code range? A 3rd party library? | We had a similar need, and I created the [asttokens](https://github.com/gristlabs/asttokens) library for this purpose. It maintains the source in both text and tokenized form, and marks AST nodes with token information, from which text is also readily available.
It works with Python 2 and 3 (tested with 2.7 and 3.5). For example:
```
import ast, asttokens
st='''
def greet(a):
say("hello") if a else say("bye")
'''
atok = asttokens.ASTTokens(st, parse=True)
for node in ast.walk(atok.tree):
if hasattr(node, 'lineno'):
print atok.get_text_range(node), node.__class__.__name__, atok.get_text(node)
```
Prints
```
(1, 50) FunctionDef def greet(a):
say("hello") if a else say("bye")
(17, 50) Expr say("hello") if a else say("bye")
(11, 12) Name a
(17, 50) IfExp say("hello") if a else say("bye")
(33, 34) Name a
(17, 29) Call say("hello")
(40, 50) Call say("bye")
(17, 20) Name say
(21, 28) Str "hello"
(40, 43) Name say
(44, 49) Str "bye"
``` | *EDIT: Latest code (tested in Python 3.5-3.7) is here: <https://bitbucket.org/plas/thonny/src/master/thonny/ast_utils.py>*
As I didn't find an easy way, here's a hard (and probably not optimal) way. Might crash and/or work incorrectly if there are more lineno/col\_offset bugs in Python parser than those mentioned (and worked around) in the code. Tested in Python 3.3:
```
def mark_code_ranges(node, source):
"""
Node is an AST, source is corresponding source as string.
Function adds recursively attributes end_lineno and end_col_offset to each node
which has attributes lineno and col_offset.
"""
NON_VALUE_KEYWORDS = set(keyword.kwlist) - {'False', 'True', 'None'}
def _get_ordered_child_nodes(node):
if isinstance(node, ast.Dict):
children = []
for i in range(len(node.keys)):
children.append(node.keys[i])
children.append(node.values[i])
return children
elif isinstance(node, ast.Call):
children = [node.func] + node.args
for kw in node.keywords:
children.append(kw.value)
if node.starargs != None:
children.append(node.starargs)
if node.kwargs != None:
children.append(node.kwargs)
children.sort(key=lambda x: (x.lineno, x.col_offset))
return children
else:
return ast.iter_child_nodes(node)
def _fix_triple_quote_positions(root, all_tokens):
"""
http://bugs.python.org/issue18370
"""
string_tokens = list(filter(lambda tok: tok.type == token.STRING, all_tokens))
def _fix_str_nodes(node):
if isinstance(node, ast.Str):
tok = string_tokens.pop(0)
node.lineno, node.col_offset = tok.start
for child in _get_ordered_child_nodes(node):
_fix_str_nodes(child)
_fix_str_nodes(root)
# fix their erroneous Expr parents
for node in ast.walk(root):
if ((isinstance(node, ast.Expr) or isinstance(node, ast.Attribute))
and isinstance(node.value, ast.Str)):
node.lineno, node.col_offset = node.value.lineno, node.value.col_offset
def _fix_binop_positions(node):
"""
http://bugs.python.org/issue18374
"""
for child in ast.iter_child_nodes(node):
_fix_binop_positions(child)
if isinstance(node, ast.BinOp):
node.lineno = node.left.lineno
node.col_offset = node.left.col_offset
def _extract_tokens(tokens, lineno, col_offset, end_lineno, end_col_offset):
return list(filter((lambda tok: tok.start[0] >= lineno
and (tok.start[1] >= col_offset or tok.start[0] > lineno)
and tok.end[0] <= end_lineno
and (tok.end[1] <= end_col_offset or tok.end[0] < end_lineno)
and tok.string != ''),
tokens))
def _mark_code_ranges_rec(node, tokens, prelim_end_lineno, prelim_end_col_offset):
"""
Returns the earliest starting position found in given tree,
this is convenient for internal handling of the siblings
"""
# set end markers to this node
if "lineno" in node._attributes and "col_offset" in node._attributes:
tokens = _extract_tokens(tokens, node.lineno, node.col_offset, prelim_end_lineno, prelim_end_col_offset)
#tokens =
_set_real_end(node, tokens, prelim_end_lineno, prelim_end_col_offset)
# mark its children, starting from last one
# NB! need to sort children because eg. in dict literal all keys come first and then all values
children = list(_get_ordered_child_nodes(node))
for child in reversed(children):
(prelim_end_lineno, prelim_end_col_offset) = \
_mark_code_ranges_rec(child, tokens, prelim_end_lineno, prelim_end_col_offset)
if "lineno" in node._attributes and "col_offset" in node._attributes:
# new "front" is beginning of this node
prelim_end_lineno = node.lineno
prelim_end_col_offset = node.col_offset
return (prelim_end_lineno, prelim_end_col_offset)
def _strip_trailing_junk_from_expressions(tokens):
while (tokens[-1].type not in (token.RBRACE, token.RPAR, token.RSQB,
token.NAME, token.NUMBER, token.STRING,
token.ELLIPSIS)
and tokens[-1].string not in ")}]"
or tokens[-1].string in NON_VALUE_KEYWORDS):
del tokens[-1]
def _strip_trailing_extra_closers(tokens, remove_naked_comma):
level = 0
for i in range(len(tokens)):
if tokens[i].string in "({[":
level += 1
elif tokens[i].string in ")}]":
level -= 1
if level == 0 and tokens[i].string == "," and remove_naked_comma:
tokens[:] = tokens[0:i]
return
if level < 0:
tokens[:] = tokens[0:i]
return
def _set_real_end(node, tokens, prelim_end_lineno, prelim_end_col_offset):
# prelim_end_lineno and prelim_end_col_offset are the start of
# next positioned node or end of source, ie. the suffix of given
# range may contain keywords, commas and other stuff not belonging to current node
# Function returns the list of tokens which cover all its children
if isinstance(node, _ast.stmt):
# remove empty trailing lines
while (tokens[-1].type in (tokenize.NL, tokenize.COMMENT, token.NEWLINE, token.INDENT)
or tokens[-1].string in (":", "else", "elif", "finally", "except")):
del tokens[-1]
else:
_strip_trailing_extra_closers(tokens, not isinstance(node, ast.Tuple))
_strip_trailing_junk_from_expressions(tokens)
# set the end markers of this node
node.end_lineno = tokens[-1].end[0]
node.end_col_offset = tokens[-1].end[1]
# Try to peel off more tokens to give better estimate for children
# Empty parens would confuse the children of no argument Call
if ((isinstance(node, ast.Call))
and not (node.args or node.keywords or node.starargs or node.kwargs)):
assert tokens[-1].string == ')'
del tokens[-1]
_strip_trailing_junk_from_expressions(tokens)
# attribute name would confuse the "value" of Attribute
elif isinstance(node, ast.Attribute):
if tokens[-1].type == token.NAME:
del tokens[-1]
_strip_trailing_junk_from_expressions(tokens)
else:
raise AssertionError("Expected token.NAME, got " + str(tokens[-1]))
#import sys
#print("Expected token.NAME, got " + str(tokens[-1]), file=sys.stderr)
return tokens
all_tokens = list(tokenize.tokenize(io.BytesIO(source.encode('utf-8')).readline))
_fix_triple_quote_positions(node, all_tokens)
_fix_binop_positions(node)
source_lines = source.split("\n")
prelim_end_lineno = len(source_lines)
prelim_end_col_offset = len(source_lines[len(source_lines)-1])
_mark_code_ranges_rec(node, all_tokens, prelim_end_lineno, prelim_end_col_offset)
``` | How to get source corresponding to a Python AST node? | [
"",
"python",
"abstract-syntax-tree",
""
] |
I'm trying to use in Python 3.3 an old library (dating from 2003!). When I import it, Python throws me an error because there are `<>` signs in the source file, e.g.:
```
if (cnum < 1000 and nnum <> 1000 and ntext[-1] <> "s":
...
```
I guess it's a now-abandoned sign in the language.
What exactly does it mean, and which (more recent) sign should I replace it with? | It means not equal to. It was taken from `ABC` (python's predecessor) see [here](http://homepages.cwi.nl/%7Esteven/abc/qr.html#TESTS):
> `x < y, x <= y, x >= y, x > y, x = y, x <> y, 0 <= d < 10`
>
> Order tests (`<>` means *'not equals'*)
I believe `ABC` took it from Pascal, a language Guido began programming with.
It has now been removed in Python 3. Use `!=` instead. If you are **CRAZY** you can scrap `!=` and allow only `<>` in Py3K using [this easter egg](http://www.python.org/dev/peps/pep-0401/#official-acts-of-the-flufl):
```
>>> from __future__ import barry_as_FLUFL
>>> 1 != 2
File "<stdin>", line 1
1 != 2
^
SyntaxError: with Barry as BDFL, use '<>' instead of '!='
>>> 1 <> 2
True
``` | It means NOT EQUAL, but it is deprecated, use `!=` instead. | What does `<>` mean in Python? | [
"",
"python",
"syntax",
"operators",
"python-2.x",
""
] |
At the moment I have this as my code; the first line seems to work well, but the second gives errors.
```
os.chdir('C://Users/Alex/Dropbox/code stuff/test')
subprocess.call('ffmpeg -i test%d0.png output.avi')
```
Also, when I try to run it like this, it gives a one-second cmd flicker and then nothing happens:
```
os.system('ffmpeg -i test%d0.png output.avi')
``` | For the later generations looking for the answer, this worked. (You have to separate the command by the spaces.)
```
import os
import subprocess
os.chdir('C://Users/Alex/')
subprocess.call(['ffmpeg', '-i', 'picture%d0.png', 'output.avi'])
subprocess.call(['ffmpeg', '-i', 'output.avi', '-t', '5', 'out.gif'])
``` | It is better to call `subprocess.call` in another way.
The preferred way is:
```
subprocess.call(['ffmpeg', '-i', 'test%d0.png', 'output.avi'])
```
Alternatively:
```
subprocess.call('ffmpeg -i test%d0.png output.avi', shell=True)
```
You can find the reasons for this in the [manual](http://docs.python.org/2/library/subprocess.html#frequently-used-arguments). I quote:
> args is required for all calls and should be a string, or a sequence
> of program arguments. Providing a sequence of arguments is generally
> preferred, as it allows the module to take care of any required
> escaping and quoting of arguments (e.g. to permit spaces in file
> names). If passing a single string, either shell must be True (see
> below) or else the string must simply name the program to be executed
> without specifying any arguments. | Running cmd in python | [
"",
"python",
"cmd",
""
] |
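The list-of-arguments form recommended above generalizes to any program; in this sketch the Python interpreter itself stands in for ffmpeg so the example runs anywhere:

```python
import subprocess
import sys

# each argument is its own list element, so no shell quoting can go wrong
out = subprocess.check_output([sys.executable, "-c", "print('frame done')"])
print(out.decode().strip())  # frame done
```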
I got a PostgreSQL database with 4 tables:
**Table A**
```
---------------------------
| ID | B_ID | C_ID | D_ID |
---------------------------
| 1 | 1 | NULL | NULL |
---------------------------
| 2 | NULL | 1 | NULL |
---------------------------
| 3 | 2 | 2 | 1 |
---------------------------
| 4 | NULL | NULL | 2 |
---------------------------
```
**Table B**
```
-------------
| ID | DATA |
-------------
| 1 | 123 |
-------------
| 2 | 456 |
-------------
```
**Table C**
```
-------------
| ID | DATA |
-------------
| 1 | 789 |
-------------
| 2 | 102 |
-------------
```
**Table D**
```
-------------
| ID | DATA |
-------------
| 1 | 654 |
-------------
| 2 | 321 |
-------------
```
I'm trying to retrieve a result set which joins the data from table B and the data from table C, but only if at least one of the two IDs is not null.
```
SELECT "Table_A"."ID", "Table_A"."ID_B", "Table_A"."ID_C", "Table_A"."ID_D", "Table_B"."DATA", "Table_C"."DATA"
FROM "Table_A"
LEFT JOIN "Table_B" on "Table_A"."ID_B" = "Table_B"."ID"
LEFT JOIN "Table_C" on "Table_A"."ID_C" = "Table_C"."ID"
WHERE "Table_A"."ID_B" IS NOT NULL OR "Table_A"."ID_C" IS NOT NULL;
```
Is this recommended, or should I rather split this into multiple queries?
Is there a way to do an inner join between these tables?
The result I expect is:
```
-------------------------------------------------
| ID | ID_B | ID_C | ID_D | DATA (B) | DATA (C) |
-------------------------------------------------
| 1 | 1 | NULL | NULL | 123 | NULL |
-------------------------------------------------
| 2 | NULL | 1 | NULL | NULL | 789 |
-------------------------------------------------
| 3 | 2 | 2 | NULL | 456 | 102 |
-------------------------------------------------
```
**EDIT:** `ID_B`, `ID_C`, `ID_D` are foreign keys to the tables `table_b`, `table_c`, `table_d` | The `WHERE "Table_A"."ID_B" IS NOT NULL OR "Table_A"."ID_C" IS NOT NULL;` can be replaced by the corresponding clause on the B and C tables : `WHERE "Table_B"."ID" IS NOT NULL OR "Table_C"."ID" IS NOT NULL;` . This would also work if table\_a.id\_b and table\_a.id\_c are not FKs to the B and C tables. Otherwise, a table\_a row with { 5, 5,5,5} would retrieve two NULL rows from the B and C tables.
```
SELECT ta."ID" AS a_id
, ta."ID_B" AS b_id
, ta."ID_C" AS c_id
, ta."ID_D" AS d_id
, tb."DATA" AS bdata
, tc."DATA" AS cdata
FROM   "Table_A" ta
LEFT JOIN "Table_B" tb on ta."ID_B" = tb."ID"
LEFT JOIN "Table_C" tc on ta."ID_C" = tc."ID"
WHERE tb."ID" IS NOT NULL OR tc."ID" IS NOT NULL
;
``` | Since you have foreign key constraints in place, referential integrity is guaranteed and the query in your Q is **already the best answer**.
Also indexes on `Table_B.ID` and `Table_C.ID` are given.
**If** matching cases in `Table_A` are *rare* (less than ~ 5 %, depending on row width and data distribution) a [partial multi-column index](http://www.postgresql.org/docs/current/interactive/indexes-partial.html) would help performance:
```
CREATE INDEX table_a_special_idx ON "Table_A" ("ID_B", "ID_C")
WHERE "ID_B" IS NOT NULL OR "ID_C" IS NOT NULL;
```
In PostgreSQL 9.2 a covering index ([index-only scan](http://www.postgresql.org/docs/current/interactive/index-scanning.html) in Postgres parlance) might help even more - in which case you would include all columns of interest in the index (not in my example). Depends on several factors like row width and frequency of updates in your table. | Joining tables if the reference exists | [
"",
"sql",
"postgresql",
"select",
"join",
""
] |
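The accepted query can be exercised end-to-end with SQLite standing in for PostgreSQL (lowercase unquoted names; table D and `ID_D` dropped for brevity; data taken from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table_a (id INTEGER, id_b INTEGER, id_c INTEGER, id_d INTEGER);
CREATE TABLE table_b (id INTEGER, data INTEGER);
CREATE TABLE table_c (id INTEGER, data INTEGER);
INSERT INTO table_a VALUES (1, 1, NULL, NULL), (2, NULL, 1, NULL),
                           (3, 2, 2, 1), (4, NULL, NULL, 2);
INSERT INTO table_b VALUES (1, 123), (2, 456);
INSERT INTO table_c VALUES (1, 789), (2, 102);
""")
rows = con.execute("""
    SELECT ta.id, ta.id_b, ta.id_c, tb.data, tc.data
    FROM table_a ta
    LEFT JOIN table_b tb ON ta.id_b = tb.id
    LEFT JOIN table_c tc ON ta.id_c = tc.id
    WHERE ta.id_b IS NOT NULL OR ta.id_c IS NOT NULL
    ORDER BY ta.id
""").fetchall()
print(rows)  # [(1, 1, None, 123, None), (2, None, 1, None, 789), (3, 2, 2, 456, 102)]
```

The row with id 4 (both FKs null) is filtered out, matching the expected result in the question.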
In my introduction to computer science class we are learning about namespaces. I understand the concept, as in: when importing a module like math, we are importing a namespace and the attributes under that namespace. But executing the process is very confusing to me. Here is one of the questions that I have no idea how to begin executing:
Write a function, `name_add(a,b)`, that uses exception handling (just a simple `try/except` statement) to add two objects, a and b, and return the result. If the user calls the function with any types for which the `+ operator` is not defined, the function should print a message stating that the addition operator is undefined between `type(a)` and `type(b)` (whatever those types are).
If someone could explain step by step what this function should look like or what they are asking in beginners terms, I would greatly appreciate it because I am not really understanding this at all or the relation it has to namespaces. | You need to catch the TypeError exception. This is the answer to the question:
```
def name_add(a,b):
try:
return a+b
except TypeError:
print 'The + operator is not defined for a and b'
return None
```
As Lattyware commented, catching the exception and just printing a message is *not* a good practice. You should either:
* Solve the exception in the function so that the function can go on and produce a sensible result (Which is not possible in this case).
* Let the program catch the exception at a higher level.
Look at this fragment of one of my programs. This is the top level function:
```
def main(argv):
"""Main program for the nc2pdf utility.
:argv: command line arguments
"""
if len(argv) == 1: # No filenames given, only the name of the script
binary = os.path.basename(argv[0])
print __proginfo__
print "Usage: {} [file ...]".format(binary)
print
sys.exit(0)
del argv[0]
for fn in argv: # Loop over all the files
try:
ofn = outname(fn) # outname can raise ValueError...
with open(fn, 'r') as inf: # Open can raise IOError
rd = inf.read()
except ValueError:
fns = "Cannot construct output filename. Skipping file '{}'."
print fns.format(fn)
continue
except IOError:
print "Cannot open the file '{}'. Skipping it.".format(fn)
continue
... # do something with the file's data
```
In this case, the exception can be handled by skipping (not processing) one of the files named on the command line and moving on to the next file. Not handling the exception here would crash the program, even though other files might still be processed. A filename can be misspelled, or the process may not have access permission to the file. These things happen and should be handled gracefully. | ```
>>> foo = 1
>>>
>>> def bar():
... global foo
... foo = 2
...
>>> bar()
>>> foo
2
```
If you're not trying to modify the global variable, simply use:
```
# Inside yourfuncs.py
def adder(a, b):
"""
Returns the sum of a and b, or raises an exception.
"""
try:
return a + b
except TypeError:
print 'Oops'
raise
```
Then:
```
>>> import yourfuncs
>>> x = adder(1, 2)
>>> x
3
>>> addr({}, 1)
Oops
# traceback omitted
TypeError: unsupported operand type(s) for +: 'dict' and 'int'
```
Note that I didn't stop the error from being propagated. Don't return the sum when it works, and then an error string when it doesn't. That's bad design. If you want to raise your own exception, that's a fine idea. | Edit: Namespaces, and Exception Handling | [
"",
"python",
""
] |
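Pulling the accepted answer's advice into one runnable Python 3 sketch of the exercise (the exact message wording is my guess at what the assignment wants):

```python
def name_add(a, b):
    """Add two objects, or report that + is undefined for their types."""
    try:
        return a + b
    except TypeError:
        print("The addition operator is undefined between"
              " {} and {}".format(type(a), type(b)))
        return None

print(name_add(1, 2))       # 3
print(name_add("ab", "c"))  # abc
print(name_add({}, 1))      # prints the message, then None
```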
I'm working on a web application (Python/Django) which handles a big database, and I need to optimize this loop in order to obtain a better execution time.
I have a list of entries, each entry has a yes\_count attribute, a no\_count attribute and a tid attribute.
I need to create two new lists depending on the ratio = yes\_count / (yes\_count + no\_count).
Is there a better way to do it using built-in functions (or something even faster)?
```
yes_entries = []
no_entries = []
for e in entries:
if e.tid in tids:
if e.yes_count > 0 or e.no_count > 0:
ratio = e.yes_count / (e.yes_count + e.no_count)
if ratio > 0.75:
yes_entries.append(e.tid)
elif ratio < 0.25:
no_entries.append(e.tid)
``` | I would suggest making `tids` into a set for O(1) amortized lookup speed (as opposed to O(N) for lists):
```
set_tids = set(tids)
```
before the `for` loop, and then
```
if e.tid in set_tids
```
Otherwise the rest of the code you have given looks pretty optimized. | You can also save some time by only accessing `e.tid`, `e.yes_count` and `e.no_count` once, and storing them in variables:
```
for e in entries:
tid = e.tid
if tid in tids:
yes_count = e.yes_count
no_count = e.no_count
if yes_count > 0 or no_count > 0:
ratio = yes_count / (yes_count + no_count)
if ratio > 0.75:
yes_entries.append(tid)
elif ratio < 0.25:
no_entries.append(tid)
```
You may also save time by caching no\_entries.append and yes\_entries.append:
```
yes_entries_append = yes_entries.append
no_entries_append = no_entries.append
for e in entries:
tid = e.tid
if tid in tids:
yes_count = e.yes_count
no_count = e.no_count
if yes_count > 0 or no_count > 0:
ratio = yes_count / (yes_count + no_count)
if ratio > 0.75:
yes_entries_append(tid)
elif ratio < 0.25:
no_entries_append(tid)
```
But at that point, you're maybe starting to get silly.
Another, probably even sillier, thing to try, is to see if using a filter is faster. In python2, filter returns a list, meaning you're iterating over it twice, which is less than ideal. However, we've got itertools to help us out there:
```
def filterfunc(e):
    return (e.tid in tids) and (e.yes_count > 0 or e.no_count > 0)
for e in itertools.ifilter(filterfunc, entries):
tid = e.tid
yes_count = e.yes_count
no_count = e.no_count
ratio = yes_count / (yes_count + no_count)
if ratio > 0.75:
yes_entries_append(tid)
elif ratio < 0.25:
no_entries_append(tid)
```
The next problem is that we're accessing the fields on e twice again. Let's fix that with some iterator magic:
```
def filterfunc(t):
tid, yes_count, no_count = t
return (tid in tids) and (yes_count > 0 or no_count > 0)
for tid, yes_count, no_count in itertools.ifilter(filterfunc, itertools.imap(attrgetter("tid", "yes_count", "no_count"), entries)):
ratio = yes_count / (yes_count + no_count)
if ratio > 0.75:
yes_entries_append(tid)
elif ratio < 0.25:
no_entries_append(tid)
```
It's up to you and your profiler to determine the best approach from all the options I've suggested.
Also, if you're using python3, use `filter` instead of `itertools.ifilter`, as it returns a generator rather than the list of python2's version. | How to do an efficient loop with comparisons and insertions in other lists | [
"",
"python",
"django",
"list",
"loops",
""
] |
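Combining the accepted set conversion with the original loop into one testable function (plain dicts stand in for the Django model instances; the data is made up):

```python
def split_entries(entries, tids):
    tids = set(tids)  # O(1) membership tests instead of O(N) list scans
    yes_entries, no_entries = [], []
    for e in entries:
        if e["tid"] in tids:
            total = e["yes_count"] + e["no_count"]
            if total > 0:
                ratio = e["yes_count"] / total
                if ratio > 0.75:
                    yes_entries.append(e["tid"])
                elif ratio < 0.25:
                    no_entries.append(e["tid"])
    return yes_entries, no_entries

entries = [
    {"tid": 1, "yes_count": 9, "no_count": 1},  # ratio 0.9 -> yes
    {"tid": 2, "yes_count": 1, "no_count": 9},  # ratio 0.1 -> no
    {"tid": 3, "yes_count": 5, "no_count": 5},  # ratio 0.5 -> neither
    {"tid": 4, "yes_count": 9, "no_count": 1},  # tid filtered out
]
print(split_entries(entries, [1, 2, 3]))  # ([1], [2])
```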
I am trying to create a dict by reading the data from a file for further processing, but I am unable to get the code to work. I am working in Python and new to this language. My file data looks like this:
```
Name1 L1 11 P27 41
Name1 L1 13 P27 43
Name1 L2 85 O60 125
Name1 L2 07 O60 107
Name1 L2 68 O60 118
Name1 L2 17 O60 117
Name1 L2 92 O60 192
Name2 L1 04 O60 84
Name2 L1 19 Z91 139
Name2 L2 32 Z91 332
```
Now, I want to create the dict object as:
```
{
'Name1':[L1,(11,13),(41,43),P27],[L2,(85,07,68,17,92),(125,107,118,117,192),O60],
'Name2':[L1,(19),(139),Z91],[L2,(32),(332),Z91]
}
``` | A `defaultdict` is helpful for this sort of problem, it allows you to append to a dictionary entry, if an entry doesn't exist yet, it will append to an empty list and place it there, instead of throwing an exception as usual. Here's how I used it to process your data:
```
from collections import defaultdict
d=defaultdict(list)
with open("input.txt") as data:
for line in data:
line = line.strip().split()
namelist = d[line[0]]
try:
idx = [x[0] for x in namelist].index(line[1])
except:
idx = -1
if len(namelist) and idx >= 0:
namelist[idx][1].append(line[2])
namelist[idx][2].append(line[4])
else:
namelist.append([line[1], [line[2]], [line[4]], line[3]])
print d
>>> defaultdict(<type 'list'>,
{'Name2': [
['L1', ['04', '19'], ['84', '139'], 'O60'],
['L2', ['32'], ['332'], 'Z91']
],
'Name1': [
['L1', ['11', '13'], ['41', '43'], 'P27'],
['L2', ['85', '07', '68', '17', '92'], ['125', '107', '118', '117', '192'], 'O60']
]})
``` | To process the lines, use
```
with open(filename) as file_handle: # open your file
for line in file_handle: # iterate over lines
chunks = line.split() # extract parts of the lines
...
```
Now `chunks` will contain parts of your line.
You should build a [`dict`](http://docs.python.org/2/tutorial/datastructures.html#dictionaries), or even better [`defaultdict(list)`](http://docs.python.org/2/library/collections.html#collections.defaultdict) and insert the elements there. | How to create a complex type dict object | [
"",
"python",
"python-2.7",
"dictionary",
""
] |
I have 2 tables, and I want to get all the entries from the 2 tables which are not duplicates, e.g. as shown below.
```
Table1
MID, ITEM, PRICE, QUANTITY
1000 ab 10 5
2000 bc 20 6
Table2
MID, ITEM, PRICE, QUANTITY
3000 cd 30 4
1000 ed 10 7
```
Result should be
```
MID, ITEM, PRICE, QUANTITY
3000 cd 30 4
1000 ed 10 7
2000 bc 20 6
```
Kindly let me know which SQLite query can achieve this.
```
select *
from table2
union all
select *
from table1
where table1.mid not in (select mid from table2)
```
Take everything from `table2`. Then take the extra rows from `table1`, based on `mid`. | Per your comment, you could filter out rows from `Table1` that are also in `Table2`:
```
select *
from Table1 t1
where not exists
(
select *
from Table2 t2
where t1.mid = t2.mid
and t1.item = t2.item
)
union all
select *
from Table2
```
I'm assuming that `(mid, item)` is unique in each individual table. | Get all the unique entries from 2 tables | [
"",
"ios",
"sql",
"sqlite",
""
] |
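The accepted query can be verified directly from Python with the stdlib `sqlite3` module (table and column names taken from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table1 (mid INTEGER, item TEXT, price INTEGER, quantity INTEGER);
CREATE TABLE table2 (mid INTEGER, item TEXT, price INTEGER, quantity INTEGER);
INSERT INTO table1 VALUES (1000, 'ab', 10, 5), (2000, 'bc', 20, 6);
INSERT INTO table2 VALUES (3000, 'cd', 30, 4), (1000, 'ed', 10, 7);
""")
rows = con.execute("""
    SELECT * FROM table2
    UNION ALL
    SELECT * FROM table1
    WHERE table1.mid NOT IN (SELECT mid FROM table2)
    ORDER BY mid
""").fetchall()
print(rows)  # [(1000, 'ed', 10, 7), (2000, 'bc', 20, 6), (3000, 'cd', 30, 4)]
```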
Psychology experiments often require you to pseudo-randomize the trial order, so that the trials are apparently random, but you don't get too many similar trials consecutively (which could happen with a purely random ordering).
Let's say that the visual display on each trial has a colour and a size:
```
display_list = []
colours = {0: 'red', 1: 'blue', 2: 'green', 3: 'yellow'}
sizes = [1] * 20 + [2] * 20 + [3] * 20 + [4] * 20 + [5] * 20 + [6] * 20
for i in range(120):
display_list.append({'colour': colours[i % 4], 'size': sizes[i]})
print(display_list)
```
And we can look at the maximum number of consecutive trials that has the same value for either property using this function:
```
def consecutive_properties(seq, field):
longest_run = 0
prev_value = None
current_run = 0
for d in seq:
if d[field] == prev_value:
current_run += 1
else:
current_run = 1
if current_run > longest_run:
longest_run = current_run
prev_value = d[field]
return longest_run
```
Output:
```
>>> print("Consecutive colours: ", consecutive_properties(display_list, 'colour'))
('Consecutive colours: ', 1)
>>> print("Consecutive sizes: ", consecutive_properties(display_list, 'size'))
('Consecutive sizes: ', 20)
```
Are there any algorithms you know of that would allow minimizing the consecutive runs of either or both properties, or at least keep these runs below a specified length? If the latter, let's say no more than 4 in a row of the same colour or size.
---
What I've tried:
The solution I have now basically does a slightly intelligent [bogosort](http://en.wikipedia.org/wiki/Bogosort), which has to be horribly inefficient. Basically:
* You break the entire list into chunks containing all the permutations of the properties: if you break down `display_list` into chunks of length 24, each chunk has each colour paired with each size. Let's assume that the trial list can always be broken down into these permutation chunks, since you know what the permutations are from the design of the experiment.
* You choose a maximum run length per chunk
* You shuffle each chunk until the run lengths for each chunk are below the maximum value (this actually means that in the overall trial list, your runs might be double that length, since you could have a run of this length at the end of one chunk and the start of the next) | > Question: Are there any algorithms you know of that would allow minimizing the
> consecutive runs of either or both properties, or at least keep these
> runs below a specified length?
Yes. There is an easy algorithm for doing this by simply reducing the probability of a color or size being chosen if it is already occurring in a run.
```
from random import randrange
def choose(colors, numselections, maxrun):
'Repeatedly choose colors. Gradually reduce selection probability to avoid runs.'
colors = list(colors)
n = len(colors)
total = n * maxrun
current_run = 0
for _ in range(numselections):
i = randrange(total - current_run) // maxrun
yield colors[i]
colors[i], colors[-1] = colors[-1], colors[i]
current_run = current_run + 1 if i==n-1 else 1
if __name__ == '__main__':
colors = ['red', 'blue', 'green', 'yellow']
for color in choose(colors, 100, maxrun=4):
print color
```
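The cap can be sanity-checked empirically. This is a sketch of the same generator ported to Python 3 syntax, followed by a measurement of the longest run it actually produces:

```python
from random import randrange

def choose3(colors, numselections, maxrun):
    # same algorithm as above, written with Python 3 syntax
    colors = list(colors)
    n = len(colors)
    total = n * maxrun
    current_run = 0
    for _ in range(numselections):
        i = randrange(total - current_run) // maxrun
        yield colors[i]
        colors[i], colors[-1] = colors[-1], colors[i]
        current_run = current_run + 1 if i == n - 1 else 1

picks = list(choose3(['red', 'blue', 'green', 'yellow'], 1000, maxrun=3))

# measure the longest run of identical consecutive picks
longest = run = 1
for prev, cur in zip(picks, picks[1:]):
    run = run + 1 if cur == prev else 1
    longest = max(longest, run)

assert longest <= 3  # the run length never exceeds maxrun
```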
Note, this approach requires less effort than the other answers which use reselection techniques to avoid runs. Also, note the runs are faded-out gradually rather than all at once as in the other answers. | You're clearly not concerned with anything like true randomness, so if you define a distance metric and draw your sequence randomly, you can reject any new draw if its distance is "too close" to the previous draw, and simply draw again.
If you're drawing from a finite set (say, a pack of cards) then the whole set can be the draw pile, and your sort would consist of swapping two elements when a close pair is found, while also rejecting a swap partner if the swapped element would become unacceptable, so each swap step leaves the whole set improved.
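A minimal sketch of that redraw idea in Python (names are hypothetical; the "distance" criterion here is simply "would extend a run of identical values past `maxrun`"):

```python
import random

def draw_without_runs(items, maxrun=4, max_restarts=1000):
    """Shuffle items so no value repeats more than maxrun times in a row,
    by redrawing and, on a dead end, restarting from scratch."""
    for _ in range(max_restarts):
        pool = list(items)
        result = []
        while pool:
            # values whose pick would NOT extend a run past maxrun
            ok = [x for x in set(pool) if result[-maxrun:] != [x] * maxrun]
            if not ok:          # dead end: only run-extending values remain
                break
            pick = random.choice(ok)
            pool.remove(pick)
            result.append(pick)
        if not pool:            # placed everything successfully
            return result
    raise RuntimeError("constraint looks unsatisfiable")

seq = draw_without_runs(['a'] * 10 + ['b'] * 10 + ['c'] * 10, maxrun=2)
```

On a dead end (only "unacceptable" values left in the pool) it simply restarts with a fresh attempt, which in practice is rare when the criteria are easy to satisfy.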
If your criteria are not too hard to satisfy, this will terminate very quickly. | Sorting algorithm to keep equal values separated | [
"",
"python",
"algorithm",
"sorting",
""
] |
I have a database, in one of the fields occasionally I get an entry which starts off as mail.domainname.com
Is it possible using mysql and php to select only the rows from the field hostname where the first 4 characters = 'mail'? | One way to do it is to use [`LIKE`](http://dev.mysql.com/doc/refman/5.0/en/string-comparison-functions.html#operator_like)
```
SELECT * FROM tablename WHERE hostname LIKE 'mail%'
```
Another is to use [`SUBSTR()`](http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_substr)
```
SELECT * FROM tablename WHERE SUBSTR(hostname, 1, 4) ='mail'
``` | Case sensitive:
```
SELECT * FROM tbl WHERE hostname LIKE BINARY 'mail%'
```
Case insensitive:
```
SELECT * FROM tbl WHERE hostname LIKE 'mail%'
``` | Select from MYSQL where the first four characters are | [
"",
"mysql",
"sql",
""
] |
I have a certain string, or several strings.
For example:
```
a = '12345678'
b = '123456789'
```
I'm using Python 3.2. I'm trying to get the last half of a string written on backwards. If a string has an odd amount of characters, the middle character is discarded. So, what I'm trying to achieve is:
```
a_out = '8765'
b_out = '9876'
```
The way I do it is the following:
```
a_back = a[::-1]
a_out = a_back[0:len(a_back)//2]
```
The question is: is there a shorter way to do this? Can it be done in one operation instead of two? | You can calculate the index and reverse the string at the same time:
```
>>> a[:-(len(a)+1)//2:-1]
'8765'
>>> b[:-(len(b)+1)//2:-1]
'9876'
``` | You can just make it a one-liner:
```
a_out = a[::-1][0:len(a)//2]
```
It would be nice if you could do fancy indexing in a string, like we do in a `numpy` array, then another solution would be:
```
a_out = a[ range(len(a)-1, len(a)//2-1, -1) ] # NOT POSSIBLE
```
@jamylak suggested to use 'itemgetter' to accomplish an equivalent for fancy indexing in an efficient way:
```
from operator import itemgetter
items = [0,3,2,1,0,0,2,3,]
a_out = ''.join( itemgetter( *items )(a) )
#14321134
``` | Getting the last half of string written on backwards | [
"",
"python",
"python-3.x",
""
] |
Hi I am a newbie to the world of sql but struggling to get some of the basics to work.
I have a set of data that looks like this:
```
Table name: Sample
PROJECT WORK ORDER AMOUNT
-----------------------------------------
111 a 100
222 b 200
111 c 300
444 d 400
111 e 500
666 f 600
```
I want it to end up looking like this:
```
Table name: Sample
PROJECT WORK ORDER AMOUNT PROJECT AMOUNT
--------------------------------------------------------
111 e 500 900
111 c 300 900
111 a 100 900
666 f 600 600
444 d 400 600
222 b 200 200
```
Sorted by project with the greatest TOTAL amount
Group by does not work for me as it groups all projects into one, so I can't see the 3 work order lines for "Project 111"
```
PROJECT WORK ORDER AMOUNT
-----------------------------------------
111 a 900
222 b 200
444 d 400
666 f 600
```
Order by does not work as I can't get it sort it out on the basis of the greatest project value
```
Table name: Sample
PROJECT WORK ORDER AMOUNT
-----------------------------------------
666 f 600
111 e 500
444 d 400
111 c 300
222 b 200
111 a 100
```
My alternative idea was if I could create another column "Project Amount" that calculates the projects total based on values in "Project" column and I can then easily sort it by Project Amount instead to achieve the desired format
```
Table name: Sample
PROJECT WORK ORDER AMOUNT PROJECT AMOUNT
--------------------------------------------------------
111 e 500 900
111 c 300 900
111 a 100 900
666 f 600 600
444 d 400 600
222 b 200 200
```
But I am struggling how to get column "Project Amount" to calculate all the projects total value and present them on any rows that appear with the same project number.
Any advice? | ```
select *
, sum(amount) over (partition by project) as ProjAmount
from YourTable
order by
ProjAmount desc
```
[Example at SQL Fiddle.](http://sqlfiddle.com/#!6/40d57/1/0)
---
To select only the top two projects with the highest amounts, you could use `dense_rank`:
```
select *
from (
select *
, dense_rank() over (order by ProjAmount desc) as dr
from (
select *
, sum(amount) over (partition by project) as ProjAmount
from YourTable
) WithProjAmount
) WithDenseRank
where dr < 3
order by
ProjAmount desc
```
[Example at SQL Fiddle.](http://sqlfiddle.com/#!6/40d57/3/0) | A version with plain SQL subquery
```
SELECT s.*,
(SELECT SUM(Amount) FROM Sample WHERE Project = s.Project) ProjectAmount
FROM Sample s
ORDER BY ProjectAmount DESC
```
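The subquery version is easy to try locally; here is a sketch using Python's built-in sqlite3 (`WorkOrder` stands in for the original `WORK ORDER` column name):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Sample (Project INT, WorkOrder TEXT, Amount INT);
INSERT INTO Sample VALUES
  (111,'a',100),(222,'b',200),(111,'c',300),
  (444,'d',400),(111,'e',500),(666,'f',600);
""")

rows = con.execute("""
SELECT s.Project, s.WorkOrder, s.Amount,
       (SELECT SUM(Amount) FROM Sample WHERE Project = s.Project) AS ProjectAmount
FROM Sample s
ORDER BY ProjectAmount DESC, s.Amount DESC
""").fetchall()
```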
**[SQLFiddle](http://sqlfiddle.com/#!3/1bbde/2)** | Problems with group by and order by | [
"",
"sql",
"sql-server",
""
] |
I don't know if I have a good design here, but I have a class that is derived from unittest.TestCase and the way I have it set up, my code will dynamically inject a bunch of `test_*` methods into the class before invoking unittest to run through it. I use `setattr` for this. This has been working well, but now I have a situation in which I want to remove the methods I previously injected and inject a new set of methods. How can I remove all the methods in a class whose names match the pattern `test_*`? | It's called `delattr` and is documented [here](http://docs.python.org/2/library/functions.html#delattr). | ```
>>> class Foo:
def func(self):
pass
...
>>> dir(Foo)
['__doc__', '__module__', 'func']
>>> del Foo.func
>>> dir(Foo)
['__doc__', '__module__']
``` | Python - how can I dynamically remove a method from a class -- i.e. opposite of setattr | [
"",
"python",
"setattr",
""
] |
I want to wrap a test project containing C++ and OpenMP code with Cython, and build it with distutils via a `setup.py` file. The content of my file looks like this:
```
from distutils.core import setup
from distutils.extension import Extension
from Cython.Build import cythonize
from Cython.Distutils import build_ext
modules = [Extension("Interface",
["Interface.pyx", "Parallel.cpp"],
language = "c++",
extra_compile_args=["-fopenmp"],
extra_link_args=["-fopenmp"])]
for e in modules:
e.cython_directives = {"embedsignature" : True}
setup(name="Interface",
cmdclass={"build_ext": build_ext},
ext_modules=modules)
```
The `-fopenmp` flag is used with gcc to compile and link against OpenMP. However, if I just invoke
```
cls ~/workspace/CythonOpenMP/src $ python3 setup.py build
```
this flag is not recognized, because the compiler is clang:
```
running build
running build_ext
skipping 'Interface.cpp' Cython extension (up-to-date)
building 'Interface' extension
cc -Wno-unused-result -fno-common -dynamic -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I/usr/local/include -I/usr/local/opt/sqlite/include -I/usr/local/Cellar/python3/3.3.0/Frameworks/Python.framework/Versions/3.3/include/python3.3m -c Interface.cpp -o build/temp.macosx-10.8-x86_64-3.3/Interface.o -fopenmp
clang: warning: argument unused during compilation: '-fopenmp'
cc -Wno-unused-result -fno-common -dynamic -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I/usr/local/include -I/usr/local/opt/sqlite/include -I/usr/local/Cellar/python3/3.3.0/Frameworks/Python.framework/Versions/3.3/include/python3.3m -c Parallel.cpp -o build/temp.macosx-10.8-x86_64-3.3/Parallel.o -fopenmp
clang: warning: argument unused during compilation: '-fopenmp'
Parallel.cpp:24:10: warning: unknown pragma ignored [-Wunknown-pragmas]
#pragma omp parallel for
^
1 warning generated.
c++ -bundle -undefined dynamic_lookup -L/usr/local/lib -L/usr/local/opt/sqlite/lib build/temp.macosx-10.8-x86_64-3.3/Interface.o build/temp.macosx-10.8-x86_64-3.3/Parallel.o -o build/lib.macosx-10.8-x86_64-3.3/Interface.so -fopenmp
ld: library not found for -lgomp
clang: error: linker command failed with exit code 1 (use -v to see invocation)
error: command 'c++' failed with exit status 1
```
I've unsucessfully tried to specify gcc:
```
cls ~/workspace/CythonOpenMP/src $ python3 setup.py build --compiler=g++-4.7
running build
running build_ext
error: don't know how to compile C/C++ code on platform 'posix' with 'g++-4.7' compiler
```
How can I tell distutils to use gcc? | Try setting the "CC" environment variable from inside the setup.py with os.environ. | I just took a look at the `distutils` source, and the `--compiler` option expects "unix", "msvc", "cygwin", "mingw32", "bcpp", or "emx". It checks the compiler name you want by checking the `CC` environment variable. Try calling build like this:
```
CC=gcc python setup.py build
```
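Equivalently, the variable can be set from inside `setup.py` itself; this is a sketch of the `os.environ` approach (the compiler name `gcc-4.7` is an assumption, use whatever your gcc binary is called):

```python
import os

# hypothetical compiler name; adjust to how gcc is installed on your system
os.environ["CC"] = "gcc-4.7"

# distutils reads CC when it assembles the compile command line, so this
# assignment must happen before setup() / build_ext runs
```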
You don't need to set `CXX`, it doesn't check for that. | How to tell distutils to use gcc? | [
"",
"python",
"compiler-errors",
"cython",
"distutils",
""
] |
I am building an app called 'competencies'. I made changes to models, migrated the app locally, and everything worked. This is my eighth migration on the app.
I have deployed the app on heroku, so I committed changes and pushed to heroku. I can see that the changes went through, because the new migrations appear in the heroku files. When I log in to heroku and try to migrate the competencies app, I get the following error:
```
NoMigrations: Application '<module 'django.contrib.admin' from '/app/.heroku/python/lib/python2.7/site-packages/django/contrib/admin/__init__.py'>' has no migrations.
```
I have searched for this error, and I have not found anything meaningful. Can anyone suggest what I am doing wrong, or how to address the issue? | `django.contrib.admin` should not have migrations. Contrib packages are not south managed.
If you EVER ran `python manage.py schemamigration django.contrib.auth --XXX` on your local it would create the migrations folder in your local copy's install of the venv's django. However, this will never get transferred to heroku.
Test something for me: create a new copy of your site on your local machine
* new DB
* new virtualenv
* new folder w/ new clone of repo
try to run `python manage.py migrate`; if you get the same error it's because you broke your virtualenv with south.
Something else you could try IF your database models have not changed a bunch since the last working copy:
* roll your models back to the last working configuration
* delete EVERY app's migrations folder
* truncate `south_migrations` table
* run `python manage.py schemamigration --initial X` for each of your apps.
* push and `migrate --fake`
* redo your model changes
* create migrations for the model changes
* push and migrate regularly | I recently encountered this error after dumping a live database to a dev box for testing data migrations.
One of the dependencies was throwing this error (specifically `taggit`). I think that I have a different version of `taggit` on the dev box which does not have migrations, but the database I dumped had two migrations for `taggit` in `south_migrationhistory`.
I deleted the entries in `south_migrationhistory` for the problem app erroneously claiming `NoMigrations` and that solved my problem. Everything's running again. | NoMigrations error in Django | [
"",
"python",
"django",
"heroku",
"django-south",
"database-migration",
""
] |
I passed a copy of a list to a function and for some reason the original list changed. I tried everything I could, and this is either totally illogical or I did something really wrong.
```
maze="""XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXX XXXXXXXXXXXXXXXXXXXXX EXX
XXXXXXXXXXXXXX XXX XXXX XXX
XX XX XXX XXXXXXX XXXX XXXXXXX
XX XX XXXXXXXXX XX XXXXXXX XXXX XXXXXXX
XX XX XXXXXXX XXXXX XXXXXXX XXXX XXXXXXX
XX XX XXXX XXXXX XXXXX XX
XX XX XXXX XX XXXXX XXX XXXXXXXXXXXXXX
XX XX XXXX XX XXXXXXX XXX XXXX XX
XX XX XX XXX XXXXXXX XXXX XX XX
XX XXXXXXXXXX XXX XXX XXXXXXX XXXXX
XX XXXXX XXX XXXXXX XXXX XXXXX
XXXX XX XXXXX XXX XX XXXX XXXX XXX
XXXX XX XXXXX XXX XX XXXXXXXX XXXXXX XXX
XX XX XXX XXX XX XXXXX XXXX XXX
XXXX XX XXXXX XXXXX XXXXXXX XXXXX
XXXX XXXXXXXXXXXXXXXXXXXXX XXXXXXX XXX
XXXX XXX XXXXX XXX
XXXXXXXXXXXXXXXX XXXXXX XXXXXX XXXXX XXX
XXX XXXXXX XXXXXX XXXXX XXX
XXX XXXXXXXXXXXXXXXXX XX XXXXX XX
XXX XX XX XXXX XXXXXXXXXX XX
XS XXXXXXXX XXXXX XXXX XX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"""
grid=maze.split("\n")
for x in xrange(len(grid)):
grid[x]=list(grid[x])
row=len(grid)
col=len(grid[0])
def check(maze,x,z,row,col):
lim=0
if z+1<col and maze[x][z+1]=="X":
lim+=1
if x+1<row and maze[x+1][z]=="X":
lim+=1
if z-1>=0 and maze[x][z-1]=="X":
lim+=1
if x-1>=0 and maze[x-1][z]=="X":
lim+=1
return lim
def get_pos(grida,row,col):
for l in xrange(100):
for x in xrange(len(grida)):
for z in xrange(len(grida[x])):
if check(grida,x,z,row,col)>=3 and grida[x][z]!="E" and grida[x][z]!="S":
grida[x][z]="X"
else:
continue
return grida
grid_temp=grid[:]
print grid
grid_2=get_pos(grid_temp,row,col)
print grid
```
I want to know why grid is actually changing. | Your grid variable is a 2D array, i.e. a list of lists. When you do `grid[:]` you are making a new list of lists, but the elements themselves are still references to the same maze row. You want to do a deepcopy, e.g. the `deepcopy` function from the `copy` module. | Simple solution, use `copy` module instead of ad hoc copying:
```
import copy
foo = copy.deepcopy(bar)
```
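A minimal demonstration of the difference on a nested list (a sketch):

```python
import copy

grid = [['X', ' '], [' ', 'X']]

shallow = grid[:]            # new outer list, but the SAME inner row objects
deep = copy.deepcopy(grid)   # new outer list AND new inner rows

shallow[0][0] = '!'          # mutates the inner row shared with grid

assert grid[0][0] == '!'     # the "copy" changed the original
assert deep[0][0] == 'X'     # the deep copy is unaffected
```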
To answer your question, I think *why* comes down to the following bit in your code:
```
grida[x][z] = ...
```
While your `[:]` copy only copies one level deep, the assignment is two levels deep. | Why is list changing with no reason? | [
"",
"python",
""
] |
I have two tables Product and Stock that sit on two different databases, dbA and dbB respectively. They contain QTY, Quantity columns respectively.
```
QTY Quantity
--- -----
50 30
50 40
50 60
50 50
```
I want to return the sum of all the values in these two columns. 50 + 50 + .. 40 + 60 + 50 = 380
If I do this: `SELECT sum(QTY) FROM dbA.Product` I get 200 and `SELECT sum(Quantity) FROM dbB.Stock` gives me 180 but when I do something like this:
`SELECT sum(QTY) + sum(Quantity) FROM dbA.Product, dbB.Stock;` I don't get 380 but some other number, and I have no idea how it arrives at it. | That is because you are doing a cross join.
Do this instead:
```
select coalesce(asum, 0) + coalesce(bsum, 0)
from (select sum(qty) as asum from dba.Product) a cross join
(select sum(quantity) as bsum from dbB.Stock) b
```
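To see the arithmetic concretely, here is a sketch using Python's built-in sqlite3, with two tables in one database standing in for `dbA.Product` and `dbB.Stock`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Product (QTY INT);
CREATE TABLE Stock (Quantity INT);
INSERT INTO Product VALUES (50),(50),(50),(50);
INSERT INTO Stock VALUES (30),(40),(60),(50);
""")

# the cross-join version silently counts each value once per row of the
# other table: (200 + 180) * 4 = 1520 instead of 380
(wrong,) = con.execute(
    "SELECT SUM(QTY) + SUM(Quantity) FROM Product, Stock").fetchone()

# aggregating each table separately gives the intended total
(right,) = con.execute("""
    SELECT COALESCE(asum, 0) + COALESCE(bsum, 0)
    FROM (SELECT SUM(QTY) AS asum FROM Product) a
    CROSS JOIN (SELECT SUM(Quantity) AS bsum FROM Stock) b
""").fetchone()
```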
The cross join is producing every possible pair of values between the two tables for a total of 16 rows (4\*4 = 16). For each row in the first table you are getting the rows: (50, 30), (50, 40), (50, 60), (50, 50). | this works too:
```
SELECT
x.total + y.total AS total
FROM
(SELECT
SUM(quantity) AS total
FROM
dbB.stock) AS x
CROSS JOIN
(SELECT
SUM(qty) AS total
FROM
dbA.product) AS y
``` | adding values of two columns in different databases sql | [
"",
"sql",
""
] |
Could you supply a real-life example of [WSGI](http://wsgi.readthedocs.org/en/latest/) [`start_response`](http://webpython.codepoint.net/wsgi_application_interface) function? (Web-server provides that function to wsgi application)
I can't understand the **purpose** of introducing the `start_response`.
(I've read like 10 identical texts about the WSGI standard. They all say "WSGI standard is..." None of them says "WSGI is designed this way **in order to**..." :() | > Could you supply a real-life example of WSGI `start_response()` function?
Well, the `start_response()` function for [`mod_wsgi`](http://en.wikipedia.org/wiki/Mod_wsgi) is defined on [line 2678 of `mod_wsgi.c`](http://code.google.com/p/modwsgi/source/browse/mod_wsgi/mod_wsgi.c#2678)
> None of them says "WSGI is designed this way in order to..."
There doesn't seem to be much rationale for this aspect of WSGI's design in [`PEP3333`](http://www.python.org/dev/peps/pep-3333/). Looking through the [web-sig mailing list archives](http://mail.python.org/pipermail/web-sig/), I came across [this message](http://mail.python.org/pipermail/web-sig/2010-April/004366.html)...
> Some time ago I objected the decision to remove start\_response
> function from next version WSGI, using as rationale the fact that
> without start\_callable, asynchronous extension are impossible to
> support.
>
> Now I have found that removing start\_response will also make
> impossible to support coroutines (or, at least, some coroutines
> usage).
>
> [...]
...which started a long thread about the rationale for this part of the implementation which might be worth a read.
If you really want to know the origins of this aspect of the WSGI interface, you'll have to read a lot of the messages between [this initial draft](http://mail.python.org/pipermail/web-sig/2003-December/000394.html) in December 2003, and [this later draft](http://mail.python.org/pipermail/web-sig/2004-August/000518.html) in August 2004.
---
**Update**
> How would that be compatible with that other protocol?
I'm not quite sure what you mean. Ignoring all the early drafts, the WSGI 1.x interface can be used in two different ways.
The 'deprecated' method is...
```
def application(environ, start_response):
write = start_response(status, headers)
write('content block 1')
write('content block 2')
write('content block 3')
return None
```
...and the 'recommended' method is...
```
def application(environ, start_response):
start_response(status, headers)
return ['content block 1',
'content block 2',
'content block 3']
```
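To make the calling contract concrete, here is a small sketch that drives the 'recommended' form by hand, the way a WSGI server would (stdlib only; Python 3 bytes are used for the body):

```python
from wsgiref.util import setup_testing_defaults

def application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'content block 1', b'content block 2']

environ = {}
setup_testing_defaults(environ)   # fills in a minimal valid WSGI environ
captured = {}

def fake_start_response(status, headers):
    # a real server would store these and emit them before the body
    captured['status'] = status
    captured['headers'] = headers

body = b''.join(application(environ, fake_start_response))
```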
Presumably, you could use both, with...
```
def application(environ, start_response):
write = start_response(status, headers)
write('content block 1')
return ['content block 2',
'content block 3']
```
...but the resulting behavior may be undefined.
By the looks of [this blog post](http://dirtsimple.org/2007/02/wsgi-middleware-considered-harmful.html), the new WSGI 2.x method being considered is...
```
def application(environ):
return (status,
headers,
['content block 1',
'content block 2',
'content block 3'])
```
...which eliminates the `start_response()` callable, and, obviously, the `write()` callable, but there's no indication as to when (or even if) this is likely to supercede WSGI 1.x. | I found a old thread may explain why.
* <https://mail.python.org/pipermail/web-sig/2005-April/001204.html>
> > Why is there a start\_response and then a separate return?
>
> One reason is that it allows you to write an application as a generator. But more importantly, it's necessary in order to support 'write()' for backward compatibility with existing frameworks, and that's pretty much the "killer reason" it's structured how it is. This particular innovation was Tony Lownds' brainchild, though, not mine. In my original WSGI concept, the application received an output stream and just wrote headers and everything to it. | WSGI: what's the purpose of start_response function | [
"",
"python",
"web-services",
"web-applications",
"wsgi",
"middleware",
""
] |
I have four tables like this
Table 1: `MsPatient`
```
PatientID PatientName
PA001 | Danny andrean
PA002 | John Travolta
PA003 | Danny Lee
```
Table 2: `TransactionHeader`
```
TransactionID PatientID TransactionDate
TR001 | PA001 | 2012/12/6
TR002 | PA002 | 2013/11/4
TR003 | PA003 | 2010/4/12
```
Table 3: `TransactionDetail`
```
TransactionID MedicineID Quantity
TR001 | ME001 | 5
TR002 | ME001 | 6
TR003 | ME002 | 5
```
Table 4: `MsMedicine`
```
MedicineID MedicineName MedicineStock
ME001 |HIVGOD |100
ME002 |CancerCure |50
```
How can I show `PatientID`, `PatientName`, and `TotalMedicineBought` (obtained from the amount of Medicine Quantity purchased), where the `MedicineID` of the purchased medicine was 'ME001' and `PatientName` consists of 2 words or more.
Example:
```
PatientID | PatientName | Total Medicine Bought
PA001 | Danny Andrean | 5
PA002 | John Travolta | 6
```
I tried this query:
```
select
mp.PatientID,mp.PatientName,SUM(td.Quantity) as TotalMedicineBought
from
MsPatient mp, TransactionDetail td
inner join
TransactionHeader th on th.TransactionID = td.TransactionID
Group by
td.TransactionID, mp.PatientID, mp.PatientName
```
I don't know how to make a condition that requires two words.
I use SQL Server 2008 | The problem with the original query is the mixing of different ways of doing joins. You should avoid commas in the `from` clause. If you really, really want a `cross join`, then use the `cross join` statement.
Your problem is one of filtering for the right name *and* joining the tables together:
```
select p.PatientID, p.PatientName,
sum(case when MedicineId = 'ME001' then Quantity else 0
end) as Total_Medicine_Bought
from MsPatient p join
TransactionHeader th
on p.PatientID= th.PatientID join
TransactionDetail td
on td.TransactionID =th.TransactionID
where PatientName like '% %'
group by p.PatientID, p.PatientName;
```
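Here is a runnable sketch of that query using Python's built-in sqlite3 (columns are trimmed to the ones the query touches):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE MsPatient (PatientID TEXT, PatientName TEXT);
CREATE TABLE TransactionHeader (TransactionID TEXT, PatientID TEXT);
CREATE TABLE TransactionDetail (TransactionID TEXT, MedicineID TEXT, Quantity INT);
INSERT INTO MsPatient VALUES
  ('PA001','Danny andrean'),('PA002','John Travolta'),('PA003','Danny Lee');
INSERT INTO TransactionHeader VALUES
  ('TR001','PA001'),('TR002','PA002'),('TR003','PA003');
INSERT INTO TransactionDetail VALUES
  ('TR001','ME001',5),('TR002','ME001',6),('TR003','ME002',5);
""")

rows = con.execute("""
SELECT p.PatientID, p.PatientName,
       SUM(CASE WHEN MedicineID = 'ME001' THEN Quantity ELSE 0 END) AS Total_Medicine_Bought
FROM MsPatient p
JOIN TransactionHeader th ON p.PatientID = th.PatientID
JOIN TransactionDetail td ON td.TransactionID = th.TransactionID
WHERE PatientName LIKE '% %'
GROUP BY p.PatientID, p.PatientName
ORDER BY p.PatientID
""").fetchall()
```

Note that 'Danny Lee' also contains a space, so conditional aggregation reports him with a total of 0; adding `HAVING SUM(CASE WHEN MedicineID = 'ME001' THEN Quantity ELSE 0 END) > 0` would drop such rows and match the sample output exactly.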
This query also shows a good use of aliases. For readability, these should be abbreviations of the table names, rather than arbitrary letters. It also shows the use of conditional aggregation. You should easily be able to see how to modify the query to also show `ME002`, for instance. Having at least two parts in the patient name seems equivalent to having at least one space. `like` is a simple mechanism to look for a single space. | Are you looking for something like this -:
```
select a.PatientID,a.PatientName,sum(Quantity) Total_Medicine_Bought
from MsPatient a
join TransactionHeader b
on a.PatientID= b.PatientID
join TransactionDetail c
on b.TransactionID =c.TransactionID
where MedicineID='ME001' and (Length(a.PatientName) - length(REPLACE(a.PatientName, ' ', '')) + 1)=2
group by a.PatientID,a.PatientName
```
Sql Fiddle :<http://sqlfiddle.com/#!2/4c2cc/3> | How to aggregate sum across three tables? | [
"",
"sql",
"database",
"sql-server-2008",
""
] |
I have date strings like this:
```
'January 11, 2010'
```
and I need a function that returns the day of the week, like
```
'mon', or 'monday'
```
etc.
I can't find this anywhere in the Python help.
Anyone? Thanks. | You might want to use [`strptime and strftime`](http://docs.python.org/2/library/datetime.html#strftime-and-strptime-behavior) methods from [`datetime`](http://docs.python.org/2/library/datetime.html):
```
>>> import datetime
>>> datetime.datetime.strptime('January 11, 2010', '%B %d, %Y').strftime('%A')
'Monday'
```
or for `'Mon'`:
```
>>> datetime.datetime.strptime('January 11, 2010', '%B %d, %Y').strftime('%a')
'Mon'
``` | use `date.weekday()`
Return the day of the week as an integer, where Monday is 0 and Sunday is 6.
<http://docs.python.org/2/library/datetime.html#datetime.date.weekday> | Convert Date String to Day of Week | [
"",
"python",
"datetime",
""
] |
Is there any way to *remove* an attribute from a subclass that is present in the parent?
In the following example
```
class A(object):
foo = 1
bar = 2
class B(A):
pass
# <desired code here>
b = B()
assert hasattr(b, 'bar') == False
```
Is there any code we can write to make the assertion pass? | ```
class A(object):
foo = 1
bar = 2
class B(A):
@property
def bar(self):
raise AttributeError
>>> b = B()
>>> b.bar
Traceback (most recent call last):
File "<pyshell#17>", line 1, in <module>
b.bar
File "<pyshell#15>", line 4, in bar
raise AttributeError
AttributeError
``` | This works for me whe I don't want a specific attribute ('bar' in this case) to be listed in dir(A).
```
class A(object):
foo = 1
bar = 2
class B(A):
    def __init__(self):
self.delete()
def delete(self):
delattr(self, 'bar')
```
Basically, create a method (delete) in the subclass B that deletes that attribute and put that in the constructor. | Remove attribute from subclass in Python | [
"",
"python",
"inheritance",
""
] |
Execute following statement in the SQL Server
```
IF NOT EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[SP_CandidateRegistration]') AND type in (N'P', N'PC'))
BEGIN
CREATE PROCEDURE [dbo].[SP_CandidateRegistration]
(
@UserName VARCHAR(50),
@Password VARCHAR(50),
@EmailID VARCHAR(50),
@TestId int,
@IsActiveUser INTEGER,
@USER_ID INTEGER OUTPUT
)
AS
DECLARE @UserName VARCHAR(50)
DECLARE @Password VARCHAR(50)
DECLARE @EmailID VARCHAR(50)
DECLARE @TestId int
DECLARE @IsActiveUser INTEGER
DECLARE @USER_ID INTEGER
INSERT INTO [dbo].[IER_CandidateRegistration](User_Name, Password, EmailId, Test_Id, is_active )
VALUES (@UserName, @Password, @EmailID,@TestId, @IsActiveUser)
select @USER_ID=@@identity
RETURN
END
GO
```
Error after executing in SQL Server 2008
> Msg 156, Level 15, State 1, Line 3
> Incorrect syntax near the keyword 'PROCEDURE'. | You can run the `CREATE` in a child batch that is only compiled and executed `IF NOT EXISTS`.
You will need to fix the errors in the procedure first (why are you trying to declare variables with the same name as the parameters? Also, use `SCOPE_IDENTITY()`, not `@@IDENTITY`), but something like
```
IF NOT EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[SP_CandidateRegistration]') AND type in (N'P', N'PC'))
BEGIN
EXEC('
CREATE PROCEDURE [dbo].[SP_CandidateRegistration] (@UserName VARCHAR(50),
@Password VARCHAR(50),
@EmailID VARCHAR(50),
@TestId INT,
@IsActiveUser INTEGER,
@USER_ID INTEGER OUTPUT)
AS
INSERT INTO [dbo].[IER_CandidateRegistration]
(User_Name,
Password,
EmailId,
Test_Id,
is_active)
VALUES (@UserName,
@Password,
@EmailID,
@TestId,
@IsActiveUser)
SELECT @USER_ID = SCOPE_IDENTITY()
RETURN
')
END
```
NB: This question was asked in 2013 (about SQL Server 2008) but since SQL Server 2016 it has been possible to do `CREATE OR ALTER PROCEDURE` to avoid the need for any `IF NOT EXISTS` type procedural code at all. | The `CREATE PROCEDURE` statement cannot be combined with other `Transact-SQL` statements in a single batch.
So, you have to do it like this:
```
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[SP_CandidateRegistration]') AND type in (N'P', N'PC'))
DROP PROCEDURE [SP_CandidateRegistration]
GO
CREATE PROCEDURE [dbo].[SP_CandidateRegistration]
(
@UserName VARCHAR(50),
@Password VARCHAR(50),
@EmailID VARCHAR(50),
@TestId int,
@IsActiveUser INTEGER,
@USER_ID INTEGER OUTPUT
)
AS
INSERT INTO [dbo].[IER_CandidateRegistration](User_Name, Password, EmailId, Test_Id, is_active )
VALUES (@UserName, @Password, @EmailID,@TestId, @IsActiveUser)
select @USER_ID=@@identity
RETURN
GO
```
Also, you are again declaring the variables. | Incorrect syntax near the keyword 'PROCEDURE' | [
"",
"sql",
"sql-server-2008",
"stored-procedures",
""
] |
I am not really sure how multi-indexing works, so I may simply be trying to do the wrong thing here. If I have a dataframe with
```
Value
A B
1 1 5.67
1 2 6.87
1 3 7.23
2 1 8.67
2 2 9.87
2 3 10.23
```
If I want to access the elements where B=2, how would I do that? df.ix[2] gives me A=2. To get a particular value it seems I need df.ix[(1,2)], but then what is the purpose of the B index if you can't access it directly? | You can use [`xs`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.xs.html):
```
In [11]: df.xs(2, level='B')
Out[11]:
Value
A
1 6.87
2 9.87
```
alternatively:
```
In [12]: df.xs(1, level=1)
Out[12]:
Value
A
1 5.67
2 8.67
``` | Just as an alternative, you could use `df.loc`:
```
>>> df.loc[(slice(None),2),:]
Value
A B
1 2 6.87
2 2 9.87
```
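A self-contained sketch that builds the sample frame and applies both selections:

```python
import pandas as pd

# reconstruct the sample data: A in {1, 2}, B in {1, 2, 3}
idx = pd.MultiIndex.from_product([[1, 2], [1, 2, 3]], names=['A', 'B'])
df = pd.DataFrame({'Value': [5.67, 6.87, 7.23, 8.67, 9.87, 10.23]}, index=idx)

sub = df.loc[(slice(None), 2), :]   # rows where B == 2, both index levels kept
flat = df.xs(2, level='B')          # same rows, with level B dropped
```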
The tuple accesses the indexes in order. So, `slice(None)` grabs all values from index `'A'`, and the second position limits based on the second level index, where `'B'=2` in this example. The `:` specifies that you want all columns, but you could subset the columns there as well. | Python Pandas Accessing values from second index in multi-indexed dataframe | [
"",
"python",
"pandas",
"multi-index",
""
] |
I have a column of string names, and I would like to find often occuring patterns (words).
Is there a way to return, say, strings with a higher (or equal) length than X, and occur more often than Y times in the the whole column?
```
column <- c("bla1okay", "okay1243bla", "blaokay", "bla12okay", "okaybla")
getOftenOccuringPatterns <- function(.....)
getOftenOccuringPatterns(column, atleaststringsize=3, atleasttimes=4)
> what times
[1] bla 5
[2] okay 5
```
Referring to the comment by [Tim](https://stackoverflow.com/users/20670/tim-pietzcker):
I would like the nested ones to be removed, so if there is "aaaaaaa" and "aaaa" and both would occur in the output, only "aaaaaaa" and the times that one occurs counts.
If `atleaststringsize=3` or `atleaststringsize=4`, the output will be the same in this case. Let's say `atleasttimes=10`, and "aaaaaaaa" occurs 15 times and "aaaaaa" occurs 15 times, then:
```
getOftenOccurringPatterns(column, atleaststringsize=3, atleasttimes=10)
> what times
[1] aaaaaaaa 15
```
and
```
getOftenOccurringPatterns(column, atleaststringsize=4, atleasttimes=10)
> what times
[1] aaaaaaaa 15
```
The longest one stays, and it's the same thing for both atleast=3, and atleast=4. | OK, I've written a solution in Python. Sorry, I can't give you a working R program, but you should be able to implement one from this. As you can see, this is quite a brute force solution, but I don't really see a way around building all possible substrings from all the strings in your input.
I've broken down the problem into simple, self-contained steps. These should be straightforward to translate into R. I'm sure that there are comparable data structures in R for lists, sets and counters.
```
from collections import Counter
strings = ["bla1okay", "okay1243bla", "blaokay", "bla12okay", "okaybla"]
def substrings(s, minlength=3):
"""Finds all possible unique substrings of s, given a minimum length.
>>> substrings("12345")
{'1234', '234', '345', '12345', '123', '2345'}
>>> substrings("123123")
{'2312', '123123', '12312', '123', '23123', '1231', '231', '3123', '312'}
>>> substrings("aaaaa")
{'aaaaa', 'aaaa', 'aaa'}
"""
maxsize = current = len(s)
result = []
while current >= minlength:
result.extend([s[start:start+current]
for start in range(maxsize-current+1)])
# range(5) is [0,1,2,3,4]
current -= 1
return set(result) # set() removes duplicates
def all_substrings(strings, minlength=3):
"""Returns the union of all the sets of substrings of a list of strings.
>>> all_substrings(["abcd", "1234"])
{'123', 'abc', 'abcd', '1234', 'bcd', '234'}
>>> all_substrings(["abcd", "bcde"])
{'abc', 'bcd', 'cde', 'abcd', 'bcde'}
"""
result = set()
for s in strings:
result |= substrings(s, minlength)
# "|=" is the set union operator
return result
def count(strings, minlength=3):
"""Counts the occurrence of each substring within the provided list of strings,
given a minimum length for each substring.
>>> count(["abcd", "bcde"])
Counter({'bcd': 2, 'bcde': 1, 'abc': 1, 'abcd': 1, 'cde': 1})
"""
substrings = all_substrings(strings, minlength)
counts = Counter()
for substring in substrings: # Check each substring
for string in strings: # against each of the original strings
if substring in string: # to see whether it is contained there
counts[substring] += 1
return counts
def prune(counts, mincount=4):
"""Returns only the longest substrings whose count is >= mincount.
First, all the substrings with a count < mincount are eliminated.
Then, only those that aren't substrings of a longer string are kept.
>>> prune(Counter({'bla': 5, 'kay': 5, 'oka': 5, 'okay': 5, 'la1': 2, 'bla1': 2}))
[('okay', 5), ('bla', 5)]
"""
# Throw out all counts < mincount. Sort result by length of the substrings.
candidates = sorted(((s,c) for s,c in counts.items() if c >= mincount),
key=lambda l: len(l[0]), reverse=True) # descending sort
result = []
seenstrings = set() # Set of strings already in our result
# (we could also look directly in the result, but set lookup is faster)
for item in candidates:
s = item[0] # item[0] contains the substring
# Make sure that s is not already in our result list
if not any(s in seen for seen in seenstrings):
result.append(item)
seenstrings.add(s)
return result
counts = count(strings)
print(prune(counts))
```
Output:
```
[('okay', 5), ('bla', 5)]
``` | This creates a vector of all occurrences of all substrings; it does so naively, iterating over the maximum length of the input string max(nchar(x)) and looking for all subsequences of length 1, 2, ... max(nchar(x)), so scales in polynomial time -- it won't be efficient for super-large problems.
This revision incorporates the following changes:
1. `.accumulate` in inner and outer loops of the previous version implemented the dreaded "copy-and-append" pattern; now we accumulate results in a pre-allocated list `answer0` and then accumulate these after the inner loop.
2. `allSubstrings()` has arguments `min_occur`, `min_nchar` (and `max_nchar`) to restrict the search space. In particular, `min_occur` (the minimum number of times a substring must occur to be retained) helps to reduce the length of the character vector in which longer substrings are searched.
3. The function `.filter()` can be used to more aggressively remove strings that do not contain substrings of length i; this can be costly, so there's a heuristic and argument `useFilter` that can be set. The use of a filter makes the whole solution seem more like a hack than an algorithm -- the information about substrings has already been extracted, so we shouldn't have to go back and search for their occurrence again.
Here is the revised main function
```
allSubstrings <-
function(x, min_occur=1L, min_nchar=1L, max_nchar=max(nchar(x)),
..., useFilter=max(nchar(x)) > 100L)
{
len <- nchar(x)
x <- x[len >= min_nchar]; len <- len[len >= min_nchar]
answer <- vector("list", max_nchar - min_nchar + 1L)
for (i in seq(min_nchar, max_nchar)) {
## suffix of length i, starting at character j
x0 <- x; len0 <- len; n <- max(len0) - i + 1L
answer0 <- vector("list", n)
for (j in seq_len(n)) {
end <- j + i - 1L
f <- factor(substr(x0, j, end))
answer0[[j]] <- setNames(tabulate(f), levels(f))
x0 <- x0[len0 != end]; len0 <- len0[len0 != end]
}
answer0 <- unlist(answer0) # accumulate across start positions
answer0 <- vapply(split(answer0, names(answer0)), sum, integer(1))
answer0 <- answer0[answer0 >= min_occur]
if (length(answer0) == 0L)
break
answer[[i - min_nchar + 1L]] <- answer0
idx <- len != i # no need to process some strings
if (useFilter)
idx[idx] <- .filter(x[idx], names(answer0))
x <- x[idx]; len <- len[idx]
if (length(x) == 0L)
break
}
unlist(answer[seq_len(i)])
}
```
and the `.filter` function
```
.filter <-
function(s, q)
{
## which 's' contain at least one 'q'
answer <- rep(FALSE, length(s))
idx <- !answer # use this to minimize the number of greps
for (elt in q) {
answer[idx] <- answer[idx] | grepl(elt, s[idx], fixed=TRUE)
idx[idx] <- !answer[idx]
}
answer
}
```
As before, the result is a named vector, where the names are the strings and the values are the counts of their occurrences.
```
> column <- c("bla1okay", "okay1243bla", "blaokay", "bla12okay", "okaybla")
> xx <- allSubstrings(column)
> head(sort(xx, decreasing=TRUE))
a b o k l y
10 5 5 5 5 5
> xtabs(~nchar(names(xx)) + xx)
xx
nchar(names(xx)) 1 2 3 5 10
1 2 1 1 5 1
2 8 2 0 5 0
3 15 1 0 3 0
4 20 1 0 1 0
5 22 0 0 0 0
....
```
Queries like in the original question are then easy to perform, e.g., all substrings of >= 3 characters occurring more than 4 times:
```
> (ok <- xx[nchar(names(xx)) >= 3 & xx > 4])
bla oka kay okay
5 5 5 5
```
The code doesn't fully answer the question, e.g., nested substrings are present, but might replace the nested `lapply` portion of @user1609452's answer. Post-processing this result to eliminate nested subsequences is a little inelegant, but since the result being post-processed is not large, it will likely be fast enough, e.g., to eliminate nested substrings
```
> fun <- function(p, q) length(grep(p, q, fixed=TRUE))
> ok[ sapply(names(ok), fun, names(ok)) == 1L ]
bla okay
5 5
```
Here we use the 99k word dictionary on my laptop for input, with some basic timings for the revised algorithm
```
> timer <- function(n, x, ...)
system.time(allSubstrings(head(x, n), ...))[[3]]
> n <- c(100, 1000, 10000, 20000)
> data.frame(n=n, elapsed=sapply(n, timer, words))
n elapsed
1 100 0.050
2 1000 0.074
3 10000 0.490
4 20000 1.031
```
This is about 10x faster than the original algorithm, due in this case entirely to revision 1 (using pre-allocate and fill, followed by accumulation).
Here's a corpus of longer sentences
```
shakes <- readLines("http://www.gutenberg.org/cache/epub/100/pg100.txt")
shakes <- paste(shakes[nchar(shakes) != 0], collapse=" ")
shakes <- gsub(" +", " ", shakes)
shakes <- strsplit(shakes, "\\. +",)[[1]]
```
and some timings. This benefits a lot from specifying a `min_occur` argument, and from use of the filter.
```
> n <- c(100, 1000, 2000, 5000)
> data.frame(n=n, elapsed=sapply(n, timer, shakes, min_occur=10))
n elapsed
1 100 1.725
2 1000 7.724
3 2000 12.415
4 5000 60.914
```
The need to use a filter and the poor performance on longer strings leads one to want to arrive at a better algorithm, like [suffix array](http://en.wikipedia.org/wiki/Suffix_array); the "Rlibstree" package might also be useful, although I'm unsure of where to get a current version or whether the exposed part of the interface is sufficient to answer the original question. | Get often occuring string patterns from column in R or Python | [
"",
"python",
"regex",
"r",
""
] |
Using Python 3.2x and PyQT 4.8x:
I initialized an action and assigned to a menu item:
```
self.__actionOpen = QtGui.QAction(self.__mw)
self.__actionOpen.setObjectName("actionOpen")
self.__actionOpen.setText("OpenFile")
QtCore.QObject.connect(self.__actionOpen, QtCore.SIGNAL("triggered()"), self.__accessFile)
self.__menuFile.addAction(self.__actionOpen)
```
Works fine - menu item is there with caption "OpenFile" and the action signal/slot is invoked.
I tried it with a QPushButton - same QAction object:
```
self.__buttonFile.addAction(self.__actionOpen)
```
Nothing: No caption on the button, nothing happens when it's clicked.
Do actions not work with QButton (the addAction call did not complain...)? Or is there something wrong with my code? Perhaps the "triggered()" signal is not appropriate for an action that interacts with QPushButton? | You can't assign a `QAction` to a `QPushButton` the way you want. `QPushButton` doesn't redefine `addAction` so the behavior comes from `QWidget.addAction` which adds the action to the context menu of the button.
You can however assign the action to a `QToolButton` with [`setDefaultAction`](http://qt-project.org/doc/qt-4.8/qtoolbutton.html#setDefaultAction) which will change the button caption and trigger the action when clicked.
Or you could do it manually anyway by subclassing `QPushButton` and adding a `setDefaultAction` method that would change everything in the button according to the action (caption, tooltip...) and connects the relevant button's signals to the action's slots. | Adding an action won't "run" the action when the button is clicked, and that is by design.
If what you are after is to reuse or refer the QAction's behaviour you can just connect the `clicked()` signal of the `QPushButton` to the `trigger()` of the `QAction`:
```
QtCore.QObject.connect(self.__buttonFile,
QtCore.SIGNAL("clicked()"),
self.__actionOpen.trigger)
```
That way the `self.__actionOpen` action will be triggered whenever the `self.__buttonFile` button is clicked. | PyQt: Can a QPushButton be assigned a QAction? | [
"",
"python",
"qt",
"pyqt4",
"qpushbutton",
"qaction",
""
] |
I have this view function:
```
def forum(request):
qs = Forum.objects.all()
try:
f = Forum.objects.filter().order_by('-id')[0] <------------problem
return render_to_response("forum.html",{'qs':qs,'f':f},context_instance=RequestContext(request))
except Forum.DoesNotExist or IndexError:
return render_to_response("forum.html",{'qs':qs},context_instance=RequestContext(request))
```
but it is still giving following error for the problem line above:
```
IndexError: list index out of range
```
is my code fine? can i catch multiple exceptions in this way? | You can catch multiple exceptions in this manner
```
try:
...
except (Forum.DoesNotExist, IndexError) as e:
...
``` | When you have this in your code:
```
except Forum.DoesNotExist or IndexError:
```
It's actually evaluated as this:
```
except (Forum.DoesNotExist or IndexError):
```
where the bit in parentheses is an evaluated expression. Since `or` returns the first of its arguments if it's truthy (which a class is), that's actually equivalent to merely:
```
except Forum.DoesNotExist:
```
---
If you want to actually catch multiple different types of exceptions, you'd instead use a tuple:
```
except (Forum.DoesNotExist, IndexError):
``` | django - catch multiple exceptions | [
"",
"python",
"django",
"exception",
""
] |
What is the best way in python to handle reading a file that may potentially not exist?
Currently I have the following:
```
try:
with open(filename, "r") as f:
return f.read()
except IOError:
return False
```
Is this the best way to do it or is this wrong on any level at all?
I guess my biggest concerns are:
1. Catching the exception only to return false
2. Maybe i am missing a 'Python' way of silently missing the error for a missing file | A `try` / `except` is indeed the best way. | A try except block will catch the error, but you might not want to suppress the error.
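For example, a runnable sketch of that `try` / `except` approach (the temporary file is only for demonstration):

```python
import os
import tempfile

def read_file(filename):
    """Return the file's contents, or False if it can't be read."""
    try:
        with open(filename, "r") as f:
            return f.read()
    except IOError:  # on Python 3, FileNotFoundError is a subclass of OSError/IOError
        return False

# Demonstrate with a throwaway file.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write("hello")

print(read_file(path))               # hello
print(read_file(path + ".missing"))  # False
os.remove(path)
```

If the caller should decide what to do about a missing file, the usual alternatives are letting the exception propagate or returning `''` so the function keeps a single return type.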
If you're writing a function that returns the content read from the file, then it would be wiser to `return ''` instead of `False`. It's generally a good idea for a function to only return one type. Something like:
```
try:
with open(filename) as f:
return f.read()
except IOError:
return ''
```
Really it seems like you're signalling an error condition with a return. If so, you're usually better off just letting the exception propagate out of the function. It's not pythonic to use a returned value to signal an exceptional condition. | Dealing with trying to read a file that might not exist | [
"",
"python",
""
] |
I'm importing a CSV of macroeconomic data and haven't been able to figure out how to get Pandas to interpret this type of date. Is there a way to do it automatically or will I need to parse it myself?
When I ask the parser to try, I get:
```
File "datetime.pxd", line 133, in datetime._string_to_dts (pandas/tslib.c:31399)
ValueError: Unable to parse 2002Q1
``` | Since the `pd.Period` can parse quarterly periods, you could use it as the custom `date_parser`. Then, to convert the date to the last day of the quarter, you could use `map` and the `end_time` attribute:
```
import pandas as pd
text = '''\
date val
2013Q2 100
2013Q3 120
'''
filename = '/tmp/data'
with open(filename, 'w') as f:
f.write(text)
df = pd.read_table(filename, sep='\s+', date_parser=pd.Period, parse_dates=[0])
df['date'] = df['date'].map(lambda x: x.end_time.date())
print(df)
# date val
# 0 2013-06-30 100
# 1 2013-09-30 120
``` | Here's something to help those who have years and quarters in different columns:
```
year quarter foo
1994 q1 10
1994 q3 20
1995 q1 30
1995 q3 40
```
The `parse_dates` argument to `read_csv` just works. It's very cool:
```
>>> pd.read_csv('bar.csv', parse_dates={'period':['year', 'quarter']})
period foo
1994 q1 10
1994 q3 20
1995 q1 30
1995 q3 40
``` | Does Pandas support quarterly dates of the form yyyyQp (e.g. 2013Q2)? | [
"",
"python",
"pandas",
""
] |
I have this form:
```
<form action="{% url create_question %}" method="post">
```
and this url.py
```
url(r'^neues_thema/(\w+)/','home.views.create_question',name="create_question"),
```
but I am getting this error:
```
Reverse for 'create_question' with arguments '()'
and keyword arguments '{}' not found.
```
what am i doing wrong?
**EDIT**: what I want to do is: the user submits the form, and I want to take the title of the question the user is creating and put it into the URL. Then the URL will look like: `neues_thema/how-to-make-bread/`. How can I give that parameter to `{% url create_question ??? %}` dynamically while submitting the form?
this thread [url template tag in django template](https://stackoverflow.com/questions/1777612/url-template-tag-in-django-template) did not help me. | Seems like you don't need any parameters to `{% url %}` in your template.
You can add a function to your `views.py` for creating questions that will redirect the user to the question page after success:
**urls.py:**
```
url(r'^neues_thema/', 'home.views.create_question', name="create_question"),
url(r'^neues_thema/(?P<title>\w+)/', 'home.views.question', name="question"),
```
**views.py:**
```
from django.core.urlresolvers import reverse
from django.shortcuts import render, redirect
def create_question(request):
if request.method == 'POST':
title = request.POST['title']
# some validation of title
# create new question with title
return redirect(reverse('question', kwargs={'title': title}))
def question(request, title):
# here smth like:
# question = get_object_or_404(Question, title=title)
return render(request, 'question.html', {'question': question})
```
template with form for creating question:
```
<form action="{% url create_question %}" method="post">
```
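The asker also wants slug-style titles such as `how-to-make-bread` in the URL. A dependency-free sketch of slugifying a title (illustrative only — in a real project you would use Django's `django.utils.text.slugify`):

```python
import re
import unicodedata

def slugify(title):
    """Lower-case the title, strip accents/punctuation, join words with hyphens."""
    # Decompose accented characters and drop the non-ASCII marks ("e" with accent -> "e").
    value = unicodedata.normalize("NFKD", title)
    value = value.encode("ascii", "ignore").decode("ascii")
    # Remove everything except word characters, spaces and hyphens.
    value = re.sub(r"[^\w\s-]", "", value).strip().lower()
    # Collapse runs of whitespace/hyphens into single hyphens.
    return re.sub(r"[-\s]+", "-", value)

print(slugify("How to make bread?"))  # how-to-make-bread
```

The slug would then be used in place of the raw title when reversing the `question` URL.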
---
Answering your "what am i doing wrong?". You are trying to render url by mask `neues_thema/(\w+)/` with this: `{% url create_question %}`. Your mask needs some parameter (`(\w+)`), but you are putting no parameter. Rendering with parameter should be `{% url create_question title %}`. But the problem is: you don't know the `title` while rendering page. | Your url regular expression expects a parameter, your template should be like:
```
<form action="{% url create_question some_user_name %}" method="post">
```
[See **url** on **Built-in template tags and filters** docs](https://docs.djangoproject.com/en/dev/ref/templates/builtins/#url) | django - url tag not working | [
"",
"python",
"django",
"url",
""
] |
I have two tables like this
```
Users table
id | name
-------------
1 | s1
2 | s2
3 | s3
4 | s4
5 | s5
6 | s6
friends table
friendID | user_a | user_b
--------------------
1 | 1 | 2
2 | 3 | 1
3 | 4 | 2
4 | 1 | 3
```
I want to run this query: **Who is friends with s1?**
This is my current query, but it doesn't work
```
select a.name
from users a, friends b
where a.id=b.user_b
and b.user_a = (select b.user_a
from friends
where a.name='s1');
``` | Here you need to join Users table twice with each `user_a` and `user_b`:
Try this query:
```
SELECT u.name
FROM Users u
JOIN friends f
ON u.id = f.user_b
JOIN Users u1
ON u1.id = f.user_a
WHERE u1.name = 's1';
```
Result:
```
╔══════╗
║ NAME ║
╠══════╣
║ s2 ║
║ s3 ║
╚══════╝
```
### See [this SQLFiddle](http://sqlfiddle.com/#!2/83f96/4)
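The same join is easy to try offline with SQLite through Python's standard library (schema and rows copied from the question; SQLite syntax differs slightly from MySQL, so this is just for experimenting):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE friends (friendID INTEGER PRIMARY KEY,
                          user_a INTEGER, user_b INTEGER);
    INSERT INTO users VALUES (1,'s1'),(2,'s2'),(3,'s3'),(4,'s4'),(5,'s5'),(6,'s6');
    INSERT INTO friends VALUES (1,1,2),(2,3,1),(3,4,2),(4,1,3);
""")

rows = conn.execute("""
    SELECT u.name
    FROM users u
    JOIN friends f ON u.id = f.user_b
    JOIN users u1  ON u1.id = f.user_a
    WHERE u1.name = 's1'
""").fetchall()
print(sorted(name for (name,) in rows))  # ['s2', 's3']
```

Note that, like the query above, this only follows friendships in the `user_a` -> `user_b` direction; row `(2, 3, 1)` is not reported even though it also involves s1.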
---
**Edit**: In your query (which you have tried) you used the outer table's id and name in the subquery. So you needed to use the inner table's id and name like this:
```
select a.name
from users a, friends b
where a.id=b.user_b
and b.user_a IN (select id
from users
where name='s1');
```
### See [this SQLFiddle](http://sqlfiddle.com/#!2/83f96/15) | Please try this:
```
SELECT DISTINCT c.name
FROM users a, friends b, users c
WHERE a.id=b.user_a
AND b.user_b=c.id
AND a.name='s1';
``` | “Friend” relationships (2 tables) | [
"",
"sql",
"database",
""
] |
I'm trying to decipher the information contained in my logs (the logging setup is using the default formatter). The [documentation](http://docs.python.org/2/library/logging.html#logging.Formatter) states:
> Do formatting for a record - if a formatter is set, use it. Otherwise, use the default formatter for the module.
However, I can't find any reference actually stating what this default format is. | The default format is located [here](http://hg.python.org/cpython/file/5c4ca109af1c/Lib/logging/__init__.py#l1640) which is:
```
BASIC_FORMAT = "%(levelname)s:%(name)s:%(message)s"
```
The [Format](http://hg.python.org/cpython/file/5c4ca109af1c/Lib/logging/__init__.py#l399) code will tell you how you can customize it. Here is one example on how you can customize it.
```
import sys
import logging
logging.basicConfig(
level=logging.DEBUG,
format="[%(asctime)s] %(levelname)s [%(name)s.%(funcName)s:%(lineno)d] %(message)s",
datefmt="%d/%b/%Y %H:%M:%S",
stream=sys.stdout)
logging.info("HEY")
```
Which results in:
```
[26/May/2013 06:41:40] INFO [root.<module>:1] HEY
``` | ```
import logging
print(logging.BASIC_FORMAT)
```
Also some comments asked about how one could have come to discover this on their own. Here is a natural thing to do:
```
import logging
print(dir(logging))
```
BASIC\_FORMAT is in there, in fact it is the first entry in the result in my case. | What is Python's default logging formatter? | [
"",
"python",
"logging",
"python-logging",
""
] |
We have code that [creates figures from input.txt files](https://www.dropbox.com/s/g646f1wghktrl1b/matplotlib-combine-different-figures-and-put-them-in-a-single-subplot-sharing-a.zip). We need to combine 2 of these figures in a single subplot. The data from figure1 will be plotted in the left subplot and from figure2 in the right subplot, sharing the same legend and with the same scale on the x and y axes:

Here there is some example data:
```
x = [ 1, 2, 3, 5, 10, 100, 1000 ]
y1 = [ 1, 0.822, 0.763, 0.715, 0.680, 0.648, 0.645 ]
y2 = [ 1, 0.859, 0.812, 0.774, 0.746, 0.721, 0.718 ]
import matplotlib.pyplot as plt
# mode 01 from one case
fig1 = plt.figure()
ax1 = fig1.add_subplot(111)
ax1.plot( x, y1, label='mode 01' )
# mode 01 from other case
fig2 = plt.figure()
ax2 = fig2.add_subplot(111)
ax2.plot( x, y2, label='mode 01' )
```
---
EDIT: the method suggested by @nordev works. Now it would be really convenient to pass the ax1 and ax2 objects to the new figure, since they have much more information. [It seems that there is no straightforward way to achieve that](https://stackoverflow.com/questions/6309472/matplotlib-can-i-create-axessubplot-objects-then-add-them-to-a-figure-instance#comment18919455_6309636).
The [real case has been made available here](https://www.dropbox.com/s/g646f1wghktrl1b/matplotlib-combine-different-figures-and-put-them-in-a-single-subplot-sharing-a.zip). To make it work, please run `plot_both.py`.
---
EDIT2: it was easier to change the routine that reads the input.txt files. [Now it supports multiple plots](https://www.dropbox.com/s/ekywe8m3f8vh6j4/matplotlib-combine-different-figures-and-put-them-in-a-single-subplot-sharing-a-new.zip). But the question is still valid because it would be great to treat the `AxesSubplot` as an easily interchangeable object among different figures, subplots and so forth... | Does this solve your problem?
```
x = [ 1, 2, 3, 5, 10, 100, 1000 ]
y1 = [ 1, 0.822, 0.763, 0.715, 0.680, 0.648, 0.645 ]
y2 = [ 1, 0.859, 0.812, 0.774, 0.746, 0.721, 0.718 ]
import matplotlib.pyplot as plt
from matplotlib.transforms import BlendedGenericTransform
# mode 01 from one case
fig1 = plt.figure()
ax1 = fig1.add_subplot(111)
line1, = ax1.plot( x, y1, label='mode 01' )
# mode 01 from other case
fig2 = plt.figure()
ax2 = fig2.add_subplot(111)
line2, = ax2.plot( x, y2, label='mode 01' )
# Create new figure and two subplots, sharing both axes
fig3, (ax3, ax4) = plt.subplots(1,2,sharey=True, sharex=True,figsize=(10,5))
# Plot data from fig1 and fig2
line3, = ax3.plot(line1.get_data()[0], line1.get_data()[1])
line4, = ax4.plot(line2.get_data()[0], line2.get_data()[1])
# If possible (easy access to plotting data) use
# ax3.plot(x, y1)
# ax4.plot(x, y2)
ax3.set_ylabel('y-axis')
ax3.grid(True)
ax4.grid(True)
# Add legend
fig3.legend((line3, line4),
('label 3', 'label 4'),
loc = 'upper center',
bbox_to_anchor = [0.5, -0.05],
bbox_transform = BlendedGenericTransform(fig3.transFigure, ax3.transAxes))
# Make space for the legend beneath the subplots
plt.subplots_adjust(bottom = 0.2)
# Show only fig3
fig3.show()
```
This gives output as seen below

# Edit
Looking at the code in your uploaded zip-file, I'd say most of the requested functionality is achieved?
I see you have changed the function creating the plots, making the solution to your problem radically different, as you are no longer trying to "merge" two subplots from different figures. Your solution is basically the same as the one I presented above, in the sense that both are creating both `Axes` instances as subplots on the same figure (giving the desired layout), and *then* plotting, rather than *plotting, then extract/move the axes*, as your question was concerning originally.
As I suspected, the easiest and most trivial solution is to make the individual `Axes` subplots of the same figure instead of having them tied to separate figures, as moving one `Axes` instance from one `Figure` to another is not easily accomplished (if at all possible), as specified in a comment. The "original" problem still seems to be very hard to accomplish, as simply adding an `Axes` instance to the `Figure`'s `_axstack` makes it hard to customize to the desired layout.
One modification to the `ax.legend(...` of your current code, to make the legend centered horizontally, with the top just below the axes:
```
# Add this line
from matplotlib.transforms import BlendedGenericTransform
# Edit the function call to use the BlendedGenericTransform
ax.legend(loc='upper center',
ncol=7,
labelspacing=-0.7,
columnspacing=0.75,
fontsize=8,
handlelength=2.6,
markerscale=0.75,
bbox_to_anchor=(0.5, -0.05),
bbox_transform=BlendedGenericTransform(fig.transFigure, ax.transAxes))
```
Here, the `bbox_to_anchor` argument should be customized to fit within the boundaries of our figure.
The [`BlendedGenericTransform`](http://matplotlib.org/devel/transformations.html?highlight=blendedgenerictransform#matplotlib.transforms.BlendedGenericTransform) allows the transforms of the x-axis and y-axis to be different, which can be very useful in many situations. | I ran into the same issue, trying to merge in a single matplotlib figure axes built with different python packages for which I can't easily access the data.
I could make a dirty workaround by saving the figures as PNG files, then reimporting them as images and creating axes based on them.
Here's a function doing it (It's ugly, and you'll need to adapt it, but it does the job).
```
import cairosvg
import matplotlib.pyplot as plt
from PIL import Image
from io import BytesIO
def merge_2axes(fig1,fig2,file_name1="f1.png",file_name2="f2.png"):
fig1.savefig(file_name1)
fig2.savefig(file_name2)
fig, (ax1, ax2) = plt.subplots(2, figsize=(30, 30))
if file_name1[-3:] == "svg":
img_png = cairosvg.svg2png(url=file_name1)
img = Image.open(BytesIO(img_png))
ax1.imshow(img)
else:
ax1.imshow(plt.imread(file_name1))
if file_name2[-3:] == "svg":
img_png = cairosvg.svg2png(url=file_name2)
img = Image.open(BytesIO(img_png))
ax2.imshow(img)
else:
ax2.imshow(plt.imread(file_name2))
for ax in (ax1, ax2):
for side in ('top', 'left', 'bottom', 'right'):
ax.spines[side].set_visible(False)
ax.tick_params(left=False, right=False, labelleft=False,
labelbottom=False, bottom=False)
return fig
``` | matplotlib: combine different figures and put them in a single subplot sharing a common legend | [
"",
"python",
"matplotlib",
""
] |
If I query this :
```
SELECT DISTINCT class_low
FROM groups NATURAL JOIN species
WHERE type ~~ 'faune'
AND class_high ~~ 'Arachnides'
AND (class_middle ~~ 'Araignées' OR class_middle IS NULL)
AND (class_low ~~ '%' OR class_low IS NULL);
```
I get :
```
class_low
---------------------
Dictynidés
Linyphiidés
Sparassidés
Metidés
Thomisidés
Dolomedidés
Pisauridés
Araignées sauteuses
Araneidés
Lycosidés
Atypidés
Pholcidés
Ségestriidés
Tetragnathidés
Miturgidés
Agelenidés
```
**Notice the NULL value (it's not a empty varchar).**
now if I query like that :
```
SELECT array_to_string(array_agg(DISTINCT class_low), ',')
FROM groups NATURAL JOIN species
WHERE type ~~ 'faune'
AND class_high ~~ 'Arachnides'
AND (class_middle ~~ 'Araignées' OR class_middle IS NULL)
AND (class_low ~~ '%' OR class_low IS NULL);
```
I get :
```
array_to_string
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Agelenidés,Araignées sauteuses,Araneidés,Atypidés,Dictynidés,Dolomedidés,Linyphiidés,Lycosidés,Metidés,Miturgidés,Pholcidés,Pisauridés,Ségestriidés,Sparassidés,Tetragnathidés,Thomisidés
```
The NULL value is not inserted.
Is there any way to include it? I mean having something like:
...,,... (just a double comma) | I don't have an 8.4 handy, but in more recent versions `array_to_string` is ignoring your NULLs, so the problem isn't `array_agg`, it is `array_to_string`.
For example:
```
=> select distinct state from orders;
state
---------
success
failure
```
That blank line is in fact a NULL. Then we can see what `array_agg` and `array_to_string` do with this stuff:
```
=> select array_agg(distinct state) from orders;
array_agg
------------------------
{failure,success,NULL}
=> select array_to_string(array_agg(distinct state), ',') from orders;
array_to_string
-----------------
failure,success
```
And the NULL disappears in the `array_to_string` call. The [documentation](http://www.postgresql.org/docs/8.4/interactive/functions-array.html#ARRAY-FUNCTIONS-TABLE) doesn't specify any particular handling of NULLs but ignoring them seems as reasonable as anything else.
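This NULL-skipping behavior is not unique to PostgreSQL; for instance, SQLite's `group_concat` does the same, which is easy to check from the Python standard library (an illustration of the principle, not of PostgreSQL itself):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (state TEXT)")
conn.executemany("INSERT INTO orders VALUES (?)",
                 [("failure",), (None,), ("success",)])

# Aggregate string functions silently skip NULL inputs...
(plain,) = conn.execute(
    "SELECT group_concat(state, ',') FROM orders").fetchone()
# ...unless NULL is first mapped to a real value, e.g. with coalesce.
(coalesced,) = conn.execute(
    "SELECT group_concat(coalesce(state, ''), ',') FROM orders").fetchone()

print(plain)      # one separator: the NULL row was skipped
print(coalesced)  # two separators: the NULL row became ''
```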
In version 9.x you can get around this using, as usual, [COALESCE](http://www.postgresql.org/docs/8.4/static/functions-conditional.html#AEN15275):
```
=> select array_to_string(array_agg(distinct coalesce(state, '')), ',') from orders;
array_to_string
------------------
,failure,success
```
So perhaps this will work for you:
```
array_to_string(array_agg(DISTINCT coalesce(class_low, '')), ',')
```
Of course that will fold NULLs and empty strings into one value, that may or may not be an issue. | You could use a *case* statement to handle the *null* value before it gets passed into array\_agg:
```
select
array_to_string(array_agg(case
    when xxx is null then 'whatever'
    when xxx = '' then 'foo'
    else xxx end), ', ')
```
This way you can map any number of "keys" to the values you like | how can I include a NULL value using array_agg in postgresql? | [
"",
"sql",
"arrays",
"postgresql",
"aggregate-functions",
"postgresql-8.4",
""
] |
For the sake of interest I want to convert video durations from YouTubes `ISO 8601` to seconds. To future proof my solution, I picked [a really long video](http://www.youtube.com/watch?v=2XwmldWC_Ls) to test it against.
The API provides this for its duration - `"duration": "P1W2DT6H21M32S"`
I tried parsing this duration with `dateutil` as suggested in [stackoverflow.com/questions/969285](https://stackoverflow.com/questions/969285/how-do-i-translate-a-iso-8601-datetime-string-into-a-python-datetime-object).
```
import dateutil.parser
duration = dateutil.parser.parse('P1W2DT6H21M32S')
```
This throws an exception
```
TypeError: unsupported operand type(s) for +=: 'NoneType' and 'int'
```
What am I missing? | Python's built-in dateutil module only supports parsing ISO 8601 dates, not ISO 8601 durations. For that, you can use the "isodate" library (in pypi at <https://pypi.python.org/pypi/isodate> -- install through pip or easy\_install). This library has full support for ISO 8601 durations, converting them to datetime.timedelta objects. So once you've imported the library, it's as simple as:
```
import isodate
dur = isodate.parse_duration('P1W2DT6H21M32S')
print(dur.total_seconds())
``` | Works on python 2.7+. Adopted from a [JavaScript one-liner for Youtube v3 question here](https://stackoverflow.com/questions/22148885/converting-youtube-data-api-v3-video-duration-format-to-seconds-in-javascript-no).
```
import re
def YTDurationToSeconds(duration):
match = re.match('PT(\d+H)?(\d+M)?(\d+S)?', duration).groups()
hours = _js_parseInt(match[0]) if match[0] else 0
minutes = _js_parseInt(match[1]) if match[1] else 0
seconds = _js_parseInt(match[2]) if match[2] else 0
return hours * 3600 + minutes * 60 + seconds
# js-like parseInt
# https://gist.github.com/douglasmiranda/2174255
def _js_parseInt(string):
return int(''.join([x for x in string if x.isdigit()]))
# example output
YTDurationToSeconds(u'PT15M33S')
# 933
```
Handles the ISO 8601 duration format to the extent YouTube uses it (up to hours). | How to convert YouTube API duration to seconds? | [
"",
"python",
"youtube-api",
""
] |
I'm trying to implement an assert function. How can I get the text of the failing condition into the error message? If I have to parse it from the backtrace, can I portably rely on anything about the format of frames? | If you're sure the expression to test is secure you could do something like this:
File my\_assert.py:
```
import sys
def my_assert(condition):
caller = sys._getframe(1)
if not eval(condition, caller.f_globals, caller.f_locals):
raise AssertionError(repr(condition) + " on line " +
str(caller.f_lineno) + ' in ' +
caller.f_code.co_name)
```
File test\_my\_assert.py:
```
from my_assert import my_assert
global_var = 42
def test():
local_var = 17
my_assert('local_var*2 < global_var') # OK
my_assert('local_var > global_var')
test()
```
Output:
```
Traceback (most recent call last):
File "test_my_assert.py", line 10, in <module>
test()
File "test_my_assert.py", line 8, in test
my_assert('local_var > global_var')
File "my_assert.py", line 8, in my_assert
caller.f_code.co_name)
AssertionError: 'local_var > global_var' on line 8 in test
``` | AssertionError is just like any other exception in python, and `assert` is a simple statement that is equivalent to
```
if __debug__:
if not expression: raise AssertionError
```
or
```
if __debug__:
if not expression1: raise AssertionError(expression2)
```
so you can add a second parameter to your assertion to have additional output
```
from sys import exc_info
from traceback import print_exception
# assertions are simply exceptions in Python
try:
assert False, "assert was false"
except AssertionError:
print_exception(*exc_info())
```
outputs
```
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
AssertionError: assert was false
``` | Implementing C-like assert | [
"",
"python",
""
] |
All, I'm writing a flask application that depends on [flask-principal](http://pythonhosted.org/Flask-Principal/) for managing user roles. I'd like to write some simple unit tests to check which views can be accessed by which user. An example of code is posted [on pastebin](http://pastebin.com/15g5jcjg) to avoid cluttering this post. In short, I define a few routes, decorating some so that they can be accessed only by users with the proper role, then try to access them in a test.
In the code pasted, the `test_member` and `test_admin_b` both fail, complaining about a `PermissionDenied`. Obviously, I'm failing to declare the user properly; at least, the info about the user roles is not in the right context.
Any help or insight about the complexities of context processing will be deeply appreciated. | Flask-Principal does not store information for you between requests. It's up to you to do this however you like. Keep that in mind and think about your tests for a moment. You call the `test_request_context` method in the `setUpClass` method. This creates a new request context. You are also making test client calls with `self.client.get(..)` in your tests. These calls create additional request contexts that are not shared between each other. Thus, your calls to `identity_changed.send(..)` do not happen with the context of the requests that are checking for permissions. I've gone ahead and edited your code to make the tests pass in hopes that it will help you understand. Pay special attention to the `before_request` filter I added in the `create_app` method.
```
import hmac
import unittest
from functools import wraps
from hashlib import sha1
import flask
from flask.ext.principal import Principal, Permission, RoleNeed, Identity, \
    identity_changed, identity_loaded, current_app
def roles_required(*roles):
"""Decorator which specifies that a user must have all the specified roles.
Example::
@app.route('/dashboard')
@roles_required('admin', 'editor')
def dashboard():
return 'Dashboard'
The current user must have both the `admin` role and `editor` role in order
to view the page.
:param args: The required roles.
Source: https://github.com/mattupstate/flask-security/
"""
def wrapper(fn):
@wraps(fn)
def decorated_view(*args, **kwargs):
perms = [Permission(RoleNeed(role)) for role in roles]
for perm in perms:
if not perm.can():
# return _get_unauthorized_view()
flask.abort(403)
return fn(*args, **kwargs)
return decorated_view
return wrapper
def roles_accepted(*roles):
"""Decorator which specifies that a user must have at least one of the
specified roles. Example::
@app.route('/create_post')
@roles_accepted('editor', 'author')
def create_post():
return 'Create Post'
The current user must have either the `editor` role or `author` role in
order to view the page.
:param args: The possible roles.
"""
def wrapper(fn):
@wraps(fn)
def decorated_view(*args, **kwargs):
perm = Permission(*[RoleNeed(role) for role in roles])
if perm.can():
return fn(*args, **kwargs)
flask.abort(403)
return decorated_view
return wrapper
def _on_principal_init(sender, identity):
if identity.id == 'admin':
identity.provides.add(RoleNeed('admin'))
identity.provides.add(RoleNeed('member'))
def create_app():
app = flask.Flask(__name__)
app.debug = True
app.config.update(SECRET_KEY='secret', TESTING=True)
principal = Principal(app)
identity_loaded.connect(_on_principal_init)
@app.before_request
def determine_identity():
# This is where you get your user authentication information. This can
# be done many ways. For instance, you can store user information in the
# session from previous login mechanism, or look for authentication
# details in HTTP headers, the querystring, etc...
identity_changed.send(current_app._get_current_object(), identity=Identity('admin'))
@app.route('/')
def index():
return "OK"
@app.route('/member')
@roles_accepted('admin', 'member')
def role_needed():
return "OK"
@app.route('/admin')
@roles_required('admin')
def connect_admin():
return "OK"
@app.route('/admin_b')
@admin_permission.require()
def connect_admin_alt():
return "OK"
return app
admin_permission = Permission(RoleNeed('admin'))
class WorkshopTest(unittest.TestCase):
@classmethod
def setUpClass(cls):
app = create_app()
cls.app = app
cls.client = app.test_client()
def test_basic(self):
r = self.client.get('/')
self.assertEqual(r.data, "OK")
def test_member(self):
r = self.client.get('/member')
self.assertEqual(r.status_code, 200)
self.assertEqual(r.data, "OK")
def test_admin_b(self):
r = self.client.get('/admin_b')
self.assertEqual(r.status_code, 200)
self.assertEqual(r.data, "OK")
if __name__ == '__main__':
unittest.main()
``` | As [Matt](https://stackoverflow.com/users/32396/matt-w) explained, it's only a matter of context. Thanks to his explanations, I came with two different ways to switch identities during unit tests.
Before all, let's modify a bit the application creation:
```
def _on_principal_init(sender, identity):
"Sets the roles for the 'admin' and 'member' identities"
if identity.id:
if identity.id == 'admin':
identity.provides.add(RoleNeed('admin'))
identity.provides.add(RoleNeed('member'))
def create_app():
app = flask.Flask(__name__)
app.debug = True
app.config.update(SECRET_KEY='secret',
TESTING=True)
principal = Principal(app)
identity_loaded.connect(_on_principal_init)
#
@app.route('/')
def index():
return "OK"
#
@app.route('/member')
@roles_accepted('admin', 'member')
def role_needed():
return "OK"
#
@app.route('/admin')
@roles_required('admin')
def connect_admin():
return "OK"
# Using `flask.ext.principal` `Permission.require`...
# ... instead of Matt's decorators
@app.route('/admin_alt')
@admin_permission.require()
def connect_admin_alt():
return "OK"
return app
```
---
A first possibility is to create a function that loads an identity before each request in our test. The easiest is to declare it in the `setUpClass` of the test suite after the app is created, using the `app.before_request` decorator:
```
class WorkshopTestOne(unittest.TestCase):
#
@classmethod
def setUpClass(cls):
app = create_app()
cls.app = app
cls.client = app.test_client()
@app.before_request
def get_identity():
idname = flask.request.args.get('idname', '') or None
print "Notifying that we're using '%s'" % idname
identity_changed.send(current_app._get_current_object(),
identity=Identity(idname))
```
Then, the tests become:
```
def test_admin(self):
r = self.client.get('/admin')
self.assertEqual(r.status_code, 403)
#
r = self.client.get('/admin', query_string={'idname': "member"})
self.assertEqual(r.status_code, 403)
#
r = self.client.get('/admin', query_string={'idname': "admin"})
self.assertEqual(r.status_code, 200)
self.assertEqual(r.data, "OK")
#
def test_admin_alt(self):
try:
r = self.client.get('/admin_alt')
except flask.ext.principal.PermissionDenied:
pass
#
try:
r = self.client.get('/admin_alt', query_string={'idname': "member"})
except flask.ext.principal.PermissionDenied:
pass
#
try:
r = self.client.get('/admin_alt', query_string={'idname': "admin"})
except flask.ext.principal.PermissionDenied:
raise
self.assertEqual(r.data, "OK")
```
(Incidentally, the very last test shows that Matt's decorators are far easier to use....)
---
A second approach uses the `test_request_context` function with a `with ...` block to create a temporary context. There is no need to define a function decorated by `@app.before_request`: just pass the route to test as the argument of `test_request_context`, send the `identity_changed` signal in the context, and use the `.full_dispatch_request` method:
```
class WorkshopTestTwo(unittest.TestCase):
#
@classmethod
def setUpClass(cls):
app = create_app()
cls.app = app
cls.client = app.test_client()
cls.testing = app.test_request_context
def test_admin(self):
with self.testing("/admin") as c:
r = c.app.full_dispatch_request()
self.assertEqual(r.status_code, 403)
#
with self.testing("/admin") as c:
identity_changed.send(c.app, identity=Identity("member"))
r = c.app.full_dispatch_request()
self.assertEqual(r.status_code, 403)
#
with self.testing("/admin") as c:
identity_changed.send(c.app, identity=Identity("admin"))
r = c.app.full_dispatch_request()
self.assertEqual(r.status_code, 200)
self.assertEqual(r.data, "OK")
``` | Unit-testing a flask-principal application | [
"",
"python",
"unit-testing",
"flask",
"flask-principal",
""
] |
I have two tables with the following records:
Clients:
```
cid | cname | ccountry
-----------------------
1 | John | Australia
2 | Mark | USA
3 | Liz | England
```
Orders:
```
oid | cid | oquantity
---------------------
1 | 1 | 100
2 | 1 | 100
3 | 2 | 50
4 | 2 | 150
5 | 3 | 50
6 | 3 | 100
```
I need to find out the Client name(s) who have the maximum quantity of orders. I ran the following query and got the correct result:
```
select cname, ccountry
from Clients
where cid in
(select cid
from Orders
group by cid
having sum(oquantity) = (select max(amount) from
(select sum(oquantity) amount
from Orders
group by cid)t1))
```
2 row(s) returned
> 'John', 'Australia'
>
> 'Mark', 'USA'
But I just need to know whether it can be done in a simpler way. It has become complicated once the total quantity is also required to be returned. | I have reduced your `2` subqueries.
```
SELECT
clients.cid,
cname,
ccountry
FROM Orders,
Clients
WHERE orders.cid = clients.cid
GROUP BY clients.cid
HAVING SUM(orders.oquantity) = (SELECT
SUM(oquantity) AS amount
FROM Orders
GROUP BY cid
ORDER BY amount DESC
LIMIT 1 )
``` | Try :
```
select cname, ccountry, sum(oquantity)
FROM Clients c
INNER JOIN Orders o ON (o.cid = c.cid)
where c.cid in
(
select cid from Orders group by cid
having sum(oquantity) = (
select max(amount) from (
select sum(oquantity) amount from Orders group by cid)t1))
GROUP BY c.cid
``` | multiple row fetched with max number sql | [
"",
"mysql",
"sql",
"group-by",
"max",
""
] |
I have a very large table of contacts, for which I am building an interface to help my client de-dupe. Here is an example of the table content:
```
id | firstname | lastname | email | address1 | addres2 | verifiedAt |
1 | James | johnson | james@test.com | | | |
2 | David | bloggs | james@bloggs.com | | | |
3 | John | nobel | james@nobel.com | | | |
4 | Terry | jacket | james@jacket.com | | | 05/05/2013 |
5 | James | johnson | james@johnson.com| | | |
6 | James | privett | james@test.com | | | |
```
I need to write a query that will return the first contact that has another contact in the same table where either the email addresses match or the firstname + lastname match.
Is this possible in a single query?
Thanks in advance | This works ([SQLFiddle DEMO](http://www.sqlfiddle.com/#!3/74c50/23)):
```
SELECT a.* FROM mytable a
JOIN (
SELECT email
FROM mytable
GROUP BY email
HAVING count(*) > 1
) b ON a.email = b.email
UNION
SELECT a.* FROM mytable a
JOIN (
SELECT firstname, lastname
FROM mytable
GROUP BY firstname, lastname
HAVING count(*) > 1
) b ON a.firstname = b.firstname AND a.lastname = b.lastname
```
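As a quick sanity check (not part of the original answer), the UNION query can be exercised against an in-memory SQLite copy of the sample data via Python's `sqlite3` module:

```python
import sqlite3

# In-memory copy of the question's sample rows; IDs 2-4 have no duplicates,
# so they should not appear in the result.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE mytable(id INTEGER, firstname TEXT, lastname TEXT, email TEXT);
INSERT INTO mytable VALUES
 (1,'James','johnson','james@test.com'),
 (2,'David','bloggs','james@bloggs.com'),
 (3,'John','nobel','james@nobel.com'),
 (4,'Terry','jacket','james@jacket.com'),
 (5,'James','johnson','james@johnson.com'),
 (6,'James','privett','james@test.com');
""")
rows = con.execute("""
SELECT a.* FROM mytable a
JOIN (SELECT email FROM mytable GROUP BY email HAVING count(*) > 1) b
  ON a.email = b.email
UNION
SELECT a.* FROM mytable a
JOIN (SELECT firstname, lastname FROM mytable
      GROUP BY firstname, lastname HAVING count(*) > 1) b
  ON a.firstname = b.firstname AND a.lastname = b.lastname
ORDER BY 1
""").fetchall()
print([r[0] for r in rows])  # ids 1, 5 and 6 are flagged as duplicates
```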
To make sure that this query works fast, be sure to have at least the following indexes:
```
CREATE INDEX i1 ON mytable(email);
CREATE INDEX i2 ON mytable(firstname, lastname);
``` | Try this ([SQL Fiddle](http://www.sqlfiddle.com/#!3/74c50/46)).
```
SELECT DISTINCT *
FROM
( SELECT
MIN(id) as [id]
FROM mytable
GROUP BY email
HAVING COUNT(*) > 1
UNION ALL
SELECT
MIN(id) as [id]
FROM mytable
GROUP BY firstName,lastName
HAVING Count(*) > 1 )dups
JOIN myTable t
ON t.Id = dups.id
``` | SQL Query needs to match similar records | [
"",
"sql",
"sql-server-2008",
""
] |
I have this table:
```
A B C
1 Record 1 Type 1
2 Record 2 Type 2
3 Record 3 Type 1
4 Record 4 Type 2
```
I need to pair up rows by their values in `C` (Type 1 & Type 2), given that the first record with `Type 1` must match the nearest `ID` with `Type 2`.
Desired output:
```
A B C A B C
1 Record 1 Type 1 2 Record 2 Type 2
3 Record 3 Type 1 4 Record 4 Type 2
```
I tried doing this in a query with 2 CTEs but I couldn't come up with the expected result:
```
WITH SET_A (A, B, C) AS
(SELECT * FROM A WHERE C = 'Type 1'),
SET_B (A, B, C) AS
(SELECT * FROM A WHERE C = 'Type 2')
SELECT * FROM SET_A CROSS JOIN SET_B;
```
Are there any other approaches than using cross joins? | Here you go. For each "Type 1" it will find the nearest subsequent (by id) "Type 2".
<http://sqlfiddle.com/#!6/a5263/20>
```
CREATE TABLE t
(
A int,
B varchar(32),
C varchar(32)
);
insert into t values (1, 'Record 1', 'Type 1')
insert into t values (2, 'Record 2', 'Type 2')
insert into t values (3, 'Record 3', 'Type 1')
insert into t values (4, 'Record 4', 'Type 2')
insert into t values (5, 'Record 5', 'Type 1')
insert into t values (6, 'Record 6', 'Type 1a')
insert into t values (7, 'Record 7', 'Type 2')
;
with set_a as
(
select * from t where c = 'type 1'
)
, set_b as
(
select a, b, c, a_match = (select max(t2.a) from t t2 where t2.a < t.a and t2.c = 'type 1')
from t where c = 'type 2'
)
select set_a.* , a2 = set_b.a, b2 = set_b.b, c2 = set_b.c
from set_a
join set_b on set_b.a_match = set_a.a
``` | ```
SELECT t1.a a1, t1.b b1, t1.c c1, t2.a a2, t2.b b2, t2.c c2
FROM Table1 t1
JOIN Table1 t2 ON t1.c = 'Type 1' AND t2.c = 'Type 2' AND t1.a < t2.a
WHERE t2.a = (SELECT MIN(t3.a) FROM Table1 t3 WHERE t3.c = 'Type 2' AND t3.a > t1.a)
```
**[SQL Fiddle](http://sqlfiddle.com/#!3/0d62f/9)** | pair up two rows in single row | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
Using split in a for loop results in the mentioned exception. But when taking the elements independently of a for loop, it works:
```
>>> for k,v in x.split("="):
... print k,v
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: too many values to unpack
>>> y = x.split("=")
>>> y
['abc', 'asflskfjla']
>>> k,v = y
>>> k
'abc'
>>> v
'asflskfjla'
```
An explanation would be appreciated - and also naturally the proper syntax for the for loop version. | The `for` loop expects that each item in the iterable can be unpacked into two variables. So in your case, it'd look something like one of these:
```
[('a, b'), ('c, d'), ...]
[['a, b'], ['c, d'], ...]
['ab', 'cd', ...]
...
```
Each item in each of those iterables can be split up into a `k` and a `v` component. In your case, they cannot, as the output of `x.split('=')` is a list of strings with more than two characters:
```
['abc', 'asflskfjla']
``` | `x.split` returns a list of strings, as you can see from your `y` variable. When you iterate over that, it takes the first element of the list `'abc'` and tries to bind it to the tuple `k, v`. Since strings are a sequence type, it tries to assign the characters of the string to the tuple you've asked for - and there are in fact too many values (three letters) to unpack into a two-element tuple. | Convert a split string to a tuple results in "too many values to unpack" | [
"",
"python",
"string",
"for-loop",
"split",
"iterable-unpacking",
""
] |
I have constructed a condition that extracts exactly one row from my dataframe:
```
d2 = df[(df['l_ext']==l_ext) & (df['item']==item) & (df['wn']==wn) & (df['wd']==1)]
```
Now I would like to take a value from a particular column:
```
val = d2['col_name']
```
But as a result, I get a dataframe that contains one row and one column (i.e., one cell). It is not what I need. I need one value (one float number). How can I do it in pandas? | If you have a DataFrame with only one row, then access the first (only) row as a Series using *[iloc](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.iloc.html)*, and then the value using the column name:
```
In [3]: sub_df
Out[3]:
A B
2 -0.133653 -0.030854
In [4]: sub_df.iloc[0]
Out[4]:
A -0.133653
B -0.030854
Name: 2, dtype: float64
In [5]: sub_df.iloc[0]['A']
Out[5]: -0.13365288513107493
``` | These are fast access methods for scalars:
```
In [15]: df = pandas.DataFrame(numpy.random.randn(5, 3), columns=list('ABC'))
In [16]: df
Out[16]:
A B C
0 -0.074172 -0.090626 0.038272
1 -0.128545 0.762088 -0.714816
2 0.201498 -0.734963 0.558397
3 1.563307 -1.186415 0.848246
4 0.205171 0.962514 0.037709
In [17]: df.iat[0, 0]
Out[17]: -0.074171888537611502
In [18]: df.at[0, 'A']
Out[18]: -0.074171888537611502
``` | How can I get a value from a cell of a dataframe? | [
"",
"python",
"pandas",
"dataframe",
""
] |
I'd like to be able to perform fits that allows me to fit an arbitrary curve function to data, and allows me to set arbitrary bounds on parameters, for example I want to fit function:
```
f(x) = a1*(x-a2)^a3 * exp(-a4*x^a5)
```
and say:
* `a2` is in following range: `(-1, 1)`
* `a3` and `a5` are positive
There is a nice [scipy curve\_fit](http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html) function, but it doesn't allow specifying parameter bounds. There is also the nice <http://code.google.com/p/pyminuit/> library that does generic minimization, and it allows setting bounds on parameters, but in my case it did not converge. | Note: New in version 0.17 of SciPy
Let's suppose you want to fit a model to the data which looks like this:
```
y=a*t**alpha+b
```
and with the constraint on alpha
```
0<alpha<2
```
while the other parameters a and b remain free. Then we should use the bounds option of curve\_fit in the following fashion:
```
import numpy as np
from scipy.optimize import curve_fit
def func(t, a,alpha,b):
return a*t**alpha+b
param_bounds=([-np.inf,0,-np.inf],[np.inf,2,np.inf])
popt, pcov = curve_fit(func, xdata, ydata, bounds=param_bounds)
```
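The snippet above leaves `xdata` and `ydata` undefined; for reference, a self-contained variant with synthetic data (the coefficient values here are made up for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def func(t, a, alpha, b):
    return a * t**alpha + b

# Synthetic noisy data generated from known (hypothetical) coefficients
rng = np.random.default_rng(0)
xdata = np.linspace(0.1, 4.0, 50)
ydata = func(xdata, 2.5, 1.3, 0.5) + 0.05 * rng.standard_normal(xdata.size)

# alpha is constrained to [0, 2]; a and b remain unbounded
param_bounds = ([-np.inf, 0, -np.inf], [np.inf, 2, np.inf])
popt, pcov = curve_fit(func, xdata, ydata, bounds=param_bounds)
print(popt)  # fitted alpha is guaranteed to land inside [0, 2]
```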
Source is [here](http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html). | As already mentioned by [Rob Falck](https://stackoverflow.com/users/754536/rob-falck), you could use, for example, the scipy nonlinear optimization routines in [scipy.minimize](http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.optimize.minimize.html#scipy.optimize.minimize) to minimize an arbitrary error function, e.g. the mean squared error.
Note that the function you gave does not necessarily have real values - maybe this was the reason your minimization in pyminuit did not converge. You have to treat this a little more explicitly, see example 2.
The examples below both use the `L-BFGS-B` minimization method, which supports bounded parameter regions. I split this answer in two parts:
1. A function with real codomain, resembling the one given by you. I added absolutes to ensure the function you gave returns real numbers in the domain [-3,3)
2. The actual function you gave, which has a complex codomain
## 1. Real codomain
The example below shows optimization of this slightly modified version of your function.
```
import numpy as np
import pylab as pl
from scipy.optimize import minimize
points = 500
xlim = 3.
def f(x,*p):
a1,a2,a3,a4,a5 = p
return a1*np.abs(x-a2)**a3 * np.exp(-a4 * np.abs(x)**a5)
# generate noisy data with known coefficients
p0 = [1.4,-.8,1.1,1.2,2.2]
x = (np.random.rand(points) * 2. - 1.) * xlim
x.sort()
y = f(x,*p0)
y_noise = y + np.random.randn(points) * .05
# mean squared error wrt. noisy data as a function of the parameters
err = lambda p: np.mean((f(x,*p)-y_noise)**2)
# bounded optimization using scipy.minimize
p_init = [1.,-1.,.5,.5,2.]
p_opt = minimize(
err, # minimize wrt to the noisy data
p_init,
bounds=[(None,None),(-1,1),(None,None),(0,None),(None,None)], # set the bounds
method="L-BFGS-B" # this method supports bounds
).x
# plot everything
pl.scatter(x, y_noise, alpha=.2, label="f + noise")
pl.plot(x, y, c='#000000', lw=2., label="f")
pl.plot(x, f(x,*p_opt) ,'--', c='r', lw=2., label="fitted f")
pl.xlabel("x")
pl.ylabel("f(x)")
pl.legend(loc="best")
pl.xlim([-xlim*1.01,xlim*1.01])
pl.show()
```

## 2. Extension to complex codomain
Extension of the above minimization to the complex domain can be done by explicitly casting to complex numbers and adapting the error function:
First, you cast explicitly the value x to complex-valued to ensure f returns complex values and can actually compute fractional exponents of negative numbers. Second, we compute some error function on both real and imaginary parts - a straightforward candidate is the mean of the squared complex absolutes.
```
import numpy as np
import pylab as pl
from scipy.optimize import minimize
points = 500
xlim = 3.
def f(x,*p):
a1,a2,a3,a4,a5 = p
x = x.astype(complex) # cast x explicitly to complex, to ensure complex valued f
return a1*(x-a2)**a3 * np.exp(-a4 * x**a5)
# generate noisy data with known coefficients
p0 = [1.4,-.8,1.1,1.2,2.2]
x = (np.random.rand(points) * 2. - 1.) * xlim
x.sort()
y = f(x,*p0)
y_noise = y + np.random.randn(points) * .05 + np.random.randn(points) * 1j*.05
# error function chosen as mean of squared absolutes
err = lambda p: np.mean(np.abs(f(x,*p)-y_noise)**2)
# bounded optimization using scipy.minimize
p_init = [1.,-1.,.5,.5,2.]
p_opt = minimize(
err, # minimize wrt to the noisy data
p_init,
bounds=[(None,None),(-1,1),(None,None),(0,None),(None,None)], # set the bounds
method="L-BFGS-B" # this method supports bounds
).x
# plot everything
pl.scatter(x, np.real(y_noise), c='b',alpha=.2, label="re(f) + noise")
pl.scatter(x, np.imag(y_noise), c='r',alpha=.2, label="im(f) + noise")
pl.plot(x, np.real(y), c='b', lw=1., label="re(f)")
pl.plot(x, np.imag(y), c='r', lw=1., label="im(f)")
pl.plot(x, np.real(f(x,*p_opt)) ,'--', c='b', lw=2.5, label="fitted re(f)")
pl.plot(x, np.imag(f(x,*p_opt)) ,'--', c='r', lw=2.5, label="fitted im(f)")
pl.xlabel("x")
pl.ylabel("f(x)")
pl.legend(loc="best")
pl.xlim([-xlim*1.01,xlim*1.01])
pl.show()
```

## Notes
It seems the minimizer might be a little sensitive to initial values - I therefore placed my first guess (p\_init) not too far away from the optimum. Should you have to fight with this, you can use the same minimization procedure in addition to a global optimization loop, e.g. [basin-hopping](http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.optimize.basinhopping.html#scipy.optimize.basinhopping) or [brute](http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.optimize.brute.html#scipy.optimize.brute). | Python curve fit library that allows me to assign bounds to parameters | [
"",
"python",
"scipy",
"numeric",
"curve-fitting",
"pyminuit",
""
] |
How can I debug a Python segmentation fault?
We are trying to run our python code on SuSE 12.3. We get reproducible segmentation faults. The python code has been working on other platforms without segmentation faults, for years.
We only write Python, no C extensions ....
What is the best way to debug this? I know a bit of ANSI C, but that was ten years ago ....
Python 2.7.5
**Update**
The segmentation fault happens on interpreter shutdown.
I can run the script several times:
```
python -m pdb myscript.py arg1 arg1
continue
run
continue
run
```
But the segmentation faults happen if I leave pdb with Ctrl-D.
**Update 2**
I now try to debug it with gdb:
```
gdb
> file python
> run myscript.py arg1 arg2
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fffefbe2700 (LWP 15483)]
0x00007ffff7aef93c in PyEval_EvalFrameEx () from /usr/lib64/libpython2.7.so.1.0
(gdb) bt
#0 0x00007ffff7aef93c in PyEval_EvalFrameEx () from /usr/lib64/libpython2.7.so.1.0
#1 0x00007ffff7af5303 in PyEval_EvalCodeEx () from /usr/lib64/libpython2.7.so.1.0
#2 0x00007ffff7adc858 in ?? () from /usr/lib64/libpython2.7.so.1.0
#3 0x00007ffff7ad840d in PyObject_Call () from /usr/lib64/libpython2.7.so.1.0
#4 0x00007ffff7af1082 in PyEval_EvalFrameEx () from /usr/lib64/libpython2.7.so.1.0
#5 0x00007ffff7af233d in PyEval_EvalFrameEx () from /usr/lib64/libpython2.7.so.1.0
#6 0x00007ffff7af233d in PyEval_EvalFrameEx () from /usr/lib64/libpython2.7.so.1.0
#7 0x00007ffff7af5303 in PyEval_EvalCodeEx () from /usr/lib64/libpython2.7.so.1.0
#8 0x00007ffff7adc5b6 in ?? () from /usr/lib64/libpython2.7.so.1.0
#9 0x00007ffff7ad840d in PyObject_Call () from /usr/lib64/libpython2.7.so.1.0
#10 0x00007ffff7ad9171 in ?? () from /usr/lib64/libpython2.7.so.1.0
#11 0x00007ffff7ad840d in PyObject_Call () from /usr/lib64/libpython2.7.so.1.0
#12 0x00007ffff7aeeb62 in PyEval_CallObjectWithKeywords () from /usr/lib64/libpython2.7.so.1.0
#13 0x00007ffff7acc757 in ?? () from /usr/lib64/libpython2.7.so.1.0
#14 0x00007ffff7828e0f in start_thread () from /lib64/libpthread.so.0
#15 0x00007ffff755c7dd in clone () from /lib64/libc.so.6
```
**Update 3**
I installed gdbinit from <http://hg.python.org/cpython/file/default/Misc/gdbinit>
and the debugging symbols from <http://download.opensuse.org/debug/distribution/12.3/repo/oss/suse/x86_64/>
```
(gdb) pystack
No symbol "_PyUnicode_AsString" in current context.
```
What now?
**Update 4**
We installed a new RPM (python-2.7.5-3.1.x86\_64). We get fewer segfaults, but they still happen.
Here is the link to repository:
<http://download.opensuse.org/repositories/devel:/languages:/python:/Factory/openSUSE_12.3/x86_64/>
**Update 5**
Solved my initial problem:
It was <http://bugs.python.org/issue1856> (shutdown (exit) can hang or segfault with daemon threads running)
Related: [Detect Interpreter shut down in daemon thread](https://stackoverflow.com/questions/18098475/detect-interpreter-shut-down-in-daemon-thread) | I got to this question because of the `Segmentation fault`, but not on exit, just in general, and I found that nothing else helped as effectively as [faulthandler](https://docs.python.org/3/library/faulthandler.html). It's part of Python 3.3, and you can install in 2.7 using `pip`. | tl;dr for python3 users.
Firstly, from the docs:
> [faulthandler](https://docs.python.org/3/library/faulthandler.html) is a builtin module since Python 3.3
Code usage:
```
import faulthandler
faulthandler.enable()
# bad code goes here
```
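For a quick check that the handler really is installed, you can also dump the current tracebacks on demand (a sketch; `dump_traceback` writes at the C level, so it needs a real file descriptor rather than a `StringIO`):

```python
import faulthandler
import tempfile

faulthandler.enable()             # install handlers for SIGSEGV, SIGFPE, ...
assert faulthandler.is_enabled()

# dump_traceback() writes the current Python stack of every thread; it is
# also handy for diagnosing hangs, not just crashes.
with tempfile.TemporaryFile(mode="w+") as f:
    faulthandler.dump_traceback(file=f)
    f.seek(0)
    dump = f.read()
print('File "' in dump)           # the dump lists 'File "...", line N' frames
```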
Shell usage:
```
$ python3 -q -X faulthandler
>>> # bad code goes here
``` | How to debug a Python segmentation fault? | [
"",
"python",
"segmentation-fault",
""
] |
My table has the following data:
```
ID name devID
1 abc 101
2 def 111
3 ghi 121
4 abc 102
5 def 110
```
I want to select the rows (ID, name, devID) based on the following conditions:
a. the value of devID for name abc has been increased by 1, so only the higher-value record should be displayed in the result (only 102)
b. the value of devID for name def has been decreased by 1, so it should display all records
(111 and 110)
Also, we will keep adding records for different names, and each name will not have more than 2 or at most 3 rows in the table, so the above condition should always hold.
Please help me on this query.
Thanks in Advance. | The below should help you solve your problem if I have understood correctly your question:
```
SELECT *
FROM table_data AS a
WHERE a.devid >=
(SELECT DEVID
FROM table_data AS C
WHERE c.ID =
(SELECT max(b.ID)
FROM table_data AS b
GROUP BY b.name HAVING b.name = a.name)) ;
```
**SQL Fiddle:** <http://www.sqlfiddle.com/#!3/b14513/18>
This query displays only rows whose `DEVID` is greater than (or equal to) the most recently inserted `DEVID` for the same `name`.
**Results**
```
ID NAME DEVID
2 def 111
3 ghi 121
4 abc 102
5 def 110
```
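Those results can be reproduced with a quick in-memory SQLite sketch (same data as the question; the innermost lookup is written as a plain correlated `WHERE` for brevity):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table_data(ID INTEGER, name TEXT, devID INTEGER);
INSERT INTO table_data VALUES
 (1,'abc',101),(2,'def',111),(3,'ghi',121),(4,'abc',102),(5,'def',110);
""")
# Keep each row whose devID is >= the devID of the latest (max ID) row
# that shares the same name.
rows = con.execute("""
SELECT * FROM table_data AS a
WHERE a.devID >= (SELECT devID FROM table_data AS c
                  WHERE c.ID = (SELECT MAX(b.ID) FROM table_data AS b
                                WHERE b.name = a.name))
ORDER BY a.ID
""").fetchall()
print([r[0] for r in rows])  # IDs 2, 3, 4 and 5, as in the result above
```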
**Update** (Query can be simplified further to):
```
SELECT *
FROM table_data AS a
WHERE a.devid >=
(SELECT DEVID
FROM table_data AS C
WHERE c.ID =
(SELECT max(b.ID)
FROM table_data AS b
where b.name = a.name)) ;
```
Also indexes should be placed in ID and devID. | If I understood you correctly, you simply want to get the **latest** devID (as shown below).
So **why bother with Joins and stuff**, if this simple approach works too:
```
SELECT DISTINCT(Name), (SELECT TOP 1 devID FROM Table t2
WHERE t2.Name=t1.Name Order By ID desc) FROM table t1
```
Your records:
```
ID name devID
1 abc 101
2 def 111
3 ghi 121
4 abc 102
5 def 110
```
Your expected result (also checked with SQL Fiddle):
```
name devID
ghi 121
abc 102
def 110
``` | SQL Group by query to select the data from same table | [
"",
"sql",
"sql-server",
""
] |
Say I have a string that looks like
```
'one, two, three,'
```
What is a Pythonic way of iterating through the string, replacing **','** with **'.'** one at a time?
Ideally the return value of the function would look like:
```
['one. two, three,' , 'one, two. three,' , 'one, two, three.']
```
---
Reasoning for selected answer, thanks for contributing!
```
import timeit
def one(s):
b = ["%s.%s" % (s[:i], s[i+1:]) for i, c in enumerate(s) if c == ","]
def two(s):
b = [s[:i] + "." + s[i+1:] for i in range(len(s)) if s[i] == ","]
def thr(s):
b = [s[:i] + "." + s[i+1:] for i, c in enumerate(s) if s[i] == ","]
def fou(s):
ss = s.split(',')
b = [','.join(ss[:i]) + '.' + ','.join(ss[i:]) for i in range(1,len(ss))]
a = 'one,two,three,four,five,six,seven,eight,nine,ten,eleven,twelve,'
print(timeit.timeit('one(a)', 'from __main__ import one, a', number = 1000000))
print(timeit.timeit('two(a)', 'from __main__ import two, a', number = 1000000))
print(timeit.timeit('thr(a)', 'from __main__ import thr, a', number = 1000000))
print(timeit.timeit('fou(a)', 'from __main__ import fou, a', number = 1000000))
# C:\dev\goddangit>python timin.py
# 14.3008527857
# 11.8759967856
# 13.3739626708
# 18.8536401851
``` | One-liner, `s` being `'one, two, three,'`:
```
>>> [s[:i] + "." + s[i+1:] for i in range(len(s)) if s[i] == ","]
['one. two, three,', 'one, two. three,', 'one, two, three.']
```
Alternatively, replace the outermost `[ ]` by `( )` to have a generator object instead.
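For completeness, a quick self-check of both forms against the sample string:

```python
s = 'one, two, three,'

# List-comprehension form: one result per comma position
replaced = [s[:i] + "." + s[i+1:] for i in range(len(s)) if s[i] == ","]
assert replaced == ['one. two, three,', 'one, two. three,', 'one, two, three.']

# Generator form: produces the same items lazily
gen = (s[:i] + "." + s[i+1:] for i in range(len(s)) if s[i] == ",")
assert list(gen) == replaced
```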
That is, of course, only for *single-character* replacements. For more generally replacing substrings with other strings you should use one of the other solutions, e.g., using regular expressions. | You can use enumerate:
```
["%s.%s" % (s[:i], s[i+1:]) for i, c in enumerate(s) if c == ","]
```
or regular expressions:
```
["%s.%s" % (s[:m.start()], s[m.start()+1:]) for m in re.finditer(',', s)]
``` | Elegantly iterating over string replacement | [
"",
"python",
""
] |
I write my scripts in python and run them with cmd by typing in:
```
C:\> python script.py
```
Some of my scripts contain separate algorithms and methods which are called based on a flag.
Now I would like to pass the flag through cmd directly, rather than having to go into the script and change the flag before each run. I want something similar to:
```
C:\> python script.py -algorithm=2
```
I have read that people use sys.argv for similar purposes; however, reading the manuals and forums, I couldn't understand how it works. | There are a few modules specialized in parsing command line arguments: [`getopt`](http://docs.python.org/2/library/getopt.html), [`optparse`](http://docs.python.org/2/library/optparse.html) and [`argparse`](http://docs.python.org/2/library/argparse.html). `optparse` is deprecated, and `getopt` is less powerful than `argparse`, so I advise you to use the latter; it'll be more helpful in the long run.
Here's a short example:
```
import argparse
# Define the parser
parser = argparse.ArgumentParser(description='Short sample app')
# Declare an argument (`--algo`), saying that the
# corresponding value should be stored in the `algo`
# field, and using a default value if the argument
# isn't given
parser.add_argument('--algo', action="store", dest='algo', default=0)
# Now, parse the command line arguments and store the
# values in the `args` variable
args = parser.parse_args()
# Individual arguments can be accessed as attributes...
print args.algo
```
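A convenient way to try this without a real command line (the argument values here are just for illustration) is to hand `parse_args` an explicit list:

```python
import argparse

parser = argparse.ArgumentParser(description='Short sample app')
parser.add_argument('--algo', action="store", dest='algo', default=0)

# parse_args accepts an explicit argv list, which is handy for testing
args = parser.parse_args(['--algo', '2'])
print(args.algo)                    # '2' -- a string; add type=int for an integer
print(parser.parse_args([]).algo)   # 0, the default
```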
That should get you started. At worst, there's plenty of documentation available online (say, [this one](http://pymotw.com/2/argparse/) for example)... | It might not answer your question, but some people might find it useful (I was looking for this here):
How to send 2 args (arg1 + arg2) from cmd to python 3:
----- Send the args in test.cmd:
```
python "C:\Users\test.pyw" "arg1" "arg2"
```
----- Retrieve the args in test.py:
```
print ("This is the name of the script= ", sys.argv[0])
print("Number of arguments= ", len(sys.argv))
print("all args= ", str(sys.argv))
print("arg1= ", sys.argv[1])
print("arg2= ", sys.argv[2])
``` | Pass arguments from cmd to python script | [
"",
"python",
"cmd",
"command-line-arguments",
""
] |
Is there an API which allows me to send a notification to Google Hangout? Or is there even a python module which encapsulates the Hangout API?
I would like to send system notification (e.g. hard disk failure reports) to a certain hangout account. Any ideas, suggestions? | Hangouts does not currently have a public API.
That said, messages delivered to the Google Talk XMPP server (talk.google.com:5222) are still being delivered to users via Hangouts. This support is only extended to one-on-one conversations, so the notification can't be delivered to a group of users. The messages will need to be supplied through an authenticated Google account in order to be delivered. | There is pre-alpha library for sending hangouts messages on python:
<https://pypi.python.org/pypi/hangups/0.1>
The API has been reverse engineered and is not published (as someone posted in comments). Thus it might change at Google's will.
Also, messages sent using XMPP ceased to be delivered to Hangouts users. I guess this is another cut (of the thousand scheduled). | send google hangout notification using python | [
"",
"python",
"notifications",
"google-plus",
"google-hangouts",
""
] |
i have this sql table
```
CREATE TABLE Notes(
NoteID [int] IDENTITY(1,1) NOT NULL,
NoteTitle [nvarchar](255) NULL,
NoteDescription [nvarchar](4000) NULL,
CONSTRAINT [PK_Notes] PRIMARY KEY CLUSTERED
(
NoteID ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
```
And I want to copy records from a temporary table INCLUDING the NoteID (using a SQL query).
this is my script:
```
SET IDENTITY_INSERT Notes OFF
INSERT INTO Notes (NoteID, NoteTitle,NoteDescription)
SELECT NoteID, NoteTitle,NoteDescription from Notes_Temp
SET IDENTITY_INSERT Notes ON
```
with this script, i'm getting an error:
```
Cannot insert explicit value for identity column in table 'Notes' when IDENTITY_INSERT is set to OFF.
```
Is there another way of inserting records into a table with an identity column using a SQL query? | Change the OFF and ON around
```
SET IDENTITY_INSERT Notes ON
INSERT INTO Notes (NoteID, NoteTitle,NoteDescription)
SELECT NoteID, NoteTitle,NoteDescription from Notes_Temp
SET IDENTITY_INSERT Notes OFF
``` | ```
SET IDENTITY_INSERT Notes ON
INSERT INTO Notes
/*Note the column list is REQUIRED here, not optional*/
(NoteID, NoteTitle,NoteDescription)
SELECT NoteID, NoteTitle,NoteDescription from Notes_Temp
SET IDENTITY_INSERT Notes OFF
```
You're inserting values for NoteId that is an identity column.
You can turn on identity insert on the table like this so that you can specify your own identity values. | INSERT record to SQL table with IDENTITY column | [
"",
"sql",
""
] |
Every time I try my code it works, but when I type in `'stop'` it gives me an error:
> ValueError: invalid literal for int() with base 10: 'stop'
```
def guessingGame():
global randomNum
guessTry = 3
while True:
guess = input('Guess a Number between 1 - 10, You have 3 Tries, or Enter Stop: ')
if int(guess) == randomNum:
print('Correct')
break
if int(guess) < randomNum:
print('Too Low')
guessTry = guessTry - 1
print('You have, ' + str(guessTry) + ' Guesses Left')
if int(guess) > randomNum:
print('Too High')
guessTry = guessTry - 1
print('You have, ' + str(guessTry) + ' Guesses Left')
if guessTry == 0:
print('You have no more tries')
return
if str(guess) == 'stop' or str(guess) == 'Stop':
break
``` | The string passed to `int()` should only contain digits:
```
>>> int("stop")
Traceback (most recent call last):
File "<ipython-input-114-e5503af2dc1c>", line 1, in <module>
int("stop")
ValueError: invalid literal for int() with base 10: 'stop'
```
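The same pattern in isolation: attempt the conversion and catch the failure (the helper name below is made up for the example):

```python
def parse_guess(text):
    """Return an int, or None when text is not a valid number."""
    try:
        return int(text)
    except ValueError:
        return None

print(parse_guess("7"))     # 7
print(parse_guess("stop"))  # None
```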
A quick fix will be to use [exception handling](http://docs.python.org/2/tutorial/errors.html#exceptions) here:
```
def guessingGame():
global randomNum
global userScore
guessTry = 3
while True:
guess = input('Guess a Number between 1 - 10, You have 3 Tries, or Enter Stop: ')
try:
if int(guess) == randomNum:
print('Correct')
break
if int(guess) < randomNum:
print('Too Low')
guessTry = guessTry - 1
print('You have, ' + str(guessTry) + ' Guesses Left')
if int(guess) > randomNum:
print('Too High')
guessTry = guessTry - 1
print('You have, ' + str(guessTry) + ' Guesses Left')
if guessTry == 0:
print('You have no more tries')
return
except ValueError:
#no need of str() here
if guess.lower() == 'stop':
break
guessingGame()
```
And you can use `guess.lower() == 'stop'` to match any uppercase-lowercase combination of "stop":
```
>>> "Stop".lower() == "stop"
True
>>> "SToP".lower() == "stop"
True
>>> "sTOp".lower() == "stop"
True
``` | Here's a more pythonic (Python 3) version.
```
def guessing_game(random_num):
tries = 3
print("Guess a number between 1 - 10, you have 3 tries, or type 'stop' to quit")
while True:
guess = input("Your number: ")
try:
guess = int(guess)
except (TypeError, ValueError):
if guess.lower() == 'stop':
return
else:
print("Invalid value '%s'" % guess)
continue
if guess == random_num:
print('Correct')
return
elif guess < random_num:
print('Too low')
else:
print('Too high')
tries -= 1
if tries == 0:
print('You have no more tries')
return
print('You have %s guesses left' % tries)
``` | ValueError: invalid literal for int() with base 10: 'stop' | [
"",
"python",
"string",
"int",
""
] |
The `Menu` table has the following attributes:
```
MEAL_ID
MEAL_NAME
MEAL_TYPE
ITEM1_ID
ITEM2_ID
ITEM3_ID
ITEM4_ID
ITEM5_ID
COST
ACTIVE_INDICATOR
```
The `item` table had the following attributes:
```
ITEM_ID
ITEM_NAME
ACTIVE_INDICATOR
```
I am trying to print the name of each item for each instance of the `item_id` in the `menu` table;
`select a.MEAL_ID, a.MEAL_NAME, a.MEAL_TYPE, b.ITEM_NAME, b.ITEM_NAME from mdw_meals_menu a, mdw_item_menu b where a.ITEM1_ID = b.ITEM_ID and a.Item2_ID= b.ITEM_ID and a.item3_id= b.ITEM_ID and a.item4_id = b.ITEM_ID and a.item5_ID= b.ITEM_ID;`
This is the code that i used, but it is not returning any values. Please help | You should really consider moving the Items into a different table to normalize your database. Consider having a Meal\_Items table with a Meal\_Id and an Item\_Id -- that way you could have a 1-n number of items associated with each meal.
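A minimal sketch of that normalized layout, exercised here through Python's built-in `sqlite3` (table and column names are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Meals (Meal_Id INTEGER PRIMARY KEY, Meal_Name TEXT);
CREATE TABLE Items (Item_Id INTEGER PRIMARY KEY, Item_Name TEXT);
-- One row per (meal, item) pair instead of five fixed item columns
CREATE TABLE Meal_Items (Meal_Id INTEGER, Item_Id INTEGER);

INSERT INTO Meals VALUES (1, 'Breakfast');
INSERT INTO Items VALUES (10, 'Eggs'), (11, 'Toast');
INSERT INTO Meal_Items VALUES (1, 10), (1, 11);
""")

rows = con.execute("""
    SELECT m.Meal_Name, i.Item_Name
    FROM Meals m
    JOIN Meal_Items mi ON mi.Meal_Id = m.Meal_Id
    JOIN Items i ON i.Item_Id = mi.Item_Id
    ORDER BY i.Item_Name
""").fetchall()
print(rows)  # [('Breakfast', 'Eggs'), ('Breakfast', 'Toast')]
```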
However, assuming you want 1 row with all the item names, then you need to `JOIN` your menu table to the item table for each item:
```
SELECT m.Meal_Id, m.Meal_Name, m.Meal_Type,
i1.Item_Name Item_Name_1,
i2.Item_Name Item_Name_2,
i3.Item_Name Item_Name_3,
i4.Item_Name Item_Name_4,
i5.Item_Name Item_Name_5
FROM mdw_meals_menu m
LEFT JOIN mdw_item_menu i1 ON m.Item1_Id = i1.Item_Id
LEFT JOIN mdw_item_menu i2 ON m.Item2_Id = i2.Item_Id
LEFT JOIN mdw_item_menu i3 ON m.Item3_Id = i3.Item_Id
LEFT JOIN mdw_item_menu i4 ON m.Item4_Id = i4.Item_Id
LEFT JOIN mdw_item_menu i5 ON m.Item5_Id = i5.Item_Id
```
I used a `LEFT JOIN` in case the Item did not exist, but you may be able to just use an `INNER JOIN` depending on your data. | Try this...
```
SELECT a.MEAL_ID ,
a.MEAL_NAME ,
a.MEAL_TYPE ,
b.ITEM_NAME
FROM mdw_meals_menu a ,
mdw_item_menu b
WHERE b.ITEM_ID IN ( SELECT ITEM1_ID
FROM mdw_meals_menu
UNION
SELECT ITEM2_ID
FROM mdw_meals_menu
UNION
SELECT ITEM3_ID
FROM mdw_meals_menu
UNION
SELECT ITEM4_ID
FROM mdw_meals_menu
UNION
SELECT ITEM5_ID
FROM mdw_meals_menu )
``` | How to select the item names for multiple instances of item id in another table | [
"",
"sql",
""
] |
How can I merge two columns into ONE column if the given column (year_id) has no value (null)?
table 1
```
id txt year_id date
----------------------------------
1 text1 1
2 text2 2
3 text3 3
4 text4 2013-01-02
5 text5 2013-01-03
```
table 2
```
id year
----------------
1 2009
2 2010
3 2011
4 2012
```
I need a result like this
```
id txt merge_column
-------------------------
1 text1 2009
2 text2 2010
3 text3 2011
4 text4 2013-01-02
5 text5 2013-01-03
```
thank you in advance, this query complicates my mind.. thank you | JOIN both tables first, then use `COALESCE()` or `IFNULL()`.
```
SELECT a.id,
a.txt,
COALESCE(b.year, a.date) merge_column
FROM table1 a
LEFT JOIN table2 b
ON a.year_id = b.id
```
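To see the `COALESCE()` fallback in action, here is the same query shape run against SQLite from Python with trimmed-down data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table1 (id INTEGER, txt TEXT, year_id INTEGER, date TEXT);
CREATE TABLE table2 (id INTEGER, year INTEGER);
INSERT INTO table1 VALUES (1, 'text1', 1, NULL), (4, 'text4', NULL, '2013-01-02');
INSERT INTO table2 VALUES (1, 2009);
""")

rows = con.execute("""
    SELECT a.id, a.txt, COALESCE(b.year, a.date) AS merge_column
    FROM table1 a
    LEFT JOIN table2 b ON a.year_id = b.id
    ORDER BY a.id
""").fetchall()
print(rows)  # [(1, 'text1', 2009), (4, 'text4', '2013-01-02')]
```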
* [SQLFiddle Demo](http://www.sqlfiddle.com/#!2/61ecf/1) | ```
SELECT t1.id, txt, IFNULL(date, year) merge_column
FROM table1 t1
LEFT JOIN table2 t2 ON t1.year_id = t2.id
``` | MYSQL merge in one column if given column has no value | [
"",
"mysql",
"sql",
""
] |
I am looking for a way to cleanly convert HTML tables to readable plain text.
I.e. given an input:
```
<table>
<tr>
<td>Height:</td>
<td>200</td>
</tr>
<tr>
<td>Width:</td>
<td>440</td>
</tr>
</table>
```
I expect the output:
```
Height: 200
Width: 440
```
I would prefer not using external tools, e.g. `w3m -dump file.html`, because they are (1) platform-dependent, (2) I want to have some control over the process and (3) I assume it is doable with Python alone with or without extra modules.
I don't need any word-wrapping or adjustable cell separator width. Having tabs as cell separators would be good enough.
### Update
This was an old question for an old use case. Given that [pandas provides the read\_html method](https://pandas.pydata.org/docs/reference/api/pandas.read_html.html), my current answer would definitely be [pandas-based](https://stackoverflow.com/a/31247247/191246). | How about using this:
[Parse HTML table to Python list?](https://stackoverflow.com/questions/6325216/parse-html-table-to-python-list)
But, use `collections.OrderedDict()` instead of a simple dictionary to preserve order. After you have a dictionary, it is very easy to get and format the text from it:
Using the solution of @Colt 45:
```
import xml.etree.ElementTree
import collections
s = """\
<table>
<tr>
<th>Height</th>
<th>Width</th>
<th>Depth</th>
</tr>
<tr>
<td>10</td>
<td>12</td>
<td>5</td>
</tr>
<tr>
<td>0</td>
<td>3</td>
<td>678</td>
</tr>
<tr>
<td>5</td>
<td>3</td>
<td>4</td>
</tr>
</table>
"""
table = xml.etree.ElementTree.XML(s)
rows = iter(table)
headers = [col.text for col in next(rows)]
for row in rows:
values = [col.text for col in row]
for key, value in collections.OrderedDict(zip(headers, values)).iteritems():
print key, value
```
Output:
```
Height 10
Width 12
Depth 5
Height 0
Width 3
Depth 678
Height 5
Width 3
Depth 4
``` | You should look at the standard library modules [ElementTree](http://docs.python.org/2/library/xml.etree.elementtree.html) and [minidom](http://docs.python.org/2/library/xml.dom.minidom.html#dom-example) | Python solution to convert HTML tables to readable plain text | [
"",
"python",
"html",
"html-table",
"html-parsing",
"plaintext",
""
] |
I have a relatively simple query here that is not returning the expected number of rows. I think the following query should work for my case, but I must be missing something essential to make sure rows are not shown multiple times!
```
SELECT p.*, c.value AS color, g.value AS gender, p.value AS price
FROM products p
LEFT JOIN product_fields c ON c.product_id = p.id AND c.name = 'color'
LEFT JOIN product_fields g ON g.product_id = p.id AND g.name = 'gender'
LEFT JOIN product_fields pp ON pp.product_id = p.id AND pp.name = 'price'
``` | It seems that you alias `products` as `p` and then also read the price as `p.value`, which comes from `products` rather than from the `product_fields` join. Try this:
```
SELECT p.*, c.value AS color, g.value AS gender, pr.value AS price
FROM products p
LEFT JOIN product_fields c ON c.product_id = p.id AND c.name = 'color'
LEFT JOIN product_fields g ON g.product_id = p.id AND g.name = 'gender'
LEFT JOIN product_fields pr ON pr.product_id = p.id AND pr.name = 'price'
```
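With distinct aliases, the pattern can be sanity-checked against SQLite from Python (toy data; note there is exactly one output row per product):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE products (id INTEGER, title TEXT);
CREATE TABLE product_fields (product_id INTEGER, name TEXT, value TEXT);
INSERT INTO products VALUES (1, 'Shirt');
INSERT INTO product_fields VALUES
    (1, 'color', 'red'), (1, 'gender', 'm'), (1, 'price', '9.99');
""")

rows = con.execute("""
    SELECT p.id, c.value AS color, g.value AS gender, pr.value AS price
    FROM products p
    LEFT JOIN product_fields c  ON c.product_id  = p.id AND c.name  = 'color'
    LEFT JOIN product_fields g  ON g.product_id  = p.id AND g.name  = 'gender'
    LEFT JOIN product_fields pr ON pr.product_id = p.id AND pr.name = 'price'
""").fetchall()
print(rows)  # [(1, 'red', 'm', '9.99')]
```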
Note that you use `LEFT JOIN`, which allows rows with NULL matches. Maybe all you need to do is change to `INNER JOIN`?
```
SELECT p.*, c.value AS color, g.value AS gender, pr.value AS price
FROM products p
INNER JOIN product_fields c ON c.product_id = p.id AND c.name = 'color'
INNER JOIN product_fields g ON g.product_id = p.id AND g.name = 'gender'
INNER JOIN product_fields pr ON pr.product_id = p.id AND pr.name = 'price'
``` | A LEFT JOIN includes all records from the first table and all matching records from the second table, assuming the query does not exclude null values from the second table.
But an INNER JOIN excludes from the results set any records that do not exist in both tables.
So this might help you:
```
SELECT p.*, c.value AS color, g.value AS gender, pp.value AS price
FROM products p
INNER JOIN product_fields c ON c.product_id = p.id
INNER JOIN product_fields g ON g.product_id = p.id
INNER JOIN product_fields pp ON pp.product_id = p.id
WHERE c.name = 'color' AND g.name = 'gender' AND pp.name = 'price'
```
For differences between types of join [MySQL Joins](http://rakesh.sankar-b.com/2010/12/25/mysql-join-cross-inner-outer-left-right-full/) | LEFT JOINS create to much rows in MySQL | [
"",
"mysql",
"sql",
""
] |
How can I convert this query to a stored procedure that returns also the same(fields) table:
```
SELECT R.REGION_NAME,
TP.TERRITORY_NAME,
LSC.LOOKUP_NAME,
CAST (WM_CONCAT (CM.CUST_NAME) AS VARCHAR2 (500)) AS CUST_NAMES,
TP.VIEWER_PROFILE,
TP.CASTING_PREFERENCE,
TP.PROG_TYPE_PREFERENCE,
STN_LIST.STN_NAMES,
STN_LIST.LIC_RIGHTS
FROM TERRITORY_PROFILE_MSTR TP
INNER JOIN
REGION_MSTR R
ON TP.REGION_ID = R.REGION_ID
LEFT OUTER JOIN
LOOKUP_SUB_CATEGORY LSC
ON TP.BROADCAST_STD_CD = LSC.LOOKUP_SUB_CAT_ID
LEFT OUTER JOIN
CUST_MSTR CM
ON CM.TERRITORY_ID = TP.TERRITORY_ID
LEFT OUTER JOIN
(SELECT CAST (WM_CONCAT (SLR.CUST_STN_NAME) AS VARCHAR2 (500))
AS STN_NAMES,
CAST (WM_CONCAT (LRM.LIC_RIGHTS_ID) AS VARCHAR2 (500))
AS LIC_RIGHTS,
SLR.CUST_COMPETITOR_ID AS CUST_ID
FROM STN_LIC_RIGHTS SLR, LIC_RIGHTS_MSTR LRM
WHERE SLR.STN_LIC_ID = LRM.LIC_RIGHTS_ID
GROUP BY SLR.CUST_COMPETITOR_ID) STN_LIST
ON STN_LIST.CUST_ID = CM.CUST_ID
WHERE TP.TERRITORY_ID <> 0
GROUP BY TP.TERRITORY_NAME,
R.REGION_NAME,
LSC.LOOKUP_NAME,
STN_LIST.STN_NAMES,
STN_LIST.LIC_RIGHTS,
TP.VIEWER_PROFILE,
TP.CASTING_PREFERENCE,
TP.PROG_TYPE_PREFERENCE
ORDER BY R.REGION_NAME, TP.TERRITORY_NAME
```
I think I don't need to specify the table structure. Please take note that the query is fully functional and the values are correctly return the way I expect it to return.
Thank you.
**UPDATE:**
**The stored procedure must be compatible and executable in ASP.NET(C#) through SQL Server Reporting Services or SSRS.** | Nobody mentioned an alternative to my question, such as the use of Oracle `VIEWS`.
Instead of creating a stored procedure for my SSRS reports, I can use `VIEWS`. Although they don't take any parameters, this answers my question.
For future reference here's the code:
```
CREATE VIEW RV_TERRITORY_PROFILE
AS
SELECT R.REGION_NAME,
TP.TERRITORY_NAME,
LSC.LOOKUP_NAME,
CAST (WM_CONCAT (CM.CUST_NAME) AS VARCHAR2 (500)) AS CUST_NAMES,
TP.VIEWER_PROFILE,
TP.CASTING_PREFERENCE,
TP.PROG_TYPE_PREFERENCE,
STN_LIST.STN_NAMES,
STN_LIST.LIC_RIGHTS
FROM TERRITORY_PROFILE_MSTR TP
INNER JOIN
REGION_MSTR R
ON TP.REGION_ID = R.REGION_ID
LEFT OUTER JOIN
LOOKUP_SUB_CATEGORY LSC
ON TP.BROADCAST_STD_CD = LSC.LOOKUP_SUB_CAT_ID
LEFT OUTER JOIN
CUST_MSTR CM
ON CM.TERRITORY_ID = TP.TERRITORY_ID
LEFT OUTER JOIN
(SELECT CAST (WM_CONCAT (SLR.CUST_STN_NAME) AS VARCHAR2 (500))
AS STN_NAMES,
CAST (WM_CONCAT (LRM.LIC_RIGHTS_ID) AS VARCHAR2 (500))
AS LIC_RIGHTS,
SLR.CUST_COMPETITOR_ID AS CUST_ID
FROM STN_LIC_RIGHTS SLR, LIC_RIGHTS_MSTR LRM
WHERE SLR.STN_LIC_ID = LRM.LIC_RIGHTS_ID
GROUP BY SLR.CUST_COMPETITOR_ID) STN_LIST
ON STN_LIST.CUST_ID = CM.CUST_ID
WHERE TP.TERRITORY_ID <> 0
GROUP BY TP.TERRITORY_NAME,
R.REGION_NAME,
LSC.LOOKUP_NAME,
STN_LIST.STN_NAMES,
STN_LIST.LIC_RIGHTS,
TP.VIEWER_PROFILE,
TP.CASTING_PREFERENCE,
TP.PROG_TYPE_PREFERENCE
ORDER BY R.REGION_NAME, TP.TERRITORY_NAME
```
then you can run it like
```
SELECT * FROM RV_TERRITORY_PROFILE
``` | There are a few ways to do what you're asking:
1. By using a function / procedure which returns a sys\_refcursor as described [here](https://stackoverflow.com/questions/1170548/get-resultset-from-oracle-stored-procedure)
2. By using a "parameterized view" as described [here](https://stackoverflow.com/questions/9024696/creating-parameterized-views-in-oracle11g)
3. By using table functions as described [here](https://stackoverflow.com/questions/12338573/returning-a-table-from-an-oracle-function)
and probably more... | Convert oracle query to stored procedure for SQL Server Reporting Services (SSRS) | [
"",
"sql",
"oracle",
"stored-procedures",
"reporting-services",
"oracle11g",
""
] |
So, I have this query
```
SELECT
NOTES,
DATECREATED,
DATEMODIFIED,
*
FROM myTable
WHERE
USERCREATEDBY = 465
AND NOTES LIKE ' :%'
ORDER BY DATECREATED
```
and it throws this error `Ambiguous column name 'DATECREATED'`.
I see nothing wrong with my query and I found a fix for this in the form of adding an alias to column `DATECREATED`, like `DATECREATED a`, and ordering by alias, `ORDER BY a`.
I do not understand why this happens and I am very curious to know why. | Well, if your table contains this column `DATECREATED`, then you're selecting this column **twice** with the above `SELECT` - once explicitly, once implicitly by using the `*`.
So your result set now contains **two columns** called `DATECREATED` - which one do you want `ORDER BY` to be applied to?
Either explicitly specify the complete column list that you need, or at least provide an alias for the `DATECREATED` column:
```
SELECT
NOTES,
SecondDateCreated = DATECREATED,
DATEMODIFIED,
*
FROM myTable
WHERE
USERCREATEDBY = 465
AND NOTES LIKE ' :%'
ORDER BY DATECREATED
``` | It's probably because you're selecting DATECREATED twice, once as a column and once in \*
```
SELECT *
FROM myTable
WHERE
USERCREATEDBY = 465
AND NOTES LIKE ' :%'
ORDER BY DATECREATED
```
Should work fine | ORDER BY works only with column alias | [
"",
"sql",
"sql-server-2008",
"sql-order-by",
""
] |
I'm using a Django Celery task to connect to the Facebook Graph API with the requests lib using Gevent. The issue I'm constantly running into is that every now and then I get an EOF occurred in violation of protocol exception. I've searched around and various sources offer different fixes, but none seems to work.
I've tried monkey patching the ssl module(gevent.monkey.patch\_all()) and some others too but no luck.
I'm not even sure if this is an OpenSSL issue, as some sources suggest, since I hadn't encountered it before applying the Gevent optimisation.
```
Connection error: [Errno 8] _ssl.c:504: EOF occurred in violation of protocol
Traceback (most recent call last):
File "/home/user/workspace/startup/project/events/tasks.py", line 52, in _process_page
data = requests.get(current_url)
File "/home/user/workspace/startup/env/local/lib/python2.7/site-packages/requests/api.py", line 55, in get
return request('get', url, **kwargs)
File "/home/user/workspace/startup/env/local/lib/python2.7/site-packages/requests/api.py", line 44, in request
return session.request(method=method, url=url, **kwargs)
File "/home/user/workspace/startup/env/local/lib/python2.7/site-packages/requests/sessions.py", line 354, in request
resp = self.send(prep, **send_kwargs)
File "/home/user/workspace/startup/env/local/lib/python2.7/site-packages/requests/sessions.py", line 460, in send
r = adapter.send(request, **kwargs)
File "/home/user/workspace/startup/env/local/lib/python2.7/site-packages/requests/adapters.py", line 250, in send
raise SSLError(e)
SSLError: [Errno 8] _ssl.c:504: EOF occurred in violation of protocol
```
I'm using latest 1.0rc Gevent version.
Another issue that keeps popping up from time to time, although the URL is correct, is:
Retrying (5 attempts remain) after connection broken by 'error(2, 'No such file or directory')': /**ID**/events?limit=5000&fields=description,name,location,start\_time,end\_time&access\_token=**TOKEN** | Using the forced [TLSv1 fix](https://stackoverflow.com/questions/14102416/python-requests-requests-exceptions-sslerror-errno-8-ssl-c504-eof-occurred) as suggested by J.F Sebastian fixed all the issues I was facing.
Hints for future questions regarding:
* DNSError exception - upgrading Gevent from 0.13.X to 1.0rc fixes this issue
* SSL issues - look at fix in link provided by J.F Sebastian | I installed the latest Python 2.7 (2.7.11) and the problem went away. I believe the problem might even be solved back in 2.7.6 (I was using 2.7.5 on Mac OSX). | Python SSL connection "EOF occurred in violation of protocol" | [
"",
"python",
"django",
"celery",
"python-requests",
"gevent",
""
] |
Below is the code I am working on. From what I can tell there is no issue, but when I attempt to run the piece of code I receive an error.
```
import os
import datetime
def parseOptions():
import optparse
parser = optparse.OptionParser(usage= '-h')
parser.add_option('-t', '--type', \
choices= ('Warning', 'Error', 'Information', 'All'), \
help= 'The type of error',
default= 'Warning')
parser.add_option('-g', '--goback', \
type= 'string')
(options, args) = parser.parse_args()
return options
options = parseOptions()
now = datetime.datetime.now()
subtract = timedelta(hours=options.goback)
difference = now - subtract
if options.type=='All' and options.goback==24:
os.startfile('logfile.htm')
else:
print
print 'Type =', options.type,
print
print 'Go Back =', options.goback,'hours'
print difference.strftime("%H:%M:%S %a, %B %d %Y")
print
```
Error is as followed:
```
Traceback (most recent call last):
File "C:\Python27\Lib\SITE-P~1\PYTHON~2\pywin\framework\scriptutils.py", line 325, in RunScript
exec codeObject in __main__.__dict__
File "C:\Users\user\Desktop\Python\python.py", line 19, in <module>
subtract = timedelta(hours=options.goback)
NameError: name 'timedelta' is not defined
```
Any help would be appreciated. | You've imported datetime, but not defined timedelta. You want either:
```
from datetime import timedelta
```
or:
```
subtract = datetime.timedelta(hours=options.goback)
```
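A quick check that the qualified name works and that the subtraction behaves as expected:

```python
import datetime

delta = datetime.timedelta(hours=3)
earlier = datetime.datetime(2013, 1, 2, 12, 0, 0) - delta
print(earlier)  # 2013-01-02 09:00:00
```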
Also, your goback parameter is defined as a string, but then you pass it to timedelta as the number of hours. You'll need to convert it to an integer, or better still set the `type` argument in your option to `int` instead. | It should be `datetime.timedelta` | Timedelta is not defined | [
"",
"python",
"python-2.7",
"nameerror",
""
] |
I'm walking through basic tutorials for matplotlib, and the example code that I'm working on is:
```
import numpy as np
import matplotlib.pylab as plt
x=[1,2,3,4]
y=[5,6,7,8]
line, = plt.plot(x,y,'-')
plt.show()
```
Does anyone know what the comma after line (`line,=plt.plot(x,y,'-')`) means?
I thought it was a typo but obviously the whole code doesn't work if I omit the comma. | `plt.plot` returns a list of the `Line2D` objects plotted, even if you plot only one line.
That comma is unpacking the single value into `line`.
For example:
```
a, b = [1, 2]
a, = [1, ]
``` | The `plot` method returns objects that contain information about each line in the plot as a list. In python, you can expand the elements of a list with a comma. For example, if you plotted two lines, you would do:
```
line1, line2 = plt.plot(x,y,'-',x,z,':')
```
Where `line1` would correspond to `x,y`, and line2 corresponds to `x,z`. In your example, there is only one line, so you need the comma to tell it to expand a 1-element list. What you have is equivalent to
```
line = plt.plot(x,y,'-')[0]
```
or
```
line = plt.plot(x,y,'-')
line = line[0]
```
Your code should work if you omit the comma, only because you are not using `line`. In your simple example `plt.plot(x,y,'-')` would be enough. | matplotlib 2d line line,=plot comma meaning | [
"",
"python",
"matplotlib",
"tuples",
"line",
""
] |
I was wondering in what order a dictionary in Python stores key : value pairs. I wrote the following in my Python shell but I can't figure out the reason for the order in which it stores the key : value pairs.
```
>>> d = {}
>>> d['a'] = 8
>>> d['b'] = 8
>>> d
{'a': 8, 'b': 8}
>>> d['c'] = 8
>>> d
{'a': 8, 'c': 8, 'b': 8}
>>> d['z'] = 8
>>> d
{'a': 8, 'c': 8, 'b': 8, 'z': 8}
>>> d['w'] = 8
>>> d
{'a': 8, 'c': 8, 'b': 8, 'z': 8, 'w': 8}
```
I also tried the same thing with different values for the same keys. But the order remained the same. Adding one more key : value pair gives another result that I just can't make out. Here it is:
```
>>> d[1] = 8
>>> d
{'a': 8, 1: 8, 'c': 8, 'b': 8, 'w': 8, 'z': 8}
``` | The short answer is: in an implementation-defined order. You can't rely on, and shouldn't expect, any particular order, and it can change after modifying the dictionary in a supposedly-irrelevant manner.
Although not directly, it's somehow explained in [Dictionary view objects](http://docs.python.org/3/library/stdtypes.html#dictionary-view-objects):
> Keys and values are iterated over in an arbitrary order which is non-random, varies across Python implementations, and depends on the dictionary’s history of insertions and deletions. If keys, values and items views are iterated over with no intervening modifications to the dictionary, the order of items will directly correspond. | Elements are stored based on the hash of their key. The documentation states that a [key must be a hashable type](http://docs.python.org/2/library/stdtypes.html#typesmapping). | In what order does a dictionary in python store data? | [
"",
"python",
""
] |
I have a simple query in PostgreSQL which I need to convert to SQLAlchemy.
```
SELECT table1.name,table2.ip,table2.info
FROM table1 LEFT OUTER JOIN table2
ON(table1.name=table2.hostname)
GROUP BY table1.name;
```
I have tried using this with no success:
```
session.query(table1.name
,table2.ip
,table2.info).distinct().join(table2).group_by(table1.name)
```
Can someone please help me with this?
Thanks in advance
Ishwar | I figured out the answer. I was actually missing the ON condition in the query:
```
session.query(table1.name
,table2.ip
,table2.info
).distinct().outerjoin(table2, table1.name == table2.hostname
).group_by(table1.name)
```
Thanks for help! | Here's how you would do it using the Expression API
<http://docs.sqlalchemy.org/en/latest/core/expression_api.html>
1. Define your table, for example:
```
from sqlalchemy.sql import table, column, select
from sqlalchemy import String

table1 = table('table1',
column('name', String())
)
table2 = table('table2',
column('ip', String()),
column('info', String()),
column('hostname', String()),
)
```
2. Compose your query, like:
```
my_query = select([table1.c.name,
table2.c.ip,
table2.c.info]
).select_from(
table1.outerjoin(table2, table1.c.name == table2.c.hostname)
).group_by(table1.c.name)
```
3. Execute your query, like:
```
db.execute(my_query)
``` | Convert PostgreSQL to SqlAlchemy query | [
"",
"sql",
"postgresql",
"sqlalchemy",
"flask-sqlalchemy",
""
] |
I am practicing my python coding on this [website](http://codingbat.com/prob/p164876). This is the problem
```
Return True if the string "cat" and "dog" appear
the same number of times in the given string.
cat_dog('catdog') → True
cat_dog('catcat') → False
cat_dog('1cat1cadodog') → True
```
This is my code; for some unknown reason, I don't pass all the test cases. I have problems debugging it.
```
def cat_dog(str):
length=len(str)-2
i=0
catcount=0
dogcount=0
for i in range (0,length):
animal=str[i:i+2]
if ("cat" in animal):
catcount=catcount+1
if ("dog" in animal):
dogcount=dogcount+1
if (dogcount==catcount):
return True
else:
return False
``` | You don't need to create a function; just one line is enough, like:
`return s.count('cat') == s.count('dog')` | An alternative without a loop:
```
def cat_dog(str):
    total_len = len(str)
    cat = str.replace("cat", "")
    dog = str.replace("dog", "")
    if len(cat) == len(dog):
        if len(cat) < len(str):
            if len(dog) < len(str):
                return True
        if len(cat) == len(str) and len(dog) == len(str):
            return True
    else:
        return False
``` | Check how many times a string appears in a particular word | [
"",
"python",
"debugging",
""
] |
Is there something I should modify concerning the following query (because it takes a long time, about `15 seconds`, just to bring back about `400` records)?
---
```
select unique a.regnum , a.name
from re23mmf a , rj82std b , gg7crs c , rr099stdusr j , hkcourse h , aalt:mms3f x , aalt:rhcasestudy y
where a.regnum = b.regnum and a.regnum = c.regnum
and b.term_no = ( select max(d.term_no) from rj82std d where d.regnum = a.regnum )
and b.dep_code = j.dep_code and b.study_code = j.study_code
and j.start_date <=today and j.end_date >=today
and j.emp_num =4324
and c.crsnum = h.crsnum
and h.is_project = 'j'
and a.regnum = x.regnum and x.regserial = y.regserial
and y.batch_no = ( select max(z.batch_no) from rhcasestudy z where z.regserial = y.regserial )
and y.case_code <> 5
```
---
And please, what should I take care about when writing a query like this, concerning performance issues? | Using JOIN notation, you can write:
```
SELECT UNIQUE a.regnum, a.name
FROM re23mmf a
JOIN rj82std b ON a.regnum = b.regnum
JOIN gg7crs c ON a.regnum = c.regnum
JOIN rr099stdusr j ON b.dep_code = j.dep_code
AND b.study_code = j.study_code
JOIN hkcourse h ON c.crsnum = h.crsnum
JOIN aalt:mms3f x ON a.regnum = x.regnum
JOIN aalt:rhcasestudy y ON x.regserial = y.regserial
WHERE b.term_no = (SELECT MAX(d.term_no) FROM rj82std d
WHERE d.regnum = a.regnum)
AND j.start_date <= TODAY AND j.end_date >= TODAY
AND j.emp_num = 4324
AND h.is_project = 'j'
AND y.batch_no = (SELECT MAX(z.batch_no) FROM rhcasestudy z
WHERE z.regserial = y.regserial)
AND y.case_code <> 5
```
Unless I goofed during transcription and editing, this is effectively the same as the original query. I've left the single-table filter conditions in the WHERE clause; the optimizer should push them down to the relevant tables so as to minimize the workload.
As already noted, the key is to study the output from SET EXPLAIN and ensure there are no missing indexes. It's also a good idea to make sure that statistics are up to date, perhaps using AUS (Auto-Update Statistics).
It would help to identify the cardinalities of the tables. Generating 400 rows in 15 seconds might be ghastly performance (there are at most 400 rows in any table), or it might be stellar performance (the biggest table has 4 billion rows, each of 2 KiB). Without sizing information, it is hard to know which is more probable.
```
a ----> b ----> j
| |
| +-----> d
|
+-----> c ----> h
|
+-----> x ----> y ----> z
```
The diagram shows the joining structures. Alias `a` joins to `b`, `c`, and `x` independently; `b` joins to `j` and to (a sub-query on) `d`; `c` joins with `h`; `x` joins with `y`; and `y` joins to (a sub-query on) `z`.
Note that the sub-queries are both correlated sub-queries; those are less efficient, in general. Maybe you need to use sub-queries like:
```
SELECT d.regnum, MAX(d.term_no) AS max_term_no
FROM rj82std AS d
GROUP BY d.regnum
```
in the FROM clause:
```
SELECT UNIQUE a.regnum, a.name
FROM re23mmf a
JOIN rj82std b ON a.regnum = b.regnum
JOIN gg7crs c ON a.regnum = c.regnum
JOIN rr099stdusr j ON b.dep_code = j.dep_code
AND b.study_code = j.study_code
JOIN hkcourse h ON c.crsnum = h.crsnum
JOIN aalt:mms3f x ON a.regnum = x.regnum
JOIN aalt:rhcasestudy y ON x.regserial = y.regserial
JOIN (SELECT d.regnum, MAX(d.term_no) AS max_term_no
FROM rj82std AS d
GROUP BY d.regnum
) AS d1 ON b.regnum = d1.regnum AND b.term_no = d1.max_term_no
JOIN (SELECT z.regserial, MAX(z.batch_no) AS max_batch_no
FROM rhcasestudy z
GROUP BY z.regserial
) AS z1 ON y.regserial = z1.regserial
AND y.batch_no = z1.max_batch_no
WHERE j.start_date <= TODAY AND j.end_date >= TODAY
AND j.emp_num = 4324
AND h.is_project = 'j'
AND y.case_code <> 5
```
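As a toy illustration of the derived-table pattern, run against SQLite from Python (the table name is borrowed from the question; the data is made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE rj82std (regnum INTEGER, term_no INTEGER);
INSERT INTO rj82std VALUES (1, 1), (1, 2), (2, 5);
""")

# Derived table computing MAX(term_no) per regnum, joined back once
rows = con.execute("""
    SELECT b.regnum, b.term_no
    FROM rj82std b
    JOIN (SELECT regnum, MAX(term_no) AS max_term_no
          FROM rj82std GROUP BY regnum) d1
      ON b.regnum = d1.regnum AND b.term_no = d1.max_term_no
    ORDER BY b.regnum
""").fetchall()
print(rows)  # [(1, 2), (2, 5)]
```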
These sub-queries only need to be evaluated once, not for each row as in a correlated sub-query. This isn't a guaranteed win, but it often helps a lot. | First check indexes. Do you have clustered indexes or non-clustered indexes on the tables? You can create a non-clustered index with the columns in your query to improve performance. | How enhance the performance of query among several tables? | [
"",
"sql",
"performance",
"informix",
""
] |
So I have python 2.7.3 installed on Windows 7 64 bit and I want to do an incremental upgrade to version 2.7.5. I have pip installed and it works fine; I just installed Django using it.
I ran into this command:
pip install --upgrade 'python>=2.7,<2.7.99'
Now it forces pip to download the latest version that is not Python 3 which is what I want.
2.7.5 starts downloading and I get the following error:
```
Downloading/unpacking python>=2.7,<2.7.99
Downloading Python-2.7.5.tar.bz2 (12.1MB): 12.1MB downloaded
Running setup.py egg_info for package python
Traceback (most recent call last):
File "<string>", line 16, in <module>
File "c:\users\name\appdata\local\temp\pip-build-name\python\setup.py", line 33, in <module>
COMPILED_WITH_PYDEBUG = ('--with-pydebug' in sysconfig.get_config_var("CONFIG_ARGS"))
TypeError: argument of type 'NoneType' is not iterable
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 16, in <module>
File "c:\users\name\appdata\local\temp\pip-build-name\python\setup.py", line 33, in <module>
COMPILED_WITH_PYDEBUG = ('--with-pydebug' in sysconfig.get_config_var("CONFIG_ARGS"))
TypeError: argument of type 'NoneType' is not iterable
----------------------------------------
Command python setup.py egg_info failed with error code 1 in c:\users\name\appdata\local\temp\pip-build-name\python
```
Also, I am new to pip. When I cancel a download, is that safe? I typed "pip install python" and it started downloading Python version 3, so I cancelled. That won't override my main Python 2.7.3 install, will it?
Curious. | `pip` is designed to upgrade python packages and not to upgrade python itself. `pip` shouldn't try to upgrade python when you ask it to do so.
Don't type `pip install python`; use an installer instead. | Basically, pip comes with Python itself, so it makes no sense to use pip to install or upgrade Python.
Thus, install Python through the installer itself; visit <https://www.python.org/downloads/> for more help.
Thank you. | upgrade python version using pip | [
"",
"python",
"upgrade",
"pip",
""
] |
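As a side note on the accepted point: the interpreter version is a property of the installation, not of any pip-managed package, so the most a script can do is inspect it. A small sketch (the `require_python` helper is invented for illustration):

```python
import sys

# sys.version_info is a named tuple: (major, minor, micro, releaselevel, serial).
# A script can verify it runs on the interpreter it expects, but pip cannot
# change that -- upgrading the interpreter itself requires a new installer.
def require_python(minimum):
    """Return True if the running interpreter is at least `minimum`."""
    return sys.version_info[:len(minimum)] >= tuple(minimum)

print(require_python((2, 7)))   # True on any interpreter >= 2.7
```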
I have a collection of binary files whose names look like this:
```
d010-recomb.bin
d011-recomb.bin
.............
.............
.............
d100-recomb.bin
```
Using the Python glob module, I can access all the files in a folder and do further processing with them:
```
import glob
binary = sorted(glob.glob('C:/Users/Desktop/bin/*.bin'))
```
I can also use some criteria for the files that I want to access:
For example, if I use the following code, I will gain access to all the files from d010-recomb.bin to d019-recomb.bin:
```
binary = sorted(glob.glob('C:/Users/Desktop/bin/d01*.bin'))
```
But using this kind of criterion I can't get access to a range of files such as d015 to d025.
**Please** tell me what I can do to gain access to these files. | Filter the list afterwards; either turn the filename portion to an `int` or create a range of strings that are to be included:
```
included = {'d{:03d}'.format(i) for i in range(15, 26)} # a set
binary = sorted(f for f in glob.glob('C:/Users/Desktop/bin/*.bin') if f[21:25] in included)
```
The above code generates the strings `'d015'` through to `'d025'` as a set of strings for fast membership testing, then tests the first 4 characters of each file against that set; because `glob()` returns whole filenames we slice off the path for that to work.
For variable paths, I'd store the slice offset, for speed, based on the path:
```
pattern = 'C:/Users/Desktop/bin/*.bin'
included = {'d{:03d}'.format(i) for i in range(15, 26)} # a set
offset = len(os.path.dirname(pattern)) + 1
binary = sorted(f for f in glob.glob(pattern) if f[offset:offset + 4] in included)
```
Demo of the latter:
```
$ mkdir test
$ touch test/d014-recomb.bin
$ touch test/d015-recomb.bin
$ touch test/d017-recomb.bin
$ touch test/d018-recomb.bin
$ fg
bin/python2.7
>>> import os, glob
>>> pattern = '/tmp/stackoverflow/test/*.bin'
>>> included = {'d{:03d}'.format(i) for i in range(15, 26)} # a set
>>> offset = len(os.path.dirname(pattern)) + 1
>>> sorted(f for f in glob.glob(pattern) if f[offset:offset + 4] in included)
['/tmp/stackoverflow/test/d015-recomb.bin', '/tmp/stackoverflow/test/d017-recomb.bin', '/tmp/stackoverflow/test/d018-recomb.bin']
You can either filter the list, using:
```
import os

def filter_path(path, l, r):
    i = int(os.path.basename(path)[1:4])
    return l <= i <= r

result = [f for f in binary if filter_path(f, 15, 25)]
```
If you are 100% confident about number of elements in directory, you can:
```
result = binary[19:30]
```
Or once you have data sorted, you may find the *first index* and the *last index* and [[1]](https://stackoverflow.com/questions/176918/in-python-how-do-i-find-the-index-of-an-item-given-a-list-containing-it)[[2]](http://docs.python.org/3.2/tutorial/datastructures.html):
```
l = binary.index('C:/Users/Desktop/bin/d015-recomb.bin')
r = binary.index('C:/Users/Desktop/bin/d025-recomb.bin')
result = binary[l:r+1]
``` | python access selective files with glob module | [
"",
"python",
""
] |
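A regex-based variant of the same range filter is also worth sketching; unlike the slice-offset approach in the accepted answer, it does not depend on the length of the directory prefix (the `in_range` helper and the sample filenames are invented for the demo):

```python
import re

# Filenames follow the d<NNN>-recomb.bin pattern from the question.
files = ['d014-recomb.bin', 'd015-recomb.bin', 'd020-recomb.bin',
         'd025-recomb.bin', 'd026-recomb.bin']

pattern = re.compile(r'd(\d{3})-recomb\.bin$')

def in_range(name, lo, hi):
    """Keep a filename whose numeric part falls in [lo, hi] (inclusive)."""
    m = pattern.search(name)
    return bool(m) and lo <= int(m.group(1)) <= hi

selected = sorted(f for f in files if in_range(f, 15, 25))
print(selected)  # ['d015-recomb.bin', 'd020-recomb.bin', 'd025-recomb.bin']
```

The anchored `$` means the pattern also matches full paths returned by `glob()`, since only the tail of the string is inspected.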
I have the following Model:
```
class Car(models.Model):
id = models.AutoField(primary_key=True)
slug = models.CharField(max_length=128L, unique=True, blank=True)
name = models.CharField(max_length=128L, blank=True)
model_slug = models.CharField(max_length=128L, unique=True, blank=True)
...
```
Whenever I'm getting a list of `Car` objects from an XML file, some of them show up multiple times
(with the same slug). If I try to save one of those, I get
`IntegrityError(1062, "Duplicate entry 'car-slug' for key 'slug'")`.
I want it to load the existing Car (or update it, whichever is easier) in case there's a duplicate error.
But I want it to be generic enough so that it would work for model\_slug (or any unique field that generates that error). | I ended up writing a custom save on the model (so that my code doesn't change, just the model):
```
def save(self, *args, **kwargs):
    # Requires: from django.db import IntegrityError
    try:
        super(Car, self).save(*args, **kwargs)
    except IntegrityError:
        # A row with this slug already exists; save against it instead.
        existing_object = Car.objects.get(slug=self.slug)
        self = existing_object
        super(Car, self).save(*args, **kwargs)
        return existing_object.id
```
Now I return the object's ID so I can assign it; the save() call looks like this:
```
the_id = generic_object.save() #e.g. Car
if the_id:
generic_object.id = the_id
``` | You need [`get_or_create`](https://docs.djangoproject.com/en/dev/ref/models/querysets/#get-or-create):
```
car,created = Car.objects.get_or_create(slug='car-slug')
if created:
print 'New car was created'
car.slug = 'new-car-slug'
else:
# do whatever needs to be done here
# with the existing car object, which will
# be car
# car.name = 'new name'
car.save()
```
Whatever arguments you provide to `get_or_create`, it will use those to search existing records for the model.
---
Suppose you don't know what combination of fields will trigger a duplicate. The easy way is to find out which fields in your model have that restriction (ie `unique=True`). You can introspect this information from your model, or a simpler way is to simply pass these fields to `get_or_create`.
First step is to create a mapping between your XML fields and your model's fields:
```
xml_lookup = {}
xml_lookup = {'CAR-SLUG': 'slug'} # etc. etc.
```
You can populate this will all fields if you want, but my suggestion is to populate it only with those fields that have a unique constraint on them.
Next, while you are parsing your XML, populate a dictionary for each record, mapping each field:
```
for row in xml_file:
lookup_dict = {}
lookup_dict[xml_lookup[row['tag']]] = row['value']  # Or something similar
car, created = Car.objects.get_or_create(**lookup_dict)
if created:
# Nothing matched, a new record was created
# Add any logic you need here
else:
# Existing record matched the search parameters
# Change/update whatever field to prevent the IntegrityError
car.model_slug = row['MODEL_SLUG']
# Set/update fields as required
car.save() # Save the modified object
``` | Django load object in case Unique key is Duplicate | [
"",
"python",
"django",
"django-models",
""
] |
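Outside Django, the same "insert, and fall back to the existing row on a unique clash" idea maps onto a plain-SQL upsert. A minimal runnable sqlite3 analogue (the `car` table and `save_car` helper are invented for illustration; this is not Django API):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE car (id INTEGER PRIMARY KEY, slug TEXT UNIQUE, name TEXT)")

def save_car(slug, name):
    """Insert a car; on a duplicate slug, return the existing row's id instead."""
    try:
        cur = conn.execute("INSERT INTO car (slug, name) VALUES (?, ?)", (slug, name))
        return cur.lastrowid
    except sqlite3.IntegrityError:
        # Duplicate slug: load the existing row, mirroring the custom save() above.
        return conn.execute("SELECT id FROM car WHERE slug = ?", (slug,)).fetchone()[0]

first = save_car("car-slug", "Ford")
second = save_car("car-slug", "Ford again")  # duplicate -> existing id
print(first == second)  # -> True
```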