| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I execute the code below:
```
use AdventureWorks2008R2
begin transaction
BEGIN
alter table HumanResources.Department add newcolumn int
update HumanResources.Department set newcolumn=1 where departmentid=1
END
commit
```
The error I get is:
> Invalid column name 'newcolumn'.
Can `ALTER` statements be included in Transactions like this? If so, how can I prevent this error?
I have researched this online [e.g. here](http://social.msdn.microsoft.com/Forums/sqlserver/en-US/6f74a432-74df-41d8-aca4-0fd16a07d156/alter-table-statements-in-a-transaction). I have not found an answer to my specific question. | Yes, you can include an `ALTER` in a transaction. The problem is that the parser validates the syntax for your `UPDATE` statement, and can't "see" that you are also performing an `ALTER`. One workaround is to use dynamic SQL, so that the parser doesn't inspect your syntax (and validate column names) until runtime, where the `ALTER` will have already happened:
```
BEGIN TRANSACTION;
ALTER TABLE HumanResources.Department ADD newcolumn INT;
EXEC sp_executesql N'UPDATE HumanResources.Department
SET newcolumn = 1 WHERE DepartmentID = 1;';
COMMIT TRANSACTION;
```
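(Not SQL Server, but the compile-time/run-time distinction is easy to see elsewhere: SQLite, via Python's `sqlite3` module, prepares each statement only when it is executed, so the equivalent `ALTER` + `UPDATE` pair runs in one transaction without any dynamic SQL. A sketch under that assumption, with a hypothetical cut-down table:)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Department (DepartmentID INTEGER PRIMARY KEY)")
conn.execute("INSERT INTO Department (DepartmentID) VALUES (1)")

with conn:  # commits on success, rolls back on error
    # Each statement is prepared at execute time, so the UPDATE
    # already "sees" the column added by the ALTER.
    conn.execute("ALTER TABLE Department ADD COLUMN newcolumn INT")
    conn.execute("UPDATE Department SET newcolumn = 1 WHERE DepartmentID = 1")

print(conn.execute(
    "SELECT newcolumn FROM Department WHERE DepartmentID = 1").fetchone()[0])  # 1
```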
Note that indentation makes code blocks much more easily identifiable (and your `BEGIN/END` was superfluous). | Aaron has explained everything already. Another alternative that works for ad-hoc scripts in SSMS is to insert the batch separator `GO` so that the script is sent as two parts to the server. This only works if it is valid to split the script in the first place (you can't split an `IF` body for example). | Alter statement in a Transaction | [
"",
"sql",
"sql-server",
"t-sql",
"transactions",
""
] |
I have an XML file I am parsing, making some changes and saving out to a new file. It has the declaration `<?xml version="1.0" encoding="utf-8" standalone="yes"?>` which I would like to keep. When I am saving out my new file I am losing the `standalone="yes"` bit. How can I keep it in?
Here is my code:
```
templateXml = """<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<package>
<provider>Some Data</provider>
<studio_display_name>Some Other Data</studio_display_name>
</package>"""
from lxml import etree
tree = etree.fromstring(templateXml)
xmlFileOut = '/Users/User1/Desktop/Python/Done.xml'
with open(xmlFileOut, "w") as f:
f.write(etree.tostring(tree, pretty_print = True, xml_declaration = True, encoding='UTF-8'))
``` | You can pass the `standalone` keyword argument to `tostring()`:
```
etree.tostring(tree, pretty_print = True, xml_declaration = True, encoding='UTF-8', standalone=True)
``` | Specify `standalone` using [tree.docinfo.standalone](http://lxml.de/api/lxml.etree.DocInfo-class.html).
Try the following:
```
from lxml import etree
tree = etree.fromstring(templateXml).getroottree() # NOTE: .getroottree()
xmlFileOut = '/Users/User1/Desktop/Python/Done.xml'
with open(xmlFileOut, "w") as f:
f.write(etree.tostring(tree, pretty_print=True, xml_declaration=True,
encoding=tree.docinfo.encoding,
standalone=tree.docinfo.standalone))
``` | XML Declaration standalone="yes" lxml | [
"",
"python",
"xml",
"parsing",
"xml-parsing",
"lxml",
""
] |
Python bit me today. I'm trying to access an object's attribute inside its `__setattr__` implementation - I can't figure out how. This is what I've tried so far:
```
class Test1(object):
def __init__(self):
self.blub = 'hi1'
def __setattr__(self, name, value):
print self.blub
class Test2(object):
def __init__(self):
self.blub = 'hi2'
def __setattr__(self, name, value):
print object.__getattr__(self, 'blub')
class Test3(object):
def __init__(self):
self.blub = 'hi3'
def __setattr__(self, name, value):
print object.__getattribute__(self, 'blub')
class Test4(object):
def __init__(self):
self.blub = 'hi4'
def __setattr__(self, name, value):
print self.__getattr__('blub')
class Test5(object):
def __init__(self):
self.blub = 'hi5'
def __setattr__(self, name, value):
print self.__getattribute__('blub')
class Test6(object):
def __init__(self):
self.blub = 'hi6'
def __setattr__(self, name, value):
print self.__dict__['blub']
```
Testing:
```
try:
TestX().bip = 'bap'
except Exception as ex:
print ex
```
with `X` from `1` to `6`
Output:
```
'Test1' object has no attribute 'blub'
type object 'object' has no attribute '__getattr__'
'Test3' object has no attribute 'blub'
'Test4' object has no attribute '__getattr__'
'Test5' object has no attribute 'blub'
'blub'
```
Any suggestions? | OP, you haven't told us the whole story. You did not just run code like this:
```
TestX().bip = 'bap'
```
You ran code like this:
```
try:
TestX().bip = 'bap'
except Exception as ex:
print ex
```
There's a big difference. Why, you ask? Well, your output seems on first glance to indicate that `Test6` works, and several comments and answers assumed that it did. Why does it appear to work? Reading the code, there's *no way* it should work. A closer inspection of the source code reveals that if it *had* worked, it should have printed `hi6`, not `'blub'`.
I put a breakpoint at the `print ex` line in `pdb` to examine the exception:
```
(Pdb) ex
KeyError('blub',)
(Pdb) print ex
'blub'
```
For some reason `print ex` does not print `KeyError: blub` like you'd expect, but just `'blub'`, which was why `Test6` appeared to work.
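(The quirk is easy to reproduce; shown here in Python 3 syntax, where `str()` of the exception behaves the same way: a `KeyError` stringifies as the `repr` of the missing key, not as a `KeyError: ...` message.)

```python
# str() of a KeyError is the repr of the missing key, not "KeyError: blub".
err = KeyError('blub')
print(str(err))                 # 'blub'  (quotes included)

# Other exceptions stringify their message plainly for comparison.
print(str(IndexError('blub')))  # blub
```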
So we've cleared that up. In the future, please do not leave out code like this because it might be important.
All the other answers correctly point out that you have not set the attribute you're attempting to print, and that this is your problem. The answer you had accepted previously, before you accepted this answer instead, prescribed the following solution:
```
def __setattr__(self, name, value):
self.__dict__[name] = value
print self.__dict__[name]
```
While this solution does indeed work, it is not good design. This is because you might want to change the base class at some point, and that base class might have important side effects when setting and/or getting attributes, or it might not store the attributes in `self.__dict__` at all! It is better design to avoid messing around with `__dict__`.
The correct solution is to invoke the parent `__setattr__` method, and this was suggested by at least one other answer, though with the wrong syntax for python 2.x. Here's how I'd do it:
```
def __setattr__(self, name, value):
super(Test6, self).__setattr__(name, value)
print getattr(self, name)
```
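For Python 3 readers, the same pattern with bare `super()` and `print()` as a function looks like this (a sketch, since the question targets Python 2):

```python
class Test6:
    def __init__(self):
        self.blub = 'hi6'

    def __setattr__(self, name, value):
        # Delegate the actual storage to the parent class...
        super().__setattr__(name, value)
        # ...then look the attribute up through the normal machinery.
        print(getattr(self, name))

t = Test6()    # prints hi6
t.bip = 'bap'  # prints bap
```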
As you see I'm using `getattr` instead of `__dict__` to look up attributes dynamically. Unlike using `__dict__` directly, this will call `self.__getattr__` or `self.__getattribute__`, as appropriate. | Because inside the `__init__` it is trying to set `blub` which calls `__setattr__`; and it does not set anything but tries to access (and print) `blub`, finds nothing and raises the error. Check this:
```
>>> class Test2(object):
def __init__(self):
print "__init__ called"
self.blub = 'hi2'
print "blub was set"
def __setattr__(self, name, value):
print "__setattr__ called"
print self.blub
>>> Test2()
__init__ called
__setattr__ called
Traceback (most recent call last):
File "<pyshell#10>", line 1, in <module>
Test2()
File "<pyshell#9>", line 4, in __init__
self.blub = 'hi2'
File "<pyshell#9>", line 9, in __setattr__
print self.blub
AttributeError: 'Test2' object has no attribute 'blub'
>>>
``` | Accessing an object's attribute inside __setattr__ | [
"",
"python",
""
] |
I need to create SQL statements from Enterprise architect v.9 table description, since I need "CREATE TABLE" to be placed in a text file.
Please advise me where to look in the EA interface!
The difference is that in the single script all create table statements will be generated in only one file (which is usually better!) | After opening your database diagram,
1. Look at **Tools** tab and expand **Database Engineering** menu,
2. Select **Generate Package DDL**,
3. Enter output file name and generate it. | Need to export SQL for creating table(s) from Enterprise Architect | [
"",
"sql",
"enterprise-architect",
""
] |
Within my program, I audit incoming data, which can be of 4 types. If the data meets all required criteria, it gets stored with success in a table column, along with the message type and timestamp of when the row was entered into the table.
Data can also be written to the table with error, due to something like a connection issue occurring during auditing. The program will retry auditing this data and, if successful, will write a new row marked successful. So I now have 2 rows for that particular message of data, one with success and one with error, each with a different timestamp. (The success record has a more recent timestamp than the error record.)
A third status, rejected, has a record written if the incoming data doesn't meet the required standard, again with a create timestamp.
What I'd like to do is write a Sybase SQL query to pull back only the record with the highest timestamp for each received message.
So with the above error example, I don't want to return the error record, only the corresponding success record from when the process retried and succeeded.
I had thought of something like the following..
```
SELECT distinct(*)
FROM auditingTable
WHERE timestamp = (SELECT MAX(timestamp) from auditingTable)
```
though I'm aware this will only bring back 1 record, the one with the highest timestamp in the whole table.
How could I get back the most recent record for each message received, regardless of its status?
Any ideas welcome! | I want to note that a simple modification to your query allows you to do what you want (although I prefer the `row_number()` method in Valex's answer). That is to turn the subquery in the `where` clause to a correlated subquery:
```
SELECT *
FROM auditingTable at1
WHERE timestamp = (SELECT MAX(timestamp)
from auditingTable at2
where at1.MessageId = at2.MessageId
);
```
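If you want to sanity-check the shape of this query without a Sybase instance, here is a sketch against an in-memory SQLite database from Python, with hypothetical column names (`ts` standing in for your timestamp column):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE auditingTable (MessageId INT, status TEXT, ts INT)")
conn.executemany(
    "INSERT INTO auditingTable VALUES (?, ?, ?)",
    [(1, 'error', 10), (1, 'success', 20), (2, 'rejected', 15)],
)

# Correlated subquery: the inner MAX is computed per MessageId.
rows = conn.execute("""
    SELECT * FROM auditingTable at1
    WHERE ts = (SELECT MAX(ts) FROM auditingTable at2
                WHERE at1.MessageId = at2.MessageId)
    ORDER BY MessageId
""").fetchall()
print(rows)  # [(1, 'success', 20), (2, 'rejected', 15)]
```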
This is standard SQL and should work in any version of Sybase. | You haven't mentioned your Sybase version.
You can use [ROW\_NUMBER() function](http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc00800.1530/doc/html/san1276751057140.html)
For example your table has `MessageId`,`MessageTime` fields you can use following query:
```
SELECT * FROM
(
SELECT auditingTable.*,
ROW_NUMBER() OVER (PARTITION BY MessageID ORDER BY MessageTime DESC) as RN
FROM auditingTable
) as T
WHERE RN=1;
``` | SQL - Returning distinct row for data with Max created timestamp value | [
"",
"sql",
"database",
"sybase",
""
] |
I went through these threads:
* [Find unique rows in numpy.array](https://stackoverflow.com/questions/16970982/find-unique-rows-in-numpy-array)
* [Removing duplicates in each row of a numpy array](https://stackoverflow.com/questions/7438438/removing-duplicates-in-each-row-of-a-numpy-array)
* [Pandas: unique dataframe](https://stackoverflow.com/questions/12322779/pandas-unique-dataframe)
and they all discuss several methods for computing the matrix with unique rows and columns.
However, the solutions look a bit convoluted, at least to the untrained eye. Here is, for example, the top solution from the first thread, which (correct me if I am wrong) I believe is the safest and fastest:
```
np.unique(a.view(np.dtype((np.void, a.dtype.itemsize*a.shape[1])))).view(a.dtype).reshape(-1,
a.shape[1])
```
Either way, the above solution only returns the matrix of unique rows. What I am looking for is something along the lines of the original functionality of `np.unique`
```
u, indices = np.unique(a, return_inverse=True)
```
which returns not only the list of unique entries, but also the membership of each item among the unique entries found. But how can I do this for columns?
Here is an example of what I am looking for:
```
array([[0, 2, 0, 2, 2, 0, 2, 1, 1, 2],
[0, 1, 0, 1, 1, 1, 2, 2, 2, 2]])
```
We would have:
```
u = array([0,1,2,3,4])
indices = array([0,1,0,1,1,2,3,4,4,3])
```
Where the different values in `u` represent the set of unique columns in the original array:
```
0 -> [0,0]
1 -> [2,1]
2 -> [0,1]
3 -> [2,2]
4 -> [1,2]
``` | Essentially, you want np.unique to return the indexes of the unique columns, and the indices of where they're used? This is easy enough to do by transposing the matrix and then using the code from the other question, with the addition of `return_inverse=True`.
```
at = a.T
b = np.ascontiguousarray(at).view(np.dtype((np.void, at.dtype.itemsize * at.shape[1])))
_, u, indices = np.unique(b, return_index=True, return_inverse=True)
```
With your `a`, this gives:
```
In [35]: u
Out[35]: array([0, 5, 7, 1, 6])
In [36]: indices
Out[36]: array([0, 3, 0, 3, 3, 1, 4, 2, 2, 4])
```
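(As an aside, if you only need the bookkeeping and not numpy's speed, the same unique-columns-with-inverse idea can be sketched in pure Python. This hypothetical helper labels columns in first-appearance order, which matches the mapping in the question, whereas `np.unique` sorts:)

```python
def unique_columns(matrix):
    """Return (unique column tuples, inverse index per original column)."""
    ncols = len(matrix[0])
    seen = {}              # column tuple -> index into `uniques`
    uniques, inverse = [], []
    for j in range(ncols):
        col = tuple(row[j] for row in matrix)
        if col not in seen:
            seen[col] = len(uniques)
            uniques.append(col)
        inverse.append(seen[col])
    return uniques, inverse

a = [[0, 2, 0, 2, 2, 0, 2, 1, 1, 2],
     [0, 1, 0, 1, 1, 1, 2, 2, 2, 2]]
u, idx = unique_columns(a)
print(idx)  # [0, 1, 0, 1, 1, 2, 3, 4, 4, 3]
```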
It's not entirely clear to me what you want `u` to be, however. If you want it to be the unique columns, then you could use the following instead:
```
at = a.T
b = np.ascontiguousarray(at).view(np.dtype((np.void, at.dtype.itemsize * at.shape[1])))
_, idx, indices = np.unique(b, return_index=True, return_inverse=True)
u = a[:,idx]
```
This would give
```
In [41]: u
Out[41]:
array([[0, 0, 1, 2, 2],
[0, 1, 2, 1, 2]])
In [42]: indices
Out[42]: array([0, 3, 0, 3, 3, 1, 4, 2, 2, 4])
``` | First let's get the unique indices; to do so we need to start by transposing your array:
```
>>> a=a.T
```
Using a modified version of the above to get unique indices.
```
>>> ua, uind = np.unique(np.ascontiguousarray(a).view(np.dtype((np.void,a.dtype.itemsize * a.shape[1]))),return_inverse=True)
>>> uind
array([0, 3, 0, 3, 3, 1, 4, 2, 2, 4])
#Thanks to @Jamie
>>> ua = ua.view(a.dtype).reshape(ua.shape + (-1,))
>>> ua
array([[0, 0],
[0, 1],
[1, 2],
[2, 1],
[2, 2]])
```
For sanity:
```
>>> np.all(a==ua[uind])
True
```
To reproduce your chart:
```
>>> for x in range(ua.shape[0]):
... print x,'->',ua[x]
...
0 -> [0 0]
1 -> [0 1]
2 -> [1 2]
3 -> [2 1]
4 -> [2 2]
```
To do exactly what you ask (though it will be a bit slower if it has to convert the array):
```
>>> b=np.asfortranarray(a).view(np.dtype((np.void,a.dtype.itemsize * a.shape[0])))
>>> ua,uind=np.unique(b,return_inverse=True)
>>> uind
array([0, 3, 0, 3, 3, 1, 4, 2, 2, 4])
>>> ua.view(a.dtype).reshape(ua.shape+(-1,),order='F')
array([[0, 0, 1, 2, 2],
[0, 1, 2, 1, 2]])
#To return this in the previous order.
>>> ua.view(a.dtype).reshape(ua.shape + (-1,))
``` | Find unique columns and column membership | [
"",
"python",
"numpy",
"unique",
""
] |
I have a python dictionary of user-item ratings that looks something like this:
```
sample={'user1': {'item1': 2.5, 'item2': 3.5, 'item3': 3.0, 'item4': 3.5, 'item5': 2.5, 'item6': 3.0},
'user2': {'item1': 2.5, 'item2': 3.0, 'item3': 3.5, 'item4': 4.0},
'user3': {'item2':4.5,'item5':1.0,'item6':4.0}}
```
I was looking to convert it into a pandas data frame that would be structured like
```
col1 col2 col3
0 user1 item1 2.5
1 user1 item2 3.5
2 user1 item3 3.0
3 user1 item4 3.5
4 user1 item5 2.5
5 user1 item6 3.0
6 user2 item1 2.5
7 user2 item2 3.0
8 user2 item3 3.5
9 user2 item4 4.0
10 user3 item2 4.5
11 user3 item5 1.0
12 user3 item6 4.0
```
Any ideas would be much appreciated :) | Try the following code:
```
import pandas
sample={'user1': {'item1': 2.5, 'item2': 3.5, 'item3': 3.0, 'item4': 3.5, 'item5': 2.5, 'item6': 3.0},
'user2': {'item1': 2.5, 'item2': 3.0, 'item3': 3.5, 'item4': 4.0},
'user3': {'item2':4.5,'item5':1.0,'item6':4.0}}
df = pandas.DataFrame([
[col1,col2,col3] for col1, d in sample.items() for col2, col3 in d.items()
])
``` | I think the operation you're after -- to unpivot a table -- is called "melting". In this case, the hard part can be done by `pd.melt`, and everything else is basically renaming and reordering:
```
df = pd.DataFrame(sample).reset_index().rename(columns={"index": "item"})
df = pd.melt(df, "item", var_name="user").dropna()
df = df[["user", "item", "value"]].reset_index(drop=True)
```
---
Simply calling `DataFrame` produces something which has the information we want but has the wrong shape:
```
>>> df = pd.DataFrame(sample)
>>> df
user1 user2 user3
item1 2.5 2.5 NaN
item2 3.5 3.0 4.5
item3 3.0 3.5 NaN
item4 3.5 4.0 NaN
item5 2.5 NaN 1.0
item6 3.0 NaN 4.0
```
So let's promote the index to a real column and improve the name:
```
>>> df = pd.DataFrame(sample).reset_index().rename(columns={"index": "item"})
>>> df
item user1 user2 user3
0 item1 2.5 2.5 NaN
1 item2 3.5 3.0 4.5
2 item3 3.0 3.5 NaN
3 item4 3.5 4.0 NaN
4 item5 2.5 NaN 1.0
5 item6 3.0 NaN 4.0
```
Then we can call `pd.melt` to turn the columns. If we don't specify the variable name we want, "user", it'll give it the boring name of "variable" (just like it gives the data itself the boring name "value").
```
>>> df = pd.melt(df, "item", var_name="user").dropna()
>>> df
item user value
0 item1 user1 2.5
1 item2 user1 3.5
2 item3 user1 3.0
3 item4 user1 3.5
4 item5 user1 2.5
5 item6 user1 3.0
6 item1 user2 2.5
7 item2 user2 3.0
8 item3 user2 3.5
9 item4 user2 4.0
13 item2 user3 4.5
16 item5 user3 1.0
17 item6 user3 4.0
```
Finally, we can reorder and renumber the indices:
```
>>> df = df[["user", "item", "value"]].reset_index(drop=True)
>>> df
user item value
0 user1 item1 2.5
1 user1 item2 3.5
2 user1 item3 3.0
3 user1 item4 3.5
4 user1 item5 2.5
5 user1 item6 3.0
6 user2 item1 2.5
7 user2 item2 3.0
8 user2 item3 3.5
9 user2 item4 4.0
10 user3 item2 4.5
11 user3 item5 1.0
12 user3 item6 4.0
```
`melt` is pretty useful once you get used to it. Usually, as here, you do some renaming/reordering before and after. | Pandas data frame from dictionary | [
"",
"python",
"pandas",
""
] |
What I'd like to do: basically I have this file with data on separate lines, except the last piece, which is a biography and may stretch across many lines. The biography may be any number of lines long, and all I know is that it starts on the 5th line. What I need is a way to retrieve the biography from the fifth line to the end of the file, but I don't know how to do this. Thanks in advance.
Here's what I tried:
```
from tkinter import *
import os
class App:
charprefix = "character_"
charsuffix = ".iacharacter"
chardir = "data/characters/"
def __init__(self, master):
self.master = master
frame = Frame(master)
frame.pack()
# character box
Label(frame, text = "Characters Editor").grid(row = 0, column = 0, rowspan = 1, columnspan = 2)
self.charbox = Listbox(frame)
for chars in []:
self.charbox.insert(END, chars)
self.charbox.grid(row = 1, column = 0, rowspan = 5)
charadd = Button(frame, text = " Add ", command = self.addchar).grid(row = 1, column = 1)
charremove = Button(frame, text = "Remove", command = self.removechar).grid(row = 2, column = 1)
charedit = Button(frame, text = " Edit ", command = self.editchar).grid(row = 3, column = 1)
for index in self.charbox.curselection():
charfilelocale = self.charbox.get(int(index))
charfile = open(app.chardir + app.charprefix + app.charfilelocale, 'r+')
charinfo = str.splitlines(0)
``` | Another way to phrase your question would be "how do I discard the first four lines of a file I read?" Taking the answer to that a step at a time:
```
filename = "/a/text/file"
input_file = open(filename)
```
where the default mode for `open()` is `'r'` so you don't have to specify it.
```
contents = input_file.readlines()
input_file.close()
```
where `readlines()` returns a list of all the lines contained in the input file in one gulp. You were going to have to read it all anyway, so let's do it with one method call. And, of course `close()` because you are a tidy coder. Now you can use [list slicing](https://stackoverflow.com/questions/509211/the-python-slice-notation) to get the part that you want:
```
biography = contents[4:]
```
which didn't actually throw away the first four lines, it just assigned all but the first four to biography. To make this a little more idiomatic gives:
```
with open(filename) as input_file:
biography = input_file.readlines()[4:]
```
The `with` context manager is useful to know but look it up when you are ready. Here it saved you the `close()` but it is a little more powerful than just that.
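A variant worth knowing: for very large files you may not want `readlines()` to build the whole list just to discard the head. `itertools.islice` skips the first four lines lazily. A sketch, using `io.StringIO` as a stand-in for a real file object:

```python
import io
from itertools import islice

# Hypothetical file contents: four header lines, then the biography.
fake_file = io.StringIO(
    "name\nbirth\ndeath\nnationality\nline one of bio\nline two of bio\n")

# islice(f, 4, None) yields lines 5..end without materializing lines 1..4.
biography = list(islice(fake_file, 4, None))
print(biography)  # ['line one of bio\n', 'line two of bio\n']
```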
**added in response to comment**:
Something like
```
with open(filename) as input_file:
contents = input_file.readlines()
person = contents[0]
birth_year = contents[1]
...
biography = contents[4:]
```
but I think you figured that bit out while I was typing it. | If you just want to put the entire biography in a string, you can do this:
```
with open('biography.txt') as f:
for i in range(4): # Read the first four lines
f.readline()
s = ''
for line in f:
s += line
```
"`for line in f`" iterates over `f`. `iter(f)` returns a generator function that yields `f.readline()` until the end of the file is reached. | Python: How do you read a chunk of text from a file without knowing how long the file actually is? | [
"",
"python",
"file",
"operating-system",
"lines",
""
] |
So I would like to join 2 tables with 1 where condition:
Here is what I've tried so far:
```
SELECT * FROM (ci_usertags)
JOIN ci_tags ON ci_usertags.utag_tid
WHERE utag_uid = 1
```
My tables look like this:
```
ci_usertags
utag_id
utag_uid
utag_tid
ci_tags
tag_id
tag_name
```
I want to select all `tag`s for the user with `ID` 1 and get the `tag name`. With this SQL I am getting duplicate results and even `tags` which are not for the user with `ID` 1. | Your query is:
```
SELECT *
FROM (`ci_usertags`) JOIN
`ci_tags`
ON `ci_usertags`.`utag_tid`
WHERE `utag_uid`
```
This is sort of nonsensical SQL. As written, the effective conditions are that `utag_tid` is not 0 or NULL and `utag_uid` is not 0 or NULL, because there are no comparisons; MySQL simply evaluates each column value as a boolean. In some other SQL engines, this would result in errors.
I think you want something like:
```
SELECT *
FROM `ci_usertags` ut JOIN
`ci_tags` t
ON ut.`utag_tid` = t.tag_id
WHERE `utag_uid` = 1
``` | Solution with nested query as replacement for join (speed improvement)
```
Select tag_id, tag_name
FROM ci_tags
WHERE tag_id IN (Select utag_tid FROM ci_usertags WHERE utag_uid = 1 GROUP BY 1)
GROUP BY 1,2
``` | SQL join tables with condition | [
"",
"sql",
"database-design",
"join",
""
] |
A part of my table is as follows,
```
Key|Value
---------
A | V1
B | V2
C | V3
```
I know the following query would definitely return no result, but then what is the correct query for achieving a positive result, i.e. '1'?
I mean, what query should I apply that checks that A has value V1 and B has value V2, and then returns '1'?
The new query should return nothing if either A or B has a different value.
Think of the table as a map of key/value pairs. The only difference here is that it's not a Java map but a table in an Oracle DB. So, how do I achieve a result that satisfies both key/value pairs?
```
select 1 from myTable
where (key = 'A' and value = 'V1')
AND (key = 'B' and value = 'V2');
```
Let me know if the design of the table itself needs some change. | If you only want to check whether there are rows that fulfill certain conditions, you can use the following construct:
```
select count(*) from dual
where exists (
select 1
from myTable
where (key = 'A' and value = 'V1')
)
AND exists (
select 1
from myTable
where (key = 'B' and value = 'V2')
);
``` | I think this is what you want.
```
select 1 from dual
where exists(
select 1
from mytable
where key = 'A' and value = 'V1')
and exists(
select 1
from mytable
where key = 'B' and value = 'V2')
``` | Select statement having multiple conditions over multiple columns | [
"",
"sql",
"oracle",
""
] |
I am using MySQL.
I have produced the following result-set using a select with joins etc., but I wish to remove the NULLs if possible and produce a result similar to the second table.
```
-----------------------------------
|#|IMG_KEY|IMAGE|XX_STYLE|YY_STYLE|
-----------------------------------
|1| 1 |PIX01| <NULL> | STYLEB |
|2| 1 |PIX01| STYLEA | <NULL> |
|3| 2 |PIX02| <NULL> | STYLEB |
|4| 2 |PIX02| STYLEA | <NULL> |
|5| 3 |PIX03| <NULL> | STYLEB |
|6| 3 |PIX03| STYLEA | <NULL> |
-----------------------------------
```
NOTE: `XX_STYLE` & `YY_STYLE` are calculated columns.
```
-----------------------------------
|#|IMG_KEY|IMAGE|XX_STYLE|YY_STYLE|
-----------------------------------
|1| 1 |PIX01| STYLEA | STYLEB |
|2| 2 |PIX02| STYLEA | STYLEB |
|3| 3 |PIX03| STYLEA | STYLEB |
-----------------------------------
```
Is it possible to do this using some combination of the `GROUP BY` and/or `HAVING` etc keywords. I've tried `COALESCE` but to no avail. | Try using `GROUP_CONCAT()`:
```
SELECT MIN(ID) ID, IMG_KEY, IMAGE
,GROUP_CONCAT(XX_STYLE) XX_STYLE
,GROUP_CONCAT(YY_STYLE) YY_STYLE
FROM MyTable
GROUP BY IMG_KEY, IMAGE
```
Result:
```
| ID | IMG_KEY | IMAGE | XX_STYLE | YY_STYLE |
----------------------------------------------
| 1 | 1 | PIX01 | STYLEA | STYLEB |
| 3 | 2 | PIX02 | STYLEA | STYLEB |
| 5 | 3 | PIX03 | STYLEA | STYLEB |
```
To get row numbers starting from 1 try this:
```
SELECT @Row:=@Row+1 AS ID, IMG_KEY, IMAGE
,GROUP_CONCAT(XX_STYLE) XX_STYLE
,GROUP_CONCAT(YY_STYLE) YY_STYLE
FROM MyTable
, (SELECT @Row:=0) r
GROUP BY IMG_KEY, IMAGE
```
Result:
```
| ID | IMG_KEY | IMAGE | XX_STYLE | YY_STYLE |
----------------------------------------------
| 1 | 1 | PIX01 | STYLEA | STYLEB |
| 2 | 2 | PIX02 | STYLEA | STYLEB |
| 3 | 3 | PIX03 | STYLEA | STYLEB |
```
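`GROUP_CONCAT()` is not MySQL-only; SQLite implements it too, so the collapsing step can be sanity-checked from Python (the `@Row` renumbering above is MySQL-specific and omitted here; the rows are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable "
             "(ID INT, IMG_KEY INT, IMAGE TEXT, XX_STYLE TEXT, YY_STYLE TEXT)")
conn.executemany("INSERT INTO MyTable VALUES (?,?,?,?,?)", [
    (1, 1, 'PIX01', None, 'STYLEB'),
    (2, 1, 'PIX01', 'STYLEA', None),
    (3, 2, 'PIX02', None, 'STYLEB'),
    (4, 2, 'PIX02', 'STYLEA', None),
])

# GROUP_CONCAT ignores NULLs, so each group collapses to its non-NULL style.
rows = conn.execute("""
    SELECT MIN(ID), IMG_KEY, IMAGE,
           GROUP_CONCAT(XX_STYLE), GROUP_CONCAT(YY_STYLE)
    FROM MyTable
    GROUP BY IMG_KEY, IMAGE
    ORDER BY IMG_KEY
""").fetchall()
print(rows)  # [(1, 1, 'PIX01', 'STYLEA', 'STYLEB'), (3, 2, 'PIX02', 'STYLEA', 'STYLEB')]
```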
### See [this SQLFiddle](http://sqlfiddle.com/#!2/8a5ef/5) | Remove those NULL values in the query itself. To do that, add the following line to your query:
```
where XX_STYLE is not null and YY_STYLE is not null
```
If you provide your query, I'll give you the complete query. | Removing NULLs from ResultSet | [
"",
"mysql",
"sql",
"null",
""
] |
I need to write a program that collects different datasets and unites them. For this I have to read in a comma-separated matrix: each row represents an instance (in this case proteins), each column represents an attribute of the instances. If an instance has an attribute, it is represented by a 1, otherwise 0. The matrix looks like the example given below, but much larger, with 35000 instances and hundreds of attributes.
```
Proteins,Attribute 1,Attribute 2,Attribute 3,Attribute 4
Protein 1,1,1,1,0
Protein 2,0,1,0,1
Protein 3,1,0,0,0
Protein 4,1,1,1,0
Protein 5,0,0,0,0
Protein 6,1,1,1,1
```
I need a way to store the matrix before writing it into a new file with other information about the instances. I thought of using numpy arrays, since I would like to be able to select and check single columns. I tried to use numpy.empty to create the array of the given size, but it seems that you have to preselect the length of the strings and cannot change them afterwards.
Is there a better way to deal with such data? I also thought of dictionaries of lists, but then I cannot select single columns. | You can use `numpy.loadtxt`, for example:
```
import numpy as np
a = np.loadtxt(filename, delimiter=',',usecols=(1,2,3,4),
skiprows=1, dtype=float)
```
Which will result in something like:
```
#array([[ 1., 1., 1., 0.],
# [ 0., 1., 0., 1.],
# [ 1., 0., 0., 0.],
# [ 1., 1., 1., 0.],
# [ 0., 0., 0., 0.],
# [ 1., 1., 1., 1.]])
```
Or, using `structured arrays` (`np.recarray`):
```
a = np.loadtxt('stack.txt', delimiter=',',usecols=(1,2,3,4),
skiprows=1, dtype=[('Attribute 1', float),
('Attribute 2', float),
('Attribute 3', float),
('Attribute 4', float)])
```
from where you can get each field like:
```
a['Attribute 1']
#array([ 1., 0., 1., 1., 0., 1.])
``` | Take a look at [pandas](http://pandas.pydata.org/).
> pandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language. | Recomended way to create a matrix containing strings in python | [
"",
"python",
"arrays",
"string",
"numpy",
""
] |
I've been struggling with this problem. Head against the wall. I know this has to be easy.
I want to select items that match multiple criteria on a reference table. Here is an example schema that can help illustrate the problem:
```
tblCars
------------
CarID
CarName
tblCarColors
------------
ColorID
Color
tblCarColorXRef
------------
ID
CarID
ColorID
```
Basically, I want to find cars that match multiple colors I'm searching against. To continue the example, let's say the Toyota in the car table is multi-colored: black and yellow.
CarID of the Toyota would be 1
ColorID for black is 1 and yellow would be 2
I need to find all cars in the tblCarColorXRef table that match 1 AND 2. Has to be 'and'. I don't want to find cars that are black or cars that are yellow, but cars that contain both yellow and black.
The problem is that I can't simply search WHERE ColorID = 1 AND ColorID = 2; that would never be true, so this is where the head banging starts. I need to wrap this query with other criteria from other tables. I get close with HAVING and COUNT, but that is not necessarily accurate or correct.
Side note... cars can have 1 or many colors with the XRef table. | You can try this.
```
select carid
from tblCarColorXRef
where colorid in (1,2)
group by carid
having count(*) = 2;
```
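This pattern is sometimes called relational division, and it is easy to sanity-check with SQLite from Python. The rows below are hypothetical; `COUNT(DISTINCT ColorID) = 2` is used so that duplicate (car, color) pairs cannot inflate the count:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblCarColorXRef (ID INT, CarID INT, ColorID INT)")
conn.executemany("INSERT INTO tblCarColorXRef VALUES (?,?,?)", [
    (1, 1, 1),  # Toyota: black
    (2, 1, 2),  # Toyota: yellow
    (3, 2, 1),  # another car: black only
])

# Only cars matching BOTH colors survive the HAVING clause.
rows = conn.execute("""
    SELECT CarID FROM tblCarColorXRef
    WHERE ColorID IN (1, 2)
    GROUP BY CarID
    HAVING COUNT(DISTINCT ColorID) = 2
""").fetchall()
print(rows)  # [(1,)]
```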
You can use `count(distinct colorid) = 2`, if the car, color combination is not unique. | I would suggest a couple of other things.
1. Remove "tbl" from the table names.
2. Try to get rid of the ID columns -- in this example they wouldn't be necessary as Types of cars is the actual key you're looking for (Honda, Toyota, etc..) - Same with Colors. This may help your Xref as well - which would have a composite key as the primary key and not an ID column. See if this helps - [CarColorExample](http://sqlfiddle.com/#!3/0fb67/1), if not, give me more information and I'll revise. | Find rows with multiple reference values in MySQL | [
"",
"mysql",
"sql",
""
] |
I have a small problem with my Python 2 program.
Here is my function:
```
def union(q,p):
q = q + p
q = set(q)
return q, p
```
Then I have created new two lists and called my function:
```
a = [1,2,3]
b = [2,4,6]
union(a,b)
```
Finally I'm printing out `a` and `b`:
```
>>>print a
[1,2,3]
>>>print b
[2,4,6]
```
As you can see my function didn't change the value of `a`. Why? How can I fix that? What am I doing wrong?
**NOTE:** I expected `a` to be `[1,2,3,4,6]` instead of `[1,2,3]`.
Thanks. | Assign return values back to `a` and `b`:
```
>>> def union(q,p):
... q = q + p
... q = set(q)
... return q, p
...
>>> a = [1,2,3]
>>> b = [2,4,6]
>>> a, b = union(a, b)
>>> a
set([1, 2, 3, 4, 6])
>>> b
[2, 4, 6]
```
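If you instead want `union` to modify `a` itself, so that `print a` shows the merged values without reassigning, mutate the list in place rather than rebinding the parameter. A sketch of that alternative (shown in Python 3 syntax):

```python
def union_inplace(q, p):
    # Extend q through the reference the caller passed in; rebinding q
    # (as in q = q + p) would only change the local name, not the caller's list.
    for item in p:
        if item not in q:
            q.append(item)

a = [1, 2, 3]
b = [2, 4, 6]
union_inplace(a, b)
print(a)  # [1, 2, 3, 4, 6]
```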
To get a list from set, use `list` as Haidro commented:
```
>>> list(a)
[1, 2, 3, 4, 6]
``` | Your function doesn't change it in place, it returns the new items. Thus, you have to return the result to a variable:
```
a, b = union(a, b)
``` | Python 2: Function didn't change the list data | [
"",
"python",
"list",
"function",
"python-2.7",
""
] |
Is there a way that I can find where stored procedures are saved so that I can just copy the files to my desktop? | Stored procedures aren't stored as files, they're stored as metadata and exposed to us peons (thanks Michael for the reminder about [`sysschobjs`](http://technet.microsoft.com/en-us/library/ms179503.aspx)) in the catalog views [`sys.objects`](http://technet.microsoft.com/en-us/library/ms190324.aspx), [`sys.procedures`](http://technet.microsoft.com/en-us/library/ms188737.aspx), [`sys.sql_modules`](http://technet.microsoft.com/en-us/library/ms175081.aspx), etc. For an individual stored procedure, you can query the definition directly using these views (most importantly `sys.sql_modules.definition`) or using the [`OBJECT_DEFINITION()`](http://technet.microsoft.com/en-us/library/ms176090.aspx) function as [Nicholas pointed out](https://stackoverflow.com/a/18240374/61305) (though his description of [`syscomments`](http://technet.microsoft.com/en-us/library/aa260393(v=sql.80).aspx) is not entirely accurate).
To extract all stored procedures to a single file, one option would be to open Object Explorer, expand `your server > databases > your database > programmability` and highlight the `stored procedures` node. Then hit `F7` (View > [Object Explorer Details](http://technet.microsoft.com/en-us/library/cc646011.aspx)). On the right-hand side, select all of the procedures you want, then right-click, `script stored procedure as > create to > file`. This will produce a single file with all of the procedures you've selected. If you want a single file for each procedure, you could use this method by only selecting one procedure at a time, but that could be tedious. You could also use this method to script all accounting-related procedures to one file, all finance-related procedures to another file, etc.
An easier way to generate exactly one file per stored procedure would be to use the [Generate Scripts wizard](http://technet.microsoft.com/en-us/library/hh245282.aspx) - again, starting from Object Explorer - right-click your database and choose `Tasks > Generate scripts`. Choose `Select specific database objects` and check the top-level `Stored Procedures` box. Click Next. For output choose `Save scripts to a specific location`, `Save to file`, and `Single file per object.`
*These steps may be slightly different depending on your version of SSMS.* | Stored procedures are not "stored" as a separate file that you're free to browse and read without the database. It's stored in the database it belongs to in a set of system tables. The table that contains the definition is called [sysschobjs] which isn't even accessible (directly) to any of us end users.
To retrieve the definition of these stored procedures from the database, I like to use this query:
```
select definition from sys.sql_modules
where object_id = object_id('sp_myprocedure')
```
But I like Aaron's answer. He gives some other nice options. | How to script out stored procedures to files? | [
"",
"sql",
"sql-server",
""
] |
I have a table in SQL Server with a column of datatype DateTime, and I want to get data for a specific month or year, i.e.
```
Select * from Table where datejoined='07'//Getting July Data
```
or
```
Select * from Table where datejoined='2013'//Getting 2013 Data
```
I have tried this, but it also matches against the minutes of the time:
```
select * from Table where datejoined like '%07%'
``` | What you're looking for is something like this:
```
SELECT *
FROM TABLE
WHERE Year(DATEJOINED) = 2013
```
or:
```
SELECT *
FROM TABLE
WHERE Month(DATEJOINED) = 7
``` | Maybe the following solutions are complicated, but they will give you better performance (this means Index Seek) if you have indexes. These examples are based on the [AdventureWorks2008R2](http://msftdbprodsamples.codeplex.com/releases/view/59211) database:
```
SET NOCOUNT ON;
GO
DECLARE @StartDate DATETIME,@EndDate DATETIME;
PRINT 'Test #1: By YEAR';
DECLARE @Year SMALLINT;
SET @Year=2005;
SET @StartDate=CONVERT(CHAR(4),@Year)+'0101';
SET @EndDate=DATEADD(YEAR,DATEDIFF(YEAR,0,@StartDate)+1,0)
SELECT @StartDate AS StartDate,@EndDate AS EndDate
SELECT h.OrderDate,h.SalesOrderID
FROM Sales.SalesOrderHeader h
WHERE h.OrderDate>=@StartDate AND h.OrderDate<@EndDate;
PRINT 'Test #2: By MONTH';
DECLARE @Month TINYINT,@FromYear SMALLINT,@ToYear SMALLINT;
SET @Month=7;
SELECT @FromYear=YEAR(MIN(h.OrderDate)),@ToYear=YEAR(MAX(h.OrderDate))
FROM Sales.SalesOrderHeader h
SELECT @FromYear AS FromYear,@ToYear AS ToYear;
SET @StartDate=CONVERT(CHAR(4),@Year)+'0101';
SET @EndDate=DATEADD(YEAR,DATEDIFF(YEAR,0,@StartDate)+1,0);
WITH N10(Num)
AS
(
SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL
SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9 UNION ALL SELECT 10
)
SELECT y.*,h.OrderDate,h.SalesOrderID
FROM
(
SELECT x.StartDate,DATEADD(MONTH,DATEDIFF(MONTH,0,x.StartDate)+1,0) AS EndDate
FROM
(
SELECT CONVERT(DATETIME,CONVERT(CHAR(4),@FromYear-1+n.Num)+RIGHT('00'+CONVERT(VARCHAR(2),@Month),2)+'01') AS StartDate
FROM N10 n
WHERE n.Num<=(@ToYear+1-@FromYear)
) x
) y INNER JOIN Sales.SalesOrderHeader h ON h.OrderDate>=y.StartDate AND h.OrderDate<y.EndDate
```
Results:
```
Test #1: By YEAR
StartDate EndDate
----------------------- -----------------------
2005-01-01 00:00:00.000 2006-01-01 00:00:00.000
OrderDate SalesOrderID
----------------------- ------------
2005-07-01 00:00:00.000 43659
...
2005-12-31 00:00:00.000 45037
Test #2: By MONTH
FromYear ToYear
-------- ------
2005 2008
StartDate EndDate OrderDate SalesOrderID
--------- ----------------------- ----------------------- ------------
20050701 2005-08-01 00:00:00.000 2005-07-01 00:00:00.000 43659
...
20080701 2008-08-01 00:00:00.000 2008-07-01 00:00:00.000 74159
```
Execution plans:
 | SQL server apply Month or year condition | [
"",
"sql",
"sql-server-2008",
""
] |
I want to know about the dependent destroy and dependent nullify options on Rails associations, and how they relate to SQL.
Thanks | Example:
Table users and table cars
user has many cars
car belongs to user
in table car you have user\_id on each row
if you set dependent destroy when defining the relationship in users, then when you delete a user, all cars having that user\_id will be deleted also
if you set nullify, cars will remain, but the user\_id column will be set to null (it is pointless to have any value there because the user with that id was deleted)
Hope that this helps | You use these options when you want to get rid of orphaned records.
Most commonly used is `destroy`, because it removes all associated objects one by one. | Rails When using dependent destroy and dependent nullify | [
"",
"sql",
"database",
"ruby-on-rails-3",
""
] |
Is there a good way to share a multiprocessing Lock between gunicorn workers? I am trying to write a JSON API with Flask. Some of the API calls will interact with a Python class that manages a running process (like ffmpeg for video conversion). When I scale up my number of web workers to more than 1, how can I ensure that only 1 worker is interacting with the class at the same time?
My initial thought was to use multiprocessing.Lock so the start() function can be atomic. I don't think I've figured out the right place to create a Lock so that one is shared across all the workers:
```
# runserver.py
from flask import Flask
from werkzeug.contrib.fixers import ProxyFix
import dummy
app = Flask(__name__)
@app.route('/')
def hello():
dummy.start()
return "ffmpeg started"
app.wsgi_app = ProxyFix(app.wsgi_app)
if __name__ == '__main__':
app.run()
```
Here is my dummy operation:
```
# dummy.py
from multiprocessing import Lock
import time
lock = Lock()
def start():
lock.acquire()
# TODO do work
for i in range(0,10):
print "did work %s" % i
time.sleep(1)
lock.release()
```
When I refresh the page a few times, I see the output from each call woven together.
Am I barking up the wrong tree here? Is there an easier way to make sure that only copy of the processing class (here just the dummy start() method) gets run at the same time? I think I might need something like celery to run tasks (and just use only 1 worker) but that seems a bit overkill for my small project. | Follow peterw's answer, the workers can share the lock resource.
But it is better to use a `try-finally` block to ensure the lock will always be released.
```
# dummy.py
from multiprocessing import Lock
import time
lock = Lock()
def start():
lock.acquire()
try:
# TODO do work
for i in range(0,10):
print "did work %s" % i
time.sleep(1)
finally:
lock.release()
``` | I tried something, and it seems to work. I put `preload_app = True` in my `gunicorn.conf` and now the lock seems to be shared. I am still looking into exactly what's happening here but for now this is good enough, YMMV. | Sharing a lock between gunicorn workers | [
"",
"python",
"concurrency",
"flask",
"multiprocessing",
"gunicorn",
""
] |
When I tried to embed album art in an MP3, mutagen updated the ID3 tag to version 2.4 - which I don't want, because in ID3v2.4 my cell phone (which runs Windows Phone 8) and my computer can't recognize the tags.
Apparently, simply changing the `mutagen.id3.version` attribute doesn't work: the real version doesn't change. | **Update:** this is now fixed, as pointed out by [JayRizzo](https://stackoverflow.com/questions/18248200/how-can-i-stop-mutagen-automatically-updating-the-id3-version/18255921?noredirect=1#comment130424055_18255921) in a comment to this answer.
---
Sadly, you can't. From [the docs](https://mutagen.readthedocs.org/en/latest/api/id3.html#mutagen.id3.ID3.load):
> Mutagen is only capable of writing ID3v2.4 tags ...
See also:
* [Bugtracker issue 85: Add support for writing ID3v2.3 tags](http://code.google.com/p/mutagen/issues/detail?id=85)
* [Issue 153: Why write ID3 v2.4 only?](http://code.google.com/p/mutagen/issues/detail?id=153) | There is a `v2_version` option in the tag-saving function, shown below.
```
import mutagen
audio=mutagen.File('1.mp3')
#audio.tags.update_to_v23()
audio.tags.save(v2_version=3)
```
It is also documented in help()
```
help(audio.tags.save)
```
as below:
> save(self, filename=None, v1=1, v2\_version=4, v23\_sep='/') | How can I stop mutagen automatically updating the ID3 version? | [
"",
"python",
"mp3",
"id3",
"id3v2",
"mutagen",
""
] |
Trying to obtain the Userid value in a row where the max(DateOccurred) was found. I'm getting lost in all these sub-queries.
I'm using SQL Server 2008.
NOTE: Need to return single value since part of another larger query in a SELECT statement.
Example of how I obtain max date (which works); but now I need the userid associated with this subquery max date.
```
(
SELECT MAX(LC.[Date])
FROM table_LC LC LEFT JOIN table_LM LM ON LC.[c] = LM.[c] AND LC.[L] = LM.[L]
WHERE LM.[c] = LC.[c] AND LM.[L] = LC.[L] AND LC.[LC] = 'ABCDEF'
) as [ABCDEF_Date],
``` | I cannot see your whole query, but you probably want to use a window function instead:
```
max(case when lc.lc = 'ABCDEF' then lc.[DATE] end) over () as maxdate
```
This may not be exactly right. It is hard to say without seeing the rest of the query.
Getting the user id would use similar logic.
EDIT:
You can also get the same effect by putting this in the `from` clause (it is not correlated):
```
from . . . cross join
(select top 1 lc.[Date], lc.id
FROM table_LC LC LEFT JOIN
table_LM LM
ON LC.[c] = LM.[c] AND LC.[L] = LM.[L]
WHERE LM.[c] = LC.[c] AND LM.[L] = LC.[L] AND LC.[LC] = 'ABCDEF'
order by lc.[date] desc
) as ABCDEF_Date
``` | Use this sub query -
```
(SELECT Userid,
Date
FROM (SELECT Userid, Date FROM table_LC) b
WHERE date = (SELECT MAX(LC.[Date])
FROM table_LC LC LEFT JOIN table_LM LM ON LC.[c] = LM.[c] AND LC.[L] = LM.[L]
WHERE LM.[c] = LC.[c] AND LM.[L] = LC.[L] AND LC.[LC] = 'ABCDEF'))
``` | sql - select different value of a max value nested query | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
Currently I am getting errors in Python, but I cannot seem to find them.
```
def dictionaryObjectParsed():
a = []
b = []
a, b = zip(*(map(lambda x: x.rstrip('\n\r').split('\t'), open('/Users/settingj/Desktop/NOxMultiplier.csv').readlines())))
for x in range(0,len(a)):
print a[x]
print b[x]
def timer(f):
threading.Timer(1, timer, f).start()
print time.strftime('%I:%M:%S %p %Z')
timer(dictionaryObjectParsed)
```
Here's the error I'm getting:
```
Traceback (most recent call last):
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 552, in __bootstrap_inner
self.run()
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 756, in run
self.function(*self.args, **self.kwargs)
TypeError: timer() argument after * must be a sequence, not function
```
I was able to do this earlier but I think I did something to create this error, what the heck :(
I clearly am passing arguments to the timer function ... right?
**EDIT**
I also tried `timer(dictionaryObjectParsed)` but nothing...
Also, sorry for the noobie question; this is just my second day in Python... :P | Pass the function without calling it (drop the '()').
```
timer(dictionaryObjectParsed)
```
and
```
def timer(f):
threading.Timer(1,f).start()
print time.strftime('%I:%M:%S %p %Z')
```
instead of
```
threading.Timer(1,timer)
```
You are trying to create a recursive timer function, I think, by mistake. The error you are getting comes from calling the function 'timer' again without the function parameter. I think it was a simple mistake.
---
Ok, so you do want a recursive function, so try this:
```
def timer(f):
threading.Timer(1,timer,[f,]).start()
f()
print time.strftime('%I:%M:%S %p %Z')
```
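For reference, here is a hedged Python 3 sketch of the same recursive-`Timer` idea in a stoppable form (the class and its names are illustrative, not from the original code):

```python
import threading

class RepeatingTimer:
    """Call fn every interval seconds until stop() is called."""

    def __init__(self, interval, fn):
        self.interval = interval
        self.fn = fn
        self._stopped = threading.Event()
        self._timer = None

    def _tick(self):
        if self._stopped.is_set():
            return
        self.fn()
        # schedule the next call, just like the recursive version above
        self._timer = threading.Timer(self.interval, self._tick)
        self._timer.start()

    def start(self):
        self._tick()

    def stop(self):
        self._stopped.set()
        if self._timer is not None:
            self._timer.cancel()
```

`stop()` both sets the flag and cancels the pending `Timer`, so the loop ends promptly instead of firing once more.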
worked? | You have multiple errors.
Try this:
```
def timer(f):
f() # NOTE THIS NEW LINE
threading.Timer(1,timer, f).start() # NOTE CHANGE ON THIS LINE
print time.strftime('%I:%M:%S %p %Z')
timer(dictionaryObjectParsed) # NOTE CHANGE ON THIS LINE
```
Note that on the final line, you want to pass the function, not the result of invoking the function.
Note that on the line `threading.Timer ...`, you want to pass enough arguments that subsequent invocations of `timer()` have the correct number of args.
Note the new line -- without it, `dictionaryObjectParsed` will never be invoked! | Passing Functions to Functions | [
"",
"python",
"function",
"timer",
""
] |
```
class Product(models.Model):
products = models.CharField(max_length=256)
def __unicode__(self):
return self.products
class PurchaseOrder(models.Model):
product = models.ManyToManyField('Product')
vendor = models.ForeignKey('VendorProfile')
dollar_amount = models.FloatField(verbose_name='Price')
```
I have that code. Unfortunately, the error comes in admin.py with the `ManyToManyField`
```
class PurchaseOrderAdmin(admin.ModelAdmin):
fields = ['product', 'dollar_amount']
list_display = ('product', 'vendor')
```
The error says:
> 'PurchaseOrderAdmin.list\_display[0]', 'product' is a ManyToManyField
> which is not supported.
However, it compiles when I take `'product'` out of `list_display`. So how can I display `'product'` in `list_display` without giving it errors?
**edit**: Maybe a better question would be how do you display a `ManyToManyField` in `list_display`? | You may not be able to do it directly. [From the documentation of `list_display`](https://docs.djangoproject.com/en/1.9/ref/contrib/admin/#django.contrib.admin.ModelAdmin.list_display)
> ManyToManyField fields aren’t supported, because that would entail
> executing a separate SQL statement for each row in the table. If you
> want to do this nonetheless, give your model a custom method, and add
> that method’s name to list\_display. (See below for more on custom
> methods in list\_display.)
You can do something like this:
```
class PurchaseOrderAdmin(admin.ModelAdmin):
fields = ['product', 'dollar_amount']
list_display = ('get_products', 'vendor')
def get_products(self, obj):
return "\n".join([p.products for p in obj.product.all()])
```
OR define a model method, and use that
```
class PurchaseOrder(models.Model):
product = models.ManyToManyField('Product')
vendor = models.ForeignKey('VendorProfile')
dollar_amount = models.FloatField(verbose_name='Price')
def get_products(self):
return "\n".join([p.products for p in self.product.all()])
```
and in the admin `list_display`
```
list_display = ('get_products', 'vendor')
``` | This way you can do it, kindly checkout the following snippet:
``` | This way you can do it; kindly check out the following snippet:
class Categories(models.Model):
""" Base category model class """
title = models.CharField(max_length=100)
description = models.TextField()
parent = models.ManyToManyField('self', default=None, blank=True)
when = models.DateTimeField('date created', auto_now_add=True)
def get_parents(self):
return ",".join([str(p) for p in self.parent.all()])
def __unicode__(self):
return "{0}".format(self.title)
```
And in your admin.py module call method as follows:
```
class categories(admin.ModelAdmin):
list_display = ('title', 'get_parents', 'when')
``` | How to show a many-to-many field with "list_display" in Django Admin? | [
"",
"python",
"django",
"django-models",
"django-admin",
"django-queryset",
""
] |
Is it possible to make a select and update at the same time?
```
select id,name from mytable where booled = 0
UPDATE mytable SET booled=1 WHERE (select id,name from mytable where booled = 0)
```
That is, those 2 commands in one. | There is no need to reinvent the wheel - you simply need to properly use transactions. MySQL supports transactions as long as you use the InnoDB engine for your tables (old MyISAM would not work).
Following series of statements would do what you want:
```
BEGIN;
SELECT id,name FROM mytable WHERE booled=0;
UPDATE mytable SET booled=1 WHERE booled=0;
COMMIT;
```
Depending on your programming language and database drivers you may not be able to directly use begin/commit transaction statements, but must instead use some framework-specific mechanisms to do that. For example, in Perl, you need to do something like this:
```
my $dbh = DBI->connect(...);
$dbh->begin_work(); # This is BEGIN TRANSACTION;
my $sth = $dbh->prepare(
"SELECT id,name FROM mytable WHERE booled=0");
$sth->execute();
while (my $row = $sth->fetchrow_hashref()) {
# do something with fetched $row...
}
$sth->finish();
$dbh->do("UPDATE mytable SET booled=1 WHERE booled=0");
$dbh->commit(); # This is implicit COMMIT TRANSACTION;
``` | Why not this?
```
UPDATE mytable SET booled=1 WHERE booled=0
``` | Database update and select at the same time | [
"",
"mysql",
"sql",
""
] |
I have a list and I want to filter my Queryset when any of these items is found in a foreign table's non-primary key 'test'. So I write something like this:
```
test_list = ['test1', 'test2', 'test3', 'test4', 'test5']
return cls.objects.filter(reduce(lambda x, y: x | y, [models.Q(next_task__test = item) for item in test_list]))[:20]
```
This returns an empty list. When I look at the SQL query it generated, I get:
```
SELECT ...
FROM ...
WHERE "job"."next_task_id" IN (test1, test2, test3, test4, test5) LIMIT 20;
```
Whereas what it should have been is this:
```
SELECT ...
FROM ...
WHERE "job"."next_task_id" IN ('test1', 'test2', 'test3', 'test4', 'test5') LIMIT 20;
```
Without the quotes, SQLite3 believes those are column names, and does not return anything. When I manually add the quotes and execute an SQLite3 query on the table without Django at all, I get the desired results. How do I make Django issue the query correctly? | This issue is quite interesting, it seems to happen with SQLite only. It's known here: <https://code.djangoproject.com/ticket/14091> and in the [docs](https://docs.djangoproject.com/en/dev/ref/databases/#sqlite-connection-queries).
So basically the query might not be wrong, but when you get the query back with Django it looks wrong:
```
>>> test_list = ['test1', 'test2', 'test3', 'test4', 'test5']
>>> cls.objects.filter(next_task__test__in=test_list).query.__str__()
SELECT ...
FROM ...
WHERE "job"."next_task_id" IN (test1, test2, test3, test4, test5);
```
Workaround: if you really think the query is wrong, then add explicit quotes to the list items, something like:
```
>>> test_list = ["'test1'", "'test2'", "'test3'", "'test4'", "'test5'"]
>>> cls.objects.filter(next_task__test__in=test_list).query.__str__()
SELECT ...
FROM ...
WHERE "job"."next_task_id" IN ('test1', 'test2', 'test3', 'test4', 'test5');
```
I would rely on the standard one anyway; the workaround above is too hackish. | I really like the answer from @Andrey-St, but a colleague pointed out that this makes a round trip to the database to do the work. So instead, we changed it to just grab the formatted query from the cursor.
```
def stringify_queryset(qs):
sql, params = qs.query.sql_with_params()
with connection.cursor() as cursor:
return cursor.mogrify(sql, params)
```
(We're using psycopg2 for Postgres -- I am not sure if `mogrify()` is available on other DB engines). | Django Queryset Filter Missing Quotes | [
"",
"python",
"django",
""
] |
Sorry for the clunky title. An example will explain.
Say I have a likes table, with an entry for each like
```
user_id | liked_id
--------------------
1 | a
2 | a
1 | b
2 | c
```
Meaning user `1` has liked items `a` and `b`, and user `2` has liked items `a` and `c`.
To get an aggregate count of likes for each item, I can do:
```
SELECT liked_id, COUNT(*)
FROM likes
GROUP BY liked_id
```
Is there a nice way, however, to do that but *only* for items that have been liked by a particular user? So, for instance, querying on user `1`, the result I'd like is:
```
liked_id | count
------------------
a | 2
b | 1
```
Because user `1` has liked items `a` and `b`, but not `c`.
The best I can think of is a `JOIN` or `IN` with a subselect:
```
SELECT l.liked_id, count(*)
FROM likes l
JOIN (
SELECT liked_id
FROM likes
WHERE user_id = 1
) l2
ON l.liked_id=l2.liked_id
GROUP BY l.liked_id;
```
Is there a better way to roll things up when aggregating? I feel like there might be some `HAVING` trickery I can do, but maybe not and it might be a slower solution anyway.
EDIT: I am using Postgres, if the tags did not make that clear.
EDIT: Thanks for all the answers, I accepted what I thought was the best and fastest, as I asked the question - should've been obvious, really, but I gave everyone a +1.
I should've mentioned that I needed another piece of data from the entry in the likes table so I can order on that later. The subselect will do it as will the accepted answers self join with an additional entry in the `SELECT` and `GROUP BY` parts. That'll teach me to oversimplify something for a SO question... thanks! | Sure: Join the table to itself:
```
SELECT t1.liked_id, COUNT(*)
FROM likes t1
JOIN likes t2 on t2.liked_id = t1.liked_id
WHERE t1.user_id = 1
GROUP BY t1.liked_id
```
Not only is this an elegant way to code it, it is also the best performing, as long as there is an index on `liked_id` for join perfomance, and an index on `user_id` for lookup performance. | You can try something like:
```
SELECT liked_id, COUNT(*)
FROM likes
GROUP BY liked_id
HAVING count(case when user_id = 1 then 1 end) > 0;
```
`count(case when user_id = 1 then 1 end)` will count how many times `user_id = 1` liked particular `liked_id`.
This query will get the results in one full scan. It will be faster than 2 full scans, but might be slower than 2 index scans (if you have indexes on `liked_id` and `user_id`). | Efficient way to aggregate a table only considering items that have a particular entry | [
"",
"sql",
"postgresql",
""
] |
When I try to do the following construction in Python, I get, `"No module named foo"` **on the second line**
```
import my_package.my_very_long_module_name as foo
from foo import f1, f2, f3
from foo import a, b, c
from foo import x, y, z
```
`my_very_long_module_name` is a module (`my_very_long_module_name.py`) within the folder `my_package` (the folder has the file `__init__.py`).
Why does the second line above fail? Am I not allowed to import names from an aliased module?
If that construction is not legal in Python, is there any other way to do this? | ```
import my_package.my_very_long_module_name as foo
from foo import f1, f2, f3
```
The second line fails because Python will try to find a module named `foo.py`; it won't use the variable `foo` you just imported.
You can try something like this:
```
import my_package.my_very_long_module_name as foo
f1, f2, f3 = foo.f1, foo.f2, foo.f3
del foo
``` | While it is correct that you cannot use variables in import statements, you can simply access the variables of the module.
Thus, you can do:
```
a = foo.a
```
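If the goal is to pull several names out of one long module path, `importlib` plus `getattr` can do the same job; here is a sketch that uses the standard library's `math` module as a stand-in for the long package path:

```python
import importlib

# stand-in for my_package.my_very_long_module_name
mod = importlib.import_module("math")

# bind several of its attributes to short local names in one step
sqrt, floor, ceil = (getattr(mod, name) for name in ("sqrt", "floor", "ceil"))
```

The names are then plain locals, just as if they had been imported with `from ... import`.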
And you can even write to `globals` if you insist. Honestly, though, I encourage you to just use the qualified `foo.a` in your code. It'll make it more readable. | Importing names from an aliased module. Is it possible? | [
"",
"python",
"python-import",
""
] |
If I want to return more than one variable from a function in Python I have been doing the following:
```
def foo():
firstName = 'Bob'
lastName = 'Jones'
return [firstName, lastName]
```
Then if I wanted to find only the first name I would do this
```
[firstName, dummy] = foo()
```
However, say I initially had the function
```
def fooInitial():
firstName = 'Bob'
return firstName
```
and I called this from many parts of my code, if I wanted to add the `lastName` as another output the way I have been doing, I would have to change everywhere else in my code that was calling it.
Is there a way to do something similar to Matlab where calling `a = foo()` gives me the first name (not the first and last in a list) and `[a,b]=foo()` gives me the first and last? | No, there isn't. You are better off either changing all the code calling `fooInitial()`, or adding a *different* method that returns the two parameters and having `fooInitial()` use that method:
```
def barInitial():
first_name, last_name = 'Bob', 'Jones'
return first_name, last_name
def fooInitial():
return barInitial()[0]
```
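A variation on the same idea (not part of the original answer) is to return a `namedtuple`, so callers can keep unpacking positionally or pick out a single field by name:

```python
from collections import namedtuple

Name = namedtuple("Name", ["first", "last"])

def bar_initial():
    # illustrative stand-in for barInitial() above
    return Name("Bob", "Jones")

first, last = bar_initial()         # tuple unpacking still works
first_only = bar_initial().first    # a single field, accessed by name
```

Since a `namedtuple` is still a tuple, existing call sites that unpack the result keep working unchanged.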
Note that you can just return a tuple instead of a list too; tuples only require a comma to define so the syntax is lighter (no need for square brackets). You can do the same when unpacking the returned value:
```
first_name, last_name = barInitial()
``` | You could do this:
```
def foo(last=None):
# current behaviour, change no code
if not last:
return 'bob'
# new behaviour return first and last list
return ['bob', 'jones']
```
The addition of the named keyword with a default argument gives you your current behaviour for code you don't want to change, and for new code where you want first and last returned you would use
```
first, last = foo(last=True)
``` | Python Function Return | [
"",
"python",
"function",
""
] |
Let's take a look at the following example:
```
>>> class Foo(object):
... pass
...
```
***My current understanding is that when the Python interpreter reads the line `class Foo(object)` [the Foo class definition], it will create a Foo class object in memory.***
Then I did the following two tests:
```
>>> dir()
['Foo', '__builtins__', '__doc__', '__name__', '__package__']
```
It looks like the Python interpreter has stored 'Foo' class object in memory.
```
>>> id(Foo)
140608157395232
```
It seems Foo class object is at memory address: 140608157395232.
Is my reasoning correct? If not, when does Python create class object in memory? | To be more specific, Python creates the class type object when it finishes processing the *entire* class definition (so being pedantic, it wouldn't create it until it had parsed and processed the `pass` line as well, in your example).
This is typically only relevant in esoteric edge cases, like the following:
```
>>> class Foo(object):
... print repr(Foo)
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in Foo
NameError: name 'Foo' is not defined
```
But yes, your reasoning is generally correct. | The class object is created by the line `class Foo(object):`, yes. It is not created when it reads that line, though, it's created when it reaches the end of the class definition.
The `id` of the class does not need to have any relation to the memory address. That's an implementation detail, and one you don't have any use for. | When does Python create class object in memory? | [
"",
"python",
"class",
"object",
"memory",
""
] |
So this is the brief:
I need to input something like:
```
BaSe fOO ThE AttAcK
```
and return:
```
attack the base.
```
As you can see, to decode this I need to start by reading the words in reverse order, and if the first letter of a word `isupper()`, make the word lowercase and append it to a list which I will later print. This is what I have so far:
```
# Enter your code for "BaSe fOO ThE AttAcK" here.
default = input('code: ')
listdefault = default.split()
uncrypted = []
for i in range(len(listdefault)):
if listdefault[:-i].istitle(): # doesn't work
i = i.lower() # dont know if this works, Should convert word to lower case.
uncrypted.append(i)
solution = ' '.join(uncrypted)
print(solution)
```
Can someone show me how to get this program to work? It says that I can't use the istitle() method on a list type. | You're close, but you're switching between treating `i` as an index (`for i in range...`) and as the word itself (`uncrypted.append(i)`). And you probably meant `listdefault[i].istitle()` rather than `listdefault[:-i].istitle()`.
The best solution is to change it to:
```
for w in listdefault[::-1]:
if w.istitle():
w = w.lower()
uncrypted.append(w)
```
Since `listdefault[::-1]` is a way to reverse the list (`reversed(listdefault)` also works). If you know how to use list comprehensions, you can do it in one line:
```
solution = ' '.join([w.lower() for w in listdefault[::-1] if w.istitle()])
``` | This can be done much simpler...
```
text = input("Code: ")
result = [w.lower() for w in reversed(text.split()) if w[0].isupper()]
print(' '.join(result))
```
```
Code: BaSe fOO ThE AttAcK
attack the base
```
I hope this makes you want to learn Python more deeply.
By the way, you can't use `.istitle()` for this task, because it checks whether *only* the first letter is uppercase. | Uncrypting a very simple code in python | [
"",
"python",
"python-3.x",
"decode",
"lowercase",
""
] |
I am very new to Python, thus am possibly asking a simple question.
I am writing multiprocessing code with Python:
```
from multiprocessing import Process
from multiprocessing import Queue
class myClass(object):
def __init__(self):
self.__i = 0
self.__name = 'rob'
return
def target_func(self, name, q):
self.__name = name
print 'Hello', self.__name
self.__i += 1
print self.__i
q.put([self.__i, self.__name])
return
def name(self):
return self.__name
def i(self):
return self.__i
if __name__ == '__main__':
mc = myClass()
q = Queue()
p = Process(target = mc.target_func, args = ('bob', q,))
p.start()
ret = q.get()
p.join()
p2 = Process(target = mc.target_func, args = ('tom', q,))
p2.start()
ret = q.get()
p2.join()
```
I expect the print out should be
```
Hello bob
1
Hello tom
2
```
But actually, the print out is
```
Hello bob
1
Hello tom
1 <------------------ Why it's not 2?
```
May I know what I am doing wrong?
Many thanks. | `target_func` is called in a separate process. `mc` is copied to each subprocess; it is not shared between processes.
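If separate processes really are what you need, state that must be visible to all of them can live in shared memory via `multiprocessing.Value` — a sketch (not the original code), using an explicit `fork` context:

```python
import multiprocessing as mp

def increment(counter):
    # runs in the child process; mutates shared memory, not a private copy
    with counter.get_lock():
        counter.value += 1

def run_demo():
    ctx = mp.get_context("fork")   # POSIX-only; keeps the sketch free of pickling concerns
    shared = ctx.Value("i", 0)     # "i" = a C int living in shared memory
    procs = [ctx.Process(target=increment, args=(shared,)) for _ in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return shared.value
```

Here `run_demo()` returns 2, because both children increment the same counter; with a plain instance attribute each child would only ever see its own copy.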
Using Thread, you will get the expected result. For safety you should use a lock; I omitted it in the following code.
```
from threading import Thread
from Queue import Queue
....
if __name__ == '__main__':
mc = myClass()
q = Queue()
p = Thread(target = mc.target_func, args = ('bob', q,))
p.start()
ret = q.get()
p.join()
p2 = Thread(target = mc.target_func, args = ('tom', q,))
p2.start()
ret = q.get()
p2.join()
``` | Processes don't share memory, unlike threads. The name \_\_i in the second process refers to a different variable, whose initial value was copied from the original process when you launched the subprocess.
You can use the Value or Array data types to transfer information from one process to another, or you can use the Queue to push data from the subprocess back to the original. All of these classes are included in the multiprocessing module:
<http://docs.python.org/2/library/multiprocessing.html#multiprocessing.Queue>
<http://docs.python.org/2/library/multiprocessing.html#multiprocessing.Value>
<http://docs.python.org/2/library/multiprocessing.html#multiprocessing.Array> | change object value in threads using python | [
"",
"python",
"multiprocessing",
""
] |
I have written some code that returns a phrase depending on the length of a name. Problem is, I can't submit it unless I fix this tiny error.
Here's what I've written:
```
name = input('Enter your name: ')
if len(name) <= 3:
print ('Hi',name,', you have a short name.')
elif len(name) > 8:
print ('Hi',name,', you have a long name.')
elif len(name) >= 4:
print ('Hi',name,', nice to meet you.')
```
For example: when I type in a 3-letter name such as 'Lin' it returns the following
> "Hi Lin\*(space)\*, you have a short name."
**I wish to get rid of the *(space)* and have it return:**
> "Hi Lin, you have a short name."
I think the error lies with my concatenation, and it automatically adds a space after the comma. | You're right, print(x, y, z) will sequentially print x, y and z with a space in between. As you suggest, concatenate your name to your greeting, and then print the greeting:
```
name = input('Enter your name: ')
if len(name) <= 3:
greeting = "Hi {}, you have a short name.".format(name)
elif len(name) > 8:
greeting = "Hi {}, you have a long name.".format(name)
elif len(name) >= 4:
greeting = "Hi {}, nice to meet you.".format(name)
print(greeting)
``` | The `print` function prints a space between its arguments. You can explicitly concatenate the strings to control the separation:
```
print('Hi ' + name + ', you have a short name.')
```
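Python 3's `print` also accepts a `sep` parameter, which avoids the concatenation entirely (a small sketch, not part of the original answer):

```python
def greet(name):
    # sep='' suppresses the automatic space between arguments, so the
    # space before the name and the comma are placed explicitly
    print('Hi ', name, ', you have a short name.', sep='')
```

`greet('Lin')` prints `Hi Lin, you have a short name.`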
or use string formatting:
```
print('Hi {}, you have a short name.'.format(name))
``` | Concatenation - Python | [
"",
"python",
"concatenation",
""
] |
I want to get the last cost with latest costing date and minimum cost for products.
When I use the query below, it is giving me the Max Date and Min Cost for each column. Please see the screenshots below.
```
SELECT MAX(CostingDate) AS LatestDate,
MIN(Cost) AS MinPrice,
OutletCode,
ProductID
FROM AccountsCosting
WHERE OutletCode = 'C&T01'
GROUP BY OutletCode, ProductID
```
Result:

E.g - for productID: `200006`
```
SELECT * FROM AccountsCosting
WHERE ProductID = 200006 AND OutletCode = 'C&T01'
ORDER BY CostingDate DESC
```

What I want is the last costing date with the minimum cost (the one that I highlighted with red color). Even if the purchase date is the same `2013-03-20`, it should return the minimum cost.

How can I edit my query to get the result? Any help will be much appreciated! | First you need to get the Latest Date then you can find the minimum cost for them. e.g.
```
select
a.OutletCode,
a.ProductID,
LatestDate,
MIN(Cost) AS MinPrice
from
(
SELECT MAX(CostingDate) AS LatestDate,
OutletCode,
ProductID
FROM AccountsCosting
WHERE OutletCode = 'C&T01'
GROUP BY OutletCode, ProductID
) a
left join
AccountsCosting b
on
a.OutletCode=b.OutletCode
and a.ProductID=b.ProductID
and a.LatestDate=b.CostingDate
group by a.OutletCode, a.ProductID, LatestDate
``` | If i understand you correctly, you want the minimum cost for the latest date?!
Try this:
```
SELECT CostingDate AS LatestDate,
Cost AS MinPrice,
OutletCode,
ProductID
FROM AccountsCosting
WHERE OutletCode = 'C&T01'
and CostingDate in (SELECT MAX(CostingDate) as CostingDate FROM AccountsCosting WHERE OutletCode = 'C&T01')
and Cost in (SELECT MIN(Cost) as Cost FROM AccountsCosting WHERE OutletCode = 'C&T01' and CostingDate in (SELECT MAX(CostingDate) as CostingDate FROM AccountsCosting WHERE OutletCode = 'C&T01'))
GROUP BY OutletCode, ProductID;
``` | How to return MAX and MIN of a value from a table? | [
"",
"sql",
"max",
"min",
""
] |
I have a table with a column named `Skills` which contains comma separated values for different employees like
```
EmpID Skills
1 C,C++,Oracle
2 Java,JavaScript,PHP
3 C,C++,Oracle
4 JavaScript,C++,ASP
5 C,C++,JavaScript
```
So I want to write a query which will order all the employees first who knows `JavaScript`, how can I get this result? | Try this
```
SELECT *
FROM
(
SELECT *
,CASE WHEN Skills LIKE '%JavaScript%' THEN 0 ELSE 1 END AS Rnk
FROM MyTable
) T
ORDER BY rnk,EmpID
```
[**DEMO**](https://data.stackexchange.com/stackoverflow/query/edit/129130)
OR
```
SELECT * FROM #MyTable
ORDER BY CASE WHEN Skills LIKE '%JavaScript%' THEN 0 ELSE 1 END,EmpID
You should **not** use one attribute to store multiple values. That goes against relational DB principles.
Instead, you should create an additional table to store skills and refer to the employee in it. Then your query will look like:
```
SELECT
*
FROM
employees
LEFT JOIN employees_skills
ON employees.id=employees_skills.employee_id
WHERE
employees_skills.skill='JavaScript'
``` | How to use Order By clause on a column containing string values separated by comma? | [
"",
"sql",
"sql-server-2012",
"sql-order-by",
""
] |
I have a string that looks like this:
```
"{\\x22username\\x22:\\x229\\x22,\\x22password\\x22:\\x226\\x22,\\x22id\\x22:\\x222c8bfa56-f5d9\\x22, \\x22FName\\x22:\\x22AnkQcAJyrqpg\\x22}"
```
as far as I understand `\x22` is `"`. So how could I convert this into a readable JSON with quotes around keys and values? | Decode from `string_escape`:
```
>>> import json
>>> value = "{\\x22username\\x22:\\x229\\x22,\\x22password\\x22:\\x226\\x22,\\x22id\\x22:\\x222c8bfa56-f5d9\\x22, \\x22FName\\x22:\\x22AnkQcAJyrqpg\\x22}"
>>> value.decode('string_escape')
'{"username":"9","password":"6","id":"2c8bfa56-f5d9", "FName":"AnkQcAJyrqpg"}'
>>> json.loads(value.decode('string_escape'))
{u'username': u'9', u'password': u'6', u'id': u'2c8bfa56-f5d9', u'FName': u'AnkQcAJyrqpg'}
``` | For a Unicode string under Python3, I found this:
```
value.encode('utf8').decode('unicode_escape')
```
<https://stackoverflow.com/a/14820462/450917> | How to convert characters like \x22 into a string? | [
"",
"python",
"encoding",
"ascii",
""
] |
I have the following code, and the idea is to be able to iterate over `root` for each string in `some_list[j]`. The goal is to stay away from nested for loops and learn a more pythonic way of doing this. I would like to return each `value` for the first item in `some_list` then repeat for the next item in `some_list`.
```
for i, value in enumerate(root.iter('{0}'.format(some_list[j])))
return value
```
Any ideas?
EDIT: root is
```
tree = ElementTree.parse(self._file)
root = tree.getroot()
``` | I *think* what you're trying to do is this:
```
values = ('{0}'.format(root.iter(item)) for item in some_list)
for i, value in enumerate(values):
# ...
```
But really, `'{0}'.format(foo)` is silly; it's just going to do the same thing as `str(foo)` but more slowly and harder to understand. Most likely you already have strings, so all you really need is:
```
values = (root.iter(item) for item in some_list)
for i, value in enumerate(values):
# ...
```
You could merge those into a single line, or replace the genexpr with `map(root.iter, some_list)`, etc., but that's the basic idea.
---
At any rate, there are no nested loops here. There are two loops, but they're just interleaving—you're still only running the inner code once for each item in `some_list`. | So, given List A, which contains multiple lists, you want to return the first element of each list? I may not be understanding you correctly, but if so you can use a list comprehension...very "Pythonic" ;)
```
In [1]: some_list = [[1,2,3],[4,5,6],[7,8,9]]
In [2]: new = [x[0] for x in some_list]
In [3]: new
Out[3]: [1, 4, 7]
``` | Pythonic way of handling string formatted loop | [
"",
"python",
"python-3.3",
""
] |
How can I ask if a string pattern, in this case `C`, exists within any element of this set without removing them each and looking at them?
This test fails, and I am not sure why. My guess is that Python is checking if any element in the set **is** `C`, instead of if any element **contains** `C`:
```
In [1]: seto = set()
In [2]: seto.add('C123.45.32')
In [3]: seto.add('C2345.345.32')
In [4]: 'C' in seto
Out[4]: False
```
I know that I can iterate them set to make this check:
```
In [11]: for x in seto:
   ....:     if 'C' in x:
   ....:         print(x)
   ....:
C2345.345.32
C123.45.32
```
But that is not what I am looking to do in this case. Ok thanks for the help!
**Edit**
I am sorry, these are set operations, not list as my original post implied. | ```
'C' in seto
```
This checks to see if any of the members of seto is the exact string `'C'`. Not a substring, but exactly that string. To check for a substring, you'll want to iterate over the set and perform a check on each item.
```
any('C' in item for item in seto)
```
The exact nature of the test can be easily changed. For instance, if you want to be stricter about where `C` can appear:
```
any(item.startswith('C') for item in seto)
``` | Taking [John's answer](https://stackoverflow.com/a/18196208/102441) one stage further, if you want to use the subset of items containing `C`:
```
items_with_c = {item for item in seto if 'C' in item}
if items_with_c:
do_something_with(items_with_c)
else:
print "No items contain C"
``` | How can I tell if a string pattern exists within any element of a set in Python? | [
"",
"python",
"python-3.x",
"iteration",
""
] |
With this DataFrame, how can I conditionally set `rating` to 0 when `line_race` is equal to zero?
```
line_track line_race rating foreign
25 MTH 10 84 False
26 MTH 6 88 False
27 TAM 5 87 False
28 GP 2 86 False
29 GP 7 59 False
30 LCH 0 103 True
31 LEO 0 125 True
32 YOR 0 126 True
33 ASC 0 124 True
```
In other words, what is the proper way on a DataFrame to say if ColumnA = x then ColumnB = y else ColumnB = ColumnB | ```
df.loc[df['line_race'] == 0, 'rating'] = 0
``` | Use [numpy.where](https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html) to say *if ColumnA = x then ColumnB = y else ColumnB = ColumnB*:
```
df['rating'] = np.where(df['line_race']==0, 0, df['rating'])
``` | How to conditionally update DataFrame column in Pandas | [
"",
"python",
"pandas",
""
] |
I'm working on a code-checker for my students (I tutor). The project is that they write a function that prints a specific string using the `print` keyword. I want to be able to test what they have printed by storing it and matching to a list (or something similar). The basic setup is:
```
def checker():
    #run user code
    do some other things like save and check error messages etc
```
Now somewhere in this `checker` function I want to be able to keep track of what was printed. In Javascript, I was able to do something like:
```
var logs = [];
var hold_logger = console.log //saves the console.log so nothing gets ruined
console.log = function (x) { logs.push(x) };
```
Now when I run the students code, instead of printing to the console, it pushes the value to `logs`. I want to achieve the same thing in Python 2.7. | You can assign a different file-like object to [`sys.stdout`](http://docs.python.org/2/library/sys.html#sys.stdout); anything that is printed is written to that object.
You can use a [`io.BytesIO()` object](http://docs.python.org/2/library/io.html#io.BytesIO) to replace `stdout`:
```
import sys
from io import BytesIO
orig_stdout, sys.stdout = sys.stdout, BytesIO()
```
`io` is the newer, more robust I/O library from Python 3, available in Python 2 as well; `io.BytesIO()` is the more robust version of `StringIO.StringIO()`.
You can then inspect what was printed by calling `sys.stdout.getvalue()`; when done, you can restore from `orig_stdout`.
Demo:
```
>>> import sys
>>> from io import BytesIO
>>> orig_stdout, sys.stdout = sys.stdout, BytesIO()
>>> print 'Hello world!'
>>> output = sys.stdout.getvalue()
>>> sys.stdout = orig_stdout
>>> output
'Hello world!\n'
```
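As an aside, if a move to Python 3 is ever on the table, `contextlib.redirect_stdout` packages up the same swap-and-restore dance (an editor's sketch — everything above targets Python 2.7):

```
import io
from contextlib import redirect_stdout

buf = io.StringIO()
with redirect_stdout(buf):   # sys.stdout is swapped in and restored for us
    print('Hello world!')    # student code would run here
output = buf.getvalue()
print(repr(output))  # 'Hello world!\n'
```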
Note that I restored `sys.stdout` from an earlier reference. You could also use [`sys.__stdout__`](http://docs.python.org/2/library/sys.html#sys.__stdout__), *provided nothing else replaces `sys.stdout`*; saving a reference to what `sys.stdout` is pointing to *now* is safer. | Here's one way - you can replace the standard output file object with a custom one:
```
import sys
# A very basic logging class (it should really implement all the file object methods, but for your purposes, this will probably suffice).
class basicLogger:
    def __init__(self):
        self.log = []
    def write(self, msg):
        self.log.append(msg)
    def writelines(self, msgs):
        for msg in msgs:
            self.log.append(msg)

log = basicLogger()
# Replaces the default output file object with your logger
sys.stdout = log

def checker():
    #run user code
    print "hello!"
    print
    print "I am a student!"

checker()
# This replaces the original output file object.
sys.stdout = sys.__stdout__

print "Print log:"
for msg in log.log:
    print "\t", msg,
```
output:
```
Print log:
hello!
I am a student!
``` | Python 2.7 Keeping track of what's been printed | [
"",
"python",
"logging",
"python-2.7",
""
] |
I want to show the content of an object using the following code:
```
def get(self):
    url = "https://www.googleapis.com/language/translate/v2?key=MY-BILLING-KEY&q=hello&source=en&target=ja"
    data = urllib2.urlopen(url)
    parse_data = json.load(data)
    parsed_data = parse_data['data']['translations']
    # This command is ok
    self.response.out.write("<br>")
    # This command shows the above error
    self.response.out.write(str(json.loads(parsed_data[u'data'][u'translations'][u'translatedText'])))
```
But the error
> TypeError: expected string or buffer
appears as a result of the line:
```
self.response.out.write(str(json.loads(parsed_data[u'data'][u'translations'][u'translatedText'])))
```
or
```
self.response.out.write(json.loads(parsed_data[u'data'][u'translations'][u'translatedText']))
```
---
**UPDATE** (fix):
I needed to convert from string to JSON object:
```
# Convert to String
parsed_data = json.dumps(parsed_data)
# Convert to JSON Object
json_object = json.loads(parsed_data)
# Parse JSON Object
translatedObject = json_object[0]['translatedText']
# Output to page, by using HTML
self.response.out.write(translatedObject)
``` | All I need, is convert from String to JSON object, as the following code :
```
# Convert to String
parsed_data = json.dumps(parsed_data)
# Convert to JSON Object
json_object = json.loads(parsed_data)
# Parse JSON Object
translatedObject = json_object[0]['translatedText']
# Output to page, by using HTML
self.response.out.write(translatedObject)
``` | ```
parse_data = json.load(data)
parsed_data = parse_data['data']['translations']
```
Those lines already did the json.load, and extracted 'data' and 'translations'. Then instead of:
```
self.response.out.write(str(
json.loads(parsed_data)[u'data'][u'translations'][u'translatedText']))
```
you should:
```
self.response.out.write(str(
parsed_data[u'translatedText']))
``` | TypeError: expected string or buffer in Google App Engine's Python | [
"",
"python",
"json",
"google-app-engine",
"typeerror",
""
] |
Sorry for my English, it's not my native language.
I want to develop application with this basic functionality:
1. User can create tasks with subtasks. Level of hierarchy should be unlimited, so subtasks can have subtasks themselves and so on.
2. User can create tags, and every task can have an unlimited number of tags, so the user can view all tasks that have been tagged with certain tags.
3. Tasks should sync with the cloud.
4. It should work fast. So, for example, the user wouldn't experience any lag in moving to the next level of tasks or in displaying items with different tags.
Well, there is plenty of other functionality, like reminders and stuff, but it's not linked to this choice of bare JSON, nosql db or sqlite.
The question is what is more suitable for this functionality?
For example :
1. In sql we would have to store IDs of subtasks somewhere in the schema and do O(n) queries for n levels of hierarchy, but it's rather easy in json. We could have a tasks object with an array of tasks which are its subtasks. Or in xml (I don't know if that could be done in JSON) we could just have an array of some IDs of subtasks, probably. What should I choose, how do you think?
2. In JSON I could have string items called "tag". And every task could have an array of tags, pretty simple. In sql I would have to have another table "tags" which would hold all tags and task IDs, one row for each unique pair of tag and task ID, which is kind of redundant.
3. Syncing with the cloud is easy with JSON. I could just have one big file with all tasks and upload it from or to the server depending on where the latest changes were (well, basically). But this way I would be transferring the whole file even when the changes were few, so the app would consume more traffic. Maybe a noSQL DB could resolve this? In sql the app could also do that, but it would have to transfer all the db data.
couchDB is considered a good type of key-value store for replication across multiple servers/clients. This is probably an appropriate type of database for the problem you are describing.
Otherwise, using any kind of SQL/noSQL style database with a REST access point will work in your scenario. I think MongoDB is a particularly good choice because despite being key/value store it it quick to learn for someone coming from the SQL word - it also returns answers to your queries in json - so if you using something like NodeJS as your REST server it can simplify things a fair bit.
As for general structure of your application - have a look at the docs for asana at <http://developer.asana.com/documentation/> - It should give you a good starting point. | Couchbase Lite is an embedded native NoSQL database for iOS and Android. It stores JSON documents and transparently syncs them via your cloud servers to other devices, with a security model designed for fine grained access control in multi-user interactive applications.
Github repos here:
* <https://github.com/couchbase/couchbase-lite-ios>
* <https://github.com/couchbase/couchbase-lite-android>
Mailing list here: <https://groups.google.com/forum/#!forum/mobile-couchbase> | bare JSON, nosql db or sqlite for android and web app (maybe iOS) for task managment | [
"",
"android",
"sql",
"json",
"web",
"nosql",
""
] |
I have a "date" table similar to:
```
id date
----------------
1 2012-02-02
2 2013-02-02
3 2014-04-06
```
and a "date\_range" table similar to:
```
start end
--------------------------
2011-01-01 2013-01-01
2014-01-01 2016-01-01
```
How can I get results from the "date" table where date does not fall between one of the "date\_range" table entries?
The expected result is id->2, date->2013-02-02.
I've tried:
```
SELECT * FROM date
JOIN date_range
ON date.date NOT BETWEEN date_range.start AND date_range.end
```
and the obvious fail:
```
SELECT * FROM date
WHERE date.date NOT BETWEEN (SELECT start, end FROM date_range)
``` | Use `NOT EXISTS`
```
SELECT * FROM [Date] D
WHERE NOT EXISTS
(
SELECT 1 FROM Date_Range DR
WHERE D.[Date] >= DR.[Start]
AND D.[Date] <= DR.[End]
)
```
[**Demo**](http://sqlfiddle.com/#!6/14c0b/3/0) | ```
SELECT * FROM date
LEFT JOIN date_range
ON date.date >= date_range.start AND date.date <= date_range.end
WHERE date_range.start IS NULL
``` | SQL results from a second table's ranges | [
"",
"sql",
"nested",
"between",
""
] |
I've been using ZeroMQ's request/response sockets for the purpose of exchanging messages between a web application and slave applications that were used for offloading processing. I've noticed that in a few cases, not all ZMQ messages sent were actually received by the other side. What's even weirder is that this happens even with the **IPC** protocol, which, I thought, would be pretty reliable.
What could be responsible for the sent message not being delivered, while not producing any errors?
*Here is an example of the client code:*
```
# ironic
class ReliableClient(object):
    def _reconnect(self):
        if self.socket:
            self.socket.close()
            self.socket = None
        self.socket = self.context.socket(zmq.REQ)
        self.socket.connect(self.server_url)
        # Give the server 2 sec to respond
        self.socket.RCVTIMEO = 2000
        self.socket.SNDTIMEO = 2000
        self.socket.LINGER = 3

    def __init__(self, server_url=None, server_name=None):
        self.socket = None
        self.server_url = server_url
        if server_name is None:
            self.server_name = server_url
        else:
            self.server_name = server_name
        self.lock = threading.Lock()
        self.context = zmq.Context()
        self._reconnect()

    def msg(self, msg):
        raw_out = dumps(msg)
        # send
        self.lock.acquire()
        try:
            self.socket.send(
                raw_out,
                copy=True
            )
        except zmq.ZMQError as ex:
            log.exception(ex, '%s: failed to send', self.server_name)
            self.lock.release()
            raise CommunicationError('failed to send')
        # receive
        try:
            raw_in = self.socket.recv()
        except zmq.ZMQError as ex:
            log.exception(ex, '%s: failed to receive', self.server_name)
            raise CommunicationError('failed to receive')
        finally:
            self.lock.release()
        msg_in = loads(raw_in)
        return msg_in
``` | I'm not familiar with the IPC protocol, but I've worked with ZMQ/TCP extensively.
Even the simplest ZMQ REQ/REP pattern over TCP **will never drop messages** if you're using it properly. It may hang indefinitely if your network connection or remote endpoint goes down, but it will never silently fail. There are certain sockets that are by design going to drop messages in certain circumstances. For example, some may drop messages when the HWM is reached. | > What could be responsible for the sent message not being delivered,
> while not producing any errors?
If you're using `ROUTER` sockets to intermediate messages from server to zmq worker processes, `ROUTER`s, by default, will drop any outbound messages they can't deliver. What does "can't deliver" mean? ROUTERs maintain an internal mapping of client identity to client connection, and because all messages on a router have an identity, either provided by the client or auto-assigned by the router, any outbound message that doesn't have a corresponding connection on which to route, will be dropped, silently.
You can identify when this scenario occurs by telling the ROUTER to report undeliverable messages, in other words, the ROUTER generates an error when it can't deliver a message. In Java, the method is `routerSocket.setRouterMandatory(true)`; in pyzmq it should be the `zmq.ROUTER_MANDATORY` socket option, e.g. `socket.setsockopt(zmq.ROUTER_MANDATORY, 1)` (I'm not a py guy, lol)
If you determine the ROUTER is dropping messages, the question then becomes why? In my case, I had a zmq client that sent and received server messages on different threads, and the receiving thread simply wasn't connecting fast enough for the initial "Ok" response from the server, so it was just a timing issue in the client.
Hope that helps | ZeroMQ reliability? | [
"",
"python",
"zeromq",
""
] |
I want to run the Linux word count utility wc to determine the number of lines currently in /var/log/syslog, so that I can detect that it's growing. I've tried various tests, and while I get the results back from wc, the output includes both the line count and the file name (e.g., /var/log/syslog).
So it's returning:
1338 /var/log/syslog
But I only want the line count, so I want to strip off the /var/log/syslog portion, and just keep 1338.
I have tried converting it to string from bytestring, and then stripping the result, but no joy. Same story for converting to string and stripping, decoding, etc - all fail to produce the output I'm looking for.
These are some examples of what I get, with 1338 lines in syslog:
* b'1338 /var/log/syslog\n'
* 1338 /var/log/syslog
Here's some test code I've written to try and crack this nut, but no solution:
```
import subprocess
#check_output returns byte string
stdoutdata = subprocess.check_output("wc --lines /var/log/syslog", shell=True)
print("2A stdoutdata: " + str(stdoutdata))
stdoutdata = stdoutdata.decode("utf-8")
print("2B stdoutdata: " + str(stdoutdata))
stdoutdata=stdoutdata.strip()
print("2C stdoutdata: " + str(stdoutdata))
```
The output from this is:
* 2A stdoutdata: b'1338 /var/log/syslog\n'
* 2B stdoutdata: 1338 /var/log/syslog
* 2C stdoutdata: 1338 /var/log/syslog
* 2D stdoutdata: 1338 /var/log/syslog | I suggest that you use `subprocess.getoutput()` as it does exactly what you want—run a command in a shell and get its [string output](http://docs.python.org/3.1/library/subprocess.html#subprocess.getoutput) (as opposed to [byte string](http://docs.python.org/3.1/library/subprocess.html#subprocess.check_output) output). Then you can [split on whitespace](http://docs.python.org/3.1/library/stdtypes.html#str.split) and grab the first element from the returned list of strings.
Try this:
```
import subprocess
stdoutdata = subprocess.getoutput("wc --lines /var/log/syslog")
print("stdoutdata: " + stdoutdata.split()[0])
``` | Since Python 3.6 you can make `check_output()` return a `str` instead of `bytes` by giving it an *encoding* parameter:
```
check_output('wc --lines /var/log/syslog', encoding='UTF-8')
```
But since you just want the count, and both `split()` and `int()` are usable with `bytes`, you don't need to bother with the encoding:
```
linecount = int(check_output('wc -l /var/log/syslog').split()[0])
```
While some things might be easier with an external program (e.g., counting log line entries printed by `journalctl`), in this particular case you don't need to use an external program. The simplest Python-only solution is:
```
with open('/var/log/syslog', 'rt') as f:
    linecount = len(f.readlines())
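
# For huge files, stream instead of reading everything (editor's sketch):
def count_lines(path):
    count = 0
    with open(path, 'rt') as f:
        for line in f:   # only one line held in memory at a time
            count += 1
    return count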
```
This does have the disadvantage that it reads the entire file into memory; if it's a huge file instead initialize `linecount = 0` before you open the file and use a `for line in f: linecount += 1` loop instead of `readlines()` to have only a small part of the file in memory as you count. | Python3 subprocess output | [
"",
"python",
"python-3.x",
"subprocess",
""
] |
I have a table in my PostgreSQL database which has 3 columns - `c_uid`, `c_defaults` and `c_settings`. `c_uid` simply stores the name of a user and `c_defaults` is a long piece of text which contains a lot of data w.r.t that user.
I have to execute a statement from a bash script which selects the value of the `c_defaults` column based on the `c_uid` value and this needs to be done by the database user 'postgres'.
On the CLI I can do the following:
```
[mymachine]# su postgres
bash-4.1$psql
postgres=#\c database_name
You are now connected to database "database_name" as user "postgres".
database_name=#SELECT c_defaults FROM user_info WHERE c_uid = 'testuser';
```
However, how do I achieve this through a bash script?
The aim is to get the information from that column, edit it and write it back into that column - all through a bash script. | Try this one:
```
#!/bin/bash
psql -U postgres -d database_name -c "SELECT c_defaults FROM user_info WHERE c_uid = 'testuser'"
```
Or using `su`:
```
#!/bin/bash
su -c "psql -d database_name -c \"SELECT c_defaults FROM user_info WHERE c_uid = 'testuser'\"" postgres
```
And also `sudo`:
```
#!/bin/bash
sudo -u postgres -H -- psql -d database_name -c "SELECT c_defaults FROM user_info WHERE c_uid = 'testuser'"
``` | You can connect to psql as below and write your sql queries like you do in a regular postgres function within the block. There, bash variables can be used. However, the script should be strictly sql, even for comments you need to use -- instead of #:
```
#!/bin/bash
psql postgresql://<user>:<password>@<host>/<db> << EOF
<your sql queries go here>
EOF
``` | PostgreSQL - query from bash script as database user 'postgres' | [
"",
"sql",
"bash",
"postgresql",
""
] |
I have a function `accuracy(n)` which sets accuracy for float number n. Is it possible to overload somekind of operator of type float so that:
```
f = 1.5
x = f
```
would be automatically implemented (in the background) as:
```
x = accuracy(f)
```
Thanks a lot
**Edit** Is it possible to overload `__eq__` of `float`? | There isn't any overloading you can do so that those exact lines will behave the way you described, but you can get something similar:
```
class Foo(object):
    @property
    def x(self):
        return self._x

    @x.setter
    def x(self, value):
        self._x = accuracy(value)

foo = Foo()
f = 1.5
foo.x = f # set foo._x to accuracy(f)
print(foo.x) # prints foo._x
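
# Editor's sketch: alternatively, subclass float so the adjustment happens
# once, at construction, and the value behaves as a plain float afterwards.
# accuracy() is assumed here to be simple rounding (it is not defined in
# the question):
def accuracy(value, places=2):
    return round(value, places)

class Accurate(float):
    def __new__(cls, value):
        return super(Accurate, cls).__new__(cls, accuracy(value))

x = Accurate(1.23456)
print(x)  # 1.23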
``` | No. But you can create a class that defines `__float__()` so that the new value is used when the object is used in a float context. | redefine behaviour of float | [
"",
"python",
"operator-overloading",
""
] |
I want to check if a string the user inputted contains one of the strings in a list.
I know how to check if the string contains **one string** but I want to know how to check if a string contains either this string, or that string, or another string.
For example, if I am checking the user's input to see if it contains a vowel.. there can be 5 different choices when we're talking about vowels (a, e, i, o, or u). How can I check to see if the string the user inputted has one of those? | ```
>>> vowels = set(['a', 'e', 'i', 'o', 'u'])
>>> inp = "foobar"
>>> bool(vowels.intersection(inp))
True
>>> bool(vowels.intersection('qwty'))
False
``` | Using [any](http://docs.python.org/2/library/functions.html#any):
```
>>> vowels = 'a', 'e', 'i', 'o', 'u'
>>> s = 'cat'
>>> any(ch in vowels for ch in s)
True
>>> s = 'pyth-n'
>>> any(ch in vowels for ch in s)
False
``` | How to check if a string contains a string from a list? | [
"",
"python",
"python-3.x",
""
] |
Using Cython, I am developing an extension module which gets build as an .so file. I then test it using IPython. During development, I frequently need to make changes and rebuild. I also need to exit the IPython shell and reenter all commands. Reimporting the module with
```
import imp
imp.reload(Extension)
```
does not work, the code is not updated. Is there a way for me to avoid restarting the IPython shell after I rebuild the module? | C extensions cannot be reloaded without restarting the process (see [this official Python bug](http://bugs.python.org/issue1144263) for more info).
Since you are already using IPython, I might recommend using one of the two-process interfaces such as the Notebook or QtConsole, if it's acceptable to you. These allow you to easily *restart* the kernel process, which allows you to load the module anew. Obviously, this isn't as convenient as reload for a Python module because you have to re-execute to get back to the same state. But that is not avoidable, so the point is to mitigate the inconvenience.
I find the notebook interface the most convenient for developing extensions, because it provides the easiest way to get back to the same state:
1. rebuild the extension
2. restart kernel
3. Run All to re-execute the notebook
and you are back to the same state with the new version of the extension. Mileage may vary, depending on how costly your interactive work is to re-run, but it has served me well. | You can try setting an `autoreload` in the `ipython shell`, [documentation here](http://ipython.org/ipython-doc/dev/config/extensions/autoreload.html).
Set `autoreload`
```
In [1]: %load_ext autoreload
In [2]: %autoreload 2
```
Set `autoreload` on specific module
```
%aimport foo
```
Also, take a look at [`dreload`](http://ipython.org/ipython-doc/rel-0.10.1/html/interactive/reference.html#dreload) (more on `dreload` [here](http://ipython.org/ipython-doc/stable/interactive/reference.html)) and the [run-magic](http://ipython.org/ipython-doc/rel-0.10.1/html/interactive/tutorial.html#the-run-magic-command) | Reloading a Python extension module from IPython | [
"",
"python",
"python-3.x",
"ipython",
"cython",
""
] |
There are two tables named *masters* and *versions*. The *versions* table holds entries of the *master* table at different points in time.
```
-------------------------
masters
-------------------------
id | name | added_at
----+-------+------------
1 | a-old | 2013-08-13
2 | b-new | 2012-04-19
3 | c-old | 2012-02-01
4 | d-old | 2012-12-24
```
It is guaranteed that there is **at least one** *versions* entry for each *masters* entry.
```
---------------------------------------------
versions
---------------------------------------------
id | name | added_at | notes | master_id
----+-------+--------------------------------
1 | a-new | 2013-08-14 | lorem | 1
1 | a-old | 2013-08-13 | lorem | 1
2 | b-new | 2012-04-19 | lorem | 2
3 | c-old | 2012-02-01 | lorem | 3
4 | d-new | 2013-02-20 | lorem | 4
5 | d-old | 2012-12-24 | lorem | 4
```
The tables can also be found in this [SQL Fiddle](http://sqlfiddle.com/#!2/1cf9f/1).
The latest *version* of each *master* record can be selected as shown in this example for *masters* record `2`:
```
SELECT * FROM versions
WHERE master_id = 2
ORDER BY added_at DESC
LIMIT 1;
```
How can I update **each record** of the *masters* table with its **latest** *version* in **one command**? I want to overwrite the values for both the `name` and `added_at` columns. Please note, there are additional columns in the *versions* table which do not exist in the *masters* table such as `notes`.
Can the update been done with a `JOIN` so it performs fast on larger tables? | There is no need to fire subquery twice.
* [Here is SQLFiddle Demo](http://sqlfiddle.com/#!2/e56eb9/1)
Below is the update statement
```
update masters m, (
select id, name, added_at, master_id
from versions
order by added_at desc
) V
set
m.name = v.name,
m.added_at = v.added_at
where v.master_id = m.id;
``` | This might do what you need:
```
REPLACE INTO masters
SELECT v.master_id,v.name,v.added_at
FROM versions v
WHERE v.added_at = (SELECT MAX(vi.added_at)
FROM versions vi
WHERE vi.master_id = v.master_id);
```
Note that this relies on masters having a primary key on id and is MySQL specific. | How to update multiple columns based on values from an associated table? | [
"",
"mysql",
"sql",
"sql-update",
"sqldatetime",
"sql-limit",
""
] |
I just tried to learn both list comprehensions and Lambda functions. I think I understand the concept but I have been given a task to create a program that when fed a positive integer creates the identity matrix. Basically if I fed in 2 it would give me: [[1, 0],[0, 1]] and if I gave it 3: [[1, 0, 0],[0, 1, 0],[0, 0, 1]], so lists within a list.
Now I need to create this all within a lambda function. So that if I type:
FUNCTIONNAME(x) it will retrieve the identity matrix of size x-by-x.
By the way x will always be a positive integer.
This is what I have so far:
```
FUNCTIONNAME = lambda x: ##insertCodeHere## for i in range(1, x)
```
I think I am doing it right but I don't know. If anyone has an idea please help! | How about:
```
>>> imatrix = lambda n: [[1 if j == i else 0 for j in range(n)] for i in range(n)]
>>> imatrix(3)
[[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```
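A side note worth knowing: every row produced this way is a fresh list. The tempting shortcut `[[0] * n] * n` instead aliases one and the same row `n` times:

```python
imatrix = lambda n: [[1 if j == i else 0 for j in range(n)] for i in range(n)]

good = imatrix(3)
good[0][1] = 9           # touches only row 0
print(good[1])           # [0, 1, 0]

bad = [[0] * 3] * 3      # three references to one single row object
bad[0][1] = 9
print(bad[1])            # [0, 9, 0] -- every "row" changed
```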
`1 if j == i else 0` is an example of Python's [conditional expression](http://www.python.org/dev/peps/pep-0308/). | This would be my favorite way to do it:
```
identity = lambda x: [[int(i==j) for i in range(x)] for j in range(x)]
```
It takes advantage of the fact that `True` maps to 1 and `False` maps to 0. | Python Lambda Identity Matrix | [
"",
"python",
"python-3.x",
"matrix",
"lambda",
"algebra",
""
] |
Can the list
```
mylist = ['a',1,2,'b',3,4,'c',5,6]
```
be used to create the dict
```
mydict = {'a':(1,2),'b':(3,4),'c':(5,6)}
``` | You can try something like this:
```
>>> mylist = ['a',1,2,'b',3,4,'c',5,6]
>>>
>>> v = iter(mylist)
>>> mydict = {s: (next(v),next(v)) for s in v}
>>> mydict
{'a': (1, 2), 'c': (5, 6), 'b': (3, 4)}
``` | Only if you have some kind of criterion for which items are the keys. If the strings are the keys, then:
```
d = {}
key = None
for item in my_list:
    if isinstance(item, str):
        key = item
    else:
        d.setdefault(key, []).append(item)
``` | Python list to dict with multiple values for each key | [
"",
"python",
""
] |
I'm trying to set up a Flask application on a machine running Apache and mod\_wsgi. My application runs 'randomly' well, meaning that sometimes it works and sometimes I refresh it and it throws an Internal Server Error. It seems quite random. I have cleared the cache of my browser, tried a different browser, tried incognito mode, and asked a friend to try from his laptop. It always shows this intermittent 500 behaviour.
Does anyone have any ideas where I can look for the cause? Or maybe you had this problem before?
All the data I can think of about this is below, let me know if you need anything else.
Thanks!
---
The Apache error\_log shows the following when the refreshing fails:
```
[Wed Aug 14 16:42:52 2013] [error] [client 171.65.95.100] mod_wsgi (pid=1160): Target WSGI script '/home/server/servers/flaskapp.wsgi' cannot be loaded as Python module.
[Wed Aug 14 16:42:52 2013] [error] [client 171.65.95.100] mod_wsgi (pid=1160): Exception occurred processing WSGI script '/home/server/servers/flaskapp.wsgi'.
[Wed Aug 14 16:42:52 2013] [error] [client 171.65.95.100] Traceback (most recent call last):
[Wed Aug 14 16:42:52 2013] [error] [client 171.65.95.100] File "/home/server/servers/flaskapp.wsgi", line 5, in <module>
[Wed Aug 14 16:42:52 2013] [error] [client 171.65.95.100] from flaskapp.frontend import app
[Wed Aug 14 16:42:52 2013] [error] [client 171.65.95.100] ImportError: cannot import name app
```
The application is organized like this:
```
flaskapp.wsgi
flaskapp/
    __init__.py (empty)
    settings.py
    frontend/
        __init__.py (app is defined here)
        static/
            style.css
        templates/
            index.html
        views.py
```
The `__init__.py` contains the following:
```
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
app = Flask(__name__)
app.config.from_object('flaskapp.settings')
db = SQLAlchemy(app)
import flaskapp.views
```
The configuration file in the Apache httpd.conf file related to this application is:
```
<VirtualHost *:80>
    ServerName <redacted>
    WSGIDaemonProcess flaskapp user=server group=server
    WSGIScriptAlias /flaskapp /home/server/servers/flaskapp.wsgi
    <Directory /home/server/servers/flaskapp/>
        WSGIProcessGroup flaskapp
        WSGIApplicationGroup %{GLOBAL}
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>
``` | Miguel's answer makes sense and indeed restarting the server fixes this and other problems (changes not taking effect).
My uneducated guess is that the different processes running under Apache load at some point the modules of the application and when a refresh is issued they won't 'bother' to update this information.
Restarting the Apache server, thus killing all these processes and re-spawning new ones, solves the issue. | I had the same problem on Apache+wsgi+Django. I have tried [setting up wsgi in Daemon mode](https://docs.djangoproject.com/en/1.8/howto/deployment/wsgi/modwsgi/#using-mod-wsgi-daemon-mode), as recommended by Django manual, and that seems to solve the problem. I have now done 1000 pageloads with no 500 responses.
The same solution should probably work using a Flask setup. | Intermittent 500 HTTP errors with Flask and WSGI | [
"",
"python",
"apache",
"flask",
"mod-wsgi",
"wsgi",
""
] |
I have some arbitrary curve in 3 dimensions made up of a list of XYZ cartesian points. The points are not evenly distributed (there's a time factor). How can I 'rebuild' the curve with a given number of points that should make up the curve? I see this done in 3D modeling programs, so I'm pretty sure it's possible, I just don't know how.

Based on the answer, I needed it in Python, so I started working to convert interparc into Python. I got as far as the linear interpolation. It is probably inefficient and has redundancies, but maybe it will be useful to someone <http://pastebin.com/L9NFvJyA> | I'd use [interparc](http://www.mathworks.com/matlabcentral/fileexchange/34874-interparc), a tool of mine designed to do exactly that. It fits a spline through a general space curve in 2 or more dimensions, then chooses points that are equally spaced in terms of distance along that curve. In the case of a cubic spline, the solution uses an odesolver to do what must be a numerical integration so it is a bit slower, but it is still reasonably fast. In many cases, a simple linear interpolation (as I used here) will be entirely adequate, and extremely fast.
The curve may be completely general, even crossing over itself. I'll give a simple example for a 3-d space curve:
```
t = linspace(0,1,500).^3;
x = sin(2*pi*t);
y = sin(pi*t);
z = cos(3*x + y);
plot3(x,y,z,'o')
grid on
box on
view(-9,12)
```

```
xyzi = interparc(100,x,y,z,'lin');
plot3(xyzi(:,1),xyzi(:,2),xyzi(:,3),'o')
box on
grid on
view(-9,12)
```
 | First of all, thank you to Mr. John D'Errico for interparc. What a great job!
I too was facing this problem but am not familiar with the MATLAB engine API. Given that, I tried to convert part of the interparc Matlab code to Python (just including the linear interpolant because it would be enough to address my problem).
And so here is my code; hope it can help all the pythonics seeking something similar:
```
import numpy as np
def interpcurve(N, pX, pY):
    # equally spaced in arclength
    N = np.transpose(np.linspace(0, 1, N))
    # how many points will be uniformly interpolated?
    nt = N.size
    # number of points on the curve
    n = pX.size
    pxy = np.array((pX, pY)).T
    p1 = pxy[0, :]
    pend = pxy[-1, :]
    last_segment = np.linalg.norm(np.subtract(p1, pend))
    epsilon = 10 * np.finfo(float).eps
    # IF the two end points are not close enough, let's close the curve
    if last_segment > epsilon * np.linalg.norm(np.amax(abs(pxy), axis=0)):
        pxy = np.vstack((pxy, p1))
        nt = nt + 1
    else:
        print('Contour already closed')
    pt = np.zeros((nt, 2))
    # Compute the chordal arclength of each segment.
    chordlen = (np.sum(np.diff(pxy, axis=0)**2, axis=1))**0.5
    # Normalize the arclengths to a unit total
    chordlen = chordlen / np.sum(chordlen)
    # cumulative arclength
    cumarc = np.append(0, np.cumsum(chordlen))
    tbins = np.digitize(N, cumarc)  # bin index in which each N is in
    # catch any problems at the ends (note the parentheses: `|` binds
    # tighter than comparisons, so each side must be parenthesized)
    tbins[np.where((tbins <= 0) | (N <= 0))] = 1
    tbins[np.where((tbins >= n) | (N >= 1))] = n - 1
    s = np.divide((N - cumarc[tbins]), chordlen[tbins-1])
    pt = pxy[tbins, :] + np.multiply((pxy[tbins, :] - pxy[tbins-1, :]), (np.vstack([s]*2)).T)
    return pt
``` | How to redistribute points evenly over a curve | [
"",
"python",
"math",
"3d",
""
] |
I have the following two code samples:
Example 1:
```
class MyClass(object):
    def __init__(self, key, value):
        self._dict = self._dict.update({key:value})
m = MyClass('ten',10)
print m._dict
```
Output:
```
AttributeError: 'MyClass' object has no attribute '_dict'
```
Example2:
```
class MyClass(object):
    _dict = {}
    def __init__(self, key, value):
        self._dict = self._dict.update({key:value})
m = MyClass('ten',10)
print m._dict
```
Output:
`None`
I am quite surprised by the above behavior.
Why does example 2 run successfully just because of the added `_dict = {}`
line, which is at class scope?
Also, why is the output `None`?
I believed class-scope variables had no relation to instance variables
(especially ones accessed via `self`).
Any explanation? | The `None` output is because `dict.update` returns None. It modifies the dictionary itself, but does not return anything. So you probably wanted `self._dict.update({key:value})`. However, `self._dict` doesn't exist at initialization. So it would make more sense to do `self._dict = {key: value}`. If you're trying to modify the object's *internal* dictionary, then you should do `self.__dict__.update({key:value})`. However, this is bad practice. A better idea would be to write `setattr(self, key, value)`. The reason Example2 is working successfully is because if you try to do `getattr(instance, thing)` (which is what `instance.thing` does), and `thing` is not in `instance.__dict__`, then `instance.__class__.__dict__` will be checked instead. | Your 'example 2' defines a single dictionary at the class level. All instances of the class will share that same dictionary, at least unless you reassign \_dict on the instance.
See this question for a detailed explanation:
[Why do attribute references act like this with Python inheritance?](https://stackoverflow.com/questions/206734/why-do-attribute-references-act-like-this-with-python-inheritance)
As for why you're getting `None` - the `update` method changes its dict in place, and returns `None`. | Python : Behavior of class and instance variables | [
"",
"python",
"instance-variables",
"class-variables",
""
] |
I'm new to Django but I've been really enjoying it. But occasionally I seem to run into places where I just don't seem to get things correct. So, I'm asking for some help and guidance.
I'm trying to extend the object-tools for one of my models so I can have a Print button next to History.
My templates is as follows:
```
project/app/templates/admin/
```
I'm *successfully* extending base\_site.html with no issues.
```
project/app/templates/admin/base_site.html
```
However, when I add change\_form.html like so:
```
project/app/templates/admin/change_form.html
```
With the following:
```
{% extends 'admin/change_form.html' %}
{% block object-tools %}
<a href="one">One</a>
<a href="one">Two</a>
{% endblock %}
```
I get an exception: **maximum recursion depth exceeded while calling a Python object**
This seems like I'm missing something quite basic.
Things that I've tried:
* Many variations of the {% block %}
* extending base\_site, base etc ...
* adding /model as part of the path (project/app/templates/admin/model/change\_form.html)
I'm confused and unsuccessful.
P.S.: I'm also using a bootstrap theme from here <http://riccardo.forina.me/bootstrap-your-django-admin-in-3-minutes/> but for the purposes of this problem I'm currently not using it. | The problem is that `admin/change_form.html` in your `{% extend %}` block is getting resolved as `project/app/templates/admin/change_form.html`.
One solution is to create a subdirectory of `templates` named for your app - possibly `project/templates/admin/app/change_form.html`.
> In order to override one or more of them, first create an admin directory in your project’s templates directory. This can be any of the directories you specified in TEMPLATE\_DIRS.
>
> Within this admin directory, create sub-directories named after your app.
<https://docs.djangoproject.com/en/dev/ref/contrib/admin/#overriding-vs-replacing-an-admin-template> | That's because you're extending the template with itself. What I do is put my custom admin templates in `templates/admin`. Then in that same folder I symlink to the django admin folder (`templates/admin/admin`).
So my extends looks like:
```
{% extends 'admin/admin/change_form.html' %}
```
Make sure you also override `index.html` if you want to go down that path. | Django 1.5 extend admin/change_form.html object tools | [
"",
"python",
"django",
"recursion",
"django-templates",
"django-admin",
""
] |
I need to replace/update values in my table according to att\_id for each customer\_id.
The table looks like:
```
ID att_id customer_id value
1 5 1 name
2 30 1 12345
3 40 1
4 5 2 name2
5 30 2 12345
6 40 2
```
I'd like to replace it like this:
```
ID att_id customer_id value
1 5 1 name
2 30 1
3 40 1 12345
4 5 2 name2
5 30 2
6 40 2 12345
``` | **UPDATE**: Based on your comments *...I need to find values for attribute 30, check if they are mobile phone numbers, and if it's true, write it into value for attribute 40...* your query might look like this
```
UPDATE table1 t1 JOIN table1 t2
ON t1.customer_id = t2.customer_id
AND t1.att_id = 40
AND t2.att_id = 30
SET t1.value = t2.value
-- ,t2.value = NULL -- uncomment if you need to clear values in att_id = 30 at the same time
WHERE t2.value REGEXP '^[+]?[0-9]+$'
```
You might need to tweak a regexp to match your records ("mobile phone numbers") properly
Here is **[SQLFiddle](http://sqlfiddle.com/#!2/0e96e8/1)** demo
---
It's hard to tell for sure from your description but if you need to swap values of `att_id` `30` and `40` per customer\_id you may do something like this
```
UPDATE table1 t1 JOIN table1 t2
ON t1.customer_id = t2.customer_id
AND t1.att_id = 40
AND t2.att_id = 30
SET t1.value = t2.value,
t2.value = t1.value
```
Here is **[SQLFiddle](http://sqlfiddle.com/#!2/8572c/1)** demo
or if you need to put values of `att_id = 30` to `att_id = 40` and "clear" values of `att_id = 30`
```
UPDATE table1 t1 JOIN table1 t2
ON t1.customer_id = t2.customer_id
AND t1.att_id = 40
AND t2.att_id = 30
SET t1.value = t2.value,
t2.value = NULL
```
Here is **[SQLFiddle](http://sqlfiddle.com/#!2/a080e/1)** demo | Here is a general approach for swapping the values on rows with `att_id` equal to 30 and 40:
```
update t join
t t30
on t.customer_Id = t30.customer_Id and t30.att_id = 30 join
t t40
on t.customer_Id = t40.customer_Id and t40.att_id = 40
set t.value = (case when att_id = 30 then t40.value
when att_id = 40 then t30.value
else t.value
end)
where att_id in (30, 40);
``` | How to update value in same table using mysql? | [
"",
"mysql",
"sql",
""
] |
I have a simple MySQL query like this:
```
SELECT * ,
( MATCH (table.get) AGAINST('playstation ' IN BOOLEAN MODE) )
+ ( table.get LIKE '%playstation%') AS _score
FROM table
JOIN users on table.id_user = users.id
WHERE table.expire_datetime > 1375997618
HAVING _score > 0
ORDER BY RAND(table.id) ,_score DESC ;
```
If I run this query in MySQL, it usually returns more than 1 record; now I would like to LIMIT 1 and get one of them randomly, not always the same record.
Is it possible? | ```
select * from <my_table>
ORDER BY RAND()
LIMIT 4
``` | You would quit seeding the random number generator. My guess is that it is returning the first table id encountered, so the numbers are generated in the same sequence:
```
SELECT * ,
( MATCH (table.get) AGAINST('playstation ' IN BOOLEAN MODE) )
+ ( table.get LIKE '%playstation%') AS _score
FROM table
JOIN users on table.id_user = users.id
WHERE table.expire_datetime > 1375997618
HAVING _score > 0
ORDER BY RAND()
LIMIT 1;
``` | Get random record in a set of results | [
"",
"mysql",
"sql",
"random",
""
] |
In a table we have schools' data:
```
ID | Name | City
------------------
1 A X
2 B X
3 C Z
4 D Z
```
I want to have a list of each pair of schools that are in the same city:
```
Name1 | Name2
--------------
A B
C D
```
I chose schools in the same city with this:
```
SELECT Name FROM Schools
Group by City
Having City = City
```
Is it correct? How can I bring the 2 matched schools into a new table side by side?
Thanks | **UPDATE** Another way to do it
One way to do that if you insist on grouping
```
SELECT City,
MIN(Name) Name1,
MAX(Name) Name2
FROM Schools
GROUP BY City
-- HAVING COUNT(*) > 1
```
Another way
```
SELECT City,
MIN(CASE WHEN rnum = 1 THEN Name END) Name1,
MIN(CASE WHEN rnum = 2 THEN Name END) Name2
FROM
(
SELECT s.*, ROW_NUMBER() OVER (PARTITION BY City ORDER BY Name) rnum
FROM Schools s
) q
GROUP BY City
```
Sample output:
```
| CITY | NAME1 | NAME2 |
------------------------
| X | A | B |
| Z | C | D |
```
Here is **[SQLFiddle](http://sqlfiddle.com/#!3/071a7d/1)** demo | ```
SELECT a.Name, b.Name
FROM Schools a JOIN Schools b ON a.City = b.City
AND a.ID != b.ID
AND a.Name < b.Name;
```
This returns all the pairs of schools in the same city, not only a pair for each city. | Select 2 records which were grouped, Then put them in 2 columns along side | [
"",
"sql",
"sql-server",
"database",
"pivot",
""
] |
I'm trying to write a python tool that will read a logfile and process it
One thing it should do is use the paths listed in the logfile (it's a logfile for a backup tool)
```
/Volumes/Live_Jobs/Live_Jobs/*SCANS\ and\ LE\ Docs/_LE_PROOFS_DOCS/JEM_lj/JEM/0002_OXO_CorkScrew/3\ Delivery/GG_Double\ Lever\ Waiters\ Corkscrew_072613_Mike_RETOUCHED/gg_3110200_2_V3_Final.tif
```
Unfortunately the paths that I'm provided with aren't appropriately escaped and I've had trouble properly escaping in python. Perhaps python isn't the best tool for this, but I like its flexibility - it will allow me to extend whatever I write
Using the regex escape function escapes too many characters, pipes.quote method doesn't escape the spaces, and if I use a regex to replace ' ' with '\ ' I end up getting
```
/Volumes/Live_Jobs/Live_Jobs/*SCANS\\ and\\ LE\\ Docs/_LE_PROOFS_DOCS/JEM_lj/JEM/0002_OXO_CorkScrew/3\\ Delivery/GG_Double\\ Lever\\ Waiters\\ Corkscrew_072613_Mike_RETOUCHED/gg_3110200_2_V3_Final.tif
```
which are double escaped and won't pass to python functions like `os.path.getsize()`.
What am I doing wrong?? | If you're reading paths out of a file, and passing them to functions like `os.path.getsize`, you don't need to escape them. For example:
```
>>> with open('name with spaces', 'w') as f:
... f.write('abc\n')
>>> os.path.getsize('name with spaces')
4
```
In fact, there are only a handful of functions in Python that need spaces escaped, either because they're passing a string to the shell (like `os.system`) or because they're trying to do shell-like parsing on your behalf (like `subprocess.foo` with an arg string instead of an arg list).
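A quick self-contained check of that point (the file name here is made up): when you pass an argument *list*, each element arrives as exactly one argv entry, spaces included, so no escaping is ever needed.

```python
import os
import subprocess
import sys
import tempfile

path = os.path.join(tempfile.mkdtemp(), "name with spaces.tif")
open(path, "w").close()

# The child process receives the literal path as a single argument:
out = subprocess.check_output(
    [sys.executable, "-c",
     "import os, sys; print(os.path.exists(sys.argv[1]))",
     path]
)
print(out.decode().strip())  # True
```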
---
So, let's say `logfile.txt` looks like this:
```
/Volumes/My Drive/My Scans/Batch 1/foo bar.tif
/Volumes/My Drive/My Scans/Batch 1/spam eggs.tif
/Volumes/My Drive/My Scans/Batch 2/another long name.tif
```
… then something like this will work fine:
```
with open('logfile.txt') as logf:
    for line in logf:
        with open(line.rstrip()) as f:
            do_something_with_tiff_file(f)
```
Noticing those `*` characters in your example, if these are glob patterns, that's fine too:
```
with open('logfile.txt') as logf:
    for line in logf:
        for path in glob.glob(line.rstrip()):
            with open(path) as f:
                do_something_with_tiff_file(f)
```
---
If your problem is the exact opposite of what you described, and the file is full of strings that *are* escaped, and you want to unescape them, `decode('string_escape')` will undo Python-style escaping, and there are different functions to undo different kinds of escaping, but without knowing what kind of escaping you want to undo it's hard to say which function you want… | Try this:
```
myfile = open(r'c:\tmp\junkpythonfile','w')
```
The 'r' stands for a raw string.
You could also use \ like
```
myfile = open('c:\\tmp\\junkpythonfile','w')
``` | Escape space in filepath | [
"",
"python",
"regex",
""
] |
I have a text (actually lots of texts) with one ISBN somewhere inside, and I have to find it.
I know: my ISBN-13 will start with "978" followed by 10 digits.
I don't know: how many '-' (minus) characters there are and whether they are in the correct places.
My code only finds the ISBN when it contains no minus signs:
```
regex=r'978[0-9]{10}'
pattern = re.compile(regex, re.UNICODE)
for match in pattern.findall(mytext):
    print(match)
```
But how can I find ISBN like these:
* 978-123-456-789-0
* 978-1234-567890
* 9781234567890
* etc...
Is this possible with one regex-pattern?
Thanks! | This matches 10 digits and allows one optional hyphen before each:
```
regex = r'978(?:-?\d){10}'
``` | Since you can't have 2 consecutive hyphens, and it must end with a digit:
`r'978(-?\d){10}'`
... allowing for a hyphen right after the `978`, mandating a digit after every hyphen (so it does not end in a hyphen), and allowing for consecutive digits by making each hyphen optional.
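Quick check against the samples in the question (using `fullmatch`, since `findall` on a pattern with a capture group would report only the group):

```python
import re

pattern = re.compile(r'978(-?\d){10}')
for s in ('978-123-456-789-0', '978-1234-567890', '9781234567890', '978-12'):
    print(s, bool(pattern.fullmatch(s)))
# the first three print True, the too-short '978-12' prints False
```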
I would add `\b` before the `978` and after the `{10}`, to make sure the ISBNs are well separated from surrounding text.
Also, I would add `?:` right after the opening parenthesis, to make those non-capturing (slightly better performance, and also more expressive), making it:
`r'\b978(?:-?\d){10}\b'` | Find ISBN with regex in Python | [
"",
"python",
"regex",
""
] |
So what I am trying to do: if a result in my select has the same post\_data\_id as another result's post\_id, remove that result but show one of them. I've been trying for days to get this to work but have failed miserably. Please check my SQL Fiddle as well as the screenshot for what I mean
View the [SQLFiddle](http://sqlfiddle.com/#!2/fed8a/1)

Thanks | There's likely a simpler method for this, but you could use:
```
SELECT p.*
FROM posts p
LEFT JOIN following f
ON f.user_id=1 AND p.post_user_id = f.follower_id
WHERE (post_user_id=1
OR f.follower_id IS NOT NULL)
AND (POST_DATA_ID NOT IN (SELECT POST_ID
FROM posts p
LEFT JOIN following f
ON f.user_id=1 AND p.post_user_id = f.follower_id
WHERE (post_user_id=1
OR f.follower_id IS NOT NULL))
OR POST_DATA_ID IS NULL)
ORDER BY `post_id` DESC;
```
[SQL Fiddle](http://sqlfiddle.com/#!2/fed8a/15/1) | You can't really DELETE *and* SELECT in one statement. You will always need to execute two statements consecutively.
I am not sure which rows you wanted to delete (the duplicate post\_id or the duplicate post\_data\_id), but you're going to need two of the following four statements:
```
SELECT * FROM posts WHERE post_data_id IN (
SELECT post_id FROM posts);
SELECT * FROM posts WHERE post_id IN (
SELECT post_data_id FROM posts);
DELETE FROM posts WHERE post_data_id IN (
SELECT post_id FROM posts);
DELETE FROM posts WHERE post_id IN (
SELECT post_data_id FROM posts);
``` | Need some help hiding duplicates from results MYSQL | [
"",
"mysql",
"sql",
""
] |
I'm writing a simple function to take out any odd numbers from a list and return a list of only the even ones.
```
def purify(numbers):
    for i in numbers:
        if i%2!=0:
            numbers.remove(i)
    return numbers
print(purify([4,5,5,4]))
```
However, the above returns:
```
[4, 5, 4]
```
Why doesn't the second 5 get removed as it also meets the if condition?
I'm looking less for a different method to the problem and more to understand why this happens. | When you remove an item, the items that follow get moved one position to the left. This results in the loop skipping some items.
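Tracing the exact input from the question makes the skip visible:

```python
numbers = [4, 5, 5, 4]
for n in numbers:          # index 0 -> 4 (even, kept)
    if n % 2 != 0:         # index 1 -> first 5 (odd, removed);
        numbers.remove(n)  #   the second 5 slides into index 1
print(numbers)             # index 2 -> 4, so the second 5 was never visited
# [4, 5, 4]
```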
BTW, a more idiomatic way to write that code is
```
numbers = [num for num in numbers if num % 2 == 0]
``` | One option I didn't see mentioned was, ironically `filter`:
```
>>> filter(lambda x: not x % 2, [4,5,5,4])
[4, 4]
``` | How to filter a list | [
"",
"python",
""
] |
This is kind of hard to explain in words but here is an example of what I am trying to do in SQL. I have a query which returns the following records:
```
ID Z
--- ---
1 A
1 <null>
2 B
2 E
3 D
4 <null>
4 F
5 <null>
```
I need to filter this query so that each unique record (based on ID) appears only once in the output and if there are multiple records for the same ID, the output should contain the record with the value of Z column being **non-null**. If there is only a single record for a given ID and it has value of null for column Z the output still should return that record. So the output from the above query should look like this:
```
ID Z
--- ---
1 A
2 B
2 E
3 D
4 F
5 <null>
```
How would you do this in SQL? | If you need to return both 2-B and 2-E rows:
```
SELECT *
FROM YourTable t1
WHERE Z IS NOT NULL
OR NOT EXISTS
(SELECT * FROM YourTable t2
WHERE T2.ID = T1.id AND T2.z IS NOT NULL)
``` | You can use `GROUP BY` for that:
```
SELECT
ID, MAX(Z) -- Could be MIN(Z)
FROM MyTable
GROUP BY ID
```
Aggregate functions ignore `NULL`s, returning them only when all values on the group are `NULL`. | Filter unique records from a database while removing double not-null values | [
"",
"sql",
"sql-server-2008-r2",
""
] |
Is there a way that I can execute a shell program from Python, which prints its output to the screen, and read its output to a variable without displaying anything on the screen?
This sounds a little bit confusing, so maybe I can explain it better by an example.
Let's say I have a program that prints something to the screen when executed
```
bash> ./my_prog
bash> "Hello World"
```
When I want to read the output into a variable in Python, I read that a good approach is to use the `subprocess` module like so:
```
my_var = subprocess.check_output("./my_prog", shell=True)
```
With this construct, I can get the program's output into `my_var` (here `"Hello World"`), however it is also printed to the screen when I run the Python script. Is there any way to suppress this? I couldn't find anything in the `subprocess` documentation, so maybe there is another module I could use for this purpose?
EDIT:
I just found out that `commands.getoutput()` lets me do this. But is there also a way to achieve similar effects in `subprocess`? Because I was planning to make a Python3 version at some point.
---
EDIT2: Particular Example
Excerpt from the python script:
```
oechem_utils_path = "/soft/linux64/openeye/examples/oechem-utilities/"\
"openeye/toolkits/1.7.2.4/redhat-RHEL5-g++4.3-x64/examples/"\
"oechem-utilities/"
rmsd_path = oechem_utils_path + "rmsd"
for file in lMol2:
    sReturn = subprocess.check_output("{rmsd_exe} {rmsd_pars}"\
        " -in {sIn} -ref {sRef}".format(rmsd_exe=sRmsdExe,\
        rmsd_pars=sRmsdPars, sIn=file, sRef=sReference), shell=True)
    dRmsds[file] = sReturn
```
---
Screen Output (Note that not "everything" is printed to the screen, only a part of
the output, and if I use `commands.getoutput` everything works just fine:
```
/soft/linux64/openeye/examples/oechem-utilities/openeye/toolkits/1.7.2.4/redhat-RHEL5-g++4.3-x64/examples/oechem-utilities/rmsd: mols in: 1 out: 0
/soft/linux64/openeye/examples/oechem-utilities/openeye/toolkits/1.7.2.4/redhat-RHEL5-g++4.3-x64/examples/oechem-utilities/rmsd: confs in: 1 out: 0
/soft/linux64/openeye/examples/oechem-utilities/openeye/toolkits/1.7.2.4/redhat-RHEL5-g++4.3-x64/examples/oechem-utilities/rmsd - RMSD utility [OEChem 1.7.2]
/soft/linux64/openeye/examples/oechem-utilities/openeye/toolkits/1.7.2.4/redhat-RHEL5-g++4.3-x64/examples/oechem-utilities/rmsd: mols in: 1 out: 0
/soft/linux64/openeye/examples/oechem-utilities/openeye/toolkits/1.7.2.4/redhat-RHEL5-g++4.3-x64/examples/oechem-utilities/rmsd: confs in: 1 out: 0
``` | To add to Ryan Haining's answer, you can also handle stderr to make sure nothing is printed to the screen:
```
p = subprocess.Popen(command, shell=True, stdin=subprocess.PIPE, stderr=subprocess.STDOUT, stdout=subprocess.PIPE, close_fds=True)
out,err = p.communicate()
``` | If `subprocess.check_output` is not working for you, use a [Popen](http://docs.python.org/2/library/subprocess.html#subprocess.Popen) object and a `PIPE` to capture the program's output in Python.
```
prog = subprocess.Popen('./myprog', shell=True, stdout=subprocess.PIPE)
output = prog.communicate()[0]
```
the `.communicate()` method will wait for a program to finish execution and then return a tuple of `(stdout, stderr)` which is why you'll want to take the `[0]` of that.
If you also want to capture `stderr` then add `stderr=subprocess.PIPE` to the creation of the `Popen` object.
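For a runnable illustration, here is the same pattern with a small shell command standing in for `./myprog` (both streams are captured, so nothing reaches the terminal):

```python
import subprocess

# stand-in for ./myprog: writes to stdout and to stderr
prog = subprocess.Popen('echo "Hello World"; echo oops >&2',
                        shell=True,
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
out, err = prog.communicate()
print(out.decode().strip())  # Hello World
print(err.decode().strip())  # oops
```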
If you wish to capture the output of `prog` while it is running instead of waiting for it to finish, you can call `line = prog.stdout.readline()` to read one line at a time. Note that this will hang if there are no lines available until there is one. | Executing shell program in Python without printing to screen | [
"",
"python",
"python-2.7",
"subprocess",
""
] |
I am trying to delete from a table(`MyTable`) that has a `foreign key` reference linking it to 4 other tables.
I needed to delete all the data `MyTable` that is referenced by `Table1` and `Table2`, but NOT `Table3` and `Table4`. I have already deleted the data in `Table1` and `Table2`
I tried something like this:
```
delete from MyTable where ID NOT IN(SELECT MyTableID FROM Table1)
delete from MyTable where ID NOT IN(SELECT MyTableID FROM Table2)
```
But it obviously doesn't work because if it did it would inadvertently delete the data that `Table2` references.
Is there a way to delete from a table where `FKs` aren't being referenced by certain tables? | (Rewritten answer to SQL Server syntax after some basic research and finding [The DELETE statement in SQL Server](https://www.simple-talk.com/sql/learn-sql-server/the-delete-statement-in-sql-server/).)
Use the multiple table syntax of the `DELETE` statement.
```
DELETE
MyTable
FROM
MyTable
LEFT JOIN Table1 ON MyTable.ID = Table1.MyTableID
LEFT JOIN Table2 ON MyTable.ID = Table2.MyTableID
LEFT JOIN Table3 ON MyTable.ID = Table3.MyTableID
LEFT JOIN Table4 ON MyTable.ID = Table4.MyTableID
WHERE
(Table1.MyTableID IS NOT NULL OR Table2.MyTableID IS NOT NULL)
AND Table3.MyTableID IS NULL
AND Table4.MyTableID IS NULL
```
The `DELETE` will only operate on the table *before* the `FROM` clause. You can select rows using other tables in the `FROM` clause which will not be affected. This example joins `MyTable` with all the tables that you mention and then checks for each row that either `Table1` or `Table2` refer to the row *and* that `Table3` and `Table4` do not refer to the row. | Probably not too efficient, but the below should work.
It should "delete all the data [in] MyTable that is referenced by Table1 and Table2, but NOT Table3 and Table4". Your query seems to not exactly match up with desiring to do this.
```
DELETE FROM MyTable
WHERE
ID IN (SELECT MyTableID FROM Table1) AND
ID IN (SELECT MyTableID FROM Table2) AND
ID NOT IN (SELECT MyTableID FROM Table3) AND
ID NOT IN (SELECT MyTableID FROM Table4)
``` | delete from table with foreign keys to other tables | [
"",
"sql",
"sql-server",
""
] |
I'm trying to do something that may appear to be simple, but I can't figure it out. As always, django surprises me with its complexity...
My view generates an instance of a model and "passes it on" in a context to a template. On that template, the user fills a form and submits it. And this is what should happen next: the object that was in the context when the page loaded is modified a bit and submitted in a context once again (to the same template). However, I can't get the instance of the object that was in the context when the page loaded. Is it possible to do? Maybe as a hidden input? Or with some fancy django function? Any other idea is appreciated as well, even workarounds (it's not really a professional project, I'm doing it for fun and for experience).
I'm sorry if this question is stupid, but I'm new to django and my brain still has troubles with understanding everything. Thanks for your help! | This is not related to django. It's how the web works. HTTP is stateless.
When you generate the page, you've finished with that task.
The model instance is destroyed.
When the user submits the form or sends the modifications in any other way, a new connection starts with a new request and a new context.
At this point you need to re-instantiate the object you want to modify.
How you do that depends on the application and the model itself.
You can pass the unique\_id of the object, if it has one, and get it back in your actual context querying for it. | You cannot pass a model (or any Python object) directly to another page. There are workarounds (sessions, serializers) but in most cases these are not necessary.
In your case it's not necessary or even recommended to pass the actual Python object. Django supplies many features and options for [form handling](https://docs.djangoproject.com/en/dev/topics/forms/). You may want to take a particular look at [ModelForms](https://docs.djangoproject.com/en/dev/topics/forms/modelforms/), a nice feature to easily create forms that allow you to directly edit your models. | Reusing an object from a context after submitting a form | [
"",
"python",
"django",
""
] |
I have a one liner `if` statement that looks like this:
```
var = var if var < 1. else 1.
```
The first part `var = var` looks a bit ugly and I'd bet there's a more pythonic way to say this. | The following is *39%* shorter and in my opinion simpler and more pythonic than the other answers. *But* note that people sometimes get this wrong and assume that 1 is a lower bound, being confused by the `min` function, when actually 1 is an upper bound for `var`.
```
var = min(var, 1.0)
``` | ```
if var >= 1.:
var = 1
```
or if you like one liners
```
if var >= 1.: var = 1
``` | Simplify one line if statement | [
"",
"python",
"if-statement",
""
] |
I'm taking a beginning Python course, and am having problems trying to do a regex substitution.
The question states: Write a substitution command that will change names like file1, file2, etc. to file01, file02, etc. but will not add a zero to names like file10 or file20.
Here's my solution:
```
re.sub(r'(\D+)(\d)$',r'\10\2','file1')
```
As you can see, the 0 is messing with my \1 reference. Can anyone help me with an easy solution? Thanks! | ```
import re
print re.sub(r'(\D+)(\d)$',r'\g<1>0\2','file1')
```
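For example, with the same pattern, a name that already ends in two digits has no non-digit immediately before its final digit, so it is simply left alone:

```python
import re

pattern = r'(\D+)(\d)$'  # some non-digits, then exactly one trailing digit
print(re.sub(pattern, r'\g<1>0\2', 'file1'))   # file01
print(re.sub(pattern, r'\g<1>0\2', 'file10'))  # file10 (no match, unchanged)
```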
Don't ask.. just do the \g<#> thing and it'll work fine in python. Other languages have the same issue:
<http://resbook.wordpress.com/2011/01/04/regex-with-back-references-followed-by-number/> | I don't know python, but in your regex you just want one digit and not two
for the match you can do it like this
```
.+[^\d]\d$
```
test1 will match
test11 will not match
Good luck | Python regex reference conflicting with substitution number | [
"",
"python",
"regex",
"substitution",
""
] |
I need to write one statement that would do multiple things.
**Here is the first select, which selects a count, and it works fine.**
```
select PLANNED
from (SELECT count(FACT.EVENT) AS PLANNED
FROM FACT FACT
WHERE FACT.PLANNEDOTGFLAG = 1
AND FACT.STARTDATETIME >= SYSDATE - 365
)
```
I need to write another statement which would show different information from the same table, for example:
```
select count(effected)
from fact
where startime between 01/01/2013 and 01/02/2013
```
I was wondering if I can do something like this
```
select PLANNED,
Count_EFFECTED
from ( SELECT count(FACT.EVENT) AS PLANNED
FROM FACT FACT
WHERE FACT.PLANNEDOTGFLAG = 1
AND FACT.STARTDATETIME >= SYSDATE - 365
union all
select count(FACT.effected) AS Count_EFFECTED
from fact fact
where fact.startime between 01/01/2013 and 01/02/2013
)
```
---
**3rd Nested Query**
```
SELECT
A.PLANNED,
B.effec,
C.XOUND // not working yet.
FROM
(SELECT count(FACT.EVENT) AS PLANNED FROM FACT FACT
WHERE FACT.PLANNEDOTGFLAG = 1 AND FACT.STARTDATETIME >= SYSDATE - 365) A,
(select count(FACT.effected) AS effec from fact fact
WHERE FACT.STARTDATETIME between 01/01/2013 and 01/02/2013) B
//how can I add 3rd select which is nested.
(select round(FACT.ID) AS XOUND
FROM
(SELECT SUM(FACT.CIM)/SUM(FACT.CUST) AS ID
FROM FACT FACT
WHERE FACT.STARTDATETIME between 01/01/2013 and 01/02/2013
AND OTGFLAG = 1 AND PLANNEDOTGFLAG = 0 )) C
``` | Your second query will give you an error, as the derived table in the FROM clause will return only one column (with two rows): the first row will have the `PLANNED_OUTAGES` count and the second will have the `effect` count.
For your requirement you could change your query to:
Included 3rd query:
```
SELECT A.PLANNED_OUTAGES,B.effec, C.ID
FROM (SELECT count(FACT.EVENT) AS PLANNED
FROM FACT FACT
WHERE FACT.PLANNEDOTGFLAG = 1 AND FACT.STARTDATETIME >= SYSDATE - 365) A,
(select count(FACT.effected) AS effec
from fact fact
WHERE FACT.STARTDATETIME between 01/01/2013 and 01/02/2013) B,
(SELECT ROUND(SUM(FACT.CIM)/SUM(FACT.CUST)) AS ID
FROM FACT FACT
WHERE FACT.STARTDATETIME between 01/01/2013 and 01/02/2013
AND OTGFLAG = 1 AND PLANNEDOTGFLAG = 0
GROUP BY FACT.CIM,FACT.CUST) C
``` | Since you're getting two different counts from the same table, I suggest using case statements:
```
SELECT
Planned_Outages = COUNT(CASE WHEN Fact.PlannedOTGFlag = 1 AND Fact.StartDateTime >= Sysdate - 365 THEN Fact.Event END),
Effec = COUNT(CASE WHEN FactStartTime between '01/01/2013' and '01/02/2013' THEN Fact.Effected END)
FROM
Fact
``` | multiple select from query Error | [
"",
"jquery",
"sql",
"oracle",
"oracle11g",
""
] |
I have field of `REAL` type in db. I use PostgreSQL. And the query
```
SELECT * FROM my_table WHERE my_field = 0.15
```
does not return rows in which the value of `my_field` is `0.15`.
But for instance the query
```
SELECT * FROM my_table WHERE my_field > 0.15
```
works properly.
How can I solve this problem and get the rows with `my_field = 0.15` ? | To *solve* your problem use the data type [**`numeric`**](https://www.postgresql.org/docs/current/datatype-numeric.html) instead, which is not a floating point type, but an arbitrary precision type.
If you enter the [numeric literal](https://www.postgresql.org/docs/current/sql-syntax-lexical.html#SQL-SYNTAX-CONSTANTS-NUMERIC) `0.15` into a `numeric` (same word, different meaning) column, the *exact* amount is stored - unlike with a `real` or `float8` column, where the value is coerced to next possible binary approximation. This may or may not be exact, depending on the number and implementation details. The decimal number 0.15 happens to fall between possible binary representations and is stored with a tiny error.
Note that the result of a calculation can be inexact itself, so still be wary of the `=` operator in such cases.
It also depends on *how* you test. When comparing, Postgres coerces diverging numeric types to the type that can best hold the result.
Consider this **demo**:
```
CREATE TABLE t(num_r real, num_n numeric);
INSERT INTO t VALUES (0.15, 0.15);
SELECT num_r, num_n
, num_r = num_n AS test1 --> FALSE
, num_r = num_n::real AS test2 --> TRUE
, num_r - num_n AS result_nonzero --> float8
, num_r - num_n::real AS result_zero --> real
FROM t;
```
*db<>fiddle [here](https://dbfiddle.uk/?rdbms=postgres_14&fiddle=bd3ea5149ff9b334bf6c19991af8cc85)*
Old [sqlfiddle](http://www.sqlfiddle.com/#!17/9a5b9c/1)
Therefore, if you have entered `0.15` as numeric literal into your column of data type `real`, you can find all such rows with:
```
SELECT * FROM my_table WHERE my_field = real '0.15'
```
Use `numeric` columns if you need to store fractional digits exactly. | Your problem originates from [IEEE 754](http://en.wikipedia.org/wiki/IEEE_floating_point).
`0.15` is not `0.15`, but `0.15000000596046448` (in single precision, which is what a `real` column uses), as it cannot be exactly represented as a binary floating point number.
([check this calculator](http://users.minet.uni-jena.de/~sack/SS04/download/IEEE-754.html))
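You can also reproduce the mismatch from Python, for instance; the `struct` round-trip below mimics what a single-precision `real` column stores:

```python
import struct

# round-trip 0.15 through 32-bit (single) precision
single = struct.unpack('f', struct.pack('f', 0.15))[0]
print('%.17f' % 0.15)    # 0.14999999999999999 (double)
print('%.17f' % single)  # 0.15000000596046448 (single)
```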
Why is this a problem? In this case, most likely because the other side of the comparison uses the exact value 0.15 - through an exact representation, like a `numeric` type. (Cleared up on suggestion by **Eric**)
So there are two ways:
* use a format that actually stores the numbers in decimal format - as **Erwin** suggested
+ (or at least use the same type across the board)
* use rounding as **Jack** suggested - which has to be used carefully (by the way this uses a `numeric` type too, to exactly represent 0.15...)
**Recommended reading:**
[What Every Computer Scientist Should Know About Floating-Point Arithmetic](http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html)
(Sorry for the terse answer...) | Value of real type incorrectly compares | [
"",
"sql",
"postgresql",
"types",
"floating-point",
"numeric",
""
] |
Using CQL3, how does one enumerate all the partition keys of a table in Cassandra? In particular there are complications with returning distinct keys, and paginating the results. | With a little pre-knowledge about the possible values of your keys, I think this could be done with the help of the token function. Take a look at [this answer](https://stackoverflow.com/a/11889327/2459609). Is that what you are looking for?
Also, native pagination seems to be [an upcoming feature for 2.0](https://issues.apache.org/jira/browse/CASSANDRA-4415). It's [in the latest beta](http://www.datastax.com/dev/blog/cql-in-cassandra-2-0).
Until 2.0 arrives, you can see this [work-around for pagination](http://www.datastax.com/dev/blog/cql3-table-support-in-hadoop-pig-and-hive) on the datastax blog (go the "CQL3 pagination" section). This is, in principle, much the same as the link I posted above but goes into great detail how to implement pagination taking column keys into account etc. | The bad news is that for now (August 2013) you have to select the entire primary key, not just the partition key, to paginate through them. With a compound PK, this may involve a lot of duplicate partition keys.
The good news is that <https://issues.apache.org/jira/browse/CASSANDRA-4536> is open to allow `SELECT DISTINCT` for the special case of partition keys in 2.0.1, since it's possible to retrieve the unique partition keys efficiently under the hood; CQL just doesn't have a good way to express that until then. | Using CQL to walk partition keys of a Cassandra table | [
"",
"sql",
"cassandra",
"partitioning",
"cql",
"cql3",
""
] |
This is my assignment:
> Write a program which reads in text from the keyboard until an
> exclamation mark ('!') is found.
>
> Using an array of integers subscripted by the letters 'A' through 'Z',
> count the number occurrences of each letter. In a separate counter,
> also count the total number of "other" characters.
>
> Print out which letter was found the most times. (Note there may be
> more than one letter which has the maximum count attached to it.)
> Also, print out which letter (or letters) was found the least number
> of times, but make certain to exclude letters which were not found at
> all.
And this is my code:
```
msg = input("What is your message? ")
print ()
num_alpha = 26
int_array = [0] * num_alpha
vowel = [0] * 10000
consanant = [0] * 10000
for alpha in range(num_alpha):
int_array[alpha] = chr(alpha + 65)
if int_array[alpha] == 'A' or int_array[alpha] == 'E' or int_array[alpha] == 'I' or int_array[alpha] == 'O' or int_array[alpha] == 'U':
vowel[alpha] = int_array[alpha]
else:
consanant[alpha] = int_array[alpha]
print()
lett = 0
otherch = 0
num_vowels = 0
num_consonants = 0
count_character = [0] * 100000
length = len(msg)
for character in msg.upper():
if character == "!":
otherch = otherch + 1
count_character[ord(character)] = count_character[ord(character)] + 1
break
elif character < "A" or character > "Z":
otherch = otherch + 1
count_character[ord(character)] = count_character[ord(character)] + 1
else:
lett = lett + 1
count_character[ord(character)] = count_character[ord(character)] + 1
alpha = ord(character) - ord('A')
if vowel[(alpha)] == (character):
num_vowels = num_vowels + 1
else:
num_consonants = num_consonants + 1
print()
print("Number of Letters =", lett)
print("Number of Other Characters = ", otherch)
print("Number of Vowels = ", num_vowels)
print("Number of Consanants = ", num_consonants)
print()
for character in msg.upper():
print("Character", character, "appeared" , count_character[ord(character)] , "time(s).")
if character == "!":
break
print()
max_letter = -999999999999
min_letter = 999999999999
count_hi = 0
count_low = 0
for character in msg.upper():
if count_character[ord(character)] > max_letter:
max_letter = count_character[ord(character)]
count_hi = count_hi + 1
print("Character" , msg[count_hi + 1] , "appeared the most. It appeared", max_letter, "times.")
print(count_hi)
for character in msg.upper():
if count_character[ord(character)] < min_letter:
min_letter = count_character[ord(character)]
count_low = count_low + 1
print("Character" , msg[count_low + 1] , "appeared the least. It appeared", min_letter, "times.")
print(count_low)
```
I know that the counter is completely wrong but I can't seem to figure it out. Any ideas?
EDIT:
If i input the string : "AAAAAAAAAAAAAAAAAAAaaaaaaaaaaHHHHHh!"
it prints out:
Character A appeared the most. It appeared 29 times.
1
Character A appeared the least. It appeared 1 times.
3
obviously the first string is right but the second one should say character h appeared the least. | In the block
```
for character in msg.upper():
if count_character[ord(character)] > max_letter:
max_letter = count_character[ord(character)]
count_hi = count_hi + 1
```
count\_hi will be the number of times a different letter was selected as the one with the highest count, not the index of the letter. Just save the character to output it later, like
```
for character in msg.upper():
if count_character[ord(character)] > max_letter:
max_letter = count_character[ord(character)]
high_letter = character
print("Character" , high_letter , "appeared the most. It appeared", max_letter, "times.")
```
change the low check similarly and you should get back close to what you wanted | The most-occurring item is finding the mode. The following will work assuming the list is sorted:
```
def get_mode(items):
    # assumes a non-empty, *sorted* list, so equal items form runs
    current_mode = items[0]
    new_mode = current_mode
    mode_count = 1
    top_count = 1
    for idx in range(1, len(items)):
        if items[idx] == current_mode:
            mode_count += 1
        else:
            if mode_count > top_count:
                new_mode = current_mode
                top_count = mode_count
            current_mode = items[idx]
            mode_count = 1
    if mode_count > top_count:
        new_mode = current_mode
        top_count = mode_count
    return new_mode, top_count
```
You can work in the min logic pretty easily - assume the first item is the least-occurring, keep track of its count like the mode, and store it on change. You'll need to include a check after the loop to ensure that if the last sorted run of items in the list has a count less than the stored minimum count, you set that item and its count to the proper values. Just append the final values to the return statement, and you have a tuple with (mode, mode\_count, least-occurring, least\_count).
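For reference, a quick sanity check of both extremes with the assignment's array-of-26 layout (sample input taken from the question), excluding letters that never occur:

```python
counts = [0] * 26
for ch in "AAAAAAAAAAAAAAAAAAAaaaaaaaaaaHHHHHh!".upper():
    if 'A' <= ch <= 'Z':
        counts[ord(ch) - ord('A')] += 1

nonzero = [n for n in counts if n > 0]
hi, lo = max(nonzero), min(nonzero)
print([chr(i + 65) for i, n in enumerate(counts) if n == hi])  # ['A']
print([chr(i + 65) for i, n in enumerate(counts) if n == lo])  # ['H']
```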
Since this looks like a homework assignment I didn't code the min material, and I'm also assuming you aren't allowed to do a simple one-liner involving some imported library. If you are allowed to use that, then I would suggest a [Counter.](http://docs.python.org/dev/library/collections.html#counter-objects) | How to Count Most & Least Common Characters in Input? | [
"",
"python",
"arrays",
"counter",
"subscript",
""
] |
Yes, I know about [IronPython 3 compatibility](https://stackoverflow.com/questions/7751767/ironpython-3-compatibility), but that is from two years ago. I was searching on the internet but couldn't find any information on this that is up-to-date.
So does IronPython support Python 3? If not, how many of the future imports work, and are there any Iron-specific ways to make it seem more like Python 3? | IronPython 3 is once again alive. An alpha for Python 3.4 was released in April 2021. It is a very active effort which is being supported by the .NET Foundation. More here: [Python 3.4 Release Notes](https://github.com/IronLanguages/ironpython3/releases/tag/v3.4.0-alpha1) | Currently it doesn't support Python3. [IronPython3 Todo](http://blog.jdhardy.ca/2013/06/ironpython-3-todo.html). All future imports supported by the standard Python 2.7 interpreter should be supported by the newest version of IronPython.
And there are no Iron-specific ways to make it seem more like Python3 as far as I know. | IronPython 3 support? | [
"",
"python",
"python-3.x",
"ironpython",
""
] |
So I have the following:
```
myString = "This is %s string. It has %s replacements."
myParams = [ "some", "two" ]
# This is a function which works (and has to work) just like that
myFunction(myString, myParams)
```
Now when I debug, I do the following:
```
print("Debug: myString = " + myString)
print("Debug: myParams = " + myParams)
```
But I would like to get it all directly in one print, like:
```
"Debug: This is some string. It has two replacements."
```
Is that somehow possible? Something like
```
print("Debug: myString = " + (myString % myParams))
```
? | You need to use a *tuple*; cast your list into a tuple and that works just fine:
```
>>> myString = "This is %s string. It has %s replacements."
>>> myParams = [ "some", "two" ]
>>> myString % tuple(myParams)
'This is some string. It has two replacements.'
```
Define `myParams` to be a tuple to start with:
```
>>> myString = "This is %s string. It has %s replacements."
>>> myParams = ("some", "two")
>>> myString % myParams
'This is some string. It has two replacements.'
```
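For contrast, here is what happens if `myParams` stays a list: the `%` operator treats the whole list as the single value for the first `%s`, so the second placeholder goes unfilled:

```python
myString = "This is %s string. It has %s replacements."
try:
    myString % ["some", "two"]
except TypeError as e:
    print(e)  # not enough arguments for format string
```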
You can combine that into a function:
```
def myFunction(myString, myParams):
return myString % tuple(myParams)
myFunction("This is %s string. It has %s replacements.", ("some", "two"))
```
or better still, make `myParams` a catch-all argument, which always resolves to a tuple:
```
def myFunction(myString, *myParams):
return myString % myParams
myFunction("This is %s string. It has %s replacements.", "some", "two")
```
The latter is what the `logging.log()` function (and related functions) already does. | I'm using python 2.7 but this is quite close and tidy to what you're seeking
I am using the `*` or splat operator to unpack the list into positional arguments (or a tuple) and feeding that tuple into `format()` and using format on the `myString` variable is possible by changing the `%s` to `{index}`.
```
>> myString = "This is {0} string. It has {1} replacements."
>> myParams = ["some", "two"]
>> print "Debug: myString = "+ myString.format(*myParams)
>> Debug: myString = This is some string. It has two replacements.
``` | Insert replacements into %-string subsequently | [
"",
"python",
""
] |
I have a list of dictionaries. e.g:
```
list = [ {list1}, {list2}, .... ]
```
One of the key-value pairs in each dictionary has another dictionary as its value.
Ex
```
list1 = { "key1":"val11", "key2":"val12", "key3":{"inkey11":"inval11","inkey12":"inval12"} }
list2 = { "key1":"val21", "key2":"val22", "key3":{"inkey21":"inval21","inkey22":"inval22"} }
```
I was thinking of getting all values of `key3` in all the dictionaries into a list.
Is it possible to access them directly (something like `list[]["key3"]` ) or do we need to iterate through all elements to form the list?
I have tried
> requiredlist = list [ ] ["key3"].
But it doesn't work.
The final outcome I wanted is
`requiredlist = [ {"inkey11":"inval11","inkey12":"inval12"}, {"inkey21":"inval21","inkey22":"inval22"} ]` | I think a [list comprehension](http://docs.python.org/2/tutorial/datastructures.html#list-comprehensions) would work well here:
```
[innerdict["key3"] for innerdict in list]
``` | Try this:
```
list1["key3"]
```
Notice that `list1` and `list2` as you defined them are dictionaries, not lists - and the value of `"key3"` is itself a *dictionary*, not a list. You're confounding `{}` with `[]`, they have different meanings. | Get a list from the dictionary key in a list of dictionaries | [
"",
"python",
""
] |
I currently get output like this:
<http://www.site.com/prof.php?pID=478http://www.site.com/prof.php?pID=693>
after using suggestion from commenter below I have:
```
urls = [el.url for el in domainLinkOutput]
return HttpResponse(urls)
```
How do I turn this output into a python dictionary like:
```
urls = { '0': 'http://www.site.com/prof.php?pID=478', '1': 'http://www.site.com/prof.php?pID=693' }
``` | I don't believe you need regex here - just use attribute access on the `Link` objects you have...
If you have a list of `Link` objects, then use something like:
```
urls = [el.url for el in list_of_objects]
```
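And if you really need the numbered-dictionary shape shown in the question, `enumerate` will build it (string keys assumed, as in your example):

```python
urls = ['http://www.site.com/prof.php?pID=478',
        'http://www.site.com/prof.php?pID=693']
url_dict = {str(i): u for i, u in enumerate(urls)}
print(url_dict['0'])  # http://www.site.com/prof.php?pID=478
```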
You should just be able to get the url by `Link_object.url`... | Use this regex for matching the urls:
```
url='([^']+)'
```
Sample output:
```
[0] => http://www.somesite.com/prof.php?pID=478
[1] => http://www.somesite.com/prof.php?pID=527
[2] => http://www.somesite.com/prof.php?pID=645
```
If you want to exclude the parameters, use
```
url='([^'?]+)
```
Sample output:
```
[0] => http://www.somesite.com/prof.php
[1] => http://www.somesite.com/prof.php
[2] => http://www.somesite.com/prof.php
``` | Grab Strings with Python Attribute Access Link Access into Dictionary | [
"",
"python",
""
] |
I have 3 tables. The `user_sticker` holds all sent stickers between users. When the profile view screen is loaded, I need to display the most common sticker that has been given to the user.
**user**
* id\_user
* name
**sticker**
* id\_sticker
* name
**user\_sticker**
* id\_user\_sticker
* id\_sticker
* id\_user\_from
* id\_user\_to
So, if `user_sticker` holds this info:
* 1, 3, 254, 205
* 2, 2, 362, 205
* 3, 2, 519, 205
* 4, 3, 945, 205
* 5, 3, 199, 205
(which means that users 254, 362, 519, 945, 199 sent stickers to user 205).
The result has to return **both** the user 205 information (Name) and the most common sticker id (in this case #3) in the same record. | Ok, let's see
In principle, you need to use the user\_sticker table. So:
```
SELECT *
FROM user_sticker us
```
Then, match with the user table for user\_to
```
SELECT *
FROM user_sticker us JOIN user u ON us.id_user_to = u.id_user
```
Then, you want the name and the stickers the user you want recieve
```
SELECT us.id_sticker, u.name
FROM user_sticker us JOIN user u ON us.id_user_to = u.id_user
WHERE u.id_user = "yourwanteduser"
```
Now, pick the first
```
SELECT TOP 1 us.id_sticker, u.name
FROM user_sticker us JOIN user u ON us.id_user_to = u.id_user
WHERE u.id_user = "yourwanteduser"
ORDER BY (SELECT COUNT(*)
FROM user_sticker us2
         WHERE us2.id_user_to=u.id_user AND us2.id_sticker=us.id_sticker) DESC
```
The ORDER BY subquery counts how many of each sticker were sent to that user.
Sorry for my bad english! I hope this help! | this would be the select:
```
select id_user_to, count(id_sticker) quant from user_sticker group by id_sticker order by quant desc limit 1;
```
now with the info of the user: JOIN
```
select us.id_user_to, u.name, count(us.id_sticker) quant
from user_sticker us
join user u on u.id_user=us.id_user_to
group by us.id_sticker
order by quant desc limit 1;
``` | Returning most common linked ID for a user | [
"",
"mysql",
"sql",
""
] |
I really don't get this; I tried with coalesce() but with no result...
I have one select (very simplified to understand the problem):
```
select col1,
col2
from table1 t1
where table1.col1='value'
and table1.col2='value2'
and table1.col3='value3'
```
And I really need a result, so if this select's result set is null (and **only if it is** null, i.e. no result) then the following SQL select comes into the picture
```
select col1,
col2
from table1 t1
where table1.col1='another_value'
and table1.col2='another_value2'
```
How can I get these two into one big select? (any recommended syntax is appreciated...) | Something like:
```
; WITH Base AS (
select col1,
col2
from table1 t1
where table1.col1='value'
and table1.col2='value2'
and table1.col3='value3'
)
, Base2 AS (
select col1,
col2
from table1 t1
where table1.col1='another_value'
and table1.col2='another_value2'
AND NOT EXISTS (SELECT 1 FROM Base) -- HERE!!!
)
SELECT * FROM Base
UNION
SELECT * FROM Base2
```
and let's hope the SQL optimizer won't run the first query twice :-)
It is a CTE (Common Table Expression)... I used it so I could reuse the first query twice (one in the `EXISTS` and the other in the `SELECT ... UNION`)
By using a temporary table
```
select col1,
col2
INTO #temp1 -- HERE!!!
from table1 t1
where table1.col1='value'
and table1.col2='value2'
and table1.col3='value3'
select col1,
col2
from table1 t1
where table1.col1='another_value'
and table1.col2='another_value2'
AND NOT EXISTS (SELECT 1 FROM #temp1) -- HERE!!!
``` | It could benefit us a little better if you had a little more information in your example. Is there a common value between the two tables that a JOIN can be established?
```
SELECT col1
,col2
FROM Table1 t1
WHERE table1.col1='value'
and table1.col2='value2'
and table1.col3='value3'
UNION
SELECT col1
,col2
FROM Table2 t2
WHERE table1.col1='another_value'
and table1.col2='another_value2'
AND NOT EXISTS (SELECT 1 FROM Table1 t1 WHERE t1.Col1 = t2.Col2)
``` | Select only when first select is null | [
"",
"sql",
"select",
"conditional-statements",
""
] |
I have problem when executing this code:
```
SELECT * FROM tblpm n
WHERE date_updated=(SELECT MAX(date_updated)
FROM tblpm GROUP BY control_number
HAVING control_number=n.control_number)
```
Basically, I want to return the most recent date for each control number. The query above returns correct output, but it takes 37 seconds before the output is shown.
Is there any other sql clause or command that can execute faster than the query above? | Putting the subquery in the WHERE clause and restricting it to n.control\_number means it runs the subquery many times. This is called a *correlated subquery*, and it's often a performance killer.
It's better to run the subquery once, in the FROM clause, to get the max date per control number.
```
SELECT n.*
FROM tblpm n
INNER JOIN (
SELECT control_number, MAX(date_updated) AS date_updated
FROM tblpm GROUP BY control_number
) AS max USING (control_number, date_updated);
``` | There's no need to group in that subquery... a where clause would suffice:
```
SELECT * FROM tblpm n
WHERE date_updated=(SELECT MAX(date_updated)
FROM tblpm WHERE control_number=n.control_number)
```
Also, do you have an index on the 'date\_updated' column? That would certainly help. | GROUP BY having MAX date | [
"",
"mysql",
"sql",
"optimization",
"greatest-n-per-group",
""
] |
I want to display the date in below format
```
July 23, 2011
SELECT REG_NO, FORMAT(DOB ,'YYYY/MM/DD')
FROM Students ;
```
I tried below
```
SELECT REG_NO, FORMAT(DOB ,'MON DD,YYYY')
FROM Students ;
```
It doesn't seem to work. | use **to\_char**
The syntax is `to_char( value, [ format_mask ], [ nls_language ] )`
```
SELECT REG_NO, TO_CHAR(DOB ,'MONTH DD,YYYY') FROM Students ;
``` | How about trying this one:
```
SELECT REG_NO, to_char(DOB, 'FMMonth DD, YYYY') FROM Students;
```
[TO\_CHAR()](http://www.techonthenet.com/oracle/functions/to_char.php) | Displaying date in Month DD,YYYY format | [
"",
"sql",
"oracle",
""
] |
How can I join a string that contains quotation marks? Here is an example of a program to generate HTML image tags. But the program is not printing correctly.
```
import sys
sys.stdout = open("/Users/kyle/images.html", "w")
for x in xrange(100,490):
site = "http://www.example.com/image"+str(x)+".jpg"
print "<img src=""+site+"">"
```
It will print properly this way `print "<img src="+'"'+site+'"'+">"`
But I want to know if there is another, simpler way. | Yes, you can use a single-quoted string for the `print` statement, and then include the double-quoted strings literally:
```
print '<img src="' + site + '">'
```
I generally prefer using `str.format()` for this kind of thing, though, because it makes the position of the output more obvious.
```
print '<img src="{0}">'.format(site)
```
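Applied to the loop from the question (using the function form of `print`, so it also runs on Python 3; the range is shortened for the example):

```python
for x in range(100, 103):
    site = "http://www.example.com/image" + str(x) + ".jpg"
    print('<img src="{0}">'.format(site))
# <img src="http://www.example.com/image100.jpg"> ... and so on
```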
However, if you're generating large amounts of HTML, you might be better off using a templating language like [Jinja2](http://jinja.pocoo.org/docs/), which will handle auto-escaping and whatnot better. | Python doesn't care which quotes you use.
```
print '<img src="' + site + '">'
``` | Join string containg quotation marks | [
"",
"python",
"html",
"join",
"tags",
""
] |
I have two lists:
```
list1 = [
set(['3105']),
set(['3106', '3107']),
set(['3115']),
set(['3122']),
set(['3123', '3126', '286'])
]
```
and
```
list2 = [
set(['400']),
set(['3115']),
set(['3100']),
set(['3107']),
set(['3123', '3126'])
]
```
How do I compare the intersection of these lists, so, for example, if 3126 is somewhere in any of the sets of both lists, it will append another list with 3126. My end goal is to append a separate list and then take the length of the list so I know how many matches are between lists. | You can flatten the two lists of sets into sets:
```
l1 = set(s for x in list1 for s in x)
l2 = set(s for x in list2 for s in x)
```
Then you can compute the intersection:
```
common = l1.intersection(l2) # common will give common elements
print len(common) # this will give you the number of elements in common.
```
Results:
```
>>> print common
set(['3123', '3115', '3107', '3126'])
>>> len(common)
4
``` | You'd have to merge all sets; take the unions of the sets in both lists, then take the intersection of these two unions:
```
sets_intersection = reduce(set.union, list1) & reduce(set.union, list2)
if 3126 in sets_intersection:
# ....
``` | Python - Comparing two lists of sets | [
"",
"python",
"list",
"intersection",
"set",
""
] |
Let's say I have a table like this:
```
|id|userID|email |website |
--------------------------------------
|1 |user1 |user1@test.com|website.com|
|2 |user2 |user2@test.com|website.com|
|3 |user3 |user3@test.com|website.com|
|4 |user1 |user1@test.com|foo.com |
|5 |user2 |user2@test.com|foo.com |
```
And I want to get all of the rows where website='website.com' and have a corresponding row with a matching userID where website='foo.com'
So, in this instance it would return rows 1 and 2.
Any ideas? | To get the user you can do
```
select userID
from your_table
where website in ('website.com', 'foo.com')
group by userID
having count(distinct website) = 2
```
but if you need the complete row then do
```
select * from your_table
where userID in
(
select userID
from your_table
where website in ('website.com', 'foo.com')
group by userID
having count(distinct website) = 2
)
``` | Here is one way:
```
select t.*
from t
where t.website = 'website.com' and
exists (select 1 from t t2 where t2.userId = t.userId and t2.website = 'foo.com');
```
EDIT:
You can also express this as a join:
```
select distinct t.*
from t join
t2
on t2.userId = t.userId and
t.website = 'website.com' and
t2.website = 'foo.com';
```
If you know there are not duplicates, then you can remove the `distinct`. | Get all rows with a matching field in a different row in the same table | [
"",
"sql",
"hive",
"impala",
""
] |
I'm having a difficult time extracting a value from a subquery.
Suppose I have following tables:
**D( fid , pid , r ),**
**F( fid , t , j )**
I need f.j from a subquery below at the first select.
```
SELECT pid -- Here I need f.j to show up
FROM D
WHERE r='something' AND fid IN
(
SELECT f2.fid
FROM F f2,
(
SELECT f.j, COUNT(*) -- I need f.j above
FROM F f
GROUP BY f.j
HAVING COUNT(*) >=2
) f
WHERE f.j = f2.j
)
GROUP BY pid
HAVING COUNT(*) >= 2
```
Thanks. | Since there is no sample data it is not clear what should be the output, still you can try the following but this is not optimized.
```
SELECT pid ,
t.j -- Here I need f.j to show up
FROM D
INNER JOIN ( SELECT f2.fid ,
f2.j
FROM F f2 ,
( SELECT f.j
FROM F f
GROUP BY f.j
HAVING COUNT(*) >= 2
) f
WHERE f.j = f2.j
) t ON D.fid = t.fid
WHERE r = 'something'
GROUP BY pid, t.j
HAVING COUNT(*) >= 2
``` | I would create a temp table in this form **T( pid , j )**. Then you can insert all the pid and f.j data separately. | Extracting value from subquery to main query | [
"",
"sql",
""
] |
I have an array:
```
>>> data = np.ones((1,3,128))
```
I save it to file using `savez_compressed`:
```
>>> with open('afile','w') as f:
np.savez_compressed(f,data=data)
```
When I try to load it I don't seem to be able to access the data:
```
>>> with open('afile','r') as f:
b=np.load(f)
>>> b.files
['data']
>>> b['data']
Traceback (most recent call last):
File "<pyshell#196>", line 1, in <module>
b['data']
File "C:\Python27\lib\site-packages\numpy\lib\npyio.py", line 238, in __getitem__
bytes = self.zip.read(key)
File "C:\Python27\lib\zipfile.py", line 828, in read
return self.open(name, "r", pwd).read()
File "C:\Python27\lib\zipfile.py", line 853, in open
zef_file.seek(zinfo.header_offset, 0)
ValueError: I/O operation on closed file
```
Am I doing something obviously wrong?
**EDIT**
Following @Saullo Castro's answer I tried this:
```
>>> np.savez_compressed('afile.npz',data=data)
>>> b=np.load('afile.npz')
>>> b.files
['data']
>>> b['data']
```
and got the following error:
```
Traceback (most recent call last):
File "<pyshell#253>", line 1, in <module>
b['data']
File "C:\Python27\lib\site-packages\numpy\lib\npyio.py", line 241, in __getitem__
return format.read_array(value)
File "C:\Python27\lib\site-packages\numpy\lib\format.py", line 440, in read_array
shape, fortran_order, dtype = read_array_header_1_0(fp)
File "C:\Python27\lib\site-packages\numpy\lib\format.py", line 336, in read_array_header_1_0
d = safe_eval(header)
File "C:\Python27\lib\site-packages\numpy\lib\utils.py", line 1156, in safe_eval
ast = compiler.parse(source, mode="eval")
File "C:\Python27\lib\compiler\transformer.py", line 53, in parse
return Transformer().parseexpr(buf)
File "C:\Python27\lib\compiler\transformer.py", line 132, in parseexpr
return self.transform(parser.expr(text))
File "C:\Python27\lib\compiler\transformer.py", line 124, in transform
return self.compile_node(tree)
File "C:\Python27\lib\compiler\transformer.py", line 159, in compile_node
return self.eval_input(node[1:])
File "C:\Python27\lib\compiler\transformer.py", line 194, in eval_input
return Expression(self.com_node(nodelist[0]))
File "C:\Python27\lib\compiler\transformer.py", line 805, in com_node
return self._dispatch[node[0]](node[1:])
File "C:\Python27\lib\compiler\transformer.py", line 578, in testlist
return self.com_binary(Tuple, nodelist)
File "C:\Python27\lib\compiler\transformer.py", line 1082, in com_binary
return self.lookup_node(n)(n[1:])
File "C:\Python27\lib\compiler\transformer.py", line 596, in test
then = self.com_node(nodelist[0])
File "C:\Python27\lib\compiler\transformer.py", line 805, in com_node
return self._dispatch[node[0]](node[1:])
File "C:\Python27\lib\compiler\transformer.py", line 610, in or_test
return self.com_binary(Or, nodelist)
File "C:\Python27\lib\compiler\transformer.py", line 1082, in com_binary
return self.lookup_node(n)(n[1:])
File "C:\Python27\lib\compiler\transformer.py", line 615, in and_test
return self.com_binary(And, nodelist)
File "C:\Python27\lib\compiler\transformer.py", line 1082, in com_binary
return self.lookup_node(n)(n[1:])
File "C:\Python27\lib\compiler\transformer.py", line 619, in not_test
result = self.com_node(nodelist[-1])
File "C:\Python27\lib\compiler\transformer.py", line 805, in com_node
return self._dispatch[node[0]](node[1:])
File "C:\Python27\lib\compiler\transformer.py", line 626, in comparison
node = self.com_node(nodelist[0])
File "C:\Python27\lib\compiler\transformer.py", line 805, in com_node
return self._dispatch[node[0]](node[1:])
File "C:\Python27\lib\compiler\transformer.py", line 659, in expr
return self.com_binary(Bitor, nodelist)
File "C:\Python27\lib\compiler\transformer.py", line 1082, in com_binary
return self.lookup_node(n)(n[1:])
File "C:\Python27\lib\compiler\transformer.py", line 663, in xor_expr
return self.com_binary(Bitxor, nodelist)
File "C:\Python27\lib\compiler\transformer.py", line 1082, in com_binary
return self.lookup_node(n)(n[1:])
File "C:\Python27\lib\compiler\transformer.py", line 667, in and_expr
return self.com_binary(Bitand, nodelist)
File "C:\Python27\lib\compiler\transformer.py", line 1082, in com_binary
return self.lookup_node(n)(n[1:])
File "C:\Python27\lib\compiler\transformer.py", line 671, in shift_expr
node = self.com_node(nodelist[0])
File "C:\Python27\lib\compiler\transformer.py", line 805, in com_node
return self._dispatch[node[0]](node[1:])
File "C:\Python27\lib\compiler\transformer.py", line 683, in arith_expr
node = self.com_node(nodelist[0])
File "C:\Python27\lib\compiler\transformer.py", line 805, in com_node
return self._dispatch[node[0]](node[1:])
File "C:\Python27\lib\compiler\transformer.py", line 695, in term
node = self.com_node(nodelist[0])
File "C:\Python27\lib\compiler\transformer.py", line 805, in com_node
return self._dispatch[node[0]](node[1:])
File "C:\Python27\lib\compiler\transformer.py", line 715, in factor
node = self.lookup_node(nodelist[-1])(nodelist[-1][1:])
File "C:\Python27\lib\compiler\transformer.py", line 727, in power
node = self.com_node(nodelist[0])
File "C:\Python27\lib\compiler\transformer.py", line 805, in com_node
return self._dispatch[node[0]](node[1:])
File "C:\Python27\lib\compiler\transformer.py", line 739, in atom
return self._atom_dispatch[nodelist[0][0]](nodelist)
File "C:\Python27\lib\compiler\transformer.py", line 754, in atom_lbrace
return self.com_dictorsetmaker(nodelist[1])
File "C:\Python27\lib\compiler\transformer.py", line 1214, in com_dictorsetmaker
assert nodelist[0] == symbol.dictorsetmaker
AssertionError
```
**EDIT 2**
The above error occurred in IDLE. It worked using IPython.
```
np.savez_compressed('filename.npz', array1=array1, array2=array2)
b = np.load('filename.npz')
```
and do `b['array1']` and so forth to retrieve the data from each array... | > You can also use the `f` attribute, which leaves you with a
> `np.ndarray`:
```
images_npz = np.load('images.npz')
images = images_npz.f.arr_0
images_npz.close()
```
The name/key of the array inside the .npz-file (e.g. `arr_0`) can be found through
```
images_npz.keys()
```
*Note*: The `f` attribute is not documented in the docstring of load. When load reads an `npz` file, it returns an instance of the class `NpzFile`. This class is available as `numpy.lib.npyio.NpzFile`. The docstring of the `NpzFile` class describes the `f` attribute. (As of this writing, the source code of the class can be found [here](https://github.com/numpy/numpy/blob/master/numpy/lib/npyio.py#L95).)
"",
"python",
"arrays",
"numpy",
"file-io",
""
] |
I'm very new to Python (2.x) and trying to understand how to iterate over a dictionary containing multiple lists:
```
dict = {'list_1':[3, 'green', 'yellow', 'black'], 'list_2':[2, 'green', 'blue']}
```
I am trying to create a new list containing all of the unique values of these lists, but ignoring the first item (the integer). The result I'm looking for would be:
```
['green', 'yellow', 'black', 'blue']
```
Here is one of my many attempts. I'm quite lost, so if somebody could explain I would be most grateful.
```
newlist = []
for colors in dict.values() [1:]:
if not colors in newlist:
newlist.append(colors)
``` | Use `set.union`:
```
>>> dic = {'list_1':[3, 'green', 'yellow', 'black'], 'list_2':[2, 'green', 'blue']}
>>> set().union(*(x[1:] for x in dic.itervalues()))
set(['blue', 'black', 'green', 'yellow'])
```
If a list is required simply pass this set to `list()`.
A working version of your attempt, though it is not efficient:
```
newlist = []
for colors in dic.values():
lis = colors[1:] #do slicing here
for item in lis:
if item not in newlist:
newlist.append(item)
print newlist #['green', 'blue', 'yellow', 'black']
``` | One way using [`itertools.chain`](http://docs.python.org/2/library/itertools.html#itertools.chain) to flatten the dict values into a single list then [`list comprehension`](http://docs.python.org/2/tutorial/datastructures.html#list-comprehensions) to filter out the non-string values and finally [`set`](http://docs.python.org/2/library/sets.html) for the unique values:
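If first-seen order matters (a `set` doesn't preserve it), here is a small order-preserving sketch — note that the result order follows the dict's iteration order, which is arbitrary in older Pythons:

```python
def unique_colors(d):
    """Collect items from each list, skipping the first (count) element,
    keeping only the first occurrence of each value."""
    seen = []
    for values in d.values():
        for item in values[1:]:
            if item not in seen:
                seen.append(item)
    return seen

d = {'list_1': [3, 'green', 'yellow', 'black'], 'list_2': [2, 'green', 'blue']}
print(unique_colors(d))  # e.g. ['green', 'yellow', 'black', 'blue']
```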
```
In [1]: from itertools import chain
In [2]: dict={'list_1':[3,'green','yellow','black'],'list_2':[2,'green','blue']}
In [3]: set([x for x in chain(*dict.values()) if isinstance(x, str)])
Out[3]: set(['blue', 'black', 'green', 'yellow'])
```
If you really want to remove only the first item in each list (and not all ints), then similarly you could do:
```
In [4]: set(chain(*[x[1:] for x in dict.itervalues()]))
Out[4]: set(['blue', 'black', 'green', 'yellow'])
```
The first answer fails if you want ints that appear after the first position to be kept in the final set, and the second fails if a non-int is found in the first position, so you should state what should happen in these cases.
"",
"python",
""
] |
What is the correct way to convert a Python dict into a single string?
I have example program with code:
```
keywordsList = {u'roland.luethe1@gmail.com': 2, u'http://www.3ho.de/': 4, u'http://www.kundalini-yoga-zentrum-berlin.de/index.html': 1, u'ergo@ananda-pur.de': 3}
keywordsCount = sorted(keywordsList.items(),key=lambda(k,v):(v,k),reverse=True)
for words in keywordsCount:
print words[0], " - count: ", words[1]
```
so after I sort my items I get result like this:
```
http://www.3ho.de/ - count: 4
ergo@ananda-pur.de - count: 3
roland.luethe1@gmail.com - count: 2
http://www.kundalini-yoga-zentrum-berlin.de/index.html - count: 1
```
And my question is: what is the correct way to combine all the dict contents with counts into one single string that, printed out, would look something like:
```
'http://www.3ho.de/ : 4, ergo@ananda-pur.de : 3, roland.luethe1@gmail.com : 2, http://www.kundalini-yoga-zentrum-berlin.de/index.html : 1'
```
or something similar, in a logical way?
```
', '.join([' : '.join((k, str(keywordsList[k]))) for k in sorted(keywordsList, key=keywordsList.get, reverse=True)])
```
Outputs:
```
>>> ', '.join([' : '.join((k, str(keywordsList[k]))) for k in sorted(keywordsList, key=keywordsList.get, reverse=True)])
u'http://www.3ho.de/ : 4, ergo@ananda-pur.de : 3, roland.luethe1@gmail.com : 2, http://www.kundalini-yoga-zentrum-berlin.de/index.html : 1'
```
You should really look into [`collections.Counter()`](http://docs.python.org/2/library/collections.html#collections.Counter) and save yourself having to sort like that. The `Counter.most_common()` method lists items in reverse sorted order by frequency for you. | I'm not sure about the correct way but you could also do this:
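A minimal sketch of the `Counter` route — a `Counter` can wrap an existing `{item: count}` mapping directly, and `most_common()` does the descending sort for you (ties are broken arbitrarily; the counts here are all distinct):

```python
from collections import Counter

keywordsList = {u'roland.luethe1@gmail.com': 2, u'http://www.3ho.de/': 4,
                u'http://www.kundalini-yoga-zentrum-berlin.de/index.html': 1,
                u'ergo@ananda-pur.de': 3}

counts = Counter(keywordsList)  # wraps the existing {item: count} dict
result = ', '.join('%s : %d' % (k, v) for k, v in counts.most_common())
print(result)
```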
```
', '.join(' : '.join([a, str(b)]) for a, b in keywordsCount)
``` | Python dict to string | [
"",
"python",
"string",
"dictionary",
""
] |
I have a text file sample.txt (space-delimited) in this format
```
12 john E 44 L
13 adam D 78 L
14 tue E 98 L
```
I want to convert this file into a nested list
```
table_data = [
[12, 'john', 'E', 44, 'L'],
[13, 'adam', 'D', 78, 'L'],
[14, 'tue', 'E', 98, 'L'],
]
```
How can I do it?
```
with open('filename') as f:
table_data = [ line.split() for line in f]
```
If you want the numbers to be converted to integers, then write an additional function that processes each item on a given line:
```
def func(x):
try:
return int(x)
except ValueError:
return x
>>> with open('abc1') as f:
... table_data = [[ func(x) for x in line.split()] for line in f]
...
>>> table_data
[[12, 'john', 'E', 44, 'L'],
[13, 'adam', 'D', 78, 'L'],
[14, 'tue', 'E', 98, 'L']]
``` | Use the inbuilt [`csv` package](http://docs.python.org/2/library/csv.html)
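The same idea can be exercised without touching the filesystem by feeding the parser an in-memory file object — `io.StringIO` here just stands in for the real file:

```python
import io

def func(x):
    """Convert to int where possible, otherwise keep the string."""
    try:
        return int(x)
    except ValueError:
        return x

sample = io.StringIO(u"12 john E 44 L\n13 adam D 78 L\n14 tue E 98 L\n")
table_data = [[func(x) for x in line.split()] for line in sample]
print(table_data)
# [[12, 'john', 'E', 44, 'L'], [13, 'adam', 'D', 78, 'L'], [14, 'tue', 'E', 98, 'L']]
```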
```
from csv import reader
types = [int, str, str, int, str]
with open('sample.txt', 'rb') as f:
table_data = [[t(v) for v, t in zip(row, types)] for row in reader(f, delimiter=' ')]
print table_data
```
Has the output:
```
[[12, 'john', 'E', 44, 'L'], [13, 'adam', 'D', 78, 'L'], [14, 'tue', 'E', 98, 'L']]
``` | use file content to create a nested list in python | [
"",
"python",
"list",
"parsing",
"nested-lists",
""
] |
I read this: [Importing a CSV file into a sqlite3 database table using Python](https://stackoverflow.com/questions/2887878/importing-a-csv-file-into-a-sqlite3-database-table-using-python)
and it seems that everyone suggests using line-by-line reading instead of using bulk .import from SQLite. However, that will make the insertion really slow if you have millions of rows of data. Is there any other way to circumvent this?
Update: I tried the following code to insert line by line, but the speed is not as good as I expected. Is there any way to improve it?
```
for logFileName in allLogFilesName:
logFile = codecs.open(logFileName, 'rb', encoding='utf-8')
for logLine in logFile:
logLineAsList = logLine.split('\t')
output.execute('''INSERT INTO log VALUES(?, ?, ?, ?)''', logLineAsList)
logFile.close()
connection.commit()
connection.close()
``` | Divide your data into chunks on the fly using generator expressions, make inserts inside the transaction. Here's a quote from [sqlite optimization FAQ](https://web.archive.org/web/20140331055857/http://web.utk.edu/%7Ejplyon/sqlite/SQLite_optimization_FAQ.html):
> Unless already in a transaction, each SQL statement has a new
> transaction started for it. This is very expensive, since it requires
> reopening, writing to, and closing the journal file for each
> statement. This can be avoided by wrapping sequences of SQL statements
> with BEGIN TRANSACTION; and END TRANSACTION; statements. This speedup
> is also obtained for statements which don't alter the database.
[Here's](https://stackoverflow.com/a/7137270/771848) how your code may look like.
Also, sqlite has an ability to [import CSV files](https://www.sqlite.org/cli.html#importing_files_as_csv_or_other_formats). | Since this is the top result on a Google search I thought it might be nice to update this question.
From the [python sqlite docs](https://docs.python.org/2/library/sqlite3.html#sqlite3-controlling-transactions) you can use
```
import sqlite3
persons = [
("Hugo", "Boss"),
("Calvin", "Klein")
]
con = sqlite3.connect(":memory:")
# Create the table
con.execute("create table person(firstname, lastname)")
# Fill the table
con.executemany("insert into person(firstname, lastname) values (?,?)", persons)
```
I have used this method to commit over 50k row inserts at a time and it's lightning fast. | Bulk insert huge data into SQLite using Python | [
"",
"python",
"sqlite",
""
] |
I have an application where you write a short story (maximum 130 chars) and post it on the site.
However, what I want to do is add a specific hashtag at the end of the story and post it to a specific Twitter account at the same time as it's posted on the site.
What is the best way to handle this? Are there any apps out there that I could use? I read about Twython but it seems to contain a lot of things I don't need.
Thanks in advance! | If you want something minimal, checkout <https://github.com/sixohsix/twitter>. Here is an example <https://gist.github.com/damilare/4435261> | do you mean that you don't want to import all of twython because you'd only be using some of it? If so then you could still used twython but only import some of it:
```
from twython import <thing you want>
```
I wasn't quite sure what you were asking, but other twitter API libraries for python include:
* [tweepy](https://github.com/tweepy/tweepy)
* [python-twitter](https://github.com/bear/python-twitter)
* [TweetPony](https://github.com/Mezgrman/TweetPony)
* [Python Twitter Tools](https://github.com/sixohsix/twitter)
* [twitter-gobject](https://github.com/tchx84/twitter-gobject)
* [TwitterSearch](https://github.com/ckoepp/TwitterSearch)
* [TwitterAPI](https://github.com/geduldig/TwitterAPI)
Test these out if you like and tell us which is best,
Hope this helped,
Jake Zachariah Nixon. | Posting to Twitter account from Django | [
"",
"python",
"django",
"twitter",
""
] |
I am fairly new to django and I am using it to make a website for an online game. The game already has its own auth stuff, so I am using that as a custom auth model for django.
I created a new app called 'account' to put this stuff in and added the models. I added the router and enabled it in the settings and everything works good, I can log in from the admin site and do stuff.
Now I am also trying to learn TDD, so I need to dump the auth database to a fixture.
When I run `./manage.py dumpdata account` I get an empty array back. There aren't any errors or any traceback whatsoever, just an empty array. I've fiddled with it the best I can, but I can't seem to find what the issue is.
Here are some relevant settings.
## Database
```
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'NAME': 'censored',
'USER': 'censored',
'PASSWORD': 'censored',
'HOST': 'localhost',
'PORT': '',
},
'auth_db': {
'ENGINE': 'mysql_pymysql',
'NAME': 'censored',
'USER': 'censored',
'PASSWORD': 'censored',
'HOST': '127.0.0.1',
'PORT': '3306'
}
}
```
## Router
```
class AccountRouter(object):
"""
A router to control all database operations on models in the
account application.
"""
def db_for_read(self, model, **hints):
"""
Attempts to read account models go to auth_db.
"""
if model._meta.app_label == 'account':
return 'auth_db'
return None
def db_for_write(self, model, **hints):
"""
Attempts to write account models go to auth_db.
"""
if model._meta.app_label == 'account':
return 'auth_db'
return None
def allow_relation(self, obj1, obj2, **hints):
"""
Allow relations if a model in the account app is involved.
"""
if obj1._meta.app_label == 'account' or \
obj2._meta.app_label == 'account':
return True
return None
def allow_syncdb(self, db, model):
"""
Make sure the account app only appears in the 'auth_db'
database.
"""
if model._meta.app_label == 'account':
return False
return None
```
## Django Settings
```
DATABASE_ROUTERS = ['account.router.AccountRouter']
```
I am really at a loss for what to try, any help or ideas are appreciated. | I also had the same issue, you need to specify the correct database. For example, given your code:
```
$ ./manage.py dumpdata --database=auth_db account
``` | I had similar problem. Creating an empty file called `models.py` solved the problem for me. Check if you have such a file in your app catalog and if not - create one. | django dumpdata empty array | [
"",
"python",
"django",
""
] |
I have an error message that spans across multiple (2-3) lines. I want to catch it and embed in a warning. I think that substituting new-lines into spaces is ok.
My question is, which method is the best practice. I know this is not the best kind of question, but I want to code it properly. I also might be missing something. So far I have came up with 3 methods:
1. string.replace()
2. regular expression
3. string.translate()
I was leaning towards string.translate(); however, after reading how it works, I think it's overkill to convert every character into itself except '\n'. A regexp also seems like overkill for such a simple task.
Is there any other method designated to it, or should I pick up one of the aforementioned? I care about portability and robustness more than speed but it is still somewhat relevant. | Just use the `replace` method:
```
>>> "\na".replace("\n", " ")
' a'
>>>
```
It is the simplest solution. Using a regex is overkill and also means you have to import `re`. `translate` is a little better, but still doesn't give anything that `replace` doesn't (except more typing, of course).
`replace` should run faster too. | If you want to leave all these implementation details up to the python implementation you could do:
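If the message may also contain Windows-style `\r\n` (or old Mac `\r`) line endings, chaining `replace` — or collapsing all whitespace runs with `split`/`join` — handles them; a quick sketch:

```python
msg = "error on line 3\r\nwhile parsing\nconfig file"

# chained replace: handle "\r\n" first so no stray "\r" is left behind
flat = msg.replace("\r\n", " ").replace("\n", " ").replace("\r", " ")
print(flat)  # error on line 3 while parsing config file

# alternative: collapse any whitespace runs into single spaces
flat2 = " ".join(msg.split())
```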
```
s = "This\nis\r\na\rtest"
print " ".join(s.splitlines())
# prints: This is a test
```
Note:
> This method uses the *universal newlines* approach to splitting lines.
Which means:
> **universal newlines** A manner of interpreting text streams in which all of the following are recognized as ending a line: the Unix end-of-line convention `'\n'`, the Windows convention `'\r\n'`, and the old Macintosh convention `'\r'`. See PEP 278 and PEP 3116, as well as `str.splitlines()` for an additional use.
A benefit of splitting lines over replacing linefeeds is that you can filter out lines you don't need, i.e. to avoid clutter in your log. For example, if you have this output of `traceback.format_exc()`:
```
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
ZeroDivisionError: integer division or modulo by zero
```
And you need to add only the last line(s) to your log:
```
import traceback
try:
1/0
except: # of course you wouldn't catch exceptions like this in real code
print traceback.format_exc().splitlines()[-1]
# prints: ZeroDivisionError: integer division or modulo by zero
```
For reference:
* <http://docs.python.org/2/library/stdtypes.html#str.splitlines>
* <http://docs.python.org/2/library/stdtypes.html#str.join>
* <http://docs.python.org/2/glossary.html#term-universal-newlines>
* <http://www.python.org/dev/peps/pep-0278/>
* <http://docs.python.org/2/library/traceback.html> | Good method to substitute end-lines '\n' into spaces in a string | [
"",
"python",
"string",
"replace",
""
] |
I'm using nested dictionaries as implemented using the AutoVivification class answer at [What is the best way to implement nested dictionaries?](https://stackoverflow.com/questions/635483/what-is-the-best-way-to-implement-nested-dictionaries-in-python); namely
```
class AutoVivification(dict):
"""Implementation of perl's autovivification feature."""
def __getitem__(self, item):
try:
return dict.__getitem__(self, item)
except KeyError:
value = self[item] = type(self)()
return value
a = AutoVivification()
a['foo']['bar'] = 'spam'
```
thus permitting arbitrary nesting in the dictionary. Is there a way to modify the class such that one could assign values to members using an arbitrary set of keys, but only previously defined sets of keys are allowed when trying to access / read from the member? For example,
```
print a['foo']['bar']
print a['foo']['eggs']
```
currently outputs
```
spam
{}
```
It would be great if the second one gave an error instead, as a['foo']['eggs'] hasn't been defined... | The problem you'll run into is that, in order to set an item on a nested dictionary, you must first be able to get all the parent items. For example:
```
d[1][2][3] = 42
```
requires *getting* `d[1][2]` in order to *set* `d[1][2][3]`. There's no way to know whether an assignment is in process when you are accessing the intermediate dictionaries, so the only way to get assignment to work is to always create the sub-dictionary on access. (You could return some kind of proxy object instead of creating the sub-dictionary, and defer creation of the intermediate dictionaries until assignment, but you're still not going to get an error when you access a path that doesn't exist.)
The simplest way around this is to use a single tuple key rather than repeated subkeys. In other words, instead of setting `d[1][2][3]` you would set `d[1, 2, 3]`. Assignments then are self-contained operations: they do not require getting any intermediate nesting levels, so you can create the intermediate levels *only* on assignment.
As a bonus, you may find working with a tuple much more straightforward when passing multiple keys around, since you can just stick 'em in the `[]` and get the item you want.
You *could* do this with a single dictionary, using the tuples as keys. However, this loses the hierarchical structure of the data. The implementation below uses sub-dictionaries. A dictionary subclass called `node` is used so that we can assign an attribute on the dictionary to represent the value of the node at that location; this way, we can store values on the intermediate nodes as well as at the leaves. (It has a `__repr__` method that shows both the node's value and its children, if it has any.) The `__setitem__` method of the `tupledict` class handles creating intermediate nodes when assigning an element. `__getitem__` traverses the nodes to find the value you want. (If you want to access the individual nodes *as* nodes, you can use `get()` to reach them one at a time.)
```
class tupledict(dict):
class node(dict):
def __repr__(self):
if self:
if hasattr(self, "value"):
return repr(self.value) + ", " + dict.__repr__(self)
return dict.__repr__(self)
else:
return repr(self.value)
def __init__(self):
pass
def __setitem__(self, key, value):
if not isinstance(key, tuple): # handle single value
key = [key]
d = self
for k in key:
if k not in d:
dict.__setitem__(d, k, self.node())
d = dict.__getitem__(d, k)
d.value = value
def __getitem__(self, key):
if not isinstance(key, tuple):
key = [key]
d = self
for k in key:
try:
d = dict.__getitem__(d, k)
except KeyError:
raise KeyError(key[0] if len(key) == 1 else key)
try:
return d.value
except AttributeError:
raise KeyError(key[0] if len(key) == 1 else key)
```
Usage:
```
td = tupledict()
td['foo', 'bar'] = 'spam'
td['foo', 'eggs'] # KeyError
key = 'foo', 'bar'
td[key] # 'spam'
``` | I don't think there is any way to do exactly what you are trying to do, but if you are okay with a slight modification to how you set the keys you can get something pretty similar by just using regular dictionaries.
```
def nested_dict_set(d, keys, value):
for k in keys[:-1]:
d = d.setdefault(k, {})
d[keys[-1]] = value
a = {}
nested_dict_set(a, ['foo', 'bar'], 'spam')
print a['foo']['bar']
print a['foo']['eggs'] # raises a KeyError
``` | Nested dictionaries in python with error when accessing non-existent key | [
"",
"python",
"dictionary",
"nested",
""
] |
I'm trying to capture the numbers in a decimal inches statement without catching the `"`. An expression that works well in Perl seems to fail in Python and I can't see why.
In the following two expressions I expect to see `1` and `1.5` but in Python see instead `1"` and `1.5"` and I expect them to work the same. What am I missing?
### Perl:
```
DB<15> x '1"' =~ m{^(?:(\d+(?:\.\d+)*)")}
0 1
DB<16> x '1.5"' =~ m{^(?:(\d+(?:\.\d+)*)")}
0 1.5
```
### Python:
```
>>> re.search(r'^(?:(\d+(?:\.\d+)*)")', '1"').group()
'1"'
>>> re.search(r'^(?:(\d+(?:\.\d+)*)")', '1.5"').group()
'1.5"'
```
Ultimately I was hoping to use an expression like `^(?:(\d+)\')|(?:(\d+(?:\.\d+)*)")` to match either 1' or 1" or 1.5" and, by the location of the match, tell which expression worked. hwnd pointed out `findall`, which I had previously overlooked, so I expect my solution will look something like:
```
>>> re.findall(r'^(?:(\d+)\')|(?:(\d+(?:\.\d+)*)")', '1\'')
[('1', '')]
>>> re.findall(r'(?:(\d+)\')|(?:(\d+(?:\.\d+)*)")', '1\' 1" 1.5"')
[('1', ''), ('', '1'), ('', '1.5')]
```
Here is another interesting possibility using finditer/groupdict/comprehension:
```
>>> [m.groupdict() for m in re.finditer(r'(?P<feet>(\d+)\')|(?P<inches>(\d+(?:\.\d+)*)")', '1\' 1" 1.5"')]
[{'feet': "1'", 'inches': None},
{'feet': None, 'inches': '1"'},
{'feet': None, 'inches': '1.5"'}]
```
Thank you all for another enlightening trip into Python. | You can easily do..
```
import re
string = 'I have values "1" and "1.5" also "12.555"'
m = re.findall(r'\"(\d+|\d+\.\d+)\"', string)
print ", ".join(m)
```
Output:
```
1, 1.5, 12.555
``` | Try:
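The underlying cause of the Perl/Python difference in the question is worth spelling out: in Python, `match.group()` with no argument returns the *entire* match (quote included), while `match.group(1)` returns only the first captured group:

```python
import re

m = re.search(r'^(?:(\d+(?:\.\d+)*)")', '1.5"')
print(m.group())   # 1.5"  (whole match, quote included)
print(m.group(1))  # 1.5   (first capture only)
```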
```
re.search(r'^(?:(\d+(?:\.\d+)*)")', '1.5"').group(1)
```
See [re](http://docs.python.org/2/library/re.html#match-objects):
> `group([group1, ...])`
>
> Returns one or more subgroups of the match. If there is a single argument, the result is a single string; if there are multiple arguments, the result is a tuple with one item per argument. Without arguments, group1 defaults to zero **(the whole match is returned)**. If a `groupN` argument is zero, the corresponding return value is the entire matching string; if it is in the inclusive range [1..99], it is the string matching the corresponding parenthesized group. (*emphasis mine*)
Now, you can use named capture groups both in Perl (unless you are stuck with a very old version) and Python.
So, I would actually recommend:
```
>>> re.search(r'^(?:(?P<inches>\d+(?:\.\d+){0,1})")', '1.5"').groupdict()['inches']
'1.5'
``` | Perl regex vs. Python regex non-capture seems to work differently | [
"",
"python",
"regex",
""
] |
I have a page that shows 10 messages per user (don't ask me why).
I have the following code:
`SELECT *, row_number() over(partition by user_id) as row_num
FROM "posts"
WHERE row_num <= 10`
It doesn't work.
When I do this:
`SELECT *
FROM (
SELECT *, row_number() over(partition by user_id) as row_num FROM "posts") as T
WHERE row_num <= 10`
It does work.
Why do I need nested query to see `row_num` column? Btw, in first request I actually see it in results but can't use `where` keyword for this column. | It seems to be the same "rule" as any query, column aliases aren't visible to the `WHERE` clause;
This will also fail;
```
SELECT id AS newid
FROM test
WHERE newid=1; -- must use "id" in WHERE clause
``` | SQL Query like:
```
SELECT *
FROM table
WHERE <condition>
```
will execute in the following order:
```
3.SELECT *
1.FROM table
2.WHERE <condition>
```
So, as Joachim Isaksson says, columns in the `SELECT` clause are not visible in the `WHERE` clause because of the processing order.
In your second query, the `row_num` column is produced in the `FROM` clause first, so it is visible in the `WHERE` clause.
[Here](http://blog.sqlauthority.com/2009/04/06/sql-server-logical-query-processing-phases-order-of-statement-execution/) is a simple list of the steps in the order they execute.
"",
"sql",
"postgresql",
""
] |
I need to calculate the arcsine function of small values that are in the form of mpmath's "mpf" floating-point bignums.
What I call a "small" value is, for example, `e/4/(10**7)` = 0.000000067957045711476130884...
Here is a result of a test on my machine with mpmath's built-in asin function:
```
import gmpy2
from mpmath import *
from time import time
mp.dps = 10**6
val=e/4/(10**7)
print "ready"
start=time()
temp=asin(val)
print "mpmath asin: "+str(time()-start)+" seconds"
>>> 155.108999968 seconds
```
This is a particular case: I work with somewhat small numbers, so I'm asking myself if there is a way to calculate it in python that actually beats mpmath for this particular case (= for small values).
Taylor series are actually a good choice here because they converge very fast for small arguments. But I still need to accelerate the calculations further somehow.
Actually there are some problems:
1) Binary splitting is ineffective here because it shines only when you can write the argument as a small fraction. A full-precision float is given here.
2) arcsin is a non-alternating series, thus Van Wijngaarden or sumalt transformations are ineffective too (unless there is a way I'm not aware of to generalize them to non-alternating series).
<https://en.wikipedia.org/wiki/Van_Wijngaarden_transformation>
The only acceleration left I can think of is Chebyshev polynomials. Can Chebyshev polynomials be applied on the arcsin function? How to? | Actually binary splitting does work very well, if combined with iterated argument reduction to balance the number of terms against the size of the numerators and denominators (this is known as the bit-burst algorithm).
Here is a binary splitting implementation for mpmath based on repeated application of the formula `atan(t) = atan(p/2^q) + atan((t*2^q - p) / (2^q + p*t))`. This formula was suggested recently by Richard Brent (in fact mpmath's atan already uses a single invocation of this formula at low precision, in order to look up atan(p/2^q) from a cache). If I remember correctly, MPFR also uses the bit-burst algorithm to evaluate atan, but it uses a slightly different formula, which possibly is more efficient (instead of evaluating several different arctangent values, it does analytic continuation using the arctangent differential equation).
```
from mpmath.libmp import MPZ, bitcount
from mpmath import mp
def bsplit(p, q, a, b):
if b - a == 1:
if a == 0:
P = p
Q = q
else:
P = p * p
Q = q * 2
B = MPZ(1 + 2 * a)
if a % 2 == 1:
B = -B
T = P
return P, Q, B, T
else:
m = a + (b - a) // 2
P1, Q1, B1, T1 = bsplit(p, q, a, m)
P2, Q2, B2, T2 = bsplit(p, q, m, b)
T = ((T1 * B2) << Q2) + T2 * B1 * P1
P = P1 * P2
B = B1 * B2
Q = Q1 + Q2
return P, Q, B, T
def atan_bsplit(p, q, prec):
"""computes atan(p/2^q) as a fixed-point number"""
if p == 0:
return MPZ(0)
# FIXME
nterms = (-prec / (bitcount(p) - q) - 1) * 0.5
nterms = int(nterms) + 1
if nterms < 1:
return MPZ(0)
P, Q, B, T = bsplit(p, q, 0, nterms)
if prec >= Q:
return (T << (prec - Q)) // B
else:
return T // (B << (Q - prec))
def atan_fixed(x, prec):
t = MPZ(x)
s = MPZ(0)
q = 1
while t:
q = min(q, prec)
p = t >> (prec - q)
if p:
s += atan_bsplit(p, q, prec)
u = (t << q) - (p << prec)
v = (MPZ(1) << (q + prec)) + p * t
t = (u << prec) // v
q *= 2
return s
def atan1(x):
prec = mp.prec
man = x.to_fixed(prec)
return mp.mpf((atan_fixed(man, prec), -prec))
def asin1(x):
x = mpf(x)
return atan1(x/sqrt(1-x**2))
```
With this code, I get:
```
>>> from mpmath import *
>>> mp.dps = 1000000
>>> x = e/4/(10**7)
>>> from time import time
>>> start = time(); y1 = asin(x); print time() - start
58.8485069275
>>> start = time(); y2 = asin1(x); print time() - start
8.26498985291
>>> nprint(y2 - y1)
-2.31674e-1000000
```
Warning: atan1 assumes 0 <= x < 1/2, and the determination of the number of terms might not be optimal or correct (fixing these issues is left as an exercise to the reader). | Can you use the `mpfr` type that is included in gmpy2?
```
>>> import gmpy2
>>> gmpy2.get_context().precision = 3100000
>>> val = gmpy2.exp(1)/4/10**7
>>> from time import time
>>> start=time();r=gmpy2.asin(val);print time()-start
3.36188197136
```
In addition to supporting the GMP library, gmpy2 also supports the MPFR and MPC multiple-precision libraries.
Disclaimer: I maintain gmpy2. | Fastest possible method for the arcsin function on small, arbitrary floating-point values | [
"",
"python",
"trigonometry",
"bignum",
"taylor-series",
"mpmath",
""
] |
I have a small problem with the Boolean values `True` and `False`.
I have defined a procedure `weekend` which takes a string as its input and returns the Boolean `True` if the day is `'Saturday'` or `'Sunday'`, and `False` otherwise.
Here is my `weekend` function:
```
def weekend(day):
if day == 'Saturday' or day == 'Sunday':
return "True"
else:
return "False"
```
Here is my output:
```
>>>print weekend('Monday')
False
>>>print weekend('Saturday')
True
>>>print weekend('July')
False
```
But as you see in my code, I'm returning a **string**, while I want to return an actual **Boolean** `True` or `False`.
How can I do that?
Thanks. | Try this:
```
def weekend(day):
if day == 'Saturday' or day == 'Sunday':
return True
else:
return False
```
Or this:
```
def weekend(day):
return day == 'Saturday' or day == 'Sunday'
```
Or even simpler:
```
def weekend(day):
return day in ('Saturday', 'Sunday')
```
Anyway: in Python the boolean values are `True` and `False`, without quotes - but also know that there exist several *falsy* values - that is, values that behave exactly like `False` if used in a condition. For example: `""`, `[]`, `None`, `{}`, `0`, `()`. | This is the shortest way to write the function and output a boolean
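The falsy behaviour mentioned above can be checked directly with `bool()` — a small illustration:

```python
# each of these falsy values fails an `if` test, just like False
falsy_values = ["", [], None, {}, 0, ()]
assert all(bool(v) is False for v in falsy_values)

# by contrast, non-empty containers and non-zero numbers are truthy
assert all(bool(v) is True for v in ["x", [0], 1, (None,)])
```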
```
def weekend(day):
return day == 'Saturday' or day == 'Sunday'
```
or
```
def weekend(day):
return day in ('Saturday', 'Sunday')
``` | Return a Boolean instead of a string containing True or False in Python | [
"",
"python",
"python-2.7",
"boolean",
""
] |
Why is it that some objects keep their identities while others change at each initialization like so?
```
>>> id(2 ** 850)
26826480
>>> id(2 ** 850)
26826480
>>> id(2 ** 850)
26826480
>>> id(2 ** 850)
26826480
>>> id(2 ** 851)
26826624
>>> id(2 ** 851)
26826480
>>> id(2 ** 851)
26826624
>>> id(2 ** 851)
26826480
>>> id(2 ** 851)
26826624
```
I've also wrote the following to find a pattern but the result seems meaningless. I can't see a pattern.
```
def identifier():
ids = list()
i = 0
while i < 1000:
a = id(2 ** i)
b = id(2 ** i)
c = id(2 ** i)
d = id(2 ** i)
if a == b == c == d:
ids.append(i)
i += 1
print(ids)
identifier()
```
and the result is
```
[0, 1, 2, 3, 4, 5, 6, 7, 8, 11, 18, 23, 24, 28, 31, 32, 33, 35, 37, 38, 39, 41, 42,
43, 44, 45, 47, 49, 50, 51, 52, 53, 55, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68,
69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90,
91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109,
110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 122, 124, 126, 128, 130, 132,
134, 136, 138, 140, 142, 144, 146, 148, 150, 151, 152, 153, 154, 155, 156, 157, 158,
159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175,
176, 177, 178, 179, 180, 182, 184, 186, 188, 190, 192, 194, 196, 198, 200, 202, 204,
206, 208, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224,
225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 242,
244, 246, 248, 250, 252, 254, 256, 258, 260, 262, 264, 266, 268, 270, 271, 272, 273,
274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290,
291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 302, 304, 306, 308, 310, 312, 314,
316, 318, 320, 322, 324, 326, 328, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339,
340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356,
357, 358, 359, 360, 362, 364, 366, 368, 370, 372, 374, 376, 378, 380, 382, 384, 386,
388, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405,
406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 422, 424,
426, 428, 430, 432, 434, 436, 438, 440, 442, 444, 446, 448, 450, 451, 452, 453, 454,
455, 456, 457, 458, 459, 460, 461, 462, 463, 464, 465, 466, 467, 468, 469, 470, 471,
472, 473, 474, 475, 476, 477, 478, 479, 480, 482, 484, 486, 488, 490, 492, 494, 496,
498, 500, 502, 504, 506, 508, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520,
521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537,
538, 539, 540, 542, 544, 546, 548, 550, 552, 554, 556, 558, 560, 562, 564, 566, 568,
570, 571, 572, 573, 574, 575, 576, 577, 578, 579, 580, 581, 582, 583, 584, 585, 586,
587, 588, 589, 590, 591, 592, 593, 594, 595, 596, 597, 598, 599, 600, 602, 604, 606,
608, 610, 612, 614, 616, 618, 620, 622, 624, 626, 628, 630, 631, 632, 633, 634, 635,
636, 637, 638, 639, 640, 641, 642, 643, 644, 645, 646, 647, 648, 649, 650, 651, 652,
653, 654, 655, 656, 657, 658, 659, 660, 662, 664, 666, 668, 670, 672, 674, 676, 678,
680, 682, 684, 686, 688, 690, 691, 692, 693, 694, 695, 696, 697, 698, 699, 700, 701,
702, 703, 704, 705, 706, 707, 708, 709, 710, 711, 712, 713, 714, 715, 716, 717, 718,
719, 720, 722, 724, 726, 728, 730, 732, 734, 736, 738, 740, 742, 744, 746, 748, 750,
751, 752, 753, 754, 755, 756, 757, 758, 759, 760, 761, 762, 763, 764, 765, 766, 767,
768, 769, 770, 771, 772, 773, 774, 775, 776, 777, 778, 779, 780, 782, 784, 786, 788,
790, 792, 794, 796, 798, 800, 802, 804, 806, 808, 810, 811, 812, 813, 814, 815, 816,
817, 818, 819, 820, 821, 822, 823, 824, 825, 826, 827, 828, 829, 830, 831, 832, 833,
834, 835, 836, 837, 838, 839, 840, 842, 844, 846, 848, 850, 852, 854, 856, 858, 860,
862, 864, 866, 868, 870, 871, 872, 873, 874, 875, 876, 877, 878, 879, 880, 881, 882,
883, 884, 885, 886, 887, 888, 889, 890, 891, 892, 893, 894, 895, 896, 897, 898, 899,
900, 902, 904, 906, 908, 910, 912, 914, 916, 918, 920, 922, 924, 926, 928, 930, 931,
932, 933, 934, 935, 936, 937, 938, 939, 940, 941, 942, 943, 944, 945, 946, 947, 948,
949, 950, 951, 952, 953, 954, 955, 956, 957, 958, 959, 960, 962, 964, 966, 968, 970,
972, 974, 976, 978, 980, 982, 984, 986, 988, 990, 991, 992, 993, 994, 995, 996, 997,
998, 999]
```
I can see why there is a consistency until 2 \*\* 8 since python caches the numbers from -5 to 256 but I can't understand the behavior after this. Are there any python gurus out there who can shed some light on this?
Edit: This strange behavior piqued my interest, so I would like to learn the reason for it. I've checked the mailing lists and the source code, but they don't say anything beyond the fact that small integers are cached and that int objects are allocated in blocks to speed up the allocation and deallocation of int objects in programs.
```
>>> id(2 ** 950)
139686876237120
>>> id(2 ** 950)
139686876237120
>>> id(2 ** 951)
27205680
>>> id(2 ** 951)
27206160
>>> id(2 ** 951)
27206320
>>> id(2 ** 950)
139686876237120
>>> id(2 ** 950)
139686876237120
```
Why is it that, after the first int object above is destroyed, the reconstructed object on the last line gets the same memory address? | It's an implementation detail. The `id` in CPython is basically the memory address of the object. Sometimes temporary objects that are created and destroyed in the same line of execution (like the result of `2**850`) will have the same address in the next line of execution. Don't rely on the value!
Here's an interesting case:
```
>>> id(object())
34418928
>>> id(object())
34418928
>>> id(object()),id(object())
(34418912, 34418912)
>>> id(object()) == id(object())
True
>>> object() is object()
False
```
Though the objects look like they have the same id, it is because they happened to get created and destroyed in the same area of memory. Even equality comparison of the id's of two object instances returns `True`. But in reality (via `is`, the *correct* way of comparing two objects) the two objects are different. | Quoting the [docs](http://docs.python.org/2/library/functions.html#id) for `id`:
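A variation on the example above: as long as references keep both objects alive, their identities are guaranteed to be distinct — it is only non-overlapping lifetimes that allow an address to be reused:

```python
a = object()
b = object()

# both objects are alive at the same time, so they must be distinct
assert a is not b
assert id(a) != id(b)

# id(object()) == id(object()) may still compare equal, because each
# temporary object is destroyed before the next one is created
```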
> Two objects with non-overlapping lifetimes may have the same id() value.
When you get the same `id()` results, it doesn't mean you have the same object. In fact, two unequal numbers may have the same ID if one of the objects is deallocated before the other comes into existence. Whether the IDs were the same in any of your tests was an implementation detail. | Can anyone define internals of this magic behavior of int objects in python | [
"",
"python",
"python-3.x",
""
] |
I have duplicate rows in my table, how can I delete them based on a single column's value?
Eg
```
uniqueid, col2, col3 ...
1, john, simpson
2, sally, roberts
1, johnny, simpson
delete any duplicate uniqueIds
to get
1, John, Simpson
2, Sally, Roberts
``` | You can `DELETE` from a cte:
```
WITH cte AS (SELECT *,ROW_NUMBER() OVER(PARTITION BY uniqueid ORDER BY col2)'RowRank'
FROM Table)
DELETE FROM cte
WHERE RowRank > 1
```
The `ROW_NUMBER()` function assigns a number to each row. `PARTITION BY` is used to start the numbering over for each item in that group, in this case each value of `uniqueid` will start numbering at 1 and go up from there. `ORDER BY` determines which order the numbers go in. Since each `uniqueid` gets numbered starting at 1, any record with a `ROW_NUMBER()` greater than 1 has a duplicate `uniqueid`
To get an understanding of how the `ROW_NUMBER()` function works, just try it out:
```
SELECT *,ROW_NUMBER() OVER(PARTITION BY uniqueid ORDER BY col2)'RowRank'
FROM Table
ORDER BY uniqueid
```
You can adjust the logic of the `ROW_NUMBER()` function to adjust which record you'll keep or remove.
For instance, perhaps you'd like to do this in multiple steps, first deleting records with the same last name but different first names, you could add last name to the `PARTITION BY`:
```
WITH cte AS (SELECT *,ROW_NUMBER() OVER(PARTITION BY uniqueid, col3 ORDER BY col2)'RowRank'
FROM Table)
DELETE FROM cte
WHERE RowRank > 1
``` | You probably have a row id that *is* assigned by the DB upon insertion and is actually unique. I'll call this rowId in my example.
```
rowId |uniqueid |col2 |col3
----- |-------- |---- |----
1 10 john simpson
2 20 sally roberts
3 10 johnny simpson
```
You can remove duplicates by grouping on the thing that is supposed to be unique (whether it be one column or many), then grabbing one rowId from each group and deleting everything else. The inner query keeps only the minimum rowId from each group, so every row outside that set is a duplicate and gets deleted.
```
select *
--DELETE
FROM MyTable
WHERE rowId NOT IN
(SELECT MIN(rowId)
FROM MyTable
GROUP BY uniqueid);
```
You could also use MAX instead of MIN with similar results. | SQL Server 2008: delete duplicate rows | [
"",
"sql",
"sql-server-2008",
"duplicates",
""
] |
I've got these tables in my database:
**Tourist** - this is the first table
```
Tourist_ID - primary key
name...etc...
```
**EXTRA\_CHARGES**
```
Extra_Charge_ID - primary key
Extra_Charge_Description
Amount
```
**Tourist\_Extra\_Charges**
```
Tourist_Extra_Charge_ID
Extra_Charge_ID - foreign key
Tourist_ID - foreign key
```
---
So here is the example
I have one tourist with Tourist\_ID 86. This tourist has extra charges with Extra\_Charge\_ID 7 and Extra\_Charge\_ID 11.
I'm trying to write a query that returns the name of the tourist together with all the charges in the EXTRA\_CHARGES table that **don't** belong to this tourist.
Here is the query I tried, but it returns nothing.
```
SELECT
Tourist.Name
, EXTRA_CHARGES.Extra_Charge_Description
, EXTRA_CHARGES.Amount
FROM
Tourist
INNER JOIN TOURIST_EXTRA_CHARGES
ON Tourist.Tourist_ID = TOURIST_EXTRA_CHARGES.Tourist_ID
INNER JOIN EXTRA_CHARGES
ON TOURIST_EXTRA_CHARGES.Extra_Charge_ID = EXTRA_CHARGES.Extra_Charge_ID
WHERE
Tourist.Tourist_ID= 86
and EXTRA_CHARGES.Extra_Charge_ID NOT IN
( SELECT Extra_Charge_ID
FROM TOURIST_EXTRA_CHARGES te
WHERE te.Tourist_ID = 86
)
```
Of course, I can get just the charges with this query:
```
SELECT * FROM EXTRA_CHARGES e
WHERE e.Extra_Charge_ID NOT IN
(SELECT Extra_Charge_ID from TOURIST_EXTRA_CHARGES te
WHERE te.Tourist_ID = 86
)
```
but I can't find a way to get the name of this tourist | You can use two options, both are pretty much the same, but [one may perform better than the other](https://stackoverflow.com/questions/2246772/whats-the-difference-between-not-exists-vs-not-in-vs-left-join-where-is-null) depending on your DBMS.
The principle behind both is the same: get a cross join of tourists and extra charges so you have every extra charge for every tourist, then use either `NOT EXISTS` or `LEFT JOIN/IS NULL` to eliminate the extra charges that the tourist already has:
```
SELECT Tourist.Name,
EXTRA_CHARGES.Extra_Charge_Description,
EXTRA_CHARGES.Amount
FROM Tourist
CROSS JOIN EXTRA_CHARGES
WHERE Tourist.Tourist_ID= 86
AND NOT EXISTS
( SELECT 1
FROM TOURIST_EXTRA_CHARGES
WHERE TOURIST_EXTRA_CHARGES.Tourist_ID = Tourist.Tourist_ID
AND TOURIST_EXTRA_CHARGES.Extra_Charge_ID = EXTRA_CHARGES.Extra_Charge_ID
);
SELECT Tourist.Name,
EXTRA_CHARGES.Extra_Charge_Description,
EXTRA_CHARGES.Amount
FROM Tourist
CROSS JOIN EXTRA_CHARGES
LEFT JOIN TOURIST_EXTRA_CHARGES
ON TOURIST_EXTRA_CHARGES.Tourist_ID = Tourist.Tourist_ID
AND TOURIST_EXTRA_CHARGES.Extra_Charge_ID = EXTRA_CHARGES.Extra_Charge_ID
WHERE Tourist.Tourist_ID = 86
AND TOURIST_EXTRA_CHARGES.Tourist_Extra_Charge_ID IS NULL;
```
---
**EDIT**
Since the two criteria you are applying are logically different, you need to use two queries to get it. The first is as before, the extra charges that a tourist does not have, and the second is for tourists who have all extra charges
```
SELECT Tourist.Name,
EXTRA_CHARGES.Extra_Charge_Description,
EXTRA_CHARGES.Amount
FROM Tourist
CROSS JOIN EXTRA_CHARGES
LEFT JOIN TOURIST_EXTRA_CHARGES
ON TOURIST_EXTRA_CHARGES.Tourist_ID = Tourist.Tourist_ID
AND TOURIST_EXTRA_CHARGES.Extra_Charge_ID = EXTRA_CHARGES.Extra_Charge_ID
WHERE Tourist.Tourist_ID = 1
AND TOURIST_EXTRA_CHARGES.Tourist_Extra_Charge_ID IS NULL
UNION ALL
SELECT Tourist.Name,
NULL,
NULL
FROM Tourist
CROSS JOIN EXTRA_CHARGES
LEFT JOIN TOURIST_EXTRA_CHARGES
ON TOURIST_EXTRA_CHARGES.Tourist_ID = Tourist.Tourist_ID
AND TOURIST_EXTRA_CHARGES.Extra_Charge_ID = EXTRA_CHARGES.Extra_Charge_ID
WHERE Tourist.Tourist_ID = 1
GROUP BY Tourist.Name
HAVING COUNT(*) = COUNT(TOURIST_EXTRA_CHARGES.Tourist_Extra_Charge_ID);
```
**[Example on SQL Fiddle](http://sqlfiddle.com/#!2/28665/7)** | You may include the tourist name (for Tourist\_ID=86) as a subquery in the select statement:
```
SELECT (SELECT Tourist.Name FROM Tourist WHERE Tourist_ID=86) TouristName, e.*
FROM EXTRA_CHARGES e
WHERE e.Extra_Charge_ID NOT IN
(SELECT Extra_Charge_ID
FROM TOURIST_EXTRA_CHARGES te
WHERE te.Tourist_ID = 86
)
``` | sql subquery issue with inner join | [
"",
"sql",
""
] |
In Python I am trying to figure out how to evaluate commands given as strings in a program. For example, consider the built in math functions `sin`, `cos` and `tan`
Say I am given these functions as a list;
```
li = ['sin', 'cos', 'tan']
```
Now, I want to iterate over each element in the list and apply each function to a numeric argument:
```
x = 45
for func in li:
func(x)
```
The above clearly will not work as func is a string and just shows the idea. In lisp I could make each function a quoted symbol and then evaluate similarly to the above (well in lisp syntax of course, but the idea is the same).
How is this done in python?
Thanks and let me know if you need more information! | Just use the functions themselves:
```
from math import sin, cos, tan
li = [sin, cos, tan]
```
If you really need to use strings, create a dict:
```
funcs = {'sin': sin, 'cos': cos, 'tan': tan}
func = funcs[string]
func(x)
``` | There are a several options here, I've listed some of the better options below:
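Putting the dict-based version together with the original loop — note that `math`'s trigonometric functions expect radians, so `x = 45` would be treated as 45 radians; converting with `math.radians` is probably what's intended:

```python
import math

funcs = {'sin': math.sin, 'cos': math.cos, 'tan': math.tan}

x = math.radians(45)  # 45 degrees, converted to radians
results = {name: funcs[name](x) for name in ['sin', 'cos', 'tan']}

# tan(45 degrees) is 1, and sin equals cos at 45 degrees
assert abs(results['tan'] - 1.0) < 1e-9
assert abs(results['sin'] - results['cos']) < 1e-9
```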
* If all of the functions come from the same module, you can use `getattr(module, name)` to look up the function by name. In this case `sin`, `cos`, and `tan` are all functions from `math`, so you could do the following:
```
import math
li = ['sin', 'cos', 'tan']
x = 45
for func in li:
x = getattr(math, func)(x)
```
* Create a dictionary mapping names to functions, and use that as a lookup table:
```
import math
table = {'sin': math.sin, 'cos': math.cos, 'tan': math.tan}
li = ['sin', 'cos', 'tan']
x = 45
for func in li:
x = table[func](x)
```
* Put the functions in your list directly:
```
import math
li = [math.sin, math.cos, math.tan]
x = 45
for func in li:
x = func(x)
``` | python evaluate commands in form of strings | [
"",
"python",
"functional-programming",
"symbols",
"evaluation",
""
] |