| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
So say I have 4 lists of 10 different options each and I want to pick one item from each of the four lists a user-specified amount of times. But no repeated outputs. Is this possible? I can't seem to write anything that will not repeat outputs. | Let's assume we have these `4` lists:
```
>>> lists = [range(10*i, 10*(i+1)) for i in range(4)]
>>> lists
[[0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [10, 11, 12, 13, 14, 15, 16, 17, 18, 19], [20, 21, 22, 23, 24, 25, 26, 27, 28, 29], [30, 31, 32, 33, 34, 35, 36, 37, 38, 39]]
```
First of all, shuffle them:
```
>>> import random
>>> for lst in lists: random.shuffle(lst)
>>> lists
[[6, 8, 2, 1, 3, 5, 9, 0, 7, 4], [17, 12, 16, 10, 14, 15, 18, 11, 13, 19], [20, 28, 23, 21, 27, 25, 24, 29, 26, 22], [35, 32, 38, 31, 39, 34, 30, 33, 36, 37]]
```
And then output the values with `zip`:
```
>>> for items in zip(*lists):
...     print(items)
(6, 17, 20, 35)
(8, 12, 28, 32)
(2, 16, 23, 38)
(1, 10, 21, 31)
(3, 14, 27, 39)
(5, 15, 25, 34)
(9, 18, 24, 30)
(0, 11, 29, 33)
(7, 13, 26, 36)
(4, 19, 22, 37)
```
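Put together, the whole approach can be wrapped in one function. A sketch (Python 3, where the `range` objects must be converted to lists before shuffling; `unique_combinations` is my name for it):

```python
import random

def unique_combinations(lists, count):
    # copy so the caller's lists survive, then shuffle each column independently
    shuffled = [list(lst) for lst in lists]
    for lst in shuffled:
        random.shuffle(lst)
    # zip pairs up the shuffled columns; slice off the requested amount
    return list(zip(*shuffled))[:count]

lists = [list(range(10 * i, 10 * (i + 1))) for i in range(4)]
for combo in unique_combinations(lists, 5):
    print(combo)
```

Because every item appears exactly once per column, no two emitted tuples can repeat.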
If you need only the specified amount of them, just use [`islice`](http://docs.python.org/2/library/itertools.html#itertools.islice):
```
>>> from itertools import islice
>>> for items in islice(zip(*lists),5):
...     print(items)
(6, 17, 20, 35)
(8, 12, 28, 32)
(2, 16, 23, 38)
(1, 10, 21, 31)
(3, 14, 27, 39)
``` | ```
population = [1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1]
population = set(population)
samples = random.sample(population, 3)
``` | Python: Random from multiple lists, no repeats | [
"",
"python",
""
] |
I'm planning to debug a Joomla site by logging each query and its execution time to a database table. I have more than 10 models which run different queries. I'm pretty sure that all the queries go through a single place/class before executing, but I have no idea where/what that place/class is.
My issue is: is there a central place I can edit to log each database query and its execution time? I mean something like editing a core file just to log every SQL query and its execution time.
How can I get it done? | Rather than trying to do this programmatically with brute force, it seems it would be far easier and less intrusive to use a proper SQL benchmarking tool such as [MySQL Benchmark Suite](http://dev.mysql.com/doc/refman/5.0/en/mysql-benchmarks.html) Another possible non-brute-force option might be [Toad World](http://www.toadworld.com/MYSQL/SpotlightonMySQL/tabid/620/Default.aspx)
If you wanted to stay away from third-party tools, a [slow query log](http://dev.mysql.com/doc/refman/5.1/en/slow-query-log.html) might be the place to start. | Have you considered using Joomla's built-in [System Debug](http://learn.theartofjoomla.com/developing-extensions/setting-up-your-joomla-site-for-debugging.html)? | Joomla SQL Query Debug Method | [
"",
"mysql",
"sql",
"debugging",
"joomla",
"joomla2.5",
""
] |
How to stop a transaction if an if condition fails?
I want this transaction to stop if `IF (@dela = '01-01-2013')` is true, but instead it raises the error and proceeds with the `UPDATE` execution.
```
CREATE TRIGGER TR_AdaugareModificareOfertaSpeciala
ON OferteSpeciale
FOR UPDATE, INSERT
AS
BEGIN TRAN T1;
DECLARE @dela DATETIME;
SET @dela = (SELECT dela FROM INSERTED);
IF (UPDATE(codP) OR UPDATE(codM) OR UPDATE(dela))
BEGIN
RAISERROR('Nu se poate modifica cheia primara.', 1, 1);
ROLLBACK TRAN T1;
END
SAVE TRANSACTION T1;
IF (@dela = '01-01-2013')
BEGIN
RAISERROR('Data nu este corecta.', 1, 1);
ROLLBACK TRAN T1;
END
```
Example where `dela = '01-01-2013'`:
```
UPDATE OferteSpeciale SET pret = 23.69 where codP = 'P1' and codM = 'M1';
```
It raises the error but also makes the update.
Thank you. | You have to know that SQL Server DML triggers are **always** set-based, not row-based. So the `inserted` and `deleted` tables may contain **multiple rows**, not just one.
If you want to cancel the update/insert statement when `(@dela = '01-01-2013')` then you could use this condition:
```
IF EXISTS(SELECT * FROM inserted WHERE dela = '20130101')
BEGIN
ROLLBACK;
RAISERROR('The starting date is wrong', 16, 1);
END
```
Note 1: [SMALL]DATE[TIME][2] constants should follow ISO8601 rules: `yyyymmdd`, `yyyymmdd hh:mm:ss` or `yyyy-mm-dd`, `yyyy-mm-ddThh:mm:ss`.
Note 2: The error message from
```
IF (UPDATE(codP) OR UPDATE(codM) OR UPDATE(dela))
BEGIN
RAISERROR('Nu se poate modifica cheia primara.', 1, 1);
ROLLBACK TRAN T1;
END
```
is misleading if the PK doesn't contain the codP, codM and dela columns.
Note 3: The severity level for RAISERROR statement should be 16 not 1. A RAISERROR statement with a severity level equal to 1 is pretty much like the PRINT statement. | I believe you could do this with an [INSTEAD OF UPDATE](http://msdn.microsoft.com/en-us/library/ms179288.aspx) trigger.
```
CREATE TRIGGER TR_AdaugareModificareOfertaSpeciala
ON OferteSpeciale
INSTEAD OF UPDATE
AS
DECLARE @dela DATETIME;
SET @dela = (SELECT dela FROM INSERTED);
IF (UPDATE(codP) OR UPDATE(codM) OR UPDATE(dela)) BEGIN
RAISERROR('Nu se poate modifica cheia primara.', 1, 1);
END
ELSE IF (@dela = '01-01-2013') BEGIN
RAISERROR('Data nu este corecta.', 1, 1);
END
ELSE BEGIN
UPDATE o SET dela = i.dela, codm = i.codm, codp = i.codp, pret = i.pret-- add rest of columns here
FROM OferteSpeciale o
JOIN Inserted i ON o.[Primarykey] = i.[Primarykey]
END
``` | How to stop a Transaction in SQL Server 2008? | [
"",
"sql",
"sql-server",
""
] |
I'm using tkinter's "askokcancel" message box to warn the user, with a pop-up, of an irreversible action.
```
from tkinter import Tk
Tk().withdraw()
from tkinter.messagebox import askokcancel
askokcancel("Warning", "This will delete stuff")
```
I'd like to change the text of the 'OK' button (from 'OK') to something like 'Delete', to make it less benign-looking.
Is this possible?
If not, what is another way to achieve it? Preferably without introducing any dependencies... | **Why not open a child window, thus creating your own box with your own button, like this:**
```
from tkinter import *

def messageWindow():
    win = Toplevel()
    win.title('warning')
    message = "This will delete stuff"
    Label(win, text=message).pack()
    Button(win, text='Delete', command=win.destroy).pack()

root = Tk()
Button(root, text='Bring up Message', command=messageWindow).pack()
root.mainloop()
``` | No, there is no way to change the text of the buttons for the built-in dialogs.
Your best option is to create your own dialog. It's not very hard to do, and it gives you absolute control over what is in the dialog widget. | Change words on tkinter Messagebox buttons | [
"",
"python",
"python-3.x",
"tkinter",
"messagebox",
""
] |
I need to fill a file with a lot of records identified by a number (test data). The number of records is very big, and the ids should be unique and the order of records should be random (or pseudo-random).
I tried this:
```
# coding: utf-8
import random
COUNT = 100000000
random.seed(0)
file_1 = open('file1', 'w')
for i in random.sample(xrange(COUNT), COUNT):
file_1.write('ID{0},A{0}\n'.format(i))
file_1.close()
```
But it's eating all of my memory.
Is there a way to generate a big shuffled sequence of consecutive (not necessarily but it would be nice, otherwise unique) integer numbers? Using a generator and not keeping all the sequence in RAM? | If you have 100 million numbers like in the question, then this is actually manageable in-memory (it takes about 0.5 GB).
As DSM pointed out, this can be done with the standard modules in an efficient way:
```
>>> import array
>>> a = array.array('I', xrange(10**8)) # a.itemsize indicates 4 bytes per element => about 0.5 GB
>>> import random
>>> random.shuffle(a)
```
---
It is also possible to use the third-party NumPy package, which is the standard Python tool for managing arrays in an efficient way:
```
>>> import numpy
>>> ids = numpy.arange(100000000, dtype='uint32') # 32 bits is enough for numbers up to about 4 billion
>>> numpy.random.shuffle(ids)
```
(this is only useful if your program already uses NumPy, as the standard module approach is about as efficient).
---
Both methods take about the same amount of time on my machine (maybe 1 minute for the shuffling), but the 0.5 GB they use is not too big for current computers.
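Scaled down, the `array` approach slots straight into the record-writing loop from the question. A sketch (`COUNT` is reduced so it runs quickly, and a `StringIO` stands in for the real file):

```python
import array
import io
import random

COUNT = 10000  # scaled down from the question's 100 million
random.seed(0)

ids = array.array('I', range(COUNT))  # about 4 bytes per element
random.shuffle(ids)                   # shuffles the array in place

buf = io.StringIO()                   # stands in for the real output file
for i in ids:
    buf.write('ID{0},A{0}\n'.format(i))

lines = buf.getvalue().splitlines()
print(len(lines), len(set(lines)))    # every generated record is unique
```

The peak memory is the array itself, not the written text, so the same loop scales to the original `COUNT` within the 0.5 GB estimate above.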
**PS**: There are **too many elements for the shuffling to be really random** because there are way too many permutations possible, compared to the period of the random generators used. In other words, there are fewer Python shuffles than the number of possible shuffles! | Maybe something like (won't be consecutive, but will be unique):
```
from uuid import uuid4

def unique_nums():  # Not strictly unique, but *practically* unique
    while True:
        yield int(uuid4().hex, 16)
        # alternative: yield uuid4().int

unique_num = unique_nums()
next(unique_num)
next(unique_num)  # etc...
``` | Generate big random sequence of unique numbers | [
"",
"python",
"random",
""
] |
I'm trying to overwrite values that are found in TYPE1 with values that are found in TYPE2.
I wrote this SQL to try it out, but for some reason it isn't updating:
```
select * from stuff
update stuff
set TYPE1 = TYPE2
where TYPE1 is null;
update stuff
set TYPE1 = TYPE2
where TYPE1 ='Blank';
```
<http://www.sqlfiddle.com/#!3/a4733/17>
Any reason why my values in TYPE1 are not updating? | This works for me
```
select * from stuff
update stuff
set TYPE1 = TYPE2
where TYPE1 is null;
update stuff
set TYPE1 = TYPE2
where TYPE1 ='Blank';
select * from stuff
``` | ```
UPDATE a
SET a.column1 = b.column2
FROM myTable a
INNER JOIN myTable b
on a.myID = b.myID
```
In order for both `a` and `b` to work, both aliases must be defined. | Update values from one column in same table to another in SQL Server | [
"",
"sql",
"sql-server",
"sql-update",
""
] |
I have this code:
```
offset = -0
print ("In Command 3 - Brute force")
string = input("Please enter a string to Brute Force:")
while offset > -26:
    offset = offset - 1
    print("")
    for letter in string:
        letter = (ord(letter))
        letter = letter + offset
        if letter > 126:
            letter - 95
        elif letter < 32:
            letter + 32
        output = (chr(letter))
        print(output,end='')
    choice = 0
```
The output, depending on the string, is something like this:
```
rc`rcr
qb_qbq
pa^pap
o`]o`o
n_\n_n
m^[m^m
l]Zl]l
k\Yk\k
j[Xj[j
iZWiZi
hYVhYh
gXUgXg
fWTfWf
eVSeVe
dURdUd
cTQcTc
bSPbSb
aROaRa
`QN`Q`
_PM_P_
^OL^O^
]NK]N]
\MJ\M\
[LI[L[
ZKHZKZ
YJGYJY
```
Now, I need some text before the output, for example:
```
Decryption string rc`rcr
Decryption string qb_qbq
```
etc...
I have tried:
```
print("Decryption",output,end='')
```
and
```
print("Decryption"+output,end='')
```
However this gives me that text in front of every letter.
Please assist if you can, and explanation would also be preferred.
Thanks for your time.
Ben | You want to do something like this:
```
offset = -0
print ("In Command 3 - Brute force")
string = input("Please enter a string to Brute Force:")
while offset > -26:
    offset = offset - 1
    word = ""
    for letter in string:
        letter = (ord(letter))
        letter = letter + offset
        if letter > 126:
            letter -= 95
        elif letter < 32:
            letter += 32
        output = (chr(letter))
        word = word + output
    choice = 0
    print("Decryption: " + word)
```
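The inner loop can also be collapsed with a generator expression and `str.join`. A sketch that leaves out the wrap-around checks for brevity (the sample input `"spam"` is just an illustration):

```python
def shift_string(text, offset):
    # shift every character by the offset and join the pieces into one word
    return ''.join(chr(ord(ch) + offset) for ch in text)

for offset in range(-1, -27, -1):
    print("Decryption: " + shift_string("spam", offset))
```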
The problem with what you were trying is that it will print the 'Decryption' message for each character, not for each word, so you need to build the word before printing it. | You are printing the output letter by letter, so adding `print("Decryption"+output,end='')` will just add the 'Decryption' part to each printout. I suggest doing a:
```
print("Decryption" + string, end=' ')
```
before you start your `for` loop. | Appending to the front of for statement output | [
"",
"python",
""
] |
I have a set of rows with values

I want the following output with individual max and min values

Sorry for the poor screenshot. I don't know how to draw tables on Stack Overflow. | ```
select value1, value2, value3, value4,
[min]=(select min(value) from (
select value1 union all
select value2 union all
select value3 union all
select value4) X(value)),
[max]=(select max(value) from (
select value1 union all
select value2 union all
select value3 union all
select value4) Y(value))
from tbl;
```
To recognize NULLs as **min** values, use the below instead
```
select value1, value2, value3, value4,
[min]=(select TOP(1) value from (
select value1 union all
select value2 union all
select value3 union all
select value4) X(value)
ORDER BY value ASC),
[max]=(select TOP(1) value from (
select value1 union all
select value2 union all
select value3 union all
select value4) X(value)
ORDER BY value DESC)
from tbl;
``` | Try this one -
```
DECLARE @temp TABLE
(
Value1 INT
, Value2 INT
, Value3 INT
, Value4 INT
)
INSERT INTO @temp (Value1, Value2, Value3, Value4)
VALUES
(NULL, 1, 1, NULL),
(NULL, 1, 2, NULL),
(NULL, 2, 2, NULL),
(NULL, 2, 2, NULL),
(1, 1, 1, 1),
(2, 2, 1, 2),
(1, 1, 1, NULL),
(2, 2, 3, 2),
(2, 2, 2, 2),
(1, 1, 1, 1)
SELECT
Value1
, Value2
, Value3
, Value4
, MinValue = (
SELECT TOP 1 value
FROM (
SELECT value = value1
UNION
SELECT value2
UNION
SELECT value3
UNION
SELECT value4
) mn
ORDER BY value
)
, MaxValue = (
SELECT MAX(value)
FROM (
SELECT value = value1
UNION
SELECT value2
UNION
SELECT value3
UNION
SELECT value4
) mx
)
FROM @temp
```
Results window:
 | Get MAX and MIN values of each rows in sql server | [
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2005",
""
] |
I have the date like
```
date['min'] = '2013-11-11'
date['max'] = '2013-11-23'
```
Is there any single-line function which can return true if the date lies in that range?
I mean, if only `date.min` is provided then I need to check if the given date is greater than it, and if only max is provided then I need to check if it's less than that. And if both are provided, then whether it falls between them. | Dates in the form `YYYY-MM-DD` can be compared alphabetically as well:
```
'2013-11-11' < '2013-11-15' < '2013-11-23'
date['min'] < your_date < date['max']
```
This won't work correctly for other formats, such as `DD.MM.YYYY` or `MM/DD/YYYY`. In that case you have to parse the strings and convert them into `datetime` objects.
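For those formats, the parsing step might look like this (the format strings follow the standard `strptime` codes):

```python
from datetime import datetime

def parse(date_string, fmt):
    # turn a textual date into a comparable datetime object
    return datetime.strptime(date_string, fmt)

low = parse('11.11.2013', '%d.%m.%Y')    # DD.MM.YYYY
high = parse('11/23/2013', '%m/%d/%Y')   # MM/DD/YYYY
print(low < high)
```

Once parsed, the same chained comparison works on the `datetime` objects.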
If you don't know whether the min/max keys are present, you can do:
```
date.get('min', '0000-00-00') < your_date < date.get('max', '9999-99-99')
```
and replace the default text values with anything you prefer. | I think simple comparison works for that.
```
>>> from datetime import timedelta, date
>>> min_date = date.today()
>>> max_date = date.today() + timedelta(days=7)
>>> d1 = date.today() + timedelta(days=1)
>>> d2 = date.today() + timedelta(days=10)
>>> min_date < d1 < max_date
True
>>> min_date < d2 < max_date
False
```
Here is the updated version:
```
def is_in_range(d, min=date.min, max=date.max):
    if max:
        return min < d < max
    return min < d

print is_in_range(d1, min_date, max_date)
print is_in_range(d2, min_date, max_date)
print is_in_range(d1, min_date)
print is_in_range(d2, min_date)
True
False
True
True
``` | How can i find if the date lies between two dates | [
"",
"python",
"django",
""
] |
I have following rows in my database.
```
profileID | startDate | endDate
-------------------|-------------------|--------------
Jr.software eng. |2012-07-01 |2030-01-01
..................|...................|..............
Eng |2013-03-28 |2013-03-28
..................|...................|..............
Sr.eng |2013-04-09 |2013-04-17
..................|...................|..............
CEO |2012-11-21 |
..................|...................|..............
```
The above rows are stored in my database. I want to get a result under these kinds of conditions.
```
1. If endDate is null then I get only its related startDate.
```
Like in the above rows, my expected result is
```
profileID | startDate | endDate
-------------------|-------------------|--------------
CEO |2012-11-21 |
..................|...................|..............
```
2. If there is no null endDate then I want to get the maximum endDate from the endDate list.
But if the rows are like this:
```
profileID | startDate | endDate
-------------------|-------------------|--------------
Jr.software eng. |2012-07-01 |2030-01-01
..................|...................|..............
Eng |2013-03-28 |2013-03-28
..................|...................|..............
Sr.eng |2013-04-09 |2013-04-17
```
Then my expected result is
```
profileID | startDate | endDate
-------------------|-------------------|--------------
Jr.software eng. |2012-07-01 |2030-01-01
..................|...................|..............
```
I need a mysql query. | ```
Select profileId, Case When endDate is NULL then startDate
                       Else Max(endDate) end As `Date`
From tablename
``` | i guess you looking for something like that :
```
SELECT profileID,startDate,MAX(endDate) AS endDate
FROM table1
group by profileID
```
[**DEMO HERE**](http://sqlfiddle.com/#!2/ead57/10) | Finding max date if no null value in endDate? | [
"",
"mysql",
"sql",
""
] |
In PostgreSQL, does this query
```
SELECT "table".* FROM "table" WHERE "table"."column" IN (1, 5, 3)
```
always return the results in the `1, 5, 3` order or is it ambiguous?
If it's ambiguous, how do I properly ensure the results are in the order `1, 5, 3`? | Add something like the following to your select statement:
```
order by CASE WHEN "column" = 1 THEN 1
              WHEN "column" = 5 THEN 2
              ELSE 3
         END
```
If you have many more than three values, it may be easier to make a lookup table and join to that in your query. | The WHERE clause will not order the results in any way; it will just select matching records in whatever order the database index returns them.
You'll have to add an order by clause. | PostgreSQL order of WHERE "table"."column" IN query | [
"",
"sql",
"postgresql",
"sql-order-by",
""
] |
I'd like to add two numpy arrays of different shapes, but without broadcasting, rather the "missing" values are treated as zeros. Probably easiest with an example like
```
[1, 2, 3] + [2] -> [3, 2, 3]
```
or
```
[1, 2, 3] + [[2], [1]] -> [[3, 2, 3], [1, 0, 0]]
```
I do not know the shapes in advance.
I'm messing around with the output of np.shape for each, trying to find the smallest shape which holds both of them, embedding each in a zero-ed array of that shape and then adding them. But it seems rather a lot of work, is there an easier way?
Thanks in advance!
edit: by "a lot of work" I meant "a lot of work for me" rather than for the machine, I seek elegance rather than efficiency: my effort getting the smallest shape holding them both is
```
def pad(a, b):
    sa, sb = map(np.shape, [a, b])
    N = np.max([len(sa), len(sb)])
    sap, sbp = map(lambda x: x + (1,)*(N - len(x)), [sa, sb])
    sp = np.amax(np.array([tuple(sap), tuple(sbp)]), 1)
```
not pretty :-/ | This is the best I could come up with:
```
import numpy as np

def magic_add(*args):
    n = max(a.ndim for a in args)
    args = [a.reshape((n - a.ndim)*(1,) + a.shape) for a in args]
    shape = np.max([a.shape for a in args], 0)
    result = np.zeros(shape)
    for a in args:
        idx = tuple(slice(i) for i in a.shape)
        result[idx] += a
    return result
```
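For the common two-dimensional case, the same idea can also be written with `numpy.pad`. A sketch (2-D inputs only; `pad_add` is my name, not NumPy's):

```python
import numpy as np

def pad_add(a, b):
    a, b = np.atleast_2d(a), np.atleast_2d(b)  # give both arrays two dimensions
    rows = max(a.shape[0], b.shape[0])
    cols = max(a.shape[1], b.shape[1])
    # zero-pad each array at the bottom/right up to the common shape, then add
    pa = np.pad(a, ((0, rows - a.shape[0]), (0, cols - a.shape[1])))
    pb = np.pad(b, ((0, rows - b.shape[0]), (0, cols - b.shape[1])))
    return pa + pb

print(pad_add(np.array([1, 2, 3]), np.array([2])))          # matches the first example above
print(pad_add(np.array([1, 2, 3]), np.array([[2], [1]])))   # matches the second example above
```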
You can clean up the for loop a little if you know how many dimensions you expect on result, something like:
```
for a in args:
    i, j = a.shape
    result[:i, :j] += a
``` | > I'm messing around with the output of np.shape for each, trying to find the smallest shape which holds both of them, embedding each in a zero-ed array of that shape and then adding them. But it seems rather a lot of work, is there an easier way?
Getting the `np.shape` is trivial, finding the smallest shape that holds both is very easy, and of course adding is trivial, so the only "a lot of work" part is the "embedding each in a zero-ed array of that shape".
And yes, you can eliminate that, by just calling the [`resize`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.resize.html#numpy.ndarray.resize) method (or the [`resize`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.resize.html) function, if you want to make copies instead of changing them in-place). As the docs explain:
> Enlarging an array: … missing entries are filled with zeros
For example, if you know the dimensionality statically:
```
>>> a1 = np.array([[1, 2, 3], [4, 5, 6]])
>>> a2 = np.array([[2], [2]])
>>> shape = [max(a.shape[axis] for a in (a1, a2)) for axis in range(2)]
>>> a1.resize(shape)
>>> a2.resize(shape)
>>> a1 + a2
array([[3, 4, 3],
       [4, 5, 6]])
``` | adding numpy arrays of differing shapes | [
"",
"python",
"numpy",
""
] |
I have the following query that prefixes a p\_id with a p\_number
```
UPDATE table1 t1
SET t1.p_id = CONCAT((SELECT CONCAT(t2.p_number,' ')
FROM tble2 t2
WHERE t2.id = t1.p_id)
, t1.p_id)
WHERE t1.p_id = '19680';
```
Which basically takes values like
```
t2.p_number:X12
t1.p_id: 19680
```
and turns it into
```
t2.p_number:X12
t1.p_id: X12 19680
```
What I have been asked for is a 'rollback' script, bearing in mind the only column I will know the value of is the '19680'!
I have looked at the REPLACE command but not sure if I can get it to work; any ideas?
---
Resolved!
Thanks guys, a log of good tips there, I have gone for what in the end looks a quite simple query from Ben.
There's quite a bit of history behind this issue, where basically a client has entered the values for p\_id and has entered duplicates. They therefore want to prefix the p\_number onto the p\_id to make them unique again (once entered, the client cannot update), but they also want a get-out clause: in case something goes wrong, they want to be able to remove the prefixed data.
The p\_id is actually used within the system so virtual tables and code changes were not really feasible in this scenario, rather it was easiest to just update the client's data for them.
Cheers guys, hope the client is happy with the proposal. | Here's my 2c.
**Do not** update your original field. If there's a possibility that you want to roll it back then you cause yourself massive problems. Create a new field that you update with the new values. When you want to reference this field you can do so. When you want to reference another field you can do that.
More generally, this is a good tactic for any fields that you want to "fix". You may improve what you're "fixing" later on and if you've overwritten the original data you're unable to. Always keep the raw data and then you can re-use it as often as you like.
If all you want to do is remove everything before the first space then the following will work:
```
update the_table
set p_id = substr(p_id, instr(p_id, ' ') + 1);
```
[SQL Fiddle](http://www.sqlfiddle.com/#!4/aa4ae/3) | As a general rule, do not store the data you can **infer**.
Just prefix the field in the client code, or in a VIEW, or a virtual column (Oracle 11), without modifying the original value. Since you haven't modified it, there is nothing to rollback. | Oracle UPDATE remove prefixed value from column data | [
"",
"sql",
"oracle",
""
] |
The JSON file content is as follows:
```
{"votes": {"funny": 0, "useful": 5, "cool": 2}, "user_id": "rLtl8ZkDX5vH5nAx9C3q5Q", "review_id": "fWKvX83p0-ka4JS3dc6E5A", "stars": 5, "date": "2011-01-26", "text": "My wife took me here on my birthday for breakfast and it was excellent. It looked like the place fills up pretty quickly so the earlier you get here the better.\n\nDo yourself a favor and get their Bloody Mary. It came with 2 pieces of their griddled bread with was amazing and it absolutely made the meal complete. It was the best \"toast\" I've ever had.\n\nAnyway, I can't wait to go back!", "type": "review", "business_id": "9yKzy9PApeiPPOUJEtnvkg"}
{"votes": {"funny": 0, "useful": 0, "cool": 0}, "user_id": "0a2KyEL0d3Yb1V6aivbIuQ", "review_id": "IjZ33sJrzXqU-0X6U8NwyA", "stars": 5, "date": "2011-07-27", "text": "I have no idea why some people give bad reviews about this place. It goes to show you, you can please everyone. They are probably griping about something that their own fault... but they said we'll be seated when the girl comes back from seating someone else. So, everything was great and not like these bad reviewers. That goes to show you that you have to try these things yourself because all these bad reviewers have some serious issues.", "type": "review", "business_id": "ZRJwVLyzEJq1VAihDhYiow"}
```
my code is:
```
import json
from pprint import pprint
review = open('/User/Desktop/python/test.json')
data = json.load(review)
pprint(data["votes"])
```
The error is:
```
Traceback (most recent call last):
File "/Users/hadoop/Documents/workspace/dataming-course/src/Yelp/main.py", line 8, in <module>
data = json.load(review)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/__init__.py", line 278, in load
**kw)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/__init__.py", line 326, in loads
return _default_decoder.decode(s)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/decoder.py", line 363, in decode
raise ValueError(errmsg("Extra data", s, end, len(s)))
ValueError: Extra data: line 2 column 1 - line 3 column 1 (char 623 - 1294)
``` | For what it's worth, you could try putting your JSON into an array, like this:
```
[ { "business_id" : "9yKzy9PApeiPPOUJEtnvkg",
"date" : "2011-01-26",
"review_id" : "fWKvX83p0-ka4JS3dc6E5A",
"stars" : "5",
"text" : "My wife took me here on my birthday for breakfast and it was excellent. It looked like the place fills up pretty quickly so the earlier you get here the better.\n\nDo yourself a favor and get their Bloody Mary. It came with 2 pieces of their griddled bread with was amazing and it absolutely made the meal complete. It was the best \"toast\" I've ever had.\n\nAnyway, I can't wait to go back!",
"type" : "review",
"user_id" : "rLtl8ZkDX5vH5nAx9C3q5Q",
"votes" : { "cool" : "2",
"funny" : "0",
"useful" : "5"
}
},
{ "business_id" : "ZRJwVLyzEJq1VAihDhYiow",
"date" : "2011-07-27",
"review_id" : "IjZ33sJrzXqU-0X6U8NwyA",
"stars" : "5",
"text" : "I have no idea why some people give bad reviews about this place. It goes to show you, you can please everyone. They are probably griping about something that their own fault... but they said we'll be seated when the girl comes back from seating someone else. So, everything was great and not like these bad reviewers. That goes to show you that you have to try these things yourself because all these bad reviewers have some serious issues.",
"type" : "review",
"user_id" : "0a2KyEL0d3Yb1V6aivbIuQ",
"votes" : { "cool" : "0",
"funny" : "0",
"useful" : "0"
}
}
]
```
(And do note the `,` that separates the two "main" parts of the JSON array :) | You have two JSON documents in a single file. Consider putting them into an array or something. The top-level of the file should only contain a single element. | The loading issue for json file in python | [
"",
"python",
""
] |
I looked at a [similar](https://stackoverflow.com/questions/11860272/import-class-without-executing-py-it-is-in) question but it does not really answer the question that I have. Say I have the following code (overly simplified to highlight only my question).
```
class A:
    def __init__(self, x):
        self.val = x

a = A(4)
print a.val
```
This code resides in a file `someones_class.py`. I now want to import and use class `A` in my program *without modifying* `someones_class.py`. If I do `from someones_class import A`, Python will still execute the script lines in the file.
Question: Is there a way to just import class `A` without the last two lines getting executed?
I know about `if __name__ == '__main__'` thing but I do not have the option of modifying `someones_class.py` file as it is obtained only after my program starts executing. | This answer is just to demonstrate that it **can** be done, but would obviously need a better solution to ensure you are including the class(es) you want to include.
```
>>> code = ast.parse(open("someones_class.py").read())
>>> code.body.pop(1)
<_ast.Assign object at 0x108c82450>
>>> code.body.pop(1)
<_ast.Print object at 0x108c82590>
>>> eval(compile(code, '', 'exec'))
>>> test = A(4)
>>> test
<__main__.A instance at 0x108c7df80>
```
You could inspect the `code` body for the elements you want to include and remove the rest.
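A Python 3 sketch of that filtering, keeping nodes by type instead of popping by position (the `source` string stands in for reading `someones_class.py`):

```python
import ast

source = """
class A:
    def __init__(self, x):
        self.val = x

a = A(4)
print(a.val)
"""

tree = ast.parse(source)
# keep only definitions and imports; drop the module-level statements
tree.body = [node for node in tree.body
             if isinstance(node, (ast.ClassDef, ast.FunctionDef,
                                  ast.Import, ast.ImportFrom))]

namespace = {}
exec(compile(tree, "someones_class.py", "exec"), namespace)

a = namespace["A"](4)
print(a.val)  # the module-level A(4)/print lines never ran
```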
**NOTE:** This is a giant hack. | Nope, there's no way to prevent those extra lines from being executed. The best you can do is read the script and parse out the class -- using that to create the class you want.
This is likely way more work than you want to do, but, for the strong willed, the `ast` module might be helpful. | Python: import module without executing script | [
"",
"python",
"import",
""
] |
My first post on Stack Overflow, please be gentle!
I have a MySQL (5.5.24) Database (WAMP Setup) with a table with records like this
```
Person_ID R_Date
137 2013-01-01
137 2013-02-01
137 2013-03-15
168 2013-01-01
168 2013-02-01
168 2013-03-21
172 2013-01-01
172 2013-02-01
172 2013-03-27
```
However I would like to just have 1 record for each Person\_ID using the most recent/current R\_Date.
So the results would look like this:
```
Person_ID R_Date
137 2013-03-15
168 2013-03-21
172 2013-03-27
```
I have had difficulty searching for answers as I'm not sure of the terminology, it is possibly something really simple.
It is possible to get this result using PHP While loops combined with multiple MySQL queries but I'm after a pure MySQL solution if it's possible.
Maybe the results I am after are achievable with subqueries?
Thanks | ```
SELECT Person_ID, MAX(R_Date) as 'R_Date' FROM TableName GROUP BY Person_ID
``` | ```
SELECT *
FROM PERSONTABLE a
WHERE R_Date = ( SELECT MAX(R_Date)
FROM PERSONTABLE b
WHERE a.Person_ID = b.Person_ID
)
``` | MySQL - Filtering table based on most current dates | [
"",
"mysql",
"sql",
"wamp",
""
] |
I am generating a bunch of html emails in django, and I want to save them into a model, in a FileField. I can quite easily generate the html content and dump it into a `File`, but I want to create something that can be opened in email clients, e.g. an eml file. Does anyone know of a python or django module to do this? Just to be clear, I'm not looking for an alternative email backend, as I also want the emails to be sent when they're generated.
**Edit:** After a bit of reading, it looks to me like `EmailMessage.message()` should return the content that should be stored in the eml file. However, if I try to save it like this, the file generated is empty:
```
import tempfile
name = tempfile.mkstemp()[1]
fh = open(name, 'wb')
fh.write(bytes(msg.message()))
fh.close()
output = File(open(name, 'rb'), msg.subject[:50])
```
I want to use a `BytesIO` instead of a temp file, but the temp file is easier for testing. | An EML file is actually a text file with name-value pairs. A valid EML file would look like:
```
From: test@example.com
To: test@example.com
Subject: Test
Hello world!
```
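The same structure can be built programmatically with the standard-library `email` package. A sketch (Python 3; the addresses mirror the example above, and the output filename is made up):

```python
from email.message import EmailMessage

# build the same name/value structure programmatically
msg = EmailMessage()
msg["From"] = "test@example.com"
msg["To"] = "test@example.com"
msg["Subject"] = "Test"
msg.set_content("Hello world!")

# write the serialized message out as a .eml file
with open("test.eml", "wb") as fh:
    fh.write(msg.as_bytes())

print(open("test.eml").read())
```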
If you follow the above pattern and save it in a file with a .eml extension, email clients like Thunderbird will parse and show it without any problem. | Django's `EmailMessage.message().as_bytes()` will return the content of the .eml file. Then you just need to save the file to the directory of your choice:
```
from django.core.mail import EmailMessage
msg = EmailMessage(
'Hello',
'Body goes here',
'from@example.com',
['to3@example.com'],
)
eml_content = msg.message().as_bytes()
file_name = "/path/to/eml_output.eml"
with open(file_name, "wb") as outfile:
outfile.write(eml_content)
``` | Save an django email to eml | [
"",
"python",
"django",
"file",
"email",
""
] |
I needed help on how to find which test has the lowest number. This code will help explain.
```
test_list=[]
numbers_list=[]

while True:
    test=raw_input("Enter test or (exit to end): ")
    if test=="exit":
        break
    else:
        test_numbers=input("Enter number: ")
        test_list.append(test)
        numbers_list.append(test_numbers)
```
If `test_list=['Test1','Test2','Test3']` and `numbers_list=[2,1,3]`
How would I print that Test2 has the lowest number? Since Test2 = 1 | 1. Find the index `i` in `numbers_list` corresponding to the smallest element:
* [Python: Find index of minimum item in list of floats](https://stackoverflow.com/q/13300962/139010)
* [Python Finding Index of Maximum in List](https://stackoverflow.com/q/11530799/139010)
* [Efficient way to get index of minimum value in long vector, python](https://stackoverflow.com/q/6044645/139010)
* [How to find all positions of the maximum value in a list?](https://stackoverflow.com/q/3989016/139010)
2. Retrieve `test_list[i]` | You could use `zip` to zip them together:
```
>>> zip(numbers_list, test_list)
[(2, 'Test1'), (1, 'Test2'), (3, 'Test3')]
```
Then use `min` to find the smallest pair:
```
>>> min(zip(numbers_list, test_list))
(1, 'Test2')
```
Finally, you can split the pair up:
```
>>> number, test = min(zip(numbers_list, test_list))
>>> number
1
>>> test
'Test2'
``` | Basic Python Lists | [
"",
"python",
""
] |
how can I find all span's with a class of `'blue'` that contain text in the format:
```
04/18/13 7:29pm
```
which could therefore be:
```
04/18/13 7:29pm
```
or:
```
Posted on 04/18/13 7:29pm
```
in terms of constructing the logic to do this, this is what i have got so far:
```
new_content = original_content.find_all('span', {'class' : 'blue'}) # using beautiful soup's find_all
pattern = re.compile('<span class=\"blue\">[data in the format 04/18/13 7:29pm]</span>') # using re
for _ in new_content:
result = re.findall(pattern, _)
print result
```
I've been referring to <https://stackoverflow.com/a/7732827> and <https://stackoverflow.com/a/12229134> to try and figure out a way to do this, but the above is all i have got so far.
**edit:**
to clarify the scenario, there are span's with:
```
<span class="blue">here is a lot of text that i don't need</span>
```
and
```
<span class="blue">this is the span i need because it contains 04/18/13 7:29pm</span>
```
And note I only need `04/18/13 7:29pm`, not the rest of the content.
**edit 2:**
I also tried:
```
pattern = re.compile('<span class="blue">.*?(\d\d/\d\d/\d\d \d\d?:\d\d\w\w)</span>')
for _ in new_content:
result = re.findall(pattern, _)
print result
```
and got error:
```
'TypeError: expected string or buffer'
``` | ```
import re
from bs4 import BeautifulSoup
html_doc = """
<html>
<body>
<span class="blue">here is a lot of text that i don't need</span>
<span class="blue">this is the span i need because it contains 04/18/13 7:29pm</span>
<span class="blue">04/19/13 7:30pm</span>
<span class="blue">Posted on 04/20/13 10:31pm</span>
</body>
</html>
"""
# parse the html
soup = BeautifulSoup(html_doc)
# find a list of all span elements
spans = soup.find_all('span', {'class' : 'blue'})
# create a list of lines corresponding to element texts
lines = [span.get_text() for span in spans]
# collect the dates from the list of lines using regex matching groups
found_dates = []
for line in lines:
m = re.search(r'(\d{2}/\d{2}/\d{2} \d+:\d+[a|p]m)', line)
if m:
found_dates.append(m.group(1))
# print the dates we collected
for date in found_dates:
print(date)
```
---
output:
```
04/18/13 7:29pm
04/19/13 7:30pm
04/20/13 10:31pm
``` | This is a flexible regex that you can use:
```
"(\d\d?/\d\d?/\d\d\d?\d?\s*\d\d?:\d\d[a|p|A|P][m|M])"
```
Example:
```
>>> import re
>>> from bs4 import BeautifulSoup
>>> html = """
<html>
<body>
<span class="blue">here is a lot of text that i don't need</span>
<span class="blue">this is the span i need because it contains 04/18/13 7:29pm</span>
<span class="blue">04/19/13 7:30pm</span>
<span class="blue">04/18/13 7:29pm</span>
<span class="blue">Posted on 15/18/2013 10:00AM</span>
<span class="blue">Posted on 04/20/13 10:31pm</span>
<span class="blue">Posted on 4/1/2013 17:09aM</span>
</body>
</html>
"""
>>> soup = BeautifulSoup(html)
>>> lines = [i.get_text() for i in soup.find_all('span', {'class' : 'blue'})]
>>> ok = [m.group(1)
for line in lines
for m in (re.search(r'(\d\d?/\d\d?/\d\d\d?\d?\s*\d\d?:\d\d[a|p|A|P][m|M])', line),)
if m]
>>> ok
[u'04/18/13 7:29pm', u'04/19/13 7:30pm', u'04/18/13 7:29pm', u'15/18/2013 10:00AM', u'04/20/13 10:31pm', u'4/1/2013 17:09aM']
>>> for i in ok:
print i
04/18/13 7:29pm
04/19/13 7:30pm
04/18/13 7:29pm
15/18/2013 10:00AM
04/20/13 10:31pm
4/1/2013 17:09aM
``` | How to find spans with a specific class containing specific text using beautiful soup and re? | [
"",
"python",
"regex",
"beautifulsoup",
""
] |
I'm trying to plot the streamlines of a magnetic field around a sphere using matplotlib, and it does work quite nicely. However, the resulting image is not symmetric, but it should be (I think).

This is the code used to generate the image. Excuse the length, but I thought it would be better than just posting a non-working snippet. Also, it's not very pythonic; that's because I converted it from Matlab, which was easier than I expected.
```
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Circle
def cart2spherical(x, y, z):
r = np.sqrt(x**2 + y**2 + z**2)
phi = np.arctan2(y, x)
theta = np.arccos(z/r)
if r == 0:
theta = 0
return (r, theta, phi)
def S(theta, phi):
S = np.array([[np.sin(theta)*np.cos(phi), np.cos(theta)*np.cos(phi), -np.sin(phi)],
[np.sin(theta)*np.sin(phi), np.cos(theta)*np.sin(phi), np.cos(phi)],
[np.cos(theta), -np.sin(theta), 0]])
return S
def computeB(r, theta, phi, a=1, muR=100, B0=1):
delta = (muR - 1)/(muR + 2)
if r > a:
Bspherical = B0*np.array([np.cos(theta) * (1 + 2*delta*a**3 / r**3),
np.sin(theta) * (delta*a**3 / r**3 - 1),
0])
B = np.dot(S(theta, phi), Bspherical)
else:
B = 3*B0*(muR / (muR + 2)) * np.array([0, 0, 1])
return B
Z, X = np.mgrid[-2.5:2.5:1000j, -2.5:2.5:1000j]
Bx = np.zeros(np.shape(X))
Bz = np.zeros(np.shape(X))
Babs = np.zeros(np.shape(X))
for i in range(len(X)):
for j in range(len(Z)):
r, theta, phi = cart2spherical(X[0, i], 0, Z[j, 0])
B = computeB(r, theta, phi)
Bx[i, j], Bz[i, j] = B[0], B[2]
Babs[i, j] = np.sqrt(B[0]**2 + B[1]**2 + B[2]**2)
fig=plt.figure()
ax=fig.add_subplot(111)
plt.streamplot(X, Z, Bx, Bz, color='k', linewidth=0.8*Babs, density=1.3,
minlength=0.9, arrowstyle='-')
ax.add_patch(Circle((0, 0), radius=1, facecolor='none', linewidth=2))
plt.axis('equal')
plt.axis('off')
fig.savefig('streamlines.pdf', transparent=True, bbox_inches='tight', pad_inches=0)
``` | First of all, out of curiosity: why would you want to plot symmetric data? Why isn't plotting half of it fine?
That said, here is a possible hack. You can use masked arrays, as Hooked suggested, to plot half of it:
```
mask = X>0
BX_OUT = Bx.copy()
BZ_OUT = Bz.copy()
BX_OUT[mask] = None
BZ_OUT[mask] = None
res = plt.streamplot(X, Z, BX_OUT, BZ_OUT, color='k',
arrowstyle='-',linewidth=1,density=2)
```
Then save the result from `streamplot` in `res`, extract the lines, and plot them with the opposite X coordinate:
```
lines = res.lines.get_paths()
for l in lines:
plot(-l.vertices.T[0],l.vertices.T[1],'k')
```
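Put together on a toy field (purely illustrative, not the magnet data from the question), relying on `streamplot`'s documented masked-array support:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import numpy as np
import matplotlib.pyplot as plt

# toy vector field that is mirror-symmetric about x = 0
Y, X = np.mgrid[-2:2:40j, -2:2:40j]
U, V = -Y, X

# mask the right half so streamplot only traces the left half
U_left = np.ma.masked_where(X > 0, U)
V_left = np.ma.masked_where(X > 0, V)
res = plt.streamplot(X, Y, U_left, V_left, color="k", arrowstyle="-")

# mirror the traced lines into the right half
for path in res.lines.get_paths():
    xs, ys = path.vertices.T
    plt.plot(-xs, ys, "k")
plt.savefig("mirrored.png")
```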
I used this hack to extract streamlines and arrows from a 2D plot, then apply a 3D transformation and plot it with mplot3d. A picture is in one of my questions [here](https://stackoverflow.com/questions/14963004/continuous-shades-on-matplotlib-3d-surface). | Quoting from the documentation:
> ```
> density : float or 2-tuple
> Controls the closeness of streamlines. When density = 1,
> the domain is divided into
> a 25x25 grid—density linearly scales this grid.
> Each cell in the grid can have, at most, one traversing streamline.
> For different densities in each direction, use [density_x, density_y].
> ```
so you are getting aliasing effects between the cells it uses to decide where the stream lines are, and the symmetries of your problem. You need to carefully choose your grid size (of the data) and the density.
It is also sensitive to where the box boundaries are relative to the top of the sphere. Is the center of your sphere *on* a data grid point or between the data grid points? If it is *on* a grid point then the box that contains the center point will be different than the boxes adjacent to it.
I am not familiar with exactly how it decides which stream lines to draw, but I could imagine that it is some sort of greedy algorithm and hence will give different results walking towards the high density region and away density region.
To be clear, your issue is not that the stream lines are *wrong*, they are valid stream lines; it is that you find the result not aesthetically pleasing. | Symmetric streamplot with matplotlib | [
"",
"python",
"matplotlib",
""
] |
I have a problem similar to this question of mine:
[sql server 2008 case statement in where clause not working](https://stackoverflow.com/questions/16118306/sql-server-2008-case-statement-in-where-clause-not-working)
I have same document,SR and event table.
My document table structure is like this :
```
documentid int,
documenttype nvarchar(50),
relationid int,
document image,
docdate date
```
and SR table structure :
```
SRId int,
CustomerId int, ( related to customer table)
SRdetail,
SRDate
```
and Event Table :
```
EventId int,
SRId int, (related to SR table)
Event,
EventDate
```
I got all the rows properly with this query :
```
Select *
from documents d
left join SR s on d.relationid = s.SRId
left join Events e on d.relationid = e.EventId
```
But I have to do filtering as well; for example, I need to display all documents by customer.
So I am performing query for this like this :
```
Select *
from documents d
left join SR s on d.relationid = s.SRId
left join Events e on d.relationid = e.EventId
where (s.CustomerId = 123 or
e.SRId in (select SRId from SR where CustomerId = 123))
```
But it's not giving me proper output: it's displaying some records of different customers' events together with that customer's SRs, meaning it's not filtering properly.
I did try with 'and' instead of 'or' but with 'and' its displaying only some records.
Am I wrong somewhere in query ?
Can anyone please help me ? | My Problem got solved.
I created one view using all 3 tables, with this query:
```
Select * from documents d left join SR s on d.relationid = s.SRId left join Events e on d.relationid = e.EventId
```
Then I use a WHERE condition on that view to filter the data by customer.
Thanks a lot everyone for helping me. | Try:
```
Select *
from documents d
left join SR s on d.relationid = s.SRId and s.CustomerId = 123
left join Events e
join SR se on e.SRId = se.SRId and se.CustomerId = 123
on d.relationid = e.EventId
where coalesce(s.CustomerId, se.CustomerId) = 123
``` | How to get desired result from this query? | [
"",
"sql",
"sql-server-2008",
""
] |
I am wondering in particular about PostgreSQL. Given the following contrived example:
```
SELECT name FROM
(SELECT name FROM people WHERE age >= 18 ORDER BY age DESC) p
LIMIT 10
```
Are the names returned from the outer query guaranteed to be in the order they were for the inner query? | No, put the order by in the outer query:
```
SELECT name FROM
(SELECT name, age FROM people WHERE age >= 18) p
ORDER BY p.age DESC
LIMIT 10
```
The inner (sub)query returns a result set. If you put the ORDER BY there, the intermediate result set passed from the inner query to the outer query is ordered the way you designate, but without an ORDER BY on the outer query, the result set generated by processing that inner result set is not guaranteed to be sorted in any way. | For simple cases, [@Charles query](https://stackoverflow.com/a/16248822/939860) is most efficient.
More generally, you can use the window function [**`row_number()`**](https://www.postgresql.org/docs/current/functions-window.html) to carry *any* order you like to the main query, including:
* order by columns *not* in the `SELECT` list of the subquery and thus not reproducible
* arbitrary ordering of *peers* according to `ORDER BY` criteria. Postgres will reuse the same arbitrary order in the window function *within* the subquery. (But *not* truly random order from `random()` for instance!)
If you don't want to preserve arbitrary sort order of peers from the subquery, use `rank()` instead.
This may also be generally superior with complex queries or multiple query layers:
```
SELECT p.name
FROM (
SELECT name, row_number() OVER (ORDER BY <same order by criteria>) AS rn
FROM people
WHERE age >= 18
ORDER BY <any order by criteria>
) p
ORDER BY p.rn
LIMIT 10;
``` | Is order in a subquery guaranteed to be preserved? | [
"",
"sql",
"postgresql",
"subquery",
"sql-order-by",
""
] |
The question is in the title...
I'm in a process of learning Python and I've heard a couple of times that function returning None is something you never should have in a real program (unless your function never returns anything). While I can't seem to find a situation when it is absolutely necessary, I wonder if it ever could be a good programming practice to do it. I.e., if your function returns integers (say, solutions to an equation), None would indicate that there is no answer to return. Or should it always be handled as an exception inside the function? Maybe there are some other examples when it is actually useful? Or should I *never* do it? | This just flat-out isn't true. For one, any function that doesn't need to return a value will return `None`.
Beyond that, generally, keeping your output consistent makes things easier, but do what makes sense for the function. In some cases, returning `None` is logical.
If something goes wrong, yes, you should throw an exception as opposed to returning `None`.
Unfortunately, programming tends to be full of advice where things are over-generalized. It's easy to do, but Python is pretty good with this - *practicality beats purity* is part of the Zen of Python. It's essentially the use case for `dict.get()` - in general, it's better to throw the exception if a key isn't found, but in some specific cases, getting a default value back is more useful. | ```
def abc():
print 1
return None
print 2
```
is the same as
```
def abc():
print 1
return
print 2
```
or even
```
def abc():
print 1
```
All functions that don't return something return `None`. One very important use case of returning `None` is when you want to say "terminate this function" without having to nest a bunch of ifs. | Is returning 'None' for Python function ever a good idea? | [
"",
"python",
""
] |
I designed a table with a column whose data contains \n character (as the separator, I used this instead of comma or anything else). It must save the \n characters OK because after loading the table into a DataTable object, I can split the values into arrays of string with the separator '\n' like this:
```
DataTable dt = LoadTable("myTableName");
DataRow dr = dt.Rows[0]; //suppose this row has the data with \n character.
string[] s = dr["myColumn"].ToString().Split(new char[]{'\n'}, StringSplitOptions.RemoveEmptyEntries);//This gives result as I expect, e.g an array of 2 or 3 strings depending on what I saved before.
```
That means '\n' does exist in my table column. But when I tried selecting only rows which contain \n character at myColumn, it gave no rows returned, like this:
```
--use charindex
SELECT * FROM MyTable WHERE CHARINDEX('\n',MyColumn,0) > 0
--use like
SELECT * FROM MyTable WHERE MyColumn LIKE '%\n%'
```
I wonder if my queries are wrong?
I've also tested with both '\r\n' and '\r' but the result was the same.
How can I detect if the rows contain '\n' character in my table? This is required to select the rows I want (this is by my design when choosing '\n' as the separator).
Thank you very much in advance! | Since \n is the ASCII linefeed character try this:
```
SELECT *
FROM MyTable
WHERE MyColumn LIKE '%' || X'0A' || '%'
```
Sorry, this is just a guess; I don't use SQLite myself. | Maybe you should just be looking for carriage returns if you aren't storing the "\n" literal in the field. Something like
```
SELECT *
FROM table
WHERE column LIKE '%
%'
```
or `select * from table where column like '%'+char(13)+'%' or column like '%'+char(10)+'%'`
(Not sure if char(13) and char(10) work for `SQLite`.)
UPDATED: Just found someone's solution [here](https://stackoverflow.com/questions/4642535/how-to-remove-carriage-returns-in-a-text-field-in-sqlite) They recommend to replace the carriage returns
So if you want to replace them and strip the returns, you could
```
update yourtable set yourCol = replace(yourcol, '
', ' ');
``` | Detect \n character saved in SQLite table? | [
"",
"sql",
"sqlite",
"select",
"carriage-return",
""
] |
My database teacher asked me to write (on Oracle Server) a query: select the groupid with the highest score average for year 2010
I wrote:
```
SELECT * FROM (
SELECT groupid, AVG(score) average FROM points
WHERE yr = 2010
AND score IS NOT NULL
GROUP BY groupid
ORDER BY average DESC
) WHERE rownum = 1;
```
My teacher tells me that this request is "better":
```
SELECT groupid, AVG(score) average FROM points
WHERE yr = 2010
GROUP BY groupid
HAVING AVG(score) >= ALL (
SELECT AVG(score) FROM points
WHERE yr = 2010
GROUP BY groupid
);
```
Which one is the fastest/better ? Is there any better solution too (for Oracle only) ?
Thanks. | There are two reasons your instructor is telling you that.
1. Data model. Relational DBMSs deal with sets, not lists. If you are learning SQL, it is better for you to think in terms of sets of tuples, which are unordered, than in terms of ordered lists. You will be better at understanding how to query the DBMS. I consider your solution a hack: one that works only partially, since (as Perun\_x has pointed out) it does not work if more than one tuple matches the result. It is contrary to the data model and spirit of SQL.
2. Portability. This is the real killer. Your code will work on Oracle but not in other DBMSs that do not support the row\_number attribute (each has its own way to do it).
--dmg | The queries aren't equivalent. The 1st query always selects 1 row. The second one selects all rows with the highest average (there can theoretically be more such lines). | Oracle ORDER BY with rownum or HAVING >= ALL | [
"",
"sql",
"select",
"average",
"having",
"rownum",
""
] |
I started learning Python a few weeks ago, with no programming background so far.
I was wondering if somebody could explain to me, in plain words, what "looping over a list" does.
Here is the example:
```
list = [4, 'b', 'e',21,5]
for n in list:
print n
```
After running the script, what I get is: list being printed item by item:
4
b
e
21
5
But I do not understand the logic behind it.
Would somebody be kind to explain it to me?
Thank you, and sorry for ignorance. | I think your example makes it about as clear as you can ask for.
```
mylist = [4, 'b', 'e', 21, 5]
```
By using the `[` syntax, this creates a new [list](http://docs.python.org/2/tutorial/datastructures.html), and initializes it with the items given, in the order they are given in. The list is then assigned to the variable `mylist`. (Note that I changed the name, because `list` is the default name of the `list` datatype).
```
for n in mylist:
```
This is a [for loop](http://en.wikipedia.org/wiki/For_loop), that iterates over all items in the list. Python strives to be read like English. So it says,
> for every item in the list `mylist`, call the item `n` and do the following:
Then, everything that is indented under the `for`, is executed for each item.
```
print n
```
Of course, this prints `n` to the screen.
---
So in conclusion, your code says:
> 1) Create a list with the following items: 4, 'b', 'e', 21, 5
>
> 2) For every item in that list, call it `n`, and then print `n` to the screen.
* [Python `for`](http://docs.python.org/2/tutorial/controlflow.html#for-statements) | Since you seem to have had no exposure to other programming languages, you cannot really see why this is such a useful notation for expressing iteration over the elements of a list.
To contrast, here's an example from another, lower-level language:
```
int foo[] = { 0, 1, 2, 3, 4, 5, 6 };
int i;
for(i = 0; i < sizeof(foo) / sizeof(int); i++)
{
printf("%d", foo[i]);
}
```
Basically Python lists are like arrays with a lot of syntactic convenience added to them. That is, you can express something that you'll need often, like iteration, in a much more compact form:
```
somelist = [0, 1, 2, 3, 4, 5, 6]
for element in somelist:
print(element)
```
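Under the hood, that `for` loop is roughly equivalent to driving the iterator by hand (a sketch of the iteration protocol):

```python
somelist = [0, 1, 2, 3, 4, 5, 6]
it = iter(somelist)         # ask the list for an iterator
while True:
    try:
        element = next(it)  # fetch the next item
    except StopIteration:   # no items left, leave the loop
        break
    print(element)
```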
---
Edit: Sometimes it helps to look at the source, in this case the file `Objects/listobject.c` of a Python source distribution (e.g. 3.3). There you'll find the implementation of all python list methods, among which there is `listiter_next`, which handles traversal. Stripped of a few things, the code looks like this:
```
static PyObject *
listiter_next(listiterobject *it)
{
PyListObject *seq;
PyObject *item;
seq = it->it_seq;
if (seq == NULL)
return NULL;
if (it->it_index < PyList_GET_SIZE(seq)) {
item = PyList_GET_ITEM(seq, it->it_index);
++it->it_index;
return item;
}
it->it_seq = NULL;
return NULL;
}
```
You could say that this is *what looping over a list does*. | Looping over list in python | [
"",
"python",
"list",
"loops",
""
] |
Why does the first statement return `NameError`, while `max` is available
```
>>> __builtin__
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name '__builtin__' is not defined
>>> max
<built-in function max>
>>> import __builtin__
>>> __builtin__.max
<built-in function max>
``` | > The builtins namespace associated with the execution of a code block is actually found by looking up the name `__builtins__` in its global namespace; this should be a dictionary or a module (in the latter case the module’s dictionary is used). By default, when in the `__main__` module, `__builtins__` is the built-in module `__builtin__` (note: no ‘s’); when in any other module, `__builtins__` is an alias for the dictionary of the `__builtin__` module itself. `__builtins__` can be set to a user-created dictionary to create a weak form of restricted execution.
So really it is looking up `__builtins__` (since you are in the main module)
```
>>> __builtins__.max
<built-in function max>
```
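For what it's worth, in Python 3 the module is renamed `builtins`, and the same relationship holds:

```python
import builtins

# the max we use without importing anything is exactly the one
# defined in the builtins module
assert max is builtins.max
```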
But as mentioned above, this is just an alias for `__builtin__` (which isn't part of the main module's namespace, although it has been loaded and referenced by `__builtins__`). | `__builtin__` is just a way to import/access the pseudo module in case you want to replace or add a function that is always globally available. You do not need to import it to use the functions. But the name `__builtin__` itself is not defined in the builtins namespace, so it is not available without importing it first.
See [the python docs](http://docs.python.org/2/library/__builtin__.html) for more information about this module. | How is __builtin__ made available at runtime? | [
"",
"python",
""
] |
I have a list of coordinates, and I need to split them in half based on their x value. Something like this:
```
l = [(0, 0), (1, 0), (2, 0), (3, 0), (0, 1), (1, 1), (2, 1), (3, 1)]
left = []
right = []
for i in l:
if i[0] < 2:
left.append(i)
else:
right.append(i)
print(left)
print(right)
```
output:
```
[(0, 0), (1, 0), (0, 1), (1, 1)]
[(2, 0), (3, 0), (2, 1), (3, 1)]
```
Is there a faster way to do this? | This is not faster (still 2n) but maybe more elegant; the best you can get is log n, if the list is sorted, by using binary search.
```
>>> l = [(0, 0), (1, 0), (2, 0), (3, 0), (0, 1), (1, 1), (2, 1), (3, 1)]
>>> left = [ x for x in l if x[0] < 2]
>>> right = [ x for x in l if x[0] >= 2]
``` | You do it in O(n). If you had the list sorted, you could do it in O(log(n)), by searching for the pivot element with binary search. Sorting it yourself beforehand just to use binary search would not pay off, because sorting is O(n\*log(n))
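If the list were kept sorted, that pivot search is a few lines with the standard library's `bisect` module (a sketch):

```python
from bisect import bisect_left

pts = sorted([(0, 0), (1, 0), (2, 0), (3, 0), (0, 1), (1, 1), (2, 1), (3, 1)])
# first index whose x coordinate is >= 2; the 1-tuple (2,) sorts
# before every (2, y) pair, so it works as a search key here
i = bisect_left(pts, (2,))
left, right = pts[:i], pts[i:]
```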
On the other hand... *does it really matter*? If this is your bottleneck, then maybe reconsider whole algorithm or the data structure. For instance, if you have a complex problem where you need to operate on points in some area, you can consider using [kd-trees](http://en.wikipedia.org/wiki/K-d_tree) | Python, split list of coordinates, based on x | [
"",
"python",
"list",
""
] |
Something that I just thought about:
Say I'm writing view code for my Django site, and I make a mistake and create an infinite loop.
Whenever someone would try to access the view, the worker assigned to the request (be it a Gevent worker or a Python thread) would stay in a loop indefinitely.
If I understand correctly, the server would send a timeout error to the client after 30 seconds. But what will happen with the Python worker? Will it keep on working indefinitely? That sounds dangerous!
Imagine I've got a server in which I've allocated 10 workers. I let it run and at some point, a client tries to access the view with the infinite loop. A worker will be assigned to it, and will be effectively dead until the next server restart. The dangerous thing is that at first I wouldn't notice it, because the site would just be imperceptibly slower, having 9 workers instead of 10. But then it might happen again and again throughout a long span of time, maybe months. The site would just get progressively slower, until eventually it would be really slow with just one worker.
A server restart would solve the problem, but I'd hate to have my site's functionality depend on server restarts.
Is this a real problem that happens? **Is there a way to avoid it?**
**Update:** I'd also really appreciate a way to take a stacktrace of the thread/worker that's stuck in an infinite loop, so I could have that emailed to me so I'll be aware of the problem. (I don't know how to do this because there is no exception being raised.)
**Update** to people saying things to the effect of "Avoid writing code that has infinite loops": In case it wasn't obvious, I do not spend my free time intentionally putting infinite loops into my code. When these things happen, they are mistakes, and mistakes can be minimized but never completely avoided. I want to know that even when I make a mistake, there'll be a safety net that will notify me and allow me to fix the problem. | It is a real problem. In case of gevent, due to context switching, it can even immediately stop your website from responding.
Everything depends on your environment. For example, when running Django in production through uwsgi you can set `harakiri`, the time in seconds after which the thread handling a request will be killed if it hasn't finished handling the response. It is strongly recommended to set such a value in order to deal with faulty requests or bad code. Such an event is reported in the uwsgi log. I believe other solutions for running Django in production have similar options.
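For example, a minimal uwsgi ini fragment (module name and values are only illustrative):

```ini
[uwsgi]
module = mysite.wsgi
# kill any worker that spends more than 30 seconds on a single request
harakiri = 30
# log extra details about each killed request
harakiri-verbose = true
```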
Otherwise, due to network architecture, client disconnection will not stop the infinite loop, and by default there will be no response at all - just infinite loading. Various timeout options (one of which `harakiri` is) may end up showing connection timeout - for example, php has (as far as i remember) default timeout of 30 seconds and it will return 504 gateway timeout. Socket disconnection timeout depends on http server settings and it will not stop application thread, it will only close client socket.
If not using gevent (or any other green threads), infinite loop will tend to take up 100% of available CPU power (limited to one core), possibly eating up more and more memory, so your website will work pretty slow and/or timeout really quick. Django itself is not aware of request time, so - as mentioned before - your production environment stack is the way to prevent this from happening. In case of uwsgi, <http://uwsgi-docs.readthedocs.org/en/latest/Options.html#harakiri-verbose> is the way to go.
Harakiri does print a stack trace of the killed process (<https://uwsgi-docs.readthedocs.org/en/latest/Tracebacker.html?highlight=harakiri>) straight to the uwsgi log, and through the alarm subsystem you can get notified by e-mail (<http://uwsgi-docs.readthedocs.org/en/latest/AlarmSubsystem.html>) | I just tested this on Django's development server.
**Results:**
* Does not give a timeout after 30 seconds. (This might be because it's not a production server, though.)
* Stays loading until I close the page.
I guess one way to avoid it, without actually just avoiding a code like that, would be to use threading to have control of timeouts and be able to stop the thread.
Maybe something like:
```
import threading
from django.http import HttpResponse
class MyThread(threading.Thread):
def __init__(self):
threading.Thread.__init__(self)
def run(self):
print "your possible infinite loop code here"
def possible_loop_view(request):
thread = MyThread()
thread.start()
return HttpResponse("html response")
``` | What happens when you have an infinite loop in Django view code? | [
"",
"python",
"django",
"web",
"infinite-loop",
""
] |
I am calling a Python program from C code using the `system()` call inside a Linux machine.
Let's say the first call to the Python program reads the first 10 lines of some text file by opening the text file and reading lines 1 - 10. Now during the second call to the Python program, I want to read the next 10 lines 11-20 of the same text file that was opened during the last call to Python WITHOUT reopening the file and starting from the first line of the file. During the 3rd call to the Python program, I want to be able to read the next 10 lines 21 - 30 of the same text file WITHOUT reopening the file and starting from the beginning of the file. This goes on ...
Here is the sample code
```
//This is C code
...
...
int initial_line_number, final_line_number
initial_line_number = 1;
final_line_number = 10;
for(i = 1; i <= 10; i++)
{
system("python test.py initial_line_number, final_line_number"); //test.py reads a text file from initial_line number to final_line_number
initial_line_number += 10;
final_line_number +=10;
}
```
Can this be done? What is the most elegant way to do it? | On a theoretical level, you might want to explore using DBus with python and c.
Have a python daemon that reads your file, then from c, make a dbus call that returns x number of lines (you can specify that in a dbus header).
This way, you can keep the python file open for as long as you'd like AND you wouldn't require the system() call, which would mean your program would be more secure.
(Dbus can be run from user and system privileges, so your program does not have to be run from an admin level as long as you have permissions for it.) | No, you can't have the second call continue without reopening the file.
Each `system()` call will run the program given as an argument and will wait for that process to die. Therefore, you get a new child each time — there is no (simple) way for the second to continue where the first left off.
(There could be complex methods, where your first process launches a Python process in the background that reads some lines and hangs around; the process you launched directly will terminate. The second call might recognize that the Python process is still hanging around and tell it to continue. But that's tricky programming — not worth the effort. And it is still true that each `system()` call will run a separate process and wait for that process to die.) | Calling Python from C using system( ) call in Linux | [
"",
"python",
"c",
""
] |
This should be self-explanatory
```
>>> (1+2j).real #Normal Usage
1.0
>>> (1+2j).imag #Normal Usage
2.0
>>> 1+2j.real #Normal Usage
1.0
>>> 1+2j.imag #Flips Out
3.0
>>> 123+657j.real
123.0
>>> 123+657j.imag
780.0
```
`z.imag` somehow adds up the real and imaginary parts.
I found this interesting... is this a bug or is this an intentional feature? | The problem is that `1+2j.imag` is being parsed as `1+(2j.imag)`. Clearly `2j.imag` is `2.0`, so `1+2j.imag` returns `3.0`. When you do `1+2j.real`, Python adds `1` and `2j.real`. `2j.real` is `0.0`, so you get the correct answer `1.0` in return.
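The parse is easy to check directly:

```python
# attribute access binds tighter than +, so 1+2j.imag means 1 + (2j.imag)
assert 2j.imag == 2.0
assert 1 + 2j.imag == 3.0    # the surprising 3.0 from the question
assert (1 + 2j).imag == 2.0  # parentheses give the intended meaning
assert 1 + 2j.real == 1.0    # only "works" because 2j.real is 0.0
```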
If you parenthesize the complex number, like in your first two examples, you will get the expected results. | This is an operator precedence issue. `.` has higher precedence than `+`, so you are effectively doing
```
1+(2j.imag)
```
which of course is 3.
It is a Very Good Idea™ to always parenthesize complex number literals. | operator precedence with complex numbers | [
"",
"python",
"cmath",
""
] |
I read [a problem about bullseyes](https://code.google.com/codejam/contest/2418487/dashboard#s=p0) in Google Code Jam. (The contest is over now, so it's okay to talk about it)
> 
>
> Maria starts with t millilitres of black paint, which she will use to draw rings of thickness 1cm (one centimetre). A ring of thickness 1cm is the space between two concentric circles whose radii differ by 1cm.
>
> Maria draws the first black ring around a white circle of radius r cm.
>
> The area of a disk with radius 1cm is π cm2. One millilitre of paint is required to cover area π cm2. What is the maximum number of black rings that Maria can draw?
By my calculations on paper, the area of paint to draw a bullseye with n rings, inner radius r, as a multiple of pi is `2*n**2 + n*(2*r-1)`
So given `t*pi` millitres of paint the problem is to find the greatest n such that `f(n,r) <= t`.
This morning I solved that with binary search <https://github.com/hickford/codejam/blob/master/2013/1A/bullseye/bullseye.py>
I chose binary search over the quadratic equation because I'm very wary of floating point imprecision (in this problem t and r are integers as big as 10\*\*18). Arithmetic imprecision bit me in a previous Code Jam.
But I'm curious. Can you shore up the quadratic equation to give the correct answer for equations with large integer coefficients? Do maths libraries like Sympy or Numpy have anything to offer me?
---
Demonstration that the quadratic equation gives the wrong answer for large inputs. For example, with `r=308436464205151562` and `t=1850618785230909388`. The quadratic equation to solve is
```
2*n**2 + 616872928410303123*n -1850618785230909388 <= 0
```
i.e. the coefficients are
```
a = 2
b = 616872928410303123
c = -1850618785230909388
```
Computing in Python
```
>>> int((-b + math.sqrt(b**2 - 4*a*c)) / (2*a))
0
```
This is the wrong answer! The right answer (found by binary search) is 3
```
>>> n = 3
>>> 2*n**2 + 616872928410303123*n -1850618785230909388 <= 0
True
``` | For symbolic exact manipulations, there's [sympy](http://sympy.org/en/index.html).
If you paste the following:
```
a, b, c = 2, 616872928410303123, -1850618785230909388
x = Symbol('x')
int(max(solve(a*x**2 + b*x + c, x)))
```
[here](http://live.sympy.org/), you get 3.
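If you'd rather stay in the standard library, Python 3.8+ has `math.isqrt`, an exact integer square root, so the quadratic formula can be evaluated entirely in integers (a sketch; the one-step corrections guard against the floor in `isqrt`):

```python
import math

def max_rings(r, t):
    # Largest integer n with 2*n**2 + (2*r - 1)*n - t <= 0.
    b, c = 2 * r - 1, -t
    f = lambda n: 2 * n * n + b * n + c
    n = (-b + math.isqrt(b * b - 8 * c)) // 4  # a = 2, so 2a = 4 and 4ac = 8c
    while f(n + 1) <= 0:  # isqrt floors, so nudge up if needed
        n += 1
    while n > 0 and f(n) > 0:  # ...or back down
        n -= 1
    return n

print(max_rings(308436464205151562, 1850618785230909388))  # 3
```

Because every intermediate value is an exact Python integer, there is no catastrophic cancellation, unlike the `math.sqrt` float version shown above.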
[edited following OP comment]. | The roundoff precision killed me on this problem... but you could keep everything at 64 bit integer precision, and do a binary search on the resulting quadratic equation. I outlined my approach [here](http://bleedingedgemachine.blogspot.com/2013/04/2013-google-code-jam-round-1a.html). | How to exactly solve quadratic equations with large integer coefficients (over integers)? | [
"",
"python",
"algorithm",
"math",
"equation-solving",
""
] |
In SQL Plus output it is taking up much more space than it needs and I'd like to reduce it from say 50 chars to 20. | If you reduce the width to 20 it won't be wide enough for the default `TIMESTAMP` or `TIMESTAMP WITH TIME ZONE` format. When that happens, SQLPlus will wrap the value.
Assume table `b` has timestamp column `TS`:
```
COLUMN ts FORMAT A20
SELECT ts FROM b;
TS
--------------------
25-APR-13 11.28.40.1
50000 AM
```
To cut down the width even further, decide which information you want and format accordingly. The Oracle DateTime and Timestamp formatting codes are listed [here](http://docs.oracle.com/cd/B19306_01/server.102/b14200/sql_elements004.htm#i34924).
Note that SQLPlus won't let you specify a date format with the `COLUMN` statement. That's why I used `FORMAT A20` above. You can get it to 19 characters if you drop fractional seconds and use a 24-hour clock instead of AM/PM, and drop the time zone:
```
COLUMN TSFormatted FORMAT A20
SELECT TO_CHAR(ts, 'MM/DD/YYYY HH24:MI:SS') AS TSFormatted FROM b;
TSFORMATTED
--------------------
04/25/2013 11:28:40
```
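As an aside, if you post-process these values in Python instead, the 19-character `MM/DD/YYYY HH24:MI:SS` mask above corresponds to `strftime('%m/%d/%Y %H:%M:%S')`; a quick sketch:

```python
from datetime import datetime

ts = datetime(2013, 4, 25, 11, 28, 40)
formatted = ts.strftime('%m/%d/%Y %H:%M:%S')
print(formatted)       # 04/25/2013 11:28:40
print(len(formatted))  # 19, matching the Oracle mask width
```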
If you're willing to drop the century you can get two of the fractional seconds and an exact width of 20:
```
COLUMN TSFormatted FORMAT A20
SELECT TO_CHAR(ts, 'MM/DD/YY HH24:MI:SS.FF2') AS TSFormatted from b;
TSFORMATTED
--------------------
04/25/13 11:28:40.15
```
Finally, if you want all of your timestamps to be automatically formatted in a certain way, use `ALTER SESSION`:
```
ALTER SESSION SET NLS_TIMESTAMP_FORMAT = 'MM/DD/YY HH24:MI:SS.FF2';
COLUMN ts FORMAT A20
SELECT ts from b; -- don't need to_char because of the default format
TS
--------------------
04/25/13 11:28:40.15
``` | Just a short extension to Ed Gibbs' answer above.
For debugging purposes, I put together this simple LOGIN.SQL file:
```
alter session set container = XEPDB1;
alter session set time_zone = 'Europe/Budapest';
alter session set nls_date_format='YYYY-MM-DD HH24:MI:SS';
alter session set nls_timestamp_format='YYYY-MM-DD HH24:MI:SS.FF1';
alter session set nls_timestamp_tz_format = 'YYYY-MM-DD HH24:MI:SS.FF1';
set pagesize 9999;
set linesize 9999;
set long 9999;
show linesize pagesize long user;
column added format a22;
column modified format a22;
column deleted format a22;
column localtimestamp format a22;
select localtimestamp from dual;
```
The names "added", "modified" and "deleted" are common column names in my schemas that always hold timestamp values. Now, when I start SQL\*Plus, I see (with several empty lines in between):
```
SQL*Plus: Release 18.0.0.0.0 - Production on Fri Jul 10 13:25:52 2020
Version 18.4.0.0.0
Copyright (c) 1982, 2018, Oracle. All rights reserved.
Last Successful login time: Fri Jul 10 2020 13:14:09 +00:00
Connected to:
Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production
Version 18.4.0.0.0
Session altered.
Session altered.
Session altered.
Session altered.
Session altered.
linesize 9999
pagesize 9999
long 9999
USER is "CURRENT_USER"
LOCALTIMESTAMP
----------------------
2020-07-10 15:25:53.2
SQL>
``` | How can I reduce the width of column in Oracle (SQL Plus) with TIMESTAMP datatype? | [
"",
"sql",
"oracle",
"timestamp",
"width",
"sqlplus",
""
] |
I am battling with joins and inner joins.
I have 2 tables that look like this:
USERS
```
-----------------------------------------
ID | fname | lname | div_1_id | div_2_id
-----------------------------------------
1 paul smith 1 2
2 john lip 2 null
3 kim long 1 4
```
DIVISIONS
```
------------------
ID | name
------------------
1 estate
2 litigation
3 property
4 civil
```
DESIRED RESULT (sql query)
```
--------------------------------------------------
user.ID | fname | lname | div_1_name | div_2_name
--------------------------------------------------
1 paul smith estate litigation
2 john lip litigation
3 kim long estate civil
```
I would like to create a new table from a MS sql query that looks like the above. | Use `LEFT JOIN` for this:
```
SELECT u.ID, u.fname, u.lname
, d1.name as div_1_name
, d2.name as div_2_name
FROM USERS u
LEFT JOIN DIVISIONS d1 ON u.div_1_id = d1.ID
LEFT JOIN DIVISIONS d2 ON u.div_2_id = d2.ID
```
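The same double LEFT JOIN pattern can be sanity-checked end-to-end with Python's built-in `sqlite3` (a self-contained sketch using the question's data; the T-SQL above is what you'd run on SQL Server):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE USERS (ID INT, fname TEXT, lname TEXT, div_1_id INT, div_2_id INT);
    CREATE TABLE DIVISIONS (ID INT, name TEXT);
    INSERT INTO USERS VALUES (1,'paul','smith',1,2), (2,'john','lip',2,NULL), (3,'kim','long',1,4);
    INSERT INTO DIVISIONS VALUES (1,'estate'), (2,'litigation'), (3,'property'), (4,'civil');
""")
rows = conn.execute("""
    SELECT u.ID, u.fname, u.lname, d1.name AS div_1_name, d2.name AS div_2_name
    FROM USERS u
    LEFT JOIN DIVISIONS d1 ON u.div_1_id = d1.ID
    LEFT JOIN DIVISIONS d2 ON u.div_2_id = d2.ID
    ORDER BY u.ID
""").fetchall()
for row in rows:
    print(row)
# (1, 'paul', 'smith', 'estate', 'litigation')
# (2, 'john', 'lip', 'litigation', None)
# (3, 'kim', 'long', 'estate', 'civil')
```

The LEFT joins are what keep john's row alive even though his `div_2_id` is NULL.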
### [See this SQLFiddle](http://www.sqlfiddle.com/#!3/7ec01/1) | Try using sub-query:
```
select a.ID, a.fname, a.lname,
(select name from DIVISIONS b where b.id=a.div_1_id) div_1_name,
(select name from DIVISIONS b where b.id=a.div_2_id) div_2_name
from
USERS a
``` | Create 2 new columns from a query | [
"",
"sql",
"sql-server",
"join",
""
] |
I'm running a SQL search query to bring up records that match a postcode.
Say I have a postcode:
'CB4 1AB'
If the database has (with and without a space)
* cb41ab
* cb4 1ab
or I search with (with and without a space)
* cb41ab
* cb4 1ab
I want it to bring back the record.
How can I do it?
```
select addr1, addr2, postcode
from addresses p
where p.postcode LIKE 'cb%'
```
Thanks | You can try something like this:
```
select addr1, addr2, postcode
from addresses p
where replace(p.postcode, ' ', '') LIKE 'cb41ab'
``` | So, it sounds like you're going for this. Expanding on other answers:
```
DECLARE @input VARCHAR(50)
SET @input = 'CB4 1AB'
SELECT addr1, addr2, postcode
FROM addresses p
WHERE REPLACE(p.postcode, ' ', '') = REPLACE(@input, ' ', '')
```
EDIT: I removed the "LIKE" since this should cover all above cases. | Searching for postcode when space exists or does not exist | [
"",
"sql",
""
] |
I know Perl has a design pattern known as a modulino, in which a library module file can act as both a library and a script. Is there any equivalent to this in Ruby / Python?
I think this design pattern would be very useful for me; I'm writing workers that are fairly short, but also require a script to run them. I think it would be convenient to have this all run from the same place. | Python has `__name__`:
```
class MyClass(object):
    pass

if __name__ == '__main__':
    print("This will only run if you run the script explicitly, not import it")
```
If you run `python myscript.py`, the `print` function will run. If you import `MyClass` from `myscript`, the `print` will not. | This is the Ruby version:
```
if __FILE__ == $PROGRAM_NAME # equivalent: if __FILE__ == $0
  puts "This is the main file running, it is not being required."
end
``` | Equivalent to Perl Modulino for Ruby, Python? | [
"",
"python",
"ruby",
"perl",
"modulino",
""
] |
```
from
[PMDB].[dbo].PROJECT P
inner join PROJWBS PW on P.proj_id = PW.proj_id
and PW.proj_node_flag = 'Y' and PW.wbs_short_name not like '%- B%'
and PW.status_code <> 'WS_Planned' and PW.status_code <> 'WS_Whatif'
inner join [PMDB].[dbo].TASKSUM TS on PW.proj_id = TS.proj_id and PW.wbs_id = TS.wbs_id
inner join reference..fiscal_year_qtr_month FYQM on isnull(TS.act_end_date,P.scd_end_date) > FYQM.fiscal_month_begin_datetime
and isnull(TS.act_end_date,P.scd_end_date) <= FYQM.fiscal_month_end_datetime
inner join reference..mfg_year_month_ww MYMW on isnull(TS.act_end_date,P.scd_end_date) > MYMW.mfg_ww_begin_datetime
a
```
I have tried searching the internet, but I just don't understand what the 'reference..' means here. Am I missing something? | `reference` is a database name. In between the database name and the object name goes the schema.
So if you want to query from `sys.database_files` in `master`, you would say:
```
SELECT name FROM master.sys.database_files;
---- database ---^^^^^^
------------- schema ---^^^
----------------- object ---^^^^^^^^^^^^^^
```
You can leave out the schema if you know the entity name is unambiguous. For catalog views/DMVs, you can't leave it out, but if you're using your default schema (usually dbo), you can leave out the explicit reference. [Not that that's a good idea](https://sqlblog.org/2009/10/11/bad-habits-to-kick-avoiding-the-schema-prefix). | ```
reference..mfg_year_month_ww
```
is the short hand for
```
reference.dbo.mfg_year_month_ww
```
basically it means use default schema. | What does 'reference..' mean in SQL syntax? | [
"",
"sql",
"sql-server-2008",
"syntax",
""
] |
Please consider this XML:
```
<Parent ID="p">
<Child ID="1">10</Child >
<Child ID="2">20</Child >
<Child ID="3">0</Child >
</Parent >
```
I want to **SUM** all child values inside the `Parent` node with `ID="p"`. For the above example, I want the query to return `30`.
How can I do this? | ```
select @xml.value('sum(/Parent[@ID = "p"]/Child)', 'float') as Sum
```
The use of `float` protects against there being no `Parent` with that `ID`. You can then cast this result to `int`. | Try this :-
```
Declare @xml xml
set @xml='<Parent ID="p">
<Child ID="1">10</Child >
<Child ID="2">20</Child >
<Child ID="3">0</Child >
</Parent >'
Select @xml.value('sum(/Parent/Child)','int') as Sum
```
Result : `30`
or if you want the sum for a specific `Parent ID` then try the below query
```
Select @xml.value('sum(/Parent/Child)','int') AS SumVal
where @xml.exist('/Parent[@ID="p"]') = 1;
```
Demo in [SQL FIDDLE](http://sqlfiddle.com/#!3/68aa1/1) | sum some xml nodes values in sql server 2008 | [
"",
"sql",
"sql-server",
"xml",
"sql-server-2008",
"xpath",
""
] |
I'd like to extract only the month and day from a timestamp using the datetime module (not time) and then determine if it falls within a given season (fall, summer, winter, spring) based on the fixed dates of the solstices and equinoxes.
For instance, if the date falls between March 21 and June 20, it is spring. Regardless of the year. I want it to just look at the month and day and ignore the year in this calculation.
I've been running into trouble using this because [the month is not being extracted properly from my data](https://stackoverflow.com/questions/16140047/python-incorrectly-extracting-month-from-string), [for this reason](https://stackoverflow.com/questions/5247582/python-datetime-strptime-month-specifier-doesnt-seem-to-work). | > if the date falls between March 21 and June 20, it is spring.
> Regardless of the year. I want it to just look at the month and day
> and ignore the year in this calculation.
```
#!/usr/bin/env python
from datetime import date, datetime
Y = 2000 # dummy leap year to allow input X-02-29 (leap day)
seasons = [('winter', (date(Y,  1,  1), date(Y,  3, 20))),
           ('spring', (date(Y,  3, 21), date(Y,  6, 20))),
           ('summer', (date(Y,  6, 21), date(Y,  9, 22))),
           ('autumn', (date(Y,  9, 23), date(Y, 12, 20))),
           ('winter', (date(Y, 12, 21), date(Y, 12, 31)))]

def get_season(now):
    if isinstance(now, datetime):
        now = now.date()
    now = now.replace(year=Y)
    return next(season for season, (start, end) in seasons
                if start <= now <= end)

print(get_season(date.today()))
```
It is an extended version of [@Manuel G answer](https://stackoverflow.com/a/28686747/4279) to support any year. | It might be easier just to use the day of year parameter. It's not much different than your approach, but possibly easier to understand than the magic numbers.
```
# get the current day of the year
doy = datetime.today().timetuple().tm_yday
# "day of year" ranges for the northern hemisphere
spring = range(80, 172)
summer = range(172, 264)
fall = range(264, 355)
# winter = everything else
if doy in spring:
    season = 'spring'
elif doy in summer:
    season = 'summer'
elif doy in fall:
    season = 'fall'
else:
    season = 'winter'
``` | Determine season given timestamp in Python using datetime | [
"",
"python",
"date",
"python-2.6",
""
] |
I'm studying for an exam that's in a couple of weeks and came across an SQL querying problem I still can't figure out. I was wondering if anyone could advise me.
Relational Database:
```
Books(**ISBN**, Title, Genre, Price, Publisher, PublicationYear)
Author(**AuthorNum**, Name)
Write(**ISBN**, AuthorNum)
```
Problem: Find the most expensive book from each publisher, along with the name of the author, arranged alphabetically by book title.
I've tried many things, with this one being the one I think is closest to the solution but it's not correct:
```
SELECT Title, Name
FROM Author AS a, Books AS b, Write AS w
WHERE a.AuthorNum = w.AuthorNum AND b.ISBN = w.ISBN
GROUP BY Publisher
HAVING MAX(Price)
ORDER BY Title
``` | This is how I would do it:
```
SELECT b.Title, b.Publisher, a.Name
FROM Books b
LEFT JOIN Write w ON w.ISBN = b.ISBN
LEFT JOIN Author a ON a.AuthorNum = w.AuthorNum
WHERE b.Price = (SELECT MAX(bb.Price) FROM Books bb
WHERE b.Publisher = bb.Publisher)
ORDER BY Title
;
```
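To sanity-check the shape of this query, here is a self-contained sketch against made-up toy data using Python's built-in `sqlite3` (note: both joins are kept LEFT in the sketch so that a max-price book without any authors would still appear):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Books (ISBN TEXT, Title TEXT, Price REAL, Publisher TEXT);
    CREATE TABLE Author (AuthorNum INT, Name TEXT);
    CREATE TABLE Write (ISBN TEXT, AuthorNum INT);
    INSERT INTO Books VALUES ('1','Zebra SQL',30,'P1'), ('2','Apple SQL',20,'P1'), ('3','Mango SQL',15,'P2');
    INSERT INTO Author VALUES (1,'Ann'), (2,'Bob');
    INSERT INTO Write VALUES ('1',1), ('3',2);
""")
rows = conn.execute("""
    SELECT b.Title, a.Name
    FROM Books b
    LEFT JOIN Write w ON w.ISBN = b.ISBN
    LEFT JOIN Author a ON a.AuthorNum = w.AuthorNum
    WHERE b.Price = (SELECT MAX(bb.Price) FROM Books bb
                     WHERE b.Publisher = bb.Publisher)
    ORDER BY b.Title
""").fetchall()
print(rows)  # [('Mango SQL', 'Bob'), ('Zebra SQL', 'Ann')]
```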
Note some of the finer points:
1. uses only standard SQL syntax, no vendor-specific nor deprecated syntax
2. Accommodates the possibility that multiple books may have the highest price from one publisher
3. Accommodates the possibility that books may have more than one Author
4. Accommodates the possibility that a book may not have *any* known authors
5. Avoids the unnecessary use of GROUP BY which studies have shown is likely to be slower than either joins or subqueries | Inline views often perform quite well on a variety of databases. Don't prematurely optimize.
You can get the top price per publisher so:
# 1
```
select publisher, max(price) as MaxPublisherPrice
from books
group by publisher
```
You can find out which book(s) from each publisher have a price that equals the MaxPublisherPrice by joining against the set returned by the statement above like this:
# 2
```
select books.title, P.MaxPublisherPrice as bookprice
from books
inner join
(
select publisher, max(price) as MaxPublisherPrice
from books
group by publisher
) as P
on books.publisher = P.publisher
and books.price = P.maxpublisherprice
```
and you can then pull in the author name so:
# 3
```
select books.title, P.MaxPublisherPrice as bookprice, author.name
from books
inner join
(
select publisher, max(price) as MaxPublisherPrice
from books
group by publisher
) as P
on books.publisher = P.publisher
and books.price = P.maxpublisherprice
inner join write
on write.isbn = books.isbn
inner join author
on write.authornum = author.authornum
order by books.title
``` | Advice needed on SQL Querying | [
"",
"sql",
""
] |
I am writing a reusable django application for returning json result for jquery ui autocomplete.
Currently I am storing the class/function for getting the result in a dictionary, with a unique key for each class/function.
When a request comes in, I select the corresponding class/function from the dict and return the output.
My question is whether the above is best practice, or whether there are other tricks to obtain the same result.
Sample GIST : <https://gist.github.com/ajumell/5483685> | That's a very general question. It primarily depends on the infrastructure of your code: the way your classes and models are defined, and the dynamics of the application.
Second, it's important to take into account the resources of the server where your application is running: how much memory and disk space you have available, so you can judge what would work best for the application.
Last but not least, consider how much work it takes to put all these resources in memory. Memory is volatile, so if your application restarts you'll have to instantiate all the classes again, and maybe that is too much work.
Summing up, keeping often-queried objects in memory is a very good optimization (that's what caching is all about), but you have to take all of the above into account. | You seem to be talking about a form of memoization.
This is OK, as long as you don't *rely* on that result being in the dictionary. This is because the memory will be local to each process, and you can't guarantee subsequent requests being handled by the same process. But if you have a fallback where you generate the result, this is a perfectly good optimization. | is it a good practice to store data in memory in a django application? | [
"",
"python",
"django",
""
] |
Given this tuple:
```
my_tuple = ('chess', ['650', u'John - Tom'])
```
I want to create a dictionary where `chess` is the key. It should result in:
```
my_dict = {'chess': ['650', u'John - Tom']}
```
I have this code
```
my_dict = {key: value for (key, value) in zip(my_tuple[0], my_tuple[1])}
```
but it's flawed and results in:
```
{'c': '650', 'h': u'John - Tom'}
```
Can you please help me fix it? | ```
>>> my_tuple = ('a', [1, 2], 'b', [3, 4])
>>> dict(zip(*[iter(my_tuple)]*2))
{'a': [1, 2], 'b': [3, 4]}
```
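The `zip(*[iter(my_tuple)]*2)` idiom works because both arguments to `zip` are the *same* iterator, so each zip step consumes two consecutive elements; a quick illustration:

```python
seq = ('a', [1, 2], 'b', [3, 4])
it = iter(seq)
pairs = list(zip(it, it))  # one iterator, advanced twice per pair
print(pairs)               # [('a', [1, 2]), ('b', [3, 4])]
print(dict(pairs))         # {'a': [1, 2], 'b': [3, 4]}
```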
For your particular case though:
```
{my_tuple[0]: my_tuple[1]}
``` | You can always create a dictionary from a list of tuples, (or single tuple) with 2 values.
Like so:
```
>>> my_tuple = ('chess', ['650', u'John - Tom'])
>>> d = dict([my_tuple])
>>> d
{'chess': ['650', u'John - Tom']}
```
---
In this easy way you could also have a list of tuples...
```
>>> my_tuple_list = [('a','1'), ('b','2')]
>>> d = dict(my_tuple_list)
>>> d
{'a': '1', 'b': '2'}
``` | Dict comprehension issue | [
"",
"python",
"python-2.7",
"dictionary-comprehension",
""
] |
Basically I have a list of 92 Currencies (ID & Name) from Google:
<https://developers.google.com/adsense/management/appendix/currencies>
Can I insert all the data at once into Microsoft SQL Server? | Just download the .csv file, remove the first row (the column header), and run
the following code. Keep your .csv file in the C: drive.
```
create TABLE Currencies
(
CurrecyCode nchar(5),
CurrencyName nvarchar(30)
)
BULK
INSERT Currencies
FROM 'c:\currencies.csv'
WITH
(
CODEPAGE='RAW',
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
GO
``` | You can use Import/Export wizard from Management Studio to get data from csv. Just right click on your database, select Tasks->Import Data and follow wizard, it's quite simple.
Here is a link providing tutorial:
<http://www.mssqltips.com/sqlservertutorial/203/simple-way-to-import-data-into-sql-server/> | Insert whole table in Microsoft SQL Server Management Studio | [
"",
"sql",
"sql-server",
""
] |
I am creating some end-to-end tests for a web app using [Selenium](http://docs.seleniumhq.org/docs/ "Selenium Docs").
I am working in Python and using the Firefox driver
`driver = webdriver.Firefox()`
The problem is that my web app uses HTML5 geolocation, and it seems that every time I run my tests, I have to click the 'Allow Location' popup in Firefox, making my tests less than automated.
Is there a way to force the Selenium Firefox driver to always allow geolocation without prompting? | I believe the default is to launch Firefox with a new, anonymous profile. You can launch selenium with -Dwebdriver.firefox.profile=whatever, where "whatever" is the name of a profile when you launch firefox -P.
To make sure there's no weirdness with persistent logins and other cookies:
* Launch Firefox with "firefox -P"
* Pick the profile you'll launch the tests with
* Edit -> Preferences -> Privacy, select Use custom settings for history
* Tell Firefox to keep cookies until "I close Firefox" | You can force browser to return some predefined location without permission requests.
Just execute the following JavaScript code:
```
"navigator.geolocation.getCurrentPosition = function(success) { success({coords: {latitude: 50.455755, longitude: 30.511565}}); }"
```
Tested in Firefox and Chrome. | Always Allow Geolocation in Firefox using Selenium | [
"",
"python",
"selenium",
"geolocation",
""
] |
I would like to identify the ordering of events in a SQL data table. My data is arranged such that each identifier-date-event combination appears on a separate row. The output should be a single row per identifier, indicating the order in which 3 (and only 3) events occurred, and a flag indicating which of the three events ever occurred. For indicating the order, I only care to know the type of first event and the type of the most recent event. (So, for example, ABC=ADAC, because I'm only interested in the fact that A was the very first thing and C was the very last thing.)
Suppose my data is:
```
CREATE TABLE #ABC
(ID INT NOT NULL,
CODE_DATE DATE NOT NULL,
CODE_GROUP VARCHAR(10) NULL)
INSERT INTO #ABC VALUES (1,'2000-01-01','APPROVED')
INSERT INTO #ABC VALUES (1,'2001-01-01','DENIED')
INSERT INTO #ABC VALUES (1,'2003-01-01','ON HOLD')
INSERT INTO #ABC VALUES (1,'2002-01-01','APPROVED')
INSERT INTO #ABC VALUES (2,'2008-01-01','DENIED')
INSERT INTO #ABC VALUES (2,'2004-01-01','DENIED')
INSERT INTO #ABC VALUES (3,'2006-01-01','ON HOLD')
INSERT INTO #ABC VALUES (3,'2005-01-01','APPROVED')
INSERT INTO #ABC VALUES (3,'2009-01-01','DENIED')
INSERT INTO #ABC VALUES (4,'2001-01-01','ON HOLD')
INSERT INTO #ABC VALUES (4,'2004-01-01','ON HOLD')
INSERT INTO #ABC VALUES (4,'2007-01-01','DENIED')
INSERT INTO #ABC VALUES (5,'2005-01-01','ON HOLD')
INSERT INTO #ABC VALUES (5,'2008-01-01','ON HOLD')
INSERT INTO #ABC VALUES (5,'2009-01-01','APPROVED')
```
Then the desired output is:
```
ID RESULT EVER_APPROVED EVER_DENIED EVER_ON_HOLD
1 'APPROVED THEN ON HOLD' 'Y' 'Y' 'Y'
2 'DENIED' 'N' 'Y' 'N'
3 'APPROVED THEN DENIED' 'Y' 'Y' 'Y'
4 'ON HOLD THEN DENIED' 'N' 'Y' 'Y'
5 'ON HOLD THEN APPROVED' 'Y' 'N' 'Y'
``` | This is giving the correct results for your data:
```
with ABCOrdered as
(
select *
, FirstEvent = row_number() over (partition by ID order by CODE_DATE)
, LastEvent = row_number() over (partition by ID order by CODE_DATE desc)
from ABC
)
select f.ID
, [RESULT] = case
when f.CODE_GROUP = l.CODE_GROUP or l.CODE_GROUP is null then f.CODE_GROUP
else f.CODE_GROUP + ' THEN ' + l.CODE_GROUP
end
, EVER_APPROVED = case
when exists (select 1 from ABC where l.ID = ABC.ID and ABC.CODE_GROUP = 'APPROVED') then 'Y'
else 'N'
end
, EVER_DENIED = case
when exists (select 1 from ABC where l.ID = ABC.ID and ABC.CODE_GROUP = 'DENIED') then 'Y'
else 'N'
end
, EVER_ON_HOLD = case
when exists (select 1 from ABC where l.ID = ABC.ID and ABC.CODE_GROUP = 'ON HOLD') then 'Y'
else 'N'
end
from ABCOrdered f
left join ABCOrdered l on f.ID = l.ID and l.LastEvent = 1
where f.FirstEvent = 1
order by f.ID
```
[SQL Fiddle with demo](http://sqlfiddle.com/#!3/7a2cf/20). | Here's another way to do it:
```
;WITH cteMaxMin As
(
SELECT
ID,
Max(CODE_DATE+':'+CODE_GROUP) As MaxDt,
Min(CODE_DATE+':'+CODE_GROUP) As MinDt,
Max(Case When CODE_GROUP='APPROVED' Then 'Y' Else Null End) As Apd,
Max(Case When CODE_GROUP='DENIED' Then 'Y' Else Null End) As Dnd,
Max(Case When CODE_GROUP='ON HOLD' Then 'Y' Else Null End) As Ohd
FROM #ABC
GROUP BY ID
)
SELECT
ID,
SUBSTRING(MaxDt, 13, LEN(MaxDt))
+ COALESCE(' THEN '+SUBSTRING(MinDt, 13, LEN(MinDt)), '') As RESULT,
COALESCE(Apd, 'N') As EVER_APPROVED,
COALESCE(Dnd, 'N') As EVER_DENIED,
COALESCE(Ohd, 'N') As EVER_ON_HOLD
FROM cteMaxMin
``` | Identify date ordering of events in SQL (for each group) | [
"",
"sql",
"sql-server-2008",
""
] |
I want to convert a string like this `"29-Apr-2013-15:59:02"`
into something more usable.
The dashes can be easily replaced with spaces or other characters. This format would be ideal: `"YYYYMMDD HH:mm:ss (20130429 15:59:02)"`.
**Edit:**
Sorry, I did not specifically see the answer in another post. But again, I'm ignorant so could have been looking at the solution and didn't know it. I've got this working, but I wouldn't consider it "pretty."
```
#29-Apr-2013-15:59:02
import sys, datetime, time
#inDate = sys.argv[1]
inDate = "29-Apr-2013-15:59:02"
def getMonth(month):
    monthDict = {'Jan':'01','Feb':'02','Mar':'03','Apr':'04','May':'05','Jun':'06','Jul':'07','Aug':'08','Sep':'09','Oct':'10','Nov':'11','Dec':'12'}
    for k, v in monthDict.iteritems():
        if month == k:
            return v
day = inDate[:2]
#print day
month = inDate[3:6]
#print month
year = inDate[7:11]
#print year
time = inDate[-8:]
#print time
newDate = year+getMonth(month)+day
newDateTime = newDate+" "+time
print newDate
print newDateTime
```
Any thoughts on improving? | Use [datetime.strptime()](http://docs.python.org/2/library/datetime.html#datetime.datetime.strptime) to parse the `inDate` string into a date object, use [datetime.strftime()](http://docs.python.org/2/library/datetime.html#datetime.date.strftime) to output in whatever format you like:
```
>>> from datetime import datetime
>>> inDate = "29-Apr-2013-15:59:02"
>>> d = datetime.strptime(inDate, "%d-%b-%Y-%H:%M:%S")
>>> d
datetime.datetime(2013, 4, 29, 15, 59, 2)
>>> d.strftime("YYYYMMDD HH:mm:ss (%Y%m%d %H:%M:%S)")
'YYYYMMDD HH:mm:ss (20130429 15:59:02)'
``` | Have you investigated dateutil?
<http://labix.org/python-dateutil>
I found a similar question to yours:
[How do I translate a ISO 8601 datetime string into a Python datetime object?](https://stackoverflow.com/questions/969285/how-do-i-translate-a-iso-8601-datetime-string-into-a-python-datetime-object) | Converting (YYYY-MM-DD-HH:MM:SS) date time | [
"",
"python",
"datetime",
"time",
""
] |
I am a bit of a Python Newbie, and I've just been trying to get some code working.
Below is the code, and also the nasty error I keep getting.
> ```
> import pywapi
> import string
>
> google_result = pywapi.get_weather_from_google('Brisbane')
>
> print google_result
> def getCurrentWather():
>     city = google_result['forecast_information']['city'].split(',')[0]
>     print "It is " + string.lower(google_result['current_conditions']['condition']) + " and " + google_result['current_conditions']['temp_c'] + " degree centigrade now in " + city + ".\n\n"
>     return "It is " + string.lower(google_result['current_conditions']['condition']) + " and " + google_result['current_conditions']['temp_c'] + " degree centigrade now in " + city
>
> def getDayOfWeek(dayOfWk):
>     #dayOfWk = dayOfWk.encode('ascii', 'ignore')
>     return dayOfWk.lower()
>
> def getWeatherForecast():
>     #need to translate from sun/mon to sunday/monday
>     dayName = {'sun': 'Sunday', 'mon': 'Monday', 'tue': 'Tuesday', 'wed': 'Wednesday', 'thu': 'Thursday', 'fri': 'Friday', 'sat': 'Saturday'}
>
>     forcastall = []
>     for forecast in google_result['forecasts']:
>         dayOfWeek = getDayOfWeek(forecast['day_of_week'])
>         print " Highest is " + forecast['high'] + " and " + "Lowest is " + forecast['low'] + " on " + dayName[dayOfWeek]
>         forcastall.append(" Highest is " + forecast['high'] + " and " + "Lowest is " + forecast['low'] + " on " + dayName[dayOfWeek])
>     return forcastall
> ```
Now is the error:
> ```
> Traceback (most recent call last): File
> "C:\Users\Alex\Desktop\JAVIS\JAISS-master\first.py", line 5, in
> <module>
> import Weather File "C:\Users\Alex\Desktop\JAVIS\JAISS-master\Weather.py", line 4, in
> <module>
> google_result = pywapi.get_weather_from_google('Brisbane') File "C:\Python27\lib\site-packages\pywapi.py", line 51, in
> get_weather_from_google
> handler = urllib2.urlopen(url) File "C:\Python27\lib\urllib2.py", line 126, in urlopen
> return _opener.open(url, data, timeout) File "C:\Python27\lib\urllib2.py", line 400, in open
> response = meth(req, response) File "C:\Python27\lib\urllib2.py", line 513, in http_response
> 'http', request, response, code, msg, hdrs) File "C:\Python27\lib\urllib2.py", line 438, in error
> return self._call_chain(*args) File "C:\Python27\lib\urllib2.py", line 372, in _call_chain
> result = func(*args) File "C:\Python27\lib\urllib2.py", line 521, in http_error_default
> raise HTTPError(req.get_full_url(), code, msg, hdrs, fp) HTTPError: HTTP Error 403: Forbidden
> ```
Thanks for any help I can get! | The 403 error doesn't come from your code but from Google. Google lets you know that you don't have permission to access the resource you were requesting, in this case the weather API, because it has been discontinued (403 technically stands for Forbidden, they could also have gone for 404 Not Found or 410 Gone).
For more information, read <http://thenextweb.com/google/2012/08/28/did-google-just-quietly-kill-private-weather-api/>
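If the script still needs to run after the API's demise, you can catch the error and degrade gracefully. A hypothetical sketch (shown with Python 3's `urllib.error`; in the OP's Python 2.7, the same `HTTPError` class lives in `urllib2`):

```python
import urllib.error

def fetch_weather(opener, url):
    """Call opener(url); treat a dead/forbidden endpoint as 'no data'."""
    try:
        return opener(url)
    except urllib.error.HTTPError as e:
        if e.code in (403, 404, 410):  # forbidden / not found / gone
            return None
        raise

# A stand-in for the discontinued service, for illustration only.
def dead_service(url):
    raise urllib.error.HTTPError(url, 403, "Forbidden", None, None)

print(fetch_weather(dead_service, "http://example.invalid/weather"))  # None
```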
Other than that, your code is correct. | It's not your error. Google discontinued its weather API a while ago.
If you need a free weather API I would recommend using the service I'm building, Metwit [weather API](http://metwit.com/weather-api/). This is the simplest working example using [metwit-weather](https://pypi.python.org/pypi/metwit-weather):
```
from metwit import Metwit
weather = Metwit.weather.get(location_lat=45.45, location_lng=9.18)
```
Further examples are found here: <http://soup.metwit.com/post/45997437810/python-weather-by-metwit> | Python HTTP Error 403 Forbidden | [
"",
"python",
"http",
"urllib",
"weather-api",
"google-weather-api",
""
] |
What is a regular expression to replace double quotes (") in a string with a backslash-escaped double quote (\") everywhere except at the first and last characters of the string?
Example 1: Double quote embedded in a string
```
Input: "This is a "Test""
Expected Output: "This is a \"Test\""
```
Example 2: No double quotes in the middle of the string
```
Input: "This is a Test"
Expected Output: "This is a Test"
```
When I perform a `re.sub()` operation in Python, everything, including the first and last double-quote characters, is getting replaced. In my example above, the output string becomes: \"This is a Test\". | As pointed out by @mgilson, you can just slice the first and last characters off, so this regex is basically pointless:
```
>>> print re.sub(r'(?<!^)"(?!$)', '\\"', '"This is a "Test""')
"This is a \"Test\""
>>> print re.sub(r'(?<!^)"(?!$)', '\\"', '"This is a Test"')
"This is a Test"
``` | I don't know about you, but I'd do it the easy way:
```
'"{}"'.format(s[1:-1].replace('"',r'\"'))
```
Of course, this makes a whole bunch of assumptions -- The strongest being that the first and last characters are always double quotes ...
Maybe this is a little better:
```
'{0}{1}{2}'.format(s[0],s[1:-1].replace('"',r'\"'),s[-1])
```
which preserves the first and last characters and escapes all double quotes in the middle. | Regular expression replace except first and last characters | [
"",
"python",
"regex",
"replace",
""
] |
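For a quick sanity check, the lookaround pattern from the accepted answer above can be exercised in Python 3 alongside the slicing alternative both answers mention (the helper names here are just for illustration):

```python
import re

def escape_inner_quotes(s):
    # A double quote matches only when it is neither the first character
    # (negative lookbehind for start) nor the last (negative lookahead for end).
    return re.sub(r'(?<!^)"(?!$)', r'\"', s)

def escape_by_slicing(s):
    # The slicing alternative: leave the outer characters alone and
    # escape everything in between.
    return s[0] + s[1:-1].replace('"', r'\"') + s[-1]

print(escape_inner_quotes('"This is a "Test""'))  # "This is a \"Test\""
```

Both functions agree on inputs that really are wrapped in double quotes; the slicing version additionally assumes the string is at least two characters long.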
How does one go about replacing terms in a string - except for the last, which needs to be replaced with something different?
An example:
```
letters = 'a;b;c;d'
```
needs to be changed to
```
letters = 'a, b, c & d'
```
I have used the replace function, as below:
```
letters = letters.replace(';',', ')
```
to give
```
letters = 'a, b, c, d'
```
The problem is that I do not know how to replace the last comma with an ampersand. A position-dependent function cannot be used, as there could be any number of letters, e.g. 'a;b' or 'a;b;c;d;e;f;g'. I have searched through Stack Overflow and the Python tutorials, but cannot find a function to replace just the last found term. Can anyone help? | In `str.replace` you can also pass an optional third argument (`count`) which controls how many replacements are made.
```
In [20]: strs = 'a;b;c;d'
In [21]: count = strs.count(";") - 1
In [22]: strs = strs.replace(';', ', ', count).replace(';', ' & ')
In [24]: strs
Out[24]: 'a, b, c & d'
```
Help on `str.replace`:
```
S.replace(old, new[, count]) -> string
Return a copy of string S with all occurrences of substring
old replaced by new. If the optional argument count is
given, only the first count occurrences are replaced.
``` | ```
letters = 'a;b;c;d'
lettersOut = ' & '.join(letters.replace(';', ', ').rsplit(', ', 1))
print(lettersOut)
``` | Python: replace terms in a string except for the last | [
"",
"python",
"string",
"replace",
""
] |
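The same result as the count-based replace in the accepted answer above can be had with `str.rpartition`, which removes the need to count separators at all (a sketch; the function name is made up):

```python
def join_with_ampersand(s, sep=';'):
    # Split off everything up to the last separator, turn the remaining
    # separators into ", ", and reattach the final item with " & ".
    head, _, tail = s.rpartition(sep)
    if not head:
        return s  # no separator at all: nothing to join
    return head.replace(sep, ', ') + ' & ' + tail

print(join_with_ampersand('a;b;c;d'))  # a, b, c & d
```

It handles any number of items, including the two-item case 'a;b' and a single item with no separator.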
I'd like to read a file in python line by line, but in some cases (based on an if condition) I'd also like to read the next line in the file, and then keep reading it the same way.
Example:
```
file_handler = open(fname, 'r')
for line in file_handler:
if line[0] == '#':
print line
else:
line2 = file_handler.readline()
print line2
```
basically in this example I am trying to read it line by line, but when the line does not start with `#` I'd like to read the next line, print it, and then keep reading the line after line2. This is just an example that reproduces the error I get from similar code in my project; my goal is as stated in the title.
But I'd get an error like `ValueError: Mixing iteration and read methods would lose data`.
Would it be possible to do what I am trying to do in a smarter way? | If you just want to skip over lines not starting with `#`, there's a much easier way to do this:
```
file_handler = open(fname, 'r')
for line in file_handler:
if line[0] != '#':
continue
# now do the regular logic
print line
```
Obviously this kind of simplistic logic won't work in all possible cases. When it doesn't, you have to do exactly what the error implies: either use iteration consistently, or use read methods consistently. This is going to be more tedious and error-prone, but it's not *that* bad.
For example, with `readline`:
```
while True:
line = file_handler.readline()
if not line:
break
if line[0] == '#':
print line
else:
line2 = file_handler.readline()
print line2
```
Or, with iteration:
```
lines = file_handler
for line in file_handler:
if line[0] == '#':
print line
else:
print line
print next(file_handler)
```
However, that last version is sort of "cheating". You're relying on the fact that the iterator in the `for` loop is the same thing as the iterable it was created from. This happens to be true for files, but not for, say, lists. So really, you should do the same kind of `while True` loop here, unless you want to add an explicit `iter` call (or at least a comment explaining why you don't need one).
And a better solution might be to write a generator function that transforms one iterator into another based on your rule, and then print out each value iterated by that generator:
```
def doublifier(iterable):
it = iter(iterable)
while True:
line = next(it)
if line.startswith('#'):
yield line, next(it)
else:
yield (line,)
``` | ```
file_handler = open(fname, 'r')
for line in file_handler:
if line.startswith('#'): # <<< comment 1
print line
else:
line2 = next(file_handler) # <<< comment 2
print line2
```
### Discussion
1. Your code used a single equal sign, which is incorrect. It should be a double equal sign for comparison. I recommend using the .startswith() function to enhance code clarity.
2. Use the `next()` function to advance to the next line since you are using `file_handler` as an iterator. | Read a file line by line, sometimes reading the next line within same loop | [
"",
"python",
""
] |
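On current Python 3 the generator from the accepted answer above needs one adjustment: since PEP 479 (Python 3.7), letting `next(it)` raise `StopIteration` inside a generator is an error, so the end of input has to be handled explicitly. A hedged sketch:

```python
def doublifier(iterable):
    # Yield ('#' line, following line) pairs, and other lines alone.
    it = iter(iterable)
    while True:
        try:
            line = next(it)
        except StopIteration:
            return  # a plain return ends the generator cleanly
        if line.startswith('#'):
            # default '' in case a '#' line is the very last line
            yield line, next(it, '')
        else:
            yield (line,)

pairs = list(doublifier(['# header', 'value', 'plain']))
```

The same generator works unchanged on a file object, since files iterate by line.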
Per the Python documentation, `subprocess.call` should block and wait for the subprocess to complete. In this code I am trying to convert a few `xls` files to a new format by calling `Libreoffice` on the command line. I assumed that the call to `subprocess.call` is blocking, but it seems like I need to add an artificial delay after each call, otherwise I miss a few files in the `out` directory.
What am I doing wrong, and why do I need the delay?
```
from subprocess import call
for i in range(0,len(sorted_files)):
args = ['libreoffice', '-headless', '-convert-to',
'xls', "%s/%s.xls" %(sorted_files[i]['filename'],sorted_files[i]['filename']), '-outdir', 'out']
call(args)
var = raw_input("Enter something: ") # if I comment this line out I don't get all the files in the out directory
```
**EDIT** It might be hard to find the answer through the comments below. I used `unoconv` for document conversion, which is blocking and easy to work with from a script. | The problem is that the `soffice` command-line tool (which `libreoffice` is either just a link to, or a further wrapper around) is just a "controller" for the real program `soffice.bin`. It finds a running copy of `soffice.bin` and/or creates one, tells it to do some work, and then quits.
So, `call` is doing exactly the right thing: it waits for `libreoffice` to quit.
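That `subprocess.call` itself blocks is easy to demonstrate with a child that cannot hand its work to anyone else:

```python
import subprocess
import sys
import time

start = time.time()
# call() returns only after the child exits, so this takes about a second.
rc = subprocess.call([sys.executable, '-c', 'import time; time.sleep(1)'])
elapsed = time.time() - start
```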
But you don't want to wait for `libreoffice` to quit, you want to wait for `soffice.bin` to finish doing the work that `libreoffice` asked it to do.
It looks like what you're trying to do isn't possible to do *directly*. But it's possible to do *indirectly*.
The [docs](https://help.libreoffice.org/Common/Starting_the_Software_With_Parameters) say that headless mode:
> … allows using the application without user interface.
>
> This special mode can be used when the application is controlled by external clients via the API.
In other words, the app doesn't quit after running some UNO strings/doing some conversions/whatever else you specify on the command line, it sits around waiting for more UNO commands from outside, while the launcher just runs as soon as it sends the appropriate commands to the app.
---
You probably have to use that above-mentioned external control API (UNO) directly.
See [Scripting LibreOffice](https://help.libreoffice.org/Common/Scripting) for the basics (although there's more info there about internal scripting than external), and the [API documentation](http://api.libreoffice.org) for details and examples.
But there may be an even simpler answer: [`unoconv`](https://github.com/dagwieers/unoconv) is a simple command-line tool written using the UNO API that does exactly what you want. It starts up LibreOffice if necessary, sends it some commands, waits for the results, and then quits. So if you just use `unoconv` instead of `libreoffice`, `call` is all you need.
Also notice that `unoconv` is written in Python, and is designed to be used as a module. If you just `import` it, you can write your own (simpler, and use-case-specific) code to replace the ["Main entrance"](https://github.com/dagwieers/unoconv/blob/master/unoconv#L1096) code, and not use `subprocess` at all. (Or, of course, you can tear apart the module and use the relevant code yourself, or just use it as a very nice piece of sample code for using UNO from Python.)
Also, the `unoconv` page linked above lists a variety of other similar tools, some that work via UNO and some that don't, so if it doesn't work for you, try the others.
---
If nothing else works, you could consider, e.g., creating a sentinel file and using a filesystem watch, so at least you'll be able to detect exactly when it's finished its work, instead of having to guess at a timeout. But that's a real last-ditch workaround that you shouldn't even consider until eliminating all of the other options. | It's ~~possible~~ likely that `libreoffice` is implemented as some sort of daemon/intermediary process. The "daemon" will (effectively¹) parse the command line and then farm the work off to some other process, possibly detaching it so that it can exit immediately. (Based on the `-invisible` option in the [documentation](https://help.libreoffice.org/Common/Starting_the_Software_With_Parameters), I suspect strongly that this is indeed the case you have.)
If this is the case, then your `subprocess.call` does do what it is advertised to do -- it waits for the daemon to complete before moving on. However, it doesn't do what you want, which is to wait for all of the work to be completed. The only option you have in that scenario is to look to see if the daemon has a `-wait` option or similar.
---
¹ It is likely that we don't have an *actual daemon* here, only something which behaves similarly. See [comments by abernert](https://stackoverflow.com/questions/16285918/subprocess-call-does-not-wait-for-the-process-to-complete/16285977?noredirect=1#comment23310584_16285977) | subprocess.call does not wait for the process to complete | [
"",
"python",
"subprocess",
""
] |
I'm just getting started with PyGame. Here, I'm trying to draw a rectangle, but it's not rendering.
Here's the whole program.
```
import pygame
from pygame.locals import *
import sys
import random
pygame.init()
pygame.display.set_caption("Rafi's Game")
clock = pygame.time.Clock()
screen = pygame.display.set_mode((700, 500))
class Entity():
def __init__(self, x, y):
self.x = x
self.y = y
class Hero(Entity):
def __init__(self):
Entity.__init__
self.x = 0
self.y = 0
def draw(self):
pygame.draw.rect(screen, (255, 0, 0), ((self.x, self.y), (50, 50)), 1)
hero = Hero()
#--------------Main Loop-----------------
while True:
hero.draw()
keysPressed = pygame.key.get_pressed()
if keysPressed[K_a]:
hero.x = hero.x - 3
if keysPressed[K_d]:
hero.x = hero.x + 3
if keysPressed[K_w]:
hero.y = hero.y - 3
if keysPressed[K_s]:
hero.y = hero.y + 3
screen.fill((0, 255, 0))
#Event Procesing
for event in pygame.event.get():
if event.type == pygame.QUIT:
pygame.quit()
sys.exit()
#Event Processing End
pygame.display.flip()
clock.tick(20)
```
`self.x` and `self.y` are currently 0 and 0.
Note that this is not a finished program, all it should do is draw a red square on a green background that can be controled by the WASD keys. | Let's look at a portion of your main loop:
```
while True:
hero.draw()
keysPressed = pygame.key.get_pressed()
if keysPressed[K_a]:
hero.x = hero.x - 3
if keysPressed[K_d]:
hero.x = hero.x + 3
if keysPressed[K_w]:
hero.y = hero.y - 3
if keysPressed[K_s]:
hero.y = hero.y + 3
screen.fill((0, 255, 0))
```
Inside the Hero class's draw function, you are drawing the rect. In the main loop, you are calling `hero.draw()`, and then after handling your inputs, you are calling `screen.fill()`. This is drawing over the rect you just drew. Try this:
```
while True:
screen.fill((0, 255, 0))
hero.draw()
keysPressed = pygame.key.get_pressed()
....
```
That will color the entire screen green, *then* draw your rect over the green screen. | This is more of an extended comment and question than an answer.
The following draws a red square. Does it work for you?
```
import sys
import pygame
pygame.init()
size = 320, 240
black = 0, 0, 0
red = 255, 0, 0
screen = pygame.display.set_mode(size)
while True:
for event in pygame.event.get():
if event.type == pygame.QUIT:
sys.exit()
screen.fill(black)
# Either of the following works. Without the fourth argument,
# the rectangle is filled.
pygame.draw.rect(screen, red, (10,10,50,50))
#pygame.draw.rect(screen, red, (10,10,50,50), 1)
pygame.display.flip()
``` | Can't draw rect in python pygame | [
"",
"python",
"pygame",
"render",
"draw",
"rect",
""
] |
I'd like to find a way to print a list of dictionaries line by line, so that the result is clear and easy to read
the list is like this.
---
```
myList = {'1':{'name':'x','age':'18'},'2':{'name':'y','age':'19'},'3':{'name':'z','age':'20'}...}
```
---
and the result should be like this:
```
>>> '1':{'name':'x','age':'18'}
'2':{'name':'y','age':'19'}
'3':{'name':'z','age':'20'} ...
``` | Using your example:
```
>>> myList = {'1':{'name':'x','age':'18'},'2':{'name':'y','age':'19'},'3':{'name':'z','age':'20'}}
>>> for k, d in myList.items():
print k, d
1 {'age': '18', 'name': 'x'}
3 {'age': '20', 'name': 'z'}
2 {'age': '19', 'name': 'y'}
```
---
More examples:
A list of dictionaries:
```
>>> l = [{'a':'1'},{'b':'2'},{'c':'3'}]
>>> for d in l:
print d
{'a': '1'}
{'b': '2'}
{'c': '3'}
```
A dictionary of dictionaries:
```
>>> D = {'d1': {'a':'1'}, 'd2': {'b':'2'}, 'd3': {'c':'3'}}
>>> for k, d in D.items():
print d
{'b': '2'}
{'c': '3'}
{'a': '1'}
```
If you want the key of the dicts:
```
>>> D = {'d1': {'a':'1'}, 'd2': {'b':'2'}, 'd3': {'c':'3'}}
>>> for k, d in D.items():
print k, d
d2 {'b': '2'}
d3 {'c': '3'}
d1 {'a': '1'}
``` | ```
>>> import json
>>> dicts = {1: {'a': 1, 'b': 2}, 2: {'c': 3}, 3: {'d': 4, 'e': 5, 'f':6}}
>>> print(json.dumps(dicts, indent=4))
{
"1": {
"a": 1,
"b": 2
},
"2": {
"c": 3
},
"3": {
"d": 4,
"e": 5,
"f": 6
}
}
``` | loop for to print a dictionary of dictionaries | [
"",
"python",
"for-loop",
"python-2.x",
""
] |
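Combining the two answers above, one stable-order variant (the names are illustrative; `sorted` is used because plain dicts made no ordering promise on the Python versions current when this was asked):

```python
people = {'1': {'name': 'x', 'age': '18'},
          '2': {'name': 'y', 'age': '19'},
          '3': {'name': 'z', 'age': '20'}}

# One "key: inner-dict" line per entry, in sorted key order.
lines = ["{!r}: {!r}".format(k, people[k]) for k in sorted(people)]
print("\n".join(lines))
```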
Given a list of numpy arrays, each with the same dimensions, how can I find which array contains the maximum value on an element-by-element basis?
e.g.
```
import numpy as np
def find_index_where_max_occurs(my_list):
# d = ... something goes here ...
return d
a=np.array([1,1,3,1])
b=np.array([3,1,1,1])
c=np.array([1,3,1,1])
my_list=[a,b,c]
array_of_indices_where_max_occurs = find_index_where_max_occurs(my_list)
# This is what I want:
# >>> print array_of_indices_where_max_occurs
# array([1,2,0,0])
# i.e. for the first element, the maximum value occurs in array b which is at index 1 in my_list.
```
Any help would be much appreciated... thanks! | Another option if you want an array:
```
>>> np.array((a, b, c)).argmax(axis=0)
array([1, 2, 0, 0])
```
So:
```
def f(my_list):
return np.array(my_list).argmax(axis=0)
```
This works with multidimensional arrays, too. | For the fun of it, I realised that @Lev's original answer was faster than his generalized edit, so this is the generalized stacking version which is much faster than the `np.asarray` version, but it is not very elegant.
```
np.concatenate((a[None,...], b[None,...], c[None,...]), axis=0).argmax(0)
```
That is:
```
def bystack(arrs):
return np.concatenate([arr[None,...] for arr in arrs], axis=0).argmax(0)
```
Some explanation:
I've added a new axis to each array: `arr[None,...]` is equivalent to `arr[np.newaxis,...]` which is the same as `arr[np.newaxis,:,:,:]` where the `...` expands to the appropriate number of dimensions. This is done because `np.concatenate` will then stack along the new dimension, which is `0` since the `None` is at the front.
So, for example:
```
In [286]: a
Out[286]:
array([[0, 1],
[2, 3]])
In [287]: b
Out[287]:
array([[10, 11],
[12, 13]])
In [288]: np.concatenate((a[None,...],b[None,...]),axis=0)
Out[288]:
array([[[ 0, 1],
[ 2, 3]],
[[10, 11],
[12, 13]]])
```
In case it helps to understand, this would work too:
```
np.concatenate((a[...,None], b[...,None], c[...,None]), axis=a.ndim).argmax(a.ndim)
```
where the new axis is now added at the end, so we must stack and maximize along that last axis, which will be `a.ndim`. For `a`, `b`, and `c` being 2d, we could do this:
```
np.concatenate((a[:,:,None], b[:,:,None], c[:,:,None]), axis=2).argmax(2)
```
Which is equivalent to the `dstack` I mentioned in my comment above (`dstack` adds a third axis to stack along if it doesn't exist in the arrays).
To test:
```
N = 10
M = 2
a = np.random.random((N,)*M)
b = np.random.random((N,)*M)
c = np.random.random((N,)*M)
def bystack(arrs):
return np.concatenate([arr[None,...] for arr in arrs], axis=0).argmax(0)
def byarray(arrs):
return np.array(arrs).argmax(axis=0)
def byasarray(arrs):
return np.asarray(arrs).argmax(axis=0)
def bylist(arrs):
assert arrs[0].ndim == 1, "ndim must be 1"
return [np.argmax(x) for x in zip(*arrs)]
In [240]: timeit bystack((a,b,c))
100000 loops, best of 3: 18.3 us per loop
In [241]: timeit byarray((a,b,c))
10000 loops, best of 3: 89.7 us per loop
In [242]: timeit byasarray((a,b,c))
10000 loops, best of 3: 90.0 us per loop
In [259]: timeit bylist((a,b,c))
1000 loops, best of 3: 267 us per loop
``` | How to find which numpy array contains the maximum value on an element by element basis? | [
"",
"python",
"algorithm",
"numpy",
""
] |
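For reference, the `argmax(axis=0)` semantics used above (ties resolved to the lowest index) can be spelled out without numpy:

```python
a = [1, 1, 3, 1]
b = [3, 1, 1, 1]
c = [1, 3, 1, 1]
arrays = [a, b, c]

# For each position j, the index of the array holding the (first) maximum,
# i.e. a pure-Python np.array(arrays).argmax(axis=0).
winners = [max(range(len(arrays)), key=lambda i: arrays[i][j])
           for j in range(len(a))]
print(winners)  # [1, 2, 0, 0]
```

`max` with a key returns the first maximal item, which matches numpy's argmax tie-breaking.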
I have a `job` table and a `visit` table. A job can have multiple visits. I need to retrieve all jobs, which haven't been set as paid, with all visits tied to that job set as completed.
So basically I need to only retrieve a job if:
* It hasn't been paid `(paid = 'N')`
* All the visits tied to that job are set as complete `(status = 2)`
Obviously doing the following doesn't work as it will return any result where `job.paid = 'N' and visit.status = '2'`:
```
SELECT *
FROM job INNER JOIN visit
ON job.id = visit.job_id
WHERE job.paid = 'N' AND
visit.status = 2;
```
I could retrieve the results, and run additional queries to check that all the visits for a job are complete, but I was wondering if it's possible to retrieve the data in a single query? | **UPDATE 1**
```
SELECT a.ID -- <<== add some columns here
FROM job a INNER JOIN visit b ON a.id = b.job_ID
WHERE a.paid = 'N'
GROUP BY a.ID
HAVING COUNT(DISTINCT b.Status) = 1 AND MAX(b.status) = 2
``` | ```
SELECT *
FROM job j
WHERE j.paid = 'N' AND
NOT EXISTS (SELECT 1 FROM visit WHERE job_id = j.id AND visit.status <> 2);
``` | Retrieve rows where all related data meets specified criteria | [
"",
"mysql",
"sql",
""
] |
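A quick check of the `NOT EXISTS` variant on an in-memory SQLite database; the schema below is a guess at the job/visit tables described in the question:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
    CREATE TABLE job (id INTEGER PRIMARY KEY, paid TEXT);
    CREATE TABLE visit (job_id INTEGER, status INTEGER);
    INSERT INTO job VALUES (1, 'N'), (2, 'N'), (3, 'Y');
    INSERT INTO visit VALUES (1, 2), (1, 2), (2, 2), (2, 1), (3, 2);
""")
rows = con.execute("""
    SELECT j.id FROM job j
    WHERE j.paid = 'N'
      AND NOT EXISTS (SELECT 1 FROM visit
                      WHERE job_id = j.id AND status <> 2)
""").fetchall()
print(rows)  # [(1,)]: unpaid, and every visit completed
```

One difference worth knowing: `NOT EXISTS` also admits an unpaid job with no visits at all, while the `HAVING COUNT(...)` version does not.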
```
INSERT INTO Log_Table
VALUES ('Mismatch', 'C:\Folder-SBX2\', '\Subfolder1\', 'm1.txt',
'37587b066cf68b3870101c4bbc1a5dc0', 'SBX2', '7.2 SP13 CC5',
To_Date('2013/04/04 11:46:06 AM', 'YYYY-MM-DD HH:MI:SS AM'));
```
Specifically that last line. I want the date to show up as **2013/04/04 11:46:06 AM** but instead it's entered as **04-APR-13**. What is the problem? | Oracle stores dates in its own format. Storage has nothing to do with display.
`04-APR-2013` is just the default format Oracle uses if we don't change it. If you want to change how the date is displayed you need to change the NLS\_DATE\_FORMAT in the client. For instance, you could run this command:
```
sql> alter session set nls_date_format='YYYY-MM-DD HH:MI:SS AM';
```
---
> "that is incredibly stupid. Not your answer, but Oracle I mean"
Umm, no. What would be stupid would be storing dates in a variety of different formats. Check any of the myriad questions on SO from people who have dates stored as strings.
> " I use this alter session statement in conjunction with a Select statement to display the date properly"
You don't. You have two choices. The first - and in my opinion the better - option is to just return the date in the Oracle canonical format, and let the client software decide how to display it. Alternatively you can use TO\_CHAR and apply the format mask you want; only now you have cast the date to a string, and that may have undesirable consequences.
If you can persuade your DBA that your preferred format is the One True Format then they can change the NLS\_DATE\_FORMAT globally, so everybody will see dates rendered as you would like (unless they change their own local settings). | So add formatting to your select statement as:
```
select to_char(date_column, 'YYYY/MM/DD HH:MI:SS')
from my_table
``` | What is wrong with this To_Date() statement? | [
"",
"sql",
"oracle",
""
] |
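The same storage-versus-display separation exists in Python's `datetime`, which may make the point concrete (formats exist only at parse time and print time):

```python
from datetime import datetime

# Parse once; the stored value has no format of its own.
d = datetime.strptime('2013/04/04 11:46:06 AM', '%Y/%m/%d %I:%M:%S %p')

# Two different renderings of the same stored value:
default_style = d.strftime('%d-%b-%y').upper()     # like Oracle's default
wanted_style = d.strftime('%Y/%m/%d %I:%M:%S %p')  # the asker's format
print(default_style, '<->', wanted_style)
```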
This code will not work
```
import urllib
def loadHtml (url):
response = urllib.open(url)
html = response.read()
return html
firstUrl = 'http://www.google.it';
html = loadHtml (firstUrl);
```
This is the error
```
File "af1.py", line 10, in <module>
html = loadHtml (firstUrl);
File "af1.py", line 5, in loadHtml
response = urllib.open(url)
```
I'm on my second day with Python .. what's the problem now?
```
AttributeError: 'module' object has no attribute 'open'
```
EDIT: **I've not searched for open in urllib because I did not understand what Python means by 'module'** | Maybe `urllib.urlopen()` is what you need, not `urllib.open()`?
You may find more documentation on the library:
* in the official docs: <http://docs.python.org/2/library/urllib.html>
* Or by calling `help(urllib)` | The problem is exactly as it says: the `urllib` module doesn't have a function called `open()`.
Perhaps you meant [`urllib.urlopen()`](http://docs.python.org/2/library/urllib.html#urllib.urlopen).
One way of resolving things like these without leaving Python is to use the `dir()` function on the module, and writing some trivial code to search through the list:
```
>>> import urllib
>>> [x for x in dir(urllib) if x.find("open") >= 0]
['FancyURLopener', 'URLopener', '_urlopener', 'urlopen']
``` | Python (newbie): how using imported function into my functions? | [
"",
"python",
"urllib",
""
] |
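The `dir()` search from the second answer still works on Python 3, where the function has moved to `urllib.request.urlopen`:

```python
import urllib.request

# Same trick as the answer, with a substring match instead of .find():
matches = [name for name in dir(urllib.request) if 'open' in name.lower()]
print('urlopen' in matches)  # True
```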
I'm importing data from one system to another. The former keys off an alphanumeric field whereas the latter requires a numeric integer field. I'd like to find or write a function that I can feed the alphanumeric value to and have it return a number that would be unique to the value passed in.
My first thought was to do a hash, but of course the result of any built-in hash is going to contain letters, and it's technically possible (however unlikely) that a hash may not be unique.
My first question is whether there is anything built in to sql that I'm overlooking, and short of that I'd like to hear suggestions on the easiest way to implement such a function. | Here is a function which will probably convert from base 10 (integer) to base 36 (alphanumeric) and back again:
<https://www.simple-talk.com/sql/t-sql-programming/numeral-systems-and-numbers-conversion-in-sql/>
You might find the resultant number is too big to be held in an integer though. | You could concatenate the ascii values of each character of your string and cast the result as a bigint. | Sql function to turn character field into number field | [
"",
"sql",
"sql-server-2008",
""
] |
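Outside the database, the base-36 idea from the linked article is easy to sketch, and unlike a hash it is reversible, so uniqueness is guaranteed (helper names are made up; assumes keys use only 0-9 and A-Z):

```python
ALPHABET = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ'

def key_to_int(key):
    # int() already understands base 36: digits 0-9, then A-Z as 10-35.
    return int(key, 36)

def int_to_key(n):
    # Repeated divmod turns the integer back into its base-36 digits.
    digits = ''
    while True:
        n, r = divmod(n, 36)
        digits = ALPHABET[r] + digits
        if n == 0:
            return digits

print(key_to_int('AB12'))  # 480854
```

As the answer warns, long keys overflow a 32-bit integer quickly (a 7-character key already exceeds 2^31), so a bigint column would be needed.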
I have the following code:
```
x=4200/820
```
It should return `x=5.12` but it gives `x=5`. How can I change this? Thanks
EDIT:
If instead I had this code:
```
x=(z-min(z_levels))/(max(z_levels)-min(z_levels))
```
Where `z, min(z_levels) and max(z_levels)` are values that change in a for loop taken from a list.
How can I make `x` a float? Do I need to change the code like this:?
```
x=float(z-min(z_levels))/float(max(z_levels)-min(z_levels))
``` | In python2.x `integer division` always results in truncated output, but if one of the operands is changed to `float` then the output is `float` too.
```
In [40]: 4200/820
Out[40]: 5
In [41]: 4200/820.0
Out[41]: 5.121951219512195
In [42]: 4200/float(820)
Out[42]: 5.121951219512195
```
This has been [changed in python 3.x](http://docs.python.org/3.0/whatsnew/3.0.html#integers); in py3x `/` performs true division (non-truncating) while `//` is used for truncated output.
```
In [43]: from __future__ import division #import py3x's division in py2x
In [44]: 4200/820
Out[44]: 5.121951219512195
``` | Use a decimal to make python use floats
```
>>> x=4200/820.
>>> x
5.121951219512195
```
If you then want to round x to 5.12, you can use the `round` function:
```
>>> round(x, 2)
5.12
``` | Rounding of variables | [
"",
"python",
"rounding",
""
] |
I'm using an Oracle 9i database and want to obtain, within a function, the timestamp representing the start of the week, i.e. the most recent Monday, at 00:00:00.
I am aware that the timestamp representing the start of the current day is `TO_TIMESTAMP(SYSDATE)`. | You can use the function `next_day` to get that:
```
SQL> select next_day(sysdate-7, 'MONDAY') FROM DUAL;
NEXT_DAY
---------
29-APR-13
``` | Getting the start of the week should work with `trunc` (see [docs](http://docs.oracle.com/cd/B28359_01/server.111/b28286/functions242.htm#i1002084)).
So,
```
select to_timestamp(trunc(sysdate, 'D')) from dual
```
should work.
However, depending on your NLS settings, the first day of the week for oracle may well be Sunday. | In oracle SQL, how would you obtain the timestamp representing the start of the week? | [
"",
"sql",
"oracle",
"timestamp",
"oracle9i",
""
] |
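For comparison, here is the value being asked for (midnight of the most recent Monday) computed in Python. Note also that Oracle's `TRUNC(date, 'IW')` format model truncates to the ISO week start, i.e. Monday, independent of NLS settings, which would sidestep the caveat in the second answer (worth verifying on 9i):

```python
from datetime import datetime, timedelta

def start_of_week(ts):
    # weekday() counts Monday as 0, so subtracting it lands on Monday;
    # combining with midnight drops the time-of-day part.
    monday = ts.date() - timedelta(days=ts.weekday())
    return datetime.combine(monday, datetime.min.time())

print(start_of_week(datetime(2013, 5, 1, 15, 30)))  # 2013-04-29 00:00:00
```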
I have some volumetric imaging data consisting of values sampled on a regular grid in x,y,z, but with a non-cubic voxel shape (the space between adjacent points in z is greater than in x,y). I would eventually like to be able to interpolate the values on some arbitrary 2D plane that passes through the volume, like this:

I'm aware of `scipy.ndimage.map_coordinates`, but in my case using it is less straightforward because it implicitly assumes that the spacing of the elements in the input array is equal across dimensions. I could first resample my input array according to the smallest voxel dimension (so that all of my voxels would then be cubes), then use `map_coordinates` to interpolate over my plane, but it doesn't seem like a great idea to interpolate my data twice.
I'm also aware that `scipy` has various interpolators for irregularly-spaced ND data (`LinearNDInterpolator`, `NearestNDInterpolator` etc.), but these are very slow and memory-intensive for my purposes. What is the best way of interpolating my data given that I know that the values *are* regularly spaced within each dimension? | You can use `map_coordinates` with a little bit of algebra. Let's say the spacings of your grid are `dx`, `dy` and `dz`. We need to map these **real world** coordinates to **array index** coordinates, so let's define three new variables:
```
xx = x / dx
yy = y / dy
zz = z / dz
```
The **array index** input to `map_coordinates` is an array of shape `(d, ...)` where `d` is the number of dimensions of your original data. If you define an array such as:
```
scaling = np.array([dx, dy, dz])
```
you can transform your **real world** coordinates to **array index** coordinates by dividing by `scaling` with a little broadcasting magic:
```
idx = coords / scaling[(slice(None),) + (None,)*(coords.ndim-1)]
```
To put it all together in an example:
```
dx, dy, dz = 1, 1, 2
scaling = np.array([dx, dy, dz])
data = np.random.rand(10, 15, 5)
```
Let's say we want to interpolate values along the plane `2*y - z = 0`. We take two vectors perpendicular to the plane's normal vector:
```
u = np.array([1, 0 ,0])
v = np.array([0, 1, 2])
```
And get the coordinates at which we want to interpolate as:
```
coords = (u[:, None, None] * np.linspace(0, 9, 10)[None, :, None] +
v[:, None, None] * np.linspace(0, 2.5, 10)[None, None, :])
```
We convert them to **array index** coordinates and interpolate using `map_coordinates`:
```
idx = coords / scaling[(slice(None),) + (None,)*(coords.ndim-1)]
new_data = ndi.map_coordinates(data, idx)
```
This last array is of shape `(10, 10)` and has in position `[u_idx, v_idx]` the value corresponding to the coordinate `coords[:, u_idx, v_idx]`.
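Since order-1 `map_coordinates` is just linear interpolation in index space, the effect of the scaling step can be sanity-checked along a single axis without scipy (a toy sketch, not the real call):

```python
dz = 2.0                            # grid spacing along z
data_z = [10.0, 20.0, 30.0, 40.0]   # samples at z = 0, 2, 4, 6

def sample_z(z):
    idx = z / dz                    # real-world coordinate -> array index
    lo = int(idx)
    frac = idx - lo
    if lo >= len(data_z) - 1:
        return data_z[-1]           # clamp at the top edge
    return data_z[lo] * (1 - frac) + data_z[lo + 1] * frac

print(sample_z(3.0))  # 25.0, halfway between the samples at z=2 and z=4
```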
You could build on this idea to handle interpolation where your coordinates don't start at zero, by adding an offset before the scaling. | Here's a simple class `Intergrid`
that maps / scales non-uniform to uniform grids,
then does `map_coordinates`.
On a [4d test case](https://stackoverflow.com/questions/14119892/python-4d-linear-interpolation-on-a-rectangular-grid) it runs at about 1 μsec per query point.
`pip install [--user] intergrid` should work (February 2020), in python2 or python3; see [intergrid on PyPi](https://pypi.org/project/intergrid/).
```
""" interpolate data given on an Nd rectangular grid, uniform or non-uniform.
Purpose: extend the fast N-dimensional interpolator
`scipy.ndimage.map_coordinates` to non-uniform grids, using `np.interp`.
Background: please look at
http://en.wikipedia.org/wiki/Bilinear_interpolation
https://stackoverflow.com/questions/6238250/multivariate-spline-interpolation-in-python-scipy
http://docs.scipy.org/doc/scipy-dev/reference/generated/scipy.ndimage.interpolation.map_coordinates.html
Example
-------
Say we have rainfall on a 4 x 5 grid of rectangles, lat 52 .. 55 x lon -10 .. -6,
and want to interpolate (estimate) rainfall at 1000 query points
in between the grid points.
# define the grid --
griddata = np.loadtxt(...) # griddata.shape == (4, 5)
lo = np.array([ 52, -10 ]) # lowest lat, lowest lon
hi = np.array([ 55, -6 ]) # highest lat, highest lon
# set up an interpolator function "interfunc()" with class Intergrid --
interfunc = Intergrid( griddata, lo=lo, hi=hi )
# generate 1000 random query points, lo <= [lat, lon] <= hi --
query_points = lo + np.random.uniform( size=(1000, 2) ) * (hi - lo)
# get rainfall at the 1000 query points --
query_values = interfunc( query_points ) # -> 1000 values
What this does:
for each [lat, lon] in query_points:
1) find the square of griddata it's in,
e.g. [52.5, -8.1] -> [0, 3] [0, 4] [1, 4] [1, 3]
2) do bilinear (multilinear) interpolation in that square,
using `scipy.ndimage.map_coordinates` .
Check:
interfunc( lo ) -> griddata[0, 0],
interfunc( hi ) -> griddata[-1, -1] i.e. griddata[3, 4]
Parameters
----------
griddata: numpy array_like, 2d 3d 4d ...
lo, hi: user coordinates of the corners of griddata, 1d array-like, lo < hi
maps: a list of `dim` descriptors of piecewise-linear or nonlinear maps,
e.g. [[50, 52, 62, 63], None] # uniformize lat, linear lon
copy: make a copy of query_points, default True;
copy=False overwrites query_points, runs in less memory
verbose: default 1: print a 1-line summary for each call, with run time
order=1: see `map_coordinates`
prefilter: 0 or False, the default: smoothing B-spline
1 or True: exact-fit interpolating spline (IIR, not C-R)
1/3: Mitchell-Netravali spline, 1/3 B + 2/3 fit
(prefilter is only for order > 1, since order = 1 interpolates)
Non-uniform rectangular grids
-----------------------------
What if our griddata above is at non-uniformly-spaced latitudes,
say [50, 52, 62, 63] ? `Intergrid` can "uniformize" these
before interpolation, like this:
lo = np.array([ 50, -10 ])
hi = np.array([ 63, -6 ])
maps = [[50, 52, 62, 63], None] # uniformize lat, linear lon
interfunc = Intergrid( griddata, lo=lo, hi=hi, maps=maps )
This will map (transform, stretch, warp) the lats in query_points column 0
to array coordinates in the range 0 .. 3, using `np.interp` to do
piecewise-linear (PWL) mapping:
50 51 52 53 54 55 56 57 58 59 60 61 62 63 # lo[0] .. hi[0]
0 .5 1 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2 3
`maps[1] None` says to map the lons in query_points column 1 linearly:
-10 -9 -8 -7 -6 # lo[1] .. hi[1]
0 1 2 3 4
More doc: https://denis-bz.github.com/docs/intergrid.html
"""
# split class Gridmap ?
from __future__ import division
from time import time
# warnings
import numpy as np
from scipy.ndimage import map_coordinates, spline_filter
__version__ = "2014-01-15 jan denis" # 15jan: fix bug in linear scaling
__author_email__ = "denis-bz-py@t-online.de" # comments welcome, testcases most welcome
#...............................................................................
class Intergrid:
__doc__ = globals()["__doc__"]
def __init__( self, griddata, lo, hi, maps=[], copy=True, verbose=1,
order=1, prefilter=False ):
griddata = np.asanyarray( griddata )
dim = griddata.ndim # - (griddata.shape[-1] == 1) # ??
assert dim >= 2, griddata.shape
self.dim = dim
if np.isscalar(lo):
lo *= np.ones(dim)
if np.isscalar(hi):
hi *= np.ones(dim)
self.loclip = lo = np.asarray_chkfinite( lo ).copy()
self.hiclip = hi = np.asarray_chkfinite( hi ).copy()
assert lo.shape == (dim,), lo.shape
assert hi.shape == (dim,), hi.shape
self.copy = copy
self.verbose = verbose
self.order = order
if order > 1 and 0 < prefilter < 1: # 1/3: Mitchell-Netravali = 1/3 B + 2/3 fit
exactfit = spline_filter( griddata ) # see Unser
griddata += prefilter * (exactfit - griddata)
prefilter = False
self.griddata = griddata
self.prefilter = (prefilter == True)
self.maps = maps
self.nmap = 0
if len(maps) > 0:
assert len(maps) == dim, "maps must have len %d, not %d" % (
dim, len(maps))
# linear maps (map None): Xcol -= lo *= scale -> [0, n-1]
# nonlinear: np.interp e.g. [50 52 62 63] -> [0 1 2 3]
self._lo = np.zeros(dim)
self._scale = np.ones(dim)
for j, (map, n, l, h) in enumerate( zip( maps, griddata.shape, lo, hi )):
## print "test: j map n l h:", j, map, n, l, h
if map is None or callable(map):
self._lo[j] = l
if h > l:
self._scale[j] = (n - 1) / (h - l) # _map lo -> 0, hi -> n - 1
else:
self._scale[j] = 0 # h <= l: X[:,j] -> 0
continue
self.maps[j] = map = np.asanyarray(map)
self.nmap += 1
assert len(map) == n, "maps[%d] must have len %d, not %d" % (
j, n, len(map) )
mlo, mhi = map.min(), map.max()
if not (l <= mlo <= mhi <= h):
print "Warning: Intergrid maps[%d] min %.3g max %.3g " \
"are outside lo %.3g hi %.3g" % (
j, mlo, mhi, l, h )
#...............................................................................
def _map_to_uniform_grid( self, X ):
""" clip, map X linear / nonlinear inplace """
np.clip( X, self.loclip, self.hiclip, out=X )
# X nonlinear maps inplace --
for j, map in enumerate(self.maps):
if map is None:
continue
if callable(map):
X[:,j] = map( X[:,j] ) # clip again ?
else:
# PWL e.g. [50 52 62 63] -> [0 1 2 3] --
X[:,j] = np.interp( X[:,j], map, np.arange(len(map)) )
# linear map the rest, inplace (nonlinear _lo 0, _scale 1: noop)
if self.nmap < self.dim:
X -= self._lo
X *= self._scale # (griddata.shape - 1) / (hi - lo)
## print "test: _map_to_uniform_grid", X.T
#...............................................................................
def __call__( self, X, out=None ):
""" query_values = Intergrid(...) ( query_points npt x dim )
"""
X = np.asanyarray(X)
assert X.shape[-1] == self.dim, ("the query array must have %d columns, "
"but its shape is %s" % (self.dim, X.shape) )
Xdim = X.ndim
if Xdim == 1:
X = np.asarray([X]) # in a single point -> out scalar
if self.copy:
X = X.copy()
assert X.ndim == 2, X.shape
npt = X.shape[0]
if out is None:
out = np.empty( npt, dtype=self.griddata.dtype )
t0 = time()
self._map_to_uniform_grid( X ) # X inplace
#...............................................................................
map_coordinates( self.griddata, X.T,
order=self.order, prefilter=self.prefilter,
mode="nearest", # outside -> edge
# test: mode="constant", cval=np.NaN,
output=out )
if self.verbose:
print "Intergrid: %.3g msec %d points in a %s grid %d maps order %d" % (
(time() - t0) * 1000, npt, self.griddata.shape, self.nmap, self.order )
return out if Xdim == 2 else out[0]
at = __call__
# end intergrid.py
``` | Fast interpolation of regularly sampled 3D data with different intervals in x,y, and z | [
"",
"python",
"numpy",
"scipy",
"interpolation",
"volume-rendering",
""
] |
I don't have a clue why this is happening. I was messing with some lists, and I needed a `for` loop going from 0 to `log(n, 2)` where n was the length of a list. But the code was amazingly slow, so after a bit of research I found that the problem is in the range generation. Sample code for demonstration:
```
n = len([1,2,3,4,5,6,7,8])
k = 8
timeit('range(log(n, 2))', number=2, repeat=3) # Test 1
timeit('range(log(k, 2))', number=2, repeat=3) # Test 2
```
The output
```
2 loops, best of 3: 2.2 s per loop
2 loops, best of 3: 3.46 µs per loop
```
The number of tests is low (I didn't want this to be running more than 10 minutes), but it already shows that `range(log(n, 2))` is orders of magnitude slower than the counterpart using just the logarithm of an integer. This is really surprising and I don't have any clue why this is happening. Maybe it is a problem on my PC, maybe a Sage problem or a Python bug (I didn't try the same on plain Python).
Using `xrange` instead of `range` doesn't help either. Also, if you get the number with `.n()`, test 1 runs at the same speed as test 2.
Does anybody know what can be happening?
Thanks! | Good grief -- I recognize this one. It's related to one of mine, [trac #12121](http://trac.sagemath.org/sage_trac/ticket/12121). First, you get extra overhead from using a Python `int` as opposed to a Sage `Integer` for boring reasons:
```
sage: log(8, 2)
3
sage: type(log(8, 2))
sage.rings.integer.Integer
sage: log(8r, 2)
log(8)/log(2)
sage: type(log(8r, 2))
sage.symbolic.expression.Expression
sage: %timeit log(8, 2)
1000000 loops, best of 3: 1.4 us per loop
sage: %timeit log(8r, 2)
1000 loops, best of 3: 404 us per loop
```
(The `r` suffix means "raw", and prevents the Sage preparser from wrapping the literal `2` into `Integer(2)`)
And then it gets weird. In order to produce an int for `range` to consume, Sage has to figure out how to turn `log(8)/log(2)` into 3, and it turns out that she does the worst thing possible. Plagiarizing my original diagnosis (mutatis mutandis):
First she checks to see if this object has its own way to get an int, and it doesn't. So she builds a RealInterval object out of log(8)/log(2), and it turns out that this is about the worst thing she could do! She checks to see whether the lower and upper parts of the interval agree [on the floor, I mean] (so that she knows for certain what the floor is). But in this case, **because it really is an integer!** this is always going to look like:
```
sage: y = log(8)/log(2)
sage: rif = RealIntervalField(53)(y)
sage: rif
3.000000000000000?
sage: rif.endpoints()
(2.99999999999999, 3.00000000000001)
```
These two bounds have floors which aren't equal, so Sage decides she hasn't solved the problem yet, and she keeps increasing the precision to 20000 bits to see if she can prove that they are, but by construction it's never going to work. Finally she gives up and tries to simplify it, which succeeds:
```
sage: y.simplify_full()
3
```
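(An aside, not part of the original diagnosis: outside Sage you can dodge the symbolic round-trip entirely by computing the integer log yourself; a plain-Python sketch:)

```python
# Not from the original answer: a plain-Python way to avoid feeding
# range() anything symbolic or floating. For positive n,
# n.bit_length() - 1 is exactly floor(log2(n)).
def int_log2(n):
    if n <= 0:
        raise ValueError("n must be positive")
    return n.bit_length() - 1

print(int_log2(8))               # 3
print(list(range(int_log2(8))))  # [0, 1, 2]
```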
Proof without words that it's a perverse property of the exactly divisible case:
```
sage: %timeit range(log(8r, 2))
1 loops, best of 3: 2.18 s per loop
sage: %timeit range(log(9r, 2))
1000 loops, best of 3: 766 us per loop
sage: %timeit range(log(15r, 2))
1000 loops, best of 3: 764 us per loop
sage: %timeit range(log(16r, 2))
1 loops, best of 3: 2.19 s per loop
``` | Python 2 allows range(some\_float), but it's deprecated and doesn't work in Python 3.
The code sample doesn't give the output specified. But we can walk through it. First, `timeit` needs a full script; the import in the script calling `timeit` is not used:
```
>>> timeit('range(log(8,2))')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib64/python2.6/timeit.py", line 226, in timeit
return Timer(stmt, setup, timer).timeit(number)
File "/usr/lib64/python2.6/timeit.py", line 192, in timeit
timing = self.inner(it, self.timer)
File "<timeit-src>", line 6, in inner
NameError: global name 'log' is not defined
```
If you add the import to the script being timed, it includes the setup time:
```
>>> timeit('from math import log;range(log(8,2))')
3.7010221481323242
```
If you move the import to the setup, it's better, but timing a one-shot is notoriously inaccurate:
```
>>> timeit('range(log(8,2))',setup='from math import log')
1.9139349460601807
```
Finally, run it a bunch of times and you get a good number:
```
>>> timeit('range(log(8,2))',setup='from math import log',number=100)
0.00038290023803710938
``` | Why is creating a range from 0 to log(len(list), 2) so slow? | [
"",
"python",
"performance",
"sage",
""
] |
So this doesn't work with python's regex:
```
>>> re.sub('oof', 'bar\\', 'foooof')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python27\lib\re.py", line 151, in sub
return _compile(pattern, flags).sub(repl, string, count)
File "C:\Python27\lib\re.py", line 270, in _subx
template = _compile_repl(template, pattern)
File "C:\Python27\lib\re.py", line 257, in _compile_repl
raise error, v # invalid expression
error: bogus escape (end of line)
```
I thought my eyes were deceiving me, so I did this:
```
>>> re.sub('oof', "bar\x5c", 'foooof')
```
Got the same thing. I've searched and have confirmed people have this problem. So what's the problem with treating repl as just an ordinary string? Are there additional formatting options that can be placed in repl? | Yes, the replacement string is processed for escape characters. From [the docs](http://docs.python.org/2/library/re.html):
> repl can be a string or a function; if it is a string, any backslash
> escapes in it are processed. That is, \n is converted to a single
> newline character, \r is converted to a carriage return, and so forth.
> Unknown escapes such as \j are left alone. Backreferences, such as \6,
> are replaced with the substring matched by group 6 in the pattern. | If you don't want the string escapes to be processed, you can use a lambda and the string is not processed:
```
>>> re.sub('oof', lambda x: 'bar\\', 'foooof')
'foobar\\'
>>> s=re.sub('oof', lambda x: 'bar\\', 'foooof')
>>> print s
foobar\
```
But it will still be interpreted when printed:
```
>>> re.sub('oof', lambda x: 'bar\r\\', 'foooof')
'foobar\r\\'
>>> print re.sub('oof', lambda x: 'bar\r\\', 'foooof')
\oobar
```
Or, use a raw string:
```
>>> re.sub('oof', r'bar\\', 'foooof')
'foobar\\'
``` | re.sub tries to escape repl string? | [
"",
"python",
"regex",
""
] |
I'm having trouble generating a query (and I've looked online and SO and can't find it). I want to select the distinct combinations of two columns where only the second column is different.
Ideally, it would look like this:
```
SELECT DISTINCT colA, colB
FROM table
GROUP BY colA, colB
```
with the condition that only colB is different
For example, here's what I'd like it to do, in case what I said is not clear
```
colA | colB
-----------
abc |1
abc |2
abd |1
abf |1
xyz |1
asd |2
```
*SQL MAGIC*
```
calA | colB
-----------
abc |1
abc |2
```
It basically removes the rows where the `colB` is different.
Thanks! | A `JOIN` solution is:
```
SELECT DISTINCT a.*
FROM table a
JOIN table b
ON a.colA = b.colA
AND a.colB != b.colB
``` | Based on your sample output, it looks like you're only including `colA` values if they have multiple, differing `colB` values. You can do that like this:
```
SELECT colA, colB FROM table
WHERE colA IN (
SELECT colA
FROM table
GROUP BY colA
HAVING COUNT(DISTINCT colB) > 1)
``` | SQL: Select distinct from two columns where only column2 is different | [
"",
"mysql",
"sql",
""
] |
Can someone please explain why
```
select ~1
```
gives the result
```
-2
```
Perhaps there is a lot for me to learn about the actual bits of number types. What resource would you recommend? | You should read about Two's Complement ([http://en.wikipedia.org/wiki/Two's\_complement](http://en.wikipedia.org/wiki/Two%27s_complement))
+1 in binary is represented as `00000001`, the inverse of which is `11111110`, which is -2 in Two's Complement | Q: Can someone please explain why select ~1 gives the result -2?
A: For the same reason ~0 would give -1 :)
Here's a good article on "twos complement" arithmetic:
* <http://en.wikipedia.org/wiki/Twos_complement>
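The defining identity is easy to check directly (a Python sketch, added here for illustration):

```python
# In two's complement, flipping every bit of x yields -x - 1,
# so ~1 == -2 (and ~0 == -1).
for x in [0, 1, 5, 255, -3]:
    assert ~x == -x - 1

# Viewed as an 8-bit pattern: 00000001 inverts to 11111110.
print(format(~1 & 0xFF, '08b'))  # 11111110
```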
Most CPU architectures are two's complement (vs. one's complement). From the same article:
> Two's complement is the easiest to implement in hardware, which may be
> the ultimate reason for its widespread popularity[citation needed].
> Processors on the early mainframes often consisted of thousands of
> transistors – eliminating a significant number of transistors was a
> significant cost savings. The architects of the early integrated
> circuit-based CPUs (Intel 8080, etc.) chose to use two's complement
> math. As IC technology advanced, virtually all adopted two's
> complement technology. Intel, AMD, and Power Architecture chips are
> all two's complement. | SQL Bitwise not (~) | [
"",
"sql",
"bit-manipulation",
""
] |
**Disclaimer: I am new to programming and scripting in general so please excuse the lack of technical terms**
So I have two text file data sets that contain names listed:
```
First File | Second File
bob | bob
mark | mark
larry | bruce
tom | tom
```
I would like to run a script (preferably Python) that outputs the intersection lines in one text file and the different lines in another text file, e.g.:
*matches.txt*:
```
bob
mark
tom
```
*differences.txt*:
```
bruce
```
How would I accomplish this with Python? Or with a Unix command line, if it's easy enough? | ```
words1 = set(open("some1.txt").read().split())
words2 = set(open("some2.txt").read().split())
duplicates = words1.intersection(words2)
uniques = words1.difference(words2).union(words2.difference(words1))
print "Duplicates(%d):%s"%(len(duplicates),duplicates)
print "\nUniques(%d):%s"%(len(uniques),uniques)
```
something like that at least | sort | uniq is good, but comm might be even better. "man comm" for more information.
From the manual page:
```
EXAMPLES
comm -12 file1 file2
Print only lines present in both file1 and file2.
comm -3 file1 file2
Print lines in file1 not in file2, and vice versa.
```
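A Python-set version of the same report, writing the two files the question asks for (a sketch, not part of the original answer; file names are assumed, and note the question's sample `differences.txt` only lists `bruce`, so you may want `second - first` instead of the symmetric difference used here):

```python
# Sketch (not from the original answer): a comm-like report with sets.
def report(first_path, second_path):
    with open(first_path) as f1, open(second_path) as f2:
        first = set(f1.read().split())
        second = set(f2.read().split())
    with open("matches.txt", "w") as out:
        out.write("\n".join(sorted(first & second)) + "\n")
    with open("differences.txt", "w") as out:
        # symmetric difference: names present in exactly one file
        out.write("\n".join(sorted(first ^ second)) + "\n")
```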
You can also use the Python set type, but comm is easier. | Diff and intersection reporting between two text files | [
"",
"python",
"list",
"shell",
"compare",
""
] |
Here is an example table for my problem:
Database : MySql 5.1
Table Name : Education
```
id edutype university subjects yearofpass percentmarks
200 1 CBSE Maths,Science,English,Hindi 2002 78.00
200 2 CBSE Maths,Physics,Chem,Biology 2004 68.00
200 3 WBUT Computer Science Engineering 2008 87.00
100 1 ICSE Maths,Science,English,Hindi 2001 72.00
100 2 CBSE Maths,Physics,Chem,Biology 2003 65.00
100 3 NIT Electronics Engineering 2008 75.00
300 1 CBSE Maths,Science,English,Hindi 2003 65.00
300 2 CBSE Maths,Physics,Chem,Biology 2005 63.00
300 3 VIT Metallurgy Engineering 2009 79.00
```
Now I would like to run a SQL query which will output results in the following format:
```
id uvr1 sub1 yop1 pcm1 uvr2 sub2 yop2 pcm2 uvr3 sub3 yop3 pcm3
200 CBSE Maths,Science,English,Hindi 2002 78.00 CBSE Maths,Physics,Chem,Biology 2004 68.00 WBUT Computer Science Engineering 2008 87.00
100 ICSE Maths,Science,English,Hindi 2001 72.00 CBSE Maths,Physics,Chem,Biology 2003 65.00 NIT Electronics Engineering 2008 75.00
300 CBSE Maths,Science,English,Hindi 2003 65.00 CBSE Maths,Physics,Chem,Biology 2005 63.00 VIT Metallurgy Engineering 2009 79.00
```
Please share if you have any good method to achieve this, need your help.
Thanks in Advance | You can try this:
```
SELECT
e1.id,
e1.university as uvr1,
e1.subjects as sub1,
e1.yearofpass as yop1,
e1.percentmarks as pcm1,
e2.university as uvr2,
e2.subjects as sub2,
e2.yearofpass as yop2,
e2.percentmarks as pcm2,
e3.university as uvr3,
e3.subjects as sub3,
e3.yearofpass as yop3,
e3.percentmarks as pcm3
FROM
Education as e1
LEFT JOIN Education as e2
ON e1.id = e2.id
AND e2.edutype = e1.edutype+1
LEFT JOIN Education as e3
ON e1.id = e3.id
AND e3.edutype = e1.edutype+2
WHERE
e1.edutype = 1
```
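Not part of the original answer: the self-join is easy to sanity-check with Python's built-in sqlite3 and a few of the sample rows (column list shortened for brevity):

```python
import sqlite3

# Sketch: verify that the edutype self-join pivots rows into columns.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Education (id INT, edutype INT, university TEXT)")
conn.executemany("INSERT INTO Education VALUES (?, ?, ?)",
                 [(200, 1, 'CBSE'), (200, 2, 'CBSE'), (200, 3, 'WBUT')])
rows = conn.execute("""
    SELECT e1.id, e1.university, e2.university, e3.university
    FROM Education e1
    LEFT JOIN Education e2 ON e1.id = e2.id AND e2.edutype = e1.edutype + 1
    LEFT JOIN Education e3 ON e1.id = e3.id AND e3.edutype = e1.edutype + 2
    WHERE e1.edutype = 1
""").fetchall()
print(rows)  # [(200, 'CBSE', 'CBSE', 'WBUT')]
```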
But... this works only if you have no more than 3 universities with the same id; if you have more, you need to add extra joins.
Also, looking at your table structure, I think you should read more about [Database Normalization](https://en.wikipedia.org/wiki/Database_normalization); it can help you.
You can do something like this:
 | You would try CONCAT and SUBSTRING functions and nested SELECT.
Like:
```
SELECT SUBSTRING(CONCAT(subjects,
(
SELECT subjects FROM education WHERE university=CBSE and rownum=2
)),0,27)
FROM education WHERE university='CBSE' and rownum=1;
```
But this is a very bad and inconsistent solution. In your place I would try to reorganize the requirements for the select. Maybe it is not necessary to have the output you mentioned. | To retrieve values from multiple rows into a single row from a single table under multiple columns | [
"",
"mysql",
"sql",
"join",
""
] |
I have a `requirements.txt` file with a list of packages that are required for my virtual environment. Is it possible to find out whether all the packages mentioned in the file are present. If some packages are missing, how to find out which are the missing packages? | **UPDATE**:
An up-to-date and improved way to do this is via `distutils.text_file.TextFile`. See Acumenus' [answer](https://stackoverflow.com/a/45474387/240950) below for details.
**ORIGINAL**:
The pythonic way of doing it is via the `pkg_resources` [API](https://setuptools.readthedocs.io/en/latest/pkg_resources.html#api-reference). The requirements are written in a format understood by setuptools. E.g:
```
Werkzeug>=0.6.1
Flask
Django>=1.3
```
The example code:
```
import pkg_resources
from pkg_resources import DistributionNotFound, VersionConflict
# dependencies can be any iterable with strings,
# e.g. file line-by-line iterator
dependencies = [
'Werkzeug>=0.6.1',
'Flask>=0.9',
]
# here, if a dependency is not met, a DistributionNotFound or VersionConflict
# exception is thrown.
pkg_resources.require(dependencies)
``` | Based on the [answer by Zaur](https://stackoverflow.com/a/16298328/), assuming you indeed use a requirements file, you may want a unit test, perhaps in `tests/test_requirements.py`, that confirms the availability of packages.
Moreover, this approach uses a [subtest](https://docs.python.org/3/library/unittest.html#distinguishing-test-iterations-using-subtests) to independently confirm each requirement. This is useful so that all failures are documented. Without subtests, only a single failure is documented.
```
"""Test availability of required packages."""
import unittest
from pathlib import Path
import pkg_resources
_REQUIREMENTS_PATH = Path(__file__).parent.with_name("requirements.txt")
class TestRequirements(unittest.TestCase):
"""Test availability of required packages."""
def test_requirements(self):
"""Test that each required package is available."""
# Ref: https://stackoverflow.com/a/45474387/
requirements = pkg_resources.parse_requirements(_REQUIREMENTS_PATH.open())
for requirement in requirements:
requirement = str(requirement)
with self.subTest(requirement=requirement):
pkg_resources.require(requirement)
``` | Check if my Python has all required packages | [
"",
"python",
"pip",
"requirements.txt",
""
] |
I am trying to do a query for all cities (selecting only their name attribute) by their ID, and I want to be able to specify a range of ID's to select. My code is below:
```
def list_cities(start, stop)
cities = City.all(order: 'name ASC', id: start..stop, select: 'name')
cities.map { |city| "<li> #{city.name} </li>" }.join.html_safe
end
```
However, I get an error:
```
Unknown key: id
```
My implementation in my view is:
```
<%= list_cities(1,22) %>
```
This is a helper method to be put in all views, so I am not putting the logic in a particular controller.
My schema for this model is:
```
create_table "cities", :force => true do |t|
t.datetime "created_at", :null => false
t.datetime "updated_at", :null => false
t.string "neighborhoods"
t.string "name"
t.integer "neighborhood_id"
end
```
When I ran the method in my console, I got:
```
City Load (0.9ms) SELECT name FROM "cities" WHERE ("cities"."id" BETWEEN 1 AND 3) ORDER BY name ASC
=> ""
```
I know it's not an issue of having an empty database since it worked with the following version of the method:
```
def list_cities(start, stop)
cities = City.all(order: 'name ASC', limit: stop - start, select: 'name')
cities.map { |city| "<li> #{city.name} </li>" }.join.html_safe
end
```
However, this method returns only the first 'n' records and not a range like I want.
---
When trying a simpler query in the console:
```
1.9.3p385 :009 > City.where(:id => 1..4)
City Load (0.9ms) SELECT "cities".* FROM "cities" WHERE ("cities"."id" BETWEEN 1 AND 4)
=> []
```
I figured out why it was happening...
I did City.all in my console and realized that my cities started with id "946" because I had seeded multiple times and the ID's were not what I thought they were! The solution offered was correct! | ```
City.where(:id => start..stop).order('name ASC').select(:name)
``` | You can change your query to the following:
```
cities = City.all(order: 'name ASC', conditions: { id: start..stop }, select: 'name')
``` | Rails - how to select all records by ID range | [
"",
"sql",
"ruby-on-rails",
"ruby",
"activerecord",
"orm",
""
] |
```
SELECT COUNT(Type) from House where Type = 1
SELECT COUNT(Type) from House where Type = 2
SELECT COUNT(Type) from House where Type = 3
```
My question is: I want to join the above 3 statements to get 3 columns, e.g.:
ColumnType1: '50', ColumnType2: '60', columnType3: '45'
thanks | You can create the columns using an aggregate function with a `CASE` expression:
```
SELECT
count(case when Type = 1 then Type end) as type_1,
count(case when Type = 2 then Type end) as type_2,
count(case when Type = 3 then Type end) as type_3
from House
``` | You can use a `case` and add up if the `Type` matches
```
SELECT sum(case when Type = 1 then 1 else 0 end) as type_1,
sum(case when Type = 2 then 1 else 0 end) as type_2,
sum(case when Type = 3 then 1 else 0 end) as type_3
from House
``` | Join select statements to get columns in SQL | [
"",
"sql",
"t-sql",
""
] |
I have my 2 lists like this:
```
list1=['A','B']
list2=['A','C','D']
```
I want to compare the two lists to find the missing, additional, and unchanged entries. I am doing it like this:
```
set1=set(list1)
set2=set(list2)
MissingName=set1.difference(set2)
AdditionalName=set2.difference(set1)
```
This gives me missing and additional entries, How can I find No change, which should be A?? | You're probably looking for [set.intersection](http://docs.python.org/2/library/stdtypes.html#set.intersection). | You could use the [`Counter`](http://docs.python.org/2/library/collections.html#collections.Counter) class:
```
>>> list1=['A','B']
>>> list2=['A','C','D']
>>> from collections import Counter
>>> c1=Counter(list1)
>>> c2=Counter(list2)
>>> c1-c2 # missing items
Counter({'B': 1})
>>> c2-c1 # additional items
Counter({'C': 1, 'D': 1})
>>> c2&c1 # intersection
Counter({'A': 1})
```
---
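For comparison, the plain-`set` version of the same three buckets; `intersection` is the "no change" piece the question was missing (a sketch):

```python
list1 = ['A', 'B']
list2 = ['A', 'C', 'D']
set1, set2 = set(list1), set(list2)

print(sorted(set1 - set2))  # ['B']        missing
print(sorted(set2 - set1))  # ['C', 'D']   additional
print(sorted(set1 & set2))  # ['A']        no change
```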
The benefit of using the `Counter` class is that, unlike using `set`, it will work in cases where multiplicity matters, e.g.:
```
>>> Counter(['A', 'A', 'B']) - Counter(['A', 'B'])
Counter({'A': 1})
```
Also, you don't have to use such clunky method names :-) | List Comparison in Python | [
"",
"python",
""
] |
I have to plot several data sets in one plot. It is useful to be able to highlight one or more of the plots in order to compare them. For this, I toggle the line style of the plot between `":"` (background plot) and `"-"` (highlighted plot) whenever a line is selected directly, or, by clicking on the corresponding entry in the legend.
This works perfectly until I try to move the legend outside the axes using `bbox_to_anchor`. After this, a single mouse click on the legend line triggers 2 click events in succession, thereby canceling the toggling effect.
How do I place the legend outside the axes while preserving the correct behaviour for the pick\_event?
Simplified code that reproduces the problem (Clicking on a plot line toggles between 'highlighted' and 'not-highlighted', whereas clicking on a legend line briefly toggles the plot-line before going back to the previous state):
```
import pylab
import numpy
# Create data for plotting
t = numpy.linspace(0, 1.0, 100)
a = numpy.sin(2*numpy.pi*t)
# Set up figure
fig = pylab.figure()
ax = pylab.subplot(111)
# Plot figures
lines = []
for i in range(5):
line = ax.plot(t, (i+1)*a, linestyle=':', picker=5, label='line%d'%(i+1))
lines.append(line[0]) # Save plot lines
# Create legend
leg = ax.legend(bbox_to_anchor=(1.01, 1), loc=2) # Does not work as expected
# leg = ax.legend() # Works!!
# Get legend lines
leglines = leg.get_lines()
# Set event for legend lines
for line in leglines:
line.set_picker(5)
# Create a 2 way mapping between legend lines <-> plot lines
line2leg = dict(zip(lines+leglines, leglines+lines))
# Define event function
def onpick(event):
thisline = event.artist
if thisline.get_linestyle()==':':
print ": -> -" # For debugging
thisline.set_linestyle('-')
line2leg[thisline].set_linestyle('-')
else:
print "- -> :" # For debugging
thisline.set_linestyle(':')
line2leg[thisline].set_linestyle(':')
fig.canvas.draw()
# connect event function
fig.canvas.mpl_connect('pick_event', onpick)
pylab.show()
``` | If you monkey patch `Artist.pick` with the following:
```
matplotlib.artist.Artist.orig_pick = matplotlib.artist.Artist.pick
def nu_pick(self, me):
print self
matplotlib.artist.Artist.orig_pick(self, me)
matplotlib.artist.Artist.pick = nu_pick
```
You can look at how the artists recurse on a pick event. (Each `Artist` object calls `pick` on itself and then on all of its children.) For reasons I don't understand, there are two copies of each line in the drawing area of the legend (and it behaves differently when it is inside and outside).
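A more generic workaround is to remember which (mouse event, artist) pairs have already been handled and skip repeats; a matplotlib-free sketch of that guard (in `onpick` the key would be something like `(id(event.mouseevent), id(event.artist))`; names here are illustrative):

```python
# Sketch only: filter duplicate pick notifications from a single click.
class PickDeduper(object):
    def __init__(self):
        self._seen = set()

    def should_handle(self, mouseevent_id, artist_id):
        key = (mouseevent_id, artist_id)
        if key in self._seen:
            return False
        self._seen.add(key)
        return True

dedupe = PickDeduper()
print(dedupe.should_handle(1, 'line3'))  # True  -- first pick of this click
print(dedupe.should_handle(1, 'line3'))  # False -- duplicate from same click
print(dedupe.should_handle(2, 'line3'))  # True  -- a new click
```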
A way-hacky solution is to just count how many times the `leglines` have been hit, and only toggle on the odd ones:
```
import pylab
import numpy
# Create data for plotting
t = numpy.linspace(0, 1.0, 100)
a = numpy.sin(2*numpy.pi*t)
# Set up figure
fig = pylab.figure()
ax = pylab.subplot(111)
# Plot figures
lines = []
for i in range(5):
line = ax.plot(t, (i+1)*a, linestyle=':', picker=5, label='line%d'%(i+1))
lines.append(line[0]) # Save plot lines
# Create legend
leg = ax.legend(bbox_to_anchor=(1.01, 1), loc=2) # Does not work as expected
#leg = ax.legend() # Works!!
# Get legend lines
leglines = leg.get_lines()
# Set event for legend lines
for line in leglines:
line.set_picker(5)
# Create a 2 way mapping between legend lines <-> plot lines
line2leg = dict(zip(lines+leglines, leglines+lines))
count_dict = dict((l, 0) for l in lines )
# Define event function
def onpick(event):
thisline = event.artist
print event
print thisline
if thisline in lines:
print 'lines'
count_dict[thisline] = 0
elif thisline in leglines:
print 'leglines'
thisline = line2leg[thisline]
count_dict[thisline] += 1
print 'added'
if (count_dict[thisline] % 2) == 1:
print count_dict[thisline]
return
print 'tested'
if thisline.get_linestyle()==':':
print ": -> -" # For debugging
thisline.set_linestyle('-')
line2leg[thisline].set_linestyle('-')
else:
print "- -> :" # For debugging
thisline.set_linestyle(':')
line2leg[thisline].set_linestyle(':')
fig.canvas.draw()
# connect event function
fig.canvas.mpl_connect('pick_event', onpick)
pylab.show()
```
(I left all my debugging statements in.)
Pretty sure this is a bug, if you don't want to create an issue on github I will. | My diving into the legend artist has found that a legend line is in the children tree of a legend twice when the legend has the bbox\_to\_anchor set.
I asked about this [here](https://stackoverflow.com/questions/16266890/picking-on-matplotlib-legend-when-legend-is-outside-of-axes) with my solution where I watched for a NEW mouseevent and kept track of the artists that had already been handled by my callback.
I've asked for comments if anyone thinks there's a more elegant way to handle this "feature"
I'm not sure this is a bug. But it seems unique to legends, where the children lines are held in the .lines attribute and deep in the packing boxes data structure - the get\_children method finds both of these. Luckily they are the same object rather than a copy, so I could check whether a line had already been handled. | Double event registered on mouse-click if legend is outside axes | [
"",
"python",
"python-2.7",
"matplotlib",
""
] |
I have a file containing some data – *data.txt* (at a known location). I would like the django app to process this file **before** the app starts and to react to every change (**without a restart**). What is the best way to do it? | For startup you can write middleware that does what you want in **init** and afterwards raise django.core.exceptions.MiddlewareNotUsed from the **init**, so django will not use it for any request processing. [docs](https://docs.djangoproject.com/en/dev/topics/http/middleware/#marking-middleware-as-unused)
And middleware **init** will be called at startup, not at the first request.
As for reacting to file changes, you can use <https://github.com/gorakhargosh/watchdog> (an example of usage can be found [here](http://pythonhosted.org/watchdog/quickstart.html#a-simple-example)).
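If you would rather not add a dependency, a minimal mtime check (a sketch, not from the original answer) is often enough; call `poll()` from wherever you touch the data:

```python
import os

# Sketch: re-read the file only when its modification time changes.
class FileWatcher(object):
    def __init__(self, path, on_change):
        self.path = path
        self.on_change = on_change  # called with the new file contents
        self._mtime = None

    def poll(self):
        mtime = os.path.getmtime(self.path)
        if mtime != self._mtime:
            self._mtime = mtime
            with open(self.path) as f:
                self.on_change(f.read())
            return True
        return False
```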
So you can either start it somewhere in middleware too, or if it's only db updates you can create a separate script (or django management command) that will be run via supervisor or something like this and will monitor this file and update the db. | Maybe you could put an object in the settings which will look up the file on every access. ...
i.e.:
make a class that will load the file and reload it if it has been modified
```
class ExtraConfigWatcher(object):
def __init__(self, file):
self.file = file
self.cached = dict()
self.last_date_modified = None
def update_config(self):
"""
update the config by reloading the file
"""
if has_been_modified(self.file, self.last_date_modified):
# regenerate the config with te file.
self.cached = get_dict_with_file(self.file)
self.last_date_modified = time.time()
def __getitem__(self, *args, **kwargs):
self.update_config()
return self.cached.__getitem__(*args, **kwargs)
def __setitem__(self, *args, **kwargs):
raise NotImplemented("you can't set config into this")
```
in settings.py: initialize this object
```
EXTRA_CONFIG = ExtraConfigWatcher("path/to/the/file.dat")
```
in myapps/views.py: import settings and use EXTRA\_CONFIG
```
from django.conf import settings
def dosomthing(request):
if settings.EXTRA_CONFIG["the_data_from_the_file"] == "foo":
# bouhh
``` | Processing some file before starting app and reacting for every change | [
"",
"python",
"django",
""
] |
I am trying to count the number of times punctuation characters appear in a novel. For example, I want to find the occurrences of question marks and periods along with all the other non alphanumeric characters. Then I want to insert them into a csv file. I am not sure how to do the regex because I don't have that much experience with python. Can someone help me out?
```
texts=string.punctuation
counts=dict(Counter(w.lower() for w in re.findall(r"\w+", open(cwd+"/"+book).read())))
writer = csv.writer(open("author.csv", 'a'))
writer.writerow([counts.get(fieldname,0) for fieldname in texts])
``` | ```
In [1]: from string import punctuation
In [2]: from collections import Counter
In [3]: counts = Counter(open('novel.txt').read())
In [4]: punctuation_counts = {k:v for k, v in counts.iteritems() if k in punctuation}
``` | ```
from string import punctuation
from collections import Counter
with open('novel.txt') as f: # closes the file for you which is important!
c = Counter(c for line in f for c in line if c in punctuation)
```
This also avoids loading the whole novel into memory at once.
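For the regex angle the question mentions, a negated character class works too (a sketch; note that `\w` includes `_`, so underscores are not counted here):

```python
import re
from collections import Counter

# Sketch: count every character that is neither a word character
# nor whitespace.
text = "Hello, world! Is this -- really -- the end?"
counts = Counter(re.findall(r"[^\w\s]", text))
print(counts[","], counts["-"], counts["?"])  # 1 4 1
```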
Btw this is what `string.punctuation` looks like:
```
>>> punctuation
'!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~'
```
You may want to add or detract symbols from here depending on your needs.
Also `Counter` defines a `__missing__` which simply does `return 0`. So instead of down-initialising it into a dictionary and then calling `.get(x, 0)`, just leave it as a counter and access it like `c[x]`; if it doesn't exist, its count is 0. I'm not sure why everybody has the sudden urge to downgrade all their `Counter`s into `dict`s just because of the scary looking `Counter([...])` you see when you print one, when in fact `Counter`s are dictionaries too and deserve respect.
```
writer.writerow([counts.get(c, 0) for c in punctuation])
```
If you leave your counter you can just do this:
```
writer.writerow([counts[c] for c in punctuation])
```
and that was much easier. | Counting punctuation in text using Python and regex | [
"",
"python",
"regex",
"text-mining",
""
] |
I am writing pseudocode where I want the below two "for" loops to be repeated until all the keys in data\_changes and modem\_changes are the same, i.e. there should not be any keys that are present in data\_changes but not in modem\_changes, and vice versa. I should be able to write the Python implementation after this; can anyone provide inputs?
EDITED:
I am more interested in:
1. how to call func1 and func2 for keys that are present in one and not the other
2. repeating the for loops until the keys are the same in data\_changes and modem\_changes
```
data_changes = {
'253036': [''],
'313115': ['313113']
}
modem_changes = {'305403': [],
'311957': ['253036', '312591']
}
s1 = set(data_changes.keys())
s2 = set(modem_changes.keys())
value1 = s2.difference(s1)
print value1
value2 = s1.difference(s2)
print value2
def func1 (data_changes,key):
if key == '311957':
output = ''
if key == '305403':
output = ''
return output
def func2 (modem_changes,key):
if key == '313115':
output =''
if key == '253036':
output=''
return output
def method(d1, f1, d2, f2):
s1 = set(d1.keys())
s2 = set(d2.keys())
for k in s2.difference(s1):#set(['311957', '305403'])
f1(d1, k) # k in d2 not in d1.
for k in s1.difference(s2):#set(['313115', '253036'])
f2(d2, k) # k in d1 not in d2.
while(True):
method(data_changes, func1, modem_changes, func2)
value = set(data_changes.keys()) - set(modem_changes.keys())
print value
if value == set([]):
break;
EXPECTED OUTPUT:-
data_changes = {
'253036': [''],
'313115': ['313113']
'305403':['']
'311957':['']
}
modem_changes = {'305403': [],
'311957': ['253036', '312591']
'253036':[]
'313115':[]
}
``` | ```
In [8]: keys = set(data_changes.keys()) & set(modem_changes.keys())
In [9]: data_changes = {k:data_changes[k] for k in keys}
In [10]: modem_changes = {k:modem_changes[k] for k in keys}
In [11]: data_changes
Out[11]: {'253036': ['']}
In [12]: modem_changes
Out[12]: {'253036': ['311957', '312994', '312591']}
``` | Try the following to keep repeating the for loops:
```
while(True):
for key in data_changes:
if key not in modem_changes:
func1()
for key in modem_changes:
if key not in data_changes:
func2()
if(True): #logic to check wether to run for loops again
break;
``` | Control flow of for loops | [
"",
"python",
""
] |
I have two lists of tuples:
```
old = [('6.454', '11.274', '14')]
new = [(6.2845306, 11.30587, 13.3138)]
```
I'd want to compare each value from the same position (`6.454` against `6.2845306` and so on) and, if the value from the `old` tuple is greater than the value from the `new` tuple, print it.
The net effect should be then:
```
6.454, 14
```
I did it using a simple `if` statement:
```
if float(old[0][0]) > float(new[0][0]):
print old[0][0],
if float(old[0][1]) > float(new[0][1]):
print old[0][1],
if float(old[0][-1]) > float(new[0][-1]):
    print old[0][-1]
```
Since there are always 3- or 2-element tuples, it's not a big problem to use slicing here, but I'm looking for a more elegant solution, which is a list comprehension. Thank you for any help. | ```
[o for o,n in zip(old[0], new[0]) if float(o) > float(n)]
```
This should work? | So you want something like:
```
print [o for o,n in zip(old[0],new[0]) if float(o) > float(n)]
``` | Comparing values from different tuples, list comprehension | [
"",
"python",
"python-2.7",
"list-comprehension",
""
] |
There's a project I'm working on, kind of a distributed Database thing.
I started by creating the conceptual schema, and I've partitioned the tables such that I may need to perform joins between tables in MySQL and PostgreSQL.
I know I can write some sort of middleware that will break down the SQL queries and issue sub-queries targeting individual DBs, and then merge the results, but I'd like to do this using SQL if possible.
My search so far has yielded [this](http://dev.mysql.com/doc/refman/5.1/en/federated-storage-engine.html) (Federated storage engine for MySQL), but it seems to work only between MySQL databases.
If it's possible, I'd appreciate some pointers on what to look at, preferably in Python.
Thanks. | From the postgres side, you can try using a [foreign data wrapper](http://wiki.postgresql.org/wiki/Foreign_data_wrapper) such as `mysql_ftw` ([example](http://blogs.enterprisedb.com/2011/08/01/postgresql-9-1-meet-mysql/)). Queries with joins can then be run through various Postgres clients, such as psql, pgAdmin, [psycopg2](http://initd.org/psycopg/) (for Python), etc. | It might take some time to set up, but PrestoDB is a valid OpenSource solution to consider.
see <https://prestodb.io/>
You connect to Presto with JDBC and send it the SQL; it interprets the different connections, dispatches to the different sources, then does the final work on the Presto node before returning the result. | Performing a join across multiple heterogenous databases e.g. PostgreSQL and MySQL | [
"",
"mysql",
"sql",
"postgresql",
""
] |
I have two dictionaries. I want to combine these two into a list in the format keys-->values, keys-->values..., removing any None or [''].
Currently I have the below, where I can combine the dicts but not create the combined list... I have included the expected output.
Any inputs appreciated.
```
dict1={'313115': ['313113'], '311957': None}
dict2={'253036': [''], '305403': [], '12345': ['']}
dict = dict(dict1.items() + dict2.items())
print dict
{'313115': ['313113'], '311957': None, '253036': [''], '12345': [''], '305403': []}
EXPECTED OUTPUT:
['313115','313113','311957','253036','305403','12345']
``` | This should do it:
```
[i for k, v in (dict1.items() + dict2.items()) for i in [k] + (v or []) if i]
```
Walk the combined items of the two dicts, then walk the key plus the list of values, returning each item from the second walk that exists.
Returns `['313115', '313113', '311957', '253036', '12345', '305403']` on your example dicts -- the order is different because Python's dict iteration is unordered.
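Run against the example dicts, the comprehension gives exactly those items (a sketch; note that on Python 3 the two `items()` views must be wrapped in `list()` before they can be concatenated with `+`):

```python
dict1 = {'313115': ['313113'], '311957': None}
dict2 = {'253036': [''], '305403': [], '12345': ['']}

result = [i
          for k, v in (list(dict1.items()) + list(dict2.items()))
          for i in [k] + (v or [])
          if i]

print(sorted(result))  # ['12345', '253036', '305403', '311957', '313113', '313115']
```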
EDIT:
`dict.items()` can be expensive on large dicts -- it takes O(n) size, rather than iterating. If you use itertools, this is more efficient (and keeps the dicts you're working with in one place):
```
import itertools
[i
for k, v in itertools.chain.from_iterable(d.iteritems() for d in (dict1, dict2))
for i in [k] + (v or [])
if i]
```
Thanks to Martijn Pieters for the from\_iterable tip. | The following line gives you what you want in as efficient a manner as possible, albeit a little verbose:
```
from itertools import chain, ifilter
list(ifilter(None, dict1.viewkeys() | dict2.viewkeys() | set(chain(chain.from_iterable(ifilter(None, dict1.itervalues())), chain.from_iterable(ifilter(None, dict2.itervalues()))))))
```
You could break it down to:
```
values1 = chain.from_iterable(ifilter(None, dict1.itervalues()))
values2 = chain.from_iterable(ifilter(None, dict2.itervalues()))
output = list(ifilter(None, dict1.viewkeys() | dict2.viewkeys() | set(chain(values1, values2))))
```
`ifilter` with a `None` filter removes false-y values such as `None` and `''` from the iterable. the outer filter is not needed for your specific input but would remove `''` and `None` if used as keys as well. Duplicate values are removed.
Ordering in Python dictionaries is arbitrary so ordering doesn't match your sample but all expected values are there.
Demo:
```
>>> list(ifilter(None, dict1.viewkeys() | dict2.viewkeys() | set(chain(chain.from_iterable(ifilter(None, dict1.itervalues())), chain.from_iterable(ifilter(None, dict2.itervalues()))))))
['313115', '305403', '313113', '311957', '253036', '12345']
``` | Combining two dicts into a list | [
"",
"python",
""
] |
I have the following list:
```
Filedata:
[1, 0, 0, 0]
[2, 0, 0, 100]
[3, 0, 0, 200]
[4, 100, 0, 0]
[5, 100, 0, 100]
[6, 100, 0, 200]
...
```
where the first column is an ID, the second is the X coordinate, the third the Y coordinate and the fourth the Z coordinate.
I'd like to make two lists of the different X and Y coordinates in the original list:
```
X:
[0, 100, ...]
Y:
[0, 100, ...]
```
To do this I started writing some code, but in the process I realised there must be an easier way of doing it than multiple for loops and if statements. Any ideas? | ```
IDs,Xs,Ys,Zs = zip(*filedata)
positions = zip(Xs,Ys,Zs) # list of tuples of (x,y,z)
unique_Xs = []
[unique_Xs.append(val) for val in Xs if val not in unique_Xs]
unique_Ys = []
[unique_Ys.append(val) for val in Ys if val not in unique_Ys]
```
I think this would work, assuming `filedata` is a 2D list, although `filedata` is not a great variable name...
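Running the idea on the sample data gives a quick check (the side-effect list comprehension is rewritten here as a plain loop, which is the more idiomatic form):

```python
filedata = [[1, 0, 0, 0], [2, 0, 0, 100], [3, 0, 0, 200],
            [4, 100, 0, 0], [5, 100, 0, 100], [6, 100, 0, 200]]

IDs, Xs, Ys, Zs = zip(*filedata)  # unpack the columns into tuples

unique_Xs = []
for val in Xs:
    if val not in unique_Xs:  # keep first occurrence only
        unique_Xs.append(val)

print(Xs)         # (0, 0, 0, 100, 100, 100)
print(unique_Xs)  # [0, 100]
```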
I added a way to get unique points in each list ... not entirely sure thats what you want though | ```
filedata = [
[1, 0, 0, 0],
[2, 0, 0, 100],
[3, 0, 0, 200],
[4, 100, 0, 0],
[5, 100, 0, 100],
[6, 100, 0, 200],
]
IDs, Xs, Ys, Zs = zip(*filedata)
Xs, Ys, Zs = set(Xs), set(Ys), set(Zs)
print 'X:', Xs
print 'Y:', Ys
print 'Z:', Zs
``` | Make list of X and Y coordinates | [
"",
"python",
"list",
""
] |
I have a list of variables. Some are equal to True, some are not. I would like to know the easiest way to print only the ones that are 'True'. It doesn't matter what order they are printed in, but they need to be separated by a blank line in the output.
For example:
```
Var1 = "Cat"
Var2 = "Dog"
Var3 = "Sheep"
Var4 = ""
Var5 = ""
Var6 = "Horse"
```
And I need to somehow get the system to print:
```
Cat
Dog
Sheep
Horse
```
I was thinking something like:
```
print True in [Var1, Var2, Var3, Var4, Var5, Var6]
```
But obviously that doesn't work, as it doesn't seem to be correct usage (I was just guessing). Any ideas would be very much appreciated! | Although, as noted in a comment to your question, `VarX, VarY, ...` is a common anti-pattern and you should use a list instead, you could use:
```
print('\n'.join(filter(bool, [Var1, Var2, Var3, Var4, Var5, Var6])))
``` | To save building a string in memory use `filter` and the `sep=` argument for `print`:
```
items = (var1, var2, var3) # etc...
print (*filter(None, items), sep='\n')
``` | Print only variables that are True Python 3.3 | [
"",
"python",
"python-3.3",
""
] |
Is it possible to string together a table's column headers dynamically from information_schema, recursively?
So if I had 2 tables (or however many) in my database with 5 columns each,
could I get the query to find these tables by schema and then string all the table column headers into another table, ending up with something like:
```
table name ¦ string
Table A ¦ id,columnName1,columnName2 ,columnName3 ,columnName4 ,columnName5
Table b ¦ id,columnName1,columnName2 ,columnName3 ,columnName4 ,columnName5
```
(THESE ARE TWO ROWS BTW) | Using ***sys.objects*** is more effective than ***sys.tables***. Possibly this will be helpful for you -
```
SELECT
table_name = s.name + '.' + o.name
, [columns] = STUFF((
SELECT ', ' + c.name
FROM sys.columns c WITH (NOWAIT)
WHERE c.[object_id] = o.[object_id]
FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 2, '')
FROM (
SELECT
o.[object_id]
, o.name
, o.[schema_id]
FROM sys.objects o WITH (NOWAIT)
WHERE o.[type] = 'U'
AND o.is_ms_shipped = 0
) o
INNER JOIN sys.schemas s WITH (NOWAIT) ON o.[schema_id] = s.[schema_id]
ORDER BY
s.name
, o.name
``` | It can be simpler, no XML:
```
declare @columns varchar(1000)
set @columns=''
select @columns = @columns + ',' + name
from sys.columns
where object_id = object_id('MyTable')
``` | t-sql Table columns to String | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I want to work out the unit cost based on the units passed. So for example:
```
0-499 = unit cost of 0.049
500-999 = unit cost of 0.050
999+ = unit cost of 0.060
```
This is what I have tried so far:
```
unitCost = 0.049
if self.units <= 499:
unitCost = 0.049
elif self.units >= 500:
unitCost = 0.050
elif self.units >= 999:
unitCost = 0.060
else:
unitCost = 0.049
```
First of all, this does not work: if I have 1000 units it matches the wrong unit cost. Second, I'm not sure this is a good way to do it. Does Python have a between function?
I.e.
```
if x between(0-499):
``` | The problem is your if-statement. You need to do something like `units >= 500 and units <= 999`, which checks whether the value is between 500 and 999.
```
unitCost = 0.049
if units <= 499:
unitCost = 0.049
elif units >= 500 and units <= 999:
unitCost = 0.050
elif units > 999:
unitCost = 0.060
print unitCost
``` | `elif` is only evaluated if the previous `if` is False; your third condition can never be reached.
Reverse it:
```
if self.units >= 999:
unitCost = 0.060
elif self.units >= 500:
unitCost = 0.050
else:
unitCost = 0.049
``` | Python workout unit cost based on range | [
"",
"python",
"django",
""
] |
I have a query that displays the contents of the table by its ID, which can be sorted, but there is one row that I always want displayed last. Due to some circumstances I cannot simply change its sort ID to the last number.
A sample table would be:
```
sort_id | data |
0 | dog |
1 | cat |
2 | bear |
3 | wolf |
```
I want this to display as follows: dog, cat, wolf, bear, where bear is the constant value that should always be displayed last.
Is this possible? | Order by a case statement will work
```
Select *
FROM
table1
ORDER BY
case when data = 'bear' then 1 else 0 End,
sort_id
```
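The same ordering can be sanity-checked with Python's built-in `sqlite3` module (a sketch; the table from the question is rebuilt in memory):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (sort_id INTEGER, data TEXT)")
conn.executemany("INSERT INTO table1 VALUES (?, ?)",
                 [(0, "dog"), (1, "cat"), (2, "bear"), (3, "wolf")])

# the CASE expression pushes 'bear' to the end, everything else keeps sort_id order
rows = conn.execute("""
    SELECT data FROM table1
    ORDER BY CASE WHEN data = 'bear' THEN 1 ELSE 0 END, sort_id
""").fetchall()
conn.close()

print([r[0] for r in rows])  # ['dog', 'cat', 'wolf', 'bear']
```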
[DEMO](http://sqlfiddle.com/#!2/7ab94/2) | `SELECT * FROM table1 ORDER BY data='bear',sort_id` | SQL query where one item is always placed last | [
"",
"mysql",
"sql",
""
] |
I create created\_by and modified\_by fields in my abstract model.
Django Version: 1.4.3
Python Version: 2.7.3
According to the documentation I created my abstract class:
```
# base.py
class MyModel(models.Model):
created_by = models.ForeignKey('userdata.Profile',
related_name=
'%(app_label)s_%(class)s_created_by',
default=1)
modified_by = models.ForeignKey('userdata.Profile',
related_name=
'%(app_label)s_%(class)s_modified_by',
default=1)
class Meta:
abstract = True
```
I inherit this model in all my models (about 50 models).
syncdb - OK
south - OK
runserver - OK
I looked at my tables with SQLite Manager and everything looks fine.
I log into the admin site, open any of my tables, and I get:
```
Traceback:
File "I:\xxx\virtualenvs\fff-2\lib\site-packages\django-1.4.3-py2.7.egg\django\core\handlers\base.py" in get_response
111. response = callback(request, *callback_args, **callback_kwargs)
File "I:\xxx\virtualenvs\fff-2\lib\site-packages\django-1.4.3-py2.7.egg\django\contrib\admin\options.py" in wrapper
366. return self.admin_site.admin_view(view)(*args, **kwargs)
File "I:\xxx\virtualenvs\fff-2\lib\site-packages\django-1.4.3-py2.7.egg\django\utils\decorators.py" in _wrapped_view
91. response = view_func(request, *args, **kwargs)
File "I:\xxx\virtualenvs\fff-2\lib\site-packages\django-1.4.3-py2.7.egg\django\views\decorators\cache.py" in _wrapped_view_func
89. response = view_func(request, *args, **kwargs)
File "I:\xxx\virtualenvs\fff-2\lib\site-packages\django-1.4.3-py2.7.egg\django\contrib\admin\sites.py" in inner
196. return view(request, *args, **kwargs)
File "I:\xxx\virtualenvs\fff-2\lib\site-packages\django-1.4.3-py2.7.egg\django\utils\decorators.py" in _wrapper
25. return bound_func(*args, **kwargs)
File "I:\xxx\virtualenvs\fff-2\lib\site-packages\django-1.4.3-py2.7.egg\django\utils\decorators.py" in _wrapped_view
91. response = view_func(request, *args, **kwargs)
File "I:\xxx\virtualenvs\fff-2\lib\site-packages\django-1.4.3-py2.7.egg\django\utils\decorators.py" in bound_func
21. return func(self, *args2, **kwargs2)
File "I:\xxx\virtualenvs\fff-2\lib\site-packages\django-1.4.3-py2.7.egg\django\contrib\admin\options.py" in changelist_view
1233. 'selection_note': _('0 of %(cnt)s selected') % {'cnt': len(cl.result_list)},
File "I:\xxx\virtualenvs\fff-2\lib\site-packages\django-1.4.3-py2.7.egg\django\db\models\query.py" in __len__
85. self._result_cache = list(self.iterator())
File "I:\xxx\virtualenvs\fff-2\lib\site-packages\django-1.4.3-py2.7.egg\django\db\models\query.py" in iterator
291. for row in compiler.results_iter():
File "I:\xxx\virtualenvs\fff-2\lib\site-packages\django-1.4.3-py2.7.egg\django\db\models\sql\compiler.py" in results_iter
763. for rows in self.execute_sql(MULTI):
File "I:\xxx\virtualenvs\fff-2\lib\site-packages\django-1.4.3-py2.7.egg\django\db\models\sql\compiler.py" in execute_sql
818. cursor.execute(sql, params)
File "I:\xxx\virtualenvs\fff-2\lib\site-packages\django-1.4.3-py2.7.egg\django\db\backends\util.py" in execute
40. return self.cursor.execute(sql, params)
File "I:\xxx\virtualenvs\fff-2\lib\site-packages\django-1.4.3-py2.7.egg\django\db\backends\sqlite3\base.py" in execute
344. return Database.Cursor.execute(self, query, params)
Exception Type: DatabaseError at /admin/userdata/address/
Exception Value: at most 64 tables in a join
```
My question is: Can you see any mistakes in this example?
My question 2: Is SQLite limited to 64 joins, and on Postgres will it be fine?
EDIT:
Yes, this happens when I open the list view of any model in the admin panel.
I removed created_by and modified_by from all list_displays. That didn't help, either on the local machine with SQLite or on the server with Postgres.
All tests are correct. | I had to remove every single relation from list_display, not only created_by and modified_by, and then create new columns in the view, as in this example:
```
class MyModelAdmin(admin.ModelAdmin):
list_display = ('level', 'description', '_parent', '_created_by',
'_modified_by')
def _parent(self, obj):
return "%s" % obj.parent
_parent.short_description = 'Parent'
def _created_by(self, obj):
return "%s" % obj.created_by
_created_by.short_description = 'Created By'
def _modified_by(self, obj):
return "%s" % obj.modfied_by
_modified_by.short_description = 'Modified By'
``` | Indeed, SQLite limits the number of tables in a join to 64, as the [doc](http://www.sqlite.org/limits.html) points out. Most likely with another database engine you will be fine. | Django created_by and modifed_by admin site issue | [
"",
"python",
"django",
""
] |
I've heard and read everywhere that variables are "names, not storage" in Python, and that it's important to not think of them like storage, but I've not found a single example of why that would be important. So the question is really, why is it important to distinguish between variables being names and variables being storage? | ```
a = SomeObject()
b = a
```
If names were storage (as they are in C and C++, for example), then both `a` and `b` would literally contain an object each:
```
a +---------+
| value 1 |
+---------+
b +---------+
| value 2 |
+---------+
```
So for example, `a.x = ...` would operate on value 1, and value 2 is completely uninvolved. Note that languages which do that also provide values that allow manipulating one value through another (e.g. pointers). However, this is independent of this topic, and you can do similar things in Python's model instead.
In Python and similar languages, memory looks more like this:
```
a +-------------+
| reference 1 | ---------+
+-------------+ v
+---------+
| value 1 |
+---------+
b +-------------+ ^
| reference 2 | ---------+
+-------------+
```
Reference here is an imaginary token which refers (duh!) to objects. There can be any number of references to any object, the object isn't aware of any of them, and objects can still linger around if there are no references to it. Also note that variables aren't the only place where references can pop up -- lists contain references, dicts contain references, objects' attributes contain references, etc. It's a bit like a pointer in C, except that it's not a discernible value, let alone object, in the language (and therefore there's no equivalent to pointer arithmetic either).
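A small runnable illustration of references living in variables and containers alike (a sketch):

```python
a = [1, 2]
b = a               # a second reference to the same list object
assert a is b       # same object, not a copy

d = {"first": a}    # containers hold references too
a.append(3)         # one mutation...
assert b == [1, 2, 3]           # ...is visible through the other name
assert d["first"] == [1, 2, 3]  # ...and through the dict entry
```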
The most visible consequence is that variables can alias, so mutation of `value #1` through one is visible through the other:
```
a.something = 1
b.something = 2
assert a.something == 2
```
Re-assignment of a variable is *not* mutation of value 1 though, it just changes the reference. In other words, `a = ...` does not affect `b` and vice versa! | Unlike, say C, where variables "contain" the data.
In Python, the names are references to where the data is stored.
So, with lists(mutable)
```
>>> x = [10]
>>> y = x
>>> id(x) == id(y) # they refer to the same object
True
>>> y.append(1) # manipulate y
>>> x # x is manipulated
[10, 1]
>>> y # and so is y.
[10, 1]
```
And with strings(immutable)
```
>>> x = '10'
>>> y = x
>>> id(x) == id(y)
True
>>> y += '1' # manipulate y
>>> id(x) == id(y) # the ids are no longer equal
False
>>> x # x != y
'10'
>>> y
'101'
```
When you `del` a variable, you remove the reference to the object, and when an object has 0 references, it is garbage-collected. | Is there a difference between "variables being names" and "variables being storage"? | [
"",
"python",
"variables",
""
] |
I am trying to insert the records that come out of this query into a temp table. Any help or suggestions will be appreciated.
```
insert into #AddRec
select *
from stg di
right join
ED bp
on
bp.ID = di.ID
except
select *
from stg di
inner join
ED bp
on
bp.ID = di.ID
``` | [This](http://sqlfiddle.com/#!6/6cc36/3) may help; it simplifies your query a little.
```
create table #AddRec(id int) ;
insert into #addrec
select ed.id
from stg right join
ed on stg.id=ed.id
where stg.id is null;
select * from #Addrec
```
If you need more fields from the tables, add the definitions into the temp table and add them into the select line. | ```
;WITH Q AS
(
select *
from stg di
right join
ED bp
on
bp.ID = di.ID
except
select *
from stg di
inner join
ED bp
on
bp.ID = di.ID
)
INSERT INTO #AddRec(... list of fields ...)
SELECT (... list of fields ...) FROM Q
```
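The set logic itself (outer join minus inner join) can be checked quickly with an in-memory SQLite database (a sketch; the column name is assumed, and the `RIGHT JOIN` is written as a `LEFT JOIN` from the other side because older SQLite versions lack `RIGHT JOIN`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE stg (ID INTEGER);
    CREATE TABLE ED  (ID INTEGER);
    INSERT INTO stg VALUES (1), (2);
    INSERT INTO ED  VALUES (1), (2), (3);
""")

# rows the outer join produces that the inner join does not,
# i.e. ED rows with no matching stg row
rows = conn.execute("""
    SELECT ED.ID FROM ED LEFT JOIN stg ON stg.ID = ED.ID
    EXCEPT
    SELECT ED.ID FROM ED INNER JOIN stg ON stg.ID = ED.ID
""").fetchall()
conn.close()

print(rows)  # [(3,)]
```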
if you want to create the temp table from scratch, just replace the insert with:
```
SELECT (... list of fields ...)
INTO #AddRec
FROM Q
``` | Sql temp table with except | [
"",
"sql",
"join",
"except",
"temp-tables",
""
] |
I know what they do and I've seen many examples of both, but I haven't found a single example where I would have to use `classmethod` instead of replacing it with a `staticmethod`.
The most common example of `classmethod` I've seen is **for creating a new instance** of the class itself, like this (very simplified example, there's no use of the method atm. but you get the idea):
```
class Foo:
@classmethod
def create_new(cls):
return cls()
```
This would return a new instance of `Foo` when calling `foo = Foo.create_new()`.
Now why can't I just use this instead:
```
class Foo:
@staticmethod
def create_new():
return Foo()
```
It does the exact same, why should I ever use a `classmethod` over a `staticmethod`? | There's little difference in your example, but suppose you created a subclass of `Foo` and called the `create_new` method on the subclass...
```
class Bar(Foo):
pass
obj = Bar.create_new()
```
...then this base class would cause a new `Bar` object to be created...
```
class Foo:
@classmethod
def create_new(cls):
return cls()
```
...whereas this base class would cause a new `Foo` object to be created...
```
class Foo:
@staticmethod
def create_new():
return Foo()
```
...so the choice would depend on which behavior you want. | Yes, those two classes would do the same.
However, now imagine a subtype of that class:
```
class Bar (Foo):
pass
```
Now calling `Bar.create_new` does something different. For the static method, you get a `Foo`. For the class method, you get a `Bar`.
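Put both styles side by side and the difference is easy to check (a sketch):

```python
class Foo(object):
    @staticmethod
    def create_static():
        return Foo()   # hard-wired to Foo

    @classmethod
    def create_cls(cls):
        return cls()   # whatever class the call went through

class Bar(Foo):
    pass

assert type(Bar.create_static()) is Foo  # the subclass is ignored
assert type(Bar.create_cls()) is Bar     # the subclass is honoured
```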
So the important difference is that a class method gets the type passed as a parameter. | Why use classmethod instead of staticmethod? | [
"",
"python",
"class",
"python-3.x",
"static-methods",
"class-method",
""
] |
In Python I have a dictionary of settings which relate to a task class. In the parent constructor of these tasks I would like to store only the relevant settings, but to do this I need to access the child class from the parent class.
```
settings = {
SomeTask: { 'foo': 'bar' },
SomeOtherTask: { 'bar': 'foo' },
}
class SomeTask(BaseTask):
pass
class SomeOtherTask(BaseTask):
pass
class BaseTask:
def __init__(self, settings):
self.settings = settings[child_class]
```
In PHP I can do this by calling `get_class($this);` in the constructor (returns the child class name rather than the parent), does Python have something similar? | Just do this:
```
class BaseTask:
def __init__(self, settings):
self.settings = settings[self.__class__]
class SomeTask(BaseTask):
pass
class SomeOtherTask(BaseTask):
pass
```
When you initialise one of the child classes with the settings, they will do what you expect. | The closest Python equivalent to the PHP code...
```
$class_name = get_class($my_object)
```
...is...
```
class_name = my_object.__class__.__name__
```
...which should work for both old-style and new-style Python classes.
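For instance (a sketch):

```python
class BaseTask(object):
    def class_name(self):
        # resolves to the *actual* class of the instance, not BaseTask
        return self.__class__.__name__

class SomeTask(BaseTask):
    pass

assert BaseTask().class_name() == "BaseTask"
assert SomeTask().class_name() == "SomeTask"  # the child's name, from the parent's method
```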
Indeed, if you index the classes by their name, rather than using a reference to the class object, then you don't need to pass in the `settings` parameter (which I assume you only did to avoid a circular reference), and access the global `settings` variable directly...
```
settings = {
'SomeTask': { 'foo': 'bar' },
'SomeOtherTask': { 'bar': 'foo' },
}
class BaseTask:
def __init__(self):
self.settings = settings[self.__class__.__name__]
class SomeTask(BaseTask):
pass
class SomeOtherTask(BaseTask):
pass
``` | Access child class instance in parent | [
"",
"python",
"oop",
"python-2.7",
""
] |
This is a follow-up question to a previous one I posted earlier.
The problem is how to stop (terminate|quit|exit) a QThread from the GUI when using the recommended method of NOT subclassing QThread, but rather creating a QObject and then moving it to a QThread. Below is a working example. I can start the GUI and the QThread, and I can have the latter update the GUI. However, I cannot stop it. I tried several methods on the QThread (quit(), exit(), and even terminate()) to no avail.
Help greatly appreciated.
Here is the complete code:
```
import time, sys
from PyQt4.QtCore import *
from PyQt4.QtGui import *
class SimulRunner(QObject):
'Object managing the simulation'
stepIncreased = pyqtSignal(int, name = 'stepIncreased')
def __init__(self):
super(SimulRunner, self).__init__()
self._step = 0
self._isRunning = True
self._maxSteps = 20
def longRunning(self):
while self._step < self._maxSteps and self._isRunning == True:
self._step += 1
self.stepIncreased.emit(self._step)
time.sleep(0.1)
def stop(self):
self._isRunning = False
class SimulationUi(QDialog):
'PyQt interface'
def __init__(self):
super(SimulationUi, self).__init__()
self.goButton = QPushButton('Go')
self.stopButton = QPushButton('Stop')
self.currentStep = QSpinBox()
self.layout = QHBoxLayout()
self.layout.addWidget(self.goButton)
self.layout.addWidget(self.stopButton)
self.layout.addWidget(self.currentStep)
self.setLayout(self.layout)
self.simulRunner = SimulRunner()
self.simulThread = QThread()
self.simulRunner.moveToThread(self.simulThread)
self.simulRunner.stepIncreased.connect(self.currentStep.setValue)
        self.stopButton.clicked.connect(self.simulThread.quit) # also tried exit() and terminate()
# also tried the following (didn't work)
# self.stopButton.clicked.connect(self.simulRunner.stop)
self.goButton.clicked.connect(self.simulThread.start)
self.simulThread.started.connect(self.simulRunner.longRunning)
        self.simulRunner.stepIncreased.connect(self.currentStep.setValue)
if __name__ == '__main__':
app = QApplication(sys.argv)
simul = SimulationUi()
simul.show()
sys.exit(app.exec_())
``` | I found out that my original question was actually two questions in one: in order to stop a secondary thread from the main one, you need two things:
1. Be able to communicate from the main thread *down* to the secondary thread
2. Send the proper signal to stop the thread
I haven't been able to solve (2), but I figured out how to solve (1), which gave me a workaround to my original problem. Instead of stopping the *thread*, I can stop the thread's *processing* (the `longRunning()` method)
The problem is that a secondary thread can only respond to signals if it runs its own event loop. A regular QThread (which is what my code used) does not. It is easy enough, though, to subclass QThread to that effect:
```
class MyThread(QThread):
def run(self):
self.exec_()
```
and used `self.simulThread = MyThread()` in my code instead of the original `self.simulThread = Qthread()`.
This ensures that the secondary thread runs an event loop. That was not enough, though. The `longRunning()` method needs to have a chance to actually process the event coming down from the main thread. With the help of [this SO answer](https://stackoverflow.com/questions/7101610/pyqt4-interrupt-qthread-exec-when-gui-is-closed/7124843#7124843) I figured out that the simple addition of a `QApplication.processEvents()` call in the `longRunning()` method gave the secondary thread such a chance. I can now stop the *processing* carried out in the secondary thread, even though I haven't figured out how to stop the thread itself.
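The cooperative-stop pattern itself is not Qt-specific: stripped of signals and event loops it is just a shared flag that the worker checks on every iteration, which can be sketched with the standard library (an illustration, not PyQt code):

```python
import threading
import time

class Worker(object):
    def __init__(self):
        self._is_running = True
        self.steps = 0

    def long_running(self):
        # keep working until done or until someone clears the flag
        # (would take ~10 s if never stopped)
        while self.steps < 1000 and self._is_running:
            self.steps += 1
            time.sleep(0.01)

    def stop(self):
        self._is_running = False

w = Worker()
t = threading.Thread(target=w.long_running)
t.start()
time.sleep(0.05)   # let it do a few steps
w.stop()           # ask it to finish
t.join()           # returns quickly because the flag is checked each loop

print(w.steps)     # a small number, far below 1000
```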
To wrap up: my longRunning method now looks like this:
```
def longRunning(self):
while self._step < self._maxSteps and self._isRunning == True:
self._step += 1
self.stepIncreased.emit(self._step)
time.sleep(0.1)
QApplication.processEvents()
```
and my GUI thread has these three lines that do the job (in addition to the QThread subclass listed above):
```
self.simulThread = MyThread()
self.simulRunner.moveToThread(self.simulThread)
self.stopButton.clicked.connect(self.simulRunner.stop)
```
Comments are welcome! | I know it's long ago, but I just stumbled over the same problem.
I have also been searching for an appropriate way to do this. Finally it was easy: when exiting the application, the task needs to be stopped and the thread needs to be stopped by calling its quit method (see the stop_thread method at the bottom). And you need to wait for the thread to finish; otherwise you will get a *QThread: Destroyed while thread is still running* message at exit.
(I also changed my code to use PySide.)
```
import time, sys
from PySide.QtCore import *
from PySide.QtGui import *
class Worker(QObject):
'Object managing the simulation'
stepIncreased = Signal(int)
def __init__(self):
super(Worker, self).__init__()
self._step = 0
self._isRunning = True
self._maxSteps = 20
def task(self):
if not self._isRunning:
self._isRunning = True
self._step = 0
while self._step < self._maxSteps and self._isRunning == True:
self._step += 1
self.stepIncreased.emit(self._step)
time.sleep(0.1)
print "finished..."
def stop(self):
self._isRunning = False
class SimulationUi(QDialog):
def __init__(self):
super(SimulationUi, self).__init__()
self.btnStart = QPushButton('Start')
self.btnStop = QPushButton('Stop')
self.currentStep = QSpinBox()
self.layout = QHBoxLayout()
self.layout.addWidget(self.btnStart)
self.layout.addWidget(self.btnStop)
self.layout.addWidget(self.currentStep)
self.setLayout(self.layout)
self.thread = QThread()
self.thread.start()
self.worker = Worker()
self.worker.moveToThread(self.thread)
self.worker.stepIncreased.connect(self.currentStep.setValue)
self.btnStop.clicked.connect(lambda: self.worker.stop())
self.btnStart.clicked.connect(self.worker.task)
self.finished.connect(self.stop_thread)
def stop_thread(self):
self.worker.stop()
self.thread.quit()
self.thread.wait()
if __name__ == '__main__':
app = QApplication(sys.argv)
simul = SimulationUi()
simul.show()
sys.exit(app.exec_())
``` | How to stop a QThread from the GUI | [
"",
"python",
"pyqt",
"pyqt4",
"qthread",
""
] |
I am trying to import a CSV file, using a form to upload the file from the client system. After I have the file, I'll take parts of it and populate a model in my app. However, I'm getting an "iterator should return strings, not bytes" error when I go to iterate over the lines in the uploaded file. I've spent hours trying different things and reading everything I could find on this but can't seem to resolve it (note: I'm relatively new to Django, running 1.5, and to Python, running 3.3). The error is displayed when executing the line "for clubs in club\_list" in tools\_clubs\_import():
**The following is the corrected views.py that works, based on answer marked below:**
```
import csv
from io import TextIOWrapper
from django.shortcuts import render
from django.http import HttpResponseRedirect
from django.core.urlresolvers import reverse
from rank.forms import ClubImportForm
def tools_clubs_import(request):
if request.method == 'POST':
form = ClubImportForm(request.POST, request.FILES)
if form.is_valid():
# the following 4 lines dumps request.META to a local file
# I saw a lot of questions about this so thought I'd post it too
log = open("/home/joel/meta.txt", "w")
for k, v in request.META.items():
print ("%s: %s\n" % (k, request.META[k]), file=log)
log.close()
# I found I didn't need errors='replace', your mileage may vary
f = TextIOWrapper(request.FILES['filename'].file,
encoding='ASCII')
club_list = csv.DictReader(f)
for club in club_list:
# do something with each club dictionary entry
pass
return HttpResponseRedirect(reverse('rank.views.tools_clubs_import_show'))
else:
form = ClubImportForm()
context = {'form': form, 'active_menu_item': 4,}
return render(request, 'rank/tools_clubs_import.html', context)
def tools_clubs_import_show(request):
return render(request, 'rank/tools_clubs_import_show.html')
```
The following is the original version of what I submitted (the HTML that generates the form is included at the bottom of this code list):
```
views.py
--------
import csv
from django.shortcuts import render
from django.http import HttpResponseRedirect
from rank.forms import ClubImportForm
def tools(request):
context = {'active_menu_item': 4,}
return render(request, 'rank/tools.html', context)
def tools_clubs(request):
context = {'active_menu_item': 4,}
return render(request, 'rank/tools_clubs.html', context)
def tools_clubs_import(request):
if request.method == 'POST':
form = ClubImportForm(request.POST, request.FILES)
if form.is_valid():
f = request.FILES['filename']
club_list = csv.DictReader(f)
for club in club_list:
# error occurs before anything here is executed
# process here... not included for brevity
return HttpResponseRedirect(reverse('rank.views.tools_clubs_import_show'))
else:
form = ClubImportForm()
context = {'form': form, 'active_menu_item': 4,}
return render(request, 'rank/tools_clubs_import.html', context)
def tools_clubs_import_show(request):
return render(request, 'rank/tools_clubs_import_show.html')
forms.py
--------
from django import forms
class ClubImportForm(forms.Form):
filename = forms.FileField(label='Select a CSV to import:',)
urls.py
-------
from django.conf.urls import patterns, url
from rank import views
urlpatterns = patterns('',
url(r'^tools/$', views.tools, name='rank-tools'),
url(r'^tools/clubs/$', views.tools_clubs, name='rank-tools_clubs'),
url(r'^tools/clubs/import$',
views.tools_clubs_import,
name='rank-tools_clubs_import'),
url(r'^tools/clubs/import/show$',
views.tools_clubs_import_show,
name='rank-tools_clubs_import_show'),
)
tools_clubs_import.html
-----------------------
{% extends "rank/base.html" %}
{% block title %}Tools/Club/Import{% endblock %}
{% block center_col %}
<form enctype="multipart/form-data" method="post" action="{% url 'rank-tools_clubs_import' %}">{% csrf_token %}
{{ form.as_p }}
<input type="submit" value="Submit" />
</form>
{% endblock %}
```
Exception Value:
iterator should return strings, not bytes (did you open the file in text mode?)
Exception Location: /usr/lib/python3.3/csv.py in fieldnames, line 96 | `request.FILES` gives you *binary* files, but the `csv` module wants to have text-mode files instead.
You need to wrap the file in a [`io.TextIOWrapper()` instance](http://docs.python.org/3/library/io.html#io.TextIOWrapper), and you need to figure out the encoding:
```
from io import TextIOWrapper
f = TextIOWrapper(request.FILES['filename'].file, encoding=request.encoding)
```
It'd probably be better if you took the `charset` parameter from the `Content-Type` header if provided; that is what the client tells you the character set is.
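As an editorial aside (not part of the original answer), pulling that `charset` parameter out of a `Content-Type` value can be done with a small helper; the header strings in the docstring are made-up examples, and the `'utf-8'` default is an assumption:

```
def charset_from_content_type(content_type, default='utf-8'):
    """Extract the charset parameter from a Content-Type header value,
    e.g. 'text/csv; charset=iso-8859-1' -> 'iso-8859-1'."""
    for part in content_type.split(';')[1:]:
        key, _, value = part.strip().partition('=')
        if key.lower() == 'charset' and value:
            return value.strip('"\'')
    return default
```

You could then pass the result as the `encoding` argument to `TextIOWrapper`; Django's `UploadedFile` objects expose the client-supplied type as `content_type`.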
You cannot work around needing to know the correct encoding for the file data; you can force interpretation as ASCII, for example, by providing an `errors` keyword as well (setting it to 'replace' or 'ignore'), but that does lead to data loss:
```
f = TextIOWrapper(request.FILES['filename'].file, encoding='ascii', errors='replace')
```
Using TextIOWrapper will only work when using Django 1.11 and later (as [this changeset added the required support](https://github.com/django/django/commit/4f474607de9b470f977a734bdd47590ab202e778)). In earlier versions, you can monkey-patch the support in after the fact:
```
from django.core.files.utils import FileProxyMixin
if not hasattr(FileProxyMixin, 'readable'):
# Pre-Django 1.11, add io.IOBase support, see
# https://github.com/django/django/commit/4f474607de9b470f977a734bdd47590ab202e778
def readable(self):
if self.closed:
return False
if hasattr(self.file, 'readable'):
return self.file.readable()
return True
def writable(self):
if self.closed:
return False
if hasattr(self.file, 'writable'):
return self.file.writable()
return 'w' in getattr(self.file, 'mode', '')
def seekable(self):
if self.closed:
return False
if hasattr(self.file, 'seekable'):
return self.file.seekable()
return True
FileProxyMixin.closed = property(
lambda self: not self.file or self.file.closed)
FileProxyMixin.readable = readable
FileProxyMixin.writable = writable
FileProxyMixin.seekable = seekable
``` | In python 3, I used:
```
import csv
from io import StringIO
csvf = StringIO(xls_file.read().decode())
reader = csv.reader(csvf, delimiter=',')
```
xls\_file being the file got from the POST form. I hope it helps. | How to resolve "iterator should return strings, not bytes" | [
"",
"python",
"django",
"python-3.x",
"iterator",
""
] |
I'm trying to write a small function for another script that pulls the generated text from "<http://subfusion.net/cgi-bin/quote.pl?quote=humorists&number=1>"
Essentially, I need it to pull whatever sentence is between < br> tags.
I've been trying my darndest using regular expressions, but I never really could get the hang of those.
All of the searching I did turned up things for pulling either specific sentences, or single words.
This however needs to pull whatever arbitrary string is between < br> tags.
Can anyone help me out? Thanks.
*Best I could come up with:*
```
html = urlopen("http://subfusion.net/cgi-bin/quote.pl?quote=humorists&number=1").read()
output = re.findall('\<br>.*\<br>', html)
```
EDIT: Ended up going with a different approach altogether, simply splitting the HTML into a list separated by < br> and pulling [3], which made for cleaner code and fewer string operations. Keeping this question up for future reference and other people with similar questions. | This is uh, 7 years later, but for future reference:
Use the beautifulsoup library for these kinds of purposes, as suggested by Floris in the comments. | You need to use the `DOTALL` flag, as there are newlines in the expression that you need to match. I would use
```
re.findall('<br>(.*?)<br>', html, re.S)
```
However, this will return multiple results, as there are a bunch of `<br><br>` on that page. You may want to use the more specific:
```
re.findall('<hr><br>(.*?)<br><hr>', html, re.S)
``` | Finding a random sentence in HTML with python regex | [
"",
"python",
"html",
"regex",
""
] |
I want to print "(" in Python
> print "(" + var + ")"
but it says:
> TypeError: coercing to Unicode: need string or buffer, NoneType found
Can somebody help me? That can't be too hard... -.- | Using [`string formatting`](http://docs.python.org/3.3/library/string.html#format-string-syntax):
```
foo = 'Hello'
print('({})'.format(foo))
``` | maybe a simple `print "(" + str(var) + ")"`? | How to print brackets in python | [
"",
"python",
"printing",
"brackets",
""
] |
I am relatively new to programming and I'm trying to generate a list of numbers using this formula.
If "i" is the index of the list, the formula would be list[i] = list[i-2] + list[i-3]. The first few numbers would look like this if you started with 1,1,1.
1,1,1,2,2,3,4,5,7,9,12,16,21,28,37,49,65,86.etc. To get each number(after the 1,1,1) you skip back one number, then take the sums of the previous two numbers e.g. 49 was from the sum of 21 and 28.
The process of finding the numbers is similar to Fibonacci but these numbers are worlds different.
My code looks like this:
```
start = [1,1,1] #the list must start with three 1's
list1 = start #list1 starts with 'start'
newList = []
ammountOfNumbers = int(raw_input("Enter the ammount of numbers to be generated(n >= 3): "))# to dictate length of generated list
def generateList(newList, aList, ammountOfNumbers, *a):
while len(aList) <= ammountOfNumbers: #while length of list is less than or = size of list you want generated
for x in range((ammountOfNumbers-1)):
newList.append(x) #this puts value of x in index '0'
newList[x] = aList[len(aList)-1] + aList[len(aList)-2] # generate next number
aList += newList #add the next generated number to the list
x+=1
print
#print "Inside: ", aList #test
#print "Length inside: ",len(aList) #test
print
return aList
final = generateList(newList, list1, ammountOfNumbers) # equal to the value of list1
print"Final List: " , final
print
print"Length Outside: ", len(final) #wrong value
```
It obviously doesn't work correctly right now. I'm hoping to be able to generate a list of around 500 of these numbers. Does anyone have any suggestions?
Thanks! | I would use a generator:
```
from collections import deque
def generate_list():
que = deque([1,1,1],3)
yield 1
yield 1
yield 1
while True:
out = que[-3]+que[-2]
yield out
que.append(out)
```
This will generate an *infinite* series according to that recurrence relation. To truncate it, I would use `itertools.islice`. Alternatively, you could pass in a number to be the maximal number that you want and only loop the appropriate number of times.
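For example, you could take just the first terms of the infinite generator with `islice` (the generator is restated here so the snippet stands alone):

```
from collections import deque
from itertools import islice

def generate_list():
    que = deque([1, 1, 1], 3)    # keep only the last three terms
    yield 1
    yield 1
    yield 1
    while True:
        out = que[-3] + que[-2]  # skip back one, add the two before it
        yield out
        que.append(out)

first_ten = list(islice(generate_list(), 10))
# matches the sequence from the question: 1, 1, 1, 2, 2, 3, 4, 5, 7, 9
```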
---
To create a general recurrence relation function I'd do something like:
```
def recurrence_relation(seed,func):
seed = list(seed)
que = deque(seed,len(seed))
    for x in seed:
        yield x
while True:
out = func(que)
yield out
        que.append(out)
```
To use this for your problem, it would look like:
```
from itertools import islice

series = recurrence_relation([1,1,1],lambda x:x[-3] + x[-2])
for item in islice(series,0,500):
#do something
```
I think this combines the nice "seeding" ability proposed by Blender with a very generally scalable formalism that using a `deque` allows as I originally proposed. | I would use a generator:
```
def sequence(start):
a, b, c = start
yield a
yield b
while True:
yield c
a, b, c = b, c, a + b
```
Since the generator will keep going forever, you will have to stop it somehow:
```
for i, n in enumerate(sequence([1, 1, 1])):
if i > 100:
break
print n
```
Or with `itertools`:
```
from itertools import islice
for n in islice(sequence([1, 1, 1]), 100):
print n
``` | Formula to Generate List of Specific Numbers | [
"",
"python",
"list",
"loops",
"sequence",
"fibonacci",
""
] |
I have been trying to solve this for some time; I tried searching the internet and referring to some books, yet have not been able to find a solution.
There is one solution proposed here but not sure if there is any other simpler approach.
Refer: [Comparing Python dicts with floating point values included](https://stackoverflow.com/questions/13749218/comparing-python-dicts-with-floating-point-values-included)
Hope you can give some pointers.
**Background:**
I have `dict_A`, which comes with a {key:{key:{key:[value]}}} relationship. This `dict_A` will go through an iterative process to optimise its value based on several constraints and an optimization objective. The optimizing process stops only when the final optimized dict, i.e. `dict_B2`, is equal to the dict optimized one cycle before, i.e. `dict_B1`. This gives the impression the dict cannot be optimized further, and this is used to break the iterative cycle.
**Question:**
As the dict values contain floating point numbers, some stored values change slightly, perhaps because the dictionary stores the values in binary floating point format. Refer to the example below and note the variation of the first float value in the dictionary.
```
dict_B1 = {0: {36: {3: [-1], 12: [0.074506333542951425]}}, 1: {36: {2: [-1], 16: [0.048116666666666676], 17: [-1]}}, 2: {}, 3: {36: {5: [-1], 6: [-1], 15: [0.061150932060349471]}}}
dict_B2 = {0: {36: {3: [-1], 12: [0.074506333542951439]}}, 1: {36: {2: [-1], 16: [0.048116666666666676], 17: [-1]}}, 2: {}, 3: {36: {5: [-1], 6: [-1], 15: [0.061150932060349471]}}}
```
If I use the below, the iterative process goes into an infinite loop and does not break,
```
if (dict_B1==dict_B2):
Exit
```
or,
```
if (cmp(dict_B1,dict_B2)):
Exit
```
Is there any other way to compare the dictionaries, say, comparing only 15 digits of floating point precision out of the 18 digits stored?
I tried storing lower-precision float values in the dictionary. The problem still persists.
Hope you can assist to point me to the right direction.
**Update 1: Jakub's Suggestion**
Jakub's suggestion is good. I can create two intermediary lists i.e List\_B1 and List\_B2 to store the floats, these will be used for comparison and as a flag to decide when to break the iterative process.
The below is the code used to test the case. The second item in List\_B2 is purposely altered so the value is way above the precision threshold.
```
from itertools import izip

def is_equal(floats_a, floats_b, precision=1e-15):
return all((abs(a-b) < precision) for a, b in izip(floats_a, floats_b))
List_B1=[0.074506333542951425,0.048116666666666676,0.061150932060349471]
List_B2=[0.074506333542951439,9.048116666666666676,0.061150932060349471]
print "is_equal(List_B1,List_B2):",is_equal(List_B1,List_B2)
for a, b in izip(List_B1, List_B2):
print a,b, (abs(a-b) < 1e-15)
```
Results:
```
is_equal(List_B1,List_B2): True
0.074506333543 0.074506333543 True
0.0481166666667 9.04811666667 False
0.0611509320603 0.0611509320603 True
```
Strangely, the `is_equal` function always returns `True`, which is not correct, but when I dissected the code, the individual comparisons work correctly. Perhaps `return all` is doing an `OR` rather than an `AND`. Still troubleshooting this.
Do share if you have any hints. Will continue to work to solve this. Thanks to Jakub and Julien for all your guidance so far.
rgds
Saravanan K | When comparing floating points, always keep in mind that floats are not of infinite precision and accumulate errors. What you are really interested in is whether two floats are close enough, not whether they are exactly equal.
If you want to test if two lists of floats are equal, I would do
```
from itertools import izip

def is_equal(floats_a, floats_b, precision=1e-15):
    return all(abs(a - b) < precision for a, b in izip(floats_a, floats_b))
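# --- Editorial sketch, not part of the original answer: the same tolerance
# test applied recursively to the question's {key:{key:{key:[value]}}}
# structure (zip() works on both Python 2 and 3 here).
def is_equal_nested(a, b, precision=1e-15):
    if isinstance(a, dict) and isinstance(b, dict):
        return set(a) == set(b) and all(
            is_equal_nested(a[k], b[k], precision) for k in a)
    if isinstance(a, list) and isinstance(b, list):
        return len(a) == len(b) and all(
            is_equal_nested(x, y, precision) for x, y in zip(a, b))
    if isinstance(a, float) or isinstance(b, float):
        return abs(a - b) < precision
    return a == b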
``` | As you explained, your code checks that the computed solution is equal to the result of the previous step. The problem might be that your algorithm oscillates between two (or more) solutions which are really close to each other.
So I think you can either:
* Store several previous results, to check whether you have entered a loop. The problem would be to know how many previous solutions you need to store.
* Or, as suggested by Jakub and by the post you pointed out, you can check if dict\_B1 is within a certain range of dict\_B2.
The second solution is painful in your case because your structure is way too complex. If you don't want to break all your code, you can replace `[value]` in `{key:{key:{key:[value]}}}` by a custom list-of-float class which redefines the `__eq__()` operator to check equality within a certain range. | Compare Two Dictionaries - Floating Point | [
"",
"python",
""
] |
Is there any real difference between using 'not like' without any % signs and using the not-equal operator <> in Microsoft SQL? Example:
```
if exists (select * from table_name where column_name not like @myvariable)
```
or
```
if exists (select * from table_name where column_name not like 'myvalue')
```
versus
```
if exists (select * from table_name where column_name <> @myvariable)
```
or
```
if exists (select * from table_name where column_name <> 'myvalue')
```
I've noticed I have a habit of using not like (it's faster to type and feels more intuitive when reading my own code) and I was wondering if there's any chance that it will ever cause behavior that is different from a not-equal. I read in other questions that 'like' is slower than 'equals' but I'm more concerned about the result of the comparison here.
I am nearly always using the varchar data type when doing comparisons. | <> will not evaluate wild cards
`<> '%'` is a comparison against a literal `%`.
There are also more wildcards than `%` and `_`.
These two queries are not the same:
```
SELECT TOP 1000 [ID],[word]
FROM [FTSwordDef]
where [word] like '[a-z]a'
SELECT TOP 1000 [ID],[word]
FROM [FTSwordDef]
where [word] = '[a-z]a'
```
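(Editorial aside, not from the original answer: the same distinction can be demonstrated from Python with the built-in `sqlite3` module; note SQLite only supports the `%` and `_` wildcards, not the `[a-z]` character classes used above, which are SQL Server specific.)

```
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE words (word TEXT)")
con.executemany("INSERT INTO words VALUES (?)", [('cat',), ('c%',), ('cut',)])

like_rows = con.execute("SELECT word FROM words WHERE word LIKE 'c%'").fetchall()
eq_rows = con.execute("SELECT word FROM words WHERE word = 'c%'").fetchall()
# LIKE treats % as a wildcard and matches all three words;
# = compares against the literal string 'c%' and matches only one row.
```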
Use <> when you have a literal match
Use LIKE when you want to use "wild cards"
The expressions are not evaluated the same way, and it is sloppy to just use LIKE exclusively with the assumption that they are interchangeable. | In the examples you give, there is not a difference in the end result of the query. However, I'd say this is probably a bad idea. You're opening yourself up to bugs related to reserved characters that could be tricky to track down (LIKE uses % and \_ as reserved characters for pattern matching). If you're hard-coding the WHERE clause, that might not be a problem, but you've got variables in there too. Your application would need to check that the variable doesn't contain % or \_ in order to avoid bugs and security holes.
Also, LIKE is "marked" syntax--you don't typically use it unless you have to do pattern matching. Someone else reading your code is going to spend time trying to figure out why you used LIKE when you actually meant <>. Considering that the semantic meaning of what you're trying to do is "does not equal," using the designated operator will result in maximum clarity. | Microsoft SQL 'not like' vs <> | [
"",
"sql",
"sql-server",
"t-sql",
"comparison",
"sql-like",
""
] |
I'm trying to solve [this](http://acm.timus.ru/problem.aspx?space=1&num=1219) problem from Timus Online Judge. To solve this problem you need to generate a sequence of 1 000 000 lowercase Latin letters and write it to stdout in 1 second.
It is easy to solve this problem with C++ or Java. I have a Python solution here:
```
import os
from random import randint
s = ''.join(chr(97 + randint(0, 25)) for i in range(1000000))
os.write(1, bytes(s, 'utf8'))
```
It takes 1.7s:
```
$ time python3.3 1219.py > /dev/null
real 0m1.756s
user 0m1.744s
sys 0m0.008s
```
And I got "Time limit exceeded" in result. So the question is "How to do it faster?"
**UPD1**:
Using `randint(97, 122)` reduces the time by 16ms. Now it is 1.740s
**UPD2:**
Solution by @Martijn Pieters takes 0.979s, but it doesn't pass test either.
**UPD3**
[Martijn Pieters](https://stackoverflow.com/users/100297/martijn-pieters) suggested some very good solutions, but they're still slow:
```
from sys import stdout
from random import choice
from string import ascii_lowercase
s = ''.join([choice(ascii_lowercase) for _ in range(1000000)])
stdout.write(s)
```
Takes **0.924s**
```
from sys import stdout
from random import choice
from string import ascii_lowercase
for _ in range(1000000):
stdout.write(choice(ascii_lowercase))
```
Takes **1.173s**
```
from sys import stdout
from random import choice
from string import ascii_lowercase
bal = [c.encode('ascii') for c in ascii_lowercase]
out = stdout.buffer
for _ in range(1000000):
out.write(choice(bal))
```
Takes **1.155s**
```
from sys import stdout
from random import choice
from string import ascii_lowercase
bal = [c.encode('ascii') for c in ascii_lowercase]
stdout.buffer.write(b''.join([choice(bal) for _ in range(1000000)]))
```
Takes **0.901s**
**UPD4**
Some guy [just solved](http://acm.timus.ru/status.aspx?space=1&num=1219) the problem on Timus. I hope he will share his solution :)
**UPD5**
Thanks to [Ashwini Chaudhary](https://stackoverflow.com/users/846892/ashwini-chaudhary) for sharing his Python 2.x solution with us:
```
from random import choice
from string import ascii_lowercase
lis=list(ascii_lowercase)
print ''.join(choice(lis) for _ in xrange(1000000))
```
It takes **0.527s** on my computer and it passes the tests on Timus. But the problem with Python 3.x still remains.
**UPD6**
Thanks to [Markku K.](https://stackoverflow.com/users/2275418/markku-k) for this code:
```
import os
from random import random
from string import ascii_lowercase
bal = [c.encode('ascii') for c in ascii_lowercase]
os.write(1, b''.join([bal[int(random() * 26)] for _ in range(1000000)]))
```
Takes **0.445s**, but still didn't pass the test | Here's Python 3 code that generates 1000000 "random" lowercase letters in `0.28` seconds (see also `0.11`-seconds solution at the end; @Ashwini Chaudhary's code from the question takes `0.55` seconds on my machine, @Markku K.'s code -- `0.53`):
```
#!/usr/bin/env python3
import os
import sys
def write_random_lowercase(n):
min_lc = ord(b'a')
len_lc = 26
ba = bytearray(os.urandom(n))
for i, b in enumerate(ba):
ba[i] = min_lc + b % len_lc # convert 0..255 to 97..122
sys.stdout.buffer.write(ba)
write_random_lowercase(1000000)
```
`% len_lc` skews the distribution (see the end for how to fix it), though it still satisfies the conditions (ascii, lowercase, frequencies of 1, 2, 3 letter sequences):
```
$ python3 generate-random.py | python3 check-seq.py
```
where `check-seq.py`:
```
#!/usr/bin/env python3
import sys
from collections import Counter
from string import ascii_lowercase
def main():
limits = [40000, 2000, 100]
s = sys.stdin.buffer.readline() # a single line
assert 1000000 <= len(s) <= 1000002 # check length +/- newline
s.decode('ascii','strict') # check ascii
assert set(s) == set(ascii_lowercase.encode('ascii')) # check lowercase
for n, lim in enumerate(limits, start=1):
freq = Counter(tuple(s[i:i+n]) for i in range(len(s)))
assert max(freq.values()) <= lim, freq
main()
```
Note: on acm.timus.ru `generate-random.py` gives "Output limit exceeded".
To improve performance, you could use [`bytes.translate()` method](http://docs.python.org/3.3/library/stdtypes.html#bytes.translate) (`0.11` seconds):
```
#!/usr/bin/env python3
import os
import sys
# make translation table from 0..255 to 97..122
tbl = bytes.maketrans(bytearray(range(256)),
bytearray([ord(b'a') + b % 26 for b in range(256)]))
# generate random bytes and translate them to lowercase ascii
sys.stdout.buffer.write(os.urandom(1000000).translate(tbl))
```
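A quick sanity check of the translation table (an editorial sketch, not part of the original answer): every possible input byte lands on a lowercase letter, so any random byte string translates to valid output.

```
import os

tbl = bytes.maketrans(bytes(range(256)),
                      bytes(ord('a') + b % 26 for b in range(256)))
sample = os.urandom(1000).translate(tbl)
all_lower = all(ord('a') <= c <= ord('z') for c in sample)  # True
```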
## How to fix `% len_lc` skew
`256` (number of bytes) is not evenly divisible by `26` (number of lower Latin letters) therefore the formula `min_lc + b % len_lc` makes some values appear less often than others e.g.:
```
#!/usr/bin/env python3
"""Find out skew: x = 97 + y % 26 where y is uniform from [0, 256) range."""
from collections import Counter, defaultdict
def find_skew(random_bytes):
char2freq = Counter(chr(ord(b'a') + b % 26) for b in random_bytes)
freq2char = defaultdict(set)
for char, freq in char2freq.items():
freq2char[freq].add(char)
return {f: ''.join(sorted(c)) for f, c in freq2char.items()}
print(find_skew(range(256)))
# -> {9: 'wxyz', 10: 'abcdefghijklmnopqrstuv'}
```
Here, the input `range(256)` is uniformly distributed (each byte occurs exactly once) but `'wxyz'` letters in the output are less often then the rest `9` vs. `10` occurrences. To fix it, unaligned bytes could be dropped:
```
print(find_skew(range(256 - (256 % 26))))
# -> {9: 'abcdefghijklmnopqrstuvwxyz'}
```
Here, the input is uniformly distributed bytes in the range `[0, 234)`; the output is uniformly distributed ascii lowercase letters.
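(Editorial check:) counting the translated aligned range confirms the uniformity, since 234 = 26 * 9:

```
from collections import Counter

naligned = 256 - (256 % 26)          # 234
tbl = bytes.maketrans(bytes(range(naligned)),
                      bytes(ord('a') + b % 26 for b in range(naligned)))
counts = Counter(bytes(range(naligned)).translate(tbl))
# 26 distinct letters, each appearing 234 // 26 == 9 times
```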
`bytes.translate()` accepts the second argument to specify bytes to delete:
```
#!/usr/bin/env python3
import os
import sys
nbytes = 256
nletters = 26
naligned = nbytes - (nbytes % nletters)
tbl = bytes.maketrans(bytearray(range(naligned)),
bytearray([ord(b'a') + b % nletters
for b in range(naligned)]))
bytes2delete = bytearray(range(naligned, nbytes))
R = lambda n: os.urandom(n).translate(tbl, bytes2delete)
def write_random_ascii_lowercase_letters(write, n):
"""*write* *n* random ascii lowercase letters."""
while n > 0:
# R(n) expected to drop `(nbytes - nletters) / nbytes` bytes
# to compensate, increase the initial size
n -= write(memoryview(R(n * nbytes // naligned + 1))[:n])
write = sys.stdout.buffer.write
write_random_ascii_lowercase_letters(write, 1000000)
```
If the random generator (`os.urandom` here) produces long sequences of the bytes that are outside of the aligned range (`>=234`) then the `while` loop may execute many times.
The time performance can be improved by another order of magnitude if [`random.getrandbits(8*n).to_bytes(n, 'big')`](https://docs.python.org/library/random.html#random.getrandbits) is used instead of [`os.urandom(n)`](https://docs.python.org/library/os.html#os.urandom). The former uses Mersenne Twister as the core generator, which may be faster than `os.urandom()`, which uses sources provided by the operating system. The latter is more secure if you use the random string for secrets. | Use `string.ascii_lowercase` instead of `chr` to generate lowercase characters:
```
from sys import stdout
from random import choice
from string import ascii_lowercase
s = ''.join([choice(ascii_lowercase) for _ in range(1000000)])
stdout.write(s)
```
Also writing to `stdout` directly appears to be faster, encoding yourself in python is not faster than having it all handled in the C code.
I also use a list comprehension; `str.join()` needs to scan through the input sequence twice, once to determine the length of the output, once to actually copy the input elements to output string. A list comprehension then beats out the slower generator-to-list code.
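That claim can be spot-checked with `timeit` (an editorial sketch; exact numbers vary by machine, so treat the comparison as indicative only):

```
from random import choice
from string import ascii_lowercase
from timeit import timeit

n = 20000
t_listcomp = timeit(
    lambda: ''.join([choice(ascii_lowercase) for _ in range(n)]), number=5)
t_genexpr = timeit(
    lambda: ''.join(choice(ascii_lowercase) for _ in range(n)), number=5)
# on most runs t_listcomp comes out a little smaller than t_genexpr
```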
Just using `choice(ascii_lowercase)` over your method of generating each character from an integer is over twice as fast:
```
>>> timeit.timeit('f()', 'from __main__ import yours as f', number=3)
11.299837955011753
>>> timeit.timeit('f()', 'from __main__ import mine as f', number=3)
5.330044150992762
```
You could try and avoid the `''.join()` overhead by writing individual characters directly to `stdout`:
```
from sys import stdout
from random import choice
from string import ascii_lowercase
for _ in range(1000000):
stdout.write(choice(ascii_lowercase))
```
Next to try is to write raw bytes:
```
from sys import stdout
from random import choice
from string import ascii_lowercase
bal = [c.encode('ascii') for c in ascii_lowercase]
out = stdout.buffer
for _ in range(1000000):
out.write(choice(bal))
```
but these are no improvements over `''.join()` in my tests.
Next we move to encoding the ASCII characters to bytes once, then using `bytes.join()`:
```
from sys import stdout
from random import choice
from string import ascii_lowercase
bal = [c.encode('ascii') for c in ascii_lowercase]
stdout.buffer.write(b''.join([choice(bal) for _ in range(1000000)]))
```
`bal` is a list of lowercase ASCII characters encoded to bytes, from which we random pick 1 million items, join them to into a large byte string then write that in one go to the binary stdout buffer.
The bytes join is just as 'slow' as the string version:
```
>>> timeit.timeit('f()', 'from __main__ import bytes as f', number=3)
5.41390264898655
```
but we encode 26 characters, not 1 million so the write stage is faster. | Fastest method to generate big random string with lower Latin letters | [
"",
"python",
"performance",
"python-3.x",
"random",
"stdin",
""
] |
I have a query that uses the `OR` statement at the `WHERE` clause.
How can I order my elements by their appearance in the OR statement?
For example, imagine the following query:
```
SELECT id,message,link,data_hora,like_count FROM mensagens
WHERE message = 'pineapple juice' OR message = 'pineapple' OR message = 'juice'
```
I would like the results to show the "pineapple juice" rows first and the others after that. I couldn't think of a way to solve this with the `ORDER BY` statement. | Use `FIELD()`. The simplest answer in `MySQL`.
```
SELECT
FROM
WHERE
ORDER BY FIELD(message, 'pineapple juice', 'pineapple', 'juice')
``` | Try using a [`CASE`](http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#operator_case) expression
```
SELECT id,message,link,data_hora,like_count
FROM mensagens
WHERE message = 'pineapple juice' OR message = 'pineapple' OR message = 'juice'
ORDER BY CASE message WHEN 'pineapple juice' THEN 0
WHEN 'pineapple' THEN 1
WHEN 'juice' THEN 2
END
``` | Order results by `OR` statements MySQL | [
"",
"mysql",
"sql",
""
] |
I have two tables `Employee` and `Emp_Audit`.
On table `Employee`, I have an `AFTER INSERT` trigger, which fires when I insert rows into `Employee`. The function of the trigger is to insert into the `Emp_Audit` table the rows that have been inserted into `Employee`.
The trigger works fine when I explicitly use 'insert values' for each record to be inserted into `Employee`, as:
```
INSERT INTO Employee_Test VALUES ('Anees',1000);
INSERT INTO Employee_Test VALUES ('Rick',1200);
INSERT INTO Employee_Test VALUES ('John',1100);
INSERT INTO Employee_Test VALUES ('Stephen',1300);
INSERT INTO Employee_Test VALUES ('Maria',1400);
```
Trigger inserts all these rows into `Emp_Audit` -------------GOOD
But when I use the values constructor, as:
```
insert into dbo.Employee_Test
values ('Kritika', 25000),
('Ritu', 15000),
('Maduri', 7000),
('Dinkar', 7000);
```
Only the first row `('Kritika', 25000)` gets inserted into `Emp_Audit`
The whole query is as follows:
```
CREATE TABLE Employee_Test
(
Emp_ID INT Identity,
Emp_name Varchar(100),
Emp_Sal Decimal (10,2)
)
CREATE TABLE Employee_Test_Audit
(
Emp_ID int,
Emp_name varchar(100),
Emp_Sal decimal (10,2),
Audit_Action varchar(100),
Audit_Timestamp datetime
)
-----------------------Trigger------------------------------------
CREATE TRIGGER trgInsertAfter ON [dbo].[Employee_Test]
FOR INSERT
AS
declare @empid int;
declare @empname varchar(100);
declare @empsal decimal(10,2);
declare @audit_action nvarchar(200);
select @empid = inserted.Emp_ID
FROM inserted;
select @empname = inserted.Emp_name
from inserted;
select @empsal = inserted.Emp_Sal
from inserted;
set @audit_action = 'Record Inserted after Insert Trigger Fired';
INSERT INTO Employee_Test_Audit
VALUES(@empid, @empname, @empsal, @audit_action, GETDATE());
GO
print('Insert trigger FIRED')
insert into dbo.Employee_Test
values ('Kritika', 25000),
('Ritu', 15000),
('Maduri', 7000),
('Dinkar', 7000);
``` | Yep, what you want is:
```
CREATE TRIGGER trgInsertAfter ON [dbo].[Employee_Test]
FOR INSERT
AS
INSERT INTO Employee_Test_Audit (Emp_ID, Emp_name, Emp_Sal, Audit_Action, Audit_Timestamp)
SELECT Emp_ID,Emp_name,Emp_Sal,
'Record Inserted after Insert Trigger Fired',GETDATE()
from inserted;
```
Because [`inserted`](http://msdn.microsoft.com/en-GB/library/ms191300.aspx) can contain *multiple* rows (or no rows), you have to treat it as a table. I've never seen any different behaviour, but there's no guarantee (in your version) that all of the variables would even have been assigned values from the *same* row.
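(Editorial aside, not part of the original answer: SQLite triggers fire per row, so they cannot reproduce the per-statement bug discussed here, but Python's built-in `sqlite3` is a convenient way to check the expected end state, one audit row per inserted row:)

```
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
    CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, sal REAL);
    CREATE TABLE employee_audit (id INTEGER, name TEXT, sal REAL);
    CREATE TRIGGER trg_emp AFTER INSERT ON employee
    BEGIN
        INSERT INTO employee_audit VALUES (NEW.id, NEW.name, NEW.sal);
    END;
""")
con.execute("INSERT INTO employee (name, sal) VALUES ('Kritika', 25000),"
            " ('Ritu', 15000), ('Maduri', 7000), ('Dinkar', 7000)")
audited = con.execute("SELECT COUNT(*) FROM employee_audit").fetchone()[0]
# all four rows end up in the audit table
```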
Also, you really should get into the habit of supplying a column list to the [`INSERT`](http://msdn.microsoft.com/en-us/library/ms174335%28v=sql.105%29.aspx). | Try this one -
```
CREATE TRIGGER dbo.trgInsertAfter
ON [dbo].[Employee_Test]
FOR INSERT
AS BEGIN
INSERT INTO dbo.Employee_Test_Audit(Emp_ID, Emp_name, Emp_Sal, ..., ...)
SELECT
i.Emp_ID
, i.Emp_name
, i.Emp_Sal
, 'Record Inserted after Insert Trigger Fired'
, GETDATE()
FROM INSERTED i
END
``` | Values() constructor in SQL Server 2008 and triggers | [
"",
"sql",
"sql-server",
"triggers",
""
] |
I am getting the following error after running my tests in the console:
```
ActiveRecord::StatementInvalid: SQLite3::SQLException: table users has no column named password: INSERT INTO "users"
```
user\_test.rb:
```
class UserTest < ActiveSupport::TestCase
test "a user should enter a first name" do
user = User.new
assert !user.save
assert !user.errors[:first_name].empty?
end
test "a user should enter a last name" do
user = User.new
assert !user.save
assert !user.errors[:last_name].empty?
end
test "a user should enter a profile name" do
user = User.new
assert !user.save
assert !user.errors[:profile_name].empty?
end
test "a user should have a unique profile name" do
user = User.new
user.profile_name = users(:adam).profile_name
assert !user.save
assert !user.errors[:profile_name].empty?
end
end
```
users.rb:
```
class User < ActiveRecord::Base
# Include default devise modules. Others available are:
# :token_authenticatable, :confirmable,
# :lockable, :timeoutable and :omniauthable
devise :database_authenticatable, :registerable,
:recoverable, :rememberable, :trackable, :validatable
# Setup accessible (or protected) attributes for your model
attr_accessible :email, :password, :password_confirmation, :remember_me,
:first_name, :last_name, :profile_name
validates :first_name, presence: true
validates :last_name, presence: true
validates :profile_name, presence: true,
uniqueness: true
has_many :statuses
def full_name
first_name + " " + last_name
end
end
```
users.yml:
```
dan:
first_name: "Dan"
last_name: "Can"
email: "dan@email.com"
profile_name: "dan"
password: "123456"
password_confirmation: "123456"
```
database.yml:
```
# SQLite version 3.x
# gem install sqlite3
#
# Ensure the SQLite 3 gem is defined in your Gemfile
# gem 'sqlite3'
development:
adapter: sqlite3
database: db/development.sqlite3
pool: 5
timeout: 5000
# Warning: The database defined as "test" will be erased and
# re-generated from your development database when you run "rake".
# Do not set this db to the same as development or production.
test:
adapter: sqlite3
database: db/test.sqlite3
pool: 5
timeout: 5000
production:
adapter: sqlite3
database: db/production.sqlite3
pool: 5
timeout: 5000
```
What I believe to be my user migration file:
```
class DeviseCreateUsers < ActiveRecord::Migration
def change
create_table(:users) do |t|
t.string :first_name
t.string :last_name
t.string :profile_name
## Database authenticatable
t.string :email, :null => false, :default => ""
t.string :encrypted_password, :null => false, :default => ""
## Recoverable
t.string :reset_password_token
t.datetime :reset_password_sent_at
## Rememberable
t.datetime :remember_created_at
## Trackable
t.integer :sign_in_count, :default => 0
t.datetime :current_sign_in_at
t.datetime :last_sign_in_at
t.string :current_sign_in_ip
t.string :last_sign_in_ip
## Confirmable
# t.string :confirmation_token
# t.datetime :confirmed_at
# t.datetime :confirmation_sent_at
# t.string :unconfirmed_email # Only if using reconfirmable
## Lockable
# t.integer :failed_attempts, :default => 0 # Only if lock strategy is :failed_attempts
# t.string :unlock_token # Only if unlock strategy is :email or :both
# t.datetime :locked_at
## Token authenticatable
# t.string :authentication_token
t.timestamps
end
add_index :users, :email, :unique => true
add_index :users, :reset_password_token, :unique => true
end
end
```
I would like to know what is causing the error, but more importantly why. | When I removed the `password` and `password_confirmation` columns from the users fixture, it passed the test with no errors. I'm told by a friend that it was likely due to an upgrade in Devise. | I think there are a couple of possibilities here, but I will focus on the immediate problem. If your Rake tasks are returning `No command found`, it may be because Rake isn't installed on your computer. I would start there.
To install Rake, type in your terminal:
```
gem install rake
```
Your code doesn't work because your users table doesn't have a column named `password`. Running `rake db:migrate` and `rake db:test:prepare` ensures that any migrations you created are applied to your database.
Let me know the results. | Why does my users table have a no column error? | [
"",
"sql",
"ruby-on-rails",
"ruby",
"testing",
"yaml",
""
] |
I am trying to connect Google's geocode API and the GitHub API to parse users' locations and create a list out of them.
The array (list) I want to create is like this:
```
location, lat, lon, count
San Francisco, x, y, 4
Mumbai, x1, y1, 5
```
Where location, lat, and lon are parsed from Google's geocode API and count is the number of occurrences of that location. Every time a new location is added: if it already exists in the list, its count is incremented; otherwise it is appended to the array (list) with location, lat, lon, and a count of 1.
Another example:
```
location, lat, lon, count
Miami x2, y2, 1 #first occurrence
San Francisco, x, y, 4 #occurred 4 times already
Mumbai, x1, y1, 5 #occurred 5 times already
Cairo, x3, y3, 1 #first occurrence
```
I can already get the user's location from GitHub and the geocoded data from Google. I just need to create this array in Python, which is what I'm struggling with.
Can anyone help me? Thanks. | With `collections.Counter`, you could do:
```
from collections import Counter
# initial values
c=Counter({("Mumbai", 1, 2):5, ("San Francisco", 3,4): 4})
#adding entries
c.update([('Mumbai', 1, 2)])
print c # Counter({('Mumbai', 1, 2): 6, ('San Francisco', 3, 4): 4})
c.update([('Mumbai', 1, 2), ("San Diego", 5,6)])
print c #Counter({('Mumbai', 1, 2): 7, ('San Francisco', 3, 4): 4, ('San Diego', 5, 6): 1})
``` | This would be better stored as a dictionary, indexed by city name. You could store it as two dictionaries, one dictionary of tuples for latitude/longitude (since lat/long never changes):
```
lat_long_dict = {}
lat_long_dict["San Francisco"] = (x, y)
lat_long_dict["Mumbai"] = (x1, y1)
```
And a `collections.defaultdict` for the count, so that it always starts at 0:
```
import collections
city_counts = collections.defaultdict(int)
city_counts["San Francisco"] += 1
city_counts["Mumbai"] += 1
city_counts["San Francisco"] += 1
# city counts would be
# defaultdict(<type 'int'>, {'San Francisco': 2, 'Mumbai': 1})
``` | Python list help (incrementing count, appending) | [
"",
"python",
"arrays",
"list",
""
] |
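Putting the accepted `Counter` idea into the exact `location, lat, lon, count` shape the question asks for — a minimal sketch in which the coordinates are made-up placeholders standing in for the real Google geocode response:

```python
from collections import Counter

# Hypothetical geocoded results as they arrive from the API;
# the coordinates below are placeholders, not real geocode output.
incoming = [
    ("San Francisco", 37.77, -122.42),
    ("Mumbai", 19.08, 72.88),
    ("San Francisco", 37.77, -122.42),
    ("Miami", 25.76, -80.19),
]

counts = Counter()
for loc in incoming:          # each new location: count += 1, or start at 1
    counts[loc] += 1

# Flatten into the requested "location, lat, lon, count" rows.
rows = [(name, lat, lon, n) for (name, lat, lon), n in counts.items()]
for row in sorted(rows):
    print(row)
```

Keying the counter on the whole `(location, lat, lon)` tuple means a repeated city is incremented while a first occurrence starts at 1, with no explicit membership test needed.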