| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I'm pulling data to use in a Pivot table in Excel, but Excel is not recognizing the date format.
I'm using..
```
CONVERT(date, [DateTimeSelected]) AS [Search Date]
```
Which shows as **2013-08-01** in my query output.
When it gets into my Excel pivot table via my SQL connection it looks the same, but the filters do not recognize it as a date.
Here you can see how Excel sees it as text on the left, and on the right what it should look like in Excel when it recognizes it as a date.

Any ideas please?
Thanks ;-)
Tried all these, but only the original B.Depart (date time) comes through as a date; none of the converted columns are read by Excel as a date...
I get loads of formats, but Excel must not like converted dates?
```
B.Depart AS 'Holiday Date time',
CONVERT(VARCHAR(10), B.Depart,103) AS 'Holiday Date',
DATENAME(weekday, B.Depart) AS 'Holiday Day Name',
CONVERT(CHAR(2), B.Depart, 113) AS 'Holiday Day',
CONVERT(CHAR(4), B.Depart, 100) AS 'Holiday Month',
CONVERT(CHAR(4), B.Depart, 120) AS 'Holiday Year',
CONVERT(VARCHAR(10),B.Depart,10) AS 'New date',
CONVERT(VARCHAR(19),B.Depart),
CONVERT(VARCHAR(10),B.Depart,10),
CONVERT(VARCHAR(10),B.Depart,110),
CONVERT(VARCHAR(11),B.Depart,6),
CONVERT(VARCHAR(11),B.Depart,106),
CONVERT(VARCHAR(24),B.Depart,113)
``` | It appears that `cast(convert(char(11), Bo.Depart, 113) as datetime)` works. | I found the only answer for me was to set the SQL data as the source for a Table, rather than a pivot table, and then build a pivot table from that Table.
That way I was able to apply the Number format to the Pivot table Row Labels, and that allowed me to format the date how I wanted on the chart. | SQL Date formatting for Excel pivot tables | [
"",
"sql",
"excel",
"date",
"pivot",
"pivot-table",
""
] |
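Both answers turn on the difference between a value actually typed as a date and text that merely looks like one. A rough Python illustration of that distinction (not the asker's code; the sample dates are invented):

```python
from datetime import date, datetime

# What CONVERT(VARCHAR, ...) hands to Excel: text that looks like a date.
as_text = "2013-08-01"
# What Excel's date filters need: a value actually typed as a date.
as_date = datetime.strptime(as_text, "%Y-%m-%d").date()
assert as_date == date(2013, 8, 1)

# With dd/mm/yyyy strings (style 103), text ordering disagrees with real ordering:
assert "01/12/2013" < "02/01/2013"  # lexicographic comparison of text
assert datetime.strptime("01/12/2013", "%d/%m/%Y") > \
       datetime.strptime("02/01/2013", "%d/%m/%Y")  # chronological comparison
print("text comparisons are not date comparisons")
```

This is why converting back with `cast(... as datetime)` helps: the column reaches Excel as a date type rather than as a string.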
[Sql fiddle](http://sqlfiddle.com/#!3/25d42/6)
```
CREATE TABLE [Users_Reg]
(
[User_ID] [int] IDENTITY (1, 1) NOT NULL CONSTRAINT User_Reg_P_KEY PRIMARY KEY,
[Name] [varchar] (50) NOT NULL,
[Type] [varchar] (50) NOT NULL /*Technician/Radiologist*/
)
CREATE Table [Study]
(
[UID] [INT] IDENTITY (1,1) NOT NULL CONSTRAINT Patient_Study_P_KEY PRIMARY KEY,
[Radiologist] [int], /*user id of Radiologist type*/
[Technician] [int], /*user id of Technician type*/
)
select * from Study
inner join Users_Reg
on Users_Reg.User_ID=Study.Radiologist
```
In the Patient\_Study table, either Radiologist or Technician may have a 0 value.
How do I get the technician name and the radiologist name from the query? | You will want to JOIN on the `users_reg` table twice to get the result: once for the radiologist and another for the technician:
```
select ps.uid,
ur1.name rad_name,
ur1.type rad_type,
ur2.name tech_name,
ur2.type tech_type
from Patient_Study ps
left join Users_Reg ur1
on ur1.User_ID=ps.Radiologist
left join Users_Reg ur2
on ur2.User_ID=ps.Technician;
```
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/25d42/21). This will return both the radiologist and technician name/type for all patient studies. If you want to replace the null in any of the columns, then you could use `COALESCE` similar to the following:
```
select ps.uid,
coalesce(ur1.name, '') rad_name,
coalesce(ur1.type, '') rad_type,
coalesce(ur2.name, '') tech_name,
coalesce(ur2.type, '') tech_type
from Patient_Study ps
left join Users_Reg ur1
on ur1.User_ID=ps.Radiologist
left join Users_Reg ur2
on ur2.User_ID=ps.Technician;
```
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/25d42/46) | in your case you could use `isnull`:
```
select *
from Patient_Study as PS
inner join Users_Reg as U on U.User_ID = isnull(nullif(PS.Radiologist, ''), PS.Technician)
```
or two `left outer joins`:
```
select PS.*, isnull(UP.Type, UT.Type) as Type
from Patient_Study as PS
left outer join Users_Reg as UP on UP.User_ID = PS.Radiologist
left outer join Users_Reg as UT on UT.User_ID = PS.Technician;
```
[**SQL FIDDLE**](http://sqlfiddle.com/#!3/25d42/20) with example.
As a piece of advice: store `null` values in your table if you have no proper value.
```
INSERT INTO Patient_Study(Radiologist,Technician)VALUES('1','2')
INSERT INTO Patient_Study(Radiologist,Technician)VALUES('1','')
INSERT INTO Patient_Study(Radiologist,Technician)VALUES(null,'2')
``` | Sql query getting name column null | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
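The double self-join pattern from both answers can be sketched end to end with SQLite, which ships with Python. Table and column names follow the question; the sample people are invented for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Users_Reg (User_ID INTEGER PRIMARY KEY, Name TEXT, Type TEXT);
    CREATE TABLE Patient_Study (UID INTEGER PRIMARY KEY, Radiologist INT, Technician INT);
    INSERT INTO Users_Reg VALUES (1, 'Alice', 'Radiologist'), (2, 'Bob', 'Technician');
    INSERT INTO Patient_Study VALUES (1, 1, 2), (2, 1, NULL);
""")

# Join Users_Reg twice: ur1 resolves the radiologist, ur2 the technician.
rows = conn.execute("""
    SELECT ps.UID,
           COALESCE(ur1.Name, '') AS rad_name,
           COALESCE(ur2.Name, '') AS tech_name
    FROM Patient_Study ps
    LEFT JOIN Users_Reg ur1 ON ur1.User_ID = ps.Radiologist
    LEFT JOIN Users_Reg ur2 ON ur2.User_ID = ps.Technician
    ORDER BY ps.UID
""").fetchall()

print(rows)  # [(1, 'Alice', 'Bob'), (2, 'Alice', '')]
```

The LEFT JOINs matter: a study with a missing technician still comes back, with the name coalesced to an empty string.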
I use Python's os.walk() to get the files and dirs in some directories, but there are files whose names are too long (>300 characters); os.walk() returns nothing, and using onerror I get '[Error 234] More data is available'. I tried to use yield, but also got nothing, and it shows 'Traceback: StopIteration'.
The OS is Windows and the code is simple. I have tested with a directory: if there is a long-named file the problem occurs, while if I rename the long-named files with short names, the code gets the correct result.
I cannot change these directories, e.g. rename or move the long-named files.
Please help me to solve the problem!
```
def t(a):
for root,dirs,files in os.walk(a):
print root,dirs,files
t('c:/test/1')
``` | In Windows, file names (including the path) cannot be greater than 255 characters, so the error you're seeing comes from Windows, not from Python - because somehow you managed to create such long file names, but now you can't read them. See this [post](https://stackoverflow.com/questions/265769/maximum-filename-length-in-ntfs-windows-xp-and-windows-vista) for more details. | The only workaround I can think of is to map the folder to a drive letter. This will make the path much shorter, e.g. z:\myfile.xlsx instead of c:\a\b\c\d\e\f\g\myfile.xlsx | Python's os.walk() fails in Windows when there are long filenames | [
"",
"python",
"windows",
"file",
"filenames",
"os.walk",
""
] |
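One practical mitigation, whatever the root cause, is to stop `os.walk()` from silently swallowing the failures: collect the errors via `onerror` so at least the readable directories are still returned. A sketch (the long-path part is a Windows-only idea, noted in a comment and not exercised here):

```python
import os
import tempfile

def walk_logging_errors(top):
    """os.walk, but collect errors instead of losing the whole traversal."""
    # On Windows, an extended-length path ('\\\\?\\' + absolute path) can lift
    # the MAX_PATH limit for many APIs -- mentioned as an idea only, untested here.
    errors = []
    results = []
    for root, dirs, files in os.walk(top, onerror=errors.append):
        results.append((root, dirs, files))
    return results, errors

# Demo on a throwaway directory
top = tempfile.mkdtemp()
open(os.path.join(top, "a.txt"), "w").close()
results, errors = walk_logging_errors(top)
print(results[0][2])  # ['a.txt']
print(errors)         # []
```

With this, a directory whose listing fails produces an entry in `errors` while the rest of the tree is still walked.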
I need a little help with this; I've been searching for a solution with no results.
These are my settings:
settings.py:
```
STATIC_ROOT = ''
# URL prefix for static files.
# Example: "http://media.lawrence.com/static/"
STATIC_URL = '/static/'
PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))
STATICFILES_DIRS = (
PROJECT_ROOT + '/static/'
)
```
Installed apps:
```
INSTALLED_APPS = [
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sites',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.admin', . . .
```
Running with DEBUG = True:
```
August 01, 2013 - 16:59:44
Django version 1.5.1, using settings 'settings'
Development server is running at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
[01/Aug/2013 16:59:50] "GET / HTTP/1.1" 200 6161
[01/Aug/2013 16:59:50] "GET /static/media/css/jquery-ui/ui-lightness/jquery-ui- 1.10.3.custom.min.css HTTP/1.1" 404 5904
[01/Aug/2013 16:59:50] "GET /static/media/css/bootstrap/bootstrap.css HTTP/1.1" 404 5904
[01/Aug/2013 16:59:50] "GET /static/media/css/bootstrap/bootstrap-responsive.min.css HTTP/1.1" 404 5904
[01/Aug/2013 16:59:50] "GET /static/media/css/styles.css HTTP/1.1" 404 5904
[01/Aug/2013 16:59:50] "GET /static/media/js/jquery/jquery-1.9.1.min.js HTTP/1.1" 404 5904
[01/Aug/2013 16:59:50] "GET /static/media/js/bootstrap/bootstrap.min.js HTTP/1.1" 404 5904
[01/Aug/2013 16:59:50] "GET /static/media/js/jquery-ui/jquery-ui-1.10.3.custom.min.js HTTP/1.1" 404 5904
[01/Aug/2013 16:59:50] "GET /static/media/js/messages.js HTTP/1.1" 404 5904
[01/Aug/2013 16:59:50] "GET /static/media/js/validate/jquery.validate.min.js HTTP/1.1" 404 5904
[01/Aug/2013 16:59:50] "GET /static/media/images/FERREMOLQUES2.png HTTP/1.1" 404 5904
[01/Aug/2013 16:59:50] "GET /static/media/js/dynamic-style.js HTTP/1.1" 404 5904
```
As a special mention, I'm running Django 1.5.1 and Python 2.7.5 in a **VIRTUALENV**. I do not know if this configuration is causing the problem.
Any help would be appreciated.
Thanks.
**EDIT: When I turn off the VIRTUALENV and install the proper versions of Django and the project's dependencies, my project works well, without any issue. . . statics are served as they should be** | After hours and hours of searching for a solution, I finally found that this problem is a bug:
<https://bugzilla.redhat.com/show_bug.cgi?id=962223>
I'm not sure if this bug is in Django or Python; my Django version is 1.5.1 and Python is 2.7.5. I would need to test previous Django and Python versions to see if the bug is present.
My settings.py had `DEBUG=False`; when I changed it to True the problem went away. Right now, in development, I'm not worried about that, but I'll wait for a patch before my project reaches production.
Thanks again. | Are you sure your `STATICFILES_DIRS` is correct? With your settings as they are at the moment, the `static` folder is supposed to be at the same level as `settings.py`.
```
PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__)) # it means settings.py is in PROJECT_ROOT?
STATICFILES_DIRS = (
PROJECT_ROOT + '/static/', # <= don't forget a comma here
)
```
My normal `settings.py` is a bit different:
```
ROOT_PATH = path.join(path.dirname(__file__), '..') # up one level from settings.py
STATICFILES_DIRS = (
path.abspath(path.join(ROOT_PATH, 'static')), # static is on root level
)
```
Apart from that, you need `django.core.context_processors.static` as context processors:
```
TEMPLATE_CONTEXT_PROCESSORS = (
# other context processors....
'django.core.context_processors.static',
)
```
And enable the urlpattern in `urls.py`:
```
from django.contrib.staticfiles.urls import staticfiles_urlpatterns
urlpatterns += staticfiles_urlpatterns()
```
Hope it helps! | Django 1.5 GET 404 on static files | [
"",
"python",
"django",
"static",
"http-status-code-404",
""
] |
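The two layouts the answer contrasts differ only in path arithmetic, which can be checked with plain `os.path` (the `/srv/proj/app/settings.py` location below is invented for the demo, assuming a POSIX filesystem):

```python
import os.path as path

# Pretend settings.py lives at /srv/proj/app/settings.py
settings_file = "/srv/proj/app/settings.py"

# Accepted-answer style: static/ next to settings.py
project_root = path.dirname(path.abspath(settings_file))
same_level = path.join(project_root, "static")

# Alternative style from the answer: static/ one level up from settings.py
root_path = path.abspath(path.join(path.dirname(settings_file), ".."))
one_up = path.join(root_path, "static")

print(same_level)  # /srv/proj/app/static
print(one_up)      # /srv/proj/static
```

Whichever layout you pick, `STATICFILES_DIRS` must point at where the files actually are, which is the first thing to verify when every static request 404s.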
I am using this command to find the same values in two tables when the tables have 100-200 records. But when the tables have 100,000-200,000 records, the SQL manager, the browsers, in short the whole computer, freezes.
Is there any alternative command for this?
```
SELECT
distinct
names
FROM
table1
WHERE
names in (SELECT names FROM table2)
A simple join will also do it.
Make sure the column is indexed.
```
select distinct t1.names
from table1 t1, table2 t2
where t1.names = t2.names
``` | Try with `join`
```
SELECT distinct t1.names
FROM table1 t1
join table2 t2 on t2.names = t1.names
``` | Looking for an alternative SQL command to find the same values in two tables | [
"",
"mysql",
"sql",
""
] |
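Both answers rewrite the `IN` subquery as a join; that the two forms return the same distinct set can be sketched with SQLite via Python's stdlib (sample rows invented, index added as both answers recommend):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (names TEXT);
    CREATE TABLE table2 (names TEXT);
    CREATE INDEX idx_t2 ON table2(names);
    INSERT INTO table1 VALUES ('ann'), ('bob'), ('bob'), ('eve');
    INSERT INTO table2 VALUES ('bob'), ('eve'), ('zoe');
""")

in_style = conn.execute(
    "SELECT DISTINCT names FROM table1 "
    "WHERE names IN (SELECT names FROM table2) ORDER BY names").fetchall()

join_style = conn.execute(
    "SELECT DISTINCT t1.names FROM table1 t1 "
    "JOIN table2 t2 ON t2.names = t1.names ORDER BY t1.names").fetchall()

print(in_style)                 # [('bob',), ('eve',)]
print(in_style == join_style)   # True
```

Note the `DISTINCT` is still needed in the join form, since matching duplicate rows in `table1` would otherwise appear more than once.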
I can create a named child logger, so that all the logs output by that logger are marked with its name. I can use that logger exclusively in my function/class/whatever.
However, if that code calls out to functions in another module that makes use of logging using just the logging module functions (that proxy to the root logger), how can I ensure that those log messages go through the same logger (or are at least logged in the same way)?
For example:
main.py
```
import logging
import other
def do_stuff(logger):
logger.info("doing stuff")
other.do_more_stuff()
if __name__ == '__main__':
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("stuff")
do_stuff(logger)
```
other.py
```
import logging
def do_more_stuff():
logging.info("doing other stuff")
```
Outputs:
```
$ python main.py
INFO:stuff:doing stuff
INFO:root:doing other stuff
```
I want to be able to cause both log lines to be marked with the name 'stuff', and I want to be able to do this only changing main.py.
How can I cause the logging calls in other.py to use a different logger without changing that module? | This is the solution I've come up with:
Using thread local data to store the contextual information, and using a Filter on the root loggers handlers to add this information to LogRecords before they are emitted.
```
context = threading.local()
context.name = None
class ContextFilter(logging.Filter):
def filter(self, record):
if context.name is not None:
record.name = "%s.%s" % (context.name, record.name)
return True
```
This is fine for me, because I'm using the logger name to indicate what task was being carried out when this message was logged.
I can then use context managers or decorators to make logging from a particular passage of code all appear as though it was logged from a particular child logger.
```
@contextlib.contextmanager
def logname(name):
old_name = context.name
if old_name is None:
context.name = name
else:
context.name = "%s.%s" % (old_name, name)
try:
yield
finally:
context.name = old_name
def as_logname(name):
def decorator(f):
@functools.wraps(f)
def wrapper(*args, **kwargs):
with logname(name):
return f(*args, **kwargs)
return wrapper
return decorator
```
So then, I can do:
```
with logname("stuff"):
logging.info("I'm doing stuff!")
do_more_stuff()
```
or:
```
@as_logname("things")
def do_things():
logging.info("Starting to do things")
do_more_stuff()
```
The key thing being that any logging that `do_more_stuff()` does will be logged as if it were logged with either a "stuff" or "things" child logger, without having to change `do_more_stuff()` at all.
This solution would have problems if you were going to have different handlers on different child loggers. | This is what logging.handlers (or the handlers in the logging module) is for. In addition to creating your logger, you create one or more handlers to send the logging information to various places and add them to the root logger. Most modules that do logging create a logger that they use for there own purposes but depend on the controlling script to create the handlers. Some frameworks decide to be super helpful and add handlers for you.
Read the [logging docs](http://docs.python.org/2/library/logging.html), its all there.
(edit)
logging.basicConfig() is a helper function that adds a single handler to the root logger. You can control the format string it uses with the 'format=' parameter. If all you want to do is have all modules display "stuff", then use `logging.basicConfig(level=logging.INFO, format="%(levelname)s:stuff:%(message)s")`. | How can you make logging module functions use a different logger? | [
"",
"python",
"logging",
""
] |
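The accepted answer shows the filter and the context managers but not the wiring between them. A minimal end-to-end sketch of the filter idea (Python 3 here, with `io.StringIO` standing in for real output, and the handler attached or removed manually for the demo):

```python
import io
import logging
import threading

context = threading.local()
context.name = "stuff"

class ContextFilter(logging.Filter):
    def filter(self, record):
        # Prefix the record's logger name with the current context name.
        if getattr(context, "name", None) is not None:
            record.name = "%s.%s" % (context.name, record.name)
        return True

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(levelname)s:%(name)s:%(message)s"))
handler.addFilter(ContextFilter())   # handler-level filters see every record the handler emits

root = logging.getLogger()
root.addHandler(handler)
root.setLevel(logging.INFO)

logging.info("doing other stuff")    # logged through the root logger, as other.py does
root.removeHandler(handler)

print(stream.getvalue().strip())  # INFO:stuff.root:doing other stuff
```

Because the filter sits on the root logger's handler, modules that call the plain `logging.info(...)` functions still get their records renamed without being changed.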
I realize that this question has been asked before([Python Pyplot Bar Plot bars disappear when using log scale](https://stackoverflow.com/questions/14047068/python-pyplot-bar-plot-bars-disapear-when-using-log-scale)), but the answer given did not work for me. I set my pyplot.bar(x\_values, y\_values, etc, log = True) but got an error that says:
```
"TypeError: unsupported operand type(s) for +: 'NoneType' and 'int'"
```
I have been searching in vain for an actual example of pyplot code that uses a bar plot with the y-axis set to log but haven't found it. What am I doing wrong?
here is the code:
```
import matplotlib.pyplot as pyplot
fig = pyplot.figure()
ax = fig.add_subplot(111)
x_axis = [0, 1, 2, 3, 4, 5]
y_axis = [334, 350, 385, 40000.0, 167000.0, 1590000.0]
ax.bar(x_axis, y_axis, log = 1)
pyplot.show()
```
I get an error even when I remove pyplot.show. Thanks in advance for the help | The error is raised due to the `log = True` argument in `ax.bar(...)`. I'm unsure if this is a matplotlib bug or if it is being used in an unintended way. It can easily be fixed by removing the offending argument `log=True`.
This can be remedied by simply logging the y values yourself:
```
x_values = np.arange(1,8, 1)
y_values = np.exp(x_values)
log_y_values = np.log(y_values)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.bar(x_values,log_y_values) #Insert log=True argument to reproduce error
```
Appropriate labels such as `log(y)` need to be added to make clear these are log values. | Are you sure that is all your code does? Where does the code throw the error? During plotting? Because this works for me:
```
In [16]: import numpy as np
In [17]: x = np.arange(1,8, 1)
In [18]: y = np.exp(x)
In [20]: import matplotlib.pyplot as plt
In [21]: fig = plt.figure()
In [22]: ax = fig.add_subplot(111)
In [24]: ax.bar(x, y, log=1)
Out[24]:
[<matplotlib.patches.Rectangle object at 0x3cb1550>,
<matplotlib.patches.Rectangle object at 0x40598d0>,
<matplotlib.patches.Rectangle object at 0x4059d10>,
<matplotlib.patches.Rectangle object at 0x40681d0>,
<matplotlib.patches.Rectangle object at 0x4068650>,
<matplotlib.patches.Rectangle object at 0x4068ad0>,
<matplotlib.patches.Rectangle object at 0x4068f50>]
In [25]: plt.show()
```
Here's the plot
 | Barplot with log y-axis program syntax with matplotlib pyplot | [
"",
"python",
"matplotlib",
"typeerror",
"bar-chart",
""
] |
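The accepted answer's workaround boils down to a plain data transform: plot `log(y)` instead of `y`. The transform itself needs nothing beyond the stdlib, so it can be sketched without matplotlib (using the question's y values):

```python
import math

y_values = [334, 350, 385, 40000.0, 167000.0, 1590000.0]
log_y = [math.log10(y) for y in y_values]

# The transformed values span roughly 2.5 to 6.2 instead of 334 to 1.59 million,
# so the small bars no longer vanish next to the big ones.
assert log_y == sorted(log_y)            # order of the data is preserved
print([round(v, 2) for v in log_y])
```

Passing `log_y` to `ax.bar()` then sidesteps `log=True` entirely, at the cost of relabelling the axis yourself.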
I need another python script that will execute these 3 scripts. | You probably want the following :
```
import os
def include(filename):
if os.path.exists(filename):
execfile(filename)
include('myfile.py')
```
But I think it would be better to refactor your code using functions and use **import**. There already was a similar [question](https://stackoverflow.com/questions/714881/how-to-include-external-python-code-to-use-in-other-files) at SO: | import - will execute the code which you import (once)
os.system("scriptname.py")
subprocess
popen | How to execute three .py files from another python file? | [
"",
"python",
""
] |
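`execfile` from the accepted answer is Python 2 only. A hedged Python 3 sketch of the subprocess route the second answer lists, with three throwaway stand-in scripts created just for the demo (the names are invented):

```python
import os
import subprocess
import sys
import tempfile

# Create three tiny stand-in scripts.
tmpdir = tempfile.mkdtemp()
scripts = []
for i in (1, 2, 3):
    script_path = os.path.join(tmpdir, "script%d.py" % i)
    with open(script_path, "w") as f:
        f.write("print('hello from script %d')" % i)
    scripts.append(script_path)

# Run each in its own interpreter process, one after another.
outputs = []
for script_path in scripts:
    result = subprocess.run([sys.executable, script_path],
                            capture_output=True, text=True, check=True)
    outputs.append(result.stdout.strip())

print(outputs)  # ['hello from script 1', 'hello from script 2', 'hello from script 3']
```

Running via `sys.executable` keeps each script in its own process; `import`, by contrast, runs the code once inside the current process.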
I'm trying to run a for loop. Here's the section of my code I'm having trouble with:
```
aldurstengd_ororka = {(18, 19, 20, 21, 22, 23, 24):1, (25):0.95, (26):0.90,
(27):0.85, (28, 29):0.75, (30, 31):0.65, (32, 33):0.55, (34, 35):0.45,
(36, 37):0.35, (40, 41, 42, 43, 44, 45):0.15, (46, 47, 48, 49, 50):0.10,
(51, 52, 53, 54, 55):0.075, (56, 57, 58, 59, 60):0.05, (61, 62, 63, 64,
65, 66):0.025}
for age in aldurstengd_ororka.keys():
for item in age:
if ororkualdur == item:
baetur = baetur + ororkulifeyrir * aldurstengd_ororka([age])
```
So my intention is to run through aldurstengd\_ororka, and for each "age" tuple in the dictionary, I run another for loop for each "item" inside the tuple. The error I get is
TypeError: 'int' object is not iterable | If `aldurstengd_ororka` is a dictionary, then this expression:
```
aldurstengd_ororka([age])
```
is an error. Perhaps you meant something like:
```
aldurstengd_ororka[(age)]
```
EDIT: The error you were seeing is quite interesting, I did reproduce it with this snippet:
```
for age in aldurstengd_ororka.keys():
print 'age:', age
for item in age:
print item
```
The output of the code is:
```
age: (32, 33)
32
33
age: (36, 37)
36
37
age: (51, 52, 53, 54, 55)
51
52
53
54
55
age: (61, 62, 63, 64, 65, 66)
61
62
63
64
65
66
age: (30, 31)
30
31
age: 25
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/home/ma/mak/Documents/t.py in <module>()
3 for age in aldurstengd_ororka.keys():
4 print 'age:', age
----> 5 for item in age:
6 print item
7
TypeError: 'int' object is not iterable
```
So, what happens is that `(25)` is not a tuple at all: parentheses around a single value without a trailing comma are just grouping, so the dictionary key is the plain int `25`, and `age` ends up being an int rather than a tuple. A workaround would be to do something like:
```
for age in aldurstengd_ororka.keys():
# if not tuple, make it a tuple:
if not isinstance(age, tuple): age = (age,)
print 'age:', age
for item in age:
print item
``` | Your tuple keys that just have a single int in them are being parsed as an int instead of a tuple. So when you try `for item in age`, you're trying to iterate over a non-iterable. Use lists `[4]` or add a trailing comma `(4,)`, and it'll do the trick:
```
aldurstengd_ororka = {(18, 19, 20, 21, 22, 23, 24):1, (25):0.95, (26):0.90,
(27):0.85, (28, 29):0.75, (30, 31):0.65, (32, 33):0.55, (34, 35):0.45,
(36, 37):0.35, (40, 41, 42, 43, 44, 45):0.15, (46, 47, 48, 49, 50):0.10,
(51, 52, 53, 54, 55):0.075, (56, 57, 58, 59, 60):0.05, (61, 62, 63, 64,
65, 66):0.025}
for age in aldurstengd_ororka.keys():
if isinstance(age, (tuple, list)):
for item in age:
if ororkualdur == item:
baetur = baetur + ororkulifeyrir * aldurstengd_ororka[age]
else:
baetur = baetur + ororkulifeyrir * aldurstengd_ororka[age]
``` | "Int" object is not iterable | [
"",
"python",
"object",
"int",
"iterable",
""
] |
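Once every key really is a tuple (trailing commas on the single ages), the lookup loop from the question simplifies, since tuples support `in` directly. A trimmed sketch using a subset of the question's table:

```python
# Single ages get a trailing comma so every key really is a tuple.
aldurstengd_ororka = {
    (18, 19, 20, 21, 22, 23, 24): 1,
    (25,): 0.95,
    (26,): 0.90,
    (28, 29): 0.75,
}

def rate_for_age(age, table):
    """Return the rate whose key tuple contains this age (None if absent)."""
    for ages, rate in table.items():
        if age in ages:          # tuples support 'in', so no inner loop is needed
            return rate
    return None

print(rate_for_age(25, aldurstengd_ororka))  # 0.95
print(rate_for_age(29, aldurstengd_ororka))  # 0.75
```

The original `aldurstengd_ororka([age])` call-syntax bug also disappears, because the rate comes straight out of the iteration.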
I am trying to understand how the following COBOL cursor works:
```
T43624 EXEC SQL
T43624 DECLARE X_CURSOR CURSOR FOR
T43624 SELECT
T43624 A
T43624 ,B
T43624 ,C
T43624 ,D
T43624 ,E
T43624 ,F
T43624 FROM
T43624 X
T43624 WHERE
T43624 L = :PP-L
T43624 AND M <= :PP-M
T43624 AND N = :PP-N
T43624 AND O = :PP-O
T43624 AND P = :PP-P
T43624 AND Q = :PP-Q
T43624 END-EXEC.
```
Given that there is no ORDER BY clause, in what order will the rows be returned? Could a default have been set somewhere? | There is no default sort order for results returned from a DB2 SELECT statement. If you need, or expect, data to be returned in some order, then the ordering must be specified using an ORDER BY clause in the SQL statement.
You may find that results appear to be ordered, but that ordering is just an artifact of the access paths used by DB2 to resolve the predicate. Simple queries requiring only stage 1 processing are often resolved using an index, and these typically come out ordered because the underlying index follows that order. This is totally unreliable and may change due to a rebind causing a different access path to be used, or when the underlying index is in need of being rebuilt (after many insertions/deletions, lack of free space, etc.).
Queries that require stage 2 processing also tend to come out ordered, but this too is just an artifact of query resolution and should never be relied upon.
COBOL does not exercise any inherent control over DB2 operations other than what may be achieved using SQL alone. | There is no **default** ordering. The order the records are returned to the program will be determined by the access method DB2 uses.
For example on a single Table query if DB2 does a
```
Full Table Scan Rows most likely in table sequence (small tables)
Index Scan Rows most likely in Index sequence (small tables)
Other Possibly table sequence, but could be index
sequence or a random sequence.
```
To confuse things further
* On mainframe DB2, Clustering Indexes are often used (these store the table in index sequence).
* DB2 can change its **access method** each time there is a **bind**.
* For huge tables, I suspect it might use multiple readers, which would change the above orders.
If you need/want the data in a specific sequence, use the **Order by** clause | Is there a default sort order for cursors in COBOL reading from DB2? | [
"",
"sql",
"cursor",
"db2",
"cobol",
""
] |
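The same rule holds in any SQL engine, not just DB2: without ORDER BY the row order is an accident of the access path, and only an explicit ORDER BY is guaranteed. A quick illustration via Python's stdlib SQLite (schema invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE X (A INT, L INT);
    INSERT INTO X VALUES (3, 1), (1, 1), (2, 1);
""")

unordered = conn.execute("SELECT A FROM X WHERE L = 1").fetchall()
ordered   = conn.execute("SELECT A FROM X WHERE L = 1 ORDER BY A").fetchall()

# Only the second result's order is guaranteed by SQL semantics; the first may
# happen to look sorted today and change after the engine picks another plan.
print(sorted(unordered) == ordered)  # True
print(ordered)                       # [(1,), (2,), (3,)]
```

For the COBOL cursor in the question, that means adding `ORDER BY` to the DECLARE CURSOR statement is the only reliable way to fix the row order.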
I would like to use Ansible to execute a simple job on several remote nodes concurrently. The actual job involves grepping some log files and then post-processing the results on my local host (which has software not available on the remote nodes).
The command line ansible tools don't seem well-suited to this use case because they mix together ansible-generated formatting with the output of the remotely executed command. The Python API seems like it should be capable of this though, since it exposes the output unmodified (apart from some potential unicode mangling that shouldn't be relevant here).
A simplified version of the Python program I've come up with looks like this:
```
from sys import argv
import ansible.runner
runner = ansible.runner.Runner(
pattern='*', forks=10,
module_name="command",
module_args=(
"""
sleep 10
"""),
inventory=ansible.inventory.Inventory(argv[1]),
)
results = runner.run()
```
Here, `sleep 10` stands in for the actual log grepping command - the idea is just to simulate a command that's not going to complete immediately.
However, upon running this, I observe that the amount of time taken seems proportional to the number of hosts in my inventory. Here are the timing results against inventories with 2, 5, and 9 hosts respectively:
```
exarkun@top:/tmp$ time python howlong.py two-hosts.inventory
real 0m24.285s
user 0m0.216s
sys 0m0.120s
exarkun@top:/tmp$ time python howlong.py five-hosts.inventory
real 0m55.120s
user 0m0.224s
sys 0m0.160s
exarkun@top:/tmp$ time python howlong.py nine-hosts.inventory
real 1m57.272s
user 0m0.360s
sys 0m0.284s
exarkun@top:/tmp$
```
Some other random observations:
* `ansible all --forks=10 -i five-hosts.inventory -m command -a "sleep 10"` exhibits the same behavior
* `ansible all -c local --forks=10 -i five-hosts.inventory -m command -a "sleep 10"` appears to execute things concurrently (but only works for local-only connections, of course)
* `ansible all -c paramiko --forks=10 -i five-hosts.inventory -m command -a "sleep 10"` appears to execute things concurrently
Perhaps this suggests the problem is with the ssh transport and has nothing to do with using ansible via the Python API as opposed to from the comand line.
What is wrong here that prevents the default transport from taking only around ten seconds regardless of the number of hosts in my inventory? | Some investigation reveals that ansible is looking for the hosts in my inventory in ~/.ssh/known\_hosts. My configuration has HashKnownHosts enabled. ansible is never able to find the host entries it is looking for because it doesn't understand the hashed known\_hosts entry format.
Whenever ansible's ssh transport can't find the known hosts entry, it acquires a global lock for the duration of the module's execution. The result of this confluence is that all execution is effectively serialized.
A temporary work-around is to give up some security and disabled host key checking by putting `host_key_checking = False` into `~/.ansible.cfg`. Another work-around is to use the paramiko transport (but this is incredibly slow, perhaps tens or hundreds of times slower than the ssh transport, for some reason). Another work-around is to let some unhashed entries get added to the known\_hosts file for ansible's ssh transport to find. | Since you have HashKnownHosts enabled, you should upgrade to the latest version of Ansible. Version 1.3 added support for hashed `known_hosts`, see [the bug tracker](https://github.com/ansible/ansible/issues/3716) and [changelog](https://github.com/ansible/ansible/blob/devel/CHANGELOG.md). This should solve your problem without compromising security (workaround using `host_key_checking=False`) or sacrificing speed (your workaround using paramiko). | How do I drive Ansible programmatically and concurrently? | [
"",
"python",
"concurrency",
"parallel-processing",
"ansible",
"ansible-runner",
""
] |
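The effect the accepted answer describes, where one global lock held for the duration of each module run serializes everything, is easy to reproduce in miniature with plain threads (this only models the timing behaviour; it is not ansible code):

```python
import threading
import time

def fake_module_run(lock):
    # Holding the lock for the whole "module execution" mimics the effect of
    # the failed known_hosts lookup described in the answer.
    with lock:
        time.sleep(0.1)

def run_hosts(num_hosts, shared_lock=None):
    """Run num_hosts fake modules in threads; return the total wall time."""
    threads = []
    start = time.monotonic()
    for _ in range(num_hosts):
        lock = shared_lock if shared_lock is not None else threading.Lock()
        t = threading.Thread(target=fake_module_run, args=(lock,))
        threads.append(t)
        t.start()
    for t in threads:
        t.join()
    return time.monotonic() - start

serialized = run_hosts(5, shared_lock=threading.Lock())  # one global lock: ~0.5 s
parallel = run_hosts(5)                                  # independent locks: ~0.1 s
print(serialized > parallel)  # True
```

Five "hosts" behind one shared lock take roughly five times as long as five independent ones, which matches the linear scaling observed in the timings above.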
I have this list of lists in Python:
```
[[100,XHS,0],
[100,34B,3],
[100,42F,1],
[101,XHS,2],
[101,34B,5],
[101,42F,2],
[102,XHS,1],
[102,34B,2],
[102,42F,0],
[103,XHS,0],
[103,34B,4],
[103,42F,2]]
```
and I would like to find the most efficient way (I'm dealing with a lot of data) to create a new list of lists using the last element from each list for each id (the first element).
So for the sample list above, my result would be:
```
[[0,3,1],
[2,5,2],
[1,2,0],
[0,4,2]]
```
How can I implement this in Python? Thanks | An itertools approach with the building blocks broken out - get last elements, group into threes, convert groups of 3 into a list...
```
from operator import itemgetter
from itertools import imap, izip
last_element = imap(itemgetter(-1), a)
in_threes = izip(*[iter(last_element)] * 3)
res = map(list, in_threes)
# [[0, 3, 1], [2, 5, 2], [1, 2, 0], [0, 4, 2]]
```
However, it looks like you want to "group" on the first element (instead of purely blocks of 3 consecutive items), so you can use `defaultdict` for this:
```
from collections import defaultdict
dd = defaultdict(list)
for el in a:
dd[el[0]].append(el[-1])
# defaultdict(<type 'list'>, {100: [0, 3, 1], 101: [2, 5, 2], 102: [1, 2, 0], 103: [0, 4, 2]})
``` | ```
new_list = []
temp_list = []
counter = 1
for x in list:
temp_list.append(x[-1])
if ((counter % 3) == 0):
new_list.append(temp_list)
temp_list = []
counter += 1
print new_list
``` | Using the last element from each list in Python | [
"",
"python",
""
] |
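Putting the accepted answer's `defaultdict` approach together with the question's sample data gives the expected result (string quotes added around XHS etc., since they are not valid bare names):

```python
from collections import defaultdict

data = [[100, "XHS", 0], [100, "34B", 3], [100, "42F", 1],
        [101, "XHS", 2], [101, "34B", 5], [101, "42F", 2],
        [102, "XHS", 1], [102, "34B", 2], [102, "42F", 0],
        [103, "XHS", 0], [103, "34B", 4], [103, "42F", 2]]

groups = defaultdict(list)
for row in data:
    groups[row[0]].append(row[-1])   # group last elements by the leading id

result = [groups[key] for key in sorted(groups)]
print(result)  # [[0, 3, 1], [2, 5, 2], [1, 2, 0], [0, 4, 2]]
```

Unlike counting rows in threes, grouping on the id also works when the rows are unsorted or the groups have uneven sizes.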
Can someone please specify what is the difference between `os.path.getmtime(path)` and `os.path.getctime(path)` in unix systems . As per the defnition in python docs:
`os.path.getmtime(path)`
> Return the time of last modification of path. The return value is a
> number giving the number of seconds since the epoch (see the time
> module). Raise os.error if the file does not exist or is inaccessible.
`os.path.getctime(path)`
> Return the system’s ctime which, on some systems (like Unix) is the
> time of the last change, and, on others (like Windows), is the
> creation time for path. The return value is a number giving the number
> of seconds since the epoch (see the time module). Raise os.error if
> the file does not exist or is inaccessible.
Does that basically mean they are the same thing when used on Unix systems?
```
#!/usr/bin/python
import os
print os.path.getmtime('File')
print os.path.getctime('FIle')
```
Both the prints fetch me the same value.
I am basically looking for the creation date of a file, rather than the last modification date. Is there a way to achieve this in Unix? | The mtime refers to the last time the file's contents were changed. This can be altered on Unix systems in various ways. Often, when you restore files from backup, the mtime is altered to indicate the last time the contents were changed before the backup was made.
The ctime indicates the last time the inode was altered. This cannot be changed. In the above example with the backup, the ctime will still reflect the time of file restoration. Additionally, ctime is updated when things like file permissions are changed.
Unfortunately, there's usually no way to find the original date of file creation. This is a limitation of the underlying filesystem. I believe the ext4 filesystem has added creation date to the inode, and Apple's HFS also supports it, but I'm not sure how you'd go about retrieving it in Python. (The C `stat` function and the corresponding `stat` command should show you that information on filesystems that support it.) | From the man page on stat, which `os.path.getmtime()` and `os.path.getctime()` both use on Unix systems:
> The field `st_mtime` is changed by file modifications, for example, by `mknod(2)`, `truncate(2)`, `utime(2)` and `write(2)` (of more than zero bytes). Moreover, `st_mtime` of a directory is changed by the creation or deletion of files in that directory. The `st_mtime` field is not changed for changes in owner, group, hard link count, or mode.
> ...
>
> The field `st_ctime` is changed by writing or by setting inode information (i.e., owner, group, link count, mode, etc.).
So no, these are not the same. | Difference between python - getmtime() and getctime() in unix system | [
"",
"python",
"python-2.6",
""
] |
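Both answers can be demonstrated directly: on a Unix filesystem, `chmod` touches the inode (ctime) but not the contents (mtime). A small sketch, assuming the test runs on Unix as the question does:

```python
import os
import stat
import tempfile
import time

fd, path = tempfile.mkstemp()
os.close(fd)

before = os.stat(path)
time.sleep(1.1)                      # timestamps can be coarse: wait a full second
os.chmod(path, stat.S_IRUSR)         # change permissions only, not contents
after = os.stat(path)

print(after.st_mtime == before.st_mtime)  # True: contents untouched
print(after.st_ctime > before.st_ctime)   # True: inode metadata changed
os.remove(path)
```

So the two values only coincide while nothing but content writes have happened to the file; they diverge as soon as metadata is touched.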
What's the proper way to tell a looping thread to stop looping?
I have a fairly simple program that pings a specified host in a separate `threading.Thread` class. In this class it sleeps 60 seconds, the runs again until the application quits.
I'd like to implement a 'Stop' button in my `wx.Frame` to ask the looping thread to stop. It doesn't need to end the thread right away, it can just stop looping once it wakes up.
Here is my `threading` class (note: I haven't implemented looping yet, but it would likely fall under the run method in PingAssets)
```
class PingAssets(threading.Thread):
def __init__(self, threadNum, asset, window):
threading.Thread.__init__(self)
self.threadNum = threadNum
self.window = window
self.asset = asset
def run(self):
config = controller.getConfig()
fmt = config['timefmt']
start_time = datetime.now().strftime(fmt)
try:
if onlinecheck.check_status(self.asset):
status = "online"
else:
status = "offline"
except socket.gaierror:
status = "an invalid asset tag."
msg =("{}: {} is {}. \n".format(start_time, self.asset, status))
wx.CallAfter(self.window.Logger, msg)
```
And in my wxPyhton Frame I have this function called from a Start button:
```
def CheckAsset(self, asset):
self.count += 1
thread = PingAssets(self.count, asset, self)
self.threads.append(thread)
thread.start()
``` | # Threaded stoppable function
Instead of subclassing `threading.Thread`, one can modify the function to allow
stopping by a flag.
We need an object, accessible to running function, to which we set the flag to stop running.
We can use `threading.currentThread()` object.
```
import threading
import time
def doit(arg):
t = threading.currentThread()
while getattr(t, "do_run", True):
print ("working on %s" % arg)
time.sleep(1)
print("Stopping as you wish.")
def main():
t = threading.Thread(target=doit, args=("task",))
t.start()
time.sleep(5)
t.do_run = False
if __name__ == "__main__":
main()
```
The trick is that additional properties can be attached to the running thread object. The solution builds
on two assumptions:
* the thread has a property "do\_run" with default value `True`
* the driving parent process can set the started thread's "do\_run" property to `False`.
Running the code, we get the following output:
```
$ python stopthread.py
working on task
working on task
working on task
working on task
working on task
Stopping as you wish.
```
## Pill to kill - using Event
Another alternative is to pass a `threading.Event` as a function argument. It is
`False` by default, but an external process can "set it" (to `True`), and the function can
learn about this using the `wait(timeout)` method.
We can `wait` with a zero timeout, but we can also use it as the sleeping timer (as done below).
```
def doit(stop_event, arg):
while not stop_event.wait(1):
print ("working on %s" % arg)
print("Stopping as you wish.")
def main():
pill2kill = threading.Event()
t = threading.Thread(target=doit, args=(pill2kill, "task"))
t.start()
time.sleep(5)
pill2kill.set()
t.join()
```
Edit: I tried this in Python 3.6. Note that `stop_event.wait()` called *without* a timeout blocks (and so does the while loop) until the event is set, while `stop_event.wait(timeout)`, as used above, returns the event's flag value as a boolean after at most `timeout` seconds. `stop_event.is_set()` can be used to check the flag without blocking at all.
## Stopping multiple threads with one pill
The advantage of the pill to kill is easier to see when we have to stop multiple threads
at once, as one pill will work for all of them.
The `doit` function will not change at all; only `main` handles the threads a bit differently.
```
def main():
pill2kill = threading.Event()
tasks = ["task ONE", "task TWO", "task THREE"]
def thread_gen(pill2kill, tasks):
for task in tasks:
t = threading.Thread(target=doit, args=(pill2kill, task))
yield t
threads = list(thread_gen(pill2kill, tasks))
for thread in threads:
thread.start()
time.sleep(5)
pill2kill.set()
for thread in threads:
thread.join()
``` | This has been asked before on Stack. See the following links:
* [Is there any way to kill a Thread in Python?](https://stackoverflow.com/questions/323972/is-there-any-way-to-kill-a-thread-in-python)
* [Stopping a thread after a certain amount of time](https://stackoverflow.com/questions/6524459/stopping-a-thread-python)
Basically you just need to set up the thread with a stop function that sets a sentinel value that the thread will check. In your case, you'll have something in your loop check the sentinel value to see if it has changed; if it has, the loop can break and the thread can die. | How to stop a looping thread in Python? | [
"",
"python",
"multithreading",
"wxpython",
""
] |
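The sentinel-value pattern from the question and answers above can be condensed into a small self-contained sketch. Class and attribute names here are illustrative, not from the original post; the `Event.wait()` call doubles as the sleep between iterations so a stop request takes effect quickly:

```python
import threading
import time

class StoppableWorker(threading.Thread):
    """Loops until stop() sets the sentinel flag."""

    def __init__(self, interval=0.01):
        threading.Thread.__init__(self)
        self._stop_requested = threading.Event()
        self.interval = interval
        self.iterations = 0

    def stop(self):
        # ask the loop to end; it finishes its current wait first
        self._stop_requested.set()

    def run(self):
        # wait() returns False until stop() is called, and also sleeps
        while not self._stop_requested.wait(self.interval):
            self.iterations += 1  # the periodic work (e.g. a ping) goes here

worker = StoppableWorker()
worker.start()
time.sleep(0.2)
worker.stop()   # e.g. bound to a wx "Stop" button handler
worker.join()
print(worker.iterations)
```

With a real 60-second interval the same `stop()` call would interrupt the sleep almost immediately, which is the main advantage over a plain `time.sleep(60)`.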
This is my Data
```
Id Name Amt
1 ABC 20
2 XYZ 30
3 ABC 25
4 PQR 50
5 XYZ 75
6 PQR 40
```
I want the last record by every particular Name like :
```
3 ABC 25
5 XYZ 75
6 PQR 40
```
I tried GROUP BY, but I am missing something.
```
SELECT PatientID, Balance, PReceiptNo
FROM tblPayment
GROUP BY PatientID, Balance, PReceiptNo
``` | Something like this should work:
```
SELECT p1.*
FROM tblPayment p1
LEFT JOIN tblPayment p2 ON p1.Name = p2.Name AND p1.Id < p2.Id
WHERE p2.Id IS NULL;
```
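As a quick in-memory sanity check of this anti-join, here is the same query run through Python's built-in `sqlite3` module with the sample data from the question (an `ORDER BY` is added only to make the output deterministic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblPayment (Id INTEGER, Name TEXT, Amt INTEGER)")
conn.executemany("INSERT INTO tblPayment VALUES (?, ?, ?)",
                 [(1, "ABC", 20), (2, "XYZ", 30), (3, "ABC", 25),
                  (4, "PQR", 50), (5, "XYZ", 75), (6, "PQR", 40)])
# a row survives only if no later row (higher Id) exists for the same Name
rows = conn.execute("""
    SELECT p1.* FROM tblPayment p1
    LEFT JOIN tblPayment p2 ON p1.Name = p2.Name AND p1.Id < p2.Id
    WHERE p2.Id IS NULL
    ORDER BY p1.Id
""").fetchall()
print(rows)  # [(3, 'ABC', 25), (5, 'XYZ', 75), (6, 'PQR', 40)]
```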
See [this SQLFiddle](http://sqlfiddle.com/#!2/3d8a5/2) | Should be similar to:
```
SELECT
id,
name,
amt
FROM
myTable mt1
where mt1.id = (
SELECT
MAX(id)
FROM myTable mt2
WHERE mt2.name = mt1.name
)
``` | find the last record by Person | [
"",
"sql",
""
] |
I have two tables:
```
Table1 has columns A, B, C, D, E, F, G
Table2 has columns G, H, I, J, K, L, M, N
```
I want to join those two tables on column G. However, to avoid duplicate columns (an ambiguous G),
I have to write the query like below.
```
select
t1.*,
t2.H,
t2.I,
t2.J,
t2.K,
t2.L,
t2.M,
t2.N
from Table1 t1
inner join Table2 t2
on t1.G = t2.G
```
I have already used t1.\* to try to avoid typing every column name from table1; however, I still have to type all the columns EXCEPT the joined column G, which is a complete disaster if you have a table with many columns...
Is there a handy way somewhere that we can do
```
select
t1.*
t2.*(except G)
....
```
Thanks a lot!
I know I can print out all the column names and then copy and paste; however, the query is still too long to debug even if I don't have to type it in manually. | You can use a [*natural join*](http://en.wikipedia.org/wiki/Join_%28SQL%29#Natural_join):
> A natural join is a type of equi-join where the join predicate arises
> implicitly by comparing all columns in both tables that have the same
> column-names in the joined tables. The resulting joined table contains
> **only one column for each pair of equally named columns**.
```
SELECT * FROM T1 NATURAL JOIN T2;
```
Please checkout [**this demo**](http://www.sqlfiddle.com/#!2/f291d/1).
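SQLite also supports `NATURAL JOIN`, so the behaviour is easy to verify in-memory from Python; the schemas below mirror the question, while the data values are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE T1 (A, B, C, D, E, F, G);
    CREATE TABLE T2 (G, H, I, J, K, L, M, N);
    INSERT INTO T1 VALUES (1, 2, 3, 4, 5, 6, 7);
    INSERT INTO T2 VALUES (7, 8, 9, 10, 11, 12, 13, 14);
""")
cur = conn.execute("SELECT * FROM T1 NATURAL JOIN T2")
cols = [d[0] for d in cur.description]
print(cols)           # the shared column G appears only once
print(cur.fetchone()) # (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14)
```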
Note, however, that `NATURAL JOIN`s are dangerous, and their use is therefore strongly discouraged. The danger comes from inadvertently adding a new column with the same name as a column in the other table. | It's usually *strongly discouraged* to use `SELECT * FROM` in anything but ad-hoc queries for testing.
The reason is that table schemas change, which can break code that assumes the presence of a certain column, or the order of columns in a table.
Even if it makes your query quite long, I'd suggest specifying each and every column you want to return in your dataset.
However, to answer your question: no, there is no way to specify *every column except* one in a SELECT clause. | SQL all columns except | [
"",
"mysql",
"sql",
"hive",
""
] |
Is there a debugger that can debug the Python virtual machine **while it is running Python code**, similar to the way that `GDB` works with C/C++? I have searched online and have come across `pdb`, but this steps through the code executed ***by*** the Python interpreter, not the Python interpreter as it's running the program. | If you're looking to debug Python at the bytecode level, that's exactly what `pdb` does.
If you're looking to debug the CPython reference interpreter… as icktoofay's answer says, it's just a C program like any other, so you can debug it the same way as any other C program. (And you can get the source, compile it with extra debugging info, etc. if you want, too.)
You almost certainly want to look at [EasierPythonDebugging](http://fedoraproject.org/wiki/Features/EasierPythonDebugging), which shows how to set up a bunch of GDB helpers (which are Python scripts, of course) to make your life easier. Most importantly: The Python stack is tightly bound to the C stack, but it's a big mess to try to map things manually. With the right helpers, you can get stack traces, frame dumps, etc. in Python terms instead of or in parallel with the C terms with no effort. Another big benefit is the `py-print` command, which can look up a Python name (in nearly the same way a live interpreter would), call its `__repr__`, and print out the result (with proper error handling and everything so you don't end up crashing your `gdb` session trying to walk the `PyObject*` stuff manually).
If you're looking for some level in between… well, there *is* no level in between. (Conceptually, there are multiple layers to the interpreter, but it's all just C code, and it all looks alike to gdb.)
If you're looking to debug *any* Python interpreter, rather than specifically CPython, you might want to look at PyPy. It's written in a Python-like language called RPython, and there are various ways to use `pdb` to debug the (R)Python interpreter code, although it's not as easy as it could be (unless you use a flat-translated PyPy, which will probably run about 100x too slow to be tolerable). There are also GDB debug hooks and scripts for PyPy just like the ones for CPython, but they're not as complete. | The reference implementation of Python, CPython, is written in C. You can use GDB to debug it as you would debug any other program written in C.
That said, Python does have a few little helpers for use in GDB [buried under `Misc/gdbinit`](http://hg.python.org/cpython/file/2.7/Misc/gdbinit). It's got comments to describe what each command does, but I'll repeat them here for convenience:
* **`pyo`:** Dump a `PyObject *`.
* **`pyg`:** Dump a `PyGC_Head *`.
* **`pylocals`:** Print the local variables of the current Python stack frame.
* **`lineno`:** Get the current Python line number.
* **`pyframe`:** Print the source file name, line, and function.
* **`pyframev`:** `pyframe` + `pylocals`
* **`printframe`:** `pyframe` if within `PyEval_EvalFrameEx`; built-in `frame` otherwise
* **`pystack`:** Print the Python stack trace.
* **`pystackv`:** Print the Python stack trace with local variables.
* **`pu`:** Print a Unicode string.
It looks like the Fedora project has also assembled [their own collection of commands](https://fedoraproject.org/wiki/Features/EasierPythonDebugging) to assist with debugging which you may want to look at, too. | Debugging the python VM | [
"",
"python",
"debugging",
"python-2.7",
""
] |
I am having trouble with my small film database tag/category query.
My table is:
```
ID(index),Name(film name),category
```
One movie can have multiple categories.
```
SELECT Name FROM categorytable WHERE category ='Action';
```
That works fine, but if I want other tags I get an empty cursor:
```
SELECT Name FROM categorytable WHERE category ='Action' AND category ='Sci-Fi';
```
Example select:
```
1 Film001 Action
2 Film001 Sci-Fi
3 Film002 Action
```
EDIT:
My home database:
```
ID|NAMEFILM|DESCRIPTION
```
And complete query is:
```
SELECT DATABASEFILM.NAMEFILM , DATABASEFILM.DESCRIPTION , NAME from DATABASEFILM , CATEGORY where DATABASEFILM.NAMEFILM=NAME AND category=(SELECT NAME FROM CATEGORY WHERE category ='Action');
``` | The reason your query doesn't work is because each row has only one category. Instead, you need to do aggregation. I prefer doing the conditions in the `having` clause, because it is a general approach.
```
SELECT Name
FROM categorytable
group by Name
having sum(case when category ='Action' then 1 else 0 end) > 0 and
sum(case when category ='Sci-Fi' then 1 else 0 end) > 0;
```
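A condensed, runnable check of this `HAVING` technique, using Python's built-in `sqlite3` module and the sample rows from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE categorytable (ID INTEGER, Name TEXT, category TEXT)")
conn.executemany("INSERT INTO categorytable VALUES (?, ?, ?)",
                 [(1, "Film001", "Action"), (2, "Film001", "Sci-Fi"),
                  (3, "Film002", "Action")])
# each conditional sum counts how many rows of that category a film has
rows = conn.execute("""
    SELECT Name FROM categorytable
    GROUP BY Name
    HAVING sum(case when category = 'Action' then 1 else 0 end) > 0
       AND sum(case when category = 'Sci-Fi' then 1 else 0 end) > 0
""").fetchall()
print(rows)  # [('Film001',)] -- only Film001 has both categories
```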
Each clause in the `having` is testing for the presence of one category. If, for instance, you changed the question to be "Action films that are *not* Sci-Fi", then you would change the `having` clause by making the second condition equal to 0:
```
having sum(case when category ='Action' then 1 else 0 end) > 0 and
sum(case when category ='Sci-Fi' then 1 else 0 end) = 0;
``` | You can use the OR clause, or if you have multiple categories it will probably be easier to use IN
So either
```
SELECT Name FROM categorytable WHERE category ='Action' OR category ='Sci-Fi'
```
Or using `IN`
```
SELECT Name
FROM categorytable
WHERE category IN ('Action', 'Sci-Fi', 'SomeOtherCategory ')
```
Using IN should compile to the same thing, but it's easier to read if you start adding more then just two categories. | Select multiple film category | [
"",
"sql",
"sqlite",
"select",
""
] |
I'm really new to databases so please bear with me.
I have a website where people can go to request tickets to an upcoming concert. Users can request tickets for either New York or Dallas. Similarly, for each of those locales, they can request either a VIP ticket or a regular ticket.
I need a database to keep track of how many people have requested each type of ticket (`VIP and NY` or `VIP and Dallas` or `Regular and NY` or `Regular and Dallas`). This way, I won't run out of tickets.
What schema should I use for this database? Should I have one row and then 4 columns (VIP&NY, VIP&Dallas, Regular&NY and Regular&Dallas)? The problem with this is it doesn't seem very flexible, thus I'm not sure if it's good design. | You should have one column containing a quantity, a column that specifies the type (VIP), and another that specifies the city. | To make it flexible you would do:
```
Table:
location
Columns:
location_id integer
description varchar
Table
type
Columns:
type_id integer
description varchar
table
purchases
columns:
purchase_id integer
type_id integer
location_id integer
```
This way you can add more cities and more types, and you always insert them in purchases.
When you want to know how many you sold, you count them. | Database schema for storing ints | [
"",
"mysql",
"sql",
"database",
"schema",
""
] |
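The normalized schema from the second answer above can be exercised end-to-end with Python's built-in `sqlite3`; the sample locations, types, and purchases below are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE location (location_id INTEGER PRIMARY KEY, description TEXT);
    CREATE TABLE type (type_id INTEGER PRIMARY KEY, description TEXT);
    CREATE TABLE purchases (purchase_id INTEGER PRIMARY KEY,
                            type_id INTEGER, location_id INTEGER);
    INSERT INTO location VALUES (1, 'New York'), (2, 'Dallas');
    INSERT INTO type VALUES (1, 'VIP'), (2, 'Regular');
    INSERT INTO purchases (type_id, location_id) VALUES (1, 1), (1, 1), (2, 2);
""")
# counting sold tickets per (city, type) is a simple GROUP BY
rows = sorted(conn.execute("""
    SELECT l.description, t.description, COUNT(*)
    FROM purchases p
    JOIN location l ON l.location_id = p.location_id
    JOIN type t ON t.type_id = p.type_id
    GROUP BY l.description, t.description
""").fetchall())
print(rows)  # [('Dallas', 'Regular', 1), ('New York', 'VIP', 2)]
```

Adding a third city or ticket type is then just one more row in `location` or `type`, with no schema change.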
I am trying to get a value from a list in a dict, and I'm not sure how it can be accessed. Here's what I did, and I'm getting an error:
**TypeError: string indices must be integers, not str**
```
lloyd = {
"name": "Lloyd",
"homework": [90.0, 97.0, 75.0, 92.0],
"quizzes": [88.0, 40.0, 94.0],
"tests": [75.0, 90.0]
}
alice = {
"name": "Alice",
"homework": [100.0, 92.0, 98.0, 100.0],
"quizzes": [82.0, 83.0, 91.0],
"tests": [89.0, 97.0]
}
tyler = {
"name": "Tyler",
"homework": [0.0, 87.0, 75.0, 22.0],
"quizzes": [0.0, 75.0, 78.0],
"tests": [100.0, 100.0]
}
for i in alice:
print i['quizzes']
```
I'm learning Python, and I had a few more queries; if anyone could help, it would be a great help in learning. Thanks in advance.
1) How to access the key 'homework' & sum its value in list?
2) How to access dict alice & sum its values of all keys? homework + quizzes + tests with sum() function
3) How to access the key 'lloyd' & get its len() of a list? say 'homework'
this is what I tried & I got the same error for this one as well:
```
def average(x):
for a in x:
return sum(a['homework']) / len(a['homework'])
return sum(a['quizzes']) / len(a['quizzes'])
return sum(a['tests']) / len(a['tests'])
```
If anyone could please clear my doubts on those above 3 questions. | ### question 1) How to access the key 'homework' & sum its values in a list?
```
>>> sum(lloyd['homework'])
354.0
```
### question 2) homework + quizzes + tests
```
def findsum(x):
return sum([sum(x['homework']),sum(x['quizzes']),sum(x['tests'])])
>>> findsum(lloyd)
741.0
```
### question 3) How to access the key 'lloyd' & get its len() of a list? say 'homework'
```
>>> len(lloyd['homework'])
4
``` | You are looping over the *keys* of `alice`, not the values. Your keys are strings. Even if you were looping over the values, none of the values in `alice` can be indexed by `'quizzes'`. You could just print `alice['quizzes']`, but that is probably not what you wanted to start with.
You want to put all your named dictionaries into one 'parent' dictionary instead:
```
students = {
"lloyd": {
"name": "Lloyd",
"homework": [90.0, 97.0, 75.0, 92.0],
"quizzes": [88.0, 40.0, 94.0],
"tests": [75.0, 90.0]
},
"alice": {
"name": "Alice",
"homework": [100.0, 92.0, 98.0, 100.0],
"quizzes": [82.0, 83.0, 91.0],
"tests": [89.0, 97.0]
},
"tyler": {
"name": "Tyler",
"homework": [0.0, 87.0, 75.0, 22.0],
"quizzes": [0.0, 75.0, 78.0],
"tests": [100.0, 100.0]
},
}
```
Now you can loop over *this* dictionary and access various keys per student:
```
for student_data in students.values():
print student_data['quizzes']
```
Note the use of `.values()` here to loop over *just* the values of the `students` dictionary, as we don't use the keys here.
Use the same loop to calculate your averages, but remember that a function *ends* when a `return` statement is encountered. You can always return multiple values from a function by returning a tuple:
```
def average(student):
homework = ...
quizzes = ...
tests = ....
return (homework, quizzes, tests)
```
or you could use a dictionary, for example. | string indices must be integers not str dictionary | [
"",
"python",
"python-2.7",
""
] |
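One loose end from the question above: the posted `average` function can never work as written, because a function ends at its first `return`, so only the homework average would ever be computed. A fixed sketch (the data is copied from the question; the function name is illustrative) could be:

```python
def averages(student):
    # compute all three section averages first; a single return ends the function
    return tuple(sum(student[key]) / len(student[key])
                 for key in ("homework", "quizzes", "tests"))

lloyd = {
    "name": "Lloyd",
    "homework": [90.0, 97.0, 75.0, 92.0],
    "quizzes": [88.0, 40.0, 94.0],
    "tests": [75.0, 90.0],
}
print(averages(lloyd))  # (88.5, 74.0, 82.5)
```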
I am trying to understand how to create a query to filter out some results based on an inner join.
Consider the following data:
```
formulation_batch
-----
id project_id name
1 1 F1.1
2 1 F1.2
3 1 F1.3
4 1 F1.all
formulation_batch_component
-----
id formulation_batch_id component_id
1 1 1
2 2 2
3 3 3
4 4 1
5 4 2
6 4 3
7 4 4
```
I would like to select all formulation\_batch records with a project\_id of 1 that have a formulation\_batch\_component with a component\_id of 1 or 2. So I run the following query:
```
SELECT formulation_batch.*
FROM formulation_batch
INNER JOIN formulation_batch_component
ON formulation_batch.id = formulation_batch_component.formulation_batch_id
WHERE formulation_batch.project_id = 1
AND ((formulation_batch_component.component_id = 2
OR formulation_batch_component.component_id = 1 ))
```
However, this returns a duplicate entry:
```
1;"F1.1"
2;"F1.2"
4;"F1.all"
4;"F1.all"
```
Is there a way to modify this query so that I only get back the unique formulation\_batch records which match the criteria?
EG:
```
1;"F1.1"
2;"F1.2"
4;"F1.all"
```
Thanks for your time! | One way would be to use `distinct`:
```
SELECT distinct "formulation_batch".*
FROM "formulation_batch"
INNER JOIN "formulation_batch_component"
ON "formulation_batch"."id" = "formulation_batch_component"."formulation_batch_id"
WHERE "formulation_batch"."project_id" = 1
AND (("formulation_batch_component"."component_id" = 2
OR "formulation_batch_component"."component_id" = 1 ))
``` | In this case it is possible to apply the `distinct` before the `join` possibly making it more performant:
```
select fb.*
from
formulation_batch fb
inner join
(
        select distinct formulation_batch_id
        from formulation_batch_component
        where component_id in (1, 2)
    ) fbc on fb.id = fbc.formulation_batch_id
where fb.project_id = 1
```
Notice the use of aliases for the table names to make the query clearer. The `in` operator is also very handy. The use of double quotes around those identifiers is not necessary. | how to prevent duplicates with inner join query (Postgres) | [
"",
"sql",
"postgresql",
"inner-join",
""
] |
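Both approaches in the answers above are easy to check in-memory from Python with the built-in `sqlite3` module. This sketch uses the data from the question and the `DISTINCT`-plus-`IN` form; an `ORDER BY` is added only for stable output:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE formulation_batch (id INTEGER, project_id INTEGER, name TEXT);
    CREATE TABLE formulation_batch_component
        (id INTEGER, formulation_batch_id INTEGER, component_id INTEGER);
    INSERT INTO formulation_batch VALUES
        (1, 1, 'F1.1'), (2, 1, 'F1.2'), (3, 1, 'F1.3'), (4, 1, 'F1.all');
    INSERT INTO formulation_batch_component VALUES
        (1, 1, 1), (2, 2, 2), (3, 3, 3), (4, 4, 1), (5, 4, 2), (6, 4, 3), (7, 4, 4);
""")
# F1.all matches through two components, but DISTINCT collapses it to one row
rows = conn.execute("""
    SELECT DISTINCT fb.id, fb.name
    FROM formulation_batch fb
    JOIN formulation_batch_component fbc ON fb.id = fbc.formulation_batch_id
    WHERE fb.project_id = 1 AND fbc.component_id IN (1, 2)
    ORDER BY fb.id
""").fetchall()
print(rows)  # [(1, 'F1.1'), (2, 'F1.2'), (4, 'F1.all')]
```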
I am making a simple program for fun. This should take input for X amount of files to be filled with Y amount of random 0's and 1's.
When I run this I would like to have 2 files both filled with 20 random 0's and 1's in each file. At the moment when I run this only the first file gets filled and the second one is left empty.
I think it has something to do with my second loop but I am not sure, how can I get this to work?
```
import random
fileamount = int(raw_input("How many files should I make? > "))
amount = int(raw_input("How many characters in the files? > "))
print "I will now make %r files with %r characters in them!\n" % (fileamount, amount)
s1 = 0
s2 = 0
while s2 < fileamount:
s2 = s2 + 1
textfile = file('a'+str(s2), 'wt')
while s1 < amount:
s1 = s1 + 1
textfile.write(str(random.randint(0,1)))
``` | In addition to resetting the value of `s1`, make sure you close your files. Sometimes, the output does not get written to the file if the program ends before the buffer is written to disk.
You can guarantee the file is closed by using a [`with` statement](http://effbot.org/zone/python-with-statement.htm).
The file will be closed when Python's flow of execution leaves the `with` suite.
```
import random
fileamount = int(raw_input("How many files should I make? > "))
amount = int(raw_input("How many characters in the files? > "))
print "I will now make %r files with %r characters in them!\n" % (fileamount, amount)
for s2 in range(fileamount):
with open('a'+str(s2), 'wt') as textfile:
for s1 in range(amount):
textfile.write(str(random.randint(0,1)))
``` | You don't reinit `s1` to `0`. So the second time there will be nothing written to the file.
```
import random
fileamount = int(raw_input("How many files should I make? > "))
amount = int(raw_input("How many characters in the files? > "))
print "I will now make %r files with %r characters in them!\n" % (fileamount, amount)
s2 = 0
while s2 < fileamount:
s2 = s2 + 1
textfile = open('a'+str(s2), 'wt') #use open
s1 = 0
while s1 < amount:
s1 = s1 + 1
textfile.write(str(random.randint(0,1)))
textfile.close() #don't forget to close
``` | Nested loop and File io | [
"",
"python",
"loops",
"file-io",
""
] |
I've seen a couple similar threads, but attempting to escape characters isn't working for me.
In short, I have a list of strings that I am iterating through, aiming to build a query that incorporates however many strings are in the list into a 'SELECT ... LIKE' query.
Here is my code (Python)
```
def myfunc(self, cursor, var_list):
query = "Select var FROM tble_tble WHERE"
substring = []
length = len(var_list)
iter = length
for var in var_list:
if (iter != length):
substring.append(" OR tble_tble.var LIKE %'%s'%" % var)
else:
substring.append(" tble_tble.var LIKE %'%s'%" % var)
iter = iter - 1
for str in substring:
query = query + str
...
```
That should be enough. If it wasn't obvious from my previously stated claims, I am trying to ***build a query which runs the SQL 'LIKE'*** comparison across a *list* of relevant strings.
Thanks for your time, and feel free to ask any questions for clarification. | First, your problem has nothing to do with SQL. Throw away all the SQL-related code and do this:
```
var = 'foo'
" OR tble_tble.var LIKE %'%s'%" % var
```
You'll get the same error. It's because you're trying to do `%`-formatting with a string that has stray `%` signs in it. So, it's trying to figure out what to do with `%'`, and failing.
---
You can escape these stray `%` signs like this:
```
" OR tble_tble.var LIKE %%'%s'%%" % var
```
However, that probably isn't what you want to do.
---
First, consider using `{}`-formatting instead of `%`-formatting, *especially* when you're trying to build formatted strings with `%` characters all over them. It avoids the need for escaping them. So:
```
" OR tble_tble.var LIKE %'{}'%".format(var)
```
---
But, more importantly, you shouldn't be doing this formatting at all. Don't format the values into a SQL string, just pass them as SQL parameters. If you're using sqlite3, use `?` parameters markers; for MySQL, `%s`; for a different database, read its docs. So:
```
" OR tble_tble.var LIKE %'?'%"
```
There's nothing that can go wrong here, and nothing that needs to be escaped. When you call `execute` with the query string, pass `[var]` as the args.
This is a lot simpler, and often faster, and neatly avoids a lot of silly bugs dealing with edge cases, and, most important of all, it protects against SQL injection attacks.
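For instance, with `sqlite3` the `%` wildcards go into the parameter *values*, not into the SQL string itself. A sketch of the OP's loop rewritten this way (the table name follows the question; the sample data is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tble_tble (var TEXT)")
conn.executemany("INSERT INTO tble_tble VALUES (?)",
                 [("alpha",), ("beta",), ("gamma",)])

def select_like(conn, var_list):
    # one "var LIKE ?" per search term, OR-ed together;
    # each parameter value carries its own % wildcards
    clause = " OR ".join("var LIKE ?" for _ in var_list)
    params = ["%" + var + "%" for var in var_list]
    return conn.execute(
        "SELECT var FROM tble_tble WHERE " + clause, params).fetchall()

print(select_like(conn, ["alph", "amm"]))  # [('alpha',), ('gamma',)]
```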
The [sqlite3 docs](http://docs.python.org/3.3/library/sqlite3.html) explain this in more detail:
> Usually your SQL operations will need to use values from Python variables. You shouldn’t assemble your query using Python’s string operations… Instead, use the DB-API’s parameter substitution. Put `?` as a placeholder wherever you want to use a value, and then provide a tuple of values as the second argument to the cursor’s `execute()` method. (Other database modules may use a different placeholder, such as %s or :1.) …
---
Finally, as others have pointed out in comments, with `LIKE` conditions, you have to put the percent signs *inside* the quotes, not *outside*. So, no matter which way you solve this, you're going to have another problem to solve. But that one should be a lot easier. (And if not, you can always come back and ask another question.) | You need to escape the `%` signs, and you need to change the quotes so that both `%` wildcards end up inside them, to generate proper SQL:
```
" OR tble_tble.var LIKE '%%%s%%'"
```
For example:
```
var = "abc"
print " OR tble_tble.var LIKE '%%%s%%'" % var
```
It will be translated to:
```
OR tble_tble.var LIKE '%abc%'
``` | "ValueError: Unsupported format character ' " ' (0x22) at..." in Python / String | [
"",
"python",
"regex",
"string",
"list",
""
] |
I have recently had a few problems with my Python installation, and as a result I have just reinstalled Python and am trying to get all my addons working correctly as well. I’m going to look at virtualenv afterwards to see if I can prevent this from happening again.
When I type `which python` into terminal I now get
```
/Library/Frameworks/Python.framework/Versions/2.7/bin/python
```
I understand this to be the correct location and now want to get all the rest of my addons installed correctly as well.
However, after installing pip via `sudo easy_install pip` and typing `which pip` I get
```
/usr/local/bin/pip
```
Is this correct? I would have thought it should reflect the below
```
/Library/Python/2.7/site-packages/
```
There is a folder in here called pip-1.4-py2.7.egg which was not there prior to installation, but the above path does not give me any confidence.
Where should pip and my other addons such as Distribute, Flask and Boto be installed if I want to set this up correctly?
Mac OSX 10.7, Python 2.7 | Since `pip` is an executable and `which` returns the paths of executables found in your environment, this is correct. The pip module is installed in site-packages, but the executable is installed in bin. | Modules go in `site-packages` and executables go in your system's executable path. For your environment, this path is `/usr/local/bin/`.
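A quick way to confirm both locations on any machine is the stdlib `sysconfig` module (the exact paths printed will of course differ per installation):

```python
import sysconfig

# packages and modules are installed under "purelib" (a site-packages dir);
# console scripts such as pip itself go under "scripts" (a bin/ dir)
print(sysconfig.get_path("purelib"))
print(sysconfig.get_path("scripts"))
```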
To avoid having to deal with this, simply use `easy_install`, `distribute` or `pip`. These tools know which files need to go where. | Python: PIP install path, what is the correct location for this and other addons? | [
"",
"python",
"python-2.7",
"pip",
""
] |
What is the correct way to write an SQL Query so I can use the output of a function that I have used in the Select Statement in the Where clause?
Data Table:
```
ID Count_ID
111 2
111 2
222 3
222 3
222 3
333 1
```
Query:
```
Select ID, Count(Table1.ID) As Count_ID
From Table1
Where Count_ID = 3
Group By ID
```
It gives me invalid column name currently in the WHERE clause for Count\_ID. | There's a circular dependency on your filtering. You want to only select records where the count is 3, but you must count them before you can determine this. This means that you need a HAVING clause rather than a WHERE clause (to filter on an aggregate function, you always need a HAVING clause).
Furthermore, you can't use an aliased column name for an aggregate function in a WHERE or HAVING clause. You have to repeat the function in the filtering:
```
Select ID, Count(ID) As Count_ID
From Table1
Group By ID
HAVING Count(ID) = 3;
``` | In this case, because you're referencing an aggregate function and a grouping, you have to use a `HAVING` clause.
```
Select ID, Count(Table1.ID) As Count_ID
From Table1
Group By ID
Having Count(Table1.ID) = 3
``` | Using output of function from select in where clause | [
"",
"sql",
""
] |
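The `HAVING` filter from the answers above can be verified in-memory from Python with the built-in `sqlite3` module, using the sample data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Table1 (ID INTEGER)")
conn.executemany("INSERT INTO Table1 VALUES (?)",
                 [(111,), (111,), (222,), (222,), (222,), (333,)])
# the aggregate is repeated in HAVING; the Count_ID alias is only for output
rows = conn.execute("""
    SELECT ID, COUNT(ID) AS Count_ID
    FROM Table1
    GROUP BY ID
    HAVING COUNT(ID) = 3
""").fetchall()
print(rows)  # [(222, 3)] -- only ID 222 appears exactly three times
```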
Assume you have a numpy array as `array([[5],[1,2],[5,6,7],[5],[5]])`.
Is there a function, such as `np.where`, that can be used to return all row indices where `[5]` is the row value? For example, in the array above, the returned values should be `[0, 3, 4]` indicating the `[5]` row numbers.
Please note that each row in the array can differ in length.
Thanks folks, you all deserve the best answer, but I gave the green mark to the first one :) | This should do it:
```
[i[0] for i,v in np.ndenumerate(ar) if v == [5]]
=> [0, 3, 4]
``` | If you check `ndim` of your array you will see that it is actually not a multi-dimensional array, but a `1d` array of list objects.
You can use the following list comprehension to get the indices where 5 appears:
```
[i[0] for i,v in np.ndenumerate(a) if 5 in v]
#[0, 2, 3, 4]
```
Or the following list comprehension to get the indices where the list is exactly `[5]`:
```
[i[0] for i,v in np.ndenumerate(a) if v == [5]]
#[0, 3, 4]
``` | np.where equivalent for multi-dimensional numpy arrays | [
"",
"python",
"arrays",
"numpy",
""
] |
Working with Python 2.7, I'm wondering what real advantage there is in using the type `unicode` instead of `str`, as both of them seem to be able to hold Unicode strings. Is there any special reason, apart from being able to set Unicode codes in `unicode` strings using the escape char `\`?
Executing a module with:
```
# -*- coding: utf-8 -*-
a = 'á'
ua = u'á'
print a, ua
```
Results in: á, á
More testing using Python shell:
```
>>> a = 'á'
>>> a
'\xc3\xa1'
>>> ua = u'á'
>>> ua
u'\xe1'
>>> ua.encode('utf8')
'\xc3\xa1'
>>> ua.encode('latin1')
'\xe1'
>>> ua
u'\xe1'
```
So, the `unicode` string seems to be encoded using `latin1` instead of `utf-8` and the raw string is encoded using `utf-8`? I'm even more confused now! :S | `unicode` is meant to handle *text*. Text is a sequence of **code points** which *may be bigger than a single byte*. Text can be *encoded* in a specific encoding to represent the text as raw bytes (e.g. `utf-8`, `latin-1`...).
Note that `unicode` *is not encoded*! The internal representation used by python is an implementation detail, and you shouldn't care about it as long as it is able to represent the code points you want.
On the contrary `str` in Python 2 is a plain sequence of *bytes*. It does not represent text!
You can think of `unicode` as a general representation of some text, which can be encoded in many different ways into a sequence of binary data represented via `str`.
*Note: In Python 3, `unicode` was renamed to `str` and there is a new `bytes` type for a plain sequence of bytes.*
Some differences that you can see:
```
>>> len(u'à') # a single code point
1
>>> len('à') # by default utf-8 -> takes two bytes
2
>>> len(u'à'.encode('utf-8'))
2
>>> len(u'à'.encode('latin1')) # in latin1 it takes one byte
1
>>> print u'à'.encode('utf-8') # terminal encoding is utf-8
à
>>> print u'à'.encode('latin1') # it cannot understand the latin1 byte
�
```
Note that using `str` you have lower-level control over the individual bytes of a specific encoding representation, while using `unicode` you can only control at the code-point level. For example you can do:
```
>>> 'àèìòù'
'\xc3\xa0\xc3\xa8\xc3\xac\xc3\xb2\xc3\xb9'
>>> print 'àèìòù'.replace('\xa8', '')
à�ìòù
```
What before was valid UTF-8, isn't anymore. Using a unicode string you cannot operate in such a way that the resulting string isn't valid unicode text.
You can remove a code point, replace a code point with a different code point etc. but you cannot mess with the internal representation. | Unicode and encodings are completely different, unrelated things.
# Unicode
Assigns a numeric ID to each character:
* 0x41 → A
* 0xE1 → á
* 0x414 → Д
So, Unicode assigns the number 0x41 to A, 0xE1 to á, and 0x414 to Д.
Even the little arrow → I used has its Unicode number, it's 0x2192. And even emojis have their Unicode numbers: 😂 is 0x1F602.
You can look up the Unicode numbers of all characters in [this table](https://en.wikibooks.org/wiki/Unicode/Character_reference/0000-0FFF). In particular, you can find the first three characters above [here](https://en.wikibooks.org/wiki/Unicode/Character_reference/0000-0FFF), the arrow [here](https://en.wikibooks.org/wiki/Unicode/Character_reference/2000-2FFF), and the emoji [here](https://en.wikibooks.org/wiki/Unicode/Character_reference/1F000-1FFFF).
These numbers assigned to all characters by Unicode are called **code points**.
**The purpose of all this** is to provide a means to unambiguously refer to each character. For example, if I'm talking about 😂, instead of saying *"you know, this laughing emoji with tears"*, I can just say, *Unicode code point 0x1F602*. Easier, right?
*Note that Unicode code points are usually formatted with a leading `U+`, then the hexadecimal numeric value padded to at least 4 digits. So, the above examples would be U+0041, U+00E1, U+0414, U+2192, U+1F602.*
Unicode code points range from U+0000 to U+10FFFF. That is 1,114,112 numbers. 2048 of these numbers are used for [surrogates](https://en.wikipedia.org/wiki/Universal_Character_Set_characters#Surrogates), thus, there remain 1,112,064. This means, Unicode can assign a unique ID (code point) to 1,112,064 distinct characters. Not all of these code points are assigned to a character yet, and Unicode is extended continuously (for example, when new emojis are introduced).
**The important thing to remember is that all Unicode does is to assign a numerical ID, called code point, to each character for easy and unambiguous reference.**
# Encodings
Map characters to bit patterns.
These bit patterns are used to represent the characters in computer memory or on disk.
There are many different encodings that cover different subsets of characters. In the English-speaking world, the most common encodings are the following:
### [ASCII](https://en.wikipedia.org/wiki/ASCII)
Maps [128 characters](https://en.wikipedia.org/wiki/ASCII#Character_set) (code points U+0000 to U+007F) to bit patterns of length 7.
Example:
* a → 1100001 (0x61)
You can see all the mappings in this [table](https://en.wikipedia.org/wiki/ASCII#Character_set).
### [ISO 8859-1 (aka Latin-1)](https://en.wikipedia.org/wiki/ISO/IEC_8859-1)
Maps [191 characters](https://en.wikipedia.org/wiki/ISO/IEC_8859-1#Code_page_layout) (code points U+0020 to U+007E and U+00A0 to U+00FF) to bit patterns of length 8.
Example:
* a → 01100001 (0x61)
* á → 11100001 (0xE1)
You can see all the mappings in this [table](https://en.wikipedia.org/wiki/ISO/IEC_8859-1#Code_page_layout).
### [UTF-8](https://en.wikipedia.org/wiki/UTF-8)
Maps [1,112,064 characters](https://en.wikipedia.org/wiki/UTF-8#cite_note-1) (all existing Unicode code points) to bit patterns of either length 8, 16, 24, or 32 bits (that is, 1, 2, 3, or 4 bytes).
Example:
* a → 01100001 (0x61)
* á → 11000011 10100001 (0xC3 0xA1)
* ≠ → 11100010 10001001 10100000 (0xE2 0x89 0xA0)
* 😂 → 11110000 10011111 10011000 10000010 (0xF0 0x9F 0x98 0x82)
The way UTF-8 encodes characters to bit strings is very well described [here](https://en.wikipedia.org/wiki/UTF-8#Description).
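These mappings are easy to check with `str.encode()` in Python 3 (a side demonstration; the question itself is about Python 2, where the details differ):

```python
# str.encode() maps each character to the byte pattern of the chosen encoding.
assert 'a'.encode('ascii') == b'a'             # 0x61
assert 'á'.encode('latin-1') == b'\xe1'        # one byte
assert 'á'.encode('utf-8') == b'\xc3\xa1'      # two bytes
assert '≠'.encode('utf-8') == b'\xe2\x89\xa0'  # three bytes
assert len('😂'.encode('utf-8')) == 4           # four bytes
```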
# Unicode and Encodings
Looking at the above examples, it becomes clear how Unicode is useful.
For example, if I'm **Latin-1** and I want to explain my encoding of á, I don't need to say:
> "I encode that a with an aigu (or however you call that rising bar) as 11100001"
But I can just say:
> "I encode U+00E1 as 11100001"
And if I'm **UTF-8**, I can say:
> "Me, in turn, I encode U+00E1 as 11000011 10100001"
And it's unambiguously clear to everybody which character we mean.
# Now to the often arising confusion
It's true that sometimes the bit pattern of an encoding, if you interpret it as a binary number, is the same as the Unicode code point of this character.
For example:
* ASCII encodes *a* as 1100001, which you can interpret as the hexadecimal number **0x61**, and the Unicode code point of *a* is **U+0061**.
* Latin-1 encodes *á* as 11100001, which you can interpret as the hexadecimal number **0xE1**, and the Unicode code point of *á* is **U+00E1**.
Of course, this has been arranged like this on purpose for convenience. But you should look at it as a **pure coincidence**. The bit pattern used to represent a character in memory is not tied in any way to the Unicode code point of this character.
Nobody even says that you have to interpret a bit string like 11100001 as a binary number. Just look at it as the sequence of bits that Latin-1 uses to encode the character *á*.
# Back to your question
The encoding used by your Python interpreter is **UTF-8**.
Here's what's going on in your examples:
## Example 1
The following encodes the character á in UTF-8. This results in the bit string 11000011 10100001, which is saved in the variable `a`.
```
>>> a = 'á'
```
When you look at the value of `a`, its content 11000011 10100001 is formatted as the hex number 0xC3 0xA1 and output as `'\xc3\xa1'`:
```
>>> a
'\xc3\xa1'
```
## Example 2
The following saves the Unicode code point of á, which is U+00E1, in the variable `ua` (we don't know which data format Python uses internally to represent the code point U+00E1 in memory, and it's unimportant to us):
```
>>> ua = u'á'
```
When you look at the value of `ua`, Python tells you that it contains the code point U+00E1:
```
>>> ua
u'\xe1'
```
## Example 3
The following encodes Unicode code point U+00E1 (representing character á) with UTF-8, which results in the bit pattern 11000011 10100001. Again, for output this bit pattern is represented as the hex number 0xC3 0xA1:
```
>>> ua.encode('utf-8')
'\xc3\xa1'
```
## Example 4
The following encodes Unicode code point U+00E1 (representing character á) with Latin-1, which results in the bit pattern 11100001. For output, this bit pattern is represented as the hex number 0xE1, which **by coincidence** is the same as the initial code point U+00E1:
```
>>> ua.encode('latin1')
'\xe1'
```
There's no relation between the Unicode object `ua` and the Latin-1 encoding. That the code point of á is U+00E1 and the Latin-1 encoding of á is 0xE1 (if you interpret the bit pattern of the encoding as a binary number) is a pure coincidence. | Python str vs unicode types | [
"",
"python",
"string",
"unicode",
""
] |
I need to query for rows from a table where one of the columns matches a string whose defining characteristic is an alphanumeric string of a specific length (say 4) followed by a ":" followed by an integer.
* pattern : alphanumericstring : integer
* example1: 1234:someint
* example2: abcd:someotherint
I tried the following
```
select * from mytable where col1 like '[]{4}:%'
select * from mytable where col1 like '.{4}:%'
```
and neither of these work. I am aware I didn't even try to ensure that the piece following the ":" was an integer. | You can use a combination of `charindex`, `substring` and `isnumeric`
```
CREATE TABLE MyTable
(
col1 varchar(20),
col2 varchar(50)
)
INSERT INTO MyTable
VALUES
('ABCD:123', 'Value 123'),
('1234:1234', 'Value 1234'),
('xyz:1234', 'should not be selected'),
('cdef:abcd', 'should not be selected too')
SELECT *
FROM MyTable
WHERE CHARINDEX(':', col1, 0) = 5 AND
ISNUMERIC(SUBSTRING(col1, CHARINDEX(':', col1) + 1, 20)) = 1
``` | SQL Server doesn't directly support regular expressions (if you search around, you can probably find some tutorials for adding them via user-defined functions).
`LIKE` doesn't support quantifiers, but it does have wildcards and lightweight character classes.
An underscore will match any character:
```
SELECT col1
FROM data
WHERE col1 LIKE '____:%';
```
Or you can specify range(s) of characters to match:
```
SELECT col1
FROM data
WHERE col1 LIKE '[a-z0-9][a-z0-9][a-z0-9][a-z0-9]:%';
```
See these [live on SQLFiddle](http://sqlfiddle.com/#!6/9fcec/2).
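As a cross-check, the same requirement can be expressed as a true regular expression; here it is tested with Python's `re` (since T-SQL itself has no native regex support), against some illustrative sample values:

```python
import re

# 4 alphanumerics, a colon, then one or more digits.
pattern = re.compile(r'^[a-z0-9]{4}:[0-9]+$', re.IGNORECASE)

values = ['ABCD:123', '1234:1234', 'xyz:1234', 'cdef:abcd']
print([v for v in values if pattern.match(v)])  # ['ABCD:123', '1234:1234']
```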
To specify that the second part must consist of digits only, an additional condition could be used:
```
SELECT col1
FROM data
WHERE col1 LIKE '[a-z0-9][a-z0-9][a-z0-9][a-z0-9]:%'
AND col1 NOT LIKE '[a-z0-9][a-z0-9][a-z0-9][a-z0-9]:%[^0-9]%';
```
You can [test the last one live as well](http://sqlfiddle.com/#!6/8d536/1). | pattern matching for a string of specific length in sql | [
"",
"sql",
"sql-server",
""
] |
I'm doing a simple lookup
```
if xo.st in sts:
#...
```
If the condition is met, I need to get the index of the element in `sts` (`sts` is a list or a tuple). This needs to be fast (it is a big list). What is the best solution for this? | What about:
```
if xo.st in sts:
print sts.index(xo.st)
```
This will return the first index of `xo.st` in `sts`. | First thing that comes to mind is
```
list.index(xo.st)
``` | Python: Get index if in statement evaluates true | [
"",
"python",
"performance",
"indexing",
""
] |
I have created a table as below:
```
create table T1(num varchar2(20))
```
then I inserted 3 lakh (300,000) numbers in the above table, so now it looks like below:
```
num
1
2
3
.
.
300000
```
Now if I do
```
select * from T1
```
then it takes 1 min 15 sec to completely fetch the records. Since I created an index on column `num`, I expected the query below to fetch the 300,000 records faster, but it also takes 1 min 15 sec:
```
select * from T1 where num between '1' and '300000'
```
So how has the index improved my retrieval process? | The index does not improve the retrieval process when you are trying to fetch all rows.
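You can see the same effect in any engine's query planner. Here is a sketch in SQLite via Python's `sqlite3` (not Oracle, but the principle is identical): fetching everything is a scan regardless of the index, while a selective predicate lets the planner seek into the index.

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE t1 (num TEXT)')
con.executemany('INSERT INTO t1 VALUES (?)', [(str(i),) for i in range(1000)])
con.execute('CREATE INDEX idx_num ON t1 (num)')

def plan(sql):
    # The last column of EXPLAIN QUERY PLAN output describes the strategy.
    return ' '.join(row[-1] for row in con.execute('EXPLAIN QUERY PLAN ' + sql))

print(plan('SELECT * FROM t1'))                   # a SCAN: index or not, every row is read
print(plan("SELECT * FROM t1 WHERE num = '42'"))  # a SEARCH using idx_num
```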
The index makes it possible to find a subset of rows much more quickly. | An index can help if you want to retrieve a few rows from a large table. But since you retrieve all rows and since your index contains all the columns of your table, it won't speed up the query.
Furthermore, you don't tell us what tool you use to retrieve the data. I guess you use SQL Developer or Toad. So what you measure is the time it takes SQL Developer or Toad to store 300,000 rows in memory in such a way that they can be easily displayed on screen in a scrollable table. You aren't really measuring how long it takes to retrieve them. | Index Created but doesn't speed-up on retrieval process | [
"",
"sql",
"oracle",
"oracle10g",
""
] |
I have a table like this
```
ItemsWithQuantity
=================
ID Quantity
---------------
x 4
y 7
```
I want to built a view or query that turns it into this
```
ID
x
x
x
x
y
y
y
y
y
y
y
```
To basically un-count them, and get a unique row for every quantity.
What's the best way to do this dynamically, in MS SQL 2005 or 2008? | Simplest way:
```
SELECT iq.ID FROM ItemsWithQuantity iq
INNER JOIN master..spt_values n ON iq.Quantity > n.number AND n.type = 'p'
```
**[SQLFiddle DEMO](http://sqlfiddle.com/#!6/d24bd/1)**
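For comparison, here is the same expansion idea run end to end in SQLite (which supports recursive CTEs) through Python's `sqlite3`, using the sample data from the question:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE ItemsWithQuantity (ID TEXT, Quantity INTEGER)')
con.executemany('INSERT INTO ItemsWithQuantity VALUES (?, ?)', [('x', 4), ('y', 7)])

# Generate numbers 0..MAX(Quantity)-1, then join: each row appears Quantity times.
rows = con.execute("""
    WITH RECURSIVE numbers(n) AS (
        SELECT 0
        UNION ALL
        SELECT n + 1 FROM numbers
        WHERE n + 1 < (SELECT MAX(Quantity) FROM ItemsWithQuantity)
    )
    SELECT iq.ID
    FROM ItemsWithQuantity iq
    JOIN numbers n ON iq.Quantity > n.n
    ORDER BY iq.ID
""").fetchall()
print([r[0] for r in rows])  # ['x', 'x', 'x', 'x', 'y', 'y', 'y', 'y', 'y', 'y', 'y']
```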
The `master..spt_values` approach relies on a system table for numbers, which is pretty safe but still an *undocumented* feature. If you are unsure about using it, you can create your own Numbers table that just lists the numbers, or build a CTE on the fly:
```
WITH CTE_Numbers AS
(
SELECT MAX(Quantity) AS Number FROM dbo.ItemsWithQuantity
UNION ALL
SELECT number - 1 FROM CTE_Numbers
WHERE number >=1
)
SELECT iq.ID FROM ItemsWithQuantity iq
INNER JOIN CTE_Numbers n ON iq.Quantity > n.number
ORDER BY ID
```
**[SQLFiddle DEMO](http://sqlfiddle.com/#!6/d24bd/2)** | Create a numbers table first (`(ID INT NOT NULL PRIMARY KEY)`) with numbers from 1 to 1m).
Then the query becomes more easy:
```
SELECT tab.ID
FROM ItemsWithQuantity tab
CROSS APPLY (
SELECT ID FROM Numbers WHERE ID BETWEEN 1 AND Quantity
) x
ORDER BY tab.ID
```
`CROSS APPLY` reads like "for each outer row, join the following rows". In our case we are joining `Quantity` inner rows to each outer row. | Take a table with a quantity field, or a count of items, and make unique rows for each, dynamically | [
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2005",
""
] |
I am using the PyQt library to take a screenshot of a webpage, then reading through a CSV file of different URLs. I am keeping a variable `feed` that increments every time a URL is processed, and it should therefore end up equal to the number of URLs.
Here's code:
```
webpage = QWebPage()
fo = open("C:/Users/Romi/Desktop/result1.txt", "w")
feed = 0
def onLoadFinished(result):
#fo.write( column1[feed])#, column2[feed], urls[feed])
#feed = 0
if not result:
print "Request failed"
fo.write(column1[feed])
fo.write(',')
fo.write(column2[feed])
fo.write(',')
#fo.write(urls[feed])
fo.write(',')
fo.write('404,image not created\n')
feed = feed + 1
sys.exit(1)
save_page(webpage, outputs.pop(0)) # pop output name from list and save
if urls:
url = urls.pop(0) # pop next url to fetch from list
webpage.mainFrame().load(QUrl(url))
fo.write(column1[feed])#,column2[feed],urls[feed],'200','image created','/n')
fo.write(',')
fo.write(column2[feed])
fo.write(',')
#fo.write(urls[feed])
fo.write(',')
fo.write('200,image created\n')
feed = feed + 1
else:
app.quit() # exit after last url
webpage.connect(webpage, SIGNAL("loadFinished(bool)"), onLoadFinished)
webpage.mainFrame().load(QUrl(urls.pop(0)))
#fo.close()
sys.exit(app.exec_())
```
It gives me the error:
```
local variable feed referenced before the assignment at fo.write(column1[feed])#,column2[feed],urls[feed],'200','image created','/n')
```
Any idea why? | When Python parses the body of a function definition and encounters an assignment such as
```
feed = ...
```
Python interprets `feed` as a local variable by default. If you do not wish for it to be a local variable, you must put
```
global feed
```
in the function definition. The global statement does not have to be at the beginning of the function definition, but that is where it is usually placed. Wherever it is placed, the global declaration makes `feed` a global variable *everywhere* in the function.
Without the global statement, since `feed` is taken to be a local variable, when Python executes
```
feed = feed + 1
```
Python evaluates the right-hand side first and tries to look up the value of feed. The first time through it finds `feed` is undefined. Hence the error.
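A minimal reproduction of the same behavior, stripped of the PyQt code:

```python
feed = 0

def bump():
    feed = feed + 1  # the assignment makes 'feed' local to bump()

try:
    bump()
except UnboundLocalError as e:  # exact message wording varies by Python version
    print(type(e).__name__)     # UnboundLocalError
```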
The shortest way to patch up the code is to add `global feed` to the beginning of `onLoadFinished`. The nicer way is to use a class:
```
class Page(object):
def __init__(self):
self.feed = 0
def onLoadFinished(self, result):
...
self.feed += 1
```
The problem with having functions which mutate global variables is that it makes it harder to grok your code. Functions are no longer isolated units. Their interaction extends to everything that affects or is affected by the global variable. Thus it makes larger programs harder to understand.
By avoiding mutating globals, in the long run your code will be easier to understand, test and maintain. | Put a global statement at the top of your function and you should be good:
```
def onLoadFinished(result):
global feed
...
```
To demonstrate what I mean, look at this little test:
```
x = 0
def t():
x += 1
t()
```
this blows up with your exact same error where as:
```
x = 0
def t():
global x
x += 1
t()
```
does not.
The reason for this is that, inside `t`, Python thinks that `x` is a local variable. Furthermore, unless you explicitly tell it that `x` is global, it will try to use a local variable named `x` in `x += 1`. But, since there is no `x` defined in the local scope of `t`, it throws an error. | Local variable referenced before assignment? | [
"",
"python",
"python-2.7",
"python-3.x",
"variable-assignment",
""
] |
I am trying to find the last entry for a specific **user** for each **location**. I also need to ignore any rows where the **balance** is zero.
Sample table:
```
ID location user balance
-- -------- ---- -------
1 GLA A 10.21
2 GLA A 0.00
3 GLA B 45.68
4 GLA B 12.56
5 EDI C 1.23
6 EDI D 5.48
7 EDI D 0.00
8 LON E 78.59
9 MAN F 0.00
10 MAN F 12.98
11 MAN G 56.45
12 MAN G 89.25
13 MAN H 0.00
14 BIR I 0.00
15 BIR I 32.87
```
Result required:
```
ID location user balance
-- -------- ---- -------
4 GLA B 12.56
5 EDI C 1.23
8 LON E 78.59
10 MAN F 12.98
12 MAN G 89.25
15 BIR I 32.87
```
I have been left to cover for a colleague who is off sick and unfortunately my SQL knowledge is limited in comparison.
I have been researching using MAX and PARTITION BY but with no luck so far. This is for a SQL Server 2005 database.
Any assistance with this would be greatly appreciated. I will add any progress I make.
Many Thanks. | You should be able to use `row_number()` to get the result:
```
select id, location, [user], balance
from
(
select id, location, [user], balance,
row_number() over(partition by location, [user]
order by id desc) seq
from yourtable
) d
where seq = 1
and balance <> 0
order by id;
```
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/a2bd1/15) | ```
Select
A.ID ,
A.location,
A.[user] ,
A.balance from TABLE1 A
inner join (
Select Max(Id) AS Id from TABLE1
group by location,[user]
) B
on A.ID=B.ID
WHERE A.Balance>0
```
[**Sql Fiddle Demo**](http://sqlfiddle.com/#!2/72cc3/2) | Find the last entry for specific user id | [
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2005",
""
] |
I am self-learning Python from the MIT OpenCourseWare website. I am having trouble completing this assignment using only the information learned in the lectures. The last thing I learned was iteration using `while` and `for` loops. I have not learned functions yet. Is it possible to write a program that computes and prints the 1000th prime number using only this?
Here is my code so far:
```
count = 0
prime = []
candidate = []
x = 2
y = 1
while count < 1000:
x = x+1
if x > 1:
if x%2 != 0:
if x%3 != 0:
if x%5 != 0:
if x%7 != 0:
if x%11 != 0:
if x%13 != 0:
candidate.append(x)
``` | Your code has a few problems which I’ll try to point out:
```
count = 0
prime = [] # this is obviously meant to collect all primes
candidate = [] # what is this supposed to do then though?
x = 2
y = 1 # never used
while count < 1000: # you start at `count = 0` but never increase the count
# later on, so would loop forever
x = x+1
if x > 1: # x is always bigger than 1 because you started at 2
# and only increase it; also, you skipped 2 itself
if x%2 != 0: # here, all you do is check if the
if x%3 != 0: # number is dividable by any prime you
if x%5 != 0: # know of
if x%7 != 0: # you can easily make this check work
if x%11 != 0: # for any set (or list) of primes
if x%13 != 0: #
candidate.append(x) # why a candidate? If it’s
# not dividable by all primes
# it’s a prime itself
```
So, building on this, you can make it all work:
```
primes = [2] # we're going to start with 2 directly
count = 1 # and we have already one; `2`
x = 2
while count < 1000:
x += 1
isPrime = True # assume it’s a prime
for p in primes: # check for every prime
if x % p == 0: # if it’s a divisor of the number
isPrime = False # then x is definitely not a prime
break # so we can stop this loop directly
if isPrime: # if it’s still a prime after looping
primes.append(x) # then it’s a prime too, so store it
count += 1 # and don’t forget to increase the count
```
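For reference, running essentially that code to completion and printing the result (the `all()` form below is just a compact rewrite of the same inner loop):

```python
primes = [2]
count = 1
x = 2
while count < 1000:
    x += 1
    if all(x % p != 0 for p in primes):  # no known prime divides x
        primes.append(x)
        count += 1
print(primes[-1])  # 7919, the 1000th prime
```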
---
> Where did the p in the for loop come from?
`for x in something` is a construct that will loop over every element in `something` and for each iteration it gives you a variable `x` that will contain the current value. So for example the following will separately print `1`, `2`, `3`.
```
for i in [1, 2, 3]:
print(i)
```
Or for a list of primes, `for p in primes` will loop over all your stored primes and in each iteration `p` will be one prime from the list.
So the whole check will essentially loop over every known prime, and for each prime it will check if said prime is a divisor of the number. And if we find one prime for which this is the case, we can abort the loop, because the current number is definitely not a prime itself. | Without doing the whole thing for you, as has been said you are building up a list of primes, in `prime` so you could use that instead of hardcoding things to check.
```
prime = []
x = 2
while len(prime) < 1000:
if *** check here ***
prime.append(x)
x = x + 1
``` | Trouble finding the 1000th prime using python | [
"",
"python",
"python-2.7",
""
] |
Hello, I have browsed the forum for a while and am asking my first question here. I'm in a bit of a bind and was wondering if I could get some help. I am using Access 2007 and have not found a good answer to the question on the Net yet.
My data is Diagnostic Codes and CustomerIDs, and what I am looking for is a way to find the distinct count of CustomerIDs for each Diagnostic Code. Ideally, in non-Access SQL it would look like this:
```
SELECT DiagCode, Count(Distinct(CustomerID))
FROM CustomerTable
Group By DiagCode;
```
I know this is a pretty straightforward question, but the answers that I'm finding are either too complicated (multiple aggregate functions) or too simple. Here is an approach I made at solving it, but it is returning too many results:
```
SELECT DiagCode, Count(CustomerID)
FROM CustomerTable
WHERE CustomerID in (SELECT Distinct CustomerID from CustomerTable)
Group By DiagCode;
```
Hope I'm being clear here; like I said, this is my first post, and any help is appreciated. | I'm no expert in MS Access and it has been quite a while since I last wrote anything for it, but this may work:
```
SELECT cd.DiagCode, Count(cd.CustomerID)
FROM (select distinct DiagCode, CustomerID from CustomerTable) as cd
Group By cd.DiagCode;
``` | I had the same question and found a link (now defunct) by the Access Team at Microsoft to have a nice working example of how to accomplish this; which I will also include here below.
---
**Data:**
```
Color Value
Red 5
Green 2
Blue 8
Orange 1
Red 8
Green 6
Blue 2
```
To get a count of the number of unique colors in the table, you could write a query such as:
```
SELECT Count(Distinct Color) AS N FROM tblColors
```
This would return the value 4 as there are four unique colors in the Color field in the table. Unfortunately, the Access Database Engine does not support the Count(Distinct) aggregate. To return this value from an Access table, you would need to use a subquery such as:
```
SELECT Count(*) AS N
FROM
(SELECT DISTINCT Color FROM tblColors) AS T;
```
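The subquery workaround is plain SQL, so it can be sanity-checked on any engine; for instance in SQLite via Python's `sqlite3` (SQLite is not Access, but the trick is the same):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE tblColors (Color TEXT, Value INTEGER)')
con.executemany('INSERT INTO tblColors VALUES (?, ?)',
                [('Red', 5), ('Green', 2), ('Blue', 8), ('Orange', 1),
                 ('Red', 8), ('Green', 6), ('Blue', 2)])

# Count(*) over a DISTINCT subquery stands in for Count(Distinct Color).
n = con.execute(
    'SELECT Count(*) AS N FROM (SELECT DISTINCT Color FROM tblColors) AS T'
).fetchone()[0]
print(n)  # 4
```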
Now let's say that you also want to include another aggregate value such as a Sum, and want to group by some value, in this case, Color. On SQL Server, you could write this query as:
```
SELECT Color, Sum(Value) AS Total, Count(Distinct Color) AS N
FROM tblColors
GROUP BY Color
```
This provides the following results:
**Data:**
```
Color Total N
Blue 10 1
Green 8 1
Orange 1 1
Red 13 1
```
---
Now, if you're asking whether or not this should return the value of '1', the answer is yes. As I understand it, the Count(Distinct) here can be used as a test to verify the results of a given query.
If your data is on a server that supports Count(Distinct), you might be able to use a pass-through query to retrieve the results. If you are working with Access data, this becomes a bit more challenging.
Since we used subqueries for the previous query, we'll need to do the same here. The trick however is that we need to use two subqueries as shown in the following SQL:
```
SELECT C.Color, Sum(C.Value) AS Total, T2.N
FROM
(SELECT T.Color, Count(T.Color) AS N
FROM
(SELECT DISTINCT Color, Count(*) AS N
FROM tblColors GROUP BY Color) AS T
GROUP BY T.Color) AS T2
INNER JOIN tblColors AS C
ON T2.Color = C.Color
GROUP BY C.Color, T2.N;
``` | Count Distinct in a Group By aggregate function in Access 2007 SQL | [
"",
"sql",
"ms-access-2007",
""
] |
I have to insert some data into an Oracle DB without first checking whether it already exists.
Is there any way (or transaction trick) in Oracle to catch the exception inside the query and handle it so that no exception is returned?
Something in MySQL's style would be perfect, like
```
insert .... on duplicate key a=a
``` | You can use [`MERGE`](http://docs.oracle.com/cd/B28359_01/server.111/b28286/statements_9016.htm). The syntax is a bit different from a regular insert though;
```
MERGE INTO test USING (
SELECT 1 AS id, 'Test#1' AS value FROM DUAL -- your row to insert here
) t ON (test.id = t.id) -- duplicate check
WHEN NOT MATCHED THEN
INSERT (id, value) VALUES (t.id, t.value); -- insert if no duplicate
```
[An SQLfiddle to test with](http://sqlfiddle.com/#!4/35b50/1). | If you can use PL/SQL, and you have a unique index on the columns where you don't want any duplicates, then you can catch the exception and ignore it:
```
begin
insert into your_table (your_col) values (your_value);
exception
when dup_val_on_index then null;
end;
``` | Oracle DB insert and do nothing on duplicate key | [
"",
"sql",
"oracle",
"oracle-sqldeveloper",
""
] |
Subject: Issues and their tasks.
Environment: SQL Server 2008 or above
Database tables: Issues, Tasks, and IssuesTasks
Let's say I have a single input screen that deals with a single issue and its associated tasks.
We're dealing with Issue1 and there are 7 tasks listed to check off.
The user checks 3 of the 7 tasks as completed and saves to database.
Is it possible to write a SQL that shows Issue1 with the 7 tasks on the same row? (Keep in mind only 3 were checked, so the others should be null).
Also note, there are only 3 tasks in the IssuesTasks join table, representing what the user checked. | Use SQL Server's built-in `PIVOT` function:
```
SELECT <non-pivoted column>,
[first pivoted column] AS <column name>,
[second pivoted column] AS <column name>,
...
[last pivoted column] AS <column name>
FROM
(<SELECT query that produces the data>)
AS <alias for the source query>
PIVOT
(
<aggregation function>(<column being aggregated>)
FOR
[<column that contains the values that will become column headers>]
IN ( [first pivoted column], [second pivoted column],
... [last pivoted column])
) AS <alias for the pivot table>
<optional ORDER BY clause>;
```
You can use the PIVOT and UNPIVOT relational operators to change a table-valued expression into another table. PIVOT rotates a table-valued expression by turning the unique values from one column in the expression into multiple columns in the output, and performs aggregations where they are required on any remaining column values that are wanted in the final output. UNPIVOT performs the opposite operation to PIVOT by rotating columns of a table-valued expression into column values.
Simple AdventureWorks example:
```
-- Pivot table with one row and five columns
SELECT 'AverageCost' AS Cost_Sorted_By_Production_Days,
[0], [1], [2], [3], [4]
FROM
(SELECT DaysToManufacture, StandardCost
FROM Production.Product) AS SourceTable
PIVOT
(
AVG(StandardCost)
FOR DaysToManufacture IN ([0], [1], [2], [3], [4])
) AS PivotTable;
```
More complex example:
```
USE AdventureWorks2008R2;
GO
SELECT VendorID, [250] AS Emp1, [251] AS Emp2, [256] AS Emp3, [257] AS Emp4, [260] AS Emp5
FROM
(SELECT PurchaseOrderID, EmployeeID, VendorID
FROM Purchasing.PurchaseOrderHeader) p
PIVOT
(
COUNT (PurchaseOrderID)
FOR EmployeeID IN
( [250], [251], [256], [257], [260] )
) AS pvt
ORDER BY pvt.VendorID;
```
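`PIVOT` is specific to SQL Server (and a few other engines); where it is unavailable, the same reshaping is usually written with conditional aggregation. A sketch in SQLite via Python's `sqlite3`, with made-up issue/task data just to show the pattern:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE IssuesTasks (Issue TEXT, Task TEXT, Done INTEGER)')
con.executemany('INSERT INTO IssuesTasks VALUES (?, ?, ?)',
                [('Issue1', 'T1', 1), ('Issue1', 'T2', 1), ('Issue1', 'T3', 0)])

# One output column per task value: MAX(CASE ...) plays the role of PIVOT.
row = con.execute("""
    SELECT Issue,
           MAX(CASE WHEN Task = 'T1' THEN Done END) AS T1,
           MAX(CASE WHEN Task = 'T2' THEN Done END) AS T2,
           MAX(CASE WHEN Task = 'T3' THEN Done END) AS T3
    FROM IssuesTasks
    GROUP BY Issue
""").fetchone()
print(row)  # ('Issue1', 1, 1, 0)
```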
[For more information see here](http://msdn.microsoft.com/en-us/library/ms177410%28v=sql.105%29.aspx) | Please excuse me if this isn't kosher; I'm still getting acclimated with the rules around here (long time reader of stackoverflow, first day posting). I actually just wrote an article on this on my new blog, and I sincerely think it would help. Basically, you can dynamically build your pivoted column values and pass them in to a dynamically-built PIVOT query like this:
```
IF (OBJECT_ID('tempdb..#TEMP') is not null) DROP TABLE #TEMP
DECLARE @cols NVARCHAR(2000)
SELECT DISTINCT DATE
INTO #TEMP
FROM T_EMPLOYEE_PRODUCTIVITY
SELECT @cols = ISNULL(@cols + ',', '') + '[' + CONVERT(NVARCHAR, DATE) + ']'
FROM #TEMP
ORDER BY DATE
SELECT @cols
DECLARE @query NVARCHAR(4000)
SET @query = 'SELECT EMPLOYEE_NAME, ' + @cols +
'FROM
(
SELECT EMPLOYEE_NAME, DATE, UNITS
FROM T_EMPLOYEE_PRODUCTIVITY
) AS SourceTable
PIVOT
(
SUM(UNITS)
FOR DATE IN ('+ @cols + ')
) AS PivotTable
ORDER BY EMPLOYEE_NAME'
SELECT @query
EXECUTE(@query)
```
If you need a more detailed explanation with sample data, check it out here: <http://thrillhouseblog.blogspot.com/2013/08/dynamic-pivot-query-in-tsql-microsoft.html>
I hope this helps! | sql turn dynamic rows into columns | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I am making a formset in python/django and need to dynamically add more fields to a formset as a button is clicked. The form I'm working on is for my school asking students who they would like to disclose certain academic information to, and the button here allows them to add more fields for entering family members/people they want to disclose to.
I have the button working to the point where the extra fields show up, and you can add as many as you like. Problem is, the data that was previously entered into the already existing fields gets deleted. However, only the things in the formset get deleted. Everything else that was filled out earlier in the form stays persistent.
Is there any way to make the formset keep the data that was entered before the button was pressed?
form.py:
```
from django import forms
from models import Form, ParentForm, Contact
from django.core.exceptions import ValidationError
def fff (value):
if value == "":
raise ValidationError(message = 'Must choose a relation', code="a")
# Create your forms here.
class ModelForm(forms.ModelForm):
class Meta:
model = Form
exclude = ('name', 'Relation',)
class Parent(forms.Form):
name = forms.CharField()
CHOICES3 = (
("", '-------'),
("MOM", 'Mother'),
("DAD", 'Father'),
("GRAN", 'Grandparent'),
("BRO", 'Brother'),
("SIS", 'Sister'),
("AUNT", 'Aunt'),
("UNC", 'Uncle'),
("HUSB", 'Husband'),
("FRIE", 'Friend'),
("OTHE", 'Other'),
("STEP", 'Stepparent'),
)
Relation = forms.ChoiceField(required = False, widget = forms.Select, choices = CHOICES3, validators = [fff])
```
models.py
```
from django.db import models
from django import forms
from content.validation import *
from django.forms.models import modelformset_factory
class Contact(models.Model):
name = models.CharField(max_length=100)
class Form(models.Model):
CHOICES1 = (
("ACCEPT", 'I agree with the previous statement.'),
)
CHOICES2 = (
("ACADEMIC", 'Academic Records'),
("FINANCIAL", 'Financial Records'),
("BOTH", 'I would like to share both'),
("NEITHER", 'I would like to share neither'),
("OLD", "I would like to keep my old sharing settings"),
)
Please_accept = models.CharField(choices=CHOICES1, max_length=200)
Which_information_would_you_like_to_share = models.CharField(choices=CHOICES2, max_length=2000)
Full_Name_of_Student = models.CharField(max_length=100)
Carthage_ID_Number = models.IntegerField(max_length=7)
I_agree_the_above_information_is_correct_and_valid = models.BooleanField(validators=[validate_boolean])
Date = models.DateField(auto_now_add=True)
name = models.ManyToManyField(Contact, through="ParentForm")
class ParentForm(models.Model):
student_name = models.ForeignKey(Form)
name = models.ForeignKey(Contact)
CHOICES3 = (
("MOM", 'Mother'),
("DAD", 'Father'),
("GRAN", 'Grandparent'),
("BRO", 'Brother'),
("SIS", 'Sister'),
("AUNT", 'Aunt'),
("UNC", 'Uncle'),
("HUSB", 'Husband'),
("FRIE", 'Friend'),
("OTHE", 'Other'),
("STEP", 'Stepparent'),
)
Relation = models.CharField(choices=CHOICES3, max_length=200)
def __unicode__(self):
return 'name: %r, student_name: %r' % (self.name, self.student_name)
```
and views.py
```
from django.shortcuts import render
from django.http import HttpResponse
from form import ModelForm, Parent
from models import Form, ParentForm, Contact
from django.http import HttpResponseRedirect
from django.forms.formsets import formset_factory
def create(request):
ParentFormSet = formset_factory(Parent, extra=1)
if request.POST:
Parent_formset = ParentFormSet(request.POST, prefix='Parent_or_Third_Party_Name')
if 'add' in request.POST:
list=[]
for kitties in Parent_formset:
list.append({'Parent_or_Third_Party_Name-0n-ame': kitties.data['Parent_or_Third_Party_Name-0-name'], 'Parent_or_Third_Party_Name-0-Relation': kitties.data['Parent_or_Third_Party_Name-0-Relation']})
Parent_formset = ParentFormSet(prefix='Parent_or_Third_Party_Name', initial= list)
form = ModelForm(request.POST)
if form.is_valid() and Parent_formset.is_valid():
form_instance = form.save()
for f in Parent_formset:
if f.clean():
(obj, created) = ParentForm.objects.get_or_create(name=f.cleaned_data['name'], Relation=f.cleaned_data['Relation'])
return HttpResponseRedirect('http://Google.com')
else:
form = ModelForm()
Parent_formset = ParentFormSet(prefix='Parent_or_Third_Party_Name')
return render(request, 'content/design.html', {'form': form, 'Parent_formset': Parent_formset})
def submitted(request):
return render(request, 'content/design.html')
```
Thank you in advance! | I've had trouble with dynamically adding fields in Django before and this stackoverflow question helped me:
[dynamically add field to a form](https://stackoverflow.com/questions/6142025/dynamically-add-field-to-a-form)
To be honest, I'm not entirely sure what you mean by "persistent" in your case - are the values of your forms being removed as you add inputs? Are you sure it isn't something with your JS? | A coworker of mine finally figured it out. Here is the revised views.py:
```
from django.shortcuts import render
from django.http import HttpResponse
from form import ModelForm, Parent
from models import Form, ParentForm, Contact
from django.http import HttpResponseRedirect
from django.forms.formsets import formset_factory
def create(request):
ParentFormSet = formset_factory(Parent, extra=1)
boolean = False
if request.POST:
Parent_formset = ParentFormSet(request.POST, prefix='Parent_or_Third_Party_Name')
if 'add' in request.POST:
boolean = True
list=[]
for i in range(0,int(Parent_formset.data['Parent_or_Third_Party_Name-TOTAL_FORMS'])):
list.append({'name': Parent_formset.data['Parent_or_Third_Party_Name-%s-name' % (i)], 'Relation': Parent_formset.data['Parent_or_Third_Party_Name-%s-Relation' % (i)]})
Parent_formset = ParentFormSet(prefix='Parent_or_Third_Party_Name', initial= list)
form = ModelForm(request.POST)
if form.is_valid() and Parent_formset.is_valid():
form_instance = form.save()
for f in Parent_formset:
if f.clean():
(contobj, created) = Contact.objects.get_or_create(name=f.cleaned_data['name'])
(obj, created) = ParentForm.objects.get_or_create(student_name=form_instance, name=contobj, Relation=f.cleaned_data['Relation'])
return HttpResponseRedirect('http://Google.com')
else:
form = ModelForm()
Parent_formset = ParentFormSet(prefix='Parent_or_Third_Party_Name')
return render(request, 'content/design.html', {'form': form, 'Parent_formset': Parent_formset, 'boolean':boolean})
def submitted(request):
return render(request, 'content/design.html')
```
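The key move in this fix - rebuilding the formset's `initial` list from the flat `prefix-N-field` keys in the POST data - is plain dictionary work, so it can be sketched and tested without Django (the sample data below is hypothetical):

```python
def rebuild_initial(data, prefix, fields):
    """Reconstruct a formset's `initial` list from flat POST-style keys."""
    total = int(data['%s-TOTAL_FORMS' % prefix])
    return [{f: data['%s-%s-%s' % (prefix, i, f)] for f in fields}
            for i in range(total)]

posted = {
    'Parent_or_Third_Party_Name-TOTAL_FORMS': '2',
    'Parent_or_Third_Party_Name-0-name': 'Alice',
    'Parent_or_Third_Party_Name-0-Relation': 'Mother',
    'Parent_or_Third_Party_Name-1-name': 'Bob',
    'Parent_or_Third_Party_Name-1-Relation': 'Father',
}
initial = rebuild_initial(posted, 'Parent_or_Third_Party_Name',
                          ['name', 'Relation'])
```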
Thank you for your input, those of you who answered :) | Can a django formset that dynamically adds fields have persistent data? | [
"",
"python",
"database",
"django",
"django-forms",
""
] |
How do I get the index column name in Python's pandas? Here's an example dataframe:
```
Column 1
Index Title
Apples 1
Oranges 2
Puppies 3
Ducks 4
```
What I'm trying to do is get/set the dataframe's index title. Here is what I tried:
```
import pandas as pd
data = {'Column 1' : [1., 2., 3., 4.],
'Index Title': ["Apples", "Oranges", "Puppies", "Ducks"]}
df = pd.DataFrame(data)
df.index = df["Index Title"]
del df["Index Title"]
```
Anyone know how to do this? | You can just get/set the index via its `name` property
```
In [7]: df.index.name
Out[7]: 'Index Title'
In [8]: df.index.name = 'foo'
In [9]: df.index.name
Out[9]: 'foo'
In [10]: df
Out[10]:
Column 1
foo
Apples 1
Oranges 2
Puppies 3
Ducks 4
``` | You can use [`rename_axis`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rename_axis.html); to remove the name, set it to `None`:
```
d = {'Index Title': ['Apples', 'Oranges', 'Puppies', 'Ducks'],'Column 1': [1.0, 2.0, 3.0, 4.0]}
df = pd.DataFrame(d).set_index('Index Title')
print (df)
Column 1
Index Title
Apples 1.0
Oranges 2.0
Puppies 3.0
Ducks 4.0
print (df.index.name)
Index Title
print (df.columns.name)
None
```
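Both routes in one runnable snippet (assuming a reasonably recent pandas is installed):

```python
import pandas as pd

df = pd.DataFrame({'Column 1': [1.0, 2.0, 3.0, 4.0]},
                  index=['Apples', 'Oranges', 'Puppies', 'Ducks'])
df.index.name = 'Index Title'   # set via the name property
name = df.index.name            # get it back
df = df.rename_axis('foo')      # same effect, but chainable
```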
---
The new functionality works well in method chains.
```
df = df.rename_axis('foo')
print (df)
Column 1
foo
Apples 1.0
Oranges 2.0
Puppies 3.0
Ducks 4.0
```
You can also rename column names with the `axis` parameter:
```
d = {'Index Title': ['Apples', 'Oranges', 'Puppies', 'Ducks'],'Column 1': [1.0, 2.0, 3.0, 4.0]}
df = pd.DataFrame(d).set_index('Index Title').rename_axis('Col Name', axis=1)
print (df)
Col Name Column 1
Index Title
Apples 1.0
Oranges 2.0
Puppies 3.0
Ducks 4.0
print (df.index.name)
Index Title
print (df.columns.name)
Col Name
```
```
print df.rename_axis('foo').rename_axis("bar", axis="columns")
bar Column 1
foo
Apples 1.0
Oranges 2.0
Puppies 3.0
Ducks 4.0
print df.rename_axis('foo').rename_axis("bar", axis=1)
bar Column 1
foo
Apples 1.0
Oranges 2.0
Puppies 3.0
Ducks 4.0
```
From version `pandas 0.24.0+` it is possible to use the parameters `index` and `columns`:
```
df = df.rename_axis(index='foo', columns="bar")
print (df)
bar Column 1
foo
Apples 1.0
Oranges 2.0
Puppies 3.0
Ducks 4.0
```
Removing index and column names means setting them to `None`:
```
df = df.rename_axis(index=None, columns=None)
print (df)
Column 1
Apples 1.0
Oranges 2.0
Puppies 3.0
Ducks 4.0
```
---
If there is a `MultiIndex` in the index only:
```
mux = pd.MultiIndex.from_arrays([['Apples', 'Oranges', 'Puppies', 'Ducks'],
list('abcd')],
names=['index name 1','index name 1'])
df = pd.DataFrame(np.random.randint(10, size=(4,6)),
index=mux,
columns=list('ABCDEF')).rename_axis('col name', axis=1)
print (df)
col name A B C D E F
index name 1 index name 1
Apples a 5 4 0 5 2 2
Oranges b 5 8 2 5 9 9
Puppies c 7 6 0 7 8 3
Ducks d 6 5 0 1 6 0
```
---
```
print (df.index.name)
None
print (df.columns.name)
col name
print (df.index.names)
['index name 1', 'index name 1']
print (df.columns.names)
['col name']
```
---
```
df1 = df.rename_axis(('foo','bar'))
print (df1)
col name A B C D E F
foo bar
Apples a 5 4 0 5 2 2
Oranges b 5 8 2 5 9 9
Puppies c 7 6 0 7 8 3
Ducks d 6 5 0 1 6 0
df2 = df.rename_axis('baz', axis=1)
print (df2)
baz A B C D E F
index name 1 index name 1
Apples a 5 4 0 5 2 2
Oranges b 5 8 2 5 9 9
Puppies c 7 6 0 7 8 3
Ducks d 6 5 0 1 6 0
df2 = df.rename_axis(index=('foo','bar'), columns='baz')
print (df2)
baz A B C D E F
foo bar
Apples a 5 4 0 5 2 2
Oranges b 5 8 2 5 9 9
Puppies c 7 6 0 7 8 3
Ducks d 6 5 0 1 6 0
```
Removing index and column names means setting them to `None`:
```
df2 = df.rename_axis(index=(None,None), columns=None)
print (df2)
A B C D E F
Apples a 6 9 9 5 4 6
Oranges b 2 6 7 4 3 5
Puppies c 6 3 6 3 5 1
Ducks d 4 9 1 3 0 5
```
---
For a `MultiIndex` in both the index and the columns, it is necessary to work with `.names` instead of `.name`, setting them with a list or tuple:
```
mux1 = pd.MultiIndex.from_arrays([['Apples', 'Oranges', 'Puppies', 'Ducks'],
list('abcd')],
names=['index name 1','index name 1'])
mux2 = pd.MultiIndex.from_product([list('ABC'),
list('XY')],
names=['col name 1','col name 2'])
df = pd.DataFrame(np.random.randint(10, size=(4,6)), index=mux1, columns=mux2)
print (df)
col name 1 A B C
col name 2 X Y X Y X Y
index name 1 index name 1
Apples a 2 9 4 7 0 3
Oranges b 9 0 6 0 9 4
Puppies c 2 4 6 1 4 4
Ducks d 6 6 7 1 2 8
```
The plural forms are necessary to check/set the values:
```
print (df.index.name)
None
print (df.columns.name)
None
print (df.index.names)
['index name 1', 'index name 1']
print (df.columns.names)
['col name 1', 'col name 2']
```
---
```
df1 = df.rename_axis(('foo','bar'))
print (df1)
col name 1 A B C
col name 2 X Y X Y X Y
foo bar
Apples a 2 9 4 7 0 3
Oranges b 9 0 6 0 9 4
Puppies c 2 4 6 1 4 4
Ducks d 6 6 7 1 2 8
df2 = df.rename_axis(('baz','bak'), axis=1)
print (df2)
baz A B C
bak X Y X Y X Y
index name 1 index name 1
Apples a 2 9 4 7 0 3
Oranges b 9 0 6 0 9 4
Puppies c 2 4 6 1 4 4
Ducks d 6 6 7 1 2 8
df2 = df.rename_axis(index=('foo','bar'), columns=('baz','bak'))
print (df2)
baz A B C
bak X Y X Y X Y
foo bar
Apples a 2 9 4 7 0 3
Oranges b 9 0 6 0 9 4
Puppies c 2 4 6 1 4 4
Ducks d 6 6 7 1 2 8
```
Removing index and column names means setting them to `None`:
```
df2 = df.rename_axis(index=(None,None), columns=(None,None))
print (df2)
A B C
X Y X Y X Y
Apples a 2 0 2 5 2 0
Oranges b 1 7 5 5 4 8
Puppies c 2 4 6 3 6 5
Ducks d 9 6 3 9 7 0
```
And @Jeff solution:
```
df.index.names = ['foo','bar']
df.columns.names = ['baz','bak']
print (df)
baz A B C
bak X Y X Y X Y
foo bar
Apples a 3 4 7 3 3 3
Oranges b 1 2 5 8 1 0
Puppies c 9 6 3 9 6 3
Ducks d 3 2 1 0 1 0
``` | How to get/set a pandas index column title or name? | [
"",
"python",
"pandas",
"dataframe",
""
] |
I'm trying to create a powerset in Python 3. I found a reference to the `itertools`
module, and I've used the powerset code provided on that page. The problem: the code returns a reference to an `itertools.chain` object, whereas I want access to the elements in the powerset. My question: how to accomplish this?
Many thanks in advance for your insights. | `itertools` functions return [*iterators*](https://docs.python.org/3/glossary.html#term-iterator), objects that produce results lazily, on demand.
You could either loop over the object with a `for` loop, or turn the result into a list by calling `list()` on it:
```
from itertools import chain, combinations
def powerset(iterable):
"powerset([1,2,3]) --> () (1,) (2,) (3,) (1,2) (1,3) (2,3) (1,2,3)"
s = list(iterable)
return chain.from_iterable(combinations(s, r) for r in range(len(s)+1))
for result in powerset([1, 2, 3]):
print(result)
results = list(powerset([1, 2, 3]))
print(results)
```
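As a sanity check, materializing the chain for an n-element input always yields `2**n` tuples, starting with the empty one:

```python
from itertools import chain, combinations

def powerset(iterable):
    # same recipe as above
    s = list(iterable)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

subsets = list(powerset([1, 2, 3]))  # list() consumes the lazy chain
```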
You can also store the object in a variable and use the [`next()` function](http://docs.python.org/3/library/functions.html#next) to get results from the iterator one by one. | Here's a solution using a generator:
```
from itertools import combinations
def all_combos(s):
n = len(s)
for r in range(1, n+1):
for combo in combinations(s, r):
yield combo
``` | Powersets in Python using itertools | [
"",
"python",
"python-3.x",
"python-itertools",
""
] |
I am reading JSON from the database and parsing it using python.
```
cur1.execute("Select JSON from t1")
dataJSON = cur1.fetchall()
for row in dataJSON:
jsonparse = json.loads(row)
```
The problem is that some of the JSON documents I'm reading are broken.
I would like my program to skip a document if it's not valid JSON, and parse it if it is. Right now my program crashes as soon as it encounters a broken document.
T1 has several JSON documents that I'm reading one by one. | **Update**
You're getting an *expected string or buffer* error - you need to use `row[0]`, as the results will be 1-tuples... and you want the first and only column.
**If you did want to check for bad json**
You can put a try/except around it:
```
for row in dataJSON:
try:
jsonparse = json.loads(row)
except Exception as e:
pass
```
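A runnable sketch of the same loop over hypothetical rows - note the `row[0]` indexing (since `fetchall()` returns 1-tuples), and that it narrows the except to `ValueError`, which is what `json.loads` raises for malformed input:

```python
import json

raw_rows = [('{"a": 1}',), ('{broken',), ('{"b": 2}',)]  # stand-in for fetchall()
parsed = []
for row in raw_rows:
    try:
        parsed.append(json.loads(row[0]))  # row is a 1-tuple
    except ValueError:  # json.JSONDecodeError is a ValueError subclass
        continue
```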
Now - instead of using `Exception` as above - use the type of exception that's occurring at the moment, so that you don't capture errors unrelated to JSON loading (it's probably `ValueError`). | If you just want to silently ignore errors, you can wrap `json.loads` in a try..except block:
```
try: jsonparse = json.loads(row)
except: pass
``` | Skipping broken jsons python | [
"",
"python",
"json",
""
] |
I am doing a task for my work and unfortunately the tables are designed horribly and I can't do too much as far as modifying the structure (one of our major programs here has been based off these outdated tables for years). That being said, I need to find a way to use SQL Server (TSQL) to count the number of distinct columns in a given row.
### Example:
I have a table with columns `name`, `fieldA`, `fieldB`, `fieldC`, `fieldD`, etc
I want to return a table with each of the rows, returning `name` and the number of distinct columns for `fieldA` - `fieldD`
### Visually:
* Jim - 1 - 3 - 4 - 6
* John - 1 - 1 - 1 - 2
* Jane - 2 - 2 - 3 - 3
Would return
* Jim - 4
* John - 2
* Jane - 2 | One way to do this is by unpivoting the data and then subsequently grouping it again. This gives you the ability to use `count(distinct)`:
```
select name, count(distinct val)
from t
unpivot (val for col in (FieldA, FieldB, FieldC, FieldD)) unpvt
group by name;
```
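For intuition, the number being computed per row is just the size of the set of that row's values; a quick Python check with the sample rows:

```python
rows = {'Jim': (1, 3, 4, 6), 'John': (1, 1, 1, 2), 'Jane': (2, 2, 3, 3)}
# distinct count per row == size of the set of its values
distinct_counts = {name: len(set(values)) for name, values in rows.items()}
```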
However, the most efficient way to do this is to keep all the processing on a single row -- no `join`s or `group by`s or `union`s. Assuming none of the values are `NULL`, the following works for 4 columns:
```
select name,
4 - ((case when FieldA in (FieldB, FieldC, FieldD) then 1 else 0 end) +
(case when FieldB in (FieldC, FieldD) then 1 else 0 end) +
(case when FieldC in (FieldD) then 1 else 0 end)
)
from t;
```
That is, start with the total count of columns. Then subtract 1 each time a column is like a column that comes *later*. This last condition ensures that duplicates are not counted more than once. | ```
DECLARE @x TABLE
(
Name VARCHAR(32),
A VARCHAR(32),
B VARCHAR(32),
C VARCHAR(32),
D VARCHAR(32)
);
INSERT @x VALUES
('Jim', 1,3,4,6),
('John',1,1,1,2),
('Jane',2,2,3,3);
SELECT Name, NumCols = COUNT(DISTINCT x)
FROM @x AS x
UNPIVOT (x FOR y IN (A,B,C,D)) AS up
GROUP BY Name
ORDER BY NumCols DESC;
``` | Count Number of Distinct Columns in a Row with tSQL | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I want to render a Jinja2 Template using a custom object implementing the `__getitem__` interface. The object implements a lazy variable look up because it is impossible to create a dictionary from it (the number of available variables is almost infinite, value retrieval works dynamically on the queried key).
Is it possible to render a Jinja2 template using a context object?
```
# Invalid code, but I'd like to have such an interface.
#
from jinja2 import Template
class Context(object):
def __getitem__(self, name):
# Create a value dynamically based on `name`
if name.startswith('customer'):
key = name[len('customer_'):]
return getattr(get_customer(), key)
raise KeyError(name)
t = Template('Dear {{ customer_first }},\n')
t.render(Context())
``` | I now figured out this (extremely hacky and ugly) solution.
```
t = CustomTemplate(source)
t.set_custom_context(Context())
print t.render()
```
Using the following replacements:
```
from jinja2.environment import Template as JinjaTemplate
from jinja2.runtime import Context as JinjaContext
class CustomContextWrapper(JinjaContext):
def __init__(self, *args, **kwargs):
super(CustomContextWrapper, self).__init__(*args, **kwargs)
self.__custom_context = None
def set_custom_context(self, custom_context):
if not hasattr(custom_context, '__getitem__'):
raise TypeError('custom context object must implement __getitem__()')
self.__custom_context = custom_context
# JinjaContext overrides
def resolve(self, key):
if self.__custom_context:
try:
return self.__custom_context[key]
except KeyError:
pass
return super(CustomContextWrapper, self).resolve(key)
class CustomTemplate(JinjaTemplate):
def set_custom_context(self, custom_context):
self.__custom_context = custom_context
# From jinja2.environment (2.7), modified
def new_context(self, vars=None, shared=False, locals=None,
context_class=CustomContextWrapper):
context = new_context(self.environment, self.name, self.blocks,
vars, shared, self.globals, locals,
context_class=context_class)
context.set_custom_context(self.__custom_context)
return context
# From jinja2.runtime (2.7), modified
def new_context(environment, template_name, blocks, vars=None,
shared=None, globals=None, locals=None,
context_class=CustomContextWrapper):
"""Internal helper to for context creation."""
if vars is None:
vars = {}
if shared:
parent = vars
else:
parent = dict(globals or (), **vars)
if locals:
# if the parent is shared a copy should be created because
# we don't want to modify the dict passed
if shared:
parent = dict(parent)
for key, value in iteritems(locals):
if key[:2] == 'l_' and value is not missing:
parent[key[2:]] = value
return context_class(environment, parent, template_name, blocks)
```
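Independent of the Jinja2 plumbing above, the lazy `__getitem__` contract itself is easy to unit-test in isolation (the customer object here is a hypothetical stand-in for `get_customer()`):

```python
class Context(object):
    """Lazy lookup: values are computed on access, not stored up front."""
    def __init__(self, customer):
        self._customer = customer

    def __getitem__(self, name):
        if name.startswith('customer_'):
            key = name[len('customer_'):]
            return getattr(self._customer, key)
        raise KeyError(name)

class FakeCustomer(object):
    first = 'Ada'

ctx = Context(FakeCustomer())
```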
Can anyone offer a better solution? | It looks like you have a function, `get_customer()`, which returns a dictionary or an object.
Why not just pass that to the template?
```
from jinja2 import Template
t = Template('Dear {{ customer.first }},\n')
t.render(customer=get_customer())
```
IIRC, Jinja is pretty forgiving of keys that don't exist, so `customer.bogus_key` shouldn't crash. | Lazy variable lookup in Jinja templates | [
"",
"python",
"templates",
"jinja2",
""
] |
I have a very large list called 'data' and I need to answer queries equivalent to
```
if (x in data[a:b]):
```
for different values of a, b and x.
Is it possible to preprocess data to make these queries fast? | # idea
You can create a `dict` that maps every element to the sorted list of positions where it occurs.
To answer a query, binary-search for the first position that is greater than or equal to `a`, then check that it exists and is less than `b`.
# Pseudocode
Preprocessing:
```
from collections import defaultdict
byvalue = defaultdict(list)
for i, x in enumerate(data):
byvalue[x].append(i)
```
Query:
```
import bisect

def has_index_in_slice(indices, a, b):
r = bisect.bisect_left(indices, a)
return r < len(indices) and indices[r] < b
def check(byvalue, x, a, b):
indices = byvalue.get(x, None)
if not indices: return False
return has_index_in_slice(indices, a, b)
```
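Putting the two snippets together (with the missing `import bisect`) gives a runnable version that can be verified against the naive slice test:

```python
import bisect
from collections import defaultdict

data = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]  # hypothetical sample

byvalue = defaultdict(list)  # value -> sorted list of positions
for i, x in enumerate(data):
    byvalue[x].append(i)

def check(x, a, b):
    """Equivalent to `x in data[a:b]`, but O(log n)."""
    indices = byvalue.get(x)
    if not indices:
        return False
    r = bisect.bisect_left(indices, a)
    return r < len(indices) and indices[r] < b
```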
Complexity is `O(log N)` per query, assuming that `list` indexing and `dict` lookup are O(1). | Yes, you could preprocess those slices into sets, thereby making membership lookup `O(1)` instead of `O(n)`:
```
check = set(data[a:b])
if x in check:
# do something
if y in check:
# do something else
``` | Fast membership in slices of lists | [
"",
"python",
"performance",
""
] |
I have the following matrix, which I believe is sparse. I tried converting it to dense using `.todense()`, but it never worked. Any suggestions as to how to do this? Thanks.
```
mx=[[(0, 2), (1, 1), (2, 1), (3, 1), (4, 1), (5, 3), (6, 4), (7, 2), (8, 5), (9, 1)],
[(10, 1), (11, 5), (12, 2), (13, 1), (21, 1), (22, 1), (23, 1), (24, 1), (25, 1), (26, 2)],
[(27, 2), (28, 1), (29, 1), (30, 1), (31, 2), (32, 1), (33, 1), (34, 1), (35, 1), (36, 1)]]
```
Someone put forward the solution below, but is there a better way?
```
def assign_coo_to_dense(sparse, dense):
dense[sparse.row, sparse.col] = sparse.data
```
mx.todense().
Intended output should appear in this form: [[2,1,1,1,1,3,4], [1,5,2,1,1,1,1], [2,1,1,1,2,1,1,1]] | List comprehension is the easiest way:
```
new_list = [[b for _,b in sub] for sub in mx]
```
Result:
```
>>> new_list
[[2, 1, 1, 1, 1, 3, 4, 2, 5, 1], [1, 5, 2, 1, 1, 1, 1, 1, 1, 2], [2, 1, 1, 1, 2, 1, 1, 1, 1, 1]]
``` | Your source data do not really match any of the built-in formats supported by sparse matrices in SciPy (see <http://docs.scipy.org/doc/scipy/reference/sparse.html> and <http://en.wikipedia.org/wiki/Sparse_matrix>), so using `.todense()` will not really be productive here. In particular, if you have something like:
```
import numpy as np
my_sparseish_matrix = np.array([[(1, 2), (3, 4)]])
```
then `my_sparseish_matrix` will already be a dense numpy array! Calling `.todense()` on it at that point will produce an error, and doesn't make sense anyway.
So my recommendation is to construct your dense array explicitly using a couple of `for` loops. To do this you'll need to know how many items are possible in your resulting vector -- call it `N`.
```
dense_vector = np.zeros((N, ), int)
for inner in mx:
for index, value in inner:
dense_vector[index] = value
``` | How to convert sparse matrix to dense form using python | [
"",
"python",
"numpy",
"matrix",
"scipy",
"word-frequency",
""
] |
My select looks like this and it returns the fields from the biggest id...
```
SELECT * FROM Pontos WHERE IdPonto = (SELECT MAX(IdPonto) FROM Pontos)
```
But now I want to select just the fields from the second biggest. | `SELECT * FROM Pontos WHERE IdPonto = (SELECT IdPonto FROM Pontos ORDER BY IdPonto DESC LIMIT 1,1)` should work. | This would be one way to do it:
```
SELECT * FROM Pontos ORDER BY Id DESC LIMIT 1 OFFSET 1
```
(Not quite certain about sqlite syntax, but read this SO thread for more info: [Sqlite LIMIT / OFFSET query](https://stackoverflow.com/questions/3325515/sqlite-limit-offset-query-doubt))
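Both forms can be exercised against an in-memory SQLite database (the table contents and the `Nome` column are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE Pontos (IdPonto INTEGER, Nome TEXT)')
conn.executemany('INSERT INTO Pontos VALUES (?, ?)',
                 [(1, 'a'), (2, 'b'), (3, 'c')])

# second-biggest IdPonto via LIMIT/OFFSET in a subquery
second = conn.execute(
    'SELECT * FROM Pontos WHERE IdPonto = '
    '(SELECT IdPonto FROM Pontos ORDER BY IdPonto DESC LIMIT 1 OFFSET 1)'
).fetchone()
```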
edit: I do not think it is necessary in this case to run nested queries. | SQL: SELECT All the fields from the SECOND BIGGEST ID | [
"",
"android",
"sql",
"performance",
"sqlite",
"select",
""
] |
I have a variable which may or may not get a value in the instance:
```
class EC():
    def __init__(self, a=False):
...
if a: self.__var = ...
```
Later I want to check if the **\_\_var** exists in the instance. Because prepending **\_\_** to the name changes the internal name to **\_EC\_\_var** the checking code becomes a little bit messy:
```
if ''.join(['_', self.__class__.__name__, '__var']) in self.__dict__: ...
```
**Is the code above considered normal or not? If not, what are the preferred alternatives?**
One option I can think of is to give **\_\_var** some value anyway, for example:
```
_no_value = object()
...
def __init__(self, a):
self.__var = _no_value
...
if a: self.__var = ...
```
So later I can compare **\_\_var** to **\_no\_value** instead of a mess with internal variables. | You've forgotten the [EAFP principle](https://stackoverflow.com/questions/11360858/what-is-the-eafp-principle-in-python):
```
try:
value = self.__var
except AttributeError:
# do something else
```
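A runnable sketch of the EAFP check (the attribute value here is hypothetical):

```python
class EC(object):
    def __init__(self, a=False):
        if a:
            self.__var = 42  # mangled to _EC__var internally

    def has_var(self):
        try:
            self.__var  # EAFP: just try the attribute
            return True
        except AttributeError:
            return False
```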
If you're determined to use a sentinel, you can combine it with a class variable:
```
class EC():
    __var = object()
...
if self.__var is not EC.__var:
...
``` | Just use `hasattr(self, '_var')` to see if it exists - it may be set to None but it will exist if `hasattr` says it does.
E.g.:
```
>>> class a():
... def __init__(self):
... self.a = 3
... self._a_ = 4
... self.__a__ = 'Fred'
...
>>> A=a()
>>> hasattr(a, 'a')
False
>>> hasattr(A, 'a')
True
>>> hasattr(A, '_a_')
True
>>> hasattr(A, '__a__')
True
>>> hasattr(A, '__b__')
False
>>>
``` | What's the best way to check if class instance variable is set in Python? | [
"",
"python",
""
] |
I want to read huge text file line by line (and stop if a line with "str" found).
How do I check whether the end of the file has been reached?
```
fn = 't.log'
f = open(fn, 'r')
while not _is_eof(f): ## how to check that end is reached?
s = f.readline()
print s
if "str" in s: break
``` | There's no need to check for EOF in python, simply do:
```
with open('t.ini') as f:
for line in f:
# For Python3, use print(line)
print line
if 'str' in line:
break
```
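A Python 3 version, runnable end-to-end against a throwaway file:

```python
import os
import tempfile

with tempfile.NamedTemporaryFile('w', suffix='.log', delete=False) as tmp:
    tmp.write('first line\nneedle str here\nnever reached\n')
    path = tmp.name

seen = []
with open(path) as f:  # iteration simply stops at end of file
    for line in f:
        seen.append(line)
        if 'str' in line:
            break
os.remove(path)
```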
[Why the `with` statement](http://docs.python.org/2/tutorial/inputoutput.html#methods-of-file-objects):
> It is good practice to use the `with` keyword when dealing with file
> objects. This has the advantage that the file is properly closed after
> its suite finishes, even if an exception is raised on the way. | Just iterate over each line in the file. Python automatically checks for the End of file and closes the file for you (using the `with` syntax).
```
with open('fileName', 'r') as f:
for line in f:
if 'str' in line:
break
``` | Python: read all text file lines in loop | [
"",
"python",
"file",
""
] |
With same keys in dictionaries, I have found [this answer](https://stackoverflow.com/questions/13975021/merge-join-lists-of-dictionaries-based-on-a-common-value-in-python)
However I want to merge the previous example's dictionaries as if I had these two:
```
list_a = {'data' : [{'user__name': u'Joe', 'user__id': 1},
{'user__name': u'Bob', 'user__id': 3}]}
list_b = {'data' : [{'hours_worked': 25, 'user_num': 3},
{'hours_worked': 40, 'user_num': 1}]}
```
I tried:
```
for (k,v) in list_a['data']:
list_a['data'][k]['user_num'] = list_a['data'][k].pop('user__id')
```
But I got: `ValueError: too many values to unpack`
**Update:**
I want my final result to look like:
```
list_c = {'data' : [{'user__name': u'Joe', 'user_num': 1, 'hours_worked': 40},
{'user__name': u'Bob', 'user_num': 3, 'hours_worked': 25 }]}
``` | In that case you'd have to use a dictionary to map ids to dictionaries first:
```
result = {d['user__id']: d for d in list_a['data']}
for d in list_b['data']:
    if d['user_num'] in result:
        result[d['user_num']].update(d)
``` | ```
>>> res = {d["user_num"]: d for d in list_b["data"]}
>>> for a in list_a["data"]:
... res[a["user__id"]]["user__name"] = a["user__name"]
>>> list_c = {"data" : res.values()}
```
However, it will raise KeyError if there's no user in list\_b for user in list\_a | Join two Python dictionaries based on same value but different key name (like SQL's JOIN) | [
"",
"python",
"dictionary",
"key",
""
] |
This query is giving me an error, so what's wrong with it?
```
SELECT recipes.recipeId as recipeId,
recipes.title as title,
recipes.cookingtime as cookingtime,
recipes.serving as serving,
schedule.catId as catId,
images.imagePath AS imagePath
FROM recipes
INNER JOIN schedule ON recipes.recipeId = schedule.recipeId
INNER JOIN images ON images.recipeId = recipes.recipeId
GROUP BY recipeId
WHERE schedule.day = 'saturday'
ORDER BY catId ASC
```
I think that the position of the GROUP BY in the query is not correct. | You are thinking correctly. `GROUP BY` should come after the `WHERE` clause and before the `ORDER BY` clause.
```
SELECT recipes.recipeId as recipeId,
recipes.title as title,
recipes.cookingtime as cookingtime,
recipes.serving as serving,
schedule.catId as catId,
images.imagePath AS imagePath
FROM recipes
INNER JOIN schedule ON recipes.recipeId = schedule.recipeId
INNER JOIN images ON images.recipeId = recipes.recipeId
WHERE schedule.day = 'saturday'
GROUP BY recipeId
ORDER BY catId ASC
``` | It should be like this: `WHERE`, then `GROUP BY`, then `ORDER BY`:
```
WHERE schedule.day = 'saturday'
GROUP BY recipeId
ORDER BY catId ASC
``` | Whats wrong with this SQL query | [
"",
"mysql",
"sql",
""
] |
I have a database and its structure looks like this:
1. The table is called `Values`
2. The columns of that table are: `Key, Attribute, Value`
3. An `object` is a series of rows that share the same `Key`.
For example:
```
ROWID Key Attribute Value
***** ************************************ ********* ***********************
1 9847CAD7-C430-4401-835B-A7FCE9A33A90 FirstName Tito
2 9847CAD7-C430-4401-835B-A7FCE9A33A90 CreatedAt 2013-08-03 10:10:23:344
3 9847CAD7-C430-4401-835B-A7FCE9A33A90 UpdatedAt 2013-08-03 11:10:23:344
----- ------------------------------------ --------- -----------------------
4 4AE4B3F4-895B-4BF7-90E6-C889DA875D26 FirstName Tito
5 4AE4B3F4-895B-4BF7-90E6-C889DA875D26 CreatedAt 2013-01-01 10:10:10:344
6 4AE4B3F4-895B-4BF7-90E6-C889DA875D26 UpdatedAt 2013-01-01 10:10:10:344
```
In the example above I have separated the two "objects" with a series of "-" to help understand how the data is structured.
The goal: select all rows whose `UpdatedAt` value is greater than their `CreatedAt` value, where both share the same `Key`. In the above example, I'd like to match Key `9847CAD7-C430-4401-835B-A7FCE9A33A90` because its `UpdatedAt` value is greater than its `CreatedAt` value (the matching should occur within the same "object").
Any idea how I can accomplish this query? Thanks in advance for the help. | ```
select
t1.key
from
Table1 t1
inner join Table1 t2 on t1.key = t2.key
where t1.Attribute = 'CreatedAt' and t2.Attribute = 'UpdatedAt'
and t1.value < t2.value
```
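The same self-join can be checked from Python with the built-in sqlite3 module (keys abbreviated here; the identifiers `Values`, `Key` and `Value` are double-quoted because they clash with SQL keywords):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE "Values" ("Key" TEXT, "Attribute" TEXT, "Value" TEXT)')
conn.executemany('INSERT INTO "Values" VALUES (?, ?, ?)', [
    ('9847CAD7', 'CreatedAt', '2013-08-03 10:10:23'),
    ('9847CAD7', 'UpdatedAt', '2013-08-03 11:10:23'),
    ('4AE4B3F4', 'CreatedAt', '2013-01-01 10:10:10'),
    ('4AE4B3F4', 'UpdatedAt', '2013-01-01 10:10:10'),
])

matches = conn.execute('''
    SELECT t1."Key"
    FROM "Values" t1
    JOIN "Values" t2 ON t1."Key" = t2."Key"
    WHERE t1."Attribute" = 'CreatedAt'
      AND t2."Attribute" = 'UpdatedAt'
      AND t1."Value" < t2."Value"
''').fetchall()
```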
* see it working live in an [sqlfiddle](http://sqlfiddle.com/#!2/7bc8c/1/0) | Here's my version:
```
SELECT Key,
MAX(CASE WHEN Attribute = 'FirstName' THEN Value END) As "First_Name",
MAX(CASE WHEN Attribute = 'CreatedAt' THEN Value END) As "Created_At",
MAX(CASE WHEN Attribute = 'UpdatedAt' THEN Value END) As "Updated_At"
FROM Value
GROUP BY Key
HAVING Created_At < Updated_At;
```
and **[SQL Fiddle](http://sqlfiddle.com/#!7/c9175/10)**
It shows all the attributes of the matching *object*. | Multiple column match across rows | [
"",
"sql",
"sqlite",
""
] |
Here's the schema:
```
CREATE TABLE `employees` (
`employee_id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`department_id` int(11) DEFAULT NULL,
`boss_id` int(11) DEFAULT NULL,
`name` varchar(255) DEFAULT NULL,
`salary` varchar(255) DEFAULT NULL,
PRIMARY KEY (`employee_id`)
);
CREATE TABLE `departments` (
`department_id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`name` varchar(255) DEFAULT NULL,
PRIMARY KEY (`department_id`)
);
```
Here's the dataset:
```
INSERT INTO `employees` (`employee_id`, `department_id`, `boss_id`, `name`, `salary`)
VALUES
(1,1,0,'manager','80000'),
(2,1,1,'emp1','60000'),
(3,1,1,'emp2','50000'),
(4,1,1,'emp3','95000'),
(5,1,1,'emp4','75000');
INSERT INTO `departments` (`department_id`, `name`)
VALUES
(1,'IT'),
(2,'HR'),
(3,'Sales'),
(4,'Marketing');
```
---
Exercise question: List employees who have the biggest salary in their departments.
Here's my query:
```
select e.name as 'employee name',max(e.salary) as 'salary',d.name as 'dept name'
from employees e join departments d
on e.department_id=d.department_id
group by d.name
```
Why does my query only return one row? Shouldn't it return 4 (one per dept name)?
Thanks in advance! | It's doing exactly what you think it's doing. The problem is that your data isn't what you think it is. The department_id for all entries in the `employees` table is the same, `1`. So you are getting one value per department; the problem is that only one department is represented.
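You can reproduce this with an in-memory SQLite copy of the sample data: the grouped query collapses to a single row because every employee row has `department_id = 1`:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE employees '
             '(employee_id INT, department_id INT, boss_id INT, name TEXT, salary TEXT)')
conn.execute('CREATE TABLE departments (department_id INT, name TEXT)')
conn.executemany('INSERT INTO employees VALUES (?,?,?,?,?)',
                 [(1, 1, 0, 'manager', '80000'), (2, 1, 1, 'emp1', '60000'),
                  (3, 1, 1, 'emp2', '50000'), (4, 1, 1, 'emp3', '95000'),
                  (5, 1, 1, 'emp4', '75000')])
conn.executemany('INSERT INTO departments VALUES (?,?)',
                 [(1, 'IT'), (2, 'HR'), (3, 'Sales'), (4, 'Marketing')])

rows = conn.execute('''
    SELECT e.name, MAX(e.salary), d.name
    FROM employees e JOIN departments d ON e.department_id = d.department_id
    GROUP BY d.name
''').fetchall()
```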
Here's a [SQLfiddle](http://sqlfiddle.com/#!2/3860f/1) to illustrate this. I've augmented your schema but kept the query unmodified. | ```
SELECT
d.name,
e.name,
e.salary
FROM employees e
INNER join departments d on e.department_id=d.department_id
INNER JOIN (
SELECT
d.department_id,
max(e.salary) as salary
from departments d
left join employees e on e.department_id=d.department_id
group by d.department_id
) x ON x.salary = e.salary AND d.department_id = x.department_id
```
This query will return the list of employees with the maximum salary in each department. If you want to see results for more than one department, just fill the other departments with more employees (according to your sample data, you only have employees in the department with department_id = 1). | SQL Query only returns one row | [
"",
"mysql",
"sql",
""
] |
Ok so I'm defining a function that takes a variable number of inputs and clamps each of them
```
def clamp(*args):
return [ max(min(arg, 0.8), 0.2) for arg in args]
```
I like the readability of this function:
```
a = 0.12
b = 0.45
c = 0.992
A,B,C = clamp(a,b,c)
print A,B,C
>> 0.2, 0.45, 0.8
```
This takes advantage of python's automatic unpacking of lists to tuples. The trouble I am having is that if I only give one argument, python doesn't unpack it from the list and I get a list instead of a float, which is annoying.
```
print clamp(a)
>> [0.2]
```
My solution so far is to check the length of the list and index it if there is only one element:
```
def clamp(*args):
result = [ max(0.2, min(0.8,arg)) for arg in args]
return result if len(result) > 1 else result[0]
a = 0.12
print clamp(a)
>> 0.2
```
My question is, is there a more idiomatic way to do this? | I'm not very convinced, but here is an alternative solution:
```
>>> clam = lambda a: max(min(a, 0.8), 0.2)
>>> def clamp(a, *args):
... if args:
... return [ clam(arg) for arg in (a,)+args]
... else:
... return clam(a)
...
>>> clamp(123, 123)
[0.8, 0.8]
>>> clamp(123)
0.8
``` | You can force it to unpack a single element by adding a comma after the name. Not ideal but here you go:
```
A, = clamp(a)
``` | Unpacking variable length list returned from function | [
"",
"python",
""
] |
I have two SQLite tables like this:
```
AuthorId | AuthorName
----------------------
1 | Alice
2 | Bob
3 | Carol
... | ....
BookId | AuthorId | Title
----------------------------------
1 | 1 | aaa1
2 | 1 | aaa2
3 | 1 | aaa3
4 | 2 | ddd1
5 | 2 | ddd2
... | ... | ...
19 | 3 | fff1
20 | 3 | fff2
21 | 3 | fff3
22 | 3 | fff4
```
I want to make a SELECT query that will return the first N (e.g. two) rows for each AuthorId, ordering by Title ("Select the first two books of each author").
Sample output:
```
BookId | AuthorId | AuthorName | Title
------------------------------------------
1 | 1 | Alice | aaa1
2 | 1 | Alice | aaa2
4 | 2 | Bob | ddd1
5 | 2 | Bob | ddd2
19 | 3 | Carol | fff1
20 | 3 | Carol | fff2
```
How can I build this query?
(Yes, I found a similar topic, and I know how to return only one row (first or top). The problem is with the two). | You can do the counting using a correlated subquery:
```
SELECT b.BookId, a.AuthorId, a.AuthorName, b.Title
FROM Author a join
Book b
on a.AuthorId = b.AuthorId
where (select count(*)
from book b2
where b2.bookId <= b.BookId and b2.AuthorId = b.AuthorId
) <= 2;
```
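Checked against an in-memory SQLite build of the sample tables (note the counting here ranks by `BookId`, which in the sample data matches the title order):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE Author (AuthorId INT, AuthorName TEXT)')
conn.execute('CREATE TABLE Book (BookId INT, AuthorId INT, Title TEXT)')
conn.executemany('INSERT INTO Author VALUES (?, ?)',
                 [(1, 'Alice'), (2, 'Bob'), (3, 'Carol')])
conn.executemany('INSERT INTO Book VALUES (?, ?, ?)',
                 [(1, 1, 'aaa1'), (2, 1, 'aaa2'), (3, 1, 'aaa3'),
                  (4, 2, 'ddd1'), (5, 2, 'ddd2'),
                  (19, 3, 'fff1'), (20, 3, 'fff2'),
                  (21, 3, 'fff3'), (22, 3, 'fff4')])

rows = conn.execute('''
    SELECT b.BookId, a.AuthorId, a.AuthorName, b.Title
    FROM Author a JOIN Book b ON a.AuthorId = b.AuthorId
    WHERE (SELECT COUNT(*) FROM Book b2
           WHERE b2.BookId <= b.BookId AND b2.AuthorId = b.AuthorId) <= 2
    ORDER BY b.BookId
''').fetchall()
```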
For a small database this should be fine. If you create a composite index on `Book(AuthorId, BookId)` then that will help the query. | There is an alternative variant:
```
SELECT * FROM (
SELECT * FROM BOOK, AUTHOR
WHERE BOOK.AUTHORID = AUTHOR.AUTHORID
) T1
WHERE T1.BOOKID IN (
SELECT T2.BOOKID FROM BOOK T2
WHERE T2.AUTHORID = T1.AUTHORID
ORDER BY T2.BOOKTITLE
LIMIT 2
)
ORDER BY T1.BOOKTITLE
``` | How to select the first N rows of each group? | [
"",
"sql",
"sqlite",
"greatest-n-per-group",
"limit-per-group",
""
] |
I have a select statement where I want to order by different criteria based on a CASE expression, but I'm having trouble with syntax when I want to order by multiple criteria. I would like it to be similar to the following code, but I get syntax errors.
```
SELECT *
FROM Table1
ORDER BY
CASE WHEN @OrderBy = 1 THEN Column1, Column2 END,
CASE WHEN @OrderBy = 2 THEN Column3 END,
``` | Although a `case` only returns one value, you can repeat the case:
```
SELECT *
FROM Table1
ORDER BY (CASE WHEN @OrderBy = 1 THEN Column1
WHEN @OrderBy = 2 THEN Column3
end),
(CASE WHEN @OrderBy = 1 THEN Column2 END)
```
This gives the secondary sort on `Column2` for `@OrderBy = 1`.
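The pattern is portable; here it is exercised with sqlite3, binding a parameter where T-SQL uses `@OrderBy` (the table and values are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (Column1 INT, Column2 INT, Column3 INT)')
conn.executemany('INSERT INTO t VALUES (?, ?, ?)',
                 [(2, 1, 9), (1, 2, 7), (1, 1, 8)])

def ordered(order_by):
    # same shape as the query above, with ? in place of @OrderBy
    return conn.execute('''
        SELECT Column1, Column2, Column3 FROM t
        ORDER BY (CASE WHEN ? = 1 THEN Column1
                       WHEN ? = 2 THEN Column3 END),
                 (CASE WHEN ? = 1 THEN Column2 END)
    ''', (order_by,) * 3).fetchall()
```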
In fact, this will also work and it might be closer to what you were originally thinking:
```
SELECT *
FROM Table1
ORDER BY (CASE WHEN @OrderBy = 1 THEN Column1 end),
(CASE WHEN @OrderBy = 1 THEN Column2 end),
(CASE WHEN @OrderBy = 2 THEN Column3 end)
```
In this version, the first two clauses will return `NULL` for all rows when `@OrderBy = 2`; the third clause is then the one that drives the sort. | `CASE` is an *expression* which returns *exactly* one result, and cannot be used for control-of-flow logic like in some other languages. I have no idea what your data types are so I'm guessing you want this:
```
CASE WHEN @OrderBy = 1 THEN Column1 END,
CASE WHEN @OrderBy = 2 THEN Column3 END,
Column2;
```
If Column1 and Column3 have the same (or compatible) data types, you can simplify:
```
CASE @OrderBy
WHEN 1 THEN Column1
WHEN 2 THEN Column3 END,
Column2;
```
You can just add Column2 to the end because it doesn't need a conditional - it's 2nd for the first condition and if you only care about Column3 for the second condition, it probably doesn't matter that you also order by Column2. | ORDER BY with two criteria in a CASE expression | [
"",
"sql",
"sql-server",
"select",
"sql-order-by",
"switch-statement",
""
] |
Looking for an elegant (or any) solution to convert columns to rows.
Here is an example: I have a table with the following schema:
```
[ID] [EntityID] [Indicator1] [Indicator2] [Indicator3] ... [Indicator150]
```
Here is what I want to get as the result:
```
[ID] [EntityId] [IndicatorName] [IndicatorValue]
```
And the result values will be:
```
1 1 'Indicator1' 'Value of Indicator 1 for entity 1'
2 1 'Indicator2' 'Value of Indicator 2 for entity 1'
3 1 'Indicator3' 'Value of Indicator 3 for entity 1'
4 2 'Indicator1' 'Value of Indicator 1 for entity 2'
```
And so on..
Does this make sense? Do you have any suggestions on where to look and how to get it done in T-SQL? | You can use the [UNPIVOT](http://msdn.microsoft.com/en-us/library/ms177410%28v=sql.105%29.aspx) function to convert the columns into rows:
```
select id, entityId,
indicatorname,
indicatorvalue
from yourtable
unpivot
(
indicatorvalue
for indicatorname in (Indicator1, Indicator2, Indicator3)
) unpiv;
```
Note, the datatypes of the columns you are unpivoting must be the same so you might have to convert the datatypes prior to applying the unpivot.
You could also use `CROSS APPLY` with UNION ALL to convert the columns:
```
select id, entityid,
indicatorname,
indicatorvalue
from yourtable
cross apply
(
select 'Indicator1', Indicator1 union all
select 'Indicator2', Indicator2 union all
select 'Indicator3', Indicator3 union all
select 'Indicator4', Indicator4
) c (indicatorname, indicatorvalue);
```
Depending on your version of SQL Server you could even use CROSS APPLY with the VALUES clause:
```
select id, entityid,
indicatorname,
indicatorvalue
from yourtable
cross apply
(
values
('Indicator1', Indicator1),
('Indicator2', Indicator2),
('Indicator3', Indicator3),
('Indicator4', Indicator4)
) c (indicatorname, indicatorvalue);
```
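For engines that have neither UNPIVOT nor CROSS APPLY (SQLite, for example), the UNION ALL flavour shown above is the portable fallback; here it is exercised from Python with made-up sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE yourtable (id INT, entityId INT, Indicator1 TEXT, Indicator2 TEXT)")
con.executemany("INSERT INTO yourtable VALUES (?, ?, ?, ?)",
                [(1, 1, 'a', 'b'), (2, 2, 'c', 'd')])

# One SELECT per source column, glued together with UNION ALL
rows = con.execute("""
    SELECT id, entityId, 'Indicator1' AS indicatorname, Indicator1 AS indicatorvalue
    FROM yourtable
    UNION ALL
    SELECT id, entityId, 'Indicator2', Indicator2 FROM yourtable
    ORDER BY id, indicatorname
""").fetchall()
print(rows)
```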
Finally, if you have 150 columns to unpivot and you don't want to hard-code the entire query, then you could generate the sql statement using dynamic SQL:
```
DECLARE @colsUnpivot AS NVARCHAR(MAX),
@query AS NVARCHAR(MAX)
select @colsUnpivot
= stuff((select ','+quotename(C.column_name)
from information_schema.columns as C
where C.table_name = 'yourtable' and
C.column_name like 'Indicator%'
for xml path('')), 1, 1, '')
set @query
= 'select id, entityId,
indicatorname,
indicatorvalue
from yourtable
unpivot
(
indicatorvalue
for indicatorname in ('+ @colsunpivot +')
) u'
exec sp_executesql @query;
``` | Well, if you have 150 columns, then I think UNPIVOT is not an option. So you could use an XML trick:
```
;with CTE1 as (
select ID, EntityID, (select t.* for xml raw('row'), type) as Data
from temp1 as t
), CTE2 as (
select
C.id, C.EntityID,
F.C.value('local-name(.)', 'nvarchar(128)') as IndicatorName,
F.C.value('.', 'nvarchar(max)') as IndicatorValue
from CTE1 as c
outer apply c.Data.nodes('row/@*') as F(C)
)
select * from CTE2 where IndicatorName like 'Indicator%'
```
**`sql fiddle demo`**
You could also write dynamic SQL, but I like xml more - for dynamic SQL you have to have permissions to select data directly from table and that's not always an option.
**UPDATE**
As there's a big flame war in the comments, I think I'll add some pros and cons of XML/dynamic SQL. I'll try to be as objective as I can and not mention elegance and ugliness. If you have any other pros and cons, edit the answer or add them in the comments.
**cons**
* it's **not as fast** as dynamic SQL; rough tests suggest that XML is about 2.5 times slower than dynamic (it was one query on a ~250,000-row table, so this estimate is by no means exact). You could compare it yourself if you want; here's a [**sqlfiddle**](http://sqlfiddle.com/#!3/cd87e/1) example: on 100,000 rows it was 29s (xml) vs 14s (dynamic);
* maybe it could be **harder to understand** for people not familiar with XPath;
**pros**
* it's the **same scope** as your other queries, and that could be very handy. A few examples come to mind
+ you could query `inserted` and `deleted` tables inside your **trigger** (not possible with dynamic at all);
+ users don't have to have **permissions** to select directly from the table. What I mean is, if you have a stored-procedure layer and users have permission to run the procedures but not to query the tables directly, you can still use this query inside a stored procedure;
+ you can **query a table variable** you have populated in your scope (to pass it into dynamic SQL you have to either make it a temporary table instead, or create a type and pass it as a parameter into the dynamic SQL);
* you can do this **query inside the function** (scalar or table-valued). It's not possible to use dynamic SQL inside the functions; | SQL Server : Columns to Rows | [
"",
"sql",
"sql-server",
"t-sql",
"unpivot",
""
] |
I have a table with columns: `ID` (Int), `Date` (Date) and `Price` (Decimal). `Date` column is in format 2013-04-14:
## Table Example
```
ID Date Price
1 2012/05/02 23.5
1 2012/05/03 25.2
1 2012/05/04 22.5
1 2012/05/05 22.2
1 2012/05/06 26.5
2 2012/05/02 143.5
2 2012/05/03 145.2
2 2012/05/04 142.2
2 2012/05/05 146.5
3 2012/05/02 83.5
3 2012/05/03 85.2
3 2012/05/04 80.5
```
## Query Example:
I want to be able to select the data for `ID` 1 and `ID` 3 between a date range from the table, and have this in a table with three columns ordered by the `Date` column. I also want to insert this into a temporary table to perform mathematical calculations on the data. Please comment if there is a better way.
Correct Result Example
```
Date ID1 ID3
2012-05-02 23.5 83.5
2012-05-03 25.2 85.2
2012-05-04 22.5 80.2
```
Any help and advice will be appreciated,
Thanks
--- | Try the following.
```
CREATE TABLE #temp (
Date date,
x money,
y money
)
;
SELECT
Date,
MAX(CASE WHEN id=1 THEN price END) AS x,
MAX(CASE WHEN id=3 THEN price END) AS y
FROM Top40
WHERE Date BETWEEN '2012-05-02' AND '2012-05-04'
GROUP BY
Date
;
```
See [SQL Fiddle](http://sqlfiddle.com/#!6/5f2f0/4) for working example
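The MAX(CASE ...) pivot itself is portable; as a hedged check, here it is run from Python against an in-memory SQLite copy of the sample data (the LAG step is left out):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Top40 (ID INT, Date TEXT, Price REAL)")
con.executemany("INSERT INTO Top40 VALUES (?, ?, ?)", [
    (1, '2012-05-02', 23.5), (1, '2012-05-03', 25.2), (1, '2012-05-04', 22.5),
    (3, '2012-05-02', 83.5), (3, '2012-05-03', 85.2), (3, '2012-05-04', 80.5),
])

# Each MAX(CASE ...) picks out one ID's price per date
rows = con.execute("""
    SELECT Date,
           MAX(CASE WHEN ID = 1 THEN Price END) AS x,
           MAX(CASE WHEN ID = 3 THEN Price END) AS y
    FROM Top40
    WHERE Date BETWEEN '2012-05-02' AND '2012-05-04'
    GROUP BY Date
    ORDER BY Date
""").fetchall()
print(rows)
```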
**EDIT:**
To use the LAG window function on the x and y columns, you'll have to use a common table expression or CTE first.
```
WITH prices AS(
SELECT
Date as myDate,
MAX(CASE WHEN id=1 THEN price END) AS x,
MAX(CASE WHEN id=3 THEN price END) AS y
FROM Top40
WHERE Date BETWEEN '2012-05-02' AND '2012-05-04'
GROUP BY
Date
)
SELECT
myDate,
p.x,
(p.x/(LAG(p.x) OVER (ORDER BY MyDate))-1) as x_return,
p.y,
(p.y/(LAG(p.y) OVER (ORDER BY MyDate))-1) as y_return
FROM prices p
ORDER BY
myDate
;
```
See new [SQL Fiddle](http://sqlfiddle.com/#!6/5f2f0/9) for example. | The simplest way to do it in code (although it may not perform well with large data sets) is to do something like:
```
SELECT [Date], x = MAX(CASE WHEN ID = 1 THEN PRICE END)
, y = MAX(CASE WHEN ID = 3 THEN PRICE END)
INTO #tmp
FROM Top40
GROUP BY [Date]
``` | Insert data into temp table, multiple columns from one Table | [
"",
"sql",
"sql-server-2012",
""
] |
I have a set I want to make into a sorted list. I run:
```
sorted_list=list(my_set).sort()
```
but this returns `None`, even though `list(my_set)` and `my_set` are both nonempty. On the other hand, this:
```
sorted_list=list(my_set)
sorted_list.sort()
```
works just fine.
Why is this happening? Does python not allow methods to be called on objects directly returned by constructors? | `.sort()` sorts the list in place and returns `None`. You need to use the `sorted()` function here.
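Applied to the question's own code, that looks like this (a short sketch):

```python
my_set = {3, 1, 2}

# Chaining .sort() loses the list: the method mutates in place and returns None
assert list(my_set).sort() is None

# sorted() accepts any iterable (including a set) and returns a new list
sorted_list = sorted(my_set)
print(sorted_list)
```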
```
>>> a = [3, 2, 1]
>>> print a.sort()
None
>>> a
[1, 2, 3]
>>> sorted(a)
[1, 2, 3]
``` | It's simple:
* `sort()` makes sorting in place and returns `None`
* `sorted()` returns a sorted copy
Here's a quote from [How To/Sorting - Python Wiki](http://wiki.python.org/moin/HowTo/Sorting/):
> Python lists have a built-in sort() method that modifies the list
> in-place and a sorted() built-in function that builds a new sorted
> list from an iterable. | Why can't I sort a list right after I make it? | [
"",
"python",
"list",
"python-3.x",
""
] |
I'm a total noob trying to create a blank MS Access database using VBA in Excel. I want to name the new database "tblImport". This is the code I'm using:
```
Sub makedb()
Dim accessApp As Access.Application
Set accessApp = New Access.Application
accessApp.DBEngine.CreateDatabase "C:\tblImport.accdb", dbLangGenera
accessApp.Quit
Set accessApp = Nothing
End Sub
```
I get the following error message:
"Run Time Error 3001: Application Defined or Object Defined Error"
What can I do? | The name of the locale constant in the CreateDatabase method is wrong:
This:
`accessApp.DBEngine.CreateDatabase "C:\tblImport.accdb", dbLangGenera`
Should be:
`accessApp.DBEngine.CreateDatabase "D:\tblImport.accdb", DB_LANG_GENERAL`
Change that and your code should work. (It does for me at least). | Old Question. Here are my two cents. You have a typo...
`dbLangGenera` should be `dbLangGeneral`
More about it in [Workspace.CreateDatabase Method (DAO)](https://msdn.microsoft.com/en-us/library/office/ff822832.aspx)
Voting to close this question as per [Dealing with questions with obvious replies](https://meta.stackexchange.com/questions/212024/dealing-with-questions-with-obvious-replies)
Try this. This works.
```
Sub makedb()
Dim accessApp As Access.Application
Set accessApp = New Access.Application
accessApp.DBEngine.CreateDatabase "C:\tblImport.accdb", dbLangGeneral
accessApp.Quit
Set accessApp = Nothing
End Sub
```
**EDIT**: Will delete this answer and post it as a comment once the post is closed. | How can a blank MS Access database be created using VBA? | [
"",
"sql",
"excel",
"vba",
"ms-access",
""
] |
I am new to Mock and am writing a unit test for this function:
```
# utils.py
import requests
def some_function(user):
payload = {'Email': user.email}
url = 'http://api.example.com'
response = requests.get(url, params=payload)
if response.status_code == 200:
return response.json()
else:
return None
```
I am using [Michael Foord's Mock](http://www.voidspace.org.uk/python/mock/index.html) library as part of my unit test and am having difficulty mocking the `response.json()` to return a json structure. Here is my unit test:
```
# tests.py
from .utils import some_function
class UtilsTestCase(unittest.TestCase):
def test_some_function(self):
with patch('utils.requests') as mock_requests:
mock_requests.get.return_value.status_code = 200
mock_requests.get.return_value.content = '{"UserId":"123456"}'
results = some_function(self.user)
self.assertEqual(results['UserId'], '123456')
```
I have tried numerous combinations of different mock settings after reading the docs with no luck. If I print the `results` in my unit test it always displays the following instead of the json data structure I want:
```
<MagicMock name=u'requests.get().json().__getitem__().__getitem__()' id='30315152'>
```
Thoughts on what I am doing wrong? | Patch `json` method instead of `content`. (`content` is not used in `some_function`)
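Stripped of the patching machinery, the essential configuration is just a `Mock` whose `json` method returns a dict; a minimal, self-contained sketch using the stdlib `unittest.mock` (which later absorbed Foord's library), with no real `requests` call involved:

```python
from unittest.mock import Mock

# Stand-in for what requests.get would return
mock_response = Mock(status_code=200)
mock_response.json.return_value = {"UserId": "123456"}

# some_function's logic applied to the mock: json() now yields a real dict
results = mock_response.json() if mock_response.status_code == 200 else None
print(results["UserId"])
```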
Try the following code.
```
import unittest
from mock import Mock, patch
import utils
class UtilsTestCase(unittest.TestCase):
def test_some_function(self):
user = self.user = Mock()
user.email = 'user@example.com'
with patch('utils.requests') as mock_requests:
mock_requests.get.return_value = mock_response = Mock()
mock_response.status_code = 200
mock_response.json.return_value = {"UserId":"123456"}
results = utils.some_function(self.user)
self.assertEqual(results['UserId'], '123456')
``` | Another pattern I like to use that is a little more reusable would be to start the patcher in your unit test's `setUp` method. It's also important to check that mock request was called with the expected parameters:
```
class UtilsTestCase(TestCase):
def setUp(self):
self.user = Mock(id=123, email='foo@bar.com')
        self.patcher = patch('utils.requests.get')
        self.mock_response = Mock(status_code=200)
        self.mock_response.raise_for_status.return_value = None
        self.mock_response.json.return_value = {'UserId': self.user.id}
        self.mock_request = self.patcher.start()
        self.mock_request.return_value = self.mock_response
    def tearDown(self):
        self.patcher.stop()
def test_request(self):
results = utils.some_function(self.user)
self.assertEqual(results['UserId'], 123)
self.mock_request.assert_called_once_with(
            'http://api.example.com',
            params={'Email': self.user.email},
)
def test_bad_request(self):
# override defaults and reassign
self.mock_response.status_code = 500
self.mock_request.return_value = self.mock_response
results = utils.some_function(self.user)
self.assertEqual(results, None)
self.mock_request.assert_called_once_with(
            'http://api.example.com',
            params={'Email': self.user.email},
)
``` | Django unittest and mocking the requests module | [
"",
"python",
"unit-testing",
"mocking",
"python-requests",
""
] |
I am using the `ttk.Progressbar` in my app. I have scoured the net for an answer, but to no avail.
I have the following code which is working well. But I want to change the thickness of the bar.
```
progressbar = ttk.Progressbar(myGui, orient=HORIZONTAL,
length=400, mode="determinate",
variable=value_progress,
)
progressbar.pack()
```
I want the length to still be 400, but from the top of the bar to the bottom I wish to decrease that so it's half or less than half. (I want my bar on a diet, so to speak.)
But I am beating my head against the wall to figure out a solution.
Any ideas? Thanks in advance. | The [ttk progress bar](http://docs.python.org/3.1/library/tkinter.ttk.html#progressbar) appears to lack the width option in Python.
Using a work around ([here](https://stackoverflow.com/a/11981214/1663352)) for an issue with a Tkinter Button. From this I have been able to create a working solution.
The key to solving the issue was to add the progress bar to a [window](http://effbot.org/tkinterbook/canvas.htm#Tkinter.Canvas.create_window-method) inside the canvas. Using a window inside the canvas doesn't cause the canvas to resize when the widget is added which means we can control the width of the progress bar.
I have created some working example code:
```
from ttk import Progressbar
import Tkinter
class Example(Tkinter.Frame):
def __init__(self, parent):
Tkinter.Frame.__init__(self, parent)
self.parent = parent
self.initUI()
def initUI(self):
value_progress =50
self.parent.title("Progressbar Thingymawhatsit")
self.config(bg = '#F0F0F0')
self.pack(fill = Tkinter.BOTH, expand = 1)
#create canvas
canvas = Tkinter.Canvas(self, relief = Tkinter.FLAT, background = "#D2D2D2",
width = 400, height = 5)
progressbar = Progressbar(canvas, orient=Tkinter.HORIZONTAL,
length=400, mode="indeterminate",
variable=value_progress,
)
# The first 2 create window argvs control where the progress bar is placed
canvas.create_window(1, 1, anchor=Tkinter.NW, window=progressbar)
canvas.grid()
def main():
root = Tkinter.Tk()
root.geometry('500x50+10+50')
app = Example(root)
app.mainloop()
if __name__ == '__main__':
main()
```
So to sum up the progress bar is the same size but you just cant see half of it! | If you *must* use the xpnative theme or themes like it, then you will likely not have the option to change the thickness the conventional way. However if you use the default theme, you can configure the thickness with a style. There are likely other themes that let you do this as well, and if you're going to be playing around a lot with the look and feel of your program, you may wish to use these instead.
```
from Tkinter import *
from ttk import *
def main():
root = Tk()
s = Style()
s.theme_use("default")
s.configure("TProgressbar", thickness=50)
pb = Progressbar(root, style="TProgressbar")
pb.pack()
root.mainloop()
main()
``` | tkinter.ttk.Progressbar: How to change thickness of a horizontal bar | [
"",
"python",
"python-3.x",
"tkinter",
"ttk",
""
] |
I'm doing an exercise that asks for a function that approximates the value of pi using Leibniz' formula. These are the explanations on Wikipedia:
1 - 1/3 + 1/5 - 1/7 + 1/9 - ... = π/4

Σ (from n = 0 to ∞) of (-1)^n / (2n + 1) = π/4
Logical thinking comes to me easily, but I wasn't given much of a formal education in maths, so I'm a bit lost as to what the leftmost symbols in the second one represent. I tried to make the code `pi = ( (-1)**n / (2*n + 1) ) * 4`, but that returned `1.9999990000005e-06` instead of 3.14159..., so I used an accumulator pattern instead (since the chapter of the guide that this was in mentions them as well) and it worked fine. However, I can't help thinking that it's somewhat contrived and there's probably a better way to do it, given Python's focus on simplicity and making programmes as short as possible. This is the full code:
```
def myPi(n):
denominator = 1
addto = 1
for i in range(n):
denominator = denominator + 2
addto = addto - (1/denominator)
denominator = denominator + 2
addto = addto + (1/denominator)
pi = addto * 4
return(pi)
print(myPi(1000000))
```
Does anyone know a better function? | The capital sigma here is [sigma notation](http://en.wikipedia.org/wiki/Summation#Capital-sigma_notation). It is notation used to represent a summation in concise form.
So your sum is actually an infinite sum. The first term, for n=0, is:
```
(-1)**0/(2*0+1)
```
This is added to
```
(-1)**1/(2*1+1)
```
and then to
```
(-1)**2/(2*2+1)
```
and so on for ever. The summation is what is known mathematically as a *convergent sum*.
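Because the terms alternate in sign and shrink toward zero, each partial sum lies within the first omitted term of the limit, so an N-term estimate of π is accurate to about 4/(2N+1). A quick numerical check of that bound (a self-contained sketch):

```python
import math

def leibniz_pi(terms):
    # Partial sum of the alternating series, scaled by 4
    return 4 * sum((-1.0) ** n / (2 * n + 1) for n in range(terms))

# Alternating-series bound: the error is at most the first omitted term
for terms in (10, 100, 1000):
    assert abs(leibniz_pi(terms) - math.pi) < 4 / (2 * terms + 1)
print(leibniz_pi(1000))
```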
In Python you would write it like this:
```
def estimate_pi(terms):
result = 0.0
for n in range(terms):
result += (-1.0)**n/(2.0*n+1.0)
return 4*result
```
If you wanted to optimise a little, you can avoid the exponentiation.
```
def estimate_pi(terms):
result = 0.0
sign = 1.0
for n in range(terms):
result += sign/(2.0*n+1.0)
sign = -sign
return 4*result
....
>>> estimate_pi(100)
3.1315929035585537
>>> estimate_pi(1000)
3.140592653839794
``` | The Leibniz formula translates directly into Python with no muss or fuss:
```
>>> steps = 1000000
>>> sum((-1.0)**n / (2.0*n+1.0) for n in reversed(range(steps))) * 4
3.1415916535897934
``` | Leibniz formula for π - Is this any good? (Python) | [
"",
"python",
"python-3.x",
""
] |
OK, quite possibly I am doing something wrong, but following the advice of a user here I ran this query:
```
SELECT id, item,
(SELECT COUNT(item) FROM Table1 WHERE id=a.id AND item=a.item) cnt
FROM (SELECT DISTINCT a.id,b.item FROM Table1 a, Table1 b) a
ORDER BY id, item;
```
on this table:
```
ID ITEM
-----------------
0001 345
0001 345
0001 120
0002 567
0002 034
0002 567
0003 567
0004 533
0004 008
...
```
in order to get this result:
```
ID ITEM CNT
1 8 0
1 34 0
1 120 1
1 345 2
1 533 0
1 567 0
2 8 0
2 34 1
...
```
but it is taking too long and the query is still running after a day...
Is there a way to improve performance? I have about 4 million rows
Thank you | Your query is quite convoluted. I think you just want to count the combinations of `id` and `item`. If so, this is a simple aggregation:
```
select id, item, count(*)
from Table1 a
group by id, item;
```
If you want all ids and items to appear, then use a driver table:
```
select driver.id, driver.item, coalesce(count(t1.id), 0)
from (select id.id, item.item
from (select distinct id from Table1) id cross join
(select distinct item from Table1) item
) driver left outer join
Table1 t1
on driver.id = t1.id and driver.item = t1.item
group by driver.id, driver.item;
```
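As a hedged sanity check, both the plain aggregation and the driver-table query can be run from Python against an in-memory SQLite copy of the sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Table1 (ID INT, ITEM INT)")
con.executemany("INSERT INTO Table1 VALUES (?, ?)",
                [(1, 345), (1, 345), (1, 120), (2, 567), (2, 34), (2, 567)])

# Plain aggregation: one row per (id, item) pair that actually occurs
present = con.execute("""
    SELECT ID, ITEM, COUNT(*) FROM Table1
    GROUP BY ID, ITEM ORDER BY ID, ITEM
""").fetchall()
print(present)

# Driver table: every id crossed with every item, zero-filled counts
full = con.execute("""
    SELECT d.ID, d.ITEM, COUNT(t.ID)
    FROM (SELECT i.ID, it.ITEM
          FROM (SELECT DISTINCT ID FROM Table1) i
          CROSS JOIN (SELECT DISTINCT ITEM FROM Table1) it) d
    LEFT OUTER JOIN Table1 t ON d.ID = t.ID AND d.ITEM = t.ITEM
    GROUP BY d.ID, d.ITEM ORDER BY d.ID, d.ITEM
""").fetchall()
print(full)
```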
The original query has this statement:
```
(SELECT DISTINCT a.id,b.item FROM Table1 a, Table1 b) a
```
This does a full cartesian product and then a distinct. So, if your table has 100,000 rows, the intermediate result has 10,000,000,000 rows for the distinct to process (I don't think MySQL optimizes this any better). Doing the distinct first (as in the driver) greatly reduces the volume of data.
EDIT:
There is a class of SQL questions where you need to look at all combinations of two or more items and then determine values for every combination (even those that don't exist in the data) or find those that are *not* in the data. These problems pose the same challenge: how do you get information about values not in the data?
The solution that I advocate is to create a table that has all possible combinations, and then use `left [outer] join` for the remaining tables. I call this the "driver" table, because the rows in this query "drive" the query by defining the population for subsequent joins.
This terminology is fairly consistent with the reference in the comment. The comment is using the term from the optimizer perspective. Some join algorithms -- particularly nested loop and index lookup -- treat the two sides of the join differently; for these, one side is the "driving/driver" table. For instance, when joining from a large table to a small reference table, the large table is the driving table and the other table is accessed through an index. Other join algorithms -- such as merge join and hash joins (in general) -- treat both sides the same, so the concept is less applicable there.
From the logical perspective, I'm using it to mean the query that defines the population. An important similarity is that for a left/right outer join, both definitions are, in practice, the same. The optimizer would typically choose the first table in a `left join` as the "driver", because it defines the output rows. | If the only thing you want to achieve is a count grouped by `id` and `item`, why don't you just:
```
SELECT ID, Item, COUNT(1)
FROM Table 1
GROUP BY ID, Item
```
It's as simple as that! | Count query is taking too long - over 24 hours have passed | [
"",
"mysql",
"sql",
"database",
""
] |
I want to create a stored procedure to insert random data into the 'Video' table. I have already generated 30,000 records of data for the UserProfile table.
Note: Username is an FK column in the Video table.
```
CREATE TABLE UserProfile
(
Username VARCHAR(45) NOT NULL ,
UserPassword VARCHAR(45) NOT NULL ,
Email VARCHAR(45) NOT NULL ,
FName VARCHAR(45) NOT NULL ,
LName VARCHAR(45) NOT NULL ,
Birthdate DATE ,
Genger VARCHAR(10) NOT NULL ,
ZipCode INT ,
Image VARCHAR(50) ,
PRIMARY KEY(Username)
);
GO
CREATE TABLE Video
(
VideoId INT NOT NULL DEFAULT 1000 ,
Username VARCHAR(45) NOT NULL ,
VideoName VARCHAR(160) NOT NULL ,
UploadTime DATE ,
TotalViews INT ,
Thumbnail VARCHAR(100) ,
PRIMARY KEY(VideoId),
FOREIGN KEY(Username)
REFERENCES UserProfile(Username)
);
GO
``` | It's not too difficult to generate random data, even in SQL
For example, to get a random username from your userprofile table.
```
BEGIN
-- get a random row from a table
DECLARE @username VARCHAR(50)
SELECT @username = [Username] FROM (
SELECT ROW_NUMBER() OVER(ORDER BY [Username]) [row], [Username]
FROM [UserProfile]
) t
WHERE t.row = 1 + (SELECT CAST(RAND() * COUNT(*) as INT) FROM [UserProfile])
print(@username)
END
```
To generate a random integer...
```
BEGIN
-- get a random integer between 3 and 7 (3 + 5 - 1)
DECLARE @totalviews INT
SELECT @totalviews = CAST(RAND() * 5 + 3 as INT)
print(@totalviews)
END
```
To generate a random varchar string
```
BEGIN
-- get a random varchar ascii char 32 to 128
DECLARE @videoname VARCHAR(160)
DECLARE @length INT
SELECT @videoname = ''
SET @length = CAST(RAND() * 160 as INT)
WHILE @length <> 0
BEGIN
SELECT @videoname = @videoname + CHAR(CAST(RAND() * 96 + 32 as INT))
SET @length = @length - 1
END
print(@videoname)
END
```
And finally, a random date
```
BEGIN
-- get a random datetime +/- 365 days
DECLARE @uploadtime DATETIME
SET @uploadtime = GETDATE() + (365 * 2 * RAND() - 365)
print(@uploadtime)
END
``` | As Aaron already suggested in his comment – I wouldn’t really reinvent the wheel.
Just go and find some of the already available tools for this.
[Here](http://www.mssqltips.com/sqlservertip/2190/generating-sql-server-test-data-with-visual-studio-2010/) is an article on how to do this in VS, and a thread here on Stack Overflow:
[Data generators for SQL server?](https://stackoverflow.com/questions/157600/data-generators-for-sql-server) | How to generate random data in SQL server | [
"",
"sql",
"sql-server",
"database",
"sql-server-2008",
"stored-procedures",
""
] |
I want to implement an algorithm that gets the index of letter changes.
I have the list below. I want to find the index at the beginning of every letter change and put it in a result list, except for the first letter: for that one, we should record the last index of its occurrence. Let me give you an example:
```
letters=['A','A','A','A','A','A','A','A','A','A','A','A','B','C','C','X','D','X','B','B','A','A','A','A']
```
Transitions:
```
'A','A','A','A','A','A','A','A','A','A','A','A'-->'B'-->'C','C'-->'X'-->'D'-->'X'-->'B','B'-->'A','A','A','A'
```
Here, when the A letters finish and B starts, we should record the index of the last A and the index of the first B, and so on; but we should not include the letter X in the result list.
Desired result:
```
[(11, 'A'), (12, 'B'), (13, 'C'), (16, 'D'), (18, 'B'), (20, 'A')]
```
So far I have written this code; it finds all the items except (11, 'A'). How can I modify my code to get the desired result?
```
for i in range(len(letters)):
if letters[i]!='X' and letters[i]!=letters[i-1]:
result.append((i,(letters[i])))
```
My result:
```
[(12, 'B'), (13, 'C'), (16, 'D'), (18, 'B'), (20, 'A')] ---> missing (11, 'A').
``` | Here's my suggestion. It has three steps.
1. First, find all the starting indexes for each run of letters.
2. Replace the index in the first non-X run with the index of the end of its run, which will be one less than the start of the following run.
3. Filter out all X runs.
The code:
```
def letter_runs(letters):
prev = None
results = []
for index, letter in enumerate(letters):
if letter != prev:
prev = letter
results.append((index, letter))
if results[0][1] != "X":
results[0] = (results[1][0]-1, results[0][1])
else: # if first run is "X" second must be something else!
results[1] = (results[2][0]-1, results[1][1])
return [(index, letter) for index, letter in results if letter != "X"]
``` | Now that you've explained you want the first index of every letter after the first, here's a one-liner:
```
letters=['A','A','A','A','A','A','A','A','A','A','A','A','B','C','C','X','D','X','B','B','A','A','A','A']
[(n+1, b) for (n, (a,b)) in enumerate(zip(letters,letters[1:])) if a!=b and b!='X']
#=> [(12, 'B'), (13, 'C'), (16, 'D'), (18, 'B'), (20, 'A')]
```
Now, your first entry is different. For this, you need to use a recipe which finds the last index of each item:
```
import itertools
grouped = [(len(list(g))-1,k) for k,g in (itertools.groupby(letters))]
weird_transitions = [grouped[0]] + [(n+1, b) for (n, (a,b)) in enumerate(zip(letters,letters[1:])) if a!=b and b!='X']
#=> [(11, 'A'), (12, 'B'), (13, 'C'), (16, 'D'), (18, 'B'), (20, 'A')]
```
Of course, you could avoid creating the whole list of `grouped`, because you only ever use the first item from groupby. I leave that as an exercise for the reader.
This will also give you an X as the first item, if X is the first (set of) items. Because you say nothing about what you're doing, or why the Xs are there, but omitted, I can't figure out if that's the right behaviour or not. If it's not, then probably use my entire other recipe (in my other answer), and then take the first item from that. | An algorithm to find transitions in Python | [
"",
"python",
"algorithm",
""
] |
Here is an example of my table:
```
ID Name ClickLink1 ClickLink2 ClickLink3
-- ---- ---------- ---------- ----------
1 John Landing ThankYou
2 Abby ThankYou Landing Landing
3 Chris ThankYou
4 Sam Landing ThankYou ThankYou
```
I'm looking for results such as:
```
Page Link Count
---- ---- -----
Landing ClickLink1 2
Landing ClickLink2 1
Landing ClickLink3 1
```
Ultimately I will repeat the query in a separate report for the "ThankYou" page, but I can easily duplicate it based on the query for the "Landing" page.
**Using SQL Server 2008 R2** | I think this would do it below, and should be an agnostic solution since you didn't provide the DBMS flavor.
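The pattern is indeed engine-agnostic; as a hedged check, here it is run end-to-end in SQLite from Python, with rows mirroring the question's table (empty cells stored as NULL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE myTable (ID INT, Name TEXT, ClickLink1 TEXT, ClickLink2 TEXT, ClickLink3 TEXT)")
con.executemany("INSERT INTO myTable VALUES (?, ?, ?, ?, ?)", [
    (1, 'John',  'Landing',  'ThankYou', None),
    (2, 'Abby',  'ThankYou', 'Landing',  'Landing'),
    (3, 'Chris', 'ThankYou', None,       None),
    (4, 'Sam',   'Landing',  'ThankYou', 'ThankYou'),
])

# Unpivot the three columns with UNION ALL, then count one page's hits
rows = con.execute("""
    SELECT Page, Link, COUNT(*) AS Cnt FROM (
        SELECT ClickLink1 AS Page, 'ClickLink1' AS Link FROM myTable
        UNION ALL SELECT ClickLink2, 'ClickLink2' FROM myTable
        UNION ALL SELECT ClickLink3, 'ClickLink3' FROM myTable
    ) AS u
    WHERE Page = 'Landing'
    GROUP BY Page, Link
    ORDER BY Link
""").fetchall()
print(rows)  # matches the desired result table
```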
```
SELECT ClickLink1 as Page, 'ClickLink1' as Link, Count(ID) as Count
FROM myTable
GROUP BY ClickLink1
UNION
SELECT ClickLink2 as Page, 'ClickLink2' as Link, Count(ID) as Count
FROM myTable
GROUP BY ClickLink2
UNION
SELECT ClickLink3 as Page, 'ClickLink3' as Link, Count(ID) as Count
FROM myTable
GROUP BY ClickLink3
``` | The main issue is that your current table is denormalized so you need to count across the columns. One way to do this would be to unpivot the data from multiple columns into multiple rows.
There are a few different ways that you can do this. You can use a UNION ALL to convert the columns into rows and then count the values:
```
select page, link, count(*) Total
from
(
select ClickLink1 as page, 'ClickLink1' as link
from yourtable
union all
select ClickLink2 as page, 'ClickLink2' as link
from yourtable
union all
select ClickLink3 as page, 'ClickLink3' as link
from yourtable
) d
where page = 'Landing'
group by page, link;
```
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/33fe2/7). Another way would be to use a CROSS JOIN to a virtual table and count the values:
```
SELECT page,
col as link,
COUNT(*) AS TOTAL
FROM
(
SELECT col,
CASE s.col
WHEN 'ClickLink1' THEN ClickLink1
WHEN 'ClickLink2' THEN ClickLink2
WHEN 'ClickLink3' THEN ClickLink3
END AS page
FROM yourtable
CROSS JOIN
(
SELECT 'ClickLink1' AS col UNION ALL
SELECT 'ClickLink2' UNION ALL
SELECT 'ClickLink3'
) s
) s
where page = 'Landing'
group by page, col;
```
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/33fe2/6). Depending on your database that you are using you might be able to use an UNPIVOT function along with the aggregate to get the result. For example, if you are using SQL Server you can use:
```
select page, link, count(*) Total
from yourtable
unpivot
(
page
for link in (ClickLink1, ClickLink2, ClickLink3)
) unpiv
where page = 'Landing'
group by page, link;
```
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/33fe2/9) | SQL Query counting appearances in multiple columns | [
"",
"sql",
"sql-server",
""
] |
ok, so I have a list of lists
```
list = [['a','b','c'], ['1','2','3'], ['x','y','z']]
```
and I want to edit the first item of each list so it has a symbol before it. A "?" in this example. I figure I can use list comprehension to do this. Something similar to this:
```
list = ['?'+x for x in i[0] for i in list]
```
But that just gives me an error. This list comprehension stuff confuses me, how do I do it? | Do
```
l = [['?' + i[0]] + i[1:] for i in l]  # l is the list you pass in
``` | First of all, don't name a variable `list`; you are now masking the [built-in `list()` type](http://docs.python.org/2/library/functions.html#list), and can easily lead to bugs if you expect `list()` to still be that type elsewhere in your code.
To prepend a string to the first element of each nested list, use a simple list comprehension:
```
outerlist = [['?' + sub[0]] + sub[1:] for sub in outerlist]
```
This builds a new list from the nested list, by concatenating a one-element list with the first element altered, plus the remainder of the sublist.
Demo:
```
>>> outerlist = [['a','b','c'], ['1','2','3'], ['x','y','z']]
>>> [['?' + sub[0]] + sub[1:] for sub in outerlist]
[['?a', 'b', 'c'], ['?1', '2', '3'], ['?x', 'y', 'z']]
``` | List comprehension for a list of lists | [
"",
"python",
"list-comprehension",
""
] |
Often, some of the answers mention that a given solution is **linear**, or that another one is **quadratic**.
How can I tell the difference / identify which is which?
Can someone explain this in the simplest possible way, for those of us who still don't know? | A method is linear when the time it takes increases linearly with the number of elements involved. For example, a for loop which prints the elements of an array is roughly linear:
```
for x in range(10):
print x
```
because if we print range(100) instead of range(10), the time it will take to run it is 10 times longer. You will see very often that written as O(N), meaning that the time or computational effort to run the algorithm is proportional to N.
Now, let's say we want to print the elements of two for loops:
```
for x in range(10):
for y in range(10):
print x, y
```
For every x, I go 10 times looping y. For this reason, the whole thing goes through 10x10=100 prints (you can see them just by running the code). If instead of using 10, I use 100, now the method will do 100x100=10000. In other words, the method goes as O(N\*N) or O(N²), because every time you increase the number of elements, the computation effort or time will increase as the square of the number of points. | They must be referring to run-time complexity also known as Big O notation. This is an extremely large topic to tackle. I would start with the article on wikipedia: <https://en.wikipedia.org/wiki/Big_O_notation>
When I was researching this topic one of the things I learned to do is graph the runtime of my algorithm with different size sets of data. When you graph the results you will notice that the line or curve can be classified into one of several orders of growth.
Understanding how to classify the runtime complexity of an algorithm will give you a framework to understanding how your algorithm will scale in terms of time or memory. It will give you the power to compare and classify algorithms loosely with each other.
I'm no expert but this helped me get started down the rabbit hole.
Here are some typical orders of growth:
* O(1) - constant time
* O(log n) - logarithmic
* O(n) - linear time
* O(n^2) - quadratic
* O(2^n) - exponential
* O(n!) - factorial
If the wikipedia article is difficult to swallow, I highly recommend watching some lectures on the subject on iTunes University and looking into the topics of algorithm analysis, big-O notation, data structures and even operation counting.
Good luck! | Linear time v.s. Quadratic time | [
"",
"python",
"big-o",
"complexity-theory",
"time-complexity",
""
] |
I am trying to get a list of players who are logged into an online game (DayZ). The backend database is MySQL and the way to determine if a user is logged in is not exactly straightforward. It is building this query that I could use help with.
Basically there is a table named `player_login` that makes a new entry every time a player logs into and out of the system. If the user logs in then a field named `action` is set to `2`. If the user logs out the `action` field is set to `0`. Here is some sample data (simplified)
```
LoginID PlayerUID DateStamp Action
126781 79067462 2013-08-01 13:16:28 0
126777 79067462 2013-08-01 12:59:22 2
126775 79067462 2013-08-01 12:42:10 0
126774 79067462 2013-08-01 12:41:34 2
126773 79067462 2013-08-01 12:38:38 0
```
I can query the table to find out if a single user is logged in via the following query ..
```
SELECT PlayerUID, Action
FROM player_login
WHERE PlayerUID = 79067462
ORDER BY Datestamp DESC
LIMIT 1
```
If the result is `2` then the last thing the player did was log in and they are therefore online. If the value is `0` then the last thing the player did was log out and they are therefore offline. What I am having some difficulty doing is transforming this query into something that would return a list of PlayerUIDs where the latest `Action` value is `2`, thereby giving me all players currently online. | The following query yields the most recent logout time for each player:
```
SELECT PlayerUID, MAX(DateStamp) AS MostRecentLogout
FROM player_login
WHERE Action = 0
GROUP BY PlayerUID
```
A similar query with `Action = 2` yields all most-recent login times. Combining these lets you compare them:
```
SELECT plin.PlayerUID, plin.MostRecentLogin, plout.MostRecentLogout,
CASE WHEN plout.MostRecentLogout IS NULL
OR plout.MostRecentLogout < plin.MostRecentLogin
THEN 1
ELSE 0
END AS IsPlayerLoggedIn
FROM (SELECT PlayerUID, MAX(DateStamp) AS MostRecentLogin
FROM player_login
WHERE Action = 2
GROUP BY PlayerUID) plin
LEFT JOIN (SELECT PlayerUID, MAX(DateStamp) AS MostRecentLogout
FROM player_login
WHERE Action = 0
GROUP BY PlayerUID) plout ON plout.PlayerUID = plin.PlayerUID
```
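The same logic can be sanity-checked end to end with SQLite from Python's standard library (sample rows invented from the question; T-SQL syntax adjusted slightly for SQLite):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE player_login (PlayerUID INT, DateStamp TEXT, Action INT)")
con.executemany(
    "INSERT INTO player_login VALUES (?, ?, ?)",
    [(79067462, "2013-08-01 12:41:34", 2),    # login
     (79067462, "2013-08-01 12:42:10", 0),    # logout
     (11111111, "2013-08-01 13:00:00", 2)])   # login, never logged out
rows = con.execute("""
    SELECT plin.PlayerUID,
           CASE WHEN plout.MostRecentLogout IS NULL
                  OR plout.MostRecentLogout < plin.MostRecentLogin
                THEN 1 ELSE 0 END AS IsPlayerLoggedIn
    FROM (SELECT PlayerUID, MAX(DateStamp) AS MostRecentLogin
          FROM player_login WHERE Action = 2 GROUP BY PlayerUID) plin
    LEFT JOIN (SELECT PlayerUID, MAX(DateStamp) AS MostRecentLogout
               FROM player_login WHERE Action = 0 GROUP BY PlayerUID) plout
           ON plout.PlayerUID = plin.PlayerUID
    ORDER BY plin.PlayerUID
""").fetchall()
print(rows)  # [(11111111, 1), (79067462, 0)]
```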
Note the LEFT JOIN and attendant NULL handling in the CASE statement: this is meant to handle players that have logged in for the first time and never yet logged out. | try this
```
SELECT PlayerUID, Action
FROM player_login
WHERE `Action` = 0
GROUP BY PlayerUID
ORDER BY Datestamp DESC
``` | SQL query to determine which users are online | [
"",
"mysql",
"sql",
""
] |
a simple database design question has been bugging me for a while, thought I'll ask it here.
Suppose I have a database table, `"Loan"` with the following fields,
```
StudentIdentification, LoanDate, ReturnDate
```
This table is used to track every student who has loaned something (not in the database).
Since every student can loan and return and loan again (but not loan multiple times without returning; a loan must be followed by a return), a composite primary key is used:
```
StudentIdentification, LoanDate
```
Is it better to store data this way or instead to have 2 tables,
```
table 1: Loan ( StudentIdentification, LoanDate)
table 2: LoanHistory ( StudentIdentification, LoanDate, ReturnDate)
```
in this case, Loan table's primary key is
```
StudentIdentification
```
and LoanHistory table's primary key is
```
StudentIdentification, LoanDate
```
Everytime the student returns, the record in "Loan" is moved to the "LoanHistory" table with the ReturnDate updated (done in a transaction).
Which is better? | I would create a single table, and then have a filtered index (SQL Server 2008+) or indexed view (SQL Server 2005-) to enforce that there is only a single row for each student with a `NULL` return date:
```
CREATE TABLE Loans (
StudentID int not null,
LoanDate datetime not null,
ReturnDate datetime null,
constraint PK_Loans PRIMARY KEY (StudentID,LoanDate),
constraint CK_Loans_NoTimeTravel CHECK (LoanDate < ReturnDate)
)
```
Filtered index:
```
CREATE UNIQUE INDEX IX_Loans_SingleOpen ON Loans (StudentID) WHERE ReturnDate IS NULL
```
Indexed view:
```
CREATE VIEW dbo.Loans_SingleOpen_DRI
WITH SCHEMABINDING
AS
SELECT StudentID FROM dbo.Loans WHERE ReturnDate IS NULL
GO
CREATE UNIQUE CLUSTERED INDEX IX_Loans_SingleOpen ON Loans_SingleOpen_DRI (StudentID)
```
(Assuming `dbo` is the appropriate schema - which is needed for `SCHEMABINDING`, which in turn is needed to create the index) | You can use a simple SCD (slowly changing dimension) structure and use one table to store both current and historical data.
```
StudentIdentification, CreationDate, LoanDate, ReturnDate
```
the first two columns are the PK.
You can also add an amount to indicate a loan or a return.
You might find [this question](https://stackoverflow.com/questions/15901223/sql-store-rate-changing-in-time-and-select-rate-valid-for-particular-time-instan/15901397#15901397) relevant | Storing all loan (insert loan date, update return date) records in one table or 2 tables (with history) is better? | [
"",
"sql",
"sql-server",
"database",
"oracle",
""
] |
This is extracted from Learning Python, 4th edition. Its function is to subclass set using list. But I don't understand line 5, `list.__init__([])`; please help. The code works even when I comment this line out. Why?
```
### file: setsubclass.py
class Set(list):
def __init__(self, value = []): # Constructor
list.__init__([]) # Customizes list
self.concat(value) # Copies mutable defaults
def intersect(self, other): # other is any sequence
res = [] # self is the subject
for x in self:
if x in other: # Pick common items
res.append(x)
return Set(res) # Return a new Set
def union(self, other): # other is any sequence
res = Set(self) # Copy me and my list
res.concat(other)
return res
def concat(self, value): # value: list, Set . . .
for x in value: # Removes duplicates
if not x in self:
self.append(x)
def __and__(self, other): return self.intersect(other)
def __or__(self, other): return self.union(other)
def __repr__(self): return 'Set:' + list.__repr__(self)
if __name__ == '__main__':
x = Set([1,3,5,7])
y = Set([2,1,4,5,6])
print(x, y, len(x))
print(x.intersect(y), y.union(x))
print(x & y, x | y)
x.reverse(); print(x)
x
``` | The code in the book contains an error. I've submitted an errata to O'Reilly books, which you can read along with the authors response on [this page](http://oreilly.com/catalog/errata.csp?isbn=0636920028154) (search for 982). Here's a small snippet of his response:
> This code line has apparently been present in the book since the 2nd Edition (of 2003--10 years ago!), and has gone unremarked by hundreds of thousands of readers until now
The line `list.__init__([])` is *missing an argument*, and commenting it out makes no difference whatsoever, except speeding up your program slightly. Here's the corrected line:
```
list.__init__(self, [])
```
When calling methods that are not static methods or class methods directly on class objects, the normally implicit first argument `self` must be provided explicitly. If the line is corrected like this it would follow the good practice that Antonis talks about in his answer. Another way to correct the line would be by using `super`, which again makes the `self` argument implicit.
```
super(Set, self).__init__([])
```
The code in the book provides a *different* empty list (`[]`) as the `self` argument, which causes *that* list to be initialized over again, whereupon it is quickly garbage collected. In other words, the whole line is dead code.
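A quick sketch of the difference (class names invented here, not from the book):

```python
class Buggy(list):
    def __init__(self, value=()):
        list.__init__([])           # missing self: initializes a throwaway list

class Fixed(list):
    def __init__(self, value=()):
        list.__init__(self, value)  # self passed explicitly: fills this instance

print(Buggy([1, 2, 3]))  # [] -- the buggy call never touched self
print(Fixed([1, 2, 3]))  # [1, 2, 3]
```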
To verify that the original line has no effect is easy: temporarily change `[]` in `list.__init__([])` to a non-empty list and observe that the resulting `Set` instance doesn't contain those elements. Then insert `self` as the first argument, and observe that the items in the list are now added to the `Set` instance. | You mean this line?
```
list.__init__([])
```
When you override the `__init__` method of any type, it's good practice to always call the inherited `__init__` method; that is, the `__init__` method of the base class. This way you perform initialization of the parent class, and add initialization code specific to the child class.
You should follow this practice even if you are confident that the `__init__` of the parent does nothing, in order to ensure compatibility with future versions.
**Update:** As explained by Lauritz in another answer, the line
```
list.__init__([])
```
is wrong. See his answer and the other answers for more. | Extending Types by Subclassing in Python | [
"",
"python",
""
] |
I have two tables with the following structure:
```
DECLARE @Table1 TABLE
(
IdColumn INT,
DateColumn DATETIME
)
DECLARE @Table2 TABLE
(
IdColumn INT,
DateColumn DATETIME,
Value NUMERIC(18,2)
)
```
What i want to do is get the latest value from table2 having a less or equal date in table1.
This is the query i build:
```
SET NOCOUNT ON
DECLARE @Table1 TABLE
(
IdColumn INT,
DateColumn DATETIME
)
DECLARE @Table2 TABLE
(
IdColumn INT,
DateColumn DATETIME,
Value NUMERIC(18,2)
)
DECLARE @RefDate DATETIME='2012-09-01'
DECLARE @NMonths INT
DECLARE @MonthsCounter INT=1
SELECT @NMonths=DATEDIFF(MM,'2012-09-01','2013-03-01')
WHILE @MonthsCounter<=@NMonths
BEGIN
INSERT INTO @Table1
SELECT 1,@RefDate
SET @RefDate=DATEADD(MM,1,@RefDate);
SET @MonthsCounter+=1;
END
INSERT @Table2
SELECT 1,'2012-09-01',1000
UNION
SELECT 1,'2012-12-01',5000
UNION
SELECT 1,'2013-01-01',3000
SELECT
T1.IdColumn,
T1.DateColumn,
T2.Value
FROM @Table1 T1
LEFT JOIN @Table2 T2
ON T2.IdColumn=T1.IdColumn AND T1.DateColumn>=t2.DateColumn
```
The problem is that when a new value comes with a more recent date, I get all the values up to that date.
```
IdColumn DateColumn Value
----------- ----------------------- ---------------------------------------
1 2012-09-01 00:00:00.000 1000.00
1 2012-10-01 00:00:00.000 1000.00
1 2012-11-01 00:00:00.000 1000.00
1 2012-12-01 00:00:00.000 1000.00
1 2012-12-01 00:00:00.000 5000.00
1 2013-01-01 00:00:00.000 1000.00
1 2013-01-01 00:00:00.000 5000.00
1 2013-01-01 00:00:00.000 3000.00
1 2013-02-01 00:00:00.000 1000.00
1 2013-02-01 00:00:00.000 5000.00
1 2013-02-01 00:00:00.000 3000.00
```
The desired output is this one:
```
IdColumn DateColumn Value
----------- ----------------------- ---------------------------------------
1 2012-09-01 00:00:00.000 1000.00
1 2012-10-01 00:00:00.000 1000.00
1 2012-11-01 00:00:00.000 1000.00
1 2012-12-01 00:00:00.000 5000.00
1 2013-01-01 00:00:00.000 3000.00
1 2013-02-01 00:00:00.000 3000.00
```
How can I solve this?
Thanks. | I would do this with a correlated subquery:
```
select t1.*,
(select top 1 value
from @table2 t2
where t2.idColumn = t1.idColumn and
t2.dateColumn <= t1.dateColumn
order by t2.dateColumn desc
) t2value
from @table1 t1;
``` | I am just posting Gordon's Answer with correct syntax :
```
select t1.*,
(select top 1 value
from @table2 t2
where t2.IdColumn = t1.IdColumn and
t2.DateColumn <= t1.DateColumn
order by t2.DateColumn desc
) t2value
from @table1 t1
``` | Get the latest value on each date | [
"",
"sql",
"sql-server",
""
] |
I have two tables:
```
FirstField | SecondField | ThirdField
FirstValue SecondValue ThirdValues
----------------------------------------------
FirstField | SecondField | ThirdField
OtherValue1 OtherValue2 OtherValue3
```
What I need it to add those two tables together into one SQL query. They can not be joined as I don't have anything to join them on and that's not what I want. I want my new table to look like:
```
FirstField | SecondField | ThirdField
FirstValue SecondValue ThirdValues
OtherValue1 OtherValue2 OtherValue3
```
This may be very simple but I am new to SQL and have been unable to find any help elsewhere. | Try `UNION ALL`:
```
SELECT FirstField ,SecondField ,ThirdField
FROM Table1
UNION ALL
SELECT FirstField ,SecondField ,ThirdField
FROM Table2
```
If you want to remove duplicate rows use `UNION` instead.
```
SELECT FirstField ,SecondField ,ThirdField
FROM Table1
UNION
SELECT FirstField ,SecondField ,ThirdField
FROM Table2
``` | Have a look at using a [UNION/UNION ALL](http://technet.microsoft.com/en-us/library/ms180026.aspx)
> Combines the results of two or more queries into a single result set
> that includes all the rows that belong to all queries in the union.
> The UNION operation is different from using joins that combine columns
> from two tables.
So something like
```
SELECT Field1,
Field2,
...
Fieldn
FROM Table1
UNION ALL
SELECT Field1,
Field2,
...
Fieldn
FROM Table2
``` | Multiple SQL tables added together without a JOIN | [
"",
"sql",
""
] |
I have a python dictionary of the form :
```
a1 = {
'SFP_1': ['cat', '3'],
'SFP_0': ['cat', '5', 'bat', '1']
}
```
The end result I need is a dictionary of the form :
```
{'bat': '1', 'cat': '8'}
```
I am currently doing this:
```
b1 = list(itertools.chain(*a1.values()))
c1 = dict(itertools.izip_longest(*[iter(b1)] * 2, fillvalue=""))
```
which gives me the output:
```
>>> c1
{'bat': '1', 'cat': '5'}
```
I can iterate over the dictionary and get this but can anybody give me a more pythonic way of doing the same? | Using [defaultdict](http://docs.python.org/2/library/collections.html#collections.defaultdict):
```
import itertools
from collections import defaultdict
a1 = {u'SFP_1': [u'cat', u'3'], u'SFP_0': [u'cat', u'5', u'bat', u'1']}
b1 = itertools.chain.from_iterable(a1.itervalues())
c1 = defaultdict(int)
for animal, count in itertools.izip(*[iter(b1)] * 2):
c1[animal] += int(count)
# c1 => defaultdict(<type 'int'>, {u'bat': 1, u'cat': 8})
c1 = {animal: str(count) for animal, count in c1.iteritems()}
# c1 => {u'bat': '1', u'cat': '8'}
``` | ```
In [8]: a1 = {
'SFP_1': ['cat', '3'],
'SFP_0': ['cat', '5', 'bat', '1']
}
In [9]: answer = collections.defaultdict(int)
In [10]: for L in a1.values():
for k,v in itertools.izip(itertools.islice(L, 0, len(L), 2),
itertools.islice(L, 1, len(L), 2)):
answer[k] += int(v)
In [11]: answer
Out[11]: defaultdict(<type 'int'>, {'bat': 1, 'cat': 8})
In [12]: dict(answer)
Out[12]: {'bat': 1, 'cat': 8}
``` | Convert list into dictionary in python | [
"",
"python",
""
] |
I am trying to make a numpy array that looks like this:
```
[a b c ]
[ a b c ]
[ a b c ]
[ a b c ]
```
So this involves updating the main diagonal and the two diagonals above it.
What would be an efficient way of doing this? | This is an example of a [Toeplitz matrix](https://en.wikipedia.org/wiki/Toeplitz_matrix) - you can construct it using [`scipy.linalg.toeplitz`](http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.linalg.toeplitz.html):
```
import numpy as np
from scipy.linalg import toeplitz
first_row = np.array([1, 2, 3, 0, 0, 0])
first_col = np.array([1, 0, 0, 0])
print(toeplitz(first_col, first_row))
# [[1 2 3 0 0 0]
# [0 1 2 3 0 0]
# [0 0 1 2 3 0]
# [0 0 0 1 2 3]]
``` | You can use `np.indices` to get the indices of your array and then assign the values where you want.
```
a = np.zeros((5,10))
i,j = np.indices(a.shape)
```
`i,j` are the line and column indices, respectively.
```
a[i==j] = 1.
a[i==j-1] = 2.
a[i==j-2] = 3.
```
will result in:
```
array([[ 1., 2., 3., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 1., 2., 3., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 1., 2., 3., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 1., 2., 3., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 1., 2., 3., 0., 0., 0.]])
``` | Make special diagonal matrix in Numpy | [
"",
"python",
"arrays",
"numpy",
"matrix",
"toeplitz",
""
] |
I am learning about reading/writing strings, lists, etc. to txt/dat files. I wanted to add a comment in my code so I can refer back to what the access modes are. So this is what I did.
```
# Mode Description
# rb Read from a binary file. If the file doesn’t exist, Python will complain with an error.
# wb Write to a binary file. If the file exists, its contents are overwritten. If the file doesn’t exist,
# it’s created.
# ab Append a binary file. If the file exists, new data is appended to it. If the file doesn’t exist, it’s
# created.
# rb+ Read from and write to a binary file. If the file doesn’t exist, Python will complain with an
# error.
# wb+ Write to and read from a binary file. If the file exists, its contents are overwritten. If the file
# doesn’t exist, it’s created.
# ab+ Append and read from a binary file.
```
And after that, I have:
```
import pickle, shelve
print("Pickling lists.")
variety = ["sweet", "hot", "dill"]
shape = ["whole", "spear", "chip"]
brand = ["Claussen", "Heinz", "Vlassic"]
f = open("pickles1.dat", "wb")
pickle.dump(variety, f)
pickle.dump(shape, f)
pickle.dump(brand, f)
f.close()
print("\nUnpickling lists.")
f = open("pickles1.dat", "rb")
variety = pickle.load(f)
shape = pickle.load(f)
brand = pickle.load(f)
print(variety)
print(shape)
print(brand)
f.close()
```
When I run it I get the following error:
SyntaxError: Non-UTF-8 code starting with '\x92' in file PickleIt.py on line 10, but no encoding declared; see <http://python.org/dev/peps/pep-0263/> for details
I checked out the link, but I really don't understand it, I haven't seen this before.
---
Oh and apologies **line 10 is `# rb`** | Replace all the `’` by `'`.
It's complaining because you didn't declare an encoding at the beginning of the file. The source is expected to be `utf-8` by default, but your editor saved the file in a different encoding: `\x92` is the Windows-1252 byte for `’`, and those bytes are not valid UTF-8.
Instead of replacing, you can add this line to the beginning (before the comments):
`# coding: iso-8859-1`
(Or another encoding in which those bytes are valid; `cp1252` is most likely what your editor actually used.)
This line sets the encoding for the file and allows the use of special characters. | Replace the smart quotes `’` with the regular quote `'` in your comments. | Python - Error when trying to add comments | [
"",
"python",
"python-3.x",
"syntax-error",
""
] |
I have been trying to return a variable from a function and use it outside of it:
```
test = 0
def testing():
test = 1
return test
testing()
print(test)
```
But when I run it, the result is 0. How could I fix this problem? | You are messing up the scopes and/or the assignment a bit. Try this:
```
def testing():
test = 1
return test
test = testing()
print(test)
```
Explanation: The `test` inside `testing` is different from the `test` inside the module. You have to assign it at module level to get the expected result. | Because you declare `test` in the function, it is not a global variable, so you cannot access the variable `test` you created in the function outside of it; they are in different scopes.
If you want to `return test` to a variable, you have to do
```
result = testing()
print(result)
```
Or, you can also add a `global` statement:
```
test = 0
def testing():
global test
test = 1
return test
testing()
print(test)
```
---
By the way, when doing a conditional statement, you don't need the brackets around the `1==1` :). | Returning Variables in Functions Python Not Working Right | [
"",
"python",
"variables",
"return-value",
""
] |
My table scheme is as follows: (Bold column name is primary key)
Table 1: **id1** - id2
Table 2: **id2** - name2
Table 3: **id3** - name3
Table 4: id1 - Id3
What I want to do is have sql code that :
1. Select data in id1 and id3 columns for which name2=input=name3
2. Insert into table 4
3. Only insert into 4 if id1, id3 combination doesn't exist in table 4
Currently I can do steps 1 and 2, but (assuming it can be done) I cannot get the syntax for "NOT EXISTS" correct for step 3.
This is my code currently:
```
INSERT INTO table4( id1, id3)
SELECT id1, id3
FROM table2
INNER JOIN table1 ON table1.id2 = table2.id2
INNER JOIN table3 ON table2.name2 = table3.name3
WHERE name2 LIKE 'input'
``` | Here is the query you need
```
insert into table4(id1, id3)
select t1.id1, t3.id3
from table2 as t2
inner join table1 as t1 on t1.id2 = t2.id2
inner join table3 as t3 on t2.name2 = t3.name3
where
t2.name2 like 'input' and
not exists (
select *
from table4 as t4
where t4.id1 = t1.id1 and t4.id3 = t3.id3
)
```
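To see the `NOT EXISTS` guard behave idempotently on its own, here is a stripped-down SQLite sketch (one table only, not the poster's four-table schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id1 INT, id3 INT)")
con.execute("INSERT INTO t VALUES (1, 10)")
sql = """
INSERT INTO t (id1, id3)
SELECT 2, 20
WHERE NOT EXISTS (SELECT * FROM t WHERE id1 = 2 AND id3 = 20)
"""
con.execute(sql)
con.execute(sql)  # second run inserts nothing: the pair already exists
print(con.execute("SELECT * FROM t ORDER BY id1").fetchall())
# [(1, 10), (2, 20)]
```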
As a piece of advice: I suggest you always use aliases (and refer to columns as `alias.column_name`) in your queries; it'll help you avoid bugs and make your queries more readable. | I think you are looking for this
```
INSERT INTO table4( id1, id3)
SELECT id1, id3
FROM table2
INNER JOIN table1 ON table1.id2 = table2.id2
Left JOIN table3 ON table2.name2 = table3.name3
WHERE name2 LIKE 'input' and table3.name3 is null
```
or something similar. A LEFT (outer) JOIN gets all the records in table2 whether or not a match exists in table3. If it doesn't, table3.name3 will be null, so those are the rows you want. | Select data from one table and insert into another existing table, which doesn't exist in the table | [
"",
"mysql",
"sql",
""
] |
Just wanted to see if there is a way to output a result that is not stored based on the addition of up to 4 fields?
Have a table that holds the count of passengers on a service for different categories (Adult, Child, Infant and staff), and I want to try and output a number, based on the result of these fields, that is not stored.
For example, if the result of the addition is >15 then output 45; if >9, output 35. The output is the size of the coach that is required.
I know I can do it in Excel after the data is extracted but was wondering if it can be done before and included with the data?
Any suggestions and help appreciated. | You can do this with a query on the data:
```
select t.*,
(case when Adult + Child + Infant + staff > 15 then 45
when Adult + Child + Infant + staff > 9 then 35
end) as CoachSize
from t;
```
You can also do this using a `view` so it is available as if it were a table:
```
create view vw_t as
select t.*,
(case when Adult + Child + Infant + staff > 15 then 45
when Adult + Child + Infant + staff > 9 then 35
end) as CoachSize
from t;
```
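As a quick sanity check of the `CASE` logic with invented passenger counts, here is a SQLite version runnable from Python:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (Adult INT, Child INT, Infant INT, staff INT)")
con.executemany("INSERT INTO t VALUES (?, ?, ?, ?)",
                [(10, 4, 1, 2),   # 17 on board -> 45-seat coach
                 (6, 3, 1, 1),    # 11 -> 35
                 (2, 1, 0, 1)])   # 4  -> NULL (no rule given for <= 9)
rows = con.execute("""
    SELECT Adult + Child + Infant + staff AS total,
           CASE WHEN Adult + Child + Infant + staff > 15 THEN 45
                WHEN Adult + Child + Infant + staff > 9  THEN 35
           END AS CoachSize
    FROM t
    ORDER BY total DESC
""").fetchall()
print(rows)  # [(17, 45), (11, 35), (4, None)]
```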
And, in some databases, you can add a computed column directly into the table definition. | Depending on what SQL program you're using, most offer the `CASE WHEN` statement (see this [SQL Fiddle example](http://sqlfiddle.com/#!6/e416c/1))
```
CREATE TABLE CaseValues(
Value INT
)
INSERT INTO CaseValues
VALUES (1)
, (16)
, (9)
SELECT CASE WHEN Value > 15 THEN 45
WHEN Value BETWEEN 6 AND 14 THEN 35
ELSE 25 END AS Result
FROM CaseValues
```
You can also use `CASE WHEN` on multiple columns, [see this example](http://sqlfiddle.com/#!6/b7669/1). | Is there anything like an Excel IF Statement in SQL | [
"",
"sql",
""
] |
Given two lists, I want to merge them so that all elements from the first list are even-indexed (preserving their order) and all elements from second list are odd-indexed (also preserving their order). Example below:
```
x = [0,1,2]
y = [3,4]
result = [0,3,1,4,2]
```
I can do it using a for loop. But I guess there could be a fancy Pythonic way of doing this (using a less-known function or something like that). Is there any better solution than writing a for loop?
edit: I was thinking about list comprehensions, but didn't come up with any solution so far. | You can simply do:
```
for i,v in enumerate(y):
x.insert(2*i+1,v)
```
This takes advantage of the fact that `insert` will simply append when the given index is past the end of the list.
One example:
```
x = [0,1,2,3,4,5]
y = [100, 11,22,33,44,55,66,77]
print x
# [0, 100, 1, 11, 2, 22, 3, 33, 4, 44, 5, 55, 66, 77]
``` | Here's something you can use. (Use `list(izip_longest(...))` for Py2x)
```
>>> from itertools import chain
>>> from itertools import zip_longest
>>> list(filter(lambda x: x != '', chain.from_iterable(zip_longest(x, y, fillvalue = ''))))
[0, 3, 1, 4, 2]
```
This works for arbitrary length lists like follows -
```
>>> x = [0, 1, 2, 3, 4]
>>> y = [5, 6]
>>> list(filter(lambda x: x != '', chain.from_iterable(zip_longest(x, y, fillvalue = ''))))
[0, 5, 1, 6, 2, 3, 4]
```
**Explanation on it's working** -
1. `zip_longest(...)` with a fill value zips the lists and fills in the given fill value for iterables of unequal length. So, for your original example, it evaluates to something like `[(0, 3), (1, 4), (2, '')]`
2. We need to flatten the result because this method gives us a list of tuples. For that we use `chain.from_iterable(...)` giving us something like `[0, 3, 1, 4, 2, '']`.
3. We now use `filter(...)` to remove all occurences of `''` and we get the required answer. | python merge two lists (even/odd elements) | [
"",
"python",
"list",
"merge",
""
] |
```
Select convert(varchar(8), max(checkdate),1) lastcheckdate
from table
where status = 'processed'
and not status in ('delivered', 'scheduled')
Select convert(varchar(8), max(checkdate),1) as nextcheckdate
from table
where status = 'scheduled'
and not status in ('delivered', 'processed')
```
What I'm looking for is 1 single row that has `nextcheckdate` and `lastcheckdate`. Any help would be great. | Formally, I think the following returns the two values.
```
select convert(varchar(8), max(case when status = 'processed' then checkdate end), 1
) as lastcheckdate,
convert(varchar(8), max(case when status = 'scheduled' then checkdate end), 1
) as nextcheckdate
from table;
```
However, your queries are somewhat nonsensical. The `where` clause in each case is checking that status has a value (say `'processed'`). When this is true, the `not` part is always true.
This leads me to suspect that there is some aggregation involved, where you are looking for `processed` on rows with some id that don't have the `delivered` or `scheduled` values. This is speculation, because there is not enough information in the question to know what you are really looking for. | Your WHERE clauses are redundant.
```
Select convert(varchar(8), max(checkdate),1) lastcheckdate
from table
where status = 'processed'
and not status in ('delivered', 'scheduled')
```
For any row searched, if status = 'processed', then status is neither 'delivered', nor 'scheduled'. They are exclusive.
Therefore, start working towards your solution by reducing code.
```
Select convert(varchar(8), max(checkdate),1) lastcheckdate
from table
where status = 'processed'
Select convert(varchar(8), max(checkdate),1) as nextcheckdate
from table
where status = 'scheduled'
```
Now, look at the select columns. Notice that the function logic is identical except you are aliasing the columns differently.
Let's make them consistent.
```
Select convert(varchar(8), max(checkdate),1) as next_or_last_check_date
from table
where status = 'processed'
Select convert(varchar(8), max(checkdate),1) as next_or_last_check_date
from table
where status = 'scheduled'
```
Now the only difference is the two status checks. We can OR them together to make a single query or use an IN (same difference).
```
Select convert(varchar(8), max(checkdate),1) as next_or_last_check_date
from table
where status = 'processed'
or status = 'scheduled'
```
same thing using in:
```
Select convert(varchar(8), max(checkdate),1) as next_or_last_check_date
from table
where status IN ( 'processed', 'scheduled' )
``` | Is it possible to have these 2 queries in one? | [
"",
"sql",
"sql-server",
""
] |
From a quick test, a Python dict of int=>int (with distinct values) holding 30 million items can easily eat >2 GB of memory on my Mac. Since I work only with int-to-int dicts, is there any better solution than using a Python dict?
Some requirements I need are,
1. more memory-efficient storage for tens of millions of int-to-int items
2. basic dict methods like fetching value by key and iterating all items
3. easy to serialise to string / binary would be a plus
Update,
4. easy to get subset by given keys, like d.fromkeys([...])
Thanks. | A Judy-array-based solution seems the option I should look into. I'm still looking for a good implementation that can be used from Python. Will update later.
Update,
finally, I'm experimenting with a Judy array wrapper at <http://code.google.com/p/py-judy/> .
There seems to be no documentation there, but I found its methods simply by calling dir(...) on the package and its objects, and it works.
In the same experiment it eats ~986MB, about 1/3 of a standard dict, using judy.JudyIntObjectMap. It also provides JudyIntSet, which in some scenarios will save much more memory, since unlike JudyIntObjectMap it doesn't need to reference real Python objects as values.
(As tested further below, the JudyArray itself uses only several MB to tens of MB; most of the ~986MB is actually used by the value objects in Python memory space.)
Here's some code if it helps for you,
```
>>> import judy
>>> dir(judy)
['JudyIntObjectMap', 'JudyIntSet', '__doc__', '__file__', '__name__', '__package__']
>>> a=judy.JudyIntObjectMap()
>>> dir(a)
['__class__', '__contains__', '__delattr__', '__delitem__', '__doc__', '__format__', '__getattribute__', '__getitem__', '__hash__', '__init__', '__iter__', '__len__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__setitem__', '__sizeof__', '__str__', '__subclasshook__', '__value_sizeof__', 'by_index', 'clear', 'get', 'iteritems', 'iterkeys', 'itervalues', 'pop']
>>> a[100]=1
>>> a[100]="str"
>>> a["str"]="str"
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
KeyError: 'non-integer keys not supported'
>>> for i in xrange(30000000):
... a[i]=i+30000000 #finally eats ~986MB memory
...
```
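For comparison (my own quick check, not from the original post), `sys.getsizeof` on a dict reports only the hash-table container itself, not the int key/value objects it references, which is part of why a dict's real footprint is so much larger:

```python
import sys

# 1M-entry int->int dict; the container alone is tens of MiB,
# and the referenced int objects cost extra on top of that.
d = {i: i + 1000 for i in range(1_000_000)}
print(len(d), round(sys.getsizeof(d) / 2**20, 1), "MiB of container overhead alone")
```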
Update,
OK, here is a JudyIntSet of 30M ints as tested.
```
>>> a=judy.JudyIntSet()
>>> a.add(1111111111111111111111111)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: we only support integers in the range [0, 2**64-1]
```
It uses only 5.7MB in total to store the 30M-element sequential int array [0, 30000000), which may be due to JudyArray's automatic compression. The ~709MB above is because I used range(...) instead of the more appropriate xrange(...) to generate the data.
So the size of the core JudyArray with 30M ints is simply negligible.
If anyone knows a more complete Judy array wrapper implementation, please let me know, since this wrapper only wraps JudyIntObjectMap and JudyIntSet. For an int-int dict, JudyIntObjectMap still requires real Python objects as values. If we only do counter-add and set on the values, it would be a good idea to store the int values in C space rather than as Python objects. I hope someone is interested in creating or introducing one :)
**arrays**
You could try using two arrays. One for the keys, and one for the values so that index(key) == index(value)
**Updated 2017-01-05:** use 4-byte integers in array.
An array would use less memory. On a 64-bit FreeBSD machine with python compiled with clang, an array of 30 million integers uses around 117 MiB.
These are the python commands I used:
```
Python 2.7.13 (default, Dec 28 2016, 20:51:25)
[GCC 4.2.1 Compatible FreeBSD Clang 3.8.0 (tags/RELEASE_380/final 262564)] on freebsd11
Type "help", "copyright", "credits" or "license" for more information.
>>> from array import array
>>> a = array('i', xrange(30000000))
>>> a.itemsize
4
```
After importing array, `ps` reports:
```
USER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND
rsmith 81023 0.0 0.2 35480 8100 0 I+ 20:35 0:00.03 python (python2.7)
```
After making the array:
```
USER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND
rsmith 81023 29.0 3.1 168600 128776 0 S+ 20:35 0:04.52 python (python2.7)
```
The Resident Set Size is reported in 1 KiB units, so (128776 - 8100)/1024 = 117 MiB
With list comprehensions you could easily get a list of indices where the key meets a certain condition. You can then use the indices in that list to access the corresponding values...
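A tiny sketch of that idea (with made-up sample data): collect the indices where the key passes a test, then read the values at those indices.

```python
from array import array

keys = array('i', [10, 20, 30, 40])   # made-up sample data
vals = array('i', [1, 2, 3, 4])

# indices where the key meets a condition, then the matching values
idx = [i for i, k in enumerate(keys) if k > 15]
matching = [vals[i] for i in idx]
print(matching)    # [2, 3, 4]
```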
**numpy**
If you have numpy available, using that is faster, has lots more features, and uses slightly less RAM:
```
Python 2.7.5 (default, Jun 10 2013, 19:54:11)
[GCC 4.2.1 Compatible FreeBSD Clang 3.1 ((branches/release_31 156863))] on freebsd9
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> a = np.arange(0, 30000000, dtype=np.int32)
```
From `ps`: 6700 KiB after starting Python, 17400 KiB after import numpy and 134824 KiB after creating the array. That's around 114 MiB.
Furthermore, numpy supports [record arrays](http://docs.scipy.org/doc/numpy/user/basics.rec.html);
```
Python 2.7.5 (default, Jun 10 2013, 19:54:11)
[GCC 4.2.1 Compatible FreeBSD Clang 3.1 ((branches/release_31 156863))] on freebsd9
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> a = np.zeros((10,), dtype=('i4,i4'))
>>> a
array([(0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0),
(0, 0), (0, 0)],
dtype=[('f0', '<i4'), ('f1', '<i4')])
>>> a.dtype.names
('f0', 'f1')
>>> a.dtype.names = ('key', 'value')
>>> a
array([(0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0),
(0, 0), (0, 0)],
dtype=[('key', '<i4'), ('value', '<i4')])
>>> a[3] = (12, 5429)
>>> a
array([(0, 0), (0, 0), (0, 0), (12, 5429), (0, 0), (0, 0), (0, 0), (0, 0),
(0, 0), (0, 0)],
dtype=[('key', '<i4'), ('value', '<i4')])
>>> a[3]['key']
12
```
Here you can access the keys and values separately;
```
>>> a['key']
array([ 0, 0, 0, 12, 0, 0, 0, 0, 0, 0], dtype=int32)
``` | efficient way to hold and process a big dict in memory in python | [
"",
"python",
"dictionary",
""
] |
I have a dictionary D where:
```
D = {'foo':{'meow':1.23,'mix':2.34}, 'bar':{'meow':4.56, 'mix':None}, 'baz':{'meow':None,'mix':None}}
```
I wrote this code to write it to a text file:
```
def dict2txt(D, writefile, column1='-', delim='\t', width=20, order=['mix','meow']):
import csv
with open( writefile, 'w' ) as f:
writer, w = csv.writer(f, delimiter=delim), []
head = ['{!s:{}}'.format(column1,width)]
for i in D[D.keys()[0]].keys(): head.append('{!s:{}}'.format(i,width))
writer.writerow(head)
for i in D.keys():
row = ['{!s:{}}'.format(i,width)]
for k in order: row.append('{!s:{}}'.format(D[i][k],width))
writer.writerow(row)
```
But the output ignores `order = ['mix','meow']` and writes the file like:
```
- meow mix
bar None 4.56
foo 2.34 1.23456
baz None None
```
How do I get it to write:
```
- mix meow
bar 4.56 None
foo 1.23456 2.34
baz None None
```
Thanks!
Update: Thanks to @SukritKalra in the comments below for pointing out that the code works fine. I just wasn't reordering the column headers!
The line `for i in D[D.keys()[0]].keys(): head.append('{!s:{}}'.format(i,width))` should read `for i in order: head.append('{!s:{}}'.format(i,width))`. Thanks folks! | Now, an alternate, easier and more efficient way of doing this, by using the wonderful [Pandas](http://pandas.pydata.org/) library
```
import pandas as pd
order=['mix', 'meow']
D = {'foo':{'meow':1.23,'mix':2.34}, 'bar':{'meow':4.56, 'mix':None}, 'baz':{'meow':None,'mix':None}}
df = pd.DataFrame(D).T.reindex(columns=order)
df.to_csv('./foo.txt', sep='\t', na_rep="none")
```
Result:
```
$ python test1.py
$ cat foo.txt
mix meow
bar none 4.56
baz none none
foo 2.34 1.23
``` | Take advantage of your "order" variable to drive some generators:
```
def dict2txt(D, writefile, column1='-', delim='\t', width=20, order=['mix','meow']):
import csv
# simplify formatting for clarity
fmt = lambda s: '{!s:{}}'.format(s, width)
with open(writefile, 'w') as f:
writer = csv.writer(f, delimiter=delim)
writer.writerow([fmt(column1)] + [fmt(s) for s in order])
for i in D.keys():
writer.writerow([fmt(i)] + [fmt(D[i][k]) for k in order])
```
The rows are still unordered. So you might want something like: "for i in sorted(D.keys()):" | Python: Write dictionary to text file with ordered columns | [
"",
"python",
"text",
"dictionary",
""
] |
I have read through some somewhat related questions, but did not find the specifics related to my question.
If I have a stable application that is not going to be changed and it has been thoroughly tested and used in the wild... one might consider removing referential integrity / foreign key constraints in the database schema, with the aim to improve performance.
Without discussing the cons of doing this, does anyone know how much of a performance benefit one might experience? Has anyone done this and experienced noticeable performance benefits? | From my experience with Oracle:
Foreign Keys provide information to the optimizer ("you're going to find exactly one match on this join"), so removing those might result in (not so) funny things happening to your execution plans.
Foreign Keys do perform checks, which costs performance. I have seen them use up a big chunk of execution time in batch processing (hours, on jobs that run for large parts of a day), which led us to use deferred constraints.
Since dropping foreign keys changes the semantic (think cascade, think the application relying on not being able to remove a master entry which gets referenced by something else, at least in the situation of concurrent access) I would only consider such a step when foreign keys are proven to dominate the performance in this application. | The benefits (however small) with be insignificant to the cons.
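To get a feel for the cost of the check itself, here is an illustrative sketch only (SQLite driven from Python, not a real Oracle or SQL Server benchmark; absolute numbers vary wildly by setup, the point is just that per-row FK work exists):

```python
import sqlite3
import time

def bulk_insert(enforce_fk):
    con = sqlite3.connect(":memory:")
    con.execute("PRAGMA foreign_keys = %d" % (1 if enforce_fk else 0))
    con.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
    con.execute("CREATE TABLE child (pid INTEGER REFERENCES parent(id))")
    con.execute("INSERT INTO parent VALUES (1)")
    start = time.time()
    # same bulk insert, with and without FK enforcement
    con.executemany("INSERT INTO child VALUES (?)", [(1,)] * 50000)
    con.commit()
    return time.time() - start

with_fk = bulk_insert(True)
without_fk = bulk_insert(False)
print("with FK: %.4fs, without FK: %.4fs" % (with_fk, without_fk))
```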
If performance is a problem check the indexes. Throw more hardware its way. There are a host of techniques to improve performance.
I know you said not to mention the cons - but you should consider them. The data is a very valuable asset and ensuring its validity keeps your business going. If the data becomes invalid you have a huge problem to fix it. | Removing Referential Integrity on Stable Application | [
"",
"sql",
"database",
""
] |
`traceback.format_exc()`
can get the stack trace as a string, but only while an exception is being handled.
`traceback.print_stack()`
prints the stack without an exception needed, but it does not return a string.
There doesn't seem to be a way to get the stack trace string without raising an exception in python? | It's `traceback.extract_stack()` if you want convenient access to module and function names and line numbers, or `''.join(traceback.format_stack())` if you just want a string that looks like the `traceback.print_stack()` output. | How about [traceback.format\_stack?](http://docs.python.org/2/library/traceback.html#traceback.format_stack) | How to get stack trace string without raising exception in python? | [
"",
"python",
"stack-trace",
""
] |
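A quick check of the `''.join(traceback.format_stack())` suggestion from the entry above: it returns the current call stack as a string with no exception involved.

```python
import traceback

def inner():
    # no exception anywhere: just capture the current stack
    return ''.join(traceback.format_stack())

s = inner()
print('inner' in s)    # True: the current frames are all listed
```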
I want to print an enumerated list in user specified columns.
```
import random
lst = random.sample(range(100), 30)
counter = 0
for pos, num in enumerate(sorted(lst)):
if counter % 2 == 0:
print '(%s) %s' %(pos, num),
counter = counter + 1
else:
print '(%s) %s'.rjust(16) %(pos, num)
counter = counter + 1
```
This gives:
```
(0) 3 (1) 6
(2) 7 (3) 10
(4) 11 (5) 13
(6) 17 (7) 18
(8) 20 (9) 25
(10) 45 (11) 46
(12) 48 (13) 51
(14) 58 (15) 59
(16) 60 (17) 63
(18) 68 (19) 69
(20) 77 (21) 81
(22) 83 (23) 84
(24) 87 (25) 89
(26) 93 (27) 94
(28) 97 (29) 98
```
How do I print a list where first 15 numbers are in the first column and second fifteen are in the second? I've seen some examples where `range(start, stop, step)` have been used to create sub-lists and resultant rows have been printed but I would have to lose enumeration and just print the numbers. | The "user specified" part of your question is tricky. How does the user specify what they want in each column? Can they specify more than two columns? etc.
However, if you know you have 30 items, and you know that you want the 1st 15 in the first column and the next 15 in the 2nd column I believe the following will do the trick:
```
for i in xrange(15):
first = '(%s) %s' % (i, lst[i])
padding = ' ' * (30 - len(first))
second = '(%s) %s' % (i + 15, lst[i + 15])
print '%s%s%s' % (first, padding, second)
``` | ```
def print_table(seq, columns=2):
table = ''
col_height = len(seq) / columns
for x in xrange(col_height):
for col in xrange(columns):
pos = (x * columns) + col
num = seq[x + (col_height * col)]
table += ('(%s) %s' % (pos, num)).ljust(16)
table += '\n'
print table
```
Results
```
>>> print_table(lst)
(0) 0 (1) 59
(2) 3 (3) 61
(4) 5 (5) 65
(6) 7 (7) 75
(8) 8 (9) 79
(10) 17 (11) 81
(12) 18 (13) 83
(14) 22 (15) 84
(16) 24 (17) 86
(18) 43 (19) 88
(20) 48 (21) 89
(22) 49 (23) 92
(24) 51 (25) 96
(26) 52 (27) 97
(28) 58 (29) 99
>>> print_table(lst, 3)
(0) 0 (1) 48 (2) 81
(3) 3 (4) 49 (5) 83
(6) 5 (7) 51 (8) 84
(9) 7 (10) 52 (11) 86
(12) 8 (13) 58 (14) 88
(15) 17 (16) 59 (17) 89
(18) 18 (19) 61 (20) 92
(21) 22 (22) 65 (23) 96
(24) 24 (25) 75 (26) 97
(27) 43 (28) 79 (29) 99
>>>
```
Still thinking of a way to choose between horizontal and vertical numbering.
Also, if the number of items in `seq` is not divisible by `columns` then the remainder won't get printed.
---edit---
Misunderstood the OP. Here is a version, just like Oliver's.
```
def print_table(lst):
table = ''
half = len(lst) / 2
for x1 in xrange(half):
x2 = x1 + half
col1 = '(%s) %s' % (x1, lst[x1])
col2 = '(%s) %s\n' % (x2, lst[x2])
table += col1.ljust(16) + col2
print table
``` | print an enumerated list in multiple columns | [
"",
"python",
"multiple-columns",
"enumerate",
""
] |
Consider this list here:
```
example=[]
```
And another:
```
python=["hi","bye","hello","yes","no"]
```
If I decide to add one of the elements from python to example, will a duplicate of that element be created, or will the variable python lose an element?
```
example+=[python[0]]
```
So would the string "hi" be duplicated or transferred to example in the snippet above?
Instead, the list example will have appended to it the elements of the first string:
```
>>> f = []
>>> f+= ["hi", "there"][0]
>>> f
['h', 'i']
```
This happens because `a += b` is conceptually\* equivalent to `a = a+b`, and `a+b` creates a list which has all the elements of `a` followed by the elements of `b`. A string is sequence, the elements of which are strings composed of individual characters, which is why you get this behaviour.
`*` There are differences, notably that `list + nonlist` won't work. | The string "hi" will be split into chars and assigned to `example` when you do `example+=python[0]`
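With the question's exact form, where the element is wrapped in a list, the whole string is appended and the source list keeps all of its elements:

```python
python = ["hi", "bye", "hello", "yes", "no"]
example = []
example += [python[0]]   # the brackets make the right side a one-item list
print(example)           # ['hi']
print(len(python))       # 5: nothing was "transferred"
```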
So `example` in this case will contain `['h','i']`.
Also, the list `python` will not lose an element. | In python, what happens, will the length of a list decrease if I add part of the list to another list? | [
"",
"python",
"python-3.x",
""
] |
I use the following code to tokenize a string,read from stdin.
```
d=[]
cur = ''
for i in sys.stdin.readline():
if i in ' .':
if cur not in d and (cur != ''):
d.append(cur)
cur = ''
else:
cur = cur + i.lower()
```
This gives me an array of non-repeating words. However, in my output, some of the words don't get split.
My Input is
```
Dan went to the north pole to lead an expedition during summer.
```
And the output array d is
['dan', 'went', 'to', 'the', 'north', 'pole', 'tolead', 'an', 'expedition', 'during', 'summer']
Why is `tolead` together? | Try this
```
d=[]
cur = ''
for i in sys.stdin.readline():
if i in ' .':
if cur not in d and (cur != ''):
d.append(cur)
cur = '' # note the different indentation
else:
cur = cur + i.lower()
``` | Try this:
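With the corrected indentation packaged as a function (same logic, just wrapped so it can be tested), the duplicate word no longer glues onto the one that follows it:

```python
def tokens(line):
    d, cur = [], ''
    for ch in line:
        if ch in ' .':
            if cur and cur not in d:
                d.append(cur)
            cur = ''          # reset even when the word was a duplicate
        else:
            cur += ch.lower()
    return d

line = "Dan went to the north pole to lead an expedition during summer."
print(tokens(line))
```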
```
for line in sys.stdin.readline():
res = set(word.lower() for word in line[:-1].split(" "))
print res
```
Example:
```
line = "Dan went to the north pole to lead an expedition during summer."
res = set(word.lower() for word in line[:-1].split(" "))
print res
set(['north', 'lead', 'expedition', 'dan', 'an', 'to', 'pole', 'during', 'went', 'summer', 'the'])
```
After comments, I edit: this solution preserves input order and filters separators
```
import re
from collections import OrderedDict
line = "Dan went to the north pole to lead an expedition during summer."
list(OrderedDict.fromkeys(re.findall(r"[\w']+", line)))
# ['Dan', 'went', 'to', 'the', 'north', 'pole', 'lead', 'an', 'expedition', 'during', 'summer']
``` | Tokenizing a string gives some words merged | [
"",
"python",
"stdin",
""
] |
The new version of Pandas uses [the following interface](http://pandas.pydata.org/pandas-docs/dev/generated/pandas.io.excel.read_excel.html#pandas.io.excel.read_excel) to load Excel files:
```
read_excel('path_to_file.xls', 'Sheet1', index_col=None, na_values=['NA'])
```
but what if I don't know the sheets that are available?
For example, I am working with excel files that have the following sheets:
> Data 1, Data 2 ..., Data N, foo, bar
but I don't know `N` a priori.
Is there any way to get the list of sheets from an excel document in Pandas? | You can still use the [ExcelFile](http://pandas.pydata.org/pandas-docs/dev/io.html#excel-files) class (and the `sheet_names` attribute):
```
xl = pd.ExcelFile('foo.xls')
xl.sheet_names # see all sheet names
xl.parse(sheet_name) # read a specific sheet to DataFrame
```
*see [docs for parse](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.ExcelFile.parse.html) for more options...* | You should explicitly specify the second parameter (sheetname) as None. like this:
```
df = pandas.read_excel("/yourPath/FileName.xlsx", None);
```
"df" are all sheets as a dictionary of DataFrames, you can verify it by run this:
```
df.keys()
```
result like this:
```
[u'201610', u'201601', u'201701', u'201702', u'201703', u'201704', u'201705', u'201706', u'201612', u'fund', u'201603', u'201602', u'201605', u'201607', u'201606', u'201608', u'201512', u'201611', u'201604']
```
please refer pandas doc for more details: <https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_excel.html> | Pandas: Looking up the list of sheets in an excel file | [
"",
"python",
"excel",
"pandas",
"openpyxl",
"xlrd",
""
] |
In scipy, the error occurs quite often.
```
>>> import scipy
>>> scipy.integrate.trapz(gyroSeries, timeSeries)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'integrate'
>>>
```
**I figure out how to solve this problem by doing the following:**
```
>>>
>>> import scipy.integrate
>>> scipy.integrate.trapz(gyroSeries, timeSeries)
>>> 1.2
```
**My question:**
Why does the error occur?
Why would that fix the error? | Most possibly because scipy is a library (package) that contains modules and to import a specific module from the scipy library, you need to specify it and import the module itself. As it's a separate module (sub-package), once you import it, it's attributes are available to you by using the regular scipy.module.attribute | In order to fix the error, add the following line at the top of your script
```
from scipy import integrate
``` | AttributeError: 'module' object (scipy) has no attribute *** Why does this error occur? | [
"",
"python",
"scipy",
""
] |
I have a counter that looks a bit like this:
```
Counter: {('A': 10), ('C':5), ('H':4)}
```
I want to sort on keys specifically in an alphabetical order, NOT by `counter.most_common()`
is there any way to achieve this? | Just use [sorted](http://docs.python.org/2/library/functions.html#sorted):
```
>>> from collections import Counter
>>> counter = Counter({'A': 10, 'C': 5, 'H': 7})
>>> counter.most_common()
[('A', 10), ('H', 7), ('C', 5)]
>>> sorted(counter.items())
[('A', 10), ('C', 5), ('H', 7)]
``` | ```
>>> from operator import itemgetter
>>> from collections import Counter
>>> c = Counter({'A': 10, 'C':5, 'H':4})
>>> sorted(c.items(), key=itemgetter(0))
[('A', 10), ('C', 5), ('H', 4)]
``` | sorting a counter in python by keys | [
"",
"python",
"python-2.7",
"dictionary",
""
] |
Can it be done in this form?
I tried SELECT CASE but errors always occur.
Any help would be appreciated.
```
CREATE PROCEDURE searchStatement
(
@word1 varchar(50),
@word2 varchar(50),
@word3 varchar(50)
)
as
select * from words where 1=1
if @word1 <> '-1'
and word1 =@word1
if @word2 <> '-1'
and word2 =@word2
if @word3 <> '-1'
and word3 =@word3
``` | You can check the condition via `if` statement and form a `string query` as like below then you can execute the string `query` through `exec` command.
```
declare @resultSet nvarchar(max)
set @resultSet = 'select *from searchStatement where 1= 1 '
if @word1 <> '-1'
begin
set @resultSet = @resultSet + ' and word1 = '+@word1
end
if @word2 <> '-1'
begin
set @resultSet = @resultSet + ' and word2 = '+@word2
end
if @word3 <> '-1'
begin
set @resultSet = @resultSet + ' and word3 = '+@word3
end
exec(@resultSet)
``` | you could use this:
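A safer variant of the same idea (SQL Server's `sp_executesql`) passes the values as parameters instead of splicing them into the string, which sidesteps both the quoting problem and SQL injection:

```sql
declare @sql nvarchar(max)
set @sql = N'select * from words where 1 = 1'
if @word1 <> '-1' set @sql = @sql + N' and word1 = @w1'
if @word2 <> '-1' set @sql = @sql + N' and word2 = @w2'
if @word3 <> '-1' set @sql = @sql + N' and word3 = @w3'
exec sp_executesql @sql,
     N'@w1 varchar(50), @w2 varchar(50), @w3 varchar(50)',
     @word1, @word2, @word3
```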
```
select *.
from word
where
(@word1 = '-1' or word = @word1) and
(@word2 = '-1' or word = @word2) and
(@word3 = '-1' or word = @word3)
```
BTW, if you want to give an empty parameter into procedures, it's better to use null instead of -1 | I try to create stored procedure that received 3 parameters, if any param its value equal "-1" it'll not be included in the select statement | [
"",
"sql",
"sql-server",
""
] |
The strings in Python are immutable and support the buffer interface. It could be efficient to return not the new strings, but the buffers pointing to the parts of the old string when using slices or the `.split()` method. However, a new string object is constructed each time. Why? The single reason I see is that it can make garbage collection a bit more difficult.
True: in regular situations the memory overhead is linear and isn't noticeable. Copying is fast, and so is allocation. But there is already too much done in Python, so maybe such buffers are worth the effort?
EDIT:
It seems that forming substrings this way would make memory management much more complicated. The case where only 20% of an arbitrary string is used, and we can't deallocate the rest of the string, is a simple example. We could improve the memory allocator so that it could deallocate strings partially, but that would probably be mostly a net loss. All the standard functions can anyway be emulated with `buffer` or `memoryview` if memory becomes critical. The code wouldn't be as concise, but one has to give up something in order to get something.
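For instance, explicit zero-copy slicing is already available for byte strings:

```python
# Explicit zero-copy slicing for byte strings (memoryview here; the
# buffer() builtin plays the same role in Python 2):
data = b"hello world"
view = memoryview(data)
sub = view[0:5]                  # references data, copies nothing
print(sub.tobytes())             # b'hello'
```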
Allowing to refer to sub-strings of a string means to complicate *a lot* garbage collection and string handling. For every string you'd have to keep track how many objects refer to each character, or to each range of indices. This means complicating a lot the `struct` of string objects and any operation that deals with them, meaning a, probably big, slow down.
Add the fact that starting with python3 strings have 3 different internal representations, and things are going to be too messy to be maintainable,
and your proposal probably doesn't give enough benefits to be accepted.
---
An other problem with this kind of "optimization" is when you want to deallocate "big strings":
```
a = "Some string" * 10 ** 7
b = a[10000]
del a
```
After these operations you have the substring `b`, which prevents `a`, a huge string, from being deallocated.
The garbage collector would have to keep checking whether it is worth deallocating a big string object and making copies or not, and all these operations must be as fast as possible, otherwise you end up degrading time performance.
99% of the time the strings used in programs are "small" (at most ~10k characters), hence copying is really fast, while the optimization you propose only starts to pay off with really big strings (e.g. taking substrings of size 100k from huge texts), and is much slower with really small strings, which is the common case, i.e. the case that should be optimized.
---
If you think it is important, you are free to propose a PEP, show an implementation, and demonstrate the resulting changes in speed/memory usage. If it is really worth the effort it may be included in a future version of Python.
```
>>> x = [1,2,3]
>>> y = x[:]
```
Now it would be possible to make an exception for strings, but is it really worth it? [Eric Lippert blogged about his decision not to do that for .NET](http://blogs.msdn.com/b/ericlippert/archive/2011/07/19/strings-immutability-and-persistence.aspx); I guess his argument is valid for Python as well.
See also [this question](https://stackoverflow.com/q/6742923/20670). | Python's immutable strings and their slices | [
"",
"python",
"string",
"garbage-collection",
""
] |
I am having a problem with installing Django. It seems that Python will not recognize it once it is installed. Below are the steps I have taken. I'm using Mac OS 10.7.5 Python 2.7 and Django 1.5.1
I am not using virtualenv
What am I doing wrong? Is there any reason Python will not recognize it? How can I fix this?
Here's what I've done:
uninstalled it, (with pip)
```
Successfully uninstalled Django
```
checked that is uninstalled (via yolk)
```
localhost:mysite brendan$ yolk -l
Flask-SQLAlchemy - 0.16 - active
Flask - 0.10.1 - active
Jinja2 - 2.7 - active
MarkupSafe - 0.18 - active
Python - 2.7 - active development (/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload)
SQLAlchemy - 0.8.2 - active
Werkzeug - 0.9.1 - active
boto - 2.9.7 - active
itsdangerous - 0.22 - active
nose - 1.3.0 - active
pip - 1.3.1 - active
setuptools - 0.8 - active
tweepy - 1.7.1 - non-active
tweepy - 1.9 - active
virtualenv - 1.9.1 - active
wsgiref - 0.1.2 - active development (/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7)
yolk - 0.4.3 - active
```
installed it, (with pip)
```
localhost:mysite brendan$ pip install django
Downloading/unpacking django
Downloading Django-1.5.1.tar.gz (8.0MB): 8.0MB downloaded
Running setup.py egg_info for package django
warning: no previously-included files matching '__pycache__' found under directory '*'
warning: no previously-included files matching '*.py[co]' found under directory '*'
Installing collected packages: django
Running setup.py install for django
changing mode of build/scripts-2.7/django-admin.py from 644 to 755
warning: no previously-included files matching '__pycache__' found under directory '*'
warning: no previously-included files matching '*.py[co]' found under directory '*'
changing mode of /Library/Frameworks/Python.framework/Versions/2.7/bin/django-admin.py to 755
Successfully installed django
Cleaning up...
localhost:mysite brendan$ python
Python 2.7.5 (default, May 19 2013, 13:26:47)
[GCC 4.2.1 Compatible Apple Clang 4.1 ((tags/Apple/clang-421.11.66))] on darwin
Type "help", "copyright", "credits" or "license" for more information.
```
And I'm getting this error when trying to import:
```
>>> import django
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named django
>>>
```
**EDIT / UPDATES:**
which pip
```
/Library/Frameworks/Python.framework/Versions/2.7/bin/pip
```
which python
```
/opt/local/bin/python
```
I am not using virtualenv | It is likely that your pip installation is pointing to a different python than your python version installed at `opt/local/bin/python`. It is likely that your `pip` is talking to the system installed python and not the version you installed.
You can consider uninstalling python and re-installing in the correct path or creating a virtualenv that points to the version of python you desire. Then any subsequent installs will point to the correct version of python without affecting other python installs | try using pip as the superuser :
```
sudo pip install -U django
```
worked for me ... | Python not recognising Django Installation. (ImportError: No module named django) | [
"",
"python",
"django",
"python-2.7",
""
] |
I have a huge data file (~2 G) that needs to be split into odd and even lines, processed separately, and written into two files. I don't want to read the whole file into RAM, so I think a generator should be a suitable choice. In short, I want to do something like this:
```
lines = (l.strip() for l in open(inputfn))
oddlines = somefunction(getodds(lines))
evenlines = somefunction(getevens(lines))
outodds.write(oddlines)
outevens.write(evenlines)
```
Is this possible? Apparently indexing will not work:
```
In [75]: lines[::2]
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/home/kaiyin/Phased/build37/chr22/segments/segment_1/<ipython-input-75-97be680d00e3> in <module>()
----> 1 lines[::2]
TypeError: 'generator' object is not subscriptable
``` | ```
def oddlines(fileobj):
return (line for index,line in enumerate(fileobj) if index % 2)
def evenlines(fileobj):
return (line for index,line in enumerate(fileobj) if not index % 2)
```
Note that this will require scanning the file twice, since these aren't designed to run in parallel. It does, however, lead to much less complex code. (Also note that an 'odd' line here is one with an index of 1,3,5 - which means that the first line is an 'even' line due to zero-indexing.)
As Ashwini notes, you could also use `itertools.islice` to do this. | Use `itertools.islice` to slice an iterator:
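If scanning twice is a concern, a single-pass sketch (with a small sample file made up here) writes both outputs while reading the input once:

```python
import os
import tempfile

tmp = tempfile.mkdtemp()
src_path = os.path.join(tmp, 'input.txt')
with open(src_path, 'w') as f:                 # small sample input
    f.write('line0\nline1\nline2\nline3\n')

evens_path = os.path.join(tmp, 'evens.txt')
odds_path = os.path.join(tmp, 'odds.txt')
with open(src_path) as src, \
     open(evens_path, 'w') as evens, \
     open(odds_path, 'w') as odds:
    for index, line in enumerate(src):
        # index 0, 2, ... go to evens; 1, 3, ... to odds
        (odds if index % 2 else evens).write(line)
```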
```
from itertools import islice
with open('filename') as f1, open('evens.txt', 'w') as f2:
for line in islice(f1, 0, None, 2):
f2.write(line)
with open('filename') as f1, open('odds.txt', 'w') as f2:
for line in islice(f1, 1, None, 2):
f2.write(line)
``` | Separate odd and even lines in a generator with python | [
"",
"python",
"data-manipulation",
""
] |
Lets say I have two tables:
Table1
```
Id Name
1 Joe
2 Greg
3 Susan
4 Max
```
Table2
```
Uid comment
2 Good customer
4 Great guy
```
What I want to do is list all elements of Table 1, and if Table1.Id = Table2.Uid I want to select this comment. If comment does not exist give blank field.
Result should be:
```
1 Joe
2 Greg Good customer
3 Susan
4 Max Great Guy
```
I can't figure out how to do it, if I write:
```
select
table1.Id,
table1.Name,
table2.comment
where
table1.id=table2.Uid
```
It gives me only users 2 and 4. | Try to use `left join` it shows you all data from `table1`
```
select t1.Id, t1.Name, t2.comment
from table1 t1
left join table2 t2 on t1.id=t2.Uid
```
**NOTE**:
Good practice is to use aliases as above. Code is more readable. | ```
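If you want a literal blank instead of NULL for the rows with no comment, wrap the joined column in `COALESCE`:

```sql
select t1.Id, t1.Name, coalesce(t2.comment, '') as comment
from table1 t1
left join table2 t2 on t1.id = t2.Uid
```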
select
table1.Id,
table1.Name,
table2.comment
from table1 left outer join table2 on table1.id=table2.Uid
``` | SQL optional parameter? | [
"",
"mysql",
"sql",
""
] |
Say I have a table structured like this:
```
id dt val
a 1/1/2012 23
a 2/1/2012 24
a 6/1/2013 12
a 7/1/2013 56
b 1/1/2009 34
b 3/1/2009 78
```
Every `id` has a `dt` in the form of a month, and a value. There may be months missing, but there will never be duplicate months.
I need to calculate a 12-month rolling average for each data point. For example, the fourth row would be (56+12)/12. The third row would be (12)/12. The second row would be (24+23)/12, etc. I need to identify the month (and value) of the maximum moving average for a given ID.
Is this something I can even do in SQL itself, or do I need to export the dataset and use some other method? There are millions of rows, so I'd like to do it in SQL if I can. I've looked at a few of the MA methods and I'm not sure if they will work for what I'm trying to do.
The SQL I am using is a derivative used with Teradata. It supports most of the standard functions that I've needed to use. | Just use a subquery as the expression:
```
SELECT id,
dt,
val,
(
SELECT SUM(val)/12
FROM mytable t2
WHERE t2.id = t.id
AND t2.dt > DATEADD(mm, -12, t.dt)
AND t2.dt <= t.dt
) val12MonthAvg
FROM mytable t
```
However with millions or rows it's likely to be very slow. | Assumptions:
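If your platform's window functions are available, here is a sketch of an alternative that avoids the correlated subquery entirely. It assumes gap-free months, which the question says may not hold (a row-based frame counts rows, not calendar months):

```sql
SELECT id, dt, val,
       SUM(val) OVER (PARTITION BY id
                      ORDER BY dt
                      ROWS BETWEEN 11 PRECEDING AND CURRENT ROW) / 12.0 AS ma12
FROM mytable;
```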
* Your date format is m/d/yyyy (I used format mm/dd/yyyy)* id on this table is an FK to some other entity where id is the PK* you are meant to take the date of the chosen row, and look for that row and all rows less than 12 months older for that id, and sum the val's in those rows
I'll write this in Oracle SQL because that's what I'm using and you didn't specify ;)
Query Summary:
* "Chosen" is the instance of your table to serve as the input row* "Lookback" gathers all rows including your Chosen row and up to 12 months back minus 1 day* Sum up the lookback.val's for your answer
```
WITH DateTable
AS (SELECT 'a' id, TO_DATE ('01/01/2012', 'mm/dd/yyyy') dt, 23 val FROM DUAL
UNION
SELECT 'a', TO_DATE ('1/1/2012', 'mm/dd/yyyy'), 23 FROM DUAL
UNION
SELECT 'a', TO_DATE ('02/01/2012', 'mm/dd/yyyy'), 24 FROM DUAL
UNION
SELECT 'a', TO_DATE ('06/01/2013', 'mm/dd/yyyy'), 12 FROM DUAL
UNION
SELECT 'a', TO_DATE ('07/01/2013', 'mm/dd/yyyy'), 56 FROM DUAL
UNION
SELECT 'b', TO_DATE ('01/01/2009', 'mm/dd/yyyy'), 34 FROM DUAL
UNION
SELECT 'b', TO_DATE ('03/01/2009', 'mm/dd/yyyy'), 78 FROM DUAL)
SELECT chosen.id, chosen.dt, SUM (lookback.val)/12
FROM DateTable chosen, DateTable lookback
WHERE chosen.id = 'a' --your input id
AND chosen.dt = TO_DATE ('07/01/2013', 'mm/dd/yyyy') --your input date
AND chosen.id = lookback.id
AND lookback.dt > ADD_MONTHS (chosen.dt, -12)
AND lookback.dt <= chosen.dt
GROUP BY chosen.id, chosen.dt;
```
And if you want to query on dates/months not present in any row, do this:
```
WITH DateTable
AS (SELECT 'a' id, TO_DATE ('01/01/2012', 'mm/dd/yyyy') dt, 23 val FROM DUAL
UNION
SELECT 'a', TO_DATE ('1/1/2012', 'mm/dd/yyyy'), 23 FROM DUAL
UNION
SELECT 'a', TO_DATE ('02/01/2012', 'mm/dd/yyyy'), 24 FROM DUAL
UNION
SELECT 'a', TO_DATE ('06/01/2013', 'mm/dd/yyyy'), 12 FROM DUAL
UNION
SELECT 'a', TO_DATE ('07/01/2013', 'mm/dd/yyyy'), 56 FROM DUAL
UNION
SELECT 'b', TO_DATE ('01/01/2009', 'mm/dd/yyyy'), 34 FROM DUAL
UNION
SELECT 'b', TO_DATE ('03/01/2009', 'mm/dd/yyyy'), 78 FROM DUAL),
InputData
AS (SELECT 'b' id, TO_DATE ('12/15/2009', 'mm/dd/yyyy') dt FROM DUAL)
SELECT InputData.id, InputData.dt, SUM (lookback.val)/12
FROM DateTable lookback, InputData
WHERE lookback.id = InputData.id
AND lookback.dt > ADD_MONTHS (InputData.DT, -12)
AND lookback.dt <= InputData.DT
GROUP BY InputData.id, InputData.dt;
``` | Ways to Calculate rolling subtotal | [
"",
"sql",
""
] |
I want to produce a list of possible websites from two lists:
```
strings = ["string1", "string2", "string3"]
tlds = ["com", "net", "org"]
```
to produce the following output:
```
string1.com
string1.net
string1.org
string2.com
string2.net
string2.org
```
I've got to this:
```
for i in strings:
print i + tlds[0:]
```
But I can't concatenate str and list objects. How can I join these? | One very simple way to write this is the same as in most other languages.
```
for s in strings:
for t in tlds:
print s + '.' + t
``` | `itertools.product` is designed for this purpose.
```
import itertools

url_tuples = itertools.product(strings, tlds)
urls = ['.'.join(url_tuple) for url_tuple in url_tuples]
print(urls)
``` | concatenate all items from two lists in Python | [
"",
"python",
""
] |
[Project Euler Problem 10](http://projecteuler.net/problem=10):
> The sum of the primes below 10 is 2 + 3 + 5 + 7 = 17.
>
> Find the sum of all the primes below two million.
I don't think there are any errors in my code, but it takes a really long time to give the answer. I've tried using PyPy because I've heard it's faster than the CPython interpreter, but still no good.
Here is the code:
```
#Implementation of Sieve of Eratosthenes
def prime_sieve(limit):
primes = range(2, limit)
for i in primes:
for j in range(2, primes[-1]):
try:
primes.remove(i*j)
except ValueError:
pass
return primes;
answer = 0
for x in prime_sieve(2000000):
answer += x
print "Answer: %d." % answer
raw_input()
The right data structure for a prime sieve is a bitset, indexed by integer value. Python doesn't have one of those built-in, but since your limit is small (only 2 million), a regular list of integers should fit in memory even though it's wasteful by a factor of 30 or more (it will take about 9 MB where the equivalent bitset in C would take 250 KB).
The important thing for speed is to never access the array except by immediate direct indexing (so no delete/remove). Also, limit the outer loop of the sieve to sqrt(limit), and advance the loop to the next prime, not the next value.
So something like this should be pretty quick (it takes about 2 seconds on my old machine in vanilla Python 2.7).
```
import math
def prime_sieve(limit):
# Mark everything prime to start
primes = [1 for x in xrange(limit)]
primes[0] = 0
primes[1] = 0
# Only need to sieve up to sqrt(limit)
imax = int(math.sqrt(limit) + 1)
i = 2
while (i < imax):
j = i + i
while j < limit:
primes[j] = 0
j += i
# Move i to next prime
while True:
i += 1
if primes[i] == 1:
break
return primes
s = prime_sieve(2000000)
print(sum(i for i in xrange(len(s)) if s[i] == 1))
``` | The problem is this:
```
primes.remove(i*j)
```
`.remove()` is very inefficient when called on large lists, because it first has to scan the list from the front to find where (if anywhere) the value is present, and then it has to shift every element after the removed one down a spot.
There are other ways you could use data structures here (both other ways of using lists, and other data structures entirely) that would be more efficient.
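For example (my own sketch, one option among several): keep a flat list of booleans indexed by value, so the sieve only ever does direct index assignments and never searches or shrinks the list:

```python
# Sieve of Eratosthenes over a flat boolean list: every operation is a
# direct index, so there are no linear scans like list.remove() does.
def sieve_sum(limit):
    is_prime = [True] * limit
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if is_prime[i]:
            # Start at i*i: smaller multiples were crossed off by smaller primes.
            for j in range(i * i, limit, i):
                is_prime[j] = False
    return sum(i for i, prime in enumerate(is_prime) if prime)

print(sieve_sum(10))  # 2 + 3 + 5 + 7 = 17
```

With `sieve_sum(2000000)` this finishes in a second or two, instead of the hours the `remove()`-based version can take.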
Finally: your code modifies `primes` at the same time as you are iterating over it (which is what `for i in primes` is doing). This is generally considered a bad thing, since modifying something as it's being iterated over is potentially undefined behavior. | Trying to solve of Project Euler #10, but code takes *a lot* of time to display output | [
"",
"python",
"python-2.7",
"pypy",
""
] |
Why does `dict(k=4, z=2).update(dict(l=1))` return `None`? It seems as if it should return `dict(k=4, z=2, l=1)`? I'm using Python 2.7, if that matters. | The `.update()` method alters the dictionary *in place* and returns `None`. The dictionary *itself* is altered; no new dictionary needs to be returned.
Assign the dictionary first:
```
a_dict = dict(k=4, z=2)
a_dict.update(dict(l=1))
print a_dict
```
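If you do want the merged result as a single expression (for instance to pass straight into a function), one small helper — my own sketch, not part of the standard library — copies first and then updates:

```python
def merged(a, b):
    """Return a new dict with b's keys layered over a's; neither input is mutated."""
    result = dict(a)   # shallow copy, so `a` is left untouched
    result.update(b)   # update() still returns None, but we keep `result`
    return result

print(merged(dict(k=4, z=2), dict(l=1)))  # {'k': 4, 'z': 2, 'l': 1} (key order may vary)
```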
This is clearly documented, see the [`dict.update()` method documentation](http://docs.python.org/2/library/stdtypes.html#dict.update):
> Update the dictionary with the key/value pairs from other, overwriting existing keys. Return `None`. | [`dict.update()`](http://docs.python.org/2/library/stdtypes.html#dict.update) method does update in place. It does not return the modified dict, but `None`.
The documentation says it in the first line:
> Update the dictionary with the key/value pairs from other, overwriting existing keys. **Return None.** | Why does dict(k=4, z=2).update(dict(l=1)) return None in Python? | [
"",
"python",
"python-2.7",
"dictionary",
""
] |