| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
My database Table looks like:
```
ID | INDEX | Value |
1 | 0 | 3 |
1 | 1 | 5 |
1 | 2 | 7 |
2 | 0 | 4 |
2 | 1 | 6 |
2 | 2 | 2 |
```
What I want as output is the difference of the Value column entries matched on their index,
i.e. value(id=2, index=i) - value(id=1, index=i), so the output table will look like:
```
INDEX | Delta Value |
0 | 1 |
1 | 1 |
2 | -5 |
```
My attempt at solving this problem is as follows:
```
SELECT Top 6
col1.value column1,
col2.value column2,
col2.value - col1.value
FROM My_Table col1
INNER JOIN My_Table col2
ON col1.index = col2.index
WHERE col1.id = 1
OR col2.id = 2
```
I know there are problems with this query, but I just haven't been able to produce the output that I want. Any help is appreciated. | You can do this with a join:
```
select
t1.ind, t2.value - t1.value as delta
from My_Table as t1
inner join My_Table as t2 on t2.id = 2 and t2.ind = t1.ind
where t1.id = 1
```
Or with a simple aggregate:
```
select
ind, sum(case id when 1 then -1 when 2 then 1 end * value) as delta
from My_Table
group by ind
``` | Do you want something like this?
```
select
col1.value column1,
col2.value column2,
col2.value - col1.value AS delta
From My_Table col1
INNER JOIN My_Table col2
ON col1.index = col2.index
AND col2.id = 2
where col1.id = 1
``` | T-sql using values from one table and displaying them in different columns | [
"sql",
"sql-server",
"t-sql"
] |
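The chosen answer's self-join can be reproduced end to end in SQLite via Python's `sqlite3` module. This is a sketch: the column name `ind` is a stand-in (since `INDEX` is a reserved word), and the sample rows come from the question.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE my_table (id INTEGER, ind INTEGER, value INTEGER)")
con.executemany("INSERT INTO my_table VALUES (?, ?, ?)",
                [(1, 0, 3), (1, 1, 5), (1, 2, 7),
                 (2, 0, 4), (2, 1, 6), (2, 2, 2)])

# Self-join: one alias per ID, matched on the index column.
rows = con.execute("""
    SELECT t1.ind, t2.value - t1.value AS delta
    FROM my_table AS t1
    JOIN my_table AS t2 ON t2.id = 2 AND t2.ind = t1.ind
    WHERE t1.id = 1
    ORDER BY t1.ind
""").fetchall()
print(rows)  # [(0, 1), (1, 1), (2, -5)]
```

The output matches the expected table from the question: deltas 1, 1, -5.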
I'm almost certain you cannot do this within the context of the case statement, and I haven't been able to find any documentation about it, but is it possible to do the following:
```
SELECT CASE WHEN testValue > 2
THEN testValue(Without Repeating it) ELSE FailValue)
END
FROM Table
```
A better, more thorough example:
```
Select CASE WHEN (Foo-stuff+bar) > 2
THEN Conditional statement without >2 Else "Fail"
END
FROM TABLE
```
I am looking for a way to create a select without repeating the conditional query.
EDIT: Due to a poor example on my part, and not getting the kind of answers I was looking for:
```
testValue = (Table.A / Table.B) * Table.C Table.D
SELECT CASE WHEN testValue > 2
THEN testValue ELSE FailValue)
END
FROM Table
``` | Like so
```
DECLARE @t INT=1
SELECT CASE
WHEN @t>0 THEN
CASE
WHEN @t=1 THEN 'one'
ELSE 'not one'
END
ELSE 'less than one'
END
```
EDIT:
After looking more at the question, I think the best option is to create a function that calculates the value. That way, if you end up having multiple places where the calculation needs to be done, you only have one point at which to maintain the logic.
```
DECLARE @T INT = 2
SELECT CASE
WHEN @T < 1 THEN 'less than one'
WHEN @T = 1 THEN 'one'
ELSE 'greater than one'
END T
``` | SQL Server - Case Statement | [
"sql",
"sql-server",
"syntax",
"case"
] |
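One standard way to avoid repeating the conditional expression — not spelled out in the answers above — is to compute it once under an alias in a derived table and reference the alias in the `CASE`. A sketch in SQLite, where the table, columns, and the `-1` fail value are all hypothetical:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a REAL, b REAL, c REAL)")
con.executemany("INSERT INTO t VALUES (?, ?, ?)", [(6, 2, 1), (1, 2, 1)])

# The derived table evaluates (a / b) * c once; the outer query
# can then use the alias in both the WHEN and the THEN.
rows = con.execute("""
    SELECT CASE WHEN test_value > 2 THEN test_value ELSE -1 END AS result
    FROM (SELECT (a / b) * c AS test_value FROM t)
    ORDER BY test_value DESC
""").fetchall()
print(rows)  # [(3.0,), (-1,)]
```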
I have the following example.
```
DECLARE @String varchar(100) = 'GAME_20131011_Set - SET_20131012_Game'
SELECT SUBSTRING(@String,0,CHARINDEX('_',@String))
SELECT SUBSTRING(@String,CHARINDEX('- ',@STRING),CHARINDEX('_',@STRING))
```
I want to get the words 'GAME' and 'SET' (the first word before the first '\_' on each side of ' - ').
I am getting 'GAME' but having trouble with 'SET'.
UPDATE: 'GAME' and 'SET' are just examples, those words may vary.
```
DECLARE @String1 varchar(100) = 'GAMEE_20131011_Set - SET_20131012_Game' -- Looking for 'GAME' and 'SET'
DECLARE @String2 varchar(100) = 'GAMEE_20131011_Set - SETT_20131012_Game' -- Looking for 'GAMEE' and 'SETT'
DECLARE @String3 varchar(100) = 'GAMEEEEE_20131011_Set - SETTEEEEEEEE_20131012_Game' -- Looking for 'GAMEEEEE' and 'SETTEEEEEEEE'
As long as your two parts will always be separated by a specific character (`-` in your example), you could try splitting on that value:
```
DECLARE @String varchar(100) = 'GAME_20131011_Set - SET_20131012_Game'
DECLARE @Left varchar(100),
@Right varchar(100)
-- split into two strings based on a delimiter
SELECT @Left = RTRIM(SUBSTRING(@String, 0, CHARINDEX('-',@String)))
SELECT @Right = LTRIM(SUBSTRING(@String, CHARINDEX('-',@String)+1, LEN(@String)))
-- handle the strings individually
SELECT SUBSTRING(@Left, 0, CHARINDEX('_', @Left))
SELECT SUBSTRING(@Right, 0, CHARINDEX('_', @Right))
-- Outputs:
-- GAME
-- SET
```
Here's a SQLFiddle example of this: <http://sqlfiddle.com/#!3/d41d8/22594>
The issue that you are running into with your original query is that you are specifying `CHARINDEX('- ', @String)` for your start index, which will include `-` in any substring starting at that point. Also, with `CHARINDEX('_',@STRING)` for your length parameter, you will always end up with the index of the *first* `_` character in the string.
By splitting the original string in two, you avoid these problems. | Try this
```
SELECT SUBSTRING(@String,0,CHARINDEX('_',@String))
SELECT SUBSTRING(@String,CHARINDEX('- ',@STRING)+1, CHARINDEX('_',@STRING)-1)
``` | T-SQL SUBSTRING at certain places | [
"sql",
"sql-server",
"t-sql",
"sql-server-2012"
] |
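The split-then-extract idea from the chosen answer can be checked in plain Python, independent of T-SQL. `extract_words` is a made-up helper name for this sketch:

```python
def extract_words(s):
    """Split on the ' - ' delimiter first, then take everything
    before the first '_' on each side (the chosen answer's approach)."""
    left, right = (part.strip() for part in s.split(" - ", 1))
    return left.split("_", 1)[0], right.split("_", 1)[0]

print(extract_words("GAME_20131011_Set - SET_20131012_Game"))  # ('GAME', 'SET')
print(extract_words("GAMEEEEE_20131011_Set - SETTEEEEEEEE_20131012_Game"))
```

The second call returns `('GAMEEEEE', 'SETTEEEEEEEE')`, covering the variable-length case from the update.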
Can you please help me converting the following Oracle `MERGE` statement into a valid `UPSERT` statement for use in a PostgreSQL 9.3 database?
```
MERGE INTO my_table a
USING (SELECT v_c1 key,
v_c2 AS pkey,
v_c3 AS wcount,
v_c4 AS dcount
FROM DUAL) b
ON ( a.key = b.key
AND a.pkey = b.pkey)
WHEN MATCHED
THEN
UPDATE SET wcount = b.wcount,
dcount = b.dcount
WHEN NOT MATCHED
THEN
INSERT (key,
pkey,
wcount,
dcount)
VALUES(b.key,b.pkey,b.wcount,b.dcount);
I don't think there's an UPSERT statement in PostgreSQL 9.3, but you can do this:
```
with cte_dual as (
select
v_c1 as key,
v_c2 as pkey,
v_c3 as wcount,
v_c4 as dcount
), cte_update as (
update my_table as a set
wcount = b.wcount,
dcount = b.dcount
from cte_dual as b
where b.key = a.key and b.pkey = a.pkey
returning *
)
insert into my_table (key, pkey, wcount, dcount)
select d.key, d.pkey, d.wcount, d.dcount
from cte_dual as d
where not exists (select * from cte_update as u WHERE u.key = d.key and u.pkey = d.pkey)
```
You can read a couple of similar questions:
* [How to UPSERT (MERGE, INSERT ... ON DUPLICATE UPDATE) in PostgreSQL?](https://stackoverflow.com/questions/17267417/how-do-i-do-an-upsert-merge-insert-on-duplicate-update-in-postgresql)
* [Insert, on duplicate update in PostgreSQL?](https://stackoverflow.com/questions/1109061/insert-on-duplicate-update-postgresql) | I ran on this today, read these, the docs, eventually used something very similar to what to my understanding the docs are advising now for `UPSERT`:
```
INSERT INTO table (
$INSERT_COLUMN_NAMES
)
VALUES (
$INSERT_COLUMN_VALUES
)
ON CONFLICT (
$LIST_OF_PRIMARY_KEY_COLUMN_NAMES
)
DO UPDATE SET
$UPDATE_SET_SUBCLAUSE
WHERE
$WHERE_CONDITION_ON_PRIMARY_KEYS
;
```
I was actually working at the DAO level, so I cannot provide a meaningful example at the moment; I will review this later.
If I'm not mistaken, it works perfectly fine. I felt forced to create a primary key that wasn't there before, but it actually belonged there. | Migrating an Oracle MERGE statement to a PostgreSQL UPSERT statement | [
"sql",
"oracle",
"postgresql",
"upsert",
"sql-merge"
] |
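SQLite (3.24+) implements the same `INSERT ... ON CONFLICT ... DO UPDATE` syntax as modern PostgreSQL, so the pattern from the second answer can be exercised via Python's `sqlite3` module. Column names are taken from the question; the unique key on `(key, pkey)` is the prerequisite for the conflict target:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE my_table (
    key TEXT, pkey TEXT, wcount INTEGER, dcount INTEGER,
    PRIMARY KEY (key, pkey))""")

# One statement for both cases: insert when no row matches the key,
# update (via the special "excluded" pseudo-table) when one does.
upsert = """
    INSERT INTO my_table (key, pkey, wcount, dcount) VALUES (?, ?, ?, ?)
    ON CONFLICT (key, pkey) DO UPDATE SET
        wcount = excluded.wcount, dcount = excluded.dcount
"""
con.execute(upsert, ("a", "b", 1, 1))  # first run inserts
con.execute(upsert, ("a", "b", 5, 9))  # second run updates the same row
rows = con.execute("SELECT * FROM my_table").fetchall()
print(rows)  # [('a', 'b', 5, 9)]
```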
I tried to understand how ON and WHERE would affect the execution of the statement, but no luck.
I have this statement which gives me the expected result:
```
SELECT DISTINCT u1.email
FROM user u1
LEFT JOIN user u2
ON u1.email = u2.email
WHERE u1.id != u2.id
```
[Correct output](http://sqlfiddle.com/#!2/6a133/36)
Interchanging ON and WHERE gives the same result, which is fine, but when I delete the WHERE I get more results:
```
SELECT DISTINCT u1.email
FROM user u1
LEFT JOIN user u2
ON u1.email = u2.email
AND u1.id != u2.id
```
[Incorrect output](http://sqlfiddle.com/#!2/6a133/35)
Why is that ? | This happens because in the first case the `where` clause is applied *after* the join and will remove all rows that were only included by the "outer" join. Those rows are removed because the `email` column from the `u2` table will be null for those and any comparison with `null` yields undefined which means that the `where` clause will discard those rows.
A `where` condition on an outer joined table essentially turns your outer join back into an inner join. | For starters, your sqlfiddle isn't matching your code on this page. Minor issue though.
Short answer is this: when you add a where clause that checks criteria on the left join, it essentially makes it act like an inner join. | LEFT JOIN Statement with ON and WHERE | [
"sql"
] |
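The behavioural difference described in the chosen answer is easy to reproduce in SQLite through Python. The sample emails are invented; the two queries mirror the question's pair:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE user (id INTEGER, email TEXT)")
con.executemany("INSERT INTO user VALUES (?, ?)",
                [(1, "a@x"), (2, "a@x"), (3, "b@x")])

# Condition in WHERE: NULL-extended rows fail "u1.id != u2.id"
# (NULL comparisons are unknown), so only real duplicates survive.
dup_where = con.execute("""
    SELECT DISTINCT u1.email FROM user u1
    LEFT JOIN user u2 ON u1.email = u2.email
    WHERE u1.id != u2.id
    ORDER BY u1.email
""").fetchall()

# Same condition moved into ON: rows without a match are kept
# with u2.* NULL, so every email comes back.
dup_on = con.execute("""
    SELECT DISTINCT u1.email FROM user u1
    LEFT JOIN user u2 ON u1.email = u2.email AND u1.id != u2.id
    ORDER BY u1.email
""").fetchall()

print(dup_where)  # [('a@x',)]
print(dup_on)     # [('a@x',), ('b@x',)]
```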
Let me start off by saying: yes, this is a homework question.
It seems so simple, but I can't get it to work. I am just starting sub-queries, and I'm guessing that's what my teacher wants here.
Here's the question:
```
5. Write a SELECT statement that returns the name and discount percent
of each product that has a unique discount percent. In other words,
don’t include products that have the same discount percent as another
product.
-Sort the results by the ProductName column.
```
Here's what I tried
```
SELECT DISTINCT p1.ProductName, p1.DiscountPercent
FROM Products AS p1
WHERE NOT EXISTS
(SELECT p2.ProductName, p2.DiscountPercent
FROM Products AS p2
WHERE p1.DiscountPercent <> p2.DiscountPercent)
ORDER BY ProductName;
```
Any Help would be highly appreciated - thanks ! | Try this, it's quite similar to yours, but with `not in` you assure that the discount is not present in another product.
```
SELECT p1.ProductName, p1.DiscountPercent
FROM Products AS p1
WHERE p1.DiscountPercent NOT IN
(SELECT p2.DiscountPercent
FROM Products AS p2
WHERE p1.ProductName <> p2.ProductName)
ORDER BY ProductName
When checking for uniqueness, using `COUNT()` makes it simple; you can either use it in a `HAVING` clause or select it outright.
```
SELECT a.ProductName, a.DiscountPercent
FROM Products a
JOIN (SELECT DiscountPercent, COUNT(DiscountPercent) AS CT
FROM Products
GROUP BY DiscountPercent
)b
ON a.DiscountPercent = b.DiscountPercent
WHERE b.CT = 1
```
Or:
```
SELECT a.ProductName, a.DiscountPercent
FROM Products a
JOIN (SELECT DiscountPercent
FROM Products
GROUP BY DiscountPercent
HAVING COUNT(DiscountPercent) = 1
)b
ON a.DiscountPercent = b.DiscountPercent
``` | SQL Listing only unique values with subquery | [
"mysql",
"sql"
] |
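The `COUNT`-based pattern from the second answer can be verified in SQLite. The product names and discounts below are invented sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE products (product_name TEXT, discount_percent REAL)")
con.executemany("INSERT INTO products VALUES (?, ?)",
                [("A", 10.0), ("B", 20.0), ("C", 10.0), ("D", 30.0)])

# Keep only discounts held by exactly one product (HAVING COUNT = 1),
# then join back to recover the product names.
rows = con.execute("""
    SELECT p.product_name, p.discount_percent
    FROM products p
    JOIN (SELECT discount_percent FROM products
          GROUP BY discount_percent HAVING COUNT(*) = 1) u
      ON p.discount_percent = u.discount_percent
    ORDER BY p.product_name
""").fetchall()
print(rows)  # [('B', 20.0), ('D', 30.0)]
```

Products A and C share a 10% discount, so only B and D come back.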
I know my title is a bit misleading, but I am unsure of what would be a good title.
Authored has 2 columns, namely ID and PubID.
Is there any way I could output P in my result?
I would like to know, for each respective ID/PubID pair, how many rows have the same PubID but a different ID.
```
select a.authorId, P
from Authored A
WHERE 1 <
(Select count(*) as P
from Authored B
where A.pubId = B.pubId
AND A.authorId<> B.authorId)
```
Thanks to all who have answered.
---
## Table
```
AuthorID pubID
1 2
3 2
4 2
10 1
11 1
```
---
## Expected Result
```
AuthorID NumberOfOccurenceOfDiffAuthIDWithSamePubID
1 3
3 3
4 3
10 2
11 2
``` | Updated using `count() over()`:
**[Fiddle demo](http://sqlfiddle.com/#!3/156b3/8)**:
```
select a.AuthorId, count(*) over(partition by pubId) counts
from Authored a
order by a.AuthorId;
Please try the below query for MS SQL Server:
```
select
*,
COUNT(*) over (partition by pubId)
From Authored
where authorId<>pubId
``` | Taking variables outside of query in sql | [
"sql"
] |
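The chosen `count() over()` query runs unchanged on SQLite 3.25+ (which added window-function support), as this Python sketch shows using the sample data from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE authored (author_id INTEGER, pub_id INTEGER)")
con.executemany("INSERT INTO authored VALUES (?, ?)",
                [(1, 2), (3, 2), (4, 2), (10, 1), (11, 1)])

# The window function counts rows per pub_id without collapsing them,
# so every author row keeps its own line in the output.
rows = con.execute("""
    SELECT author_id, COUNT(*) OVER (PARTITION BY pub_id) AS counts
    FROM authored
    ORDER BY author_id
""").fetchall()
print(rows)  # [(1, 3), (3, 3), (4, 3), (10, 2), (11, 2)]
```

This matches the expected-result table in the question.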
I have a query to fetch a set of quizzes that are running that day. I want to make sure that the user hasn't already finished that quiz. The query I have is:
```
SELECT qi.qiId, qi.qiTitle, qi.qiDescription, qi.qiDate, qi.qiLeague, qi.qiType, qi.qiNumQuestions
FROM quizzes qi
LEFT JOIN quizOneDaySpecials qo
ON qo.qoQuizId = qi.qiId
AND qo.qoUserId = 11
WHERE qi.qiLeague = 0
AND qo.qoIsCompleted != 1
AND qi.qiDate BETWEEN 1381878000 AND 1381964399
```
All the quiz details are in the `quizzes` table, and if a user has started or completed the quiz, then a record will exist in `quizOneDaySpecials`. However if the user hasn't started the quiz, no record will be in `quizOneDaySpecials`, and that's where my problem occurs.
`qo.qoIsCompleted` is `NULL` if a user hasn't started a quiz, and every time I run my query I get back no results. If I take out the `AND qo.qoIsCompleted != 1` then the quiz that the user hasn't started is returned as expected.
Why isn't my quiz being returned when I have the `qo.qoIscompleted != 1` check in? It seems perfectly valid, it doesn't equal one, as it's `NULL`. | Nulls evaluate to unknown and therefore won't match any operators other than IS NULL or IS NOT NULL. You could either add an OR statement to your qoIsCompleted or you can use IFNULL() function to check if it is null and if it is default the value to something like 0 so that it doesn't match.
```
AND IFNULL(qo.qoIsCompleted, 0) != 1
``` | You can't compare NULL values and non-null values directly. Under default settings SQL Server doesn't know whether NULL is or isn't equal to 1. It knows only that it's NULL. It looks like what you want is something like
```
WHERE IFNULL(qo.qoIsCompleted, 0) != 1 AND ...
``` | MySQL query returning no results when querying joined table | [
"mysql",
"sql",
"select",
"left-join"
] |
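The `IFNULL` fix can be demonstrated with a stripped-down two-table version of the schema; table and column names here are simplified stand-ins for the question's quizzes/quizOneDaySpecials tables:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE quizzes (id INTEGER)")
con.execute("CREATE TABLE specials (quiz_id INTEGER, completed INTEGER)")
con.executemany("INSERT INTO quizzes VALUES (?)", [(1,), (2,)])
con.execute("INSERT INTO specials VALUES (1, 1)")  # quiz 1 completed; quiz 2 never started

# Direct comparison loses the NULL-extended row (NULL != 1 is unknown)...
naive = con.execute("""
    SELECT q.id FROM quizzes q
    LEFT JOIN specials s ON s.quiz_id = q.id
    WHERE s.completed != 1
""").fetchall()

# ...while IFNULL defaults NULL to 0, so the unstarted quiz survives.
fixed = con.execute("""
    SELECT q.id FROM quizzes q
    LEFT JOIN specials s ON s.quiz_id = q.id
    WHERE IFNULL(s.completed, 0) != 1
""").fetchall()
print(naive, fixed)  # [] [(2,)]
```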
I am trying to generate some user analytics in a stored procedure.
Here is the sql code:
```
SELECT count(*),SUM(n.credit) from notifications n
left join questions q on q.id = n.question_id
where n.user_id = u_id and q.question_level = 1 ;
```
The column `q.question_level` has three possible values: `1`, `2` and `3`. Is there a way to get the separate count and sum values for the three levels in a single SQL statement, instead of separate SQL statements like the one above?
```
SELECT
count(*),
SUM(n.credit) AS totalCredit,
SUM(CASE WHEN q.question_level = 1 THEN n.credit ELSE 0 END) as level1_sum,
SUM(CASE WHEN q.question_level = 2 THEN n.credit ELSE 0 END) as level2_sum,
SUM(CASE WHEN q.question_level = 3 THEN n.credit ELSE 0 END) as level3_sum,
SUM(q.question_level = 1) as level1_count,
SUM(q.question_level = 2) as level2_count,
SUM(q.question_level = 3) as level3_count
from
notifications n
left join questions q
on q.id = n.question_id
where
n.user_id = u_id
``` | Try this:
```
SELECT COUNT(*), SUM(n.credit)
FROM notifications n LEFT JOIN questions q ON q.id = n.question_id
WHERE n.user_id = u_id
GROUP BY q.question_level;
``` | Get count and sum on multiple conditions in a single sql statement | [
"mysql",
"sql",
"stored-procedures"
] |
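The conditional-aggregation part of the chosen answer can be exercised in SQLite with invented sample data (the MySQL-only `SUM(boolean)` count shorthand is omitted here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE notifications (question_id INTEGER, credit INTEGER)")
con.execute("CREATE TABLE questions (id INTEGER, question_level INTEGER)")
con.executemany("INSERT INTO questions VALUES (?, ?)", [(1, 1), (2, 2), (3, 3)])
con.executemany("INSERT INTO notifications VALUES (?, ?)",
                [(1, 10), (1, 5), (2, 7), (3, 2)])

# One pass over the join; CASE routes each credit into its level bucket.
row = con.execute("""
    SELECT SUM(CASE WHEN q.question_level = 1 THEN n.credit ELSE 0 END),
           SUM(CASE WHEN q.question_level = 2 THEN n.credit ELSE 0 END),
           SUM(CASE WHEN q.question_level = 3 THEN n.credit ELSE 0 END)
    FROM notifications n
    LEFT JOIN questions q ON q.id = n.question_id
""").fetchone()
print(row)  # (15, 7, 2)
```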
Let's say I want to update table `course_main`. My initial query was:
```
update course_main
set data_src_pk1 = 2
where course_id LIKE '%FA2013' and available_ind = 'N'
```
Well, this will get some courses (only a small set, fortunately) that I do not want to update. So I have a select statement to retrieve the actual data I do want to update, and it returns 145 rows.
```
select course_id from course_main
where course_id like '%FA2013' and available_ind = 'N'
and course_id <> 'ENGL-0330-112WE-FA2013'
and course_id <> 'ENGL-0360-112WE-FA2013'
and course_id <> 'ENGL-0390-112WE-FA2013'
and course_id <> 'ARTC-1053-128HY-CEQ113'
and course_id <> 'ARTC-1353-128HY-FA2013'
and course_id <> 'HITT-1005-005IN-CEQ113'
and course_id <> 'HITT-1305-005IN-FA2013'
and course_id <> 'HITT-1305-006IN-FA2013'
and course_id <> 'READ-0300-104WE-FA2013'
and course_id <> 'READ-0340-104WE-FA2013'
and course_id <> 'READ-0370-104WE-FA2013'
and course_id <> 'WBCT-1003-011IN-FA2013'
and course_id <> 'WBCT-1005-011IN-CEQ113'
and course_id <> 'WBCT-1003-010IN-FA2013'
and course_id <> 'WBCT-1005-010IN-CEQ113'
and course_id <> 'ARTS-1301-012IN-FA2013'
order by course_id asc
```
I want to use an update statement to only hit the 145 results from the second query. Any pointers on how to accomplish this?
Thank you. | ```
update course_main
set data_src_pk1 = 2
where course_id in (select course_id from course_main
where course_id like '%FA2013' and available_ind = 'N'
and course_id <> 'ENGL-0330-112WE-FA2013'
and course_id <> 'ENGL-0360-112WE-FA2013'
and course_id <> 'ENGL-0390-112WE-FA2013'
and course_id <> 'ARTC-1053-128HY-CEQ113'
and course_id <> 'ARTC-1353-128HY-FA2013'
and course_id <> 'HITT-1005-005IN-CEQ113'
and course_id <> 'HITT-1305-005IN-FA2013'
and course_id <> 'HITT-1305-006IN-FA2013'
and course_id <> 'READ-0300-104WE-FA2013'
and course_id <> 'READ-0340-104WE-FA2013'
and course_id <> 'READ-0370-104WE-FA2013'
and course_id <> 'WBCT-1003-011IN-FA2013'
and course_id <> 'WBCT-1005-011IN-CEQ113'
and course_id <> 'WBCT-1003-010IN-FA2013'
and course_id <> 'WBCT-1005-010IN-CEQ113'
and course_id <> 'ARTS-1301-012IN-FA2013')
```
? | How about
```
UPDATE course_main
SET data_src_pk1 = 2
WHERE course_id LIKE '%FA2013'
AND available_ind = 'N'
AND course_id NOT IN ('ENGL-0330-112WE-FA2013','ENGL-0360-112WE-FA2013',.....)
ORDER BY course_id ASC;
```
? | How can I update a table based on results from a select query? | [
"mysql",
"sql",
"sql-server",
"sql-server-2008"
] |
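The `UPDATE ... WHERE course_id IN (SELECT ...)` pattern from the chosen answer can be tried out in SQLite; to keep the sketch short, only three invented rows and a single exclusion are used:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE course_main (
    course_id TEXT, available_ind TEXT, data_src_pk1 INTEGER)""")
con.executemany("INSERT INTO course_main VALUES (?, ?, 1)",
                [("MATH-1000-FA2013", "N"),
                 ("ENGL-0330-112WE-FA2013", "N"),  # an excluded course
                 ("HIST-2000-FA2013", "Y")])

# Reuse the verified SELECT as the filter for the UPDATE.
con.execute("""
    UPDATE course_main SET data_src_pk1 = 2
    WHERE course_id IN (
        SELECT course_id FROM course_main
        WHERE course_id LIKE '%FA2013' AND available_ind = 'N'
          AND course_id NOT IN ('ENGL-0330-112WE-FA2013'))
""")
rows = con.execute(
    "SELECT course_id, data_src_pk1 FROM course_main ORDER BY course_id"
).fetchall()
print(rows)  # only MATH-1000-FA2013 is updated to 2
```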
I have two entities, "User" and "Product", and two tables: one to store which users **viewed** which products and the other to store which users **liked** which products. I want to query once and return the results as illustrated below:
Table "Views":

Table "Likes":

Query result:

Ideally, there should not be repeated user\_id, product\_id pairs in the results. Notice that if a user neither viewed nor liked a product, it should not be returned (i.e., there should be no FALSE, FALSE rows).
As MySQL does not support FULL JOIN, I suspect there should be a workaround using UNION ALL. I appreciate any ideas. | You can try this out:
```
SELECT user_id, product_id, max(`view`) `view`, max(`like`) `like` FROM (
SELECT user_id, product_id, TRUE `view`, FALSE `like` FROM views
UNION ALL
SELECT user_id, product_id, FALSE, TRUE FROM likes
) s
GROUP BY user_id, product_id
```
Btw, you should try to avoid `view` and `like` as names as they are reserved words in MySQL.
Bear in mind that in MySQL `true` and `1` as well as `false` and `0` are synonyms.
Fiddle [here](http://sqlfiddle.com/#!2/ddffb7/1). | ```
SELECT
v.User_ID, v.Product_ID, true AS `View`, IF (l.User_ID, true, false) AS `Like`
FROM views v
LEFT JOIN likes l ON (l.User_ID = v.User_ID AND l.Product_ID = v.Product_ID)
UNION
SELECT
l.User_ID, l.Product_ID, IF (v.User_ID, true, false) AS `View`, true AS `Like`
FROM likes l
LEFT JOIN views v ON (l.User_ID = v.User_ID AND l.Product_ID = v.Product_ID)
ORDER BY User_ID, Product_ID
``` | MySQL query to return OR-inclusive relationships from multiple tables | [
"mysql",
"sql",
"join"
] |
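The `UNION ALL` + `GROUP BY`/`MAX` emulation of a full outer join from the chosen answer runs as-is on SQLite. Here `viewed`/`liked` replace the reserved words `view`/`like`, per that answer's warning, and the sample IDs are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE views (user_id INTEGER, product_id INTEGER)")
con.execute("CREATE TABLE likes (user_id INTEGER, product_id INTEGER)")
con.executemany("INSERT INTO views VALUES (?, ?)", [(1, 1), (1, 2)])
con.executemany("INSERT INTO likes VALUES (?, ?)", [(1, 2), (2, 3)])

# Tag each source with 0/1 flags, then collapse duplicate pairs with MAX:
# a pair appearing in both tables ends up with viewed=1 AND liked=1.
rows = con.execute("""
    SELECT user_id, product_id, MAX(viewed) AS viewed, MAX(liked) AS liked
    FROM (SELECT user_id, product_id, 1 AS viewed, 0 AS liked FROM views
          UNION ALL
          SELECT user_id, product_id, 0, 1 FROM likes)
    GROUP BY user_id, product_id
    ORDER BY user_id, product_id
""").fetchall()
print(rows)  # [(1, 1, 1, 0), (1, 2, 1, 1), (2, 3, 0, 1)]
```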
I have stored procedure like this:
```
alter procedure [dbo].[testdate]
as
begin
declare @current nvarchar(100),
@date1 datetime
select @current = getdate()
select @date1 = getdate()
end
```
While executing, I want to show the values of `@current` and `@date1`.
How can I display these two variables?
Now I am getting 0 | ```
BEGIN
DECLARE @current nvarchar(100),
@date1 datetime
SELECT @current=getdate()
SELECT @date1=getdate()
SELECT @current
SELECT @date1
END
``` | try
```
alter procedure [dbo].[testdate] as
begin
declare
@current nvarchar(100),
@date1 datetime
select @current=getdate()
select @date1=getdate()
select @current as "CurrentDate"
select @date1 as "Date1"
end
``` | Show the declared variable value while executing stored procedure | [
"sql",
"sql-server",
"stored-procedures"
] |
Using `SQL Server 2012`. I want to select a distinct value from a table `T` if only **one** distinct value exists. Below I have a simplified example of what I'm trying to do.
This data would result in 1 returned value: 'bbb'
```
ID Data
----------
1 'bbb'
```
The same goes for this data:
```
ID Data
----------
1 'bbb'
2 'bbb'
3 'bbb'
```
But this data would result in 0 returned rows:
```
ID Data
----------
1 'aaa'
2 'bbb'
3 'bbb'
```
Ideally, there would be a way to count the number of rows in the `WHERE` clause (without grouping data). Also, please remember that my original query is quite complicated. For this simplified version, I tried something like this, but it doesn't work:
```
SELECT [Data] FROM
(SELECT [Data], COUNT(*) OVER() AS [DinstinctValueCount] FROM
(SELECT DISTINCT [Data] FROM [T]) [A]) [B] WHERE [DistinctValueCount] = 1
```
Invalid column name 'DistinctValueCount'.
## UPDATE
I found a solution. Is there a better one?
```
SELECT CASE WHEN COUNT(*) = 1 THEN MIN([Data]) ELSE NULL END AS [Data] FROM
(SELECT DISTINCT [Data] FROM [T]) [A]
```
Any aggregate function will do. | ```
SELECT MIN(ID), [Data]
FROM [tablename]
GROUP BY [Data]
HAVING
1=(SELECT COUNT(DISTINCT [Data]) FROM [tablename])
```
or if you don't want to repeat [tablename], you could use something like this:
```
SELECT MIN(ID), MIN([Data])
FROM [tablename]
HAVING MIN([Data])=MAX([Data])
```
Please see fiddle [here](http://sqlfiddle.com/#!6/ba8c7/4). | Where clause logically execute before select statement. So the alias which mentioned in select list is not found in where clause because select would execute next.
true is this :
```
with s as (
SELECT [Data] FROM
(SELECT [Data], COUNT(*) OVER() AS [DinstinctValueCount] FROM
(SELECT DISTINCT [Data] FROM [T]) [A]) [B] )
SELECT * FROM S WHERE [DinstinctValueCount]= 1
``` | Select distinct value if only ONE distinct value exists? | [
"sql",
"sql-server"
] |
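The OP's aggregate trick can be condensed with `COUNT(DISTINCT ...)`. Note that, like the OP's own solution, this variant returns a single NULL row rather than zero rows when the values differ — a sketch in SQLite:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, data TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)", [(1, "bbb"), (2, "bbb"), (3, "bbb")])

# One distinct value -> return it; otherwise the CASE yields NULL.
query = "SELECT CASE WHEN COUNT(DISTINCT data) = 1 THEN MIN(data) END AS data FROM t"
all_same = con.execute(query).fetchone()

con.execute("UPDATE t SET data = 'aaa' WHERE id = 1")
mixed = con.execute(query).fetchone()
print(all_same, mixed)  # ('bbb',) (None,)
```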
I created indexes in postgres for the below mentioned table using md5. The indexes and the table are given below:
```
create table my_table(col1 character varying, col2 character varying, col3 character varying);
```
my\_table looks like (I have just given an example; my actual table is about 1 terabyte):
```
col1 col2 col3
<a12> <j178> <k109>
create index index1 on my_table (md5(col1), md5(col2), md5(col3));
```
I tried to create index without using md5, however I ended up getting the error:
```
ERROR: index row size 2760 exceeds maximum 2712 for index "index1"
HINT: Values larger than 1/3 of a buffer page cannot be indexed.
Consider a function index of an MD5 hash of the value, or use full text indexing.
```
However, I notice that my query processing time remains the same whether or not I have created the index. I am confused as to what the reason could be. Can someone please help me with this?
The SQL query which I have fired is of the form:
```
select col3 from my_table where col1='<a12>' and col2='<j178>';
``` | Since you are getting an error when you try to create a standard btree index I am guessing that the data in one or more of these columns is quite large.
The index you created is perhaps best described as "a b-tree index of the md5 hashes of the three columns", rather than an "md5 index of three columns".
In order for PostgreSQL to use the index, your query must be for md5 hashes. Try:
```
SELECT col3
FROM my_table
WHERE
md5(col1) = md5('<a12>')
and md5(col2) = md5('<j178>')
```
The planner will say "oh, I have an index of md5(col1) etc, I'll use that". Note that this will only work for full equality queries (=) and not for LIKE or range queries. Also it won't get the value for `col3` from the index because only the md5 of `col3` is stored there, so it will still need to go to the table to get the `col3` value.
For a small table this would probably result in the planner deciding to skip the index and just do a full scan on the table, but it sounds like your table is big enough that the index will be worthwhile - postgres will scan the index, find the row entries that match, then retreive those rows from the table.
Now if `col3` is the one that has large hunks of data in it and cols 1 and 2 are small, you could just create a normal index of `col1`, `col2`. You really only need to index the columns in your `where` clause, not those in the `select` part.
The postgres indexes documentation is pretty good: <http://www.postgresql.org/docs/9.0/static/indexes.html> but the CREATE INDEX page is probably the single most useful one: <http://www.postgresql.org/docs/9.1/static/sql-createindex.html>
The best way to find out if your indexes are being used is to use the "EXPLAIN" instruction: <http://www.postgresql.org/docs/9.1/static/sql-explain.html> - if you use pgadmin3 to play with your DB (I highly recommend it) then just press F7 in the query window and it will do the explain and present it in a nice GUI showing you the query plan. This has saved many hours of hair-pulling trying to find out why my indices weren't being used. | Create index for each column, not combined column. If you create index for several separated column, postgresql query planar can combine those using what it calls a bitmap index scan. Combining single column indexes is often as fast, and you can use them in any query that references a column that you indexed. Creating index for combined columns is not good design.
Reference [Postgresql doc 11.5. Combining Multiple Indexes](http://www.postgresql.org/docs/9.1/static/indexes-bitmap-scans.html)
About md5, I got posted without your refresh. using md5() is ok. Like other answers, you should also use md5() in where clause and need to add full data comparision to account for the possible collisions of hash.
And there are another possibility. Single column index can help you reduce index row size than multi column combined index. | Slow performance of postgres even on creating indexes | [
"sql",
"postgresql"
] |
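The core point of the chosen answer — an expression index is only usable when the query repeats the indexed expression — can be shown in SQLite as well, substituting the built-in `lower()` for `md5()` (SQLite has no `md5()` function; expression indexes need SQLite 3.9+):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE my_table (col1 TEXT, col2 TEXT, col3 TEXT)")
con.execute("CREATE INDEX index1 ON my_table (lower(col1), lower(col2))")

# Filtering on the bare column cannot use the expression index...
miss = con.execute(
    "EXPLAIN QUERY PLAN SELECT col3 FROM my_table WHERE col1 = 'x'").fetchall()
# ...but repeating the indexed expression can.
hit = con.execute(
    "EXPLAIN QUERY PLAN SELECT col3 FROM my_table "
    "WHERE lower(col1) = 'x' AND lower(col2) = 'y'").fetchall()
print(miss[-1][-1])  # a full-table SCAN
print(hit[-1][-1])   # a SEARCH ... USING INDEX index1
```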
I have an SQLite table of card-controlled entry door usages.
I need the sum of the even-odd time intervals; for example, the expected sum for card 02 is (12:03 - 08:07) + (16:03 - 14:52) = 03:56 + 01:11 = 05:07.
People can move freely in daytime, so there could be many entries for a card.
[Test table](http://cdn.virtuosoft.eu/stackoverflow/19393071/entries.sqlite)
The table looks like this:
```
card_id | time
----------------------------
03 | 07:55
01 | 08:02
02 | 08:07
03 | 11:56
02 | 12:03
03 | 12:23
02 | 14:52
03 | 15:56
01 | 15:58
02 | 16:03
The query is not hard, but the time-interval handling is!
```
SELECT card_id,
TIME(1721059.5+SUM(JULIANDAY(time)*((SELECT COUNT() FROM t AS _ WHERE card_id=t.card_id AND time<t.time)%2*2-1)))
FROM t
GROUP BY card_id;
```
How it works:
`((SELECT COUNT() FROM t AS _ WHERE card_id=t.card_id AND time<t.time)%2*2-1)` counts all records before current one and returns `-1` or `1`.
`JULIANDAY(time)` converts the time string into a numeric quantity (in days). The product of the two former results yields the desired signed contribution.
`TIME(1721059.5+...)` will return a properly formatted time string. `1721059.5` is required because of `JULIANDAY` definition and [SQLite date/time functions](http://www.sqlite.org/lang_datefunc.html) being only valid for dates after `0000-01-01 00:00:00`.
EDIT: Looking at @CL's answer, I see that using `JULIANDAY('00:00')` is more elegant than `1721059.5`. However, I keep the constant, as it should perform better than a function call.
```
CREATE TEMP TABLE card_times AS
SELECT time FROM entries
WHERE card_id = 2;
SELECT SUM(strftime("%s", '2000-01-01 ' || x.time || ':00')
- strftime("%s", '2000-01-01 ' || y.time || ':00'))
FROM card_times AS x INNER JOIN card_times AS y
ON x.ROWID = y.ROWID + 1
WHERE (x.ROWID % 2) = 0
```
Get a table of times for the desired card. Join it with itself using row id's to match the right pairs. Filter out the odd pairs. Sum the results.
Note this will fail if the clock wraps to the next day. Taking care of that case in SQLite would be ugly. You really need full date-times in the original data.
This gives the answer in seconds. I let you work out how to get it in the units you need. | Get sum of paired time differences | [
"sql",
"sqlite"
] |
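The alternating-sign arithmetic behind the chosen answer can be checked in plain Python for card 02, independent of the SQL formulation:

```python
def minutes(hhmm):
    h, m = hhmm.split(":")
    return int(h) * 60 + int(m)

# Card 02's stamps in chronological order: entries at even positions,
# exits at odd positions, so exits add and entries subtract.
times = ["08:07", "12:03", "14:52", "16:03"]
total = sum(minutes(t) * (1 if i % 2 else -1) for i, t in enumerate(times))
print(divmod(total, 60))  # (5, 7), i.e. 05:07
```

The total of 307 minutes confirms the 05:07 figure from the question.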
I'm trying an SQLi attack.
The compiler executes the following query despite warnings:
```
select * from users where username='A' or 'B' and password='C';
```
Because the query executes, the attack is successful. Why does this query work, and what is it doing? Can I assume that the value 'B' is taken as 'True' in the boolean sense? How do the logical operators work with each other? Is there any specific order, like BODMAS in mathematics? Note that 'B' is standalone and not a boolean condition.
A query of the above format works fine in a MySQL database and returns the same result as the query:
```
select * from users where username='A';
```
No syntax errors are returned.
```
mysql> select * from authentication where user='A' or 'B' and password='C';
+------+----------+
| user | password |
+------+----------+
| A | B |
+------+----------+
1 row in set, 1 warning (0.00 sec)
```
The warning is as follows:
```
mysql> show warnings;
+---------+------+----------------------------------------+
| Level | Code | Message |
+---------+------+----------------------------------------+
| Warning | 1292 | Truncated incorrect INTEGER value: 'B' |
+---------+------+----------------------------------------+
1 row in set (0.00 sec)
``` | **UPDATE**
Based on your updated question and comments, it would appear that you are aware of the error in your syntax and want to know *specifically* how *MySQL* handles this query.
It would appear that MySQL first converts the character string to an integer value (producing the truncation warning).
MySQL treats any [non-zero number as `TRUE`](http://dev.mysql.com/doc/refman/5.7/en/logical-operators.html) and zero as `FALSE`; 'B' converts to the integer 0, so it evaluates as `FALSE`, the `'B' and password='C'` branch drops out, and the query behaves like `username='A'` — matching the result you observed.
---
To begin with, I don't think your query would be valid in any DBMS I am aware of. Since you say that 'B' is not a boolean condition, then I have to think that you mean:
```
select * from users where username IN ( 'A', 'B' ) and password='C';
```
which would, of course, invalidate your question about [BODMAS](http://www.mathsisfun.com/operation-order-bodmas.html). So, to answer what appears to be your core question, I am going to change your SQL to this:
```
select * from users where username='A' or username='B' and password='C';
```
It's going to depend on the DBMS and how it chooses to parse the SQL. Generally the `AND` should take precedence over `OR`, but obviously software developers are free to implement SQL however they'd like, so I'm sure you could find counter-examples if you looked hard enough (perhaps a DBMS that processes boolean operators in order, for example).
**SQL Server** will process the `AND` before `OR`
<http://technet.microsoft.com/en-us/library/ms190276.aspx>
```
username='A' or ( username='B' and password='C' )
```
**MySQL** will also process the `AND` before `OR`
<http://dev.mysql.com/doc/refman/5.0/en/operator-precedence.html>
```
username='A' or ( username='B' and password='C' )
```
**Sybase** will process the `AND` before the `OR`
<http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.help.ase_15.0.sqlug/html/sqlug/sqlug102.htm>
```
username='A' or ( username='B' and password='C' )
```
**DB2**, like the others, will process `AND` before `OR`
<http://publib.boulder.ibm.com/infocenter/dzichelp/v2r2/index.jsp?topic=%2Fcom.ibm.db2z9.doc.sqlref%2Fsrc%2Ftpc%2Fdb2z_precedenceofoperations.htm>
```
username='A' or ( username='B' and password='C' )
```
Even **MS Access**, one of the least standards-compliant SQL implementations I am aware of, preserves `AND`-before-`OR` precedence:
<http://office.microsoft.com/en-us/access-help/HV080756432.aspx>
```
username='A' or ( username='B' and password='C' )
``` | If 'B' is not a boolean then you would need to specify what you are comparing against, which in this case is probably `username`.
Something like;
```
select * from users where username='A' or username='B' and password='C'
```
This would evaluate `username='B' AND password='C'` first, and then `OR username='A'`.
You can also refer to this [StackOverflow](https://stackoverflow.com/questions/1241142/sql-logic-operator-precedence-and-and-or/1241158#1241158) post.
You can find the SQL operator ordering [here](http://technet.microsoft.com/en-us/library/ms190276.aspx). | And Or conditions in SQL, but not properly nest them | [
"mysql",
"sql",
"logical-operators"
] |
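The precedence claim (`AND` binds tighter than `OR`) also holds in SQLite and can be observed directly; the credentials below are invented sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (username TEXT, password TEXT)")
con.executemany("INSERT INTO users VALUES (?, ?)",
                [("A", "B"), ("B", "C"), ("B", "X")])

# AND binds tighter than OR, so this filter means:
#   username='A' OR (username='B' AND password='C')
rows = con.execute("""
    SELECT username, password FROM users
    WHERE username = 'A' OR username = 'B' AND password = 'C'
    ORDER BY username, password
""").fetchall()
print(rows)  # [('A', 'B'), ('B', 'C')] -- ('B', 'X') is excluded
```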
First, I run all of these in Java with the JDBC driver...
Here I define a table:
```
create table HistoryCCP(
ID NUMBER(6) NOT NULL,
SCRIPT VARCHAR2(1000) NOT NULL
)
```
Here I define a sequence:
```
CREATE SEQUENCE SYSTEM.HistoryId
MINVALUE 1
MAXVALUE 1000000
INCREMENT BY 1
START WITH 1
NOORDER
NOCYCLE
```
Now I insert to table by using this here:
```
insert into HistoryCCP
values (SYSTEM.HistoryId.nextval ,'HELLOOOO ')
```
Whenever I close the program and run it again and try to insert, it increments it by ten!
And when I defined sequence like this:
```
CREATE SEQUENCE SYSTEM.HistoryId
MINVALUE 1
MAXVALUE 1000000
INCREMENT BY 1
START WITH 1
CACHE 100 -- added cache parameter
NOORDER
NOCYCLE
```
It increases it by 100!
Do you know why it behaves like this and how to increment it by 1? | Never rely on sequences for gap-free numbering.
The cache value is the number of sequence values that are held in memory by the database server to avoid the need to keep updating its internal $SEQ table with the most recently used value. If you reduce the cache value then you increase the rate at which the $SEQ table has to be modified, which slows the system.
Cached values can be aged out, and are lost on system restart, and values are not reused if a transaction gets rolled back.
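If gap-free numbering really were a hard requirement, the usual alternative is a plain counter table updated in the same transaction as the insert — at the cost of serialising inserts on that row. A minimal sketch (SQLite used here purely for illustration; the table and column names are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE counter (name TEXT PRIMARY KEY, val INTEGER)")
con.execute("INSERT INTO counter VALUES ('history_id', 0)")

def next_history_id(con):
    # The UPDATE serialises concurrent callers on this single row,
    # which is exactly the cost of gap-free numbering.
    con.execute("UPDATE counter SET val = val + 1 WHERE name = 'history_id'")
    return con.execute(
        "SELECT val FROM counter WHERE name = 'history_id'").fetchone()[0]

ids = [next_history_id(con) for _ in range(3)]
print(ids)  # [1, 2, 3] -- no gaps, and nothing cached to lose on restart
```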
The presence of gaps should not be a problem for you -- if it is then you'll need to use something other than a sequence to generate the numbers, and doing so will serialise inserts to that table. | NOCACHE would work, but it is also a bad idea for many reasons, and complete nonsense if you plan to move your application to Oracle RAC.
Oracle sequences are for (internal) unique IDs, not for strictly progressive numbers imposed by business requirements. As an example, using a sequence to generate the classical "protocol number" is a common flaw in financial accounting software: it looks easy at the beginning, but as the project grows it kills you. | How to increment sequence only by 1? | [
"",
"sql",
"oracle",
"oracle11g",
""
] |
I have a number of tables with values I need to sum up. They are not linked either, but the order is the same across all the tables.
Basically, I would like to take this two tables:
```
CASH TABLE
London 540
France 240
Belgium 340
CHEQUE TABLE
London 780
France 490
Belgium 230
```
To get an output like this to feed into a graphing application:
```
London 1320
France 730
Belgium 570
``` | ```
select region,sum(number) total
from
(
select region,number
from cash_table
union all
select region,number
from cheque_table
) t
group by region
``` | ```
SELECT (SELECT COALESCE(SUM(London), 0) FROM CASH) + (SELECT COALESCE(SUM(London), 0) FROM CHEQUE) as result
```
And so on and so forth.
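For a quick sanity check of the totals, the `UNION ALL` query above can be run against the sample data in an in-memory SQLite database:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE cash_table (region TEXT, number INTEGER)")
con.execute("CREATE TABLE cheque_table (region TEXT, number INTEGER)")
con.executemany("INSERT INTO cash_table VALUES (?, ?)",
                [("London", 540), ("France", 240), ("Belgium", 340)])
con.executemany("INSERT INTO cheque_table VALUES (?, ?)",
                [("London", 780), ("France", 490), ("Belgium", 230)])

# Stack both tables with UNION ALL, then aggregate once
totals = dict(con.execute("""
    SELECT region, SUM(number)
    FROM (SELECT region, number FROM cash_table
          UNION ALL
          SELECT region, number FROM cheque_table) t
    GROUP BY region
"""))
print(totals["London"], totals["France"], totals["Belgium"])  # 1320 730 570
```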
"The COALESCE function basically says "return the first parameter, unless it's null in which case return the second parameter" - It's quite handy in these scenarios." [Source](https://stackoverflow.com/a/8888404/1081016) | SQL: How to SUM two values from different tables | [
"",
"sql",
"database",
""
] |
I have a select statement using count and since I am counting the rows instead of returning them how do I make sure that I do not get a duplicate value on a column?
For Example
```
_table_
fName someField
Eric data
Kyle mdata
Eric emdata
Andrew todata
```
I want the count to be 3 because Eric is duplicated, is there a way to do that? My select is:
```
Select Count(*) From _table_ INTO :var
```
Thanks, | ```
SELECT Count(DISTINCT fName) From _table_ INTO :var
```
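Against the question's sample data the `DISTINCT` version returns 3 while a plain `COUNT(*)` returns 4; a quick check with an in-memory SQLite database (table name shortened to `t` here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (fName TEXT, someField TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [("Eric", "data"), ("Kyle", "mdata"),
                 ("Eric", "emdata"), ("Andrew", "todata")])

# COUNT(*) counts rows; COUNT(DISTINCT fName) counts unique names
total_rows = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
unique_names = con.execute("SELECT COUNT(DISTINCT fName) FROM t").fetchone()[0]
print(total_rows, unique_names)  # 4 3
```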
It will count the number of distinct values in the `fName` column. | This will do the job: `Select Count(distinct fname) From _table_ INTO :var` | SQL remove duplicate rows when counting | [
"",
"sql",
""
] |
I am creating a module for use in DNN 7+ and would like to use DAL2 for data access but am having some problems selecting items from the database.
My code appears to be connecting to the database successfully, but the queries that are generated by DAL2 do not include the names of the fields in the database table. I am running SQL Server Profiler to watch what gets to the database and see queries that start with "SELECT NULL FROM Product...". I expect to see "SELECT \* FROM Product..." or "SELECT ProductCode FROM Product..."
For the purposes of isolating the problem and being able to include a full code sample, I have worked my test down to the following:
I have a Product.cs file:
```
using System.Web.Caching;
using DotNetNuke.ComponentModel.DataAnnotations;
namespace MyModule.Components
{
[TableName("Product")]
[PrimaryKey("productCode")]
[Cacheable("MYMODULE_Product_", CacheItemPriority.Default, 20)]
[Scope("productCode")] //different values here did not change the result.
public class Product
{
public string productCode;
}
}
```
I have a ProductRepository.cs file:
```
using DotNetNuke.Data;
namespace MyModule.Components
{
public class ProductRepository
{
private const string EXTERNAL_DB_CONNECTION_STRING = "MY_DB_CONNECTIONSTRING_NAME";
public Product GetProduct(string productCode)
{
Product t;
using (IDataContext ctx = DataContext.Instance(EXTERNAL_DB_CONNECTION_STRING))
{
var rep = ctx.GetRepository<Product>();
t = rep.GetById(productCode);
}
return t;
}
}
}
```
I use these two files in my view with the following code:
```
ProductRepository productRepo = new ProductRepository();
Product product = (Product)productRepo.GetProduct("MYCODE");
```
While running this code, I monitor using the SQL Server Profiler and see that the following query is executed:
```
exec sp_executesql N'SELECT NULL FROM [Product] WHERE [productCode]=@0',N'@0 nvarchar(4000)',@0=N'MYCODE'
```
I don't know why the select query above is selecting NULL. I am expecting a list of product fields from the Product.cs file, or the \* character.
My database field definition (as seen from the tree view of the Microsoft SQL Server Management Studio) for the productCode is:
```
productCode(PK, varchar(50), not null)
```
I am connecting to an external database and the data is not connected with a specific module or portal. That is why I specify "productCode" as the Scope. I am not sure what the proper use of Scope is when the data is not tied to the portal or module. To make sure the problem was not connected to the Scope attribute in the Product.cs file I tested with the Scope variable set to "PortalId", "ModuleId", "productCode", and "Nothing". All of these values resulted in the same query showing up in the SQL Server Profiler.
I also tested by removing the Scope attribute entirely. When the Scope attribute was not included I saw the following query in the SQL Server Profiler:
```
SELECT NULL FROM [Product]
```
I am not sure why the WHERE clause was removed when the Scope attribute was excluded, but that is what test results showed.
These tests with the Scope attribute lead me to believe it is not related to the "SELECT NULL" problem I am seeing but I wanted to include it here for completeness.
Can anyone help, why is this creating a select with NULL and how do I get it to select my data?
Thanks! | I found an answer but would like a little help understanding why this works.
First, to get the data column to connect, I changed the Product.cs class to this:
```
using System.Web.Caching;
using DotNetNuke.ComponentModel.DataAnnotations;
namespace MyModule.Components
{
[TableName("Product")]
[PrimaryKey("productCode")]
[Cacheable("MYMODULE_Product_", CacheItemPriority.Default, 20)]
[Scope("productCode")] //tried different values here and nothing changed but excluding this caused more problems
public class Product
{
public string productCode { get; set; }
}
}
```
The difference here is the addition of the get and set accessors. This is confusing to me as I thought the previously used line
```
public string productCode;
```
would be functionally equivalent to
```
public string productCode { get; set; }
```
in this context. I thought 'get' and 'set' were not needed unless access to the variable was going to be limited to one of those operations, or if more extensive processing needed to happen while setting or getting the protected value.
Am I missing something in my understanding of get and set or am I understanding this correctly? Are the get and set only required here to be compatible with DAL2? | RacerNerd,
Needing the proper accessors:
```
public string productCode { get; set; }
```
vs
```
public string productCode;
```
I missed that! Good catch.
I believe the reason for the explicit getters and setters is not only to follow the convention of Entity Framework POCOs, but also that when you explicitly use getters and setters, you are telling the CLR the class member is a property rather than a public variable. On the surface, they function the same. But if PetaPoco is using reflection to match class properties to fields, that may be why it is required. | Why are DAL2 queries missing field names and showing SELECT NULL? | [
"",
"sql",
"data-access-layer",
"dotnetnuke-module",
"irepository",
"dotnetnuke-7",
""
] |
I have a table `orders` with columns `orderid` and `OrderDate`.
I want to get data in batches of 5 years, like total orders between 1990-1995, 1995-2000, 2000-2005 etc in SQL Server.
For example :
```
Period Orders
1990-1995 4500
1995-2000 7000
2000-2005 9000
```
Thanks in advance | I think this is a clear syntax, using one subquery:
```
SELECT CAST(T.YearDivision * 5 AS nchar(4)) + N' - ' + CAST(T.YearDivision * 5 + 4 AS nchar(4)) AS YearRange, SUM(T.TotalCount) AS TotalOrders
FROM
(
SELECT YEAR(Date) / 5 AS YearDivision, Date, COUNT(*) AS TotalCount
FROM Orders
GROUP BY Date
) T
GROUP BY YearDivision
```
I think you are overlapping the year ranges, repeating a year at the end of one range and the start of the next. I remove that overlap here.
These are the results for this query on a sample table from one of my databases:
```
YearRange TotalOrders
----------- -----------
2005 - 2009 2680
2010 - 2014 1395
```
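The integer-division trick behind this can be checked in SQLite, where SQL Server's `YEAR(Date)` becomes `strftime('%Y', ...)`; the dates below are made up:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (orderid INTEGER, OrderDate TEXT)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [(1, "1991-03-01"), (2, "1994-07-12"),
                 (3, "1996-01-30"), (4, "2001-11-05")])

# Integer division by 5 puts every 5-year span into one bucket;
# multiplying back by 5 recovers the bucket's starting year.
rows = con.execute("""
    SELECT (CAST(strftime('%Y', OrderDate) AS INTEGER) / 5) * 5 AS period_start,
           COUNT(*) AS orders
    FROM orders
    GROUP BY period_start
    ORDER BY period_start
""").fetchall()
print(rows)  # [(1990, 2), (1995, 1), (2000, 1)]
```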
Hope this helps | You can group by the YEAR function - this will do grouping by each year, like:
```
select YEAR(d), sum(...)
from table
group by YEAR(d)
```
if you want to combine several years into one - consider simple math:
```
select FLOOR(YEAR(d)/5)*5, sum(...)
from table
group by FLOOR(YEAR(d)/5)
```
with this grouping you will have [1990-1994] [1995-1999] [2000-2004] ([ and ] means inclusive)
if you want to change it to have [1991-1995] [1996-2000] etc, use:
```
select FLOOR((YEAR(d)+1)/5)*5-1, sum(...)
from table
group by FLOOR((YEAR(d)+1)/5)
``` | Get data between each 5 year in SQL Server (like 1990-1995, 1995-2000, 2000-2005) | [
"",
"sql",
"sql-server",
""
] |
I'm trying to return the following three columns: Song ID, Song name, Times Played.
This represents a list of requested and played songs.
Assumptions:
1. Table SONG has SONG\_ID and SONG\_NAME, one row for each song
2. Table SONG\_REQUEST only has SONG\_ID for each time the song is requested
3. Table PLAY has SONG\_ID for each time the song is played
This is my query:
```
SELECT r.SONG_ID, s.SONG_NAME, COUNT(p.SONG_ID) AS TimesPlayed
FROM PLAY p
INNER JOIN SONG s ON p.SONG_ID = s.SONG_ID
LEFT JOIN SONG_REQUEST r ON p.SONG_ID = r.SONG_ID
GROUP BY p.SONG_ID, s.SONG_NAME, TimesPlayed
```
The error thrown is:
```
ORA-00904: "TIMESPLAYED": invalid identifier
00904. 00000 - "%s: invalid identifier"
*Cause:
*Action:
Error at Line: 5 Column: 33
``` | Something like this should work:
```
SELECT r.SONG_ID, s.SONG_NAME, COUNT(p.SONG_ID) AS TimesPlayed
FROM PLAY p
INNER JOIN SONG s ON p.SONG_ID = s.SONG_ID
LEFT JOIN SONG_REQUEST r ON p.SONG_ID = r.SONG_ID
GROUP BY r.SONG_ID, s.SONG_NAME;
```
Note that in the `group by` clause I've referenced the *exact same* columns as in the `select` clause (changed alias `p` to `r`). | Oracle doesn't allow you to use aliases from the `SELECT` clause in the `GROUP BY` clause (you'd need to write `COUNT(p.SONG_ID)` again), but it doesn't make much sense to group by `TimesPlayed` anyway. You probably want to remove it entirely.
Also, you select these columns:
```
r.SONG_ID, s.SONG_NAME
```
... but attempt to group by
```
p.SONG_ID, s.SONG_NAME
```
Is it a typo? | Oracle throwing "invalid identifier" on COUNT() alias within GROUP BY | [
"",
"sql",
"oracle",
""
] |
I have a table for storing coupons/discounts, and I want to use the coupon\_code column as the primary key, which is a `VARCHAR`.
My rationale is that, each coupon will have a unique code, and the only commands I will be running are `SELECT ... FROM ... WHERE coupon_code='..'`
I won't be doing any joins or indexing, and I don't see there ever being more than a few hundred entries in this table.
It seems to me that this will be OK, but I don't know if there is anything I'm missing/not thinking about. | Of course you can, in the sense that your RDBMS will let you do it. The answer to a question of whether or not you *should* do it is different, though: in most situations, values that have a meaning outside your database system should *not* be chosen to be a primary key.
If you know that the value is unique in the system that you are modeling, it is appropriate to add a unique index or a unique constraint to your table. However, your primary key should generally be some "meaningless" value, such as an auto-incremented number or a GUID.
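That layout — a "meaningless" surrogate key plus a unique constraint on the code — can be sketched like this (SQLite syntax; the coupon data is illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE coupons (
        id          INTEGER PRIMARY KEY,   -- "meaningless" surrogate key
        coupon_code TEXT NOT NULL UNIQUE,  -- business value, still enforced unique
        discount    INTEGER NOT NULL
    )
""")
con.execute("INSERT INTO coupons (coupon_code, discount) VALUES ('SAVE10', 10)")

# Lookups by code work exactly as before ...
discount = con.execute(
    "SELECT discount FROM coupons WHERE coupon_code = 'SAVE10'").fetchone()[0]

# ... and a duplicate code is rejected by the unique constraint
dup_rejected = False
try:
    con.execute("INSERT INTO coupons (coupon_code, discount) VALUES ('SAVE10', 20)")
except sqlite3.IntegrityError:
    dup_rejected = True
print(discount, dup_rejected)  # 10 True
```

If the code ever has to be corrected, only the `coupon_code` value changes; the `id` that other tables would reference stays put.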
The rationale for this is simple: data entry errors and infrequent changes to things that appear non-changeable do happen. They become much harder to fix on values which are used as primary keys. | A blanket "no you shouldn't" is terrible advice. This is perfectly reasonable in many situations depending on your use case, workload, data entropy, hardware, etc.. What you *shouldn't* do is make assumptions.
It should be noted that you can specify a prefix which will limit MySQL's indexing, thereby giving you some help in narrowing down the results before scanning the rest. This may, however, become less useful over time as your prefix "fills up" and becomes less unique.
It's very simple to do, e.g.:
```
CREATE TABLE IF NOT EXISTS `foo` (
`id` varchar(128),
PRIMARY KEY (`id`(4))
)
```
Also note that the prefix `(4)` appears *after* the column quotes; the `4` means the index uses only the first 4 characters of the up-to-128-character `id`.
Lastly, you should read how index prefixes work and their limitations before using them: <https://dev.mysql.com/doc/refman/8.0/en/create-index.html> | Can I use VARCHAR as the PRIMARY KEY? | [
"",
"mysql",
"sql",
"primary-key",
"unique",
"varchar",
""
] |
I have a table with two columns, name and numb. The first column contains a name and the second contains a number for that name.
What I'm trying to do is output 3 columns, name, number, name, where the last name column is a name that contains the same number as the first name column.
So the output would probably look like this:
```
| NAME | NUMB | NAME |
|------|------|------|
| john | 1 | lucy |
| stan | 2 | paul |
| jane | 3 | bill |
```
etc.
I have an sqlfiddle link here of the table I'm using: <http://sqlfiddle.com/#!2/ed72b/1>
I'm doing this in MySQL. Thank you. | You need to make a `JOIN` on the same table, like this:
```
SELECT t1.name, t1.numb, t2.name AS name2
FROM test t1
JOIN test t2 ON t1.numb = t2.numb AND t1.name <> t2.name
```
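Run against the fiddle's data, the self-join pairs each name with the other name sharing its number:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE test (name TEXT, numb INTEGER)")
con.executemany("INSERT INTO test VALUES (?, ?)",
                [("john", 1), ("stan", 2), ("jane", 3),
                 ("lucy", 1), ("paul", 2), ("bill", 3)])

rows = con.execute("""
    SELECT t1.name, t1.numb, t2.name AS name2
    FROM test t1
    JOIN test t2 ON t1.numb = t2.numb AND t1.name <> t2.name
    ORDER BY t1.numb, t1.name
""").fetchall()
print(rows[0])  # ('john', 1, 'lucy')
# Every pair appears in both directions (john/lucy and lucy/john);
# add a tiebreak such as t1.name < t2.name to keep only one direction.
```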
See it [live](http://sqlfiddle.com/#!2/ed72b/20)! | You could try something like
```
SELECT *,
(SELECT name FROM test tt WHERE tt.numb = t.numb AND tt.name <> t.name limit 1) nname
from test t;
```
## [SQL Fiddle DEMO](http://sqlfiddle.com/#!2/ed72b/21)
The problem with your idea can be seen when looking at the second set of rows:
```
| NAME | NUMB | NAME |
|------|------|------|
| paul | 1 | stan |
| bill | 2 | jane |
| lucy | 3 | john |
```
which is the reverse of the first rows. How do you determine to use the first 3, and not the last 3? | Comparing a column using the data of one column in the same table SQL | [
"",
"mysql",
"sql",
"compare",
""
] |
I am fairly new to SQL, so apologies if this question sounds odd.
I keep coming across the issue of poor data. As an example London could be stored as LON, London UK, London England etc. Prior to using SQL I had lots of Excel lookup tables where in the first column I would have the original, and in the 2nd helper column I would have the corrected version. As an example:
```
Name Name_1
London, UK London
Lon London
LON London
London London
London, England London
LND London
```
Is there a straightforward way of doing this in SQL? I am currently trying to create lookup tables and then use joins. This is becoming tricky as I do not always have corrections for every instance, so in most scenarios my lookup tables have fewer items than the tables I'm joining them to.
I've been teaching myself about stored procedures and I wondered if this could solve the problem. Trouble is my searching on the subject of lookup tables is coming up empty.
Any advice or pointers would be gratefully received, even if it is just to say that this cannot be done.
Thanks as always for your help, and apologies for the long post. | You don't have to do any of the other things, just return the original if you don't have a translation for it.
```
SELECT
t1.FirstName,
t1.LookupField,
case when t2.Name_1 is null
then t1.lookupfield
else t2.name_1 end Name_1
FROM People as t1
LEFT JOIN TableLookupCities as t2
ON t1.LookupField = t2.Name
``` | You can join to the lookup table and preferably use the value given there. If not found, use the original:
```
SELECT t1.FirstName, LookupField = ISNULL(t2.Name_1, t1.LookupField)
FROM People as t1
LEFT JOIN TableLookupCities as t2 ON t1.LookupField = t2.Name
```
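A quick check of this fallback behaviour — SQLite used here, where SQL Server's `ISNULL(x, y)` becomes `IFNULL(x, y)`; the data is illustrative:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE People (FirstName TEXT, LookupField TEXT)")
con.execute("CREATE TABLE TableLookupCities (Name TEXT, Name_1 TEXT)")
con.executemany("INSERT INTO People VALUES (?, ?)",
                [("Ann", "LON"), ("Bob", "Paris")])  # 'Paris' has no correction
con.execute("INSERT INTO TableLookupCities VALUES ('LON', 'London')")

# LEFT JOIN keeps unmatched rows; IFNULL falls back to the original value
rows = con.execute("""
    SELECT t1.FirstName, IFNULL(t2.Name_1, t1.LookupField) AS City
    FROM People t1
    LEFT JOIN TableLookupCities t2 ON t1.LookupField = t2.Name
    ORDER BY t1.FirstName
""").fetchall()
print(rows)  # [('Ann', 'London'), ('Bob', 'Paris')]
```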
Make sure that for each name there is at most one match in `TableLookupCities`, or else the join will produce multiple results. Create a **unique index** on `TableLookupCities.Name`:
```
CREATE UNIQUE (CLUSTERED) INDEX djgndkg ON TableLookupCities (Name) INCLUDE (Name_1)
``` | Best Practice For Lookup Tables In SQL | [
"",
"sql",
"sql-server-2008-r2",
""
] |
I have a table named `wp_bp_activity` with many columns; I think I need to work with 4 of them for this issue: `id`, `type`, `item_id` and `date_recorded`.
**1 -** When someone post a new activity the `type` is `activity_update` and `item_id` is `0`
**2 -** When someone post a new comment on an activity its `type` is `activity_comment` and `item_id` is existed activity id in column `id`
**3 -** In both, `date_recorded` is the date when the data was inserted.
*There are also more `type`s in the table.*
But I want to fetch only rows with a `type` of `activity_update` that someone has recently replied to **or** that are new (based on `date_recorded`, I think)
Which I tried is :
```
SELECT a.*, b.*
FROM wp_bp_activity as a, wp_bp_activity as b
WHERE
((b.type = 'activity_update') OR (b.type = 'activity_comment' AND b.item_id = a.id))
order by cast(a.date_recorded as datetime) desc
limit 0,20
```
That takes too long to execute and ends with an insufficient-memory error.
Any help on this kind of query is appreciated.
***Update #1***
```
wp_bp_activity table
id type item_id date_recorded
|---------|-----------------------|-----------|--------------------------
| 12081 | activity_comment | 12079 | 2013-10-18 07:27:01
|---------|-----------------------|-----------|--------------------------
| 12080 | activity_update | 0 | 2013-10-18 07:26:40
|---------|-----------------------|-----------|--------------------------
| 12079 | activity_update | 0 | 2013-10-17 05:15:43
```
**Which rows I want from this table are these ids at this order:**
```
12079
12080
```
**What I don't want to get:**
`activity_comment` type
**In which order should the rows be fetched?**
As you can see, the row with the `activity_comment` type has an `item_id` of `12079`, so `12079` is the latest activity that someone recently commented on, and `12080` has no comments but was just posted. I want both, in this order:
```
12079
12080
``` | I am going to assume that you are looking for "recent entries" (`WHERE type = 'activity_update' AND date_recorded > [threshold]`) and "entries having a recent reply, regardless of the entry's age" (`WHERE reply.type = 'activity_comment' AND reply.date_recorded > [threshold]`).
The first set is straightforward:
```
SELECT entries.*
FROM activity AS entries
WHERE type = 'activity_update' AND date_recorded > [threshold]
```
The second set is a bit less obvious:
```
SELECT entries.*
FROM activity AS entries
JOIN activity AS replies
ON replies.item_id = entries.id
AND replies.type = 'activity_comment'
WHERE
entries.type = 'activity_update'
AND replies.date_recorded > [threshold]
```
Putting it all together:
```
SELECT entries.*
FROM activity AS entries
LEFT JOIN activity AS replies -- LEFT JOIN, because an INNER JOIN would filter out entries without any reply
ON replies.item_id = entries.id
AND replies.type = 'activity_comment'
WHERE
entries.type = 'activity_update'
AND (
entries.date_recorded > [threshold]
OR replies.date_recorded > [threshold] -- evaluates as FALSE if replies.date_recorded is NULL
)
ORDER BY IFNULL(replies.date_recorded, entries.date_recorded) -- replies if not null, entries otherwise
```
*I am not proud of my poorly performing `ORDER BY` clause, I hope someone can suggest a better idea* | Maybe I'm missing something huge but what's wrong with just?:
```
SELECT *
FROM `wp_bp_activity`
WHERE `type`='activity_update'
ORDER BY `date_recorded` ASC
LIMIT 0, 20;
``` | Idea for writing a mysql query | [
"",
"mysql",
"sql",
"wordpress",
""
] |
I have a SQL Server 2005 query where I am using a case statement in the Select statement. I want to know what the "correct" way of doing it is.
Each format seems to work perfectly fine, but I was wondering if there is some redundancy between the different selects or something.
Sample formats:
```
SELECT 'isItemDisplayed' = CASE WHEN ... THEN ... END FROM myTable
SELECT isItemDisplayed = CASE WHEN ... THEN ... END FROM myTable
SELECT CASE WHEN ... THEN ... END AS isItemDisplayed FROM myTable
SELECT CASE WHEN ... THEN ... END AS 'isItemDisplayed' FROM myTable
SELECT CASE WHEN ... THEN ... END isItemDisplayed FROM myTable
``` | This really has nothing to do with the CASE statement. You can create a column alias for any expression in the select list, including simple column names (ie: to re-name to match client code), literal expressions, function calls, etc. But continuing with your CASE example, there are some things to know.
I almost never see it this way, but the ANSI Sql standard says to use double quotes for the name, like so:
```
SELECT CASE WHEN ... THEN ... END "isItemDisplayed" FROM myTable
```
What I do commonly see are the 3rd or 5th options from your original question, shown below. Either is fine:
```
SELECT CASE WHEN ... THEN ... END As isItemDisplayed FROM myTable
SELECT CASE WHEN ... THEN ... END isItemDisplayed FROM myTable
```
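Whichever spelling you pick, the alias is what the client driver reports as the column name — easy to confirm from, say, Python's `sqlite3` (used here only because it is handy):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.execute(
    "SELECT CASE WHEN 1 = 1 THEN 'yes' ELSE 'no' END AS isItemDisplayed")
col_name = cur.description[0][0]  # the column name the alias produced
value = cur.fetchone()[0]
print(col_name, value)  # isItemDisplayed yes
```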
Either of these also allows you to enclose the names in square brackets (`[]`) if you want a name that conflicts with a reserved word or uses spaces. I would avoid any that use the `=` syntax, as it could confuse someone into thinking you're looking for a boolean result. The other bit of advice I can give here is to pick one style and stick with it within a given environment. *Consistency!* | Referencing the example at the bottom of [this](http://technet.microsoft.com/en-us/library/ms188348.aspx) page, I would say:
```
SELECT CASE WHEN ... THEN ... END AS isItemDisplayed FROM myTable
```
because it explicitly defines `isItemDisplayed` as the name.
I personally would prefer:
```
SELECT CASE WHEN ... THEN ... END AS [isItemDisplayed] FROM myTable
```
because it covers reserved words (even though you should never name anything the same as a reserved word) and you can include spaces.
According to [this](https://sqlblog.org/2012/01/23/bad-habits-to-kick-using-as-instead-of-for-column-aliases) blog, not using any explicit identifier (`=` or `AS`) is poor practice.
There is no advantage to using single quotation marks around an alias, besides the SSMS showing the name as a different color, maybe making it stick out? | SQL Case Statement Proper Aliasing | [
"",
"sql",
"sql-server-2005",
""
] |
I have 2 tables (`AllClients` & `AllActivities`) and need to retrieve the following information:
I need a list of distinct clients where the most recent activity has been entered in the last year. I've gotten the following code to work, but it is painfully slow and therefore not useful. I believe a join (without the subquery) will be faster, but I just can't figure it out. Here is my current sql statement:
```
select distinct(AllClients.LookupCode)
from AllClients
join (select LookupCode,
max(AllActivities.EnteredDate) as EnteredDate
from AllActivities
group by LookupCode) AllActivities
on AllClients.LookupCode = AllActivities.LookupCode
where AllClients.Name = '$userName'
and AllClients.TypeCode = 'P' and AllActivities.EnteredDate < '$oneYearAgo'";
``` | try this:
```
select AllClients.LookupCode
from AllClients
join AllActivities on AllClients.LookupCode = AllActivities.LookupCode
where AllClients.Name = '$userName' and AllClients.TypeCode = 'P'
group by AllClients.LookupCode
having max(AllActivities.EnteredDate) < '$oneYearAgo';
``` | do you mean something like this?
```
SELECT AllClients.LookupCode
FROM AllClients
JOIN AllActivities
ON AllClients.LookupCode = AllActivities.LookupCode
WHERE AllClients.Name = '$userName'
AND AllClients.TypeCode = 'P'
GROUP BY AllClients.LookupCode
HAVING MAX(AllActivities.EnteredDate) < '$oneYearAgo'";
``` | Need help improving SQL performance (subquery versus join) | [
"",
"mysql",
"sql",
""
] |
I have data that looks as follows:

I would like to pivot this data so that there is only a single row for the SubId. The columns would be SubId, 802Lineage, 802ReadTime, 1000Lineage, 1000ReadTime etc.
If it wasn't for the requirement to have the Lineage included, this would be pretty straightforward, as follows:
```
Select SubId, [800] as [800Time], [1190] as [1190Time], [1605] as [1605Time]
From
(Select SubId, ProcessNumber, ReadTime From MyTable) as SourceTable
PIVOT
(
Max(ReadTime)
For ProcessNumber IN ([800],[802],[1190],[1605])
) as PivotTable;
```
I'm just not sure how to do this with the Lineage included. This is for SQL Server 2012. | You can pivot manually:
```
select
SubId,
max(case when ProcessNumber = 802 then ReadTime end) as [802Time],
max(case when ProcessNumber = 802 then Lineage end) as [802Lineage],
....
from SourceTable
group by SubId
``` | You can use the PIVOT function to get the result but you will have to unpivot the `Lineage` and `ReadTime` columns from the multiple columns into multiple rows.
Since you are using SQL Server 2012 you can unpivot the data using CROSS APPLY with VALUES:
```
select subid,
colname = cast(processNumber as varchar(10)) + colname,
value
from mytable
cross apply
(
values
('Lineage', Lineage),
('ReadTime', convert(varchar(20), readtime, 120))
) c (colname, value)
```
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/595af/8). This will convert your current data into the format:
```
| SUBID | COLNAME | VALUE |
|-------------|--------------|---------------------|
| 12010231146 | 802Lineage | PBG12A |
| 12010231146 | 802ReadTime | 2012-01-02 21:44:00 |
| 12010231146 | 1000Lineage | PBG12A |
| 12010231146 | 1000ReadTime | 2012-01-02 21:43:00 |
| 12010231146 | 1190Lineage | PBG11B |
| 12010231146 | 1190ReadTime | 2012-01-03 14:36:00 |
```
Once the data is in this format, then you can easily apply the PIVOT function to get your final result:
```
select *
from
(
select subid,
colname = cast(processNumber as varchar(10)) + colname,
value
from mytable
cross apply
(
values
('Lineage', Lineage),
('ReadTime', convert(varchar(20), readtime, 120))
) c (colname, value)
) d
pivot
(
max(value)
for colname in ([802Lineage], [802ReadTime],
[1000Lineage], [1000ReadTime],
[1190Lineage], [1190ReadTime])
) piv;
```
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/595af/2).
The above works great if you have a limited number of rows that you want to convert, but if you have an unknown number then you can use dynamic SQL:
```
DECLARE @cols AS NVARCHAR(MAX),
@query AS NVARCHAR(MAX)
select @cols = STUFF((SELECT ',' + QUOTENAME(cast(processnumber as varchar(10))+col)
from mytable
cross apply
(
select 'Lineage', 0 union all
select 'ReadTime', 1
) c (col, so)
group by processnumber, col, so
order by processnumber, so
FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)')
,1,1,'')
set @query = 'SELECT subid, ' + @cols + '
from
(
select subid,
colname = cast(processNumber as varchar(10)) + colname,
value
from mytable
cross apply
(
values
(''Lineage'', Lineage),
(''ReadTime'', convert(varchar(20), readtime, 120))
) c (colname, value)
) x
pivot
(
max(value)
for colname in (' + @cols + ')
) p '
execute sp_executesql @query;
```
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/595af/7). This gives a result:
```
| SUBID | 802LINEAGE | 802READTIME | 1000LINEAGE | 1000READTIME | 1190LINEAGE | 1190READTIME | 1605LINEAGE | 1605READTIME | 1745LINEAGE | 1745READTIME | 1790LINEAGE | 1790READTIME | 1990LINEAGE | 1990READTIME | 2690LINEAGE | 2690READTIME | 2795LINEAGE | 2795READTIME | 2990LINEAGE | 2990READTIME | 3090LINEAGE | 3090READTIME | 3290LINEAGE | 3290READTIME |
|-------------|------------|---------------------|-------------|---------------------|-------------|---------------------|-------------|---------------------|-------------|---------------------|-------------|---------------------|-------------|---------------------|-------------|---------------------|-------------|---------------------|-------------|---------------------|-------------|---------------------|-------------|---------------------|
| 12010231146 | PBG12A | 2012-01-02 21:44:00 | PBG12A | 2012-01-02 21:43:00 | PBG11B | 2012-01-03 14:36:00 | PBG11B | 2012-01-03 15:15:00 | PBG11A | 2012-01-03 15:16:00 | PBG11A | 2012-01-03 15:19:00 | PBG11A | 2012-01-03 15:23:00 | PBG11A | 2012-01-03 15:32:00 | PBG11A | 2012-01-03 15:39:00 | PBG11A | 2012-01-03 15:41:00 | PBG11A | 2012-01-03 15:46:00 | PBG11A | 2012-01-03 15:47:00 |
``` | Is it possible to Pivot data like this? | [
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2012",
"pivot",
""
] |
I am using `Order by` for records sorting. I want to select NULL records at last position in every sorting.
Eg.
```
ASC - {1,2,3,4,5,6,7,NULL,NULL,NULL}
DESC - {7,6,5,4,3,2,1,NULL,NULL,NULL}
```
How can I do this in an Oracle query? | Your `ORDER BY` clause should be as follows.
Descending:
```
order by <field_name> desc nulls last;
```
Ascending:
```
order by <field_name> asc nulls last;
``` | Basic MySql Query and Yii2 based Code:
```
Basic Query in MySql:
SELECT *, IF(cloumn_name IS NULL, 1, 0) as cloumn_name_alias FROM `table_name` WHERE 1 ORDER BY cloumn_name_alias, cloumn_name ASC;
For Yii2 Framework:
$dataProvider = new ActiveDataProvider([
'query' => $query,
'sort' => [
'defaultOrder' => [
'position_alias' => SORT_ASC,
'position' => SORT_ASC,
],
'attributes' => [
'position_alias',
'position',
]
]
]);
``` | Oracle - Order by column with NULL values in last position | [
"",
"sql",
"oracle",
"oracle11g",
""
] |
My table looks something like this:
```
group date cash checks
1 1/1/2013 0 0
2 1/1/2013 0 800
1 1/3/2013 0 700
3 1/1/2013 0 600
1 1/2/2013 0 400
3 1/5/2013 0 200
```
-- Do not need cash just demonstrating that table has more information in it
I want to get the each unique group where date is max and checks is greater than 0. So the return would look something like:
```
group date checks
2 1/1/2013 800
1 1/3/2013 700
3 1/5/2013 200
```
attempted code:
```
SELECT group,MAX(date),checks
FROM table
WHERE checks>0
GROUP BY group
ORDER BY group DESC
```
problem with that though is it gives me all the dates and checks rather than just the max date row.
using ms sql server 2005 | ```
SELECT group,MAX(date) as max_date
FROM table
WHERE checks>0
GROUP BY group
```
That works to get the max date. Join it back to your data to get the other columns:
```
Select group,max_date,checks
from table t
inner join
(SELECT group,MAX(date) as max_date
FROM table
WHERE checks>0
GROUP BY group)a
on a.group = t.group and a.max_date = date
```
Inner join functions as the filter to get the max record only.
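As a quick sanity check of the join-back pattern, here is the same idea run against the sample data through SQLite in Python (column names changed since `group`, `date` and `table` are reserved words, and dates stored as ISO strings so `MAX` compares correctly; this is purely illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (grp INTEGER, dt TEXT, cash INTEGER, checks INTEGER);
INSERT INTO t VALUES
 (1,'2013-01-01',0,0), (2,'2013-01-01',0,800), (1,'2013-01-03',0,700),
 (3,'2013-01-01',0,600), (1,'2013-01-02',0,400), (3,'2013-01-05',0,200);
""")

# Max date per group (checks > 0), joined back to pick up the other columns.
rows = conn.execute("""
    SELECT t.grp, t.dt, t.checks
    FROM t
    INNER JOIN (SELECT grp, MAX(dt) AS max_dt
                FROM t WHERE checks > 0
                GROUP BY grp) a
      ON a.grp = t.grp AND a.max_dt = t.dt
    ORDER BY t.checks DESC
""").fetchall()
print(rows)  # [(2, '2013-01-01', 800), (1, '2013-01-03', 700), (3, '2013-01-05', 200)]
```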
FYI, your column names are horrid, don't use reserved words for columns (group, date, table). | You can use a [window](http://msdn.microsoft.com/en-us/library/ms189461%28v=sql.90%29.aspx "OVER Clause (Transact-SQL)") MAX() like this:
```
SELECT
*,
max_date = MAX([date]) OVER (PARTITION BY [group])
FROM [table]
```
to get max dates per `group` alongside other data:
```
group date cash checks max_date
----- -------- ---- ------ --------
1 1/1/2013 0 0 1/3/2013
2 1/1/2013 0 800 1/1/2013
1 1/3/2013 0 700 1/3/2013
3 1/1/2013 0 600 1/5/2013
1 1/2/2013 0 400 1/3/2013
3 1/5/2013 0 200 1/5/2013
```
Using the above output as a derived table, you can then get only rows where `date` matches `max_date`:
```
SELECT
[group],
[date],
checks
FROM (
SELECT
*,
max_date = MAX([date]) OVER (PARTITION BY [group])
FROM [table]
) AS s
WHERE [date] = max_date
;
```
to get the desired result.
Basically, this is similar to [@Twelfth's suggestion](https://stackoverflow.com/a/19433107/297408) but avoids a join and may thus be more efficient.
You can try the method [at SQL Fiddle](http://sqlfiddle.com/#!6/be3f0/1). | Select info from table where row has max date | [
"",
"sql",
"sql-server-2005",
"greatest-n-per-group",
""
] |
Here is what I have:
```
Con.close()
Con.open()
Query ="Update Products Set QOH = QOH - '" & txtQoH.text & "' Where Prod_ID ='"& textbox1.text & "'"
Command.ExecuteNonQuery()
Con.Close()
```
Okay, I'm trying to update a product's quantity on hand once a certain number of the product is purchased. I've tried that and it's not working; can somebody help me? | There are so many problems in your code. Let me list them.
* First, the field `QOH` is numeric, right? So don't try to subtract a
string from a number (drop the quotes around the textbox value)
* Second, you write a query, but there is no code that shows how you
set this text in your command
* Third, a connection should be opened when needed and closed/disposed immediately
afterwards (The [Using statement](http://msdn.microsoft.com/en-us/library/htd05whh.aspx) is fundamental for this)
* Fourth, how do you check if your user inputs numeric values instead
of bogus strings? A [TryParse](http://msdn.microsoft.com/en-us/library/system.int32.tryparse.aspx) will help here to avoid executing a
failed query
* Fifth, and this is the most important. DO NOT CONCATENATE strings to
build sql commands. [Use always a parameterized query](http://www.codinghorror.com/blog/2005/04/give-me-parameterized-sql-or-give-me-death.html)
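The same parameterised-query idea, sketched here in Python's `sqlite3` purely for illustration (table and column names mirror the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Products (Prod_ID INTEGER PRIMARY KEY, QOH INTEGER)")
conn.execute("INSERT INTO Products VALUES (1, 50)")

qty, prod_id = 8, 1  # values that would come from the (already validated) text boxes

# Placeholders keep user input out of the SQL text entirely.
conn.execute("UPDATE Products SET QOH = QOH - ? WHERE Prod_ID = ?", (qty, prod_id))
qoh = conn.execute("SELECT QOH FROM Products WHERE Prod_ID = ?", (prod_id,)).fetchone()[0]
print(qoh)  # 42
```

In VB.NET the equivalent mechanism is `SqlParameter`s, as in the code below.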
```
Dim qty As Integer
If Not Int32.TryParse(txtQoH.Text, qty) Then
MessageBox.Show("Invalid numeric quantity")
Return
End If
Dim prodID As Integer
If Not Int32.TryParse(textbox1.Text, prodID) Then
MessageBox.Show("Invalid product ID")
Return
End If
Query ="Update Products Set QOH = QOH - @qty Where Prod_ID = @prodID"
Using con = new SqlConnection(.....constring here ....)
Using cmd = New SqlCommand(Query, con)
cmd.Parameters.AddWithValue("@qty", qty)
cmd.Parameters.AddWithValue("@prodID", prodID)
cmd.ExecuteNonQuery()
End Using
End Using
``` | Unless you've left out some code, I don't see where you're actually setting the [`CommandText`](http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlcommand.commandtext.aspx) of the `Command` object:
```
Query ="Update Products Set QOH = QOH - '" & txtQoH.text & "' Where Prod_ID ='"& textbox1.text & "'"
Command.CommandText = Query
Command.ExecuteNonQuery()
```
You should also use [parameters](http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlcommand.parameters.aspx) instead of string concatenation, but that's another issue... | SQL and VB.NET 2008 Query | [
"",
"sql",
"vb.net",
""
] |
I have a SQL Data Project that was created from the schema of my production database. It pulled in all the stored procedures into the project and I am trying to deploy the generated scripts to localdb for offline development.
The issue is that it appears the SQL Data Project is trying to validate the SQL script that it generated, but it gets stuck at the first instance where it encounters a string containing `$(`.
Example 1
`INSERT INTO mytable (text) VALUES ('$(')`
This results in an error
> Incorrect Syntax was encountered while $(' was being parsed.
Example 2
`INSERT INTO mytable (text) VALUES ('$(this')`
This results in an error
> Incorrect Syntax was encountered while $(this' was being parsed.
It seems like a bug in SQL Data Projects' parse validation, which is preventing the deployment from succeeding even though the scripts execute fine in SQL Server Management Studio.
**UPDATE**
I tried some of the ideas and it seems the issue is specifically when SQL Data Project is parsing the string and encounters `$(`. In addition, everything that follows the $( in the string gets caught by the error up until the next instance of a single quote (Be it an escaping single quote or the end of the string single quote) | It seems that the script that SQL Data Tools generates for deployment of the database uses `$()` as a way of doing variable replacement. To correct the issue I had to replace the contents of the string with
`CHAR(36) + '()'`
to represent
`$()`
This way the parser wouldn't try to see it as a variable. | You may try it like this:
```
INSERT INTO mytable (text) VALUES ('''I''''''m Sure this broke things''')
```
If you are inserting from SQL you may also try this:-
```
DECLARE @Value varchar(50)
SET @Value = 'I''m Sure this broke things'
INSERT INTO Table1 (Column1) VALUES (@Value)
```
If you are using ADO.NET then try like this:
```
using (SqlConnection conn = new SqlConnection(connectionString)) {
conn.Open();
using (SqlCommand command = conn.CreateCommand()) {
command.CommandText = "INSERT INTO Table1 (text) VALUES (@Value)";
command.Parameters.AddWithValue("@Value", "'I''m Sure this broke things'");
command.ExecuteNonQuery();
}
}
``` | SQL Data Project: Incorrect Syntax '$(' | [
"",
"sql",
"visual-studio-2012",
"escaping",
"localdb",
""
] |
Say I have two tables, `patients` and `rooms`, and `patients` is
```
CREATE TABLE patient (
id int,
room int,
FOREIGN KEY (room) REFERENCES room (id)
);
```
and `room` is
```
CREATE TABLE rooms (
id int
);
```
I'd like to create a view of `rooms` that includes how many patients are in that room.
I can calculate the number of patients in the room with
```
select count(1) from patients where room = N;
```
for any existing room `N`.
How would I write the `SELECT` statement I need?
---
My best shot at a solution:
```
select *,
count(1) as patients_in_room
from patients
where patients.room = rooms.id
from rooms;
``` | This will bring all the rooms, with `patientsCount = 0` when they are empty.
```
SELECT r.id roomId, coalesce(count(p.room),0) patientsCount
FROM room r left join patients p on r.id = p.room
GROUP BY r.id
``` | ```
SELECT room, COUNT(*) patients_in_room
FROM patient
GROUP BY room
``` | Calculate a column on select in MySQL | [
"",
"mysql",
"sql",
"database",
""
] |
I have the following function, which is supposed to check that the number is 4 digits.
```
function f_checkNum(
@pnum integer
) returns integer
begin
return case
when @pnum like '[0-9][0-9][0-9][0-9]' then 1
else 0
end;
end
```
This works fine if numbers entered are under 4 digits but if they are over 4 it gives error
```
Msg 8115, Level 16, State 2, Line 1
Arithmetic overflow error converting expression to data type int.
```
Please let me know how to fix it. Thanks | ```
Return CASE WHEN @pnum between 1000 and 9999 Then 1 Else 0 End
```
If you need to include negative numbers then
```
Return CASE WHEN ABS(@pnum) between 1000 and 9999 Then 1 Else 0 End
``` | Try something like this:
```
return case
when LEN(cast(@pnum as varchar)) = 4 then 1
else 0
end;
``` | Checking for number of digits | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have a stored procedure that inserts data into a table. One column in the table is datetime and is used for storing the timestamp of the row insert:
```
INSERT INTO myTable (Field1, Field2, Field3) VALUES (1, 2, GETUTCDATE());
```
`Field3` is a datetime column. When I select data from that table with a simple `SELECT * FROM myTable` query, all datetime values are shown with `.000` for the milliseconds.
If I execute `SELECT GETUTCDATE()`, the milliseconds are displayed: `2013-10-16 18:02:55.793`
Why are milliseconds not stored/displayed in the datetime column on SELECT? | You have to be doing something somewhere to change this to smalldatetime or something, because it works fine. I just created a new table, inserted data like you showed, queried the table, and I have the milliseconds.
I have been unable to find anything where you can set the precision at the server level, so it must be in your code. | As [Steve](https://stackoverflow.com/users/2284256/steve) suggested, the issue was not related to the server. There is a trigger on this table, and that trigger rounds the milliseconds on insert. | Milliseconds from GETUTCDATE not stored in datetime field | [
"",
"sql",
"sql-server",
"datetime",
""
] |
I have two/three-state variables stored in DB-table fields, and I wonder what's the best way to toggle them (from 1 to 0 and vice versa) and return their new value.
Running two queries seems too much to me.
Is there another, better way?
Here is my query now (i haven't tested it yet):
```
UPDATE MyTable qrus
SET qrus.favorite=(CASE WHEN (qrus.favorite=0) THEN 1 ELSE 0 END)
WHERE <sth>;
```
I am using OracleDataClient.
I don't know if I could insert an additional select after the update but in the same CommandText. I will try this.
But isn't there a better way? | I don't think what you want is possible. If you desperately want your application to only make one call, you could use a stored function to update and then retrieve (or the other way around, retrieve and then update, doesn't matter). But that's still 2 statements, it's just that the application only makes 1 call. | You can use a sub-query to pull the inverted value and join back to the parent query. Then use a `RETURNING` clause.
```
UPDATE MyTable qrus
SET qrus.favorite = (
SELECT CASE WHEN b.favorite = 0 THEN 1 ELSE 0 END
FROM MyTable b WHERE qrus.PK_FIELDS = b.PK_FIELDS
)
WHERE qrus.PK_FIELDS = :pkFields
RETURNING qrus.favorite INTO :favorite;
```
Just confirmed this on `Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production` | Oracle : toggle field and return the new value | [
"",
".net",
"sql",
"oracle",
""
] |
This is what I've done.
```
create proc INITCAP(@string varchar(30))
as
begin
SET @string = UPPER(LEFT(@string,1)) + LOWER(RIGHT(@string, LEN(@string) -1))
end
declare @lastname varchar
set @lastname = exec INITCAP 'MILLER'
declare @firstname varchar
set @firstname = exec INITCAP 'StEvE'
UPDATE Employee SET firstname = @firstname, lastname = @lastname WHERE empID = 7934
```
I keep getting the errors:
> Msg 156, Level 15, State 1, Procedure INITCAP, Line 97
> Incorrect syntax near the keyword 'exec'.
> Msg 156, Level 15, State 1, Procedure INITCAP, Line 100
> Incorrect syntax near the keyword 'exec'.
What shall I do? I want the procedure `INITCAP` to work as it does in Oracle: to return a name like "Steve" or "Miller". | **Solution #1** (I wouldn't use this solution)
You could use OUTPUT parameters thus:
```
create proc INITCAP(@string varchar(30) OUTPUT)
as
begin
SET @string = UPPER(LEFT(@string,1)) + LOWER(SUBSTRING(@string, 2, 8000))
end
go
declare @lastname varchar(30)
set @lastname = 'MILLER'
exec INITCAP @lastname OUTPUT
declare @firstname varchar(30)
set @firstname = 'StEvE'
exec INITCAP @firstname OUTPUT
```
**Solution #2**: Instead, **I would choose** to create an inline function thus:
```
CREATE FUNCTION dbo.Capitalize1(@string varchar(30))
RETURNS TABLE
AS
RETURN
SELECT UPPER(LEFT(@string,1)) + LOWER(SUBSTRING(@string, 2, 8000)) AS Result;
```
Usage:
```
UPDATE e
SET firstname = cap.Result
FROM Employee e
CROSS APPLY dbo.Capitalize1(e.firstname) cap;
```
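To sanity-check the capitalisation expression itself, the same UPPER/LOWER/SUBSTRING logic can be exercised on any engine; a small SQLite-in-Python illustration (not tied to SQL Server):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

def initcap(s: str) -> str:
    # Same expression as the T-SQL above, in SQLite's SUBSTR/UPPER/LOWER dialect.
    return conn.execute(
        "SELECT UPPER(SUBSTR(?, 1, 1)) || LOWER(SUBSTR(?, 2))", (s, s)
    ).fetchone()[0]

print(initcap("MILLER"), initcap("StEvE"))  # Miller Steve
```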
**Solution #3**: Another option could be a scalar function `with schemabinding` option (for performance reasons):
```
CREATE FUNCTION dbo.Capitalize2(@string varchar(30))
RETURNS VARCHAR(30)
WITH SCHEMABINDING
AS
BEGIN
RETURN UPPER(LEFT(@string,1)) + LOWER(SUBSTRING(@string, 2, 8000));
END;
```
Usage:
```
UPDATE Employee
SET firstname = dbo.Capitalize2(firstname);
``` | Do you really need a stored proc for this??? I would do something like this - a UDF would do the job just fine, I think:
```
CREATE FUNCTION dbo.udf_SomeFunction (@String VARCHAR(30))
RETURNS VARCHAR(30)
AS
BEGIN
DECLARE @rtnString VARCHAR(30);
SET @rtnString = UPPER(LEFT(@string,1)) + LOWER(RIGHT(@string, LEN(@string) -1))
RETURN(@rtnString);
END;
```
You can call this function in your **SELECT** statement; having a proc doing the same job doesn't give you this flexibility.
**UPDATE**
```
UPDATE Employee
SET firstname = dbo.udf_SomeFunction (firstname)
, lastname = dbo.udf_SomeFunction (lastname)
WHERE empID = 7934
``` | SQL Server : return string procedure INITCAP | [
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
I have the following tables in my SQL database.
```
DIM_BENCHMARK:
Fund_sk | Num_Bench | Bench | Weight | Type_Return
1 2 XXX 0.9 TR
1 2 YYY 0.1 Net
2 3 XXX 0.45 TR
2 3 YYY 0.45 TR
2 3 ZZZ 0.10 Net
FACT_Returns:
Date | Bench | TR | Net
10/10 XXX 0.010 0.005
10/10 YYY 0.012 0.008
10/10 ZZZ 0.006 0.012
```
Desired Output of Stored Procedure:
```
FACT_Result:
Date | Fund_SK | Num_Bench | Bench_Returns
10/10 1 2 (eg. 0.9*TR of XXX) + (0.1*Net Return of YYY)
10/10 2 3 (eg. 0.45*TR of XXX) + (0.45*TR of YYY) + (0.10*Net of ZZZ)
```
The tables above show the format of my input data and my desired output. I am still reasonably new to SQL and this dynamic SQL query is past the depths of my knowledge.
I would like to multiply the figures in either FACT_Returns.TR or FACT_Returns.Net by DIM_Benchmark.Weight, depending on the specification in DIM_Benchmark.Type_Return. The values in DIM_Benchmark.Type_Return are the same as the column headers in FACT_Returns.
As always, any help greatly appreciated! | Your main problem is your fact table. It is not normalised properly - you should have one row for each value.
The `unpivot` in the below SQL normalises the data.
```
select
[date], fund_sk,Num_Bench, sum(weight * val)
from
(
select *
from fact_returns
unpivot (val for type in (tr, net)) u
) f
inner join dim_benchmark b
on f.bench = b.Bench
and f.type = b.Type_Return
group by [date], fund_sk, Num_Bench
``` | This should solve the issue and would also work in any DBMS:
```
SELECT fr.date, db.fund_sk, db.num_bench, sum(db.weight *
CASE db.type_return
WHEN 'TR' THEN fr.tr
WHEN 'Net' THEN fr.net
ELSE 0
END) Bench_Returns
FROM dim_benchmark db
JOIN fact_returns fr ON db.bench = fr.bench
GROUP BY fr.date, db.fund_sk, db.num_bench
```
Bear in mind I'm making sure only `TR` and `Net` values are considered for the `SUM`. If you only have `TR` and `Net` in that column then you can change the case to:
```
CASE db.type_return
WHEN 'TR' THEN fr.tr
ELSE fr.net
END
```
Which would run slightly faster.
Output:
```
| DATE | FUND_SK | NUM_BENCH | BENCH_RETURNS |
|-------|---------|-----------|---------------|
| 10/10 | 1 | 2 | 0.0098 |
| 10/10 | 2 | 3 | 0.0111 |
```
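Those two rows can be cross-checked quickly outside SQL Server; a rough SQLite-in-Python sketch of the same CASE-weighted aggregation (schema names lowered, `date` renamed to `dt`, purely illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_benchmark (fund_sk INT, num_bench INT, bench TEXT,
                            weight REAL, type_return TEXT);
INSERT INTO dim_benchmark VALUES
 (1,2,'XXX',0.9,'TR'), (1,2,'YYY',0.1,'Net'),
 (2,3,'XXX',0.45,'TR'), (2,3,'YYY',0.45,'TR'), (2,3,'ZZZ',0.10,'Net');
CREATE TABLE fact_returns (dt TEXT, bench TEXT, tr REAL, net REAL);
INSERT INTO fact_returns VALUES
 ('10/10','XXX',0.010,0.005), ('10/10','YYY',0.012,0.008), ('10/10','ZZZ',0.006,0.012);
""")

rows = conn.execute("""
    SELECT fr.dt, db.fund_sk, db.num_bench,
           ROUND(SUM(db.weight * CASE db.type_return
                                   WHEN 'TR' THEN fr.tr ELSE fr.net END), 4) AS bench_returns
    FROM dim_benchmark db
    JOIN fact_returns fr ON db.bench = fr.bench
    GROUP BY fr.dt, db.fund_sk, db.num_bench
    ORDER BY db.fund_sk
""").fetchall()
print(rows)  # [('10/10', 1, 2, 0.0098), ('10/10', 2, 3, 0.0111)]
```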
Fiddle [here](http://sqlfiddle.com/#!2/bb6eb/1). | Dynamic Column Selection SQL | [
"",
"sql",
"sql-server",
"ssis",
"dynamic-programming",
""
] |
In a Mysql database with every table based on InnoDB with Autocommit enabled, will queries with subqueries and/or joins be atomic?
Examples:
* `INSERT INTO users SELECT (x,y,z) FROM users, comments WHERE users.id = comments.user_id;` (joins)
* `UPDATE users, comments SET users.x = x1 WHERE users.age > 30;` (joins)
* `UPDATE users, comments SET users.x = x1, comments.y = y1 WHERE users.age > 30;` (joins)
* `UPDATE users, comments SET users.x = x1, comments.y = y1 WHERE users.id IN (SELECT id FROM users WHERE age > 30);` (subqueries) | I understand your question as "is each of those queries in itself an atomic operation?". Then the answer is "yes". The other two answers are right when they say that all your statements together are not atomic.
Atomicity in databases **only means** all or nothing. It **does not mean** correctness of data. Your statement succeeds or not. It has nothing to do with joins or subqueries. One statement is one statement, no matter if your database has to use a temporary table in memory or on disk or not.
Transactions just tell your database to treat multiple statements as one statement. When one of the statements fails, all of them are rolled back.
An important related topic here is the [isolation level](http://en.wikipedia.org/wiki/Isolation_%28database_systems%29). You might want to read up about those.
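The all-or-nothing behaviour is easy to observe for a single statement: if a multi-row UPDATE fails part-way through, the rows it already touched are rolled back too. A small illustration (SQLite is used here purely for portability; InnoDB shows the same statement-level atomicity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, age INTEGER CHECK (age < 100))")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, 30), (2, 95)])

try:
    # Doubling row 2 would violate the CHECK constraint -> the whole statement
    # fails, including the change it already made to row 1.
    conn.execute("UPDATE users SET age = age * 2")
except sqlite3.IntegrityError:
    pass

ages = conn.execute("SELECT age FROM users ORDER BY id").fetchall()
print(ages)  # [(30,), (95,)]
```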
**EDIT (to answer the comment):**
That's right. As long as it is a valid statement and no power failure or other reason for the query to fail occurs, it's being done. Atomicity in itself just guarantees that the statement(s) is/are done or not. It guarantees completeness and that data is not corrupt (because a write operation didn't finish, for example). **It does not guarantee you the correctness of data.** Given a query like `INSERT INTO foo SELECT MAX(id) + 1 FROM bar;` you have to set the correct **isolation level** to make sure that you don't get phantom reads or anything. | No. Unless you wrap them in `START TRANSACTION` like this
```
START TRANSACTION;
SELECT @A:=SUM(salary) FROM table1 WHERE type=1;
UPDATE table2 SET summary=@A WHERE type=1;
COMMIT;
```
Example from [Mysql manual](http://dev.mysql.com/doc/refman/5.5/en/commit.html) | Is join insert/update on MySQL an atomic operation? | [
"",
"mysql",
"sql",
"concurrency",
"locking",
"atomic",
""
] |
I have the tables:
```
contest_images {contestID, imageID}
image {ID, userID}
user {ID}
```
(They contain more information than that, but this is all we need)
My query so far:
```
SELECT imageID, userID FROM contest_images
JOIN image ON contest_images.imageID = image.ID
JOIN user ON user.ID = image.userID
ORDER BY contest_images.imageID DESC
```
The contest_images table can contain multiple images from one user (which is intended).
**I want to** retrieve the newest image each user has added. (So I can restrict each user to only one image in the contest at a time)
I also tried to make a view displaying {contestID, imageID, userID}.
Thanks in advance | Try it like this:
```
SELECT MAX(imageID), userID FROM contest_images
JOIN image ON contest_images.imageID = image.ID
JOIN user ON user.ID = image.userID
GROUP BY image.userID
``` | To limit to one image (the latest) in the contest at a time, you can use the TOP operator:
```
SELECT top 1 imageID, userID FROM contest_images
JOIN [image] ON contest_images.imageID = [image].ID
JOIN [user] ON [user].ID = image.userID
ORDER BY contest_images.imageID DESC
``` | SQL Query I can't figure out | [
"",
"mysql",
"sql",
""
] |
I have a table "Customers" with columns `CustomerID`, `MainCountry` and `CustomerTypeID`.
I have 5 customer types: 1, 2, 3, 4, 5.
I want to count the number of customers in each country according to customer type. I am using the following query:
```
select count(CustomerID) as CustomerCount,MainCountry,CustomerTypeID
from Customers
group by CustomerTypeID,MainCountry
```
But some countries do not have any customers under type 1, 2, 3, 4 or 5.
So I want to use a default value of 0 if a customer type does not exist for that country.
Currently it is giving data as follows:
```
CustomerCount MainCountry CustomerTypeID
5695 AU 1
525 AU 2
12268 AU 3
169 AU 5
18658 CA 1
1039 CA 2
24496 CA 3
2259 CA 5
2669 CO 1
10 CO 2
463 CO 3
22 CO 4
39 CO 5
```
As "AU" does not have type 4, I want a default value for it. | ```
Select Country.MainCountry, CustomerType.CustomerTypeId, Count(T.CustomerID) As CustomerCount
From (Select Distinct MainCountry From Customers) As Country
Cross Join (Select Distinct CustomerTypeId From Customers) As CustomerType
Left Join Customers T
On Country.MainCountry = T.MainCountry
And CustomerType.CustomerTypeId = T.CustomerTypeId
-- Edit here
And T.CreatedDate > Convert(DateTime, '1/1/2013')
-- End Edit
Group By Country.MainCountry, CustomerType.CustomerTypeId
Order By MainCountry, CustomerTypeId
``` | You should JOIN your table with a table of TypeIds. In this case:
```
select count(CustomerID) as CustomerCount,TypeTable.MainCountry,TypeTable.TId
from
Customers
RIGHT JOIN (
select MainCountry,TId from
(
select Distinct MainCountry from Customers
) as T1,
(
select 1 as Tid
union all
select 2 as Tid
union all
select 3 as Tid
union all
select 4 as Tid
union all
select 5 as Tid
) as T2
) as TypeTable on Customers.CustomerTypeID=TypeTable.TId
and Customers.MainCountry=TypeTable.MainCountry
group by TypeTable.TId,TypeTable.MainCountry
``` | sql query to get data group by customer type and need to add default value if customer type not found | [
"",
"sql",
"sql-server",
""
] |
I'm trying to write a complex query using PostgreSQL 9.2.4, and I'm having trouble getting it working. I have a table which contains a time range, as well as several other columns. When I store data in this table, if all of the columns are the same and the time ranges overlap or are adjacent, I combine them into one row.
When I retrieve them, though, I want to split the ranges at day boundaries - so for example:
```
2013-01-01 00:00:00 to 2013-01-02 23:59:59
```
would be selected as two rows:
```
2013-01-01 00:00:00 to 2013-01-01 23:59:59
2013-01-02 00:00:00 to 2013-01-02 23:59:59
```
**with the values in the other columns the same for both retrieved entries.**
I have seen [this question](https://stackoverflow.com/questions/1378026/splitting-up-an-interval-in-weeks-in-postgres) which seems to more or less address what I want, but it's for a "very old" version of PostgreSQL, so I'm not sure it's really still applicable.
I've also seen [this question](https://stackoverflow.com/questions/17035176/need-oracle-sql-to-split-up-date-time-range-by-day), which does exactly what I want, but as far as I know the `CONNECT BY` statement is an Oracle extension to the SQL standard, so I can't use it.
I believe I can achieve this using PostgreSQL's `generate_series`, but I'm hoping there's a simple example out there demonstrating how it can be used to do this.
This is the query I'm working on at the moment, which currently doesn't work (because I can't reference the `FROM` table in a joined subquery), but I believe this is more-or-less the right track.
[Here's the fiddle](http://sqlfiddle.com/#!12/46376/11) with the schema, sample data, and my working query.
**Update:** I just found out a fun fact, thanks to [this question](https://stackoverflow.com/questions/12579724/postgresql-join-to-denormalize-a-table-with-generate-series), that if you use a set-returning function in the `SELECT` part of the query, PostgreSQL will "automagically" do a cross join on the set and the row. I think I'm close to getting this working. | First off, your upper border concept is *broken*. A timestamp with `23:59:59` is no good. The data type `timestamp` has fractional digits. What about `2013-10-18 23:59:59.123::timestamp`?
**Include** the lower border and **exclude** the upper border everywhere in your logic. Compare:
* [Calculate number of concurrent events in SQL](https://stackoverflow.com/questions/8733718/calculate-number-of-concurrent-events-in-sql/8752353#8752353)
Building on this premise:
### Postgres 9.2 or older
```
SELECT id
, stime
, etime
FROM timesheet_entries t
WHERE etime <= stime::date + 1 -- this includes upper border 00:00
UNION ALL
SELECT id
, CASE WHEN stime::date = d THEN stime ELSE d END -- AS stime
, CASE WHEN etime::date = d THEN etime ELSE d + 1 END -- AS etime
FROM (
SELECT id
, stime
, etime
, generate_series(stime::date, etime::date, interval '1d')::date AS d
FROM timesheet_entries t
WHERE etime > stime::date + 1
) sub
ORDER BY id, stime;
```
Or simply:
```
SELECT id
, CASE WHEN stime::date = d THEN stime ELSE d END -- AS stime
, CASE WHEN etime::date = d THEN etime ELSE d + 1 END -- AS etime
FROM (
SELECT id
, stime
, etime
, generate_series(stime::date, etime::date, interval '1d')::date AS d
FROM timesheet_entries t
) sub
ORDER BY id, stime;
```
The simpler one may even be faster.
Note a **corner case** difference when `stime` and `etime` both fall on `00:00` exactly. Then a row with a zero time range is added at the end. There are various ways to deal with that. I propose:
```
SELECT *
FROM (
SELECT id
, CASE WHEN stime::date = d THEN stime ELSE d END AS stime
, CASE WHEN etime::date = d THEN etime ELSE d + 1 END AS etime
FROM (
SELECT id
, stime
, etime
, generate_series(stime::date, etime::date, interval '1d')::date AS d
FROM timesheet_entries t
) sub1
ORDER BY id, stime
) sub2
WHERE etime <> stime;
```
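The same half-open, per-day splitting rule is simple to state outside SQL as well; a minimal Python sketch of the logic (illustrative only, not part of the queries above) that also handles the zero-length corner case by emitting nothing:

```python
from datetime import datetime, timedelta

def split_by_day(start: datetime, end: datetime):
    """Split the half-open range [start, end) into per-day [lo, hi) pieces."""
    pieces = []
    lo = start
    while lo < end:
        # Midnight following lo's date is the upper border of this day's piece.
        next_midnight = datetime.combine(lo.date(), datetime.min.time()) + timedelta(days=1)
        hi = min(next_midnight, end)
        pieces.append((lo, hi))
        lo = hi
    return pieces

parts = split_by_day(datetime(2013, 10, 8, 22, 0), datetime(2013, 10, 10, 23, 0))
print(len(parts))  # 3
```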
### Postgres 9.3+
In Postgres 9.3+ you had better use **`LATERAL`** for this:
```
SELECT id
, CASE WHEN stime::date = d THEN stime ELSE d END AS stime
, CASE WHEN etime::date = d THEN etime ELSE d + 1 END AS etime
FROM timesheet_entries t
, LATERAL (SELECT d::date
FROM generate_series(t.stime::date, t.etime::date, interval '1d') d
) d
ORDER BY id, stime;
```
[Details in the manual](http://www.postgresql.org/docs/current/interactive/sql-select.html#SQL-FROM).
Same corner case as above.
[**SQL Fiddle**](http://www.sqlfiddle.com/#!15/cf299/13) demonstrating all. | There is a simple solution (if the intervals start at the same time):
```
postgres=# select i, i + interval '1day' - interval '1sec'
from generate_series('2013-01-01 00:00:00'::timestamp, '2013-01-02 23:59:59', '1day') g(i);
i │ ?column?
─────────────────────┼─────────────────────
2013-01-01 00:00:00 │ 2013-01-01 23:59:59
2013-01-02 00:00:00 │ 2013-01-02 23:59:59
(2 rows)
```
I wrote a table function that does it for any interval. It is fast - a two-year range divides into 753 ranges in 10 ms.
```
create or replace function day_ranges(timestamp, timestamp)
returns table(t1 timestamp, t2 timestamp) as $$
begin
t1 := $1;
if $2 > $1 then
loop
if t1::date = $2::date then
t2 := $2;
return next;
exit;
end if;
t2 := date_trunc('day', t1) + interval '1day' - interval '1sec';
return next;
t1 := t2 + interval '1sec';
end loop;
end if;
return;
end;
$$ language plpgsql;
```
Result:
```
postgres=# select * from day_ranges('2013-10-08 22:00:00', '2013-10-10 23:00:00');
t1 │ t2
─────────────────────┼─────────────────────
2013-10-08 22:00:00 │ 2013-10-08 23:59:59
2013-10-09 00:00:00 │ 2013-10-09 23:59:59
2013-10-10 00:00:00 │ 2013-10-10 23:00:00
(3 rows)
Time: 6.794 ms
```
and a faster (and slightly longer) version based on RETURN QUERY:
```
create or replace function day_ranges(timestamp, timestamp)
returns table(t1 timestamp, t2 timestamp) as $$
begin
t1 := $1; t2 := $2;
if $1::date = $2::date then
return next;
else
-- first day
t2 := date_trunc('day', t1) + interval '1day' - interval '1sec';
return next;
if $2::date > $1::date + 1 then
return query select d, d + interval '1day' - interval '1sec'
from generate_series(date_trunc('day', $1 + interval '1day')::timestamp,
date_trunc('day', $2 - interval '1day')::timestamp,
'1day') g(d);
end if;
-- last day
t1 := date_trunc('day', $2); t2 := $2;
return next;
end if;
return;
end;
$$ language plpgsql;
``` | PostgreSQL splitting time range into days | [
"",
"sql",
"postgresql",
"date-range",
"generate-series",
""
] |
```
SELECT [schoolname] AS combinationschools,
CASE
WHEN [schoolname] LIKE '%/%' THEN (SELECT value
FROM
[dbo].[Split]('/', '#6/#9E/#9M'))
END AS schoolname
FROM [dbo].[schools];
```
I'm getting a SQL error like this:
> Subquery returned more than 1 value. This is not permitted when the
> subquery follows =, !=, <, <= , >, >= or when the subquery is used as
> an expression. | When you use a subquery like this, you can only have one record in the result set for each record. Clearly your table Split has more than one record.
Use a join instead of a subquery. Or make this a correlated subquery by joining it to the schools table in the subquery, or pull only the max or min record, or add some sort of where clause in the subquery to get only one record. Without seeing the table structures and data, it is hard to determine exactly what to do. | That's because `Select value from [dbo].Split` is returning more than one value. | "Subquery returned more than 1 value" error | [
"",
"sql",
"sql-server",
""
] |
I'm having trouble coming up with a way to compare some data in SQL Server (2005). I have two tables and I need to make sure that the values from table 1 are matched in table two. Here's the table structure and some example data.
```
Table 1
GenreId
6
```
This is a temp table with a list of IDs that are passed in.
```
Table 2
Show| GenreId
Show1 | 2
Show1 | 6
Show2 | 6
```
This table can have many GenreIds for a show. I only need shows whose GenreIds all come from table 1. So the result would look something like:
If table 1 has 6, I expect to ONLY get Show2. If table 1 has 2 and 6, then I get Show1 and Show2.
I know this is probably simple but I am really drawing a blank. Any help is very much appreciated. | This is the query you're looking for:
```
SELECT SHOW FROM t2
WHERE SHOW NOT IN (
SELECT SHOW FROM t2
WHERE genreId NOT IN (6, 2)
)
GROUP BY SHOW
HAVING count(DISTINCT genreId) = 2
```
Those queries are a bit tricky. Take into account that the number in the `HAVING` clause has to match the number of items in the `IN` clause.
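Note that, for the question's expectation ("2 and 6" should return both shows), the `NOT IN` filter alone expresses the "only genres from table 1" subset requirement, while the `HAVING` clause additionally demands that a show cover every listed genre. A small SQLite-in-Python sketch of the subset filter on the sample data (illustrative names, since `show` reads better spelled out):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t2 (show_name TEXT, genre_id INTEGER);
INSERT INTO t2 VALUES ('Show1', 2), ('Show1', 6), ('Show2', 6);
""")

def shows_only_from(genres):
    # Shows with no genre outside the given set.
    marks = ",".join("?" * len(genres))
    sql = f"""
        SELECT DISTINCT show_name FROM t2
        WHERE show_name NOT IN (SELECT show_name FROM t2
                                WHERE genre_id NOT IN ({marks}))
        ORDER BY show_name
    """
    return [r[0] for r in conn.execute(sql, genres).fetchall()]

print(shows_only_from([6]))     # ['Show2']
print(shows_only_from([2, 6]))  # ['Show1', 'Show2']
```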
Now, provided that you have a table that contains those IDs, then you can solve it this way:
```
SELECT SHOW FROM t2
WHERE SHOW NOT IN (
SELECT SHOW FROM t2
WHERE genreId NOT IN (
SELECT genreId FROM t1
)
)
GROUP BY SHOW
HAVING count(DISTINCT genreId) = (SELECT COUNT(*) FROM t1)
```
Fiddle [here](http://sqlfiddle.com/#!6/f42f0/1). | Well, this should work but it may have horrible performance if `Table2` is large...
```
SELECT * FROM Table2 t2
WHERE NOT EXISTS(
SELECT GenreID
FROM Table1
WHERE GenreID NOT IN (
SELECT GenreID
FROM Table2
WHERE Show = t2.Show))
AND NOT EXISTS(
SELECT GenreID
FROM Table2
WHERE GenreID NOT IN (
SELECT GenreID FROM Table1)
AND Show = t2.Show)
``` | How to query two tables to compare data | [
"",
"sql",
"sql-server",
"sql-server-2005",
""
] |
This is a follow on question from @Erwin's answer to [Efficient time series querying in Postgres](https://stackoverflow.com/a/14877465/79790).
In order to keep things simple I'll use the same table structure as that question
```
id | widget_id | for_date | score |
```
The original question was to get the score for each of the widgets for every date in a range. If there was no entry for a widget on a date, then show the score from the previous entry for that widget. The solution using a cross join and a window function worked well if all the data was contained in the range you were querying for. My problem is that I want the previous score even if it lies outside the date range we are looking at.
Example data:
```
INSERT INTO score (id, widget_id, for_date, score) values
(1, 1337, '2012-04-07', 52),
(2, 2222, '2012-05-05', 99),
(3, 1337, '2012-05-07', 112),
(4, 2222, '2012-05-07', 101);
```
When I query for the range May 5th to May 10th 2012 (ie `generate_series('2012-05-05'::date, '2012-05-10'::date, '1d')`) I would like to get the following:
```
DAY WIDGET_ID SCORE
May, 05 2012 1337 52
May, 05 2012 2222 99
May, 06 2012 1337 52
May, 06 2012 2222 99
May, 07 2012 1337 112
May, 07 2012 2222 101
May, 08 2012 1337 112
May, 08 2012 2222 101
May, 09 2012 1337 112
May, 09 2012 2222 101
May, 10 2012 1337 112
May, 10 2012 2222 101
```
The best solution so far (also by @Erwin) is:
```
SELECT a.day, a.widget_id, s.score
FROM (
SELECT d.day, w.widget_id
,max(s.for_date) OVER (PARTITION BY w.widget_id ORDER BY d.day) AS effective_date
FROM (SELECT generate_series('2012-05-05'::date, '2012-05-10'::date, '1d')::date AS day) d
CROSS JOIN (SELECT DISTINCT widget_id FROM score) AS w
LEFT JOIN score s ON s.for_date = d.day AND s.widget_id = w.widget_id
) a
LEFT JOIN score s ON s.for_date = a.effective_date AND s.widget_id = a.widget_id
ORDER BY a.day, a.widget_id;
```
But as you can see in this [SQL Fiddle](http://www.sqlfiddle.com/#!12/1cb5c/1/0) it produces null scores for widget 1337 on the first two days. I would like to see the earlier score of 52 from row 1 in its place.
Is it possible to do this in an efficient way? | As [@Roman mentioned](https://stackoverflow.com/a/19443336/939860), `DISTINCT ON` can solve this. Details in this related answer:
* [Select first row in each GROUP BY group?](https://stackoverflow.com/questions/3800551/select-first-row-in-each-group-by-group/7630564#7630564)
Subqueries are generally a bit faster than CTEs, though:
```
SELECT DISTINCT ON (d.day, w.widget_id)
d.day, w.widget_id, s.score
FROM generate_series('2012-05-05'::date, '2012-05-10'::date, '1d') d(day)
CROSS JOIN (SELECT DISTINCT widget_id FROM score) AS w
LEFT JOIN score s ON s.widget_id = w.widget_id AND s.for_date <= d.day
ORDER BY d.day, w.widget_id, s.for_date DESC;
```
You can use a set returning function like a table in the `FROM` list.
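This "latest row at or before each day" logic is easy to sanity-check outside Postgres too. A minimal sketch in Python/SQLite (which lacks `DISTINCT ON`, so a correlated subquery stands in for it), using the sample data from the question:

```python
import sqlite3
from datetime import date, timedelta

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE score (id INTEGER PRIMARY KEY, widget_id INT, for_date TEXT, score INT);
INSERT INTO score VALUES
  (1, 1337, '2012-04-07', 52),
  (2, 2222, '2012-05-05', 99),
  (3, 1337, '2012-05-07', 112),
  (4, 2222, '2012-05-07', 101);
""")

# Stand-in for generate_series(): materialize the requested day range.
conn.execute("CREATE TEMP TABLE days (day TEXT)")
conn.executemany("INSERT INTO days VALUES (?)",
                 [((date(2012, 5, 5) + timedelta(n)).isoformat(),) for n in range(6)])

# For each (day, widget), take the score with the latest for_date <= day,
# which may lie before the queried range.
rows = conn.execute("""
    SELECT d.day, w.widget_id,
           (SELECT s.score FROM score s
             WHERE s.widget_id = w.widget_id AND s.for_date <= d.day
             ORDER BY s.for_date DESC LIMIT 1) AS score
    FROM days d
    CROSS JOIN (SELECT DISTINCT widget_id FROM score) w
    ORDER BY d.day, w.widget_id
""").fetchall()
```

The first two rows pick up widget 1337's April score, exactly the behaviour the question asks for.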
[**SQL Fiddle**](http://www.sqlfiddle.com/#!15/36b31/1)
One [multicolumn index](http://www.postgresql.org/docs/current/interactive/indexes-multicolumn.html) should be the key to performance:
```
CREATE INDEX score_multi_idx ON score (widget_id, for_date, score)
```
The third column `score` is only included to make it a [covering index in Postgres 9.2 or later](https://wiki.postgresql.org/wiki/Index-only_scans). You would not include it in earlier versions.
Of course, if you have many widgets and a wide range of days, the `CROSS JOIN` produces a lot of rows, which has a price-tag. Only select the widgets and days you actually need. | Like you wrote, you should find matching score, but if there is a gap - fill it with nearest earlier score. In SQL it will be:
```
SELECT d.day, w.widget_id,
       coalesce(s.score,
                (select s2.score
                 from score s2
                 where s2.for_date < d.day
                   and s2.widget_id = w.widget_id
                 order by s2.for_date desc
                 limit 1)) as score
from (select distinct widget_id FROM score) AS w
cross join (SELECT generate_series('2012-05-05'::date, '2012-05-10'::date, '1d')::date AS day) d
left join score s ON (s.for_date = d.day AND s.widget_id = w.widget_id)
order by d.day, w.widget_id;
```
`Coalesce` supplies the fallback here: when the join finds no score for that exact day (a gap), the subquery's nearest earlier score is used instead. | Time series querying in Postgres | [
"",
"sql",
"postgresql",
"time-series",
"greatest-n-per-group",
"generate-series",
""
] |
This is a simplified version of my problem.
I have a table like below
```
Id Name SNumber
100 XYZ 123
100 XYZ 123
101 ABC 123
103 QAZ 123
100 XYZ 971
100 XYZ 872
100 XYZ 659
102 PQR 145
102 PQR 707
103 QAZ 421
```
I want to count rows having Snumber as '123', i.e. the **Total** column, and rows having Snumber not as '123', i.e. the **OtherTotal** column
```
Id Name Total OtherTotal
100 XYZ 2 3
101 ABC 1 0
102 PQR 0 2
103 QAZ 1 1
```
What I am doing is using join
```
Select xx.*,otherTotal
From
( Select Id,Name,count(*) as Total
From table
Where Snumber like '123'
Group By id,name
)xx
Inner join
( Select Id,Name,count(*) as otherTotal
From table
Where Snumber not like '123'
Group By id,name
)yy
On xx.Id=yy.Id
```
But this will only return rows if a particular `Id` has both `Snumber` as `123` and not as `123`
Data returned is like below
```
Id Name Total OtherTotal
100 XYZ 2 3
103 QAZ 1 1
```
Now there is no guarantee that a particular `Id` will always have `Snumber` as `123`, so I can't use Left or Right join. How to solve this quagmire? | Try this:
```
SELECT id, name,
COUNT(CASE WHEN SNumber = 123 THEN 1 END) Total,
COUNT(CASE WHEN SNumber <> 123 THEN 1 END) OtherTotal
FROM t
GROUP BY id, name
ORDER BY id
```
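The conditional-aggregation pattern is portable across engines; a quick check in SQLite via Python, loading the sample rows from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (Id INT, Name TEXT, SNumber INT)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    (100, 'XYZ', 123), (100, 'XYZ', 123), (101, 'ABC', 123), (103, 'QAZ', 123),
    (100, 'XYZ', 971), (100, 'XYZ', 872), (100, 'XYZ', 659),
    (102, 'PQR', 145), (102, 'PQR', 707), (103, 'QAZ', 421),
])

# CASE without an ELSE yields NULL, and COUNT ignores NULLs,
# so each COUNT only sees its matching rows.
result = conn.execute("""
    SELECT Id, Name,
           COUNT(CASE WHEN SNumber = 123 THEN 1 END)  AS Total,
           COUNT(CASE WHEN SNumber <> 123 THEN 1 END) AS OtherTotal
    FROM t
    GROUP BY Id, Name
    ORDER BY Id
""").fetchall()
```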
Fiddle [here](http://sqlfiddle.com/#!6/ce7a2/1). | ```
select
Id, Name,
sum(case when SNumber = 123 then 1 else 0 end) as Total,
sum(case when SNumber <> 123 then 1 else 0 end) as OtherTotal
from Table1
group by Id, Name
order by Id
```
or
```
select
Id, Name,
count(*) - count(nullif(SNumber, 123)) as Total,
count(nullif(SNumber, 123)) as OtherTotal
from Table1
group by Id, Name
order by Id
```
**`sql fiddle demo`** | Retrieve matching rows using join | [
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2005",
""
] |
```
Select GroupId,
count(distinct GroupProgramYearParticipantID)as [ChildAddedprior]
from #temp1
Where (MonthFlag = '1')
and (ParticipantTypeName = 'child')
and (GroupProgramYearParticipantID not in
(Select distinct GroupProgramYearParticipantID
from #temp1 t
Where (t.MonthFlag = '2')
and (t.ParticipantTypeName = 'child')
and t.GroupId = GroupId))
group by groupId
```
If groupID 1 has GroupProgramYearParticipantID's 1,2,2,3,4,4
and groupID 2 has GroupProgramYearParticipantID's 2,4,4,5,5,6,7
The above query returns
```
GroupID-1 ChildAddedprior- 4 (which takes 1,2,3,4)
GroupID-2 ChildAddedPrior- 5 (which includes 2,4,5,6,7)
```
But what I want is
```
GroupID-1 ChildAddedprior- 4 (which takes 1,2,3,4)
GroupID-2 ChildAddedPrior- 3 (which includes 5,6,7) (this doesn't include 2,4 which are counted earlier).
```
Help is really appreciated | ```
SELECT GroupID, COUNT(Distinct GroupParticipantID) AS CNT
FROM #Temp1 A
WHERE A.GroupParticipantID NOT IN
(
SELECT GroupParticipantID
FROM #Temp1 B
WHERE A.GroupID > B.GroupID
)
Group BY GroupID
```
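The correlated `NOT IN` is the interesting part; a small SQLite/Python reproduction with the IDs from the question confirms the counts (4 for group 1, then 3 for group 2 after excluding 2 and 4):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE temp1 (GroupID INT, GroupParticipantID INT)")
conn.executemany("INSERT INTO temp1 VALUES (?, ?)",
                 [(1, p) for p in (1, 2, 2, 3, 4, 4)] +
                 [(2, p) for p in (2, 4, 4, 5, 5, 6, 7)])

# Count only participants not already seen in a lower-numbered group.
result = conn.execute("""
    SELECT GroupID, COUNT(DISTINCT GroupParticipantID) AS cnt
    FROM temp1 a
    WHERE a.GroupParticipantID NOT IN
          (SELECT b.GroupParticipantID FROM temp1 b WHERE a.GroupID > b.GroupID)
    GROUP BY GroupID
    ORDER BY GroupID
""").fetchall()
```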
**[SQL Fiddle Demo](http://sqlfiddle.com/#!6/b95ae/5)** | You could use 2 queries to achieve that.
1. Select the first group's ids into a temp table.
2. When selecting the second group, don't select ids that exist in the temp table.
Alternatively, you can take the results of your query (ids) and use common table expressions to count groups recursively starting from the first one and excluding items from groups you already counted. | Count only the id's which are not counted in earlier group | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
Let's say you have a table with columns A and B, among others. You create a multi-column index (A, B) on the table.
Does your query have to take the order of indexes into account? For example,
```
select * from MyTable where B=? and A in (?, ?, ?);
```
In the query we put B first and A second. But the index is (A, B). Does the order matter?
**Update**: I do know that the order of indexes matters *significantly* in terms of the leftmost prefix rule. However, does it matter which column comes first in the query itself? | In this case, no, but I recommend using the [EXPLAIN](http://dev.mysql.com/doc/refman/5.1/en/using-explain.html) keyword; it will show you which optimizations MySQL will use (or not). | The order of columns in the index can affect the way the MySQL optimiser uses the index. Specifically, MySQL can use your compound index for queries on column A because it's the first part of the compound index.
However, your question refers to the order of column references in the query. Here, the optimiser will take care of the references appropriately, and the order is unimportant. The different clauses must come in a particular order to satisfy syntax rules, so you have little control anyway.
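A quick way to convince yourself (shown here with SQLite and invented sample data, but the principle is the same in MySQL): write the predicates in either order and compare the results.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (A INT, B INT, payload TEXT)")
conn.executemany("INSERT INTO MyTable VALUES (?, ?, ?)",
                 [(a, b, f"row-{a}-{b}") for a in range(5) for b in range(5)])
conn.execute("CREATE INDEX idx_ab ON MyTable (A, B)")

# Same query, predicates written in either order.
q1 = conn.execute("SELECT * FROM MyTable WHERE B = ? AND A IN (?, ?, ?) ORDER BY A, B",
                  (2, 1, 3, 4)).fetchall()
q2 = conn.execute("SELECT * FROM MyTable WHERE A IN (?, ?, ?) AND B = ? ORDER BY A, B",
                  (1, 3, 4, 2)).fetchall()
```

The optimiser is free to reorder predicates; only the index's column order is significant.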
Mysql reference on multi-column index optimisation is [here](http://dev.mysql.com/doc/refman/5.0/en/multiple-column-indexes.html) | Does the order of indexes matter? | [
"",
"mysql",
"sql",
""
] |
I am trying to assign 'A' to [Student Details].group based on this SELECT statement.
```
SELECT TOP (10) PERCENT [Person Id], [Given Names], Surname, Gpa, [Location Cd]
FROM [Student Details]
WHERE ([Location Cd] = 'PAR')
ORDER BY Gpa DESC
```
I can't figure out how to use a SELECT statement in an UPDATE statement.
Can someone please explain how to accomplish this?
I am using ASP .NET and MsSQL Server if it makes a difference.
Thanks | Try this using `CTE` (Common Table Expression):
```
;WITH CTE AS
(
SELECT TOP 10 PERCENT [Group]
FROM [Student Details]
WHERE ([Location Cd] = 'PAR')
ORDER BY Gpa DESC
)
UPDATE CTE SET [Group] = 'A'
``` | I'm assuming you want to update these records and then return them :
```
SELECT TOP (10) PERCENT [Person Id], [Given Names], Surname, Gpa, [Location Cd]
INTO #temp
FROM [Student Details]
WHERE ([Location Cd] = 'PAR')
ORDER BY Gpa DESC
UPDATE [Student Details] SET [Group] = 'A' WHERE [Person Id] IN (SELECT [Person Id] FROM #temp)
SELECT * FROM #temp
```
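Outside SQL Server the same effect can be had without `TOP n PERCENT`. A hedged sketch in SQLite via Python (the simplified table and column names here are stand-ins, not the question's exact schema): compute the cutoff count first, then update by key.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE student_details
                (person_id INTEGER PRIMARY KEY, gpa REAL,
                 location_cd TEXT, grp TEXT)""")
conn.executemany("INSERT INTO student_details VALUES (?, ?, ?, ?)",
                 [(i, i / 10.0, 'PAR', None) for i in range(1, 21)])

# SQLite has no TOP n PERCENT, so compute the cutoff count first.
n = conn.execute("SELECT COUNT(*) FROM student_details WHERE location_cd = 'PAR'").fetchone()[0]
top = max(1, n // 10)  # 10 percent, at least one row

conn.execute("""
    UPDATE student_details SET grp = 'A'
    WHERE person_id IN (SELECT person_id FROM student_details
                        WHERE location_cd = 'PAR'
                        ORDER BY gpa DESC LIMIT ?)
""", (top,))
updated = [r[0] for r in conn.execute(
    "SELECT person_id FROM student_details WHERE grp = 'A' ORDER BY person_id")]
```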
I'm also assuming person id is the PK of student details | Update multiple rows from select statement | [
"",
"asp.net",
"sql",
""
] |
Is it possible to `SELECT` the minimum or maximum among two or more values? I'd need something like this:
```
SELECT MAX_VALUE(A.date0, B.date0) AS date0, MIN_VALUE(A.date1, B.date1) AS date1
FROM A, B
WHERE B.x = A.x
```
Can I achieve this by only using MySQL? | You can use the `LEAST` and `GREATEST` functions to achieve it.
```
SELECT
GREATEST(A.date0, B.date0) AS date0,
LEAST(A.date1, B.date1) AS date1
FROM A, B
WHERE B.x = A.x
```
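SQLite spells the same operations as two-argument `MIN()`/`MAX()`; a tiny Python check of the same query shape (the sample dates are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE A (x INT, date0 TEXT, date1 TEXT)")
conn.execute("CREATE TABLE B (x INT, date0 TEXT, date1 TEXT)")
conn.execute("INSERT INTO A VALUES (1, '2013-01-05', '2013-03-01')")
conn.execute("INSERT INTO B VALUES (1, '2013-02-01', '2013-02-15')")

row = conn.execute("""
    SELECT MAX(A.date0, B.date0) AS date0,  -- GREATEST() in MySQL
           MIN(A.date1, B.date1) AS date1   -- LEAST() in MySQL
    FROM A JOIN B ON B.x = A.x
""").fetchone()
```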
Both are described here <http://dev.mysql.com/doc/refman/5.0/en/comparison-operators.html> | I suppose you are looking for:
[`GREATEST()`](http://dev.mysql.com/doc/refman/5.6/en/comparison-operators.html#function_greatest)
and
[`LEAST()`](http://dev.mysql.com/doc/refman/5.6/en/comparison-operators.html#function_least) | MySQL Select minimum/maximum among two (or more) given values | [
"",
"mysql",
"sql",
""
] |
I'm developing an ecommerce website, but I got confused with the database design.
I have 3 tables:
1. Products (id, category\_id, subcategory\_id, name)
2. Categories (id, name)
3. Subcategories (id, category\_id, name)
For example: Product (Stripped Tshirt) is in Category (Fashion) and Subcategory (Tshirt)
What if I want to add size for this product? Should I create another table named 'Sizes' and add a size\_id in the products table?
But, for another category (such as Gadget/Handphone) got no size. What are the best options to design this database? | Firstly, I would not have a category ID and subcategory ID on the product. It only belongs to a subcategory. You can find out the parent category through the subcategory if you need to.
```
products (id, subcategory_id, name)
categories (id, name)
subcategories (id, category_id, name)
```
For the attributes which could be different for all products, you could do this:
```
attribute_types (id, name)
attributes (id, attribute_type_id, value)
products_attributes (id, product_id, attribute_id)
```
So in your attribute\_types table you might have:
```
id | name
-----------------------------
1 | T-Shirt Size
2 | Shoe Size (US)
3 | Colour
4 | Batteries Included
```
In your attributes table:
```
id | attribute_type_id | value
------------------------------
1 | 1 | Small
2 | 1 | Medium
3 | 1 | Large
4 | 2 | 9
5 | 2 | 9.5
6 | 2 | 10
7 | 2 | 10.5
8 | 2 | 11
9 | 3 | Red
10 | 3 | Blue
11 | 3 | Green
12 | 3 | Yellow
13 | 3 | Black
14 | 4 | Yes
15 | 4 | No
```
And then in your products\_attributes table you just add whichever attributes that product has:
```
id | product_id | attribute_id
------------------------------
1 | 456 | 6
2 | 456 | 13
```
So for that product, it is a Size 10 shoe, Black. | Magento, which is a huge shop solution, solves this using [EAV](http://en.wikipedia.org/wiki/Entity%E2%80%93attribute%E2%80%93value_model). EAV is flexible but does not perform as well as a dedicated table solution.
Your Category -> Subcategory does not make much sense. Read about nested sets or use a pure parent-child relationship within the same categories table. | Cakephp database design for ecommerce | [
"",
"mysql",
"sql",
"cakephp",
"size",
"product",
""
] |
Not sure if I have phrased the title properly, but here it goes. I have these two tables:
table:staff
```
id Name groupId Status
1 John Smith 1 1
2 John Doe 1 1
3 Jane Smith 2 1
4 Jerry Smith 1 1
```
table:jobqueue
```
id job_id staff_id jobStatus
1 1 1 1
2 2 1 1
3 5 2 1
4 7 3 0
```
Now, what I need to do is find the staff member with the least number of jobs assigned, which I am able to do by querying the jobqueue table.
```
SELECT min(cstaff), tmp.staff_id
FROM (SELECT t.staff_id, count(staff_id) cstaff
      from jobqueue t
      join staff s on t.staff_id = s.id
      join group g on s.groupId = g.id
      where g.id = 26
      GROUP BY t.id) tmp
```
This works fine, but the problem is if a staff is not assigned to any job at all, this query wont get them, because it only queries the jobqueue table, where that particular staff won't have any entry. I need to modify the query to include the staff table and if a staff is not assigned any job in the jobqueue then I need to get the staff details from the staff table. Basically, I need to find staff for a group who are not assigned any job and if all staffs are assigned job then find staff with the least amount of jobs assigned. Could use some help with this. Also, tagging as Yii as I would like to know if this is doable with Yii active-records. But I am okay with a plain sql query that will work with Yii sql commands. | not sure that it is optimal query, but it works:
```
select d.groupId, d.name, (select count(*) from jobqueue as e where e.staff_id=d.id) as jobassigned
from staff as d
where d.id in (
select
(
select a.id
from staff as a
left outer join
jobqueue as b
on (a.id = b.staff_id)
where a.groupId = c.groupId
group by a.id
order by count(distinct job_id) asc
limit 1
) as notassigneduserid
from (
select distinct groupId from staff
) as c)
```
A few comments may help:
The `c` subquery collects all distinct groupId values; if you have a separate table for this, you can use it instead.
The `notassigneduserid` subquery selects, for each groupId, the user with the minimal job count.
The outer `d` query fetches the actual user names and groupId for all the selected users and presents them.
Here are the results for the data from the question:
```
Group Staff Jobs assigned
1 Jerry Smith 0
2 Jane Smith 1
``` | ```
with
counts as (
select s.groupId
, s.id
, (select count(*) from jobqueue where staff_id = s.id) count
from staff s
group by s.id, s.groupId),
groups as (
select groupId, min(count) mincount
from counts
group by groupId)
select c.groupId, c.id, c.count
from counts c
join groups g on c.groupId = g.groupId
where c.count = g.mincount
```
This SQL will give you all the staff with the minimum number of jobs in each group. It might be that more than one staff has the same minimum number of jobs. The approach is to use common table expressions to build first a list of counts, and then to retrieve the minimum count for each group. Finally I join the counts and groups tables and retrieve the staff that have the minimum count for each group.
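The essential trick is making sure staff with zero jobs still appear, via an outer join (or a correlated count). A compact check with the question's data in SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE staff (id INTEGER PRIMARY KEY, name TEXT, groupId INT, status INT);
CREATE TABLE jobqueue (id INTEGER PRIMARY KEY, job_id INT, staff_id INT, jobStatus INT);
INSERT INTO staff VALUES (1, 'John Smith', 1, 1), (2, 'John Doe', 1, 1),
                         (3, 'Jane Smith', 2, 1), (4, 'Jerry Smith', 1, 1);
INSERT INTO jobqueue VALUES (1, 1, 1, 1), (2, 2, 1, 1), (3, 5, 2, 1), (4, 7, 3, 0);
""")

# LEFT JOIN keeps zero-job staff; the first row per group is the least loaded.
rows = conn.execute("""
    SELECT s.groupId, s.name, COUNT(j.id) AS jobs
    FROM staff s
    LEFT JOIN jobqueue j ON j.staff_id = s.id
    GROUP BY s.id
    ORDER BY s.groupId, jobs, s.name
""").fetchall()
```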
I tested this on SQL Server, but the syntax should work for MySQL as well. To your data I added:
```
id Name groupId Status
5 Bubba Jones 2 1
6 Bubba Smith 1 1
```
and
```
id job_id staff_id jobStatus
5 4 5 1
```
Results are
```
group name count
1 Bubba Smith 0
1 Jerry Smith 0
2 Bubba Jones 1
2 Jane Smith 1
```
BTW, I would not try to do this with active record, it is far too complex. | Mysql (conditional?) query from two tables | [
"",
"mysql",
"sql",
"yii",
""
] |
Can't seem to figure this one out...
I have a table, one of the fields is variabletype. There are several user input variable types. For example:
```
id | variabletype
1 | button
2 | text
3 | button
4 | link
5 | button
6 | link
```
I wrote some SQL to basically count the number of times each variable type is listed. I nestled that into a subquery so that I can then get the record that has the max number of instances (in this example - button).
My problem is the query only returns the max number, it does not display the actual variable type. my ideal outcome is having the variabletype display along with the max count. Any thoughts?
```
SELECT MAX(y.mosttested)
FROM (SELECT variabletype, COUNT(variableid) AS mosttested
FROM variable GROUP BY variabletype) y
``` | Try this :
```
SELECT `variabletype`,
       COUNT(`variabletype`) AS `value_occurrence`
FROM `my_table`
GROUP BY `variabletype`
ORDER BY `value_occurrence` DESC
LIMIT 1;
``` | If the row\_number function is available in your SQL environment then you can do the following:
```
SELECT y.variabletype, y.mosttested AS max_mosttested
FROM (SELECT variabletype, COUNT(variableid) AS mosttested,
             ROW_NUMBER() OVER (ORDER BY COUNT(variableid) DESC) AS rownum
      FROM variable GROUP BY variabletype) y
WHERE rownum = 1
``` | Returning field name when using Max and Count queries | [
"",
"mysql",
"sql",
""
] |
I have a requirement to round a datetime2 value down to the nearest half hour. For example '10/17/2013 12:10:00.123' would round down to '10/17/2013 12:00:00.0' And '10/17/2013 12:34:17.123' would round down to 10/17/2013 12:30:00.0'. My first thought was to create a UDF which would break the date and time apart and do it that way. However, I'm wondering if something like this can be done in a single T-SQL statement?
I'm using SQL Server 2012 and the data type of the column is a dateTime2 (which cannot be converted to a float!!) | The answer by Ian is good, but it contains an unnecessary conversion. I suggest
```
SELECT CONVERT(smalldatetime, ROUND(CAST([columnname] AS float) * 48.0,0,1)/48.0) FROM [tableName]
```
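For reference, the same floor-to-half-hour arithmetic is trivial if you ever need it outside SQL; e.g. in Python:

```python
from datetime import datetime

def floor_half_hour(dt: datetime) -> datetime:
    # Truncate to the start of the current 30-minute bucket.
    return dt.replace(minute=(dt.minute // 30) * 30, second=0, microsecond=0)

rounded = floor_half_hour(datetime(2013, 10, 17, 12, 34, 17, 123000))
```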
If you want to round to the nearest half-hour instead of always rounding down, use
```
SELECT CONVERT(smalldatetime, ROUND(CAST([columnname] AS float) * 48.0,0)/48.0) FROM [tableName]
``` | How about this
```
declare @d datetime = '2013-05-06 12:29.123'
select
case
when datepart(minute, @d) < 30 then cast(dateadd(minute, -datepart(minute,@d)-datepart(second,@d), @d) as smalldatetime)
when datepart(minute, @d) >= 30 then cast(dateadd(minute, -datepart(minute,@d)-datepart(second,@d)+30, @d) as smalldatetime)
end
``` | Rounding a datetime value down to the nearest half hour | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I get an error in the query below. Why ?
```
DECLARE @Vr varchar(50);
SET @Vr = N'Infos';
IF @Vr = FAR
UPDATE MyTable
SET TAR = @Vr
```
Error - Invalid column name 'FAR'.
I am using the right database and right table. Then, why does this happen ?
My logic is if Column Far = variable Vr, then update Column TAR. | In your case, you are checking the column name and not that particular column's value. I suggest you try dynamic sql. This [link](http://www.codeproject.com/Articles/20815/Building-Dynamic-SQL-In-a-Stored-Procedure) can be useful.
As far as I've understood, if your `@Vr = 'FAR'` then update field `TAR` else do something else. In that case, see my below query
```
DECLARE @SQL NVARCHAR(MAX)
DECLARE @valueToBeUpdatedInYourColumn nvarchar(100)
set @valueToBeUpdatedInYourColumn = 'this value will be your updated value'
SET @Vr = 'Infos';
IF @Vr = 'FAR'
BEGIN
SET @SQL = '
UPDATE MyTable
SET TAR= '''+@valueToBeUpdatedInYourColumn+'''
'
EXEC(@SQL)
END
ELSE
BEGIN
...
END
```
The number of single quotes `'` is very important | ```
DECLARE @Vr varchar(50);
SET @Vr = N'Infos';
-- IF @Vr = FAR
IF ( select Count(*) from mytable where ColumnFar =@Vr) >0
BEGIN
UPDATE MyTable
SET TAR = @Vr WHERE ColumnFar = @Vr
END
ELSE
BEGIN
-- Goes your condition
END
```
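If the real goal is simply "update TAR on the rows where the FAR column equals the variable", a plain parameterized UPDATE needs no IF at all; a quick SQLite illustration (table and column names taken from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (FAR TEXT, TAR TEXT)")
conn.executemany("INSERT INTO MyTable VALUES (?, ?)",
                 [('Infos', None), ('Other', None)])

vr = 'Infos'
# Only rows whose FAR column matches the variable get TAR updated.
conn.execute("UPDATE MyTable SET TAR = ? WHERE FAR = ?", (vr, vr))
rows = conn.execute("SELECT FAR, TAR FROM MyTable ORDER BY FAR").fetchall()
```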
Or use CASE
```
DECLARE @Vr varchar(50);
SET @Vr = N'Infos';
UPDATE MyTable
SET TAR =
CASE
WHEN ( @Vr = FAR)
THEN 'FAR'
Else
NULL -- Or any other value you want.
END
``` | Error in if else in sql | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have a database with some tables, each with an `ID` primary key column. All `ID`s contain huge random numbers like `827140014`, `9827141241`, etc. What is the easiest way to change these values to sequential ones starting from 1 (`1`, `2`, `3`, etc.)? Rows' order doesn't matter.
I want to do it for SQL Server, Oracle, PostgreSQL and SQLite (there can be different solutions for each one).
Additionally, suppose that there are some tables that depend on `ID` (foreign keys). | Oracle solution: given table some\_table with column id as primary key:
```
CREATE TABLE my_order AS SELECT id, rownum rn FROM some_table;
ALTER TABLE my_order ADD CONSTRAINT pk_order PRIMARY KEY (id);
UPDATE
(SELECT t.*, o.rn FROM some_table t JOIN my_order o on (t.id = o.id))
SET id = rn;
DROP TABLE my_order;
```
You should be able to run something similar in PostgreSQL, just use analytic function row\_number instead of Oracle's rownum. I'm not sure about other engines.
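For SQLite (also asked about), a similar sketch works with a window function (SQLite 3.25+); note this assumes the new sequential IDs don't collide with remaining old ones mid-update:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE some_table (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO some_table VALUES (?)",
                 [(827140014,), (9827141241,), (312555001,)])

# Map each old id to its 1-based rank, then apply the mapping.
conn.execute("""
    CREATE TEMP TABLE my_order AS
    SELECT id, ROW_NUMBER() OVER (ORDER BY id) AS rn FROM some_table
""")
conn.execute("""
    UPDATE some_table
    SET id = (SELECT rn FROM my_order WHERE my_order.id = some_table.id)
""")
ids = [r[0] for r in conn.execute("SELECT id FROM some_table ORDER BY id")]
```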
For referencing tables, just ensure that foreign key constraints are ON UPDATE CASCADE. | For Oracle
a). If there are tables referencing the table you want to update, first thing you have to do is to disable foreign key constraints. You can generate all the `ALTER` statements using below query:
```
SELECT 'ALTER TABLE ' || owner || '.' || table_name ||
' DISABLE CONSTRAINT ' || constraint_name || ';'
FROM all_constraints
WHERE constraint_type = 'R'
AND r_constraint_name =
(SELECT constraint_name
FROM all_constraints
WHERE constraint_type = 'P'
AND table_name = 'YOUR_TABLE_NAME'
AND owner = 'OWNER_OF_THAT_TABLE');
```
b). Run the generated ALTER statements.
c). Next, you have to generate new IDs. You can either add new column to hold those values, or create a temporary table. New column approach:
```
ALTER TABLE YOUR_TABLE_NAME ADD temp_new_id NUMBER;
```
d). Populate the column:
```
-- Create a sequence to generate new IDs
CREATE SEQUENCE YOUR_TABLE_NAME_seq START WITH 1 CACHE 20;
UPDATE YOUR_TABLE_NAME SET temp_new_id = YOUR_TABLE_NAME_seq.nextVal;
COMMIT;
```
e). Update ID in each of the dependent tables in this manner:
```
UPDATE some_dep_table sdt SET sdt.master_table_id =
(SELECT ytn.temp_new_id FROM YOUR_TABLE_NAME ytn WHERE sdt.master_table_id = ytn.id);
COMMIT;
```
f). Update your table - move IDs from temporary column to the actual column with ID:
```
UPDATE YOUR_TABLE_NAME SET id = temp_new_id;
COMMIT;
```
g). Drop temporary column from your table:
```
ALTER TABLE YOUR_TABLE_NAME DROP COLUMN temp_new_id;
```
h). ENABLE constraints on dependent tables (use query to generate them from point a), just replace DISABLE with ENABLE). | Massive primary key edition for disordered primary key column | [
"",
"sql",
"sql-server",
"oracle",
"sqlite",
"postgresql",
""
] |
The search term is too ambiguous for Google, apparently. I am looking at a SQL call and it has 2 FROM keywords? I've never seen this before, can someone explain?
```
SELECT TOP(5) SUM(column) AS column, column
FROM ( SELECT DISTINCT column, column, column
FROM ((((((table table
INNER JOIN table table ON (column = column
AND column = 2
AND column != '' ))
INNER JOIN table table ON (column = column
AND (column = 144 OR column = 159 OR column = 162 OR column = 164 OR column = 163 OR column = 1 OR column = 2 OR column = 122 OR column = 155 OR column = 156 )))
inner join table table ON (column = column
AND column = 0 ))
INNER JOIN table ON (column = column ))
INNER JOIN table table ON ( column = column
AND (column = 102 OR column = 103 )))
INNER JOIN table table ON (column = column ))) TempTable
GROUP BY column ORDER BY column desc
```
You will note the multiple FROM keywords. It runs just fine. Just curious what the purpose is. | This is called a `subquery`. You can use a `subquery` within your main query.
So the subquery is what produces the multiple `FROM` keywords. | There's a reason why SQL is called a **Structured** Query Language: it lets you formulate queries that use other queries as their source, thus creating a hierarchical query structure.
This is a common practice: each `FROM` keyword is actually paired with its own `SELECT`, making the inner query a source for the outer one.
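A runnable miniature of the same idea (SQLite via Python, with hypothetical item/price data): two `FROM` keywords, one per `SELECT`.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (item_id INT, price REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, 10.0), (1, 12.5), (2, 7.0)])

# The inner SELECT's output is the outer SELECT's source.
rows = conn.execute("""
    SELECT item_id, SUM(price) AS total_price
    FROM (SELECT item_id, price FROM orders) AS t
    GROUP BY item_id
    ORDER BY total_price DESC
""").fetchall()
```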
Proper formatting would help you understand what is going on: indenting inner `SELECT`s helps you see the structure of your query, making it easier to understand which part is used as the source of what other parts:
```
SELECT TOP(5) SUM(price) AS total_price, item_id
FROM ( -- The output of this query serves as input for the outer query
SELECT price, item
FROM order -- This may have its own selects, joins, etc.
GROUP BY order_id
)
GROUP BY item_id
``` | SQL Server - Multiple FROM keywords? | [
"",
"sql",
"sql-server",
""
] |
Let's say I have a table that has the following columns
(Start, End, Interval)
Is there any way to get all values between Start and End, stepping by Interval, each on its own row?
Notice that there will be more than one row in the (Start, End, Interval) table, but the ranges should not overlap.
If possible without loops/cursors/temp tables/variable tables.
Sample data
```
Start End Interval
1 3 1
9 12 1
16 20 2
```
Desired outcome:
```
Result
1
2
3
9
10
11
12
16
18
20
``` | This is a great use case for a recursive common table expression:
```
;with cte as (
select [Start] as Result, [End], [Interval]
from Table1
union all
select Result + [Interval], [End], [Interval]
from cte
where Result + [Interval] <= [End]
)
select Result
from cte
order by Result
```
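Recursive CTEs are widely supported now; nearly the same query runs in SQLite (3.8.3+), which makes a convenient test bed for the sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE Table1 (Start INT, "End" INT, Interval INT)')
conn.executemany("INSERT INTO Table1 VALUES (?, ?, ?)",
                 [(1, 3, 1), (9, 12, 1), (16, 20, 2)])

rows = conn.execute("""
    WITH RECURSIVE cte AS (
        SELECT Start AS Result, "End", Interval FROM Table1
        UNION ALL
        SELECT Result + Interval, "End", Interval FROM cte
        WHERE Result + Interval <= "End"
    )
    SELECT Result FROM cte ORDER BY Result
""").fetchall()
results = [r[0] for r in rows]
```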
**`sql fiddle demo`** | You can do it like this
```
WITH tally AS (
SELECT 0 n
UNION ALL
SELECT n + 1 FROM tally WHERE n < 100 -- adjust 100 to a max possible value for (end - start) / interval
)
SELECT start + n * [interval] result
FROM Table1 t CROSS JOIN tally n
WHERE n.n <= (t.[end] - t.start) / t.[interval]
ORDER BY result
```
**Note:** If you do a lot of such queries you may consider to substitute the recursive CTE `tally` with a persisted numbers table `tally` with a primary key on `n` column.
Output:
```
| RESULT |
|--------|
| 1 |
| 2 |
| 3 |
| 9 |
| 10 |
| 11 |
| 12 |
| 16 |
| 18 |
| 20 |
```
Here is **[SQLFiddle](http://sqlfiddle.com/#!3/bcb970/6)** demo | SELECT all values between two values | [
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2008-r2",
""
] |
I have 2 tables with both of them having the first column as PRIMARY which is also an auto incrementing. First table has 67 entries starting from 1 to 67 and the second table has 48 entries. Both of them have same columns. I want to take the content from Table2 and insert them into Table1 such that the next entry in Table1 starts from 68. Finally I will have 115 entries in Table1 and the PRIMARY column will also show that. I tried this:
```
INSERT INTO `Table1` SELECT * FROM `Table2`
```
But it said
```
#1062 - Duplicate entry '1' for key 'PRIMARY'
```
What do I do? | Name the columns you want to insert and leave the auto\_increment column out of the insert:
```
INSERT INTO `Table1` (col2, col3, col4)
select col2, col3, col4 from `table2`
``` | You need to specify the columns you wish to enter, without the identity column.
Something like
```
INSERT INTO `Table1` (column1, column2,...,columnn)
SELECT column1, column2,...,columnn FROM `Table2`
``` | Join 2 tables with same PRIMARY key that is autoincrementing too | [
"",
"mysql",
"sql",
"sql-server",
""
] |
**This is TERADATA** (not SQL Server, not Oracle )
I have a column of phone numbers:
```
(312)9879878
(298)989-9878
430-394-2934
394s9048ds987
..........
```
I need to clean this column into
```
3129879878
2989899878
4303942934
3949048987
..........
```
So that only numbers should stay. All other letters, special characters, hyphens ... should be removed. How can I do this? | Which release of TD is running at your site?
If it's 14, or you have the oTranslate UDF installed, you can simply do an old trick nesting oTranslate:
```
oTranslate(phonenum, oTranslate(phonenum, '0123456789', ''), '')
``` | **Answer :**
> DECLARE @Input varchar(1000)
>
> SET @Input = '01 vishal 98-)6543'
>
> DECLARE @pos INT
>
> SET @Pos = PATINDEX('%[^0-9]%',@Input)
>
> WHILE @Pos > 0
>
> BEGIN
>
> ```
> SET @Input = STUFF(@Input,@pos,1,'')
>
> SET @Pos = PATINDEX('%[^0-9]%',@Input)
> ```
>
> END
>
> SELECT @Input
**Thank You,
Vishal Patel** | How to replace all nonnumeric values? | [
"",
"sql",
"teradata",
""
] |
```
SELECT * FROM table t
SELECT t.* FROM table t
```
I tried it and it yielded the same results, but I want to make sure because I'm refactoring a piece of code that uses the second version, and I was surprised as it is both longer to write, and less simple.
Is there any hidden stuff here?
MySQL version: 5.5.29-0ubuntu0.12.04.2 (Ubuntu) | Both statements are the same in your case.
They would not be if you join multiple tables in one query.
```
select *
```
selects **all** columns.
```
select t.*
```
selects all columns **of table t** (or the table assigned the alias t) | ```
SELECT * FROM table t and SELECT t.* FROM table t
```
Return the whole table
`SELECT t.* FROM table as t inner join table2 as t2`
will only return the fields in the "table" table while
`SELECT * FROM table as t inner join table2 as t2`
will return the fields of table and table2 | Are there any differences in the following sql statements? | [
"",
"mysql",
"sql",
""
] |
In this scenario, I am trying to report on the operating\_system\_version for each distinct computer\_id where the report\_id for that computer\_id is the greatest.
Currently, I am getting the below results:
```
operating_system_version | computer_id | report_id
10.8 | 1 | 10
10.9 | 1 | 20
10.9 | 2 | 11
10.8 | 2 | 21
```
The above is returned by this statement:
```
SELECT operating_systems.operating_system_version,
reports.computer_id,
reports.report_id
FROM operating_systems
INNER JOIN reports
ON operating_systems.report_id = reports.computer_id
```
Instead, I would like to return the most recent (highest report\_id) operating\_system\_version for each distinct computer\_id, for example:
```
operating_system_version | computer_id | report_id
10.9 | 1 | 20
10.8 | 2 | 21
```
I am brand new to SQL .. Appreciate any help. | You would need to add a group by statement and a having statement.
The group by would look like
```
group by computer_id
```
The having would look like
```
having report_id= (select max(report_id) )
``` | ```
SELECT operating_systems.operating_system_version,
reports.computer_id,
reports.report_id
FROM operating_systems INNER JOIN reports ON operating_systems.report_id = reports.computer_id
WHERE NOT EXISTS (SELECT 1
FROM reports r2
WHERE r2.computer_id = reports.computer_id
AND r2.report_id > reports.report_id)
``` | MySQL: How to limit results to max value of another field? | [
"",
"mysql",
"sql",
"greatest-n-per-group",
""
] |
I am trying to make a change to one of the tables in our database named `NonConf`. Currently, we have three ***Yes/No*** fields called `Closed`, `Open`, and `OnHold`. We are going to be adding more statuses and I think it is a bad idea to continue adding fields for the new statuses. Instead I would like to convert the fields to one `Status` field.
I have already added the `Status` field to the `NonConf` table. How do I use an ***UPDATE*** query to populate `Status`? | You can use a [Switch](http://office.microsoft.com/en-us/access-help/switch-function-HA001228918.aspx) expression instead of nesting multiple `IIf` expressions.
```
UPDATE NonConf AS N
SET N.Status =
Switch
(
N.Closed, "Closed",
N.Open, "Open",
N.OnHold,"OnHold",
True, ""
);
```
`Switch` operates similar to `SELECT CASE` in VBA. So it returns the value from the first expression/value pair where the expression is `True`. The last expression/value pair (`True, ""`) catches anything which falls through the earlier pairs. Perhaps instead of an empty string, you would prefer Null or some other value to indicate none of the sourced Yes/No columns were `True`. | You can use one query to update the `Status` in one shot via a nested `IIF`:
```
UPDATE NonConf AS N
SET N.Status =
IIF (N.Closed, "Closed",
IIF(N.Open, "Open",
IIF(N.OnHold,"OnHold","")))
``` | Combine three fields into one | [
"",
"sql",
"ms-access",
"sql-update",
""
] |
I have two tables in my sqlite db - tenders and lots.
tenders:
```
id
name
address
```
lots:
```
id
tender_id # it's tender's ID from table `tenders`
summ
```
One tender may have many lots. How to count tenders that have 0 lots?
I tried queries such as:
```
select lots.* from lots
inner join tenders on tenders.id = lots.tender_id
group by lots.id
having count(*) =0
```
and
```
SELECT COUNT(tenders.id) AS cnt FROM tenders, lots WHERE lots.tender_id = tenders.id group by tenders.id having cnt=0
```
with no result.
But when I try to do
```
select * from lots where tender_id=263128
```
I see 0 rows as a result.
Thank you. | ```
SELECT tenders.id
FROM tenders
WHERE tenders.id NOT IN (SELECT tender_id FROM lots)
```
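A runnable check of this approach (with one usual caveat: if `lots.tender_id` can be NULL, prefer `NOT EXISTS`, since `NOT IN` against a set containing NULL returns no rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tenders (id INTEGER PRIMARY KEY, name TEXT, address TEXT);
CREATE TABLE lots (id INTEGER PRIMARY KEY, tender_id INT, summ REAL);
INSERT INTO tenders (id) VALUES (1), (2), (263128);
INSERT INTO lots (tender_id, summ) VALUES (1, 10), (1, 20), (2, 5);
""")

rows = conn.execute("""
    SELECT tenders.id
    FROM tenders
    WHERE tenders.id NOT IN (SELECT tender_id FROM lots)
""").fetchall()
```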
or different amounts of lots:
```
SELECT tenders.id
FROM tenders
WHERE tenders.id IN
(
SELECT tender_id FROM lots GROUP BY tender_id HAVING COUNT(*) IN (2,3)
)
``` | Using sub-query:
```
SELECT id, (SELECT COUNT(*) FROM lots WHERE tender_id = tenders.id) AS LotCount FROM tenders;
``` | How to count child rows? | [
"",
"sql",
"sqlite",
""
] |
I am trying to perform the following query in SQL server:
```
declare @queryWord as nvarchar(20) = 'asdas'
SELECT * FROM TABLE_1
WHERE (ISDATE(@queryWord) = 1)
AND TABLE_1.INIT_DATE = CONVERT(Date, @queryWord)
```
This obviously causes an error because `'asdas'` cannot be converted to `Date`. Although, I was expecting a different behavior. That is, because `ISDATE(@queryWord) = 1` is `false`, I was expecting SQL to **not** check the second condition, but apparently, it does.
I know there are some other ways to perform this query, but that is not my question. I wonder if there is some way to not check the second condition if the first one is not satisfied. I am curious because I thought that SQL already did this. | SQL Server does not do short-circuiting (nor should it).
If you need it to not *try* something under some circumstances, you need to force that in the way that you write your query.
For this query the easiest fix would be to use a `CASE` expression in your `WHERE` clause.
```
declare @queryWord as nvarchar(20) = 'asdas'
SELECT * FROM TABLE_1
WHERE TABLE_1.INIT_DATE = (CASE WHEN ISDATE(@queryWord) = 1
THEN CONVERT(Date, @queryWord)
ELSE NULL END)
```
Off-hand, `CASE` and query-nesting are the only two supported ways that I can think of to force an order of evaluation for dependent conditions in SQL. | I Guess you could do it in 2 passes:
```
declare @queryWord as nvarchar(20) = 'asdas'
select
*
from
(
SELECT * FROM TABLE_1
WHERE (ISDATE(@queryWord) = 1) ) t1
where t1.INIT_DATE = CONVERT(Date, @queryWord)
```
So your inner query runs the first test and the outer query the second. In a single query, I don't believe there is any way to force any order of evaluating conditions. | How to Short-Circuit SQL Where Clause | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I am a novice Oracle user. I wanted to update a boolean field in my table for one of the record. Which one of these statements is correct ?
```
update MyTable set myBooleanColumn = 1 where UserId= 'xx12345';
or
update MyTable set myBooleanColumn = '1' where UserId= 'xx12345';
```
any help is greatly appreciated!! thanks ! | It depends on how the field is defined.
If it's defined as a CHAR(1) field, then you can store 'Y'/'N' or 'T'/'F' in it. To update the field, you'd use quotes since the value is a string literal.
```
UPDATE TestTable set myCharBooleanColumn = 'Y';
```
If the field is defined as NUMERIC, then the convention is 0=false and 1 or -1 is true (I've seen both).
```
UPDATE TestTable set myNumericBooleanColumn = 1;
```
Many people will advocate the CHAR(1) approach, but in the real world - you see both. It depends on how the boolean is implemented.
You can read more in Oracle's docs on Datatypes
<http://docs.oracle.com/cd/B28359_01/server.111/b28318/datatype.htm> | There is no such thing as a Boolean field in Oracle so your field is either a numeric field or a character field. If it's a numeric field you don't need to quote the number; if it's a character field you should quote the string.
You can find out the type of the column by querying [USER\_TAB\_COLUMNS](http://docs.oracle.com/cd/E11882_01/server.112/e40402/statviews_5475.htm#sthref3105):
```
select *
from user_tab_columns
where table_name = 'MYTABLE'
and column_name = 'MYBOOLEANCOLUMN'
```
or by [describing](http://docs.oracle.com/cd/E11882_01/server.112/e16604/ch_twelve019.htm#SQPUG040) the table. | How to update a boolean field in Oracle table | [
"",
"sql",
"oracle",
"oracle11g",
"oracle10g",
"sql-update",
""
] |
I'm designing a database and I have a question about how to make private some user data.
I have a user table, with name, city, birthday, biography, etc. And the user can make some data private (*so other users cannot see that data*).
First, I thought to add columns to the table to indicate if a column is private or not. For example:
```
User
-------
user_id | name | city | cityIsPrivate | birthday | birthdayIsPrivate
---------+------+------+---------------+----------+------------------
```
Or, another approach is to add a varchar column to indicate which columns are private:
```
User
-------
user_id | name | city | birthday | privateColumns
---------+------+------+----------+---------------
```
And this privateColumns will have this: `"city:NO; birthday:YES"`.
The user table will only have three columns that can be private or public. I will only have to add three columns more to the table.
**Any advice?** | Do not move data into a separate table if you are going to have to join to it every time you query the main table.
Boolean (or equivalent) columns to indicate privacy for every column on which a privacy setting can be applied:
1. is very simple to add.
2. takes up practically no space.
3. is much quicker to query.
4. shows that the privacy settings are an intrinsic part of the user data.
5. removes unnecessary complexity from the schema.
The facts that these relate to other columns and that they represent a single kind of logical object are just a distraction. Go for simplicity, performance, and logical correctness. | Move the list of you private columns to separate table whith three fields, like:
```
UserId |ColumnName |IsPrivate
-----------------------------
```
Then you can join your queries with that table and get the proper result set for each user, and at the same time change the columns of your user table.
If your User table is not supposed to change, it is better to move the list of your private columns to a separate table with a proper structure, like this:
```
UserId |cityIsPrivate |birthdayIsPrivate
----------------------------------------
```
Then you can join your user table with this table in a single query and get the result set you need.
But don't put it in the same table. The first approach brings redundancy to your database structure, and in your second case you would not be able to make SELECT queries by IsPrivate criteria.
"",
"sql",
"database",
"database-design",
""
] |
Suppose I had the following table in my oracle DB:
```
ID: Name: Parent_ID:
123 a 234
345 b 123
234 c 234
456 d 345
567 e 567
678 f 567
```
And what I would like to do is find, for each `ID` the `ULTIMATE parent ID` (described as row, that when you go up, recursively, based upon `Parent_ID` the row where you finally get that `ID = Parent_ID`).
So, for example, 345's parent is 123 and 123's parent is 234 and 234's parent is 234 (meaning it is the top of the chain), therefore 345's ultimate parent is 234 - I hope this makes sense...
So, my result should look as follows:
```
ID: Name: Ult_Parent_ID: Ult_Parent_Name:
123 a 234 c
345 b 234 c
234 c 234 c
456 d 234 c
567 e 567 e
678 f 567 e
```
I just found out about Oracle `Connect By` statments today, so this is completely new to me, but I'm imagining my query would have to look SOMETHING as follows:
```
SELECT ID, Name, Parent_ID as Ult_Parent_ID,
(SELECT Name from MyTable t2 WHERE t2.ID = t1.Parent_ID) as Ult_Parent_Name
FROM MyTable t1
CONNECT BY PRIOR Parent_ID = ID;
```
Now, like I said, this is my first stab at this kind of SQL - **THIS DOES NOT WORK** (I get the following error `[1]: ORA-01436: CONNECT BY loop in user data` and it highlights the table name in the SQL editor), and I also don't know where / how to use the `START WITH` clause for this kind of query, but the logic of it seems correct to me.
Please help / help point me in the right direction!!!
Thanks!!! | I think the CONNECT\_BY\_ROOT is what you need:
```
select x.*, t2.name ultimate_name
from
(
select t.id, t.name, CONNECT_BY_ROOT parent_id ultimate_id
from toto t
start with t.id = t.parent_id
connect by nocycle prior id = parent_id
) x, toto t2
where x.ultimate_id = t2.id
;
```
This gives:
```
456 d 234 c
345 b 234 c
123 a 234 c
234 c 234 c
678 f 567 e
567 e 567 e
``` | Please try this one:
```
SELECT ID, Name, Parent_ID as Ult_Parent_ID,
(SELECT Name from MyTable t2 WHERE t2.ID = t1.Parent_ID) as Ult_Parent_Name,
LEVEL
FROM MyTable t1
CONNECT BY NOCYCLE Parent_ID = PRIOR ID
START WITH Parent_ID = ID;
```
I believe that we have to use `NOCYCLE` because of how your roots are defined.
I added pseudo-column `LEVEL` just for illustration purposes. You **do not** have to have it in your final query.
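Both answers are Oracle-specific. For engines that lack `CONNECT BY` (or just for comparison), the same ultimate-parent lookup can be sketched with a standard recursive CTE; here run on SQLite from Python, using the question's sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE MyTable (ID INTEGER, Name TEXT, Parent_ID INTEGER);
    INSERT INTO MyTable VALUES
        (123,'a',234),(345,'b',123),(234,'c',234),
        (456,'d',345),(567,'e',567),(678,'f',567);
""")
rows = conn.execute("""
    WITH RECURSIVE walk(ID, Name, Ult_ID) AS (
        -- anchor: the roots, i.e. rows that are their own parent
        SELECT ID, Name, ID FROM MyTable WHERE ID = Parent_ID
        UNION ALL
        -- step: attach children, carrying the root's ID down the tree
        SELECT c.ID, c.Name, w.Ult_ID
        FROM MyTable c
        JOIN walk w ON c.Parent_ID = w.ID AND c.ID <> c.Parent_ID
    )
    SELECT w.ID, w.Name, w.Ult_ID, r.Name
    FROM walk w JOIN MyTable r ON r.ID = w.Ult_ID
    ORDER BY w.ID
""").fetchall()
for row in rows:
    print(row)  # e.g. (123, 'a', 234, 'c')
```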
[**SQL Fiddle**](http://sqlfiddle.com/#!4/8b2c1/2) with your test data | Oracle Connect By Prior for Recursive Query Syntax | [
"",
"sql",
"oracle",
"recursive-query",
""
] |
I am trying to convert from one date format to another. I am not sure how to write the functions.
My source date looks like `01/15/2009 01:23:15`
The format I need is `01152009`.
Thanks | Try this.
```
TO_CHAR(TO_DATE('01/15/2009 01:23:15','MM/DD/YYYY HH:MI:SS'),'MMDDYYYY')
```
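For reference, the same reformatting outside the database maps the Oracle masks `MM/DD/YYYY HH:MI:SS` and `MMDDYYYY` onto `strptime`/`strftime` codes:

```python
from datetime import datetime

src = "01/15/2009 01:23:15"
# Parse with the source mask, then re-emit with the target mask.
out = datetime.strptime(src, "%m/%d/%Y %H:%M:%S").strftime("%m%d%Y")
print(out)  # 01152009
```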
More info here,
<http://psoug.org/reference/date_func.html> | ```
select TO_CHAR(TO_DATE('01/15/2009 01:23:15','MM/DD/YYYY HH:MI:SS'),'MMDDYYYY') from dual
```
if your field is already of data type date then you should only do:
```
select TO_CHAR(<fieldname>,'mmddyyyy') ...
``` | How do I convert this oracle date | [
"",
"sql",
"oracle",
""
] |
I have the following exclude that brings back all of books that have had transactions in the past two minutes. I want to add another constraint that says only if from a series of stores.
In SQL it would be where location\_id = 1 or location\_id = 2 or location\_id = 3
Books have a location\_id
How can I apply that to the below query?
```
transaction_window = timezone.now() + datetime.timedelta(minutes=-2)
ts = Book.objects.exclude(book_id__in = Status.objects
.filter(transaction_time__gte=transaction_window)
.values_list('book_id', flat=True))
``` | You probably just want `location__in=(1, 2, 3)` | From your code, I assume the `Book` model has a field `book_id`, `Status` also has a field `book_id`, and these fields are just plain `IntegerField` or similar.
```
from django.db.models import Q
transaction_window = timezone.now() - datetime.timedelta(minutes=2)
statuses = Status.objects.filter(transaction_time__gte=transaction_window)
ts = Book.objects.filter(~Q(book__id__in=statuses)
| ~Q(location_id__in=(1, 2, 3)))
```
If my guess about your models is correct. | how to add OR constraints to exclude query in django? | [
"",
"sql",
"django",
"django-models",
"django-orm",
""
] |
I have a gender column
```
gender
```
Using MS SQL Server 2012
currently it is smallint which is 2 bytes -2^15 (-32,768) to 2^15-1 (32,767)
and works like follows
```
1 = male
0 = female
-1 = not specified
```
we do a lot of queries on this type of field. and the issues I have are
1. Its not intuitive as to what the data means without explanation
2. It's using two bytes
so I was wondering how others do this
I could do a 1 byte char(1)
```
m = male
f = female
x = not specified
```
Would this cause any performance issues on WHERE or JOIN clauses? | We (EHR software) store it as a 1-byte char, as it is concise and easier to understand while working with lots of demographic data.
The possible values map as follows:
* `U` - unknown or unspecified
* `M` - male
* `F` - female
* `NULL` - person hasn't been asked/no value on record.
For us, it is important to note if they have purposely decided to not provide their gender, or if it hasn't been captured yet (thus `NULL` vs `U`).
One consideration is mapping this to a more meaningful structure in the application (like an `enum` in .NET or similar). It can be a little annoying on the application side to have to use a `switch` or other approach to get the enum, whereas I can cast an enum directly from a numeric value.
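The same idea expressed in Python terms (a hypothetical sketch, not the .NET code the answer describes): an `Enum` keyed by the stored char, with `NULL` passing through as `None`:

```python
from enum import Enum

class Gender(Enum):
    UNKNOWN = "U"   # asked, but unspecified
    MALE = "M"
    FEMALE = "F"

def gender_from_db(value):
    """Map a char(1) column value to the enum; NULL (None) means never asked."""
    return None if value is None else Gender(value)

print(gender_from_db("M"), gender_from_db(None))  # Gender.MALE None
```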
It's a trivial concern, of course, but if you're curious how we solved it: we used a `struct` type that could be coerced from a string (explicit conversion from string) and static constants to act as possible enum values. | The much better way, in general, to do things like this is to use a domain or lookup table.
If your attribute is required, it should be non-nullable. If it is not-required, it should be nullable. `Null` means that the data is missing; the user didn't answer the question. It is a different value than an affirmative answer of "I don't know" or "None of your business." But I digress.
A schema like this is what you want:
```
create table dbo.person
(
. . .
gender_id tinyint null foreign key references dbo.gender(id) ,
. . .
)
create table dbo.gender
(
id tinyint not null primary key clustered ,
description varchar(128) not null unique ,
)
insert dbo.gender values( 1 , 'Female' )
insert dbo.gender values( 2 , 'Male' )
insert dbo.gender values( 3 , 'Prefer Not To Say' )
```
The domain of the column `gender_id` in the `person` table is enforced by the foreign key constraint and is
* `null` is missing or unknown data. No data was supplied.
* 1 indicates that the person is female.
* 2 indicates that the person is male.
* 3 indicates that the person didn't feel like giving you the information.
And, more importantly, when you need to expand the domain of values, like so:
```
insert dbo.gender values( 4 , 'Transgendered, male-to-female, post-gender reassignment surgery' )
insert dbo.gender values( 5 , 'Transgendered, male-to-female, receiving hormone therapy' )
insert dbo.gender values( 6 , 'Transgendered, female-to-male, post-gender reassignment surgery' )
insert dbo.gender values( 7 , 'Transgendered, female-to-male, receiving hormone therapy' )
```
Your code change [theoretically] consists of inserting a few rows into the domain table. User-interface controls, validators, etc., are (or should be) populating themselves from the domain table. | Gender Storage and data types | [
"",
"sql",
"sql-server",
""
] |
I have a table that has the following column (with some sample entries provided):
```
COLUMN NAME: pt_age_old
20 Years 8 Months 3 Weeks
1 Year 3 Months 2 Weeks
58 Years 7 Months
1 Year
7 Years 11 Months 2 Weeks
26 Years 6 Months
56 Years 6 Months
48 Years 6 Months 4 Weeks
29 Years 11 Months
4 Years 3 Months
61 Years 8 Months 4 Weeks
```
I have tried to cast it to datetime, of course this did not work. Same with convert.
Keep getting the following message:
```
Msg 241, Level 16, State 1, Line 2
Conversion failed when converting date and/or time from character string.
```
Main 2 questions are:
Is this type of conversion even possible with this existing string format?
If so, can you steer me in the right direction so I can make this happen?
Thanks! | This could be done using custom code such as below - I've assumed the values you're using are people's ages, and you're trying to work out their date of birth given their age today.
You can see the below code in action here: <http://sqlfiddle.com/#!3/c757c/2>
```
create function dbo.AgeToDOB(@age nvarchar(32))
returns datetime
as
begin
declare @pointer int = 0
, @pointerPrev int = 1
, @y nvarchar(6)
, @m nvarchar(6)
, @w nvarchar(6)
, @d nvarchar(6)
, @result datetime = getutcdate() --set this to the date we're working to/from
--convert various ways of expressing units to a standard
set @age = REPLACE(@age,'Years','Y')
set @age = REPLACE(@age,'Year','Y')
set @age = REPLACE(@age,'Months','M')
set @age = REPLACE(@age,'Month','M')
set @age = REPLACE(@age,'Weeks','W')
set @age = REPLACE(@age,'Week','W')
set @age = REPLACE(@age,'Days','D')
set @age = REPLACE(@age,'Day','D')
set @pointer = isnull(nullif(CHARINDEX('Y',@age),0),@pointer)
set @y = case when @pointer > @pointerprev then SUBSTRING(@age,@pointerprev,@pointer - @pointerprev) else null end
set @pointerPrev = @pointer + 1
set @pointer = isnull(nullif(CHARINDEX('M',@age),0),@pointer)
set @m = case when @pointer > @pointerprev then SUBSTRING(@age,@pointerprev,@pointer - @pointerprev) else null end
set @pointerPrev = @pointer + 1
set @pointer = isnull(nullif(CHARINDEX('W',@age),0),@pointer)
set @w = case when @pointer > @pointerprev then SUBSTRING(@age,@pointerprev,@pointer - @pointerprev) else null end
set @pointerPrev = @pointer + 1
set @pointer = isnull(nullif(CHARINDEX('D',@age),0),@pointer)
set @d = case when @pointer > @pointerprev then SUBSTRING(@age,@pointerprev,@pointer - @pointerprev) else null end
set @result = dateadd(YEAR, 0 - isnull(cast(@y as int),0), @result)
set @result = dateadd(MONTH, 0 - isnull(cast(@m as int),0), @result)
set @result = dateadd(Week, 0 - isnull(cast(@w as int),0), @result)
set @result = dateadd(Day, 0 - isnull(cast(@d as int),0), @result)
return @result
end
go
select dbo.AgeToDOB( '20 Years 8 Months 3 Weeks')
```
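The same parsing is easy to prototype outside SQL; a hedged Python sketch (simple calendar math, with the day clamped so month/year subtraction never lands on an invalid date such as Feb 30):

```python
import re
from datetime import date, timedelta

UNIT_RE = re.compile(r"(\d+)\s+(Year|Month|Week|Day)s?", re.IGNORECASE)

def age_to_dob(age, ref):
    """Subtract an age label such as '20 Years 8 Months 3 Weeks' from ref."""
    parts = {unit.lower(): int(num) for num, unit in UNIT_RE.findall(age)}
    y = ref.year - parts.get("year", 0)
    m = ref.month - parts.get("month", 0)
    y, m = y + (m - 1) // 12, (m - 1) % 12 + 1   # normalise month into 1..12
    d = date(y, m, min(ref.day, 28))             # clamp day to stay valid
    return d - timedelta(weeks=parts.get("week", 0), days=parts.get("day", 0))

print(age_to_dob("20 Years 8 Months 3 Weeks", date(2013, 10, 16)))  # 1993-01-26
```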
NB: there's a lot of scope for optimisation here; I've left it simple above to help keep it clear what's going on. | Technically it is possible to convert the relative time label into a datetime, but you will need a reference date though (20 Years 8 Months 3 Weeks as of '2013-10-16'). The reference date is most likely today (using GETDATE() or CURRENT\_TIMESTAMP), but there is a chance it is a different date. You will have to parse the label, convert it to a duration, and then apply the duration to the reference time. This will be inherently slow.
There are at least two possible ways of doing this, write a FUNCTION to parse and convert the relative time label or use a .NET extended function to do the conversion. You would need to identify all of the possible labels otherwise the conversion will fail. Keep in mind that the reference date is important since 2 months is not a constant number of days (Jan-Feb = either 58 or 59 days).
Here is a sample of what the function might look like:
```
-- Test data
DECLARE @test varchar(50)
, @ref_date datetime
SET @test = '20 Years 8 Months 3 Weeks'
SET @ref_date = '2013-10-16' -- or use GETDATE() / CURRENT_TIMESTAMP
-- Logic in function
DECLARE @pos int
, @ii int
, @val int
, @label varchar(50)
, @result datetime
SET @pos = 0
SET @result = @ref_date
WHILE (@pos <= LEN(@test))
BEGIN
-- Parse the value first
SET @ii = NULLIF(CHARINDEX(' ', @test, @pos), 0)
SET @label = RTRIM(LTRIM(SUBSTRING(@test, @pos, @ii - @pos)))
SET @val = CAST(@label AS int)
SET @pos = @ii + 1
-- Parse the label next
SET @ii = NULLIF(CHARINDEX(' ', @test, @pos), 0)
IF (@ii IS NULL) SET @ii = LEN(@test) + 1
SET @label = RTRIM(LTRIM(SUBSTRING(@test, @pos, @ii - @pos)))
SET @pos = @ii + 1
-- Apply the value and offset
IF (@label = 'Days') OR (@label = 'Day')
SET @result = DATEADD(dd, -@val, @result)
ELSE IF (@label = 'Weeks') OR (@label = 'Week')
SET @result = DATEADD(ww, -@val, @result)
ELSE IF (@label = 'Months') OR (@label = 'Month')
SET @result = DATEADD(mm, -@val, @result)
ELSE IF (@label = 'Years') OR (@label = 'Year')
SET @result = DATEADD(yy, -@val, @result)
END
SELECT @result
-- So if this works correctly,
-- 20 Years 8 Months 3 Weeks from 2013-10-16 == 1993-01-26
``` | TSQL - converting date string to datetime | [
"",
"sql",
"t-sql",
""
] |
I have an ASP .NET 4.5 application. On a maintenance page is a text box which allows administrative users to write SQL which is executed directly against the SQL Server 2008 database.
Occasionally one of the administrative users writes some inefficient SQL and the SQL Server process starts using up all the memory and CPU cycles on the server. We then have to start and stop the service to get the server responsive again.
Is there any way that I can stop these from queries consuming all the resources? These queries will not return fast enough for the user to see them so it's okay to cancel the query.
**Edit**
I realise it would be better to prevent users from writing SQL queries, but unfortunately I cannot remove this functionality from the application. The admin users don't want to be educated. | You can set the query governor at the server level but not sure about a per user or per connection / application limit.
<http://technet.microsoft.com/en-us/magazine/dd421653.aspx>
That said, it is likely a poor / dangerous practice to allow users to directly enter SQL queries. | First off I would make sure these queries are not locking any tables (either use NOLOCK or SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED for each query ran).
As far as resources go, I would look at something like the Resource Governor. We use this for any adhoc reports on our production OLTP system.
Pinal Dave has a great blog entry on it. You can also look at Technet or other MS sites for information on what it is and how to set it up.
<http://blog.sqlauthority.com/2012/06/04/sql-server-simple-example-to-configure-resource-governor-introduction-to-resource-governor/> | How can inefficient SQL queries be prevented from slowing a database server | [
"",
"asp.net",
"sql",
"sql-server",
"performance",
""
] |
Is there a way to check the integrity of a SQL Server database? I have these developers (OK, me included) that constantly change objects in our database (views/functions/stored procs) and sometimes we change signatures of objects or drop objects and other objects become invalid/uncompilable. I would like to get some sort of a report/alert on what objects are broken preferably one that could be integrated with CruiseControl.NET. Ideas? | I think what you need is database unit tests that can then be executed through CruiseControl.NET. I recommend using tSQLt (<http://tsqlt.org>). RedGate also sells a nice GUI over the top of this product, but is not required to get the functionality that you need.
(<http://www.red-gate.com/products/sql-development/sql-test/>) | Ola Hallengren provides an excellent way to check the integrity of databases.
Please refer
<http://ola.hallengren.com/sql-server-integrity-check.html> | Automatically checking the integrity of the SQL Server Database | [
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2008-r2",
""
] |
I have two SQL tables, call them t1 and t2. Both have column names str\_id. The str\_id is typically a 10 digit string, but I have special str\_ids that begin with 'sp' followed by a normal 10 digit str\_id. I would like to select all the 'sp' str\_ids from t1, remove the sp, and look up the str\_id in t2. I tried this query:
```
Select
str_id,
desired_column
From
t2
Where
str_id in (
Select
substr(str_id, 3)
From
t1
Where
str_id Like 'sp%' and
other_column = 'my_value'
);
```
Syntactically, this command seems correct, but when I run it my SQL frontend just hangs. If I run the command without the SUBSTR, then it executes fine, but it doesn't return the results I want because all the str\_ids have sp prefixed. Just for reference, the command that works is:
```
Select
str_id,
desired_column
From
t2
Where
str_id In (
Select
str_id
From
t1
Where
str_id Like 'sp%' and
other_column = 'my_value'
);
```
What can I do to fix the query above? | Show the `EXPLAIN` outputs of those two; I assume that with the normal one MySQL is able to directly cross-reference indexes, while the `SUBSTR` one requires creating an intermediate result set and checking each individual item from t2. Which is a nice example of why prefixing values with significant data is about as bad as storing delimited strings in one column: hard to work with. | Edited my answer. Try running this instead
```
SELECT str_id,
(
SELECT desired_column
FROM t2
WHERE str_id = SUBSTR(t1.str_id, 3)
)
FROM t1
WHERE str_id LIKE 'sp%' AND other_column = 'my_value'
``` | SQL works fine until I add a SUBSTR to subquery, then it just hangs. Why? | [
"",
"mysql",
"sql",
""
] |
The existing design for this program is that all changes are written to a changelog table with a timestamp. In order to obtain the current state of an item's attribute we `JOIN` onto the changelog table and take the row having the most recent timestamp.
This is a messy way to keep track of current values, but we cannot readily change this changelog setup at this time.
I intend to slightly modify the behavior by adding an "IsMostRecent" bit to the changelog table. This would allow me to simply pull the row having that bit set, as opposed to the `MAX()` aggregation or recursive seek.
What strategy would you employ to make sure that bit is always appropriately set? Or is there some alternative you suggest which doesn't affect the current use of the logging table?
Currently I am considering a trigger approach, which turns the bit off on all other rows and then turns it on for the most recent row on an `INSERT`. | **Edit:** Started this answer before Richard Harrison answered, promise :)
I would suggest another table with the structure similar to below:
```
VersionID TableName UniqueVal LatestPrimaryKey
1 Orders 209 12548
2 Orders 210 12549
3 Orders 211 12605
4 Orders 212 10694
VersionID -- being the tables key
TableName -- just in case you want to roll out to multiple tables
UniqueVal -- is whatever groups multiple rows into a single item with history (eg Order Number or some other value)
LatestPrimaryKey -- is the identity key of the latest row you want to use.
```
Then you can simply `JOIN` to this table to return only the latest rows.
If you already have a trigger inserting rows into the changelog table this could be adapted:
```
INSERT INTO [MyChangelogTable]
(Primary, RowUpdateTime)
VALUES (@PrimaryKey, GETDATE())
-- Add onto it:
UPDATE [LatestRowTable]
SET [LatestPrimaryKey] = @PrimaryKey
WHERE [TableName] = 'Orders'
AND [UniqueVal] = @OrderNo
```
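To make the trigger idea concrete, here is a runnable miniature (SQLite driven from Python rather than T-SQL; the `INSERT OR REPLACE` plays the role a `MERGE` or `IF EXISTS ... UPDATE` would in SQL Server):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE changelog (id INTEGER PRIMARY KEY, item_id INTEGER, value TEXT);
    CREATE TABLE latest_row (item_id INTEGER PRIMARY KEY, changelog_id INTEGER);
    -- After every insert, point latest_row at the newest changelog row.
    CREATE TRIGGER trg_latest AFTER INSERT ON changelog
    BEGIN
        INSERT OR REPLACE INTO latest_row (item_id, changelog_id)
        VALUES (NEW.item_id, NEW.id);
    END;
""")
conn.execute("INSERT INTO changelog (item_id, value) VALUES (11, 'first')")
conn.execute("INSERT INTO changelog (item_id, value) VALUES (11, 'second')")
latest = conn.execute(
    "SELECT changelog_id FROM latest_row WHERE item_id = 11").fetchone()[0]
print(latest)  # 2, the id of the 'second' row
```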
Alternatively it could be done as a merge to capture inserts as well. | I've done this before by having a "MostRecentRecorded" table which simply holds the most recently inserted record (Id and entity ID), updated from a trigger.
Having an extra column for this isn't right - and can get you into problems with transactions and reading existing entries.
In the first version of this it was a simple case of
```
BEGIN TRANSACTION
INSERT INTO simlog (entityid, logmessage)
VALUES (11, 'test');
UPDATE simlogmostrecent
SET lastid = @@IDENTITY
WHERE simlogentityid = 11
COMMIT
```
Ensuring that the MostRecent table had an entry for each record in SimLog can be done in the query but ISTR we did it during the creation of the entity that the SimLog referred to (the above is my recollection of the first version - I don't have the code to hand).
However the simple version caused problems with multiple writers as could cause a deadlock or transaction failure; so it was moved into a trigger. | How should I reliably mark the most recent row in SQL Server table? | [
"",
"sql",
"triggers",
"sql-server-2008-r2",
""
] |
I understand that any join can be done with a CROSS join and a WHERE clause.
I did some experiments, and it looks like placing the equality predicate inside the where clause or as the parameter of an inner join yields the same result with the same performance.
Furthermore, using inner joins does not save any typing, since the join predicate must still be specified.
I guess that the same is true for the various kinds of outer joins. Just specify that the values can be null or not null.
Can I just go with only cross joins ? | Not every join can be done with a CROSS join and a WHERE clause. Each of the joins (cross, inner, and outer) has its own logical significance.
1. **A cross join applies only one phase—Cartesian Product.**
2. **An inner join applies two phases—Cartesian Product and Filter.**
3. **An outer join applies three phases—Cartesian Product, Filter, and Add Outer Rows.**
A confusing aspect of queries containing an OUTER JOIN clause is whether to specify a logical expression in the ON filter or in the WHERE filter. The main difference between the two is that **ON is applied before adding outer rows** , while **WHERE is applied afterwards**. An elimination of a row from the preserved table (specified as left outer or right outer) by the ON filter is not final because it will be added back; **an elimination of a row by the WHERE filter, by contrast, is final.**
This logical difference between the ON and WHERE clauses **exists only when using an outer join.** When you use an inner join, it doesn't matter whether you specify your logical expressions in the ON clause with the inner join table operator or in the WHERE clause with the cross join table operator.
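These phases are easy to see in a runnable example (SQLite driven from Python): an inner join equals a cross join plus WHERE, while for an outer join the same predicate behaves differently in ON versus WHERE:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (id INTEGER);
    CREATE TABLE b (id INTEGER);
    INSERT INTO a VALUES (1), (2);
    INSERT INTO b VALUES (2), (3);
""")

def q(sql):
    return conn.execute(sql).fetchall()

# Inner join: predicate in ON, or in WHERE after a cross join -- same result.
inner = q("SELECT a.id, b.id FROM a JOIN b ON a.id = b.id")
cross = q("SELECT a.id, b.id FROM a CROSS JOIN b WHERE a.id = b.id")
print(inner == cross)  # True

# Outer join: ON keeps the unmatched preserved row; WHERE removes it for good.
on_rows    = q("SELECT a.id, b.id FROM a LEFT JOIN b "
               "ON a.id = b.id AND b.id = 2 ORDER BY a.id")
where_rows = q("SELECT a.id, b.id FROM a LEFT JOIN b "
               "ON a.id = b.id WHERE b.id = 2")
print(on_rows)     # [(1, None), (2, 2)]
print(where_rows)  # [(2, 2)]
```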
Hope this helps!!! | The answer is that performance may not be impacted; however, readability and code clarity are hindered when experienced developers see cross joins that are really inner joins. Other than that it is a matter of personal preference.
"",
"sql",
"database",
"join",
""
] |
I have two almost identical tables A and B in a SQL Server database.
Table A contains data and a primary key X set to `Is Identity == No`.
Table B contains no data but has primary key X set to `Is Identity == Yes` (`Identity Increment = 1, Identity Seed = 1`).
The data in primary key X increments by 1 to 100 i.e table A has 100 records, the first record is 1 and the 100th record is 100.
How do I copy the data from table A to table B in the simplest way without errors. | You use `set identity_insert tableB on` before running your `insert`
```
set identity_insert tableB on
insert tableB (Id, field1, field2)
select id, field1, field2 from tablea
set identity_insert tableB off
``` | Yes, you need to enumerate the columns. On the other hand, is the identity on table B necessary, given that it is taken from A?
> The data in primary key X increments by 1 to 100 i.e table A has 100 records, the first record is 1 and the 100th record is 100.
Following this rule, the incrementing identity on B seems unnecessary, since the values are taken from table A.
Eventually, consider a RESEED on B to meet the IDENT_CURRENT limitation. ( [Reseed information](http://blog.sqlauthority.com/2007/03/15/sql-server-dbcc-reseed-table-identity-value-reset-table-identity/))
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a sql table like :
```
id buy_product buy_product_total sell_product sell_product_total
1 apple 5
2 banana 8
3 cake 20
4 apple 1
5 cake 2
6 apple 2
```
My problem is, I want to show each product name and how many are left, like:
```
product_name left
apple 6
cake 18
```
How can I produce that result with a SQL query?
I created the tables, as answerers suggested, as:
Buy Table
```
id product_name total
1 apple 5
2 banana 8
3 cake 20
4 apple 2
```
Sell Table
```
id product_name total
1 apple 1
2 cake 2
```
I want the resulting table to look like this
```
product_name left
apple 6
banana 8
cake 18
``` | As others have noted, your table structure is less than optimal.
However, given what you have, this will give you the results you're after.
```
select product, sum(total) from
(
select buy_product as product, buy_product_total as total
from yourtable
where buy_product is not null
union
select sell_product, -sell_product_total
from yourtable
where sell_product is not null
) v
group by product
```
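A runnable version of the same idea (SQLite driven from Python, data adapted from the question; note `UNION ALL` is used here, since a plain `UNION` would silently collapse duplicate rows, e.g. the same product bought twice in the same quantity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE yourtable (id INTEGER, buy_product TEXT, buy_product_total INT,
                            sell_product TEXT, sell_product_total INT);
    INSERT INTO yourtable VALUES
        (1,'apple',5,NULL,NULL), (2,'banana',8,NULL,NULL), (3,'cake',20,NULL,NULL),
        (4,NULL,NULL,'apple',1), (5,NULL,NULL,'cake',2),   (6,'apple',2,NULL,NULL);
""")
totals = conn.execute("""
    SELECT product, SUM(total) FROM (
        SELECT buy_product AS product, buy_product_total AS total
        FROM yourtable WHERE buy_product IS NOT NULL
        UNION ALL
        SELECT sell_product, -sell_product_total
        FROM yourtable WHERE sell_product IS NOT NULL
    ) v GROUP BY product ORDER BY product
""").fetchall()
print(totals)  # [('apple', 6), ('banana', 8), ('cake', 18)]
```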
Or, with your two tables
```
select product_name, sum(total) from
(
select product_name, total
from buy_table
union
select product_name, -total
from sell_table
) v
group by product_name
``` | This is not a good table design; it would be better for buy and sell to share the same column, with buys as positive values and sells as negative.
But to answer your question, suppose that your table name is myTable.
Note: you can execute every select separately to understand it better.
```
select buy_product as product_name, (buy_total - sell_total) as left
from (
(select buy_product, sum(buy_product_total) as buy_total
from myTable where buy_product_total is not null group by buy_product) as buy_list
inner join
(select sell_product, sum(sell_product_total) as sell_total
from myTable where sell_product_total is not null group by sell_product) as sell_list
on buy_list.buy_product = sell_list.sell_product
)
``` | SQL Table Solution With Calculating Different Columns in Same Table | [
"",
"mysql",
"sql",
""
] |
According to another SO post ([SQL: How to keep rows order with DISTINCT?](https://stackoverflow.com/questions/7026234/sql-how-to-keep-rows-order-with-distinct)), distinct has pretty undefined behavior as far as sorting.
I have a query:
```
select col_1 from table order by col_2
```
This can return values like
```
3
5
3
2
```
I need to then select a distinct on these that preserves ordering, meaning I want
```
select distinct(col_1) from table order by col_2
```
to return
```
3
5
2
```
but not
```
5
3
2
```
Here is what I am actually trying to do. Col\_1 is a user id, and col\_2 is a login timestamp event by that user. So the same user (col\_1) can have many login times. I am trying to build a historical list of users in the order in which they were first seen in the system. I would like to be able to say "our first user ever was X, our second user ever was Y", and so on.
That post seems to suggest to use a group by, but group by is not meant to return an ordering of rows, so I do not see how or why this would be applicable here, since it does not appear group by will preserve any ordering. In fact, another SO post gives an example where group by will destroy the ordering I am looking for: see "Peter" in [what is the difference between GROUP BY and ORDER BY in sql](https://stackoverflow.com/questions/1277460/what-is-the-difference-between-group-by-and-order-by-in-sql). Is there anyway to guarantee the latter result? The strange thing is, if *I* were implementing the DISTINCT clause, I would surely do the order by first, then take the results and do a linear scan of the list and preserve the ordering naturally, so I am not sure why the behavior is so undefined.
EDIT:
Thank you all! I have accepted IMSoP's answer because not only was there an interactive example that I could play around with (thanks for turning me on to SQL Fiddle), but they also explained why several things worked the way they worked, instead of simply "do this". Specifically, it was unclear that GROUP BY does not destroy values in the other columns outside of the GROUP BY (rather, it keeps them in some sort of internal list), and these values can still be examined in an ORDER BY clause. | This all has to do with the "logical ordering" of SQL statements. Although a DBMS might actually retrieve the data according to all sorts of clever strategies, it has to behave according to some predictable logic. As such, the different parts of an SQL query can be considered to be processed "before" or "after" one another in terms of how that logic behaves.
As it happens, the `ORDER BY` clause is the very last step in that logical sequence, so it can't change the behaviour of "earlier" steps.
If you use a `GROUP BY`, the rows have been bundled up into their groups by the time the `SELECT` clause is run, let alone the `ORDER BY`, so you can only look at columns which have been grouped by, or "aggregate" values calculated across all the values in a group. (MySQL implements [a controversial extension to `GROUP BY`](https://dev.mysql.com/doc/refman/8.0/en/group-by-handling.html) where you can mention a column in the `SELECT` that can't logically be there, and it will pick one from an arbitrary row in that group).
If you use a `DISTINCT`, it is logically processed *after* the `SELECT`, but the `ORDER BY` still comes afterwards. So only once the `DISTINCT` has thrown away the duplicates will the remaining results be put into a particular order - but the rows that have been thrown away can't be used to determine that order.
---
As for how to get the result you need, the key is to find a value to sort by which is valid *after* the `GROUP BY`/`DISTINCT` has (logically) been run. Remember that if you use a `GROUP BY`, any aggregated values are still valid - an aggregate function can look at all the values in a group. This includes `MIN()` and `MAX()`, which are ideal for ordering by, because "the lowest number" (`MIN`) is the same thing as "the first number if I sort them in ascending order", and vice versa for `MAX`.
So to order a set of distinct `foo_number` values based on the lowest applicable `bar_number` for each, you could use this:
```
SELECT foo_number
FROM some_table
GROUP BY foo_number
ORDER BY MIN(bar_number) ASC
```
[Here's a live demo with some arbitrary data](http://sqlfiddle.com/#!9/d58f7/2).
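The `GROUP BY ... ORDER BY MIN(...)` trick is easy to verify outside the database as well. Here is a quick sketch using SQLite from Python's `sqlite3` (the table and column names follow the example above), fed with the question's `3, 5, 3, 2` data:

```python
import sqlite3

# In-memory demo of "distinct but keep first-seen order" via GROUP BY + ORDER BY MIN(...).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE some_table (foo_number INTEGER, bar_number INTEGER)")
# bar_number is the ordering column; foo_number repeats (3 appears twice).
conn.executemany(
    "INSERT INTO some_table VALUES (?, ?)",
    [(3, 1), (5, 2), (3, 3), (2, 4)],
)
rows = conn.execute(
    "SELECT foo_number FROM some_table "
    "GROUP BY foo_number ORDER BY MIN(bar_number) ASC"
).fetchall()
print([r[0] for r in rows])  # [3, 5, 2] -- first-seen order, duplicates dropped
```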
---
***EDIT:*** In the comments, it was discussed why, if an ordering is applied before the grouping / de-duplication takes place, that order is not applied to the groups. If that were the case, you would still need a strategy for which row was kept in each group: the first, or the last.
As an analogy, picture the original set of rows as a set of playing cards picked from a deck, and then sorted by their face value, low to high. Now go through the sorted deck and deal them into a separate pile for each suit. Which card should "represent" each pile?
If you deal the cards face up, the cards showing at the end will be the ones with the *highest* face value (a "keep last" strategy); if you deal them face down and then flip each pile, you will reveal the *lowest* face value (a "keep first" strategy). Both are obeying the original order of the cards, and the instruction to "deal the cards based on suit" doesn't automatically tell the dealer (who represents the DBMS) which strategy was intended.
If the final piles of cards are the groups from a `GROUP BY`, then `MIN()` and `MAX()` represent picking up each pile and looking for the lowest or highest value, regardless of the order they are in. But because you can look inside the groups, you can do other things too, like adding up the total value of each pile (`SUM`) or how many cards there are (`COUNT`) etc, making `GROUP BY` much more powerful than an "ordered `DISTINCT`" could be. | I would go for something like
```
select col1
from (
select col1,
rank () over(order by col2) pos
from table
)
group by col1
order by min(pos)
```
In the subquery I calculate the position, then in the main query I do a group by on col1, using the smallest position to order.
Here is the [demo in SQLFiddle](http://sqlfiddle.com/#!4/24dea/2) (this was Oracle; the MySql info was added later).
**Edit for MySql:**
```
select col1
from (
select col1 col1,
@curRank := @curRank + 1 AS pos
from table1, (select @curRank := 0) p
) sub
group by col1
order by min(pos)
```
And here [the demo for MySql](http://sqlfiddle.com/#!9/08422/3). | SQL select distinct but "keep first"? | [
"",
"mysql",
"sql",
""
] |
I have a DB called Johnson containing 50 tables. The team that uses this DB has asked if I can provide them with a size estimate for each table
How can I display a list of all the tables within the DB and their respective size in GB or MB using SQL Server? | ```
SET NOCOUNT ON
DBCC UPDATEUSAGE(0)
-- DB size.
EXEC sp_spaceused
-- Table row counts and sizes.
CREATE TABLE #t
(
[name] NVARCHAR(128),
[rows] CHAR(11),
reserved VARCHAR(18),
data VARCHAR(18),
index_size VARCHAR(18),
unused VARCHAR(18)
)
INSERT #t EXEC sp_msForEachTable 'EXEC sp_spaceused ''?'''
SELECT *
FROM #t
-- # of rows.
SELECT SUM(CAST([rows] AS int)) AS [rows]
FROM #t
DROP TABLE #t
```
When run in a database context, this gives results something like the following (Northwind database):
 | You could use SSMS to display the size of biggest 1000 tables thus:
 | How to analyze tables for size in a single DB using SQL Server? | [
"",
"sql",
"sql-server",
""
] |
I am having trouble with a SQL Server query. I have a couple of tables involved `[Order]` (I know, not named well) and `[Order Entry]`.
Order Entry is basically a "Line Item" on an Order (so there are one or more per Order). There are various columns in Order Entry, one of which is `ItemID` (there is only one `ItemID` per Order Entry). I want a query that returns all rows (Orders) that don't contain any Order Entries whose `ItemID` is in a defined list.
Here is what I have so far:
```
SELECT DISTINCT
oe.OrderID, StoreID
FROM
OrderEntry oe
INNER JOIN
[order] o ON o.ID = oe.OrderID
AND o.StoreID = oe.StoreID
AND oe.ItemID NOT IN (60, 856, 857, 858, 902, 59, 240, 57, 217, 853, 855, 854, 41)
```
What I want to do seem similar to this (below) but I can't figure it out:
[SELECT all orders with more than one item and check all items status](https://stackoverflow.com/questions/14052456/select-all-orders-with-more-than-one-item-and-check-all-items-status)
Help please! (much appreciated) | I was able to take a few pieces from several of you (Miguel Guzman - you provided the spark I needed to get this) and got it working. Here is my final Query:
```
SELECT o.ID, o.StoreID
FROM [Order] o
JOIN PSD_ServiceTicket st
ON o.ID = st.WorkOrderID
AND o.StoreID = st.StoreID
WHERE o.StoreID = 101
AND o.Time >= '10/1/2013'
AND o.Time <= '10/18/2013'
AND o.ID NOT IN (SELECT OrderID
FROM OrderEntry oe
INNER JOIN [order] o
ON o.ID = oe.OrderID
AND o.StoreID = oe.StoreID
WHERE oe.StoreID = 101
AND o.Time >= '10/1/2013'
AND o.Time <= '10/18/2013'
AND oe.ItemID IN ( 60,856,857,858,902,59,240,57,217,853,855,854,41 )
)
AND (st.ServiceTypeID = 1 OR st.ServiceTypeID = 4 )
```
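The `NOT IN` anti-join pattern at the heart of this query can be sketched on toy data (SQLite via Python's `sqlite3`; the table names and item IDs here are illustrative only, not the real schema):

```python
import sqlite3

# Keep only orders that have NO line item in the excluded list.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Orders (ID INTEGER)")
conn.execute("CREATE TABLE OrderEntry (OrderID INTEGER, ItemID INTEGER)")
conn.executemany("INSERT INTO Orders VALUES (?)", [(1,), (2,), (3,)])
conn.executemany("INSERT INTO OrderEntry VALUES (?, ?)", [
    (1, 60),   # order 1 contains an excluded item
    (1, 5),
    (2, 5),    # order 2 contains only allowed items
    (3, 857),  # order 3 contains an excluded item
])
rows = conn.execute("""
    SELECT ID FROM Orders
    WHERE ID NOT IN (
        SELECT OrderID FROM OrderEntry WHERE ItemID IN (60, 856, 857, 858)
    )
""").fetchall()
print([r[0] for r in rows])  # [2]
```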
Thanks to everyone! | If I'm understanding you correctly, you want something like this. Starting your query with the Orders table and using a left join will ensure that you get the orders you're looking for. By left joining with a match on ItemID, you can then check for nulls in your where statement to find the orders that don't have those line items.
```
select distinct o.OrderID, o.StoreID
from Orders o
left join OrderEntry oe on oe.OrderID = o.ID and oe.StoreID = o.StoreID
and oe.ItemID in (60,856,857,858,902,59,240,57,217,853,855,854,41)
where oe.OrderID is null
```
So to break this down a bit:
* "select distinct... from Orders" means "get me everything from orders"
* "left join OrderEntry on..." means "get me all OrderEntry records that meet this criteria; if no records meet the criteria, nulls are OK"
* "where oe.OrderID is null" means "I only want to see the items in Orders that had no match in the left join"
If we had used an inner join instead, we'd have lost that "nulls are OK" part, so the where clause wouldn't work. | Selecting Orders with multiple line items but not when one or more line items contains a defined list | [
"",
"sql",
"sql-server",
"inner-join",
""
] |
I have tables:
```
Users{UserId ...}
Professors{UserId ...}
```
I set `UserId` to be PK in both tables and made 1:1 relationship.
But if I try to insert new user it doesn't work as it requires insert in the `Professor` table too.
I want to make that one user can have only 1 record in Professor table but I also want to make it that it doesn't have to exist in Professor table (I don't want to make all users professors :) ).
How to set 1 to (0...1) relationship in SQL Server Management Studio?
I know that set enforce key constraints to NO is not a solution :) | If you have these requirements:
* a User can be Professor - or not
* a Professor is always a User
then you are correct that it's a `1 :: 0..1` relationship. In SQL, it can be implemented like this:
```
CREATE TABLE Users
( UserId INT NOT NULL
, ...
, PRIMARY KEY (UserId)
) ;
CREATE TABLE Professors
( UserId INT NOT NULL
, ...
, PRIMARY KEY (UserId)
, FOREIGN KEY (UserId)
REFERENCES Users (UserId)
) ;
```
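To see the `1 :: 0..1` behaviour in action (a user may or may not have a matching Professors row, while an orphan Professors row is rejected), here is a sketch using SQLite from Python; SQLite enforces the same foreign-key logic once the pragma is on, and the table and column names follow the DDL above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs this to enforce FKs
conn.execute("CREATE TABLE Users (UserId INTEGER NOT NULL PRIMARY KEY)")
conn.execute(
    "CREATE TABLE Professors ("
    "UserId INTEGER NOT NULL PRIMARY KEY, "
    "FOREIGN KEY (UserId) REFERENCES Users (UserId))"
)
conn.execute("INSERT INTO Users (UserId) VALUES (1)")       # a plain user: fine
conn.execute("INSERT INTO Users (UserId) VALUES (2)")
conn.execute("INSERT INTO Professors (UserId) VALUES (2)")  # user 2 is also a professor: fine

orphan_rejected = False
try:
    conn.execute("INSERT INTO Professors (UserId) VALUES (99)")  # no such user
except sqlite3.IntegrityError:
    orphan_rejected = True
print("orphan professor rejected:", orphan_rejected)  # True
```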
From what you describe, you probably have defined the foreign key constraint in reverse order. | I think you should use a foreign key here.
The Professor ID should be the foreign key in the user table; that will solve your problem.
Just look up what a foreign key is and how to write a foreign key query. | How to set 1 to 0...1 relationship in SQL Server Management Studio | [
"",
"sql",
"database",
"database-design",
"relational-database",
"ssms",
""
] |
I am about to embark on a project for work that is very outside my normal scope of duties. As a SQL DBA, my initial inclination was to approach the project using a SQL database but the more I learn about NoSQL, the more I believe that it might be the better option. I was hoping that I could use this question to describe the project at a high level to get some feedback on the pros and cons of using each option.
The project is relatively straightforward. I have a set of objects that have various attributes. Some of these attributes are common to all objects whereas some are common only to a subset of the objects. What I am tasked with building is a service where the user chooses a series of filters that are based on the attributes of an object and then is returned a list of objects that matches all^ of the filters. When the user selects a filter, he or she may be filtering on a common or subset attribute but that is abstracted on the front end.
*^ There is a chance, depending on user feedback, that the list of objects may match only some of the filters and the quality of the match will be displayed to the user through a score that indicates how many of the criteria were matched.*
After watching this talk by Martin Folwler (<http://www.youtube.com/watch?v=qI_g07C_Q5I>), it would seem that a document-style NoSQL database should suit my needs but given that I have no experience with this approach, it is also possible that I am missing something obvious.
Some additional information - The database will initially have about 5,000 objects with each object containing 10 to 50 attributes but the number of objects will definitely grow over time and the number of attributes could grow depending on user feedback. In addition, I am hoping to have the ability to make rapid changes to the product as I get user feedback so flexibility is very important.
Any feedback would be very much appreciated and I would be happy to provide more information if I have left anything critical out of my discussion. Thanks. | May as well make this an answer. I should comment that I'm not strong in NoSQL, so I tend to lean towards SQL.
I'd do this as a three-table set. You will see it referred to as entity-attribute-value (EAV) logic on the web...it's a way of handling multiple dynamic attributes for items. Let's say you have a bunch of products and each one has a few attributes.
```
Prd 1 - a,b,c
Prd 2 - a,d,e,f
Prd 3 - a,b,d,g
Prd 4 - a,c,d,e,f
```
So here are 4 products and 6 attributes...the same theory will work for hundreds of products and thousands of attributes. The standard way of holding this in one table requires the product info along with 6 columns to store the data (in this setup at least one third of them are null). Adding a new attribute means altering the table to add another column and coming up with a script to populate existing rows, or just leaving it null for all of them. Not the most fun; it can be a headache.
The alternative to this is a name-value pair setup. You want a 'header' table to hold the values common amongst your products (like name, or price...things that all products always have). In our example above, you will notice that attribute 'a' is being used on each record...this means attribute 'a' can be part of the header table as well. We'll call the key column here 'header\_id'.
The second table is a reference table that simply stores the attributes that can be assigned to each product and assigns an ID to each. We'll call the table attribute, with attr_id for a key. Rather straightforward; each attribute above will be one row.
Quick example:
```
attr_id, attribute_name, notes
1,b, the length of time the product takes to install
2,c, spare part required
etc...
```
It's just a list of all of your attributes and what that attribute means. In the future, you will be adding a row to this table to open up a new attribute for each header.
Final table is a mapping table that actually holds the info. You will have your product id, the attribute id, and then the value. Normally called the detail table:
```
prd1, b, 5 mins
prd1, c, needs spare jack
prd2, d, 'misc text'
prd3, b, 15 mins
```
See how the data is stored as product key, value label, value? Any future product added can have any combination of any attributes stored in this table. Adding new attributes is adding a new line to the attribute table and then populating the details table as needed.
I believe there is a wiki article for it too... <http://en.wikipedia.org/wiki/Entity-attribute-value_model>
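To make the three-table layout concrete, here is a minimal runnable sketch (SQLite via Python's `sqlite3`). The table and column names follow the description above, and the sample values are the ones from the detail-table example:

```python
import sqlite3

# Minimal sketch of the header / attribute / detail (EAV) layout.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE header    (header_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE attribute (attr_id   INTEGER PRIMARY KEY, attribute_name TEXT);
CREATE TABLE detail    (header_id INTEGER, attr_id INTEGER, value TEXT);
""")
conn.execute("INSERT INTO header VALUES (1, 'prd1'), (2, 'prd2')")
conn.execute("INSERT INTO attribute VALUES (1, 'b'), (2, 'c'), (3, 'd')")
conn.executemany("INSERT INTO detail VALUES (?, ?, ?)", [
    (1, 1, '5 mins'),            # prd1, b
    (1, 2, 'needs spare jack'),  # prd1, c
    (2, 3, 'misc text'),         # prd2, d
])
# A new attribute is just a new row in `attribute` -- no ALTER TABLE needed.
rows = conn.execute("""
    SELECT h.name, a.attribute_name, d.value
    FROM detail d
    JOIN header h    ON h.header_id = d.header_id
    JOIN attribute a ON a.attr_id   = d.attr_id
    ORDER BY h.name, a.attribute_name
""").fetchall()
print(rows)
```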
After this, it's simply figuring out the best methodology to pivot out your data (I'd recommend Postgres as an open-source DB option here). | This problem can be solved by using two separate pieces of technology. The first is to use a relatively well-designed database schema with a modern RDBMS. By modeling the application using the usual principles of normalization, you'll get really good response out of storage for individual CRUD statements.
Searching this schema, as you've surmised, is going to be a nightmare at scale. Don't do it. Instead look into using [Solr/Lucene](http://lucene.apache.org/solr) as your full text search engine. Solr's support for dynamic fields means you can add new properties to your documents/objects on the fly and immediately have the ability to search inside your data if you have designed your Solr schema correctly. | SQL vs NoSQL for data that will be presented to a user after multiple filters have been added | [
"",
"sql",
"object",
"attributes",
"nosql",
"filtering",
""
] |
Basically I got a table in my EF database with the following properties:
```
public int Id { get; set; }
public string Title { get; set; }
public string Description { get; set; }
public string Image { get; set; }
public string WatchUrl { get; set; }
public int Year { get; set; }
public string Source { get; set; }
public int Duration { get; set; }
public int Rating { get; set; }
public virtual ICollection<Category> Categories { get; set; }
```
It works fine however when I change the int of Rating to be a double I get the following error when updating the database:
**The object 'DF\_*Movies*\_Rating\_\_48CFD27E' is dependent on column 'Rating'.
ALTER TABLE ALTER COLUMN Rating failed because one or more objects access this column.**
What's the issue? | Try this:
Remove the constraint *DF\_Movies\_Rating\_\_48CFD27E* before changing your field type.
The constraint is typically created automatically by the DBMS (SQL Server).
To see the constraint associated with the table, expand the table attributes in **Object explorer**, followed by the category *Constraints* as shown below:

You must remove the constraint before changing the field type. | I'm adding this as a response to explain where the constraint comes from.
I tried to do it in the comments but it's hard to edit nicely there :-/
If you create (or alter) a table with a column that has default values it will create the constraint for you.
In your table for example it might be:
```
CREATE TABLE Movie (
...
rating INT NOT NULL default 100
)
```
It will create the constraint for default 100.
If you instead create it like so
```
CREATE TABLE Movie (
name VARCHAR(255) NOT NULL,
rating INT NOT NULL CONSTRAINT rating_default DEFAULT 100
);
```
Then you get a nicely named constraint that's easier to reference when you are altering said table.
```
ALTER TABLE Movie DROP CONSTRAINT rating_default;
ALTER TABLE Movie ALTER COLUMN rating DECIMAL(2) NOT NULL;
-- sets up a new default constraint with easy to remember name
ALTER TABLE Movie ADD CONSTRAINT rating_default DEFAULT ((1.0)) FOR rating;
```
You can combine those last 2 statements so you alter the column and name the constraint in one line (you have to if it's an existing table anyways) | The object 'DF__*' is dependent on column '*' - Changing int to double | [
"",
"sql",
"sql-server",
"database",
"entity-framework",
"entity-framework-4",
""
] |
I have the following two tables:
```
rsrpID rsrpName
1 Library Catalog
2 Interlibrary Loan
3 Academic Search Complete
4 JSTOR
5 Project Muse
6 LibGuides
7 Web Resource
8 Other (please add to Notes)
9 Credo Reference
rsriID rsrirsrpID rsrisesdID
603 6 243
604 1 243
605 7 243
606 8 244
607 6 245
608 8 245
```
What I'm trying to do is return the whole first table, and, for those rows in the second table that match the rsrpID in the first table, return those on the relevant rows alongside the first table, for example:
```
rsrpID rsrpName rsrisesdID
1 Library Catalog 243
2 Interlibrary Loan
3 Academic Search Complete
4 JSTOR
5 Project Muse
6 LibGuides 243
7 Web Resource 243
8 Other (please add to Notes)
9 Credo Reference
```
...but I can't for the life of me figure out a join statement that'll return this. Currently the query I was given is
```
select rp.rsrpID as ID, rp.rsrpName as Name,
(select if((count(rsrisesdID) > 0), 'checked', '')
from resourcesintroduced ri
where (ri.rsrirsrpID = rp.rsrpID)
and (rsrisesdID = 243) ) as 'checked'
from resourcesintroduced ri,
resourcepool rp
where rsrisesdID = 243
group by ID
order by Name asc;
```
As you can see that query is clunky and, if a particular `rsrisesdID` doesn't appear at all, then the query returns no rows at all. | You are looking for an Outer Join:
```
select rp.rsrpID as ID, rp.rsrpName as Name, ri.rsrisesdID
from resourcepool rp
left outer join resourcesintroduced ri on (ri.rsrirsrpID = rp.rsrpID and ri.rsrisesdID = 243)
``` | You use a LEFT JOIN
```
SELECT
rsrpID,
rsrpName,
rsrisesdID
FROM
rp LEFT JOIN
ri ON rp.rsrpID = ri.rsrirsrpID
```
Which returns:
```
1 Library Catalog 243
2 Interlibrary Loan NULL
3 Academic Search Complete NULL
4 JSTOR NULL
5 Project Muse NULL
6 LibGuides 243
6 LibGuides 245
7 Web Resource 243
8 Other (please add to Notes) 244
8 Other (please add to Notes) 245
9 Credo Reference NULL
```
Depending on the flavor of DBMS, you may have to use `LEFT OUTER JOIN`.
Hope this helps! | Return all rows from one table, and match with a subset of rows from another table? | [
"",
"sql",
"join",
""
] |
I have two tables:
table A: table B:
```
id m1 id m2
------ -------
1 a 1 a
2 b 2 c
3 3 d
```
when I execute the sql:
```
select A.*,B.id as mid from A left join B on A.m1 = b.m2
```
I want to get the result:
```
id m1 mid
--------------
1 a 1
2 b
3
```
but I get:
```
id m1 mid
--------------
1 a 1
2 b
```
Does anyone have an idea how to fix this? | I'm getting the correct results in this [fiddle](http://sqlfiddle.com/#!2/986f44/1).
```
| ID | M1 | MID |
|----|--------|--------|
| 1 | a | 1 |
| 2 | b | (null) |
| 3 | (null) | (null) |
```
It will also work for empty strings. The resulting behaviour you're mentioning goes against the left join. | NULL comparison results in unknown so you might want to replace NULL with something in JOIN condition. | Problems about 'left join' in MySQL | [
"",
"mysql",
"sql",
"left-join",
""
] |
I'm having some trouble figuring out a SQL Server query. The relational model is quite old (it seems) and is not very optimal, to say the least.
My query looks like this:
```
SELECT
[RequestUsers].requestId, [Requests].name, [Requests].isBooked
FROM
[RequestUsers]
JOIN
[Requests] ON [RequestUsers].requestId = [Requests].id
WHERE
[RequestUsers].dateRequested >= '10-01-2013'
AND [RequestUsers].dateRequested <= '10-16-2013'
```
This query gives a result of loads of duplicated records, i.e.:
```
id name isBooked
-----------------------------
1393 Request1 0
1393 Request1 0
1393 Request1 0
1394 Request2 0
1394 Request2 0
1399 Request3 0
1399 Request3 0
1399 Request3 0
1399 Request3 0
1399 Request3 0
```
(I omitted lots of records here)
My question is: is there any way to modify the above query to group the duplicated records and make a `requestCount` column which holds the number of duplicates? Like this:
```
id name isBooked requestCount
---------------------------------------------
1393 Request1 0 3
1394 Request2 0 2
1399 Request3 0 5
```
? :-)
Thanks in advance! | ```
SELECT [RequestUsers].requestId,
[Requests].name,
[Requests].isBooked,
Count(*) AS requestCount
FROM [RequestUsers]
JOIN [Requests]
ON [RequestUsers].requestId = [Requests].id
WHERE [RequestUsers].dateRequested >= '10-01-2013'
AND [RequestUsers].dateRequested <= '10-16-2013'
GROUP BY [RequestUsers].requestId,
[Requests].name,
[Requests].isBooked
``` | ```
SELECT [RequestUsers].requestId, [Requests].name, [Requests].isBooked,
COUNT([RequestUsers].requestId) "requestCount"
FROM [RequestUsers]
JOIN [Requests]
ON [RequestUsers].requestId = [Requests].id
WHERE [RequestUsers].dateRequested >= '10-01-2013'
AND [RequestUsers].dateRequested <= '10-16-2013'
GROUP BY [RequestUsers].requestId, [Requests].name, [Requests].isBooked
``` | Count unique rows in SQL Server query | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
So I have data like this:
```
UserID CreateDate
1 10/20/2013 4:05
1 10/20/2013 4:10
1 10/21/2013 5:10
2 10/20/2012 4:03
```
I need to group by each user get the average time between CreateDates. My desired results would be like this:
```
UserID AvgTime(minutes)
1 753.5
2 0
```
How can I find the difference between CreateDates for all records returned for a User grouping?
EDIT:
Using SQL Server 2012 | Try this:
```
SELECT A.UserID,
AVG(CAST(DATEDIFF(MINUTE,B.CreateDate,A.CreateDate) AS FLOAT)) AvgTime
FROM #YourTable A
OUTER APPLY (SELECT TOP 1 *
FROM #YourTable
WHERE UserID = A.UserID
AND CreateDate < A.CreateDate
ORDER BY CreateDate DESC) B
GROUP BY A.UserID
``` | This approach should also work.
**[Fiddle demo here](http://sqlfiddle.com/#!6/84ba5/10)**:
```
;WITH CTE AS (
Select userId, createDate,
row_number() over (partition by userid order by createdate) rn
from Table1
)
select t1.userid,
isnull(avg(datediff(second, t1.createdate, t2.createdate)*1.0/60),0) AvgTime
from CTE t1 left join CTE t2 on t1.UserID = t2.UserID and t1.rn +1 = t2.rn
group by t1.UserID;
```
**Updated:** Thanks to @Lemark for pointing out `number of diff = recordCount - 1` | Get average time between record creation | [
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2012",
""
] |
I am doing my project in MVC4 using c# and sql.. I have a table `MemberDetails` which contain table
```
CREATE TABLE [dbo].[MemberDetails] (
[Id] INT IDENTITY (1, 1) NOT NULL,
[Mem_FirstNA] VARCHAR (100) NOT NULL,
[Mem_LastNA] VARCHAR (100) NOT NULL,
[Mem_Occ] VARCHAR (100) NOT NULL,
[Mem_DOB] DATETIME NOT NULL,
[Mem_Email] VARCHAR (50) NOT NULL,
PRIMARY KEY CLUSTERED ([Id] ASC)
);
```
I just want to select the names and dates of birth of those whose birthday is in the next 30 days, and I use the following query
```
SELECT
Mem_FirstNA, Mem_LastNA, Mem_DOB
FROM
MemberDetails
WHERE
Mem_DOB >= getdate() - 1 AND Mem_DOB <= getdate() + 30
```
Is that correct, I got 0 item selected , I use the following table.
```
1 Pal Software 08-03-1987 AM 12:00:00
3 mn Par Bussiness 19-10-1967 AM 12:00:00
4 man George Business 13-11-1956 AM 12:00:00
5 Smi Kan Housewife 22-10-1980 AM 12:00:00
``` | Try this...
~~SELECT
Mem\_FirstNA, Mem\_LastNA, Mem\_DOB
FROM
MemberDetails
WHERE
ltrim(str(year(GETDATE()))) + '-' + ltrim(str(month(Mem\_DOB))) + '-' + ltrim(str(day(Mem\_DOB))) >= getdate() - 1 AND ltrim(str(year(GETDATE()))) + '-' + ltrim(str(month(Mem\_DOB))) + '-' + ltrim(str(day(Mem\_DOB))) <= getdate() + 30~~
EDIT: The comments for this answer have rightly pointed out that it will not work if the current year is a leap year. So this update. The list of dates can be more efficiently generated by using [Get a list of dates between two dates using a function](https://stackoverflow.com/questions/1378593/get-a-list-of-dates-between-two-dates-using-a-function)
```
Select Mem_FirstNA, Mem_LastNA, Mem_DOB from MemberDetails m, (
Select datepart(dd,getdate()) as d, datepart(mm,getdate()) as m
union
Select datepart(dd,getdate() + 1) as d, datepart(mm,getdate() + 1) as m
union
Select datepart(dd,getdate() + 2) as d, datepart(mm,getdate() + 2) as m
union
Select datepart(dd,getdate() + 3) as d, datepart(mm,getdate() + 3) as m
union
Select datepart(dd,getdate() + 4) as d, datepart(mm,getdate() + 4) as m
union
Select datepart(dd,getdate() + 5) as d, datepart(mm,getdate() + 5) as m
union
Select datepart(dd,getdate() + 6) as d, datepart(mm,getdate() + 6) as m
union
Select datepart(dd,getdate() + 7) as d, datepart(mm,getdate() + 7) as m
union
Select datepart(dd,getdate() + 8) as d, datepart(mm,getdate() + 8) as m
union
Select datepart(dd,getdate() + 9) as d, datepart(mm,getdate() + 9) as m
union
Select datepart(dd,getdate() + 10) as d, datepart(mm,getdate() + 10) as m
union
Select datepart(dd,getdate() + 11) as d, datepart(mm,getdate() + 11) as m
union
Select datepart(dd,getdate() + 12) as d, datepart(mm,getdate() + 12) as m
union
Select datepart(dd,getdate() + 13) as d, datepart(mm,getdate() + 13) as m
union
Select datepart(dd,getdate() + 14) as d, datepart(mm,getdate() + 14) as m
union
Select datepart(dd,getdate() + 15) as d, datepart(mm,getdate() + 15) as m
)X
where
datepart(dd, m.Mem_DOB) = x.d and datepart(mm, m.Mem_DOB) = x.m
```
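Independent of the SQL dialect, the windowing logic itself (including the wrap across the year end) can be sanity-checked in plain Python. A sketch, with the caveat that a Feb 29 birthday only matches in leap years here:

```python
from datetime import date, timedelta

# Walk the next `days` calendar dates and see if any matches the
# birthday's (month, day). Year wrap-around is handled naturally.
def birthday_within(dob, today, days=30):
    for offset in range(days + 1):
        d = today + timedelta(days=offset)
        if (d.month, d.day) == (dob.month, dob.day):
            return True
    return False

print(birthday_within(date(1987, 3, 8), date(2013, 2, 20)))    # True: Mar 8 falls in the window
print(birthday_within(date(1967, 10, 19), date(2013, 2, 20)))  # False
print(birthday_within(date(1990, 1, 3), date(2013, 12, 25)))   # True: window wraps the year end
```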
If you are downvoting, please comment why. | Updated
```
SELECT
Mem_FirstNA, Mem_LastNA, Mem_DOB
FROM
MemberDetails
WHERE
DATEDIFF(DAY, GETDATE(), DATEADD(YEAR, DATEDIFF(YEAR, Mem_DOB, GETDATE()), Mem_DOB)) BETWEEN 0 AND 30
``` | How to get 30 days from now from SQL table | [
"",
"sql",
"sql-server-2008",
""
] |
I have in my database (SQL Server 2008 R2) a table like this:
```
ID......Team...........Name......Age
102 Barcelona Mike 15
103 Barcelona Peter 10
104 Barcelona Jacke 10
105 Barcelona Jonas 10
106 Real Madrid Michael 20
107 Real Madrid Terry 26
108 Chelsea James 26
109 Chelsea Arthur 23
110 Chelsea Spence 22
```
1. How can I loop the field 'Team' and know that, there are records like Barcelona, Real Madrid and Chelsea.
2. After that I want to calculate the sum of the team player of each team.
For Barcelona: -> 10 + 10 + 10 + 15 = 45
For Real Madrid: -> 20 + 26 = 46
For Chelsea: -> 26 + 23 + 22 = 71
1. Fill each result in a separate variable.
The whole calculation should be done in a stored procedure.
The second thing, if I have a table like this:
```
ID......Team...........Name......HeaderGoal......FreeKickGoal
104 Barcelona Mike 2 1
105 Barcelona Peter 1 0
106 Real Madrid Michael 0 1
107 Real Madrid Terry 0 1
108 Chelsea James 0 0
109 Chelsea Arthur 2 3
110 Chelsea Spence 4 0
```
1. How can I loop the field 'Team' and know that, there are records like Barcelona, Real Madrid and Chelsea.
2. After that I want to calculate the sum of all Goals of each team with the goal type HeaderGoal and FreeKickGoal.
Example for
-> Barcelona: 2+1+1 = 4
-> Real Madrid: 1+1 = 2
-> Chelsea: 2 + 3 + 4 = 9
1. Fill each result in a separate variable.
The whole calculation should be done in a stored procedure.
I hope you can help me!
BK\_ | If I understood your question correctly it looks like what you want are aggregates for each group, something that is easily accomplished with the [GROUP BY](http://technet.microsoft.com/en-us/library/ms177673.aspx) clause.
For the first query you would use:
```
SELECT team, SUM(age) AS 'Sum of the team'
FROM table
GROUP BY team
```
This will give this result:
```
Team Sum of the team
-------------------- ---------------
Barcelona 45
Chelsea 71
Real Madrid 46
```
and for the second:
```
SELECT team, SUM(headergoal + freekickgoal) AS 'Sum of goals'
FROM table
GROUP BY team
```
which will give this result:
```
Team Sum of goals
-------------------- ------------
Barcelona 4
Chelsea 9
Real Madrid 2
```
In your example data you list the desired result for the first part for Chelsea as 45, but I guess that is just a typo, as you omitted one of Chelsea's rows in the calculation?
As for turning it into a stored procedure I can just tell you that it's easy and refer you to the [documentation](http://technet.microsoft.com/en-us/library/ms345415.aspx) as I won't do all the work for you...
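If you want to sanity-check those aggregates outside SQL Server, here is a quick sketch using SQLite from Python with the question's data (SQLite's GROUP BY behaves the same way for this query):

```python
import sqlite3

# Rebuild the question's table and verify the per-team SUM(age) totals.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE team_table (team TEXT, name TEXT, age INTEGER)")
conn.executemany("INSERT INTO team_table VALUES (?, ?, ?)", [
    ("Barcelona", "Mike", 15), ("Barcelona", "Peter", 10),
    ("Barcelona", "Jacke", 10), ("Barcelona", "Jonas", 10),
    ("Real Madrid", "Michael", 20), ("Real Madrid", "Terry", 26),
    ("Chelsea", "James", 26), ("Chelsea", "Arthur", 23),
    ("Chelsea", "Spence", 22),
])
sums = dict(conn.execute(
    "SELECT team, SUM(age) FROM team_table GROUP BY team"
).fetchall())
print(sums)  # Barcelona=45, Real Madrid=46, Chelsea=71
```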
Edit: added `merge into` as a response to a comment:
To insert the result of the second query into an existing table you can use either a simple `INSERT` statement like this:
```
INSERT table_with_team_and_goals
SELECT team, SUM(headergoal + freekickgoal)
FROM table
GROUP BY team
```
or `MERGE INTO` which might be better if you intend to run the query many times (the target table will then be updated if the team already exist in it):
```
MERGE INTO table_with_team_and_goals AS target
USING (SELECT Team, SUM(headergoal + freekickgoal) AS goals FROM table_with_goals GROUP BY team) AS source
ON target.team=source.team
WHEN MATCHED THEN
UPDATE SET goals = source.goals
WHEN NOT MATCHED THEN
INSERT (Team, Goals)
VALUES (source.team, source.goals);
``` | ```
SELECT TEAM , Name, COUNT(TEAM) As GoalsPerTeam, COUNT(NAME) As GoalPerPlayer
FROM TABLE
GROUP BY TEAM , Name
```
This query will give you the total goals scored per player and per team. | Calculate in a table with different cells - SQL Server / T-SQL | [
"",
"sql",
"sql-server",
"t-sql",
"stored-procedures",
""
] |
How do I make MySQL's SELECT DISTINCT case sensitive?
```
create temporary table X (name varchar(50) NULL);
insert into X values ('this'), ('This');
```
Now this query:
```
select distinct(name) from X;
```
Results in:
> this
What's going on here? I'd like SELECT DISTINCT to be case sensitive. Shouldn't that be the default? | Use [`BINARY` operator](http://dev.mysql.com/doc/refman/5.0/en/charset-binary-op.html) for that:
```
SELECT DISTINCT(BINARY name) AS Name FROM X;
```
You can also [`CAST`](https://dev.mysql.com/doc/refman/5.7/en/cast-functions.html#function_cast) it while selecting:
```
SELECT DISTINCT
(CAST(name AS CHAR CHARACTER SET utf8) COLLATE utf8_bin) AS Name FROM X;
```
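The collation behaviour behind this can be reproduced in SQLite (a sketch, not MySQL itself): the `NOCASE` collation plays the role of MySQL's default case-insensitive collation, and SQLite's default `BINARY` collation corresponds to the `BINARY`/`utf8_bin` fix:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# NOCASE mimics MySQL's default case-insensitive collation.
con.execute("CREATE TABLE x_nocase (name TEXT COLLATE NOCASE)")
# SQLite's default BINARY collation mirrors the BINARY/utf8_bin fix.
con.execute("CREATE TABLE x_binary (name TEXT)")
rows = [("this",), ("This",)]
con.executemany("INSERT INTO x_nocase VALUES (?)", rows)
con.executemany("INSERT INTO x_binary VALUES (?)", rows)

insensitive = con.execute("SELECT DISTINCT name FROM x_nocase").fetchall()
sensitive = con.execute("SELECT DISTINCT name FROM x_binary").fetchall()
print(len(insensitive), len(sensitive))  # 1 2
```

DISTINCT compares using the column's collation, which is why the same two rows collapse to one in the case-insensitive table.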
### See this [SQLFiddle](http://sqlfiddle.com/#!9/590497/1) | I would rather update the column definition to use a case-sensitive collation.
Like this:
```
create table X (name VARCHAR(128) CHARACTER SET utf8 COLLATE utf8_bin NULL);
insert into X values ('this'), ('This');
```
SQLFiddle: <http://sqlfiddle.com/#!2/add276/2/0> | MySQL SELECT DISTINCT should be case sensitive? | [
"",
"mysql",
"sql",
"distinct",
"case-sensitive",
""
] |
How can I select all BUT the last row in a table?
I know I can select the last via `SELECT item FROM table ORDER BY id DESC LIMIT 1` but I want to select ALL but the last.
How can I do this?
Selecting a certain amount and using `ASC` won't work because I won't know how many rows there are.
I thought about doing a row count and taking that amount and selecting that -1 and using `ORDER BY id ASC` but are there any easier ways? | If Id is unique, you can do it as follows:
```
SELECT ...
FROM MyTable
WHERE Id < (SELECT MAX(Id) FROM MyTable)
``` | The easiest way to do this is to discard the row in the application. You can do it in SQL by using a calculated limit. Calculate it as `(SELECT COUNT(*) FROM T) - 1`.
Or:
```
SELECT *
FROM T
EXCEPT
SELECT TOP 1 *
FROM T
ORDER BY id ASC
```
Excluding the last row. | SQL Selecting all BUT the last row in a table? | [
"",
"sql",
""
] |
Trying to search a date range in a database. The problem is, I cannot use the datetime column type in my database. To compensate, dates are stored as three columns: a Month column, a Day column, and a Year column. Here is my SQL query:
```
SELECT COUNT(*)
FROM `import`
WHERE `call_day` BETWEEN 29 AND 15
AND `call_month` BETWEEN 9 AND 10
AND `call_year` BETWEEN 2013 AND 2013
```
You can see where I run into trouble. call\_day needs to search between the 29th day and the 15th day. This won't work because 15 is smaller than 29, but I need it to work because the month is in the future :)
Any thoughts/solutions? No I cannot change the database in any way. Read only. | Besides the concatenation approach, which can be implemented in quite a few ways, e.g.
```
SELECT *
FROM import
WHERE STR_TO_DATE(CONCAT_WS('-', call_year, call_month, call_day), '%Y-%c-%e')
BETWEEN '2013-09-29' AND '2013-10-15'
```
or
```
SELECT *
FROM import
WHERE CONCAT(call_year, LPAD(call_month, 2, '0'), LPAD(call_day, 2, '0'))
BETWEEN '20130929' AND '20131015'
```
Here is **[SQLFiddle](http://sqlfiddle.com/#!2/56d09/7)** demo
Either of those will always cause a full scan. Assuming that date ranges in your queries usually don't span more than a few months, you can also do
```
SELECT *
FROM import
WHERE (call_year = 2013 AND
call_month = 9 AND
call_day BETWEEN 29 AND DAY(LAST_DAY('2013-09-01'))) -- or just 30
OR (call_year = 2013 AND
call_month = 10 AND
call_day BETWEEN 1 AND 15)
```
Here is **[SQLFiddle](http://sqlfiddle.com/#!2/c7ae8/4)** demo
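The padded-concatenation trick works because zero-padding makes string order agree with date order. A small sqlite3 sketch illustrates it, with `printf()` standing in for `CONCAT`/`LPAD` and invented rows (the table is renamed `import_` here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE import_ (call_day INTEGER, call_month INTEGER, call_year INTEGER)")
con.executemany("INSERT INTO import_ VALUES (?, ?, ?)", [
    (28, 9, 2013),   # just before the range
    (29, 9, 2013),   # in range
    (1, 10, 2013),   # in range
    (15, 10, 2013),  # in range (BETWEEN is inclusive)
    (16, 10, 2013),  # just after the range
])
# Zero-pad year/month/day so lexicographic order equals chronological order.
rows = con.execute("""
    SELECT call_year, call_month, call_day FROM import_
    WHERE printf('%04d%02d%02d', call_year, call_month, call_day)
          BETWEEN '20130929' AND '20131015'
""").fetchall()
print(rows)
```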
For a query that spans a year (e.g. from `2012-08-20` to `2013-10-15`)
```
SELECT *
FROM import
WHERE (call_year = 2012 AND
call_month = 8 AND
call_day BETWEEN 20 AND 31)
OR (call_year = 2012 AND
call_month BETWEEN 9 AND 12 AND
call_day BETWEEN 1 AND 31)
OR (call_year = 2013 AND
call_month BETWEEN 1 AND 9 AND
call_day BETWEEN 1 AND 31)
OR (call_year = 2013 AND
call_month = 10 AND
call_day BETWEEN 1 AND 15)
```
Here is **[SQLFiddle](http://sqlfiddle.com/#!2/41e9b/4)** demo | Concat the values like yyyymmdd and then you can compare them like strings. | SQL search between two days without datetime | [
"",
"mysql",
"sql",
"datetime",
""
] |
I'm still getting my feet wet with SQL. I have a problem with querying dates and grouping them. I want to group the dates, and from these dates display the name and type of visit within a specific range of dates.
Say, from the example below, I want to query the people who visited in the range of **01/01/2012 until 31/12/2012** and with a type of visit of **4**
 | ```
SELECT DISTINCT NAME
FROM YOUR_TABLE
WHERE (TYPE_OF_VISIT = 4) AND (DATE_OF_VISIT BETWEEN ? AND ?)
```
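A sketch of the first query against invented data (sqlite3; ISO date strings are assumed here since the real column types aren't shown in the screenshot):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE visits (name TEXT, type_of_visit INTEGER, date_of_visit TEXT)")
con.executemany("INSERT INTO visits VALUES (?, ?, ?)", [
    ("Alice", 4, "2012-03-01"),
    ("Alice", 4, "2012-06-15"),
    ("Bob",   2, "2012-05-10"),   # wrong visit type
    ("Carol", 4, "2013-01-02"),   # outside the date range
])
# DISTINCT collapses Alice's two qualifying visits into one row.
names = con.execute("""
    SELECT DISTINCT name FROM visits
    WHERE type_of_visit = 4 AND date_of_visit BETWEEN ? AND ?
""", ("2012-01-01", "2012-12-31")).fetchall()
print(names)
```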
If you want the first visit of each customer within this range, then you should try something like this:
```
SELECT DISTINCT T1.NAME, T1.DATE_OF_VISIT
FROM YOUR_TABLE T1
WHERE (T1.TYPE_OF_VISIT = 4) AND (T1.DATE_OF_VISIT BETWEEN ? AND ?)
AND T1.DATE_OF_VISIT =
(SELECT MIN (T2.DATE_OF_VISIT) FROM YOUR_TABLE T2 WHERE T1.NAME=T2.NAME AND
(T2.DATE_OF_VISIT BETWEEN ? AND ?))
``` | If the column Date\_of\_visit does not contain any time portion (i.e. only a date), you can execute the following:
```
SELECT NAME, TYPE_OF_VISIT
FROM appointment
WHERE Type_of_visit = 4
AND Date_of_visit >= '01/01/2012 '
AND Date_of_visit <= '31/12/2012'
GROUP BY NAME
``` | Query relating to multiple Dates | [
"",
"sql",
"date",
"grouping",
""
] |
for an assignment I had something similar to the following (simplified for brevity):
```
STUDENT(StudentID, Fname, Lname) StudentID PK
UNIT(UnitID, UnitName) UnitID PK
STUDENT_UNIT((StudentID, UnitID) StudentID PK/FK UnitID PK/FK
```
Needed to insert info about a student and the units that he/she had completed.
As it is only beginner level SQL the following was accepted
```
INSERT INTO STUDENT
VALUES(seqStudID.NextVal, 'Bob', 'Brown');
INSERT INTO STUDENT_UNIT VALUES(seqStudID.CurrVal, 111);
INSERT INTO STUDENT_UNIT VALUES(seqStudID.CurrVal, 222);
INSERT INTO STUDENT_UNIT VALUES(seqStudID.CurrVal, 333);
```
But I was wondering what the real way to enter this data would be. Would it be a procedure with a loop? If so, what sort of loop (so that it could handle any number of units)?
Thanks in advance | One of the best approaches is to use a `stored procedure`. The procedure below will do everything for you.
```
CREATE OR REPLACE
PROCEDURE set_stud_unit(
i_fname IN VARCHAR2,
i_lname IN VARCHAR2,
i_unitid IN VARCHAR2)
IS
l_studentid student.studentid%TYPE;
BEGIN
INSERT INTO student(studentid, fname, lname)
VALUES(seqstudid.NEXTVAL, i_fname, i_lname)
RETURNING studentid INTO l_studentid;
INSERT INTO student_unit (studentid, unitid)
(SELECT l_studentid, (COLUMN_VALUE).getNumberVal() vid FROM xmltable(i_unitid));
COMMIT;
END;
/
```
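For comparison, the same parent-then-children flow can be sketched client-side. This is a sqlite3 illustration, not Oracle: `lastrowid` stands in for the sequence plus the `RETURNING` clause, and the unit ids are passed as a list instead of a comma-separated string:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE student (studentid INTEGER PRIMARY KEY, fname TEXT, lname TEXT);
    CREATE TABLE student_unit (studentid INTEGER, unitid INTEGER,
                               PRIMARY KEY (studentid, unitid));
""")

def set_stud_unit(fname, lname, unit_ids):
    """Insert the parent row, then one child row per completed unit."""
    cur = con.execute("INSERT INTO student (fname, lname) VALUES (?, ?)", (fname, lname))
    sid = cur.lastrowid  # plays the role of RETURNING studentid INTO l_studentid
    con.executemany("INSERT INTO student_unit VALUES (?, ?)",
                    [(sid, u) for u in unit_ids])

set_stud_unit("Bob", "Brown", [111, 222, 333])
count = con.execute("SELECT COUNT(*) FROM student_unit").fetchone()[0]
print(count)  # 3
```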
You can pass the unit ids as a comma-separated string, as below:
```
EXECUTE set_stud_unit('Bob', 'Brown', '111,222,333');
``` | You can use a SELECT in your INSERT:
```
INSERT INTO STUDENT_UNIT select t1.StudentID ,t2.UnitID from STUDENT t1 ,UNIT t2;
```
and you can use a WHERE clause to limit this selection ;-) | Oracle SQL insert query - into parent and child tables | [
"",
"sql",
"oracle",
"insert",
""
] |
I have Person and Address tables. Some persons may or may not have an address. If they have an address I want to join the address table, otherwise no join is needed. Please help me solve this case.
```
select p.name,nvl(a.address,'address not available') from person p,address a
where p.id = 2 and case
when p.addid is not null
then p.addid = a.id
else 0=0 end
``` | The *general solution* - use Boolean logic. You cannot choose between complete expressions using CASE, so you should rewrite it to use a combination of AND and OR. Using the logic from your question, you would rewrite it as:
```
WHERE p.id = 2
AND
(
(p.addid IS NOT NULL AND p.addid = a.id)
OR (p.addid IS NULL AND 0=0)
)
```
Which ultimately simplifies down to:
```
WHERE p.id = 2
AND (p.addid IS NULL OR p.addid = a.id)
```
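A quick truth-table check confirms the two predicates are equivalent (a two-valued abstraction: in SQL, `p.addid = a.id` is UNKNOWN when `addid` is NULL, but the long form only evaluates it when `addid IS NOT NULL`):

```python
from itertools import product

# long form:  (addid IS NOT NULL AND addid = a.id) OR (addid IS NULL AND 0=0)
# short form: addid IS NULL OR addid = a.id
cases = list(product([True, False], repeat=2))
equivalent = all(
    ((not is_null and matches) or (is_null and True)) == (is_null or matches)
    for is_null, matches in cases
)
print(equivalent)  # True
```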
---
The *specific solution for your query* - use better JOIN syntax, and simply leverage a LEFT JOIN:
```
SELECT p.name, nvl(a.address,'address not available')
FROM person p
LEFT OUTER JOIN address a ON p.addid = a.id
WHERE p.id = 2
``` | Try to use **coalesce** function as below
```
select p.name,nvl(a.address,'address not available') from person p,address a
where p.id = 2
and coalesce(p.addid,a.id)=a.id
``` | How to compare with Case in Where clause | [
"",
"sql",
"oracle",
""
] |
In my SQL table I have the fields *name*, *description* and *language*.
My language field has values like "en-GB" or "fr-FR".
```
SELECT * FROM items WHERE `name` LIKE '%yellow%' OR `description` LIKE '%yellow%' AND `language` = 'en-GB';
```
Results with the search term **yellow** are OK, but I also get results that have language *fr-FR*, even though in my query I have set the language to *en-GB*.
I have searched for the query syntax on [w3schools](http://www.w3schools.com/sql/sql_like.asp), and I think the part with "like" is correct.
What am I doing wrong here? | Use parentheses:
```
SELECT * FROM items
WHERE (`name` LIKE '%yellow%' OR `description` LIKE '%yellow%') AND `language` = 'en-GB';
``` | You need to use brackets because `AND` has higher precedence than `OR`; without them, the `language` condition applies only to the `description` match.
```
SELECT *
FROM items
WHERE (`name` LIKE '%yellow%' OR `description` LIKE '%yellow%')
AND `language` = 'en-GB'
``` | SQL request - search query not working | [
"",
"sql",
"search",
""
] |
I have the following table as LocalBarcode:

and the following as BarcodeCancel:
I want to take:
1. count of total barcodes
2. count of barcodes whose IsUpload is true
from LocalBarcode and
1. count of Barcode whose isUpload is true
from BarcodeCancel.
I designed the following query:
```
select localbarcode.(select count(barcode)
from localbarcode
where Int([TimeStamp])= Format(CDate('10/18/2013'))),
localbarcode.(select count(isupload)
from localbarcode
where isupload=0),
BarcodeCancel.(select count(barcode)
from BarcodeCancel
where Int([TimeStamp])= Format(CDate('10/18/2013')))
from localbarcode,BarcodeCancel
```
But this query is giving me error on first line.
Please help me.
**EDIT:**
 | ```
select (select count(barcode)
from localbarcode
where Int([TimeStamp])= Format(CDate('10/18/2013'))),
(select count(isupload)
from localbarcode
where isupload=0),
(select count(barcode)
from BarcodeCancel
where Int([TimeStamp])= Format(CDate('10/18/2013')))
from dual
```
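The shape of this query, independent scalar subqueries inside one SELECT, can be tried out in sqlite3 with invented data (`TimeStamp` is simplified to a text `ts` column here, and SQLite needs no `FROM dual` at all):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE localbarcode (barcode TEXT, isupload INTEGER, ts TEXT);
    CREATE TABLE barcodecancel (barcode TEXT, ts TEXT);
    INSERT INTO localbarcode VALUES ('A', 1, '2013-10-18'),
                                    ('B', 0, '2013-10-18'),
                                    ('C', 0, '2013-10-17');
    INSERT INTO barcodecancel VALUES ('B', '2013-10-18');
""")
# Three independent scalar subqueries returned as one row.
row = con.execute("""
    SELECT (SELECT COUNT(barcode) FROM localbarcode WHERE ts = '2013-10-18'),
           (SELECT COUNT(*) FROM localbarcode WHERE isupload = 0),
           (SELECT COUNT(barcode) FROM barcodecancel WHERE ts = '2013-10-18')
""").fetchone()
print(row)  # (2, 2, 1)
```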
You need to use the dummy table "dual". I made a simplified fiddle: <http://sqlfiddle.com/#!2/15291/3> | You can get the required output using the following query:
```
select count(LocalBarcode.Barcode), count(lb.Barcode), count(lb1.Barcode)
from LocalBarcode
left join LocalBarcode lb
  on LocalBarcode.Barcode = lb.Barcode and LocalBarcode.IsUpload = 1
left join Barcodecancel lb1
  on LocalBarcode.Barcode = lb1.Barcode and LocalBarcode.roleIsUpload = 1
```
 | taking data from multiple tables in single query | [
"",
"mysql",
"sql",
"database",
"oledb",
""
] |