| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
Yesterday, I posted this question here: [MSSQL 2008: Get last updated record by specific field](https://stackoverflow.com/questions/26382131/mssql-2008-get-last-updated-record-by-specific-field/26382251)
Gordon Linoff came up with a good solution and I was happy until today, when I realized I had posted only half of the scenarios. Here's my new question:
Given this table `Content`:
```
ContentId lastUpdate FileId IrrelevantField
1 2014-01-01 00:00:00 File-A Dr. Hoo /* user uploads file*/
1 2014-01-02 00:00:00 File-B Dr. Hoo /* (!) user uploads new file */
1 2014-01-03 00:00:00 File-B Dr. Who /* user updates info */
2 2014-02-01 00:00:00 File-M 41 /* (!) user uploads file */
2 2014-02-02 00:00:00 File-M 42 /* user updates info */
3 2014-03-01 00:00:00 File-S Donald Duck /* user uploads file*/
```
---
Basically what I want is to get all rows that meet these conditions:
* They have a different `FileId` than their previous row with the same `ContentId`.
* If the `FileId` has never changed, get the first row ever submitted (in the example this applies to `ContentId` = 2 and 3).
`IrrelevantField` also triggers a row update. My goal is to get the rows where `FileId` has changed.
---
The output would be the following:
```
ContentId lastUpdate FileId IrrelevantField
1 2014-01-02 00:00:00 File-B Dr. Hoo
2 2014-02-01 00:00:00 File-M 41
3 2014-03-01 00:00:00 File-S Donald Duck
```
---
`FileId` is never `NULL`.
---
I have tried to add an `OUTER APPLY` to Gordon Linoff's solution so I can check whether the `FileId` is still the same as in the initial upload, but that returned irrelevant updates as well.
|
Try this; it should work with all your scenarios:
```
;with cte
as
(
select rank() over (partition by fileid, contentid order by lastupdate) id,
       ContentId, lastUpdate, FileId, IrrelevantField
from tablename
)
select ContentId, lastUpdate, FileId, IrrelevantField
from (
    select row_number() over (partition by contentid order by lastupdate desc) fstid, *
    from cte
    where id = 1
) a
where fstid = 1
```
|
You were along the right lines changing it to an `OUTER APPLY`; then you can change the `WHERE` clause slightly to allow for records where there is no previous record.
```
SELECT c.ContentID,
c.LastUpdate,
c.FileID,
c.IrrelevantField,
FileID2 = c2.FileID
FROM Content AS c
OUTER APPLY
( SELECT TOP 1 c2.FileID
FROM Content AS c2
WHERE c2.ContentID = c.ContentID
AND c2.LastUpdate < c.LastUpdate
ORDER BY c2.LastUpdate DESC
) AS c2
WHERE c.FileId != c2.FileId
OR c2.FileID IS NULL;
```
This means though that two records are returned for ContentID = 1 (and any other content with changes):
```
ContentID LastUpdate FileID IrrelevantField FileID2
1 2014-01-01 File-A Dr. Hoo NULL
1 2014-01-02 File-B Dr. Hoo File-A
2 2014-02-01 File-M 41 NULL
3 2014-03-01 File-S Donald Duck NULL
```
So you will need to use a further ranking function to limit this to only the latest record:
```
WITH CTE AS
( SELECT c.ContentID,
c.LastUpdate,
c.FileID,
c.IrrelevantField,
RowNumber = ROW_NUMBER() OVER(PARTITION BY c.ContentID ORDER BY c.LastUpdate DESC)
FROM Content AS c
OUTER APPLY
( SELECT TOP 1 c2.FileID
FROM Content AS c2
WHERE c2.ContentID = c.ContentID
AND c2.LastUpdate < c.LastUpdate
ORDER BY c2.LastUpdate DESC
) AS c2
WHERE c.FileId != c2.FileId
OR c2.FileID IS NULL
)
SELECT c.ContentID,
c.LastUpdate,
c.FileID,
c.IrrelevantField
FROM CTE AS c
WHERE RowNumber = 1;
```
|
MSSQL 2008: Get last updated record by specific field (Part 2)
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have this query; it is supposed to display data from the current date (today) only:
```
SELECT * FROM Customer WHERE visitTime <= GETDATE() AND visitTime > GETDATE() - 1
```
where *visitTime* is of **datetime** type.
But it doesn't work well. I think the problem is in `AND visitTime > GETDATE() - 1`, but I don't know how to fix it. Does anybody have any suggestions?
|
Look at this example:
```
declare @visitTime datetime ='2014-10-16 23:59:59.000'
select GETDATE() GETDATE, @visitTime visitTime, GETDATE() - 1 [GETDATE-1]
GETDATE visitTime GETDATE-1
2014-10-17 00:02:18.980 2014-10-16 23:59:59.000 2014-10-16 00:02:18.980
```
You'll see that the visitTime value clearly falls in the range you specified, because the lower bound (the `-1`) subtracts a whole day and not just the time part.
You could use this instead:
```
-- using GETDATE() for the upper bound misses visitTime that are
-- on the current day, but at a later time than now.
WHERE visitTime < DateAdd(Day, DateDiff(Day, 0, GetDate())+1, 0)
AND visitTime >= DateAdd(Day, DateDiff(Day, 0, GetDate()), 0)
```
or, if you're on SQL Server 2008+, which has a `date` data type, this:
```
WHERE CAST(visitTime AS DATE) = CAST(GETDATE() AS DATE)
```
Note that `GETDATE()` is T-SQL specific; the ANSI equivalent is `CURRENT_TIMESTAMP`.
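The off-by-a-day behaviour is easy to demonstrate outside the database. A minimal Python sketch (the `is_today` helper is illustrative, not part of any answer) mirroring the half-open range `[midnight today, midnight tomorrow)` that the `DateAdd`/`DateDiff` version computes:

```python
from datetime import datetime, timedelta

def is_today(visit_time: datetime, now: datetime) -> bool:
    """Half-open range [midnight today, midnight tomorrow) - the safe pattern."""
    start = now.replace(hour=0, minute=0, second=0, microsecond=0)
    return start <= visit_time < start + timedelta(days=1)

now = datetime(2014, 10, 17, 0, 2, 18)

# 23:59:59 yesterday is wrongly included by "visitTime > GETDATE() - 1":
late_yesterday = datetime(2014, 10, 16, 23, 59, 59)
assert late_yesterday > now - timedelta(days=1)   # buggy predicate accepts it
assert not is_today(late_yesterday, now)          # half-open range rejects it

# And a visit later today is missed by "visitTime <= GETDATE()":
later_today = datetime(2014, 10, 17, 9, 0, 0)
assert not (later_today <= now)                   # buggy upper bound excludes it
assert is_today(later_today, now)                 # half-open range includes it
```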
|
Assuming "today" means midnight last night to midnight tonight, you can use the following condition:
```
Select * from Customer where
visitTime >= DateAdd(d, Datediff(d,1, current_timestamp), 1)
and
visitTime < DateAdd(d, Datediff(d,0, current_timestamp), 1);
```
|
SQL display | select data from today date | current day only
|
[
"",
"sql",
"sql-server",
"datetime",
""
] |
I'm wondering how I can return specific results depending on my first selected statement. Basically I have two IDs: CustBillToID and CustShipToID. If CustShipToID is not null, I want to select that and all the records that are joined to it. If it is null, default to the CustBillToID and all the results that are joined to that.
Here is my SQL, which obviously doesn't work. I should mention I tried a subquery in the conditional, but since it returns multiple results it won't work. I am using SQL Server 2012.
```
SELECT CASE WHEN cp.CustShipToID IS NOT NULL
THEN
cy.CustDesc,
cy.Address1,
cy.Address2,
cy.City,
cy.State,
cy.ZIP,
cy.Phone
ELSE
c.CustDesc,
c.Address1,
c.Address2,
c.City,
c.State,
c.ZIP,
c.Phone
END
LoadID,
cp.CustPOID,
cp.POBillToRef,
cp.POShipToRef,
cp.CustBillToID,
cp.CustShipToID,
cp.ArrivalDate,
cp.LoadDate,
cp.StopNum,
cp.ConfNum,
cp.EVNum,
cp.ApptNum,
ld.CarrId,
ld.Temperature,
cr.CarrDesc
FROM [Sales].[dbo].[CustPO] AS cp
LEFT OUTER JOIN Load AS ld
ON cp.LoadID = ld.LoadID
LEFT OUTER JOIN Carrier AS cr
ON ld.CarrId = cr.CarrId
LEFT OUTER JOIN Customer AS c
ON c.CustId = cp.CustBillToID
WHERE CustPOID=5213
```
Any ideas?
Also my current SQL is below, I do a conditional to determine if it's set. I'd rather do it in SQL if possible.
```
SELECT cp.LoadID,
cp.CustPOID,
cp.POBillToRef,
cp.POShipToRef,
cp.CustBillToID,
cp.CustShipToID,
cp.ArrivalDate,
cp.LoadDate,
cp.StopNum,
cp.ConfNum,
cp.EVNum,
cp.ApptNum,
ld.CarrId,
ld.Temperature,
cr.CarrDesc,
c.CustDesc as CustBillToDesc,
c.Address1 as CustBillAddress1,
c.Address2 as CustBillAddress2,
c.City as CustBillCity,
c.State as CustBillState,
c.ZIP as CustBillZIP,
c.Phone as CustBillPhone,
cy.CustDesc as CustShipToDesc,
cy.Address1 as CustShipAddress1,
cy.Address2 as CustShipAddress2,
cy.City as CustShipCity,
cy.State as CustShipState,
cy.ZIP as CustShipZIP,
cy.Phone as CustShipPhone
FROM [Sales].[dbo].[CustPO] as cp
left outer join Load as ld
on cp.LoadID = ld.LoadID
left outer join Carrier as cr
on ld.CarrId = cr.CarrId
left outer join Customer as c
on c.CustId = cp.CustBillToID
left outer join Customer as cy
on cy.CustId = cp.CustShipToID
WHERE CustPOID=?
```
|
For this, you basically want to build a string containing your SQL and then execute the string. Look at the answer to this one:
[SQL conditional SELECT](https://stackoverflow.com/questions/1038435/sql-conditional-select)
|
You need a separate `case` for each column:
```
SELECT (CASE WHEN cp.CustShipToID IS NOT NULL THEN cy.CustDesc ELSE c.CustDesc END) as CustDesc,
(CASE WHEN cp.CustShipToID IS NOT NULL THEN cy.Address1 ELSE c.Address1 END) as Address1,
(CASE WHEN cp.CustShipToID IS NOT NULL THEN cy.Address2 ELSE c.Address2 END) as Address2,
(CASE WHEN cp.CustShipToID IS NOT NULL THEN cy.City ELSE c.City END) as City,
(CASE WHEN cp.CustShipToID IS NOT NULL THEN cy.State ELSE c.State END) as State,
(CASE WHEN cp.CustShipToID IS NOT NULL THEN cy.ZIP ELSE c.ZIP END) as ZIP,
(CASE WHEN cp.CustShipToID IS NOT NULL THEN cy.Phone ELSE c.Phone END) as Phone,
. . .
```
|
Selecting columns based on a case SQL
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
"conditional-statements",
""
] |
I have this query that will give me those rows that have the field "username" duplicated:
```
SELECT id, jos_users.username, email, password, lastvisitDate
FROM jos_users
INNER JOIN (SELECT username
FROM jos_users
GROUP BY username HAVING count(id) > 1) dup
ON jos_users.username = dup.username;
```
I need to get those that have the lower lastvisitDate.
For example:
```
id | username | email | password | lastvisitDate |
1 | mylogin | | | 2014-10-15 16:42:42 |
2 | mylogin | | | 2014-10-16 16:42:42 |
```
As you can see, the row with id=1 has the lowest lastvisitDate. How could I add this condition to the query?
I want this because I'll perform a delete query using this select to delete the duplicated rows.
|
```
SELECT id, jos_users.username, email, password, lastvisitDate
FROM jos_users
INNER JOIN (SELECT username
FROM jos_users jb
where jb.lastvisitDate =
(select min(jb1.lastvisitDate)
from jos_users jb1
where jb1.username = jb.username)
GROUP BY username
HAVING count(id) > 1) dup
ON jos_users.username = dup.username;
```
|
I would probably write this as:
```
SELECT u.id, u.username, u.email, u.password, u.lastvisitDate
FROM jos_users u
WHERE EXISTS (SELECT 1 FROM jos_users WHERE username = u.username GROUP BY username HAVING count(id) > 1)
AND EXISTS (SELECT 1 FROM jos_users WHERE username = u.username GROUP BY username HAVING min(lastvisitDate) = u.lastvisitDate)
```
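The idea behind the second answer can be exercised quickly with Python's sqlite3. This sketch is a slight simplification (a scalar `MIN` subquery instead of the second `EXISTS`), with an extra non-duplicated user added to show it is filtered out:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE jos_users (id INT, username TEXT, lastvisitDate TEXT);
INSERT INTO jos_users VALUES
 (1, 'mylogin', '2014-10-15 16:42:42'),
 (2, 'mylogin', '2014-10-16 16:42:42'),
 (3, 'other',   '2014-10-01 00:00:00');
""")

# For each duplicated username, pick the row with the earliest lastvisitDate
# (the row the asker intends to delete).
oldest_dups = conn.execute("""
SELECT u.id, u.username, u.lastvisitDate
FROM jos_users u
WHERE EXISTS (SELECT 1 FROM jos_users WHERE username = u.username
              GROUP BY username HAVING COUNT(id) > 1)
  AND u.lastvisitDate = (SELECT MIN(lastvisitDate) FROM jos_users
                         WHERE username = u.username)
ORDER BY u.id
""").fetchall()
print(oldest_dups)  # only id=1 qualifies: duplicated username, earliest visit
```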
|
Find duplicate row get only the older one
|
[
"",
"mysql",
"sql",
""
] |
Table [Status] has the following data:
```
ID Status
1 PaymentPending
2 Pending
3 Paid
4 Cancelled
5 Error
```
====================================
Data Table has the following structure:
```
ID WeekNumber StatusID
1 1 1
2 1 2
3 1 3
4 2 1
5 2 2
6 2 2
7 2 3
```
I'm looking for a pivot like this:
```
Week # PaymentPending Pending Paid Cancelled
Week 1 1 1 1 0
Week 2 1 2 1 0
```
|
A pivot might look like this:
```
SELECT * FROM
(SELECT
'Week ' + CAST(D.WeekNumber AS varchar(2)) [Week #],
S.Status
FROM DataTbl D
INNER JOIN Status S ON D.StatusID = S.ID
) Derived
PIVOT
(
COUNT(Status) FOR Status IN
([PaymentPending], [Pending], [Paid], [Cancelled]) -- add [Error] if needed
) Pvt
```
If you expect the number of items in the `Status` table to change, you might want to consider using a dynamic pivot to generate the column headings. Something like this:
```
DECLARE @sql AS NVARCHAR(MAX)
DECLARE @cols AS NVARCHAR(MAX)
SELECT @cols = ISNULL(@cols + ',','') + QUOTENAME(Status)
FROM (SELECT ID, Status FROM Status) AS Statuses ORDER BY ID
SET @sql =
N'SELECT * FROM
(SELECT ''Week '' + CAST(D.WeekNumber AS varchar(2)) [Week #], S.Status
FROM Datatbl D
INNER JOIN Status S ON D.StatusID = S.ID) Q
PIVOT (
COUNT(Status)
FOR Status IN (' + @cols + ')
) AS Pvt'
EXEC sp_executesql @sql;
```
[Sample SQL Fiddle](http://www.sqlfiddle.com/#!6/45d36/4)
|
```
SELECT 'Week '+CAST(coun.WeekNumber AS VARCHAR(10)) [Week #],[PaymentPending],[Pending],[Paid],[Cancelled],[Error] FROM
(SELECT [WeekNumber],[Status] FROM dbo.WeekDetails
INNER JOIN [dbo].[Status] AS s
ON [dbo].[WeekDetails].[StatusID] = [s].[ID]) AS wee
PIVOT (COUNT(wee.[Status]) FOR wee.[Status]
IN ([PaymentPending],[Pending],[Paid],[Cancelled],[Error])) AS Coun
```
|
How to use SQL Server 2005 Pivot based on lookup table
|
[
"",
"sql",
"sql-server",
"sql-server-2005",
"pivot",
""
] |
I just want to return a list from a table where rows whose date difference is more than 15 days are returned. It only returns rows where `RequestStatus=1`; it doesn't get those where `RequestStatus=2`.
Here is my query:
```
SELECT *
FROM User
WHERE RequestStatus = 1
OR RequestStatus = 3
AND (DATEDIFF(DAY, GETDATE(), TaskCompletionDate)) > 15
```
|
Use a SQL `IN` clause to specify all legitimate values for the `RequestStatus` column in your `WHERE` condition. Note that `AND` binds more tightly than `OR`, so your original query is evaluated as `RequestStatus = 1 OR (RequestStatus = 3 AND ...)`, which returns every `RequestStatus = 1` row regardless of the date. For example:
```
Select *
from User
Where RequestStatus in (1,2,3)
and (DATEDIFF(day, getdate(), TaskCompletionDate))> 15
```
|
I would suggest writing the query as:
```
select *
from User
Where RequestStatus in (1, 2, 3) and
TaskCompletionDate > DATEADD(day, 15, getdate())
```
By moving `TaskCompletionDate` outside the date functions, you give SQL Server more opportunities to optimize the query (for instance, by potentially making use of an index, if available and appropriate).
|
SQL Multiple conditions (AND, OR) in WHERE clause yield incorrect result
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I am trying to subtract two dates from each other, but it seems the subtraction isn't working properly and I am not sure what I am doing wrong. I am using a CASE statement to flag a row as 1 if the difference between the dates is less than 90 days, else 0. But it always flags 1, even when the difference between the dates is greater than 90 days. I am using PostgreSQL; here is my CASE statement:
```
CASE WHEN EXTRACT(DAY FROM CAST(SVS_DT AS DATE) - CAST(DSCH_TS AS DATE)) <90
THEN 1 ELSE 0 END AS FU90
```
An example of the dates is here:
```
SVS_DT DSCH_TS
2013-03-22 00:00:00 2010-05-06 00:00:00
```
It is supposed to flag 0 in this case, but it flags 1 even though the difference between these two dates is greater than 90 days.
|
`extract` of a day returns the day element of a date. Since day-of-month values are always between 1 and 31, the extracted difference can be at most 30 and can never be larger than 90.
Subtracting `date`s returns the difference in days, as an integer. So you need only drop the `extract` calls:
```
CASE WHEN (CAST(SVS_DT AS DATE) - CAST(DSCH_TS AS DATE)) < 90 THEN 1
ELSE 0
END AS FU90
```
|
You can use the one below:
```
CASE WHEN EXTRACT(EPOCH FROM (DATE_COLUMN_2 - DATE_COLUMN_1)) < (90*24*60*60)
THEN 1 ELSE 0 END AS FU90
```
Here it returns the difference in seconds, so you need to convert 90 days into seconds.
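Both answers reduce to ordinary date arithmetic. A Python sketch with the standard library's `datetime` (the `fu90` helper is a hypothetical stand-in for the CASE expression):

```python
from datetime import date

def fu90(svs_dt: date, dsch_ts: date) -> int:
    """1 if the dates are fewer than 90 days apart, else 0 (hypothetical helper)."""
    return 1 if (svs_dt - dsch_ts).days < 90 else 0

# The question's example pair is far more than 90 days apart, so the flag is 0.
# extract(day from ...) compared only a day-of-month-sized number, which is why
# the original CASE always produced 1.
assert fu90(date(2013, 3, 22), date(2010, 5, 6)) == 0
assert fu90(date(2013, 3, 22), date(2013, 1, 1)) == 1   # 80 days apart
```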
|
Subtracting two dates using PostgreSQL
|
[
"",
"sql",
"postgresql",
"date",
""
] |
I have two tables in a MySQL database:
```
Book(title, publisher, year) title is primary key
Author(name, title) title is foreign key to Book
```
I am trying to select the name of the Authors that published a book each year from 2000 to 2005 inclusive. This SQL query works, but is there a way to do this that makes it easier to change the date range if needed?
```
SELECT DISTINCT name
FROM Author
WHERE name IN (SELECT Author.name
FROM Author INNER JOIN Book ON (Author.title = Book.title)
WHERE year = 2000)
AND name IN
(SELECT Author.name
FROM Author INNER JOIN Book ON (Author.title = Book.title)
WHERE year = 2001)
AND name IN
(SELECT Author.name
FROM Author INNER JOIN Book ON (Author.title = Book.title)
WHERE year = 2002)
AND name IN
(SELECT Author.name
FROM Author INNER JOIN Book ON (Author.title = Book.title)
WHERE year = 2003)
AND name IN
(SELECT Author.name
FROM Author INNER JOIN Book ON (Author.title = Book.title)
WHERE year = 2004)
AND name IN
(SELECT Author.name
FROM Author INNER JOIN Book ON (Author.title = Book.title)
WHERE year = 2005);
```
|
Here are two ways to do it, plus a demonstration of why another approach is wrong due to a subtle fault.
[SQL Fiddle](http://sqlfiddle.com/#!2/aa444f/2)
**MySQL 5.5.32 Schema Setup**:
```
create table Book (title varchar(10), year int) ;
create table Author (name varchar(10), title varchar(10));
insert Book values
('Book1',2000),('Book2',2000),
('Book3',2000),('Book4',2000),
('Book5',2000),('Book6',2000),
('Book7',2001),('Book8',2002),
('Book9',2003),('Book10',2004),
('Book11',2005);
insert into Author values
('Author1','Book1'),('Author1','Book2'),
('Author1','Book3'),('Author1','Book4'),
('Author1','Book5'),('Author1','Book6'),
('Author2','Book6'),('Author2','Book7'),
('Author2','Book8'),('Author2','Book9'),
('Author2','Book10'),('Author2','Book11');
# author1 has written 6 books in one year
# author2 has written 1 book in every of the six years
```
**Query 1**:
```
# incorrect as it matches author1 who has 6 books in a single year
SELECT name from Author
INNER JOIN BOOK on Author.title = Book.Title
WHERE year IN (2000,2001,2002,2003,2004,2005)
GROUP BY name
HAVING COUNT(name) = 6
```
**[Results](http://sqlfiddle.com/#!2/aa444f/2/0)**:
```
| NAME |
|---------|
| Author1 |
| Author2 |
```
**Query 2**:
```
# correct as it counts distinct years
SELECT name from Author
INNER JOIN BOOK on Author.title = Book.Title
WHERE year IN (2000,2001,2002,2003,2004,2005)
GROUP BY name
HAVING COUNT(DISTINCT year) = 6
```
**[Results](http://sqlfiddle.com/#!2/aa444f/2/1)**:
```
| NAME |
|---------|
| Author2 |
```
**Query 3**:
```
# correct using relational division
SELECT DISTINCT name
FROM Author A1
INNER JOIN Book B1 ON A1.title = B1.Title
WHERE NOT EXISTS (
SELECT *
FROM Book B2
WHERE year IN (2000,2001,2002,2003,2004,2005)
AND NOT EXISTS (
SELECT *
FROM Author A2
INNER JOIN Book B3 ON A2.title = B3.Title
WHERE (A1.name = A2.name)
AND (B3.year = B2.year)
)
)
```
**[Results](http://sqlfiddle.com/#!2/aa444f/2/2)**:
```
| NAME |
|---------|
| Author2 |
```
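The difference between Query 1 and Query 2 can be reproduced with Python's sqlite3 using the same fiddle data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Book (title TEXT, year INT);
CREATE TABLE Author (name TEXT, title TEXT);
INSERT INTO Book VALUES ('Book1',2000),('Book2',2000),('Book3',2000),
 ('Book4',2000),('Book5',2000),('Book6',2000),('Book7',2001),
 ('Book8',2002),('Book9',2003),('Book10',2004),('Book11',2005);
INSERT INTO Author VALUES ('Author1','Book1'),('Author1','Book2'),
 ('Author1','Book3'),('Author1','Book4'),('Author1','Book5'),
 ('Author1','Book6'),('Author2','Book6'),('Author2','Book7'),
 ('Author2','Book8'),('Author2','Book9'),('Author2','Book10'),
 ('Author2','Book11');
""")

q = """SELECT name FROM Author
       INNER JOIN Book ON Author.title = Book.title
       WHERE year IN (2000,2001,2002,2003,2004,2005)
       GROUP BY name HAVING COUNT({}) = 6
       ORDER BY name"""

# Counting rows wrongly admits Author1 (6 books in a single year):
wrong = [r[0] for r in conn.execute(q.format("name"))]
# Counting distinct years keeps only Author2 (one book in each of the 6 years):
right = [r[0] for r in conn.execute(q.format("DISTINCT year"))]
print(wrong, right)
```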
|
I would put an `OR` clause in the `WHERE` statement. It depends on how your table is set up, but it should work:
```
SELECT DISTINCT name
FROM Author
WHERE name IN (SELECT Author.name
FROM Author INNER JOIN Book ON (Author.title = Book.title)
WHERE year = 2000)
```
|
SQL Query: How to improve?
|
[
"",
"mysql",
"sql",
"relational-division",
""
] |
1.
```
select course_id
from course
where exists
(select course_id
from course
where course_id not in
(select course_id
from section));
```
2.
```
select course_id
from course
where course_id not in
(select course_id
from section);
```
I want to retrieve every `course_id` in table `course` that does not appear in table `section`. The first query returns 200 rows; the second returns 115.
Why do the two have different results?
My interpretation of the first is: check whether there is a `course_id` in `course` that does not belong to `section`; if EXISTS returns true, then the `course_id` being checked is the `course_id` chosen.
|
From docs:
```
If a subquery returns any rows at all, EXISTS subquery is TRUE, and NOT EXISTS subquery is FALSE
```
So in your case, the first query returns all rows because it effectively looks like:
```
select course_id from course where TRUE;
```
So if you want to retrieve every course\_id in table course that does not appear in table section, your second query is correct.
And about that:
> I interpret the first that I will check whether there is course\_id in course that does not belong to section. If exists return true, then course\_id that were checked before is the course\_id chosen.
This is the wrong interpretation:
[Reference to docs](http://dev.mysql.com/doc/refman/5.0/en/exists-and-not-exists-subqueries.html)
|
Your first query simply returns all courses: since 115 courses that are not in the section table always exist, the `EXISTS` clause is always true.
Just to show the difference, the following query should return 115 rows like your second one, but your second one is certainly more efficient.
```
select c1.course_id
from course c1
where exists
(select c2.course_id
from course c2
where c1.course_id = c2.course_id
and c2.course_id not in
(select course_id
from section));
```
See [this fiddle](http://sqlfiddle.com/#!2/248a62/2).
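A compact way to see the uncorrelated-EXISTS effect, using Python's sqlite3 with a three-course example (the data is invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE course (course_id INT);
CREATE TABLE section (course_id INT);
INSERT INTO course VALUES (1),(2),(3);
INSERT INTO section VALUES (1);
""")

# Uncorrelated EXISTS: the subquery is TRUE once, so every course row passes.
uncorrelated = conn.execute("""
  SELECT course_id FROM course
  WHERE EXISTS (SELECT course_id FROM course
                WHERE course_id NOT IN (SELECT course_id FROM section))
""").fetchall()

# Plain NOT IN: only the courses genuinely missing from section.
correlated = conn.execute("""
  SELECT course_id FROM course
  WHERE course_id NOT IN (SELECT course_id FROM section)
""").fetchall()

assert len(uncorrelated) == 3           # all courses returned
assert sorted(correlated) == [(2,), (3,)]
```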
|
What is the different between the two sql codes?
|
[
"",
"sql",
"exists",
""
] |
I've got a problem: when I tried to summarize the minimum quantity of each nation's products, it did not work.
I have the 2 tables below.
PRODUCT:
```
ID|NAME |NaID|Qty
-------------------
01|Fruit|JP |50
02|MEAT |AUS |10
03|MANGA|JP |80
04|BOOK |AUS |8
```
NATION:
```
NaID |NAME
-------------------
AUS |Australia
JP |Japan
```
I want my result like this:
```
ID|NAME |Name|minQty
-------------------
01|Fruit|JP |50
04|BOOK |AUS |8
```
and I used:
```
select p.id,p.name, p.NaID,n.name,min(P.Qty)as minQty
from Product p,Nation n
where p.NaID=n.NaID
group by p.id,p.name, p.NaID,n.name,p.Qty
```
and I got this (T\_T):
```
ID|NAME |NaID|minQty
-------------------
01|Fruit|JP |50
02|MEAT |AUS |10
03|MANGA|JP |80
04|BOOK |AUS |8
```
Please, could someone help me? I am thinking that I am bad at SQL now.
|
SQL Server 2005 supports window functions, so you can do something like this:
```
select id,
name,
NaID,
name,
qty
from (
select p.id,
p.name,
p.NaID,
n.name,
min(P.Qty) over (partition by n.naid) as min_qty,
p.qty
from Product p
join Nation n on p.NaID=n.NaID
) t
where qty = min_qty;
```
If there is more than one row per nation with the same minimum value, you will get each of them. If you don't want that, you need to use `row_number()`:
```
select id,
name,
NaID,
name,
qty
from (
select p.id,
p.name,
p.NaID,
n.name,
row_number() over (partition by n.naid order by p.qty) as rn,
p.qty
from Product p
join Nation n on p.NaID = n.NaID
) t
where rn = 1;
```
As your example output only includes the NaID but not the nation's name, you don't really need the join between `product` and `nation`.
---
(There is no DBMS product called "SQL 2005". `SQL` is just a (standard) for a query language. The DBMS product you mean is called Microsoft SQL **Server** 2005. Or just SQL Server 2005).
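The `row_number()` variant ports directly to any engine with window functions. A Python sqlite3 sketch with the question's data (column aliases are mine):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Product (ID TEXT, NAME TEXT, NaID TEXT, Qty INT);
CREATE TABLE Nation (NaID TEXT, NAME TEXT);
INSERT INTO Product VALUES ('01','Fruit','JP',50),('02','MEAT','AUS',10),
 ('03','MANGA','JP',80),('04','BOOK','AUS',8);
INSERT INTO Nation VALUES ('AUS','Australia'),('JP','Japan');
""")

# Number the products of each nation by ascending quantity,
# then keep only the first (i.e. cheapest-quantity) row per nation.
rows = conn.execute("""
SELECT ID, pname, NaID, Qty FROM (
  SELECT p.ID, p.NAME AS pname, p.NaID, p.Qty,
         ROW_NUMBER() OVER (PARTITION BY p.NaID ORDER BY p.Qty) AS rn
  FROM Product p JOIN Nation n ON p.NaID = n.NaID
) WHERE rn = 1 ORDER BY ID
""").fetchall()
print(rows)
```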
|
In Oracle, you can use several techniques. You can use subqueries and analytic functions, but the most efficient one is to use aggregate functions [MIN](http://docs.oracle.com/database/121/SQLRF/functions111.htm#SQLRF00667) and [FIRST](http://docs.oracle.com/database/121/SQLRF/functions073.htm#SQLRF00641).
Your tables:
```
SQL> create table nation (naid,name)
2 as
3 select 'AUS', 'Australia' from dual union all
4 select 'JP', 'Japan' from dual
5 /
Table created.
SQL> create table product (id,name,naid,qty)
2 as
3 select '01', 'Fruit', 'JP', 50 from dual union all
4 select '02', 'MEAT', 'AUS', 10 from dual union all
5 select '03', 'MANGA', 'JP', 80 from dual union all
6 select '04', 'BOOK', 'AUS', 8 from dual
7 /
Table created.
```
The query:
```
SQL> select max(p.id) keep (dense_rank first order by p.qty) id
2 , max(p.name) keep (dense_rank first order by p.qty) name
3 , p.naid "NaID"
4 , n.name "Nation"
5 , min(p.qty) "minQty"
6 from product p
7 inner join nation n on (p.naid = n.naid)
8 group by p.naid
9 , n.name
10 /
ID NAME NaID Nation minQty
-- ----- ---- --------- ----------
01 Fruit JP Japan 50
04 BOOK AUS Australia 8
2 rows selected.
```
Since you're not using Oracle, a less efficient query, but probably working in all RDBMS:
```
SQL> select p.id
2 , p.name
3 , p.naid
4 , n.name
5 , p.qty
6 from product p
7 inner join nation n on (p.naid = n.naid)
8 where ( p.naid, p.qty )
9 in
10 ( select p2.naid
11 , min(p2.qty)
12 from product p2
13 group by p2.naid
14 )
15 /
ID NAME NAID NAME QTY
-- ----- ---- --------- ----------
01 Fruit JP Japan 50
04 BOOK AUS Australia 8
2 rows selected.
```
Note that if you have several rows with the same minimum quantity per nation, all those rows will be returned, instead of just one as in the previous "Oracle"-query.
|
SQL:How to get min Quantity?
|
[
"",
"sql",
"sql-server",
""
] |
I'm practicing a lab manual exercise in which I have to create 6 tables. The creation of 5 is successful.
But one line is giving an error:
```
constraint GRADE_Designation_FK
FOREIGN KEY(Designation) References EMPLOYEE(Designation),
```
ERROR at line 7:
> ORA-02270: no matching unique or primary key for this column-list
Queries of 2 linked tables are
```
create table EMPLOYEE
(
Empno number(4) constraint EMPLOYEE_Empno_PK PRIMARY KEY,
Name varchar2(10) not null,
Designation varchar2(50),
Qualification varchar2(10),
Joindate date
);
create table GRADE
(
Designation varchar2(50) constraint GRADE_Designation_PK PRIMARY KEY,
Grade number(2),
TotalPosts number(4),
PostsAvailable number(4),
constraint GRADE_Grade_CK check(Grade between 1 and 20),
constraint GRADE_PostsAvailable_CK check(PostsAvailable <= TotalPosts),
constraint GRADE_Designation_FK FOREIGN KEY(Designation) References EMPLOYEE(Designation)
);
```
I tried:
```
create table GRADE
(
Designation varchar2(50) constraint GRADE_Designation_PK PRIMARY KEY,
Grade number(2),
TotalPosts number(4),
PostsAvailable number(4),
constraint GRADE_Grade_CK check(Grade between 1 and 20),
constraint GRADE_PostsAvailable_CK check(PostsAvailable <= TotalPosts)
);
create table EMPLOYEE
(
Empno number(4) constraint EMPLOYEE_Empno_PK PRIMARY KEY,
Name varchar2(10) not null,
Designation varchar2(50) NOT NULL UNIQUE,
Qualification varchar2(10),
Joindate date default sysdate
constraint EMPLOYEE_Designation_FK FOREIGN KEY(Designation) References GRADE(Designation),
);
```
Now I get a new error:
> constraint EMPLOYEE\_Designation\_FK FOREIGN KEY(Designation) References GRADE(Designation)
> \*
>
> ERROR at line 8:
>
> ORA-02253: constraint specification not allowed here
|
You've got the constraint on the wrong table. You should create a foreign key on EMPLOYEE.DESIGNATION, referencing back to GRADE.DESIGNATION.
So your tables should look something like:
```
create table GRADE
(
Designation varchar2(50) constraint GRADE_Designation_PK PRIMARY KEY,
Grade number(2),
TotalPosts number(4),
PostsAvailable number(4),
constraint GRADE_Grade_CK check(Grade between 1 and 20),
constraint GRADE_PostsAvailable_CK check(PostsAvailable <= TotalPosts)
);
create table EMPLOYEE
(
Empno number(4) constraint EMPLOYEE_Empno_PK PRIMARY KEY,
Name varchar2(10) not null,
Designation varchar2(50)
constraint EMPLOYEE_FK1
REFERENCES GRADE(DESIGNATION),
Qualification varchar2(10),
Joindate date
);
```
Share and enjoy.
|
> ORA-02270: no matching unique or primary key for this column-list
That error is very self-explanatory and tells you what's wrong: you are trying to create a foreign key on a non-primary-key column, hence the error.
```
constraint GRADE_Designation_FK FOREIGN KEY(Designation)
References EMPLOYEE(Designation)
<--Here
```
`Designation` in the `EMPLOYEE` table is not a primary key, and you can't create a FK against a column that is neither a primary key nor unique. Your table creation should rather look like:
```
create table GRADE
(
Designation varchar2(50) constraint GRADE_Designation_PK PRIMARY KEY,
employee_Empno number(4),
Grade number(2),
TotalPosts number(4),
PostsAvailable number(4),
constraint GRADE_Grade_CK check(Grade between 1 and 20),
constraint GRADE_PostsAvailable_CK check(PostsAvailable <= TotalPosts),
constraint GRADE_Designation_FK FOREIGN KEY(employee_Empno)
References EMPLOYEE(Empno));
```
|
error ORA-02270: no matching unique or primary key for this column-list
|
[
"",
"sql",
"database",
"oracle",
""
] |
I'm trying to create a table using CREATE TABLE in SQL,
where a person can work at multiple places, and a place can have multiple persons working at it.
This is what I'm trying; I'm sure it's not correct:
```
create table ( person char(15), place char(15), salary int)
```
Now, since a person can work in multiple places, I'm confused: should the place attribute of a tuple hold multiple values?
If yes, how do I do it?
Thanks in advance
|
You should create three separate tables:
```
"persons"
int ID (primary key, auto-increment)
varchar username
varchar email ... (all other info needed)
"places"
int ID (primary key, auto-increment)
varchar name
etc.
```
And the third table gives you the relationship between the two:
```
"person_places" (or place_persons, depends on what you like)
int ID (primary key, auto-increment)
int place_id (linked to the ID of the "places" entry)
int person_id (linked to the ID of the "persons" entry)
```
This way, every time a person starts working in a new place, you just add an entry to the "person\_places". Same thing when they leave a place, or a place goes out of business or whatever, you just need to touch the "person\_places" table.
Also, this way, one person can work in several places, just like one place can have several people working in it.
|
This is called an *n-to-m* (many-to-many) *relation*. Use 3 tables:
```
persons table
-------------
id int
name varchar
places table
------------
id int
name varchar
place_persons table
-------------------
place_id int
person_id int
```
|
Creating a Table in SQL, where each tuple can have mutiple values
|
[
"",
"mysql",
"sql",
"database",
"create-table",
""
] |
I have the following data in my table:
```
SELECT category, value FROM test
```
```
| category | value |
+----------+-------+
| 1 | 1 |
| 1 | 3 |
| 1 | 4 |
| 1 | 8 |
```
Right now I am using two separate queries.
1. To get average:
```
SELECT category, avg(value) as Average
FROM test
GROUP BY category
```
```
| category | value |
+----------+-------+
| 1 | 4 |
```
2. To get median:
```
SELECT DISTINCT category,
PERCENTILE_CONT(0.5)
WITHIN GROUP (ORDER BY value)
OVER (partition BY category) AS Median
FROM test
```
```
| category | value |
+----------+-------+
| 1 | 3.5 |
```
Is there any way to merge them into one query?
Note: I know that I can also get median with two subqueries, but I prefer to use PERCENTILE\_CONT function to get it.
|
AVG is also available as a window function:
```
select
distinct
category,
avg(value) over (partition by category) as average,
PERCENTILE_CONT(0.5)
WITHIN GROUP (ORDER BY value)
OVER (partition BY category) AS Median
from test
```
|
I would approach this in a slightly different way:
```
select category, avg(value) as avg,
avg(case when 2 * seqnum in (cnt, cnt + 1, cnt + 2) then value end) as median
from (select t.*, row_number() over (partition by category order by value) as seqnum,
count(*) over (partition by category) as cnt
from test t
) t
group by category;
```
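The second query's median trick works because `2*seqnum IN (cnt, cnt+1, cnt+2)` selects the middle row when the count is odd and the middle two rows when it is even, and `AVG` over those one or two values yields the median. It can be checked with the question's data in Python's sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE test (category INT, value INT);
INSERT INTO test VALUES (1,1),(1,3),(1,4),(1,8);
""")

# cnt = 4 here, so 2*seqnum must land in {4,5,6}: seqnum 2 and 3,
# i.e. values 3 and 4, whose average is the median 3.5.
rows = conn.execute("""
SELECT category, AVG(value) AS average,
       AVG(CASE WHEN 2*seqnum IN (cnt, cnt+1, cnt+2) THEN value END) AS median
FROM (SELECT t.*,
             ROW_NUMBER() OVER (PARTITION BY category ORDER BY value) AS seqnum,
             COUNT(*) OVER (PARTITION BY category) AS cnt
      FROM test t)
GROUP BY category
""").fetchall()
print(rows)
```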
|
Average and Median in one query in Sql Server 2012
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
"average",
"median",
""
] |
I have an "insert only" database, wherein records aren't physically updated, but rather logically updated by adding a new record, with a CRUD value, carrying a larger sequence. In this case, the "seq" (sequence) column is more in line with what you may consider a primary key, but the "id" is the logical identifier for the record.
This is the physical representation of the table:
```
seq id name | CRUD |
----|-----|--------|------|
1 | 10 | john | C |
2 | 10 | joe | U |
3 | 11 | kent | C |
4 | 12 | katie | C |
5 | 12 | sue | U |
6 | 13 | jill | C |
7 | 14 | bill | C |
```
This is the logical representation of the table, considering the "most recent" records:
```
seq id name | CRUD |
----|-----|--------|------|
2 | 10 | joe | U |
3 | 11 | kent | C |
5 | 12 | sue | U |
6 | 13 | jill | C |
7 | 14 | bill | C |
```
In order to, for instance, retrieve the most recent record for the person with id=12, I would currently do something like this:
```
SELECT
*
FROM
PEOPLE P
WHERE
P.ID = 12
AND
P.SEQ = (
SELECT
MAX(P1.SEQ)
FROM
PEOPLE P1
WHERE P.ID = 12
)
```
...and I would receive this row:
```
seq id name | CRUD |
----|-----|--------|------|
5 | 12 | sue | U |
```
What I'd rather do is something like this:
```
WITH
NEW_P
AS
(
--CTE representing all of the most recent records
--i.e. for any given id, the most recent sequence
)
SELECT
*
FROM
NEW_P P2
WHERE
P2.ID = 12
```
The first SQL example using the subquery already works for us.
**Question:** How can I leverage a CTE to simplify our predicates when needing to leverage the "most recent" logical view of the table. In essence, I don't want to inline a subquery every single time I want to get at the most recent record. I'd rather define a CTE and leverage that in any subsequent predicate.
P.S. While I'm currently using DB2, I'm looking for a solution that is database agnostic.
|
This is a clear case for *window* (or *OLAP*) functions, which are supported by all modern SQL databases. For example:
```
WITH
ORD_P
AS
(
SELECT p.*, ROW_NUMBER() OVER ( PARTITION BY id ORDER BY seq DESC) rn
FROM people p
)
,
NEW_P
AS
(
SELECT * from ORD_P
WHERE rn = 1
)
SELECT
*
FROM
NEW_P P2
WHERE
P2.ID = 12
```
PS. Not tested. You may need to explicitly list all columns in the CTE clauses.
|
I guess you already put it together. First find the max `seq` associated with each `id`, then use that to join back to the main table:
```
WITH newp AS (
SELECT id, MAX(seq) AS latestseq
FROM people
GROUP BY id
)
SELECT p.*
FROM people p
JOIN newp n ON (n.latestseq = p.seq)
ORDER BY p.id
```
What you originally had would work, or moving the CTE into the "from" clause. Maybe you want to use a timestamp field rather than a sequence number for the ordering?
|
CTE to represent a logical table for the rows in a table which have the max value in one column
|
[
"",
"sql",
"db2",
"common-table-expression",
""
] |
**Requirements**:
1. Every employee has a unique ID. (EPID)
2. An employee can only be either one of below,
* FT - Full Time
* PT - Part Time
3. Any employee can never be both FT and PT.
4. FT & PT have lots of different fields to capture.
**Implementation**:
```
Create Table EmpFT( EPID int primary key, F1, F2, etc)
Create Table EmpPT( EPID int primary key, P1, P2, etc)
--This does not prevent same EPID on both EmpFT and EmpPT.
```
**How do you implement No. 3 in database?**
I am using SQL Server 2012 standard edition.
|
Try this method:
```
CREATE TABLE Emp(EPID INT PRIMARY KEY,
t CHAR(2) NOT NULL, UNIQUE (EPID,t));
CREATE TABLE EmpFT(EPID INT PRIMARY KEY, ... other columns
t CHAR(2) NOT NULL CHECK (t = 'FT'),
FOREIGN KEY (EPID,t) REFERENCES Emp (EPID,t));
CREATE TABLE EmpPT(EPID INT PRIMARY KEY, ... other columns
t CHAR(2) NOT NULL CHECK (t = 'PT'),
FOREIGN KEY (EPID,t) REFERENCES Emp (EPID,t));
```
|
You can add check constraints. Something like this for both tables
```
ALTER TABLE EmpFT
ADD CONSTRAINT chk_EmpFT_EPID CHECK (dbo.CHECK_EmpPT(EPID)= 0)
ALTER TABLE EmpPT
ADD CONSTRAINT chk_EmpPT_EPID CHECK (dbo.CHECK_EmpFT(EPID)= 0)
```
And the functions like so:
```
CREATE FUNCTION CHECK_EmpFT(@EPID int)
RETURNS int
AS
BEGIN
DECLARE @ret int;
SELECT @ret = count(*) FROM EmpFT WHERE @EPID = EmpFT.EPID
RETURN @ret;
END
GO
CREATE FUNCTION CHECK_EmpPT(@EPID int)
RETURNS int
AS
BEGIN
DECLARE @ret int;
SELECT @ret = count(*) FROM EmpPT WHERE @EPID = EmpPT.EPID
RETURN @ret;
END
GO
```
Further reading here:
* <http://www.w3schools.com/sql/sql_check.asp>
* <http://technet.microsoft.com/en-us/library/ms188258%28v=sql.105%29.aspx>
|
How do you enforce unique across 2 tables in SQL Server
|
[
"",
"sql",
"sql-server",
"t-sql",
"database-design",
""
] |
I am trying to get the count of certain types of records in a related table. I am using a left join.
So I have a query that isn't quite right and one that is returning the correct results. The correct results query has a higher execution cost. I'd like to use the first approach, if I can correct the results. (see <http://sqlfiddle.com/#!15/7c20b/5/2>)
```
CREATE TABLE people(
id SERIAL,
name varchar not null
);
CREATE TABLE pets(
id SERIAL,
name varchar not null,
kind varchar not null,
alive boolean not null default false,
person_id integer not null
);
INSERT INTO people(name) VALUES
('Chad'),
('Buck'); --can't keep pets alive
INSERT INTO pets(name, alive, kind, person_id) VALUES
('doggio', true, 'dog', 1),
('dog master flash', true, 'dog', 1),
('catio', true, 'cat', 1),
('lucky', false, 'cat', 2);
```
My goal is to get a table back with ALL of the people and the counts of the KINDS of pets they have alive:
```
| ID | ALIVE_DOGS_COUNT | ALIVE_CATS_COUNT |
|----|------------------|------------------|
| 1 | 2 | 1 |
| 2 | 0 | 0 |
```
I made the example more trivial. In our production app (not really pets) there would be about 100,000 dead dogs and cats per person. Pretty screwed up I know, but this example is simpler to relay ;) I was hoping to filter all the 'dead' stuff out before the count. I have the slower query in production now (from sqlfiddle above), but would love to get the LEFT JOIN version working.
|
Typically fastest if you fetch **all or most rows**:
```
SELECT pp.id
, COALESCE(pt.a_dog_ct, 0) AS alive_dogs_count
, COALESCE(pt.a_cat_ct, 0) AS alive_cats_count
FROM people pp
LEFT JOIN (
SELECT person_id
, count(kind = 'dog' OR NULL) AS a_dog_ct
, count(kind = 'cat' OR NULL) AS a_cat_ct
FROM pets
WHERE alive
GROUP BY 1
) pt ON pt.person_id = pp.id;
```
Indexes are irrelevant here, full table scans will be fastest. **Except** if alive pets are a *rare* case, then a [**partial index**](http://www.postgresql.org/docs/current/interactive/indexes-partial.html) should help. Like:
```
CREATE INDEX pets_alive_idx ON pets (person_id, kind) WHERE alive;
```
I included all columns needed for the query `(person_id, kind)` to allow index-only scans.
[**SQL Fiddle.**](http://sqlfiddle.com/#!15/78baa/2)
Typically fastest for a **small subset or a single row**:
```
SELECT pp.id
, count(kind = 'dog' OR NULL) AS alive_dogs_count
, count(kind = 'cat' OR NULL) AS alive_cats_count
FROM people pp
LEFT JOIN pets pt ON pt.person_id = pp.id
AND pt.alive
WHERE <some condition to retrieve a small subset>
GROUP BY 1;
```
You should at least have an index on `pets.person_id` for this (or the partial index from above) - and possibly more, depending on the `WHERE` condition.
Related answers:
* [Query with LEFT JOIN not returning rows for count of 0](https://stackoverflow.com/questions/15467624/count-on-left-join-not-returning-0-values/15468034#15468034)
* [GROUP or DISTINCT after JOIN returns duplicates](https://stackoverflow.com/questions/25486942/group-or-distinct-after-join-returns-duplicates/25487898#25487898)
* [Get count of foreign key from multiple tables](https://stackoverflow.com/questions/24745091/get-count-of-foreign-key-from-multiple-tables/24747523#24747523)
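The `count(expr OR NULL)` trick and the pre-aggregated `LEFT JOIN` can be verified with a small script. This is a sketch using Python's built-in `sqlite3` module, which shares Postgres's three-valued logic for `OR NULL`; the data mirrors the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE people (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE pets (id INTEGER PRIMARY KEY, name TEXT, kind TEXT,
                   alive INTEGER, person_id INTEGER);
INSERT INTO people (name) VALUES ('Chad'), ('Buck');
INSERT INTO pets (name, alive, kind, person_id) VALUES
  ('doggio', 1, 'dog', 1), ('dog master flash', 1, 'dog', 1),
  ('catio', 1, 'cat', 1), ('lucky', 0, 'cat', 2);
""")

# Aggregate alive pets once, then LEFT JOIN so petless people still appear.
rows = conn.execute("""
SELECT pp.id,
       COALESCE(pt.a_dog_ct, 0) AS alive_dogs_count,
       COALESCE(pt.a_cat_ct, 0) AS alive_cats_count
FROM people pp
LEFT JOIN (
  SELECT person_id,
         count(kind = 'dog' OR NULL) AS a_dog_ct,  -- FALSE OR NULL is NULL, so count() skips it
         count(kind = 'cat' OR NULL) AS a_cat_ct
  FROM pets
  WHERE alive
  GROUP BY person_id
) pt ON pt.person_id = pp.id
ORDER BY pp.id
""").fetchall()
print(rows)  # Buck shows up with (0, 0) even though all his pets are dead
```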
|
Your `WHERE alive=true` is actually filtering out the record for `person_id = 2`. Use the query below, which pushes the `WHERE alive=true` condition into the `CASE` expression. See your modified [Fiddle](http://sqlfiddle.com/#!15/7c20b/11)
```
SELECT people.id,
pe.alive_dogs_count,
pe.alive_cats_count
FROM people
LEFT JOIN
(
select person_id,
COALESCE(SUM(case when pets.kind='dog' and alive = true then 1 else 0 end),0) as alive_dogs_count,
COALESCE(SUM(case when pets.kind='cat' and alive = true then 1 else 0 end),0) as alive_cats_count
from pets
GROUP BY person_id
) pe on people.id = pe.person_id
```
(OR) your version
```
SELECT
people.id,
COALESCE(SUM(case when pets.kind='dog' and alive = true then 1 else 0 end),0) as alive_dogs_count,
COALESCE(SUM(case when pets.kind='cat' and alive = true then 1 else 0 end),0) as alive_cats_count
FROM people
LEFT JOIN pets on people.id = pets.person_id
GROUP BY people.id;
```
|
Postgres LEFT JOIN with SUM, missing records
|
[
"",
"sql",
"postgresql",
"left-join",
"aggregate-functions",
""
] |
I am trying to group on column A but also order the groups by column B. I cannot get it to work. Also I want to join on another table.
```
select t.*, db_users.user as username, db_users.thumbnail
from
(
SELECT db_chats.* FROM db_chats WHERE db_chats.user = 'utQ8YDxD6kSrlI5QtFOUAE4h'
group by db_chats.partner
order by db_chats.timestamp desc
)
as t
inner join db_users on t.partner = db_users.id
```
This query gives me only 1 row and it is not sorted by timestamp of the grouped column. How can I get this to work?
Here is a **[sqlfiddle](http://sqlfiddle.com/#!2/4f44b/2)**
**Expected result:**
```
2 rows
user partner timestamp
------------------------ ------------------------ -------------------
utQ8YDxD6kSrlI5QtFOUAE4h 6h4rpXOXMB456CJOz8elP2zM
utQ8YDxD6kSrlI5QtFOUAE4h D5RgVSRdbp9kPp9ua9q9rtdf 1413572130 sorted!!!
```
|
Assuming you're after the last chat message from each partner for a given user...
We need to get a set of data that consists of the max time-stamp for each user partner (thus the inner select)
We then join this back to the chat table to get a all the additional desired attributes.
The reason your initial queries were not working is that you were not limiting results to an aggregate (Max), which is why I had so much trouble understanding why you needed a group by.
Once you clarified in plain English what you were after and provided examples, we could figure it out.
```
SELECT A.*, DU.user as username, du.thumbnail
FROM DB_CHATS A
INNER JOIN (
SELECT max(timestamp) TS, user, partner
FROM db_chats
GROUP BY user,partner) T
on A.TimeStamp=T.TS
and A.user=T.User
and A.Partner = T.Partner
LEFT JOIN db_users DU on t.partner = DU.id
WHERE A.user = 'utQ8YDxD6kSrlI5QtFOUAE4h'
```
I think this is what you're after (updated)
<http://sqlfiddle.com/#!2/8abcac/16/0>
I used a left join on the chance that a partner has had their account deleted in db\_users but records still exist in DB\_Chats.
You may want to use an INNER JOIN instead to exclude DB\_USERS rows that are no longer in the system; that's your call, depending on the need.
|
```
SELECT t1.* FROM db_chats t1
WHERE t1.user = 'utQ8YDxD6kSrlI5QtFOUAE4h'
AND NOT EXISTS (SELECT 1 FROM db_chats t2
WHERE t2.user = t1.user
AND t2.partner = t1.partner
AND t2.timestamp > t1.timestamp);
```
<http://sqlfiddle.com/#!2/4f44b/27>
|
Group by A but order by B
|
[
"",
"sql",
""
] |
I have two queries: the first one returns the movie and year for movies which have more than two cast members, and the second query displays the movies which have won more than two awards.
I want to write a query which will give me the movie and year which occurs in one query but not both. How can I do this?
The syntax is in Oracle.
|
We can do this with MINUS.
First set is rows that exist in table1 alone; second set is rows that exist only in table2. Note the parentheses: Oracle evaluates set operators of equal precedence left to right, so without them the second `MINUS` would apply to the whole result built so far instead of to its own pair of selects.
```
(SELECT * FROM table1
 MINUS
 SELECT * FROM table2)
UNION
(SELECT * FROM table2
 MINUS
 SELECT * FROM table1)
```
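To see why the grouping matters, here is a runnable sketch of the symmetric difference. It uses Python's built-in `sqlite3` module, where Oracle's `MINUS` is spelled `EXCEPT`; since SQLite does not allow parentheses around compound selects, derived tables supply the grouping. The table names `q1`/`q2` and their rows are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE q1 (movie TEXT, year INTEGER);
CREATE TABLE q2 (movie TEXT, year INTEGER);
INSERT INTO q1 VALUES ('Alpha', 2001), ('Beta', 2002);
INSERT INTO q2 VALUES ('Beta', 2002), ('Gamma', 2003);
""")

# Symmetric difference: rows appearing in exactly one of the two result sets.
rows = conn.execute("""
SELECT * FROM (SELECT movie, year FROM q1 EXCEPT SELECT movie, year FROM q2)
UNION
SELECT * FROM (SELECT movie, year FROM q2 EXCEPT SELECT movie, year FROM q1)
ORDER BY movie
""").fetchall()
print(rows)  # 'Beta' is in both sets, so it is excluded
```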
|
> I want to write a query which will give me the movie and year which
> occurs in one query but not both.
To do this you need the `UNION` of both queries and the `INTERSECT` of both queries, and then `MINUS` the `INTERSECT` from the `UNION`. Like this
```
((SELECT T2.movie_title,T2.release_year
FROM(SELECT b.movie_title,b.release_year, COUNT(b.movie_title) as NUMMOVIES
FROM ACTOR a FULL OUTER JOIN CAST_MEMBER b ON a.actor_name=b.actor_name
WHERE EXISTS(SELECT c.actor_name FROM CAST_MEMBER c WHERE c.actor_name=a.actor_name)
GROUP BY b.movie_title,b.release_year) T2
WHERE T2.NUMMOVIES > 2)
UNION
(SELECT a.movie_title,a.release_year
FROM MOVIE a
WHERE (SELECT COUNT(b.won) as Won_Counter
FROM NOMINATION b
WHERE b.movie_title=a.movie_title AND a.release_year=b.release_year AND b.won ='Yes') > 2))
MINUS
((SELECT T2.movie_title,T2.release_year
FROM(SELECT b.movie_title,b.release_year, COUNT(b.movie_title) as NUMMOVIES
FROM ACTOR a FULL OUTER JOIN CAST_MEMBER b ON a.actor_name=b.actor_name
WHERE EXISTS(SELECT c.actor_name FROM CAST_MEMBER c WHERE c.actor_name=a.actor_name)
GROUP BY b.movie_title,b.release_year) T2
WHERE T2.NUMMOVIES > 2)
INTERSECT
(SELECT a.movie_title,a.release_year
FROM MOVIE a
WHERE (SELECT COUNT(b.won) as Won_Counter
FROM NOMINATION b
WHERE b.movie_title=a.movie_title AND a.release_year=b.release_year AND b.won ='Yes') > 2))
```
Learn more about these operators [here](http://docs.oracle.com/cd/B19306_01/server.102/b14200/queries004.htm)
I am sure there is a much better way to do this, but we will need more information about your tables.
|
Movie and Year exist in one query but not both
|
[
"",
"sql",
"oracle10g",
"exists",
""
] |
Which one of the two would perform better (I was recently accused of not being careful with my code because I used the latter in Oracle):
```
Select *
from Tab1
Where (not) exists(Select 1 From Tab2 Where Tab1.id = Tab2.id)
Select *
from Tab1
Where (not) exists(Select Field1 From Tab2 Where Tab1.id = Tab2.id)
```
Or are they both same?
Please answer both from SQL Server perspective as well as Oracle perspective.
I have googled (mostly from the SQL Server side) and found that there is still a lot of debate over this, although my present opinion/assumption is that the optimizers in both RDBMSs are mature enough to understand that all that is required from the subquery is a Boolean value.
|
Yes, they are the same. `exists` checks if there is at least one row in the sub query. If so, it evaluates to `true`. The columns in the sub query don't matter in any way.
According to [MSDN](http://msdn.microsoft.com/en-us/library/ms188336.aspx), `exists`:
> Specifies a subquery to test for the existence of rows.
And [Oracle](http://docs.oracle.com/cd/E11882_01/server.112/e10592/conditions012.htm#SQLRF52167):
> An EXISTS condition tests for existence of rows in a subquery.
Maybe the [MySQL documentation](http://docs.oracle.com/cd/E17952_01/refman-5.1-en/exists-and-not-exists-subqueries.html) is even more explaining:
> Traditionally, an EXISTS subquery starts with SELECT \*, but it could begin with SELECT 5 or SELECT column1 or anything at all. **MySQL ignores the SELECT list in such a subquery, so it makes no difference.**
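The equivalence is easy to check empirically. Below is a sketch using Python's built-in `sqlite3` module (any engine would do), running the same `EXISTS` query with three different select lists; the tables and data are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tab1 (id INTEGER, val TEXT);
CREATE TABLE tab2 (id INTEGER, field1 TEXT);
INSERT INTO tab1 VALUES (1, 'a'), (2, 'b'), (3, 'c');
INSERT INTO tab2 VALUES (1, 'x'), (3, 'y');
""")

q = """SELECT id FROM tab1 t1
       WHERE EXISTS (SELECT {cols} FROM tab2 t2 WHERE t2.id = t1.id)
       ORDER BY id"""

# The select list inside EXISTS is never evaluated; only row existence matters.
r_star = conn.execute(q.format(cols="*")).fetchall()
r_one = conn.execute(q.format(cols="1")).fetchall()
r_col = conn.execute(q.format(cols="field1")).fetchall()
print(r_star, r_one, r_col)  # all three are identical
```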
|
I know this is old, but I want to add a few points I observed recently.
Even though `exists` checks only for existence, when we write `select *` all columns will be expanded; other than this slight overhead, there are no differences.
Source:
<http://www.sqlskills.com/blogs/conor/exists-subqueries-select-1-vs-select/>
**Update:**
The article I referred to seems to be invalid. Even when we write `select 1`, SQL Server will expand all the columns.
please refer to below link for in depth analysis and performance statistics,when using various approaches..
[Subquery using Exists 1 or Exists \*](https://stackoverflow.com/a/6140367/2975396)
|
Exists / not exists: 'select 1' vs 'select field'
|
[
"",
"sql",
"sql-server",
"oracle",
"exists",
""
] |
I am having problems increasing the prices of my hp products by 10%.
Here is what I've tried -->>
```
UPDATE products SET price = price*1.1;
from products
where prod_name like 'HP%'
```
Here is a picture of the products table:

|
This is your query:
```
UPDATE products SET price = price*1.1;
from products
where prod_name like 'HP%'
```
It has one issue with the semicolon in the second row. Also, it is not standard SQL (although this will work in some databases). The standard way of expressing this is:
```
update products
set price = price * 1.1
where prod_name like 'HP%';
```
The `from` clause is not necessary in this case.
|
This is an `UPDATE`, not a `SELECT`, so the `FROM` clause is incorrect. Also, the semicolon should go at the end of the last line.
```
UPDATE products SET price = price*1.1; <== Remove the semicolon
from products <== remove this line
where prod_name like 'HP%' <== add a semicolon at the end of this line
```
Try this instead:
```
UPDATE products SET price = price*1.1
where prod_name like 'HP%';
```
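A quick way to convince yourself is to run the corrected statement against throwaway data. This sketch uses Python's built-in `sqlite3` module; the product names and prices are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE products (prod_id INTEGER PRIMARY KEY, prod_name TEXT, price REAL);
INSERT INTO products (prod_name, price) VALUES
  ('HP LaserJet', 100.0), ('HP Envy', 200.0), ('Dell XPS', 300.0);
""")

# Raise prices 10% only for rows whose name starts with 'HP'.
conn.execute("UPDATE products SET price = price * 1.1 WHERE prod_name LIKE 'HP%'")
prices = conn.execute(
    "SELECT prod_name, price FROM products ORDER BY prod_id"
).fetchall()
print(prices)  # the Dell price is untouched
```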
|
Increase products price by 10%
|
[
"",
"mysql",
"sql",
"percentage",
""
] |
What I need to do is get the highest `StockOnSite` per `ProductID` record (calculating the stock difference) and concatenate `StockOnSite` with `StockOffsite` to create a column `AllStock`.
I am completely lost, as I cannot group when we have both a `StockOnSite` and a `StockOffsite`.
Here is the SQLFiddle
[Fiddle](http://sqlfiddle.com/#!3/76e50/1)
This is not a duplicate post, as the outer select complicates the grouping.
Thanks
|
I believe this is the query you've been looking for.
It evaluates `StockDiff` as you suggested: `StockOnSite - StockOffSite` and then takes the highest value for every `SiteID`
```
SELECT
SiteID, Description, StockOnSite, StockOffsite, AllStock, LastStockUpdateDateTime, StockDiff
FROM (
SELECT
*,
rank() OVER (PARTITION BY SiteID ORDER BY StockDiff DESC) AS rank
FROM (
SELECT
s.SiteID,
s.Description,
p.StockOnSite,
p.StockOffsite,
CAST(p.StockOnSite AS varchar(10)) + '/' + CAST(p.StockOffSite AS varchar(10)) AS AllStock,
p.LastStockUpdateDateTime,
p.StockOnSite - p.StockOffSite AS StockDiff
FROM
Sites s
JOIN Products p ON s.SiteID = p.OnSiteID
) foo
) goo
WHERE rank = 1
ORDER BY 1
```
I used a [Window function](http://msdn.microsoft.com/en-us/library/ms189461.aspx) to get it done.
**Edit**
If you need highest `StockOnSite` you can easily modify the Window Function by replacing `StockDiff` in `ORDER BY StockDiff DESC` with `StockOnSite`
|
You wanted the highest StockOnSite by ProductID, but the sample set all had unique ProductIDs. This query will get what you're asking for, but I think your question might be unclear as to what your result set needs to look like.
```
select pr.ProductID
, s.SiteID
,s.Description
,StockOnSite
,StockOffsite
,AllStock = cast(coalesce(pr.StockOnSite,'') as varchar(10)) + '-' + cast(coalesce(pr.StockOffsite,'') as varchar(10))
,LastStockUpdateDateTime = pr.LastStockUpdateDateTime
,Stockdiff = StockOnSite - StockOffsite
from Products pr
inner join (
select pr.ProductID
, MAX(StockOnSite) MaxStockOnSite
from dbo.Products pr
group by [ProductID]
) x
ON x.ProductID = pr.ProductID
and x.MaxStockOnSite = pr.StockOnSite
inner join dbo.Sites s
ON pr.OnSiteId = s.SiteId
```
|
SQL Highest Value With Group
|
[
"",
"sql",
"sql-server",
""
] |
I have a oracle table like this
```
customer1 customer2 city
A B NY
B A NY
A C NY
A D NY
D A NY
C A NY
```
I am just interested in unique combinations: A B and B A count as the same pair.
**Output I need is**
```
customer1 customer2 city
A B NY
A C NY
A D NY
```
|
I don't know how to translate this to Oracle (if it is possible at all), but Postgres gives the short, if perhaps inefficient,
```
SELECT DISTINCT ON (LEAST(c1, c2), GREATEST(c1, c2))
LEAST(c1, c2), GREATEST(c1, c2), city FROM t;
```
|
In Oracle you can do that:
```
SELECT DISTINCT LEAST(customer1, customer2),
GREATEST(customer1, customer2),
city
FROM T
```
See <http://sqlfiddle.com/#!4/b73ba/1>
Simple and easy to understand, but it may not be very efficient (it can't use your index).
---
If you need to keep `customer1` and `customer2` in the same order as in the original table for non-duplicates, you probably need something more complex:
```
SELECT T.* FROM T
JOIN (SELECT MIN(ROWID) RID
FROM T GROUP BY LEAST(customer1, customer2),
GREATEST(customer1, customer2),
city) V
ON T.ROWID = V.RID
```
Or (maybe better):
```
SELECT T1.* FROM T T1
LEFT JOIN T T2
ON T1.city = T2.city
AND T1.customer1 = T2.customer2
AND T1.customer2 = T2.customer1
WHERE T2.city IS NULL OR T1.customer1 < T1.customer2
```
See <http://sqlfiddle.com/#!4/f7bbd/3> for a comparison of those three solutions.
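The `LEAST`/`GREATEST` approach ports to other engines too. In SQLite, the two-argument scalar `min()`/`max()` functions play the same role; here is a sketch via Python's built-in `sqlite3` module with the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (customer1 TEXT, customer2 TEXT, city TEXT);
INSERT INTO t VALUES
  ('A','B','NY'), ('B','A','NY'), ('A','C','NY'),
  ('A','D','NY'), ('D','A','NY'), ('C','A','NY');
""")

# Scalar min()/max() put each unordered pair into one canonical ordering,
# so DISTINCT collapses ('A','B') and ('B','A') into a single row.
rows = conn.execute("""
SELECT DISTINCT min(customer1, customer2) AS c1,
                max(customer1, customer2) AS c2,
                city
FROM t ORDER BY c1, c2
""").fetchall()
print(rows)
```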
|
sql, how to eliminated repeated data across columns
|
[
"",
"sql",
"oracle",
""
] |
I am trying to create an insert statement, below is what I have so far:
```
INSERT INTO Role (application, developer, role)
VALUES (? ,
(SELECT Id from Developer WHERE firstName='Alice' and lastname='Wonderland'),
'ARCHITECT');
```
In the missing value field, I am trying to do a join, the following join statement returns the VALUES I need:
```
SELECT a.id
FROM Application a
JOIN application d ON d.id = a.id and a.category='GAMES'
```
However, when I insert the previous query into the missing value field in the insert statement, it gives me an error stating that the query returned more than 1 row (which is intentional, seeing as I want ALL rows returned to be inserted into the role table)
Can anyone provide me feedback on how to fix this issue?
|
I think this is what you are looking for:
```
INSERT INTO Role(application, developer, role)
SELECT a.id, (SELECT Id from Developer WHERE firstName = 'Alice' and lastname = 'Wonderland'),
'ARCHITECT'
FROM Application a JOIN
application d
ON d.id = a.id and a.category = 'GAMES';
```
Note: I was going to move the subquery with `Developer` into the `FROM` clause. However, if there are no matches, this will return `NULL` -- which seems to be your intention. If I put it in the `FROM` clause, you will get no matches (MySQL doesn't have a `left cross join`).
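The shape of this `INSERT ... SELECT` can be sketched with Python's built-in `sqlite3` module. The schema below is invented to match the question, and the self-join of `application` to itself on `id` is dropped, since joining a table to itself on its own key changes nothing here:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE developer (id INTEGER PRIMARY KEY, firstname TEXT, lastname TEXT);
CREATE TABLE application (id INTEGER PRIMARY KEY, category TEXT);
CREATE TABLE role (application INTEGER, developer INTEGER, role TEXT);
INSERT INTO developer VALUES (7, 'Alice', 'Wonderland');
INSERT INTO application VALUES (1, 'GAMES'), (2, 'GAMES'), (3, 'TOOLS');
""")

# One inserted row per matching application; the scalar subquery and the
# string literal repeat for each of those rows.
conn.execute("""
INSERT INTO role (application, developer, role)
SELECT a.id,
       (SELECT id FROM developer
        WHERE firstname = 'Alice' AND lastname = 'Wonderland'),
       'ARCHITECT'
FROM application a
WHERE a.category = 'GAMES'
""")
rows = conn.execute("SELECT * FROM role ORDER BY application").fetchall()
print(rows)  # two GAMES applications, so two role rows
```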
|
Incorporate your other parameters into your select statement
```
INSERT INTO Role (application, developer, role)
SELECT ?, Id, 'ARCHITECT' FROM Application a
JOIN application d ON d.id = a.id and a.category='GAMES'
```
|
MySQL How do you INSERT INTO a table when a SELECT subquery returning multiple rows?
|
[
"",
"mysql",
"sql",
""
] |
I noticed on some test data that queries of the form
```
SELECT *
FROM a
JOIN b ON a.x = b.y
WHERE [more conditions...];
```
were returning the same data as queries of the form
```
SELECT *
FROM a,b
WHERE a.x = b.y
AND [more conditions...];
```
Intuitively they seem equivalent, but I'm not confident that this is the case.
I prefer the query that uses `JOIN` as it separates the general structure of the query from the specific business logic of the query. i.e. The conditions in the `WHERE` section of the query using `JOIN` are ones that could be parameterized.
In which other ways, if any, are these queries not identical?
|
Both forms are equivalent but another reason why
```
SELECT *
FROM a
JOIN b ON a.x = b.y
WHERE [more conditions...];
```
is often preferred is that it offers more flexibility if you need to filter things a bit differently. Imagine for instance that x may be `null`. You have just to change the type of join to `left join`.
|
These queries are logically identical. The logical differences between the new `JOIN` form (SQL-92) and the older `,` form are in how the outer join expressions work.
Where they are not identical is in code quality. The SQL-89 form was superseded by SQL-92 over 20 years ago, and the newer form is much preferred for its clarity, better standards adoption, and greater expressive power for outer joins.
|
Is FROM x JOIN y ON x.a = y.b equivalent to FROM x,y WHERE x.a=y.b?
|
[
"",
"sql",
"performance",
"join",
"syntactic-sugar",
"equivalent",
""
] |
I'm trying to join 3 tables into a select statement, counting the occurrences in one table while still showing the record if no occurrences happen.
My example can be seen in the quick SQLFiddle that I've put together. I've tried to use left joins but it doesn't produce the result I want.
<http://sqlfiddle.com/#!6/e2840/8>
This is the SQL Statement:
```
SELECT O.OptionID,O.OptionName, Count(A.OptionID) AS Total
FROM Options as O
LEFT JOIN Answers AS A ON O.OptionID = A.OptionID
LEFT JOIN Users as U ON A.UserId = U.UserID
WHERE A.QuestionID = 1
GROUP BY O.OptionID,O.OptionName
```
What I want it to return is all the rows from the options table and display a 0 in the total column if no answers were found for that option.
What the current fiddle returns:

What I would like it to return:

|
Subselect answer:
```
SELECT O.OptionID,O.OptionName, Count(A.OptionID) AS Total
FROM Options as O
LEFT JOIN
(
SELECT OPTIONID,USERID
FROM Answers WHERE QuestionID = 1
) AS A ON O.OptionID = A.OptionID
LEFT JOIN Users as U ON A.UserId = U.UserID
GROUP BY O.OptionID,O.OptionName
```
|
You are using a WHERE clause with LEFT JOIN, which is making it act like an INNER JOIN.
Try this:
```
SELECT O.OptionID, O.OptionName, Count(A.OptionID) AS Total
FROM Options as O
LEFT JOIN Answers AS A ON O.OptionID = A.OptionID
and A.QuestionID = 1
LEFT JOIN Users as U ON A.UserId = U.UserID
GROUP BY O.OptionID,O.OptionName
```
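The difference between filtering in `ON` and filtering in `WHERE` is easy to demonstrate. Here is a sketch with Python's built-in `sqlite3` module and invented data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE options (optionid INTEGER PRIMARY KEY, optionname TEXT);
CREATE TABLE answers (optionid INTEGER, questionid INTEGER);
INSERT INTO options VALUES (1, 'Yes'), (2, 'No'), (3, 'Maybe');
INSERT INTO answers VALUES (1, 1), (1, 1), (2, 1), (2, 2);
""")

# Filter in the ON clause: unmatched options survive with a 0 count.
on_clause = conn.execute("""
SELECT o.optionid, count(a.optionid)
FROM options o
LEFT JOIN answers a ON o.optionid = a.optionid AND a.questionid = 1
GROUP BY o.optionid ORDER BY o.optionid
""").fetchall()

# Same filter in WHERE: the NULLs produced by the outer join fail the test,
# so the join degenerates to an inner join and 'Maybe' disappears.
where_clause = conn.execute("""
SELECT o.optionid, count(a.optionid)
FROM options o
LEFT JOIN answers a ON o.optionid = a.optionid
WHERE a.questionid = 1
GROUP BY o.optionid ORDER BY o.optionid
""").fetchall()
print(on_clause, where_clause)
```

In the `ON`-clause version the unanswered option keeps its zero row; in the `WHERE` version it is filtered away entirely.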
|
SQL Join to include NULL values
|
[
"",
"sql",
"sql-server",
"join",
"sql-server-2012",
""
] |
I have an insert statement similar to this:
```
insert into table (id, name, descr) values (4, 'asdf', 'this is not a word');
```
I need to insert this same statement with multiple ids. Right now I have:
```
insert into table (id, name, descr) values (4, 'asdf', 'this is not a word');
insert into table (id, name, descr) values (6, 'asdf', 'this is not a word');
insert into table (id, name, descr) values (7, 'asdf', 'this is not a word');
insert into table (id, name, descr) values (9, 'asdf', 'this is not a word');
```
Am I just going to have to run this, or is there a more condensed version?
|
Use a `select . . . insert`:
```
insert into table(id, name, descr)
select i.id, 'asdf', 'this is not a word'
from (select 4 as id from dual union all
select 6 from dual union all
select 7 from dual union all
select 9 from dual
) i;
```
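The same pattern works in engines without `DUAL`. Here is a sketch with Python's built-in `sqlite3` module, where a bare `SELECT` needs no `FROM` clause; the table name `tbl` is invented (SQLite reserves `table`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (id INTEGER, name TEXT, descr TEXT)")

# The UNION ALL branches play the role of Oracle's rows from dual:
# one source row per id, with the constant columns repeated for each.
conn.execute("""
INSERT INTO tbl (id, name, descr)
SELECT i.id, 'asdf', 'this is not a word'
FROM (SELECT 4 AS id UNION ALL
      SELECT 6 UNION ALL
      SELECT 7 UNION ALL
      SELECT 9) i
""")
rows = conn.execute("SELECT id, name FROM tbl ORDER BY id").fetchall()
print(rows)  # four rows from one statement
```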
|
```
INSERT INTO [TableName] (id, name, descr) VALUES
(4, 'asdf', 'this is not a word'),
(6, 'asdf', 'this is not a word'),
(7, 'asdf', 'this is not a word'),
(9, 'asdf', 'this is not a word')
```
|
Insert same data multiple times
|
[
"",
"sql",
"oracle",
"plsql",
""
] |
I'm trying to calculate responses for a questionnaire system. I want to show the result in one table (the question, options, number of responses). I wrote a query which works just fine; however, it doesn't display options for which there are no responses.
My query
```
SELECT R.QuestionID, Q.QuestionName, A.OptionName, COUNT(R.OptionID) AS Responses, A.OptionID
FROM Response AS R
INNER JOIN
Question AS Q ON Q.QuestionID = R.QuestionID
INNER JOIN
Option AS A ON R.OptionID = A.OptionID
WHERE (R.QuestionnaireID = 122)
GROUP BY R.QuestionID, Q.QuestionName, A.OptionName, R.OptionID, A.OptionID
```
**database structure**:
* Questionnaire (questionnaireID PK, questionnaireName)
* Question (questionID PK, questionnaireID FK, questionnaireName)
* Option (OptionID PK, questionID FK, optionName)
* Response (ResponseID PK, questionnaireID FK, questionID FK, value)
**Table definitions**
```
CREATE TABLE [dbo].[Questionnaire] (
[QuestionnaireID] INT IDENTITY (1, 1) NOT NULL,
[QuestionnaireName] NVARCHAR (100) NOT NULL,
PRIMARY KEY CLUSTERED ([QuestionnaireID] ASC),
);
CREATE TABLE [dbo].[Question] (
[QuestionID] INT IDENTITY (1, 1) NOT NULL,
[QuestionnaireID] INT NOT NULL,
[QuestionName] NVARCHAR (250) NOT NULL,
PRIMARY KEY CLUSTERED ([QuestionID] ASC),
CONSTRAINT [FK_Question_Questionnaire] FOREIGN KEY ([QuestionnaireID]) REFERENCES [dbo].[Questionnaire] ([QuestionnaireID])
);
CREATE TABLE [dbo].[Option] (
[OptionID] INT IDENTITY (1, 1) NOT NULL,
[QuestionID] INT NOT NULL,
[OptionName] NVARCHAR (150) NOT NULL,
PRIMARY KEY CLUSTERED ([OptionID] ASC),
CONSTRAINT [FK_Option_Question] FOREIGN KEY ([QuestionID]) REFERENCES [dbo].[Question] ([QuestionID])
);
CREATE TABLE [dbo].[Response] (
[ResponseID] INT IDENTITY (1, 1) NOT NULL,
[QuestionnaireID] INT NOT NULL,
[QuestionID] INT NOT NULL,
[Val] NVARCHAR (150) NOT NULL,
[OptionID] INT NULL,
PRIMARY KEY CLUSTERED ([ResponseID] ASC),
CONSTRAINT [FK_Response_Option] FOREIGN KEY ([OptionID]) REFERENCES [dbo].[Option] ([OptionID]),
CONSTRAINT [FK_Response_Question] FOREIGN KEY ([QuestionID]) REFERENCES [dbo].[Question] ([QuestionID]),
CONSTRAINT [FK_Response_Questionnaire] FOREIGN KEY ([QuestionnaireID]) REFERENCES [dbo].[Questionnaire] ([QuestionnaireID])
);
```
**Current data:**
```
insert into questionnaire values ('ASP.NET questionnaire');
insert into questionnaire values('TEST questionnaire');
insert into question values (2, 'rate our services');
insert into question values (2, 'On scale from 1 to 5, how much youre sleepy?');
insert into question values (2, 'how are you today');
insert into [Option] values (1, 'good');
insert into [Option] values (1, 'bad');
insert into [Option] values (1, 'medium');
insert into [Option] values(2, '1');
insert into [Option] values(2, '2');
insert into [Option] values(2, '3');
insert into [Option] values(2, '4');
insert into [Option] values(2, '5');
insert into [option] values (3, 'fine');
insert into [option] values (3, 'great');
insert into [option] values (3, 'not bad');
insert into [option] values (3, 'bad');
insert into response values(2, 1, 'good', 1);
insert into response values(2, 1, 'good', 1);
insert into response values(2, 1, 'bad', 2);
insert into response values(2, 1, 'good', 1);
insert into response values(2, 2, '1', 4);
insert into response values(2, 2, '3', 3);
insert into response values(2, 2, '4', 5);
insert into response values(2, 2, '5', 8);
```
**Desired output**

**SQL Fiddle**
[Sql Fiddle](http://sqlfiddle.com/#!6/084d0/3)
|
You need to use a `LEFT JOIN` if you want to display all the options, including those that have no responses.
**EDIT**
I have updated the answer based on your SQL fiddle. It works in [SQL Fiddle](http://sqlfiddle.com/#!6/4acf0/14) and gives you your desired output.
```
SELECT Q.QuestionName AS Question,
A.OptionName AS [Option],
COUNT(R.OptionID) AS Responses
FROM Question AS Q
INNER JOIN
[Option] AS A ON A.questionID = Q.questionID
LEFT JOIN
Response AS R ON Q.QuestionID = R.QuestionID AND R.OptionId=A.Optionid
WHERE (Q.QuestionnaireID = 2)
GROUP BY Q.QuestionID, Q.QuestionName, A.OptionName
ORDER BY Q.QuestionName,A.OptionName
```
|
It could be something to do with your Inner joins. an Inner join produces only the set of records that match in both Table A and Table B.
Reviewing this might be of help <http://blog.codinghorror.com/a-visual-explanation-of-sql-joins/>
|
SQL table joins query
|
[
"",
"sql",
""
] |
I'm trying to alter *(not best practice?)* a SQL server database's table that was installed by a piece of software.
**Goal:**
Set a constraint on a column to be a primary key then remove it.
**Notes:**
`APPT` - Table name
`ApptId` - Column that functions as a primary key (but was not set as the primary key by the software)
**Question:**
How to resolve `___ is not a constraint.` error message?
The command I'm trying to run (test) is as follows and should enable and disable the `ApptId` row as the primary key:
```
-- Set an existing field as the primary key
ALTER TABLE APPT
ADD PRIMARY KEY (ApptID)
-- Remove primary key constraint
ALTER TABLE APPT
DROP CONSTRAINT ApptID
```
The above command sets the `ApptID` as the primary key, however when it tries to **drop** it an **error message** is produced:
```
-- Error 3728: 'ApptID' is not a constraint.
-- Could not drop constraint. See previous errors.
```
Why is this the case?
```
-- List all constraints for a specific table
SELECT OBJECT_NAME(OBJECT_ID) AS NameofConstraint,
OBJECT_NAME(parent_object_id) AS TableName,
type_desc AS ConstraintType
FROM sys.objects
WHERE type_desc LIKE '%CONSTRAINT' AND parent_object_id = OBJECT_ID('APPT')
```
I get:
```
NameofConstraint | TableName | ConstraintType
PK__APPT__EDACF695230515B9 | APPT | PRIMARY_KEY_CONSTRAINT
```
---
When I set the `ApptId` column as the primary key via SQL Server Management Studio, the resulting name looks cleaner (i.e. without a random string):
```
NameofConstraint | TableName | ConstraintType
PK_APPT | APPT | PRIMARY_KEY_CONSTRAINT
```
But this does not resolve my issue when trying to programmatically drop the primary key constraint on the `ApptId` field.
|
When you add a primary key without a name, SQL Server generates a name for the constraint.
So you are seeing the name `PK__APPT__EDACF695230515B9`.
In this case you need to use the generated name to drop the constraint:
```
ALTER TABLE APPT DROP CONSTRAINT PK__APPT__EDACF695230515B9
```
When you added the primary key from SSMS, it gave the name `PK_APPT`.
You can do the same:
```
ALTER TABLE APPT
ADD CONSTRAINT PK_APPT PRIMARY KEY (ApptID)
```
|
Have you tried using the name of the constraint?
```
ALTER TABLE APPT DROP CONSTRAINT PK_APPT;
```
|
Error 3728: is not a constraint
|
[
"",
"sql",
"sql-server",
"database",
"sql-server-2008",
""
] |
I am using the following command to output the result of an SQL query to a text file:
```
$sqlite3 my_db.sqlite "SELECT text FROM message;" > out.txt
```
This gives me output like this:
```
text for entry 1
text for entry 2
```
Unfortunately, this breaks down when the text contains a newline:
```
text for entry 1
text for
entry 2
```
How can I specify an output delimiter (which I know doesn't exist in the text) for SQLite to use when outputting the data so I can more easily parse the result? E.g.:
```
text for entry 1
=%=
text for
entry 2
=%=
text for entry 3
```
|
In order to separate columns, you would have to work with group\_concat and a separator.
Query evolution:
```
SELECT text FROM messages;
SELECT GROUP_CONCAT(text, "=%=") FROM messages;
SELECT GROUP_CONCAT(text, "\r\n=%=\r\n") FROM messages;
// to get rid of the concat comma, use replace OR change seperator
SELECT REPLACE(GROUP_CONCAT(text, "\r\n=%="), ',', '\r\n') FROM messages;
```
**[SQLFiddle](http://sqlfiddle.com/#!2/5cdbb/4)**
Alternative: SQLite to CSV export (with a custom separator), then work with that.
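The `group_concat` route can also be driven from a script. Below is a sketch with Python's built-in `sqlite3` module, using `char(10)` to build the newline-wrapped delimiter; the sample rows mirror the question, including the embedded newline:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE message (text TEXT);
INSERT INTO message VALUES ('text for entry 1'), ('text for
entry 2'), ('text for entry 3');
""")

# group_concat with an explicit multi-character delimiter keeps records
# parseable even when a value itself contains newlines.
(joined,) = conn.execute(
    "SELECT group_concat(text, char(10) || '=%=' || char(10)) FROM message"
).fetchone()
parts = joined.split("\n=%=\n")
print(len(parts))  # all 3 records recovered despite the embedded newline
```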
|
Try [-separator option](http://www.sqlite.org/sessions/sqlite.html) for this.
```
$sqlite3 -separator '=%=' my_db.sqlite "SELECT text FROM message;" > out.txt
```
**Update 1**
I guess this is because of the '-list' default option. In order to turn this option off you need to change the current mode.
This is a list of modes
```
.mode MODE ?TABLE? Set output mode where MODE is one of:
csv Comma-separated values
column Left-aligned columns. (See .width)
html HTML <table> code
insert SQL insert statements for TABLE
line One value per line
list Values delimited by .separator string
tabs Tab-separated values
tcl TCL list elements
-list Query results will be displayed with the separator (|, by
default) character between each field value. The default.
-separator separator
Set output field separator. Default is '|'.
```
Found this info [here](http://manpages.ubuntu.com/manpages/precise/man1/sqlite3.1.html)
|
How can I specify the record delimiter to be used in SQLite's output?
|
[
"",
"sql",
"sqlite",
""
] |
Could you please help me with a query? I need to select products that have the same combination of IDs.
For example, products with IDs 70 and 75: they both have filters 1 and 12.
IN won't work; it would also match #66, because it has filter 1, but its second filter is 11 and that's not what I need.
```
product_id | filter_id
______________________
66 | 1
66 | 11
68 | 9
69 | 13
70 | 1
70 | 12
71 | 14
72 | 4
72 | 17
73 | 7
73 | 14
74 | 16
75 | 1
75 | 12
```
|
Try this:
```
SELECT t1.*
FROM table AS t1
JOIN table AS t2 ON t1.product_id = t2.product_id
WHERE t1.filter_id = 1
  AND t2.filter_id = 12
```
|
```
SELECT Product_ID
, Filter_ID
FROM Your_Table a
WHERE Filter_ID = 1
AND EXISTS (
SELECT NULL
FROM Your_Table b
WHERE Filter_ID = 12
AND b.Product_ID = a.Product_ID
)
```
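Both answers can be sanity-checked with Python's built-in `sqlite3` module; this sketch uses the self-join variant (the table name `product_filter` is an assumption, since the original table name was not given):

```
import sqlite3

# Table and rows reconstructed from the question; the name
# product_filter is an assumption
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE product_filter (product_id INT, filter_id INT)")
con.executemany("INSERT INTO product_filter VALUES (?, ?)",
                [(66, 1), (66, 11), (68, 9), (69, 13), (70, 1), (70, 12),
                 (71, 14), (72, 4), (72, 17), (73, 7), (73, 14), (74, 16),
                 (75, 1), (75, 12)])

# Self-join: alias t1 is pinned to filter 1, alias t2 to filter 12,
# so only products having BOTH filters survive
rows = con.execute("""
    SELECT DISTINCT t1.product_id
    FROM product_filter t1
    JOIN product_filter t2 ON t1.product_id = t2.product_id
    WHERE t1.filter_id = 1 AND t2.filter_id = 12
    ORDER BY t1.product_id
""").fetchall()
print(rows)  # [(70,), (75,)]
```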
|
SQL select: fits both id's
|
[
"",
"mysql",
"sql",
""
] |
Read below and navigate to this url <http://sqlfiddle.com/#!2/e825f> to get a better understanding of my issue.
I have been trying to get this query right for a while but cannot figure it out. I have been trying to join two tables to get the right data, but maybe I should create a third table. See my situation below:
I have two tables:
1. sites table - lists all websites that I have. See below for how I created my table
```
create table sites
(
id int,
websiteName varchar(50),
url varchar(50),
mobile_id varchar(50),
is_responsive varchar(1)
);
```
2. Mobiles table - lists which websites are mobile and has a mobile url
```
create table mobile
(
id int,
mobile_url varchar(50)
);
```
The `sites` table and the `mobile` table are related through the foreign key in the `sites` table called [mobile\_id].
The is\_responsive column located in the [sites table] is a bit field that holds a 1, stating this site is fully responsive, or a 0 stating this site is non-responsive.
MY GOAL:
To build a query that returns all mobile websites and responsive websites. I tried using three tables but that did not work. Originally my third table held all responsive websites, but adding another table does not make sense. Also I found a solution using the UNION statement, but I do not want to use that.
|
Based on your description of the specification... "all mobile websites and responsive websites"...
I will suggest that something of this form may return the resultset you specified.
```
SELECT s.id
, s.websiteName
, s.url
, s.mobile_id
, s.is_responsive
, IF(m.mobile_url IS NOT NULL,'1','0') AS is_mobile
FROM sites s
LEFT
JOIN mobile m
ON m.mobile_url = s.mobile_id
WHERE m.mobile_url IS NOT NULL
OR s.is_responsive = '1'
```
We'd normally expect the foreign key to reference the id column, but given that you say you have a foreign key (and don't specify which column is the target), and the foreign key column has a datatype of VARCHAR(50), we are going to guess that this references the `mobile_url` column, since that is the only column in mobile that has a matching datatype.
If the intent is to reference the `id` column, then the datatype of the `mobile_id` should *match* the datatype of the `id` column, and the join predicate would be:
```
ON m.id = s.mobile_id
```
Also, each of your tables should have a PRIMARY KEY, or at least a UNIQUE KEY, defined.
The table definitions I would use, based on what you posted, would be something along these lines:
```
CREATE TABLE mobile
( id INT UNSIGNED NOT NULL PRIMARY KEY COMMENT 'PK'
, mobile_url VARCHAR(50) COMMENT 'mobile website URL'
) ;
CREATE TABLE site
( id INT UNSIGNED NOT NULL PRIMARY KEY COMMENT 'PK'
, websiteName VARCHAR(50) COMMENT 'website name'
, url VARCHAR(50) COMMENT 'website URL'
, mobile_id INT UNSIGNED COMMENT 'FK ref mobile.id'
, is_responsive TINYINT(1) NOT NULL DEFAULT 0 COMMENT 'boolean'
  , CONSTRAINT FK_site_mobile FOREIGN KEY (mobile_id) REFERENCES mobile(id)
) ;
```
|
Try something like this
```
select sites.* from sites
left join mobile on mobile.id = sites.mobile_id
where (sites.is_responsive = '1' or mobile_id is not null)
```
|
SQL and Joining
|
[
"",
"mysql",
"sql",
"join",
""
] |
I have a table like this:
```
--------------------------------------------------------------
| Sl. No. | bal1 | bal2 | bal3 | status1 | status2 | status3 |
--------------------------------------------------------------
| 1 | 520 | 270 | 351 | 1 | 0 | 1 |
| 2 | 201 | 456 | 154 | 0 | 1 | 1 |
--------------------------------------------------------------
```
I would like to sum, per row, the balance fields whose corresponding status value is 1, in SQL Server.
eg. result
```
--------------------
| Sl. No. | amount |
--------------------
| 1 | 871 | // bal1 + bal3 as the status1 and status3 is 1
| 2 | 610 | // bal2 + bal3 as the status2 and status3 is 1
--------------------
```
Thanks in advance.
|
you can do it using `CASE`
[SQL Fiddle](http://www.sqlfiddle.com/#!3/3c0cf/4)
```
SELECT [Sl. No.],
(case when status1 =1 then bal1 else 0 end +
case when status2 =1 then bal2 else 0 end +
case when status3 =1 then bal3 else 0 end) as balance
from Table1
```
|
If the `status` values will always be 1 or 0, you can multiply and add:
```
select [Sl. No.], bal1 * status1 + bal2 * status2 + bal3 * status3
from table
```
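A quick way to confirm the multiply-and-add trick is with an in-memory SQLite database via Python's built-in `sqlite3` module (column names are taken from the question; `sl_no` and the table name `balances` are illustrative):

```
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE balances (sl_no INT, bal1 INT, bal2 INT, bal3 INT,
                                      status1 INT, status2 INT, status3 INT)""")
con.executemany("INSERT INTO balances VALUES (?,?,?,?,?,?,?)",
                [(1, 520, 270, 351, 1, 0, 1),
                 (2, 201, 456, 154, 0, 1, 1)])

# Each status is 0 or 1, so multiplying masks out the unwanted balances
rows = con.execute("""
    SELECT sl_no, bal1*status1 + bal2*status2 + bal3*status3 AS amount
    FROM balances ORDER BY sl_no
""").fetchall()
print(rows)  # [(1, 871), (2, 610)]
```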
|
SQL Server : conditionally add rows and columns
|
[
"",
"sql",
"sql-server",
""
] |
We have these tables
```
CREATE TABLE tbl01
(
[id] int NOT NULL PRIMARY KEY,
[name] nvarchar(50) NOT NULL
)
CREATE TABLE tbl02
(
[subId] int NOT NULL PRIMARY KEY ,
[id] int NOT NULL REFERENCES tbl01(id),
[val] nvarchar(50) NULL,
[code] int NULL
)
```
If we run this query:
```
SELECT
tbl01.id, tbl01.name, tbl02.val, tbl02.code
FROM
tbl01
INNER JOIN
tbl02 ON tbl01.id = tbl02.id
```
we get these results:
```
-------------------------------
id | name | val | code
-------------------------------
1 | one | FirstVal | 1
1 | one | SecondVal | 2
2 | two | YourVal | 1
2 | two | OurVal | 2
3 | three | NotVal | 1
3 | three | ThisVal | 2
-------------------------------
```
You can see that every two rows are related to the same `id`.
The question is: for each `id` we need to retrieve one record with all the `val` values, each `val` returned in a column determined by the value of the `code` column:
```
if(code = 1) then val as val-1
else if (code = 2) then val as val-2
```
Like this:
```
-------------------------------
id | name | val-1 | val-2
-------------------------------
1 | one | FirstVal | SecondVal
2 | two | YourVal | OurVal
3 | three | NotVal | ThisVal
-------------------------------
```
Any advice?
|
Use can use MAX and Group By to achieve this
```
SELECT id,
name,
MAX([val1]) [val-1],
MAX([val2]) [val-2]
FROM ( SELECT tbl01.id, tbl01.name,
CASE code
WHEN 1 THEN tbl02.val
ELSE ''
END [val1],
CASE code
WHEN 2 THEN tbl02.val
ELSE ''
END [val2]
FROM tbl01
INNER JOIN tbl02 ON tbl01.id = tbl02.id
) Tbl
GROUP BY id, name
```
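The CASE-plus-MAX pivot can be tested end to end with Python's built-in `sqlite3` module (schema and data are reconstructed from the question; NULL is used here instead of '' in the ELSE branch, which MAX skips anyway):

```
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tbl01 (id INT, name TEXT);
    CREATE TABLE tbl02 (subId INT, id INT, val TEXT, code INT);
    INSERT INTO tbl01 VALUES (1,'one'),(2,'two'),(3,'three');
    INSERT INTO tbl02 VALUES
        (1,1,'FirstVal',1),(2,1,'SecondVal',2),
        (3,2,'YourVal',1),(4,2,'OurVal',2),
        (5,3,'NotVal',1),(6,3,'ThisVal',2);
""")

# CASE routes each val into its own column by code; MAX collapses the
# two rows per id into one (MAX ignores the NULLs the CASE produces)
rows = con.execute("""
    SELECT t1.id, t1.name,
           MAX(CASE WHEN t2.code = 1 THEN t2.val END) AS val_1,
           MAX(CASE WHEN t2.code = 2 THEN t2.val END) AS val_2
    FROM tbl01 t1 JOIN tbl02 t2 ON t2.id = t1.id
    GROUP BY t1.id, t1.name
    ORDER BY t1.id
""").fetchall()
print(rows)
```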
|
If there are always only two values, you could join them or even easier, group them:
```
SELECT tbl01.id as id, MIN(tbl01.name) as name, MIN(tbl02.val) as [val-1], MAX(tbl02.val) as [val-2]
FROM tbl01
INNER JOIN tbl02 ON tbl01.id = tbl02.id
GROUP BY tbl01.id
```
note: this query will always put the lowest value in the first column and highest in the second, if this is not wanted: use the join query:
# Join query
If you always want code 1 in the first column and code 2 in the second:
```
SELECT tbl01.id as id, tbl01.name as name, t2.val as [val-1], t3.val as [val-2]
FROM tbl01
INNER JOIN tbl02 t2 ON tbl01.id = t2.id
AND t2.code = 1
INNER JOIN tbl02 t3 ON tbl01.id = t3.id
AND t3.code = 2
```
# Variable amount of columns
You cannot get a variable number of columns unless you build your query dynamically in code or in T-SQL stored procedures.
My advice:
If it's always two values, join them in the query; if not, let your server-side code transform the data (or even better, find a way that makes transforming the data unnecessary).
|
SQL Server, Merge two records in one record
|
[
"",
"sql",
"sql-server",
""
] |
I've been searching for a bit without finding anything. I'm not 100% at home with SQL but here's what I want to do:
```
SELECT TOP (SELECT COUNT (DISTINCT Name)
FROM Logg WHERE Id= 1)
* FROM Logg WHERE Id= 1
```
In other words, I want a single query, which first counts the number of rows with a distinct "Name" and then selects the top x amount of rows given that count.
How would I structure such a query?
|
Your query does work - you just need to specify a column list:
```
SELECT TOP (SELECT COUNT (DISTINCT Name) FROM Logg WHERE Id= 1) * -- Need to specify columns
FROM Logg WHERE Id= 1;
```
[SqlFiddle](http://sqlfiddle.com/#!3/a0d06/1)
|
```
SELECT TOP 1 * FROM Logg WHERE Id= 1
```
|
How to specify a query for COUNT?
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"select",
""
] |
In essence I have one table with two columns
```
One Two
-----------
A B
B C
C D
```
and I want to count the number of A's through D.
Resulting
```
Letter Count
---------------
A 1
B 2
C 2
D 1
```
My code now is
```
Select one, count("") from table
group by one
union
Select two, count("*") from table
group by two
```
Right now I am getting
```
Letter Count
---------------
A 1
B 1
B 1
C 1
C 1
D 1
```
How do I fix it?
|
Try this
```
SELECT Count(a),
a
FROM (SELECT cola a
FROM table
UNION ALL
SELECT colb a
FROM table) c
GROUP BY a
```
|
There is no reason to group twice.
```
select letter, count(*) as total_cnt
from
(
Select one as letter from table
union all
Select two as letter from table
) t
group by letter;
```
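Here is a runnable sketch of the UNION ALL approach using Python's built-in `sqlite3` module (the table is named `pairs` here, since the original name was not given):

```
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE pairs (one TEXT, two TEXT)")
con.executemany("INSERT INTO pairs VALUES (?, ?)",
                [("A", "B"), ("B", "C"), ("C", "D")])

# UNION ALL keeps duplicates, so the outer GROUP BY sees every occurrence
rows = con.execute("""
    SELECT letter, COUNT(*) AS total_cnt
    FROM (SELECT one AS letter FROM pairs
          UNION ALL
          SELECT two AS letter FROM pairs) t
    GROUP BY letter
    ORDER BY letter
""").fetchall()
print(rows)  # [('A', 1), ('B', 2), ('C', 2), ('D', 1)]
```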
|
Counting from 2 columns in SQL server
|
[
"",
"sql",
"sql-server",
""
] |
I'm developing an Android app which uses an SQLite database.
I have a **ListView** which uses data from the database to show a list (see picture).
The picture is encoded as a Base64 string and stored in the database. My problem is this log:
```
10-19 16:51:36.612: W/CursorWindow(15151): Window is full:
requested allocation 136877 bytes, free space 78836 bytes, window size 2097152 bytes
```
It skips a lot of frames, because the read time grows.
This is because I always read x+10 rows:
first it reads 10 rows, then 20, then 30, and so on...
**The solution I want to use is to get rows 0-10, then 11-20, then 21-30 and so on. How do I achieve this? I just need the SQL query.**
**EDIT:** my query
```
String columns[] = {KEY_ID, KEY_NAME, KEY_RATING, KEY_CUISINE, KEY_IMAGE};
Cursor cursor = db.query(TABLE_RECIPES, columns, null, null, null, null, KEY_NAME, lim);
```

|
Use the rawQuery method, and specify the limit keyword.
e.g.:
```
"SELECT * FROM myTable limit 10" <-- get the 1st 10 rows
"SELECT * FROM myTable limit 10, 20" <-- get the 2nd 10 rows between 10 and 20, etc
```
This should get you started.
|
I tried the answer above and the result was:
```
"SELECT * FROM myTable limit 10" <-- Got the first 10 rows
"SELECT * FROM myTable limit 10, 20" <-- Got the first 20 rows !
```
So the answer above did not work for me, what worked for me was this:
This is my table:
[](https://i.stack.imgur.com/qT77x.png)
To get the **first 10**:
```
select * from quran_text where sura = 2 limit 0, 10
```
[](https://i.stack.imgur.com/oTaCT.png)
**Second 10:**
```
select * from quran_text where sura = 2 limit 10, 10
```
[](https://i.stack.imgur.com/xLXhg.png)
**Third 10:**
```
select * from quran_text where sura = 2 limit 20, 10
```
[](https://i.stack.imgur.com/2n0i1.png)
The picture shows only 5 records because of space, but the query returns the right result.
So, as a general pattern (to get the first 10, X = 1), build the query in Java:
```
String query = "select * from quran_text where sura = 2 limit "+((X-1)*10)+", 10";
```
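The same offset/count paging pattern can be sketched outside Android with Python's built-in `sqlite3` module (table and data are illustrative):

```
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE recipes (id INTEGER PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO recipes (name) VALUES (?)",
                [("recipe %d" % i,) for i in range(1, 26)])

def page(page_no, page_size=10):
    # LIMIT <offset>, <count>: skip (page_no - 1) * page_size rows,
    # then take page_size rows
    offset = (page_no - 1) * page_size
    return con.execute("SELECT id FROM recipes ORDER BY id LIMIT ?, ?",
                       (offset, page_size)).fetchall()

print([r[0] for r in page(1)])  # ids 1..10
print([r[0] for r in page(2)])  # ids 11..20
print([r[0] for r in page(3)])  # ids 21..25
```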
|
Sqlite Get x rows, then next x rows
|
[
"",
"android",
"sql",
""
] |
I have the following table `MyTable`:
```
id │ value_two │ value_three │ value_four
────┼───────────┼─────────────┼────────────
1 │ a │ A │ AA
2 │ a │ A2 │ AA2
3 │ b │ A3 │ AA3
4 │ a │ A4 │ AA4
5 │ b │ A5 │ AA5
```
I want to query an array of objects `{ value_three, value_four }` grouped by `value_two`. `value_two` should be present on its own in the result. The result should look like this:
```
value_two │ value_four
───────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
a │ [{"value_three":"A","value_four":"AA"}, {"value_three":"A2","value_four":"AA2"}, {"value_three":"A4","value_four":"AA4"}]
b │ [{"value_three":"A3","value_four":"AA3"}, {"value_three":"A5","value_four":"AA5"}]
```
It does not matter whether it uses `json_agg()` or `array_agg()`.
However the best I can do is:
```
with MyCTE as ( select value_two, value_three, value_four from MyTable )
select value_two, json_agg(row_to_json(MyCTE)) value_four
from MyCTE
group by value_two;
```
Which returns:
```
value_two │ value_four
───────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
a │ [{"value_two":"a","value_three":"A","value_four":"AA"}, {"value_two":"a","value_three":"A2","value_four":"AA2"}, {"value_two":"a","value_three":"A4","value_four":"AA4"}]
b │ [{"value_two":"b","value_three":"A3","value_four":"AA3"}, {"value_two":"b","value_three":"A5","value_four":"AA5"}]
```
With an extra `value_two` key in the objects, which I would like to get rid of. Which SQL (Postgres) query should I use?
|
### [`json_build_object()`](https://www.postgresql.org/docs/current/functions-json.html#FUNCTIONS-JSON-CREATION-TABLE) in Postgres 9.4 or newer
Or [`jsonb_build_object()`](https://www.postgresql.org/docs/current/functions-json.html#FUNCTIONS-JSON-CREATION-TABLE) to return `jsonb`.
```
SELECT val2, json_agg(json_build_object('val3', val3
, 'val4', val4)) AS val4
FROM tbl
GROUP BY val2;
```
[The manual:](https://www.postgresql.org/docs/current/functions-json.html#FUNCTIONS-JSON-CREATION-TABLE)
> Builds a JSON object out of a variadic argument list. By convention,
> the argument list consists of alternating keys and values.
### For any version (incl. Postgres 9.3)
[`to_json()`](https://www.postgresql.org/docs/current/functions-json.html#FUNCTIONS-JSON-CREATION-TABLE) (or [`to_jsonb`](https://www.postgresql.org/docs/current/functions-json.html#FUNCTIONS-JSON-CREATION-TABLE)) with a [**`ROW`**](https://www.postgresql.org/docs/current/sql-expressions.html#SQL-SYNTAX-ROW-CONSTRUCTORS) expression does the trick. (Or [`row_to_json()`](https://www.postgresql.org/docs/current/functions-json.html#FUNCTIONS-JSON-CREATION-TABLE) with optional line feeds):
```
SELECT val2, json_agg(to_json((val3, val4))) AS val4
FROM tbl
GROUP BY val2;
```
But you lose original column names. A cast to a registered row type avoids that. (The row type of a temporary table serves for ad hoc queries, too.)
```
CREATE TYPE foo AS (val3 text, val4 text); -- once in the same session
```
```
SELECT val2, json_agg(to_json((val3, val4)::foo)) AS val4
FROM tbl
GROUP BY val2;
```
Or use a **subselect** instead of the `ROW` expression. More verbose, but without type cast:
```
SELECT val2, json_agg(to_json((SELECT t FROM (SELECT val3, val4) t))) AS val4
FROM tbl
GROUP BY val2;
```
More explanation in Craig's related answer:
* [Using row\_to\_json() with nested joins](https://stackoverflow.com/questions/13227142/postgresql-9-2-row-to-json-with-nested-joins)
[fiddle](https://dbfiddle.uk/siXk-Lm1)
Old [sqlfiddle](http://sqlfiddle.com/#!17/71087/1)
|
**to\_json** with **array\_agg** with **composite type**
```
begin;
create table mytable(
id bigint, value_two text, value_three text, value_four text);
insert into mytable(id,value_two, value_three,value_four)
values
( 1, 'a', 'A', 'AA'),
(2, 'a' , 'A2' , 'AA2'),
(3, 'b' , 'A3', 'AA3'),
( 4, 'a' , 'A4', 'AA4'),
(5, 'b' , 'A5', 'AA5');
commit;
create type mytable_type as (value_three text, value_four text);
```
---
```
select value_two,
to_json( array_agg(row(value_three,value_four)::mytable_type))
from mytable
group by 1;
```
|
Return multiple columns of the same row as JSON array of objects
|
[
"",
"sql",
"arrays",
"json",
"postgresql",
"aggregate-functions",
""
] |
I have a requirement to get the output in the following format when two dates are given as input:
* Input 1 --> `sysdate`
* Input 2 --> `to_date('10-OCT-2014')` --> this is dynamic
Output :
```
11 days 00 Hours 27 Minutes 41 Seconds
```
The seconds could be an optional one.
Currently I am using:
```
select TRIM (TRAILING '.' FROM (TRIM (BOTH '0' FROM (TRIM (LEADING '+' FROM NUMTODSINTERVAL (
(24
* 60
* 60
* (SYSDATE -
to_date('10-OCT-2014'))),
'second')))))) from dual
```
but the output is like "11 00:27:41", whereas the requirement is "11 days 00 Hours 27 Minutes 41 Seconds".
It can be a query or a function
|
I always appreciate questions related to datetime formatting with SQL. There are so many stupid hacks in the world (e.g. converting to strings and doing string operations)....
As your question is Oracle-specific, the answer by Multisync should be accepted. But if someone is looking for a version that does not need the numtodsinterval function (for portability or some other reason), you could also try this more verbose solution:
```
with base as (
select abs(sysdate-to_date('2014-10-10', 'YYYY-MM-DD')) as diff
from dual
), days as (
select floor(diff) part,
diff-floor(diff) remainder
from base
), hours as (
select floor(remainder*24) part,
remainder*24-floor(remainder*24) remainder
from days
), minutes as (
select floor(remainder*60) part,
remainder*60-floor(remainder*60) remainder
from hours
), seconds as (
select floor(remainder*60) part
from minutes
)
select days.part||' days '||
lpad(hours.part, 2, '0')||' hours '||
lpad(minutes.part, 2, '0')||' minutes '||
lpad(seconds.part, 2, '0')||' seconds'
from days, hours, minutes, seconds;
```
**Extended solution that respects the direction of the difference**
```
with base as (
select sysdate now, to_date('2014-10-10', 'YYYY-MM-DD') otherdate
from dual
), diff as (
select abs(now-otherdate) as value,
sign(now-otherdate) as dir
from base
), days as (
select floor(value) part,
value-floor(value) remainder
from diff
), hours as (
select floor(remainder*24) part,
remainder*24-floor(remainder*24) remainder
from days
), minutes as (
select floor(remainder*60) part,
remainder*60-floor(remainder*60) remainder
from hours
), seconds as (
select floor(remainder*60) part
from minutes
)
select case dir
when 1 then 'before '||strbuilder.intervalstr
when -1 then 'after '||strbuilder.intervalstr
else 'now' end
from (
select days.part||' days '||
lpad(hours.part, 2, '0')||' hours '||
lpad(minutes.part, 2, '0')||' minutes '||
lpad(seconds.part, 2, '0')||' seconds ' as intervalstr
from days, hours, minutes, seconds
) strbuilder, diff;
```
|
```
select extract(day from ds_int) || ' days '
|| extract(hour from ds_int) || ' Hours '
|| extract(minute from ds_int) || ' Minutes '
|| extract(second from ds_int) || ' Second'
from
(select numtodsinterval(sysdate - to_date('10-10-2014', 'DD-MM-YYYY'), 'day') ds_int from dual);
```
With leading "0" padding (won't work with a negative interval):
```
select extract(day from ds_int) || ' days '
|| lpad(extract(hour from ds_int), 2, 0) || ' Hours '
|| lpad(extract(minute from ds_int), 2, 0) || ' Minutes '
|| lpad(extract(second from ds_int), 2, 0) || ' Second'
from
(select numtodsinterval(sysdate - to_date('10-10-2014', 'DD-MM-YYYY'), 'day') ds_int from dual);
```
There is a restriction however - day may contain 9 digits (maximum)
It's better to avoid using a construction such "to\_date('10-OCT-2014')" - result may be unpredictable (depending on NLS settings of your session). Better define a mask "to\_date('10-10-2014', 'DD-MM-YYYY')"
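For comparison, the same days/hours/minutes/seconds decomposition is easy to express outside the database; this minimal Python sketch is not Oracle-specific, just a cross-check of the expected output:

```
from datetime import datetime

def format_diff(d1, d2):
    # Decompose the absolute difference into days/hours/minutes/seconds
    delta = abs(d1 - d2)
    hours, rem = divmod(delta.seconds, 3600)
    minutes, seconds = divmod(rem, 60)
    return ("%d days %02d Hours %02d Minutes %02d Seconds"
            % (delta.days, hours, minutes, seconds))

print(format_diff(datetime(2014, 10, 21, 0, 27, 41),
                  datetime(2014, 10, 10)))
# 11 days 00 Hours 27 Minutes 41 Seconds
```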
|
Formatting date (difference )- Oracle
|
[
"",
"sql",
"oracle",
""
] |
I have a concatenation in a query that takes values from different fields and creates the product name. My issue is that I need to keep the name under 80 characters, so if it passes 80 characters there are certain key words in the "name" that I should be able to delete. For example:
```
Oracle Lights 79-80 American Motors AMX 4x6" LED Amber Halo 2 Sealed Headlights
```
Is fine, it's just under 80 characters.
But for:
```
Oracle Lights 79-83 American Motors Concord 4x6" LED ColorSHIFT Halo 2 Sealed Headlights
```
this is 88 characters, so I have to figure out a way that if this concatenation is too long, the word `Lights` can be deleted. And if it's still over 80 characters, then the word `Oracle` should be deleted.
For anyone that wants to know the field `[year_single] = 79-80, [make] = American Motors, [model] = AMX, and [color] = Amber`. Everything else is string that is added in the concatenation because its static.
Anyone know a way I can do this ?
|
OK, thanks to Phicon for pushing me in the right direction. I created a function that takes a string and checks its length, removes "Lights" if the string is over 80 characters and "Oracle" afterward if it's still over 80 characters, and then called it in an update query.
```
Public Function shortname(value As String) As String
Dim temp As String
temp = value
If Len(temp) > 80 Then temp = Replace(temp, "Lights ", "", 1, 1)
If Len(temp) > 80 Then temp = Replace(temp, "Oracle ", "", 1, 1)
shortname = temp
End Function
```
update query:
```
UPDATE tbldata SET name = shortname(name);
```
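For reference, the same trimming rule is easy to express outside Access; this Python sketch mirrors the VBA function (the sample name is the 88-character example from the question):

```
def short_name(value, limit=80, droppable=("Lights ", "Oracle ")):
    # Drop the removable words one at a time until the name fits the limit
    for word in droppable:
        if len(value) > limit:
            value = value.replace(word, "", 1)
    return value

name = ('Oracle Lights 79-83 American Motors Concord 4x6" '
        'LED ColorSHIFT Halo 2 Sealed Headlights')
print(len(name), len(short_name(name)))  # 88 74
```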
|
You could try the following in VBA:
```
If Len(yourfield) > 80 Then yourfield = Replace(yourfield, "Lights ", "")
If Len(yourfield) > 80 Then yourfield = Replace(yourfield, "Oracle ", "")
```
|
Deleting certain words depending on length of the value
|
[
"",
"sql",
"ms-access",
"vba",
"ms-access-2013",
""
] |
I have four tables `Customer`, `Sales`, `Invoice`, and `Receipt`.
## Customer
```
ID Name
1 A
```
## Sales
```
ID Name
1 Ben
```
## Invoice
```
ID Amt Date CustomerID SalesID
1 12 1/9/2014 1 1
2 10 1/10/2014 1 1
3 20 2/10/2014 1 1
4 30 3/10/2014 1 1
```
## Receipt
```
ID Amt Date CustomerID SalesID
1 10 4/10/2014 1 1
```
I wish to join those 4 tables as below, summing up the amounts, but I am stuck on how to achieve my desired
## RESULT
```
CustomerID SalesID Inv_Amt Rep_Amt Month
1 1 12 0 9
1 1 60 10 10
```
I've been stuck for days and have no idea how to proceed.
|
You can get month-wise total receipt and invoice amounts by grouping and a subquery, like below:
```
SELECT Invoice.CustomerID [CustomerID],
Invoice.SalesID [SalesID],
SUM(Invoice.Amt) [Invoice_Amt],
ISNULL((SELECT SUM(Amt)
FROM Receipt
WHERE CustomerID = Invoice.CustomerID
AND SalesID = Invoice.SalesID
AND Month(Date) = Month(Invoice.Date)),0) [Receipt_Amt],
MONTH(Invoice.Date) Month
FROM Invoice
GROUP BY Invoice.CustomerID, Invoice.SalesID, MONTH(Invoice.Date)
```
**[SQL Fiddle Demo1](http://sqlfiddle.com/#!3/b0450/19)**
**Warning:** Here, data will only appear for months that exist in the Invoice table. If there is no data in the Invoice table for a given month, no result will appear for that month, even if receipts exist for it.
**UPDATE:**
To get results for all months from both the invoice and receipt tables, you can use a CTE as below:
```
;with CTE as
(
SELECT Invoice.CustomerID, Invoice.SalesID, MONTH(Invoice.Date) MonthNo FROM Invoice
UNION
SELECT Receipt.CustomerID, Receipt.SalesID, MONTH(Receipt.Date) MonthNo FROM Receipt
)
SELECT CTE.CustomerID [CustomerID],
CTE.SalesID [SalesID],
ISNULL((SELECT SUM(Amt)
FROM Invoice
WHERE CustomerID = CTE.CustomerID
AND SalesID = CTE.SalesID
AND Month(Date) = CTE.MonthNo),0) [Invoice_Amt],
ISNULL((SELECT SUM(Amt)
FROM Receipt
WHERE CustomerID = CTE.CustomerID
AND SalesID = CTE.SalesID
AND Month(Date) = CTE.MonthNo),0) [Receipt_Amt],
MonthNo
FROM CTE
```
**[SQL Fiddle Demo2](http://sqlfiddle.com/#!3/19cdf4/1)**
|
Frankly, since you're just selecting customer and sales IDs (as opposed to names), you don't even need to join all four tables:
```
SELECT i.CustomerID,
i.SalesID,
SUM(i.Amt) AS InvAmt,
SUM(r.Amt) AS RepAmt,
MONTH(i.`Date`) AS `Month`
FROM Invoice i
JOIN Receipt r ON i.CustomerID = r.CustomerID AND
i.SalesID = r.SalesID AND
MONTH(i.`Date`) = MONTH(r.`Date`)
GROUP BY i.CustomerID, i.SalesID, MONTH(i.`Date`)
```
|
SQL - Join multiple table
|
[
"",
"sql",
"sql-server",
"join",
""
] |
I faced an interview and below is the question. The tables are below
Table1: Book (BookId is primary key)
```
BookId | Book_Title
1 | Book1
2 | Book2
3 | Book3
4 | Book4
5 | Book5
```
Table2: Book\_Copies
```
BookId | BranchId | No_of_copies
1 1 2
1 2 5
2 1 0
2 2 2
2 3 0
3 1 0
3 2 0
```
The output should list all the Books with no stock at all as shown below.
**Output:**
```
Book_Title
---------
Book3
Book4
Book5
```
Please note that there are two possibilities: either 'No\_of\_copies' is 0, or there is no record at all for a book in the "Book\_Copies" table.
For e.g.
1. the total number of copies for "Book3" is 0. The output should
include "Book3"
2. the total number of copies for "Book1" and "Book2" is 7 and 2
respectively, so neither "Book1" nor "Book2" should be displayed
3. there is no entry for Book4 and Book5 in "Book\_Copies", so both
should be included in output
After coming home, I wrote the below query after so many trials :)
```
select B.Book_Title
from
(
select BC.BookId, sum(BC.No_of_copies) as 'No of copies'
from Book_Copies BC
group by BC.BookId
having sum(BC.No_of_copies) = 0
union
select B.BookId, BC.No_of_copies
from Book B
left outer join Book_Copies BC on B.BookId = BC.BookId
where BC.BookId is null
)
as BookIds_withNoStock
inner join Book B on B.BookId = BookIds_withNoStock.BookId
```
This query works fine and tested properly.
Is there any way we can improve this query (complexity, performance, etc.)?
If we can improve it, it would be helpful if you can provide the optimized query and reason. Thanks
|
Try This
```
SELECT B.Book_Title
From Book B LEFT JOIN book_copies BC ON B.Bookid = BC.Bookid
Group By B.Book_Title
Having SUM(ISNULL(BC.no_of_copies,0)) = 0
```
**[FIddle Demo](http://sqlfiddle.com/#!6/4da44/2)**
**Output:**
```
BOOK_TITLE
Book3
Book4
Book5
```
|
You want books with no stock. I would approach this with a `left outer join`, aggregation, and a `having` clause:
```
select b.bookid, b.book_title
from book b left join
book_copies bc
on b.bookid = bc.bookid
group by b.bookid, b.book_title
having coalesce(sum(bc.no_of_copies), 0) = 0;
```
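Both answers rely on the same idea: the LEFT JOIN keeps books with no copy rows at all, and ISNULL/COALESCE folds their NULL sum into 0 so the HAVING clause treats both cases the same. A runnable sketch with Python's built-in `sqlite3` module (data from the question):

```
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE book (bookid INT PRIMARY KEY, book_title TEXT);
    CREATE TABLE book_copies (bookid INT, branchid INT, no_of_copies INT);
    INSERT INTO book VALUES (1,'Book1'),(2,'Book2'),(3,'Book3'),
                            (4,'Book4'),(5,'Book5');
    INSERT INTO book_copies VALUES (1,1,2),(1,2,5),(2,1,0),(2,2,2),
                                   (2,3,0),(3,1,0),(3,2,0);
""")

# LEFT JOIN keeps Book4/Book5 (no copy rows); COALESCE turns their
# NULL sum into 0 so HAVING matches both zero-stock cases
rows = con.execute("""
    SELECT b.book_title
    FROM book b LEFT JOIN book_copies bc ON b.bookid = bc.bookid
    GROUP BY b.bookid, b.book_title
    HAVING COALESCE(SUM(bc.no_of_copies), 0) = 0
    ORDER BY b.bookid
""").fetchall()
print(rows)  # [('Book3',), ('Book4',), ('Book5',)]
```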
|
SQL interview- how to get the record if there is no record or count is zero using multiple tables
|
[
"",
"sql",
"sql-server",
"database",
"sql-server-2008",
""
] |
I need to perform a query to get the candies that are most liked by kids and here's what I've got:
```
SELECT COUNT(*) as counts,candies.name
FROM candies
INNER JOIN kid_candy ON kid_candy.candy_id = candies.id
INNER JOIN kids ON kids.id = kid_candy.kid_id
GROUP BY candies.name
```
which would return:
```
counts | name
--------+---------
3 | snowbear
3 | whiterabbit
2 | lollipop
```
All I want to see would be just
```
counts | name
--------+---------
3 | snowbear
3 | whiterabbit
```
So what should my query be?
|
Assuming **a classical n:m relationship** between kids and candies like detailed here:
* [How to implement a many-to-many relationship in PostgreSQL?](https://stackoverflow.com/questions/9789736/how-to-implement-a-many-to-many-relationship-in-postgresql/9790225#9790225)
You should provide such details in your question.
```
SELECT c.id, kc.counts, c.name
FROM (
SELECT candy_id AS id, count(*) AS counts
, rank() OVER (ORDER BY count(*) DESC) AS rnk
FROM kid_candy
GROUP BY 1
) kc
JOIN candies c USING (id)
WHERE kc.rnk = 1;
```
This should be considerably faster than a query with two CTEs and needless joins.
### Major points
* It is potentially wrong (and more expensive) to group by `candies.name`. The name might not be unique. Use the primary key column for that, which is probably `candies.id`
* Assuming referential integrity we do not need to join to the table `kids` at all.
* Since we need to inspect the whole table, it is faster to *aggregate first* and *join* to candies to get the `name` *later*.
* You can run a window function over an aggregate function:
+ [Get the distinct sum of a joined table column](https://stackoverflow.com/questions/13169367/get-the-distinct-sum-of-a-joined-table-column/13169627#13169627)
|
So, first let's get the counts per candy name (`tmp_table`), then get the max count over all candies (`max_cnt`), and finally put it all together and get the candies from `tmp_table` which have a count equal to `max(counts)` from the `max_cnt` table...
```
with tmp_table AS (
select COUNT(*) as counts,candies.name as c_name
from candies
INNER JOIN kid_candy ON kid_candy.candy_id = candies.id
INNER JOIN kids ON kids.id = kid_candy.kid_id
GROUP BY candies.name
),
max_cnt AS (
SELECT max(counts) as max_count from tmp_table
)
SELECT counts, c_name as candies
FROM tmp_table
JOIN max_cnt on max_count = counts
```
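A portable variant of the same idea, keeping only the groups whose count equals the maximum group count, can be tested with Python's built-in `sqlite3` module (the kid/candy rows are illustrative, since the original data was not given):

```
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE candies (id INT PRIMARY KEY, name TEXT);
    CREATE TABLE kid_candy (kid_id INT, candy_id INT);
    INSERT INTO candies VALUES (1,'snowbear'),(2,'whiterabbit'),(3,'lollipop');
    INSERT INTO kid_candy VALUES (1,1),(2,1),(3,1),
                                 (1,2),(2,2),(3,2),
                                 (1,3),(2,3);
""")

# Keep only the groups whose count equals the maximum group count
rows = con.execute("""
    SELECT COUNT(*) AS counts, c.name
    FROM kid_candy kc JOIN candies c ON c.id = kc.candy_id
    GROUP BY c.id, c.name
    HAVING COUNT(*) = (SELECT MAX(cnt) FROM
                       (SELECT COUNT(*) AS cnt
                        FROM kid_candy GROUP BY candy_id))
    ORDER BY c.name
""").fetchall()
print(rows)  # [(3, 'snowbear'), (3, 'whiterabbit')]
```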
|
Get rows with maximum count
|
[
"",
"sql",
"postgresql",
"aggregate-functions",
"greatest-n-per-group",
""
] |
Is it possible to select only the IDs of the users that do not have the admin role (role id = 3) without using NOT IN (i.e. the same query repeated with the role id)?
Each user has different roles(user, manager, admin, etc).
```
SELECT appUserInGroups.UserId, appRoles.RoleName, appRoles.RoleId
FROM appGroupInRoles
INNER JOIN appRoles ON appGroupInRoles.RoleId = appRoles.RoleId
INNER JOIN appUserInGroups ON appGroupInRoles.GroupId = appUserInGroups.GroupId
INNER JOIN appApplications ON appRoles.ApplicationId = appApplications.ApplicationId
WHERE appGroupInRoles.UserId not in ( SELECT appUserInGroups.UserId
FROM appGroupInRoles
INNER JOIN appRoles ON appGroupInRoles.RoleId = appRoles.RoleId
INNER JOIN appUserInGroups ON appGroupInRoles.GroupId = appUserInGroups.GroupId
INNER JOIN appApplications ON appRoles.ApplicationId = appApplications.ApplicationId
WHERE appGroupInRoles.RoleId = 3)
```
The NOT IN kills performance completely.
User Roles preview:

|
Try this:
```
WITH Admins AS
(
SELECT appUserInGroups.UserId AS uID
FROM appGroupInRoles
INNER JOIN appUserInGroups ON appGroupInRoles.GroupId = appUserInGroups.GroupId
WHERE appGroupInRoles.RoleId = 3
)
SELECT appUserInGroups.UserId, appRoles.RoleName, appRoles.RoleId
FROM appGroupInRoles
INNER JOIN appRoles ON appGroupInRoles.RoleId = appRoles.RoleId
INNER JOIN appUserInGroups ON appGroupInRoles.GroupId = appUserInGroups.GroupId
INNER JOIN appApplications ON appRoles.ApplicationId = appApplications.ApplicationId
LEFT JOIN Admins on appGroupInRoles.UserId = Admins.uID
WHERE Admins.uID is null
```
I removed some extra joins from the sub-query and added it as a CTE. I also used a left join instead of not in (which should actually compile in the execution plan the same) but added it to show you the "other way" to do it.
This should have the same exact execution plan as the following if the optimizer is working well.
```
SELECT appUserInGroups.UserId, appRoles.RoleName, appRoles.RoleId
FROM appGroupInRoles
INNER JOIN appRoles ON appGroupInRoles.RoleId = appRoles.RoleId
INNER JOIN appUserInGroups ON appGroupInRoles.GroupId = appUserInGroups.GroupId
INNER JOIN appApplications ON appRoles.ApplicationId = appApplications.ApplicationId
WHERE appGroupInRoles.UserId NOT IN (
SELECT appUserInGroups.UserId AS uID
FROM appGroupInRoles
INNER JOIN appUserInGroups ON appGroupInRoles.GroupId = appUserInGroups.GroupId
WHERE appGroupInRoles.RoleId = 3
)
```
*EDIT: See the link in the comments for a nice post explaining why you should always use NOT EXISTS on SQL Server. Random side note: from my experience you should use NOT IN on DB2.*
|
Try an exclusion join:
```
SELECT u.UserId, r.RoleName, r.RoleId
FROM appUserInGroups u
INNER JOIN appGroupInRoles g on g.GroupId = u.GroupId
INNER JOIN appRoles r ON r.RoleId = g.RoleId
LEFT JOIN (
--Admin Users
SELECT u.UserId
FROM appUserInGroups u
INNER JOIN appGroupInRoles g on g.GroupId = u.GroupId and g.RoleID=3
) a on a.UserId = u.UserId
WHERE a.UserId IS NULL
```
Not sure what you were doing with the `appApplications` table, and the order you were reading the tables in seemed a little strange... like you were starting in the middle.
You might be able to do even a little better still with a `NOT EXISTS` query.
|
Not selecting userId if not in role (admin), multiple roles in one user
|
[
"",
"sql",
"sql-server",
""
] |
I'm working on a big database and I'm looking for anything that can speed it up. The question is: when you have a unique index on some fields, which is faster — running a SELECT first to check whether the insert will succeed, or just attempting the insert and catching the exception if the entry already exists?
I've done some research but found nothing conclusive. Thanks.
|
A manual check won't do what you think it does. (See below.)
If you check first, *every* insert requires two round-trips to the database. It might also require serializable transactions.
And you have to trap errors anyway. A duplicate value is just *one* thing that can go wrong on an insert; there are a *lot* of other things that can go wrong.
I say just insert, and trap the errors.
---
The point of a SELECT before INSERT is to determine whether a value already exists in the database. But you can't rely on that to work. Here's why.
Open two terminal sessions (for example), and connect both to your database. This table already exists. It's empty.
```
create table test (
test_id serial primary key,
test_email varchar(15) not null unique
);
```
```
A: begin transaction;
A: select test_email
from test
where test_email = 'a@b.com';
(0 rows)
B: begin transaction;
A: insert into test (test_email)
values ('a@b.com');
INSERT 0 1
B: select test_email
from test
where test_email = 'a@b.com';
(0 rows)
B: insert into test (test_email)
values ('a@b.com');
(waiting for lock)
A: commit;
B: ERROR: duplicate key value
violates unique constraint...
```
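The insert-and-trap pattern is easy to demonstrate outside PostgreSQL too. Here is a minimal sketch using Python's sqlite3 with an in-memory database (the table is the one from the example above; the helper function is mine):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE test ("
    " test_id INTEGER PRIMARY KEY,"
    " test_email TEXT NOT NULL UNIQUE)"
)

def insert_email(email):
    """Attempt the insert; report False on a duplicate instead of pre-checking."""
    try:
        conn.execute("INSERT INTO test (test_email) VALUES (?)", (email,))
        return True
    except sqlite3.IntegrityError:
        # UNIQUE violation: one round-trip, and no SELECT-then-INSERT race window.
        return False

first = insert_email("a@b.com")
second = insert_email("a@b.com")
print(first, second)  # True False
```

The duplicate fails cleanly inside the error handler you need anyway, with no extra round-trip.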
|
you have 2 choices
1. Try the insert; if the query fails, catch the error and roll back the transaction.
2. Query first to check whether the value exists; insert only if it does not.
In my opinion the first is better: checking first costs a second network round-trip for every insert, which matters even more with really big data.
In the first case the failed insert simply raises a data-integrity exception; a single request and response beats two requests and two responses.
A transaction manager can handle the exception as well.
|
SQL unique : manual check vs catch exception
|
[
"",
"sql",
"unique",
""
] |
I am using SQL Server.
I have a table with the following design:
```
ID bigint
Number varchar(50)
Processed int
```
I have a lot of duplicates in the Number column
I want to delete all repeated Numbers, and keep the Number `where processed=1`
For Example if I have
```
Number --- Processed
111 --- 0
111 --- 0
111 --- 1
```
I want to delete all and keep **the last one**
Any help would be appreciated
|
If you are just trying to delete the rows where number equals 111 and processed does not equal 1 you can do:
```
delete from <table>
where
Number = 111 and
Processed <> 1
```
Assuming the ID is sequential and you want to keep the last row for each Number you can do:
```
delete t from <table> t
left join (
    select
        MAX(ID) filter_ID
    from <table>
    group by
        Number
) filter on
    t.ID = filter.filter_ID
where
    filter.filter_ID is null
```
To keep at least one row per Number, giving priority to `Processed = 1`:
```
delete t from <table> t
left join (
    select
        ID
    from (
        select
            ROW_NUMBER() OVER (
                PARTITION BY
                    Number
                ORDER BY
                    Processed DESC,
                    ID DESC
            ) last_R,
            ID
        from <table>
    ) ranked
    where
        last_R = 1
) filter on
    t.ID = filter.ID
where
    filter.ID is null
```
|
Here is one method:
```
with todelete as (
select t.*,
row_number() over (partition by number order by processed desc) as seqnum
from table t
)
delete from todelete
where seqnum > 1;
```
The `row_number()` enumerates the rows, using `processed` as the priority. The logic ensures that exactly one row remains per number, even if none has `processed = 1`.
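Here is a runnable sanity check of the keep-one-per-`Number` logic, using Python's sqlite3 as a stand-in (SQLite 3.25+ for window functions; SQLite cannot DELETE through a CTE, so the surviving ID is picked in a subquery instead):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (ID INTEGER PRIMARY KEY, Number TEXT, Processed INTEGER);
INSERT INTO t (Number, Processed) VALUES ('111',0),('111',0),('111',1),('222',0);
""")
# Keep the best row per Number (Processed = 1 first, then highest ID).
conn.execute("""
DELETE FROM t
WHERE ID NOT IN (
    SELECT ID FROM (
        SELECT ID,
               ROW_NUMBER() OVER (PARTITION BY Number
                                  ORDER BY Processed DESC, ID DESC) AS seqnum
        FROM t
    ) WHERE seqnum = 1
)
""")
rows = conn.execute("SELECT Number, Processed FROM t ORDER BY Number").fetchall()
print(rows)  # [('111', 1), ('222', 0)]
```

Note that '222' survives with Processed = 0 because it had no Processed = 1 row.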
|
Delete duplicates and keep one with condition in SQL Server
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have a query with a bunch of OR's inside an AND in the where clause and I'm trying to replace them with CASE WHEN to see if it improves the performance.
The select query inside the stored procedure is something like:
```
DECLARE @word varchar(100) = '%word%' --These are inputs
DECLARE @type varchar(10) = 'type'
SELECT * FROM table1
WHERE SomeCondition1
AND ( (@type = 'CS' AND col1 like @word)
OR
(@type = 'ED' AND col2 like @word)
....
)
```
I'm trying to write this query as:
```
SELECT * FROM table1
WHERE SomeCondition1
AND ( 1= CASE WHEN @type = 'CS'
THEN col1 like @word
WHEN @type = 'ED'
THEN col2 like @word
END )
```
But SQL 2012 gives the error 'Incorrect Syntax Near Like' for `THEN col1 like @word`. If I replace `THEN col1 like @word` with `1` then no complaints but LIKE should return a 0 or 1 anyway.
I tried `SELECT (col1 like @word)`, extra `()`, etc with no success.
Is there a way to include `LIKE` in `CASE WHEN` in `WHERE` or should I just not bother if using `CASE WHEN` instead of the original IF's won't make any performance difference?
**UPDATE:**
This actually didn't make any difference performance wise.
|
There is a lot of info online about these 'optional'-parameter stored procedures and how to avoid parameter sniffing performance issues.
This syntax should get you closer though:
```
AND CASE
WHEN @type = 'CS' THEN col1
WHEN @type = 'ED' THEN col2
END LIKE @word
```
Just make sure the col1 and col2 datatypes are similar (don't mix `INT` and `VARCHAR`)
You should compare query plans between the two syntaxes to ascertain whether it even makes a difference. Your performance issue might be due more to parameter sniffing.
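The `CASE ... END LIKE` form can be sanity-checked outside SQL Server as well. Here is a sketch on an in-memory SQLite database via Python's sqlite3 (table and data invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (col1 TEXT, col2 TEXT);
INSERT INTO table1 VALUES ('alpha word', 'beta'), ('beta', 'word here');
""")
# The CASE expression picks which column the single LIKE is applied to.
q = """
SELECT col1, col2 FROM table1
WHERE CASE WHEN :type = 'CS' THEN col1
           WHEN :type = 'ED' THEN col2
      END LIKE :word
"""
cs = conn.execute(q, {"type": "CS", "word": "%word%"}).fetchall()
ed = conn.execute(q, {"type": "ED", "word": "%word%"}).fetchall()
print(cs)  # [('alpha word', 'beta')]
print(ed)  # [('beta', 'word here')]
```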
|
You can also try nested case statements. e.g. based on your latest post, something like:
```
1 = CASE WHEN @type = 'CandidateStatus'
THEN (CASE WHEN co.[Description] LIKE @text THEN 1 END)
...
END
```
|
CASE WHEN in WHERE with LIKE condition instead of 1
|
[
"",
"sql",
"sql-server",
"performance",
"t-sql",
"stored-procedures",
""
] |
I have something like this:
```
SELECT cars.brand, cars.id FROM cars WHERE cars.brand = 4 LIMIT 1;
SELECT cars.brand, cars.id FROM cars WHERE cars.brand = 24 LIMIT 1;
SELECT cars.brand, cars.id FROM cars WHERE cars.brand = 29 LIMIT 1;
```
If there are more queries, this gets very long.
How can I do this another way, in one query? Speed is important to me.
`cars.brand = X` and the `LIMIT` are variables, so they may change.
|
You are worried about having to send so many queries one by one, which would take a lot of time. You can make this one query instead (in MySQL, each SELECT with its own LIMIT must be parenthesized inside a UNION):
```
(SELECT cars.brand, cars.id FROM cars WHERE cars.brand = 4 LIMIT 1)
UNION ALL
(SELECT cars.brand, cars.id FROM cars WHERE cars.brand = 24 LIMIT 1)
UNION ALL
(SELECT cars.brand, cars.id FROM cars WHERE cars.brand = 29 LIMIT 1);
```
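A quick way to check the combined statement is an in-memory SQLite database via Python's sqlite3 (SQLite wants each limited SELECT wrapped as a subquery, where MySQL uses parentheses; the data is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE cars (id INTEGER PRIMARY KEY, brand INTEGER);
INSERT INTO cars (brand) VALUES (4),(4),(24),(29);
""")
rows = conn.execute("""
SELECT * FROM (SELECT brand, id FROM cars WHERE brand = 4  LIMIT 1)
UNION ALL
SELECT * FROM (SELECT brand, id FROM cars WHERE brand = 24 LIMIT 1)
UNION ALL
SELECT * FROM (SELECT brand, id FROM cars WHERE brand = 29 LIMIT 1)
""").fetchall()
print(rows)  # one row per requested brand: 4, 24, 29
```

One statement, one round-trip, one row per brand.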
|
```
$ids = array(4, 24, 29);
$limit = 1;
foreach ($ids as $id){
$query = "SELECT cars.brand, cars.id FROM cars WHERE cars.brand = $id LIMIT $limit";
// perform query
// store result
}
```
|
How do this SQL queries cleaner, better, in one query?
|
[
"",
"mysql",
"sql",
"limit",
"distinct",
"greatest-n-per-group",
""
] |
I needed some help with doing a task in a clever way. I want to pick out each customer's very first movement by customer number and sales day, and likewise their last movement.
Is there someway of doing this?
These are the column names:
* Cust\_no
* type of sale
* sales day
* activation day
* campaign name
* seller name
I was thinking of first taking out their first movement with the help of
```
ROW_NUMBER() OVER (PARTITION BY cust_no ORDER BY sales day ASC)
```
separate these results in an Excel file
and last movement:
```
ROW_NUMBER() OVER (PARTITION BY cust_no ORDER BY sales day DESC)
```
separate them in an Excel file.
Is there a way of getting both the first and last movements into one Excel file without doing it in parts?
|
You can select both rows using a subquery
```
SELECT *
FROM
    ( SELECT *, ROW_NUMBER() OVER (PARTITION BY cust_no ORDER BY sales_day ASC) as seq1,
                ROW_NUMBER() OVER (PARTITION BY cust_no ORDER BY sales_day DESC) as seq2
      FROM yourTable ) T
where T.seq1 = 1 or T.seq2 = 1
```
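A runnable sketch of this, using Python's sqlite3 as a stand-in (SQLite 3.25+ for window functions; column names simplified to `cust_no` / `sales_day`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE movements (cust_no INTEGER, sales_day TEXT);
INSERT INTO movements VALUES
 (1,'2014-01-01'),(1,'2014-02-01'),(1,'2014-03-01'),
 (2,'2014-05-01');
""")
# seq1 numbers rows ascending, seq2 descending; rank 1 in either
# marks a customer's first or last movement.
rows = conn.execute("""
SELECT cust_no, sales_day FROM (
    SELECT *,
        ROW_NUMBER() OVER (PARTITION BY cust_no ORDER BY sales_day ASC)  AS seq1,
        ROW_NUMBER() OVER (PARTITION BY cust_no ORDER BY sales_day DESC) AS seq2
    FROM movements
) t
WHERE t.seq1 = 1 OR t.seq2 = 1
ORDER BY cust_no, sales_day
""").fetchall()
print(rows)  # [(1, '2014-01-01'), (1, '2014-03-01'), (2, '2014-05-01')]
```

A customer with a single movement (cust_no 2) appears once, since it is both first and last.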
|
try this
```
SELECT *
FROM tablename A
       JOIN (SELECT cust_no,
                    Min(sales_day) min_sal,
                    Max(sales_day) AS max_sal
             FROM tablename
             GROUP BY cust_no) B
         ON a.cust_no = b.cust_no
        AND ( a.sales_day = b.min_sal
           OR a.sales_day = b.max_sal )
```
|
Finding first and last customer movement SQL Server - tips?
|
[
"",
"sql",
"sql-server",
""
] |
I have a string like `A12345678B`
I want to be able to check if a string starts with a character, is followed by 8 digits and ends with a character.
We are trying:
```
SELECT REGEXP_INSTR('A12345678B', '[[:alpha:]]{1}[[:digit:]]{8}[[:alpha:]]{10}',1,1,1,'i') from DUAL
```
This returns :
```
11
```
We want to determine that if a string does NOT start and end with a letter, with exactly 8 digits between them (see the sample string above), then it is not the string we are looking for.
|
> string starts with a character
```
^[[:alpha:]]
```
> is followed by 8 digits
```
[[:digit:]]{8}
```
> ends with a character
```
[[:alpha:]]$
```
So the complete regex would be,
```
^[[:alpha:]][[:digit:]]{8}[[:alpha:]]$
```
The `[[:alpha:]]{10}` in your regex asserts that there must be exactly 10 letters after the 8-digit number.
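The same anchoring logic can be checked with Python's `re` module as a stand-in for Oracle's POSIX classes (`[[:alpha:]]` becomes `[A-Za-z]` here, assuming ASCII input):

```python
import re

# ^ and $ force a full-string match: one letter, 8 digits, one letter.
pattern = re.compile(r'^[A-Za-z][0-9]{8}[A-Za-z]$')

good = bool(pattern.match('A12345678B'))   # letter + 8 digits + letter
too_short = bool(pattern.match('A1234567B'))  # only 7 digits
no_tail = bool(pattern.match('A12345678'))    # no trailing letter
print(good, too_short, no_tail)  # True False False
```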
|
Try using `regexp_like` if you want a full string match. For a match, use:
```
WHERE REGEXP_LIKE('A12345678B', '^[[:alpha:]]{1}[[:digit:]]{8}[[:alpha:]]{1}$')
```
For a non-match use:
```
WHERE NOT REGEXP_LIKE('A12345678B', '^[[:alpha:]]{1}[[:digit:]]{8}[[:alpha:]]{1}$')
```
Note: this assumes that you want to filter the results (which is what your question implies), not put a flag into the `select` clause.
|
How can I search for occurences in a string with Oracle REGEXP_INSTR
|
[
"",
"sql",
"regex",
"string",
"oracle",
""
] |
I have a VERY LARGE table with a timestamp and a value V. Some of the Vs might be null:
```
timestamp,V
sometime_1,value1
sometime_2,value2
sometime_3,NULL
sometime_4,value4
```
I want a query to select the value V of a given timestamp T, but if it is NULL, get me the value at the time closest to T on either side of T. For example, if I have
```
2010-09-01 00:00:01,v1
2010-09-01 00:00:02,v2
2010-09-01 00:00:03,NULL
2010-09-01 00:00:04,NULL
2010-09-01 00:00:05,v3
```
I want the query for timestamp "3" to return "v2" because 2 is closer to 3 than 5, but would want the query for timestamp "4" to return v3 because 5 is closer.
*Time* is not guaranteed to be contiguous either, e.g., we may have:
```
2010-09-01 00:00:01,v1
2010-09-01 00:00:04,v2
2010-09-01 00:00:30,NULL
2010-09-01 00:00:42,NULL
2010-09-01 00:00:50,v3
```
In which case v3 is closest to both 30 and 42.
Currently, I'm doing this by connecting Python to SQL and starting a for loop in both directions from the timestamp in question, and returning the closest value that does not return NULL. But if I can do this in SQL, it would be much cleaner. I cannot write a stored procedure; this has to be a single query.
I am doing this because I need the closest (temporally) valid value for a given timestamp, but sometimes the value is NULL for the timestamp.
|
Similar to @par solution but with time:
```
SELECT v
FROM (
(SELECT v, TIMEDIFF(T, `timestamp`) AS tdiff
FROM table_name
WHERE `timestamp` <= T AND v IS NOT NULL
ORDER BY `timestamp` DESC
LIMIT 1)
UNION ALL
(SELECT v, TIMEDIFF(`timestamp`, T) AS tdiff
FROM table_name
WHERE `timestamp` > T AND v IS NOT NULL
ORDER BY `timestamp` ASC
LIMIT 1)
) u
ORDER BY tdiff
LIMIT 1
```
Here `T` is given timestamp to search for value `V`.
To speed up you must have index on `timestamp` column.
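A runnable sketch of the two-sided lookup via Python's sqlite3 (`TIMEDIFF` is MySQL-only, so `julianday()` differences stand in for it; the data mirrors the question's third example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE readings (ts TEXT, v TEXT);
INSERT INTO readings VALUES
 ('2010-09-01 00:00:01','v1'),
 ('2010-09-01 00:00:04','v2'),
 ('2010-09-01 00:00:30',NULL),
 ('2010-09-01 00:00:50','v3');
""")

def closest(t):
    # One candidate from each side of t, then keep the smaller time gap.
    return conn.execute("""
        SELECT v FROM (
            SELECT * FROM (
                SELECT v, julianday(:t) - julianday(ts) AS tdiff
                FROM readings WHERE ts <= :t AND v IS NOT NULL
                ORDER BY ts DESC LIMIT 1)
            UNION ALL
            SELECT * FROM (
                SELECT v, julianday(ts) - julianday(:t) AS tdiff
                FROM readings WHERE ts > :t AND v IS NOT NULL
                ORDER BY ts ASC LIMIT 1)
        ) ORDER BY tdiff LIMIT 1
    """, {"t": t}).fetchone()[0]

print(closest('2010-09-01 00:00:30'))  # v3 (50 is 20s away, 04 is 26s away)
print(closest('2010-09-01 00:00:05'))  # v2 (04 is 1s away)
```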
|
You can create a VIEW like
```
SELECT T2.T,T2.V FROM TAB AS T2 WHERE T2.V IS NOT NULL
UNION
SELECT T0.T,(
SELECT T1.V FROM TAB AS T1 WHERE T1.V IS NOT NULL
ORDER BY ABS(T0.T-T1.T) LIMIT 1
) FROM TAB AS T0 WHERE T0.V IS NULL;
```
The clause
```
ORDER BY ABS(T0.T-T1.T)
```
of course may return multiple entries at exactly the same timestamp distance, in which case different RDBMSs may sort differently and you could end up with a different value V.
It also depends how well subqueries are supported on your system.
|
select closest non null value to timestamp
|
[
"",
"mysql",
"sql",
""
] |
I have a table of images with columns like this: `id, path, name, likeCount`.
I want a query that selects an image at random from the 20 images with the highest `likeCount`.
Do you have any idea?
|
You need to do that in 2 steps:
First, create a view that stores the 20 images with the highest `likeCount`:
```
CREATE VIEW first_Twenty_images AS
SELECT *
FROM images_table
ORDER BY likeCount DESC
LIMIT 20
```
After that, you can select a random element from them by using the view we created just before:
```
SELECT * FROM first_Twenty_images ORDER BY RAND() LIMIT 1
```
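The same idea works in a single statement without the view. Here is a runnable sketch via Python's sqlite3 (`RANDOM()` instead of MySQL's `RAND()`; the data is synthetic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE images (id INTEGER PRIMARY KEY, likeCount INTEGER)")
conn.executemany("INSERT INTO images (likeCount) VALUES (?)",
                 [(i,) for i in range(100)])
# Inner query fixes the top-20 pool; outer query picks one at random.
row = conn.execute("""
    SELECT id, likeCount FROM (
        SELECT id, likeCount FROM images ORDER BY likeCount DESC LIMIT 20
    ) ORDER BY RANDOM() LIMIT 1
""").fetchone()
print(row)  # one of the 20 most-liked images (likeCount 80..99 here)
```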
|
Try this (not tested):
```
SELECT * FROM (
    SELECT id, path, name, likeCount
    FROM table1
    ORDER BY likeCount DESC
    LIMIT 20
) AS tt
ORDER BY RAND()
LIMIT 1
```
|
how to select some record by random between the some records?
|
[
"",
"mysql",
"sql",
"select",
"random",
""
] |
I'm trying to write a query that gives the total number of unmatched letters between two strings.
For example, I have the given two strings
> String 1: Jamess String 2: Romeeo
I need to find out the total number of letters in the second string that don't have a match in the first string.
The letters would be
> R, o, o and e
(note that the first string has only one e, so the extra e in Romeeo doesn't have a match in string 1).
In short, those letters (R, o, o, and e) don't exist in string 1.
Is there a way to solve this problem in Oracle SQL?
|
Amusing puzzle game ;)
Using the analytic function `COUNT()` and partitioning up to the current row, you can in fact "number your letters":
```
SELECT letters,
COUNT(*) OVER (PARTITION BY letters ORDER BY n ROWS
BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) cnt FROM (
-- ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SELECT SUBSTR('Jameess', LEVEL, 1) letters, LEVEL n FROM DUAL
CONNECT BY LEVEL <= LENGTH('Jameess')
)
```
Producing that result:
```
LETTERS CNT
J 1 -- first J
a 1 -- first a
e 1 -- first e
e 2 -- second e
m 1 -- ...
s 1
s 2
```
Do it once for each string, and you only have to compare each letter index in its own group:
```
SELECT s2.letters
FROM (
SELECT letters,
COUNT(*) OVER (PARTITION BY letters ORDER BY n ROWS
BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) cnt FROM (
SELECT SUBSTR('Jameess', LEVEL, 1) letters, LEVEL n FROM DUAL
CONNECT BY LEVEL <= LENGTH('Jameess')
)
) S1
RIGHT OUTER JOIN (
SELECT letters,
COUNT(*) OVER (PARTITION BY letters ORDER BY n ROWS
BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) cnt FROM (
SELECT SUBSTR('Romeeeeo', LEVEL, 1) letters, LEVEL n FROM DUAL
CONNECT BY LEVEL <= LENGTH('Romeeeeo')
)
) S2
ON s1.letters = s2.letters AND s1.cnt = s2.cnt
WHERE s1.cnt IS NULL
-- ^^^^^^
-- change to `s2.cnt` to compare your strings the other way around
-- and replace the RIGHT JOIN by a LEFT JOIN
ORDER BY letters
```
Producing:
```
LETTERS
R
e
e
o
o
```
(For testing purposes, I add few extra `e` in `Jameess` and `Romeeeeo`)
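The per-letter numbering above is really a multiset difference; outside the database the same answer falls out of Python's `collections.Counter` in one line, which makes a handy cross-check:

```python
from collections import Counter

# Counter subtraction drops letters of s2 that are matched (per occurrence)
# by letters of s1, leaving exactly the unmatched ones.
leftovers = Counter('Romeeo') - Counter('Jamess')
print(sorted(leftovers.elements()))  # ['R', 'e', 'o', 'o']
```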
|
**In Oracle**,
```
SQL> WITH DATA AS
2 ( SELECT 'Jamess' str1, 'Romeeo' str2 FROM dual
3 ),
4 data2 AS
5 (SELECT SUBSTR(str1, LEVEL, 1) str1
6 FROM DATA
7 CONNECT BY LEVEL <= LENGTH(str1)
8 ),
9 data3 AS
10 (SELECT SUBSTR(str2, LEVEL, 1) str2
11 FROM DATA
12 CONNECT BY LEVEL <= LENGTH(str2)
13 )
14 SELECT * FROM data3 WHERE str2 NOT IN
15 (SELECT str1 FROM data2
16 )
17 UNION ALL
18 SELECT str2
19 FROM data3
20 WHERE str2 IN
21 (SELECT str1 FROM data2
22 )
23 GROUP BY str2
24 HAVING COUNT(*)>1
25 /
S
-
R
o
o
e
SQL>
```
|
Find Unmatching Letters between Two Strings
|
[
"",
"sql",
"oracle",
"plsql",
""
] |
I have a user table which contains among others money, level and ranking.
```
Id | money| ranking| level
---------------------------
1 |30000| 1 1
2 |20000| 2 3
3 |10000| 3 2
4 |50000| 4 2
```
I want to update the ranking field based on user level (first filter) and money.
That is, a user at a higher level is always ranked higher.
That is i want the table after the update like this:
```
Id | money| ranking| level
---------------------------
1 |30000| 4 1
2 |20000| 1 3
3 |10000| 3 2
4 |50000| 2 2
```
Thanks!
|
As a side note, I would NOT store this field in the database: storing values that depend on other records in the table makes maintenance much more difficult.
Here's a query that would work as a view or within a stored procedure:
```
SELECT
ID,
[money],
ROW_NUMBER() OVER (order by [level] desc, [money] desc) AS [ranking],
[level]
FROM myTable
```
If you REALLY wanted to update the table just make the query a subquery to an update:
```
UPDATE m1
SET ranking = m2.ranking
FROM myTable m1
INNER JOIN
(SELECT
ID,
ROW_NUMBER() OVER (order by [level] desc, [money] desc) ranking
FROM myTable) m2
ON m1.ID = m2.ID
```
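Here is a runnable check of the ranking expression via Python's sqlite3 (SQLite 3.25+ for `ROW_NUMBER`; only the SELECT is shown, since `UPDATE ... FROM` support varies by SQLite version):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (Id INTEGER, money INTEGER, level INTEGER);
INSERT INTO users VALUES (1,30000,1),(2,20000,3),(3,10000,2),(4,50000,2);
""")
# Rank by level first, then money, both descending.
rows = conn.execute("""
    SELECT Id, ROW_NUMBER() OVER (ORDER BY level DESC, money DESC) AS ranking
    FROM users ORDER BY Id
""").fetchall()
print(rows)  # [(1, 4), (2, 1), (3, 3), (4, 2)]
```

This matches the expected table in the question: Id 2 (level 3) ranks first, then Id 4 and 3 (level 2, by money), then Id 1.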
|
If you simply want to select, then here is the query:
```
select *, dense_rank() over (order by level desc, money desc) as newranking from YourTable
```
and if you want to update then :
```
;with cte_toupdate (ranking, newranking)
as (
select ranking, dense_rank() over (order by level desc, money desc) as newranking from YourTable
)
Update cte_toupdate set ranking = newranking
select * from YourTable
```
check here : <http://sqlfiddle.com/#!3/8d6d3/10>
Note: if you want unique ranks then use `Row_Number()` instead of `dense_rank()`.
|
Updating field based on sorting of another
|
[
"",
"sql",
"sql-server",
""
] |
For example I have a table with orders. The table has a key (Order.No), a date where the order was opened (Order.Open) and a date where the order was closed (Order.Close).
```
No Open Close
--------- ---------- ----------
2013-1208 2013-03-11 2013-03-26
2013-1272 2013-03-11 2013-03-11
2013-1273 2013-03-11 2013-03-11
2013-1274 2013-03-11 2013-03-11
2013-1275 2013-03-11 2013-03-11
2013-1280 2013-03-11 2013-06-26
2013-1281 2013-03-11 2013-04-18
2013-1282 2013-03-11 2013-03-14
2013-1287 2013-03-12 2013-04-18
2013-1291 2013-03-12 2013-03-12
```
Now I want to make a query, where I can find out for each month how many orders were still open on the last day of the month.
For example I want to find out how many orders were still open on the last day of January:
The order was closed on or after the first day of February and the order was opened before the first day of February:
```
SELECT COUNT(Order.No) 'Open', '1' 'Month', '2013' 'Year' FROM Orders
WHERE (Orders.Open < '2013-02-01') AND (Orders.Close >= '2013-02-01')
```
Now if I want to get this information for each month I would have to do:
```
SELECT COUNT(Order.No) 'Open', '1' 'Month', '2013' 'Year' FROM Orders
WHERE (Orders.Open < '2013-02-01') AND (Orders.Close >= '2013-02-01')
UNION
SELECT COUNT(Order.No) Open, '2' Month, '2013' 'Year' FROM Orders
WHERE (Orders.Open < '2013-03-01') AND (Orders.Close >= '2013-03-01')
UNION
SELECT COUNT(Order.No) 'Open', '3' 'Month', '2013' 'Year' FROM Orders
WHERE (Orders.Open < '2013-04-01') AND (Orders.Close >= '2013-04-01')
UNION
SELECT COUNT(Order.No) 'Open', '4' Month, '2013' Year FROM Orders
WHERE (Orders.Open < '2013-05-01') AND (Orders.Close >= '2013-05-01')
```
Can I somehow simplify this query so I don't have to write this for every month and year?
The required output would be something like:
```
Open Month Year
---- ----- ----
684 1 2013
683 2 2013
760 3 2013
659 4 2013
```
|
You can just enter the list of months/years you require and join to them, e.g.
```
SELECT [Open] = COUNT(o.No),
[Month] = DATEPART(MONTH, d.[Date]),
[Year] = DATEPART(YEAR, d.[Date])
FROM (VALUES
('2013-01-01'),
('2013-02-01'),
('2013-03-01'),
('2013-04-01'),
('2013-05-01'),
('2013-06-01')
) d (Date)
LEFT JOIN Orders AS o
ON o.[Open] < d.[Date]
AND o.[Close] >= d.[Date]
GROUP BY d.Date;
```
If you have a [calendar table](http://blog.jontav.com/post/9380766884/calendar-tables-are-incredibly-useful-in-sql) then you can use this instead of a list of hard coded dates.
If you don't want to hardcode the dates you want then you can fairly easily generate a list on the fly, first generate a list of numbers:
```
WITH E1 AS -- 10 ROWS
( SELECT N = 1
FROM (VALUES (1), (1), (1), (1), (1), (1), (1), (1), (1), (1)) n (N)
), E2 AS -- 10 X 10 = 100 ROWS
( SELECT N = 1
FROM E1 CROSS JOIN E1 AS E2
), E3 AS -- 100 x 100 = 10,000 ROWS
( SELECT N = 1
FROM E2 CROSS JOIN E2 AS E3
)
SELECT N = ROW_NUMBER() OVER(ORDER BY N)
FROM E3;
```
This is just an example, but will generate 10,000 sequential numbers, realistically you probably don't need to report on 10,000 months, but it doesn't hurt to demonstrate. Then you can turn this list of numbers into a list of dates:
```
WITH E1 (N) AS
( SELECT 1
FROM (VALUES (1), (1), (1), (1), (1), (1), (1), (1), (1), (1)) n (N)
),
E2 (N) AS (SELECT 1 FROM E1 CROSS JOIN E1 AS E2),
E3 (N) AS (SELECT 1 FROM E2 CROSS JOIN E2 AS E3)
SELECT [Date] = DATEADD(MONTH, ROW_NUMBER() OVER(ORDER BY N) - 1, '19000101')
FROM E3;
```
Then you can use this as your main table and LEFT JOIN to Orders:
```
WITH E1 (N) AS
( SELECT 1
FROM (VALUES (1), (1), (1), (1), (1), (1), (1), (1), (1), (1)) n (N)
),
E2 (N) AS (SELECT 1 FROM E1 CROSS JOIN E1 AS E2),
E3 (N) AS (SELECT 1 FROM E2 CROSS JOIN E2 AS E3),
Dates AS
( SELECT [Date] = DATEADD(MONTH, ROW_NUMBER() OVER(ORDER BY N) - 1, '19000101')
FROM E3
)
SELECT [Open] = COUNT(o.No),
[Month] = DATEPART(MONTH, d.[Date]),
[Year] = DATEPART(YEAR, d.[Date])
FROM Dates AS d
LEFT JOIN Orders AS o
ON o.[Open] < d.[Date]
AND o.[Close] >= d.[Date]
WHERE d.Date >= '20130101' -- OR WHATEVER DATE YOU LIKE
AND d.Date < GETDATE();
```
For some further reading on generating and using a numbers/dates table, both static and on the fly, take a look at this series:
* [Generate a set or sequence without loops – part 1](http://www.sqlperformance.com/2013/01/t-sql-queries/generate-a-set-1)
* [Generate a set or sequence without loops – part 2](http://www.sqlperformance.com/2013/01/t-sql-queries/generate-a-set-2)
* [Generate a set or sequence without loops – part 3](http://www.sqlperformance.com/2013/01/t-sql-queries/generate-a-set-3)
|
First generate all months you want to see in your results. Then join each month with the open orders and count per month.
```
with months as
(
select
datefromparts(year(min(opened)), month(min(opened)), 1) as startdate
from orders
union all
select
dateadd(month, 1, startdate)
from months
where dateadd(month, 1, startdate) <= cast(getdate() as date)
)
select
year(months.startdate) as yr, month(months.startdate) as mon, count(orders.opened) as cnt
from months
left join orders
on orders.opened <= eomonth(months.startdate)
and coalesce(orders.closed, '2999-12-31') > eomonth(months.startdate)
group by year(months.startdate), month(months.startdate)
order by year(months.startdate), month(months.startdate);
```
Here is the SQL fiddle: <http://sqlfiddle.com/#!6/12ee7/3>.
In SQL Server 2008 the functions DATEFROMPARTS and EOMONTH are missing. Replace them by
```
cast( cast(year(min(opened)) as varchar) + '-' + cast(month(min(opened)) as varchar) + '-01' as date) as startdate
```
and
```
dateadd(day, -1, dateadd(month, 1, months.startdate))
```
Here is the SQL fiddle for SQL Server 2008: <http://sqlfiddle.com/#!3/12ee7/5>.
|
GROUP BY Conditions
|
[
"",
"sql",
"sql-server",
"sql-server-2008-r2",
""
] |
I have two tables, tbl1 and tbl2 as below:
```
CREATE TABLE tbl1 (`uid` int);
INSERT INTO tbl1 (`uid`)
VALUES
(100),
(200),
(300),
(400);
CREATE TABLE tbl2 (`id` int, `uid` int, `status` int);
INSERT INTO tbl2 (`id`, `uid`, `status`)
VALUES
(1, 100, 0),
(2, 100, 1),
(3, 100, 2),
(4, 100, 4),
(5, 200, 0),
(6, 200, 1),
(7, 300, 0),
(8, 300, 3),
(9, 300, 4),
(10, 400, 1),
(11, 400, 2);
```
SQLFIDDLE:
<http://sqlfiddle.com/#!2/1a6c20/13>
Problem:
I want to join these two tables.
The result should show the rows having tbl2.status = 0 but not having tbl2.status = 1.
This is the SQL query which I'm trying to run:
```
SELECT DISTINCT tbl1.uid, tbl2.id, tbl2.status
FROM tbl1
INNER JOIN tbl2 ON (tbl1.uid = tbl2.uid)
WHERE tbl2.status = 0
OR tbl2.status <> 1;
```
CORRECT expected result is: `7, 300, 0`.
Here, uid=300 has a row with status=0 and no row with status=1. So this is the expected result that I want.
```
uid=100 has both status=0 and status=1, so this is not the required result.
uid=200 also has both status=0 and status=1 so this is not the required result.
uid=400 does not have status=0, this is not the required result.
```
Help please!!!
|
You need to select uids with status=0, but not the ones that also appear in your table with status=1, so you need to exclude those from the result set. One more condition in your where clause gives the expected result; this can be done using NOT IN.
Try the following query
```
SELECT tbl1.uid, tbl2.id, tbl2.status
FROM tbl1
INNER JOIN tbl2 ON (tbl1.uid = tbl2.uid)
WHERE tbl2.status = 0
and tbl2.uid NOT IN (SELECT uid from tbl2 where status=1);
```
|
you can use a `NOT EXISTS` clause
```
SELECT DISTINCT T1.uid, T2.id, T2.status
FROM tbl1 T1
INNER JOIN tbl2 T2 ON (T1.uid = T2.uid)
WHERE T2.status = 0
AND NOT EXISTS ( SELECT 1 FROM tbl2 T22
where T2.uid = T22.uid
and T22.status =1 )
```
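Running the `NOT EXISTS` variant against the question's fiddle data (here via Python's sqlite3 as a stand-in for MySQL) reproduces the expected single row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbl1 (uid INT);
INSERT INTO tbl1 VALUES (100),(200),(300),(400);
CREATE TABLE tbl2 (id INT, uid INT, status INT);
INSERT INTO tbl2 VALUES (1,100,0),(2,100,1),(3,100,2),(4,100,4),
 (5,200,0),(6,200,1),(7,300,0),(8,300,3),(9,300,4),(10,400,1),(11,400,2);
""")
rows = conn.execute("""
    SELECT DISTINCT t1.uid, t2.id, t2.status
    FROM tbl1 t1
    JOIN tbl2 t2 ON t1.uid = t2.uid
    WHERE t2.status = 0
      AND NOT EXISTS (SELECT 1 FROM tbl2 t22
                      WHERE t22.uid = t2.uid AND t22.status = 1)
""").fetchall()
print(rows)  # [(300, 7, 0)]
```

uid 100 and 200 are excluded by the status=1 rows, and uid 400 has no status=0 row at all.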
|
SQL query to implement multiple WHERE condition on same column
|
[
"",
"mysql",
"sql",
""
] |
I'm currently working with a database containing the following schema:

Here are the tasks I'm required to carry out:

For task 1, I formulated the following query (inclusion of avg(salary) was for testing purposes):
```
select dname, count(*), avg(salary)
from department, employee
where dno = dnumber
group by dname having avg(salary)>30000;
```
giving the output:
```
DNAME COUNT(*) AVG(SALARY)
------------------------- ---------- -----------
Hardware 10 63450
Research 4 33250
Headquarters 1 55000
Administration 3 31000
Software 8 60000
Sales 14 40821.4286
```
However I cannot figure out task two. **I'm required to return the same rows, but counting only the males while keeping the same departmental average as the previous query**. I tried the following statement:
```
select dname, count(*), avg(salary)
from department, employee
where dno = dnumber and (dno,sex) in (select dno, sex from employee where sex = 'M' )
group by dname having avg(salary)>30000;
```
Which resulted in the correct count value, but resulted in the departmental average salary for males, not males and females. As seen below:
```
DNAME COUNT(*) AVG(SALARY)
------------------------- ---------- -----------
Hardware 7 65785.7143
Research 3 36000
Headquarters 1 55000
Software 7 57285.7143
Sales 10 42150
```
**Please note I have to do this using nested queries, not CASE.**
Hope this makes sense, any help would be appreciated.
Thanks.
|
You can count the male employees using CASE.
I also converted the implicit join (comma-separated tables in the FROM clause) to an explicit join:
```
select D.dname, count(case when E.sex = 'M' then 1 end), avg(E.salary)
from department D
join employee E
  on E.dno = D.dnumber
group by D.dname
having avg(E.salary) > 30000;
```
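One subtlety worth checking: `COUNT` counts every non-NULL value, so a CASE with `ELSE 0` counts all rows, not just the males; omitting the ELSE (or using SUM) gives the conditional count. A quick demo via Python's sqlite3 with made-up data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employee (sex TEXT, salary INT);
INSERT INTO employee VALUES ('M',50000),('F',40000),('M',30000);
""")
# "wrong" counts the 0s too; "right" leaves non-males NULL so they are skipped.
wrong, right, avg_all = conn.execute("""
    SELECT COUNT(CASE WHEN sex='M' THEN 1 ELSE 0 END),
           COUNT(CASE WHEN sex='M' THEN 1 END),
           AVG(salary)
    FROM employee
""").fetchone()
print(wrong, right, avg_all)  # 3 2 40000.0
```

Note the average still covers all employees, which is exactly what task two asks for.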
As per OP comment, if this needs to be done using only nested queries, it can be done with subquery
```
select T.dname, count(*), T.salary as AverageSalary
FROM employee E
join ( select D.dname, D.dnumber, avg(salary) as salary
from employee E
join department D
on E.dno = D.dnumber
group by D.dname, D.dnumber
having avg(salary) > 30000 ) T
on E.sex ='M'
and E.dno = T.dnumber
group by T.dname, T.salary
```
|
This can also be done by evaluating the overall average per department in a subquery, derived table or CTE before applying it to the filtered employees. Also note the preference for joining in the `join` clause, not in the `where`.
```
with cteSalary as
(
select dname, dno, avg(salary) as avgSalary
from department inner join employee on dno = dnumber
group by dname, dno
)
select s.dname, count(e.fname), s.avgSalary
from employee e inner join cteSalary s on e.dno = s.dno
where e.sex = 'M' and s.avgSalary > 30000
group by s.dname, s.avgSalary;
```
|
Oracle SQL Nested Statement Issue
|
[
"",
"sql",
"oracle",
""
] |
I need to update a single column from a backup version of a database. I've restored the backup to a different db location and am running the following:
```
update table_name
Set
column = restoredDB.dbo.table_name.column
From restoredDB.dbo.table_name
Where restoredDB.dbo.table.ID = table_name.ID
```
The following result is returned:
> The multi-part identifier "table\_name.ID" could not be bound
The ID field is a primary key, bigint. I've found many postings on the topic of "multi-part identifier could not be bound" however they don't seem to apply to my situation. Could someone explain why this is happening and how to correct the error?
Thx.
|
All you need for a column reference is the table name (or an alias) plus the column name; since both tables here share the same name, give the restored copy an alias:
```
update table_name
Set
    column = r.column
From restoredDB.dbo.table_name r
Where r.ID = table_name.ID
```
Also, your `Where table.ID =` expression references a table named `table`, which does not exist anywhere in your query.
|
I think this should resolve the issue.
```
USE [<table_name_DB_Name>]
GO
UPDATE t2
SET
    column = t1.column
From restoredDB.dbo.table_name t1, table_name t2
Where t1.ID = t2.ID
```
|
update table - multi-part identifier could not be bound
|
[
"",
"sql",
"sql-server-2012",
""
] |
I have the below query where I'm trying to calculate the sum of salaries over the past year.
```
select sum(case when date_key between to_char(sysdate,'yyyymm')
and to_char(add_months(sysdate,-12), 'yyyymm')
then salary end) as annual_salary
from employee
group by emp_key
```
When I execute the query I'm getting NULLs in the result set.
I actually have valid salary figures in the employee table.
Where am I going wrong?
|
```
select sum(case when date_key between
to_char(add_months(sysdate,-12), 'yyyymm') and to_char(sysdate,'yyyymm')
then salary end) as annual_salary
from employee
group by emp_key
```
|
Just invert the 2 bounds, they are not in the correct order:
```
...
between to_char(add_months(sysdate,-12), 'yyyymm')
and to_char(sysdate,'yyyymm')
```
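The asymmetry of `BETWEEN` is easy to see on any engine; here is a one-liner via Python's sqlite3:

```python
import sqlite3

# BETWEEN a AND b means x >= a AND x <= b: with the bounds swapped the
# range is empty, it matches nothing, and the aggregate comes back NULL.
conn = sqlite3.connect(":memory:")
swapped, correct = conn.execute(
    "SELECT '201405' BETWEEN '201501' AND '201401', "
    "       '201405' BETWEEN '201401' AND '201501'"
).fetchone()
print(swapped, correct)  # 0 1
```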
|
Aggregation between dates
|
[
"",
"sql",
"oracle",
"aggregation",
""
] |
Given a table of the following format:
```
Name | Value
A | 0
B | 0
C | 1
D | 2
E | 3
```
How could I query this table in this format:
```
SELECT value,count(*) FROM table GROUP BY value
```
Such that we produced the result:
```
Value | Count
0.. 1 | 3
2 | 1
3 | 1
```
That is, such that the grouping can have a rule specified that it should group either a range or even a set of distinct values (so we could group anything with `0` or `3` as one row).
(For reference the query above would actually produce the output)
```
Value | Count
0 | 2
1 | 1
2 | 1
3 | 1
```
|
Use a case construct:
```
SELECT
  CASE WHEN value IN (0,1) THEN '0..1' ELSE CAST(value AS CHAR) END AS value_group,
  count(*)
FROM mytable
GROUP BY CASE WHEN value IN (0,1) THEN '0..1' ELSE CAST(value AS CHAR) END;
```
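The CASE bucketing is runnable as-is on SQLite (shown here through Python's sqlite3, with the question's sample data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mytable (name TEXT, value INT);
INSERT INTO mytable VALUES ('A',0),('B',0),('C',1),('D',2),('E',3);
""")
# SQLite lets GROUP BY reference the select-list alias directly.
rows = conn.execute("""
    SELECT CASE WHEN value IN (0,1) THEN '0..1'
                ELSE CAST(value AS TEXT) END AS bucket,
           COUNT(*)
    FROM mytable
    GROUP BY bucket
    ORDER BY bucket
""").fetchall()
print(rows)  # [('0..1', 3), ('2', 1), ('3', 1)]
```

This reproduces the grouped output from the question.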
|
Two solutions come to mind:
The first is to use a function for grouping the values:
```
SELECT some_function(value) v, count(*) c FROM table GROUP BY some_function(value);
```
The other way is to join against a mapping table that maps each value to its group.
|
Grouping values by custom rules
|
[
"",
"mysql",
"sql",
"group-by",
""
] |
I am running the following code to extract all relevant rows from all tables that have a particular column. The outer `IF` is supposed to check if the column exists on the table for that iteration. If not, it should finish that iteration and move to the next table. If the table has the `GCRecord` column, it should then check to see if that table will return any records. If there are no records to return, it should end that iteration and move on to the next table. If there are records, it should display them in SSMS.
```
USE WS_Live
EXECUTE sp_MSforeachtable
'
USE WS_Live
IF EXISTS( SELECT *
FROM sys.columns
WHERE columns.Object_ID = Object_ID(''?'')
AND Name = ''GCRecord''
)
BEGIN
IF EXISTS (SELECT * FROM ? WHERE GCRecord IS NOT NULL)
BEGIN
SELECT * FROM ? WHERE GCRecord IS NOT NULL
END
END
'
```
It seems to work because SSMS is only returning grids with valid entries. What I don't understand is: Why am I still getting these errors?
```
Msg 207, Level 16, State 1, Line 10
Invalid column name 'GCRecord'.
Msg 207, Level 16, State 1, Line 13
Invalid column name 'GCRecord'.
```
**EDIT**
After using the suggestions, I have this:
```
USE WS_Live
EXECUTE sp_MSforeachtable
'
USE WS_Live
IF EXISTS(SELECT * FROM sys.columns WHERE columns.Object_ID = Object_ID(''?'')AND Name = ''GCRecord'')
BEGIN
IF EXISTS (SELECT * FROM ? WHERE GCRecord IS NOT NULL)
BEGIN
EXEC('' SELECT * FROM ? WHERE GCRecord IS NOT NULL'')
END
END
'
```
Which returns this error:
```
Msg 207, Level 16, State 1, Line 7
Invalid column name 'GCRecord'.
```
Which refers to this line
```
IF EXISTS(SELECT * FROM sys.columns WHERE columns.Object_ID = Object_ID(''?'')AND Name = ''GCRecord'')
```
**UPDATE**
I tried nesting `EXEC` statements which did not work, but using the selected answer I got the results I was looking for without the errors.
|
Use a dynamic query inside `BEGIN` so that the inner statement is not pre-compiled for tables that do not contain the column `GCRecord`:
```
USE WS_Live
GO
EXECUTE sp_MSforeachtable
'
IF EXISTS( SELECT *
FROM sys.columns
WHERE columns.Object_ID = Object_ID(''?'')
AND Name = ''GCRecord''
)
BEGIN
EXEC(''
IF EXISTS (SELECT * FROM ? WHERE GCRecord IS NOT NULL)
BEGIN
SELECT * FROM ? WHERE GCRecord IS NOT NULL
END
'')
END
'
```
|
You are very close. Use "EXEC"
```
USE WS_Live
EXECUTE sp_MSforeachtable
'
USE WS_Live
IF EXISTS( SELECT *
FROM sys.columns
WHERE columns.Object_ID = Object_ID(''?'')
AND Name = ''GCRecord''
)
BEGIN
EXEC(''SELECT * FROM ? WHERE GCRecord IS NOT NULL'')
END
'
```
|
SQL Iterate Over All Tables
|
[
"",
"sql",
"sql-server",
"conditional-statements",
"sp-msforeachdb",
""
] |
I am cleaning up a database that has performance issues under heavy traffic, both reading and writing. But the heavy traffic is condensed into a few days annually.
I suspect one problem is that a lot of heavily-used tables don't have a clustered index. They do have a primary key, and a unique, non-clustered index on just the primary key (no includes).
Is this ever a beneficial approach?
My guess is someone thought it would be faster if the db didn't have to maintain all the columns of a clustered index (I wouldn't ever do this myself, so I don't know the madness behind the method).
I am starting the task of applying clustered indexes to these tables, but wanted to survey smarter minds before I got too far: is there a scenario where a strategy like this would help?
I would think the loss of the clustered index would outweigh any gain this might have.
|
I feel fairly confident in saying that no, this is not a beneficial approach, ever. If a table has only nonclustered indexes, it's a heap table, and aside from bulk insert operations, there is no good reason for having a heap table. That's not just me talking, that's [Microsoft's opinion as well](http://technet.microsoft.com/en-us/library/cc917672.aspx).
I have maintained a big warehouse where it was a good idea to not have a primary key at all (because the index for it took up a huge amount of space without being used), but only a unique clustered index on a `DATETIME` plus an `ID`. But you'll note that, primary key or not, it did have a clustered index, because not having one is just a bad idea.
|
No. If you have a candidate key, then use it as clustered index. A unique, non-clustered index can be used for key look-ups and to guarantee uniqueness. A clustered index will be used for that, and in addition for range scans and for sort guarantees.
When the heavy traffic is condensed into 'a few days', this is indicative of time series, where queries ask for recent *ranges*: totals and other aggregates for today, for the last day, for the last week etc. Making the table organized by time makes all these queries work without having to scan the entire table, end-to-end.
A primary key does not have to match the clustered index. The primary key is a logical concept, useful for modeling the data and enforcing referential integrity in a primary/foreign key relationship (strictly speaking the foreign key can reference *any* column(s), but most often it references the primary key).
The clustered index will define the physical layout, is driven by practical considerations around most frequent queries, type of range scans present, and size-of-key trade-offs.
Adding a clustered index will likely change some access patterns and may introduce new deadlock possibilities, but frankly the chances are slim. It is usually *removing* a clustered index that adds deadlocks, not adding one.
Finally, I wouldn't worry too much about insert/update cost. Most applications have an overwhelming read-to-write ratio, and having faster reads makes the app feel much more responsive and 'snappy'. Also, the read improvement from having a usable index manifests as a 100x speedup (small range scan vs. end-to-end table scan), while write degradation manifests as a fractional increase (e.g. write times go up by 10-15%), usually unnoticeable in the app.
|
Would there ever be a good reason to replace a clustered index with a unique non-clustered index
|
[
"",
"sql",
"sql-server",
"database-administration",
""
] |
I have a SQL Server table like this:
<http://sqlfiddle.com/#!2/a15dd/1>
What I want to do is display the latest year and month where trades were made.
In this case, i want to display
```
ID: 1
Year: 2013
Month: 11
Trades: 2
```
I've tried to use:
```
select
id, MAX(year), MAX(month)
from
ExampleTable
where
trades > 0
group by
id
```
Do I have to concatenate the columns?
|
You can use [ROW\_NUMBER](http://msdn.microsoft.com/en-GB/library/ms186734.aspx) to assign each row a number based on its relative position (as defined by your order by):
```
SELECT ID,
Year,
Month,
Trades,
RowNum = ROW_NUMBER() OVER(PARTITION BY ID ORDER BY Year DESC, Month DESC)
FROM ExampleTable
WHERE Trades > 0;
```
With your example data this gives:
```
ID YEAR MONTH TRADES RowNum
1 2013 11 2 1
1 2013 4 42 2
```
Then you can just limit this to where `RowNum` is 1:
```
SELECT ID, Year, Month, Trades
FROM ( SELECT ID,
Year,
Month,
Trades,
RowNum = ROW_NUMBER() OVER(PARTITION BY ID ORDER BY Year DESC, Month DESC)
FROM ExampleTable
WHERE Trades > 0
) AS t
WHERE t.RowNum = 1;
```
If, as in your example, `Year` and `Month` are stored as `VARCHAR` you will need to convert to an `INT` before ordering:
```
RowNum = ROW_NUMBER() OVER(PARTITION BY ID
ORDER BY
CAST(Year AS INT) DESC,
CAST(Month AS INT) DESC)
```
**[Example on SQL Fiddle](http://sqlfiddle.com/#!3/a15dd0/2)**
If you are only bothered about records where ID is 1, you can do it simply using `TOP`:
```
SELECT TOP 1 ID, Year, Month, Trades
FROM ExampleTable
WHERE ID = 1
AND Trades > 0
ORDER BY CAST(Year AS INT) DESC, CAST(MONTH AS INT) DESC;
```
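The ROW\_NUMBER approach can be verified outside SQL Server as well; here is a sketch using Python's bundled `sqlite3` (an assumption — SQLite supports window functions from version 3.25; the two sample rows are taken from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ExampleTable (ID INTEGER, Year TEXT, Month TEXT, Trades INTEGER)")
conn.executemany("INSERT INTO ExampleTable VALUES (?, ?, ?, ?)",
                 [(1, "2013", "4", 42), (1, "2013", "11", 2)])

# Number rows per ID by descending (numeric) year/month, then keep row 1.
latest = conn.execute("""
    SELECT ID, Year, Month, Trades
    FROM (SELECT ID, Year, Month, Trades,
                 ROW_NUMBER() OVER (PARTITION BY ID
                                    ORDER BY CAST(Year AS INTEGER) DESC,
                                             CAST(Month AS INTEGER) DESC) AS rn
          FROM ExampleTable
          WHERE Trades > 0)
    WHERE rn = 1
""").fetchall()
```

Note the casts: without them, the string `'4'` would sort after `'11'`.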
|
Why store "year" and "month" as separate columns? In any case, the basic logic is to combine the two values to get the latest one. This is awkward because you are storing numbers as strings and the months are not zero-padded. But it is not so hard:
```
select id,
max(year + right('00' + month, 2))
from ExampleTable
group by id;
```
To separate them out:
```
select id,
left(max(year + right('00' + month, 2)), 4) as year,
right(max(year + right('00' + month, 2)), 2) as month
from ExampleTable
group by id;
```
[Here](http://sqlfiddle.com/#!6/a15dd/9) is a SQL Fiddle. Note when you use SQL Fiddle that you should set the database to the correct database.
|
SQL Server : getting year and month
|
[
"",
"sql",
"sql-server",
""
] |
I have 3 tables of accounts that all contain the same fields. Table1 contains all accounts, while Table2 and Table3 contain subsets of the accounts. I'm trying to select records in Table1 that do not exist in Table2 or Table3.
Let's say the table layout is like this and is the same for all 3 tables:
|AcctNum|Name|State|
I know how to do this if it was just Table1 and Table2, using a left join and Is Null, but the 3rd table is throwing me. Is this possible to do in one query? Can you combine left joins? I should point out I'm using Access 2010.
|
Yes you can combine left joins and with the odd syntax Access uses the query should look like this:
```
SELECT T1.AcctNum
FROM (Table1 AS T1 LEFT JOIN Table2 AS T2 ON T1.AcctNum = T2.AcctNum)
LEFT JOIN Table3 AS T3 ON T1.AcctNum = T3.AcctNum
WHERE (((T2.AcctNum) Is Null) AND ((T3.AcctNum) Is Null));
```
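A sketch of the same chained LEFT JOIN / IS NULL pattern using Python's bundled `sqlite3` (an assumption, as the question is about Access; the sample accounts are invented for illustration, with only account 3 absent from both subset tables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Table1 (AcctNum INTEGER, Name TEXT, State TEXT);
    CREATE TABLE Table2 (AcctNum INTEGER, Name TEXT, State TEXT);
    CREATE TABLE Table3 (AcctNum INTEGER, Name TEXT, State TEXT);
    INSERT INTO Table1 VALUES (1,'a','NY'),(2,'b','CA'),(3,'c','TX');
    INSERT INTO Table2 VALUES (1,'a','NY');
    INSERT INTO Table3 VALUES (2,'b','CA');
""")

# Left-join both subset tables; a row missing from both has NULL on both sides.
missing = conn.execute("""
    SELECT T1.AcctNum
    FROM Table1 AS T1
    LEFT JOIN Table2 AS T2 ON T1.AcctNum = T2.AcctNum
    LEFT JOIN Table3 AS T3 ON T1.AcctNum = T3.AcctNum
    WHERE T2.AcctNum IS NULL AND T3.AcctNum IS NULL
""").fetchall()
```

Only Access requires the extra parentheses around the first join; most other engines accept the plain chained form above.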
|
You can use Access to create a view called **TableCombined** that is a union of both **Table2** and **Table3**.
At that point, you can use your `left join and Is Null` query and join **TableCombined** to **Table1**.
Hope this helps!
|
How to select records that do not exist in two (or more) tables
|
[
"",
"sql",
"ms-access-2010",
""
] |
Disclaimer: I'm helping a person learn SQL by using an online tutorial. So you can consider this as a homework question.
There are 3 SQL Server tables that we are dealing with here:
* Ships (name, class)
* Classes - different classes of the ship (class)
* Outcomes - what has happened to that ship (ship, result)
Now there are several crazy things in the layout of the database, the biggest one being that ships in the `Outcomes` table may not be present in the `Ships` table when they are named the same way as a class in `Classes`.
The point is to get the number of sunk ships of each class. I have helped the student to get to the following SQL:
```
Select
dbo.Classes.[class], Count(dbo.Outcomes.ship) as [count]
from
dbo.Classes
left join
dbo.Ships on dbo.Ships.[class] = dbo.Classes.[class]
left join
dbo.Outcomes on (dbo.Outcomes.ship = dbo.Classes.[class] or
dbo.Outcomes.ship = dbo.Ships.name)
and dbo.Outcomes.result = 'sunk'
Group by
dbo.Classes.[class]
```
However, it, apparently, is the incorrect solution, as it can on some occasions return incorrect results. On the web i have managed to find the following solution to this tutorial:
```
select
classes.class, count(T.ship)
from
classes
left join
(select
ship, class
from
outcomes
left join
ships on ship = name
where
result = 'sunk'
union
select
ship, class
from
outcomes
left join
classes on ship = class
where
result = 'sunk') as T on classes.class = T.class
group by
classes.class
```
But I cannot understand under which conditions will the results be different. Doesn't the `Union` operation of using two different join paths serve the exactly same function as does `OR` in the condition of the join?
P.S. This particular question in the tutorial is actually marked as 2 on the scale from 1-5 for difficulty. So i feel myself quite stupid.
|
With this dataset:
```
with Ships as (
select * from (values
('HMS Prince of Wales','King George V')
,('King George V','King George V')
)Ships(name,class)
),
Classes as (
select * from (values
('King George V')
)Classes(class)
),
Outcomes as (
select * from (values
('HMS Prince of Wales','sunk')
,('King George V','sunk')
)Outcomes(ship,result)
)
```
the two queries you provide respectively yield:
```
class count
------------- -----------
King George V 3
class         count
------------- -----------
King George V 2
```
The reason for the difference is that *UNION* is a set operator that eliminates duplicates (unlike *UNION ALL*) while the *OR* operator doesn't. We can test this by replacing UNION in the second query with UNION ALL, which now yields:
```
class
------------- -----------
King George V 3
```
just as in your first proposed solution.
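The difference is easy to reproduce; here is a sketch using Python's bundled `sqlite3` (an assumption) with the dataset above. The OR-join counts the doubly-matched ship twice, while the UNION variant deduplicates it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Ships (name TEXT, class TEXT);
    CREATE TABLE Classes (class TEXT);
    CREATE TABLE Outcomes (ship TEXT, result TEXT);
    INSERT INTO Ships VALUES ('HMS Prince of Wales','King George V'),
                             ('King George V','King George V');
    INSERT INTO Classes VALUES ('King George V');
    INSERT INTO Outcomes VALUES ('HMS Prince of Wales','sunk'),
                                ('King George V','sunk');
""")

# OR in the join: 'King George V' matches by class on one joined row and
# by name on another, so it is counted twice (3 in total).
or_count = conn.execute("""
    SELECT COUNT(o.ship)
    FROM Classes c
    LEFT JOIN Ships s ON s.class = c.class
    LEFT JOIN Outcomes o ON (o.ship = c.class OR o.ship = s.name)
                        AND o.result = 'sunk'
""").fetchone()[0]

# UNION deduplicates the (ship, class) pair before counting (2 in total).
union_count = conn.execute("""
    SELECT COUNT(t.ship)
    FROM Classes
    LEFT JOIN (SELECT ship, class FROM Outcomes LEFT JOIN Ships ON ship = name
               WHERE result = 'sunk'
               UNION
               SELECT ship, class FROM Outcomes LEFT JOIN Classes ON ship = class
               WHERE result = 'sunk') AS t
    ON Classes.class = t.class
""").fetchone()[0]
```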
|
The queries might return different results in many cases. One obvious case is when a sunk ship matches both `class` and `name` in `outcomes`. The `union` in the second query will return one row in this case. The `join` in the first query will return two rows. Hence the counts will be different.
I think you could fix this particular issue by using `count(distinct)` in the first query.
|
Under what conditions do these two SQL queries give different results?
|
[
"",
"sql",
"sql-server",
""
] |
I have 3 related tables in the database, the table is A, B & C
table structure as below
table A
```
id | name_A
--- + ---------
1 | Endru
2 | maz
3 | Hudson
```
table B
```
id | name_B
----- + -------
10 | Food
11 | clothes
```
table C
```
id | status
--- + -------
1 | exist
10 | exist
3 | none
```
I want to produce like this
```
id | name_A | name_B | status
------------------------------------------
1 | Endru | null | exist
10 | null | food | exist
3 | hudson | null | none
```
How do I achieve this? I have tried using inner and left outer joins but with no success.
Thanks
|
I think the following query will fetch what you need
```
SELECT tableC.id, TableA.name_A,TableB.name_B,tableC.status
FROM tableC
LEFT JOIN TableA on TableC.id = TableA .id
LEFT JOIN TableB on TableC.id = TableB .id
```
|
You can do this using `union all` and aggregation:
```
select id, max(name_a) as name_a, max(name_b) as name_b,
coalesce(max(status), 'none') as status
from (select id, name_a, NULL as name_b, NULL as status
from tableA
union all
select id, NULL, name_b, NULL
from tableB
union all
select id, NULL, NULL as name_b, status
from tableC
) abc
group by id;
```
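A sketch of the union-all-plus-aggregation query using Python's bundled `sqlite3` (an assumption) and the question's sample rows. Note that this variant also returns ids 2 and 11, which are absent from tableC, with the default status:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tableA (id INTEGER, name_a TEXT);
    CREATE TABLE tableB (id INTEGER, name_b TEXT);
    CREATE TABLE tableC (id INTEGER, status TEXT);
    INSERT INTO tableA VALUES (1,'Endru'),(2,'maz'),(3,'Hudson');
    INSERT INTO tableB VALUES (10,'Food'),(11,'clothes');
    INSERT INTO tableC VALUES (1,'exist'),(10,'exist'),(3,'none');
""")

# MAX ignores NULLs, so it picks up whichever table supplied the value.
rows = conn.execute("""
    SELECT id, MAX(name_a), MAX(name_b), COALESCE(MAX(status), 'none')
    FROM (SELECT id, name_a, NULL AS name_b, NULL AS status FROM tableA
          UNION ALL
          SELECT id, NULL, name_b, NULL FROM tableB
          UNION ALL
          SELECT id, NULL, NULL, status FROM tableC)
    GROUP BY id
    ORDER BY id
""").fetchall()
```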
|
sql select statement with 3 tables?
|
[
"",
"mysql",
"sql",
""
] |
Can someone help me put my pl/sql procedure in a package? I've tried and I'm struggling with it:
This is what I have, for my package specification:
```
CREATE OR REPLACE PACKAGE film_pkg
IS
title VARCHAR2(100);
PROCEDURE get_films(fname VARCHAR2);
END film_pkg;
--
```
This is my package body where im running into the problems:
```
CREATE OR REPLACE PACKAGE BODY film_pkg
IS
PROCEDURE get_films (fname IN film.title%TYPE,
r_date OUT film.release_date%TYPE,
dur OUT film.duration%TYPE)
AS
BEGIN
SELECT release_date, duration
INTO r_date, dur
FROM FILM
WHERE title = fname;
EXCEPTION
WHEN NO_DATA_FOUND
THEN
r_date := '';
dur := '';
END get_films;
END film_pkg;
```
If anyone could help me fix the errors I'd appreciate it:
```
Error(4,11): PLS-00323: subprogram or cursor 'GET_FILMS' is declared in a package specification and must be defined in the package body
```
|
Your header and body procedure definitions don't match
In the header, you have:
```
PROCEDURE get_films(fname VARCHAR2);
```
Whereas in the body:
```
PROCEDURE get_films(fname IN film.title%type,
r_date OUT film.release_date%type, dur OUT film.duration%type)
```
You probably just need to update the header definition with the two additional OUT params?
**To Summarize**
* Ensure the header definition matches all parameters of the body implementation (number of parameters, names of parameters, order of parameters, and the parameter types)
* As per Alex's comment, do not mix and match the custom type (`film.title%type`) with the base type (`VARCHAR2`). Choose one or the other.
|
"**parameter name** that was inside the procedure defined in the body did not match with the corresponding parameter name in the body."
|
Oracle Error PLS-00323: subprogram or cursor is declared in a package specification and must be defined in the package body
|
[
"",
"sql",
"oracle",
"stored-procedures",
"plsql",
"package",
""
] |
I have this query:
```
SELECT distinct
num as number,
name as name
from my_table_name
where number = '12345';
```
And this is the results:
```
number - name
1. 12345 - mike
2. 12345 - charlie
3. 12345 - jose
```
I need a new query so that when this happens (duplicate or triplicate numbers), it shows me only one of them. Example:
```
number - name
12345 - mike
```
I only need one of them; which one doesn't matter. If it finds one, print it and close the procedure, function or cursor.
|
Distinct is going to return results that are distinct, relative to all of the data you are querying for. If you only want one of the results returned and you know that the result used is arbitrary, you can just add [a filter based on the row number](http://www.w3schools.com/sql/sql_top.asp) (how specifically this is done depends on what DBMS you are using.)
Oracle example:
```
select num as "number",
name as "name"
from my_table_name
where num = '12345'
and rownum = 1; -- just gets the first row.
```
|
If you don't care which one is being returned, why are you even asking for it to be returned?
However, to get a single line of results regardless of the number of matching rows, you should probably be using `GROUP BY` and a summary function:
```
Select
num as number,
max(name) as name --or min(), or any other summary function that works on this data type
from my_table_name
where num = '12345'
group by num
```
|
duplicate values in a row using pl/sql
|
[
"",
"sql",
"oracle",
""
] |
I am facing a problem with a SQL query. Here is what I have tried:
```
CREATE TABLE #EventTable(eStartDate datetime,eEndDate datetime)
insert into #EventTable VALUES('2014-10-21 10:57:48.603','2014-10-22 10:57:48.603')
insert into #EventTable VALUES('2014-10-23 10:57:48.603','2014-10-23 10:57:48.603')
insert into #EventTable VALUES('2014-10-25 10:57:48.603','2014-10-27 10:57:48.603')
insert into #EventTable VALUES('2014-10-28 10:57:48.603','2014-10-30 10:57:48.603')
insert into #EventTable VALUES('2014-10-31 10:57:48.603','2014-11-02 10:57:48.603')
```
Second table
```
CREATE TABLE #NeedDateTable(StartDate datetime,EndDate datetime)
insert into #NeedDateTable VALUES('2014-10-22 10:57:48.603','2014-10-26 10:57:48.603')
insert into #NeedDateTable VALUES('2014-11-23 10:57:48.603','2014-11-23 10:57:48.603')
insert into #NeedDateTable VALUES('2014-11-02 10:57:48.603','2014-11-03 10:57:48.603')
SELECT * from #EventTable
SELECT * from #NeedDateTable
```
I want to return dates from #NeedDateTable which are overlapping with #EventTable. So in my case it should return two dates "**2014-10-22 10:57:48.603** && **2014-10-26 10:57:48.603**" and
"**2014-11-02 10:57:48.603** && **2014-11-03 10:57:48.603**"
Here is my query
```
SELECT Startdate , EndDate
FROM #NeedDateTable a,#EventTable b
WHERE ((a.Startdate <= b.eStartDate and b.eStartDate<= a.EndDate)
OR (a.Startdate <= b.eEndDate and b.eEndDate <= a.EndDate))
```
It is returning duplicate records for the above dates. Could you please help me fix my query?
|
What I see in your output is that you want records of NeedDateTable which have StartDate = EventTable's eEndDate. If this is the case, then here is the query:
```
select a.* from NeedDateTable a join EventTable b
on a.StartDate = b.eEndDate;
```
Or, if you want to return records of NeedDateTable where there exists a record in EventTable whose eStartDate or eEndDate lies between the StartDate and EndDate of NeedDateTable, then:
```
select * from NeedDateTable a
where exists
(
select 1 from EventTable b
where (b.eStartDate between a.StartDate and a.EndDate) or
(b.eEndDate between a.StartDate and a.EndDate)
);
```
[sample sql fiddle](http://sqlfiddle.com/#!3/5b5ed/4)
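As an aside, the canonical two-comparison overlap test (`b.start <= a.end AND a.start <= b.end`) also catches an event that fully contains a requested range, which the two BETWEEN checks can miss. A sketch using Python's bundled `sqlite3` (an assumption) with the question's dates, times dropped for brevity:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE EventTable (eStartDate TEXT, eEndDate TEXT);
    CREATE TABLE NeedDateTable (StartDate TEXT, EndDate TEXT);
    INSERT INTO EventTable VALUES
        ('2014-10-21','2014-10-22'), ('2014-10-23','2014-10-23'),
        ('2014-10-25','2014-10-27'), ('2014-10-28','2014-10-30'),
        ('2014-10-31','2014-11-02');
    INSERT INTO NeedDateTable VALUES
        ('2014-10-22','2014-10-26'), ('2014-11-23','2014-11-23'),
        ('2014-11-02','2014-11-03');
""")

# EXISTS avoids the duplicates a plain join produces; ISO date strings
# compare correctly as text.
rows = conn.execute("""
    SELECT StartDate, EndDate FROM NeedDateTable a
    WHERE EXISTS (SELECT 1 FROM EventTable b
                  WHERE b.eStartDate <= a.EndDate
                    AND a.StartDate <= b.eEndDate)
""").fetchall()
```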
|
## Query
```
SELECT a.StartDate,a.EndDate
FROM NeedDateTable a
INNER JOIN
EventTable b
ON a.StartDate=b.eStartDate
OR a.StartDate=b.eEndDate
OR a.EndDate=b.eStartDate
OR a.EndDate=b.eEndDate;
```
## [Fiddle Demo](http://sqlfiddle.com/#!3/6bb04/3)
|
How to compare dates from one table with another table
|
[
"",
"sql",
"sql-server",
""
] |
I am trying to write a SQL query for the data below.
I have data like:
```
id | flag
1 | Y
2 | Y
1 | Y
2 | N
1 | Y
1 | Y
1 | Y
3 | N
3 | N
3 | N
2 | N
3 | N
```
Expected output
```
1 | Y
2 | Y
3 | N
```
If an id has a Y (with or without an N) then display Y; if it contains only N then display N.
|
You just need to group and count the values for the group. You said *if it contains Only N then Display N*. This means when all the flags that contain `N` equal all the elements in the group then you should display an `N`. Otherwise, a `Y`.
So you just need to group by and perform the previous comparison. Literally. This will allow you to respond properly in case a `Z` or a `NULL` flag comes in the future.
**Standard SQL**
```
SELECT id,
       CASE WHEN SUM(CASE WHEN flag = 'N' THEN 1 ELSE 0 END) = COUNT(*)
            THEN 'N' ELSE 'Y' END AS groupedFlag
FROM tab GROUP BY id
```
**MySQL**
```
SELECT id, IF(SUM(flag = 'N') = COUNT(*), 'N', 'Y') groupedFlag
FROM tab GROUP BY id
```
Fiddle with `Z` and `NULL` flags [here](http://sqlfiddle.com/#!2/bc904/1).
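A sketch of the grouped-flag query using Python's bundled `sqlite3` (an assumption; SQLite, like MySQL, treats `flag = 'N'` as 0/1 inside SUM), with the question's twelve rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tab (id INTEGER, flag TEXT)")
conn.executemany("INSERT INTO tab VALUES (?, ?)",
                 [(1, "Y"), (2, "Y"), (1, "Y"), (2, "N"), (1, "Y"), (1, "Y"),
                  (1, "Y"), (3, "N"), (3, "N"), (3, "N"), (2, "N"), (3, "N")])

# An id is 'N' only when every one of its flags is 'N'.
rows = conn.execute("""
    SELECT id,
           CASE WHEN SUM(flag = 'N') = COUNT(*) THEN 'N' ELSE 'Y' END
    FROM tab GROUP BY id ORDER BY id
""").fetchall()
```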
|
```
SELECT [id], MAX([flag])
FROM Table1 GROUP BY [id]
```
[**FIDDLE**](http://sqlfiddle.com/#!6/567af/1)
|
Assign 'Y' flag if 'Y' is present in multi set table
|
[
"",
"sql",
""
] |
A school assignment I'm working on says to format a decimal(14,0) column "as currency [US$xxx,xxx] and right justified [all the commas line up vertically]."
I can select the data in the correct format using this:
```
CONCAT("US$", FORMAT(theColumn, 0))
```
But the data is not right justified. I've been searching and searching and simply haven't been able to find any way to right justify the output.
I did find an answer on here that shows how to do it if the column is a string type and has a fixed width, but I can't find a way to right justify the output for a decimal data type. Is it possible?
EDIT:
MySQL returns data left justified, like this:
```
US$18,100,000,000
US$130,100,000,000
US$1,200,000,000
```
I want to select it right justified, like this:
```
 US$18,100,000,000
US$130,100,000,000
  US$1,200,000,000
```
|
I think the following query is the answer to your question.
Since the maximum length of a DECIMAL(14,0) value as represented with separators is 18, you have to work out how many spaces to add in front of every concatenated result. I calculate the number of digits of the number, then the number of commas ((length of the number - 1) DIV 3), and finally the number of spaces to be added (18 - number length - separator length). Hope I am not missing something.
```
SELECT column_name,
CONCAT("US$", FORMAT(column_name, 0)) as authorstry,
CHAR_LENGTH(CAST(FORMAT(column_name, 0) AS CHAR(18))) AS stringlength,
CHAR_LENGTH(column_name) as numofdigits,
((CHAR_LENGTH(column_name)-1) DIV 3) as numofseparators,
(18-(CHAR_LENGTH(column_name))-((CHAR_LENGTH(column_name)-1) DIV 3)) as spacestobeadded,
CONCAT(SPACE(18-(CHAR_LENGTH(column_name))-((CHAR_LENGTH(column_name)-1) DIV 3)) ,"US$",FORMAT(column_name, 0)) as finalresutl
FROM table;
```
|
I think you want
```
select lpad(column_name,x,' ') from table_name;
```
where x is the number of places you want that value you fill (so say 8 places)
|
How to right-align a column when selecting data?
|
[
"",
"mysql",
"sql",
""
] |
I have a table that stores pizza orders. The table has a value called quantity: the number of pizzas in that specific order. I need to "create a query that gets the distribution of orders against different pizza quantities in an order". The question states "Your sql query should count the number of orders with the following pizza quantities: 1, 2, 3, 4, 5, 6 and above 6". After researching for a while I came up with this query, but it doesn't work the way I would like. Here is the query and its results. Thanks in advance.
```
SELECT CASE
WHEN Quantity = 1 THEN '1 '
WHEN Quantity = 2 THEN '2'
WHEN Quantity = 3 THEN '3 '
WHEN quantity = 4 THEN '4'
WHEN quantity = 5 THEN '5 '
WHEN quantity = 6 THEN '6'
ELSE '>6'
END AS Range, COUNT(Quantity) AS Amount
FROM orders
GROUP BY Quantity
```
This outputs :
```
Range Amount
1 2
2 3
3 3
4 4
5 5
6 1
>6 1
>6 2
>6 2
>6 1
```
How would I go about grouping the >6 range and summing the amounts, so that everything falls under a single '>6' entry?
|
One possible way is using the same `CASE WHEN` statement for the `GROUP BY` clause, for example\* :
```
SELECT CASE
WHEN Quantity <= 6 THEN CAST(Quantity as VARCHAR(1))
ELSE '>6'
END AS Range, COUNT(Quantity) AS Amount
FROM orders
GROUP BY (CASE
WHEN Quantity <= 6 THEN CAST(Quantity as VARCHAR(1))
ELSE '>6'
END)
```
\*) Using simplified version of your original `CASE WHEN` statement. Written in SQL Server flavor of SQL.
|
Run this query instead of your previous one:
```
SELECT CASE
WHEN Quantity = 1 THEN '1 '
WHEN Quantity = 2 THEN '2'
WHEN Quantity = 3 THEN '3 '
WHEN quantity = 4 THEN '4'
WHEN quantity = 5 THEN '5 '
WHEN quantity = 6 THEN '6'
ELSE '>6'
END AS Range, COUNT(Quantity) AS Amount
FROM orders
GROUP BY CASE
            WHEN Quantity = 1 THEN '1 '
            WHEN Quantity = 2 THEN '2'
            WHEN Quantity = 3 THEN '3 '
            WHEN quantity = 4 THEN '4'
            WHEN quantity = 5 THEN '5 '
            WHEN quantity = 6 THEN '6'
            ELSE '>6'
         END
```
|
SQL Group By custom range?
|
[
"",
"sql",
"sql-server",
"select",
"case",
""
] |
We have a piece of software that does not delete entries we no longer want. In order to get a feel for how much data is wasting away in our server and prepare for a big cleanup operation, I am trying to loop through all of the tables and pull the records that are marked for deletion. This is what I'm working with:
```
DECLARE @total INT
DECLARE @count INT
DECLARE @name NVARCHAR(25)
DECLARE @rn INT
SET @total = (SELECT COUNT(Name) FROM sys.tables)
SET @count = 1
SET @rn = (SELECT ROW_NUMBER() OVER(ORDER BY Name) FROM sys.tables)
WHILE @count <= @total AND @count < 2
BEGIN
SET @name = ( SELECT Name, ROW_NUMBER() OVER(ORDER BY Name)
FROM sys.tables
WHERE @rn = @count
)
EXEC('SELECT * FROM WS_Live.dbo.' + @name + ' WHERE GCRecord IS NOT NULL')
SET @count += 1
END
```
This is my error:
> Msg 116, Level 16, State 1, Line 19
> Only one expression can be specified in the select list when the subquery is not introduced with EXISTS.
I realize that my error probably has to do with selecting two columns in the line
```
SET @name = ( SELECT Name, ROW_NUMBER() OVER(ORDER BY Name)
FROM sys.tables
WHERE @rn = @count
)
```
but, I'm not sure how else to ensure that I am selecting the next row.
P.S. `AND @count <2` is just for script testing.
**How can I loop through all of the tables?**
|
Use this system stored procedure
```
sp_MSforeachtable @command1="select count(*) from ?"
```
[sample code](http://weblogs.sqlteam.com/joew/archive/2007/10/23/60383.aspx)
Note:
* This `sp_MSforeachtable` is an undocumented stored procedure.
* Not supported on Azure SQL (per the comment below).
|
Maybe this is what you are looking for
```
DECLARE @NAME VARCHAR(100)
DECLARE @SQL NVARCHAR(300)
DECLARE CUR CURSOR FOR
SELECT NAME
FROM SYS.TABLES
WHERE TYPE = 'U'
AND SCHEMA_ID = 1
OPEN CUR
FETCH NEXT FROM CUR INTO @NAME
WHILE @@FETCH_STATUS = 0
BEGIN
SET @SQL = 'SELECT * FROM WS_LIVE.DBO.'+@NAME+' WHERE GCRECORD IS NOT NULL'
PRINT @SQL
EXEC Sp_executesql
@SQL
FETCH NEXT FROM CUR INTO @NAME
END
CLOSE CUR
DEALLOCATE CUR
```
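The same iterate-all-tables idea can also be expressed in client code, which avoids the quoting gymnastics of `sp_MSforeachtable`. A sketch using Python's bundled `sqlite3` (an assumption; the table names, the `GCRecord` column and the data are invented to mirror the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t1 (id INTEGER, GCRecord INTEGER);
    CREATE TABLE t2 (id INTEGER);                 -- no GCRecord column
    INSERT INTO t1 VALUES (1, NULL), (2, 99);
""")

# Enumerate tables, skip those without the column, collect flagged rows.
flagged = {}
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
for table in tables:
    cols = [r[1] for r in conn.execute(f"PRAGMA table_info({table})")]
    if "GCRecord" in cols:
        flagged[table] = conn.execute(
            f"SELECT * FROM {table} WHERE GCRecord IS NOT NULL").fetchall()
```

In SQL Server the equivalent catalog lookups are `sys.tables` and `sys.columns`, as in the cursor solution above.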
|
How to loop through all SQL tables?
|
[
"",
"sql",
"sql-server",
"loops",
"t-sql",
""
] |
I have two tables in my datebase:
Cars:
IdCar PK,
Company,
Type,
Color
Renting:
Id-rent\_date PK,
IdCar FK,
Car\_return\_date
I need to write a query that will return three columns:
- Year
- amount of days in which cars of specific company was rented in the year
- Company - from Cars table
For now I have:
> SELECT DATEDIFF(dd, Id-rent\_date, Car\_return\_date) AS Days, Company
> FROM Renting INNER JOIN Cars ON Renting.IdCar = Cars.IdCar
I tried to group by Company at the end of the query but it returns an error:
Column 'Renting.Id-rent\_date' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
|
Complete query:
```
declare @iter int
declare @max int
declare @test Table ([Id-rent_date] datetime, Car_return_date datetime, IdCar int)
set @iter = 0
select @max = max(year(Car_return_date) - year([Id-rent_date])) FROM Renting
WHILE(@iter < @max)
BEGIN
insert into @test
SELECT
DATEADD(yy, DATEDIFF(yy,0,[Id-rent_date]) + @iter, 0)
,DATEADD(yy, DATEDIFF(yy,0,[Id-rent_date]) + 1 + @iter, -1)
,IdCar
FROM Renting
where year([Id-rent_date]) + @iter <> year(Car_return_date) and year([Id-rent_date]) <> year(Car_return_date)
set @iter = @iter + 1
END
SELECT YEAR([Id-rent_date]), SUM(DATEDIFF(dd, [Id-rent_date], Car_return_date)) AS Days,
Company
FROM
(SELECT
[Id-rent_date]
,[Car_return_date]
,IdCar
FROM Renting
where year([Id-rent_date]) = year(Car_return_date)
union all
SELECT
[Id-rent_date]
,DATEADD(yy, DATEDIFF(yy,0,[Id-rent_date]) + 1, -1)
,IdCar
FROM Renting
where year([Id-rent_date]) <> year(Car_return_date)
UNION ALL
SELECT
DATEADD(yy, DATEDIFF(yy,0,[Car_return_date]), 0)
,[Car_return_date]
,IdCar
FROM Renting
where year([Id-rent_date]) <> year(Car_return_date)
UNION ALL
select * from @test) RentingSplit inner join Cars ON RentingSplit.IdCar = Cars.IdCar
GROUP BY Company, YEAR([Id-rent_date])
```
|
If you add a `group by` clause, all the items in the select list must either be the columns you've grouped by or aggregate calculations. Here, e.g., you want to sum the amount of days each company had:
```
SELECT SUM(DATEDIFF(dd, Id-rent_date, Car_return_date)) AS Days,
Company
FROM Renting
INNER JOIN Cars ON Renting.IdCar = Cars.IdCar
GROUP BY Company
```
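A sketch of the grouped day-sum using Python's bundled `sqlite3` (an assumption; SQLite has no DATEDIFF, so `julianday` differences stand in, the hyphenated column name is simplified to `rent_date`, and the sample rentals are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Cars (IdCar INTEGER, Company TEXT);
    CREATE TABLE Renting (rent_date TEXT, IdCar INTEGER, return_date TEXT);
    INSERT INTO Cars VALUES (1, 'Hertz'), (2, 'Avis');
    INSERT INTO Renting VALUES ('2014-01-01', 1, '2014-01-04'),
                               ('2014-02-01', 1, '2014-02-02'),
                               ('2014-03-10', 2, '2014-03-15');
""")

# Every non-grouped select item is an aggregate, so GROUP BY is satisfied.
rows = conn.execute("""
    SELECT Company,
           SUM(CAST(julianday(return_date) - julianday(rent_date) AS INTEGER)) AS Days
    FROM Renting JOIN Cars ON Renting.IdCar = Cars.IdCar
    GROUP BY Company ORDER BY Company
""").fetchall()
```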
|
Joining two sql tables and grouping
|
[
"",
"sql",
"group-by",
""
] |
After doing a data migration from an old system, I came across a situation where there are records without dates.


The example above is the situation I have. There is nowhere to get the data for those NULLs. So the logic should be:
1. Get the minimum date within that range (00013 or 00021).
2. Update the records with the earliest date of the series.
I suppose a CASE could achieve this. Can someone help me figure out a way to do it?
|
I set up a small POC to show you how to do this.
```
CREATE TABLE
#T
(
c1 char(9) NOT NULL
, c2 datetime NULL
);
INSERT INTO
#T
(
c1
, c2
)
VALUES
('0011.001', '2008-12-16T00:00:00.000')
, ('0013.000', NULL)
, ('0013.001', '2008-07-10T00:00:00.000')
, ('0013.002', NULL)
, ('9999.000', '2009-07-03T00:00:00.000');
```
At this point, we have a table with 3 unique "bands" which is what I'm calling the leading 4 character in that column. It's not a number as numbers don't have leading zeros. That's an artifact of presentation.
```
-- Identify the minimum value per first 4 digit values
SELECT
LEFT(T.c1, 4) AS Band
, MIN(T.c2) AS MinDate
FROM
#T AS T
GROUP BY
LEFT(T.c1, 4)
```
Running that query gives results as
```
Band MinDate
0011 2008-12-16 00:00:00.000
0013 2008-07-10 00:00:00.000
9999 2009-07-03 00:00:00.000
```
That's what I expect. Now I simply need to use those values to backfill my missing dates.
```
-- Use this query to backfill data
-- the join condition here makes things painful for large data sets
UPDATE
T
SET
T.c2 = D.MinDate
FROM
#T AS T
-- use the above query as a derived table
INNER JOIN
(
-- Identify the minimum value per first 4 digit values
SELECT
LEFT(T.c1, 4) AS Band
, MIN(T.c2) AS MinDate
FROM
#T AS T
GROUP BY
LEFT(T.c1, 4)
) D
ON D.Band = LEFT(T.c1, 4)
WHERE
T.c2 IS NULL;
```
Of course, if you are using SQL Server 2005+ (which is likely these days), you could avoid using a join with the help of a window `MIN()`:
```
UPDATE
T
SET
T.c2 = T.MinDate
FROM
(
SELECT
T.c2
, MIN(T.c2) OVER (PARTITION BY LEFT(T.c1, 4)) AS MinDate
FROM
#T AS T
) T
WHERE
T.c2 IS NULL;
```
Basically, the above works same as the previous solution except the `c2` values and their corresponding minimums are calculated and returned in the same subquery, which is used as a derived table (updatable in this context).
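A sketch of the backfill using Python's bundled `sqlite3` (an assumption) with the POC data above. SQLite has no `LEFT()` string function, so `substr` stands in, and a correlated subquery replaces the join, since `UPDATE ... FROM` needs a fairly recent SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE T (c1 TEXT, c2 TEXT)")
conn.executemany("INSERT INTO T VALUES (?, ?)", [
    ("0011.001", "2008-12-16"), ("0013.000", None),
    ("0013.001", "2008-07-10"), ("0013.002", None),
    ("9999.000", "2009-07-03")])

# For each NULL row, pull the minimum date of its 4-character band.
conn.execute("""
    UPDATE T SET c2 = (SELECT MIN(other.c2) FROM T AS other
                       WHERE substr(other.c1, 1, 4) = substr(T.c1, 1, 4))
    WHERE c2 IS NULL
""")
filled = dict(conn.execute("SELECT c1, c2 FROM T"))
```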
|
First of all: it's good that you are moving this to a new database. It is bad design to have range and sub-range in one column.
This should do:
```
update mytable
set mydate =
(
select min(mydate)
from mytable other
where left(other.range,5) = left(mytable.range,5)
)
where mydate is null;
```
|
Conditional Upload with Case
|
[
"",
"sql",
"sql-server",
"ssis",
""
] |
I have a table named Products with columns Apple (A) and Grape (G).
```
A G
1 1
1 1
1 Null
```
I ran the Query..
```
Select count(Apple) as 'Apple', count(Grape) as 'Grape'
from products
where Apple = 1 and Grape = 1
```
The Output which i get from the above query is
```
Apple=2
Grape=2
```
Whereas I should be getting Apple = 3 and Grape = 2. Kindly help.
|
```
create table AppleGrape(A int, B int)
go
insert into AppleGrape values(1,1)
insert into AppleGrape values(1,1)
insert into AppleGrape values(1,NULL)
SELECT sum(A) as 'Apple',sum(isnull(B,0)) as 'Grape'
FROM AppleGrape
```
check fiddle too. <http://sqlfiddle.com/#!3/126e05/5>
|
It seems this may be safer if you need a total. It handles NULL values more safely and doesn't depend on different behaviors from the aggregates' default handling of NULLs, which can trip some queries up.
```
SELECT SUM(ISNULL(Apple,0)) AS Apple, SUM(ISNULL(Grapes,0)) AS Grape FROM Products
```
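The underlying gotcha is that the `WHERE Apple = 1 and Grape = 1` clause removes the NULL row before aggregation ever happens, while `COUNT(column)` already skips NULLs on its own. A small sketch with SQLite via Python (table name made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE products (apple INTEGER, grape INTEGER)")
con.executemany("INSERT INTO products VALUES (?, ?)", [(1, 1), (1, 1), (1, None)])
# No WHERE filter: COUNT(column) ignores NULLs by itself,
# and COALESCE turns the NULL into 0 for the SUM.
row = con.execute(
    "SELECT COUNT(apple), COUNT(grape), SUM(COALESCE(grape, 0)) FROM products"
).fetchone()
# row == (3, 2, 2)
```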
|
How to Display and Count multiple columns in SQL-2
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I am new to SQL, so please bear with me. I am trying to select only rows with specific product codes. When I run the query below for one selection, without the OR condition, the results are fine, but as soon as I add another choice with an OR, the results are not accurate. It looks like it keeps matching ProductCode MW-BShorts-0009 against each OrderID.
```
SELECT
Orders.OrderID, OrderDetails.ProductCode
FROM
Orders, OrderDetails
WHERE
Orders.OrderID = OrderDetails.OrderID
AND OrderDetails.ProductCode = 'MacFRC'
OR OrderDetails.ProductCode = 'MW-BShorts-0009'
```
|
Use the [IN](http://www.w3schools.com/sql/sql_in.asp) clause, and it's better to use a [Join](http://en.wikipedia.org/wiki/Join_%28SQL%29#Inner_join) instead of a [Cartesian product](http://en.wikipedia.org/wiki/Join_%28SQL%29#Cross_join):
```
SELECT Orders.OrderID, OrderDetails.ProductCode
FROM Orders
JOIN OrderDetails
ON Orders.OrderID = OrderDetails.OrderID
AND OrderDetails.ProductCode IN ('MacFRC','MW-BShorts-0009')
```
|
You can use IN as Srikanth has mentioned. But you need to understand the concept of [operator precedence](http://msdn.microsoft.com/en-us/library/ms190276.aspx) as well. In your case, you are saying "Either Orders.OrderID must equal OrderDetails.OrderID and OrderDetails.ProductCode must equal 'MacFRC', or OrderDetails.ProductCode must equal 'MW-BShorts-0009'." What you want is for the OrderID = ProductCode criterion to apply to either ProductCode.
All you need to do is override the default precedence with parentheses, same as you would if you want to do an addition before a multiplication. Like this:
```
SELECT Orders.OrderID, OrderDetails.ProductCode
FROM Orders, OrderDetails
WHERE Orders.OrderID = OrderDetails.OrderID
AND (OrderDetails.ProductCode = 'MacFRC'
OR OrderDetails.ProductCode = 'MW-BShorts-0009')
```
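The precedence point is easy to demonstrate in any SQL engine, since AND binding tighter than OR is standard behavior. A quick illustration with SQLite via Python (toy table):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a INTEGER, b INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?)", [(1, 1), (1, 2), (2, 2)])
# AND binds tighter than OR: this reads as (a=1 AND b=1) OR b=2
unparenthesized = con.execute(
    "SELECT COUNT(*) FROM t WHERE a = 1 AND b = 1 OR b = 2"
).fetchone()[0]
# Parentheses restore the intended meaning: a=1 AND (b=1 OR b=2)
parenthesized = con.execute(
    "SELECT COUNT(*) FROM t WHERE a = 1 AND (b = 1 OR b = 2)"
).fetchone()[0]
# unparenthesized == 3, parenthesized == 2
```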
|
Select multiple selected values from a column from each row
|
[
"",
"sql",
""
] |
There's a problem on SQLzoo under More Join operations for MySQL.
The problem asked is
**List the films released in the year 1978 ordered by the number of actors in the cast.**
The tables given are as follows; ord has a value of 0 or 1 (1 for a starring role).
```
movie | actor | casting
id id movieid
yr name actorid
title ord
budget
gross
director
```
I wrote my SQL query as follows, SQLzoo isn't telling me I have the right answer, but I believe my query to be correct. Can someone verify this and possibly tell me how to do this without a subquery please? Thanks!
```
SELECT DISTINCT
movie.title, f.num_actors
FROM
(SELECT
casting.movieid,COUNT(casting.actorid) AS num_actors
FROM
casting
GROUP BY
casting.movieid ) f
JOIN
movie ON f.movieid = movie.id
WHERE
movie.yr = 1978
ORDER BY
2 DESC
```
|
You can do the join first, then do the grouping:
```
SELECT movie.*
FROM movie
JOIN casting ON movie.id = casting.movieid
WHERE movie.yr = 1978
GROUP BY movie.id
ORDER BY COUNT(*) DESC
```
|
```
SELECT title, COUNT(actorid) AS actors FROM movie
JOIN casting ON id = movieid
WHERE yr = 1978
GROUP BY title
ORDER BY actors DESC
```
|
SQL Join SQLzoo
|
[
"",
"mysql",
"sql",
"sql-server",
"sqlite",
""
] |
As I doubt my title makes very much sense, I will do my best to explain what I am asking. I need to clean up an audit table which tracks when the state of an object was modified. For one reason or another, multiple records are being created with new dates while the state of the object is still the same. I need to preserve the first record of each state change and then remove any following records where the state is the same. Oh, and there is no primary key. Yeah! :|
Here is an example data set:
```
ObjectID ObjectState DateOfEntry
101144 1 2007-08-14 12:39:30.587
101144 1 2007-08-14 12:41:52.620
101144 1 2007-08-14 12:42:11.150
101144 1 2007-08-14 12:42:24.197
101144 3 2007-08-14 12:44:06.403
101144 3 2007-08-14 12:44:06.467
101144 3 2007-08-14 12:46:12.573
101144 3 2007-08-14 12:50:51.670
101144 3 2007-08-14 12:50:51.750
101144 3 2007-08-14 12:56:34.330
101144 4 2007-08-14 17:28:59.280
101144 3 2007-08-14 17:32:26.313
101144 3 2007-08-14 17:32:48.720
101144 3 2007-08-14 17:45:07.460
101144 3 2007-08-14 17:46:31.740
101144 3 2007-08-14 17:47:04.380
101144 3 2007-08-14 17:47:29.507
101144 3 2007-08-14 17:49:13.460
101144 3 2007-08-14 17:54:15.320
101144 3 2007-08-14 17:55:57.540
101144 3 2007-08-14 19:50:11.913
101144 3 2007-08-14 19:53:10.820
101144 3 2007-08-14 20:03:44.900
101144 3 2007-08-16 10:34:56.477
101144 3 2007-08-16 10:36:06.477
101144 3 2007-08-16 10:36:24.570
101144 3 2007-11-06 09:19:26.157
101144 3 2007-11-06 09:24:28.200
101144 4 2010-09-27 14:11:03.287
101144 4 2014-01-27 17:31:58.077
```
The end table result should be:
```
ObjectID ObjectState DateOfEntry
101144 1 2007-08-14 12:39:30.587
101144 3 2007-08-14 12:44:06.403
101144 4 2007-08-14 17:28:59.280
101144 3 2007-08-14 17:32:26.313
101144 4 2010-09-27 14:11:03.287
```
I have tried using `RANK()` but the problem is that I can't just sort on `ObjectState` because the `ObjectState` values can be repeated out of order. I have to order them by the `DateOfEntry`. But if I do `RANK() OVER(ORDER BY DateOfEntry)` then I basically get row numbering.
How can I create a SQL query that will allow me to order by `DateOfEntry` but then group by `ObjectState` so I can remove all rows within that "object state group" except for the minimum one of the group?
|
An alternative solution I finally thought of, using LAG(), eliminates the CTE.
```
DELETE a1
FROM @Audits a1
INNER JOIN (SELECT ObjectID, DateOfEntry
FROM (SELECT ObjectID, DateOfEntry, ObjectState,
LAG(ObjectState) OVER(PARTITION BY ObjectID ORDER BY DateOfEntry) AS [PreviousObjectState]
FROM @Audits) AS Audits
WHERE Audits.ObjectState = PreviousObjectState
) a2
ON a2.ObjectID = a1.ObjectID AND a2.DateOfEntry = a1.DateOfEntry
SELECT * FROM @Audits
```
Long version with proof (I've duplicated the data set with a different ID just to verify the partition by is working as expected)
```
DECLARE @Audits TABLE (ObjectID INT, ObjectState INT, DateOfEntry DATETIME)
INSERT @Audits
SELECT 101144,1,'2007-08-14 12:39:30.587' UNION ALL
SELECT 101144,1,'2007-08-14 12:41:52.620' UNION ALL
SELECT 101144,1,'2007-08-14 12:42:11.150' UNION ALL
SELECT 101144,1,'2007-08-14 12:42:24.197' UNION ALL
SELECT 101144,3,'2007-08-14 12:44:06.403' UNION ALL
SELECT 101144,3,'2007-08-14 12:44:06.467' UNION ALL
SELECT 101144,3,'2007-08-14 12:46:12.573' UNION ALL
SELECT 101144,3,'2007-08-14 12:50:51.670' UNION ALL
SELECT 101144,3,'2007-08-14 12:50:51.750' UNION ALL
SELECT 101144,3,'2007-08-14 12:56:34.330' UNION ALL
SELECT 101144,4,'2007-08-14 17:28:59.280' UNION ALL
SELECT 101144,3,'2007-08-14 17:32:26.313' UNION ALL
SELECT 101144,3,'2007-08-14 17:32:48.720' UNION ALL
SELECT 101144,3,'2007-08-14 17:45:07.460' UNION ALL
SELECT 101144,3,'2007-08-14 17:46:31.740' UNION ALL
SELECT 101144,3,'2007-08-14 17:47:04.380' UNION ALL
SELECT 101144,3,'2007-08-14 17:47:29.507' UNION ALL
SELECT 101144,3,'2007-08-14 17:49:13.460' UNION ALL
SELECT 101144,3,'2007-08-14 17:54:15.320' UNION ALL
SELECT 101144,3,'2007-08-14 17:55:57.540' UNION ALL
SELECT 101144,3,'2007-08-14 19:50:11.913' UNION ALL
SELECT 101144,3,'2007-08-14 19:53:10.820' UNION ALL
SELECT 101144,3,'2007-08-14 20:03:44.900' UNION ALL
SELECT 101144,3,'2007-08-16 10:34:56.477' UNION ALL
SELECT 101144,3,'2007-08-16 10:36:06.477' UNION ALL
SELECT 101144,3,'2007-08-16 10:36:24.570' UNION ALL
SELECT 101144,3,'2007-11-06 09:19:26.157' UNION ALL
SELECT 101144,3,'2007-11-06 09:24:28.200' UNION ALL
SELECT 101144,4,'2010-09-27 14:11:03.287' UNION ALL
SELECT 101144,4,'2014-01-27 17:31:58.077' UNION ALL
SELECT 101145,1,'2007-08-14 12:39:30.587' UNION ALL
SELECT 101145,1,'2007-08-14 12:41:52.620' UNION ALL
SELECT 101145,1,'2007-08-14 12:42:11.150' UNION ALL
SELECT 101145,1,'2007-08-14 12:42:24.197' UNION ALL
SELECT 101145,3,'2007-08-14 12:44:06.403' UNION ALL
SELECT 101145,3,'2007-08-14 12:44:06.467' UNION ALL
SELECT 101145,3,'2007-08-14 12:46:12.573' UNION ALL
SELECT 101145,3,'2007-08-14 12:50:51.670' UNION ALL
SELECT 101145,3,'2007-08-14 12:50:51.750' UNION ALL
SELECT 101145,3,'2007-08-14 12:56:34.330' UNION ALL
SELECT 101145,4,'2007-08-14 17:28:59.280' UNION ALL
SELECT 101145,3,'2007-08-14 17:32:26.313' UNION ALL
SELECT 101145,3,'2007-08-14 17:32:48.720' UNION ALL
SELECT 101145,3,'2007-08-14 17:45:07.460' UNION ALL
SELECT 101145,3,'2007-08-14 17:46:31.740' UNION ALL
SELECT 101145,3,'2007-08-14 17:47:04.380' UNION ALL
SELECT 101145,3,'2007-08-14 17:47:29.507' UNION ALL
SELECT 101145,3,'2007-08-14 17:49:13.460' UNION ALL
SELECT 101145,3,'2007-08-14 17:54:15.320' UNION ALL
SELECT 101145,3,'2007-08-14 17:55:57.540' UNION ALL
SELECT 101145,3,'2007-08-14 19:50:11.913' UNION ALL
SELECT 101145,3,'2007-08-14 19:53:10.820' UNION ALL
SELECT 101145,3,'2007-08-14 20:03:44.900' UNION ALL
SELECT 101145,3,'2007-08-16 10:34:56.477' UNION ALL
SELECT 101145,3,'2007-08-16 10:36:06.477' UNION ALL
SELECT 101145,3,'2007-08-16 10:36:24.570' UNION ALL
SELECT 101145,3,'2007-11-06 09:19:26.157' UNION ALL
SELECT 101145,3,'2007-11-06 09:24:28.200' UNION ALL
SELECT 101145,4,'2010-09-27 14:11:03.287' UNION ALL
SELECT 101145,4,'2014-01-27 17:31:58.077'
DELETE a1
FROM @Audits a1
INNER JOIN (SELECT ObjectID, DateOfEntry
FROM (SELECT ObjectID, DateOfEntry, ObjectState,
LAG(ObjectState) OVER(PARTITION BY ObjectID ORDER BY DateOfEntry) AS [PreviousUserState]
FROM @Audits) AS Audits
WHERE Audits.ObjectState = PreviousUserState
) a2
ON a2.ObjectID = a1.ObjectID AND a2.DateOfEntry = a1.DateOfEntry
SELECT * FROM @Audits
```
**Yields this output**
```
ObjectID ObjectState DateOfEntry
----------- ----------- -----------------------
101144 1 2007-08-14 12:39:30.587
101144 3 2007-08-14 12:44:06.403
101144 4 2007-08-14 17:28:59.280
101144 3 2007-08-14 17:32:26.313
101144 4 2010-09-27 14:11:03.287
101145 1 2007-08-14 12:39:30.587
101145 3 2007-08-14 12:44:06.403
101145 4 2007-08-14 17:28:59.280
101145 3 2007-08-14 17:32:26.313
101145 4 2010-09-27 14:11:03.287
```
|
Short answer:
```
; WITH Records AS (
SELECT
ObjectId,
ObjectState,
DateOfEntry,
ROW_NUMBER() OVER (PARTITION BY ObjectID ORDER BY DateOfEntry) AS RowNum
FROM @Audits
)
DELETE R2
FROM Records R1
INNER JOIN Records R2
ON R1.ObjectId = R2.ObjectId
AND R1.ObjectState = R2.ObjectState
AND R1.RowNum + 1 = R2.RowNum
```
**Proof of solution**
```
DECLARE @Audits TABLE (ObjectID INT, ObjectState INT, DateOfEntry DATETIME)
INSERT @Audits
SELECT 101144,1,'2007-08-14 12:39:30.587' UNION ALL
SELECT 101144,1,'2007-08-14 12:41:52.620' UNION ALL
SELECT 101144,1,'2007-08-14 12:42:11.150' UNION ALL
SELECT 101144,1,'2007-08-14 12:42:24.197' UNION ALL
SELECT 101144,3,'2007-08-14 12:44:06.403' UNION ALL
SELECT 101144,3,'2007-08-14 12:44:06.467' UNION ALL
SELECT 101144,3,'2007-08-14 12:46:12.573' UNION ALL
SELECT 101144,3,'2007-08-14 12:50:51.670' UNION ALL
SELECT 101144,3,'2007-08-14 12:50:51.750' UNION ALL
SELECT 101144,3,'2007-08-14 12:56:34.330' UNION ALL
SELECT 101144,4,'2007-08-14 17:28:59.280' UNION ALL
SELECT 101144,3,'2007-08-14 17:32:26.313' UNION ALL
SELECT 101144,3,'2007-08-14 17:32:48.720' UNION ALL
SELECT 101144,3,'2007-08-14 17:45:07.460' UNION ALL
SELECT 101144,3,'2007-08-14 17:46:31.740' UNION ALL
SELECT 101144,3,'2007-08-14 17:47:04.380' UNION ALL
SELECT 101144,3,'2007-08-14 17:47:29.507' UNION ALL
SELECT 101144,3,'2007-08-14 17:49:13.460' UNION ALL
SELECT 101144,3,'2007-08-14 17:54:15.320' UNION ALL
SELECT 101144,3,'2007-08-14 17:55:57.540' UNION ALL
SELECT 101144,3,'2007-08-14 19:50:11.913' UNION ALL
SELECT 101144,3,'2007-08-14 19:53:10.820' UNION ALL
SELECT 101144,3,'2007-08-14 20:03:44.900' UNION ALL
SELECT 101144,3,'2007-08-16 10:34:56.477' UNION ALL
SELECT 101144,3,'2007-08-16 10:36:06.477' UNION ALL
SELECT 101144,3,'2007-08-16 10:36:24.570' UNION ALL
SELECT 101144,3,'2007-11-06 09:19:26.157' UNION ALL
SELECT 101144,3,'2007-11-06 09:24:28.200' UNION ALL
SELECT 101144,4,'2010-09-27 14:11:03.287' UNION ALL
SELECT 101144,4,'2014-01-27 17:31:58.077'
; WITH Records AS (
SELECT
ObjectId,
ObjectState,
DateOfEntry,
ROW_NUMBER() OVER (PARTITION BY ObjectID ORDER BY DateOfEntry) AS RowNum
FROM @Audits
)
DELETE R2
FROM Records R1
INNER JOIN Records R2
ON R1.ObjectId = R2.ObjectId
AND R1.ObjectState = R2.ObjectState
AND R1.RowNum + 1 = R2.RowNum
SELECT * FROM @Audits
```
**Yields this output**
```
ObjectID ObjectState DateOfEntry
----------- ----------- -----------------------
101144 1 2007-08-14 12:39:30.587
101144 3 2007-08-14 12:44:06.403
101144 4 2007-08-14 17:28:59.280
101144 3 2007-08-14 17:32:26.313
101144 4 2010-09-27 14:11:03.287
```
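Both variants (LAG and the self-joined ROW_NUMBER CTE) hinge on comparing each row to its predecessor within the partition. As a portable sketch, here is the same LAG-based cleanup against SQLite (>= 3.25 for window functions) via Python, using the implicit rowid in place of the missing primary key; the table and column names are made up:

```python
import sqlite3  # LAG() needs SQLite >= 3.25

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE audits (object_id INTEGER, state INTEGER, entry TEXT)")
con.executemany("INSERT INTO audits VALUES (?, ?, ?)", [
    (101144, 1, "t1"), (101144, 1, "t2"),
    (101144, 3, "t3"), (101144, 3, "t4"),
    (101144, 4, "t5"), (101144, 3, "t6"),
])
# No primary key? SQLite's implicit rowid still identifies the duplicates.
con.execute("""
    DELETE FROM audits WHERE rowid IN (
        SELECT rowid FROM (
            SELECT rowid, state,
                   LAG(state) OVER (PARTITION BY object_id ORDER BY entry) AS prev
            FROM audits
        )
        WHERE state = prev
    )
""")
rows = con.execute("SELECT state, entry FROM audits ORDER BY entry").fetchall()
# rows == [(1, "t1"), (3, "t3"), (4, "t5"), (3, "t6")]
```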
|
Deleting duplicate rows within group ordered by date
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I have a table with, for example, this data:
```
ID |start_date |end_date |amount
---|------------|-----------|-------
a1 |2013-12-01 |2014-03-31 |100
```
I want a query that splits the dates so the amount is split out over the years, like this:
```
ID |org_start_date |org_end_date |new_start_date |new_end_date |amount
---|----------------|---------------|----------------|----------------|-------
a1 |2013-12-01 |2014-03-31 |2013-12-01 |2013-12-31 |25
a1 |2013-12-01 |2014-03-31 |2014-01-01 |2014-03-31 |75
```
The 25 in 2013 is because 2013 covers one month, and the 75 in 2014 because it covers three months.
Is there a way to do this in T-SQL?
Thanks in advance!
|
Use the `spt_values` table to create a calendar table, then join it to your table to split the date range into any parts you want.
To split by year and divide the amount by months, you could:
```
with dates as
(
select number,DATEADD(day,number,'20130101') as dt
from master..spt_values
where number between 0 and 1000 AND TYPE='P'
)
select
m.start_date as org_start_date,
m.end_date as org_end_date,
min(d.dt) as new_start_date,
max(d.dt) as new_end_date,
m.amount*count(distinct month(d.dt))/(datediff(month,m.start_date,m.end_date)+1) as amount
from
MonthSplit m
join
dates d
on
d.dt between m.start_date and m.end_date
group by
m.start_date, m.end_date, year(d.dt),m.amount
```
Here is the [SQL FIDDLE DEMO](http://sqlfiddle.com/#!6/ae78a/2/0).
|
Here is a solution using a numbers table:
[SQL Fiddle Example](http://sqlfiddle.com/#!3/bb719/10/0)
```
DECLARE @STARTYR INT = (SELECT MIN(YEAR([Start Date])) FROM Table1)
DECLARE @ENDYR INT = (SELECT MAX(YEAR([End Date])) FROM Table1)
SELECT [Id]
, @STARTYR + Number AS [Year]
, CASE WHEN YEAR([Start Date]) < @STARTYR + Number
THEN DATEADD(YEAR, @STARTYR - 1900 + Number,0)
ELSE [Start Date] END AS [Start]
, CASE WHEN YEAR([End Date]) > @STARTYR + Number
THEN DATEADD(YEAR, @STARTYR - 1900 + Number + 1,0)
ELSE [End Date] END AS [End]
, DATEDIFF(MONTH,CASE WHEN YEAR([Start Date]) < @STARTYR + Number
THEN DATEADD(YEAR, @STARTYR - 1900 + Number,0)
ELSE [Start Date] END
,CASE WHEN YEAR([End Date]) > @STARTYR + Number
THEN DATEADD(YEAR, @STARTYR - 1900 + Number + 1,0)
ELSE DATEADD(MONTH,DATEDIFF(MONTH,0,DATEADD(MONTH,1,[End Date])),0) END) AS [Months]
, DATEDIFF(MONTH,[Start Date],[End Date]) + 1 [Total Months]
, ([Amount] / (DATEDIFF(MONTH,[Start Date],[End Date]) + 1))
*
DATEDIFF(MONTH,CASE WHEN YEAR([Start Date]) < @STARTYR + Number
THEN DATEADD(YEAR, @STARTYR - 1900 + Number,0)
ELSE [Start Date] END
,CASE WHEN YEAR([End Date]) > @STARTYR + Number
THEN DATEADD(YEAR, @STARTYR - 1900 + Number + 1,0)
ELSE DATEADD(MONTH,DATEDIFF(MONTH,0,DATEADD(MONTH,1,[End Date])),0) END) AS [Proportion]
FROM Numbers
LEFT JOIN Table1 ON YEAR([Start Date]) <= @STARTYR + Number
AND YEAR([End Date]) >= @STARTYR + Number
WHERE Number <= @ENDYR - @STARTYR
```
|
SQL split number over date range
|
[
"",
"sql",
"sql-server",
"t-sql",
"date-range",
""
] |
I have an Oracle table that looks like below:
```
URL SIZE MODDATE MODTIME
AAA 100 24-OCT-14 10:00:00
AAA 100 24-OCT-14 09:00:00
AAA 100 23-OCT-14 08:00:00
BBB 100 24-OCT-14 10:00:00
BBB 100 24-OCT-14 09:00:00
BBB 100 23-OCT-14 08:00:00
CCC 100 24-OCT-14 10:00:00
CCC 100 24-OCT-14 09:00:00
CCC 100 23-OCT-14 08:00:00
```
I need to get the latest row for each URL based on MODDATE & MODTIME. So the result would look like this:
```
URL SIZE MODDATE MODTIME
AAA 100 24-OCT-14 10:00:00
BBB 100 24-OCT-14 10:00:00
CCC 100 24-OCT-14 10:00:00
```
What would the SELECT look like (Oracle)?
Thank you kindly for your help!
|
You can use the `ROW_NUMBER` analytic function:
```
SELECT "URL", "SIZE", "MODDATE", "MODTIME"
FROM
(
SELECT "URL", "SIZE", "MODDATE", "MODTIME",
ROW_NUMBER() OVER ( PARTITION BY "URL" order by "MODDATE" desc, "MODTIME" desc) as seq
FROM Table1
) T
Where T.seq =1
```
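The same greatest-n-per-group pattern works in most engines with window functions. A small sketch with SQLite via Python, using ISO date/time strings so lexicographic order matches chronological order (the Oracle table above uses DD-MON-YY instead):

```python
import sqlite3  # ROW_NUMBER() needs SQLite >= 3.25

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE hits (url TEXT, moddate TEXT, modtime TEXT)")
con.executemany("INSERT INTO hits VALUES (?, ?, ?)", [
    ("AAA", "2014-10-24", "10:00:00"),
    ("AAA", "2014-10-24", "09:00:00"),
    ("AAA", "2014-10-23", "08:00:00"),
    ("BBB", "2014-10-24", "10:00:00"),
    ("BBB", "2014-10-23", "08:00:00"),
])
rows = con.execute("""
    SELECT url, moddate, modtime FROM (
        SELECT url, moddate, modtime,
               ROW_NUMBER() OVER (
                   PARTITION BY url ORDER BY moddate DESC, modtime DESC
               ) AS seq
        FROM hits
    ) WHERE seq = 1
    ORDER BY url
""").fetchall()
# One newest row per URL.
```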
|
Try:
```
SELECT *
FROM table
WHERE MODDATE IN (SELECT max(MODDATE) FROM table)
ORDER BY MODTIME
```
|
Oracle How to select a set of latest entries
|
[
"",
"sql",
"oracle",
"datetime",
"select",
"unique",
""
] |
So I'm using a node module called json-sql
<https://github.com/2do2go/json-sql>
but the end result is
```
sql.query
// insert into users (name, lastname, age, gender) values ($p1, $p2, 24, $p3);
sql.values
// { p1: 'John', p2: 'Snow', p3: 'male' }
```
Is there a quick way to make that SQL readable? I would like to have a bunch of these statements and turn them into a .sql file to be run on a SQL Server later.
|
Unfortunately the library is unable to natively produce executable MS SQL-compliant statements.
A valid MS SQL prepared statement would look like this:
```
EXEC sp_executesql @stmt = N'SELECT @p AS t', @params = N'@p varchar(50)', @p = N'Test'
```
So in this case you'd want to do three things:
1. Use the library's `options` parameter to set the ValuesPrefix to `@` rather than the default `$`
2. Write some sort of method to take the `sql.values` result and generate both the value for `@parameters` above as well as each individual parameter value.
3. Concatenate all the pieces together into a statement resembling the one above.
|
It seems that you can use option `separatedValues: false` to get plain sql-query without placeholders.
For example, in your case:
```
var jsonSql = require('json-sql')({separatedValues: false});
sql.query
// insert into "users" ("name", "lastname", "age", "gender") values (\'John\', \'Snow\', 24, \'male\');
```
Then you just have to replace the characters you don't want (double quotes for example);
I hope that somebody find this useful.
|
SQL Statement Doesn't make sense
|
[
"",
"sql",
"json",
"node.js",
""
] |
I am inserting public holidays for further reference in a project. The holiday date is of DATE type, and it must be of this type for a reason. I am concatenating @Year with it, but it throws an error.
How can I insert the holidays with a dynamic year?
ERROR:
```
Msg 206, Level 16, State 2, Line 3
Operand type clash: int is incompatible with date
Msg 402, Level 16, State 1, Line 6
The data types varchar and date are incompatible in the add operator.
USE [AMS]
GO
Declare @Year date
Set @Year = (Select Year(Getdate()))
INSERT INTO [dbo].[GazettedHolidays]
Select 1, GETDATE(), 'Kashmir Day', '05 Feb' +@Year,'Kashmir Day',1
Union All
Select 1, GETDATE(), 'Pakistan Day', '23 Mar'+@Year,'Pakistan Day',1
Union All
Select 1, GETDATE(), 'Labour Day', '01 May' +@Year,'Labour Day',1
Union All
Select 1, GETDATE(), 'Independence Day', '14 Aug' +@Year,'Independence Day',1
Union All
Select 1, GETDATE(), 'Iqbal day', '09 Nov' +@Year,'Iqbal Day',1
Union All
Select 1, GETDATE(), 'Quaid-e-Azam Day', '25 Dec' +@Year,'Quaid-e-Azam Day',1
GO
```
How ?
|
I think you need to cast the year to a varchar like this:
```
CAST(@Year as VARCHAR(4))
```
So that it gets like this:
```
INSERT INTO [dbo].[GazettedHolidays]
Select 1, GETDATE(), 'Kashmir Day', '05 Feb' +CAST(@Year as VARCHAR(4)),'Kashmir Day',1
...
```
And then, the YEAR function returns an INT, not a DATE. So change this statement from this:
```
Declare @Year date
Set @Year = (Select Year(Getdate()))
```
To this:
```
Declare @Year int
Set @Year = (Select Year(Getdate()))
```
|
```
Declare @Year VARCHAR(10)
Set @Year = (Select Year(Getdate()))
INSERT INTO [dbo].[GazettedHolidays]
Select 1, GETDATE(), 'Kashmir Day', '05 Feb' +@Year,'Kashmir Day',1
Union All
Select 1, GETDATE(), 'Pakistan Day', '23 Mar'+@Year,'Pakistan Day',1
Union All
Select 1, GETDATE(), 'Labour Day', '01 May' +@Year,'Labour Day',1
Union All
Select 1, GETDATE(), 'Independence Day', '14 Aug' +@Year,'Independence Day',1
Union All
Select 1, GETDATE(), 'Iqbal day', '09 Nov' +@Year,'Iqbal Day',1
Union All
Select 1, GETDATE(), 'Quaid-e-Azam Day', '25 Dec' +@Year,'Quaid-e-Azam Day',1
```
|
How to concatnate only year part in Sql?
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have the following table:
```
ID BuyOrSell Total
4 B 10
4 B 11
4 S 13
4 S 29
8 B 20
9 S 23
```
What I am trying to do is get the sum of the B and S rows for each ID, and if there is no B or S for an ID, return a row with 0. The expected output would be:
```
ID BuyOrSell Total
4 B 21
4 S 42
8 B 20
8 S 0
9 S 23
9 B 0
```
I have tried this and it is kind of doing what I am after but not exactly:
```
DECLARE @Temp Table (ID int, BuyOrSell VARCHAR(1), charge Decimal)
INSERT INTO @Temp
SELECT 4, 'B', 10 UNION ALL
SELECT 4, 'B', 11 UNION ALL
SELECT 4, 'S', 13 UNION ALL
SELECT 4, 'S', 29 UNION ALL
SELECT 8, 'B', 20 UNION ALL
SELECT 9, 'S', 23
;With Results AS
(
SELECT ID,
BuyOrSell,
SUM(charge) AS TOTAL
FROM @Temp
Group by ID, BuyOrSell
)
Select t.*,max(
case when BuyOrSell = 'B' then 'Bfound'
end) over (partition by ID) as ref
,max(
case when BuyOrSell = 'S' then 'Sfound'
end) over (partition by ID) as ref
FROM Results t;
```
Thanks
|
Try this:
```
;WITH CTE(ID, BuyOrSell) AS(
SELECT
ID, T.BuyOrSell
FROM @Temp
CROSS JOIN(
SELECT 'B' UNION ALL SELECT 'S'
)T(BuyOrSell)
GROUP BY ID, T.BuyOrSell
)
SELECT
C.ID,
C.BuyOrSell,
Total = ISNULL(SUM(T.charge), 0)
FROM CTE C
LEFT JOIN @Temp T
ON T.ID = C.ID
AND T.BuyOrSell = C.BuyOrSell
GROUP BY C.ID, C.BuyOrSell
ORDER BY C.ID, C.BuyOrSell
```
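The same idea works without a named CTE: cross-join the distinct IDs with the two possible codes to build the full grid, then left-join the data back and coalesce missing sums to 0. A sketch with SQLite via Python (table name made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE trades (id INTEGER, side TEXT, charge INTEGER)")
con.executemany("INSERT INTO trades VALUES (?, ?, ?)", [
    (4, "B", 10), (4, "B", 11), (4, "S", 13), (4, "S", 29),
    (8, "B", 20), (9, "S", 23),
])
rows = con.execute("""
    SELECT i.id, s.side, COALESCE(SUM(t.charge), 0) AS total
    FROM (SELECT DISTINCT id FROM trades) AS i
    CROSS JOIN (SELECT 'B' AS side UNION ALL SELECT 'S') AS s
    LEFT JOIN trades t ON t.id = i.id AND t.side = s.side
    GROUP BY i.id, s.side
    ORDER BY i.id, s.side
""").fetchall()
# Missing (8, 'S') and (9, 'B') combinations show up with total 0.
```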
|
Here is a solution with tricky join:
```
SELECT t1.ID,
v.l as BuyOrSell,
SUM(CASE WHEN t1.BuyOrSell = v.l THEN t1.charge ELSE 0 END) AS Total
FROM @Temp t1
JOIN (VALUES('B'),('S')) v(l)
ON t1.BuyOrSell = CASE WHEN EXISTS(SELECT * FROM @Temp t2
WHERE t2.ID = t1.ID AND t2.BuyOrSell <> t1.BuyOrSell)
THEN v.l ELSE t1.BuyOrSell END
GROUP BY t1.ID, v.l
ORDER BY t1.ID, v.l
```
Output:
```
ID l Total
4 B 21
4 S 42
8 B 20
8 S 0
9 B 0
9 S 23
```
|
SQL - Group by to get sum but also return a row if the sum is 0
|
[
"",
"sql",
"sql-server",
"group-by",
"sql-server-2012",
""
] |
I have a table that looks like this:
```
Id | PersonId | Date | Number | NumberOld
------+------------+------------+----------+-----------
1 | 1 | 2014 1 1 | 1 | 0
2 | 1 | 2014 1 2 | 2 | 1
3 | 1 | 2014 1 3 | 3 | 2
4 | 2 | 2014 1 1 | 1 | 0
5 | 2 | 2014 1 2 | 3 | 2
6 | 2 | 2014 1 3 | 4 | 3
```
What I want is a query that gets me the persons for which the continuity of Number and NumberOld is broken.
So for `PersonId = 1`, everything is alright, so the query shouldn't return person1.
```
1 - 0
2 - 1
3 - 2
```
But for `PersonId = 2` the continuity of the numbers is broken:
```
1 - 0
3 - 2
4 - 3
```
Number 1 of the first record does not correspond to numberold 2 of the second record.
How can I achieve something like this?
|
You can use LAG to look into the previous record. Thus you find the mismatches, and with the PersonIDs found you select those persons' full histories:
```
select *
from mytable
where personid in
(
select personid
from
(
select personid, numberold, lag(number) over (partition by personid order by logdate, number) as numberbefore
from mytable
) lookup
where numberold <> numberbefore
)
order by personid, logdate, number;
```
Here is an SQL fiddle: <http://sqlfiddle.com/#!6/991e7/2>.
|
I think you can do this with a `left join` to find mismatches:
```
select t.person
from table t left join
table tnext
on t.person = tnext.person and t.number = tnext.numberold
where tnext.person is null
group by t.person
having count(*) > 1;
```
The `join` finds all rows where the numbers don't match. However, there will always be at least one (the last one) for a given person. This is filtered out using the `having` clause.
|
TSQL - Query to verify continuity
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I don't think a count will work here. Can someone help me write SQL that identifies which account numbers have multiple agents (two or more)?
```
AGENT_NAME ACCOUNT_NUMBER
Clemons, Tony 123
Cipollo, Michael 123
Jepsen, Sarah 567
Joanos, James 567
McMahon, Brian 890
Novak, Jason 437
Ralph, Melissa 197
Reitwiesner, John 221
Roman, Marlo 123
Rosenzweig, Marcie 890
```
Results should be something like this.
```
ACCOUNT_NUMBER AGENT_NAME
123 Cipollo, Michael
123 Roman, Marlo
123 Clemons, Tony
890 Rosenzweig, Marcie
890 McMahon, Brian
567 Joanos, James
567 Jepsen, Sarah
```
|
You can do this using window functions:
```
select t.account_number, t.agent_name
from (select t.*, min(agent_name) over (partition by account_number) as minan,
max(agent_name) over (partition by account_number) as maxan
from table t
) t
where minan <> maxan;
```
If you know the agent names are never duplicated, you could just do:
```
select t.account_number, t.agent_name
from (select t.*, count(*) over (partition by account_number) as cnt
from table t
) t
where cnt > 1;
```
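The second query's windowed count is straightforward to try in SQLite (>= 3.25) via Python; here is a reduced version of the sample data:

```python
import sqlite3  # COUNT(*) OVER needs SQLite >= 3.25

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE agents (agent_name TEXT, account_number INTEGER)")
con.executemany("INSERT INTO agents VALUES (?, ?)", [
    ("Clemons, Tony", 123), ("Cipollo, Michael", 123),
    ("McMahon, Brian", 890), ("Novak, Jason", 437),
    ("Rosenzweig, Marcie", 890),
])
rows = con.execute("""
    SELECT account_number, agent_name FROM (
        SELECT account_number, agent_name,
               COUNT(*) OVER (PARTITION BY account_number) AS cnt
        FROM agents
    ) WHERE cnt > 1
    ORDER BY account_number, agent_name
""").fetchall()
# Only accounts 123 and 890 survive; 437 has a single agent.
```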
|
Using the COUNT function you can get the result:
```
CREATE TABLE #TEMP
(
AGENT_NAME VARCHAR(100),
ACCOUNT_NUMBER INT
)
INSERT INTO #TEMP
VALUES ('CLEMONS, TONY',123),
('CIPOLLO, MICHAEL',123),
('JEPSEN, SARAH',567),
('JOANOS, JAMES',567),
('MCMAHON, BRIAN',890),
('NOVAK, JASON',437),
('RALPH, MELISSA',197),
('REITWIESNER, JOHN',221),
('ROMAN, MARLO',123),
('ROSENZWEIG, MARCIE',890)
SELECT a.ACCOUNT_NUMBER,a.AGENT_NAME
FROM #TEMP A
JOIN(SELECT COUNT(1) CNT,
ACCOUNT_NUMBER
FROM #TEMP
GROUP BY ACCOUNT_NUMBER) B
ON A.ACCOUNT_NUMBER = B.ACCOUNT_NUMBER
WHERE B.CNT != 1
```
|
sql that identifies which account numbers have multiple agents
|
[
"",
"sql",
"sql-server",
""
] |
I have an auto-generated SQL script to run on 5.6.17-157.2.
It worked fine on 5.5.33-1.17.1.
Every SQL statement shows `\g` at the end. For example
```
CREATE TABLE articoli
(
ID INT,
titolo LONGTEXT,
sottotitolo LONGTEXT,
descrizione LONGTEXT,
note LONGTEXT,
nomeopzione1 LONGTEXT,
nomeopzione2 LONGTEXT,
nomeopzione3 LONGTEXT,
pagina CHAR(100),
sottopagina SMALLINT,
plain_titolo CHAR(200),
plain_sottotitolo CHAR(200),
nomeopzione4 LONGTEXT,
KEY (ID),
KEY (pagina),
KEY (sottopagina)
);\g
```
What changed between the two versions to break query execution? How can I tell 5.6 to accept \g and ignore it?
I can't just change the SQL. It's auto-generated code that must run as the final step of a monstrous software abomination's "daily update" (<https://serverfault.com/questions/458340/euro-character-messed-up-during-ftp-transfer>).
[Update] Let me refine the question: it's not enough to know what \g is; I need to get the queries running.
|
MySQL already does accept `\g`, but it must follow a SQL statement.
The `\g` is basically the same as `;` That is, it is the terminator for a statement and that means send it to the server for parsing and execution.
Your sample shows a create table statement terminated by *both* a semicolon and `\g`. This results in the create table statement running, because it has a semicolon. But then it tries to run another statement terminator without a statement.
Try this:
```
mysql> ;
ERROR:
No query specified
```
Of course there was no query specified, this just shows a semicolon with no query.
It's the same with a line with nothing but `\g`:
```
mysql> \g
ERROR:
No query specified
```
And if you run a real query, and then a redundant terminator of either type, you get something similar. It runs the first query, then fails on the empty query:
```
mysql> select 123; ;
+-----+
| 123 |
+-----+
| 123 |
+-----+
ERROR:
No query specified
mysql> select 123; \g
+-----+
| 123 |
+-----+
| 123 |
+-----+
ERROR:
No query specified
```
I don't know what you mean when you say this code is generated and you can't change it. You'll have to, because what you've got won't work.
I would suggest you strip out the `\g` from your file before trying to run it. Here's an example of a file containing the bad empty-query pattern, and using sed to remove the redundant `\g`:
```
$ cat > bad.sql
select 123; \g
$ sed -e 's/\\g//g' bad.sql
select 123;
```
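If sed isn't available (for example on Windows), the same stripping can be done with a short Python script; this mirrors the sed command above:

```python
import re

script = "select 123; \\g\ncreate table t (x int); \\g\n"
# Drop every literal "\g" token, just like sed -e 's/\\g//g'
cleaned = re.sub(r"\\g", "", script)
# cleaned == "select 123; \ncreate table t (x int); \n"
```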
|
<http://dev.mysql.com/doc/refman/5.1/en/mysql-commands.html>
> go - (\g) - Send command to mysql server.
|
What does `\g` mean at the end of a MySQL statement? And how do I fix queries not running?
|
[
"",
"sql",
"mysql",
"database",
"mysqlcommand",
"mysql-command-line-client",
""
] |
I'm running contests on my website. Every contest can have multiple entries. I want to retrieve the best 3 entries, or more in the case of a draw, based on the score. The score is calculated as the sum of the scores of all votes.
Here is the SQLFiddle: <http://sqlfiddle.com/#!9/c2480>
The table ENTRY is as follows:
```
id contest_id
1 1
2 1
3 1
4 1
5 1
6 1
7 1
8 1
9 1
```
The table ENTRY\_VOTE is as follows:
```
id entry_id score
-- entry 1 has 20 votes (5+10+5)
1 1 5
2 1 10
3 1 5
-- entry 2 has 20 votes (10+10)
4 2 10
5 2 10
-- entry 3 has 25 votes (5+5+5+10)
6 3 5
7 3 5
8 3 5
9 3 10
-- entry 4 has 10 votes (10)
10 4 10
-- entry 5 has 25 votes (10+10+5)
11 5 10
12 5 10
13 5 5
-- entry 6 has 5 votes (5)
14 6 5
-- entry 7 has 50 votes (10+10+10+10+10)
15 7 10
16 7 10
17 7 10
18 7 10
19 7 10
-- entry 8 has 20 votes (10+10)
20 8 10
21 8 10
-- entry 9 has 5 votes (5)
22 9 5
```
The result should be (with draws):
```
id (entry_id) contest_id score
7 1 50
3 1 25
5 1 25
1 1 20
2 1 20
8 1 20
```
I'm trying the follow query:
```
select * from (
select entry.*, sum(score) as final_score from entry join entry_vote on
entry.entry_id = entry_vote.entry_id group by entry_id
) as entries
where final_score in (
select distinct(final_score) from (
select entry_id, sum(score) as final_score from entry_vote group by entry_id
) as final_scores order by final_score desc limit 3;
)
```
The first subquery returns all entries with their entry\_vote scores summed.
The second subquery returns the top 3 distinct summed scores (50, 25 and 20).
This query returns an error. What is wrong, and how can I solve it?
|
Here is the SQLFiddle link: <http://sqlfiddle.com/#!9/c2480/50/0>
I managed to solve the query by using joins instead of subquery.
The query I posted in the question was getting an error because the second subquery ended with a ";":
```
select distinct(final_score) from (
select entry_id, sum(score) as final_score from entry_vote group by entry_id
) as final_scores order by final_score desc limit 3;
```
After fixing this, the next error is the LIMIT inside the subquery; MySQL does not support LIMIT in an IN subquery.
The final query is:
```
select * from (
select entry.*, sum(score) as score from entry
join entry_vote on entry.entry_id = entry_vote.entry_id where entry.contest_id = <CONTEST_ID> group by entry_id
) as entry_with_score join (
select distinct(score) as score from (
select sum(score) as score from entry_vote
join entry on entry_vote.entry_id = entry.id
join contest on audition.contest_id = contest.id where contest.id = <CONTEST_ID> group by entry_id
) as score_by_entry order by score desc limit 3
) as top_score on entry_with_score.score = top_score.score;
```
|
```
SELECT entry_id,total
FROM
( SELECT x.*
, IF(@prev=total,@i,@i:=@i+1) i,@prev:=total prev
FROM
( SELECT entry_id
, SUM(score) total
FROM entry_vote
GROUP
BY entry_id
) x
, (SELECT @i:=0,@prev:=NULL)vars
ORDER
BY total DESC
) a
WHERE i <=3;
```
|
Mysql query for winners in the contest including draws entries having score in another table
|
[
"",
"mysql",
"sql",
""
] |
Need to find columns that match the following condition.
* Must start with "Foo"
* Follow by a number of any length
* Follow by a space or "-"
* Follow by a number of any length
* End or follow by A,B,C, PE, a period, or a space
So the following would match
* Foo42004 45058
* Foo42004-45058
* Foo42004 45058A
* Foo42004 45058PE
* Foo42004 45058B.v34
* Foo42004-45058C bar
* Foo42004-45058C bar
The select statement would be something like the following, but I don't think it works
```
select mycolumn
from mytable
where mycolumn like 'Foo[0-9]{0,15}[ -][0-9]{0,15}[A|B|C|PE]%'
```
|
T-SQL's LIKE has no notion of the {0,15} quantifier from regex; the braces are treated as literal characters. So the part of your pattern Foo[0-9]{0,15} would match the literal strings Foo0{0,15}, Foo1{0,15}, Foo2{0,15}, ... Foo9{0,15}, etc. As BlorgBeard said, LIKE is limited.
|
The fastest way to implement this would be to use a CLR to implement a real regular expression function. There are many free options available with a google search for "sql clr regex" as well as instructions on how to roll your own.
Your other option would be to build your own function, using T-SQL [string parsing functions](http://msdn.microsoft.com/en-us/library/ms181984.aspx) to verify your format.
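For illustration, the regular expression such a CLR function could wrap might look like the sketch below. Python is used here only to exercise the pattern; the expression itself is the point, and the sample strings come from the question:

```python
import re

# Foo, digits, space or '-', digits, then end-of-string or one of
# A / B / C / PE / period / space (anything may follow that terminator)
PATTERN = re.compile(r'Foo[0-9]+[ -][0-9]+($|[ABC]|PE|\.| )')

should_match = [
    "Foo42004 45058", "Foo42004-45058", "Foo42004 45058A",
    "Foo42004 45058PE", "Foo42004 45058B.v34", "Foo42004-45058C bar",
]
should_not_match = ["Foo42004_45058", "Bar42004 45058", "Foo42004 45058D"]

matched = [bool(PATTERN.match(s)) for s in should_match]
rejected = [bool(PATTERN.match(s)) for s in should_not_match]
print(matched, rejected)
```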
|
TSQL pattern matching with like
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I am receiving this error when trying to execute the query below. Any ideas or suggestions?
Error:
> Multiple columns are specified in an aggregated expression containing an outer reference. If an expression being aggregated contains an outer reference, then that outer reference must be the only column referenced in the expression.
```
SELECT TestInstances.pkTestInstanceID AS 'pkTestInstanceID',
bands.pkPerformanceLevelReportBandID AS 'BandID',
bands.StackPosition AS 'StackPosition',
(SELECT TOP 100 PERCENT SUM(CASE WHEN bands.StackPosition = b.StackPosition THEN 1 ELSE 0 END) * 100/ CASE WHEN COUNT(StudentScores_Subject.pkStudentScoreID) = 0 THEN 1 ELSE COUNT(StudentScores_Subject.pkStudentScoreID) END
FROM PerformanceLevelReportBands b
WHERE b.fkPerformanceLevelReportID = @intPerfLevelReportId
ORDER BY SUM(CASE WHEN bands.StackPosition = b.StackPosition THEN 1 ELSE 0 END) * 100/ CASE WHEN COUNT(StudentScores_Subject.pkStudentScoreID) = 0 THEN 1 ELSE COUNT(StudentScores_Subject.pkStudentScoreID) END) AS 'Percent',
COUNT(StudentScores_Subject.pkStudentScoreID) AS 'Count'
FROM StudentScores_Subject
INNER JOIN StudentTests ON StudentScores_Subject.fkStudentTestID = StudentTests.pkStudentTestID
INNER JOIN TestInstances ON TestInstances.pkTestInstanceID = StudentTests.fkTestInstanceID
INNER JOIN CAHSEE_TestPeriods ON CAHSEE_TestPeriods.pkTestPeriodID = TestInstances.fkTestPeriodID
INNER JOIN PerformanceLevelReportBands bands ON bands.fkPerformanceLevelReportID = @intPerfLevelReportId
LEFT JOIN MMARS_Web_TestInfo_California.dbo.PerfLevelReportBandCutScores cutScores ON cutScores.fkPerformanceLevelReportBandID = bands.pkPerformanceLevelReportBandID
AND cutScores.fkGradeID = @intGradeId
AND cutScores.fkTestSubjectID IN (SELECT id FROM @tempSubs)
INNER JOIN PerfLevelReportBandComponents bandComponents ON bandComponents.fkPerformanceLevelReportBandID = bands.pkPerformanceLevelReportBandID
AND((bandComponents.ScoreValue = StudentScores_Subject.ScoreValue) OR
((CAST(StudentScores_Subject.ScoreValue AS INT) BETWEEN bandComponents.minScore and bandComponents.maxScore)
OR
(CAST(StudentScores_Subject.ScoreValue AS INT) BETWEEN cutScores.minScore and cutScores.maxScore)))
RIGHT JOIN MM_SchoolYears ON MM_SchoolYears.pkSchoolYearID = TestInstances.fkSchoolYearID
WHERE MM_SchoolYears.pkSchoolYearID IN (SELECT number FROM itot(@strYearIds, N','))
AND StudentScores_Subject.fkStudentTestID IN (SELECT id FROM @tempTests)
AND StudentScores_Subject.fkScoreTypeID = bandComponents.fkScoreTypeID
AND StudentScores_Subject.fkTest_SubjectID IN (SELECT id FROM @tempSubs)
GROUP BY TestInstances.pkTestInstanceID, bands.pkPerformanceLevelReportBandID, bands.StackPosition
ORDER BY TestInstances.pkTestInstanceID, bands.pkPerformanceLevelReportBandID, bands.StackPosition
```
|
The problem is here: you can't combine an outer reference and an inner reference in an aggregate function.
```
(SELECT TOP 100 PERCENT SUM(CASE WHEN bands.StackPosition = b.StackPosition THEN 1 ELSE 0 END) * 100/ CASE
WHEN COUNT(StudentScores_Subject.pkStudentScoreID) = 0 THEN 1
ELSE COUNT(StudentScores_Subject.pkStudentScoreID)
END
FROM PerformanceLevelReportBands b
WHERE b.fkPerformanceLevelReportID = @intPerfLevelReportId
ORDER BY SUM(CASE WHEN bands.StackPosition = b.StackPosition THEN 1 ELSE 0 END) * 100/ CASE
WHEN COUNT(StudentScores_Subject.pkStudentScoreID) = 0 THEN 1
ELSE COUNT(StudentScores_Subject.pkStudentScoreID)
END) AS 'Percent'
```
So change it to
```
(SELECT TOP 100 PERCENT SUM(CASE WHEN bb.StackPosition = b.StackPosition THEN 1 ELSE 0 END) * 100/ CASE
WHEN COUNT(StudentScores_Subject.pkStudentScoreID) = 0 THEN 1
ELSE COUNT(StudentScores_Subject.pkStudentScoreID)
END
FROM PerformanceLevelReportBands b JOIN PerformanceLevelReportBands bb
ON bb.fkPerformanceLevelReportID =bands.fkPerformanceLevelReportID
AND b.fkPerformanceLevelReportID =bb.fkPerformanceLevelReportID
WHERE b.fkPerformanceLevelReportID = @intPerfLevelReportId
ORDER BY SUM(CASE WHEN bb.StackPosition = b.StackPosition THEN 1 ELSE 0 END) * 100/ CASE
WHEN COUNT(StudentScores_Subject.pkStudentScoreID) = 0 THEN 1
ELSE COUNT(StudentScores_Subject.pkStudentScoreID)
END) AS 'Percent'
```
Here is a more [thorough explanation](https://www.itprotoday.com/aggregates-outer-reference).
|
I'd recommend commenting out `bandComponents`, then `cutScores`, rerunning after removing each component and seeing where the query fails. Once you figure out where it's failing, you can fix it.
Also, could be this line, the query in your Percent column.
```
ORDER BY SUM(CASE WHEN bands.StackPosition = b.StackPosition THEN 1 ELSE 0 END) * 100
```
I tried to organize your query a bit better to make it more legible.
```
SELECT
TestInstances.pkTestInstanceID AS 'pkTestInstanceID'
, bands.pkPerformanceLevelReportBandID AS 'BandID'
, bands.StackPosition AS 'StackPosition'
, (
SELECT TOP 100 PERCENT
SUM( CASE
WHEN bands.StackPosition = b.StackPosition
THEN 1
ELSE 0
END) * 100 /
CASE
WHEN COUNT(StudentScores_Subject.pkStudentScoreID) = 0
THEN 1
ELSE COUNT(StudentScores_Subject.pkStudentScoreID)
END
FROM PerformanceLevelReportBands b
WHERE b.fkPerformanceLevelReportID = @intPerfLevelReportId
ORDER BY SUM(CASE WHEN bands.StackPosition = b.StackPosition THEN 1 ELSE 0 END) * 100
/
CASE
WHEN COUNT(StudentScores_Subject.pkStudentScoreID) = 0
THEN 1
ELSE COUNT(StudentScores_Subject.pkStudentScoreID)
END
) AS 'Percent'
, COUNT(StudentScores_Subject.pkStudentScoreID) AS 'Count'
FROM
StudentScores_Subject
INNER JOIN
StudentTests ON
StudentScores_Subject.fkStudentTestID = StudentTests.pkStudentTestID
INNER JOIN
TestInstances ON
TestInstances.pkTestInstanceID = StudentTests.fkTestInstanceID
INNER JOIN
CAHSEE_TestPeriods ON
CAHSEE_TestPeriods.pkTestPeriodID = TestInstances.fkTestPeriodID
INNER JOIN
PerformanceLevelReportBands bands ON
bands.fkPerformanceLevelReportID = @intPerfLevelReportId
LEFT JOIN
MMARS_Web_TestInfo_California.dbo.PerfLevelReportBandCutScores cutScores ON
cutScores.fkPerformanceLevelReportBandID = bands.pkPerformanceLevelReportBandID
AND cutScores.fkGradeID = @intGradeId
AND cutScores.fkTestSubjectID IN (SELECT id FROM @tempSubs)
INNER JOIN
PerfLevelReportBandComponents bandComponents ON
bandComponents.fkPerformanceLevelReportBandID = bands.pkPerformanceLevelReportBandID
AND(
(bandComponents.ScoreValue = StudentScores_Subject.ScoreValue) OR
(
(CAST(StudentScores_Subject.ScoreValue AS INT) BETWEEN bandComponents.minScore and bandComponents.maxScore) OR
(CAST(StudentScores_Subject.ScoreValue AS INT) BETWEEN cutScores.minScore and cutScores.maxScore)
)
)
RIGHT JOIN
MM_SchoolYears ON
MM_SchoolYears.pkSchoolYearID = TestInstances.fkSchoolYearID
WHERE
MM_SchoolYears.pkSchoolYearID IN (SELECT number FROM itot(@strYearIds, N','))
AND StudentScores_Subject.fkStudentTestID IN (SELECT id FROM @tempTests)
AND StudentScores_Subject.fkScoreTypeID = bandComponents.fkScoreTypeID
AND StudentScores_Subject.fkTest_SubjectID IN (SELECT id FROM @tempSubs)
GROUP BY TestInstances.pkTestInstanceID, bands.pkPerformanceLevelReportBandID, bands.StackPosition
ORDER BY TestInstances.pkTestInstanceID, bands.pkPerformanceLevelReportBandID, bands.StackPosition
```
|
Error: "Multiple columns are specified in an aggregated expression containing an outer reference."
|
[
"",
"sql",
"sql-server",
"t-sql",
"aggregation",
""
] |
I normally work with Ruby on Rails and write most of my DB queries via the Ruby adapter interface.
This time, however, I'm actually creating a DB view and need to write the SQL itself so that this view can be queried (working with a Legacy system).
I've tried to make these examples simpler than the system yet still solve the particular issue.
In pseudocode, here are the relationships:
* `parents has_many children belongs_to parents`
* `children has_many pets belongs_to children`
* `pets belongs_to pet_types`
* `pet_types has_many pets`
This is accomplished via standard foreign\_key relationships (ie, `children` table has a `parent_id` column).
**Goal**
What I'm trying to accomplish is list ALL parents and the last child's (via `created_at`) last pet cat (via `created_at`). If the `parent` does not have any children, `null` out the rest of the fields.
My attempt at the SQL was basically a bunch of `LEFT OUTER JOINs` that wasn't filtering properly.
```
-------------------------------------------------------------
| id | parent_name | child_name | pet_name | pet_type_value |
-------------------------------------------------------------
| 1 | Bob | Jeremy | Wildfire | cat |
-------------------------------------------------------------
...more records...
```
**Tables**
```
parents
--------------------------
| id | name | created_at |
--------------------------
| 1 | Bob | 2014-10... |
| 2 | John | 2014-10... |
| 3 | Suzy | 2014-10... |
--------------------------
children
----------------------------------------
| id | parent_id | name | created_at |
----------------------------------------
| 1 | 1 | Jeremy | 2014-10... |
| 2 | 1 | Katy | 2014-10... |
| 3 | 2 | Garet | 2014-10... |
----------------------------------------
pets
-------------------------------------------------------
| id | child_id | name | pet_type_id | created_at |
-------------------------------------------------------
| 1 | 1 | Wildfire | 1 | 2014-10... |
| 2 | 1 | Ninja | 1 | 2014-10... |
| 3 | 2 | Grumpy | 2 | 2014-10... |
-------------------------------------------------------
pet_types
--------------
| id | value |
--------------
| 1 | cat |
| 2 | dog |
--------------
```
**Original Attempt**
Here's what I originally came up with that didn't work...I am not a SQL developer/coder, as you can obviously tell. Note that I've cleaned it up a bit from the original tables/columns and converted it to the example tables:
```
SELECT
parents.id as id,
parents.name as name,
parents.created_at as created_at,
pet_types.value as pet_type_value,
children.id as child_id,
children.name as child_name
FROM
parents
LEFT OUTER JOIN (
SELECT
pet_types.value as pet_type_value,
children.parent_id as parent_id
FROM children
LEFT OUTER JOIN pets
ON pets.child_id = children.id
LEFT OUTER JOIN pet_types
ON (
pets.pet_type_id = pet_types.id
AND pet_types.name = 'cat'
)
) AS children
ON (children.parent_id = parents.id)
WHERE
children.parent_id = parents.id
ORDER BY parents.id
```
|
You could try an Outer Apply, something like below
```
select * from parents par
outer apply
( select top 1 child.id as child_id, child.name as child_name,
         pet.pet_name, pet.pet_type_value
  from children child
  outer apply
  ( select top 1 pets.name as pet_name, pet_types.value as pet_type_value
    from pets
    join pet_types on pets.pet_type_id = pet_types.id
    where child.id = pets.child_id and pet_types.value = 'cat'
    order by pets.created_at desc) pet
  where par.id = child.parent_id
  order by child.created_at desc) child
```
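If OUTER APPLY is unavailable (it is SQL Server/Oracle specific), window functions give a portable equivalent: ROW_NUMBER picks the newest child per parent, then that child's newest cat. A sketch with SQLite through Python; the pet rows are adapted from the question's data (Katy given the cats) so the result is non-trivial:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE parents   (id INTEGER, name TEXT, created_at TEXT);
CREATE TABLE children  (id INTEGER, parent_id INTEGER, name TEXT, created_at TEXT);
CREATE TABLE pets      (id INTEGER, child_id INTEGER, name TEXT,
                        pet_type_id INTEGER, created_at TEXT);
CREATE TABLE pet_types (id INTEGER, value TEXT);
INSERT INTO parents   VALUES (1,'Bob','2014-10-01'),(2,'John','2014-10-01'),(3,'Suzy','2014-10-01');
INSERT INTO children  VALUES (1,1,'Jeremy','2014-10-01'),(2,1,'Katy','2014-10-02'),(3,2,'Garet','2014-10-03');
INSERT INTO pets      VALUES (1,2,'Wildfire',1,'2014-10-01'),(2,2,'Ninja',1,'2014-10-05'),(3,2,'Grumpy',2,'2014-10-06');
INSERT INTO pet_types VALUES (1,'cat'),(2,'dog');
""")

rows = conn.execute("""
WITH last_child AS (
  SELECT c.*, ROW_NUMBER() OVER (PARTITION BY c.parent_id
                                 ORDER BY c.created_at DESC) AS rn
  FROM children c
),
last_cat AS (
  SELECT p.*, ROW_NUMBER() OVER (PARTITION BY p.child_id
                                 ORDER BY p.created_at DESC) AS rn
  FROM pets p
  JOIN pet_types t ON p.pet_type_id = t.id AND t.value = 'cat'
)
SELECT par.id, par.name, c.name, pe.name
FROM parents par
LEFT JOIN last_child c ON c.parent_id = par.id AND c.rn = 1
LEFT JOIN last_cat  pe ON pe.child_id = c.id  AND pe.rn = 1
ORDER BY par.id
""").fetchall()
print(rows)
```

Parents with no children (Suzy) and children with no cats (Garet) get NULLs, as the question asks.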
|
Try this:
```
SELECT
t0.ParentName,
t0.ChildName,
t0.PetName,
t0.PetValue
FROM
(SELECT
t1.name as ParentName,
t2.name as ChildName,
t3.name as PetName,
t4.value as PetTypeValue,
MAX(t2.created_at) as MaxDateChild,
MAX(t3.created_at) as MaxDatePet
FROM
Parents t1
LEFT OUTER JOIN
children t2
ON t1.id = t2.parent_id
LEFT OUTER JOIN
pets t3
ON t2.id = t3.child_id
LEFT OUTER JOIN
pet_types t4
ON t3.pet_type_id= t4.id
GROUP BY
t1.name,
t2.name,
t3.name,
t4.value
) t0
```
|
MS SQL Server: complex JOIN query among several tables
|
[
"",
"sql",
"sql-server",
"join",
""
] |
I am trying to define a database schema to store a list of "objects" in a table.
The main problem is that I need to have a list of geometric shapes and other kinds of markers in a table, in order to draw them on to a google map.
For example, a circle will have only x,y and radius. A polyline will have a set of points, and a square will have...etc etc...
My first thought was to create 2 related tables like:
**object**
* id
* Name
**object\_details**
* id
* object\_id (foreign from object)
* attribute\_type ( enum cartesian\_point, size etc....)
* numeric\_attribute\_valueA
* numeric\_attribute\_valueB
* datetime\_attribute\_value
* etc...
But this is a really ugly and inefficient implementation, because I may need even more attribute data types (like text or time) and this would complicate the schema, leaving mostly empty fields in the object\_details rows.
What would be a good/clean solution for this?
### Update:
Reading this question after almost 5 years I'd strongly suggest to consider any Schema-Less database for this kind of problems.
|
You'll have to pack the data in some format, store it, and unpack it later for use. For example, JSON, which is supported by both PHP and JS. For example in PHP:
```
$shape = array("kind"=>"circle", "cx"=>45,"cy"=>90,"r"=>20);
$json = json_encode($shape, JSON_NUMERIC_CHECK); // NUMERIC_CHECK doesn't treat numbers as strings
```
SQL:
```
$query = "INSERT into SHAPES set `shape`=>'$json'";
```
You can get back to a PHP array with "json\_decode();"
```
$shape = json_decode($json);
```
Also, this format makes it easy to use with ajax and jquery, as your server can deliver the JSON contents directly:
```
$.getJSON("server.php",function(response) {
console.log("Kind is " + response.kind + ", radius is " + response.r);
});
```
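The same pack/unpack round trip works in Python with SQLite; a sketch (the table name and columns are made up) where the shape goes in through `json.dumps` and comes back through `json.loads` unchanged:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE shapes (id INTEGER PRIMARY KEY, data TEXT)")

shape = {"kind": "circle", "cx": 45, "cy": 90, "r": 20}
# bind the JSON as a parameter rather than interpolating it into the SQL
conn.execute("INSERT INTO shapes (data) VALUES (?)", (json.dumps(shape),))

row = conn.execute("SELECT data FROM shapes WHERE id = 1").fetchone()
restored = json.loads(row[0])
print(restored)
```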
|
I 'd suggest to have a look at the [spatial extensions for MySQL](http://dev.mysql.com/doc/refman/5.0/en/spatial-extensions.html) and
[POSTGIS](http://postgis.net/) extension for postgresql which both support storing multiple types of geometry in a spatial columns.
|
SQL Store different types of objects in a single table
|
[
"",
"mysql",
"sql",
""
] |
I have an oracle database table with a lot of columns. I'd like to count the number of fully unique rows. The only thing I could find is:
```
SELECT COUNT(DISTINCT col_name) FROM table;
```
This, however, would require me to list all the columns, and I haven't been able to come up with syntax that will do that for me. I'm guessing the reason is that such a query would perform poorly? Is there a recommended way of doing this?
|
How about
```
SELECT COUNT(*)
FROM (SELECT DISTINCT * FROM Table)
```
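A quick sanity check of that pattern on a toy table, using SQLite through Python (the derived-table alias is optional in Oracle but kept here for portability):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (a INTEGER, b TEXT);
INSERT INTO t VALUES (1,'x'),(1,'x'),(1,'y'),(2,'x');
""")

# 4 rows in total, but (1,'x') appears twice, so 3 fully unique rows
total_rows = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
unique_rows = conn.execute(
    "SELECT COUNT(*) FROM (SELECT DISTINCT * FROM t) d"
).fetchone()[0]
print(total_rows, unique_rows)
```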
|
It depends on what you are trying to accomplish.
To get a count of the distinct rows by specific column, so that you know what data exists, and how many of that distinct data there are:
```
SELECT DISTINCT
A_CODE, COUNT(*)
FROM MY_ARCHV
GROUP BY A_CODE
```
--This informs me there are 93 unique codes, and how many of each of those codes there are.
Another method
--How to count how many of each value exists in an Oracle table:
```
select A_CDE,                    -- the value you need to count
       count(*) as numInstances  -- how many of each value
from A_ARCH                      -- the table where it resides
group by A_CDE                   -- the grouping column
```
Either way, you get something that looks like this:
```
A_CODE Count(*)
1603 32
1600 2
1605 14
```
|
How to count unique rows in Oracle
|
[
"",
"sql",
"oracle",
""
] |
I am running following sql.
```
SELECT value FROM ppr_price_factor WHERE product_factor_id=3;
```
which returns values like this
```
17
25
35
45
```
I need to display my results as something like below.
```
value value Range
17 0-17
25 18-25
35 26-35
45 36-45
```
Can I do this using sql?
|
Maybe that helps:
```
SELECT value,
TO_CHAR(1 + LAG(value, 1, -1) OVER (ORDER by value)) || '-' || TO_CHAR(value) AS value_range
FROM your_table
```
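The LAG approach is not Oracle-specific; any engine with window functions behaves the same way. A sketch with SQLite 3.25+ through Python, where the LAG default of -1 makes the first range start at -1 + 1 = 0:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ppr_price_factor (value INTEGER);
INSERT INTO ppr_price_factor VALUES (17),(25),(35),(45);
""")

# each range starts one past the previous value; -1 is the seed for row 1
rows = conn.execute("""
SELECT value,
       CAST(LAG(value, 1, -1) OVER (ORDER BY value) + 1 AS TEXT)
           || '-' || CAST(value AS TEXT) AS value_range
FROM ppr_price_factor
ORDER BY value
""").fetchall()
print(rows)
```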
|
I would use [case](http://dev.mysql.com/doc/refman/5.0/en/case.html) statement for the respective problem.
You can use the following sql as a reference
```
SELECT value
,CASE
WHEN value >= 0
AND value <= 17
THEN '0-17'
WHEN value >= 18
AND value <= 25
THEN '18-25'
WHEN value >= 26
AND value <= 35
THEN '26-35'
WHEN value >= 36
AND value <= 45
THEN '36-45'
ELSE '46-'
END AS valueRange
FROM table1
```
[sql fiddle link](http://sqlfiddle.com/#!2/59e121/4)
|
break into ranges using sql
|
[
"",
"sql",
"oracle",
""
] |
I have a table with the following columns in a MySQL database.
```
Chapters
Columns:
id int(11) AI PK
title varchar(100)
text varchar(10000)
created datetime
revision int(11)
book_id int(11)
```
Each book (book\_id) can have multiple chapters. Each chapter can have multiple revisions. I'm trying to get a query that returns the most recent revision of each chapter for each book. I've done this but it isn't doing what I want.
```
select max(id), title, text, min(created), max(revision)
from chapters
group by book_id, title;
```
That gets me the max revision number, proper id of the revision and when the chapter was originally created. However the text for the chapter isn't the text of the latest revision. I can see why that is but I don't know how to fix it to get what I want. Any help would be greatly appreciated.
Based on the answers below I got the idea for the following query.
```
select id, title, text, revision, created, book_id
from chapters
where id in (select max(id)
from chapters
group by book_id, title)
```
Now this gets the information I want except for 1 thing. The created field in this query is the date that the latest revision for that chapter was created. That's not a bad piece of information to have and I don't mind keeping it. However, I also need the created date for the earliest revision for each chapter.
Let's say I have the following data.
```
id, title, text, revision, created, book_id
-------------------------------------------------
1 | Chpt 1 | Blah | 1 | 23/2/2014 | 1
-------------------------------------------------
2 | Chpt 2 | Blah2 | 1 | 24/2/2014 | 1
-------------------------------------------------
3 | Chpt 1 | Blah3 | 2 | 25/2/2014 | 1
-------------------------------------------------
4 | Chpt 1 | Blah4 | 1 | 26/2/2014 | 2
-------------------------------------------------
5 | Chpt 3 | Blah5 | 1 | 27/2/2014 | 1
-------------------------------------------------
6 | Chpt 1 | Blah6 | 3 | 28/2/2014 | 1
```
After running the query I'd like the follow data.
```
id, title, text, revision, created, rev_date, book_id
------------------------------------------------------------
6 | Chpt 1 | Blah6 | 3 | 23/2/2014 | 28/2/2014 | 1
------------------------------------------------------------
2 | Chpt 2 | Blah2 | 1 | 24/2/2014 | 24/2/2014 | 1
------------------------------------------------------------
5 | Chpt 3 | Blah5 | 1 | 27/2/2014 | 27/2/2014 | 1
------------------------------------------------------------
4 | Chpt 1 | Blah4 | 1 | 26/2/2014 | 26/2/2014 | 2
------------------------------------------------------------
```
Andrew
**Adding some clarification.**
|
Because it's mysql, you can do it thus;
```
select * from (
select * from chapters
order by revision desc) x
group by book_id, title
```
This non-standard grouping functionality is a MySQL-only trick, and the documentation says the actual row you get for each group is non-deterministic. In practice, older MySQL versions tend to return the *first* row encountered per group, so sorting first and then grouping (as I have) usually returns the most recent revision of each chapter for each book. Be aware, though, that this behaviour is not guaranteed, and MySQL 5.7+ rejects such queries by default under ONLY_FULL_GROUP_BY.
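To also carry the earliest created date the question asks for, a portable variant joins back on MAX(id) per (book_id, title) group while taking MIN(created) in the same aggregate. Like the asker's own subquery, this assumes a higher id means a later revision. A sketch with SQLite through Python against the sample rows (dates rewritten as ISO strings so they sort correctly):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE chapters (id INTEGER, title TEXT, text TEXT,
                       revision INTEGER, created TEXT, book_id INTEGER);
INSERT INTO chapters VALUES
 (1,'Chpt 1','Blah', 1,'2014-02-23',1),
 (2,'Chpt 2','Blah2',1,'2014-02-24',1),
 (3,'Chpt 1','Blah3',2,'2014-02-25',1),
 (4,'Chpt 1','Blah4',1,'2014-02-26',2),
 (5,'Chpt 3','Blah5',1,'2014-02-27',1),
 (6,'Chpt 1','Blah6',3,'2014-02-28',1);
""")

# f supplies the latest id and the earliest created per chapter;
# joining back on the id pulls the latest revision's text and date
rows = conn.execute("""
SELECT c.id, c.title, c.text, c.revision,
       f.created, c.created AS rev_date, c.book_id
FROM chapters c
JOIN (SELECT book_id, title, MAX(id) AS max_id, MIN(created) AS created
      FROM chapters GROUP BY book_id, title) f
  ON c.id = f.max_id
ORDER BY c.book_id, c.title
""").fetchall()
print(rows)
```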
|
Something like this should work. Effectively we are grabbing the top 1 result from the data ordered by the criteria you mentioned.
```
select id, title, text, created, revision
from chapters
order by id desc, created asc, revision desc
limit 1
```
|
Need to get most recent chapter within book
|
[
"",
"mysql",
"sql",
"greatest-n-per-group",
""
] |
I recently came across a few DECODE queries.
I found the DECODE function somewhat confusing; here are the queries:
```
SELECT promo_id,
DECODE(NVL(promo_cost,0),promo_cost, promo_cost * 0.25, 100) "Discount"
FROM promotions;
SELECT promo_id,
DECODE(promo_cost,10000, DECODE(promo_category, 'G1', promo_cost *.25, NULL), NULL) "Catcost";
```
I searched the internet and read some articles, but it seems DECODE behaves a little differently in every query.
Could anyone please explain what these two queries are doing?
Thank you very much.
Note: this is not homework. I am a musician by profession and recently took up C and SQL because they are so interesting.
|
The syntax for `DECODE` function is `DECODE( expression , search , result [, search , result]... [, default] )`.
In your first query :
`DECODE(NVL(promo_cost,0),promo_cost, promo_cost * 0.25, 100)`
The following is the interpretation :
```
expression = NVL(promo_cost,0)
search = promo_cost
result = promo_cost * 0.25
default = 100
```
So, it means,
```
IF NVL(promo_cost,0) = promo_cost
THEN
output = promo_cost * 0.25
ELSE
output = 100
END
```
The same could be interpreted using `CASE` as well :
```
CASE
WHEN NVL(promo_cost,0) = promo_cost
THEN
promo_cost * 0.25
ELSE
100
END
```
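Since DECODE is Oracle-only, the CASE translation above is what you would run elsewhere. A sketch with SQLite through Python, using COALESCE in place of NVL and made-up promotions data; only a NULL promo_cost falls through to the default of 100:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE promotions (promo_id INTEGER, promo_cost REAL);
INSERT INTO promotions VALUES (1, 100), (2, NULL);
""")

# COALESCE(NULL, 0) = NULL evaluates to NULL (not true), so the
# NULL-cost row takes the ELSE branch, exactly like Oracle's DECODE
rows = conn.execute("""
SELECT promo_id,
       CASE WHEN COALESCE(promo_cost, 0) = promo_cost
            THEN promo_cost * 0.25
            ELSE 100
       END AS discount
FROM promotions
ORDER BY promo_id
""").fetchall()
print(rows)
```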
|
```
DECODE(NVL(promo_cost,0),promo_cost, promo_cost * 0.25, 100) "Discount"
```
= CASE WHEN NVL(promo\_cost,0) = promo\_cost THEN promo\_cost \* 0.25 ELSE 100 END
```
DECODE(promo_cost,10000, DECODE(promo_category, 'G1', promo_cost *.25, NULL), NULL)
```
= CASE WHEN promo\_cost = 10000 THEN DECODE(promo\_category, 'G1', promo\_cost \*.25, NULL) ELSE NULL END
```
DECODE(promo_category, 'G1', promo_cost *.25, NULL)
```
= CASE WHEN promo\_category = 'G1' THEN promo\_cost \*.25 ELSE NULL END
DECODE is similar to CASE, but usually more compact. However, CASE may have 2 forms:
1. CASE exp1 WHEN exp2 THEN res1 ELSE res2 END (if exp1 = exp2 then res1 else res2)
2. CASE WHEN conditon1 THEN res1 ELSE res2 END (if condition1 then res1 else res2)
DECODE is the first form of CASE
In addition,
- you can compare NULL with DECODE: decode(col, NULL, 'undefined');
- types of arguments can be different in DECODE (some rules still exist) while in CASE all exps & results should be from the same type group. For example:
```
select case 1 when '1' then 'OK' end from dual; -- ERROR
select decode(1, '1', 'OK') from dual; -- OK
```
|
DECODE Oracle SQL
|
[
"",
"sql",
"oracle",
"decode",
""
] |
I have to concatenate two fields and use the concatenated field in a WHERE clause, but it gives me "invalid identifier". How can I fix this query?
```
select i.FIRST_NAME || ' - ' || i.LAST_NAME as NAME, i.* from CONTACT i
where NAME = 'JOHN - HANKS'
```
This gives me
```
ORA-00904: "NAME": invalid identifier
00904. 00000 - "%s: invalid identifier"
```
|
You cannot use a column alias at the same level. Just use a subquery (or repeat the expression):
```
select c.*
from (select i.FIRST_NAME || ' - ' || i.LAST_NAME as NAME, i.*
from CONTACT i
) c
where c.NAME = 'JOHN - HANKS';
```
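The subquery workaround is easy to verify on any engine. Here it is with SQLite through Python and made-up contact rows (note that SQLite itself happens to tolerate aliases in WHERE, but Oracle does not, so the subquery form is the portable one):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE contact (first_name TEXT, last_name TEXT);
INSERT INTO contact VALUES ('JOHN','HANKS'), ('JANE','DOE');
""")

# the alias is defined in the derived table, so the outer WHERE may use it
rows = conn.execute("""
SELECT c.name FROM (
    SELECT first_name || ' - ' || last_name AS name FROM contact
) c
WHERE c.name = 'JOHN - HANKS'
""").fetchall()
print(rows)
```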
|
The WITH clause is also a good alternative, better readability. Also, if the subquery is to be used multiple times, it is even better.
```
WITH data as(
select i.FIRST_NAME || ' - ' || i.LAST_NAME as NAME, i.* from CONTACT i)
select * from data where name = 'JOHN - HANKS';
```
|
Concatenate and use in where clause oracle plsql
|
[
"",
"sql",
"oracle",
""
] |
If I execute the query below, I get more than 1000 records:
```
select
ID, name, 'Default value' as "Other"
from
emp_table
```
Output:
```
ID | Name | Other
---------------------------------
1 | H | Default value
2 | V | Default value
3 | D | Default value
```
Here I have added a virtual column with a default value of `Default value`. The default value should appear only in the first row, not in all the rows.
Expected output:
```
ID | Name | Other
----------------------------------
1 | H | Default value
2 | V | null
3 | D | null
```
|
If you are using SQL Server you can use ROW\_NUMBER() to add a row number to the query and then use a `CASE` statement to write the default value or null:
```
select
ID,
NAME,
CASE ROW_NUMBER() OVER(ORDER by NAME)
WHEN 1 THEN 'Default Value'
ELSE NULL
END AS [Other]
from emp_table
ORDER BY NAME
```
The ordering in `OVER` should match the ordering of the query.
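A runnable sketch of the ROW_NUMBER() approach with SQLite through Python, ordering by id (an assumption) so the default value lands on the question's first row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emp_table (id INTEGER, name TEXT);
INSERT INTO emp_table VALUES (1,'H'),(2,'V'),(3,'D');
""")

# only the row numbered 1 gets the literal; everything else is NULL
rows = conn.execute("""
SELECT id, name,
       CASE ROW_NUMBER() OVER (ORDER BY id)
            WHEN 1 THEN 'Default value'
            ELSE NULL
       END AS other
FROM emp_table
ORDER BY id
""").fetchall()
print(rows)
```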
|
```
SELECT ID,
name,
CASE
WHEN ID=1 THEN 'Default value'
ELSE NULL
END AS "Other"
FROM emp_table
```
|
Build SQL Query with Customized Virtual Values
|
[
"",
"sql",
""
] |
I am working in SQL. What is the complete explanation for using comments in SQL, with some example code?
|
MySQL Server supports three comment styles:
From a `#` character to the end of the line.
From a `--` sequence to the end of the line. In MySQL, the “-- ” (double-dash) comment style requires the second dash to be followed by at least one whitespace or control character (such as a space, tab, newline, and so on). This syntax differs slightly from standard SQL comment syntax, as discussed in Section 1.8.2.5, “'--' as the Start of a Comment”.
From a `/*` sequence to the following `*/` sequence, as in the C programming language. This syntax enables a comment to extend over multiple lines because the beginning and closing sequences need not be on the same line.
Reference <http://dev.mysql.com/doc/refman/5.0/en/comments.html>
|
**In Oracle, there are three ways.**
**1.**
```
SQL> REM This is a comment
```
**2.**
```
SQL> -- This is a single line comment
```
**3.**
```
SQL> /* This is a
SQL> multiple line
SQL> comment */
SQL>
```
The difference between REM and the other two is that, -- and /\* \*/ can be used in a `PL/SQL` block, while `REM[ARK]` cannot.
Let's see.
```
SQL> REM comment 1
SQL> -- comment 2
SQL> /* comment 3*/
SQL> begin
2 DBMS_OUTPUT.PUT_LINE('comment 1'); --comment 1
3 DBMS_OUTPUT.PUT_LINE('comment 2'); /* comment 2*/
4 end;
5 /
comment 1
comment 2
PL/SQL procedure successfully completed.
SQL>
```
So, `--` and `/* */` works in PL/SQL block too. However, `REM` won't.
```
SQL> begin
2 DBMS_OUTPUT.PUT_LINE('comment'); REM comment
3 end;
4 /
DBMS_OUTPUT.PUT_LINE('comment'); REM comment
*
ERROR at line 2:
ORA-06550: line 2, column 41:
PLS-00103: Encountered the symbol "COMMENT" when expecting one of the
following:
:= . ( @ % ;
The symbol "; was inserted before "COMMENT" to continue.
```
**Update** Most GUI-based tools can execute `SQL*Plus` commands in their own SQL*Plus-style window. It works perfectly in `SQL Developer` when executed as a `script`. In `PL/SQL Developer` it should also work in the `COMMAND` window.
**A screenshot from `SQL Developer`**.

Same in `SQL*Plus`.
```
SQL> SELECT * FROM DUAL;
D
-
X
SQL> REM THIS IS A COMMENT
SQL> SELECT 'ABOVE COMMENT WORKS' FROM DUAL;
'ABOVECOMMENTWORKS'
-------------------
ABOVE COMMENT WORKS
```
|
How to give comments in SQL
|
[
"",
"mysql",
"sql",
"oracle",
""
] |
Let's say I have a table like this
```
CREATE TABLE events (
id INT,
sensor INT,
event_type INT,
time datetime
);
INSERT INTO events VALUES (0,2,4,'2012-06-08 12:13:14');
INSERT INTO events VALUES (1,3,4,'2012-06-08 13:13:14');
INSERT INTO events VALUES (2,2,4,'2012-06-08 12:15:14');
INSERT INTO events VALUES (3,1,6,'2012-06-08 15:13:14');
```
What is the "best" way to retrieve the most recent event added by sensor? so the result wold be like this (note that id 2 is displayed and not id 0 because id 2 is most recent:
```
ID sensor
3 1
1 3
2 2
```
I wrote a select like this, but is there any other way, or a simpler way to achieve this?
```
SELECT id,time,sensor,event_type
FROM events s1
WHERE time = (
SELECT MAX(time) FROM events s2 WHERE s1.sensor = s2.sensor
)
ORDER BY time DESC
```
Thanks!
|
Here is a variant using `GROUP BY`; run it against your data to check the performance.
```
SELECT events.id, events.sensor
FROM events
JOIN
(
SELECT sensor, max(time) as maxTime
FROM events
group by sensor
) T
on events.sensor = T.sensor
and events.time = T.maxTime
ORDER BY events.time DESC
```
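The join-on-MAX variant is easy to check against the sample data; here with SQLite through Python (the datetime stored as TEXT, which sorts correctly in ISO format):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (id INT, sensor INT, event_type INT, time TEXT);
INSERT INTO events VALUES
 (0,2,4,'2012-06-08 12:13:14'),
 (1,3,4,'2012-06-08 13:13:14'),
 (2,2,4,'2012-06-08 12:15:14'),
 (3,1,6,'2012-06-08 15:13:14');
""")

# the derived table finds each sensor's latest time; the join pulls the row
rows = conn.execute("""
SELECT e.id, e.sensor
FROM events e
JOIN (SELECT sensor, MAX(time) AS max_time
      FROM events GROUP BY sensor) t
  ON e.sensor = t.sensor AND e.time = t.max_time
ORDER BY e.time DESC
""").fetchall()
print(rows)
```

For sensor 2, id 2 wins over id 0, matching the expected output.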
|
```
select * from events
where time in(SELECT max(time) FROM events
group by sensor_id,event_type)
Order by time desc;
```
|
SQL how to select the most recent timestamp by values?
|
[
"",
"mysql",
"sql",
"group-by",
"timestamp",
""
] |
I am currently developing a program and I want to write a function which accepts a value in the following format:
```
"AAAA BBBB" CCCC DDDD EEEE "FFFF GGGG HHHH"
```
I want to replace the spaces above with "\_" and need output in the following format (please note that this should happen only for strings inside double quotes):
```
"AAAA_BBBB" CCCC DDDD EEEE "FFFF_GGGG_HHHH"
```
So [I got the correct answer to this in my previous question](https://stackoverflow.com/questions/26521414/replace-spaces-using-function-in-sql/26521566#26521566)
```
DECLARE @In VARCHAR(50) = '"AAAA BBBB" CCCC DDDD EEEE "FFFF GGGG HHHH"'
DECLARE @Quote SMALLINT = -1, @Index INT = 1, @Char CHAR(1)
WHILE @Index <= LEN(@In) BEGIN
SET @Char = SUBSTRING(@In, @Index, 1)
IF @Char = '"'
SET @Quote = @Quote * -1
IF @Char = ' ' AND @Quote > 0
SET @In = STUFF(@In, @Index, 1, '_')
SET @Index = @Index + 1
END
PRINT @In
```
Now I want to modify this to meet the following requirement: underscores should replace spaces only inside properly *closed* double quotes. Currently I get wrong output if the user input is as follows:
```
"AAAA BBBB" CCCC DDDD EEEE "FFFF GGGG HHHH cccc
```
it gives output as
```
"AAAA_BBBB" CCCC DDDD EEEE "FFFF_GGGG_HHHH_cccc
```
but it must transform only the first enclosed double-quoted string, because the other one is not closed; the expected output is as follows:
```
"AAAA_BBBB" CCCC DDDD EEEE "FFFF GGGG HHHH cccc
```
|
Please refer to this code; I think it will give you some hints.
The code is untested. I am posting it because it should at least point you toward a solution.
```
DECLARE @In VARCHAR(50) = '"AAAA BBBB" CCCC DDDD EEEE "FFFF GGGG HHHH IIIII'
DECLARE @Quote SMALLINT = -1, @Index INT = 1, @Char CHAR(1) ,@New_Char CHAR(100)
WHILE @Index <= LEN(@In) BEGIN
SET @Char = SUBSTRING(@In, @Index, 1)
IF @Char = '"'
SET @Quote = @Quote * -1
SET @New_Char = SUBSTRING(@In, @Index + 1,213)
IF @Char = ' ' AND @Quote > 0 AND @New_Char like '%"%'
SET @In = STUFF(@In, @Index, 1, '_')
SET @Index = @Index + 1
END
PRINT @In
```
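The requirement boils down to: a quoted segment only counts once its closing quote exists. Outside T-SQL the same logic is a short regex, sketched below in Python; it may help to validate test cases for the loop, since the pattern `"[^"]*"` only ever matches balanced pairs:

```python
import re

def underscore_quoted(s):
    # each match is a balanced "..." segment; a trailing unclosed quote
    # never matches, so its spaces are left alone
    return re.sub(r'"[^"]*"', lambda m: m.group(0).replace(" ", "_"), s)

balanced = '"AAAA BBBB" CCCC DDDD EEEE "FFFF GGGG HHHH"'
dangling = '"AAAA BBBB" CCCC DDDD EEEE "FFFF GGGG HHHH cccc'
print(underscore_quoted(balanced))
print(underscore_quoted(dangling))
```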
|
Here are the modifications I would make
* Once all replacements have been made, exit existing the loop (added WHILE loop condition and how logic for the quote recognition behaves to facilitate the exit)
* Remove dependencies on any length (CHAR(100), SUBSTRING(...,213))
**Code**
```
DECLARE @In VARCHAR(50) = '"AAAA BBBB" CCCC DDDD EEEE "FFFF GGGG HHHH IIIII"'
DECLARE @Quote SMALLINT = 0, @Index INT = 1, @Char CHAR(1)
WHILE @Quote < 2 AND @Index <= LEN(@In) BEGIN
SET @Char = SUBSTRING(@In, @Index, 1)
IF @Char = '"'
SET @Quote = @Quote + 1
IF @Char = ' ' AND @Quote = 1
SET @In = STUFF(@In, @Index, 1, '_')
SET @Index = @Index + 1
END
PRINT @In
```
**Output**
```
"AAAA_BBBB" CCCC DDDD EEEE "FFFF GGGG HHHH IIIII"
```
|
SQL Character Replace function extend
|
[
"",
"sql",
"sql-server",
"replace",
"sql-function",
""
] |