| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I would like to exclude a row in the query result if all 3 sum columns are zero.
```
Select Name
,sum(case when cast(Date as date) <= Convert(datetime, '2014-05-01') then 1 else 0 end) as 'First'
,sum(case when cast(Date as date) <= Convert(datetime, '2014-04-01') then 1 else 0 end) as 'Second'
,sum(case when cast(Date as date) <= Convert(datetime, '2013-05-01') then 1 else 0 end) as 'Third'
FROM [dbo].[Posting]
inner join dbo.Names on Names.NameId = Posting.NameId
where active = 1
group by Name
order by Name
```
|
This may work for you:
```
select * from
(
.......your query......
) as t
where First <> 0 or Second <> 0 or Third <> 0
```
|
You can repeat the expressions in the `having` clause:
```
having sum(case when cast(Date as date) <= Convert(datetime, '2014-05-01') then 1 else 0 end) > 0 or
sum(case when cast(Date as date) <= Convert(datetime, '2014-04-01') then 1 else 0 end) > 0 or
sum(case when cast(Date as date) <= Convert(datetime, '2013-05-01') then 1 else 0 end) > 0
```
However, you could write the conditions more simply as:
```
having sum(case when cast(Date as date) <= '2014-05-01' then 1 else 0 end) > 0 or
sum(case when cast(Date as date) <= '2014-04-01' then 1 else 0 end) > 0 or
sum(case when cast(Date as date) <= '2013-05-01' then 1 else 0 end) > 0
```
Or, because the first encompasses the other two:
```
having sum(case when cast(Date as date) <= '2014-05-01' then 1 else 0 end) > 0
```
Or, even more simply:
```
having min(date) <= '2014-05-01'
```
Also, you should use single quotes only for string and date literals. Don't use single quotes for column aliases (it can lead to confusion and problems). Choose names that don't need to be escaped. If you *have* to use a troublesome name, then use square brackets.
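For illustration, the difference between the alias styles looks like this (the alias names here are just placeholders):

```sql
-- Risky: single quotes make the alias look like a string literal
SELECT Name AS 'Posting Count' FROM dbo.Posting;

-- Preferred: a name that needs no escaping at all
SELECT Name AS PostingCount FROM dbo.Posting;

-- If a troublesome name is unavoidable, use square brackets
SELECT Name AS [Posting Count] FROM dbo.Posting;
```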
|
Exclude multiple sum rows when all zero
|
[
"",
"sql",
"sql-server",
"database",
""
] |
I have a query
```
Select Id,DeviceId,TankCount,Tank1_Level,Tank2_Level,ReadTime from Table
Id TankCount DeviceId Tank1_Level Tank2_Level ReadTime
1 1 123 20 50 2014-11-07 14:39:33.277
2 2 456 52 78 2014-11-07 14:39:33.277
3 1 789 44 50 2014-11-07 14:39:33.277
```
`Tank2_Level` is 50 in all `TankCount = 1` rows.
I don't want to display `Tank2_Level` when its value equals 50.
The column types are `TankCount int, Tank1_Level int, Tank2_Level int`.
```
Id TankCount DeviceId Tank1_Level Tank2_Level ReadTime
1 1 123 20 null or empty 2014-11-07 14:39:33.277
2 2 456 52 78 2014-11-07 14:39:33.277
3 1 789 44 null or empty 2014-11-07 14:39:33.277
```
|
```
SELECT Id,
DeviceId,
TankCount,
Tank1_Level,
       CASE WHEN Tank2_Level = 50 AND TankCount <> 2  -- columns are int, so compare to numbers
THEN NULL
ELSE Tank2_Level
END AS [Tank2_Level],
ReadTime
FROM table
```
|
Is `Tank2_Level` a number or a string? Let me assume it is a number:
```
select <other columns>,
(case when Tank2_Level = 50
then 'null or empty'
else cast(Tank2_Level as varchar(255))
end) as Tank2_Level
from . . .;
```
|
How to use Case in SQL?
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
This is a simple query using a CTE, but it is not behaving the way I want it to.
The idea is to filter the records with precio\_90 = null and then update the field precio\_90 with the price from mytable2 where codigo = codigo on a specific date.
At present I get all records updated, without the intended filter.
```
DECLARE @mytable1 TABLE
(
codigo VARCHAR(10) NOT NULL,
precio_90 NUMERIC(10, 4)
);
DECLARE @mytable2 TABLE
(codigo VARCHAR(10) NOT NULL,
fecha date NOT NULL,
precio NUMERIC(10, 4) NOT NULL
);
INSERT INTO @mytable1(codigo, precio_90)
VALUES ('stock1', 51),
('stock1', 3),
('stock1',5),
('stock1',6),
('stock1',2),
('stock1',7),
('stock1',null)
INSERT INTO @mytable2(codigo, fecha, precio)
VALUES ('stock1', '20140710', 26),
('stock2', '20140711', 66),
('stock1', '20140712', 23),
('stock2', '20140710', 35);
;WITH CTE_1
as
( SELECT codigo, precio_90
FROM @mytable1
where precio_90 is null )
UPDATE t1
SET t1.precio_90= t2.[precio]
from @mytable1 as t1
INNER JOIN @mytable2 as t2
ON t1.codigo = t2.[codigo] and '2014-07-10'=t2.fecha
```
|
Well, first, you don't use the CTE anywhere in your update, which is why your results aren't filtered correctly. Second, you don't need a CTE for this: you can filter `precio_90 is null` right in the update.
```
UPDATE t1
SET t1.precio_90= t2.[precio]
from @mytable1 as t1
INNER JOIN @mytable2 as t2 ON t1.codigo = t2.codigo
where t1.precio_90 is null
and '2014-07-10'=t2.fecha
```
|
From your sample `@mytable1` INSERTs, all the records are inserted with codigo = 'stock1'. Therefore, your `UPDATE` statement updates all the records in @mytable1 to the stock1 price from @mytable2.
For example, if your sample table 1 had records with stock2 like this:
```
INSERT INTO @mytable1(codigo, precio_90)
VALUES ('stock1', 51),
('stock1', 3),
('stock1',5),
('stock1',6),
('stock2',2), -- Set to stock2
('stock2',7), -- Set to stock2
('stock1',null)
```
Then your update sets the stock2 records from the 2014-07-10 rows:
```
codigo precio_90
---------- ---------------------------------------
stock1 26.0000
stock1 26.0000
stock1 26.0000
stock1 26.0000
stock2 35.0000
stock2 35.0000
stock1 26.0000
```
|
CTE with update
|
[
"",
"sql",
"sql-server",
""
] |
In my stored procedure, I want to export select result to a `.CSV` file. I need to write a stored procedure which selects some data from different tables and saves it to a `.CSV` file.
The selecting part is ready:
```
SELECT DISTINCT
PER.NREGNUMBER_PERNUM AS [Registration Number],
PER.CFAMNAME_PER AS [Family Name],
PER.CGIVNAME_PER AS [Given Name],
CONVERT(varchar(10), CONVERT(date, PER.DBIRTHDATE_PER, 106), 103) AS [Birth Date],
PER.CGENDER_PERGEN as [Gender],
PHONE.MOBILE_NUMBER
FROM PERSON AS PER
LEFT OUTER JOIN
PHONE ON PER.NREGNUMBER_PERNUM = PHONE.NREGNUMBER_PPHPER AND PHONE.CPRIMARY_PPH = 'Y'
```
|
The task was to export some data from the database to **.CSV** at a specified time. In the beginning we wanted to use Windows Task Scheduler to run the stored procedure, and the stored procedure had to be able to export the data, but I couldn't find a way to do that. Instead, we created a simple stored procedure which only returns the data, and a **batch** file which calls the procedure and exports the result to a **.CSV** file. The batch file is simple:
```
sqlcmd -S Etibar-PC\SQLEXPRESS -d MEV_WORK -E -Q "dbo.SelectPeople" -o "MyData1.csv" -h-1 -s"," -w 700
```
Here `dbo.SelectPeople` is the stored procedure, `Etibar-PC\SQLEXPRESS` is the server instance, and `MEV_WORK` is the database name.
|
I have built a procedure that may help:
```
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- example: exec Sys_Database_exportToCsv 'select MyEmail from QPCRM.dbo.Myemails', 'D:\test\exported.csv'
create PROCEDURE Sys_Database_exportToCsv
(
@ViewName nvarchar(50),
@exportFile nvarchar(50)
)
AS
BEGIN
SET NOCOUNT ON;
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;
Declare @cmd nvarchar(4000)
SET @cmd = 'bcp ' + CHAR(34) + @ViewName + CHAR(34) + ' queryout ' + CHAR(34) + @exportFile + CHAR(34)
         + ' -S ' + @@servername + ' -c -t' + CHAR(34) + ',' + CHAR(34) + ' -r' + CHAR(34) + '\n' + CHAR(34) + ' -T'
exec master..xp_cmdshell @cmd
EXEC sp_configure 'xp_cmdshell', 0;
RECONFIGURE;
EXEC sp_configure 'show advanced options', 0;
RECONFIGURE;
END
GO
```
|
SQL Server stored procedure to export Select Result to CSV
|
[
"",
"sql",
"sql-server",
"t-sql",
"stored-procedures",
"export-to-excel",
""
] |
I am trying to get a list of values from the same column in a table by running two queries.
This is what the table looks like:
```
******************************************
Key | Short_text | UID | Boolean_value
******************************************
Name | John | 23 | null
******************************************
Male | NULL | 23 | true
******************************************
Name | Ben | 45 | null
******************************************
Male | NULL | 45 | true
```
I am trying to get the SHORT\_TEXT of the NAME rows where the Boolean value of the matching Male row (by UID) is true.
This is what I have so far, which throws the error: "Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <=, >, >= or when the subquery is used as an expression."
```
SELECT SHORT_TEXT_VALUE
FROM Table
WHERE ((SELECT UID
FROM Table
WHERE KEY = 'NAME') =
(SELECT CUSTOMER_UID
FROM Table
WHERE KEY = 'Male'
AND BOOLEAN_VALUE = 1))
```
I am very new to SQL, so I am not sure what I should do to achieve this.
Any help would be greatly appreciated.
|
I am unsure what you are trying to accomplish, but based on your code I think this is what you want:
```
SELECT SHORT_TEXT_VALUE
FROM Table
WHERE KEY='Name'
and UID in(SELECT UID
FROM Table
WHERE KEY = 'Male'
AND BOOLEAN_VALUE = 1)
```
But on a more important note, you might want to think about redesigning your table. Why are the Male details of a specific UID on a different row?
|
You can join your table with itself:
```
SELECT
t1.UID,
t1.Short_text
FROM
tablename t1 INNER JOIN tablename t2
ON t1.UID=t2.UID
WHERE
t1.Key='Name' AND t2.Key='Male' AND t2.Boolean_value=TRUE
```
or this with EXISTS:
```
SELECT
t1.UID,
t1.Short_text
FROM
tablename t1
WHERE
t1.Key='Name' AND
EXISTS (SELECT * FROM tablename t2
WHERE t1.UID=t2.UID AND t2.Key='Male' AND t2.Boolean_value=1)
```
|
SQL get multiple values (from the same column) from same table using multiple queries
|
[
"",
"mysql",
"sql",
""
] |
I have a table with the following columns: ID, IsRunningTotal and Amount (as in the CTE of the SQL sample).
The Amount represents either a value or a running total, identified by the IsRunningTotal flag.
If you wonder about the use case, imagine that the ID represents the month of the year (e.g. ID 1 = Jan 2014), so the Amount for a certain month may be given as a running total (e.g. 3000 for March) or simply as a value (e.g. 1000 for January).
So the following sample data set is given:
```
ID IsRunTot Amount
1 0 1000
2 0 1000
3 1 3000
4 1 4000
5 0 1000
6 0 1000
7 1 7000
8 1 8000
```
Now I want to break down the running totals to get the simple value for each ID (here 1000 for each row), like this:
```
ID IsRunTot Amount Result
1 0 1000 1000
2 0 1000 1000
3 1 3000 1000
4 1 4000 1000
5 0 1000 1000
6 0 1000 1000
7 1 7000 1000
8 1 8000 1000
```
For now I have this work-in-progress T-SQL query (written for SQL Server 2008 R2):
```
WITH MySet (ID, IsRunTot, Amount)
AS
(
SELECT 1 AS ID, 0 AS IsRunTot, 1000 AS Amount
UNION
SELECT 2 AS ID, 0 AS IsRunTot, 1000 AS Amount
UNION
SELECT 3 AS ID, 1 AS IsRunTot, 3000 AS Amount
UNION
SELECT 4 AS ID, 1 AS IsRunTot, 4000 AS Amount
UNION
SELECT 5 AS ID, 0 AS IsRunTot, 1000 AS Amount
UNION
SELECT 6 AS ID, 0 AS IsRunTot, 1000 AS Amount
UNION
SELECT 7 AS ID, 1 AS IsRunTot, 7000 AS Amount
UNION
SELECT 8 AS ID, 1 AS IsRunTot, 8000 AS Amount
)
, MySet2 (ID, IsRunTot, Amount, BreakDown)
AS
(
SELECT ID, IsRunTot, Amount, Amount AS BreakDown
FROM MySet WHERE ID = 1
UNION ALL
SELECT A.ID, A.IsRunTot, A.Amount
, CASE WHEN A.IsRunTot = 1 AND B.IsRunTot = 1 THEN A.Amount - B.Amount ELSE NULL END AS BreakDown
FROM MySet A
INNER JOIN MySet B
ON A.ID - 1 = B.ID
)
SELECT *
FROM MySet2
OPTION (MAXRECURSION 32767);
```
That only works when the predecessor row was also a running total, and produces the following result:
```
ID IsRunTot Amount BreakDown
1 0 1000 1000
2 0 1000 NULL
3 1 3000 NULL
4 1 4000 1000
5 0 1000 NULL
6 0 1000 NULL
7 1 7000 NULL
8 1 8000 1000
```
As you can see, I am missing the breakdown result for IDs 3 and 7.
How do I extend my query to produce the desired result?
|
The following uses a recursive CTE to calculate the true breakdowns and running totals.
```
DECLARE @Data TABLE (ID INT, IsRunTot BIT, Amount INT)
INSERT @Data VALUES
    (1, 0, 1000),
    (2, 0, 1000),
    (3, 1, 3000),
    (4, 1, 4000),
    (5, 0, 1000),
    (6, 0, 1000),
    (7, 1, 7000),
    (8, 1, 8000)
; WITH CTE AS (
SELECT TOP 1
ID,
IsRunTot,
Amount,
Amount AS RunningTotal,
Amount AS Breakdown
FROM @Data
ORDER BY ID
UNION ALL
SELECT
D2.ID,
D2.IsRunTot,
D2.Amount,
D1.RunningTotal + D2.Amount - (CASE WHEN D2.IsRunTot = 1 THEN D1.RunningTotal ELSE 0 END),
D2.Amount - (CASE WHEN D2.IsRunTot = 1 THEN D1.RunningTotal ELSE 0 END)
FROM CTE D1
INNER JOIN @Data D2
ON D1.ID + 1 = D2.ID
)
SELECT *
FROM CTE
```
**This yields output**
```
ID IsRunTot Amount RunningTotal Breakdown
----------- -------- ----------- ------------ -----------
1 0 1000 1000 1000
2 0 1000 2000 1000
3 1 3000 3000 1000
4 1 4000 4000 1000
5 0 1000 5000 1000
6 0 1000 6000 1000
7 1 7000 7000 1000
8 1 8000 8000 1000
```
|
This solution subtracts the previous running total and all values in between.
```
;WITH MySet (ID, IsRunTot, Amount)
AS
(
SELECT 1, 0, 1000
UNION SELECT 2, 0, 1000
UNION SELECT 3, 1, 3000
UNION SELECT 4, 1, 4000
UNION SELECT 5, 0, 1000
UNION SELECT 6, 0, 1000
UNION SELECT 7, 1, 7000
UNION SELECT 8, 1, 8000
)
SELECT A.ID, A.IsRunTot, A.Amount
, BreakDown = CASE WHEN A.IsRunTot = 1 THEN A.Amount -
(SELECT SUM(B.Amount) FROM MySet B WHERE B.ID BETWEEN ISNULL(
(SELECT MAX(C.ID) FROM MySet C WHERE C.ID < A.ID AND IsRunTot = 1)
,1) AND A.ID - 1) END
FROM MySet A;
```
|
SQL Server: break down Running totals from mixed Set (Running Totals and Values)
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"cumulative-sum",
""
] |
How can I get the highlighted rows from the table below in SQL? (Distinct rows based on User name with the highest Version are highlighted)

In case you need plain text table:
```
+----+-----------+---+
| 1 | John | 1 |
+----+-----------+---+
| 2 | Brad | 1 |
+----+-----------+---+
| 3 | Brad | 3 |
+----+-----------+---+
| 4 | Brad | 2 |
+----+-----------+---+
| 5 | Jenny | 1 |
+----+-----------+---+
| 6 | Jenny | 2 |
+----+-----------+---+
| 7 | Nick | 4 |
+----+-----------+---+
| 8 | Nick | 1 |
+----+-----------+---+
| 9 | Nick | 3 |
+----+-----------+---+
| 10 | Nick | 2 |
+----+-----------+---+
| 11 | Chris | 1 |
+----+-----------+---+
| 12 | Nicole | 2 |
+----+-----------+---+
| 13 | Nicole | 1 |
+----+-----------+---+
| 14 | James | 1 |
+----+-----------+---+
| 15 | Christine | 1 |
+----+-----------+---+
```
What I have so far works for one user:
```
SELECT USER, VERSION
FROM TABLE
WHERE USER = 'Brad'
AND VERSION = (SELECT MAX(VERSION ) FROM TABLE WHERE USER= 'Brad')
```
|
This might help you:
```
select id, user, version
from
(
select id, user, version, row_number() over (partition by user order by version desc) rownum
from yourtable
) as t
where t.rownum = 1
```
[sql fiddle](http://sqlfiddle.com/#!3/1a060/6)
|
```
SELECT USER, max(VERSION) VERSION
FROM TABLE GROUP BY USER;
```
If you need an ID then
```
SELECT ID, USER, VERSION FROM (
SELECT ID, USER, VERSION,
RANK() OVER(PARTITION BY USER ORDER BY VERSION DESC) RNK
FROM TABLE
) WHERE RNK = 1;
```
if you have
```
| 2 | Brad | 5 |
+----+-----------+---+
| 3 | Brad | 3 |
+----+-----------+---+
| 4 | Brad | 5 |
```
The query with RANK gives you both rows:
```
| 2 | Brad | 5 |
+----+-----------+---+
| 4 | Brad | 5 |
```
If you need only one row, then replace `RANK()` with `ROW_NUMBER()`.
In your query you're using `AND VERSION = (SELECT MAX(VERSION) FROM TABLE WHERE USER = 'Brad')`, which behaves like RANK() (it returns all rows with the maximum VERSION).
|
Getting distinct values with the highest value in a specific column
|
[
"",
"sql",
"oracle",
"select",
"distinct",
""
] |
I have a table testtable having fields
```
Id Name Status
1 John active
2 adam active
3 cristy inactive
4 benjamin inactive
5 mathew active
6 thomas inactive
7 james active
```
I want a query that displays the result like this:
```
Id Name Status
1 John active
3 cristy inactive
2 adam active
4 benjamin inactive
5 mathew active
6 thomas inactive
7 james active
```
My question is how to take records from this table in the order active, inactive, active, inactive, and so on.
|
This query sorts on interleaved active/inactive state:
```
SELECT [id],
[name],
[status]
FROM (
(
SELECT
Row_number() OVER(ORDER BY id) AS RowNo,
0 AS sorter,
[id],
[name],
[status]
FROM testtable
WHERE [status] = 'active'
)
UNION ALL
(
SELECT
Row_number() OVER(ORDER BY id) AS RowNo,
1 AS sorter,
[id],
[name],
[status]
FROM testtable
WHERE [status] = 'inactive'
)
) innerUnion
ORDER BY ( RowNo * 2 + sorter )
```
This approach uses an inner UNION ALL of two SELECT statements, one returning active rows and the other inactive rows. Each gets a generated row number, which is later multiplied by two so it is always even. The sorter column is just a bit flag; adding it to the doubled row number yields an odd or even number depending on active/inactive state, giving each row a unique sort key and allowing the results to be interleaved.
The SQL Fiddle link is here, to allow testing and manipulation:
<http://sqlfiddle.com/#!3/8a8a1/11/0>
In the absence of a specified DB system, I've assumed that SQL Server 2008 (or newer) is being used. An alternate row numbering system would be necessary on other DBMSes.
|
Finally I got the answer:
```
SET @rank=0;
SET @rank1=0;
SELECT @rank:=@rank+1 AS rank,id,name,status FROM `testtablejohn` where status='E'
UNION
SELECT @rank1:=@rank1+1 AS rank,id,name,status FROM `testtablejohn` where status='D'
order by rank
```
|
SQL query interleaving two different statuses
|
[
"",
"sql",
""
] |
Please consider a table of vendors having two columns: `VendorName` and `PayableAmount`
I'm looking for a query which returns top ten vendors sorted by `PayableAmount` descending and sum of other payable amounts as "other" in 11th row.
Obviously, `sum of PayableAmount` from `Vendors` table should be equal to `sum of PayableAmount` from `Query`.
|
This performs the query you're looking for: first extracting those in the top 10, then `UNION`ing that result with the remaining vendors, aggregated together as `'Other'`.
```
WITH rank AS (SELECT
VendorName,
PayableAmount,
ROW_NUMBER() OVER (ORDER BY PayableAmount DESC) AS rn
FROM vendors)
SELECT VendorName,
rn,
PayableAmount
FROM
rank WHERE rn <= 10
UNION
SELECT VendorName, 11 AS rn, PayableAmount
FROM
(
SELECT 'Other' AS VendorName,
SUM(PayableAmount) AS PayableAmount
FROM
rank WHERE rn > 10
) X11
ORDER BY rn
```
This has been tested in SQLFiddle.
|
Technically, it's possible to do in one query:
```
declare @t table (
Name varchar(50) primary key,
Amount money not null
);
-- Dummy data
insert into @t (Name, Amount)
select top (20) sq.*
from (
select name, max(number) as [Amount]
from master.dbo.spt_values
where number between 100 and 100000
and name is not null
group by name
) sq
order by newid();
-- The table itself, for verification
select * from @t order by Amount desc;
-- Actual query
select top (11)
case when sq.RN > 10 then '<All others>' else sq.Name end as [VendorName],
case
when sq.RN > 10 then sum(sq.Amount) over(partition by case when sq.rn > 10 then 1 else 0 end)
else sq.Amount
end as [Value]
from (
select t.Name, t.Amount, row_number() over(order by t.Amount desc) as [RN]
from @t t
) sq
order by sq.RN;
```
It will even work on any SQL Server version starting with 2005. But in real life I would prefer to calculate the two parts separately and then `UNION` them.
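A sketch of that two-part version, reusing the `@t` table variable from above (the `<All others>` label is just a placeholder; the `HAVING` suppresses the extra row when there are no vendors beyond the top 10):

```sql
;WITH ranked AS (
    SELECT t.Name, t.Amount,
           ROW_NUMBER() OVER (ORDER BY t.Amount DESC) AS rn
    FROM @t t
)
SELECT Name AS VendorName, Amount AS [Value]
FROM ranked
WHERE rn <= 10

UNION ALL

SELECT '<All others>', SUM(Amount)
FROM ranked
WHERE rn > 10
HAVING COUNT(*) > 0;  -- no "others" row when the table has 10 or fewer vendors
```

If the "others" row must come last, wrap this in an outer query with an explicit ORDER BY, since UNION ALL does not guarantee row order on its own.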
|
Sorting top ten vendors and showing remained vendors as "other"
|
[
"",
"sql",
"sql-server",
""
] |
I'm trying to query the table CANCELLATION\_DEFINITION and count the number of rows that have an ACTION\_TYPE value that isn't "-". Unfortunately, the query gives inaccurate results: for example, it returns 3 when there are 6 rows in the table with an ACTION\_TYPE value other than "-". Code is below.
```
SELECT COUNT(*)
FROM (
SELECT DISTINCT
ACTION_TYPE
FROM CANCELLATION_DEFINITION WHERE ACTION_TYPE != '-'
)AS distinctified
```
|
Try:
```
SELECT COUNT(*)
FROM (
SELECT ACTION_TYPE
FROM CANCELLATION_DEFINITION WHERE ACTION_TYPE != '-'
)AS distinctified
```
Hope that helps
|
When you perform a SELECT DISTINCT, it only returns distinct (i.e. different) values. So if you have ACTION\_TYPE values ["INSERT", "UPDATE", "UPDATE", "DELETE"], it will only give you 3 results, because it merges the two UPDATE rows.
What you really want is
```
SELECT COUNT(ACTION_TYPE)
FROM CANCELLATION_DEFINITION
WHERE ACTION_TYPE != '-'
```
|
Select Count Query Giving Inaccurate Results
|
[
"",
"sql",
""
] |
I have 2 tables with a foreign key constraint:
```
Table A:
[id] int identity(1, 1) PK,
[b_id] INT
```
and
```
Table B:
[id] int identity(1, 1) PK
```
where [b\_id] refers to the [id] column of Table B.
The task is:
On each insert into table A, add a new record into table B and update [b\_id] with the new B id.
SQL Server 2008 R2 is used.
Any help is appreciated.
|
Having misread this the first time, I am posting a totally different answer.
First, if table B is the parent table, you insert into it first; then you grab the id value and insert into table A.
It is best to do this in one transaction. Depending on what the other fields are, you can populate table A with a trigger from table B, or you might need to write straight SQL code or a stored procedure to do the work.
It would be easier to describe what to do if you had the table schema for both tables. However, assuming table B only has one column and table A only has ID and B\_id, this is the way the code could work (you would want to add explicit transactions for production code). The example is for a single-record insert, which would not happen from a trigger. Triggers should always handle multi-row inserts and would have to be written differently. But without knowing what the columns in the tables are, it is hard to provide a good example of this.
```
create table #temp (id int identity)
create table #temp2 (Id int identity, b_id int)
declare @b_id int
insert into #temp default values
select @B_id = scope_identity()
insert into #temp2 (B_id)
values(@B_id)
select * from #temp2
```
Now the problem gets more complex if there are other columns, as you would have to provide values for them as well.
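For completeness, here is a multi-row-safe trigger sketch. It assumes table B has at least one regular column (a hypothetical `CreatedAt`) so that one B row can be inserted per A row, with `OUTPUT` capturing the new identity values. The pairing of A rows to B rows by row number is arbitrary, which is fine here because the freshly created B rows are interchangeable:

```sql
CREATE TRIGGER trgA_AfterInsert ON dbo.A
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @NewB TABLE (rn INT IDENTITY(1,1), b_id INT);

    -- One new B row per inserted A row; OUTPUT collects the generated ids
    INSERT INTO dbo.B (CreatedAt)
    OUTPUT inserted.id INTO @NewB (b_id)
    SELECT GETDATE() FROM inserted;

    -- Number the inserted A rows and link each one to a new B row
    WITH NumberedA AS (
        SELECT id, ROW_NUMBER() OVER (ORDER BY id) AS rn
        FROM inserted
    )
    UPDATE a
    SET a.b_id = nb.b_id
    FROM dbo.A AS a
    INNER JOIN NumberedA AS na ON na.id = a.id
    INNER JOIN @NewB AS nb ON nb.rn = na.rn;
END
```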
|
Without removing the identity specification, you can use **SET IDENTITY\_INSERT B ON**.
Try this:
```
CREATE TRIGGER trgAfterInsert ON [dbo].[A]
FOR INSERT
AS
IF @@ROWCOUNT = 0 RETURN;
SET NOCOUNT ON;
SET IDENTITY_INSERT B ON
DECLARE @B_Id INT
SELECT @B_Id = ISNULL(MAX(Id), 0) FROM B;
WITH RES (ID, BIDS)
AS
(SELECT Id, @B_Id + ROW_NUMBER() OVER (ORDER BY Id) FROM INSERTED)
UPDATE A SET [b_Id] = BIDS
FROM A
INNER JOIN RES ON A.ID = RES.ID
INSERT INTO B (Id)
SELECT @B_Id + ROW_NUMBER() OVER (ORDER BY Id) FROM INSERTED
SET IDENTITY_INSERT B OFF
GO
```
|
For each inserted row create row in other table with foreign key constrain
|
[
"",
"sql",
"sql-server",
"sql-server-2008-r2",
""
] |
I'm doing an export using SQL\*Plus with the following code:
```
Set echo off
Set verify off
Set feedback off
Set heading off
Set termout on
Set pages 50000
Set linesize 1000
Set newpage none
Set colsep ","
spool Z:\ff\ss.csv
Select Client_ID, Cust_ID from cust;
spool off;
exit;
```
I get the correct output, but it doesn't have the best formatting.
```
SQL> Select Client_ID, Cust_ID from cust;
100,200
200,300
300,400
400,500
SQL>spool off;
```
1. How can I get rid of the spaces before the values?
100,200
2. How can I get rid of the `SQL>` statements in the output file?
|
`echo off` only removes the `SQL>` prompts when the script is run from a file. Put your commands in a file called export\_client\_data.sql and run them using the @ command:
```
SQL>@export_client_data.sql
```
SQL\*Plus, being a reporting tool, will format output to the column width as defined in the table. Using TRIM() doesn't always work as expected because of this. You will see a VARCHAR column padded on the right to the width for example.
Typically to create an extract using SQL\*Plus you would format the query like this and no need to TRIM anything:
```
Select Client_ID || ',' || Cust_ID from cust;
```
|
Try these options:

* `SET TRIMSPOOL ON`: otherwise every line in the spool file is padded with blanks until the linesize is reached.
* `SET TRIMOUT ON`: otherwise every line in the terminal output is padded with blanks until the linesize is reached.
* `SET WRAP OFF`: truncates the line if it is longer than LINESIZE. This should not happen if the linesize is large enough.
* `SET TERMOUT OFF`: suppresses printing of the results to the terminal. The lines are still written to the spool file. This can speed up execution of a statement a lot.
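Putting these options together with the script from the question, the full spool script might look like this sketch (paths and names are from the question; `colsep` is no longer needed once the columns are concatenated manually):

```sql
Set echo off
Set verify off
Set feedback off
Set heading off
Set termout off
Set trimspool on
Set trimout on
Set wrap off
Set pages 50000
Set linesize 1000
Set newpage none
spool Z:\ff\ss.csv
-- Concatenating avoids column padding entirely
Select Client_ID || ',' || Cust_ID from cust;
spool off
exit
```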
|
Remove leading spaces and the SQL> tags when spooling
|
[
"",
"sql",
"oracle",
"formatting",
"sqlplus",
"spool",
""
] |
I have a view where I get some values and diffs from my database. I use this code:
```
Select DATEDIFF(minute, 0, DATEADD(day, 0, t1.HorasEfe)) as Soma, t1.IDDiligencia
from DiligenciaSub t1
group by t1.IDDiligencia,t1.HorasEfe
order by t1.HorasEfe
```
I get this as output:

What I need: to sum the values of Soma for rows where IDDiligencia is equal.
Is it possible to adapt my current query to do this?
|
Just take `HorasEfe` out of the grouping and add `SUM`:
```
Select SUM(DATEDIFF(minute, 0, DATEADD(day, 0, t1.HorasEfe))) as Soma, t1.IDDiligencia
from DiligenciaSub t1
group by t1.IDDiligencia
```
|
Remove the SUM and GROUP BY from your original query, then insert the result into a temp table #t, and then use this query:
```
select sum(soma) as sum_soma,IDDiligencia as idd
from #t
group by IDDiligencia
```
You will get the following result:

|
Aggregation and Sum with one Group By
|
[
"",
"sql",
"sql-server",
"view",
""
] |
If we have a table like:
```
col1 | col2
-----------
A | 1
B | 2
A | 1
C | 16
B | 3
```
How can it be determined whether all rows for a given value in col1 are the same?
For example, here we have only '1's for A, but for B we have '2' and '3'.
Something like:
```
A | true
B | false
C | true
```
|
```
select col1, case when count(distinct col2) = 1
then 'true'
else 'false'
end as same_col2_results
from your_table
group by col1
```
|
I have a preference for using `min()` and `max()` for this purpose, rather than `count(distinct)`:
```
select col1,
(case when min(col2) = max(col2) then 'true' else 'false' end) as IsCol2Same
from table t
group by col1;
```
Then comes the issue of `NULL` values. If you want to ignore them (so a column could actually have two values, `NULL` and another value), then the above is fine (as is `count(distinct)`). If you want to treat `NULL` the same way as other values, then you need some additional tests:
```
select col1,
(case when min(col2) is null then 'true' -- All NULL
when count(col2) <> count(*) then 'false' -- Some NULL
when min(col2) = max(col2) then 'true' -- No NULLs and values the same
else 'false'
end) as IsCol2Same
from table t
group by col1;
```
|
SQL Compare grouped values
|
[
"",
"sql",
"oracle",
""
] |
I have a SQL database that I am querying as part of a project - I only have read access to it.
There is a column called `ResultStatus` - possible values are "Passed" and "Failed". However, there were some typos by the original data inputter so some of them say "Fialed" as well.
I want to count the number of "Failed" entries, but I want to include the "Fialed" ones as well.
```
SELECT
ResultStatus, Count(*)
FROM
[DB_018].[dbo].[ProjectData]
GROUP BY ResultStatus
```
is obviously grouping "Fialed" in a different category. I want it to be counted along with "Failed".
|
You can correct the spelling yourself
```
SELECT Case When ResultStatus = 'Fialed' then 'Failed' Else ResultStatus End AS ResultStatus, Count(*)
FROM [DB_018].[dbo].[ProjectData]
GROUP BY Case When ResultStatus = 'Fialed' then 'Failed' Else ResultStatus End
```
What this is doing is replacing the incorrect spelling with the correct one while you group the data.
Note that it is possible, and arguably cleaner, to do this using a CTE:
```
with CleanedResults as (
select
case
when ResultStatus = 'Fialed' then 'Failed'
when ResultStatus = 'Pased' then 'Passed'
else ResultStatus
end as ResultStatus
from [DB_018].[dbo].[ProjectData]
) select
ResultStatus
, count(*) as NumResults
from CleanedResults
group by ResultStatus
```
|
You need to get a distinct list of ResultStatus values and add them all to the CASE statement below. I prefer this method to Raj's, as you don't need a CTE (not available in all versions of SQL Server) or an adjusted GROUP BY.
```
SELECT
ResultStatus,count(*) [Count]
FROM(
SELECT
        CASE
            WHEN ResultStatus = 'Fialed' THEN 'Failed'
            WHEN ResultStatus = 'Failed' THEN 'Failed'
            WHEN ResultStatus = 'Passed' THEN 'Passed'
        END [ResultStatus]
FROM [DB_018].[dbo].[ProjectData]
)a
GROUP BY ResultStatus
```
|
How do you count misspelled fields using a SQL query?
|
[
"",
"sql",
"sql-server",
""
] |
I have a table with the following structure:
TestTable (ID INT, LoanID INT, Amount INT)
```
ID LoanID Amount
1 1 20
2 1 30
3 1 15
4 2 40
5 2 20
6 3 50
```
I want to calculate a running loan amount, like this:
```
ID LoanID Amount Sum
1 1 20 20
2 1 30 50
3 1 15 65
4 2 40 40
5 2 20 60
6 3 50 50
```
What is the best way to get this Sum column (**of course without a cursor :)**)?
|
You can use a correlated sub-query to get the [running-totals](http://en.wikipedia.org/wiki/Running_total):
```
SELECT t.ID, t.LoanID, t.Amount,
[Sum] = (SELECT SUM(AMOUNT)
FROM TestTable t2
WHERE t.LoanID=t2.LoanID
AND t2.ID <= t.ID)
FROM TestTable t
```
`Demo`
|
Unfortunately SQL Server 2008 doesn't have a clean function to do a cumulative sum, so you need to add up all of the "previous" rows with the same `LoanID`:
```
SELECT
ID,
LoanID,
Amount,
(SELECT SUM(Amount) FROM TestTable t2 WHERE t2.LoanID = t.LoanID AND t2.ID <= t.ID) Sum
FROM TestTable t
```
|
Using Sum function
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I want to get rows where a column is anything other than a specific value, including NULL.
**Example**
```
ID|VID |IC |
1 |v001 |123 |
2 |v001 |null|
3 |v003 |456 |
4 |v004 |null|
```
So I want to get the IDs where IC is not equal to 123, but I only get ID 3; IDs 2 and 4 are not shown. I have tried `select * from table where IC<>'123' and IC is null` but it shows nothing.
|
You want `OR`, not `AND`:
```
select * from table where IC != 123 OR IC IS NULL
```
Any comparison with `NULL` produces the [third logic value](http://en.wikipedia.org/wiki/Three-valued_logic), `UNKNOWN`, and any negation of `UNKNOWN` produces `UNKNOWN`, so you have to treat the `NULL`s separately - but, of course, for any particular row, it cannot possibly be true that it's simultaneously got a value different from `123` but at the same time it's `NULL`. So `AND` is wrong.
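If you prefer a single predicate, an alternative some people use is to replace NULL with a sentinel value that can never match (a sketch; note that wrapping the column in a function can prevent an index from being used):

```sql
SELECT *
FROM table
WHERE ISNULL(IC, -1) <> 123;  -- -1 is an assumed sentinel that never occurs in IC
```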
|
```
SELECT * FROM table WHERE IC != '123' OR IC IS NULL
```
|
Sql select where clause operation <>
|
[
"",
"sql",
"sql-server",
""
] |
I have a dataset with multiple ids. For every id there are multiple entries. Like this:
```
--------------
| ID | Value |
--------------
| 1 | 3 |
| 1 | 4 |
| 1 | 2 |
| 2 | 1 |
| 2 | 2 |
| 3 | 3 |
| 3 | 5 |
--------------
```
Is there a SQL DELETE query that deletes all rows for every id except one (deleting random rows would be nice, but is not essential)? The resulting table should look like this:
```
--------------
| ID | Value |
--------------
| 1 | 2 |
| 2 | 1 |
| 3 | 5 |
--------------
```
Thanks!
|
I tried the given answers with HSQLDB, but it refused to execute those queries for different reasons (a join is not allowed in a delete query; the ignore statement is not allowed in an alter query). Thanks to Andrew, I came up with this solution, which is a little more roundabout but allows deleting random rows:
Add a new column for random values:
```
ALTER TABLE <table> ADD COLUMN rand INT
```
Fill this column with random data:
```
UPDATE <table> SET rand = RAND() * 1000000
```
Delete all rows which don't have the minimum random value for their id:
```
DELETE FROM <table> WHERE rand NOT IN (SELECT MIN(rand) FROM <table> GROUP BY id)
```
Drop the random column:
```
ALTER TABLE <table> DROP rand
```
For larger tables you should probably ensure that the random values are unique, but this worked perfectly for me.
|
Try this:
```
alter ignore table a add unique(id);
```
Here `a` is the table name
|
Delete rows except for one for every id
|
[
"",
"sql",
"hsqldb",
"delete-row",
""
] |
```
Declare @xml xml,
@y int
set @xml= '<ContactUpdates>
<Contact VendorID="4"><LastName>McCrystle</LastName>
<FirstName>Timothy</FirstName>
</Contact>
<Contact VendorID="10">
<LastName>Flynn</LastName>
<FirstName>Erin</FirstName>
</Contact></ContactUpdates>'
Exec sp_xml_preparedocument @y output, @xml;
Select * from openxml(@y,'/ContactUpdates/Contact')
With (VendorID Varchar(20),
LastName Varchar(30),
FirstName Varchar(30))
```

I do not know where has the mistake been done. Please help me out with this.
|
You have a mix of both attribute-centric and element centric-projections. The reason why `VendorId` is mapped, but not the two elements, is because attribute centric is the default. In a mixed / complex hierarchy scenario, as [per here](http://msdn.microsoft.com/en-us/library/ms186918.aspx), you will need to explicitly provide the `xpath` mappings:
```
Exec sp_xml_preparedocument @y output, @xml;
Select * from openxml(@y,'/ContactUpdates/Contact')
With (VendorID Varchar(20) '@VendorID', -- Attribute
LastName Varchar(30) 'LastName', -- Element
FirstName Varchar(30) 'FirstName'); -- Element
```
**Edit**
Something of interest to note is that the `flags` attribute is, well, a bitwise style [flag]. This means you can OR the options together. `1` is attribute centric, and `2` element centric, so `1 | 2 = 3` will give you both:
```
Exec sp_xml_preparedocument @y output, @xml;
Select * from openxml(@y,'/ContactUpdates/Contact', 3)
With (VendorID Varchar(20),
LastName Varchar(30),
FirstName Varchar(30));
-- Remember to release the handle with sp_xml_removedocument
```
I however do not believe this is good practice - it doesn't convey much to the developer, and it possibly has negative performance implications given that it is less specific than the exact xpath.
|
Use the proper, native XQuery support in SQL Server :
```
SELECT
VendorID = xc.value('@VendorID', 'int'),
FirstName = xc.value('(FirstName)[1]', 'varchar(50)'),
LastName = xc.value('(LastName)[1]', 'varchar(50)')
FROM
@xml.nodes('/ContactUpdates/Contact') AS XT(XC)
```
Easily gives you this nice output:

|
OpenXml returns null for elements inside XML
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"openxml",
""
] |
I am new to SQL and I am having trouble with date formats. I know I can use the CAST or CONVERT functions, but I was looking at DATEFORMAT (we use the format dd/mm/yy). It looks like quite an interesting and easy function, but there is a problem.
IF I run this dataview
```
SELECT hotel, id, codicecliente, prodotto, CAST(REPLACE(numeros, ',', '.') AS decimal(7, 2)) AS numero, CAST(REPLACE(importos, ',', '.') AS decimal(7, 2)) AS importo,
CAST(datas AS datetime) AS data, puntovendita, DATEDIFF(day, '01/01/2000', datas) AS datav, isdate(datas) AS Expr1
FROM dbo.addebititmp
WHERE (isdate(datas) = 1)
```
it shows me about 15,000 records. It is not showing all records whose dates have a day greater than 12, because the system recognizes the first two digits as the month, not the day.
if I add SET DATEFORMAT dmy
and run the following
```
SET DATEFORMAT dmy
SELECT hotel, id, codicecliente, prodotto, CAST(REPLACE(numeros, ',', '.') AS decimal(7, 2)) AS numero, CAST(REPLACE(importos, ',', '.') AS decimal(7, 2)) AS importo,
CAST(datas AS datetime) AS data, puntovendita, DATEDIFF(day, '01/01/2000', datas) AS datav, isdate(datas) AS Expr1
FROM dbo.addebititmp
WHERE (isdate(datas) = 1)
```
It correctly shows all 38,000 records, but when I save the view the system shows the error: incorrect syntax near the keyword 'set'. So I can run it but not use it. I also tried with Transact-SQL, but if I create a view with DATEFORMAT it does not save the DATEFORMAT and shows fewer records.
Any suggestion? Thank You
|
The error when creating the view with `SET DATEFORMAT dmy` is due to a view being an encapsulation of a single query, not multiple commands. If you need multiple commands, then you would have to use a Multistatement Table-Valued Function. But using a TVF is not necessary here.
Use [TRY\_CONVERT](http://msdn.microsoft.com/en-us/library/hh230993.aspx) as it will handle both the translation and the "ISDATE" behavior. It will either convert to a proper DATETIME or it will return NULL. In this sense, a non-NULL value equates to ISDATE returning 1 while a NULL value equates to ISDATE returning 0. Since your data is in the format of DD/MM/YYYY, that is the "style" number 103 (full list of Date and Time styles found on the [CAST and CONVERT](http://msdn.microsoft.com/en-us/library/ms187928.aspx) MSDN page).
```
SELECT TRY_CONVERT(DATETIME, tmp.DateDDMMYYYY, 103) AS [ConvertedDate],
tmp.ShouldItWork
FROM (
VALUES('23/05/2014', 'yes'),
('05/23/2014', 'no'),
('0a/4f/2014', 'no')
) tmp(DateDDMMYYYY, ShouldItWork);
```
Results:
```
ConvertedDate ShouldItWork
2014-05-23 00:00:00.000 yes
NULL no
NULL no
```
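`TRY_CONVERT` is specific to SQL Server 2012+, but the parse-or-NULL idea is portable. A rough equivalent in Python, assuming the same dd/mm/yyyy inputs:

```python
from datetime import datetime

def try_convert_dmy(text):
    """Parse a dd/mm/yyyy string, returning None on failure,
    much like TRY_CONVERT(DATETIME, text, 103) returns NULL."""
    try:
        return datetime.strptime(text, "%d/%m/%Y")
    except ValueError:
        return None

results = [try_convert_dmy(s)
           for s in ("23/05/2014", "05/23/2014", "0a/4f/2014")]
```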
|
It looks like you are probably Italian?? If so, you should change your default language to Italian on the server. This will also by default change your DATEFORMAT to accept European-style dates. I am assuming all your char-formatted dates will be stored in that format? This will show you how:
[How to change default language for SQL Server?](https://stackoverflow.com/questions/15705983/how-to-change-default-language-for-sql-server)
Also, regarding saving the dateformat setting in the view, you can't save it in a view for the same reason you can't save "set nocount on" in a view. But you can set the dateformat in a stored proc that references the view. But I really think in your case you should set the server-wide language, which will address the issue.
|
DATEFORMAT FUNCTION IN SQL
|
[
"",
"sql",
"sql-server",
""
] |
I have some dates that I would like to filter with SQL.
I want to be able to pass a flag to say keep all the FIRST Mondays of the months from X date to Y Date.
So essentially, I want to pass in a date and be able to tell if it's the first, second, third, fourth or last Monday (for example) of a given month.
I have already filtered down the months and days, and I am currently using `DATEPART(DAY, thedate)` to check: if the day is < 8 it is week 1, if < 15 it is week 2, etc., but this is not correct.
So the part I am stuck on is `Where IsDateFirstOfMonth(@date)`
Where would I start to write the function `IsDateFirstOfMonth`?
Any help much appreciated
|
you can do this
```
alter function IsDateFirstOfMonth(@date as datetime)
returns int
as
begin
declare @first datetime,
@last datetime,
@temp datetime,
@appearance int
declare @table table(Id int identity(1,1),
StartDate datetime,
EndDate datetime,
DayName nvarchar(20),
RowNumber int)
set @first=dateadd(day,1-day(@date),@date)
set @last=dateadd(month,1,@first)-1
set @temp=@first
while @temp<=@last
begin
insert into @table(StartDate,EndDate,DayName) values(@temp,@temp+6,datename(dw,@temp))
set @temp=@temp +1
end
select @appearance=Nb
from(
select StartDate,EndDate,DayName,row_number() over (partition by DayName order by StartDate) as Nb
from @table) t
where @date between t.StartDate and t.EndDate and datename(dw,@date)=t.DayName
if @last-@date<7
set @appearance=-1
return @appearance
end
select dbo.IsDateFirstOfMonth('31 Dec 2014')
select dbo.IsDateFirstOfMonth('03 Nov 2014') -- result 1 ( first) monday
select dbo.IsDateFirstOfMonth('10 Nov 2014') -- result 2 (second)
select dbo.IsDateFirstOfMonth('17 Nov 2014') -- result 3 (third)
select dbo.IsDateFirstOfMonth('24 Nov 2014') -- result -1 (last) .... here it will be the last monday
select dbo.IsDateFirstOfMonth('02 Nov 2014') -- result 1 ( first) sunday
```
hope this will help you
|
For this kind of problem it's usually much easier to implement a table with the required date information, join on that table, and filter using it. I.e. create a table with this info:
```
CREATE TABLE Dates(
    Date DATE PRIMARY KEY CLUSTERED,
    PositionInMonth TINYINT,
    LastInMonth BIT)
```
Then fill up this table using whichever method you want. I think you'll do it much more easily with a simple ad-hoc app, but you can also create it using a T-SQL script.
Then you simply need to join your table with this one, and use the PositionInMonth or LastInMonth columns for filtering. You can also use this as a lookup table to easily implement the required function.
By the way, don't forget that there are many months which have a fifth instance of a given day; for example, in December 2014 there are 5 Mondays, 5 Tuesdays, and 5 Wednesdays. The number of days with 5 instances in a given month is: number of days in the month - 28; for example, in December it's 31 - 28 = 3. So you can't count on the 4th being the last.
This table really takes up very little space: *roughly*, 3 bytes for the `DATE`, 1 byte for the `TINYINT`, and 1 byte for the `BIT`, so it's 3+1+1 = 5 bytes per day, 1,825 bytes per year, and 178 kb for a whole century. So, even if you needed several centuries to cover all your possible dates, it would still be a very small table. I say *roughly* because the index structure, the fill factor and some other things will make the table somewhat bigger. Being such a small table means that SQL Server can easily cache the whole table in memory when executing the queries, so they will run really fast.
*NOTE: you can expand this table to cover other needs like checking if a day is the last in the month, or the last or first working day in a month, by adding new `BIT` columns*
Very interesting link, from OP comment: [CALENDAR TABLES IN T-SQL](http://blog.aware.co.th/calendar-tables-in-t-sql/)
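Filling such a calendar table is indeed a small scripting job. A sketch of the per-day computation in Python (column semantics assumed from the answer: the nth occurrence of the day's weekday, and whether it is the last one in the month):

```python
from calendar import monthrange
from datetime import date

def month_positions(year, month):
    """Yield (day, position_in_month, last_in_month) for every day of
    the month, where position_in_month is the nth occurrence of that
    day's weekday and last_in_month flags the final occurrence."""
    days = monthrange(year, month)[1]
    for dom in range(1, days + 1):
        position = (dom - 1) // 7 + 1   # 1st, 2nd, ... 5th of weekday
        last = dom + 7 > days           # no same weekday later on
        yield date(year, month, dom), position, last

# December 2014: the 1st is the first Monday, the 29th the fifth and last.
dec = list(month_positions(2014, 12))
```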
|
SQL - Check if a date is the first occurrence of that day in its month
|
[
"",
"sql",
"sql-server",
"t-sql",
"datetime",
""
] |
I have two tables in an Oracle database:
The first table has a date range and I need help in writing a SQL query to find all the records from second table as in the result table below. The first four digits in the date is year and last two are session (10-Fall; 20-Spring; 30-Summer).
1) Table1
```
seqnum | min_date| max_date |c_id
1 | 201210 | 201210 | 100
1 | 201220 | 201330 | 150
1 | 201410 | 201410 | 200
```
2) Table2
```
seqnum | b_date
1 | 201210
1 | 201220
1 | 201230
1 | 201310
1 | 201320
1 | 201330
1 | 201410
1 | 201420
1 | 201430
```
3) Result table
```
seqnum | b_date | c_id
1 | 201210 | 100
1 | 201220 | 150
1 | 201230 | 150
1 | 201310 | 150
1 | 201320 | 150
1 | 201330 | 150
1 | 201410 | 200
1 | 201420 | 200
1 | 201430 | 200
```
If `Table1` have only the first record then all the dates in `Table2` must be associated with `c_id` 100 only.
|
To do this as simply as possible:
```
select t2.seqnum, t2.b_date, coalesce(t1.c_id, t3.max_id) as c_id
from table2 t2
left outer join table1 t1
on t2.b_date between t1.min_date and t1.max_date
cross join (select max(c_id) as max_id from table1) t3
order by t1.c_id, t2.b_date
```
[SQLFiddle here](http://sqlfiddle.com/#!4/2ef62/7)
Share and enjoy.
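The query can be verified against the sample data with SQLite, whose syntax for this statement happens to match (the ORDER BY is simplified to b_date for a deterministic result):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (seqnum INT, min_date INT, max_date INT, c_id INT);
    CREATE TABLE table2 (seqnum INT, b_date INT);
    INSERT INTO table1 VALUES (1, 201210, 201210, 100),
                              (1, 201220, 201330, 150),
                              (1, 201410, 201410, 200);
    INSERT INTO table2 VALUES (1, 201210), (1, 201220), (1, 201230),
                              (1, 201310), (1, 201320), (1, 201330),
                              (1, 201410), (1, 201420), (1, 201430);
""")

# Unmatched dates (201420, 201430) fall back to the highest c_id.
rows = conn.execute("""
    SELECT t2.b_date, COALESCE(t1.c_id, t3.max_id) AS c_id
    FROM table2 t2
    LEFT JOIN table1 t1 ON t2.b_date BETWEEN t1.min_date AND t1.max_date
    CROSS JOIN (SELECT MAX(c_id) AS max_id FROM table1) t3
    ORDER BY t2.b_date
""").fetchall()
```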
|
**Fiddle:** <http://sqlfiddle.com/#!4/45c72/10/0>
```
select t2.seqnum,
t2.b_date,
case when t2.b_date < min_rg then x.c_id
when t2.b_date > max_rg then y.c_id
else t1.c_id
end as c_id
from (select min(min_date) as min_rg, max(max_date) as max_rg from table1) z
join table1 x
on x.min_date = z.min_rg
join table1 y
on y.max_date = z.max_rg
cross join table2 t2
left join table1 t1
on t2.b_date between t1.min_date and t1.max_date
order by b_date
```
When B\_DATE on table2 is lower than the first MIN\_DATE on table1 it will show C\_ID from table1 of the lowest MIN\_DATE (100 in your case, right now).
When B\_DATE on table2 is higher than the last MAX\_DATE on table1 it will show C\_ID from table1 of the highest MAX\_DATE (200 in your case, right now).
|
Oracle Join tables with range of dates in first table and dates in second table
|
[
"",
"sql",
"database",
"oracle",
"join",
""
] |
I have two tables: `USERS` and `USER_TOKENS`
`USERS` is structured as follows:
```
id (INT)
name (VARCHAR)
pass (VARCHAR)
birthdate (DATETIME)
...
etc
```
`USER_TOKENS` is structured as follows:
```
user_id (INT)
key (VARCHAR)
value (VARCHAR)
```
Essentially `USERS` contains basic data, whereas `USER_TOKENS` is used to store completely arbitrary KEY/VALUE pairs for a given user. So for example there may be 3 records for the USER whose id is 137:
```
user_id:137; key:"height"; value:"610";
user_id:137; key:"food"; value:"candy";
user_id:137; key:"income"; value:"low";
```
Now, to the point:
How do I query the DB to get all the records from table `USER` where `USER.name = 'bob'`, but at the same time ALL the records from `USER_TOKENS` for each one of the selected users?
|
If you need really all matching users and their respective tokens as one resultset, you can try this:
```
SELECT u.id, ut.key, ut.value -- and list also other required fields
FROM Users u
LEFT JOIN User_tokens ut ON u.id = ut.user_id
WHERE u.name = 'bob'
```
Try not to use `SELECT *` because you get duplicate fields that way (you get both `users.id` and `user_tokens.user_id`, which are always equal). Using a LEFT JOIN you also get users that do not have any tokens.
But this query does not make much sense to me, because you already know the user, so why repeat the user's data in every single row. (It would only make sense if there were more users named 'bob'.)
You probably need something like this:
```
SELECT ut.key, ut.value
FROM Users u
INNER JOIN User_tokens ut ON u.id = ut.user_id
WHERE u.name = 'bob'
```
Or perhaps better:
```
SELECT ut.key, ut.value
FROM User_tokens ut
WHERE EXISTS (SELECT * FROM Users u WHERE u.name = 'bob' AND u.id = ut.user_id)
```
This lists all the tokens for all the users with name='bob'.
If there is only one `bob` then there is no need to include all the duplicate data from `users` table - you can get them eventually with a separate SELECT that would return one single row.
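The behavioural difference between the variants is worth seeing on data. A small SQLite sketch using the question's tables (an extra token-less user is added to show what LEFT JOIN preserves):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INT, name TEXT);
    CREATE TABLE user_tokens (user_id INT, key TEXT, value TEXT);
    INSERT INTO users VALUES (137, 'bob'), (138, 'alice');
    INSERT INTO user_tokens VALUES
        (137, 'height', '610'), (137, 'food', 'candy'), (137, 'income', 'low');
""")

# LEFT JOIN keeps alice (no tokens) with NULL token columns ...
left_rows = conn.execute("""
    SELECT u.name, ut.key FROM users u
    LEFT JOIN user_tokens ut ON u.id = ut.user_id
""").fetchall()

# ... while INNER JOIN drops her entirely.
inner_rows = conn.execute("""
    SELECT u.name, ut.key FROM users u
    INNER JOIN user_tokens ut ON u.id = ut.user_id
""").fetchall()
```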
|
```
SELECT u.*, t.* FROM users AS u, users_tokens AS t
WHERE u.name = 'bob' AND t.user_id = u.id;
```
|
SQL query: combining data from two tables
|
[
"",
"mysql",
"sql",
""
] |
I'm trying to pull two things from my database: entries where one attribute is TRUE and the entries where the attribute is FALSE. I then want to divide the first result by the second result to get a percentage of entries where the attribute is TRUE.
```
SELECT product, COUNT(entries) FROM myTable
WHERE has_bug = 1
AND date > "2014-07-01"
GROUP BY product
SELECT product, COUNT(entries) FROM myTable
WHERE has_bug = 0
AND date > "2014-07-01"
GROUP BY product
```
I get the results fine, and I can do the division separately, but is it possible to divide the results of these two SELECT statements in this one query?
EDIT:
This did the trick:
```
SELECT product, SUM(has_bug = 1) / SUM(has_bug = 0)
FROM myTable
WHERE date > "2014-07-01"
GROUP BY product
```
|
You can (ab)use MySQL's automatic type-conversion logic:
```
SELECT product, (SUM(has_bug = 1) / SUM(has_bug = 0)) AS ratio
FROM myTable
WHERE date > '2014-07-01'
GROUP BY product
```
The boolean true/false of the `has_bug = X` comparisons will get converted to integer `0` or `1` by MySQL and summed up, essentially reproducing your `COUNT()`s, but in a single query.
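SQLite treats comparison results as 0/1 integers too, so the trick can be checked quickly from Python (column names follow the question's EDIT; `1.0 *` avoids integer division):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (product TEXT, has_bug INT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [("a", 1), ("a", 0), ("a", 0), ("b", 1), ("b", 0)])

# Each comparison yields 0 or 1, so SUM() counts the matching rows.
rows = conn.execute("""
    SELECT product, 1.0 * SUM(has_bug = 1) / SUM(has_bug = 0) AS ratio
    FROM t GROUP BY product ORDER BY product
""").fetchall()
```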
|
Use conditional aggregation. Actually, the following will do what you want, assuming `has_bug` only takes on the values of `0` and `1`:
```
select product, avg(has_bug)
from mytable
where date > '2014-07-01'
group by product;
```
A more explicit match to your query is:
```
select product, sum(has_bug = 1) / count(*)
from mytable
where date > '2014-07-01'
group by product;
```
|
SQL: Selecting two different things and doing calculations on the results in one query
|
[
"",
"mysql",
"sql",
""
] |
Let's suppose I created this table:
```
CREATE TABLE T (
NAME VARCHAR(10),
A INT NOT NULL,
B INT NOT NULL
)
```
I want to create a view that has two attributes, T.Name and a char that is `'Y' if A >= B or 'N' otherwise`.
How can I build this second attribute? Thank you
|
Just use a `case` statement:
```
create view v as
select t.name, (case when t.a >= t.b then 'Y' else 'N' end) as attribute
from table t;
```
|
```
CREATE VIEW YourViewName
AS
SELECT name , (CASE WHEN A >= B THEN 'Y' ELSE 'N' END) as your_name
FROM T
```
|
SQL - Insert conditioned values in a View
|
[
"",
"sql",
"create-view",
""
] |
Suppose I fetch a set of rows from several tables. I want to know the total sum of values in column `x` in these rows, as well as sum of only those values in `x` where the row satisfies some additional condition.
For example, let's say I fetched:
```
X Y
1 0
10 0
20 100
35 100
```
I want to have a sum of all `x` (66) and `x` in those rows where `x` > `y` (11). So, I'd need something like:
```
SELECT sum(x) all_x, sum(x /* but only where x > y */) some_x FROM ...
```
Is there a way to formulate that in SQL? (Note that the condition is not a separate column in some table, I cannot group over it, or at least don't know how to do that.)
**EDIT**: I use Oracle Database, so depending on Oracle extensions is OK.
|
You could use a `case` expression inside the `sum`:
```
SELECT SUM (x) all_x,
SUM (CASE WHEN x > y THEN x ELSE 0 END) some_x
FROM my_table
```
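Run against the sample data from the question, the expression returns 66 and 11 as expected; a quick SQLite check:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INT, y INT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(1, 0), (10, 0), (20, 100), (35, 100)])

# CASE turns non-matching rows into 0, so they drop out of the sum.
all_x, some_x = conn.execute("""
    SELECT SUM(x),
           SUM(CASE WHEN x > y THEN x ELSE 0 END)
    FROM t
""").fetchone()
```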
|
You're looking for the `CASE` operator :
```
SELECT sum(X) all_x,
sum(CASE WHEN X>Y THEN X ELSE 0 END) some_x
FROM Table1
```
In this case (no pun) you would get 11 for `some_x`
You can use whatever condition you want instead of `X>Y` after the `WHEN`, and select whatever value instead of `X`.
[SQL fiddle](http://sqlfiddle.com/#!2/3b2ae/1) to test this query.
|
SQL: Computing sum of all values *and* a sum of only values matching condition
|
[
"",
"sql",
"oracle",
"select",
"sum",
"aggregate-functions",
""
] |
Consider this query:
```
select
count(p.id),
count(s.id),
sum(s.price)
from
(select * from orders where <condition>) as s,
(select * from products where <condition>) as p
where
s.id = p.order;
```
There are, for example, 200 records in products and 100 in orders (one order can contain one or more products).
I need to join then and then:
1. count products (should return 200)
2. count orders (should return 100)
3. sum by one of orders field (should return sum by 100 prices)
The problem is that after the join, **p** and **s** have the same length. For **2)** I can write *count(distinct s.id)*, but for **3)** I get duplicates (for example, if a sale has 2 products, its price is summed twice), so the sum works on the entire 200-record set but should only use the 100.
Any thoughts on how to *sum* only distinct records from the joined table without ruining the other selects?
Example, joined table has
```
id sale price
0 0 4
0 0 4
1 1 3
2 2 4
2 2 4
2 2 4
```
So the **sum(s.price)** will return:
```
4+4+3+4+4+4=23
```
but I need:
```
4+3+4=11
```
|
If the `products` table is really more of an "order lines" table, then the query would make sense. You can do what you want by in several ways. Here I'm going to suggest conditional aggregation:
```
select count(distinct p.id), count(distinct s.id),
sum(case when seqnum = 1 then s.price end)
from (select o.* from orders o where <condition>) s join
(select p.*, row_number() over (partition by p.order order by p.order) as seqnum
from products p
where <condition>
) p
on s.id = p.order;
```
Normally, a table called "products" would have one row per product, with things like a description and name. A table called something like "OrderLines" or "OrderProducts" or "OrderDetails" would have the products within a given order.
|
You are not interested in single product records, but only in their number. So join the aggregate (one record per order) instead of the single rows:
```
select
count(*) as count_orders,
sum(p.cnt) as count_products,
sum(s.price)
from orders as s
join
(
select order, count(*) as cnt
from products
where <condition>
group by order
) as p on p.order = s.id
where <condition>;
```
|
Aggregate after join without duplicates
|
[
"",
"sql",
"postgresql",
"select",
"join",
""
] |
I am having what seems to be a rather simplistic issue, but it's hampering what I need to do.
Essentially, I want to present all records (including NULLs) when I evaluate my `CASE` statement in my SQL. Right now it's filtering out the NULL values.
Table
```
fname | lname
------+------
steve | smith
NULL  | jones
```
Query:
```
SELECT
fname, lname
FROM
users
WHERE
    fname = (CASE WHEN @param = 'All' THEN fname ELSE @param END)
```
When I do this, it pulls Steve Smith fine, but it doesn't pull Jones. And I actually want Jones to show up as its part of a larger recordset.
The result set I am looking for is:
```
STEVE SMITH
JONES
```
I am doing this in an SSRS 2005 report, and even when just opening the report with @param = 'All' by default, it's not presenting the records that have NULLs in the particular column that I am comparing my parameter against.
Thanks in advance.
Just to add to this based on the responses.
I am evaluating @param coming into the SQL. so when its 'All' I make the criteria for fname = fname. Which is supposed to cancel it out and return everything (like there was no criteria), its only if the @param has a name in it that I am using it as a criteria
```
WHERE fname = (CASE WHEN fname = 'All' THEN fname ELSE @PARAM END)
```
What I am trying to get to is something like:
```
WHERE fname = (CASE WHEN fname = 'All' THEN (fname OR NULL) ELSE @PARAM END)
```
|
OK I could not find a SQL solution to this but in SSRS I found a way of getting around it, since this is a report that goes through SSRS.
On the front end table tablix I applied a filter with conditions.
```
Expression: =iif(Parameters!RPfname.Value="All", Parameters!RPfname.Value, Fields!fName.Value)
Operator:   =
Value:      =Parameters!RPfname.Value
```
This took the evaluation of the fname out of the database side and put it on the front end. The records are filtered when the parameter is selected against the value presented in the parameter.
If its set to ALL it just presents the field without any filtering or evaluations.
Thanks for your efforts all.
|
Use an `OR` as nothing is equal to `NULL`
```
WHERE
@param = 'All'
OR
fname = @param
```
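The short-circuiting filter is straightforward to confirm with SQLite and parameter binding (the parameter plumbing here is illustrative, not SSRS-specific):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (fname TEXT, lname TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("steve", "smith"), (None, "jones")])

def find(param):
    # NULL rows survive the 'All' branch because the filter
    # short-circuits before comparing fname at all.
    return conn.execute(
        "SELECT lname FROM users WHERE ? = 'All' OR fname = ?",
        (param, param),
    ).fetchall()

everyone = find("All")       # both rows, including the NULL fname
just_steve = find("steve")
```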
|
SQL Server / SSRS 2005 CASE evaluation of Parameter and NULL
|
[
"",
"sql",
"sql-server",
"null",
"case",
"reportingservices-2005",
""
] |
I have a table with transaction history for 3 years, I need to compare the sum ( transaction) for 12 months with sum( transaction) for 4 weeks and display the customer list with the result set.
```
Table Transaction_History
Customer_List Transaction Date
1 200 01/01/2014
2 200 01/01/2014
1 100 10/24/2014
1 100 11/01/2014
2 200 11/01/2014
```
The output should contain only Customer_List 1, because its sum of transactions over 12 months equals its sum of transactions over the last 4 weeks.
I am confused about how to find the sum for 12 months and then compare with same table sum for 4 weeks.
|
The query below will work, except your sample data doesn't make sense:
total for customer 1 for the last 12 months in your data set = 400
total for customer 1 for the last 4 weeks in your data set = 200
unless you want to exclude the last 4 weeks, and not be a part of the last 12 months?
then you would change the "having clause" to:
```
having
sum(case when Dt >= '01/01/2014' and dt <='12/31/2014' then (trans) end) - sum(case when Dt >= '10/01/2014' and dt <= '11/02/2014' then (trans) end) =
sum(case when Dt >= '10/01/2014' and dt <= '11/02/2014' then (trans) end)
```
of course doing this would mean your results would be customer 1 and 2
```
create table #trans_hist
(Customer_List int, Trans int, Dt Date)
insert into #trans_hist (Customer_List, Trans , Dt ) values
(1, 200, '01/01/2014'),
(2, 200, '01/01/2014'),
(1, 100, '10/24/2014'),
(1, 100, '11/01/2014'),
(2, 200, '11/01/2014')
select
Customer_List
from #trans_hist
group by
Customer_List
having
sum(case when Dt >= '01/01/2014' and dt <='12/31/2014' then (trans) end) =
sum(case when Dt >= '10/01/2014' and dt <= '11/02/2014' then (trans) end)
drop table #trans_hist
```
|
I suggest a self join.
```
select yourfields
from yourtable twelvemonths join yourtable fourweeks on something
where fourweek.something is within a four week period
and twelvemonths.something is within a 12 month period
```
You should be able to work out the details.
|
SQL Comparison Query Error
|
[
"",
"sql",
"sql-server",
""
] |
I am only a beginner in SQL, and I have a problem that I cannot solve.
The problem is the following:
I have four tables:
```
Student: matrnr, name, semester, start_date
Listening: matrnr<Student>, vorlnr<Subject>
Subject: vorlnr, title, sws, teacher<Professor>
Professor: persnr, name, rank, room
```
I need to list all the students that are listening to the Subject of some professor with a given name.
EDIT:
```
select s.*
from Student s, Listening h
where s.matrnr=h.matrnr
and h.vorlnr in (select v.vorlnr from Subject v, Professor p
where v.gelesenvon=p.persnr and p.name='Kant');
```
This is how I solved it, but I am not sure whether it is the optimal solution.
|
Your approach is good. Only, you want to show students, but join students with listings thus getting student-listing combinations.
Moreover you use a join syntax that is out-dated. It was replaced more than twenty years ago with explicit joins (INNER JOIN, CROSS JOIN, etc.)
You can do it with subqueries only:
```
select *
from Students,
where matrnr in
(
select matrnr
from Listening
where vorlnr in
(
select vorlnr
from Subject
where gelesenvon in
(
select persnr
from Professor
where name='Kant'
)
)
);
```
Or join the other tables:
```
select *
from Students
where matrnr in
(
select l.matrnr
from Listening l
inner join Subject s on s.vorlnr = l.vorlnr
inner join Professor p on p.persnr = s.gelesenvon and p.name='Kant'
);
```
Or with EXISTS:
```
select *
from Students s
where exists
(
select *
from Listening l
inner join Subject su on su.vorlnr = l.vorlnr
inner join Professor p on p.persnr = su.gelesenvon and p.name='Kant'
where l.matrnr = s.matrnr
);
```
Some people like to join everything and then clean up at the end using DISTINCT. This is easy to write, especially as you don't have to think your query through at first. But for the same reason it can get complicated when more tables and more logic are involved (like aggregations), and it can become quite hard to read, too.
```
select distinct s.*
from Students s
inner join Listening l on l.matrnr = s.matrnr
inner join Subject su on su.vorlnr = l.vorlnr
inner join Professor p on p.persnr = su.gelesenvon and p.name='Kant';
```
In the end, it is a matter of taste.
|
When you have an SQL problem, a good way of presenting the problem is to show us the tables as `CREATE TABLE` statements. Such statements show details such as the types of the columns and which columns are primary keys. Additionally this allows us to actually build a little database in order to reproduce a faulty behavior or just to test our solutions.
```
CREATE TABLE Student
(
matrnr NUMBER(9) PRIMARY KEY,
name NVARCHAR2(50),
semester NUMBER(2),
start_date DATE
);
CREATE TABLE Listening
(
matrnr NUMBER(9), -- Student
vorlnr NUMBER(9), -- Subject
CONSTRAINT PK_Listening PRIMARY KEY (matrnr, vorlnr)
);
CREATE TABLE Subject
(
vorlnr NUMBER(9) PRIMARY KEY,
title NVARCHAR2(50),
sws NVARCHAR2(50),
teacher NUMBER(9) -- Professor
);
CREATE TABLE Professor
(
persnr NUMBER(9) PRIMARY KEY,
name NVARCHAR2(50),
rank NUMBER(3),
room NVARCHAR2(50)
);
```
Using this schema, my solution would look like this:
```
SELECT *
FROM
Student
WHERE
matrnr IN (
SELECT L.matrnr
FROM
Listening L
INNER JOIN Subject S
ON L.vorlnr = S.vorlnr
INNER JOIN Professor P
ON S.teacher = P.persnr
WHERE P.name = 'Kant'
);
```
You can find it here: <http://sqlfiddle.com/#!4/5179dc/2>
Since I didn't insert any records, the only thing it is testing is the syntax and the correct use of table and column names.
Your solution is suboptimal. It does not differentiate between joining of tables and additional conditions specified as where-clause. It can produce several result records per student if they attend several courses of the professor. Therefore my solution puts all the other tables into the sub-select.
|
Connecting 4 tables
|
[
"",
"sql",
"database",
"oracle",
"oracle-sqldeveloper",
""
] |
I imported a txt file into my table column B which is of datatype varchar
the data is like 10.00 GB, 20 TB , 100 MB etc
```
column a column b
host A 100 TB
host B 20 GB
host C 100 MB
```
I did try `CONVERT(int, column_name)`, which returned an error advising that it cannot convert data type varchar to int.
I can replace the 'GB' with blanks, but I want anything in TB or MB to be converted to GB. I don't want the TB, GB or MB units to be displayed in column B, just the numbers.
It may be good if I can store these values in a separate column with datatype int and then delete my original column in the same table.
Please could someone help
|
You can split the column using:
```
select t.a, t.b,
cast(left(b, charindex(' ', b)) as int) as num,
right(b, 2) as units
from t;
```
This assumes that the values are all in exactly this format.
You can put it in one column by using multiplication:
```
select t.a, t.b,
(cast(left(b, charindex(' ', b)) as float) *
(case right(b, 2) when 'GB' then 1.0
when 'TB' then 1024.0
when 'MB' then 1.0/1024
else 1.0
end)) as inGB
from t;
```
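The multiply-by-unit idea carries over directly to any client language; here is an equivalent sketch in Python (it assumes the "number space unit" layout shown in the question):

```python
def to_gb(size_text):
    """Convert '100 TB' / '20 GB' / '100 MB' style strings to GB."""
    number, unit = size_text.split()
    factor = {"MB": 1.0 / 1024, "GB": 1.0, "TB": 1024.0}[unit]
    return float(number) * factor

sizes = [to_gb(s) for s in ("100 TB", "20 GB", "100 MB")]
```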
EDIT:
You might have a problem if all the values are not in the format you expect. You can you a case statement, something like:
```
case when b like '[0-9] %' or b like '[0-9][0-9] %' or b like '[0-9][0-9][0-9] %' then . . .
```
to verify that a number is actually at the beginning of the string.
|
I know that 1000 is not the same as 1024, but depending on your exact needs, this is a pretty simple method:
`select cast(ROUND(replace(replace(replace(columnB, ' TB', '000000000'), ' GB', '000000'), ' MB', '000') ,0,1) as int) from t`
|
SQL update column from varchar data type to int
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2008-r2",
"sql-server-2008-express",
""
] |
I want to retain the last 25 entered records and delete the remaining records according to id.
```
DELETE * FROM list
WHERE id NOT IN
(
SELECT *
FROM (
SELECT *
FROM 'list'
ORDER BY id DESC LIMIT 25
) as rows
)
```
|
```
DELETE FROM list
WHERE id NOT IN ( SELECT id
                  FROM ( SELECT id
                         FROM list
                         ORDER BY id DESC
                         LIMIT 25 ) AS keep_rows )
```
|
Deleting while selecting from the same table isn't permitted in MySQL.
You can try something like this:
```
SELECT @rows_to_delete:=COUNT(*)-25 FROM list;
DELETE FROM list ORDER BY id ASC LIMIT 0, @rows_to_delete;
```
**NB: this is not tested please test before running it on real data.**
|
select last 25 records from SQL table
|
[
"",
"mysql",
"sql",
""
] |
What I want is to rearrange the numerical data in a column, for example BookID. If I have a column with data like this:
```
BookID ; BookTitle
1 name1
4 name2
11 name3
```
How can I rearrange to look like this:
```
BookID ; BookTitle
0 name1
1 name2
2 name3
```
|
Is this what you want?
```
select row_number() over (order by BookId) - 1 as BookId, BookTitle
from books b;
```
If you want to change the ids in the data, you *can* do that. But, it is not recommended. The primary key on a row does not need to have any meaning. It gets used for foreign key references in other tables, for instance, and if you change the value in the original table, you need to change it there as well.
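The same window-function rewrite works in SQLite 3.25+ and can be checked from Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (BookID INT, BookTitle TEXT)")
conn.executemany("INSERT INTO books VALUES (?, ?)",
                 [(1, "name1"), (4, "name2"), (11, "name3")])

# Renumber on the fly instead of rewriting the stored keys.
rows = conn.execute("""
    SELECT ROW_NUMBER() OVER (ORDER BY BookID) - 1 AS BookID, BookTitle
    FROM books
""").fetchall()
```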
|
To update BookID in sequential order. Use the below update query.
```
UPDATE A SET A.BookID = B.NewBookID
FROM Books A
INNER JOIN (
SELECT BookID, NewBookID = ROW_NUMBER() OVER (ORDER BY BookTitle) - 1
FROM Books
) AS B
ON A.BookID = B.BookID
```
|
How can I rearrange the numerical data of some columns
|
[
"",
"sql",
"sql-server",
""
] |
Suppose I have three tables
```
Student Student_Interest Interest
======= ================ ========
Id Student_Id Id
Name Interest_Id Name
```
Where Student\_Interest.Student\_Id refers to Student.Id
and Student\_Interest.Interest\_Id refers to Interest.Id
Let's say we have three kinds of interest viz. "Java", "C", "C++" and "C#" and there are some entries in the student table and their respective interest mapping entries in the Student\_Interest table. (A typical many-to-many relationship)
How can we get the list of students that have both "Java" and "C" as their interests?
|
> How can we get the list of students that have both "Java" and "C" as
> their interests?
We can write t(t.c,...) to say that row (t.c,...) is in table t. Let's alias Student to s, Student\_Interest to sij and sic and Interest to ij and ic. We want rows (s.Id,s.Name) where
```
s(s.Id,s.Name)
AND sij(sij.Student_Id,sij.Interest_Id) AND s.Id = sij.Student_Id
AND sic(sic.Student_Id,sic.Interest_Id) AND s.Id = sic.Student_Id
AND ij(ij.Id,ij.Name) AND ij.Id=sij.Interest_Id AND ij.Name = 'Java'
AND ic(ic.Id,ic.Name) AND ic.Id=sic.Interest_Id AND ic.Name = 'C'
```
So:
```
select s.Id,s.Name
from Student s
join Student_Interest sij on s.Id = sij.Student_Id
join Student_Interest sic on s.Id = sic.Student_Id
join Interest ij on ij.Id=sij.Interest_Id AND ij.Name = 'Java'
join Interest ic on ic.Id=sic.Interest_Id AND ic.Name = 'C'
```
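If it helps, here is a small self-contained sketch of the same double-join pattern, run against SQLite via Python with made-up rows (Alice has both interests, Bob only Java):

```python
import sqlite3

# Toy data assumed from the question's schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Student (Id INTEGER, Name TEXT);
CREATE TABLE Interest (Id INTEGER, Name TEXT);
CREATE TABLE Student_Interest (Student_Id INTEGER, Interest_Id INTEGER);
INSERT INTO Student VALUES (1, 'Alice'), (2, 'Bob');
INSERT INTO Interest VALUES (1, 'Java'), (2, 'C'), (3, 'C++');
-- Alice likes Java and C; Bob likes only Java.
INSERT INTO Student_Interest VALUES (1, 1), (1, 2), (2, 1);
""")

rows = conn.execute("""
SELECT s.Id, s.Name
FROM Student s
JOIN Student_Interest sij ON s.Id = sij.Student_Id
JOIN Student_Interest sic ON s.Id = sic.Student_Id
JOIN Interest ij ON ij.Id = sij.Interest_Id AND ij.Name = 'Java'
JOIN Interest ic ON ic.Id = sic.Interest_Id AND ic.Name = 'C'
""").fetchall()
print(rows)  # [(1, 'Alice')]
```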
|
Simply get the Java and C records from student\_interest, group by student and see if you get the complete number of interests for a student. With such students found you can display data from the student table.
```
select *
from student
where id in
(
select student_id
from student_interest
where interest_id in (select id from interest where name in ('Java', 'C'))
group by student_id
having count(distinct interest_id) = 2
);
```
EDIT: You've asked me to show a query with EXISTS. The straight-forward way would be:
```
select *
from student
where exists
(
select *
from student_interest
where student_id = student.id
and interest_id = (select id from interest where name = 'Java')
)
and exists
(
select *
from student_interest
where student_id = student.id
and interest_id = (select id from interest where name = 'C')
);
```
For every interest an additional EXISTS clause. If, however, you want to convert the IN query above to an EXISTS query, so to have only one EXISTS clause, you get:
```
select *
from student
where exists
(
select student_id
from student_interest
where student_id = student.id
and interest_id in (select id from interest where name in ('Java', 'C'))
group by student_id
having count(distinct interest_id) = 2
);
```
I find the IN clause more readable, but that's a matter of taste, I guess.
|
Matching one value in one column with more than one values in other column in SQL
|
[
"",
"mysql",
"sql",
""
] |
If I have string like
```
select field1 + N' ' + field2 + N' ' + field3 from table1
```
how to make so that I could change or replace the characters starting from third place to fifth?
I mean if select statement above returns a string `ABCD EFGH IJKL`, then I want to replace `CD(space)` with `*`
Thanks.
|
Use the stuff function:
```
SELECT STUFF('ABCD EFGH IJKL', 3, 3, '*');
select STUFF(field1 + N' ' + field2 + N' ' + field3, 3, 3, '*') from table1
```
Syntax:
```
STUFF ( character_expression , start , length , replaceWith_expression )
```
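For readers outside SQL Server, the semantics of `STUFF` are easy to emulate; here is a small Python sketch (the function name and 1-based indexing mirror the T-SQL behavior, but this is an illustration, not the server's implementation):

```python
def stuff(s: str, start: int, length: int, replacement: str) -> str:
    # Mirrors T-SQL STUFF: 1-based start, delete `length` chars, insert replacement.
    return s[:start - 1] + replacement + s[start - 1 + length:]

print(stuff("ABCD EFGH IJKL", 3, 3, "*"))  # AB*EFGH IJKL
```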
|
I feel the STUFF function is the best way to go about it.
Below is a small example of how I would approach the same:
```
CREATE TABLE TESTDATA
(
DATA NVARCHAR(MAX),
DATA1 NVARCHAR(MAX),
DATA2 NVARCHAR(MAX)
)
INSERT INTO TESTDATA VALUES('ABCD','HGHG','HGFD')
INSERT INTO TESTDATA VALUES('HHGG','ADFS', 'ERET')
INSERT INTO TESTDATA VALUES('JKJK','GGHG', 'TRED')
DECLARE @DATA TABLE(DATA VARCHAR(100))
INSERT INTO @DATA select STUFF(DATA + N' ',3,3,'*') + DATA1 + N' ' + DATA2 from TESTDATA
SELECT * FROM @DATA
```
**The result would be:**
```
AB*HGHG HGFD
HH*ADFS ERET
JK*GGHG TRED
```
I hope the above was useful
|
Replace the string characters in SQL Server
|
[
"",
"sql",
"sql-server",
"ssms",
""
] |
Between table1 and table2 there are an 1 N relation. "code" column of table2 is foreign key of table1.
I want to delete all rows in table1 who don't have any row related in table2. I try
```
delete * from table1 r
inner join table2 a
where (r.code!=a.code)
```
but this deletes all rows in both tables...
|
Maybe this is what you want?
```
delete from table1
where code not in (
select code from table2
)
```
before you run the delete you might want to verify that the correct rows will be deleted using a select query:
```
select * from table1
where code not in (
select code from table2
)
```
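Here is a minimal runnable sketch of this pattern using SQLite via Python with toy data. One caveat worth knowing: if `table2.code` can contain NULLs, `NOT IN` will match nothing, so a `NOT EXISTS` form is safer in that case.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (code INTEGER);
CREATE TABLE table2 (code INTEGER);
INSERT INTO table1 VALUES (1), (2), (3);
INSERT INTO table2 VALUES (1), (3);   -- code 2 has no related row
""")

# Delete every table1 row without a related table2 row.
conn.execute("DELETE FROM table1 WHERE code NOT IN (SELECT code FROM table2)")
remaining = [r[0] for r in conn.execute("SELECT code FROM table1 ORDER BY code")]
print(remaining)  # [1, 3]
```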
|
You may try this:
```
delete from table1 r
where not exists (select 1 from table2 a where r.code = a.code);
```
|
Delete rows from two tables related
|
[
"",
"sql",
"delete-row",
""
] |
I want to display the `subtraction` of two values from `two` different rows using a `SQL` query.
This is the table structure:
```
------------------------------------
id | name | sub1 | sub2 | date
------------------------------------
1 | ABC | 50 | 75 | 2014-11-07
2 | PQR | 60 | 80 | 2014-11-08
```
I want to subtract date 2014-11-08 subject marks from date 2014-11-07.
Output should be like as
```
| sub1 | sub2 |
---------------
| 10 | 5 |
```
|
You can use a join to get the rows and then subtract the values:
```
SELECT (t2.sub1 - t1.sub1) AS sub1, (t2.sub2 - t1.sub2) AS sub2
FROM table t1 CROSS JOIN
     table t2
WHERE t1.date = '2014-11-07' AND t2.date = '2014-11-08';
```
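A quick sketch of this self-join, using SQLite through Python with the question's sample rows (the table name `marks` is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE marks (id INTEGER, name TEXT, sub1 INTEGER, sub2 INTEGER, date TEXT);
INSERT INTO marks VALUES (1, 'ABC', 50, 75, '2014-11-07'),
                         (2, 'PQR', 60, 80, '2014-11-08');
""")

# t1 is the earlier row, t2 the later one; subtract column-wise.
row = conn.execute("""
SELECT t2.sub1 - t1.sub1 AS sub1, t2.sub2 - t1.sub2 AS sub2
FROM marks t1 CROSS JOIN marks t2
WHERE t1.date = '2014-11-07' AND t2.date = '2014-11-08'
""").fetchone()
print(row)  # (10, 5)
```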
|
I feel like you're leaving out an important part of your actual needs where you'll probably want to group by some specific field and return corresponding values, so the answer will be kind of limited. You can double reference the table like the example above, but it's usually much better if you can somehow only reference the table only once and remove the need for index lookups, bookmark lookups, etc. You can usually use simple aggregates or windowed aggregates to accomplish this.
```
SELECT
MAX(sub1) - MIN(sub1) AS sub1,
MAX(sub2) - MIN(sub2) AS sub2
FROM
dbo.someTable;
```
<http://sqlfiddle.com/#!6/75ccc/2>
|
How can I subtract two row's values within same column using sql query?
|
[
"",
"sql",
""
] |
I have a table that has only one record as below
```
Date1 Date2 Amount CountDays
2014-01-01 2014-01-4 1000 4
```
field "CountDays" will be calculatd by a trigger. i want to write a query that returns below result:
```
Date Amount
2014-01-01 250
2014-01-02 250
2014-01-03 250
2014-01-04 250
```
It is really necessary for me, please help me.
|
This should work:
```
WITH Nums AS(
SELECT DISTINCT Value = number
FROM master..[spt_values]
)
SELECT Date = DATEADD(d, n.Value - 1, t.Date1),
       Amount = t.Amount * 1.0 / t.CountDays
FROM Nums n CROSS JOIN TableName t
WHERE n.Value BETWEEN 1 AND t.CountDays
```
[**Demo**](http://sqlfiddle.com/#!6/9339a/8/0)
Note that this approach works only up to 2,164. Another approach is to use a numbers table. Read:
<http://sqlperformance.com/2013/01/t-sql-queries/generate-a-set-1>
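The same idea — one output row per day between the two dates — can also be sketched portably with a recursive CTE; here is a toy SQLite version run through Python (table and column names follow the question, the CTE shape is an assumption):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE budget (Date1 TEXT, Date2 TEXT, Amount REAL, CountDays INTEGER)")
conn.execute("INSERT INTO budget VALUES ('2014-01-01', '2014-01-04', 1000, 4)")

# Recursively step one day at a time from Date1 up to Date2,
# carrying the per-day amount along.
rows = conn.execute("""
WITH RECURSIVE days(d, amount, last) AS (
    SELECT Date1, Amount / CountDays, Date2 FROM budget
    UNION ALL
    SELECT date(d, '+1 day'), amount, last FROM days WHERE d < last
)
SELECT d, amount FROM days ORDER BY d
""").fetchall()
print(rows)
# [('2014-01-01', 250.0), ('2014-01-02', 250.0), ('2014-01-03', 250.0), ('2014-01-04', 250.0)]
```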
|
Creating test data:
```
create table yourtable (Id int, Date1 datetime, Date2 datetime, Amount int, CountDays decimal);
insert into yourtable values (1, '2014-01-01','2014-01-04', 1000, 4);
insert into yourtable values (2, '2014-01-13','2014-01-19', 1200, 6);
```
And the sql query (common table expression with recursion):
```
WITH cte (id, date1, date2, amount) AS (
SELECT id, date1, date2, (Amount / CountDays) as amount
FROM yourtable
UNION ALL
select id, DATEADD(day, 1, date1) date1, date2, amount
from cte
where date1 < date2
)
```
Finally, to retrieve the data (note that the CTE and this SELECT must run together as a single statement):
```
select * from cte
order by id
```
Output:
```
ID DATE1 DATE2 AMOUNT
1 January, 01 2014 00:00:00+0000 January, 04 2014 00:00:00+0000 250
1 January, 02 2014 00:00:00+0000 January, 04 2014 00:00:00+0000 250
1 January, 03 2014 00:00:00+0000 January, 04 2014 00:00:00+0000 250
1 January, 04 2014 00:00:00+0000 January, 04 2014 00:00:00+0000 250
2 January, 13 2014 00:00:00+0000 January, 19 2014 00:00:00+0000 200
2 January, 14 2014 00:00:00+0000 January, 19 2014 00:00:00+0000 200
2 January, 15 2014 00:00:00+0000 January, 19 2014 00:00:00+0000 200
2 January, 16 2014 00:00:00+0000 January, 19 2014 00:00:00+0000 200
2 January, 17 2014 00:00:00+0000 January, 19 2014 00:00:00+0000 200
2 January, 18 2014 00:00:00+0000 January, 19 2014 00:00:00+0000 200
2 January, 19 2014 00:00:00+0000 January, 19 2014 00:00:00+0000 200
```
[Fiddle Demo](http://sqlfiddle.com/#!3/ec624/4)
|
Select All dates between two dates
|
[
"",
"sql",
"sql-server-2008",
""
] |
I am trying to select some data from different tables using join.
First, here is my SQL (MS) query:
```
SELECT Polls.pollID,
Members.membername,
Polls.polltitle, (SELECT COUNT(*) FROM PollChoices WHERE pollID=Polls.pollID) AS 'choices',
(SELECT COUNT(*) FROM PollVotes WHERE PollVotes.pollChoiceID = PollChoices.pollChoicesID) AS 'votes'
FROM Polls
INNER JOIN Members
ON Polls.memberID = Members.memberID
INNER JOIN PollChoices
ON PollChoices.pollID = Polls.pollID;
```
And the tables involved in this query is here:

The query returns this result:
```
pollID | membername | polltitle | choices | votes
---------+------------+-----------+---------+-------
10000036 | TestName | Test Title| 2 | 0
10000036 | TestName | Test Title| 2 | 1
```
Any help will be greatly appreciated.
|
Your INNER JOIN with `PollChoices` is bringing in more than 1 row for a given `poll` as there are 2 choices for the poll 10000036 as indicated by `choices` column.
You can change the query to use `GROUP BY` and get the counts.
In case you don't have entries for each member in the `PollVotes` or `Polls` table, you need to use `LEFT JOIN`
```
SELECT Polls.pollID,
Members.membername,
Polls.polltitle,
       COUNT(DISTINCT PollChoices.pollChoicesID) as 'choices',
COUNT(PollVotes.pollvoteId) as 'votes'
FROM Polls
INNER JOIN Members
ON Polls.memberID = Members.memberID
INNER JOIN PollChoices
ON PollChoices.pollID = Polls.pollID
INNER JOIN PollVotes
ON PollVotes.pollChoiceID = PollChoices.pollChoicesID
AND PollVotes.memberID = Members.memberID
GROUP BY Polls.pollID,
Members.membername,
Polls.polltitle
```
|
This query result is telling you the number of votes **per choice** in each poll.
In your example, this voter named TestName answered the poll (with ID 10000036) and gave one choice 1 vote, and the second choice 0 votes. This is why you are getting two rows in your result.
I'm not sure if you are expecting just one row because you didn't specify what data, exactly, you are trying to select. However if you are trying to see the number of votes that TestName has submitted, for each choice where the vote was greater than 1, then you will have to modify your query like this:
```
select * from
(SELECT Polls.pollID,
Members.membername,
Polls.polltitle, (SELECT COUNT(*) FROM PollChoices WHERE pollID=Polls.pollID) AS 'choices',
(SELECT COUNT(*) FROM PollVotes WHERE PollVotes.pollChoiceID = PollChoices.pollChoicesID) AS 'votes'
FROM Polls
INNER JOIN Members
ON Polls.memberID = Members.memberID
INNER JOIN PollChoices
ON PollChoices.pollID = Polls.pollID) as mysubquery where votes <> 0;
```
|
SQL select with join are returning double results
|
[
"",
"sql",
"inner-join",
"redundancy",
""
] |
I have the 3 below statements,
1. Selects the Order Numbers that dont exist
> ```
> select Orders.OrderNumber
> FROM Orders
> inner join InvoiceControl on Orders.OrderNumber = InvoiceControl.OrderNumber
> where not exists (select OrderNumber from Orders where InvoiceControl.OrderNumber = Orders.OrderNumber)
> ```
2. Selects a specific Order number that does not exist
> ```
> select OrderNumber from Orders where OrderNumber = 987654
> ```
3. Selects the specific Order Number in the corresponding table that does not exist
> ```
> select OrderNumber from InvoiceControl where OrderNumber = 987654
> ```
These 3 queries work in other scenarios with other tables but not this one; have I made an obvious mistake anywhere? Below is the query that was run and the outputs:

The idea behind this is to locate the OrderNumbers that do not exist in InvoiceControl, based on the OrderNumbers in the Orders Table, so the top query should also return the value 987654, as this value has not yet been included in the InvoiceControl Table because it could be a new Order without an Invoice.
|
Because your `INNER JOIN` will already create all correspondents between `Orders.OrderNumber = InvoiceControl.OrderNumber`.
After this result set is built, you actually filter out everything based on the condition in your `WHERE`.
`where not exists (select OrderNumber from Orders where InvoiceControl.OrderNumber = Orders.OrderNumber)`
Hypothetically, if you'd have just `987654` in your `Orders` table and you'd have a Correspondent in your `InvoiceControl` table, then the following query, without your `WHERE` clause
```
select Orders.OrderNumber
FROM Orders
inner join InvoiceControl on Orders.OrderNumber = InvoiceControl.OrderNumber
```
would return:
```
OrderNumber
987654
```
Then, by applying your `where not exists (select OrderNumber from Orders where InvoiceControl.OrderNumber = Orders.OrderNumber)` condition, you'd be looking for all records that do not have a correspondent (but you already have all possible correspondents between your two tables, based on your INNER JOIN).
Thus, your result will be:
`OrderNumber`
|
In the first query, you first asked for rows that exist in both Orders and InvoiceControl (via the FROM and JOIN tables), and then your WHERE clause excludes every row that exists in Orders. Since your starting set only includes rows that are in Orders, excluding all of them leaves no results.
|
SQL Server WHERE NOT EXISTS not working
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I'm trying to create a query using `SELECT CASE` that would return the accumulative results.
Here is my query; it works but doesn't return the correct result.
```
SELECT total, count(*) as count FROM
(
SELECT case
WHEN ( X ) < 1 THEN '1 km'
WHEN ( X ) < 3 THEN '3 km'
WHEN ( X ) < 5 THEN '5 km'
WHEN ( X ) < 10 THEN '10 km'
WHEN ( X ) < 15 THEN '15 km'
WHEN ( X ) < 25 THEN '25 km'
WHEN ( X ) < 50 THEN '50 km'
WHEN ( X ) < 100 THEN '100 km'
WHEN ( X ) > 0 THEN '-1'
else '-2'
end AS `total`
FROM `store` AS d WHERE d.pending!='1'
) AS someRandomAliasHere
GROUP BY `total`
```
**X is a formula I'm using to calculate the radius from a lat and lng.** `total` is NOT a column in my database table, just the result of the calculation of X
The query above gives me this..
```
1 km (4)
3 km (19)
5 km (103)
25 km (540)
50 km (61)
....
```
4, 19, 103, 540, and 61 are the total matches found.
The total count for `3` should be `19+4=23`.
`5` should be `103+19+4=122` etc. And `WHEN ( X ) > 0 THEN '-1'` should show the total count. of matches
I tried using `BETWEEN 0 AND 1, BETWEEN 0 AND 3` etc but it still didn't give me the correct results
Any ideas?
|
Another approach is to calculate the results independently then union them:
```
SELECT 1 AS total, COUNT(*) AS cnt FROM `store` WHERE store.pending != 1 AND ( X ) < 1
UNION ALL
SELECT 3 AS total, COUNT(*) AS cnt FROM `store` WHERE store.pending != 1 AND ( X ) < 3
UNION ALL
SELECT 5 AS total, COUNT(*) AS cnt FROM `store` WHERE store.pending != 1 AND ( X ) < 5
UNION ALL
/** ... **/
SELECT 100 AS total, COUNT(*) AS cnt FROM `store` WHERE store.pending != 1 AND ( X ) < 100
```
|
In addition to the accumulation, you also want a total value at the end with `-1`. This is a bit of a pain, but it can be accomplished.
The simplest way to do cumulative sums in MySQL is using variables. The basic idea is this:
```
SELECT total, cnt, (@cnt := @cnt + cnt) as count
FROM (SELECT (case WHEN ( X ) < 1 THEN '1'
WHEN ( X ) < 3 THEN '3'
WHEN ( X ) < 5 THEN '5'
WHEN ( X ) < 10 THEN '10'
WHEN ( X ) < 15 THEN '15'
WHEN ( X ) < 25 THEN '25'
WHEN ( X ) < 50 THEN '50'
WHEN ( X ) < 100 THEN '100'
WHEN ( X ) > 0 THEN '-1'
else '-2'
end) AS total, COUNT(*) as cnt
FROM store s
WHERE s.pending <> '1'
GROUP BY total
) t CROSS JOIN
(SELECT @cnt := 0) vars
ORDER BY total;
```
The issue with this is that you will not get an overall total of the non-negative values. Let me assume that you have no negative values. This requires adding another row into the total line:
```
SELECT total, cnt, (@cnt := @cnt + cnt) as count
FROM (SELECT (case WHEN ( X ) < 1 THEN '1'
WHEN ( X ) < 3 THEN '3'
WHEN ( X ) < 5 THEN '5'
WHEN ( X ) < 10 THEN '10'
WHEN ( X ) < 15 THEN '15'
WHEN ( X ) < 25 THEN '25'
WHEN ( X ) < 50 THEN '50'
WHEN ( X ) < 100 THEN '100'
WHEN ( X ) > 0 THEN '-1'
else '-2'
end) AS total, COUNT(*) as cnt
FROM store s
WHERE s.pending <> '1'
GROUP BY `total`
UNION ALL
SELECT -1, 0
) t CROSS JOIN
(SELECT @cnt := 0) vars
ORDER BY (total >= 0) desc, total;
```
I've changed the `order by` as well. Note that the value `-2` is probably meaningless, because `X < 1` and `X > 0` cover all possible values of `X` (except for `NULL`). If you actually have values 100 or greater, there are some small changes to refine the query. You do not describe what to do with those values, so clarification on the question would be helpful.
|
SELECT CASE to return accumulative results
|
[
"",
"mysql",
"sql",
""
] |
I have created three tables like Product, Brands and ProductBrands in ms sql server
These tables contains data as the following
Products (PId - PName)
* 1 - Mobiles
* 2 - T.V
* 3 - A.C
Brands (BId - BName)
* 1 - Samsung
* 2 - Sony
* 3 - L.G
ProductsBrands (PBId - PId - BId)
* 1 - 1 - 1
* 2 - 1 - 2
* 3 - 1 - 3
* 4 - 2 - 1
* 5 - 2 - 3
...
...
Now I have to fetch from tables the following feilds
* Product Id
* PName
* BId
* BName
Basically I have to fetch all the Brands assigned to a Products
Please help me!!! I am very new to this
|
```
SELECT P.PId ,
P.PName,
B.BId,
B.BName
FROM Products P
JOIN ProductsBrands PB On PB.PId = P.PId
JOIN Brands B ON B.BId = PB.BId
```
See whether this is what you are looking for.
|
If my understanding is correct:(Fetch all brands assigned to a Product)
```
select BName from Brands where BId in
(select distinct Bid from ProductsBrands where PId = (select PId from Products where PName = @input))
```
|
How to fetch data from tables where there is one to many relationship?
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
Pretty straightforward question: I want to store feet and inches in 1 column using a decimal, but I don't want it truncated like the float type does.
|
Store all your data in MKS (metric) units: convert the data into the international standard when storing it, and store it in a decimal type.

Thus if your tool gathers the data in 6'2" format, convert it into cm and save that in your data table. Then reverse the conversion for display.

By saving in a standard decimal cm format, finding people within a given height range is easy, whereas if feet and inches are in separate columns, range queries are really hard.
|
The imperial unit system still used in Myanmar, Liberia and that one country in North America is unfortunately not very arithmetics-friendly. There is [no native data-type](http://msdn.microsoft.com/en-us/library/ms187752.aspx) to handle the strange base12/base3/base1860 math for it.
You should really use the much more widely-used metric system and use a FLOAT or DECIMAL value representing meters.
However, if you really want to stay with the imperial system, you should store the value in inches and do the conversion to feet + inches at the GUI level.
|
whats the best datatype to store height?
|
[
"",
"sql",
"sql-server",
"types",
""
] |
I want to sort a list of record from the database. The records retrieved is sorted based on a column with int values. The possible int values are 1,2,3,4,5. The sorting order required is 1,3,2,4,5
so i cannot use `Order by table.a asc`. What should be my query to retrieve the desired order? for example my table has the following record
```
---------------------
name | to_order |
--------------------
n1 | 1
--------------------
n2 | 2
--------------------
n3 | 3
--------------------
```
The result of my query should be (n1,1),(n3,3),(n2,2).
NOTE: I am using mysql
|
you can add case statements to your order by clause
```
SELECT name, to_order
FROM Table1
ORDER BY
(
CASE
WHEN to_order = 1 THEN 0
WHEN to_order = 3 THEN 1
ELSE 2
END
),
to_order
```
Add a case for 3 so it is prioritized against the other numbers. I included 1 as a case since 1 should still be prioritized before 3.
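A minimal sketch of this ORDER BY CASE, run in SQLite through Python with made-up rows for all five values:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (name TEXT, to_order INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [("n1", 1), ("n2", 2), ("n3", 3), ("n4", 4), ("n5", 5)])

# 1 sorts first, 3 second, everything else after (then by to_order).
rows = conn.execute("""
SELECT name, to_order FROM t
ORDER BY CASE WHEN to_order = 1 THEN 0
              WHEN to_order = 3 THEN 1
              ELSE 2 END,
         to_order
""").fetchall()
print(rows)  # [('n1', 1), ('n3', 3), ('n2', 2), ('n4', 4), ('n5', 5)]
```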
|
I would write this as:
```
order by (case when to_order = 2 then 3
when to_order = 3 then 2
else to_order
end)
```
|
sql sort by int, except for one int value which has a different order
|
[
"",
"mysql",
"sql",
"sorting",
""
] |
In Oracle/SQL how to select rows where the value of the columns remained the same
I have a list of budget, each budget spread over 12 months, with fixed salary each month. I would like to return a list of budgets for which the value of the salary remained the same over 12 months.
```
BUDGET MONTH SALARY
5468 1 1500
5468 2 1500
5468 3 1500
5468 4 1500
5468 5 1500
5468 6 1500
5468 7 1500
5468 8 1500
5468 9 1500
5468 10 1500
5468 11 1500
5468 12 1500
3456 1 1675
3456 2 1675
3456 3 1675
3456 4 1675
3456 5 1500
3456 6 1500
3456 7 1500
3456 8 1500
3456 9 1675
3456 10 1675
3456 11 1675
3456 12 1675
3948 1 2900
3948 2 2900
3948 3 2900
3948 4 2900
3948 5 2900
3948 6 2900
3948 7 2900
3948 8 2900
3948 9 2900
3948 10 2900
3948 11 2900
3948 12 2900
9756 1 2900
9756 2 2900
9756 3 2900
9756 4 2900
9756 5 2900
9756 6 2900
9756 7 2900
9756 8 2900
9756 9 2900
9756 10 2900
9756 11 2900
9756 12 2900
9712 1 2743
9712 2 2900
9712 3 2900
9712 4 2900
9712 5 2743
9712 6 2900
9712 7 2743
9712 8 2900
9712 9 2900
9712 10 2743
9712 11 2900
5394 1 1500
5394 2 1500
5394 3 1500
5394 4 1500
5394 5 1500
5394 6 1500
5394 7 1500
5394 8 1500
5394 9 1500
5394 10 1500
5394 11 1500
5394 12 1500
```
|
Count the `distinct` salary for every budget
```
select budget
from your_table
group by budget
having count(distinct salary) = 1
```
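A compact sketch of the `COUNT(DISTINCT salary)` check, using SQLite via Python with two of the question's budgets (one constant all year, one that changes mid-year):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE budgets (budget INTEGER, month INTEGER, salary INTEGER)")
data = []
for m in range(1, 13):
    data.append((5468, m, 1500))                               # constant salary
    data.append((3456, m, 1675 if m < 5 or m > 8 else 1500))   # changes mid-year
conn.executemany("INSERT INTO budgets VALUES (?, ?, ?)", data)

# A budget with only one distinct salary never changed.
result = [r[0] for r in conn.execute("""
SELECT budget FROM budgets
GROUP BY budget
HAVING COUNT(DISTINCT salary) = 1
ORDER BY budget
""")]
print(result)  # [5468]
```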
|
```
SELECT BUDGET, SALARY FROM BUDGET_TAB
GROUP BY BUDGET, SALARY
HAVING COUNT(MONTH) = 12
ORDER BY 1, 2
```
|
In Oracle/SQL how to select rows where the value of the columns remained the same
|
[
"",
"sql",
"oracle",
"oracle11g",
"oracle10g",
""
] |
I know how to do this in SQL Server thanks to this clever bit of code
```
Use <database>
SELECT execquery.last_execution_time AS [Date Time], execsql.text AS [Script]
FROM sys.dm_exec_query_stats AS execquery
CROSS APPLY sys.dm_exec_sql_text(execquery.sql_handle) AS execsql
ORDER BY execquery.last_execution_time DESC
```
> SOURCE:
> [Recover unsaved SQL query scripts](https://stackoverflow.com/questions/15034832/recover-unsaved-sql-query-scripts-sql-server)
Is there a way to do this in Oracle SQL Developer?
|
If you have the privileges then:
```
SELECT * FROM v$sql
```
If not then press `F8` to bring up a list of previously ran queries.
|
[This](http://www.solveitproblems.net/2012/11/restore-unsaved-and-lost-plsql-code-in.html) has saved my butt several times.
> It is really a problem when you lose unsaved code. For about a
> month I have been working on a big procedure, and forgot to save the
> code in SVN. If you read this and remember have such unsaved code,
> commit it immediately! :) Because everything could happen with your
> test db. Ok. you're lucky if you were using Oracle SQL Developer,
> because this program has a great feature - it saves your code in its
> sql history inspite of eating a big party of your RAM. Open your file
> explorer and locate this folder:
>
> C:\Users\%USERNAME%\AppData\Roaming\SQL Developer\SqlHistory
>
> You'll find a lot of XML files, and if you're twice lucky, you'll find
> your lost code. It's wonderful. :) . If you're using another program,
> try to find a feature like this and maybe it helps you. My condolences
> if this post doesn't help you, in any case, try to find something good
> among the next: 1) write your code again, and it will be better than
> before as you did it once 2) commit your code, so you'll not face such
> problem in the future
|
Recover unsaved SQL query Scripts in Oracle SQL Developer
|
[
"",
"sql",
"oracle",
"oracle-sqldeveloper",
""
] |
I am having a ~90 MB SQLite database on a SSD consisting mostly on message attachments including a BLOB column content, that stores the binary attachment data.
Now I found that the following query
```
SELECT message_id FROM attachments WHERE length(content) IS NULL;
```
is 500x faster (0.5ms vs. 250ms) than the original
```
SELECT message_id FROM attachments WHERE content IS NULL;
```
Is it true, that both queries are equivalent?
**Additional info**
1. No indexes are involved apart from the autoindex.
2. It's not caching. The result can be reproducued unlimited times in any order from any number of SQLite processes.
|
In SQLite, the length and type of each column value are stored [at the beginning of the row](http://www.sqlite.org/fileformat2.html#record_format).
This allows the `length()` and `typeof()` functions to be optimized to avoid loading the actual value.
The IS NULL operator has no such optimization (although it would be possible to implement it).
|
I made a script to benchmark both functions. `length(x) IS NULL` is faster unless you have mostly `NULL` values.
## Results:
* 50% alternating between random data and null:
+ `IS NULL`: 11.343180236999842
+ `length(x) IS NULL`: 7.824154090999855
* Entirely blobs, no nulls:
+ `IS NULL`: 15.019244787999924
+ `length(x) IS NULL`: 7.527420233999919
* Entirely nulls, no blobs:
+ `IS NULL`: 6.184766045999822
+ `length(x) IS NULL`: 6.448342310000044
## Test script:
```
import sqlite3
import timeit
conn = sqlite3.connect("test.db")
c = conn.cursor()
c.execute("DROP TABLE IF EXISTS test")
c.execute("CREATE TABLE test (data BLOB)")
for i in range(10000):
# Modify this to change data
if i % 2 == 0:
c.execute("INSERT INTO test(data) VALUES (randomblob(1024))")
else:
c.execute("INSERT INTO test(data) VALUES (NULL)")
def timeit_isnull():
c.execute("SELECT data IS NULL AS dataisnull FROM test")
c.fetchall()
def timeit_lenisnull():
c.execute("SELECT length(data) IS NULL AS dataisnull FROM test")
c.fetchall()
print(timeit.timeit(timeit_isnull, number=1000))
print(timeit.timeit(timeit_lenisnull, number=1000))
```
|
Is 'length() IS NULL' equivalent and faster than 'IS NULL' for BLOBs?
|
[
"",
"sql",
"sqlite",
""
] |
I am attempting to get the average rating (1-10) for each of 6 games that were rated.
Here is an example of the table:

I can select the average of one of the 6 games like this:
```
SELECT ROUND(AVG(overallRating), 0), COUNT(*) AS Total, gameID
FROM gameSurvey
WHERE gameID = 1
```
How can I retrieve the average rating of all 6 games?
|
Try:
```
SELECT ROUND(AVG(overallRating), 0), count(*) as Total, gameID
FROM gameSurvey
GROUP BY gameID
```
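Here is a runnable toy version of this GROUP BY, using SQLite through Python (the ratings are made up, and an ORDER BY is added for a deterministic result):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE gameSurvey (gameID INTEGER, overallRating INTEGER)")
conn.executemany("INSERT INTO gameSurvey VALUES (?, ?)",
                 [(1, 8), (1, 6), (2, 10), (2, 7), (2, 4)])

# One output row per game: its rounded average rating and rating count.
rows = conn.execute("""
SELECT gameID, ROUND(AVG(overallRating), 0) AS avgRating, COUNT(*) AS total
FROM gameSurvey
GROUP BY gameID
ORDER BY gameID
""").fetchall()
print(rows)  # [(1, 7.0, 2), (2, 7.0, 3)]
```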
|
As everyone else has stated, you should use group by.
What this does, as implied, is group together certain rows based on a given column. In this case, you want to group all gameID values into one row. Then, any aggregate functions (such as AVG) are performed within these groups, so the average value you see in your result set is the average value for that given group.
Here is a resource on [GROUP BY](http://www.tutorialspoint.com/mysql/mysql-group-by-clause.htm) clause.
Your query would look like this:
```
SELECT ROUND(AVG(overallRating), 0), COUNT(*) AS numberOfRatings, gameID
FROM gameSurvey
GROUP BY gameID;
```
I tried to build an SQL Fiddle example for you, but it seems to be down at the moment. Hopefully this is enough information to help you moving forward, though.
|
How can I get multiple averages from multiple rows using Mysql
|
[
"",
"mysql",
"sql",
""
] |
I have one field for a date that contains year and month. The field name is new\_TarikhSebenarTamatPengajian. Example of data: 199110 (format: YYYYMM). I want to add a day so that the new format is YYYYMMDD.
For the available data, can I append the last day of the month for each month?
Example: 199101 = 19910131, 199102 = 19910228, 199103 = 19910331
This code does not work for me:
> SELECT DMSTAG.dbo.TEMP\_AKAUN\_MARA.new\_TarikhSebenarTamatPengajian
> CASE
> WHEN RIGHT(new\_TarikhSebenarTamatPengajian,2) = 01 THEN CONCAT(new\_TarikhSebenarTamatPengajian,31)
> WHEN RIGHT(new\_TarikhSebenarTamatPengajian,2) = 02 THEN CONCAT(new\_TarikhSebenarTamatPengajian,28)
> WHEN RIGHT(new\_TarikhSebenarTamatPengajian,2) = 03 THEN CONCAT(new\_TarikhSebenarTamatPengajian,31)
> WHEN RIGHT(new\_TarikhSebenarTamatPengajian,2) = 04 THEN CONCAT(new\_TarikhSebenarTamatPengajian,30) WHEN
> RIGHT(new\_TarikhSebenarTamatPengajian,2) = 05 THEN
> CONCAT(new\_TarikhSebenarTamatPengajian,31) WHEN
> RIGHT(new\_TarikhSebenarTamatPengajian,2) = 06 THEN
> CONCAT(new\_TarikhSebenarTamatPengajian,30) WHEN
> RIGHT(new\_TarikhSebenarTamatPengajian,2) = 07 THEN
> CONCAT(new\_TarikhSebenarTamatPengajian,31) WHEN
> RIGHT(new\_TarikhSebenarTamatPengajian,2) = 08 THEN
> CONCAT(new\_TarikhSebenarTamatPengajian,31) WHEN
> RIGHT(new\_TarikhSebenarTamatPengajian,2) = 09 THEN
> CONCAT(new\_TarikhSebenarTamatPengajian,30) WHEN
> RIGHT(new\_TarikhSebenarTamatPengajian,2) = 10 THEN
> CONCAT(new\_TarikhSebenarTamatPengajian,31) WHEN
> RIGHT(new\_TarikhSebenarTamatPengajian,2) = 11 THEN
> CONCAT(new\_TarikhSebenarTamatPengajian,30) WHEN
> RIGHT(new\_TarikhSebenarTamatPengajian,2) = 12 THEN
> CONCAT(new\_TarikhSebenarTamatPengajian,31)
> END AS newdate FROM [dbo].[TEMP\_AKAUN\_MARA]
|
Try this one:
```
SELECT id,crse_date,
CASE
   WHEN RIGHT(crse_date,2) = 01 THEN CONCAT(crse_date,31)
   WHEN RIGHT(crse_date,2) = 02 THEN CONCAT(crse_date,28)
   WHEN RIGHT(crse_date,2) = 03 THEN CONCAT(crse_date,31)
   WHEN RIGHT(crse_date,2) = 04 THEN CONCAT(crse_date,30)
END AS newdate
FROM yourtable
```

After you alter your table, here is the new query for your problem.

Take note: your field name is not advisable.

```
SELECT new_TarikhSebenarTamatPengajian,
CASE
 WHEN RIGHT(new_TarikhSebenarTamatPengajian,2) = 01 THEN CONCAT(new_TarikhSebenarTamatPengajian,31)
 WHEN RIGHT(new_TarikhSebenarTamatPengajian,2) = 02 THEN CONCAT(new_TarikhSebenarTamatPengajian,28)
 WHEN RIGHT(new_TarikhSebenarTamatPengajian,2) = 03 THEN CONCAT(new_TarikhSebenarTamatPengajian,31)
 WHEN RIGHT(new_TarikhSebenarTamatPengajian,2) = 04 THEN CONCAT(new_TarikhSebenarTamatPengajian,30)
 WHEN RIGHT(new_TarikhSebenarTamatPengajian,2) = 05 THEN CONCAT(new_TarikhSebenarTamatPengajian,31)
 WHEN RIGHT(new_TarikhSebenarTamatPengajian,2) = 06 THEN CONCAT(new_TarikhSebenarTamatPengajian,30)
 WHEN RIGHT(new_TarikhSebenarTamatPengajian,2) = 07 THEN CONCAT(new_TarikhSebenarTamatPengajian,31)
 WHEN RIGHT(new_TarikhSebenarTamatPengajian,2) = 08 THEN CONCAT(new_TarikhSebenarTamatPengajian,31)
 WHEN RIGHT(new_TarikhSebenarTamatPengajian,2) = 09 THEN CONCAT(new_TarikhSebenarTamatPengajian,30)
 WHEN RIGHT(new_TarikhSebenarTamatPengajian,2) = 10 THEN CONCAT(new_TarikhSebenarTamatPengajian,31)
 WHEN RIGHT(new_TarikhSebenarTamatPengajian,2) = 11 THEN CONCAT(new_TarikhSebenarTamatPengajian,30)
 WHEN RIGHT(new_TarikhSebenarTamatPengajian,2) = 12 THEN CONCAT(new_TarikhSebenarTamatPengajian,31)
END AS newdate
FROM TEMP_AKAUN_MARA
```
|
Assuming your column name is DateWithOutMonth :
```
Select Convert(nvarchar(8), Dateadd(day, -1, Dateadd(month, 1, Convert(datetime, DateWithOutMonth + '01'))) , 112)
from your table
```
Code explained :
* first, append `01` to the end of your date value, so the value is always the beginning of a month
* convert the value to the datetime type
* add 1 month to the converted value
* subtract one day from the result -> we have the end date of the month
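The add-one-month-minus-one-day trick maps directly onto SQLite's date modifiers too; here is a small Python sketch (the helper name `end_of_month` is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

def end_of_month(yyyymm: str) -> str:
    # Append '-01', jump one month ahead, step back one day -> last day of month.
    row = conn.execute(
        "SELECT strftime('%Y%m%d', date(substr(?,1,4) || '-' || substr(?,5,2) || '-01', "
        "'+1 month', '-1 day'))", (yyyymm, yyyymm)).fetchone()
    return row[0]

print(end_of_month("199101"))  # 19910131
print(end_of_month("199102"))  # 19910228
```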
|
SQL field add date change format to YYYYMMDD
|
[
"",
"sql",
"datefield",
""
] |
I have some trouble writing a well-performing query. Now I have two tables:
Cars:
```
id brand model_name
1 'Audi' 'A4'
2 'Audi' 'A4'
3 'Audi' 'A4'
4 'Audi' 'A6'
5 'Audi' 'A7'
```
Buyers:
```
id cars_id name activly_looking
1 2 'Brad' 'no'
2 2 'Kim' 'maybe'
3 4 'Sofia' 'yes'
4 4 'Tim' 'no'
5 5 'Tina' 'yes'
```
Now I need to select the count of cars that: HAS buyers but does NOT have any buyers who are `activly_looking = 'yes'`
The query should in this example return the count of `1`, since only car with id `2` fulfill the criteria.
BTW, this will be done as a one-time query to debug an issue, but on quite a huge table (millions of rows).
|
Another way is using conditional aggregation to filter out all cars that have any `activly_looking = 'yes'` buyers.
The inner join ensures that the car has at least 1 buyer and the having clause ensures that none of the buyers are actively looking.
```
select count(*) from (
select c.id from cars c
join buyers b on b.cars_id = c.id
group by c.id
having count(case when b.activly_looking = 'yes' then 1 end) = 0
) t1
```
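As a quick sanity check of the conditional-aggregation approach, here is a sketch using Python's built-in sqlite3 with the sample data from the question (SQLite stands in for PostgreSQL here; the syntax of this particular query is the same):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE cars (id INTEGER, brand TEXT, model_name TEXT);
CREATE TABLE buyers (id INTEGER, cars_id INTEGER, name TEXT, activly_looking TEXT);
INSERT INTO cars VALUES (1,'Audi','A4'),(2,'Audi','A4'),(3,'Audi','A4'),
                        (4,'Audi','A6'),(5,'Audi','A7');
INSERT INTO buyers VALUES (1,2,'Brad','no'),(2,2,'Kim','maybe'),
                          (3,4,'Sofia','yes'),(4,4,'Tim','no'),(5,5,'Tina','yes');
""")

# Cars that have buyers, none of whom are actively looking
count = con.execute("""
SELECT count(*) FROM (
  SELECT c.id FROM cars c
  JOIN buyers b ON b.cars_id = c.id
  GROUP BY c.id
  HAVING count(CASE WHEN b.activly_looking = 'yes' THEN 1 END) = 0
) t
""").fetchone()[0]
print(count)  # only car 2 qualifies -> 1
```

The `CASE ... THEN 1 END` without an `ELSE` yields NULL for non-matching rows, and `count()` ignores NULLs, which is what makes the `= 0` test work.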
|
You can use `exists` and `not exists` to check your conditions and then just do an aggregation query:
```
select count(*)
from cars c
where exists (select 1 from buyers b where b.cars_id = c.id) and
      not exists (select 1 from buyers b where b.cars_id = c.id and b.activly_looking = 'yes')
```
|
Simple way to count all where has 0 relations with specific value
|
[
"",
"sql",
"postgresql",
""
] |
I have this table:
```
[SDate - DateTime] [Value] - Represents a boolean
2010-11-16 10:10:00 1
2010-11-16 10:11:00 0
2010-11-16 10:12:00 1
2010-11-16 10:13:00 1
```
I need a query to count how many times the value changes.
For example:
```
2010-11-16 10:10:00 0
2010-11-16 10:11:00 1
2010-11-16 10:12:00 0
2010-11-16 10:13:00 0
2010-11-16 10:14:00 1
2010-11-16 10:15:00 0
2010-11-16 10:16:00 1
...
5 changes
```
---
```
2010-11-16 10:10:00 0
2010-11-16 10:11:00 0
2010-11-16 10:12:00 0
2010-11-16 10:13:00 1
2010-11-16 10:14:00 1
2010-11-16 10:15:00 1
2010-11-16 10:16:00 1
...
1 change
```
|
You can do this with `lag()`:
```
select count(*)
from (select t.*, lag(value) over (order by sdate) as prev_value
from table t
) t
where prev_value <> value ;
```
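A runnable sketch of the `lag()`-based change count, using Python's sqlite3 as a stand-in engine (window functions need SQLite 3.25+; table and column names follow the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE readings (sdate TEXT, value INTEGER);
INSERT INTO readings VALUES
 ('2010-11-16 10:10:00',0),('2010-11-16 10:11:00',1),
 ('2010-11-16 10:12:00',0),('2010-11-16 10:13:00',0),
 ('2010-11-16 10:14:00',1),('2010-11-16 10:15:00',0),
 ('2010-11-16 10:16:00',1);
""")

# lag() pairs each row with the previous value; count rows where they differ
changes = con.execute("""
SELECT count(*) FROM (
  SELECT value, lag(value) OVER (ORDER BY sdate) AS prev_value
  FROM readings
) t
WHERE prev_value <> value
""").fetchone()[0]
print(changes)  # 0->1->0->0->1->0->1 gives 5 changes
```

The first row's `prev_value` is NULL, and `NULL <> value` is not true, so the first row is correctly excluded from the count.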
|
This should work in earlier versions too.
```
;WITH cte
AS (SELECT Row_number()OVER(ORDER BY SDate) rn,*
FROM <tablename>)
SELECT Sum(CASE WHEN a.[Value] = b.[Value] THEN 0 ELSE 1 END)
FROM cte a
JOIN cte b
ON a.rn = b.rn + 1
```
|
Counting how many times a boolean value changes in SQL Server
|
[
"",
"sql",
"t-sql",
"sql-server-2012",
""
] |
I am using an MSSQL database, SQL Server 2008.
I have this query:
```
select *, DateDiff(n,StartDateTime,EndDateTime) as DateDifference
from result
where Project='Sample' AND SubProject='Sample' and
(StartDateTime)>='11/01/2013 00:00:00' AND (EndDateTime)<='11/11/2014 23:59:59'
order by EndDateTime desc
```
It lists all entries from the Result table. The result of the above query is:
```
TestCaseName userName StartDatetime EndDateTime
poonam_tm_003 User1 2014-11-10 17:39:27.000 2014-11-10 17:39:31.000
poonam_tm_003 User1 2014-11-10 17:39:24.000 2014-11-10 17:39:27.000
poonam_tm_003 User1 2014-11-10 17:39:20.000 2014-11-10 17:39:24.000
poonam_tm_003 User2 2014-11-10 17:39:17.000 2014-11-10 17:39:20.000
30SepTestCase TM1 2014-10-29 10:12:09.000 2014-10-29 10:17:07.000
30SepTestCase TM1 2014-10-29 10:06:10.000 2014-10-29 10:09:41.000
```
I want only entries for the last executed test case, like this:
```
TestCaseName userName StartDatetime EndDateTime
poonam_tm_003 User1 2014-11-10 17:39:27.000 2014-11-10 17:39:31.000
30SepTestCase TM1 2014-10-29 10:12:09.000 2014-10-29 10:17:07.000
```
I need only the last entry for each test case, i.e. the row with max(EndDateTime).
|
Here is an answer that might help:
```
SELECT TestCaseName,
userName,
StartDatetime,
EndDateTime,
Datediff(N, StartDatetime, EndDateTime) AS datedifference
FROM (SELECT *,
ROW_NUMBER()
OVER (
partition BY testcasename
ORDER BY enddatetime DESC) AS rn
FROM yourtable) AS t
WHERE Project = 'Sample'
AND SubProject = 'Sample'
AND ( StartDateTime ) >= '11/01/2013 00:00:00'
AND ( EndDateTime ) <= '11/11/2014 23:59:59'
AND rn = 1
```
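The `ROW_NUMBER()`-per-partition pattern can be checked against a small stand-in dataset; a sketch with Python's sqlite3 (window functions need SQLite 3.25+), using a trimmed copy of the sample rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE result (TestCaseName TEXT, userName TEXT,
                     StartDateTime TEXT, EndDateTime TEXT);
INSERT INTO result VALUES
 ('poonam_tm_003','User1','2014-11-10 17:39:27','2014-11-10 17:39:31'),
 ('poonam_tm_003','User1','2014-11-10 17:39:24','2014-11-10 17:39:27'),
 ('poonam_tm_003','User2','2014-11-10 17:39:17','2014-11-10 17:39:20'),
 ('30SepTestCase','TM1','2014-10-29 10:12:09','2014-10-29 10:17:07'),
 ('30SepTestCase','TM1','2014-10-29 10:06:10','2014-10-29 10:09:41');
""")

# number rows per test case, newest EndDateTime first, keep only rn = 1
latest = con.execute("""
SELECT TestCaseName, userName, EndDateTime FROM (
  SELECT *, row_number() OVER (PARTITION BY TestCaseName
                               ORDER BY EndDateTime DESC) AS rn
  FROM result
) t
WHERE rn = 1
ORDER BY EndDateTime DESC
""").fetchall()
for row in latest:
    print(row)
```

Each partition (test case) gets its own numbering, so `rn = 1` is always the most recent execution of that test case.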
|
I hope GROUP BY and MAX will work:
```
WITH CTE(Testcasename,userName,StartDatetime)
AS
(
SELECT Testcasename,userName,MAX(StartDatetime) StartDatetime
FROM result
GROUP BY Testcasename,Username
)
SELECT A.*,DateDiff(DD,A.StartDateTime,A.EndDateTime) as DateDifference FROM
RESULT A INNER JOIN CTE B ON A.Testcasename=B.Testcasename
AND A.userName=B.userName
AND A.StartDatetime=B.StartDatetime
WHERE
Project='Sample' AND SubProject='Sample' and
(A.StartDateTime)>='11/01/2013 00:00:00' AND (A.EndDateTime)<='11/11/2014 23:59:59'
order by A.EndDateTime desc
```
|
How to self query table in order to get last executed testcase from same table
|
[
"",
"sql",
""
] |
I've searched stackoverflow but could not get an answer to my question.
Table 1 has two columns
1. employee\_id
2. employee\_name
Table 2 has two columns
1. employee\_id
2. Manager\_id
Manager\_id is a subset of Employee\_id, which in turn means that Manager's name will be available in Employee\_name list.
How do I join both tables to get a result like: employee\_id, employee\_name, manager\_name?
My query :
```
SELECT employee_id,
employee_name,
manager_id AS manager_name
FROM table1
LEFT JOIN table2
ON table1.employee_id = table2.employee_id
WHERE manager_id = employee_id;
```
Please , suggest an appropriate approach for this situation.
Thanks in advance.
|
You need to join back to `table1`:
```
select
  employees.employee_id,
employees.employee_name,
managers.employee_name as manager_name
from table1 as employees
left join table2
on table2.employee_id = employees.employee_id
left join table1 as managers
on managers.employee_id = table2.manager_id
```
Additionally, since you're already filtering down to matching managers through the `JOIN` clauses, you can exclude the `WHERE` from your original query.
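A minimal runnable sketch of the double self-join, using Python's sqlite3 and made-up employee names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table1 (employee_id INTEGER, employee_name TEXT);
CREATE TABLE table2 (employee_id INTEGER, manager_id INTEGER);
INSERT INTO table1 VALUES (1,'Alice'),(2,'Bob'),(3,'Carol');
INSERT INTO table2 VALUES (2,1),(3,1);        -- Alice manages Bob and Carol
""")

# join table1 twice: once for the employee, once (aliased) for the manager
rows = con.execute("""
SELECT e.employee_id, e.employee_name, m.employee_name AS manager_name
FROM table1 e
LEFT JOIN table2 t2 ON t2.employee_id = e.employee_id
LEFT JOIN table1 m  ON m.employee_id  = t2.manager_id
ORDER BY e.employee_id
""").fetchall()
print(rows)  # Alice has no manager row, so her manager_name is NULL/None
```

The LEFT JOINs keep employees without a manager record; an INNER JOIN would silently drop them.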
|
You need a 2nd join.
* 1st join is from t1 to t2 to get the manager of the employee.
* The next join would be from t2 back to t1 again to get the name of the manger.
I'm assuming you want all employees and the name of the manger if one is defined.
```
SELECT emp.employee_id,
emp.employee_name,
mgr.employee_name AS manager_name
FROM table1 Emp
LEFT JOIN table2 t2
ON emp.employee_id = t2.employee_id
LEFT JOIN table1 mgr
on mgr.employee_Id = t2.manager_Id
```
--What's this for?
```
WHERE manager_id = employee_id;
```
|
need help joining two tables, sql
|
[
"",
"sql",
"join",
"oracle-sqldeveloper",
""
] |
I'm trying to check if a stored procedure exists, and then if it doesn't, create it.
I'm getting the error
> Incorrect Syntax Near SET. Expecting EXTERNAL
Code:
```
IF (NOT EXISTS (SELECT *
FROM INFORMATION_SCHEMA.ROUTINES
WHERE ROUTINE_SCHEMA = 'dbo'
AND ROUTINE_NAME = 'ELMAH_GetErrorXml'))
BEGIN
CREATE PROCEDURE [dbo].[ELMAH_GetErrorXml]
(
@Application NVARCHAR(60),
@ErrorId UNIQUEIDENTIFIER
)
AS
SET NOCOUNT ON
SELECT
[AllXml]
FROM
[ELMAH_Error]
WHERE
[ErrorId] = @ErrorId
AND
[Application] = @Application
END
```
|
You got that error because that is not the right way to check for the existence of an object (procedure) and create it if it does not exist. Try something like this:
```
IF NOT EXISTS (SELECT *
FROM sys.objects
WHERE object_id = Object_id(N'[dbo].[ELMAH_GetErrorXml]')
AND type IN ( N'P', N'PC' ))
BEGIN
EXEC('CREATE PROCEDURE [dbo].[Elmah_geterrorxml] (@Application NVARCHAR(60),
@ErrorId UNIQUEIDENTIFIER)
AS
BEGIN
SET NOCOUNT ON
SELECT [AllXml]
FROM [ELMAH_Error]
WHERE [ErrorId] = @ErrorId
AND [Application] = @Application
END')
END
```
But the simple way is
```
IF EXISTS (SELECT *
FROM INFORMATION_SCHEMA.ROUTINES
WHERE ROUTINE_SCHEMA = 'dbo'
AND ROUTINE_NAME = 'ELMAH_GetErrorXml')
BEGIN
DROP PROCEDURE [dbo].[Elmah_geterrorxml]
END
GO
CREATE PROCEDURE [dbo].[Elmah_geterrorxml] (@Application NVARCHAR(60),
@ErrorId UNIQUEIDENTIFIER)
AS
BEGIN
SET NOCOUNT ON
SELECT [AllXml]
FROM [ELMAH_Error]
WHERE [ErrorId] = @ErrorId
AND [Application] = @Application
END
```
|
```
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
IF EXISTS (SELECT 1 FROM sys.objects where name = 'ELMAH_GetErrorXml' and type ='P')
Drop Procedure [dbo].[ELMAH_GetErrorXml]
Go
CREATE PROCEDURE [dbo].[ELMAH_GetErrorXml]
(
@Application NVARCHAR(60),
@ErrorId UNIQUEIDENTIFIER
)
AS
BEGIN
SET NOCOUNT ON
SELECT
[AllXml]
FROM
[ELMAH_Error]
WHERE
[ErrorId] = @ErrorId
AND
[Application] = @Application
END
GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO
```
|
Incorrect Syntax Near SET. Expecting EXTERNAL
|
[
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
I'd like to identify all views and procedures in a HANA DB that use a given table. In SQL Server, one can query `sysobjects` and `syscomments` like the following:
```
SELECT
o.name
FROM sysobjects o
JOIN syscomments c
ON o.id = c.id
WHERE c.comment LIKE '%tableName%'
```
Is there an equivalent in HANA?
|
Well that didn't take long to find. An answer can be found on the [SCN](http://scn.sap.com/thread/3526533).
```
SELECT
Procedure_Name
FROM sys.procedures
WHERE Definition LIKE '%tableName%'
```
|
I believe the best approach would be to search them using the Public Synonyms of the system, since the current user may not have SELECT privilege on schema SYS.
All objects can be found using:
```
SELECT *
FROM OBJECTS
```
For procedures, you can just
```
SELECT *
FROM PROCEDURES
```
For more public synonyms available, check Catalog folder -> Public Synonyms.
|
HANA Equivalent of syscomments
|
[
"",
"sql",
"hana",
""
] |
Hopefully simple enough but cannot find the answer. Trying to run:
SQL Server
```
left outer join exchanges fx (nolock) on
v1.ccy = fx.ccy and v1.date = fx.date
```
The problem is that fx.ccy may not exist on fx.date. I would therefore like to join on the next available date.
Many thanks,
José
|
Possible solution:
```
....
left outer join exchanges fx (nolock) on
v1.ccy = fx.ccy and fx.date = (
select min(date) from exchanges where date >= v1.date and ccy = v1.ccy)
```
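A sketch of the "next available date" correlated subquery, using Python's sqlite3 and hypothetical table contents (the rate on the requested date is missing, so the join falls forward to the next date on file):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE v1 (ccy TEXT, date TEXT);
CREATE TABLE exchanges (ccy TEXT, date TEXT, rate REAL);
INSERT INTO v1 VALUES ('EUR','2014-11-10');
INSERT INTO exchanges VALUES ('EUR','2014-11-09',1.24),
                             ('EUR','2014-11-12',1.25);   -- 10th and 11th missing
""")

# join on the smallest exchange date that is on or after the requested date
row = con.execute("""
SELECT v1.ccy, fx.date, fx.rate
FROM v1
LEFT JOIN exchanges fx
  ON v1.ccy = fx.ccy
 AND fx.date = (SELECT min(date) FROM exchanges
                WHERE date >= v1.date AND ccy = v1.ccy)
""").fetchone()
print(row)  # falls forward to 2014-11-12
```

Note the subquery runs once per outer row, so on a large table an index on `exchanges(ccy, date)` matters.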
|
```
left outer join exchanges fx (nolock) on
v1.ccy = fx.ccy and fx.date = (select min(date)
from exchanges x
where v1.ccy = x.ccy
and x.date >= v1.date)
```
|
SQL - Coalesce until next available date
|
[
"",
"sql",
"join",
"coalesce",
""
] |
I have a table that has 3 columns: date,name,salary
I want to query the table and add to the result set another calculated column that can have 2 values: 'start' and 'same'.
The rule to assign the value is: order records by date ascending. If the name of the current record is different from the name of the previous record then the new column value will be 'start', else (if it is the same) it will be 'same'.
How can I do that? Is there a function (something like prevRow())? Or do I have to use a trick?
How can I do this in MySQL and DB2?
|
It would seem that DB2 (versions after 9.7 at least) support the LAG() window function, so this query should work (although I haven't been able to test it):
```
SELECT
date,
name,
    salary,
CASE
WHEN lag(name, 1) OVER (ORDER BY date ASC) = name THEN 'same'
ELSE 'start'
END AS calc_col
FROM your_table
ORDER BY date ASC
```
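A runnable check of the same `LAG()`-based classification, using Python's sqlite3 as a stand-in for DB2 (SQLite 3.25+ for window functions; the sample rows are mine):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE salaries (date TEXT, name TEXT, salary INTEGER);
INSERT INTO salaries VALUES
 ('2014-01-01','Ann',100),('2014-02-01','Ann',110),
 ('2014-03-01','Bob',90),('2014-04-01','Bob',95);
""")

# compare each row's name with the previous row's name (by date)
rows = con.execute("""
SELECT date, name,
       CASE WHEN lag(name) OVER (ORDER BY date) = name
            THEN 'same' ELSE 'start' END AS calc_col
FROM salaries
ORDER BY date
""").fetchall()
print(rows)
```

For the first row `lag(name)` is NULL, the comparison is not true, and the ELSE branch correctly labels it 'start'.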
|
Depends on the database you are using, MS SQL has a lag function : <http://msdn.microsoft.com/en-us/library/hh231256.aspx>
There is a mysql hack for this, check out this question : [Simulate lag function in MySQL](https://stackoverflow.com/questions/11303532/simulate-lag-function-in-mysql)
[how to do lag operation in mysql](https://stackoverflow.com/questions/18437675/how-to-do-lag-operation-in-mysql)
|
Get previous record column value in SQL
|
[
"",
"mysql",
"sql",
"db2",
"record",
"recordset",
""
] |
So I have a table that looks like this:
```
ID amt_1 amt_2 amt_3 amt_4
001 100.00 300.00 50.00 200.00
002 200.00 400.00 100.00 200.00
003 700.00 50.0 200.00 700.00
```
And I want to make a new column that carries only the greatest value per ID ie:
```
ID amt_1 amt_2 amt_3 amt_4 NEW_COL
001 100.00 300.00 50.00 200.00 300.00
002 200.00 400.00 100.00 200.00 400.00
003 700.00 50.0 200.00 700.00 700.00
```
Not that I think a CTE would help me, but let me say I cannot use them because this is being imported to Tableau which does not recognize CTE's.
|
Using `CASE Statement` you can find the largest value among the columns.
```
SELECT ID,
amt_1,
amt_2,
amt_3,
amt_4,
CASE
WHEN amt_1 >= amt_2 AND amt_1 >= amt_3 AND amt_1 >= amt_4 THEN amt_1
WHEN amt_2 >= amt_1 AND amt_2 >= amt_3 AND amt_2 >= amt_4 THEN amt_2
WHEN amt_3 >= amt_1 AND amt_3 >= amt_2 AND amt_3 >= amt_4 THEN amt_3
WHEN amt_4 >= amt_1 AND amt_4 >= amt_2 AND amt_4 >= amt_3 THEN amt_4
END NEW_COL
FROM Tablename
```
If you are using `SQL SERVER 2008 or later versions` then try this
```
SELECT ID,amt_1,amt_2,amt_3,amt_4,
(SELECT Max(amt) FROM (VALUES (amt_1), (amt_2), (amt_3),(amt_4)) AS value(amt)) NEW_COL
FROM tablename
```
|
Using SQL, you can use subselect to do something like this :
```
SELECT MAX(SELECT amt_1 FROM table_name
UNION
SELECT amt_2 FROM table_name
UNION
SELECT amt_3 FROM table_name
// ...) as maxValue, id FROM table_name
```
I'm just thinking loudly, try to see if this pseudo code works, note that you can create a view or a temporary table to store the table returned from the Union of the 4 selects.
Goodluck
|
Need the greatest value in new column
|
[
"",
"sql",
"max",
"calculated-columns",
""
] |
I am getting stuck on some SQL with a CASE condition.
It's a back end for an SSRS 2005 report, and the user can select ALL, < 50K, or > 50K.
The guts are..
```
SELECT LOANAMOUNT
FROM LOANS
WHERE loanAmount (CASE WHEN @Parameter 'All' THEN = loanAmount --Ignore the variable
ELSE @parameter = 1 THEN <= 50k
ELSE @parameter = 2 THEN > 50K
END CASE)
```
The issue I am having is the >< signs. I can factor in the = but I can't tell it to get ALL then change my operators in the case.
Any insights would be great. Thanks
|
*All* = select all records; otherwise rows are filtered depending on the `@Parameter` value:
```
SELECT *
FROM LOANS
WHERE
(@Parameter = 'All') OR
(@Parameter = '1' AND loanAmount <= 50000) OR
(@Parameter = '2' AND loanAmount > 50000)
```
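The OR-per-parameter pattern is easy to verify with a bound parameter; a sketch using Python's sqlite3, where the named parameter `:p` stands in for `@Parameter`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE loans (loanAmount INTEGER);
INSERT INTO loans VALUES (10000), (50000), (80000);
""")

# one query handles all three parameter values via OR branches
sql = """
SELECT count(*) FROM loans
WHERE :p = 'All'
   OR (:p = '1' AND loanAmount <= 50000)
   OR (:p = '2' AND loanAmount >  50000)
"""
counts = {p: con.execute(sql, {"p": p}).fetchone()[0] for p in ("All", "1", "2")}
print(counts)  # {'All': 3, '1': 2, '2': 1}
```

Each OR branch short-circuits to false unless its parameter value matches, so no CASE over comparison operators is needed.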
|
You don't need a `case` at all
```
SELECT LOANAMOUNT
FROM LOANS
WHERE @Parameter 'All'
OR (@parameter = '1' AND loanAmount <= 50000)
OR (@parameter = '2' AND loanAmount > 50000)
```
|
SQL, BETWEEN and CASE (3 COndition Parameter)
|
[
"",
"sql",
"sql-server",
"case",
"reportingservices-2005",
"between",
""
] |
I have around 10 linked servers. I need to get the GETDATE() datetime from each of these linked servers and store it in a table. Please help me out with how to achieve this.
|
```
insert into table
(all_Dates)
SELECT *
FROM OPENQUERY(USLinkedServer, 'SELECT getdate()')
union
SELECT *
FROM OPENQUERY(UKLinkedServer, 'SELECT getdate()')
union
SELECT *
FROM OPENQUERY(AUSLinkedServer, 'SELECT getdate()')
.
.
.
```
Remember, OPENQUERY requires that the linked server is already set up
|
You can use **OPENQUERY** to get the result set from **remote server** which are added as **linked server**.
The syntax is **OPENQUERY ( linked\_server ,'query' )**
Any user can execute **OPENQUERY**. The permissions that are used to connect to the remote server are obtained from the settings defined for the linked server.
```
INSERT INTO table (DateColumn)
SELECT Date
FROM OPENQUERY(RemoteServerName1, 'SELECT GETDATE() AS Date')
UNION ALL
SELECT Date
FROM OPENQUERY(RemoteServerName2, 'SELECT GETDATE() AS Date')
UNION ALL
SELECT Date
FROM OPENQUERY(RemoteServerName3, 'SELECT GETDATE() AS Date')
```
|
How to get GETDATE() from multiple linked Servers
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I am trying to restore a database in PMA but only have access to frm and ibd files - not the ib\_log files which I understand you need.
I know I may not be able to recover the database data but is it possible to recover the structure of the tables from the frm files?
|
I restored the table from only `.frm` and `.ibd` files.
### Get the SQL query to create the tables
If you already know the schema of your tables, you can skip this step.
1. First, install [MySQL Utilities](http://dev.mysql.com/downloads/utilities/).
Then you can use `mysqlfrm` command in command prompt (cmd).
2. Second, get the SQL queries from `.frm` files using `mysqlfrm` command:
```
mysqlfrm --diagnostic <path>/example_table.frm
```
Then you can get the SQL query to create same structured table.
Like this:
```
CREATE TABLE `example_table` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`username` varchar(150) NOT NULL,
`photo_url` varchar(150) NOT NULL,
`password` varchar(600) NOT NULL,
`active` smallint(6) NOT NULL,
`plan` int(11) NOT NULL,
PRIMARY KEY `PRIMARY` (`id`)
) ENGINE=InnoDB;
```
### Create the tables
Create the table(s) using the above SQL query.
If the old data still exists, you may have to drop the respective database and tables first. Make sure you have a backup of the data files.
### Restore the data
Run this query to remove new table data:
```
ALTER TABLE example_table DISCARD TABLESPACE;
```
This removes the connection between the new `.frm` file and the (new, empty) `.ibd` file. Also, remove the new `.ibd` file from the folder.
Then, put the old `.ibd` file into the new folder, e.g.:
```
cp backup/example_table.ibd <path>/example_table.ibd
```
Make sure that the `.ibd` files can be read by the `mysql` user, e.g. by running `chown -R mysql:mysql *.ibd` in the folder.
Run this query to import old data:
```
ALTER TABLE example_table IMPORT TABLESPACE;
```
This imports the data from the `.ibd` file and will restore the data.
|
InnoDB needs the ib\_log files for data recovery, but it also needs the ibdata1 file which contains the data dictionary and sometimes contains pending data for the tables.
The data dictionary is kind of a duplicate system that records table structure and also matches a table id to the physical .ibd file that contains the table data.
You can't just move .ibd files around without the InnoDB data dictionary, and the data dictionary must match the table id found inside the .ibd file. You *can* reattach a .ibd file and recover the data, but the procedure is not for the faint of heart. See <http://www.chriscalender.com/recovering-an-innodb-table-from-only-an-ibd-file/>
You can recover the structure using the .frm files with some file trickery, but you will not be able to create them as InnoDB tables at first. Here's a blog that covers a method for recovering .frm files as MyISAM tables:
<http://www.percona.com/blog/2008/12/17/recovering-create-table-statement-from-frm-file/>
You won't be able to use PMA for this. You need superuser access to the data directory on the server.
|
Restore table structure from frm and ibd files
|
[
"",
"mysql",
"sql",
"phpmyadmin",
"innodb",
""
] |
I have found myself in a challenging scenario which I cannot figure out. I have a table "#PriceChange" from which I need to figure out the various profit losses, between two dates, based on the price changes.
Example:
Product 1001 (Black Jeans) has changed price over time:
* 2014-11-02: 10.99 > 8.99 (50 sold)
* 2014-11-03: 8.99 > 4.99 (25 sold)
I need to calculate, the losses based on the original price.
Along the lines of:
* (10.99 - 8.99) = $2, $2 x **50** = $100 loss
* (8.99 - 4.99) = $4, $4 x **25** = $100 loss.
Total loss = $200.
Any help would be greatly appreciated.
```
CREATE TABLE #PriceChange
(
Product int,
Description varchar(30),
ValidFrom date,
ValidTo date,
CurrentPrice decimal (5,2),
SoldBetweenValidDates int
);
INSERT INTO #PriceChange VALUES (1001,'Black Jeans','2014-11-01','2014-11-01', 10.99, 100);
INSERT INTO #PriceChange VALUES (1001,'Black Jeans','2014-11-02','2014-11-02', 8.99, 50);
INSERT INTO #PriceChange VALUES (1001,'Black Jeans','2014-11-03',NULL, 4.99, 25);
INSERT INTO #PriceChange VALUES (1002,'Shirt','2014-11-01','2014-11-01', 10.99, 100);
INSERT INTO #PriceChange VALUES (1002,'Shirt','2014-11-02','2014-11-02', 8.99, 50);
INSERT INTO #PriceChange VALUES (1002,'Shirt','2014-11-03',NULL, 4.99, 25);
SELECT *
FROM #PriceChange
DROP TABLE #PriceChange
```
|
```
with cteprice as
( select *,
row_number() over(partition by product order by validfrom) as rnk
from #PriceChange)
select p1.Product,
sum((p1.CurrentPrice - p2.CurrentPrice)* p2.SoldBetweenValidDates)
from cteprice p1
join cteprice p2 on p1.rnk + 1 = p2.rnk and p1.Product = p2.Product
group by p1.Product
```
|
Hope this helps:
```
WITH cte AS (
SELECT
p1.Product
, OverallDiff
FROM
#PriceChange p1
CROSS APPLY
(
SELECT
MIN(p2.ValidFrom) as ValidP2
FROM
#PriceChange p2
WHERE
p1.Product = p2.Product
AND p1.ValidTo < ISNULL(p2.ValidFrom,'9999-12-31')
) as p2
CROSS APPLY
(
SELECT
(p1.CurrentPrice-p3.CurrentPrice) * (p1.SoldBetweenValidDates - p3.SoldBetweenValidDates) AS OverallDiff
FROM
#PriceChange p3
WHERE
p1.Product = p3.Product
AND p2.ValidP2 = p3.ValidFrom
) AS p3
)
SELECT
Product
, SUM(OverallDiff) AS OverallDiff
FROM
cte
GROUP BY
Product
```
|
How do I have a running sum count/calculation?
|
[
"",
"sql",
"sql-server",
"sql-server-2008-r2",
""
] |
I have a table that lists absences (holidays) of all employees, and what we would like to find out is who is away today, and the date that they will return.
Unfortunately, absences aren't given IDs, so you can't just retrieve the max date from an absence ID if one of those dates is today.
However, absences **are** given an incrementing ID per day as they are input, so I need a query that will find the employeeID if there is an entry with today's date, then increment the AbsenceID column to find the max date on that absence.
Table Example (assuming today's date is 11/11/2014, UK format):
```
AbsenceID EmployeeID AbsenceDate
100 10 11/11/2014
101 10 12/11/2014
102 10 13/11/2014
103 10 14/11/2014
104 10 15/11/2014
107 21 11/11/2014
108 21 12/11/2014
120 05 11/11/2014
130 15 20/11/2014
140 10 01/03/2015
141 10 02/03/2015
142 10 03/03/2015
143 10 04/03/2015
```
So, from the above, we'd want the return dates to be:
```
EmployeeID ReturnDate
10 15/11/2014
21 12/11/2014
05 11/11/2014
```
Edit: note that the 140-143 range shouldn't be included in the results, as those rows are in the future and none of the dates in that absence range are today.
Presumably I need an iterative sub-function running on each entry with today's date where the employeeID matches.
|
So, based on what I believe you're asking, you want to return a list of the people that are off today and when they are expected back, based on the holidays that you have recorded in the system; this should work only on consecutive days.
## [SQL Fiddle Demo](http://sqlfiddle.com/#!6/8accc/4)
**Schema Setup**:
```
CREATE TABLE EmployeeAbsence
([AbsenceID] int, [EmployeeID] int, [AbsenceDate] DATETIME)
;
INSERT INTO EmployeeAbsence
([AbsenceID], [EmployeeID], [AbsenceDate])
VALUES
(100, 10, '2014-11-11'),
(101, 10, '2014-11-12'),
(102, 10, '2014-11-13'),
(103, 10, '2014-11-14'),
(104, 10, '2014-11-15'),
(107, 21, '2014-11-11'),
(108, 21, '2014-11-12'),
(120, 05, '2014-11-11'),
(130, 15, '2014-11-20')
;
```
**Recursive CTE to generate the output**:
```
;WITH cte AS (
SELECT EmployeeID, AbsenceDate
FROM dbo.EmployeeAbsence
WHERE AbsenceDate = CAST(GETDATE() AS DATE)
UNION ALL
SELECT e.EmployeeID, e.AbsenceDate
FROM cte
INNER JOIN dbo.EmployeeAbsence e ON e.EmployeeID = cte.EmployeeID
AND e.AbsenceDate = DATEADD(d,1,cte.AbsenceDate)
)
SELECT cte.EmployeeID, MAX(cte.AbsenceDate)
FROM cte
GROUP BY cte.EmployeeID
```
**[Results](http://sqlfiddle.com/#!6/8accc/4/0)**:
```
| EMPLOYEEID | Return Date |
|------------|---------------------------------|
| 5 | November, 11 2014 00:00:00+0000 |
| 10 | November, 15 2014 00:00:00+0000 |
| 21 | November, 12 2014 00:00:00+0000 |
```
**Explanation:**
The first `SELECT` in the CTE gets employees that are off today with this filter:
```
WHERE AbsenceDate = CAST(GETDATE() AS DATE)
```
This result set is then UNIONED back to the `EmployeeAbsence` table with a join that matches `EmployeeID` as well as the `AbsenceDate` + 1 day to find the consecutive days recursively using:
```
-- add a day to the cte.AbsenceDate from the first SELECT
e.AbsenceDate = DATEADD(d,1,cte.AbsenceDate)
```
The final `SELECT` simply groups the cte results by employee with the `MAX` AbsenceDate that has been calculated per employee.
```
SELECT cte.EmployeeID, MAX(cte.AbsenceDate)
FROM cte
GROUP BY cte.EmployeeID
```
**Excluding Weekends:**
I've done a quick test based on your comment and the below modification to the `INNER JOIN` within the CTE should exclude weekends when adding the extra days if it detects that adding a day will result in a Saturday:
```
INNER JOIN dbo.EmployeeAbsence e ON e.EmployeeID = cte.EmployeeID
AND e.AbsenceDate = CASE WHEN datepart(dw,DATEADD(d,1,cte.AbsenceDate)) = 7
THEN DATEADD(d,3,cte.AbsenceDate)
ELSE DATEADD(d,1,cte.AbsenceDate) END
```
So when you add a day: `datepart(dw,DATEADD(d,1,cte.AbsenceDate)) = 7`, if it results in Saturday (7), then you add 3 days instead of 1 to get Monday: `DATEADD(d,3,cte.AbsenceDate)`.
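The recursive CTE translates readily to other engines; a sketch using Python's sqlite3, where `date(x, '+1 day')` stands in for `DATEADD` and a bound `:today` stands in for `GETDATE()` (weekend handling omitted):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE absence (employee_id INTEGER, absence_date TEXT);
INSERT INTO absence VALUES
 (10,'2014-11-11'),(10,'2014-11-12'),(10,'2014-11-13'),
 (10,'2014-11-14'),(10,'2014-11-15'),
 (21,'2014-11-11'),(21,'2014-11-12'),
 (5,'2014-11-11'),
 (15,'2014-11-20'),
 (10,'2015-03-01');   -- a future absence that must not be picked up
""")

today = '2014-11-11'   # stand-in for CAST(GETDATE() AS DATE)
rows = con.execute("""
WITH RECURSIVE cte AS (
  SELECT employee_id, absence_date FROM absence WHERE absence_date = :today
  UNION ALL
  SELECT a.employee_id, a.absence_date
  FROM cte JOIN absence a
    ON a.employee_id = cte.employee_id
   AND a.absence_date = date(cte.absence_date, '+1 day')
)
SELECT employee_id, max(absence_date) AS return_date
FROM cte GROUP BY employee_id ORDER BY employee_id
""", {"today": today}).fetchall()
print(rows)
```

Employee 10's March 2015 rows are not reached because the recursion only follows day-by-day chains starting from today, which is exactly the behavior the question asks for.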
|
You'd need to do a few things to get this data into a usable format. You need to be able to work out where a group begins and ends. This is difficult with this example because there is no straight forward grouping column.
So that we can calculate when a group starts and ends, you need to create a CTE containing all the columns and also use `LAG()` to get the `AbsenceID` and `EmployeeID` from the previous row for each row. In this CTE you should also use `ROW_NUMBER()` at the same time so that we have a way to re-order the rows into the same order again.
Something like:
```
WITH
[AbsenceStage] AS (
SELECT [AbsenceID], [EmployeeID], [AbsenceDate]
,[RN] = ROW_NUMBER() OVER (ORDER BY [EmployeeID] ASC, [AbsenceDate] ASC, [AbsenceID] ASC)
,[AbsenceID_Prev] = LAG([AbsenceID]) OVER (ORDER BY [EmployeeID] ASC, [AbsenceDate] ASC, [AbsenceID] ASC)
,[EmployeeID_Prev] = LAG([EmployeeID]) OVER (ORDER BY [EmployeeID] ASC, [AbsenceDate] ASC, [AbsenceID] ASC)
FROM [HR_Absence]
)
```
Now that we have this we can compare each row to the previous to see if the current row is in a different "group" to the previous row.
The condition would be something like:
```
[EmployeeID_Prev] IS NULL -- We have a new group if the previous row is null
OR [EmployeeID_Prev] <> [EmployeeID] -- Or if the previous row is for a different employee
OR [AbsenceID_Prev] <> ([AbsenceID]-1) -- Or if the AbsenceID is not sequential
```
You can then use this to join the CTE to it's self to find the first row in each group with something like:
```
....
FROM [AbsenceStage] AS [Row]
INNER JOIN [AbsenceStage] AS [First]
ON ([First].[RN] = (
-- Get the first row before ([RN] Less that or equal to) this one where it is the start of a grouping
SELECT MAX([RN]) FROM [AbsenceStage]
WHERE [RN] <= [Row].[RN] AND (
[EmployeeID_Prev] IS NULL
OR [EmployeeID_Prev] <> [EmployeeID]
OR [AbsenceID_Prev] <> ([AbsenceID]-1)
)
))
...
```
You can then `GROUP BY` the `[First].[RN]` which will now act like a group id and allow you to get the start and end date of each absence group.
```
SELECT
[Row].[EmployeeID]
,MIN([Row].[AbsenceDate]) AS [Absence_Begin]
,MAX([Row].[AbsenceDate]) AS [Absence_End]
...
-- FROM and INNER JOIN from above
...
GROUP BY [First].[RN], [Row].[EmployeeID];
```
You could then put all that into a view giving you the `EmployeeID` with the Start and End date of each absence. You can then easily pull out the Employee's currently off with a:
```
WHERE CAST(CURRENT_TIMESTAMP AS date) BETWEEN [Absence_Begin] AND [Absence_End]
```
**[SQL Fiddle](http://sqlfiddle.com/#!6/83c84/1/0)**
|
Finding most recent date based on consecutive dates
|
[
"",
"sql",
"sql-server",
"greatest-n-per-group",
""
] |
I have two tables:
```
data a;
input a b c;
datalines;
1 2 .
;
run;
data b;
input a b c;
datalines;
1 . 3
;
run;
```
The result I want from these tables is replacing the missings by the values that are not missing:
```
a b c
-----
1 2 3
```
How can I do this with as little code as possible?
EDIT:
I wrote the code below and it works, but maybe there is simpler code for this.
```
%macro x;
%macro dummy; %mend dummy;
data _null_;
set x end=Last;
call symputx("name"||left(_N_),name);
if Last then call symputx("num",_n_);
run;
data c;
set a b;
run;
data c;
set c;
%do i=1 %to #
x&i=lag(&&name&i);
%end;
n=_n_;
run;
data c1 (drop= n %do i=1 %to # x&i %end;);
set c (where=(n=2));
%do i=1 %to #
if missing(&&name&i) and not missing(x&i) then &&name&i=x&i;
%end;
run;
%mend;
%x;
```
|
Using `proc SQL`, you can do this with aggregation:
```
proc sql;
select max(a) as a, max(b) as b, max(c) as c
from (select a, b, c from a union all
select a, b, c from b
) x;
```
If, as I suspect, the first column is an id for matching the two tables, you should instead do:
```
proc sql;
select coalesce(a.a, b.a), coalesce(a.b, b.b) as b, coalesce(a.c, b.c) as c
from a full join
b
on a.a = b.a;
```
|
If the values are consistent, ie, you never have:
```
1 2 3
1 3 .
```
and/or are happy for them to be overwritten, then UPDATE is excellent for this.
```
data c;
update a b;
by a;
run;
```
`UPDATE` will only replace values with non-missing values, so `.` gets replaced by `3` but `2` is not replaced by `.`. Again assuming `a` is the ID variable as Gordon assumes.
You also can easily do this:
```
data c;
set a b;
by a;
retain b_1 c_1;
if first.a then do; *save the first b and c;
b_1=b;
c_1=c;
end;
else do; *now fill in missings using COALESCE which only replaces if missing;
b_1=coalesce(b_1,b); *use coalescec if this is a char var;
c_1=coalesce(c_1,c); *same;
end;
if last.a then output; *output last row;
drop b c;
rename
b_1=b
c_1=c
;
run;
```
This makes sure you keep the *first* instance of any particular value, if they may be different (the opposite of `update` which keeps the *last* instance, and different from the SQL solution which takes MAX specifically). All three should give the same result if you have only identical values. Data step options should be a bit faster than the SQL option, I expect, as they're both one pass solutions with no matching required (though it probably doesn't matter).
|
Replace missings SAS
|
[
"",
"sql",
"sas",
""
] |
I have a table with two DateTime columns, `_dateEntry` and `_dateUpdate`.
I have to create a query and choose the right datetime column based on this condition: take the most recent of `_dateEntry` and `_dateUpdate`. `_dateUpdate` might be null, so I wrote this condition:
```
SELECT id, description, ISNULL(_dateUpdate, _dateEntry) as date FROM mytable
```
but this is not enough, as I also have to get the most recent one. How can I modify my query?
|
I suppose \_dateEntry is never NULL. If it can be NULL, tell me and I will change my query.
Try this:
```
SELECT id, description,
CASE
WHEN _dateUpdate IS NULL THEN _dateEntry
    WHEN _dateUpdate > _dateEntry THEN _dateUpdate
ELSE _dateEntry
END as date
FROM mytable
ORDER BY
CASE
WHEN _dateUpdate IS NULL THEN _dateEntry
    WHEN _dateUpdate > _dateEntry THEN _dateUpdate
ELSE _dateEntry
END desc
```
So if \_dateUpdate IS NULL you'll get the value of \_dateEntry, so the order is on two fields
|
You can use a case to return what you want:
```
SELECT
id
, description
, CASE WHEN _dateEntry > ISNULL(_dateUpdate,0)
THEN _dateEntry ELSE _dateUpdate END AS date
FROM
mytable
```
|
Query between datetime columns
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have an application with forty questions. I'm holding the questions in tbQuestion and their corresponding answers in tbAnswer along with what answer the user chose in tbResult.
Here's the table structure:
```
tbQuestion: tbAnswer: tbResult:
QuestionID AnswerID ResultID
QuestionText QuestionID QuestionID
AnswerText AnswerID
UserID
```
I want to select all the questions but indicate whether they've been answered or not, based on whether there is a row in tbResult. It's something along the lines of the query below, but it's only selecting questions I have answered. I want to use AS Answered as an indicator of whether a question has been answered: blank if it's not answered, the AnswerID if it has been answered.
```
SELECT tbQuestion.QuestionID, QuestionText, tbAnswer.AnswerID, AnswerText, tbResult.AnswerID AS Answered
FROM tbQuestion
JOIN tbAnswer ON tbQuestion.QuestionID = tbAnswer.QuestionID
LEFT OUTER JOIN tbResult ON tbQuestion.QuestionID = tbResult.QuestionID
WHERE tbResult.UserID = 1234567
```
Also, there are 39 radio input answers (one answer per question) and 1 question with checkboxes where the user can have multiple answers for one question, so I'll have to know the AnswerID of each answer they gave.
|
Move the condition in your WHERE clause to the ON clause of the outer join:
```
select q.questionid,
q.questiontext,
a.answerid,
a.answertext,
       r.answerid as answered
from tbquestion q
join tbanswer a
on q.questionid = a.questionid
left join tbresult r
on q.questionid = r.questionid
and r.userid = 1234567 -- moved to on clause
```
Right now your query is only showing questions actually answered by the user, because your criteria specifying the user is in the WHERE clause. If the user never fills out an answer for a given question they won't have a row on tbresult (for the given question) so the row will be null. In order to allow nulls, you place such criteria into the on clause of an outer join.
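The difference between filtering in the ON clause and the WHERE clause can be demonstrated directly; a sketch with Python's sqlite3 and made-up question/result rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE question (qid INTEGER, text TEXT);
CREATE TABLE result (qid INTEGER, user_id INTEGER, answer_id INTEGER);
INSERT INTO question VALUES (1,'Q1'),(2,'Q2');
INSERT INTO result VALUES (1, 1234567, 42);   -- user answered only question 1
""")

# user filter in the ON clause: unanswered questions survive with NULL
on_clause = con.execute("""
SELECT q.qid, r.answer_id
FROM question q
LEFT JOIN result r ON q.qid = r.qid AND r.user_id = 1234567
ORDER BY q.qid
""").fetchall()

# the same filter in WHERE silently turns the outer join into an inner join
where_clause = con.execute("""
SELECT q.qid, r.answer_id
FROM question q
LEFT JOIN result r ON q.qid = r.qid
WHERE r.user_id = 1234567
""").fetchall()

print(on_clause)    # [(1, 42), (2, None)]
print(where_clause) # [(1, 42)]
```

The WHERE version drops question 2 because its `r.user_id` is NULL and NULL never satisfies the equality, which is precisely the bug described above.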
|
Try just LEFT OUTER JOIN tbAnswer
```
SELECT tbQuestion.QuestionID ,
QuestionText ,
tbAnswer.AnswerID ,
AnswerText ,
tbResult.AnseredID AS Answered
FROM tbQuestion
LEFT OUTER JOIN tbAnswer
ON tbQuestion.QuestionID = tbAnswer.QuestionID
LEFT OUTER JOIN tbResult
ON tbQuestion.QuestionID = tbResult.QuestionID
WHERE tbResult.UserID = 1234567
```
|
Need SQL query for a quiz application
|
[
"",
"sql",
"sql-server",
""
] |
# The query needs to check for null values and replace each with the most recent non-null value.
For example, consider the source data provided below.
The query checks for a null value and finds one in the second row; it should replace that null with the most recent non-null data, which is "Colmin".
It then checks again, finds a null in the third row, and updates it with the most recent non-null data, which is again "Colmin".
Thanks in advance!
# Input
```
source data
Colmin
NULL
NULL
NULL
NULL
columnxy
column99
NULL
NULL
money
NULL
NULL
NULL
start end
NULL
```
so the output should look like..
# output
```
Ouput data
Colmin
Colmin
Colmin
Colmin
Colmin
Colmin
columnxy
column99
column99
column99
money
money
money
money
start end
start end
```
|
Try this:
```
declare @input table (id bigint not null identity(1,1), OutputData nvarchar(16) null)
insert @input select 'Colmin'
insert @input select NULL
insert @input select NULL
insert @input select NULL
insert @input select NULL
insert @input select 'columnxy'
insert @input select 'column99'
insert @input select NULL
insert @input select NULL
insert @input select 'money'
insert @input select NULL
insert @input select NULL
insert @input select NULL
insert @input select 'start end'
insert @input select NULL
--where a's value is null, replace with a non-null from b
select coalesce(a.OutputData, b.OutputData)
--return all rows from input as a
from @input a
--return all non-null values as b
left outer join
(
select id, OutputData
from @input
where OutputData is not null
) b
--the non-null value should come before the null value (< or <= works equally here)
on b.id <= a.id
--and there shouldn't be a non-null value between b and a's records (i.e. b is the most recent before a)
and not exists (
select top 1 1
from @input c
where c.id between b.id and a.id
and c.id <> b.id
and c.OutputData is not null
)
--display the output in the required order
order by a.id
```
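The set-based query above is essentially a forward fill. As a sanity check of the intended behavior, here is the same rule as a short procedural sketch in plain Python (sample values shortened from the question's data):

```python
# Forward-fill: replace each NULL (None) with the most recent non-null value.
def forward_fill(values):
    filled, last = [], None
    for v in values:
        if v is not None:
            last = v          # remember the latest non-null value seen
        filled.append(last)   # emit it in place of any null
    return filled

data = ["Colmin", None, None, "columnxy", None, "money", None]
filled = forward_fill(data)
```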
|
What is "most recent"? I hope you've got some field to sort on; row numbers are not guaranteed to be in the right order! Here I use `orderField` to determine the row order.
```
UPDATE myTable
SET a = (
SELECT a
FROM myTable
WHERE a IS NOT NULL
AND orderField > (
SELECT orderField
FROM myTable
WHERE a IS NULL
ORDER BY orderField
LIMIT 1
)
ORDER BY orderField
LIMIT 1
)
WHERE a IS NULL
ORDER BY orderField
```
Something like this should do it, I hope. It's untested.
what it does:
1. Find "orderField" for first row with a = null
2. Find first a value (!= null) AFTER orderField from `1.`
3. Update a value from `1.` with value from `2.`
It should also work more simply:
```
UPDATE myTable t1
SET t1.a = (
SELECT t2.a
FROM myTable t2
WHERE t2.a IS NOT NULL
AND t2.orderField > t1.orderField
ORDER BY t2.orderField
LIMIT 1
)
WHERE t1.a IS NULL
```
|
Check for null and replace the most recent not null value
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I've a table with 2 columns in `SQL`
```
+------+--------+
| WEEK | OUTPUT |
+------+--------+
| 1 | 10 |
| 2 | 20 |
| 3 | 30 |
| 4 | 40 |
| 5 | 50 |
| 6 | 50 |
+------+--------+
```
How do I calculate a sum of the output for the current week and the 2 weeks before it (e.g. on week 3, it would sum the output for weeks 3, 2 and 1)? I've seen many tutorials on moving averages, but they all use dates; in my case I want to use the week number (int). Is that possible?
Thanks!
|
I think you want something like this :
```
SELECT *,
(SELECT Sum(output)
FROM table1 b
WHERE b.week IN( a.week, a.week - 1, a.week - 2 )) AS SUM
FROM table1 a
```
OR
In clause can be converted to `between a.week-2 and a.week`.
[sql fiddle](http://sqlfiddle.com/#!3/4f2b8/4)
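If you want to verify the idea quickly, here is a minimal runnable sketch using Python's sqlite3 module (the table name and data mirror the question; sqlite accepts the same correlated-subquery form):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1 (week INTEGER, output INTEGER)")
con.executemany("INSERT INTO table1 VALUES (?, ?)",
                [(1, 10), (2, 20), (3, 30), (4, 40), (5, 50), (6, 50)])

# For each row, the correlated subquery sums the current week and the two before it.
rows = con.execute("""
    SELECT a.week,
           (SELECT SUM(b.output) FROM table1 b
            WHERE b.week BETWEEN a.week - 2 AND a.week) AS moving_sum
    FROM table1 a ORDER BY a.week
""").fetchall()
```

Weeks 1 and 2 simply sum whatever earlier weeks exist, so their windows are shorter.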
|
You can use a self-join. The idea is to put you table beside itself with a condition that brings matching rows in a single row:
```
SELECT * FROM [output] o1
INNER JOIN [output] o2 ON o1.Week between o2.Week and o2.Week + 2
```
this select will produce this output:
```
o1.Week o1.Output o2.Week o2.Output
--------------------------------------------
1 10 1 10
2 20 1 10
2 20 2 20
3 30 1 10
3 30 2 20
3 30 3 30
4 40 2 20
4 40 3 30
4 40 4 40
```
and so on. Note that for weeks 1 and 2 there aren't previous weeks available.
Now you should just group the data by `o1.Week` and get the `SUM`:
```
SELECT o1.Week, SUM(o2.Output)
FROM [output] o1
INNER JOIN [output] o2 ON o1.Week between o2.Week and o2.Week + 2
GROUP BY o1.Week
```
|
How to calculate moving average in SQL?
|
[
"",
"sql",
"sql-server",
"moving-average",
""
] |
I need to prevent the Order By SettingID at the bottom of this SQL from re-ordering everything. I want to order my first select by UserID and my second select by OrderID. How do I put a bound on the bottom Order By? I tried parentheses, but I'm a T-SQL beginner so couldn't get that figured out. I tried using @ResultA and @ResultB and then selecting them into @Result, but the end result is the same: the bottom Order By applies to the entire @Result. Would temp tables be the way to go, and if so, how?
```
DECLARE @Result TABLE (SettingID INT, GroupID INT, UserID INT)
INSERT INTO @Result
SELECT
SettingID,
GroupID,
UserID
FROM Table1
WHERE (GroupID = @GroupID AND UserID = @UserID)
ORDER BY UserID
INSERT INTO @Result
SELECT
SettingID,
GroupID,
UserID
FROM Tabel1
WHERE (GroupID IS NULL)
ORDER BY SettingID DESC -- This Order By is reordering everything in @Result.
SELECT * FROM @Result
```
|
You should not depend on the ordering of rows in a table, even table variables. In SQL, tables represent *unordered* sets. You can do what you want with a single query:
```
SELECT SettingID, GroupID, UserID
FROM Table1
ORDER BY (CASE WHEN (GroupID = @GroupID AND UserID = @UserID) THEN 1 ELSE 0 END) DESC,
GroupID
```
You can put this into a table variable:
```
INSERT INTO @Result
SELECT SettingID, GroupID, UserID
FROM Table1
ORDER BY (CASE WHEN (GroupID = @GroupID AND UserID = @UserID) THEN 1 ELSE 0 END) DESC,
GroupID;
```
However, you cannot assume that:
```
SELECT *
FROM @Result
```
will be in any particular order, such as insert order.
There is a trick if you want to do that. Create an identity column in the temporary table and order by that:
```
DECLARE @Result TABLE (
ResultId INT IDENTITY(1, 1) PRIMARY KEY,
SettingID INT,
GroupID INT,
UserID INT
);
INSERT INTO @Result(SettingID, GroupID, UserID)
SELECT SettingID, GroupID, UserID
FROM Table1
ORDER BY (CASE WHEN (GroupID = @GroupID AND UserID = @UserID) THEN 1 ELSE 0 END) DESC,
GroupID;
SELECT *
FROM @Result
ORDER BY ResultId;
```
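To illustrate the identity-column trick, here is a small sketch using Python's sqlite3 module (table and column names are simplified; sqlite's AUTOINCREMENT plays the role of SQL Server's IDENTITY here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# result_id plays the role of the IDENTITY column: it records insertion order.
con.execute("""CREATE TABLE result (
    result_id INTEGER PRIMARY KEY AUTOINCREMENT,
    setting_id INTEGER)""")
for s in (30, 10, 20):  # deliberately inserted in a non-sorted order
    con.execute("INSERT INTO result (setting_id) VALUES (?)", (s,))

# Ordering by the identity column reproduces insertion order deterministically.
rows = con.execute(
    "SELECT setting_id FROM result ORDER BY result_id").fetchall()
```

Without the `ORDER BY result_id`, the engine is free to return the rows in any order.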
|
Use commas to add ORDER BY layers:
```
SELECT * FROM @Result ORDER BY UserID, OrderID
```
You can and should remove the order by in the queries that populate your table variable.
|
How do I prevent the bottom Order By from affecting everything in @Result?
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-order-by",
""
] |
I have table
```
id color image_id
1 red 2
2 green 2
3 pink 2
4 black 2
5 gray 2
6 orange 2
7 purple 2
8 yellow 2
9 greenish 2
10 white 2
select image_id,count(image_id) from colors group by image_id having count(image_id) > 5
```
If the count is greater than 5, then delete the rows beyond the first 5.
for example
```
6 orange 2
7 purple 2
8 yellow 2
9 greenish 2
10 white 2
```
it should be deleted
|
You want to keep the first five rows for each image id. Here is a method that uses variables:
```
delete i
from images i join
(select i.*,
(@rn := if(@id = id, @rn + 1,
if(@id := id, 0, 0)
)
) as rn
from images i cross join
(select @id := 0, @rn := 0) vars
order by image_id, id
) ii
on ii.id = i.id
where ii.rn > 5;
```
|
```
delete from colors
where id not in
(select id
from colors
order by id
limit 5)
```
|
Delete if image_id count is greater than 5?
|
[
"",
"mysql",
"sql",
"select",
""
] |
I am doing analysis on the Stack Overflow dump.
Problem statement: I have 4 tables and require result in the format given.
```
Table 1: UserID Year QuestionsOnTopicA
Table 2: UserID Year AnswersOnTopicA
Table 3: UserID Year QuestionsOnTopicB
Table 4: UserID Year AnswersOnTopicB
```
Desired Output:
```
UserID Year QuestionsOnTopicA AnswersOnTopicA QuestionsOnTopicB AnswersOnTopicB
```
UserID column should have entries from all the 4 tables.
I tried performing inner and outer joins on the tables, but the results were incorrect.
Inner join: returns only UserIDs present in table 1.
Outer join: returns the other columns only for UserIDs in table 1.
Not sure if a union makes sense in this scenario.
Queries are being executed on data.stackexchange.com/stackoverflow
Example
Table 1: 1001, 2010, 5 || 1001, 2011, 3 || 1002, 2010, 4
Table 2: 1001, 2010, 10 || 1001, 2011, 7 || 1002, 2010, 5
Table 3: 1002, 2010, 5
Table 4: 1001, 2010, 10 || 1004, 2011, 5
Output:
1001, 2010, 5 , 10, 0, 10
1001, 2011, 3, 7, 0, 0
1002, 2010, 4, 5, 5, 0
1004, 2011, 0, 0, 0, 5
|
Ok, this works as intended:
```
SELECT COALESCE(A.UserID,B.UserID,C.UserID,D.UserID) UserID,
COALESCE(A.[Year],B.[Year],C.[Year],D.[Year]) [Year],
ISNULL(A.QuestionsOnTopicA,0) QuestionsOnTopicA,
ISNULL(B.AnswersOnTopicA,0) AnswersOnTopicA,
ISNULL(C.QuestionsOnTopicB,0) QuestionsOnTopicB,
ISNULL(D.AnswersOnTopicB,0) AnswersOnTopicB
FROM Table1 A
FULL JOIN Table2 B
ON A.UserID = B.UserID
AND A.[Year] = B.[Year]
FULL JOIN Table3 C
ON COALESCE(A.UserID,B.UserID) = C.UserID
AND COALESCE(A.[Year],B.[Year]) = C.[Year]
FULL JOIN Table4 D
ON COALESCE(A.UserID,B.UserID,C.UserID) = D.UserID
AND COALESCE(A.[Year],B.[Year],C.[Year]) = D.[Year]
```
[**Here is a sqlfiddle**](http://sqlfiddle.com/#!3/0bd79) with a demo of this.
And the results are:
```
╔════════╦══════╦═══════════════════╦═════════════════╦═══════════════════╦═════════════════╗
║ UserID ║ Year ║ QuestionsOnTopicA ║ AnswersOnTopicA ║ QuestionsOnTopicB ║ AnswersOnTopicB ║
╠════════╬══════╬═══════════════════╬═════════════════╬═══════════════════╬═════════════════╣
║ 1001 ║ 2010 ║ 5 ║ 10 ║ 0 ║ 10 ║
║ 1001 ║ 2011 ║ 3 ║ 7 ║ 0 ║ 0 ║
║ 1002 ║ 2010 ║ 4 ║ 5 ║ 5 ║ 0 ║
║ 1004 ║ 2011 ║ 0 ║ 0 ║ 0 ║ 5 ║
╚════════╩══════╩═══════════════════╩═════════════════╩═══════════════════╩═════════════════╝
```
|
Maybe use this SQL?
```
SELECT a.UserID, a.Year,
a.QuestionsOnTopicA,
b.AnswersOnTopicA,
c.QuestionsOnTopicB,
d.AnswersOnTopicB
FROM Table 1 a,
Table 2 b,
Table 3 c,
Table 4 d
WHERE a.UserID = b.UserID
AND b.UserID = c.UserID
AND c.UserID = d.UserID
AND d.UserID = a.UserID
```
|
Return results from multiple tables
|
[
"",
"sql",
"database",
"join",
"data.stackexchange.com",
""
] |
I am having trouble figuring out a query to only display consecutive dates (minimum 3) in SQL. After searching stack exchange, there are a few solutions but I can't get them to work exactly how I want it to. Consider the following table (actual data and table names changed for security):
```
code food date
------ ------ ------
ABC123 Sushi 09/28/2013
ABC123 Sushi 09/29/2013
ABC123 Sushi 09/30/2013
ABC123 Sushi 10/01/2013
BCD234 Burger 10/05/2013
BCD234 Burger 10/10/2013
BCD234 Burger 10/27/2013
BCD234 Fries 10/05/2013
BCD234 Fries 10/06/2013
BCD234 Fries 10/10/2013
CDE345 Steak 10/15/2013
CDE345 Steak 10/16/2013
CDE345 Steak 10/17/2013
CDE345 Steak 10/19/2013
CDE345 Steak 10/20/2013
DEF456 Pasta 09/05/2013
DEF456 Pasta 09/06/2013
DEF456 Pasta 09/10/2013
DEF456 Burrito 09/09/2013
DEF456 Burrito 09/10/2013
DEF456 Burrito 09/11/2013
```
Only this should be displayed:
```
code food date
------ ------ ------
ABC123 Sushi 09/28/2013
ABC123 Sushi 09/29/2013
ABC123 Sushi 09/30/2013
ABC123 Sushi 10/01/2013
CDE345 Steak 10/15/2013
CDE345 Steak 10/16/2013
CDE345 Steak 10/17/2013
DEF456 Burrito 09/09/2013
DEF456 Burrito 09/10/2013
DEF456 Burrito 09/11/2013
```
Considering that the code, food, and date are all variable at any given time, what would be the query to create a result as shown above? The query should only find minimum 3 consecutive dates for each given code and food pair (key).
I tried messing around with one of the queries on Stack Exchange:
```
select code, grp, count(*) as NumInSequence, min(date), max(date)
from (select t.*, (date - row_number() over (partition by code order by date)) as grp
from #TempTable t
) t
group by code, grp
```
...but I get an error regarding converting data type varchar to bigint (which is probably due to the code being alphanumeric as opposed to just a regular int ID). Also, I am assuming that the above code wouldn't give me the exact result anyway.
Please advise, and thank you for your help.
|
You can do this using window functions. You can identify groups of consecutive dates by subtracting `row_number()` from the date. With the group, you can then count the number of rows, and choose only those that have 3 or more rows:
```
select code, food, date
from (select t.*, count(*) over (partition by code, food, grp) as cnt
from (select t.*,
dateadd(day, - row_number() over (partition by code, food order by date), date) as grp
from #temptable t
) t
) t
where cnt >= 3;
```
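The "date minus row number" trick is a classic gaps-and-islands technique. As an illustration of the underlying idea (not the T-SQL itself), here is a short plain-Python sketch using one of the question's sample groups:

```python
from datetime import date, timedelta

# Within a run of consecutive dates, (date - row index) is constant,
# so it can serve as a group key.
def runs_of_at_least(dates, n):
    groups = {}
    for i, d in enumerate(sorted(dates)):
        groups.setdefault(d - timedelta(days=i), []).append(d)
    return [run for run in groups.values() if len(run) >= n]

# The CDE345/Steak dates from the question:
steak = [date(2013, 10, 15), date(2013, 10, 16), date(2013, 10, 17),
         date(2013, 10, 19), date(2013, 10, 20)]
runs = runs_of_at_least(steak, 3)
```

Only the Oct 15-17 run survives the minimum-length filter, matching the desired output.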
|
Simple change to your original code should make it work:
```
select code, FOOD,grp, count(*) as NumInSequence, min([date]) AS MIN_date, max([date]) AS MAX_Date
from (select CODE,FOOD,[DATE],(DATEDIFF(D,'1/1/1900',[date])) - row_number() over (partition by code,FOOD order by [date]) as grp from @T t ) t
group by code, FOOD, grp
HAVING count(*) >= 3
order by code
```
By converting the inner date to an int via days from some base date '1/1/1900', an int value is created that can now be used within your existing code!
Additionally I added Food Group to the row\_number() section to find only matches that also include the user/Item Combo.
|
Find All Consecutive Dates in SQL
|
[
"",
"sql",
"sql-server",
""
] |
I am writing a feature for a store website that will suggest other products that other customers bought with the products in the current customer's shopping cart.
My plan is to have a job that trawls the orders database and updates a many-to-many table tracking how often each pair of products were associated with one another. I thought to have a structure like this:
```
+--------------+--------------+----------------+
| product_a_id | product_b_id | times_together |
+--------------+--------------+----------------+
| 12 | 53 | 118 |
+--------------+--------------+----------------+
```
Then I realized that I could not necessarily avoid a duplication of data in the table definition with rows like this:
```
+--------------+--------------+----------------+
| product_a_id | product_b_id | times_together |
+--------------+--------------+----------------+
| 53 | 12 | 118 |
+--------------+--------------+----------------+
```
So to increment an association from a new order, I have to do two queries:
```
UPDATE also_bought SET times_together = times_together + 1 WHERE product_a_id = 12 AND product_b_id = 53;
UPDATE also_bought SET times_together = times_together + 1 WHERE product_a_id = 53 AND product_b_id = 12;
```
Is there a more elegant structure where I could update with only one query and avoid rows that necessarily duplicate data in the table?
|
In your `also_bought` table, add a `check` constraint to make sure that `product_a_id < product_b_id`. Make your job add entries in that order. That avoids duplication in the table.
But since you want to be able to look up pairs of product IDs in a simple way, create a view that's a union of `also_bought` with itself, only with the ID columns reversed:
```
create view vw_also_bought as
select a as product_a_id, b as product_b_id, times_together
from (
select product_a_id as a, product_b_id as b, times_together
from also_bought
union all
select product_b_id as a, product_a_id as b, times_together
from also_bought
)
```
Now you can look up *any* product ID in a single column and get its corresponding paired product ID and buy counts from the other columns.
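Here is a minimal sketch of this design using Python's sqlite3 module (the CHECK constraint and view syntax below are sqlite's, which happen to match closely; data values are taken from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE also_bought (
    product_a_id INTEGER,
    product_b_id INTEGER,
    times_together INTEGER,
    CHECK (product_a_id < product_b_id)  -- canonical order: no flipped duplicates
);
INSERT INTO also_bought VALUES (12, 53, 118);
CREATE VIEW vw_also_bought AS
    SELECT product_a_id, product_b_id, times_together FROM also_bought
    UNION ALL
    SELECT product_b_id, product_a_id, times_together FROM also_bought;
""")

# A single UPDATE now suffices...
con.execute("""UPDATE also_bought SET times_together = times_together + 1
               WHERE product_a_id = 12 AND product_b_id = 53""")
# ...and either product id can be looked up in the view's first column.
rows = con.execute("""SELECT product_b_id, times_together
                      FROM vw_also_bought WHERE product_a_id = 53""").fetchall()
```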
|
You could exclude flipped pairs altogether with:
```
product_a_id <> product_b_id
AND product_a_id < product_b_id
```
You could also do this in one `UPDATE`:
```
SET times_together = times_together + 1 WHERE product_a_id IN (12,53) AND product_b_id IN (12,53);
```
|
avoiding duplicate data in a "customers also bought" table
|
[
"",
"sql",
""
] |
Below is the result set from `SELECT` query,
```
mysql> select * from mytable where userid =242 ;
+--------+-----------------------------+------------+---------------------+---------------------+
| UserId | ActiveLinks | ModifiedBy | DateCreated | DateModified |
+--------+-----------------------------+------------+---------------------+---------------------+
| 242 | 1|2|4|6|9|15|22|33|43|57|58 | 66 | 2013-11-28 16:17:25 | 2013-11-28 16:17:25 |
+--------+-----------------------------+------------+---------------------+---------------------+
```
What I want is to `SELECT` the records by splitting the **Active links** columns and associating it with **UserId** in the below format,
*eg*,
```
UserId ActiveLinks
242 1
242 2
242 4
242 6
```
Can anyone help me with this query? As of now, nothing comes to mind. Thanks.
|
Dealing with lists stored in data is a pain. In MySQL, you can use `substring_index()`. The following should do what you want:
```
SELECT userid,
substring_index(substring_index(l.ActiveLinks, '||', n.n), '|', -1) as link
FROM (select 1 as n union all select 2 union all select 3 union all select 4) n join
ipadminuserslinks l
on length(l.ActiveLinks) - length(replace(l.ActiveLinks, '||', '')) + 1 <= n.n
WHERE userid = 242;
```
The first subquery generates a bunch of numbers, which you need. You may have to increase the size of this list.
The `on` clause limits the numbers to the number of elements in the list.
As you can probably tell, this is rather complicated. It is much easier to use a junction table, which is the relational way to store this type of information.
|
If this is just a list of values that do not relate to anything in another table, I would do it the same way as Gordon (if need be, you can cross join the subquery that generates the numbers to easily produce far larger ranges). One minor issue is that if the range of numbers is bigger than the number of delimited values in a row, then the last value will be repeated (easily removed using DISTINCT in this case; it is more complicated when there are duplicate values that you want to keep).
However, if the delimited values relate to another table (such as being the id field of another table), then you could do it this way:
```
SELECT a.UserId, b.link_id
FROM mytable a
LEFT OUTER JOIN my_link_table b
ON FIND_IN_SET(b.link_id, replace(a.ActiveLinks, '|', ','))
```
I.e., use FIND\_IN\_SET to join your table with the related table, in this case converting any | symbols used as delimiters to commas so that FIND\_IN\_SET can work.
|
MySql SELECT by Spliting multiple values separated by ||
|
[
"",
"mysql",
"sql",
"database",
""
] |
I have two tables: one that lists the transactions and one that lists those that are cancelled. How do I show only the transactions that are not cancelled and omit the cancelled ones?

I just want to show the transactions that are not cancelled, in this case the transaction with
transaction\_id = 1.
|
```
SELECT *
FROM TRANSACTION_TABLE
WHERE transaction_id
NOT IN (
SELECT transaction_id
FROM CANCELED_TABLE);
```
However, looking at your example, I would suggest adding a "canceled" field to TRANSACTION\_TABLE instead of using a separate table.
|
When building a somewhat complex query, the best way is to build it in stages, looking over the results you get at every stage. You know that to see the transactions that have not been canceled a left outer join is needed. Ok then, write it. (See `SQL Fiddle` for details):
```
select *
from Xaction x
left join Cancellations c
on c.TransID = x.ID;
```
Now look at the results. This gives you all of the transactions and either the details from the Cancellation table, if they exist, or NULLs. The rows you want to see are the ones where the cancellation data is null. So just filter on that condition:
```
select *
from Xaction x
left join Cancellations c
on c.TransID = x.ID
where c.TransID is null;
```
Then just clean up the *select* list to arrange the data to just the way you want to see it. Easy.
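Both the NOT IN form and this LEFT JOIN ... IS NULL form are anti-joins. Here is a minimal sqlite3 sketch (invented sample data) showing they agree on this example:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE xaction (id INTEGER, amount INTEGER);
CREATE TABLE cancellations (transid INTEGER);
INSERT INTO xaction VALUES (1, 100), (2, 200);
INSERT INTO cancellations VALUES (2);  -- transaction 2 is cancelled
""")

# NOT IN form:
not_in = con.execute("""
    SELECT id FROM xaction
    WHERE id NOT IN (SELECT transid FROM cancellations)""").fetchall()

# LEFT JOIN ... IS NULL form:
left_join = con.execute("""
    SELECT x.id FROM xaction x
    LEFT JOIN cancellations c ON c.transid = x.id
    WHERE c.transid IS NULL""").fetchall()
```

One caveat worth knowing: NOT IN behaves surprisingly if the subquery ever returns NULLs, which is one reason many people prefer the join form.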
|
SQL Left Join Omit Rows Found
|
[
"",
"sql",
"left-join",
"rows",
""
] |
I have an input table which looks as follows:
```
ID NAME PARENT_ID
------------------------------------
1 ABC 0
2 DEF 1
3 XYZ 1
4 PQR 2
5 GHI 0
6 JKL 5
7 MNO 6
8 STU 6
```
I want the output as follows, in the form of a comma-separated string:
```
ABC, ABC -> DEF, ABC -> XYZ, ABC -> DEF -> PQR
GHI, GHI -> JKL, GHI -> JKL -> MNO, GHI -> JKL -> STU
```
I tried a CTE and `Cross Apply` but with no success. Can someone please help me to achieve this?
Best regards
|
You can do something like this. I have also created an insert query for you, so you can run it in your SQL and check the output yourself.
**Data For Check The Output**
```
create table tbl
(
ID int,
NAME varchar(20),
PARENT_ID int
)
insert into
tbl
values
(1 ,'ABC', 0),
(2, 'DEF', 1),
(3 ,'XYZ', 1),
(4 ,'PQR', 2)
select * from tbl
```
**Query**
```
select table2.NAME as ChildName,table1.NAME as ParentName from tbl as table1
inner join tbl table2 on table1.ID = table2.PARENT_ID
```
|
If you want to retrieve a hierarchical list:
```
with cte_1(ID, PARENT_ID, LABEL)
as (
select ID, PARENT_ID, cast(NAME as varchar(512))
from a_table
where PARENT_ID = 0
union all
select a.ID, a.PARENT_ID, cast(b.LABEL + ' --> ' +a.NAME as varchar(512))
from a_table a
join cte_1 b on a.PARENT_ID = b.ID
)
select LABEL from cte_1
order by LABEL
```
This gives you :
```
'ABC'
'ABC --> DEF'
'ABC --> DEF --> PQR'
'ABC --> XYZ'
'GHI'
'GHI --> JKL'
'GHI --> JKL --> MNO'
'GHI --> JKL --> STU'
```
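As a quick check of the recursive part, here is a runnable sketch of essentially the same CTE using Python's sqlite3 module (sqlite spells string concatenation with `||` instead of `+` and uses WITH RECURSIVE; data from the question, truncated to the first tree):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE a_table (id INTEGER, name TEXT, parent_id INTEGER);
INSERT INTO a_table VALUES
    (1, 'ABC', 0), (2, 'DEF', 1), (3, 'XYZ', 1), (4, 'PQR', 2);
""")

# Anchor on the roots (parent_id = 0), then append each child to its parent's label.
rows = con.execute("""
    WITH RECURSIVE cte_1(id, label) AS (
        SELECT id, name FROM a_table WHERE parent_id = 0
        UNION ALL
        SELECT a.id, b.label || ' --> ' || a.name
        FROM a_table a JOIN cte_1 b ON a.parent_id = b.id
    )
    SELECT label FROM cte_1 ORDER BY label
""").fetchall()
```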
If you also want to concatenate the list into a single string, you can use more CTEs:
```
-- recursively build the hierarchical list
with cte_1(ID, PARENT_ID, LABEL)
as (
select ID, PARENT_ID, cast(NAME as varchar(512))
from a_table
where PARENT_ID = 0
union all
select a.ID, a.PARENT_ID, cast(b.LABEL + ' --> ' +a.NAME as varchar(512))
from a_table a
join cte_1 b on a.PARENT_ID = b.ID
),
-- order them
cte_2 (NR, LABEL)
as (
select row_number() over (order by LABEL) as NR,
LABEL
from cte_1
),
-- recursive concatenation
cte_3 (NR, LABEL) as (
select NR, cast(LABEL as varchar(1024))
from cte_2 where NR = 1
union all
select a.NR, cast(b.LABEL + ', ' + a.LABEL as varchar(1024))
from cte_2 a
join cte_3 b on a.NR = b.NR + 1
)
-- get the result
select top 1 LABEL
from cte_3
order by len(LABEL) desc
```
The result is :
```
'ABC, ABC --> DEF, ABC --> DEF --> PQR, ABC --> XYZ, GHI, GHI --> JKL, GHI --> JKL --> MNO, GHI --> JKL --> STU'
```
|
How to retrieve All childrens using a Select SQL Server query
|
[
"",
"sql",
"sql-server-2008",
"recursion",
""
] |
My fields are
```
ID | Name | StartTime | EndTime | Date | Description
```
I am looking for a way to select all rows with the same entries in all fields except for the ID.
I am not that familiar with SQL, so I tried [this](https://stackoverflow.com/questions/2112618/finding-duplicate-rows-in-sql-server) approach, but there only one field is relevant, not (as in my case) five.
My first idea was to try something like:
```
SELECT *
FROM Table
order by Name, Date, StartTime, EndTime, Description
```
If I looked through all the entries I would at least find the duplicates, but that is definitely not the best way to solve the problem.
|
This should do what you need:
```
select Name, Date, StartTime, EndTime, Description
from table
group by Name, Date, StartTime, EndTime, Description
having count(*) > 1
```
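Here is a minimal sqlite3 sketch of the GROUP BY ... HAVING pattern (invented sample rows, reduced to just the columns needed to show the idea):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, name TEXT, day TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?, ?)",
                [(1, 'a', 'Mon'), (2, 'a', 'Mon'), (3, 'b', 'Tue')])

# Group on every column except id; HAVING keeps only groups that occur more than once.
dupes = con.execute("""
    SELECT name, day, COUNT(*) FROM t
    GROUP BY name, day HAVING COUNT(*) > 1""").fetchall()
```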
|
This query should work for you:
```
SELECT ID, Name, StartTime, EndTime, Date, Description
FROM (
SELECT
ROW_NUMBER() OVER (PARTITION BY ID ORDER BY ID, Name, Date, StartTime, EndTime, Description) AS 'IndexNr'
, ID
, Name
, StartTime
, EndTime
, Date
, Description
FROM Table) AS ResultSet
WHERE ResultSet.IndexNr > 1
```
|
Find duplicate rows in SQL Server by multiple conditions
|
[
"",
"sql",
"sql-server",
""
] |
Rather than manually going through millions of records, is there a way to find the records which have a difference between the two amount columns? Is there a query to do it for me?
For example, Table A contains ColA and ColB, both holding amounts with the "Money" datatype. There are a few instances where the values differ (in the decimal part, since it's a money field).
Can somebody help me with a query to extract the rows where the money values differ?
|
As both are the same datatype, you just need to check ColA <> ColB.
```
SELECT * FROM TableA WHERE ColA <> ColB
```
|
To consider rows where ColA or ColB are null you can write as:
```
select ColA, ColB
from TableA
where Isnull(ColA,0) - Isnull(ColB ,0) != 0
```
`Demo`
|
How to identify or extract the rows where two column values (Amount fields) are different in SQL
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"comparison",
"compare",
""
] |
I got a table like this :
```
value | version1 | version2 | version3
--------|----------|----------|---------
aaa| 1| 1| 1
aaa| 1| 1| 2
aaa| 1| 2| 1
aaa| 1| 2| 2
aaa| 3| 1| 1
aaa| 3| 2| 1
aaa| 3| 2| 2
bbb| 1| 1| 1
bbb| 2| 1| 2
bbb| 2| 2| 1
ccc| 2| 1| 2
ccc| 3| 1| 1
ccc| 3| 2| 1
ccc| 3| 4| 1
```
What I want to do, is write a query that only selects the last record. With one version field it's easy to do with just a max function, but how can I do this with 3 version fields?
In this case, the "version1" field has the highest priority and "version3" the lowest; "version2" is in the middle. For example, a row with version1 = 2 always has priority over a row with version1 = 1, even if the latter has version3 = 10.
So in above table, I want to select the following records :
```
value | version1 | version2 | version3
--------|----------|----------|---------
aaa| 3| 2| 2
bbb| 2| 2| 1
ccc| 3| 4| 1
```
first look at the highest version1, then the highest version2 and then the highest version3.
Can someone please tell me the most efficient way to do this?
Thanks in advance!
|
Edited:
Looking at your further explanation, you need to partition your output per different value.
You can then use the `ROW_NUMBER()` function, limiting the output to just one row per different value.
Example:
```
SELECT
value, version1, version2, version3
FROM (
SELECT
ROW_NUMBER() OVER (
PARTITION BY value
ORDER BY version1 DESC, version2 DESC, version3 DESC
) RN,
value, version1, version2, version3
FROM tableName
) T
WHERE RN = 1
```
`ROW_NUMBER()` function will give an incremental number to each row of the result set,
while `PARTITION BY` clause will give an independent row number for each set of different values.
`ORDER BY` works by sorting your records using the first value declared (version 1), then skipping to second value in case of exact match of first value, and so on...
`DESC` keyword indicates to sort from higher to lower value.
Applying the final `WHERE` condition does the magic: just the top row for each different value is returned.
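The multi-column ORDER BY behaves like tuple comparison. As an illustration outside SQL, here is a short plain-Python sketch that picks the same "latest version" per value using a tuple key (sample rows abbreviated from the question):

```python
from itertools import groupby

# Sample rows abbreviated from the question: (value, version1, version2, version3).
rows = [("aaa", 1, 2, 2), ("aaa", 3, 1, 1), ("aaa", 3, 2, 2),
        ("bbb", 1, 1, 1), ("bbb", 2, 2, 1)]

# Python compares tuples element by element, which mirrors
# ORDER BY version1 DESC, version2 DESC, version3 DESC.
rows.sort(key=lambda r: r[0])              # group rows by value
latest = [max(group, key=lambda r: r[1:])  # highest (v1, v2, v3) per value
          for _, group in groupby(rows, key=lambda r: r[0])]
```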
|
This might help you :
```
select value,version1,version2,version3
from
(
select *,row_number() over(partition by value order by version1 desc, version2 desc, version3 desc) as rn
from yourtable
) as t
where rn = 1
```
|
select max version over several fields
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
The question is simple, I have two pieces of code and do not understand how they produce different results. In fact, the first piece of code in some instances produces over twice as many results as the second piece of code. Please help me to understand how these differ, I thought they would perform the same thing.
```
SELECT DISTINCT COUNT(*), EYSOURCESYSTEM
FROM MERCK_SCSA_1028_THIRD_RN
WHERE TRIM(PAYMENTTERMS) = '0' OR TRIM(PAYMENTTERMS) = '0.000'
AND POSTINGDATEYEAR = '2014'
GROUP BY EYSOURCESYSTEM
ORDER BY COUNT(*) DESC
```
versus
```
SELECT DISTINCT COUNT(*), EYSOURCESYSTEM
FROM MERCK_SCSA_1028_THIRD_RN
WHERE TRIM(PAYMENTTERMS) IN ('0', '0.000')
AND POSTINGDATEYEAR = '2014'
GROUP BY EYSOURCESYSTEM
ORDER BY COUNT(*) DESC
```
|
Your first is like
> paymentterms == '0' || paymentterms == '0.000' && postingdateyear = '2014'
(EDIT 1: Please see the [Operator Precedence](https://docs.oracle.com/cd/E17952_01/refman-5.1-en/operator-precedence.html) reference; note that `&&` is of a higher priority than `||`)
Your first statement will be evaluated as the following:
> paymentterms == '0' || (paymentterms == '0.000' && postingdateyear = '2014')
In contrast, your second is like
> (paymentterms == '0' || paymentterms == '0.000') && postingdateyear = '2014'
Because they are being evaluated in a different order, the first and second statements are not performing the same query.
Edit 2: Adding a Venn diagram.
You are getting more rows returned because of the extraneous `'0' but not '2014'` green section in the first diagram.

Your second query, with the `IN`, seems to be the correct one.
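The same precedence rule (AND binds tighter than OR) exists in most languages, so it is easy to demonstrate outside SQL. A small Python sketch with hypothetical values:

```python
# AND binds tighter than OR, in SQL just as in Python.
def rows_matched(terms, year):
    # hypothetical stand-ins for TRIM(PAYMENTTERMS) and POSTINGDATEYEAR
    ungrouped = terms == "0" or terms == "0.000" and year == "2014"
    grouped = (terms == "0" or terms == "0.000") and year == "2014"
    return ungrouped, grouped

# A row with terms '0' from 2013 slips through the ungrouped predicate:
r1 = rows_matched("0", "2013")
# Both forms agree when the year really is 2014:
r2 = rows_matched("0.000", "2014")
```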
|
Comment and other answer are correct, but just thought I'd try to clarify for you:
```
WHERE TRIM(PAYMENTTERMS) IN ('0', '0.000')
AND POSTINGDATEYEAR = '2014'
```
Includes only records with `POSTINGDATEYEAR` of 2014 that also have `PAYMENTTERMS` of either 0/0.0000,
While:
```
WHERE TRIM(PAYMENTTERMS) = '0'
OR TRIM(PAYMENTTERMS) = '0.000'
AND POSTINGDATEYEAR = '2014'
```
Includes all records with `PAYMENTTERMS` of 0 regardless of year, and also records with `PAYMENTTERMS` of 0.000 with `POSTINGDATEYEAR` of 2014.
|
Oracle Riddle, using 'IN' versus using 'Or'
|
[
"",
"sql",
"oracle",
""
] |
I have a `SELECT` Statement that shows details on Orders.
It uses the below code
```
SELECT Orders.OrderID, Orders.invoiceID, Items.itemName AS 'Item Name', Orders.quantity, DATENAME(mm, Orders.OrderDate) + ' ' + DATENAME(dd, Orders.OrderDate) + ', ' + DATENAME(yyyy, Orders.OrderDate) AS 'Order Date', (Orders.price * Orders.quantity) AS 'Total', Orders.delivered
FROM Orders
INNER JOIN Items
ON Orders.itemID = Items.itemID
ORDER BY Orders.orderID, Items.itemID ASC
```
Everything works fine, however I'm not sure how to get a "$" to show up to the left of the numbers in the "total" field.
All help will be appreciated.
Thanks,
Bryan
|
try this
```
SELECT Orders.OrderID,
Orders.invoiceID,
Items.itemName AS 'Item Name',
Orders.quantity,
DATENAME(mm, Orders.OrderDate) + ' ' + DATENAME(dd, Orders.OrderDate) + ', ' + DATENAME(yyyy, Orders.OrderDate) AS 'Order Date',
'$' + Convert(VARCHAR(50), Orders.price * Orders.quantity) AS 'Total',
Orders.delivered
FROM Orders
INNER JOIN Items ON Orders.itemID = Items.itemID
ORDER BY Orders.orderID, Items.itemID ASC
```
|
You could just explicitly concat it:
```
SELECT Orders.OrderID,
Orders.invoiceID,
Items.itemName AS 'Item Name',
Orders.quantity,
DATENAME(mm, Orders.OrderDate) + ' ' + DATENAME(dd, Orders.OrderDate) + ', ' + DATENAME(yyyy, Orders.OrderDate) AS 'Order Date',
'$' + CAST(Orders.price * Orders.quantity AS VARCHAR(100)) AS 'Total',
Orders.delivered
FROM Orders
INNER JOIN Items ON Orders.itemID = Items.itemID
ORDER BY Orders.orderID, Items.itemID ASC
```
|
Inserting "$" in a SELECT Statement SQL Server
|
[
"",
"sql",
"sql-server",
""
] |
I need to generate a random number between two numbers for each row.
For example:
select ColName, ***(random number between 1500 and 2000)*** from TableName
Thanks in advance
|
```
DECLARE @start INT
DECLARE @end INT
SET @start = 1500
SET @end = 2000
SELECT Round(( ( @end - @start - 1 ) * Rand() + @start ), 0)
```
**Update :** To get random number for each row.
```
select ABS(Checksum(NewID()) % (@end-@start)) + @start
```
|
Random number formula is
**RANDOM DECIMAL RANGE**
```
SELECT RAND()*(b-a)+a;
```
**Example**
```
SELECT RAND()*(25-10)+10; // Random between 10 to 25
```
**RANDOM INTEGER RANGE**
```
SELECT FLOOR(RAND()*(b-a)+a);
```
**Example**
```
SELECT FLOOR(RAND()*(25-10)+10);
```
|
Random Number Generation in SQL Server
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"random",
"sql-server-2012",
""
] |
**Query 1**
```
select test_id, count (student_id)*100/
(select count(distinct student_id)from test_taken)as "pass rate"
from test_taken
where result>50
group by test_id;
```
**Query 2**
```
select test_id, count (student_id)*100/
(select count(distinct student_id)from test_taken)as "fail rate"
from test_taken
where result<50
group by test_id;
```
I have the following
Table:
```
test_taken
Columns:test_id,student_id,result
```
I am looking to get the percentage pass rate and fail rate where a pass is result >50% and fail is result <50%.
I have the pass rate and fail rate as 2 separate queries but I want them combined into one query.
|
If the result of the two queries is equal in terms of column numbers and column types, you can use `UNION` to get one table result :
```
SELECT test_id,
COUNT (student_id) * 100 / (SELECT COUNT(DISTINCT student_id)
FROM test_taken)AS rate
FROM test_taken
WHERE result > 50
GROUP BY test_id
UNION
SELECT test_id,
COUNT (student_id) * 100 / (SELECT COUNT(DISTINCT student_id)
FROM test_taken)AS rate
FROM test_taken
WHERE result < 50
GROUP BY test_id;
```
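Both branches can also be folded into a single pass with conditional aggregation, as the second answer shows. A minimal sketch of that pattern using SQLite via Python's `sqlite3` (table and rows invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test_taken (test_id INT, student_id INT, result INT)")
conn.executemany("INSERT INTO test_taken VALUES (?, ?, ?)",
                 [(1, 1, 80), (1, 2, 40), (2, 1, 90), (2, 2, 95)])
# One pass over the table: each CASE picks out pass or fail rows per test.
rows = conn.execute("""
    SELECT test_id,
           SUM(CASE WHEN result > 50 THEN 1 ELSE 0 END) * 100 /
               (SELECT COUNT(DISTINCT student_id) FROM test_taken) AS pass_rate,
           SUM(CASE WHEN result < 50 THEN 1 ELSE 0 END) * 100 /
               (SELECT COUNT(DISTINCT student_id) FROM test_taken) AS fail_rate
    FROM test_taken
    GROUP BY test_id
    ORDER BY test_id
""").fetchall()
```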
|
```
SELECT test_id,
sum(case when result > 50 then 1 else 0 end) * 100 / (SELECT COUNT(DISTINCT student_id)
FROM test_taken) AS "pass rate",
sum(case when result < 50 then 1 else 0 end) * 100 / (SELECT COUNT(DISTINCT student_id)
FROM test_taken) AS "fail rate"
FROM test_taken
GROUP BY test_id;
```
|
Combining two queries into one query
|
[
"",
"sql",
""
] |
I have the following table:
```
select * from [Auction].[dbo].[Bids]
```

I need to select a row with the highest BidValue. When I do
```
SELECT bids.itemId, max(bids.[bidValue]) as HighestBid
FROM [Auction].[dbo].[Bids] bids
WHERE bids.itemId = 2
GROUP BY bids.itemId
```
I get the right row:

... but when I add two other fields it doesn't work (I know it shows 3 rows because of the Group by, but it throws an error if I don't include those fields in group by):
```
SELECT bids.itemId, max(bids.[bidValue]) as HighestBid, bids.submittedBy, bids.submittedOn
FROM [Auction].[dbo].[Bids] bids
WHERE bids.itemId = 2
GROUP BY bids.itemId, bids.submittedBy, bids.submittedOn
```

So, I need it do display one row with itemId, HighestBid, submittedBy, and submittedOn.
Any help is appreciated!
|
You can simply do:
```
select TOP 1 * from Bids where ItemId = 2 order by BidValue desc
```
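The same idea in SQLite, which spells T-SQL's `TOP 1` as `LIMIT 1` (bid data invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bids (itemId INT, bidValue REAL, submittedBy TEXT)")
conn.executemany("INSERT INTO bids VALUES (?, ?, ?)",
                 [(2, 10.0, "ann"), (2, 25.0, "bob"), (2, 17.5, "cat")])
# Sort descending and keep only the first row: the whole max-value row.
top = conn.execute(
    "SELECT itemId, bidValue, submittedBy FROM bids "
    "WHERE itemId = 2 ORDER BY bidValue DESC LIMIT 1"
).fetchone()
```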
|
Try this
```
select top 1 * from Bids order by BidValue desc
```
|
SQL statement to select only row containing max value in one of the columns
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a table `vehicles` with these columns
```
id license_number created_at updated_at pump_number
1 ABC123 2014-10-28 13:43:58.679812 2014-10-28 13:43:58.679812 1
2 XYZ224 2014-10-29 05:24:18.163042 2014-10-29 05:24:18.163042 2
3 AB1111 2014-10-29 06:50:56.891475 2014-10-29 06:50:56.891475 1
4 AB1112 2014-11-10 06:20:06.666361 2014-11-10 06:20:06.666361 1
5 ABC123 2014-11-10 06:21:10.160651 2014-11-10 06:21:10.160651 1
6 XXU111 2014-11-10 06:33:57.813795 2014-11-10 06:33:57.813795 2
7 ABC323 2014-11-11 09:02:48.509402 2014-11-11 09:02:48.509402 1
8 YYY123 2014-11-12 06:12:13.851119 2014-11-12 06:12:13.851119 2
9 ZZZ123 2014-11-12 06:12:42.099546 2014-11-12 06:12:42.099546 2
```
where id is primary key and autoupdated.
I want to fetch the two most recent records (by updated\_at) for every pump\_number in the table.
So the query should return only the rows with IDs 9, 8, 7, and 5.
|
I found a similar question [here](https://stackoverflow.com/questions/15969614/in-sql-how-to-select-the-top-2-rows-for-each-group); check that for reference.
This should work for you:
```
SELECT *
FROM vehicles t1
WHERE (SELECT Count(*)
FROM vehicles t2
WHERE t1.pump_number = t2.pump_number
AND t1.updated_at < t2.updated_at) < 2
```
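A runnable sketch of this correlated-count pattern against a trimmed version of the question's data, using SQLite via Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vehicles (id INT, pump_number INT, updated_at TEXT)")
conn.executemany("INSERT INTO vehicles VALUES (?, ?, ?)", [
    (1, 1, "2014-10-28"), (3, 1, "2014-10-29"), (4, 1, "2014-11-10"),
    (7, 1, "2014-11-11"), (6, 2, "2014-11-10"),
    (8, 2, "2014-11-12 06:12:13"), (9, 2, "2014-11-12 06:12:42"),
])
# Keep a row only if fewer than 2 rows of the same pump are newer than it.
ids = sorted(r[0] for r in conn.execute("""
    SELECT id FROM vehicles t1
    WHERE (SELECT COUNT(*) FROM vehicles t2
           WHERE t2.pump_number = t1.pump_number
             AND t2.updated_at > t1.updated_at) < 2
"""))
```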
|
Below query will return last 2 records for every pump number:
```
select * from (select * from vehicles order by updated_at desc)a group by pump_number
Union
select * from (select * from vehicles order by updated_at desc where id not in
(select id from (select * from vehicles order by updated_at desc) group by pump_number)b
)c group by pump_number
```
|
Order By and Limit on columns
|
[
"",
"mysql",
"sql",
"activerecord",
""
] |
I have two tables
Table 1.
```
Name Grade Math
---------------------------
Chris A 30
Julie AA 35
Ross AA 32
```
Table 2
```
Name English
-------------------------
Julie 29
Chris 22
Ross 20
```
I want to get the sum of the scores for English of students with AA grade for their math. How can I do that? Please help.
EDIT: I want to get the sum of the English scores of students with an AA grade for Math, i.e. for Julie and Ross the sum of English scores would be 29 + 20.
|
Try this,
```
SELECT Sum(B.English) Total
FROM #Table_1 A
JOIN #Table_2 B ON A.Name = B.Name
WHERE Grade = 'AA'
```
If you want the marks separately use this
```
SELECT A.Name,
Sum(B.English) Total
FROM #Table_1 A
JOIN #Table_2 B ON A.Name = B.Name
WHERE Grade = 'AA'
GROUP BY A.Name
```
|
Try
```
SELECT SUM(t2.English)
FROM Table1 t1
JOIN Table2 t2
ON t1.Name = t2.Name
WHERE t1.Grade = 'AA'
```
|
Extracting Data from two tables in SQL
|
[
"",
"sql",
"sql-server",
""
] |
I have a table `my_table`:
```
id x_id type
--- ---- ----
1 31 a
2 14 b
3 31 c
4 12 c
5 31 b
6 12 c
7 14 a
```
I want to return all rows where `x_id` has a `type` meeting all elements in the array `arr`.
**Edit: However, the actual values in that array and their number are unknown but would be possible values of the column `type`. There is always at least one value in `arr`.**
So for `arr = [b, c]`, I should get the following result:
```
id x_id type
--- ---- ----
3 31 c
5 31 b
```
How would I write a query to get such a result?
Note: I'm not really familiar with `sql` commands so please help me edit my question if it doesn't make sense.
|
```
select *
from my_table
where x_id in (select x.x_id
from (select x_id from my_table where type = 'b') x
join (select x_id from my_table where type = 'c') y
on x.x_id = y.x_id)
and type in ('b', 'c')
```
**Fiddle test:** <http://sqlfiddle.com/#!2/f8601/2/0>
This might conform better to variables (one variable holding the list of types in `'b','c'` format and another holding the count of the types). Is there any way you can get a variable to hold what's in the array as that value: `'b','c'`?
```
select *
from my_table
where x_id in (select x_id
from my_table
where type in ('b', 'c')
group by x_id
having count(distinct type) = 2)
and type in ('b', 'c')
```
**Fiddle:** <http://sqlfiddle.com/#!2/f8601/12/0>
So you would use the variable in the 2 `type in ()` spots (variable inside the `()`) and the variable holding the count in place of the 2.
|
Here's a somewhat ugly way to do it:
```
SELECT * FROM my_table
WHERE x_id IN
(
SELECT x_id FROM
(
SELECT x_id, type
FROM my_table x
WHERE type IN ('b', 'c')
GROUP BY x_id, type
) y
GROUP BY x_id
HAVING COUNT(x_id) = 2
) AND type IN ('b', 'c')
```
|
Filter for rows with column condition on an array
|
[
"",
"mysql",
"sql",
"dynamic-sql",
""
] |
```
;with CTEima(PersonId,IsEmployeeActive)
as
(select count(*)
from custom.viwSSAppsEmpMasterExtended vem
where vem.SupervisorPersonId = @p_PersonId
union all
select CTEima.IsEmployeeActive
from Custom.viwSSAppsEmpMasterExtended vem
join CTEima on CTEima.PersonId = vem.SupervisorPersonId
)
set @v_IsManager = (select count(*)from CTEima where IsEmployeeActive = 'Y')
```
here i am getting error like
Incorrect syntax near the keyword 'set'
tell me how to set values from CTE into variable
|
You can not set values with the `SET` keyword in the `SELECT` statement.
You can either assign the fields from the query to variables in the `SELECT` statement:
```
WITH CTE AS (
/** .. Your Query Here .. **/
)
SELECT
@YourVariable = FieldNameOrSubquery -- In short: Expression
FROM
CTE
```
**In this case all fields in the `SELECT` list should be assigned to a variable!**
Or you can assign a *single row-single column* `SELECT` statement's result to a variable by the `SET` keyword:
```
SET @YourVariable = (SELECT COUNT(1) FROM YourTable)
```
You can not mix the above options.
Furthermore, CTE is defined within the execution scope of a single `SELECT`, `INSERT`, `UPDATE`, or `DELETE` statement. (<http://msdn.microsoft.com/en-us/library/ms175972.aspx>). `SET` is not a `SELECT`/`INSERT`/`UPDATE`/`DELETE` statement, this is why SQL Server reports a syntax error (CTEs can not be defined in the scope of the SET statement.)
**The solution with your example query**
```
;WITH CTEima(PersonId,IsEmployeeActive) AS
( SELECT COUNT(*)
FROM custom.viwSSAppsEmpMasterExtended vem
WHERE vem.SupervisorPersonId = @p_PersonId
UNION ALL
SELECT CTEima.IsEmployeeActive
FROM Custom.viwSSAppsEmpMasterExtended vem
JOIN CTEima on CTEima.PersonId = vem.SupervisorPersonId
)
SELECT @v_IsManager = COUNT(*)
FROM CTEima
WHERE IsEmployeeActive = 'Y'
```
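The same pattern — a CTE's aggregate landing in a host variable — rendered with SQLite via Python's `sqlite3`; the table, column names, and data here are invented, and the recursive part of the original is trimmed away:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (person_id INT, is_active TEXT)")
conn.executemany("INSERT INTO emp VALUES (?, ?)", [(1, "Y"), (2, "Y"), (3, "N")])
# The SELECT that consumes the CTE delivers its value straight into a variable.
is_manager_count = conn.execute("""
    WITH active_reports AS (
        SELECT person_id FROM emp WHERE is_active = 'Y'
    )
    SELECT COUNT(*) FROM active_reports
""").fetchone()[0]
```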
|
Replace your last line with this:
```
select @v_IsManager = count(*) from CTEima where IsEmployeeActive = 'Y'
```
|
how to assign cte value to variable
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2008",
"common-table-expression",
""
] |
I have a data set where 1 column may have 2 different values for each ID. I want to know if there is a way to find out which IDs don't have one of the values.
Here's what a sample of my data set looks like.
```
Object ID | Relate Type
------------------------
1 | P
1 | S
2 | P
3 | S
4 | P
4 | S
```
And I want to write a query that will tell me which Object ID does not have a Relate Type of S.
This was my first attempt:
```
SELECT [obj_id], COUNT([obj_id]) AS [Successor Count]
FROM [Prim].[dbo].[relations]
WHERE [relate_type] = 'S'
GROUP BY [obj_id]
```
After thinking about it for a little bit, I realized that this will never give me a Successor Count of 0 because I am specifically querying the Object IDs that have a Relate Type of S. Knowing this, I decided I would probably need some kind of sub query and then I came up with these 2 solutions:
```
SELECT [obj_id]
FROM [Prim].[dbo].[relations]
WHERE NOT EXISTS
(SELECT DISTINCT [obj_id], [relate_type]
FROM [Prim].[dbo].[relations]
WHERE relate_type = 'S')
```
The above solution doesn't give me the right answer. At least I'm pretty sure it doesn't. Either way, it takes 2.5 minutes to run on a relatively small dataset of 700,000 entries.
The below query I'm pretty sure works and it runs quickly. But I was wondering if there was a different way to do this.
```
SELECT A.[obj_id]
FROM [Prim].[dbo].[relations] A
LEFT JOIN
(SELECT DISTINCT [obj_id], [relate_type]
FROM [Prim].[dbo].[relations]
WHERE [relate_type] = 'S') B ON A.[obj_id] = B.[obj_id]
WHERE A.[relate_type] != 'S' AND B.[obj_id] IS NULL
```
|
I came up with this other option. Tell me if this is faster in your case. :)
```
SELECT [obj_id]
FROM [Prim].[dbo].[relations]
GROUP BY [obj_id]
HAVING MAX(CASE WHEN [relate_type] = 'S' THEN 1 ELSE 0 END) = 0
```
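A runnable sketch of this `HAVING MAX(CASE ...) = 0` pattern with SQLite via Python's `sqlite3` (data modeled loosely on the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE relations (obj_id INT, relate_type TEXT)")
conn.executemany("INSERT INTO relations VALUES (?, ?)",
                 [(1, "P"), (1, "S"), (2, "P"), (3, "S"), (4, "P")])
# MAX of the CASE is 0 only when no row in the group has relate_type 'S'.
missing_s = sorted(r[0] for r in conn.execute("""
    SELECT obj_id FROM relations
    GROUP BY obj_id
    HAVING MAX(CASE WHEN relate_type = 'S' THEN 1 ELSE 0 END) = 0
"""))
```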
|
This should work:
```
SELECT *
FROM [Prim].[dbo].[relations] r
WHERE NOT EXISTS(SELECT 1 FROM [Prim].[dbo].[relations]
WHERE [relate_type] = 'S'
AND obj_id = r.obj_id)
```
|
SQL Server: How to determine if a value exists for a row
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
SQL Server 2008 r2
I have 2 tables
**Table 1**
```
Col1 Col2 Col3 Col4
123 08 1 1
123 08 1 1
123 08 2 1
```
**Table 2**
```
Col1 Col2 Col3
123 08 6
```
Table1 is the main table, so most of the data comes from it. It must be a LEFT OUTER JOIN onto Table2, because Table1 has (col1, col2) values that don't always match rows in Table2, and those should appear as NULLs.
My query is
```
SELECT
a.Col2,
SUM(a.Col3),
SUM(a.Col4),
SUM(b.Col3)
FROM
Table1 a
LEFT OUTER JOIN Table2 b
ON a.col1 = b.col1 AND a.col2 = b.col2
GROUP BY a.Col2
```
This would return 08, 4, 3, 18
I would want this to return 08, 4, 3, 6
Because Table 2 joins on 3 rows it then triples the SUM value for b.Col3
What would be the best solution?
|
If you do not want to sum across the records, you can add `b.Col3` to the GROUP BY:
```
SELECT
a.Col2,
SUM(a.Col3),
SUM(a.Col4),
b.Col3
FROM
Table1 a
LEFT OUTER JOIN Table2 b
ON a.col1 = b.col1 AND a.col2 = b.col2
GROUP BY a.Col2, b.Col3
```
|
Please try this. It will also handle the case where you have two records in Table2 for the same col1 and col2.
```
SELECT
a.Col2,
SUM(a.Col3),
SUM(a.Col4),
MAX(b.Col3) FROM
Table1 a
LEFT OUTER JOIN (select col1, col2, sum(col3) as col3 from Table2 group by col1, col2) b ON a.col1 = b.col1 AND a.col2 = b.col2
GROUP BY
a.Col2
```
|
SQL - One to Many join with left outer join
|
[
"",
"sql",
"sql-server",
"join",
"sql-server-2008-r2",
""
] |
I am trying to concatenate the results of two of my queries into the same result table and I am not sure how to do this. I tried to play around with the UNION and JOIN operators, but could not figure it out. This is the SQL for the two queries that I want to concatenate. Both individual queries get the results that they are supposed to. Thanks in advance!
```
SELECT s.Store_Num || ': ' || s.Store_Name AS "Store",
COUNT(e.Store_Num) AS "Total Rented"
FROM Employee e JOIN store s ON e.Store_Num = s.Store_Num
JOIN rental r ON e.Emp_ID = r.Emp_ID
JOIN rented_item ri ON r.Rental_Num = ri.Rental_Num
WHERE(SYSDATE - Rent_Date) < 60
GROUP BY s.Store_Num, s.Store_Name;
UNION
SELECT COUNT(i.Store_Num) AS "Total Inventory"
FROM inventory i JOIN store s ON i.Store_Num = s.Store_Num
GROUP BY s.Store_Num, s.Store_Name;
```
|
Just pack multiple sub-selects into one statement!
You have two queries that count rows, so each should use COUNT(\*) — or COUNT(item\_num) if you are counting items rather than stores; either way, make the intent clear to the reader.
You then simply select all stores and, for each store, do the two counts in sub-queries. This is easy to maintain, and the optimizer should get the right join predicates.
```
SELECT s.Store_Num || ': ' || s.Store_Name "Store",
( SELECT COUNT(*)
FROM Employee e
JOIN rental r ON e.Emp_ID = r.Emp_ID
JOIN rented_item ri ON r.Rental_Num = ri.Rental_Num
WHERE e.Store_Num = s.Store_Num
AND (SYSDATE - Rent_Date) < 60
) "Total Rented",
( SELECT COUNT(*)
FROM inventory i WHERE i.Store_Num = s.Store_Num
) "Total Inventory"
FROM store s
;
```
|
Judging from your columns, a UNION is not what you want. A couple of different options would be to use a JOIN or sub query instead. I don't know how all of your data is set up, but this should be close.
```
SELECT s.Store_Num || ': ' || s.Store_Name AS "Store",
COUNT(e.Store_Num) AS "Total Rented", (
SELECT COUNT(i.Store_Num)
FROM inventory i
WHERE i.Store_Num = s.Store_Num) AS "Total Inventory"
FROM Employee e
JOIN store s
ON e.Store_Num = s.Store_Num
JOIN rental r ON e.Emp_ID = r.Emp_ID
JOIN rented_item ri ON r.Rental_Num = ri.Rental_Num
WHERE(SYSDATE - Rent_Date) < 60
GROUP BY s.Store_Num, s.Store_Name;
```
|
Add Column From Second Query SQL
|
[
"",
"sql",
"oracle",
"join",
"union",
""
] |
I've got a database with a SQLServer back end which was migrated from Access and an Access front end. After the migration one problem I keep running into is that autonumbers are not generated until after the record is saved (kind of obvious but Access didn't seem to care). I have a form that opens to create a new record in a table, but elements of that form require the value of the autonumber (Identity) field of that new record to calculate things. I want to somehow obtain this number right as the form loads instead of having to save it and reopen it just to obtain this number. What's the best way to go about this? Thanks in advance.
|
Access databases generate and return the autonumber as soon as the record becomes dirty. With SQL Server as the back end, however, the form cannot use or display the autonumber UNTIL the record is saved.
The simple solution is thus to force a save of the form's data; any following/existing code you have will then continue to work.
So your code can look like this:
```
If Me.NewRecord = True Then
Me.Dirty = False
End If
```
The above will work as long as SOME editing has occurred. Note that if NO editing has occurred, the above will NOT generate the autonumber ID (however, even in non-SQL Server databases the autonumber is not available when no editing has occurred anyway).
The above works for a bound form. If you have recordset code, then you change typical code like this:
```
Set rstRecords = CurrentDb.OpenRecordset("tblmain")
rstRecords.AddNew
```
In the above, your VBA code could grab the autonumber. However, against SQL Server the code will have to force a save.
In fact, code that will work for both ACE and SQL Server is:
```
Dim rstRecords As DAO.Recordset
Dim lngNext As Long

Set rstRecords = CurrentDb.OpenRecordset("tblmain", dbOpenDynaset, dbSeeChanges)
rstRecords.AddNew
' code here does whatever
rstRecords.Update
rstRecords.Bookmark = rstRecords.LastModified
lngNext = rstRecords!ID
rstRecords.Close
```
So the simple "issue" is that you need to write out the record to force SQL Server to generate the autonumber. Once you've done this record save, your forms and most VBA code should run "as is". You do not need to resort to additional code such as `select @@identity` UNLESS you are using SQL insert commands as opposed to, say, the above forms or recordset code.
|
What you are referring to is the AutoNumber column in Access which is an Identity field in SQL.
The only way to accomplish this functionality in SQL is to insert the record when your form is opening and then use the @@Identity in SQL to retrieve the most recent value.
When your Access application was combined together with the database, Access was essentially doing this for you. You can check the following link for more details.
<http://bytes.com/topic/sql-server/answers/143378-identity-sql-vs-autonumber-access>
|
VBA Access Need to obtain autonumber before saving new record
|
[
"",
"sql",
"vba",
"ms-access",
""
] |
I have tables A, B and C. A has many B and C. For A I have 6 B records and half of them are marked as deleted (deleted\_at != null).
When I select with query
```
SELECT a.id, COUNT(b.id)
FROM table_a a
INNER JOIN table_b b ON a.id = b.a_id AND b.deleted_at IS NULL
WHERE a.id = 5;
```
answer is correct (id: 5, count: 3)
But when I add one more inner join
```
SELECT a.id, COUNT(b.id)
FROM table_a a
INNER JOIN table_b b ON a.id = b.a_id AND b.deleted_at IS NULL
INNER JOIN table_c c ON a.id = c.a_id AND c.deleted_at IS NULL
WHERE a.id = 5;
```
it doubles count result (id:5, count: 6)
How can I fix it?
|
As others have mentioned, you can count distinct b IDs here.
However, your problem is quite common when one needs aggregates from various tables. The problems occur because people join all the records and then try to compute the aggregates.
For instance:
```
SELECT a.id, SUM(b.value)
FROM table_a a
INNER JOIN table_b b ON a.id = b.a_id AND b.deleted_at IS NULL
INNER JOIN table_c c ON a.id = c.a_id AND c.deleted_at IS NULL
WHERE a.id = 5;
```
This is almost the same query as yours. But here you cannot use DISTINCT anymore, because if you had values 100, 200, 200 in b, you would get 300 instead of 500 then. So the *general* solution is: Only join what actually shall be joined. Here are two solutions for above query:
Solution 1: Aggregate before you join:
```
SELECT a.id, b.total
FROM table_a a
INNER JOIN
(
select a_id, sum(value) as total
from table_b
where deleted_at IS NULL
group by a_id
) b ON a.id = b.a_id
INNER JOIN table_c c ON a.id = c.a_id AND c.deleted_at IS NULL
WHERE a.id = 5;
```
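The double-counting and its fix can be demonstrated end to end. A sketch with SQLite via Python's `sqlite3` (tables invented; the `deleted_at` filter is dropped for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table_a (id INT);
    CREATE TABLE table_b (a_id INT, value INT);
    CREATE TABLE table_c (a_id INT);
    INSERT INTO table_a VALUES (5);
    INSERT INTO table_b VALUES (5, 100), (5, 200);
    INSERT INTO table_c VALUES (5), (5);  -- two matching c rows
""")
# Naive join: each b row pairs with each c row, so the sum doubles.
naive = conn.execute("""
    SELECT SUM(b.value) FROM table_a a
    JOIN table_b b ON a.id = b.a_id
    JOIN table_c c ON a.id = c.a_id
""").fetchone()[0]
# Aggregate b first, then join: the total is computed before the fan-out.
fixed = conn.execute("""
    SELECT b.total FROM table_a a
    JOIN (SELECT a_id, SUM(value) AS total FROM table_b GROUP BY a_id) b
      ON a.id = b.a_id
    JOIN table_c c ON a.id = c.a_id
""").fetchone()[0]
```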
Solution 2: If you only need one value from a table, get it in the SELECT clause:
```
SELECT a.id,
(
select sum(value)
from table_b
where deleted_at IS NULL
and a_id = a.id
) as total
FROM table_a a
INNER JOIN table_c c ON a.id = c.a_id AND c.deleted_at IS NULL
WHERE a.id = 5;
```
|
you could count distinct values
```
SELECT a.id, COUNT(DISTINCT b.id)
FROM table_a a
INNER JOIN table_b b ON a.id = b.a_id AND b.deleted_at IS NULL
INNER JOIN table_c c ON a.id = c.a_id AND c.deleted_at IS NULL
WHERE a.id = 5;
```
|
Second inner join doubles count result
|
[
"",
"mysql",
"sql",
""
] |
If the query is:
```
select 1 'id member',2 'material total'
select 1 'id member',4 'cash total'
```
Result:
```
id member material total
----------- --------------
1 2
id member cash total
----------- -----------
1 4
```
I want join that result. Desired Result:
```
id member | material total | cash total
-----------| -------------- |-----------
1 | 2 | 4
```
|
For the given example, the other answer by SonalPM is correct.
However, as a general solution where your data is not *hard coded*, use SQL **JOIN**.
In this case, you can join the two result sets based on `id member`
e.g.
```
SELECT x.id_member, x.mat_total, y.cash_total
FROM
(SELECT 1 `id_member`, 2 `mat_total`) x,
(SELECT 1 `id_member`, 4 `cash_total`) y
WHERE x.id_member = y.id_member
```
|
Just use a join like this:
```
SELECT a.idm,
a.mat,
b.cash
FROM (SELECT 1 `idm`, 2 `mat`) a
JOIN (SELECT 1 `idm`, 4 `cash`) b ON a.idm = b.idm
```
|
How to combine 2 select result with same ID to right in single record
|
[
"",
"sql",
""
] |
I have a table like this:
```
------------------
sl no | Error type
------------------
1 | err1
------------------
2 | err2
------------------
3 | err1
------------------
4 | err2
------------------
5 | err2
------------------
6 | err3
------------------
```
Result I want:
```
---------------------------
Error type | Count | %
---------------------------
err1 | 2 | 33.33
---------------------------
err2 | 3 | 50
---------------------------
err3 | 1 | 16.66
---------------------------
```
I'm using following query to get above result:
```
select
error as Error,
count(*) as Count,
100* count(*)/(select count(*) from logs) as Percentage
from logs
group by error;
```
Is it optimized for the task? If not, what are some other efficient ways I can get that information?
|
This query evaluates the `(select count(*) from logs)` subquery for every output row. I'd compute the total once and cross join it to the per-error counts:
```
SELECT a.error, a.cnt, 100 * a.cnt / b.total_count AS percentage
FROM (SELECT error, COUNT(*) as cnt
FROM logs
GROUP BY error) a
CROSS JOIN (SELECT COUNT(*) AS total_count FROM logs) b
```
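A runnable sketch of this cross-join pattern with SQLite via Python's `sqlite3`, reproducing the question's sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (error TEXT)")
conn.executemany("INSERT INTO logs VALUES (?)",
                 [("err1",), ("err2",), ("err1",), ("err2",), ("err2",), ("err3",)])
# Total row count computed once, then paired with every per-error group.
rows = conn.execute("""
    SELECT a.error, a.cnt, ROUND(100.0 * a.cnt / b.total_count, 2) AS pct
    FROM (SELECT error, COUNT(*) AS cnt FROM logs GROUP BY error) a
    CROSS JOIN (SELECT COUNT(*) AS total_count FROM logs) b
    ORDER BY a.error
""").fetchall()
```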
|
Before you go and try to optimize any query, you should review the execution plan. For example, in MSSQL, the query plan will look something like this:

You can very clearly see two index scans (top and bottom branches to the right of "Nested Loops"). The top branch is performing your `count(*)` aggregate to get the total number of rows in the table. This is done once, and the resulting single row containing the count is paired with each of the rows in the bottom branch. The bottom branch is performing the `count` aggregate for each group.
So without changing your highly readable query, you likely already have an efficient execution plan.
|
Optimizing percentage calculation SQL query
|
[
"",
"mysql",
"sql",
"aggregate-functions",
""
] |
I have a `datetime` like this: `01-Jan-2014 16:30`
How do I search for records whose time of day is `between` the following times, regardless of year, month, or day:
```
SELECT *
FROM TABLE
WHERE ORDERDATE BETWEEN '01-Jan-2014 16:30' AND '01-Jan-2014 17:00'
```
|
A query using `cast` to extract the time from datetime field
```
where cast(ORDERDATE as time) between '16:30' and '17:00'
```
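SQLite has no `time` type to cast to, but its `time()` function extracts the same time-of-day string, so the pattern can be sketched like this (sample timestamps invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (orderdate TEXT)")
conn.executemany("INSERT INTO orders VALUES (?)", [
    ("2014-01-01 16:45:00",), ("2015-06-12 16:59:00",), ("2014-01-01 12:00:00",),
])
# time() strips the date part, so the filter matches on any day or year.
hits = conn.execute("""
    SELECT COUNT(*) FROM orders
    WHERE time(orderdate) BETWEEN '16:30:00' AND '17:00:00'
""").fetchone()[0]
```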
|
```
DECLARE @F time='16:30'
DECLARE @L time='17:00'
SELECT *
FROM TABLE
WHERE cast(ORDERDATE as time) BETWEEN @F AND @L
```
|
Searching for records based on time range using SQL
|
[
"",
"sql",
"sql-server-2008",
"t-sql",
""
] |
How can I update only one record in a table?
Table:
```
name name1 name2
----------------------------
xx xy xz
xx xx xx
xx xx xx
xx xx xx
xy xx zz
```
Update query:
```
UPDATE table1
SET name2 = '01'
WHERE name1='xx'
```
I need to update only one row at a time.
|
You can use `ROWCOUNT`:
```
SET ROWCOUNT 1
UPDATE table1
SET name2 = '01'
WHERE name1='xx'
SET ROWCOUNT 0
```
Or you can use `UPDATE TOP`:
```
UPDATE TOP (1) table1
SET name2 = '01'
WHERE name1='xx'
```
|
```
UPDATE table1
SET name2 = '01'
WHERE name1='xx'
LIMIT 1;
```
|
How to update only one row in a table?
|
[
"",
"sql",
"sql-server",
"sql-update",
""
] |
I have some data related to customers and the number of transactions they perform every day. I would like to see how many "new" customers do we get each week. The data looks like the following:
```
Custnum Created Revenue
1 2014/10/23 30
4 2014/10/23 20
5 2014/10/23 40
2 2014/10/30 13
3 2014/10/30 45
1 2014/10/30 56
```
In the above (sample) data, we can see that customer with `custnum` 1 has transactions in consecutive weeks, I want only the new customers for the next week, ones that have never done business with us in the past. In other words, I want the number of totally new customers for each week. So the result should be:
```
CustCount Created
3 2014/10/23
2 2014/10/30
```
I tried using the following query:
```
select
count(distinct custnum),
DATEADD(wk, DATEDIFF(wk, 0, created), 0) as Date
from ORDERS
where created > '2013-01-01'
group by
DATEADD(wk, DATEDIFF(wk, 0, created), 0)
order by
DATEADD(wk, DATEDIFF(wk, 0, created), 0)
```
But this query gives me the number of unique customers each week; I want the number of new customers every week.
Any help will be much appreciated.
|
### My understanding of your question
I took these statements:
* I would like to see how many "new" customers do we get each week.
* [customers] that have never done business with us in the past
* I want the number of totally new customers for each week
Do you want
```
CustCount Created
2 week1 -- customer 1 and 2
1 week2 -- customer 3
2 week3 -- customer 4 and 5
-- Option A
1 week4 -- customer 6 is new and 2 was not counted
-- or Option B
2 week4 -- customer 6 and 2;
-- since customer 2 did not order anything in week3
```
### Option A
This query `SELECT Custnum, DATEPART ( week , created) as WeekNumber from Revenues Order by Custnum` returns this output for the provided sample data
```
Custnum WeekNumber
1 31 -- counts
1 44 -- does not count, since customer already ordered once
2 36 -- counts
3 36 -- counts
3 44 -- does not count
4 43 -- counts
5 43 -- counts
5 45 -- does not count
```
### First step: filter down the records
To get only the first record for a customer (the new customer) you can do this:
```
SELECT Distinct Custnum, Min(Created) as Min_Created
FROM Revenues
GROUP BY Custnum
```
### Second step: Counting and Grouping by week
First I used the SQL from [grouping customer orders by week](https://stackoverflow.com/questions/2429434/), which you can find at the [old sqlfiddle](http://sqlfiddle.com/#!3/37b06/8). But then I decided to use
```
Select Count(Custnum) as CountCust
, DATEPART(week, Min_Created) as Week_Min_Created
FROM (
SELECT Distinct Custnum, Min(Created) as Min_Created
FROM Revenues Group By Custnum
) sq Group by DATEPART(week, Min_Created)
```
On my [sql-server-2008-r2](/questions/tagged/sql-server-2008-r2 "show questions tagged 'sql-server-2008-r2'") this returns
```
CountCust Week_Min_Created
1 31 -- only customer 1
2 36 -- customer 2 and 3
2 43 -- customer 4 and 5
-- nothing for week 45 since customer 5 was already counted
```
### Some sample data
This is the sample data I used
```
CREATE TABLE Revenues
(
Custnum int ,
Created datetime,
Revenue int
);
INSERT INTO Revenues (Custnum, Created, Revenue)
VALUES
(1, '20140801', 30),
(2, '20140905', 13), (3, '20140905', 45),
(4, '20141023', 20), (5, '20141023', 40),
(3, '20141030', 45), (1, '20141030', 56),
(5, '20141106', 60);
```
|
Sounds like you want to work with a subset of ORDERS that only has each customer's first order date.
```
select
count(custnum),
DATEADD(wk, DATEDIFF(wk, 0, created), 0) as Date
from
(Select custnum, min(created) as created From Orders Group by custnum) o
where created > '2013-01-01'
group by
DATEADD(wk, DATEDIFF(wk, 0, created), 0)
order by
DATEADD(wk, DATEDIFF(wk, 0, created), 0)
```
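A runnable sketch of this derive-first-order-then-count approach with SQLite via Python's `sqlite3`, using the question's sample rows; for brevity it groups by day rather than reproducing the `DATEADD` week bucketing:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (custnum INT, created TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [
    (1, "2014-10-23"), (4, "2014-10-23"), (5, "2014-10-23"),
    (2, "2014-10-30"), (3, "2014-10-30"), (1, "2014-10-30"),
])
# Inner query keeps each customer's first order only; outer counts per period.
rows = conn.execute("""
    SELECT created, COUNT(custnum) AS new_customers
    FROM (SELECT custnum, MIN(created) AS created FROM orders GROUP BY custnum) o
    GROUP BY created
    ORDER BY created
""").fetchall()
```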
|
Get new customers every week in SQL Server
|
[
"",
"sql",
"sql-server",
""
] |
I am trying to write a query that finds each Worker.wname for which none of his related animals has animal.wild = TRUE.
For example, from the following table I need to return "Yossi", since he has no row with wild = t:
```
wname | type | wild
-------+-----------+------
David | Capricorn | f
David | goat | f
David | Lamb | f
David | Tiger | t
David | wolf | t
David | Wolf | t
Yossi | Capricorn | f
Yossi | goat | f
```
This is the query I've got so far (which returns the table above):
```
SELECT
worker.wname, animal.type, animal.wild
FROM resposibility
LEFT JOIN worker ON resposibility.wid = worker.wid
LEFT JOIN cage ON resposibility.cno = cage.cno
LEFT JOIN animal ON cage.cno = animal.cno
GROUP BY worker.wname, animal.wild, animal.type
ORDER BY worker.wname
```
|
```
SELECT worker.wname
FROM resposibility
LEFT JOIN worker ON resposibility.wid = worker.wid
LEFT JOIN cage ON resposibility.cno = cage.cno
LEFT JOIN animal ON cage.cno = animal.cno
GROUP BY worker.wname
HAVING sum(case when animal.wild = 't' then 1 else 0 end) = 0
ORDER BY worker.wname
```
|
You could also use the [bool\_or](http://www.postgresql.org/docs/9.3/static/functions-aggregate.html) aggregate as follows:
```
SELECT worker.wname
FROM resposibility
LEFT JOIN worker ON resposibility.wid = worker.wid
LEFT JOIN cage ON resposibility.cno = cage.cno
LEFT JOIN animal ON cage.cno = animal.cno
GROUP BY worker.wname
having bool_or(animal.wild) is false
ORDER BY worker.wname
```
|
Ignore multiple records in LEFT JOIN query by case
|
[
"",
"mysql",
"sql",
"postgresql",
"left-join",
""
] |
This is my current SQL query:
```
DECLARE
@FromDate datetime=NULL,
@ToDate datetime=NULL
SET @FromDate = '10/11/2014'
SET @ToDate = '11/11/2014'
SELECT *
FROM
tblUSer
WHERE
((convert(VARCHAR(10), CreatedDate, 103) BETWEEN convert(VARCHAR(10), @FromDate, 103) AND convert(VARCHAR(10), @ToDate, 103))
```
But I get only two records after executing this query, i.e. for 11/11/2014, and I have four records in my table over all:

How can I get the desired records between the date range when both from and to dates are given and also when both are null. What am I doing wrong?
|
This worked for me
```
DECLARE
@FromDate datetime=NULL,
@ToDate datetime=NULL
SET @FromDate = '10/11/2014'
SET @ToDate = '11/11/2014'
SELECT *
FROM
tblUSer
WHERE
CreatedDate >= @FromDate AND
CreatedDate <= DATEADD(DAY, 1, @ToDate)
```
|
Try this Query:
```
DECLARE @FromDate AS datetime, @ToDate AS datetime
SET @FromDate = '12/12/2008'
SET @ToDate = '12/12/2009'
SELECT *
FROM tblUSer
WHERE CreatedDate BETWEEN @FromDate AND @ToDate
```
|
Retrieve dates between two given dates
|
[
"",
"sql",
"sql-server-2008",
""
] |