| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
In SQL Server, I'm running a query on users' age groups, where for some years there are zero users in an age group. For example, there were no users in the "18-21" age group, so the query returns the next age group, "22-25", as the first row because no rows contain "18-21". Instead, I would like to return a row that contains 18-21, but has 0 as the value for number of users.
Currently, I have:
```
SELECT YEAR, AGE_GROUP, SUM(USERS) AS usercount
FROM USERS
WHERE YEAR = '2014'
AND PRIMARY_GROUP = 'NT'
GROUP BY YEAR, AGE_GROUP
```
This returns:
```
YEAR AGE_GROUP usercount
2014 22-25 200
2014 25-28 10
```
I want it to return:
```
YEAR AGE_GROUP usercount
2014 18-21 0
2014 22-25 200
2014 25-28 10
```
How can I create a row for specific values that don't exist and fill the count with 0 values?
For the record, I DO in fact have a column called 'users' in the users table. Confusing, I know, but it's a stupidly named schema that I took over. The Users table contains data ABOUT my users for reporting. It should probably have been named something like Users\_Reporting.
|
I assume you have another table that contains all of the age groups.
TABLE NAME: **AGEGROUPS**
```
AGE_GROUP
18-21
22-25
25-28
```
Try this:
```
SELECT '2014' AS YEAR, AG.AGE_GROUP, COALESCE(TB.usercount, 0) AS usercount
FROM (
SELECT YEAR, AGE_GROUP, SUM(USERS) AS usercount
FROM USERS
WHERE YEAR = '2014'
AND PRIMARY_GROUP = 'NT'
GROUP BY YEAR, AGE_GROUP
) AS TB
RIGHT JOIN AGEGROUPS AG ON TB.AGE_GROUP=AG.AGE_GROUP
```
|
You do this by joining with a "fake" list of age group+years:
```
SELECT AGE_GROUPS.YEAR, AGE_GROUPS.AGE_GROUP, COALESCE(SUM(USERS), 0) as usercount
FROM (
SELECT YEAR, AGE_GROUP
FROM (
SELECT '18-21' AS AGE_GROUP
UNION SELECT '22-25'
UNION SELECT '25-28'
) AGE_GROUPS, (SELECT DISTINCT YEAR FROM USERS) YEARS
) AGE_GROUPS
LEFT JOIN USERS ON (USERS.AGE_GROUP = AGE_GROUPS.AGE_GROUP AND USERS.YEAR = AGE_GROUPS.YEAR)
WHERE AGE_GROUPS.YEAR = '2014'
GROUP BY AGE_GROUPS.YEAR, AGE_GROUPS.AGE_GROUP
```
You can also simplify this, assuming that, across all years, your USERS table contains every possible age group:
```
SELECT AGE_GROUPS.YEAR, AGE_GROUPS.AGE_GROUP, COALESCE(SUM(USERS), 0) as usercount
FROM (
SELECT YEAR, AGE_GROUP
FROM (SELECT DISTINCT AGE_GROUP FROM USERS) AGE_GROUPS, (SELECT DISTINCT YEAR FROM USERS) YEARS
) AGE_GROUPS
LEFT JOIN USERS ON (USERS.AGE_GROUP = AGE_GROUPS.AGE_GROUP AND USERS.YEAR = AGE_GROUPS.YEAR)
WHERE AGE_GROUPS.YEAR = '2014'
GROUP BY AGE_GROUPS.YEAR, AGE_GROUPS.AGE_GROUP
```
|
How can I create a row for values that don't exist and fill the count with 0 values?
|
[
"",
"sql",
"sql-server",
""
] |
Consider these three tables:
**News** (Columns: ID, etc.)
**News\_Tag\_Cross** (Columns: ID, NewsID, TagID )
**Tags** (Columns: Id, Name)
How would I get all news articles that have both tags, "Dealer" AND "Client"?
I can run this query for each tag & UNION the results, but that returns rows matching either the **Dealer** tag or the **Client** tag. Obviously, I want to return news items that are joined to **both**.
```
SELECT n.id FROM news n
INNER JOIN news_tag_cross ntc
ON ntc.newsid = n.id
INNER JOIN tags t
ON t.id = ntc.tagid
WHERE ntc.tagid = 'DealerID'
```
I must note that this is a rough translation of my current query - I don't really want to give away table names, etc., or confuse anyone with extra columns and data.
|
You can use aggregation with a `having` clause:
```
SELECT n.id
FROM news n INNER JOIN
news_tag_cross ntc
ON ntc.newsid = n.id INNER JOIN
tags t
ON t.id = ntc.tagid
GROUP BY n.id
HAVING SUM(t.name = 'Dealer') > 0 AND
SUM(t.name = 'Client') > 0;
```
There are other ways to express this (notably using joins). I like this method because the `having` clause can be quite flexible on the conditions to include or exclude.
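A minimal sketch of this conditional-aggregation pattern, using Python's sqlite3 with invented sample data (SQLite, like MySQL, evaluates `t.name = 'Dealer'` to 0 or 1, so it can be summed):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE news (id INTEGER PRIMARY KEY);
    INSERT INTO news VALUES (1), (2), (3);

    CREATE TABLE tags (id INTEGER PRIMARY KEY, name TEXT);
    INSERT INTO tags VALUES (10, 'Dealer'), (20, 'Client'), (30, 'Other');

    CREATE TABLE news_tag_cross (news_id INTEGER, tag_id INTEGER);
    INSERT INTO news_tag_cross VALUES
        (1, 10), (1, 20),      -- news 1 has both tags
        (2, 10),               -- news 2 has only Dealer
        (3, 20), (3, 30);      -- news 3 has Client + Other
""")

# Only articles whose tag set contains BOTH names survive the HAVING clause.
both = [r[0] for r in con.execute("""
    SELECT n.id
    FROM news n
    JOIN news_tag_cross ntc ON ntc.news_id = n.id
    JOIN tags t ON t.id = ntc.tag_id
    GROUP BY n.id
    HAVING SUM(t.name = 'Dealer') > 0
       AND SUM(t.name = 'Client') > 0
""")]
```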
|
My approach to this type of query is to do it via a join, putting the first criterion in the WHERE clause and basing the join on the second. This way, you are not looking up all news ID entries and then throwing them out if they don't match something...
```
select
n.*
from
tags t
JOIN News_Tag_Cross ntc
on t.id = ntc.TagID
JOIN News_tag_Cross ntc2
on ntc.NewsID = ntc2.NewsID
Join Tags t2
on ntc2.tagID = t2.id
AND t2.Name = 'Client'
JOIN News n
on ntc.NewsID = n.ID
where
t.name = 'Dealer'
```
Might be a bit longer, but I would have an index on the tags table both ways to allow for the first WHERE clause, and again on the JOIN clause.
```
table indexed on
tags (id, name) <-- for the JOIN clause
tags (name) <-- for the outermost WHERE clause
News_tag_Cross (tagID, newsID)
```
To clarify... say you have 100,000 news entries, and only 200 have the "Dealer" keyword. Instead of querying the entire table and grouping by the keywords, I am STARTING only with those "Dealer" entries. From those, I go to the news\_tag\_cross table ONLY for those news items and look for a secondary tag of "Client". Done.
|
Select news, join on multiple tags
|
[
"",
"mysql",
"sql",
""
] |
I have a query against an `Agenda` table that returns 6 results. The table stores available times per day (example: `08:10:00.000 | 08:30:00.000`) and periods (example: `T` representing `afternoon`, `M` representing `morning`).
The query already returns results, but there is one problem I haven't been able to resolve.
I can only have one time slot per period per day.
Example: on `2015-12-19` I can have a time in the morning and a time in the afternoon, but on `2015-12-19` I can never have two times in the same morning or afternoon.
My query is this:
```
SELECT * FROM (
SELECT TOP(3) agendaidentificador,agendadata, 'M' AS periodo
FROM AGENDA
WHERE
agendaconsumolocktempo IS NULL
AND
agendaconsumoidentificador IS NULL
AND
agendadata > GETDATE()
GROUP BY
agendaidentificador,
agendadata
HAVING
CAST(DATEPART(HOUR,agendadata) AS INT) < 12
ORDER BY
NEWID(),
agendadata asc
) A
UNION
SELECT * FROM (
SELECT TOP(3) agendaidentificador,agendadata, 'T' AS periodo
FROM AGENDA
WHERE
agendaconsumolocktempo IS NULL
AND
agendaconsumoidentificador IS NULL
AND
agendadata > GETDATE()
GROUP BY
agendaidentificador,
agendadata
HAVING
CAST(DATEPART(HOUR,agendadata) AS INT) >= 12
AND
COUNT(CAST(agendadata AS DATE)) = 1
ORDER BY
NEWID(),
agendadata asc
) B
GROUP BY
agendaidentificador,
agendadata,
periodo
HAVING
COUNT(CAST(agendadata as DATE)) = 1
ORDER BY agendadata
```
The result is:
```
line |agendaIdentificador | agendaData | periodo
-----|--------------------|-------------------------|---------
1 | 173352 | 2015-01-12 12:50:00.000 | T
2 | 173363 | 2015-01-12 14:40:00.000 | T
3 | 175255 | 2015-01-19 11:30:00.000 | M
4 | 175520 | 2015-01-26 14:50:00.000 | T
5 | 125074 | 2015-02-25 08:20:00.000 | M
6 | 125076 | 2015-02-25 08:40:00.000 | M
```
Two rows with the same date and period, like lines `1` and `2`, must not both appear.
|
Let's make some test data:
```
DECLARE @AGENDA TABLE
(
agendaIdentificador int,
agendaData datetime,
periodo varchar(1),
agendaconsumolocktempo int,
agendaconsumoidentificador int
)
INSERT INTO @AGENDA
( agendaIdentificador, agendaData, periodo, agendaconsumolocktempo, agendaconsumoidentificador )
VALUES
(173352, '2015-01-12 12:50:00.000', 'T', null, null),
(173353, '2015-01-12 12:50:00.000', 'T', null, null),
(173354, '2015-01-12 12:50:00.000', 'T', null, null),
(173355, '2015-01-12 12:50:00.000', 'T', null, null),
(173356, '2015-01-13 12:50:00.000', 'T', null, null),
(173363, '2015-01-12 14:40:00.000', 'T', null, null),
(175255, '2015-01-19 11:30:00.000', 'M', null, null),
(175520, '2015-01-26 14:50:00.000', 'T', null, null),
(125074, '2015-02-25 08:20:00.000', 'M', null, null),
(125076, '2015-02-25 08:40:00.000', 'M', null, null),
(125076, '2015-02-25 08:40:00.000', 'M', null, null),
(125076, '2015-02-25 08:40:00.000', 'M', null, null),
(125076, '2015-02-25 08:40:00.000', 'M', null, null),
(125076, '2015-02-26 08:40:00.000', 'M', null, null);
```
Now I wrap the test data in a CTE that numbers the records within each day and period:
```
WITH AGENDA AS
(
SELECT *,
ROW_NUMBER() OVER (PARTITION BY CAST(agendaData AS DATE), periodo ORDER BY agendaData) RN
FROM @AGENDA
)
```
Now we take the query from before, changed so that within each day/period set I only want the first row (where RN = 1):
```
SELECT * FROM (
SELECT TOP(3) agendaidentificador,agendadata, 'M' AS periodo
FROM AGENDA
WHERE agendaconsumolocktempo IS NULL
AND agendaconsumoidentificador IS NULL
AND agendadata > GETDATE()
AND DATEPART(HOUR,agendadata) < 12
AND RN = 1
GROUP BY
agendaidentificador,
agendadata
ORDER BY
NEWID(),
agendadata asc
) A
UNION
SELECT * FROM (
SELECT TOP(3) agendaidentificador,agendadata, 'T' AS periodo
FROM AGENDA
WHERE agendaconsumolocktempo IS NULL
AND agendaconsumoidentificador IS NULL
AND agendadata > GETDATE()
and RN = 1
and DATEPART(HOUR,agendadata) >= 12
GROUP BY
agendaidentificador,
agendadata
ORDER BY
NEWID(),
agendadata asc
) B
GROUP BY
agendaidentificador,
agendadata,
periodo
ORDER BY agendadata
```
Here is the output:
```
agendaidentificador agendadata periodo
173352 2015-01-12 12:50:00.000 T
173356 2015-01-13 12:50:00.000 T
175255 2015-01-19 11:30:00.000 M
175520 2015-01-26 14:50:00.000 T
125074 2015-02-25 08:20:00.000 M
125076 2015-02-26 08:40:00.000 M
```
I added some more duplicate records and cleaned up all of the unnecessary HAVING clauses.
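The ROW\_NUMBER-per-day-and-period idea can be reproduced compactly with Python's sqlite3 (invented sample rows; SQLite 3.25+ is needed for window functions):

```python
import sqlite3  # Python 3.8+ bundles SQLite with window-function support

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE agenda (id INTEGER, dt TEXT, periodo TEXT);
    INSERT INTO agenda VALUES
        (173352, '2015-01-12 12:50:00', 'T'),
        (173363, '2015-01-12 14:40:00', 'T'),   -- same day + period: must be dropped
        (175255, '2015-01-19 11:30:00', 'M');
""")

# Number the rows per (calendar day, period) and keep only the first one.
rows = con.execute("""
    SELECT id, dt, periodo FROM (
        SELECT *,
               ROW_NUMBER() OVER (
                   PARTITION BY DATE(dt), periodo
                   ORDER BY dt
               ) AS rn
        FROM agenda
    )
    WHERE rn = 1
    ORDER BY dt
""").fetchall()
```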
|
In the final GROUP BY you have to group only by the date part of "AgendaData"; you can't group by the whole date + time field.
I think this should work: `GROUP BY TO_CHAR("timestamp_field", 'YYYY-MM-DD')`
|
DISTINCT with UNION and COUNT
|
[
"",
"sql",
"sql-server",
""
] |
In DB2 9.7 I am looking for a way to reverse a string in a SQL query.
I am familiar with SQL Server where the query would be like
```
SELECT
  REVERSE(LEFT_TO_RIGHT) AS RIGHT_TO_LEFT
FROM
TABLE1;
```
I couldn't find a similar function in DB2. Is there a simple way to reverse a string?
|
Creating a `REVERSE(..)` function is unnecessary.
DB2 has something called [`RIGHT(string-expression, length)`](http://www-01.ibm.com/support/knowledgecenter/SSEPGG_9.7.0/com.ibm.db2.luw.sql.ref.doc/doc/r0000844.html?cp=SSEPGG_9.7.0%2F2-10-3-2-125):
> The RIGHT function returns the rightmost string of string-expression
> of length length, expressed in the specified string unit. If
> string-expression is a character string, the result is a character
> string. If string-expression is a graphic string, the result is a
> graphic string
So if you're interested in the last 8 characters, you can pretty trivially do this via:
```
SELECT RIGHT(left_to_right, 8) AS right_to_left
FROM Table1
```
(I'm actually still concerned about the fact that you're splitting off 8 characters consistently, as it implies you have a multi-part key of some sort).
|
Try something like:
```
SELECT STRIP(CAST( TRANSLATE('87654321', LEFT_TO_RIGHT, '12345678') AS VARCHAR(8) ))
FROM TABLE1;
```
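To see why this TRANSLATE trick reverses a fixed-length string, here is the same mapping sketched with Python's `str.translate` (an illustration, not DB2 code): each digit of `'87654321'` picks the corresponding character of the input, so listing the positions backwards emits the string reversed.

```python
def reverse8(s):
    # Mirrors DB2's TRANSLATE('87654321', s, '12345678'):
    # '1' maps to s[0], '2' to s[1], ..., so translating the digits
    # written in reverse order yields s reversed.  Fixed length 8,
    # as in the answer above.
    assert len(s) == 8
    return "87654321".translate(str.maketrans("12345678", s))
```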
|
String operation in SQL to reverse a string
|
[
"",
"sql",
"db2",
"reverse",
"db2-luw",
""
] |
I created a function that will return a varchar
```
CREATE FUNCTION dbo.Test
(
@i INT
)
RETURNS VARCHAR(MAX)
AS
BEGIN
DECLARE @it VARCHAR(MAX)
SET @it = 'INSERT INTO @Test
VALUES (1)
'
RETURN @it
END
```
I tried to use that function in the query below but got an error.
```
DECLARE @d VARCHAR(MAX)
DECLARE @Test TABLE
(
i INT
)
SET @d = dbo.Test(1)
SELECT @d
EXEC (@d)
SELECT * FROM @Test
```
Why is the function not working with the table variable?
How can I make it work?
|
Try this:
If we try to `SELECT` from `@Test` outside its declaration scope, it will not work. Hence, the dynamic SQL stored in `@it` declares its own table variable:
```
CREATE FUNCTION [dbo].[Test]
(
@i INT
)
RETURNS VARCHAR(MAX)
AS
BEGIN
DECLARE @it VARCHAR(MAX)
SET @it = 'DECLARE @Test TABLE(i INT); INSERT INTO @Test VALUES (1); SELECT * FROM @Test'
RETURN @it
END
DECLARE @Test TABLE
(
i INT
)
DECLARE @d VARCHAR(MAX)
SET @d = dbo.Test(1)
SELECT @d
INSERT INTO @Test
EXEC (@d)
```
|
Try doing it this way, using a temporary table instead of a table variable, since the table variable won't be accessible (as already pointed out in the comments). [Not tested]
```
CREATE FUNCTION dbo.Test
(
@i INT
)
RETURNS VARCHAR(MAX)
AS
BEGIN
DECLARE @it VARCHAR(MAX)
SET @it = 'CREATE TABLE #Test(i INT); INSERT INTO #Test
VALUES (1)'
RETURN @it
END
```
Then call your function like
```
DECLARE @d VARCHAR(MAX)
SET @d = dbo.Test(1)
SELECT @d
EXEC (@d)
SELECT * FROM #Test
```
|
Function is not working in table variable
|
[
"",
"sql",
"sql-server",
"function",
"t-sql",
"sql-server-2008-r2",
""
] |
Hi, I need some help making a query to SUM different materials into one result.
```
MaterialName Quantity Date
A            20       1/1/2010
A            10       1/2/2010
B            30       1/2/2010
C            20       1/3/2010
```
For example, I only want to sum materials A and B.
|
Try this. Add a `WHERE` clause to filter `A` & `B`, then `SUM` the quantity per `MaterialName`:
```
SELECT MaterialName,
Sum(Quantity) Sum_Quantity
FROM TableName
WHERE MaterialName IN ( 'A', 'B' )
GROUP BY MaterialName
```
|
```
SELECT SUM(Quantity) FROM TABLE WHERE MaterialName in ('A', 'B')
```
OR
```
SELECT MaterialName , SUM(Quantity) FROM TABLE
WHERE MaterialName in ('A', 'B') GROUP BY MaterialName
```
|
SQL Sum of different rows
|
[
"",
"mysql",
"sql",
"sql-server",
""
] |
**Current Data**
```
ID | Name1 | Name2
<guid1> | XMind | MindNode
<guid2> | MindNode | XMind
<guid3> | avast | Hitman Pro
<guid4> | Hitman Pro | avast
<guid5> | PPLive | Hola!
<guid6> | ZenMate | Hola!
<guid7> | Hola! | PPLive
<guid8> | Hola! | ZenMate
```
**Required Output**
```
ID1 | ID2 | Name1 | Name2
<guid1> | <guid2> | XMind | MindNode
<guid3> | <guid4> | avast | Hitman Pro
<guid5> | <guid7> | PPLive | Hola!
<guid6> | <guid8> | Hola! | ZenMate
```
These are relations between apps. I want to show that avast and Hitman Pro have a relation, but in this view I do not need to show in which "direction" the relation goes. It's a given in this view that the relation goes both ways.
**EDIT:** Seems like my example was too simple. The solution doesn't work with more data.
```
DECLARE @a TABLE (ID INT, Name1 VARCHAR(50), Name2 VARCHAR(50))
INSERT INTO @a VALUES ( 1, 'XMind', 'MindNode' )
INSERT INTO @a VALUES ( 2, 'MindNode', 'XMind' )
INSERT INTO @a VALUES ( 3, 'avast', 'Hitman Pro' )
INSERT INTO @a VALUES ( 4, 'Hitman Pro', 'avast' )
INSERT INTO @a VALUES ( 5, 'PPLive Video Accelerator', 'Hola! Better Internet' )
INSERT INTO @a VALUES ( 6, 'ZenMate', 'Hola! Better Internet' )
INSERT INTO @a VALUES ( 7, 'Hola! Better Internet', 'PPLive Video Accelerator' )
INSERT INTO @a VALUES ( 8, 'Hola! Better Internet', 'ZenMate' )
SELECT a1.ID AS ID1 ,
a2.ID AS ID2 ,
a1.Name1 ,
a2.Name1 AS Name2
FROM @a a1
JOIN @a a2 ON a1.Name1 = a2.Name2
AND a1.ID < a2.ID -- avoid duplicates
```
This works, however, so I guess it's the GUIDs that are messing with me.
**EDIT AGAIN:**
**EDIT AGAIN:**
I haven't looked at this for a while; I thought it worked, but I just realized it does not. I've struggled all morning with this, but I must admit that SQL is not really my strong suit. The thing is this:
```
DECLARE @a TABLE (ID int, Name1 VARCHAR(50), Name2 VARCHAR(50))
INSERT INTO @a VALUES ( 1, 'XMind', 'MindNode' )
INSERT INTO @a VALUES ( 2, 'MindNode', 'XMind' )
INSERT INTO @a VALUES ( 3, 'avast', 'Hitman Pro' )
INSERT INTO @a VALUES ( 4, 'PPLive Video Accelerator', 'Hola! Better Internet' )
INSERT INTO @a VALUES ( 5, 'ZenMate', 'Hola! Better Internet' )
INSERT INTO @a VALUES ( 6, 'Hitman Pro', 'avast' )
INSERT INTO @a VALUES ( 7, 'Hola! Better Internet', 'PPLive Video Accelerator' )
INSERT INTO @a VALUES ( 8, 'Hola! Better Internet', 'ZenMate' )
INSERT INTO @a VALUES ( 9, 'XX', 'A' )
INSERT INTO @a VALUES ( 10, 'XX', 'BB' )
INSERT INTO @a VALUES ( 11, 'BB', 'XX' )
INSERT INTO @a VALUES ( 12, 'A', 'XX' )
INSERT INTO @a VALUES ( 13, 'XX', 'CC' )
INSERT INTO @a VALUES ( 14, 'CC', 'XX' )
;With CTE as
(
SELECT a1.ID AS ID1 ,
a2.ID AS ID2 ,
a1.Name1 ,
a2.Name1 AS Name2,
CheckSum(Case when a1.Name1>a2.Name1 then a2.Name1+a1.Name1 else a1.Name1+a2.Name1 end) ck, -- just for display
Row_Number() over (Partition by CheckSum(Case when a1.Name1>a2.Name1 then a2.Name1+a1.Name1 else a1.Name1+a2.Name1 end)
order by CheckSum(Case when a1.Name1>a2.Name1 then a2.Name1+a1.Name1 else a1.Name1+a2.Name1 end)) as rn
FROM @a a1
JOIN @a a2 ON a1.Name1 = a2.Name2
)
Select ID1, ID2,Name1, Name2
from CTE C1
where rn=1
```
When I use this code it works fine for the names, but it doesn't match the IDs correctly.
The result is
```
ID1 | ID2 | Name1 | Name2
12 | 9 | A | X (Correct)
7 | 5 | Hola! | ZenMate (Not Correct)
[..]
```
I've been pulling my hair out all morning, but I can't figure this out. I still use GUIDs as IDs and only use ints here to make it a bit more readable.
|
If the output should contain only two-way relations `('XX' + 'A') AND ('A' + 'XX')`, try this:
```
;
WITH m (ID1, ID2, Name1, Name2) AS (
SELECT ID1, ID2, Name1, Name2
FROM (
SELECT a1.ID AS ID1
,a2.ID AS ID2
,a1.Name1 AS Name1
,a2.Name1 AS Name2
,ROW_NUMBER() OVER (PARTITION BY a1.Name1, a2.Name1 ORDER BY (SELECT 1)) AS n
FROM @a AS a1
JOIN @a AS a2
ON a1.Name1 = a2.Name2
AND a1.Name2 = a2.Name1
) AS T
WHERE n = 1
)
SELECT DISTINCT *
FROM (
SELECT ID1, ID2, Name1, Name2
FROM m
WHERE ID1 <= ID2
UNION ALL
SELECT ID2, ID1, Name2, Name1
FROM m
WHERE ID1 > ID2
) AS dm
```
It produces the output as follows:
```
+------+-----+--------------------------+-----------------------+
| ID1 | ID2 | Name1 | Name2 |
+------+-----+--------------------------+-----------------------+
| 1 | 2 | XMind | MindNode |
| 3 | 6 | avast | Hitman Pro |
| 4 | 7 | PPLive Video Accelerator | Hola! Better Internet |
| 5 | 8 | ZenMate | Hola! Better Internet |
| 9 | 12 | XX | A |
| 10 | 11 | XX | BB |
| 13 | 14 | XX | CC |
+------+-----+--------------------------+-----------------------+
```
|
```
DECLARE @a TABLE (ID INT, Name1 VARCHAR(50), Name2 VARCHAR(50))
INSERT INTO @a VALUES ( 1, 'XMind', 'MindNode' )
INSERT INTO @a VALUES ( 2, 'MindNode', 'XMind' )
INSERT INTO @a VALUES ( 3, 'avast', 'Hitman Pro' )
INSERT INTO @a VALUES ( 4, 'Hitman Pro', 'avast' )
SELECT a1.ID AS ID1 ,
a2.ID AS ID2 ,
a1.Name1 ,
a2.Name1 AS Name2
FROM @a a1
JOIN @a a2 ON a1.Name1 = a2.Name2
AND a1.ID < a2.ID -- avoid duplicates
```
Referring to the amendment and extension of your question, a more complicated solution is required.
We form a [CHECKSUM](http://msdn.microsoft.com/en-US/us_en/library/ms189788.aspx) over a1.Name1 and a2.Name1 (swapping the two by sort order so that both directions of a pair produce the same value).
Using [ROW\_NUMBER (Transact-SQL)](http://msdn.microsoft.com/en-US/us_en/library/ms186734.aspx) we number the rows partitioned by this checksum and use only the rows numbered 1.
```
DECLARE @a TABLE (ID uniqueIdentifier, Name1 VARCHAR(50), Name2 VARCHAR(50))
INSERT INTO @a VALUES ( NewID(), 'XMind', 'MindNode' )
INSERT INTO @a VALUES ( NewID(), 'MindNode', 'XMind' )
INSERT INTO @a VALUES ( NewID(), 'avast', 'Hitman Pro' )
INSERT INTO @a VALUES ( NewID(), 'Hitman Pro', 'avast' )
INSERT INTO @a VALUES ( NewID(), 'PPLive Video Accelerator', 'Hola! Better Internet' )
INSERT INTO @a VALUES ( NewID(), 'ZenMate', 'Hola! Better Internet' )
INSERT INTO @a VALUES ( NewID(), 'Hola! Better Internet', 'PPLive Video Accelerator' )
INSERT INTO @a VALUES ( NewID(), 'Hola! Better Internet', 'ZenMate' )
INSERT INTO @a VALUES ( NewID(), 'XX', 'A' )
INSERT INTO @a VALUES ( NewID(), 'A', 'XX' )
INSERT INTO @a VALUES ( NewID(), 'XX', 'BB' )
INSERT INTO @a VALUES ( NewID(), 'BB', 'XX' )
INSERT INTO @a VALUES ( NewID(), 'XX', 'CC' )
INSERT INTO @a VALUES ( NewID(), 'CC', 'XX' )
;With CTE as
(
SELECT a1.ID AS ID1 ,
a2.ID AS ID2 ,
a1.Name1 ,
a2.Name1 AS Name2,
CheckSum(Case when a1.Name1>a2.Name1 then a2.Name1+a1.Name1 else a1.Name1+a2.Name1 end) ck, -- just for display
Row_Number() over (Partition by CheckSum(Case when a1.Name1>a2.Name1 then a2.Name1+a1.Name1 else a1.Name1+a2.Name1 end)
order by CheckSum(Case when a1.Name1>a2.Name1 then a2.Name1+a1.Name1 else a1.Name1+a2.Name1 end)) as rn
FROM @a a1
JOIN @a a2 ON a1.Name1 = a2.Name2
)
Select *
from CTE C1
where rn=1
```
*Edit:*
If you only want pairs where both fields match, the needed query is simply:
```
SELECT a1.ID AS ID1 , a2.ID AS ID2 , a1.Name1 , a2.Name1 AS Name2
FROM @a a1
JOIN @a a2 ON a1.Name1 = a2.Name2 and a1.Name2 = a2.Name1 AND a1.ID < a2.ID
```
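This last two-column join can be sketched with Python's sqlite3 (invented sample data): requiring the reversed row to exist and ordering the IDs keeps exactly one row per pair.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE a (id INTEGER, name1 TEXT, name2 TEXT);
    INSERT INTO a VALUES
        (1, 'XMind', 'MindNode'), (2, 'MindNode', 'XMind'),
        (3, 'avast', 'Hitman Pro'), (4, 'Hitman Pro', 'avast');
""")

# Self-join: a2 must be the mirror image of a1, and id < id keeps one copy.
pairs = con.execute("""
    SELECT a1.id, a2.id, a1.name1, a2.name1
    FROM a a1
    JOIN a a2
      ON a1.name1 = a2.name2
     AND a1.name2 = a2.name1   -- require the reverse row to exist
     AND a1.id < a2.id         -- keep each pair exactly once
    ORDER BY a1.id
""").fetchall()
```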
|
Merge a two way relation in the same table in SQL Server
|
[
"",
"sql",
"sql-server",
""
] |
Strangely, it seems that the `EXISTS` clause is not supported by `DBISAM`'s SQL engine, as it always results in an SQL error. The following is a sample where EXISTS is being used. Am I missing anything here?
```
update Table1 set HASXACTION = False
WHERE EXISTS (SELECT SERIALID
              FROM Table2
              LEFT JOIN Table1 ON (Table2.AUXILACT = Table1.CODE)
                              AND (Table2.CHARTACT = Table1.CHARTACT))
```
|
Never mind, people; I just learned that DBISAM does not support the EXISTS operator for specifying sub-select predicates in WHERE clauses. It's documented within DBISAM's help file (screenshot attached).

|
I presume that you don't really want the `join` in the subquery. You probably intend a correlated subquery:
```
UPDATE Table1
SET HASXACTION = False
WHERE EXISTS (SELECT SERIALID
FROM Table2
WHERE Table2.AUXILACT = Table1.CODE AND Table2.CHARTACT = Table1.CHARTACT
);
```
This should also fix the problem you are having, which is the reference to `Table1` both in the `update` clause and the subquery. (This is a MySQL limitation.)
EDIT:
I cannot find any reference to `EXISTS` (or even subqueries) for dbisam. But, you can do updates with joins, so this should be equivalent:
```
UPDATE Table1
SET HASXACTION = False
FROM Table1 JOIN
Table2
ON Table2.AUXILACT = Table1.CODE AND Table2.CHARTACT = Table1.CHARTACT;
```
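The EXISTS form of this update is easy to sandbox with Python's sqlite3 (illustrative table and column names loosely based on the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t1 (code TEXT, chartact TEXT, hasxaction INTEGER DEFAULT 1);
    CREATE TABLE t2 (auxilact TEXT, chartact TEXT);
    INSERT INTO t1 (code, chartact) VALUES ('A', 'X'), ('B', 'Y');
    INSERT INTO t2 VALUES ('A', 'X');   -- only ('A', 'X') has a match
""")

# Correlated EXISTS: flip the flag only on rows that have a match in t2.
con.execute("""
    UPDATE t1
    SET hasxaction = 0
    WHERE EXISTS (SELECT 1 FROM t2
                  WHERE t2.auxilact = t1.code
                    AND t2.chartact = t1.chartact)
""")
flags = con.execute("SELECT code, hasxaction FROM t1 ORDER BY code").fetchall()
```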
|
Is EXISTS clause supported by DBISAM's sql engine?
|
[
"",
"sql",
"dbisam",
""
] |
I'm compiling statistics from 5 tables. I have made an example with one client's data.
**loan**
```
id | status
------------
1454 | payed
```
**payment schedule**
```
id | loan_id | user_client_id
-----------------------------
1456 | 1454 | 3113
```
**payment\_schedule\_row**
```
id | payment_schedule_id | payment | payment_date
---------------------------------------------------
5013 | 1456 | 32 | 2013-11-06
5014 | 1456 | 32 | 2013-12-06
5015 | 1456 | 32 | 2013-01-05
5016 | 1456 | 32 | 2013-02-04
5017 | 1456 | 32 | 2013-03-06
5018 | 1456 | 32 | 2013-04-05
5019 | 1456 | 32 | 2013-05-05
5020 | 1456 | 32 | 2013-06-04
5021 | 1456 | 32 | 2013-07-04
5022 | 1456 | 32 | 2013-08-03
5023 | 1456 | 32 | 2013-09-02
5014 | 1456 | 32 | 2013-10-02
```
**payment\_schedule\_cover**
```
id | payment_schedule_id | date | sum
----------------------------------------------
2282 | 1456 | 2013-11-08 | 34
3054 | 1456 | 2013-12-07 | 40
3776 | 1456 | 2013-01-04 | 38
4871 | 1456 | 2013-02-06 | 49
5954 | 1456 | 2013-03-06 | 40
7070 | 1456 | 2013-04-25 | 49
9029 | 1456 | 2013-05-21 | 52
10377 | 1456 | 2013-06-20 | 30
10391 | 1456 | 2013-06-21 | 30
10927 | 1456 | 2013-07-07 | 60
```
**payment\_schedule\_delay**
```
id | payment_schedule_row_id | start_date | end_date | delay
----------------------------------------------------------------
1135 | 5013 | 2013-11-07 | 2013-11-08 | 0.07
1548 | 5014 | 2013-12-07 | 2013-12-07 | 0.03
2628 | 5016 | 2014-02-05 | 2014-02-06 | 0.01
```
And the query is :
```
SELECT period, loan_sum, covers, delay
FROM
(SELECT MAX(EXTRACT(YEAR_MONTH FROM psc.date)) AS period,
(SELECT SUM(psr2.payment) FROM payment_schedule_row AS psr2 WHERE psr.payment_schedule_id = psr2.payment_schedule_id) AS loan_sum,
(SELECT SUM(psc2.sum) FROM payment_schedule_cover AS psc2 WHERE psc.payment_schedule_id = psc2.payment_schedule_id) AS covers,
(SELECT SUM(psd2.delay) FROM payment_schedule_delay AS psd2 WHERE psr.id = psd2.payment_schedule_row_id) AS delay
FROM loan
INNER JOIN payment_schedule AS ps ON ps.loan_id = loan.id
INNER JOIN payment_schedule_row AS psr ON psr.payment_schedule_id = ps.id
INNER JOIN payment_schedule_cover AS psc ON psc.payment_schedule_id = ps.id
WHERE loan.status = 'payed'
GROUP BY ps.id) AS sum_by_id
GROUP BY period
```
Result for the query:
```
period | loan_sum | covers | delay
-----------------------------------
201407 | 384 | 422 | 0.07
```
Everything is right except the delay. It should be 0.11 (0.07 + 0.03 + 0.01).
I have been trying to find the error in the query for days now. Maybe someone can tell me what I'm doing wrong.
**Sqlfiddle link:** <http://sqlfiddle.com/#!2/21585/2>
|
I finally got an answer from a MySQL forum. The answer that fixed my problem was:
... there are problems ...
1. The Group By operator in the subquery does not see aggregation inside the correlated sub-subquery sums. Those sums need to be moved out a level.
2. There's no aggregation for the outer query's Group By to group; it just functions as an Order By
3. A query like `select a, b, c, sum(d) ... group by a` can return arbitrary results for b and c unless a strictly 1:1 relationship holds between a and each of b and c, which looks unlikely to be the case in your subquery.
4. Correlated subqueries are inefficient, as yours illustrate with their two-stage joins
5. The `delay` correlated subquery doesn't join to anything
So move correlated subquery logic to the FROM clause, join the `delay` query, touch up the Group By clause, and we have ...
```
select psc.period, psc.sum, psr.payments, sum(psd.delay) as delay
from loan
join payment_schedule as ps on ps.loan_id = loan.id
join(
select payment_schedule_id, sum(payment) as payments
from payment_schedule_row
group by payment_schedule_id
) as psr on psr.payment_schedule_id = ps.id
join (
select payment_schedule_id, sum(sum) as sum, max( extract(year_month from date) ) as period
from payment_schedule_cover
group by payment_schedule_id
) psc on ps.id = psc.payment_schedule_id
join payment_schedule_row psr2 on ps.id = psr2.payment_schedule_id
join (
select payment_schedule_row_id, sum(delay) as delay
from payment_schedule_delay
group by payment_schedule_row_id
) as psd on psr2.id = psd.payment_schedule_row_id
where loan.status = 'payed'
group by psc.period, psc.sum, psr.payments;
```
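The core of this fix (pre-aggregating each child table in a derived table before joining, so the join fan-out cannot inflate the sums) can be demonstrated in miniature with Python's sqlite3:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE rows_ (sched INTEGER, payment INTEGER);
    INSERT INTO rows_ VALUES (1, 32), (1, 32);            -- 2 schedule rows
    CREATE TABLE delays (sched INTEGER, delay REAL);
    INSERT INTO delays VALUES (1, 0.07), (1, 0.03), (1, 0.01);
""")

# Naive join: every delay row repeats for every schedule row (2 x 3 = 6 rows),
# so both sums are inflated by the fan-out.
naive = con.execute("""
    SELECT SUM(r.payment), SUM(d.delay)
    FROM rows_ r JOIN delays d ON d.sched = r.sched
""").fetchone()

# Pre-aggregating each side in a derived table keeps the sums independent.
fixed = con.execute("""
    SELECT r.payments, d.delay
    FROM (SELECT sched, SUM(payment) AS payments FROM rows_ GROUP BY sched) r
    JOIN (SELECT sched, SUM(delay)  AS delay    FROM delays GROUP BY sched) d
      ON d.sched = r.sched
""").fetchone()
```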
|
```
SELECT period, loan_sum, covers, delay
FROM
(SELECT MAX(EXTRACT(YEAR_MONTH FROM psc.date)) AS period,
(SELECT SUM(psr2.payment) FROM payment_schedule_row AS psr2 WHERE psr.payment_schedule_id = psr2.payment_schedule_id) AS loan_sum,
(SELECT SUM(psc2.sum) FROM payment_schedule_cover AS psc2 WHERE psc.payment_schedule_id = psc2.payment_schedule_id) AS covers,
(SELECT SUM(psd2.delay) FROM payment_schedule_delay AS psd2 WHERE psr.id = psd2.payment_schedule_row_id) AS delay
FROM loan
INNER JOIN payment_schedule AS ps ON ps.loan_id = loan.id
INNER JOIN payment_schedule_row AS psr ON psr.payment_schedule_id = ps.id
INNER JOIN payment_schedule_cover AS psc ON psc.payment_schedule_id = ps.id
INNER JOIN payment_schedule_delay AS psd ON psr.id = psd.payment_schedule_row_id
WHERE loan.status = 'payed'
GROUP BY ps.id) AS sum_by_id
GROUP BY period
```
|
SQL SUM() Calculates only one row in sub query
|
[
"",
"mysql",
"sql",
""
] |
I have a problem when I use my query below to do a loop inside a cursor.
Data in table1 will be like this:
```
id | data
----|---------
A | 4
B | 2
C | 5
```
The result in table2 should be like this:
```
id | data
----|---------
A | 1
A | 1
A | 1
A | 1
B | 1
B | 1
C | 1
C | 1
C | 1
C | 1
C | 1
```
I have a SQL query with a cursor, like this:
```
DECLARE @table2 table ( id VARCHAR(500), data INTEGER)
DECLARE Cur CURSOR FOR
SELECT id, data FROM table1
OPEN Cur
WHILE ( @@FETCH_STATUS = 0 )
BEGIN
DECLARE @LoopNum INTEGER
DECLARE @tempID VARCHAR(255)
DECLARE @tempDATA INTEGER
FETCH NEXT FROM Cur INTO @tempID, @tempDATA
set @LoopNum = 0
WHILE @LoopNum < @tempDATA
BEGIN
INSERT INTO table2 (id, data)
VALUES( @tempID, 1)
SET @LoopNum = @LoopNum + 1
END
END
CLOSE Cur
DEALLOCATE Cur
SELECT * FROM table2
```
But the query didn't work. Is there something wrong with my query?
Thank you.
|
Use this query to get the expected result:
```
CREATE TABLE #test
(id CHAR(1),data INT)
INSERT #test VALUES ('A',4)
INSERT #test VALUES('B',2)
INSERT #test VALUES('C',5);
SELECT s.id, 1 AS data
FROM #test s
INNER JOIN
master.dbo.spt_values t ON t.type='P'
AND t.number BETWEEN 1 AND s.data
```
**Note**: Refer this [Why (and how) to split column using master..spt\_values?](https://stackoverflow.com/questions/4273978/why-and-how-to-split-column-using-master-spt-values/4280038#4280038)
|
You actually don't need a loop
```
IF OBJECT_ID('TEMPDB..#TEMP') IS NOT NULL
DROP TABLE #TEMP
SELECT 'A' AS ID, 4 AS DATA
INTO #TEMP UNION
SELECT 'B', 2 UNION
SELECT 'C', 5
;WITH CTE AS
(
SELECT 1 AS NUMBER
UNION ALL
SELECT NUMBER + 1
FROM CTE
WHERE NUMBER < 100
)
SELECT T.ID, 1
FROM CTE C
INNER JOIN #TEMP T
ON C.NUMBER <= T.DATA
ORDER BY T.ID
```
Be careful: if you want to generate a large set of numbers in the CTE, it may become slow.
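Both answers replace the cursor with a join against a source of numbers; a minimal reproduction with Python's sqlite3 (which supports recursive CTEs) looks like this:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t1 (id TEXT, data INTEGER);
    INSERT INTO t1 VALUES ('A', 4), ('B', 2), ('C', 5);
""")

# Generate numbers 1..100 and join: each t1 row repeats data times.
rows = con.execute("""
    WITH RECURSIVE nums(n) AS (
        SELECT 1
        UNION ALL
        SELECT n + 1 FROM nums WHERE n < 100
    )
    SELECT t1.id, 1
    FROM t1 JOIN nums ON nums.n <= t1.data
    ORDER BY t1.id
""").fetchall()

# Tally the generated rows per id to confirm the expansion.
counts = {}
for id_, one in rows:
    counts[id_] = counts.get(id_, 0) + one
```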
|
How to create loop based on value of row?
|
[
"",
"sql",
"sql-server",
"t-sql",
"loops",
"cursor",
""
] |
This is my data in the table.
```
**ID webLoad transmitted transmissionRcvd ackRcvd**
1 Y Y N N
2 Y N Y N
3 Y Y Y Y
4 N Y Y N
5 Y N N Y
6 N Y Y Y
7 Y Y Y Y
8 Y Y Y N
9 Y Y Y Y
```
I want to be able to select all rows where any one of the above 4 columns has a value of 'N',
so the rows in the result should be ID = 1, 2, 4, 5, 6, 8.
**NOTE:** In some cases the input will not be the same for all columns. For example, webLoad could be NULL or 'Y' while the others are 'N'; the same can happen to any one of these columns.
|
```
select * from your_table
where 'N' in (webLoad, transmitted, transmissionRcvd, ackRcvd)
```
|
```
SELECT *
FROM table
WHERE webload = 'N'
OR transmitted = 'N'
OR transmissionRcvd = 'N'
OR ackRcvd = 'N'
```
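Both forms are equivalent; here is a quick sanity check of the `IN`-list version with Python's sqlite3 (an invented subset of the sample rows):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE flags (id INTEGER, webload TEXT, transmitted TEXT,
                        transmissionrcvd TEXT, ackrcvd TEXT);
    INSERT INTO flags VALUES
        (1, 'Y', 'Y', 'N', 'N'),
        (3, 'Y', 'Y', 'Y', 'Y'),
        (5, 'Y', 'N', 'N', 'Y');
""")

# 'N' IN (col1, ..., col4) is true if any of the four columns equals 'N'.
ids = [r[0] for r in con.execute("""
    SELECT id FROM flags
    WHERE 'N' IN (webload, transmitted, transmissionrcvd, ackrcvd)
    ORDER BY id
""")]
```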
|
SQL Server: Select all rows where if any one of the 4 columns has a value of 'N'
|
[
"",
"sql",
"sql-server",
"t-sql",
"select",
""
] |
I'm trying to create a SQL query which will supply values for auto completion for a text field. Everything is working however I can't seem to create an SQL query which is exact enough for the purposes I want. I am using MySQL.
If there is a space (or multiple spaces) in the search term, I only want the query to do a LIKE comparison on the part of the string after the last space.
For example, say I have two possible values in the database:
* Bolt
* Bolts Large
Currently if the user types 'Bolt' then a space, both values above are returned using this query -
```
SELECT name FROM items WHERE name LIKE 'SEARCH_TERM%'
```
What I want is that if the user types 'Bolt' then a space, then only Bolt is returned from the database.
Effectively meaning that only the last part of the search term after the space is compared using LIKE, the results should match exactly up until the last space.
I've also tried:
```
SELECT name FROM items WHERE name LIKE 'SEARCH_TERM[a-z]%'
```
But that actually returns no results using the above scenario.
Is what I'm after possible? I've also tried to explore using Full Text Search but have had no luck with that. I believe full text search is enabled on the name field; however, I have limited experience with this. The query below didn't work.
```
SELECT name FROM items WHERE MATCH(name) AGAINST('SEARCH_TERM')
```
Any advice or pointers would be much appreciated.
|
The query
```
SELECT name FROM items WHERE name LIKE 'Bolt %'
```
doesn't return any record, because both 'Bolt' and 'Bolts Large' don't match 'Bolt %'.
```
SELECT name FROM items WHERE name LIKE 'Bolt%'
```
returns both records, because both 'Bolt' and 'Bolts Large' match 'Bolt%'.
To look for 'Bolt' and not 'Bolts', you must add a space to both your search string and the column string:
```
SELECT name FROM items WHERE concat(name, ' ') LIKE 'Bolt %'
```
returns 'Bolt' but not 'Bolts Large'.
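The trailing-space trick can be sanity-checked with Python's sqlite3 (SQLite spells the concatenation `||`; MySQL would use `CONCAT(name, ' ')`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE items (name TEXT);
    INSERT INTO items VALUES ('Bolt'), ('Bolts Large');
""")

# Padding the column with a trailing space lets 'Bolt %' match the exact
# word 'Bolt' without also matching the prefix of 'Bolts'.
hits = [r[0] for r in con.execute("""
    SELECT name FROM items WHERE name || ' ' LIKE 'Bolt %'
""")]
```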
|
```
SELECT name FROM items WHERE REPLACE(name, ' ', '') LIKE 'SEARCH_TERM%'
```
|
SQL Text Search - EXACT before space, LIKE after in search term
|
[
"",
"mysql",
"sql",
""
] |
I need to update a value in table with value in other table
```
String sql6="UPDATE project.wish_list SET project.wish_list.quantity= project.book.quantity";
```
But it doesn't work. Why?
Thank you.
|
You have to join the tables:
```
sql6="UPDATE project.wish_list join project.book on <condition> SET project.wish_list.quantity= project.book.quantity";
```
|
First of all you have a syntax error:
```
UPDATE project.wish_list SET project.wish_lista.quantity= project.book.quantity
```
The "a" in whish\_list -> "project.wish\_lista.quantity"... it should be:
```
UPDATE project.wish_list SET project.wish_list.quantity= project.book.quantity
```
|
UPDATE-SQL value in table with another value in table
|
[
"",
"sql",
"oracle",
"sql-update",
""
] |
I got around 18 db's. All these db's have the same structure. I want to query all these db's once to get my results.
Example:
ShopA
ShopB
ShopC
These db's got all the table article (and also the same rows).
How do I get all articles in one result with a WHERE?
I thought:
```
select *
from shopa.dbo.article
shopb.dbo.article
shopc.dbo.article
where color = 'red'
```
Does anyone have an idea?
|
Have you considered doing a `UNION ALL`?
So something like:
```
SELECT 'a' AS Shop, *
FROM shopa.dbo.article
WHERE color = 'red'
UNION ALL
SELECT 'b' AS Shop, *
FROM shopb.dbo.article
WHERE color = 'red'
UNION ALL
SELECT 'c' AS Shop, *
FROM shopc.dbo.article
WHERE color = 'red'
```
Or, with a CTE (if you RDBMS supports it)
```
;WITH allstores AS (
SELECT 'a' AS Shop, *
FROM shopa.dbo.article
UNION ALL
SELECT 'b' AS Shop, *
FROM shopb.dbo.article
UNION ALL
SELECT 'c' AS Shop, *
FROM shopc.dbo.article
)
SELECT *
FROM allstores
WHERE color = 'red'
```
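As a quick sketch of the UNION ALL pattern (here the "databases" are just stand-in tables inside one SQLite connection, since SQLite has no cross-database `dbo` naming):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Two stand-in "shops": in this sketch they are simply two tables.
for shop in ("shopa_article", "shopb_article"):
    conn.execute(f"CREATE TABLE {shop} (name TEXT, color TEXT)")
conn.executemany("INSERT INTO shopa_article VALUES (?, ?)",
                 [("bolt", "red"), ("nut", "blue")])
conn.executemany("INSERT INTO shopb_article VALUES (?, ?)",
                 [("screw", "red")])

# UNION ALL stitches the per-shop results together, tagging each row
# with a literal shop label so the origin stays visible.
rows = conn.execute("""
    SELECT 'a' AS shop, name FROM shopa_article WHERE color = 'red'
    UNION ALL
    SELECT 'b' AS shop, name FROM shopb_article WHERE color = 'red'
""").fetchall()
print(rows)
```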
|
You could use UNION.
If you can simply select the db names, you could also use a cursor with OPENQUERY on a dynamically created string, insert into a temp table, and select from that.
|
sql select on multiple db's
|
[
"",
"sql",
""
] |
I was wondering if someone could lend me a hand.
I have quite a lengthy SQL query.
What I want to do is break down a column which stores the date and order the results just by the year. Part of the query is as follows:
```
SELECT CONVERT(CHAR(8), DateAdd(minute, -601, pay.Expiry), 10)
as DailyDate
FROM table
ORDER BY DailyDate
```
This gives me a 'DailyDate' outputted in the following format:
```
12-07-14
```
The year is the last 2 digits.
Is there a way I can sort the results based on those 2 digits?
Any help would be great.
Cheers,
|
Use this query.
```
SELECT CONVERT(CHAR(8), DateAdd(minute, -601, DailyDate ), 10) as DailyDate FROM
table
ORDER BY SUBSTRING(CONVERT(CHAR(8), DateAdd(minute, -601, DailyDate ), 10), 7, 2),
SUBSTRING(CONVERT(CHAR(8), DateAdd(minute, -601, DailyDate ), 10), 4, 2)
```
|
```
... ORDER BY SUBSTRING(DailyDate, 7, 2)
```
|
SQL Server - Order by part of a result
|
[
"",
"sql",
"sql-server",
"asp-classic",
""
] |
I have the following table of PHOTOS, their times and months:
```
TIME|MONTH|PHOTO
----|-----|-----
x1 | mx1 | p1
x2 | mx2 | p2
...
```
I'd like to get all months sorted descendent BUT, with the total photos count up until that month - not just the count of that month's group.
For example, the following query isn't good enough as it returns the count of each specific month's group instead of the total count up until that group:
```
select MONTH, count(TIME) from PHOTOS group by MONTH sort by MONTH desc
```
Ideas?
|
SQLite has fewer features than other RDBMSs, but I think this is how you should do it:
```
SELECT photosA.month,
(SELECT COUNT(time) AS PhotoCounter
FROM photos AS photosB
WHERE photosB.month <= photosA.month) AS total_photos
FROM photos AS photosA
GROUP BY photosA.month
```
ciao!
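A minimal sketch of this correlated-subquery approach, run through Python's sqlite3 with made-up sample data (months 1-3, with 2, 3, and 1 photos respectively):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE photos (time TEXT, month INTEGER)")
conn.executemany("INSERT INTO photos VALUES (?, ?)",
                 [("t1", 1), ("t2", 1), ("t3", 2), ("t4", 2), ("t5", 2), ("t6", 3)])

# Correlated subquery: for each month, count every photo taken in that
# month or any earlier one, giving a running total instead of a group count.
rows = conn.execute("""
    SELECT a.month,
           (SELECT COUNT(time) FROM photos b WHERE b.month <= a.month) AS total
    FROM photos a
    GROUP BY a.month
    ORDER BY a.month DESC
""").fetchall()
print(rows)
```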
|
```
select a.month, count(b.month) as CountSmaller
from [PHOTOS] a
inner join [PHOTOS] b on b.[MONTH] <= a.[MONTH]
group by a.time, a.month, a.photo
order by 2 desc
```
tested in sqlfiddle
|
sqlite - how to combine overall count and group by?
|
[
"",
"sql",
"sqlite",
"count",
"group-by",
""
] |
I would like to move all the rows inside a couple of tables from one SQL Server database into another SQL Server instance on a different remote server. This means I cannot connect to both at the same time, so the exchange has to be done via a file or a similar procedure. The tables have identical column definitions, meaning only the data inside differs. Also important to note: I do not expect a full overwrite, but rather a merge into the second database.
|
You can use SQL Server Management Studio's export/import tool. It can extract the data to a CSV file that you save to disk; after that, change the connection to the new server and run an import with the exported file. This should do the trick.
Here is a nice and quick tutorial:
<https://www.youtube.com/watch?v=Vf6pluv0Lv4>
|
Could you create a linked server?
<http://msdn.microsoft.com/en-us/library/ff772782.aspx>
|
Merge Tables between Databases
|
[
"",
"sql",
"sql-server",
""
] |
I have a table in a SQL Server 2008 database with a number column that I want to arrange on a scale 1 to 10.
Here is an example where the column (`Scale`) is what I want to accomplish with SQL
```
Name Count (Scale)
----------------------
A 19 2
B 1 1
C 25 3
D 100 10
E 29 3
F 60 7
```
In my example above the min and max count is 1 and 100 (this could be different from day to day).
I want to get a number to which each record belongs to.
```
1 = 0-9
2 = 10-19
3 = 20-29 and so on...
```
It has to be dynamic because this data changes everyday so I can not use a `WHERE` clause with static numbers like this: `WHEN Count Between 0 and 10...`
|
Try this, though note that technically the value 100 doesn't fall in the range 90-99 and therefore should probably be classed as 11, which is why the value 60 comes out with a scale of 6 rather than your 7:
[SQL Fiddle](http://sqlfiddle.com/#!3/d41d8/41244)
**MS SQL Server 2008 Schema Setup**:
**Query 1**:
```
create table #scale
(
Name Varchar(10),
[Count] INT
)
INSERT INTO #scale
VALUES
('A', 19),
('B', 1),
('C', 25),
('D', 100),
('E', 29),
('F', 60)
SELECT name, [COUNT],
CEILING([COUNT] * 10.0 / (SELECT MAX([Count]) - MIN([Count]) + 1 FROM #Scale)) AS [Scale]
FROM #scale
```
**[Results](http://sqlfiddle.com/#!3/d41d8/41244/0)**:
```
| NAME | COUNT | SCALE |
|------|-------|-------|
| A | 19 | 2 |
| B | 1 | 1 |
| C | 25 | 3 |
| D | 100 | 10 |
| E | 29 | 3 |
| F | 60 | 6 |
```
This gets you your answer where 60 becomes 7, hence 100 is 11:
```
SELECT name, [COUNT],
CEILING([COUNT] * 10.0 / (SELECT MAX([Count]) - MIN([Count]) FROM #Scale)) AS [Scale]
FROM #scale
```
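The arithmetic behind the `CEILING` expression can be checked outside SQL; here is a sketch in plain Python using the sample counts (min 1, max 100, so the divisor is 100):

```python
import math

counts = {"A": 19, "B": 1, "C": 25, "D": 100, "E": 29, "F": 60}
lo, hi = min(counts.values()), max(counts.values())

# Same arithmetic as the SQL: CEILING(count * 10.0 / (max - min + 1)).
scale = {name: math.ceil(c * 10.0 / (hi - lo + 1)) for name, c in counts.items()}
print(scale)
```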
|
```
WITH MinMax(Min, Max) AS (SELECT MIN(Count), MAX(Count) FROM Table1)
SELECT Name, Count, 1 + 9 * (Count - Min) / (Max - Min) AS Scale
FROM Table1, MinMax
```
|
SQL: Arrange numbers on a scale 1 to 10
|
[
"",
"sql",
"sql-server",
"database",
"where-clause",
"min",
""
] |
Is it possible to display only 20 items of the whole result?
I know of a query with "rownum <= 20", but that only makes SQL check the first 20 rows in the table.
But I want SQL to check my whole table and display only the highest 20 entries.
To be clear, I have a table with job description like Title, description, agency, salary\_min, salary max. I want to display the 20 highest paying jobs.
|
The "highest" 20 entries suggests a sort. You would do something like this:
```
select t.*
from (select t.*
from table t
order by highestcol desc
) t
where rownum <= 20;
```
If you are using Oracle 12c or more recent, you can use the `fetch first` clause instead:
```
select t.*
from table t
order by highestcol desc
fetch first 20 rows only;
```
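Other engines spell the same sort-then-limit idea differently. A sketch in SQLite (via Python's sqlite3, with a hypothetical jobs table) shows the pattern of ordering first, then keeping only the top rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (title TEXT, salary_max INTEGER)")
conn.executemany("INSERT INTO jobs VALUES (?, ?)",
                 [("dev", 90), ("dba", 120), ("qa", 70), ("pm", 100)])

# Sort first, then keep the top rows -- the order of operations the
# Oracle subquery/ROWNUM (or FETCH FIRST) idiom enforces.
top2 = conn.execute(
    "SELECT title FROM jobs ORDER BY salary_max DESC LIMIT 2").fetchall()
print(top2)
```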
|
First sort (ORDER BY) in a subquery, then apply the ROWNUM filter.
|
how to display only 20 items of the result in oracle sql?
|
[
"",
"sql",
"database",
"oracle",
"top-n",
""
] |
I got this error in SQL Server 2012:
```
Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , > , >= or when the subquery is used as an expression.
```
This is my script:
```
CREATE PROCEDURE dbo.Update_F_ARTCLIENT
@PK varchar(19)
,@AR_Ref varchar(19)
,@AC_Categorie smallint
,@AC_PrixVen numeric(24,6)
,@AC_Coef numeric(24,6)
,@AC_PrixTTC smallint
,@AC_Remise numeric(24,6)
AS
BEGIN
SET NOCOUNT ON;
UPDATE [dbo].[F_ARTCLIENT]
SET
[AR_Ref] = @AR_Ref
,[AC_Categorie] = @AC_Categorie
,[AC_PrixVen] = @AC_PrixVen
,[AC_Coef]= @AC_Coef
,[AC_PrixTTC] = @AC_PrixTTC
,[AC_Remise] = @AC_Remise
WHERE (SELECT CONCAT([AR_Ref], [AC_Categorie]) as PK FROM [dbo].[F_ARTCLIENT])= @PK
END
```
|
Try modifying your WHERE condition to be:
```
WHERE CONCAT([AR_Ref], [AC_Categorie]) = @PK
```
|
As the error description says, your subquery returns more than one value. You can avoid this by changing your subquery like this:
```
WHERE (SELECT TOP 1 CONCAT([AR_Ref], [AC_Categorie]) FROM [dbo].[F_ARTCLIENT]) = @PK
```
|
SQL ERROR: This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression
|
[
"",
"sql",
"sql-server-2008",
"stored-procedures",
"sql-server-2012",
""
] |
I have the sample data below. But when I chart it in SSRS, February 2012 does not show up because there were no sales in Feb 2012. I need all months to show up in the chart, even when there is no transaction data.
How can I solve this? This table is growing as the months go, but sometimes there are no data on certain months. I had an idea below to fix this but it does not work. Please help.
Table `tTable`:
```
╔════╦═════════════════╦══════════════╦══════════╗
║ ID ║ TransactionDate ║ Month Number ║ Amount ║
║ 1 ║ 3-Jan-12 ║ 1 ║ $1 ║
║ 2 ║ 3-Mar-12 ║ 3 ║ $56 ║
║ 3 ║ 3-Apr-12 ║ 4 ║ $6 ║
║ 4 ║ 3-May-12 ║ 5 ║ $8 ║
║ 5 ║ 3-Jun-12 ║ 6 ║ $11 ║
║ 6 ║ 3-Jul-12 ║ 7 ║ $8 ║
║ 7 ║ 3-Aug-12 ║ 8 ║ $5 ║
║ 8 ║ 3-Sep-12 ║ 9 ║ $2 ║
║ 9 ║ 3-Oct-12 ║ 10 ║ $1 ║
║ 10 ║ 3-Nov-12 ║ 11 ║ $300 ║
║ 11 ║ 3-Dec-12 ║ 12 ║ $21 ║
║ 12 ║ 3-Jan-13 ║ 1 ║ $54 ║
║ 13 ║ 3-Feb-13 ║ 2 ║ $54 ║
║ 14 ║ 3-Mar-13 ║ 3 ║ $87 ║
║ 15 ║ 3-Apr-13 ║ 4 ║ $99 ║
║ 16 ║ 3-May-13 ║ 5 ║ $12 ║
║ 17 ║ 3-Jun-13 ║ 6 ║ $2,187 ║
║ 18 ║ 3-Jul-13 ║ 7 ║ $21,487 ║
║ 19 ║ 3-Aug-13 ║ 8 ║ $1,214 ║
║ 20 ║ 3-Sep-13 ║ 9 ║ $21 ║
║ 21 ║ 3-Oct-13 ║ 10 ║ $21 ║
║ 22 ║ 3-Nov-13 ║ 11 ║ $235 ║
║ 23 ║ 3-Dec-13 ║ 12 ║ $2,313 ║
╚════╩═════════════════╩══════════════╩══════════╝
```
First I thought a full outer join would work. But I don't have an ID in my `tMonth` Table. so it is still not working.
```
SELECT cMonth FROM (
SELECT 1 "cMonth"
UNION
SELECT 2
UNION
SELECT 3
UNION
SELECT 4
UNION
SELECT 5
UNION
SELECT 6
UNION
SELECT 7
UNION
SELECT 8
UNION
SELECT 9
UNION
SELECT 10
UNION
SELECT 11
UNION
SELECT 12)tMonth
FULL OUTER JOIN tTable ON tTable.MonthNumber = tMonth.cMonth
```
My Hope would be to come up with this result
```
+-----------------+--------------+------+----------+
| TransactionDate | Month Number | Year | Amount |
+-----------------+--------------+------+----------+
| 3-Jan-12 | 1 | 2012 | $1 |
<b>| NULL | 2 | 2012 | NULL |</b>
| 3-Mar-12 | 3 | 2012 | $56 |
| 3-Apr-12 | 4 | 2012 | $6 |
| 3-May-12 | 5 | 2012 | $8 |
| 3-Jun-12 | 6 | 2012 | $11 |
| 3-Jul-12 | 7 | 2012 | $8 |
| 3-Aug-12 | 8 | 2012 | $5 |
| 3-Sep-12 | 9 | 2012 | $2 |
| 3-Oct-12 | 10 | 2012 | $1 |
| 3-Nov-12 | 11 | 2012 | $300 |
| 3-Dec-12 | 12 | 2012 | $21 |
| 3-Jan-13 | 1 | 2013 | $54 |
| 3-Feb-13 | 2 | 2013 | $54 |
| 3-Mar-13 | 3 | 2013 | $87 |
| 3-Apr-13 | 4 | 2013 | $99 |
| 3-May-13 | 5 | 2013 | $12 |
| 3-Jun-13 | 6 | 2013 | $2,187 |
| 3-Jul-13 | 7 | 2013 | $21,487 |
| 3-Aug-13 | 8 | 2013 | $1,214 |
| 3-Sep-13 | 9 | 2013 | $21 |
| 3-Oct-13 | 10 | 2013 | $21 |
| 3-Nov-13 | 11 | 2013 | $235 |
| 3-Dec-13 | 12 | 2013 | $2,313 |
+-----------------+--------------+------+----------+
```
I just realize the year column is another issue.
|
I solved this using a CTE to build the calendar from the start year to years ahead:
```
DECLARE @START_DATE DATETIME
DECLARE @ENDDATE DATETIME
SET @START_DATE = '20110101'
SET @ENDDATE = '20151231'
;
WITH CTE_DATES AS
(
SELECT
@START_DATE DateValue UNION ALL SELECT
DateValue + 1
FROM CTE_DATES
WHERE DateValue + 1 < @ENDDATE)
SELECT
CAST(DateValue AS date) "DateValue"
into #Calendar
FROM CTE_DATES
OPTION (MAXRECURSION 0)
SELECT sum(Amount), iMonth, iYear FROM (
SELECT month(datevalue) "iMonth", Year(datevalue) "iYear" FROM #Calendar
GROUP BY month(datevalue), Year(datevalue)) Cal
LEFT JOIN tTable ON tTable.MonthNumber = Cal.iMonth AND Year(tTable.TransactionDate) = Cal.iYear
GROUP BY iMonth, iYear
ORDER BY iYear, iMonth
```
I got the CTE calendar from this link
<http://www.sqlshack.com/sql-server-using-recursive-cte-persisted-computed-columns-create-calendar-table/>
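The same fill-the-gaps idea can be sketched compactly with a recursive CTE in SQLite (month numbers only, hypothetical `tx` table); the LEFT JOIN keeps the empty month as a row with a NULL total instead of dropping it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tx (month INTEGER, amount INTEGER)")
conn.executemany("INSERT INTO tx VALUES (?, ?)", [(1, 10), (3, 56), (4, 6)])

# Recursive CTE builds the full month axis (1..4); the LEFT JOIN then
# keeps month 2 as a row with a NULL total instead of dropping it.
rows = conn.execute("""
    WITH RECURSIVE months(m) AS (
        SELECT 1 UNION ALL SELECT m + 1 FROM months WHERE m < 4
    )
    SELECT months.m, SUM(tx.amount)
    FROM months LEFT JOIN tx ON tx.month = months.m
    GROUP BY months.m
    ORDER BY months.m
""").fetchall()
print(rows)
```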
|
How about using a Tally Table to get the months and years, then using a left join from that to tTable:
```
SELECT TOP 100
IDENTITY( INT,0,1 ) AS N
INTO #tally
FROM master.dbo.syscolumns s1 ,
master.dbo.syscolumns s2
DECLARE @startdate DATETIME ,
@enddate DATETIME
SELECT @startdate = MIN(Transactiondate) ,
@enddate = MAX(Transactiondate)
FROM tTable;
WITH tMonth
AS ( SELECT DATEADD(MONTH, DATEDIFF(MONTH, N, @startdate) + N, 0) AS Months
FROM #tally
WHERE N <= DATEDIFF(MONTH, @startdate, @enddate)
)
SELECT tTable.TransactionDate ,
MONTH(tMonth.Months) ,
YEAR(tMonth.Months) AS YearNumber ,
tTable.Amount
FROM tMonth
LEFT OUTER JOIN tTable ON MONTH(tMonth.Months) = tTable.MonthNumber
AND YEAR(tMonth.Months) = YEAR(tTable.Transactiondate)
DROP TABLE #tally
```
|
Query to have fixed month on transaction table for SSRS chart
|
[
"",
"sql",
"sql-server",
"t-sql",
"reporting-services",
"ssrs-2008",
""
] |
I am trying to get latest date for each item.
For e.g.
Say I have a table with vendor, item, shipment Date. I am retrieving latest date for each item shipped by all vendors.
Data:
```
|Vendor|item |shipmentDate|
|HP |laptop | 2014/07/29 |
|HP |laptop | 2014/06/28 |
|Apple |tablet | 2014/05/05 |
|Apple |tablet | 2014/06/28 |
```
Get latest date query:
```
SELECT Vendor, item, MAX(shipmentDate)
FROM table
GROUP BY Vendor, item;
```
Result:
```
|Vendor|item |shipmentDate|
|HP |laptop | 2014/07/29 |
|Apple |tablet | 2014/06/28 |
```
Everything is fine. However, my actual table would yield more than 50,000 records for above query. So I am trying to execute the query in a batch from my JAVA application. Thus, I am trying to retrieve row count of data:
```
SELECT COUNT(*)
FROM (SELECT Vendor, item, MAX(shipmentDate)
FROM table
GROUP BY Vendor, item) T;
```
The row count query takes 00:04:47, about the same amount of time as the other query (00:04:43). I'm looking for a way to optimize the row count query. Any help would be appreciated.
|
Try doing this:
```
select count(distinct vendor, item)
from table;
```
This *might* be faster with an index on `table(vendor, item)`. I am not 100% sure whether MySQL will take advantage of the index for this query.
|
I only want to suggest using DISTINCT to get the count, maybe it yields better results:
```
select count(Vendor) from
(select distinct Vendor, item
from table) t
```
|
Optimize query to return row count
|
[
"",
"mysql",
"sql",
"optimization",
""
] |
you all know the like operator in sql. For example:
```
select *
from customer
where email like '%goog%'
```
So my question is: how can the database return a result so fast?
If I had to program a function like this, I would loop over all customers and over each email, but that is very slow. I have heard about indexes, but how can a database use an index when it doesn't know what the first or last letter is? Or is there another way to do it?
I don't want to program something like this; I only want to know how it works.
|
I have no idea what engine you are using and what's beneath its actual hood but here is some helpful information regarding this problem:
1. Often, SQL engines uses free text search inside the column to be able to extract queries like that extra fast. This is done by creating an [**inverted index**](http://en.wikipedia.org/wiki/Inverted_index), that maps from each word to the "documents" (row,column) that contains them. One widely used library is [Apache Lucene](http://lucene.apache.org/core/). Unfortunately, most IR (Information Retrieval) libraries do NOT support wild card at the beginning of the query (but they do for anywhere else), so your specific example cannot be searched in such index.
2. You can create an index to support a wild card at the beginning of the index, by using a [Suffix Tree](http://en.wikipedia.org/wiki/Suffix_tree). Suffix trees are excellent for searching a substring, like your example. However, they are not very optimized for searching a string with a wild card in the middle of it.
|
As I understand it, this style of query is not very efficient: if a wildcard affects the start of words, an entire scan is needed. However, if the column is indexed, the DBMS only has to bring the index into memory and scan it, not the entire contents of the table; typically this is a relatively fast task.
|
Why is like-operator so fast
|
[
"",
"sql",
"database",
"algorithm",
"sql-like",
""
] |
I have a simple SQL query that looks like this...
```
SELECT
COUNT (* )
AS
count
FROM
wlead
WHERE
f_id = '765'
```
This works great but I would now like to limit the query to just return results from the last 7 days. There is a field called date\_created which is formatted like this...
```
2014-10-12 11:31:26
```
Is there an automatic way of limiting the results or do I need to work out the date 7 days previous first?
|
As you didn't mention your DBMS, the following is ANSI SQL:
```
select count(*) as cnt
from wlead
where f_id = '765'
and date_created >= current_date - interval '7' day;
```
If `date_created` is actually a timestamp, you might want to compare that to a timestamp as well:
```
select count(*) as cnt
from wlead
where f_id = '765'
and date_created >= current_timestamp - interval '7' day;
```
Note that this does not take care of "removing" the time part. `current_timestamp - interval '7' day` will yield a timestamp 7 days a ago at the same time (e.g. 20:52)
To "ignore" the time part you could cast `date_created` to a `date`:
```
select count(*) as cnt
from wlead
where f_id = '765'
and cast(date_created as date) >= current_date - interval '7' day;
```
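A sketch of the last-7-days filter in SQLite (which uses date modifiers instead of `INTERVAL`), with two rows whose ages straddle the cutoff:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE wlead (f_id TEXT, date_created TEXT)")
# One lead 3 days old, one 10 days old (dates computed relative to now).
conn.execute("INSERT INTO wlead VALUES ('765', datetime('now', '-3 days'))")
conn.execute("INSERT INTO wlead VALUES ('765', datetime('now', '-10 days'))")

# SQLite spells the date arithmetic with modifiers instead of INTERVAL;
# only the 3-day-old row falls inside the window.
count = conn.execute("""
    SELECT COUNT(*) FROM wlead
    WHERE f_id = '765' AND date_created >= datetime('now', '-7 days')
""").fetchone()[0]
print(count)
```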
|
In MySQL it looks like this:
```
... WHERE DATEDIFF(CURDATE(), date_created) <= 7
```
|
Limit SQL results to last 7 days
|
[
"",
"sql",
""
] |
I have a rather complex (well, for me) SQL query and I am having trouble with some concepts.
I have the following SQL on a webpage that I am building:
```
SELECT
[dbo].[Enrolment].[_identity], [dbo].[Enrolment].CommencementDate,
[dbo].[Enrolment].CompletionDate, [dbo].[Enrolment].enrolmentDate,
[dbo].[Course].name coursename, [dbo].[Course].Identifier as QUALcode,
[dbo].[Person].givenName, [dbo].[Person].Surname,[dbo].[Employer].name as empname,
[dbo].[Employer].Address1,[dbo].[Employer].Suburb,[dbo].[Employer].Phone,
[dbo].[Employer].PostCode,[dbo].[EnrolmentStatus].name as enrolname,
[dbo].[Student].identifier,[dbo].[Student].person,[dbo].[Contact].person as CONTACTid
FROM
(((([dbo].[Enrolment]
LEFT JOIN
[dbo].[Course] ON [dbo].[Enrolment].course = [dbo].[Course].[_identity])
LEFT JOIN
[dbo].[Employer] ON [dbo].[Enrolment].employer = [dbo].[Employer].[_identity])
LEFT JOIN
[dbo].[EnrolmentStatus] ON [dbo].[Enrolment].status = [dbo].[EnrolmentStatus].[_identity])
LEFT JOIN
[dbo].[Student] ON [dbo].[Enrolment].student = [dbo].[Student].[_identity])
LEFT JOIN
[dbo].[Person] ON [dbo].[Student].person = [dbo].[Person].[_identity]
LEFT JOIN
[dbo].[Contact] ON [dbo].[Employer].[_identity] = [dbo].[Contact].employer
WHERE
(([dbo].[EnrolmentStatus].name) = 'training'
OR
([dbo].[EnrolmentStatus].name) = 'enrolled')
```
This is working fine but what I would like to do is join to the `[dbo].[Person]` table again but this time joining from another table so the code I effectively need to patch into the above statement is
```
LEFT JOIN
[dbo].[Trainer] ON [dbo].[Enrolment].Trainer = [dbo].[Trainer].[_identity])
LEFT JOIN
[dbo].[Person] ON [dbo].[Trainer].person = [dbo].[Person].[_identity]
```
I then need to get, from the person table, the name of the student and the name of the trainer. So I need two records from the person table for every record from the `Enrolment` table; the fields I need are the same for both trainer and student, namely the `given name` and `surname`.
Any help or pointers would be most appreciated.
|
Use aliasing like this..
```
LEFT JOIN [dbo].[Trainer] ON [dbo].[Enrolment].Trainer = [dbo].[Trainer].[_identity])
LEFT JOIN [dbo].[Person] AS p ON [dbo].[Trainer].person = p.[_identity]
```
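The two-alias pattern can be sketched with a stripped-down schema (hypothetical `person` and `enrolment` tables) in SQLite, showing both names coming back on one row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE person (id INTEGER, given TEXT, surname TEXT);
    CREATE TABLE enrolment (student_person INTEGER, trainer_person INTEGER);
    INSERT INTO person VALUES (1, 'Ann', 'Lee'), (2, 'Bob', 'Ray');
    INSERT INTO enrolment VALUES (1, 2);
""")

# Joining person twice under different aliases yields both names per row.
row = conn.execute("""
    SELECT sp.given || ' ' || sp.surname AS student,
           tp.given || ' ' || tp.surname AS trainer
    FROM enrolment e
    LEFT JOIN person sp ON e.student_person = sp.id
    LEFT JOIN person tp ON e.trainer_person = tp.id
""").fetchone()
print(row)
```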
|
You just have to replace your FROM clause with this. First add the Trainer table join, then the Person table, then use the AND keyword to map both columns against the single table:
```
FROM (((([dbo].[Enrolment]
LEFT JOIN [dbo].[Course] ON [dbo].[Enrolment].course = [dbo].[Course].[_identity])
LEFT JOIN [dbo].[Employer] ON [dbo].[Enrolment].employer = [dbo].[Employer].[_identity])
LEFT JOIN [dbo].[EnrolmentStatus] ON [dbo].[Enrolment].status = [dbo].[EnrolmentStatus].[_identity])
LEFT JOIN [dbo].[Student] ON [dbo].[Enrolment].student = [dbo].[Student].[_identity])
LEFT JOIN [dbo].[Trainer] ON [dbo].[Enrolment].Trainer = [dbo].[Trainer].[_identity])
LEFT JOIN [dbo].[Person] ON [dbo].[Student].person = [dbo].[Person].[_identity]
AND [dbo].[Trainer].person = [dbo].[Person].[_identity]
LEFT JOIN [dbo].[Contact] ON [dbo].[Employer].[_identity] = [dbo].[Contact].employer
```
|
Joining multiple tables to one table in sql
|
[
"",
"sql",
"sql-server",
"join",
""
] |
As I understand it, heap tables are tables without a clustered index and with no physical order.
I have a heap table "scan" with 120k rows and I am using this select:
```
SELECT id FROM scan
```
If I create a non-clustered index for the column "id", I get **223 physical reads**.
If I remove the non-clustered index and alter the table to make "id" my primary key (and so my clustered index), I get **515 physical reads**.
If the clustered index table is something like this picture:

Why do clustered index scans work like a table scan (or worse, in the case of retrieving all rows)? Why is it not using the "clustered index table" that has fewer blocks and already contains the ID that I need?
|
SQL Server indices are b-trees. A non-clustered index just contains the indexed columns, with the leaf nodes of the b-tree being pointers to the approprate data page. A clustered index is different: its leaf nodes are the data page itself and the clustered index's b-tree becomes the backing store for the table itself; the heap ceases to exist for the table.
Your non-clustered index contains a single, presumably integer column. It's a small, compact index to start with. Your query `select id from scan` has a *covering index*: the query can be satisfied just by examining the index, which is what is happening. If, however, your query included columns not in the index, assuming the optimizer elected to use the non-clustered index, an additional lookup would be required to fetch the data pages required, either from the clustering index or from the heap.
To understand what's going on, you need to examine the execution plan selected by the optimizer:
* See [Displaying Graphical Execution Plans](http://technet.microsoft.com/en-us/library/ms178071(v=sql.105).aspx)
* See Red Gate's [SQL Server Execution Plans](http://download.red-gate.com/ebooks/SQL/eBOOK_SQLServerExecutionPlans_2Ed_G_Fritchey.pdf), by Grant Fritchey
|
A clustered index generally is about as big as the same data in a heap would be (assuming the same page fullness). It should use just a little more reads than a heap would use because of additional B-tree levels.
A CI cannot be smaller than a heap would be. I don't see why you would think that. Most of the size of a partition (be it a heap or a tree) is in the data.
Note, that less physical reads does not necessarily translate to a query being faster. Random IO can be 100x slower than sequential IO.
|
Why NonClustered index scan faster than Clustered Index scan?
|
[
"",
"sql",
"sql-server",
"indexing",
"sql-execution-plan",
"full-table-scan",
""
] |
I have to identify missing records from the example below.
```
Category BatchNo TransactionNo
+++++++++++++++++++++++++++++++++
CAT1 1 1
CAT1 1 2
CAT1 2 3
CAT1 2 4
CAT1 2 5
CAT1 3 6
CAT1 3 7
CAT1 3 8
CAT1 5 12
CAT1 5 13
CAT1 5 14
CAT1 5 15
CAT1 7 18
CAT2 1 1
CAT2 1 2
CAT2 3 6
CAT2 3 7
CAT2 3 8
CAT2 3 9
CAT2 4 10
CAT2 4 11
CAT2 4 12
CAT2 6 14
```
I need a script that will identify missing records as below
```
Category BatchNo
+++++++++++++++++++
CAT1 4
CAT1 6
CAT2 2
CAT2 5
```
I do not need to know that `CAT1 8` and `CAT2 7` are not there as they potentially have not been inserted yet.
|
You can create a temporary result set with all possible batch numbers up to the maximum batch number for each category, then select the batch numbers which are not present.
```
create table TEMP(
Category varchar(10),
BatchNo int,
TransactionNo int
)
insert into TEMP values
('CAT1', 1, 1),
('CAT1', 1, 2),
('CAT1', 2, 3),
('CAT1', 2, 4),
('CAT1', 2, 5),
('CAT1', 3, 6),
('CAT1', 3, 7),
('CAT1', 3, 8),
('CAT1', 5, 9),
('CAT1', 7, 10),
('CAT2', 1, 1),
('CAT2', 1, 2),
('CAT2', 3, 3),
('CAT2', 4, 4),
('CAT2', 4, 5),
('CAT2', 4, 6),
('CAT2', 6, 7);
WITH BatchNo (BatchID,Category,MaxBatch) AS (
SELECT 1, Category, MAX(BatchNo) AS MaxBatch FROM TEMP GROUP BY Category
UNION ALL
SELECT BatchID + 1, Category, MaxBatch FROM BatchNo
WHERE BatchID < MaxBatch
)
SELECT
BatchNo.Category,
BatchNo.BatchID
FROM
BatchNo
WHERE
BatchID NOT IN (SELECT BatchNo FROM TEMP WHERE Category = BatchNo.Category)
ORDER BY
BatchNo.Category,
BatchNo.BatchID
DROP TABLE TEMP
```
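The same gap-finding idea also works with a recursive CTE in SQLite; a sketch using the question's batch numbers (categories CAT1 and CAT2, with the transaction column dropped for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE batches (category TEXT, batchno INTEGER)")
conn.executemany("INSERT INTO batches VALUES (?, ?)",
                 [("CAT1", 1), ("CAT1", 2), ("CAT1", 3), ("CAT1", 5), ("CAT1", 7),
                  ("CAT2", 1), ("CAT2", 3), ("CAT2", 4), ("CAT2", 6)])

# Count up to each category's maximum, then keep the numbers never seen.
rows = conn.execute("""
    WITH RECURSIVE seq(category, n, mx) AS (
        SELECT category, 1, MAX(batchno) FROM batches GROUP BY category
        UNION ALL
        SELECT category, n + 1, mx FROM seq WHERE n < mx
    )
    SELECT category, n FROM seq
    WHERE NOT EXISTS (SELECT 1 FROM batches b
                      WHERE b.category = seq.category AND b.batchno = seq.n)
    ORDER BY category, n
""").fetchall()
print(rows)
```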
|
Without using a cursor or fetch, you can use this one (#Category is my equivalent of your table name; performance is good):
```
DECLARE @t TABLE (RN INT IDENTITY,Category VARCHAR(255), BatchNo INT)
INSERT INTO @t
SELECT DISTINCT Category, BatchNo
FROM #Category
SELECT a.Category,a.BatchNo+1 AS BatchNo
FROM @t a
CROSS APPLY (SELECT * FROM @t b
WHERE a.RN+1 = b.RN AND
a.Category = b.Category AND
a.BatchNo+1 != b.BatchNo) x
```
|
Find missing sequences by category
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I use SQL Server 2012.
I wrote two queries, but what is the difference between `NOLOCK` and `READ UNCOMMITTED`?
```
SELECT lastname, firstname
FROM HR.Employees with (READUNCOMMITTED)
SELECT lastname, firstname
FROM HR.Employees with (NoLock)
```
|
**NOLOCK :** Is equivalent to `READ UNCOMMITTED` (source : **[MSDN](http://msdn.microsoft.com/en-us/library/ms187373.aspx)**)
`NOLOCK` or `READ UNCOMMITTED` Specifies that dirty reads are allowed. No shared locks are issued to prevent other transactions from modifying data read by the current transaction, and exclusive locks set by other transactions do not block the current transaction from reading the locked data. Allowing dirty reads can cause higher concurrency, but at the cost of reading data modifications that then are rolled back by other transactions
`READ UNCOMMITTED` and `NOLOCK` hints apply only to data locks. All queries, including those `with READ UNCOMMITTED and NOLOCK` hints, acquire Sch-S (schema stability) locks during compilation and execution. Because of this, queries are blocked when a concurrent transaction holds a Sch-M (schema modification) lock on the table
|
No difference in terms of their function, as others have mentioned.
The single difference is that you can apply `WITH(NOLOCK)` selectively, on some tables but not others. `READ UNCOMMITTED` applies `NOLOCK` to all tables in a session.
If you do this:
```
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT *
FROM Table1 T1
INNER JOIN Table2 T2 ON T1.ID = T2.id
```
It is functionally equivalent to:
```
SELECT *
FROM Table1 T1 WITH(NOLOCK)
INNER JOIN Table2 T2 WITH(NOLOCK) ON T1.ID = T2.ID
```
But you can also apply `WITH(NOLOCK)` selectively:
```
SELECT *
FROM Table1 T1 WITH(TABLOCK)
INNER JOIN Table2 T2 WITH(NOLOCK) ON T1.ID = T2.ID
```
|
What is (are) difference between NOLOCK and UNCOMMITTED
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
"locking",
"isolation-level",
""
] |
I have a table like the one below:
```
ID | Place
-------------
0 | BANGALORE
1 | BEGUR
```
Now in this table, I need a query which returns the rows when I search for 'BR'. The letters B and R appear in that order in both rows; is there a way to search a string like this?
|
Use the `LIKE` operator. This will return rows which have the letter 'B' before 'R' in any position:
```
select * from tablename where Place like '%B%R%'
```
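A quick check of the pattern in SQLite (with one extra row that contains no B at all):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE places (id INTEGER, place TEXT)")
conn.executemany("INSERT INTO places VALUES (?, ?)",
                 [(0, "BANGALORE"), (1, "BEGUR"), (2, "DELHI")])

# A wildcard between the letters matches any characters separating B and R.
rows = [r[0] for r in conn.execute(
    "SELECT place FROM places WHERE place LIKE '%B%R%' ORDER BY place")]
print(rows)
```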
|
Use SQL Wildcards
```
SELECT * FROM tablename WHERE place LIKE '%B%R%'
```
**Reference**
<http://www.w3schools.com/sql/sql_wildcards.asp>
|
How to search for letter in string in SQL
|
[
"",
"android",
"sql",
"sqlite",
""
] |
Given the following table:
```
id column1 column2
-------------------------
1 3 8
2 4 7
3 4 10
4 4 14
5 14 17
6 10 27
7 14 21
8 16 14
9 21 4
10 30 3
```
What is the nicest way to select rows where the number 4 or 14 appears in `column1` or `column2`, BUT excluding rows where 4 and 14 are both present? Be aware that the order can be reversed.
expected output
```
id column1 column2
-------------------------
2 4 7
3 4 10
5 14 17
7 14 21
8 16 14
9 21 4
```
|
```
SELECT * FROM table WHERE (column1=4 XOR column2=14) XOR (column1=14 XOR column2=4)
```
|
Try this:
```
SELECT *
FROM mytable
WHERE ((column1 = 4 AND column2 != 14)
OR (column2 = 14 AND column1 != 4)
OR (column1 = 14 AND column2 != 4)
OR (column2 = 4 AND column1 != 14))
```
|
SQL select when one condition or another are met but not both
|
[
"",
"sql",
"select",
""
] |
I am stuck on a problem. I am passing an accountID, and on the basis of that the SP picks the amount details of a person, e.g.
```
AccountID AccountTitle TransactionDate Amount
1 John01 2014/11/28 20
```
Now if there is a 2nd or later record for the same accountID, then it should be added to the previous one; e.g. if the 2nd record for accountID 1 is 40, then the amount should display 60 (it should be added to the earlier 20 and the total shown in the 2nd record):
```
AccountID AccountTitle TransactionDate Amount
1             John01            2014/12/30          60 (in reality it was 40, but it should show the result after being added to the 1st record)
```
and same goes for further records
```
Select Payments.Accounts.AccountID, Payments.Accounts.AccountTitle,
Payments.Transactions.DateTime as TranasactionDateTime,
Payments.Transactions.Amount from Payments.Accounts
Inner Join Payments.Accounts
ON Payments.Accounts.AccountID = Payments.Transactions.Account_ID
Inner Join Payments.Transactions
where Payments.Transactions.Account_ID = 1
```
This has wasted a lot of my time and I can't tackle it any more, so please help me.
|
SQL Server 2012+ supports cumulative sums (which seems to be what you want):
```
Select a.AccountID, a.AccountTitle, t.DateTime as TranasactionDateTime,
t.Amount,
sum(t.Amount) over (partition by t.Account_Id order by t.DateTime) as RunningAmount
from Payments.Accounts a Inner Join
Payments.Transactions t
on a.AccountID = t.Account_ID
where t.Account_ID = 1;
```
In earlier versions of SQL Server you can most easily do this with a correlated subquery or using `cross apply`.
I also fixed your query. I don't know why you were joining to the Accounts table twice. Also, table aliases make queries much easier to write and to read.
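The cumulative sum can be sketched with a window function in SQLite (3.25+) through Python's sqlite3, using made-up rows matching the question's numbers:

```python
import sqlite3  # window functions need SQLite >= 3.25

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tx (account_id INTEGER, dt TEXT, amount INTEGER)")
conn.executemany("INSERT INTO tx VALUES (?, ?, ?)",
                 [(1, "2014-11-28", 20), (1, "2014-12-30", 40), (1, "2015-01-15", 5)])

# SUM ... OVER with ORDER BY turns per-row amounts into a running balance:
# 20, then 60, then 65.
rows = conn.execute("""
    SELECT dt, amount,
           SUM(amount) OVER (PARTITION BY account_id ORDER BY dt) AS running
    FROM tx WHERE account_id = 1
    ORDER BY dt
""").fetchall()
print(rows)
```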
|
Here is the answer if grouping by all columns is acceptable to you.
```
Select AccountID, AccountTitle, TransactionDate, SUM(Payments.Transactions.Amount)
from Payments.Accounts
group by AccountID, AccountTitle, TransactionDate
```
If you want to group only by AccountId, The query is this:
```
Select AccountID, SUM(Payments.Transactions.Amount)
from Payments.Accounts
group by AccountID
```
In the second query, the AccountTitle and TransactionDate are missing because they are not used in the group by clause. To include them in the results, you must think of a rule to decide which row of the multiple rows with the same AccountID is used to get the values AccountTitle and TransactionDate.
|
How to add 1 records data to previous?
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I am wondering what this code would mean in the context of a table called webpages, with the following fields: (pagetitle, content, author, URL, links, #linkto). Replies greatly appreciated. The code is:
```
WHERE place LIKE %New York%
ORDER BY #linkto
```
|
The wildcard characters to use in a LIKE query against an Access database depend on two factors. If
1. the query is run from within Microsoft Access itself, and
2. the database has **not** been configured to use "SQL Server Compatible Syntax (ANSI 92)"
then the wildcard characters are
* `*` (zero or more characters), and
* `?` (one character).
In all other cases the wildcard characters are the more common `%` and `_`, respectively, as it appears to be in your particular case.
|
It is a mistake. In most databases (and the standard), `%` is a wildcard character (for `LIKE`) that matches zero or more characters. In MS Access, the equivalent character is `*`, so the code would be:
```
WHERE place LIKE "*New York*"
```
In other words, in Access, `%` is the percent character.
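As a quick sanity check of the `%` wildcard semantics (zero or more characters on either side of the phrase), here is a small sketch using Python's sqlite3 with made-up data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE webpages (place TEXT)")
con.executemany("INSERT INTO webpages VALUES (?)",
                [("New York City",), ("Albany",), ("West New York, NJ",)])

# '%New York%' matches any value containing the phrase anywhere
hits = [r[0] for r in con.execute(
    "SELECT place FROM webpages WHERE place LIKE '%New York%' ORDER BY place")]
print(hits)  # ['New York City', 'West New York, NJ']
```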
|
Function of % symbol in SQL for Microsoft Access
|
[
"",
"sql",
"ms-access",
""
] |
I have xml as shown below
```
<StudentData>
<Student ID ="1" RollNo ="15" />
</StudentData>
```
One of my stored procedure accepts this xml as input as shown below
```
CREATE PROCEDURE [dbo].[FetchStudentData]
@xml XML
AS
BEGIN
SET NOCOUNT ON;
SELECT
st.value('@ID','INT') AS Id,
st.value('@RollNo','INT') AS RollNo
    FROM @xml.nodes('/StudentData/Student') AS TEMPTABLE(st)
END
```
Here I want to save the Id and RollNo in some variable so that I can use them in further queries with in stored procedure. I don't know exact syntax to fetch the Id and RollNo from node and store in variable.
Can anybody suggest me the way to do it?
|
This query will be helpful.
```
CREATE PROCEDURE [dbo].[FetchStudentData]
@xml XML
AS
BEGIN
SET NOCOUNT ON;
DECLARE @sID AS INT, @sRollNo AS INT
SELECT @sID = xmlData.Student.value('@ID','INT'),
@sRollNo = xmlData.Student.value('@RollNo','INT')
FROM @xml.nodes('//StudentData/Student') xmlData(Student)
SELECT @sID AS ID, @sRollNo AS RollNo
END
```
**Output:**

**Note:** For multiple Student tags
```
CREATE PROCEDURE [dbo].[Test1]
@xml XML
AS
BEGIN
SET NOCOUNT ON;
DECLARE @StudentTbl as TABLE
(
ID int,
RollNo int
)
INSERT INTO @StudentTbl
SELECT xmlData.Student.value('@ID','INT'),
xmlData.Student.value('@RollNo','INT')
FROM @xml.nodes('//StudentData/Student') xmlData(Student)
SELECT * FROM @StudentTbl
END
```
|
You can assign them to variables using the value function for the xml type and navigate to the attribute directly in XPath...
```
DECLARE @xml AS XML
SET @xml = '<StudentData>
<Student ID ="1" RollNo ="15" />
</StudentData>'
DECLARE @rollNo int , @ID int
SET @rollNo = @xml.value('/StudentData[1]/Student[1]/@RollNo','int')
SET @ID = @xml.value('/StudentData[1]/Student[1]/@ID','int')
SELECT @rollNo ,@ID
```
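As a cross-check of what those XPath expressions extract, here is the same lookup done with Python's standard-library XML parser (not T-SQL, just an analogy for the `/StudentData[1]/Student[1]/@ID` path):

```python
import xml.etree.ElementTree as ET

xml = '<StudentData><Student ID="1" RollNo="15" /></StudentData>'

# Equivalent of /StudentData[1]/Student[1]/@ID and .../@RollNo
student = ET.fromstring(xml).find("Student")
sid = int(student.get("ID"))
roll = int(student.get("RollNo"))
print(sid, roll)  # 1 15
```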
|
XML Parsing in Stored procedure
|
[
"",
"sql",
"sql-server",
""
] |
I have assigned the data type of one of the columns of a table I have created as `int`. My problem is that it is not showing decimal places when I run a query against it.
How do I correct the data type of this column so that it accepts decimal places?
Table is `dbo.Budget` and the column concerned is called `ROE`.
|
Easy - just run this SQL statement
```
ALTER TABLE dbo.Budget
ALTER COLUMN ROE DECIMAL(20,2) -- or whatever precision and scale you need.....
```
See the [**freely available** MSDN documentation](http://msdn.microsoft.com/en-us/library/ms190476.aspx) on what exactly the precision, scale and length in decimal numbers are and what ranges of values are possible
|
You can execute this simple SQL statement:
```
Alter table yourtable
Alter Column yourtable_column Decimal(10,2)
```
You can set whatever decimal precision and scale you need.
|
How to change data type of a column in an SQL table from integer to decimal
|
[
"",
"sql",
"sql-server",
"integer",
"decimal",
"type-conversion",
""
] |
I have a table called Sub\_Router with the sample data below and only two columns,
Associated\_Router\_L1 and Associated\_Router\_L2, which together form a composite primary key.
I want to select all L2 router IDs for a given L1 router ID.
For example, given 2001, the result should be 2005, 2006, 2002, 2007, 2003, 2008.
My select SQL is:
Query - 1 (with one level)
```
select s1.Associated_Router_L2
from Sub_Router s1
where s1.Associated_Router_L1 in (select s2.Associated_Router_L2
from Sub_Router s2
where s2.Associated_Router_L1 = 2001)
```
Query - 2(With two level)
```
select s1.Associated_Router_L2
from Sub_Router s1
where s1.Associated_Router_L1 in (select s2.Associated_Router_L2
from Sub_Router s2
where s2.Associated_Router_L1 in (select s3.Associated_Router_L2
from Sub_Router s3
where s3.Associated_Router_L1 = 2001))
```
Query - 2 using Inner Join:
```
select s1.Associated_Router_L2
from Sub_Router s1
INNER JOIN Sub_Router s2 ON s1.Associated_Router_L1 = s2.Associated_Router_L2
INNER JOIN Sub_Router s3 ON s2.Associated_Router_L1 = s3.Associated_Router_L2
where s1.Associated_Router_L1 = 2001 OR s2.Associated_Router_L1 = 2001 OR s3.Associated_Router_L1 = 2001
```
The OR operation again creates a performance problem.
The result comes out fine; is there any better approach that avoids the IN and OR operators?
Table name: `Sub_Router`
Column name: `Associated_Router_L1,Associated_Router_L2` both composite primary keys
Data as attached
```
Associated_Router_L1 Associated_Router_L2
2000 2001
2000 2002
2000 2003
2000 2004
2001 2005
2001 2006
2001 2002
2002 2007
2002 2003
2002 2008
```
|
The following sql might help:
```
SELECT DISTINCT concat(s2.Associated_Router_L1, '-->', s2.Associated_Router_L2) branch
FROM Sub_Router s1, Sub_Router s2
WHERE
s1.Associated_Router_L1 = 2001
AND (s1.Associated_Router_L2 = s2.Associated_Router_L1 OR 2001 = s2.Associated_Router_L1)
```
If you want to go one level deeper, add another table, expand the WHERE clause, and modify the SELECT fields:
```
SELECT DISTINCT concat(s3.Associated_Router_L1, '-->', s3.Associated_Router_L2) branch
FROM Sub_Router s1, Sub_Router s2, Sub_Router s3
WHERE
s1.Associated_Router_L1 = 2001
AND (s1.Associated_Router_L2 = s2.Associated_Router_L1 OR 2001 = s2.Associated_Router_L1)
AND (s2.Associated_Router_L2 = s3.Associated_Router_L1 OR s1.Associated_Router_L2 = s3.Associated_Router_L1 OR 2001 = s3.Associated_Router_L1)
```
|
Standard SQL is not good at dealing with hierarchical / tree models.
Oracle for example has added `CONNECT BY ...` to support [hierarchical queries](https://docs.oracle.com/cd/B19306_01/server.102/b14200/queries003.htm) as a non-standard extension
With standard SQL a typical approach to this is the '[Nested Set Model](http://en.wikipedia.org/wiki/Nested_set_model)',
or you could use the [OQGraph storage engine](http://openquery.com.au/graph/doc) for MySQL / [MariaDB](https://mariadb.com/kb/en/mariadb/documentation/storage-engines/oqgraph-storage-engine/)
|
Best way to select records without the IN operator in MySQL
|
[
"",
"mysql",
"sql",
""
] |
```
doc.id|tag.id|taggings.id| name
3 | 3 | 3 | heroku
3 | 4 | 4 | javascript
3 | 5 | 5 | html
4 | 4 | 6 | javascript
4 | 3 | 7 | heroku
4 | 5 | 8 | html
4 | 6 | 9 | swagger
```
I have this table
I want to select elements that have at least N of the same tags,
so lets say documents that are tagged with html and heroku.
I would want to return doc 3 and doc 4 [in this table it would be the only two things in the table. lol but still!]
|
Here's one way with `count` and `case`:
```
select id
from documents
group by id
having count(case when name = 'html' then 1 end) > 0
and count(case when name = 'heroku' then 1 end) > 0
```
* [SQL Fiddle Demo](http://sqlfiddle.com/#!15/d781a/1)
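The conditional-count idea can be verified outside the fiddle too; here is a sketch with Python's sqlite3, using made-up tagging data flattened into one table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE taggings (doc_id INT, name TEXT)")
con.executemany("INSERT INTO taggings VALUES (?, ?)",
    [(3, "heroku"), (3, "javascript"), (3, "html"),
     (4, "javascript"), (4, "heroku"), (4, "html"), (4, "swagger"),
     (5, "heroku")])

# Documents tagged with BOTH 'html' and 'heroku':
# each COUNT(CASE ...) only counts rows matching that tag
ids = [r[0] for r in con.execute("""
    SELECT doc_id FROM taggings
    GROUP BY doc_id
    HAVING COUNT(CASE WHEN name = 'html' THEN 1 END) > 0
       AND COUNT(CASE WHEN name = 'heroku' THEN 1 END) > 0
    ORDER BY doc_id
""")]
print(ids)  # [3, 4]
```

Document 5 is excluded because it has `heroku` but not `html`.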
|
Generally the scope would be in format:
```
class Doc
has_and_belongs_to_many :tags
scope :tagged_with, ->(*list) { joins(:tags).merge(Tag.named(*list)) }
end
class Tag
has_and_belongs_to_many :docs
scope :named, ->(*list) { where(name: list) }
end
```
I'm not sure that `where(name: list)` will work properly, so just replace the scope with arel.
```
scope :named, ->(*list) { where(arel_table[:name].in(list)) }
```
Usage:
```
Doc.tagged_with(:html, :heroku)
```
|
How to select documents in SQL with at least N similar foreign keys
|
[
"",
"sql",
"ruby-on-rails",
"postgresql",
"relational-division",
""
] |
This code works as expected, but it's long and creepy.
```
select p.name, p.played, w.won, l.lost from
(select users.name, count(games.name) as played
from users
inner join games on games.player_1_id = users.id
where games.winner_id > 0
group by users.name
union
select users.name, count(games.name) as played
from users
inner join games on games.player_2_id = users.id
where games.winner_id > 0
group by users.name) as p
inner join
(select users.name, count(games.name) as won
from users
inner join games on games.player_1_id = users.id
where games.winner_id = users.id
group by users.name
union
select users.name, count(games.name) as won
from users
inner join games on games.player_2_id = users.id
where games.winner_id = users.id
group by users.name) as w on p.name = w.name
inner join
(select users.name, count(games.name) as lost
from users
inner join games on games.player_1_id = users.id
where games.winner_id != users.id
group by users.name
union
select users.name, count(games.name) as lost
from users
inner join games on games.player_2_id = users.id
where games.winner_id != users.id
group by users.name) as l on l.name = p.name
```
As you can see, it consists of 3 repetitive parts for retrieving:
* player name and the amount of games they played
* player name and the amount of games they won
* player name and the amount of games they lost
And each of those also consists of 2 parts:
* player name and the amount of games in which they participated as player\_1
* player name and the amount of games in which they participated as player\_2
How could this be simplified?
The result looks like so:
```
name | played | won | lost
---------------------------+--------+-----+------
player_a | 5 | 2 | 3
player_b | 3 | 2 | 1
player_c | 2 | 1 | 1
```
|
### Postgres 9.4 or newer
Use the standard-SQL aggregate **`FILTER`** clause:
```
SELECT u.name
, count(*) FILTER (WHERE g.winner_id > 0) AS played
, count(*) FILTER (WHERE g.winner_id = u.id) AS won
, count(*) FILTER (WHERE g.winner_id <> u.id) AS lost
FROM games g
JOIN users u ON u.id IN (g.player_1_id, g.player_2_id)
GROUP BY u.name;
```
Only rows that pass the boolean expression in the `FILTER` clause contribute to the aggregate.
* [The manual](https://www.postgresql.org/docs/current/sql-expressions.html#SYNTAX-AGGREGATES)
* [Postgres Wiki](https://wiki.postgresql.org/wiki/What%27s_new_in_PostgreSQL_9.4#Aggregate_FILTER_clause)
* [Depesz blog post](https://www.depesz.com/2013/07/23/waiting-for-9-4-implement-the-filter-clause-for-aggregate-function-calls/)
### Any Postgres version
```
SELECT u.name
, count(g.winner_id > 0 OR NULL) AS played
, count(g.winner_id = u.id OR NULL) AS won
, count(g.winner_id <> u.id OR NULL) AS lost
FROM games g
JOIN users u ON u.id IN (g.player_1_id, g.player_2_id)
GROUP BY u.name;
```
Older versions need a workaround. This is shorter and faster than nested sub-selects or `CASE` expressions. See:
* [For absolute performance, is SUM faster or COUNT?](https://dba.stackexchange.com/a/27572/3684)
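SQLite (3.30+) also supports the standard `FILTER` clause, so the aggregate-filter pattern can be tried with Python's sqlite3; the games data below is made up:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE games (player_1_id INT, player_2_id INT, winner_id INT)")
con.executemany("INSERT INTO games VALUES (?, ?, ?)",
                [(1, 2, 1), (1, 2, 2), (1, 3, 1)])

# Win/loss counts for player 1: each FILTER restricts one aggregate
row = con.execute("""
    SELECT COUNT(*) FILTER (WHERE winner_id = 1) AS won,
           COUNT(*) FILTER (WHERE winner_id <> 1) AS lost
    FROM games
    WHERE 1 IN (player_1_id, player_2_id)
""").fetchone()
print(row)  # (2, 1)
```

Player 1 appears in all three games, winning two and losing one, with both counts computed in a single pass.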
|
This is a case where correlated subqueries may simplify the logic:
```
select u.*, (played - won) as lost
from (select u.*,
(select count(*)
from games g
where g.player_1_id = u.id or g.player_2_id = u.id
) as played,
(select count(*)
from games g
where g.winner_id = u.id
) as won
from users u
) u;
```
This assumes that there are no ties.
|
Aggregate columns with additional (distinct) filters
|
[
"",
"sql",
"postgresql",
"aggregate-functions",
"aggregate-filter",
""
] |
Let's say I have the table sales
```
+------------+---------+------------------------------+-------------+
| SaleAmount | OrderID | CompanyName | ShippedDate |
+------------+---------+------------------------------+-------------+
| 3302 | 10393 | Save-a-lot Markets | 1997-01-03 |
| 2736 | 10398 | Save-a-lot Markets | 1997-01-09 |
| 3063 | 10400 | Eastern Connection | 1997-01-16 |
```
If I query with an aggregate function
```
select max(saleamount),shippeddate from sales where shippeddate = '1997-01-16'
group by shippeddate
```
It will display the max sale amount for the particular date. Here
I want to show the top n max values for the particular date.
I can achieve this using a subquery, or by storing in a table, like below:
```
select * from
(select * from sales h order by shippeddate, saleamount) where shippeddate = '1997-01-16' and rownum <= 2
```
Is there any other way to display the top n (max,min) values or any way to write the multirow functions.
**EDIT:**
Here I don't want to use the subquery, where the buffer size is too low and it takes a long time to execute. My database is Oracle 11g.
|
You can use analytic functions:
```
select * from
(
select s.*,
row_number() over(partition by shippeddate order by saleamount desc) max_rw,
row_number() over(partition by shippeddate order by saleamount asc) min_rw
from sales s
)
-- where max_rw <= N -- max N
-- where min_rw <= N -- min N
```
`partition by` defines a group (in this case all the rows with the same `shippeddate`)
`order by` sorts the rows inside the group
`row_number()` assigns row number for each row in the group according to the ORDER BY
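The same `ROW_NUMBER()` pattern works in any engine with window functions; here is a small sketch with Python's sqlite3 and made-up rows mirroring the question's table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (SaleAmount INT, ShippedDate TEXT)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
    [(7063, "1997-01-16"), (3063, "1997-01-16"), (4063, "1997-01-16"),
     (3302, "1997-01-03")])

# Top-2 sale amounts per ShippedDate via ROW_NUMBER()
top2 = con.execute("""
    SELECT ShippedDate, SaleAmount FROM (
        SELECT ShippedDate, SaleAmount,
               ROW_NUMBER() OVER (PARTITION BY ShippedDate
                                  ORDER BY SaleAmount DESC) AS rw
        FROM sales
    ) WHERE rw <= 2 AND ShippedDate = '1997-01-16'
    ORDER BY SaleAmount DESC
""").fetchall()
print(top2)  # [('1997-01-16', 7063), ('1997-01-16', 4063)]
```

Swapping `DESC` for `ASC` in the window's ORDER BY gives the bottom-N instead.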
|
[SQL Fiddle](http://sqlfiddle.com/#!4/d40e5f/5)
**Oracle 11g R2 Schema Setup**:
```
CREATE TABLE sales
("SaleAmount" int,
"OrderID" int primary key,
"CompanyName" varchar2(18),
"ShippedDate" varchar2(10))
;
CREATE INDEX sales_amount ON sales ("SaleAmount");
CREATE INDEX sales_date ON sales ("ShippedDate");
INSERT ALL
INTO sales ("SaleAmount", "OrderID", "CompanyName", "ShippedDate")
VALUES (3302, 10393, 'Save-a-lot Markets', '1997-01-03')
INTO sales ("SaleAmount", "OrderID", "CompanyName", "ShippedDate")
VALUES (2736, 10398, 'Save-a-lot Markets', '1997-01-09')
INTO sales ("SaleAmount", "OrderID", "CompanyName", "ShippedDate")
VALUES (7063, 10401, 'Save-a-lot Markets', '1997-01-16')
INTO sales ("SaleAmount", "OrderID", "CompanyName", "ShippedDate")
VALUES (3063, 10400, 'Eastern Connection', '1997-01-16')
INTO sales ("SaleAmount", "OrderID", "CompanyName", "ShippedDate")
VALUES (4063, 10402, 'Save-a-lot Markets', '1997-01-16')
INTO sales ("SaleAmount", "OrderID", "CompanyName", "ShippedDate")
VALUES (5063, 10404, 'Eastern Connection', '1997-01-16')
INTO sales ("SaleAmount", "OrderID", "CompanyName", "ShippedDate")
VALUES (7763, 10406, 'Save-a-lot Markets', '1997-01-16')
INTO sales ("SaleAmount", "OrderID", "CompanyName", "ShippedDate")
VALUES (4763, 10408, 'Save-a-lot Markets', '1997-01-16')
SELECT * FROM dual
;
```
**Query 1**:
```
SELECT /*+ INDEX(sales sales_amount) */ "SaleAmount"
FROM sales
WHERE "ShippedDate" = '1997-01-16' AND "SaleAmount" <> 0 AND rownum <= 2
UNION ALL
(
SELECT /*+ INDEX(sales sales_amount) */ "SaleAmount"
FROM sales
WHERE "ShippedDate" = '1997-01-16' AND "SaleAmount" <> 0 AND
rownum <= (SELECT COUNT(*) FROM sales WHERE "ShippedDate" = '1997-01-16')
MINUS
SELECT /*+ INDEX(sales sales_amount) */ "SaleAmount"
FROM sales
WHERE "ShippedDate" = '1997-01-16' AND "SaleAmount" <> 0 AND
rownum <= (SELECT COUNT(*)-2 FROM sales WHERE "ShippedDate" = '1997-01-16')
)
```
**[Results](http://sqlfiddle.com/#!4/d40e5f/5/0)**:
```
| SALEAMOUNT |
|------------|
| 3063 |
| 4063 |
| 7063 |
| 7763 |
-- AND rownum <= N -- min N
-- SELECT COUNT(*)-N -- max N
```
|
Getting two or more values from a max/min aggregate function
|
[
"",
"sql",
"oracle",
"aggregate-functions",
""
] |
I have a query that should return 5 rows, but I get 10...
```
SELECT *
FROM article ar, account ac
WHERE ar.approved = '1'
AND ar.author = ac.id
ORDER BY ar.id DESC
LIMIT 5 , 10
```
Showing rows 0 - 9 (10 total, Query took 0.0028 sec)
What am I doing wrong? It was working fine before...
|
In mySQL [`LIMIT X, Y`](http://dev.mysql.com/doc/refman/5.1/en/select.html) means
* `X` is starting element (offset)
* `Y` is number of elements that you want to be returned
that's why you're getting 10 rows back.
If you only want 5 rows back and you need 5 first rows to be skipped, you should use `LIMIT 5, 5`.
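A quick demonstration of the offset/count semantics with Python's sqlite3, which accepts the same `LIMIT offset, count` form (the article table is made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE article (id INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO article (id) VALUES (?)", [(i,) for i in range(1, 16)])

# LIMIT 5, 10 -> skip 5 rows, return up to 10; LIMIT 5, 5 -> skip 5, return up to 5
ten  = con.execute("SELECT id FROM article ORDER BY id DESC LIMIT 5, 10").fetchall()
five = con.execute("SELECT id FROM article ORDER BY id DESC LIMIT 5, 5").fetchall()
print(len(ten), len(five))  # 10 5
```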
|
As you need only 5 rows and need to skip the first 5 rows, use:
```
LIMIT 5, 5
```
|
MySQL Query Bug?
|
[
"",
"mysql",
"sql",
""
] |
I have a simple table like this
```
....................................
| hotelNo | roomType | totalBooking |
....................................
| 1 | single | 2 |
| 1 | family | 4 |
| 2 | single | 3 |
| 2 | family | 2 |
| 3 | single | 1 |
.....................................
```
Now I want to get the most commonly booked roomType for each hotel, i.e. the following result
```
......................
| hotelNo | roomType |
......................
| 1 | family |
| 2 | single |
| 3 | single |
......................
```
P.S I use sub-query to get the first table
|
If you want the maximum, you can use window functions:
```
select hotelNo, roomType
from (select t.*, row_number() over (partition by hotelNo order by totalBooking desc) as seqnum
from table t
) t
where seqnum = 1;
```
|
Sample table
```
SELECT * INTO #TEMP
FROM
(
SELECT 1 HOTELNO ,'SINGLE' ROOMTYPE ,2 TOTALBOOKING
UNION ALL
SELECT 1,'FAMILY',4
UNION ALL
SELECT 2,'SINGLE',3
UNION ALL
SELECT 2,'FAMILY',2
UNION ALL
SELECT 3,'SINGLE',1
)TAB
```
Result query
```
;WITH CTE1 AS
(
SELECT HOTELNO,ROOMTYPE,TOTALBOOKING,
MAX(TOTALBOOKING) OVER (PARTITION BY HOTELNO) MAXX
FROM #TEMP
)
SELECT DISTINCT HOTELNO,ROOMTYPE,MAXX
FROM CTE1
WHERE TOTALBOOKING=MAXX
```
|
Select rows with maximum value where values in two columns are same
|
[
"",
"sql",
"sql-server-2008",
""
] |
I have the following table structure, and I have also mentioned my expected output below. Please help me with the query, as I don't know much about SQL.
Following is the table structure
```
select Id,DocumentName from dbo.JoinigDocument
Id DocumentName
1 Exp letter
2 birth cert
3 Leaving
```
Table 2:
```
SELECT * FROM dbo.EmployeeJoiningDocument
Id JoiningDocumentId EmployeeId
1 1 1
2 3 1
3 1 2
4 2 2
5 3 2
6 1 3
```
Table 3 :
```
select Id,FName from dbo.EmployeeDetail
Id FName
1 Pratik
2 Nilesh
3 Praveen
```
Expected Output:-
```
DocumentName FName Present
Exp letter Pratik 1
birth cert Pratik 0
Leaving Pratik 1
Exp letter Nilesh 1
birth cert Nilesh 1
Leaving Nilesh 1
Exp letter Praveen 1
birth cert Praveen 0
Leaving Praveen 0
```
|
You're looking for the cartesian product (cross join) of the `employee` and `document` tables. Then you need to `outer join` to the joining table:
```
select t.documentname, t.fname, count(ejd.id) present
from (
select d.documentname, d.id documentid, e.fname, e.id employeeid
from JoinigDocument d cross join EmployeeDetail e
) t left join EmployeeJoiningDocument ejd on
t.documentid = ejd.joiningdocumentid and t.employeeid = ejd.employeeid
group by t.documentname, t.fname
```
* [Condensed SQL Fiddle Demo](http://sqlfiddle.com/#!3/eb28d/4)
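The cross-join-plus-outer-join shape can be sketched with Python's sqlite3, using a cut-down version of the question's data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE JoinigDocument (Id INT, DocumentName TEXT);
CREATE TABLE EmployeeDetail (Id INT, FName TEXT);
CREATE TABLE EmployeeJoiningDocument (Id INT, JoiningDocumentId INT, EmployeeId INT);
INSERT INTO JoinigDocument VALUES (1,'Exp letter'),(2,'birth cert');
INSERT INTO EmployeeDetail VALUES (1,'Pratik'),(2,'Nilesh');
INSERT INTO EmployeeJoiningDocument VALUES (1,1,1),(2,1,2),(3,2,2);
""")

# Every (document, employee) pair; COUNT of matched link rows gives 1/0 presence
rows = con.execute("""
    SELECT d.DocumentName, e.FName, COUNT(ejd.Id) AS Present
    FROM JoinigDocument d CROSS JOIN EmployeeDetail e
    LEFT JOIN EmployeeJoiningDocument ejd
      ON ejd.JoiningDocumentId = d.Id AND ejd.EmployeeId = e.Id
    GROUP BY d.DocumentName, e.FName
    ORDER BY e.FName, d.DocumentName
""").fetchall()
print(rows)
```

The pair missing from the link table (Pratik / birth cert) comes back with 0, all others with 1.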
|
try this:
```
select JD.DocumentName, ED.FName,
case when COUNT(EJ.Id) >= 1 then 1 else 0 end as Present
from JoinigDocument JD
cross join EmployeeDetail ED
LEFT join EmployeeJoiningDocument EJ
on JD.Id = EJ.JoiningDocumentId and ED.Id = EJ.EmployeeId
group by JD.DocumentName, ED.FName
```
|
Get summary output using joins in SQL
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I'm developing a website for sharepoint using asp.net, it is connected to a database. In visual studio server explorer, the database is accessible using a new sql user I created. I granted sysadmin permissions to that user. However, when I deploy the site and run it, it gives me this error 'login failed for user'.
The sql server is configured to allow mixed authentications. I restarted the server, I played with the user permissions but nothing ever changed.
The connection string I got it from server explorer which I'm sure it is correct:
`Data Source=servertest;Initial Catalog=qDB;User ID=me2;Password=***********"`
What should I do? I've tried every possible solution I found on google and I ended up here.
|
Try like this by editing your connection string:-
```
Server=myServerAddress;Database=myDataBase;User Id=myUsername; Password=myPassword;
```
|
I too had same scenario this worked for me
`<connectionStrings>
<add name="connectionstring" connectionString="Data Source=XYZ\SHAREPOINT;Initial Catalog=DBName;Integrated Security=true;" providerName="System.Data.SqlClient" />
</connectionStrings>`
|
Failed login for user
|
[
"",
"sql",
"asp.net",
"sql-server-2012",
"sharepoint-2013",
""
] |
How can I get the count of `Email ID` values per `CreatedDate` in SQL Server? I have two columns in a table, `EMailID` as `varchar` and `CreatedDate` as `datetime`, and I need the count of email IDs per date.
As example table listed below.
```
Email ID createddate
-----------------------------------------
xyz1@gmail.com 2014-09-19 19:19:36.007
xyz2@gmail.com 2014-09-19 19:19:36.007
Count Createddate
---------------------------
2 2014-09-19
```
|
```
select
COUNT(Email) as Count_Mails,
DATEADD(dd, 0, DATEDIFF(dd, 0, Createddate))
from
tblUser
GROUP BY
DATEADD(dd, 0, DATEDIFF(dd, 0, Createddate))
```
|
You use `Count` with `Group BY`:
```
select COUNT(EmailID) as Count_Mails, CAST(Createddate as DATE)
from tblUser
GROUP BY CAST(Createddate as DATE)
```
|
How can I get the count of Email IDs with their CreatedDate in SQL Server
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2008-r2",
""
] |
Let's consider the table structure below
```
Product Year Month Price
A 2011 01 23
A 2011 02 34
...
A 2011 12 54
B 2011 01 13
B 2011 02 12
...
B 2011 12 20
```
From this table I need to aggregate the value for every 3 months, i.e.:
```
Product Year Month Price
A 2011 1-3 45
A 2011 4-6 23
A 2011 7-9 45
A 2011 10-12 16
A 2012 1-3 12
...
```
Can anybody tell me how to do this calculation using an SQL query?
Thanks in advance!
|
Try this. Use a `CASE` statement to group the months quarter-wise, then find the `SUM` of the price:
```
SELECT product,
[year],
CASE
WHEN ( [month] ) IN( '01', '02', '03' ) THEN '1-3'
WHEN ( [month] ) IN( '04', '05', '06' ) THEN '4-6'
WHEN ( [month] ) IN( '07', '08', '09' ) THEN '7-9'
WHEN ( [month] ) IN( '10', '11', '12' ) THEN '10-12'
END [Month],
Sum(Price)
FROM tablename
GROUP BY product,
[year],
CASE
WHEN ( [month] ) IN( '01', '02', '03' ) THEN '1-3'
WHEN ( [month] ) IN( '04', '05', '06' ) THEN '4-6'
WHEN ( [month] ) IN( '07', '08', '09' ) THEN '7-9'
WHEN ( [month] ) IN( '10', '11', '12' ) THEN '10-12'
END
```
**Note :** By looking at your `month` data it looks like its a `varchar` column but you can change it to **[TINYINT](http://msdn.microsoft.com/en-IN/library/ms187745.aspx)**
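The CASE-in-GROUP-BY bucketing can be tried in any engine; here is a minimal sketch with Python's sqlite3 and made-up rows (SQLite also allows grouping by the select-list alias, which keeps the query shorter than repeating the CASE):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (Product TEXT, Year INT, Month TEXT, Price INT)")
con.executemany("INSERT INTO t VALUES (?, ?, ?, ?)",
    [("A", 2011, "01", 23), ("A", 2011, "02", 34), ("A", 2011, "04", 10)])

# The CASE expression maps each month into a quarter label, then we sum per bucket
rows = con.execute("""
    SELECT Product, Year,
           CASE WHEN Month IN ('01','02','03') THEN '1-3'
                WHEN Month IN ('04','05','06') THEN '4-6'
                WHEN Month IN ('07','08','09') THEN '7-9'
                ELSE '10-12' END AS Quarter,
           SUM(Price)
    FROM t
    GROUP BY Product, Year, Quarter
    ORDER BY Quarter
""").fetchall()
print(rows)  # [('A', 2011, '1-3', 57), ('A', 2011, '4-6', 10)]
```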
|
Try using `CASE` in `GROUP BY` clause:
```
...
GROUP BY [Year], CASE WHEN Month in (1,2,3) THEN '1-3'
WHEN Month in (4,5,6) THEN '4-6'
...
END
```
|
SQL query for aggregation
|
[
"",
"sql",
"sql-server",
"group-by",
"aggregation",
""
] |
I'm looking for a MySQL query to extract values like in the following example:
```
TABLE1:
ID name
25 ab
24 abc
23 abcd
22 abcde
21 abcdef
TABLE2:
ID ID_TABLE1 total
1 25 0
2 25 1
3 25 2
4 25 3
5 23 1
6 22 0
7 22 1
8 21 0
9 21 2
10 24 10
11 24 7
```
I want to return all `TABLE1` rows where max value of `total` column (in `TABLE2`) is < 3.
So the results should be:
```
ID name
23 abcd
22 abcde
21 abcdef
```
I tried this:
```
SELECT t1.*
FROM TABLE1 t1
INNER JOIN (
SELECT MAX( total ) AS max_total, ID_TABLE1
FROM TABLE2
GROUP BY total, ID_TABLE1
) t2
ON t1.ID = t2.ID_TABLE1
WHERE t2.max_total < 3
```
but it's not the result I want.
|
Your inner query groups the results by `id_table` **and by total**. Since the maximum of `total` per `total` is the value itself, it makes the inner query somewhat meaningless. Just remove the `total` from the `group by` clause and you should be OK:
```
SELECT t1.*
FROM TABLE1 t1
INNER JOIN (
SELECT MAX( total ) AS max_total, ID_TABLE1
FROM TABLE2
GROUP BY ID_TABLE1
) t2
ON t1.ID = t2.ID_TABLE1
WHERE t2.max_total < 3
```
|
Try this:
```
SELECT t1.ID, t1.name
FROM TABLE1 t1
INNER JOIN (SELECT ID_TABLE1, MAX(total) AS max_total
FROM TABLE2
GROUP BY ID_TABLE1
) t2 ON t1.ID = t2.ID_TABLE1
WHERE t2.max_total < 3;
```
|
Select rows from a table based on max value in different table
|
[
"",
"mysql",
"sql",
"select",
"group-by",
"max",
""
] |
I need help writing a query which will find the [Replacement Character](http://en.wikipedia.org/wiki/Specials_%28Unicode_block%29) in a SQL table.
I have multiple cells which contain that character and I want to find all of them. This is what the value of a cell looks like:

Thank you for your help!
|
Use the [Unicode](http://msdn.microsoft.com/en-us/library/ms180059.aspx) function:
```
DECLARE @TEST TABLE (ID INT, WORDS VARCHAR(10))
INSERT INTO @TEST VALUES (1, 'A�AA')
INSERT INTO @TEST VALUES (2, 'BBB')
INSERT INTO @TEST VALUES (3, 'CC�C')
INSERT INTO @TEST VALUES (4, 'DDD')
SELECT * FROM @TEST WHERE WORDS LIKE '%' + NCHAR(UNICODE('�')) + '%'
UPDATE @TEST
SET WORDS = REPLACE(WORDS, NCHAR(UNICODE('�')), 'X')
SELECT * FROM @TEST WHERE WORDS LIKE '%' + NCHAR(UNICODE('�')) + '%'
SELECT * FROM @TEST
```
|
The UNICODE suggestion didn't work for me - the � character was being treated as a question mark, so the query was finding all strings with question marks, but not those with �.
The fix posted by Tom Cooper at this link worked for me: <https://social.msdn.microsoft.com/forums/sqlserver/en-US/2754165e-7ab7-44b0-abb4-3be487710f31/black-diamond-with-question-mark>
```
-- Find rows with the character
Select * From [MyTable]
Where CharIndex(nchar(65533) COLLATE Latin1_General_BIN2, MyColumn) > 0
-- Update rows replacing character with a !
Update [MyTable]
set MyColumn = Replace(MyColumn, nchar(65533) COLLATE Latin1_General_BIN2, '!')
```
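Outside the database, the same character is simply code point U+FFFD (decimal 65533, matching `nchar(65533)` above); a quick Python check on sample strings:

```python
# The Unicode replacement character: same code point as nchar(65533) in T-SQL
REPL = "\ufffd"

rows = ["A\ufffdAA", "BBB", "CC\ufffdC", "DDD"]
bad = [r for r in rows if REPL in r]           # rows containing the character
fixed = [r.replace(REPL, "X") for r in rows]   # replace it, as the UPDATE does
print(bad)
print(fixed)  # ['AXAA', 'BBB', 'CCXC', 'DDD']
```

This can be handy for pre-screening data in an ETL script before it ever reaches the table.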
|
SQL Find Replacement Character as a part of a string
|
[
"",
"sql",
"t-sql",
""
] |
I've two tables (MySQL) in my application related to tracking jobs. The first table "jobs" contains one row for each job recorded in the system. This is supported by a jobs progress table which keeps a track of the progress of a job with a new row added each time a job moves to a new state. It is also possible that a job can be re-opened again and pushed back to a previous state. Therefore the job\_progress table could have multiple entries for the same job state but on different date/times.
I'd like to be able to get the current state of a job by getting the max date for a particular state via the jobs\_progress table. I'd also like to be able to get for each job the date that it progressed through each state. However I'm struggling to do this via SQL and would appreciate some help. See the following SQLFiddle: <http://sqlfiddle.com/#!2/8b27f/4>
Here's what I currently have:
```
SELECT j.*
, jp.job_state
, jp.effective_date
, jp.user_id
FROM jobs j
LEFT
JOIN jobs_progress jp
ON jp.job_id = j.id;
+----+-----------+-------------+-----------+-----------------------+-----------+---------------+---------------+---------------------+---------+
| id | agency_id | entity_type | entity_id | job_title | job_state | system_status | job_state | effective_date | user_id |
+----+-----------+-------------+-----------+-----------------------+-----------+---------------+---------------+---------------------+---------+
| 1 | 123 | PROPERTY | 61 | set of keys to be cut | INVOICED | ACTIVE | NEW | 2014-07-08 12:27:54 | 102 |
| 1 | 123 | PROPERTY | 61 | set of keys to be cut | INVOICED | ACTIVE | APPROVED | 2014-07-08 12:28:02 | 102 |
| 1 | 123 | PROPERTY | 61 | set of keys to be cut | INVOICED | ACTIVE | ASSIGNED | 2014-07-08 12:29:02 | 102 |
| 1 | 123 | PROPERTY | 61 | set of keys to be cut | INVOICED | ACTIVE | WORK_COMPLETE | 2014-07-08 12:29:11 | 102 |
| 1 | 123 | PROPERTY | 61 | set of keys to be cut | INVOICED | ACTIVE | INVOICED | 2014-07-08 12:29:27 | 102 |
| 1 | 123 | PROPERTY | 61 | set of keys to be cut | INVOICED | ACTIVE | ASSIGNED | 2014-08-21 12:29:02 | 103 |
| 1 | 123 | PROPERTY | 61 | set of keys to be cut | INVOICED | ACTIVE | WORK_COMPLETE | 2014-08-30 12:29:11 | 103 |
| 1 | 123 | PROPERTY | 61 | set of keys to be cut | INVOICED | ACTIVE | INVOICED | 2014-09-01 12:29:27 | 103 |
+----+-----------+-------------+-----------+-----------------------+-----------+---------------+---------------+---------------------+---------+
```
Here's what I want:
```
+----+-----------+-----------------------+---------------------+--------------------------+--------------------------+--------------------------------+-------------------------+
| id | agency_id | job_title | raised_date (NEW) | approved_date (APPROVED) | assigned_date (ASSIGNED) | completed_date (WORK_COMPLETE) | invoiced_date (INVOICED)|
+----+-----------+-----------------------+---------------------+--------------------------+--------------------------+--------------------------------+-------------------------+
| 1 | 123 | set of keys to be cut | 2014-07-08 12:27:54 | 2014-07-08 12:28:02 | 2014-08-21 12:29:02 | 2014-08-30 12:29:11 | 2014-09-01 12:29:27 |
+----+-----------+-----------------------+---------------------+--------------------------+--------------------------+--------------------------------+-------------------------+
```
And here's what I tried:
|
This will show you the maximum date for each status, so it should contain everything you want:
```
select
j.id,
j.agency_id,
j.entity_type,
j.entity_id,
j.job_title,
j.system_status,
jp.jobstate2 job_state,
jp.effectivedate
from jobs j
inner join (select job_id,job_state jobstate2,max(effective_date) effectivedate
from jobs_progress
group by job_id,job_state) jp
on jp.job_id = j.id
order by effectivedate desc
```
EDIT: Following some more requirements being added
It looks like you're after a PIVOTed output. As far as I know, there isn't an easy way to do this in MySQL, but you could try this, which isn't pretty, but does produce the result you're after:
```
select
j.id,
j.agency_id,
j.entity_type,
j.entity_id,
j.job_title,
j.system_status,
j.job_state,
jp_new.effectivedate raised_date,
jp_approved.effectivedate approved_date,
jp_assigned.effectivedate assigned_date,
jp_complete.effectivedate complete_date,
jp_invoiced.effectivedate invoice_date
from jobs j
inner join (select job_id,max(effective_date) effectivedate
from jobs_progress
where job_state = 'NEW'
group by job_id,job_state) jp_new
on jp_new.job_id = j.id
inner join (select job_id,max(effective_date) effectivedate
from jobs_progress
where job_state = 'APPROVED'
group by job_id,job_state) jp_approved
on jp_approved.job_id = j.id
inner join (select job_id,max(effective_date) effectivedate
from jobs_progress
where job_state = 'ASSIGNED'
group by job_id,job_state) jp_assigned
on jp_assigned.job_id = j.id
inner join (select job_id,max(effective_date) effectivedate
from jobs_progress
where job_state = 'WORK_COMPLETE'
group by job_id,job_state) jp_complete
on jp_complete.job_id = j.id
inner join (select job_id,max(effective_date) effectivedate
from jobs_progress
where job_state = 'INVOICED'
group by job_id,job_state) jp_invoiced
on jp_invoiced.job_id = j.id
```
EDIT: Following some more requirements being added
If you want to show jobs that haven't been through all of the stages, then use `LEFT OUTER JOIN` instead of `INNER JOIN`:
```
select
j.id,
j.agency_id,
j.entity_type,
j.entity_id,
j.job_title,
j.system_status,
j.job_state,
jp_new.effectivedate raised_date,
jp_approved.effectivedate approved_date,
jp_assigned.effectivedate assigned_date,
jp_complete.effectivedate complete_date,
jp_invoiced.effectivedate invoice_date
from jobs j
left outer join (select job_id,max(effective_date) effectivedate
from jobs_progress
where job_state = 'NEW'
group by job_id,job_state) jp_new
on jp_new.job_id = j.id
left outer join (select job_id,max(effective_date) effectivedate
from jobs_progress
where job_state = 'APPROVED'
group by job_id,job_state) jp_approved
on jp_approved.job_id = j.id
left outer join (select job_id,max(effective_date) effectivedate
from jobs_progress
where job_state = 'ASSIGNED'
group by job_id,job_state) jp_assigned
on jp_assigned.job_id = j.id
left outer join (select job_id,max(effective_date) effectivedate
from jobs_progress
where job_state = 'WORK_COMPLETE'
group by job_id,job_state) jp_complete
on jp_complete.job_id = j.id
left outer join (select job_id,max(effective_date) effectivedate
from jobs_progress
where job_state = 'INVOICED'
group by job_id,job_state) jp_invoiced
on jp_invoiced.job_id = j.id
```
|
This will get you the last date a row was invoiced. If you want all the transition dates, how do you want that displayed?
```
select jobs.*, LastEntry.effective_date
from jobs
INNER JOIN (SELECT job_id, MAX(effective_date) AS effective_date FROM jobs_progress WHERE job_state = 'INVOICED' GROUP BY job_id) AS LastEntry ON jobs.id = LastEntry.job_id
```
This will only show jobs that have been invoiced. If you want to include jobs that have not been invoiced, then use a LEFT OUTER JOIN on the derived table instead of an inner join
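The MAX-plus-derived-table pattern from both answers can be exercised end to end. This is a minimal sketch using Python's built-in sqlite3 module; the table contents are made-up assumptions, not data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE jobs (id INTEGER PRIMARY KEY, job_title TEXT);
CREATE TABLE jobs_progress (job_id INTEGER, job_state TEXT, effective_date TEXT);
INSERT INTO jobs VALUES (1, 'Fix roof'), (2, 'Paint wall');
INSERT INTO jobs_progress VALUES
  (1, 'NEW',      '2014-01-01'),
  (1, 'INVOICED', '2014-02-01'),
  (1, 'INVOICED', '2014-03-01'),
  (2, 'NEW',      '2014-01-05');
""")

# The derived table keeps only the latest INVOICED date per job; the inner
# join then drops jobs (like job 2 here) that were never invoiced.
rows = conn.execute("""
SELECT j.id, j.job_title, last_inv.effective_date
FROM jobs j
JOIN (SELECT job_id, MAX(effective_date) AS effective_date
      FROM jobs_progress
      WHERE job_state = 'INVOICED'
      GROUP BY job_id) AS last_inv
  ON j.id = last_inv.job_id
""").fetchall()
```

Switching `JOIN` to `LEFT JOIN` in the sketch would keep job 2 with a `NULL` date, matching the edit above.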
|
MYSQL query to get multiple statuses from statuses table
|
[
"",
"mysql",
"sql",
"join",
""
] |
I'm modifying code for an ID search bar and I'm trying to enable the user to be able to search for ID's using SQL syntax, so for example '%535%'. Doing just that is simple enough, but I've been searching and racking my brains for a while now and I can't seem to find a solution to the issue described below:
The problem is that the IDs are all left-padded varchar(14), as in:
```
' 8534'
' 393583'
' 123456/789'
```
This virtually disables the user from searching **only** for IDs that begin with a certain sequence, as '85%' returns no results due to the whitespace padding.
The site I'm maintaining is an oldie written in classic ASP (w/ JScript) and the search is done via a stored procedure with the whole 'WHERE' clause being passed in as a parameter.
I'm not able to modify the database, so what I'm asking is: is there any way to modify the clause so that the padding is ignored and '52%' returns IDs beginning with 52?
|
Functions in the where clause tend to be slow. Something like this might be quicker:
```
where id like '123%'
or id like '% 123%'
```
|
```
SELECT LTRIM(ID) FROM table WHERE ID LIKE '%1234%'
```
Edit as you can only modify WHERE statement
```
WHERE LTRIM(ID) LIKE '1234%'
```
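The effect of the padding, and of trimming it away in the `WHERE` clause, can be reproduced with sqlite3; the sample IDs below are assumptions modeled on the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ids (id VARCHAR(14))")
conn.executemany("INSERT INTO ids VALUES (?)",
                 [("          8534",), ("        393583",), ("    123456/789",)])

# A plain prefix search misses everything because of the left padding...
plain = conn.execute("SELECT id FROM ids WHERE id LIKE '85%'").fetchall()

# ...while trimming first matches IDs that really start with 85.
trimmed = conn.execute(
    "SELECT LTRIM(id) FROM ids WHERE LTRIM(id) LIKE '85%'").fetchall()
```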
|
How to ignore left padding in a LIKE statement?
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-like",
""
] |
I'm fetching data from 2 tables according to card number from new\_cards table and old\_cards table
"I want to display card\_code and card\_status from theses 2 tables if the card\_code contains '8'"
```
(SELECT *
FROM bills INNER JOIN cats INNER JOIN suppliers
INNER JOIN new_cards
WHERE new_cards.`Card_Code` LIKE '%8%'
AND bills.`Sup_ID` = suppliers.`Sup_ID`
AND new_cards.`Sup_ID` = suppliers.`Sup_ID`
AND cats.`Cat_ID` = bills.`Cat_ID`
AND new_cards.`Cat_ID` = cats.`Cat_ID`
AND new_cards.`Bill_ID` = bills.`Bill_ID`)
UNION
(SELECT * FROM bills INNER JOIN cats INNER JOIN suppliers
INNER JOIN sold_cards WHERE
sold_cards.`Card_Code` LIKE '%8%'
AND bills.`Sup_ID` = suppliers.`Sup_ID`
AND sold_cards.`Sup_ID` = suppliers.`Sup_ID`
AND cats.`Cat_ID` = bills.`Cat_ID`
AND sold_cards.`Cat_ID` = cats.`Cat_ID`
AND sold_cards.`Bill_ID` = bills.`Bill_ID`)
```
but the result contains card\_codes from the first table only, which is new\_cards
|
Are you sure there is a match for that? Remove the second query and see if any results are returned. If a combined row doesn't match all the criteria from all tables, it will not appear. That means there would have to be rows that join both new\_cards and sold\_cards. You might be looking for a LEFT JOIN instead of an INNER JOIN.
Since you don't mention sold\_cards in the new\_cards condition or vice versa, it's also going to attempt to join every single row matching the second condition, multiplied by every single row in new\_cards. Be careful with OR queries like that; they can be extremely inefficient, especially on large databases.
|
Tip: to avoid duplication in the query result, the matching rows from the two queries should not have differing values in any column;
and if duplicates do appear, you can use the DISTINCT keyword.
|
where clause containing union returns the first query only
|
[
"",
"mysql",
"sql",
"where-clause",
""
] |
I am new to SQL databases and I have a problem.
My table is like this:
```
s.no |name | amount
---------------------
1 | xyz | 15
2 | xyz | 54
3 | xyz | 787
4 | xyz | 455
5 | hhh | 58
6 | hhh | 789
7 | hhh | 478
8 | hhh | 7855
```
and I want result like this .
```
id |name| | amount
---------------------
1 | xyz | 15
2 | xyz | 54
3 | xyz | 787
4 | xyz | 455
| total | 1311
5 | hhh | 58
6 | hhh | 789
7 | hhh | 478
8 | hhh | 7855
total | 9180
```
|
It looks like you want the `rollup` clause, which is supported by many databases.
Here is what the query would look like in MySQL:
```
select `s.no`, name, sum(amount)
from mytable
group by name, `s.no` with rollup
```
You can do something like `coalesce(s.no, 'total')` to see the string total instead of a `NULL`.
|
Try like this:
```
select id,name,amount,grp from (
select id,name,amount,name as grp from table1
union all
select 0 as id,'Total' as name,sum(amount) as amount,
concat(name,'-Total') as grp
from table1 group by name ) as tab1 order by grp
```
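The UNION ALL approach, interleaving a per-name total row after each group, can be checked with sqlite3 (which lacks MySQL's `WITH ROLLUP` and `CONCAT`, so a sort key column stands in for the concatenated group label); the sample rows are a shortened version of the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (id INTEGER, name TEXT, amount INTEGER);
INSERT INTO t VALUES (1,'xyz',15),(2,'xyz',54),(3,'hhh',58),(4,'hhh',789);
""")

# Detail rows (is_total = 0) sort before each group's total row (is_total = 1).
rows = conn.execute("""
SELECT id, name, amount FROM (
    SELECT id, name, amount, name AS grp, 0 AS is_total FROM t
    UNION ALL
    SELECT NULL, 'total', SUM(amount), name AS grp, 1 FROM t GROUP BY name
) ORDER BY grp, is_total, id
""").fetchall()
```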
|
sum every 5 rows in one row in sql
|
[
"",
"sql",
"sum",
""
] |
I have a table named table1 with one column called ID, like this:
ID
1
2
3
4
5
6
7
8
9
10
Now I want the output like this:
1 2 3 4 5 6 7 8 9 10
That means one column with different values becomes one row.
The column names can be anything, like id1, id2, id3, ...
i.e.
id1 id2 id3 id4 ... id10
1 2 3 4 ... 10
Is it possible to get output like this?
If so, how?
|
You can use the PIVOT function like this:
```
select * from table1
pivot
(
MAX(ID) for ID in ([1],[2],[3],[4],[5],[6])
) piv
```
Instead of "MAX(ID)" you can insert the values that you want to have in your rows.
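On engines without `PIVOT`, the same column-to-row flip can be done with conditional aggregation (`MAX(CASE WHEN ...)`). A minimal sqlite3 sketch, with the column list hard-coded for five IDs:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (ID INTEGER)")
conn.executemany("INSERT INTO table1 VALUES (?)", [(i,) for i in range(1, 6)])

# Each CASE picks out one ID value; MAX collapses all rows into a single row.
row = conn.execute("""
SELECT MAX(CASE WHEN ID = 1 THEN ID END) AS id1,
       MAX(CASE WHEN ID = 2 THEN ID END) AS id2,
       MAX(CASE WHEN ID = 3 THEN ID END) AS id3,
       MAX(CASE WHEN ID = 4 THEN ID END) AS id4,
       MAX(CASE WHEN ID = 5 THEN ID END) AS id5
FROM table1
""").fetchone()
```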
|
A combination of Dynamic SQL and the `PIVOT` keyword will do what you want.
This question has been asked many times before:
* [SQL Server dynamic PIVOT query?](https://stackoverflow.com/questions/10404348/sql-server-dynamic-pivot-query)
* [T-SQL dynamic pivot](https://stackoverflow.com/questions/12210692/t-sql-dynamic-pivot)
* [Dynamic Pivot Columns in SQL Server](https://stackoverflow.com/questions/14797691/dynamic-pivot-columns-in-sql-server)
Just do a google search for "t-sql dynamic pivot"
|
1 column to 1 row
|
[
"",
"sql",
"sql-server-2008",
"mysqli",
""
] |
My table:
```
timestamp | value
------------------------+---------
2013-08-31 22:00:01.000 | 19.1
2013-08-31 22:00:03.000 | 21.5
...
```
Due to missing seconds in my data series i want to calculate an average value per minute. So instead of having a data series in seconds I want to have it in minutes instead, like so:
```
timestamp | value
-----------------+---------
2013-08-31 22:00 | 19.5
2013-08-31 22:01 | 21.1
...
```
How could I write an SQL query that give me this result? I am using SQL Server 2012.
|
Casting from datetime to smalldatetime avoids tedious (and computationally slow) mucking about with date-to-character-back-to-date conversions. The following will calculate the average per minute.
```
SELECT
cast(Timestamp as smalldatetime) Timestamp
,avg(value) Value
from PageLogs
group by cast(Timestamp as smalldatetime)
order by cast(Timestamp as smalldatetime)
```
The downside is rounding; this would convert values between 21:59:30.000 and 22:00:29.998 to 22:00. If you need to average by "calendar" minutes (22:00:00.000 to 22:00:59.997), you'd have to adjust the times (at the millisecond level) to get the right breakpoints, like so:
```
SELECT
cast(dateadd(ms, -30000, Timestamp) as smalldatetime) Timestamp
,avg(value) Value
from PageLogs
group by cast(dateadd(ms, -30000, Timestamp) as smalldatetime)
order by cast(dateadd(ms, -30000, Timestamp) as smalldatetime)
```
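The group-by-calendar-minute idea can be demonstrated portably with sqlite3, truncating (not rounding) each timestamp to the minute with `strftime`; the sample readings are assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE page_logs (ts TEXT, value REAL);
INSERT INTO page_logs VALUES
 ('2013-08-31 22:00:01', 19.0),
 ('2013-08-31 22:00:03', 21.0),
 ('2013-08-31 22:01:10', 10.0);
""")

# strftime drops the seconds, so grouping on it averages per calendar minute.
rows = conn.execute("""
SELECT strftime('%Y-%m-%d %H:%M', ts) AS minute, AVG(value)
FROM page_logs
GROUP BY minute
ORDER BY minute
""").fetchall()
```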
|
You can remove seconds by applying `CONVERT()` with format 100 to the timestamp column:
```
SELECT CONVERT(VARCHAR,Timestamp,100) AS Dt
    ,AVG(value)
FROM YourTable
GROUP BY CONVERT(VARCHAR,Timestamp,100)
```
You could re-cast as `DATETIME` if needed:
```
CAST(CONVERT(VARCHAR,Timestamp,100) AS DATETIME)
```
That has seconds/milliseconds, but they are all zeroes.
|
Average value per minute from second
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2012",
""
] |
I have a column of data of type `VARCHAR`, that I want to `CONVERT` or `CAST` to an integer (my end goal is for all of my data points to be integers). However, all the queries I attempt return values of `0`.
My data looks like this:
```
1
2
3
4
5
```
If I run either of the following queries:
```
SELECT CONVERT(data, BINARY) FROM table
SELECT CONVERT(data, CHAR) FROM table
```
My result is:
```
1
2
3
4
5
```
No surprises there. However, if I run either of these queries:
```
SELECT CONVERT(data, UNSIGNED) FROM table
SELECT CONVERT(data, SIGNED) FROM table
```
My result is:
```
0
0
0
0
0
```
I've searched SO and Google all over for an answer to this problem, with no luck, so I thought I would try the pros here.
EDIT/UPDATE
I ran some additional queries on the suggestions from the comments, and here are the results:
```
data LENGTH(data) LENGTH(TRIM(data)) ASCII(data)
1 3 3 0
2 3 3 0
3 3 3 0
4 3 3 0
5 3 3 0
```
It appears that I have an issue with the data itself. For anyone coming across this post: my solution at this point is to `TRIM` the excess from the data points and then `CONVERT` to `UNSIGNED`. Thanks for all of the help!
FURTHER EDIT/UPDATE
After a little research, turns out there were hidden `NULL` bytes in my data. The answer to this question helped out: [How can I remove padded NULL bytes using SELECT in MySQL](https://stackoverflow.com/questions/8938081/how-can-i-remove-padded-null-bytes-using-select-in-mysql)
|
What does `SELECT data, LENGTH(data), LENGTH(TRIM(data)), ASCII(data) FROM table` return? It's possible your numeric strings aren't just numeric strings.
Alternately, are you using multi-byte character encoding?
|
I believe the query you have is fine, as it worked for me: sqlfiddle.com/#!2/a15ec4/1/3.
That makes me think you have a data problem. Are you sure there's not a return or space in the data somewhere?
You can check the data by doing a LENGTH or an ASCII on it to see if you have more than expected:
`select ascii(data) from foo where ascii(data) not between 48 and 57` or
`select length(data) as mLEN from table having mlen>1` for length.
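The asker's own diagnosis (LENGTH of 3 for a single visible digit, ASCII of 0) points at hidden NUL padding. A minimal Python sketch of the same symptom and cleanup; the exact padded strings are an assumption for illustration:

```python
# Strings padded with hidden NUL bytes, as diagnosed in the question's edit.
raw = ["1\x00\x00", "2\x00\x00", "3\x00\x00"]

# len() matches the LENGTH(data) = 3 observation from the question...
lengths = [len(s) for s in raw]

# ...and converting to int only works once the NUL bytes are stripped.
values = [int(s.replace("\x00", "")) for s in raw]
```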
|
MySQL VARCHAR Type won't CONVERT to Integer
|
[
"",
"mysql",
"sql",
"sequelpro",
""
] |
I am trying to update all city values of the table team as: “city” + “ #p” + “number of players” +” g” + “number of goals forward” (e.g. “Tokyo #p25 g74”).
I tried to do this and have these two queries: one to get the number of players and one to get the number of goals.
Query for the number of players:
```
select t.city + '#p' + CONVERT(varchar(10), count(pt.playerId)) + '#g'
from team t,
player_team pt
where pt.teamID = t.teamID
group By t.teamId,t.city
```
Query for the number of goals:
```
select count(*) totalgoals,
pt.teamID
from goals g,
player_team pt
where g.playerId = pt.playerId
group by pt.teamID
```
I couldn't merge these two counts.
Help me out, please...
My table hierarchy and fields are shown below:
```
player
(
playerID int,
firstName nvarchar(25),
lastName nvarchar(25),
nationality varchar(25),
birthDate smalldatetime,
age smallint,
position varchar(25)
)
team
(
teamID int,
name nvarchar(50),
city nvarchar(25)
)
player_team
(
playerID int,
teamID int,
season varchar(5)
)
match
(
matchID int,
homeTeamID int,
visitingTeamID int,
dateOfMatch smalldatetime,
week tinyint
)
goals
(
matchID int,
playerID int,
isOwnGoal bit,
minute tinyint
)
```
EDIT: A SELECT query like the one given below worked well and gives me the right results. But how can I update the table with these multiple records? When I try to use it as a subquery in an UPDATE statement, it gives me a compile error complaining about multiple records:
Error: Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <=, >, >= or when the subquery is used as an expression.
|
Assuming your 2 queries are producing the desired results, try combining them with an `outer join` and a subquery:
```
select pt.teamId,
t.city + ' #p' + CONVERT(varchar(10), count(pt.playerId))
+ ' g' + CONVERT(varchar(10), t2.totalgoals)
from team t
inner join player_team pt on pt.teamID = t.teamID
left join (
select count(*) totalgoals,
pt.teamID
from goals g inner join player_team pt on g.playerId = pt.playerId
group by pt.teamID
) t2 on t.teamid = t2.teamid
group By pt.teamId,t.city,t2.totalgoals
```
* [SQL Fiddle Demo](http://sqlfiddle.com/#!3/6a4b7/9)
|
The simplest way probably is to use correlated subqueries to look up the values for each team:
```
SELECT t.city || ' #p' ||
CONVERT(varchar(10), (SELECT count(*)
FROM player_team
WHERE teamId = t.teamId))
|| ' #g ' ||
CONVERT(varchar(10), (SELECT count(*)
FROM goals
JOIN player_team USING (playerId)
WHERE teamId = t.teamId))
FROM team t
```
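The correlated-subquery answer can be exercised with sqlite3, which supports the standard `||` concatenation used above (integers are coerced to text automatically); the team and goal data below are assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE team (teamID INTEGER, city TEXT);
CREATE TABLE player_team (playerID INTEGER, teamID INTEGER);
CREATE TABLE goals (playerID INTEGER);
INSERT INTO team VALUES (1, 'Tokyo');
INSERT INTO player_team VALUES (10, 1), (11, 1);
INSERT INTO goals VALUES (10), (10), (11);
""")

# Each subquery is re-evaluated per team row, looking up that team's counts.
rows = conn.execute("""
SELECT t.city || ' #p' ||
       (SELECT COUNT(*) FROM player_team WHERE teamID = t.teamID) ||
       ' g' ||
       (SELECT COUNT(*) FROM goals g
        JOIN player_team pt ON g.playerID = pt.playerID
        WHERE pt.teamID = t.teamID)
FROM team t
""").fetchall()
```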
|
how to merge count values in sql
|
[
"",
"mysql",
"sql",
"sql-server",
"sqlite",
"t-sql",
""
] |
Given a number, I would like to get the immediately lower and upper values from a table like this:
```
4.420
4.570
5.120
5.620
6.120
6.370
7.120
7.370
7.870
8.120
8.370
```
For example, if I use 6.20 as parameter, the query should return 6.12 and 6.37.
I'm doing this because I want to calculate if a financial option is At the money.
I'm using PostgreSQL 9.3.
|
```
with all_numbers as (
select nr,
lag(nr,1,nr) over (order by nr) as prev_value,
lead(nr,1,nr) over (order by nr) as next_value
from numbers
)
select *
from all_numbers
where 6.20 between prev_value and next_value;
```
For completeness I was playing around with another solution:
```
select *
from numbers
where nr >= (select max(nr) from numbers where nr < 6.20)
and nr <= (select min(nr) from numbers where nr > 6.20);
```
And to my surprise this was actually faster than the version with the window functions with an index on the `nr` column: 0.2 seconds vs. 2.7 seconds on a table with 2.8 million rows.
Here are the execution plans:
* version with window function: <http://explain.depesz.com/s/Bse3>
* version with sub-select: <http://explain.depesz.com/s/L8V>
Another possible solution is to install the btree\_gist extension and then use the "distance" operator that comes with it: `<->`
As the result of the `<->` is a value (the distance between the two arguments) you would need to define a threshold that is low enough:
```
select nr
from numbers
where nr <-> 6.20 < 0.05;
```
or use an `order by nr <-> 6.20 limit 1`.
This version took about 1 second on my test table.
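The faster sub-select variant is easy to verify with sqlite3; a few of the question's sample numbers are used here:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE numbers (nr REAL)")
conn.executemany("INSERT INTO numbers VALUES (?)",
                 [(x,) for x in (5.62, 6.12, 6.37, 7.12)])

# Immediately-lower and immediately-upper neighbours of 6.20.
row = conn.execute("""
SELECT (SELECT MAX(nr) FROM numbers WHERE nr < 6.20) AS lower_nr,
       (SELECT MIN(nr) FROM numbers WHERE nr > 6.20) AS upper_nr
""").fetchone()
```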
|
```
select * from
(
SELECT col, lag(col) OVER (ORDER BY col) before_value, lead(col) OVER (ORDER BY col) after_value
FROM foo order by col
) A
WHERE 6.20 between before_value and after_value limit 1;
```
|
Get upper and lower value surrounding a number
|
[
"",
"sql",
"postgresql",
""
] |
I found a couple similar threads but none with just 1 table. Basically we're doing some updates on our Quote system and there's no great way to find the quote revision history, as the only place it's stored is "priorquoteID." I wrote a SQL stored procedure that went through and traversed all of the prior quotes and set a new field I created called "OriginalQuoteID" equal to the oldest quote and set the newest quote as "CurrentQuote."
However, about 2/3s of the quotes were never revised so weren't processed by my SQLProcedure. Conceptually I need something that works like this:
```
update
SalesQuotesTest.dbo.Quote
set CurrentQuote = 1
group by OriginalQuoteID
having COUNT(*) < 2
```
So any quotes that don't have a revision (is the only quote in the revision thread) I can mark as current.
Does this make sense? Any ideas?
|
you could use a sub-query for that:
```
update
SalesQuotesTest.dbo.Quote
set CurrentQuote = 1
where OriginalQuoteID in ( select OriginalQuoteID
from SalesQuotesTest.dbo.Quote
group by OriginalQuoteID
having COUNT(*) < 2
)
```
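The UPDATE-with-IN-subquery shape can be exercised with sqlite3; the quote rows below are assumptions (quotes 1 and 2 form one revision thread, 3 and 4 are singletons):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE quote (id INTEGER PRIMARY KEY,
                    original_quote_id INTEGER,
                    current_quote INTEGER DEFAULT 0);
INSERT INTO quote (id, original_quote_id) VALUES (1, 1), (2, 1), (3, 3), (4, 4);
""")

# Mark as current every quote whose revision thread has only one member.
conn.execute("""
UPDATE quote SET current_quote = 1
WHERE original_quote_id IN (SELECT original_quote_id
                            FROM quote
                            GROUP BY original_quote_id
                            HAVING COUNT(*) < 2)
""")
flags = [r[0] for r in
         conn.execute("SELECT current_quote FROM quote ORDER BY id")]
```

Quotes 1 and 2 stay at 0 because their thread has two members; 3 and 4 are flagged.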
|
This is totally untested but here is my suggestion. Make any changes that are necessary to fit with your schema.
```
WITH quotesToUpdate(QuoteId) AS (
SELECT Quote.Identifier
FROM SalesQuotesTest.dbo.Quote
GROUP BY OriginalQuoteID HAVING COUNT(*) < 2
)
UPDATE SalesQuotesTest.dbo.Quote
SET CurrentQuote = 1
FROM SalesQuotesTest.dbo.Quote q
INNER JOIN quotesToUpdate qu ON q.Identifier = qu.QuoteId
```
|
Update using GROUP by
|
[
"",
"sql",
"sql-server",
""
] |
Assume I have a single mySQL table (users) with the following fields:
```
userid
gender
region
age
ethnicity
income
```
I want to be able to return the number of total records based on the number a user enters. Furthermore, they will also be providing additional criteria.
In the simplest example, they may ask for 1,000 records, where 600 records should have gender = 'Male' and 400 records where gender = 'Female'. That's simple enough to do.
Now, go one step further. Assume they now want to specify Region:
```
GENDER
Male: 600 records
Female: 400 records
REGION
North: 100 records
South: 200 records
East: 300 records
West: 400 records
```
Again, only 1000 records should be returned, but in the end, there must be 600 males, 400 females, 100 Northerners, 200 Southerners, 300 Easterners and 400 Westerners.
I know this isn't valid syntax, but using pseudo-mySQL code, it hopefully illustrates what I'm trying to do:
```
(SELECT * FROM users WHERE gender = 'Male' LIMIT 600
UNION
SELECT * FROM users WHERE gender = 'Female' LIMIT 400)
INTERSECT
(SELECT * FROM users WHERE region = 'North' LIMIT 100
UNION
SELECT * FROM users WHERE region = 'South' LIMIT 200
UNION
SELECT * FROM users WHERE region = 'East' LIMIT 300
UNION
SELECT * FROM users WHERE region = 'West' LIMIT 400)
```
Note that I'm not looking for a one-time query. The total number of records and the number of records within each criteria will constantly be changing based on input by the user. So, I'm trying to come up with a generic solution that can be re-used over and over, not a hard-coded solution.
To make things more complicated, now add more criteria. There could also be age, ethnicity and income each with their own set number of records for each group, additional code appended to above:
```
INTERSECT
(SELECT * FROM users WHERE age >= 18 and age <= 24 LIMIT 300
UNION
SELECT * FROM users WHERE age >= 25 and age <= 36 LIMIT 200
UNION
SELECT * FROM users WHERE age >= 37 and age <= 54 LIMIT 200
UNION
SELECT * FROM users WHERE age >= 55 LIMIT 300)
INTERSECT
etc.
```
I'm not sure if this is possible to write in one query or if this requires multiple statements and iterations.
|
## Flatten Your Criteria
---
You can flatten your multi-dimensional criteria into a single level criteria

Now this criteria can be achieved in one query as follow
```
(SELECT * FROM users WHERE gender = 'Male' AND region = 'North' LIMIT 40) UNION ALL
(SELECT * FROM users WHERE gender = 'Male' AND region = 'South' LIMIT 80) UNION ALL
(SELECT * FROM users WHERE gender = 'Male' AND region = 'East' LIMIT 120) UNION ALL
(SELECT * FROM users WHERE gender = 'Male' AND region = 'West' LIMIT 160) UNION ALL
(SELECT * FROM users WHERE gender = 'Female' AND region = 'North' LIMIT 60) UNION ALL
(SELECT * FROM users WHERE gender = 'Female' AND region = 'South' LIMIT 120) UNION ALL
(SELECT * FROM users WHERE gender = 'Female' AND region = 'East' LIMIT 180) UNION ALL
(SELECT * FROM users WHERE gender = 'Female' AND region = 'West' LIMIT 240)
```
**Problem**
* It does not always return the correct result. For example, if there are fewer than 40 users who are male and from the north, then the query will return fewer than 1,000 records.
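The per-cell UNION ALL query can be sketched with sqlite3. Note one portability wrinkle: SQLite does not accept `LIMIT` on a parenthesised compound-select branch the way MySQL does, so each limited branch is wrapped in a subselect here. The two-cell data set is an assumption:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (userid INTEGER, gender TEXT, region TEXT)")
conn.executemany(
    "INSERT INTO users VALUES (?, ?, ?)",
    [(i, "Male", "North") for i in range(5)]
    + [(100 + i, "Female", "South") for i in range(5)])

# Each branch draws its own quota from one gender/region cell.
rows = conn.execute("""
SELECT * FROM (SELECT * FROM users
               WHERE gender = 'Male' AND region = 'North' LIMIT 2)
UNION ALL
SELECT * FROM (SELECT * FROM users
               WHERE gender = 'Female' AND region = 'South' LIMIT 3)
""").fetchall()
```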
---
## Adjust Your Criteria
---
Let's say that there are fewer than 40 users who are male and from the north. Then you need to adjust the other criteria quantities to cover the missing quantity from "Male" and "North". I believe it is not possible to do this with bare SQL. This is the pseudo code that I have in mind. For the sake of simplification, assume we will only query for Male, Female, North, and South:
```
conditions.add({ gender: 'Male', region: 'North', limit: 40 })
conditions.add({ gender: 'Male', region: 'South', limit: 80 })
conditions.add({ gender: 'Female', region: 'North', limit: 60 })
conditions.add({ gender: 'Female', region: 'South', limit: 120 })
foreach(conditions as condition) {
temp = getResultFromDatabaseByCondition(condition)
conditions.remove(condition)
// there is not enough result for this condition,
// increase other condition quantity
if (temp.length < condition.limit) {
adjust(...);
}
}
```
Let's say that there are only 30 northern males. So we need to adjust +10 male and +10 north.
```
To Adjust
---------------------------------------------------
Male +10
North +10
Remain Conditions
----------------------------------------------------
{ gender: 'Male', region: 'South', limit: 80 }
{ gender: 'Female', region: 'North', limit: 60 }
{ gender: 'Female', region: 'South', limit: 120 }
```
'Male' + 'South' is the first condition that match the 'Male' adjustment condition. Increase it by +10, and remove it from the "remain condition" list. Since, we increase the South, we need to decrease it back at other condition. So add "South" condition into "To Adjust" list
```
To Adjust
---------------------------------------------------
South -10
North +10
Remain Conditions
----------------------------------------------------
{ gender: 'Female', region: 'North', limit: 60 }
{ gender: 'Female', region: 'South', limit: 120 }
Final Conditions
----------------------------------------------------
{ gender: 'Male', region: 'South', limit: 90 }
```
Find condition that match the 'South' and repeat the same process.
```
To Adjust
---------------------------------------------------
Female +10
North +10
Remain Conditions
----------------------------------------------------
{ gender: 'Female', region: 'North', limit: 60 }
Final Conditions
----------------------------------------------------
{ gender: 'Female', region: 'South', limit: 110 }
{ gender: 'Male', region: 'South', limit: 90 }
```
And finally
```
{ gender: 'Female', region: 'North', limit: 70 }
{ gender: 'Female', region: 'South', limit: 110 }
{ gender: 'Male', region: 'South', limit: 90 }
```
I haven't come up with the exact implementation of adjustment yet. It is more difficult than I have expected. I will update once I can figure out how to implement it.
|
The problem that you describe is a multi-dimensional modeling problem. In particular, you are trying to get a stratified sample along multiple dimensions at the same time. The key to this is to go down to the smallest level of granularity and build up the sample from there.
I am further assuming that you want the sample to be representative at all levels. That is, you don't want all the users from "North" to be female. Or all the "males" to be from "West", even if that does meet the end criteria.
Start by thinking in terms of a total number of records, dimensions, and allocations along each dimension. For instance, for the first sample, think of it as:
* 1000 records
* 2 dimensions: gender, region
* gender split: 60%, 40%
* region split: 10%, 20%, 30%, 40%
Then, you want to allocate these numbers to each gender/region combination. The numbers are:
* North, Male: 60
* North, Female: 40
* South, Male: 120
* South, Female: 80
* East, Male: 180
* East, Female: 120
* West, Male: 240
* West, Female: 160
You'll see that these add up along the dimensions.
The calculation of the numbers in each cell is pretty easy. It is the product of the percentages times the total. So, "East, Female" is 30%\*40% \* 1000 . . . Voila! The value is 120.
Here is the solution:
1. Take the input along each dimension as *percentages* of the total. And be sure they add up to 100% along each dimension.
2. Create a table of the expected percentages for each of the cells. This is the product of the percentages along each dimension.
3. Multiply the expected percentages by the overall total.
4. The final query is outlined below.
Assume that you have a table `cells` with the expected count and the original data (`users`).
```
select enumerated.*
from (select u.*,
(@rn := if(@dims = concat_ws(':', dim1, dim2, dim3), @rn + 1,
if(@dims := concat_ws(':', dim1, dim2, dim3), 1, 1)
)
) as seqnum
      from users u cross join
           (select @dims := '', @rn := 0) vars
      order by dim1, dim2, dim3, rand()
     ) enumerated join
     cells
     on concat_ws(':', enumerated.dim1, enumerated.dim2, enumerated.dim3) = cells.dims
where enumerated.seqnum <= cells.expectedcount;
```
Note that this is a sketch of the solution. You have to fill in the details about the dimensions.
This will work as long as you have enough data for all the cells.
In practice, when doing this type of multi-dimensional stratified sampling, you do run the risk that cells will be empty or too small. When this happens, you can often fix this with an additional pass afterwards. Take what you can from the cells that are large enough. These typically account for the majority of the data needed. Then add records in to meet the final count. The records to be added in are those whose values match what is needed along the most needed dimensions. However, this solution simply assumes that there is enough data to satisfy your criteria.
|
MySQL query to get "intersection" of numerous queries with limits
|
[
"",
"mysql",
"sql",
"database",
"select",
"inner-join",
""
] |
I have two queries that return one result each, i.e. one number.
```
Select Count(*) as StockCountA from Table_A where dept='AAA'
```
Results
```
StockCountA
550
```
.
```
Select Count(*) as StockCountB from Table_B where dept='BBB'
```
Results
```
StockCountB
450
```
I wish to join the two results into one row record i.e
```
| StockCountA | StockCountB
| 550 | 450
```
|
You can use:
```
select
(Select Count(*) as StockCountA from Table_A where dept='AAA') as StockCountA,
(Select Count(*) as StockCountB from Table_B where dept='BBB') as StockCountB
```
Explanation: you can select single value as a field in a select statement, so you could write something like
```
select
x.*,
(select Value from Table_Y y) as ValueFromY
from
Table_X x
```
This will work only with [*scalar* queries](https://stackoverflow.com/a/44556182/511529), meaning that the sub-query should have exactly 1 column, and at most 1 row. With 0 rows ValueFromY will return `NULL` and with more than 1 row, the query will fail.
An additional feature of `select` (in SQL Server, MySQL and probably others) is that you can select just values without specifying a table at all, like this:
```
Select
3.14 as MoreOrLessPI
```
You can combine both those facts to combine the two counts into a single result, by writing a query that looks like:
```
Select
(Select query that returns at most 1 row) as Result1,
(Select another query that returns at most 1 row) as Result2
```
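The scalar-subquery pattern explained above is easy to verify with sqlite3; the two sample tables are assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table_a (dept TEXT);
CREATE TABLE table_b (dept TEXT);
INSERT INTO table_a VALUES ('AAA'), ('AAA'), ('XXX');
INSERT INTO table_b VALUES ('BBB');
""")

# Each parenthesised count is a scalar subquery: one column, at most one row.
row = conn.execute("""
SELECT (SELECT COUNT(*) FROM table_a WHERE dept = 'AAA') AS StockCountA,
       (SELECT COUNT(*) FROM table_b WHERE dept = 'BBB') AS StockCountB
""").fetchone()
```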
|
This should give you the desired result:
```
SELECT A.StockCountA, B.StockCountB FROM
(Select Count(*) as StockCountA from Table_A where dept='AAA') A,
(Select Count(*) as StockCountB from Table_B where dept='BBB') B;
```
|
How to combine two query's results into one row?
|
[
"",
"sql",
"sql-server",
""
] |
I have a list of database names.
```
**NAME**
OEE_LINE_ABC
OEE_LINE_ABCD
OEE_LINE_QWERTYUIOP
OEE_LINE_zxcvbd
```
I need to grab the part of `Name` that comes after `OEE_LINE_`, but the number of characters is not fixed. The examples I found on Google all use a fixed number of characters.
Example :
```
SUBSTR(Name,10,3)
Result: 'ABC'
```
If I use the SUBSTR above, it is able to grab the first value (`OEE_LINE_ABC`) without a problem, but when it comes to the second value there's a problem, because the SUBSTR above can only retrieve 3 characters.
Expected result:
a SQL statement able to grab the values that come after `OEE_LINE_`.
Result (return the name without the `OEE_LINE_` prefix):
```
Name = ABC
Name = ABCD
Name = QWERTYUIOP
Name = zxcvbd
```
Any related posts to share, or an answer?
|
I would suggest looking for the last `'_`' and taking the string from there:
```
select substr(name, length(name) - instr(reverse(name), '_') + 2)
```
|
```
SUBSTR(Name,10,length(Name)-9)
```
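The same take-everything-after-the-last-delimiter idea as the `reverse`/`instr` answer, sketched outside the database in plain Python (`rpartition` splits at the last occurrence):

```python
def after_last_underscore(name: str) -> str:
    # rpartition splits at the LAST '_'; with no '_', element 2 is the
    # whole original string, which matches the desired fallback.
    return name.rpartition('_')[2]

names = ["OEE_LINE_ABC", "OEE_LINE_ABCD", "OEE_LINE_QWERTYUIOP", "OEE_LINE_zxcvbd"]
suffixes = [after_last_underscore(n) for n in names]
```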
|
Oracle Substr SQL
|
[
"",
"sql",
"oracle",
""
] |
I want to make a new table after joining these two tables, if that is possible:
```
SELECT * FROM tablepeople
JOIN tableinfo
ON tablepeople.id = tableinfo.ctrlid
```
|
Yes, you can use the [`CREATE TABLE AS ... SELECT FROM`](http://dev.mysql.com/doc/refman/5.0/en/create-table-select.html) construct like below, provided that `new_table` doesn't already exist:
```
create table new_table as
SELECT * from tablepeople
join tableinfo
on tablepeople.id = tableinfo.ctrlid
```
**EDIT:**
Based on the latest comment, use a table alias to get around this
```
CREATE TABLE mytable3 AS
SELECT t1.*
FROM mytable1 t1
JOIN mytable2 t2 ON t1.ID=t2.ID
```
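The `CREATE TABLE ... AS SELECT` construct also works in sqlite3, so the answer can be sketched end to end; selecting an explicit column list sidesteps the duplicate-column-name issue the edit works around with a table alias. Sample rows are assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tablepeople (id INTEGER, name TEXT);
CREATE TABLE tableinfo (ctrlid INTEGER, info TEXT);
INSERT INTO tablepeople VALUES (1, 'Ann'), (2, 'Bob');
INSERT INTO tableinfo VALUES (1, 'alpha');

-- The new table is created directly from the join result.
CREATE TABLE new_table AS
SELECT p.id, p.name, i.info
FROM tablepeople p
JOIN tableinfo i ON p.id = i.ctrlid;
""")
rows = conn.execute("SELECT * FROM new_table").fetchall()
```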
|
You can use `CREATE TABLE new_table AS`:
```
Create table new_table as
SELECT * FROM customers
LEFT JOIN orders ON customers.idcustomers = orders.idorders
UNION
SELECT * FROM customers
RIGHT JOIN orders ON customers.idcustomers = orders.idorders;
```
|
Can I join two tables in MYSQL and create a new table
|
[
"",
"mysql",
"sql",
""
] |
I have the following scenario. I am getting output like this:
```
Month Product Name Amount
Jan-2014 A-Prodcut 50
Jan-2014 B-Product 45
Jan-2014 C-Product 55
Feb-2014 A-Prodcut 60
Feb-2014 B-Product 48
Feb-2014 C-Product 80
```
I want Output like this :
```
Jan-2014 C-Product 55
Feb-2014 C-Product 80
```
I want the top product from each month. How do I achieve this?
Here is my query
```
;With cte as
(
SELECT @EndDate AS TheMonth, 0 as [Counter]
UNION ALL
SELECT DATEADD(mm,-[Counter] -1,@EndDate), Counter + 1 AS TheMonth
from cte
WHERE DATEADD(mm,-[Counter] -1,@EndDate) >= @StartDate
)
SELECT
left(DATENAME(MM,TheMonth),3) +'-'+ cast(year(TheMonth) as varchar) AS [Month-Year],
ISNULL(IM.product_name,'N/A') as 'Product Name',
isnull(sum((ism.selling_price * siim.qty) + (((tm.tax_amount*(ism.selling_price * siim.qty))/100))),0) AS Amount
FROM
cte
LEFT OUTER JOIN RS_Sell_Order_Master AS SM on MONTH(invoice_date) = MONTH(TheMonth)
AND YEAR(invoice_date) = YEAR(TheMonth)
AND sm.is_approved = 1
LEFT OUTER JOIN RS_Sells_Invoice_Info_Master AS SIIM ON SM.sell_order_no = SIIM.sell_order_no
LEFT OUTER JOIN RS_Inventory_Master AS IM ON SIIM.product_id = IM.product_id
LEFT OUTER JOIN RS_Tax_Master AS TM ON TM.tax_id = SIIM.tax_id
LEFT OUTER JOIN RS_Inventory_Selling_Master AS ISM ON ISM.selling_product_id = SIIM.selling_product_id
GROUP BY
IM.product_name,
TheMonth
```
|
Use `ROW_NUMBER()` in the select query, i.e.:
```
SELECT MONTH, PRODUCT_NAME, AMOUNT
FROM (
    SELECT MONTH, PRODUCT_NAME, AMOUNT,
           ROW_NUMBER() OVER (PARTITION BY MONTH ORDER BY AMOUNT DESC) AS RN
    FROM ( /* your existing query */ ) src
) ranked
WHERE RN = 1
```
Hope this helps..
|
Try this:
```
SELECT *
FROM
(
SELECT [Month-Year], [Product Name], Amount, ROW_NUMBER() OVER (Partition BY [Month-Year] ORDER BY Amount DESC) as rn
FROM
(
SELECT
left(DATENAME(MM,TheMonth),3) +'-'+ cast(year(TheMonth) as varchar) AS [Month-Year],
ISNULL(IM.product_name,'N/A') as 'Product Name',
isnull(sum((ism.selling_price * siim.qty) + (((tm.tax_amount*(ism.selling_price * siim.qty))/100))),0) AS Amount
FROM
cte
LEFT OUTER JOIN RS_Sell_Order_Master AS SM on MONTH(invoice_date) = MONTH(TheMonth)
AND YEAR(invoice_date) = YEAR(TheMonth)
AND sm.is_approved = 1
LEFT OUTER JOIN RS_Sells_Invoice_Info_Master AS SIIM ON SM.sell_order_no = SIIM.sell_order_no
LEFT OUTER JOIN RS_Inventory_Master AS IM ON SIIM.product_id = IM.product_id
LEFT OUTER JOIN RS_Tax_Master AS TM ON TM.tax_id = SIIM.tax_id
LEFT OUTER JOIN RS_Inventory_Selling_Master AS ISM ON ISM.selling_product_id = SIIM.selling_product_id
GROUP BY
IM.product_name,
TheMonth
) a
)b
WHERE rn = 1
```
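The `ROW_NUMBER() OVER (PARTITION BY ...)` top-1-per-group pattern from both answers can be demonstrated with sqlite3 (window functions require SQLite 3.25+, bundled with recent Pythons); the sales rows are a trimmed-down version of the question's sample:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (month TEXT, product TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
    ('Jan-2014', 'A', 50), ('Jan-2014', 'B', 45), ('Jan-2014', 'C', 55),
    ('Feb-2014', 'A', 60), ('Feb-2014', 'C', 80),
])

# rn = 1 keeps only the highest-amount product within each month partition.
rows = conn.execute("""
SELECT month, product, amount FROM (
    SELECT month, product, amount,
           ROW_NUMBER() OVER (PARTITION BY month ORDER BY amount DESC) AS rn
    FROM sales
) WHERE rn = 1
ORDER BY month
""").fetchall()
```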
|
Top 1 Item from the List in SQL
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I have a table with multiple rows. I also have a multithreaded application which reads a row with status = 1 and then changes it to status = 2 after reading.
However, due to the multithreaded nature of the application, it keeps reading the same row twice (by different threads). I know this is a concurrency problem but I am unable to solve it.
As of now, I am reading the row as:
```
SELECT TOP 1 * FROM Inbox WHERE Status = 1 ORDER BY ID DESC;
```
Then, using the ID, I update the row:
```
UPDATE Inbox SET Status = 2 WHERE ID = X;
```
I would like a query to lock a row as it selects its ID and returns it so no other threads are able to read it.
|
Here is an illustration how can you achieve this :
create Table
```
create table Inbox ( id int primary key clustered, stts int )
insert into Inbox values
(1,1), (2,1), (3,1), (4,1)
```
now, open two tabs in your SMSS and write this in both :
```
begin tran
select top 1 * from Inbox with(readpast,updlock) where stts = 1 order by id desc
--rollback tran
```
Now run the first tab and check what it returns, then run the second and compare: each returns a different row. Treat the two tabs as two threads and you can see how this works. Then uncomment `rollback tran` and execute it. The conclusion is that you need a transaction boundary: select your data within the transaction with the lock hints `READPAST, UPDLOCK`, do your work with that data, and finally `commit the transaction`.
Note: this is how I implemented a multithreaded job processor in `C++`, so it might not map exactly to your case. If the query in the second tab hangs instead of returning, you need to create an index.
Check similar question [here](https://stackoverflow.com/questions/3636950/return-unlocked-rows-in-a-select-top-n-query) and a useful information [here](https://stackoverflow.com/questions/3683593/select-top-1-with-updlock-and-readpast-sets-exclusive-lock-on-the-entire-table).
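If lock hints are unavailable (or just to reason about the pattern), the same goal of handing each row to exactly one worker can be approximated with a conditional compare-and-swap `UPDATE`. The sketch below uses Python's `sqlite3` as an illustration only; SQLite has no `READPAST`/`UPDLOCK`, and the `claim_one` helper is an invented name, not part of any library:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Inbox (ID INTEGER PRIMARY KEY, Status INTEGER);
INSERT INTO Inbox VALUES (1, 1), (2, 1), (3, 1), (4, 1);
""")

def claim_one(con):
    """Claim the newest Status=1 row and return its ID, or None if empty."""
    while True:
        row = con.execute("SELECT ID FROM Inbox WHERE Status = 1 "
                          "ORDER BY ID DESC LIMIT 1").fetchone()
        if row is None:
            return None
        # The conditional UPDATE is the compare-and-swap: it only succeeds
        # if the row is still unclaimed when we reach it.
        cur = con.execute("UPDATE Inbox SET Status = 2 "
                          "WHERE ID = ? AND Status = 1", (row[0],))
        if cur.rowcount == 1:
            return row[0]
        # another worker claimed it first; retry with the next candidate

first = claim_one(con)
second = claim_one(con)
```

Two successive claims return different IDs, which is the property the lock hints buy you in SQL Server.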
|
One possible solution is to add a [`ROWVERSION`](http://msdn.microsoft.com/en-us/library/ms182776.aspx) column to your table. This creates a column that updates automatically whenever you run an `UPDATE` to a row. Using it in your queries means you can check if another process has already touched the same row. First add the column:
```
ALTER TABLE Inbox
ADD RowVersion ROWVERSION
```
Now you change your `UPDATE` query to take it in to account:
```
UPDATE Inbox SET Status = 2 WHERE ID = X AND RowVersion = @RowVersion
```
Check the number of rows updated and you will know whether you were the first to try.
```
SELECT @@ROWCOUNT
```
Alternatively, using the MSDN docs for `ROWVERSION` you can do something like this:
```
DECLARE @t TABLE (myKey int);
UPDATE Inbox
SET Status = 2
OUTPUT inserted.ID INTO @t(myKey)
WHERE ID = X
AND [RowVersion] = @RowVersion
IF (SELECT COUNT(*) FROM @t) = 0
BEGIN
RAISERROR ('Error changing row with ID = %d'
,16 -- Severity.
,1 -- State
,X) -- Key that was changed
END
```
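The `ROWVERSION` idea is classic optimistic concurrency: read a version, update only if the version is unchanged. A minimal sketch of the same mechanism with an explicit integer version column, using Python's `sqlite3` as a stand-in (the `try_update` helper and the `Version` column are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Inbox (ID INTEGER PRIMARY KEY, Status INTEGER, Version INTEGER);
INSERT INTO Inbox VALUES (7, 1, 1);
""")

def try_update(con, row_id, seen_version):
    # The UPDATE only succeeds if nobody changed the row since we read it;
    # bumping Version makes any concurrent stale update fail.
    cur = con.execute(
        "UPDATE Inbox SET Status = 2, Version = Version + 1 "
        "WHERE ID = ? AND Version = ?", (row_id, seen_version))
    return cur.rowcount == 1

first = try_update(con, 7, 1)   # wins: version is still 1
second = try_update(con, 7, 1)  # loses: version is now 2
```

SQL Server's `ROWVERSION` just automates the "bump on every update" part.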
|
Select just one row from a table and lock it
|
[
"",
"sql",
"sql-server",
"concurrency",
""
] |
Lets assume I have following table
```
Customer_ID Item_ID
1 A
1 B
2 A
2 B
3 A
```
What I need is to find customers that purchased the same set of items.
Output:
```
Customer_id Customer_id
id1 id2
```
(id1 < id2)
If there are more than two customers with same set of items, for example three, output must be next:
```
Customer_id Customer_id
id1 id2
id2 id3
id1 id3
```
Thanks
|
this has an enumID in the middle but is the same thing
```
with cte as ( SELECT [sID], [enumID], [valueID], count(*) over (partition by [sID], [enumID]) as ccount
FROM [docMVenum1]
WHERE [sID] < 10000 )
select [cte1].[sID], [cte2].[sID], [cte1].[enumID] -- , [cte1].[valueID], [cte1].[ccount]
from cte as [cte1]
join cte as [cte2]
on [cte1].[sID] < [cte2].[sID]
and [cte1].[enumID] = [cte2].[enumID]
and [cte1].[valueID] = [cte2].[valueID]
and [cte1].[ccount] = [cte2].[ccount]
group by [cte1].[sID], [cte2].[sID], [cte1].[enumID], [cte1].[ccount]
having count(*) = [cte1].[ccount]
order by [cte1].[sID], [cte2].[sID] --, [cte1].[enumID], [cte1].[valueID];
```
|
In your case :
```
create table CustomerAA (
Customer_Id int ,
ItemId varchar(10)
);
Insert Into CustomerAA(Customer_Id,ItemId) values(1,'A')
Insert Into CustomerAA(Customer_Id,ItemId) values(1,'B')
Insert Into CustomerAA(Customer_Id,ItemId) values(2,'A')
Insert Into CustomerAA(Customer_Id,ItemId) values(2,'B')
Insert Into CustomerAA(Customer_Id,ItemId) values(3,'A')
```
You can use `Left Join` as follows:
```
Select Distinct C1.Customer_Id , C2.Customer_Id
From CustomerAA As C1
Left Join CustomerAA As C2 on ( C1.ItemId = C2.ItemId )
Where
( C1.Customer_Id < C2.Customer_Id )
```
Or You can use `Inner Join` :
```
Select Distinct C1.Customer_Id , C2.Customer_Id
From CustomerAA As C1
Inner Join CustomerAA As C2 on ( C1.ItemId = C2.ItemId )
Where
( C1.Customer_Id < C2.Customer_Id )
```
Or
```
Select Distinct C1.Customer_Id , C2.Customer_Id
From CustomerAA As C1
Inner Join CustomerAA As C2
on (
( C1.ItemId = C2.ItemId )
And
( C1.Customer_Id < C2.Customer_Id )
)
```
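One caveat worth flagging: joining on a shared item pairs customers that share *at least one* item, not customers whose item sets are strictly identical. If identical sets are what you need, the logic is easy to sanity-check outside SQL. A minimal Python sketch using the question's sample data (customers 1 and 2 bought {A, B}, customer 3 bought {A}):

```python
from itertools import combinations
from collections import defaultdict

purchases = [(1, 'A'), (1, 'B'), (2, 'A'), (2, 'B'), (3, 'A')]

# Collect each customer's full item set, then pair customers whose sets
# are exactly equal; iterating sorted ids guarantees id1 < id2.
items = defaultdict(set)
for cust, item in purchases:
    items[cust].add(item)

pairs = [(a, b)
         for a, b in combinations(sorted(items), 2)
         if items[a] == items[b]]
```

Only (1, 2) qualifies here; customer 3's set {A} differs from {A, B}, so the shared-item joins above would pair 3 with 1 and 2 while this exact-set check does not.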
|
SQL Query for finding rows with same set of values
|
[
"",
"sql",
"sql-server",
"database",
"t-sql",
""
] |
```
INSERT INTO [dbo].[LikesRefined] (userA,userB)
SELECT l1.[user],l1.likes
FROM [dbo].[Like] l1
inner join [dbo].[Like] l2 on l2.[user] = l1.likes and l2.likes = l1.[user]
WHERE l1.[user] < l1.likes
```
I'm inserting values `userA`,`userB` if there is a match in `[dbo].[Like]` into `[dbo].[LikesRefined]`
How can I only insert records that don't already exist in `[dbo].[LikesRefined]` ?
|
The `MERGE` statement also works. Note that the `WHEN MATCHED` clause is optional, so you can simply skip rows that already exist instead of updating them:
```
SELECT l1.[user], l1.likes
INTO #recordsToInsert
FROM [dbo].[Like] l1
inner join [dbo].[Like] l2 on l2.[user] = l1.likes and l2.likes = l1.[user]
WHERE l1.[user] < l1.likes

MERGE LikesRefined AS T
USING #recordsToInsert AS S
ON (T.userA = S.[user] AND T.userB = S.likes)
WHEN NOT MATCHED BY TARGET
THEN INSERT(userA, userB) VALUES(S.[user], S.likes);

DROP TABLE #recordsToInsert
```
|
I believe you can just add the NOT EXISTS to your where clause and check the LikesRefined table to determine whether it exists or not first.
```
INSERT INTO [dbo].[LikesRefined] (userA,userB)
SELECT l1.[user],l1.likes
FROM [dbo].[Like] l1
inner join [dbo].[Like] l2 on l2.[user] = l1.likes and l2.likes = l1.[user]
WHERE l1.[user] < l1.likes
AND NOT EXISTS(SELECT 1
FROM dbo.LikesRefined
WHERE userA = l1.[user] AND UserB = l1.likes)
```
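The key property of the `NOT EXISTS` guard is idempotency: running the insert twice adds nothing the second time. An illustrative sketch with Python's `sqlite3` (the table is renamed `Like_` because `LIKE` is awkward as a table name in most dialects; sample rows invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Like_ ([user] TEXT, likes TEXT);
CREATE TABLE LikesRefined (userA TEXT, userB TEXT);
INSERT INTO Like_ VALUES ('a', 'b'), ('b', 'a'), ('c', 'd');
""")

insert_sql = """
INSERT INTO LikesRefined (userA, userB)
SELECT l1.[user], l1.likes
FROM Like_ l1
JOIN Like_ l2 ON l2.[user] = l1.likes AND l2.likes = l1.[user]
WHERE l1.[user] < l1.likes
  AND NOT EXISTS (SELECT 1 FROM LikesRefined r
                  WHERE r.userA = l1.[user] AND r.userB = l1.likes)
"""
con.execute(insert_sql)   # inserts the reciprocal pair ('a', 'b')
con.execute(insert_sql)   # second run is filtered out by NOT EXISTS
count = con.execute("SELECT COUNT(*) FROM LikesRefined").fetchone()[0]
```

The one-sided pair ('c', 'd') has no reciprocal row, so it is never inserted.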
|
Insert values if they don't already exist
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I need a SQL query from the below:
**Table**: user\_results
| id | cId | uId | tId | pass | date |
| --- | --- | --- | --- | --- | --- |
| 1 | 23 | 34 | 2 | Y | 7/8/2013 10:24:47 |
| 2 | 23 | 34 | 2 | N | 11/27/2014 10:36:32 |
| 3 | 23 | 34 | 3 | Y | 12/9/2013 10:24:47 |
| 4 | 23 | 34 | 3 | N | 11/27/2014 10:39:10 |
| 5 | 23 | 34 | 4 | Y | 10/26/2013 10:24:47 |
| 6 | 23 | 34 | 4 | N | 11/27/2014 10:38:08 |
| 7 | 59 | 93 | 2 | Y | 11/24/2013 9:34:23 |
| 8 | 69 | 82 | 2 | Y | 11/28/2014 9:04:22 |
| 9 | 69 | 82 | 2 | Y | 11/28/2014 8:59:52 |
| 10 | 69 | 82 | 4 | Y | 11/28/2014 8:59:52 |
| 11 | 69 | 82 | 4 | Y | 11/28/2014 9:10:40 |
| 12 | 69 | 82 | 4 | N | 11/28/2014 9:12:01 |
| 13 | 72 | 72 | 2 | N | 12/1/2014 6:46:02 |
| 14 | 73 | 69 | 2 | N | 12/1/2014 6:49:29 |
| 15 | 73 | 69 | 3 | N | 12/1/2014 6:51:31 |
| 16 | 73 | 69 | 3 | N | 12/1/2014 7:11:25 |
Below one is the expected results:
| id | cId | uId | tId | pass | date |
| --- | --- | --- | --- | --- | --- |
| 1 | 23 | 34 | 2 | Y | 7/8/2013 10:24:47 |
| 3 | 23 | 34 | 3 | Y | 12/9/2013 10:24:47 |
| 5 | 23 | 34 | 4 | Y | 10/26/2013 10:24:47 |
| 7 | 59 | 93 | 2 | Y | 11/24/2013 9:34:23 |
| 9 | 69 | 82 | 2 | Y | 11/28/2014 8:59:52 |
| 11 | 69 | 82 | 4 | Y | 11/28/2014 9:10:40 |
| 13 | 72 | 72 | 2 | N | 12/1/2014 6:46:02 |
| 14 | 73 | 69 | 2 | N | 12/1/2014 6:49:29 |
| 16 | 73 | 69 | 3 | N | 12/1/2014 7:11:25 |
**Note:** For each user and test, if the user ever passed (over any number of attempts), show the most recent passing entry; if the user only failed, show the most recent failing entry.
Here is my query:
```
SELECT *, COUNT(tId), MAX(date)
FROM user_results
WHERE DATE_ADD(date, INTERVAL 1 YEAR ) >= DATE_SUB( CURDATE(), INTERVAL 1 YEAR )
GROUP BY cId, tId
HAVING COUNT(tId) =1 OR (pass = 'Y' AND COUNT(tId) >=2)
```
|
Please check with the following query,
1) Add the failed case to the HAVING clause as well, so you get both pass and fail results.
2) Concatenate the pass and date columns with an '@' separator, giving values like: Y@7/8/2013 10:24:47
3) Sorting on this value gives you the most recent pass and the most recent fail.
```
SELECT *, COUNT(tId), MAX(date),
SUBSTRING_INDEX(MAX(CONCAT(pass, '@', date)), '@', -1) AS max_date,
SUBSTRING_INDEX(MAX(CONCAT(pass, '@', date)), '@', 1) AS pass_stat
FROM user_results
WHERE DATE_ADD(date, INTERVAL 1 YEAR ) >= DATE_SUB( CURDATE(), INTERVAL 1 YEAR )
GROUP BY cId, tId
HAVING COUNT(tId) =1 OR (pass = 'Y' AND COUNT(tId) >=2) OR (pass = 'N' AND COUNT(tId) >=2)
ORDER BY date DESC
```
|
The query
```
select cid, tid, max(date) as max_date
from user_results where pass = 'Y' group by cId, tId
```
gives you all users, that have passed the exam and the latest date. Because we need to group by cid, tid, we have to exlude the id. To get the right id to our "final" result we just make a join with the original user\_results\_table with:
```
select r.* from user_results r join
(select cid, tid, max(date) as max_date
from user_results where pass = 'Y' group by cId, tId) t on
(r.cId = t.cId and r.tId = t.tId and r.date = t.max_date)
```
Now you have all users that have passed the exam with the latest date and all the additional information like id and uId.
The next step is, to include all those users, that failed an exam and the latest date. Similar to the first query above, the following query
```
select r1.cId, r1.tId, max(r1.date) as max_date
from user_results r1
where r1.pass = 'N'
group by r1.cId, r1.tId
```
will give you all users that failed an exam and the latest date. But the problem with this query is, that it includes every user that has failed an exam, which means that users, that have passed the exam also gets included. Therefore we need to exlude those users who already have passed an exam by adding (for example) a not exists statement
```
select r1.cId, r1.tId, max(r1.date) as max_date
from user_results r1
where r1.pass = 'N' and not exists
(select * from user_results r2
where r2.pass = 'Y' and r2.cId = r1.cId and r2.tId = r1.tId)
group by r1.cId, r1.tId
```
Like above we then join this result with the user\_results table to get all the additional information:
```
select r.*
from user_results r join
(select r1.cId, r1.tId, max(r1.date) as max_date
from user_results r1
where r1.pass = 'N' and not exists
(select * from user_results r2
where r2.pass = 'Y' and r2.cId = r1.cId and r2.tId = r1.tId)
group by r1.cId, r1.tId) t on
(r.cId = t.cId and r.tId = t.tId and r.date = t.max_date)
```
And finally we "merge" these to result-queries (the query for those users who have passed the exam and the query for those who have failed) with the union-operator. So the final query is:
```
select r.*
from user_results r join
(select cid, tid, max(date) as max_date
from user_results where pass = 'Y' group by cId, tId) t on
(r.cId = t.cId and r.tId = t.tId and r.date = t.max_date)
union
select r.*
from user_results r join
(select r1.cId, r1.tId, max(r1.date) as max_date
from user_results r1
where r1.pass = 'N' and not exists
(select * from user_results r2
where r2.pass = 'Y' and r2.cId = r1.cId and r2.tId = r1.tId)
group by r1.cId, r1.tId) t on
(r.cId = t.cId and r.tId = t.tId and r.date = t.max_date)
```
This should work. I've tested it and couldn't notice any failures.
|
How can I get the maximum date of both passed and failed results using MySQL?
|
[
"",
"mysql",
"sql",
"select",
"group-by",
"greatest-n-per-group",
""
] |
I have a query:
```
Select distinct yearend, customerid from table1
```
This returns, with a row count of 3 (bottom right of SSMS):
```
Yearend CustomerID
2011-03-31 3
2013-10-31 5
2013-12-31 6
```
What I need to do is come up with a count of the above, just returning the value of 3.
I've tried several variations on this but cannot just return '3' as a counted total.
|
You just have to `COUNT` the results of your query (note that SQL Server requires an alias on the derived table):
```
SELECT COUNT(*)
FROM (
    SELECT DISTINCT yearend, customerid
    FROM table1
) AS t
```
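A quick way to convince yourself the `DISTINCT`-then-`COUNT(*)` shape is right, using Python's `sqlite3` as a scratchpad (a duplicate pair is added to the question's sample data so the `DISTINCT` actually has something to collapse):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table1 (yearend TEXT, customerid INTEGER);
INSERT INTO table1 VALUES
 ('2011-03-31', 3), ('2011-03-31', 3),
 ('2013-10-31', 5), ('2013-12-31', 6);
""")
# DISTINCT collapses the duplicate (yearend, customerid) pair,
# and the outer COUNT(*) counts the remaining rows.
n = con.execute("""
SELECT COUNT(*) FROM (
    SELECT DISTINCT yearend, customerid FROM table1
) t
""").fetchone()[0]
```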
|
```
SELECT count(DISTINCT CustomerID)
FROM table1
```
|
Data integrity check on SQL help please
|
[
"",
"sql",
"count",
"distinct",
""
] |
Following is the schema and data:
```
Create table tblTestSorting (col int);
INSERT INTO tblTestSorting VALUES(2), (5), (3), (6), (4), (7), (1)
```
I have a query like this:
```
SELECT col, 5-col AS 'extra' FROM tblTestSorting ORDER BY 'extra';
```
I want to get the result sorted by 'extra' column, but
This query does not give me result sorted by 'extra' column..
How to achieve this..??
|
Try this:
```
SELECT col, (5 - col) AS extra
FROM tblTestSorting
ORDER BY extra;
```
Check [**SQL FIDDLE DEMO**](http://www.sqlfiddle.com/#!2/cb3f48/1)
**OUTPUT**
```
| COL | EXTRA |
|-----|-------|
| 7 | -2 |
| 6 | -1 |
| 5 | 0 |
| 4 | 1 |
| 3 | 2 |
| 2 | 3 |
| 1 | 4 |
```
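The asker's bug is the quotes: `ORDER BY 'extra'` sorts by a constant string in MySQL, while the bare alias sorts by the computed value. This is easy to see with Python's `sqlite3` (which behaves the same way for a quoted literal), using the question's own data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tblTestSorting (col INTEGER)")
con.executemany("INSERT INTO tblTestSorting VALUES (?)",
                [(2,), (5,), (3,), (6,), (4,), (7,), (1,)])
# Ordering by the bare alias sorts by the computed value; quoting it as a
# string literal ('extra') would sort by a constant instead.
rows = con.execute(
    "SELECT col, 5 - col AS extra FROM tblTestSorting ORDER BY extra"
).fetchall()
```

The result matches the accepted answer's output table: col runs 7 down to 1 as extra rises from -2 to 4.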
|
Give the order-by direction (`ASC` or `DESC`) on the alias itself.
Try this:
```
SELECT col, 5-col AS extra FROM tblTestSorting ORDER BY extra ASC
```
|
How to sort MySQL ResultSet by Custom Field?
|
[
"",
"mysql",
"sql",
"select",
"sql-order-by",
""
] |
I am passing datatable as input parameter to a stored procedure. I have created custom type for it.
Here is my stored procedure:
```
CREATE PROCEDURE [dbo].[Proc_AddEmployee]
@tblEmp EmpType READONLY,
@Code int
AS
BEGIN
INSERT INTO Employee ([Name], [Lname], [Code])
SELECT
[Name], [Lname], @Code
    FROM @tblEmp
END
```
Here I fetching record from datatable and inserting into Employee table.
The datatable contains Name, Lname and mobileno.
I want to check the combination of Name, Lname and mobileno: if that combination is already present in the Employee table, don't insert the record ([Name], [Lname], @Code) into Employee; otherwise insert it.
|
Use `Correlated SubQuery` with `NOT EXISTS` to find the existence of a record
```
INSERT INTO Employee
([Name],[Lname],[Code])
SELECT [Name],[Lname],@Code
FROM @tblEmp A
WHERE NOT EXISTS (SELECT 1
FROM Employee B
WHERE B.[Name] = A.[Name]
AND B.[Lname] = A.[Lname]
AND B.[mobno] = A.[mobno])
```
|
try this:
```
CREATE PROCEDURE [dbo].[Proc_AddEmployee]
@tblEmp EmpType READONLY,
@Code int
AS
BEGIN
INSERT INTO Employee ([Name], [Lname], [Code])
SELECT
[Name], [Lname], @Code
FROM @tblEmp T
where NOT EXISTS (SELECT 1
FROM Employee E
where T.[Name] = E.[Name] AND T.[Lname] = E.[Lname] AND T.[mobileno] = E.[mobileno])
END
```
|
If not exist in sql
|
[
"",
"sql",
"asp.net",
""
] |
I am looking to add a common value.
My table has the following data:
```
Customer: $ Value:
123 100.00
123 100.00
abc 100.00
abc 100.00
```
I want it to display as:
```
Customer: $ Value:
123 200.00
abc 200.00
```
There are a number of other columns too that contain various different dates etc but they are not relevant here.
|
```
SELECT Customer, SUM(Value)
FROM myTable
GROUP BY Customer
```
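A quick check of the grouping on the question's sample data, using Python's `sqlite3` purely as a scratchpad (table name `myTable` as in the answer):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE myTable (Customer TEXT, Value REAL)")
con.executemany("INSERT INTO myTable VALUES (?, ?)",
                [('123', 100.0), ('123', 100.0),
                 ('abc', 100.0), ('abc', 100.0)])
# One output row per customer, with the Value column summed.
rows = con.execute(
    "SELECT Customer, SUM(Value) FROM myTable GROUP BY Customer"
).fetchall()
```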
|
Use `GROUP BY` Clause and the aggregate function `SUM()`.
For more info about `GROUP BY` refer [here](http://msdn.microsoft.com/en-us/library/ms177673.aspx)
```
SELECT Customer, SUM(Value) AS Value FROm table GROUP BY Customer
```
|
Sum Data in SQL by a column
|
[
"",
"sql",
""
] |
this is my table:

I want to fetch the records of vendors whose ServiceDescription contains "Plaster", or whose Skills contain "Plaster", or who are in the "Plaster" category, and I also want to calculate the average rating of those vendors.
Note: if a vendor has no reviews, their records should still be returned.
this is my query:
```
select * from UserDetails u
,VendorInCategory v
,CategoryMaster c
,Review rv
where v.CategoryId=c.Id
and u.Id=r.UserId
and u.Id=rv.VendorId
and v.VendorId=u.Id
and ((u.ServiceDescription like '%Plaster%' )
or (u.Skills like '%Plaster%')
or (c.Name like '%Plaster%'))
```
The problem with the above query is that I don't get vendors who have no reviews,
but I also want vendors that match my criteria even when they have no reviews.
**UserDetails:**
```
id Servicedescription Skills
1 Plaster plaster
2 construction construvtion
3 plaster plaster
4 null null(not vendor)
5 null null(not vendor)
```
**Review**
```
id CustomerId Vendorid rating
1 4 1 3
2 5 1 3
```
**Expected output:**
```
VendorId ServiceDescription Skills averagerating
1 plaster plaster 3
3 plaster plaster 0
```
Note:final output should in descending order of average rating
|
Here, try this:
# SAMPLE DATA
```
create table UserDetails(
Id int,
ServiceDescription varchar(20),
Skills varchar(20)
)
create table Review(
Id int,
CustomerId int,
VendorId int,
Rating int
)
insert into UserDetails values(1, 'Plaster', 'plaster'),(2, 'construction', 'construction'),(3, 'plaster', 'plaster');
insert into Review values(1, 4, 1, 3),(2, 5, 1, 3);
```
# SOLUTION
```
select
u.Id as VendorId,
u.ServiceDescription,
u.Skills,
    isnull(avg(r.rating), 0) as AverageRating
from UserDetails u
left join Review r
on r.VendorId = u.id
where
u.ServiceDescription like '%plaster%'
or u.Skills like '%plaster%'
group by
u.Id,
u.ServiceDescription,
u.Skills
order by AverageRating desc
```
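The two load-bearing pieces here are the `LEFT JOIN` (keeps vendors with no reviews) and the null fallback around the average. A sketch of the same query with Python's `sqlite3`, using the question's sample data (`COALESCE` plays the role of T-SQL's `ISNULL`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE UserDetails (Id INTEGER, ServiceDescription TEXT, Skills TEXT);
CREATE TABLE Review (Id INTEGER, CustomerId INTEGER, VendorId INTEGER,
                     Rating INTEGER);
INSERT INTO UserDetails VALUES
 (1, 'Plaster', 'plaster'), (2, 'construction', 'construction'),
 (3, 'plaster', 'plaster');
INSERT INTO Review VALUES (1, 4, 1, 3), (2, 5, 1, 3);
""")
# LEFT JOIN keeps vendors without reviews; AVG over no rows is NULL,
# which COALESCE turns into 0.
rows = con.execute("""
SELECT u.Id, COALESCE(AVG(r.Rating), 0) AS AverageRating
FROM UserDetails u
LEFT JOIN Review r ON r.VendorId = u.Id
WHERE u.ServiceDescription LIKE '%plaster%' OR u.Skills LIKE '%plaster%'
GROUP BY u.Id
ORDER BY AverageRating DESC
""").fetchall()
```

Vendor 1 averages 3, vendor 3 has no reviews and gets 0, matching the expected output in the question.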
|
Many users have used the **AVERAGE** function to calculate the average of a series of data. But what do you do when you have summary data instead of individual responses and need to calculate an average? (For example, counts of the number of people who selected each rating on a 5-point rating scale like the rating of a product.)
**How to Calculate a Weighted Average**
Let’s say you want to get the average overall rating for each product:
* For rating 1, nine (9) people.
* For rating 2, thirteen (13) people.
* For rating 3, one (1) person.
Using the AVERAGE function would result in an **average of 7.7**. Of course, this doesn’t make any sense. We should expect an average within the range of the scale (1 to 5).
In order to correctly calculate the average overall response to each question, we need to:
1. Multiply the number of individuals selecting each rating by the
corresponding rating value (1 – 5)
2. Add the results of those calculations together.
3. Divide that result by the total number of responses to the question.
**SAMPLE DATA:**
```
create table #tableRatings(
Id int,
Rating numeric(18,6)
)
insert into #tableRatings values(1, 4.3),(2,3.3),(3,4.8);
```
**SOLUTION:**
```
SELECT
SUM(
case
WHEN FLOOR(rating) = 1 THEN rating
WHEN FLOOR(rating) = 2 THEN rating *2
WHEN FLOOR(rating) = 3 THEN rating *3
WHEN FLOOR(rating) = 4 THEN rating *4
WHEN FLOOR(rating) = 5 THEN rating *5
end
) / SUM(rating)
FROM #tableRatings
```
**RESULT:**
```
3.733870
```
|
calculate average rating in sql server
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"select",
"average",
""
] |
I've produced a report of "Which and how many of Products been sold in each country". I'm using Northwind database, SQL Server 2012.
The following is the code:
```
SELECT
o.ShipCountry AS 'Country',od.ProductID,
p.ProductName, p.UnitPrice,
SUM(od.Quantity) AS 'Number of Units Sold'
FROM
Products p
INNER JOIN
[Order Details] od ON od.ProductID = p.ProductID
INNER JOIN
Orders o ON o.OrderID = od.OrderID
GROUP BY
p.ProductName, od.ProductID, p.UnitPrice, o.ShipCountry
ORDER BY
o.ShipCountry, 'Number of Units Sold' DESC
```
The result shows over 900 rows, each country has about 10 to 20 rows:

But I want to take it up a notch, and now I want to produce "Top 3 products sold per country"
So I tried `ROW_NUMBER() OVER (PARTITION BY` but I'm clumsy at using `Row_NUMBER()`
The below is my wrong code:
```
WITH CTE AS
(
SELECT
o.ShipCountry AS 'Country',od.ProductID,
p.ProductName, p.UnitPrice,
SUM(od.Quantity) AS 'Number of Units Sold',
ROW_NUMBER() OVER (PARTITION BY o.ShipCountry ORDER BY ('Number of Units Sold') DESC) AS 'Number of Units Sold'
FROM
Products p
INNER JOIN
[Order Details] od ON od.ProductID = p.ProductID
INNER JOIN
Orders o ON o.OrderID = od.OrderID)
SELECT
'Country', ProductID,
ProductName, UnitPrice, 'Number of Units Sold'
FROM
CTE
WHERE
'Number of Units Sold' < 4
GROUP BY
p.ProductName, od.ProductID, p.UnitPrice, o.ShipCountry
ORDER BY
o.ShipCountry DESC
```
|
Try this:
```
WITH CTE AS
(
SELECT
o.ShipCountry, od.ProductID,
p.ProductName, p.UnitPrice,
SUM(od.Quantity) AS UnitsSold,
RowNum = ROW_NUMBER() OVER (PARTITION BY o.ShipCountry ORDER BY SUM(od.Quantity) DESC)
FROM
Products p
INNER JOIN
[Order Details] od ON od.ProductID = p.ProductID
INNER JOIN
Orders o ON o.OrderID = od.OrderID
GROUP BY
p.ProductName, od.ProductID, p.UnitPrice, o.ShipCountry
)
SELECT *
FROM CTE
WHERE CTE.RowNum <= 3
```
Basically, in the CTE, you define the columns you want - word of caution: **don't use** column names with spaces and stuff like that! Makes for a nice presentation on screen, but really hard to use in a query!
Then you add the `ROW_NUMBER()` that will number each entry for each country starting at 1.
And finally, you select from the CTE, and you take only those rows with a `RowNum <= 3` ==> the TOP 3 for each country.
|
```
;with CTE as(
SELECT o.ShipCountry AS 'Country',
od.ProductID,
p.ProductName,
p.UnitPrice,
SUM(od.Quantity) AS 'Number of Units Sold'
FROM Products p
INNER JOIN [Order Details] od
ON od.ProductID=p.ProductID
INNER JOIN Orders o
ON o.OrderID=od.OrderID
GROUP BY p.ProductName, od.ProductID, p.UnitPrice, o.ShipCountry
)
,CTE2 as
( Select
CTE.Country,
CTE.ProductID,
CTE.ProductName,
CTE.UnitPrice,
CTE.[Number of Units Sold],
ROW_NUMBER() OVER (PARTITION BY CTE.Country
ORDER BY CTE.[Number of Units Sold] DESC) AS rownum
from CTE
)
select CTE2.Country,
CTE2.ProductID,
CTE2.ProductName,
CTE2.UnitPrice,
CTE2.[Number of Units Sold]
FROM CTE2
WHERE CTE2.rownum<4
ORDER BY CTE2.Country, CTE2.[Number of Units Sold] DESC
```
|
Top 3 rows per country
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2012",
"row-number",
""
] |
I have values in a column where the first 4 characters are letters and the next 4 characters are numeric.
Ex: AAAA1234
Now I have to increment the value from the right end, i.e. when the numeric part reaches 9999, I have to increment the character part by one.
Sample :
Consider the last value stored in a column is AAAA9999 then next incremented values should be in a sequence AAAB9999,....... AABZ9999,..... BZZZ9999..... ZZZZ9999(last value). And when it reaches ZZZZ9999 then I have to reset the value to AAAA0001.
How can do it in T-SQL ???
|
Here is a conceptual script, which does what you want. You will need to tweak it to suit your requirements
```
DECLARE @test table(TestValue char(8))
DECLARE @CharPart char(4),@NumPart int
SET @CharPart = 'AAAA'
SET @NumPart = 1
WHILE @NumPart <=9999
BEGIN
INSERT INTO @test
SELECT @CharPart+RIGHT(('0000'+CAST(@NumPart AS varchar(4))),4)
IF @NumPart = 9999
BEGIN
IF SUBSTRING(@CharPart,4,1)<>'Z'
BEGIN
SET @CharPart = LEFT(@CharPart,3)+CHAR(ASCII(SUBSTRING(@CharPart,4,1))+1)
SET @NumPart = 1
END
ELSE IF SUBSTRING(@CharPart,4,1)='Z' AND SUBSTRING(@CharPart,3,1) <>'Z'
BEGIN
SET @CharPart = LEFT(@CharPart,2)+CHAR(ASCII(SUBSTRING(@CharPart,3,1))+1)+RIGHT(@CharPart,1)
SET @NumPart = 1
END
ELSE IF SUBSTRING(@CharPart,3,1)='Z' AND SUBSTRING(@CharPart,2,1) <>'Z'
BEGIN
SET @CharPart = LEFT(@CharPart,1)+CHAR(ASCII(SUBSTRING(@CharPart,2,1))+1)+RIGHT(@CharPart,2)
SET @NumPart = 1
END
ELSE IF SUBSTRING(@CharPart,1,1)<>'Z'
BEGIN
SET @CharPart = CHAR(ASCII(SUBSTRING(@CharPart,1,1))+1)+RIGHT(@CharPart,3)
SET @NumPart = 1
END
ELSE IF SUBSTRING(@CharPart,1,1)='Z'
BEGIN
SET @CharPart = 'AAAA'
SET @NumPart = 1
INSERT INTO @test
SELECT @CharPart+RIGHT(('0000'+CAST(@NumPart AS varchar(4))),4)
BREAK
END
END
ELSE
BEGIN
SET @NumPart=@NumPart+1
END
END
SELECT * FROM @test
```
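The carry logic in the script above (roll the numeric part, then carry into the letters rightmost-first, then wrap `ZZZZ` back to `AAAA0001`) is just base-26 arithmetic on the letter part. A compact Python sketch of the same rule; `next_code` is a hypothetical helper, and the choice to reset the numeric part to `0001` on a letter carry follows the accepted script rather than the question's abbreviated `AAAB9999, ...` sample (which appears to list only block endpoints):

```python
def next_code(code):
    """Return the successor of an 'AAAA1234'-style code."""
    letters, number = code[:4], int(code[4:])
    if number < 9999:
        return letters + str(number + 1).zfill(4)
    if letters == 'ZZZZ':
        return 'AAAA0001'          # full wraparound
    # carry: treat the 4 letters as a base-26 number and add one
    value = 0
    for ch in letters:
        value = value * 26 + (ord(ch) - ord('A'))
    value += 1
    out = ''
    for _ in range(4):
        value, rem = divmod(value, 26)
        out = chr(ord('A') + rem) + out
    return out + '0001'            # numeric part resets after a carry
```

In T-SQL the equivalent carry is what the chained `SUBSTRING`/`ASCII`/`CHAR` branches in both answers implement by hand.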
|
With the help of PATINDEX,SUBSTRING,ASCII functions you can achieve your special cases.
(I have found the solution for your special cases). Likewise you can add your own addition feature.
```
create table #temp(col1 varchar(20))
insert into #temp values('AAAA9999')
insert into #temp values('AAAZ9999')
insert into #temp values('AAZZ9999')
insert into #temp values('AZZZ9999')
insert into #temp values('ZZZZ9999')
select * from #temp
select col1,
case when cast(substring(col1,patindex('%[0-9]%',col1),len(col1)) as int) = 9999 and left(col1,4) <> 'ZZZZ'
then
case
when substring(col1,(patindex('%[0-9]%',col1)-1),1) <> 'Z' then left(col1,3)+char(ASCII(substring(col1,(patindex('%[0-9]%',col1)-1),1)) + 1)+right(col1,4)
when substring(col1,(patindex('%[0-9]%',col1)-2),1) <> 'Z' then left(col1,2)+char(ASCII(substring(col1,(patindex('%[0-9]%',col1)-2),1)) + 1)+right(col1,5)
when substring(col1,(patindex('%[0-9]%',col1)-3),1) <> 'Z' then left(col1,1)+char(ASCII(substring(col1,(patindex('%[0-9]%',col1)-3),1)) + 1)+right(col1,6)
when substring(col1,(patindex('%[0-9]%',col1)-4),1) <> 'Z' then char(ASCII(substring(col1,(patindex('%[0-9]%',col1)-4),1)) + 1)+right(col1,7)
end
else 'AAAA0001'
end as outputofcol1
--patindex('%[0-9]%',col1)-1 as charpos,
--substring(col1,(patindex('%[0-9]%',col1)-1),1) as substr4,
--substring(col1,(patindex('%[0-9]%',col1)-2),1) as substr3,
--substring(col1,(patindex('%[0-9]%',col1)-3),1) as substr2,
--substring(col1,(patindex('%[0-9]%',col1)-4),1) as substr1
--ASCII(substring(col1,(patindex('%[0-9]%',col1)-1),1)) as ASC_value
from #temp
```

|
Incrementing Character value in T-sql
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I want to know how to write a query that selects specific (not common) columns from 2 different tables and combines them together.
I tried this, but it didn't work:
```
SELECT ii.sequence
FROM Costs ii
WHERE ii.order_ID IN (SELECT book.order_ID
FROM BookInfo ci
WHERE ii.order_ID = ci.order_ID)
UNION
SELECT ft.released_title
FROM FinishedBook ft
WHERE ft.version IN (SELECT ii.iiversion
FROM Costs ii
WHERE ii.iiorder_ID IN (SELECT ci.order_ID
FROM BookInfo ci
WHERE ii.iiorder_ID = ci.order_ID))
ORDER BY sequence;
```
|
Isn't this a case of joining these tables and calling `Distinct` to avoid duplicates?
Try this:
```
select Distinct a.Sequence, b.RELEASED_TITLE
from IncludedIn a inner join FinishedTrack b
on a.OriginatesFrom = b.IIOriginatesFrom
Inner join CdInfo c on a.IIALBUM_ID = c.ALBUM_ID
Order By a.Sequence
```
|
For MSSQL Server, Use `Join` to get the result.
```
SELECT I.Sequence, F.Released_Title FROM FinishedTrack AS F
INNER JOIN IncludedIn AS I ON I.ORIGINATESFROM = F.IIORIGINATESFROM
INNER JOIN CdInfo AS A ON A.ALBUM_ID = I.IIALBUM_ID
ORDER BY I.Sequence DESC
```
|
select columns from different tables with different data type columns
|
[
"",
"sql",
"select",
"join",
""
] |
I have some students with report marks over a number of reporting periods. I use the following SQL statement to get those report marks
```
SELECT DISTINCT ID, thePeriod, MeritRank
FROM
(
SELECT ID,
cast(fileyear as varchar) + cast(filesemester as varchar) as thePeriod,
MeritRank
FROM uNCStudentMeritList) usm
ORDER BY ID, thePeriod asc
```
This gives me the following data

I would love to have another column which has the difference between each row, partitioned by the ID number. For example

Note: the first value for each StudentId is left blank because it is the first report mark they received. After that, I want the difference between each report mark and the previous one; a worse mark should show as a negative figure, as shown. I don't have row ID numbers in the table, and I have seen similar questions answered using row IDs. How can I get the sort of results I am after?
Any help would be much appreciated.
|
Try this :
```
;WITH cte_getdata
AS (SELECT ID,
Cast(fileyear AS VARCHAR(10))
+ Cast(filesemester AS VARCHAR(10)) AS thePeriod,
MeritRank,
ROW_NUMBER()
OVER (
partition BY id
ORDER BY Cast(fileyear AS VARCHAR(10)) ASC, Cast(filesemester AS VARCHAR(10)) ASC) AS rn
FROM uNCStudentMeritList)
SELECT t1.*,
t1.MeritRank - t2.MeritRank
FROM cte_getdata t1
LEFT JOIN cte_getdata t2
ON t1.rn = t2.rn + 1
AND t1.id = t2.id
```
The `Row_Number()` function will assign unique number to each record in partitions of `id` and order of `fileyear,filesemester` then you can `left join` the resultset with itself and match the current row with its previous row having same `id`. Using `Left Join` will give you all the rows regardless of matching condition. Here is a same approach with two `CTE` :
```
;WITH cte1
AS (SELECT ID,
Cast(fileyear AS VARCHAR)
+ Cast(filesemester AS VARCHAR) AS thePeriod,
MeritRank
FROM uNCStudentMeritList),
cte2
AS (SELECT *,
ROW_NUMBER()
OVER (
partition BY ID
ORDER BY thePeriod) rn
FROM cte1)
SELECT *
FROM cte2 c1
LEFT JOIN cte2 c2
ON c1.id = c2.id
AND c1.rn = c2.rn + 1
```
|
Try this.
```
;WITH cte
AS (SELECT Row_number()OVER (partition BY id ORDER BY theperiod) rn,
*
FROM tablename)
SELECT a.id,
a.theperiod,
a.MeritRank,
a.MeritRank - b.MeritRank
FROM cte A
LEFT JOIN cte b
ON a.rn = b.rn + 1
```
Or if you are using `SQL SERVER 2012+` try this.
```
SELECT id,
theperiod,
MeritRank,
Lag(MeritRank)
OVER (
partition BY id
ORDER BY theperiod) - MeritRank
FROM tablename
```
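The `LAG()` form is the most direct one on SQL Server 2012+. It is easy to verify on a small sample with Python's `sqlite3` (SQLite ≥ 3.25 also supports `LAG`; the table name `marks` and the rows below are invented sample data, not the asker's `uNCStudentMeritList`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE marks (id INTEGER, period TEXT, merit INTEGER)")
con.executemany("INSERT INTO marks VALUES (?, ?, ?)",
                [(1, '20131', 50), (1, '20132', 55), (1, '20141', 52),
                 (2, '20131', 70), (2, '20132', 68)])
# LAG() reads the previous row within each student's partition, so the
# first period per student subtracts NULL and the difference stays NULL.
rows = con.execute("""
SELECT id, period, merit,
       merit - LAG(merit) OVER (PARTITION BY id ORDER BY period) AS diff
FROM marks
ORDER BY id, period
""").fetchall()
```

The NULL in the first row of each partition is exactly the blank the asker wants for a student's first report.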
|
Get difference between the value in one row and the next
|
[
"",
"sql",
"t-sql",
""
] |
I'm trying to get the MAX value of a set of rows, for every year that is displayed.
I can see what I have to do, I just can't seem to translate it into code.
```
select YEAR(O.Orderdate) [Year] , SUM(O.Freight) [Total Freight], S.CompanyName
from Orders O inner join Shippers S on S.ShipperID = O.ShipVia
group by S.CompanyName, YEAR(O.Orderdate)
```
1996 4233,78
1996 2297,42
1996 3748,67
1997 11413,35
1997 8681,38
1997 12374,04
1998 4865,38
1998 5206,53
1998 12122,14
The code returns the total freight for each company per year (although the company name is not displayed). I then need to get the highest amount of freight for every year.
I used the following code, but it returns the highest value across all years; although I know it's wrong, I cannot see the logic or a way to return the highest value for each year separately...
```
select YEAR(O.Orderdate) [Year] , SUM(O.Freight) [Total Freight]
from Orders O inner join Shippers S on S.ShipperID = O.ShipVia
group by S.CompanyName, YEAR(O.Orderdate)
having SUM(O.Freight) >= ALL (
select SUM(O.Freight)
from Orders O join Shippers S on S.ShipperID = O.ShipVia
group by YEAR(O.Orderdate), S.CompanyName
)
```
This returns:
1997 12374,04
While my goal would be:
1996 4233,78
1997 12374,04
1998 12122,14
I don't need a blatant solution, I'd really much rather have the logic in what I'm doing wrong/what I should be doing.
Thanks in advance !
|
Your query checks for sum-of-freight values greater than or equal to *all* such sums across every year, so only the overall maximum (from 1997) matches and only one row is returned.
If you want the maximum per year, I would go with @Gordon's answer, but as per your comment you want to do it using GROUP BY and HAVING.
here is another way by having your current query in subquery.
```
SELECT [Year], MAX(TotalFreight) as [Total Freight]
FROM (
    select YEAR(O.Orderdate) [Year], SUM(O.Freight) [TotalFreight]
    from Orders O inner join Shippers S on S.ShipperID = O.ShipVia
    group by S.CompanyName, YEAR(O.Orderdate)
) T
group by [Year]
```
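The outer step (group the per-company yearly totals by year and take `MAX`) can be checked directly against the numbers in the question, here with Python's `sqlite3` as a scratchpad and the already-aggregated totals loaded as rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE freight (year INTEGER, company TEXT, total REAL)")
con.executemany("INSERT INTO freight VALUES (?, ?, ?)",
                [(1996, 'A', 4233.78), (1996, 'B', 2297.42), (1996, 'C', 3748.67),
                 (1997, 'A', 11413.35), (1997, 'B', 8681.38), (1997, 'C', 12374.04),
                 (1998, 'A', 4865.38), (1998, 'B', 5206.53), (1998, 'C', 12122.14)])
# Grouping the already-aggregated rows by year and taking MAX keeps one
# row per year instead of one row over the whole table.
rows = con.execute(
    "SELECT year, MAX(total) FROM freight GROUP BY year ORDER BY year"
).fetchall()
```

This reproduces the asker's goal output: one top value per year.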
|
I would suggest using window functions:
```
select [year], [Total Freight]
from (select YEAR(O.Orderdate) as [Year] , SUM(O.Freight) as [Total Freight],
row_number() over (partition by year(o.orderdate) order by SUM(o.freight) desc) as seqnum
from Orders O inner join
Shippers S
on S.ShipperID = O.ShipVia
group by S.CompanyName, YEAR(O.Orderdate)
) yf
where seqnum = 1;
```
|
How to return the highest value for every different row parameter
|
[
"",
"sql",
""
] |
I have three tables:
ACTOR
```
id
nameofactor
```
MOVIE
```
id
nameOfmovie
```
CASTS
```
actorid
movieid
role
```
I want to show how the names of actors who had played more than one role in a movie
Here is what I have tried:
```
select
nameOfactor, nameOfmovie, CASTS.role
from
ACTOR, MOVIE, CASTS
where
ACTOR.id = CASTS.actorid
and CASTS.mid = MOVIE.movieid
group by
fname, lname, name, role
having
count(pid) >= 2;
```
But it doesn't work.
I guess the problem is that this way I have to put "role" in the GROUP BY, but as I need to show the roles, somehow I have to work around that.
I don't know how to fix this query. I would be happy if anyone can help.
Thanks in advance
|
You need to generate the list of actors with multiple roles, then use that to join back to your tables to get the roles.
So first get the list of actors with multiple roles:
```
SELECT actorid, movieid
FROM CASTS
GROUP BY actorid, movieid
HAVING COUNT(*) > 1
```
And now you can use that as a subquery and `JOIN` it back to your other tables:
```
select nameOfactor, nameOfmovie, CASTS.role
from ACTOR
JOIN CASTS
ON ACTOR.id=CASTS.actorid
JOIN MOVIE
ON CASTS.movieid=MOVIE.id
JOIN (SELECT actorid, movieid
FROM CASTS
GROUP BY actorid, movieid
      HAVING COUNT(*) > 1) MultiRoles
ON MultiRoles.actorid = CASTS.actorid
AND MultiRoles.movieid = CASTS.movieid
```
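To see the `HAVING COUNT(*) > 1` step in isolation, here is a small runnable sketch with Python's `sqlite3` and invented sample data:

```python
import sqlite3

# Tiny stand-in for the CASTS table from the question.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE casts (actorid INTEGER, movieid INTEGER, role TEXT)")
con.executemany("INSERT INTO casts VALUES (?, ?, ?)", [
    (1, 10, "Hero"), (1, 10, "Narrator"),  # actor 1: two roles in movie 10
    (2, 10, "Villain"),
    (1, 20, "Hero"),
])

# GROUP BY actor+movie, keep only pairs with more than one role.
multi = con.execute("""
    SELECT actorid, movieid
    FROM casts
    GROUP BY actorid, movieid
    HAVING COUNT(*) > 1
""").fetchall()
print(multi)  # [(1, 10)]
```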
|
```
select nameOfactor, nameOfmovie, count(role)
from ACTOR, MOVIE, CASTS
where ACTOR.id=CASTS.actorid and CASTS.movieid=MOVIE.id
group by nameOfactor, nameOfmovie
having count(role)>1 ;
```
|
SQL query confusion - actors, roles, movies
|
[
"",
"sql",
"sql-server",
""
] |
I'm trying to pull all bookings made today, however I'm coming across a syntax error for the final line.
```
SELECT
booking.BookingID, booking.CustID, booking.CostID, booking.DateOut,
customer.Fname, customer.Sname, costume.Description, costume.Size
FROM
booking
INNER JOIN
customer ON booking.CustID = customer.CustID
INNER JOIN
costume ON booking.CostID = costume.CostID
WHERE
booking.DateOut = Date();
```
Any help appreciated!
> Error Msg:
> #1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')
> LIMIT 0, 25' at line 7
|
```
SELECT
booking.BookingID, booking.CustID, booking.CostID, booking.DateOut,
customer.Fname, customer.Sname, costume.Description, costume.Size
FROM
booking
INNER JOIN
customer ON booking.CustID = customer.CustID
INNER JOIN
costume ON booking.CostID = costume.CostID
WHERE
booking.DateOut = CURDATE();
```
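A small runnable check of the idea (SQLite via Python's `sqlite3`, where `date('now')` plays the role of MySQL's `CURDATE()`; table and data invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE booking (id INTEGER, dateout TEXT)")
# One booking dated today, one dated in the past.
con.execute("INSERT INTO booking VALUES (1, date('now')), (2, '2000-01-01')")

# date('now') is SQLite's counterpart of CURDATE().
today = con.execute(
    "SELECT id FROM booking WHERE dateout = date('now')").fetchall()
print(today)  # only booking 1 matches
```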
|
```
SELECT booking.BookingID,
booking.CustID,
booking.CostID,
booking.DateOut,
customer.Fname,
customer.Sname,
costume.Description,
costume.Size
FROM booking
INNER JOIN customer
ON (booking.CustID = customer.CustID)
INNER JOIN costume
ON (booking.CostID = costume.CostID)
WHERE booking.DateOut = CURRENT_DATE;
```
|
SQL Syntax Error When matching records to today
|
[
"",
"mysql",
"sql",
""
] |
I have a string in SQL stored in `@result` variable of type `Varchar(MAX)` like this:
```
Attributes >> Items >> Category >> Package
```
How can I get the reverse of this without impacting the performance of the stored procedure? I want to break the string on the basis of `>>`.
```
Package >> Category >> Items >> Attributes
```
|
If you have at most four items in the string and none contain periods, then you can use `PARSENAME()`.
```
select (parsename(replace(@result, ' >> ', '.'), 1) + ' >> ' +
parsename(replace(@result, ' >> ', '.'), 2) + ' >> ' +
parsename(replace(@result, ' >> ', '.'), 3) + ' >> ' +
parsename(replace(@result, ' >> ', '.'), 4)
)
```
Another option is to split the string and reconstruct it.
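If doing the reversal in client code is an option, split-reverse-join is a one-liner, e.g. in Python:

```python
# Split on the delimiter, reverse the pieces, and join them back.
s = "Attributes >> Items >> Category >> Package"
reversed_s = " >> ".join(reversed(s.split(" >> ")))
print(reversed_s)  # Package >> Category >> Items >> Attributes
```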
|
Break the string into parts using the delimiter `>>`, then assign a row\_number to each row.
Then convert the rows back into a single string delimited by `>>`, ordered by row number descending.
This also works if you have more than 4 elements.
```
DECLARE @output VARCHAR(5000)='',
@String VARCHAR(5000)='Attributes >> Items >> Category >> Package'
SELECT @output += splt_data + ' >> '
FROM (SELECT Row_number()
OVER(
ORDER BY (SELECT NULL)) rn,
Rtrim(Ltrim(Split.bb.value('.', 'VARCHAR(100)'))) splt_data
FROM (SELECT Cast ('<M>' + Replace(@String, '>>', '</M><M>')
+ '</M>' AS XML) AS Data) AS A
CROSS APPLY Data.nodes ('/M') AS Split(bb)) ou
ORDER BY rn DESC
```
**OUTPUT :**
```
SELECT LEFT(@output, Len(@output) - 3) --Package >> Category >> Items >> Attributes
```
|
How to reverse a string in SQL Server
|
[
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
I feel like this should be simple enough to do, but have not found any solutions that didn't use replace so far. I have the following select statement I am running, and for some of the columns there are commas separating the values. I would like to replace these commas with semicolons, however I only want to do it in the select statement. I don't want it to alter the values in the tables at all. This is not a one off statement either, or I'd just replace all the commas with semicolons and then revert back.
```
SELECT a.Category_Id, a.Category_Name, ISNULL(b.Category_Alias, '') as Category_Alias,
ISNULL(b.SUPPORT_NAMES, '') as SUPPORT_NAMES
FROM Categories a
INNER JOIN CategoryInfo b on b.Category_Id=a.Category_Id
```
For the Category\_Alias column, the records are actually stored like `CS, Customer Support` and I want that to show up as `CS; Customer Support` just for the select statement.
|
I believe you may be confused as to what the REPLACE function is doing. You can use REPLACE within your SELECT statement without altering the data in the database:
```
SELECT REPLACE(MyField, ',', ';') AS NewFieldName
FROM MyTable
```
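A runnable demonstration that `REPLACE` only affects the result set, never the stored data (SQLite via Python's `sqlite3`; the table name is invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE categories (alias TEXT)")
con.execute("INSERT INTO categories VALUES ('CS, Customer Support')")

# REPLACE runs on the result set only; the stored row is untouched.
shown = con.execute(
    "SELECT REPLACE(alias, ',', ';') FROM categories").fetchone()[0]
stored = con.execute("SELECT alias FROM categories").fetchone()[0]
print(shown)   # CS; Customer Support
print(stored)  # CS, Customer Support
```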
|
I believe you don't want to replace the value physically in the table, but it is fine to [replace](http://msdn.microsoft.com/en-us/library/ms186862.aspx) on select.
So you can
```
Select REPLACE(ColumnName,',',';')
From TableName
```
|
SQL Replace comma in results without using replace
|
[
"",
"sql",
"sql-server",
""
] |
I am unable to subtract 30 days from the current date, and I am a newbie to SQL Server.
This is the data in my column
```
date
------------------------------
Fri, 14 Nov 2014 23:03:35 GMT
Mon, 03 Nov 2014 15:18:00 GMT
Tue, 11 Nov 2014 01:24:47 GMT
Thu, 06 Nov 2014 19:13:47 GMT
Tue, 04 Nov 2014 12:37:06 GMT
Fri, 1 Nov 2014 00:33:00 GMT
Sat, 5 Nov 2014 01:06:00 GMT
Sun, 16 Nov 2014 06:37:12 GMT
```
For creating the above column I used `varchar(50)`, and now my problem is that I want to display the dates for the past 15-20 days from the date column. Can anyone help with this issue?
Update: how can I display the last 7 days' dates in order?
|
You can convert it to `datetime`, and then use `DATEADD(DAY, -30, date)`.
See [here](https://msdn.microsoft.com/en-us/library/ms186819.aspx).
**edit**
I suspect many people are finding this question because they want to substract from current date (as is the title of the question, but not what OP intended). The comment of munyul below answers that question more specifically. Since comments are considered ethereal (may be deleted at any given point), I'll repeat it here:
```
DATEADD(DAY, -30, GETDATE())
```
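For a quick runnable check of the same arithmetic, SQLite's `date()` modifiers (via Python's `sqlite3`) stand in for `DATEADD(DAY, -30, ...)`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# '-30 days' is SQLite's counterpart of DATEADD(DAY, -30, ...) in T-SQL.
cutoff = con.execute("SELECT date('2014-11-16', '-30 days')").fetchone()[0]
print(cutoff)  # 2014-10-17
```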
|
Try this:
```
SELECT GETDATE(), 'Today'
UNION ALL
SELECT DATEADD(DAY, 10, GETDATE()), '10 Days Later'
UNION ALL
SELECT DATEADD(DAY, -10, GETDATE()), '10 Days Earlier'
UNION ALL
SELECT DATEADD(MONTH, 1, GETDATE()), 'Next Month'
UNION ALL
SELECT DATEADD(MONTH, -1, GETDATE()), 'Previous Month'
UNION ALL
SELECT DATEADD(YEAR, 1, GETDATE()), 'Next Year'
UNION ALL
SELECT DATEADD(YEAR, -1, GETDATE()), 'Previous Year'
```
Result Set:
```
----------------------- ---------------
2011-05-20 21:11:42.390 Today
2011-05-30 21:11:42.390 10 Days Later
2011-05-10 21:11:42.390 10 Days Earlier
2011-06-20 21:11:42.390 Next Month
2011-04-20 21:11:42.390 Previous Month
2012-05-20 21:11:42.390 Next Year
2010-05-20 21:11:42.390 Previous Year
```
|
How to subtract 30 days from the current date using SQL Server
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2012",
""
] |
Not sure how to solve this problem, which seems like it should be easy.
I need to identify all of the students who are not enrolled in a certain class that is required for graduation.
Using the example below, if `ClassNumber` C30 is required for graduation, I want the query to return the `studentIDs` 1 and 2 - neither of whom are registered in `ClassNumber` C30.
Am I missing something simple? I'm not sure how to get SQL to evaluate whether a string is present or not in `ClassNumber` within a set of records grouped by `studentID`.
```
ClassEnrollment Table
=====================
ID|ClassNumber|StudentID
1 |A10 |1
2 |A10 |1
3 |B20 |1
4 |A10 |2
5 |B20 |2
6 |B20 |2
7 |C30 |3
8 |A10 |3
9 |A10 |3
```
|
Using this one table, you can do it as an aggregation query:
```
select StudentId
from ClassEnrollment
group by StudentId
having sum(case when ClassNumber = 'C30' then 1 else 0 end) = 0;
```
If you have another table of students, I think a more natural way is to use `not exists`:
```
select StudentId
from Students s
where not exists (select 1
from ClassEnrollment ce
where ClassNumber = 'C30' and ce.StudentId = s.StudentId
);
```
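The conditional-aggregation version can be exercised directly against the sample data from the question; a runnable sketch with Python's `sqlite3` (simplified column names):

```python
import sqlite3

# The ClassEnrollment sample from the question.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE enroll (id INTEGER, class TEXT, student INTEGER)")
con.executemany("INSERT INTO enroll VALUES (?, ?, ?)", [
    (1, "A10", 1), (2, "A10", 1), (3, "B20", 1),
    (4, "A10", 2), (5, "B20", 2), (6, "B20", 2),
    (7, "C30", 3), (8, "A10", 3), (9, "A10", 3),
])

# Conditional aggregation: students whose C30 count is zero.
missing = con.execute("""
    SELECT student
    FROM enroll
    GROUP BY student
    HAVING SUM(CASE WHEN class = 'C30' THEN 1 ELSE 0 END) = 0
    ORDER BY student
""").fetchall()
print(missing)  # students 1 and 2, as the question expects
```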
|
This is not terribly efficient, but one way:
```
select distinct StudentID from ClassEnrollment
minus
select StudentID from ClassEnrollment
where ClassNumber = 'C30';
```
|
How to find a specific string within a group of records, for all groups of records
|
[
"",
"sql",
"oracle",
"group-by",
""
] |
I have two tables (`fruits` and `fruitSales`), where my query requirement is:
```
if fruit = 'Apple' OR 'Banana' then fire (query1) else fire(query2)
```
// i.e., when my input is Apple or Banana, query1 has to fire, otherwise query2.
Here are my two queries:
Query #1:
```
select a.*, b.quantity
from fruits a
left join fruitSales b on a.fruitPrice = '%'+b.fruitPrice+'%'
where a.fruit = 'Apple'
```
Query #2:
```
select a.*, b.quantity
from fruits a
left join fruitSales b on a.fruitPrice like '%' + b.fruitPrice + '%'
where a.fruit = 'Orange'
```
In short: the only difference in my query is "like" in query2 and "=" in query1. I don't know how to use CASE queries in this case (as my input data is dependent on two values, Apple or Banana).
Solution is greatly appreciated.
|
Use a `CASE` expression in the `JOIN` condition:
```
select a.*, b.quantity
from fruits a
left join fruitSales b on
CASE WHEN a.fruit IN ('Apple', 'Banana')
AND a.fruitPrice = '%'+b.fruitPrice+'%'
THEN 1
WHEN a.fruit NOT IN ('Apple', 'Banana')
AND a.fruitPrice like '%' + b.fruitPrice + '%'
THEN 1
ELSE 0
END = 1
where a.fruit = 'Apple'
```
|
Try this:
```
SELECT a.*, b.quantity
FROM fruits a
LEFT JOIN fruitSales b ON a.fruitPrice LIKE (CASE WHEN a.fruit IN ('Apple', 'Banana') THEN b.fruitPrice ELSE CONCAT('%', b.fruitPrice, '%') END)
WHERE a.fruit = 'Apple';
```
|
Case query based on two input values
|
[
"",
"sql",
"sql-server",
"select",
"case",
"sql-like",
""
] |
SQL(ite) newb here,
I have a sqlite3 DB with two tables in it, let's call them table\_a and table\_b. table\_a has 3.6 billion pairwise scores in the format:
```
mol_a,mol_b,score
int, int, real
int, int, real
...
```
I have added two blank columns to table\_a, year\_a and year\_b, which I want to populate with data from table\_b.
table\_b is composed of mol\_id's and years in the format
```
mol_id,year
int,int
int,int
...
```
where mol\_id corresponds to the possible ID's in table\_a's mol\_a and mol\_b fields.
I think I can do this with an insert statement but I can't quite get multiple table syntax right in the where clause. Any advice would be greatly appreciated.
This is what i'm trying:
```
INSERT INTO table_a.year_a
SELECT year
FROM table_b
WHERE table_b.mol_id = table_a.mol_a;
```
but it would appear my TABLE.COLUMN syntax is wrong?
I'm sure there is a simple answer but I can't seem to find it.
Thanks for any help
|
From what you describe, you want an `update`, not an `insert`:
```
update table_a
set year = (select b.year from table_b b where b.mol_id = table_a.mol_a);
```
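A runnable sketch of the correlated-subquery UPDATE against a tiny version of the two tables (Python's `sqlite3`; it fills both year\_a and year\_b in one statement):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table_a (mol_a INTEGER, mol_b INTEGER,"
            " year_a INTEGER, year_b INTEGER)")
con.execute("CREATE TABLE table_b (mol_id INTEGER, year INTEGER)")
con.execute("INSERT INTO table_a (mol_a, mol_b) VALUES (1, 2)")
con.executemany("INSERT INTO table_b VALUES (?, ?)", [(1, 1999), (2, 2005)])

# Correlated subqueries look up each year from table_b per row of table_a.
con.execute("""
    UPDATE table_a SET
      year_a = (SELECT year FROM table_b WHERE mol_id = table_a.mol_a),
      year_b = (SELECT year FROM table_b WHERE mol_id = table_a.mol_b)
""")
row = con.execute("SELECT year_a, year_b FROM table_a").fetchone()
print(row)  # (1999, 2005)
```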
|
Are you sure you don't want an update instead, i.e. to populate the new columns on the existing rows? In which case:
```
UPDATE table_a
SET year_a = (SELECT year
FROM table_b
WHERE table_b.mol_id = table_a.mol_a),
year_b = (SELECT year
FROM table_b
WHERE table_b.mol_id = table_a.mol_b),
```
|
reference multiple tables sqlite where clause
|
[
"",
"sql",
"sqlite",
""
] |
I am trying to find all of the Id's which are not matched between tables, my SQL is:
```
SELECT UserP.*, AccountP.*
FROM UserP
FULL JOIN AccountP ON (UserP.Id = AccountP.Id)
WHERE UserP.Id NOT IN AccountP.Id
```
|
You can use a sub query for this:
```
SELECT * FROM UserP WHERE Id NOT IN (select Id from AccountP);
```
|
You can use the `FULL OUTER JOIN` as you've done to find mismatches in BOTH sets. In a full outer join, the join key will be null on the side which is not matched, so the query needs to be changed to:
```
SELECT UserP.*, AccountP.*
FROM UserP FULL JOIN AccountP ON UserP.Id = AccountP.Id
WHERE UserP.Id IS NULL OR AccountP.Id IS NULL;
```
[SqlFiddle here](http://sqlfiddle.com/#!6/452a9/3)
The `WHERE Id NOT IN...` subqueries will only return data which is in the LHS but not in the RHS table, which appears to solve only half of what you are after - you would need to repeat the `NOT IN` for the RHS not in LHS, and then UNION the two results together, like so:
```
SELECT Id
FROM UserP
WHERE Id NOT IN (SELECT ID FROM AccountP)
UNION ALL
SELECT Id
FROM AccountP
WHERE Id NOT IN (SELECT ID FROM UserP);
```
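The UNION of the two `NOT IN` subqueries can be verified with a tiny runnable example (Python's `sqlite3`; lower-cased table names and invented ids):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE userp (id INTEGER)")
con.execute("CREATE TABLE accountp (id INTEGER)")
con.executemany("INSERT INTO userp VALUES (?)", [(1,), (2,), (3,)])
con.executemany("INSERT INTO accountp VALUES (?)", [(2,), (3,), (4,)])

# Ids present on one side but not the other, from both directions.
mismatched = con.execute("""
    SELECT id FROM userp WHERE id NOT IN (SELECT id FROM accountp)
    UNION ALL
    SELECT id FROM accountp WHERE id NOT IN (SELECT id FROM userp)
""").fetchall()
print(mismatched)  # id 1 (only in userp) and id 4 (only in accountp)
```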
|
How to use NOT IN SQL
|
[
"",
"sql",
""
] |
I have created the following script in order to read data from Mobile App DB (which is based on MongoDB) from Oracle SQL Developer:
```
DECLARE
l_param_list VARCHAR2(512);
l_http_request UTL_HTTP.req;
l_http_response UTL_HTTP.resp;
l_response_text VARCHAR2(32767);
BEGIN
-- service's input parameters
-- preparing Request...
l_http_request := UTL_HTTP.begin_request ('https://api.appery.io/rest/1/db/collections/Photos?where=%7B%22Oracle_Flag%22%3A%22Y%22%7D'
, 'GET'
, 'HTTP/1.1');
-- ...set header's attributes
UTL_HTTP.set_header(l_http_request, 'X-Appery-Database-Id', '53f2dac5e4b02cca64021dbe');
--UTL_HTTP.set_header(l_http_request, 'Content-Length', LENGTH(l_param_list));
-- ...set input parameters
-- UTL_HTTP.write_text(l_http_request, l_param_list);
-- get Response and obtain received value
l_http_response := UTL_HTTP.get_response(l_http_request);
UTL_HTTP.read_text(l_http_response, l_response_text);
DBMS_OUTPUT.put_line(l_response_text);
insert into appery values(l_response_text);
-- finalizing
UTL_HTTP.end_response(l_http_response);
EXCEPTION
WHEN UTL_HTTP.end_of_body
THEN UTL_HTTP.end_response(l_http_response);
END;
/
```
The response *(l\_response\_text)* is a JSON-like string. For example:
```
[{"Postcode":"47100","OutletCode":"128039251","MobileNumber":"0123071303","_createdAt":"2014-11-10 06:12:49.837","_updatedAt":"2014-11-10 06:12:49.837"}, {"Postcode":"32100","OutletCode":"118034251", ..... ]
```
The code works fine, and inserts the response into a one-column table called *appery*. However, I need to parse this response so that each value goes into its specific column in a table called *appery\_test*. The table *appery\_test* has as many columns as there are JSON pairs, in the same order.
I searched, and most of the results I found were about parsing an Oracle table into JSON, not the opposite. I did find [this](https://stackoverflow.com/questions/7985841/work-with-json-in-oracle) link, which is somewhat similar to my issue. However, the suggested library in the answer does not have any example of how to use it to insert JSON into a conventional table using PL/SQL.
**N.B.:** I'm using 11g and not 12c. So the [built in](https://dba.stackexchange.com/questions/7535/work-with-json-in-oracle) functions are not available for me.
|
I used the [PL/JSON library](https://sourceforge.net/p/pljson/wiki/Home/), specifically the JSON\_EXT package functions, to parse it.
The following script, inspired by an Oracle Community answer, worked for me:
```
DECLARE
l_param_list VARCHAR2(512);
l_http_request UTL_HTTP.req;
l_http_response UTL_HTTP.resp;
l_response_text VARCHAR2(32767);
l_list json_list;
A_id VARCHAR2(200);
UserId VARCHAR2(100);
UserName VARCHAR2(100);
OutletCode VARCHAR2(100);
OutletName VARCHAR2(100);
MobileNumber VARCHAR2(100);
PhoneNumber VARCHAR2(100);
Address VARCHAR2(100);
City VARCHAR2(100);
State VARCHAR2(100);
Postcode VARCHAR2(100);
Email VARCHAR2(100);
UpdateCount VARCHAR2(100);
loginCount VARCHAR2(100);
ReferencePhoto VARCHAR2(100);
Updates VARCHAR2(100);
AccountLocked VARCHAR2(100);
Oracle_Flag VARCHAR2(100);
acl VARCHAR2(100);
BEGIN
-- service's input parameters
-- preparing Request...
l_http_request := UTL_HTTP.begin_request('https://api.appery.io/rest/1/db/collections/Outlet_Details?where=%7B%22Oracle_Flag%22%3A%22Y%22%7D'
, 'GET'
, 'HTTP/1.1');
-- ...set header's attributes
UTL_HTTP.set_header(l_http_request, 'X-Appery-Database-Id', '53f2dac5e4b02cca64021dbe');
--UTL_HTTP.set_header(l_http_request, 'Content-Length', LENGTH(l_param_list));
-- ...set input parameters
-- UTL_HTTP.write_text(l_http_request, l_param_list);
-- get Response and obtain received value
l_http_response := UTL_HTTP.get_response(l_http_request);
UTL_HTTP.read_text(l_http_response, l_response_text);
DBMS_OUTPUT.put_line(l_response_text);
l_list := json_list(l_response_text);
FOR i IN 1..l_list.count
LOOP
A_id := json_ext.get_string(json(l_list.get(i)),'_id');
UserId := json_ext.get_string(json(l_list.get(i)),'UserId');
UserName := json_ext.get_string(json(l_list.get(i)),'UserName');
OutletCode := json_ext.get_string(json(l_list.get(i)),'OutletCode');
OutletName := json_ext.get_string(json(l_list.get(i)),'OutletName');
MobileNumber := json_ext.get_string(json(l_list.get(i)),'MobileNumber');
PhoneNumber := json_ext.get_string(json(l_list.get(i)),'PhoneNumber');
Address := json_ext.get_string(json(l_list.get(i)),'Address');
City := json_ext.get_string(json(l_list.get(i)),'City');
State := json_ext.get_string(json(l_list.get(i)),'State');
Postcode := json_ext.get_string(json(l_list.get(i)),'Postcode');
Email := json_ext.get_string(json(l_list.get(i)),'Email');
UpdateCount := json_ext.get_string(json(l_list.get(i)),'UpdateCount');
loginCount := json_ext.get_string(json(l_list.get(i)),'loginCount');
ReferencePhoto := json_ext.get_string(json(l_list.get(i)),'ReferencePhoto');
Updates := json_ext.get_string(json(l_list.get(i)),'Updates');
AccountLocked := json_ext.get_string(json(l_list.get(i)),'AccountLocked');
Oracle_Flag := json_ext.get_string(json(l_list.get(i)),'Oracle_Flag');
acl := json_ext.get_string(json(l_list.get(i)),'acl');
insert .....
```
Notice that *json\_ext.get\_string* returns only VARCHAR2, limited to a 32767 max. In order to use the same package with larger json\_list and json\_values (>32KB), check [here](https://stackoverflow.com/questions/27142161/store-big-json-files-into-oracle-db/27142469#27142469).
If you have APEX 5.0 and above, better option and much better performance via [APEX\_JSON](https://docs.oracle.com/cd/E59726_01/doc.50/e39149/apex_json.htm#AEAPI29635) package. See @[Olafur Tryggvason](https://stackoverflow.com/a/32287823/2153567)'s answer for details
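As an aside, if the JSON can be handled client-side instead of in PL/SQL, the parse-and-insert loop is only a few lines. A sketch in Python with a trimmed two-field payload (the table and column names here are illustrative):

```python
import json
import sqlite3

# Trimmed-down version of the response: a JSON array of objects.
response = ('[{"Postcode":"47100","OutletCode":"128039251"},'
            '{"Postcode":"32100","OutletCode":"118034251"}]')
records = json.loads(response)  # list of dicts, one per row

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE appery_test (postcode TEXT, outletcode TEXT)")
# Named placeholders map each JSON key onto its column.
con.executemany("INSERT INTO appery_test VALUES (:Postcode, :OutletCode)",
                records)
count = con.execute("SELECT COUNT(*) FROM appery_test").fetchone()[0]
print(count)  # 2
```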
|
Since this question scores high in results, I want to post this preferred alternative:
Oracle has released [APEX 5.0](http://www.oracle.com/technetwork/developer-tools/apex/downloads/index.html) (April 15. 2015). With it you get access to a great API to work with JSON
I'm using it on 11.2 and have been able to crunch every single json, from simple to very complex objects with multiple arrays and 4/5 levels. [APEX\_JSON](https://docs.oracle.com/cd/E59726_01/doc.50/e39149/apex_json.htm#AEAPI29635)
If you do not want to use APEX. Simply install the runtime environment to get access to the API.
Sample usage, with data from [json.org's example](http://json.org/example.html):
```
declare
sample_json varchar2 (32767)
:= '{
"glossary": {
"title": "example glossary",
"GlossDiv": {
"title": "S",
"GlossList": {
"GlossEntry": {
"ID": "SGML",
"SortAs": "SGML",
"GlossTerm": "Standard Generalized Markup Language",
"Acronym": "SGML",
"Abbrev": "ISO 8879:1986",
"GlossDef": {
"para": "A meta-markup language, used to create markup languages such as DocBook.",
"GlossSeeAlso": ["GML", "XML"]
},
"GlossSee": "markup"
}
}
}
}
}';
begin
apex_json.parse (sample_json);
dbms_output.put_line (apex_json.get_varchar2 ('glossary.GlossDiv.title'));
dbms_output.put_line (apex_json.get_varchar2 ('glossary.GlossDiv.GlossList.GlossEntry.GlossTerm'));
dbms_output.put_line (apex_json.get_varchar2 ('glossary.GlossDiv.GlossList.GlossEntry.GlossDef.GlossSeeAlso[%d]', 2));
end;
```
Result:
PL/SQL block executed
```
S
Standard Generalized Markup Language
XML
```
|
Parse JSON into Oracle table using PL/SQL
|
[
"",
"sql",
"json",
"plsql",
"oracle11g",
""
] |
I found some examples, but I have difficulties applying them to my case, and as I need the query very quickly, I am coming to you SQL masters :)
I need the below `SELECT` to return `Sessions.id` (the one that corresponds to Max(Sessions.DateFin) for each row)
```
SELECT Sessions.idFormation,
Inscriptions.idPersonnel,
Max(Sessions.DateFin) AS Dernier
FROM Sessions
INNER JOIN Inscriptions ON Sessions.id = Inscriptions.idSession
GROUP BY Sessions.idFormation, Inscriptions.idPersonnel
```
I believe I need to use something like (this one has nothing to do with my above query, just an example from some internet forum)
```
SELECT * FROM Table1 t1
JOIN
(
SELECT category, MAX(date) AS MAXDATE
FROM Table1
GROUP BY category
) t2
ON T1.category = t2.category
AND t1.date = t2.MAXDATE
```
but as t1 in my case is already a query I do not get how to implement this solution.
|
This is the query that works. I'm sure it can be simplified, but I'm not sure how.
```
SELECT T1.idFormation AS idFormation, T1.idPersonnel AS idPersonnel, T2.Dernier, T1.Validé AS Validé
FROM (SELECT Sessions.idFormation, Inscriptions.idPersonnel, Sessions.DateFin, Sessions.Validé
FROM Sessions
INNER JOIN Inscriptions ON Sessions.id = Inscriptions.idSession) T1
INNER JOIN
( SELECT Sessions.idFormation, Inscriptions.idPersonnel, Max(Sessions.DateFin) AS Dernier FROM Sessions INNER JOIN Inscriptions ON Sessions.id = Inscriptions.idSession GROUP BY Sessions.idFormation, Inscriptions.idPersonnel
) T2
ON T1.idFormation = T2.idFormation
AND T1.idPersonnel = T2.idPersonnel
AND T1.DateFin = T2.Dernier
```
|
Try something like this: make your query a subselect and give an alias name to that subselect, then join it with the other subselect.
```
SELECT * FROM (--your query--) t1
JOIN
(
SELECT category, MAX(date) AS MAXDATE
FROM Table1
GROUP BY category
) t2
ON T1.category = t2.category
AND t1.date = t2.MAXDATE
```
|
SQL Access GROUP BY et ID
|
[
"",
"sql",
"ms-access",
""
] |
Sorry for the vague problem statement; maybe I could not frame it the right way, hence I couldn't get much help from the internet.
My basic intent is to select two columns from a table, where the table has been partitioned based on dates, and has huge number of records.
The intent is to select records for each date from Jan 1 to Nov 30. I can't do it using a date BETWEEN statement; I need to input each date separately in each iteration of a for loop.
Can anyone help write a dummy SQL query for this?
Thanks,
|
You can use any scripting language to iterate over each day to achieve this.
Below is a sample (untested code):
```
import pyodbc
conn = pyodbc.connect(DSN='dsnname',autocommit=True,ansi=True)
cur = conn.cursor()
sql = "INSERT INTO tgt_table SELECT col1,col2 FROM src_table WHERE partitioned_date_column = date '2014-01-01' + INTERVAL '%d' DAY"
for i in range(1,30):
cur.execute(sql % i)
```
|
I don't know what your tables look like, but have you tried something like this:
"`WHERE {yourRow} > X GROUP BY date`"
[MySQL Query GROUP BY day / month / year](https://stackoverflow.com/questions/508791/mysql-query-group-by-day-month-year)
Hope this can help you :-)
```
SELECT col1, col2 FROM {table}
WHERE datecol BETWEEN '2000-01-01'
  AND DATE_ADD('2000-01-31', INTERVAL 1 DAY)
```
OR
```
SELECT col1, col2 FROM {table} GROUP BY DAY(datecol)
```
|
select records from a table for each different date in single iteration of SQL query
|
[
"",
"sql",
"for-loop",
"teradata",
""
] |
I'm trying to make a view. So far, I have written this:
```
with ExpAndCheapMedicine(MostMoney, MinMoney) as
(
select max(unitprice), min(unitprice)
from Medicine
)
,
findmostexpensive(nameOfExpensive) as
(
select tradename
from Medicine, ExpAndCheapMedicine
where UnitPrice = MostMoney
)
,
findCheapest(nameOfCheapest) as
(
select tradename
from Medicine, ExpAndCheapMedicine
where UnitPrice = MinMoney
)
CREATE VIEW showing
as
select tradename, unitprice, GenericFlag
from Medicine;
```
Unfortunately, I get an error on the line containing `CREATE VIEW showing`
> "CREATE VIEW must be the only statement in the batch"
How can I fix this?!
|
Just as the error says, the `CREATE VIEW` statement needs to be the only statement in the query batch.
You have two option in this scenario, depending on the functionality you want to achieve:
1. Place the `CREATE VIEW` query at the beginning
```
CREATE VIEW showing
as
select tradename, unitprice, GenericFlag
from Medicine;
with ExpAndCheapMedicine(MostMoney, MinMoney) as
(
select max(unitprice), min(unitprice)
from Medicine
)
,
findmostexpensive(nameOfExpensive) as
(
select tradename
from Medicine, ExpAndCheapMedicine
where UnitPrice = MostMoney
)
,
findCheapest(nameOfCheapest) as
(
select tradename
from Medicine, ExpAndCheapMedicine
where UnitPrice = MinMoney
)
```
2. Use `GO` after the CTE and before the `CREATE VIEW` query
-- Option #2
```
with ExpAndCheapMedicine(MostMoney, MinMoney) as
(
select max(unitprice), min(unitprice)
from Medicine
)
,
findmostexpensive(nameOfExpensive) as
(
select tradename
from Medicine, ExpAndCheapMedicine
where UnitPrice = MostMoney
)
,
findCheapest(nameOfCheapest) as
(
select tradename
from Medicine, ExpAndCheapMedicine
where UnitPrice = MinMoney
)
GO
CREATE VIEW showing
as
select tradename, unitprice, GenericFlag
from Medicine;
```
|
I came across this question when I was trying to create a couple of views within the same statement, what worked well for me is using dynamic SQL.
```
EXEC('CREATE VIEW V1 as SELECT * FROM [T1];');
EXEC('CREATE VIEW V2 as SELECT * FROM [T2];');
```
|
CREATE VIEW must be the only statement in the batch
|
[
"",
"sql",
"sql-server",
"ddl",
"create-view",
""
] |
The following is my query:
```
SELECT DISTINCT
SC.Feature,
CASE WHEN SA.Target > ANY(SELECT SA.Availability) THEN 0 ELSE 1 END AS IsAvailable,
SA.Date AS DateTime
FROM dbo.ServiceAvailability AS SA LEFT OUTER JOIN
dbo.ServiceCatalog AS SC ON SA.ServiceID = SC.ServiceID
```
My output ends up being of this form:
```
Feature IsAvailable DateTime
-------------------------------------------------
F1 0 2014-11-01 07:00:00.000
F1 1 2014-11-01 07:00:00.000
```
How do I change my query to `AND` the output such that I would instead get those two rows to combine into the below?
```
Feature IsAvailable DateTime
-------------------------------------------------
F1 0 2014-11-01 07:00:00.000
```
EDIT: If both the `IsAvailable` values were 1, of course they would both `AND` and combine to give a single row with `IsAvailable = 1`
|
I think this should work:
```
SELECT
SC.Feature,
CASE WHEN sum(
CASE WHEN SA.Target > ANY(SELECT SA.Availability)
THEN 1
ELSE 0
END) >= 1
THEN 0
ELSE 1
END AS IsAvailable,
SA.Date AS DateTime
FROM dbo.ServiceAvailability AS SA
LEFT JOIN dbo.ServiceCatalog AS SC ON SA.ServiceID = SC.ServiceID
GROUP BY SC.Feature,
SA.Date
```
I reversed the inner CASE statement so `0` is now `1` and vice versa. Then we can sum that grouped by your group columns, and if the sum is greater-or-equal to 1 then there was at least one `1` (which is actually a `0`!), and therefore the result of the *and* operation will be `0`; otherwise they were all `0` (actually `1`) and the result should be `1`.
**Edit:** In response to "Cannot perform an aggregate function on an expression containing an aggregate or a subquery" - I didn't realise that was a limitation. I've rearranged the query to perform the subqueries first and then group on that dataset. Hopefully this works. I think it's a little clearer what's happening now as well.
```
SELECT
Feature,
CASE WHEN sum(IsNotAvailable) >= 1
THEN 0
ELSE 1
END AS IsAvailable,
[DateTime]
FROM (
SELECT
SC.Feature,
CASE WHEN SA.Target > ANY(SELECT SA.Availability)
THEN 1
ELSE 0
END AS IsNotAvailable,
SA.Date AS DateTime
FROM dbo.ServiceAvailability AS SA
LEFT JOIN dbo.ServiceCatalog AS SC ON SA.ServiceID = SC.ServiceID
) x
GROUP BY x.Feature,
x.[DateTime]
```
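Since `IsAvailable` only ever takes the values 0 and 1, an equivalent shortcut worth knowing is `MIN()`: any 0 in the group forces the minimum, i.e. the AND, to 0. A runnable sketch with Python's `sqlite3` and invented data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE avail (feature TEXT, isavailable INTEGER)")
# F1 has a 0 row, so its AND is 0; F2 is all 1s, so its AND is 1.
con.executemany("INSERT INTO avail VALUES (?, ?)",
                [("F1", 0), ("F1", 1), ("F2", 1), ("F2", 1)])

# MIN over a 0/1 column behaves exactly like a logical AND per group.
anded = con.execute("""
    SELECT feature, MIN(isavailable)
    FROM avail
    GROUP BY feature
    ORDER BY feature
""").fetchall()
print(anded)  # [('F1', 0), ('F2', 1)]
```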
|
For **AND** between two operands, if either operand's value is **0**, the result will be **0**.
So I think this will solve your problem:
```
select distinct *
from (SELECT DISTINCT
          SC.Feature,
          CASE WHEN SA.Target > ANY(SELECT SA.Availability) THEN 0 ELSE 1 END AS IsAvailable,
          SA.Date AS DateTime
      FROM dbo.ServiceAvailability AS SA LEFT OUTER JOIN
          dbo.ServiceCatalog AS SC ON SA.ServiceID = SC.ServiceID) t
where IsAvailable=0
```
|
Evaluate an && condition only for rows with matching times
|
[
"",
"sql",
"sql-server",
"sql-server-2014",
""
] |
I have a SQL programming question based on the SUM function. Suppose I have the following table.
```
ID Values
1 20
1 30
1 100
2 10
2 1
2 12
3 45
3 66
```
How do I calculate the sum of the values pertaining to an ID and add it as a new column? I would also like to group it by ID. For example:
```
ID Values Total_Value
1 20 150
1 30 150
1 100 150
2 10 23
2 1 23
2 12 23
3 45 111
3 66 111
```
Any suggestions would be appreciated. Thanks !
|
this can easily be done using a window function:
```
select id,
value,
sum(value) over (partition by id) as total_value_for_id
from the_table
order by id;
```
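The same window function runs unchanged on SQLite (>= 3.25); here it is exercised against the question's sample data via Python's `sqlite3`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE the_table (id INTEGER, value INTEGER)")
con.executemany("INSERT INTO the_table VALUES (?, ?)", [
    (1, 20), (1, 30), (1, 100),
    (2, 10), (2, 1), (2, 12),
    (3, 45), (3, 66),
])

# SUM(value) OVER (PARTITION BY id) repeats each id's total on every row.
rows = con.execute("""
    SELECT id, value, SUM(value) OVER (PARTITION BY id) AS total_value
    FROM the_table
    ORDER BY id
""").fetchall()
print(rows)  # total_value is 150 for id 1, 23 for id 2, 111 for id 3
```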
|
Use analytic functions. That is what they are there for:
```
select id, value, sum(value) over (partition by id) as totalvalue
from table t
order by id;
```
|
How do I use the SUM Function in Oracle SQL to produce the below output?
|
[
"",
"sql",
"oracle",
""
] |
**Using Case IF statements in queries** (SQL 2008)
I have servers with 3 instances each. I am doing **one** query to run on all of them. Part of the query will filter the IDs for each server, i.e. 1,2,3 for Server1.
```
select COUNT(*) from Department Dep
where Dep.LocationID in
(1,2,3, -- Server1
4,5,6, -- Server2
7,8,9 --Server3
)
```
I can filter as above for all IDs and get the result. As the table is very large (2 million rows), it may be best to filter just what's in the table, i.e. 1,2,3 for Server1, although it does not make much difference to the time it takes to complete.
I could do this
```
if @@SERVERNAME ='Server1'
Begin
select COUNT(*) from Department Dep
where dep.LocationID in
(1,2,3)
End
if @@SERVERNAME ='Server2'
Begin
select COUNT(*) from Department Dep
where dep.LocationID in
(4,5,6)
End
if @@SERVERNAME ='Server3'
Begin
select COUNT(*) from Department Dep
where dep.LocationID in
(7,8,9)
End
```
So if I am running the query above on Server2, I would filter only (4,5,6).
I want to join this result to other queries, hence I am looking for a better way to do it.
Thanks
|
You can do this with three conditions in your WHERE clause:
```
select COUNT(*) from Employee emp
where
(@@SERVERNAME = 'Server1' AND emp.LocationID in (1,2,3))
OR
(@@SERVERNAME = 'Server2' AND emp.LocationID in (4,5,6))
OR
(@@SERVERNAME = 'Server3' AND emp.LocationID in (7,8,9))
```
|
The ideal way to do this would be to put the server name (or a related key) in a `locations` lookup table. Then you could do this quite simply:
```
select COUNT(*) from Employee emp
where emp.LocationID in
(select locationID from locations where server = @@SERVERNAME)
```
This would also allow you to add additional instances without changing your code.
|
Using Case IF statements in queries
|
[
"",
"sql",
"sql-server-2008",
"if-statement",
""
] |
I am trying to join two tables such that I am getting only a first match from the Right table instead of every match in Table2.
So if the query is:
```
SELECT T1.Name, T2.Dates
FROM Table1 T1
LEFT JOIN Table2 T2 ON T1.ID = T2.ID
WHERE T1.Name = 'John'
```
I would like to see
```
John | 14/11/14
Joe | 10/10/2014
Jane | 25/10/2014
```
Instead of
```
John | 14/11/2014
John | 12/10/2014
Joe | 10/10/2014
Jane | 25/10/2014
Jane | 26/10/2014
```
Which join should I use?
|
You need to decide which row you should select: min or max, as commented.
```
SELECT T1.Name,
       ( SELECT MIN( T2.Dates) FROM Table2 T2 WHERE T1.ID = T2.ID) AS Dates
FROM Table1 T1
WHERE T1.Name = 'John'
```
|
The ANSI standard function `row_number()` can be a big help here. It is supported by most databases, so you can do:
```
SELECT T1.Name, T2.Dates
FROM Table1 T1 LEFT JOIN
(SELECT t2.*, ROW_NUMBER() OVER (PARTITION BY t2.ID ORDER BY t2.DATE DESC) as seqnum
FROM Table2 t2
) T2
ON T1.ID = T2.ID AND seqnum = 1
WHERE T1.Name = 'John';
```
In your question, you have only one column from the second table, so you can also do this with aggregation:
```
SELECT t1.ID, t1.Name, MAX(t2.Date)
FROM Table1 T1 LEFT JOIN
Table2 T2
ON t1.ID = t2.ID
WHERE T1.Name = 'John'
GROUP BY t1.ID, t1.Name;
```
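Both variants can be tried out with Python's `sqlite3` as a stand-in (SQLite supports window functions from version 3.25). The sample data below is made up to mirror the question; here is the `ROW_NUMBER()` version:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Table1 (ID INTEGER, Name TEXT);
    CREATE TABLE Table2 (ID INTEGER, Dates TEXT);
    INSERT INTO Table1 VALUES (1, 'John'), (2, 'Joe');
    INSERT INTO Table2 VALUES
        (1, '2014-11-14'), (1, '2014-10-12'), (2, '2014-10-10');
""")

# Keep only the newest row per ID (seqnum = 1) from the right table.
rows = conn.execute("""
    SELECT T1.Name, T2.Dates
    FROM Table1 T1 LEFT JOIN
         (SELECT t2.*,
                 ROW_NUMBER() OVER (PARTITION BY t2.ID
                                    ORDER BY t2.Dates DESC) AS seqnum
          FROM Table2 t2) T2
      ON T1.ID = T2.ID AND seqnum = 1
    ORDER BY T1.ID
""").fetchall()
print(rows)  # [('John', '2014-11-14'), ('Joe', '2014-10-10')]
```

Each person now appears once, paired with their most recent date only.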
|
Distinct SQL Join two tables
|
[
"",
"sql",
""
] |
I cannot get an alternating pattern of 1 and -1 with my database.
This explains what I am trying to do.
```
ID Purpose Date Val
1 Derp 4/1/1969 1
1 Derp 4/1/1969 -1
2 Derp 4/2/2011 1
2 Derp 4/2/2011 -1
```
From a database that is something like
```
ID Purpose Date
1 Derp 4/1/1969
1 Herp 4/1/1911
2 Woot 4/2/1311
2 Wall 4/2/211
```
Here is my attempt:
```
SELECT
ID
,Purpose
,Date
,Val as 1
FROM (
SELECT FIRST(Purpose)
FROM DerpTable WHERE Purpose LIKE '%DERP%'
GROUP BY ID, DATE) as HerpTable, DerpTable
WHERE HerpTable.ID = DerpTable.ID AND DerpTable.ID = HerpTable.ID
```
This query does not work for me because SSMS does not recognize 'FIRST' or 'FIRST\_VALUE' as built-in functions. Thus, I have no way of numbering the first occurrence of derp and giving it a value.
Problems:
* I am using SQL Server 2012 and thus cannot use FIRST.
* I tried using last\_value and first\_value as seen [here](http://www.sqlservercurry.com/2012/07/sql-server-2012-first-value-last-value.html) but get errors indicating that function is not found
* A bunch of sql queries. I've been staring at the MSDN T-SQL help pages
* This is me right [now](http://cdn.memegenerator.net/instances/400x/22696597.jpg).
What I need is a fresh perspective and assistance. Am I making this too hard?
|
Use a subquery along with [`ROW_NUMBER`](http://msdn.microsoft.com/en-us/library/ms186734.aspx) and the modulo operator:
```
select
ID,
Purpose,
Date,
    case when rownum % 2 = 1 then 1 else -1 end as Val
from (
SELECT
ID
,Purpose
,Date
            ,ROW_NUMBER() over (order by ID) as rownum
FROM (
SELECT
ID,
Purpose,
Date
FROM DerpTable WHERE Purpose LIKE '%DERP%'
GROUP BY ID, DATE) as HerpTable, DerpTable
WHERE HerpTable.ID = DerpTable.ID AND DerpTable.ID = HerpTable.ID
) [t1]
```
`ROW_NUMBER` will assign a value to each row, in this case it's an incrementing value. Using the modulus with 2 allows us to check if it's even or odd and assign `1` or `-1`.
Note: I don't know if this query will run since I don't know the architecture of your database, but the idea should get you there.
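To make the modulo idea concrete, here is a minimal, self-contained sketch using Python's `sqlite3` with invented sample data (a `Date` tie-breaker is included in the ordering so the output is deterministic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (ID INTEGER, Purpose TEXT, Date TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)",
                 [(1, 'Derp', '1969-04-01'), (1, 'Derp', '1969-04-02'),
                  (2, 'Derp', '2011-04-02'), (2, 'Derp', '2011-04-03')])

# Odd row numbers get 1, even row numbers get -1.
rows = conn.execute("""
    SELECT ID, Date,
           CASE WHEN ROW_NUMBER() OVER (ORDER BY ID, Date) % 2 = 1
                THEN 1 ELSE -1 END AS Val
    FROM t
    ORDER BY ID, Date
""").fetchall()
print(rows)
# [(1, '1969-04-01', 1), (1, '1969-04-02', -1),
#  (2, '2011-04-02', 1), (2, '2011-04-03', -1)]
```

The key point is that `ROW_NUMBER()` hands out 1, 2, 3, … so `% 2` alternates no matter what the data looks like.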
|
You can use `first_value()` in SQL Server 2012. I'm not sure what the `WHERE` condition is in your query, but the following should return your desired results:
```
SELECT ID,
FIRST_VALUE(Purpose) OVER (PARTITION BY ID ORDER BY DATE) as Purpose,
DATE,
2 * ROW_NUMBER() OVER (PARTITION BY ID ORDER BY DATE) - 1
FROM DERPTABLE
```
|
How to write an SQL query to have alternating pattern between rows (like -1 and 1)?
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2012",
""
] |
I am facing an issue finding the exact date difference in months.
My requirement is like
```
difference in months between two dates in 2015-01-25 00:00:00.000 and 2015-04-25 00:00:00.000 should be 3
difference in months between two dates 2015-01-25 00:00:00.000 and 2015-04-26 00:00:00.000 should be > 3
SELECT DATEDIFF(MONTH, '2015-01-25 00:00:00.000', '2015-04-28 00:00:00.000')
```
If you use the DATEDIFF function, both of the above examples will return the same value, 3.
i.e. SELECT DATEDIFF(**MONTH**, '2015-01-**25** 00:00:00.000', '2015-04-**26** 00:00:00.000') is 3. **But I need a result which is greater than 3.**
How can I implement the above requirement using a single select query?
|
You need to calculate the months and then you need to advance the start date by the number of months and calculate the days, like this:
[SQL Fiddle](http://sqlfiddle.com/#!3/d41d8/41339)
**MS SQL Server 2008 Schema Setup**:
**Query 1**:
```
declare @f datetime, @t datetime
select @f='2015-01-25 00:00:00.000', @t='2015-04-28 00:00:00.000'
SELECT DATEDIFF(MONTH, @f, @t) as m,
datediff(d, dateadd(month, DATEDIFF(MONTH, @f, @t), @f), @t) as d,
DATEDIFF(MONTH, @f, @t) + convert(float, datediff(d, dateadd(month, DATEDIFF(MONTH, @f, @t), @f), @t)) / 30.0 as md
```
**[Results](http://sqlfiddle.com/#!3/d41d8/41339/0)**:
```
| M | D | MD |
|---|---|-----|
| 3 | 3 | 3.1 |
```
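The same months-then-leftover-days logic can be sketched in plain Python. This is a simplified illustration: like `DATEDIFF(MONTH, ...)` it counts month-boundary crossings, and it does not handle end-of-month clamping (e.g. Jan 31 plus one month):

```python
from datetime import datetime

def month_day_diff(start: datetime, end: datetime):
    # Whole months, counted as month-boundary crossings (DATEDIFF-style).
    months = (end.year - start.year) * 12 + (end.month - start.month)
    # Advance start by that many months (assumes the day exists in the
    # target month), then count the leftover days.
    y, m = divmod(start.month - 1 + months, 12)
    advanced = start.replace(year=start.year + y, month=m + 1)
    days = (end - advanced).days
    return months, days

print(month_day_diff(datetime(2015, 1, 25), datetime(2015, 4, 25)))  # (3, 0)
print(month_day_diff(datetime(2015, 1, 25), datetime(2015, 4, 28)))  # (3, 3)
```

So 2015-01-25 to 2015-04-28 comes out as 3 months plus 3 days, i.e. "greater than 3", which is what the question asks for.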
|
```
declare @s datetime, @e datetime
select @s='2015-01-25 00:00:00.000', @e='2015-04-28 00:00:00.000'
SELECT ceiling(cast(cast(DATEDIFF(MONTH, @s,@e) as varchar)+'.'+cast(-(DATEPART(dd,@s)-DATEPART(dd, @e)) as varchar) as float)) as Month
```
Result
```
Month
4
```
|
Issue with DateDiff function in SQL Server
|
[
"",
"sql",
".net",
"sql-server",
"database",
""
] |
I've triple-checked my syntax but I am completely stumped on this. All I'm doing is a simple SELECT ... WHERE MySQL query, but it's returning a #1064 error. It has something to do with the second value check, because when I remove it, it works fine. Here's my query:
```
SELECT * FROM flo_chart WHERE im_key='357803040539808' AND key='b8cb8ebc11dbb641e2290c7ff954d6f8'
```
What is wrong with my key string that makes this query fail?
Cheers!
|
I used a reserved word for a field name, so the SQL query failed.
|
```
SELECT *
FROM some_table
WHERE im_key='357803040539808'
AND ym_key='b8cb8ebc11dbb641e2290c7ff954d6f8';
```
EDIT
After you posted up your **actual** query
```
SELECT * FROM flo_chart WHERE im_key='357803040539808' AND key='b8cb8ebc11dbb641e2290c7ff954d6f8'
```
you have used a reserved word `key`
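A small demonstration of the reserved-word problem, using Python's `sqlite3` (where `order` is reserved, just as `key` is in MySQL; MySQL quotes identifiers with backticks, while standard SQL uses double quotes, which SQLite also accepts):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# An unquoted reserved word is a syntax error.
try:
    conn.execute("CREATE TABLE t (id INTEGER, order INTEGER)")
except sqlite3.OperationalError:
    print("syntax error, as expected")

# Quoting the identifier fixes it (MySQL equivalent: `order`).
conn.execute('CREATE TABLE t (id INTEGER, "order" INTEGER)')
conn.execute("INSERT INTO t VALUES (1, 2)")
print(conn.execute('SELECT "order" FROM t WHERE id = 1').fetchone()[0])  # 2
```

Renaming the column, as the asker eventually did, is usually the cleaner long-term fix than quoting it everywhere.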
|
simple sql query returns #1064 - You have an error in your SQL syntax
|
[
"",
"mysql",
"sql",
""
] |
# Problem
I have 3 tables: customers, companies and calls.
A customer has many Companies, and a Company has many calls. (both one to many).
The current status of a Company is the result of the last call for the company (MAX(created\_at)).
Now I want a list of all companies for a customer with the columns of the last call in the results.
# Needed Result
The result should be:
company.\*, lastcall.\*,
* There could be calls with the same created\_at date; then there should be only 1 row in the result.
* Not all companies have a call yet; the company should still be in the result and the columns of the call should be NULL (left join).
# Tables
**customers**
- id (int, primary key)
- name (varchar)
- address (varchar)
- city (varchar)
**companies**
- id (int, primary key)
- customer\_id (int)
- name (varchar)
- address (varchar)
- city (varchar)
**calls**
- id (int, primary key)
- company\_id (int)
- result (varchar)
- created\_at (datetime)
# Attempt
A query which didn't work I came up with is:
```
SELECT * FROM companies co
LEFT JOIN calls ca ON co.id = ca.company_id
WHERE co.customer_id = ?
GROUP BY co.id
HAVING ca.created_at = (SELECT max(ll.created_at) FROM calls ll WHERE ll.company_id = co.id)
```
|
It looks like I found the answer.
This one gives the correct result, and is still fast enough (0.27 seconds):
```
SELECT co.*, v.*
FROM companies co
LEFT JOIN
(
SELECT
ca.*
FROM calls ca
JOIN
(
SELECT
company_id,
MAX(created_at) AS max_created_at
FROM calls
GROUP BY company_id
) t
ON ca.company_id = t.company_id AND ca.created_at = t.max_created_at
GROUP BY company_id
) v ON co.id = v.company_id
```
Thanks everybody!
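The accepted pattern (join each company to the call matching its `MAX(created_at)`, with a left join so call-less companies survive) can be verified with Python's `sqlite3`; the sample data below is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE companies (id INTEGER, customer_id INTEGER, name TEXT);
    CREATE TABLE calls (id INTEGER, company_id INTEGER, result TEXT,
                        created_at TEXT);
    INSERT INTO companies VALUES (1, 10, 'Acme'), (2, 10, 'Globex');
    INSERT INTO calls VALUES
        (1, 1, 'no answer', '2015-01-01 09:00:00'),
        (2, 1, 'sold',      '2015-02-01 09:00:00');
    -- Globex (id 2) has no calls yet.
""")

rows = conn.execute("""
    SELECT co.name, v.result, v.created_at
    FROM companies co
    LEFT JOIN (
        SELECT ca.*
        FROM calls ca
        JOIN (SELECT company_id, MAX(created_at) AS max_created_at
              FROM calls GROUP BY company_id) t
          ON ca.company_id = t.company_id
         AND ca.created_at = t.max_created_at
        GROUP BY ca.company_id
    ) v ON co.id = v.company_id
    WHERE co.customer_id = 10
    ORDER BY co.id
""").fetchall()
print(rows)
# [('Acme', 'sold', '2015-02-01 09:00:00'), ('Globex', None, None)]
```

Acme gets exactly its latest call, and Globex still appears with NULLs, matching both requirements from the question.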
|
You should just join against a subquery; that way you aren't re-evaluating the select for every row.
```
SELECT co.id, co.label, ca.result, ca.id, t.date_created as most_recent
FROM companies co
LEFT JOIN
( SELECT MAX(created_at) as date_created, company_id
FROM calls
GROUP BY company_id
) t ON t.company_id = co.id
JOIN calls ca ON ca.company_id = t.company_id AND t.date_created = ca.created_at
WHERE co.customer_id = ?
```
## EDIT:
The issue is that you have more than one call per company at the max date. To test this, just pull out one customer and company and look at the results.
```
SELECT co.id, co.label, ca.result, ca.id, ca.created_at as most_recent_date
FROM companies co
LEFT JOIN
( SELECT MAX(created_at) as date_created, company_id
FROM calls
GROUP BY company_id
) t ON t.company_id = co.id
JOIN calls ca ON ca.company_id = t.company_id AND t.date_created = ca.created_at
WHERE co.customer_id = ? AND co.id = ?
```
Run this query and specify a specific company. Look at the `most_recent_date` column and see if the date is the same for each row and if it is the max date.
|
SQL - JOIN with MAX(created_at)
|
[
"",
"mysql",
"sql",
"greatest-n-per-group",
""
] |
I am trying to join 2 tables and create a new field returning the lowest value of a product. I've tried many variations and can't seem to get it to work.
```
SELECT DISTINCT VENDOR.*, PRODUCT.P_PRICE, PRODUCT.LOWEST_PRICE AS MIN(PRODUCT.P_PRICE)
FROM PRODUCT
INNER JOIN VENDOR
ON VENDOR.V_CODE = PRODUCT.V_CODE
ORDER BY VENDOR.V_NAME
```
|
You can use this; it will order the results starting with the vendor whose product has the minimum price:
```
SELECT VENDOR.V_NAME, MIN(PRODUCT.P_PRICE) AS LOWEST_PRICE
FROM VENDOR
INNER JOIN PRODUCT
ON VENDOR.V_CODE = PRODUCT.V_CODE
GROUP BY VENDOR.V_NAME
ORDER BY LOWEST_PRICE
```
**SQL FIDDLE:**- <http://sqlfiddle.com/#!3/467c8/2>
|
If you are looking for the lowest price by vendor, use group by and min:
```
SELECT VENDOR.V_NAME, MIN(PRODUCT.P_PRICE) AS LOWEST_PRICE
FROM PRODUCT
INNER JOIN VENDOR
ON VENDOR.V_CODE = PRODUCT.V_CODE
GROUP BY VENDOR.V_NAME
ORDER BY VENDOR.V_NAME
```
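Both answers rely on the same `MIN` + `GROUP BY` idea; here is a runnable sketch with Python's `sqlite3` and made-up vendor/product data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE VENDOR (V_CODE INTEGER, V_NAME TEXT);
    CREATE TABLE PRODUCT (P_CODE INTEGER, V_CODE INTEGER, P_PRICE REAL);
    INSERT INTO VENDOR VALUES (1, 'Acme'), (2, 'Bolt Co');
    INSERT INTO PRODUCT VALUES (10, 1, 9.99), (11, 1, 4.50), (12, 2, 7.25);
""")

# One row per vendor, carrying that vendor's cheapest price.
rows = conn.execute("""
    SELECT VENDOR.V_NAME, MIN(PRODUCT.P_PRICE) AS LOWEST_PRICE
    FROM PRODUCT
    INNER JOIN VENDOR ON VENDOR.V_CODE = PRODUCT.V_CODE
    GROUP BY VENDOR.V_NAME
    ORDER BY VENDOR.V_NAME
""").fetchall()
print(rows)  # [('Acme', 4.5), ('Bolt Co', 7.25)]
```

The original attempt failed because an aggregate like `MIN(...)` needs a `GROUP BY`, and `AS` aliases the aggregate, not the other way around.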
|
List vendors, and lowest priced product SQL
|
[
"",
"sql",
"inner-join",
"min",
""
] |
How can I add a column to the SQL view that lists USA as "America", India as "Asia", China as "Asia", and so on, without actually adding the data to the tables, only including it in the view? I want to include both COUNTRY\_VISITED and a new column named CONTINENTS.
```
CREATE OR REPLACE VIEW V_HOLIDAYS
AS
SELECT CODE_OF_HOLIDAY, DESCRIPTION, COUNTRY_VISITED
FROM ZZZ_HOLIDAY_DETAILS;
```
|
```
CREATE OR REPLACE VIEW V_HOLIDAYS AS
SELECT CODE_OF_HOLIDAY,
DESCRIPTION,
case when COUNTRY_VISITED = 'USA' then 'America'
when COUNTRY_VISITED in ('India','China') then 'Asia'
else COUNTRY_VISITED
end as COUNTRY_VISITED
FROM ZZZ_HOLIDAY_DETAILS;
```
|
Use a `case` statement:
```
CREATE OR REPLACE VIEW V_HOLIDAYS
AS
SELECT CODE_OF_HOLIDAY, DESCRIPTION, COUNTRY_VISITED,
(CASE WHEN COUNTRY_VISITED = 'USA' THEN 'AMERICA'
WHEN COUNTRY_VISITED IN ('INDIA', 'CHINA') THEN 'ASIA'
END) as CONTINENT
FROM ZZZ_HOLIDAY_DETAILS;
```
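The `CASE`-in-a-view approach can be tried end to end with Python's `sqlite3` (sample rows are invented; note that string comparison here is case-sensitive, so the `CASE` arms match the stored spelling 'India'/'China'):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ZZZ_HOLIDAY_DETAILS (
        CODE_OF_HOLIDAY INTEGER, DESCRIPTION TEXT, COUNTRY_VISITED TEXT);
    INSERT INTO ZZZ_HOLIDAY_DETAILS VALUES
        (1, 'Road trip', 'USA'), (2, 'Taj Mahal', 'India');

    -- The continent is computed in the view; the base table is untouched.
    CREATE VIEW V_HOLIDAYS AS
    SELECT CODE_OF_HOLIDAY, DESCRIPTION, COUNTRY_VISITED,
           (CASE WHEN COUNTRY_VISITED = 'USA' THEN 'AMERICA'
                 WHEN COUNTRY_VISITED IN ('India', 'China') THEN 'ASIA'
            END) AS CONTINENT
    FROM ZZZ_HOLIDAY_DETAILS;
""")

rows = conn.execute("""
    SELECT COUNTRY_VISITED, CONTINENT FROM V_HOLIDAYS
    ORDER BY CODE_OF_HOLIDAY
""").fetchall()
print(rows)  # [('USA', 'AMERICA'), ('India', 'ASIA')]
```

Any country not covered by a `CASE` arm would get a NULL continent, which is easy to spot and extend later.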
|
SQL- How to add a column to the SQL VIEW, but not add or update it to the table?
|
[
"",
"sql",
"oracle-apex",
""
] |
I want to match two tables that both have first + last name fields. I would like to return the records that match each person's first + last name combination.
Table 1 fields:
```
id|firstname|lastname|position|
```
Table 2 fields:
```
firstname|lastname|datehired|department|deptcode|
```
|
You can join on multiple columns:
```
SELECT t1.id, t1.firstname, t1.lastname, t1.position,
t2.datehired, t2.department, t2.deptcode
FROM Table1 t1 INNER JOIN Table2 t2
ON t1.firstname = t2.firstname
AND t1.lastname = t2.lastname
```
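A quick check of the two-column join, again via Python's `sqlite3` with invented rows matching the question's schemas:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Table1 (id INTEGER, firstname TEXT, lastname TEXT,
                         position TEXT);
    CREATE TABLE Table2 (firstname TEXT, lastname TEXT, datehired TEXT,
                         department TEXT, deptcode TEXT);
    INSERT INTO Table1 VALUES (1, 'Ann', 'Lee', 'Analyst'),
                              (2, 'Bob', 'Ray', 'Clerk');
    INSERT INTO Table2 VALUES ('Ann', 'Lee', '2014-01-01', 'Finance', 'F1');
""")

# Join on BOTH name columns; only full first+last matches survive.
rows = conn.execute("""
    SELECT t1.id, t1.firstname, t1.lastname, t1.position,
           t2.datehired, t2.department, t2.deptcode
    FROM Table1 t1
    INNER JOIN Table2 t2
       ON t1.firstname = t2.firstname
      AND t1.lastname = t2.lastname
""").fetchall()
print(rows)  # [(1, 'Ann', 'Lee', 'Analyst', '2014-01-01', 'Finance', 'F1')]
```

Bob Ray has no counterpart in Table2, so the inner join drops him, which is the "match each person's first + last name combination" behavior requested.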
|
How about:
```
Select FirstName, LastName From Table1
Intersect
Select Firstname, LastName From Table2
```
|
How to cross reference two tables that have FirstName + LastName fields
|
[
"",
"sql",
"t-sql",
""
] |
```
f1 f2 f3
----------------
10 20 30
10 15 50
11 12 25
11 79 13
```
Grouping by f1, how do I get the max f2 and its corresponding f3?
```
f1 f2 f3
----------------
10 20 30
11 79 13
```
|
You can do it this way:
```
SELECT t1.f1
, t.f2
, t1.f3
FROM tbl t1
cross apply (SELECT max(f2) f2
FROM tbl
GROUP BY f1) t
WHERE t1.f2=t.f2
```

|
You can also use this one (without ROW\_NUMBER):
```
SELECT DISTINCT a.f1,x.f2,x.f3
FROM YourTable a
CROSS APPLY (SELECT TOP 1 f2,f3
FROM YourTable b
WHERE a.f1 = b.f1 ORDER BY b.f2 DESC) x
```
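Engines without `CROSS APPLY` (SQLite among them) can get the same greatest-per-group result with a window function; here is a hedged sketch in Python's `sqlite3`, reusing the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (f1 INTEGER, f2 INTEGER, f3 INTEGER)")
conn.executemany("INSERT INTO tbl VALUES (?, ?, ?)",
                 [(10, 20, 30), (10, 15, 50), (11, 12, 25), (11, 79, 13)])

# Rank rows within each f1 group by f2 descending; keep the top row,
# which carries its own f3 along for free.
rows = conn.execute("""
    SELECT f1, f2, f3
    FROM (SELECT f1, f2, f3,
                 ROW_NUMBER() OVER (PARTITION BY f1
                                    ORDER BY f2 DESC) AS rn
          FROM tbl)
    WHERE rn = 1
    ORDER BY f1
""").fetchall()
print(rows)  # [(10, 20, 30), (11, 79, 13)]
```

This matches the desired output table exactly, including the correspondent f3 values (30 and 13).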
|
SQL Group By and Aggregate a column and its corresponding value in another column
|
[
"",
"sql",
"sql-server",
"group-by",
""
] |