| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
Let's say, for example, I have a list of email addresses retrieved like so:
```
SELECT email FROM users ORDER BY email
```
This returns:
```
a@email.com
b@email.com
c@email.com
...
x@email.com
y@email.com
z@email.com
```
I'd like to take this result set, slice the bottom 3 emails and move them to the top, so you'd see a result set like:
```
x@email.com
y@email.com
z@email.com -- Note x-z is here
a@email.com
b@email.com
c@email.com
...
u@email.com
v@email.com
w@email.com
```
Is there a way to do this within the SQL? I'd like to not have to do it application-side for memory reasons.
|
If you know the values are "x" or greater, you can simply do:
```
order by (case when email >= 'x@email.com' then 1 else 2 end),
email
```
Otherwise, you can use `row_number()`:
```
select email
from (select email, row_number() over (order by email desc) as seqnum
from users u
) u
order by (case when seqnum <= 3 then 1 else 2 end),
email;
```
|
This assumes `email` is defined `UNIQUE NOT NULL`; otherwise you need to do a bit more.
```
SELECT email
FROM (SELECT email, row_number() OVER (ORDER BY email DESC) AS rn FROM users) sub
ORDER BY (rn > 3), rn DESC;
```
In Postgres you can just sort by a `boolean` expression. `FALSE` sorts before `TRUE`. More:
* [Sorting null values after all others, except special](https://stackoverflow.com/questions/21891803/sorting-null-values-after-all-others-except-special/21892611#21892611)
As a secondary sort key, use the computed row number (`rn`) in descending order. Don't sort by the (more expensive) text column `email` a second time. Shorter and simpler; test with `EXPLAIN ANALYZE`, it should be faster, too.
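To see both ideas in action outside Postgres, here is a small runnable sketch using Python's bundled SQLite (window functions need SQLite 3.25+); the nine sample addresses are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT UNIQUE NOT NULL)")
conn.executemany("INSERT INTO users VALUES (?)",
                 [(c + "@email.com",) for c in "abcdefxyz"])

# Rank rows from the bottom, then sort the last-3-by-rank block first.
# SQLite, like Postgres, lets you sort by a boolean-ish expression:
# (rn > 3) evaluates to 0 for the last three emails and 1 for the rest.
rows = conn.execute("""
    SELECT email
    FROM (SELECT email, row_number() OVER (ORDER BY email DESC) AS rn
          FROM users) sub
    ORDER BY (rn > 3), rn DESC
""").fetchall()
emails = [r[0] for r in rows]
```

The last three addresses (x, y, z) come first, followed by the rest in ascending order.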
|
PostgreSQL - Slice the bottom N results and move them to the top
|
[
"",
"sql",
"postgresql",
"sql-order-by",
""
] |
Hi and thanks in advance.
I have a few case statements that calculate "Quantity" based on the text in "ProductName."
```
Productname Quantity
Product1 - 5 pack should = 5
Product2 - 10 pack should = 10
Product3 - 25 pack should = 25
```
My case works well; however, when I write `WHEN productname LIKE '%5 pack' THEN 5`, it also matches "25 pack" because of the wildcard, and thus I get incorrect values. The names all end exactly as seen here: "- [##] pack"
|
Case expressions are evaluated in order. Put the more restrictive conditions first.
Or if all the names fit the pattern you describe then this will probably work.
```
select cast(left(right(ProductName, 7), 2) as int) as Quantity from ...
```
Or you could use `substring` with a negative start position potentially.
```
cast(substring(ProductName, -7, 2) as int)
```
Many other options as well...
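For what it's worth, SQLite's `substr()` does accept a negative start position, so the second idea can be checked with a quick Python sketch (product names taken from the question; `trim()` guards the single-digit case):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (ProductName TEXT)")
conn.executemany("INSERT INTO products VALUES (?)", [
    ("Product1 - 5 pack",),
    ("Product2 - 10 pack",),
    ("Product3 - 25 pack",),
])

# Every name ends in "- ## pack", so the quantity always sits inside
# the last 7 characters; substr(x, -7, 2) grabs the first two of them.
rows = conn.execute("""
    SELECT ProductName,
           CAST(trim(substr(ProductName, -7, 2)) AS INTEGER) AS Quantity
    FROM products
""").fetchall()
quantities = dict(rows)
```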
|
Depending on your data, you could use LIKE '%[ -]5 pack'. (Notice that inside the brackets is a space and a hyphen.)
This keeps you from having to order the case statement correctly.
|
SQL Like ending in specific string %5% vs. %25%
|
[
"",
"sql",
""
] |
I have a following query:
```
SELECT COUNT (*) AS Total, Program, Status
FROM APP_PGM_CHOICE
WHERE Program IN ( 'EX', 'IM')
AND APP_PGM_REQ_DT >= '20150101'
AND APP_PGM_REQ_DT <= '20150131'
AND Status IN ( 'PE','DN','AP')
GROUP BY Program, Status
ORDER BY Program, Status
```
And the output is:
```
Total Program Status
12246 "EX" "AP"
13963 "EX" "DN"
21317 "EX" "PE"
540 "IM" "AP"
2110 "IM" "DN"
7184 "IM" "PE"
```
And I want the output like:
```
Total1 Program1 Total2 Program2 Status
12246 EX 540 IM AP
13963 EX 2110 IM DN
21317 EX 7184 IM PE
```
Can I do it? If yes, what's the way?
|
Yes you could do it this way:
```
Select T1.Total Total1, T1.Program Program1, T2.Total Total2, T2.Program Program2, T1.Status
From
(SELECT COUNT (*) AS Total, Program, Status
FROM APP_PGM_CHOICE
WHERE Program = 'EX'
AND APP_PGM_REQ_DT >= '20150101'
AND APP_PGM_REQ_DT <= '20150131'
AND Status IN ( 'PE','DN','AP')
GROUP BY Program, Status
) T1
INNER JOIN
(SELECT COUNT (*) AS Total, Program, Status
FROM APP_PGM_CHOICE
WHERE Program = 'IM'
AND APP_PGM_REQ_DT >= '20150101'
AND APP_PGM_REQ_DT <= '20150131'
AND Status IN ( 'PE','DN','AP')
GROUP BY Program, Status
) T2 on T1.Status = T2.Status
Order By T1.Status
```
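A minimal runnable check of this two-derived-tables-joined-on-Status approach, using Python's SQLite with a few made-up rows (the date filter is omitted for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE APP_PGM_CHOICE (Program TEXT, Status TEXT)")
sample = [("EX", "AP")] * 3 + [("EX", "PE")] * 2 + [("IM", "AP")] + [("IM", "PE")] * 4
conn.executemany("INSERT INTO APP_PGM_CHOICE VALUES (?, ?)", sample)

# Aggregate each program separately, then line the halves up on Status.
result = conn.execute("""
    SELECT T1.Total, T1.Program, T2.Total, T2.Program, T1.Status
    FROM (SELECT COUNT(*) AS Total, Program, Status
          FROM APP_PGM_CHOICE WHERE Program = 'EX'
          GROUP BY Program, Status) T1
    JOIN (SELECT COUNT(*) AS Total, Program, Status
          FROM APP_PGM_CHOICE WHERE Program = 'IM'
          GROUP BY Program, Status) T2 ON T1.Status = T2.Status
    ORDER BY T1.Status
""").fetchall()
```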
|
You can do this with a UNION query and some simple selects
```
SELECT GROUP_CONCAT(total1) as total1, GROUP_CONCAT(program1) as program1, GROUP_CONCAT(total2) as total2, GROUP_CONCAT(program2) as program2
FROM
(SELECT total AS total1, program AS program1, null AS total2, null AS program2
WHERE program = 'EX'
UNION
SELECT null AS total1, null AS program1, total AS total2, program AS program2
WHERE program = 'IM') t
```
This is the easy way to pivot rows into columns
|
How to divide SQL/MySQL query output into multiple tables?
|
[
"",
"mysql",
"sql",
""
] |
I do not understand why I get an error saying "Unknown column 'cyclist.ISO_id' in 'where clause'"
```
SELECT name, gender, height, weight FROM Cyclist LEFT JOIN Country ON Cyclist.ISO_id = Country.ISO_id WHERE cyclist.ISO_id = 'gbr%';
```
|
It looks like your table name is `Cyclist`, not `cyclist` (capital `C`). In your `WHERE` clause you're therefore referencing a column of a table which does not exist.
|
From the [docs](http://dev.mysql.com/doc/refman/5.7/en/identifier-case-sensitivity.html) (with my emphasis):
> Although database, table, and trigger names are not case sensitive on
> some platforms, **you should not refer to one of these using different
> cases within the same statement**. The following statement would not
> work because it refers to a table both as my_table and as MY_TABLE:
```
mysql> SELECT * FROM my_table WHERE MY_TABLE.col=1;
```
> Column, index, stored routine, and event names are not case sensitive
> on any platform, nor are column aliases.
So use either `cyclist` or `Cyclist` but **consistently**.
|
Why does my SQL statement show unknown column in WHERE clause
|
[
"",
"mysql",
"sql",
""
] |
So first I would like to preface this question by letting everyone know I am pretty new to sql and coding in general. I have a query:
```
Select Distinct
[t1].[Name] AS Subdivision
, [t2].[Description] AS SubStatus
, [t4].[Name] AS ConnectingSubName
, [t2].[Description] As ConnectingSubStatus
, [t5].[ActualPublicationEndDate]
, MAX([t5].[version]) as Version
From [PtcDbTracker].[dbo].[TrackDatabase] as [t0]
INNER Join [PTCDbTracker].[dbo].[Subdivision] as [t1] on [t0].[SubdivisionId]=[t1].[SubdivisionId]
Inner Join [PTCDbTracker].[dbo].[Status] as [t2] on [t1].[StatusId]=[t2].[StatusId]
Inner Join [PTCDbTracker].[dbo].[ConnectingSubs] as [t3] on [t0].[SubdivisionId]=[t3].[SubdivisionId]
Inner Join [PTCDbTracker].[dbo].[Subdivision] as [t4] on ([t2].[StatusId]=[t4].[StatusId] AND [t3].[ConnectingSubId]=[t4].[SubdivisionId])
Inner Join [PtcDbTracker].[dbo].[TrackDatabase] as [t5] on t3.ConnectingSubId = t5.SubdivisionId
Where [t0].[SubdivisionId] = '90'
AND [t5].[Version] BETWEEN 8000 AND 9000
Group By t1.Name, t2.Description, t4.Name, t2.Description, t5.ActualPublicationEndDate
```
Which returns:
```
Subdivision SubStatus ConnectingSubName ConnectingSubStatus ActualPublicationEndDate Version
San Bernardino In Editing Alameda Corridor In Editing 2013-12-17 00:00:00.0000000 8000
San Bernardino In Editing Harbor In Editing 2014-04-25 00:00:00.0000000 8001
San Bernardino In Editing Alameda Corridor In Editing 2014-05-01 00:00:00.0000000 8001
San Bernardino In Editing Alameda Corridor In Editing 2014-09-25 00:00:00.0000000 8002
```
What I really want returned are lines 2 and 4. I know that the Group By clause is creating groups of 1, but if I try to take anything out I get an error. Any help would be greatly appreciated. I am using SQL Server 2012 (Management Studio).
|
You want to use `row_number()`. Something like this:
```
with t as (
Select [t1].[Name] AS Subdivision, [t2].[Description] AS SubStatus,
[t4].[Name] AS ConnectingSubName, [t2].[Description] As ConnectingSubStatus,
[t5].[ActualPublicationEndDate], [t5].[version] as Version
From [PtcDbTracker].[dbo].[TrackDatabase] [t0] INNER Join
[PTCDbTracker].[dbo].[Subdivision] [t1]
on [t0].[SubdivisionId] = [t1].[SubdivisionId] Inner Join
[PTCDbTracker].[dbo].[Status] [t2]
on [t1].[StatusId]=[t2].[StatusId] Inner Join
[PTCDbTracker].[dbo].[ConnectingSubs] [t3]
on [t0].[SubdivisionId]=[t3].[SubdivisionId] Inner Join
[PTCDbTracker].[dbo].[Subdivision] [t4]
on ([t2].[StatusId]=[t4].[StatusId] AND [t3].[ConnectingSubId]=[t4].[SubdivisionId]) Inner Join
[PtcDbTracker].[dbo].[TrackDatabase] [t5]
on t3.ConnectingSubId = t5.SubdivisionId
Where [t0].[SubdivisionId] = '90' AND [t5].[Version] BETWEEN 8000 AND 9000
)
select t.*
from (select t.*,
row_number() over (partition by Subdivision, SubStatus, ConnectingSubName
order by version desc) as seqnum
from t
) t
where seqnum = 1
```
This uses `row_number()` to get the row with the largest version for each entity, and then returns that row.
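Stripped of the join chain, the dedup step can be demonstrated on its own with SQLite (3.25+ for window functions); the short column names below are stand-ins for the real ones:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (sub TEXT, connecting TEXT, version INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    ("San Bernardino", "Alameda Corridor", 8000),
    ("San Bernardino", "Alameda Corridor", 8001),
    ("San Bernardino", "Alameda Corridor", 8002),
    ("San Bernardino", "Harbor", 8001),
])

# Number rows per (sub, connecting) group, newest version first,
# then keep only the first row of each group.
rows = conn.execute("""
    SELECT sub, connecting, version
    FROM (SELECT t.*, row_number() OVER (PARTITION BY sub, connecting
                                         ORDER BY version DESC) AS seqnum
          FROM t)
    WHERE seqnum = 1
    ORDER BY connecting
""").fetchall()
```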
|
The problem in your query is the `ActualPublicationEndDate` column in the `group by`; you need to remove it from both the `group by` and the `select` list.
Instead you can use `Row_Number` to find the max `version` per `Subdivision, SubStatus, ConnectingSubName and ConnectingSubStatus`.
```
Select *
from
(
select *,
row_number() over(partition by Subdivision, SubStatus, ConnectingSubName, ConnectingSubStatus
order by [t5].[version] desc) RN
From join..
..
) A
where RN=1
```
|
SQL MAX() Function returning All Results
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
"greatest-n-per-group",
""
] |
I am trying to return all instances in a Customers table where the statustype= 'dc' then for those results, the count on FC is > 1 and the Count on Address1 is 1.
IE:
```
FC Address1
111 abc
111 cde
432 qqq
432 qqq
```
I need the 111 FC results back because their address1 is different. But I don't need the 432 FC results back because there is more than 1 Address for that FC
```
SELECT *
FROM Customers
where FC IN( select FC from Customers where StatusType= 'dc'
group by FC having COUNT(FC) > 1 and COUNT(Address1) < 2
)
order by FC, Address1
```
I also tried = 1 instead of < 2
|
If you want the details about the FCs that have more than one unique address then this query will give you that:
```
select c.* from customers c
join (
select FC
from customers
where statustype = 'dc'
group by fc having count(distinct Address1) > 1
) a on c.FC = a.FC
```
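Here is that query run against the four sample rows from the question via Python's SQLite (a `dc` status is added to every row so the filter applies):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customers (FC TEXT, Address1 TEXT, StatusType TEXT)")
conn.executemany("INSERT INTO Customers VALUES (?, ?, ?)", [
    ("111", "abc", "dc"), ("111", "cde", "dc"),
    ("432", "qqq", "dc"), ("432", "qqq", "dc"),
])

# COUNT(DISTINCT Address1) > 1 keeps only FCs with differing addresses.
rows = conn.execute("""
    SELECT c.* FROM Customers c
    JOIN (SELECT FC FROM Customers
          WHERE StatusType = 'dc'
          GROUP BY FC
          HAVING COUNT(DISTINCT Address1) > 1) a ON c.FC = a.FC
""").fetchall()
fcs = sorted({r[0] for r in rows})
```

Only the two 111 rows come back; 432 is excluded because both of its rows share one address.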
|
Try using a `COUNT(DISTINCT ...)`:
```
SELECT *
FROM Customers
WHERE FC IN(SELECT FC
FROM Customers
WHERE StatusType = 'dc'
GROUP BY FC
HAVING Count(DISTINCT Address1) > 1)
ORDER BY FC,
Address1
```
|
Select where Count of 1 field is greater than a value and count of another is less than a value
|
[
"",
"sql",
"count",
"group-by",
""
] |
This may be simple, but I cannot figure out the correct and simplest way to query a table which contains a date col to return the rows in which the date belongs to the current academic year.
Knowing that by academic year I mean the period from the 1st of September of one year to the 31st of August of the next, how can I obtain the right dataset
from a table that look like this:
```
TABLE
----
Date
----
12/08/2015
15/06/2015
01/09/2015 <-
07/10/2015 <-
09/11/2015 <-
21/12/2015 <-
15/01/2016 <-
18/03/2016 <-
28/04/2016 <-
29/06/2016 <-
30/07/2016 <-
12/09/2016
23/11/2016
```
|
This is an Oracle equivalent of Andomar's post -
```
select
*
from
dts
where
case
when extract (month from dt) < 9 then extract (year from dt) - 1
else extract (year from dt)
end = extract (year from sysdate)
```
**Fiddle:** <http://sqlfiddle.com/#!4/75a16/1/0>
|
You could use a `case` to convert a date to its academic year. For SQL Server, that could look like:
```
select *
from YourTable
where case
when month(DateColumn) < 9 then year(DateColumn) - 1
else year(DateColumn)
end = year(getdate())
```
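A runnable SQLite version of the same idea; note it applies the academic-year mapping to the reference date as well (pinned to a hypothetical 2016-05-01 for determinism), which keeps the comparison correct even when the query runs between January and August:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE YourTable (d TEXT)")
conn.executemany("INSERT INTO YourTable VALUES (?)", [
    ("2015-06-15",), ("2015-09-01",), ("2016-03-18",), ("2016-09-12",),
])

# A date belongs to the academic year named after its starting September:
# months 1-8 map back to the previous calendar year.
academic_year = """CASE WHEN CAST(strftime('%m', {0}) AS INTEGER) < 9
                        THEN CAST(strftime('%Y', {0}) AS INTEGER) - 1
                        ELSE CAST(strftime('%Y', {0}) AS INTEGER) END"""
sql = "SELECT d FROM YourTable WHERE {} = {} ORDER BY d".format(
    academic_year.format("d"), academic_year.format("'2016-05-01'"))
dates = [r[0] for r in conn.execute(sql)]
```

Only the dates falling in the 2015/16 academic year survive.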
|
How can a query select dates from only a specific academic year?
|
[
"",
"sql",
"oracle",
""
] |
For Example, I have table like this:
```
Date | Id | Total
-----------------------
2014-01-08 1 15
2014-01-09 3 24
2014-02-04 3 24
2014-03-15 1 15
2015-01-03 1 20
2015-02-24 2 10
2015-03-02 2 16
2015-03-03 5 28
2015-03-09 5 28
```
I want the output to be:
```
Date | Id | Total
---------------------
2015-01-03 1 20
2014-02-04 3 24
2015-03-02 2 16
2015-03-09 5 28
```
Here the distinct values are Id. I need latest Total for each Id.
|
You can use `left join` as
```
select
t1.* from table_name t1
left join table_name t2
on t1.Id = t2.Id and t1.Date < t2.Date
where t2.Id is null
```
<http://dev.mysql.com/doc/refman/5.0/en/example-maximum-column-group-row.html>
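A quick SQLite check of this anti-join pattern; the inequality must be `t1.Date < t2.Date`, since a surviving row is one with no *later* row for the same Id:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_name (Date TEXT, Id INTEGER, Total INTEGER)")
conn.executemany("INSERT INTO table_name VALUES (?, ?, ?)", [
    ("2014-01-08", 1, 15), ("2015-01-03", 1, 20),
    ("2015-02-24", 2, 10), ("2015-03-02", 2, 16),
])

# t2 matches every strictly later row with the same Id; rows with no
# such match (t2.Id IS NULL) are the latest per Id.
rows = conn.execute("""
    SELECT t1.* FROM table_name t1
    LEFT JOIN table_name t2 ON t1.Id = t2.Id AND t1.Date < t2.Date
    WHERE t2.Id IS NULL
    ORDER BY t1.Id
""").fetchall()
```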
|
You can also use Max() in sql:
```
SELECT date, id, total
FROM table as a WHERE date = (SELECT MAX(date)
FROM table as b
WHERE a.id = b.id
)
```
|
how to get latest record or record with max corresponding date of all distinct values in a column in mysql?
|
[
"",
"mysql",
"sql",
""
] |
I have a table `Reviews` with columns `MovieID` and `Rating`.
In this table, Ratings are associated to a particular MovieID.
For example, MovieID 123 can have 500 ratings, ranging from 1-5.
I want to display the top N movies, with the highest average rating (rounded to 4 decimals) on top, in the format:
```
movieID|avg
123 : 4.06
512 : 4.01
744 : 3.68
23 : 2.51
```
Is this query the right way to do it?
```
SELECT MovieID, ROUND(AVG(CAST(Rating AS FLOAT)), 4) as avg
from Reviews order by avg desc
```
|
It's not the correct way to do it. When you use an aggregate function like `avg()` you need to include a `group by` clause that determines over which rows the function is applied.
In your case you should do
```
SELECT TOP 5
MovieID, ROUND(AVG(CAST(Rating AS FLOAT)), 4) as avg
FROM Reviews
GROUP BY MovieID
ORDER BY ROUND(AVG(CAST(Rating AS FLOAT)), 4) DESC
```
to get the top 5 ratings. The `TOP 5` limits the records returned to the top 5 as determined by the `order by` clause.
Note that if the top 6 movies happen to have the same average rating you'd still only get five (and it would be undetermined which five of the six). If you want ties to count as one then you should look at the `rank()` function instead.
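The same query in SQLite form (which uses `LIMIT` where SQL Server uses `TOP`), with a handful of made-up ratings:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Reviews (MovieID INTEGER, Rating INTEGER)")
conn.executemany("INSERT INTO Reviews VALUES (?, ?)", [
    (123, 5), (123, 4), (512, 4), (512, 4), (744, 3), (23, 2), (23, 3),
])

# Group per movie, average, then keep the top two.
rows = conn.execute("""
    SELECT MovieID, ROUND(AVG(CAST(Rating AS REAL)), 4) AS avg_rating
    FROM Reviews
    GROUP BY MovieID
    ORDER BY avg_rating DESC
    LIMIT 2
""").fetchall()
```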
|
Here is query which will solve your problem:
```
SELECT MovieID, ROUND(AVG(rating),4) AS avg
FROM your_table
GROUP BY MovieID
ORDER BY ROUND(AVG(rating),4) DESC LIMIT 4
```
Here I have taken `N = 4` as in your sample output above. Also, you will need to replace `your_table` in this query with the actual name of your table. Your original query was going in the wrong direction because it wasn't making use of a `GROUP BY` clause to find the average rating for each movie.
|
Order a column by average sql
|
[
"",
"sql",
"sql-server",
""
] |
```
SELECT user.username, offer.* FROM offer
INNER JOIN user ON user.userid = offer.userid
WHERE offer.date BETWEEN
(NOW() - INTERVAL 40 DAY) AND NOW()
```
This works fine. It displays records within a 40-day interval, with the current date as the reference.
**Problem:**
I want to get records that are 40 or more days old, with the last 40 days as the reference.
i.e. it should ignore the most recent 40 days from the current date and display records older than that.
|
In this case you can do this:
```
SELECT user.username, offer.* FROM offer
INNER JOIN user ON user.userid = offer.userid
WHERE offer.date < (NOW() - INTERVAL 40 DAY)
```
You were pretty close ;-)
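The cutoff logic is easy to verify with SQLite's date arithmetic; "now" is pinned to a hypothetical 2015-04-10 so the result is deterministic (MySQL would write `NOW() - INTERVAL 40 DAY` instead):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE offer (id INTEGER, date TEXT)")
conn.executemany("INSERT INTO offer VALUES (?, ?)", [
    (1, "2015-01-01"), (2, "2015-03-20"), (3, "2015-04-01"),
])

# date('2015-04-10', '-40 days') = '2015-03-01'; only strictly older
# rows survive, i.e. the most recent 40 days are ignored.
rows = conn.execute("""
    SELECT id FROM offer
    WHERE date < date('2015-04-10', '-40 days')
""").fetchall()
older = [r[0] for r in rows]
```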
|
```
SELECT user.username, offer.* FROM offer
INNER JOIN user ON user.userid = offer.userid
WHERE offer.date < (NOW() - INTERVAL 40 DAY)
```
|
How get values from database one month old ignoring certain days from current date?
|
[
"",
"mysql",
"sql",
"database",
"date",
"select",
""
] |
```
delete from tx_table
where buyer_id in(select t.buyer_id
from tx_table t
left join user_table u
on t.buyer_id=u.user_id
where u.user_id is null)
```
I am getting an error for the above query. The error is
> SQL error 1093: You can't specify target table 'tx_table' for update in
> FROM clause
Please help me
|
Try this:
```
DELETE FROM tx_table
WHERE buyer_id IN (
SELECT buyer_id FROM (
SELECT DISTINCT t.buyer_id AS buyer_id FROM tx_table
LEFT JOIN user_table u on t.buyer_id=u.user_id where u.user_id is null
) AS c
)
```
|
Maybe the SQL statement itself is wrong. Do:
```
$delete = "delete from tx_table
where buyer_id in(select t.buyer_id
from tx_table t
left join user_table u
on t.buyer_id=u.user_id
where u.user_id is null)";
echo $delete;
```
Now take the echoed statement, go to the MySQL server, and paste it into the SQL tab. If you do this you will know whether it is a SQL error.
|
Error in mysql delete query
|
[
"",
"mysql",
"sql",
""
] |
I have a stored procedure as below. Please note this is an example only and the actual query is long and has many columns.
```
select
*,
(select field_1 from table_1 where field1='xx' and disty_name = @disty_name) as field_1,
(select field_2 from table_2 where field1='xx' and disty_name = @disty_name) as field_2,
from
table_xx
where
disty_name = @disty_name
```
`@disty_name` parameter will pass some values and works fine.
My question is what is the best and shortest way to ignore `disty_name = @disty_name` condition if the `@disty_name` parameter contains the value 'All'
I just want to remove `disty_name = @disty_name` condition in some cases because user want to query all records without having `disty_name` filtered.
|
I think I found the answer.
Step 1 - make the parameter optional in the SP:
@disty_name nvarchar(40) = null
and then in the query
```
select *,
(select field_1 from table_1 where field1='xx' and (@disty is null or dist_name=@disty)) as field_1,
(select field_2 from table_2 where field1='xx' and (@disty is null or dist_name=@disty)) as field_2,
from table_xx where (@disty is null or dist_name=@disty)
```
If you pass @disty, it will filter on that value in the query. If the parameter is NULL, the check is interpreted as "NULL is NULL", which is true for every row. If the parameter is 'xyz', "@disty is null" evaluates to false and the equality check applies instead. This is cool, isn't it?
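The `(@p IS NULL OR col = @p)` trick is portable; here it is as a parameterized SQLite sketch, with `All` mapped to `NULL` before binding (table contents are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_xx (disty_name TEXT, val INTEGER)")
conn.executemany("INSERT INTO table_xx VALUES (?, ?)",
                 [("acme", 1), ("globex", 2), ("acme", 3)])

def fetch(disty_name):
    # Map the sentinel 'All' to NULL, like NULLIF(@disty_name, 'All').
    p = None if disty_name == "All" else disty_name
    cur = conn.execute(
        "SELECT val FROM table_xx WHERE (? IS NULL OR disty_name = ?) "
        "ORDER BY val",
        (p, p))
    return [row[0] for row in cur]

filtered = fetch("acme")      # only 'acme' rows
everything = fetch("All")     # the filter collapses to TRUE
```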
|
```
Set @disty_name = NULLIF(@disty_name, 'All')
select *,
(select field_1 from table_1 where field1='xx' and disty_name = coalesce(@disty_name,disty_name)) as field_1,
(select field_2 from table_2 where field1='xx' and disty_name = coalesce(@disty_name,disty_name)) as field_2,
from table_xx where disty_name=coalesce(@disty_name,disty_name)
```
Also, I don't use it that often so I can't write it for you myself, but I suspect there's a more-efficient way to do this with `UNION`s and a `PIVOT`.
|
SQL Server : Where condition based on the parameters
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
So the task is: Find the youngest sailor in each rating level
-
My tables:
```
Sailors(sid : integer, sname : string, rating : integer, age : real)
Reserves(sid : integer, bid : integer, day : date)
Boats(bid : integer, bname : string, color : string)
```
-
Is something like this even possible:
```
select min(age)
from sailors
where rating =(1++)
```
|
```
select rating,sid,age
from Sailors as S
where (age,rating)
in
(
select min(age),rating
from Sailors
group by rating
)
```
**EDIT :**
You could just do `select rating,min(age) from sailors group by rating;` to get the minimum age per rating level, but you won't get the details of the sailor having that minimum age.
Check this <http://sqlfiddle.com/#!9/55276/1> where you can see that the sid is returned as 5 instead of 8 for rating 4...
where as this <http://sqlfiddle.com/#!9/55276/3> returns the sid correctly
|
```
SELECT rating, MIN(age)
FROM Sailors
GROUP BY rating;
```
See the MySQL documentation on [aggregate functions](http://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html).
|
how to increase a search variable by 1 in sql query
|
[
"",
"mysql",
"sql",
""
] |
I'm trying to write some SQL, and what I have doesn't seem to return my desired output.
Here is my create/insert statement
```
CREATE TABLE temp
([screenName] varchar(130), [realName] varchar(57))
;
INSERT INTO temp
([screenName], [realName])
VALUES
('WillyWonka', 'Will Stinson'),
('Barbara Smith', 'Barbara Smith'),
('JoanOfArc', 'JoanArcadia'),
('LisaD', 'Lisa Diddle')
;
```
What I'm looking for is the rows where the realName column has a space in it, such as the first two and the fourth line, and where the second word after the space starts with the letter 'S', is then followed by any character, then has the letter 'i', followed by any series of characters. This is where I'm stuck.
```
SELECT LEFT(realName,CHARINDEX(' ',realName)-1)
FROM
temp
Where LEFT(realName,CHARINDEX(' ',realName)-1) like 'S%'
```
Although I'm pretty sure what I'm doing there is wrong, I can't figure out how to make it correct.
Apologies for the changes, but how would I change the code if the name had multiple spaces (`Jimmy Dean Stinson`) and I wanted to look from the right?
Thank you.
|
Use `LIKE` operator.
```
SELECT *
FROM temp
WHERE realName LIKE '% S_i%'
```
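Running the pattern against the question's own INSERT data in SQLite confirms which names match (`_` matches any single character, `%` any run):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE [temp] ([screenName] TEXT, [realName] TEXT)")
conn.executemany("INSERT INTO [temp] VALUES (?, ?)", [
    ("WillyWonka", "Will Stinson"),
    ("Barbara Smith", "Barbara Smith"),
    ("JoanOfArc", "JoanArcadia"),
    ("LisaD", "Lisa Diddle"),
])

# '% S_i%': a space, then 'S', any one character, then 'i'.
rows = conn.execute(
    "SELECT realName FROM [temp] WHERE realName LIKE '% S_i%'").fetchall()
names = [r[0] for r in rows]
```

"Will Stinson" and "Barbara Smith" match; "JoanArcadia" has no space and "Lisa Diddle"'s second word doesn't start with 'S'.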
|
Do you want
```
SELECT LEFT(realName,CHARINDEX(' ',realName)-1)
FROM
temp
Where realName like '% S_i%'
```
[wildcard single character](https://msdn.microsoft.com/en-us/library/ms174424.aspx)
|
Looking for a particular substring in SQL string
|
[
"",
"sql",
"substring",
"space",
"charindex",
""
] |
**SCENARIO**
I want to design tables for saving Data of ***Cattle*** and ***Calfs***.
**Attributes/columns Common among Cattle and Calf**
* Price
* Birth Date
etc. (Some coming from FK as well like Breed etc)
**Attributes reserved for Calf**
* Father
* Mother
**Attributes reserved for CATTLE (Female)**
* IsMilking (Y or N)
* InseminationDate (Of course a cow can be inseminated a number of times, so it must be normalized and a new table should be there as: ID (FK using PK of the CATTLE table) and InsemDate, but it should be present only for FEMALE)
**There is a Special attribute of *Status* for CATTLE having following Properties:**
* Common Among both genders: {*Healthy, Ill*}
* Reserved for Female ONLY: {*Pregnant, Heifer, etc.*}
My whole Software is failing due to this incompetent design. My existing design is as follows:
**Existing Design**
There is one table for CATTLE referencing all STATUS, BREED, etc tables as it's FK.
There is a table of CALF having these columns:
```
ID (FK from CATTLE table) - Having cattle and calf in same table is apparently failing
FatherID (FK from CATTLE table) - Again we can have Calfs themselves as Father in this column
MotherID (FK from CATTLE table) - Same issue
```
Thanks in advance. I have tried as hard as I could: splitting the tables for MALE and FEMALE (but it costs inheritance, and I would have to split all the subsequent tables like the STATUS table as well).
|
The question is broad and has many possible answers, but I gave it a try since I'm into database design.
It's not perfect, but I hope it helps a little.
Calf Table (Calf records)

Relationships to Cattle 2x

Cattle Table (Cattle records)

Cattle Relationships

Insemination Table (Insem records)

Special Attribute Table

Special Attribute Record Table

Special Attribute Value Table

Calves View

Cattle Status View

|
I think one table for the animal with a self-relationship for father and mother is the way to go. Something like this;
```
CREATE TABLE [dbo].[Animal](
[AnimalID] [int] IDENTITY(1,1) NOT NULL,
[Sex] [char](1) NOT NULL CONSTRAINT [Animal_Sex] CHECK (([Sex]='F' OR [Sex]='M')),
[Name] [varchar](100) NOT NULL,
[Price] [money] NULL,
[BirthDate] [date] NULL,
[Father_AnimalID] [int] NULL,
[Father_Sex] AS (CONVERT([char](1),'M')) PERSISTED,
[Mother_AnimalID] [int] NULL,
[Mother_Sex] AS (CONVERT([char](1),'F')) PERSISTED,
[IsMilking] [char](1) NULL,
[HealthStatus] [char](1) NULL,
[FemaleStatus] [char](1) NULL,
CONSTRAINT [PK_Animal] PRIMARY KEY CLUSTERED ([AnimalID]),
CONSTRAINT [AK_Animal] UNIQUE NONCLUSTERED ([AnimalID], [Sex])
)
ALTER TABLE [dbo].[Animal] ADD CONSTRAINT [FK_Animal_Animal_Father]
FOREIGN KEY([Father_AnimalID], [Father_Sex]) REFERENCES [dbo].[Animal] ([AnimalID], [Sex])
ALTER TABLE [dbo].[Animal] ADD CONSTRAINT [FK_Animal_Animal_Mother]
FOREIGN KEY([Mother_AnimalID], [Mother_Sex]) REFERENCES [dbo].[Animal] ([AnimalID], [Sex])
```
Note how I've added a couple of constant computed columns (Father_Sex and Mother_Sex) - this lets me create a more sophisticated foreign key for the father and mother that forces the father to be male and the mother to be female, and indirectly prevents father and mother from being the same animal.
|
Database Design Issue - Different Statuses for Different Sex
|
[
"",
"sql",
"sql-server",
"database",
"database-design",
""
] |
Say I have two tables like so:
`fruits`
| id | name |
| --- | --- |
| 1 | Apple |
| 2 | Orange |
| 3 | Pear |
`users`
| id | name | fruit |
| --- | --- | --- |
| 1 | John | 3 |
| 2 | Bob | 2 |
| 3 | Adam | 1 |
I would like to query both of those tables and in the result get user ID, his name and a fruit name (fruit ID in users table corresponds to the ID of the fruit) like so:
| id | name | fruit |
| --- | --- | --- |
| 1 | John | Pear |
| 2 | Bob | Orange |
| 3 | Adam | Apple |
I tried joining those two with a query below with no success so far.
```
SELECT * FROM users, fruits WHERE fruits.id = fruit;
```
Thanks in advance.
|
You need to `JOIN` the fruit table like this:
```
SELECT u.id, u.name, f.name FROM users u JOIN fruits f ON u.fruit = f.id
```
See a working example [**here**](http://sqlfiddle.com/#!5/79438/2)
|
```
select a.id,a.name,b.name as fruit
from users a
join fruits b
on b.id=a.fruit
```
|
Query two tables and replace values from one table in the second one
|
[
"",
"mysql",
"sql",
""
] |
How would I query the range of a number column if the number ends somewhere then picks up again at a higher number?
If I had a column like:
```
Number
-------
1
2
3
4
5
11
12
13
```
How can I return a result like
```
Min | Max
----------
1 | 5
11 | 13
```
|
```
;WITH CTE AS
(
SELECT
Number,
Number - dense_rank() over (order by Number) grp
FROM yourtable
)
SELECT min(Number) min, max(Number) max
FROM CTE
GROUP BY grp
```
[FIDDLE](http://sqlfiddle.com/#!6/27c24/1)
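The same gaps-and-islands CTE runs unchanged on SQLite 3.25+ via Python, if you want a local check with the question's numbers:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE yourtable (Number INTEGER)")
conn.executemany("INSERT INTO yourtable VALUES (?)",
                 [(n,) for n in (1, 2, 3, 4, 5, 11, 12, 13)])

# Consecutive numbers share the same (Number - dense_rank()) value,
# so grouping by it yields one row per unbroken run.
rows = conn.execute("""
    WITH CTE AS (SELECT Number,
                        Number - dense_rank() OVER (ORDER BY Number) AS grp
                 FROM yourtable)
    SELECT min(Number), max(Number) FROM CTE GROUP BY grp ORDER BY 1
""").fetchall()
```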
|
Another approach:
```
select n1.number as min,min(n2.number) as max
from table n1
JOIN table n2 ON n1.number<n2.number
AND NOT EXISTS (select 1 from number where number = n1.number-1)
AND NOT EXISTS (select 1 from number where number=n2.number+1)
group by n1.number
order by n1.number
```
|
getting the range of a number column(min/max) if there are missing numbers in between
|
[
"",
"sql",
"sql-server",
"sql-server-2005",
""
] |
I am trying to pull the MemberInstanceID from a table based on the max DateEnd. If DateEnd is NULL I want to pull that row, as it would still be ongoing. I am using SQL Server.
```
select memberinstanceid
from table
group by memberid
having MAX(ISNULL(date_end, '2099-12-31'))
```
This query above doesn't work for me. I have tried different ones and have gotten them to return the separate instances, but not just the one with the max date.
Below is what my table looks like:
```
MemberID MemberInstanceID DateStart   DateEnd
2        abc12            2013-01-01  2013-12-31
4        abc21            2010-01-01  2013-12-31
2        abc10            2015-01-01  NULL
4        abc19            2014-01-01  2014-10-31
```
I would expect my results to look like this:
```
MemberInstanceID
abc10
abc19
```
I have been trying to figure out how to do this but have not had much luck. Any help would be much appreciated. Thanks
|
I think you need something like the following:
```
select MemberID, MemberInstanceID
from table t
where (
-- DateEnd is null...
DateEnd is null
or (
-- ...or pick the latest DateEnd for this member...
DateEnd = (
select max(DateEnd)
from table
where MemberID = t.MemberID
)
-- ... and check there's not a NULL entry for DateEnd for this member
and not exists (
select 1
from table
where MemberID = t.MemberID
and DateEnd is null
)
)
)
```
The problem with this approach would be if there are multiple rows that match for each member, i.e. multiple NULL rows with the same MemberID, or multiple rows with the same DateEnd for the same MemberID.
|
You have a good start but you don't need to perform any explicit grouping. What you want is the row where the EndDate is null or is the largest value (latest date) of all the records with the same MemberID. You also realized that the Max couldn't return the latest non-null date because the null, if one exists, must be the latest date.
```
select m.*
from Members m
where m.DateEnd is null
or m.DateEnd =(
select Max( IsNull( DateEnd, '9999-12-31' ))
from Members
where MemberID = m.MemberID );
```
|
Select most recent InstanceID base on max end date
|
[
"",
"sql",
""
] |
I'm having trouble figuring out how to return a query result where I have row values that I want to turn into columns.
In short, here is an example of my current schema in SQL Server 2008:
> 
And here is an example of what I would like the query result to look like:
> 
Here is the SQLFiddle.com to play with - <http://sqlfiddle.com/#!6/6b394/1/0>
Some helpful notes about the schema:
* The table will always contain two rows for each day
* One row for Apples, and one row for Oranges
* I am trying to combine each pair of two rows into one row
* To do that I need to convert the row values into their own columns, but only the values for [NumOffered], [NumTaken], [NumAbandoned], [NumSpoiled] - NOT every column for each of these two rows needs to be duplicated, as you can see from the example
As you can see, from the example desired end result image, the two rows are combined, and you can see each value has its own column with a relevant name.
I've seen several examples of how this is possible, but not quite applicable for my purposes. I've seen examples using Grouping and Case methods. I've seen many uses of PIVOT, and even some custom function creation in SQL. I'm not sure which is the best approach for me. Can I get some insight on this?
|
There are many different ways that you can get the result. Multiple JOINs, unpivot/pivot or CASE with aggregate.
They all have pros and cons, so you'll need to decide what will work best for your situation.
Multiple joins: now, you've stated that you will always have 2 rows for each day, one for apples and one for oranges. When joining on the table multiple times you need some sort of column to join on. It appears that column is `timestamp`, but what happens if you have a day with only one row? Then the [INNER JOIN](https://stackoverflow.com/a/29084817/426671) solution provided by [@Becuzz won't work](http://sqlfiddle.com/#!6/22219/4) because it will only return the rows with both entries per day. You could instead use multiple joins with a `FULL JOIN`, which will return the data even if there is only one entry per day:
```
select
[Timestamp] = Coalesce(a.Timestamp, o.Timestamp),
ApplesNumOffered = a.[NumOffered],
ApplesNumTaken = a.[NumTaken],
ApplesNumAbandoned = a.[NumAbandoned],
ApplesNumSpoiled = a.[NumSpoiled],
OrangesNumOffered = o.[NumOffered],
OrangesNumTaken = o.[NumTaken],
OrangesNumAbandoned = o.[NumAbandoned],
OrangesNumSpoiled = o.[NumSpoiled]
from
(
select timestamp, numoffered, NumTaken, numabandoned, numspoiled
from myTable
where FruitType = 'Apple'
) a
full join
(
select timestamp, numoffered, NumTaken, numabandoned, numspoiled
from myTable
where FruitType = 'Orange'
) o
on a.Timestamp = o.Timestamp
order by [timestamp];
```
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!6/fae84/5). Another issue with multiple joins: if you have more than 2 values, you'll need an additional join for each value.
If you have a limited number of values then I'd suggest using an aggregate function and a CASE expression to get the result:
```
SELECT
[timestamp],
sum(case when FruitType = 'Apple' then NumOffered else 0 end) AppleNumOffered,
sum(case when FruitType = 'Apple' then NumTaken else 0 end) AppleNumTaken,
sum(case when FruitType = 'Apple' then NumAbandoned else 0 end) AppleNumAbandoned,
sum(case when FruitType = 'Apple' then NumSpoiled else 0 end) AppleNumSpoiled,
sum(case when FruitType = 'Orange' then NumOffered else 0 end) OrangeNumOffered,
sum(case when FruitType = 'Orange' then NumTaken else 0 end) OrangeNumTaken,
sum(case when FruitType = 'Orange' then NumAbandoned else 0 end) OrangeNumAbandoned,
sum(case when FruitType = 'Orange' then NumSpoiled else 0 end) OrangeNumSpoiled
FROM myTable
group by [timestamp];
```
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!6/6b394/10). Or even using [PIVOT/UNPIVOT](https://stackoverflow.com/a/29084921/426671) like @M.Ali has. The problem with these approaches is unknown values, meaning more than just `Apple` and `Orange`. You are then left with using dynamic SQL to get the result. Dynamic SQL builds a string of SQL that is then executed by the engine:
```
DECLARE @cols AS NVARCHAR(MAX),
@query AS NVARCHAR(MAX)
select @cols = STUFF((SELECT ',' + QUOTENAME(FruitType + col)
from
(
select FruitType
from myTable
) d
cross apply
(
select 'NumOffered', 0 union all
select 'NumTaken', 1 union all
select 'NumAbandoned', 2 union all
select 'NumSpoiled', 3
) c (col, so)
group by FruitType, Col, so
order by FruitType, so
FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)')
,1,1,'')
set @query = 'SELECT TimeStamp,' + @cols + '
from
(
select TimeStamp,
new_col = FruitType+col, value
from myTable
cross apply
(
select ''NumOffered'', NumOffered union all
                select ''NumTaken'', NumTaken union all
                select ''NumAbandoned'', NumAbandoned union all
                select ''NumSpoiled'', NumSpoiled
) c (col, value)
) x
pivot
(
sum(value)
for new_col in (' + @cols + ')
) p '
exec sp_executesql @query;
```
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!6/22219/9)
All versions give the result:
```
| timestamp | AppleNumOffered | AppleNumTaken | AppleNumAbandoned | AppleNumSpoiled | OrangeNumOffered | OrangeNumTaken | OrangeNumAbandoned | OrangeNumSpoiled |
|---------------------------|-----------------|---------------|-------------------|-----------------|------------------|----------------|--------------------|------------------|
| January, 01 2015 00:00:00 | 55 | 12 | 0 | 0 | 12 | 5 | 0 | 1 |
| January, 02 2015 00:00:00 | 21 | 6 | 2 | 1 | 60 | 43 | 0 | 0 |
| January, 03 2015 00:00:00 | 49 | 17 | 2 | 1 | 109 | 87 | 12 | 1 |
| January, 04 2015 00:00:00 | 6 | 4 | 0 | 0 | 53 | 40 | 0 | 1 |
| January, 05 2015 00:00:00 | 32 | 14 | 1 | 0 | 41 | 21 | 5 | 0 |
| January, 06 2015 00:00:00 | 26 | 24 | 0 | 1 | 97 | 30 | 10 | 1 |
| January, 07 2015 00:00:00 | 17 | 9 | 2 | 0 | 37 | 27 | 0 | 4 |
| January, 08 2015 00:00:00 | 83 | 80 | 3 | 0 | 117 | 100 | 5 | 1 |
```
|
Given your criteria, joining the two companion rows together and selecting out the appropriate fields seems like the simplest answer. You could go the route of PIVOTs, UNIONs and GROUP BYs, etc., but that seems like overkill.
```
select apples.Timestamp
, apples.[NumOffered] as ApplesNumOffered
, apples.[NumTaken] as ApplesNumTaken
, apples.[NumAbandoned] as ApplesNumAbandoned
, apples.[NumSpoiled] as ApplesNumSpoiled
, oranges.[NumOffered] as OrangesNumOffered
, oranges.[NumTaken] as OrangesNumTaken
, oranges.[NumAbandoned] as OrangesNumAbandoned
, oranges.[NumSpoiled] as OrangesNumSpoiled
from myTable apples
inner join myTable oranges on oranges.Timestamp = apples.Timestamp
where apples.FruitType = 'Apple'
and oranges.FruitType = 'Orange'
```
|
SQL query for returning row values as columns - SQL Server 2008
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
This query could return the wrong name, because the names I want to query are those of the smallest animals of each species. How could I get the right `a.name`?
```
SELECT
a.name,
MIN(a.size)
FROM animal a
LEFT JOIN species s ON s.idSpecies = a.idAnimal
GROUP BY s.id
```
|
One approach for this is to first find the smallest size of animal per species (as you have done). I am assuming that species can never be null, since an animal must belong to a species; this step also does not require a join to species:
```
SELECT a.IDSpecies, MIN(a.Size) AS Size
FROM Animal AS a
GROUP BY a.IDSpecies
```
Now you can join the result of this query back to your main query to filter the results.
```
SELECT a.Name AS AnimalName,
a.Size,
s.Name AS SpeciesName
FROM Animal AS a
INNER JOIN Species AS s
ON s.ID = a.IDSpecies
INNER JOIN
( SELECT a.IDSpecies, MIN(a.Size) AS Size
FROM Animal AS a
GROUP BY a.IDSpecies
) AS ma
ON ma.IDSpecies = a.IDSpecies
AND ma.Size = a.Size;
```
Another way of doing this, is to use `NOT EXISTS`:
```
SELECT a.Name AS AnimalName,
a.Size,
s.Name AS SpeciesName
FROM Animal AS a
INNER JOIN Species AS s
ON s.ID = a.IDSpecies
WHERE NOT EXISTS
( SELECT 1
FROM Animal AS a2
WHERE a2.IDSpecies = a.IDSpecies
AND a2.Size < a.Size
);
```
So you start with a simple select, then use `NOT EXISTS` to remove any animals, where a smaller animal exists in the same species.
Since MySQL will [optimize `LEFT JOIN/IS NULL` better than `NOT EXISTS`](http://explainextended.com/2009/09/18/not-in-vs-not-exists-vs-left-join-is-null-mysql/), then the better way of writing the query in MySQL is:
```
SELECT a.Name AS AnimalName,
a.Size,
s.Name AS SpeciesName
FROM Animal AS a
INNER JOIN Species AS s
ON s.ID = a.IDSpecies
LEFT JOIN Animal AS a2
ON a2.IDSpecies = a.IDSpecies
AND a2.Size < a.Size
WHERE a2.ID IS NULL;
```
The concept is *exactly* the same as `NOT EXISTS`, but does not require a correlated subquery.
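As an illustration only, here is a minimal sqlite3 sketch (not MySQL) of the `LEFT JOIN / IS NULL` anti-join above; the table and column names mirror the answer, and the data is made up:

```python
import sqlite3

# Build a tiny Species/Animal schema and find the smallest animal per species
# with the LEFT JOIN / IS NULL anti-join pattern from the answer.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Species (ID INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE Animal  (ID INTEGER PRIMARY KEY, IDSpecies INTEGER, Name TEXT, Size INTEGER);
INSERT INTO Species VALUES (1, 'Cat'), (2, 'Dog');
INSERT INTO Animal VALUES
  (1, 1, 'Tom',  40),
  (2, 1, 'Kit',  25),   -- smallest cat
  (3, 2, 'Rex',  60),
  (4, 2, 'Fido', 30);   -- smallest dog
""")

rows = conn.execute("""
SELECT a.Name, a.Size, s.Name
FROM Animal a
JOIN Species s ON s.ID = a.IDSpecies
LEFT JOIN Animal a2
       ON a2.IDSpecies = a.IDSpecies
      AND a2.Size < a.Size
WHERE a2.ID IS NULL
ORDER BY s.Name
""").fetchall()
print(rows)  # the smallest animal of each species
```

Any animal for which a smaller same-species row exists gets a non-null `a2.ID` and is filtered out.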
|
Following the simple example of Group By :
```
SELECT C.CountryName Country, SN.StateName, COUNT(CN.ID) CityCount
FROM Table_StatesName SN
JOIN Table_Countries C ON C.ID = SN.CountryID
JOIN Table_CityName CN ON CN.StateID = SN.ID
GROUP BY C.CountryName, SN.StateName ORDER BY SN.StateName
```
OUTPUT :
```
Country StateName CityCount
Australia Australian Capital Territory 219
Australia New South Wales 2250
Australia Northern Territory 218
Australia Queensland 2250
Australia South Australia 1501
Australia Tasmania 613
Australia Victoria 2250
```
|
How to get multiple fields from one criteria using SQL group by?
|
[
"",
"sql",
"mysqli",
""
] |
We have a table in MySql with arround 30 million records, the following is table structure
```
CREATE TABLE `campaign_logs` (
`domain` varchar(50) DEFAULT NULL,
`campaign_id` varchar(50) DEFAULT NULL,
`subscriber_id` varchar(50) DEFAULT NULL,
`message` varchar(21000) DEFAULT NULL,
`log_time` datetime DEFAULT NULL,
`log_type` varchar(50) DEFAULT NULL,
`level` varchar(50) DEFAULT NULL,
`campaign_name` varchar(500) DEFAULT NULL,
KEY `subscriber_id_index` (`subscriber_id`),
KEY `log_type_index` (`log_type`),
KEY `log_time_index` (`log_time`),
KEY `campid_domain_logtype_logtime_subid_index` (`campaign_id`,`domain`,`log_type`,`log_time`,`subscriber_id`),
KEY `domain_logtype_logtime_index` (`domain`,`log_type`,`log_time`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 |
```
Following is my query
I'm doing UNION ALL instead of using IN operation
```
SELECT log_type,
DATE_FORMAT(CONVERT_TZ(log_time,'+00:00','+05:30'),'%l %p') AS log_date,
count(DISTINCT subscriber_id) AS COUNT,
COUNT(subscriber_id) AS total
FROM stats.campaign_logs USE INDEX(campid_domain_logtype_logtime_subid_index)
WHERE DOMAIN='xxx'
AND campaign_id='123'
AND log_type = 'EMAIL_OPENED'
AND log_time BETWEEN CONVERT_TZ('2015-02-01 00:00:00','+00:00','+05:30') AND CONVERT_TZ('2015-03-01 23:59:58','+00:00','+05:30')
GROUP BY log_date
UNION ALL
SELECT log_type,
DATE_FORMAT(CONVERT_TZ(log_time,'+00:00','+05:30'),'%l %p') AS log_date,
COUNT(DISTINCT subscriber_id) AS COUNT,
COUNT(subscriber_id) AS total
FROM stats.campaign_logs USE INDEX(campid_domain_logtype_logtime_subid_index)
WHERE DOMAIN='xxx'
AND campaign_id='123'
AND log_type = 'EMAIL_SENT'
AND log_time BETWEEN CONVERT_TZ('2015-02-01 00:00:00','+00:00','+05:30') AND CONVERT_TZ('2015-03-01 23:59:58','+00:00','+05:30')
GROUP BY log_date
UNION ALL
SELECT log_type,
DATE_FORMAT(CONVERT_TZ(log_time,'+00:00','+05:30'),'%l %p') AS log_date,
COUNT(DISTINCT subscriber_id) AS COUNT,
COUNT(subscriber_id) AS total
FROM stats.campaign_logs USE INDEX(campid_domain_logtype_logtime_subid_index)
WHERE DOMAIN='xxx'
AND campaign_id='123'
AND log_type = 'EMAIL_CLICKED'
AND log_time BETWEEN CONVERT_TZ('2015-02-01 00:00:00','+00:00','+05:30') AND CONVERT_TZ('2015-03-01 23:59:58','+00:00','+05:30')
GROUP BY log_date
```
Following is my Explain statement
```
+----+--------------+---------------+-------+-------------------------------------------+-------------------------------------------+---------+------+--------+------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+--------------+---------------+-------+-------------------------------------------+-------------------------------------------+---------+------+--------+------------------------------------------+
| 1 | PRIMARY | campaign_logs | range | campid_domain_logtype_logtime_subid_index | campid_domain_logtype_logtime_subid_index | 468 | NULL | 55074 | Using where; Using index; Using filesort |
| 2 | UNION | campaign_logs | range | campid_domain_logtype_logtime_subid_index | campid_domain_logtype_logtime_subid_index | 468 | NULL | 330578 | Using where; Using index; Using filesort |
| 3 | UNION | campaign_logs | range | campid_domain_logtype_logtime_subid_index | campid_domain_logtype_logtime_subid_index | 468 | NULL | 1589 | Using where; Using index; Using filesort |
| NULL | UNION RESULT | <union1,2,3> | ALL | NULL | NULL | NULL | NULL | NULL | |
+----+--------------+---------------+-------+-------------------------------------------+-------------------------------------------+---------+------+--------+------------------------------------------+
```
1. I changed COUNT(subscriber\_id) to COUNT(\*) and observed no performance gain.
2. I removed COUNT(DISTINCT subscriber\_id) from the query; then I got a huge performance gain. I'm getting results in approx 1.5 sec, where previously it was taking 50 sec - 1 minute. But I need the distinct count of subscriber\_id from the query.
Following is the explain output when I remove COUNT(DISTINCT subscriber\_id) from the query:
```
+----+--------------+---------------+-------+-------------------------------------------+-------------------------------------------+---------+------+--------+-----------------------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+--------------+---------------+-------+-------------------------------------------+-------------------------------------------+---------+------+--------+-----------------------------------------------------------+
| 1 | PRIMARY | campaign_logs | range | campid_domain_logtype_logtime_subid_index | campid_domain_logtype_logtime_subid_index | 468 | NULL | 55074 | Using where; Using index; Using temporary; Using filesort |
| 2 | UNION | campaign_logs | range | campid_domain_logtype_logtime_subid_index | campid_domain_logtype_logtime_subid_index | 468 | NULL | 330578 | Using where; Using index; Using temporary; Using filesort |
| 3 | UNION | campaign_logs | range | campid_domain_logtype_logtime_subid_index | campid_domain_logtype_logtime_subid_index | 468 | NULL | 1589 | Using where; Using index; Using temporary; Using filesort |
| NULL | UNION RESULT | <union1,2,3> | ALL | NULL | NULL | NULL | NULL | NULL | |
+----+--------------+---------------+-------+-------------------------------------------+-------------------------------------------+---------+------+--------+-----------------------------------------------------------+
```
3. I ran the three queries individually by removing UNION ALL. One query took 32 seconds, the others take 1.5 seconds each, but the first query is dealing with around 350K records while the others deal with only 2K rows.
I could solve my performance problem by leaving out `COUNT(DISTINCT...)` but I need those values. Is there a way to refactor my query, or add an index, or something, to get the `COUNT(DISTINCT...)` values, but much faster?
**UPDATE**
the following information is about data distribution of above table
for
1 domain
1 campaign
20 log\_types
1k-200k subscribers
The above query I'm running for , the domain having 180k+ subscribers.
|
If the query without the `count(distinct)` is going much faster, perhaps you can do nested aggregation:
```
SELECT log_type, log_date,
count(*) AS COUNT, sum(cnt) AS total
FROM (SELECT log_type,
DATE_FORMAT(CONVERT_TZ(log_time,'+00:00','+05:30'),'%l %p') AS log_date,
subscriber_id, count(*) as cnt
FROM stats.campaign_logs USE INDEX(campid_domain_logtype_logtime_subid_index)
WHERE DOMAIN = 'xxx' AND
campaign_id = '123' AND
log_type IN ('EMAIL_SENT', 'EMAIL_OPENED', 'EMAIL_CLICKED') AND
log_time BETWEEN CONVERT_TZ('2015-02-01 00:00:00','+00:00','+05:30') AND
CONVERT_TZ('2015-03-01 23:59:58','+00:00','+05:30')
GROUP BY log_type, log_date, subscriber_id
) l
GROUP BY log_type, log_date;
```
With a bit of luck, this will take 2-3 seconds rather than 50. However, you might need to break this out into subqueries to get full performance. So, if this does not give a significant performance gain, change the `in` back to `=` for one of the types. If that works, then the `union all` may be necessary.
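To see why the nested aggregation is equivalent, here is a small sqlite3 sketch (made-up data): the inner query collapses to one row per subscriber, so the outer `COUNT(*)` counts distinct subscribers while `SUM(cnt)` recovers the raw row count:

```python
import sqlite3

# Compare nested aggregation against COUNT(DISTINCT) on the same data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE campaign_logs (log_type TEXT, subscriber_id TEXT)")
conn.executemany("INSERT INTO campaign_logs VALUES (?, ?)",
                 [("EMAIL_SENT", "s1"), ("EMAIL_SENT", "s1"),
                  ("EMAIL_SENT", "s2"), ("EMAIL_OPENED", "s1")])

nested = conn.execute("""
SELECT log_type, COUNT(*) AS distinct_subs, SUM(cnt) AS total
FROM (SELECT log_type, subscriber_id, COUNT(*) AS cnt
      FROM campaign_logs
      GROUP BY log_type, subscriber_id)
GROUP BY log_type ORDER BY log_type
""").fetchall()

direct = conn.execute("""
SELECT log_type, COUNT(DISTINCT subscriber_id), COUNT(subscriber_id)
FROM campaign_logs GROUP BY log_type ORDER BY log_type
""").fetchall()

print(nested == direct)  # same result, different access pattern
```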
EDIT:
Another attempt is to use variables to enumerate the values before the `group by`:
```
SELECT log_type, log_date, count(*) as cnt,
SUM(rn = 1) as sub_cnt
FROM (SELECT log_type,
DATE_FORMAT(CONVERT_TZ(log_time,'+00:00','+05:30'),'%l %p') AS log_date,
subscriber_id,
(@rn := if(@clt = concat_ws(':', campaign_id, log_type, log_time), @rn + 1,
if(@clt := concat_ws(':', campaign_id, log_type, log_time), 1, 1)
)
) as rn
FROM stats.campaign_logs USE INDEX(campid_domain_logtype_logtime_subid_index) CROSS JOIN
(SELECT @rn := 0)
WHERE DOMAIN = 'xxx' AND
campaign_id = '123' AND
log_type IN ('EMAIL_SENT', 'EMAIL_OPENED', 'EMAIL_CLICKED') AND
log_time BETWEEN CONVERT_TZ('2015-02-01 00:00:00', '+00:00', '+05:30') AND
CONVERT_TZ('2015-03-01 23:59:58', '+00:00', '+05:30')
ORDER BY log_type, log_date, subscriber_id
) t
GROUP BY log_type, log_date;
```
This still requires another sort of the data, but it might help.
|
To answer your question:
> Is there a way to refactor my query, or add an index, or something, to
> get the COUNT(DISTINCT...) values, but much faster?
Yes, do not group by the calculated field (do not group by the result of the function). Instead, pre-calculate it, save it to the persistent column and include this persistent column into the index.
I would try to do the following and see if it changes performance significantly.
1) Simplify the query and focus on one part.
Leave only one longest running `SELECT` out of the three, get rid of `UNION` for the tuning period. Once the longest `SELECT` is optimized, add more and check how the full query works.
2) Grouping by the result of the function doesn't let the engine use the index efficiently.
Add another column to the table (at first temporarily, just to check the idea) with the result of this function. As far as I can see you want to group by 1 hour, so add column `log_time_hour datetime` and set it to `log_time` rounded/truncated to the nearest hour (preserve the date component).
Add index using new column: `(domain, campaign_id, log_type, log_time_hour, subscriber_id)`. The order of first three columns in the index should not matter (because you use equality compare to some constant in the query, not the range), but make them in the same order as in the query. Or, better, make them in the index definition and in the query in the order of selectivity. If you have `100,000` campaigns, `1000` domains and `3` log types, then put them in this order: `campaign_id, domain, log_type`. It should not matter much, but is worth checking. `log_time_hour` has to come fourth in the index definition and `subscriber_id` last.
In the query use new column in `WHERE` and in `GROUP BY`. Make sure that you include all needed columns in the `GROUP BY`: both `log_type` and `log_time_hour`.
Do you need both `COUNT` and `COUNT(DISTINCT)`? Leave only `COUNT` first and measure the performance. Leave only `COUNT(DISTINCT)` and measure the performance. Leave both and measure the performance. See how they compare.
```
SELECT log_type,
log_time_hour,
count(DISTINCT subscriber_id) AS distinct_total,
COUNT(subscriber_id) AS total
FROM stats.campaign_logs
WHERE DOMAIN='xxx'
AND campaign_id='123'
AND log_type = 'EMAIL_OPENED'
AND log_time_hour >= '2015-02-01 00:00:00'
AND log_time_hour < '2015-03-02 00:00:00'
GROUP BY log_type, log_time_hour
```
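A quick sqlite3 sketch (made-up data, illustrative only) of the pre-computed-hour idea: store `log_time` truncated to the hour in its own column, then group on that column directly instead of grouping on a function of `log_time`:

```python
import sqlite3

# Populate log_time_hour at insert time, then GROUP BY the stored column.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE campaign_logs
                (log_type TEXT, subscriber_id TEXT,
                 log_time TEXT, log_time_hour TEXT)""")
rows = [("EMAIL_OPENED", "s1", "2015-02-01 10:15:00"),
        ("EMAIL_OPENED", "s1", "2015-02-01 10:45:00"),
        ("EMAIL_OPENED", "s2", "2015-02-01 11:05:00")]
conn.executemany(
    "INSERT INTO campaign_logs VALUES (?, ?, ?, strftime('%Y-%m-%d %H:00:00', ?))",
    [(t, s, ts, ts) for t, s, ts in rows])

out = conn.execute("""
SELECT log_time_hour,
       COUNT(DISTINCT subscriber_id) AS distinct_total,
       COUNT(subscriber_id) AS total
FROM campaign_logs
WHERE log_type = 'EMAIL_OPENED'
GROUP BY log_time_hour ORDER BY log_time_hour
""").fetchall()
print(out)
```

In a real MySQL table the column would be maintained by the application or a trigger and included in the index, as the answer describes.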
|
Optimizing COUNT(DISTINCT) slowness even with covering indexes
|
[
"",
"mysql",
"sql",
"aggregate-functions",
"query-performance",
"mysql-variables",
""
] |
I'm trying to search in an database for records with a specific date. I've tried to search in the following ways:
```
SELECT *
FROM TABLE_1
WHERE CAL_DATE=01/01/2015
```
and
```
SELECT *
FROM TABLE_1
WHERE CAL_DATE='01/01/2015'
```
I'm working with an Access database, and in the table, the dates are showing in the same format (01/01/2015). Is there something I'm missing in the SQL statement?
|
Any of the options below should work:
Format the date directly in your query.
```
SELECT *
FROM TABLE_1
WHERE CAL_DATE=#01/01/2015#;
```
The DateValue function will convert a string to a date.
```
SELECT *
FROM TABLE_1
WHERE CAL_DATE=DateValue('01/01/2015');
```
The CDate function will convert a value to a date.
```
SELECT *
FROM TABLE_1
WHERE CAL_DATE=CDate('01/01/2015');
```
The DateSerial function will return a date given the year, month, and day.
```
SELECT *
FROM TABLE_1
WHERE CAL_DATE=DateSerial(2015, 1, 1);
```
See the following page for more information on the above functions: [techonthenet.com](http://www.techonthenet.com/access/functions/)
|
Try using CDATE function on your filter:
```
WHERE CAL_DATE = CDATE('01/01/2015')
```
This will ensure that your input is of date datatype, not a string.
|
Finding a Specific Date with SQL
|
[
"",
"sql",
"datetime",
"ms-access",
""
] |
My question is similar to this.
[Check if column value exists in another column in SQL](https://stackoverflow.com/questions/21800481/check-if-column-value-exists-in-another-column-in-sql)
In my case, however, there're up to five select statements in the `IN` statement. It looks something like this :
`SELECT Criterion1 FROM tblFilter WHERE Criterion1 IN (SELECT Criterion2 FROM tblFilter, SELECT Criterion3 FROM tblFilter)`
Yet, when I run it in vb.net, I get a syntax error from this query. So, what is the correct one?
|
For Access SQL you will need to use a construct like this:
```
SELECT ...
FROM tblFilter
WHERE Criterion1 IN (SELECT Criterion2 FROM tblFilter)
OR Criterion1 IN (SELECT Criterion3 FROM tblFilter)
```
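The same shape works in most SQL dialects; as a quick illustration with made-up data, here is a sqlite3 sketch of the two `IN` subqueries joined with `OR`:

```python
import sqlite3

# Criterion1 values that appear in either Criterion2 or Criterion3 are returned.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblFilter (Criterion1 TEXT, Criterion2 TEXT, Criterion3 TEXT)")
conn.executemany("INSERT INTO tblFilter VALUES (?, ?, ?)",
                 [("a", "a", "x"), ("b", "c", "b"), ("d", "e", "f")])

rows = conn.execute("""
SELECT Criterion1 FROM tblFilter
WHERE Criterion1 IN (SELECT Criterion2 FROM tblFilter)
   OR Criterion1 IN (SELECT Criterion3 FROM tblFilter)
""").fetchall()
print(rows)  # 'a' matches a Criterion2 value, 'b' a Criterion3 value; 'd' matches neither
```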
|
Try this:
`SELECT Criterion1 FROM tblFilter WHERE Criterion1 IN (SELECT Criterion2 FROM tblFilter UNION SELECT Criterion3 FROM tblFilter)`
UNION combines both of your select statements, but they have to return the same number of columns (1 in your statement) and the same types.
EDIT:
Try this:
`SELECT Criterion1 FROM tblFilter t1, tblFilter t2, tblFilter t3 WHERE t1.Criterion1 = t2.Criterion2 OR t1.Criterion1 = t3.Criterion3`
|
SQL IN statement with multiple SELECT statements in it
|
[
"",
"sql",
"ms-access",
"select",
""
] |
I want to randomly select 20 rows from a large table and use the following query that works fine:
```
SELECT id
FROM timeseriesentry
WHERE random() < 20*1.0/12940622
```
(12940622 is the number of rows in the table). I now want to retrieve the number of rows automatically and use
```
WITH tmp AS (SELECT COUNT(*) n FROM timeseriesentry)
SELECT id
FROM timeseriesentry, tmp
WHERE random() < 20*1.0/n
```
which yields zero rows even though n is correct.
What am I missing here?
Edit: id is not numerical which is why I can't create a random series to select from it. I need the proposed structure because my actual goal is
```
WITH npt AS (
SELECT type, COUNT(*) n
FROM timeseriesentry
GROUP BY type
)
SELECT v.id
FROM timeseriesentry v
JOIN npt ON npt.type= v.type
WHERE random() < 200*1.0/npt.n
```
which forces roughly the same amount of samples per type.
|
This is ugly, but it works. It also avoids the identifier `type`, which is an (unreserved) keyword.
```
WITH zzz AS (
SELECT ztype
, COUNT(*) AS cnt
FROM timeseriesentry
GROUP BY ztype)
SELECT *
FROM timeseriesentry src
WHERE random() < 20.0 / (SELECT cnt FROM zzz
WHERE zzz.ztype = src.ztype)
ORDER BY src.ztype
;
```
---
UPDATE: the same with a window function in a subquery:
```
SELECT *
FROM (SELECT *
, sum(1) OVER (PARTITION BY ztype) AS cnt
FROM timeseriesentry
) src
WHERE random() < 20.0 / src.cnt
ORDER BY src.ztype
;
```
Or, a bit more compact, the same thing, but using a CTE:
```
WITH src AS(SELECT *
, sum(1) OVER (PARTITION BY ztype) AS cnt
FROM timeseriesentry
)
SELECT *
FROM src
WHERE random() < 20.0 / src.cnt
ORDER BY src.ztype
;
```
Beware: the CTE-versions are not necessarily equal in performance. In fact they often are slower. (Since the original query actually needs to visit *all* the rows of the timeseriesentry table in either case, there will not be much difference in this particular case.)
|
I created a table with no numeric field:
```
create table timeseriesentry as select generate_series('2015-01-01'::timestamptz,'2015-01-02'::timestamptz,'1 second'::interval) id, 'ret'::text v
;
```
and reused window aggregation:
```
WITH tmp AS (SELECT round(count(*) over()*random()) n FROM timeseriesentry limit 20)
select id from
(SELECT row_number() over() rn,id
FROM timeseriesentry
) sel, tmp
WHERE rn =n
;
```
so it gives "random" 20:
```
2015-01-01 01:27:22+01
2015-01-01 03:33:51+01
2015-01-01 06:15:28+01
2015-01-01 09:52:21+01
2015-01-01 10:00:02+01
2015-01-01 10:08:33+01
2015-01-01 10:26:31+01
2015-01-01 12:55:21+01
2015-01-01 14:03:54+01
2015-01-01 14:05:36+01
2015-01-01 15:12:08+01
2015-01-01 15:45:55+01
2015-01-01 16:10:35+01
2015-01-01 17:11:02+01
2015-01-01 18:18:32+01
2015-01-01 19:35:51+01
2015-01-01 22:06:08+01
2015-01-01 22:12:42+01
2015-01-01 22:43:45+01
2015-01-01 22:49:55+01
```
|
Comparison with calculated value fails in PostgreSQL
|
[
"",
"sql",
"postgresql",
""
] |
I've been given part of a project financial reporting system to work on. Basically, I'm trying to modify an existing query to restrict returned results by date in a field which contains a variable date format.
My query is being sent the date from someone else's code in the format YYYY twice, e.g. 2014 and 2017. In the SQL query below they are listed as 2014 and 2017, so you'll just have to imagine them as variables.
The fields in the database which have the variable date forms come in two forms: YYYYMMDD or YYYYMM.
The existing query looks like:
```
SELECT
'Expense' AS Type,
dbo.Department.Description AS [Country Name],
dbo.JCdescription.Description AS Project,
dbo.Detail.AccountCode AS [FIN Code],
dbo.JCdescription.ReportCode1 as [Phase Date Start],
dbo.JCdescription.ReportCode2 as [Phase Date End],
dbo.JCdescription.ReportCode3 as [Ratification Date],
dbo.Detail.Year AS [Transaction Year],
dbo.Detail.Period AS [Transaction Year Period],
...
FROM
dbo.Detail
INNER JOIN
...
WHERE
(dbo.Detail.LedgerCode = 'jc')
...
AND
(dbo.Detail.Year) BETWEEN '2014 AND 2017";
```
Ideally I'd like to change the last line to:
```
(dbo.JCdescription.ReportCode2 LIKE '[2014-2017]%')
```
BUT this matches the individual characters inside the brackets instead of everything between 2014 and 2017.
I'm sure I must be missing something simple here, but I can't find it! I realise I could rephrase it as `LIKE '201[4567]%'`, but this means searches outside of 2010-2020 will fail, and it requires me to start parsing the variables sent, which will introduce an additional function that will be called a lot. I'd rather not do it. I just need the two numbers, 2014 and 2017, to be treated as whole numbers instead of 4 digits!
Running on MS SQL server 10.0.5520
|
I guess you could use the [**year**](https://msdn.microsoft.com/en-us/library/ms186313.aspx) system function if your date columns are any of the given [**date** or **datetime** data types available in SQL Server](https://msdn.microsoft.com/en-CA/library/ms186724.aspx).
This would allow you to write something that looks pretty much like the following.
```
where year(MyDateColumn) between 2014 and 2017
```
Besides, if you're using [**varchar**](https://msdn.microsoft.com/en-CA/library/ms176089.aspx) as the date column data type, you will have to [**cast**](https://msdn.microsoft.com/en-us/library/ms187928.aspx) them to the appropriate and comparable values, and you'll also have to make sure to get the only required [**substring**](https://msdn.microsoft.com/en-us/library/ms187748.aspx) you need for the comparison.
So let's suppose you have string value like '201505' in your `dateStringColumn`.
```
where cast(substring(dateStringColumn, 1, 4) as int) between 2014 and 2017
```
|
So from my understanding you have 2 parameters for the start and end years that should be used to filter results.
With dates either being `YYYYMM` or `YYYYMMDD`, you can use string manipulation to take the first 4 characters and convert it to an `INT`, which can then be used to compare to the parameters.
Something like:
```
CREATE TABLE #detail ( id INT , SomeDate NVARCHAR(10) )
INSERT INTO #detail
( id, SomeDate )
VALUES ( 1, '201311' ),
( 2, '201402' ),
( 3, '20140313' ),
( 4, '201409' ),
( 5, '201506' ),
( 6, '20150912' ),
( 7, '201612' ),
( 8, '201701' ),
( 9, '20181212' )
DECLARE @startYear INT = 2014, @endYear INT = 2017
SELECT *, CONVERT(INT, (LEFT(SomeDate, 4))) AS [Year]
FROM #detail
WHERE CONVERT(INT, (LEFT(SomeDate, 4))) BETWEEN @startYear AND @endYear
DROP TABLE #detail
```
Based on the sample data, this would produce:
```
id SomeDate Year
====================
2 201402 2014
3 20140313 2014
4 201409 2014
5 201506 2015
6 20150912 2015
7 201612 2016
8 201701 2017
```
(excludes rows 1 & 9)
|
SQL like syntax using date YYYY
|
[
"",
"sql",
"sql-server",
"syntax",
"sql-like",
""
] |
How to set dynamic count for `GO` statement?
I am getting the following error:
> A fatal scripting error occurred. Incorrect syntax was encountered
> while parsing Go.
when I tried to run the below query:
```
Declare @count int
Select @count=COUNT(*) From Users
Insert Into #DummyUsers
Select * from Users where UserName = 'Sachin'
GO @Count
```
But the same is working fine when I use the below query with hard coded count.
```
Declare @count int
Select @count=COUNT(*) From Users
Insert Into #DummyUsers
Select * from Users where UserName = 'Sachin'
GO 5
```
Appreciate your suggestions if you have any idea on this.
|
If you simply want to insert a repeated row you could use a CTE or numbers table.
```
-- Sample data.
declare @Users as Table ( UserId Int Identity, Name VarChar(16) );
insert into @Users ( Name ) values
( 'Bob' ), ( 'Carol' ), ( 'Ted' ), ( 'Alice' );
select * from @Users;
-- Load another table with repetitions of a single user.
declare @TempUsers as Table ( UserId Int, Name VarChar(16) );
declare @Repetitions as Int = ( select Count(*) from @Users );
with TempUsers as (
select UserId, Name, 1 as Repetitions
from @Users
where Name = 'Ted'
union all
select UserId, Name, Repetitions + 1
from TempUsers
where Repetitions < @Repetitions
)
insert into @TempUsers ( UserId, Name )
select UserId, Name
from TempUsers;
select * from @TempUsers;
```
|
You can't. As soon as SSMS encounters GO the batch is terminated and your variable no longer exists.
|
How to execute the GO statement with dynamic count?
|
[
"",
"sql",
"sql-server",
"database",
"t-sql",
""
] |
I need to strip out urls from results of a SQL query. All urls look like `http://` so no complex regex is necessary. But the urls can be anywhere in the string.
Examples:
```
alpha beta http://alphabeta.com/abc
gamma http://gammadel.com/def delta
http://epsilo.com/ghi epsilon theta
```
How can I strip these urls from my results to get the following?
```
alpha beta
gamma delta
epsilon theta
```
Notes:
1. URLs (in my use case) *do* always start with `http://`.
2. Only one URL is found per string
3. Ideally solutions will not require additional libraries
|
What about
```
SELECT REPLACE(
'alpha gamma http://gammadel.com/def delta beta',
CONCAT('http://',
SUBSTRING_INDEX(
SUBSTRING_INDEX('alpha gamma http://gammadel.com/def delta beta', 'http://', -1),' ', 1)
),''
);
```
I've tested it for strings that you provided, but not sure if it meets fully your requirements.
Basically what this code does is:
1. extract URL with `SUBSTRING_INDEX()` function
2. replace URL with empty string in the original string.
Here's a full query to test each scenario:
```
SET @str1="foo bar http://foobar.com/abc";
SET @str2="foo http://foobar.com/def bar";
SET @str3="http://foobar.com/ghi foo bar";
SELECT
REPLACE(
@str1,
CONCAT('http://',
SUBSTRING_INDEX(
SUBSTRING_INDEX(@str1, 'http://', -1),
' ', 1
)
),''
) AS str1,
REPLACE(
@str2,
CONCAT('http://',
SUBSTRING_INDEX(
SUBSTRING_INDEX(@str2, 'http://', -1),
' ', 1
)
),''
) AS str2,
REPLACE(
@str3,
CONCAT('http://',
SUBSTRING_INDEX(
SUBSTRING_INDEX(@str3, 'http://', -1),
' ', 1
)
),''
) AS str3
;
```
Returns (as expected) :
```
foo bar
foo bar
foo bar
```
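For reference, the same extract-then-replace idea expressed in plain Python (outside SQL), assuming as in the question that each string contains at most one URL:

```python
def strip_url(s):
    # Mirror of the SQL: take the text after 'http://' up to the next space,
    # re-attach the prefix, and replace that whole URL with an empty string.
    marker = "http://"
    if marker not in s:
        return s
    tail = s.split(marker, 1)[1]          # like SUBSTRING_INDEX(s, 'http://', -1)
    url = marker + tail.split(" ", 1)[0]  # like CONCAT + SUBSTRING_INDEX(..., ' ', 1)
    return s.replace(url, "")

print(strip_url("foo bar http://foobar.com/abc"))  # 'foo bar '
print(strip_url("foo http://foobar.com/def bar"))  # 'foo  bar'
```

Note that, exactly like the SQL version, leftover spaces around the removed URL are not collapsed.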
|
As you cannot use functions like `preg_replace` without having any addons/libraries - and as you've only tagged your question with `mysql`/`sql`, you're going to need to install this to give you the ability to use a regular expression replace. <https://github.com/mysqludf/lib_mysqludf_preg#readme>
Now that's installed, you can run;
```
SELECT CONVERT(
preg_replace('/(http:\/\/[^ \s]+)/i', '', foo)
USING UTF8) AS result
FROM `bar`;
```
This will give results like: <https://regex101.com/r/qX6jB8/1>
|
How can I strip out urls from a string in MySQL?
|
[
"",
"mysql",
"sql",
""
] |
I have a database table with 3 columns. I want to find all duplicates that have snuck in un-noticed and tidy them up.
Table is structured approximately
```
ID ColumnA ColumnB
0 aaa bbb
1 aaa ccc
2 aaa bbb
3 xxx bbb
```
So what would my query look like to return rows 0 and 2, as column A and column B together form a duplicate entry?
Standard sql preferred, but is running on a SQL 2008 server
|
You can create a query that groups and counts the duplicate rows:
```
SELECT COUNT(1) , ColumnA , ColumnB
FROM YourTable
GROUP BY ColumnA , ColumnB
HAVING COUNT(1) > 1
```
You can then add this to a subquery to output the full rows that hold the duplicate data.
Here's a full executable example based on your sample data:
```
CREATE TABLE #YourTable
([ID] INT, [ColumnA] VARCHAR(3), [ColumnB] VARCHAR(3))
;
INSERT INTO #YourTable
([ID], [ColumnA], [ColumnB])
VALUES
(0, 'aaa', 'bbb'),
(1, 'aaa', 'ccc'),
(2, 'aaa', 'bbb'),
(3, 'xxx', 'bbb')
;
SELECT *
FROM #YourTable t1
WHERE EXISTS ( SELECT COUNT(1) , ColumnA , ColumnB
FROM #YourTable
WHERE t1.ColumnA = ColumnA AND t1.ColumnB = ColumnB
GROUP BY ColumnA , ColumnB
HAVING COUNT(1) > 1 )
DROP TABLE #YourTable
```
|
Use `count(*)` as a window function:
```
select t.*
from (select t.*, count(*) over (partition by columna, columnb) as cnt
from table t
) t
where cnt > 1;
```
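As a quick check, here is the window-function approach in sqlite3 (requires SQLite >= 3.25 for window functions) using the question's sample rows:

```python
import sqlite3

# COUNT(*) OVER (PARTITION BY ColumnA, ColumnB) tags every row with the size
# of its duplicate group; rows with cnt > 1 are the duplicates.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (ID INTEGER, ColumnA TEXT, ColumnB TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)",
                 [(0, "aaa", "bbb"), (1, "aaa", "ccc"),
                  (2, "aaa", "bbb"), (3, "xxx", "bbb")])

dupes = conn.execute("""
SELECT ID FROM (
  SELECT ID, COUNT(*) OVER (PARTITION BY ColumnA, ColumnB) AS cnt
  FROM t
) WHERE cnt > 1 ORDER BY ID
""").fetchall()
print(dupes)  # rows 0 and 2 share (aaa, bbb)
```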
|
Find duplicates in a database table, where 2 columns are duplicated
|
[
"",
"sql",
"sql-server",
""
] |
Newbie question, please forgive my ignorance :P
I have a dataset with different columns: quizzes, names, grades. Some students have not done all quizzes. I just want people who did all 8 quizzes. I put this code:
```
SELECT * FROM quiz_attempts
WHERE qname like "onlinelernen_quiz_1"
AND qname like "onlinelernen_quiz_2"
AND qname like "onlinelernen_quiz_3"
AND qname like "onlinelernen_quiz_4"
AND qname like "onlinelernen_quiz_5"
AND qname like "onlinelernen_quiz_6"
AND qname like "onlinelernen_quiz_7"
AND qname like "onlinelernen_quiz_8"
```
Using `IN` or `OR` works, but it shows any student who did any of the quizzes. But I want **only those students who did all 8 quizzes**.
Thanks for the Help.
|
Your query can't work since you are asking for all the records where qname is like "onlinelernen\_quiz\_1" and, **simultaneously**, like "onlinelernen\_quiz\_2", 3 ... , which is impossible.
Try this: it shows only the names of students whose name appears 8 times related to the onlinelernen quizzes:
```
SELECT Q.stname
FROM quiz_attempts AS Q
WHERE Q.qname Like 'onlinelernen_quiz_%'
GROUP BY Q.stname
HAVING Count(Q.stname)=8;
```
Good luck!
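A minimal sqlite3 check of this `GROUP BY` / `HAVING` idea with made-up data: only the student with all 8 quiz attempts survives the `HAVING` clause.

```python
import sqlite3

# 'anna' attempted all 8 quizzes; 'ben' only 3.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE quiz_attempts (stname TEXT, qname TEXT)")
rows = [("anna", f"onlinelernen_quiz_{i}") for i in range(1, 9)]   # all 8
rows += [("ben", f"onlinelernen_quiz_{i}") for i in range(1, 4)]   # only 3
conn.executemany("INSERT INTO quiz_attempts VALUES (?, ?)", rows)

complete = conn.execute("""
SELECT stname FROM quiz_attempts
WHERE qname LIKE 'onlinelernen_quiz_%'
GROUP BY stname
HAVING COUNT(*) = 8
""").fetchall()
print(complete)  # only 'anna'
```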
|
AND only returns a row when all conditions are matched at once. Use OR:
```
SELECT
*
FROM
quiz_attempts
WHERE qname LIKE "onlinelernen_quiz_1"
OR qname LIKE "onlinelernen_quiz_2"
OR qname LIKE "onlinelernen_quiz_3"
OR qname LIKE "onlinelernen_quiz_4"
OR qname LIKE "onlinelernen_quiz_5"
OR qname LIKE "onlinelernen_quiz_6"
OR qname LIKE "onlinelernen_quiz_7"
OR qname LIKE "onlinelernen_quiz_8"
```
|
What is wrong with this simple sql query? it returns 0 rows
|
[
"",
"mysql",
"sql",
""
] |
How to select all the columns of a table except one column?
I have nearly 259 columns and I can't mention all 258 columns in the `SELECT` statement.
Is there any other way to do it?
|
You can use this approach to get the data from all the columns except one:-
1. Insert all the data into a temporary table
2. Then drop the column which you dont want from the temporary table
3. Fetch the data from the temporary table(This will not contain the data of the removed column)
4. Drop the temporary table
Something like this:
```
SELECT * INTO #TemporaryTable FROM YourTableName
ALTER TABLE #TemporaryTable DROP COLUMN Columnwhichyouwanttoremove
SELECT * FROM #TemporaryTable
DROP TABLE #TemporaryTable
```
|
Create a view. Yes, in the view creation statement, you will have to list each...and...every...field...by...name.
Once.
Then just `select * from viewname` after that.
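For illustration, here is a hedged sketch of generating the column list from metadata rather than typing it out; it uses SQLite's `PRAGMA table_info` as a stand-in for `information_schema.columns`, with a made-up table `wide` and excluded column `c`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE wide (a INT, b INT, c INT, d INT)")

# Column names come from metadata, so nothing is typed out by hand.
cols = [row[1] for row in conn.execute("PRAGMA table_info(wide)")]
keep = [c for c in cols if c != "c"]          # drop the unwanted column
query = f"SELECT {', '.join(keep)} FROM wide"
```

The generated string can then be used directly or pasted into a view definition.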
|
Select all the columns of a table except one column?
|
[
"",
"sql",
"sql-server",
""
] |
We have a Transact-SQL query that contains subqueries in the WHERE clause. An inner join adds the tOrderDetails fields multiple times, giving unwanted results, so we resorted to the (poorly performing) subqueries. It is timing out often. We need the same result but with better performance. Any suggestions?
```
SELECT sum (cast(Replace(tOrders.TotalCharges,'$','') as money)) as TotalCharges
FROM [ArtistShare].[dbo].tOrders
WHERE tOrders.isGiftCardRedemption = 0
and tOrders.isTestOrder=0
and tOrders.LastDateUpdate between @startDate and @endDate
and (SELECT count(tORderDetails.ID)
from tORderDetails
where tORderDetails.ORderID = tORders.ORderID
and tOrderDetails.isPromo=1) = 0
and (SELECT top 1 tProjects.ProjectReleaseDt
from tProjects JOIN tOrderDetails
on tOrderDetails.ProjectID = tProjects.ID
where tOrderDetails.OrderID = tOrders.OrderID) >= @startDate
```
|
Although I think `EXISTS` may have the best performance, you might also consider the following approach. When you mention in your question that your query has "unwanted results of adding tOrderDetails fields multiple times" it's probably because you have multiple tOrderDetail records so you need to collapse them with a `GROUP BY`. Rather than using a correlated sub-query which is very inefficient, use a single sub-query with `INNER JOIN` like this.
```
SELECT
sum (cast(Replace(tOrders.TotalCharges, '$', '') as money)) as TotalCharges
FROM [ArtistShare].[dbo].tOrders
INNER JOIN (
SELECT OrderID
FROM tOrderDetails d INNER JOIN tProjects p on d.ProjectID = p.ID
WHERE d.isPromo = 0 AND p.ProjectReleaseDt > @startDate
GROUP BY OrderID
) qualifyingOrders ON qualifyingOrders.OrderID = tOrders.OrderID
WHERE tOrders.isGiftCardRedemption = 0
and tOrders.isTestOrder=0
and tOrders.LastDateUpdate between @startDate and @endDate
```
Again, you should compare this with the **EXISTS** approach to see which one performs better and makes the most sense for what you are trying to achieve.
|
Why do you count records if you just want to know if at least one *exists* or *not exists*?
```
SELECT Sum (Cast(Replace(tOrders.TotalCharges, '$', '') AS MONEY)) AS TotalCharges
FROM [ArtistShare].[dbo].tOrders
WHERE tOrders.isGiftCardRedemption = 0
AND tOrders.isTestOrder = 0
AND tOrders.LastDateUpdate BETWEEN @startDate AND @endDate
AND NOT EXISTS
(
SELECT 1 FROM tORderDetails
WHERE tORderDetails.ORderID = tORders.ORderID
AND tOrderDetails.isPromo = 1 )
AND EXISTS
(
SELECT 1 FROM tProjects INNER JOIN tOrderDetails
ON tOrderDetails.ProjectID = tProjects.ID
WHERE tOrderDetails.OrderID = tOrders.OrderID
AND tProjects.ProjectReleaseDt >= @startDate )
```
|
What is an alternative to using subqueries in this SQL query that will give us better performance?
|
[
"",
"sql",
"sql-server",
""
] |
I have two table A and table B, the common column in both table is Name,
I want to know what are name in Table A that is not in table B
when I do:
```
Select Name from A where Name not in (Select Name from B)
```
I am sure there are 2 names in Table A that are not in table B
but the result returns nothing
These name column in table A and B has the same datatype `varchar(50)`
so I copied the result of the Name column and inserted it into a new table and ran the same query, and this time it returns the right result. What bug could this be?
example:
```
Table A
Name:
Kevin
Dexter
David
John
Marry
Table B
Name:
Kevin
Dexter
David
```
So the query should return `'John'`, `'Marry'` but it doesn't return in my original table, but it returns in another table I create and insert.
Thanks!
|
You probably have a `NULL` name on `B`, this makes the `NOT IN` false for every row. You should use `NOT EXISTS` instead:
```
SELECT Name
FROM A
WHERE NOT EXISTS (SELECT 1 FROM B
WHERE A.Name = B.Name)
```
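The NULL trap is easy to reproduce. This sketch (SQLite, data from the question plus a NULL row in B) shows `NOT IN` returning nothing while `NOT EXISTS` returns the expected names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE A (Name TEXT);
CREATE TABLE B (Name TEXT);
INSERT INTO A VALUES ('Kevin'), ('Dexter'), ('David'), ('John'), ('Marry');
INSERT INTO B VALUES ('Kevin'), ('Dexter'), ('David'), (NULL);
""")
# Every NOT IN comparison involves the NULL, so the predicate is never true
not_in = conn.execute(
    "SELECT Name FROM A WHERE Name NOT IN (SELECT Name FROM B)").fetchall()
# NOT EXISTS is immune to the NULL
not_exists = conn.execute("""
    SELECT Name FROM A
    WHERE NOT EXISTS (SELECT 1 FROM B WHERE A.Name = B.Name)
""").fetchall()
```

`not_in` comes back empty; `not_exists` returns John and Marry.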
|
Obviously it is because of a NULL value in some rows of table B. You can do what you want with `EXCEPT`:
```
SELECT Name FROM TableA
EXCEPT
SELECT Name FROM TableB
```
|
T-SQL query: where column not in is giving wrong result
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a table
```
CREATE TABLE [misc]
(
[misc_id] [int] NOT NULL,
[misc_group] [nvarchar](255) NOT NULL,
[misc_desc] [nvarchar](255) NOT NULL
)
```
where `misc_id [int] not null` should have been `IDENTITY (1,1)` but is not and now I'm having issues
I have a simple form that inserts into this table, but `misc_id` expects a number that a user would not know unless they had access to the database.
I know a option would be to create another column make it `IDENTITY(1,1)` and copy that data.
Is there another way I will be able to get around this?
```
INSERT INTO misc (misc_group, misc_desc)
VALUES ('#misc_group#', '#misc_desc#')
```
I have SQL Server 2012
|
You should re-create your table with the desired identity column. The following statements will get you close. SQL Server will automatically adjust the table's identity field to MAX(misc\_id) + 1 as you're migrating data.
You'll obviously need to stop trying to insert misc\_id with new records. You'll want to retrieve SCOPE\_IDENTITY() after inserting records.
```
-- Note: I'd recommend having SSMS generate your base create statement so you know you didn't miss anything. You'll have to export the indexes and foreign keys as well. Add them after populating data to improve performance and reduce fragmentation.
CREATE TABLE [misc_new]
(
[misc_id] [int] NOT NULL IDENTITY(1,1),
[misc_group] [nvarchar](255) NOT NULL,
[misc_desc] [nvarchar](255) NOT NULL
-- Todo: Don't forget primary key but can be added later (not recommended).
)
GO
SET IDENTITY_INSERT misc_new ON;
INSERT INTO misc_new
(
[misc_id],
[misc_group],
[misc_desc]
)
SELECT
[misc_id],
[misc_group],
[misc_desc]
FROM misc
ORDER BY misc_id;
SET IDENTITY_INSERT misc_new OFF;
GO
EXEC sp_rename 'misc', 'misc_old';
EXEC sp_rename 'misc_new', 'misc';
GO
```
|
Changing an int column to an identity can cause problems, because by default you cannot insert a value into an identity column without turning the SET IDENTITY\_INSERT option on. So if you have existing code that inserts a value into the identity column, it will fail. However, it is much easier to let SQL Server generate the values, so I would change misc\_id into an identity column and make sure that no programs insert values into misc\_id.
|
Insert into column without having IDENTITY
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
what does this error mean in mysql?
ER\_NO\_DEFAULT\_FOR\_FIELD – “Field doesn’t have a default value”
I've found it listed here?
<http://mysqlserverteam.com/improvements-to-strict-mode-in-mysql/>
and got:
```
{
"code": "ER_NO_DEFAULT_FOR_FIELD",
"errno": 1364,
"sqlState": "HY000",
"index": 0
}
```
that reply in a postman request via expressjs, but this is the first table I've seem to have gotten it from. the table has foreign keys so I set default values for the foreign keys, but I still get it. what gives? is the index[0] implying the name, the first json value I sent to the server?
```
{
"name":"Cornwall Park Townhouses",
"line1":"17 Brewster Rd.",
"city":"Cornwall",
"state":32,
"zip":"12518",
"category":2,
"account_mgr":1,
"active":1
}
```
here's my JSON
|
Well my guess is you have a field that requires a value and has no default.
So when you insert into your table, if you don't provide a valid value for it, MySQL complains.
It'd be more helpful if you'd shown us your tables and maybe the insert query as well, but take a look at your fields. If you find one that has no default and you're not doing anything with it, try inserting into it and passing a value for the field; if that solves the issue then you can add a default value for that field or just keep sending information to it.
If you don't want a default you could set the fields to allow null instead of a default value.
|
This problem can occur for several reasons:
1. You have not supplied a value for the column in the API request, and no default value is set for that column. If you set a default value, you will not get the error, since SQL adds that default without you explicitly providing data for the column.
2. Secondly (which is what I faced), I had created an association using an alias twice, and both aliases were the same, so I was supplying the value for the column and SQL was still throwing this error.
So the overall conclusion for this error is: you have defined a column that has no default value, and you are inserting into that table without providing one.
|
ER_NO_DEFAULT_FOR_FIELD in MySQL (nodejs rest request)
|
[
"",
"mysql",
"sql",
"json",
"node.js",
"node-mysql",
""
] |
How can I count non-null entries by field/column? I see several answers that count by row but can't work out how to do so for columns.
Input:
```
╔════╦════════╦════════╦════════╗
║ id ║ field1 ║ field2 ║ field3 ║
║ 1 ║ do ║ re ║ me ║
║ 2 ║ fa ║ ║ so ║
║ 3 ║ la ║ te ║ ║
║ 4 ║ da ║ re ║ ║
╚════╩════════╩════════╩════════╝
```
output:
```
id 4
field1 4
field2 3
field3 2
```
I'm trying to get a gauge on field usage in a very dirty database I am migrating. There's about 50 columns in this database so I am looking for an approach that doesn't involve typing out each column name.
I might also have to extend that search to non-NULL & is-not-empty & ≠ 0 & ≠ "no" because of inconsistencies in data storage — some fields were never used but auto-filled with "no".
This answer looks close to what I need but generates an SQL error and I don't have enough reputation to comment: [Count number of NULL values in each column in SQL](https://stackoverflow.com/questions/24411159/count-number-of-null-values-in-each-column-in-sql)
|
Just use `count()`:
```
select count(field1), count(field2), count(field3)
from table t;
```
That is what `count()` does -- it counts non-NULL values.
If you have an aversion to typing out names, then use the metadata tables (typically `information_schema.columns`) to get the column names. You can generate the SQL as a query, or copy the column names into a spreadsheet to generate the code.
EDIT:
You can generate the code using:
```
select group_concat('count(', column_name, ')' separator ', ')
from information_schema.columns
where table_name = <whatever> and table_schema = <whatever2>;
```
Note this uses the little known ability of `group_concat()` to take multiple string arguments.
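A minimal sketch of the counting behaviour, run in SQLite with the sample data from the question — `COUNT(column)` skips NULLs while `COUNT(id)` sees all four rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (id INT, field1 TEXT, field2 TEXT, field3 TEXT);
INSERT INTO t VALUES
  (1, 'do', 're',  'me'),
  (2, 'fa', NULL,  'so'),
  (3, 'la', 'te',  NULL),
  (4, 'da', 're',  NULL);
""")
# COUNT(expr) skips NULLs, so each result is the non-NULL tally per column
counts = conn.execute(
    "SELECT COUNT(id), COUNT(field1), COUNT(field2), COUNT(field3) FROM t"
).fetchone()
```

This matches the desired output of 4, 4, 3, 2.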
|
```
SELECT count(field1) as cnt ,'field1' as field from tbl where field1 IS NOT NULL
UNION all
SELECT count(field2) as cnt ,'field2' as field from tbl where field2 IS NOT NULL
union all
SELECT count(field3) as cnt ,'field3' as field from tbl where field3 IS NOT NULL
```
If there are only 3 fixed fields, the SQL above will work.
|
MySQL COUNT NULL content by column, GROUP by column
|
[
"",
"mysql",
"sql",
"count",
"null",
"field",
""
] |
I have to check certain rows from table and check `if-else` condition for each rows
TABLE: `report`
COLUMNS :
`sl.no`, `count`, `total_count`, `calls`
Sample Data:
```
sl.no count total_count calls
----------- ----------- ----------- -----------
1 2 6 4
2 2 7 5
3 4 9 3
```
Here i have to check if condition
```
if total_count > 6
select 'Y
else
select 'N'
```
I get correct answer for single row. If there is multiple row it can't check all, it check only last row of my table.
|
Use [**CASE**](https://msdn.microsoft.com/en-us/library/ms181765.aspx?f=255&MSPPError=-2147217396) expression
```
SELECT
*,
  CASE WHEN total_count > 6 THEN 'Y' ELSE 'N' END
FROM report
```
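A runnable sketch of the `CASE` expression in SQLite (the `count` column is renamed `cnt` here to avoid clashing with the function name):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE report (slno INT, cnt INT, total_count INT, calls INT);
INSERT INTO report VALUES (1, 2, 6, 4), (2, 2, 7, 5), (3, 4, 9, 3);
""")
# CASE is evaluated per row, so every row gets its own flag
flags = conn.execute("""
    SELECT slno, CASE WHEN total_count > 6 THEN 'Y' ELSE 'N' END
    FROM report
    ORDER BY slno
""").fetchall()
```

Each of the three rows gets its own Y/N flag, not just the last one.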
|
You could use `CASE`. You can read [documentation](https://msdn.microsoft.com/en-us/library/ms181765.aspx).
```
SELECT *,
CASE WHEN total_count > 6 THEN 'Y' ELSE ' N' END
FROM Report
```
|
select rows from table and check if condition at column in sql
|
[
"",
"sql",
"sql-server",
"database",
"ssms",
""
] |
Here is a query:
```
SELECT
k.[mg_KarId] AS [mg_KarId],
k.[SymKar] AS [SymKar],
k.[OpiKar] AS [OpiKar],
k.[Status] AS [Status],
kmlg.SymLok AS Lok_GLS1,
kmlk.SymLok AS Lok_KRS1,
kmlw.SymLok AS Lok_WLS1
FROM dbo.[mg_vv_Kar_All] AS k WITH (NOLOCK)
LEFT JOIN (
SELECT kml.SymKar, kml.SymLok
FROM dbo.mg_KarMagLok kml
WHERE kml.Mag LIKE 'GLS1'
) kmlg ON k.SymKar = kmlg.SymKar
LEFT JOIN (
SELECT kml.SymKar, kml.SymLok
FROM dbo.mg_KarMagLok kml
WHERE kml.Mag LIKE 'KRS1'
) kmlk ON k.SymKar = kmlk.SymKar
LEFT JOIN (
SELECT kml.SymKar, kml.SymLok
FROM dbo.mg_KarMagLok kml
WHERE kml.Mag LIKE 'WLS1'
) kmlw ON k.SymKar = kmlw.SymKar
WHERE k.Status <> 'W'
and k.SymKar = '0006438';
```
It gives me eight rows - 2 rows from every LEFT JOIN (2 \* 2 \* 2):
```
2002111576 0006438 CUKIER A R03-C-05 S07-A-08 WR03-19-01
2002111576 0006438 CUKIER A R03-C-05 S07-A-08 WR04-20-50
2002111576 0006438 CUKIER A R03-C-05 S07-A-09 WR03-19-01
2002111576 0006438 CUKIER A R03-C-05 S07-A-09 WR04-20-50
2002111576 0006438 CUKIER A R04-A-02 S07-A-08 WR03-19-01
2002111576 0006438 CUKIER A R04-A-02 S07-A-08 WR04-20-50
2002111576 0006438 CUKIER A R04-A-02 S07-A-09 WR03-19-01
2002111576 0006438 CUKIER A R04-A-02 S07-A-09 WR04-20-50
```
I need only one row - no matter which one. I tried modifying the subquery in the LEFT JOIN to:
```
SELECT TOP 1 kml.SymKar, kml.SymLok
FROM dbo.mg_KarMagLok kml
WHERE kml.Mag LIKE 'GLS1'
```
But it gives me NULL in the main query at column Lok\_GLS1.
How should I resolve this problem?
|
```
SELECT
k.[mg_KarId] AS [mg_KarId],
k.[SymKar] AS [SymKar],
k.[OpiKar] AS [OpiKar],
k.[Status] AS [Status],
    (
     SELECT TOP 1 kml.SymLok
     FROM dbo.mg_KarMagLok kml
     WHERE kml.Mag LIKE 'GLS1'
     AND k.SymKar = kml.SymKar
    ) AS Lok_GLS1,
    (
     SELECT TOP 1 kml.SymLok
     FROM dbo.mg_KarMagLok kml
     WHERE kml.Mag LIKE 'KRS1'
     AND k.SymKar = kml.SymKar
    ) AS Lok_KRS1,
    (
     SELECT TOP 1 kml.SymLok
     FROM dbo.mg_KarMagLok kml
     WHERE kml.Mag LIKE 'WLS1'
     AND k.SymKar = kml.SymKar
    ) AS Lok_WLS1
FROM dbo.[mg_vv_Kar_All] AS k WITH (NOLOCK)
WHERE k.Status <> 'W'
and k.SymKar = '0006438';
```
You can use correlated subqueries like this and avoid the 3 joins.
|
If you want to solve the duplication problem, you can group your data set.
Instead of using the TOP keyword, write a query like this:
```
SELECT kml.SymKar, kml.SymLok
FROM dbo.mg_KarMagLok kml
WHERE kml.Mag LIKE 'GLS1'
group by kml.SymKar, kml.SymLok
```
|
left join with subquery with one result
|
[
"",
"sql",
"t-sql",
"subquery",
"left-join",
""
] |
Which is the right syntax in order to create two or more triggers in a row by executing a query with **SQL Server**?
I tried the following one:
```
GO
CREATE TRIGGER TRG_MyTable_UpdatedAt
ON dbo.MyTable
AFTER UPDATE
AS
UPDATE dbo.MyTable
SET UpdatedAt = GETDATE()
WHERE ID IN (SELECT DISTINCT ID FROM Inserted)
END
GO
CREATE TRIGGER TRG_MyTable_CreatedAt
ON dbo.MyTable
AFTER INSERT
AS
UPDATE dbo.MyTable
SET CreatedAt = GETDATE()
WHERE ID IN (SELECT DISTINCT ID FROM Inserted)
END
```
But it returns an error: `Incorrect syntax near 'END'.`
|
```
CREATE TRIGGER TRG_MyTable_UpdatedAt
ON dbo.MyTable
AFTER UPDATE
AS
BEGIN
UPDATE dbo.MyTable
SET UpdatedAt = GETDATE()
WHERE ID IN (SELECT DISTINCT ID FROM Inserted)
END
GO
CREATE TRIGGER TRG_MyTable_CreatedAt
ON dbo.MyTable
AFTER INSERT
AS
BEGIN
UPDATE dbo.MyTable
SET CreatedAt = GETDATE()
WHERE ID IN (SELECT DISTINCT ID FROM Inserted)
END
GO
```
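The same pattern can be sketched in SQLite, whose trigger syntax also requires a `BEGIN ... END` body; `datetime('now')` stands in for `GETDATE()`, and the update trigger fires only on changes to `Val` so the timestamp maintenance itself doesn't re-trigger it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE MyTable (ID INTEGER PRIMARY KEY, Val TEXT,
                      CreatedAt TEXT, UpdatedAt TEXT);

CREATE TRIGGER TRG_MyTable_CreatedAt AFTER INSERT ON MyTable
BEGIN
    UPDATE MyTable SET CreatedAt = datetime('now') WHERE ID = NEW.ID;
END;

CREATE TRIGGER TRG_MyTable_UpdatedAt AFTER UPDATE OF Val ON MyTable
BEGIN
    UPDATE MyTable SET UpdatedAt = datetime('now') WHERE ID = NEW.ID;
END;

INSERT INTO MyTable (Val) VALUES ('x');
UPDATE MyTable SET Val = 'y' WHERE ID = 1;
""")
created_at, updated_at = conn.execute(
    "SELECT CreatedAt, UpdatedAt FROM MyTable WHERE ID = 1").fetchone()
```

After one insert and one update, both timestamps are populated by the triggers.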
|
You forgot to write `BEGIN`:
```
CREATE TRIGGER TRG_MyTable_UpdatedAt
ON dbo.MyTable
AFTER UPDATE
AS
Begin
UPDATE dbo.MyTable
SET UpdatedAt = GETDATE()
WHERE ID IN (SELECT DISTINCT ID FROM Inserted)
END
GO
CREATE TRIGGER TRG_MyTable_CreatedAt
ON dbo.MyTable
AFTER INSERT
AS
Begin
UPDATE dbo.MyTable
SET CreatedAt = GETDATE()
WHERE ID IN (SELECT DISTINCT ID FROM Inserted)
END
GO
```
|
Create two or more triggers in a row by executing a query with SQL Server
|
[
"",
"sql",
"sql-server",
""
] |
I have 2 tables:
```
CREATE TABLE Items (id INT, name VARCHAR(8));
CREATE TABLE Likes (item_id INT, usr VARCHAR(8), doeslike BOOLEAN);
INSERT INTO Items VALUES
(1, 'Pen'),
(2, 'Pencil'),
(3, 'Ruler');
INSERT INTO Likes VALUES
(1, 'joe', TRUE ),
(1, 'jill', TRUE ),
(1, 'jack', FALSE),
(2, 'jae', TRUE ),
(2, 'jill', FALSE),
(2, 'john', FALSE),
(2, 'jer', TRUE ),
(3, 'mill', TRUE ),
(3, 'mack', FALSE),
(3, 'moe', TRUE ),
(3, 'gill', TRUE ),
(3, 'zack', FALSE);
```
Fiddle here: <http://sqlfiddle.com/#!9/d1e0f>
I need a query that lists the items and number of people who like and dislike each item, like so:
```
ITEM LIKES DISLIKES
Pen 2 1
Pencil 2 2
Ruler 3 3
```
|
You can use conditional aggregation for this. If you use `SUM(condition)` inside of a select statement, you'll get the number of times that condition is true. Use it like this for your table:
```
SELECT i.name, SUM(l.doeslike = true) AS likes, SUM(l.doeslike = false) AS dislikes
FROM items i
JOIN likes l ON l.item_id = i.id
GROUP BY i.id;
```
Here is a [Fiddle](http://sqlfiddle.com/#!9/d1e0f/27) example. I would also recommend using a LEFT JOIN if there is a chance an item exists without having any likes/dislikes.
|
<http://sqlfiddle.com/#!9/d1e0f/2>
```
SELECT Items.* ,
SUM(CASE WHEN Likes.doesLike THEN 1 ELSE 0 END) as likes,
SUM(CASE WHEN Likes.doesLike THEN 0 ELSE 1 END) as dislikes
FROM Items
LEFT JOIN Likes
ON
Items.id = Likes.item_id
GROUP BY Items.id
```
|
SQL: Join and Count based on condition
|
[
"",
"mysql",
"sql",
""
] |
I want get date from sql server in user specified format using GETDATE() function.
if i give this query
```
select GETDATE()
```
then it displays the output date in this format
```
2015-03-17 07:29:58.377
```
but i want output like this .
```
2015-03-17
```
What should be added to the query to get this result?
|
Just use `convert()`:
```
select convert(varchar(10), getdate(), 121)
```
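For comparison, a sketch of the equivalent in SQLite, where `date('now')` yields the same `yyyy-MM-dd` shape as the `convert(..., 121)` style:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# date('now') returns the current date as 'YYYY-MM-DD'
today = conn.execute("SELECT date('now')").fetchone()[0]
```

The result is a 10-character string with dashes at positions 4 and 7.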
|
In SQL Server 2012 onward you may use FORMAT() which is a bit more intuitive than recalling style numbers.
e.g.
```
select FORMAT( GETDATE(), 'yyyy-MM-dd' )
```
see: <https://msdn.microsoft.com/en-AU/library/hh213505.aspx>
however do note the second parameter is case sensitive
* 'yyyy-MM-dd' works
* 'YYYY-MM-DD' would only work for the MM
|
How to get date from sql server in user specified format
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
Im using this:
```
SELECT *
WHERE id=MAX(id)
FROM history;
```
But my query is empty. I have also tried this (This one works):
```
SELECT MAX(id) AS max_id
FROM history;
```
But obviously my query only contains the max\_id key. What am I doing wrong with the first one?
|
You need to add another level of `select` for the `MAX`, like this:
```
SELECT *
FROM history
WHERE id = (SELECT MAX(id) FROM history);
```
A better approach would be to order by `id` in descending order, and limit the output to a single row:
```
SELECT *
FROM history
ORDER BY id DESC
LIMIT 1
```
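Both approaches can be sketched side by side in SQLite with made-up history rows; they return the same row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE history (id INT, event TEXT);
INSERT INTO history VALUES (1, 'a'), (2, 'b'), (3, 'c');
""")
subquery_row = conn.execute(
    "SELECT * FROM history WHERE id = (SELECT MAX(id) FROM history)").fetchone()
limit_row = conn.execute(
    "SELECT * FROM history ORDER BY id DESC LIMIT 1").fetchone()
```

Either form yields the full row with the highest id.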
|
Actually the simplest query to get the max value of a column in SQLite is:
```
select MAX(id) from table;
```
|
Sqlite get max id not working (?)
|
[
"",
"sql",
"sqlite",
"select",
""
] |
I'm using PostgreSQL 9.4 with a table `teams` containing a `jsonb` column named `json`. I am looking for a query where I can get all teams which have the Players `3`, `4` and `7` in their array of players.
The table contains two rows with the following `json` data:
First row:
```
{
"id": 1,
"name": "foobar",
"members": {
"coach": {
"id": 1,
"name": "A dude"
},
"players": [
{
"id": 2,
"name": "B dude"
},
{
"id": 3,
"name": "C dude"
},
{
"id": 4,
"name": "D dude"
},
{
"id": 6,
"name": "F dude"
},
{
"id": 7,
"name": "G dude"
}
]
}
}
```
second row:
```
{
"id": 2,
"name": "bazbar",
"members": {
"coach": {
"id": 11,
"name": "A dude"
},
"players": [
{
"id": 3,
"name": "C dude"
},
{
"id": 5,
"name": "E dude"
},
{
"id": 6,
"name": "F dude"
},
{
"id": 7,
"name": "G dude"
},
{
"id": 8,
"name": "H dude"
}
]
}
}
```
How does the query have to look like to get the desired list of teams? I've tried a query where I'd create an array from the member players `jsonb_array_elements(json -> 'members' -> 'players')->'id'` and compare them, but all I was able to accomplish is a result where any of the compared player ids was available in a team, not all of them.
|
You are facing two non-trivial tasks at once.
* Process `jsonb` with a complex nested structure.
* Run the equivalent of a relational division query on the document type.
First `jsonb_populate_recordset()` works with a registered row type. Can be the row type of any (temp) table or view, or a composite type explicitly created with `CREATE TYPE`. If there is none, register one. For ad-hoc use, a temporary type does the job (undocumented hack, dropped automatically at the end of the session):
```
CREATE TYPE pg_temp.foo AS (id int); -- just "id"
```
We only need `id`, so don't include `name`. [The manual:](https://www.postgresql.org/docs/current/functions-json.html#FUNCTIONS-JSONB-OP-TABLE)
> JSON fields that do not appear in the target row type will be omitted from the output
## Query with index support
If you need it fast, create a **GIN index** on the `jsonb` column. The more specialized [operator class `jsonb_path_ops`](https://www.postgresql.org/docs/current/gin-builtin-opclasses.html) is even faster than the default `jsonb_ops`:
```
CREATE INDEX teams_json_gin_idx ON teams USING GIN (json jsonb_path_ops);
```
Can be used by the "contains" operator `@>`:
```
SELECT t.json->>'id' AS team_id
, ARRAY (SELECT * FROM jsonb_populate_recordset(null::foo, t.json#>'{members,players}')) AS players
FROM teams t
WHERE json @> '{"members":{"players":[{"id":3},{"id":4},{"id":7}]}}';
```
[SQL/JSON](https://www.postgresql.org/docs/current/functions-json.html#FUNCTIONS-SQLJSON-PATH) path language in Postgres 12+ can use the same index:
```
SELECT t.json->>'id' AS team_id
, ARRAY (SELECT * FROM jsonb_populate_recordset(null::foo, t.json#>'{members,players}')) AS players
FROM teams t
WHERE json @? '$.members ? (@.players.id == 3) ? (@.players.id == 4) ? (@.players.id == 7)';
```
*db<>fiddle [here](https://dbfiddle.uk/?rdbms=postgres_14&fiddle=2c13c41da316db908cb41600093a1a8f)*
See:
* [Find rows containing a key in a JSONB array of records](https://dba.stackexchange.com/a/196635/3684)
* [Update all values for given key nested in JSON array of objects](https://dba.stackexchange.com/a/251181/3684)
## Simple query
Without index support - unless you create a tailored expression index, see below.
```
SELECT t.json->>'id' AS team_id, p.players
FROM teams t
JOIN LATERAL (
SELECT ARRAY (
SELECT * FROM jsonb_populate_recordset(null::foo, t.json#>'{members,players}')
)
) AS p(players) ON p.players @> '{3,4,7}';
```
*db<>fiddle [here](https://dbfiddle.uk/?rdbms=postgres_14&fiddle=2c13c41da316db908cb41600093a1a8f)*
Old [sqlfiddle](http://sqlfiddle.com/#!17/c9e55/6)
### How?
Extracts the JSON array with player records:
```
t.json#>'{members,players}'
```
From these, I unnest rows with just the `id` with:
```
jsonb_populate_recordset(null::foo, t.json#>'{members,players}')
```
... and immediately aggregate those into a Postgres array, so we keep one row per row in the base table:
```
SELECT ARRAY ( ... )
```
All of this happens in a lateral join:
```
, JOIN LATERAL (SELECT ... ) AS p(players) ...
```
* Immediately filter the resulting arrays in the join condition to keep only the ones we are looking for - with the ["contains" *array* operator `@>`](https://www.postgresql.org/docs/current/functions-array.html):
... ON p.players @> '{3,4,7}'
If you run this query a lot on a big table, you could create a fake `IMMUTABLE` function that extracts the array like above and create functional **GIN index** based on this function to make this super fast.
"Fake" because the function depends on the underlying row type, i.e. on a catalog lookup, and would change if that changes. (So make sure it does not change.) Similar to this one:
* [Index for finding an element in a JSON array](https://stackoverflow.com/questions/18404055/index-for-finding-an-element-in-a-json-array/18405706#18405706)
Aside:
Don't use type names like `json` as column names (even if that's allowed), that invites tricky syntax errors and confusing error messages.
|
I wanted to do the same as above. The only other condition was that I had to do substring matching rather than exact matching.
This is what I ended up doing (inspired by the answer above, of course):
```
SELECT t.json->>'name' AS feature_name, f.features::text
FROM teams t
, LATERAL (
SELECT * FROM json_populate_recordset(null::foo, t.json#>'{members,features}')
) AS f(features)
WHERE f.features LIKE '%dud%';
```
Posting it here if it is of any help.
|
jsonb query with nested objects in an array
|
[
"",
"sql",
"postgresql",
"relational-division",
"postgresql-9.4",
"jsonb",
""
] |
I can do the same query in two ways as following, will #1 be more efficient as we don't have join?
# 1
```
select table1.* from table1
inner join table2 on table1.key = table2.key
where table2.id = 1
```
# 2
```
select * from table1
where key = (select key from table2 where id=1)
```
|
These are doing two different things. The second will return an error if more than one row is returned by the subquery.
In practice, my guess is that you have an index on `table2(id)` or `table2(id, key)`, and that `id` is unique in `table2`. In that case, both should be doing index lookups and the performance should be very comparable.
And, the general answer to performance question is: try them on your servers with your data. That is really the only way to know if the performance difference makes a difference in your environment.
|
MS SQL Server is smart enough to understand that it is the same action in such a simple query.
However, if you have more than one record in the subquery, then you'll probably use IN. IN is a slow operation and it will never work faster than a JOIN. It can be the same, but never faster.
The best option for your case is to use EXISTS. It will always be faster than, or the same as, JOIN or IN. Example:
```
select * from table1 t1
where EXISTS (select * from table2 t2 where id=1 AND t1.key = t2.key)
```
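A sketch showing all three formulations return the same rows (SQLite, with `key` renamed to `k` and made-up data, so this only illustrates equivalence, not relative performance):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (k INT, payload TEXT);
CREATE TABLE t2 (id INT, k INT);
INSERT INTO t1 VALUES (10, 'a'), (20, 'b'), (30, 'c');
INSERT INTO t2 VALUES (1, 20), (2, 30);
""")
join_rows = conn.execute("""
    SELECT t1.* FROM t1 JOIN t2 ON t1.k = t2.k WHERE t2.id = 1""").fetchall()
sub_rows = conn.execute("""
    SELECT * FROM t1 WHERE k = (SELECT k FROM t2 WHERE id = 1)""").fetchall()
exists_rows = conn.execute("""
    SELECT * FROM t1
    WHERE EXISTS (SELECT 1 FROM t2 WHERE t2.id = 1 AND t1.k = t2.k)""").fetchall()
```

For real performance numbers, compare execution plans on your own data.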
|
Performance of join vs pre-select on MsSQL
|
[
"",
"sql",
"sql-server",
"performance",
""
] |
I want to have the last column cumulative based on `ROW_ID` that resets every time it starts again with '1'.
Initially my table didn't have the `ROW_ID`; it was created using a partition so that I could at least segregate my records.
It should add the Amt + CumulativeSum (except for the first record) all the way down and reset every time the Row\_ID = 1.
I have tried several queries but it doesn't give me the desired result. I am trying to read answers from several forums but to no avail.
Can someone advise the best approach to do this?
For the sake of representation, I made the sample table as straightforward as possible.
```
ID ROW-ID Amt RunningTotal(Amt)
1 1 2 2
2 2 4 6
3 3 6 12
4 1 2 2
5 2 4 6
6 3 6 12
7 4 8 20
8 5 10 30
9 1 2 2
10 2 4 6
11 3 6 12
12 4 8 20
```
|
try this
```
declare @tb table(ID int, [ROW-ID] int, Amt money)
insert into @tb(ID, [ROW-ID], Amt) values
(1,1,2),
(2,2,4),
(3,3,6),
(4,1,2),
(5,2,4),
(7,4,8),
(8,5,10),
(9,1,2),
(10,2,4),
(11,3,6),
(12,4,8)
select *,sum(amt) over(partition by ([id]-[row-id]) order by id,[row-id]) AS cum from @tb
```
other version
```
select *,(select sum(amt) from @tb t where
(t.id-t.[row-id])=(t1.id-t1.[ROW-ID]) and (t.id<=t1.id) ) as cum
from @tb t1 order by t1.id,t1.[row-id]
```
|
The idea is to create partitions from `R` column. First leave `1` if `R = 1`, else put `0`. Then cumulative sum on that column. When you have partitions you can finally calculate cumulative sums on `S` column in those partitions:
```
--- --- ---
| 1 | | 1 | | 1 |
| 2 | | 0 | | 1 | --prev 1 + 0
| 3 | | 0 | | 1 | --prev 1 + 0
| 1 | | 1 | | 2 | --prev 1 + 1
| 2 | => | 0 | => | 2 | --prev 2 + 0
| 3 | | 0 | | 2 | --prev 2 + 0
| 4 | | 0 | | 2 | --prev 2 + 0
| 5 | | 0 | | 2 | --prev 2 + 0
| 1 | | 1 | | 3 | --prev 2 + 1
| 2 | | 0 | | 3 | --prev 3 + 0
--- --- ---
DECLARE @t TABLE ( ID INT, R INT, S INT )
INSERT INTO @t
VALUES ( 1, 1, 2 ),
( 2, 2, 4 ),
( 3, 3, 6 ),
( 4, 1, 2 ),
( 5, 2, 4 ),
( 6, 3, 6 ),
( 7, 4, 8 ),
( 8, 5, 10 ),
( 9, 1, 2 ),
( 10, 2, 4 ),
( 11, 3, 6 ),
( 12, 4, 8 );
```
For `MSSQL 2008`:
```
WITH cte1
AS ( SELECT ID ,
CASE WHEN R = 1 THEN 1
ELSE 0
END AS R ,
S
FROM @t
),
cte2
AS ( SELECT ID ,
( SELECT SUM(R)
FROM cte1 ci
WHERE ci.ID <= co.ID
) AS R ,
S
FROM cte1 co
)
SELECT * ,
( SELECT SUM(S)
FROM cte2 ci
WHERE ci.R = co.R
AND ci.ID <= co.ID
)
FROM cte2 co
```
For `MSSQL 2012`:
```
WITH cte
AS ( SELECT ID ,
SUM(CASE WHEN R = 1 THEN 1
ELSE 0
END) OVER ( ORDER BY ID ) AS R ,
S
FROM @t
)
SELECT * ,
SUM(s) OVER ( PARTITION BY R ORDER BY ID ) AS T
FROM cte
```
Output:
```
ID R S T
1 1 2 2
2 1 4 6
3 1 6 12
4 2 2 2
5 2 4 6
6 2 6 12
7 2 8 20
8 2 10 30
9 3 2 2
10 3 4 6
11 3 6 12
12 3 8 20
```
EDIT:
One more way. This looks way better by execution plan than the first example:
```
SELECT * ,
CASE WHEN R = 1 THEN S
ELSE ( SELECT SUM(S)
FROM @t it
WHERE it.ID <= ot.ID
AND it.ID >= ( SELECT MAX(ID)
FROM @t iit
WHERE iit.ID < ot.ID
AND iit.R = 1
)
)
END
FROM @t ot
```
|
SQL Server 2008 Cumulative Sum that resets value
|
[
"",
"sql",
"sql-server-2008",
""
] |
As the title says, I am trying to `INNER JOIN` on columns that have different values/data types.
In one database table, let's call it Table A I want to do a select statement to get the values of a few columns (Subject, Name, Description, Date). Though I also want a relation name. The problem is however that the relation name (which is set in the relation table, Table B) is displayed in Table A as a string value (D0001001) - so not as a literal name.
To get the literal relation name there is a link with Table B that has an ID column 1001 - 1000~ and a relation 'literal' name column. So for example in table B ID 1001 matches company name MC DONALDS and in table A the RelationID is D0001001 (MC DONALDS).
Don't ask me why the RelationID in table A is with the weird D000 in front of it, I don't know either but it had some functionality.
So back to the problem. I want to get a few fields from table A but also the literal relation name from table B where it matches the table A relationID values.
So the question is, how can I `INNER JOIN` on these 2 different values/types? `RelationID` in table A is of string type (`nvarchar` to be precise) and in Table B the `ID` that matches the relation name is an `Integer` type.
I thought I could fix it by:
1. Do a LIKE statement in the query where the ID of table B (1001 integer) partly matches the RelationID of table A (D0001001 string). This however didn't work
2. do a `REPLACE` statement by replacing the RelationID 'D000' values by nothing: "". This would probably still require some sort of cast to integer for the table A value. Had some error here probably because of a syntax error.
What I have so far:
```
SELECT
    TableA.subject, TableA.Name, TableA.Description, TableA.Date,
    TableB.RelationName
FROM
    TableA
INNER JOIN
    TableB ON TableA.RelationID = TableB.ID
```
This returned a conversion that isn't possible (string / integer).
So then I tried:
```
SELECT
TableA.subject, TableA.Name, TableA.Description, TableA.Date,
TableB.RelationName
INNER JOIN
TableB ON TableB.ID LIKE '% TableA.RelationID %'
```
This didn't work either (EOF).
To make it a bit clearer my tables:
**Table A**
```
+------------+-----------+------------------+---------------------+-----------+
| RelationID | Subject   | Description      | Name                | Date      |
+------------+-----------+------------------+---------------------+-----------+
| D0001001   | Fast Food | Some description | Name of form filler | 13-3-2015 |
| D0001002   | Drinks    | Some description | Name of form filler | 10-3-2015 |
| D0001003   | Cars      | Some description | Name of form filler | 7-3-2015  |
+------------+-----------+------------------+---------------------+-----------+
```
**Table B**
```
+------+--------------+
| ID | RelationName |
+------+--------------+
| 1001 | MC DONALDS |
| 1002 | COCA COLA |
| 1003 | MERCEDES |
+------+--------------+
```
--> INNER joins in ID and RelationID
Any alternatives? Thanks in advance!
|
To avoid the conversion error, remove the leading characters in `RelationID` using `Substring`, then `convert` the result to `INT` and `JOIN` with the `ID` column in `TableB`:
```
SELECT TableA.subject,
TableA.Name,
TableA.Description,
TableA.Date,
TableB.RelationName
FROM tableA
INNER JOIN TableB
ON CONVERT(INT, Substring(TableA.RelationID, 2, Len(TableA.RelationID))) = TableB.ID
```
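As a quick sanity check of the strip-the-prefix approach, here is the same join reproduced in SQLite via Python's `sqlite3` (the data is invented; SQLite's `SUBSTR`/`CAST` stand in for SQL Server's `Substring`/`CONVERT`):

```python
import sqlite3

# Minimal sketch in SQLite (not SQL Server); the rows are made up.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE TableA (RelationID TEXT, Subject TEXT);
CREATE TABLE TableB (ID INTEGER, RelationName TEXT);
INSERT INTO TableA VALUES ('D0001001', 'Fast Food'), ('D0001002', 'Drinks');
INSERT INTO TableB VALUES (1001, 'MC DONALDS'), (1002, 'COCA COLA');
""")
# Strip the leading 'D' (SUBSTR from position 2), cast the remainder to an
# integer, and join on TableB.ID.
rows = con.execute("""
SELECT a.Subject, b.RelationName
FROM TableA a
JOIN TableB b ON CAST(SUBSTR(a.RelationID, 2) AS INTEGER) = b.ID
ORDER BY a.Subject
""").fetchall()
print(rows)  # [('Drinks', 'COCA COLA'), ('Fast Food', 'MC DONALDS')]
```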
|
> Do a LIKE statement in the query where the ID of table B (1001
> integer) partly matches the RelationID of table A (D0001001 string).
> This however didn't work.
Try the other way round:
```
SELECT TableA.Subject,
TableA.Name,
TableA.Description,
TableA.Date,
TableB.RelationName
FROM TableA
INNER JOIN TableB ON TableA.RelationID LIKE '%' + CAST(TableB.ID AS NVARCHAR(50)) + '%'
```
|
Is it possible to INNER JOIN 2 id's of a different type/values?
|
[
"",
"sql",
"sql-server",
"join",
"inner-join",
""
] |
Need a Select Query for the following scenario, using Microsoft SQL Server 2008
```
Order Customer Order_Type
1 A NULL
2 A NULL
3 B S
4 C NULL
5 D S
6 B NULL
```
I want to group customers who have placed Order Type S together, so the output I am expecting is:
```
Order Customer Order_Type
3 B S
6 B NULL
5 D S
1 A NULL
2 A NULL
4 C NULL
SELECT Order, Customer, Order_Type FROM CustomerOrder GROUP BY Customer,Order, Order_Type HAVING {?}
```
|
Try this out:
```
SELECT *
FROM CustomerOrder
ORDER BY COUNT(CASE WHEN Order_Type = 'S' THEN 1 END) OVER (PARTITION BY Customer) DESC,
Customer
```
I assume you basically want to order your data, placing on top groups of customers having at least one 'S'.
[**SQL Fiddle Demo**](http://sqlfiddle.com/#!6/281d0/1)
**Edit:** You have to add:
```
CASE WHEN Order_Type = 'S' THEN 0 ELSE 1 END
```
at the tail of the `ORDER BY` clause if you want to always place `'S'` records in the first place of their group.
[**SQL Fiddle Demo**](http://sqlfiddle.com/#!6/82e73/1)
|
`GROUP BY` is for grouping multiple rows together into a single row, based on a common column value.
For instance, you might get a count of how many orders a particular customer has made like so:
```
SELECT Customer, COUNT(*) FROM Table
GROUP BY Customer
```
That would get you output listing the customer name and the number of order rows they had.
If you want to limit this (like a `WHERE` clause), you use `HAVING`.
```
SELECT Customer, COUNT(*) FROM Table
GROUP BY Customer
HAVING Count(*) > 1
```
This will get you all the Customers who have more than 1 order.
If you want to organize your output, you can sort using `ORDER BY`.
```
SELECT * FROM Table
ORDER BY Order_Type
```
You can even use multiple columns to sort by. It will first sort based on the first column, and then it will sort those groups (rows whose first column all match) by the second, and so on.
```
SELECT * FROM Table
ORDER BY Order_Type, Customer
```
EDIT:
With other answers, it has become clear to me what you're really asking. I'll leave my answer above as-is, because a basic explanation of GROUP and ORDER is potentially useful to people who might find this question in the future.
As others have said, if you wish to organize your table such that all customers who have any row that fits criteria X (in this case, 'Order\_Type=S'), you can't do it with a simple query or a HAVING clause. As is pointed out, there are numerous ways to do this.
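One portable way to get the "groups with an 'S' first" ordering (without window functions) is to sort on a correlated count, sketched here in SQLite through Python's `sqlite3` (`OrderNo` stands in for the reserved word `Order`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE CustomerOrder (OrderNo INTEGER, Customer TEXT, Order_Type TEXT);
INSERT INTO CustomerOrder VALUES
 (1,'A',NULL),(2,'A',NULL),(3,'B','S'),(4,'C',NULL),(5,'D','S'),(6,'B',NULL);
""")
# s_cnt counts the 'S' rows in each customer's group; sorting on it DESC
# floats those groups to the top, then Customer keeps each group together.
rows = con.execute("""
SELECT OrderNo, Customer, Order_Type,
       (SELECT COUNT(*) FROM CustomerOrder x
        WHERE x.Customer = c.Customer AND x.Order_Type = 'S') AS s_cnt
FROM CustomerOrder c
ORDER BY s_cnt DESC, Customer, OrderNo
""").fetchall()
print([r[1] for r in rows])  # ['B', 'B', 'D', 'A', 'A', 'C']
```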
|
Order By Having Certain Value
|
[
"",
"sql",
"sql-server-2008",
""
] |
I need to use the [WEEK](http://dev.mysql.com/doc/refman/5.0/en/date-and-time-functions.html#function_week) function, so I need a **date** type, while what I have in the db is an **int** type like 20150101, representing 1 Jan 2015.
I tried `FROM_UNIXTIME` but obviously it is not I want:
```
select FROM_UNIXTIME(20150101) from dual;
+-------------------------+
| FROM_UNIXTIME(20150101) |
+-------------------------+
| 1970-08-22 14:15:01 |
+-------------------------+
```
Do I have a way to convert the yyyymmdd(int) to a date type?
|
You can just select it as a `date()` and it converts automatically.
```
SELECT DATE(20150101) FROM dual;
```
|
`20150101` is not a Unix timestamp; it's in the format `ymd`, so you need `date_format` to format it:
```
mysql> select date_format('20150101','%Y-%m-%d') as date ;
+------------+
| date |
+------------+
| 2015-01-01 |
+------------+
1 row in set (0.00 sec)
mysql> select unix_timestamp('2015-01-01');
+------------------------------+
| unix_timestamp('2015-01-01') |
+------------------------------+
| 1420050600 |
+------------------------------+
1 row in set (0.00 sec)
mysql> select FROM_UNIXTIME(1420050600) ;
+---------------------------+
| FROM_UNIXTIME(1420050600) |
+---------------------------+
| 2015-01-01 00:00:00 |
+---------------------------+
1 row in set (0.00 sec)
```
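If the conversion can happen in application code instead, a yyyymmdd integer parses directly; note that Python's `isocalendar()` uses ISO week numbering, which can differ from MySQL's default `WEEK()` mode:

```python
from datetime import datetime

def int_to_date(n: int):
    """Parse a yyyymmdd integer such as 20150101 into a date."""
    return datetime.strptime(str(n), "%Y%m%d").date()

d = int_to_date(20150101)
print(d.isoformat())       # 2015-01-01
print(d.isocalendar()[1])  # ISO week number: 1
```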
|
Convert yyyymmdd(int type) to date
|
[
"",
"mysql",
"sql",
""
] |
I'd like to generate the following output using SQL Server 2012:
```
<parent>
<item>1</item>
<item>2</item>
<item>3</item>
</parent>
```
From three different columns in the same table (we'll call them col1, col2, and col3).
I'm trying to use this query:
```
SELECT
t.col1 as 'item'
,t.col2 as 'item'
,t.col3 as 'item'
FROM tbl t
FOR XML PATH('parent'), TYPE
```
But what I get is this:
```
<parent>
<item>123</item>
</parent>
```
What am I doing wrong here?
|
Add a column with NULL as value to generate a separate item node for each column.
```
SELECT
t.col1 as 'item'
,NULL
,t.col2 as 'item'
,NULL
,t.col3 as 'item'
FROM dbo.tbl as t
FOR XML PATH('parent'), TYPE;
```
Result:
```
<parent>
<item>1</item>
<item>2</item>
<item>3</item>
</parent>
```
[SQL Fiddle](http://sqlfiddle.com/#!6/2e29f/1)
**Why does this work?**
Columns without a name are inserted as text nodes. In this case the NULL value is inserted as a text node between the `item` nodes.
If you add actual values instead of NULL you will see what is happening.
```
SELECT
t.col1 as 'item'
,'1'
,t.col2 as 'item'
,'2'
,t.col3 as 'item'
FROM dbo.tbl as t
FOR XML PATH('parent'), TYPE;
```
Result:
```
<parent>
<item>1</item>1<item>2</item>2<item>3</item></parent>
```
Another way to specify a column without a name is to use the wildcard character `*` as a column alias.
[Columns with a Name Specified as a Wildcard Character](https://msdn.microsoft.com/en-us/library/bb500154.aspx)
It is not necessary to use the wildcard in this case because the columns with NULL values don't have a column name but it is useful when you want values from actual columns but you don't want the column name to be a node name.
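For comparison, the same `<parent>`/`<item>` shape the NULL-separator trick produces can be built in application code with the standard library (this is plain XML construction, not SQL Server's `FOR XML`):

```python
import xml.etree.ElementTree as ET

# Build <parent><item>1</item><item>2</item><item>3</item></parent>.
parent = ET.Element("parent")
for value in ("1", "2", "3"):
    ET.SubElement(parent, "item").text = value

xml = ET.tostring(parent, encoding="unicode")
print(xml)  # <parent><item>1</item><item>2</item><item>3</item></parent>
```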
|
OK, another option is to skip `PATH` and use `FOR XML EXPLICIT` instead:
```
SELECT 1 AS tag,NULL AS parent, t.col1 AS [Parent!1!Item!element],
t.col2 AS [Parent!1!Item!element],
t.col3 AS [Parent!1!Item!element]
FROM tbl t
FOR XML EXPLICIT
```
|
SQL Server FOR XML Path make repeating nodes
|
[
"",
"sql",
"sql-server",
"xml",
"t-sql",
""
] |
Could someone help me to join 2 queries into one?
Basically it's the same query with different clauses (please notice the FIELD3 filters and related counters into the subquery):
```
SELECT A.FIELD1,A.FIELD2,A.FIELD3
FROM
TABLE A
INNER JOIN
(
SELECT FIELD1, COUNT(1)
FROM TABLE
where SUBSTR(FIELD3,1,5)='33377' and
timestamp between sysdate - interval '20' minute and sysdate - interval '2' minute
GROUP BY FIELD1
HAVING COUNT(1) >= 100
) B
ON A.FIELD1 = B.FIELD1 where ...some other clauses who interacts with FIELD4,5,6,etc... ;
SELECT A.FIELD1,A.FIELD2,A.FIELD3
FROM
TABLE A
INNER JOIN
(
SELECT FIELD1, COUNT(1)
FROM TABLE
where SUBSTR(FIELD3,1,5)!='33377' and
timestamp between sysdate - interval '20' minute and sysdate - interval '2' minute
GROUP BY FIELD1
HAVING COUNT(1) >= 150
) B
ON A.FIELD1 = B.FIELD1 where ...some other clauses who interacts with FIELD4,5,6,etc... ;
```
What's my goal? Let me explain better.
I have an Oracle DB with a table with some fields, for this query a need only 3 fields as shown below.
I'm trying to make a query with multiple counts. What I need is a query that outputs the rows that exceeded a specific count, based on the first 5 digits of FIELD3. Let me give a specific example:
This is what I have into the DB:
```
FIELD1 FIELD2 FIELD3
1234567314 333776543585218 333771434591151
1234567871 333771451776784 333771432365581
1234567314 333776543585218 333771240553976
1234567314 333776543585218 333773861003473
1234567314 333776543585218 333773861003473
1234567314 333776543585218 333023861003473
1234567314 333776543585218 333023861003473
1234567314 333776543585218 333023861003473
1234567337 333773660813075 333773650804767
1234567137 333773660798439 333771222628311
1234567319 333776543585219 333773660667594
1234567314 333776543585218 333901451463400
1234567314 333776543585218 333901451463400
```
Now, I want to output numbers in field1 in the following way:
```
OUTPUT FIELD1 (and related FIELD2 and FIELD3) who are exceeded a COUNT1 (EG: 3) based on FIELD3 having the same 5 first digits (33377)
OUTPUT FIELD1 (and related FIELD2 and FIELD3) who are exceeded a COUNT2 (EG: 10) based on FIELD3 NOT having the same 5 first digits (33377)
```
So, in the upper example, my output will be:
```
1234567314 333776543585218 333771434591151
1234567314 333776543585218 333771240553976
1234567314 333776543585218 333773861003473
1234567314 333776543585218 333773861003473
1234567314 333776543585218 333023861003473
1234567314 333776543585218 333023861003473
1234567314 333776543585218 333023861003473
1234567314 333776543585218 333901451463400
1234567314 333776543585218 333901451463400
```
33377 = 4 occurrences, others = 5 occurrences. The first threshold was exceeded, so all rows are reported.
Basically the 2 queries reported above works perfect, but I would like to join them into one to minimize query time and have an unique output.
Many thanks.
Lucas
|
Combining the two queries is easy enough. Two suggestions, though. First, use a CTE rather than inline view. It doesn't change the performance -- it just looks...cleaner, less cluttered (imho). Second, if you're going to combine two queries like this, add a field that identifies each row with the reason it is there. Makes the analyst's job a little easier.
```
with
Tablecounts( Field1, Hits, Misses )As(
Select Field1,
Sum( Case When Substr( Field3, 1, 5) = '33377' Then 1 Else 0 End ),
Sum( Case When Substr( Field3, 1, 5) = '33377' Then 0 Else 1 End )
From Table
Where Timestamp Between Sysdate - Interval '20' Minute And Sysdate - Interval '2' Minute
Group By Field1
)
Select Rd.Field1, Rd.Field2, Rd.Field3,
case when tc.Hits > 100
then 'This is a hit'
else 'This is a miss...or something' end as Why
From Table Rd
Join Tablecounts Tc
On Tc.Field1 = Rd.Field1
and( tc.Hits > 100 or tc.Misses > 150 );
```
**EDIT:** I rewrote using analytics. Except for slight differences in style, it is the same as Gordon's. But the comments under Gordon's answer suggest there was a problem. It looks to me like it should work. Was there actually a problem and, if so, what was it?
```
with
Counts( Field1, Field2, Field3, Hits, Misses )As(
Select Field1, Field2, Field3,
Sum( Case When Field3 Like '33377%' Then 1 Else 0 End ) Over( Partition By Field1 ),
Sum( Case When Field3 Like '33377%' Then 0 Else 1 End ) Over( Partition By Field1 )
From Table
Where Timestamp Between Sysdate - Interval '20' Minute And Sysdate - Interval '2' Minute
)
Select Field1, Field2, Field3,
Case When Hits > 3
Then 'This is a hit'
else 'This is a miss...or something' end as Why
From Counts
where Hits > 100 or Misses > 150;
```
|
You should be able to use analytic functions for this.
```
SELECT field1, field2, field3
FROM (SELECT t.*,
SUM(CASE WHEN field3 LIKE '33377%' THEN 1 ELSE 0 END) OVER (PARTITION BY field1) as cnthave,
SUM(CASE WHEN field3 NOT LIKE '33377%' THEN 1 ELSE 0 END) OVER (PARTITION BY field1) as cntnothave
FROM TABLE t
WHERE timestamp between sysdate - interval '20' minute and sysdate - interval '2' minute
) t
WHERE (Other conditions here) AND
(cnthave > 100 or cntnothave > 150);
```
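The conditional-count idea also works without analytic functions by grouping once and joining back; here is a small SQLite sketch via Python's `sqlite3` (made-up data, thresholds lowered to 2 so the sample stays small):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (field1 INTEGER, field3 TEXT);
INSERT INTO t VALUES
 (1,'33377a'), (1,'33377b'), (1,'99999x'),
 (2,'99999y'), (2,'99999z');
""")
# One pass computes both counts per field1 with SUM(CASE ...); the outer
# query keeps every row of any group that crossed either threshold.
rows = con.execute("""
SELECT t.field1, t.field3
FROM t
JOIN (SELECT field1,
             SUM(CASE WHEN field3 LIKE '33377%' THEN 1 ELSE 0 END) AS hits,
             SUM(CASE WHEN field3 LIKE '33377%' THEN 0 ELSE 1 END) AS misses
      FROM t GROUP BY field1) c ON c.field1 = t.field1
WHERE c.hits >= 2 OR c.misses >= 2
ORDER BY t.field1, t.field3
""").fetchall()
print(len(rows))  # 5 -- both groups crossed a threshold
```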
|
Join 2 SQL queries into one
|
[
"",
"sql",
"oracle",
"oracle11g",
""
] |
How to do a count including not existing records, which should have '0' as the count?
Here is my table:
```
CREATE TABLE SURVEY
(year CHAR(4),
cust CHAR(2));
INSERT INTO SURVEY VALUES ('2011', 'AZ');
INSERT INTO SURVEY VALUES ('2011', 'CO');
INSERT INTO SURVEY VALUES ('2012', 'ME');
INSERT INTO SURVEY VALUES ('2014', 'ME');
INSERT INTO SURVEY VALUES ('2014', 'CO');
INSERT INTO SURVEY VALUES ('2014', 'ME');
INSERT INTO SURVEY VALUES ('2014', 'CO');
```
I've tried this, but of course it is missing zero counts:
```
select cust, year, count(*) as count from SURVEY
group by cust, year
```
I want to have this result:
```
+------+---------+--------+
| cust | year | count |
+------+---------+--------+
| AZ | 2011 | 1 |
| AZ | 2012 | 0 |
| AZ | 2014 | 0 |
| CO | 2011 | 1 |
| CO | 2012 | 0 |
| CO | 2014 | 2 |
| ME | 2011 | 0 |
| ME | 2012 | 1 |
| ME | 2014 | 2 |
+------+---------+--------+
```
please note:
* My table has many records (~10k with different 'cust')
* years may not be sequential (for example 2013 is skipped)
* over time i may have 2015, 2016 and so on
* the actual query will be executed in MS Access 2010 (not sure if it matters)
please help, thank you!
|
It sounds like you want a count for every cust x year combination with a zero when no survey record exists. If this is the case you will need two more tables: customers and years then do something like:
```
select leftside.cust, leftside.year, count(survey.cust) from
(select * from customers, years) as leftside left join survey
on leftside.cust = survey.cust and
leftside.year = survey.year
group by leftside.cust, leftside.year
```
|
If you have domain tables as others said, well and good. If you have to depend only on data in your table, the below query will do that for you.
```
select cp.cust, cp.year, iif(isnull(sum(cnt)), 0, sum(cnt)) as count from
(select * from (
(select distinct cust from survey) as c cross join
(select distinct year from survey) as y)
) cp left join
(select *, 1 as cnt from survey) s on cp.cust=s.cust and cp.year=s.year
group by cp.cust, cp.year
order by cp.cust,cp.year
```
Instead of `iif(isnull(sum(cnt)), 0, sum(cnt))`, you can use `coalesce(sum(cnt),0)` if that works. In MS Access use `iif` function and in other databases `coalesce` works.
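The cross-join-then-left-join idea can be checked quickly in SQLite through Python's `sqlite3`; `COUNT(s.cust)` counts only matched rows, so missing combinations come out as 0:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE SURVEY (year TEXT, cust TEXT);
INSERT INTO SURVEY VALUES ('2011','AZ'),('2011','CO'),('2012','ME'),
 ('2014','ME'),('2014','CO'),('2014','ME'),('2014','CO');
""")
# Cross join the distinct customers and years to enumerate every
# combination, then count only the survey rows that actually match.
rows = con.execute("""
SELECT c.cust, y.year, COUNT(s.cust) AS cnt
FROM (SELECT DISTINCT cust FROM SURVEY) c
CROSS JOIN (SELECT DISTINCT year FROM SURVEY) y
LEFT JOIN SURVEY s ON s.cust = c.cust AND s.year = y.year
GROUP BY c.cust, y.year
ORDER BY c.cust, y.year
""").fetchall()
for r in rows:
    print(r)
```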
|
How to do a count including not existing records?
|
[
"",
"sql",
"ms-access",
""
] |
I have 3 tables in SQL Server 2008 like this:
Table 1 with these columns:
```
[pod] [varchar](50) NOT NULL, primarykey
[origin] [varchar](100) NOT NULL,
[destination] [varchar](100) NOT NULL,
[town] [varchar](50) NOT NULL,
```
Table 2 with these columns:
```
[town_id] [int] NOT NULL, primary key
[town] [varchar](50) NOT NULL,
```
Table 3 with these columns:
```
[Area_id] [int] NOT NULL, primary key
[town_id] [int] NOT NULL, foreign key of table 2
[area] [varchar](50) NOT NULL,
```
With these details I want to get the values of
```
POD, origin, destination, town, area
```
My code:
```
SELECT
b.pod, b.origin, b.destination, b.town,
af.area,
(SELECT
*,
(SELECT a.area, a.town_id
FROM table3 a, town_table t
WHERE a.town_id = t.town_id) af
FROM table1 c, table2 tt
WHERE af.town_id = tt.town_id)
FROM table1 b
```
I am not getting anywhere with this, please help me.
Thank you.
|
Not sure exactly what you want, but at a guess it sounds like you want to get the 4 columns from table 1 and the 'area' column from the 3rd table using the 2nd table as a way of joining the 2. If that's the case this should work:
```
SELECT tb1.pod, tb1.origin, tb1.destination, tb1.town, tb3.area
FROM Table1 as tb1
JOIN Table2 as tb2
ON tb1.town = tb2.town
JOIN Table3 as tb3
ON tb3.town_id = tb2.town_id
```
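A minimal reproduction of the two-hop join (Table1 to Table2 on the town name, Table2 to Table3 on `town_id`), sketched in SQLite via Python's `sqlite3` with invented rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Table1 (pod TEXT, town TEXT);
CREATE TABLE Table2 (town_id INTEGER, town TEXT);
CREATE TABLE Table3 (area_id INTEGER, town_id INTEGER, area TEXT);
INSERT INTO Table1 VALUES ('P1', 'Springfield');
INSERT INTO Table2 VALUES (1, 'Springfield');
INSERT INTO Table3 VALUES (10, 1, 'North');
""")
# Table2 acts as the bridge: text match on town, then integer match
# on town_id.
rows = con.execute("""
SELECT tb1.pod, tb1.town, tb3.area
FROM Table1 tb1
JOIN Table2 tb2 ON tb1.town = tb2.town
JOIN Table3 tb3 ON tb3.town_id = tb2.town_id
""").fetchall()
print(rows)  # [('P1', 'Springfield', 'North')]
```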
|
**Try below query :**
```
select POD, origin, destination , table1.town , area from Table1
inner join table2 on table2.town=table1.town
inner join Table3 on table2.town_id=table3.town_id
```
|
How to get values from three different tables in SQL Server with primary key and without?
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
By running this SELECT query:
```
SELECT wp_posts.ID, wp_postmeta.meta_key, wp_postmeta.meta_value
FROM wp_posts
INNER JOIN wp_postmeta
ON wp_posts.ID = wp_postmeta.post_id
WHERE wp_posts.post_status = 'publish'
AND wp_posts.post_type = 'my_post_type'
AND wp_posts.post_date < NOW()
AND wp_postmeta.meta_key = 'wpcf-lat'
OR wp_postmeta.meta_key = 'wpcf-long'
```
I get table like this:
```
id meta_key meta_value
------------------------------
1270 wpcf-lat 12.6589
1270 wpcf-long 78.7425
1658 wpcf-lat 22.3654
1658 wpcf-long 65.2985
```
But I need result table to be like this
```
id wpcf-lat wpcf-long
------------------------------
1270 12.6589 78.7425
1658 22.3654 65.2985
```
How can I accomplish that?
|
For a known set of `meta_key` you can use the following query
```
select
wp.ID,
max(
case when pm.meta_key = 'wpcf-lat' then pm.meta_value end
  ) as `wpcf-lat`,
max(
case when pm.meta_key = 'wpcf-long' then pm.meta_value end
) as `wpcf-long`
from wp_posts wp
join wp_postmeta pm on pm.post_id = wp.ID
group by wp.ID ;
```
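The `MAX(CASE ...)` pivot can be exercised end-to-end in SQLite through Python's `sqlite3` (only the relevant `wp_postmeta` columns are modeled):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE wp_postmeta (post_id INTEGER, meta_key TEXT, meta_value TEXT);
INSERT INTO wp_postmeta VALUES
 (1270,'wpcf-lat','12.6589'),(1270,'wpcf-long','78.7425'),
 (1658,'wpcf-lat','22.3654'),(1658,'wpcf-long','65.2985');
""")
# Each MAX(CASE ...) picks out the one matching meta_value per group,
# turning two key/value rows into one wide row per post_id.
rows = con.execute("""
SELECT post_id,
       MAX(CASE WHEN meta_key = 'wpcf-lat'  THEN meta_value END) AS lat,
       MAX(CASE WHEN meta_key = 'wpcf-long' THEN meta_value END) AS lng
FROM wp_postmeta
GROUP BY post_id
ORDER BY post_id
""").fetchall()
print(rows)  # [(1270, '12.6589', '78.7425'), (1658, '22.3654', '65.2985')]
```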
|
A simple `while` or `foreach` in your PHP code is the easiest way to put the data in the format you need:
```
$query = '...';
$resultset = $DB->query($query);
$list = array();
while ($row = $resultset->fetchArray()) {
// Check if the entry having this ID was created before
$id = $row['id'];
    if (!isset($list[$id])) {
// Create a new entry
$list[$id] = array(
'id' => $id,
'wpcf-lat' => NULL,
'wpcf-long' => NULL,
);
}
// Update the corresponding property
$key = $row['meta_key'];
$list[$id][$key] = $row['meta_value'];
}
```
|
SQL - How to concatenate this table?
|
[
"",
"mysql",
"sql",
"concatenation",
""
] |
I'm trying to append two tables in MS Access at the moment. This is my SQL View of my Query at the moment:
```
INSERT INTO MainTable
SELECT
FROM Table1 INNER JOIN Table2 ON Table1.University = Table2.University;
```
Where "University" is the only field name that *would* have similarities between the two tables. When I try and run the query, I get this error:
```
Query must have at least one destination field.
```
I assumed that the `INSERT INTO MainTable` portion of my SQL was defining the destination, but apparently I am wrong. How can I specify my destination?
|
You must select something from your select statement.
```
INSERT INTO MainTable
SELECT col1, col2
FROM Table1 INNER JOIN Table2 ON Table1.University = Table2.University;
```
|
Besides [Luke Ford's answer](https://stackoverflow.com/a/29103554/6884) *(which is correct)*, there's another gotcha to consider:
MS Access *(at least Access 2000, where I just tested it)* seems to match the columns by name.
In other words, when you execute the query from Luke's answer:
```
INSERT INTO MainTable
SELECT col1, col2
FROM ...
```
...MS Access assumes that `MainTable` has two columns named `col1` and `col2`, and tries to insert `col1` from your query into `col1` in `MainTable`, and so on.
**If the column names in `MainTable` are different, you need to specify them in the `INSERT` clause.**
Let's say the columns in `MainTable` are named `foo` and `bar`, then the query needs to look like this:
```
INSERT INTO MainTable (foo, bar)
SELECT col1, col2
FROM ...
```
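Here is a tiny runnable check of the explicit-column-list form, using SQLite via Python's `sqlite3` (column and table names are invented to match the example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE MainTable (foo TEXT, bar TEXT);
CREATE TABLE Table1 (University TEXT, col1 TEXT);
CREATE TABLE Table2 (University TEXT, col2 TEXT);
INSERT INTO Table1 VALUES ('MIT', 'a');
INSERT INTO Table2 VALUES ('MIT', 'b');
""")
# Naming (foo, bar) maps the selected col1/col2 onto differently named
# destination columns.
con.execute("""
INSERT INTO MainTable (foo, bar)
SELECT Table1.col1, Table2.col2
FROM Table1 INNER JOIN Table2 ON Table1.University = Table2.University
""")
rows = con.execute("SELECT * FROM MainTable").fetchall()
print(rows)  # [('a', 'b')]
```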
|
How to define destination for an append query Microsoft Access
|
[
"",
"sql",
"database",
"ms-access",
"inner-join",
""
] |
I have a transaction table with the following structure:
```
select t.[GUID], t.[ID], ts.Description "Status", t.Payee, t.Amount, t.SequenceNumber
from [Transaction] t
inner join TransactionStatus ts on t.StatusID = ts.ID
GUID | ID | Status | Payee | Amount | SequenceNumber
AF732CF5-E6C0-E411-B8F6-004056AB77C2 | 1 | Posted | Amy | 500.00 | 1
AF732CF5-E6C0-E411-B8F6-004056AB77C2 | 2 | Voided | Amy | 500.00 | 2
1F7D880C-E7C0-E411-B8F6-004056AB77C2 | 3 | Posted | Bob | 70.00 | 1
AF732CF5-E6C0-E411-B8F6-004056AB77C2 | 4 | Posted | Amy | 512.50 | 3
1F7D880C-E7C0-E411-B8F6-004056AB77C2 | 5 | Posted | Bob | 66.00 | 2
F2CC0B03-76C7-E411-A48D-004056AB787C | 6 | Pending | Carol | 240.00 | NULL
```
I'm trying to construct a query to group the records by GUID and select the single record with the largest SequenceNumber (if it isn't NULL):
```
GUID | ID | Status | Payee | Amount | SequenceNumber
AF732CF5-E6C0-E411-B8F6-004056AB77C2 | 4 | Posted | Amy | 512.50 | 3
1F7D880C-E7C0-E411-B8F6-004056AB77C2 | 5 | Posted | Bob | 66.00 | 2
F2CC0B03-76C7-E411-A48D-004056AB787C | 6 | Pending | Carol | 240.00 | NULL
```
I've tried adding this line:
```
where SequenceNumber = (select MAX(SequenceNumber) from [Transaction] t2 where t.[GUID] = t2.[GUID])
```
but that doesn't get me any transactions where the status is Pending (they don't have sequence numbers). How can I fix this query?
|
If it's SQL Server you can use a [CTE](https://technet.microsoft.com/en-us/library/ms190766%28v=sql.105%29.aspx) + [`ROW_NUMBER`](https://msdn.microsoft.com/en-us/library/ms186734.aspx):
```
WITH CTE AS
(
select t.[GUID], t.[ID], ts.Description "Status", t.Payee, t.Amount, t.SequenceNumber,
rn = row_number() over (partition by t.[GUID] Order By t.SequenceNumber DESC)
from [Transaction] t
inner join TransactionStatus ts on t.StatusID = ts.ID
)
SELECT GUID, ID, Status, Payee, Amount, SequenceNumber
FROM CTE
WHERE rn = 1
```
This will include the row where `SequenceNumber` is null. If you want all rows with the maximum `SequenceNumber`(in case of ties) use `DENSE_RANK` instead of `ROW_NUMBER`.
|
You can calculate the `MAX(ID)` and it's related `[GUID]` in a subquery and `JOIN` to it in order to get the desired results:
**Sample subquery:**
```
SELECT [GUID] ,
MAX(ID) MaxId
FROM Transaction
GROUP BY [GUID]
```
Would produce:
```
GUID MaxId
1F7D880C-E7C0-E411-B8F6-004056AB77C2 5
AF732CF5-E6C0-E411-B8F6-004056AB77C2 4
F2CC0B03-76C7-E411-A48D-004056AB787C 6
```
**Full Demo:**
```
CREATE TABLE #Transaction
(
[GUID] VARCHAR(36) ,
[ID] INT ,
[Status] VARCHAR(7) ,
[Payee] VARCHAR(5) ,
[Amount] INT ,
[SequenceNumber] VARCHAR(4)
);
INSERT INTO #Transaction
( [GUID], [ID], [Status], [Payee], [Amount], [SequenceNumber] )
VALUES ( 'AF732CF5-E6C0-E411-B8F6-004056AB77C2', 1, 'Posted', 'Amy', 500.00,
'1' ),
( 'AF732CF5-E6C0-E411-B8F6-004056AB77C2', 2, 'Voided', 'Amy', 500.00,
'2' ),
( '1F7D880C-E7C0-E411-B8F6-004056AB77C2', 3, 'Posted', 'Bob', 70.00,
'1' ),
( 'AF732CF5-E6C0-E411-B8F6-004056AB77C2', 4, 'Posted', 'Amy', 512.50,
'3' ),
( '1F7D880C-E7C0-E411-B8F6-004056AB77C2', 5, 'Posted', 'Bob', 66.00,
'2' ),
( 'F2CC0B03-76C7-E411-A48D-004056AB787C', 6, 'Pending', 'Carol',
240.00, NULL );
SELECT #Transaction.*
FROM #Transaction
INNER JOIN ( SELECT [GUID] ,
MAX(ID) MaxId
FROM #Transaction
GROUP BY [GUID]
) t ON t.[GUID] = #Transaction.[GUID]
AND t.MaxId = #Transaction.ID
ORDER BY ID
```
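The MAX(ID)-per-GUID subquery strategy in miniature, run against SQLite via Python's `sqlite3` (short invented GUIDs; note the row with a NULL sequence number is still picked up because the filter is on `id`, not `seq`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tx (guid TEXT, id INTEGER, seq INTEGER);
INSERT INTO tx VALUES
 ('A',1,1),('A',2,2),('B',3,1),('A',4,3),('B',5,2),('C',6,NULL);
""")
# The derived table finds the max id per guid; joining back on both
# columns keeps exactly one row per group.
rows = con.execute("""
SELECT t.guid, t.id
FROM tx t
JOIN (SELECT guid, MAX(id) AS max_id FROM tx GROUP BY guid) m
  ON m.guid = t.guid AND m.max_id = t.id
ORDER BY t.id
""").fetchall()
print(rows)  # [('A', 4), ('B', 5), ('C', 6)]
```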
|
Selecting records with maximum value in group
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I'm using SQL Server 2008 and have a database that includes the day of week in a bit type such as:
```
JobID int
JobDescription nvarchar(50)
M bit
Tu bit
W bit
Th bit
F bit
Sa bit
Su bit
```
I need to import data from an Excel spreadsheet that has rows and they identify the value as a name such as:
```
100 Backup Monday
101 Reports Monday
102 Cleaning Tuesday
```
So I would need this to show up for example
```
100 Backup 1 0 0 0 0 0 0
101 Reports 1 0 0 0 0 0 0
102 Cleaning 0 1 0 0 0 0 0
```
I have several thousand rows of this and can't seem to think of a way to import this data. Any thoughts?
|
First import the file as-is into a temp table, and then insert into the destination table from it:
```
DECLARE @t TABLE(JobID INT, JobDescription NVARCHAR(MAX), DayOfTheWeek NVARCHAR(MAX))
INSERT INTO @t VALUES
(100, 'Backup', 'Monday'),
(101, 'Reports', 'Monday'),
(102, 'Cleaning', 'Tuesday')
SELECT JobID,
JobDescription,
CASE WHEN DayOfTheWeek = 'Monday' THEN 1 ELSE 0 END AS M,
CASE WHEN DayOfTheWeek = 'Tuesday' THEN 1 ELSE 0 END AS Tu,
       CASE WHEN DayOfTheWeek = 'Wednesday' THEN 1 ELSE 0 END AS W,
       CASE WHEN DayOfTheWeek = 'Thursday' THEN 1 ELSE 0 END AS Th,
CASE WHEN DayOfTheWeek = 'Friday' THEN 1 ELSE 0 END AS F,
CASE WHEN DayOfTheWeek = 'Saturday' THEN 1 ELSE 0 END AS Sa,
CASE WHEN DayOfTheWeek = 'Sunday' THEN 1 ELSE 0 END AS Su
FROM @t
```
Output:
```
JobID JobDescription M Tu W Th F Sa Su
100 Backup 1 0 0 0 0 0 0
101 Reports 1 0 0 0 0 0 0
102 Cleaning 0 1 0 0 0 0 0
```
|
I have a two-step solution to your problem. First, you can generate 7 columns representing the bits for each day of the week using Excel. The following screen captures show how you can do this:
*(Two screenshots of the Excel worksheet, showing the added day-of-week columns and the formulas that fill them, are omitted here.)*
In each column for a given day of the week, I use a formula to check whether 0 or 1 should go there.
The next step is to export the file as CSV from Excel, using commas as a delimiter. Once you have done this, you can import this file into SQL Server.
```
BULK INSERT YourTable
FROM 'YourCSV.csv'
WITH
(
FIRSTROW = 2,
FIELDTERMINATOR = ',', --CSV field delimiter
ROWTERMINATOR = '\n', --Use to shift the control to next row
TABLOCK
)
```
And the output will be the same as with Giorgi's method:
```
JobID JobDescription M Tu W Th F Sa Su
100 Backup 1 0 0 0 0 0 0
101 Reports 1 0 0 0 0 0 0
102 Cleaning 0 1 0 0 0 0 0
```
The method I use lets you stay within the comforts of Excel to handle the messy part of the data scrubbing. Only in the last step do we bring the data into SQL Server.
|
Import DayOfWeek Name as Bit in SQL
|
[
"",
"sql",
"sql-server",
""
] |
I am trying to understand why the SQL command `MAX(SUM(col))` gives a syntax error. I have the two tables below:
```
+--------+--------+---------+-------+
| pname | rollno | address | score |
+--------+--------+---------+-------+
| A | 1 | CCU | 1234 |
| B | 2 | CCU | 2134 |
| C | 3 | MMA | 4321 |
| D | 4 | MMA | 1122 |
| E | 5 | CCU | 1212 |
+--------+--------+---------+-------+
```
**Personnel Table**
```
+--------+-------+----------+
| rollno | marks | sub |
+--------+-------+----------+
| 1 | 90 | SUB1 |
| 1 | 88 | SUB2 |
| 2 | 89 | SUB1 |
| 2 | 95 | SUB2 |
| 3 | 99 | SUB1 |
| 3 | 99 | SUB2 |
| 4 | 82 | SUB1 |
| 4 | 79 | SUB2 |
| 5 | 92 | SUB1 |
| 5 | 75 | SUB2 |
+--------+-------+----------+
```
**Results Table**
Essentially I have a details table and a results table. I want to find the name and marks of the candidate who has got the highest score in `SUB1` and `SUB2` combined. Basically the person with the highest aggregate marks.
I can find the summation of `SUB1` and `SUB2` for all candidates using the following query-:
```
select p.pname, sum(r.marks) from personel p,
result r where p.rollno=r.rollno group by p.pname;
```
It gives the following output-:
```
+--------+--------------+
| pname | sum(r.marks) |
+--------+--------------+
| A | 178 |
| B | 167 |
| C | 184 |
| D | 198 |
| E | 161 |
+--------+--------------+
```
This is fine, but I need the output to be only `D | 198`, as he is the highest scorer. **Now when I modify the query as follows, it fails:**
```
select p.pname, max(sum(r.marks)) from personel p,
result r where p.rollno=r.rollno group by p.pname;
```
In MySQL I get the error of `Invaild Group Function`.
Now searching on SO I did get my correct answer which uses derived tables. I get my answer by using the following query-:
```
SELECT
pname, MAX(max_sum)
FROM
(SELECT
p.pname AS pname, SUM(r.marks) AS max_sum
FROM
personel p, result r
WHERE
p.rollno = r.rollno
GROUP BY p.pname) a;
```
But my question is **Why doesn't `MAX(SUM(col))` work ?**
I don't understand why MAX() can't compute the value returned by SUM(). An answer on SO stated that since SUM() returns only a single value, MAX() finds it meaningless to compute the maximum of a single value, but I have tested the following query:
```
select max(foo) from a;
```
on the Table "a" which has only one row with only one column called `foo` that holds an integer value. So if MAX() can't compute single values then how did this work ?
Can someone explain to me how the query processor executes the query and why I get the error of invalid group function ? From the readability point of view using MAX(SUM(col)) is perfect but it doesn't work out that way. I want to know why.
Are `MAX` and `SUM` never to be used together? I am asking because I have seen queries like `MAX(COUNT(col))`. I don't understand how that works and not this.
|
The error is `1111: invalid use of group function`. As for why specifically MySQL has this problem I can really only say it is part of the underlying engine itself. `SELECT MAX(2)` does work (in spite of a lack of a `GROUP BY`) but `SELECT MAX(SUM(2))` does not work.
This error will occur when grouping/aggregating functions such as `MAX` are used in the wrong spot such as in a `WHERE` clause. `SELECT SUM(MAX(2))` also does not work.
You can imagine that MySQL attempts to aggregate both simultaneously rather than doing things in an order of operations, i.e. it does not `SUM` first and then get the `MAX`. This is why you need to do the queries as separate steps.
|
Aggregate functions require an argument that provides a value for each row in the group. The result of another aggregate function doesn't do that.
It's not very sensical anyway. Suppose MySQL accepted `MAX(SUM(col))` -- what would it mean? Well, the `SUM(col)` yields the sum of all non-`NULL` values of column `col` over all rows in the relevant group, which is a single number. You could take the `MAX()` of that to be that same number, but what would be the point?
Your approach using a subquery is different, at least in principle, because it aggregates twice. The inner aggregation, in which you perform the `SUM()`, computes a separate sum for each value of `p.pname`. The outer query then computes the maximum across all rows returned by the subquery (because you do not specify a `GROUP BY` in the outer query). If that's what you want, that's how you need to specify it.
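This rejection of nested aggregates, and the aggregate-twice workaround, are easy to demonstrate in SQLite through Python's `sqlite3` (SQLite raises its own flavor of MySQL's "invalid use of group function" error):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE result (rollno INTEGER, marks INTEGER);
INSERT INTO result VALUES (1,90),(1,88),(2,89),(2,95),(3,99),(3,99),
 (4,82),(4,79),(5,92),(5,75);
""")
# Nesting aggregates directly is rejected outright.
rejected = False
try:
    con.execute("SELECT MAX(SUM(marks)) FROM result GROUP BY rollno")
except sqlite3.OperationalError:
    rejected = True
print(rejected)  # True
# Aggregating twice via a derived table works: SUM per rollno first,
# then MAX over those sums.
row = con.execute("""
SELECT MAX(total)
FROM (SELECT rollno, SUM(marks) AS total FROM result GROUP BY rollno)
""").fetchone()
print(row)  # (198,)
```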
|
sql - Why doesn't MAX() of SUM() work?
|
[
"",
"mysql",
"sql",
"sum",
"max",
""
] |
I have the following table:
```
PERSON_ID EFFECTIVE_END_DATE ASSIGNMENT_ID FULL_NAME
33151 2013-08-04 00:00:00.0 33885 Test, C
33151 2013-10-04 00:00:00.0 33885 Test, C
33151 2015-02-19 00:00:00.0 33885 Test, C
33151 2013-08-04 00:00:00.0 33885 Test, C
33151 2013-10-04 00:00:00.0 33885 Test, C
33151 2015-02-19 00:00:00.0 33885 Test, C
```
Here `PERSON_ID` is the same for every row; I want to select the row with the maximum effective end date without using `group by`.
Can someone help me?
|
In Oracle you can use:
```
select * from
(SELECT your_column_name
FROM table_name
order by EFFECTIVE_END_DATE desc)
where rownum=1;
```
OR
```
SELECT *
FROM (
SELECT your_columns, row_number() over (order by EFFECTIVE_END_DATE desc) EED
FROM table_name)
WHERE EED = 1
```
OR
```
SELECT *
FROM table_name t1
WHERE EFFECTIVE_END_DATE = (
    SELECT MAX(t2.EFFECTIVE_END_DATE)
    FROM table_name t2
    WHERE t1.PERSON_ID = t2.PERSON_ID)
```
Refer This:
[MAX(DATE) - SQL ORACLE](https://stackoverflow.com/questions/11390585/maxdate-sql-oracle)
This is a similar example
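The order-descending-and-take-the-first-row idea (ROWNUM in Oracle, `LIMIT` elsewhere) checked in SQLite via Python's `sqlite3`; ISO-formatted text dates sort correctly as plain strings:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (person_id INTEGER, end_date TEXT);
INSERT INTO t VALUES (33151,'2013-08-04'),(33151,'2013-10-04'),
 (33151,'2015-02-19');
""")
# Sort newest first and keep one row -- no GROUP BY involved.
row = con.execute("""
SELECT person_id, end_date FROM t
ORDER BY end_date DESC LIMIT 1
""").fetchone()
print(row)  # (33151, '2015-02-19')
```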
|
You might have duplicate rows too. You could use Analytic ranking function to fetch the required rows.
**Set up:**
```
SQL> CREATE TABLE Table1
  2  (PERSON_ID int, EFFECTIVE_END_DATE date, ASSIGNMENT_ID int, FULL_NAME varchar2(7));
Table created.
SQL>
SQL> INSERT ALL
2 INTO Table1 (PERSON_ID, EFFECTIVE_END_DATE, ASSIGNMENT_ID, FULL_NAME)
3 VALUES (33151, to_date('2013-08-04','YYYY-MM-DD'), 33885, 'Test, C')
4 INTO Table1 (PERSON_ID, EFFECTIVE_END_DATE, ASSIGNMENT_ID, FULL_NAME)
5 VALUES (33151, to_date('2013-10-04','YYYY-MM-DD'), 33885, 'Test, C')
6 INTO Table1 (PERSON_ID, EFFECTIVE_END_DATE, ASSIGNMENT_ID, FULL_NAME)
7 VALUES (33151, to_date('2015-02-19','YYYY-MM-DD'), 33885, 'Test, C')
8 INTO Table1 (PERSON_ID, EFFECTIVE_END_DATE, ASSIGNMENT_ID, FULL_NAME)
9 VALUES (33151, to_date('2013-08-04','YYYY-MM-DD'), 33885, 'Test, C')
10 INTO Table1 (PERSON_ID, EFFECTIVE_END_DATE, ASSIGNMENT_ID, FULL_NAME)
11 VALUES (33151, to_date('2013-10-04','YYYY-MM-DD'), 33885, 'Test, C')
12 INTO Table1 (PERSON_ID, EFFECTIVE_END_DATE, ASSIGNMENT_ID, FULL_NAME)
13 VALUES (33151, to_date('2015-02-19','YYYY-MM-DD'), 33885, 'Test, C')
14 SELECT * FROM dual;
6 rows created.
SQL>
SQL> COMMIT;
Commit complete.
SQL>
SQL> SELECT * FROM table1;
PERSON_ID EFFECTIVE_END_DATE ASSIGNMENT_ID FULL_NA
---------- --------------------- ------------- -------
33151 04-AUG-13 33885 Test, C
33151 04-OCT-13 33885 Test, C
33151 19-FEB-15 33885 Test, C
33151 04-AUG-13 33885 Test, C
33151 04-OCT-13 33885 Test, C
33151 19-FEB-15 33885 Test, C
6 rows selected.
SQL>
```
**Test case:**
```
SQL> WITH data AS
2 (SELECT t.*,
3 rank() over(partition BY person_id order by person_id, EFFECTIVE_END_DATE DESC) rn
4 FROM table1 t
5 )
6 SELECT PERSON_ID,
7 EFFECTIVE_END_DATE,
8 ASSIGNMENT_ID,
9 FULL_NAME
10 FROM data
11 WHERE rn = 1;
PERSON_ID EFFECTIVE_END_DATE ASSIGNMENT_ID FULL_NA
---------- --------------------- ------------- -------
33151 19-FEB-15 33885 Test, C
33151 19-FEB-15 33885 Test, C
SQL>
```
|
How to select the max date in SQL without using a group by clause. The solutions I have found use group by; I want a solution without it.
|
[
"",
"sql",
"oracle",
"date",
"group-by",
""
] |
Consider the following table schema:
```
----------------------------------
| ID | MinValue | MaxValue |
----------------------------------
| 1 | 0 | 10 |
| 2 | 11 | 20 |
| 3 | 21 | 30 |
```
I want to be able to pass an integer, and have it return the appropriate ID where that value matches the range between Min and Max Value.
EG:
```
Input = 17
Output = 2
Input = 4
Output = 1
Input = 26
Output = 3
```
I thought I could do something like:
```
SELECT ID FROM MyTable WHERE MinValue >= @input AND MaxValue <= @input
```
But it doesn't work, nothing is returned.
I'm sure the solution is blatantly simple, but i'm stumped.
What's the best way to achieve this in SQL Server?
|
try this
```
SELECT ID FROM MyTable WHERE @input BETWEEN MinValue AND MaxValue
```
**DESCRIPTION OF BETWEEN**
The SQL `BETWEEN` Condition is used to retrieve values within a range in a `SELECT`, `INSERT`, `UPDATE`, or `DELETE` statement.
**SYNTAX**
The syntax for the SQL `BETWEEN` Condition is:
expression `BETWEEN value1 AND value2`;
Parameters or Arguments
expression is a column or calculation.
`value1` and `value2` create an inclusive range that expression is compared to.
**NOTE**
The SQL `BETWEEN` Condition will return the records where expression is within the range of `value1` and `value2` (inclusive).
ref: <http://www.techonthenet.com/sql/between.php>
or you can also use like
```
MinValue <= @input AND MaxValue >= @input
```
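Both forms (the `BETWEEN` predicate and the flipped comparison operators) are equivalent and inclusive at both ends; a quick SQLite sketch of the question's data (column names are stand-ins) confirms this, including the boundary case:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ranges (id INT, minv INT, maxv INT)")
conn.executemany("INSERT INTO ranges VALUES (?,?,?)", [(1, 0, 10), (2, 11, 20), (3, 21, 30)])

def lookup(n):
    # BETWEEN is inclusive on both bounds, matching the answer's note.
    row = conn.execute("SELECT id FROM ranges WHERE ? BETWEEN minv AND maxv", (n,)).fetchone()
    return row[0] if row else None

# 10 sits exactly on the upper bound of the first range and still matches.
results = [lookup(17), lookup(4), lookup(26), lookup(10)]
```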
|
Try this,
```
SELECT ID FROM MyTable WHERE @input BETWEEN MinValue AND MaxValue
```
Or flip the equality signs in your statement.
```
SELECT ID FROM MyTable WHERE MinValue <= @input AND MaxValue >= @input
```
|
Check if Integer is between a range of column values
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have 2 tables,
**indexInfo**:
```
[indexId] [name] [type]
101 nameA Cars
102 nameB Trucks
103 nameC Cars
```
**userGroupXindexInfo**
```
[usergroupnumber] [indexId]
201 101
201 103
202 102
202 103
203 103
```
There could be multiple `userGroups` assigned to the same index. How do I get a list of unique indices which belong to a list of user groups with a specific type?
So I want a list of unique indices that are assigned to user groups `(201, 202, 203)` from `indexInfo` where `type = Cars`.
In this case, it would be: 101, 103
I tried using a left outer join, but was not sure if there are better ways to do this.
|
One way to do this is to use a `in` predicate to limit the `indexId` to those matching the specified groups:
```
select indexId
from indexInfo
where type = 'Cars'
and indexId in (
select indexId
from userGroupXindexInfo
where usergroupnumber in (201,202,203)
)
```
|
You can use `EXISTS` for that:
```
SELECT t.IndexId
FROM indexInfo t
WHERE t.Type='Cars'
  AND EXISTS (
      SELECT *
      FROM userGroupXindexInfo i
      WHERE t.IndexId = i.IndexId
        AND i.usergroupnumber IN (201, 202, 203)
  )
```
The first condition `t.Type='Cars'` verifies the type. The second condition `EXISTS (...)` verifies that the index is assigned to a group.
You could do the same with `GROUP BY` and a join:
```
SELECT t.IndexId
FROM indexInfo t
JOIN userGroupXindexInfo i ON t.IndexId = i.IndexId
WHERE t.Type='Cars'
  AND i.usergroupnumber IN (201, 202, 203)
GROUP BY t.IndexId
```
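Either form can be sanity-checked with SQLite on the question's sample data; here is the `EXISTS` variant with the group filter folded into the subquery (table and column names copied from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE indexInfo (indexId INT, name TEXT, type TEXT)")
conn.execute("CREATE TABLE userGroupXindexInfo (usergroupnumber INT, indexId INT)")
conn.executemany("INSERT INTO indexInfo VALUES (?,?,?)",
                 [(101, "nameA", "Cars"), (102, "nameB", "Trucks"), (103, "nameC", "Cars")])
conn.executemany("INSERT INTO userGroupXindexInfo VALUES (?,?)",
                 [(201, 101), (201, 103), (202, 102), (202, 103), (203, 103)])

# Unique indices of type Cars that belong to at least one of the listed groups.
ids = [r[0] for r in conn.execute("""
    SELECT i.indexId
    FROM indexInfo i
    WHERE i.type = 'Cars'
      AND EXISTS (SELECT 1 FROM userGroupXindexInfo g
                  WHERE g.indexId = i.indexId
                    AND g.usergroupnumber IN (201, 202, 203))
    ORDER BY i.indexId
""").fetchall()]
```

This returns exactly the 101, 103 the question expects.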
|
How to construct sql query for the following case?
|
[
"",
"sql",
""
] |
Data is like so, (table1 links up to table2) on table1.col2 = table2.col2
Based on that criteria,
Employee 5 below is assigned to Area 1 in the first table; however, in the second table that employee is not assigned to Area 1, so the result returned would be only the first record of the first table (emp5, A1).
Example below
Table1
```
Col1 Col2
-------------
emp5 A1
emp6 A1
emp5 A2
```
Table2
```
Col1 Col2
--------------
emp7 A1
emp6 A1
emp5 A2
```
|
This is tricky. You need employees who are in both tables. Then you need to check that col2 is different on one of the rows.
The following does this comparison using `union all`:
```
select col1, col2, max(which)
from ((select col1, col2, 1 as which
from table1 t1
where exists (select 1 from table2 t2 where t2.col1 = t1.col1)
) union all
(select col1, col2, 2 as which
from table2 t2
where exists (select 1 from table1 t1 where t2.col1 = t1.col1)
)
) tt
group by col1, col2
having count(*) = 1
```
This will also tell you which table has the extra row.
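This `union all` approach runs unchanged in SQLite, so it can be verified against the question's sample data (table names as in the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (col1 TEXT, col2 TEXT)")
conn.execute("CREATE TABLE t2 (col1 TEXT, col2 TEXT)")
conn.executemany("INSERT INTO t1 VALUES (?,?)", [("emp5", "A1"), ("emp6", "A1"), ("emp5", "A2")])
conn.executemany("INSERT INTO t2 VALUES (?,?)", [("emp7", "A1"), ("emp6", "A1"), ("emp5", "A2")])

# Keep (col1, col2) pairs that occur in exactly one of the two tables,
# restricted to employees present in both tables.
rows = conn.execute("""
    SELECT col1, col2, MAX(which)
    FROM (SELECT col1, col2, 1 AS which FROM t1
          WHERE EXISTS (SELECT 1 FROM t2 WHERE t2.col1 = t1.col1)
          UNION ALL
          SELECT col1, col2, 2 AS which FROM t2
          WHERE EXISTS (SELECT 1 FROM t1 WHERE t1.col1 = t2.col1)) tt
    GROUP BY col1, col2
    HAVING COUNT(*) = 1
""").fetchall()
```

The single surviving row is (emp5, A1) with `which = 1`, i.e. the extra row lives in the first table.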
|
You can use MINUS; it is more intuitive. The syntax differs between SQL Server, MySQL and Oracle, as you can see at <http://blog.sqlauthority.com/2008/08/07/sql-server-except-clause-in-sql-server-is-similar-to-minus-clause-in-oracle/>
But I like MINUS, for instance
```
select
t1.Col1,
t1.Col2
from table1 t1
MINUS
select
t2.Col1,
t2.Col2
from table2 t2
```
This way, you can think like sets (math)!
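SQLite (like SQL Server) spells `MINUS` as `EXCEPT`, so the set-difference idea can be tried directly on the question's data; note this gives only the rows in table1 that are missing from table2, one direction of the difference:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (col1 TEXT, col2 TEXT)")
conn.execute("CREATE TABLE t2 (col1 TEXT, col2 TEXT)")
conn.executemany("INSERT INTO t1 VALUES (?,?)", [("emp5", "A1"), ("emp6", "A1"), ("emp5", "A2")])
conn.executemany("INSERT INTO t2 VALUES (?,?)", [("emp7", "A1"), ("emp6", "A1"), ("emp5", "A2")])

# Rows of t1 not present in t2 (SQLite's EXCEPT is Oracle's MINUS).
only_in_t1 = conn.execute(
    "SELECT col1, col2 FROM t1 EXCEPT SELECT col1, col2 FROM t2").fetchall()
```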
|
Differences between two tables
|
[
"",
"sql",
""
] |
It's hard to explain with words what I'm trying to accomplish so here's an example. Let's say we have the following table:
```
Customer Group BeginDate ParentCustomer
1469046 3939 7/1/2010 1311044
1469046 3939 8/1/2010 1311044
1469046 4144 1/1/2011 1460224
1469046 4147 2/1/2011 1461557
1469046 3939 11/1/2013 1311044
1469046 3939 12/1/2013 1311044
1469046 3939 1/1/2014 1311044
1469046 3939 2/1/2014 1311044
1469046 3939 3/1/2014 1311044
1469046 3939 4/1/2014 1311044
1469046 3939 5/1/2014 1311044
1469046 3939 6/1/2014 1311044
1469046 3939 7/1/2014 1311044
1469046 3939 8/1/2014 1311044
1469046 587 9/1/2014 141274
1469046 587 10/1/2014 141274
1469046 587 11/1/2014 141274
1469046 587 12/1/2014 141274
```
And below is what I'm trying to get:
```
Customer Group BeginDate ParentCustomer
1469046 3939 7/1/2010 1311044
1469046 4144 1/1/2011 1460224
1469046 4147 2/1/2011 1461557
1469046 3939 11/1/2013 1311044
1469046 587 9/1/2014 141274
```
So, basically, for each group I only want to display when it started.
I have the code that does what I need but what throws it off is the repeating group 3939. It will only display it once beginning in 7/1/2010.
Does anyone have any idea if this is possible and how I would go about doing it?
|
```
select *
from
(
    SELECT *,
        LAG([Group]) OVER (PARTITION BY Customer ORDER BY BeginDate) AS PreviousGroup
    FROM table
) t
where PreviousGroup is null or PreviousGroup <> [Group]
```
|
If you are using SQL Servier 2005 or SQL Server 2008 then you can use this.
```
SELECT T.*
FROM T
OUTER APPLY
( SELECT TOP 1 T2.[Group]
FROM T AS T2
WHERE T2.Customer = T.Customer
AND T2.BeginDate < T.BeginDate
ORDER BY T2.BeginDate DESC
) AS prev
WHERE prev.[Group] != T.[Group]
OR prev.[Group] IS NULL;
```
**[Example on SQL Fiddle](http://sqlfiddle.com/#!6/047d3d/5)**
This simply uses [`OUTER APPLY`](https://technet.microsoft.com/en-us/library/ms175156%28v=sql.105%29.aspx) to get the previous record for that customer, and removes the records where the previous group was the same, or null for the first record.
If you are using SQL Server 2012 or later then you can do this with the [`LAG`](https://msdn.microsoft.com/en-us/library/hh231256.aspx) Function, but this may not perform better depending on the cardinality of your table:
```
SELECT Customer, [Group], BeginDate, ParentCustomer
FROM ( SELECT Customer,
[Group],
BeginDate,
ParentCustomer,
PreviousGroup = LAG([Group]) OVER(PARTITION BY Customer ORDER BY BeginDate)
FROM T
) AS T
WHERE PreviousGroup != [Group]
OR PreviousGroup IS NULL;
```
**[Example on SQL Fiddle](http://sqlfiddle.com/#!6/047d3d/7)**
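The `LAG`-based variant can be checked with SQLite as well (window functions require SQLite 3.25+, which ships with modern Python builds); the data below is a trimmed subset of the question's rows, with `grp` used as a column name to avoid the reserved word `GROUP`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (customer INT, grp INT, begindate TEXT)")
conn.executemany("INSERT INTO t VALUES (?,?,?)", [
    (1469046, 3939, "2010-07-01"), (1469046, 3939, "2010-08-01"),
    (1469046, 4144, "2011-01-01"), (1469046, 4147, "2011-02-01"),
    (1469046, 3939, "2013-11-01"), (1469046, 3939, "2013-12-01"),
    (1469046, 587,  "2014-09-01"), (1469046, 587,  "2014-10-01")])

# Keep a row only when its group differs from the previous row's group
# (or there is no previous row), so each run of repeats yields its first row.
starts = conn.execute("""
    SELECT customer, grp, begindate
    FROM (SELECT *, LAG(grp) OVER (PARTITION BY customer ORDER BY begindate) AS prev
          FROM t)
    WHERE prev IS NULL OR prev <> grp
    ORDER BY begindate
""").fetchall()
```

Note the repeated group 3939 appears twice in the output, once per run, which is exactly the behavior the question asks for.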
|
Return the first iteration of the set of values even if they repeat,
|
[
"",
"sql",
"sql-server",
"database",
"t-sql",
""
] |
I want to create a query for the cheapest package for a holiday to Spain, given a package ID. I'm stuck on how to go about executing my query. I need help on what to include in the values for the 'package' table, and I also need help on how to write the query.
Here is the table:
```
USE [zachtravelagency]
CREATE TABLE package (
[packageID] INTEGER NOT NULL IDENTITY (1,1) PRIMARY KEY,
[hotelID] INTEGER FOREIGN KEY REFERENCES hotels NOT NULL,
[excursionID] INTEGER FOREIGN KEY REFERENCES excursions NOT NULL,
[transportID] INTEGER FOREIGN KEY REFERENCES transport NOT NULL,
[flightID] INTEGER FOREIGN KEY REFERENCES flight NOT NULL,
);
```
Here are the columns, followed by some NULL values as I'm not sure what to put in.
```
Insert Into package (packageID, hotelID, excursionID, transportID, flightID)
Values (1, '', '', '', '')
```
Here is an example of entering data into my 'hotel' table (this is an example of one row)
```
Insert Into hotels (hotelID, hotelName, numRooms, location, totalCost, rating)
Values (1, 'Supreme Oyster Resort & Spa', '255', 'Spain', '250', '4')
```
I'm new to SQL so thank you for your patience.
|
First, for your insert statement for 'package', you don't specify packageId since it's an identity column. Instead it should look something like this
```
Insert Into package (hotelID, excursionID, transportID, flightID)
Values (1, 54, 43, 23)
```
Then to run a SELECT Query to find the cheapest package to Spain you will have to join your hotel, excursion, transport, and flight table on package, and sum the totalCost from each of the tables.
Example:
```
SELECT p.*, (h.totalCost + e.totalCost + t.totalCost + f.totalCost) as 'Total Package Cost' FROM Package p
INNER JOIN hotel h ON h.hotelId = p.hotelId
INNER JOIN excursion e ON e.excursionId = p.excursionId
INNER JOIN transport t ON t.transportId = p.transportId
INNER JOIN flight f ON f.flightId = p.flightId
WHERE h.location = 'Spain'
ORDER BY (h.totalCost + e.totalCost + t.totalCost + f.totalCost) ASC
```
Your cheapest packages will be listed first. If you only want the cheapest then you can use SELECT TOP 1
This query also assumes that each of the tables had a totalCost column.
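The join-and-sum pattern can be sketched in SQLite; to keep it short, this hypothetical version uses only two of the four cost tables (hotels and flight), with names mirroring the question's schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE hotels (hotelID INT, location TEXT, totalCost INT);
CREATE TABLE flight (flightID INT, totalCost INT);
CREATE TABLE package (packageID INTEGER PRIMARY KEY, hotelID INT, flightID INT);
INSERT INTO hotels VALUES (1, 'Spain', 250), (2, 'Spain', 150), (3, 'France', 100);
INSERT INTO flight VALUES (10, 80), (11, 120);
INSERT INTO package (hotelID, flightID) VALUES (1, 10), (2, 11), (3, 10);
""")

# Sum the per-table costs, keep only Spain, and take the cheapest package.
cheapest = conn.execute("""
    SELECT p.packageID, h.totalCost + f.totalCost AS total
    FROM package p
    JOIN hotels h ON h.hotelID = p.hotelID
    JOIN flight f ON f.flightID = p.flightID
    WHERE h.location = 'Spain'
    ORDER BY total ASC
    LIMIT 1
""").fetchone()
```

Package 2 (150 + 120 = 270) beats package 1 (250 + 80 = 330); the French package is filtered out.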
|
Apparently you need to create a total of five tables. Because of the foreign keys you'll have to insert data in the packages table last. Let's assume all that is completed and you now want to query.
If you're given the packageID then you already have the answer. I'm not sure what you mean by that. If you want the minimum cost of a package that has a hotel in Spain then do this:
```
select min(h.totalCost)
from package as p inner join hotels as h on h.hotelID = p.hotelID
where h.location = 'Spain'
```
If you want packages that include a hotel in Spain of the lowest cost, try this. It could match more than one:
```
select * from package where hotelID in (
select hotelID from hotels where totalCost = (
select min(h.totalCost)
from package as p inner join hotels as h on h.hotelID = p.hotelID
    where p.packageID = ? and h.location = 'Spain'
)
)
```
|
SQL help on query
|
[
"",
"sql",
"sql-server",
"insert",
""
] |
I have the below table (sorry, couldn't figure out how to post a table... the first row has the field names)

```
code   desc   channel      date
1001   A      supermarket  10-oct
1001   B      minimarket   15-dic
1003   A      restaurant   07-may
1003   B      bar          30-abr
1003   A      restaurant   12-dic
1002   B      kiosk        10-oct
```
I am trying to get the latest record for each code and update it in another table where I already have all the codes that need updating (that table has the same fields, but they need to be refreshed to the latest values).
the result would be this
```
code   desc   channel      date
1001   B      minimarket   15-dic
1003   A      restaurant   12-dic
1002   B      kiosk        10-oct
```
thanks in advance for any help!
|
You can get the results using a query:
```
select t.*
from table as t
where t.date = (select max(t2.date) from table as t2 where t2.code = t.code);
```
I'm not sure what your other table looks like, but you could fix this into a query like:
```
update secondtable
set val = (select channel
from table as t
where t.code = secondtable.code and
t.date = (select max(t2.date) from table as t2 where t2.code = t.code)
);
```
You could also use a `join` if more than one field were being set.
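The correlated-subquery update runs as-is in SQLite, so the pattern can be verified on a miniature version of the question's data (table and column names are stand-ins; `d` holds ISO date strings):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE src (code INT, channel TEXT, d TEXT);
CREATE TABLE dst (code INT, channel TEXT);
INSERT INTO src VALUES (1001, 'supermarket', '2015-10-10'),
                       (1001, 'minimarket',  '2015-12-15'),
                       (1002, 'kiosk',       '2015-10-10');
INSERT INTO dst VALUES (1001, 'stale'), (1002, 'stale');
""")

# For each code in dst, pull the channel from the latest-dated src row.
conn.execute("""
    UPDATE dst
    SET channel = (SELECT s.channel FROM src s
                   WHERE s.code = dst.code
                     AND s.d = (SELECT MAX(s2.d) FROM src s2 WHERE s2.code = s.code))
""")
updated = conn.execute("SELECT code, channel FROM dst ORDER BY code").fetchall()
```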
|
Another answer (as the others posted work also) is to use a temporary table. It does require 3 SQL statements, but might be faster than the nested queries below:
(assuming the two tables you have are called t1 and t2, and I use MySQL)
```
CREATE TEMPORARY TABLE t3 AS
SELECT code, descr, channel, MAX(date) as mxdate  -- I would avoid using "desc" and "date" if possible
FROM t1
GROUP BY code;
UPDATE t2,t3
SET t2.descr=t3.descr, t2.channel=t3.channel, t2.date=t3.mxdate
WHERE t2.code=t3.code;
DROP TEMPORARY TABLE t3;
```
Not sure if this is faster.
|
Update another table based on latest record
|
[
"",
"sql",
"ms-access",
""
] |
Suppose I have an sql query like the following (I realize this query could be written better, just bear with me):
```
SELECT aT.NAME
FROM anothertable aT,
( SELECT ts.slot_id,
tgm.trans_id,
tagm.agent_id
FROM slots ts,
transactions tgm,
agents tagm
       WHERE ts.slot_id in (12345, 678910)
and ts.slot_id = tagm.slot_id
AND ts.slot_id = tgm.slot_id) INNER
WHERE INNER.trans_id = aT.trans_id
AND INNER.agent_id = aT.trans_id
```
Now suppose that I need to break up this query into two parts: in the first I'll execute the inner query, do some processing on the results in code, and then pass back a reduced set to the outer part of the query. The question is, is there an easy way to emulate an inner table in SQL?
For instance, if the results of the inner query returned 5 rows but my program deems to only need two of those rows, how can I write SQL that will do what I am trying to do below? Is there a way, in SQL, to declare a table for in-memory, in-query use?
```
SELECT
at.Name
FROM
anotherTable aT,
(SLOT_ID, TRANS_ID, AGENT_ID
-------------------------
230743, 3270893, 2307203
078490, 230897, 237021) inner
WHERE
inner.trans_id = at.trans_id
AND INNER.agent_id = aT.trans_id
```
|
Just use a subquery:
```
SELECT at.Name
FROM anotherTable aT JOIN
         (select 230743 as SLOT_ID, 3270893 as TRANS_ID, 2307203 as AGENT_ID from dual
          union all
          select 078490, 230897, 237021 from dual
         ) i
on i.trans_id = at.trans_id AND i.agent_id = aT.trans_id;
```
|
The subquery method worked. Since this is Oracle, the syntax turned out to be:
```
SELECT aT.Name
FROM anotherTable aT,
(select 1907945 as SLOT_ID, 2732985 as TRANS_ID, 40157 as AGENT_ID FROM DUAL
union
select 1907945, 2732985, 40187 FROM DUAL
) inner
WHERE
inner.trans_id = aT.trans_id AND INNER.agent_id = aT.trans_id;
```
|
sql inner table substitution
|
[
"",
"sql",
"oracle11g",
"subquery",
""
] |
I have a `sas` data set consisting of more than 100 variables. Variables `pid` - a character variable and `year` - a numeric variable identify observations in the data set.
How can I create a new data set consisting of observations that have unique `pid` and `year` combination. That is, if a given `pid` and `year` combination occurs more than once, I want to delete all the associated observations, not just the duplicates.
|
I don't use the data step much. I use proc sql, and it is easy for me.
```
proc sql;
create table new_dataset as
select * from old_dataset as a
join
(select pid, year, count(1) from old_dataset group by pid, year having count(1)<2)
as b on a.pid=b.pid and a.year=b.year;
quit;
```
The inner query only gets pid and year combinations which occur once; multiple occurrences of pid and year are excluded by `having count(1)<2`. I then pull those observations from the original table by joining back on pid and year. This approach doesn't need sorting.
Let me know in case of any questions.
|
Simple use of `first.` and `last.` in a data step will do this. Run `proc sort` if the data is not already sorted by pid and year.
```
proc sort data=have;
by pid year;
run;
data want;
set have;
by pid year;
if first.year and last.year then output; /* only keep unique rows */
run;
```
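The proc sql logic (keep only pid/year combinations that occur exactly once) translates directly to any SQL engine; here is a quick SQLite sketch with hypothetical data, where ("a", 2014) is duplicated and must be dropped entirely:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE have (pid TEXT, year INT, x INT)")
conn.executemany("INSERT INTO have VALUES (?,?,?)",
                 [("a", 2014, 1), ("a", 2014, 2), ("a", 2015, 3), ("b", 2014, 4)])

# Join back to the keys whose count is exactly 1, discarding all copies of duplicated keys.
rows = conn.execute("""
    SELECT h.*
    FROM have h
    JOIN (SELECT pid, year FROM have GROUP BY pid, year HAVING COUNT(*) = 1) u
      ON u.pid = h.pid AND u.year = h.year
    ORDER BY h.pid, h.year
""").fetchall()
```

Both ("a", 2014) rows disappear, not just the duplicate, matching the question's requirement.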
|
How to create SAS data set extracting observations with unique keys
|
[
"",
"sql",
"sas",
""
] |
I tried some queries and got almost the result I want using JOINs, but please suggest a better way to do this if there is one.
I have table like this :
```
Name Number
A 1
B 2
C 3
D 4
E 5
```
I want the result to be like :
```
Name Number Name number
A 1 B 2
C 3 D 4
E 5
```
I have written a query like this :
```
DECLARE @TempTable1 Table(RowNumber Int, Name varchar(50), Number Int)
DECLARE @TempTable2 Table(RowNumber Int, Name varchar(50), Number Int)
Insert INTO @TempTable1
SELECT ROW_NUMBER() OVER(ORDER BY Name) AS Row, Name, Number
FROM Test
Insert INTO @TempTable2
SELECT ROW_NUMBER() OVER(ORDER BY Name) AS Row, Name, Number
FROM Test
DECLARE @Count INT
SELECT @Count = MAX(RowNumber) FROM @TempTable1
IF ((@Count % 2) = 0)
BEGIN
SELECT Temp1.RowNumber, Temp1.Name, Temp1.Number, Temp2.RowNumber,Temp2.Name, Temp2.Number
FROM @TempTable1 Temp1, @TempTable2 Temp2
WHERE Temp1.RowNumber = (Temp2.RowNumber - 1) AND (Temp1.RowNumber % 2) != 0
END
ELSE
BEGIN
SELECT Temp1.RowNumber, Temp1.Name, Temp1.Number, Temp2.RowNumber,Temp2.Name, Temp2.Number
FROM @TempTable1 Temp1, @TempTable2 Temp2
WHERE Temp1.RowNumber = (Temp2.RowNumber - 1) AND (Temp1.RowNumber % 2) != 0 OR ((Temp1.RowNumber = @Count) AND (Temp2.RowNumber = @Count))
END
```
|
The query below will help you:
```
SELECT a.NAME,a.Number,b.NAME,b.Number
FROM
(SELECT ROW_NUMBER() OVER(ORDER BY Number) AS ROW,NAME,Number FROM #temp
WHERE Number %2 <>0) a
LEFT OUTER JOIN
(SELECT ROW_NUMBER() OVER(ORDER BY Number) AS ROW,NAME,Number FROM #temp
WHERE Number %2 =0) b
ON a.ROW = b.ROW
```
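The same pairing idea works without window functions if the row number is computed with a correlated count; a portable SQLite sketch of the question's data (pairing row 1 with row 2, row 3 with row 4, and leaving row 5 unpaired):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (name TEXT, number INT)")
conn.executemany("INSERT INTO t VALUES (?,?)",
                 [("A", 1), ("B", 2), ("C", 3), ("D", 4), ("E", 5)])

# rn is a 1-based row number derived from a correlated count,
# so odd rows join to the row immediately after them.
pairs = conn.execute("""
    WITH numbered AS (
        SELECT name, number,
               (SELECT COUNT(*) FROM t t2 WHERE t2.number <= t.number) AS rn
        FROM t)
    SELECT a.name, a.number, b.name, b.number
    FROM numbered a
    LEFT JOIN numbered b ON b.rn = a.rn + 1
    WHERE a.rn % 2 = 1
    ORDER BY a.rn
""").fetchall()
```

The LEFT JOIN keeps the odd trailing row (E, 5) with NULLs in the second pair of columns.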
|
Using a full join, as the number of rows with `%2 = 1` can differ from the number with `%2 = 0`:
```
DECLARE @a AS TABLE (a VARCHAR(10), b INT)
INSERT INTO @a VALUES ('A',1),('B',2),('C',3),('D',4),('E',5);
SELECT a.a,a.b, b.a, b.b
FROM (
SELECT a.a, a.b, DENSE_RANK() OVER(ORDER BY a.b) AS rownum
FROM @a a
WHERE a.b%2 = 1
) a
FULL JOIN (
SELECT a.a, a.b, DENSE_RANK() OVER(ORDER BY a.b) AS rownum
FROM @a a
WHERE a.b%2 = 0
) b
ON a.rownum = b.rownum
```
|
SQL Query for the Pattern
|
[
"",
"sql",
"sql-server",
""
] |
I created the following query
```
select
c.id as eid,
c.created,
e.class_id,
min(e.event_time),
min(cn.create_time) as lead_date
from
bm_emails.cid378 c
left join bm_sets.event378 e on e.eid = c.id
left join bm_sets.conversion378 cn on cn.eid = c.id
where e.class_id = 36
group by eid, created, class_id
```
What happens first when SQL runs the query? The `where` clause or the `min(e.event_time)`?
If the where clause comes after then great.
If not, is there a way to reverse this so that the `min(e.event_time)` gets pulled first and then have the where condition apply to that subset of min dates?
Ideally without use of a sub query.
**Example following responses:**
If I sign up on the site and trigger an event with class_id 40, then a week later I come to the site and trigger an event with class_id 36, then I should NOT be included in the results. Right now it looks like I would be included, since SQL will first filter on the where clause and then apply the min date.
|
You can use the result of your Min in your condition, however, you will need to use `HAVING`, not `WHERE`.
Reference: <http://www.mysqltutorial.org/mysql-having.aspx>
|
The WHERE clause must be executed first, because in your query, MIN runs on the results of the query. You'll need a join or a subquery to do what you want.
|
What happens first: min() in selector or where in condition?
|
[
"",
"mysql",
"sql",
""
] |
I have a table with columns `Date` and `Amount`:
```
CustID Date Amount
1 20150301 10000
1 20150302 5000
1 20150303 15000
2 20150208 9000
2 20150201 5000
2 20150202 6000
```
I want to set `Amount` to zero when `Date` is after a certain date. After update it should look as follows:
```
CustID Date Amount
1 20150301 10000
1 20150302 0
1 20150303 0
2 20150208 0
2 20150201 5000
2 20150202 0
```
How can I do this?
|
I would do this using window functions:
```
with toupdate as (
select t.*, min(date) over (partition by CustId) as mindate
from table t
)
update toupdate
set amount = 0
where date > mindate;
```
|
You can use the following query that checks if `date` is not equal to `MIN(Date)`:
```
UPDATE t
SET Amount = 0
FROM Mytable AS t
WHERE [date] <> (SELECT MIN([Date])
                 FROM Mytable
                 WHERE CustId = t.CustID)
```
or, alternatively, use a `CTE`:
```
;WITH MIN_CTE AS (
SELECT MIN([Date]) AS MinDate, CustID
FROM Mytable
GROUP BY CustID
)
UPDATE t
SET Amount = 0
FROM Mytable AS t
INNER JOIN MIN_CTE AS c ON t.CustID = c.CustID
WHERE t.[Date] <> c.MinDate
```
|
Set a table field to zero based on a date in another field
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2008-r2",
""
] |
Is it possible to grab the last n inserts in a relational table without using a Date field?
For example in the table `Author`:
```
Author(authid, f_name, l_name)
```
Also, authid is not a natural number (eg. 1,2,3,4..) but a string (example: JohnM32015)
I am using MySQL.
|
A *table* doesn't have a guaranteed sort order. A *query* does, if it explicitly defines one. If, for example, your records have an incrementing `authid` value then the last N inserts would be the highest N values for that column. So you'd order by that column descending and take the top N:
```
SELECT * FROM Author ORDER BY authid DESC LIMIT 10
```
However you define "the last N", you specify that definition in your query in a descending sort order and take the top N records from that result.
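Note this only works when the key reflects insertion order; the question's string IDs like JohnM32015 would not sort that way. Assuming an auto-increment key, a minimal SQLite sketch of the "last N" pattern:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# AUTOINCREMENT guarantees authid values grow with each insert.
conn.execute("CREATE TABLE author (authid INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")
for n in ["Ann", "Bob", "Cid", "Dee", "Eve"]:
    conn.execute("INSERT INTO author (name) VALUES (?)", (n,))

# Highest N ids = last N inserts; ORDER BY ... DESC LIMIT N.
last3 = [r[0] for r in conn.execute(
    "SELECT name FROM author ORDER BY authid DESC LIMIT 3").fetchall()]
```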
|
If the `authid` is auto-increment then you can do
```
select * from author
order by authid desc
```
To get only a limited number of records use `top n` in SQL-Server or `limit n` in MySQL for instance.
|
Select n last lines from table
|
[
"",
"mysql",
"sql",
""
] |
I want to perform self join on the following query:
```
select
RANK() OVER (PARTITION BY CARD_ID, BE_BUSINESS_DATE ORDER BY CARD_TXN_SEQ_NUMBER) as Trip_Leg_Id
from [dbo].[Src_Nol_Txns] s1
inner join
```
so that I join the current record with the record at rank - 1.
How can I achieve that?
|
```
with cte as
( SELECT * ,
LAG(CARD_TXN_SEQ_NUMBER, 1,0) OVER (PARTITION BY CARD_ID, BE_BUSINESS_DATE ORDER BY CARD_TXN_SEQ_NUMBER) AS PreviousSEQ
      FROM table )
select cte1.*, cte2.*
from cte as cte1
join cte as cte2
on cte2.CARD_ID = cte1.CARD_ID
and cte2.BE_BUSINESS_DATE = cte1.BE_BUSINESS_DATE
and cte2.CARD_TXN_SEQ_NUMBER = cte1.PreviousSEQ
and cte1.PreviousSEQ <> cte1.CARD_TXN_SEQ_NUMBER;
```
I think the last `and` will deal with duplicates.
|
Here's a way to do it
```
With TripID AS
(
select
RANK() OVER (PARTITION BY CARD_ID, BE_BUSINESS_DATE ORDER BY CARD_TXN_SEQ_NUMBER) as Trip_Leg_Id
from [dbo].[Src_Nol_Txns] s1
)
select *
from TripID a
join TripID b
on b.Trip_Leg_Id=a.Trip_Leg_Id - 1
and a.CARD_ID=b.CARD_ID
and a.BE_BUSINESS_DATE =b.BE_BUSINESS_DATE
```
|
Self join in rank query
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2012",
"self-join",
""
] |
I want to get top 10 posts based on score, and get 10 comments for each post.
I am using the following query:
```
with a as ( SELECt TOP 10 Score,Post.ID as PostID
FROM Post
order by Score desc ), b as
(select PostID,ID as CommentID from PostComment)
select * from a
left join b
on b.PostID = a.PostID
```
This query gets the top 10 posts; however, the problem is that it gets all the comments for each post. How can I modify this query to get only 10 comments per post?
|
Try this:
```
WITH a AS (
SELECT TOP 10
Score ,
Post.ID AS PostID
FROM Post
ORDER BY Score DESC
)
SELECT *
FROM a
OUTER APPLY (
SELECT TOP 10
pc.PostID ,
pc.ID AS CommentID
FROM PostComment pc
WHERE pc.PostID = a.PostID
--ORDER BY SomeColumn
) o
```
|
Something along these lines would work.
```
with a as
(
SELECt TOP 10 Score
, Post.ID as PostID
FROM Post
order by Score desc
), b as
(
select PostID
, ID as CommentID
, ROW_NUMBER() over (partition by PostID order by ID) as RowNum
from PostComment
)
select *
from a
left join b
    on b.PostID = a.PostID
    and b.RowNum <= 10
```
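The top-N-per-group pattern (rank the child rows per parent, then join with a rank cutoff) can be checked in SQLite, which supports `ROW_NUMBER` since 3.25; this trimmed sketch takes the top 2 posts and at most 2 comments each (names are stand-ins):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE post (id INT, score INT);
CREATE TABLE postcomment (id INT, postid INT);
INSERT INTO post VALUES (1, 50), (2, 90), (3, 10);
INSERT INTO postcomment VALUES (100, 2), (101, 2), (102, 2), (103, 1), (104, 3);
""")

# Rank comments within each post, then cap the rank in the join condition
# so posts without enough comments are still returned by the LEFT JOIN.
rows = conn.execute("""
    WITH top_posts AS (SELECT id, score FROM post ORDER BY score DESC LIMIT 2),
         ranked AS (SELECT postid, id,
                           ROW_NUMBER() OVER (PARTITION BY postid ORDER BY id) AS rn
                    FROM postcomment)
    SELECT p.id, r.id
    FROM top_posts p
    LEFT JOIN ranked r ON r.postid = p.id AND r.rn <= 2
    ORDER BY p.score DESC, r.id
""").fetchall()
```

Post 2 keeps its first two comments, comment 102 is cut, and post 3 never appears because it is not a top post.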
|
get top 10 posts and 10 comments for each post sql
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2012",
""
] |
I have a trigger in SQL Server that needs to check on an update the number of rows with a value between a certain amount and do something accordingly. My current code is something like this:
```
IF EXISTS(SELECT COUNT(id) as NumberOfRows
FROM database
WHERE id = 3 AND value <= 20 and value > 2
GROUP BY id
HAVING COUNT(id) > 18)
-- if true, do something
```
From what I can tell, the select statement should find the number of rows with a value between 2 and 20 and if there are more than 18 rows, the `EXISTS` function should return 1 and the query will execute the code within the IF statement.
However, what is happening is that it is always executing the code within the IF statement regardless of the number of rows with a value between 2 and 20.
Any ideas on why this might be? I can post more complete code if it might help.
|
The reason is that the `EXISTS` function checks the result of the subquery for existence: are there any rows or not. And, as you return the `COUNT`, the result will never be non-existent; a scalar `COUNT` returns a row containing `0` even if there are no matching rows in the database.
Try to store the resulting count in a local variable, like in this question:
[Using IF ELSE statement based on Count to execute different Insert statements](https://stackoverflow.com/questions/14072140/sql-using-if-else-statement-based-on-count-to-execute-different-insert-statement)
> ```
> DECLARE @retVal int
>
> SELECT @retVal = COUNT(*)
> FROM TABLE
> WHERE COLUMN = 'Some Value'
>
> IF (@retVal > 0)
> BEGIN
> --INSERT SOMETHING
> END
> ELSE
> BEGIN
> --INSERT SOMETHING ELSE
> END
> ```
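The point about a scalar `COUNT` always producing a row (this is the ungrouped case the answer describes) can be verified quickly in SQLite; the table name is a stand-in:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INT, value INT)")
# The table is empty, so no rows can possibly match the predicate.

# A scalar COUNT still yields exactly one row, containing 0 ...
count_rows = conn.execute("SELECT COUNT(id) FROM t WHERE value > 2").fetchall()

# ... so wrapping it in EXISTS is always true, which is the bug in the question.
exists_hit = conn.execute(
    "SELECT EXISTS (SELECT COUNT(id) FROM t WHERE value > 2)").fetchone()[0]
```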
|
I would do it like so (single line):
```
IF ((SELECT COUNT(id) FROM table WHERE ....) > 18) BEGIN
   ...do something
END
```
You can even do a `BETWEEN` check in a single line:
```
IF ((SELECT COUNT(id) FROM table WHERE ....)between 2 and 20) BEGIN
...do something
END
```
|
SQL SELECT statement within an IF statement
|
[
"",
"sql",
"sql-server",
"select",
"if-statement",
"count",
""
] |
```
create table Department (
Dep_ID int not null,
Dep_Name varchar(30),
primary key (Dep_ID),
)
create table Course (
C_ID int not null,
C_Name varchar (30) not null,
C_Duration varchar (10) not null,
DegreeType varchar (20),
Dep_ID int,
primary key (C_ID),
constraint DEP_ID1 foreign key (Dep_ID) references Department (Dep_ID) on update cascade,
)
create table Student (
St_ID int not null,
St_Name varchar (100),
St_age smallint,
St_gender Varchar(6),
St_tel int,
St_ADD varchar (100) not null,
St_city varchar (50)not null,
St_type varchar (20) not null,
St_nationality varchar (5) not null,
Dep_ID int,
C_ID int,
primary key (St_ID),
constraint DEP_ID foreign key (Dep_ID) references Department(Dep_ID) on update cascade,
constraint CO_ID foreign key (C_ID) references Course(C_ID) on update cascade,
)
create table Staff (
Sta_ID int not null,
Sta_Name varchar (100) not null,
Sta_type varchar (20) not null,
Sta_Add varchar (100) not null,
Sta_tel int ,
Dep_ID int,
primary key (Sta_ID),
constraint DEeP_ID foreign key (Dep_ID) references Department (Dep_ID) on update cascade,
)
```
> This is the error I'm getting. Why can't I use cascade update on
> composite keys?
Msg 1785, Level 16, State 0, Line 19
`Introducing FOREIGN KEY constraint 'CO_ID' on table 'Student' may cause cycles or multiple cascade paths. Specify ON DELETE NO ACTION or ON UPDATE NO ACTION, or modify other FOREIGN KEY constraints`.
Msg 1750, Level 16, State 0, Line 19
Could not create constraint. See previous errors.
|
I did some research and found that this problem can be resolved by changing the line:
`constraint CO_ID foreign key (C_ID) references Course(C_ID) on update cascade,` to `constraint CO_ID foreign key (C_ID) references Course(C_ID) on update No Action,`
There is an explanation at the following link:
[Foreign key constraint may cause cycles or multiple cascade paths?](https://stackoverflow.com/questions/851625/foreign-key-constraint-may-cause-cycles-or-multiple-cascade-paths)
This may resolve your problem.
|
When you are adding a foreign key, the columns need to be exactly the same data type in both tables.

Try it with your query; it works fine for me. I just removed the "," from the end of your last column in each table:
```
CREATE TABLE Department (
Dep_ID INT NOT NULL,
Dep_Name VARCHAR(30),
PRIMARY KEY (Dep_ID)
)
CREATE TABLE Course (
C_ID INT NOT NULL,
C_Name VARCHAR (30) NOT NULL,
C_Duration VARCHAR (10) NOT NULL,
DegreeType VARCHAR (20),
Dep_ID INT,
PRIMARY KEY (C_ID),
CONSTRAINT DEP_ID1 FOREIGN KEY (Dep_ID) REFERENCES Department (Dep_ID) ON UPDATE CASCADE
)
CREATE TABLE Student (
St_ID INT NOT NULL,
St_Name VARCHAR (100),
St_age SMALLINT,
St_gender VARCHAR(6),
St_tel INT,
St_ADD VARCHAR (100) NOT NULL,
St_city VARCHAR (50)NOT NULL,
St_type VARCHAR (20) NOT NULL,
St_nationality VARCHAR (5) NOT NULL,
Dep_ID INT,
C_ID INT,
PRIMARY KEY (St_ID),
CONSTRAINT DEP_ID FOREIGN KEY (Dep_ID) REFERENCES Department(Dep_ID) ON UPDATE CASCADE,
CONSTRAINT CO_ID FOREIGN KEY (C_ID) REFERENCES Course(C_ID) ON UPDATE CASCADE
)
CREATE TABLE Staff (
Sta_ID INT NOT NULL,
Sta_Name VARCHAR (100) NOT NULL,
Sta_type VARCHAR (20) NOT NULL,
Sta_Add VARCHAR (100) NOT NULL,
Sta_tel INT ,
Dep_ID INT,
PRIMARY KEY (Sta_ID),
CONSTRAINT DEeP_ID FOREIGN KEY (Dep_ID) REFERENCES Department (Dep_ID) ON UPDATE CASCADE
)
```
|
Can someone explain why I'm getting this error with this SQL query
|
[
"",
"mysql",
"sql",
"sql-server",
"sql-server-2008",
""
] |
Is it possible to convert a number into a day of the week?
I've tried
```
to_date(1,'DAY')
to_date('1','DAY')
```
with no luck.
|
You can do the following to convert the a number into a day of the week. Keep in mind that the result you get will depend on your NLS settings (for example, your week might start on Sunday or it might start on Monday, etc.):
```
SELECT TO_CHAR(NEXT_DAY(sysdate, 1), 'Day') FROM dual;
```
The above will convert the number into a day of the week name (e.g., `Sunday`). If you want a date, simply use:
```
SELECT NEXT_DAY(sysdate, 1) FROM dual;
```
This will return the next occurrence of that day of the week (on my system, for example, it returns the next Sunday).
|
Ironically, this is one place where the old-fashioned `decode()` is useful:
```
select decode(<your number>, 1, 'Mon', 2, 'Tue', . . . , 7, 'Sun')
```
This is interpreting the question as returning the day of the week for the day number of the week. Admittedly, the code does suggest that you actually want a date, but that doesn't make sense.
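As a language-neutral illustration of the same lookup idea (not Oracle code), the `decode()` mapping amounts to a small fixed table; this hypothetical sketch assumes a Monday=1 convention, so shift the list if your week starts on Sunday:

```python
# Fixed lookup table mirroring the DECODE(number, 1, 'Mon', ...) branches.
DAY_NAMES = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

def day_name(n):
    # n is 1..7; shift to the 0-based list index.
    return DAY_NAMES[n - 1]

names = [day_name(1), day_name(7)]
```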
|
convert number into day oracle plsql
|
[
"",
"sql",
"oracle",
""
] |
I'm trying to write a stored procedure to return the maximum value of a column + 1 but for some reason it doesn't want to work.
```
DECLARE @ID int;
SET @ID = SELECT MAX(ID) + 1 FROM tbl;
```
I can't for the life of me see what is wrong.
It gives me the error of:
> incorrect syntax near the keyword 'SELECT'
|
Use parentheses `( ... )`:
```
DECLARE @ID int;
SET @ID = (SELECT MAX(ID) + 1 FROM tbl);
```
or `SELECT` as [suggested by Giorgi](https://stackoverflow.com/a/29148059/284240). `SET` is the ANSI standard way of assigning values to variables; `SELECT` is not. Apart from that, using `SELECT` to assign values to variables is fine, and it even allows multiple assignments with one `SELECT`.
But in general your query is prone to a race condition. Use an [`IDENTITY` column](https://msdn.microsoft.com/en-us/library/ms186775.aspx) if you want to autoincrement a value: [Auto increment primary key in SQL Server Management Studio 2012](https://stackoverflow.com/questions/10991894/auto-increment-primary-key-in-sql-server-management-studio-2012)
|
No need for `SET`. Select value directly:
```
DECLARE @ID int;
SELECT @ID = MAX(ID) + 1 FROM tbl;
```
|
Issues with SQL Max function: "Incorrect syntax near the keyword 'SELECT' "
|
[
"",
"sql",
"sql-server",
""
] |
```
SELECT CATEGORY
FROM PEA_TEMPLATE
WHERE TEMP_ID = 000001 AND TEMP_VERSION = 2
```
This query returns `'000001','000002'`, which is saved in the `CATEGORY` column in this format.
```
SELECT *
FROM HR_CATEGORY
WHERE CAT_CODE IN ('000001', '000002')
```
This select works fine with a literal string list.
```
SELECT *
FROM HR_CATEGORY
WHERE CAT_CODE IN (SELECT CATEGORY
FROM PEA_TEMPLATE
WHERE TEMP_ID = 000001 AND TEMP_VERSION = 2)
```
But when I use the query inside **IN**, it does not return any value.
What is the reason for this? Is there any way to fix this?
|
If CATEGORY can contain concatenated quoted strings, e.g. `'000001'` or `'000001','000002'`, then it is better to use LIKE with a JOIN construction, like this:
```
SELECT * FROM HR_CATEGORY
INNER JOIN
(SELECT CATEGORY FROM PEA_TEMPLATE WHERE TEMP_ID = 000001 AND TEMP_VERSION = 2) PEA
ON PEA.CATEGORY LIKE '%'''+CAT_CODE+'''%'
```
|
Your subquery is only returning a single row with the content:
> "'000001','000002'"
For the IN clause to work each value must be separate as two rows:
> '000001'
>
> '000002'
You must split the value from the CATEGORY column into multiple strings. To do that you could use a string-splitting-function. There are many of them out there on the internet, here is a related question and answer [T-SQL: Opposite to string concatenation - how to split string into multiple records](https://stackoverflow.com/questions/314824/t-sql-opposite-to-string-concatenation-how-to-split-string-into-multiple-reco)
|
How to use IN in SQL Server with Select
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have written a query to check the completeness of my table against a universal table. The query is as follows:
```
SELECT * FROM Universal
WHERE UNIVERSAL.TYPE=MYTYPE
AND Universal.CLASS=MYCLASS
AND Universal.ID NOT IN (SELECT Mytable.ID FROM Mytable)
```
As you see, I filter the universal table before looking for completeness, both in the same query. I now found that when I remove the NOT IN part the query runs fine, but when I include it I get no result. What's wrong with this query? How can I write it in some other form, for example using joins?
|
```
SELECT * FROM Universal
WHERE UNIVERSAL.TYPE=MYTYPE
AND Universal.CLASS=MYCLASS
AND Universal.ID NOT IN (SELECT Mytable.ID FROM Mytable
WHERE Mytable.ID IS NOT NULL)
```
Nulls and not in...!
The `SELECT` returns rows where the `WHERE` clause is `TRUE`, and doesn't return `FALSE` rows.
Also `UNKNOWN` rows aren't returned (i.e. `NULL` rows.) That's usually not a problem, but when it comes to `NOT IN` many programmers get surprised.
Say you have a cars table, and you want to return cars with colors that are not found in a list (or sub-query):
```
select * from cars where color not in ('red', 'blue')
```
is easy to understand, all cars but red and blue are returned.
But if the list of colors are extended with an unknown color (`NULL`), it will be:
```
select * from cars where color not in ('red', 'blue', NULL)
```
Now the `WHERE` clause doesn't know if e.g. a yellow car is in the list or not, it's not red or blue for sure, but we can't tell if it's not that unknown `NULL` color... I.e. the `WHERE` clause becomes unknown, and no row is returned!
Here the programmer can avoid unknown colors by specifying `IS NOT NULL` in the sub-query. (Or transform to a `NOT EXISTS`, which is "null safe".)
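The pitfall is easy to reproduce. Here is a minimal runnable sketch using Python's built-in `sqlite3` (the table and the car colors are made up for illustration; any SQL engine behaves the same way):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cars (color TEXT)")
conn.executemany("INSERT INTO cars VALUES (?)", [("yellow",), ("red",)])

# Without a NULL in the list, the yellow car is returned as expected.
without_null = conn.execute(
    "SELECT color FROM cars WHERE color NOT IN ('red', 'blue')"
).fetchall()

# With a NULL in the list, the NOT IN comparison becomes UNKNOWN
# for the yellow car, so no row qualifies.
with_null = conn.execute(
    "SELECT color FROM cars WHERE color NOT IN ('red', 'blue', NULL)"
).fetchall()

print(without_null)  # [('yellow',)]
print(with_null)     # []
```

This is exactly why a `NULL` in the sub-query of the original question silently empties the result.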
|
So you want to SELECT everything from Universal that is not in Mytable?
Using a LEFT JOIN will select everything from Universal and only rows from Mytable that match up to something in Universal. So if Mytable.ID is null, then that row of Universal does not have a matching row in Mytable:
```
SELECT * FROM Universal
LEFT JOIN Mytable ON Universal.ID = Mytable.ID
WHERE UNIVERSAL.TYPE=MYTYPE
AND Universal.CLASS=MYCLASS
AND Mytable.ID IS NULL;
```
|
Writing a SQL query in a different way
|
[
"",
"mysql",
"sql",
"oracle",
""
] |
I can't find the problem. I have many tables joined to each other. I need to get the count of form instances from the `FormInstances` table (it should count the `Id` column), but it is returning wrong data.
So my query looks like:
```
DECLARE @dateStart DATE = NULL,
@dateEnd DATE = NULL,
@workshop NVARCHAR(60) = '',
@shiftNr NVARCHAR(60) = '',
@article NVARCHAR(60) = '',
@controllier NVARCHAR(60) = 'Name of Controlier'
SELECT COUNT(FI.Id) AS [Count of Instances]
FROM FormInstances AS FI
LEFT JOIN FormFields AS FF
ON FI.FormVersionId = FF.FormVersionid
LEFT JOIN .....
WHERE
FF.Id = FV.FormFieldId AND
FI.Id = ...
```
OUTPUT FOR NOW:
```
Count of Instances
414400
414400
414400
414400
384800
```
OUTPUT SHOULD BE:
```
Count of Instances
5
```
If I pass to *SELECT* clause: `FI.Id` (`Id` column from `FormInstances` table) in following:
```
SELECT COUNT(FI.Id) AS [Count of Instances],
FI.Id AS [Instance Ids]
FROM....
```
I get the correct `Instance Ids`, but the same wrong count of instances:
```
Count of Instances Instance Ids
414400 23
414400 24
414400 25
414400 26
384800 52
```
So why is it returning an incorrect count of instances?
|
You are doing a join, which is multiplying the number of records. Try using distinct within the count to remove duplicates:
```
COUNT(distinct FI.Id)
```
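The row multiplication caused by the join, and the fix, can be seen with a tiny example. This is a sketch using Python's `sqlite3`, with made-up tables standing in for `FormInstances` and one joined table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE form_instances (id INTEGER);
    CREATE TABLE form_fields (instance_id INTEGER);
    INSERT INTO form_instances VALUES (1), (2);
    -- instance 1 has three fields, instance 2 has one: the join yields 4 rows
    INSERT INTO form_fields VALUES (1), (1), (1), (2);
""")

# COUNT over the joined rows counts every duplicated row.
plain = conn.execute("""
    SELECT COUNT(fi.id) FROM form_instances fi
    JOIN form_fields ff ON ff.instance_id = fi.id
""").fetchone()[0]

# COUNT(DISTINCT ...) collapses the duplicates back to real instances.
distinct = conn.execute("""
    SELECT COUNT(DISTINCT fi.id) FROM form_instances fi
    JOIN form_fields ff ON ff.instance_id = fi.id
""").fetchone()[0]

print(plain, distinct)  # 4 2
```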
|
If you just want to get:
```
Count of Instances
5
```
You should change:
```
SELECT COUNT(FI.Id) AS [Count of Instances],
FI.Id AS [Instance Ids]
FROM....
```
To this:
```
SELECT COUNT(*) AS [Count of Instances]
FROM....
```
But I don't see how the query you posted can return the results you say you are getting so I think you've probably made a mistake in the question.
|
SQL. Wrong result of COUNT() function
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"count",
""
] |
I'm trying to wrap my head around these two functions in the [Spark SQL documentation](http://spark.apache.org/docs/1.2.0/api/scala/index.html#org.apache.spark.sql.SchemaRDD)–
* *def union(other: RDD[Row]): RDD[Row]*
Return the union of this RDD and another one.
* *def unionAll(otherPlan: SchemaRDD): SchemaRDD*
Combines the tuples of two RDDs with the same schema, keeping duplicates.
This is not the standard behavior of UNION vs UNION ALL, [as documented in this SO question](https://stackoverflow.com/questions/49925/what-is-the-difference-between-union-and-union-all).
My code here, borrowing from the [Spark SQL documentation](http://spark.apache.org/docs/1.1.0/sql-programming-guide.html#tab_scala_3), has the two functions returning the same results.
```
scala> case class Person(name: String, age: Int)
scala> import org.apache.spark.sql._
scala> val one = sc.parallelize(Array(Person("Alpha",1), Person("Beta",2)))
scala> val two = sc.parallelize(Array(Person("Alpha",1), Person("Beta",2), Person("Gamma", 3)))
scala> val schemaString = "name age"
scala> val schema = StructType(schemaString.split(" ").map(fieldName => StructField(fieldName, StringType, true)))
scala> val peopleSchemaRDD1 = sqlContext.applySchema(one, schema)
scala> val peopleSchemaRDD2 = sqlContext.applySchema(two, schema)
scala> peopleSchemaRDD1.union(peopleSchemaRDD2).collect
res34: Array[org.apache.spark.sql.Row] = Array([Alpha,1], [Beta,2], [Alpha,1], [Beta,2], [Gamma,3])
scala> peopleSchemaRDD1.unionAll(peopleSchemaRDD2).collect
res35: Array[org.apache.spark.sql.Row] = Array([Alpha,1], [Beta,2], [Alpha,1], [Beta,2], [Gamma,3])
```
Why would I prefer one over the other?
|
In Spark 1.6, the above version of `union` was removed, so `unionAll` was all that remained.
In Spark 2.0, `unionAll` was renamed to `union`, with `unionAll` kept in for backward compatibility (I guess).
In any case, no deduplication is done in either [`union` (Spark 2.0)](https://spark.apache.org/docs/2.0.0/api/python/pyspark.sql.html#pyspark.sql.DataFrame.union) or [`unionAll` (Spark 1.6)](https://spark.apache.org/docs/1.6.2/api/python/pyspark.sql.html#pyspark.sql.DataFrame.unionAll).
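For contrast, the standard SQL behavior the question expected (`UNION` deduplicates, `UNION ALL` does not) can be shown with plain SQL. A sketch using Python's `sqlite3`, reusing the Alpha/Beta/Gamma sample data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE one (name TEXT, age INTEGER);
    CREATE TABLE two (name TEXT, age INTEGER);
    INSERT INTO one VALUES ('Alpha', 1), ('Beta', 2);
    INSERT INTO two VALUES ('Alpha', 1), ('Beta', 2), ('Gamma', 3);
""")

# Standard UNION removes duplicate rows across the two inputs.
union = conn.execute("SELECT * FROM one UNION SELECT * FROM two").fetchall()

# UNION ALL keeps every row, duplicates included.
union_all = conn.execute("SELECT * FROM one UNION ALL SELECT * FROM two").fetchall()

print(len(union), len(union_all))  # 3 5
```

Spark's old `SchemaRDD.union`/`unionAll` both behaved like the second query, which is what made the naming confusing.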
|
`unionAll()` was deprecated in [Spark 2.0](https://spark.apache.org/docs/latest/api/python/pyspark.sql.html), and for all future reference, `union()` is the only recommended method.
In either case, `union` or `unionAll`, both do not do a SQL style deduplication of data. In order to remove any duplicate rows, just use `union()` followed by a `distinct()`.
|
Why would I want .union over .unionAll in Spark for SchemaRDDs?
|
[
"",
"sql",
"scala",
"apache-spark",
"union",
"union-all",
""
] |
I need to copy a set of data from TableA into TableB, like so:
```
INSERT INTO TableB(id,field1,field2)
SELECT id,field1,field2 FROM TableA
```
The above will work well, however `TableB` might already contain some of the records which I need to copy, identified by the PK `id`.
Thus, how can I add a clause to **only** insert a record if that `id` value is not already in TableB? I know a `WHERE` clause can be added at the end of the INSERT statement, but I am unsure of how to apply it for each and every record.
|
You can take a look for 3 methods comparing [NOT IN vs. NOT EXISTS vs. LEFT JOIN / IS NULL](http://explainextended.com/2009/09/18/not-in-vs-not-exists-vs-left-join-is-null-mysql/).
> The best way to search for missing values in *MySQL* is using a *LEFT*
> *JOIN* / *IS NULL* or *NOT IN* rather than *NOT EXISTS*.
---
You can use [NOT EXISTS](http://dev.mysql.com/doc/refman/5.0/en/exists-and-not-exists-subqueries.html).
```
INSERT INTO TableB (id, field1, field2)
SELECT id, field1, field2
FROM TableA t1
WHERE NOT EXISTS (
SELECT *
FROM TableB t2
WHERE t1.id = t2.id
)
```
---
Also you can use [LEFT JOIN](http://dev.mysql.com/doc/refman/5.0/en/left-join-optimization.html).
```
INSERT INTO TableB (id, field1, field2)
SELECT id, field1, field2
FROM TableA t1
LEFT JOIN TableB t2 ON t1.id = t2.id
WHERE t2.id IS NULL
```
---
Also you can use [NOT IN](http://www.w3resource.com/mysql/comparision-functions-and-operators/not-in.php).
```
INSERT INTO TableB (id, field1, field2)
SELECT id, field1, field2
FROM TableA t1
WHERE t1.id NOT IN (
SELECT t2.id
FROM TableB t2
WHERE t1.id = t2.id
)
```
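Here is a runnable sketch of the `NOT EXISTS` variant using Python's `sqlite3` (table and column names follow the question; the sample rows are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table_a (id INTEGER PRIMARY KEY, field1 TEXT);
    CREATE TABLE table_b (id INTEGER PRIMARY KEY, field1 TEXT);
    INSERT INTO table_a VALUES (1, 'a'), (2, 'b'), (3, 'c');
    INSERT INTO table_b VALUES (2, 'b');  -- id 2 was already copied
""")

# Only rows whose id is not yet in table_b get inserted; no PK violation.
conn.execute("""
    INSERT INTO table_b (id, field1)
    SELECT id, field1 FROM table_a t1
    WHERE NOT EXISTS (SELECT 1 FROM table_b t2 WHERE t2.id = t1.id)
""")

rows = conn.execute("SELECT id FROM table_b ORDER BY id").fetchall()
print(rows)  # [(1,), (2,), (3,)]
```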
|
```
INSERT INTO TableB(id,field1,field2)
SELECT T2.id,T2.field1,T2.field2 FROM TableA as T1
inner join TableB as T2 on T1.id <> T2.id
```
|
How to transfer data between tables, only where id does not exist
|
[
"",
"mysql",
"sql",
"database",
"database-design",
""
] |
I have two tables with a column `Country`, and I'm trying to get three results: one for the countries in both tables, one for countries in the first table but not in the second, and one for countries in the second table but not in the first.
`Tb1.Country`:
```
Botswana
Burkina Faso
Cameroon
Ethiopia
Ghana
Ghana
Ghana
Ghana
Ghana
Morocco
Nigeria
Nigeria
Nigeria
Sierra Leone
South Africa
South Africa
South Africa
South Africa
South Africa
South Africa
Tanzania
Zambia
India
India
India
India
Indonesia
Pakistan
Pakistan
Pakistan
Philippines
Thailand
Thailand
```
`TB2.Country`:
```
Angola
Botswana
Burkina Faso
Ethiopia
Ghana
Ghana
Ghana
Morocco
Nigeria
Nigeria
Nigeria
Rwanda
Sierra Leone
South Africa
South Africa
Tanzania
Zambia
India
India
Indonesia
Pakistan
Pakistan
Philippines
Sri Lanka
Thailand
Thailand
```
|
You could use a UNION query to do this in Access:
```
SELECT
TB1.Country,
"Country in Table 1, but not Table 2" as result
FROM
tb1
LEFT JOIN tb2 ON
tb1.country = tb2.country
WHERE tb2.country IS NULL
UNION ALL
SELECT
TB2.Country,
"Country in Table 2, but not Table 1"
FROM
tb2
LEFT JOIN tb1 ON
tb2.country = tb1.country
WHERE tb1.country IS NULL
UNION ALL
SELECT
TB2.Country,
"Country is in both tables"
FROM
tb2
INNER JOIN tb1 ON
tb2.country = tb1.country
```
That's three queries that are stuck together with a union. The first finds countries in table 1 that aren't in Table 2. The second finds countries in table 2 that aren't in table 1, and the final query only returns countries that are in both.
|
There's multiple set operations in Standard SQL, not only UNION, but also EXCEPT and INTERSECT.
Get the countries common in both tables:
```
select country from t1
intersect
select country from t2
```
Get the countries only in the first table:
```
select country from t1
except -- Oracle calls this MINUS
select country from t2
```
Change the order of the selects to get the countries in t2 not found in t1.
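A runnable sketch of `INTERSECT` and `EXCEPT` using Python's `sqlite3` (with a small made-up subset of the country data; note that set operations also deduplicate, so repeated countries collapse to one row):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t1 (country TEXT);
    CREATE TABLE t2 (country TEXT);
    INSERT INTO t1 VALUES ('Ghana'), ('Ghana'), ('India'), ('Cameroon');
    INSERT INTO t2 VALUES ('Ghana'), ('India'), ('Angola');
""")

# Countries present in both tables (duplicates collapsed).
both = conn.execute(
    "SELECT country FROM t1 INTERSECT SELECT country FROM t2 ORDER BY country"
).fetchall()

# Countries only in t1.
only_t1 = conn.execute(
    "SELECT country FROM t1 EXCEPT SELECT country FROM t2 ORDER BY country"
).fetchall()

print(both)     # [('Ghana',), ('India',)]
print(only_t1)  # [('Cameroon',)]
```

Note that MS Access does not support `INTERSECT`/`EXCEPT`, which is why the accepted answer builds the same result out of joins and a `UNION`.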
|
What is the simplest query in SQL to get the differences between 2 tables?
|
[
"",
"sql",
"ms-access",
"vba",
""
] |
I have the following queries looking at join tables:
```
select scholarship_id as scholId,
count(scholarship_id) as incompleteCount
from applicant_scholarship
group by scholarship_id
```
and
```
select scholarship_id as scholId,
count(scholarship_id) as completeCount
from applicant_comp_schol
group by scholarship_id
```
I would like the two queries to be combined to give me a single table with scholId, incompleteCount, and completeCount. Can somebody help with this?
I used the following for my solution:
```
SELECT scholId,
SUM (completeCount) AS completeCount,
SUM (incompleteCount) AS incompleteCount,
SUM (completeCount) + SUM (incompleteCount) AS totalCount
FROM ( SELECT scholarship_id AS scholId,
COUNT (scholarship_id) AS incompleteCount,
NULL AS completeCount
FROM applicant_scholarship
GROUP BY scholarship_id
UNION
SELECT scholarship_id AS scholId, NULL, COUNT (scholarship_id)
FROM applicant_comp_schol
GROUP BY scholarship_id)
GROUP BY scholId
```
|
```
select scholarship_id as scholId,
       count(scholarship_id) as incompleteCount,
       null as completeCount
from applicant_scholarship
group by scholarship_id
union
select scholarship_id as scholId,
       null,
       count(scholarship_id)
from applicant_comp_schol
group by scholarship_id
```
|
You should probably have one more table which contains the full list of records, which would be your base (to make sure you catch the "zeroes"). Assuming you do, you can keep your two queries as subqueries that you left-join against:
```
select
s.scholarship_id
,nvl(inc.num_records, 0) incompleteCount
,nvl(cpl.num_records, 0) completeCount
from
scholarships s
left join (
select
scholarship_id
,count(scholarship_id) num_records
from applicant_scholarship
group by scholarship_id
) inc on s.scholarship_id = inc.scholarship_id
left join (
select
scholarship_id
,count(scholarship_id) num_records
from applicant_comp_schol
group by scholarship_id
) cpl on s.scholarship_id = cpl.scholarship_id
```
If you don't have that actual "scholarships" table that contains everything, then you can build another subquery which unions these two tables together to get the combined unique scholarship\_id values, then use that as your base table.
|
SQL Combining two queries of different tables
|
[
"",
"sql",
"oracle",
""
] |
I have the following records:
```
ID FIRST LAST CATEGORY
123 Tom Smith Teacher
123 Tom Smith Tutor
345 Julia Brown Banker
345 Julia Brown Tutor
567 Dan Davids Fireman
567 Dan Davids Golfer
567 Dan Davids Painter
```
I want to exclude all records that has 'Tutor' as a value.
My desired output would be this:
```
ID FIRST LAST CATEGORY
567 Dan Davids Fireman
567 Dan Davids Golfer
567 Dan Davids Painter
```
How do I go about doing so?
The names come from the NAMES table and the categories come from the CATEGORY table. The primary key is the ID field.
|
```
SELECT id, first, last, category FROM names WHERE
ID NOT IN (SELECT id FROM names where category = 'Tutor')
```
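A runnable sketch of this `NOT IN` exclusion using Python's `sqlite3`, with a subset of the sample data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE names (id INTEGER, first TEXT, last TEXT, category TEXT);
    INSERT INTO names VALUES
        (123, 'Tom', 'Smith',  'Teacher'),
        (123, 'Tom', 'Smith',  'Tutor'),
        (567, 'Dan', 'Davids', 'Fireman'),
        (567, 'Dan', 'Davids', 'Golfer');
""")

# Any id that has a 'Tutor' row is excluded entirely,
# including that id's non-Tutor rows.
rows = conn.execute("""
    SELECT first, category FROM names
    WHERE id NOT IN (SELECT id FROM names WHERE category = 'Tutor')
    ORDER BY category
""").fetchall()

print(rows)  # [('Dan', 'Fireman'), ('Dan', 'Golfer')]
```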
|
Use a SELECT with a JOIN and a WHERE clause that excludes the 'Tutor' value.
You didn't describe your schema well enough for me to give an exact query, so you may have to change some naming, but this is the idea:
```
SELECT names.id, `first`, `last`, category
FROM names JOIN category on names.category_fk = category.id
WHERE category.category != 'Tutor';
```
|
Do not return any records if one value exist in a field?
|
[
"",
"sql",
""
] |
Running following statement in DB2 CLP (Command Window)
> db2 "truncate table MYSCHEMA.TABLEA immediate"
```
DB21034E The command was processed as an SQL statement because it was not a
valid Command Line Processor command. During SQL processing it returned:
SQL0969N There is no message text corresponding to SQL error "-20356" in the
message file on this workstation. The error was returned from module
"SQLNQBE2" with original tokens "MYSCHEMA.TABLEA".
```
Can someone please tell me what I'm doing wrong or what I'm missing? I'm simply trying to truncate a single table and I'm getting the above error message. I've tried it with/without quotes, with/without the schema, with/without IMMEDIATE. I've also tried it in the Command Editor (removing db2 and the quotes) and it is still not working. I'm using:
DB2/AIX64 9.7.9
Also, I have the DELETE privilege, as I am able to delete records, but I want to truncate.
Thanks in advance!
|
The version of the DB2 client you're using doesn't seem to match that of the server, this is why you cannot see the actual error message for SQLCODE -20356. If you could, you'd see this:
> The table MYSCHEMA.TABLEA cannot be truncated because DELETE triggers
> exist for the table, or the table is the parent in a referential
> constraint.
Further explanation and suggested actions can be found, as usual, [in the fine manual](http://www-01.ibm.com/support/knowledgecenter/SSEPGG_9.7.0/com.ibm.db2.luw.messages.sql.doc/doc/msql20356n.html?cp=SSEPGG_9.7.0%2F2-6-27-20-246&lang=en).
|
I had this problem recently too. In my case I had to do a COMMIT WORK right before TRUNCATE. This solved my problem. Please try and tell us if this helped.
|
How to Truncate table in DB2? Error: Not a valid Command Line Processor command
|
[
"",
"sql",
"database",
"db2",
"truncate",
"db2-luw",
""
] |
I have a column `rating` with number 0 - 1000
I want to order these rows by `rating ASC`, but I want rows with ratings between 0-99 to have the same sorting rank (same for 100-199, 200-299, etc.). Within each of these groups, I want the rows randomized when returned from the DB.
Is this possible with SQL?
Example:
This is the initial table

this is the table after a regular order by rating asc

and doing the order with the groups of steps of 100, and then randomized

|
Just divide the ratings by 100 and then use `random()`:
```
SELECT
*
FROM
T
ORDER BY Rating/100,random()
```
Assuming that `Rating` is an integer, the division automatically truncates the result to also be an integer, thus handily treating `199/100` and `101/100` as both just being `1`.
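A runnable sketch of this bucketed ordering using Python's `sqlite3` (the ratings are made up; SQLite also truncates integer division, so `199/100` and `120/100` both land in bucket `1`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (rating INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [(950,), (120,), (42,), (199,), (870,), (5,)])

# Buckets of 100 come back in ascending order; rows inside
# a bucket come back in a random order on every run.
rows = [r[0] for r in conn.execute(
    "SELECT rating FROM t ORDER BY rating / 100, random()"
).fetchall()]

buckets = [r // 100 for r in rows]
print(buckets)  # [0, 0, 1, 1, 8, 9]
```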
|
Probably try using OFFSET.
Example:
```
SELECT rating FROM ratingtable ORDER BY rating ASC LIMIT 100 OFFSET 0;
```
This will display ratings 1-100; to display ratings 101-200, just set the offset to 100.
|
SQL Order by grouping
|
[
"",
"sql",
"postgresql",
"group-by",
"sql-order-by",
""
] |
I would like to ignore any zero values on the first days of production.
```
SELECT D_DATE, PRODUCE FROM PRODUCTION
```
**Dataset**
```
Date Produce
1/1/2015 0
1/2/2015 0
1/3/2015 0
1/4/2015 6
1/5/2015 5
1/6/2015 2
1/7/2015 0
1/8/2015 1
1/9/2015 1
```
The first three days are zeros, which I would like to ignore in my result, but the 7th day should not be ignored.
**Desired Result**
```
Date Produce
1/4/2015 6
1/5/2015 5
1/6/2015 2
1/7/2015 0
1/8/2015 1
1/9/2015 1
```
|
For simplicity I assume that there is at least one day with produce > 0.
```
SELECT d_date, produce
FROM production
WHERE
d_date >= (
SELECT MIN(d_date)
FROM production
WHERE
produce != 0
)
;
```
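A runnable sketch of this approach using Python's `sqlite3`, with the question's data (dates stored as ISO strings so they compare correctly):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE production (d_date TEXT, produce INTEGER)")
conn.executemany("INSERT INTO production VALUES (?, ?)", [
    ("2015-01-01", 0), ("2015-01-02", 0), ("2015-01-03", 0),
    ("2015-01-04", 6), ("2015-01-05", 5), ("2015-01-07", 0),
])

# Keep everything from the first nonzero day onward;
# the zero on 2015-01-07 survives because it comes later.
rows = conn.execute("""
    SELECT d_date, produce FROM production
    WHERE d_date >= (SELECT MIN(d_date) FROM production WHERE produce != 0)
    ORDER BY d_date
""").fetchall()

print(rows)  # leading zero days dropped, later zero day kept
```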
|
You can use SUM as analytical function to calculate the cumulative sum of produce and filter those greater than zero.
```
select d_date, produce
from (
select
d_date,
produce,
sum(produce) over (order by d_date) cuml_produce
from production
)
where cuml_produce > 0
order by d_date;
```
|
How to ignore the first zeros in the result of a query
|
[
"",
"sql",
"oracle",
""
] |
I have a problem: I have to make my code shorter.
Code:
```
IF @result_var = @expected_value
BEGIN
INSERT INTO reports.consistencycheckhistory VALUES
(
Getdate(),
@rule_guid,
'Ok',
@result_var
)
IF CONVERT(DATE,@check_time) <> CONVERT(DATE, Sysdatetime())
BEGIN
UPDATE reports.consistencycheckrules
SET ok_days_count =@ok_days_count + 1 ,
last_check_time=@check_time
where rule_guid=@rule_guid
END
END
ELSE
BEGIN
INSERT INTO reports.consistencycheckhistory VALUES
(
Getdate(),
@rule_guid,
'Error',
@result_var
)
UPDATE reports.consistencycheckrules
SET ok_days_count=0,
last_check_time=@check_time
WHERE rule_guid=@rule_guid
END
```
There has to be only one insert and one update; that is what my boss is saying, but I don't know if it is possible.
|
```
IF @result_var = @expected_value
BEGIN
SET @Status = 'Ok'
END
ELSE
BEGIN
SET @Status = 'Error'
END
IF CONVERT(DATE,@check_time) <> CONVERT(DATE, Sysdatetime())
BEGIN
SET @ok_days_count = @ok_days_count + 1;
END
ELSE
BEGIN
SET @ok_days_count = 0;
END
INSERT INTO reports.consistencycheckhistory VALUES
(
Getdate(),
@rule_guid,
@Status,
@result_var
)
UPDATE reports.consistencycheckrules
SET ok_days_count = @ok_days_count ,
last_check_time=@check_time
where rule_guid=@rule_guid
```
|
```
IF @result_var = @expected_value
BEGIN
SET @Status = 'Ok'
END
ELSE
BEGIN
SET @Status = 'Error'
END
IF @Status = 'Ok'
BEGIN
IF CONVERT(DATE, @check_time) <> CONVERT(DATE, Sysdatetime())
BEGIN
SET @ok_days_count = @ok_days_count + 1;
END
ELSE
BEGIN
SET @ok_days_count = @ok_days_count
END
end
ELSE
BEGIN
SET @ok_days_count = 0;
end
INSERT INTO reports.consistencycheckhistory VALUES
(
Getdate(),
@rule_guid,
@Status,
@result_var
)
UPDATE reports.consistencycheckrules
SET ok_days_count = @ok_days_count,
last_check_time = @check_time
WHERE rule_guid = @rule_guid
```
This seems like the right answer
|
Making SQL statement efficient and easy to understand
|
[
"",
"sql",
"sql-server",
""
] |
I have the following data in my webinar\_timing table in a MySQL database.
start\_time and end\_time are of type datetime
```
id | webinar_id | start_time | end_time
-------------------------------------------------------------------
1 | 5 | 3/18/2015 6:00:00 PM | 3/18/2015 7:00:00 PM
2 | 5 | 3/19/2015 6:00:00 PM | 3/19/2015 7:00:00 PM
3 | 5 | 3/20/2015 6:00:00 PM | 3/20/2015 7:00:00 PM
4 | 5 | 3/21/2015 6:00:00 PM | 3/21/2015 7:00:00 PM
5 | 5 | 3/22/2015 6:00:00 PM | 3/22/2015 7:00:00 PM
6 | 11 | 3/20/2015 8:00:00 PM | 3/20/2015 9:00:00 PM
7 | 11 | 3/21/2015 8:00:00 PM | 3/21/2015 9:00:00 PM
8 | 11 | 3/22/2015 8:00:00 PM | 3/22/2015 9:00:00 PM
9 | 22 | 3/25/2015 8:00:00 PM | 3/25/2015 9:00:00 PM
10 | 22 | 3/27/2015 8:00:00 PM | 3/27/2015 9:00:00 PM
11 | 22 | 3/29/2015 8:00:00 PM | 3/27/2015 9:00:00 PM
```
Basically, for each webinar, I want the total occurrences and the number of classes completed or remaining, AND the NEXT upcoming class.
E.g. when I run this query, say on 3/21/2015 at 4:00 PM, this is the result I am expecting:
```
webinar_id | total | Classes Completed | Next Class
----------------------------------------------------------
5 | 5 | 3 | 3/21/2015 6:00:00 PM
11 | 3 | 1 | 3/21/2015 8:00:00 PM
22 | 3 | 0 | 3/25/2015 8:00:00 PM
```
OR
```
webinar_id | total | Classes Remaining | Next Class
----------------------------------------------------------
5 | 5 | 2 | 3/21/2015 6:00:00 PM
11 | 3 | 2 | 3/21/2015 8:00:00 PM
22 | 3 | 3 | 3/25/2015 8:00:00 PM
```
Based on a previous question, a fellow SO user, Peter, assisted with the following:
```
select webinar_id, count(*) AS total,
SUM(IF(end_time<NOW(), 1, 0)) AS completed,
SUM(IF(start_time>=NOW(), 1, 0)) AS remaining
from webinar_times
group by webinar_id;
```
SQL Fiddle
<http://sqlfiddle.com/#!9/c4e71/1>
Any help will be appreciated
Thanks in advance
|
Something like:
```
select webinar_id
, count(*) AS total
, count(case when end_time<NOW() then 1 end) as completed
, (select count(1)
from webinar_times y
where x.webinar_id = y.webinar_id
and y.start_time > NOW()) as remaining
, min(case when x.start_time > NOW() then x.start_time end) as next_class
from webinar_times x
group by webinar_id;
```
should do
EDIT: realized that the sub-select is unnecessary:
```
select webinar_id
, count(*) AS total
, count(case when end_time<NOW() then 1 end) as completed
, count(case when start_time>NOW() then 1 end) as remaining
, min(case when x.start_time > NOW() then x.start_time end) as next_class
from webinar_times x
group by webinar_id;
```
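A runnable sketch of this conditional aggregation using Python's `sqlite3`. The "now" is pinned to the question's example time, and the webinar 5 schedule is a three-row subset of the sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE webinar_times (webinar_id INTEGER, start_time TEXT, end_time TEXT)"
)
now = "2015-03-21 16:00"  # stands in for NOW(); ISO strings compare correctly
conn.executemany("INSERT INTO webinar_times VALUES (?, ?, ?)", [
    (5, "2015-03-20 18:00", "2015-03-20 19:00"),  # already over
    (5, "2015-03-21 18:00", "2015-03-21 19:00"),  # next class
    (5, "2015-03-22 18:00", "2015-03-22 19:00"),
])

row = conn.execute("""
    SELECT webinar_id,
           COUNT(*) AS total,
           COUNT(CASE WHEN end_time < :now THEN 1 END) AS completed,
           COUNT(CASE WHEN start_time > :now THEN 1 END) AS remaining,
           MIN(CASE WHEN start_time > :now THEN start_time END) AS next_class
    FROM webinar_times
    GROUP BY webinar_id
""", {"now": now}).fetchone()

print(row)  # (5, 3, 1, 2, '2015-03-21 18:00')
```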
|
You can make an outer join between two grouped queries, e.g. one that counts the total number of webinars and another that both counts the remaining webinars and obtains the start time of the next one:
```
SELECT * FROM (
SELECT webinar_id, COUNT(*) total
FROM webinar_times
GROUP BY webinar_id
) totals NATURAL LEFT JOIN (
SELECT webinar_id, COUNT(*) remaining, MIN(start_time) next
FROM webinar_times
WHERE start_time > NOW()
GROUP BY webinar_id
) future
```
See it on [sqlfiddle](http://sqlfiddle.com/#!9/c4e71/12/0):
```
+------------+-------+-----------+-------------------------+
| webinar_id | total | remaining | next |
+------------+-------+-----------+-------------------------+
| 6 | 5 | 1 | March, 22 2015 06:00:00 |
| 11 | 3 | 1 | March, 22 2015 07:00:00 |
| 22 | 3 | 3 | March, 25 2015 07:00:00 |
+------------+-------+-----------+-------------------------+
```
A composite index defined over `(webinar_id, start_time)` would benefit this query, and avoids the full table scans that the approach outlined in your question would otherwise require.
|
Mysql - sql query to get next class based on date
|
[
"",
"mysql",
"sql",
""
] |
I have a question very similar to another question but I can't quite figure it out. Here is the link to the original question:[Oracle/SQL - Finding records with one value excluding by similar record](https://stackoverflow.com/questions/5169884/oracle-sql-finding-records-with-one-value-excluding-by-similar-record)
So similar to that problem, I have records that will have either a 1 or null. The same person can appear with a combination of 1 and null, and in those instances I want to exclude the record altogether. For example:
```
Person Type
--------------
Bob 1
Sue 1
Bob null
Tom 1
Frank 1
Frank null
Fred null
```
I want the following returned:
```
Person Type
--------------
Sue 1
Tom 1
```
Any direction on this would be very much appreciated. I don't have much time to solve this, so even speaking conceptually will help!
The closest I came was:
```
select person from table
where type = 'S'
MINUS
select person from table
where type is null
```
But of course that doesn't work.
I can write a function if that is the only way. Thank you!
|
Try this:
```
select person, type from table
where type = '1'
and person not in (select person from table where type is null)
```
|
Apart from Mark's `NOT IN` approach, this can also be written as a `NOT EXISTS` condition:
```
select p1.person
from person p1
where p1.type = 1
and not exists (select 1
from person p2
where p1.person = p2.person
and p2.type is null)
order by p1.person;
```
It essentially says: get me every person where type is 1 but where there is no other row for this person where the type is null.
SQLFiddle example: <http://sqlfiddle.com/#!4/7623c/4>
|
Oracle/SQL - Finding records with one value excluding nulls
|
[
"",
"sql",
"oracle",
""
] |
I want to get the substring of a string like: filename\_ip\_time.pdf
I want to select filename\_ip, that is, the string up to the last occurrence of '\_'.
|
Use -1 as the start position with INSTR to start searching from the end of the string:
```
select INSTR('filename_ip_time.pdf', '_', -1) from dual
```
So if you want to select filename\_ip, you should do something like this:
```
SELECT SUBSTR ('filename_ip_time.pdf',
0,
(INSTR ('filename_ip_time.pdf', '_', -1)) - 1)
FROM DUAL
```
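The `INSTR(..., '_', -1)` call above is Oracle-specific (a negative position searches backward). The same "everything before the last underscore" logic can be sketched in plain Python for comparison:

```python
def before_last_underscore(s: str) -> str:
    """Return the part of s before its last underscore."""
    pos = s.rfind("_")  # index of the last '_', or -1 if absent
    return s[:pos] if pos != -1 else s

print(before_last_underscore("filename_ip_time.pdf"))  # filename_ip
```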
|
Another way for the sake of argument using REGEXP\_REPLACE():
```
SQL> select regexp_replace('filename_ip_time.pdf', '^(.*)_.*$', '\1') from dual;
REGEXP_REPL
-----------
filename_ip
SQL>
```
Where the regular expression can be read as:
Return the first remembered group (\1) where the pattern matches:
```
^ Start of the line
( start the first group to remember
. any character
* any number of the previous character (any character)
) end the first group, followed by
_ A literal underscore
.* followed by any number of any characters
$ until the end of the line.
```
So in essence you are returning the first part of the string up to but not including the last underscore followed by the rest of the line. To me, clearer than nested substr() and instr() but you need to get your head around regular expressions, which will give you more power ultimately when you need to do more complex pattern matching.
|
Find substring till last occurrence
|
[
"",
"sql",
"oracle",
"plsql",
""
] |
I'm trying to write a query which will pair up impressions and conversions from the same IP address in my database. I have a table of impressions and their IP addresses, and an impression can be of the type 'view' or 'conversion' (defined in the column 'name').
So basically, I need to identify groups of records with the same IP address, which contain both a view and a conversion.
After an hour of Googling I've got as far as the below, which isn't very far but should give an idea of the objects involved:
```
SELECT ip_address, name, COUNT(1) as CNT
FROM Impressions
GROUP BY ip_address, name;
```
Can anyone advise on the best way to do this?
|
You need to use the `HAVING` clause with a conditional count. You also need to remove `name` from the `GROUP BY` as this will treat your two different types separately.
```
SELECT ip_address,
COUNT(CASE WHEN Name = 'View' THEN 1 END) AS Views,
COUNT(CASE WHEN Name = 'Conversion' THEN 1 END) AS Conversions,
COUNT(1) as CNT
FROM Impressions
GROUP BY ip_address
HAVING COUNT(CASE WHEN Name = 'View' THEN 1 END) > 0
AND COUNT(CASE WHEN Name = 'Conversion' THEN 1 END) > 0;
```
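A runnable sketch of this conditional-count-plus-`HAVING` pattern using Python's `sqlite3` (the IP addresses are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE impressions (ip_address TEXT, name TEXT)")
conn.executemany("INSERT INTO impressions VALUES (?, ?)", [
    ("1.1.1.1", "View"), ("1.1.1.1", "Conversion"),
    ("2.2.2.2", "View"),        # views only: filtered out
    ("3.3.3.3", "Conversion"),  # conversions only: filtered out
])

# Only IPs with at least one View AND one Conversion survive the HAVING.
rows = conn.execute("""
    SELECT ip_address,
           COUNT(CASE WHEN name = 'View' THEN 1 END) AS views,
           COUNT(CASE WHEN name = 'Conversion' THEN 1 END) AS conversions
    FROM impressions
    GROUP BY ip_address
    HAVING COUNT(CASE WHEN name = 'View' THEN 1 END) > 0
       AND COUNT(CASE WHEN name = 'Conversion' THEN 1 END) > 0
""").fetchall()

print(rows)  # [('1.1.1.1', 1, 1)]
```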
|
You can try this:
```
SELECT
i.ip_address AS ip,
GROUP_CONCAT(DISTINCT CAST(i.name AS CHAR)) AS nameList,
SUM(IF(i.name = 'View', 1, 0)) AS viewCount,
SUM(IF(i.name = 'Conversion', 1, 0)) AS conversionCount
FROM Impressions i
GROUP BY i.ip_address;
```
You will get a comma delimited list of names containing both 'view' & 'conversion' and their separate counts.
|
SQL query to identify pairs with duplicates
|
[
"",
"mysql",
"sql",
""
] |
I need to copy information (a boolean) from the "details" table to the "user" table with the opposite value:
tables structure:
"user" table:
```
id name appear
1 John null
2 Matt null
3 jack null
4 sara null
5 sarah null
```
"details" table:
```
id user_id appaer
1 1 false
2 2 false
3 3 true
4 4 true
5 5 true
```
result "user" table should look like;
```
id name appear
1 John true
2 Matt true
3 jack false
4 sara false
5 sarah false
```
How do I do this?
Thanks.
|
You need to update with a `JOIN` and a `CASE WHEN`:
```
update
user u join details d on d.user_id = u.id
set u.appear = case when d.appaer = 'true' then 'false' else 'true' end ;
```
|
Update `appear` in the user table (when `true` then `false`, and vice versa) and join the two tables on `user_id`:
```
update user
set appear=(select case when details.appear='true' then 'false'
when details.appear='false' then 'true' end
from details join user
on details.user_id=users.id)
```
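For what it's worth, the correlated-subquery form can be verified end to end in SQLite (the question is MySQL, but this variant is portable; `details` keeps the question's `appaer` spelling, and `user` is quoted for safety):

```python
import sqlite3

# The question's tables rebuilt in SQLite to check the flipped values.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE "user" (id INTEGER, name TEXT, appear TEXT);
CREATE TABLE details (id INTEGER, user_id INTEGER, appaer TEXT);
INSERT INTO "user" VALUES (1,'John',NULL),(2,'Matt',NULL),(3,'jack',NULL),
                          (4,'sara',NULL),(5,'sarah',NULL);
INSERT INTO details VALUES (1,1,'false'),(2,2,'false'),(3,3,'true'),
                           (4,4,'true'),(5,5,'true');
""")

# Each user's appear becomes the opposite of the matching details row.
con.execute("""
UPDATE "user"
SET appear = (SELECT CASE WHEN d.appaer = 'true' THEN 'false' ELSE 'true' END
              FROM details d
              WHERE d.user_id = "user".id)
""")
rows = con.execute('SELECT id, appear FROM "user" ORDER BY id').fetchall()
print(rows)  # first two users become 'true', the rest 'false'
```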
|
MySQL: copy information from table to another with opposite value
|
[
"",
"mysql",
"sql",
""
] |
So I have a Visitor table, and a Visitor\_activity table. Say:
Visitor
```
Visitor_ID Int
Visitor_name varchar(20)
```
Visitor\_Activity
```
ID Int
Visitor_ID Int
Activity_Type char(3) -- values IN or OUT
Activity_Time datetime
```
Visitors might sign in and out multiple times in a day.
I'd like a nice query to tell me all visitors who are in: i.e. the last activity for today (on activity\_time) was an "IN" not an "OUT". Any advice much appreciated.
It's T-SQL by the way, but I think it's more of an in-principle question.
|
```
SELECT
v.*
FROM
Visitors v
JOIN Visitor_Activity va ON va.Visitor_ID = v.Visitor_ID
WHERE
va.Activity_Type = 'IN'
AND NOT EXISTS ( SELECT
*
FROM
Visitor_Activity va_out
WHERE
va_out.Visitor_ID = va.Visitor_ID
AND va_out.Activity_Type = 'OUT'
AND va_out.Activity_Time > va.Activity_Time )
```
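A quick way to convince yourself the `NOT EXISTS` approach works is to mock the tables up in SQLite (the sample rows below are invented; ISO timestamps compare correctly as strings):

```python
import sqlite3

# Invented sample data: Alice's last event is IN, Bob's last event is OUT.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Visitors (Visitor_ID INTEGER, Visitor_name TEXT);
CREATE TABLE Visitor_Activity (ID INTEGER, Visitor_ID INTEGER,
                               Activity_Type TEXT, Activity_Time TEXT);
INSERT INTO Visitors VALUES (1, 'Alice'), (2, 'Bob');
INSERT INTO Visitor_Activity VALUES
  (1, 1, 'IN',  '2015-01-01 09:00'),
  (2, 1, 'OUT', '2015-01-01 12:00'),
  (3, 1, 'IN',  '2015-01-01 13:00'),  -- Alice's last event: IN
  (4, 2, 'IN',  '2015-01-01 10:00'),
  (5, 2, 'OUT', '2015-01-01 11:00');  -- Bob's last event: OUT
""")

# Keep only IN rows that have no later OUT for the same visitor.
names = [r[0] for r in con.execute("""
SELECT v.Visitor_name
FROM Visitors v
JOIN Visitor_Activity va ON va.Visitor_ID = v.Visitor_ID
WHERE va.Activity_Type = 'IN'
  AND NOT EXISTS (SELECT 1 FROM Visitor_Activity va_out
                  WHERE va_out.Visitor_ID = va.Visitor_ID
                    AND va_out.Activity_Type = 'OUT'
                    AND va_out.Activity_Time > va.Activity_Time)
""")]
print(names)  # only Alice is still "in"
```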
|
One way to solve this is to use a correlated not exists predicate:
```
select Activity_Time, Visitor_ID
from Visitor_Activity t1
where Activity_Type = 'IN'
and not exists (
select 1
from Visitor_Activity
where Activity_Type = 'OUT'
and Visitor_ID = t1.Visitor_ID
and Activity_Time > t1.Activity_Time
and cast(Activity_Time as date) = cast(t1.Activity_Time as date)
)
```
This basically says *get all visitor\_id that have type = IN for which there doesn't exists any type = OUT record with a later time (on the same date)*.
[Sample SQL Fiddle](http://www.sqlfiddle.com/#!6/2cfd6/1)
|
SQL Last activity of given type
|
[
"",
"sql",
"sql-server",
"join",
""
] |

How can I compare the value of amount given in two columns as shown in the attached picture? 'T', 'M' and 'B' in column 1 represent trillion, million and billion respectively. I have tried the code below so far, but that's not what I need.
```
left(f.name, CHARINDEX('M',f.name))
```

**NOTE:** The currency prefix could be: `$, GBP, GB, CAD` etc:
|
You can use string manipulation to extract the numeric portion of the string and the use `CASE` statements to multiply the values to produce a numeric value that can be used to compare the results.
**Sample query to perform conversion:**
```
CREATE TABLE #temp (col1 VARCHAR(20), col2 DECIMAL (18, 2))
INSERT INTO #temp
( col1, col2 )
VALUES ('$55T Sales blah', 55000000000000),
('$30M Sales blah', 30000000),
('$0.3M Sales blah', 300000),
('$0.5B Sales blah', 500000000)
SELECT CASE SUBSTRING(col1, (CHARINDEX(' Sales', col1) - 1), 1)
WHEN 'T'
THEN CONVERT(DECIMAL(18, 2), SUBSTRING(col1, 2, (CHARINDEX(' Sales', col1) - 3))) * 1000000000000
WHEN 'B'
THEN CONVERT(DECIMAL(18, 2), SUBSTRING(col1, 2, (CHARINDEX(' Sales', col1) - 3))) * 1000000000
WHEN 'M'
THEN CONVERT(DECIMAL(18, 2), SUBSTRING(col1, 2, (CHARINDEX(' Sales', col1) - 3))) * 1000000
END AS NumericValue,
col2
FROM #temp
```
From this data, you can simply compare the 2 columns, so you can wrap it in a subquery like so:
```
CREATE TABLE #temp (col1 VARCHAR(20), col2 DECIMAL (18, 2))
INSERT INTO #temp
( col1, col2 )
VALUES ('$55T Sales blah', 55000000000000),
('$30M Sales blah', 30000000),
('$0.3M Sales blah', 300000),
('$0.5B Sales blah', 500000000)
SELECT t.NumericValue,
t.col2,
CASE WHEN t.NumericValue = t.col2
THEN 'Match'
ELSE 'No Match'
END AS Match
FROM ( SELECT CASE SUBSTRING(col1, (CHARINDEX(' Sales', col1) - 1), 1)
WHEN 'T'
THEN CONVERT(DECIMAL(18, 2), SUBSTRING(col1, 2, (CHARINDEX(' Sales', col1) - 3))) * 1000000000000
WHEN 'B'
THEN CONVERT(DECIMAL(18, 2), SUBSTRING(col1, 2, (CHARINDEX(' Sales', col1) - 3))) * 1000000000
WHEN 'M'
THEN CONVERT(DECIMAL(18, 2), SUBSTRING(col1, 2, (CHARINDEX(' Sales', col1) - 3))) * 1000000
END AS NumericValue,
col2
FROM #temp) t
DROP TABLE #temp
```
This assumes that the numeric portion is always followed by " Sales" and follows the formatting you have shown in your sample data. You can replace the " Sales" in `CHARINDEX` with a space if required.
**Output:**
```
NumericValue col2 Match
==============================================
55000000000000.00 55000000000000.00 Match
30000000.00 30000000.00 Match
300000.00 300000.00 Match
500000000.00 500000000.00 Match
```
**Updated query to account for different prefixes:**
This uses `PATINDEX('%[0-9]%', col1)` to find the position of the first numeric value.
```
CREATE TABLE #temp
(
col1 VARCHAR(20) ,
col2 DECIMAL(18, 2)
)
INSERT INTO #temp
( col1, col2 )
VALUES ( 'GBP55T Sales blah', 55000000000000 ),
( '$30M Sales blah', 30000000 ),
( 'GB0.3M Sales blah', 300000 ),
( '$0.5B Sales blah', 500000000 )
SELECT t.NumericValue ,
t.col2 ,
CASE WHEN t.NumericValue = t.col2 THEN 'Match'
ELSE 'No Match'
END AS Match
FROM ( SELECT CONVERT(DECIMAL(18, 2), SUBSTRING(col1,
PATINDEX('%[0-9]%', col1),
CHARINDEX(' ', col1)
- PATINDEX('%[0-9]%',
col1) - 1))
* CASE SUBSTRING(col1, ( CHARINDEX(' ', col1) - 1 ), 1)
WHEN 'T' THEN 1000000000000
WHEN 'B' THEN 1000000000
WHEN 'M' THEN 1000000
END AS NumericValue ,
col2
FROM #temp
) t
DROP TABLE #temp
```
Credit to [@Giorgi Nakeuri's answer](https://stackoverflow.com/a/29143192/57475) for the less verbose `CASE` statement.
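Outside of T-SQL, the same parsing idea (skip an arbitrary currency prefix, capture the number, apply the T/B/M multiplier) is easy to sketch with a regex. This Python version is not from either answer, just an illustration of the logic:

```python
import re

# Not from the answers: a regex sketch of the same parsing logic.
MULTIPLIER = {"T": 10**12, "B": 10**9, "M": 10**6}

def parse_amount(text):
    """Return the numeric value encoded in strings like 'GBP55T Sales blah'."""
    # Skip any non-digit prefix ($, GBP, GB, CAD, ...), then capture the
    # number and its T/B/M suffix.
    match = re.match(r"^[^0-9]*([0-9.]+)([TBM])\b", text)
    value, suffix = match.groups()
    return float(value) * MULTIPLIER[suffix]

print(parse_amount("GBP55T Sales blah"))  # 55000000000000.0
print(parse_amount("$30M Sales blah"))    # 30000000.0
print(parse_amount("$0.3M Sales blah"))
```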
|
This will demonstrate my comment above. It uses a combination of `LEFT(), RIGHT(), CHARINDEX() and SUBSTRING()` as well as a `CASE` to determine the multiplier value.
```
declare @Text varchar(20) = '$30M Sales in Jan 2014'
SELECT @Text [OriginalText]
--select only the letter
,Right(left(@Text, CHARINDEX(' ',@Text)-1),1) [Multiplier]
--this trims the multiplier and the dollar sign
,SUBSTRING(left(@Text, CHARINDEX(' ',@Text)-1),2, len(left(@Text, CHARINDEX(' ',@Text)-1))-2) [ValueToMultiply]
,CASE Right(left(@Text, CHARINDEX(' ',@Text)-1),1)
WHEN 'T' then 1000000000000
WHEN 'B' then 1000000000
WHEN 'M' then 1000000
END
* Convert(decimal(18,4),SUBSTRING(left(@Text, CHARINDEX(' ',@Text)-1),2, len(left(@Text, CHARINDEX(' ',@Text)-1))-2)) [FinalValue]
```
This results in:
```
OriginalText | Multiplier | ValueToMultiply | FinalValue
$30M Sales in Jan 20 | M | 30 | 30000000.0000
```
|
Extract a numeric value from a varchar column to compare with a numeric column
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I am trying to write a query where I am filtering by date and also by age. The query would be something like:
```
select *
from table1
where birthdate >= 'date1'
and dead <= 'date2'
where age >17
and <55;
```
The table columns is like this
> ID---name---lastname---birthdate---deadDate
|
Your query is really close. However, any time you want multiple conditions in a WHERE clause, you just have to use AND (or OR) in between each. Try this:
```
SELECT *
FROM myTable
WHERE birthdate >= 'date1' AND deadDate <= 'date2' AND age > 17 AND age < 55;
```
|
```
select * from table1
where birthdate >= 'date1'
and deadDate <= 'date2'
and TIMESTAMPDIFF(YEAR, birthdate, deadDate) between 18 and 54;
```
|
Select query filtering by datediff
|
[
"",
"mysql",
"sql",
"subquery",
"where-clause",
""
] |

I want to select all values from *Users* and min of `dateUsed` in Code table. How can I do that?
I've tried this:
```
SELECT u.firstName, u.lastName, u.fbId, q.dateUsed, u.codesLeft
FROM Users u
inner join Code q on u.Id = q.userId
```
But it's selecting all values from the Code and Users tables.
P.S. Adding distinct has no effect
|
> Adding distinct has no effect
As a rule of thumb, `DISTINCT` helps for single-column `SELECT`s. With multiple columns you need to go for "big guns" - the `GROUP BY` clause.
In order for that to work you need to make sure that each item that you select is either a `GROUP BY` column, or has a suitable aggregation function:
```
SELECT u.firstName, u.lastName, u.fbId, MIN(q.dateUsed) as dateUsed, u.codesLeft
FROM Users u
INNER JOIN Code q ON u.Id = q.userId
GROUP BY u.firstName, u.lastName, u.fbId, u.codesLeft
```
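As a quick sanity check, the `MIN` + `GROUP BY` pattern can be run against made-up rows in SQLite (the real data is only shown as an image in the question):

```python
import sqlite3

# Invented sample rows: one user with three Code entries.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Users (Id INTEGER, firstName TEXT, lastName TEXT,
                    fbId TEXT, codesLeft INTEGER);
CREATE TABLE Code (userId INTEGER, dateUsed TEXT);
INSERT INTO Users VALUES (1, 'Ann', 'Lee', 'fb1', 3);
INSERT INTO Code VALUES (1, '2015-03-02'), (1, '2015-01-15'), (1, '2015-02-20');
""")

# MIN(q.dateUsed) collapses the three Code rows into the earliest one.
rows = con.execute("""
SELECT u.firstName, u.lastName, u.fbId, MIN(q.dateUsed) AS dateUsed, u.codesLeft
FROM Users u
INNER JOIN Code q ON u.Id = q.userId
GROUP BY u.firstName, u.lastName, u.fbId, u.codesLeft
""").fetchall()
print(rows)  # one row per user, carrying the earliest dateUsed
```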
|
Scalar Sub Query
```
SELECT u.firstName,
u.lastName,
u.fbId,
(SELECT TOP 1 dateUsed
FROM Code as q WHERE u.Id = q.userId
        ORDER BY dateUsed ASC) AS dateUsed,
u.codesLeft
FROM Users u
```
OR CTE
```
;WITH SEQ as (
SELECT userId, MIN(DateUsed) as FirstDate, MAX(DateUsed) as lastDate
FROM Code GROUP BY userID)
SELECT u.*, q.FirstDate, q.LastDate
FROM Users as u
JOIN SEQ as q ON u.ID = q.userID
```
|
Select all values from first table and ONLY first value from second table
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I'm not sure it's possible but I would like to select the distinct value of two tables columns.
Consider table1 like:
```
╔══════╗
║ col1 ║
╠══════╣
║ AAA ║
║ AAA ║
║ BBB ║
║ BBB ║
╚══════╝
```
and table2 like:
```
╔══════╗
║ col1 ║
╠══════╣
║ AAA ║
║ BBB ║
║ CCC ║
║ CCC ║
╚══════╝
```
The output I would like would be:
```
AAA
BBB
CCC
```
Obviously something like:
```
SELECT DISTINCT table1.col1 , table2.col1 FROM table1 , table2;
```
won't work.
[SQLFiddle](http://sqlfiddle.com/#!5/ea77f/4)
|
A simple `UNION` would do the tricks
```
SELECT col1 FROM table1
UNION
SELECT col1 FROM table2;
```
There is no need to use the `DISTINCT` keyword, as `UNION` already handles duplicates.
FYI, if you use `UNION ALL`, duplicates are no longer removed, so you would then need `DISTINCT`.
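Using the question's own sample data, the deduplication is easy to verify in SQLite:

```python
import sqlite3

# The question's two tables, to verify that UNION removes duplicates
# both within and across the tables.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table1 (col1 TEXT);
CREATE TABLE table2 (col1 TEXT);
INSERT INTO table1 VALUES ('AAA'), ('AAA'), ('BBB'), ('BBB');
INSERT INTO table2 VALUES ('AAA'), ('BBB'), ('CCC'), ('CCC');
""")
rows = [r[0] for r in con.execute(
    "SELECT col1 FROM table1 UNION SELECT col1 FROM table2")]
print(rows)  # three distinct values in total
```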
|
Just `union` will do the job, as it will filter out the duplicate records.
```
SELECT col1 FROM table1 UNION SELECT col1 FROM table2
```
|
How can I select the distinct value of two tables?
|
[
"",
"mysql",
"sql",
"distinct",
""
] |
This is very weird, so I could use a pair of spare eyes to understand what's happening.
So I have this query:
```
WITH x as (
SELECT num_aula, tipo_aula, min(abs(capienza-1)) score
FROM aula
JOIN (
select num_aula, tipo_aula
from aula
where tipo_aula = 'Laboratorio'
minus
select num_aula, tipo_aula
from occr_lezione
where to_char(Data_inizio_occr_lezione,'hh24:mi') = '12:30'
and Nome_sede = 'Centro Direzionale'
and Giorno_lezione = 2
)
USING(num_aula,tipo_aula)
GROUP BY num_aula, tipo_aula
ORDER BY score asc
)
SELECT *
FROM x
```
which return this result set:
```
NUM TIPO_AULA SCORE
--- -------------------- ----------
1 Laboratorio 35
2 Laboratorio 35
```
Which is the desired result.
Now, if I add this line to the query:
```
WHERE rownum = 1;
```
which should return the first row of the table, I get this:
```
NUM TIPO_AULA SCORE
--- -------------------- ----------
2 Laboratorio 35
```
How is that possible?
|
I think the query you really want is
```
WITH x as (
    SELECT num_aula, tipo_aula, min(abs(capienza-1)) score
    FROM aula
    JOIN (
        select num_aula, tipo_aula
        from aula
        where tipo_aula = 'Laboratorio'
        minus
        select num_aula, tipo_aula
        from occr_lezione
        where to_char(Data_inizio_occr_lezione,'hh24:mi') = '12:30'
        and Nome_sede = 'Centro Direzionale'
        and Giorno_lezione = 2
    )
    USING(num_aula,tipo_aula)
    GROUP BY num_aula, tipo_aula
), y as (
    SELECT x.*,
           row_number() over(order by score asc) as seq
    FROM x
)
SELECT *
FROM y
WHERE y.seq = 1;
```
The `ROWNUM` keyword does not behave as you think, see [this article about rownum](http://www.oracle.com/technetwork/issue-archive/2006/06-sep/o56asktom-086197.html).
To give more details, the `ROWNUM` are assigned before any order is given to the result set.
If you really want the correct result using the `ROWNUM` keyword, you can achieve it with a subquery that orders first, so that `ROWNUM` is assigned to the already-ordered result set. However, I would prefer the first approach as it is more readable in my opinion, but you are free to prefer this one.
```
SELECT *
FROM (SELECT num_aula,
tipo_aula, min(abs(capienza-1)) score
FROM aula
JOIN (
select num_aula, tipo_aula
from aula
where tipo_aula = 'Laboratorio'
minus
select num_aula, tipo_aula
from occr_lezione
where to_char(Data_inizio_occr_lezione,'hh24:mi') = '12:30'
and Nome_sede = 'Centro Direzionale'
and Giorno_lezione = 2
) USING(num_aula,tipo_aula)
GROUP BY num_aula, tipo_aula
ORDER BY score asc) x
WHERE rownum = 1;
```
|
The rownum is assigned to the row before any sorting takes place, so I suspect that removing your order by and running without the where rownum = 1 will return:
```
NUM TIPO_AULA SCORE
--- -------------------- ----------
2 Laboratorio 35
1 Laboratorio 35
```
There is a good article on rownum that can be found here:
<http://www.oracle.com/technetwork/issue-archive/2006/06-sep/o56asktom-086197.html>
|
Why does using "where rownum = 1" not select the first ordered row?
|
[
"",
"sql",
"oracle",
""
] |
I am trying to do SUM() in a multi-currency setup. The following will demonstrate the problem that I am facing:-
```
Customer
-------------------------
Id | Name
1 | Mr. A
2 | Mr. B
3 | Mr. C
4 | Mr. D
-------------------------
Item
-------------------------
Id | Name | Cost | Currency
1 | Item 1 | 5 | USD
2 | Item 2 | 2 | EUR
3 | Item 3 | 10 | GBP
4 | Item 4 | 5 | GBP
5 | Item 5 | 50 | AUD
6 | Item 6 | 20 | USD
7 | Item 3 | 10 | EUR
-------------------------
Order
-------------------------
User_Id | Product_Id
1 | 1
2 | 1
1 | 2
3 | 3
1 | 5
1 | 7
1 | 5
2 | 6
3 | 4
4 | 2
-------------------------
```
Now, I want the output of a SELECT query that lists the Customer Name and the total amount worth of products purchased as:-
```
Customer Name | Amount
Mr. A | Multiple-currencies
Mr. B | 25 USD
Mr. C | 15 GBP
Mr. D | 2 EUR
```
So basically, I am looking for a way to add the cost of multiple products under the same customer, if all of them have the same currency, else simply show 'multiple-currencies'. Running the following query will not help:-
```
SELECT Customer.Name, SUM(Item.Amount) FROM Customer
INNER JOIN Order ON Order.User_Id = Customer.Id
INNER JOIN Item ON Item.Id = Order.Product_Id
GROUP BY Customer.Name
```
What should my query be? I am using **Sqlite**
|
Ok, I managed to solve the problem this way:-
```
SELECT innerQuery.Name AS Name,
       (CASE WHEN innerQuery.Currencies = 1
             THEN (innerQuery.Amount || ' ' || innerQuery.Currency)
             ELSE 'Multiple-Currencies' END) AS Amount
FROM
(SELECT Customer.Name AS Name, SUM(Item.Amount) AS Amount, COUNT(DISTINCT Item.Currency) AS Currencies, Item.Currency AS Currency FROM Customer
INNER JOIN Order ON Order.User_Id = Customer.Id
INNER JOIN Item ON Item.Id = Order.Product_Id
GROUP BY Customer.Name) innerQuery
```
|
I would suggest two output columns, one for the currency and one for the amount:
```
SELECT c.Name,
(case when max(currency) = min(currency) then sum(amount)
end) as amount,
(case when max(currency) = min(currency) then max(currency)
else 'Multiple Currencies'
end) as currency
FROM Customer c INNER JOIN
Order o
ON o.User_Id = c.Id INNER JOIN
     Item i
ON i.Id = o.Product_Id
GROUP BY c.Name
```
If you want, you can concatenate these into a single string column. I just prefer to have the information in two different columns for something like this.
The above is standard SQL.
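Since the question is about SQLite, this can be run as-is against the sample data (with `Order` quoted, as it is a reserved word):

```python
import sqlite3

# The question's sample data, rebuilt in SQLite.
con = sqlite3.connect(":memory:")
con.executescript('''
CREATE TABLE Customer (Id INTEGER, Name TEXT);
CREATE TABLE Item (Id INTEGER, Name TEXT, Cost REAL, Currency TEXT);
CREATE TABLE "Order" (User_Id INTEGER, Product_Id INTEGER);
INSERT INTO Customer VALUES (1,'Mr. A'),(2,'Mr. B'),(3,'Mr. C'),(4,'Mr. D');
INSERT INTO Item VALUES
  (1,'Item 1',5,'USD'),(2,'Item 2',2,'EUR'),(3,'Item 3',10,'GBP'),
  (4,'Item 4',5,'GBP'),(5,'Item 5',50,'AUD'),(6,'Item 6',20,'USD'),
  (7,'Item 3',10,'EUR');
INSERT INTO "Order" VALUES
  (1,1),(2,1),(1,2),(3,3),(1,5),(1,7),(1,5),(2,6),(3,4),(4,2);
''')

# If min(currency) = max(currency), the customer bought in one currency only.
rows = con.execute('''
SELECT c.Name,
       CASE WHEN max(i.Currency) = min(i.Currency) THEN sum(i.Cost) END AS amount,
       CASE WHEN max(i.Currency) = min(i.Currency) THEN max(i.Currency)
            ELSE 'Multiple Currencies' END AS currency
FROM Customer c
JOIN "Order" o ON o.User_Id = c.Id
JOIN Item i ON i.Id = o.Product_Id
GROUP BY c.Name
ORDER BY c.Name
''').fetchall()
print(rows)  # Mr. A gets a NULL amount and 'Multiple Currencies'
```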
|
SUM in multi-currency
|
[
"",
"sql",
"sqlite",
""
] |
I need to copy "user1\_id" & "user2\_id" from the "battle" table to a new table called "user\_battle".
tables structure:
"battle" table (structure):
```
id user1_id user2_id
1 1 2
2 1 3
3 2 3
4 2 4
5 1 4
```
"user\_battle" table (structure): (should be filled like this)
```
battle_id user_id
1 1
1 2
2 1
2 3
3 2
3 3
4 2
4 4
5 1
5 4
```
thanks,
|
I think you can use `union all` to get the data you want:
```
select id, user1_id
from battle
union all
select id, user2_id
from battle;
```
You can put `insert into user_battle(battle_id, user_id)` before this statement for the `insert`.
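The whole `INSERT ... SELECT ... UNION ALL` round trip can be checked in SQLite with the question's data:

```python
import sqlite3

# The question's battle table; INSERT ... SELECT with UNION ALL unpivots
# the two user columns into one row each.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE battle (id INTEGER, user1_id INTEGER, user2_id INTEGER);
CREATE TABLE user_battle (battle_id INTEGER, user_id INTEGER);
INSERT INTO battle VALUES (1,1,2),(2,1,3),(3,2,3),(4,2,4),(5,1,4);
""")
con.execute("""
INSERT INTO user_battle (battle_id, user_id)
SELECT id, user1_id FROM battle
UNION ALL
SELECT id, user2_id FROM battle
""")
rows = con.execute(
    "SELECT battle_id, user_id FROM user_battle ORDER BY battle_id, user_id"
).fetchall()
print(rows)  # ten rows, two per battle
```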
|
`insert into` allows you to use a `select` to effectively "copy" data from one table to another.
By using `union` you can get separate rows for `user1_id` and `user2_id`.
```
insert into user_battle (battle_id, user_id)
(
select id, user1_id from battle
union
select id, user2_id from battle
)
```
|
MySQL: copy data from table to another
|
[
"",
"mysql",
"sql",
""
] |