| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I'm trying to figure out what type of `SELECT` I need to write in order to pull a string that starts after a particular character and ends after another character. My data appears as follows:
```
Path
------------------------------------------------------
\1231254-0000001000-14671899.PDF
\74-0000001001-14672073.PDF
\65551-0000001001-14672929.PDF
```
And I need to return the following: the characters after the second dash and before the period.
```
ID
------------------------------------------------------
14671899
14672073
14672929
```
I know I need to use some variation of `LEN` and such, but I'm having a hard time grasping how best to utilize them considering the path lengths can be different.
Any help would be greatly appreciated!
|
You could use `PARSENAME(REPLACE())` to do this:
```
SELECT PARSENAME(REPLACE(path, '-', '.'), 2) FROM tableName
```
`PARSENAME()` takes a string, splits it on the period character `.`, and returns the token at the position given by the second parameter, counting from the right (note it handles at most four period-separated parts).
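As a sanity check on the expected output, the same extraction (the text after the last dash, minus the extension) can be sketched outside the database; this is plain Python, not T-SQL:

```python
# Mirrors PARSENAME(REPLACE(path, '-', '.'), 2): take the piece after the
# last dash, then drop the file extension.
def extract_id(path):
    after_dash = path.rsplit('-', 1)[1]   # e.g. '14671899.PDF'
    return after_dash.rsplit('.', 1)[0]   # e.g. '14671899'

paths = [
    r'\1231254-0000001000-14671899.PDF',
    r'\74-0000001001-14672073.PDF',
    r'\65551-0000001001-14672929.PDF',
]
print([extract_id(p) for p in paths])  # ['14671899', '14672073', '14672929']
```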
|
You could use `substring` and `charindex` if the format of the column is consistent.
[Fiddle](http://www.sqlfiddle.com/#!6/1e5568/2)
```
select reverse (
substring (
reverse(path),
charindex('.',reverse(path))+1,
charindex('-', reverse(path))- charindex('.',reverse(path))-1
)
)
from t
```
|
Pull text following and preceded by two differing delimiters of a variable length string?
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have two tables:
```
create table FOO (
id integer primary key
);
create table BAR (
id primary key,
fooId integer -- yeah, this *should* be a foreign key
);
insert into FOO values (10);
insert into FOO values (11); -- no BAR
insert into BAR values (20, 10); -- OK
insert into BAR values (21, 3); -- No FOO
insert into BAR values (22, 10); -- duplicates are OK
```
For some reason, they don't have a FK relation even though they should. When I create the relation, I get an error because some of the relations are broken.
I'm looking for a SQL query which lists the primary keys of both tables that have a broken relation to the other one, i.e. `FOO`s which aren't used in any `BAR`s and `BAR`s which contain illegal `fooId`s. In the example, the query should return:
```
fooId | barId
11    | NULL
NULL  | 21
```
|
Do a `UNION ALL` with two `NOT IN`'s:
```
select id, null from FOO where id not in (select fooId from bar where fooId is not null)
union all
select null, id from BAR where fooId not in (select id from foo where id is not null)
```
Or, a `FULL OUTER JOIN`:
```
select distinct f.id, b.id
from foo f
full outer join bar b on f.id = b.fooid
where f.id is null
or b.id is null
```
|
Just use `not exists` (or `not in` or `left join` with a `where` clause):
```
select b.*
from bar b
where not exists (select 1 from foo f where f.id = b.fooid);
```
The only *broken* relationships are those where `bar.fooid` does not match a valid `foo.id`. Having a value in `foo` with no corresponding value in `bar` is not broken.
But to find `foo.id` values that are not used in `bar`, a very similar query can be used:
```
select f.*
from foo f
where not exists (select 1 from bar b where f.id = b.fooid);
```
|
List problems in foreign key relations
|
[
"",
"sql",
"foreign-key-relationship",
""
] |
*I have looked at related Qs on Stack*.
I am trying to sort my query to display months in ascending order. A similar post on Stack said that I must specify "ASC", but this doesn't work for me. I have written an ORDER BY, and I cannot figure out why it is not working.
```
SELECT DATENAME( MONTH,(Submission.SubmissionDate)) AS [Date]
FROM Submission
GROUP BY DATENAME( MONTH,(Submission.SubmissionDate)) ORDER BY [Date] ASC;
```
This is the output:
```
Month
August
February
September
```
Note: this is just a part of my query; I didn't think it would be necessary to show the rest, which relates to other attributes from my table (not month related)
Edit: This is my Entire Query: (The Initial One)
```
SELECT DATENAME( MONTH,(Submission.SubmissionDate)) AS [Date],
SUM( CASE WHEN Submission.Status='Under review' THEN 1 ELSE 0 END) [Under_Review],
SUM( CASE WHEN Submission.Status='Accepted' THEN 1 ELSE 0 END) [Accepted],
SUM( CASE WHEN Submission.Status='Rejected' THEN 1 ELSE 0 END) [Rejected],
SUM( CASE WHEN Submission.Status='In print' THEN 1 ELSE 0 END) [In_print],
SUM( CASE WHEN Submission.Status='Published' THEN 1 ELSE 0 END) [Published]
FROM Submission INNER JOIN ((Faculty INNER JOIN School ON Faculty.FacultyID = School.[FacultyID]) INNER JOIN (Researcher INNER JOIN ResearcherSubmission ON Researcher.ResearcherID = ResearcherSubmission.ResearcherID) ON School.SchoolID = Researcher.SchoolID) ON Submission.SubmissionID = ResearcherSubmission.SubmissionID
GROUP BY DATENAME( MONTH,(Submission.SubmissionDate))
ORDER BY DATENAME( MONTH,(Submission.SubmissionDate));
```
|
A simple option is to add `MONTH(Submission.SubmissionDate)` to your `group by` clause, and `order by` that as well:
```
SELECT DATENAME( MONTH,(Submission.SubmissionDate)) AS [Date]
FROM Submission
GROUP BY MONTH(Submission.SubmissionDate), DATENAME( MONTH,(Submission.SubmissionDate))
ORDER BY MONTH(Submission.SubmissionDate)
```
This will work for your real query as well.
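The root cause is that `DATENAME(MONTH, ...)` returns a string, so ordering by it is alphabetical; ordering by the month number gives calendar order. The difference, illustrated in plain Python:

```python
import calendar

# Month number for each English month name, e.g. {'January': 1, ..., 'December': 12}
month_no = {name: i for i, name in enumerate(calendar.month_name) if name}

months = ['August', 'February', 'September']
print(sorted(months))                    # alphabetical: ['August', 'February', 'September']
print(sorted(months, key=month_no.get))  # calendar order: ['February', 'August', 'September']
```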
|
```
SELECT DATENAME( MONTH, Submission.SubmissionDate) AS [Date]
FROM Submission
ORDER BY datepart(mm,Submission.SubmissionDate)
```
You don't need a `group by` (for the query shown). Also, when you `order by` the month *name*, results come back in alphabetical order of the name rather than calendar order. Note that aliases defined in the `select` list can be used in `order by`, but not in the `where`, `group by`, or `having` clauses.
Edit: The problem is with the `join` conditions. You should correct them as per the comments in line.
```
SELECT DATENAME( MONTH,(Submission.SubmissionDate)) AS [Date],
SUM( CASE WHEN Submission.Status='Under review' THEN 1 ELSE 0 END) [Under_Review],
SUM( CASE WHEN Submission.Status='Accepted' THEN 1 ELSE 0 END) [Accepted],
SUM( CASE WHEN Submission.Status='Rejected' THEN 1 ELSE 0 END) [Rejected],
SUM( CASE WHEN Submission.Status='In print' THEN 1 ELSE 0 END) [In_print],
SUM( CASE WHEN Submission.Status='Published' THEN 1 ELSE 0 END) [Published]
FROM Faculty
INNER JOIN School ON Faculty.FacultyID = School.[FacultyID]
INNER JOIN Researcher ON School.SchoolID = Researcher.SchoolID
INNER JOIN ResearcherSubmission ON Researcher.ResearcherID = ResearcherSubmission.ResearcherID
INNER JOIN SUBMISSION ON Submission.SubmissionID = ResearcherSubmission.SubmissionID
GROUP BY DATENAME( MONTH,(Submission.SubmissionDate))
ORDER BY DATEPART( MONTH,(Submission.SubmissionDate))
```
|
sorting month in ascending order
|
[
"",
"sql",
"sql-server",
""
] |
I have a table with records and a period of time for each record, like reservations for instance. So my records look like this:
```
Table-reservations
id room datefrom dateto
1 'one' '2015-09-07' '2015-09-12'
2 'two' '2015-08-11' '2015-09-02'
3 'three' '2015-06-11' '2015-06-14'
4 'two' '2015-07-30' '2015-08-10'
5 'four' '2015-06-01' '2015-06-23'
6 'one' '2015-03-21' '2015-03-25'
...
n 'nth' '2015-06-01' '2015-07-03'
```
Also there is a table with rooms containing an ID, a roomnumber and a roomtype, like this:
```
Table-rooms
idrooms room roomtype
1 'one' 'simple'
2 'two' 'simple'
3 'three' 'double'
...
nx 'nth' 'simple'
```
As you can see some rooms appear multiple times, but with different periods, because they are booked on various periods.
What I need to obtain through SQL is a list of rooms that are available in a given period of time.
So something like (pseudocode):
```
Select room from table where there is no reservation on that room between 2015-08-13 and 2015-08-26
```
How can I do this?
So I will have a fromdate and a todate and I will have to use them in a query.
Can any of you guys give me some pointers please?
Right now I use the following SQL to obtain a list of rooms that are available NOW:
```
select * from rooms
where idrooms not in
(
select idroom from rezervations where
((date(now())<=dateto and date(now())>=datefrom)or(date(now())<=dateto and date(now())<=datefrom))
)
order by room
```
|
You'll want to check that records don't exist where 'date from' is less than or equal to the end date in your range and 'date to' is greater than or equal to the start date in your range.
```
select t1.room
from reservations t1
where not exists (
select *
from reservations t2
where t2.room = t1.room
and t2.datefrom <= '2015-08-26'
and t2.dateto >= '2015-08-13'
)
group by room
```
You can try it out here: <http://sqlfiddle.com/#!9/cbd59/5>
I'm new to the site, so it won't let me post a comment, but I think the problem on the first answer is that the operators should be reversed.
As mentioned in a previous comment, this is only good if all of the rooms have a reservation record. If not, better to select from your rooms table like this: <http://sqlfiddle.com/#!9/0b96e/1>
```
select room
from rooms
where not exists (
select *
from reservations
where rooms.room = reservations.room
and reservations.datefrom <= '2015-08-26'
and reservations.dateto >= '2015-08-13'
)
```
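The overlap test reads: a room is free unless some reservation starts on or before the range's end and ends on or after the range's start. Checked against the question's sample rows with Python's built-in sqlite3 (dates compared as ISO strings):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
create table rooms (idrooms integer primary key, room text);
create table reservations (id integer primary key, room text,
                           datefrom text, dateto text);
insert into rooms values (1,'one'),(2,'two'),(3,'three'),(4,'four');
insert into reservations values
  (1,'one',  '2015-09-07','2015-09-12'),
  (2,'two',  '2015-08-11','2015-09-02'),
  (3,'three','2015-06-11','2015-06-14'),
  (4,'two',  '2015-07-30','2015-08-10'),
  (5,'four', '2015-06-01','2015-06-23'),
  (6,'one',  '2015-03-21','2015-03-25');
""")

free = con.execute("""
    select room from rooms
    where not exists (
        select * from reservations
        where rooms.room = reservations.room
          and reservations.datefrom <= '2015-08-26'
          and reservations.dateto   >= '2015-08-13'
    )
    order by room
""").fetchall()
print([r[0] for r in free])  # ['four', 'one', 'three']  ('two' is booked 08-11..09-02)
```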
|
This might be easier to understand.
Assuming you have another table for rooms.
```
SELECT *
FROM rooms
WHERE NOT EXISTS (SELECT id
FROM reservations
WHERE reservations.room = rooms.id
AND datefrom >= '2015-08-13'
AND dateto <= '2015-08-26')
```
|
Select * from table where desired period does not overlap with existing periods
|
[
"",
"mysql",
"sql",
""
] |
I have a row in a database table in the following form:
```
ID | Amount | From | To
5 | 5439 | 01.01.2014 | 05.01.2014
```
I want to split this up into one row per month using SQL/T-SQL:
```
Amount | From
5439 | 01.01.2014
5439 | 02.01.2014
5439 | 03.01.2014
5439 | 04.01.2014
5439 | 05.01.2014
```
I, sadly, cannot change the database source, and I would preferably do this in SQL, as I am trying to use the result of this query with another table in PowerPivot.
Edit: Upon requests on my code, I have tried the following:
```
declare @counter int
set @counter = 0
WHILE @counter < 6
begin
set @counter = @counter +1
select amount, DATEADD(month, @counter, [From]) as Dato
FROM [database].[dbo].[table]
end
```
This, however, returns several separate result sets.
|
You can use a [tally table](http://www.sqlservercentral.com/articles/T-SQL/62867/) to generate all dates.
[**SQL Fiddle**](http://sqlfiddle.com/#!6/3c693/1/0)
```
;WITH E1(N) AS(
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1
),
E2(N) AS(SELECT 1 FROM E1 a CROSS JOIN E1 b),
E4(N) AS(SELECT 1 FROM E2 a CROSS JOIN E2 b),
Tally(N) AS(
SELECT TOP(SELECT MAX(DATEDIFF(DAY, [From], [To])) + 1 FROM yourTable)
ROW_NUMBER() OVER(ORDER BY (SELECT NULL))
FROM E4
)
SELECT
yt.Id,
yt.Amount,
[From] = DATEADD(DAY, N-1, yt.[From])
FROM yourTable yt
CROSS JOIN Tally t
WHERE
DATEADD(DAY, N-1, yt.[From]) <= yt.[To]
```
[Simplified explanation on Tally Table](https://stackoverflow.com/questions/32096103/selecting-n-rows-in-sql-server/32096374#32096374)
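If a recursive CTE is acceptable instead of a tally table (the tally approach usually performs better for large ranges, but the recursive form is shorter), the same day-by-day expansion can be written directly. A sketch run against SQLite from Python, with the question's row converted to ISO dates:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("create table yourTable (id integer, amount integer, datefrom text, dateto text)")
con.execute("insert into yourTable values (5, 5439, '2014-01-01', '2014-01-05')")

rows = con.execute("""
    with recursive expanded(amount, d, dateto) as (
        select amount, datefrom, dateto from yourTable
        union all
        select amount, date(d, '+1 day'), dateto from expanded where d < dateto
    )
    select amount, d from expanded order by d
""").fetchall()
print(rows)  # 5 rows, (5439, '2014-01-01') through (5439, '2014-01-05')
```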
|
You need a tally table with "running numbers". This may be a function (I posted one shortly here: <https://stackoverflow.com/a/32096945/5089204>) or a physical table (I posted an example here: <https://stackoverflow.com/a/32474751/5089204>) or a CTE to do this "on the fly" (the table example does it this way).
If you go with the posted function it could be like this:
```
declare @startDate DATETIME={d'2015-09-01'};
declare @EndDate DATETIME={d'2015-09-10'};
select DATEADD(DAY, Nmbr,@startDate)
from dbo.GetRunningNumbers(DATEDIFF(DAY,@startDate,@endDate)+1,0);
```
|
Split row into several with SQL statement
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I'm using a v12 server in Azure SQL Database, and I have the following table:
```
CREATE TABLE [dbo].[AudienceNiches](
	[Id] [bigint] IDENTITY(1,1) NOT NULL,
	[WebsiteId] [nvarchar](128) NOT NULL,
	[VisitorId] [nvarchar](128) NOT NULL,
	[VisitDate] [datetime] NOT NULL,
	[Interest] [nvarchar](50) NULL,
	[Gender] [float] NULL,
	[AgeFrom18To24] [float] NULL,
	[AgeFrom25To34] [float] NULL,
	[AgeFrom45To54] [float] NULL,
	[AgeFrom55To64] [float] NULL,
	[AgeFrom65Plus] [float] NULL,
	[AgeFrom35To44] [float] NULL,
	CONSTRAINT [PK_AudienceNiches] PRIMARY KEY CLUSTERED
	(
		[Id] ASC
	) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
)
```
I'm executing this query: (UPDATED QUERY)
```
select a.interest, count(interest) from (
    select visitorid, interest
    from audienceNiches
    WHERE WebsiteId = @websiteid
    AND VisitDate >= @startdate
    AND VisitDate <= @enddate
    group by visitorid, interest) as a
group by a.interest
```
And I have the following indexes (all ASC):
```
idx_WebsiteId_VisitDate_VisitorId
idx_WebsiteId_VisitDate
idx_VisitorId
idx_Interest
```
The problem is that my query returns approximately 18K rows and takes 5 seconds. The whole table has 8.8M records, and if I expand the range a little, the time increases a lot. So, what would be the best index for this query? What am I missing?
|
The best index for this query is a composite index on these columns, in this order:
* WebsiteId
* VisitDate
* Interest
* VisitorId
This allows the query to be answered entirely from the index. SQL Server can range-scan on (`WebsiteId`, `VisitDate`), then exclude null `Interest`, and finally count distinct `VisitorId`s, all from the index. The index entries will be in the correct order for these operations to run efficiently.
|
It's difficult for me to write SQL without having the data to test against, but see if this gives the results you're looking for with a better execution time.
```
SELECT interest, count(distinct visitorid)
FROM audienceNiches
WHERE WebsiteId = @websiteid
AND VisitDate between @startdate and @enddate
AND interest is not null
GROUP BY interest
```
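To sanity-check the rewritten query's semantics, here is a tiny round trip with Python's built-in sqlite3, using made-up rows that cover the relevant cases (repeat visitor, NULL interest, other website):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("""create table audienceNiches
               (WebsiteId text, VisitorId text, VisitDate text, Interest text)""")
con.executemany("insert into audienceNiches values (?,?,?,?)", [
    ('w1', 'v1', '2020-01-05', 'cars'),
    ('w1', 'v1', '2020-01-06', 'cars'),  # same visitor, counted once
    ('w1', 'v2', '2020-01-07', 'cars'),
    ('w1', 'v2', '2020-01-08', None),    # null interest excluded
    ('w2', 'v3', '2020-01-09', 'cars'),  # other website excluded
])

rows = con.execute("""
    select Interest, count(distinct VisitorId)
    from audienceNiches
    where WebsiteId = 'w1'
      and VisitDate between '2020-01-01' and '2020-01-31'
      and Interest is not null
    group by Interest
""").fetchall()
print(rows)  # [('cars', 2)]
```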
|
How to speed up current query with index
|
[
"",
"sql",
"sql-server",
"t-sql",
"indexing",
"azure-sql-database",
""
] |
I have a table called `results` with 5 columns.
I'd like to use the `title` column to find rows matching, say, `WHERE title like '%for sale%'`, and then list the most popular words in that column. One would be `for` and another would be `sale`, but I want to see what other words correlate with these.
Sample data:
```
title
cheap cars for sale
house for sale
cats and dogs for sale
iphones and androids for sale
cheap phones for sale
house furniture for sale
```
Results (single words):
```
for 6
sale 6
cheap 2
and 2
house 2
furniture 1
cars 1
etc...
```
|
You can extract words with some string manipulation. Assuming you have a numbers table and that words are separated by single spaces:
```
select substring_index(substring_index(r.title, ' ', n.n), ' ', -1) as word,
count(*)
from results r join
numbers n
on n.n <= length(title) - length(replace(title, ' ', '')) + 1
group by word;
```
If you don't have a numbers table, you can construct one manually using a subquery:
```
from results r join
(select 1 as n union all select 2 union all select 3 union all . . .
) n
. . .
```
The SQL Fiddle (courtesy of @GrzegorzAdamKowalski) is [here](http://sqlfiddle.com/#!9/b0749/6).
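The expected counts are easy to double-check outside the database; the same split-and-count logic in Python, assuming single-space separators as above:

```python
from collections import Counter

titles = [
    'cheap cars for sale',
    'house for sale',
    'cats and dogs for sale',
    'iphones and androids for sale',
    'cheap phones for sale',
    'house furniture for sale',
]
# Split each title on spaces and tally every word.
counts = Counter(word for title in titles for word in title.split(' '))
print(counts['for'], counts['sale'], counts['cheap'])  # 6 6 2
```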
|
You can use ExtractValue in an interesting way. See the SQL fiddle here: <http://sqlfiddle.com/#!9/0b0a0/45>
We need only one table:
```
CREATE TABLE text (`title` varchar(29));
INSERT INTO text (`title`)
VALUES
('cheap cars for sale'),
('house for sale'),
('cats and dogs for sale'),
('iphones and androids for sale'),
('cheap phones for sale'),
('house furniture for sale')
;
```
Now we construct a series of selects which extract whole words from the text converted to XML. Each select extracts the N-th word from the text.
```
select words.word, count(*) as `count` from
(select ExtractValue(CONCAT('<w>', REPLACE(title, ' ', '</w><w>'), '</w>'), '//w[1]') as word from `text`
union all
select ExtractValue(CONCAT('<w>', REPLACE(title, ' ', '</w><w>'), '</w>'), '//w[2]') from `text`
union all
select ExtractValue(CONCAT('<w>', REPLACE(title, ' ', '</w><w>'), '</w>'), '//w[3]') from `text`
union all
select ExtractValue(CONCAT('<w>', REPLACE(title, ' ', '</w><w>'), '</w>'), '//w[4]') from `text`
union all
select ExtractValue(CONCAT('<w>', REPLACE(title, ' ', '</w><w>'), '</w>'), '//w[5]') from `text`) as words
where length(words.word) > 0
group by words.word
order by `count` desc, words.word asc
```
|
How to find most popular word occurrences in MySQL?
|
[
"",
"mysql",
"sql",
"denormalization",
""
] |
I want to ask: is there some way to use a value selected from a left-joined table as the decimal-places parameter of the ROUND function?
For example.: `ROUND(sum(stats_bets_hourly.turnover_sum / currencies.rate ), 2) AS turnover_sum`
Should be: `ROUND(sum(stats_bets_hourly.turnover_sum / currencies.rate ), currencies.comma) AS turnover_sum`
Thanks, and sorry for my English.
---
**UPDATE:**
Sorry, that was a badly formulated question. ROUND is working fine, but if the currencies.comma value is 0, the query returns 75312.000000; if currencies.comma is 2, it returns 75312.480000. If I write a literal 0 or 2 instead of currencies.comma, I get 75312 and 75312.48.
|
Yes, of course you can use a column instead of the numeric literal in ROUND.
That's easy to demonstrate:
```
select round(123.4567, pos) from (select 2 as pos) x;
```
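That demonstration runs unchanged on SQLite as well, so it is easy to confirm from Python that ROUND accepts a column for the precision argument:

```python
import sqlite3

con = sqlite3.connect(':memory:')
# ROUND's second argument can be any expression, including a column.
row = con.execute("select round(123.4567, pos) from (select 2 as pos) x").fetchone()
print(row[0])  # 123.46
```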
In your case, which is
```
ROUND(sum(stats_bets_hourly.turnover_sum / currencies.rate ), currencies.comma)
```
there must be just one `currencies.comma` value you are dealing with in your query (by having `currencies.comma` or `currencies.id` in your GROUP BY clause, or by limiting them in the WHERE clause). If you are dealing with multiple `currencies.comma` values, then you probably need two steps, e.g.:
```
select
sum(turnover_partsum) as turnover_sum
from
(
select
c.comma,
round(sum(sbh.turnover_sum / c.rate ), c.comma) AS turnover_partsum
from currencies c
join stats_bets_hourly sbh on ...
group by c.comma
) as t; -- MySQL requires an alias on the derived table
```
EDIT: Just one more thought on that: `currencies.comma` tells you how to *round* when doing calculations? That doesn't seem likely. Maybe you'd rather want to *display* a currency with the according number of decimal places. That would be `FORMAT` rather than `ROUND`:
```
format(sum(sbh.turnover_sum / c.rate ), c.comma)
```
|
Yes, you can do so as long as it is an integer number.
|
Mysql query - round() function
|
[
"",
"mysql",
"sql",
""
] |
I have:
```
TABLE MESSAGES
message_id | conversation_id | from_user | timestamp | message
```
I want:
```
1. SELECT * WHERE from_user <> id
2. GROUP BY conversation_id
3. SELECT in every group row with MAX(timestamp) **(if there are two same timestamps in a group use second factor as highest message_id)** !!!
4. then results SORT BY timestamp
```
to have result:
```
2|145|xxx|10000|message
6|1743|yyy|999|message
7|14|bbb|899|message
```
with eliminated
```
1|145|xxx|10000|message <- has same timestamp(10000) as message(2) belongs to the same conversation(145) but message id is lowest
5|1743|me|1200|message <- has message_from == me
```
Example of a group with the same timestamp:
[](https://i.stack.imgur.com/hrT45.png)
I want row 3 from this group, but I get row 2 from this query:
```
SELECT max(message_timestamp), message_id, message_text, message_conversationId
FROM MESSAGES
WHERE message_from <> 'me'
GROUP BY message_conversationId
ORDER by message_Timestamp DESC
```
What I have in mind is to make a union of message_id & timestamp and then get the max. Is that the way to do it?
|
Try the SQL below, which achieves your purpose by grouping twice.
```
select m.*
from
Messages m
-- 3. and then joining to get wanted output columns
inner join
(
--2. then selecting from this max timestamp - and removing duplicates
select conversation_id, max(timestamp), message_id
from
(
-- 1. first select max message_id in remainings after the removal of duplicates from mix of cv_id & timestamp
select conversation_id, timestamp, max(message_id) message_id
from Messages
where from_user <> 'me'
group by conversation_id, timestamp
) max_mid
group by conversation_id
) max_mid_ts on max_mid_ts.message_id = m.message_id
order by m.message_id;
```
<http://goo.gl/MyZjyU>
|
Your query is based on non-standard use of `GROUP BY` (I think SQLite allows that only for compatibility with MySQL) and I'm not at all sure that it will produce determinate results all the time.
Plus it uses `MAX()` on concatenated columns. Unless you somehow ensure that the two (concatenated) columns have fixed widths, the results will not be accurate for that reason as well.
I would write the query like this:
```
SELECT
m.message_timestamp,
m.message_id,
m.message_text,
m.message_conversationId
FROM
( SELECT message_conversationId -- for every conversation
FROM messages as m
WHERE message_from <> 'me'
GROUP BY message_conversationId
) AS mc
JOIN
messages AS m -- join to the messages
ON m.message_id =
( SELECT mi.message_id -- and find one message id
FROM messages AS mi
WHERE mi.message_conversationId -- for that conversation
= mc.message_conversationId
AND mi.message_from <> 'me'
ORDER BY mi.message_timestamp DESC, -- according to the
mi.message_id DESC -- specified order
LIMIT 1 -- (this is the one part)
) ;
```
|
sqlite3 query by max and filter by second factor
|
[
"",
"android",
"sql",
"database",
"sqlite",
""
] |
I'm trying to optimise a select (a cursor in PL/SQL code, actually) that includes a PL/SQL function, e.g.
```
select * from mytable t,mytable2 t2...
where t.thing = 'XXX'
... lots more joins and sql predicate on various columns
and myplsqlfunction(t.val) = 'X'
```
The myplsqlfunction() call is very expensive, but it only needs to be applied to the manageably small subset of rows that satisfy the other conditions.
The problem is that Oracle appears to be evaluating myplsqlfunction() on more data than is ideal.
My evidence for this is if I recast the above as either
```
select * from (
select * from mytable t,mytable2 t2...
where t.thing = 'XXX'
... lots more joins and sql predicate on various columns
) where myplsqlfunction(t.val) = 'X'
```
or pl/sql as:
```
begin
for t in ( select * from mytable t,mytable2 t2...
where t.thing = 'XXX'
... lots more joins and sql predicate on various columns ) loop
if myplsqlfunction(t.val) = 'X' then
-- process the desired subset
end if;
end loop;
end;
```
performance is an order of magnitude better.
I am resigned to restructuring the offending code to use either of the 2 above idioms, but I would be delighted if there were a simpler way to get the Oracle optimizer to do this for me.
|
You could specify a bunch of hints to force a particular plan. But that would almost assuredly be more of a pain than restructuring the code.
I would expect that what you really want to do is to [associate non-default statistics with the function](http://www.oracle-developer.net/display.php?id=426). If you tell Oracle that the function is less selective than the optimizer is guessing or (more likely) if you provide high values for the CPU or I/O cost of the function, you'll cause the optimizer to try to call the function as few times as possible. The oracle-developer.net article walks through how to pick reasonably correct values for the cost (or going a step beyond that how to make those statistics change over time as the cost of the function call changes). You can probably fix your immediate problem by setting crazy-high costs but you probably want to go to the effort of setting accurate values so that you're giving the optimizer the most accurate information possible. Setting costs way too high or way too low tends to cause some set of queries to do something stupid.
|
You can use CTE like:
```
WITH X as
( select /*+ MATERIALIZE */ * from mytable t,mytable2 t2...
where t.thing = 'XXX'
... lots more joins and sql predicate on various columns
)
SELECT * FROM X
where myplsqlfunction(t.val) = 'X';
```
Note the MATERIALIZE hint. CTEs can be either inlined or materialized (into the TEMP tablespace).
Another option would be to use `NO_PUSH_PRED` hint. This is generally better solution (avoids materializing of the subquery), but it requires some tweaking.
PS: you should **not** run another SQL statement inside myplsqlfunction. That SQL might see data added after your query started, and you might get surprising results.
You can also declare your function as RESULT_CACHE, to force Oracle to remember return values from the function, if applicable, i.e. when the number of possible parameter values is reasonably small.
Probably the best solution is to associate the stats, as Justin describes.
|
can oracle hints be used to defer a (pl/sql) condition till last?
|
[
"",
"sql",
"oracle11g",
""
] |
Can anybody help me to get a result from multiple SQL requests:
```
SELECT *
(COUNT(*) AS Records FROM wp_posts WHERE post_type='news' AND post_status='publish') as News,
(COUNT(*) AS Records FROM wp_posts WHERE post_type='promotion' AND post_status='publish') as Promos,
(COUNT(*) AS Records FROM wp_posts WHERE post_type='contact' AND post_status='publish') as Contacts
FROM wp_posts
```
I just want to find out how many custom posts are in my WP MySQL database by sending one SQL request.
|
There is no need for subqueries at all:
```
SELECT
SUM(CASE WHEN post_type='news' THEN 1 ELSE 0 END) AS News,
SUM(CASE WHEN post_type='promotion' THEN 1 ELSE 0 END) AS Promos,
SUM(CASE WHEN post_type='contact' THEN 1 ELSE 0 END) AS Contacts
FROM wp_posts
WHERE post_status='publish';
```
or even shorter:
```
SELECT
SUM(IF(post_type='news', 1, 0)) AS News,
SUM(IF(post_type='promotion', 1, 0)) AS Promos,
SUM(IF(post_type='contact', 1, 0)) AS Contacts
FROM wp_posts
WHERE post_status='publish';
```
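The `CASE` form is portable, so it can be checked quickly with Python's built-in sqlite3 and a few made-up rows:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("create table wp_posts (post_type text, post_status text)")
con.executemany("insert into wp_posts values (?,?)", [
    ('news', 'publish'), ('news', 'publish'), ('news', 'draft'),
    ('promotion', 'publish'),
    ('contact', 'publish'), ('contact', 'publish'), ('contact', 'publish'),
])

row = con.execute("""
    SELECT
        SUM(CASE WHEN post_type='news' THEN 1 ELSE 0 END) AS News,
        SUM(CASE WHEN post_type='promotion' THEN 1 ELSE 0 END) AS Promos,
        SUM(CASE WHEN post_type='contact' THEN 1 ELSE 0 END) AS Contacts
    FROM wp_posts
    WHERE post_status='publish'
""").fetchone()
print(row)  # (2, 1, 3) -- the draft news post is excluded
```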
|
Use **UNION ALL** to combine the queries (they need to have the same number of columns)
```
SELECT * FROM (
SELECT 1
UNION ALL
SELECT 2
)
```
|
Bind multiple queries into one result
|
[
"",
"mysql",
"sql",
""
] |
I have two tables, namely tbl\_votes and tbl\_candidates.
tbl\_votes contains the following columns:
```
voteID president vicePresident secretary treasurer rep1 rep2 rep3
1 1 3 9 12 15
2 1 4 6 8 12 15
3 2 3 5 7 9 12
```
while tbl\_candidates contains the following columns:
```
idcandidate fName mName lName position
1 Jefferson Garcia Roxas president
2 Carlson Mcvoy Zafiro president
3 Andrew Efron Anderson vice president
4 Jessica Garcia Roxas secretary
```
...
```
9 Jayson Sanchez Dela Cruz representative
10 Arille Alisa Villanueva Valdez representative
11 Arnold Castro Baltazar representative
12 Kristina de Leon Guillermo representative
13 Jayson Jones Dela Cruz representative
14 Karla Santos Abrogena representative
15 Jason Chua representative
```
The result that I want is:
```
Name votes
Jefferson Garcia Roxas 2
Carlson Mcvoy Zafiro 1
Andrew Efron Anderson 2
Jessica Garcia Roxas 1
```
...
```
Jayson Sanchez Dela Cruz 2
Arnold Castro Baltazar 0
Kristina de Leon Guillermo 3
Jason Chua 2
```
I'm stuck on how to count the votes; here is my code:
```
SELECT `Name`, Votes
FROM (SELECT COUNT(*) as Votes FROM tbl_votes) a
CROSS JOIN (Select (Concat(fName ,' ',mName,' ', lName)) as `Name` from tbl_candidates) b;
```
the result of this query is:
```
Name Votes
Jefferson Garcia Roxas 3
Carlson Mcvoy Zafiro 3
Andrew Efron Anderson 3
Jessica Garcia Roxas 3
Jayson Sanchez Dela Cruz 3
Arnold Castro Baltazar 3
Kristina de Leon Guillermo 3
Jason Chua 3
```
How will I make the count specific to each idcandidate?
Your help will be much appreciated. Thank you!
|
```
SELECT fName||' '||mName||' '||lName, votes FROM (
SELECT a.idcandidate, COUNT(b.idcandidate) votes
FROM tbl_candidates a
LEFT JOIN (
SELECT president idcandidate from tbl_votes
UNION ALL
SELECT vicePresident idcandidate from tbl_votes
UNION ALL
SELECT secretary idcandidate from tbl_votes
UNION ALL
SELECT treasurer idcandidate from tbl_votes
UNION ALL
SELECT rep1 idcandidate from tbl_votes
UNION ALL
SELECT rep2 idcandidate from tbl_votes
UNION ALL
SELECT rep3 idcandidate from tbl_votes ) b
ON (a.idcandidate = b.idcandidate)
GROUP BY a.idcandidate ) tab
JOIN tbl_candidates b on (b.idcandidate = tab.idcandidate)
```
The above answer was for SQLite, I somehow misread the tags on this question.
However, it probably works, except the first line would need to be in MySQL format:
```
SELECT CONCAT_WS(" ", fName, mName, lName), votes FROM (
```
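The unpivot-then-count pattern is easy to verify on a reduced sketch: three office columns instead of seven, and made-up candidate names. The LEFT JOIN keeps candidates with zero votes:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
create table tbl_candidates (idcandidate integer primary key, fName text, lName text);
create table tbl_votes (voteID integer primary key,
                        president integer, vicePresident integer, secretary integer);
insert into tbl_candidates values (1,'Ann','Reyes'),(2,'Ben','Cruz'),
                                  (3,'Cora','Tan'),(4,'Dan','Uy');
insert into tbl_votes values (1, 1, 2, 3), (2, 1, 2, 3), (3, 2, 3, 3);
""")

rows = con.execute("""
    SELECT c.fName || ' ' || c.lName, COUNT(v.idcandidate) AS votes
    FROM tbl_candidates c
    LEFT JOIN (
        -- unpivot each office column into one id-per-vote list
        SELECT president AS idcandidate FROM tbl_votes
        UNION ALL SELECT vicePresident FROM tbl_votes
        UNION ALL SELECT secretary FROM tbl_votes
    ) v ON v.idcandidate = c.idcandidate
    GROUP BY c.idcandidate
    ORDER BY c.idcandidate
""").fetchall()
print(rows)  # [('Ann Reyes', 2), ('Ben Cruz', 3), ('Cora Tan', 4), ('Dan Uy', 0)]
```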
|
Whereas you *could* approach this with a `CROSS JOIN` (but a different one than you propose) and appropriate aggregation of the results, that's a poor approach that would not scale well. Of course, there are no really good approaches when you are saddled with a crummy data model, as you are.
There are several ways to approach this, none of them especially good, for instance:
```
SELECT `Name`, COUNT(*) AS `votes`
FROM
(
SELECT
CONCAT(fName, ' ', mName, ' ', lName) as `Name`
FROM
tbl_candidates c
JOIN tbl_votes v
ON c.idcandidate = v.president
WHERE
c.position = 'president'
UNION ALL
SELECT
CONCAT(fName, ' ', mName, ' ', lName) as `Name`
FROM
tbl_candidates c
JOIN tbl_votes v
ON c.idcandidate = v.vicePresident
WHERE
c.position = 'vice president'
UNION ALL
SELECT
CONCAT(fName, ' ', mName, ' ', lName) as `Name`
FROM
tbl_candidates c
JOIN tbl_votes v
ON c.idcandidate IN (v.rep1, v.rep2, v.rep3)
WHERE
c.position = 'representative'
) vote_agg
GROUP BY `Name`
```
That breaks down the problem by position, using one inline view for each position to generate a row for each vote for each candidate for that position. It then combines them into an overall list via `UNION ALL`, and performs an aggregate query on the result to count the votes for each candidate.
If there were any votes for an existing candidate for a position that they are not running for (which is difficult or impossible to prevent via constraints on the specified data model), then those would be ignored. If any one ballot had more than one vote for the same representative candidate, then only one would be counted (maybe the desired behavior, and maybe not).
|
How to select multiple columns with 1 column with count
|
[
"",
"mysql",
"sql",
""
] |
## GTS Table
```
CCP months QUART YEARS GTS
---- ------ ----- ----- ---
CCP1 1 1 2015 5
CCP1 2 1 2015 6
CCP1 3 1 2015 7
CCP1 4 2 2015 4
CCP1 5 2 2015 2
CCP1 6 2 2015 2
CCP1 7 3 2015 3
CCP1 8 3 2015 2
CCP1 9 3 2015 1
CCP1 10 4 2015 2
CCP1 11 4 2015 3
CCP1 12 4 2015 4
CCP1 1 1 2016 8
CCP1 2 1 2016 1
CCP1 3 1 2016 3
```
## Baseline table
```
CCP BASELINE YEARS QUART
---- -------- ----- -----
CCP1 5 2015 1
```
**Expected result**
```
CCP months QUART YEARS GTS result
---- ------ ----- ----- --- ------
CCP1 1 1 2015 5 25 -- 5 * 5 (here 5 is the baseline)
CCP1 2 1 2015 6 30 -- 6 * 5 (here 5 is the baseline)
CCP1 3 1 2015 7 35 -- 7 * 5 (here 5 is the baseline)
CCP1 4 2 2015 4 360 -- 90 * 4 (25+30+35 = 90 is the baseline)
CCP1 5 2 2015 2 180 -- 90 * 2 (25+30+35 = 90 is the baseline)
CCP1 6 2 2015 2 180 -- 90 * 2 (25+30+35 = 90 is the baseline)
CCP1 7 3 2015 3 2160.00 -- 720.00 * 3(360+180+180 = 720)
CCP1 8 3 2015 2 1440.00 -- 720.00 * 2(360+180+180 = 720)
CCP1 9 3 2015 1 720.00 -- 720.00 * 1(360+180+180 = 720)
CCP1 10 4 2015 2 8640.00 -- 4320.00
CCP1 11 4 2015 3 12960.00 -- 4320.00
CCP1 12 4 2015 4 17280.00 -- 4320.00
CCP1 1 1 2016 8 311040.00 -- 38880.00
CCP1 2 1 2016 1 77760.00 -- 38880.00
CCP1 3 1 2016 3 116640.00 -- 38880.00
```
[**SQLFIDDLE**](http://sqlfiddle.com/#!3/d78d2)
**Explanation**
Baseline table has single baseline value for each CCP.
The baseline value should be applied to the first quarter of each CCP; for subsequent quarters, the previous quarter's sum becomes the baseline.
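The recurrence is: each month's result is its GTS times the current baseline, and the next quarter's baseline is the sum of the current quarter's results. A plain-Python check against the expected 2015 numbers above (quarter lists taken from the GTS table):

```python
# GTS values for CCP1, grouped by quarter (2015 Q1..Q4).
quarters = [[5, 6, 7], [4, 2, 2], [3, 2, 1], [2, 3, 4]]

baseline = 5  # from the Baseline table
results = []
for gts in quarters:
    q_results = [baseline * g for g in gts]  # result = baseline * GTS
    results.extend(q_results)
    baseline = sum(q_results)                # next quarter's baseline

print(results)   # [25, 30, 35, 360, 180, 180, 2160, 1440, 720, 8640, 12960, 17280]
print(baseline)  # 38880, the baseline carried into 2016 Q1
```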
Here is a working query in `Sql Server 2008`
```
;WITH CTE AS
( SELECT b.CCP,
Baseline = CAST(b.Baseline AS DECIMAL(15,2)),
b.Years,
b.Quart,
g.Months,
g.GTS,
Result = CAST(b.Baseline * g.GTS AS DECIMAL(15,2)),
NextBaseline = SUM(CAST(b.Baseline * g.GTS AS DECIMAL(15, 2))) OVER(PARTITION BY g.CCP, g.years, g.quart),
RowNumber = ROW_NUMBER() OVER(PARTITION BY g.CCP, g.years, g.quart ORDER BY g.Months)
FROM #GTS AS g
INNER JOIN #Base AS b
ON B.CCP = g.CCP
AND b.QUART = g.QUART
AND b.YEARS = g.YEARS
UNION ALL
SELECT b.CCP,
CAST(b.NextBaseline AS DECIMAL(15, 2)),
b.Years,
b.Quart + 1,
g.Months,
g.GTS,
Result = CAST(b.NextBaseline * g.GTS AS DECIMAL(15,2)),
NextBaseline = SUM(CAST(b.NextBaseline * g.GTS AS DECIMAL(15, 2))) OVER(PARTITION BY g.CCP, g.years, g.quart),
RowNumber = ROW_NUMBER() OVER(PARTITION BY g.CCP, g.years, g.quart ORDER BY g.Months)
FROM #GTS AS g
INNER JOIN CTE AS b
ON B.CCP = g.CCP
AND b.Quart + 1 = g.QUART
AND b.YEARS = g.YEARS
AND b.RowNumber = 1
)
SELECT CCP, Months, Quart, Years, GTS, Result, Baseline
FROM CTE;
```
**UPDATE :**
To work with more than one year
```
;WITH order_cte
AS (SELECT Dense_rank() OVER(partition BY ccp ORDER BY years, quart) d_rn,*
FROM #gts),
CTE
AS (SELECT b.CCP,
Baseline = Cast(b.Baseline AS DECIMAL(15, 2)),
g.Years,
g.Quart,
g.Months,
g.GTS,
d_rn,
Result = Cast(b.Baseline * g.GTS AS DECIMAL(15, 2)),
NextBaseline = Sum(Cast(b.Baseline * g.GTS AS DECIMAL(15, 2)))
OVER(
PARTITION BY g.CCP, g.years, g.quart),
RowNumber = Row_number()
OVER(
PARTITION BY g.CCP, g.years, g.quart
ORDER BY g.Months)
FROM order_cte AS g
INNER JOIN #Baseline AS b
ON B.CCP = g.CCP
AND b.QUART = g.QUART
AND b.YEARS = g.YEARS
UNION ALL
SELECT b.CCP,
Cast(b.NextBaseline AS DECIMAL(15, 2)),
g.Years,
g.Quart,
g.Months,
g.GTS,
g.d_rn,
Result = Cast(b.NextBaseline * g.GTS AS DECIMAL(15, 2)),
NextBaseline = Sum(Cast(b.NextBaseline * g.GTS AS DECIMAL(15, 2)))
OVER(
PARTITION BY g.CCP, g.years, g.quart),
RowNumber = Row_number()
OVER(
PARTITION BY g.CCP, g.years, g.quart
ORDER BY g.Months)
FROM order_cte AS g
INNER JOIN CTE AS b
ON B.CCP = g.CCP
AND b.d_rn + 1 = g.d_rn
AND b.RowNumber = 1)
SELECT CCP,
Months,
Quart,
Years,
GTS,
Result,
Baseline
FROM CTE;
```
Now I am looking for a solution in `SQL Server 2012+` which will utilize the `SUM() OVER(ORDER BY)` functionality, or any better way.
Tried something like this
```
EXP(SUM(LOG(Baseline * GTS)) OVER (PARTITION BY CCP ORDER BY Years,Quart ROWS UNBOUNDED PRECEDING))
```
But it didn't work out.
|
The following solution assumes there are always 3 rows per quarter (only the last quarter might be partial); single SELECT, no recursion :-)
```
WITH sumQuart AS
(
SELECT *,
CASE
WHEN ROW_NUMBER() -- for the 1st month in a quarter
OVER (PARTITION BY CCP, Years, Quart
ORDER BY months) = 1
-- return the sum of all GTS of this quarter
THEN SUM(GTS) OVER (PARTITION BY CCP, Years, Quart)
ELSE NULL -- other months
END AS sumGTS
FROM gts
)
,cte AS
(
SELECT
sq.*,
COALESCE(b.Baseline, -- 1st quarter
-- product of all previous quarters
CASE
WHEN MIN(ABS(sumGTS)) -- any zeros?
OVER (PARTITION BY sq.CCP ORDER BY sq.Years, sq.Quart, sq.Months
ROWS BETWEEN UNBOUNDED PRECEDING AND 3 PRECEDING) = 0
THEN 0
ELSE -- product
EXP(SUM(LOG(NULLIF(ABS(COALESCE(b.Baseline,1) * sumGTS),0)))
OVER (PARTITION BY sq.CCP ORDER BY sq.Years, sq.Quart, sq.Months
ROWS BETWEEN UNBOUNDED PRECEDING AND 3 PRECEDING)) -- product
-- odd number of negative values -> negative result
* CASE WHEN COUNT(CASE WHEN sumGTS < 0 THEN 1 END)
OVER (PARTITION BY sq.CCP ORDER BY sq.Years, sq.Quart, sq.Months
ROWS BETWEEN UNBOUNDED PRECEDING AND 3 PRECEDING) % 2 = 0 THEN 1 ELSE -1 END
END) AS newBaseline
FROM sumQuart AS sq
LEFT JOIN BASELINE AS b
ON B.CCP = sq.CCP
AND b.Quart = sq.Quart
AND b.Years = sq.Years
)
SELECT
CCP, months, Quart, Years, GTS,
round(newBaseline * GTS,2),
round(newBaseline,2)
FROM cte
```
See [Fiddle](http://sqlfiddle.com/#!3/6c120d/1)
EDIT:
Added logic to handle values <= 0 [Fiddle](http://sqlfiddle.com/#!6/6ddf23/2)
|
Another method that uses the `EXP(SUM(LOG()))` trick and only window functions for the running total (no recursive CTEs or cursors).
Tested at **[dbfiddle.uk](http://dbfiddle.uk/?rdbms=sqlserver_2016&fiddle=e0d042ae452d9c0121d7ca570807d9c6)**:
```
WITH
ct AS
( SELECT
ccp, years, quart,
q2 = round(exp(coalesce(sum(log(sum(gts)))
OVER (PARTITION BY ccp
ORDER BY years, quart
ROWS BETWEEN UNBOUNDED PRECEDING
AND 1 PRECEDING)
, 0))
, 2) -- round appropriately to your requirements
FROM gts
GROUP BY ccp, years, quart
)
SELECT
g.*,
result = g.gts * b.baseline * ct.q2,
baseline = b.baseline * ct.q2
FROM ct
JOIN gts AS g
ON ct.ccp = g.ccp
AND ct.years = g.years
AND ct.quart = g.quart
CROSS APPLY
( SELECT TOP (1) b.baseline
FROM baseline AS b
WHERE b.ccp = ct.ccp
ORDER BY b.years, b.quart
) AS b
;
```
**How it works:**
* (`CREATE` tables and `INSERT` skipped)
* **1**, let's group by ccp, year and quart and calculate the sums:
> ```
> select
> ccp, years, quart,
> q1 = sum(gts)
> from gts
> group by ccp, years, quart ;
> GO
> ```
>
> ```
> ccp | years | quart | q1
> :--- | ----: | ----: | :--------
> CCP1 | 2015 | 1 | 18.000000
> CCP1 | 2015 | 2 | 8.000000
> CCP1 | 2015 | 3 | 6.000000
> CCP1 | 2015 | 4 | 9.000000
> CCP1 | 2016 | 1 | 12.000000
> ```
* **2**, we use the `EXP(SUM(LOG()))` trick to calculate the running multiplication of these sums. We use `ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING` in the window to skip the current values, as these values are only used for the baselines of the next quart.
The rounding is to avoid inaccuracies that come from using `LOG()` and `EXP()`. You can experiment with using either `ROUND()` or casting to `NUMERIC`:
> ```
> with
> ct as
> ( select
> ccp, years, quart,
> q1 = sum(gts)
> from gts
> group by ccp, years, quart
> )
> select
> ccp, years, quart, -- months, gts, q1,
> q2 = round(exp(coalesce(sum(log(q1))
> OVER (PARTITION BY ccp
> ORDER BY Years, Quart
> ROWS BETWEEN UNBOUNDED PRECEDING
> AND 1 PRECEDING),0)),2)
> from ct ;
> GO
> ```
>
> ```
> ccp | years | quart | q2
> :--- | ----: | ----: | ---:
> CCP1 | 2015 | 1 | 1
> CCP1 | 2015 | 2 | 18
> CCP1 | 2015 | 3 | 144
> CCP1 | 2015 | 4 | 864
> CCP1 | 2016 | 1 | 7776
> ```
* **3**, we combine the two queries in one (no need for that, it just makes the query more compact, you could have 2 CTEs instead) and then join to `gts` so we can multiply each value with the calculated `q2` (which gives us the baseline).
The `CROSS APPLY` is merely to get the base baseline for each ccp.
Note that I changed this one slightly, to `numeric(22,6)` instead of rounding to 2 decimal places. The results are the same with the sample data, but they may differ if the numbers are bigger or not integers:
> ```
> with
> ct as
> ( select
> ccp, years, quart,
> q2 = cast(exp(coalesce(sum(log(sum(gts)))
> OVER (PARTITION BY ccp
> ORDER BY years, quart
> ROWS BETWEEN UNBOUNDED PRECEDING
> AND 1 PRECEDING)
> , 0.0))
> as numeric(22,6)) -- round appropriately to your requirements
> from gts
> group by ccp, years, quart
> )
> select
> g.*,
> result = g.gts * b.baseline * ct.q2,
> baseline = b.baseline * ct.q2
> from ct
> join gts as g
> on ct.ccp = g.ccp
> and ct.years = g.years
> and ct.quart = g.quart
> cross apply
> ( select top (1) baseline
> from baseline as b
> where b.ccp = ct.ccp
> order by years, quart
> ) as b
> ;
> GO
> ```
>
> ```
> CCP | months | QUART | YEARS | GTS | result | baseline
> :--- | -----: | ----: | ----: | :------- | :------------ | :-----------
> CCP1 | 1 | 1 | 2015 | 5.000000 | 25.000000 | 5.000000
> CCP1 | 2 | 1 | 2015 | 6.000000 | 30.000000 | 5.000000
> CCP1 | 3 | 1 | 2015 | 7.000000 | 35.000000 | 5.000000
> CCP1 | 4 | 2 | 2015 | 4.000000 | 360.000000 | 90.000000
> CCP1 | 5 | 2 | 2015 | 2.000000 | 180.000000 | 90.000000
> CCP1 | 6 | 2 | 2015 | 2.000000 | 180.000000 | 90.000000
> CCP1 | 7 | 3 | 2015 | 3.000000 | 2160.000000 | 720.000000
> CCP1 | 8 | 3 | 2015 | 2.000000 | 1440.000000 | 720.000000
> CCP1 | 9 | 3 | 2015 | 1.000000 | 720.000000 | 720.000000
> CCP1 | 10 | 4 | 2015 | 2.000000 | 8640.000000 | 4320.000000
> CCP1 | 11 | 4 | 2015 | 3.000000 | 12960.000000 | 4320.000000
> CCP1 | 12 | 4 | 2015 | 4.000000 | 17280.000000 | 4320.000000
> CCP1 | 1 | 1 | 2016 | 8.000000 | 311040.000000 | 38880.000000
> CCP1 | 2 | 1 | 2016 | 1.000000 | 38880.000000 | 38880.000000
> CCP1 | 3 | 1 | 2016 | 3.000000 | 116640.000000 | 38880.000000
> ```
|
Running Multiplication in T-SQL
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2012",
"sql-server-2014",
""
] |
I'm trying to limit my rows based on column values, but having difficulty getting the syntax right.
Given:
```
create table acctprefs (acctid char(5),
prefcode char(3));
insert into acctprefs values ('10000', 'ABC');
insert into acctprefs values ('10000', 'DEF');
insert into acctprefs values ('10000', 'GHI');
insert into acctprefs values ('10001', 'ABC');
insert into acctprefs values ('10001', 'DEF');
insert into acctprefs values ('10001', 'GHI');
insert into acctprefs values ('10001', 'ZZZ');
```
I would like to return a distinct list of accounts that do not have the 'ZZZ' preference. In this case, I'm trying to get a result that would be simply:
10000
I'm able to get the distinct accounts with 'ZZZ' with the query below, but I just need the opposite.
```
select *
from
acctprefs ap
where
ap.prefcode in
(select ap.prefcode from acctprefs ap group by ap.prefcode having(ap.prefcode = 'ZZZ'));
```
|
One approach is aggregation with a `having` clause:
```
select ap.acctid
from acctprefs ap
group by ap.acctid
having sum(case when ap.prefcode = 'ZZZ' then 1 else 0 end) = 0;
```
The `having` clause counts the number of ZZZ for each account . . . and returns only those with zero.
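For a quick sanity check, the same aggregation can be run against an in-memory SQLite database with the data recreated from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE acctprefs (acctid TEXT, prefcode TEXT);
    INSERT INTO acctprefs VALUES
      ('10000','ABC'),('10000','DEF'),('10000','GHI'),
      ('10001','ABC'),('10001','DEF'),('10001','GHI'),('10001','ZZZ');
""")

# accounts whose ZZZ count is zero
rows = conn.execute("""
    SELECT acctid
    FROM acctprefs
    GROUP BY acctid
    HAVING SUM(CASE WHEN prefcode = 'ZZZ' THEN 1 ELSE 0 END) = 0
""").fetchall()

print(rows)
```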
|
One way of doing it is using `minus`.
```
select acctid
from acctprefs
where arefcode <> 'ZZZ'
minus
select acctid
from acctprefs
where arefcode = 'ZZZ'
```
|
Oracle SQL: Distinct list of results based row not having column value
|
[
"",
"sql",
"oracle",
"having-clause",
""
] |
I am using Rails 4.2 with PostgreSQL. I have a `Product` model and a `Purchase` model with `Product` `has many` `Purchases`. I want to find the distinct recently purchased products. Initially I tried:
```
Product.joins(:purchases)
.select("DISTINCT products.*, purchases.updated_at") #postgresql requires order column in select
.order("purchases.updated_at DESC")
```
This however results in duplicates because it tries to find all tuples where the pair (`product.id` and `purchases.updated_at`) has a unique value. However I just want to select the products with distinct `id` after the join. If a product id appears multiple times in the join, only select the first one. So I also tried:
```
Product.joins(:purchases)
.select("DISTINCT ON (product.id) purchases.updated_at, products.*")
.order("product.id, purchases.updated_at") #postgres requires that DISTINCT ON must match the leftmost order by clause
```
This doesn't work because I need to specify `product.id` in the `order` clause because of [this](https://stackoverflow.com/questions/9795660/postgresql-distinct-on-with-different-order-by) constraint which outputs unexpected order.
What is the rails way to achieve this?
|
Use a subquery and add a different `ORDER BY` clause in the outer `SELECT`:
```
SELECT *
FROM (
SELECT DISTINCT ON (pr.id)
pu.updated_at, pr.*
FROM Product pr
JOIN Purchases pu ON pu.product_id = pr.id -- guessing
ORDER BY pr.id, pu.updated_at DESC NULLS LAST
) sub
ORDER BY updated_at DESC NULLS LAST;
```
Details for `DISTINCT ON`:
* [Select first row in each GROUP BY group?](https://stackoverflow.com/questions/3800551/select-first-row-in-each-group-by-group/7630564#7630564)
Or some other query technique:
* [Optimize GROUP BY query to retrieve latest record per user](https://stackoverflow.com/questions/25536422/optimize-group-by-query-to-retrieve-latest-record-per-user/25536748#25536748)
But if all you need from `Purchases` is `updated_at`, you can get this cheaper with a simple aggregate in a subquery before you join:
```
SELECT *
FROM Product pr
JOIN (
SELECT product_id, max(updated_at) AS updated_at
FROM Purchases
GROUP BY 1
) pu ON pu.product_id = pr.id -- guessing
ORDER BY pu.updated_at DESC NULLS LAST;
```
About `NULLS LAST`:
* [PostgreSQL sort by datetime asc, null first?](https://stackoverflow.com/questions/9510509/postgresql-sort-by-datetime-asc-null-first/9511492#9511492)
Or even simpler, but not as fast while retrieving all rows:
```
SELECT pr.*, max(updated_at) AS updated_at
FROM Product pr
JOIN Purchases pu ON pu.product_id = pr.id
GROUP BY pr.id -- must be primary key
ORDER BY 2 DESC NULLS LAST;
```
`Product.id` needs to be defined as primary key for this to work. Details:
* [PostgreSQL - GROUP BY clause](https://stackoverflow.com/questions/18991625/postgresql-group-by-clause/18993394#18993394)
* [Return a grouped list with occurrences using Rails and PostgreSQL](https://stackoverflow.com/questions/11836874/return-a-grouped-list-with-occurrences-using-rails-and-postgresql/11847961#11847961)
If you fetch only a small selection (with a `WHERE` clause restricting to just one or a few `pr.id` for instance), this will be faster.
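To illustrate the aggregate-before-join variant, here is a small self-contained SQLite sketch; the table contents are made up for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE product (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE purchases (product_id INTEGER, updated_at TEXT);
    INSERT INTO product VALUES (1, 'a'), (2, 'b');
    INSERT INTO purchases VALUES
      (1, '2015-01-01'), (1, '2015-03-01'), (2, '2015-02-01');
""")

# aggregate purchases first, then join: one row per product,
# ordered by its latest purchase date
rows = conn.execute("""
    SELECT pr.id, pu.updated_at
    FROM product pr
    JOIN (SELECT product_id, MAX(updated_at) AS updated_at
          FROM purchases
          GROUP BY product_id) pu
      ON pu.product_id = pr.id
    ORDER BY pu.updated_at DESC
""").fetchall()

print(rows)
```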
|
So building on @ErwinBrandstetter answer, I finally found the right way of doing this. The query to find distinct recent purchases is
```
SELECT *
FROM (
SELECT DISTINCT ON (pr.id)
pu.updated_at, pr.*
FROM Product pr
JOIN Purchases pu ON pu.product_id = pr.id
) sub
ORDER BY updated_at DESC NULLS LAST;
```
The `order_by` isn't needed inside the subquery, since we are anyway ordering in the outer query.
The rails way of doing this is -
```
inner_query = Product.joins(:purchases)
.select("DISTINCT ON (products.id) products.*, purchases.updated_at as date") #This selects all the unique purchased products.
result = Product.from("(#{inner_query.to_sql}) as unique_purchases")
.select("unique_purchases.*").order("unique_purchases.date DESC")
```
The second (and better) way to do this as suggested by @ErwinBrandstetter is
```
SELECT *
FROM Product pr
JOIN (
SELECT product_id, max(updated_at) AS updated_at
FROM Purchases
GROUP BY 1
) pu ON pu.product_id = pr.id
ORDER BY pu.updated_at DESC NULLS LAST;
```
which can written in rails as
```
join_query = Purchase.select("product_id, max(updated_at) as date")
.group(1) #This selects most recent date for all purchased products
result = Product.joins("INNER JOIN (#{join_query.to_sql}) as unique_purchases ON products.id = unique_purchases.product_id")
.order("unique_purchases.date")
```
|
Rails - Distinct ON after a join
|
[
"",
"sql",
"ruby-on-rails",
"postgresql",
"greatest-n-per-group",
"ruby-on-rails-4.2",
""
] |
I am fetching table-names from particular database like this
```
SELECT name FROM sys.Tables where name like 'some pattern'
Output :
Name
sampletable_123,
sampletable_456,
sample_789.
```
It can return more than one row, like the output above. How do I fetch the rows one by one?
Because I want to look up a column/date in each obtained table; if the date is less than the current date, I want to drop that table.
```
select top 1 Udate from sampletable_123
where Udate < convert(varchar(10),getdate(),101)
drop table sampletable_123
```
How to do that?
|
Thanks SubqueryCrunch and Sergio Internicola for your help. @Sergio Internicola, your logic is right, but I think the cursor takes time. I modified SubqueryCrunch's query and it is working fine for me.
```
DECLARE @SQL NVARCHAR(4000)
SET @SQL = ' '
SELECT
@SQL += 'IF EXISTS(SELECT TOP 1 udate FROM ' + name + ' WHERE DATEDIFF(day,udate,GETDATE()) != 0)
DROP TABLE ' + name +' '
FROM sys.Tables
where name like 'tbl_%'
PRINT @SQL
EXEC sp_executesql @SQL
```
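For illustration, the same pattern (enumerate the matching tables from the catalog, then build and run the conditional drop dynamically, since a table name cannot be a bound parameter) can be sketched with SQLite; the sample tables are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tbl_old (udate TEXT);
    INSERT INTO tbl_old VALUES ('2014-01-01');   -- stale date
    CREATE TABLE tbl_new (udate TEXT);
    INSERT INTO tbl_new VALUES (date('now'));    -- today
""")

names = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' AND name LIKE 'tbl_%'")]

for name in names:
    # table names cannot be bound as parameters, so interpolate them;
    # safe here because the names come from the catalog itself
    stale = conn.execute(
        f"SELECT 1 FROM {name} WHERE udate < date('now') LIMIT 1").fetchone()
    if stale:
        conn.execute(f"DROP TABLE {name}")

remaining = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
print(remaining)
```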
|
You can use a cursor on the above select, and fetch one row at a time.
For each row, you can retrieve the first row of the table and decide to drop it or not.
```
DECLARE @TABLES CURSOR
DECLARE @MYNAME sysname
DECLARE @SQL NVARCHAR(500)

SET @TABLES = CURSOR FOR
    SELECT name FROM sys.tables WHERE name LIKE 'sample%'

OPEN @TABLES
WHILE 1 = 1 BEGIN -- INFINITE LOOP, exited via BREAK
    FETCH NEXT FROM @TABLES INTO @MYNAME
    IF @@FETCH_STATUS <> 0 BREAK
    -- a table name cannot be a variable, so build dynamic SQL per table
    SET @SQL = N'IF EXISTS(SELECT TOP 1 Udate FROM ' + QUOTENAME(@MYNAME) +
               N' WHERE Udate < CONVERT(VARCHAR(10), GETDATE(), 101)) ' +
               N'DROP TABLE ' + QUOTENAME(@MYNAME)
    EXEC sp_executesql @SQL
END

CLOSE @TABLES
DEALLOCATE @TABLES
```
|
How to fetch row one by one in sql
|
[
"",
"sql",
"asp.net",
"sql-server",
""
] |
I have a table like this:
```
DROP TABLE IF EXISTS `locations`;
CREATE TABLE IF NOT EXISTS `locations` (
`tenant_id` int(11) NOT NULL,
`id` int(11) NOT NULL AUTO_INCREMENT,
`waypoint_id` int(11) NOT NULL,
`material` int(11),
`price` decimal(10,2) NOT NULL,
PRIMARY KEY (`tenant_id`,`id`),
UNIQUE KEY `id` (`id`),
UNIQUE KEY(`waypoint_id`, `material`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=4 ;
```
I'm running this query:
```
UPDATE locations
SET waypoint_id=23,
material=19,
price=22.22,
unit_id=1
WHERE tenant_id=3 AND id = 54;
```
I get the following error:
```
Duplicate entry '23-19' for key 'waypoint_id'
```
I know that I have a record with those IDs already but then how can I edit the values in that row if it doesn't let me change them?
I don't understand why I get this error if I'm not trying to insert a new record with those ids 23-19 but I'm just trying to update that record. How can I solve this?
**NOTE**
I apologize, I pasted the wrong query, I edited the query with the first one that is creating the error.
|
Your query seems to be wrong. It should be like this:
```
UPDATE locations
SET waypoint_id=23,
material=19,
price=22.22,
unit_id=1
WHERE tenant_id=3 AND id = 54;
```
**EDIT:**
You need to check whether your table already contains a row with waypoint\_id 23 and material 19, since you made that combination unique. If such a row already exists, you cannot set another row to the values 23 and 19.
You can check it like this:
```
select waypoint_id, material from locations WHERE tenant_id=3 AND id = 54;
```
or rather check like
```
select * from locations where waypoint_id = 23 and material = 19
```
A workaround to your problem is to drop the unique key constraint from your table like this
```
alter table locations drop index waypoint_id ;
alter table locations drop index material ;
```
Then you can do the update
And after that apply the unique key on the combination of two columns like this:
```
ALTER TABLE `locations` ADD UNIQUE `unique_index`(`waypoint_id`, `material`);
```
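The failure mode is easy to reproduce on any engine: a composite UNIQUE key rejects an UPDATE exactly as it rejects an INSERT when the new value pair already exists in another row. A minimal SQLite sketch with a simplified version of the schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE locations (
        id INTEGER PRIMARY KEY,
        waypoint_id INTEGER,
        material INTEGER,
        UNIQUE (waypoint_id, material)
    );
    INSERT INTO locations VALUES (1, 23, 19), (2, 24, 20);
""")

try:
    # row 2 would collide with row 1's (23, 19) pair
    conn.execute("UPDATE locations SET waypoint_id = 23, material = 19 WHERE id = 2")
    failed = False
except sqlite3.IntegrityError:
    failed = True

print(failed)
```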
|
```
UNIQUE KEY(`waypoint_id`, `material`)
```
So when you **do** have that constraint, you cannot have 2 rows with the same combination of those 2 values. So you can't have two rows with 23 as waypoint\_id and 19 as material, simple as that.
That's what unique means. That combination has to be unique whether you update or insert.
~~Because you cant provide 2 different update queries mixed in one query like that.
UPDATE locations
SET waypoint\_id=23 WHERE tenant\_id=3 AND id = 54
Query ends there. Now it appears like you do have only 1 update going on but your syntax is not right. It should be like
UPDATE locations
SET waypoint\_id=23,
material=19,
price=22.22,
unit\_id=1
WHERE tenant\_id=3 AND id = 54~~
|
I can't update a table that has two unique keys
|
[
"",
"mysql",
"sql",
""
] |
So i have a MySQL table that contains 2 fields - deviceID and jobID. Below is a sample of what the table looks like with Data in it:
```
+----------------------+----------------------+
| deviceID | jobID |
+----------------------+----------------------+
| f18204efba03a874bb9f | be83dec5d120c42a6b94 |
| 49ed54279fb983317051 | be83dec5d120c42a6b94 |
+----------------------+----------------------+
```
Usually i run a query that looks a little like this:
```
SELECT Count(deviceID)
FROM pendingCollect
WHERE jobID=%s AND deviceID=%s
```
Now this runs fine and usually returns 0 if the device doesn't exist with the specified job, and 1 if it does, which is perfectly fine. HOWEVER, for some reason I'm having problems with the second row. The query:
```
SELECT Count(deviceID)
FROM pendingCollect
WHERE jobID='be83dec5d120c42a6b94' AND deviceID='49ed54279fb983317051'
```
is returning 0 for some reason. Even though the data exists in the table and the count should be 1, it is returned as 0... Any ideas why this is?
thanks in Advance
EDIT:
Sorry for the typo guys! The example SQL query shouldn't have had the same devID and jobID.. My mistake
EDIT 2:
Some people are suggesting I use the SQL LIKE operator.... Is there a need for this? Again, when I run the following query, everything runs fine and returns 1. It only seems to be the deviceID "49ed54279fb983317051" that returns the wrong count...
```
SELECT Count(deviceID)
FROM pendingCollect
WHERE jobID='be83dec5d120c42a6b94' AND deviceID='f18204efba03a874bb9f'
```
The above query works as expected returning 1
|
You need to provide the correct value for jobID. Presently you are providing the value of deviceID in jobID, which is not matching and hence returning 0 rows.
```
SELECT Count(deviceID) FROM pendingCollect
WHERE jobID='49ed54279fb983317051' AND deviceID='49ed54279fb983317051'
^^^^^^^^^^^^^^^^^^^^^^^
```
The reason why
```
jobID=%s and deviceID=%s
```
which I think you mean
```
jobID like '%s' and deviceID like '%s'
```
was working, because both were matching. But now, since you are using the AND condition and providing the same jobID value for both, it will not match any row and will return 0 rows.
**EDIT:**
Your query seems to be correct and is giving the correct result.
**[SQL FIDDLE DEMO](http://sqlfiddle.com/#!9/d5cb2/1)**
You need to check if there is any space which is getting added to the values for the jobID and deviceID column.
|
This is because of the `AND` operator. `AND` means both conditions must be true. Instead of `AND`, use `OR` operator.
```
SELECT Count(deviceID)
FROM pendingCollect
WHERE jobID = '49ed54279fb983317051' OR deviceID = '49ed54279fb983317051'
```
|
Count returns 0 for a column that exists
|
[
"",
"mysql",
"sql",
""
] |
I want to run a query on `MySQL version 5.1.9` that returns only the top two rows (ordered by JoiningDate) of each selected Dept.
For example, my data is like:
```
+-------+------------------------------------------+----------+------------+
| empid | title | Dept | JoiningDate|
+-------+------------------------------------------+----------+------------+
| 1 | Research and Development | 1 | 2015-08-06 |
| 2 | Consultant | 2 | 2015-08-06 |
| 3 | Medical Consultant | 3 | 2015-08-06 |
| 4 | Officer | 4 | 2015-08-06 |
| 5 | English Translator | 5 | 2015-08-06 |
| 6 | Teacher | 1 | 2015-08-01 |
| 7 | Physical Education | 2 | 2015-08-01 |
| 8 | Accountant | 3 | 2015-08-01 |
| 9 | Science Teacher | 4 | 2015-08-01 |
| 10 | Home Science | 5 | 2015-08-01 |
| 11 | Research Assistant | 1 | 2015-08-05 |
| 12 | Consultant | 2 | 2015-08-05 |
| 13 | Consultant HR | 3 | 2015-08-05 |
| 14 | Technical Lead | 4 | 2015-08-05 |
| 15 | Hindi Translator | 5 | 2015-08-05 |
| 16 | Urdu Teacher | 1 | 2015-08-02 |
| 17 | Physical Education | 2 | 2015-08-02 |
| 18 | Accountant | 3 | 2015-08-02 |
| 19 | Science | 4 | 2015-08-02 |
| 20 | Home Science | 5 | 2015-08-02 |
+-------+------------------------------------------+----------+------------+
```
I want the query to output the latest joined two empid's of Dept (1,2,3) i.e:
```
+-------+------------------------------------------+----------+------------+
| empid | title | Dept | JoiningDate|
+-------+------------------------------------------+----------+------------+
| 1 | Research and Development | 1 | 2015-08-06 |
| 11 | Research Assistant | 1 | 2015-08-05 |
| 2 | Consultant | 2 | 2015-08-06 |
| 12 | Consultant | 2 | 2015-08-05 |
| 3 | Medical Consultant | 3 | 2015-08-06 |
| 13 | Consultant HR | 3 | 2015-08-05 |
+-------+------------------------------------------+----------+------------+
```
|
In MySQL you can use user-defined variables to achieve your desired results:
```
SELECT
t.empid,
t.title,
t.Dept,
t.JoiningDate
FROM
(
SELECT
*,
@r:= CASE WHEN @g = b.Dept THEN @r + 1 ELSE 1 END rounum,
@g:= b.Dept
FROM (
SELECT *
FROM table1
CROSS JOIN (SELECT @r:= NULL,@g:=NULL) a
WHERE Dept IN(1,2,3)
ORDER BY Dept,JoiningDate DESC
) b
) t
WHERE t.rounum <=2
```
[**`DEMO`**](http://sqlfiddle.com/#!9/888f7/3)
|
Use a correlated sub-select to count the number of rows with the same Dept but a later JoiningDate. If there are fewer than 2, return the row.
```
select empid, title, Dept, JoiningDate
from tablename t1
where (select count(*) from tablename t2
where t2.Dept = t1.Dept
and t2.JoiningDate > t1.JoiningDate) < 2
```
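The correlated-count approach runs unchanged on most engines; here is a quick SQLite check using the department-1 rows from the sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE emp (empid INTEGER, dept INTEGER, joiningdate TEXT);
    INSERT INTO emp VALUES
      (1, 1, '2015-08-06'), (6, 1, '2015-08-01'),
      (11, 1, '2015-08-05'), (16, 1, '2015-08-02');
""")

# keep a row only if fewer than 2 rows in the same dept joined later
rows = conn.execute("""
    SELECT empid
    FROM emp t1
    WHERE (SELECT COUNT(*) FROM emp t2
           WHERE t2.dept = t1.dept
             AND t2.joiningdate > t1.joiningdate) < 2
    ORDER BY joiningdate DESC
""").fetchall()

print(rows)
```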
|
How to get Latest N Records of selected Group
|
[
"",
"mysql",
"sql",
"date",
"group-by",
"sql-order-by",
""
] |
So far i was using
```
IF OBJECT_ID(''tempdb..#tempTable'') IS NOT NULL -- dropping the temp table
DROP TABLE #tempTable
```
**Is there a way in which** I could use the same statement for a table which is not a temp one?
Tried like this and it didn't work:
```
IF OBJECT_ID(''myOwnDb.dbo.myTable'') IS NOT NULL -- dropping the table
DROP TABLE dbo.myTable
```
|
Strings in MS SQL server should be enclosed in **single** quotes.
So neither `OBJECT_ID(''myOwnDb.dbo.myTable'')` nor `OBJECT_ID("myOwnDb.dbo.myTable")` will work.
But `OBJECT_ID('myOwnDb.dbo.myTable')` will work perfectly.
|
In addition to what other users have suggested regarding `OBJECT_ID`, which is fine, you can explore the method below to detect whether a table exists, using `INFORMATION_SCHEMA`:
```
IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME = N'Your Table Name')
BEGIN
Drop table <tablename>
END
```
|
drop table #temp vs drop myTable if it's not null
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"sql-server-2012",
""
] |
I have a database of users who pay monthly payment. I need to check if there is continuity in these payments.
For example in the table below:
```
+---------+------------+
| user_id | date |
+---------+------------+
| 1 | 2015-02-01 |
| 2 | 2015-02-01 |
| 3 | 2015-02-01 |
| 1 | 2015-03-01 |
| 2 | 2015-03-01 |
| 3 | 2015-03-01 |
| 4 | 2015-03-01 |
| 1 | 2015-04-01 |
| 2 | 2015-04-01 |
| 3 | 2015-04-01 |
| 4 | 2015-04-01 |
| 5 | 2015-04-01 |
| 1 | 2015-05-01 |
| 2 | 2015-05-01 |
| 3 | 2015-05-01 |
| 4 | 2015-05-01 |
| 5 | 2015-05-01 |
| 1 | 2015-06-01 |
| 2 | 2015-06-01 |
| 3 | 2015-06-01 |
| 5 | 2015-06-01 |
| 3 | 2015-07-01 |
| 4 | 2015-07-01 |
| 5 | 2015-07-01 |
+---------+------------+
```
Until May everything was ok, but in June user 4 didn't pay although he paid in the next month (July).
In July users 1 and 2 didn't pay, but this is ok, because they could resign from the service.
So in this case I need to have information "User 4 didn't pay in June".
Is it possible to do that using SQL?
I use MS Access if it's necessary information.
|
I have written a simple query for this but I realize that it's not the best solution. Other solutions are still welcome.
```
SELECT user_id,
MIN(date) AS min_date,
MAX(date) AS max_date,
COUNT(*) AS no_of_records,
round((MAX(date)-MIN(date))/30.4+1,0) AS months,
(months-no_of_records) AS diff
FROM test
GROUP BY user_id
HAVING (round((MAX(date)-MIN(date))/30.4+1,0)-COUNT(*)) > 0
ORDER BY 6 DESC;
```
Now we can take a look at columns "no\_of\_records" and "months". If they are not equal, there was a gap for this user.
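When year/month parts are available, the month arithmetic can also be done exactly, without the `/30.4` approximation. A Python sketch for user 4 from the sample data:

```python
from datetime import date

# user 4's payments from the sample data (June is missing)
payments = [date(2015, 3, 1), date(2015, 4, 1),
            date(2015, 5, 1), date(2015, 7, 1)]

first, last = min(payments), max(payments)
# inclusive number of calendar months between first and last payment
months_spanned = (last.year - first.year) * 12 + (last.month - first.month) + 1

# a positive difference means at least one month was skipped
gap = months_spanned - len(payments)
print(gap)
```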
|
From my experience, you cannot find the gaps by working with the payments table alone. If none of your users pays in a specific month, your query would leave that entire month out of the equation.
This means you need to list all dates from Jan to Dec and check against each user whether they have paid or not, which in turn requires a table holding the requested dates to compare against.
Dedicated RDBMSs provide temporary tables, stored procedures and functions which allow you to build higher-level/complex queries. The ACE/JET engine provides fewer possibilities, but there is a way around this to get it done (VBA).
In any case, you need to give the database a specific date period in which you are looking for gaps. Either you can say the current year, or between yearX and yearY.
here how it could work:
1. create a temporary table called tbl\_date
2. create a vba function to generate your requested date range
3. create a query (all\_dates\_all\_users) where you select the requested dates & user ids (without a join); this will give you the combination of all dates x all users
4. create another query where you left join the all\_dates\_all\_users query with your user\_payments table (this will produce all dates for all users, joined to your user\_payments table)
5. perform your check on whether user\_payments is null (if it is null, user X hasn't paid for that month)
Here is an example:
[Tables]
1. tbl\_date : id primary (auto number), date\_field (date/Time)
2. tbl\_user\_payments: pay\_id (auto number, primary), user\_id (number), pay\_Date (Date/Time). This is your table; modify it as per your requirements. I'm not sure if you have a dedicated user table, so I use this payments table to get the user\_id too.
[Queries]
1. qry\_user\_payments\_all\_month\_all\_user:
SELECT Year([date\_field]) AS mYear, Month([date\_field]) AS mMonth, qry\_user\_payments\_user\_group.user\_id
FROM qry\_user\_payments\_user\_group, tbl\_date
ORDER BY Year([date\_field]), Month([date\_field]), qry\_user\_payments\_user\_group.user\_id;
2. qry\_user\_payments\_paid\_or\_not\_paid
SELECT qry\_user\_payments\_all\_month\_all\_user.mYear,
qry\_user\_payments\_all\_month\_all\_user.mMonth,
qry\_user\_payments\_all\_month\_all\_user.user\_id,
IIf(IsNull([tbl\_user\_payments].[user\_id]),"Not paid","Paid") AS [Paid?]
FROM qry\_user\_payments\_all\_month\_all\_user
LEFT JOIN tbl\_user\_payments ON (qry\_user\_payments\_all\_month\_all\_user.user\_id = tbl\_user\_payments.user\_id)
AND ((qry\_user\_payments\_all\_month\_all\_user.mMonth = month(tbl\_user\_payments.[pay\_date]) AND (qry\_user\_payments\_all\_month\_all\_user.mYear = year(tbl\_user\_payments.[pay\_date]) )) )
ORDER BY qry\_user\_payments\_all\_month\_all\_user.mYear, qry\_user\_payments\_all\_month\_all\_user.mMonth, qry\_user\_payments\_all\_month\_all\_user.user\_id;
[Function]
```
Public Function FN_CRETAE_DATE_TABLE(iDate_From As Date, Optional iDate_To As Date)
'---------------------------------------------------------------------------------------
' Procedure : FN_CRETAE_DATE_TABLE
' Author : KRISH KM
' Date : 22/09/2015
' Purpose : will generate date period and check whether payments are received. A query will be opened with results
' CopyRights: You are more than welcome to edit and reuse this code. i'll be happy to receive courtesy reference:
' Contact : krishkm@outlook.com
'---------------------------------------------------------------------------------------
'
Dim From_month, To_Month As Integer
Dim From_Year, To_Year As Long
Dim I, J As Integer
Dim SQL_SET As String
Dim strDoc As String
strDoc = "tbl_date"
DoCmd.SetWarnings (False)
SQL_SET = "DELETE * FROM " & strDoc
DoCmd.RunSQL SQL_SET
If (IsMissing(iDate_To)) Or (iDate_To <= iDate_From) Then
'just current year
From_month = VBA.Month(iDate_From)
From_Year = VBA.Year(iDate_From)
For I = From_month To 12
SQL_SET = "INSERT INTO " & strDoc & "(date_field) values ('" & From_Year & "-" & VBA.Format(I, "00") & "-01 00:00:00')"
DoCmd.RunSQL SQL_SET
Next I
Else
From_month = VBA.Month(iDate_From)
To_Month = VBA.Month(iDate_To)
From_Year = VBA.Year(iDate_From)
To_Year = VBA.Year(iDate_To)
For J = From_Year To To_Year
For I = From_month To To_Month
SQL_SET = "INSERT INTO " & strDoc & "(date_field) values ('" & J & "-" & VBA.Format(I, "00") & "-01 00:00:00')"
DoCmd.RunSQL SQL_SET
Next I
Next J
End If
DoCmd.SetWarnings (True)
On Error Resume Next
strDoc = "qry_user_payments_paid_or_not_paid"
DoCmd.Close acQuery, strDoc
DoCmd.OpenQuery strDoc, acViewNormal
End Function
```
you can call this public function from button or form or debug window:
```
?FN_CRETAE_DATE_TABLE("2015-01-01","2015-10-01")
```
this will generate from jan to oct and check whether you received payments or not.
[Screen]:
[](https://i.stack.imgur.com/drqjQ.png)
|
Date continuity in a database (find missing date gaps)
|
[
"",
"sql",
"ms-access",
"vba",
"gaps-and-islands",
""
] |
I have the following problem:
Suppose there is a table `EMP` with employees data.
The columns that we are interested in are `DeptID` and `ManagerID`.
I have a list of (say) 50 records in the following format -
deptId1, managerId1, and so on
What is the most efficient way of doing this in sql?
For example, if I was only searching by `ManagerID`, I could have done the following:
```
Select * from EMP where ManagerID in (id1, id2, ..., id50);
```
However in case of pair, I am forced to execute the following query 50 times:
```
select * from EMP where ManagerID = Mid1
AND DeptID = deptid;
```
EDIT :
One approach I came up with: if I change the report to carry the sum DeptID+ManagerID,
I can do something like this:
```
WITH TEMPTABLE AS (
  select * from EMP where
  (DeptID + ManagerID) IN ( sumID1 , sumID2..... )
)
select * from TEMPTABLE where ManagerID = Mid1
AND DeptID = deptid;
```
Do you think this can be faster?
More importantly, can we somehow use an index on (DeptID + ManagerID)?
|
You can use `= ANY`:
**[SqlFiddleDemo](http://sqlfiddle.com/#!4/bbcd1/2/0)**
```
SELECT *
FROM EMP
WHERE (ManagerID, DeptId) = ANY ((1,1), (2,4));
```
or `IN` + `subquery`:
```
SELECT *
FROM EMP
WHERE (ManagerID, DeptId) IN (SELECT 1 AS col1, 1 AS col2 FROM dual
UNION ALL SELECT 2,4 FROM dual)
```
or `CTE/subquery` + `JOIN`:
```
WITH cte(ManagerId, DeptID) AS
(
SELECT 1 AS ManagerId, 2 AS DeptID FROM dual
UNION ALL SELECT 2, 4 FROM dual
)
SELECT *
FROM EMP e
JOIN cte c
ON e.ManagerId = c.ManagerId
AND e.DeptId = c.DeptId;
```
or simple `IN` as in comment:
**[SqlFiddleDemo\_IN](http://sqlfiddle.com/#!4/bbcd1/3/0)**
```
SELECT *
FROM EMP
WHERE (ManagerID, DeptId) IN ((1,1), (2,4));
```
**EDIT:**
Combining the values as you proposed (`(DeptID + ManagerID) IN ( sumID1 , sumID2.....)`) is not a good idea: for example, (1+5) = (3+3), so you will get inaccurate results.
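To see the collision concretely, suppose you search for the pair (DeptID=1, ManagerID=5) via its sum, 6:
```
SELECT *
FROM EMP
WHERE (DeptID + ManagerID) IN (6);
-- unintentionally also matches (DeptID=3, ManagerID=3), (DeptID=2, ManagerID=4), etc.
```
Any pair of columns whose values add up to the same number will match, which is why the sum cannot stand in for the pair.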
|
```
select
*
from
emp
where
(managerid, departmentid) in (
(1, 2),
(2, 3)
)
```
|
Efficient sql query to find a composite key in a table?
|
[
"",
"sql",
"oracle11g",
""
] |
I have a table that probably resulted from a listagg, similar to this:
```
# select * from s;
s
-----------
a,c,b,d,a
b,e,c,d,f
(2 rows)
```
How can I change it into this set of rows:
```
a
c
b
d
a
b
e
c
d
f
```
|
In Redshift, you can join against a table of numbers and use that as the split index:
```
--with recursive Numbers as (
-- select 1 as i
-- union all
-- select i + 1 as i from Numbers where i <= 5
--)
with Numbers(i) as (
select 1 union
select 2 union
select 3 union
select 4 union
select 5
)
select split_part(s,',', i) from Numbers, s ORDER by s,i;
```
EDIT: Redshift doesn't seem to support recursive subqueries; only Postgres does. :(
|
[SQL Fiddle](http://sqlfiddle.com/#!4/b61675/5)
**Oracle 11g R2 Schema Setup**:
```
create table s(
col varchar2(20) );
insert into s values('a,c,b,d,a');
insert into s values('b,e,c,d,f');
```
**Query 1**:
```
SELECT REGEXP_SUBSTR(t1.col, '([^,])+', 1, t2.COLUMN_VALUE )
FROM s t1 CROSS JOIN
TABLE
(
CAST
(
MULTISET
(
SELECT LEVEL
FROM DUAL
CONNECT BY LEVEL <= REGEXP_COUNT(t1.col, '([^,])+')
)
AS SYS.odciNumberList
)
) t2
```
**[Results](http://sqlfiddle.com/#!4/b61675/5/0)**:
```
| REGEXP_SUBSTR(T1.COL,'([^,])+',1,T2.COLUMN_VALUE) |
|---------------------------------------------------|
| a |
| c |
| b |
| d |
| a |
| b |
| e |
| c |
| d |
| f |
```
|
Undo a LISTAGG in redshift
|
[
"",
"sql",
"amazon-redshift",
""
] |
I have `DS.UnitPrice` and `Ord.Qty` that I need to multiply. Then I need to sum those products wherever there are multiple `LineTotal`s.
From there, take the subtotal and multiply it by `1.1` (10% tax) to get the order's total.
I had issues with `SubTotal`, but got it to work. But `TotalPrice` still gives me `0`, no matter what I do.
This is my query:
```
SELECT *,
SUM(DS.UnitPrice*Ord.Qty) AS LineTotal,
SUM(LineTotal) AS SubTotal,
SUM(SubTotal*1.1) AS TotalPrice
FROM (Orders Ord, Donuts DS, Customers Cust)
LEFT JOIN Customers ON (Cust.CustID = Ord.OrderID)
LEFT JOIN Donuts ON (DS.DonutID = Ord.DonutID)
```
|
The problem is that you are referring to a column alias `subtotal` in the definition of total. And, your `JOIN` conditions are all wrong.
If you want the totals per order:
```
SELECT Ord.OrderId,
SUM(DS.UnitPrice * Ord.Qty) AS SubTotal,
SUM(DS.UnitPrice * Ord.Qty * 1.1) AS TotalPrice
FROM Orders Ord JOIN
Customers Cust
ON Cust.CustID = Ord.OrderID JOIN
Donuts DS
ON DS.DonutID = Ord.DonutID
GROUP BY Ord.OrderId;
```
If you want the totals for all orders:
```
SELECT SUM(DS.UnitPrice * Ord.Qty) AS SubTotal,
SUM(DS.UnitPrice * Ord.Qty * 1.1) AS TotalPrice
FROM Orders Ord JOIN
Customers Cust
ON Cust.CustID = Ord.OrderID JOIN
Donuts DS
ON DS.DonutID = Ord.DonutID;
```
|
You can do this:
```
SELECT *, SUM(DS.UnitPrice*Ord.Qty) as LineTotal ,SUM(LineTotal) as SubTotal,SUM(SubTotal*1.1) as TotalPrice
FROM (Orders Ord, Donuts DS, Customers Cust)
LEFT JOIN Customers ON (Cust.CustID = Ord.OrderID)
LEFT JOIN Donuts ON (DS.DonutID = Ord.DonutID)
```
|
Multiple SUM's in one SELECT
|
[
"",
"mysql",
"sql",
""
] |
I have a `SQL` table like this
```
col1 col2 col3
1 0 1
1 1 1
0 1 1
1 0 0
0 0 0
```
I am expecting output like this:
```
col1 col2 col3 NewCol
1 0 1 SL,PL
1 1 1 SL,EL,PL
0 1 1 EL,PL
1 0 0 SL
0 0 0 NULL
```
The condition is: if col1 > 0 then 'SL' else '', if col2 > 0 then 'EL' else '', and if col3 > 0 then 'PL' else ''.
I tried to use [Concatenate many rows into a single text string?](https://stackoverflow.com/questions/194852/concatenate-many-rows-into-a-single-text-string) but wasn't able to achieve the desired result.
I have tried the following, but it fails with the message:
> Invalid length parameter passed to the LEFT or SUBSTRING function.
```
WITH CTE AS (
SELECT col1, col2, col3,
CASE WHEN col1 > 0 THEN 'SL,' ELSE '' END +
CASE WHEN col2 > 0 THEN 'EL,' ELSE '' END +
CASE WHEN col3 > 0 THEN 'PL,' ELSE '' END AS NewCol
FROM Employee
)
SELECT col1, col2, col3,
substring(NewCol, 1, len(NewCol) - 1) AS NewCol
FROM CTE
```
But my last condition is still not handled: if all columns are 0, I have to show `NULL` as in the desired output.
See the attached fiddle: <http://sqlfiddle.com/#!6/2bd6a/1>
|
The issue with your code example is that when all columns are 0 then the length is 0 and the substring function will throw an error.
Use `nullif` to fix it: `substring(NewCol, 1, len(nullif(NewCol,'')) - 1) AS NewCol`
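Applied to the CTE from the question, the corrected query becomes:
```
WITH CTE AS (
    SELECT col1, col2, col3,
           CASE WHEN col1 > 0 THEN 'SL,' ELSE '' END +
           CASE WHEN col2 > 0 THEN 'EL,' ELSE '' END +
           CASE WHEN col3 > 0 THEN 'PL,' ELSE '' END AS NewCol
    FROM Employee
)
SELECT col1, col2, col3,
       SUBSTRING(NewCol, 1, LEN(NULLIF(NewCol, '')) - 1) AS NewCol
FROM CTE;
```
When `NewCol` is empty, `NULLIF` yields `NULL`, the length argument becomes `NULL`, and `SUBSTRING` returns `NULL` instead of raising the error.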
|
You could also change to appending the delimiter on the front and use [`STUFF`](https://msdn.microsoft.com/en-us/library/ms188043.aspx).
`STUFF('',1,1,'')` will return `NULL` rather than an error.
```
WITH
Employee(col1, col2, col3) AS (
SELECT 1,1,1 UNION ALL
SELECT 0,0,0
),
CTE AS (
SELECT col1, col2, col3,
CASE WHEN col1 > 0 THEN ',SL' ELSE '' END +
CASE WHEN col2 > 0 THEN ',EL' ELSE '' END +
CASE WHEN col3 > 0 THEN ',PL' ELSE '' END AS NewCol
FROM Employee
)
SELECT col1,
col2,
col3,
STUFF(NewCol, 1, 1, '')
FROM CTE
```
Returns
```
+------+------+------+------------------+
| col1 | col2 | col3 | (No column name) |
+------+------+------+------------------+
| 1 | 1 | 1 | SL,EL,PL |
| 0 | 0 | 0 | NULL |
+------+------+------+------------------+
```
|
Concatenate SQL table rows with conditions in single text
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
Will the results of these queries be joinable without multiplying against each other?
```
Select StudID, StudName, StudYear from tbl_students where StudYear='Third Year';
Select StudID, StudName, StudYear from tbl_students where StudYear='Second Year';
Select StudID, StudName, StudYear from tbl_students where StudYear='First Year';
```
Or is it possible for the "results" of these queries to be joined into 9 columns: the first 3 columns for Third Year, the next 3 for Second Year, and the last 3 for First Year?
Thank you for your kind answer.
What i have in mind as a result is something like this
```
StudentID | StudentName | StudentYear
-----------------------------------------
1 | foo1 | 1
-----------------------------------------
2 | foo2 | 1
-----------------------------------------
3 | foo3 | 2
-----------------------------------------
4 | foo4 | 2
-----------------------------------------
5 | foo5 | 2
-----------------------------------------
6 | foo6 | 3
-----------------------------------------
```
```
StudentID-1 | StudentName-1 | StudentYear-1 |StudentID-2 | StudentName-2 | StudentYear-2 | StudentID-3 | StudentName-3 | StudentYear-3
----------------------------------------------------------------------------------------------------------------------------------------------------
1 | foo1 | 1 | 3 | foo3 | 2 |6 | foo6 | 3
----------------------------------------------------------------------------------------------------------------------------------------------------
2 | foo2 | 1 | 4 | foo4 | 2 | | |
----------------------------------------------------------------------------------------------------------------------------------------------------
| | | 5 | foo5 | 2 | | |
----------------------------------------------------------------------------------------------------------------------------------------------------
```
|
To achieve your requirement you need some kind of id to join on, so row numbers are used; otherwise you would get one-to-many-to-many results.
This works in SQL Server, so give it a try in MySQL.
```
CREATE TABLE #STUDENT(ID INT, NAME VARCHAR(10), STUDENT_YEAR INT);
INSERT INTO #STUDENT VALUES(1,'FOO1',1);
INSERT INTO #STUDENT VALUES(2,'FOO2',1);
INSERT INTO #STUDENT VALUES(3,'FOO3',2);
INSERT INTO #STUDENT VALUES(4,'FOO4',2);
INSERT INTO #STUDENT VALUES(5,'FOO5',2);
INSERT INTO #STUDENT VALUES(6,'FOO6',3);
SELECT * FROM (
SELECT
ROW_NUMBER() OVER (ORDER BY S1.ID ASC) RN,
S1.ID AS [ID_1], S1.NAME AS [NAME_1],S1.STUDENT_YEAR AS [YEAR_1]
FROM #STUDENT S1
where s1.STUDENT_YEAR=1
) T1
FULL JOIN (
SELECT
ROW_NUMBER() OVER (ORDER BY S2.ID ASC) RN,
S2.ID AS [ID_2], S2.NAME AS [NAME_2],S2.STUDENT_YEAR AS [YEAR_2]
FROM #STUDENT S2
where s2.STUDENT_YEAR=2
) T2 ON T1.RN = T2.RN
FULL JOIN (
SELECT
ROW_NUMBER() OVER (ORDER BY S3.ID ASC) RN,
S3.ID AS [ID_3], S3.NAME AS [NAME_3],S3.STUDENT_YEAR AS [YEAR_3]
FROM #STUDENT S3
where s3.STUDENT_YEAR=3
) T3 ON T1.RN = T3.RN
```
**Note** : `#Student` indicates that it is a temporary table.
[](https://i.stack.imgur.com/PBYoi.png)
|
OK, I changed my answer to use subqueries and treat each subquery as a table with its own aliased columns. This does not produce the exact results you wanted; it will produce NULLs in the other 6 columns that are not filled in. That being said, I believe this is about as close as you are going to get to what you want in a single query.
```
SELECT StudentFirst.*, StudentSecond.*, StudentThird.*
FROM (((tbl_students ts
left join (SELECT StudID AS StudIDFirst, StudName AS StudNameFirst, StudYear AS StudYearFirst FROM tbl_students WHERE StudYear = 'First Year') AS StudentFirst on (ts.StudID = StudentFirst.StudIDFirst))
left join (SELECT StudID AS StudIDSecond, StudName AS StudNameSecond, StudYear AS StudYearSecond FROM tbl_students WHERE StudYear = 'Second Year') AS StudentSecond on (ts.StudID = StudentSecond.StudIDSecond))
left join (SELECT StudID AS StudIDThird, StudName AS StudNameThird, StudYear AS StudYearThird FROM tbl_students WHERE StudYear ='Third Year') AS StudentThird on (ts.StudID = StudentThird.StudIDThird))
```
Here is the results of the query
[](https://i.stack.imgur.com/Ubflg.png)
I will say I do not recommend this approach. MySQL has gotten a lot better at dealing with subqueries, but anything before 5.5 will have horrible performance, and even 5.5 won't be great once you reach a certain volume of data. Also, using your MySQL statements to format the results isn't a very good idea either. If you need the results to look a certain way, I would retrieve the record set and then format it on the front end of the application. I hope this helps.
|
Will it be possible to join table with itself
|
[
"",
"mysql",
"sql",
""
] |
I am using PostgreSQL to write an SQL query to select from the same sql results twice. For example,
```
SELECT a.id, b.id
FROM
(SELECT * FROM tableA) as a,
(SELECT * FROM tableA) as b
WHERE a.id = b.id+1
```
As you can see the following SQL query has been executed twice:
```
SELECT * FROM tableA
```
Is it possible to store the SQL result temporarily instead of running the same query twice? The query can be complicated.
|
Common Table Expression:
```
with cte as
(SELECT * FROM tableA)
select * from cte where...
```
So you don't have to write the same thing twice. I don't know whether it's executed once or several times, though...
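Applied to the self-join from the question, the subquery is written once and referenced twice:
```
WITH cte AS (
    SELECT * FROM tableA
)
SELECT a.id, b.id
FROM cte AS a
JOIN cte AS b ON a.id = b.id + 1;
```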
|
A CTE solves your problem. But it is likely that the `lead()` function would as well . . . and with better performance:
```
SELECT a.*
FROM (SELECT a.id, lead(a.id) over (order by a.id) as b_id
FROM tableA a
) a
WHERE id = b_id;
```
Or, for something that will work with duplicates:
```
SELECT a.id, a.id + 1
FROM tableA a
WHERE EXISTS (SELECT 1 FROM tableA a2 WHERE a2.id = a.id + 1);
```
A self-join seems like overkill for whatever you really want to accomplish. It might be the right solution, but there are alternatives.
|
Postgresql: How to select from the sql results twice
|
[
"",
"sql",
"postgresql",
"query-optimization",
""
] |
Hi guys, I'm having problems with an SQL query. This is my scenario:
I have a students table and a table that stores the dates when students enter or leave the school. I want to get, for every student, the date nearest to a given date, but I can't find a way to do this.
```
Students Data:
|idstudent|name |
------------------
| 1 | John |
| 2 | Bob |
------------------
Dates Data:
|id|idstudent| date |type|
------------------------------
|1 | 1 |20-01-2015| 1 |
|2 | 2 |20-01-2015| 1 |
|3 | 2 |15-08-2015| 2 |
|4 | 1 |31-08-2015| 2 |
------------------------------
Desired Date = 01-08-2015
|idstudent| name | date |type|
-------------------------------------
| 1 | John | 31-08-2015 | 2 |
| 2 | Bob | 15-08-2015 | 2 |
```
Students Table:
```
CREATE TABLE students
(
idstudent serial NOT NULL,
name character varying(200),
CONSTRAINT idstudent PRIMARY KEY (idstudent)
)
WITH (
OIDS=FALSE
);
ALTER TABLE students
OWNER TO postgres;
```
Dates Table:
```
CREATE TABLE students_dates
(
idstudent_date serial NOT NULL,
idstudent bigint,
date_ date,
type smallint,
CONSTRAINT idstudent_date PRIMARY KEY (idstudent_date)
)
WITH (
OIDS=FALSE
);
ALTER TABLE students_dates
OWNER TO postgres;
```
Can anyone help me?
Thank you so much.
|
Using the proprietary `distinct on ()` is usually faster in Postgres than using window functions.
Building on Gordon's idea with the abs():
```
select distinct on (s.idstudent) s.*, sd.date_, sd.type
from students s
join students_dates sd on s.idstudent = sd.idstudent
order by s.idstudent, abs(sd.date_ - date '2015-09-26');
```
This can also be solved using a Window function:
```
select idstudent, name, date_, type
from (
select s.idstudent, s.name, sd.date_, sd.type,
row_number() over (partition by s.idstudent order by sd.date_ - date '2015-09-26' desc) as rn
from students s
join students_dates sd on s.idstudent = sd.idstudent
) t
where rn = 1;
```
SQLFiddle: <http://sqlfiddle.com/#!15/25fef/4>
|
Use `DATEDIFF` to get difference between dates, get `ABS` value. Then sort by `ABS(DATEDIFF())` and get top record.
|
Get closest date to given date for a group of records
|
[
"",
"sql",
"postgresql",
"date",
""
] |
Recently I have started learning Oracle SQL. I know that the `DELETE` command can delete particular row(s). Is it possible to delete all the data in a particular column of a table using only the `DELETE` command? (I know that the same effect can be achieved by using `UPDATE` to set the entire column to NULL.)
|
**[DELETE](http://docs.oracle.com/cd/B12037_01/appdev.101/b10807/13_elems014.htm)**
> The DELETE statement **removes entire rows** of data from a specified
> table or view
If you want to "remove" data from a particular column, update it:
```
UPDATE table_name
SET your_column_name = NULL;
```
or if column is `NOT NULL`
```
UPDATE table_name
SET your_column_name = <value_indicating_removed_data>;
```
You can also remove entire column using DDL:
```
ALTER TABLE table_name DROP COLUMN column_name;
```
|
In SQL, `delete` deletes rows not columns.
You have three options in Oracle:
* Set all the values to `NULL` using update.
* Remove the column from the table.
* Set the column to unused.
The last two use `alter table`:
```
alter table t drop column col;
alter table t set unused (col);
```
|
Deleting entire data from a particular column in oracle-sql
|
[
"",
"sql",
"oracle",
"sql-delete",
""
] |
Is there an easier/more efficient way of doing the following WHERE conditions:
```
WHERE (Field LIKE %entry% OR Field2 LIKE %entry%)
AND (Field LIKE %entry2% OR Field2 LIKE %entry2%)
```
|
Please refer to the following [link](https://dev.mysql.com/doc/refman/5.1/en/fulltext-boolean.html) as well:
```
SELECT *
FROM mytable
WHERE MATCH(filed, field2) AGAINST ('entry' IN NATURAL LANGUAGE MODE)
```
|
```
AND (Field LIKE %entry2% OR Field2 LIKE %entry2%)
```
this part is a part of
```
(Field LIKE %entry% OR Field2 LIKE %entry%)
```
You can omit it.
|
SQL: Where Field OR Field2 LIKE %entry%
|
[
"",
"mysql",
"sql",
""
] |
This question is regarding PLSQL - for improving the efficiency of the code and coding standards.
Any help, pointers, references or suggestions are highly appreciated.
**Question:**
I have a plsql procedure with an INPUT parameter `i_flag` which is of type `BOOLEAN`.
Based upon the value of this `i_flag` (which can be either true or false) I have to execute a SQL query. If the value is `TRUE` then SQL1 (query 1.1) is executed; if the value is `FALSE`, SQL2 (query 1.2) is executed.
SQL2 is the same as SQL1 except for an additional where clause.
SQL1 (1.1)
```
select a.user_id, a.user_name, a.dept_id, b.country from user a , contact b
where a.user_id = b.user_id;
```
SQL2 (1.2)
```
select a.user_id, a.user_name, a.dept_id, b.country from user a , contact b
where a.user_id = b.user_id
and a.user_status is not null;
```
Instead of writing IF-ELSE in PL/SQL, is it possible to express this as a single SQL query?
|
You can create the same behavior in a single query using the logical `or` operator:
```
select a.user_id, a.user_name, a.dept_id, b.country
from user a , contact b
where a.user_id = b.user_id AND (i_flag = TRUE OR a.user_status IS NOT NULL)
```
Note, by the way, that implicit joins (having two tables in the `from` clause) is a deprecated syntax, and it's recommended to switch to the modern, explicit, syntax:
```
SELECT a.user_id, a.user_name, a.dept_id, b.country
FROM user a
JOIN contact b ON a.user_id = b.user_id
where i_flag = TRUE OR a.user_status IS NOT NULL
```
|
First of all, if you want to use a boolean param in a SQL query, you have to substitute a sql-compatible type such as NUMBER, i.e. use 0 or 1 instead of FALSE or TRUE.
Second, while Mureinik's answer based on OR will work, Oracle will often give better performance if you use an alternative. One alternative is like this:
```
SELECT a.user_id, a.user_name, a.dept_id, b.country
FROM user a
JOIN contact b ON a.user_id = b.user_id
WHERE 1 = CASE WHEN i_flag = 1 THEN 1
WHEN a.user_status IS NOT NULL THEN 1
ELSE 0 END
```
It ain't pretty but in larger queries it sometimes helps significantly.
|
PLSQL - Improving code efficiency
|
[
"",
"sql",
"oracle",
"select",
"plsql",
"plsqldeveloper",
""
] |
I have 2 tables
**1. User Master**
user\_id, user\_full\_name, user\_dob...so on
**2. Login Details**
login\_id, login\_user\_id, login\_time, login\_date, logout\_time
***Problem***
The 2nd table has n rows per User Master id.
I need to make a join, but it should show only the last login data for each user.
Example:
user\_full\_name, user\_login, user\_logout so on...
|
If you want the result for a single user, you could use a simple `INNER JOIN` combined with an `ORDER BY` and `TOP 1`:
```
SELECT TOP 1 user_full_name, login_time, login_date, logout_time
FROM Users INNER JOIN Logins ON
Users.user_id = Logins.user_id
WHERE
Users.user_id = @user_id
ORDER BY login_date DESC, login_time DESC
```
(See [SQLFiddle](http://sqlfiddle.com/#!3/7c76c/1))
If you want the result for all users, you could use `CROSS APPLY`:
```
SELECT user_full_name, l.*
FROM Users u CROSS APPLY (
SELECT TOP 1 login_time, login_date, logout_time
FROM Logins
WHERE
u.user_id = Logins.user_id
ORDER BY login_date DESC, login_time DESC
) l
```
(See [SQLFiddle](http://sqlfiddle.com/#!3/7c76c/5))
|
A common solution for this problem is to use the `row_number` window function and filter for rows with row number 1 in each partition (by user, ordered by date/time):
```
WITH UserDetails AS (
SELECT
*
, ROW_NUMBER() OVER (PARTITION BY login_user_id
ORDER BY login_date DESC, login_time DESC) AS RN
FROM LoginDetails
)
SELECT *
FROM UserMaster M
JOIN UserDetails D ON M.user_id = D.login_user_id
WHERE D.RN = 1;
```
|
How to get latest DETAIL entry against the MASTER entry?
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"greatest-n-per-group",
""
] |
What I am trying to do is set the beginning of the time interval when it is not correctly passed into the stored procedure. However, it somehow does not work very well.
This is my code:
```
CREATE PROCEDURE intervals_generator (IN start DATETIME, IN ending DATETIME, IN intervalis INT)
BEGIN
-- temp values
DECLARE next_date DATETIME;
-- result temp values
DECLARE start_temp DATETIME;
DECLARE ending_temp DATETIME;
-- date formatting variables
DECLARE year CHAR(20);
DECLARE month CHAR(20);
DECLARE day CHAR(20);
DECLARE new_start CHAR(20);
-- SET starting date if is incorrect DATE_FORMAT(NOW(), '%d %m %Y')
SET year := DATE_FORMAT(start, '%Y');
SET month := DATE_FORMAT(start, '%c');
SET day := DATE_FORMAT(start, '%e');
IF intervalis = '1_day' THEN
BEGIN
SET new_start := year+' '+month+' '+day+' 00:00:00';
END;
ELSEIF intervalis = '1_month' THEN
BEGIN
SET new_start := year+' '+month+' 1 00:00:00';
END;
ELSEIF intervalis = '1_quarter' THEN
BEGIN
IF MONTH(start) IN (2, 3) THEN
SET month := 1;
ELSEIF MONTH(start) IN (5, 6) THEN
SET month := 4;
ELSEIF MONTH(start) IN (8, 9) THEN
SET month := 7;
ELSEIF MONTH(start) IN (11, 12) THEN
SET month := 10;
END IF;
SET new_start := year+' '+month+' 1 00:00:00';
END;
ELSEIF intervalis = '1_year' THEN
BEGIN
SET new_start := year+' 1 1 00:00:00';
END;
END IF;
SET start := STR_TO_DATE(new_start, '%Y %c %e %h:%i:%s');
SELECT year, month, day, start;
DROP TEMPORARY TABLE IF EXISTS intervals_result;
END//
DELIMITER ;
```
I have tried many different formatting settings and functions, but the output is still wrong, like this:
```
mysql> CALL intervals_generator('2013-02-01 00:00:00', '2015-12-31 00:00:00', '1_year');
+------+-------+------+---------------------+
| year | month | day | start |
+------+-------+------+---------------------+
| 2013 | 2 | 1 | 2016-00-00 00:00:00 |
+------+-------+------+---------------------+
1 row in set (0.02 sec)
Query OK, 0 rows affected, 1 warning (0.02 sec)
```
I really don't understand why the output is "2016-00-00" instead of "2013-01-01". The year, month and day variables are defined as CHAR, the functions that extract them from the datetime should return CHAR, and STR_TO_DATE should also accept a CHAR, so it is a mystery to me.
If anyone has an idea, please give me a hint.
|
If you work in [`DATE`s](https://dev.mysql.com/doc/refman/5.6/en/datetime.html) instead of strings, you can make use of MySQL's date [functions and operators](https://dev.mysql.com/doc/refman/5.6/en/date-and-time-functions.html) and make everything a whole lot simpler... but not too simple because this is MySQL.
The problem with MySQL and dates is its date functionality is a real mish-mash that sometimes works with `DATE`s, sometimes with strings, sometimes with integers, and is missing basic functionality. It lacks a simple function to set a piece of a date; there's no function to change the `MONTH` part of a `DATE` to February. There's not even a good way to make a date from the year, month and day, closest thing you get is [`MAKEDATE()`](https://dev.mysql.com/doc/refman/5.6/en/date-and-time-functions.html#function_makedate) which takes a year and the day of the year (*?!*). [Fortunately, `DATE`s in MySQL respond to math operations](https://stackoverflow.com/a/20746923/14660) and it's better than messing with strings.
If you have, for example, `2013-02-12` and want `2013-02-01` you have to first make a new date with just the year using [`MAKEDATE`](https://dev.mysql.com/doc/refman/5.6/en/date-and-time-functions.html#function_makedate), then add the month part.
```
-- 2013-01-01
SET new_date := MAKEDATE(YEAR(old_date), 1);
-- 2013-02-01
-- Since MONTH returns from 1 to 12, you need to take away one.
SET new_date := new_date + (INTERVAL MONTH(old_date) - 1) MONTH;
```
After chopping out all the unused variables, changing to date math, and using the [CASE statement](https://dev.mysql.com/doc/refman/5.6/en/case.html) instead of a big IF/ELSE chain, we get this:
```
CREATE PROCEDURE intervals_generator (IN start_date DATE, IN intervals TEXT)
BEGIN
DECLARE new_start DATE;
CASE intervals
WHEN '1_day' THEN
-- Nothing to do, DATE has already truncated the time portion.
SET new_start := start_date;
WHEN '1_month' THEN
-- Set to the year and month of the start date
SET new_start := MAKEDATE(YEAR(start_date), 1) + INTERVAL (MONTH(start_date) - 1) MONTH;
WHEN '1_quarter' THEN
BEGIN
-- Set to the year and month of the start date
SET new_start := MAKEDATE(YEAR(start_date), 1) + INTERVAL (MONTH(start_date) - 1) MONTH;
-- Subtract the necessary months for the beginning of the quarter
SET new_start := new_start - INTERVAL (MONTH(new_start) - 1) % 3 MONTH;
END;
WHEN '1_year' THEN
-- Set the date to the first day of the year
SET new_start := MAKEDATE(YEAR(start_date), 1);
END CASE;
SELECT new_start;
END//
```
[Try it out.](http://sqlfiddle.com/#!9/d3ad0/1)
|
This statement is not doing what you expect:
```
SET new_start := year+' '+month+' '+day+' 00:00:00';
```
In MySQL, the `+` operator does addition. That's it, not concatenation.
I think you intend:
```
SET new_start := concat(year, ' ', month, ' ', day, ' 00:00:00');
```
I haven't looked at the rest of the logic to see if it makes sense, but this is one glaring problem.
|
Changing datetime value in MySQL
|
[
"",
"mysql",
"sql",
"datetime",
""
] |
I have a table with a member's name, address, etc. and a time stamp of the last time the record was updated. I have a second table that holds updates to the member record, a holding table, until changes are approved by staff.
I have a query that returns data from the member table. I now need to check the updates table, and if the member's record in the updates table has a more recent time stamp, return that record instead of the record in the member table.
I tried a few things, such as a `UNION` with `TOP 1`, but it's not quite right. I could make a complex `CASE` statement, but will that perform well?
It sounds simple: get the most recent record from table A and the most recent from table B, and return whichever record is newer.
```
SELECT name, address, city, state, zipcode, time_stamp
FROM Member
WHERE ID = 123
SELECT name, address, city, state, zipcode, time_stamp
FROM MemberUpdates
WHERE ID = 123
```
**EDIT:**
OK, with the help so far I was able to get the results I expected. Then I went to add the extra WHERE clauses and broke it. I tried several different ways, including a CTE, and could not quite get it right. Here is a query that works and returns the expected results; however, notice that I have to pass name_last/birth_year/memNum twice. Is there a better way?
```
SELECT TOP 1 m.abn,
m.aliases,
m.birth_year,
m.user_stamp,
q.updatePending,
q.name_first,
q.name_last,
q.company,
q.address1,
q.mailing_address,
q.city,
q.state,
q.zipcode,
q.email_address
FROM (
SELECT TOP 1
1 AS updatePending,
a.entity_number,
a.name_first,
a.name_last,
NULLIF(LTRIM(RTRIM(
LTRIM(RTRIM(ISNULL(a.company, ''))) +
LTRIM(RTRIM(ISNULL(a.firm_name, ''))))),'') AS company,
a.address1,
a.mailing_address,
a.city,
a.state,
a.zip_code AS zipcode,
a.internet_address AS email_address,
a.time_stamp
FROM statebar.dbo.STAGING_Address_Change_Request a
INNER JOIN Member m ON m.entity_number = a.entity_number
WHERE a.entity_number = (
SELECT m.entity_number
FROM Member m
INNER JOIN Named_Entity ne ON (ne.entity_number = m.entity_number)
WHERE ne.name_last = 'jones'
AND m.birth_year = '1975'
AND m.memNum = '12345'
)
AND a.time_stamp > m.time_stamp
UNION ALL
SELECT TOP 1
0 AS updatePending,
ne.entity_number,
ne.name_first,
ne.name_last,
NULLIF(LTRIM(RTRIM(
LTRIM(RTRIM(ISNULL(ne.company, ''))) +
LTRIM(RTRIM(ISNULL(ne.firm_name, ''))))),'') AS company,
ne.address1,
ne.mailing_address,
ne.city,
ne.state,
ne.zip_code,
ne.internet_address AS email_address,
m.time_stamp
FROM Member m
INNER JOIN Named_Entity ne ON (ne.entity_number = m.entity_number)
LEFT JOIN statebar.dbo.STAGING_Address_Change_Request a ON a.entity_number = m.entity_number
WHERE ne.entity_number = (
SELECT m.entity_number
FROM Member m
INNER JOIN Named_Entity ne ON (ne.entity_number = m.entity_number)
WHERE ne.name_last = 'jones'
AND m.birth_year = '1975'
AND m.memNum = '12345'
)
AND m.time_stamp > a.time_stamp
ORDER BY updatePending DESC, a.time_stamp DESC) q
INNER JOIN Member m on m.entity_number = q.entity_number
ORDER BY q.time_stamp DESC
```
|
Here is a simple query that will help you return the most recent record:
```
--Only selects the top row with the most recent record
SELECT TOP 1 * FROM
(
--Select rows with the same ID
SELECT name, address, city, state, zipcode, time_stamp
FROM Member
WHERE ID = 123
UNION ALL
SELECT name, address, city, state, zipcode, time_stamp
FROM MemberUpdates
WHERE ID = 123
) t
ORDER BY t.time_stamp DESC --Order the table by time_stamp to get the most recent record
-- DESC is used because datetime is ordered by oldest first in ascending order.
```
|
The union approach is a good idea, but you'd want to use the `row_number()` window function, not just `TOP`. Also, `union all` can be used instead of `union`: you don't care about duplicates between `A` and `B`, and `union all` will simply perform better:
```
SELECT name, address, city, state, zipcode, time_stamp
FROM (SELECT name, address, city, state, zipcode, time_stamp,
ROW_NUMBER() OVER (PARTITION BY name ORDER BY time_stamp DESC) rn
FROM (SELECT name, address, city, state, zipcode, time_stamp
FROM Member
UNION ALL
SELECT name, address, city, state, zipcode, time_stamp
FROM MemberUpdates) t
) q
WHERE rn = 1
```
|
Query to return values from table B when newer than record in Table A
|
[
"",
"sql",
"t-sql",
"select",
"sql-server-2005",
""
] |
Using SQL Server 2012
I have seen a few threads about this topic, but I can't find one that involves multiple joins in the query. I can't create a VIEW on this database, so the joins are needed.
The Query
```
SELECT
p.Price
,s.Type
,s.Symbol
, MAX(d.Date) Maxed
FROM AdventDW.dbo.FactPrices p
INNER JOIN dbo.DimSecurityMaster s
ON s.SecurityID = p.SecurityID
INNER JOIN dbo.DimDateTime d
ON
p.DateTimeKey = d.DateTimeKey
GROUP BY p.Price ,
s.Type ,
s.Symbol
ORDER BY s.Symbol
```
The query works but does not produce distinct results. I am using ORDER BY to validate the results; it is not required once I get it working. The result set looks like this:
```
Price Type Symbol Maxed
10.57 bfus *bbkd 3/31/1989
10.77 bfus *bbkd 2/28/1990
100.74049 cbus 001397AA6 8/2/2005
100.8161 cbus 001397AA6 7/21/2005
```
The result set I want is
```
Price Type Symbol Maxed
10.77 bfus *bbkd 2/28/1990
100.74049 cbus 001397AA6 8/2/2005
```
Here are a few other Stack Overflow threads I tried but couldn't get to work with my specific query:
[How can I SELECT rows with MAX(Column value), DISTINCT by another column in SQL?](https://stackoverflow.com/questions/612231/how-can-i-select-rows-with-maxcolumn-value-distinct-by-another-column-in-sql)
[SQL Selecting distinct rows from multiple columns based on max value in one column](https://stackoverflow.com/questions/11951135/sql-selecting-distinct-rows-from-multiple-columns-based-on-max-value-in-one-colu)
|
If you want data for the maximum date, use `row_number()` rather than `group by`:
```
SELECT ts.*
FROM (SELECT p.Price, s.Type, s.Symbol, d.Date,
ROW_NUMBER() OVER (PARTITION BY s.Type, s.Symbol
ORDER BY d.Date DESC
) as seqnum
FROM AdventDW.dbo.FactPrices p INNER JOIN
dbo.DimSecurityMaster s
ON s.SecurityID = p.SecurityID INNER JOIN
dbo.DimDateTime d
ON p.DateTimeKey = d.DateTimeKey
) ts
WHERE seqnum = 1
ORDER BY s.Symbol;
```
|
You should use a derived table since you really only want to group the `DateTimeKey` table to get the `MAX` date.
```
SELECT p.Price ,
s.Type ,
s.Symbol ,
tmp.MaxDate
FROM AdventDW.dbo.FactPrices p
INNER JOIN dbo.DimSecurityMaster s ON s.SecurityID = p.SecurityID
INNER JOIN
( SELECT MAX(d.Date) AS MaxDate ,
d.DateTimeKey
FROM dbo.DimDateTime d
GROUP BY d.DateTimeKey ) tmp ON p.DateTimeKey = tmp.DateTimeKey
ORDER BY s.Symbol;
```
|
Get Distinct results of all columns based on MAX DATE of one
|
[
"",
"sql",
"sql-server",
"max",
""
] |
I am trying to display multiple authors per title in a single column. At the moment there are repeating rows, because some `Titles` have more than one `FirstName`. Is there a form of concatenation that can be used to resolve this and display all the authors in a single field, perhaps separated by commas?
This is my current query:
```
SELECT
Submission.Title, Researcher.FirstName, Submission.Type
FROM
Submission
INNER JOIN
((Faculty
INNER JOIN
School ON Faculty.FacultyID = School.[FacultyID])
INNER JOIN
(Researcher
INNER JOIN
ResearcherSubmission ON Researcher.ResearcherID = ResearcherSubmission.ResearcherID)
ON School.SchoolID = Researcher.SchoolID)
ON Submission.SubmissionID = ResearcherSubmission.SubmissionID
GROUP BY
Submission.Title, Researcher.FirstName, Submission.Type;
```
This the output it generates:
![current output](https://i.stack.imgur.com/rf5sP.png)
this is the output I am trying to generate:
```
Title FirstName Type
---------------------------------------------------------------------------
21st Century Business Matthew, Teshar Book Chapter
A Family Tree... Keshant, Lawrence Book Chapter
Benefits of BPM... Jafta Journal Article
Business Innovation Matthew, Morna, Teshar Book Chapter
```
|
You may include the concatenation logic within a `CROSS APPLY`:
```
SELECT
Submission.Title
, CA.FirstNames
, Submission.Type
FROM Submission
CROSS APPLY (
SELECT
STUFF((
SELECT /* DISTINCT ??? */
', ' + r.FirstName
FROM ResearcherSubmission rs
INNER JOIN Researcher r ON r.ResearcherID = rs.ResearcherID
WHERE Submission.SubmissionID = rs.SubmissionID
FOR XML PATH (''), TYPE
).value('.', 'NVARCHAR(MAX)'), 1, 2, ' ')
) AS CA (FirstNames)
GROUP BY
Submission.Title
, CA.FirstNames
, Submission.Type
;
```
NB: I'm not sure if you need to include DISTINCT in the subquery when concatenating the names, e.g. if there were 'Jane' (Smith) and 'Jane' (Jones), do you want the final list to be 'Jane' or 'Jane, Jane'?
|
You can do this in your application logic as well.
But if you want to do this with a query. You should be able do something like this:
```
SELECT DISTINCT
sm.Title,
STUFF(
(SELECT ', ' + r.FirstName
FROM ResearcherSubmission rs
INNER JOIN Researcher r ON r.ResearcherID = rs.ResearcherID
WHERE sm.SubmissionID = rs.SubmissionID
FOR XML PATH('')), 1, 2, '') AS FirstNames,
sm.Type
FROM Submission sm
```
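For illustration only: SQLite's `group_concat()` provides the same comma-separated aggregation that `STUFF(... FOR XML PATH(''))` achieves in SQL Server (SQL Server 2017+ also has `STRING_AGG`). The schema below is a simplified, invented stand-in for the original tables:

```python
import sqlite3

# Hedged sketch: table and column names are invented; group_concat's
# ordering of names within the list is not guaranteed by SQLite.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE submission (id INTEGER, title TEXT, type TEXT);
CREATE TABLE researcher_submission (rid INTEGER, sid INTEGER);
CREATE TABLE researcher (rid INTEGER, firstname TEXT);
INSERT INTO submission VALUES (1, 'Business Innovation', 'Book Chapter');
INSERT INTO researcher VALUES (10, 'Matthew'), (11, 'Morna'), (12, 'Teshar');
INSERT INTO researcher_submission VALUES (10, 1), (11, 1), (12, 1);
""")
row = conn.execute("""
    SELECT s.title, group_concat(r.firstname, ', ') AS firstnames, s.type
    FROM submission s
    JOIN researcher_submission rs ON rs.sid = s.id
    JOIN researcher r ON r.rid = rs.rid
    GROUP BY s.id, s.title, s.type
""").fetchone()
print(row)
```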
|
display more than one value using a SQL query
|
[
"",
"sql",
"sql-server",
""
] |
My stored procedure has 3 parameters I'm passing (`@nameP`, `@idP`, `@dateP`)
and is inserting the data into a `#myTemp` table.
Then I use
```
select *
into dbo.realTable
from #myTemp
```
and then I want to filter out any data which already exists (in `dbo.FinalTable`) based on `dateP` and `idP`:
```
insert into dbo.FinalTable
select * from dbo.realTable
where not exists (select * from dbo.FinalTable
where idP = @idP
and dateP = @dateP)
drop table dbo.realTable
```
The data is appended to the table when I execute my procedure. The problem is that if I put the same `idP` and execute it again for the same `dateP`, it shouldn't insert anything, but it does. I think the problem might be in the `insert into` part.
## EDIT:
This works perfectly if I remove `and dateP = @dateP` from the where clause.
PS: thank you all for your answers; even though in my case I simply had to do what I wrote above, I learned from your answers.
|
Basically you have used `not exists`, which is correct. The problem in your query is `@dateP`: if you pass the same `@idP` but a different `@dateP`, the same `@idP` gets inserted again, because the check validates against both `@idP` and `@dateP`. So think about what a duplicate means in your business scenario: duplicate on `@idP` alone, or on `@idP` and `@dateP` together? If it is both, your query is correct; if it is only `@idP`, you have to remove `@dateP` from your `where` clause.
|
This inserts all rows from `realTable` into `FinalTable` where the combination of `idP` and `dateP` does not already exist (using a simple left join):
```
INSERT INTO dbo.FinalTable
SELECT *
FROM dbo.RealTable R
LEFT JOIN dbo.FinalTable T ON T.idP = R.idP
AND T.dateP = R.dateP
WHERE T.idP IS NULL
```
I would strongly recommend the use of column names instead of "\*"!!
```
INSERT INTO table (column1, column2)
SELECT column1, column2 FROM anothertable
```
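A minimal runnable sketch of this anti-join pattern, using Python's sqlite3 in place of SQL Server (table and column names are invented):

```python
import sqlite3

# Hedged illustration of "insert only missing rows" via LEFT JOIN ... IS NULL.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE real_table (idp INTEGER, datep TEXT, val TEXT);
CREATE TABLE final_table (idp INTEGER, datep TEXT, val TEXT);
INSERT INTO real_table VALUES (1, '2015-08-01', 'a'), (2, '2015-08-01', 'b');
INSERT INTO final_table VALUES (1, '2015-08-01', 'a');  -- already present
""")
# Only the (2, '2015-08-01') row has no match, so only it is inserted.
conn.execute("""
    INSERT INTO final_table (idp, datep, val)
    SELECT r.idp, r.datep, r.val
    FROM real_table r
    LEFT JOIN final_table f ON f.idp = r.idp AND f.datep = r.datep
    WHERE f.idp IS NULL
""")
rows = conn.execute(
    "SELECT idp, datep, val FROM final_table ORDER BY idp").fetchall()
print(rows)
```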
|
insert into table and skip duplicates
|
[
"",
"sql",
"sql-server",
"stored-procedures",
"sql-server-2012",
""
] |
I require a piece of SQL code which shows me the year part of a specific date; however, the new year should start on November 1st.
For example, the two below will both return 2014:
```
select DATEPART(year, '2014-10-01')
select DATEPART(year, '2014-11-01')
```
However, I need the second one to return 2015, as the new year for the company starts in November.
I also need this to work not only for the two specified dates but for any date. Someone told me it was as easy as minus-ing two months. Thank you.
|
Add two months and get the year part of it.
```
select datepart(year, dateadd(MONTH,2,'2014-10-01')), -- 2014
datepart(year, dateadd(MONTH,2,'2014-11-01')) -- 2015
```
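A quick check of this trick, using SQLite's date functions in place of SQL Server's `DATEADD`/`DATEPART` (an illustration, not the original environment):

```python
import sqlite3

# Shift the date forward two months, then take the calendar year:
# Nov/Dec roll into the next year, Jan-Oct stay in the current one.
conn = sqlite3.connect(":memory:")

def fiscal_year(d):
    return conn.execute(
        "SELECT CAST(strftime('%Y', date(?, '+2 months')) AS INTEGER)",
        (d,)).fetchone()[0]

print(fiscal_year('2014-10-01'), fiscal_year('2014-11-01'))
```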
|
Assuming MS SQL Server...
```
SELECT DATEPART(YEAR, DATEADD(MONTH,2,'2014-11-01'))
```
|
sql date for year start
|
[
"",
"sql",
"date",
""
] |
I would like to return table from plpgsql function.
Here is my code.
```
CREATE FUNCTION test() RETURNS my_table AS
$BODY$DECLARE
q4 my_table;
BEGIN
q4 := SELECT * FROM my_table;
RETURN q4;
END;$BODY$
LANGUAGE sql;
```
I am getting following error:
```
Error: ERROR: syntax error at or near "SELECT"
LINE 5: q4 := SELECT * FROM my_table;
```
I started from this questions/tutorials.
<https://dba.stackexchange.com/questions/35721/declare-variable-of-table-type-in-pl-pgsql> &&
<http://postgres.cz/wiki/PL/pgSQL_%28en%29>
The idea is that I need to assign this query to a variable. This is only small part of function that I would like to create.
The second problem will be how to iterate through that set and make some mathematical operations and assigning value to some field of that table.
However firstly I would like to solve this problem.
|
```
CREATE FUNCTION test()
RETURNS my_table AS
$BODY$
DECLARE
q4 my_table;
BEGIN
-- add brackets to get a value
-- select row as one value, as q4 is of the type my_table
-- and limit result to one row
q4 := (SELECT my_table FROM my_table ORDER BY 1 LIMIT 1);
RETURN q4;
END;$BODY$
-- change language to plpgsql
LANGUAGE plpgsql;
```
* You cannot use variables in `sql` functions, use `plpgsql`.
* You can assign single value to a variable, while `select query` returns set of rows.
* You have to select a row as one value, as the variable is of composite type.
---
Example of using a loop:
```
DROP FUNCTION test();
CREATE FUNCTION test()
-- change to SETOF to return set of rows, not a single row
RETURNS SETOF my_table AS
$BODY$
DECLARE
q4 my_table;
BEGIN
FOR q4 in
SELECT * FROM my_table
LOOP
RETURN NEXT q4;
END LOOP;
END;$BODY$
LANGUAGE plpgsql;
SELECT * FROM test();
```
Read the documentation about [Returning From a Function](http://www.postgresql.org/docs/9.4/static/plpgsql-control-structures.html)
|
PostgreSQL has no table variables, so you cannot return a table via a variable. When you create a table, PostgreSQL creates a composite type with the same name. But it isn't a table type - it is a composite type, a record.
```
CREATE TABLE xx(a int, b int);
CREATE OR REPLACE FUNCTION foo()
RETURNS xx AS $$
DECLARE v xx;
BEGIN
v := (10,20);
RETURN v;
END;
$$ LANGUAGE plpgsql;
```
Function `foo` returns a composite value - it is not a table. But you can write a function that returns a set of composite values, which is a table.
```
CREATE OR REPLACE FUNCTION foo(a int)
RETURNS SETOF xx AS $$
DECLARE v xx;
BEGIN
FOR i IN 1..a LOOP
v.a := i; v.b := i+1;
RETURN NEXT v;
END LOOP;
RETURN;
END;
$$ LANGUAGE plpgsql;
postgres=# SELECT * FROM foo(3);
βββββ¬ββββ
β a β b β
βββββͺββββ‘
β 1 β 2 β
β 2 β 3 β
β 3 β 4 β
βββββ΄ββββ
(3 rows)
```
When the result is based on a query, you can use `RETURN QUERY`. It is a little faster, shorter, and more readable than `FOR IN SELECT` with `RETURN NEXT`:
```
CREATE OR REPLACE FUNCTION foo2(a int)
RETURNS SETOF xx AS $$
BEGIN
RETURN QUERY SELECT * FROM xx
WHERE xx.a = foo2.a;
RETURN;
END;
$$ LANGUAGE plpgsql;
```
Use these functions carefully. They are a black box for the optimizer, so it cannot optimize the query inside the function together with the outer query; it must optimize them separately, which may not be efficient for some complex queries. When the outer query is simple, this should not be a problem.
|
Returning set of rows from plpgsql function.
|
[
"",
"sql",
"postgresql",
"plpgsql",
""
] |
What I want to do is to convert multiple rows into a single row using case-when, but I get
> ORA-00979: not a GROUP BY expression
this is my query
```
SELECT P.APPDISPLAYNAME AS PLAYERNAME ,
CASE WHEN GS.SETNUMBER = 1 THEN SS.SETSCORE END AS set1,
CASE WHEN GS.SETNUMBER = 2 THEN SS.SETSCORE END AS set2,
CASE WHEN GS.SETNUMBER = 3 THEN SS.SETSCORE END AS set3,
CASE WHEN GS.SETNUMBER = 4 THEN SS.SETSCORE END AS set4
FROM TENNISMATCH TM ,
TEAM T ,
TEAMPLAYER TP ,
TOURNAMENTPLAYER TRP ,
PLAYER P ,
SETSCORE SS ,
GAMESET GS ,
PLAYERBIO PB
WHERE TM.EVENTTYPE = 'MEN_SINGLES'
AND TM.MATCHSTATUS = 'LIVE'
AND TM.TENNISMATCHID = T.TENNISMATCHID
AND T.TEAMID = TP.TEAMID
AND TP.TOURNAMENTPLAYERID = TRP.TOURNAMENTPLAYERID
AND TRP.PLAYERID = P.PLAYERID
AND T.TEAMID = SS.TEAMID
AND SS.GAMESETID = GS.GAMESETID
AND P.PLAYERID = PB.PLAYERID
GROUP BY P.APPDISPLAYNAME
```
|
First, you should learn to use proper explicit `JOIN` syntax. Even Oracle now recommends its use.
Second, your `SELECT` needs aggregation functions. I would recommend something like this:
```
SELECT P.APPDISPLAYNAME AS PLAYERNAME ,
MAX(CASE WHEN GS.SETNUMBER = 1 THEN SS.SETSCORE END) AS set1,
MAX(CASE WHEN GS.SETNUMBER = 2 THEN SS.SETSCORE END) AS set2,
MAX(CASE WHEN GS.SETNUMBER = 3 THEN SS.SETSCORE END) AS set3,
MAX(CASE WHEN GS.SETNUMBER = 4 THEN SS.SETSCORE END) AS set4
. . .
```
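A self-contained demonstration of the conditional-aggregation pivot, using Python's sqlite3 with invented tennis data (the original tables are not available here):

```python
import sqlite3

# MAX(CASE WHEN ...) collapses one row per set into one row per player.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE setscore (player TEXT, setnumber INTEGER, setscore INTEGER);
INSERT INTO setscore VALUES
 ('Federer', 1, 6), ('Federer', 2, 4),
 ('Federer', 3, 7), ('Federer', 4, 6);
""")
row = conn.execute("""
    SELECT player,
           MAX(CASE WHEN setnumber = 1 THEN setscore END) AS set1,
           MAX(CASE WHEN setnumber = 2 THEN setscore END) AS set2,
           MAX(CASE WHEN setnumber = 3 THEN setscore END) AS set3,
           MAX(CASE WHEN setnumber = 4 THEN setscore END) AS set4
    FROM setscore
    GROUP BY player
""").fetchone()
print(row)  # ('Federer', 6, 4, 7, 6)
```

Without the `MAX()` wrapper, each `CASE` expression would be a non-aggregated column and the `GROUP BY` would fail, which is exactly the ORA-00979 the question hit.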
|
All the columns that appear in your `SELECT` clause must either be included in your `GROUP BY` clause, or else have some kind of [Aggregate function](http://docs.oracle.com/database/121/SQLRF/functions003.htm#SQLRF20035) applied.
In your case, if you want just one row to be returned, you should for example use a `MAX()` function applied to every one of your `CASE` results.
This will of course not be accurate if the `CASE` statements can return different results for each row. But if that's the case, then your problem cannot be solved.
Try adding this to every `CASE`:
```
MAX(CASE WHEN GS.SETNUMBER=1 THEN SS.SETSCORE END) as set1
```
|
Convert multiple rows in to single row in SQL using Case-When
|
[
"",
"sql",
"oracle",
""
] |
I have the following `UPDATE` query, but something is wrong when executing it.
I'm trying to update values in *table1* with values from table *import\_data*, but I need the id from table *ids*... So the table *ids* links *table1* and *import\_data*. Simple, isn't it?
```
UPDATE table1
INNER JOIN import_data ON (
import_data.id = (
SELECT customer_id
FROM ids
INNER JOIN table1 ON (
table1.id = ids.link
)
WHERE ids.type = 'NORMAL'
)
)
SET table1.name = import_data.name;
```
**table1**
```
| id | name |
|----------|-----------|
| AA-00001 | Company 1 |
| AA-00002 | Company 2 |
| AA-00003 | Company 3 |
```
**import\_data**
```
| id | name |
|--------|-------------------|
| 401000 | Updated company 1 |
| 401001 | Updated company 2 |
| 401002 | Company 3 |
```
**ids**
```
| link | id | type |
|----------|--------|--------|
| AA-00001 | 401000 | NORMAL |
| AA-00002 | 401001 | NORMAL |
| AA-00003 | 401002 | NORMAL |
```
|
Try:
```
update table1 t1
inner join (
select idata.name, ids.link
from import_data idata
join ids on ids.id = idata.id
where ids.type = 'NORMAL'
) x
on x.link = t1.id
set t1.name = x.name
```
**Demo** [sqlfiddle](http://sqlfiddle.com/#!9/8c188/1)
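For readers without MySQL at hand, here is the same update sketched with Python's sqlite3. Since SQLite lacks MySQL's `UPDATE ... JOIN` syntax, a correlated subquery is used instead (an equivalent technique, not the answer's literal statement):

```python
import sqlite3

# Invented miniature of the question's three tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (id TEXT PRIMARY KEY, name TEXT);
CREATE TABLE import_data (id INTEGER, name TEXT);
CREATE TABLE ids (link TEXT, id INTEGER, type TEXT);
INSERT INTO table1 VALUES ('AA-00001', 'Company 1'), ('AA-00002', 'Company 2');
INSERT INTO import_data VALUES
 (401000, 'Updated company 1'), (401001, 'Updated company 2');
INSERT INTO ids VALUES
 ('AA-00001', 401000, 'NORMAL'), ('AA-00002', 401001, 'NORMAL');
""")
# ids bridges table1.id (link) to import_data.id; copy the new names over.
conn.execute("""
    UPDATE table1
    SET name = (SELECT d.name
                FROM import_data d
                JOIN ids ON ids.id = d.id
                WHERE ids.type = 'NORMAL' AND ids.link = table1.id)
    WHERE id IN (SELECT link FROM ids WHERE type = 'NORMAL')
""")
rows = conn.execute("SELECT id, name FROM table1 ORDER BY id").fetchall()
print(rows)
```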
|
I think it depends on the database you're using. A relevant post is answered here - [How can I do an UPDATE statement with JOIN in SQL?](https://stackoverflow.com/questions/1293330/how-can-i-do-an-update-statement-with-join-in-sql)
|
Updating values in one table based on another
|
[
"",
"mysql",
"sql",
""
] |
I have two tables; Savings and Spend. What I need is both of them rolled up into one row by Month & Year, and Site/Location.
Tables are setup as followed. Data is obviously cleansed and please ignore the lovely structure for dates.
Spend Table
```
Site# Region Total Spend Month Year
52 Region1 $589.01 8 2015
52 Region1 $236.41 8 2015
52 Region1 $635.97 8 2015
52 Region1 $710.54 8 2015
52 Region1 $411.42 8 2015
52 Region1 $374.79 8 2015
52 Region1 $153.82 8 2015
52 Region1 $903.01 8 2015
52 Region1 $930.48 8 2015
52 Region1 $778.35 8 2015
52 Region1 $306.33 8 2015
52 Region1 $1,360 8 2015
52 Region1 $9,4292 8 2015
52 Region1 $6,7191 8 2015
52 Region1 $90.77 8 2015
52 Region1 $60.48 8 2015
52 Region1 $30.66 8 2015
52 Region1 $66.59 8 2015
52 Region1 $28.03 8 2015
52 Region1 $16.77 8 2015
52 Region1 $4,3851 8 2015
52 Region1 $244.07 8 2015
52 Region1 $987.81 8 2015
52 Region1 $2,7327 8 2015
52 Region1 $189.94 8 2015
52 Region1 $360.72 8 2015
52 Region1 $427.31 8 2015
52 Region1 $1,5069 8 2015
52 Region1 $987.81 8 2015
52 Region1 $1,7397 8 2015
52 Region1 $224.74 8 2015
52 Region1 $134.96 8 2015
52 Region1 $2,5456 8 2015
52 Region1 $124.53 8 2015
52 Region1 $1,9049 8 2015
52 Region1 $63.21 8 2015
52 Region1 $252.84 8 2015
52 Region1 $218.43 8 2015
52 Region1 $172.64 8 2015
52 Region1 $327.25 8 2015
52 Region1 $1,8732 8 2015
52 Region1 $788.03 8 2015
52 Region1 $693.72 8 2015
52 Region1 $205.68 8 2015
52 Region1 $18.70 8 2015
52 Region1 $122.09 8 2015
52 Region1 $136.91 8 2015
52 Region1 $2,0666 8 2015
52 Region1 $2,0967 8 2015
52 Region1 $618.57 8 2015
52 Region1 $179.06 8 2015
52 Region1 $16.28 8 2015
52 Region1 $2,2232 8 2015
52 Region1 $694.80 8 2015
52 Region1 $165.42 8 2015
52 Region1 $47.88 8 2015
52 Region1 $4.36 8 2015
52 Region1 $785.33 8 2015
52 Region1 $108.49 8 2015
52 Region1 $9.86 8 2015
52 Region1 $119.09 8 2015
52 Region1 $10.83 8 2015
52 Region1 $1,2097 8 2015
52 Region1 $75.55 8 2015
52 Region1 $4,7307 8 2015
52 Region1 $73.46 8 2015
52 Region1 $396.71 8 2015
52 Region1 $4.94 8 2015
52 Region1 $309.58 8 2015
52 Region1 $126.86 8 2015
52 Region1 $1,5295 8 2015
52 Region1 $104.03 8 2015
52 Region1 $2,8494 8 2015
52 Region1 $2,4338 8 2015
52 Region1 $644.40 8 2015
52 Region1 $23.20 8 2015
52 Region1 $171.45 8 2015
52 Region1 $1,6264 8 2015
52 Region1 $784.53 8 2015
52 Region1 $535.24 8 2015
52 Region1 $259.69 8 2015
52 Region1 $288.57 8 2015
52 Region1 $408.43 8 2015
52 Region1 $72.95 8 2015
52 Region1 $49.95 8 2015
52 Region1 $30.09 8 2015
52 Region1 $1,8848 8 2015
52 Region1 $315.33 8 2015
52 Region1 $1,3658 8 2015
52 Region1 $470.30 8 2015
52 Region1 $29,445 8 2015
52 Region1 $154.12 8 2015
52 Region1 $110.68 8 2015
52 Region1 $75.47 8 2015
52 Region1 $273.65 8 2015
52 Region1 $366.40 8 2015
52 Region1 $316.01 8 2015
```
Savings Table
```
Month Year Site Region Total Savings
8 2015 52 Region1 $1,950.05
8 2015 52 Region1 $234.49
8 2015 52 Region1 $1,548.54
8 2015 52 Region1 $2,433.42
8 2015 52 Region1 $2,073.94
8 2015 52 Region1 $1,956.75
8 2015 52 Region1 $235.30
8 2015 52 Region1 $3,107.72
8 2015 52 Region1 $332.97
8 2015 52 Region1 $2,580.52
```
My expected output would be as follows
```
Site# Region Month Year Total Savings Total Spend
52 Region1 8 2015 16453.7 109866.17
```
Obviously there is a lot more data here, and my query is much longer than any example I can give due to data sensitivity, but the query I was running is close to this:
```
SELECT
[s].[Month],
[s].[Year],
[s].[Site],
[s].[Region],
SUM([s].[Total Savings]),
SUM([sp].Total Spend)
FROM [Savings] AS [s]
LEFT JOIN (
SELECT
[Total Spend]
FROM [Spend]
) AS [sp]
ON [s].[Month] = [sp].[Month]
AND [s].[Year] = [sp].[Year]
AND [s].[Site] = [sp].[Site]
GROUP BY
[s].[Month],
[s].[Year],
[s].[Site],
[s].[Region]
```
The problem with the code is that I'm getting a lot of unexpected aggregations and the values are being multiplied. Sometimes I can get the savings to calculate correctly, but it's summed on each line.
My question is: what is the most appropriate way to combine data that is structured like this and be able to report on every column (assuming they're not unique)? I know I could do a subquery for every single column, but that feels like horrible practice.
TL;DR - I have two tables I need to join with aggregation, and be able to select all columns from both tables.
This is on Microsoft SQL via Tableau
EDIT
Just tried this query
```
SELECT
SUM(CAST([ms].[USD_SavingsAmt] AS decimal(38,2))) AS [Total Savings],
SUM([s].[USD_SpendAmt]) AS [Total Spend],
[ms].[MOR_Reporting_Year] AS [Year],
[ms].[MOR_Reporting_Month] AS [Month],
[ms].[Site#] AS [Site]
FROM [MonthlySavings_14637] AS [ms], [MonthlySpend_14637] AS [s]
WHERE [ms].[MOR_Reporting_Year] = [s].[MOR_Reporting_Year]
AND [ms].[MOR_Reporting_Month] = [s].[MOR_Reporting_Month]
AND [ms].[Site#] = [s].[Site#]
AND [s].[Site#] = '52'
AND [ms].[MOR_Reporting_Month] = '8'
AND [ms].[MOR_Reporting_Year] = '2015'
GROUP BY
[ms].[MOR_Reporting_Year],
[ms].[MOR_Reporting_Month],
[ms].[Site#]
```
And got this result
```
Site Month Total Savings Total Spend Year
52 8 1,596,008.90 1,098,661.65 2,015
```
The values are being duplicated.
TTG Guy, using your logic
```
SELECT
SUM([ms].[Total Savings]) AS [Total Savings],
SUM([s].[USD_SpendAmt]) AS [Total Spend],
[s].[MOR_Reporting_Year] AS [Year],
[s].[MOR_Reporting_Month] AS [Month],
[s].[Site#] AS [Site]
FROM [MonthlySpend_14637] AS [s]
INNER JOIN
(
SELECT
SUM([MonthlySavings_14637].[USD_SavingsAmt]) AS [Total Savings],
[MonthlySavings_14637].[MOR_Reporting_Month] AS [Month],
[MonthlySavings_14637].[MOR_Reporting_Year] AS [Year],
[MonthlySavings_14637].[Site#] AS [Site]
FROM [MonthlySavings_14637]
GROUP BY [MOR_Reporting_Month], [MOR_Reporting_Year], [Site#]
) AS [ms]
ON [ms].[Site]=[s].[Site#]
AND [ms].[Month] = [s].[MOR_Reporting_Month]
AND [ms].[Year] = [s].[MOR_Reporting_Year]
WHERE
[s].[Site#] = '52'
AND [s].[MOR_Reporting_Month] = '8'
AND [s].[MOR_Reporting_Year] = '2015'
GROUP BY [s].[MOR_Reporting_Month], [s].[MOR_Reporting_Year], [s].[Site#]
```
I got
```
Site Month Total Savings Total Spend Year
52 8 1,596,008.90 109,866.17 2,015
```
Spend is correct!
|
I know that you said in a few places that you were having problems with a subquery, but this one seems to work fine for me:
```
SELECT sp.site#, sp.region,sp.month,sp.year,savingstotals.[total savings], sum([totalspend]) as [Total Spend]
FROM
spend sp
INNER JOIN
(SELECT site, region, month, year , sum([total savings]) [total savings]
from savings
group by site, region, month, year ) SavingsTotals ON
Savingstotals.site=sp.site#
AND Savingstotals.month=sp.month
AND Savingstotals.year=sp.year
AND Savingstotals.region=sp.region
group by sp.site#, sp.region,sp.month,sp.year, SavingsTotals.[total savings]
```
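To see why the naive join multiplies the totals, and why pre-aggregating each table before joining fixes it, here is a tiny reproduction in Python's sqlite3 (invented numbers):

```python
import sqlite3

# Each savings row matches every spend row for the same site, so in the
# naive join SUM(savings) is counted once per spend row and vice versa.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE spend   (site INTEGER, amt REAL);
CREATE TABLE savings (site INTEGER, amt REAL);
INSERT INTO spend   VALUES (52, 100), (52, 200), (52, 300);  -- 600 total
INSERT INTO savings VALUES (52, 10),  (52, 20);              -- 30 total
""")
wrong = conn.execute("""
    SELECT SUM(sv.amt), SUM(sp.amt)
    FROM spend sp JOIN savings sv ON sv.site = sp.site
""").fetchone()
right = conn.execute("""
    SELECT sv.total, sp.total
    FROM (SELECT site, SUM(amt) AS total FROM spend   GROUP BY site) sp
    JOIN (SELECT site, SUM(amt) AS total FROM savings GROUP BY site) sv
      ON sv.site = sp.site
""").fetchone()
print(wrong, right)
```

The naive join yields 3 × 2 = 6 rows, inflating savings threefold and spend twofold; the pre-aggregated join returns the true totals.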
|
```
select
[s_agg].[Month],
[s_agg].[Year],
[s_agg].[Site],
[s_agg].[Region],
[s_agg].[tot_sav],
[sp_agg].[tot_sp]
from (
select [s].[Month],
[s].[Year],
[s].[Site],
[s].[Region],
SUM([s].[Total Savings]) AS [tot_sav]
FROM [Savings] AS [s]
GROUP BY
[s].[Month],
[s].[Year],
[s].[Site],
[s].[Region]
) AS [s_agg]
LEFT JOIN (
SELECT [Month],
[Year],
[Site],
[Region],
SUM([Total Spend]) [tot_sp]
FROM [Spend] AS [sp]
GROUP BY [sp].[Month],
[sp].[Year],
[sp].[Site],
[sp].[Region]
) AS [sp_agg]
ON [s_agg].[Month] = [sp_agg].[Month]
AND [s_agg].[Year] = [sp_agg].[Year]
AND [s_agg].[Site] = [sp_agg].[Site]
```
|
SQL Aggregation issue
|
[
"",
"sql",
"sql-server",
"aggregate",
"aggregation",
""
] |
```
Dim dtIzvrsenja As DateTime
Private Sub chkDate_CheckedChanged(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles chkDate.CheckedChanged
If Me.chkDate.Checked Then
Me.dtpDate.CustomFormat = "dd.MM.yyyy"
Me.dtpDate.Enabled = True
dtIzvrsenja = dtpdate.value
Else
Me.dtpDate.CustomFormat = " "
Me.dtpDate.Enabled = False
dtIzvrsenja = Nothing
End If
End Sub
```
Is there any way to set the value of dtIzvrsenja to null if chk is not checked?
Edit: using the approach others mentioned, I get
> {"Nullable object must have a value."}
```
Dim dtIzvrsenja2 As DateTime? = Nothing
dtIzvrsenja = dtIzvrsenja2
```
|
In order to support that, you need to use the generic `Nullable(Of Date)` type rather than just `Date` (the VB alias for the `DateTime` type). Normally, only *reference type* variables (`Class` type variables) support null values. Since `Date` is a *value type* variable (`Structure`), it doesn't support null. `Nullable(Of T)` allows you to wrap any *value type* and add support for null values to that type.
In VB (as well as C#, by the way), a question mark after any type name is shorthand for making the type nullable. So, `Date?` is equivalent to `Nullable(Of Date)`. Here's your example code using nullables:
```
Dim dtIzvrsenja As Date?
Private Sub chkDate_CheckedChanged(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles chkDate.CheckedChanged
If Me.chkDate.Checked Then
Me.dtpDate.CustomFormat = "dd.MM.yyyy"
Me.dtpDate.Enabled = True
dtIzvrsenja = dtpdate.value
Else
Me.dtpDate.CustomFormat = " "
Me.dtpDate.Enabled = False
dtIzvrsenja = Nothing
End If
End Sub
```
However, the confusing thing about `Nullable(Of T)` is that it, itself, is not a *reference type*. `Nullable(Of T)` is a *value type*. So, even `Nullable(Of Date)` variables cannot be set to null in the traditional sense. What `Nullable(Of T)` does is it adds a `HasValue As Boolean` property which allows you to check if the value is null. Therefore, when you go to read the value of the variable, rather than first checking to see if the variable `Is Nothing`, you need to instead check to see if its `HasValue` property is true. For instance:
```
If dtIzvrsenja.HasValue Then
Dim dtIzvrsenja2 As Date = dtIzvrsenja.Value
' ...
End If
```
If you really want to treat the `Date` type as a real reference type, then you can do so by declaring the variable `As Object`. `Object` variables are always reference variables, so when set to `Nothing`, they really are set to null. When you set an `Object` variable to a value-type value, such as a `Date`, it will *box* the value. When you read its value and convert it back to a `Date`, it will go through a process called *unboxing*. This *boxing/unboxing* process does add some overhead, however. Also, the other downside of this is that the type-checking for code working with the `Object` variable will be skipped at compile-time. That means that, if you are not careful, you could get some exceptions at runtime if the variable references a type of object that is different than what you expect. You could avoid the type-checking issue by declaring your own class:
```
Public Class DateReference
Public Property Value As Date
End Class
```
Or:
```
Public Class Reference(Of T)
Public Property Value As T
End Class
```
However, that seems silly. `Nullable(Of T)` already serves that purpose, so unless you really need to make it a reference type, I'd suggest that you take the time to get acquainted with the standard way of doing it.
As a side note, another complication which further causes confusion in this area is that VB's `Nothing` keyword does not strictly mean null. It just means the default value for any type. So, for an `Integer`, `Nothing` equals the value zero. For a `Date`, it means `Date.MinValue`, etc.
|
Use `Nullable`:
```
Dim nullableDateValue As Nullable(Of Date) = Nothing
```
|
Set date variable to null
|
[
"",
"sql",
"vb.net",
""
] |
I have used a SQL CASE statement on an INT column of a table.
Following is the table schema and a query which works fine.
**SQL Query**
```
CREATE TABLE #TEMP_A(ID BIGINT ,ACCEPTED INT)
INSERT INTO #TEMP_A VALUES(1,1)
INSERT INTO #TEMP_A(ID) VALUES(2)
INSERT INTO #TEMP_A(ID) VALUES(3)
INSERT INTO #TEMP_A(ID) VALUES(4)
INSERT INTO #TEMP_A VALUES(5,1)
SELECT T.ID,T.ACCEPTED,
(CASE WHEN(T.ACCEPTED=1) THEN 'ACCEPTED' ELSE 'NOT ACCEPTED' END) AS STATUS
FROM #TEMP_A T
```
**Query OUTPUT**
```
ID ACCEPTED STATUS
1 1 ACCEPTED
2 NULL NOT ACCEPTED
3 NULL NOT ACCEPTED
4 NULL NOT ACCEPTED
5 1 ACCEPTED
```
But now the problem is that I have a condition where I need to print "not accepted" in the select statement, so I tried the following query:
**SQL Query**
```
SELECT T.ID,T.ACCEPTED,
(CASE WHEN(T.ACCEPTED=NULL) THEN 'NOT ACCEPTED' END) AS STATUS
FROM #TEMP_A T
```
**Query Actual OUTPUT**
```
ID ACCEPTED STATUS
1 1 NULL
2 NULL NULL
3 NULL NULL
4 NULL NULL
5 1 NULL
```
**Query Required OUTPUT**
```
ID ACCEPTED STATUS
1 1 NULL
2 NULL NOT ACCEPTED
3 NULL NOT ACCEPTED
4 NULL NOT ACCEPTED
5 1 NULL
```
I have to print the status of those which are not accepted in the select statement.
What should I do to get the required result?
|
try:
```
SELECT T.ID,T.ACCEPTED,
(CASE WHEN T.ACCEPTED IS NULL THEN 'NOT ACCEPTED' END) AS STATUS
FROM #TEMP_A T
```
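A small demonstration of the NULL-comparison semantics behind this fix, using Python's sqlite3 (any SQL database behaves the same way here):

```python
import sqlite3

# `accepted = NULL` evaluates to UNKNOWN for every row, so it matches
# nothing; only `accepted IS NULL` selects the missing values.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (id INTEGER, accepted INTEGER);
INSERT INTO t VALUES (1, 1), (2, NULL), (3, NULL);
""")
eq_null = conn.execute(
    "SELECT COUNT(*) FROM t WHERE accepted = NULL").fetchone()[0]
is_null = conn.execute(
    "SELECT COUNT(*) FROM t WHERE accepted IS NULL").fetchone()[0]
status = conn.execute("""
    SELECT id, CASE WHEN accepted IS NULL THEN 'NOT ACCEPTED' END
    FROM t ORDER BY id
""").fetchall()
print(eq_null, is_null, status)
```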
|
Use `IS NULL` to check for a `NULL` value. Try this query:
```
SELECT T.ID,T.ACCEPTED,
(CASE WHEN (T.ACCEPTED IS NULL) THEN 'NOT ACCEPTED' END) AS STATUS
FROM #TEMP_A T
```
|
Why sql case is not working on null condition of int column
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I want to find the peak value across multiple time series concurrently, like the following. However to make this a little more complicated, I may have 20k+ entities with 200 data points each, so performance is important.
I only require the peak achieved across all entities. So for clarification, let's say my entities are stores and my value is total sales per day, and I want to find out which day was the peak sales day across all 20k stores.
Sample data
```
Date Time Entity Value
01/01/1900 A 8
01/01/1900 B 6
01/01/1900 C 9
02/01/1900 A 4
02/01/1900 B 3
02/01/1900 C 6
03/01/1900 A 7
03/01/1900 B 8
03/01/1900 C 9
04/01/1900 A 1
04/01/1900 B 2
04/01/1900 C 5
```
OUTPUT
```
Date Time A B C Total
01/01/1900 8 6 9 23
02/01/1900 4 3 6 13
03/01/1900 7 8 9 24
04/01/1900 1 2 5 8
```
However this output is not important; I simply require the peak DateTime and total.
```
Date Time Total
03/01/1900 24
```
|
If you need just the date of the maximum sum, then a simple `GROUP BY` is enough.
[SQL Fiddle](http://sqlfiddle.com/#!3/9cc27/2/0)
```
SELECT TOP(1)
dt
,SUM(Value) AS Total
FROM T
GROUP BY dt
ORDER BY Total DESC;
```
**Result**
```
dt Total
1900-01-03 24
```
|
*Here is a solution using a dynamic query:*
```
-- create temp table for data sample
IF OBJECT_ID('Tempdb..#test') IS NOT NULL
DROP TABLE #test
CREATE TABLE #test
(
[Date Time] DATE ,
Entity VARCHAR(1) ,
VALUE INT
)
INSERT INTO #test
( [Date Time], Entity, VALUE )
VALUES ( '1900-01-01', 'A', 4 ),
( '1900-01-01', 'B', 6 ),
( '1900-01-01', 'C', 9 ),
( '1900-01-02', 'A', 4 ),
( '1900-01-02', 'B', 3 ),
( '1900-01-02', 'C', 6 ),
( '1900-01-03', 'A', 7 ),
( '1900-01-03', 'B', 8 ),
( '1900-01-03', 'C', 9 ),
( '1900-01-04', 'A', 1 ),
( '1900-01-04', 'B', 2 ),
( '1900-01-04', 'C', 5 )
--final query
DECLARE @String AS NVARCHAR(MAX)
SET @String = SUBSTRING(( SELECT ',[' + CONVERT(VARCHAR, Entity) + ']'
FROM ( SELECT DISTINCT
Entity
FROM #test
) AS t
ORDER BY Entity
FOR
XML PATH('')
), 2, 1000)
SET @String = 'SELECT *, ' + REPLACE(@String, ',', '+') + ' AS Total '
+ 'FROM (SELECT * FROM #test AS [T] PIVOT ( MAX(T.VALUE) FOR [T].[Entity] IN ('
+ @String + ') ) AS P) AS T'
EXEC sys.sp_executesql @String
```
***test is here*** [**SQL Fiddle**](http://sqlfiddle.com/#!6/83c31/2)
|
Finding peaks across multiple time series
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
```
INSERT INTO clnt_reports_01 (r_id, cl_no, cl_no, servi, size, vol,
deliver_point, port_, a_port, road, term, compet, speed,
rcomments, stage, meetrating, username, user_status, kids,
hobbies, comments)
VALUES (1, 123123, "test", "test", "test", "test",
"test", "test", "test", "test", 1, "test", "test",
3, 5, "test", "test", 5, "test", "test");
```
Getting the error -
> Error Code: 1136. Column count doesn't match value count at row
|
**EDIT:**
You have specified 21 columns and provided only 20 values so there is a mismatch.
`cl_no` is listed twice. Remove the duplicate.
---
You need to use single quotes instead of double for text
```
INSERT INTO clnt_reports_01 (
r_id,
cl_no,
servi,
size,
vol,
deliver_point,
port_,
a_port,
road,
term,
compet,
speed,
rcomments,
stage,
meetrating,
username,
user_status,
kids,
hobbies,
comments)
VALUES (1,
123123,
'test',
'test',
'test',
'test',
'test',
'test',
'test',
'test',
1,
'test',
'test',
3,
5,
'test',
'test',
5,
'test',
'test');
```
|
Quote strings with `'` and **make sure the number of columns matches the number of values (20)**:
```
INSERT INTO clnt_reports_01 (r_id,cl_no,servi,size,vol,deliver_point,port_,a_port,road,term,compet,speed,rcomments,stage,meetrating,username,user_status,kids,hobbies,comments)
VALUES (1, 123123, 'test', 'test', 'test', 'test', 'test', 'test', 'test', 'test', 1, 'test', 'test', 3, 5, 'test', 'test', 5, 'test', 'test');
```
`"` is treated as identifier (column name).
Also better use `INSERT ... SELECT` for readability:
```
INSERT INTO clnt_reports_01 (
r_id,
cl_no,
servi,
size,
vol,
deliver_point,
port_,
a_port,
road,
term,
compet,
speed,
rcomments,
stage,
meetrating,
username,
user_status,
kids,
hobbies,
comments)
SELECT
1 AS r_id,
123123 AS cl_no,
'test' AS servi,
'test' AS size,
'test' As vol,
'test' AS deliver_point,
'test' AS port_,
'test' AS a_port,
'test' AS road,
'test' AS term,
1 AS compet,
'test' AS speed,
'test' AS rcomments,
3 AS stage,
5 AS meetrating,
'test' AS username,
'test' AS user_status,
5 AS kids,
'test' AS hobbies,
'test' AS comments;
```
|
Column count doesn't match value count at row
|
[
"",
"mysql",
"sql",
""
] |
I have a relatively straightforward SQL query which refuses to execute in SQL Developer. I've narrowed down the offending line to this:
```
and (b.date_updated > (sysdate MINUS 2) or a.date_updated > (sysdate MINUS 2))
```
This is a part of my `where` clause along with other statements.
I've visually counted the parenthesis multiple times and it looks balanced to me, I'm not sure what I'm doing wrong here, can someone help me figure out what's wrong here.
For completeness, here's what the where clause looks like
```
where a.customer_id = b.customer_id
and (b.date_updated > (sysdate MINUS 2) or a.date_updated > (sysdate MINUS 2))
and a.c_id = c.c_id
```
|
`MINUS` is a set operator that operates on two sets, essentially the "opposite" of what `UNION` or `UNION ALL` does.
To subtract a number from a date (or another number) use `-`
This is documented in the chapter "Arithmetic operators" in the Oracle manual:
<https://docs.oracle.com/cd/E11882_01/server.112/e41084/operators002.htm#SQLRF51156>
You confused that with a SET operator:
<https://docs.oracle.com/cd/E11882_01/server.112/e41084/operators005.htm#SQLRF51164>
To learn *why* `sysdate - 2` actually does what you expect it to, please read the chapter, "*Datetime/Interval Arithmetic*":
<https://docs.oracle.com/cd/E11882_01/server.112/e41084/sql_elements001.htm#i48042>
|
`MINUS` is not legal syntax. Replace each `MINUS` with the actual operator `-`.
|
ORA-00907 Missing right parenthesis even when parenthesis are balanced
|
[
"",
"sql",
"oracle",
"syntax-error",
""
] |
I need to change a few values on my DB.
I forgot to set nullable to the table and it set to 0000-00-00 00:00:00 by default.
Now I need to convert that value in `NULL`.
The field type is Datetime.
How can I do it?
I tried the typical `Update table set field = NULL WHERE field = '0000-00-00 00:00:00';` but it doesn't work.
|
You need to first make the column nullable:
```
ALTER TABLE mytable MODIFY COLUMN field DATETIME NULL;
```
And then update the values:
```
UPDATE mytable SET field = NULL WHERE field = '0000-00-00 00:00:00';
```
|
Since MySQL 5.7, the SQL mode NO\_ZERO\_DATE makes this update impossible unless you first disable the restriction (for the current session only):
```
SET sql_mode=(SELECT REPLACE(@@sql_mode,"NO_ZERO_DATE", ""));
UPDATE mytable SET field = NULL WHERE field = '0000-00-00 00:00:00';
```
|
How to set to NULL a datetime with 0000-00-00 00:00:00 value?
|
[
"",
"mysql",
"sql",
"datetime",
"null",
"sql-update",
""
] |
I need to create a column which shows the difference between the average of two consecutive days of a table, I tried to do this with a CTE query:
```
with aggregate
as
(
select date
,y
,z
,avg(x) as vwap
from table a
where z = 1
group by date,y,z
)
select aggregate.date
,aggregate.vwap - aggregate2.vwap
from aggregate
inner join aggregate aggregate2
on date = dateadd(day,-1,aggregate2.date) and aggregate.y = aggregate2.y
```
This query takes 29 seconds to run, while the first select alone takes only 2 seconds and returns only 2000 rows.
What is a more efficient way to do this?
Does it maybe make sense to create a view of the first query?
Thanks!
**Edit:** I found out that lag/lead is not an option, as my server is actually SQL Server 2008 and only Management Studio is the 2012 version. Does anyone have an idea how to do this efficiently?
|
The problem, I think, was that the whole CTE was re-run for every join or aggregate in the final select. When I created a temporary table of the CTE results and then ran my final inner-join query against that, the speed increased by a factor of 10.
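A minimal sketch of that temp-table approach, using the placeholder names from the question (untested):

```
-- materialize the aggregate once instead of re-running the CTE per reference
select [date], y, z, avg(x) as vwap
into #aggregate
from table_a          -- "table a" in the question; the name is a placeholder
where z = 1
group by [date], y, z;

select a1.[date], a1.vwap - a2.vwap
from #aggregate a1
inner join #aggregate a2
  on a1.[date] = dateadd(day, -1, a2.[date])
 and a1.y = a2.y;

drop table #aggregate;
```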
|
You can use the analytic functions [SQL Lead()](http://www.kodyaz.com/t-sql/lead-function-in-sql-server-2012-for-next-value.aspx) and [Lag()](http://www.kodyaz.com/t-sql/lag-function-in-sql-server-2012-for-previous-values.aspx) introduced with SQL Server 2012.
Please check the following SELECT statement:
```
;with [aggregate] as (
select
[date], avg(x) as vwap
from table a
group by date,y,z
)
select
[date],
vwap,
previous = lag(vwap,1,null) over (order by date),
[next] = lead(vwap,1,null) over (order by date)
from [aggregate]
```
You can improve the above SELECT by adding the difference calculation too:
```
;with [aggregate] as (
select
[date], avg(x) as vwap
from table a
group by date,y,z
)
select
[date],
vwap,
previous = lag(vwap,1,null) over (order by date),
prev_diff = vwap - (lag(vwap,1,null) over (order by date)),
[next] = lead(vwap,1,null) over (order by date),
next_diff = vwap - (lead(vwap,1,null) over (order by date))
from [aggregate]
```
|
sql self join CTE or other way
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I need a way to find the number of days between consecutive dates for each ID. For example, if there are two rows for ID1, the first dated the 1st and the second the 4th, the value for the 1st would be Sept 4th - Sept 1st (since the 4th is the next date for ID1), and the value for the 4th would be today (Sept 28th) - Sept 4th (since the 4th is the last date for ID1 and there are no later dates for it). I included the formula for each calculation next to the column below.
```
IDs Date
ID1 09/01/2015
ID1 09/04/2015
ID2 09/04/2015
ID2 09/09/2015
ID2 09/15/2015
ID3 09/09/2015
ID4 09/15/2015
```
To
```
IDs Date Days Formula...
ID1 09/01/2015 3 Sept 4th - Sept 1st
ID1 09/04/2015 22 Today - Sept 4th
ID2 09/04/2015 5 Sept 9th - Sept 4th
ID2 09/09/2015 6 Sept 15th - Sept 9th
ID2 09/15/2015 13 Today - Sept 15th
ID3 09/09/2015 19 Today - Sept 9th
ID4 09/15/2015 13 Today - Sept 15th
```
|
You are basically looking for `lead()` functionality. Since you don't have it, you have to improvise. Here is a method using `cross apply`:
```
select t.*,
       datediff(day, t.date, coalesce(n.date, getdate()))
from table t outer apply
(select top 1 t2.date
from table t2
where t2.id = t.id and t2.date > t.date
order by t2.date
) n;
```
This should have reasonable performance with an index on `(id, date)`.
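The supporting index mentioned above might look like this (the table name is a placeholder, as in the answer):

```
create index ix_id_date on [table] (id, date);
```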
|
Using a common table expression, you can arrange your table into a temporary view with a start date and end date for each ID record, like this: <http://sqlfiddle.com/#!3/af078/7>
```
with t1 (id, dt, ndx) as (
select id, dt, row_number() over (partition by id order by dt)
from id_dates
)
,
t2 (id, startdt, enddt) as (
select t1.id, t1.dt, coalesce(t2.dt, convert(date, getdate()))
from t1
left join t1 t2
on t2.id = t1.id
and t2.ndx = t1.ndx + 1
)
select * from t2
```
Make sure to use the `coalesce` function to get the current date for the last row (it would otherwise be null from the left join).
From there, it's a simple matter of using the `datediff()` function: <http://sqlfiddle.com/#!3/af078/8>
```
with t1 (id, dt, ndx) as (
select id, dt, row_number() over (partition by id order by dt)
from id_dates
)
,
t2 (id, startdt, enddt) as (
select t1.id, t1.dt, coalesce(t2.dt, convert(date, getdate()))
from t1
left join t1 t2
on t2.id = t1.id
and t2.ndx = t1.ndx + 1
)
select id, startdt, datediff(day, startdt, enddt) as dtcount
from t2
```
|
Simple SQL Query for date ranges
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I want to replace all characters in a string to "\*" star character.
For example:
```
NAME_SURNAME : ANDREY KIMBELL
REPLACED : ****** *******
```
How can I do this in Oracle PL/SQL ?
|
You could use:
**[Demo](http://sqlfiddle.com/#!4/9eecb7d/7165/0)**
```
SELECT REGEXP_REPLACE('ANDREY KIMBELL', '\w', '*')
FROM dual
```
where `\w` is the Alphanumeric characters plus `_` equivalent of `[A-Za-z0-9_]`.
Or if only letters:
```
SELECT REGEXP_REPLACE('ANDREY KIMBELL1', '[[:alpha:]]', '*')
FROM dual
```
|
You could do it like this:
```
select regexp_replace('abdcde123fge','[A-Za-z]','*')
from dual;
```
This replaces everything in the range of a-z and A-Z with a \*
|
replace all characters in string
|
[
"",
"sql",
"oracle",
"oracle11g",
""
] |
I'm quite novice to SQL, but here's what I'm trying to do:
I have 3 sql queries that return 3 different sets of results (students, staff, customer), the columns are all the same
```
id first name last name address zip group 1 group 2 group 3
```
Each select statement populates one of the group columns with its type (student, staff, customer), and the union all between the select statements gets me all 3 groups in 1 table... but I want the duplicates merged, with the group columns reflecting the results from all 3 queries.
```
id first name last name address zip group 1 group 2 group 3
1 Bob Smith bkr st 33 STUDENT
2 Zoe Apple trk av 44 STAFF CUSTOMER
2 Zoe Apple trk av 44 STUDENT
```
becomes:
```
id first name last name address zip group 1 group 2 group 3
1 Bob Smith bkr st 33 STUDENT
2 Zoe Apple trk av 44 STUDENT STAFF CUSTOMER
```
here's more or less the prototype select statements with union:
```
SELECT DISTINCT ID, student_FN AS 'FIRST NAME', student_LN AS 'LAST NAME', ADDRESS, ZIP, 'STUDENT' AS 'GROUP 1', '' AS 'GROUP 2', '' AS 'GROUP 3'
WHERE STUDENT_STATUS IN ('ENROLLED')
UNION ALL
SELECT DISTINCT ID, STAFF_FN AS 'FIRST NAME', STAFF_LN AS 'LAST NAME', ADDRESS, ZIP, '' AS 'GROUP 1', 'STAFF' AS 'GROUP 2', '' AS 'GROUP 3'
WHERE STAFF_RECORD IN ('ACTIVE')
UNION ALL
SELECT DISTINCT ID, CUS_FN AS 'FIRST NAME', CUS_LN AS 'LAST NAME', ADDRESS, ZIP, '' AS 'GROUP 1', '' AS 'GROUP 2', 'CUSTOMER' AS 'GROUP 3'
WHERE CUSTOMER_SHOPPED IN ('STORE')
```
each select tables pulls the columns from slightly different places..
many thanks!
|
Try:
```
SELECT ID,
"FIRST NAME",
"LAST NAME",
ADDRESS,
ZIP,
MAX(CASE "GROUP TYPE" WHEN 'STUDENT' THEN 'STUDENT' END) "GROUP 1",
MAX(CASE "GROUP TYPE" WHEN 'STAFF' THEN 'STAFF' END) "GROUP 2",
MAX(CASE "GROUP TYPE" WHEN 'CUSTOMER' THEN 'CUSTOMER' END) "GROUP 3"
FROM
(SELECT ID, student_FN AS "FIRST NAME", student_LN AS "LAST NAME", ADDRESS, ZIP, 'STUDENT' AS "GROUP TYPE"
FROM STUDENTS WHERE STUDENT_STATUS IN ('ENROLLED')
UNION ALL
SELECT ID, STAFF_FN AS "FIRST NAME", STAFF_LN AS "LAST NAME", ADDRESS, ZIP, 'STAFF' AS "GROUP TYPE"
  FROM STAFF WHERE STAFF_RECORD IN ('ACTIVE')
UNION ALL
SELECT ID, CUS_FN AS "FIRST NAME", CUS_LN AS "LAST NAME", ADDRESS, ZIP, 'CUSTOMER' AS "GROUP TYPE"
FROM CUSTOMERS WHERE CUSTOMER_SHOPPED IN ('STORE')
) AS ILQ
GROUP BY ID, "FIRST NAME", "LAST NAME", ADDRESS, ZIP
```
Note that this will return multiple rows for the same ID if any of the name or address details are different.
|
You can try this:-
```
SELECT id, first_name, last_name, address, zip, MAX(group1), MAX(group2), MAX(group3)
FROM YOUR_TABLES
GROUP BY id, first_name, last_name, address, zip
```
|
SQL union all but flag groups?
|
[
"",
"sql",
"sql-server",
""
] |
In DB2, I have this query to list numbers 1-x:
```
select level from SYSIBM.SYSDUMMY1 connect by level <= "some number"
```
But this maxes out due to `SQL20450N Recursion limit exceeded within a hierarchical query.`
How can I generate a list of numbers between 1 and x using a select statement when x is not known until runtime?
|
I found an answer based on [this post](https://stackoverflow.com/questions/14339369/how-to-generate-a-list-of-number-in-sql-as-it-was-a-list-of-comprehension):
```
WITH d AS
(SELECT LEVEL - 1 AS dig FROM SYSIBM.SYSDUMMY1 CONNECT BY LEVEL <= 10)
SELECT t1.n
FROM (SELECT (d7.dig * 1000000) +
(d6.dig * 100000) +
(d5.dig * 10000) +
(d4.dig * 1000) +
(d3.dig * 100) +
(d2.dig * 10) +
d1.dig AS n
FROM d d1
CROSS JOIN d d2
CROSS JOIN d d3
CROSS JOIN d d4
CROSS JOIN d d5
CROSS JOIN d d6
CROSS JOIN d d7) t1
JOIN ("subselect that returns desired value as i") t2
ON t1.n <= t2.i
ORDER BY t1.n
```
|
That's how I usually create lists:
For your example
```
with numberlist (num) as
(
   select min(1) from anytable
   union all
   select num + 1 from numberlist
   where num < x
)
select num from numberlist
```
|
select statement to list numbers in range
|
[
"",
"sql",
"db2",
""
] |
I'm trying to do an INNER JOIN of a few tables, but with certain condition:
I have the CONTACTS table:
```
code contact_type name email
----------------------------------------------------------------------------
1 P AAAAAAAA SSSS@DDD.COM
2 P BBBBBBB SDFSDF@DDD.COM
3 P CCCCC SDSDF@DD.COM
1 C AAAABBB DDDD@DDD.COM
2 C BBBBCCC DSDF@DD.COM
3 C CCCCDDD ASSAD@DD.COM
```
the PROVIDERS table:
```
code name
----------------------------------------------------
1 SIEMENS
2 FUJITSU
3 ASUS
```
and the CLIENTS table:
```
CODE NAME
---------------------------------------------------------
1 SMITH
2 PETER
3 MICHAEL
```
I'd like to select all rows from contacts, but when CONTACT\_TYPE is `C` the corresponding `CLIENT.name` for that code should appear, and when CONTACT\_TYPE is `P` the corresponding `PROVIDER.NAME` should appear.
I tried:
```
SELECT code,contact_type,con.name,con.EMAIL
FROM contacts con
inner join CLIENTS cli
on cli.codcli=con.CODE
and con.CONTACT_TYPE='C'
inner join provIDERS p
on con.CODE=p.codpro
and con.CONTACT_TYPE='P'
```
But it doesn't work.
I also tried:
[[SQL Fiddle Demo]](http://sqlfiddle.com/#!3/bb8ad/2)
and I want to see like this:
```
PROVIDER/CLIENT NAME CONTACT_TYPE EMAIL
---------------------------------------------------------------------------
SIEMENS P SSSS@DDD.COM
FUJITSU P SDFSDF@DDD.COM
ASUS P SDSDF@DD.COM
SMITH C DDDD@DDD.COM
PETER C DSDF@DD.COM
MICHAEL C ASSAD@DD.COM
```
[[SQL Fiddle Demo]](http://sqlfiddle.com/#!3/bb8ad/2)
|
I suggest you to use this query:
```
SELECT
CASE
WHEN CONTACT_TYPE = 'C' THEN
(SELECT Name FROM CLIENTS c WHERE c.Code = co.Code)
WHEN CONTACT_TYPE = 'P' THEN
(SELECT Name FROM PROVIDERS p WHERE p.Code = co.Code)
END AS [PROVIDER/CLIENT NAME],
CONTACT_TYPE, EMAIL
FROM
CONTACTS co;
```
[[SQL Fiddle Demo]](http://sqlfiddle.com/#!3/bb8ad/4)
|
You can try to do it with union:
```
SELECT p.Name AS [Provider/Client Name] ,
c.Contact_Type ,
c.Email
FROM contacts c
INNER JOIN providers p
ON c.code = p.code
WHERE c.Contact_Type = 'P'
UNION
SELECT cl.Name AS [Provider/Client Name] ,
c.Contact_Type ,
c.Email
FROM contacts c
INNER JOIN client cl
ON cl.code = c.code
WHERE c.Contact_Type = 'C'
```
or by use case statement:
```
SELECT CASE
WHEN c.Contact_Type = 'P' THEN
(SELECT p.Name FROM providers p WHERE p.code = c.code)
ELSE
(SELECT cl.Name FROM client cl WHERE cl.code = c.code) END
AS [Provider/Client Name] ,
c.Contact_Type ,
c.Email
FROM contacts c
```
|
How to use a inner join with 2 conditions in the same column
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"inner-join",
""
] |
I have a query that I need to add a column to. I can find many similar examples, but nothing quite works for this situation. There could be any number of CategoryIDs, and I need to add the Sum(IsOutage) grouped by CategoryID. Here is some sample data:
[](https://i.stack.imgur.com/pBC0B.png)
and here is the closest I have to getting it to work:
```
SELECT c.CategoryID
, c.Name
, COUNT(i.IssueID) AS TotalIssues
, AVG(DATEDIFF(MI, i.StartDate, i.ResolvedDate)) AS AvgDuration
,(Select COUNT(h.IssueID) From dbo.hdIssues AS h Where h.CategoryID = i.CategoryID AND IsOutage = 1 ) AS TotalOutages
FROM dbo.hdCategories AS c INNER JOIN
dbo.hdIssues AS i ON c.CategoryID = i.CategoryID
WHERE (i.StatusID = 3)
GROUP BY c.CategoryID, c.Name, TotalOutages
```
Am I close? Please advise, and thanks in advance.
**EDIT**: This is a mock up of what the result should look like:
[](https://i.stack.imgur.com/DHRuZ.png)
|
If I understand your needs correctly, this query should give you the desired result:
```
SELECT x.* ,
y.SumIsOutage
FROM (SELECT c.CategoryID ,
c.Name ,
COUNT(i.IssueID) AS TotalIssues ,
AVG(DATEDIFF(MI, i.StartDate, i.ResolvedDate)) AS AvgDuration
FROM dbo.hdCategories AS c
INNER JOIN dbo.hdIssues AS i ON c.CategoryID = i.CategoryID
WHERE (i.StatusID = 3)
GROUP BY c.CategoryID, c.Name ) x
LEFT JOIN (SELECT DISTINCT i.CategoryId,
SUM(CONVERT(INT,i.isOutage)) OVER (PARTITION BY i.CategoryId) AS SumIsOutage
FROM dbo.hdIssues AS i ) y
ON x.CategoryId = y.CategoryId
```
|
You can use conditional aggregation. Also, you shouldn't `group by` totaloutages
```
SELECT
c.CategoryID
, c.Name
, COUNT(i.IssueID) AS TotalIssues
, AVG(DATEDIFF(MI, i.StartDate, i.ResolvedDate)) AS AvgDuration
,sum(case when IsOutage = 1 then 1 else 0 end) AS TotalOutages
FROM dbo.hdCategories AS c INNER JOIN
dbo.hdIssues AS i ON c.CategoryID = i.CategoryID
WHERE i.StatusID = 3
GROUP BY c.CategoryID, c.Name
```
|
: Select query with sum() as subquery
|
[
"",
"sql",
"sql-server-2008",
""
] |
# Problem
We are trying to understand why executing the same code by calling a stored procedure versus executing the stored procedure's contents in a query window shows very different execution times, even though both return exactly the same result set of 183 rows.
# Test1
Executing the following SP from SSMS takes 5 minutes to return the results.
> EXEC uspFleetSummaryReportSelectByDateCommand @UserID = 1468, @Date = '2015-09-28'
For reference this is the SP detail:
```
CREATE PROCEDURE [dbo].[uspFleetSummaryReportSelectByDateCommand]
(
@UserID int,
@Date DateTime
)
AS
DECLARE @CustomerID int
SET @CustomerID = (Select CustomerID FROM [User] WHERE UserID = @UserID)
SELECT j.JourneyID,
j.DeviceID,
j.StartDate,
j.EndDate,
ISNULL(JourneyDistance, 0.0) AS [JourneyDistance],
CONVERT(VARCHAR(8), DATEADD(SECOND, DATEDIFF(SECOND, j.StartDate, j.EndDate), 0), 114) AS [Duration],
v.Registration,
v.InitOdometer,
jt.Name AS [JourneyType],
dt.Name AS [DeviceType],
PrivateJourney = (dbo.fxIsPrivateJourney(j.JourneyTypeID, j.DeviceID, @UserID)),
CONVERT(VARCHAR(8), DATEADD(SECOND, ISNULL(e.IdleTime, 0), 0), 114) AS [IdleTime]
FROM Journey j WITH (NOLOCK)
INNER JOIN Vehicle v WITH (NOLOCK) ON v.DeviceID = j.DeviceID
INNER JOIN JourneyType jt WITH (NOLOCK) ON jt.JourneyTypeID = j.JourneyTypeID
INNER JOIN Device d WITH (NOLOCK) ON d.DeviceID = j.DeviceID
INNER JOIN Configuration config WITH (NOLOCK) ON config.ConfigurationID = d.ConfigurationID
INNER JOIN DeviceType dt WITH (NOLOCK) ON dt.DeviceTypeID = config.DeviceTypeID
LEFT OUTER JOIN (
SELECT
e.JourneyId,
SUM(DATEDIFF(SECOND, e.StartDateTime, e.EndDateTime)) AS [IdleTime]
FROM [Event] e WITH (NOLOCK)
WHERE e.JourneyId = JourneyID AND e.EventType = 4/*Idle Event*/
GROUP BY e.JourneyId
) e ON e.JourneyId = j.JourneyID
WHERE j.StartDate BETWEEN @Date AND DATEADD(DAY,1,@Date)
AND (j.JourneyDistance IS NOT NULL)
AND DATEDIFF(MINUTE,j.StartDate,ISNULL(j.EndDate,getdate())) > 0
AND j.DeviceID IN (Select v.DeviceID
FROM Vehicle v WITH (NOLOCK)
INNER JOIN Customer c WITH (NOLOCK) ON c.CustomerID = v.CustomerID
INNER JOIN [User] u ON u.CustomerID = c.CustomerID
WHERE v.CustomerID = @CustomerID AND u.UserID = @UserID
AND (v.LevelOneID = u.LevelOneID Or u.LevelOneID is null)
AND (v.LevelTwoID = u.LevelTwoID Or u.LevelTwoID is null)
AND (v.LevelThreeID = u.LevelThreeID Or u.LevelThreeID is null)
AND (v.LevelFourID = u.LevelFourID Or u.LevelFourID is null)
AND (v.LevelFiveID = u.LevelFiveID Or u.LevelFiveID is null)
AND (v.DriverID = u.LevelSixID Or u.LevelSixID is null)
AND ISNULL(v.HideFromCustomer,0) != 1
)
ORDER BY Registration,j.JourneyID
```
# Test2
But executing the same SP code with the variables set inline takes 10 seconds to return the results.
Please find below the same SP with the variables set. The following script is executed from SSMS query window.
```
DECLARE @UserID INT = 13651
DECLARE @Date DATETIME = '2015-09-28'
DECLARE @CustomerID int
SET @CustomerID = (Select CustomerID FROM [User] WHERE UserID = @UserID)
SELECT j.JourneyID,
j.DeviceID,
j.StartDate,
j.EndDate,
ISNULL(JourneyDistance, 0.0) AS [JourneyDistance],
CONVERT(VARCHAR(8), DATEADD(SECOND, DATEDIFF(SECOND, j.StartDate, j.EndDate), 0), 114) AS [Duration],
v.Registration,
v.InitOdometer,
jt.Name AS [JourneyType],
dt.Name AS [DeviceType],
PrivateJourney = (dbo.fxIsPrivateJourney(j.JourneyTypeID, j.DeviceID, @UserID)),
CONVERT(VARCHAR(8), DATEADD(SECOND, ISNULL(e.IdleTime, 0), 0), 114) AS [IdleTime]
FROM Journey j WITH (NOLOCK)
INNER JOIN Vehicle v WITH (NOLOCK) ON v.DeviceID = j.DeviceID
INNER JOIN JourneyType jt WITH (NOLOCK) ON jt.JourneyTypeID = j.JourneyTypeID
INNER JOIN Device d WITH (NOLOCK) ON d.DeviceID = j.DeviceID
INNER JOIN Configuration config WITH (NOLOCK) ON config.ConfigurationID = d.ConfigurationID
INNER JOIN DeviceType dt WITH (NOLOCK) ON dt.DeviceTypeID = config.DeviceTypeID
LEFT OUTER JOIN (
SELECT
e.JourneyId,
SUM(DATEDIFF(SECOND, e.StartDateTime, e.EndDateTime)) AS [IdleTime]
FROM [Event] e WITH (NOLOCK)
WHERE e.JourneyId = JourneyID AND e.EventType = 4/*Idle Event*/
GROUP BY e.JourneyId
) e ON e.JourneyId = j.JourneyID
WHERE j.StartDate BETWEEN @Date AND DATEADD(DAY,1,@Date)
AND (j.JourneyDistance IS NOT NULL)
AND DATEDIFF(MINUTE,j.StartDate,ISNULL(j.EndDate,getdate())) > 0
AND j.DeviceID IN (Select v.DeviceID
FROM Vehicle v WITH (NOLOCK)
INNER JOIN Customer c WITH (NOLOCK) ON c.CustomerID = v.CustomerID
INNER JOIN [User] u ON u.CustomerID = c.CustomerID
WHERE v.CustomerID = @CustomerID AND u.UserID = @UserID
AND (v.LevelOneID = u.LevelOneID Or u.LevelOneID is null)
AND (v.LevelTwoID = u.LevelTwoID Or u.LevelTwoID is null)
AND (v.LevelThreeID = u.LevelThreeID Or u.LevelThreeID is null)
AND (v.LevelFourID = u.LevelFourID Or u.LevelFourID is null)
AND (v.LevelFiveID = u.LevelFiveID Or u.LevelFiveID is null)
AND (v.DriverID = u.LevelSixID Or u.LevelSixID is null)
AND ISNULL(v.HideFromCustomer,0) != 1
)
ORDER BY Registration,j.JourneyID
```
# Debugging to date
Comparing the two statements side by side they are identical bar the setting of the variables.
Comparing the result sets side by side they are identical.
Selecting the variable CUSTOMERID in isolation takes milliseconds.
Dates variables passed are in the same format.
We have run this test multiple times to rule out cache related issue.
The query execution plan was examined for both tests. When executing the SP in TEST1, it was clear an index was missing on the EVENT table.
# Index added
```
CREATE NONCLUSTERED INDEX [290915_EventTypeJourneyID, EventTypeJID,>] ON [dbo].[Event]
(
[EventType] ASC,
[JourneyId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
```
# Result
Execution time for TEST1 dropped to 1 second.
# Question
OK, so the principal issue was resolved, but fundamentally I don't understand the performance gap between the two tests, which essentially run the same code. It's the same code, so it should be using the same indexes and the execution times should be similar.
Thank you for any insights into this behaviour.
# Reference
SQL Server 2008 64-bit Standard Edition.
Table.JOURNEY (350m rows)
```
CREATE TABLE [dbo].[Journey](
[JourneyID] [int] IDENTITY(1,1) NOT NULL,
[StartAddress] [varchar](500) NULL,
[StartPostcode] [varchar](50) NULL,
[EndAddress] [varchar](500) NULL,
[EndPostcode] [varchar](50) NULL,
[JourneyTypeID] [int] NULL,
[Comment] [varchar](500) NULL,
[DriverID] [int] NULL,
[StartDate] [datetime] NULL,
[EndDate] [datetime] NULL,
[IdleTimeEngineOn] [int] NULL,
[TimeSinceLastJourney] [int] NULL,
[JourneyDistance] [decimal](8, 2) NULL,
[DeviceID] [int] NOT NULL,
[tempJourneyID] [int] NULL,
[tempCustomerID] [int] NULL,
CONSTRAINT [Journey_PK] PRIMARY KEY CLUSTERED
(
[JourneyID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
ALTER TABLE [dbo].[Journey] WITH CHECK ADD CONSTRAINT [Device_Journey_FK1] FOREIGN KEY([DeviceID])
REFERENCES [dbo].[Device] ([DeviceID])
GO
ALTER TABLE [dbo].[Journey] CHECK CONSTRAINT [Device_Journey_FK1]
GO
ALTER TABLE [dbo].[Journey] WITH CHECK ADD CONSTRAINT [Driver_Journey_FK1] FOREIGN KEY([DriverID])
REFERENCES [dbo].[Driver] ([DriverID])
GO
ALTER TABLE [dbo].[Journey] CHECK CONSTRAINT [Driver_Journey_FK1]
GO
ALTER TABLE [dbo].[Journey] WITH NOCHECK ADD CONSTRAINT [JourneyType_Journey_FK1] FOREIGN KEY([JourneyTypeID])
REFERENCES [dbo].[JourneyType] ([JourneyTypeID])
GO
ALTER TABLE [dbo].[Journey] CHECK CONSTRAINT [JourneyType_Journey_FK1]
GO
```
Table.EVENT (36m rows)
```
CREATE TABLE [dbo].[Event](
[EventID] [int] IDENTITY(1,1) NOT NULL,
[StartDateTime] [datetime] NULL,
[EndDateTime] [datetime] NULL,
[StartLocationID] [int] NOT NULL,
[EndLocationID] [int] NULL,
[AlertRaised] [bit] NULL,
[EventRuleID] [int] NULL,
[DeviceID] [int] NOT NULL,
[EventMessage] [varchar](max) NULL,
[TopSpeed] [decimal](4, 1) NULL,
[SpeedZone] [int] NULL,
[EventType] [int] NULL,
[ImpactId] [int] NULL,
[NotificationStatus] [bit] NULL,
[CableBreakZone0] [int] NULL,
[CableBreakDistance0] [int] NULL,
[CableBreakZone1] [int] NULL,
[CableBreakDistance1] [int] NULL,
[AdValue] [int] NULL,
[DriverId] [int] NULL,
[VehicleId] [int] NULL,
[JourneyId] [int] NULL,
CONSTRAINT [Event_PK] PRIMARY KEY CLUSTERED
(
[EventID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
ALTER TABLE [dbo].[Event] WITH CHECK ADD CONSTRAINT [Device_Event_FK1] FOREIGN KEY([DeviceID])
REFERENCES [dbo].[Device] ([DeviceID])
GO
ALTER TABLE [dbo].[Event] CHECK CONSTRAINT [Device_Event_FK1]
GO
ALTER TABLE [dbo].[Event] WITH CHECK ADD CONSTRAINT [Event_Impact_FK] FOREIGN KEY([ImpactId])
REFERENCES [dbo].[Impact] ([ImpactID])
GO
ALTER TABLE [dbo].[Event] CHECK CONSTRAINT [Event_Impact_FK]
GO
ALTER TABLE [dbo].[Event] WITH CHECK ADD CONSTRAINT [EventRule_Event_FK1] FOREIGN KEY([EventRuleID])
REFERENCES [dbo].[EventRule] ([EventRuleID])
GO
ALTER TABLE [dbo].[Event] CHECK CONSTRAINT [EventRule_Event_FK1]
GO
ALTER TABLE [dbo].[Event] WITH CHECK ADD CONSTRAINT [FK_Event_Driver] FOREIGN KEY([DriverId])
REFERENCES [dbo].[Driver] ([DriverID])
GO
ALTER TABLE [dbo].[Event] CHECK CONSTRAINT [FK_Event_Driver]
GO
ALTER TABLE [dbo].[Event] WITH CHECK ADD CONSTRAINT [FK_Event_Journey] FOREIGN KEY([JourneyId])
REFERENCES [dbo].[Journey] ([JourneyID])
ON DELETE CASCADE
GO
ALTER TABLE [dbo].[Event] CHECK CONSTRAINT [FK_Event_Journey]
GO
ALTER TABLE [dbo].[Event] WITH CHECK ADD CONSTRAINT [FK_Event_Vehicle] FOREIGN KEY([VehicleId])
REFERENCES [dbo].[Vehicle] ([VehicleID])
ON DELETE CASCADE
GO
ALTER TABLE [dbo].[Event] CHECK CONSTRAINT [FK_Event_Vehicle]
GO
```
|
There are several things that affect query plan creation in SQL Server and can make really strange things seem to happen.
1. The plan for stored procedure is (normally) created in the first execution, using the parameters from that time. That plan is saved and used for all the future executions, even if the parameters change.
* The procedure can get a new plan if for example statistics change.
* If the procedure is such that the optimal plan is totally different depending on the values that are passed in or the procedure for example has a lot of so called optional parameters (for example field = @variable or @variable is NULL -style of coding) -- this can lead to really bad situations and this is usually what is referred as parameter sniffing.
* The parameters used to compile the plan can be seen in the properties of the leftmost object in the plan.
2. If you run the same statement in management studio, but the parameters are local variables that are assigned in the beginning, the statement will be optimized with unknown values, because the values are not known at that point, even if it seems obvious what the values will be.
* This is the same thing with procedures that use local variable defined inside the procedure.
3. If you are running applications that have different session settings, the plans created in step 1. might not be usable, and a different plan will be stored and used. This can cause a situation where it seems that the same procedure behaves different when executed with the same parameters in the application and management studio.
* The attributes can be checked from [sys.dm\_exec\_plan\_attributes](https://msdn.microsoft.com/en-us/library/ms189472.aspx).
For more details, you could check out for example [Slow in the Application, Fast in SSMS? Understanding Performance Mysteries](http://www.sommarskog.se/query-plan-mysteries.html) by Erland Sommarskog.
Edit: And to understand what's happening, always look at both actual execution plan and statistics IO output. Those should tell you why something is slower than the other (unless it's blocking, waiting etc related)
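For illustration, one commonly cited mitigation for parameter-sniffing problems (not part of the answer above; a sketch only) is to force a fresh plan so it is compiled for the actual runtime values:

```
-- appended to the main SELECT inside the procedure:
-- trades extra compile time for a plan tailored to these parameter values
SELECT j.JourneyID, j.StartDate, j.EndDate
FROM Journey j
WHERE j.StartDate BETWEEN @Date AND DATEADD(DAY, 1, @Date)
OPTION (RECOMPILE);
```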
|
* [Source](https://stackoverflow.com/questions/440944/sql-server-query-fast-but-slow-from-procedure)
The difference between running a query in a batch of scripts in SSMS and running it in a stored procedure shows up in at least two ways:
1. Before creating your SP, use `SET ANSI_NULLS ON`.
2. Use local variables instead of the SP arguments in your query, like this:
```
DECLARE @pUserID INT, @pDate DATETIME;
SELECT @pUserID = @UserID, @pDate = @Date;
SELECT ... @pUserID ...;
```
|
Executing same code via exec SP versus exec SP code in query window report same results but different execution times
|
[
"",
"sql",
"sql-server",
""
] |
I've been trying to write this query for an hour, but SQL Developer always throws an error.
```
SELECT d.driver_name, COUNT(*) AS cnt
FROM Drivers d
JOIN Fastest_laps fl ON d.ID_driver = fl.ID_driver
GROUP BY d.driver_name
HAVING cnt = MAX(cnt);
```
> 904. 00000 - "%s: invalid identifier"
>
> Error at last line, column 20.
So I've figured out another solution, but another error is thrown:
```
SELECT d.driver_name, COUNT(*) as cnt
FROM Drivers d
JOIN Fastest_laps fl ON d.ID_driver = fl.ID_driver
GROUP BY d.driver_name
HAVING COUNT(*) = MAX(COUNT(*));
```
> 935. 00000 - "group function is nested too deeply"
>
> Error at last line, column 25.
EDIT: Thanks guys, you are awesome; almost all the replies work, but I have to choose one...
|
Use a window function:
```
SELECT driver_name, cnt
FROM (SELECT d.driver_name, COUNT(*) AS cnt,
MAX(COUNT(*)) OVER () as MAXcnt
FROM Drivers d JOIN
Fastest_laps fl
ON d.ID_driver = fl.ID_driver
GROUP BY d.driver_name
) d
WHERE cnt = MAXcnt;
```
You can also express this using `RANK()` or `DENSE_RANK()`:
```
SELECT driver_name, cnt
FROM (SELECT d.driver_name, COUNT(*) AS cnt,
RANK() OVER (ORDER BY COUNT(*) DESC) as seqnum
FROM Drivers d JOIN
Fastest_laps fl
ON d.ID_driver = fl.ID_driver
GROUP BY d.driver_name
) d
WHERE seqnum = 1;
```
The advantage to this approach is that you can use `ROW_NUMBER()` instead and get exactly one row, even if multiple drivers have the same maximum.
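That `ROW_NUMBER()` variant would differ only in the window function (a sketch):

```
SELECT driver_name, cnt
FROM (SELECT d.driver_name, COUNT(*) AS cnt,
             ROW_NUMBER() OVER (ORDER BY COUNT(*) DESC) as seqnum
      FROM Drivers d JOIN
           Fastest_laps fl
           ON d.ID_driver = fl.ID_driver
      GROUP BY d.driver_name
     ) d
WHERE seqnum = 1;
```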
|
Try this. I ordered by `cnt` in descending order and then selected the top row. You can edit the query (e.g. `rownum <= 2`) to get the top 2 rows, and so on.
```
with tbl1 as
(SELECT d.driver_name as driver_name, COUNT(*) AS cnt
FROM Drivers d
JOIN Fastest_laps fl ON d.ID_driver = fl.ID_driver
GROUP BY d.driver_name
order by cnt desc
)
select driver_name,cnt from tbl1
where cnt = (select cnt from tbl1 where rownum = 1)
```
|
Invalid identifier error, Oracle
|
[
"",
"sql",
"database",
"oracle",
""
] |
Table :
```
schedule_id job_id next_run_date next_run_time
------------------------------------------------------
221 D23EA7B2 20151005 90000
222 18EDFB21 20151020 90000
242 90283725 20151001 170000
239 4B69C670 20151011 90000
```
Result :
```
schedule_id job_id next_run_date_Time
--------------------------------------------
221 D23EA7B2 2015-10-05 09:00 AM
222 18EDFB21 2015-10-20 09:00 AM
242 90283725 2015-10-01 05:00 PM
239 4B69C670 2015-10-11 09:00 AM
```
How to join `next_run_date` and `next_run_time` together as a single column?
## My Query used in SSRS 2008
```
SELECT c.Name AS ReportName,[LastRunTime],
'Next Run Date' = CASE next_run_date WHEN 0 THEN null ELSE
substring(convert(varchar(15),next_run_date),1,4) + '/' +
substring(convert(varchar(15),next_run_date),5,2) + '/' +
substring(convert(varchar(15),next_run_date),7,2)
END,
--Need to add next_run_date_Time here
FROM
dbo.[Catalog] c
INNER JOIN dbo.[Subscriptions] S ON c.ItemID = S.Report_OID
INNER JOIN dbo.ReportSchedule R ON S.SubscriptionID = R.SubscriptionID
INNER JOIN msdb.dbo.sysjobs J ON Convert(nvarchar(128),R.ScheduleID) = J.name
INNER JOIN msdb.dbo.sysjobschedules JS ON J.job_id = JS.job_id
ORDER BY S.LastRunTime DESC
```
|
Assuming both are `varchar`, try this:
```
SELECT schedule_id, job_id,
CONVERT(datetime, next_run_date, 112)
+ CONVERT(time,
SUBSTRING(next_run_time, 1, LEN(next_run_time) - 4) + ':'
+ LEFT(RIGHT(next_run_time, 4), 2) + ':'
+ RIGHT(next_run_time, 2),
114) AS next_run_date_Time
FROM my_table
```
Here is a [fiddle](http://sqlfiddle.com/#!3/7d827/3)
If those fields are numbers, you can convert them in a sub-query first and then apply the same query above:
```
SELECT schedule_id, job_id,
CONVERT(datetime, next_run_date, 112)
+ CONVERT(time,
SUBSTRING(next_run_time, 1, LEN(next_run_time) - 4) + ':'
+ LEFT(RIGHT(next_run_time, 4), 2) + ':'
+ RIGHT(next_run_time, 2),
114) AS next_run_date_Time
FROM (SELECT schedule_id, job_id
, CAST(next_run_date AS VARCHAR(8)) AS next_run_date
, CAST(next_run_time AS VARCHAR(6)) AS next_run_time
FROM my_table) AS t
```
Here is a [fiddle](http://sqlfiddle.com/#!3/6d46d/5)
**EDIT** you can update your query to use this concept like this:
```
SELECT c.Name AS ReportName,[LastRunTime],
CONVERT(datetime, next_run_date, 112)
+ CONVERT(time,
SUBSTRING(next_run_time, 1, LEN(next_run_time) - 4) + ':'
+ LEFT(RIGHT(next_run_time, 4), 2) + ':'
+ RIGHT(next_run_time, 2),
114) AS 'Next Run Date'
FROM
dbo.[Catalog] c
INNER JOIN dbo.[Subscriptions] S ON c.ItemID = S.Report_OID
INNER JOIN dbo.ReportSchedule R ON S.SubscriptionID = R.SubscriptionID
INNER JOIN msdb.dbo.sysjobs J ON Convert(nvarchar(128),R.ScheduleID) = J.name
INNER JOIN (SELECT schedule_id, job_id
, CAST(next_run_date AS VARCHAR(8)) AS next_run_date
, CAST(next_run_time AS VARCHAR(6)) AS next_run_time
FROM msdb.dbo.sysjobschedules) AS JS ON J.job_id = JS.job_id
ORDER BY S.LastRunTime DESC
```
|
Here is one way to do it:
```
-- Create sample table and data
CREATE TABLE tbl (
next_run_date char(8),
next_run_time varchar(6)
)
INSERT INTO tbl VALUES
(20151005, 93020),
(20151001, 170000)
```
Using cte1 to pad next\_run\_time with a leading zero if needed, and cte2 to break the string into a "normal" time representation:
```
;with cte1 as
(
select next_run_date,
right('000000'+ next_run_time, 6) as run_time_base
FROM tbl
), cte2 as
(
select next_run_date,
left(run_time_base, 2) + ':' +
substring(run_time_base, 3, 2) + ':' +
right(run_time_base, 2) as run_time
from cte1
)
select cast(next_run_date as datetime) + cast(run_time as datetime) as run_datetime
from cte2
-- clean up
drop table tbl
```
Results:
```
run_datetime
-----------------------
2015-10-05 09:30:20.000
2015-10-01 17:00:00.000
```
|
How to Combine dates and time in separate fields in SQL
|
[
"",
"sql",
"sql-server",
"date",
"ssrs-2008",
""
] |
(Editing my post with data and expected result as per replies.)
I have a table which looks as follows. I would like to compute the difference in `Score` between two adjacent records.
```
ID TimeStamp Score
1002010 9/26/2015 11:24:08 PM 32
1002010 9/28/2015 10:12:57 PM 38
```
This is what I have tried.
```
SELECT
[current].ID,
[current].Score,
ISNULL(convert(int,[next].Score), 0) - convert(int,[current].Score)
FROM
RiskPredLog AS [current]
LEFT JOIN
RiskPredLog AS [next] ON [next].ID = (SELECT MIN(ID)
FROM TableName
WHERE ID > [current].ID)
WHERE
[current].ID = '1002010'
```
But I always get the difference to be -1.
Expected result
```
ID TimeStamp Score
-----------------------------------------------
1002010 9/26/2015 11:24:08 PM NULL
1002010 9/28/2015 10:12:57 PM 6
```
|
You can use `lead` to get the value from the next row and use it for subtraction. Note that this function is available in SQL Server 2012 and later versions.
If you need `null` to be the result when there is no next row, remove the `isnull` wrapper.
```
SELECT
ID,
Score,
ISNULL(convert(int, lead(Score) over(partition by id order by timestamp)), 0)
- convert(int, Score)
FROM RiskPredLog
-- where ID = '1002010'
```
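For reference, the same `LEAD`-minus-current computation can be reproduced on a toy table; a small sketch using Python's built-in SQLite driver (needs SQLite >= 3.25 for window functions; the table contents mirror the sample data):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE RiskPredLog (ID TEXT, TimeStamp TEXT, Score INT);
INSERT INTO RiskPredLog VALUES
  ('1002010', '2015-09-26 23:24:08', 32),
  ('1002010', '2015-09-28 22:12:57', 38);
""")
rows = con.execute("""
    SELECT ID, Score,
           LEAD(Score) OVER (PARTITION BY ID ORDER BY TimeStamp) - Score AS diff
    FROM RiskPredLog
    ORDER BY TimeStamp
""").fetchall()
# diff is 6 on the first row and NULL (None) on the last row of the ID
```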
|
You can implement `ROW_NUMBER()` in place of `lead/lag` if you are using pre-2012 SQL Server:
```
SELECT
[current].ID,
[current].Score,
ISNULL(convert(int,[next].Score), 0) - convert(int,[current].Score)
FROM
(Select *,ROW_NUMBER() OVER (ORDER BY ID,TimeStamp) as rn from RiskPredLog) AS [current]
LEFT JOIN
(Select *,ROW_NUMBER() OVER (ORDER BY ID,TimeStamp) as rn from RiskPredLog) AS [next]
ON [next].rn = [current].rn + 1
```
|
Getting difference of value between two adjacent records in SQL Server
|
[
"",
"sql",
"sql-server",
""
] |
I have a table `issue` with following structure:
```
+----+---------+-------------+------------+
| id | project | new_status | updated_at |
+----+---------+-------------+------------+
| 1 | 1 | New | 12:41:18 |
| 1 | 1 | In progress | 12:47:43 |
| 1 | 1 | Resolved | 17:05:29 |
+----+---------+-------------+------------+
```
I need to implement a query that returns the time that every issue of a particular project spent in each status, something like this:
```
+----+---------+-------------+------------+
| id | project | new_status | time_diff |
+----+---------+-------------+------------+
| 1 | 1 | New | 00:06:25 |
| 1 | 1 | In progress | 04:17:46 |
+----+---------+-------------+------------+
```
How can I get this? Preferably no special-concrete-db-features should be used, i.e. only pure SQL. But if it matters - I'm using PostgreSQL.
|
Since you already have some examples of doing it with lead() functions (which are definitely database specific), here's a different option: <http://sqlfiddle.com/#!15/497de/18>
```
with t1 (id, project, new_status, updated_at, ndx) as (
select id, project, new_status, updated_at,
row_number() over (partition by id, project order by updated_at)
from issue
)
,
t2 (id, project, new_status, starttime, endtime) as (
select t1.id, t1.project, t1.new_status, t1.updated_at, t2.updated_at
from t1
left join t1 t2
on t2.id = t1.id
and t2.project = t1.project
and t2.ndx = t1.ndx + 1
)
,
t3 (id, project, new_status, time_diff) as (
select id, project, new_status, endtime - starttime
from t2
)
select id, project, new_status, to_char(time_diff, 'HH24:MI:SS') as time_diff
from t3
where time_diff is not null
```
This option uses common table expressions to create an index using `row_number()` for each of your projects, and then left joins the table to itself based on that index; that is `t2.ndx = t1.ndx + 1`.
From there, it's a matter of calculating the difference in time and formatting it for display.
If you'd like to see how much time has passed with the issue at 'Resolved' status, then use something like `coalesce(t2.updated_at,localtime)` to get the current time if `t2.updated_at` is null.
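Whichever SQL variant you use, the underlying operation is just pairing each row with its successor; a database-independent Python sketch of that pairing:

```python
from datetime import datetime, timedelta

def status_durations(rows):
    """rows: (new_status, updated_at) pairs for one issue.
    Returns [(status, time spent before the next status change)]."""
    ordered = sorted(rows, key=lambda r: r[1])
    return [(status, nxt - ts)
            for (status, ts), (_, nxt) in zip(ordered, ordered[1:])]
```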
|
I wrote this query on the fly, so it is not tested:
```
SELECT id, project, new_status, (updated_at - nextUpdate) AS time_diff
--or CAST((updated_at - nextUpdate) AS time) AS time_diff
FROM (
SELECT *,
LEAD(updated_at) OVER (PARTITION BY project ORDER BY updated_at) AS nextUpdate
FROM yourTable) dt
WHERE nextUpdate IS NOT NULL;
```
A related answer is [this](https://stackoverflow.com/a/10065916/4519059).
|
SELECT query that uses data from another row
|
[
"",
"sql",
"postgresql",
"window-functions",
""
] |
Given a table in Google BigQuery:
```
User Timestamp
A TIMESTAMP(12/05/2015 12:05:01.8023)
B TIMESTAMP(9/29/2015 12:15:01.0323)
B TIMESTAMP(9/29/2015 13:05:01.0233)
A TIMESTAMP(9/29/2015 14:05:01.0432)
C TIMESTAMP(8/15/2015 5:05:01.0000)
B TIMESTAMP(9/29/2015 14:06:01.0233)
A TIMESTAMP(9/29/2015 14:06:01.0432)
```
Is there a simple way to compute:
```
User Maximum_Number_of_Events_this_User_Had_in_One_Hour
A 2
B 3
C 1
```
where the time window of one hour is a parameter?
I tried doing this myself using a combination of LAG and partition functions by building off of these two questions:
[BigQuery SQL for 28-day sliding window aggregate (without writing 28 lines of SQL)](https://stackoverflow.com/questions/27557919/bigquery-sql-for-28-day-sliding-window-aggregate-without-writing-28-lines-of-sq)
[Bigquery SQL for sliding window aggregate](https://stackoverflow.com/questions/22415647/bigquery-sql-for-sliding-window-aggregate)
But find those posts are too dissimilar since I am not finding number of people per time window, but instead finding max number of events per person within a time window.
|
Here is an efficient succinct way to do it that exploits the ordered structure of timestamps.
```
SELECT
user,
MAX(per_hour) AS max_event_per_hour
FROM
(
SELECT
user,
COUNT(*) OVER (PARTITION BY user ORDER BY timestamp RANGE BETWEEN 60 * 60 * 1000000 PRECEDING AND CURRENT ROW) as per_hour,
timestamp
FROM
[dataset_example_in_question_user_timestamps]
)
GROUP BY user
```
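The aggregation itself is a classic sliding-window count; as a cross-check, here is a Python sketch of the same computation using a two-pointer scan (names are made up for illustration):

```python
from collections import defaultdict
from datetime import datetime, timedelta

def max_events_per_window(events, window=timedelta(hours=1)):
    """events: iterable of (user, timestamp) pairs.
    Returns {user: max number of events within any window-long span}."""
    by_user = defaultdict(list)
    for user, ts in events:
        by_user[user].append(ts)
    result = {}
    for user, times in by_user.items():
        times.sort()
        best, lo = 0, 0
        for hi, t in enumerate(times):
            while t - times[lo] > window:   # shrink the window from the left
                lo += 1
            best = max(best, hi - lo + 1)
        result[user] = best
    return result
```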
|
Try the query below for BigQuery. I haven't tested it much, but it looks workable to me:
```
SELECT
User, Max(events) as Max_Events
FROM (
SELECT
b.User as User,
b.Timestamp as Timestamp,
COUNT(1) as Events
FROM [your_dataset.your_table] as b
JOIN (
SELECT User, Timestamp
FROM [your_dataset.your_table]
) as w
ON w.User = b.User
WHERE ROUND((TIMESTAMP_TO_SEC(TIMESTAMP(w.Timestamp)) -
TIMESTAMP_TO_SEC(TIMESTAMP(b.Timestamp))) / 3600, 1) BETWEEN 0 AND 1
GROUP BY 1, 2
)
GROUP BY 1
```
|
BigQuery: Computing aggregate over window of time for each person
|
[
"",
"sql",
"aggregate-functions",
"google-bigquery",
"window-functions",
""
] |
I got a lookup list table of valid US state abbreviations and a source table:
**States** lookup and **Source** tables respectively
```
+-------+ +-----------------+
|States | |ID | Location |
+=======+ +=================+
| AK | | 1 | Madrid |
--------- -------------------
| AL | | 2 | AK |
--------- -------------------
| AR | | 3 | AR |
--------- -------------------
| ... | | ..| ... |
--------- -------------------
```
How do I create an INSERT statement into a **Target** table such that if `Location` is a valid state, it will be placed in the `State` column and if it's not, it would be placed in the `Other Locale` column?
Expected **Target** table output
```
+------------+-----------------
|ID | State | Other Locale |
+============+=================
| 1 | | Madrid |
-------------------------------
| 2 | AK | |
-------------------------------
| 3 | AR | |
-------------------------------
| ..| ... | ... |
-------------------------------
```
|
Since nobody has suggested using NULLIF, I might as well throw that in.
```
DECLARE @Location TABLE (Id INT, Location VARCHAR(MAX))
DECLARE @States TABLE (States VARCHAR(MAX))
INSERT INTO @Location VALUES
(1, 'Madrid'), (2, 'AK'), (3, 'AR')
INSERT INTO @States VALUES
('AK'), ('AL'), ('AR')
SELECT l.Id,
s.States [State],
NULLIF(l.Location,s.States) [Other Locale]
FROM @Location l
LEFT JOIN @States s ON l.Location = s.States
```
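To see the `NULLIF` behavior concretely, the same query can be run on a toy copy of the tables; a small sketch using Python's built-in SQLite driver:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE states (state TEXT);
INSERT INTO states VALUES ('AK'), ('AL'), ('AR');
CREATE TABLE source (id INT, location TEXT);
INSERT INTO source VALUES (1, 'Madrid'), (2, 'AK'), (3, 'AR');
""")
rows = con.execute("""
    SELECT src.id, st.state, NULLIF(src.location, st.state) AS other_locale
    FROM source src LEFT JOIN states st ON src.location = st.state
    ORDER BY src.id
""").fetchall()
# NULLIF blanks out the location exactly when it matched a state
```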
|
Do a LEFT JOIN and check for NULLs due to non-matching states:
```
insert into target
select source.id, states.state,
case when states.state is null then location end as Other_Locale
from source left join states
on source.location = states.state
```
|
How to place value in column A if it exist in another table, otherwise, place in column B
|
[
"",
"sql",
"t-sql",
""
] |
I have collected data from an API to build a history. Initially, I saved *all* values every five minutes. Later, I changed my program to only save data that has changed.
Now, I want to clean up my old data and remove all values where the `count` has not changed from the previous record within the same `account` and `id`.
```
account id count time
42 12147 492 2015-09-20 11:31:14.0
42 12147 492 2015-09-20 11:36:19.0 // delete
13 12147 246 2015-09-20 11:31:14.0
2 12253 183 2015-09-20 11:36:19.0
2 19684 805 2015-09-20 12:00:41.0 // note in next comment
2 19684 810 2015-09-20 12:05:41.0
2 19684 805 2015-09-20 12:10:41.0 // we had this combination, but do not delete this record because the previous value was different
2 19684 805 2015-09-20 12:15:41.0 // delete
2 19684 805 2015-09-20 12:20:41.0 // delete
2 19684 806 2015-09-20 12:25:41.0
```
I tried to solve this with a `group by` over `account`, `id` and `count`. With that approach, however, it will delete *non-consecutive* duplicates, i.e., if a record has the same value again after some time, it will fall into the same group.
I also thought about writing a small script where I iterate over all data and delete the current row if `account`, `id`, and `count` are the same as the previous record, but I'm curious whether this is possible with a single SQL statement?
|
You can use the following query:
```
DELETE history
FROM history
INNER JOIN (SELECT MIN(time) AS minTime, account, id, count
FROM history
GROUP BY account, id, count) AS h
ON history.account = h.account AND history.id = h.id AND history.count = h.count
WHERE history.time > h.minTime
```
[**Demo here**](http://sqlfiddle.com/#!9/1f529d/1)
**EDIT:**
After the edit, I think there are still some errors in the OP's sample data (the `time` field should be in ascending order).
Using an additional assumption of a PK present in the table, you can use the following query:
```
SELECT pk
FROM history AS h1
WHERE account = (SELECT account
FROM history AS h2
WHERE h1.account = h2.account AND
h1.id = h2.id AND
h2.time < h1.time
ORDER BY time DESC
LIMIT 1)
AND
id = (SELECT id
FROM history AS h2
WHERE h1.account = h2.account AND
h1.id = h2.id AND
h2.time < h1.time
ORDER BY time DESC
LIMIT 1)
AND
count = (SELECT count
FROM history AS h2
WHERE h1.account = h2.account AND
h1.id = h2.id AND
h2.time < h1.time
ORDER BY time DESC
LIMIT 1)
```
in order to identify *to-be-deleted* records (see [this demo](http://sqlfiddle.com/#!9/e0f88/4)).
Now you can easily delete the unwanted rows using the `IN` operator:
```
DELETE FROM history
WHERE pk IN (
SELECT x.pk
FROM (
SELECT pk
FROM history AS h1
WHERE
account = (SELECT account
FROM history AS h2
WHERE h1.account = h2.account AND
h1.id = h2.id AND
h2.time < h1.time
ORDER BY time DESC
LIMIT 1)
AND
id = (SELECT id
FROM history AS h2
WHERE h1.account = h2.account AND
h1.id = h2.id AND
h2.time < h1.time
ORDER BY time DESC
LIMIT 1)
AND
count = (SELECT count
FROM history AS h2
WHERE h1.account = h2.account AND
h1.id = h2.id AND
h2.time < h1.time
ORDER BY time DESC
LIMIT 1)) AS x)
```
[**Demo here**](http://sqlfiddle.com/#!9/74a11/1)
**EDIT 2:**
Using variables in order to locate to-be-deleted `pk` values can lead to a considerably faster query:
```
SELECT pk
FROM (
SELECT pk, account, id, count, time,
@rn := IF (account = @acc AND id = @id AND count = @count,
@rn + 1, 1) AS rn,
@acc := account,
@id := id,
@count := count
FROM history
  CROSS JOIN (SELECT @rn := 0, @acc := 0, @id := 0, @count := 0) AS vars
ORDER BY account, id, time, count ) AS t
WHERE t.rn > 1
```
[**Demo here**](http://sqlfiddle.com/#!9/e0f88/8)
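The "keep only rows whose count changed from the previous row of the same (account, id)" rule is easy to state procedurally; here is a Python sketch that identifies the to-be-deleted rows from the sample data:

```python
def consecutive_dupes(rows):
    """rows: (account, id, count, time) tuples.
    Returns the rows to delete: those whose count equals the count of the
    previous (by time) record of the same (account, id)."""
    last = {}          # (account, id) -> last seen count
    to_delete = []
    for row in sorted(rows, key=lambda r: r[3]):
        key = (row[0], row[1])
        if last.get(key) == row[2]:
            to_delete.append(row)
        last[key] = row[2]
    return to_delete
```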
|
Valuing clarity, this is what I would recommend if one of my reports were trying to do this:
```
select *
from mytable
QUALIFY LAG(count, -1) OVER(PARTITION BY account, id ORDER BY time) != count
```
This returns the table without rows where the count from one row is the same as the count from the last one within the same account and id.
|
Delete only consecutive duplicate rows
|
[
"",
"mysql",
"sql",
"sql-delete",
""
] |
I have a column with dates in the format
> year/number\_of\_week\_in\_year/number\_of\_day\_of\_the\_week, for example:
>
> 2015015 = 01.01.2015
How do I write a query which converts this date to `RRRRmmdd` format?
|
Here's an untested answer for you, as I don't have access to SQL Server to fiddle around (on my phone at the moment).
The easy part is to get at DATE value for the year...
```
DATEADD(year, (input / 1000) - 1900, 0)
```
Then you need to add a certain number of days to it...
- 7 days for each week (not including week 1)
- 1 day of each day of the week
```
((input / 10) % 100 - 1) * 7
+ input % 10
```
Then deducting a number of days depending on the day of the week that year started on.
```
DATEPART(weekday, <your year as a date>)
```
Which seems to give...
```
DATEADD(
day,
((input / 10) % 100 - 1) * 7
+ input % 10
- DATEPART(weekday, DATEADD(year, (input / 1000) - 1900, 0)),
DATEADD(year, (input / 1000) - 1900, 0)
)
```
Using your example...
```
DATEADD(
day,
((2015015 / 10) % 100 - 1) * 7
+ 2015015 % 10
- DATEPART(weekday, DATEADD(year, (2015015 / 1000) - 1900, 0)),
    DATEADD(year, (2015015 / 1000) - 1900, 0)
)
```
=>
```
DATEADD(
day,
(01 - 1) * 7
+ 5
- DATEPART(weekday, DATEADD(year, 2015 - 1900, 0)),
DATEADD(year, 2015 - 1900, 0)
)
```
=>
```
DATEADD(
day,
0 * 7
+ 5
- DATEPART(weekday, '2015-01-01'),
'2015-01-01'
)
```
=>
```
DATEADD(
day,
0 * 7
+ 5
- 5,
'2015-01-01'
)
```
=>
```
'2015-01-01'
```
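The same arithmetic can be expressed procedurally; a Python sketch (assuming, as above, that weeks start on Sunday, i.e. DATEFIRST = 7):

```python
from datetime import date, timedelta

def week_date_to_date(n: int) -> date:
    """Convert YYYYWWD (week of year, day of week, Sunday = 1) to a date,
    using the same arithmetic as the T-SQL above."""
    year, week, day = n // 1000, (n // 10) % 100, n % 10
    jan1 = date(year, 1, 1)
    jan1_weekday = jan1.isoweekday() % 7 + 1   # Sunday = 1 ... Saturday = 7
    return jan1 + timedelta(days=(week - 1) * 7 + day - jan1_weekday)
```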
|
Here's a simple solution I threw together, probably not the cleverest way to do it but hopefully makes sense:
```
DECLARE @inDate CHAR(7),
@inYear CHAR(4),
@inWeek INT,
@inDay INT,
@OutDate DATETIME;
SET @inDate = '2015015';
SET @inYear = SUBSTRING(@inDate, 0, 5);
SET @inWeek = CAST(SUBSTRING(@inDate, 5, 2) AS INT) - 1 -- Reduce by 1 because it will be added to start of year
SET @inDay = CAST(SUBSTRING(@inDate, 7, 1) AS INT)
SET @OutDate = CAST(@inYear + '-01-01' AS DATETIME)
SET @OutDate = DATEADD(dd, -DATEPART(weekday, @OutDate) + @inDay, @OutDate)
SET @OutDate = DATEADD(ww, @inWeek, @OutDate)
PRINT @OutDate -- Gives Jan 1 2015
```
|
Convert week date to calendar date
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have scoured the documentation for Poco but could not find a way to retrieve the last inserted id for an INSERT INTO Table operation.
The [Statement::execute](http://pocoproject.org/docs/Poco.Data.Statement.html#7389) documentation says that it returns the number of records affected (which could be greater than one for a bulk insert).
If it makes any difference I'm currently using SQLite but would like a generic solution that works across different database engines.
|
For SQLite, execute the statement `SELECT last_insert_rowid()`.
I know that isn't a generic answer.
But, this is how they do it in [Ti.Database](https://github.com/marshall/titanium/blob/master/modules/ti.Database/database_binding.cpp) module - which is a module for Titanium that uses poco. They select last\_insert\_rowid() and then set that as a property of the statement.
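To illustrate the SQLite-level mechanism this answer relies on, here is a small sketch using Python's built-in SQLite driver (Poco would expose the same `last_insert_rowid()` value through a plain SELECT):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")
cur = con.execute("INSERT INTO t (name) VALUES ('a')")
# The driver-level id and the SQL function agree:
assert cur.lastrowid == con.execute("SELECT last_insert_rowid()").fetchone()[0] == 1
```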
|
There is no such facility in Poco. Rather than bending backwards to abstract something that is inconsistent across different back-ends, we chose not to provide it at all and leave users to deal directly with the back-end. See [this discussion](https://sourceforge.net/p/poco/feature-requests/60/).
|
How do I get the last inserted id in a generic way with Poco?
|
[
"",
"sql",
"sqlite",
"poco-libraries",
""
] |
I am interested in manipulating my data like so:
My Source Data:
```
From | To | Rate
----------------
EUR | AUD | 1.5895
EUR | BGN | 1.9558
EUR | GBP | 0.7347
EUR | USD | 1.1151
GBP | AUD | 2.1633
GBP | BGN | 2.6618
GBP | EUR | 1.3610
GBP | USD | 1.5176
USD | AUD | 1.4254
USD | BGN | 1.7539
USD | EUR | 0.8967
USD | GBP | 0.6589
```
In regards to "distinct pairs", I consider the following to be "duplicates".
```
EUR | USD matches USD | EUR
EUR | GBP matches GBP | EUR
GBP | USD matches USD | GBP
```
I want my source data to be filtered such that it removes any 1 of the above "duplicates", such that my final table is 3 records less than the original. I do not care which record from the "duplicates" is kept or removed, just so long as only 1 is selected.
I have tried many variations of Joins, Exists, Except, Distinct, Group By, logical comparisons (< >) and I feel like I am so close with any given approach... but it just does not seem to click.
My favorite effort has involved inner joining on EXCEPT:
```
SELECT a.[FROM], a.[TO], a.[Rate]
FROM Table a
INNER JOIN
(
SELECT DISTINCT [From], [To]
FROM Table
EXCEPT
(
SELECT [TO] as [From], [From] as [To]
FROM Table
)
) b
ON a.[From] = b.[From] AND a.[To] = b.[To]
```
But alas, it removes all of the matched pairs.
|
I can suggest something very easy: if it doesn't matter which one of them you keep, you can pick only the rows whose rate is greater than 1 (or, conversely, only those smaller). Each pair should have one rate greater than 1 and one smaller than 1 (which makes sense), so:
```
Select * from table where rate>1
```
|
One way to remove the duplicates that doesn't depend on the rates:
```
select s.*
from source s
where from < to
union all
select s.*
from source s
where from > to and
not exists (select 1 from source s2 where s.from = s2.to and s.to = s2.from);
```
Note: I did not put escape characters around `from` and `to`, although you would need them in your actual query.
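Outside SQL, the usual approach is to canonicalize each pair (treat {From, To} as unordered) and keep the first row seen per canonical key; a Python sketch:

```python
def dedupe_pairs(rows):
    """rows: (frm, to, rate) triples. Keeps one row per unordered currency pair."""
    seen, out = set(), []
    for frm, to, rate in rows:
        key = frozenset((frm, to))   # unordered: EUR/USD == USD/EUR
        if key not in seen:
            seen.add(key)
            out.append((frm, to, rate))
    return out
```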
|
SQL Distinct Pair Groupings
|
[
"",
"sql",
""
] |
I have table with hierarchical, parent-child relations and want to order it by that hierarchy. Table is:
```
id|parent|type
--------------
1 |0 |1
2 |0 |1
3 |0 |1
4 |0 |2
5 |0 |2
6 |2 |2
7 |3 |2
```
And as result I want this:
```
id|parent|type
--------------
1 |0 |1
2 |0 |1
6 |2 |2
3 |0 |1
7 |3 |2
4 |0 |2
5 |0 |2
```
So I want get something like a tree view where type 1 ordered first and type 2 at the end.
Now I'm trying to use recursion but the order is wrong:
```
with cte as
(
select id, parent, type from tbl where id=1
union all
select id, parent, type,
ROW_NUMBER()over(
order by
(case when t.type = 1 then 1
when t.type = 2 then 2
else 1000
end) as rn
from tbl t
inner join cte c on c.id=t.parent
)
select * from cte
order by rn
```
How can I do this?
|
Ordering by [`hierarchyid`](https://msdn.microsoft.com/en-us/library/bb677290(v=sql.105).aspx) with a CTE is simple; note this is not tested for cyclic (recursive) relations.
```
DECLARE @Data table (Id int identity(1,1) primary key, Parent int, Type int)
INSERT @Data VALUES
(0, 1),
(0, 1),
(0, 1),
(0, 2),
(0, 2),
(2, 2),
(3, 2)
SELECT * FROM @Data
;WITH level AS
(
-- The root, build the hierarchy by /{Type}.{Id}/, where Type is important then Id
SELECT *, -- 0 AS Level,
'/' + CONVERT(varchar(max), Type + 0.1 * Id) + '/' AS Ordering
FROM @Data
WHERE Parent = 0
UNION ALL
-- Connect the parent with appending the hierarchy
SELECT d.*, -- c.Level + 1,
c.Ordering + CONVERT(varchar(max), d.Type + 0.1 * d.Id) + '/'
FROM @Data d INNER JOIN level c ON d.Parent = c.Id
)
SELECT Id, Parent, Type FROM level
ORDER BY CAST(Ordering as hierarchyid) -- The key part to convert data type
```
[SQL Fiddle](http://sqlfiddle.com/#!3/9eecb7db59d16c80417c72d1/2318)
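The ordering produced by the `/{Type}.{Id}/` path trick is just a depth-first walk with siblings sorted by (type, id); a Python sketch that reproduces the expected output:

```python
from collections import defaultdict

def tree_order(rows):
    """rows: (id, parent, type) triples. Depth-first walk from parent 0,
    with siblings ordered by (type, id) -- the same key as '/{Type}.{Id}/'."""
    children = defaultdict(list)
    for id_, parent, typ in rows:
        children[parent].append((typ, id_))
    out = []
    def walk(node):
        for typ, id_ in sorted(children[node]):
            out.append(id_)
            walk(id_)
    walk(0)
    return out
```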
|
This can be done with the following recursive CTE:
```
WITH cte AS (
SELECT *,
CAST(ROW_NUMBER() OVER(ORDER BY id) AS REAL) rn,
1 level
FROM tbl
WHERE parent = 0
UNION ALL
SELECT t2.*,
cte.rn + (CAST(ROW_NUMBER() OVER(ORDER BY t2.id) AS REAL) / POWER(10, cte.level)) rn,
cte.level + 1 level
FROM tbl t2 INNER JOIN cte
ON t2.parent = cte.id
)
SELECT id, parent, type
FROM cte
ORDER BY rn
```
See [SQLFiddle](http://sqlfiddle.com/#!3/beb88/7) with more complicated sample data (deeper hierarchies, "unordered parent-child id's")
|
How to order rows by hierarchy
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
Below is picture of two tables, owners and vehicles.
I need to find out the owner who has all types of bikes.
[](https://i.stack.imgur.com/F7rjS.png)
Example:
O\_id 100, O\_id 101, and O\_id 102 have bikes with V\_id = 1, but
O\_id 103 has all types of bikes (V\_id = 1 and V\_id = 5).
How to write a query to get these details?
My Query:
```
select o.o_id from owner o, vehicles v
where o.v_id = v.v_id and v_type = 'bike'
```
This is showing all owners who have a bike, but not owners who have all bikes.
|
Group by the `o_id` that you want to get.
Take only those groups having the same number (`count(v_id)`) of bikes that exist in total `(select count(*) from vehicles where v_type = 'bike')`
```
select o.o_id
from owner o
join vehicles v on o.v_id = v.v_id
where v.v_type = 'bike'
group by o.o_id
having count(distinct v.v_id) = (select count(*) from vehicles where v_type = 'bike')
```
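This is the classic relational-division pattern; as a cross-check, here is a Python sketch of the same "owns every required vehicle" test on the sample data:

```python
from collections import defaultdict

def owners_with_all(owner_rows, required_ids):
    """owner_rows: (o_id, v_id) pairs; required_ids: set of v_ids that
    must all be owned. Returns the qualifying owner ids, sorted."""
    owned = defaultdict(set)
    for o_id, v_id in owner_rows:
        owned[o_id].add(v_id)
    return sorted(o for o, vs in owned.items() if required_ids <= vs)
```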
|
```
with bykes as (
select array_agg(v_id) as byke_ids
from vehicle
where lower(v_type) = 'byke'
), owner as (
select o_id, array_agg(v_id) as v_ids
from owner
group by o_id
)
select o_id
from owner cross join bykes
where v_ids @> byke_ids
;
o_id
------
103
```
The schema:
```
create table owner (
o_id int,
v_id int
);
create table vehicle (
v_id int,
v_type text
);
insert into owner (o_id, v_id) values
(100, 1),
(101, 1),
(102, 1),
(103, 1),
(100, 2),
(101, 3),
(103, 5);
insert into vehicle (v_id, v_type) values
(1, 'Byke'),
(2, 'Car'),
(3, 'Car'),
(4, 'Car'),
(5, 'byke');
```
|
select owner who has all types of bikes
|
[
"",
"sql",
"database",
"postgresql",
"join",
"subquery",
""
] |
I have a pretty simple sample query that I am trying to complete.
[SQL Fiddle](http://sqlfiddle.com/#!3/da20e/3)
```
SELECT
month(s.report_date),
COALESCE(COUNT(*),0)
FROM
stat_summary s LEFT OUTER JOIN ref_months m on MONTH(s.report_date) = m.month_id
GROUP BY month(s.report_date)
```
My results look like:
[](https://i.stack.imgur.com/FpRNf.png)
My desired results would be something like:
```
month | count
----------------
1 | 0
2 | 0
3 | 0
4 | 0
5 | 0
6 | 0
7 | 0
8 | 0
9 | 4
10 | 9
11 | 0
12 | 0
```
|
You need to use the months table as the primary one:
```
SELECT
m.month_id,
COALESCE(COUNT(s.report_date),0)
FROM ref_months m
LEFT JOIN stat_summary s
ON MONTH(s.report_date) = m.month_id
GROUP BY m.month_id;
```
|
You can change `LEFT JOIN` table order, and there is no need for `COALESCE`:
**[SqlFiddleDemo](http://sqlfiddle.com/#!3/da20e/14/0)**
```
SELECT
m.month_id,
[count] = COUNT(s.report_date)
FROM ref_months m
LEFT JOIN stat_summary s
ON MONTH(s.report_date) = m.month_id
GROUP BY m.month_id;
```
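The fix works because every month now comes from the months table, so missing months count as zero; a Python sketch of the same fill-the-gaps idea (function name made up for illustration):

```python
from collections import Counter
from datetime import date

def monthly_counts(report_dates):
    """Count reports per month, including zero months -- the months table's job."""
    c = Counter(d.month for d in report_dates)
    return {m: c.get(m, 0) for m in range(1, 13)}
```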
|
LEFT OUTER JOIN not returning NULL values
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
1. 501--> 588
2. 588--> 588
3. 589--> 688
|
I'm going to assume that you want the new number to be 12 less than the next 100 (so: -12, 88, 188, 288, etc).
So, first off, you'd add 12 to your number, then divide it by 100. That will give you a number with decimals that you can find the ceiling value for. Then you need to multiply that by 100 and subtract 12. Like so:
```
with sample_data as (select -12 num from dual union all
select -11 num from dual union all
select 88 num from dual union all
select 89 num from dual union all
select 173 num from dual union all
select 189 num from dual union all
select 501 num from dual union all
select 588 num from dual union all
select 589 num from dual)
select num,
ceil((num + 12)/100)*100 -12 new_num
from sample_data;
NUM NEW_NUM
---------- ----------
-12 -12
-11 88
88 88
89 188
173 188
189 288
501 588
588 588
589 688
```
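The shift-ceil-shift-back trick is easy to verify procedurally; a Python sketch of the same formula:

```python
import math

def round_up_to_88(num: int) -> int:
    # Shift so the targets (-12, 88, 188, ...) land on multiples of 100,
    # take the ceiling, then shift back.
    return math.ceil((num + 12) / 100) * 100 - 12
```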
|
The TRUNC and ROUND functions can be invoked with negative precision, so I take the hundreds part and add 88 or 188 depending on whether the remainder of the number is greater than 88.
```
WITH data(val) AS (
SELECT 88 FROM DUAL UNION ALL
SELECT 89 FROM DUAL UNION ALL
SELECT 501 FROM DUAL UNION ALL
SELECT 588 FROM DUAL UNION ALL
SELECT 1088 FROM DUAL UNION ALL
SELECT 1000 FROM DUAL UNION ALL
SELECT 1089 FROM DUAL UNION ALL
SELECT 589 FROM DUAL
)
SELECT
VAL,
TRUNC(VAL, -2) + CASE WHEN MOD(val, 100) > 88 THEN 188 ELSE 88 END result
FROM
data
```
|
How do roundup in to a specific number in Oracle SQL?
|
[
"",
"sql",
"oracle",
""
] |
```
Declare @Date date = getdate()
select DATEpart(WEEKDAY,@Date)as day32
SELECT CAST(@Date AS tinyint)
If @Date !=7 and @Date!=1
```
I need help converting a `date` data type to a `tiny int`
This is the error message I get when I try to execute
> Explicit conversion from data type date to int is not allowed.
|
It should be like this:
```
DECLARE @Date DATE = GETDATE(),
@TinyDate TinyInt
SELECT @TinyDate = CONVERT(TINYINT,DATEPART(WEEKDAY, @Date))
```
DATE can't be converted to INT so you have an error.
|
Try this:
```
SELECT DATEPART(WEEKDAY,GETDATE())
```
But please be aware of the influence of your system's culture. Some countries start their week on Monday, others on Sunday...
You should read a bit about the DATEFIRST option:
```
SELECT @@DATEFIRST --Your current setting
```
|
SQL - Converting Datepart to tinyint
|
[
"",
"sql",
"sql-server",
""
] |
I am working on an application, trying to improve performance. Obviously I will be doing my own profiling and testing, but I would like to know if there is a "consensus" or known best practice.
In the old SQL days, one of the main things to do to improve efficiency was to not select data you aren't going to consume. I'm trying to go down that route with EF6.
In this particular case, I have a master-detail-detail relationship where I need to render some data about the parent, child, and grandchild on the screen.
My application is n-tier with an MVC front end, and a web-api REST backend. These entities are ultimately going to be serialized as JSON, sent over the rest connection back to the MVC controller, where they will be rendered to the screen. In this case I will not be updating the entities from this flow, so I don't need to worry about merging partial entities back into the repository (in those cases, I would probably send over the full entity for ease of maintenance)
So, the original straightforward EF code I wrote looks like this
```
Repository.GetAll()
.AsNoTracking()
.Include("Children")
.Include("Children.GrandChildren")
.ToList();
```
However, I am only actually consuming a subset of the properties of these entities, and some of the unused properties can be rather large (big chunks of XML, etc)
Here is a first pass at trying to project out only the fields I need (for the example here, I have cut out and renamed most of the fields I would actually select to improve readability, but in general I'm using, let's say, 5-20% of the full entities)
```
var projection = Repository.GetAll()
.AsNoTracking()
.Select(r => new
{
r.Id,
r.RandomId,
r.State,
r.RequestType,
r.CreatedDate,
r.CreatedBy,
Children = r.Children.Select(r2 => new
{
r2.Id,
r2.Status,
GrandChildren = r2.GrandChildren.Select(r3 => new
{
r3.Id,
r3.Status,
r3.GrandChildType
})
}),
}
).ToList();
```
This is obviously using anonymous types (I believe this is required in EF; there is not a way to project into a named type?) (edit: apparently you can project into a non-mapped named type, but in this case the return type of the query is a mapped type, so I could create a DTO, but that's even more code to maintain)
So then I have to get back into my concrete types. I could certainly generate DTOs that only had the properties needed, but I don't think that changes the fundamental logic used, nor probably the performance characteristics.
I tried my standbys of Automapper and ValueInjecter, but neither one seemed to fit the bill perfectly here (deep clone of heterogeneous types with matching names) so I went dirty
```
var json = projection.Select(JsonConvert.SerializeObject).ToList();
var mapped = json.Select(JsonConvert.DeserializeObject<Parent>).ToList();
```
This is somewhat lame since its just going to be serialized again as part of the rest call. There is probably a way I can override the webAPI calls to say I am returning the already serialized data, which would let me skip the rehydration into the entity type (as all of the property names match, the rest client should be able to rehydrate the anonymous type as if it were the real type, the same way the snippet above does)
BUT all this seems like a lot of work, less maintainable code, more possible places to have bugs, etc for a use case that entity framework really does not seem to want to support. But my old school instincts can't let go of the idea that I'm selecting, serializing, and transferring a whole lot of data that ultimately I'm not going to consume.
Does this produce sane SQL under the covers? Is that worth the double serialization? (assuming I don't figure out how to override webapi to let me hand it the data)
I suppose my other choice would be to refactor all the entities so that the unused properties are in different sub entities that I can just not include, but that would be a lot of rework throughout the system (versus being able to surgically improve performance at critical points) and it also seems like a poor choice to design entities around the ORM I happen to be using vs standard normalization rules etc.
|
Use table splitting, which allows you to split a table into more than one entity without modifying the underlying table. The "less accessed" properties can be lazy loaded on demand or loaded eagerly, behaving like any other navigation property. Note the key here is that the additional entities use their PK as the FK to the primary entity.
|
You can project into named types:
```
var projection = Repository.GetAll()
.AsNoTracking()
.Select(r => new ParentModel()
{
Id = r.Id,
RandomId = r.RandomId,
State = r.State,
RequestType = r.RequestType,
CreatedDate = r.CreatedDate,
CreatedBy = r.CreatedBy,
Children = r.Children.Select(r2 => new ChildModel()
{
Id = r2.Id,
Status = r2.Status,
GrandChildren = r2.GrandChildren.Select(r3 => new GrandChildModel
{
Id = r3.Id,
Status = r3.Status,
GrandChildType = r3.GrandChildType
})
}),
}
).ToList();
```
But yes, it does make sense not to include fields you're not going to need.
Lately, when using the DTO approach or models, I'll add a static Func to my model and use that in my context projections. In your case it would look something like:
```
public class ParentModel
{
public int Id { get; set; }
public int RandomId { get; set; }
public string State { get; set; }
public List<ChildModel> Children { get; set; }
public static Func<Parent, ParentModel> Project = item => new ParentModel
{
Id = item.Id,
RandomId = item.RandomId,
State = item.State,
Children = item.Children.Select(ChildModel.Project)
};
}
public class ChildModel
{
public int Id { get; set; }
public int Status { get; set; }
public string State { get; set; }
public List<GrandChildModel> GrandChildren { get; set; }
public static Func<Child, ChildModel> Project = item => new ChildModel
{
Id = item.Id,
Status = item.Status
GrandChildren = item.GrandChildren.Select(GrandChildModel.Project)
};
}
public class GrandChildModel
{
public int Id { get; set; }
public int Status { get; set; }
public int GrandChildType { get; set; }
public static Func<GrandChild, GrandChildModel> Project = item => new GrandChildModel
{
Id = item.Id,
Status = item.Status,
GrandChildType = item.GrandChildType
};
}
```
Then your projection code would just look like:
```
var projection = Repository.GetAll()
.AsNoTracking()
.Include("Children")
.Include("Children.GrandChildren")
.Select(ParentModel.Project)
.ToList();
```
|
In Entity Framework is it more efficient / preferred to create projections rather than selecting full entities?
|
[
"",
"sql",
"asp.net-mvc",
"performance",
"entity-framework",
"rest",
""
] |
I have data in a similar format as this
```
+--------+------------+-------+
| type | variety | price |
+--------+------------+-------+
| apple | gala | 2.79 |
| apple | fuji | 0.24 |
| apple | limbertwig | 2.87 |
| orange | valencia | 3.59 |
| orange | navel | 9.36 |
| pear | bradford | 6.05 |
| pear | bartlett | 2.14 |
| cherry | bing | 2.55 |
| cherry | chelan | 6.33 |
+--------+------------+-------+
```
And I want to get one row per type with the highest and the lowest price, like below; the variety should be taken from the row with the highest price:
```
+--------+------------+-------+-------+
| type | variety | min | max |
+--------+------------+-------+-------+
| apple | limbertwig | 0.24 | 2.87 |
| orange | navel | 9.36 | 3.59 |
| pear | bradford | 6.05 | 2.14 |
| cherry | chelan | 6.33 | 2.55 |
+--------+------------+-------+-------+
```
What is the best way to achieve this using Postgres?
I found this site: [How to select the first/least/max row per group in SQL](http://www.xaprb.com/blog/2006/12/07/how-to-select-the-firstleastmax-row-per-group-in-sql/), but it's not quite what I need.
|
Find fruits with the lowest price:
```
select distinct on (type) type, variety, price
from fruits
order by 1, 3;
type | variety | price
--------+----------+-------
apple | fuji | 0.24
cherry | bing | 2.55
orange | valencia | 3.59
pear | bartlett | 2.14
(4 rows)
```
Find fruits with the highest price:
```
select distinct on (type) type, variety, price
from fruits
order by 1, 3 desc;
type | variety | price
--------+------------+-------
apple | limbertwig | 2.87
cherry | chelan | 6.33
orange | navel | 9.36
pear | bradford | 6.05
(4 rows)
```
Combine the two queries:
```
select
f1.type,
f1.variety min_variety, f1.price min_price,
f2.variety max_variety, f2.price max_price
from (
select distinct on (type) type, variety, price
from fruits
order by 1, 3) f1
join (
select distinct on (type) type, variety, price
from fruits
order by 1, 3 desc) f2
on f1.type = f2.type
type | min_variety | min_price | max_variety | max_price
--------+-------------+-----------+-------------+-----------
apple | fuji | 0.24 | limbertwig | 2.87
cherry | bing | 2.55 | chelan | 6.33
orange | valencia | 3.59 | navel | 9.36
pear | bartlett | 2.14 | bradford | 6.05
(4 rows)
```
---
Alternative for those who cannot use Postgres with its great features:
```
select
f1.type,
f1.variety min_variety, f1.price min_price,
f2.variety max_variety, f2.price max_price
from (
select f.type, f.variety, f.price
from (
select type, min(price) minprice
from fruits group by type
) x
join fruits f on f.type = x.type and f.price = x.minprice
) f1
join (
select f.type, f.variety, f.price
from (
select type, max(price) maxprice
from fruits group by type
) x
join fruits f on f.type = x.type and f.price = x.maxprice
) f2
on f1.type = f2.type
order by 1;
```
|
Problems of this nature (get the largest, smallest, average, first, last, ... for each group in a table) are best solved with a so-called [window function](http://www.postgresql.org/docs/current/static/tutorial-window.html). With window functions you `PARTITION` the data into groups (here on column `"type"`) and then apply some [window function](http://www.postgresql.org/docs/current/static/functions-window.html) or [aggregate function](http://www.postgresql.org/docs/current/static/functions-aggregate.html) over the partitions. The function applies over a so-called *frame* in the partition; by default the frame runs from the first row in the partition to the current row but this default can be changed as is done in this answer.
```
SELECT DISTINCT "type", last_value(variety) OVER w AS variety,
       first_value(price) OVER w AS min, last_value(price) OVER w AS max
FROM my_table
WINDOW w AS (PARTITION BY "type" ORDER BY price
RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
ORDER BY 1;
```
Note that the window functions or aggregate functions add columns to the output, just like in the regular case. In the answer above the `DISTINCT` clause is used to get only a single row for each of the "types" of fruit. Without that clause you would get an output row for every input row, with repeated data for every "type" of fruit.
Using window functions also means that you are making only a single scan of the table and not use any temporary tables (as you do with sub-selects or CTE's) or joins. On larger tables this should make a large difference in performance. Also, it scales much better with additional requirements (such as adding the average price to your output).
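For what it's worth, the same window-function query runs essentially unchanged on SQLite (3.25+), which makes it easy to try out locally. A runnable sketch using Python's `sqlite3` with the sample data from the question (the aliases are renamed `min_price`/`max_price` only to avoid clashing with the function names):

```python
import sqlite3

# Runnable sketch of the window-function answer, on SQLite (3.25+) instead
# of Postgres; data copied from the question.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE fruits (type TEXT, variety TEXT, price REAL);
INSERT INTO fruits VALUES
  ('apple','gala',2.79), ('apple','fuji',0.24), ('apple','limbertwig',2.87),
  ('orange','valencia',3.59), ('orange','navel',9.36),
  ('pear','bradford',6.05), ('pear','bartlett',2.14),
  ('cherry','bing',2.55), ('cherry','chelan',6.33);
""")
rows = conn.execute("""
SELECT DISTINCT type,
       last_value(variety) OVER w AS variety,    -- variety of the priciest row
       first_value(price)  OVER w AS min_price,  -- cheapest price in the partition
       last_value(price)   OVER w AS max_price   -- highest price in the partition
FROM fruits
WINDOW w AS (PARTITION BY type ORDER BY price
             RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
ORDER BY 1
""").fetchall()
for row in rows:
    print(row)
```

Every row in a partition sees the same whole-partition frame, so `DISTINCT` collapses each fruit type to a single output row.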
|
How to select the lowest and highest value joined in one row postgres
|
[
"",
"sql",
"postgresql",
""
] |
So my title might not make sense. Here's what I am trying to do.
```
C1 C2 C3
A1 yes X1
A2 no X1
A3 maybe X1
A4 yes X2
```
So I have a condition where C2 <>'no'. In this case I want the result to only include
```
C1 C2 C3
A4 yes X2
```
Appreciate your help folks!
|
Use `minus` to get a list of all `c3`s which don't have a `no` for `c2`, and select only those rows from the table.
```
select * from table
where c3 in(
select distinct c3 from table
minus
select distinct c3 from table where c2 = 'no' )
```
|
You could try:
```
with x as (
select 'a1' as col1, 'yes' as col2, 'x1' as col3 from dual
union all
select 'a2' as col1, 'no' as col2, 'x1' as col3 from dual
union all
select 'a3' as col1, 'maybe' as col2, 'x1' as col3 from dual
union all
select 'a4' as col1, 'yes' as col2, 'x2' as col3 from dual
)
select *
from x
where not exists (
select 1
from x x2
where x2.col3 = x.col3
and x2.col2 = 'no'
);
```
Output:
```
COL1 COL2 COL3
a4 yes x2
```
|
Restrict column with multiple values not appear if one condition is also met
|
[
"",
"sql",
"oracle",
""
] |
I am trying to read multiple values from parameter, store it in a variable and later use it in the select statement to pull multiple values.
```
declare @codes XML = N'<Root>
<List Value="120" />
<List Value="110" />
</Root>';
declare @codeList VARCHAR(MAX) = (SELECT T.Item.value('@Value[1]','VARCHAR(MAX)') FROM @codes.nodes('/Root/List') AS T(Item));
WITH CODE_RESULT AS(
SELECT ID, Name, Region,
FROM dbo.MyTable1
WHERE
(@codes IS NULL OR DataCode IN ( @codeList))
...
UNION
SELECT ID, Name, Region,
FROM dbo.MyTable2
WHERE
(@codes IS NULL OR DataCode IN ( @codeList))
...
```
But, I am getting the following exception:
> Subquery returned more than 1 value. This is not permitted when the
> subquery follows =, !=, <, <= , >, >= or when the subquery is used as
> an expression.
**Expectation:**
How do I store the parameter values in a variable and later reuse it in multiple select statements under 'in' clause? Any suggestion is appreciated.
|
Your problem is here:
```
declare @codeList VARCHAR(MAX) = (SELECT T.Item.value('@Value[1]','VARCHAR(MAX)') FROM @codes.nodes('/Root/List') AS T(Item));
```
Your `select` returns two results (120, 110), but a scalar `VARCHAR` variable can only hold one.
You can fix it by creating a table variable:
```
DECLARE @codeList TABLE (code varchar(100));
INSERT @codeList
SELECT
T.Item.value('@Value[1]','VARCHAR(MAX)')
FROM @codes.nodes('/Root/List') AS T(Item);
```
And then change `(@codes IS NULL OR DataCode IN ( @codeList))` to `(@codes IS NULL OR DataCode IN (SELECT code FROM @codeList))`
Here, a `LEFT JOIN` should perform better than your current syntax, e.g.
```
WITH CODE_RESULT AS(
SELECT ID, Name, Region,
FROM dbo.MyTable1 t1
LEFT JOIN @codeList cl
ON t1.DataCode = cl.code
```
You may have to add additional logic to check if `cl.code` `is null` depending on your intentions.
|
If you need to store multiple values for use later, use a TABLE VARIABLE to hold the results, like this:
```
DECLARE @dataCodes TABLE (DataCode varchar(MAX));
INSERT INTO @dataCodes
SELECT T.Item.value('@Value[1]', 'varchar(max)')
FROM @codes.nodes('/Root/List') AS T(Item);
```
You can then use the "`WHERE DataCode IN (SELECT DataCode FROM @dataCodes)`" to do the select and get all the results.
|
Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=,
|
[
"",
"sql",
"sql-server",
""
] |
My records looks like:
```
aid cmpyname rid imgpath
1 abc 1 ~/img/aa.jpg:~/img/bb.jpg:~/img/cc.jpg:
2 abc 1 ~/img/dd.jpg:~/img/ee.jpg:~/img/ff.jpg:
3 xyz 2 ~/img/gg.jpg:~/img/hh.jpg:~/img/ii.jpg:
4 xyz 2 ~/img/jj.jpg:~/img/kk.jpg:~/img/ll.jpg:
```
What I want to get is displayed below - but in a single query
```
cmpyname rid imgpath
abc 1 ~/img/aa.jpg:~/img/bb.jpg:~/img/cc.jpg:~/img/dd.jpg:~/img/ee.jpg:~/img/ff.jpg:
xyz 2 ~/img/gg.jpg:~/img/hh.jpg:~/img/ii.jpg:~/img/jj.jpg:~/img/kk.jpg:~/img/ll.jpg:
```
How can I do so? Note that I haven't defined any relationships, so please keep that in mind.
Thanks and Regards
|
```
select distinct cmpyname, rid, imgpath from (
    select imgpath = (select t2.imgpath + ''
                      FROM tbl_Temp t2
                      where t2.rid = t.rid FOR XML PATH(''))
         , cmpyname, rid
    from tbl_Temp t
) as temp
```
|
```
select max(cmpyname) as cmpyname, rid,
stuff((
select imgpath
from Records t1
where t1.rid = t2.rid
order by t1.aid
for xml path(''), type).value('.', 'varchar(max)'), 1, 0, ''
) as imgpath
from Records t2
group by rid;
```
<http://sqlfiddle.com/#!18/46822/1>
|
Get 3 rows data in 1 row
|
[
"",
"sql",
".net",
"sql-server-2008",
"c#-4.0",
""
] |
In my MySQL db i have these 3 tables:
```
ACCOUNT (primary key is name)
name
directory
birthdate
sex
FILES (primary key is id)
id (autoincrement)
name (account foreign key)
filename
uploaded_date
CHECKED (primary key is the couple name - filename)
name (account foreign key)
filename (files foreign key)
```
I want to make a query that shows a resulting table like this:
```
RESULT
name - directory - birthdate - sex - filename uploaded_date - checkedCount
```
Where, in the checkedCount column, I want the count of existing rows in the CHECKED table for every unique filename.
I tried this but had no success:
```
SELECT * FROM (SELECT * FROM account NATURAL JOIN files) AS table_alias LEFT JOIN (SELECT *, count(*) AS checkedCount FROM checked) AS checked_alias ON table_alias.filename = checked_alias.filename
```
Currently the ACCOUNT table has only 2 rows, the FILES table 6 rows, and the CHECKED table 6 rows, each with the same name from ACCOUNT and one of the 6 unique filenames from FILES.
I can't understand why, in my resulting table, the checkedCount column has all values set to NULL except one that is set to 6. My goal is to have all six rows with checkedCount = 1, since every file in the FILES table is used exactly once in the CHECKED table.
|
try this
```
SELECT
a.*, f.filename, f.uploaded_date, IFNULL(c.cnt, 0) as 'checkedCount'
FROM
account a
LEFT OUTER JOIN files f ON a.name = f.name
    LEFT OUTER JOIN (SELECT name, COUNT(*) AS cnt FROM checked GROUP BY 1) c ON a.name=c.name
```
|
So I'm guessing you know how `count` works right? By using a count you need a group by clause.
Example:
```
SELECT count(name) as count, name FROM CHECKED GROUP BY name
```
Now here's the problem. You have a big nasty join that you want to do a count on. So lets join the tables, alias them, and then do a count on a field.
```
SELECT ACCOUNT.name AS name, ACCOUNT.directory AS directory, ACCOUNT.birthDate AS birthdate, ACCOUNT.sex AS sex, FILES.filename AS filename, FILES.uploaded_date AS uploaded_date, count(CHECKED.filename) AS checkedCount
FROM ACCOUNT JOIN FILES ON ACCOUNT.name = FILES.name JOIN CHECKED ON FILES.filename = CHECKED.filename GROUP BY CHECKED.filename;
```
This... Big... Long... Confusing... join statement should do what you want, but I would also take a look at [MySQL Count](https://dev.mysql.com/doc/refman/5.1/en/counting-rows.html) and [MySQL Aliasing](http://www.w3schools.com/sql/sql_alias.asp) to make things prettier.
|
Query sql to find row's count from another table
|
[
"",
"mysql",
"sql",
"join",
""
] |
I have a table like this:
```
+-------------+-----------+------------+
| sample_name | test_name | test_value |
+-------------+-----------+------------+
| s1 | t1 | 1.5 |
| s2 | t2 | 3 |
| s3 | t1 | 8 |
| s4 | t3 | 5 |
+-------------+-----------+------------+
```
And I want to put the test\_names as column headers like this
```
+-------------+------+------+------+
| sample_name | t1 | t2 | t3 |
+-------------+------+------+------+
| s1 | 1.5 | null | null |
| s2 | null | 3 | null |
| s3 | 8 | null | null |
| s4 | null | null | 5 |
+-------------+------+------+------+
```
I have come up with a convoluted solution using a temporary table, dynamic SQL and while loops,
but it's slow. Is there a way to do it with only a SELECT?
thanks
|
if there are too many values for `test_name`, you may use ***pivot*** within ***dynamic SQL***:
```
declare @Names varchar(max)
select @Names = coalesce(@Names + ', ', '') + test_name
from (select distinct test_name from tbl) t
exec('
SELECT *
FROM tbl t
PIVOT (SUM(test_value)
FOR Test_name
IN ('+@Names+'))p')
```
[SQLFIDDLE DEMO](http://sqlfiddle.com/#!3/513ced/3)
|
[SQL Fiddle](http://sqlfiddle.com/#!3/9eecb7db59d16c80417c72d1/2294)
**MS SQL Server 2008 Schema Setup**:
**Query 1**:
```
DECLARE @Table TABLE(sample_name VARCHAR(10), test_name VARCHAR(10), test_value DECIMAL(10,2))
INSERT INTO @Table VALUES
('s1','t1',1.5),
('s2','t2',3 ),
('s3','t1',8 ),
('s4','t3',5 )
SELECT *
FROM @Table t
PIVOT (SUM(test_value)
FOR Test_name
IN (t1,t2,t3))p
```
**[Results](http://sqlfiddle.com/#!3/9eecb7db59d16c80417c72d1/2294/0)**:
```
| sample_name | t1 | t2 | t3 |
|-------------|--------|--------|--------|
| s1 | 1.5 | (null) | (null) |
| s2 | (null) | 3 | (null) |
| s3 | 8 | (null) | (null) |
| s4 | (null) | (null) | 5 |
```
|
Transposing a column values into column headers
|
[
"",
"sql",
"sql-server",
""
] |
Here is what I want to achieve, in pseudocode
```
part1 = select from foo ...
part2 = init empty
foreach row in part1
part2 += select from foo where row.something as condition in here
union part1, part2
```
Is it possible to do it in one query?
This is the structure
```
parent_name | name
------------+-------
null | item1 # root
item1 | item2 # first level
item1 | item3 # first level
item2 | item4 # second level
item4 | item5 # third level
null | item6 # another root
```
And the result - with condition that I want only tree where root name = item1
```
parent_name | name
------------+-------
null | item1
item1 | item2
item1 | item3
item2 | item4
```
Basically it's a tree structure and I want to get all rows that are in the tree (two levels deep), starting with the root. Item5 is missing in the result because it's in the third level and item6 because it's different tree.
|
Looks like you're after a hierarchical query:
```
with sample_data as (select null parent_name, 'item1' name from dual union all
select 'item1' parent_name, 'item2' name from dual union all
select 'item1' parent_name, 'item3' name from dual union all
select 'item2' parent_name, 'item4' name from dual union all
select 'item4' parent_name, 'item5' name from dual union all
select null parent_name, 'item6' name from dual)
select parent_name,
name
from (select parent_name,
name,
level lvl,
max(level) over (partition by connect_by_root(name)) max_lvl
from sample_data sd
connect by prior name = parent_name
and level <= 3
start with parent_name is null)
where max_lvl > 1;
PARENT_NAME NAME
----------- -----
item1
item1 item2
item2 item4
item1 item3
```
N.B. I wasn't entirely sure why you didn't want to see item6 in the results, so I've assumed it was because it didn't have any child rows.
That's why I generated the "max\_lvl" column, which simply finds the deepest level for that particular branch, and then added the outer query to filter out branches that only have the top level.
If that isn't the case, then you'll have to be much more specific in the logic behind why you don't want it in the results.
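`CONNECT BY` is Oracle-specific; as a portable aside, the same walk (the chosen root plus at most two levels below it) can be sketched with a recursive CTE. A runnable illustration using SQLite via Python, with the table name `items` assumed for the question's data:

```python
import sqlite3

# Portable analog of the CONNECT BY walk: start at the chosen root and
# recurse at most two levels down. Table name "items" is assumed.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items (parent_name TEXT, name TEXT);
INSERT INTO items VALUES
  (NULL,'item1'), ('item1','item2'), ('item1','item3'),
  ('item2','item4'), ('item4','item5'), (NULL,'item6');
""")
rows = conn.execute("""
WITH RECURSIVE tree(parent_name, name, lvl) AS (
  SELECT parent_name, name, 1 FROM items WHERE name = 'item1'
  UNION ALL
  SELECT i.parent_name, i.name, t.lvl + 1
  FROM items i JOIN tree t ON i.parent_name = t.name
  WHERE t.lvl < 3                 -- stop two levels below the root
)
SELECT parent_name, name FROM tree ORDER BY lvl, name
""").fetchall()
print(rows)   # item5 (level 4) and the item6 tree never enter the recursion
```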
|
You can use a hierarchical query:
```
SELECT parent_name, NAME
FROM table1
WHERE LEVEL <= 3
CONNECT BY PRIOR NAME = parent_name
 START WITH parent_name IS NULL
 ORDER BY 1 NULLS FIRST;
```
For me it is not clear why you don't want "item6" to be shown; that is not logical to me. Anyway, you can achieve it with this:
```
SELECT parent_name, NAME
FROM table1
WHERE LEVEL <= 3
CONNECT BY PRIOR NAME = parent_name
START WITH NAME = 'item1'
ORDER BY 1 NULLS FIRST;
```
|
How to use select result as input for the second select (like foreach)?
|
[
"",
"sql",
"oracle",
""
] |
Given this mockup:
```
+-----------+-------------+---------------------------+
| item_id | item_name | desc |
+-----------+-------------+---------------------------+
| 1 | Product 1 | lorem ipsum dolor... |
| 2 | Product 2 | lorem mauris eu... |
| 3 | Product 3 | scelerisque sagittis... |
| 4 | Product 4 | lorem dolor ipsum... |
| 5 | Product 5 | ipsum dolor lorem... |
+-----------+-------------+---------------------------+
```
And I want to search all of the products that contain the words `lorem ipsum` in either `item_name` or `desc`. Additionally, any words can appear between `lorem` and `ipsum`, and `lorem` and `ipsum` can appear in any order. Basically, this search would return items `1`, `4`, and `5`
Now, I know I could accomplish this with:
```
SELECT * FROM items
WHERE (item_name LIKE 'lorem%ipsum'
OR desc LIKE 'lorem%ipsum')
OR (item_name LIKE 'ipsum%lorem'
OR desc LIKE 'ipsum%lorem')
```
But if my search term is longer (ie. lorem ipsum dolor sit amet, consectetur adipiscing elit), I feel like it could become a bit ridiculous with the number of `OR` added to the query. Is there an easier/more efficient way to handle this?
|
this sort of search requirement sounds a good candidate for [full text search](https://dev.mysql.com/doc/refman/5.0/en/fulltext-search.html).
Full text search is (or at least can be) more of a "search engine"-like search, as opposed to traditional SQL `like` searches. With full text searching, the order of the words being searched for does not matter, and, depending on the RDBMS, some full text searching capabilities allow for synonym lookup as well as noise word filtering.
In (I believe) most cases, full text searching is significantly faster than a `like` search. [Here](http://devzone.zend.com/26/using-mysql-full-text-searching/) is an article on getting started with full text search in mysql.
Example mySql full text search syntax:
```
select *
from items
where match(item_name) against ('+lorem +ipsum' in boolean mode)
```
Full text searching does have certain requirements (which are gone into detail in the links in the article). I've not personally worked with `mysql`s full text search, or I'd list out the steps. Should be enough to get you started though if you wanted to go in that direction.
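The `MATCH ... AGAINST` syntax above is MySQL's. As a hands-on illustration of the same idea — both words must appear, in any order, any distance apart — here is a sketch using SQLite's FTS5 module (assuming your SQLite build includes it) from Python; `descr` stands in for the reserved word `desc`, and the descriptions are made up:

```python
import sqlite3

# FTS5 sketch: an AND of terms matches them in any order, anywhere in the
# row -- no combinatorial OR/LIKE needed.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE items USING fts5(item_name, descr)")
conn.executemany("INSERT INTO items VALUES (?, ?)", [
    ("Product 1", "lorem ipsum dolor sit amet"),
    ("Product 2", "lorem mauris eu tellus"),
    ("Product 3", "scelerisque sagittis quam"),
    ("Product 4", "lorem dolor ipsum nulla"),
    ("Product 5", "ipsum dolor lorem ante"),
])
rows = conn.execute("""
SELECT item_name FROM items
WHERE items MATCH 'lorem AND ipsum'   -- both terms, either column, any order
ORDER BY rowid
""").fetchall()
print(rows)
```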
|
I think it is simpler to look for each word separately:
```
SELECT *
FROM items
WHERE (item_name like '%lorem%' and item_name like '%ipsum%') or
(`desc` like '%lorem%' and `desc` like '%ipsum%');
```
This generalizes most easily to more words and to more columns.
If you like, you can concatenate the values together. However, if you want both values in the same field, then the separate logic is clearer.
Also, if you are really looking for words, then full text search is a good option and will have much better performance.
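Since the per-word predicates are uniform, they are easy to generate for any number of words and columns. A small sketch (SQLite via Python, rows following the question's mockup) that builds the AND-ed `LIKE` clauses and binds the `%word%` patterns as parameters:

```python
import sqlite3

# Generate one AND-ed LIKE predicate per word, per column, and bind the
# %word% patterns as parameters. "descr" stands in for the reserved "desc".
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items (item_name TEXT, descr TEXT);
INSERT INTO items VALUES
  ('Product 1','lorem ipsum dolor sit amet'),
  ('Product 2','lorem mauris eu tellus'),
  ('Product 3','scelerisque sagittis quam'),
  ('Product 4','lorem dolor ipsum nulla'),
  ('Product 5','ipsum dolor lorem ante');
""")
words = ["lorem", "ipsum"]

def clause(col):
    # "col LIKE ? AND col LIKE ? AND ..." -- one slot per search word
    return " AND ".join(f"{col} LIKE ?" for _ in words)

sql = (f"SELECT item_name FROM items "
       f"WHERE ({clause('item_name')}) OR ({clause('descr')}) "
       f"ORDER BY item_name")
params = [f"%{w}%" for w in words] * 2   # one pattern set per column group
rows = conn.execute(sql, params).fetchall()
print(rows)
```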
|
SQL search for words in any order
|
[
"",
"mysql",
"sql",
""
] |
I need to compute the factorial of a variable in Google BigQuery - is there a function for this? I cannot find one in the documentation here:
<https://cloud.google.com/bigquery/query-reference#arithmeticoperators>
My proposed solution at this point is to compute the factorial for numbers 1 through 100 and upload that as a table and join with that table. If you have something better, please advise.
As context may reveal a best solution, the factorial is used in the context of computing a Poisson probability of a random variable (number of events in a window of time). See the first equation here: <https://en.wikipedia.org/wiki/Poisson_distribution>
|
Extending Mikhail's answer to be general and correct for computing the factorial for all numbers 1 to n, where n < 500, the following solution holds and can be computed efficiently:
```
select number, factorial
FROM js(
// input table
(
SELECT
ROW_NUMBER() OVER() AS number,
some_thing_from_the_table
FROM
[any table with at least LIMIT many entries]
LIMIT
100 #Change this to any number to compute factorials from 1 to this number
),
// input columns
number,
// output schema
"[{name: 'number', type: 'integer'},
{name: 'factorial', type: 'float'}]",
// function
"function(r, emit){
function fact(num)
{
if(num<0)
return 0;
var fact=1;
for(var i=num;i>1;i--)
fact*=i;
return fact;
}
      // Use toExponential and parseFloat to handle large integers in both JavaScript and BigQuery
emit({number: r.number, factorial: parseFloat(fact(r.number).toExponential())});
}"
)
```
|
Try below. Quick & dirty example
```
select number, factorial
FROM js(
// input table
(select number from
(select 4 as number),
(select 6 as number),
(select 12 as number)
),
// input columns
number,
// output schema
"[{name: 'number', type: 'integer'},
{name: 'factorial', type: 'integer'}]",
// function
"function(r, emit){
function fact(num)
{
if(num<0)
return 0;
var fact=1;
for(var i=num;i>1;i--)
fact*=i;
return fact;
}
var factorial = fact(r.number)
emit({number: r.number, factorial: factorial});
}"
)
```
|
Factorial in Google BigQuery
|
[
"",
"sql",
"google-bigquery",
"factorial",
""
] |
Is this possible in one SQL statement?
I pointer to a nice SQL tutorial on the subject is also appreciated.
I know I can use a command like `SELECT * FROM MYTAB WHERE MYFIELD = 0`
and then use a server script to go through the result and `UPDATE MYFIELD`
of the result rows.
|
Wouldn't this work?
`UPDATE MYTAB SET MYFIELD=1 WHERE MYFIELD=0`
**Edited to Add:**
If you are interested in SQL tutorials, a lightweight one that has the nice feature of allowing you edit and run SQL commands in the webpage to see how they work is from w3schools here: <http://www.w3schools.com/sql/>
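As a quick sanity check of the select-vs-update point, here is the statement run against a throwaway SQLite table from Python (made-up data):

```python
import sqlite3

# One statement does both the "select" and the "set".
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE MYTAB (id INTEGER, MYFIELD INTEGER);
INSERT INTO MYTAB VALUES (1, 0), (2, 5), (3, 0);
""")
cur = conn.execute("UPDATE MYTAB SET MYFIELD = 1 WHERE MYFIELD = 0")
changed = cur.rowcount                 # how many rows were touched
after = conn.execute("SELECT id, MYFIELD FROM MYTAB ORDER BY id").fetchall()
print(changed, after)
```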
|
If you can execute a query to read the data you should also be able to write the data;
`UPDATE MYTAB SET MYFIELD = 99 WHERE MYFIELD = 0`
* [UPDATE (Transact-SQL)](https://msdn.microsoft.com/en-us/library/ms177523.aspx)
* [UPDATE Statement (Oracle)](http://docs.oracle.com/cd/B19306_01/appdev.102/b14261/update_statement.htm)
* [UPDATE Syntax (MySQL)](https://dev.mysql.com/doc/refman/5.0/en/update.html)
* [UPDATE Statement (PostgreSQL)](http://www.postgresql.org/docs/current/static/sql-update.html)
|
sql: how to select all records with field set to 0 and set the value of that field to one
|
[
"",
"sql",
""
] |
There is this query that I keep using over and over:
```
SELECT column_name, count(column_name) FROM table_name GROUP by column_name ORDER BY COUNT(column_name) DESC
```
I use this to check which different values there are in a column and how often they occur.
Because I use this query so often, and it repeats the same column\_name 4 times, I thought: why not make a stored procedure:
```
CREATE PROCEDURE countcv @table_name VARCHAR(50),@column_name VARCHAR(50)
AS
BEGIN
SELECT @column_name,COUNT(@column_name) FROM @table_name GROUP BY @column_name ORDER BY COUNT(@column_name)
END
```
Here is where I get stuck, I can not manage to get a variable tablename:
> Must declare the table variable "@table\_name"
|
There is no way to do it directly. You need to use a dynamic SQL approach. Assuming you pass correct table and column names, the below should work.
```
CREATE PROCEDURE countcv @table_name VARCHAR(50),@column_name VARCHAR(50)
AS
BEGIN
declare @SQL nvarchar(max)
set @SQL = 'SELECT '+@column_name+',COUNT('+@column_name+')
FROM '+@table_name+'
GROUP BY '+@column_name+'
ORDER BY COUNT('+@column_name+')'
EXEC sp_executesql @SQL
END
```
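The same build-the-text-then-execute idea, sketched in Python over SQLite with made-up table/column names. Identifiers cannot be bound as parameters in any SQL dialect, which is why the query text is assembled; in real code, whitelist or quote the names to avoid SQL injection:

```python
import sqlite3

# Same build-then-execute pattern as the T-SQL proc, but in Python.
# Table/column names here are invented for the demo.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (status TEXT);
INSERT INTO orders VALUES ('open'), ('open'), ('closed');
""")

def countcv(conn, table_name, column_name):
    # Identifiers cannot be bound as ? parameters, so the text is assembled;
    # double-quote them and validate against a whitelist in real code.
    sql = (f'SELECT "{column_name}", COUNT("{column_name}") '
           f'FROM "{table_name}" GROUP BY "{column_name}" '
           f'ORDER BY COUNT("{column_name}")')
    return conn.execute(sql).fetchall()

rows = countcv(conn, "orders", "status")
print(rows)
```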
|
I believe that @Julien Vavasseur and @Dark Knight have already addressed your question.
However, I would like to add that SQL Server 2008 introduced [Table-Valued Parameters](https://msdn.microsoft.com/en-us/library/bb675163(v=vs.110).aspx), by which we can pass a table-type variable to stored procedures, e.g.
Assuming you have a table by the name **tblTest** with the below columns
> ```
> ID INT,
> Name VARCHAR(50)
> ```
**Step 1: Declare a new table User Defined Type**
```
CREATE TYPE tblTestType AS TABLE
(
ID INT,
Name VARCHAR(50)
)
```
**Step 2: Create a STORED PROCEDURE that has tblTestType as parameter**
```
CREATE PROCEDURE countcv
(
@tblName tblTestType readonly
)
AS
INSERT INTO tblTest (ID, Name)
SELECT ID, Name
FROM
@tblName;
```
Then you can use DataTable (if you are using C#) and pass this data table as a parameter to the Stored Procedure.(you can find an example in the link I provided).
|
How to use table as variable in stored procedure
|
[
"",
"sql",
"sql-server",
""
] |
Lets say I have a table with 2 columns (**a**, **b**) with following values:
```
a b
--- ---
1 5
1 NULL
2 NULL
2 NULL
3 NULL
```
My desired output:
```
a
---
2
3
```
I want to select only those distinct values from column **a** for which every occurrence of that value has NULL in column **b**. Therefore, from my desired output, "1" won't appear because there is a "5" in column **b**, even though there is a NULL for the 2nd occurrence of "1".
How can I do this using a TSQL query?
|
If I understand correctly, you can do this with `group by` and `having`:
```
select a
from t
group by a
having count(b) = 0;
```
When you use `count()` with a column name, it counts the number of non-NULL values. Hence, if all values are `NULL`, then the value will be zero.
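The `COUNT(b) = 0` trick is easy to verify with the question's data; a quick sketch in Python over SQLite:

```python
import sqlite3

# Verify that COUNT(b) ignores NULLs, so COUNT(b) = 0 means "all b are NULL".
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (a INTEGER, b INTEGER);
INSERT INTO t VALUES (1,5), (1,NULL), (2,NULL), (2,NULL), (3,NULL);
""")
rows = conn.execute(
    "SELECT a FROM t GROUP BY a HAVING COUNT(b) = 0 ORDER BY a"
).fetchall()
print(rows)
```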
|
It's fairly simple to do:
```
SELECT A
FROM table1
GROUP BY A
HAVING COUNT(B) = 0
```
Grouping by A collapses all rows with the same value of A into a single row in the output. Adding the `HAVING` clause enables filtering those grouped rows with an aggregate function. `COUNT` doesn't count `NULL` values, so when it's 0, there are no other values in B.
|
SQL - Select from column A based on values in column B
|
[
"",
"sql",
"t-sql",
""
] |
The following query attempts to do what can be expressed as `@users.courses` (where @user.id = 1). I'm giving this information as I'm not exactly sure if this is right, since the query doesn't run:
```
SELECT * FROM courses WHERE courses.id =
courses_users.user_id AND courses_users.user_id = 1
```
SQL complains:
> #1054 - Unknown column 'courses\_users.user\_id' in 'where clause'
I am certain that all of these fields exist so I'm not sure what I'm doing wrong.
|
Your `where` clause references `courses_users`, but this table never appears in a `from` or `join` clause.
You should either add it to the `from` clause:
```
SELECT *
FROM courses, courses_users -- Here!
WHERE courses.id = courses_users.user_id AND
courses_users.user_id = 1
```
Or, better yet, use an explicit `join` clause:
```
SELECT *
FROM courses
JOIN courses_users ON courses.id = courses_users.user_id -- Here!
WHERE courses_users.user_id = 1
```
|
You're missing a `JOIN`. The statement doesn't know about `courses_users` unless you `JOIN` to it. For this your query needs to look something like this:
```
SELECT *
FROM courses
JOIN courses_users ON
courses.id = courses_users.user_id
WHERE
courses_users.user_id = 1
```
|
SQL complaining about an existing field
|
[
"",
"mysql",
"sql",
"ruby-on-rails",
""
] |
I know I can use a subquery to make an insert after selecting rows from the database.
```
INSERT INTO wp_43_term_relationships (object_id, term_taxonomy_id, term_order)
SELECT ID, 8363, 0 FROM
( SELECT DISTINCT
wp_43_posts.ID
FROM
wp_43_posts
INNER JOIN wp_43_term_relationships ON wp_43_term_relationships.object_id = wp_43_posts.ID
INNER JOIN wp_43_term_taxonomy ON wp_43_term_taxonomy.term_taxonomy_id = wp_43_term_relationships.term_taxonomy_id
WHERE (wp_43_term_taxonomy.term_id = 4613 OR wp_43_term_taxonomy.term_id = 4615) AND wp_43_term_taxonomy.term_id != 8363 ) posts
```
But what if, instead of hardcoding 8363, I'd need to derive it from another query? For example, I'd need to derive it from:
```
SELECT
wp_43_term_taxonomy.term_taxonomy_id
FROM
wp_43_term_taxonomy
WHERE
wp_43_term_taxonomy.term_id = 8363
```
|
If you replace the constant with the second query in ()'s, it should work.
Like this:
```
INSERT INTO wp_43_term_relationships (object_id, term_taxonomy_id,term_order)
SELECT ID,
(SELECT wp_43_term_taxonomy.term_taxonomy_id FROM wp_43_term_taxonomy WHERE wp_43_term_taxonomy.term_id = 8363),
0 FROM
... remaining of the original ...
```
At least if you need only 1 value, otherwise it would be better to join this extra table to the others.
|
just bring it into the subquery and use an extra join:
```
INSERT INTO wp_43_term_relationships (object_id, term_taxonomy_id, term_order)
SELECT ID, someval, 0 FROM
( SELECT DISTINCT
wp_43_posts.ID,t.term_taxonomy_id someval
FROM
wp_43_posts
INNER JOIN wp_43_term_relationships ON wp_43_term_relationships.object_id = wp_43_posts.ID
INNER JOIN wp_43_term_taxonomy ON wp_43_term_taxonomy.term_taxonomy_id = wp_43_term_relationships.term_taxonomy_id
    join wp_43_term_taxonomy t on t.term_id = 8363
    WHERE (wp_43_term_taxonomy.term_id = 4613 OR wp_43_term_taxonomy.term_id = 4615) AND wp_43_term_taxonomy.term_id != 8363 ) posts
```
|
MYSQL: create an insert with data from two subqueries
|
[
"",
"mysql",
"sql",
""
] |
So, I've got two columns, t1.NAME and t2.ITEMS; for each name there can be more than one item assigned to it, and I want to select it like:
```
| NAME | ITEMS |
JOHN 1
2
BEN 4
7
3
DAVE 5
```
P.s. if it helps, they are connected by t1.id = t2.names\_id
|
The result of my query below is very close to what you want. The only difference is that there is no blank name, because you cannot produce that layout directly in a one-step query: each item belongs to the name of its id in t1. BUT if you want the exact layout, you can apply a trick to the result afterwards, e.g. using `UPDATE`.
```
SELECT t1.NAME, t2.ITEMS
FROM t1 INNER JOIN t2 ON t1.id = t2.names_id
```
|
This kind of operation should be done in the presentation layer.
[](https://i.stack.imgur.com/BDPYq.png)
[](https://i.stack.imgur.com/uF3y0.png)
But if you insist, you can use something like:
**[SqlFiddleDemo](http://sqlfiddle.com/#!4/2d915/6/0)**
```
SELECT DISTINCT NAME,
LISTAGG(Items, chr(13)||chr(10)) WITHIN GROUP (ORDER BY 1) OVER (PARTITION BY Name) AS Items
FROM tab
```
Replace `tab` with the subquery that produces the output you currently get.
The trick is to concatenate, for every name, the corresponding Items, joined with the newline characters `chr(13) || chr(10)`.
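`LISTAGG` is Oracle-specific; the same per-name concatenation can be sketched with SQLite's `group_concat` (run here from Python, with the join key `t1.id = t2.names_id` from the question). The blank repeated-name formatting is still left to the presentation layer:

```python
import sqlite3

# group_concat stand-in for LISTAGG; t1/t2 shapes follow the question.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (id INTEGER, name TEXT);
CREATE TABLE t2 (names_id INTEGER, items INTEGER);
INSERT INTO t1 VALUES (1,'JOHN'), (2,'BEN'), (3,'DAVE');
INSERT INTO t2 VALUES (1,1), (1,2), (2,4), (2,7), (2,3), (3,5);
""")
rows = conn.execute("""
SELECT t1.name, group_concat(t2.items, char(10)) AS items  -- newline-joined
FROM t1 JOIN t2 ON t1.id = t2.names_id
GROUP BY t1.id, t1.name
ORDER BY t1.id
""").fetchall()
for name, items in rows:
    print(name, items.split(chr(10)))
```

Note that `group_concat` does not guarantee the order of the concatenated items, so treat the joined list as unordered.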
|
Get rid from one column duplicate values in two column select
|
[
"",
"sql",
"oracle",
"output-formatting",
""
] |
I'm writing an application that calculates the ranking of teams in the pool stages of a rugby competition *(Rugby World Cup 2015, but it could apply to many other pool-based competitions)*.
At the completion of the pool phase, points are calculated based on wins/draws/losses/bonus points, and teams are ranked by points. **If two or more teams are level on points, then the winner of the match in which two tied teams have played each other shall be the higher ranked.** Once this initial ranking is complete, other criteria (points difference, tries difference, points scored, tries scored) are applied to complete the ranking process.
**I have written a query that does ALL of the above, EXCEPT the first ranking criteria - two teams level on points should be ordered based on the winner of the match in which the two teams played.**
Here is my database schema:
```
[TEAM] [MATCH]
-TeamId -MatchId
-TeamName -HomeTeamId
-Pool -AwayTeamId
-HomeTeamScore
-HomeTeamTries
-AwayTeamScore
-AwayTeamTries
```
Here is the SQL Server query I use to calculate the ranking/standings for a given pool:
```
WITH PoolResults ([MatchId], [TeamId], [Team], [P], [W], [D], [L], [PF], [PA], [PD], [TF], [TA], [TD], [PTS], [BP])
AS (SELECT
M.[MatchId],
M.[HomeTeamId] AS [TeamId],
HT.[TeamName],
1 AS [P],
CASE WHEN M.[HomeTeamScore] > M.[AwayTeamScore] THEN 1 ELSE 0 END AS [W],
CASE WHEN M.[HomeTeamScore] = M.[AwayTeamScore] THEN 1 ELSE 0 END AS [D],
CASE WHEN M.[HomeTeamScore] < M.[AwayTeamScore] THEN 1 ELSE 0 END AS [L],
M.[HomeTeamScore] AS [PF],
M.[AwayTeamScore] AS [PA],
(M.[HomeTeamScore] - M.[AwayTeamScore]) AS [PD],
M.[HomeTeamTries] AS [TF],
M.[AwayTeamTries] AS [TA],
(M.[HomeTeamTries] - M.[AwayTeamTries]) AS [TD],
CASE
WHEN M.[HomeTeamScore] > M.[AwayTeamScore] THEN 4
WHEN M.[HomeTeamScore] = M.[AwayTeamScore] THEN 2
WHEN M.[HomeTeamScore] < M.[AwayTeamScore] THEN 0
END AS [PTS],
CASE
WHEN ((M.[AwayTeamScore] - M.[HomeTeamScore]) BETWEEN 1 AND 7) AND M.[HomeTeamTries] >= 4 THEN 2
WHEN ((M.[AwayTeamScore] - M.[HomeTeamScore]) BETWEEN 1 AND 7) THEN 1
WHEN M.[HomeTeamTries] >= 4 THEN 1
ELSE 0
END AS [BP]
FROM
Match AS M
INNER JOIN Team AS HT
ON M.[HomeTeamId] = HT.[TeamId]
WHERE
M.[HomeTeamScore] IS NOT NULL
AND HT.[Pool] = @Pool
UNION
SELECT
M.[MatchId],
M.[AwayTeamId] AS [TeamId],
AT.[TeamName],
1 AS [P],
CASE WHEN M.[AwayTeamScore] > M.[HomeTeamScore] THEN 1 ELSE 0 END AS [W],
CASE WHEN M.[AwayTeamScore] = M.[HomeTeamScore] THEN 1 ELSE 0 END AS [D],
CASE WHEN M.[AwayTeamScore] < M.[HomeTeamScore] THEN 1 ELSE 0 END AS [L],
M.[AwayTeamScore] AS [PF],
M.[HomeTeamScore] AS [PA],
(M.[AwayTeamScore] - M.[HomeTeamScore]) AS [PD],
M.[AwayTeamTries] AS [TF],
M.[HomeTeamTries] AS [TA],
(M.[AwayTeamTries] - M.[HomeTeamTries]) AS [TD],
CASE
WHEN M.[AwayTeamScore] > M.[HomeTeamScore] THEN 4
WHEN M.[AwayTeamScore] = M.[HomeTeamScore] THEN 2
WHEN M.[AwayTeamScore] < M.[HomeTeamScore] THEN 0
END AS [PTS],
CASE
WHEN ((M.[HomeTeamScore] - M.[AwayTeamScore]) BETWEEN 1 AND 7) AND M.[AwayTeamTries] >= 4 THEN 2
WHEN ((M.[HomeTeamScore] - M.[AwayTeamScore]) BETWEEN 1 AND 7) THEN 1
WHEN M.[AwayTeamTries] >= 4 THEN 1
ELSE 0
END AS [BP]
FROM
Match AS M
INNER JOIN Team AS AT
ON M.[AwayTeamId] = AT.[TeamId]
WHERE
M.[AwayTeamScore] IS NOT NULL
AND AT.[Pool] = @Pool
)
SELECT ROW_NUMBER()
OVER (ORDER BY
SUM([BP] + [PTS]) DESC,
SUM([PD]) DESC,
SUM([TD]) DESC,
SUM([PF]) DESC,
SUM([TF]) DESC) AS [Position],
[TeamId],
[TeamName],
SUM([P]) AS [P],
SUM([W]) AS [W],
SUM([D]) AS [D],
SUM([L]) AS [L],
SUM([PF]) AS [PF],
SUM([PA]) AS [PA],
SUM([PD]) AS [PD],
SUM([TF]) AS [TF],
SUM([TA]) AS [TA],
SUM([BP]) AS [BP],
SUM([BP] + [PTS]) AS [PTS]
FROM
PoolResults
GROUP BY
[TeamId],
[TeamName];
```
As mentioned before, this does everything EXCEPT take into consideration ranking of two teams on the same points based on who won the game between them. **Does anyone have a suggestion on how to do this initial ranking?**
**===== UPDATE TO ORIGINAL POST =====**
**CLARIFICATION** - There can be more than 2 teams on equal points, in which case each combination of 2 teams must be evaluated to determine rankings. The following sample on SqlFiddle illustrates a scenario with 5 teams in a pool, where 3 teams have the same number of points - <http://sqlfiddle.com/#!6/c0701/3>
Query #1 shows the raw match data (which teams played and the scores)
Query #2 shows the unsorted pool standings:
```
Australia (10)
England (10)
Fiji (5)
Uruguay (0)
Wales (10)
```
Query #3 shows the pool standings sorted by points:
```
Australia (10)
England (10)
Wales (10)
Fiji (5)
Uruguay (0)
```
However, the real order should be:
```
Wales (10)
England (10)
Australia (10)
Fiji (5)
Uruguay (0)
```
with Wales ranked above England, because Wales beat England, and England ranked above Australia because England beat Australia
|
This needs SQL Server 2012+, using the `LEAD()` and `LAG()` [**functions**](http://blog.sqlauthority.com/2013/09/22/sql-server-how-to-access-the-previous-row-and-next-row-value-in-select-statement/). It also requires, for now, that only two teams share the same rank.
**SCHEMA**
```
CREATE TABLE Table1
([team] varchar(1), [rank] int);
INSERT INTO Table1
([team], [rank])
VALUES
('A', 1),('B', 1),('C', 2);
CREATE TABLE Table2
([team1] varchar(1), [team2] varchar(1), [win] varchar(1));
INSERT INTO Table2
([team1], [team2], [win])
VALUES
('A', 'B', 'B'), ('C', 'A', 'A'),('C', 'B', 'B');
```
**[SQL Fiddle Demo](http://sqlfiddle.com/#!6/afad1/7)**
```
WITH breakTie AS (
SELECT
[team],
[rank],
LAG([team]) OVER (ORDER BY [rank]) PreviousTeam,
LEAD([team]) OVER (ORDER BY [rank]) NextTeam,
LAG([rank]) OVER (ORDER BY [rank]) PreviousRank,
LEAD([rank]) OVER (ORDER BY [rank]) NextRank
FROM Table1
)
SELECT *, CASE
WHEN B.[rank] = B.[NextRank] and B.[team] = T.[win] THEN 1
WHEN B.[rank] = B.[PreviousRank] and B.[team] = T.[win] THEN 1
ELSE 0
END as breakT
FROM breakTie B
LEFT JOIN Table2 T
ON ( B.team = T.team1 or B.team = T.team2)
AND ( B.NextTeam = T.team1 or B.NextTeam = T.team2)
ORDER BY
[rank],
CASE
WHEN B.[rank] = B.[NextRank] and B.[team] = T.[win] THEN 1
WHEN B.[rank] = B.[PreviousRank] and B.[team] = T.[win] THEN 1
ELSE 0
END
```
|
This is complicated, but I would try the following.

1. Make a view of each combination of opponents and who wins the tie.
2. To the results of your current ranking, add a column for the prior team, a column for the prior rank, and a column for the next rank, using windowing functions.
3. Join to the view from step 1.
4. Write a CASE statement that sets a value (taken from the step-1 view) only when the prior rank or the next rank is equal.
5. Sort by rank and the column from step 4.
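Outside SQL, the head-to-head tiebreak described in the question can be sketched as a comparator (all team names, points, and match winners below are hypothetical sample data matching the question's scenario):

```python
# Teams tied on points are ordered by the result of their own match;
# more points always ranks higher.
from functools import cmp_to_key

points = {"Wales": 10, "England": 10, "Australia": 10, "Fiji": 5, "Uruguay": 0}
# Winner of each pairing (only consulted for tied teams).
head_to_head = {
    frozenset({"Wales", "England"}): "Wales",
    frozenset({"England", "Australia"}): "England",
    frozenset({"Wales", "Australia"}): "Wales",
}

def compare(a, b):
    if points[a] != points[b]:
        return points[b] - points[a]          # more points first
    winner = head_to_head.get(frozenset({a, b}))
    if winner == a:
        return -1                             # a beat b, so a ranks higher
    if winner == b:
        return 1
    return 0                                  # fall through to other criteria

ranking = sorted(points, key=cmp_to_key(compare))
print(ranking)
```

One caveat: with three or more tied teams, head-to-head results can be non-transitive (A beats B, B beats C, C beats A), in which case a pairwise comparator alone cannot produce a well-defined order and the next criterion (points difference, etc.) has to decide.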
|
Ranking teams equal on points in a pool based on who won the game they played
|
[
"",
"sql",
"sql-server",
"algorithm",
"t-sql",
""
] |
I seem to have some kind of lapsus...
How can I format a continuous credit card number to have blank space after every 4 digits in T-SQL?
```
TableName | Cards |
ColumnName | CardNumber |
ExampleInput | 80756201005803400471 | varchar(20)
ExampleOutput | 8075 6201 0058 0340 0471 | varchar(25)
```
I wish to have `ExampleOutput` in a `SELECT` statement; performance is irrelevant.
|
Something like this:
```
select stuff(stuff(stuff(stuff(ccn, 17, 0, ' '), 13, 0, ' '), 9, 0, ' '), 5, 0, ' ')
```
That is, nested calls to `stuff()`. Note: you need to start at the end of the string, or else be very careful about the offsets.
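A quick way to sanity-check those nested offsets outside SQL is to emulate T-SQL's `STUFF` (1-based indexing) in a few lines of Python:

```python
# Emulate T-SQL STUFF(s, start, length, insert) with 1-based indexing.
def stuff(s, start, length, insert):
    return s[:start - 1] + insert + s[start - 1 + length:]

ccn = "80756201005803400471"
# Same nesting as the T-SQL answer: work from the end of the string inward.
out = stuff(stuff(stuff(stuff(ccn, 17, 0, " "), 13, 0, " "), 9, 0, " "), 5, 0, " ")
print(out)  # 8075 6201 0058 0340 0471
```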
|
Something like this if the length is fixed:
```
Select *
, SUBSTRING(n, 1, 4) + ' '
+ SUBSTRING(n, 5, 4) + ' '
+ SUBSTRING(n, 9, 4) + ' '
+ SUBSTRING(n, 13, 4) + ' '
+ SUBSTRING(n, 17, 4) + ' '
From (values('80756201005803400471')) as car(n)
```
|
How to insert Spaces in long Credit Card Number every X characters in T-SQL?
|
[
"",
"sql",
"sql-server",
"t-sql",
"credit-card",
""
] |
I have the following relations.
```
Suppliers( sid: integer, sname: string, address: string)
Parts(pid: integer, pname: string, color: string)
Catalog( sid: integer, pid: integer, cost: real)
```
The problem asks me to find the pnames of parts supplied by Acme Widget Suppliers and no one else. I wrote the following SQL statement; however I feel like this query is inefficient because of the repetition. I would like to know if there is a better way to write this query without repeating the selecting catalog part.
```
Select P.pname
FROM Parts P
WHERE P.pid IN (
Select C.pid
FROM Catalog C
INNER JOIN Supplier S
ON S.sid = C.sid
WHERE S.sname = "Acme Widget Suppliers"
AND C.pid NOT IN (
SELECT C2.pid
FROM Catalog C2
INNER JOIN Supplier S
ON S.sid = C2.sid
WHERE S.sname <> "Acme Widget Suppliers"
)
);
```
|
This can work:
```
select p.pname from Catalog c
join Parts p on p.pid = c.pid
join Suppliers s on s.sid = c.sid
where s.sname = 'Acme Widget Suppliers'
and p.pid in (select pid as SupplierCount from Catalog c
group by pid having count(*) = 1)
```
The inner `select` is meant to find exclusive parts from *any* supplier. If a part is sold by multiple providers, it will not be included.
Working example: <http://sqlfiddle.com/#!6/1ccde/10>
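A runnable reproduction of the same idea, using SQLite through Python as a stand-in (the sample rows are hypothetical: `bolt` is carried only by Acme, `nut` by two suppliers):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Suppliers (sid INTEGER, sname TEXT, address TEXT);
    CREATE TABLE Parts (pid INTEGER, pname TEXT, color TEXT);
    CREATE TABLE Catalog (sid INTEGER, pid INTEGER, cost REAL);
    INSERT INTO Suppliers VALUES (1, 'Acme Widget Suppliers', 'x'), (2, 'Other', 'y');
    INSERT INTO Parts VALUES (10, 'bolt', 'red'), (11, 'nut', 'blue');
    INSERT INTO Catalog VALUES (1, 10, 1.0), (1, 11, 2.0), (2, 11, 2.5);
""")
# Parts supplied by Acme and listed exactly once in the whole catalog.
rows = conn.execute("""
    SELECT p.pname
    FROM Catalog c
    JOIN Parts p ON p.pid = c.pid
    JOIN Suppliers s ON s.sid = c.sid
    WHERE s.sname = 'Acme Widget Suppliers'
      AND p.pid IN (SELECT pid FROM Catalog GROUP BY pid HAVING COUNT(*) = 1)
""").fetchall()
print(rows)
```

`nut` is excluded because another supplier also carries it; only the exclusive `bolt` survives.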
|
You are correct. The query you want to make can be greatly simplified. Try the following.
```
SELECT P.pname FROM Parts P, Suppliers S, Catalog C
WHERE C.pid = P.pid
AND C.sid = S.sid
AND S.sname == "Acme Widget Suppliers"
```
|
What is a better way to write this SQL query?
|
[
"",
"mysql",
"sql",
"database",
""
] |
I'm trying to view software that has been installed within the last 30 days. The format of my date is `20150327`. When I try the following condition in the `where` clause
```
and DateDiff(day,arp.InstallDate0,GetDate()) < 30
```
I receive the following error message:
> Conversion failed when converting date and/or time from character
> string.
I have also tried the following and was unsuccessful:
```
CONVERT(varchar(8),arp.InstallDate0,112)
```
As well as:
```
ISDATE(CONVERT(datetime,arp.InstallDate0,112))
```
When I add `ISDATE`, it finally runs the query, but it is not showing any data and I know that there are installs within the last 30 days, so I'm thinking the date is still not being recognized.
***EDIT*** The InstallDate0 column is nvarchar.
|
You do not need a conversion format for YYYYMMDD when converting to `date`, `datetime2`, and `datetimeoffset`. SQL Server recognizes this ISO standard format with no conversion in these cases, regardless of internationalization settings (there is one setting that affects my preferred format of YYYY-MM-DD; the documentation is [here](https://msdn.microsoft.com/en-us/library/ms189491.aspx)). So, you could do:
```
where cast(arp.InstallDate0 as date) > dateadd(day, -30, getdate())
```
At this point: "Shame on you for storing dates as strings."
That said, it is better (in your case) to do the comparison as strings rather than dates. You have a good date format for this, so:
```
where arp.InstallDate0 > convert(varchar(8), dateadd(day, -30, getdate()), 112)
```
Why is this better? With no functions on the column name, the query can take advantage of an appropriate index (if one is available).
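The "compare as strings" point can be demonstrated with SQLite standing in for SQL Server (`strftime`/`date` replace `CONVERT`/`DATEADD` here, and the "today" date is pinned so the example is deterministic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE arp (InstallDate0 TEXT)")  # dates stored as YYYYMMDD strings
conn.executemany("INSERT INTO arp VALUES (?)",
                 [("20150327",), ("20150101",), ("20150320",)])

# Compute the cutoff once; the column itself is then compared as a plain
# string, so an index on InstallDate0 could still be used (sargable).
cutoff = conn.execute(
    "SELECT strftime('%Y%m%d', date('2015-03-31', '-30 day'))").fetchone()[0]
rows = conn.execute(
    "SELECT InstallDate0 FROM arp WHERE InstallDate0 > ? ORDER BY InstallDate0",
    (cutoff,)).fetchall()
print(cutoff, rows)
```

This works only because YYYYMMDD sorts the same lexicographically as chronologically, which is exactly why it is a tolerable string format for dates.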
|
You must use the syntax below, as the first argument for the [`CONVERT` function](https://msdn.microsoft.com/en-us/library/ms187928.aspx) is the target data type
```
CONVERT(datetime,'20150327',112)
```
|
Can't convert YYYYMMDD to date
|
[
"",
"sql",
"sql-server-2008",
""
] |
I want to get only the year and the numeric month from a date. I tried the query below for the month, but I get the month as text. How can I get the numeric month and year?
```
select to_char(dateper,'MON')
from calcul
group by dateper
```
|
You can get the year and month as numbers by using:
```
select extract(year from dateper) as yyyy, extract(month from dateper) as mm
from calcul
group by yyyy, mm;
```
Or:
```
select to_char(dateper, 'YYYY-MM')
from calcul
group by to_char(dateper, 'YYYY-MM') ;
```
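The same grouping can be sketched portably with SQLite's `strftime` through Python (the `calcul` rows below are made-up sample dates):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE calcul (dateper TEXT)")
conn.executemany("INSERT INTO calcul VALUES (?)",
                 [("2015-03-27",), ("2015-03-01",), ("2014-12-31",)])
# strftime returns text, so CAST gives the numeric year and month.
rows = conn.execute("""
    SELECT CAST(strftime('%Y', dateper) AS INTEGER) AS yyyy,
           CAST(strftime('%m', dateper) AS INTEGER) AS mm
    FROM calcul
    GROUP BY strftime('%Y', dateper), strftime('%m', dateper)
    ORDER BY yyyy, mm
""").fetchall()
print(rows)
```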
|
Month:
```
SELECT date_part('month', now())::integer;
```
Year:
```
SELECT date_part('year', now())::integer;
```
|
get only month and year from date
|
[
"",
"sql",
"postgresql",
""
] |
According to this [Oracle documentation](http://www.oracle.com/technetwork/articles/sql/11g-sqlplanmanagement-101938.html), I can assume that the Optimizer postpones the hard parse and it doesn't generate an execution plan until the first time a prepared statement is executed:
"The answer is a phenomenon called bind peeking. Earlier, when you ran that query with the bind variable value set to 'NY', the optimizer had to do a hard parse for the first time and while doing so it peeked at the bind variable to see what value had been assigned to it."
But when executing an EXPLAIN PLAN for a prepared statement with bind parameters, we get an executed plan. On his site, [Markus Winand](http://use-the-index-luke.com/sql/where-clause/bind-parameters) says that:
"When using bind parameters, the optimizer has no concrete values available to determine their frequency. It then just assumes an equal distribution and always gets the same row count estimates and cost values. In the end, it will always select the same execution plan."
Which one is true? Is an execution plan generated when the statement is prepared, using an even-distribution value model, or is the hard parse postponed until the first execution?
|
The first bind peek actually happens at the first execution. Plan optimization is deferred; it doesn't happen at the prepare phase. Later on, another bind peek might happen. Typically for VARCHAR2, when you bind two radically different values (i.e. the first value is 1 byte long and a later one 10 bytes), the optimizer peeks again and might produce a new plan. In Oracle 12 this is extended even further with adaptive join methods: the optimizer suggests NESTED LOOPS, but when many more rows than estimated arrive during execution, it switches to a HASH join immediately. This is unlike adaptive cursor sharing, where a mistake must happen first before a new execution plan is produced.
One very important point about prepared statements: since they just re-execute the same cursor created at the first execution, they will always run the same plan; there cannot be any adaptation. For adaptation and alternative execution plans, at least a SOFT parse must occur, for example when the plan is aged out of the shared pool or invalidated for some reason.
EXPLAIN PLAN is not a cursor and will never respect bind variables. Only a display of the actual cursor (e.g. `DBMS_XPLAN.DISPLAY_CURSOR`) shows bind variable information.
You can find actual information about captured bind values in `V$SQL_BIND_CAPTURE`.
|
This discussion misses a very important point about bind variables, parsing and bind peeking, and that is histograms! Bind variables only become an issue when the column in question has histograms. Without histograms there is no need to peek at the value: Oracle then has no information about the distribution of the data and will use pure math (distinct values, number of null values, number of rows, etc.) to find the selectivity of the filter in question.
Binds and histograms are logical opposites. You use bind variables to get one execution plan for all your queries. You use histograms to get different execution plans for different search values. Bind peeking tried to overcome this issue, but it does not do a very good job of it; many people have actually characterized the bind peeking feature as "a bug". Adaptive Cursor Sharing, which arrived in Oracle 11g, does a better job of solving this.
Actually, I see too many histograms around. I usually disable histograms (`method_opt => 'for all columns size 1'`) and only create them when I truly need them.
And then to the original question: "Does Oracle choose a default execution plan when parsing a prepared statement?"
Parsing is not one activity. It involves syntax checking, semantic analysis (do the tables and columns exist, do you have access to the tables), query rewrite (Oracle might rewrite the query in a better way; for instance, given the filters a=b and b=c, Oracle can add the filter a=c), and of course finding an execution plan. We distinguish between different types of parsing: soft parse and hard parse. Hard parsing is where Oracle also has to create the execution plan for the query, which is a very costly activity.
Back to the question. Parsing doesn't really care whether you are using bind variables or not. The difference is that if you use binds, you probably only have to do a soft parse. Using bind variables, your query looks the same every time you run it (and therefore gets the same hash_value). When you run a query, Oracle checks in the library cache whether there already is an execution plan for it. This is not a default plan, but a plan that already exists because someone else executed the same query (making Oracle do a hard parse and generate an execution plan for it) and that plan hasn't aged out of the cache yet. It's simply the plan the optimizer, at parse time, considered the best choice for your query.
When you come to Oracle 12c it gets even more complicated. Oracle 12 has Adaptive Execution Plans, meaning an execution plan can have an alternative: it can start out with a nested loop and, if it realizes the cardinality estimates were wrong, switch to a hash join in the middle of executing the query. It also has adaptive statistics and SQL plan directives, all to help the optimizer make better choices when running your SQL :-)
|
Does Oracle chose a default execution plan when parsing a prepared statement?
|
[
"",
"sql",
"database",
"oracle",
"optimization",
"sql-execution-plan",
""
] |
I'm using SQLite and I"m really green. Essentially I have a table (example below)
```
ID Date QuestNum Value
1 1/1/1990 1 0
1 1/1/1990 2 3
1 1/1/1990 3 2
1 1/2/1990 1 5
1 1/2/1990 2 2
1 1/2/1990 3 6
2 1/1/1990 1 6
2 1/1/1990 2 3
2 1/1/1990 3 6
2 1/2/1990 1 2
2 1/2/1990 2 8
2 1/2/1990 3 2
```
And I would the result of my query to look like this
```
ID Date Quest1 Quest2 Quest3
1 1/1/1990 0 3 2
1 1/2/1990 5 2 6
2 1/1/1990 6 3 6
2 1/2/1990 2 8 2
```
The closest I got was
```
SELECT
ID, Date,
MAX(CASE WHEN QuestNum = "1" THEN Value END) AS Q1,
MAX(CASE WHEN QuestNum = "2" THEN Value END) AS Q2,
MAX(CASE WHEN QuestNum = "3" THEN Value END) AS Q3
FROM table
GROUP BY ID, Date
```
Although this does provide the structure I'm looking for it applies the MAX aggregate function and I'm worried that the resulting query holds data that was different from the original set; such that the values returned in columns Quest 1, Quest 2, Quest 3 are the maximum value from that selected column and not the corresponding data point from the value column.
I'm not sure if it is important to note but the QuestNum can skip at times, so that not everyone receives all questions.
Thanks,
qwerty
Please excuse any misnomers in my request, believe me they are all unintentional.
|
Essentially, you need to **transpose** your rows to columns, or **reshape** from long to wide format. In SQL, this can be done with a derived table (a type of subquery):
```
SELECT
[ID],
[Date],
Max(CASE WHEN QuestNum = "1" THEN [Value] END) As Q1,
Max(CASE WHEN QuestNum = "2" THEN [Value] END) As Q2,
Max(CASE WHEN QuestNum = "3" THEN [Value] END) As Q3
FROM (SELECT
[ID],
[Date],
QuestNum,
[Value]
FROM TableName)
AS dT
GROUP BY [ID], [Date]
```
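To address the asker's worry about `MAX` altering the data: because each `(ID, Date, QuestNum)` combination appears at most once, the `CASE` yields at most one non-NULL value per group, so `MAX` simply picks that value. A runnable check with SQLite and a subset of the sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (ID INT, Date TEXT, QuestNum INT, Value INT)")
conn.executemany("INSERT INTO t VALUES (?,?,?,?)", [
    (1, '1990-01-01', 1, 0), (1, '1990-01-01', 2, 3), (1, '1990-01-01', 3, 2),
    (1, '1990-01-02', 1, 5), (1, '1990-01-02', 2, 2), (1, '1990-01-02', 3, 6),
])
# MAX(CASE ...) picks the single matching Value in each (ID, Date) group.
rows = conn.execute("""
    SELECT ID, Date,
           MAX(CASE WHEN QuestNum = 1 THEN Value END) AS Quest1,
           MAX(CASE WHEN QuestNum = 2 THEN Value END) AS Quest2,
           MAX(CASE WHEN QuestNum = 3 THEN Value END) AS Quest3
    FROM t
    GROUP BY ID, Date
    ORDER BY ID, Date
""").fetchall()
print(rows)
```

If a question was skipped, its column simply comes back NULL for that row, which handles the "QuestNum can skip" case.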
|
Use `GROUP BY ID, Date` as the group expression.
A SELECT never alters your data, except for some functions in some databases (like nextval, which modifies a sequence), and those changes are explicitly documented. Simple aggregate functions like this are totally safe.
But what you see is not a table, it's only a result. If you need a new table containing the result, you need a statement like
```
SELECT ID, Date,
       MAX(CASE WHEN QuestNum = "1" THEN Value END) AS Quest1,
       MAX(CASE WHEN QuestNum = "2" THEN Value END) AS Quest2,
       MAX(CASE WHEN QuestNum = "3" THEN Value END) AS Quest3
INTO table2
FROM table
GROUP BY ID, Date
```
(Note the `INTO table2`.)
|
Multiple row data transformed into single
|
[
"",
"sql",
"sqlite",
""
] |
I have this sample data and part of the query ([see the Fiddle](http://sqlfiddle.com/#!6/b53e9/2)). It shows each segment's start time and end time. The task is to get the sum of all the time segments between 6am and 10pm, in seconds. I have filtered on the day, but restricting to the 6am-10pm window is difficult for me (maybe another condition in the CASE statement). The second difficulty is how to count only part of a segment: when the start time is 5am and the end time is 7am, I want to get 3600 seconds for the part that lies between 6am and 10pm. Similarly, if a segment starts at 9pm and finishes at 5am, I want 3600 seconds, i.e. the part before 10pm.
```
Select
userid,
CASE
WHEN DateDiff(day, starttime, endtime) = 0 THEN datediff(second, starttime, endtime)
END As TotalSecondsFromSixAMToTenPM
FROM
Test
```
Here is the expected result, Note the first and second last value is obtained by taking these values
```
Select datediff(second, '2015-01-14 06:00:00.000', '2015-01-14 09:25:54.000')
Select datediff(second, '2015-01-14 21:30:14.000', '2015-01-14 22:00:00.000')
ββββββββββ¦βββββββββββββββββββββββββββββββ
β userid β TotalSecondsFromSixAMToTenPM β
β βββββββββ¬βββββββββββββββββββββββββββββββ£
β abc β 12354 β
β abc β 3600 β
β abc β 15150 β
β abc β 14684 β
β abc β 1786 β
β abc β NULL β
ββββββββββ©βββββββββββββββββββββββββββββββ
```
|
```
WITH splittedDates AS
(
SELECT
userid,
CAST(StartTime as date) 'StartDate',
CAST(EndTime as date) 'EndDate',
CAST(StartTime as time) 'stime',
CAST(EndTime as time) 'etime',
StartTime,
EndTime
FROM Test
), correctedTimes AS (
SELECT userid,
CASE WHEN StartDate=EndDate AND stime<etime AND stime<'10pm' THEN
CASE
WHEN stime>='6am' THEN StartTime
ELSE cast(StartDate as datetime) + cast('6am' as datetime)
END
END 'correctedStartTime',
CASE WHEN StartDate=EndDate AND stime<etime AND etime>'6am' THEN
CASE
WHEN etime<='10pm' THEN EndTime
ELSE cast(StartDate as datetime) + cast('10pm' as datetime)
END
END 'correctedEndTime'
FROM splittedDates
)
SELECT
userid,
datediff(second, correctedStartTime, correctedEndTime) TotalSecondsFromSixAMToTenPM
FROM correctedTimes
```
[fiddle](http://sqlfiddle.com/#!6/10c2d/6/0)
result:
```
| userid | TotalSecondsFromSixAMToTenPM |
|--------|------------------------------|
| abc | 12354 |
| abc | 3600 |
| abc | 15150 |
| abc | 14684 |
| abc | 1786 |
| abc | (null) |
```
**EDIT**
I added conditions `stime<'10pm'` and `etime>'6am'` to resolve the issue from comments
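The clipping logic of the accepted answer can be sketched in pure Python, which makes the three expected values from the question easy to verify (same-day segments only, matching the query):

```python
from datetime import datetime, time

# Clip each same-day segment to the 06:00-22:00 window and count the
# overlapping seconds; None mirrors the NULL the SQL query produces.
def seconds_in_window(start, end, lo=time(6), hi=time(22)):
    if start.date() != end.date() or start >= end:
        return None
    win_lo = datetime.combine(start.date(), lo)
    win_hi = datetime.combine(start.date(), hi)
    s, e = max(start, win_lo), min(end, win_hi)
    return int((e - s).total_seconds()) if s < e else 0

full = seconds_in_window(datetime(2015, 1, 14, 6, 0), datetime(2015, 1, 14, 9, 25, 54))
clip_start = seconds_in_window(datetime(2015, 1, 14, 5, 0), datetime(2015, 1, 14, 7, 0))
clip_end = seconds_in_window(datetime(2015, 1, 14, 21, 30, 14), datetime(2015, 1, 14, 22, 0))
print(full, clip_start, clip_end)
```

These reproduce the 12354, 3600, and 1786 second results from the question's expected output.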
|
I don't have MS SQL in front of me so I can't test this, but how about something like the following?
```
Select
userid,
CASE
WHEN DateDiff(day, starttime, endtime) = 0 and starttime>=6am and endTime<=10am
THEN datediff(second, starttime, endtime)
END As TotalSecondsFromSixAMToTenPM
FROM
Test
```
|
How can I select records on the basis of their timings between specific times in SQL server
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have the following three tables that I am having trouble joining because there is no unique key shared among all three:
Computers, Clients & Drives
```
*Table Computers*
ComputerID ClientID Name Domain UserName OS
========================================================================
1 1 WS01 ABC Admin 7
2 1 WS02 ABC Admin 7
3 1 WS03 ABC Admin 7
4 2 CH21 CORP Admin 7
5 2 CH22 CORP Admin 7
6 3 LT33 WGE Admin 7
*Table Clients*
ClientID Client Name
=======================
1 Company1
2 Company2
3 Company3
*Table Drives*
DriveID ComputerID Letter Size Free Missing FileSystem
======================================================================
1 1 C 102400 100000 0 NTFS
2 1 D 102400 100000 0 NTFS
3 1 E 102400 100000 1 FAT32
4 2 C 102400 100000 0 NTFS
5 3 C 102400 100000 0 NTFS
6 4 C 102400 100000 0 NTFS
7 4 F 102400 100000 0 NTFS
8 4 E 102400 100000 1 FAT32
9 4 D 102400 100000 0 NTFS
10 5 C 102400 100000 0 NTFS
11 5 D 102400 100000 0 NTFS
12 6 C 102400 100000 0 NTFS
```
I have the following Query:
```
SELECT cl.Name, comp.Name, dr.`Letter`, dr.`Free`,dr.`Size`,dr.`FileSystem`,
dr.`Missing`
FROM clients AS cl
INNER JOIN computers AS comp ON comp.`ClientID` = cl.clientid
LEFT OUTER JOIN drives AS dr ON dr.`ComputerID` = comp.`ComputerID`
```
I expect the following results
```
Expected Result:
ClientName ComputerName Letter Free Size Filesystem Missing
=======================================================================
Company1 WS01 C 100000 102400 NTFS 0
Company1 WS01 D 100000 102400 NTFS 0
Company1 WS01 E 100000 102400 NTFS 1
Company1 WS02 C 100000 102400 NTFS 0
Company1 WS03 C 100000 102400 NTFS 0
Company2 CH21 C 100000 102400 NTFS 0
Company2 CH21 F 100000 102400 NTFS 0
Company2 CH21 E 100000 102400 NTFS 1
Company2 CH21 D 100000 102400 NTFS 0
Company2 CH22 C 100000 102400 NTFS 0
Company2 CH22 D 100000 102400 NTFS 0
Company3 LT33 C 100000 102400 NTFS 0
```
Can somebody please explain why I am missing rows when I run the query shown above and don't get the expected result? Could you also suggest a way to properly join the 3 tables when there is no unique key shared by all three?
|
It all seems to work for me:
<http://sqlfiddle.com/#!6/f5d89/1>
It is in sql2014, but I don't think SQL flavours will be getting in the way here.
Which rows do you see missing?
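The claim that the query works as written can be reproduced locally; here is a sketch using SQLite through Python with a subset of the question's sample data (client 1 and its three computers), where the join returns exactly one row per drive:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE clients (ClientID INT, Name TEXT);
    CREATE TABLE computers (ComputerID INT, ClientID INT, Name TEXT);
    CREATE TABLE drives (DriveID INT, ComputerID INT, Letter TEXT);
    INSERT INTO clients VALUES (1, 'Company1');
    INSERT INTO computers VALUES (1, 1, 'WS01'), (2, 1, 'WS02'), (3, 1, 'WS03');
    INSERT INTO drives VALUES
        (1, 1, 'C'), (2, 1, 'D'), (3, 1, 'E'), (4, 2, 'C'), (5, 3, 'C');
""")
# Same join shape as the question: clients -> computers -> drives.
rows = conn.execute("""
    SELECT cl.Name, comp.Name, dr.Letter
    FROM clients cl
    INNER JOIN computers comp ON comp.ClientID = cl.ClientID
    LEFT OUTER JOIN drives dr ON dr.ComputerID = comp.ComputerID
    ORDER BY comp.Name, dr.Letter
""").fetchall()
print(len(rows), rows)
```

Five drives in, five rows out, which suggests any missing rows come from the real data (e.g. mismatched `ComputerID` values), not from the join itself.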
|
Your code looks solid. The only thing I can think of that might be an issue is the order of your JOINs.
There is a good article about Join order here.
<https://social.msdn.microsoft.com/Forums/sqlserver/en-US/9d223e3f-c040-4eb9-a44e-5556e85a821b/inner-join-after-left-outer-join>
|
Joining 3 x MYSQL Tables together with 2 Different Unique Keys
|
[
"",
"mysql",
"sql",
""
] |
Is there a way to pull back all records that have overlapping datetimes based on a user?
For instance;
TableA has the following rows;
```
TrainerID StartTime EndTime
1234 10-1-2015 08:30 10-1-2015 09:00
1234 10-1-2015 08:45 10-1-2015 09:15
1234 10-1-2015 09:30 10-1-2015 10:00
2345 10-1-2015 08:45 10-1-2015 09:15
2345 10-1-2015 09:30 10-1-2015 10:00
```
I need a query that can pull ONLY the following record, because its start time is before the previous end time for the trainer (double booked):
```
1234 10-1-2015 08:45 10-1-2015 09:15
```
|
The EXISTS code below should give you that answer. It checks that the start time of the clashing entry is before the end time of the main-list entry while still being after that entry's start time.
```
SELECT *
FROM tblTest clashing
WHERE EXISTS
(
SELECT 1
FROM tblTest mainlist
WHERE clashing.trainderid = mainlist.trainderid
AND clashing.starttime < mainlist.endtime
AND clashing.starttime > mainlist.starttime
)
```
This can also be written with an IN statement, but EXISTS is much more efficient.
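A runnable reproduction with SQLite through Python. Note the dates are stored here as ISO strings so that string comparison matches chronological order (the question's `10-1-2015` display format would not compare correctly as text; in SQL Server these would be real datetime columns):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblTest (trainerid INT, starttime TEXT, endtime TEXT)")
conn.executemany("INSERT INTO tblTest VALUES (?,?,?)", [
    (1234, '2015-10-01 08:30', '2015-10-01 09:00'),
    (1234, '2015-10-01 08:45', '2015-10-01 09:15'),
    (1234, '2015-10-01 09:30', '2015-10-01 10:00'),
    (2345, '2015-10-01 08:45', '2015-10-01 09:15'),
    (2345, '2015-10-01 09:30', '2015-10-01 10:00'),
])
# A row clashes if its start falls strictly inside another booking
# for the same trainer.
rows = conn.execute("""
    SELECT *
    FROM tblTest clashing
    WHERE EXISTS (
        SELECT 1
        FROM tblTest mainlist
        WHERE clashing.trainerid = mainlist.trainerid
          AND clashing.starttime < mainlist.endtime
          AND clashing.starttime > mainlist.starttime
    )
""").fetchall()
print(rows)
```

Only trainer 1234's 08:45 booking is returned, exactly as the question requires.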
|
First, you should sort by trainerId and Start_Time, and then join the table with itself using the right condition.
Try this query:
```
;WITH TrainersTemp AS
(
SELECT *, ROW_NUMBER() OVER ( ORDER BY trainerid, Start_Time) AS row_num
FROM Trainers
)
select t2.* from TrainersTemp t1
join TrainersTemp t2 on t1.TrainerId = t2.TrainerId and t1.row_num = t2.row_num-1
where t2.Start_Time<t1.End_Time
```
|
Find all rows where start date is before a prior end date
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2012",
""
] |
For example, I have the following relations.
```
Suppliers( sid: integer, sname: string, address: string)
Parts(pid: integer, pname: string, color: string)
Catalog( sid: integer, pid: integer, cost: real)
```
I write the following MYSQL query to find the sids of suppliers who supply only red parts.
Example 1)
```
SELECT DISTINCT C.sid FROM Catalog C
WHERE C.sid NOT IN(
SELECT C.sid FROM Catalog C
INNER JOIN Parts P ON p.pid = C.pid
WHERE P.color <> "red"
);
```
Example 2)
```
SELECT DISTINCT C.sid FROM Catalog C
WHERE C.sid NOT IN(
SELECT C2.sid FROM Catalog C2
INNER JOIN Parts P ON p.pid = C2.pid
WHERE P.color <> "red"
);
```
I would like to know if there is any difference between the above two query statements. If I want to reuse the same table twice in a nested query, do I have to give the table a different alias? I assume the inner query is executed before the outer query, so Example 1 should be correct, but I cannot be sure because Example 2 is the way the professor taught it.
|
There is no difference between your two queries; they are exactly the same. The only difference is that you use a different alias for the table `Catalog` in them. The parser resolves aliases from the innermost query to the outermost, so in your query from Example 1
```
SELECT DISTINCT C.sid FROM Catalog C
WHERE C.sid NOT IN(
SELECT C.sid FROM Catalog C
INNER JOIN Parts P ON p.pid = C.pid
WHERE P.color <> "red"
);
```
the `Catalog` in the subquery with alias `C` will not be the same reference as in the outside query, because the parser resolves it as if it were a separate table reference.
No matter which alias you use, the parser resolves from the inside (subqueries) outward. I hope that is clear.
Although, depending on the operators you use, you can reference an outer alias inside a subquery, like the query below.
To solve your problem you can use a NOT EXISTS clause, joining with the outer table like this:
```
SELECT DISTINCT C.sid
FROM Catalog C
WHERE NOT EXISTS (
SELECT 1
FROM Catalog Cin
INNER JOIN Parts P ON p.pid = Cin.pid
WHERE P.color <> 'red'
AND Cin.sid=C.sid );
```
|
Your queries will both work. Should you want to do it without a subquery try this:
```
select c.sid
from Catalog c inner join Parts p on p.pid = c.sid
group by c.sid
having min(p.color) = 'red' and max(p.color) = 'red'
```
or
```
having count(case when p.color = 'red' then 1 else null end) = count(*)
```
And here's another approach in case you're interested. It lets you easily grab all the supplier info at once:
```
select *
from Suppliers s
where 'red' = all (
select p.color
from Catalog c inner join Parts p on p.pid = c.pid
where c.sid = s.sid
)
```
|
If I want to query the same table twice in not in clause, can I use the same table alias?
|
[
"",
"mysql",
"sql",
"database",
""
] |
I'd like to do a `prorata` calculation in PL/SQL but I have no clue how to do so.
Here is my example:
```
ID class Wages Premium
1 A 15000 250
2 A 10000 0
```
I'd like to compute premium on a prorata basis within class, i.e. for every ID in each CLASS, I'd like to compute the premium on a prorata basis with respect to Wages.
Here, for ID 1, it would be:
```
premium = (15000 *sum(premium))/(sum(wages)) = (15000 * 250)/(25000) = 150
```
Thus, for ID 2 :
```
premium = (10000 * 250)/(25000) = 100
```
Obviously I have to do this on more voluminous data, but I haven't done much PL/SQL until now.
I should add that I MUST do it through an UPDATE statement.
Please could you help me?
Thanks a lot
|
The most efficient method would be to use ratio\_to\_report(), which is often strangely ignored.
```
select id,
premium,
sum(premium) over (partition by class) *
ratio_to_report(wages) over (partition by class) prorated_premium
from my_table
```
For an update I would look at:
```
update (query from above)
set premium = prorated_premium
```
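`ratio_to_report()` is Oracle-specific, but the same arithmetic can be checked with standard window functions; a sketch with SQLite (window functions need SQLite 3.25+) through Python, using the question's numbers:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prorata (id INT, class TEXT, wages REAL, premium REAL)")
conn.executemany("INSERT INTO prorata VALUES (?,?,?,?)",
                 [(1, 'A', 15000, 250), (2, 'A', 10000, 0)])
# Each row's share of the class premium, proportional to its wages.
rows = conn.execute("""
    SELECT id,
           wages * SUM(premium) OVER (PARTITION BY class)
                 / SUM(wages)   OVER (PARTITION BY class) AS prorated
    FROM prorata
    ORDER BY id
""").fetchall()
print(rows)
```

This reproduces the 150 and 100 premiums worked out in the question.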
|
You can easily calculate the new premium values with analytic functions:
```
create table prorata_test as
select 1 id,
'A' class,
15000 wages,
250 premium
from dual
union all
select 2 id,
'A' class,
10000 wages,
0 premium
from dual
union all
select 3 id,
'B' class,
10000 wages,
50 premium
from dual;
select id,
class,
wages,
premium,
sum(wages) over (partition by class) tot_wages,
sum(premium) over (partition by class) tot_premium,
wages * sum(premium) over (partition by class) / sum(wages) over (partition by class) new_premium
from prorata_test;
ID CLASS WAGES PREMIUM TOT_WAGES TOT_PREMIUM NEW_PREMIUM
---------- ----- ---------- ---------- ---------- ----------- -----------
1 A 15000 250 25000 250 150
2 A 10000 0 25000 250 100
3 B 10000 50 10000 50 50
```
So you can see that the analytic sum functions that calculate the tot\_wages and to\_premium columns sum up the values across each class (which is what we partitioned by (aka grouped by)), and you can therefore just use them in the new\_premium calculation.
To store the new premium value, you would use a MERGE statement with the above query as the source data set, like so:
```
merge into prorata_test tgt
using (select id,
class,
wages,
premium,
wages * sum(premium) over (partition by class) / sum(wages) over (partition by class) new_premium
from prorata_test) src
on (tgt.id = src.id)
when matched then
update set tgt.premium = src.new_premium;
commit;
select *
from prorata_test
order by id;
ID CLASS WAGES PREMIUM
---------- ----- ---------- ----------
1 A 15000 150
2 A 10000 100
3 B 10000 50
```
N.B. I am assuming here that ID is the primary key (or at least, that it's unique across the table!).
|
How to compute prorata calculation in PL/SQL?
|
[
"",
"sql",
"oracle",
"plsql",
""
] |
Reading this [article](http://www.dalibo.org/_media/understanding_explain.pdf) about the `EXPLAIN` command, I came across the so-called *invisible rows* concept. To be more specific:
> In a sequential scan, the executor needs:
>
> * to read all the blocks of relation foo
> * to check every row in each block to filter β***unvisible***β rows
Googling for the phrase [invisible row postgresql](https://www.google.ru/?gws_rd=ssl#newwindow=1&q=invisible%20row%20postgresql) and some related ones didn't give any useful results. So, what does the concept mean? Or is it an informal concept that is not standardized?
|
It's basically a consequence of MVCC and transactions. If you start a transaction then rows created by a different session will normally not be visible to your session until the transaction has run its course. This is to prevent the state of a transaction becoming inconsistent during its execution.
There are exceptions related to unique indexes and key columns, but it's relatively rare to encounter those, especially if all your primary keys are SERIAL.
|
Invisible rows are rows that are not visible to a transaction (let's call it `T1`) when it started.
A typical scenario is the following:
A transaction `T2` starts its execution. `T2` consists in the query
```
UPDATE users SET name = 'John' WHERE age < 18
```
Meanwhile, the transaction `T1` (concurrently with `T2`) starts its execution, doing the following:
```
SELECT COUNT(*) FROM users WHERE name = 'John'
```
As you can easily see, if `T1` ends before `T2`, its result will be a number X: the count of users whose name is John.
But if `T1` ends after `T2`, the resulting value X might be different (it will be, if some rows satisfy the `WHERE` predicate).
The same thing can happen in a `JOIN`: the resulting join relation may or may not contain the rows that satisfy the join predicate.
Think about the transaction `T1`
```
SELECT * FROM users u INNER JOIN infos i ON u.id = i.id;
```
And concurrently there's the execution of `T2`
```
UPDATE infos SET id = 9 WHERE id > 12
```
The physical implementation of the logical JOIN operator must handle these cases in order to produce the right result.
|
What are invisible rows in postgresql?
|
[
"",
"sql",
"postgresql",
""
] |