| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I'm trying to create a function that sums the values returned by one query and compares the total to the single value returned by another, simpler query.
This is what I have; however, I'm getting a syntax error near `BEGIN` (2nd line):
```
CREATE FUNCTION trigf1(sbno integer, scid numeric(4,0)) RETURNS integer
BEGIN
declare sum int default 0;
declare max as SELECT totvoters FROM ballotbox WHERE cid=scid AND bno=sbno;
for r as
SELECT nofvotes FROM votes WHERE cid=scid AND bno=sbno;
do
set sum = sum + r.nofvotes;
end for
if sum > max
then return(0);
else
return(1);
END
```
This results in:
> Syntax error near 'BEGIN'
I'm using PostgreSQL and pgAdmin III (just in case it is relevant).
I have no idea why I'm getting this error; everything seems to be exactly as the textbook defines it. (This is the textbook I'm using: <http://digilib.usu.ac.id/buku/107859/Database-systems-concepts,-6th-ed.html>)
|
I don't know which "textbook" you were using, but if everything you wrote is exactly as in that book, that book is totally wrong:
```
CREATE FUNCTION trigf1(sbno integer, scid numeric(4,0))
RETURNS integer
AS -- error #1: no AS keyword
$body$ -- error #2: use dollar quoting to specify the function body as a string
DECLARE -- error #3: the declare block comes before the actual code
sum_ integer := 0; -- error #5: you can't use a reserved keyword as a variable
max_ integer; -- error #6: you can't initialize a variable with a select,
r record; -- you need to declare the record for the cursor loop
BEGIN
select totvoters
into max_
from ballotbox
WHERE cid=scid AND bno=sbno;
-- error #7: the syntax for a loop uses IN not AS
-- error #8: you need to declare R before you can use it
-- error #9: the SELECT for a cursor loop must NOT be terminated with a ;
FOR r IN SELECT nofvotes FROM votes WHERE cid=scid AND bno=sbno
loop -- error #10: you need to use LOOP, not DO
sum_ := sum_ + r.nofvotes; -- error #11: you need to use := for an assignment, not SET
end loop; -- error #12: it's END LOOP
-- error #13: you need to terminate the statement with a ;
if sum_ > max_ then
return 0;
else
return 1;
end if; -- error #14: an END if is required
END;
$body$
language plpgsql; -- error #15: you need to specify the language
```
The manual documents all this:
* error #1,#2: <http://www.postgresql.org/docs/current/static/sql-createfunction.html>
* error #3: <http://www.postgresql.org/docs/current/static/plpgsql-structure.html>
* error #6: <http://www.postgresql.org/docs/current/static/plpgsql-declarations.html>
* error #7,#8,#9,#10,#12: <http://www.postgresql.org/docs/current/static/plpgsql-control-structures.html#PLPGSQL-RECORDS-ITERATING>
* error #11: <http://www.postgresql.org/docs/current/static/plpgsql-statements.html#PLPGSQL-STATEMENTS-ASSIGNMENT>
* error #14: <http://www.postgresql.org/docs/current/static/plpgsql-control-structures.html#PLPGSQL-CONDITIONALS>
---
The whole `FOR` loop is not needed and extremely inefficient. It can be replaced with:
```
SELECT sum(nofvotes)
into sum_
FROM votes
WHERE cid=scid AND bno=sbno;
```
Postgres has a native boolean type, it's better to use that instead of integers. If you declare the function as `returns boolean`, the last line can be simplified to
```
return max_ > sum_;
```
---
This part:
```
select totvoters
into max_
from ballotbox
WHERE cid=scid AND bno=sbno;
```
will **only** work reliably if `(cid, bno)` is unique in the table `ballotbox`. Otherwise the select might return more than one row: a plain `INTO` silently keeps the first row it sees, while the scalar subquery in the simplified SQL version raises a runtime error.
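If you would rather fail fast than risk an arbitrary row, PL/pgSQL's `INTO STRICT` raises an error unless the query returns exactly one row. A minimal sketch, reusing the variable names from the function above:

```
-- INTO STRICT raises an error ("query returned no rows" /
-- "query returned more than one row") instead of proceeding silently
select totvoters
  into strict max_
  from ballotbox
 where cid = scid and bno = sbno;
```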
---
Assuming that the select on `ballotbox` does use the primary (or a unique) key, the whole function can be simplified to a small SQL expression:
```
create function trigf1(sbno integer, scid numeric(4,0))
returns boolean
as
$body$
select (select totvoters from ballotbox WHERE cid=scid AND bno=sbno) >
(SELECT sum(nofvotes) FROM votes WHERE cid=scid AND bno=sbno);
$body$
language sql;
```
|
I am not really a PostgreSQL person, but I would have thought
```
declare max as SELECT totvoters FROM ballotbox WHERE cid=scid AND bno=sbno;
```
should be
```
declare max := SELECT totvoters FROM ballotbox WHERE cid=scid AND bno=sbno;
```
|
Syntax error when create Postgres function
|
[
"",
"sql",
"postgresql",
""
] |
I have two tables, one that store product information and one that stores reviews for the products.
I am now trying to get the number of reviews submitted for the products between two dates, but for some reason I get the same results regardless of the dates I put in.
This is my query:
```
SELECT
productName,
COUNT(*) as `count`,
avg(rating) as `rating`
FROM `Reviews`
LEFT JOIN `Products` using(`productID`)
WHERE `date` BETWEEN '2015-07-20' AND '2015-07-30'
GROUP BY
`productName`
ORDER BY `count` DESC, `rating` DESC;
```
This returns:
```
+------------+---------------------+
| productName| count|rating |
+------------+------+--------------+
| productA | 23 | 4.3333333 |
| productB | 17 | 4.25 |
| productC | 10 | 3.5 |
+------------+---------------------+
```
Products table:
```
+---------+-------------+
|productID | productName|
+---------+-------------+
| 1 | productA |
| 2 | productB |
| 3 | productC |
+---------+-------------+
```
Reviews table
```
+---------+-----------+--------+---------------------+
|reviewID | productID | rating | date |
+---------+-----------+--------+---------------------+
| 1 | 1 | 4.5 | 2015-07-27 17:47:01|
| 2 | 1 | 3.5 | 2015-07-27 18:54:22|
| 3 | 3 | 2 | 2015-07-28 13:28:37|
| 4 | 1 | 5 | 2015-07-28 18:33:14|
| 5 | 2 | 1.5 | 2015-07-29 11:58:17|
| 6 | 2 | 3.5 | 2015-07-30 15:04:25|
| 7 | 2 | 2.5 | 2015-07-30 18:11:11|
| 8 | 1 | 3 | 2015-07-30 18:26:23|
| 9 | 1 | 3 | 2015-07-30 21:35:05|
| 10 | 1 | 4.5 | 2015-07-31 14:25:47|
| 11 | 3 | 0.5 | 2015-07-31 14:47:48|
+---------+-----------+--------+---------------------+
```
When I put in two random dates that I know for sure are not in the date column, I still get the same results. Even when I try to retrieve records for only a single day, I get the same results.
|
If the result, given your sample data, that you're looking for is:
```
| productName | count | rating |
|-------------|-------|--------|
| productA | 5 | 4 |
| productB | 3 | 3 |
| productC | 1 | 2 |
```
This is the count and average of reviews made on any date between `2015-07-20` and `2015-07-30` inclusive.
Then there are two issues with your query. First, you need to change the join to an `inner join` instead of a `left join`, but more importantly you need to change the date condition, as you are currently excluding reviews that fall on the last date of the range but after midnight.
This happens because your `between` clause compares datetime values with date values so the comparison ends up being `date between '2015-07-20 00:00:00' and '2015-07-30 00:00:00'` which clearly excludes some dates at the end.
The fix is to either change the date condition so that the end is a day later:
```
where date >= '2015-07-20' and date < '2015-07-31'
```
or cast the `date` column to a `date` value, which will remove the time part:
```
where date(date) between '2015-07-20' and '2015-07-30'
```
[Sample SQL Fiddle](http://sqlfiddle.com/#!9/3c188a/26)
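Putting the two fixes together (an inner join plus a half-open date range), the corrected query might look like this sketch:

```
SELECT
    productName,
    COUNT(*) as `count`,
    avg(rating) as `rating`
FROM `Reviews`
INNER JOIN `Products` using(`productID`)
WHERE `date` >= '2015-07-20' AND `date` < '2015-07-31'
GROUP BY
    `productName`
ORDER BY `count` DESC, `rating` DESC;
```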
|
You should not use a left join, because by doing so you retrieve all the rows from one table. What you should use is something like:
```
select
productName,
count(*) as `count`,
avg(rating) as `rating`
from
products p,
reviews r
where
p.productID = r.productID
and `date` between '2015-07-20' and '2015-07-30'
group by productName
order by count desc, rating desc;
```
|
Using MySQL group by clause with where clause
|
[
"",
"mysql",
"sql",
""
] |
I have a trip that has sequence of stops
```
Trip Stop Time
1 A 1:10
1 B 1:15
1 B 1:20
1 B 1:25
1 C 1:30
2 A 2:10
2 B 2:15
2 C 2:20
2 B 2:25
```
I want to transfer the table to:
```
Trip Stop Time WaitTime
1 A 1:10 0
1 B 1:15 10min
1 C 1:30 0
2 A 2:10 0
2 B 2:15 0
2 C 2:20 0
2 B 2:25 0
```
I'm wondering whether an Oracle query can achieve this, or whether I need a cursor.
Pseudo code:
`SELECT case when previousstop = stop then time - lag(time) over (partition by trip order by trip, time) end as waittime`, but I don't know how to group the consecutive B rows.
FYI: In the second trip, I do want to keep both Bs. The only time I want to group the Bs is when they are in sequence with each other. I don't think max/min alone will work well for this case.
|
I figured it out myself! Here is the answer in case anyone needs it:
```
With TB AS (select t1.*
        ,sum(decode(t1.stop, t1.prev_stp, 0, 1)) over (partition by trip order by time) new_seq
    from
        (select t.*
                ,lag(stop) over (partition by t.trip order by t.time) prev_stp
         from test t) t1)
SELECT trip, stop, new_seq, min(time) as time1, (max(time) - min(time)) as wait_time
FROM TB
group by trip, stop, new_seq
order by trip, new_seq
```
|
```
with y as (select trip, stop, min(time) mintime, max(time) maxtime
from tablename group by trip, stop)
select y.trip, y.stop, y.mintime, (y.maxtime-y.mintime) waittime
from y join tablename t
on t.trip = y.trip and t.stop = y.stop
```
|
Loop and group by only when some conditions occur
|
[
"",
"sql",
"cursor",
""
] |
```
USE AQOA_Core
SELECT TOP 10
p.Title, SUM(r.SalesVolume) AS Sales, c.WeekId AS ReleaseWeek
FROM
Product p
INNER JOIN
RawData r ON p.ProductId = r.ProductId
INNER JOIN
Calendar c ON r.DayId = c.DayId
WHERE
c.WeekId BETWEEN 20145227 AND 20145228
GROUP BY
p.Title, c.WeekId
ORDER BY
Sales DESC, ReleaseWeek DESC
```
The result set of the above query is:
```
Title Sales ReleaseWeek
Movie1 10 20145228
Movie1 10 20145227
Movie2 10 20145228
Movie2 10 20145227
Movie3 10 20145228
Movie3 10 20145227
Movie4 10 20145228
Movie4 10 20145227
Movie5 10 20145228
Movie5 10 20145227
```
I basically want only the first week from the `ReleaseWeek` column while still grouping by the `Title` column.
The ideal result set would look like:
```
Title Sales ReleaseWeek
Movie1 20 20145228
Movie2 20 20145228
Movie3 20 20145228
Movie4 20 20145228
Movie5 20 20145228
```
How should I go about doing this? Probably by a subquery?
The dataset used here is just for simple representation. The data in the actual dataset is much larger.
|
Have you tried:
```
SELECT TOP 10
p.Title, SUM(r.SalesVolume) AS Sales, Min(c.WeekId) AS ReleaseWeek
FROM
Product p
INNER JOIN
RawData r ON p.ProductId = r.ProductId
INNER JOIN
Calendar c ON r.DayId = c.DayId
WHERE
c.WeekId BETWEEN 20145227 AND 20145228
GROUP BY
p.Title
ORDER BY
Sales DESC, ReleaseWeek DESC
```
Your week IDs sort alphabetically/numerically, so as far as I am aware you will get the first week with `MIN`.
I have not used this technique in a while, and I am pretty sure it does not work in all databases.
Also, as a commenter mentioned, you may need `MAX` instead of `MIN`: looking at your example, you are taking the last week, not the first.
|
You need to select the first 10 rows in a subselect, then aggregate over the subselect. This chooses the latest `ReleaseWeek`, matching your test data, instead of the first `ReleaseWeek` as your text describes. You can change `MAX` to `MIN` if that was what you meant:
```
;WITH CTE as
(
SELECT TOP 10
p.Title, r.SalesVolume, c.WeekId
FROM
Product p
INNER JOIN
RawData r ON p.ProductId = r.ProductId
INNER JOIN
Calendar c ON r.DayId = c.DayId
WHERE
c.WeekId BETWEEN 20145227 AND 20145228
ORDER BY
        r.SalesVolume DESC, c.WeekId DESC
)
SELECT
Title, SUM(SalesVolume) AS Sales, MAX(WeekId) ReleaseWeek
FROM CTE
GROUP BY Title
```
Since you want to aggregate `WeekId`, you can't include it in your `GROUP BY`.
|
SQL: Get the first value of a GroupBY Clause
|
[
"",
"sql",
"sql-server",
""
] |
Refer to my previous posting.
[SQL cleanup script, delete from one table that's not in the other](https://stackoverflow.com/questions/31676517/sql-cleanup-script-delete-from-one-table-thats-not-in-the-other)
Using DB2 for IBM i (As400, Db2).
I am executing the following sql as a cleanup script 3am.
```
DELETE FROM p6prodpf A WHERE (0 = (SELECT COUNT(*) FROM P6OPIPF B WHERE B.OPIID = A.OPIID))
```
I have a different process that runs at the same time as this SQL; it inserts two records: first the `P6OPIPF` record, and then the detail record into `P6PRODPF`.
The problem.
The `P6PRODPF` record is missing after the SQL cleanup ran. But remember that the process that stores the records ran at the same time.
How I understand the SQL: it goes through `P6PRODPF` and checks whether each record is in `P6OPIPF`; if it's not in `P6OPIPF`, it deletes the `P6PRODPF` record.
But then I ran Visual Explain in System i Navigator on this SQL and got the following result.
[](https://i.stack.imgur.com/iNe0e.png)
[](https://i.stack.imgur.com/Yb65c.png)
[](https://i.stack.imgur.com/tkEBJ.png)
[](https://i.stack.imgur.com/pjnJ8.png)
Now I am confused.
After the Visual Explain, it looks like the statement starts by checking `P6OPIPF`.
So then it reads: if, at that instant in time, there is a record in `P6OPIPF` and no record with the same key in `P6PRODPF`, then delete the `P6PRODPF` record.
This could explain my problem: `P6PRODPF` gets deleted when the process that inserts the records and the SQL script run at the same time.
So how I see it, in sequence (my theory):
1. The process that inserts the two records starts.
2. The process that inserts the two records inserts the first record in
`P6OPIPF`.
3. At the same time, the SQL cleanup runs. The query sees the `P6OPIPF`
   record and checks whether it has a `P6PRODPF` record. At this stage
   there is still no `P6PRODPF` record inserted, so SQL thinks it needs to
   delete the record in `P6PRODPF`.
4. At the same time, the process that inserts the two records inserts the
   second record in `P6PRODPF`.
5. And because the SQL did not see the `P6PRODPF` record at that stage, it
   deletes the newly inserted record in `P6PRODPF`, leaving a `P6OPIPF`
   record with no `P6PRODPF` record.
Am I correct?
What I actually want to know is just about the delete statement listed above. My understanding was that it goes through `P6PRODPF` and checks whether each record is in `P6OPIPF`; if it's not in `P6OPIPF`, it deletes the `P6PRODPF` record. But after the Visual Explain I can see it starts by checking `P6OPIPF`. So which table will the delete statement check first?
The code of the insert is generated in CA PLEX generator. RPGIV code.
My one function that will insert first `P6OPIPF`(OperationsItem.Update.InsertRow) and then its detail in `P6PRODPF`(ProductDetail.Update.InsertRow).
[Insert Row function](https://www.dropbox.com/s/qnwto86esums9gn/P61vcF.txt?dl=0)
My Scheduled function code that will execute the delete Script.
[Scheduled delete script function](https://www.dropbox.com/s/pfsqj18at3epup6/Code%20to%20Delete.txt?dl=0)
Hope it makes sense.
|
Have you heard of the concepts "transaction" and "isolation"? Typically, different processes running against the same database are shielded (isolated) from each other, so each operates without seeing the immediate impact of any other transaction running at the same time. Logically, two transactions (each a process or sequence of SQL statements) executed at the same time are executed in a serial way.
In your case, either process can logically be the "first" or the "second". If you repeat your tests, you may see different results depending on which one is (logically) first.
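One pragmatic mitigation, independent of isolation levels, is to keep the cleanup away from rows that may still be mid-insert. Assuming `P6PRODPF` has (or can be given) an insert-timestamp column, here called `CRTTS` (a hypothetical name), the cleanup could purge only rows older than a grace period:

```
-- CRTTS is an assumed timestamp column; adjust to your schema
DELETE FROM p6prodpf A
WHERE NOT EXISTS (SELECT 1 FROM p6opipf B WHERE B.opiid = A.opiid)
  AND A.crtts < CURRENT TIMESTAMP - 1 HOUR
```

With this, a row inserted moments ago is never a delete candidate, so the race described in the question cannot remove it.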
|
To explain the Visual Explain: DB2 will evaluate the inner expression before executing the `DELETE` clause; it has to, or else it won't know which rows are affected.
The reason your rows are missing is that the delete script ran before the insert script.
|
In what sequence will the Delete SQL execute?
|
[
"",
"sql",
"db2",
"ibm-midrange",
"db2-400",
""
] |
I have a string like `'abc : efg : xyz'` in a column of a table; all the records in this column follow this pattern. I want the substring of this string after the last colon (`:`). Any help is appreciated.
example:
```
'abc : efg : xyz' -> xyz
'abc : efg : efghj'-> efghj
```
I have a query for this, below, but I want something shorter:
```
SELECT REPLACE (
REGEXP_SUBSTR (
'abc : efg : xyz',
':.*$',
REGEXP_INSTR (
'abc : efg : xyz',
':',
1,
LENGTH ('abc : efg : xyz')
- LENGTH (
REPLACE ('abc : efg : xyz',
':',
NULL))),
1),
': ') data_after_the_last_colon
FROM DUAL
```
|
What you want is:
```
REGEXP_REPLACE (INPUTSTR, '^([^:]+ : )*', '')
```
Specifically, this seeks a string starting from the beginning (initial caret `^`) with zero or more occurrences of a string with non-colons (`[^:]+`) followed by `:` . It replaces all of these leading strings terminated with the space-colon-space with an empty string.
The following query demonstrates how this works. I've used a factored subquery with your sample input and half a dozen other tests; the output shows the input, my `REGEXP_REPLACE` result, and your `REPLACE (REGEXP_SUBSTR` syntax (I replaced your `'abc : efg : xyz'` with the factored subquery). You can add another test case by duplicating a `union all select` line and changing the string for `inputstr`.
Oh, about `doold`: your syntax couldn't handle an input string without a colon; it would throw an error that killed all query results. So I wrapped your syntax in a `DECODE (doold` to get all the rows back.
```
with sampl as (
select 'abc : efg : xyz' as inputstr, 1 as doold from dual
union all select 'z : b :' as inputstr, 1 as doold from dual
union all select 'z : b : ' as inputstr, 1 as doold from dual
union all select ' : a ' as inputstr, 1 as doold from dual
union all select ' a ' as inputstr, 0 as doold from dual
union all select '' as inputstr, 1 as doold from dual
union all select ' hij : klm : nop : qrs : tuv' as inputstr, 1 as doold from dual
)
SELECT
inputstr,
regexp_replace (inputstr, '^([^:]+ : )*', '') as bettr,
decode (doold,
1, -- the following is your original expression, for comparison
-- purposes, edited only to replace 'abc : efg : xyz' with inputstr
REPLACE (
REGEXP_SUBSTR (
inputstr,
':.*$',
REGEXP_INSTR (
inputstr,
':',
1,
LENGTH (inputstr)
- LENGTH (
REPLACE (inputstr,
':',
NULL))),
1),
': '),
'Sorry the syntax won''t support input "' || inputstr || '"'
) data_after_the_last_colon
FROM sampl
order by doold desc, length (inputstr)
```
|
As you say the pattern is fixed, reversing the string and taking the substring up to the first colon is the easiest approach. You can also use `trim` to eliminate any leading/trailing spaces.
```
select reverse(substr(reverse('abc : efg : efghj'),
1,instr(reverse('abc : efg : efghj'),':')-1)) from dual
```
|
Oracle sql regular expression
|
[
"",
"sql",
"regex",
"oracle",
""
] |
The table I am presented with looks similar to this:
```
CREATE TABLE user_status (
user_id NUMBER(10,0) PRIMARY KEY,
applied TIMESTAMP,
joined TIMESTAMP,
last_attended TIMESTAMP,
quit TIMESTAMP
);
```
The database is Oracle 11g.
What SQL query could I use to return `APPLIED`, `JOINED`, `ACTIVE` or `INACTIVE`, based on whether `applied`, `joined`, `last_attended` or `quit` is the latest, along with the respective date?
It is also acceptable if I can get the name of the column (instead of APPLIED, JOINED, ACTIVE or INACTIVE) that has the latest date, if that greatly simplifies the query.
Sample rows:
```
1 | 28-JUL-15 03.37.07 PM | 29-JUL-15 03.37.07 PM | 30-JUL-15 03.37.07 PM | (null)
2 | 18-JUL-15 03.55.24 PM | 19-JUL-15 03.55.24 PM | 20-JUL-15 03.55.24 PM | 31-JUL-15 03.55.24 PM
```
Expected result:
```
1 | ACTIVE | 30-JUL-15 03.37.07 PM
2 | INACTIVE | 31-JUL-15 03.55.24 PM
```
**Update**
Here's the solution I have based on Gordon Linoff's answer:
```
select (case greatest(coalesce(applied, to_date('0001-01-01', 'YYYY-MM-DD')),
coalesce(joined, to_date('0001-01-01', 'YYYY-MM-DD')),
coalesce(last_attended, to_date('0001-01-01', 'YYYY-MM-DD')),
coalesce(quit, to_date('0001-01-01', 'YYYY-MM-DD')))
when applied
then 'APPLIED'
when joined
then 'JOINED'
when last_attended
then 'INACTIVE'
when quit
then 'QUIT'
else null
end) as status, greatest(coalesce(applied, to_date('0001-01-01', 'YYYY-MM-DD')),
coalesce(joined, to_date('0001-01-01', 'YYYY-MM-DD')),
coalesce(last_attended, to_date('0001-01-01', 'YYYY-MM-DD')),
coalesce(quit, to_date('0001-01-01', 'YYYY-MM-DD'))) as status_date
from user_status;
```
|
If all have values, you can do:
```
select us.*,
(case when applied = greatest(applied, joined, last_attended, quit)
then 'Applied'
when joined = greatest(applied, joined, last_attended, quit)
then 'Joined'
when last_attended = greatest(applied, joined, last_attended, quit)
then 'Last Attended'
else 'Quit'
end)
from user_status us
```
If these could have `NULL` values, then you need more complex logic, because `greatest()` returns `NULL` if any argument is `NULL`. Assuming all are in the past, then something like this:
```
select (case greatest(coalesce(applied, date '0001-01-01'),
                      coalesce(joined, date '0001-01-01'),
                      coalesce(last_attended, date '0001-01-01'),
                      coalesce(quit, date '0001-01-01') )
when applied
then 'Applied'
when joined
then 'Joined'
when last_attended
then 'Last Attended'
else 'Quit'
end)
from . . .
```
|
```
select user_id, case greatest(applied, joined, last_attended, quit)
when applied then 'APPLIED'
when joined then 'JOINED'
when last_attended then 'ACTIVE'
when quit then 'INACTIVE' end
from user_status
```
In the event of a tie between fields, the first field in the `case` expression wins.
|
Oracle SQL: Return a string based on which column has the highest value
|
[
"",
"sql",
"oracle",
""
] |
I want to retrieve data from my table `Card`.
```
table Card(
MembershipNumber,
EmbossLine,
status,
EmbossName
)
```
Such that only those rows should be returned that have a repeating `MembershipNumber`, i.e. a `count` greater than 1.
Like if I Have following records
```
(11,0321,'active','John')
(11,0322,'active','John')
(23,0350,'active','Mary')
(46,0383,'active','Fudge')
(46,0382,'active','Fudge')
(46,0381,'active','Fudge')
```
The query should return all records except the third one. Is it possible?
**EDITED** I got the answer to my question. I have another query: I want to filter the rows by `status` too, but when I run the following query I don't get the desired result:
```
SELECT EmbossLine,Membershipnumber,status,embossname,*
FROM (SELECT *,
Count(MembershipNumber)OVER(partition BY EmbossName) AS cnt
FROM card) A
WHERE cnt > 1 AND status='E0'
```
Before adding `status` to the where clause, it works perfectly fine. See picture: [](https://i.stack.imgur.com/bSqyr.png)
After adding filtering by status
[](https://i.stack.imgur.com/QLqAD.png)
|
Use `Count() Over()` window function to do this.
```
SELECT *
FROM (SELECT *,
Count(MembershipNumber)OVER(partition BY EmbossName) AS cnt
FROM yourtable) A
WHERE cnt > 1
```
**Demo**
```
SELECT MembershipNumber,
[status],
EmbossName
FROM (SELECT *,
Count(MembershipNumber)OVER(partition BY EmbossName) AS cnt
FROM (VALUES (11.0321,'active','John'),
(11.0322,'active','John'),
(23.0350,'active','Mary'),
(46.0383,'active','Fudge'),
(46.0382,'active','Fudge'),
(46.0381,'active','Fudge')) tc (MembershipNumber, [status], EmbossName)) A
WHERE cnt > 1
```
|
Find the duplicates on `embossname` and join back to get the result:
```
select t1.* from card as t1 inner join
    (select embossname from card group by embossname having count(*)>1) as t2
    on t1.embossname = t2.embossname
```
|
Retrieve rows on the basis of repeating value of a column
|
[
"",
"sql",
"sql-server",
"count",
""
] |
I need to add to the `WHERE` clause one condition, which is always true, but it must reference one of the columns, for example:
`ID` is the primary key (and therefore `NOT NULL`)
and I will execute a select:
```
SELECT *
FROM Table
WHERE ID IS NOT NULL
```
Will this condition be ignored or checked for every row?
|
SQL Server can determine at compile time that this condition will always be true and avoid the need to check at runtime.
```
CREATE TABLE #T
(
ID INT CONSTRAINT PK_ID PRIMARY KEY NONCLUSTERED,
X INT CONSTRAINT UQ_X UNIQUE
)
SELECT X
FROM #T
WHERE ID IS NOT NULL;
DROP TABLE #T
```
[](https://i.stack.imgur.com/cx6he.png)
In the execution plan above the only index accessed is `UQ_X` and this doesn't even contain the `ID` column that would make such a runtime evaluation possible.
By contrast if `ID` is nullable (and replaced with a unique constraint rather than primary key as a PK wouldn't allow NULL) then the check would of course need to be made at run time and so the plan will need to retrieve the column and might look like one of the following.
## Scans whole table with predicate pushed into scan
[](https://i.stack.imgur.com/12IIg.png)
## Attempt to use narrower index requires lookup to retrieve the column and evaluate the predicate
[](https://i.stack.imgur.com/zKydO.png)
|
When you execute following query:
```
SELECT * FROM Table WHERE ID IS NOT NULL;
```
It will display all the rows in your table given that ID is defined as the primary key. This will be similar to `SELECT * FROM Table;` from the display point of view. However, the condition will not be ignored.(i.e. the `where` clause will be checked)
|
SQL Server - ignoring always true condition
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I have a stored procedure that generates a text file; it has a lot of filters. Here it is:
```
-- =============================================
-- Author: Ricardo Ríos
-- Create date: 17/01/2014
-- Description: Genera el TXT para Importar a Saint
-- =============================================
ALTER PROCEDURE [dbo].[SP_SAINT_TXT]
-- Add the parameters for the stored procedure here
(
@nomina VARCHAR(MAX),
@gerencia VARCHAR(MAX),
@sucursal VARCHAR(MAX),
@empresa VARCHAR(MAX),
@departamento VARCHAR(MAX),
@cargo VARCHAR(MAX),
@horario VARCHAR(MAX),
@locacion VARCHAR(MAX),
@empleados VARCHAR(MAX),
@desde DATETIME,
@hasta DATETIME
)
AS
BEGIN
SET NOCOUNT ON;
DECLARE @cedula varchar(max), @exnocturnas DECIMAL(5,2),
@diast DECIMAL(5,2), @diasf DECIMAL(5,0), @diasd DECIMAL(5,2),
@matut DECIMAL(5,2), @vespe DECIMAL(5,2), @noctu DECIMAL(5,2),
@linea varchar(max), @txt varchar(max),
@l1 varchar(max),
@l2 varchar(max),
@l3 varchar(max),
@l4 varchar(max),
@l5 varchar(max),
@l6 varchar(max),
@l7 varchar(max)
SET @txt = ''
SET @nomina = (SELECT REPLACE(@nomina, '(', ''))
SET @nomina = (SELECT REPLACE(@nomina, ')', ''))
SET @gerencia = (SELECT REPLACE(@gerencia, '(', ''))
SET @gerencia = (SELECT REPLACE(@gerencia, ')', ''))
SET @sucursal = (SELECT REPLACE(@sucursal, '(', ''))
SET @sucursal = (SELECT REPLACE(@sucursal, ')', ''))
SET @empresa = (SELECT REPLACE(@empresa, '(', ''))
SET @empresa = (SELECT REPLACE(@empresa, ')', ''))
SET @departamento = (SELECT REPLACE(@departamento, '(', ''))
SET @departamento = (SELECT REPLACE(@departamento, ')', ''))
SET @cargo = (SELECT REPLACE(@cargo, '(', ''))
SET @cargo = (SELECT REPLACE(@cargo, ')', ''))
SET @locacion = (SELECT REPLACE(@locacion,'(',''))
SET @locacion = (SELECT REPLACE(@locacion,')',''))
SET @empleados = (SELECT REPLACE(@empleados,'(',''))
SET @empleados = (SELECT REPLACE(@empleados,')',''))
declare cursor_txt cursor for
SELECT B.ID AS cedula,
SUM(A.extrasnocturnas) AS extrasnocturnas,
SUM(A.diastrabajados) AS diastrabajados,
SUM(A.diasfaltantes) AS diasfaltantes,
SUM(A.diasdescanso) AS diasdescanso,
SUM(A.maturinas) AS maturinas,
SUM(A.vespertinas) AS vespertinas,
SUM(A.nocturnas) AS nocturnas
FROM exsaint A
RIGHT JOIN tabela B ON A.cedula = B.ID
WHERE A.desde >= @desde AND A.hasta <= @hasta
AND B.tipo_nomina IN (CASE WHEN @nomina = '-1' THEN B.tipo_nomina ELSE @nomina END)
AND B.gerencia IN (CASE WHEN @gerencia = '-1' THEN B.gerencia ELSE @gerencia END)
AND B.sucursal IN (CASE WHEN @sucursal = '-1' THEN B.sucursal ELSE @sucursal END)
AND B.empresa IN (CASE WHEN @empresa = '-1' THEN B.empresa ELSE @empresa END)
AND B.departamento IN (CASE WHEN @departamento = '-1' THEN B.departamento ELSE @departamento END)
AND B.cargo IN (CASE WHEN @cargo = '-1' THEN B.cargo ELSE @cargo END)
AND B.locacion IN (CASE WHEN @locacion = '-1' THEN B.locacion ELSE @locacion END)
AND B.ID IN (CASE WHEN @empleados = '-1'THEN B.ID ELSE @empleados END)
GROUP BY ID
ORDER BY ID
open cursor_txt
fetch next from cursor_txt into @cedula, @exnocturnas, @diast, @diasf, @diasd, @matut, @vespe, @noctu
while @@fetch_status = 0
begin
SET @linea = ''
SET @l1 = CAST(@exnocturnas AS CHAR(8))
SET @l2 = CAST(@diast AS CHAR(8))
SET @l3 = CAST(@diasf AS CHAR(3))
SET @l4 = CAST(@diasd AS CHAR(8))
SET @l5 = CAST(@matut AS CHAR(8))
SET @l6 = CAST(@vespe AS CHAR(8))
SET @l7 = CAST(@noctu AS CHAR(8))
SET @linea =
LTRIM(CONVERT(CHAR(16),@cedula)) +
LTRIM(RTRIM((SELECT CONVERT(VARCHAR(10), @hasta, 103) AS [DD/MM/YYYY]))) +
LTRIM(RTRIM('000')) +
LTRIM(RTRIM(REPLICATE('0', 8-LEN(@l1)) + @l1 )) +
LTRIM(RTRIM(REPLICATE('0', 8-LEN(@l2)) + @l2 )) +
LTRIM(RTRIM(REPLICATE('0', 3-LEN(@l3)) + @l3 )) +
LTRIM(RTRIM(REPLICATE('0', 8-LEN(@l4)) + @l4 )) +
LTRIM(RTRIM(REPLICATE('0', 8-LEN(@l5)) + @l5 )) +
LTRIM(RTRIM(REPLICATE('0', 8-LEN(@l6)) + @l6 )) +
LTRIM(RTRIM(REPLICATE('0', 8-LEN(@l7)) + @l7 )) +
LTRIM(RTRIM('00000000')) +
+ CHAR(13) + CHAR(10);
PRINT @txt
PRINT @linea
SET @txt = @txt + @linea
fetch next from cursor_txt into @cedula, @exnocturnas, @diast, @diasf, @diasd, @matut, @vespe, @noctu
end
close cursor_txt
deallocate cursor_txt
SELECT @txt
END
```
The issue is that when I pass multiple values to the `IN` filter, I get this error.
> [Microsoft][ODBC SQL Server Driver][SQL Server]Conversion failed when converting the varchar value ' 5 , 4 ' to data type int.
When I execute the stored procedure like below.
```
EXECUTE SP_SAINT_TXT '-1', '-1', '-1', '( 5 , 4 )', '-1', '-1', '-1', '-1', '-1', '20140801', '20150802'
```
Is there a way that I can add those filters with some conversions or something else and it works?
|
`CASE` inside `IN` seems not possible for this task because `CASE` can only return a scalar value, [according to this post](https://stackoverflow.com/questions/11232267/using-case-statement-inside-in-clause). Also, SQL Server can't natively deal with comma-separated value strings. If you can afford to change the parameter to a more appropriate data type, such as a table-valued parameter, that would be the ultimate solution.
Otherwise, there are some possible workarounds that you can attempt for this task [here](https://stackoverflow.com/questions/878833/passing-a-varchar-full-of-comma-delimited-values-to-a-sql-server-in-function). This one is based on [the answer by @CeejeeB](https://stackoverflow.com/a/16936683/2998271). Change this part of your SQL:
```
AND B.empresa IN (CASE WHEN @empresa = '-1' THEN B.empresa ELSE @empresa END)
```
to this:
```
AND (
@empresa = '-1'
or
B.empresa IN (SELECT t.id.value('.', 'int') id FROM @empresaXml.nodes('//s') as t(id))
)
```
The above statement always evaluates to true when `@empresa = '-1'`. Otherwise, the `int` values will be extracted from the `@empresaXml` variable and used to filter the `empresa` column.
The variable `@empresaXml` itself is declared and populated as follows:
```
DECLARE @empresaXml XML
SET @empresaXml = CAST('<s>' + REPLACE(@empresa, ',', '</s><s>') + '</s>' AS XML)
```
**[SQL Fiddle Demo](http://sqlfiddle.com/#!6/3ebed/1) :**
```
CREATE TABLE MyTable(empresa int, data varchar(10))
;
INSERT INTO MyTable
VALUES
(1,'a'),
(2,'b'),
(3,'c'),
(4,'d'),
(5,'e')
;
Declare @empresa varchar(50) = '( 5 , 4 )'
DECLARE @empresaXml XML
SET @empresa = REPLACE(REPLACE(@empresa,'(',''), ')', '')
SET @empresaXml = CAST('<s>' + REPLACE(@empresa, ',', '</s><s>') + '</s>' AS XML)
SELECT *
FROM MyTable B
WHERE (@empresa = '-1' or B.empresa IN (SELECT t.id.value('.', 'int') id
FROM @empresaXml.nodes('//s') as t(id)))
```
**output :**
```
| empresa | data |
|---------|------|
| 4 | d |
| 5 | e |
```
|
Ah, the issue is that it's treating the list as a varchar, not integers.
What you need to do is split them out, either into another table or via XML. This is best explained here:
[Passing a varchar full of comma delimited values to a SQL Server IN function](https://stackoverflow.com/questions/878833/passing-a-varchar-full-of-comma-delimited-values-to-a-sql-server-in-function)
EDIT: Link only answers are bad, so here is the method I use (from that link):
```
Declare @Ids varchar(50)
Set @Ids = '1,2,3,5,4,6,7,98,234'
DECLARE @XML XML
SET @XML = CAST('<i>' + REPLACE(@Ids, ',', '</i><i>') + '</i>' AS XML)
SELECT *
FROM
SomeTable
INNER JOIN @XML.nodes('i') x(i)
ON SomeTable .Id = x.i.value('.', 'VARCHAR(MAX)')
```
|
How to convert nvarchar to int inside an IN filter on a stored procedure
|
[
"",
"sql",
"sql-server-2008",
"stored-procedures",
""
] |
I'm trying to use a `varchar` string declared as a variable:
```
DECLARE @fsPath AS varchar = 'D:\win-develop\FSCat';
```
But in usage:
```
ALTER DATABASE MyDb
ADD FILE
( NAME = 'FSCatalog', FILENAME = @fsPath
)
TO FILEGROUP MyDbFSGroup;
GO
```
I get this error:
```
Incorrect syntax near '@fsPath'.
```
What am I doing wrong?
|
You are not specifying the length and the default length is "1" in this context. So:
```
DECLARE @fsPath AS varchar(max) = 'D:\win-develop\FSCat';
```
However, that is only one problem. You are not permitted to use variables in `alter database`. SQL Server documentation is rather convoluted on this topic, but you can always use dynamic SQL:
```
declare @sql nvarchar(max) = '
ALTER DATABASE MyDb
ADD FILE( NAME = ''FSCatalog'', FILENAME = ''@fsPath'')
TO FILEGROUP MyDbFSGroup';
select @sql = replace(@sql, '@fspath', @fsPath);
exec sp_executesql @sql;
```
|
You need to use dynamic SQL in-order to achieve this.
We cannot pass variables to ALTER
Check out [ALTER DATABASE / ADD FILE / VARIABLE FILENAME?](https://stackoverflow.com/questions/5590448/alter-database-add-file-variable-filename)
|
How to use varchar variable in SQL?
|
[
"",
"sql",
"sql-server",
"sql-server-2014",
""
] |
I have a table with 10 rows with ids 1-10. I want to delete rows 5 and 8, and I want the IDs to be updated so that they are 1-8 and not 1-4, 6-7 and 9-10. I don't want to run a massive amount of update statements or do it by hand. Also, I need a solution that works for both SQL Server and Oracle databases. I think something like a procedure could do this, but I have no idea how to create one or how to go about it.
|
You should include what you have tried; anyway, you can run this query repeatedly until no more updates happen (if you care about performance, it is better to order by id and use the row number as the other answer does):
```
UPDATE myTable t SET id = id - 1 WHERE id <> 1 AND
NOT EXISTS (SELECT * FROM myTable s WHERE s.id = t.id - 1)
```
|
You really should not do that, as Gordon Linoff pointed out. If you still would do that, here is an `UPDATE`:
```
update t_table t set id = (
select newid from (
select id, row_number() over (order by id) newid from t_table
) x
where x.id = t.id
)
```
The syntax should probably work in both Oracle and MSSQL (only tested in Oracle), as long as they permit to update a primary key at all.
Still, I would suggest against using this technique. If you want a consecutive integer column, use `row_number()` as it is used in the query above to generate one on the fly.
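The `row_number()` idea simply maps each surviving id to its 1-based position; a quick Python sketch of that mapping (the list of ids is illustrative, matching the question's delete-5-and-8 example):

```python
# After deleting rows 5 and 8, remap surviving ids to a gap-free sequence,
# the same way row_number() over (order by id) does.
surviving = [1, 2, 3, 4, 6, 7, 9, 10]
remap = {old: new for new, old in enumerate(sorted(surviving), start=1)}
print(remap)  # {1: 1, 2: 2, 3: 3, 4: 4, 6: 5, 7: 6, 9: 7, 10: 8}
```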
|
Deleting rows from database while keeping id continuity
|
[
"",
"sql",
"sql-server",
"oracle",
""
] |
I have a SQL Server table with a column called `Category`. In my user interface, I have a dropdown list of categories. The user can select a category and click a `Search` button to filter the results by that category. In the dropdown, the first option is blank: if the user wants to see all records from all categories, he can select the blank option.
In my SQL Select I have 2 statements for this
```
IF @Catg IS NULL
Begin
Select *
From Table
End
Else
Begin
Select *
From Table
Where Catg = @Catg
End
```
The `Catg` column in the table will have either a NULL or a category. Is this possible to do in a single SQL statement?
|
You can use an `OR` statement to join the clauses together:
```
Select *
From Table
Where Catg = @Catg OR @Catg IS NULL
```
|
How about
```
SELECT *
FROM yourtable
WHERE (Catg = @Catg) OR (Catg IS NULL)
```
|
Single Select SQL Statement for 2 different Values
|
[
"",
"sql",
"sql-server",
""
] |
```
Table_A
ID Number
-- ------
1 0
2 00
3 0123
4 000000
5 01240
6 000
```
The 'Number' column is data type varchar.
**EDIT for clarity.**
My question is, can I easily pull back all rows of data which contain a variable length string of 0's?
I have tried:
```
SELECT *
FROM Table_A
WHERE LEFT(Number,1) = '0' AND RIGHT(Number,1) = '0'
```
Using the above, it would still return the below, using the example table provided.
```
ID Number
-- ------
1 0
2 00
4 000000
5 01240
6 000
```
I was looking for a function to which I could pass the `LEN(Number)` int, and have it generate a string of a specific character (in my case a string of 0's), but I wasn't able to find anything.
Oh, and I also tried adding a `SUBSTRING` to the `WHERE` clause, but sometimes the `Number` column has a number with 0's in the middle, so it still returned strings containing other digits besides 0.
`SUBSTRING(Number,ROUND(LEN(Number)/2,0),1) = '0'`
Any help is appreciated.
|
So, you want a string that *doesn't* contain anything that *isn't* a `0`? Sounds like it's time for a double-negative:
```
SELECT *
FROM Table_A
WHERE NOT Number like '%[^0]%'
AND number like '0%' --But we do want it to contain at least one zero
```
(The final check is so that we don't match the empty string)
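The double-negative reads more naturally outside SQL: keep strings that are non-empty and contain nothing but `'0'`. A Python sketch over the sample data:

```python
numbers = ['0', '00', '0123', '000000', '01240', '000']

def all_zeros(s):
    # non-empty, and no character other than '0' (the NOT LIKE '%[^0]%' part)
    return len(s) > 0 and set(s) == {'0'}

print([s for s in numbers if all_zeros(s)])  # ['0', '00', '000000', '000']
```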
|
Answer:
```
Where number like '%0%'
```
|
WHERE varchar = variable length of 0's
|
[
"",
"sql",
"sql-server-2008",
"t-sql",
""
] |
I have a simple scenario where I am executing 2 queries:
first, to get an ID from a table,
and then using this ID to access information from a second table.
```
SELECT ID
FROM TEST_1
WHERE name = 'Example 1'
Select *
FROM TEST_2
WHERE Parent_ID = %ID retrived from the above query%
```
|
Use `IN` (if subquery returns more than 1 ID) or `=` (if subquery returns only 1 ID):
```
Select *
FROM TEST_2
WHERE Parent_ID = (SELECT ID FROM TEST_1 WHERE name = 'Example 1')
```
|
You could have something like this: just place your 1st query in the WHERE clause:
```
Select *
FROM TEST_2
WHERE Parent_ID = (SELECT ID
FROM TEST_1
WHERE name = 'Example 1')
```
|
Merging 2 simple queries to one query
|
[
"",
"sql",
"sql-server-2008",
""
] |
Assume I have the following models in my web Application:
```
Table User:
(attr1, attr2, attr3, ...)
Table Prize:
(condition1, condition2, condition3, prize_val)
```
The application logic is: if a user satisfies all the conditions of a Prize, I want to grant him the prize. The condition could either be NULL (True for all users) or a specific value. Every condition can be computed with the user attributes. I can do the filtering in 2 ways:
1. Get all the prize rules from database (at most 100), and iterate the rules in my application code, checking if the current user satisfies the rule, to get a prize list.
2. User SQL to do the filtering like this:
`SELECT prize from Prize where (condition1=NULL or condition1=user_condition1) and (condition2=NULL or condition2=user_condition2) ...`
My question is: which one is more efficient?
And a more general question is: when is it better to do filtering in application code, instead of SQL?
PS. The reason I even think about iterations in code is this: If I iterate in code, and condition1 is NULL for a prize, I don't need to compute the condition1 value for the user (this computation can be expensive); But If I take the SQL approach, I have to pre-compute every condition value for the user.
|
1. You have a matrix of conditions, with a prize in each row.
2. The condition values can change over time, along with the prize value.
So it is advisable to keep them in the database.
Data should be in the database and logic should be in code.
In your case the conditions provide data, which changes, but the logic remains constant.
Hope I'm clear.
|
Rule of thumb: a `SQL query` is almost always more efficient than `iterations` in the code.
About filtering - when you filter in `SQL`, less data is returned than if you filter in the app. Also, I think filtering in the `query` is faster than filtering in the `code`.
|
Should I put this filtering in SQL or my application code?
|
[
"",
"sql",
"database",
"web",
""
] |
Lets say that at some point after several `SELECT`s and `JOIN`s, I have a following result:
```
Col1 Col2
+------+------+
| 1 | 2 |
| 1 | 3 |
| 4 | 5 |
| 4 | 6 |
| 5 | 6 |
| 6 | 7 |
+------+------+
```
Now I want to find "inconsistencies", i.e. to display rows that have the same values in `Col1` but different values in `Col2`. If there is just one occurrence of a value in `Col1`, then I don't want to display it at all.
What I've tried:
```
select Col1, Col2
from Table1
group by Col1
having (count(*) > 1)
```
It returns
```
Col1 Col2
+------+------+
| 1 | 2 |
| 4 | 5 |
+------+------+
```
Expected result:
```
Col1 Col2
+------+------+
| 1 | 2 |
| 1 | 3 |
| 4 | 5 |
| 4 | 6 |
+------+------+
```
|
```
select t1.* from
(select col1,count(*)
from table1
group by col1
having count(*) > 1) t2
join table1 t1 on t1.col1 = t2.col1
```
You should use an outer query after you find the counts which are > 1.
SQL Fiddle: <http://sqlfiddle.com/#!9/211a9e/6>
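The same keep-rows-whose-key-repeats logic, sketched in Python against the sample data:

```python
from collections import Counter

rows = [(1, 2), (1, 3), (4, 5), (4, 6), (5, 6), (6, 7)]  # (Col1, Col2)
counts = Counter(c1 for c1, _ in rows)          # the GROUP BY ... HAVING step
result = [r for r in rows if counts[r[0]] > 1]  # the join back / IN step
print(result)  # [(1, 2), (1, 3), (4, 5), (4, 6)]
```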
|
```
SELECT * FROM Table1 WHERE Col1 IN (
SELECT Col1 FROM Table1 GROUP BY Col1 HAVING COUNT(Col1) > 1
);
```
<http://sqlfiddle.com/#!9/8de26/2>
|
Finding inconsistency in the table
|
[
"",
"mysql",
"sql",
""
] |
I need to write a query to retrieve values that respect the following condition,
> Select Users from UserTable that I am (me@gmail.com) not friends with
Here are my table formats:
[](https://i.stack.imgur.com/5n4BU.jpg)
My email is stated at UserID and my friends are stated in the FriendID column. I need to select the users from the UserTable that are not listed as friends in the FriendsTable. Seems like a simple query but I couldn't figure it out. Here's what I tried:
**P.S:** I just wrote this for clarification and therefore I **don't** want to write a parameterized query. I am not planning on distributing this.
```
SELECT * From UserTable WHERE Email NOT LIKE '% (Select FriendsTable.FriendID From FriendsTable Where FriendsTable.UserID='me@gmail.com') %'
```
**EDIT**
jpw's query worked. However, how do I retrieve one random row from his solution's query?
This doesn't work:
```
select TOP 1 * from UserTable where UserTable.Email <> '" + email + "' and Email not in (select case when FriendsTable.UserID = '" + email + "' then FriendsTable.FriendID else UserID end from FriendsTable where '" + email + "' in (UserID, FriendID)); ORDER BY NEWID()
```
|
If the friend relationship can go both ways and you want to exclude not only rows where email = 'me@gmail.com' but also rows where FriendID = 'me@gmail.com', that is, both rows below
```
UserID FriendID
me@gmail.com ddaabb@gmail.com
kk@gmail.com me@gmail.com
```
then this query will do that:
```
select *
from userTable
where Email <> 'me@gmail.com'
and Email not in (
select
case
when UserID = 'me@gmail.com'
then FriendID else UserID
end
from FriendsTable
where 'me@gmail.com' in (UserID, FriendID)
);
```
Even if you only want to exclude those users that occur in the FriendID column this will still work, although there are better ways in that case.
With your sample data the result would be:
```
kk@gmail.com
yybb@gmail.com
```
Or you do this:
```
select * from
UserTable u
where u.Email <> 'me@gmail.com' and
not exists (
select 1 from FriendsTable
where FriendID = u.Email
);
```
|
You can use `NOT EXISTS` to check that the `Email` is not present in the `FriendsTable`, like below:
```
select * from
UserTable ut
where not exists (
select 1 from FriendsTable
where UserID != ut.Email)
and ut.Email = 'me@gmail.com';
```
Another solution is to use a `LEFT JOIN` and choose the `NULL` like
```
select ut.*
from UserTable ut
left join FriendsTable ft on ut.Email = ft.UserID
where ft.UserID is null and ut.Email = 'me@gmail.com';
```
|
Writing SQL Query to retrieve data via a Subquery?
|
[
"",
"sql",
"azure-sql-database",
""
] |
I'm using Postgres 9.4 and want to do something like this:
```
movement_id|counter|standardized_output
---------------------------------------
1 | 3| 10
1 | 3| 12
1 | 5| 10
2 | 4| 5
```
I have the following query:
```
SELECT movement_id, counter, MAX(standardized_output) AS standardized_output
FROM "outputs"
WHERE "outputs"."user_id" = 1 AND "outputs"."movement_id" IN (1,2) AND (counter in (1,3,5))
GROUP BY movement_id, counter
```
Which gives me:
```
movement_id|counter|standardized_output
---------------------------------------
1 | 3| 12
1 | 5| 10
```
But what I want to find is what the MAX(standardized\_output) is for `counter >= (1,3,5)`. So the following result:
```
movement_id|counter|standardized_output
---------------------------------------
1 | 1| 12 (MAX value where movement_id is 1 and counter is >=1)
1 | 3| 12 (MAX value where movement_id is 1 and counter is >=3)
1 | 5| 10 (MAX value where movement_id is 1 and counter is >=5)
2 | 1| 5 (MAX value where movement_id is 2 and counter is >=1)
2 | 3| 5 (MAX value where movement_id is 2 and counter is >=3)
2 | 5| null (MAX value where movement_id is 2 and counter is >=5)
```
(small edit: movement\_id is IN, not =)
|
As you want results for rows that don't have any values you first need to create a set consisting of the rows that should be there, in this case the cartesian product of `{movement_id} X {1,3,5}`. To do this we can use a cross join and the table value constructor, and then it's just a case of using a left join and a subquery to get the max values.
I'm sure this query can be improved, but it should work.
```
select
all_values.movement_id,
all_values.num,
(
select max(standardized_output)
from outputs
where counter >= all_values.num
and movement_id = all_values.movement_id
) as standardized_output
from (
select movement_id, t.num
from outputs
cross join (values (1), (3), (5)) as t(num)
where "movement_id" in (1 ,2)
-- and "outputs"."user_id" = 1 --this was missing in your sample so I left it commented out.
) all_values
left join outputs o on all_values.movement_id = o.movement_id
and (counter in (all_values.num))
group by all_values.movement_id, all_values.num
order by all_values.movement_id, all_values.num;
```
[Sample SQL Fiddle](http://www.sqlfiddle.com/#!15/1a56e/6)
Given your sample data the result from the query above is:
```
| movement_id | num | standardized_output |
|-------------|-----|---------------------|
| 1 | 1 | 12 |
| 1 | 3 | 12 |
| 1 | 5 | 10 |
| 2 | 1 | 5 |
| 2 | 3 | 5 |
| 2 | 5 | (null) |
```
Edit: the same result can be achieved using this query:
```
select
o1.movement_id,
t.num as counter,
max(o2.standardized_output) as standardized_output
from outputs o1 cross join (values (1), (3), (5)) as t(num)
left join outputs o2 on o1.movement_id = o2.movement_id and t.num <= o2.counter
where o1.movement_id in (1,2)
group by o1.movement_id, t.num
order by o1.movement_id, t.num;
```
[Sample fiddle](http://www.sqlfiddle.com/#!15/e7ffa/4)
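The cross join builds every (movement_id, threshold) pair; the correlated max then scans the rows at or above each threshold. A rough Python sketch of that logic over the sample data:

```python
rows = [(1, 3, 10), (1, 3, 12), (1, 5, 10), (2, 4, 5)]  # (movement_id, counter, output)
thresholds = [1, 3, 5]

result = []
for mid in [1, 2]:
    for t in thresholds:
        # all outputs for this movement with counter >= threshold
        vals = [o for m, c, o in rows if m == mid and c >= t]
        result.append((mid, t, max(vals) if vals else None))
print(result)
```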
|
You could probably put a `CASE` statement on the counter field?
```
CASE WHEN counter >= 5 THEN 5
WHEN counter >=3 THEN 3
WHEN counter >=1 THEN 1
ELSE 0 END
```
Then add it to the select and the group by clause.
```
SELECT movement_id,
(CASE WHEN counter >= 5 THEN 5
WHEN counter >=3 THEN 3
WHEN counter >=1 THEN 1
ELSE 0 END) as new_counter,
MAX(standardized_output) AS standardized_output
FROM "outputs"
WHERE "outputs"."user_id" = 1 AND "outputs"."movement_id" = 1
GROUP BY movement_id,
(CASE WHEN counter >= 5 THEN 5
WHEN counter >=3 THEN 3
WHEN counter >=1 THEN 1
ELSE 0 END)
```
|
SQL greater than IN
|
[
"",
"sql",
"postgresql",
""
] |
* `Table1` -> Id, CountryFk, CompanyName
* `Table2` -> Id, CountryFk, CompanyName, Website
I need to merge `Table1` and `Table2` into 1 master table. I know this can be done by something like the below query -
```
INSERT INTO masterTable(Id, CountryFk, CompanyName)
SELECT * FROM Table1
UNION
SELECT * FROM Table2;
```
But, I have an extra column, `website` in `table2` which isn't there in `table1`. I need this column in `masterTable`.
And more importantly, `Table1` and `Table2` have repeating companies with the same `countryFK`. For example, `IBM` with `countryFK=123` could be present twice in `Table1`. And `Table1` could have a `companyName` that is present in `Table2`.
For example, `IBM` with `countryFk = 123` could be present in both `table1` and `table2`. I need to make sure that the `masterTable` does not have any duplicate companies.
Please note that the companyname by itself need not be unique. `masterTable` *can* have `IBM` with `countryFK = 123` and `IBM` with `countryFk = 321`.
`masterTable` **cannot** have `IBM` with `countryFk=123` twice.
|
IMHO, if you need to ensure that the combination of `CompanyName` and `CountryFk` is not duplicated in MasterTable, you should add a unique index on those columns.
The query below selects all distinct values from Table1 and Table2, and inserts them with an existence check on both `CompanyName` and `CountryFk`.
```
-- Id is identity, no need to insert value
INSERT MasterTable (CountryFk, CompanyName, WebSite)
SELECT
CountryFk,
CompanyName,
(
SELECT TOP(1) WebSite FROM Table2
WHERE CompanyName = data.CompanyName
AND CountryFk = data.CountryFk
AND WebSite IS NOT NULL
) AS WebSite
FROM
(
SELECT CountryFk, CompanyName FROM Table1
UNION
SELECT CountryFk, CompanyName FROM Table2
) data
WHERE
NOT EXISTS
(
SELECT * FROM MasterTable
WHERE CompanyName = data.CompanyName AND CountryFk = data.CountryFk
)
GROUP BY
CountryFk,
CompanyName
```
|
This may work
```
INSERT INTO masterTable(Id, CountryFk, CompanyName,Website)
SELECT Id, CountryFk, CompanyName, NULL as Website FROM Table1
UNION
SELECT Id, CountryFk, CompanyName,Website FROM Table2;
```
|
Inserting data from two similar tables into one master table in Sql Server
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2008-r2",
""
] |
My data looks like this:
```
Col1 Col2 output
09:35 16:00 6,25 <-- need help with this
```
I would like to have the output show `H,m` (hours, minutes).
```
Datediff(Hours,Col1,Col2)
```
gives me 7.
I don't want to create any parameters if possible; I'd rather only use some simple functions.
|
I think I would just do this explicitly, by taking the difference in minutes and doing the numeric calculations:
```
select (cast(datediff(minute, col1, col2) / 60 as varchar(255)) + ',' +
right('00' + cast(datediff(minute, col1, col2) % 60 as varchar(255)), 2)
)
```
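The minute arithmetic is just integer division and modulo; a quick Python check of the 09:35 to 16:00 example:

```python
# 09:35 to 16:00 is 385 minutes; split into hours and leftover minutes.
minutes = (16 * 60 + 0) - (9 * 60 + 35)
hours, mins = divmod(minutes, 60)
print(f"{hours},{mins:02d}")  # 6,25
```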
|
What about getting the date diff in minutes and converting the result to the string you want:
```
SELECT CONCAT(DATEDIFF(MINUTE, '09:35', '16:00') / 60, ',', DATEDIFF(MINUTE, '09:35', '16:00') % 60 );
```
|
Datediff between hours
|
[
"",
"sql",
"sql-server",
"t-sql",
"datediff",
""
] |
I need help in creating a SQL query for my report.
I need to extract data which will show how many managers in my company have how many employees, for example:
2 Managers have 4 directs each.
Like this:
```
Managers Employees
1 (number of managers not ID) 2
2 4
5 10
8 12
```
Table structure in the Access database:
Emplid, Last Name, First Name, Supervisorid, HRStatus
The Emplid column also contains the supervisor IDs, because supervisors are employees too.
I have tried to create such a report in Excel but failed :(
I jumped to VBA but got stuck on the SQL query.
Here is the code:
```
Private Sub SWE_RAPORT()
Dim db As ADODB.connection
Dim rs As ADODB.Recordset
Dim SQL As String, dbPath As String, conStr As String
'CHECK WHERE IS THE DB?
With Application.FileDialog(msoFileDialogFilePicker)
.AllowMultiSelect = False
.Title = "WHERE IS THE DB?"
.Show
dbPath = .SelectedItems(.SelectedItems.Count)
End With
'CONNECT TO DB
Set db = New ADODB.connection
With db
.Provider = "Microsoft.ACE.OLEDB.12.0"
.connectionString = "Data Source=" & dbPath
.Mode = adModeRead
.Open
End With
'SQL QUERY
SQL = "SELECT A.SupervisorID, COUNT(A.Emplid) AS DIRECTS, A.HRStatus FROM (SELECT Emplid, SupervisorID, HRStatus FROM swe GROUP BY Emplid, SupervisorID, HRStatus) AS A WHERE A.HRStatus <> 'Terminated' and A.HRStatus <> 'Deceased' GROUP BY A.SupervisorID, A.HRStatus;"
'CONNECT TO RS
Set rs = New ADODB.Recordset
rs.Open SQL, db, adOpenStatic, adLockOptimistic
'LOAD DATA IN TO THE ACTIVE SHEET
With ActiveWorkbook.Worksheets("Dane")
.Cells.Clear
.Range("A1").CopyFromRecordset rs
End With
'CLOSE DB AND RECORDSET
On Error Resume Next
Set rs = Nothing
rs.Close
Set db = Nothing
db.Close
End Sub
```
Thank you for your help,
MIREK
|
This should work for you:
```
select count(b.SupervisorID) Managers,
b.directs Employees
from (SELECT A.SupervisorID,
COUNT(A.Emplid) AS DIRECTS
FROM swe A
WHERE A.HRStatus <> 'Terminated'
AND A.HRStatus <> 'Deceased'
GROUP BY A.SupervisorID) b
group by b.directs;
```
|
Try this on your query:
```
SELECT A.SupervisorID,
COUNT(A.Emplid) AS DIRECTS,
A.HRStatus
FROM swe as A
WHERE A.HRStatus <> 'Terminated'
AND A.HRStatus <> 'Deceased'
GROUP BY A.SupervisorID,
A.Emplid,
A.HRStatus
```
|
Managers report
|
[
"",
"sql",
"excel",
"ms-access",
"vba",
""
] |
I have a MySQL table with the following structure and data:
```
Increments
id emp_id starting_salary increment_rate increment_frequency
2 340 5000 250 1
3 340 5000 250 4
```
I need to have aliases, `a` and `b` which will hold some value based on the following formula:
```
starting_salary + (increment_rate * increment_frequency)
```
To be precise, I want `a = 5250` (based on `a = (5000 + (250 * 1))`) and `b = 6000` (based on `b = (5000 + (250 * 4))`)
Now I have another table with the following data:
```
PaySlips
id employee_id salary_month arrear
173824 340 '2015-06-01' 2350
```
I want to join `a` and `b` that I got from the table `Increments` with table `PaySlips`. And I want to use `a` and `b` in the following way:
```
((a * 8) / 30 + (b * 22) / 30)
```
My alias will be `basic_salary`. So `basic_salary` will hold this value from the above calculation:
```
basic_salary = ((a * 8) / 30 + (b * 22) / 30)
= ((5250 * 8) / 30 + (6000 *22) / 30)
= (1400 + 4400)
= 5800
```
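For reference, the arithmetic above checks out; a quick Python verification:

```python
a = 5000 + 250 * 1   # 5250, from Increments row id = 2
b = 5000 + 250 * 4   # 6000, from Increments row id = 3
basic_salary = (a * 8) / 30 + (b * 22) / 30
print(basic_salary)  # 5800.0
```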
I've got no idea how to do this. Can anyone please help me?
All I got are the common columns in both tables - `emp_id` and `employee_id` and I can join both tables using these columns. I just can't figure out how I can store the above values and organize the calculation inside my query.
Sample query:
```
SELECT x.id, x.employee_id,
(*my calculation using a and b from table Increments*) AS basic_salary,
x.salary_month, x.arrear
FROM PaySlips x
JOIN Increments y
ON x.employee_id = y.emp_id
```
For determining `a`:
```
SELECT
(
starting_salary +
(increment_rate * increment_frequency)
) AS a
FROM Increments
WHERE id = 2
```
And for determining `b`:
```
SELECT
(
starting_salary +
(increment_rate * increment_frequency)
) AS b
FROM Increments
WHERE id = 3
```
MySQL version: 5.2
|
I'm not clear on all the details, for example what should happen if there are three rows for one employee in increments? Anyhow, here's a sketch to start with:
```
select employee_id
, ((a * 8) / 30 + (b * 22) / 30) as basic_salary
from (
select x.employee_id
, min(starting_salary + (increment_rate * increment_frequency)) as a
, max(starting_salary + (increment_rate * increment_frequency)) as b
, x.salary_month, x.arrear
from payslips x
join increments y
on x.employee_id = y.emp_id
group by x.employee_id, x.salary_month, x.arrear
) as t
```
If id 2 and 3 are the criteria (I assumed they are examples) you can use a case statement like:
```
select employee_id
, ((a * 8) / 30 + (b * 22) / 30) as basic_salary
from (
select x.employee_id
, max(starting_salary + (increment_rate * case when y.id = 2 then increment_frequency end )) as a
, max(starting_salary + (increment_rate * case when y.id = 3 then increment_frequency end)) as b
, x.salary_month
, x.arrear
from payslips x
join increments y
on x.employee_id = y.emp_id
group by x.employee_id, x.salary_month, x.arrear
) as t;
```
In this case it does not matter what aggregate you use, it is to get rid of the row that contains null.
|
Based on the requirements you added, I think something like this will solve your problem:
```
SELECT PS.id, PS.employee_id, ((A.value * 8) / 30 + (B.value * 22) / 30) AS basic_salary
FROM PaySlips AS PS
JOIN (
SELECT I.emp_id, I.starting_salary + (increment_rate * increment_frequency) AS VALUE
FROM Increments AS I
WHERE I.id = 2
) AS A
ON A.emp_id = PS.employee_id
JOIN (
SELECT I.emp_id, I.starting_salary + (increment_rate * increment_frequency) AS value
FROM Increments AS I
WHERE I.id = 3
) AS B
ON B.emp_id = PS.employee_id
```
This version might need some alteration if it's not working in your real scenario, but please feel free to let me know if anything else needs amending.
Hope it helps.
|
How to use value from two different rows of a table in another table
|
[
"",
"mysql",
"sql",
""
] |
This is my table:
```
--------------
Names
--------------
"Ben 52"
"Dan 236"
"Carter 73"
"Harry"
"Peter 53"
"Connor 27"
"Morgan"
"Richard 675"
```
I want to create a query that puts the rows without numbers at the end of the list. This is basically the idea of what I'm trying to create:
`SELECT names FROM table1 ORDER BY ContainsNum(names) DESC`
This is the result I should get:
```
--------------
Names
--------------
"Ben 52"
"Dan 236"
"Carter 73"
"Peter 53"
"Connor 27"
"Richard 675"
"Harry"
"Morgan"
```
Please comment if I wasn't clear enough.
I haven't seen anyone ask this question before.
Thank you!
|
Just use `order by`. Most databases support some form of regular expression, something like this:
```
order by (case when names regexp '.*[0-9].*' then 0 else 1 end),
names
```
The exact syntax varies depending on the database.
Note: it works with just `names regexp '[0-9]'`. I added the wildcards because I think it makes the pattern more intuitive.
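The same ordering can be sketched in Python: sort with a key that sends digit-free names last (Python's sort is stable, so names containing digits keep their relative order):

```python
names = ["Ben 52", "Dan 236", "Carter 73", "Harry",
         "Peter 53", "Connor 27", "Morgan", "Richard 675"]

# False (0) sorts before True (1), so names with a digit come first
ordered = sorted(names, key=lambda s: not any(ch.isdigit() for ch in s))
print(ordered)
```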
|
`SELECT names FROM table1 ORDER BY SUBSTRING(names, LENGTH(names)-2, 1) DESC`
Everything which ends with a number will be below anything which ends with a character.
|
MySQL select order by contains num
|
[
"",
"mysql",
"sql",
""
] |
I need some help figuring out how best to transform an array into a row-vector. My array looks like this:
```
+-----+-------+----------+
| ID | Grade | Quantity |
+-----+-------+----------+
| Ape | Water | Y |
| Ape | Juice | Y |
| Ape | Milk | Y |
+-----+-------+----------+
```
Each ID can have up to 4 rows distinguished only by grade (Water, Juice, Beer, Milk); the list of possible values is static.
My desired output is this:
```
+-----+----------+-------+-------+------+------+
| ID | Quantity | Water | Juice | Beer | Milk |
+-----+----------+-------+-------+------+------+
| Ape | Y | 1 | 1 | 0 | 1 |
+-----+----------+-------+-------+------+------+
```
My own efforts have carried me as far as the PIVOT-operator, which transforms Grade-values into columns, but it doesn't group the rows by ID, leaving me with an equal number of rows post-transformation.
```
SELECT ID, Quantity, Water, Juice, Beer, Milk
FROM
(SELECT ID, Grade, Quantity FROM Feeding WHERE ID = 'Ape') src
PIVOT(
COUNT(Quantity) FOR [Grade] IN (ID, Quantity, Water, Juice, Beer, Milk)
)AS TransformData
```
Output:
```
+-----+----------+-------+-------+------+------+
| ID | Quantity | Water | Juice | Beer | Milk |
+-----+----------+-------+-------+------+------+
| Ape | Y | 1 | 0 | 0 | 0 |
| Ape | Y | 0 | 1 | 0 | 0 |
| Ape | Y | 0 | 0 | 0 | 1 |
+-----+----------+-------+-------+------+------+
```
Any suggestions?
|
How about:
```
;WITH Feeding(id,grade,quantity) as (
select 'Ape','Water','Y' union all
select 'Ape','Juice','Y' union all
select 'Ape','Juice','Y' union all
select 'Ape','Juice','Y' union all
select 'Ape','Juice','Y' union all
select 'Ape','Milk', 'N'
)
SELECT * FROM
(SELECT ID, Grade, Quantity agg, Quantity FROM Feeding WHERE ID = 'Ape') src
PIVOT ( COUNT(agg) FOR [Grade] IN (Water, Juice, Beer, Milk) ) AS TransformData
```
--
```
ID Quantity Water Juice Beer Milk
Ape N 0 0 0 1
Ape Y 1 4 0 0
```
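What the PIVOT does here is count rows per grade within each group; a rough Python sketch of the same counting, using the question's original three rows:

```python
from collections import Counter

rows = [("Ape", "Water", "Y"), ("Ape", "Juice", "Y"), ("Ape", "Milk", "Y")]
grades = ["Water", "Juice", "Beer", "Milk"]  # the static IN (...) column list

counts = Counter(grade for rid, grade, qty in rows if rid == "Ape")
pivoted = {g: counts.get(g, 0) for g in grades}  # absent grades become 0
print(pivoted)  # {'Water': 1, 'Juice': 1, 'Beer': 0, 'Milk': 1}
```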
|
You can try the following query:
```
SELECT ID, Quantity, CASE WHEN Grade = 'WATER' THEN 1 ELSE 0 END AS WATER,
CASE WHEN Grade = 'JUICE' THEN 1 ELSE 0 END AS JUICE,
CASE WHEN Grade = 'BEER' THEN 1 ELSE 0 END AS BEER,
CASE WHEN Grade = 'MILK' THEN 1 ELSE 0 END AS MILK
FROM YOUR_TABLE;
```
|
Transforming multiple rows into columns w. single row
|
[
"",
"sql",
"sql-server-2008",
"t-sql",
""
] |
I need to run a simple select statement across all the databases on the same server. I have around 30-40 databases. This table has the same structure in all the databases.
```
select * from table1 where condition
```
Can you please let me know how to get the records from all the databases? BTW, I'm using SQL Server.
|
I would use a cursor for this, `sp_msforeachdB` has a bad reputation for skipping databases.
A solution with a cursor would look something like this:
```
DECLARE @DB_Name SYSNAME;
DECLARE @Sql NVARCHAR(MAX);
DECLARE Cur CURSOR LOCAL FAST_FORWARD FOR
SELECT Name
FROM sys.databases
WHERE name NOT IN ('master', 'tempdb','model','msdb')
OPEN Cur
FETCH NEXT FROM Cur INTO @DB_Name
WHILE (@@FETCH_STATUS = 0)
BEGIN
SET @Sql = N' select * from '+QUOTENAME(@DB_Name)
            + N'.[schemaName].table1 where condition'
Exec sp_executesql @Sql
FETCH NEXT FROM Cur INTO @DB_Name
END
CLOSE Cur;
DEALLOCATE Cur;
```
|
```
Exec sp_msforeachdB 'select top 5 cola from dbo.tablea'
```
|
How to run single select statement across all the databases in the same server
|
[
"",
"sql",
"sql-server",
"database",
""
] |
I have a table:
```
Trip Stop Time
-----------------
1 A 1:10
1 B 1:16
1 B 1:20
1 B 1:25
1 C 1:31
1 B 1:40
2 A 2:10
2 B 2:17
2 C 2:20
2 B 2:25
```
I want to add one more column to my query output:
```
Trip Stop Time Sequence
-------------------------
1 A 1:10 1
1 B 1:16 2
1 B 1:20 2
1 B 1:25 2
1 C 1:31 3
1 B 1:40 4
2 A 2:10 1
2 B 2:17 2
2 C 2:20 3
2 B 2:25 4
```
The hard part is B: if rows with B are next to each other, I want them to have the same sequence number; if not, each counts as a new step.
I know
```
row_number over (partition by trip order by time)
row_number over (partition by trip, stop order by time)
```
Neither of them meets the condition I want. Is there a way to query this?
|
```
create table test
(trip number
,stp varchar2(1)
,tm varchar2(10)
,seq number);
insert into test values (1, 'A', '1:10', 1);
insert into test values (1, 'B', '1:16', 2);
insert into test values (1, 'B', '1:20', 2);
insert into test values (1 , 'B', '1:25', 2);
insert into test values (1 , 'C', '1:31', 3);
insert into test values (1, 'B', '1:40', 4);
insert into test values (2, 'A', '2:10', 1);
insert into test values (2, 'B', '2:17', 2);
insert into test values (2, 'C', '2:20', 3);
insert into test values (2, 'B', '2:25', 4);
select t1.*
,sum(decode(t1.stp,t1.prev_stp,0,1)) over (partition by trip order by tm) new_seq
from
(select t.*
,lag(stp) over (order by t.tm) prev_stp
from test t
order by tm) t1
;
TRIP S TM SEQ P NEW_SEQ
------ - ---------- ---------- - ----------
1 A 1:10 1 1
1 B 1:16 2 A 2
1 B 1:20 2 B 2
1 B 1:25 2 B 2
1 C 1:31 3 B 3
1 B 1:40 4 C 4
2 A 2:10 1 B 1
2 B 2:17 2 A 2
2 C 2:20 3 B 3
2 B 2:25 4 C 4
10 rows selected
```
You want to see if the stop changes between one row and the next. If it does, you want to increment the sequence. So use lag to get the previous stop into the current row.
I used DECODE because of the way it handles NULLs and it is more concise than CASE, but if you are following the text book, you should probably use CASE.
Using SUM as an analytic function with an ORDER BY clause will give the answer you are looking for.
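The lag-plus-running-sum idea translates directly: emit 1 whenever the stop differs from the previous row, 0 otherwise, and keep a cumulative sum per trip. A Python sketch for trip 1's stops:

```python
stops = ["A", "B", "B", "B", "C", "B"]  # trip 1, in time order

seq, prev, out = 0, None, []
for stop in stops:
    if stop != prev:      # the decode(stp, prev_stp, 0, 1) step
        seq += 1          # the running SUM ... OVER (ORDER BY tm)
    out.append(seq)
    prev = stop
print(out)  # [1, 2, 2, 2, 3, 4]
```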
|
```
select *, dense_rank() over(partition by trip, stop order by time) as sqnc
from yourtable;
```
Use `dense_rank` so you get all the numbers consecutively, with no skipped numbers in between.
|
ROW_NUMBER query
|
[
"",
"sql",
"oracle",
"window-functions",
""
] |
I'm using SQL Server 2008. I have this data returned in a query that looks pretty much like this ordered by Day and ManualOrder...
```
ID Day ManualOrder Lat Lon
1 Mon 0 36.55 36.55
5 Mon 1 55.55 54.44
3 Mon 2 44.33 44.30
10 Mon 3 36.55 36.55
11 Mon 4 36.55 36.55
6 Mon 5 20.22 22.11
9 Mon 6 55.55 54.44
10 Mon 7 88.99 11.22
77 Sun 0 23.33 11.11
77 Sun 1 23.33 11.11
```
What I'm trying to do is get this data ordered by Day, then ManualOrder...but I'd like a row counter (let's call it MapPinNumber). The catch is that I'd like this row counter to be repeated once it encounters the same Lat/Lon for the same day again. Then it can continue on with the next row counter for the next row if it's a different lat/lon. We MUST maintain Day, ManualOrder ordering in the final result.
I'll be plotting these on a map, and this number should represent the pin number I'll be plotting in ManualOrder order. This data represents a driver's route and he may go to the same lat/lon multiple times during the day in his schedule. For example he drives to Walmart, then CVS, then back to Walmart again, then to Walgreens. The MapPinNumber column I need should be 1, 2, 1, 3. Since he goes to Walmart multiple times on Monday but it was the first place he drives too, it's always Pin #1 on the map.
Here's what I need my result to be for the MapPinNumber column I need to calculate. I've tried everything I can think of with ROW\_NUMBER and RANK, and I'm going insane! I'm trying to avoid using an ugly CURSOR.
```
ID Day ManualOrder Lat Lon MapPinNumber
1 Mon 0 36.55 36.55 1
5 Mon 1 55.55 54.44 2
3 Mon 2 44.33 44.30 3
10 Mon 3 36.55 36.55 1
11 Mon 4 36.55 36.55 1
6 Mon 5 20.22 22.11 4
9 Mon 6 55.55 54.44 2
10 Mon 7 88.99 11.22 5
77 Sun 0 23.33 11.11 1
77 Sun 1 23.33 11.11 1
```
|
You can use aggregate function `MIN` with `OVER` to create your ranking groups and `DENSE_RANK` working on top of it like this.
**Brief Explanation**
1. `MIN(ManualOrder)OVER(PARTITION BY Day,Lat,Lon)` gets the minimum `ManualOrder` for a combination of `Day`, `Lat` and `Lon`.
2. `DENSE_RANK()` just sets this value as incremental values from `1`.
**[SQL Fiddle](http://sqlfiddle.com/#!3/558f5/1)**
**Sample Data**
```
CREATE TABLE Tbl ([ID] int, [Day] varchar(3), [ManualOrder] int, [Lat] decimal(5,2), [Lon] decimal(5,2));
INSERT INTO Tbl ([ID], [Day], [ManualOrder], [Lat], [Lon])
VALUES
(1, 'Mon', 0, 36.55, 36.55),
(5, 'Mon', 1, 55.55, 54.44),
(3, 'Mon', 2, 44.33, 44.30),
(10, 'Mon', 3, 36.55, 36.55),
(11, 'Mon', 4, 36.55, 36.55),
(6, 'Mon', 5, 20.22, 22.11),
(9, 'Mon', 6, 55.55, 54.44),
(10, 'Mon', 7, 88.99, 11.22),
(77, 'Sun', 0, 23.33, 11.11),
(77, 'Sun', 1, 23.33, 11.11);
```
**Query**
```
;WITH CTE AS
(
SELECT *,GRP = MIN(ManualOrder)OVER(PARTITION BY Day,Lat,Lon) FROM Tbl
)
SELECT ID,Day,ManualOrder,Lat,Lon,DENSE_RANK()OVER(PARTITION BY Day ORDER BY GRP) AS RN
FROM CTE
ORDER BY Day,ManualOrder
```
**Output**
```
ID Day ManualOrder Lat Lon RN
1 Mon 0 36.55 36.55 1
5 Mon 1 55.55 54.44 2
3 Mon 2 44.33 44.30 3
10 Mon 3 36.55 36.55 1
11 Mon 4 36.55 36.55 1
6 Mon 5 20.22 22.11 4
9 Mon 6 55.55 54.44 2
10 Mon 7 88.99 11.22 5
77 Sun 0 23.33 11.11 1
77 Sun 1 23.33 11.11 1
```
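For what it's worth, the same two-step idea (minimum ManualOrder per Day/Lat/Lon group, then a dense rank over those group keys within each day) can be emulated in plain Python. This is just an illustration of the technique using the sample rows above:

```python
def pin_numbers(rows):
    """rows: (day, manual_order, lat, lon) tuples, ordered by day, manual_order.
    Returns the MapPinNumber (the RN column) for each row."""
    # Step 1 (GRP): minimum ManualOrder per (day, lat, lon) combination
    grp = {}
    for day, order, lat, lon in rows:
        key = (day, lat, lon)
        grp[key] = min(grp.get(key, order), order)
    # Step 2 (DENSE_RANK): rank the distinct GRP values within each day, from 1
    rank = {}
    for day in {r[0] for r in rows}:
        groups = sorted({grp[(d, la, lo)] for d, o, la, lo in rows if d == day})
        rank[day] = {g: i + 1 for i, g in enumerate(groups)}
    return [rank[day][grp[(day, lat, lon)]] for day, order, lat, lon in rows]
```

Applied to the sample data, it yields the same 1, 2, 3, 1, 1, 4, 2, 5 sequence for Monday shown in the output.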
|
Here is my attempt using `ROW_NUMBER`:
[**SQL Fiddle**](http://sqlfiddle.com/#!6/b4dee/1/0)
```
WITH CteRN AS(
SELECT *,
Rn = ROW_NUMBER() OVER(PARTITION BY Day ORDER BY ManualOrder),
Grp = ROW_NUMBER() OVER(PARTITION BY Day, Lat, Lon ORDER BY ManualOrder)
FROM tbl
),
CteBase AS(
SELECT *,
N = ROW_NUMBER() OVER(PARTITION BY Day ORDER BY ManualOrder)
FROM CteRN
WHERE Grp = 1
)
SELECT
r.ID, r.Day, r.ManualOrder, r.Lat, r.Lon,
MapPinNumber = ISNULL(b.N, r.RN)
FROM CteRN r
LEFT JOIN CteBase b
ON b.Day = r.Day
AND b.Lat = r.Lat
AND b.Lon = r.Lon
ORDER BY
r.Day, r.ManualOrder
```
|
SQL Number - Row_Number() - Allow Repeating Row Number
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"row-number",
"dense-rank",
""
] |
I have a stored procedure with an optional `State` parameter: if I pass the state, it should query the table using both parameters; if I am not passing the state parameter, I only need to query the table with the template parameter.
How I can do that?
This is my stored procedure:
```
CREATE PROCEDURE [dbo].[Get_TempName]
@TemType varchar(50),
@State varchar(50) = 'N/A'
AS
BEGIN
SET NOCOUNT ON;
SELECT
[TempType],
[TempName],
[State]
FROM
[Template] WITH (NOLOCK)
WHERE
TempType = @TempType
AND [State] = @State
END
```
|
```
CREATE PROCEDURE [dbo].[Get_TempName]
@TemType varchar(50)
,@State varchar(50) = NULL
AS
BEGIN
SET NOCOUNT ON;
SELECT
[TempType]
,[TempName]
,[State]
FROM [Template] WITH (NOLOCK)
where TempType = @TempType
and ((@State IS NULL) OR ([State] = @State))
END
```
|
You can use an `OR` condition with an `IS NULL` check. Also, pass `NULL` in the `@State` parameter when you don't want to use it in the filter:
```
CREATE PROCEDURE [dbo].[Get_TempName]
@TemType varchar(50)
,@State varchar(50)
AS
BEGIN
SET NOCOUNT ON;
SELECT
[TempType]
,[TempName]
,[State]
FROM [Template] WITH (NOLOCK)
where TempType = @TempType
and ([State] = @State or @state is null)
END
```
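The optional-parameter predicate reduces to: match everything when the parameter is NULL, otherwise compare. A small Python analogue of the same logic, with hypothetical rows:

```python
def filter_templates(rows, temp_type, state=None):
    """rows: (temp_type, temp_name, state) tuples.
    state=None behaves like a NULL @State: the state filter is skipped."""
    return [r for r in rows
            if r[0] == temp_type and (state is None or r[2] == state)]
```

Calling it without `state` returns every row of the given type; passing a state narrows the result, exactly like the `(@State IS NULL OR [State] = @State)` predicate.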
|
Have a optional parameter in stored procedure
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I want an SQL update for rows whose time is 5 minutes ago (or longer than 5 minutes) before now:
```
UPDATE mytable SET status='EXPIRED'
WHERE (a column's time is 5 minutes or longer before now)
```
I tried to use `DATE_ADD(NOW(), INTERVAL 5 MINUTE)` but I had no luck!
|
You can compare `now()` with the row's date column plus 5 minutes:
```
UPDATE mytable
SET status = 'EXPIRED'
WHERE DATE_ADD(date_col, INTERVAL 5 MINUTE) <= NOW()
```
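The comparison itself is easy to sanity-check in plain Python with `datetime` (hypothetical row times; note that a row exactly 5 minutes old is included because of the `<=`):

```python
from datetime import datetime, timedelta

def is_expired(row_time, now, minutes=5):
    """Mirrors DATE_ADD(date_col, INTERVAL 5 MINUTE) <= NOW()."""
    return row_time + timedelta(minutes=minutes) <= now
```

Rows 5 or more minutes old test True; anything newer tests False.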
|
Not sure, but assuming your time column is of type `DATETIME`, you can use the `BETWEEN` operator like below
```
UPDATE mytable SET status='EXPIRED'
WHERE your_time_column BETWEEN DATE_ADD(NOW(), INTERVAL -5 MINUTE) AND DATE_ADD(NOW(), INTERVAL 5 MINUTE)
```
|
Update SQL if row's time is 5 minutes or longer than now
|
[
"",
"mysql",
"sql",
"date",
"time",
"sql-update",
""
] |
I have a list returning from select statement.
For example
```
select type, id
from userlist
```
returns:
```
type id
1 102
2 125
3 156
```
Now I want to assign variable values based on the type. Is it possible in a single statement?
Like
```
SELECT
case when type = 1 then @frontenduser = id
when type = 2 then @backenduser = id
when type = 3 then @newuser = id
end
FROM
(SELECT type, id
FROM userlist) AS tbl1
```
|
Alternative to `pivot`:
```
select
@frontenduser = case type when 1 then id else @frontenduser end,
@backenduser = case type when 2 then id else @backenduser end,
@newuser = case type when 3 then id else @newuser end
from userlist
```
or the same, using `iif` operator (if you use SqlServer 2012 or later):
```
select
@frontenduser = iif(type = 1, id, @frontenduser),
@backenduser = iif(type = 2, id, @backenduser),
@newuser = iif(type = 3, id, @newuser)
from userlist
```
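The per-row fold these queries perform can be emulated in Python with hypothetical data. (One caveat the sketch makes visible: the assignment runs once per row, so if several rows share a type, the last one processed wins; in T-SQL that order is not guaranteed without an ORDER BY.)

```python
def assign_by_type(rows):
    """rows: (type, id) tuples. Keeps the last id seen for each type,
    like the SELECT @var = CASE ... END fold over the result set."""
    frontend = backend = new = None
    for typ, id_ in rows:
        frontend = id_ if typ == 1 else frontend  # CASE type WHEN 1 THEN id ELSE @frontenduser END
        backend = id_ if typ == 2 else backend
        new = id_ if typ == 3 else new
    return frontend, backend, new
```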
|
You can if you pivot the table.
```
select
@frontenduser = [1],
@backenduser = [2],
@newuser = [3]
from
(
select [type], id
from userlist
) As tbl
PIVOT
(
MAX(id)
FOR [type] IN([1],[2],[3])
) As pivotTable
```
|
Assign multiple variable from multiple rows in a select statement in SQL Server
|
[
"",
"sql",
"sql-server",
"t-sql",
"stored-procedures",
""
] |
I have tables:
```
User Table
User_id| Preferences |Blood_Group|City
Events Table
Event_id|Event_type|Blood_Group|Event_city|Event_Addded_By|Description
Commit Table
User_id|Event_id
```
I will send the user\_id. I want a query that returns the events matching the user's preferences, where the blood\_group and city in the events table are the same as those values in the user table, and which the user has not yet committed to in the commit table.
Thank you in advance.
|
```
SELECT [User Table].User_id, [Events Table].*
FROM [Commit Table] INNER JOIN
[User Table] ON [Commit Table].User_id = [User Table].User_id INNER JOIN
[Events Table] ON [Commit Table].Event_id = [Events Table].Event_id
```
|
This might help you:
```
SELECT [Required Columns]
FROM [User Table] UT JOIN [Event Table] ET
ON ET.Bloodgroup = UT.Bloodgroup
AND ET.City = UT.City
WHERE NOT EXISTS (
SELECT 1 FROM [Commit Table] CT
WHERE CT.UserId = UT.UserId
AND ET.EventId = CT.EventId
)
AND UT.UserId = @UserId
```
|
Sql Query or Join for 3 tables
|
[
"",
"mysql",
"sql",
"database",
"stored-procedures",
""
] |
I have a table with multiple `type` values and I'm wanting to get a sample records from some of them.
My current query is as follows:
```
-- Pulling three sample records from each "type"
SELECT * FROM example WHERE type = "A" LIMIT 3
UNION ALL
SELECT * FROM example WHERE type = "B" LIMIT 3
UNION ALL
SELECT * FROM example WHERE type = "C" LIMIT 3
;
```
I expect this to return a total of 9 records; 3 from `type = "A"`, 3 from `type = "B"`, and 3 from `type = "C"`.
However, the result that I actually receive is 3 records from `type = "A"` *and nothing else*.
I know for a fact that the other `type` values exist because I can run the individual `SELECT` statements and they return results.
Why is MySQL only returning 3 records and how can I have it return the full 9 records that I want?
I've created a SQL Fiddle to illustrate the issue: <http://sqlfiddle.com/#!9/d911c/2>
|
Use parentheses:
```
(SELECT * FROM example WHERE type = "A" LIMIT 3)
UNION ALL
(SELECT * FROM example WHERE type = "B" LIMIT 3)
UNION ALL
(SELECT * FROM example WHERE type = "C" LIMIT 3);
```
[**Demo here**](http://sqlfiddle.com/#!9/d911c/3)
This is documented in the [manual](http://dev.mysql.com/doc/refman/5.1/en/union.html):
> To apply ORDER BY or LIMIT to an individual SELECT, place the clause
> inside the parentheses that enclose the SELECT:
>
> (SELECT a FROM t1 WHERE a=10 AND B=1 ORDER BY a LIMIT 10) UNION
> (SELECT a FROM t2 WHERE a=11 AND B=2 ORDER BY a LIMIT 10);
|
Put parentheses around the selects
```
(SELECT * FROM example WHERE type = "A" LIMIT 3)
UNION ALL
(SELECT * FROM example WHERE type = "B" LIMIT 3)
UNION ALL
(SELECT * FROM example WHERE type = "C" LIMIT 3)
```
### [SQLFiddle](http://sqlfiddle.com/#!9/d911c/3)
|
LIMIT and UNION ALL not returning the requested number of records
|
[
"",
"mysql",
"sql",
""
] |
I have a database with this structure:
* users
+ id
+ username
+ ........
+ user\_status
* questions
+ id
+ asker\_id
+ asked\_id
+ ........
+ question\_status
* answers
+ id
+ question\_id
+ who\_answer\_id
+ ........
+ answer\_status
* likes
+ id
+ liker\_id
+ liked\_id
+ ........
+ like\_status
How can I create a trigger so that when I update user\_status = 'deleted' it automatically sets:
```
question_status = deleted
answer_status = deleted
like_status = deleted
```
|
What does this trigger do?
* Define new trigger
* Check if user\_status column is updated.
* Check if user\_status is updated to 'deleted'
* Take id of that user, and change status of all of it's questions, answers, and likes to 'deleted'.
Code:
```
DELIMITER //
CREATE TRIGGER user_delete BEFORE UPDATE ON users FOR EACH ROW
BEGIN
IF NEW.user_status <> OLD.user_status
THEN
IF STRCMP('deleted',NEW.user_status) = 0 THEN
      UPDATE questions SET question_status = 'deleted' WHERE asker_id = NEW.id;
UPDATE answers SET answer_status = 'deleted' WHERE who_answer_id = NEW.id;
UPDATE likes SET like_status = 'deleted' WHERE liker_id = NEW.id;
END IF;
IF STRCMP('active',NEW.user_status) = 0 THEN
      UPDATE questions SET question_status = 'active' WHERE asker_id = NEW.id;
UPDATE answers SET answer_status = 'active' WHERE who_answer_id = NEW.id;
UPDATE likes SET like_status = 'active' WHERE liker_id = NEW.id;
END IF;
END IF;
END//
DELIMITER ;
```
**A possible flaw you might notice later**
Consider this scenario on your site: QuestionA was already 'deleted' by some moderator. Now, when the user (the poster of that question) is re-activated, all such moderated questions/answers would come back!
You should ideally have another status to indicate whether it was deleted automatically or manually. Maybe 'auto\_delete' and 'delete' instead of just using 'delete'.
|
```
BEGIN
IF NEW.user_status <> OLD.user_status
THEN
IF STRCMP('deleted',NEW.user_status) = 0 THEN
UPDATE questions SET question_status = 'deleted' WHERE asker_id = NEW.id AND question_status != 'mod_deleted';
UPDATE answers SET answer_status = 'deleted' WHERE commentator_id = NEW.id AND answer_status != 'mod_deleted';
UPDATE likes SET like_status = 'deleted' WHERE liker_id = NEW.id AND like_status != 'mod_deleted';
END IF;
IF STRCMP('active',NEW.user_status) = 0 THEN
UPDATE questions SET question_status = 'active' WHERE asker_id = NEW.id AND question_status != 'mod_deleted';
UPDATE answers SET answer_status = 'active' WHERE commentator_id = NEW.id AND answer_status != 'mod_deleted';
UPDATE likes SET like_status = 'active' WHERE liker_id = NEW.id AND like_status != 'mod_deleted';
END IF;
END IF;
END
```
The \_status fields are of enum type, so if I add a 'mod\_deleted' value and add an AND condition to each update, that should solve the flaw.
|
Database multiple tables trigger
|
[
"",
"mysql",
"sql",
"database",
"database-design",
""
] |
I have this table named **`table1`**
```
id uniquefield field1 field2
1 11 test test2
2 12 test2 test3
```
and I have this value in my temp table **`#temp1`**
```
id uniquefield field1 field2
1 11 test test2
2 12 test2 test3
3 13 test4 test5
4 14 test5 test6
```
Now, what I want is to transfer all data from the `#temp1` table: insert rows that do not exist in `table1`, and update them if they do.
Does anybody know how to do this using SQL Server or dynamic SQL?
Hope to find some response from you.
|
Temp tables are no different from permanent tables in cases like the one you mentioned. The difference is that they are only available to the current connection for the user, and they are automatically deleted when the user disconnects from the instance. So you can handle these tables like any other SQL table and use a MERGE query to achieve this data manipulation.
Assuming the uniquefield column can be treated as the link between these tables:
```
MERGE table1 t
USING #temp1 t1
ON t.uniquefield = t1.uniquefield
WHEN MATCHED THEN
UPDATE
SET t.id = t1.id,
t.field1 = t1.field1,
t.field2 = t1.field2
WHEN NOT MATCHED BY TARGET THEN
INSERT (id, uniquefield, field1, field2)
VALUES (t1.id, t1.uniquefield, t1.field1, t1.field2 );
```
You can `DROP #temp1` after this and do a `SELECT * FROM table1` to check the updated/inserted data.
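As a rough sketch of the MERGE semantics (not T-SQL itself): rows whose key matches are updated, unmatched source rows are inserted. In Python, with rows as dicts keyed on a hypothetical `uniquefield`:

```python
def merge_into(target, source, key='uniquefield'):
    """Mirrors MERGE: matched rows are updated in place,
    unmatched source rows are inserted into target."""
    by_key = {row[key]: row for row in target}
    for row in source:
        if row[key] in by_key:
            by_key[row[key]].update(row)  # WHEN MATCHED THEN UPDATE
        else:
            target.append(dict(row))      # WHEN NOT MATCHED THEN INSERT
    return target
```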
|
`Temp tables` are no different from permanent tables in cases like the one you mentioned. The difference is that they are only available to the current connection for the user, and they are automatically deleted when the user disconnects from the instance. So you can handle these tables like any other SQL table.
Assuming the `uniquefield` column can be treated as the link between these tables.
Update statemant:
```
update table1
set
t.id = t1.id,
t.field1 = t1.field1,
t.field2 = t1.field2
from table1 t
join #temp1 t1
on t.uniquefield = t1.uniquefield
```
Insert statement:
```
insert into table1(id, uniquefield, field1, field2)
select t1.id, t1.uniquefield, t1.field1, t1.field2
from table1 t
join #temp1 t1
on t.uniquefield != t1.uniquefield
```
|
Transfer data from temp table to physical table
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I want to write an SQL request to display a list of users, but only the ones who haven't accepted a mission in at least 60 days. A user has multiple user missions attached to him, so I need to look at all of them and display the user only if no missions have been accepted in the last 60 days.
Here is what I have so far, but it is wrong: the user is in the list even if he has accepted a mission less than 60 days ago, though that mission doesn't show up. So this request just displays every mission that was accepted more than 60 days ago. That is not what I want.
```
SELECT
u.username, u.id, u.email, date_part('days', now() - um.date_accepted) as "days since last mission"
FROM
users_user u
INNER JOIN
users_usermission um
ON
u.id=um.user_id
WHERE
date_part('days', now() - um.date_accepted) > 60
```
|
I think a query using a correlated `not exists` predicate would work:
```
SELECT
*
FROM
users_user u
WHERE NOT EXISTS (
SELECT 1 FROM users_usermission um
WHERE u.id = um.user_id
AND um.date_accepted > CURRENT_DATE - interval '60 days'
);
```
This would return only users for whom no missions exist within the last 60 days.
See this [sample SQL Fiddle](http://www.sqlfiddle.com/#!15/64ecd/2) for an example.
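The NOT EXISTS filter amounts to: keep a user only if none of their missions were accepted within the window. A minimal Python sketch of that logic, with hypothetical data:

```python
from datetime import date, timedelta

def inactive_users(users, missions, today, days=60):
    """users: list of user ids; missions: list of (user_id, date_accepted).
    Returns users with no mission accepted after today - days."""
    cutoff = today - timedelta(days=days)
    recent = {uid for uid, accepted in missions if accepted > cutoff}
    return [u for u in users if u not in recent]
```

A user with no missions at all (like user 3 below) passes the filter too, which is exactly what NOT EXISTS gives you and what the inner-join version loses.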
|
You can do what you want by using aggregation and a left join. I would do it like this:
```
SELECT u.username, u.id, u.email,
date_part('days', now() - um.date_accepted) as "days since last mission"
FROM users_user u LEFT JOIN
(SELECT um.user_id, MAX(date_accepted) as maxda
FROM users_usermission um
GROUP BY um.user_id
) um
ON u.id = um.user_id
WHERE maxda <= CURRENT_DATE - interval '60 days' or
maxda IS NULL;
```
I'm not sure what the complicated calculation is in the `SELECT` clause, but I'm leaving it in. It doesn't appear to be part of your question.
|
Select only if no date are under 60 days from now
|
[
"",
"sql",
"postgresql",
"date",
""
] |
How do I convert the date to a month spelled out?
For example.
`01/08/2015` = January
I have it code in excel `=TEXT(value, "MMMM")` but we are shifting away from Excel to SQL Server.
Thanks
|
You need to use the [`DATENAME`](https://msdn.microsoft.com/en-us/library/ms174395.aspx) function.
For example:
```
SELECT DATENAME(mm, DateCol) AS [Month]
FROM MyTable
```
|
```
select datename(month,convert(varchar,yourField,106)) as month
from yourTable
```
|
How to convert date to text Month?
|
[
"",
"sql",
"t-sql",
"sql-server-2008-r2",
""
] |
I have a simple design
```
id | grpid | main
-----------------
1 | 1 | 1
2 | 1 | 0
3 | 1 | 0
4 | 2 | 0
5 | 2 | 1
6 | 2 | 0
```
The question to answer is
```
What is the "id" of the main in each group?
```
The result should be
```
id
---
1
5
```
Honestly, at the moment I'm not able to answer it on my own. Please assist me.
|
The simplest way to do this is:
```
select id from <table_name> where main=1
```
but since you mentioned you want the `id` per `grpid` group, the query below will work.
```
select id from <table_name> group by grpid, main having main = 1
```
You have to apply `group by` on your group id and, based on that, check that the value of main is 1. You will get the desired result.
|
Maybe I'm oversimplifying it here, but couldn't you just do this:
```
select id,
grpid
from table
where main = 1;
```
|
SQL group by can't find correct phrase
|
[
"",
"sql",
"sql-server",
"group-by",
""
] |
I have table A with following data sample. I want to select the number `between` the last two `/`
[](https://i.stack.imgur.com/49P06.png)
|
Try this:
```
DECLARE @text VARCHAR(MAX);
SET @text = '79011/67541/545415/5401dfd245/25405244';
SELECT REVERSE(LEFT(REPLACE(REVERSE(@text),LEFT(REVERSE(@text),CHARINDEX('/',REVERSE(@text))),''),CHARINDEX('/',REPLACE(REVERSE(@text),LEFT(REVERSE(@text),CHARINDEX('/',REVERSE(@text))),''))-1));
```
Check it, and then in your query replace `@text` with your column `PTH`.
**EDIT:**
A simpler solution:
```
SELECT REVERSE(SUBSTRING(REVERSE(@text)
,CHARINDEX('/', REVERSE(@text)) + 1
,CHARINDEX('/', REVERSE(@text), CHARINDEX('/', REVERSE(@text)) + 1) - CHARINDEX('/', REVERSE(@text)) - 1))
```
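As an aside, the same extraction is a one-liner in a host language; in Python, `rsplit` works from the right, which is exactly what the REVERSE/CHARINDEX gymnastics emulate:

```python
def between_last_two_slashes(path):
    """Return the segment between the last two '/' characters."""
    # rsplit('/', 2) splits off the last two segments from the right
    return path.rsplit('/', 2)[1]
```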
|
One way of doing it -
```
SELECT REPLACE(RIGHT(SUBSTRING(PTH, 1, LEN(PTH) - CHARINDEX('/', REVERSE(PTH)))
, CHARINDEX('/', REVERSE(PTH))),'/','')
FROM A
```
|
How to select a value between a pattern from the right side in SQL
|
[
"",
"sql",
"sql-server-2008",
"select",
"sql-server-2008-r2",
""
] |
I'm trying to determine the number of dates, in a date range, that a person held a particular status. I have three tables with the following (simplified) structure:
```
Table Fields
Calendar Date
DateRange RangeID, StartDate, EndDate
StatusHistory PersonID, Status, Date
```
The Calendar table contains the list of dates that I want to consider for the count. A person's status change may have been recorded before, after, or in the middle of the range, or might switch between statuses several times within that range.
I'd like to:
```
select PersonID, RangeID, Status, count(*) as DateCount
```
or at least have a result set with that structure.
I'm using SQL on DB2 for IBM i.
**Edit** with sample data:
DateRange table (containing the ranges I'd like to consider)
```
RangeID StartDate EndDate
+--------+------------+------------+
| A | 2015-01-01 | 2015-01-31 |
| B | 2015-02-06 | 2015-03-05 |
| C | 2015-03-07 | 2015-04-30 |
+--------+------------+------------+
```
Calendar table (containing the dates I'd like to count)
```
Date RangeID (not in Calendar table, but shown here for clarity)
+------------+ ----
| 2015-01-05 |
| 2015-01-06 | A
| 2015-01-07 |
| 2015-01-08 |
----
| 2015-02-05 |
----
| 2015-02-06 |
| 2015-02-07 | B
| 2015-02-08 |
| 2015-03-05 |
----
| 2015-03-06 |
----
| 2015-03-07 |
| 2015-03-08 |
| 2015-04-05 | C
| 2015-04-06 |
| 2015-04-07 |
| 2015-04-08 |
+------------+ ----
```
StatusHistory table (containing the dates that a person's status was entered or changed)
```
PersonID Status Date
+--------+-------+------------+ Edit for clarification:
| 1 | HAPPY | 2015-01-05 | While there's only one date
| 1 | SAD | 2015-02-07 | in each of these records,
| 1 | HAPPY | 2015-04-06 | a date range is implied. That is,
| 2 | HAPPY | 2015-01-07 | Person 1 is HAPPY from 2015-01-05
| 3 | SAD | 2014-10-31 | to 2015-02-07, then SAD 'til
| 3 | SAD | 2015-01-07 | 2015-04-06 and HAPPY from then on.
| 3 | HAPPY | 2015-04-05 |
| 3 | SAD | 2015-04-06 |
| 3 | SAD | 2015-04-07 |
+--------+-------+------------+
```
Result set
```
PersonID RangeID Status DateCount
+--------+-------+-------+---------+
| 1 | A | HAPPY | 4 |
| 1 | B | HAPPY | 1 |
| 1 | B | SAD | 3 |
| 1 | C | HAPPY | 3 |
| 1 | C | SAD | 3 |
| 2 | A | HAPPY | 2 |
| 2 | B | HAPPY | 4 |
| 2 | C | HAPPY | 6 |
| 3 | A | SAD | 4 |
| 3 | B | SAD | 4 |
| 3 | C | HAPPY | 1 |
| 3 | C | SAD | 5 |
+--------+-------+-------+---------+
```
|
We'd have an easier time of this if you were on LUW, and had access to `LEAD` (window functions are nice), but we'll just have to emulate it.
The first thing you need to ask is a conceptual question: what are you trying to count? The answer is "days" - yes, you have conditions, but that's what you're trying to count. So your initial table (the one in the `FROM`) will actually be your calendar table.
The next thing we'll need to do is to get the start-of-next range for `StatusHistory` (note that this would be an exclusive-upper bound. Always query dates/times/timestamps with an exclusive upper-bound... in fact, it's better if you pretend [`BETWEEN` does not exist](https://sqlblog.org/2011/10/19/what-do-between-and-the-devil-have-in-common)). Not having `LEAD` on the i, we'll have to emulate it. First, we need to index the entries, starting over for each person, and ordered by their entries:
```
StatusHistoryIndex (personId, status, startDate, index)
AS (SELECT personId, status, startDate,
ROW_NUMBER() OVER (PARTITION BY personId ORDER BY startDate)
FROM StatusHistory)
```
... next, we need to use this to connect the "current" row with the "next" one, by the generated index:
```
StatusHistoryRange (personId, status, startDate, endDate)
AS (SELECT Curr.personId, Curr.status, Curr.startDate,
Nxt.startDate
FROM StatusHistoryIndex Curr
LEFT JOIN StatusHistoryIndex Nxt
ON Nxt.personId = Curr.personId
AND Nxt.index = Curr.index + 1)
```
.... because we have an open upper-bound - we run up until the "last possible entry", and we don't *have* a "last" entry - we need to `LEFT JOIN` for `Nxt` (next), and the ending date (important - start of the next status!) will be null for the last entry. This sort of logic is a prime candidate to wrap in a view (to give the appearance of a complete range table), and potentially building an MQT if performance is an issue.
From here, it's straightforward. We don't have to worry about duplicates - the way we'll be joining takes care of that - and the ranges will overlap automatically as well.
A quick demonstration:
Given a calendar table that looks like this -
```
2015-01-01
2015-01-02
2015-01-03
2015-01-04
2015-01-05
```
... and a range table like this -
```
2015-01-02 2015-01-05
```
... Then joining can only *restrict* the rows chosen, as if it were a `WHERE` clause:
```
SELECT date
FROM Calendar
JOIN Range
ON Calendar.date >= Range.start
AND Calendar.date < Range.end
```
would yield:
```
2015-01-02
2015-01-03
2015-01-04
```
Of the excluded rows, `2015-01-01` is ignored because it's less than the start of the range, and `2015-01-05` is ignored because it's greater-than/equal to the end of the range. Joining more times with additional, similar ranges can only further restrict the data chosen. We have all the pieces we need.
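The half-open range filter the join applies can be sketched in plain Python (ISO date strings compare lexicographically, so plain `<=` and `<` work):

```python
def dates_in_range(calendar, start, end):
    """Half-open [start, end): mirrors date >= start AND date < end."""
    return [d for d in calendar if start <= d < end]
```

Running it over the five-day calendar above with the range `2015-01-02` to `2015-01-05` keeps exactly the three middle dates.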
The full statement ends up looking like this:
```
WITH StatusHistoryIndex (personId, status, startDate, index)
AS (SELECT personId, status, startDate,
ROW_NUMBER() OVER (PARTITION BY personId ORDER BY startDate)
FROM StatusHistory),
StatusHistoryRange (personId, status, startDate, endDate)
AS (SELECT Curr.personId, Curr.status, Curr.startDate,
Nxt.startDate
FROM StatusHistoryIndex Curr
LEFT JOIN StatusHistoryIndex Nxt
ON Nxt.personId = Curr.personId
AND Nxt.index = Curr.index + 1)
SELECT SHR.personId, DateRange.id, SHR.status, COUNT(*)
FROM Calendar
JOIN DateRange
ON Calendar.calendarDate >= DateRange.startRange
AND Calendar.calendarDate < DateRange.endRange
JOIN StatusHistoryRange SHR
ON Calendar.calendarDate >= SHR.startDate
AND (Calendar.calendarDate < SHR.endDate OR SHR.endDate IS NULL)
GROUP BY SHR.personId, DateRange.id, SHR.status
ORDER BY SHR.personId, DateRange.id, SHR.status
```
`SQL Fiddle Example`
(please note that my numbers are rather different than your example result. I'm confident the numbers I'm getting are the correct result, given the starting data, but let me know if I missed something)
You didn't specify, but I treated the ending date in `DateRange` as an exclusive upper-bound, which you may need to adjust (you *should* be storing the exclusive upper-bound here).
I also didn't put a limit on the ending date for the status. Presumably this would be `CURRENT_DATE`, although none of your test data went that far. It would be possible to put `COALESCE(Nxt.startDate, CURRENT_DATE)` inside the range CTE, but this is left as an exercise for the reader.
|
Here are two solutions:
1. Calculate all combinations and count them, so that 0s are displayed
2. Only show combinations with count > 0 by grouping
The idea for getting the correct status is to join with StatusHistory where its date is <= the calendar date, but no other StatusHistory date exists for the same PersonID that is later than it and still <= the calendar date. So essentially this trick selects the last existing state for a person (if any) on the given calendar day.
**Version 1**: Tested on PostgreSQL and Oracle ([SQL Fiddle](http://sqlfiddle.com/#!4/b30d6/6/0)).
```
SELECT
p.PersonID,
r.RangeID,
s.Status,
(SELECT COUNT(*) FROM Calendar c WHERE c.Date_ BETWEEN r.StartDate AND r.EndDate AND
EXISTS(SELECT * FROM StatusHistory h WHERE
h.PersonID = p.PersonID AND h.Status = s.Status AND h.Date_ <= c.Date_ AND
NOT EXISTS(SELECT * FROM StatusHistory z WHERE
z.PersonID = p.PersonID AND z.Date_ <= c.Date_ AND z.Date_ > h.Date_))
) AS Amount
FROM
(SELECT DISTINCT PersonID FROM StatusHistory) p,
(SELECT RangeID, StartDate, EndDate FROM DateRange) r,
(SELECT DISTINCT Status FROM StatusHistory) s
;
```
**Version 2**: Alternatively you can modify the old solution if you don't want the 0s ([SQL Fiddle](http://sqlfiddle.com/#!4/b30d6/11/0)):
```
SELECT
h.PersonID,
r.RangeID,
h.Status,
COUNT(*)
FROM
Calendar c,
DateRange r,
StatusHistory h
WHERE
c.Date_ BETWEEN r.StartDate AND r.EndDate AND
h.Date_ <= c.Date_ AND
NOT EXISTS (SELECT s.Date_ FROM StatusHistory s WHERE
s.Date_ <= c.Date_ AND s.Date_ > h.Date_ AND s.PersonID = h.PersonID)
GROUP BY
h.PersonID,
r.RangeID,
h.Status
;
```
If you make the first query `MINUS` the second query you will see that indeed only the rows with count = 0 are returned as it should be, because other than the 0 counts the queries should return the same rows.
The select was already correct; all that was needed was the grouping and correctly joining/filtering the tables. The grouping is needed because count is an aggregate function (like sum, min, max, etc.), and aggregate functions work on groups. You can imagine that you only look at the columns specified in the group by, and rows where they are the same are put into one group; for the other columns you have to use aggregate functions (you cannot store several rows in one cell, except with group\_concat (MySQL) or listagg (Oracle), which are also aggregate functions).
|
SQL - Count dates that a status was held in a date range
|
[
"",
"sql",
"db2",
""
] |
I have some query:
```
select disconnect_reason as disconnectReason, disconnect_cause, count(*) as callsCount
from calls group by disconnect_reason, disconnect_cause
```
Now I need to get only the items with callsCount = 1.
How can I get it without `having`?
|
You can always use the `HAVING` clause to filter groups, but if you want another way, try something like this:
```
select * from
(
select disconnect_reason as disconnectReason, disconnect_cause, count(*) as callsCount
from calls group by disconnect_reason, disconnect_cause
) A
Where callsCount = 1
```
|
Try using a **CTE** for **MS SQL Server**:
```
with cte
as
(
select disconnect_reason as disconnectReason, disconnect_cause, count(*) as callsCount
from calls group by disconnect_reason, disconnect_cause
)
select * from cte where callsCount = 1
```
|
How can I filter intems by `count(*)` without `having`?
|
[
"",
"sql",
""
] |
Please help me to solve this.
I have this table note:
[Notes](http://sqlfiddle.com/#!3/615a5/12/0)
I want only this display:
[Highest note strength value, only 1 display if more than 1 same note strength value of distinct SubjectID and Stat](http://sqlfiddle.com/#!3/70112/1/0)
The problem is that there are notes that have the same note strength value.
|
Change this:
```
SELECT ID, SubjectID, Stat, MAX([Note Strength Value]) AS [Note Strength Value]
FROM dbo.Note
WHERE SeasonContext = 2015 AND Days IN (99999, 7, 14, 30, 60, 90, 77777, 88888)
GROUP BY SubjectID, Stat
```
to this:
```
SELECT ID, SubjectID, Stat, MAX([Note Strength Value]) AS [Note Strength Value]
FROM dbo.Note
WHERE SeasonContext = 2015 AND Days IN (99999, 7, 14, 30, 60, 90, 77777, 88888)
GROUP BY SubjectID, Stat, ID
```
Always include the non-aggregated fields you select in the GROUP BY.
|
```
SELECT n.ID
,n.SubjectID
,dbo.udf_StripHTML(ISNULL(n.Note,'')) AS [Note]
,n.[Note Strength Value]
FROM dbo.[Note] n INNER JOIN
(SELECT ID,SubjectID,Stat,MAX([Note Strength Value]) AS [Note Strength Value]
FROM dbo.Note
WHERE SeasonContext = 2015 AND Days in (99999, 7, 14, 30, 60, 90, 77777, 88888)
GROUP BY ID,SubjectID,Stat) m --added id in the group by clause
ON n.ID = m.ID
WHERE n.SubjectID = 14463
ORDER BY n.[Note Strength Value] DESC
```
You can't select a column when you are using an aggregate function without specifying a `group by` on that column.
|
SQL Server GROUP BY that include column in SELECT but not in GROUP BY clause
|
[
"",
"sql",
"database",
"database-administration",
""
] |
Sorry for the bad title, I don't know how to describe it better; if you have a better one, please tell me ;)
Please look at these small sql fiddle:
<http://sqlfiddle.com/#!6/8f522/4/1>
I need the "Value" from a specific "Title" in a new column ("Age"). But I need the "Value" in any row for the same "SN\_Main".
The first query was my first try. It is fast and ok but I get the "Value" only for the row with the same "Title".
The second query is what I want, but the subquery is too slow, so I want to solve this without a subquery. The production tables are bigger and I need this about 10 times, and with the subquery it becomes incredibly slow.
So, is there any way to get this output with other SQL statements?
I hope you understand me, I'm so sorry about bad explanation :)
Regards
Martin
|
You can join back onto the same table
```
SELECT MAIN.SN_Main
,MAIN.Data
,DETA.SN_Deta
,DETA.Title
,DETA.Value
,DETA2.Value AS Age
FROM MainData AS MAIN
INNER JOIN DetaData AS DETA
ON MAIN.SN_Main = DETA.SN_Main
INNER JOIN DetaData as DETA2
ON DETA.SN_Main = DETA2.SN_Main
and DETA2.Title = 'Age'
```
|
A proper index could also help:
```
CREATE NONCLUSTERED INDEX idx_DetaData_SN_Main_Title ON DetaData
( SN_Main, Title) INCLUDE (Value)
```
And rewrite your query with inner join:
```
SELECT MAIN.SN_Main, MAIN.Data, DETA.SN_Deta, DETA.Title, DETA.Value, DETA2.Value
FROM MainData AS MAIN
INNER JOIN DetaData AS DETA ON MAIN.SN_Main = DETA.SN_Main
INNER JOIN DetaData AS DETA2 ON MAIN.SN_Main = DETA2.SN_Main AND DETA2.Title = 'Age'
```
[SQL FIDDLE](http://sqlfiddle.com/#!6/b9ce46/1)
|
SQL - new column with the same cell value in every row for the same id without subquery
|
[
"",
"sql",
"sql-server",
"subquery",
""
] |
I have a table that lists employee schedules. I need to write a query to determine if more than one person was working on the same task at the same time. My data is like the following:
```
EmployeeID JobID StartTime EndTime M Tu W Th F Sa Su
======================================================================
10001 201 12:00:00 14:00:00 1 1 1 1 1 0 0
10001 202 15:00:00 17:00:00 1 1 1 1 1 0 0
10001 202 17:30:00 18:30:00 1 1 1 1 1 0 0
10002 202 16:00:00 18:00:00 1 1 1 1 1 0 0
```
The gist of what I was trying is below, but obviously this is not correct. I just can't figure out where to even start looking for a solution to this problem.
```
select *
from Table1
where (JobID=JobID) AND
(StartTime > StartTime AND StartTime < EndTime) OR
(EndTime > StartTime AND EndTime < EndTime)
```
I do need to take days into consideration as well, but that should be pretty straight forward once I figure out how to do the rest of the query.
|
You're close but you need to reference the same table twice. So, you need a `JOIN` to represent your overlap table. Try something like this:
```
SELECT *
FROM Table1 AS t1
JOIN Table1 AS t1_overlap ON
t1_overlap.JobID = t1.JobID
AND t1_overlap.EmployeeID != t1.EmployeeID
AND (t1_overlap.StartTime BETWEEN t1.StartTime AND t1.EndTime
OR t1_overlap.EndTime BETWEEN t1.StartTime AND t1.EndTime)
AND t1_overlap.M = t1.M
AND t1_overlap.Tu = t1.Tu
AND t1_overlap.W = t1.W
AND t1_overlap.Th = t1.Th
AND t1_overlap.F = t1.F
AND t1_overlap.Sa = t1.Sa
AND t1_overlap.Su = t1.Su
```
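As a runnable sketch of the self-join (weekday flags omitted for brevity; sqlite3 stands in for SQL Server). It uses the strict-inequality overlap test `StartA < EndB AND EndA > StartB`, a common variant of the `BETWEEN` checks above that also catches full containment:

```python
import sqlite3

# Sample rows from the question, minus the weekday columns.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Table1 (EmployeeID INT, JobID INT, StartTime TEXT, EndTime TEXT);
INSERT INTO Table1 VALUES
 (10001, 202, '15:00:00', '17:00:00'),
 (10001, 202, '17:30:00', '18:30:00'),
 (10002, 202, '16:00:00', '18:00:00');
""")

rows = conn.execute("""
SELECT t1.EmployeeID, t2.EmployeeID
FROM Table1 t1
JOIN Table1 t2
  ON t2.JobID = t1.JobID
 AND t2.EmployeeID <> t1.EmployeeID
 AND t1.StartTime < t2.EndTime    -- classic interval-overlap test:
 AND t1.EndTime   > t2.StartTime  -- also catches full containment
ORDER BY t1.EmployeeID, t1.StartTime
""").fetchall()
print(rows)  # each overlapping pair appears once per direction
```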
|
You just need to `self join` on the same table like others have mentioned and make sure to join on these following conditions:
```
SELECT distinct
t1.EmployeeID as FirstEmployee,
t2.EmployeeID as SecondEmployee,
t1.StartTime as FirstEmployeeStartTime,
t2.StartTime as SecondEmployeeStartTime,
t1.EndTime as FirstEmployeeEndTime,
t2.EndTime as SecondEmployeeEndTime
FROM table1 t1
INNER JOIN table1 t2 ON t1.JobID = t2.JobID
AND t1.employeeid <> t2.employeeid
AND (
t1.StartTime BETWEEN t2.StartTime
AND t2.EndTime
OR t1.EndTime BETWEEN t2.StartTime
AND t2.EndTime
)
```
[**SQL Fiddle Demo**](http://www.sqlfiddle.com/#!3/a0647/8/0)
|
Check if Overlap in Schedules
|
[
"",
"sql",
"sql-server-2008",
""
] |
I am looking for the syntax to add a column to a MySQL table with the current time as a default value.
|
**IMPORTANT EDIT:** It is now possible to achieve this with `DATETIME` fields since **MySQL 5.6.5**, take a look at the [other post](https://stackoverflow.com/a/10603198/4275342) below...
But you can do it with `TIMESTAMP`:
```
create table test (str varchar(32), ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP)
```
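The same `DEFAULT CURRENT_TIMESTAMP` pattern can be tried out quickly outside MySQL; this sketch uses Python's sqlite3, which accepts the same column default:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE test (str TEXT, ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP)")

# Only 'str' is supplied; 'ts' is filled in by the column default.
conn.execute("INSERT INTO test (str) VALUES ('hello')")
(s, ts), = conn.execute("SELECT str, ts FROM test").fetchall()
print(s, ts)  # hello 2024-01-01 12:00:00 (some current UTC timestamp)
```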
|
Even though many people have provided solutions, this is just to add more information.
If you just want to insert the current timestamp at the time of row insertion:
```
ALTER TABLE mytable ADD mytimestampcol TIMESTAMP DEFAULT CURRENT_TIMESTAMP;
```
If you also want this column to be updated whenever the row is updated, use this:
```
ALTER TABLE mytable ADD mytimestampcol TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP;
```
|
Adding a new column with the current time as a default value
|
[
"",
"mysql",
"sql",
"timestamp",
"ddl",
""
] |
A brief explanation on the relevant domain part:
A Category is composed of four data:
1. Gender (Male/Female)
2. Age Division (Mighty Mite to Master)
3. Belt Color (White to Black)
4. Weight Division (Rooster to Heavy)
So, `Male Adult Black Rooster` forms one category. Some combinations may not exist, such as mighty mite black belt.
An Athlete fights Athletes of the same Category, and if he qualifies, he fights Athletes of different Weight Divisions (but of the same Gender, Age and Belt).
To the modeling. I have a `Category` table, already populated with all combinations that exists in the domain.
```
CREATE TABLE Category (
[Id] [int] IDENTITY(1,1) NOT NULL,
[AgeDivision_Id] [int] NULL,
[Gender] [int] NULL,
[BeltColor] [int] NULL,
[WeightDivision] [int] NULL
)
```
A `CategorySet` and a `CategorySet_Category`, which forms a many to many relationship with `Category`.
```
CREATE TABLE CategorySet (
[Id] [int] IDENTITY(1,1) NOT NULL,
[Championship_Id] [int] NOT NULL,
)
CREATE TABLE CategorySet_Category (
[CategorySet_Id] [int] NOT NULL,
[Category_Id] [int] NOT NULL
)
```
Given the following result set:
```
| Options_Id | Championship_Id | AgeDivision_Id | BeltColor | Gender | WeightDivision |
|------------|-----------------|----------------|-----------|--------|----------------|
1. | 2963 | 422 | 15 | 7 | 0 | 0 |
2. | 2963 | 422 | 15 | 7 | 0 | 1 |
3. | 2963 | 422 | 15 | 7 | 0 | 2 |
4. | 2963 | 422 | 15 | 7 | 0 | 3 |
5. | 2964 | 422 | 15 | 8 | 0 | 0 |
6. | 2964 | 422 | 15 | 8 | 0 | 1 |
7. | 2964 | 422 | 15 | 8 | 0 | 2 |
8. | 2964 | 422 | 15 | 8 | 0 | 3 |
```
Because athletes may fight two CategorySets, I need `CategorySet` and `CategorySet_Category` to be populated in two different ways (it can be two queries):
One `Category_Set` for each row, with one `CategorySet_Category` pointing to the corresponding `Category`.
One `Category_Set` that groups all WeightDivisions in one `CategorySet` in the same AgeDivision\_Id, BeltColor, Gender. In this example, only `BeltColor` varies.
So the final result would have a total of 10 `CategorySet` rows:
```
| Id | Championship_Id |
|----|-----------------|
| 1 | 422 |
| 2 | 422 |
| 3 | 422 |
| 4 | 422 |
| 5 | 422 |
| 6 | 422 |
| 7 | 422 |
| 8 | 422 |
| 9 | 422 | /* groups different Weight Division for BeltColor 7 */
| 10 | 422 | /* groups different Weight Division for BeltColor 8 */
```
And `CategorySet_Category` would have 16 rows:
```
| CategorySet_Id | Category_Id |
|----------------|-------------|
| 1 | 1 |
| 2 | 2 |
| 3 | 3 |
| 4 | 4 |
| 5 | 5 |
| 6 | 6 |
| 7 | 7 |
| 8 | 8 |
| 9 | 1 | /* groups different Weight Division for BeltColor 7 */
| 9 | 2 | /* groups different Weight Division for BeltColor 7 */
| 9 | 3 | /* groups different Weight Division for BeltColor 7 */
| 9 | 4 | /* groups different Weight Division for BeltColor 7 */
| 10 | 5 | /* groups different Weight Division for BeltColor 8 */
| 10 | 6 | /* groups different Weight Division for BeltColor 8 */
| 10 | 7 | /* groups different Weight Division for BeltColor 8 */
| 10 | 8 | /* groups different Weight Division for BeltColor 8 */
```
I have no idea how to insert into `CategorySet`, grab its generated Id, then use it to insert into `CategorySet_Category`.
I hope I've made my intentions clear.
I've also [created a SQLFiddle](http://sqlfiddle.com/#!6/56b17/1).
**Edit 1:** I commented in Jacek's answer that this would run only once, but this is false. It will run a couple of times a week. I have the option to run as SQL Command from C# or a stored procedure. Performance is not crucial.
**Edit 2:** Jacek suggested using `SCOPE_IDENTITY` to return the Id. The problem is, `SCOPE_IDENTITY` returns only the last inserted Id, and I insert more than one row into `CategorySet`.
**Edit 3:** Answer to @FutbolFan who asked how the FakeResultSet is retrieved.
It is a table `CategoriesOption` (Id, Price\_Id, MaxAthletesByTeam)
And tables `CategoriesOptionBeltColor`, `CategoriesOptionAgeDivision`, `CategoriesOptionWeightDivison`, `CategoriesOptionGender`. Those four tables are basically the same (Id, CategoriesOption\_Id, Value).
The query look like this:
```
SELECT * FROM CategoriesOption co
LEFT JOIN CategoriesOptionAgeDivision ON
CategoriesOptionAgeDivision.CategoriesOption_Id = co.Id
LEFT JOIN CategoriesOptionBeltColor ON
CategoriesOptionBeltColor.CategoriesOption_Id = co.Id
LEFT JOIN CategoriesOptionGender ON
CategoriesOptionGender.CategoriesOption_Id = co.Id
LEFT JOIN CategoriesOptionWeightDivision ON
CategoriesOptionWeightDivision.CategoriesOption_Id = co.Id
```
|
The solution described here will work correctly in multi-user environment and when destination tables `CategorySet` and `CategorySet_Category` are not empty.
I used schema and sample data from your [SQL Fiddle](http://sqlfiddle.com/#!6/56b17/1).
**First part is straight-forward**
(ab)use [`MERGE`](https://msdn.microsoft.com/en-us/library/bb510625.aspx) with `OUTPUT` clause.
`MERGE` can `INSERT`, `UPDATE` and `DELETE` rows. In our case we need only to `INSERT`. `1=0` is always false, so the `NOT MATCHED BY TARGET` part is always executed. In general, there could be other branches, see docs. `WHEN MATCHED` is usually used to `UPDATE`; `WHEN NOT MATCHED BY SOURCE` is usually used to `DELETE`, but we don't need them here.
This convoluted form of `MERGE` is equivalent to simple `INSERT`, but unlike simple `INSERT` its `OUTPUT` clause allows to refer to the columns that we need.
```
MERGE INTO CategorySet
USING
(
SELECT
FakeResultSet.Championship_Id
,FakeResultSet.Price_Id
,FakeResultSet.MaxAthletesByTeam
,Category.Id AS Category_Id
FROM
FakeResultSet
INNER JOIN Category ON
Category.AgeDivision_Id = FakeResultSet.AgeDivision_Id AND
Category.Gender = FakeResultSet.Gender AND
Category.BeltColor = FakeResultSet.BeltColor AND
Category.WeightDivision = FakeResultSet.WeightDivision
) AS Src
ON 1 = 0
WHEN NOT MATCHED BY TARGET THEN
INSERT
(Championship_Id
,Price_Id
,MaxAthletesByTeam)
VALUES
(Src.Championship_Id
,Src.Price_Id
,Src.MaxAthletesByTeam)
OUTPUT inserted.id AS CategorySet_Id, Src.Category_Id
INTO CategorySet_Category (CategorySet_Id, Category_Id)
;
```
`FakeResultSet` is joined with `Category` to get `Category.id` for each row of `FakeResultSet`. It is assumed that `Category` has unique combinations of `AgeDivision_Id, Gender, BeltColor, WeightDivision`.
In `OUTPUT` clause we need columns from both source and destination tables. The `OUTPUT` clause in simple `INSERT` statement doesn't provide them, so we use `MERGE` here that does.
The `MERGE` query above would insert 8 rows into `CategorySet` and insert 8 rows into `CategorySet_Category` using generated IDs.
**Second part**
needs temporary table. I'll use a table variable to store generated IDs.
```
DECLARE @T TABLE (
CategorySet_Id int
,AgeDivision_Id int
,Gender int
,BeltColor int);
```
We need to remember the generated `CategorySet_Id` together with the combination of `AgeDivision_Id, Gender, BeltColor` that caused it.
```
MERGE INTO CategorySet
USING
(
SELECT
FakeResultSet.Championship_Id
,FakeResultSet.Price_Id
,FakeResultSet.MaxAthletesByTeam
,FakeResultSet.AgeDivision_Id
,FakeResultSet.Gender
,FakeResultSet.BeltColor
FROM
FakeResultSet
GROUP BY
FakeResultSet.Championship_Id
,FakeResultSet.Price_Id
,FakeResultSet.MaxAthletesByTeam
,FakeResultSet.AgeDivision_Id
,FakeResultSet.Gender
,FakeResultSet.BeltColor
) AS Src
ON 1 = 0
WHEN NOT MATCHED BY TARGET THEN
INSERT
(Championship_Id
,Price_Id
,MaxAthletesByTeam)
VALUES
(Src.Championship_Id
,Src.Price_Id
,Src.MaxAthletesByTeam)
OUTPUT
inserted.id AS CategorySet_Id
,Src.AgeDivision_Id
,Src.Gender
,Src.BeltColor
INTO @T(CategorySet_Id, AgeDivision_Id, Gender, BeltColor)
;
```
The `MERGE` above would group `FakeResultSet` as needed and insert 2 rows into `CategorySet` and 2 rows into `@T`.
Then join `@T` with `Category` to get `Category.IDs`:
```
INSERT INTO CategorySet_Category (CategorySet_Id, Category_Id)
SELECT
TT.CategorySet_Id
,Category.Id AS Category_Id
FROM
@T AS TT
INNER JOIN Category ON
Category.AgeDivision_Id = TT.AgeDivision_Id AND
Category.Gender = TT.Gender AND
Category.BeltColor = TT.BeltColor
;
```
This will insert 8 rows into `CategorySet_Category`.
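The `MERGE ... OUTPUT` trick above is SQL Server-specific. The underlying task (insert a parent row, capture its generated id, then insert child rows with it) can be sketched generically; this hypothetical minimal schema uses Python's sqlite3 and `lastrowid`:

```python
import sqlite3

# Hypothetical minimal schema; the point is only the id-capturing pattern.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE CategorySet (Id INTEGER PRIMARY KEY AUTOINCREMENT,
                          Championship_Id INTEGER);
CREATE TABLE CategorySet_Category (CategorySet_Id INTEGER,
                                   Category_Id INTEGER);
""")

category_ids = [1, 2, 3, 4]            # one group of Category rows
cur = conn.execute("INSERT INTO CategorySet (Championship_Id) VALUES (422)")
new_id = cur.lastrowid                  # the generated CategorySet.Id
conn.executemany("INSERT INTO CategorySet_Category VALUES (?, ?)",
                 [(new_id, c) for c in category_ids])

rows = conn.execute("SELECT * FROM CategorySet_Category").fetchall()
print(rows)  # [(1, 1), (1, 2), (1, 3), (1, 4)]
```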
|
This is not the full answer, but a direction you can use to solve it:
1st query:
```
select row_number() over(order by t, Id) as n, Championship_Id
from (
select distinct 0 as t, b.Id, a.Championship_Id
from FakeResultSet as a
inner join
Category as b
on
a.AgeDivision_Id=b.AgeDivision_Id and
a.Gender=b.Gender and
a.BeltColor=b.BeltColor and
a.WeightDivision=b.WeightDivision
union all
select distinct 1, BeltColor, Championship_Id
from FakeResultSet
) as q
```
2nd query:
```
select q2.CategorySet_Id, c.Id as Category_Id from (
select row_number() over(order by t, Id) as CategorySet_Id, Id, BeltColor
from (
select distinct 0 as t, b.Id, null as BeltColor
from FakeResultSet as a
inner join
Category as b
on
a.AgeDivision_Id=b.AgeDivision_Id and
a.Gender=b.Gender and
a.BeltColor=b.BeltColor and
a.WeightDivision=b.WeightDivision
union all
select distinct 1, BeltColor, BeltColor
from FakeResultSet
) as q
) as q2
inner join
Category as c
on
(q2.BeltColor is null and q2.Id=c.Id)
OR
(q2.BeltColor = c.BeltColor)
```
Of course, this will work only for empty `CategorySet` and `CategorySet_Category` tables, but you can use `select coalesce(max(Id), 0) from CategorySet` to get the current maximum and add it to `row_number`; that way you will get the real ID that will be inserted into the `CategorySet` row for the second query.
|
Insert into multiple tables
|
[
"",
"sql",
"sql-server",
"sql-server-2014",
""
] |
I have a situation where I need to use a select statement in a WHERE clause, but also append and prepend wildcards to the value it returns, e.g. something like:
```
select * from [Customers.customervisibility] where userId like '%,' (Select Id from [Users.Users] where name ='MyName') ',%'
```
but running this gives:
> Incorrect syntax near ',%'.
Now, the `SELECT` statement is only ever going to return one id, so I don't know if there is a better way to write it, maybe using a function.
The overall goal is to select the rows from `[customer.customervisibility]` where the id is contained in a comma-separated string column `[UserId]`,
e.g. if `id = 8`
I need to get the rows where `*,8,*...`
It has to be inline (I cannot use a variable), and you will have to excuse the TERRIBLE database design; this is so that it will work with third-party software.
|
Try this `where` clause.
If your DBMS supports `CONCAT`, use this:
```
userId like concat('%' ,(Select top 1 cast(Id as varchar(50)) from [Users.Users] where name ='MyName') ,'%')
```
Else
```
userId like '%' +(Select top 1 cast(Id as varchar(50)) from [Users.Users] where name ='MyName') +'%'
```
I have used `TOP 1` to avoid a "subquery returns more than one row" error, in case your subquery returns more than one row.
|
Try this:
```
select * from [Customers.customervisibility] where convert(VARCHAR(100),userId) like '%'+(Select Id from [Users.Users] where name ='MyName')+'%'
```
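The concatenate-wildcards-around-a-subquery idea can be sketched as follows (tables and data made up; `||` is the string concatenation operator in SQLite/Oracle/PostgreSQL, while SQL Server uses `+` as in the answers above). Note why the question pads with commas: without them, id 8 would also match 18:

```python
import sqlite3

# Hypothetical tables mirroring the question's CSV-column design.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE visibility (label TEXT, userId TEXT);   -- CSV of user ids
INSERT INTO visibility VALUES ('a', ',3,8,21,'), ('b', ',18,');
CREATE TABLE users (Id INTEGER, name TEXT);
INSERT INTO users VALUES (8, 'MyName');
""")

rows = conn.execute("""
SELECT label FROM visibility
WHERE userId LIKE '%,' || (SELECT Id FROM users WHERE name = 'MyName') || ',%'
""").fetchall()
print(rows)  # [('a',)] -- ',18,' does not match, thanks to the commas
```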
|
Is it possible to have a where with a select statement which you add wildcards to
|
[
"",
"sql",
"select",
"where-clause",
"wildcard",
""
] |
I am stuck with a simple query. What I want is to get all rows except one. Kindly have a look at the following data.
```
COL_A COL_B
B D
B (null)
B C
G D
G (null)
G C
```
I want to get all rows except **B C**. Kindly have a look at the [sqlfiddle](http://sqlfiddle.com/#!4/33a62/6).
I have tried to get the rows by ANDing `col_A <> 'B' and col_B <> 'C'`, but the AND is not doing what I expect. Your help will be much appreciated.
Thanks
|
Try
```
where not(col_A = 'B' and col_B = 'C')
```
or
```
where col_A <> 'B' or col_B <> 'C'
```
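The two forms are equivalent by De Morgan's law, and that equivalence survives SQL's three-valued NULL logic; the sketch below (sqlite3 standing in for Oracle) runs both on the question's data. One caveat worth seeing: under *both* forms the row `('B', NULL)` drops out, because the predicate evaluates to NULL rather than TRUE:

```python
import sqlite3

# Sample data from the question, NULLs included on purpose.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col_A TEXT, col_B TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [('B', 'D'), ('B', None), ('B', 'C'),
                  ('G', 'D'), ('G', None), ('G', 'C')])

q1 = conn.execute("SELECT * FROM t WHERE NOT (col_A = 'B' AND col_B = 'C') "
                  "ORDER BY rowid").fetchall()
q2 = conn.execute("SELECT * FROM t WHERE col_A <> 'B' OR col_B <> 'C' "
                  "ORDER BY rowid").fetchall()
print(q1 == q2)  # True: both exclude ('B','C') and also ('B', NULL)
```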
|
One possible solution. Maybe not the most elegant:
```
select req_for col_A, doc_typ col_B
from a
where (req_for IS NULL OR doc_typ IS NULL)
OR (req_for,doc_typ)
NOT IN (select 'B','C' from dual);
```
|
Oracle: Get All Rows Except One
|
[
"",
"sql",
"oracle",
"oracle11g",
""
] |
I have been asked to do a job which is a little beyond my SQL skills, and having done some research online, I cannot quite find the right solution for SQL Server 2008 (as opposed to MySQL).
I have a table where I need to update specific names by adding an additional string after a certain point whilst keeping the rest of the string to the right intact.
```
e.g.
Current Name = 'Name - Location - 0005'
New Name = 'Name (West) - Location - 0005'
```
As you can see I need to add the text (West), I have a table which lists all the codes (e.g. 0005) and I need to link that into my where clause to only update the relevant names.
So the questions are:
1 - How can I update the Name by adding additional text at a set location (the 5th character), whilst maintaining whatever text is to the right of the name?
2 - Is there a way I can do a sort of LIKE IN to check the code? I tried using CONTAINS, however the table is not full-text indexed. This is not a massive issue, as I can manually create an update statement using just LIKE for each row within the table; it would just be nice to know, to add to my knowledge base.
|
With the help of `STUFF` we can achieve this:
```
STUFF ( character_expression , start , length , replaceWith_expression )
SELECT STUFF('Name - Location - 0005', 5, 0, ' (West)');
```
output would be `Name (West) - Location - 0005`
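For readers outside T-SQL, `STUFF(s, start, length, replacement)` just deletes `length` characters at 1-based position `start` and inserts `replacement` there. A rough Python analogue (my own helper, not part of any library):

```python
def stuff(s, start, length, replacement):
    """Rough Python analogue of T-SQL STUFF (1-based start position):
    delete `length` chars at `start`, insert `replacement` there."""
    return s[:start - 1] + replacement + s[start - 1 + length:]

result = stuff('Name - Location - 0005', 5, 0, ' (West)')
print(result)  # Name (West) - Location - 0005
```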
|
Use STUFF function
```
declare @names varchar(100)
set @names='Name - Location - 0005'
select stuff(@names,5,0,' (West)')
```
|
SQL Server 2008 - Add to string in particular position within name column
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-update",
""
] |
I have a SQL `User-Defined Table Type`. It is used in many
stored procedures. Now I need to change a column in that table type.
I tried to drop and recreate the `User-Defined Table Type`, but SQL Server
doesn't allow that. It shows the following error:
```
Msg 3732, Level 16, State 1, Line 3
Cannot drop type 'dbo.UserDefinedTableType' because it is being referenced by object 'SP_DoSomething'. There may be other objects that reference this type.
Msg 219, Level 16, State 1, Line 3
The type 'dbo.UserDefinedTableType' already exists, or you do not have permission to create it.
```
How can I alter the `User-Defined Table Type` without modifying all the stored procedures that use it?
|
You have a binding in the `SP_DoSomething` stored procedure: the type you want to change is used in it.
You need to save the script of that procedure, drop it, change `dbo.UserDefinedTableType`, and create the procedure again.
There is a similar post [here](https://stackoverflow.com/questions/11410722/altering-user-defined-table-types-in-sql-server). Check if some of the answers can help you. The answer by @norlando seems promising.
|
In short, you should delete all functions and stored procedures that use this User-Defined Table Type. Then you can drop the type and recreate it. Finally, recreate all the stored procedures and functions you deleted in the previous step.
You can use the command below to drop and recreate all the SPs and functions.
I suggest you run it with PRINT to generate the Drop and Create commands; then you can put your modification between the Drop commands and the Create commands.
```
Declare @fullObjectName NVarChar(1000) = 'ref.Employee'
Declare @CreateCommand VarChar(Max), @DropCommand VarChar(Max)
Declare @ProcList Table
(
RowId Int,
CreateCommand NVarChar(Max),
DropCommand NVarChar(Max)
)
Insert Into @ProcList
SELECT ROW_NUMBER() OVER (ORDER BY OBJECT_NAME(m.object_id)) RowId,
definition As CreateCommand,
'DROP ' +
CASE OBJECTPROPERTY(referencing_id, 'IsProcedure')
WHEN 1 THEN 'PROC '
ELSE
CASE
WHEN OBJECTPROPERTY(referencing_id, 'IsScalarFunction') = 1 OR OBJECTPROPERTY(referencing_id, 'IsTableFunction') = 1 OR OBJECTPROPERTY(referencing_id, 'IsInlineFunction') = 1 THEN 'FUNCTION '
ELSE ''
END
END
+ SCHEMA_NAME(o.schema_id) + '.' +
+ OBJECT_NAME(m.object_id) As DropCommand
FROM sys.sql_expression_dependencies d
JOIN sys.sql_modules m
ON m.object_id = d.referencing_id
JOIN sys.objects o
ON o.object_id = m.object_id
WHERE referenced_id = TYPE_ID(@fullObjectName)
-----
Declare cur_drop SCROLL Cursor For Select CreateCommand, DropCommand From @ProcList
OPEN cur_drop
Fetch Next From cur_drop Into @CreateCommand, @DropCommand
While @@FETCH_STATUS = 0
Begin
--Exec sp_executesql @DropCommand
PRINT @DropCommand
Fetch Next From cur_drop Into @CreateCommand, @DropCommand
End
/*
Drop And ReCreate User Defined Table Type
*/
Fetch First From cur_drop Into @CreateCommand, @DropCommand
While @@FETCH_STATUS = 0
Begin
--Exec sp_executesql @CreateCommand
PRINT @CreateCommand
Fetch Next From cur_drop Into @CreateCommand, @DropCommand
End
Close cur_drop
Deallocate cur_drop
```
|
Unable to Modify User-Defined Table Type
|
[
"",
"sql",
"sql-server",
"stored-procedures",
"sql-server-2012",
"user-defined-types",
""
] |
I have the following table say filenames.
```
filename flag
fileA 1
fileB 0
fileC 0
fileD 1
fileA 1
```
I want all distinct filenames from this table, AND
if the flag is 1 for any filename, I want that filename to be replaced with 4 filenames: FilaA\_part\_1, FilaA\_part\_2, FilaA\_part\_3, FilaA\_part\_4.
The output should be
```
FilaA_part_1
FilaA_part_2
FilaA_part_3
FilaA_part_4
fileB
fileC
FilaD_part_1
FilaD_part_2
FilaD_part_3
FilaD_part_4
```
I am able to achieve this with a temp table. I want to know if it is possible with a single select query.
I could get as far as
```
SELECT DISTINCT CASE WHEN FLAG=1 THEN FILENAME + '_part_1'
ELSE FILENAME END
FROM FILENAMES
```
|
A union should do:
```
SELECT FILENAME
FROM FILENAMES
WHERE FLAG = 0
UNION
SELECT FILENAME + '_part_1'
FROM FILENAMES
WHERE FLAG = 1
UNION
SELECT FILENAME + '_part_2'
FROM FILENAMES
WHERE FLAG = 1
UNION
SELECT FILENAME + '_part_3'
FROM FILENAMES
WHERE FLAG = 1
UNION
SELECT FILENAME + '_part_4'
FROM FILENAMES
WHERE FLAG = 1
```
|
Try this; it should work in SQL Server 2000:
```
SELECT filename + '_part' + num
FROM Yourtable
CROSS JOIN (SELECT '_1' num
UNION ALL
SELECT '_2'
UNION ALL
SELECT '_3'
UNION ALL
SELECT '_4') A
WHERE flag = 1
UNION ALL
SELECT filename
FROM Yourtable B
WHERE NOT EXISTS (SELECT 1
FROM Yourtable A
WHERE a.filename = b.filename
AND a.flag = 1)
```
[**SQLFIDDLE DEMO**](http://sqlfiddle.com/#!3/d24c0b/1)
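The cross-join-a-numbers-table idea can be sketched generically (sqlite3 standing in for SQL Server 2000; `||` replaces `+` for concatenation, and `UNION` handles the deduplication the question's `DISTINCT` was reaching for):

```python
import sqlite3

# Sample data: fileA is flagged (and duplicated), fileB/fileC are not.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE filenames (filename TEXT, flag INTEGER);
INSERT INTO filenames VALUES
 ('fileA', 1), ('fileB', 0), ('fileC', 0), ('fileA', 1);
""")

rows = conn.execute("""
SELECT filename || '_part_' || n.i AS name
FROM filenames
CROSS JOIN (SELECT 1 AS i UNION ALL SELECT 2
            UNION ALL SELECT 3 UNION ALL SELECT 4) AS n
WHERE flag = 1
UNION                              -- UNION (not UNION ALL) dedupes
SELECT filename FROM filenames WHERE flag = 0
ORDER BY 1
""").fetchall()
print(rows)
```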
|
Single conditional select query in SQL
|
[
"",
"sql",
"sql-server",
"sql-server-2000",
""
] |
I have the following query:
```
select *
from assignments dah, employees emp
where
dah.person_id=emp.person_id(+)
and
(dah.effective_end_date between emp.date_from(+) and emp.date_to(+)
and dah.effective_end_date between emp.valid_from(+) and emp.valid_to(+))
or
(dah.effective_start_date between emp.date_from(+) and emp.date_to(+)
and dah.effective_start_date between emp.valid_from(+) and emp.valid_to(+))
```
I get the following message: 'outer join operator (+) not allowed in operand of OR or IN'. I know that using 2 unions with inner joins is a solution, but I can't use it because I actually have a lot of code (the code I provided is just an example).
Edit: I need this done via Oracle syntax, because I work with data warehousing and our ETL doesn't fully support explicit join syntax. Maybe there is something I am not seeing and this can be written differently?
Edit 2: Maybe the date-overlap logic can somehow be implemented without using OR, in Oracle syntax?
|
Since you have to use the old-style outer join syntax, here's one way (simplified, since you didn't supply us with sample data and/or table creation scripts):
```
with assignments as (select 1 assignment_id, 1 person_id, to_date('01/08/2015', 'dd/mm/yyyy') start_date, to_date('03/08/2015', 'dd/mm/yyyy') end_date from dual union all
select 2 assignment_id, 1 person_id, to_date('02/08/2015', 'dd/mm/yyyy') start_date, to_date('04/08/2015', 'dd/mm/yyyy') end_date from dual union all
select 3 assignment_id, 1 person_id, to_date('06/08/2015', 'dd/mm/yyyy') start_date, to_date('10/08/2015', 'dd/mm/yyyy') end_date from dual union all
select 4 assignment_id, 2 person_id, to_date('02/08/2015', 'dd/mm/yyyy') start_date, to_date('03/08/2015', 'dd/mm/yyyy') end_date from dual),
employees as (select 1 person_id, to_date('01/08/2015', 'dd/mm/yyyy') start_date, to_date('03/08/2015', 'dd/mm/yyyy') end_date from dual union all
select 3 person_id, to_date('01/08/2015', 'dd/mm/yyyy') start_date, to_date('03/08/2015', 'dd/mm/yyyy') end_date from dual)
select *
from assignments dah,
employees emp
where dah.person_id = emp.person_id (+)
and dah.start_date <= emp.end_date (+)
and dah.end_date >= emp.start_date (+);
ASSIGNMENT_ID PERSON_ID START_DATE END_DATE PERSON_ID_1 START_DATE_1 END_DATE_1
------------- ---------- ---------- ---------- ----------- ------------ ----------
2 1 02/08/2015 04/08/2015 1 01/08/2015 03/08/2015
1 1 01/08/2015 03/08/2015 1 01/08/2015 03/08/2015
3 1 06/08/2015 10/08/2015
4 2 02/08/2015 03/08/2015
```
Are you sure you got your outer joins the right way round? Are you sure you're not actually after the following instead?:
```
with assignments as (select 1 assignment_id, 1 person_id, to_date('01/08/2015', 'dd/mm/yyyy') start_date, to_date('03/08/2015', 'dd/mm/yyyy') end_date from dual union all
select 2 assignment_id, 1 person_id, to_date('02/08/2015', 'dd/mm/yyyy') start_date, to_date('04/08/2015', 'dd/mm/yyyy') end_date from dual union all
select 3 assignment_id, 1 person_id, to_date('06/08/2015', 'dd/mm/yyyy') start_date, to_date('10/08/2015', 'dd/mm/yyyy') end_date from dual union all
select 4 assignment_id, 2 person_id, to_date('02/08/2015', 'dd/mm/yyyy') start_date, to_date('03/08/2015', 'dd/mm/yyyy') end_date from dual),
employees as (select 1 person_id, to_date('01/08/2015', 'dd/mm/yyyy') start_date, to_date('03/08/2015', 'dd/mm/yyyy') end_date from dual union all
select 3 person_id, to_date('01/08/2015', 'dd/mm/yyyy') start_date, to_date('03/08/2015', 'dd/mm/yyyy') end_date from dual)
select *
from assignments dah,
employees emp
where dah.person_id (+) = emp.person_id
and dah.start_date (+) <= emp.end_date
and dah.end_date (+) >= emp.start_date;
ASSIGNMENT_ID PERSON_ID START_DATE END_DATE PERSON_ID_1 START_DATE_1 END_DATE_1
------------- ---------- ---------- ---------- ----------- ------------ ----------
1 1 01/08/2015 03/08/2015 1 01/08/2015 03/08/2015
2 1 02/08/2015 04/08/2015 1 01/08/2015 03/08/2015
3 01/08/2015 03/08/2015
```
|
Use explicit `left join` syntax:
```
select *
from employees emp left join
assignments dah
on dah.person_id = emp.person_id and
((dah.effective_end_date between emp.date_from and emp.date_to and
dah.effective_end_date between emp.valid_from and emp.valid_to
) or
(dah.effective_start_date between emp.date_from and emp.date_to and
dah.effective_start_date between emp.valid_from and emp.valid_to
)
);
```
A simple rule is never to use a comma in the `from` clause. Always use explicit `join` syntax.
Note: Technically, your outer join syntax would have the tables in the *other* order:
```
from assignments dah left join
employees emp
on . . .
```
I swapped them on purpose. The `left join` keeps all rows in the first table, even those with no matches. The `+` syntax is harder to follow: the `+` goes on the side that would get the `NULL` values. However, to me it seems less likely that the unmatched rows are in the assignments table.
If you have proper foreign key relationships, then all the assignments should have a correct person. I may not understand your data, however, and you might want to reverse the tables for what you are really trying to do.
EDIT:
As for *overlaps*, I would be inclined to use the simpler:
```
on dah.person_id = emp.person_id and
(dah.effective_end_date >= emp.date_from and
dah.effective_start_date <= emp.date_to
)
```
You can even write this using the archaic `+` notation, if you like. Also note: these do not do exactly the same things. This will detect overlaps where one period is entirely embedded in another period.
|
SQL Alternative for 'OR' in where clause when using outer join
|
[
"",
"sql",
"oracle",
"select",
"join",
""
] |
I have a problem with splitting 0.00xxx float values between numbers.
Here is an example of the input [data](http://sqlfiddle.com/#!4/d03a0).
Row 0 is the sum of the float numbers in rows 1-3.
As a result I want to see rounded numbers, without losing the sum of rows 1-3:
IN:
```
0 313.726
1 216.412
2 48.659
3 48.655
```
---
OUT:
```
0 313.73
1 216.41
2 48.66
3 48.66
```
How it should work:
The idea is to split the smallest remainder (in our example it's 0.002 from the value 216.412) between the largest ones: 0.001 to 48.659 = 48.66 and 0.001 to 48.655 = 48.656. After this we can round the numbers without losing data.
---
After sitting on this problem yesterday, I found a solution. I think the query should look like this:
```
select test.*,
sum(value - trunc(value, 2)) over (partition by case when id = 0 then 0 else 1 end) part,
row_number() over(partition by case when id = 0 then 0 else 1 end order by value - trunc(value, 2) desc) rn,
case when row_number() over(partition by case when id = 0 then 0 else 1 end order by value - trunc(value, 2) desc) / 100 <=
round(sum(value - trunc(value, 2)) over (partition by case when id = 0 then 0 else 1 end), 2) then trunc(value, 2) + 0.01 else trunc(value, 2) end result
from test;
```
But it still seems strange to me to add the constant value "0.01" while getting the result.
---
**Any ideas to improve this query?**
|
A pragmatic solution based on the following rules:
1) Check the difference between the rounded sum and the sum of the rounded parts:
```
select round(sum(value),2) - sum(round(value,2)) from test where id != 0;
```
2) Apply this difference.
E.g. if you get 0.01, it means *one* rounded part must be *increased by 0.01*;
if you get -0.02, it means *two* rounded parts must be *decreased by 0.01*.
The query below simply corrects the last N parts:
```
with diff as (
select round(sum(value),2) - sum(round(value,2)) diff from test where id != 0
), diff_values as
(select sign(diff)*.01 diff_value, abs(100*diff) corr_cnt
from diff)
select id, round(value,2)
+ case when row_number() over (order by id desc) <= corr_cnt then diff_value else 0 end result
from test, diff_values where id != 0
order by id;
ID RESULT
---------- ----------
1 216,41
2 48,66
3 48,66
```
If the number of corrected records is much higher than two, check the data and the rounding precision.
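Outside SQL, this "hand the leftover hundredths to the parts with the biggest remainders" approach is known as largest-remainder rounding. A hedged pure-Python sketch (my own helper, assuming non-negative inputs so the shortfall is never negative):

```python
def round_preserving_sum(values, places=2):
    """Largest-remainder rounding: floor each scaled value, then hand
    the missing hundredths to the entries with the largest fractional
    remainders, so the rounded parts still add up to the rounded total.
    Assumes non-negative inputs (shortfall >= 0)."""
    scale = 10 ** places
    scaled = [v * scale for v in values]
    floors = [int(s) for s in scaled]
    target = round(sum(values) * scale)      # rounded total, in hundredths
    shortfall = target - sum(floors)         # how many +0.01 to hand out
    order = sorted(range(len(values)),
                   key=lambda i: scaled[i] - floors[i], reverse=True)
    for i in order[:shortfall]:
        floors[i] += 1
    return [f / scale for f in floors]

parts = round_preserving_sum([216.412, 48.659, 48.655])
print(parts)  # [216.41, 48.66, 48.66]
```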
|
```
select id, value,
case when id <> max(id) over () then round(value, 2)
else round(value, 2) - sum(round(value, 2)) over () +
round(first_value(value) over (order by id), 2) * 2
end val_rnd
from test
```
Output:
```
ID VALUE VAL_RND
------ ---------- ----------
0 313.726 313.73
1 216.413 216.41
2 48.659 48.66
3 48.654 48.66
```
The above query works, but it moves the entire difference to the last row. This is not "fair", and maybe not what you are after in other scenarios.
The most "unfair" behavior is observable with a big number of values that are all equal to `0.005`.
To distribute the difference fully you need to:
* sum all original values in the sub-rows and subtract the rounded total value from the row with id 0,
* use `row_number()` to sort the sub-rows in order of the difference between the rounded value and the original value (maybe descending; it depends on the sign of the difference, so use `sign()` and `abs()`),
* assign each row its value increased by .01 (or decreased, if the difference < 0) until you have handed out difference/.01 corrections (use `case when`),
* union the row with id = 0 containing the rounded sum,
* optionally sort the results.
It's hard (but achievable) in one query. An alternative is a PL/SQL procedure or function, which might be more readable.
|
Split float between list of numbers
|
[
"",
"sql",
"oracle",
""
] |
I have a simple table and I need to identify groups of four rows (the groups aren't consecutive), where the id values within each group increase by 1. For example:
```
----------------------
| language | id |
----------------------
| C | 16 |
| C++ | 17 |
| Java | 18 |
| Python | 19 |
| HTML | 65 |
| JavaScript | 66 |
| PHP | 67 |
| Perl | 68 |
----------------------
```
I want to add a column that indicates the group or set. How is it possible to get this output using MySQL?
```
----------------------------
| language | id | set |
----------------------------
| C | 16 | 1 |
| C++ | 17 | 1 |
| Java | 18 | 1 |
| Python | 19 | 1 |
| HTML | 65 | 2 |
| JavaScript | 66 | 2 |
| PHP | 67 | 2 |
| Perl | 68 | 2 |
----------------------------
```
Note that in this example there are only 2 sets (there could be 1 or more), and they don't have to start at 16 (such values are not known in advance; the restriction is that the id values of each group have the form n, n+1, n+2 and n+3).
I've been investigating about **Gaps & Islands problem** but didn't figure how to solve it by using their solutions. Also I search on stackoverflow but the closest question that I found was [How to find gaps in sequential numbering in mysql?](https://stackoverflow.com/questions/4340793/how-to-find-gaps-in-sequential-numbering-in-mysql)
Thanks
|
```
select language,
@n:=if(@m+1=id, @n, @n+1) `set`,
(@m:=id) id
from t1,
(select @n:=0) n,
(select @m:=0) m
```
[**Demo on sqlfiddle**](http://sqlfiddle.com/#!9/99105d/1)
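The user-variable trick above keeps a running comparison between each id and the previous one. The same logic restated in plain Python (function name is illustrative) may be easier to follow:

```python
def label_sets(rows):
    # rows: (language, id) pairs already ordered by id;
    # start a new set whenever the id is not the previous id + 1
    out, set_no, last_id = [], 0, None
    for lang, id_ in rows:
        if last_id is None or id_ != last_id + 1:
            set_no += 1          # gap found: start a new island
        out.append((lang, id_, set_no))
        last_id = id_
    return out

data = [("C", 16), ("C++", 17), ("Java", 18), ("Python", 19),
        ("HTML", 65), ("JavaScript", 66), ("PHP", 67), ("Perl", 68)]
for row in label_sets(data):
    print(row)   # ('C', 16, 1) ... ('Perl', 68, 2)
```

Note that the SQL version depends on `@m` and `@n` being evaluated per row in a fixed order, which MySQL's documentation describes as undefined behaviour for user variables in `SELECT`, so it should be used with care.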
|
```
SELECT language,id,g
FROM (
SELECT language,id,
CASE WHEN id=@lastid+1 THEN @n ELSE @n:=@n+1 END AS g,
@lastid := id As b
FROM
t, (SELECT @n:=0) r
ORDER BY
id
) s
```
EDIT
In case you want just 4 per group add a row number variable:
```
SELECT language,id,g,rn
FROM (
SELECT language,id,
CASE WHEN id=@lastid+1 THEN @n ELSE @n:=@n+1 END AS g,
@rn := IF(@lastid+1 = id, @rn + 1, 1) AS rn,
@lastid := id As dt
FROM
t, (SELECT @n:=0) r
ORDER BY
id
) s
Where rn <=4
```
[FIDDLE](http://sqlfiddle.com/#!2/36738/2)
|
Group consecutively values in MySQL and add an id to such groups
|
[
"",
"mysql",
"sql",
"database",
"gaps-and-islands",
""
] |
I'm having trouble to understand how to nest case statements properly.
(MSSQL Server 2012)
Let's have the following table given.
The Column StatusMissing is what I want to create
```
+------+--+------+--+------+--+------+--+------+--+------+--+---------------+
| a1 | | a2 | | a3 | | b1 | | c1 | | d2 | | StatusMissing |
+------+--+------+--+------+--+------+--+------+--+------+--+---------------+
| OK | | OK | | OK | | OK | | OK | | OK | | AllOK |
| NULL | | NULL | | OK | | OK | | OK | | OK | | As |
| OK | | NULL | | OK | | OK | | OK | | OK | | As |
| OK | | OK | | NULL | | OK | | OK | | OK | | As |
| OK | | OK | | OK | | NULL | | OK | | OK | | B |
| OK | | OK | | OK | | OK | | NULL | | OK | | C |
| OK | | OK | | OK | | OK | | OK | | NULL | | D |
| NULL | | OK | | OK | | NULL | | NULL | | OK | | ABC |
| NULL | | OK | | OK | | OK | | NULL | | NULL | | ACD |
| NULL | | OK | | OK | | NULL | | OK | | NULL | | ABD |
| NULL | | OK | | OK | | NULL | | NULL | | NULL | | ABCD |
| NULL | | OK | | OK | | OK | | NULL | | NULL | | ACD |
| OK | | OK | | OK | | NULL | | NULL | | OK | | BC |
| OK | | OK | | OK | | OK | | OK | | OK | | AllOK |
| OK | | NULL | | OK | | OK | | NULL | | OK | | AC |
| OK | | OK | | OK | | NULL | | OK | | NULL | | BD |
| OK | | OK | | OK | | OK | | NULL | | NULL | | CD |
+------+--+------+--+------+--+------+--+------+--+------+--+---------------+
```
First, to understand the concept of nesting I simplified the table:
```
+------+--+------+--+------+
| a1 | | a2 | | b1 |
+------+--+------+--+------+
| OK | | OK | | OK |
| OK | | OK | | NULL |
| OK | | NULL | | OK |
| NULL | | OK | | OK |
| NULL | | NULL | | OK |
| NULL | | OK | | NULL |
| OK | | NULL | | NULL |
+------+--+------+--+------+
```
These attempts lead to these failures.
Query1
```
SELECT a1, a2, b1 'StatusMissing' =
CASE
WHEN a1 IS NULL
THEN
CASE
WHEN a1 IS NULL
THEN
CASE
WHEN b1 IS NULL
THEN 'AB'
END
ELSE 'A'
END
WHEN b1 IS NULL
THEN 'B'
ELSE 'AllOK'
END
FROM Table;
```
Result1:
```
+------+--+------+--+------+--+---------------+
| a1 | | a2 | | b1 | | StatusMissing |
+------+--+------+--+------+--+---------------+
| OK | | OK | | OK | | AllOK |
| OK | | OK | | NULL | | B |
| OK | | NULL | | OK | | AllOK |
| NULL | | OK | | OK | | NULL |
| NULL | | NULL | | OK | | NULL |
| NULL | | OK | | NULL | | AB |
| OK | | NULL | | NULL | | B |
+------+--+------+--+------+--+---------------+
```
Query2 (Else as main)
```
SELECT a1, a2, b1, 'Status' =
CASE
WHEN a1 IS NOT NULL AND a2 IS NOT NULL AND b1 IS NOT NULL
THEN 'AllOK!'
ELSE
CASE
WHEN a2 IS NOT NULL OR a2 IS NOT NULL
THEN
CASE
WHEN b1 IS NULL
THEN 'AB'
END
WHEN b1 IS NULL
THEN 'B'
ELSE 'A'
END
END
FROM Table;
```
Result2
```
+------+--+------+--+------+--+---------------+
| a1 | | a2 | | b1 | | StatusMissing |
+------+--+------+--+------+--+---------------+
| OK | | OK | | OK | | AllOK |
| OK | | OK | | NULL | | AB |
| OK | | NULL | | OK | | A |
| NULL | | OK | | OK | | NULL |
| NULL | | NULL | | OK | | A |
| NULL | | OK | | NULL | | AB |
| OK | | NULL | | NULL | | B |
+------+--+------+--+------+--+---------------+
```
What the hell am I doing wrong?
I'm quite new to SQL, so if there is a proper function to do this I would appreciate the info!
EDIT:
If something like this would be possible in SQL i mean:
Column StatusMissing = ' missing'
If(a1 == NULL) { StatusMissing += 'A'}
EDIT2:
The column StatusMissing IS NOT THERE!
I want to create it using the SQL statements like below.
SELECT .... Status =
So basically I only have A1, A2 and B1 (in the simple table). Please don't get confused by the first table; it's only there to show how the result should look.
|
I might suggest that you make two small modifications to your output:
* Instead of "As", just say "A".
* Instead of "AllOK", just leave the field blank.
With these modifications, the rules are pretty easy:
```
select t.*,
((case when a1 is null or a2 is null or a3 is null then 'A' else '' end) +
(case when b1 is null then 'B' else '' end) +
(case when c1 is null then 'C' else '' end) +
      (case when d2 is null then 'D' else '' end)
) as StatusMissing
from table t;
```
If you do want your version, a subquery is perhaps the easiest way:
```
select t. . . .,
(case when StatusMissing = '' then 'AllOK'
when StatusMissing = 'A' then 'As'
else StatusMissing
end) as StatusMissing
from (select t.*,
((case when a1 is null or a2 is null or a3 is null then 'A' else '' end) +
(case when b1 is null then 'B' else '' end) +
(case when c1 is null then 'C' else '' end) +
           (case when d2 is null then 'D' else '' end)
) as StatusMissing
from table t
) t
```
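The concatenate-then-remap idea in this answer can be restated in plain Python, which may make the control flow easier to see (the dictionary layout is an assumption for illustration; `None` plays the role of SQL NULL):

```python
def status_missing(row):
    # row maps column name -> value; None marks a missing status
    s = ""
    if row["a1"] is None or row["a2"] is None or row["a3"] is None:
        s += "A"
    if row["b1"] is None:
        s += "B"
    if row["c1"] is None:
        s += "C"
    if row["d2"] is None:
        s += "D"
    # the outer-query remapping from the answer:
    if s == "":
        return "AllOK"
    if s == "A":
        return "As"
    return s

ok = {"a1": "OK", "a2": "OK", "a3": "OK", "b1": "OK", "c1": "OK", "d2": "OK"}
print(status_missing(ok))                                          # AllOK
print(status_missing({**ok, "a1": None}))                          # As
print(status_missing({**ok, "a1": None, "b1": None, "c1": None}))  # ABC
```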
|
For the simplified table, assuming data type to be `nvarchar`.
Try using `UPDATE`-
```
UPDATE [dbo].[StatusMissing]
SET result='';
UPDATE [dbo].[StatusMissing]
SET result= CONCAT(result , 'A')
WHERE a1 is null or a2 is null;
UPDATE [dbo].[StatusMissing]
SET result= CONCAT(result , 'B')
WHERE b1 is null ;
UPDATE [dbo].[StatusMissing]
SET result= 'AllOK'
WHERE result ='';
```
This can be done in one step as well.
|
Nested case with multiple sub conditions
|
[
"",
"sql",
"sql-server",
"case",
"case-statement",
"multiple-conditions",
""
] |
I'm creating a table using this script.
```
create table TimeFrames(
Id int identity,
Name varchar(100),
Active bit
)
```
Before, I also altered the table by adding a primary key and an index. However, this time I forgot to do the latter and, since I experience no difference in functionality, I wonder if the index is only there for performance reasons.
```
alter table TimeFrames add constraint PkTimeFrames primary key (Id)
create clustered index IxTimeFramesId on dbo.TimeFrames(Id)
```
Are there other considerations regarding the index?
|
You did create the primary key, right?
Indexes are there for performance reasons, unless they are also unique. In that case, they require each record to have a different value/set of values for the affected columns.
A clustered index will also determine how the rows are physically stored in the database, but that probably also falls under "performance".
|
Indexes exist almost exclusively for performance reasons. The one major exception that I can readily think of are unique indexes. These are also used to enforce `unique` constraints.
There is also the issue that a `foreign key` constraint requires a unique or primary key constraint in the referenced table.
|
The impact of not creating indexes
|
[
"",
"sql",
"sql-server",
""
] |
Our development database gets reset weekly. I find myself spending an hour or so on Monday mornings (like today) inserting all of my test data into the various tables and databases from individual scripts.
First, of course, I have to verify that these tables do not have my data:
```
SELECT Count(*) FROM Table1 WHERE INVLINK LIKE '190701050630%'
```
If no rows are returned, I can insert my records:
```
INSERT INTO Table1
( UPC, CATALOG, VENDOR, ADD_DATE, INVLINK, WHO_ADDED, TYPE, VENDOR_ID)
VALUES
('011010017760', 'LOVE', 'TES', '20150609', '19070105063041', 9388, 'P', '1013'),
('011010017760', 'HATE', 'TES', '20150609', '19070105063042', 9388, 'P', '1013'),
('011010017760', 'FEAR', 'TES', '20150609', '19070105063043', 9388, 'P', '1013')
```
What I would like to know is if there is a syntax that would allow me to call the insert command when the count is zero.
Here is what I've tried, **but this will not execute with an SQL call**.
```
SELECT CASE WHEN SELECT Count(*) = 0
THEN
INSERT INTO TBLBARTRANS
( UPC, CATALOG, VENDOR, ADD_DATE, INVLINK, WHO_ADDED, TYPE, VENDOR_ID)
VALUES
('011010017760', 'LOVE', 'TES', '20150609', '19070105063041', 9388, 'P', '1013'),
('011010017760', 'HATE', 'TES', '20150609', '19070105063042', 9388, 'P', '1013'),
('011010017760', 'FEAR', 'TES', '20150609', '19070105063043', 9388, 'P', '1013')
END
FROM Table1
WHERE INVLINK LIKE '190701050630%'
```
I can call the INSERT, SELECT, and UPDATE commands, but I am not a Database Administrator. So, creating a stored procedure to run these 15 to 20 scripts is not an option for me.
|
You need to change that so that you only "select" those values that aren't there:
```
INSERT INTO TBLBARTRANS
(UPC, CATALOG, VENDOR, ADD_DATE, INVLINK, WHO_ADDED, TYPE, VENDOR_ID)
select *
from (
values
('011010017760', 'LOVE', 'TES', '20150609', '19070105063041', 9388, 'P', '1013'),
('011010017760', 'HATE', 'TES', '20150609', '19070105063042', 9388, 'P', '1013'),
('011010017760', 'FEAR', 'TES', '20150609', '19070105063043', 9388, 'P', '1013')
) as t (UPC, CATALOG, VENDOR, ADD_DATE, INVLINK, WHO_ADDED, TYPE, VENDOR_ID)
where not exists (select 1
from TBLBARTRANS tbl
where tbl.invlink = t.invlink);
```
The inner most select creates a "virtual" table that contains the values you want to insert:
```
select *
from (
values
('011010017760', 'LOVE', 'TES', '20150609', '19070105063041', 9388, 'P', '1013'),
('011010017760', 'HATE', 'TES', '20150609', '19070105063042', 9388, 'P', '1013'),
('011010017760', 'FEAR', 'TES', '20150609', '19070105063043', 9388, 'P', '1013')
) as t(UPC, CATALOG, VENDOR, ADD_DATE, INVLINK, WHO_ADDED, TYPE, VENDOR_ID)
```
the above simply "simulates" a source table for the values you want to insert. The condition
```
where not exists (select 1
from TBLBARTRANS tbl
where tbl.invlink = t.invlink);
```
will then only return those rows from the "virtual" table that do not yet exist in the table `TBLBARTRANS`. The result of that select statement will then be inserted into the target table.
I tested this on a DB2 LUW - not sure if *all* DB2 versions support the `values()` clause as I have used it.
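To sanity-check the `NOT EXISTS` pattern, here is the same idea run against SQLite through Python (not DB2, and with only two of the columns, so a per-row `SELECT ... WHERE NOT EXISTS` stands in for the `values()`-as-table construct above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblbartrans (upc TEXT, invlink TEXT PRIMARY KEY)")
conn.execute("INSERT INTO tblbartrans VALUES ('011010017760', '19070105063041')")

rows = [("011010017760", "19070105063041"),   # already present: skipped
        ("011010017760", "19070105063042")]   # new: inserted
conn.executemany("""
    INSERT INTO tblbartrans (upc, invlink)
    SELECT ?, ?
    WHERE NOT EXISTS (SELECT 1 FROM tblbartrans WHERE invlink = ?)
""", [(u, i, i) for u, i in rows])

count = conn.execute("SELECT count(*) FROM tblbartrans").fetchone()[0]
print(count)  # 2
```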
---
DB2's `MERGE` as suggested by Henrik is an alternative:
```
merge into TBLBARTRANS tg
using table (
values
('011010017760', 'LOVE', 'TES', '20150609', '19070105063041', 9388, 'P', '1013'),
('011010017760', 'HATE', 'TES', '20150609', '19070105063042', 9388, 'P', '1013'),
('011010017760', 'FEAR', 'TES', '20150609', '19070105063043', 9388, 'P', '1013')
) t (UPC, CATALOG, VENDOR, ADD_DATE, INVLINK, WHO_ADDED, TYPE, VENDOR_ID) on (t.INVLINK = tg.invlink)
when not matched then
insert (UPC, CATALOG, VENDOR, ADD_DATE, INVLINK, WHO_ADDED, TYPE, VENDOR_ID)
values (t.UPC, t.CATALOG, t.VENDOR, t.ADD_DATE, t.INVLINK, t.WHO_ADDED, t.TYPE, t.VENDOR_ID);
```
---
Unrelated, but:
Is `add_date` a `date` or `varchar` column? If it's a `varchar` column, then you should change that. Storing dates as strings is almost always a very bad idea.
The same is true for `VENDOR_ID`: you supply string value to that column which looks very much like number. Storing numbers in varchar columns is also almost always a very bad idea.
|
In addition to what others have said, you might consider moving your data to csv files and load/ingest those instead of inserting them inside an sql statement.
<https://www-01.ibm.com/support/knowledgecenter/SSEPGG_10.1.0/com.ibm.db2.luw.admin.cmd.doc/doc/r0057198.html?cp=SSEPGG_10.1.0%2F3-6-2-4-59>
<https://www-01.ibm.com/support/knowledgecenter/SSEPGG_10.1.0/com.ibm.db2.luw.admin.cmd.doc/doc/r0008305.html?cp=SSEPGG_10.1.0%2F3-6-2-4-83>
Load will reject those rows that violates the primary key, and load the rest. Ingest gives you more control and you can even use a merge statement as described above.
Even if you don't want to use any of these utilities, you might want to consider keeping the data separate from the actual command. You could for example create a utility script that loops over a number of tables, reads that data file for the table and constructs the insert statement dynamically. All the information needed is in the catalog.
EDIT:
Yet another idea is to create backup tables that contain the data that you want to add to the real tables. I assume that by reset you mean that a backup is restored. If the backup contains the backup tables you can insert, merge, load from cursor from these tables every monday. Drawback is that you have to take a new backup when the data changes.
|
IBM DB2 INSERT when Count(*) = 0
|
[
"",
"sql",
"db2",
""
] |
```
create table t1 (col1 int);
```
Leave the table empty.
```
select isnull(col1,0) from t1;
```
I want to replace null with zeros in the above case. However, `ISNULL()` does not work if there are no records in the table. How can I get around this?
|
Maybe you can first test if the table is empty:
```
IF NOT EXISTS (select * from t1)
SELECT 0
ELSE select coalesce(col1,0) from t1;
```
Anyway, you should use COALESCE instead of ISNULL because it is standard SQL.
|
If you also want to handle the case where the table returns no row at all, surround the select statement with another null check, like this:
```
SELECT ISNULL((SELECT ISNULL(col1, 0) FROM t1), 0)
```
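The difference between the per-row form and the scalar-subquery form is easy to demonstrate with SQLite through Python (not SQL Server, but the behaviour is the same on an empty table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (col1 INTEGER)")
# The per-row form returns no rows at all on an empty table:
empty = conn.execute("SELECT coalesce(col1, 0) FROM t1").fetchall()
print(empty)  # []
# Wrapping the query as a scalar subquery yields NULL, which the
# outer coalesce turns into 0:
value = conn.execute("SELECT coalesce((SELECT col1 FROM t1), 0)").fetchone()[0]
print(value)  # 0
```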
|
Handle missing data in SQL Server 2012 SELECT statement
|
[
"",
"sql",
"sql-server",
"t-sql",
"null",
""
] |
I have three tables, all with the same fields as in the example below:
Table:
```
dog
-------------
name, date
```
Table :
```
cat
-------------
name, date
```
Table:
```
animal
-------------
name, date
```
How can I transfer the dog and cat data into the animal table? I tried `SELECT INTO` but could not do it with two tables.
```
Table value:
CAT
name date
Garfield 2015-08-03
DOG
name date
Spike 2015-08-03
```
Source:
```
insert into animal values ((select * from cat,dog))
```
Expected result
```
ANIMAL
name date
Garfield 2015-08-03
Spike 2015-08-03
```
|
Try this:
```
insert into animal
select name, date from dog
union all
select name, date from cat
```
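Here is the `UNION ALL` approach run end to end against SQLite through Python, with the sample data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dog (name TEXT, date TEXT);
    CREATE TABLE cat (name TEXT, date TEXT);
    CREATE TABLE animal (name TEXT, date TEXT);
    INSERT INTO cat VALUES ('Garfield', '2015-08-03');
    INSERT INTO dog VALUES ('Spike', '2015-08-03');
    -- UNION ALL appends the rows of one SELECT after the other,
    -- which is exactly the "stack two tables" behaviour wanted here
    INSERT INTO animal
    SELECT name, date FROM dog
    UNION ALL
    SELECT name, date FROM cat;
""")
print(conn.execute("SELECT name FROM animal ORDER BY name").fetchall())
# [('Garfield',), ('Spike',)]
```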
|
It can be done by executing a query with a join operation. Tables 1, 2 and 3 all have the same attribute, so just join them on it: `table1.fieldname JOIN table2.fieldname JOIN table3.fieldname`.
|
How can I insert data from two tables into one?
|
[
"",
"sql",
"sql-server",
""
] |
I have a column of type `number`. It has both positive and negative values. I need to get 4 values : Positive Maximum,Positive Minimum, Negative Maximum,Negative Minimum:
a)To get Positive Maximum : I can use [Check out the Fiddle](http://sqlfiddle.com/#!4/6aa70/4)
```
select max(cola) from test;
```
b)To get Negative Minimum: I can use [Check out the Fiddle](http://sqlfiddle.com/#!4/6aa70/4)
```
select min(cola) from test;
```
I have two question here:
1) Now I'm not sure how to get the other two values. Please guide me.
2) While trying this I ran into another doubt. When I have a column of type `varchar2` that holds numbers as values and I perform the above operations on it, the positive maximum is the same as above, but the negative minimum is quite weird. [Check Fiddle Here](http://sqlfiddle.com/#!4/6aa70/5). Why is no proper implicit conversion taking place here? Can somebody please explain the reason behind this?
|
For question 1, you can easily use case to determine which values you do the min/max on. E.g.:
```
select max(case when cola >= 0 then cola end) max_positive,
min(case when cola >= 0 then cola end) min_positive,
max(case when cola < 0 then cola end) max_negative,
min(case when cola < 0 then cola end) min_negative
from test;
```
For question 2, when you do the min/max on something that's a varchar, you're going to be doing string comparisons, *NOT* number comparisons. You have to explicitly convert the values to numbers, since Oracle doesn't know that you expected an implicit conversion to take place. And you shouldn't really rely on implicit conversions anyway. E.g.:
```
select max(case when to_number(cola) >= 0 then to_number(cola) end) max_positive,
min(case when to_number(cola) >= 0 then to_number(cola) end) min_positive,
max(case when to_number(cola) < 0 then to_number(cola) end) max_negative,
min(case when to_number(cola) < 0 then to_number(cola) end) min_negative
from test1;
```
[Here's the SQLFiddle for both cases.](http://sqlfiddle.com/#!4/6aa70/9)
N.B. I've explicitly split out both negative and positive values (I stuck 0 in with the positive numbers; you'll have to decide how you want to treat rows with a value of 0!), just in case there were no negative numbers or no positive numbers.
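The conditional aggregates can be verified against the question's sample data using SQLite through Python (SQLite, like Oracle, returns NULL for a `CASE` with no matching branch, and min/max skip NULLs):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (cola INTEGER)")
conn.executemany("INSERT INTO test VALUES (?)",
                 [(1,), (50,), (-65,), (25,), (-2,), (-8,), (5,), (-11,)])
# Each aggregate only ever sees values of one sign, because the CASE
# hands it NULL for rows of the other sign.
row = conn.execute("""
    SELECT max(CASE WHEN cola >= 0 THEN cola END) AS max_pos,
           min(CASE WHEN cola >= 0 THEN cola END) AS min_pos,
           max(CASE WHEN cola < 0 THEN cola END) AS max_neg,
           min(CASE WHEN cola < 0 THEN cola END) AS min_neg
    FROM test
""").fetchone()
print(row)  # (50, 1, -2, -65)
```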
|
Use case expressions within the aggregation functions, e.g.:
max(case when cola < 0 then cola end) max\_neg
min(cola) min\_neg -- no need for a case expression here
[SQL Fiddle](http://sqlfiddle.com/#!4/6aa70/10)
**Oracle 11g R2 Schema Setup**:
```
Create table test(COLA number);
Insert into test values(1);
Insert into test values(50);
Insert into test values(-65);
Insert into test values(25);
Insert into test values(-2);
Insert into test values(-8);
Insert into test values(5);
Insert into test values(-11);
Create table test1(COLA varchar2(10));
Insert into test1 values('1');
Insert into test1 values('50');
Insert into test1 values('-65');
Insert into test1 values('25');
Insert into test1 values('-2');
Insert into test1 values('-8');
Insert into test1 values('5');
Insert into test1 values('-11');
```
**Query 1**:
```
select
max(case when cola < 0 then cola end) max_neg_cola
, min(cola)
, min(case when cola > 0 then cola end) min_pos_cola
, max(cola)
from test
```
**[Results](http://sqlfiddle.com/#!4/6aa70/10/0)**:
```
| MAX_NEG_COLA | MIN(COLA) | MIN_POS_COLA | MAX(COLA) |
|--------------|-----------|--------------|-----------|
| -2 | -65 | 1 | 50 |
```
|
How to find Negative maximum and positive minimum in Oracle?
|
[
"",
"sql",
"oracle",
"oracle11g",
"max",
"min",
""
] |
Using [Remove part of string in table](https://stackoverflow.com/questions/23907373/remove-part-of-string-in-table) as an example, I want to replace part of the string in my database column with a different string.
Ex:
Database says `E:\websites\nas\globe.png` , `E:\websites\nas\apple.png` and etc
I want it to say `\\nas\globe.png`, `\\nas\apple.png`,
The only part I want to replace is `E:\websites\`, not the rest of the string.
How do I do this?
So far I have:
```
SELECT file_name,
REPLACE(file_name,'E:\websites\','\\nas\')
FROM t_class;
```
I just referenced <http://nntp-archive.sybase.com/nntp-archive/action/article/%3C348_1744DC78C1045E920059DE7F85256A8B.0037D71C85256A8B@webforums%3E>
and used:
```
SELECT REPLACE('E:\websites\web\Class\Main_Image\','E:\websites\web\Class\Main_Image\','\\nas\class_s\Main_Image\') "Changes"
FROM DUAL;
```
but once again it wouldn't change O.o
|
I found a reference at [how to replace string values of column values by SQL Stored Procedure](https://stackoverflow.com/questions/28297168/how-to-replace-string-values-of-column-values-by-sql-stored-procedure)
by doing the following:
```
UPDATE t_class SET file_name =
REPLACE
(file_name, 'E:\websites\web\Class\Main_Image\No_Image_Available.png', '\\nas\class_s\Main_Image\No_Image_Available.png');
```
so the only difference is the `UPDATE` and the `=` sign
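The `UPDATE ... REPLACE` pattern can be reproduced in SQLite through Python (path shortened for illustration; note that string literals here, as in Oracle, do not treat backslashes as escapes):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t_class (file_name TEXT)")
conn.execute(r"INSERT INTO t_class VALUES ('E:\websites\nas\globe.png')")
# replace() swaps every occurrence of the prefix in the stored value
conn.execute(r"UPDATE t_class SET file_name = replace(file_name, 'E:\websites\', '\\')")
result = conn.execute("SELECT file_name FROM t_class").fetchone()[0]
print(result)  # \\nas\globe.png
```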
|
In Oracle, you may need to double up on the back slashes:
```
SELECT file_name,
REPLACE(file_name,'E:\\websites\\', '\\\\nas\\')
FROM t_class;
```
|
Replace part of string in table column
|
[
"",
"sql",
"string",
"replace",
"oracle10g",
""
] |
I have a `users` table with columns
`id`, `username`, `password`
I also have an `orders` table with columns
`id`, `user_id`, `price`, `status`
In the `index` method of the `UsersController`, I would like to order my users based on the amount of purchases they have made. The total purchases of a user can be computed as the sum of the `price` of all orders whose `user_id` equals that user's `id` and whose `status` equals `4`.
The restriction on order status is there because users can cancel orders in my system, so I want to query only the orders that have not been canceled.
Basically, I'd like to get the list of all users ordered by the total amount of purchases they have made. I'd also like to paginate the results.
### Trials
I tried getting the users with the query below. I'm using `leftJoin` because not all users have made an order.
```
$users = User::leftJoin('orders', 'users.id', '=', 'orders.user_id')
->where('orders.status', '!=', 2)->where('orders.status', '!=', 5)
->groupBy('orders.user_id');
```
But I still get errors.
Based on my Models, I could get all the purchases of a particular user by using a query such as
```
User::find(1)->orders()->where('orders.status', '=', 4)->sum('orders.price')
```
Thanks for your help
|
After some trials I got a good and simple solution
```
$users = User::leftJoin('orders', 'orders.user_id', '=', 'users.id')
->select('users.*', DB::raw( 'sum( orders.price ) as `total_purchase`' ) )
->groupBy('users.id')
        ->orderBy('total_purchase', 'desc')
->paginate(10);
```
|
The pure SQL for this will look something like this:
```
SET @rank = 0;
SELECT
@rank := @rank + 1 AS Rank
,Username
FROM (
SELECT
U.username AS Username
FROM orders O
JOIN users U
ON U.id = O.user_id
WHERE O.status = 4
GROUP BY
O.user_id
ORDER BY
SUM(O.price) DESC
) X;
-- Reset Rank
SET @rank = 0;
```
|
How to order queries of one table based on the sum of a column in a related table
|
[
"",
"sql",
"laravel",
"laravel-4",
"eloquent",
""
] |
I have an Oracle database server running on a machine. Clients performs operations through a frontend application build from a different company, which generates corresponding SQL queries for these operations.
The frontend application generates SQL queries that we cannot modify. What I would like to know if there is any way to rewrite the SQL query upon its arrival. More specifically, we would like to be able to change tablespace names, default attribute values and most importantly compression parameters. For example, change this query:
```
CREATE TABLE EXAMPLE_TABLE (
ID INTEGER NOT NULL,
AMOUNT FLOAT(126) DEFAULT 0.0,
TAG VARCHAR2(50) DEFAULT ' '
)
TABLESPACE EXAMPLE_TABLESPACE NOCOMPRESS
```
to:
```
CREATE TABLE EXAMPLE_TABLE (
ID INTEGER NOT NULL,
AMOUNT FLOAT(126) DEFAULT 2.0,
TAG VARCHAR2(50) DEFAULT ' '
)
TABLESPACE EXAMPLE_TABLESPACE_TWO COMPRESS FOR OLTP
```
Note that the rewrites are not limited to create table statements, but can be applied to any SQL queries.
Any ideas about how to do this?
|
You might enjoy this Oracle Database 12c feature, the SQL Translation Framework. Designed for taking T-SQL to SQL when migrating applications from Sybase or SQL Server to Oracle, it can also be used to help with hard-coded vendor SQL you need to optimize/fix.
[Oracle Docs](https://docs.oracle.com/database/121/DRDAA/sql_transl_arch.htm)
[Blog Example](http://kerryosborne.oracle-guy.com/2013/07/sql-translation-framework/)
|
If I understood correctly, you want to do two things:
1. Change tablespace for this table
2. Change metadata from this table
You can do both easily.
1. `alter table [table_name] move [new_tablespace]; -- it must exists`
( check tablespace quota in this tb for this user )
2. `alter table [table_name] MODIFY(AMOUNT DEFAULT 2.0);`
|
Oracle SQL Query rewriting
|
[
"",
"sql",
"oracle",
""
] |
I hope that I can explain this right, because I'm not a native English speaker.
I have 2 tables- `Document` and `Vehicle`.
As you can see in the `SELECT` query, the columns `Id(Vehicle)` and `VehicleId(Document)` are common.
The columns `SeriePolita`,`NumarPolita`,`d.Status` are from the `Document` table and `v.CascoCategorieVehicul`,`v.NumarInmatriculare` from the `Vehicle` table.
This code was written some time ago (it works; there is no problem with the syntax) by someone who left the company, and my colleagues who use it weren't able to explain it to me.
My question is -
*Can someone explain, or post a link with some info about, `Vehicle v on v.Id = d.VehicleId`? What is the meaning of `Vehicle v`, and how is using `d.Status` different from `Status`, without the `d`?*
```
select
SeriePolita,
NumarPolita,
d.Status,
v.CascoCategorieVehicul,
v.NumarInmatriculare
from Document d
inner join Vehicle v on v.Id = d.VehicleId
where EndDate>'2015-06-30' and d.Class='CASCO' and Status in ('Polita', 'Anulat')
```
|
the code you have can be rewritten as follows:
```
select
d.SeriePolita,
d.NumarPolita,
d.Status,
v.CascoCategorieVehicul,
v.NumarInmatriculare
from Document AS d
inner join Vehicle AS v on v.Id = d.VehicleId
where EndDate>'2015-06-30'
and d.Class='CASCO'
and Status in ('Polita', 'Anulat');
```
mind the `AS` keyword between `Document` and `d`: it tells the system that the object `Document` will from now on be known as `d`.
The keyword is not mandatory and may safely be omitted, as in your code.
on [msdn](https://technet.microsoft.com/en-us/library/ms187455%28v=sql.105%29.aspx) you can find details about table aliasing.
|
`Vehicle v` is a reference to the `Vehicle` table by the alias `v`. The reason for doing this is so you don't have to type, for instance `inner join Vehicle on Vehicle.Id = Document.VehicleId` - it's shortened and more concise and you can refer to the alias in the `select` and `where` clause.
Now suppose that there is a `Status` column in both tables: without referring to it by either table name or alias, you would get an ambiguous column error, as the DB engine would not know which table's column you were referring to. If there is only one `Status` column, your query will run fine, although it is unclear which table the column actually belongs to!
See more on this in the documentation:
[Using Table Aliases](https://technet.microsoft.com/en-us/library/ms187455(v=sql.105).aspx)
|
How to explain "inner join Vehicle v on v.Id = d.VehicleId"?
|
[
"",
"sql",
"sql-server",
""
] |
Can someone tell me what I'm doing wrong, and whether I can get the expected result?
(Keep in mind this is a `VIEW`)
```
SELECT
[Id]
, [Nome]
, [Estado]
, (SELECT COUNT(EstProc) FROM LoginsImp AS LI WHERE (EstProc = 'A1.' OR EstProc = 'A2.') AND LI.LogImpFiles_Id = LIF.Id) AS ItemsProcessamento
, (SELECT COUNT(EstProc) FROM LoginsImp AS LI WHERE EstProc = 'A3.' AND LI.LogImpFiles_Id = LIF.Id) AS ItemsErroProcessamento
, (SELECT COUNT(EstProc) FROM LoginsImp AS LI WHERE (EstProc= 'A4' OR EstProc= 'A5') AND LI.LogImpFiles_Id= LIF.Id) AS ItemSucessoProcessamento
, SUM(ItemsErroProcessamento + ItemSucessoProcessamento) AS [ItemsProcessados]
, [CreatedOn]
, [CreatedBy]
FROM
[dbo].[LogImpFiles] AS LIF
GROUP BY
[Id], Nome, Estado, CreatedOn, CreatedBy
```
The result is this:
```
1 TesteImport1 6 2 3 0 2015-08-04 15:41:41.5130000 110032797
```
I was expecting something like this:
```
1 TesteImport1 6 2 3 **5** 2015-08-04 15:41:41.5130000 110032797
```
|
Use `JOIN` for your conditional aggregation instead of subquery. And as JamesZ pointed out, the `SUM` is not going to use the column alias.
```
SELECT
[Id]
, [Nome]
, [Estado]
, COUNT(CASE WHEN LI.EstProc = 'A1.' OR LI.EstProc = 'A2.' THEN LI.EstProc END) AS ItemsProcessamento
, COUNT(CASE WHEN LI.EstProc = 'A3.' THEN LI.EstProc END) AS ItemsErroProcessamento
, COUNT(CASE WHEN LI.EstProc= 'A4' OR LI.EstProc= 'A5' THEN LI.EstProc END) AS ItemSucessoProcessamento
, SUM(CASE WHEN LI.EstProc IN('A1.', 'A2.', 'A4', 'A5') THEN 1 ELSE 0 END) AS [ItemsProcessados]
, [CreatedOn]
, [CreatedBy]
FROM [dbo].[LogImpFiles] AS LIF
LEFT JOIN LoginsImp LI
    ON LI.LogImpFiles_Id = LIF.Id
GROUP BY [Id], Nome, Estado, CreatedOn, CreatedBy
```
|
I am tempted to say "it is just your data". That is, your question doesn't have enough information.
However, I suspect the problem is `NULL` values. The `+` returns `NULL` if either value is `NULL`. So, try this:
```
SUM(COALESCE(ItemsErroProcessamento, 0) + COALESCE(ItemSucessoProcessamento, 0)
) AS [ItemsProcessados]
```
|
Sum result of SELECT...WHERE in SQL Server
|
[
"",
"sql",
"sql-server",
""
] |
I need help checking for empty or null values in MySQL.
I tried the approach below for my case, but it is not working:
```
SELECT
CASE
WHEN account_type IS NULL THEN 'Uninformed'
WHEN account_type = '' THEN 'Empty '
END as Type,
COUNT(accounts.id) AS Total
FROM
accounts
```
Does anyone know how I can fix this?
|
Your query is correct; you just need to add `group by account_type` at the end.
The query should look like this:
```
SELECT
CASE WHEN account_type IS NULL THEN 'Uninformed'
WHEN account_type = '' THEN 'Empty '
END AS Type,
COUNT(accounts.id) AS Total
FROM accounts
GROUP BY account_type
```
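Running the grouped query against SQLite through Python shows the behaviour (column list trimmed for illustration; SQLite, like MySQL, puts all NULLs into one group):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER, account_type TEXT)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [(1, None), (2, ""), (3, None)])
rows = conn.execute("""
    SELECT CASE WHEN account_type IS NULL THEN 'Uninformed'
                WHEN account_type = '' THEN 'Empty' END AS Type,
           count(id) AS Total
    FROM accounts
    GROUP BY account_type
""").fetchall()
print(sorted(rows))  # [('Empty', 1), ('Uninformed', 2)]
```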
|
```
SELECT
CASE WHEN account_type IS NULL THEN 'Uninformed'
WHEN account_type = '' THEN 'Empty '
END as Type,
COUNT(accounts.id) AS Total
FROM accounts
group by account_type
```
You are missing a `group by` clause.
|
MySQL check blank or null values
|
[
"",
"mysql",
"sql",
"null",
"case",
""
] |
I'm trying to filter products that have both `Id` 17 with value 97 and `Id` 6 with value 11.
Here is the [SQL Fiddle](http://sqlfiddle.com/#!3/616573/2) for this.
In this example, I need to return the row where `fkProductId = 24011`.
Can you help me please?
```
CREATE TABLE AttributeBindings(
fkProductId int,
fkAttributeId int,
fkAttributeValueId int
)
INSERT INTO AttributeBindings (fkProductId, fkAttributeId ,fkAttributeValueId)
VALUES (24011, 17, 97)
INSERT INTO AttributeBindings (fkProductId, fkAttributeId ,fkAttributeValueId)
VALUES (24012, 17, 97)
INSERT INTO AttributeBindings (fkProductId, fkAttributeId ,fkAttributeValueId)
VALUES (24011, 6, 11)
```
|
You can self join the table on product id:
```
Select p1.fkProductId From AttributeBindings p1
inner join AttributeBindings p2 on p1.fkProductId = p2.fkProductId
where
(p1.fkAttributeId = 17 and p1.fkAttributeValueId = 97)
and (p2.fkAttributeId = 6 and p2.fkAttributeValueId = 11)
```
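The self-join can be tried out directly with SQLite through Python, using the question's sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE AttributeBindings (
        fkProductId INT, fkAttributeId INT, fkAttributeValueId INT);
    INSERT INTO AttributeBindings VALUES (24011, 17, 97);
    INSERT INTO AttributeBindings VALUES (24012, 17, 97);
    INSERT INTO AttributeBindings VALUES (24011, 6, 11);
""")
# Each alias of the self-joined table matches one attribute/value pair;
# only products satisfying both survive the join.
rows = conn.execute("""
    SELECT p1.fkProductId
    FROM AttributeBindings p1
    JOIN AttributeBindings p2 ON p1.fkProductId = p2.fkProductId
    WHERE p1.fkAttributeId = 17 AND p1.fkAttributeValueId = 97
      AND p2.fkAttributeId = 6 AND p2.fkAttributeValueId = 11
""").fetchall()
print(rows)  # [(24011,)]
```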
|
```
SELECT fkproductid FROM AttributeBindings
WHERE fkAttributeId = 17 AND fkAttributeValueId = 97
intersect
SELECT fkproductid FROM AttributeBindings
WHERE fkAttributeId = 6 AND fkAttributeValueId = 11
```
You can use `INTERSECT` to get the result.
|
ms sql 2008 - one table , multiple filter
|
[
"",
"sql",
"sql-server-2008",
"t-sql",
""
] |
Is there a way to know all the possible (acceptable) keywords that may appear after a SELECT statement in SQL, before the list of fields?
**Edit**
Reference DB is MySQL
|
## PostgreSQL 9.4
<http://www.postgresql.org/docs/9.4/static/sql-select.html>
```
[ WITH [ RECURSIVE ] with_query [, ...] ]
SELECT [ ALL | DISTINCT [ ON ( expression [, ...] ) ] ]
[ * | expression [ [ AS ] output_name ] [, ...] ]
[ FROM from_item [, ...] ]
[ WHERE condition ]
[ GROUP BY expression [, ...] ]
[ HAVING condition [, ...] ]
[ WINDOW window_name AS ( window_definition ) [, ...] ]
[ { UNION | INTERSECT | EXCEPT } [ ALL | DISTINCT ] select ]
[ ORDER BY expression [ ASC | DESC | USING operator ] [ NULLS { FIRST | LAST } ] [, ...] ]
[ LIMIT { count | ALL } ]
[ OFFSET start [ ROW | ROWS ] ]
[ FETCH { FIRST | NEXT } [ count ] { ROW | ROWS } ONLY ]
[ FOR { UPDATE | NO KEY UPDATE | SHARE | KEY SHARE } [ OF table_name [, ...] ] [ NOWAIT ] [...] ]
where from_item can be one of:
[ ONLY ] table_name [ * ] [ [ AS ] alias [ ( column_alias [, ...] ) ] ]
[ LATERAL ] ( select ) [ AS ] alias [ ( column_alias [, ...] ) ]
with_query_name [ [ AS ] alias [ ( column_alias [, ...] ) ] ]
[ LATERAL ] function_name ( [ argument [, ...] ] )
[ WITH ORDINALITY ] [ [ AS ] alias [ ( column_alias [, ...] ) ] ]
[ LATERAL ] function_name ( [ argument [, ...] ] ) [ AS ] alias ( column_definition [, ...] )
[ LATERAL ] function_name ( [ argument [, ...] ] ) AS ( column_definition [, ...] )
[ LATERAL ] ROWS FROM( function_name ( [ argument [, ...] ] ) [ AS ( column_definition [, ...] ) ] [, ...] )
[ WITH ORDINALITY ] [ [ AS ] alias [ ( column_alias [, ...] ) ] ]
from_item [ NATURAL ] join_type from_item [ ON join_condition | USING ( join_column [, ...] ) ]
and with_query is:
with_query_name [ ( column_name [, ...] ) ] AS ( select | values | insert | update | delete )
TABLE [ ONLY ] table_name [ * ]
```
## MySQL 5.6
<https://dev.mysql.com/doc/refman/5.6/en/select.html>
```
SELECT
[ALL | DISTINCT | DISTINCTROW ]
[HIGH_PRIORITY]
[STRAIGHT_JOIN]
[SQL_SMALL_RESULT] [SQL_BIG_RESULT] [SQL_BUFFER_RESULT]
[SQL_CACHE | SQL_NO_CACHE] [SQL_CALC_FOUND_ROWS]
select_expr [, select_expr ...]
[FROM table_references
[PARTITION partition_list]
[WHERE where_condition]
[GROUP BY {col_name | expr | position}
[ASC | DESC], ... [WITH ROLLUP]]
[HAVING where_condition]
[ORDER BY {col_name | expr | position}
[ASC | DESC], ...]
[LIMIT {[offset,] row_count | row_count OFFSET offset}]
[PROCEDURE procedure_name(argument_list)]
[INTO OUTFILE 'file_name'
[CHARACTER SET charset_name]
export_options
| INTO DUMPFILE 'file_name'
| INTO var_name [, var_name]]
[FOR UPDATE | LOCK IN SHARE MODE]]
```
## SQL Server 2014
<https://msdn.microsoft.com/en-us/library/ms189499.aspx>
```
<SELECT statement> ::=
[ WITH { [ XMLNAMESPACES ,] [ <common_table_expression> [,...n] ] } ]
<query_expression>
[ ORDER BY { order_by_expression | column_position [ ASC | DESC ] }
[ ,...n ] ]
[ <FOR Clause>]
[ OPTION ( <query_hint> [ ,...n ] ) ]
<query_expression> ::=
{ <query_specification> | ( <query_expression> ) }
[ { UNION [ ALL ] | EXCEPT | INTERSECT }
<query_specification> | ( <query_expression> ) [...n ] ]
<query_specification> ::=
SELECT [ ALL | DISTINCT ]
[TOP ( expression ) [PERCENT] [ WITH TIES ] ]
< select_list >
[ INTO new_table ]
[ FROM { <table_source> } [ ,...n ] ]
[ WHERE <search_condition> ]
[ <GROUP BY> ]
[ HAVING < search_condition > ]
```
|
[**Reserved Keywords in MySql**](https://dev.mysql.com/doc/refman/5.0/en/keywords.html)
Keywords are words that have significance in SQL. Certain keywords, such as SELECT, DELETE, or BIGINT, are reserved and require special treatment for use as identifiers such as table and column names. This may also be true for the names of built-in functions.
Nonreserved keywords are permitted as identifiers without quoting. Reserved words are permitted as identifiers if you quote them.
[**Reserved Keywords**](https://msdn.microsoft.com/en-us/library/ms189822.aspx) in MSSQL server :
Microsoft SQL Server uses reserved keywords for defining, manipulating, and accessing databases. Reserved keywords are part of the grammar of the Transact-SQL language that is used by SQL Server to parse and understand Transact-SQL statements and batches.
Although it is syntactically possible to use SQL Server reserved keywords as identifiers and object names in Transact-SQL scripts, you can do this only by using delimited identifiers.
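As a small illustration of delimited identifiers, the sketch below uses SQLite via Python, where ANSI double quotes delimit identifiers (SQL Server uses `[...]`, MySQL uses backticks):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# "select" is a reserved keyword; quoting it lets us use it as a column name.
con.execute('CREATE TABLE t ("select" INTEGER)')
con.execute('INSERT INTO t ("select") VALUES (42)')
val = con.execute('SELECT "select" FROM t').fetchone()[0]
print(val)  # 42
```

Without the quotes, the `CREATE TABLE` would fail with a syntax error.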
|
Reserved keywords after SELECT
|
[
"",
"mysql",
"sql",
"select",
"keyword",
""
] |
I am building a locking system based on PostgreSQL, I have two methods, `acquire` and `release`.
For `acquire`, it works like this
```
BEGIN
while True:
SELECT id FROM my_locks WHERE locked = false AND id = '<NAME>' FOR UPDATE
if no rows return:
continue
UPDATE my_locks SET locked = true WHERE id = '<NAME>'
COMMIT
break
```
And for `release`
```
BEGIN
UPDATE my_locks SET locked = false WHERE id = '<NAME>'
COMMIT
```
This looks pretty straightforward, but it doesn't work. The strange part of it is, I thought
```
SELECT id FROM my_locks WHERE locked = false AND id = '<NAME>' FOR UPDATE
```
should acquire the lock on the target row only if the target row's `locked` is `false`. But in reality, it's not like that. Somehow, even when no `locked = false` row exists, it acquires a lock anyway. As a result, I have a deadlock issue. It looks like this
[](https://i.stack.imgur.com/c4TsR.png)
Release is waiting for `SELECT FOR UPDATE`, and `SELECT FOR UPDATE` is looping infinitely while holding a lock for no reason.
To reproduce the issue, I wrote a simple test here
<https://gist.github.com/victorlin/d9119dd9dfdd5ac3836b>
You can run it with `psycopg2` and `pytest`, remember to change the database setting, and run
```
pip install pytest psycopg2
py.test -sv test_lock.py
```
|
The test case plays out like this:
* Thread-1 runs the `SELECT` and acquires the record lock.
* Thread-2 runs the `SELECT` and enters the lock's wait queue.
* Thread-1 runs the `UPDATE` / `COMMIT` and releases the lock.
* Thread-2 acquires the lock. Detecting that the record has changed since its `SELECT`, it rechecks the data against its `WHERE` condition. The check fails, and the row is filtered out of the result set, but the lock is still held.
This behaviour is mentioned in the [`FOR UPDATE` documentation](http://www.postgresql.org/docs/9.4/static/sql-select.html#SQL-FOR-UPDATE-SHARE):
> ...rows that satisfied the query conditions as of the query snapshot will be locked, although they will not be returned if they were updated after the snapshot and no longer satisfy the query conditions.
This can have some [unpleasant consequences](http://www.postgresql.org/message-id/002501ccfa9a$f923ead0$eb6bc070$@hellosam.net), so a superfluous lock isn't *that* bad, all things considered.
Probably the simplest workaround is to limit the lock duration by committing after every iteration of `acquire`. There are various other ways to prevent it from holding this lock (e.g. `SELECT ... NOWAIT`, running in a `REPEATABLE READ` or `SERIALIZABLE` isolation level, [`SELECT ... SKIP LOCKED`](http://www.postgresql.org/docs/9.5/static/sql-select.html#SQL-FOR-UPDATE-SHARE) in Postgres 9.5).
I think the cleanest implementation using this retry-loop approach would be to skip the `SELECT` altogether, and just run an `UPDATE ... WHERE locked = false`, committing each time. You can tell if you acquired the lock by checking `cur.rowcount` after calling `cur.execute()`. If there is additional information you need to pull from the lock record, you can use an `UPDATE ... RETURNING` statement.
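A minimal sketch of that rowcount-based approach, using Python's `sqlite3` on a single connection (real contention would of course involve separate sessions and transactions; the table and names are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE my_locks (id TEXT PRIMARY KEY, locked INTEGER)")
con.execute("INSERT INTO my_locks VALUES ('job', 0)")
con.commit()

def acquire(name):
    # The UPDATE only matches when the lock is free; rowcount tells us
    # whether we won it, with no separate SELECT needed.
    cur = con.execute(
        "UPDATE my_locks SET locked = 1 WHERE id = ? AND locked = 0", (name,))
    con.commit()
    return cur.rowcount == 1

def release(name):
    con.execute("UPDATE my_locks SET locked = 0 WHERE id = ?", (name,))
    con.commit()

first = acquire("job")    # True  -- first caller gets the lock
second = acquire("job")   # False -- already held
release("job")
third = acquire("job")    # True again after release
print(first, second, third)
```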
But I would have to agree with [@Kevin](https://stackoverflow.com/a/31929467/1104979), and say that you'd probably be better off leveraging Postgres' built-in locking support than trying to reinvent it. It would solve a lot of problems for you, e.g.:
* Deadlocks are automatically detected
* Waiting processes are put to sleep, rather than having to poll the server
* Lock requests are queued, preventing starvation
* Locks would (generally) not outlive a failed process
The easiest way might be to implement `acquire` as `SELECT FROM my_locks FOR UPDATE`, `release` simply as `COMMIT`, and let the processes contend for the row lock. If you need more flexibility (e.g. blocking/non-blocking calls, transaction/session/custom scope), [advisory locks](http://www.postgresql.org/docs/9.4/static/functions-admin.html#FUNCTIONS-ADVISORY-LOCKS) should prove useful.
|
PostgreSQL [normally](http://www.postgresql.org/docs/9.4/static/explicit-locking.html#LOCKING-DEADLOCKS) aborts transactions which deadlock:
> The use of explicit locking can increase the likelihood of deadlocks, wherein two (or more) transactions each hold locks that the other wants. For example, if transaction 1 acquires an exclusive lock on table A and then tries to acquire an exclusive lock on table B, while transaction 2 has already exclusive-locked table B and now wants an exclusive lock on table A, then neither one can proceed. **PostgreSQL automatically detects deadlock situations and resolves them by aborting one of the transactions involved**, allowing the other(s) to complete. (Exactly which transaction will be aborted is difficult to predict and should not be relied upon.)
Looking at your Python code, and at the screenshot you showed, it appears to me that:
* Thread 3 is holding the `locked=true` lock, and is [waiting to acquire a row lock](http://www.postgresql.org/docs/9.4/static/explicit-locking.html#LOCKING-ROWS).
* Thread 1 is also waiting for a row lock, and also the `locked=true` lock.
* The only logical conclusion is that Thread 2 is somehow holding the row lock, and waiting for the `locked=true` lock (note the short time on that query; it is looping, not blocking).
Since Postgres is not aware of the `locked=true` lock, it is unable to abort transactions to prevent deadlock in this case.
It's not immediately clear to me how T2 acquired the row lock, since all the information I've looked at says [it can't do that](http://www.postgresql.org/docs/9.4/static/explicit-locking.html#LOCKING-ROWS):
> FOR UPDATE causes **the rows retrieved by the SELECT statement** to be locked as though for update. This prevents them from being locked, modified or deleted by other transactions until the current transaction ends. That is, other transactions that attempt UPDATE, DELETE, SELECT FOR UPDATE, SELECT FOR NO KEY UPDATE, SELECT FOR SHARE or SELECT FOR KEY SHARE of these rows will be blocked until the current transaction ends; conversely, SELECT FOR UPDATE will wait for a concurrent transaction that has run any of those commands on the same row, and will then **lock and return the updated row (or no row, if the row was deleted)**. Within a REPEATABLE READ or SERIALIZABLE transaction, however, an error will be thrown if a row to be locked has changed since the transaction started. For further discussion see Section 13.4.
I was not able to find any evidence of PostgreSQL "magically" upgrading row locks to table locks or anything similar.
But what you're doing is not obviously safe, either. You're acquiring lock A (the row lock), then acquiring lock B (the explicit `locked=true` lock), then releasing and re-acquiring A, before finally releasing B and A in that order. This does not properly observe a lock hierarchy since we try both to acquire A while holding B and *vice-versa*. But OTOH, acquiring B while holding A should not fail (I think), so I'm still not sure this is outright wrong.
---
Quite frankly, it's my opinion that you'd be better off just using the [`LOCK TABLE`](http://www.postgresql.org/docs/9.4/static/sql-lock.html) statement on an empty table. Postgres is aware of these locks and will detect deadlocks for you. It also saves you the trouble of the `SELECT FOR UPDATE` finagling.
|
Strange deadlock PostgreSQL deadlock issue with SELECT FOR UPDATE
|
[
"",
"sql",
"database",
"postgresql",
"transactions",
"deadlock",
""
] |
Hello everyone, I'm trying to get all records without duplicates.
Here my sample database records.
```
(`id`, `productName`, `description`)
-----------------------------------------------
(1, 'OpenIDM', 'Platform for building enterprise provisioning solutions'),
(2, 'OpenAM', 'Full-featured access management'),
(3, 'OpenDJ', 'Robust LDAP server for Java'),
(4, 'OpenDJ', 'Robust LDAP server for Java') -- this duplicate row should not appear in the result
```
What I want to get with SQL
```
(`id`, `productName`, `description`)
-----------------------------------------------
(1, 'OpenIDM', 'Platform for building enterprise provisioning solutions'),
(2, 'OpenAM', 'Full-featured access management'),
(3, 'OpenDJ', 'Robust LDAP server for Java'),
```
Here is my SQL snippet for this subject.
```
select productName
from ForgeRock
WHERE productName in (select productName from ForgeRock group by productName having count(*)>1)
```
<http://sqlfiddle.com/#!9/6c924/121>
How can I get all elements without duplicates? Thanks for helping.
|
I think that you must use DISTINCT:
```
select distinct productName,description from ForgeRock;
```
And this is my result on fiddle:
<http://sqlfiddle.com/#!9/6c924/132>
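To illustrate with the question's sample data, a sketch using Python's built-in `sqlite3` (the duplicate `OpenDJ` row collapses into one):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ForgeRock (id INT, productName TEXT, description TEXT)")
con.executemany("INSERT INTO ForgeRock VALUES (?, ?, ?)", [
    (1, 'OpenIDM', 'Platform for building enterprise provisioning solutions'),
    (2, 'OpenAM',  'Full-featured access management'),
    (3, 'OpenDJ',  'Robust LDAP server for Java'),
    (4, 'OpenDJ',  'Robust LDAP server for Java'),  # duplicate content
])

# DISTINCT compares the whole selected tuple, so identical
# (productName, description) pairs are returned only once.
rows = con.execute(
    "SELECT DISTINCT productName, description FROM ForgeRock "
    "ORDER BY productName").fetchall()
print(len(rows))  # 3
```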
|
One method is to take the row with the minimum id for each group:
```
select min(`id`),`productname`,`description` from forgerock
group by `productname`, `description`
```
|
How to get all records without duplicate
|
[
"",
"sql",
"duplicates",
""
] |
I have a lot of data that I want to spool to a csv file. I need to `set heading off` so the heading will not repeat every page. However, I still need my produced file to contain headers. Is there a way to add a row of headers (not to the table itself) into the query that won't actually be considered a header when spooling? This is my code which works, it just doesn't contain headers when I `set heading off`.
```
select a.col1 as name1,
a.col2 as name2,
b.col3 as name3
from tab1 a,
tab2 b
```
Thanks in advance
|
you could always try something like:
```
set heading off;
select 'NAME1' name1, 'NAME2' name2, 'NAME3' name3 from dual
union all
select a.col1 as name1, a.col2 as name2, b.col3 as name3
from tab1 a, tab2 b
where <join condition>;
```
ETA: If the column types returned by the main query aren't all strings, you'll have to explicitly convert them. Here is an example:
```
create table test1 (col1 number,
col2 date,
col3 varchar2(10),
col4 clob);
insert into test1 values (1, sysdate, 'hello', 'hello');
commit;
select 'col1' col1, 'col2' col2, 'col3' col3, 'col4' col4 from dual
union all
select col1, col2, col3, col4
from test1;
*
Error at line 1
ORA-01790: expression must have same datatype as corresponding expression
set heading off;
select 'col1' col1, 'col2' col2, 'col3' col3, to_clob('col4') col4 from dual
union all
select to_char(col1), to_char(col2, 'dd/mm/yyyy hh24:mi:ss'), col3, col4
from test1;
col1 col2 col3 col4
1 05/08/2015 11:23:15 hello hello
```
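The same header-row pattern works on other engines too; a sketch with Python's `sqlite3`, casting the numeric column to text so every `UNION` branch yields the same type (SQLite is loosely typed, so the cast here is for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE test1 (col1 INT, col2 TEXT)")
con.execute("INSERT INTO test1 VALUES (1, 'hello')")

# Prepend a literal header row; the data rows are cast to text so all
# branches of the UNION line up on type.
rows = con.execute("""
    SELECT 'col1' AS col1, 'col2' AS col2
    UNION ALL
    SELECT CAST(col1 AS TEXT), col2 FROM test1
""").fetchall()
print(rows)  # [('col1', 'col2'), ('1', 'hello')]
```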
|
You want to try:
```
set pages <number of rows you expect>
```
E.g.
`set pages 1000`
Another way around could be a UNION like so:
```
SELECT 'name1', 'name2', 'name3' FROM DUAL UNION
select a.col1 as name1, a.col2 as name2, b.col3 as name3
from tab1 a, tab2 b
```
|
trouble creating headers using spool in sqlplus
|
[
"",
"sql",
"oracle",
"sqlplus",
""
] |
```
WITH hi AS (
SELECT ps.id, ps.brgy_locat, ps.municipali, ps.bldg_name, fh.gridcode, ps.bldg_type
FROM evidensapp_polystructures ps
JOIN evidensapp_floodhazard fh ON fh.gridcode=3
AND ST_Intersects(fh.geom, ps.geom)
), med AS (
SELECT ps.id, ps.brgy_locat, ps.municipali ,ps.bldg_name, fh.gridcode, ps.bldg_type
FROM evidensapp_polystructures ps
JOIN evidensapp_floodhazard fh ON fh.gridcode=2
AND ST_Intersects(fh.geom, ps.geom)
EXCEPT SELECT * FROM hi
), low AS (
SELECT ps.id, ps.brgy_locat, ps.municipali,ps.bldg_name, fh.gridcode, ps.bldg_type
FROM evidensapp_polystructures ps
JOIN evidensapp_floodhazard fh ON fh.gridcode=1
AND ST_Intersects(fh.geom, ps.geom)
EXCEPT SELECT * FROM hi
EXCEPT SELECT * FROM med
)
SELECT brgy_locat, municipali, bldg_name, bldg_type, gridcode, count( bldg_name)
FROM (SELECT brgy_locat, municipali, bldg_name, gridcode, bldg_type
FROM hi
GROUP BY 1, 2, 3, 4, 5) cnt_hi
FULL JOIN (SELECT brgy_locat, municipali,bldg_name, gridcode, bldg_type
FROM med
GROUP BY 1, 2, 3, 4, 5) cnt_med USING (brgy_locat, municipali, bldg_name,gridcode,bldg_type)
FULL JOIN (SELECT brgy_locat, municipali,bldg_name,gridcode, bldg_type
FROM low
GROUP BY 1, 2, 3, 4, 5) cnt_low USING (brgy_locat, municipali, bldg_name, gridcode, bldg_type)
```
The query above returns an error:
> ERROR: column "cnt\_hi.brgy\_locat" must appear in the GROUP BY clause
> or be used in an aggregate function
> SQL state: 42803
If I omit the `count(bldg_name)` it works, but I need the count based on `bldg_name`.
**EDIT:**
I wanted to get the number of buildings that intersect with each hazard value (gridcode): High (3), Medium (2) and Low (1). But if a certain geometry already intersects at High, exclude it from the Medium query; likewise for Low, exclude geometries that intersect at High or Medium.
**PostgreSQL: 9.4, PostGIS: 2.1.7**
Table Details:
```
CREATE TABLE evidensapp_floodhazard (
id integer NOT NULL DEFAULT nextval('evidensapp_floodhazard_id_seq'::regclass),
gridcode integer NOT NULL,
date_field character varying(60),
geom geometry(MultiPolygon,32651),
CONSTRAINT evidensapp_floodhazard_pkey PRIMARY KEY (id)
);
CREATE INDEX evidensapp_floodhazard_geom_id
ON evidensapp_floodhazard USING gist (geom);
ALTER TABLE evidensapp_floodhazard CLUSTER ON evidensapp_floodhazard_geom_id;
CREATE TABLE evidensapp_polystructures (
id serial NOT NULL,
bldg_name character varying(100) NOT NULL,
bldg_type character varying(50) NOT NULL,
brgy_locat character varying(50) NOT NULL,
municipali character varying(50) NOT NULL,
province character varying(50) NOT NULL,
geom geometry(MultiPolygon,32651),
CONSTRAINT evidensapp_polystructures_pkey PRIMARY KEY (id)
);
CREATE INDEX evidensapp_polystructures_geom_id
ON evidensapp_polystructures USING gist (geom);
ALTER TABLE evidensapp_polystructures CLUSTER ON evidensapp_polystructures_geom_id;
```
Intended output is like this but with correct count:
[](https://i.stack.imgur.com/O0C6H.png)
**EDIT 2:**
I've tried my best to explain the intended output; to summarize:
* count the `bldg_name` (not the `id`) per gridcode it intersects in `floodhazard`, with the exclusion condition described in **EDIT 1**.
* then group by `brgy_locat`, `municipali`, `gridcode` and `bldg_type`.
Kindly take a look at the image above.
|
You probably want this instead:
```
WITH hi AS (
SELECT ps.brgy_locat, ps.municipali, ps.bldg_name, ps.bldg_type, fh.gridcode
, count(*) OVER(PARTITION BY ps.bldg_name, ps.bldg_type) AS building_count
FROM evidensapp_polystructures ps
JOIN evidensapp_floodhazard fh ON fh.gridcode = 3
AND ST_Intersects(fh.geom, ps.geom)
)
, med AS (
SELECT ps.brgy_locat, ps.municipali, ps.bldg_name, ps.bldg_type, fh.gridcode
, count(*) OVER(PARTITION BY ps.bldg_name, ps.bldg_type) AS building_count
FROM evidensapp_polystructures ps
JOIN evidensapp_floodhazard fh ON fh.gridcode = 2
AND ST_Intersects(fh.geom, ps.geom)
LEFT JOIN hi USING (bldg_name, bldg_type)
WHERE hi.bldg_name IS NULL
)
TABLE hi
UNION ALL
TABLE med
UNION ALL
SELECT ps.brgy_locat, ps.municipali, ps.bldg_name, ps.bldg_type, fh.gridcode
, count(*) OVER(PARTITION BY ps.bldg_name, ps.bldg_type) AS building_count
FROM evidensapp_polystructures ps
JOIN evidensapp_floodhazard fh ON fh.gridcode = 1
AND ST_Intersects(fh.geom, ps.geom)
LEFT JOIN hi USING (bldg_name, bldg_type)
LEFT JOIN med USING (bldg_name, bldg_type)
WHERE hi.bldg_name IS NULL
AND med.bldg_name IS NULL;
```
Based on your comments to the question and the chat, this counts per **`(bldg_name, bldg_type)`** now - excluding buildings that already intersect on a higher level - again based on `(bldg_name, bldg_type)`.
All other columns are either distinct (`id`, `geom`) or functionally dependent noise for the count (`brgy_locat`, `municipali`, ...). **If not**, add more columns to the `PARTITION BY` clause to disambiguate buildings, and add the same columns to the `USING` clause of the join condition.
If a building intersects with multiple rows in `evidensapp_floodhazard` with the ***same*** `gridcode` it is counted **that many times**. See the alternative below.
Since you do not actually want to aggregate rows but just count on partitions, the key feature is using `count()` as [**window function**](http://www.postgresql.org/docs/current/interactive/tutorial-window.html), not as aggregate function like in your original. Basic explanation:
* [Best way to get result count before LIMIT was applied](https://stackoverflow.com/questions/156114/best-way-to-get-result-count-before-limit-was-applied-in-php-postgresql/8242764#8242764)
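A toy illustration of the window-function form (Python's `sqlite3`; window functions need SQLite >= 3.25; the data is made up): unlike an aggregate `count`, the window `count` keeps every row while attaching the per-partition total.

```python
import sqlite3  # window functions require SQLite >= 3.25

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE b (name TEXT, type TEXT)")
con.executemany("INSERT INTO b VALUES (?, ?)",
                [('Apple', 'shop'), ('Apple', 'shop'), ('Mango', 'shop')])

# count(*) OVER attaches the partition count to each row
# without collapsing the rows, unlike GROUP BY + count().
rows = con.execute("""
    SELECT name, count(*) OVER (PARTITION BY name, type) AS cnt
    FROM b ORDER BY name
""").fetchall()
print(rows)  # [('Apple', 2), ('Apple', 2), ('Mango', 1)]
```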
`count(*)` does a better job here.
Using `LEFT JOIN` / `IS NULL` instead of `EXCEPT`. Details:
* [Select rows which are not present in other table](https://stackoverflow.com/questions/19363481/select-rows-which-are-not-present-in-other-table/19364694#19364694)
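A toy comparison of the two anti-join forms (Python's `sqlite3`, hypothetical one-column tables): both return rows of `med` that have no match in `hi`, though `EXCEPT` additionally deduplicates and compares all selected columns.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE hi  (bldg TEXT);
    CREATE TABLE med (bldg TEXT);
    INSERT INTO hi  VALUES ('A');
    INSERT INTO med VALUES ('A'), ('B');
""")

# Anti-join: keep med rows whose LEFT JOIN partner is missing.
anti = con.execute("""
    SELECT m.bldg FROM med m
    LEFT JOIN hi h ON h.bldg = m.bldg
    WHERE h.bldg IS NULL
""").fetchall()

# EXCEPT yields the same set here.
ex = con.execute("SELECT bldg FROM med EXCEPT SELECT bldg FROM hi").fetchall()
print(anti, ex)  # [('B',)] [('B',)]
```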
And I failed to see the purpose of `FULL JOIN` in the outer query. Using `UNION ALL` instead.
### Alternative query
This counts each building ***once***, no matter how many times it intersects with `evidensapp_floodhazard` at the same gridcode level.
Also, this variant (unlike the first!) assumes that all rows for the same `(bldg_name, bldg_type)` match on the same gridcode level, which may or may not be the case:
```
SELECT brgy_locat, municipali, bldg_name, bldg_type, 3 AS gridcode
, count(*) OVER(PARTITION BY bldg_name, bldg_type) AS building_count
FROM evidensapp_polystructures ps
WHERE EXISTS (
SELECT 1 FROM evidensapp_floodhazard fh
WHERE fh.gridcode = 3 AND ST_Intersects(fh.geom, ps.geom)
)
UNION ALL
SELECT brgy_locat, municipali, bldg_name, bldg_type, 2 AS gridcode
, count(*) OVER(PARTITION BY bldg_name, bldg_type) AS building_count
FROM evidensapp_polystructures ps
WHERE EXISTS (
SELECT 1 FROM evidensapp_floodhazard fh
WHERE fh.gridcode = 2 AND ST_Intersects(fh.geom, ps.geom)
)
AND NOT EXISTS (
SELECT 1 FROM evidensapp_floodhazard fh
WHERE fh.gridcode > 2 -- exclude matches on ALL higher gridcodes
AND ST_Intersects(fh.geom, ps.geom)
)
UNION ALL
SELECT brgy_locat, municipali, bldg_name, bldg_type, 1 AS gridcode
, count(*) OVER(PARTITION BY bldg_name, bldg_type) AS building_count
FROM evidensapp_polystructures ps
WHERE EXISTS (
SELECT 1 FROM evidensapp_floodhazard fh
WHERE fh.gridcode = 1 AND ST_Intersects(fh.geom, ps.geom)
)
AND NOT EXISTS (
SELECT 1 FROM evidensapp_floodhazard fh
WHERE fh.gridcode > 1 AND ST_Intersects(fh.geom, ps.geom)
);
```
Also demonstrating a variant without CTEs, which may or may not perform better, depending on data distribution.
### Index
Adding `gridcode` to the index *might* improve performance. (Not tested with PostGIS.)
You need to install the additional module `btree_gist` for this first. Details:
* [Equivalent to exclusion constraint composed of integer and range](https://stackoverflow.com/questions/20908570/equivalent-to-exclusion-constraint-composed-of-integer-and-range/20908766#20908766)
```
CREATE INDEX evidensapp_floodhazard_geom_id
ON evidensapp_floodhazard USING gist (gridcode, geom);
```
|
Error is asking you to include the select list columns in the `GROUP BY` clause; you can do like below
```
SELECT brgy_locat, municipali, bldg_name, bldg_type,
gridcode, building_count
FROM (SELECT brgy_locat, municipali, bldg_name, gridcode, bldg_type,
count( bldg_name) as building_count
FROM hi
GROUP BY 1, 2, 3, 4, 5) cnt_hi
FULL JOIN (SELECT brgy_locat, municipali,bldg_name, gridcode, bldg_type
FROM med
GROUP BY 1, 2, 3, 4, 5) cnt_med
USING (brgy_locat, municipali, bldg_name,gridcode,bldg_type)
FULL JOIN (SELECT brgy_locat, municipali,bldg_name,gridcode, bldg_type
FROM low
GROUP BY 1, 2, 3, 4, 5) cnt_low
USING (brgy_locat, municipali, bldg_name, gridcode, bldg_type);
```
|
PostgreSQL: Query doesn't work using count
|
[
"",
"sql",
"postgresql",
"group-by",
""
] |
I have a SQL query that performs an INNER JOIN on two tables having >50M rows each. I wish to reduce the time it takes to search through the join by reducing the rows that are joined based on a column present on one of the tables.
Say I have table1 with columns A,B,C and table2 with columns A,D,E. I wish to join based on column A but only those rows that have value 'e' for column E of table 2.
My SQL query :
```
SELECT one.B, two.D
FROM table1 one
INNER JOIN table2 two WHERE two.E IN ('e')
ON one.A = two.A
WHERE one.B > 10
AND two.D IN ('...')
```
It gives the error :
```
ORA-00905: missing keyword
```
Where am I going wrong? How do I achieve the intended result?
|
```
SELECT one.B, two.D
FROM table1 one
INNER JOIN table2 two -- WHERE two.E IN ('e') --> shouldn't use where here
ON one.A = two.A and two.E = 'e'
WHERE one.B > 10
AND two.D IN ('...')
```
Comments included in the code.
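To see the effect of moving the extra filter into the join condition, a small sketch with Python's `sqlite3` and made-up data modeled on the question's tables:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t1 (A INT, B INT);
    CREATE TABLE t2 (A INT, D TEXT, E TEXT);
    INSERT INTO t1 VALUES (1, 20), (2, 5);
    INSERT INTO t2 VALUES (1, 'x', 'e'), (1, 'x', 'f');
""")

# For an INNER JOIN, a filter in the ON clause behaves the same as
# in WHERE; the point is that WHERE cannot appear *before* ON.
rows = con.execute("""
    SELECT one.B, two.D
    FROM t1 one
    JOIN t2 two ON one.A = two.A AND two.E = 'e'
    WHERE one.B > 10
""").fetchall()
print(rows)  # [(20, 'x')] -- only the E = 'e' row survives the join
```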
|
As vkp pointed out, the WHERE is improperly used. Instead you could also make a subquery to include that where statement. So that:
```
INNER JOIN table2 two WHERE two.E IN ('e')
```
becomes
```
INNER JOIN (select * from table2 WHERE E IN ('e')) two
```
|
Optimising SQL query
|
[
"",
"sql",
"oracle",
"inner-join",
""
] |
I have a table for my users scores like this:
```
id | kills
----------
2 | 1
1 | 1
1 | 5
1 | 3
2 | 4
2 | 5
3 | 5
```
I want to get the first 2 rows of each player which have more than 2 kills. So the result should look like this
```
id | kills
----------
1 | 5
1 | 3
2 | 4
2 | 5
3 | 5
```
I tried this but it doesn't work:
```
SELECT *
FROM user_stats us
WHERE
(
SELECT COUNT(*)
FROM user_stats f
WHERE f.id=us.id AND f.kills > 2
) <= 2;
```
|
I suspect that you just want the two largest values for users that have kills > 2. If so, use variables:
```
select us.*
from (select us.*,
(@rn := if(@i = id, @rn + 1,
if(@i := id, 1, 1)
)
) as seqnum
from user_stats us cross join
(select @rn := 0, @i := -1) params
where us.kills > 2
order by us.id, kills desc
) us
where seqnum <= 2;
```
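On engines with window functions (MySQL 8+, or SQLite >= 3.25 as used in this sketch), the variable trick can be replaced by `ROW_NUMBER()`; a runnable sketch with the question's sample data:

```python
import sqlite3  # window functions require SQLite >= 3.25

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE user_stats (id INT, kills INT)")
con.executemany("INSERT INTO user_stats VALUES (?, ?)",
                [(2, 1), (1, 1), (1, 5), (1, 3), (2, 4), (2, 5), (3, 5)])

# Number the kills > 2 rows per player, largest first,
# then keep at most two per player.
rows = con.execute("""
    SELECT id, kills FROM (
        SELECT id, kills,
               ROW_NUMBER() OVER (PARTITION BY id ORDER BY kills DESC) AS rn
        FROM user_stats
        WHERE kills > 2
    ) WHERE rn <= 2
    ORDER BY id, kills DESC
""").fetchall()
print(rows)  # [(1, 5), (1, 3), (2, 5), (2, 4), (3, 5)]
```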
|
Try this. I am coming from Oracle, where `rownum` is a count of rows selected; this should have the same effect.
```
select @rownum:=@rownum+1, us.*
from user_stats us , (select @rownum := 0) r
where id in (
select id from user_stats f
group by id
having count(*) > 2
)
and @rownum < 3;
```
|
first N row of each id in MySQL
|
[
"",
"mysql",
"sql",
"group-by",
"aggregate",
""
] |
Why does this SQL not work?
The:
```
6371 * ACos( Cos(RADIANS(Latitude)) * Cos(RADIANS('50.017466977673905')) * Cos(RADIANS('24.69924272460935')
- RADIANS(Longitude)) + Sin(RADIANS(Latitude)) * Sin(RADIANS('50.017466977673905')) )
```
clause just calculates the distance from a search point,
which I am aliasing (because it is so longwinded) as Distance.
```
SELECT [Hotel Id],
latitude,
longitude,
establishmentname,
6371 * ACos( Cos(RADIANS(Latitude)) * Cos(RADIANS('50.017466977673905')) * Cos(RADIANS('24.69924272460935') - RADIANS(Longitude)) + Sin(RADIANS(Latitude)) * Sin(RADIANS('50.017466977673905')) ) AS Distance
FROM [dbo].[RPT_hotels]
WHERE distance < '30'
ORDER BY Distance
```
Here I replace the "Distance < 30" with the longwinded phrase and it works fine.
I can even ORDER BY the column alias and that works!!?
```
SELECT [Hotel Id],
latitude,
longitude,
establishmentname,
6371 * ACos( Cos(RADIANS(Latitude)) * Cos(RADIANS('50.017466977673905')) * Cos(RADIANS('24.69924272460935') - RADIANS(Longitude)) + Sin(RADIANS(Latitude)) * Sin(RADIANS('50.017466977673905')) ) AS Distance
FROM [dbo].[RPT_hotels]
WHERE 6371 * ACos( Cos(RADIANS(Latitude)) * Cos(RADIANS('50.017466977673905')) * Cos(RADIANS('24.69924272460935') - RADIANS(Longitude)) + Sin(RADIANS(Latitude)) * Sin(RADIANS('50.017466977673905')) ) < '30'
ORDER BY Distance
```
What am I doing wrong?
|
This happens because of natural query processing order, which is the following:
1. `FROM`
2. `ON`
3. `OUTER`
4. `WHERE`
5. `GROUP BY`
6. `CUBE` | `ROLLUP`
7. `HAVING`
8. `SELECT`
9. `DISTINCT`
10. `ORDER BY`
11. `TOP`
You're assigning your alias in the `SELECT` statement. As you can see, `WHERE` is processed before `SELECT`, and `ORDER BY` comes after it. That's the reason. Now for the workarounds:
* Subqueries. But they can be hard to read.
* `CROSS APPLY`. This should *beautify* your code a bit and is the recommended method.
`CROSS APPLY` will assign alias before `WHERE` statement, making it usable in it.
```
SELECT [Hotel Id]
, latitude
, longitude
, establishmentname
, Distance
FROM [dbo].[RPT_hotels]
CROSS APPLY (
SELECT 6371 * ACos(Cos(RADIANS(Latitude)) * Cos(RADIANS('50.017466977673905')) * Cos(RADIANS('24.69924272460935') - RADIANS(Longitude)) + Sin(RADIANS(Latitude)) * Sin(RADIANS('50.017466977673905')))
) AS T(Distance)
WHERE distance < 30
ORDER BY Distance;
```
If you want to find out more. Please read this question: [What is the order of execution for this SQL statement](https://stackoverflow.com/questions/17403935/what-is-the-order-of-execution-for-this-sql-statement)
|
As to why you can't specify an alias in the `WHERE` clause, this is due to the logical order of query processing: (<http://tsql.solidq.com/books/insidetsql2008/Logical%20Query%20Processing%20Poster.pdf>).
The `WHERE` clause is processed after the `SELECT` clause but `ORDER BY` is processed afterward. Column aliases can only be referenced after the `SELECT` clause has been processed.
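A quick sketch of the subquery workaround (Python's `sqlite3`, with a simplified one-dimensional "distance" standing in for the haversine expression): the alias lives in the derived table, so the outer `WHERE` can legally reference it.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE hotels (name TEXT, lat REAL)")
con.executemany("INSERT INTO hotels VALUES (?, ?)", [('A', 1.0), ('B', 9.0)])

# 'distance' is defined in the inner SELECT; by the time the outer
# WHERE runs, it is an ordinary column of the derived table.
rows = con.execute("""
    SELECT name, distance FROM (
        SELECT name, abs(lat - 2.0) AS distance FROM hotels
    ) WHERE distance < 3
    ORDER BY distance
""").fetchall()
print(rows)  # [('A', 1.0)]
```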
|
Cannot use Alias name in WHERE clause but can in ORDER BY
|
[
"",
"sql",
"sql-server",
""
] |
I have several related tables that I want to be able to duplicate some of the rows while updating the references.
I want to duplicate a row in Table1, and all of its related rows from Table2 and Table3, and I'm trying to figure out an efficient way of doing it short of iterating through rows.
So for example, I have a table of baskets:
```
+----------+---------------+
| BasketId | BasketName |
+----------+---------------+
| 1 | Home Basket |
| 2 | Office Basket |
+----------+---------------+
```
Each basket has fruit:
```
+---------+----------+-----------+
| FruitId | BasketId | FruitName |
+---------+----------+-----------+
| 1 | 1 | Apple |
| 2 | 1 | Orange |
| 3 | 2 | Mango |
| 4 | 2 | Pear |
+---------+----------+-----------+
```
And each fruit has some properties:
```
+------------+---------+--------------+
| PropertyId | FruitId | PropertyText |
+------------+---------+--------------+
| 1 | 2 | Is juicy |
| 2 | 2 | Hard to peel |
| 3 | 1 | Is red |
+------------+---------+--------------+
```
For this example, my properties are specific to the individual fruit row, these "apple" properties aren't properties of all apples in all baskets, just for that specific apple in that specific basket.
What I want to do is duplicate a basket. So given basket 1, I want to create a new basket, duplicate the fruit rows it contains, and duplicate the properties pointing to those fruits. In the end I'm hoping to have data like so:
```
+----------+---------------+
| BasketId | BasketName |
+----------+---------------+
| 1 | Home Basket |
| 2 | Office Basket |
| 3 | Friends Basket|
+----------+---------------+
+---------+----------+-----------+
| FruitId | BasketId | FruitName |
+---------+----------+-----------+
| 1 | 1 | Apple |
| 2 | 1 | Orange |
| 3 | 2 | Mango |
| 4 | 2 | Pear |
| 5 | 3 | Apple |
| 6 | 3 | Orange |
+---------+----------+-----------+
+------------+---------+--------------+
| PropertyId | FruitId | PropertyText |
+------------+---------+--------------+
| 1 | 2 | Is juicy |
| 2 | 2 | Hard to peel |
| 3 | 1 | Is red |
| 4 | 6 | Is juicy |
| 5 | 6 | Hard to peel |
| 6 | 5 | Is red |
+------------+---------+--------------+
```
Duplicating the basket and its fruit was pretty straightforward, but duplicating the properties of the fruit seems to lead to iterating over rows, and I'm hoping there's a better solution in TSQL.
Any ideas?
|
Why don't you join on the FruitName to get a table with old and new FruitIds? Assuming the data is inserted at the same time, it may not be the best option, but you won't be wasting cycles iterating.
```
INSERT INTO BASKET(BASKETNAME)
VALUES ('COPY BASKET')
DECLARE @iBasketId int
SET @iBasketId = SCOPE_IDENTITY();
insert into Fruit (BasketId, FruitName)
select @iBasketId, FruitName
from Fruit
where BasketId = @originalBasket
declare @tabFruit table (originalFruitId int, newFruitId int)
insert into @tabFruit (originalFruitId, newFruitId)
select o.FruitId, n.FruitId
from (SELECT FruitId, FruitName from Fruit where BasketId = @originalBasket) as o
join (SELECT FruitId, FruitName from Fruit where BasketId = @newBasket) as n
on o.FruitName = n.FruitName
insert into Property (FruitId, PropertyText)
select t.newFruitId, p.PropertyText
from Property p join @tabFruit t on t.originalFruitId = p.FruitId
```
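A runnable sketch of this name-join mapping using Python's `sqlite3` (note it assumes fruit names are unique within a basket, as this approach inherently does; table names mirror the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Basket (BasketId INTEGER PRIMARY KEY, BasketName TEXT);
    CREATE TABLE Fruit  (FruitId INTEGER PRIMARY KEY, BasketId INT, FruitName TEXT);
    CREATE TABLE Property (PropertyId INTEGER PRIMARY KEY, FruitId INT, PropertyText TEXT);
    INSERT INTO Basket VALUES (1, 'Home Basket');
    INSERT INTO Fruit  VALUES (1, 1, 'Apple'), (2, 1, 'Orange');
    INSERT INTO Property VALUES (1, 2, 'Is juicy'), (2, 1, 'Is red');
""")

old = 1
cur = con.execute("INSERT INTO Basket (BasketName) VALUES ('Copy Basket')")
new = cur.lastrowid  # sqlite's analogue of SCOPE_IDENTITY()

# Copy the fruit rows into the new basket.
con.execute("INSERT INTO Fruit (BasketId, FruitName) "
            "SELECT ?, FruitName FROM Fruit WHERE BasketId = ?", (new, old))

# Map old fruit ids to new ones by joining on FruitName,
# then copy the properties through that mapping.
con.execute("""
    INSERT INTO Property (FruitId, PropertyText)
    SELECT n.FruitId, p.PropertyText
    FROM Fruit o
    JOIN Fruit n ON n.FruitName = o.FruitName AND n.BasketId = ?
    JOIN Property p ON p.FruitId = o.FruitId
    WHERE o.BasketId = ?
""", (new, old))

props = con.execute("""
    SELECT f.FruitName, p.PropertyText FROM Property p
    JOIN Fruit f ON f.FruitId = p.FruitId
    WHERE f.BasketId = ? ORDER BY f.FruitName
""", (new,)).fetchall()
print(props)  # [('Apple', 'Is red'), ('Orange', 'Is juicy')]
```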
|
(ab)use [`MERGE`](https://msdn.microsoft.com/en-us/library/bb510625.aspx) with `OUTPUT` clause.
`MERGE` can `INSERT`, `UPDATE` and `DELETE` rows. In our case we need only to `INSERT`. `1=0` is always false, so the `NOT MATCHED BY TARGET` part is always executed. In general, there could be other branches, see docs. `WHEN MATCHED` is usually used to `UPDATE`; `WHEN NOT MATCHED BY SOURCE` is usually used to `DELETE`, but we don't need them here.
This convoluted form of `MERGE` is equivalent to simple `INSERT`, but unlike simple `INSERT` its `OUTPUT` clause allows to refer to the columns that we need.
I will write down the table definitions explicitly. Each primary key is `IDENTITY`, and I've configured the foreign keys as well.
**Baskets**
```
CREATE TABLE [dbo].[Baskets](
[BasketId] [int] IDENTITY(1,1) NOT NULL,
[BasketName] [varchar](50) NOT NULL,
CONSTRAINT [PK_Baskets] PRIMARY KEY CLUSTERED
(
[BasketId] ASC
)
);
```
**Fruits**
```
CREATE TABLE [dbo].[Fruits](
[FruitId] [int] IDENTITY(1,1) NOT NULL,
[BasketId] [int] NOT NULL,
[FruitName] [varchar](50) NOT NULL,
CONSTRAINT [PK_Fruits] PRIMARY KEY CLUSTERED
(
[FruitId] ASC
)
);
ALTER TABLE [dbo].[Fruits] WITH CHECK
ADD CONSTRAINT [FK_Fruits_Baskets] FOREIGN KEY([BasketId])
REFERENCES [dbo].[Baskets] ([BasketId])
ALTER TABLE [dbo].[Fruits] CHECK CONSTRAINT [FK_Fruits_Baskets]
```
**Properties**
```
CREATE TABLE [dbo].[Properties](
[PropertyId] [int] IDENTITY(1,1) NOT NULL,
[FruitId] [int] NOT NULL,
[PropertyText] [varchar](50) NOT NULL,
CONSTRAINT [PK_Properties] PRIMARY KEY CLUSTERED
(
[PropertyId] ASC
)
);
ALTER TABLE [dbo].[Properties] WITH CHECK
ADD CONSTRAINT [FK_Properties_Fruits] FOREIGN KEY([FruitId])
REFERENCES [dbo].[Fruits] ([FruitId])
ALTER TABLE [dbo].[Properties] CHECK CONSTRAINT [FK_Properties_Fruits]
```
---
**Copy Basket**
At first copy one row in `Baskets` table and use `SCOPE_IDENTITY` to get the generated `ID`.
```
BEGIN TRANSACTION;
-- Parameter of the procedure. What basket to copy.
DECLARE @VarOldBasketID int = 1;
-- Copy Basket, one row
DECLARE @VarNewBasketID int;
INSERT INTO [dbo].[Baskets] (BasketName)
VALUES ('Friends Basket');
SET @VarNewBasketID = SCOPE_IDENTITY();
```
**Copy Fruits**
Then copy `Fruits` using `MERGE` and remember a mapping between old and new IDs in a table variable.
```
-- Copy Fruits, multiple rows
DECLARE @FruitIDs TABLE (OldFruitID int, NewFruitID int);
MERGE INTO [dbo].[Fruits]
USING
(
SELECT
[FruitId]
,[BasketId]
,[FruitName]
FROM [dbo].[Fruits]
WHERE [BasketId] = @VarOldBasketID
) AS Src
ON 1 = 0
WHEN NOT MATCHED BY TARGET THEN
INSERT
([BasketId]
,[FruitName])
VALUES
(@VarNewBasketID
,Src.[FruitName])
OUTPUT Src.[FruitId] AS OldFruitID, inserted.[FruitId] AS NewFruitID
INTO @FruitIDs(OldFruitID, NewFruitID)
;
```
**Copy Properties**
Then copy `Properties` using remembered mapping between old and new Fruit IDs.
```
-- Copy Properties, many rows
INSERT INTO [dbo].[Properties] ([FruitId], [PropertyText])
SELECT
F.NewFruitID
,[dbo].[Properties].PropertyText
FROM
[dbo].[Properties]
INNER JOIN @FruitIDs AS F ON F.OldFruitID = [dbo].[Properties].FruitId
;
```
Check the results, and change the rollback to a commit once you have confirmed that the code works correctly.
```
SELECT * FROM [dbo].[Baskets];
SELECT * FROM [dbo].[Fruits];
SELECT * FROM [dbo].[Properties];
ROLLBACK TRANSACTION;
```
|
How to better duplicate a set of data in SQL Server
|
[
"",
"sql",
"sql-server",
"t-sql",
"stored-procedures",
""
] |
I want to extract content from text in an SQL field after a keyword. I have a field called `Description` in a table, the content for that field is:
> asdasf keyword dog
>
> aeee keyword cat
>
> ffffaa keyword wolf
I want to extract the text after "keyword " (in this case `dog`, `cat` and `wolf`) and save it in a view or show it with a select.
|
Here is an example using `SUBSTRING()`:
```
SELECT SUBSTRING(YourField, CHARINDEX(Keyword,YourField) + LEN(Keyword), LEN(YourField))
```
Another example:
```
declare @YourField varchar(200) = 'Mary had a little lamb'
declare @Keyword varchar(200) = 'had'
select SUBSTRING(@YourField,charindex(@Keyword,@YourField) + LEN(@Keyword), LEN(@YourField) )
```
Result:
```
a little lamb
```
Please note that there is a space before the 'a' in this string.
|
Just to add to @psoshmo's answer:
if the keyword is not found, `CHARINDEX` returns 0 and the expression substrings the original string anyway. To counter this I have added a `CASE`, as below:
```
SUBSTRING(YourField, CHARINDEX('Keyword',YourField) + (CASE WHEN ', ' + YourField +',' LIKE '%Keyword%' THEN LEN('Keyword') ELSE 1 END), LEN(YourField))
```
|
Extract string from a text after a keyword
|
[
"",
"sql",
"sqlite",
""
] |
**EXPLANATION**
Imagine that I have 2 tables: `FormFields`, where column names are stored as values and should be pivoted, and a second table `FilledValues` with the user's filled-in values, each with a `FormFieldId` provided.
**PROBLEM**
As you see (below in the *SAMPLE DATA* section), in the `FormFields` table I have duplicate names but different IDs. I need to ensure that after joining the tables, all values from the `FilledValues` table are assigned to column names, not to IDs.
You will see what I need better in the *DESIRED OUTPUT* section below.
**SAMPLE DATA**
`FormFields`
```
ID Name GroupId
1 col1 1
2 col2 1
3 col3 1
4 col1 2
5 col2 2
6 col3 2
```
`FilledValues`
```
ID Name FormFieldId GroupID
1 a 2 1
2 b 3 1
3 c 1 1
4 d 4 2
5 e 6 2
6 f 5 2
```
**OUTPUT FOR NOW**
```
col1 col2 col3
c a b -- As you see, it returns only the values for FormFieldId 1, 2, 3;
-- d, e, f are lost because the col names are duplicated under different IDs
```
**DESIRED OUTPUT**
```
col1 col2 col3
c a b
e f d
```
**QUERY**
```
SELECT * FROM
(
SELECT FF.Name AS NamePiv,
FV.Name AS Val1
FROM FormFields FF
JOIN FilledValues FV ON FF.Id = FV.FormFieldId
) x
PIVOT
(
MIN(Val1)
FOR NamePiv IN ([col1],[col2],[col3])
) piv
```
**[SQL FIDDLE](http://sqlfiddle.com/#!3/164e0/1)**
How can I produce the OUTPUT with the multiple rows?
|
Since you are using PIVOT, the data is being aggregated, so you only return one value for each column being grouped. You don't have any column in your subquery that is unique and used in the grouping aspect of PIVOT to return multiple rows. In order to do this you need some such value. If you have a column with a unique value for each "group", then you would use that; otherwise you can use a windowing function like `row_number()`.
`row_number()` will create a sequence number for each `FF.Name`, meaning that if you have 2 `col1` rows you will generate a `1` for one row and a `2` for the other. Once this is included in your subquery, you have a unique value that is used when aggregating your data, and you will return multiple rows:
```
SELECT [col1],[col2],[col3]
FROM
(
SELECT
FF.Name AS NamePiv,
FV.Name AS Val1,
rn = row_number() over(partition by ff.Name order by fv.Id)
FROM FormFields FF
JOIN FilledValues FV ON FF.Id = FV.FormFieldId
) x
PIVOT
(
MIN(Val1)
FOR NamePiv IN ([col1],[col2],[col3])
) piv;
```
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/164e0/3). The output is:
```
| col1 | col2 | col3 |
|------|------|------|
| c | a | b |
| e | f | d |
```
|
Just adding `GroupId` to the pivot source query will fix your problem:
```
SELECT * FROM (
SELECT FF.Name AS NamePiv,
FV.Name AS Val1,
ff.groupid
FROM FormFields FF
JOIN FilledValues FV ON FF.Id = FV.FormFieldId
) x
PIVOT
(
MIN(Val1)
FOR NamePiv IN ([col1],[col2],[col3])
) piv
```
[**SQLFIDDLE DEMO**](http://sqlfiddle.com/#!3/164e0/6)
|
Pivot duplicate column names and get all values for columns
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"pivot",
""
] |
I have the following query.
```
with getstock as
(
select
a.bomparent, a.bomchild, a.bomqty, a.bompos, a.baltmethod, a.bomissue
from
bom a
where
bomparent = 'QZ10-0262601' and baltmethod = '1'
union all
select
parent.bomparent, parent.bomchild, parent.bomqty, parent.bompos, parent.baltmethod, parent.bomissue
from
getstock as a
inner join
bom as parent, stock as s on parent.bomparent = a.bomchild
where
parent.baltmethod = '1' and parent.bomparent = s.stocknum
)
select *
from getstock
```
When I run it, I get the following error.
> Msg 102, Level 15, State 1, Line 9
> Incorrect syntax near ','.
Where is the issue?
|
You cannot use multiple tables in a `JOIN` clause. It's possible in `FROM`, but I won't suggest that; that's the old-style `JOIN` syntax. You should rewrite your query using explicit `JOIN`s:
```
with getstock as
(
select
a.bomparent, a.bomchild, a.bomqty, a.bompos, a.baltmethod, a.bomissue
from
bom a
where
bomparent = 'QZ10-0262601' and baltmethod = '1'
union all
select
parent.bomparent, parent.bomchild, parent.bomqty, parent.bompos, parent.baltmethod, parent.bomissue
from
getstock as a
inner join
bom as parent on parent.bomparent = a.bomchild
inner join
stock as s on parent.bomparent = s.stocknum
where
parent.baltmethod = '1'
)
select *
from getstock
```
For more reading: [Avoid using old-style `JOIN` syntax.](https://sqlblog.org/2009/10/08/bad-habits-to-kick-using-old-style-joins)
|
```
with getstock as
(
select a.bomparent, a.bomchild, a.bomqty, a.bompos,
a.baltmethod, a.bomissue
from bom a
where bomparent = 'QZ10-0262601' and baltmethod = '1'
union all
select parent.bomparent, parent.bomchild, parent.bomqty,
parent.bompos, parent.baltmethod, parent.bomissue
from getstock as a
inner join bom as parent
on parent.bomparent = a.bomchild
inner join stock as s
on parent.bomparent = s.stocknum
where parent.baltmethod = '1'
)
select *
from getstock
```
|
Incorrect syntax error in join query sql
|
[
"",
"sql",
"sql-server",
""
] |
Let's say we have:
```
SELECT Name, Surname, Salary, TaxPercentage
FROM Employees
```
returns:
```
Name |Surname |Salary |TaxPercentage
--------------------------------------
Moosa | Jacobs | $14000 | 13.5
Temba | Martins | $15000 | 13.5
Jack | Hendricks | $14000 | 13.5
```
I want it to return:
```
Name |Surname | Salary |TaxPercentage
-------------------------------------------
Moosa | Jacobs | $14000 | NULL
Temba | Martins | $15000 | NULL
Jack | Hendricks| $14000 | 13.5
```
Since TaxPercentage's value is repeated, I want it to appear only once, at the end.
|
In sql server 2012 and above you can use the [`Lead`](https://msdn.microsoft.com/en-us/library/hh213125(v=sql.110).aspx) window function to get the value of the next row. Assuming you have some way to sort the data (like an identity column), you can use this to your advantage:
```
SELECT Name,
Surname,
Salary,
CASE WHEN TaxPercentage = LEAD(TaxPercentage) OVER (ORDER BY Id) THEN
NULL
ELSE
TaxPercentage
END As TaxPercentage
FROM Employees
ORDER BY Id
```
See [fiddle](http://sqlfiddle.com/#!6/fa954/1) (thanks to [Lasse V. Karlsen](https://stackoverflow.com/users/267/lasse-v-karlsen))
|
If for some reason you can't use `LEAD()` then this should work:
```
with T as (
SELECT
Name, Surname, Salary, TaxPercentage,
row_number() over (order by TaxPercentage /* ??? */) as rn
FROM Employees
)
select
Name, Surname, Salary,
nullif(
TaxPercentage,
(select t2.TaxPercentage from T as t2 where t2.rn = t.rn + 1)
) as TaxPercentage
from T as t
```
|
How to make a select statement to return "NULLs" if the value is a repetition in SQL
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2012",
""
] |
I have a self-join query:
```
;WITH t as
(
SELECT
ROW_NUMBER() OVER (ORDER BY dbo.DIM_PROJECT_TECH_OBJ.FUNCTIONAL_LOCATION,DATEADD(ms, DATEDIFF(ms, '00:00:00', CONVERT(VARCHAR,REPLACE(dbo.FACT_MEASUREMENT.Doc_Time,'24:00:00','23:59:59'),102)), CONVERT(DATETIME, dbo.DIM_TIME_USAGE.DATE)) ASC) AS Rowy,
dbo.FACT_MEASUREMENT.FACT_MEASUREMENT_KEY,
dbo.FACT_MEASUREMENT.Doc_Number,
dbo.FACT_MEASUREMENT.Created_By,
dbo.FACT_MEASUREMENT.Text,
dbo.FACT_MEASUREMENT.Doc_Time,
dbo.FACT_MEASUREMENT.Date_Loaded,
dbo.DIM_VC_MEASURE.VALUATION_CODE_AND_DESC,
dbo.DIM_TIME_USAGE.DATE,
dbo.DIM_PROJECT_TECH_OBJ.FUNCTIONAL_LOCATION,
dbo.DIM_VC_MEASURE.VALUATION_CODE,
CONVERT(VARCHAR,REPLACE(dbo.FACT_MEASUREMENT.Doc_Time,'24:00:00','23:59:59'),102) AS TIME,
DATEADD(ms, DATEDIFF(ms, '00:00:00', CONVERT(VARCHAR,REPLACE(dbo.FACT_MEASUREMENT.Doc_Time,'24:00:00','23:59:59'),102)), CONVERT(DATETIME, dbo.DIM_TIME_USAGE.DATE)) AS DATUM
FROM
dbo.DIM_PROJECT_TECH_OBJ INNER JOIN dbo.FACT_MEASUREMENT ON (dbo.FACT_MEASUREMENT.PROJECT_TECH_OBJ_KEY=dbo.DIM_PROJECT_TECH_OBJ.PROJECT_TECH_OBJ_KEY)
INNER JOIN dbo.DIM_TIME_USAGE ON (dbo.FACT_MEASUREMENT.TIME_KEY=dbo.DIM_TIME_USAGE.TIME_KEY)
INNER JOIN dbo.DIM_VC_MEASURE ON (dbo.DIM_VC_MEASURE.VALUATION_CODE_KEY=dbo.FACT_MEASUREMENT.VALUATION_CODE_KEY)
WHERE
dbo.FACT_MEASUREMENT.Measurement_Position = 'AVAILABILITY'
AND
dbo.DIM_PROJECT_TECH_OBJ.FUNCTIONAL_LOCATION IN ('XXX','YYY','ZZZ')
)
select t.*, tprev.DATUM AS PRE_DATUM, tprev.VALUATION_CODE AS PRE_CODE, DATEDIFF (minute,tprev.DATUM, t.DATUM) AS DELTA_MIN
from t join
t tprev
on tprev.rowy = t.rowy - 1 AND tprev.FUNCTIONAL_LOCATION = t.FUNCTIONAL_LOCATION ;
```
This returns a result set of the form:
```
FUNCTIONAL LOC DATUM CODE PRE_DATUM PRE_CODE (-> Other fields)
XXX 01/07/2015 A 06/06/2015 Y
XXX 05/07/2015 B 01/07/2015 A
XXX 10/07/2015 C 05/07/2015 B
YYY 03/07/2015 B 15/06/2015 K
YYY 09/07/2015 C 03/07/2015 B
YYY 15/07/2015 A 09/07/2015 C
```
Now I would like to create an outer join with calendar dates (imagine from 01/07 to 10/07) and obtain something like:
```
FUNCTIONAL LOC DATUM CODE PRE_DATUM PRE_CODE
XXX 01/07/2015 A 06/06/2015 Y
XXX 02/07/2015 06/06/2015 Y
XXX 03/07/2015 06/06/2015 Y
XXX 04/07/2015 06/06/2015 Y
XXX 05/07/2015 B 01/07/2015 A
XXX 06/07/2015 01/07/2015 A
XXX 07/07/2015 01/07/2015 A
XXX 08/07/2015 01/07/2015 A
XXX 09/07/2015 01/07/2015 A
XXX 10/07/2015 C 05/07/2015 B
YYY 01/07/2015 15/06/2015 K
YYY 02/07/2015 15/06/2015 K
YYY 03/07/2015 B 15/06/2015 K
YYY 09/07/2015 C 03/07/2015 B
YYY 10/07/2015 03/07/2015 B
```
Basically, when no code is found for a calendar date, use the previous one.
Any suggestion/idea?
Thanks in advance.
S.
|
You can try with the following steps:
* define this intermediate result AND the "calendar" table as a chain of the `WITH` statement
* perform a `LEFT JOIN` of the "calendar" table and the intermediate result defined on the point above (some DB requires specifying `LEFT OUTER JOIN`)
* perform the row-numbering join technique again on the result of the two steps above
Example of the first two steps:
```
WITH t as ( ... ),
t2 as ( select t.*, tprev.time
from t left join
t tprev
on tprev.rowy = t.rowy - 1 ),
cal as ( ... )
SELECT *
FROM cal
LEFT JOIN t2
ON cal.DATUM = t2.DATUM;
```
Please notice that:
* the intermediate result of your original question is now defined as `t2`
* in the final result you mocked, you can't keep the values of DATUM as coming from the original intermediate result as reported in your question, because values are missing and I suppose what you want is indeed to display all calendar values instead
* the third point of the steps above can be obtained by performing same technique as you illustrated in the original question.
* you will need to specify the ordering in the final results.
Hope this helps!
|
You need to `cross join` the result of your query with a calendar table. Something like this:
```
WITH t
AS (SELECT Row_number()
OVER (
ORDER BY Q2.FUNCTIONAL_LOCATION) AS Rowy,
Q1.FACT_MEASUREMENT_KEY,
CONVERT(VARCHAR(255), Q1.Doc_Time, 102) AS TIME
FROM dbo.DIM_PROJECT_TECH_OBJ Q2
INNER JOIN dbo.FACT_MEASUREMENT Q1
ON Q1.PROJECT_TECH_OBJ_KEY = Q2.PROJECT_TECH_OBJ_KEY
WHERE Q1.Measurement_Position = 'XXX'),
result
AS (SELECT t.*,
tprev.time
FROM t
LEFT JOIN t tprev
ON tprev.rowy = t.rowy - 1),
date_cook
AS (SELECT FUNTIONAL_LOC,
c.dates,
PRE_DATUM,
PRE_CODE
FROM result r
CROSS JOIN calender c
WHERE c.dates BETWEEN '2015-07-01' AND '2015-07-10')
SELECT dc.FUNTIONAL_LOC,
dc.dates AS DATUM,
Isnull(r.code, '') AS code,
dc.PRE_DATUM,
dc.PRE_CODE
FROM date_cook dc
LEFT OUTER JOIN result r
ON dc.FUNTIONAL_LOC= r.FUNTIONAL_LOC
AND dc.dates = r.DATUM
AND dc.PRE_DATUM = r.PRE_DATUM
AND dc.PRE_CODE = r.PRE_CODE
```
|
SQL SelfJoin + Left Outer JOIN Calendar
|
[
"",
"sql",
"database",
"calendar",
""
] |
Trying to do a partial string match here, but am getting a problem with the LIKE operator. I'm sure it's the syntax, but I cannot see it.
```
SELECT Name
FROM Table1 a
INNER JOIN Table2 b ON a.Name = b.FullName LIKE '%' + a.Name + '%'
```
I get an error message when I execute this
> Msg 156, Level 15, State 1, Line 4
> Incorrect syntax near the keyword 'LIKE'.
|
Try this
```
SELECT distinct Name, FullName
FROM Table1 a
INNER JOIN Table2 b ON (b.FullName LIKE '%' + a.Name + '%' OR a.Name like '%'+b.FullName+'%')
```
|
I had this exact issue with Postgres.
It's a bit messy, but I used `textcat` to add the wildcard, since the `'%' +` concatenation above didn't play well with my query in Metabase.
Try this:
```
SELECT distinct Name, FullName
FROM Table1 a
INNER JOIN Table2 b ON b.FullName LIKE textcat(textcat('%',a.Name),'%')
```
|
Partial String match SQL - Inner Join
|
[
"",
"sql",
"sql-server",
"join",
"sql-like",
""
] |
```
SELECT ItemID, Name
FROM tblItem
WHERE ItemID IN (4, 38, 39, 37, 16, 8, 15,14)
```
Using the SQL query above, I get the result set below, which is ordered by `ItemID`.
I want the result set in the order the values appear in the `IN` list, without any sorting applied.
```
4 Item1
8 Item2
14 Item3
15 Item4
16 Item5
37 Item6
38 Item7
39 Item8
```
|
To get the items in a specific order you have to sort them that way. Specifying the values in the `in` statement in a specific order won't make the database fetch them in that order. The values in the `in` statement will be processed in some way, like sorted or put in a hash set so that they can be matched to an index or used in a table scan. Keeping the items in the original order would only make the query slower.
You can use a `case` to translate the identities into an ordering:
```
select
ItemID, Name
from
tblItem
where
ItemID in (4, 38, 39, 37, 16, 8, 15,14)
order by
case ItemID
when 4 then 1
when 38 then 2
when 39 then 3
when 37 then 4
when 16 then 5
when 8 then 6
when 15 then 7
when 14 then 8
end
```
|
If you want the result set as specified by the `in` clause (or anything else), then you need to do an `order by`.
One way of doing this is using a `VALUES` clause and `join`:
```
SELECT i.ItemID, i.Name
FROM tblItem i JOIN
(VALUES(4, 1), (38, 2), (39, 3), (37, 4), (16, 5), (8, 6), (15, 7), (14, 8)
) ids(id, priority)
ON i.ItemId = ids.id
ORDER BY ids.priority;
```
|
SQL query without Order By
|
[
"",
"sql",
"sql-server",
"sql-server-2008-r2",
""
] |
Maybe the title was not the best I could use to describe the issue.
An example of the table structure I am dealing with is in the image below. I need to write a query to pull all records for manufactures that have more than one record, so the end result would be LINUX UBUNTU 5.6 and LINUX REDHAT 7.8.
Just returning the duplicated MANUFACTURE is easy; I can do that with grouping and `having count(*) > 1`. But returning the duplicated manufacture together with its corresponding columns is the issue I am running into.
[](https://i.stack.imgur.com/E8WJ1.png)
|
> returning the duplicated MANUFACTURE is easy and I can do that with using grouping `having count(*) > 1`
That's a good start. Now use that list of `manufacture`s to select the rest of the data:
```
SELECT *
FROM software
WHERE manufacture IN (
-- This is your "HAVING COUNT(*) > 1" query inside.
-- It drives the selection of rows in the outer query.
SELECT manufacture
FROM software
GROUP BY manufacture
HAVING COUNT(*) > 1
)
```
|
try this:
```
Select * from myTable
Where Manufacture In
(Select Manufacture
from myTable
Group By Manufacture
Having count(*) > 1)
```
|
SQL Group by Having > 1
|
[
"",
"sql",
"sybase",
"having",
""
] |
I have a table named `Images` that looks like this:
```
ImageKey RefKey ImageParentKey Sequence
-------- ------ -------------- --------
1234570 111111 1234567 3
1234568 111111 1234567 1
1234569 111111 1234567 2
1234567 111112 1234567 0
1234571 111112 1234571 0
1234572 111112 1234571 1
1234573 111112 1234571 2
```
The `ImageKey` column is the Primary Key for the table.
The `RefKey` column determines which file (in another table) the image is associated with.
The `ImageParentKey` column holds the value of the main ImageKey that other subsequent images are associated with.
The Sequence column determines the place of the image within the file
I'm trying to find all instances where the `ImageParentKey`=`ImageKey` *AND* where all other images with the same `ImageParentKey` have a different `RefKey`.
Basically, I need to find the location where every image belonging to a file is out of order and out of file (determined by the Sequence and RefKey columns).
The desired output would be the fourth row:
```
ImageKey RefKey ImageParentKey Sequence
-------- ------ --------- --------
1234567 111112 1234567 0
```
This row fits all the criteria:
* Its `ImageKey` equals its `ImageParentKey`
* Its `RefKey` does not match the other images with the same `ImageParentKey`
Here's what I have so far (which sadly does nothing near what I need it to do):
```
SELECT img.*
FROM Images AS img
LEFT JOIN Invoices AS inv
ON inv.InvoiceKey=img.ImageKey
WHERE ImageParentKey<>0 AND
img.ImageParentKey=img.ImageKey
```
Any help regarding this is greatly appreciated!
|
Here's my approach:
```
SELECT *
FROM Images i
WHERE
/* ImageKey equals its ImageParentKey */
i.ImageKey = i.ImageParentKey
AND
/* There's a RefKey that doesn't match other images with the same ImageParentKey */
EXISTS
(
SELECT *
FROM Images i2
WHERE i2.ImageParentKey = i.ImageKey AND i.RefKey <> i2.RefKey
)
```
Output:
```
ImageKey RefKey ImageParentKey Sequence
-------- ------ -------------- -----------
1234567 111112 1234567 0
```
|
You can do this with an exclusion join back to the same table:
```
SELECT i1.*
FROM Images i1
LEFT JOIN Images i2 ON i2.ImageParentKey = i1.ImageParentKey AND i2.RefKey = i1.RefKey
AND i2.Sequence <> i1.Sequence
WHERE i1.ImageKey = i1.ImageParentKey AND i2.ImageKey IS NULL
```
Results:
```
1234567 111112 1234567 0
```
[Sql Fiddle](http://sqlfiddle.com/#!3/0c1ca/2)
|
SQL Dependent on another SQL
|
[
"",
"sql",
""
] |
I want to copy 1 row from `Table A` to `Table A2`.
Here is my code:
```
Insert into A2
select * from A1
where ID=290
```
`Id` is a `Primary Key` and both tables have same structure.
Obviously the above code failed, because it cannot insert a duplicate `Primary Key` (and I don't want it to; I want a new `PK` to be generated).
I know I can use an explicit insert, but this is a very wide table with hundreds of columns.
Is there any way to insert every column except `ID`, which is the `Primary Key`?
NO: that is NOT the solution. Please spend a minute reading the question carefully before closing it!
I said I know this is possible with an explicit insert. I want to know if it can be done with shorter SQL, since this is a VERY wide table.
If you have the solution, please mention the reference here.
|
There is no `select * except some columns`. Your sql must contain the full list of columns except the id for both tables.
However, this does not mean you have to manually type them. You can generate the columns list by selecting them from `sys.columns` or from `information_schema.columns`.
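For example, a sketch under the question's assumptions (table `A1` copied to `A2`, primary key column `ID`, row 290): build the column list from `INFORMATION_SCHEMA.COLUMNS` and run the insert dynamically:

```sql
-- Sketch: build "col1, col2, ..." from every column of A1 except ID,
-- then construct and execute the INSERT dynamically.
DECLARE @cols nvarchar(max), @sql nvarchar(max);

SELECT @cols = STUFF((
    SELECT ', ' + QUOTENAME(COLUMN_NAME)
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME = 'A1' AND COLUMN_NAME <> 'ID'
    ORDER BY ORDINAL_POSITION
    FOR XML PATH('')), 1, 2, '');

SET @sql = N'INSERT INTO A2 (' + @cols + N') SELECT ' + @cols
         + N' FROM A1 WHERE ID = 290;';

EXEC sp_executesql @sql;
```

Since `ID` is excluded from both the column list and the select list, the identity on `A2` generates a new key.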
If you are using SSMS, you can expand the columns list of one of the tables and simply drag the columns to your query window. If you want to drag them all, simply drag the columns "folder" icon to your query window.
|
Well, this might be a little bit overkill but it'll do what you want. This way no matter how big the tables get (columns added/removed) you'll get an exact copy.
EDIT: In response to the comment below, and in the interest of full disclosure, the PK MUST be an identity and the columns MUST be in the same order.
```
SELECT * INTO #TMP FROM A1 WHERE ID = <THE ID YOU WANT>
ALTER TABLE #TMP DROP COLUMN ID
INSERT INTO A2
SELECT * FROM #TMP
DROP TABLE #TMP
```
|
How to insert one row from Table A to Table B but not the Primary key
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I'm new to SQL and am wondering what is wrong in my syntax or logic here. I have my own table `clients` like
```
id | name | postid | sord |
--------------------------------------------------------
1 Paul Allen 19 1
2 McDermot 8 2
```
and the `postid` column in that table is referring to the same thing as the `id` column in the `wp_posts` table like
```
id | ... | ... | guid | ...
--------------------------------------------------
1 ... ... images/somepic.jpg ...
.
.
8 ... ... images/mypic.gif ...
.
.
.
19 ... ... images/thatpic.png ...
.
.
.
```
What I want to return is a table of rows like
```
id | name | postid | sord | guid
--------------------------------------------------------
1 Paul Allen 19 1 images/mypic.gif
2 McDermot 8 2 images/thatpic.png
```
so that I have the corresponding image paths that I can use to build out my page. I believe the method for this scenario is a full outer join. My PHP is like
```
global $wpdb;
$allClients = $wpdb->get_results("
SELECT clients.id, clients.name, clients.postid, clients.sord, wp_posts.guid
FROM clients
FULL OUTER JOIN wp_posts
ON clients.postid=wp_posts.id
ORDER BY clients.sord
");
$numClients = count($allClients);
```
but for some reason that is returning 0 results (i.e. `$numClients` is `0`) instead of the expected 2. What am I doing wrong?
|
MySQL does not support `full outer join`. But you don't need it anyway, if you have a proper foreign key relationship. For instance, if you want all clients, even those without posts:
```
SELECT c.id, c.name, c.postid, c.sord, p.guid
FROM clients c LEFT JOIN
wp_posts p
ON c.postid = p.id
ORDER BY c.sord;
```
|
If you're using MySQL, there is no `FULL OUTER JOIN`. What you can do instead is a `LEFT OUTER JOIN`, a `UNION` and a `RIGHT OUTER JOIN`. This stack overflow answer has an example:
<https://stackoverflow.com/a/10137216/4629105>
```
select * from A as a
left outer join B as b on a.col = b.col
union
select * from A as a
right outer join B as b on a.col = b.col
```
|
Where is the error in my full outer join?
|
[
"",
"mysql",
"sql",
""
] |
I have a table similar to the one shown. It contains a list of user IDs, the hour value for each hour of the day, and an Avail flag indicating whether that user is available in that hour.
I need to list all user IDs which are available for a number of consecutive hours, defined as @n.
```
#####################
# UID # Avail # Hour#
#####################
# 123 # 1 # 0 #
# 123 # 1 # 1 #
# 123 # 0 # 2 #
# 123 # 0 # 3 #
# 123 # 0 # 4 #
# 123 # 1 # 5 #
# 123 # 1 # 6 #
# 123 # 1 # 7 #
# 123 # 1 # 8 #
# 341 # 1 # 0 #
# 341 # 1 # 1 #
# 341 # 0 # 2 #
# 341 # 1 # 3 #
# 341 # 1 # 4 #
# 341 # 0 # 5 #
# 341 # 1 # 6 #
# 341 # 1 # 7 #
# 341 # 0 # 8 #
######################
```
This should result in the following output for @n=3
```
#######
# UID #
#######
# 123 #
#######
```
I have attempted to use
`ROW_NUMBER() OVER (PARTITION BY UID, Avail ORDER BY UID, Hour)`
to assign a number to each row, partitioned by the UID and whether or not the user is flagged as available. However, this does not work: the periods of availability may change multiple times a day, and the `ROW_NUMBER()` function only keeps two counts per user based on the Avail flag.
|
If you're using SQL Server 2012+ you could use a windowed SUM, but you have to specify the number of rows in the window frame in advance, as it won't accept variables, so it's not that flexible:
```
;with cte as
(
select distinct
UID,
SUM(avail) over (partition by uid
order by hour
rows between current row and 2 following
) count
from table1
)
select uid from cte where count = 3;
```
If you want flexibility you could make it a stored procedure and use dynamic SQL to build and execute the statement, something like this:
```
create procedure testproc (@n int) as
declare @sql nvarchar(max)
set @sql = concat('
;with cte as
(
select distinct
UID,
SUM(avail) over (partition by uid
order by hour
rows between current row and ', @n - 1 , ' following
) count
from table1
)
select uid from cte where count = ' , @n , ';')
exec sp_executesql @sql
```
and execute it using `execute testproc 3`
An even more inflexible solution is to use correlated subqueries, but then you have to add another subquery for each added count:
```
select distinct uid
from Table1 t1
where Avail = 1
and exists (select 1 from Table1 where Avail = 1 and UID = t1.UID and Hour = t1.Hour + 1)
and exists (select 1 from Table1 where Avail = 1 and UID = t1.UID and Hour = t1.Hour + 2);
```
And yet another way, using row\_number to find islands and then filtering by sum of avail for each island:
```
;with c as (
select
uid, avail,
row_number() over (partition by uid order by hour)
- row_number() over (partition by uid, avail order by hour) grp
from table1
)
select uid from c
group by uid, grp
having sum(avail) >= 3
```
|
Didn't have time to polish this ... but this is one option.
* First CTE (c) creates the new column Id
* Second CTE (mx) gets the max row number since you cannot use aggregates in recursive CTEs
* Final CTE (rc) is where the meat is.
```
;WITH c AS (
SELECT ROW_NUMBER() OVER (ORDER BY [UID],[Hour]) Id,
[UID],Avail,[Hour]
FROM #tmp
), mx AS (
SELECT MAX(Id) MaxRowCount FROM c
), rc AS (
SELECT Id, [UID], Avail, [Hour], c.Avail AS CummulativeHour
FROM c
WHERE Id = 1
UNION ALL
SELECT c.Id, c.[UID], c.Avail, c.[Hour], CASE WHEN rc.Avail = 0 OR c.Avail = 0 OR rc.[UID] <> c.[UID] THEN c.Avail
WHEN rc. Avail = 1 AND c.Avail = 1 THEN rc.CummulativeHour + 1 END AS CummulativeHour
FROM rc
JOIN c
ON rc.Id + 1 = c.Id
WHERE c.Id <= (SELECT mx.MaxRowCount FROM mx)
)
SELECT * FROM rc
```
Here is the sample data creation...
```
CREATE TABLE #tmp ([UID] INT, Avail INT, [Hour] INT)
INSERT INTO #tmp
( UID, Avail, Hour )
VALUES (123,1,0),
(123,1,1),
(123,0,2),
(123,0,3),
(123,0,4),
(123,1,5),
(123,1,7),
(123,1,8),
(341,1,0),
(341,0,2),
(341,1,3),
(341,1,4),
(341,0,5),
(341,1,6),
(341,1,7),
(341,0,8)
```
|
SQL Find consecutive numbers in groups
|
[
"",
"sql",
"sql-server",
""
] |
I am doing the following to select nodes from an XML string; the first part is just to show you what I'm selecting from.
The issue is that I want to do this for various different XML columns, and I'd like not to have to specify the node name for each column in my select. Is there a way to select all nodes as columns automatically, or even a cursor using a count?
```
DECLARE @MyXML XML
SET @MyXML = (SELECT
CAST (
'<AllowAdd>N</AllowAdd>
<Allowed>NUMSEG</Allowed>
<AllSegmentsEqualValue>N</AllSegmentsEqualValue>
<ClusterLevelSA>Y</ClusterLevelSA>
<ClusterLevelPremium>Y</ClusterLevelPremium>
<AllowAssignedAndInTrust>N</AllowAssignedAndInTrust>
<MinSegments>1</MinSegments>
<MaxSegments>100</MaxSegments>
<DefaultSegments>10</DefaultSegments>
<RoundPremiumsTo>2</RoundPremiumsTo>
<TaxDeferredAllowance>0.05</TaxDeferredAllowance>
<HigherTaxValueBands>HTVB</HigherTaxValueBands>
<NumberYearsCalculationType>NONFIN</NumberYearsCalculationType>
<OnShore>POLICY</OnShore>
<OffShore>NONFIN</OffShore>' as XML) as x)
SELECT
Data.Col.value('(/AllowAdd)[1]','Varchar(10)') as [Allow Addition of]
,Data.Col.value('(/Allowed)[1]','Varchar(10)') as [Allowed]
,Data.Col.value('(/MinSegments)[1]','Int') as [Min Segments]
,Data.Col.value('(/MaxSegments)[1]','Int') as [Max Segments]
,Data.Col.value('(/DefaultSegments)[1]','Int') as [Default Segments]
,Data.Col.value('(/RoundPremiumsTo)[1]','Int') as [Round Premiums To]
,Data.Col.value('(/AllSegmentsEqualValue)[1]','Varchar(10)') as [All Segments Equal Value]
--,Data.Col.value('(/TaxDeferredAllowance)[1]','Varchar(10)') as [Tax Deferred Allowance]
,Data.Col.value('(/HigherTaxValueBands)[1]','Varchar(10)') as [Higher Tax Value Bands]
,Data.Col.value('(/NumberYearsCalculationType)[1]','Varchar(10)') as [Number Years Calculation Type]
,Data.Col.value('(/OnShore)[1]','Varchar(10)') as [OnShore]
,Data.Col.value('(/OffShore)[1]','Varchar(10)') as [OffShore]
FROM @MyXML.nodes('/OffShore') AS Data(Col)
```
|
I hope this is what you are waiting for :)
```
DECLARE @MyXML XML
SET @MyXML = (SELECT
CAST (
'<AllowAdd>N</AllowAdd>
<Allowed>NUMSEG</Allowed>
<AllSegmentsEqualValue>N</AllSegmentsEqualValue>
<ClusterLevelSA>Y</ClusterLevelSA>
<ClusterLevelPremium>Y</ClusterLevelPremium>
<AllowAssignedAndInTrust>N</AllowAssignedAndInTrust>
<MinSegments>1</MinSegments>
<MaxSegments>100</MaxSegments>
<DefaultSegments>10</DefaultSegments>
<RoundPremiumsTo>2</RoundPremiumsTo>
<TaxDeferredAllowance>0.05</TaxDeferredAllowance>
<HigherTaxValueBands>HTVB</HigherTaxValueBands>
<NumberYearsCalculationType>NONFIN</NumberYearsCalculationType>
<OnShore>POLICY</OnShore>
<OffShore>NONFIN</OffShore>' as XML) as x)
DECLARE @Output nvarchar(max) = N''
DECLARE @PivotList nvarchar(max)
SELECT
@PivotList = COALESCE(@PivotList + ', ', N'') + N'[' + XC.value('local-name(.)', 'varchar(100)') + N']'
FROM
@MyXML.nodes('/*') AS XT(XC)
SET @Output = N'SELECT
'+@PivotList+N'
FROM
(
SELECT
ColName = XC.value(''local-name(.)'', ''nvarchar(100)''),
ColValue = ISNULL(NULLIF(CONVERT(nvarchar(max),XC.query(''./*'')),''''),XC.value(''.'',''nvarchar(max)''))
FROM
@MyXML.nodes(''/*'') AS XT(XC)
) AS s
PIVOT
(
MAX(ColValue)
FOR ColName IN ('+@PivotList+N')
) AS t;'
EXEC sp_executesql @Output, N'@MyXml xml', @MyXML = @MyXML;
```
|
Given your input XML, you can try to use this:
```
SELECT
ColName = XC.value('local-name(.)', 'varchar(100)'),
ColValue = xc.value('(.)[1]', 'varchar(100)')
FROM
@MyXML.nodes('/*') AS XT(XC)
```
This will output each XML element found under the root - its name and value - as a list:
[](https://i.stack.imgur.com/izSk4.png)
Of course, since it's a very generic approach, you cannot really define the proper datatype for each column in the second `xc.value()` - you basically get everything as a string.
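Outside SQL Server, the same generic "walk every child of the root and emit (name, value) pairs" idea can be sketched with Python's `xml.etree.ElementTree`; the tiny document below is a made-up fragment, not the question's full XML:

```python
import xml.etree.ElementTree as ET

# Hypothetical sample mirroring the structure: a root with scalar children.
doc = ET.fromstring(
    "<root><AllowAdd>N</AllowAdd><MinSegments>1</MinSegments>"
    "<MaxSegments>100</MaxSegments></root>"
)

# Same idea as @MyXML.nodes('/*') + local-name(.): each child element
# becomes a (name, value) pair, with every value as a string.
pairs = [(child.tag, (child.text or "")) for child in doc]
print(pairs)  # [('AllowAdd', 'N'), ('MinSegments', '1'), ('MaxSegments', '100')]
```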
|
Returning all child nodes as columns from XML in Sql Server
|
[
"",
"sql",
"sql-server",
"xml",
"sqlxml",
""
] |
I have a list of customer\_ids, the date on which some information was changed, and the corresponding changes. I would like to number each change, by order of date, on each customer. So for example; I have something that looks like the following
```
Cust_id Date information
-----------------------------------------------------
12345 2015-04-03 blue hat
12345 2015-04-05 red scarf
54321 2015-04-12 yellow submarine
```
and I would like an output which looks something like this;
```
cust_id change_number Date information
---------------------------------------------------------------
12345 1 2015-04-03 blue hat
12345         2               2015-04-05     red scarf
54321 1 2015-04-12 yellow submarine
```
This will be quite a big table, so it will need to be somewhat efficient.
There will be at most 1 entry per customer per day.
Any help you can give is appreciated.
|
As Indian said, try this:
```
select cust_id,
Row_number() over(partition by cust_id order by date) change_number,
Date,
information
from tablename;
```
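The `ROW_NUMBER() OVER (PARTITION BY ... ORDER BY ...)` pattern above can be tried end-to-end on the question's sample rows in SQLite (window functions need SQLite 3.25+, which ships with Python 3.8+); this harness is illustrative only, not part of the answer:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE changes (cust_id INT, date TEXT, information TEXT)")
conn.executemany(
    "INSERT INTO changes VALUES (?, ?, ?)",
    [(12345, "2015-04-03", "blue hat"),
     (12345, "2015-04-05", "red scarf"),
     (54321, "2015-04-12", "yellow submarine")],
)

# Number each change per customer, ordered by date within the customer.
rows = conn.execute("""
    SELECT cust_id,
           ROW_NUMBER() OVER (PARTITION BY cust_id ORDER BY date) AS change_number,
           date, information
    FROM changes
    ORDER BY cust_id, change_number
""").fetchall()
print(rows)
```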
|
If you want to order over a change number like that you need to use an inner select like this:
```
SELECT *
FROM (
SELECT *
, ROW_NUMBER() OVER (PARTITION BY Cust_id ORDER BY [Date]) As Change_Number
FROM yourTable) t
ORDER BY
Cust_id, Change_Number;
```
|
SQL How to order each entry by date
|
[
"",
"sql",
"sql-server",
"date",
"label",
"rank",
""
] |
Is it possible to select all children of a parent without a recursive construct such as WITH or WHILE?
The structure is:
```
Father Child
NULL 1
1 2
1 3
2 4
3 5
4 6
6 7
```
I want the "childs" (all genealogy, childs of childs, etc. etc.) of 2, so I want:
4, 6, 7
|
Yes. Every recursive function has a non-recursive solution:
[Can every recursion be converted into iteration?](https://stackoverflow.com/questions/931762/can-every-recursion-be-converted-into-iteration)
Some other resources that might be helpful:
<http://www.slideshare.net/rs_rs123/removal-of-recursion>
<http://www.refactoring.com/catalog/replaceRecursionWithIteration.html>
|
If you know precisely what depth you want to go to, then sure, you don't have to use recursion. For example, to find the **first-level** descendants of a given parent, just do:
```
select Child
from MyTable
where Father = 2
```
Even if you want multiple levels (grandchildren, grandparents, etc.), as long as you **know how many levels you want**, you don't strictly need recursion, you can just nest multiple inline views like this:
```
select t1.Child
from MyTable t1
where t1.Father = 2
or t1.Father in (
select t2.Child
from MyTable t2
where t2.Father = 2
)
```
(This gets children and grandchildren)
However, anytime you don't know how many levels up/down a tree you want to go (e.g. *all descendants*), recursion is generally the preferred, and sometimes the only recourse (pun intended).
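As a sketch of the iterative alternative, here is the question's edge list walked with an explicit queue in Python instead of a recursive CTE (the data is from the question; the helper function name is made up):

```python
# Father -> Child edges from the question (None = root).
edges = [(None, 1), (1, 2), (1, 3), (2, 4), (3, 5), (4, 6), (6, 7)]

def descendants(root):
    # Build an adjacency map, then expand a work queue until empty --
    # exactly what the recursive CTE would do, one level at a time.
    children = {}
    for father, child in edges:
        children.setdefault(father, []).append(child)
    found, queue = [], [root]
    while queue:
        node = queue.pop(0)
        for c in children.get(node, []):
            found.append(c)
            queue.append(c)
    return found

print(descendants(2))  # [4, 6, 7]
```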
|
Select all genealogy of a father in SQL without WITH or WHILE
|
[
"",
"sql",
"recursion",
"genealogy",
""
] |
I want to select `stockDate` in ascending order. My table structure is:
[](https://i.stack.imgur.com/6dsfG.png)
|
You have saved the dates as varchar, which is evil; you should use MySQL's native date data types.
For now, you first need to convert the varchar dates to real dates using the `str_to_date` function while sorting. Here is how it works:
```
mysql> select str_to_date('31/07/2015','%d/%m/%Y') as d ;
+------------+
| d |
+------------+
| 2015-07-31 |
+------------+
```
So the query becomes
```
SELECT * from inventory_details
ORDER BY str_to_date(stockDate,'%d/%m/%Y') asc
```
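The rebuild-then-sort idea can be seen in action in SQLite, which has no `str_to_date`, so the ISO date is reassembled with `substr` instead; this is an illustrative sketch, not MySQL syntax:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inventory_details (stockDate TEXT)")
conn.executemany("INSERT INTO inventory_details VALUES (?)",
                 [("31/07/2015",), ("02/01/2016",), ("15/07/2015",)])

# Rebuild yyyy-mm-dd from dd/mm/yyyy so lexicographic order equals
# chronological order (ascending, as the question asks).
rows = conn.execute("""
    SELECT stockDate FROM inventory_details
    ORDER BY substr(stockDate, 7, 4) || '-' ||
             substr(stockDate, 4, 2) || '-' ||
             substr(stockDate, 1, 2)
""").fetchall()
ordered = [r[0] for r in rows]
print(ordered)  # ['15/07/2015', '31/07/2015', '02/01/2016']
```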
|
```
SELECT * from inventory_details
ORDER BY str_to_date(stockDate,'%d/%m/%Y') asc
```
|
How can Select date(dd/mm/yyyy) in ascending order in sql
|
[
"",
"mysql",
"sql",
"database",
"date",
""
] |
I need a select that returns two rows: one with the number of people whose "number of hits" is > 0, and the other with the number of people whose "number of hits" is = 0.
```
SELECT u.name as 'Usuário',u.full_name as 'Nome do Usuário',count(l.referer) as 'Número de Acessos'
FROM mmp_user u
LEFT JOIN MMP_MMPUBLISH_LOG l
on u.id=l.user_id
AND l.event_date between '2015-08-01' and '2015-08-08'
group by u.name,u.full_name
order by count(l.referer) desc
```
I have:
151 users,
9 accessed and
142 not accessed.
But the select doesn't return these values; please help me.
**Table mmp\_user fields** (`ID`,`CREATED_BY`,`AVATAR_ID`,`CREATION_DATE`,`EMAIL`,`FULL_NAME`,`LAST_EDITED_BY`,`LAST_EDITION_DATE`,`NAME`,`OBSERVATION`,`USER_PASSWORD`,`PASSWORD_REMINDER`,`SIGNATURE`,`STATUS`,`ADMINISTRATOR`,`DESIGNER`,`SECURITY_OFFICE`,`PUBLISHER`,`BRANCH_ID`,`DEPARTMENT_ID`,`EXTENSION`,`PHONE`,`COMPANY_ID`,`POSITION`,`ADMISSION_DATE`,`PASSWORD_LAST_EDITION_DATE`,`DISMISSED_DATE`,`NEWSLETTER`,`EXPIRE_DATE`,`COMPANY`,`BRANCH`,`DEPARTMENT`,`AREA_ID`,`SITE`,`USER_NUMBER`,`PREFIX_HOME_PHONE`,`PREFIX_MOBILE_PHONE`,`ADDRESS`,`ADDRESS_COMPLEMENT`,`ADDRESS_TYPE`,`CITY`,`NEIGHBORHOOD`,`STATE`,`ZIP_CODE`,`BIRTHDATE`,`GENDER`,`HOME_PHONE`,`MOBILE_PHONE`,`CPF`,`MARIAGE_STATUS`,`NATIONALITY`,`RG`,`EDUCATION`,`URL_SITE`,`FIRST_NAME`,`LAST_NAME`,`ID_SAP`,`PASSWORD_GAFISA`,`NICKNAME`,`CODE_POSITION`,`CREATION_USER_ORIGIN`,`LEVEL_POSITION`,`BIRTH_DATE_VISIBILITY`,`HOME_PHONE_COUNTRY_PREFIX`,`HOME_PHONE_VISIBILITY`,`MOBILE_PHONE_COUNTRY_PREFIX`,`MOBILE_PHONE_VISIBILITY`,`AREA_PREFIX`,`COUNTRY_PREFIX`,`PHONE_OBSERVATION`,`RESPONSIBLE`,`RESOURCE_ID`,`AVATAR_RF_ID`,`RESOURCE_AVATAR_ID`,`AVATAR_URL_LUCENE`,`avatarurl`,`PASSWORD_EXCHANGE`,`USER_NAME_EXCHANGE`,`DOMAIN_EXCHANGE`,`I18N`,`LAST_IMPORT_FILE`,`HIERARCHY_POSITION`,`SECRET_NICKNAME`,`PROFILE_TYPE`,`NOT_VIEW_USER`,`CHANGE_POSITION_DATE`,`DISTINGUISHED_NAME`,`OU_USER`,`AUTH_TOKEN`,`AUTH_TOKEN_EXPIRATION`)
**TableMMP\_MMPUBLISH\_LOG fields** (`ID`,`MMPUBLISH_LOG_TYPE`,`EVENT_DATE`,`USER_ID`,`TRANSACTION_NAME`,`USER_IP`,`USER_LOGIN`,`USER_NAME`,`SESSION_ID`,`REFERER`,`PUBLISHING_OBJECT_ID`,`PUBLISHING_OBJECT_NAME`,`PHASE_ID`,`PHASE_NAME`,`PHASE_COMMENT`,`ACCESS_URL`,`HOME_PAGE_ID`,`HOMEPAGE_ID`,`phaseComment`,`phaseId`,`phaseName`,`PO_VERSION_NUMBER`)
Thanks
|
You could wrap this query with another query and apply a `case` expression to the count:
```
SELECT access_code, COUNT(*)
FROM (SELECT u.name,
u.full_name,
CASE WHEN COUNT(l.referer) > 0 THEN 'access'
ELSE 'no access'
END as access_code
FROM mmp_user u
                 LEFT JOIN mmp_mmpublish_log l ON
u.id=l.user_id AND
l.event_date BETWEEN '2015-08-01' AND '2015-08-08'
GROUP BY u.name, u.full_name) t
GROUP BY access_code
ORDER BY access_code ASC
```
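The conditional-aggregation idea in this answer (a `CASE` that labels each row, then a `GROUP BY` on the label) can be sketched on toy data in SQLite; the `hits` table and counts below are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hits (user_id INT, n_hits INT)")
conn.executemany("INSERT INTO hits VALUES (?, ?)",
                 [(1, 5), (2, 0), (3, 0), (4, 2)])

# One output row per group: users with hits vs. users without.
rows = conn.execute("""
    SELECT CASE WHEN n_hits > 0 THEN 'access' ELSE 'no access' END AS access_code,
           COUNT(*)
    FROM hits
    GROUP BY access_code
    ORDER BY access_code
""").fetchall()
print(rows)  # [('access', 2), ('no access', 2)]
```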
|
```
SELECT u.name Usuário, u.full_name [Nome do Usuário],
count(l.referer) [Número de Acessos],
Sum(case when NumberOfHits = 0 then 1 else 0 end) ZeroHitsCount,
Sum(case when NumberOfHits > 0 then 1 else 0 end) HasSomeHitsCount
FROM mmp_user u
LEFT JOIN MMP_MMPUBLISH_LOG l
on u.id=l.user_id
AND l.event_date between '2015-08-01' and '2015-08-08'
group by u.name, u.full_name
order by count(l.referer) desc
```
|
SQL count number of users that have a value > 1
|
[
"",
"sql",
"select",
""
] |
I have two tables, table1 and table2. Each with the same columns:
```
key, c1, c2, c3
```
I want to check to see if these tables are equal to eachother (they have the same rows). So far I have these two queries (<> = not equal in HIVE):
```
select count(*) from table1 t1
left outer join table2 t2
on t1.key=t2.key
where t2.key is null or t1.c1<>t2.c1 or t1.c2<>t2.c2 or t1.c3<>t2.c3
```
And
```
select count(*) from table1 t1
left outer join table2 t2
on t1.key=t2.key and t1.c1=t2.c1 and t1.c2=t2.c2 and t1.c3=t2.c3
where t2.key is null
```
So my idea is that, if a zero count is returned, the tables are the same. However, I'm getting a zero count for the first query, and a non-zero count for the second query. How exactly do they differ? If there is a better way to check this certainly let me know.
|
The first one excludes rows where t1.c1, t1.c2, t1.c3, t2.c1, t2.c2, or t2.c3 is null. That means you are effectively doing an inner join.
The second one will find rows that exist in t1 but not in t2.
To also find rows that exist in t2 but not in t1 you can do a full outer join. The following SQL assumes that all columns are `NOT NULL`:
```
select count(*) from table1 t1
full outer join table2 t2
on t1.key=t2.key and t1.c1=t2.c1 and t1.c2=t2.c2 and t1.c3=t2.c3
where t1.key is null /* this condition matches rows that only exist in t2 */
or t2.key is null /* this condition matches rows that only exist in t1 */
```
|
If you want to check for duplicates *and* the tables have exactly the same structure *and* the tables do not have duplicates within them, then you can do:
```
select t.key, t.c1, t.c2, t.c3, count(*) as cnt
from ((select t1.*, 1 as which from table1 t1) union all
(select t2.*, 2 as which from table2 t2)
) t
group by t.key, t.c1, t.c2, t.c3
having cnt <> 2;
```
There are various ways that you can relax the conditions in the first paragraph, if necessary.
Note that this version also works when the columns have `NULL` values. These might be causing the problem with your data.
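The `UNION ALL` + `HAVING cnt <> 2` technique can be sketched on tiny toy tables in SQLite rather than Hive (table contents are made up): a row that appears in both tables occurs exactly twice in the union, so anything with a different count is a mismatch.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (k INT, c1 TEXT)")
conn.execute("CREATE TABLE t2 (k INT, c1 TEXT)")
conn.executemany("INSERT INTO t1 VALUES (?, ?)", [(1, "a"), (2, "b")])
conn.executemany("INSERT INTO t2 VALUES (?, ?)", [(1, "a"), (2, "x")])

# (1, 'a') appears twice (once per table) -> filtered out;
# (2, 'b') and (2, 'x') each appear once -> reported as differences.
diff = sorted(conn.execute("""
    SELECT k, c1, COUNT(*) AS cnt
    FROM (SELECT * FROM t1 UNION ALL SELECT * FROM t2) AS u
    GROUP BY k, c1
    HAVING cnt <> 2
""").fetchall())
print(diff)  # [(2, 'b', 1), (2, 'x', 1)]
```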
|
Comparing two tables for equality in HIVE
|
[
"",
"sql",
"join",
"hive",
"left-join",
"hiveql",
""
] |
This post is a continuation of a problem from another post: [sql select min or max based on condition](https://stackoverflow.com/questions/31815050/sql-select-min-or-max-based-on-condition/31816877#31816877)
I'm trying to get a row based on various conditions.
**Scenario 1 -** get highest row if no hours exist against it that has (`setup` + `processtime` > 0).
**Scenario 2 -** if there are hours (like in this example), show the next operation (`oprnum`) after this number (which would be 60 in `prodroute`).
The query needs to work within a CTE as it's part of a bigger query.
```
CREATE TABLE ProdRoute
 ([ProdId] varchar(10), [OprNum] int, [SetupTime] numeric(18,16), [ProcessTime] numeric(18,16))
;
INSERT INTO ProdRoute
([ProdId], [OprNum], [SetupTime], [ProcessTime])
VALUES
('12M0004893', 12, 0.7700000000000000, 1.2500000000000000),
('12M0004893', 12, 0.0000000000000000, 0.0000000000000000),
('12M0004893', 40, 0.0800000000000000, 0.4000000000000000),
('12M0004893', 50, 0.0400000000000000, 2.8000000000000000),
('12M0004893', 50, 0.0000000000000000, 0.0000000000000000),
('12M0004893', 60, 0.0000000000000000, 0.6100000000000000),
('12M0004893', 60, 0.0000000000000000, 0.0000000000000000),
('12M0004893', 70, 0.0000000000000000, 1.2900000000000000),
('12M0004893', 70, 0.0000000000000000, 0.0000000000000000),
('12M0004893', 75, 0.0000000000000000, 3.8700000000000000),
('12M0004893', 75, 0.0000000000000000, 0.0000000000000000),
('12M0004893', 80, 0.0000000000000000, 0.5500000000000000),
('12M0003571', 3, 0.8900000000000000, 0.0000000000000000),
('12M0003571', 3, 0.0000000000000000, 0.0000000000000000),
('12M0003571', 7, 1.0000000000000000, 0.0000000000000000),
('12M0003571', 10, 0.3000000000000000, 0.3000000000000000),
('12M0003571', 10, 0.0000000000000000, 0.0000000000000000),
('12M0003571', 20, 0.0700000000000000, 0.1000000000000000),
('12M0003571', 20, 0.0000000000000000, 0.0000000000000000),
('12M0003571', 30, 0.0000000000000000, 0.0000000000000000),
('12M0003571', 40, 0.0000000000000000, 0.0000000000000000),
('12M0003571', 50, 0.0000000000000000, 0.0000000000000000),
('12M0003571', 60, 0.0000000000000000, 0.0000000000000000),
('12M0003571', 60, 0.0000000000000000, 0.0000000000000000),
('12M0003571', 70, 0.0700000000000000, 0.1500000000000000),
('12M0003571', 70, 0.0000000000000000, 0.0000000000000000)
;
CREATE TABLE ProdRouteTran
 ([ProdID] varchar(10), [MaxOpCompleted] int, [Hours] numeric(18,16))
;
INSERT INTO ProdRouteTran
([ProdID], [MaxOpCompleted], [Hours])
VALUES
('12M0004893', 50, 1.7800000000000000),
('12M0003571', 70, 1.2660000000000000)
;
```
expected output :
```
ProdId OprNum
12M0004893 60
ProdId OprNum
12M0003571 70
```
|
Based on new data and the last comment on the answer by the asker, here's the updated query and fiddle: <http://sqlfiddle.com/#!6/87e2f/2>
> hey i found an example that doesn't work... orderID '12M0003381'...
> i've added data to your fiddle. I would expect to see operation 70 as
> that's the last operation with a setup or process time... thanks!
```
select prodid, ISNULL(MAX(weighted_value),MIN(oprnum)) as value from
(
select
a.prodid,
a.oprnum,
ISNULL(LEAD(a.oprnum,1) OVER(Partition by a.prodID ORDER by a.oprnum asc),a.oprnum) *
MAX(case
when ISNULL([Hours], 0) >= (setupTime + ProcessTime) AND (SetupTime + ProcessTime ) > 0
then 1
else NULL
end) as weighted_value
from temp1 a LEFT JOIN temp4 b
ON a.OprNum = b.OPRNUM
AND a.ProdID = b.ProdId
group by a.prodid,a.oprnum
) t
group by prodid
```
**Explanation of the query changes:**
The only change made to query was to handle the `NULL` value for `weighted_value` using the following syntax
```
ISNULL(LEAD(a.oprnum,1) OVER(Partition by a.prodID ORDER by a.oprnum asc),a.oprnum)
```
The problematic part was the inner query, which, when run without the group by clause, shows what happens in a boundary case like the one added by the user.
[](https://i.stack.imgur.com/tYp8A.png)
( See fiddle for this here: <http://sqlfiddle.com/#!6/87e2f/3> )
Without null handling, we had a `NULL`, which after the `group by` clause resulted in a structure like the one below[](https://i.stack.imgur.com/Q8ln7.png)
( See fiddle for this here:<http://sqlfiddle.com/#!6/87e2f/5> )
As you can see on grouping the LEAD value for `prodid : 12M0003381, oprnum:70` resulted as `NULL` instead of `70` (as grouping `70` and `NULL` should give `70`).
**This is justified if `LEAD` is calculated on grouped query/table , which is actually what is happening here.**
In that case, the `LEAD` function will not return any data for the last row of partition. This is the boundary case and must be handled correctly with `ISNULL`.
I assumed that `LEAD` `oprnum` value of last row should be corrected as `oprnum` value of current row.
**Old answer below:**
> So I tried and I am posting the fiddle link
> <http://sqlfiddle.com/#!6/e965c/1>
>
> ```
> select prodid, ISNULL(MAX(weighted_value),MIN(oprnum)) as value from
> (
> select
> a.prodid,
> a.oprnum,
> LEAD(a.oprnum,1) OVER(Partition by a.prodID ORDER by a.oprnum asc) *
> MAX(case
> when ISNULL([Hours], 0) >= (setupTime + ProcessTime) AND (SetupTime + ProcessTime ) > 0
> then 1
> else NULL
> end) as weighted_value
> from ProdRoute a LEFT JOIN COMPLETED_OP b
> ON a.OprNum = b.OPRNUM
> AND a.ProdID = b.ProdId
> group by a.prodid,a.oprnum
> ) t
> group by prodid
> ```
|
This isn't the prettiest thing I have ever written but it works. I also tested it against the other fiddle with additional data.
Modified to meet new requirement.
```
SELECT
*
FROM
(
SELECT
A.ProdID,
MIN(A.OprNum) AS 'OprNum'
FROM
#ProdRoute AS A
JOIN
(
SELECT
ProdID,
MAX(MaxOpCompleted) AS 'OprNum'
FROM
#ProdRouteTran
GROUP BY
ProdID
) AS B
ON A.ProdId = B.ProdId AND A.OprNum > B.OprNum
GROUP BY
A.ProdID
) AS [HoursA]
UNION ALL
SELECT
*
FROM
(
SELECT
DISTINCT
A.ProdID,
B.OprNum
FROM
#ProdRoute AS A
JOIN
(
SELECT
ProdID,
MAX(MaxOpCompleted) AS 'OprNum'
FROM
#ProdRouteTran
GROUP BY
ProdID
) AS B
ON A.ProdId = B.ProdId AND A.OprNum = B.OprNum
AND B.OprNum = (SELECT MAX(OprNum) FROM #ProdRoute WHERE ProdId = A.ProdId)
) AS [HoursB]
UNION ALL
SELECT
*
FROM
(
SELECT
ProdId,
MIN(OprNum) AS 'OprNum'
FROM
#ProdRoute
WHERE
ProdId NOT IN
(SELECT ProdId FROM #ProdRouteTran)
AND (SetupTime <> 0 OR ProcessTime <> 0)
GROUP BY
ProdId
) AS [NoHoursA]
UNION ALL
SELECT
*
FROM
(
SELECT
ProdId,
MIN(OprNum) AS 'OprNum'
FROM
#ProdRoute
WHERE
ProdId NOT IN
(SELECT ProdId FROM #ProdRouteTran)
GROUP BY
ProdId
HAVING
SUM(SetupTime) = 0 AND SUM(ProcessTime) = 0
) AS [NoHoursB]
```
|
sql select min or max based on condition part 2
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2012",
""
] |
I'm trying to figure out a way, using SQL, to query for values that go out to, say, 5 or more decimal places. In other words, I want to see only results that have 5+ decimal places (e.g. 45.324754) - the numbers before the decimal are irrelevant, however, I still need to see the full number. Is this possible? Any help if appreciated.
|
Assuming your DBMS supports `FLOOR` and your datatype conversion model supports this multiplication, you can do this:
```
SELECT *
FROM Table
WHERE FLOOR(Num*100000)!=Num*100000
```
This has the advantage of not requiring a conversion to a string datatype.
|
On SQL Server, you can specify:
```
SELECT *
FROM Table
WHERE Value <> ROUND(Value,4,1);
```
For an ANSI method, you can use:
```
SELECT *
FROM Table
WHERE Value <> CAST(Value*100000.0 AS INT) / 100000.0;
```
Although this method might cause an overflow if you're working with large numbers.
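The truncate-and-compare check translates directly to exact decimal arithmetic. Here is a small Python sketch using the `decimal` module (the function name is made up) that flags values with 5 or more decimal places without any float rounding surprises:

```python
from decimal import Decimal, ROUND_DOWN

def has_5_plus_places(v: Decimal) -> bool:
    # Shift 4 places left of the point; truncating then differs from the
    # shifted value iff a 5th-or-later decimal place exists.
    shifted = v * 10_000
    return shifted != shifted.to_integral_value(rounding=ROUND_DOWN)

print(has_5_plus_places(Decimal("45.324754")))  # True
print(has_5_plus_places(Decimal("45.3247")))    # False
```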
|
How to find values with certain number of decimal places using SQL?
|
[
"",
"sql",
""
] |
In a database table I have made a date attribute, but I set its type to varchar and not Date.
My question is, will I still be able to compare such dates in a SQL Query?
Dates in my DB are stored in this format:
> dd/mm/yyyy hh:mm:ss
I have to do a SQL Query in PHP that looks something like this:
```
SELECT *
FROM DBtable
WHERE DBname='$name' AND date>='01/01/2015' AND date<='01/09/2015';
```
I would appreciate an example how to do this.
Thank you for your help.
|
You'll need to convert/cast to compare:
```
SELECT *
FROM DBtable
WHERE DBname='$name'
AND CAST(date AS DATETIME) >='2015-01-01'
AND CAST(date AS DATETIME)<='2015-01-09'
;
```
Much better to store values as the appropriate data types to avoid this inefficiency. You could also use `DATE` instead of `DATETIME` if you want to compare without the time component. Syntax and available datatypes vary by database, so the above may need adjustment.
Update: Since you're using MySQL, you can use the following:
```
SELECT *
FROM DBtable
WHERE DBname='$name'
AND STR_TO_DATE(`date`, '%d/%c/%Y') >= '2015-01-01'
AND STR_TO_DATE(`date`, '%d/%c/%Y') <= '2015-01-09'
;
```
|
You can cast or convert a `varchar` to a `date` or `datetime` before you do any comparisons.
But you'd have to do it *every single time* you compare the date to something. That's because the following comparisons are all true if you compare them as `varchar`:
```
'2/1/2015' > '1/5/2016'
'25/1/2015' > '15/2/2015'
'11/1/2015' < '3/1/2015'
```
You'll also need to convert if you want to pull out some time-based aspect of the dates, such as any records where the hour was before `8:00 AM`. There is no easy way to do that if your date is a `varchar`.
And that assumes that the value in your database can always be parsed into a date! If an empty string or some other kind of data gets in there, `CONVERT(datetime, MyColumn)` will fail.
So I would strongly recommend that you change your column to be a `date` or `datetime`. It will make your life much easier.
|
SQL Comparing Dates
|
[
"",
"mysql",
"sql",
"date",
"compare",
""
] |
I am new to SQL and I need to find `count` of users every 7 days. I have a table with users for every single day starting from April 2015 up until now:
```
...
2015-05-16 00:00
2015-05-16 00:00
2015-05-17 00:00
2015-05-17 00:00
2015-05-17 00:00
2015-05-17 00:00
2015-05-17 00:00
2015-05-18 00:00
2015-05-18 00:00
...
```
and I need to count the number of users every 7 days (weekly) so I have data weekly.
```
SELECT COUNT(user_id), Activity_Date FROM TABLE_NAME
```
I need output like this:
```
TotalUsers week1 week2 week3 ..........and so on
82 80 14 16
```
I am using DB Visualizer to query Oracle database.
|
You should try the following:
```
Select
sum(Week1) + sum(Week2) + sum(Week3) + sum(Week4) + sum(Week5) as Total,
sum(Week1) as Week1,
sum(Week2) as Week2,
sum(Week3) as Week3,
sum(Week4) as Week4,
sum(Week5) as Week5
From (
select
case when week = 1 then 1 else 0 end as Week1,
case when week = 2 then 1 else 0 end as Week2,
case when week = 3 then 1 else 0 end as Week3,
case when week = 4 then 1 else 0 end as Week4,
case when week = 5 then 1 else 0 end as Week5
from
(
Select
CEILING(datepart(dd,visitdate)/7+1) week,
user_id
from visitor
)T
)D
```
Here is [Fiddle](http://sqlfiddle.com/#!3/27f65/12)
You need to add month & year in the result as well.
|
This is my test table:
```
user_id act_date
1 01/04/2015
2 01/04/2015
3 04/04/2015
4 05/04/2015
..
```
This is my query:
```
select week_offset, count(*) nb from (
select trunc((act_date-to_date('01042015','DDMMYYYY'))/7) as week_offset from test_date)
group by week_offset
order by 1
```
and this is the output:
```
week_offset nb
0 6
1 3
4 5
5 7
6 3
7 1
18 1
```
Week offset is the number of the week from 01/04/2015, and we can show the first day of the week.
See [here](http://sqlfiddle.com/#!4/64264/1) for live testing.
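The week-offset bucketing from this answer can be reproduced in SQLite, where `julianday()` stands in for Oracle's direct date subtraction (the sample rows are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE visits (user_id INT, act_date TEXT)")
conn.executemany("INSERT INTO visits VALUES (?, ?)",
                 [(1, "2015-04-01"), (2, "2015-04-02"), (3, "2015-04-08"),
                  (4, "2015-04-09"), (5, "2015-04-16")])

# Same trunc((date - start)/7) idea: days since 2015-04-01, integer-divided by 7.
rows = conn.execute("""
    SELECT CAST((julianday(act_date) - julianday('2015-04-01')) / 7 AS INT)
               AS week_offset,
           COUNT(*)
    FROM visits
    GROUP BY week_offset
    ORDER BY week_offset
""").fetchall()
print(rows)  # [(0, 2), (1, 2), (2, 1)]
```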
|
SQL count number of users every 7 days
|
[
"",
"sql",
"oracle",
""
] |
I need to make a query that moves values of only one column one row up ↑ at a time:
```
+------------+----------------+
| anotherCOL | values_to_loop |
+------------+----------------+
| 1 | 1 |
| 2 | 2 |
| 3 | 3 |
| 4 | 4 |
| 5 | 5 |
| 6 | 6 |
| 7 | 7 |
| 8 | 8 |
| 9 | 9 |
| 10 | 10 |
+------------+----------------+
```
So, the next time I run the query, it should look like this:
```
+------------+----------------+
| anotherCOL | values_to_loop |
+------------+----------------+
| 1 | 2 |
| 2 | 3 |
| 3 | 4 |
| 4 | 5 |
| 5 | 6 |
| 6 | 7 |
| 7 | 8 |
| 8 | 9 |
| 9 | 10 |
| 10 | 1 |
+------------+----------------+
```
I need to loop the values of only one MYSQL COLUMN, as in move the values one ROW UP ↑ each time I run the query.
Notice: the tables provided are just illustrative; the actual data is different.
|
Here's how you can do it within a single UPDATE query:
```
UPDATE tbl a
INNER JOIN (
SELECT values_to_loop
FROM (SELECT * FROM tbl) c
ORDER BY anotherCOL
LIMIT 1
) b ON 1 = 1
SET a.values_to_loop =
IFNULL(
(SELECT values_to_loop
FROM (SELECT * FROM tbl) c
WHERE c.anotherCOL > a.anotherCOL
ORDER BY c.anotherCOL
LIMIT 1),
b.values_to_loop
)
```
It works as follows:
1. Updates all records from tbl
2. Joins with a temporary table to retrieve the top value of values\_to\_loop (the one that will go to the bottom)
3. Set the new value for values\_to\_loop to the corresponding value from the next row (`c.anotherCOL > a.anotherCOL ... LIMIT 1`)
Notes:
* This works even if there are gaps in anotherCOL (eg: 1, 2, 3, 6, 9, 15)
* It is required to use `(SELECT * FROM tbl)` instead of `tbl` because you're not allowed to use the table that you're updating in the update query
---
## Faster query when there are no gaps in anotherCOL
If there are no gaps for values in anotherCOL you can use the query below that should work quite fast if you have an index on anotherCOL:
```
UPDATE tbl a
LEFT JOIN tbl b on b.anotherCOL = a.anotherCOL + 1
LEFT JOIN (
SELECT values_to_loop
FROM tbl
WHERE anotherCOL = (select min(anotherCOL) from tbl)
) c ON 1 = 1
SET a.values_to_loop = ifnull(
b.values_to_loop,
c.values_to_loop
)
```
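The whole rotate-one-row-up operation can also be sketched outside pure SQL: read the column in order, rotate the list, and write it back in one transaction. The SQLite harness below is for illustration only, not a replacement for the single-UPDATE MySQL solution:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (anotherCOL INT PRIMARY KEY, values_to_loop INT)")
conn.executemany("INSERT INTO tbl VALUES (?, ?)", [(i, i) for i in range(1, 6)])

# Read values_to_loop in anotherCOL order, rotate one step up
# (the first value wraps to the last row), and write it back.
keys = [k for (k,) in conn.execute("SELECT anotherCOL FROM tbl ORDER BY anotherCOL")]
vals = [v for (v,) in conn.execute("SELECT values_to_loop FROM tbl ORDER BY anotherCOL")]
rotated = vals[1:] + vals[:1]
with conn:  # single transaction
    conn.executemany("UPDATE tbl SET values_to_loop = ? WHERE anotherCOL = ?",
                     list(zip(rotated, keys)))

final = conn.execute("SELECT values_to_loop FROM tbl ORDER BY anotherCOL").fetchall()
print(final)  # [(2,), (3,), (4,), (5,), (1,)]
```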
|
I've created a sample table and added both a select to get the looped values and an update to loop the values in the table. I also use a @start\_value variable to hold the "1", which might be some other value. Try this:
```
CREATE TEMPORARY TABLE IF NOT EXISTS temp_table
(other_col INT, loop_col int);
INSERT INTO temp_table (other_col, loop_col) VALUES (1,1);
INSERT INTO temp_table (other_col, loop_col) VALUES (2,2);
INSERT INTO temp_table (other_col, loop_col) VALUES (3,3);
INSERT INTO temp_table (other_col, loop_col) VALUES (4,4);
INSERT INTO temp_table (other_col, loop_col) VALUES (5,5);
DECLARE @start_value INT;
SELECT @start_value = MIN(loop_col) FROM temp_table;
SELECT T1.other_col, ISNULL(T2.loop_col, @start_value)
FROM temp_table T1
LEFT JOIN temp_table T2
ON T1.loop_col = T2.loop_col - 1;
UPDATE T1 SET
T1.loop_col = ISNULL(T2.loop_col, @start_value)
FROM temp_table T1
LEFT JOIN temp_table T2
ON T1.loop_col = T2.loop_col - 1;
SELECT *
FROM temp_table;
```
Let me know if it works for you.
Step by step:
1 - created a temp\_table with values 1 to 5
2 - declared a start\_value which will keep the lowest value for the column you to need to loop through
3 - select all rows from temp\_table self left join with same temp\_table. join condition is on loop\_col - 1 so it can shift the rows up
4 - the same self left join, but this time update the values in place too.
Please note that when I get a null value, it should be replaced by the start\_value, because the last row cannot match.
|
Query for looping values in column
|
[
"",
"mysql",
"sql",
"database",
""
] |
I'm a bit confused about whether we can perform ORDER BY on more than two columns.
Here is the result:-
```
Select *
from orderbytest
order by FName desc, LName desc, YearOfBirth desc
```
Resulting output:
```
FName LName YearOfBirth
------------------------------------
Thomas More 1478
Thomas Jefferson 1826
Thomas Alva Edison 1847
Benjamin White 1600
Benjamin Franklin 1706
```
|
Short answer - yes.
When applying multiple `order by` expressions the results are sorted by the first one. When multiple rows have the same value in the first order by expression, they are sorted internally by the second one. When multiple rows have the same value in the first and second order by expressions they are sorted by the third one, and so on.
Here, all your rows have a unique combination of the first two expressions, so the third one, while still valid, is inconsequential.
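The tie-breaking behaviour described above mirrors sorting by a tuple of keys; a small Python sketch with (a subset of) the question's rows:

```python
rows = [("Thomas", "More", 1478),
        ("Thomas", "Jefferson", 1826),
        ("Benjamin", "White", 1600),
        ("Benjamin", "Franklin", 1706)]

# ORDER BY FName DESC, LName DESC behaves like sorting by a key tuple:
# the second key only matters within groups that tie on the first.
ordered = sorted(rows, key=lambda r: (r[0], r[1]), reverse=True)
print(ordered)
```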
|
Yes, you can order by as many columns as you like, just like grouping.
Here:
```
Select * from orderbytest order by FName ,LName asc,YearOfBirth desc
```
Note that when you do not write any keyword after a field, the result is sorted in the default (ascending) order.
|
Can we perform order by on more than 2 columns in SQL Server
|
[
"",
"sql",
"sql-server",
"database",
"select",
"sql-order-by",
""
] |