| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I just cannot seem to find an answer for this deceptively simple question. Most every solution either deletes all the duplicates, selects all the duplicates, or selects all the records except the duplicates. How can I select all rows such that, in this example, the "name" column values are unique, while selecting the first record of any duplicate set and ignoring the remaining duplicates of that same name? I do need all the values from all the columns in all the records in the selected record set.
```
Given the set of records:
pk fk name secs note
1 100 cat 90 gray
2 111 dog 123 mix
3 233 fish 75 gold
4 334 dog 932 black
5 238 cow 90 stray
6 285 cat 90 stray
The returned set should be:
pk fk name secs note
1 100 cat 90 gray
2 111 dog 123 mix
3 233 fish 75 gold
5 238 cow 90 stray
-- SQL
drop table if exists foo;
create table foo (
pk int unsigned,
fk int unsigned,
name varchar(10),
secs int,
note varchar(10),
primary key (pk)
) engine=innodb default charset=utf8;
insert into foo
(pk, fk, name, secs, note)
values
(1, 100, 'cat', 90, 'gray'),
(2, 111, 'dog', 123, 'mix'),
(3, 233, 'fish', 75, 'gold'),
(4, 334, 'dog', 932, 'black'),
(5, 238, 'cow', 90, 'stray'),
(6, 285, 'cat', 90, 'stray');
``` | Here is a query that returns what you're looking for:
```
SELECT
F.*
FROM
foo F
INNER JOIN
( SELECT
MIN(F2.pk) AS `pk`
,F2.name
FROM foo F2
GROUP BY F2.name
) T
ON T.pk = F.pk;
```
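If you want to sanity-check this outside MySQL, here is a small sketch using Python's built-in `sqlite3` module (SQLite accepts the same `MIN(pk)` join; the table and data are taken from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (pk INTEGER PRIMARY KEY, fk INT, name TEXT, secs INT, note TEXT)")
conn.executemany(
    "INSERT INTO foo VALUES (?, ?, ?, ?, ?)",
    [(1, 100, "cat", 90, "gray"), (2, 111, "dog", 123, "mix"),
     (3, 233, "fish", 75, "gold"), (4, 334, "dog", 932, "black"),
     (5, 238, "cow", 90, "stray"), (6, 285, "cat", 90, "stray")])

# Keep the full row whose pk is the smallest within each name group.
rows = conn.execute("""
    SELECT F.*
    FROM foo F
    INNER JOIN (SELECT MIN(pk) AS pk FROM foo GROUP BY name) T
        ON T.pk = F.pk
    ORDER BY F.pk
""").fetchall()
print(rows)  # pk 4 and pk 6 (the later duplicates) are gone
```

The derived table picks one `pk` per name, and the join back to `foo` recovers every column of that row.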
Hope this will help you. | I think you are looking for this:
```
select * from table group by name;
```
Which can be written using the `distinct` keyword as:
```
select distinct on name * from table;
``` | How to select all records based on non-duplication of one column | [
"",
"mysql",
"sql",
"database",
"duplicates",
""
] |
I have a query that produces a table like this:
```
BN System amount
13098 System1 860.05
16722 System1 63360
16722 System2 1544713.19000001
01292 System2 3260
17133 System2 6240
33851 System2 155340.03
24638 System2 11364.54
89936 System1 3719.85
57522 System2 50153558.7400001
84197 System2 6175
81568 System2 57402.05
99029 System2 59108.88
97532 System1 880
13283 System2 16745.51
51553 System2 26222
77632 System2 9202.5
84314 System2 185750
84314 System1 233766.5
```
Here is the query I use to get this table:
```
select
BN,
System,
SUM(Amount)
FROM
tbl1
group By
BN,
System
```
I want to select the rows that have duplicate BNs only.
For example, I want to return only the rows where BN = 16722 and BN = 84314.
How can I do that?
I tried using
```
Having count(BN) > 1
```
but it did not work. | ```
select t.bn, t.system, sum(t.amount)
from tbl1 t
join (select bn from tbl1 group by bn having count(distinct system) > 1) x
on t.bn = x.bn
group by t.bn, t.system
```
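As a quick check of the `COUNT(DISTINCT system) > 1` filter, here is a sketch with Python's `sqlite3` on a subset of the sample data (column names as in the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl1 (bn TEXT, system TEXT, amount REAL)")
conn.executemany("INSERT INTO tbl1 VALUES (?, ?, ?)", [
    ("13098", "System1", 860.05),
    ("16722", "System1", 63360),
    ("16722", "System2", 1544713.19),
    ("84314", "System2", 185750),
    ("84314", "System1", 233766.5),
])

# Only BNs that appear under more than one distinct system survive the subquery.
rows = conn.execute("""
    SELECT t.bn, t.system, SUM(t.amount)
    FROM tbl1 t
    JOIN (SELECT bn FROM tbl1 GROUP BY bn HAVING COUNT(DISTINCT system) > 1) x
        ON t.bn = x.bn
    GROUP BY t.bn, t.system
""").fetchall()
```

BN 13098 (one system only) is filtered out; 16722 and 84314 each come back with both of their system rows.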
I think by 'duplicate' you mean the same BN value is associated with 2+ unique systems?
The above should work if that is the case. | I think the best approach is using window functions:
```
select t.*
from (select t.*, count(*) over (partition by system)) as cnt
from tbl t
) t
where cnt > 1;
```
This is particularly useful if you really are using a subquery. This formulation only runs the subquery once. | SQL Select only duplicates from one column | [
"",
"sql",
"t-sql",
"sql-server-2012",
"duplicates",
""
] |
I have two tables that have no relationship. I want to select a value from each of them and divide one from the other. I'm essentially looking for something like this (although I know this is not correct):
```
SELECT (
SELECT MAX(row) FROM TableA /
SELECT max(row) from TableB))
```
How can I do this in a single SQL statement? | ```
SELECT (cast((SELECT MAX(row) FROM TableA) as decimal) / cast((SELECT max(row) from Table2) as decimal))
``` | This works in Sql Server, not sure about other implementations of SQL.
`select (tableA.Value/tableB.Value) from (select MAX(row) Value FROM TableA) tableA, (select MAX(row) Value FROM TableB) tableB` | How to select a single value from two different tables and divide them | [
"",
"sql",
""
] |
Here is my question. I have this `SQL` query:
```
SELECT * FROM leftTable
JOIN rightTable
ON leftTable.match = rightTable.match
```
is `JOIN` a left join, a right join or an inner Join? | `JOIN` is short for `INNER JOIN`.
`LEFT JOIN`, `RIGHT JOIN` and `FULL JOIN` are technically short for `LEFT OUTER JOIN`, `RIGHT OUTER JOIN` and `FULL OUTER JOIN` respectively. Standard-compliant SQL parsers aren't required to accept the long versions. | The `JOIN` keyword used alone refers to `INNER JOIN`; otherwise you have to explicitly define which join you want to use, e.g.
```
LEFT JOIN
FULL JOIN
e.t.c
``` | About the JOIN keyword | [
"",
"sql",
"join",
""
] |
I am using MS Access.
I tried to write SQL code that refers to the tables EMPLOYEE, PROJECT and JOB and returns the results shown in the figure below:
```
PROJ_NAME PROJ_VALUE PROJ_BALANCE EMP-LNAME EMP_FNAME EMP_INITIAL JOB_CODE JOB_DESCRIPTION JOB_CHG_HOUR
iCEBUTT 800.00 12000 SMITH ANNE E 12 ELECTRIC ENGENEER 10
FIREBUTT 810.00 20000 SHEMAN ANNE G 15 WATER ENGENEER 12
iCEBTEA 802.00 10000 SIMPSON ANNE H 11 NON ENGENEER 11
iCECUBE 890.00 18000 SMITFIELD ANNE A 19 ELECTRIC ENGENEER 9.5
```
MY CODE
```
SELECT PROJ_NAME, PROJ_VALUE, PROJ_BALANCE, EMP_LNAME, EMP_FNAME, EMP_INITIAL, JOB_CODE, JOB_DESCRIPTION, JOB_CHG_HOUR
FROM EMPLOYEE, JOB, PROJECT
WHERE PROJ_VALUE >= 10000;
```
What I got from my code is a lot of duplicate data.
My result:
```
PROJ_NAME PROJ_VALUE PROJ_BALANCE EMP-LNAME EMP_FNAME EMP_INITIAL JOB_CODE JOB_DESCRIPTION JOB_CHG_HOUR
iCEBUTT 800.00 12000 SMITH ANNE E 12 ELECTRIC ENGENEER 10
iCEBUTT 800.00 12000 SMITH ANNE E 12 ELECTRIC ENGENEER 10
iCEBUTT 800.00 12000 SMITH ANNE E 12 ELECTRIC ENGENEER 10
iCEBUTT 800.00 12000 SMITH ANNE E 12 ELECTRIC ENGENEER 10
FIREBUTT 810.00 20000 SHEMAN ANNE G 15 WATER ENGENEER 12
FIREBUTT 810.00 20000 SHEMAN ANNE G 15 WATER ENGENEER 12
FIREBUTT 810.00 20000 SHEMAN ANNE G 15 WATER ENGENEER 12
FIREBUTT 810.00 20000 SHEMAN ANNE G 15 WATER ENGENEER 12
iCEBTEA 802.00 10000 SIMPSON ANNE H 11 NON ENGENEER 11
iCEBTEA 802.00 10000 SIMPSON ANNE H 11 NON ENGENEER 11
iCEBTEA 802.00 10000 SIMPSON ANNE H 11 NON ENGENEER 11
iCECUBE 890.00 18000 SMITFIELD ANNE A 19 ELECTRIC ENGENEER 9.5
iCECUBE 890.00 18000 SMITFIELD ANNE A 19 ELECTRIC ENGENEER 9.5
iCECUBE 890.00 18000 SMITFIELD ANNE A 19 ELECTRIC ENGENEER 9.5
``` | You are missing the condition for the joins. Now both `employee` and `job` give all records. You have to tell how they relate to the `project` table.
Something like this:
```
SELECT p.PROJ_NAME, p.PROJ_VALUE, p.PROJ_BALANCE, p.EMP_LNAME, p.EMP_FNAME, p.EMP_INITIAL, p.JOB_CODE, p.JOB_DESCRIPTION, p.JOB_CHG_HOUR
FROM EMPLOYEE e, JOB j, PROJECT p
WHERE p.job_code = j.job_code /*enter correct fields here*/
AND p.emp_name = e.emp_name /*enter correct fields here*/
AND p.PROJ_VALUE >= 805000.00;
```
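To see why the unjoined `FROM` list explodes, here is a minimal sketch with Python's `sqlite3` (hypothetical two-table schema with a shared `job_code` key, not the question's real schema): without a join condition, the row count is the product of the table sizes.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (emp_id INT, job_code INT)")
conn.execute("CREATE TABLE job (job_code INT, description TEXT)")
conn.executemany("INSERT INTO employee VALUES (?, ?)", [(1, 10), (2, 11), (3, 10)])
conn.executemany("INSERT INTO job VALUES (?, ?)",
                 [(10, "electric"), (11, "water"), (12, "none"), (13, "civil")])

# No join condition: every employee row pairs with every job row (3 * 4 = 12).
cartesian = conn.execute("SELECT COUNT(*) FROM employee, job").fetchone()[0]

# With the join condition, each employee matches exactly one job (3 rows).
joined = conn.execute(
    "SELECT COUNT(*) FROM employee e JOIN job j ON e.job_code = j.job_code"
).fetchone()[0]
```

The same multiplication happens once more for the third unjoined table, which is why the question's output repeats each row several times.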
The field names I used are based on your current query. I hope there are some ID fields you could use instead.
Preferably use real `join`s:
```
SELECT p.PROJ_NAME, p.PROJ_VALUE, p.PROJ_BALANCE, p.EMP_LNAME, p.EMP_FNAME, p.EMP_INITIAL, p.JOB_CODE, p.JOB_DESCRIPTION, p.JOB_CHG_HOUR
FROM PROJECT p
JOIN JOB j
ON p.job_code = j.job_code /*enter correct fields here*/
JOIN EMPLOYEE e
ON p.emp_name = e.emp_name /*enter correct fields here*/
WHERE p.PROJ_VALUE >= 805000.00;
``` | You have created a Cartesian product (or cross join). The values obtained are not only duplicates but also wrong. To avoid this, you need to use joins in your query.
```
SELECT *
FROM EMPLOYEE
JOIN JOB ON EMPLOYEE.JOB_ID = JOB.ID
JOIN PROJECT ON EMPLOYEE.PROJ_ID = PROJECT.ID
WHERE PROJECT.PROJ_VALUE >= 805000.00;
```
Check [this tutorial](http://www.tutorialspoint.com/sql/sql-using-joins.htm) for some insight on using SQL joins. | I keep getting duplicate data in my record | [
"",
"sql",
"list",
"ms-access",
""
] |
I have an `Adventure` model, which is a join table between a `Destination` and a `User` (and has additional attributes such as `zipcode` and `time_limit`). I want to create a `query` that will return me all the `Destinations` where an `Adventure` between that `Destination` and the `User` currently trying to create an `Adventure` does not exist.
The way the app works when a `User` clicks to start a new `Adventure` it will create that `Adventure` with the `user_id` being that `User`'s `id` and then runs a method to provide a random `Destination`, ex:
`Adventure.create(user_id: current_user.id)` (it is actually doing `current_user.adventures.new`), but it amounts to the same thing
I have tried a few things from writing raw `SQL queries` to using `.joins`. Here are a few examples:
`Destination.joins(:adventures).where.not('adventures.user_id != ?'), user.id)`
`Destination.joins('LEFT OUTER JOIN adventure ON destination.id = adventure.destination_id').where('adventure.user_id != ?', user.id)` | The following should return all destinations that the user has not yet visited in any of his adventures:
```
destinations = Destination.where('id NOT IN (SELECT destination_id FROM adventures WHERE user_id = ?)', user.id)
```
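The raw SQL this generates can be checked directly; here's a sketch with Python's `sqlite3` (table and column names follow the answer, the data is made up): user 7 has adventures at destinations 1 and 3, so only 2 and 4 come back.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE destinations (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE adventures (user_id INT, destination_id INT)")
conn.executemany("INSERT INTO destinations VALUES (?, ?)",
                 [(1, "beach"), (2, "forest"), (3, "desert"), (4, "glacier")])
conn.executemany("INSERT INTO adventures VALUES (?, ?)", [(7, 1), (7, 3), (9, 2)])

# Destinations not present in user 7's adventures.  Note that NOT IN matches
# nothing at all if the subquery can return NULL, so filter NULLs out
# (or use NOT EXISTS) with real data.
unvisited = conn.execute("""
    SELECT id FROM destinations
    WHERE id NOT IN (SELECT destination_id FROM adventures
                     WHERE user_id = ? AND destination_id IS NOT NULL)
    ORDER BY id
""", (7,)).fetchall()
```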
To select a random one append one of:
```
.all.sample
# or
.pluck(:id).sample
```
Depending on whether you want a full record or just id. | No need for joins, this should do:
```
Destination.where(['id not in ?', user.adventures.pluck(:destination_id)])
``` | Rails ActiveRecord query where relationship does not exist based on third attribute | [
"",
"sql",
"ruby-on-rails",
"activerecord",
"active-record-query",
""
] |
I need an update statement to resolve some issues with duplicates in a table on SQL Server 2000. The table structure is shown below. There is no primary key in this table. I need a SQL statement that will update the values in cases where there are duplicates by adding 5 to the value until there are no more duplicates.
```
DocNumber SeQNumber
Doc001 900
Doc001 900
Doc001 900
Doc001 903
Doc001 904
```
Desired Result
```
DocNumber SeqNUmber
Doc001 900
Doc001 905
Doc001 910
Doc001 903
Doc001 904
```
This is what I have tried
My latest attempt is shown below. In that example I am just adding the counter, but the +5 is what I actually need.
```
Declare @count as integer = (SELECT COUNT(*) AS DUPLICATES
FROM dbo.RM10101
GROUP BY DocNumbr, SeqNumbr
HAVING (COUNT(*) > 1))
Declare @counter as integer =1
While @Counter < @count
begin
With UpdateData As
(
SELECT DocNumbr,SeqNumbr,
ROW_NUMBER() OVER (ORDER BY [SeqNumbr] DESC) AS RN
FROM RM10101
)
UPDATE RM10101 SET SeqNumbr = (select max(SeqNumbr) from RM10101 where docNumbr = RM10101.DocNumbr and SeqNumbr=RM10101.SeqNumbr) + (@counter)
FROM RM10101
INNER JOIN UpdateData ON RM10101.DocNumbr = UpdateData.DocNumbr
where rn =@counter
SET @counter = @counter + 1
end
end
```
Any help would be much appreciated.
Thanks,
Bob | The script might fail with a sequence like:
('Doc001',900), ('Doc001',900), ('Doc001',900), ('Doc001',903), ('Doc001',904), ('Doc001',905), ('Doc001',905), ('Doc001',905), ('Doc001',910), ('Doc001',910), ('Doc001',915), ('Doc001',915)
The script will give error as below:
*Msg 512, Level 16, State 1, Line 1
Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.*
Correct script would be:
```
WHILE (SELECT SUM(X.A) FROM ( SELECT COUNT(*) A FROM Table_1 GROUP BY DocNumber, SeqNumber HAVING count(*) > 1) X) > 0
BEGIN
With TableAdjusted as (
select DocNumber, SeqNumber, row_number() over (partition by DocNumber, SeqNumber order by (select NULL)) RowID
from Table_1
)
update A
set SeqNumber = (B.RowID - 1)*5 + B.SeqNumber
from TableAdjusted A
inner join TableAdjusted B
on A.DocNumber = B.DocNumber and A.SeqNumber = B.SeqNumber and A.RowID = B.RowID
where A.RowID > 1
END
``` | ```
;with tb1(id, seqNUm, docNum) as (
SELECT ROW_NUMBER() OVER (ORDER BY SeQNumber), DocNumber, SeQNumber From table1
)
update tb1 set docnum=201 where id=2
```
* set `id` to whichever row you want to update
"",
"sql",
"sql-server",
"sql-server-2000",
""
] |
I have a query that outputs a list of percentages based on a total number. The only part I can't figure out is an efficient method to filter the 'usid' against a value in another table.
The query is not failing but is taking a very long time to complete.
```
SELECT badge, count(usid)*100 / (SELECT COUNT(DISTINCT usid) from Table1)
FROM Table1
WHERE usid IN(
SELECT usid
FROM Table2
WHERE msid = 1
)
GROUP BY badge
```
The output looks something like this
```
-----------------------------
badge count
-----------------------------
1 65.1
2 45.4
3 22.7
4 12.12
```
The usid values being counted should be restricted to those WHERE msid = 1.
Even if this method works, it takes far too long. Any ideas for a workaround? | You should be able to use explicit JOIN notation instead of the IN clause:
```
SELECT a.badge, COUNT(a.usid)*100 / (SELECT COUNT(DISTINCT usid) from Table1)
FROM Table1 AS a
JOIN (SELECT DISTINCT usid FROM Table2 WHERE msid = 1) AS b ON a.usid = b.usid
GROUP BY a.badge
```
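Whether the rewrite changes results can be checked on toy data; here is a sketch with Python's `sqlite3` (made-up rows, including a duplicate in Table2 to show why the `DISTINCT` in the derived table matters):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Table1 (usid INT, badge INT)")
conn.execute("CREATE TABLE Table2 (usid INT, msid INT)")
conn.executemany("INSERT INTO Table1 VALUES (?, ?)", [(1, 1), (2, 1), (3, 2), (4, 2)])
# usid 2 appears twice with msid = 1; a plain JOIN without DISTINCT
# in the derived table would double-count it.
conn.executemany("INSERT INTO Table2 VALUES (?, ?)", [(1, 1), (2, 1), (2, 1), (3, 2)])

in_version = conn.execute("""
    SELECT badge, COUNT(usid) * 100 / (SELECT COUNT(DISTINCT usid) FROM Table1)
    FROM Table1
    WHERE usid IN (SELECT usid FROM Table2 WHERE msid = 1)
    GROUP BY badge
""").fetchall()

join_version = conn.execute("""
    SELECT a.badge, COUNT(a.usid) * 100 / (SELECT COUNT(DISTINCT usid) FROM Table1)
    FROM Table1 a
    JOIN (SELECT DISTINCT usid FROM Table2 WHERE msid = 1) b ON a.usid = b.usid
    GROUP BY a.badge
""").fetchall()
```

Both formulations return the same percentage rows on this data.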
However, I'm not confident that will fix the performance problem. A half-way decent optimizer will realize that the sub-select in the select-list is constant, but you should verify that the optimizer is half-way decent (or better) by looking at the query plan.
I'm not convinced that the `COUNT(a.usid)` does anything different from `COUNT(*)` in this context. It would produce a different answer only if `a.usid` could contain nulls. See also [`COUNT(*)` vs `COUNT(1)` vs `COUNT(pk)` βΒ which is better?](https://stackoverflow.com/questions/2710621/count-vs-count1-vs-countpk-which-is-better/) | This is not such a simple query. Depending on the database you are using, the `in` might be quite inefficient and each output row is calculating the `count(distinct)`. Try rewriting the query as:
```
SELECT badge, count(usid)*100 / x.cnt
FROM Table1 t1 cross join
(SELECT COUNT(DISTINCT usid) as cnt from Table1) x
WHERE exists (select 1
from table2 t2
where t2.usid = t1.usid and t2.msid = 1
)
GROUP BY t1.badge, x.cnt;
```
This query will probably be faster, regardless of the database you are using.
By the way, it is suspicious that you are calculating `count(usid)` and then dividing by `count(distinct usid)`. I would expect either both or neither to be `count(distinct)`. | Simple SQL query is taking 20 minutes to run? | [
"",
"sql",
""
] |
I'm trying to get the **total number of rows** using this SQL query, the result should be = **1**
```
SELECT COUNT(id) FROM offer_process WHERE uid = 103 AND date = '2014-08-20'
```
I don't know exactly how to get the total number of rows from a date (2014-08-20) in this case.
 | You have a `datetime` column.
To check only `date` part, use [DATE() MySQL function](http://dev.mysql.com/doc/refman/5.6/en/date-and-time-functions.html#function_date "DATE() MySQL function")
```
SELECT COUNT(id) FROM offer_process WHERE uid = 103 AND DATE(date) = '2014-08-20'
```
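The behaviour is easy to reproduce; here is a sketch with Python's `sqlite3`, which also has a `DATE()` function (made-up rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE offer_process (id INTEGER PRIMARY KEY, uid INT, "date" TEXT)')
conn.executemany('INSERT INTO offer_process (uid, "date") VALUES (?, ?)', [
    (103, "2014-08-20 14:23:11"),   # the row we want to count
    (103, "2014-08-21 09:00:00"),   # same uid, different day
    (104, "2014-08-20 10:00:00"),   # same day, different uid
])

# Comparing the raw datetime to a bare date matches nothing; DATE() fixes it.
naive = conn.execute(
    'SELECT COUNT(id) FROM offer_process WHERE uid = 103 AND "date" = ?',
    ("2014-08-20",)).fetchone()[0]
fixed = conn.execute(
    'SELECT COUNT(id) FROM offer_process WHERE uid = 103 AND DATE("date") = ?',
    ("2014-08-20",)).fetchone()[0]
# The sargable LIKE-prefix variant from the answer gives the same count.
by_prefix = conn.execute(
    "SELECT COUNT(id) FROM offer_process WHERE uid = 103 AND \"date\" LIKE '2014-08-20%'"
).fetchone()[0]
```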
Also, if you have an **index** on your `date` field, you should avoid using `DATE/TIME` MySQL functions. Instead, you can use `WHERE date LIKE '2014-08-20%'` for better performance | You may try this:
```
SELECT COUNT(id) FROM offer_process WHERE uid = 103
AND DATE_FORMAT(`date`, "%Y-%m-%d") = '2014-08-20'
```
or you may try using date
```
SELECT COUNT(id) FROM offer_process
WHERE uid = 103 AND date(`date`) = '2014-08-20'
``` | Simple SELECT COUNT SQL query error | [
"",
"mysql",
"sql",
""
] |
I have a list with values in it. I want to insert these values in a SQL table using a single `INSERT` statement.
Example: Say, there is a list with names (the size of the list is not constant). There is a `STUDENTS` table with `NAME` column. I want to insert in the `STUDENTS` table the names from the list using a single `INSERT` statement.
Right now I loop through the list and insert the value in the table. That means, the number of insert statements is equal to the length of the list.
```
List<String> Name;
foreach (String s in Name)
{
INSERT INTO STUDENTS (NAME) VALUES (s)
}
```
Is there a way I can accomplish this in a single SQL `INSERT` statement? Any help is much appreciated. | In MySQL you can do it like this:
```
INSERT INTO tbl (f1,f2) VALUES(1,2),(3,4),(5,6)...(55,77);
```
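From client code, the multi-row `VALUES` list can be built with one placeholder group per list item; here is a sketch of the question's scenario using Python's `sqlite3` (the original is VB6, so treat this as an illustration of the SQL, not of the client API):

```python
import sqlite3

names = ["Alice", "Bob", "Carol"]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT)")

# One INSERT statement with as many (?) groups as there are names.
placeholders = ", ".join("(?)" for _ in names)
conn.execute(f"INSERT INTO students (name) VALUES {placeholders}", names)

count = conn.execute("SELECT COUNT(*) FROM students").fetchone()[0]
inserted = [r[0] for r in conn.execute("SELECT name FROM students ORDER BY name")]
```

Building placeholders (rather than concatenating the names into the SQL string) keeps the single statement safe from SQL injection.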
The same syntax works in the MSSQL dialect. | If your list was in a temporary table, you could use the following syntax:
```
insert into tblname
(columnlist)
select name from temptable;
``` | Insert values from a list in a single SQL statement | [
"",
"sql",
"vb6",
"sql-insert",
""
] |
I have this database:
```
βββββββββββ¦βββββββ¦βββββββββββ¦ββββββββββββ¦βββββ¦ββββββββββ
β ID_Elab β Step β ID_Progr β ID_Causal β GI β Minutes β
β ββββββββββ¬βββββββ¬βββββββββββ¬ββββββββββββ¬βββββ¬ββββββββββ£
β 8 β 1 β 8 β 19 β 0 β 480 β
β 8 β 2 β 1391 β 19 β 0 β 480 β
β 8 β 3 β 1781 β 19 β 0 β 480 β
β 10 β 1 β 10 β 50 β 0 β 480 β
β 10 β 1 β 43 β 14 β 0 β 210 β
β 10 β 2 β 99 β 50 β 0 β 480 β
β 10 β 2 β 100 β 14 β 0 β 210 β
β 10 β 3 β 124 β 50 β 0 β 480 β
β 10 β 3 β 125 β 72 β 0 β 120 β
β 10 β 3 β 126 β 73 β 0 β 90 β
β 11 β 1 β 8 β 19 β 0 β 480 β
β 11 β 2 β 1391 β 19 β 0 β 480 β
βββββββββββ©βββββββ©βββββββββββ©ββββββββββββ©βββββ©ββββββββββ
```
I need to check, for each group of IDs, which is the greatest `Step` value and then select every row of that group with that `Step` value.
The above table would become:
```
βββββββββββ¦βββββββ¦βββββββββββ¦ββββββββββββ¦βββββ¦ββββββββββ
β ID_Elab β Step β ID_Progr β ID_Causal β GI β Minutes β
β ββββββββββ¬βββββββ¬βββββββββββ¬ββββββββββββ¬βββββ¬ββββββββββ£
β 8 β 3 β 1781 β 19 β 0 β 480 β
β 10 β 3 β 124 β 50 β 0 β 480 β
β 10 β 3 β 125 β 72 β 0 β 120 β
β 10 β 3 β 126 β 73 β 0 β 90 β
β 11 β 2 β 1391 β 19 β 0 β 480 β
βββββββββββ©βββββββ©βββββββββββ©ββββββββββββ©βββββ©ββββββββββ
```
I've tried to follow [this question](https://stackoverflow.com/questions/12570747/how-to-sql-server-select-distinct-field-based-on-max-value-in-other-field) and this is my resulting query:
```
SELECT *
FROM testVela a
JOIN (
SELECT ID_Elab, MAX(Step) AS Step, ID_Progr, ID_Causal, GI, Minutes
FROM testVela
GROUP BY ID_Elab, ID_Progr, ID_Causal, Minutes
) b
ON a.ID_Elab = b.ID_Elab AND a.Step = b.Step
```
But this query returns something completely wrong... how can I fix it? | ```
create table #test_table(
ID_Elab int,
Step int,
ID_Progr int,
ID_Casusal int,
GI int,
Minutes int
)
insert into #test_table
select 8, 1, 8, 19, 0, 480 union all
select 8, 2, 1391, 19, 0, 480 union all
select 8, 3, 1781, 19, 0, 480 union all
select 10, 1, 10, 50, 0, 480 union all
select 10, 1, 43, 14, 0, 210 union all
select 10, 2, 99, 50, 0, 480 union all
select 10, 2, 100, 14, 0, 210 union all
select 10, 3, 124, 50, 0, 480 union all
select 10, 3, 125, 72, 0, 120 union all
select 10, 3, 126, 73, 0, 90 union all
select 11, 1, 8, 19, 0, 480 union all
select 11, 2, 1391, 19, 0, 480
;with cte as(
select
*,
rn = rank() over(partition by ID_Elab order by step desc)
from #test_table
)
select
ID_Elab,
Step,
ID_Progr,
ID_Casusal,
GI,
Minutes
from cte
where
rn = 1
drop table #test_table
``` | ```
mysql> select * FROM tst where step = (select max(step) from tst as B where tst.ID_Elab = B.ID_Elab);
+---------+------+----------+-----------+----+---------+
| ID_Elab | Step | ID_Progr | ID_Causal | GI | Minutes |
+---------+------+----------+-----------+----+---------+
| 8 | 3 | 1781 | 19 | 0 | 480 |
| 10 | 3 | 124 | 50 | 0 | 480 |
| 10 | 3 | 125 | 72 | 0 | 120 |
| 10 | 3 | 126 | 73 | 0 | 90 |
| 11 | 2 | 1391 | 19 | 0 | 480 |
+---------+------+----------+-----------+----+---------+
5 rows in set (0.01 sec)
``` | Check greater value of row and then take every row with same value grouping by ID | [
"",
"sql",
"sql-server",
""
] |
I have a table called `Users` that has a growing list of preferences. These preferences could include a ReligionId (which would key off to another table that contains the list of religions).
The list of preferences is growing. I want to split it off the `Users` table into 2 tables. The strategy that I think would work well is to make a separate table called `UserPreferences`. I'm wondering if doing this is in line with the rules of normalization. Here is an illustration to make things a bit more clear.

Is this normalized? Are there better ways? All comments appreciated.
EDIT: How UserPreferences keys off to other tables:
 | At a minimum you could just have Users and Preferences. There should be a one to many relationship between users and preferences. One user can have many preferences. You could also split out the email addresses into another table - so that one user could have multiple email addresses - you can have a flag to denote the primary one. The DDL would look like:
```
create table Users
(
UserId int,
Age int
)
create table Preferences
(
PreferencesId int,
UserId int,
ReligionId int,
PersonalDescription varchar(2000),
HairColor int
)
create table EmailAddresses
(
EmailId int,
UserId int,
EmailAddress varchar(100),
IsPrimary bit
)
create table Religion
(
ReligionId int,
Name varchar(200)
)
Insert into Religion (ReligionId, Name) Values (1, 'Jediism')
Insert into Religion (ReligionId, Name) Values (2, 'Sithism')
Insert into Religion (ReligionId, Name) Values (3, 'Yuuzhan Vong')
Insert into Religion (ReligionId, Name) Values (4, 'Pinacism')
Insert into Users (UserId, Age) Values (1, 30)
Insert into Users (UserId, Age) Values (2, 18)
Insert into Preferences (PreferencesId, UserId, ReligionId, PersonalDescription) values (1, 1, 1, 'a description')
Insert into Preferences (PreferencesId, UserId, ReligionId, PersonalDescription) values (2, 1, 4, 'another description')
Insert into Preferences (PreferencesId, UserId, ReligionId, PersonalDescription) values (3, 1, 4, 'even another description')
```
 | Some folks are recommending to store the preferences one per row. This is called an Entity-Attribute-Value table, and it is *not* normalized. Some people say EAV is "more normalized," but they're mistaken. There is no rule of normalization that encourages EAV as a design in a relational database.
One practical way you can tell it's not normalized is that you can no longer use a foreign key constraint for your religion lookup table, if all values of all preferences share a single column in this preferences table. You can't make a foreign key constraint restrict the values only on rows for a particular preference type -- the FK constraint always applies to all rows in the table.
**Basically, Entity-Attribute-Value breaks SQL's support for constraints.**
The only normalized design is to define a separate column for each distinct preference. Then you can define data types and constraints appropriate to that preference type.
If you really want to understand relations and normalization, read [SQL and Relational Theory: How to Write Accurate SQL Code by C. J. Date](https://rads.stackoverflow.com/amzn/click/com/1449316409).
Each column represents selections from a set. A set can be the set of integers, or the set of religions, or the set of email addresses. A row in a table is a pairing of sets that "go together," for example a given user has a name, and an birthdate, and a religion, and an email address, so those values are matched up together into one row, and together they describe something that exists in the world, i.e. a human being with those attributes.
What this means is that in each row, you have one value chosen for each of the columns, i.e. from each of the component sets that are references. And each column contains values *only* from one set. In the religion column, you can only choose religions, you can't also put favorite color and mother's maiden name and shoe size into the same column.
That's why EAV is bogus from a relational perspective, because it mashes values from any and all disparate attributes into the same column. It's more like a spreadsheet than a database. Not that I'm saying relational is the only way to store data. It's just that if you're asking if EAV is normalized, and normalization assumes as a prerequisite that the data is relational, then no, EAV is not relational and therefore cannot be normalized. | Users table to Users and User Preferences. Is this normalized? | [
"",
"sql",
"sql-server",
"normalization",
""
] |
I am using this T-SQL to repair my suspected TFS database:
```
EXEC sp_resetstatus [TFS_Projects];
ALTER DATABASE [TFS_Projects] SET EMERGENCY
DBCC checkdb([TFS_Projects])
ALTER DATABASE [TFS_Projects] SET SINGLE_USER WITH ROLLBACK IMMEDIATE
DBCC CheckDB ([TFS_Projects], REPAIR_ALLOW_DATA_LOSS)
ALTER DATABASE [TFS_Projects] SET MULTI_USER
```
but when I use this T-SQL I get the error
> Database 'TFS\_Projects' cannot be opened due to inaccessible files or insufficient memory or disk space.
How can I repair my SQL database?
I am using SQL Server 2012.
**UPDATE 1:**
This error occurs on the line:
> DBCC checkdb([TFS\_Projects])
**UPDATE 2:**
I have 20 GB free on the hard drive that holds my .mdf and .ldf files.
**UPDATE 3:**
I cannot check Autogrow, because when I right-click on the database the error appears.
The .mdf and .ldf files are not read-only,
and I am logged in as the Windows administrator, and logged in to SQL Server as sa. | I updated my answer because the old answer is dangerous and would damage the TFS database!
This answer is from Microsoft: <http://msdn.microsoft.com/en-us/library/jj620932.aspx>
> **To back up your databases**
>
> > Launch TFSBackup.exe.
> > The TFSBackup.exe tool is in the Tools folder where you installed Team Foundation Server. The default location is C:\Program Files\Microsoft Team Foundation Server 12.0\Tools.
> > In Source SQL Server Instance, enter the name of the SQL Server instance that hosts the TFS databases you want to back up and choose Connect.
> > In Select databases to backup, choose the databases to back up.
> > Need help? List of TFS 2010 databases on MSDN; List of TFS 2012 databases on MSDN.
> > In Backup Databases to, enter the name of a network share that is configured with read/write access for Everyone, or accept the default location in the file system of the SQL Server you connected to in step 2.
> > Note Note
> > If you want to overwrite backups stored in this network location, you can choose Overwrite existing database backups at this location.
> > Choose Backup Now.
> > The Backup tool reports progress on each database being backed up.
> > Choose Close.
> > Restore your data
>
> **To restore your TFS data**
>
> > Launch TFSRestore.exe.
> > The TFSRestore.exe tool is in the Tools folder where you installed Team Foundation Server. The default location is C:\Program Files\Microsoft Team Foundation Server 12.0\Tools.
> > In Target SQL Server Instance, enter the SQL Server instance you will use as the data tier and choose Connect.
> > Choose Add Share and enter the UNC path to the network share that is configured with read/write access to Everyone where you stored the backups of your TFS data. For example, \servername\sharename.
> > If the backup files are located on the file system of the server that is running TFSRestore.exe, you can use the drop down box to select a local drive.
> > Note Note
> > The service account for the instance of SQL Server you identified at the start of this procedure must have read access to this share.
> > In the left hand navigation pane, choose the network share or local disk you identified in the previous step.
> > The TFS Restore Tool displays the database backups stored on the file share.
> > Select the check boxes for the databases you want to restore to the SQL Server you identified at the start of this procedure.
> > Important note Important
> > For SharePoint, you must only restore the WSS\_Content database. Do not restore the WSS\_AdminContent or WSS\_Config databases. You want the new SharePoint Foundation versions of these databases, not the ones from the previous version of SharePoint or from a SharePoint installation running on any other server.
> > Choose Overwrite the existing database(s) and then choose Restore.
> > The Database Restore Tool restores your data and displays progress reports.
> > Choose Close. | 1) If possible add more hard drive space either by removing of unnecessary files from hard drive or add new hard drive with larger size.
2) Check if the database is set to Autogrow on.
3) Check if the account which is trying to access the database has enough permission to perform operation.
4) Make sure that .mdf and .ldf file are not marked as read only on operating system file system level.
Found here: <http://blog.sqlauthority.com/2007/08/02/sql-server-fix-error-945-database-cannot-be-opened-due-to-inaccessible-files-or-insufficient-memory-or-disk-space-see-the-sql-server-error-log-for-details/> | Repair Suspected TFS database | [
"",
"sql",
"sql-server",
"t-sql",
"tfs",
""
] |
I want to create a trigger that fires every time any column is changed - whether it is freshly updated or newly inserted. I created something like this:
```
CREATE TRIGGER textsearch
BEFORE INSERT OR UPDATE
ON table
FOR EACH ROW
EXECUTE PROCEDURE trigger();
```
and body of trigger() function is:
```
BEGIN
NEW.ts := (
SELECT COALESCE(a::text,'') || ' ' ||
COALESCE(b::int,'') || ' ' ||
COALESCE(c::text,'') || ' ' ||
COALESCE(d::int, '') || ' ' ||
COALESCE(e::text,'')
FROM table
WHERE table.id = new.id);
RETURN NEW;
END
```
I hope it is clear what I want to do.
My problem is that the trigger fires only on update, not on insert. I guess this isn't working because I have BEFORE INSERT OR UPDATE, but if I change it to AFTER INSERT OR UPDATE then it doesn't work for either INSERT or UPDATE. | You need to use the `NEW` record directly:
```
BEGIN
NEW.ts := concat_ws(' ', NEW.a::text, NEW.b::TEXT, NEW.c::TEXT);
RETURN NEW;
END;
```
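The `NULL`-propagation difference is the whole point here. `concat_ws` is PostgreSQL-specific (SQLite only gained it in 3.44), but the hazard with `||` can be shown with any engine; here is a sketch with Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Plain || propagates NULL: one NULL operand makes the whole result NULL.
swallowed = conn.execute("SELECT 'foo' || ' ' || NULL").fetchone()[0]

# COALESCE-ing each operand (what concat_ws does for you) keeps the rest.
kept = conn.execute("SELECT 'foo' || ' ' || COALESCE(NULL, '')").fetchone()[0]
```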
The advantage of `concat_ws` over `||` is that `concat_ws` will treat `NULL` values differently. The result of `'foo'||NULL` will yield `NULL`, which is most probably not what you want. `concat_ws` will use an empty string for `NULL` values. | It doesn't work because you're calling SELECT inside the function.
When it runs `BEFORE INSERT` then there isn't a row to select, is there?
Actually, in a `BEFORE UPDATE` trigger you'll see the "old" version of the row anyway, won't you?
Just directly use the fields: `NEW.a` etc rather than selecting.
---
As an edit - here is an example showing what the trigger function can see. It's exactly as you'd expect in a BEFORE trigger.
```
BEGIN;
CREATE TABLE tt (i int, t text, PRIMARY KEY (i));
CREATE FUNCTION trigfn() RETURNS TRIGGER AS $$
DECLARE
sv text;
BEGIN
SELECT t INTO sv FROM tt WHERE i = NEW.i;
RAISE NOTICE 'new value = %, selected value = %', NEW.t, sv;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER trigtest BEFORE INSERT OR UPDATE ON tt FOR EACH ROW EXECUTE PROCEDURE trigfn();
INSERT INTO tt VALUES (1,'a1');
UPDATE tt SET t = 'a2' WHERE i = 1;
ROLLBACK;
``` | Trigger on every update or insert | [
"",
"sql",
"postgresql",
""
] |
Hi, I have the below query in an SP.
@CrmContactId is a parameter to the SP.
```
Select distinct A.PolicyBusinessId, A.PolicyDetailId
from TPolicyBusiness A
inner join TPolicyOwner B on a.PolicyDetailId=b.PolicyDetailId
Left Join TAdditionalOwner C on c.PolicyBusinessId=A.PolicyBusinessId
where (b.CRMContactId = @CRMContactId)
```
We made a change and introduced an OR condition:
```
Select distinct A.PolicyBusinessId, A.PolicyDetailId
from TPolicyBusiness A
inner join TPolicyOwner B on a.PolicyDetailId=b.PolicyDetailId
Left Join TAdditionalOwner C on c.PolicyBusinessId=A.PolicyBusinessId
where (b.CRMContactId = @CRMContactId OR C.CRMContactId = @CRMContactId)
```
Execution plan:

Attachment : [Execution Plan](https://docs.google.com/file/d/0Bye2QsXc8Mp9dVg3bnV1Z1VTR0E/edit)
But this change caused a huge performance issue on the live server. TPolicyBusiness and TPolicyOwner are heavy tables with millions of records. The TAdditionalOwner table is a light table with only a few records.
To fix this issue, UNION ALL was used instead of the OR condition.
```
Select distinct A.PolicyBusinessId, A.PolicyDetailId
From
(
Select A.PolicyBusinessId, A.PolicyDetailId
from TPolicyBusiness A
inner join TPolicyOwner B on a.PolicyDetailId=b.PolicyDetailId
where b.CRMContactId = @CRMContactId
union all
Select A.PolicyBusinessId, A.PolicyDetailId
from TPolicyBusiness A
Join TAdditionalOwner C on c.PolicyBusinessId=A.PolicyBusinessId
where C.CRMContactId = @CRMContactId
) as A
```
Execution Plan:
Attachment
[Execution Plan](https://docs.google.com/file/d/0Bye2QsXc8Mp9OTJwLXRLNEV2S0k/edit?pli=1)
Could someone please explain why the introduction of OR caused an issue and why using Union is better than OR in this case? | Using `UNION ALL` to replace `OR` is actually one of the well-known optimization tricks. The best reference and explanation is in this article: [Index Union](http://blogs.msdn.com/b/craigfr/archive/2006/10/18/index-union.aspx).
The gist of it is that `OR` predicates that *could* be satisfied by two index seeks cannot be reliably detected by the query optimizer (the reason being the impossibility of predicting the disjoint sets from the two sides of the OR). So when the same condition is expressed as a UNION ALL, the optimizer has no problem creating a plan that does two short seeks and unions the results. The important thing is to realize that `a=1 or b=2` can be different from `a=1 union all b=2`, because the first query returns rows that satisfy *both* conditions once, while the latter returns them twice. When you write the query as UNION ALL, you are telling the compiler that you understand that and have no problem with it.
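To make that difference concrete, here is a minimal sketch using SQLite through Python's `sqlite3` module (the table and values are made up, not the poster's schema): a row that satisfies *both* predicates comes back once from the `OR` query but twice from the `UNION ALL`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INT, b INT)")
# (1, 2) satisfies both sides of the OR
conn.executemany("INSERT INTO t VALUES (?, ?)", [(1, 2), (1, 9), (5, 2)])

or_rows = conn.execute("SELECT * FROM t WHERE a = 1 OR b = 2").fetchall()
union_rows = conn.execute(
    "SELECT * FROM t WHERE a = 1 UNION ALL SELECT * FROM t WHERE b = 2"
).fetchall()
print(len(or_rows), len(union_rows))  # 3 4, because (1, 2) appears twice in the UNION ALL
```

That extra duplicate is exactly what the `SELECT DISTINCT` in the rewritten query cleans up.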
For further reference see [How to analyse SQL Server performance](http://rusanu.com/2014/02/24/how-to-analyse-sql-server-performance/). | The JOIN to TAdditionalOwner is using
```
TPolicyBusiness.PolicyBusinessId = TAdditionalOwner.PolicyBusinessId
```
where the JOIN to TPolicyOwner is using the
```
TPolicyBusiness.PolicyDetailId = TPolicyOwner.PolicyDetailId
```
Check that there is a corresponding index for the PolicyBusinessId.
In the 2-way JOIN, that is part of the UNION, the smaller TAdditionalOwner table will be optimised if there is not an index for it to refer to in TPolicyBusiness, due to the small size. The server will still do a table scan, but use the values from the smaller table and see if they are in the big table somewhere. If there is no index, this optimisation will disappear quite quickly as the small table grows.
Given that you are not referring to either B or C in the SELECT, you can simply do this:
```
SELECT DISTINCT A.PolicyBusinessId, A.PolicyDetailId
FROM TPolicyBusiness A
LEFT JOIN TPolicyOwner B ON a.PolicyDetailId = b.PolicyDetailId AND b.CRMContactId = @CRMContactId
LEFT JOIN TAdditionalOwner C on c.PolicyBusinessId = A.PolicyBusinessId AND C.CRMContactId = @CRMContactId
```
This way it will JOIN to either table, same as per your UNION, but without the OUTER select.
Either way ensure that the fields used are indexed. | Why using OR condition instead of Union caused a performance Issue | [
"",
"sql",
"sql-server",
"union-all",
""
] |
Having this table:
```
+----+-------+---------+-----+
| id | group | name | age |
+----+-------+---------+-----+
| 1 | a | John | 11 |
+----+-------+---------+-----+
| 2 | a | Rachel | 12 |
+----+-------+---------+-----+
| 3 | a | Sarah | 11 |
+----+-------+---------+-----+
| 4 | a | Joe | 14 |
+----+-------+---------+-----+
| 5 | b | Richard | 13 |
+----+-------+---------+-----+
| 6 | b | Zoe | 12 |
+----+-------+---------+-----+
```
I want to write a query so I get all values from the **beginning** of the table (`id = 1`) until `group = 'b'`, so I get a table like this:
```
+----+-------+--------+-----+
| id | group | name | age |
+----+-------+--------+-----+
| 1 | a | John | 11 |
+----+-------+--------+-----+
| 2 | a | Rachel | 12 |
+----+-------+--------+-----+
| 3 | a | Sarah | 11 |
+----+-------+--------+-----+
| 4 | a | Joe | 14 |
+----+-------+--------+-----+
```
Can anyone help me? Many thanks in advance! | You would do a query like this:
```
select tt.*
from thistable tt
where tt.id < (select min(tt2.id) from thistable tt2 where tt2.`group` = 'b');
```
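If it helps to see it run, here is the same approach against the sample data, using SQLite through Python's `sqlite3` module (the column is renamed `grp` purely to sidestep quoting the reserved word):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INT, grp TEXT, name TEXT, age INT)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?, ?)", [
    (1, 'a', 'John', 11), (2, 'a', 'Rachel', 12), (3, 'a', 'Sarah', 11),
    (4, 'a', 'Joe', 14), (5, 'b', 'Richard', 13), (6, 'b', 'Zoe', 12),
])
# everything strictly before the first 'b' row
result = conn.execute(
    "SELECT id, name FROM t "
    "WHERE id < (SELECT MIN(id) FROM t WHERE grp = 'b') ORDER BY id"
).fetchall()
print(result)  # [(1, 'John'), (2, 'Rachel'), (3, 'Sarah'), (4, 'Joe')]
```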
Note that `group` is a lousy name for a column because it is a reserved word. | If you don't know the group values and want to get the first one, you can use user-defined variables, in other words a rank query:
```
select `id`, `group`, `name`, `age`
from (
select t.* ,
@r:= case when @g = t.`group` then @r else @r+1 end `row`,
@g:=t.`group`
from t
cross join(select @g:='',@r:=0) t1
order by t.id , t.`group`
) t2
where `row` = 1
```
## [`Demo`](http://sqlfiddle.com/#!2/4148c4/2)
Note I have ordered the result using the id column of the table, so if you don't have the group records in series then change the order by to
```
order by t.`group`
``` | MySQL Select from beginning until row = string | [
"",
"mysql",
"sql",
""
] |
I am not good with complex SQL queries, so I am posting this question and I will be happy if you help me.
I have 2 tables. The first is `idea_box` and the second is `idea_box_voting`. First I want to join the two tables, and then I want to count the total number of 0 and 1 votes for each id. Here are my two tables.
```
idea_box
--------
Idea_id property_id idea_title the_idea user_id
25 0 Idea 25 Lorem 23
24 0 Idea 24 Lorem 23
23 0 Idea 23 Lorem 23
22 0 Idea 22 Lorem 23
21 0 Idea 21 Lorem 23
20 0 Idea 20 Lorem 23
idea_box_voting
---------------
vote_id idea_id ip user_id thumbs
37 25 192.168.1.14 23 1
44 25 192.168.1.14 23 1
47 25 192.168.1.14 23 0
40 25 192.168.1.14 23 1
41 24 192.168.1.14 23 1
38 24 192.168.1.14 23 0
45 24 192.168.1.14 23 1
39 23 192.168.1.14 23 1
46 23 192.168.1.14 23 0
42 23 192.168.1.14 23 0
43 22 192.168.1.14 23 1
```
**And I want this output:**
```
Idea_id property_id idea_title the_idea user_id one_count zero_count
25 0 Idea 25 Lorem 23 3 1
24 0 Idea 24 Lorem 23 2 1
23 0 Idea 23 Lorem 23 1 2
22 0 Idea 22 Lorem 23 1 0
21 0 Idea 21 Lorem 23 0 0
20 0 Idea 20 Lorem 23 0 0
```
Thanks. | ```
SELECT ib.*,
(
SELECT COUNT(*)
FROM idea_box_voting ibv
WHERE ibv.idea_id = ib.idea_id AND ibv.thumbs = 1
) AS one_count,
(
SELECT COUNT(*)
FROM idea_box_voting ibv
WHERE ibv.idea_id = ib.idea_id AND ibv.thumbs = 0
) AS zero_count
FROM idea_box ib
``` | Try this
```
Select Count(CASE WHEN vote.thumbs = 0 THEN 1 END) AS Zero_Count,
Count(CASE WHEN vote.thumbs = 1 THEN 1 END) AS One_Count
FROM idea_box as ib
left Join idea_box_voting as vote
On vote.idea_id = ib.idea_id
Group by vote.idea_id
``` | How to use join table and count with select query | [
"",
"mysql",
"sql",
""
] |
I have two select statements which are returning duplicated data. What I'm trying to accomplish is to remove a duplicated leg, but I'm having a hard time getting to the second row programmatically.
```
select i.InvID, i.UID, i.StartDate, i.EndDate, i.Minutes,i.ABID from inv_v i, InvoiceLines_v i2 where
i.Period = '2014/08'
and i.EndDate = i2.EndDate
and i.Minutes = i2.Minutes
and i.Uid <> i2.Uid
and i.abid = i2.abid
order by i.EndDate
```
This select statement returns the following data.
As you can see, it returns duplicate rows where `Minutes` and `ABID` are the same but `InvID` is different. What I need to do is remove one of the `InvID`s where the criteria match. It doesn't matter which one.
The second select statement is returning different data.
```
select i.InvID, i.UID, i.StartDate, i.EndDate, i.Minutes from InvoiceLines_v i, InvoiceLines_v i2 where
i.Period = '2014/08'
and i.EndDate = i2.EndDate
and i.Uid = i2.Uid
and i.Abid <> i2.Abid
and i.Language <> i2.Language
order by i.startdate desc
```
In this select statement I want to remove an `InvID` where `UID` is the same, then select the lowest `Minutes`. In this case, I would remove the following InvIDs: `2537676`, `2537210`
My goal is to remove those rows...
I could accomplish this using a `cursor` to grab the `InvID` and remove it with a simple delete statement, but I'm trying to stay away from cursors.
Any suggestions on how I can accomplish this? | You can use `exists` to delete all duplicates except the one with the highest `InvID` by deleting those rows where another row exists with the same values but with a higher `InvID`
```
delete from inv_v
where exists (
select 1 from inv_v i2
where i2.InvID > inv_v.InvID
and i2.minutes = inv_v.minutes
and i2.EndDate = inv_v.EndDate
and i2.abid = inv_v.abid
and i2.uid <> inv_v.uid -- not sure why <> is used here, copied from question
)
``` | I have faced similar problems regarding duplicate data, and someone told me to use PARTITION BY and other methods, but those were causing performance issues.
However, I had a primary key in my table through which I was able to select one row from the duplicate data and then delete it.
For example, in the first select statement "minutes" and "ABID" are the criteria that define duplication in the data, but "Invid" can be used to distinguish between the duplicate rows.
So you can use the below query to remove the duplication.
```
delete from inv_i where inv_id in (select max(inv_id) from inv_i group by minutes,abid having count(*) > 1 );
```
This simple concept was helpful to me. It can be helpful in your case if "Inv\_id" is unique. | Remove duplicate row based on select statement | [
"",
"sql",
"sql-server-2008",
""
] |
I have a page audit table that records which pages a user has accessed. Given a specific page, I need to find which previous page the user accessed, and which one was accessed the most.
For example, the FAQ Page\_ID is 3. I want to know if it is more frequently accessed from the First Access page (ID 1) or Home page (ID 5).
Example:
**Page Audit Table** (SQL Server)
```
ID | Page_ID | User_ID
1 | 1 | 6
2 | 3 | 6
3 | 5 | 4
4 | 3 | 4
5 | 1 | 7
6 | 3 | 7
7 | 1 | 5
8 | 3 | 2 --Note: previous user is not 2
9 | 3 | 5 --Note: previous page for user 5 is 1 and not 3
```
Looking for Page\_ID = 3, I want to retrieve:
```
Previous Page | Count
1 | 3
5 | 1
```
Note: I've looked at some similar questions here (like [this one](https://stackoverflow.com/questions/4860024/how-to-get-previous-row-value)), but they didn't help me solve this problem. | You can use window functions as one way to figure this out:
```
with UserPage as (
select
User_ID,
Page_ID,
row_number() over (partition by User_ID order by ID) as rn
from
PageAudit
)
select
p1.Page_ID,
count(*)
from
UserPage p1
inner join
UserPage p2
on p1.User_ID = p2.User_ID and
p1.rn + 1 = p2.rn
where
p2.Page_ID = 3
group by
p1.Page_ID;
```
[SQLFiddle Demo](http://sqlfiddle.com/#!3/2027b/6)
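As a sanity check of the expected output (previous page 1 seen 3 times, page 5 once), the same "previous row per user" logic can be traced in plain Python over the audit rows from the question:

```python
from collections import Counter

# (ID, Page_ID, User_ID) rows from the question's audit table
audit = [(1, 1, 6), (2, 3, 6), (3, 5, 4), (4, 3, 4), (5, 1, 7),
         (6, 3, 7), (7, 1, 5), (8, 3, 2), (9, 3, 5)]

last_page = {}           # most recently visited page per user
prev_counts = Counter()  # previous page -> count, for landings on page 3
for _id, page, user in sorted(audit):
    if page == 3 and user in last_page:
        prev_counts[last_page[user]] += 1
    last_page[user] = page
print(dict(prev_counts))  # {1: 3, 5: 1}
```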
If you have SQL2012, the answers using `lag` will be a lot more efficient. This one works on SQL2008 too.
For reference, as I think one of the lag solutions is over complicated, and one is wrong:
```
with prev as (
select
page_id,
lag(page_id,1) over (partition by user_id order by id) as prev_page
from
PageAudit
)
select
prev_page,
count(*)
from
prev
where
page_id = 3 and
prev_page is not null -- people who landed on page 3 without a previous page
group by
prev_page
```
[SQLFiddle Example of Lag](http://sqlfiddle.com/#!6/c0037/25) | ```
select prev_page, count(*)
from (select id,
page_id,
user_id,
lag(page_id, 1) over(partition by user_id order by id) as prev_page
from page_audit_table) x
where page_id = 3
and prev_page <> page_id
group by prev_page
```
**Fiddle:**
<http://sqlfiddle.com/#!6/c0037/23/0> | How do I query previous rows? | [
"",
"sql",
"sql-server",
""
] |
Considering the following query:
```
SELECT 1 AS a, '100' AS b
UNION
SELECT 1 AS a, '50' AS b
ORDER BY a, b;
```
which results in:
```
a b
1 '100'
1 '50'
```
What I really want is to sort column b by number and not text.
One possible solution could be:
```
SELECT 1 AS a, '100' AS b, '100'::int AS c
UNION
SELECT 1 AS a, '50' AS b, '50'::int AS c
ORDER BY a, c;
```
Which results in the ordering of:
```
a b c
1 '50' 50
1 '100' 100
```
as desired.
This is quite satisfying, but if I have 1 million result rows then I would also have 1 million extra values transferred in the result response which I do not need.
Is there a neater way of converting column values when ordering?
I am looking for a way of letting the SQL server convert the column values "within" the `ORDER BY` clause but only returning the "original" result columns. | You can do this with a subquery (I think the union is part of the problem).
```
select a, b
from (
SELECT 1 AS a, '100' AS b
UNION
SELECT 1 AS a, '50' AS b)s
order by cast(b as int)-- or b::int
```
see [SqlFiddle](http://sqlfiddle.com/#!15/d41d8/3102) with difference
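Side note: `CAST(b AS INT)` is the portable spelling of Postgres's `b::int`, so the same trick can be checked in any engine, for instance with SQLite through Python's `sqlite3` module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
rows = conn.execute(
    "SELECT a, b FROM ("
    "  SELECT 1 AS a, '100' AS b"
    "  UNION"
    "  SELECT 1 AS a, '50' AS b"
    ") s ORDER BY a, CAST(b AS INTEGER)"
).fetchall()
print(rows)  # [(1, '50'), (1, '100')], sorted numerically, returned as text
```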
But if it's just a sample, and b is a varchar type in your table, you can do
```
select a, b
from YourTable
order by a, cast(b as int)
```
without any subquery. | Dealing with strings is best handled by strings; if any value in that column isn't convertible to an integer, the cast will fail.
```
SELECT
a, b
FROM (
SELECT 1 AS a, '100' AS b
UNION ALL
SELECT 1 AS a, '50' AS b
UNION ALL
SELECT 1 AS a, 'badvalue' AS b
) s
ORDER BY
a, right(concat('000000000',b),8)
;
| A | B |
|---|----------|
| 1 | 50 |
| 1 | 100 |
| 1 | badvalue |
```
BUT:
```
SELECT
a, b
FROM (
SELECT 1 AS a, '100' AS b
UNION ALL
SELECT 1 AS a, '50' AS b
UNION ALL
SELECT 1 AS a, 'badvalue' AS b
) s
ORDER BY
a, b::int
;
ERROR: invalid input syntax for integer: "badvalue": SELECT
```
<http://sqlfiddle.com/#!15/d41d8/3109>
---
EDIT in response to leading +/- negative characters
```
SELECT
a, b
FROM (
SELECT 1 AS a, '-100' AS b
UNION ALL
SELECT 1 AS a, '-50' AS b
UNION ALL
SELECT 1 AS a, '100' AS b
UNION ALL
SELECT 1 AS a, '50' AS b
UNION ALL
SELECT 1 AS a, 'badvalue' AS b
) s
ORDER BY
a
, case when left(b,1) = '-' then right(concat('000000000',b),8)
else right(concat('11111111',replace(b,'+','')),8)
end
;
```
<http://sqlfiddle.com/#!15/d41d8/3112> | Converting values within ORDER BY clause | [
"",
"sql",
"postgresql",
"type-conversion",
"sql-order-by",
""
] |
I have a problem with sorting SQL tables.
I have this:
```
+------+------+
| col1 | col2 |
+------+------+
| a | 1 |
| b | 3 |
| c | 4 |
| d | 3 |
| e | 2 |
| f | 2 |
| g | 2 |
| h | 1 |
+------+------+
```
And i need to have this:
```
+------+------+
| col1 | col2 |
+------+------+
| e | 2 |
| f | 2 |
| g | 2 |
| a | 1 |
| h | 1 |
| b | 3 |
| d | 3 |
| c | 4 |
+------+------+
```
I tried with COUNT(), but it works only with GROUP BY, and that's why it isn't what I need.
Sorry for my bad English, and thanks for all responses. | If the database supports the OVER clause then it is quite simple:
```
SELECT t.id, t.value
FROM t
ORDER BY count(*) over (partition by value) DESC
```
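For a quick check of what this orders by, here is the same frequency-first ordering in plain Python over the sample rows (ties between equally frequent values, the 1s and 3s both appear twice, are broken by the value itself, matching the expected output):

```python
from collections import Counter

rows = [('a', 1), ('b', 3), ('c', 4), ('d', 3), ('e', 2),
        ('f', 2), ('g', 2), ('h', 1)]
freq = Counter(col2 for _, col2 in rows)  # how often each col2 value occurs
ordered = sorted(rows, key=lambda r: (-freq[r[1]], r[1], r[0]))
print(ordered)  # e, f, g first (value 2 occurs 3 times), then a, h, then b, d, then c
```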
See SQL Fiddle - <http://sqlfiddle.com/#!6/ce805/3> | I see. You want to sort by the frequency of the values. Most dialects of SQL support window functions, so this does what you want:
```
select t.col1, t.col2
from (select t.*, count(*) over (partition by col2) as cnt
from table t
) t
order by cnt desc, col2;
```
Another way of writing this uses a `join` and aggregation:
```
select t.*
from table t join
(select col2, count(*) as cnt
from table t
group by col2
) tt
on t.col2 = tt.col2
order by tt.cnt desc, t.col2;
``` | Frequency based sort in sql | [
"",
"sql",
"sorting",
"frequency",
""
] |
I am trying to convert this query from Microsoft Access to **SQL Server 2012**.
```
SELECT
IIF(IsNull(Letter), '', CalculationType) AS [BF NI Calculation Basis]
FROM NICs
```
I converted it to the following, but I keep getting the error:
> Msg 245, Level 16, State 1, Line 1 Conversion failed when converting
> the varchar value 'A' to data type int.
This is what I have tried:
```
SELECT
CHOOSE(ISNULL(Letter,0), '', CalculationType) AS [BF NI Calculation Basis]
FROM NICs
```
and
```
SELECT
IIF(Letter = 0, '', CalculationType) AS [BF NI Calculation Basis]
FROM NICs
``` | If you want to check whether Letter is NULL, then write:
```
SELECT
IIF(Letter IS NULL, '', CalculationType) AS [BF NI Calculation Basis]
FROM NICs
```
If you want to check Letter for the Value 0
```
SELECT
IIF(Letter ='0', '', CalculationType) AS [BF NI Calculation Basis]
FROM NICs
``` | According to this [MSDN link for IIF](http://msdn.microsoft.com/en-IN/library/hh213574.aspx)
IIF returns the data type with the highest precedence from the types in **true\_value** and **false\_value**. It means in your query true\_value is '' (a string) and false\_value is CalculationType (which is an int), and that's the reason you get this error.
So your query should look like this.
```
SELECT
IIF(Letter = '0', 0, CalculationType) AS [BF NI Calculation Basis]
FROM NICs
``` | Converting access SQL to SQL server | [
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
In SQL Server 2005 or later you can use the following syntax to return a variable number of rows:
```
Declare @Limit Int
Set @Limit=5
Select Top(@Limit) * From TableName
```
Is there some magic value that you can use to let it return all rows? (Thinking of a parametrized stored procedure here.) Setting the limit to `0` will just return no rows, and using a negative value will generate a run-time error.
I am pretty sure this is not possible, but I haven't found a definite answer. Having this work without an `If/Else` block and without duplicating the rather complicated query would be nice. | To avoid rewriting the query, which I'm assuming is more complicated than the SQL in the question, you could return the row count to replace the variable in a certain case; for example, if the variable = 0, then return all:
```
declare @Limit Int
set @Limit = 0
if @Limit = 0
-- get the row count from the table you are querying if @Limit = 0
select @Limit = count(1) from TABLE_NAME
-- then use the value in your query as before
select top(@Limit) * from TABLE_NAME
``` | The simple answer would be to convert to `bigint` and use its maximum possible value (9223372036854775807). Since that many rows cannot possibly be part of any table, you can be sure you won't miss anything. | Can you specify 'unlimited' when using "Select TOP (@variable) * From"? | [
"",
"sql",
"sql-server",
""
] |
Can anyone help me with this issue? I have tried many times but still haven't found the solution.
Here is the original view that I have in my database, but now I made changes in the database and the view needs to be changed also.
Here is the view as it was:
```
SELECT
[tableN].*,
[tabB].[att1] AS tabB_email, [tabB].[name] AS tabB_name,
[tabC].[name] AS tabC_name
FROM
[tabC]
INNER JOIN
([tableN]
INNER JOIN [tabB] ON [tableN].[sender_id] = [tabB].[ID])
ON [tabC].[ID] = [tableN].[recipient_id]
```
Here is the difficult point for me: now I don't have these 2 tables, `tabB` and `tabC`.
They are now in one table, `tabX`, with an identifier field `roleId`. I managed to get all the columns except the last one, `[tabC].[name] AS tabC_name`.
Any ideas? | ```
SELECT [TableN].*, Bd.[email] AS bd_email, Bd.[showname] AS bd_name,
Pc.[showname] AS pc_name
FROM
[TABLE_X] AS Pc
INNER JOIN ([TableN]
INNER JOIN [TABLE_X] AS Bd
ON [TableN].[sender_id] = Bd.[ID] AND Bd.roleID = 1)
ON Pc.[ID] = [TableN].[recipient_id] AND Pc.roleID = 2
```
I finally found the code that works as needed. | ```
SELECT [tableN].*, [Tabx] .[att1] AS tabB_email, [Tabx] .[name] AS tabB_name
FROM [Tabx] A
INNER JOIN [TABLEN] B
ON A.ROLEID=B.RECIPIENT_ID
``` | SQL View statement with 2 INNER JOINS | [
"",
"sql",
"sql-server",
"view",
""
] |
I have a database that handles Employment Date Hired and Date Left (the company)
```
EID |Name |Status1 |Hired |LefttheCOmpany
11 |Kid |Hired |2014-01-01 |2014-01-01
12 |Suzi |Hired |2014-03-01 |2014-03-01
13 |Mac |Fired |2014-04-01 |2014-07-17
```
Both Kid and Suzi are still part of the company, which is why their LefttheCOmpany dates are the same as their Hired dates. What I simply do in the GUI is use an IF and output PRESENT in the "LefttheCOmpany" column of the table:
```
EID |Name |Status1 |Hired |LefttheCOmpany
11 |Kid |Hired |2014-01-01 |PRESENT
12 |Suzi |Hired |2014-03-01 |PRESENT
13 |Mac |Fired |2014-04-01 |2014-07-17
```
However, is there an *innate value* for the DATE datatype in SQL that means "PRESENT"? (Like when *NULL* means that SQL doesn't know what the value means/is.)
So that when the records are queried, SQL itself will say PRESENT (or something else) on the record?
What I did works, but maybe there's a better way of handling this. | There is a `getdate()` function which is used to get the present date, but other than that I don't think there is anything in SQL Server which will represent PRESENT.
My suggestion is to keep those rows set to null, and when you are querying the table you can substitute `getdate()` for that column. Something like this:
```
select getdate() as LefttheCOmpany from table where LefttheCOmpany is null
```
Making Navneet's query more useful as per my suggestion
```
select EID,Name,Status1,Hired,
case when Hired=LefttheCOmpany then 'Present'
else LefttheCOmpany end as LefttheCOmpany
from table
``` | Please use the `ISNULL()` function; if the value is null, it will replace it with 'Present'.
```
SELECT ISNULL(LeftTheCompany,'Present')
``` | SQL Date value that means "PRESENT" (as of today's date) | [
"",
"sql",
"sql-server",
""
] |
I know how to get the column names from a table
```
SELECT Column_name
from Information_schema.columns
where Table_name like 'summary'
```
And I know how to get the data like below
```
SELECT * FROM [summary] where Id='12345';
```
but how do I combine these so that I **at least** get the column names of a table even if there is no data?
Thanks in advance.
**EDIT\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_**
```
table
id name
123 test
12345 test2
```
In the above example, after I have run this combined query, I expect the result to be
```
id 12345
name test2
```
but if I run the query with an id like "hello"
then the result shall be
```
id
name
``` | If you want the column names of a table when there is no data, then you can use this:
```
IF EXISTS(SELECT * FROM [summary] where Id='12345')
BEGIN
SELECT * FROM [summary] where Id='12345'
END
ELSE
BEGIN
SELECT Column_name
FROM Information_schema.columns
WHERE Table_name like 'summary'
END
``` | I think using a simple `CASE` here would be enough:
```
SELECT
case when id is null then '' else id end ,
case when name is null then '' else name end
FROM [summary] where Id='hello';
``` | Get both data and column name from sql table | [
"",
"sql",
"sql-server",
""
] |
I am creating a query that ensures some constraints are met. Here's a semi-working version right now:
```
SELECT CASE
WHEN TaskId IS NULL THEN 0
ELSE 1
END AS TaskExists,
CASE
WHEN IsDownTask = 0 AND TaskStatus = 63 THEN 1
WHEN IsDownTask = 1 THEN 1
ELSE 0
END AS PressReady,
CASE
WHEN IsDownTask = 1 AND MachineId <> 2710 THEN 1
ELSE 0
END AS DownTaskAssignedToDifferentMachine
FROM Task T
WHERE TaskId = 555555
```
This works fine when `TaskId` exists in the `Task` table, but I also need to return values if that Task doesn't exist (hence the `TaskExists` field).
For a query on a non-existent Task, I'd expect to return
* TaskExists 0
* PressReady 0
* DownTaskAssignedToDifferentMachine 0
How can I modify my query to return this even when no `TaskId` exists? | If you want to return those values just wrap each column with a `SUM` and an `ISNULL`:
```
SELECT ISNULL(SUM(CASE
WHEN TaskId IS NULL THEN 0
ELSE 1
END), 0) AS TaskExists,
ISNULL(SUM(CASE
WHEN IsDownTask = 0 AND TaskStatus = 63 THEN 1
WHEN IsDownTask = 1 THEN 1
ELSE 0
END), 0) AS PressReady,
ISNULL(SUM(CASE
WHEN IsDownTask = 1 AND MachineId <> 2710 THEN 1
ELSE 0
END), 0) AS DownTaskAssignedToDifferentMachine
FROM Task T
WHERE TaskId = 555555
``` | You can try something like this:
```
DECLARE @task INT
SET @task = 555555
SELECT CASE
WHEN TaskId IS NULL THEN 0
ELSE 1
END AS TaskExists,
CASE
WHEN IsDownTask = 0 AND TaskStatus = 63 THEN 1
WHEN IsDownTask = 1 THEN 1
ELSE 0
END AS PressReady,
CASE
WHEN IsDownTask = 1 AND MachineId <> 2710 THEN 1
ELSE 0
END AS DownTaskAssignedToDifferentMachine
FROM Task T
WHERE TaskId = @task
UNION ALL
SELECT 0 TaskExists, 0 PressReady, 0 DownTaskAssignedToDifferentMachine
WHERE NOT EXISTS (SELECT * FROM Task WHERE TaskId = @task)
``` | Return result from query even if WHERE clause not met | [
"",
"sql",
"sql-server",
"ibatis.net",
""
] |
I need to paginate posts from my database; I wrote the following query:
```
SELECT posts.ID, posts.date, comments.name, comments.value
FROM posts
INNER JOIN comments
ON comments.ID = posts.ID
INNER JOIN relations
ON relations.ID = posts.ID
WHERE type_rel=1 AND status_post=1
LIMIT 0,10
```
The problem is in the LIMIT clause; I need to limit only the "posts" table.
The comments table has many rows, and if I put the limit at "0,10", the "posts" table is limited to 10 posts, but the comments are also limited to 10.
Does someone have a solution to my problem? I use this in a PHP query.
Sorry for my bad English; thanks in advance. | You can use a subquery to limit the result set and then join:
```
SELECT
posts.ID,
posts.date,
comments.name,
comments.value
FROM
(SELECT * FROM posts WHERE status_post = 1 LIMIT 0,10) posts
LEFT JOIN
comments
ON comments.ID = posts.ID
LEFT JOIN
relations
ON relations.ID = posts.ID AND relations.type_rel = 1
```
From the comments, the query in your index file is wrong, this is the right one:
```
SELECT
wp_posts.ID,
wp_posts.post_date,
wp_postmeta.meta_key,
wp_postmeta.meta_value
FROM (SELECT * FROM wp_posts WHERE post_status="publish" AND post_type="post" LIMIT 0,2) wp_posts
LEFT JOIN wp_postmeta
ON wp_postmeta.post_id = wp_posts.ID
LEFT JOIN wp_term_relationships
ON wp_term_relationships.object_id = wp_posts.ID
AND wp_term_relationships.term_taxonomy_id=2
```
Example result [here](http://sqlfiddle.com/#!2/10b75/18/0), as you can see you have two posts, id 1 and 5.
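The shape of this "limit inside a subquery, then join" pattern can be sanity-checked on a tiny made-up dataset (SQLite through Python's `sqlite3`; an `ORDER BY` is added inside the subquery so the `LIMIT` is deterministic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INT, title TEXT)")
conn.execute("CREATE TABLE comments (post_id INT, name TEXT)")
conn.executemany("INSERT INTO posts VALUES (?, ?)",
                 [(1, 'p1'), (2, 'p2'), (3, 'p3')])
conn.executemany("INSERT INTO comments VALUES (?, ?)",
                 [(1, 'alice'), (1, 'bob'), (2, 'carol'), (3, 'dave')])
rows = conn.execute(
    "SELECT p.id, c.name "
    "FROM (SELECT * FROM posts ORDER BY id LIMIT 2) p "
    "LEFT JOIN comments c ON c.post_id = p.id "
    "ORDER BY p.id, c.name"
).fetchall()
print(rows)  # [(1, 'alice'), (1, 'bob'), (2, 'carol')]: only 2 posts, but all their comments
```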
If you want to keep the posts which have `term_taxonomy_id = 2` use this:
```
SELECT
wp_posts.ID,
wp_posts.post_date,
wp_postmeta.meta_key,
wp_postmeta.meta_value
FROM (
SELECT *
FROM wp_posts
JOIN wp_term_relationships
ON wp_term_relationships.object_id = wp_posts.ID
AND wp_term_relationships.term_taxonomy_id = 2
WHERE post_status="publish" AND post_type="post" LIMIT 0,2) wp_posts
LEFT JOIN wp_postmeta ON wp_postmeta.post_id = wp_posts.ID
```
[Example here](http://sqlfiddle.com/#!2/10b75/21/0); the post ids are 5 and 7, and with limit 0,1 it returns only 5. | You can limit only the rows returned from the posts table:
```
SELECT posts.ID, posts.date, comments.name, comments.value
FROM (select * from posts limit 0,10) posts
INNER JOIN comments
ON comments.ID = posts.ID
INNER JOIN relations
ON relations.ID = posts.ID
WHERE type_rel=1 AND status_post=1
``` | MySQL Select with Inner Join, Limit only first | [
"",
"mysql",
"sql",
"inner-join",
"limit",
""
] |
I have a situation where a table can be created from different places.
So, I have ~10 working applications that can simultaneously try to create the same table.
Question: how can I synchronize them, so I don't get any exceptions or errors?
All instances of the application try to create a new table when the day ends, so at something like 00:00:00 they will all try to create it.
Sorry for the possibly 'stupid question'; I have been googling for a while with no results.
Thank you. | You can use [`sp_getapplock`](http://msdn.microsoft.com/en-us/library/ms189823.aspx) to take arbitrary locks. You could make your app take such a lock before creating the table. The lock defaults to transaction scope, so take it inside a transaction, like this:
```
begin tran
exec sp_getapplock @Resource = 'create_daily_table', @LockMode = 'Exclusive'
if object_id('dbo.YourTable', 'U') is null
    create table ...
commit -- committing releases the lock
``` | As alluded to in the comments, your first step is to perform an existence check. Then, on the off chance that there are two simultaneous creations you can use [TRY...CATCH](http://msdn.microsoft.com/en-us/library/ms175976.aspx).
```
IF Object_ID('test', 'U') IS NULL
BEGIN
BEGIN TRY
CREATE TABLE test ( a int )
END TRY
BEGIN CATCH
SELECT Error_Message()
END CATCH
END
```
---
# UPDATE
You **do not** want to create a table every day. Seriously. This is very poor database design.
Instead you want to add a `datetime` column to your table that indicates when each record was created. | Concurrent table creation | [
"",
"sql",
"sql-server",
""
] |
My school book [Database Systems](https://rads.stackoverflow.com/amzn/click/com/0321523067) defines cardinality as follows:
> The degree of a relation is the number of attributes it contains. By contrast, the number of tuples is called the cardinality of the relation and this changes as tuples are added or deleted. High-cardinality - many tuples, low-cardinality - few tuples.
While the Wikipedia article on [Cardinality (SQL statements)](https://en.wikipedia.org/wiki/Cardinality_(SQL_statements)) defines it as follows:
> Cardinality refers to the uniqueness of data values contained in a particular column (attribute) of a database table. The lower the cardinality, the more duplicated elements in a column. There are 3 types of cardinality: high-cardinality, normal-cardinality, and low-cardinality.
They might both be right, but I can't connect the two definitions as related definitions. A rephrase would be appreciated! | They are speaking of the same thing, and it has to do with tuples (`relational algebra`) or rows (the layman's term).
High-cardinality means the possible values of a particular attribute (or field) are largely unique, and therefore the number of distinct values across the tuples/rows is higher:
**Example**:
```
StudentID Lastname Firstname Gender
101 Smith John M
102 Jones James M
103 Mayo Ann F
104 Jones George M
105 Smith Suse F
```
As far as **StudentID** goes, the `cardinality` is `high` because it is unique; here it has five (5) unique values across five tuples/rows.
On the other hand, **Lastname** has only three (3) unique values. Thus it has `normal cardinality`.
And finally **Gender** has only two possible unique tuples thus `Low Cardinality`.
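Counting the distinct values per column makes those numbers explicit; in plain Python over the sample rows:

```python
rows = [(101, 'Smith', 'John', 'M'), (102, 'Jones', 'James', 'M'),
        (103, 'Mayo', 'Ann', 'F'), (104, 'Jones', 'George', 'M'),
        (105, 'Smith', 'Suse', 'F')]
columns = ['StudentID', 'Lastname', 'Firstname', 'Gender']
# cardinality of a column = number of distinct values it holds
cardinality = {col: len({row[i] for row in rows})
               for i, col in enumerate(columns)}
print(cardinality)  # {'StudentID': 5, 'Lastname': 3, 'Firstname': 5, 'Gender': 2}
```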
You probably confuse `Cardinality` here with the `Degree` of a relation, which has to do with the number of `attributes/fields` in a relation (or table).
On the other hand, when a `Database` textbook speaks of `Cardinality` it normally has to do with an entity in relation to another entity, that is, the number of possible relationship occurrences for an entity participating in a given relationship type. Thus, for example, for a `binary relationship` the `cardinality` could be either `one-to-one`, `one-to-many` or `many-to-many`. | Both definitions are trying to say that cardinality is the "number of rows". The difference is whether the comparison is "in the table" or "in a particular column".
The version in your database textbook focuses on relational algebra and the structure of tables ("relations" in that lingo).
The Wikipedia entry is more practical. It encompasses the textbook definition, assuming the table has a primary key (the cardinality of the primary key is the same as the table). However, it can be applied to, say, a flag column as well. If the flag only takes on two values (`0` versus `1`), then we can say that the cardinality of the column is 2.
This is important for optimizing queries. Cardinality is one component of choosing the best methods for joining, aggregating, and selecting data. In practice, most databases use more information than the cardinality, so-called "statistics" about columns and their values for optimization. | What is the definition of cardinality in SQL | [
"",
"mysql",
"sql",
"statements",
"cardinality",
""
] |
I have a column "Time" with Time as its Datatype.
I need to get 25% of the value in that column.
Please help. I've been banging my head on the wall for a solution. | You need to convert `TIME` to a float value. According to the [data type conversion table](http://i.msdn.microsoft.com/dynimg/IC170617.gif), you can do it through `DATETIME`, so:
```
DECLARE @d time = '03:00:00';
SELECT CONVERT(float, CONVERT(datetime, @d)) * 0.25 -- this will be 25% of the value in float
```
And then do the reverse conversion:
```
DECLARE @f float = 0.03125; -- this is result of previous select.
SELECT CONVERT(time, CONVERT(datetime, @f)) -- Result = 00:45:00
```
So, in your query it will be:
```
SELECT ResultTime = CONVERT(time, CONVERT(datetime, (CONVERT(float, CONVERT(datetime, TimeColumn)) * 0.25)))
FROM Table
```
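As a cross-check of the arithmetic, independent of the T-SQL round trip: 25% of `03:00:00` is indeed `00:45:00` (plain Python, using `timedelta`):

```python
from datetime import timedelta

t = timedelta(hours=3)   # the example value 03:00:00
quarter = t * 0.25       # scale the duration by 25%
print(quarter)           # 0:45:00
```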
See [DEMO](http://rextester.com/VPHHFE23402) | ```
declare @time NUMERIC
set @TIME=(select cast(replace(replace(cast(cast(getdate() as time) as varchar),':',''),'.','') as NUMERIC)*.25)
IF LEN(@TIME)=12 BEGIN
SELECT '0'+SUBSTRING(CAST(@TIME AS VARCHAR(20)),0,2)+':'+SUBSTRING(CAST(@TIME AS VARCHAR(20)),3,2)+':'+SUBSTRING(CAST(@TIME AS VARCHAR(20)),5,2)+'.'+SUBSTRING(CAST(@TIME AS VARCHAR(20)),7,13)
END
ELSE
SELECT SUBSTRING(CAST(@TIME AS VARCHAR(20)),0,2)+':'+SUBSTRING(CAST(@TIME AS VARCHAR(20)),3,2)+':'+SUBSTRING(CAST(@TIME AS VARCHAR(20)),5,2)+'.'+SUBSTRING(CAST(@TIME AS VARCHAR(20)),7,13)
``` | arithmetic operation on time datatype sql server? | [
"",
"sql",
"sql-server",
""
] |
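The float round-trip in the accepted answer can be checked outside SQL Server with Python's `timedelta` (a sketch; the intermediate `0.03125` matches the value shown in the answer):

```python
from datetime import timedelta

t = timedelta(hours=3)               # the answer's example value, 03:00:00

# SQL Server represents the value as a fraction of a day before scaling.
as_float = t / timedelta(days=1)     # 3h / 24h = 0.125
quarter_float = as_float * 0.25      # 0.03125, as in the answer

back = timedelta(days=quarter_float) # converted back: 00:45:00
```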
I have a table with 3 fields:
* fullname (Nvarchar(100))
* account (Nvarchar(50))
* birthdate (DateTime)
What I need is to create a query where I get all the persons that had their 25th birthday in a selected year.
> For example, to know all the persons that had their 25th birthday in the year 2003.
Right now I have this:
```
DECLARE @dynamicdate DATETIME
SET @dynamicdate = CONVERT(datetime, '20131231')
SELECT *
FROM persons p
WHERE
(year(@dynamicdate-p.birthdate)-1900)=25
```
But I know its totally wrong and It's should be very simple, any clue? | Do your best to avoid doing calculations or functions on columns used in a `WHERE` clause. instead, change the expression so the column is by itself one one side of the conditional expression, like below:
```
DECLARE @BirthYear int = 2003 - 25;
SELECT *
FROM dbo.Persons P
WHERE
P.BirthDate >= DateAdd(year, @BirthYear - 2000, '20000101')
AND P.BirthDate < DateAdd(year, @BirthYear - 2000, '20010101')
;
```
Please pay special attention to the inequality operators I used, `>=` and `<`. The best practice for date handling in SQL is to use an *inclusive* start (greater than or equal to) and an *exclusive* end (less than, but not equal to). This makes it possible to change the precision of the underlying column, for example to `datetime2(7)`, without having to change your code. It also makes it possible to join ranges contiguously with an equijoin (`Prior.ToDate = Next.FromDate`) and calculate range intersection correctly (`Range1.FromDate < Range2.ToDate AND Range2.FromDate < Range1.ToDate`).
If you cannot avoid having an expression on one column or another, put the expression on the column that has no indexes (leaving the column that does have an index by itself). Or, barring that (and this takes a bit of experience to know how to implement), put the expression on the side that in the query plan is earlier in the plan or affects the fewest rows.
Adding indexes, filtered indexes, and calculated persisted columns can all help with performance of often-repeated queries against date columns. But a warning: adding indexes can worsen performance, too. | You should simply be able to query the year part of the date in question, as you just want to know if the persons 25th birthday fell into that year.
```
DECLARE @Year AS INT
SET @Year = 2003
SELECT *
FROM Persons P
WHERE ((@Year - DATEPART(yy, birthdate)) = 25)
```
This SQLFiddle Demonstrates <http://sqlfiddle.com/#!6/b040c/2>
Keep in mind this is not optimised, but I wouldn't bother doing optimisation until you have actually measured performance and determined that this particular query is a problem. I would focus on the simplest, easiest to read solution first. | SQL Server - Get all persons that turned 25 years old in a selected year | [
"",
"sql",
"sql-server",
""
] |
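The sargable range predicate from the accepted answer can be exercised with SQLite and ISO date strings (a sketch; names and rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE persons (fullname TEXT, birthdate TEXT)")
conn.executemany("INSERT INTO persons VALUES (?, ?)",
                 [("Ann", "1978-06-15"), ("Bob", "1979-01-01"), ("Cy", "1977-12-31")])

year = 2003
birth_year = year - 25  # people born in 1978 turn 25 during 2003

# Inclusive start, exclusive end, with the column left bare on one side.
rows = conn.execute(
    "SELECT fullname FROM persons WHERE birthdate >= ? AND birthdate < ?",
    (f"{birth_year}-01-01", f"{birth_year + 1}-01-01"),
).fetchall()
```

Only Ann (born 1978) falls inside the half-open range; the boundary rows on either side are correctly excluded.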
I have this query
```
SELECT DISTINCT [CtrlNo]
,[RefNo]
,[DealNoCat]
,[tCustomer].[CustomerName]
,[tBank].BankName
,[tFIManagers].[FIName]
,[DaysOut]
,[FundDate]
,[Comment]
FROM [tContractsInTransit]
INNER JOIN tFIManagers
ON tFIManagers.FIManagerID = tContractsInTransit.FIManagerID
INNER JOIN tBank
ON tBank.BankID = tContractsInTransit.BankID
INNER JOIN tCustomer
ON tCustomer.CustomerID = tContractsInTransit.CustomerID
WHERE PFX = 'x'
AND Paid = 'false'
GROUP BY [CtrlNo]
,[RefNo]
,[DealNoCat]
,[tCustomer].[CustomerName]
,[tBank].BankName
,[tFIManagers].[FIName]
,[DaysOut]
,[FundDate]
,[Comment]
ORDER BY CtrlNo DESC
```
However, this doesn't give me the results that I want even though it's doing exactly what I tell it to do. If there are 10 records total, and three of the records have the same CtrlNo, it will display all three of these records if one of the other columns such as RefNo and DaysOut are not the same. I need to display the one record from those three that has the least amount of DaysOut. For example, if the DaysOut is 2, 6, and 48, the record with 2 should be display. Is there a way to do this? Thanks in advance. | Assuming a recently new sqlserver you can use a window function like row\_number() as in:
```
SELECT ...
FROM (
SELECT [CtrlNo]
,[RefNo]
,[DealNoCat]
,[tCustomer].[CustomerName]
,[tBank].BankName
,[tFIManagers].[FIName]
,[DaysOut]
,[FundDate]
,[Comment]
, row_number() over (partition by CtrlNo order by DaysOut) as rn
FROM [tContractsInTransit]
INNER JOIN tFIManagers
ON tFIManagers.FIManagerID = tContractsInTransit.FIManagerID
INNER JOIN tBank
ON tBank.BankID = tContractsInTransit.BankID
INNER JOIN tCustomer
ON tCustomer.CustomerID = tContractsInTransit.CustomerID
WHERE PFX = 'x'
AND Paid = 'false'
) as T
WHERE rn = 1
ORDER BY CtrlNo DESC
``` | You want to leave off the DISTINCT keyword and use Aggregate functions. In this case, MIN.
Notice that 'DaysOut' is also removed from the GroupBy clause
```
SELECT [CtrlNo]
,[RefNo]
,[DealNoCat]
,[tCustomer].[CustomerName]
,[tBank].BankName
,[tFIManagers].[FIName]
,MIN([DaysOut]) as Min_DaysOut
,[FundDate]
,[Comment]
FROM [tContractsInTransit]
INNER JOIN tFIManagers
ON tFIManagers.FIManagerID = tContractsInTransit.FIManagerID
INNER JOIN tBank
ON tBank.BankID = tContractsInTransit.BankID
INNER JOIN tCustomer
ON tCustomer.CustomerID = tContractsInTransit.CustomerID
WHERE PFX = 'x'
AND Paid = 'false'
GROUP BY [CtrlNo]
,[RefNo]
,[DealNoCat]
,[tCustomer].[CustomerName]
,[tBank].BankName
,[tFIManagers].[FIName]
,[FundDate]
,[Comment]
ORDER BY CtrlNo DESC
``` | Select only 1 'control number' without using distinct | [
"",
"sql",
"sql-server",
""
] |
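For engines without window functions, the same "one row per CtrlNo with the smallest DaysOut" result can come from a correlated MIN subquery; here is a sketch against a simplified, invented version of the table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cit (CtrlNo INTEGER, RefNo TEXT, DaysOut INTEGER)")
conn.executemany("INSERT INTO cit VALUES (?, ?, ?)",
                 [(7, "a", 2), (7, "b", 6), (7, "c", 48), (9, "d", 3)])

# Keep only the row whose DaysOut equals the per-CtrlNo minimum.
rows = conn.execute("""
    SELECT CtrlNo, RefNo, DaysOut
    FROM cit t
    WHERE DaysOut = (SELECT MIN(DaysOut) FROM cit WHERE CtrlNo = t.CtrlNo)
    ORDER BY CtrlNo DESC
""").fetchall()
```

Note that unlike `ROW_NUMBER() ... = 1`, this form returns every row that ties for the minimum, which may or may not be what you want.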
I try to join 3 tables like this:
table 1 : country
```
Id | Country
-------------------
1 | UK
2 | USA
3 | France
```
table 2 : Dates
```
Id | Date
-----------------
20000101 | 2000-01-01
20000102 | 2000-01-02
...
20140901 | 2014-09-01
```
table 3 : Customer
```
Id | Customer | Date_Creation_Id | Country_Id
---------------------------------------------
1 | AAA | 20000102 | 1
2 | B | 20000102 | 2
2 | CC | 20000103 | 2
```
I want to find the number of new customers for every date and every country.
```
Date | Country | number of creation
-------------------------------------------------
20000101 | UK | 0
20000101 | USA | 0
20000101 | France | 0
20000102 | UK | 1
20000102 | USA | 2
20000102 | France | 0
20000103 | UK | 0
20000103 | USA | 1
20000103 | France | 0
```
I try with this query
```
select count(*) as count,Customer.Date_Creation_Id, Customer.Country_Id
from customer
Right join Date on Dates.Id = Customer.Date_Creation_Id
Right join Country on Country.Id = Customer.Country_Id
group by Customer.Date_Creation_Id, Customer.Country_Id
```
But I don't have all dates with it | ```
SELECT D.Id,C.Country,COUNT(CU.Id) [number of creation]
FROM Country C CROSS JOIN [Date] D
LEFT JOIN customer CU ON D.Id = CU.Date_Creation_Id AND C.Id = CU.Country_Id
GROUP BY D.Id,C.Country
ORDER BY D.Id,C.Id
``` | You need to generate the list of all countries and dates first, using `cross join`, and then `left join` your data:
```
select d.id as date_id, c.id as country_id, count(cu.id) as cnt
from dates d cross join
country c left join
customers cu
on cu.date_creation_id = d.id and cu.country_id = c.id
group by d.id, c.id
order by d.id, c.id;
``` | Join 3 tables with query sql | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
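The CROSS JOIN + LEFT JOIN pattern from both answers can be verified with SQLite on a reduced version of the question's data (a sketch; `COUNT(cu.country_id)` counts only matched customer rows, so unmatched date/country combinations yield 0):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE country (id INTEGER, name TEXT);
    CREATE TABLE dates (id INTEGER);
    CREATE TABLE customer (date_creation_id INTEGER, country_id INTEGER);
    INSERT INTO country VALUES (1, 'UK'), (2, 'USA');
    INSERT INTO dates VALUES (20000101), (20000102);
    INSERT INTO customer VALUES (20000102, 1), (20000102, 2), (20000102, 2);
""")

rows = conn.execute("""
    SELECT d.id, c.name, COUNT(cu.country_id)
    FROM dates d CROSS JOIN country c
    LEFT JOIN customer cu
      ON cu.date_creation_id = d.id AND cu.country_id = c.id
    GROUP BY d.id, c.id
    ORDER BY d.id, c.id
""").fetchall()
```

Every date/country pair appears exactly once, including the pairs with zero new customers.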
I have two tables:
TABLE:cities
```
city_id, lat, lon, name, country_id
```
TABLE:countries
```
country_id, name
```
I want a SQL query that returns:
```
cities.lat, cities.lon, cities.name, countries.name(for the corresponding cities.country_id)
```
I swear this must be easy. But I can't get to make it work. | You can use join for this. Also always use alias of table name to understand easy and standard rule too.
```
select
ci.lat, ci.lon, ci.name, cn.name
from
    cities ci
inner join
countries cn on ci.country_id = cn.country_id
``` | If I got your point, this is the query.
```
SELECT cities.lat, cities.lon, cities.name, countries.name
FROM cities INNER JOIN countries
ON cities.country_id = countries.country_id
```
By the way, though it is evident from the name that `cities.country_id` is referencing `countries.country_id`, you should have stated it explicitly.
"",
"sql",
""
] |
I have Four tables,
DiscountCode which describes a discount.
Bundles which is basically sold as a product so it contains a productcode
Products which are sold as products so it also contains a productcode
ProductDiscount code meant to be used to describe the fact that a Bundle/Product can have multiple discount codes associated with it.
```
**Discount Code**
Id
Name
Code ex. SUMMER10 ie 10% off summer products
...
**Bundles**
Id
Name
ProductCode *Unique* Ex..ABC123
...
**Products**
Id
Name
ProductCode *Unique* Ex.. XYZ1234
...
**ProductDiscountCode**
DiscountId
ProductCode FK to both Products.ProductCode AND Bundles.ProductCode
Records in ProductDiscountCode:
1 ABC123
1 XYZ1234
1 URS576 <-- prevent this if Prod or Bundle does NOT contain URS576
```
My question is in regards to the ProductCode in the Ternary table. NOW KEEP IN MIND PLEASE SQL IS NOT MY STRONG SUIT! Can/Should the ProductCode column in the ternary table be foreign keyed to TWO Separate tables in an attempt to restrict its content to a product code that is EITHER contained in the Bundle table OR the Product table, assuming the productcode in bundles and products is unique between the two tables (to be enforced by business rules). Or does the ProductDiscountCode table need the following
```
**ProductDiscountCode**
DiscountId
ProductCode FK to ProductCode in Product
BundleCode FK to ProductCode in Bundle
``` | If you're going to use a Foreign Key then every value must exist as a key on the other table so you can't constrain your column Product Keys if it's got Bundle Keys in it and vice versa.
If you need to ensure referential integrity then you'll need to have two Nullable columns.
You should probably also have a Check Constraint to make sure that one and only one of the codes is null.
```
CREATE TABLE ProductDiscountCode
(
DiscountId int,
ProductCode varchar(12) null,
BundleCode varchar(12) null,
CONSTRAINT ProductDiscountCode_PK PRIMARY KEY(DiscountId),
CONSTRAINT FK_ProductDiscountCode_DiscountId FOREIGN KEY (DiscountId) REFERENCES DiscountCode (Id),
CONSTRAINT FK_ProductDiscountCode_BundleCode FOREIGN KEY (BundleCode) REFERENCES Bundle (ProductCode),
CONSTRAINT FK_ProductDiscountCode_ProductCode FOREIGN KEY (ProductCode) REFERENCES Product (ProductCode),
CONSTRAINT ProductCodeExists CHECK (
(ProductCode IS NULL AND BundleCode IS NOT NULL)
OR
(ProductCode IS NOT NULL AND BundleCode IS NULL)
)
)
```
If for some reason you really need to be able to show the Product Codes as a single column then you could do something along these lines
```
CREATE TABLE ProductDiscountCode
(
DiscountId int,
SingleProductCode varchar(12) null,
BundleCode varchar(12) null,
ProductCode as ISNULL (SingleProductCode ,BundleCode ) ,
CONSTRAINT ProductDiscountCode_PK PRIMARY KEY(DiscountId),
CONSTRAINT FK_ProductDiscountCode_DiscountId FOREIGN KEY (DiscountId) REFERENCES DiscountCode (Id),
CONSTRAINT FK_ProductDiscountCode_BundleCode FOREIGN KEY (BundleCode) REFERENCES Bundle (ProductCode),
CONSTRAINT FK_ProductDiscountCode_SingleProductCode FOREIGN KEY (SingleProductCode) REFERENCES Product (ProductCode),
CONSTRAINT SingleProductCodeExists CHECK ((SingleProductCode IS NULL AND BundleCode IS NOT NULL) OR (SingleProductCode IS NOT NULL AND BundleCode IS NULL))
)
```
But you do have to ask yourself first whether using foreign key constraints is actually necessary in the first place.
Having two columns for product code could speed up your Select queries slightly but by having to decide which column you are storing the product code in the updates, deletes and inserts will be made more complicated. | OK, it is never a good idea to try to constrain to two differnt FKs for the same field, that is a sign of incorrect design.
Why is bundle a separate table if it is not using the product codes from the product table? Why not add a column to the product table to identify if the line item is a bundle or an individual product, and store both there?
"",
"sql",
"sql-server",
""
] |
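The "exactly one of the two codes is non-NULL" rule from the accepted answer can be tried out in SQLite, whose CHECK constraints behave the same way (a sketch; the either/or test is written as an inequality of the two `IS NULL` results):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE ProductDiscountCode (
        DiscountId INTEGER,
        ProductCode TEXT,
        BundleCode TEXT,
        -- exactly one of the two codes must be NULL
        CHECK ((ProductCode IS NULL) <> (BundleCode IS NULL))
    )
""")

conn.execute("INSERT INTO ProductDiscountCode VALUES (1, 'XYZ1234', NULL)")  # allowed
try:
    # Both codes set: the CHECK constraint rejects this row.
    conn.execute("INSERT INTO ProductDiscountCode VALUES (1, 'XYZ1234', 'ABC123')")
    violated = False
except sqlite3.IntegrityError:
    violated = True
```

A row with both codes NULL is rejected by the same constraint, mirroring the answer's two-branch CHECK.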
Given this table:
```
+----+-----------+--------+
| id | condition | values |
+----+-----------+--------+
| 1 | a | 1 |
+----+-----------+--------+
| 2 | a | 2 |
+----+-----------+--------+
| 3 | a | 3 |
+----+-----------+--------+
| 4 | a | 4 |
+----+-----------+--------+
| 5 | b | 5 |
+----+-----------+--------+
| 6 | b | 6 |
+----+-----------+--------+
```
How can I get a new table that begins on `id=3` (including) and goes until `condition = b` (excluding):
```
+----+-----------+--------+
| id | condition | values |
+----+-----------+--------+
| 3 | a | 3 |
+----+-----------+--------+
| 4 | a | 4 |
+----+-----------+--------+
```
**added fiddle:** <http://sqlfiddle.com/#!2/9882f7>
Basically I want a table between a matching first condition (over a specific column - id) and a second one (over a different column - condition) | Something like this:
```
select t.*
from table t
where id >= 3 and id < (select min(t2.id) from table t2 where t2.condition = 'b');
```
EDIT:
This query works fine on the SQL Fiddle:
```
select t.*
from t
where id >= 3 and id < (select min(t2.id) from t t2 where t2.condition = 'b');
``` | You need to stop thinking of SQL data as having any order. Think of SQL data in sets; you have to search by *values*, not by positions.
```
SELECT t1.*
FROM t AS t1
JOIN (
SELECT MIN(id) AS id FROM t
WHERE id >= 3 AND `condition` = 'b'
) AS t2
WHERE t1.id >= 3 AND t1.id < t2.id
ORDER BY t1.id
``` | MySQL select from specific ID until match condition | [
"",
"mysql",
"sql",
""
] |
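The accepted answer's shape runs as-is on SQLite; here is a sketch over the question's data (the column is renamed `cond` purely for readability):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, cond TEXT, val INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)",
                 [(1, "a", 1), (2, "a", 2), (3, "a", 3),
                  (4, "a", 4), (5, "b", 5), (6, "b", 6)])

# Rows from id = 3 (inclusive) up to the first 'b' row (exclusive).
rows = conn.execute("""
    SELECT id, cond, val FROM t
    WHERE id >= 3
      AND id < (SELECT MIN(id) FROM t WHERE cond = 'b')
""").fetchall()
```

The subquery resolves the "until condition = b" boundary to a concrete id, which is the value-based (rather than position-based) search both answers recommend.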
We've been using MS Access, with the following syntax for MTD Data that works for us:
```
Between DateSerial(Year(Date()),Month(Date()),1)
And DateSerial(Year(Date()),Month(Date())+1,0)
```
We need to transition the above logic to SQL/SSRS for automatic emailed reports, but I cannot get this DateSerial logic to work with SQL.
In the Filter field of the SQL query, I can successfully use `BETWEEN '8/1/2014' AND '8/31/2014'` for MTD data, but would like to have a `DateSerial` logic applied so that reports don't need to be created for every month, quarter, year, etc.
When trying to use the `DateSerial` function, we get the error "Invalid or missing Expression". I've seen a few topics on this that Parameters are required, but really believe that this is a simple syntax issue for the filter field, since actual dates work with the BETWEEN command. | This has been resolved. The ODBC driver does not apparently play well with SSRS. The DateSerial command would not work within the query itself. The workaround was to add the filter to the Dataset. This syntax is what works, but again only in the Dataset filter: [expression] Between [first value box] =DateSerial(Year(Now()),1,1) [second value box] =DateSerial(Year(Now()),12,31)
This gives us the YTD reporting data that we require. | You can use the function CONVERT:
<http://msdn.microsoft.com/en-us/library/ms187928.aspx>
Or the function DATEFROMPARTS if you are using SQL Server 2012:
<http://msdn.microsoft.com/en-us/library/hh213228.aspx>
Or DATEADD:
```
select DATEADD(MONTH, DATEDIFF(MONTH, 0, GETDATE()), 0); -- first day of current month
select DATEADD(MONTH, DATEDIFF(MONTH, -1, GETDATE()), -1) -- last day of current month
```
This last one I took from: <https://stackoverflow.com/a/11746042/1274092>
See mine at:
<http://sqlfiddle.com/#!3/d41d8/38333> | SQL automatic date range using DateSerial function | [
"",
"sql",
"sql-server",
"date",
"reporting-services",
""
] |
I have two tables, a word frequency table and a word weight table.
I need to write a t-sql script or scripts to calculate a weighted score of the word frequency table based on the words and weights given in the word weight table.
For example:
Word frequency table
```
Word Frequency
,,,,,,,,,,,,,,,,
Fat 3
Ugly 2
dumb 2
stupid 3
```
Word weight table
```
Word Weight
,,,,,,,,,,,,,,
Ugly 5
stupid 7
```
The weighted score from these two tables would work out to be (5x2)+(7x3)=31
I then need to print the results, if over 30 "Alert! Score over 30" or if under 30 then "Normal, score under 30".
I'm fine with creating the print script once the score is calculated, but I'm not too sure how to get there.
The scripts need to be able to allow for the tables to be altered, so I'm guessing it just needs to look for common values between them and then join the columns.
I could be way off but I'm figuring a join between the two tables based on where w.word = f.word ??
I've been looking for a solution all afternoon and really haven't gotten anywhere. Any help would be appreciated! | It should be
```
select sum (w.Weight * f.Frequency) from WeightTable w
join FreqTable f on f.Word = w.Word
``` | Just to prove @wraith answer, here is the code:
```
declare @WordFreq table (Word varchar(max), Frequency int );
declare @WordWeight table (Word varchar(max), Weight int );
insert into @WordFreq( Word, Frequency ) values
( 'Fat', 3)
, ( 'Ugly', 2)
, ( 'dumb', 2)
, ( 'stupid', 3)
insert into @WordWeight( Word, Weight ) values
( 'Ugly', 5)
, ( 'stupid', 7)
select sum (w.Weight * f.Frequency)
from @WordFreq f
join @WordWeight w on f.Word = w.Word
-----------------------
OUTPUT: 31
``` | Multiplying common values between 2 tables | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
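The accepted one-liner can be confirmed against the question's data with SQLite; the result is the expected 31:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE freq (word TEXT, frequency INTEGER);
    CREATE TABLE weight (word TEXT, weight INTEGER);
    INSERT INTO freq VALUES ('Fat', 3), ('Ugly', 2), ('dumb', 2), ('stupid', 3);
    INSERT INTO weight VALUES ('Ugly', 5), ('stupid', 7);
""")

# (5 * 2) + (7 * 3) = 31; words without a weight drop out of the inner join.
score = conn.execute("""
    SELECT SUM(w.weight * f.frequency)
    FROM weight w JOIN freq f ON f.word = w.word
""").fetchone()[0]
```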
My question: once I receive the parameter value, I want to modify the contents.
E.g. a string parameter value is: `FullName,Address,Category`
I want to change `FullName` to `l.FullName` and `Category` to `c.Category` and keep the rest same.
```
ALTER PROCEDURE [dbo].[TableA] @ColNames VARCHAR(1000)
AS
BEGIN
//I want to modify the contents of @ColNames here
``` | How about this?
```
select @ColNames = replace(@ColNames, 'FullName', 'l.FullName');
select @ColNames = replace(@ColNames, 'Category', 'c.Category');
``` | Yes, you can do that - @ColNames is a variable - you can manipulate the contents of it.
```
ALTER PROCEDURE [dbo].[TableA] @ColNames VARCHAR(1000)
AS
BEGIN
-- I want to modify the contents of @ColNames here
SET @ColNames = REPLACE(@ColNames, "FullName", "l.FullName")
SET @ColNames = REPLACE(@ColNames, "Category", "c.Category")
``` | TSQL replace my Stored procedure string parameter | [
"",
"sql",
"sql-server",
"t-sql",
"stored-procedures",
""
] |
I'm mainly a presentation/logic tier developer and don't mess around with SQL all that much but I have a problem and am wondering if it's impossible within SQL as it's not a full programming language.
* I have a field ContactID which has an CompanyID attached to it
* In another table, the CompanyID is attached to CompanyName
* I am trying to create a SELECT statement that returns ONE CONTACT ID and in a seperate column, an aggregate of all the Companies attached to this contact (by name).
E.G
```
ContactID - CompanyID - CompanyName
***********************************
1 001 Lol
1 002 Haha
1 003 Funny
2 002 Haha
2 004 Lmao
```
I want to return
```
ContactID - Companies
*********************
1 Lol, Haha, Funny
2 Haha, Lmao
```
I have found the logic to do so with ONE ContactID at a time:
```
SELECT x.ContactID, substring(
(
SELECT ', '+y.CompanyName AS [text()]
FROM TblContactCompany x INNER JOIN TblCompany y ON x.CompanyID = y.CompanyID WHERE x.ContactID = 13963
For XML PATH (''), root('MyString'), type
).value('/MyString[1]','varchar(max)')
, 3, 1000)
[OrgNames] from TblContact x WHERE x.ContactID = 13963
```
As you can see here, I am hardcoding in the ContactID 13963, which is neccessary to only return the companies this individual is linked to.
The issue is when I want to return this aggregate information PER ROW on a much bigger scale SELECT (on a whole table full of ContactID's).
I want to have `x.ContactID = (this.ContactID)` but I can't figure out how!
Failing this, could I run one statement to return a list of ContactID's, then in the same StoredProc run another statement that LOOPS through this list of ContactID's (essentially performing the second statement x times where x = no. of ContactID's)?
Any help *greatly* appreciated. | You want a correlated subquery:
```
SELECT ct.ContactID,
stuff((SELECT ', ' + co.CompanyName AS [text()]
FROM TblContactCompany cc INNER JOIN
TblCompany co
ON cc.CompanyID = co.CompanyID
WHERE cc.ContactID = ct.ContactId
For XML PATH (''), root('MyString'), type
).value('/MyString[1]', 'varchar(max)'),
1, 2, '')
[OrgNames]
from TblContact ct;
```
Note the `where` clause on the inner subquery.
I also made two other changes:
1. I changed the table aliases to better represent the table names. This makes queries easier to understand. (Plus, the aliases had to be changed because you were using `x` in the outer query and the inner query.)
2. I replaced the `substring()` with `stuff()`, which does exactly what you want. | You could use a table variable to store the required `x.ContactID` and in your main query in the `WHERE` clause use `IN` clause like below
```
WHERE
...
x.ContactID IN (SELECT ContactID FROM @YourTableVariable)
``` | SQL Server dynamically change WHERE clause in a SELECT based on returned data | [
"",
"sql",
"sql-server",
"dynamic",
""
] |
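Outside SQL Server, the same correlated subquery idea works with `GROUP_CONCAT` in place of `FOR XML PATH`; a sketch with invented data (note that `GROUP_CONCAT` does not guarantee element order):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE contact (id INTEGER);
    CREATE TABLE company (id INTEGER, name TEXT);
    CREATE TABLE contact_company (contact_id INTEGER, company_id INTEGER);
    INSERT INTO contact VALUES (1), (2);
    INSERT INTO company VALUES (10, 'Acme'), (20, 'Globex');
    INSERT INTO contact_company VALUES (1, 10), (1, 20), (2, 20);
""")

# The inner query is correlated on ct.id, one aggregated string per contact row.
rows = conn.execute("""
    SELECT ct.id,
           (SELECT GROUP_CONCAT(co.name, ', ')
            FROM contact_company cc JOIN company co ON co.id = cc.company_id
            WHERE cc.contact_id = ct.id) AS org_names
    FROM contact ct
    ORDER BY ct.id
""").fetchall()
```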
I'm trying to import a flat file into an oledb target sql server database.
here's the field that's giving me trouble:

here are the properties of that flat file connection, specifically the field:

here's the error message:
> [Source - 18942979103\_txt [424]] Error: Data conversion failed. The
> data conversion for column "recipient-name" returned status value 4
> and status text "Text was truncated or one or more characters had no
> match in the target code page.".
What am I doing wrong? | After failing by increasing the length or even changing to data type text, I solved this by creating an XLSX file and importing. It accurately detected the data type instead of setting all columns as `varchar(50)`. Turns out `nvarchar(255)` for that column would have done it too. | Here is what fixed the problem for me. I did not have to convert to Excel. Just modified the DataType when choosing the data source to "text stream" (Figure 1). You can also check the "Edit Mappings" dialog to verify the change to the size (Figure 2).
**Figure 1**
[](https://i.stack.imgur.com/R6c6t.png)
**Figure 2**
[](https://i.stack.imgur.com/wZ9NE.png) | Text was truncated or one or more characters had no match in the target code page including the primary key in an unpivot | [
"",
"sql",
"sql-server",
"t-sql",
"ssis",
""
] |
I am working on a project with lots of large `SQL` requests, I want to know:
which provides better performance in terms of speed:
* Join Queries
* Sub-Queries
For example
```
SELECT artistName FROM artist
JOIN group
On group.location = artist.location
AND group.available = 'true'
```
---
```
SELECT artistName FROM artist,
(SELECT group.available = true) AS groupAvailable
WHERE groupAvailable.location = artist.location
```
What can provide me with better performance for big queries. | As you have not mentioned that you need to know the answer for any specific query, I am assuming that you asked the question in general whether `join` queries are faster than `subqueries` or not. The answer is in most cases `join` queries are faster than `subqueries`. I too didn't know the answer, so after I saw your question I researched and I am quoting from what I found -
> In JOINs RDBMS can create an execution plan that is better for your
> query and can predict what data should be loaded to be processed and
> save time, unlike the sub-query where it will run all the queries and
> load all their data to do the processing.
Actually you are not the first one who asked this question. If I got you right, another similar question is [here](https://stackoverflow.com/questions/2577174/join-vs-sub-query) too. | I'm not quite sure your second query is valid SQL, but the answer to your question fortunately isn't related to this.
Performance is dependant on many, many factors, so the best answer you can get for this kind of question is "It depends".
First, make sure you need to optimize your queries. If the current runtime is acceptable, do not touch anything (don't fix what's not broken).
If you do need to optimize your queries, try it out for yourself. Depending on the DBMS, the server load, the data distribution and potentially other variables, your performance might change a lot, so the only way you can know is to test for your actual situation and determine if the result is what you want. | SQL sub-queries and Join Queries | [
"",
"sql",
"join",
"optimization",
"subquery",
"query-optimization",
""
] |
How to add compare select statement in `if()` condition in Oracle pl-sql
```
IF( (v_c1.int_rate = '0')
AND (local_type_advances in (SELECT DISTINCT(REF_CODE)
FROM XYZ
WHERE REF_REC_TYPE = '39' AND DEL_FLG = 'N'
AND REF_CODE BETWEEN '1' AND '50' )))
```
where select query generate following output
```
'31','32','33'
```
Now, as shown below, if I hardcode the values it works fine, but with the query it doesn't work:
```
IF( (v_c1.int_rate='0') AND (local_type_advances in ('31','32','33')))
What I did was store the result of that SELECT statement in a local VARCHAR2 variable, and then use that variable in the IF condition's IN clause instead of the SELECT statement.
```
select '''' || listagg(REF_CODE, ''',''' ) within group (order by REF_CODE) || '''' as type_advances into local_type_advances_list from RCT@ADF2FIN where REF_REC_TYPE = '39' AND DEL_FLG = 'N' AND REF_CODE BETWEEN '1' AND '50';
IF( (v_c1.int_rate='0') AND (local_type_advances in (local_type_advances_list)) )
``` | Since you can't have a select statement in an IN clause, break it into two pieces. Select the range you need to search into a variable, then search that variable to see if the value is in there. Here is a working example using LISTAGG() to return a comma-separated list into a variable and REGEXP\_LIKE to test the list in the variable to see if the value is in the list. X\_TBL has one column, X\_COL that contains three rows, '31', '32' and '33' respectively.
```
declare
local_type_advance_string varchar2(50); -- Holds the list returned from
-- the query.
local_type_advance_nbr varchar2(2) := '32'; -- Value to test the list for.
begin
-- Use listagg() to select a comma-separated list into a string variable.
-- X_TBL is the table, X_COL is the column.
SELECT LISTAGG(X_COL, ',') WITHIN GROUP( ORDER BY X_COL)
into local_type_advance_string
FROM X_TBL;
--dbms_output.put_line(local_type_advance_string);
IF regexp_like(local_type_advance_string, '(^|\W)'||
local_type_advance_nbr || '(\W|$)') then
dbms_output.put_line(local_type_advance_nbr || ' Is in there!');
else
dbms_output.put_line(local_type_advance_nbr || ' Is not found');
end if;
end;
```
The regular expression allows for the position of the search number no matter if it's at the beginning, middle or end of the list.
Run via Toad:
```
32 Is in there!
``` | How to add compare select statement in if() condition oracle pl-sql | [
"",
"sql",
"oracle",
"plsql",
""
] |
I have a problem and I can't figure out the solution. With a previously asked question ([SELECT rows with time 30 minutes or less?](https://stackoverflow.com/questions/10459608/select-rows-with-time-30-minutes-or-less)) I tried to make my COUNT(\*) function work but I can't make it work. Anybody an idea?
```
SELECT COUNT(*) FROM g_ad_view WHERE ad_view_time >= DATEADD(mi, -30, GETDATE())
```
And this is the error I get back:
> SQLSTATE[42000]: Syntax error or access violation: 1305 FUNCTION 18RVS15.DATEADD does not exist | The MySQL syntax is:
```
WHERE ad_view_time >= now() - interval 30 minute
``` | DateAdd is missspelled: It must be DATE\_ADD (with underscore). See the [documentation](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html) | How to use DATEADD in MySQL? | [
"",
"mysql",
"sql",
""
] |
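The accepted `now() - interval 30 minute` predicate is plain datetime arithmetic; a Python sketch of the same cutoff logic (the timestamps are invented):

```python
from datetime import datetime, timedelta

now = datetime(2014, 9, 1, 12, 0, 0)     # stand-in for NOW()
cutoff = now - timedelta(minutes=30)     # now() - interval 30 minute

# Views 5 and 29 minutes old pass the filter; 31 and 90 minutes do not.
views = [now - timedelta(minutes=m) for m in (5, 29, 31, 90)]
recent = [v for v in views if v >= cutoff]
```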
I want to clean some contact numbers in a sql server database
table name is ronnie\_list and column name is "Name Phone#"
numbers can be in format
example formats are
```
T: (985)-124-5601
(985)124-5601
985)-124-5601
985.124.5601
9851245601
985124-5601 EX 1432
985-(124)-5601
```
I want them in this format 985-124-5601
please let me know how to do that | ## Create a Function
```
CREATE FUNCTION dbo.udf_GetNumeric
(@strAlphaNumeric VARCHAR(1000))
RETURNS VARCHAR(1000)
AS
BEGIN
DECLARE @intAlpha INT
SET @intAlpha = PATINDEX('%[^0-9]%', @strAlphaNumeric)
BEGIN
WHILE @intAlpha > 0
BEGIN
SET @strAlphaNumeric = STUFF(@strAlphaNumeric, @intAlpha, 1, '' )
SET @intAlpha = PATINDEX('%[^0-9]%', @strAlphaNumeric )
END
END
RETURN ISNULL(@strAlphaNumeric,0)
END
GO
```
## Test Data
```
DECLARE @temp TABLE(string NVARCHAR(1000))
INSERT INTO @temp (string)
VALUES
('T: (985)-124-5601'),
('(985)124-5601'),
('985)-124-5601'),
('985.124.5601'),
('9851245601'),
('985124-5601 EX 1432'),
('985-(124)-5601')
```
## Query
```
SELECT LEFT(OnlyNumbers,3) + '-' + SUBSTRING(OnlyNumbers,4,3) + '-' + RIGHT(OnlyNumbers, 4)
FROM (
SELECT LEFT(dbo.udf_GetNumeric(string), 10) OnlyNumbers
FROM @temp
)z
```
## Result
```
985-124-5601
985-124-5601
985-124-5601
985-124-5601
985-124-5601
985-124-5601
985-124-5601
``` | 1.
Write `REPLACE()` around the column untill you remove all your unwanted symbols.
like this. `REPLACE(REPLACE([COLUMN],')',''),'(','') --Only Brackets are removed here.`
```
(985)124-5601 becomes 985124-5601
985124-5601 EX 1432 becomes 9851245601 1432
```
2.
Once symbols are removed, Remove space as well, you can use `REPLACE` itself
```
985124-5601 becomes 9851245601
98512456011432 becomes 98512456011432
```
3.
Take `LEFT([COLUMN],10)` and Split it to 3 using `substring`, and put `minus symbol` in between.
```
9851245601 becomes 985-124-5601
98512456011432 becomes 985-124-5601
```
You can do all these in 1 sql script, just fyi.
Hope it helps.. | need help in cleaning contact number format | [
"",
"sql",
"sql-server",
""
] |
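The accepted answer strips non-digits one character at a time; if the cleanup can happen outside the database, a regex does the same job in one step (a sketch using the question's sample values):

```python
import re

def clean_phone(raw):
    # Keep digits only, take the first 10, and hyphenate as 3-3-4.
    digits = re.sub(r"\D", "", raw)[:10]
    return f"{digits[:3]}-{digits[3:6]}-{digits[6:]}"

samples = ["T: (985)-124-5601", "(985)124-5601", "985.124.5601",
           "9851245601", "985124-5601 EX 1432", "985-(124)-5601"]
cleaned = [clean_phone(s) for s in samples]
```

Truncating to 10 digits drops extension suffixes such as `EX 1432`, matching the query's `LEFT(..., 10)`.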
I have the following table COUNTRY\_PEOPLE:
```
COUNTRY - CITY - YEAR - PEOPLE
ENGLAND - LONDON 1980 - 7834020
ENGLAND - LONDON 2010 - 8308369
ENGLAND - DERBY 1980 - 231483
ENGLAND - DERBY 2010 - 233700
FRANCE - PARIS 1980 - 2174654
FRANCE - PARIS 2010 - 2274880
FRANCE - NANTES 1980 - 279321
FRANCE - NANTES 2010 - 290130
```
I need a sql query to get for each country the list of city with the difference between the people in the 2010 and 1980.
So:
```
ENGLAND - LONDON 474349
ENGLAND - DERBY 2217
FRANCE - PARIS 100226
FRANCE - NANTES 10809
```
Then the average for country, but this should be easy with a group by.
I just invented the example as translated from a real boring scenario, all data are fake.
Thank you in advance | Try:
```
SELECT Country, City,
AVG(CASE WHEN Year=2010 THEN People END)-AVG(CASE WHEN Year=1980 THEN People END) as AvgPeople
FROM COUNTRY_PEOPLE
GROUP BY Country, City
```
See `Demo`
**Result**:
 | Please Check below query
```
SELECT a.COUNTRY,a.CITY,(ISNULL(a.PEOPLE,0) - ISNULL(b.PEOPLE,0)) AS PEOPLE
FROM
(SELECT COUNTRY ,CITY , SUM(ISNULL(PEOPLE,0)) AS PEOPLE FROM COUNTRY_PEOPLE
WHERE [YEAR] = 2010 GROUP BY COUNTRY ,CITY) a
LEFT OUTER JOIN
(SELECT COUNTRY ,CITY , SUM(ISNULL(PEOPLE,0)) AS PEOPLE FROM COUNTRY_PEOPLE
WHERE [YEAR] = 1980 GROUP BY COUNTRY ,CITY)
b ON a.COUNTRY = b.COUNTRY AND a.CITY = b.CITY
``` | SQL - difference and average between more records | [
"",
"sql",
"h2",
""
] |
I am struggling to get this query to work. I have three tables and I want to write a query that returns the red area.
Each circle is a table with a different structure. I have tried many combinations of inner joins, but I especially can't get all the red areas.
A Payments : idPayment , idInvoice , Amount , date.
B Invoice : idInvoice , amount date.
C PromissoryNotes: IdNote , idInvoice, amount, date.

so far ...
```
SELECT B.idInvoice, A.idPayment, C.idNote
FROM (Invoice b INNER JOIN payments a ON a.idInvoice = b.idInvoice)
LEFT OUTER JOIN PromissoryNotes c ON c.idInvoice = b.idInvoice
ORDER BY idInvoice
```
It doesn't quite work.
Any suggestions? | You were pretty close -- another `OUTER JOIN` and some `WHERE` criteria will do the trick:
```
SELECT B.idInvoice, A.idPayment, C.idNote
FROM Invoice b
LEFT JOIN payments a ON a.idInvoice=b.idInvoice
LEFT JOIN PromissoryNotes c ON c.idInvoice=b.idInvoice
WHERE a.idInvoice IS NOT NULL
OR c.idInvoice IS NOT NULL
ORDER BY B.idInvoice
```
What this basically says is give me all results from table B, where there's a match in table a or table c.
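To see the shape of the result, here is the same "rows of B that touch A or C" pattern run on SQLite with cut-down versions of the three tables (the sample values are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Invoice (idInvoice INT);
    CREATE TABLE payments (idPayment INT, idInvoice INT);
    CREATE TABLE PromissoryNotes (idNote INT, idInvoice INT);
    INSERT INTO Invoice VALUES (1), (2), (3);  -- invoice 3 has no payment or note
    INSERT INTO payments VALUES (10, 1);
    INSERT INTO PromissoryNotes VALUES (20, 2);
""")
rows = conn.execute("""
    SELECT b.idInvoice, a.idPayment, c.idNote
    FROM Invoice b
    LEFT JOIN payments a        ON a.idInvoice = b.idInvoice
    LEFT JOIN PromissoryNotes c ON c.idInvoice = b.idInvoice
    WHERE a.idInvoice IS NOT NULL OR c.idInvoice IS NOT NULL
    ORDER BY b.idInvoice
""").fetchall()
print(rows)  # invoice 3 is filtered out
```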
* [Condensed SQL Fiddle Demo](http://sqlfiddle.com/#!2/cd3fa/1) | You could do this two ways:
1) Create a set A that is the inner join of B and A, create a set C that is the inner join of B and C, then union A and C.
2) Create a sub query that inner joins A and B, then full outer join to a sub query that inner joins C and B.
Example of 1)
```
SELECT b.idInvoice FROM Invoice B
JOIN Payments A on A.IdInvoice = B.IdInvoice
UNION
SELECT b.idInvoice FROM Invoice B
JOIN PromissoryNotes C on C.idInvoice = B.idInvoice
```
Example of 2)
```
SELECT COALESCE(B.idInvoice, C.idInvoice) AS idInvoice FROM
(
SELECT b.idInvoice FROM Invoice B
JOIN Payments A on A.IdInvoice = B.IdInvoice
) B FULL OUTER JOIN
(
SELECT b.idInvoice FROM Invoice B
JOIN PromissoryNotes C on C.idInvoice = B.idInvoice
) C on b.idInvoice = C.idInvoice
``` | Joining three tables - Mysq-l an inner and outer join perhaps? | [
"",
"mysql",
"sql",
"join",
""
] |
I'm using postgreSQL 9.2.
Say I have the following table:
```
id name definition
serial varchar(128) text
1 name1 definition1
..........................................
```
I need to write a query that removes duplicate rows so that every remaining row has a unique name. If two rows have the same name, their definitions are also the same. | Use the row\_number() function over name and remove all rows that have row\_number() > 1.
Here is an example query: [Deleting duplicates](http://wiki.postgresql.org/wiki/Deleting_duplicates) | ```
DELETE FROM mytable dd
WHERE EXISTS (
SELECT *
FROM mytable ex
WHERE ex.name = dd.name
AND ex.id < dd.id
);
``` | SQL query for removing non-unique row | [
"",
"sql",
"postgresql",
""
] |
\*\*Edit: RESOLVED - Thank you all! I can't 'vote up' any of your replies due to my low rep, but I appreciate them all!
Edit 2: Rep high enough now, vote up's to all!\*\*
OK, I have a sproc that runs a few inserts, assigning each new record's identity to a variable and then inserting those into another table. The issue is that some of the inserts insert nothing (when there is no data to insert), but SCOPE\_IDENTITY() carries over from the previous insert...
in the code below, I would expect @NewId2 to be null, but it contains the id from the previous insert. What should I do to prevent @NewId2 from having the "wrong Id?"
```
CREATE TABLE #tempdemo (
theId int IDENTITY(100,3),
theField varchar(20)
)
DECLARE @NewId1 int
DECLARE @NewId2 int
INSERT INTO
#tempdemo
(theField)
SELECT
'test1'
--this would have a "from table" in a real situation
WHERE
1 = 1
SET @NewId1 = SCOPE_IDENTITY();
INSERT INTO
#tempdemo
(theField)
SELECT
'test2'
--this would have a "from table" in a real situation
WHERE
1 = 2 --obviously fails, in my real situation there are times the insert has nothing to insert
SET @NewId2 = SCOPE_IDENTITY();
select '@NewId1 = ', @NewId1, '@NewId2 = ', @NewId2
drop table #tempdemo
``` | After you insert, save `scope_identity()` in a variable if `@@rowcount` is greater than 0.
EX:
```
DECLARE @MY_NEW_ID INT
INSERT INTO TABLE2 (COL1)
SELECT VAL1
FROM TABLE1
WHERE X=Y
IF @@ROWCOUNT>0
BEGIN
SET @MY_NEW_ID = SCOPE_IDENTITY()
END
/* DO MORE INSERTS HERE ... */
IF @MY_NEW_ID IS NOT NULL
BEGIN
INSERT INTO TABLE3 (NEWID) VALUES @MY_NEW_ID
END
``` | You can try this:
```
IF @@ROWCOUNT<>0
SET @NewId2 = SCOPE_IDENTITY();
``` | SCOPE_IDENTITY, multiple inserts, carries value from previous insert if current insert doesn't insert | [
"",
"sql",
"sql-server",
""
] |
I want to run sql commands directly in my rails application. Is there any gem to do this?
For example:
I want to delete a user , who is having id 2 using below sql command
```
delete from users where id=2
``` | Yes. We can do that by using [query\_exec](https://rubygems.org/gems/query_exec) | ```
ActiveRecord::Base.connection.execute("delete from users where id=2")
``` | Ruby gem to run sql commands | [
"",
"sql",
"ruby",
"ruby-on-rails-3",
"rubygems",
""
] |
I have a table with a field datatype of NVarchar2(4000) I am moving data from a SQL Server to an Oracle Server. The SQL Server datatype is also nvarchar(4000). I have checked the MAX Size of this field on the SQL Server side, and the MAX is 3996, which is 4 characters short of the 4000 limit.
When I try to insert this data into Oracle, I get an error "LONG" due to the size.
What is going on here, will the Oracle NVarchar2(4000) not allow 4000 characters? If not, what is the limit, or how can I get around this? | There is a limit of 4000 bytes not 4000 characters. So NVARCHAR2(4000) with an AL16UTF16 national character set would occupy the maximum 4000 bytes.
From the [oracle docs of MAX\_STRING SIZE](http://docs.oracle.com/database/121/REFRN/refrn10321.htm#REFRN10321):
> Tables with virtual columns will be updated with new data type
> metadata for virtual columns of VARCHAR2(4000), 4000-byte NVARCHAR2,
> or RAW(2000) type.
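The byte-versus-character distinction is easy to demonstrate outside the database. For example, in Python, 4000 characters encoded the way an AL16UTF16 national character set stores them take 8000 bytes, double the 4000-byte column limit:

```python
text = "a" * 4000                     # 4000 characters
print(len(text.encode("utf-16-be")))  # 8000 bytes: too big for a 4000-byte NVARCHAR2
print(len("€".encode("utf-8")))       # 3 bytes: some characters cost more even in UTF-8
```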
**Solution:**
Also if you want to store 4000 characters then I would recommend you to use **[CLOB](http://www.orafaq.com/wiki/CLOB)**
> A CLOB (Character Large Object) is an Oracle data type that can hold
> up to 4 GB of data. CLOBs are handy for storing text.
You may try like this to change column data type to CLOB:
```
ALTER TABLE table_name
ADD (tmpcolumn CLOB);
UPDATE table_name SET tmpcolumn =currentnvarcharcolumn;
COMMIT;
ALTER TABLE table_name DROP COLUMN currentnvarcharcolumn;
ALTER TABLE table_name
RENAME COLUMN tmpcolumn TO whatevernameyouwant;
``` | First, as others have pointed out, unless you're using 12.1, both `varchar2` and `nvarchar2` data types are limited in SQL to 4000 bytes. In PL/SQL, they're limited to 32767. In 12.1, you can increase the SQL limit to 32767 using the [`MAX_STRING_SIZE` parameter](http://docs.oracle.com/database/121/REFRN/refrn10321.htm#REFRN10321).
Second, unless you are working with a legacy database that uses a non-Unicode character set that cannot be upgraded to use a Unicode character set, you would want to avoid `nvarchar2` and `nchar` data types in Oracle. In SQL Server, you use `nvarchar` when you want to store Unicode data. In Oracle, the preference is to use `varchar2` in a database whose character set supports Unicode (generally `AL32UTF8`) when you want to store Unicode data.
If you store Unicode data in an Oracle `NVARCHAR2` column, the national character set will be used-- this is almost certainly `AL16UTF16` which means that every character requires at least 2 bytes of storage. A `NVARCHAR2(4000)`, therefore, probably can't store more than 2000 characters. If you use a `VARCHAR2` column, on the other hand, you can use a variable width Unicode character set (`AL32UTF8`) in which case English characters generally require just 1 byte, most European characters require 2 bytes, and most Asian characters require 3 bytes (this is, of course, just a generalization). That is generally going to allow you to store substantially more data in a `VARCHAR2` column.
If you do need to store more than 4000 bytes of data and you're using Oracle 11.2 or later, you'd have to use a `LOB` data type (`CLOB` or `NCLOB`). | Oracle Insert Into NVarchar2(4000) does not allow 4000 characters? | [
"",
"sql",
"sql-server",
"oracle",
"insert",
""
] |
Let's say a patient makes many visits. I want to write a query that returns distinct patient rows based on their earliest visit. For example, consider the following rows.
```
patients
-------------
id name
1 Bob
2 Jim
3 Mary
visits
-------------
id patient_id visit_date reference_number
1 1 6/29/14 09f3be26
2 1 7/8/14 34c23a9e
3 2 7/10/14 448dd90a
```
What I want to see returned by the query is:
```
id name first_visit_date reference_number
1 Bob 6/29/14 09f3be26
2 Jim 7/10/14 448dd90a
```
What I've tried looks something like:
```
SELECT
patients.id,
patients.name,
visits.visit_date AS first_visit_date,
visits.reference_number
FROM
patients
INNER JOIN (
SELECT
*
FROM
visits
ORDER BY
visit_date
LIMIT
1
) visits ON
visits.patient_id = patients.id
```
Adding the `LIMIT` causes the query to return 0 rows, but removing it causes the query to return duplicates. What's the trick here? I've also tried selecting `MIN(visit_date)` in the INNER JOIN but that's also returning dups.
**Update**
It's been suggested that this question is a duplicate, but to me it seems different because I'm doing this over two separate tables. The accepted answer on the other question suggests joining on `y.max_total = x.total`, which works if the table being joined is the same table being selected from. Additionally, I need to return other columns from the row with the MIN date, not just the date itself.
The answer I've accepted works great, however. | Use `distinct on`
```
select distinct on (p.id)
p.id,
p.name,
v.visit_date as first_visit_date,
v.reference_number
from
patients p
inner join
visits v on p.id = v.patient_id
order by p.id, v.visit_date
```
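`DISTINCT ON` is PostgreSQL-specific. As a portable cross-check, the same first-visit rows fall out of a `NOT EXISTS` anti-join; here it runs on SQLite with the question's data (dates rewritten in ISO format so plain text comparison orders them correctly):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE patients (id INT, name TEXT);
    CREATE TABLE visits (id INT, patient_id INT, visit_date TEXT, reference_number TEXT);
    INSERT INTO patients VALUES (1, 'Bob'), (2, 'Jim'), (3, 'Mary');
    INSERT INTO visits VALUES
        (1, 1, '2014-06-29', '09f3be26'),
        (2, 1, '2014-07-08', '34c23a9e'),
        (3, 2, '2014-07-10', '448dd90a');
""")
rows = conn.execute("""
    SELECT p.id, p.name, v.visit_date, v.reference_number
    FROM patients p
    JOIN visits v ON v.patient_id = p.id
    WHERE NOT EXISTS (
        SELECT 1 FROM visits nx
        WHERE nx.patient_id = v.patient_id
          AND (nx.visit_date < v.visit_date
               OR (nx.visit_date = v.visit_date AND nx.id < v.id))
    )
    ORDER BY p.id
""").fetchall()
print(rows)  # Mary has no visits, so she is excluded by the inner join
```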
<http://www.postgresql.org/docs/current/static/sql-select.html#SQL-DISTINCT> | Avoiding the `DISTINCT ON(p.id)`, but using a plain old `NOT EXISTS(...)` instead
```
SELECT p.id, p.name
, v.first_visit_date, v.reference_number
FROM patients p
JOIN visits v ON p.id = v.patient_id
-- exclude all join-products that are not the first for a patient.
WHERE NOT EXISTS (
SELECT *
FROM visits nx
WHERE nx.patient_id = v.patient_id
AND ( nx.visit_date < v.visit_date
OR (nx.visit_date = v.visit_date AND nx.id < v.id) -- tie-breaker condition
)
);
``` | In one to many relationship, return distinct rows based on MIN value | [
"",
"sql",
"postgresql",
"one-to-many",
"greatest-n-per-group",
""
] |
```
INSERT INTO `table` VALUES ('val1','val2','val3')
INSERT INTO `table` SET a='val1',b='val2',c='val3'
```
Both are used for the same purpose, but which one should I use, in which case, and why? | They are identical, but the first one is [standard SQL](http://en.wikipedia.org/wiki/Insert_(SQL)#Basic_form). As the SQL parser treats them as synonyms, there is no difference in performance, usage or anything. As always, prefer the standardized form (the first syntax in this case) for portability and compatibility.
[Refer to the SQL-92 standard syntax](http://savage.net.au/SQL/sql-92.bnf.html). | As far as I can tell, both syntaxes are equivalent.
```
The first is SQL standard, the second is MySQL's extension.
```
So they should be exactly equivalent performance wise.
<http://dev.mysql.com/doc/refman/5.6/en/insert.html> says:
```
INSERT inserts new rows into an existing table.
The INSERT ... VALUES and INSERT ... SET forms of the statement
insert rows based on explicitly specified values.
The INSERT ... SELECT form inserts rows selected from another
table or tables.
``` | Insert Set or Insert Values | [
"",
"mysql",
"sql",
""
] |
I'm asking for your help writing a query to check the availability of a room in MySQL.
Currently I have this kind of table:
```
ROOM | FROM | TO
-----------------------------
101 | 2014-08-09 | 2014-08-14
102 | 2014-08-09 | 2014-08-14
... ... ...
```
So I have room 101 booked from **09-08-2014** to **14-08-2014**; my query to check availability looks like:
```
SELECT order_id FROM booking
WHERE `ROOM` = '101'
AND (`FROM` BETWEEN '2014-08-08' AND '2014-08-20')
AND (`TO` BETWEEN '2014-08-08' AND '2014-08-20')
```
In the above example I check availability for the dates between 2014-08-08 and 2014-08-20.
```
What I am trying to achieve is this
Order --------09++++++++++13--------------
Check1 -----08+++++++++++++++++++++++++17-- Not available
Check2 -----------------12+++++++++++++17-- Not available
Check3 -----------10----------------------- Not available
Check4 -----------10+11-------------------- Not available
Check5 -----------------------14+++++++17-- Available
Check6 --07++++09-------------------------- Not available
Check7 --------------------------15-------- Available
SCALE 6-07-08-09-10-11-12-13-14-15-16-17-18-19...
```
I must check whether the room is available: if I get any result from that query, the room is already booked; if I get nothing, it is available. | First, let's generalize an algorithm for how to check for an overlap between intervals `[a,b]` and `[c,d]`. Note the square braces on those intervals, which means an inclusive interval. We can use this logic to check for an interval overlap:
```
a <= d and b >= c
```
If that condition is true, then we have an overlap.
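That inclusive-interval rule is easy to unit-test. The checks below mirror the question's Check1-Check7 diagram (booking 9-13 on the question's day scale):

```python
def overlaps(a, b, c, d):
    """Inclusive intervals [a, b] and [c, d] overlap iff a <= d and b >= c."""
    return a <= d and b >= c

booking = (9, 13)
checks = [(8, 17), (12, 17), (10, 10), (10, 11), (14, 17), (7, 9), (15, 15)]
for c, d in checks:
    print((c, d), "conflict" if overlaps(*booking, c, d) else "available")
```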
So to apply this algorithm to SQL, we could do something like this:
```
a = 2014-08-08
b = 2014-08-20
c = FROM
d = TO
SELECT order_id FROM booking
WHERE NOT EXISTS (
SELECT * FROM booking
WHERE ROOM = '101'
AND '2014-08-08' <= `TO`
AND '2014-08-20' >= `FROM`
)
AND ROOM = '101'
```
The other problem with your approach is that you are checking to see if a room is available, and the assumption here is that if the room is available, then you will book it with another SQL statement. This is a problematic approach, because there is the possibility that you could double book a room. Consider the possibility that two processes check for room availability at the same (or close to the same) time. Or another example would be if this code were part of a transaction that hadn't been committed yet. The other process wouldn't see your uncommitted result, and thus, would double book the room.
To remedy this flawed approach, we need to lock the room row before we check for its availability. Assuming you have some other table called ROOM, you could lock the row using a 'FOR UPDATE' statement:
```
SELECT * FROM `ROOM` WHERE ROOM = '101' FOR UPDATE
```
The "FOR UPDATE" will lock that room row, which will prevent another process from checking that room for availability until your transaction is finished. After you lock the row, you could run your overlap check. Thus, you eliminate the double booking problem.
You can read more about 'FOR UPDATE' [here](http://dev.mysql.com/doc/refman/5.0/en/innodb-locking-reads.html). | If you want to check whether room is available for the whole period, look for existing bookings that overlap with period in question:
```
SELECT order_id FROM booking
WHERE `ROOM` = '101'
AND `FROM` <= '2014-08-20'
AND `TO` >= '2014-08-08'
```
If query returns rows, you have a reservation conflict and room is not available. | Find booking overlaps to check dates availability | [
"",
"mysql",
"sql",
""
] |
Let's say I have the following table.

Given 3 user names and one company name, I need to check whether user1, user2, and user3 each appear in the table with the given company name. If they are all in the table, my program returns true. If one of them is not in the table, it returns false.
For example:
* given Users A, B, E and company walmart, my program should return
false.
* given Users C, D, E and company google, my program should return
true.
What is the most efficient sql to accomplish it?
What I can think of now is that check each user with the given company once.
```
e.g.
select * from [table] where user = '[given user1]'
and company = '[given company name]'
select * from [table] where user = '[given user2]'
and company = '[given company name]'
select * from [table] where user = '[given user3]'
and company = '[given company name]'
```
In total I will need to access the DB 3 times for each check. I do not think this is a good way.
P.S. I am using SQL Server for this, but I prefer a normal/generic SQL statement that fits most databases, if possible.
Thanks in advance. | You can use `MIN()` and `CASE` to do this:
```
SELECT MIN(CASE WHEN company = 'walmart' THEN 'True' ELSE 'False' END)
FROM Table1
WHERE users IN ('a','b','e')
--False
SELECT MIN(CASE WHEN company = 'google' THEN 'True' ELSE 'False' END)
FROM Table1
WHERE users IN ('c','d','e')
--True
```
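The trick works because the string `'False'` sorts before `'True'`, so a single non-matching row drags `MIN` down to `'False'`. A quick SQLite check (the sample rows and the `all_in` helper are invented here, since the question's table is only shown as an image):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Table1 (users TEXT, company TEXT)")
conn.executemany("INSERT INTO Table1 VALUES (?,?)", [
    ("a", "walmart"), ("b", "walmart"),
    ("c", "google"), ("d", "google"), ("e", "google"),
])

def all_in(users, company):
    # one placeholder per user for the IN (...) list
    q = ("SELECT MIN(CASE WHEN company = ? THEN 'True' ELSE 'False' END) "
         "FROM Table1 WHERE users IN (%s)" % ",".join("?" * len(users)))
    return conn.execute(q, [company, *users]).fetchone()[0]

print(all_in(["a", "b", "e"], "walmart"))  # False
print(all_in(["c", "d", "e"], "google"))   # True
```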
If the values for comparison were stored in a table you could use the same with a `JOIN` instead. | You should use a case:
```
select CASE when user = '[given user1]' and company = '[given company name]' then "true" end as [company name], CASE when user = '[given user2]' and company = '[given company name]' then "true" end as [company name2]
from [table]
```
You can do this for as many company name as you want. And you can always CONCAT the result to show both the user and company, instead of just the company in this example. | sql how to check multiple data existed in db | [
"",
"sql",
"sql-server",
"database",
""
] |
In Oracle PL/SQL, how do I retrieve only the numbers from a string?
e.g. 123abc -> 123 (remove letters)
e.g. 123\*-\*abc -> 123 (remove all special characters too)
```
select regexp_replace('123*-*abc', '[^[:digit:]]', '') from dual
```
This removes all non-digits from the input.
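The same character-class idea carries over to other regex engines; for example, Python's `re` with the equivalent pattern `[^0-9]`:

```python
import re

print(re.sub(r"[^0-9]", "", "123abc"))     # 123
print(re.sub(r"[^0-9]", "", "123*-*abc"))  # 123
```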
If using in pl/sql, you could do an assignment to a variable:
```
declare
l_num number;
l_string varchar2(20) := '123*-*abc';
begin
l_num := regexp_replace(l_string, '[^[:digit:]]', '');
dbms_output.put_line('Num is: ' || l_num);
end;
```
Output:
```
Num is: 123
``` | ```
your_string := regexp_replace(your_string, '\D')
``` | Oracle retrieve only number in string | [
"",
"sql",
"oracle",
""
] |
I have a set of tables (with several one-many relationships) that form a single "unit". I need to ensure that we weed out duplicates, but determining duplicates requires consideration of all the data.
To make matters worse, the DB in question is still in Sql 2000 compatibility mode, so it can't use any newer features.
```
Create Table UnitType
(
Id int IDENTITY Primary Key,
Action int not null,
TriggerType varchar(25) not null
)
Create Table Unit
(
Id int IDENTITY Primary Key,
TypeId int Not Null,
Message varchar(100),
Constraint FK_Unit_Type Foreign Key (TypeId) References UnitType(Id)
)
Create Table Item
(
Id int IDENTITY Primary Key,
QuestionId int not null,
Sequence int not null
)
Create Table UnitCondition
(
Id int IDENTITY Primary Key,
UnitId int not null,
Value varchar(10),
ItemId int not null
Constraint FK_UnitCondition_Unit Foreign Key (UnitId) References Unit(Id),
Constraint FK_UnitCondition_Item Foreign Key (ItemId) References Item(Id)
)
Insert into Item (QuestionId, Sequence)
Values (1, 1),
(1, 2)
Insert into UnitType(Action, TriggerType)
Values (1, 'Changed')
Insert into Unit (TypeId, Message)
Values (1, 'Hello World'),
(1, 'Hello World')
Insert into UnitCondition(UnitId, Value, ItemId)
Values (1, 'Test', 1),
(1, 'Hello', 2),
(2, 'Test', 1),
(2, 'Hello', 2)
```
I've created a [SqlFiddle](http://sqlfiddle.com/#!3/2c4a5/5) demonstrating a simple form of this issue.
A Unit is considered a duplicate when all (non-Id) fields on the Unit, and *all* conditions on that Unit *combined*, match exactly in every detail. Thinking of it as XML: a `Unit` node (containing the Unit info and a Conditions sub-collection) is unique if no other `Unit` node exists that is an exact string copy.
```
Select
Action,
TriggerType,
U.TypeId,
U.Message,
(
Select C.Value, C.ItemId, I.QuestionId, I.Sequence
From UnitCondition C
Inner Join Item I on C.ItemId = I.Id
Where C.UnitId = U.Id
For XML RAW('Condition')
) as Conditions
from UnitType T
Inner Join Unit U on T.Id = U.TypeId
For XML RAW ('Unit'), ELEMENTS
```
But the issue I have is that I can't seem to get the XML for each Unit to appear as a new record, and I'm not sure how to compare the Unit Nodes to look for Duplicates.
How Can I run this query to determine if there are duplicate Xml `Unit` nodes within the collection? | So, I managed to figure out what I needed to do. It's a little clunky though.
First, you need to wrap the Xml `Select` statement in another select against the Unit table, in order to ensure that we end up with xml representing only that unit.
```
Select
Id,
(
Select
Action,
TriggerType,
IU.TypeId,
IU.Message,
(
Select C.Value, I.QuestionId, I.Sequence
From UnitCondition C
Inner Join Item I on C.ItemId = I.Id
Where C.UnitId = IU.Id
Order by C.Value, I.QuestionId, I.Sequence
For XML RAW('Condition'), TYPE
) as Conditions
from UnitType T
Inner Join Unit IU on T.Id = IU.TypeId
WHERE IU.Id = U.Id
For XML RAW ('Unit')
)
From Unit U
```
Then, you can wrap this in another select, grouping the xml up by content.
```
Select content, count(*) as cnt
From
(
Select
Id,
(
Select
Action,
TriggerType,
IU.TypeId,
IU.Message,
(
Select C.Value, C.ItemId, I.QuestionId, I.Sequence
From UnitCondition C
Inner Join Item I on C.ItemId = I.Id
Where C.UnitId = IU.Id
Order by C.Value, I.QuestionId, I.Sequence
For XML RAW('Condition'), TYPE
) as Conditions
from UnitType T
Inner Join Unit IU on T.Id = IU.TypeId
WHERE IU.Id = U.Id
For XML RAW ('Unit')
) as content
From Unit U
) as data
group by content
having count(*) > 1
```
This will allow you to group entire units where the whole content is identical.
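The core trick, serializing each unit together with its deterministically ordered conditions and then grouping identical serializations, can be sketched in plain Python (the toy data below mirrors the question's two identical units):

```python
from collections import Counter

units = {1: ("Hello World", 1), 2: ("Hello World", 1)}  # id -> (Message, TypeId)
conditions = [                                          # (UnitId, Value, QuestionId, Sequence)
    (1, "Test", 1, 1), (1, "Hello", 1, 2),
    (2, "Hello", 1, 2), (2, "Test", 1, 1),              # same set, different insert order
]

def canonical(unit_id):
    # sorting makes the serialization order-independent, like the ORDER BY above
    conds = sorted(c[1:] for c in conditions if c[0] == unit_id)
    return (units[unit_id], tuple(conds))

groups = Counter(canonical(u) for u in units)
dupes = [key for key, n in groups.items() if n > 1]
print(len(dupes))  # both units share one content signature
```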
One thing to watch out for though, is that to test "uniqueness", you need to guarantee that the data on the inner Xml selection(s) is always the same. To that end, you should apply ordering on the relevant data (i.e. the data in the xml) to ensure consistency. What order you apply doesn't really matter, so long as two identical collections will output in the same order. | If you want to determine whether record is duplicate or not, you don't need to combine all values into one string. You can do this with ROW\_NUMBER function like this:
```
SELECT
Action,
TriggerType,
U.Id,
U.TypeId,
U.Message,
C.Value,
I.QuestionId,
I.Sequence,
ROW_NUMBER () OVER (PARTITION BY <LIST OF FIELDS THAT SHOULD BE UNIQUE>
ORDER BY <LIST OF FIELDS>) as DupeNumber
FROM UnitType T
Inner Join Unit U on T.Id = U.TypeId
Inner Join UnitCondition C on U.Id = C.UnitId
Inner Join Item I on C.ItemId = I.Id;
```
If DupeNumber is greater than 1, then the record is a duplicate. | Identify Duplicate Xml Nodes | [
"",
"sql",
"sql-server",
"t-sql",
"hashcode",
""
] |
I've the following table:
```
UserId Start Stop
1550f503-914e-4f83-af2b-249d54cce369 2014-04-15 18:20:23.000 2014-04-15 18:21:38.000
1550f503-914e-4f83-af2b-249d54cce369 2014-04-15 18:20:23.000 2014-04-15 18:21:38.000
1550f503-914e-4f83-af2b-249d54cce369 2014-04-15 18:20:23.000 2014-04-15 18:21:38.000
1550f503-914e-4f83-af2b-249d54cce369 2014-04-15 18:20:23.000 2014-04-15 18:21:38.000
1550f503-914e-4f83-af2b-249d54cce369 2014-04-15 18:39:14.000 2014-04-15 18:40:02.000
1550f503-914e-4f83-af2b-249d54cce369 2014-04-15 18:41:17.000 2014-04-15 18:46:08.000
1550f503-914e-4f83-af2b-249d54cce369 2014-04-15 19:57:28.000 2014-04-15 19:59:53.000
1550f503-914e-4f83-af2b-249d54cce369 2014-04-16 08:19:48.000 2014-04-16 08:20:27.000
1550f503-914e-4f83-af2b-249d54cce369 2014-04-16 13:18:41.000 2014-04-16 14:24:23.000
7a2e5b7c-9a3a-4ad1-ad79-946bcf2bf2f3 2014-04-16 16:10:11.000 2014-04-16 16:15:53.000
7a2e5b7c-9a3a-4ad1-ad79-946bcf2bf2f3 2014-04-16 17:18:57.000 2014-04-16 17:20:09.000
7a2e5b7c-9a3a-4ad1-ad79-946bcf2bf2f3 2014-04-16 20:21:27.000 2014-04-16 20:32:40.000
7a2e5b7c-9a3a-4ad1-ad79-946bcf2bf2f3 2014-04-16 20:57:06.000 2014-04-16 20:57:41.000
7a2e5b7c-9a3a-4ad1-ad79-946bcf2bf2f3 2014-04-16 20:57:56.000 2014-04-16 21:33:47.000
7a2e5b7c-9a3a-4ad1-ad79-946bcf2bf2f3 2014-04-17 08:03:34.000 2014-04-17 08:19:42.000
7a2e5b7c-9a3a-4ad1-ad79-946bcf2bf2f3 2014-04-17 08:23:04.000 2014-04-17 08:25:18.000
7a2e5b7c-9a3a-4ad1-ad79-946bcf2bf2f3 2014-04-17 08:32:56.000 2014-04-17 08:36:13.000
7a2e5b7c-9a3a-4ad1-ad79-946bcf2bf2f3 2014-04-17 08:40:17.000 2014-04-17 08:42:01.000
7a2e5b7c-9a3a-4ad1-ad79-946bcf2bf2f3 2014-04-17 09:03:56.000 2014-04-17 09:04:07.000
7a2e5b7c-9a3a-4ad1-ad79-946bcf2bf2f3 2014-04-17 09:38:37.000 2014-04-17 09:45:55.000
7a2e5b7c-9a3a-4ad1-ad79-946bcf2bf2f3 2014-04-17 09:46:37.000 2014-04-17 09:48:35.000
7a2e5b7c-9a3a-4ad1-ad79-946bcf2bf2f3 2014-04-17 09:48:58.000 2014-04-17 09:55:27.000
7a2e5b7c-9a3a-4ad1-ad79-946bcf2bf2f3 2014-04-17 09:55:49.000 2014-04-17 09:57:11.000
7a2e5b7c-9a3a-4ad1-ad79-946bcf2bf2f3 2014-04-17 10:00:32.000 2014-04-17 10:03:32.000
7a2e5b7c-9a3a-4ad1-ad79-946bcf2bf2f3 2014-04-17 10:08:09.000 2014-04-17 10:14:04.000
7a2e5b7c-9a3a-4ad1-ad79-946bcf2bf2f3 2014-04-17 10:16:49.000 2014-04-17 10:26:29.000
7a2e5b7c-9a3a-4ad1-ad79-946bcf2bf2f3 2014-04-17 10:32:36.000 2014-04-17 10:37:42.000
7a2e5b7c-9a3a-4ad1-ad79-946bcf2bf2f3 2014-04-17 10:42:01.000 2014-04-17 10:43:23.000
8e2555b7-7d7b-410b-818e-1c62f1d9268a 2014-04-17 10:58:34.000 2014-04-17 10:58:41.000
8e2555b7-7d7b-410b-818e-1c62f1d9268a 2014-04-17 10:58:53.000 2014-04-17 10:59:49.000
8e2555b7-7d7b-410b-818e-1c62f1d9268a 2014-04-17 11:02:25.000 2014-04-17 11:04:03.000
7a2e5b7c-9a3a-4ad1-ad79-946bcf2bf2f3 2014-04-17 12:43:04.000 2014-04-17 12:43:13.000
```
You can see that in my table some `UserId` values are repeated, so I've used the following query to deduplicate them and obtain a result in which the `UserId` values aren't repeated:
```
select DISTINCT UserId from UserSessions
```
Now I need to get the number of times that each UserId appears in the table, i.e. the number of times that each user accessed the application, so I thought of a query like this:
```
select COUNT(UserId) from UserSessions
where UserId = (select DISTINCT UserId from UserSessions)
```
But it doesn't work. How can I solve my problem? | ```
select UserId, COUNT(UserId) from UserSessions
GROUP BY UserId
``` | I think this should do it:
```
SELECT UserId, COUNT(UserId)
FROM UserSessions
GROUP BY UserId
```
You don't have to include the DISTINCT USERID part in the WHERE clause, as it will group on the IDs. | Get the number of same field from a table in SQL | [
"",
"sql",
"sql-server",
""
] |
I have found several examples of SQL queries which will find missing numbers in a sequence. For example this one:
```
Select T1.val+1
from table T1
where not exists(select val from table T2 where T2.val = T1.val + 1);
```
This will only find gaps in an existing sequence. I would like to find gaps in a sequence starting from a minimum.
For example, if the values in my sequence are 2, 4 then the query above will return 3,5.
I would like to specify that my sequence must start at 0 so I would like the query to return 0,1,3,5.
How can I add the minimum value to my query?
* A few answers to questions below:
+ There is no maximum, only a minimum
+ The DB is oracle | This is quite easy in Postgres:
```
select x.i as missing_sequence_value
from (
select i
from generate_series(0,5) i -- 0,5 are the lower and upper bounds you want
) x
left join the_table t on t.val = x.i
where t.val is null;
```
SQLFiddle: <http://www.sqlfiddle.com/#!15/acb07/1>
**Edit**
The Oracle solution is a bit more complex because generating the numbers requires a workaround
```
with numbers as (
select level - 1 as val
from dual
connect by level <= (select max(val) + 2 from the_table) -- this is the maximum
), number_range as (
select val
from numbers
where val >= 0 -- this is the minimum
)
select nr.val as missing_sequence_value
from number_range nr
left join the_table t on t.val = nr.val
where t.val is null;
```
SQLFiddle: <http://www.sqlfiddle.com/#!4/71584/4>
The idea (in both cases) is to generate a list of numbers you are interested in (from 0 to 5) and then doing an outer join against the values in your table. The rows where the outer join does not return something from your table (that's the condition `where t.val is null`) are the values that are missing.
The Oracle solution requires two common table expressions ("CTE", the "with" things) because you can't add a `where level >= x` in the first CTE that generates the numbers.
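The generator-plus-anti-join idea is portable to any engine with recursive CTEs; for instance, SQLite (a sketch using the question's sample values 2 and 4 and a minimum of 0):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE the_table (val INT);
    INSERT INTO the_table VALUES (2), (4);
""")
missing = [r[0] for r in conn.execute("""
    WITH RECURSIVE nums(i) AS (
        SELECT 0                                  -- the minimum
        UNION ALL
        SELECT i + 1 FROM nums
        WHERE i < (SELECT MAX(val) + 1 FROM the_table)
    )
    SELECT i FROM nums
    WHERE NOT EXISTS (SELECT 1 FROM the_table t WHERE t.val = nums.i)
    ORDER BY i
""")]
print(missing)  # [0, 1, 3, 5]
```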
Note that the `connect by level <= ...` is relies on an undocumented (and unsupported) way of using `connect by`. But so many people are using that to get a "number generator" that I doubt that Oracle will actually remove this. | If you have to option to use a [common table expression](http://en.wikipedia.org/wiki/Hierarchical_and_recursive_queries_in_SQL#Common_table_expression) you can generate a sequence of numbers and use that as the source of numbers.
The variables `@start` and `@end` define the range of numbers (you could easily use `max(val)` from your table as the end instead).
This example is for MS SQL Server (but CTEs is a SQL 99 feature supported by many databases):
```
declare @start int, @end int
select @start=0, @end=5
;With sequence(num) as
(
select @start as num
union all
select num + 1
from sequence
where num < @end
)
select * from sequence seq
where not exists(select val from YourTable where YourTable.val = seq.num)
Option (MaxRecursion 1000)
```
[Sample SQL Fiddle](http://www.sqlfiddle.com/#!6/940dc/2) | SQL to find missing numbers in sequence starting from min? | [
"",
"sql",
"oracle",
""
] |
I'm working on a movie database and I thought it might be a good idea to have some type of parental control in place for the future. I wrote some SQL code and it works for the most part, but I don't know why.
In my main movies table I have movies rated with the standard ratings G, PG, PG-13, R, and NC-17.
Here is the SQL code I used:
```
Select title
From movies
Where rating < "r";
```
It works, though it still shows the NC-17 titles. If I change the R rating to NC-17, it only shows the G-rated titles.
I know I can type out a longer SQL to give me the matches I want, but I want to understand why this code is performing the way it is.
Thanks for the help. | SQL doesn't understand the movie rating system. The `<` operator compares strings in alphabetical order. So when you say `< 'R'`, it's looking for all ratings that start with a letter before `R` in the alphabet. Since there are a limited number of options for ratings, you're best off doing something along the lines of this:
```
SELECT title
FROM movies
WHERE rating NOT LIKE 'R'
AND rating NOT LIKE 'NC-17'
``` | How is MySQL to know `R` is less than `NC-17`? MySQL knows how to sort numbers and letters but not movie ratings. You have to assign the ratings numbers and sort based on that.
For example:
```
Rating Value
------------------
G 1
PG 10
PG-13 20
R 30
NC-17 40
```
Than give each movie the numerical value of the rating (or use a join) and then sort on *that*. | SQL code need help understanding | [
"",
"mysql",
"sql",
""
] |
Not-null dependent means that at least 1 field is not null.
Ideas?
Example: email and phone-number can be not-null dependent. If email is null, phone-number must be not-null. If phone-number is null, email must be not-null. | MySQL doesn't support CHECK constraints, despite [this feature having been requested since 2004.](http://bugs.mysql.com/bug.php?id=3464)
Annoyingly, it parses CHECK constraint syntax, but it doesn't store the constraint or enforce it later. I have thought for a long time that this is misleading to accept an SQL statement that the RDBMS has no support for, without reporting even a warning.
To solve your problem, you could use a trigger:
```
CREATE TRIGGER t BEFORE INSERT ON mytable
FOR EACH ROW BEGIN
IF COALESCE(NEW.email, NEW.phone_number) IS NULL THEN
SIGNAL SQLSTATE '45000'
SET MESSAGE_TEXT = 'Either email or phone_number must be non-null';
END IF;
END
```
You also need to define a similar trigger before UPDATE. | ```
CREATE TABLE T (
EMAIL VARCHAR(256),
PHONE VARCHAR(10)
);
ALTER TABLE T ADD CONSTRAINT
CHECK (EMAIL IS NOT NULL OR PHONE IS NOT NULL);
```
Some SQL dialects allow or require you to put the `CHECK` constraint in the `CREATE TABLE` statement. | How to define a table in SQL with 2 fields being not-null (mandatory) dependent? | [
"",
"mysql",
"sql",
"create-table",
""
] |
I have a table with 10 records (inserted using bulk insert). Now I want to select some records by row number along with a specific condition. My problem is how to get the actual row number of each record.
Bulk insert code
```
BULK INSERT WorkDataLoad FROM 'D:\NPfiles\xyz.txt' with ( TABLOCK )
```
Table
Recordline
```
Record1
Record2
Record3
Record4
Record5
Record6
Record7
Record8
Record9
Record10
```
I tried this code, but the result is not what I want:
```
Select ROW_NUMBER() OVER(ORDER BY RecordLine DESC) AS RecordLine
from WorkDataLoad
where Recordline='Record1' or Recordline='Record3' or Recordline='Record5'
or Recordline='Record6' or Recordline='Record7' or Recordline='Record10'
```
Result
```
RowNumber Recordline
1 Record1
2 Record3
3 Record5
4 Record6
5 Record7
6 Record10
```
But I want record like this
```
RowNumber Recordline
1 Record1
3 Record3
5 Record5
6 Record6
7 Record7
10 Record10
``` | Use `row_number()` to first assign a row number, then select only the rows you require:
```
with cteRows as
(
Select
RecordLine,
rn = ROW_NUMBER() OVER (ORDER BY RecordLine DESC)
from WorkDataLoad
)
select
rn, RecordLine
from cteRows
where
Recordline IN ('Record1', 'Record3', 'Record5', 'Record6', 'Record7', 'Record10')
```
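The essential idea — assign row numbers over the whole table first, then filter — can be sketched outside SQL as well; a small Python illustration (ascending order for readability):

```python
# Row numbers are assigned over ALL rows first, so the filtered rows
# keep their original positions instead of being renumbered 1..6.
records = [f"Record{i}" for i in range(1, 11)]
wanted = {"Record1", "Record3", "Record5", "Record6", "Record7", "Record10"}

numbered = list(enumerate(records, start=1))             # (rn, RecordLine)
result = [(rn, r) for rn, r in numbered if r in wanted]  # filter AFTER numbering
print(result)
# [(1, 'Record1'), (3, 'Record3'), (5, 'Record5'), (6, 'Record6'), (7, 'Record7'), (10, 'Record10')]
```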
If you run into ordering issues with '`1`' and '`10`', use something like (assuming each `RecordLine` begins with '`Record`'):
```
with cteRows as
(
Select
RecordLine,
rn = ROW_NUMBER()
OVER (ORDER BY CAST(SUBSTRING(RecordLine, 7, LEN(RecordLine) - 6) as int) DESC)
from WorkDataLoad
)
select
rn, RecordLine
from cteRows
where
Recordline IN ('Record1', 'Record3', 'Record5', 'Record6', 'Record7', 'Record10')
``` | ```
SELECT t.*
FROM ( SELECT ROW_NUMBER() OVER ( ORDER BY RecordLine DESC ) AS RowNo,
Recordline
FROM WorkDataLoad
) AS t
WHERE t.Recordline = 'Record1'
OR t.Recordline = 'Record3'
OR t.Recordline = 'Record5'
OR t.Recordline = 'Record6'
OR t.Recordline = 'Record7'
OR t.Recordline = 'Record10'
``` | Get actual row number of the record in sql | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"sql-server-2012",
""
] |
I'm using a query to get a collection of column names:
```
SELECT COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE [...]
```
From this collection, I'd like to count every non-null, non-empty value from the original table, grouped by column name.
Let's say I have a table containing
```
COL1 | COL2 | COL3
------------------
VAL1 | VAL2 | NULL
VAL3 | | VAL4
VAL5 | |
```
I'm looking for a query to get:
```
COL1 | 3
COL2 | 1
COL3 | 1
```
It's for analytics purpose.
Thanks for your help! | Here is a simple process. Run the following query:
```
SELECT 'SELECT ''' + COLUMN_NAME + ''', COUNT(NULLIF([' + COLUMN_NAME + '], '''')) as NotNull FROM [' + TABLE_SCHEMA + '].[' + TABLE_NAME + '] union all '
FROM INFORMATION_SCHEMA.COLUMNS
WHERE [...]
```
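`COUNT` in the generated statements skips NULLs; wrapping the column in `NULLIF(col, '')` also folds empty strings into NULL, covering "not null and not empty" in one expression. A quick check of that expression on the sample data, using SQLite via Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (COL1 TEXT, COL2 TEXT, COL3 TEXT);
    INSERT INTO t VALUES ('VAL1', 'VAL2', NULL),
                         ('VAL3', '',     'VAL4'),
                         ('VAL5', '',     '');
""")
# COUNT(expr) ignores NULLs; NULLIF(col, '') turns empty strings into NULL.
counts = {
    col: conn.execute(f"SELECT COUNT(NULLIF([{col}], '')) FROM t").fetchone()[0]
    for col in ("COL1", "COL2", "COL3")
}
print(counts)  # {'COL1': 3, 'COL2': 1, 'COL3': 1}
```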
Copy the results into a query window, remove the final `union all`, and run the query. | The below code seems to work for your issue
```
create table sample
(
col1 varchar(10),
col2 varchar(10),
col3 varchar(10)
)
INSERT INTO sample (COL1,COL2,COL3) VALUES ('VAL1 ',' VAL2 ',NULL);
INSERT INTO sample (COL1,COL2,COL3) VALUES ('VAL3 ',' ',' VAL4');
INSERT INTO sample (COL1,COL2,COL3) VALUES ('VAL5 ',' ',' ');
DECLARE @cols1 NVARCHAR(MAX);
DECLARE @sql NVARCHAR(MAX);
SELECT @cols1 = STUFF((
SELECT ', COUNT(CASE WHEN len(['+ t1.NAME + '])!=0 THEN 1 END) AS ' + t1.name
FROM sys.columns AS t1
WHERE t1.object_id = OBJECT_ID('sample')
--ORDER BY ', COUNT([' + t1.name + ']) AS ' + t1.name
FOR XML PATH('')
), 1, 2, '');
SET @sql = '
SELECT ' + @cols1 + '
FROM sample
'
EXEC(@sql)
``` | Selecting columns from a query | [
"",
"sql",
"sql-server",
"t-sql",
"analytics",
""
] |
I'm going to count returned rows by this query:
```
select * from tableName where( number > 1000);
```
I tried this query to get rows count:
```
select count(*) as count from (select * from tableName where( number > 1000));
```
but I got syntax error. What is wrong with that? | This works if you are using nested query & don't use 'count' as your temporary variable name:
```
select count(temp.id) as num from (select * from tableName where number > 1000) temp
``` | You don't want a nested query; just use
```
select count(*) as count from tableName where number > 1000 ;
``` | counting rows of selected data in mysql | [
"",
"mysql",
"sql",
""
] |
I am trying to extract those Jars which contain both `Bunties` and `Candies`.
```
Row# Column1 Columm2
---------------------------------------
1 Jar1 Bunties
2 Jar1 Candies
3 Jar1 Biscuits
4 Jar1 Jellies
5 Jar2 Bunties
6 Jar2 Candies
7 Jar2 Biscuits
8 Jar3 Bunties
9 Jar3 Candies
10 Jar4 Bunties
11 Jar4 Jellies
```
`Jar1,Jar2,Jar3` are the desired output because these are the only jars which contain both `Bunties` and `Candies`. | You could use a `GROUP BY` with a `HAVING` to get the result:
```
select Column1
from dbo.yourtable
where Columm2 in ('Bunties', 'Candies')
group by Column1
having count(distinct Columm2) = 2;
```
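Run against the sample data, the query returns exactly the desired jars; a quick check using SQLite through Python's `sqlite3` (the `Columm2` spelling is kept from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE jars (Column1 TEXT, Columm2 TEXT);
    INSERT INTO jars VALUES
        ('Jar1','Bunties'),('Jar1','Candies'),('Jar1','Biscuits'),('Jar1','Jellies'),
        ('Jar2','Bunties'),('Jar2','Candies'),('Jar2','Biscuits'),
        ('Jar3','Bunties'),('Jar3','Candies'),
        ('Jar4','Bunties'),('Jar4','Jellies');
""")
rows = conn.execute("""
    SELECT Column1 FROM jars
    WHERE Columm2 IN ('Bunties', 'Candies')
    GROUP BY Column1
    HAVING COUNT(DISTINCT Columm2) = 2
    ORDER BY Column1
""").fetchall()
print(rows)  # [('Jar1',), ('Jar2',), ('Jar3',)]
```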
Your `WHERE` clause will include the values that you are looking for, then you `GROUP BY` the `Column1` and count the distinct values in `Columm2`. | You could use EXISTS:
```
SELECT Distinct t.Column1
FROM dbo.TableName t
WHERE t.Columm2 IN ('Bunties', 'Candies')
AND EXISTS(
SELECT 1 FROM dbo.TableName t2
WHERE t2.Column1 = t.Column1
AND t2.Columm2 <> t.Columm2
AND t2.Columm2 IN ('Bunties', 'Candies')
)
``` | Get distinct Column1 values which contains all required Column2 values | [
"",
"sql",
"sql-server-2008",
""
] |
I've been playing around with the sample on [Jeff's SQL Server blog](http://weblogs.sqlteam.com/jeffs/archive/2004/11/10/2737.aspx) to compare two tables to find the differences.
In my case the tables are a backup and the current data. I can get what I want with this SQL statement (simplified by removing most of the columns). I can then see the rows from each table that don't have an exact match **and I can see from which table they come.**
```
SELECT
MIN(TableName) as TableName
,[strCustomer]
,[strAddress1]
,[strCity]
,[strPostalCode]
FROM
(SELECT
'Old' as TableName
,[JAS001].[dbo].[AR_CustomerAddresses].[strCustomer]
,[JAS001].[dbo].[AR_CustomerAddresses].[strAddress1]
,[JAS001].[dbo].[AR_CustomerAddresses].[strCity]
,[JAS001].[dbo].[AR_CustomerAddresses].[strPostalCode]
FROM
[JAS001].[dbo].[AR_CustomerAddresses]
UNION ALL
SELECT
'New' as TableName
,[JAS001new].[dbo].[AR_CustomerAddresses].[strCustomer]
,[JAS001new].[dbo].[AR_CustomerAddresses].[strAddress1]
,[JAS001new].[dbo].[AR_CustomerAddresses].[strCity]
,[JAS001new].[dbo].[AR_CustomerAddresses].[strPostalCode]
FROM
[JAS001new].[dbo].[AR_CustomerAddresses]) tmp
GROUP BY
[strCustomer]
,[strAddress1]
,[strCity]
,[strPostalCode]
HAVING
COUNT(*) = 1
```
[This Stack Overflow Answer](https://stackoverflow.com/a/18736060/934912) gives me a much cleaner SQL query **but does not tell me from which table the rows come.**
```
SELECT * FROM [JAS001new].[dbo].[AR_CustomerAddresses]
UNION
SELECT * FROM [JAS001].[dbo].[AR_CustomerAddresses]
EXCEPT
SELECT * FROM [JAS001new].[dbo].[AR_CustomerAddresses]
INTERSECT
SELECT * FROM [JAS001].[dbo].[AR_CustomerAddresses]
```
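For reference, the way `EXCEPT` behaves here — keeping rows present in the first set but not the second — can be reproduced on toy tables, including a per-side label; a sketch with Python's `sqlite3` (illustrative data, not the real address tables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE old_addresses (strCustomer TEXT);
    CREATE TABLE new_addresses (strCustomer TEXT);
    INSERT INTO old_addresses VALUES ('Alice'), ('Bob');
    INSERT INTO new_addresses VALUES ('Bob'), ('Carol');
""")
# Each EXCEPT runs inside its own subquery, so each side can carry a label.
rows = set(conn.execute("""
    SELECT 'New not in Old', strCustomer
    FROM (SELECT strCustomer FROM new_addresses
          EXCEPT SELECT strCustomer FROM old_addresses)
    UNION
    SELECT 'Old not in New', strCustomer
    FROM (SELECT strCustomer FROM old_addresses
          EXCEPT SELECT strCustomer FROM new_addresses)
""").fetchall())
print(rows)  # {('New not in Old', 'Carol'), ('Old not in New', 'Alice')}
```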
I could use the first version but I have many tables that I need to compare and I think that there has to be an easy way to add the source table column to the second query. I've tried several things and googled to no avail. I suspect that maybe I'm just not searching for the correct thing since I'm sure it's been answered before.
Maybe I'm going down the wrong trail and there is a better way to compare the databases? | Could you use the following setup to accomplish your goal?
```
SELECT 'New not in Old' Descriptor, *
FROM
(
SELECT * FROM [JAS001new].[dbo].[AR_CustomerAddresses]
EXCEPT
SELECT * FROM [JAS001].[dbo].[AR_CustomerAddresses]
) a
UNION
SELECT 'Old not in New' Descriptor, *
FROM
(
SELECT * FROM [JAS001].[dbo].[AR_CustomerAddresses]
EXCEPT
SELECT * FROM [JAS001new].[dbo].[AR_CustomerAddresses]
) b
``` | You can't add the table name there because `UNION`, `EXCEPT`, and `INTERSECT` all compare all columns. This means you can't differentiate between the sources by adding the table name to the query. A `GROUP BY` gives you control over which columns are considered when finding duplicates, so you can exclude the table name.
To help with the large number of tables you need to compare, you could write a SQL query against the metadata tables that hold table names and columns, and generate the SQL commands dynamically from those values. | SQL Server : compare two tables with UNION and Select * plus additional label column | [
"",
"sql",
"sql-server",
"union",
"intersect",
""
] |
Having given up on SQL Azure Data Sync for synchronizing data between two SQL Azure databases, how can I remove all the DataSync related objects (tables, triggers etc)?
E.g:
```
DataSync.<table>_dss_tracking
DataSync.schema_info_dss
DataSync.scope_config_dss
DataSync.scope_info_dss
```
And all the other objects? Is there a script that can be run? | There is an article on msgooroo.com:
<https://msgooroo.com/GoorooTHINK/Article/15141/Removing-SQL-Azure-Sync-objects-manually/5215>
Essentially the script is as follows:
```
-- Triggers
DECLARE @TRIGGERS_SQL VARCHAR(MAX) = (
SELECT
'DROP TRIGGER [' + SCHEMA_NAME(so.uid) + '].[' + [so].[name] + '] '
FROM sysobjects AS [so]
INNER JOIN sysobjects AS so2 ON so.parent_obj = so2.Id
WHERE [so].[type] = 'TR'
AND [so].name LIKE '%_dss_%_trigger'
FOR XML PATH ('')
)
PRINT @TRIGGERS_SQL
IF LEN(@TRIGGERS_SQL) > 0
BEGIN
EXEC (@TRIGGERS_SQL)
END
-- Tables
DECLARE @TABLES_SQL VARCHAR(MAX) = (
SELECT
'DROP TABLE [' + table_schema + '].[' + table_name + '] '
FROM
information_schema.tables where table_schema = 'DataSync'
FOR XML PATH ('')
)
PRINT @TABLES_SQL
IF LEN(@TABLES_SQL) > 0
BEGIN
EXEC (@TABLES_SQL)
END
-- Stored Procedures
DECLARE @PROC_SQL VARCHAR(MAX) = (
SELECT 'DROP PROCEDURE [' + routine_schema + '].[' + routine_name + '] '
FROM INFORMATION_SCHEMA.ROUTINES where ROUTINE_SCHEMA = 'DataSync' and routine_type = 'PROCEDURE'
FOR XML PATH ('')
)
PRINT @PROC_SQL
IF LEN(@PROC_SQL) > 0
BEGIN
EXEC (@PROC_SQL)
END
-- Types
DECLARE @TYPE_SQL VARCHAR(MAX) = (
SELECT
'DROP TYPE [' + SCHEMA_NAME(so.uid) + '].[' + [so].[name] + '] '
FROM systypes AS [so]
where [so].name LIKE '%_dss_bulktype%'
AND SCHEMA_NAME(so.uid) = 'Datasync'
FOR XML PATH ('')
)
PRINT @TYPE_SQL
IF LEN(@TYPE_SQL) > 0
BEGIN
EXEC (@TYPE_SQL)
END
-- Schema
DROP SCHEMA DataSync
``` | I recently ran into this issue and while the script in the accepted answer did remove the DataSync schema, it did not remove the dss or TaskHosting schemas nor the symmetric key objects that were preventing me from exporting my database.
I ended up needing to contact Azure support to get everything removed. Here is the script that they gave me:
```
declare @n char(1)
set @n = char(10)
declare @triggers nvarchar(max)
declare @procedures nvarchar(max)
declare @constraints nvarchar(max)
declare @views nvarchar(max)
declare @FKs nvarchar(max)
declare @tables nvarchar(max)
declare @udt nvarchar(max)
-- triggers
select @triggers = isnull( @triggers + @n, '' ) + 'drop trigger [' + schema_name(schema_id) + '].[' + name + ']'
from sys.objects
where type in ( 'TR') and name like '%_dss_%'
-- procedures
select @procedures = isnull( @procedures + @n, '' ) + 'drop procedure [' + schema_name(schema_id) + '].[' + name + ']'
from sys.procedures
where schema_name(schema_id) = 'dss' or schema_name(schema_id) = 'TaskHosting' or schema_name(schema_id) = 'DataSync'
-- check constraints
select @constraints = isnull( @constraints + @n, '' ) + 'alter table [' + schema_name(schema_id) + '].[' + object_name( parent_object_id ) + '] drop constraint [' + name + ']'
from sys.check_constraints
where schema_name(schema_id) = 'dss' or schema_name(schema_id) = 'TaskHosting' or schema_name(schema_id) = 'DataSync'
-- views
select @views = isnull( @views + @n, '' ) + 'drop view [' + schema_name(schema_id) + '].[' + name + ']'
from sys.views
where schema_name(schema_id) = 'dss' or schema_name(schema_id) = 'TaskHosting' or schema_name(schema_id) = 'DataSync'
-- foreign keys
select @FKs = isnull( @FKs + @n, '' ) + 'alter table [' + schema_name(schema_id) + '].[' + object_name( parent_object_id ) + '] drop constraint [' + name + ']'
from sys.foreign_keys
where schema_name(schema_id) = 'dss' or schema_name(schema_id) = 'TaskHosting' or schema_name(schema_id) = 'DataSync'
-- tables
select @tables = isnull( @tables + @n, '' ) + 'drop table [' + schema_name(schema_id) + '].[' + name + ']'
from sys.tables
where schema_name(schema_id) = 'dss' or schema_name(schema_id) = 'TaskHosting' or schema_name(schema_id) = 'DataSync'
-- user defined types
select @udt = isnull( @udt + @n, '' ) +
'drop type [' + schema_name(schema_id) + '].[' + name + ']'
from sys.types
where is_user_defined = 1
and schema_name(schema_id) = 'dss' or schema_name(schema_id) = 'TaskHosting' or schema_name(schema_id) = 'DataSync'
order by system_type_id desc
print @triggers
print @procedures
print @constraints
print @views
print @FKs
print @tables
print @udt
exec sp_executesql @triggers
exec sp_executesql @procedures
exec sp_executesql @constraints
exec sp_executesql @FKs
exec sp_executesql @views
exec sp_executesql @tables
exec sp_executesql @udt
GO
declare @n char(1)
set @n = char(10)
declare @functions nvarchar(max)
-- functions
select @functions = isnull( @functions + @n, '' ) + 'drop function [' + schema_name(schema_id) + '].[' + name + ']'
from sys.objects
where type in ( 'FN', 'IF', 'TF' )
and schema_name(schema_id) = 'dss' or schema_name(schema_id) = 'TaskHosting' or schema_name(schema_id) = 'DataSync'
print @functions
exec sp_executesql @functions
GO
--update
DROP SCHEMA IF EXISTS [dss]
GO
DROP SCHEMA IF EXISTS [TaskHosting]
GO
DROP SCHEMA IF EXISTS [DataSync]
GO
DROP USER IF EXISTS [##MS_SyncAccount##]
GO
DROP ROLE IF EXISTS [DataSync_admin]
GO
DROP ROLE IF EXISTS [DataSync_executor]
GO
DROP ROLE IF EXISTS [DataSync_reader]
GO
declare @n char(1)
set @n = char(10)
--symmetric_keys
declare @symmetric_keys nvarchar(max)
select @symmetric_keys = isnull( @symmetric_keys + @n, '' ) + 'drop symmetric key [' + name + ']'
from sys.symmetric_keys
where name like 'DataSyncEncryptionKey%'
print @symmetric_keys
exec sp_executesql @symmetric_keys
-- certificates
declare @certificates nvarchar(max)
select @certificates = isnull( @certificates + @n, '' ) + 'drop certificate [' + name + ']'
from sys.certificates
where name like 'DataSyncEncryptionCertificate%'
print @certificates
exec sp_executesql @certificates
GO
print 'Data Sync clean up finished'
``` | How to remove SQL Azure Data Sync objects manually | [
"",
"sql",
"t-sql",
"azure-sql-database",
""
] |
I know that in SQL, when we compare two NULL values, the result is always false. Hence, statements like
```
SELECT case when NULL = NULL then '1' else '0' end
```
will always print '0'. My question is how functions like `ISNULL` determine whether a value is null or not, because, as per my understanding (and as shown in the above query), a comparison of two null values is always FALSE. | Your initial assumption appears to be that `ISNULL` is an alias for existing functionality which can be implemented directly within SQL statements, in the same way that a SQL function can. You are then asking how that function works.
This is an incorrect starting point, hence the confusion. Instead, like similar commands such as `IN` and `LIKE`, `ISNULL` is parsed and run within the database engine itself; its actual implementation is most likely written in C.
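The engine-level distinction is observable in any SQL implementation: `=` between NULLs never yields true, while the dedicated `IS NULL` test does. A quick check with Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# '=' between NULLs yields NULL (which is not true); IS NULL is a dedicated test.
eq, is_null = conn.execute(
    "SELECT CASE WHEN NULL = NULL THEN 1 ELSE 0 END, "
    "CASE WHEN NULL IS NULL THEN 1 ELSE 0 END"
).fetchone()
print(eq, is_null)  # 0 1
```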
If you **really** want to look into the details of the implementation, you could take a look instead at MySQL - it's open source, so you may be able to search through the code to see how ISNULL is implemented there. They even provide a [guided tour](http://dev.mysql.com/doc/internals/en/guided-tour.html) of the code if required. | You need to set `set ansi_nulls off` and then check your result. Null can be thought of as an unknown value, and when you are comparing two unknown values you will get the result as false only. The comparison null = null is undefined.
```
set ansi_nulls off
SELECT case when NULL = NULL then '1' else '0' end
```
Result:
```
1
```
From [MSDN](http://msdn.microsoft.com/en-us/library/ms188048.aspx)
> When SET ANSI\_NULLS is OFF, the Equals (=) and Not Equal To (<>)
> comparison operators do not follow the ISO standard. A SELECT
> statement that uses WHERE column\_name = NULL returns the rows that
> have null values in column\_name. A SELECT statement that uses WHERE
> column\_name <> NULL returns the rows that have nonnull values in the
> column. Also, a SELECT statement that uses WHERE column\_name <>
> XYZ\_value returns all rows that are not XYZ\_value and that are not
> NULL.
As correctly pointed out by Damien in the comments, the behavior of NULL = NULL is **unknown or undefined.** | NULL comparison in SQL server 2008 | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
HEXTORAW is a function found in several RDBMS's like [Oracle](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions064.htm), and [DB2 on LUW](http://www-01.ibm.com/support/knowledgecenter/SSEPGG_10.5.0/com.ibm.db2.luw.sql.ref.doc/doc/r0059480.html). It takes a character, or integer input, and basically casts it to a HEX value.
```
HEXTORAW(1234) = x'1234'
```
What is the algorithm for this type conversion? What is happening in the code behind the scenes?
(This is motivated by wanting to create this function in an RDBMS that does not have the HEXTORAW function.) | In order to have a complete algorithm here:
Given a character string as an input parameter
1. Validate that the character string contains only the digits 0-9 or the letters A-F (valid hexadecimal characters).
2. Calculate the binary value by iterating over each character and concatenating the corresponding binary value:
```
binary hexadecimal
0000 0
0001 1
0010 2
0011 3
0100 4
0101 5
0110 6
0111 7
1000 8
1001 9
1010 a
1011 b
1100 c
1101 d
1110 e
1111 f
```
For example, *1234* would be:
```
0001 0010 0011 0100
```
3. Using that value, set the bits of a memory location.
4. Address it as a raw datatype.
5. Return it as the function return value.
The resulting raw datatype will have the hex representation equivalent to the original string.
Given the input '1234' the function would return the raw datatype which would be displayed as the hex value x'1234'. Binary data is typically represented in HEX to make it easier to read and reference.
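As an illustration only (not the engine's actual C implementation), the whole procedure fits in a few lines; a Python sketch with an invented `hex_to_raw` helper:

```python
def hex_to_raw(s: str) -> bytes:
    # Step 1: validate hexadecimal characters (digits 0-9, letters A-F).
    if not s or any(c not in "0123456789abcdefABCDEF" for c in s):
        raise ValueError("not a valid hex string")
    # Steps 2-5: pack each pair of hex digits into one raw byte.
    return bytes.fromhex(s)

raw = hex_to_raw("1234")
print(raw == bytes([0x12, 0x34]))  # True -- displayed in hex form as x'1234'
```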
(This builds on Mark J. Bobak's answer, so I want to give credit to him, but I also wanted to post a complete procedure.) | From this page:
<http://docs.oracle.com/cd/B19306_01/server.102/b14200/sql_elements001.htm#i46018>
> When Oracle automatically converts RAW or LONG RAW data to and from
> CHAR data, the binary data is represented in hexadecimal form, with
> one hexadecimal character representing every four bits of RAW data.
> For example, one byte of RAW data with bits 11001011 is displayed and
> entered as CB. | How does the HEXTORAW() function work? What is the algorithm? | [
"",
"sql",
"oracle",
"algorithm",
"rdbms",
"db2-luw",
""
] |
Need help figuring out how to do a cross-tabulated report within one query. There are 3-4 tables involved but the users table may not need to be included in the query since we just need a count.
I have put together a screenshot of the table schema and data as an example which can be seen below:

What I need it to return is a query result that looks like:

So I can make a report that looks like:

I've tried to do cursor loops as it's the only way I can do it with my basic knowledge, but it's way too slow.
One particular report I'm trying to generate contains 32 rows and 64 columns with about 70,000 answers, so it's all about the performance of getting it down to one query and fast as possible.
I understand this may depend on indexes and so on but if someone could help me figure out how I could get this done in 1 query (with multiple joins?), that would be awesome!
Thanks! | ```
SELECT MIN(ro.OptionText) RowOptionText, MIN(co.OptionText) ColOptionText, COUNT(ca.AnswerID) AnswerCount
FROM tblQuestions rq
CROSS JOIN tblQuestions cq
JOIN tblOptions ro ON rq.QuestionID = ro.QuestionID
JOIN tblOptions co ON cq.QuestionID = co.QuestionID
LEFT JOIN tblAnswers ra ON ra.OptionID = ro.OptionID
LEFT JOIN tblAnswers ca ON ca.OptionID = co.OptionID AND ca.UserID = ra.UserID
WHERE rq.questionText = 'Gender'
AND cq.questionText = 'How happy are you?'
GROUP BY ro.OptionID, co.OptionID
ORDER BY ro.OptionID, co.OptionID
```
This should be at least close to what you asked for. Turning this into a pivot will require dynamic SQL as SQL Server requires you to specify the actual value that will be pivoted into a column.
We cross join the questions and limit the results from each of those question references to the single question for the row values and column values respectively. Then we join the option values to the respective question reference. We use LEFT JOIN for the answers in case the user didn't respond to all of the questions. And we join the answers by UserID so that we match the row question and column question for each user. The MIN on the option text is because we grouped and ordered by OptionID to match your sequencing shown.
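Conceptually, the whole cross-tab is "count users per (row-answer, column-answer) pair, joined on UserID"; a plain-Python sketch with illustrative survey data (not the real schema):

```python
from collections import Counter

# Illustrative answers keyed by UserID.
gender = {"u1": "Male", "u2": "Female", "u3": "Male"}     # row question
happy = {"u1": "Happy", "u2": "Happy", "u3": "Unhappy"}   # column question

# "Join" the two answer sets on UserID, then count each cell.
cells = Counter((gender[u], happy[u]) for u in gender)
print(cells[("Male", "Happy")], cells[("Female", "Happy")])  # 1 1
```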
EDIT: Here's a [SQLFiddle](http://sqlfiddle.com/#!6/65a25/5 "SQL Fiddle")
For what it's worth, your query is complicated because you are using the Entity-Attribute-Value design pattern. Quite a few SQL Server experts consider that pattern to be problematic and to be avoided if possible. For instance see <https://www.simple-talk.com/sql/t-sql-programming/avoiding-the-eav-of-destruction/>.
EDIT 2: Since you accepted my answer, here's the dynamic SQL pivot solution :) [SQLFiddle](http://sqlfiddle.com/#!6/65a25/9 "SQL Fiddle")
```
DECLARE @SqlCmd NVARCHAR(MAX)
SELECT @SqlCmd = N'SELECT RowOptionText, ' + STUFF(
(SELECT ', ' + QUOTENAME(o.OptionID) + ' AS ' + QUOTENAME(o.OptionText)
FROM tblOptions o
WHERE o.QuestionID = cq.QuestionID
FOR XML PATH ('')), 1, 2, '') + ', RowTotal AS [Row Total]
FROM (
SELECT ro.OptionID RowOptionID, ro.OptionText RowOptionText, co.OptionID ColOptionID,
ca.UserID, COUNT(ca.UserID) OVER (PARTITION BY ra.OptionID) AS RowTotal
FROM tblOptions ro
JOIN tblOptions co ON ro.QuestionID = ' + CAST(rq.QuestionID AS VARCHAR(10)) +
' AND co.QuestionID = ' + CAST(cq.QuestionID AS VARCHAR(10)) + '
LEFT JOIN tblAnswers ra ON ra.OptionID = ro.OptionID
LEFT JOIN tblAnswers ca ON ca.OptionID = co.OptionID AND ca.UserID = ra.UserID
UNION ALL
SELECT 999999, ''Column Total'' RowOptionText, co.OptionID ColOptionID,
ca.UserID, COUNT(ca.UserID) OVER () AS RowTotal
FROM tblOptions ro
JOIN tblOptions co ON ro.QuestionID = ' + CAST(rq.QuestionID AS VARCHAR(10)) +
' AND co.QuestionID = ' + CAST(cq.QuestionID AS VARCHAR(10)) + '
LEFT JOIN tblAnswers ra ON ra.OptionID = ro.OptionID
LEFT JOIN tblAnswers ca ON ca.OptionID = co.OptionID AND ca.UserID = ra.UserID
) t
PIVOT (COUNT(UserID) FOR ColOptionID IN (' + STUFF(
(SELECT ', ' + QUOTENAME(o.OptionID)
FROM tblOptions o
WHERE o.QuestionID = cq.QuestionID
FOR XML PATH ('')), 1, 2, '') + ')) p
ORDER BY RowOptionID'
FROM tblQuestions rq
CROSS JOIN tblQuestions cq
WHERE rq.questionText = 'Gender'
AND cq.questionText = 'How happy are you?'
EXEC sp_executesql @SqlCmd
``` | I think I see the problem. I know you can't modify the schema, but you need a conceptual table for the crosstab information such as which questionID is the rowHeader and which is the colHeader. You can create it in an external data source and join with the existing source or simply hard-code the table values in your sql.
you need to have 2 instances of the question/option/answer relations, one for each rowHeader and colHeader for each crosstab. Those 2 relations are joined by the userID.
this version has your outer joins:
[sqlFiddle](http://sqlfiddle.com/#!6/53123/13)
this version doesn't have the crossTab table, just the row and col questionIDs hard-coded:
[sqlFiddleNoTbl](http://sqlfiddle.com/#!6/53123/14) | SQL Cross Tab Query | [
"",
"sql",
"sql-server",
"performance",
"t-sql",
"sql-server-2012",
""
] |
I have a table that looks like this
```
dbo.Box
ID SourceID OverrideQueueID
1 1 NULL
2 1 2
3 2 NULL
```
I need to figure out a way to say if the OverrideQueueID IS NULL then just do a join from dbo.Box to dbo.Source.ID, otherwise if OverrideQueueID IS NOT NULL join to the dbo.Queue.ID instead. Is this possible to do in one select since it is joining to different tables?
I am trying to do this without introducing a bunch of left joins if at all possible. | I hope a union will help you, like given below.
```
Select Col1,Col2
From dbo.Box B
Join dbo.Source S On S.Id = b.SourceID
Where B.OverrideQueueID is Null
Union
Select Col1,Col2
From dbo.Box B
Join dbo.Queue Q On Q.Id = b.OverrideQueueID
Where B.OverrideQueueID is Not Null
``` | One possible way:
```
select * from Box as a
join Box as b ON a.OverrideQueueID is null and a.ID = b.SourceID
join Queue as q ON a.OverrideQueueID is not null and a.ID = q.ID
``` | Conditional join to two different tables based on 2 columns in 1 table | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"sql-server-2012",
""
] |
Is there any difference between the time taken for `Select *` and `Select count(*)` for a table having no primary key and no other indexes in SQL Server 2008 R2?
I have tried `select count(*)` from a view and it has taken 00:05:41 for 410063922 records.
`Select *` from the view has already taken 10 minutes for the first 600,000 records and the query is still running, so it looks like it will take more than 1 hour.
Is there any way through which I can make this view faster without any change in the structure of the underlying tables?
Can I create indexed view for tables without indexes?
Can I use caching for the view inside sql server so if it is called again, it takes less time?
It's a view which contains 20 columns from one table only. The table does not have any indexes. The user is able to query the view. I am not sure whether the user does `select *` or selects some columns from the view with some where conditions. The only thing I want to do is propose some changes through which their querying of the view will return results faster. I am thinking of indexing and caching, but I am not sure whether they are possible on a view over a table having no indexes. Indexing is not possible here, as mentioned in one of the answers.
Can anyone put some light on caching within sql server 2008 R2? | The execution time difference is due to the fact that `SELECT *` will show the entire content of your table and the `SELECT COUNT(*)` will only count how many rows are present without showing them.
## **Answer about optimisation**
In my opinion you're taking the problem with the wrong angle. First of all it's important to define the real need of your clients, when the requirements are defined you'll certainly be able to improve your view in order to get better performance and avoid returning billions of data.
Optimisations can even be made on the table structure sometimes (we don't have any info about your current structure).
SQL Server will automatically use a system of caching in order to make the execution quicker but that will not solve your problem. | `count(*)` returns just a number and `select *` returns all the data. Imagine having to move all that data and the time it takes for your hundreds of thousands of records. Even if your table was indexed properly, running `select *` on your hundreds of thousands of records will still take a lot of time, even if less than before, and should never be needed in the first place.
> Can I create indexed view for tables without indexes?
No, you have to add indexes for indexed results
> Can I use caching for the view inside sql server so if it is called again, it takes less time?
Yes you can, but it's of no use for such a requirement. Why are you selecting so many records in the first place? You should never have to return millions or thousands of rows of complete data in any query.
**Edit**
In fact you are trying to get **billions** of rows without any where clause. This is bound to fail on any server that you can get hold of, so better stop there :)
**TL;DR**
Indexes do not matter for a `SELECT * FROM myTABLE` query because there is no condition and billions of rows. Unless you change your query, no optimization can help you | Why select count(*) is faster the select * even if table has no indexes? | [
"",
"sql",
"sql-server-2008",
"t-sql",
""
] |
I have multiple tables that I'm trying to join. I have added indexes to the tables to increase the speed, but it is still taking a long time to join. I suspect this is expected, but I was wondering if there is a more efficient way to create a join with multiple tables. I also set net_read_timeout to 150 since I was getting a Lost connection error. My query looks like:
```
set net_read_timeout = 150;
ALTER TABLE wspeed2 ADD INDEX (speed, roadtypeID); -- For all the tables
SELECT a.month,a.roadTypeID,a.speed,a.pid, a.or, b.pid, b.or, c.pid, c.or, d.pid, d.or,
e.pid, e.or, f.pid, f.or, g.pid, g.or, h.pid, h.or, i.pid, i.or, j.pid, j.or,
k.pid, k.or, l.pid, l.or, m.pid, m.or, n.pid, n.or, o.pid, o.or, p.pid, p.or,
q.pid, q.or, r.pid, r.or, s.pid, s.or, t.pid, t.or, u.pid, u.or, v.pid, v.or
FROM wspeed2 a, wspeed3 b, wspeed20 c, wspeed24 d, wspeed25 e, wspeed26 f, wspeed27 g, wspeed63 h, wspeed65 i, wspeed68 j,
wspeed69 k, wspeed70 l, wspeed71 m, wspeed72 n, wspeed73 o, wspeed74 p, wspeed75 q, wspeed76 r, wspeed77 s, wspeed78 t, wspeed81 u, wspeed82 v
WHERE a.speed = b.speed and b.speed = c.speed and c.speed = d.speed and d.speed = e.speed and e.speed = f.speed and f.speed = g.speed and g.speed = h.speed
and h.speed = i.speed and i.speed = j.speed and j.speed = k.speed and k.speed = l.speed and l.speed = m.speed and m.speed = n.speed and n.speed = o.speed
and o.speed = p.speed and p.speed = q.speed and q.speed = r.speed and r.speed = s.speed and s.speed = t.speed and t.speed = u.speed and u.speed = v.speed
GROUP BY a.speed;
``` | Although the query itself looks simple yet strange, here it is with joins. Note... Since you had a=b b=c c=d etc... it also means a=r a=s a=t etc... So, instead of having all rely on the alias in front of it, it MIGHT help the engine to have all of the other speed tables as directly joined to the root "a" level alias as I have below.
That said, you are joining 21 different tables, if one or more tables DO NOT have a record for the corresponding speed in the "a" table, they will NOT appear in the result set. If you want ALL regardless of having a match in the other table, change them all to LEFT JOIN instead.
Now, looking at your "a" table, you are based on a road type and month per speed. Is the speed column a unique column? I would think it is, but not positive. If any of the underlying tables being joined to have more than 1 record per same speed value, you will get a Cartesian result and could be choking your query.
Also, you had a group by, but no aggregate function columns such as a SUM(something), count(), avg(), min(), max(), so what is the point of the group by. You may instead want it ordered by something (preferably something with an index on the "a" table.
```
SELECT
a.month, a.roadTypeID, a.speed,
a.pid, a.or, b.pid, b.or, c.pid, c.or, d.pid, d.or,
e.pid, e.or, f.pid, f.or, g.pid, g.or, h.pid, h.or,
i.pid, i.or, j.pid, j.or, k.pid, k.or, l.pid, l.or,
m.pid, m.or, n.pid, n.or, o.pid, o.or, p.pid, p.or,
q.pid, q.or, r.pid, r.or, s.pid, s.or, t.pid, t.or,
u.pid, u.or, v.pid, v.or
FROM
wspeed2 a
JOIN wspeed3 b on a.speed = b.speed
JOIN wspeed20 c on a.speed = c.speed
JOIN wspeed24 d on a.speed = d.speed
JOIN wspeed25 e on a.speed = e.speed
JOIN wspeed26 f on a.speed = f.speed
JOIN wspeed27 g on a.speed = g.speed
JOIN wspeed63 h on a.speed = h.speed
JOIN wspeed65 i on a.speed = i.speed
JOIN wspeed68 j on a.speed = j.speed
JOIN wspeed69 k on a.speed = k.speed
JOIN wspeed70 l on a.speed = l.speed
JOIN wspeed71 m on a.speed = m.speed
JOIN wspeed72 n on a.speed = n.speed
JOIN wspeed73 o on a.speed = o.speed
JOIN wspeed74 p on a.speed = p.speed
JOIN wspeed75 q on a.speed = q.speed
JOIN wspeed76 r on a.speed = r.speed
JOIN wspeed77 s on a.speed = s.speed
JOIN wspeed78 t on a.speed = t.speed
JOIN wspeed81 u on a.speed = u.speed
JOIN wspeed82 v on a.speed = v.speed
```
If this still does not help, maybe adding MySQL keyword "STRAIGHT\_JOIN" might help, such as:
select STRAIGHT\_JOIN [rest of query] | If the `speed` column is not unique in those tables (and likely it's not; given that you said you added an index with `speed` as a leading column...
If there are multiple rows with the same value of `speed`, in those tables, then your query could be creating an *immense* intermediate set.
Let's do some simple math. If there are two rows in each table that have the same speed value, then the JOIN operations between a and b will create 4 rows for that speed. When we add the join to c, with another two rows, that's a total of 8 rows. When we get all 22 tables joined in, each with two rows, we're at 2^22 or over 4 million rows. And then that whole set of rows, with all the same value for `speed`, needs to be processed in a GROUP BY operation to eliminate duplicates.
(Of course, if any one of the tables doesn't have a row for that same `speed` value, then the query would produce zero rows for that `speed`.)
Personally, I'd ditch the old-school comma syntax for the JOIN operation, and use the JOIN keyword instead. And I'd move the join predicates from the WHERE clause to appropriate ON clause.
I'd also make one of the tables the "driver" for all of the joins, I'd use a reference to the same table in each of the joins. (We know that if `a=b` and `b=c`, then `a=c`. But I'm not sure about the MySQL optimizer, whether it makes any difference whether we specify `a=b and a=c` in place of `a=b and b=c`.
If there are relatively few number of distinct values of `speed` in each of the tables, but lots of rows with the same value, I'd consider using inline views to get a single row for each speed from each of the tables. MySQL can make use of a suitable index to optimize the GROUP BY operation on each individual table... I'd opt for a covering indexes... e.g.
```
ON wspeed20 (speed, pid, `or`)
ON wspeed24 (speed, pid, `or`)
```
Unfortunately, the derived table (the result of an inline view query) isn't indexed, so the JOIN operations could be expensive (for a lot of rows from each inline view query).
```
SELECT a.month,a.roadTypeID,a.speed,a.pid,a.or, b.pid, b.or, c.pid, c.or, d.pid, d.or,
e.pid, e.or, f.pid, f.or, g.pid, g.or, h.pid, h.or, i.pid, i.or, j.pid, j.or,
k.pid, k.or, l.pid, l.or, m.pid, m.or, n.pid, n.or, o.pid, o.or, p.pid, p.or,
q.pid, q.or, r.pid, r.or, s.pid, s.or, t.pid, t.or, u.pid, u.or, v.pid, v.or
FROM (SELECT speed, pid, `or` FROM wspeed2 GROUP BY speed) a
JOIN (SELECT speed, pid, `or` FROM wspeed3 GROUP BY speed) b ON b.speed = a.speed
JOIN (SELECT speed, pid, `or` FROM wspeed20 GROUP BY speed) c ON c.speed = a.speed
JOIN (SELECT speed, pid, `or` FROM wspeed24 GROUP BY speed) d ON d.speed = a.speed
JOIN (SELECT speed, pid, `or` FROM wspeed25 GROUP BY speed) e ON e.speed = a.speed
JOIN (SELECT speed, pid, `or` FROM wspeed26 GROUP BY speed) f ON f.speed = a.speed
JOIN (SELECT speed, pid, `or` FROM wspeed27 GROUP BY speed) g ON g.speed = a.speed
JOIN (SELECT speed, pid, `or` FROM wspeed63 GROUP BY speed) h ON h.speed = a.speed
JOIN (SELECT speed, pid, `or` FROM wspeed65 GROUP BY speed) i ON i.speed = a.speed
JOIN (SELECT speed, pid, `or` FROM wspeed68 GROUP BY speed) j ON j.speed = a.speed
JOIN (SELECT speed, pid, `or` FROM wspeed69 GROUP BY speed) k ON k.speed = a.speed
JOIN (SELECT speed, pid, `or` FROM wspeed70 GROUP BY speed) l ON l.speed = a.speed
JOIN (SELECT speed, pid, `or` FROM wspeed71 GROUP BY speed) m ON m.speed = a.speed
JOIN (SELECT speed, pid, `or` FROM wspeed72 GROUP BY speed) n ON n.speed = a.speed
JOIN (SELECT speed, pid, `or` FROM wspeed73 GROUP BY speed) o ON o.speed = a.speed
JOIN (SELECT speed, pid, `or` FROM wspeed74 GROUP BY speed) p ON p.speed = a.speed
JOIN (SELECT speed, pid, `or` FROM wspeed75 GROUP BY speed) q ON q.speed = a.speed
JOIN (SELECT speed, pid, `or` FROM wspeed76 GROUP BY speed) r ON r.speed = a.speed
JOIN (SELECT speed, pid, `or` FROM wspeed77 GROUP BY speed) s ON s.speed = a.speed
JOIN (SELECT speed, pid, `or` FROM wspeed78 GROUP BY speed) t ON t.speed = a.speed
JOIN (SELECT speed, pid, `or` FROM wspeed81 GROUP BY speed) u ON u.speed = a.speed
JOIN (SELECT speed, pid, `or` FROM wspeed82 GROUP BY speed) v ON v.speed = a.speed
```
That has the potential to cut down on the number of rows that need to be joined (again, if there is a large number of duplicate values for `speed`, and small number of distinct values for `speed`.) But, again, the JOIN operations between the derived tables will not have any available indexes. (At least, not in versions of MySQL up to 5.6.) | optimization for joining many tables in mysql | [
"",
"mysql",
"sql",
""
] |
I do quite a bit of data analysis and use SQL on a daily basis, but my queries are rather simple, usually pulling a lot of data which I thereafter manipulate in Excel, where I'm a lot more experienced.
This time though I'm trying to generate some live charts which take a single SQL query as input. I will now have to create complex tables without the aid of the Excel tools I'm so familiar with.
The problem is the following:
We have telesales agents that book appointments by answering inbound calls and making outbound calls. These will generate leads that might potentially result in a sale. The relevant tables and fields for this problem are these:
```
Contact Table
Agent
Sales Table
Price
OutboundCallDate
```
I want to know for each telesales agent their respective Total Sales amount in one column, and their outbound sales value in another.
The end result should look something like this:
```
+-------+------------+---------------+
| Agent | TotalSales | OutboundSales |
+-------+------------+---------------+
| Tom | 30145 | 0 |
| Sally | 16449 | 1000 |
| John | 10500 | 300 |
| Joe | 50710 | 0 |
+-------+------------+---------------+
```
With the below SQL I get the following result:
```
SELECT contact.agent, SUM(sales.price)
FROM contact, sales
WHERE contact.id = sales.id
GROUP BY contact.agent
```
```
+-------+------------+
| Agent | TotalSales |
+-------+------------+
| Tom | 30145 |
| Sally | 16449 |
| John | 10500 |
| Joe | 50710 |
+-------+------------+
```
I want to add the third column to this query result, in which the price is summed only for records where the `OutboundCallDate` field contains data. Something a bit like (where `sales.OutboundCallDate` is Not Null)
I hope this is clear enough. Let me know if that's not the case. | Use [CASE](http://msdn.microsoft.com/en-us/library/ms181765.aspx)
```
SELECT c.Agent,
SUM(s.price) AS TotalSales,
SUM(CASE
WHEN s.OutboundCallDate IS NOT NULL THEN s.price
ELSE 0
END) AS OutboundSales
FROM contact c, sales s
WHERE c.id = s.id
GROUP BY c.agent
``` | I think the code would look
```
SELECT contact.agent, SUM(sales.price)
FROM contact, sales
WHERE contact.id = sales.id AND SUM(WHERE sales.OutboundCallDate)
GROUP BY contact.agent
``` | Adding another column based on different criteria (SQL-server) | [
"",
"sql",
"sql-server",
""
] |
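The CASE-inside-SUM technique from the accepted answer can be sanity-checked end-to-end with Python's built-in `sqlite3` module. This is only a sketch: the table layout mirrors the question, but the sample figures are invented.

```python
import sqlite3

# In-memory fixture mirroring the question's schema (sample figures are invented).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE contact (id INTEGER, agent TEXT);
    CREATE TABLE sales (id INTEGER, price INTEGER, OutboundCallDate TEXT);
    INSERT INTO contact VALUES (1, 'Tom'), (2, 'Sally');
    INSERT INTO sales VALUES
        (1, 100, NULL),          -- inbound sale for Tom
        (2, 200, NULL),          -- inbound sale for Sally
        (2, 300, '2014-01-01');  -- outbound sale for Sally
""")

# Conditional aggregation: the CASE keeps only outbound rows in the second SUM.
rows = conn.execute("""
    SELECT c.agent,
           SUM(s.price) AS TotalSales,
           SUM(CASE WHEN s.OutboundCallDate IS NOT NULL
                    THEN s.price ELSE 0 END) AS OutboundSales
    FROM contact c
    JOIN sales s ON c.id = s.id
    GROUP BY c.agent
    ORDER BY c.agent
""").fetchall()
```

Sally's total (500) includes her outbound sale (300); Tom has no outbound rows, so his second column is 0.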
I have a table:
```
model1 model2 speed ram
----- ------ ------ ----
1121 1233 750 128
1232 1233 500 64
1232 1260 500 32
1233 1121 750 128
1233 1232 500 64
1260 1232 500 32
```
And I need to remove duplicate pairs of model1 and model2.
Here these rows are 4-6. Looking at the first row:
```
model1 1121 model2 1233
```
the duplicate is
`model1 1233 model2 1121`
etc.
Meant if model1+model2 = model2+model1 - that is a duplicate.
I've tried using such a query:
```
where cast(model1 as char(4))+cast(model2 as char(4))<>cast(model1 as char(4))+cast(model2 as char(4))
```
How can I do it? Thank you. | Untested, use at your own risk, but it could serve as a starting point
to see unique only:
```
select model1,model2 from T
EXCEPT
select model2,model1 from T where model2 > model1;
``` | If you want to filter out duplicates from your results, seems like it should be simply:
```
where cast(model1 as char(4))+cast(model2 as char(4))<>cast(model1 as char(4))+cast(model2 as char(4))
AND cast(model1 as char(4))+cast(model2 as char(4))<>cast(model2 as char(4))+cast(model1 as char(4))
```
If you want to SELECT the duplicates for deletion, then turn it around like this:
```
where cast(model1 as char(4))+cast(model2 as char(4))=cast(model1 as char(4))+cast(model2 as char(4))
OR cast(model1 as char(4))+cast(model2 as char(4))=cast(model2 as char(4))+cast(model1 as char(4))
``` | Duplicates by few rows | [
"",
"sql",
"sql-server",
"database",
"duplicates",
""
] |
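The accepted EXCEPT trick also works in SQLite, so it can be verified with a small script (a sketch using the question's sample pairs; the `speed`/`ram` columns are omitted for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE T (model1 INTEGER, model2 INTEGER);
    INSERT INTO T VALUES (1121,1233),(1232,1233),(1232,1260),
                         (1233,1121),(1233,1232),(1260,1232);
""")

# EXCEPT subtracts every swapped twin (restricted to model2 > model1),
# leaving exactly one representative per unordered pair.
rows = conn.execute("""
    SELECT model1, model2 FROM T
    EXCEPT
    SELECT model2, model1 FROM T WHERE model2 > model1
""").fetchall()
```

Each unordered pair survives once, oriented with the smaller model first.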
I'm trying to find rows with same Number but different Value, I arrived to this query:
```
DECLARE @Table1 AS TABLE (Number INT, Value int )
INSERT @Table1 VALUES (32, 3)
INSERT @Table1 VALUES (32, 5)
INSERT @Table1 VALUES (45, 5)
INSERT @Table1 VALUES (45, 5)
SELECT DISTINCT Number, COUNT(Value)
FROM @Table1 GROUP BY Number, Value
HAVING COUNT(Value) < 2
```
Is this the best approach? | Does your query return expected results? I usually see it written like this:
```
SELECT Number, COUNT(DISTINCT Value)
FROM @Table1 GROUP BY Number
HAVING COUNT(DISTINCT Value) < 2
```
Or, based on your comment below, you could do this:
```
SELECT Number
FROM @Table1 a
WHERE EXISTS(SELECT * FROM @Table1 b WHERE a.Number=b.Number and a.Value<>b.Value)
``` | ```
SELECT Number, COUNT(distinct Value) as value
FROM @Table1 GROUP BY Number
``` | Get Values from a column with different values in another | [
"",
"sql",
"sql-server",
""
] |
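The EXISTS variant from the accepted answer (same Number, different Value) can be checked against the question's exact sample data; a `DISTINCT` is added here so each Number is reported only once:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Table1 (Number INTEGER, Value INTEGER);
    INSERT INTO Table1 VALUES (32,3),(32,5),(45,5),(45,5);
""")

# Numbers that appear with more than one distinct Value.
rows = conn.execute("""
    SELECT DISTINCT a.Number
    FROM Table1 a
    WHERE EXISTS (SELECT 1 FROM Table1 b
                  WHERE a.Number = b.Number AND a.Value <> b.Value)
""").fetchall()
```

Only 32 qualifies: it occurs with Values 3 and 5, while 45 always carries the same Value.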
suppose I have a table name : **TEST**, that it has 2 columns **ID** and **CONTENT**, like this :
**TEST** TABLE :
**ID** ======= **CONTENT**
0 ======= hello world
now, how can I split **CONTENT** column from 0 index to 5 index to retrieve just **hello** word? (at here, **ID**=> 0)
(I just want to use sql, not other functions in others languages ); | ```
select substr(content, 0, 5) from test
``` | > The `substr(X,Y,Z)` function returns a substring of input string X that
> begins with the Y-th character and which is Z characters long. If Z is
> omitted then `substr(X,Y)` returns all characters.
The answer to your question is:
```
select substr(content, 0, 5) from test where id >= 0;
``` | how to use substring function sqlite android | [
"",
"android",
"sql",
"sqlite",
""
] |
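One subtlety worth checking: SQLite string positions are 1-based, so `substr(content, 1, 5)` is the call that returns the full word, while a start index of 0 silently consumes one character of the requested length. A quick demonstration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (id INTEGER, content TEXT)")
conn.execute("INSERT INTO test VALUES (0, 'hello world')")

# 1-based indexing: the first five characters.
first_five = conn.execute("SELECT substr(content, 1, 5) FROM test").fetchone()[0]
# Starting at 0 counts the nonexistent position 0 toward the length Z.
off_by_one = conn.execute("SELECT substr(content, 0, 5) FROM test").fetchone()[0]
```

So prefer `substr(content, 1, 5)` on Android/SQLite to get `hello`.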
I'm looking to essentially have a centralized table with a number of lookup tables that surround it. The central table is going to be used to store 'Users' and the lookup tables will be user attributes, like 'Religion'. The central table will store an Id, like ReligionId, and the lookup table would contain a list of religions.
Now, I've done a lot of digging into this and I've seen many people comment saying that a UserAttribute table might be the best way to go, essentially using an EAV pattern. I'm not looking to do this. I realize that my strategy will be join-heavy and that's why I ask this question here. I'm looking for a way to optimize those joins.
If the table has 100 lookup tables, how could it be optimized to be faster than just doing a massive 100 table inner join? Some ideas come to mind like using many smaller joins, sub-selects and views. I'm open to anything, including a combination of these strategies. Again, just to note, I'm not looking to do anything that's EAV-related. I need the lookup tables for other reasons and I like normalized data.
All suggestions considered!
Here's a visual look:

Edit: Is this insane? | Optimization techniques will likely depend on the size of the center table and intended query patterns. This is very similar to what you get in data warehousing star schemas, so approaches from that paradigm may help.
For one, ensure the size of each row is absolutely as small as possible. Disk space may be cheap, but disk throughput, memory, and CPU resources are potential bottlenecks. You want small rows so that the engine can read them quickly and cache as much as possible in memory.
A materialized/indexed view with the joins already performed allows the joins to essentially be precomputed. This may not work well if you are dealing with a center table that is being written to a lot or is very large.
Anything you can do to optimize a single join should be done for all 100. Appropriate indexes based on the selectivity of the column, etc.
Depending on what kind of queries you are performing, then other techniques from data warehousing or OLAP may apply. If you are doing lots of group by's then this is likely an area to look in to. Data warehousing techniques can be applied within SQL Server with no additional tooling.
Ask yourself why so many attributes are being queried and how they are being presented? For most analysis it is not necessary to join with lookup tables until the final step where you materialize a report, at which time you may only have grouped by on a subset of columns and thus only need some of the lookup tables.
Group By's generally should be able to group on the lookup Id's without needing the text/description from the lookup table so a join is not necessary. If your lookups have other information relevant to the query at hand then consider denormalizing it into the central table to eliminate the join and/or make that discreet value its own lookup, essentially splitting the existing lookup ID into another ID.
You could implement a master code table that combines the code tables into a single table with a CodeType column. This is not the same as EAV because you'd still have a column in the center table for each code type and a join for each, where as EAV is usually used to normalize out an arbitrary number of attributes. (**Note**: I personally hate master code tables.)
Lastly, consider normalizing the center table if you are not doing data warehousing.
Are there lots of null values in certain lookupId columns? Is the table sparse? This is an indication that you can pull some columns out into a 1 to 1/0 relationships to reduce the size of the center table. For example, a Person table that includes address information can have a PersonAddress table pulled out of it.
Partitioning the table may improve performance if there's a large number of rows and you can determine that certain rows, perhaps with a certain old datetime from couple years in the past, would rarely be queried.
Update: See **"Ask yourself why so many attributes are being queried and how they are being presented?"** above. Consider a user wants to know number of sales grouped by year, department, and product. You should have id's for each of these so you can just group by those IDs on the center table and in an outer query join lookups for only what columns remain. This ensures the aggregation doesn't need to pull in unnecessary information from lookups that aren't needed anyway.
If you aren't doing aggregations, then you probably aren't querying large numbers of records at a time, so join performance is less of a concern and should be taken care of with appropriate indexes.
If you're querying large numbers of records at a time pulling in all information, I'd look hard at the business case for this. No one sits down at their desk, opens a report with a million rows and 100 columns in it, and does anything meaningful with all of that data that couldn't be accomplished in a better way.
The only case for such a query would be a dump of all data intended for export to another system, in which case performance shouldn't be as much of a concern, as it can be scheduled overnight.
<http://en.wikipedia.org/wiki/OLAP_cube>
With that said I don't think this is the best way to do it if you have 100 properties. | 100 Join SQL query | [
"",
"sql",
"sql-server",
"normalization",
""
] |
I am trying to select the tables from my database where they start with **srt\_factor\_** (which in this case are only the first two).
My tables are:
```
srt_factor_20121119
srt_factor_20130430
srt_factorxyzk_20130813
```
Instead of returning only the first two tables, my query erroneously returns the last one (srt\_factorxyzk\_20130813) as well:
```
Select (TABLE_NAME) FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = 'BASE TABLE'
AND TABLE_NAME LIKE 'srt_factor_%';
```
Any ideas how to fix it? | `'_'` is a wildcard in SQL Server, representing one character. Try this instead:
```
Select (TABLE_NAME)
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE' AND
TABLE_NAME LIKE 'srt[_]factor[_]%';
``` | The problem is in the underscores which have a special meaning in SQL (it's like the `*` wildcard, but for exact one character). You have to escape them like this:
```
TABLE_NAME LIKE 'srt[_]factor[_]%'
```
See [MSDN](http://msdn.microsoft.com/en-us/library/ms179859(v=sql.110).aspx) on the use of wildcard characters as literals. | Wildcard query % fetches wrong data | [
"",
"sql",
"sql-server",
""
] |
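The underscore-wildcard behavior and the escaping fix are easy to reproduce. Note the hedge: SQLite (used here for the demo) supports only the `ESCAPE` clause, while SQL Server also accepts the `[_]` character classes shown in the answers above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tabs (name TEXT)")
conn.executemany("INSERT INTO tabs VALUES (?)",
                 [("srt_factor_20121119",), ("srt_factor_20130430",),
                  ("srt_factorxyzk_20130813",)])

# Unescaped '_' matches ANY single character, so this also matches
# srt_factorxyzk_20130813 (the 'x' satisfies the second '_').
loose = conn.execute(
    "SELECT name FROM tabs WHERE name LIKE 'srt_factor_%'").fetchall()

# ESCAPE makes the underscores literal, excluding the unwanted table.
strict = conn.execute(
    r"SELECT name FROM tabs WHERE name LIKE 'srt\_factor\_%' ESCAPE '\'").fetchall()
```

The unescaped pattern matches all three names; the escaped one matches only the two intended tables.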
I have a problem with DATEDIFF function.
My date format is `dd/mm/yyyy`.
`@START_DATE = 01/02/2004`
`@END_DATE = 29/01/2014`
The query `(DATEDIFF(DAY,@START_DATE,@END_DATE) / 365)` returns `10`, but the correct number of years is `9`. This happens because my query does not account for leap years.
What can I do to keep an accurate count?
Thanks. | I believe the following logic does what you want:
```
datediff(year,
@START_DATE - datepart(dayofyear, @START_DATE) + 1,
@END_DATE - datepart(dayofyear, @START_DATE) + 1
) as d2
```
Note: This treats that dates as `datetime`, because arithmetic is easier to express. You can also write this as:
```
datediff(year,
dateadd(day, - datepart(dayofyear, @START_DATE) + 1, @START_DATE),
dateadd(day, - datepart(dayofyear, @START_DATE) + 1, @END_DATE)
) as d2
```
The following query is a demonstration:
```
select datediff(year,
startdate - datepart(dayofyear, startdate) + 1,
enddate - datepart(dayofyear, startdate) + 1
) as d2
from (select cast('2004-02-01' as datetime) as startdate,
cast('2014-01-31' as datetime) as enddate
union all
select cast('2004-02-01' as datetime) as startdate,
cast('2014-02-01' as datetime) as enddate
) t
``` | Technically there would be 365.242 days in a year, when accounting for leap years so:
```
FLOOR(DATEDIFF(day, @STARTDATE, @ENDDATE) / 365.242)
```
Should be more correct.
Test:
```
SELECT FLOOR(DATEDIFF(day, '1980-01-16','2015-01-15') / 365.242),
FLOOR(DATEDIFF(day, '1980-01-16','2015-01-16') / 365.242)
```
ResultSet:
```
--------------------------------------- ---------------------------------------
34 35
```
Cheers! | (Transact-SQL) DATEDIFF and leap years | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
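The off-by-one can be reproduced outside SQL with plain date arithmetic; a sketch of both the naive `days / 365` calculation and an anniversary-based whole-year count:

```python
from datetime import date

start, end = date(2004, 2, 1), date(2014, 1, 29)

days = (end - start).days          # leap days are included in the day count
naive_years = days // 365          # what DATEDIFF(DAY, ...) / 365 computes
# Count an elapsed year only once the anniversary (month, day) has passed:
whole_years = end.year - start.year - ((end.month, end.day) < (start.month, start.day))
```

The three leap days in the range (2004, 2008, 2012) push `days // 365` to 10, while the anniversary-based count correctly gives 9, because Jan 29 2014 is still before the Feb 1 anniversary.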
All,
This is my first post on Stackoverflow, so go easy...
I am using SQL Server 2008.
I am fairly new to writing SQL queries, and I have a problem that I thought was pretty simple, but I've been fighting for 2 days. I have a set of data that looks like this:
```
UserId Duration(Seconds) Month
1 45 January
1 90 January
1 50 February
1 42 February
2 80 January
2 110 February
3 45 January
3 62 January
3 56 January
3 60 February
```
Now, what I want is to write a single query that gives me the average for a particular user and compares it against all user's average for that month. So the resulting dataset after a query for user #1 would look like this:
```
UserId Duration(seconds) OrganizationDuration(Seconds) Month
1 67.5 63 January
1 46 65.5 February
```
I've been batting around different subqueries and group by scenarios and nothing ever seems to work. Lately, I've been trying OVER and PARTITION BY, but with no success there either. My latest query looks like this:
```
select Userid,
AVG(duration) OVER () as OrgAverage,
AVG(duration) as UserAverage,
DATENAME(mm,MONTH(StartDate)) as Month
from table.name
where YEAR(StartDate)=2014
AND userid=119
GROUP BY MONTH(StartDate), UserId
```
This query bombs out with a "'Duration' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause" error.
Please keep in mind I'm dealing with a very large amount of data. I think I can make it work with CASE statements, but I'm looking for a cleaner, more efficient way to write the query if possible.
Thank you! | You are joining two queries together here:
* Per-User average per month
* All Organisation average per month
If you are only going to return data for one user at a time then an inline select may give you joy:
```
SELECT AVG(a.duration) AS UserAvergage,
(SELECT AVG(b.Duration) FROM tbl b WHERE MONTH(b.StartDate) = MONTH(a.StartDate)) AS OrgAverage
...
FROM tbl a
WHERE userid = 119
GROUP BY MONTH(StartDate), UserId
```
Note - using comparison on MONTH may be slow - you may be better off having a CTE (Common Table Expression) | missing partition clause in Average function
```
OVER ( Partition by MONTH(StartDate))
``` | SQL Aggregates OVER and PARTITION | [
"",
"sql",
"sql-server",
"window-functions",
""
] |
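The accepted inline-select approach reproduces the question's expected numbers exactly on its own sample data; here is a runnable check with SQLite (table name `calls` is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE calls (UserId INTEGER, Duration INTEGER, Month TEXT)")
conn.executemany("INSERT INTO calls VALUES (?,?,?)", [
    (1, 45, 'January'), (1, 90, 'January'), (1, 50, 'February'), (1, 42, 'February'),
    (2, 80, 'January'), (2, 110, 'February'),
    (3, 45, 'January'), (3, 62, 'January'), (3, 56, 'January'), (3, 60, 'February'),
])

# Per-user average next to the all-users average via a correlated subquery.
rows = conn.execute("""
    SELECT a.Month,
           AVG(a.Duration) AS UserAverage,
           (SELECT AVG(b.Duration) FROM calls b WHERE b.Month = a.Month) AS OrgAverage
    FROM calls a
    WHERE a.UserId = 1
    GROUP BY a.Month
    ORDER BY a.Month
""").fetchall()
```

User 1 averages 67.5 in January against the organization's 63, and 46 in February against 65.5 — matching the expected result set in the question.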
I've got a small problem I ran into. I want o create a form that ask for a procedure name and returns the params of the stored procedure and the columns that the procedure returns.
For example
```
CREATE PROCEDURE abc
@Param1 int,
@Param2 varchar(1)
as
SELECT *
FROM TableA
WHERE Param1 = @Param1 and Param2 = @Param2
GO
```
I know I can get the params of a procedure from SQL Server using this query
```
select
'Parameter_name' = name,
'Type' = type_name(user_type_id),
'Length' = max_length,
'Prec' = case when type_name(system_type_id) = 'uniqueidentifier'
then precision
else OdbcPrec(system_type_id, max_length, precision) end,
'Scale' = OdbcScale(system_type_id, scale),
'Param_order' = parameter_id,
'Collation' = convert(sysname,
case when system_type_id in (35, 99, 167, 175, 231, 239)
then ServerProperty('collation') end)
from sys.parameters where object_id = object_id('[dbo].[abc]')
```
and from what I've read on the web, there is no easy way to get the returned datatypes and their names from a stored procedure as the result may be different based on the input params.
I would love to know if there is a way to get the returned datatypes and their names from SQL, but I wouldn't mind if anyone can tell me, or guide me to a place where I can find a solution to get the returned datatypes and their names from Delphi. Something like when you click on a `TADOStoredProc` and click on AddAllFields option.
Hope I made myself understood.
Thank you | You can use **sys.dm\_exec\_describe\_first\_result\_set**.
See the example B *"Returning information about a procedure"*:
```
USE AdventureWorks2012;
GO
CREATE PROC Production.TestProc
AS
SELECT Name, ProductID, Color FROM Production.Product ;
SELECT Name, SafetyStockLevel, SellStartDate FROM Production.Product ;
GO
SELECT * FROM sys.dm_exec_describe_first_result_set
('Production.TestProc', NULL, 0) ;
```
<http://msdn.microsoft.com/en-us/library/ff878258.aspx#code-snippet-2>
It uses the same algorithm as **sp\_describe\_first\_result\_set**, so you can find some remarks about this at:
<http://msdn.microsoft.com/en-us/library/ff878602.aspx> | This is by no means a complete answer, but I wanted to show how [SET FMTONLY](http://msdn.microsoft.com/en-us/library/ms173839.aspx) is used. It may solve part of the OP's request.
```
CREATE PROCEDURE abc
@Param1 INT
AS
SELECT TOP(@Param1) *
FROM master.sys.objects
GO
--When set OFF, 3 rows are returned.
SET FMTONLY OFF
EXEC abc @Param1 = 3
--When set ON, no rows are returned, but the columns and column names of the result set are returned.
SET FMTONLY ON
EXEC abc @Param1 = 3
```
NOTE: if the stored proc has branching and may return different results sets, this option is not going to work. Again, this is not a complete answer, just a demo for the OP. | get returned fields datatypes from a stored procedure | [
"",
"sql",
"sql-server",
"delphi",
"stored-procedures",
""
] |
I am attempting to run a SQL query that will filter either to a value passed in a parameter, or if the value in the parameter is 'Unspecified' then all values should be returned.
The parameter is `@station` which may contain the values `'Unspecified', 'Station1', 'Station2', 'Station3'` etc.
If the parameter `@station` is equal to `Station1` for example, I want the returned data to be filtered to only `'Station1'`, however if the parameter `@station` is equal to `'Unspecified'` then the returned data should include values for `station1, station2` and all stations.
I am using other filters as part of this query, here is my code below and it is the last line that doesn't work:
```
where tblRTC.CODE IN ('C1', 'C2', 'C3')
and CAST(tblDET.START_DATE AS Date) >= CAST (@start_date AS Date)
and CAST(tblDET.END_DATE AS Date) <= CAST (@end_date AS Date)
and IIF(@station = 'Unspecified',tblSTACON.DESCRIPTION <>@station,tblSTACON.DESCRIPTION = @station)
```
I am writing this in a 3rd party application and have no control over the parameters passed.
Any help would be much appreciated. | ```
where tblRTC.CODE IN ('C1', 'C2', 'C3')
and CAST(tblDET.START_DATE AS Date) >= CAST (@start_date AS Date)
and CAST(tblDET.END_DATE AS Date) <= CAST (@end_date AS Date)
and tblSTACON.DESCRIPTION = (case when @station not in( 'Unspecified') then @station else tblSTACON.DESCRIPTION end)
``` | hope this helps
```
where tblRTC.CODE IN ('C1', 'C2', 'C3')
and CAST(tblDET.START_DATE AS Date) >= CAST (@start_date AS Date)
and CAST(tblDET.END_DATE AS Date) <= CAST (@end_date AS Date)
and tblSTACON.DESCRIPTION = (case when @station <> 'Unspecified' then @station else tblSTACON.DESCRIPTION end)
``` | Conditional filtering based on a specified value or unspecified | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a table with the current values:

What I want is to get the `projekt` value where kod is `Greger` and `Åutte`.
As of now, my attempt returns nothing. Since the condition spans over two rows, I have no idea how to get the `projekt` I need.
The wanted result would look like this:

I might be a little rude asking for magic here, but I'm currently out of magic points.
How do I get a column value where the condition spans over two rows? | Use this:
```
SELECT a.projekt
FROM myTable a
WHERE EXISTS (
SELECT b.projekt
FROM myTable b
WHERE b.projekt=a.projekt AND kod='Greger'
) AND EXISTS (
SELECT c.projekt
FROM myTable c
    WHERE c.projekt=a.projekt AND kod='Åutte'
)
```
Another way is to COUNT the matching records, but you have to be sure you count DISTINCT kod values:
```
SELECT projekt
FROM myTable
WHERE kod IN ('Greger','Åutte')
GROUP BY projekt
HAVING COUNT(DISTINCT kod)=2
```
This can be simplified if you know that (projekt, kod) is a UNIQUE or PRIMARY key of myTable:
```
SELECT projekt
FROM myTable
WHERE kod IN ('Greger','Åutte')
GROUP BY projekt
HAVING COUNT(*)=2
```
**In all cases, best ensure you have an index on (projekt, kod) if this table is big ;)** | How about something similar?
```
select
distinct projekt
from myTable
where kod in ('Greger','Åutte')
group by projekt
having count(projekt) > 1
``` | How do I get a column value where the condition spans over two rows? | [
"",
"sql",
""
] |
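The `HAVING COUNT(DISTINCT kod) = 2` form of relational division from the accepted answer can be verified with a tiny fixture (the projekt names `P1`/`P2` are made up, since the question's data was only shown in a screenshot):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE projekts (projekt TEXT, kod TEXT)")
conn.executemany("INSERT INTO projekts VALUES (?,?)",
                 [('P1', 'Greger'), ('P1', 'Åutte'), ('P2', 'Greger')])

# A projekt qualifies only if BOTH kod values are present for it.
rows = conn.execute("""
    SELECT projekt
    FROM projekts
    WHERE kod IN ('Greger', 'Åutte')
    GROUP BY projekt
    HAVING COUNT(DISTINCT kod) = 2
""").fetchall()
```

`P2` is filtered out because only one of the two required kod values exists for it.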
I have a data set with a date stamp and an ID number as shown:
```
Date_Stamp | ID
2013-07-17 | ID1
2013-07-17 | ID1
2013-08-19 | ID1
2013-08-19 | ID2
```
From this data, I need to populate a table that has the following fields: Date, ID, First\_attempt, Second\_Attempt, Third\_Attempt, Total\_Attempts.
With that data, the file would look like this:
```
Date | ID | First_attempt | Second_Attempt | Third_Attempt | Total_Attempts
2013-07-17 | ID1 | 1 | 1 | 0 | 2
2013-08-19 | ID1 | 0 | 0 | 1 | 1
2013-08-19 | ID2 | 1 | 0 | 0 | 1
```
I have been able to get the first attempts with this code:
```
Select min(date_stamp) as date_stamp,
ID,
1 as First_attempt,
0 as Second_Attempt,
0 as Third_Attempt,
0 as Total_Attempts
from Table
group by ID
```
and the Total Attempts (per day) with this code:
```
Select date_stamp as date_stamp,
ID,
0 as First_attempt,
0 as Second_Attempt,
0 as Third_Attempt,
count(ID) as Total_Attempts
from Table
group by date_stamp, ID
```
Can someone help with scripting the second and third attempts?? | ```
;WITH TEMPTABLE AS(
SELECT DATE_STAMP,
ID,
ROW_NUMBER() OVER( PARTITION BY ID ORDER BY DATE_STAMP ) AS ROWNUMBER,
Count(ID) OVER(PARTITION BY Date_Stamp, ID) as countID
FROM #temp)
SELECT DATE_STAMP,
ID,
MAX(CASE WHEN ROWNUMBER = 1 THEN 1 ELSE 0 END )AS FIRST_ATTEMPT,
MAX(CASE WHEN ROWNUMBER = 2 THEN 1 ELSE 0 END) AS SECOND_ATTEMPT,
MAX(CASE WHEN ROWNUMBER = 3 THEN 1 ELSE 0 END) AS THIRD_ATTEMPT,
MAX(countID) Total_Attempts
FROM TEMPTABLE
GROUP BY DATE_STAMP,ID
``` | Try this:
```
;WITH CTE AS
(
SELECT *,
RN = ROW_NUMBER() OVER(PARTITION BY Date_Stamp, ID ORDER BY Date_Stamp),
Total = COUNT(*) OVER(PARTITION BY Date_Stamp, ID)
FROM YourTable
)
SELECT Date_Stamp AS [Date],
ID,
MAX(CASE WHEN RN = 1 THEN 1 ELSE 0 END) First_attempt,
MAX(CASE WHEN RN = 2 THEN 1 ELSE 0 END) Second_attempt,
MAX(CASE WHEN RN = 3 THEN 1 ELSE 0 END) Third_attempt,
MAX(Total) Total_Attempts
FROM CTE
GROUP BY Date_Stamp,
ID;
```
[**Here is**](http://sqlfiddle.com/#!3/ee3b6/3) a sqlfiddle with a demo.
And the results are:
```
╔════════════╦═════╦═══════════════╦════════════════╦═══════════════╦════════════════╗
║    DATE    ║ ID  ║ FIRST_ATTEMPT ║ SECOND_ATTEMPT ║ THIRD_ATTEMPT ║ TOTAL_ATTEMPTS ║
╠════════════╬═════╬═══════════════╬════════════════╬═══════════════╬════════════════╣
║ 2013-07-17 ║ ID1 ║             1 ║              1 ║             0 ║              2 ║
║ 2013-08-19 ║ ID1 ║             1 ║              0 ║             0 ║              1 ║
║ 2013-08-19 ║ ID2 ║             1 ║              0 ║             0 ║              1 ║
╚════════════╩═════╩═══════════════╩════════════════╩═══════════════╩════════════════╝
``` | SQL - Creating counts for 1st, 2nd, 3rd occasion in a list | [
"",
"sql",
"sql-server",
"count",
"counter",
""
] |
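Both answers rely on `ROW_NUMBER() OVER (PARTITION BY ... ORDER BY ...)`. The same attempt-numbering can be emulated portably with a correlated COUNT (using SQLite's `rowid` to break ties between identical timestamps), which reproduces the question's expected output on any SQLite build, including ones without window functions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE attempts (Date_Stamp TEXT, ID TEXT)")
conn.executemany("INSERT INTO attempts VALUES (?,?)", [
    ('2013-07-17', 'ID1'), ('2013-07-17', 'ID1'),
    ('2013-08-19', 'ID1'), ('2013-08-19', 'ID2'),
])

# rn emulates ROW_NUMBER() OVER (PARTITION BY ID ORDER BY Date_Stamp):
# count the rows for the same ID that sort at or before the current row.
rows = conn.execute("""
    SELECT Date_Stamp, ID,
           SUM(CASE WHEN rn = 1 THEN 1 ELSE 0 END) AS First_attempt,
           SUM(CASE WHEN rn = 2 THEN 1 ELSE 0 END) AS Second_Attempt,
           SUM(CASE WHEN rn = 3 THEN 1 ELSE 0 END) AS Third_Attempt,
           COUNT(*) AS Total_Attempts
    FROM (SELECT Date_Stamp, ID,
                 (SELECT COUNT(*) FROM attempts b
                  WHERE b.ID = a.ID
                    AND (b.Date_Stamp < a.Date_Stamp
                         OR (b.Date_Stamp = a.Date_Stamp AND b.rowid <= a.rowid))) AS rn
          FROM attempts a)
    GROUP BY Date_Stamp, ID
    ORDER BY Date_Stamp, ID
""").fetchall()
```

ID1's August row is correctly counted as the third attempt overall, because the partition is per ID rather than per (date, ID).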
```
DECLARE @a int;
DECLARE @b int;
SET @a = 9;
SET @b = 2;
SELECT CEILING (@a/@b);
```
It is returning as 4 instead of 5. Why?
**Edit:** I would like to get next smallest integer if the quotient is not whole number. | Try:
```
SELECT CEILING (@a/CAST(@b AS float))
```
And consider `NULLIF(@b,0)` too, to avoid a Division By Zero error. | After dividing 9 by 2 a decimal fraction is *Truncated* to its integer part - 4, not *Rounded* to 5. Try:
```
SELECT 9/2
```
Result is 4. Then `CEILING(4) = 4`.
To get next integer declare variables as data types that can handle decimal part: `NUMERIC`,`FLOAT`, `REAL`. | CEILING returns FLOOR result - SQL SERVER 2008 R2 | [
"",
"sql",
"database",
"sql-server-2008",
"floor",
"ceil",
""
] |
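The truncation-before-CEILING behavior mirrors integer division in most languages; a sketch of the three options (truncate, cast to float then ceil, or a pure-integer ceiling trick):

```python
import math

a, b = 9, 2

truncated = a // b            # what SQL Server's int/int division produces
via_float = math.ceil(a / b)  # equivalent of casting one operand to float first
via_int = -(-a // b)          # pure-integer ceiling, no floating-point round-off
```

The integer trick `-(-a // b)` avoids float precision issues entirely and is a common idiom for "next smallest integer" division.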
Can someone help me with a SQL query for this situation.
I have a table with a surrogate key and a business key. For example this table:
```
ID ------ contact_id
1 -------- 100
2 -------- 100
3 -------- 200
4 -------- 100
5 -------- 100
```
Does anyone know a query to get the next biggest ID of a contact\_id. I tried something with joins but I don't know the next biggest id part.
Edit: I don't mean for a specific contact\_id; more generally, something that could be used for all.
As suggested I should've put the expected outcome in the post. The problem is already solved by Anonxen by now but for future reference/other people with similar questions:
With "next biggest value" I mean the next int in ID per contact\_id. I don't mean the highest of all id numbers of a contact\_id, but only the next id that is linked to that contact. So that in the example it would skip the id's that are not related to the contact\_id but it should be the following number.
I am at id 1 and expect the query to look for the next id, so that would be 2. But if I would be at id 2 I want to go to id 4. | This should do it if I understand you properly :)
```
---Supply the ID and the Contact_ID
DECLARE @ID INT = 1
,@C_ID INT = 100;
WITH OrdCte AS(
SELECT ID,Contact_ID,ROW_NUMBER() OVER(PARTITION BY Contact_ID ORDER BY ID) ord
FROM #sample
)
SELECT B.ID,B.Contact_ID
FROM OrdCte A
JOIN OrdCte B
ON A.Contact_ID = B.Contact_ID
WHERE A.Contact_ID = @C_ID AND A.ID = @ID AND A.ord+1 = B.ord;
``` | I am not sure what you expect the **next big** value to be for `contact_id` 200, so in this case it will use the same `contact_id` as the **next big** one.
```
Select Main.*,ISNULL(NextB.contact_id,Main.contact_id) as NextBig
From TableName Main
Outer APPLY (Select contact_id
From TableName T
Where T.Id>Main.Id and T.contact_id >Main.contact_id) As NextB
``` | How to get the next biggest int value in SQL? | [
"",
"sql",
"sql-server",
""
] |
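The accepted `ROW_NUMBER()` idea can be exercised end to end with SQLite (window functions need SQLite 3.25+, an assumption about the environment); starting from id 2 for contact 100, the next linked row is indeed id 4:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE sample (id INTEGER, contact_id INTEGER)")
cur.executemany("INSERT INTO sample VALUES (?, ?)",
                [(1, 100), (2, 100), (3, 200), (4, 100), (5, 100)])

# Number the rows per contact, then look up the row whose ordinal is one higher.
next_row = cur.execute("""
    WITH ord AS (
        SELECT id, contact_id,
               ROW_NUMBER() OVER (PARTITION BY contact_id ORDER BY id) AS ord
        FROM sample
    )
    SELECT b.id, b.contact_id
    FROM ord a JOIN ord b
      ON a.contact_id = b.contact_id AND a.ord + 1 = b.ord
    WHERE a.contact_id = ? AND a.id = ?
""", (100, 2)).fetchone()
print(next_row)  # (4, 100)
```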
I'm trying to select all tables that have column OWNERID, L1ID, or L1, but do not have any of the following columns: HID, DHID, SPHID, LINKID, NODEID, or SITEID.
The following is my SQL query. It's taking a long time. Can you give me a better solution or help me optimize this query?
```
SELECT
TABLE_NAME
FROM
USER_TABLES
WHERE
TABLE_NAME NOT IN (
SELECT TABLE_NAME
FROM USER_TAB_COLUMNS
WHERE COLUMN_NAME IN('HID', 'DHID', 'SPHID', 'LINKID', 'NODEID', 'SITEID')
GROUP BY TABLE_NAME
HAVING COUNT(*) = 1)
AND TABLE_NAME IN (
SELECT TABLE_NAME
FROM USER_TAB_COLUMNS
WHERE COLUMN_NAME IN ('OWNERID', 'L1ID', 'L1')
GROUP BY TABLE_NAME
HAVING COUNT(*) = 1)
```
Please don't mark this as a duplicate. I have already tried other solutions and they don't seem to work out for me. | Try NOT EXISTS / EXISTS instead. As long as you don't need to select values from the secondary tables, this is much faster than IN and especially NOT IN. The basic reason is that as soon as there is a match, the evaluation is done.
```
SELECT
TABLE_NAME
FROM USER_TABLES
WHERE NOT EXISTS
(select 1
FROM USER_TAB_COLUMNS
WHERE COLUMN_NAME IN('HID', 'DHID', 'SPHID', 'LINKID', 'NODEID', 'SITEID')
AND USER_TABLES.TABLE_NAME = USER_TAB_COLUMNS.TABLE_NAME
)
and EXISTS (
SELECT 1
FROM USER_TAB_COLUMNS
WHERE COLUMN_NAME IN ('OWNERID', 'L1ID', 'L1')
AND USER_TABLES.TABLE_NAME = USER_TAB_COLUMNS.TABLE_NAME
)
```
However, I am not totally sure what your COUNT(\*) = 1 means. Only one of OWNERID or L1ID or L1, but not say L1ID and L1 in the same table?
My code, as written, will work if you only care whether one or more of your condition columns are present, i.e. how I understood your question. If you need exactly one of them to be present, a different query is needed. | ```
With exclude AS (
select TABLE_NAME
FROM USER_TAB_COLUMNS
WHERE COLUMN_NAME IN('HID', 'DHID', 'SPHID', 'LINKID', 'NODEID', 'SITEID')
GROUP BY TABLE_NAME
)
, preInclude AS (
SELECT TABLE_NAME
FROM USER_TAB_COLUMNS
WHERE COLUMN_NAME IN ('OWNERID', 'L1ID', 'L1')
GROUP BY TABLE_NAME
)
, Include AS (
SELECT preInclude.TABLE_NAME
FROM preInclude
LEFT JOIN exclude ON preInclude.TABLE_NAME = exclude.TABLE_NAME
WHERE exclude.TABLE_NAME is NULL
)
SELECT *
FROM Include
``` | SQL query optimization for selecting specific columns from a table | [
"",
"sql",
"oracle",
"optimization",
""
] |
For example ,
I use the following function to convert `rows` into `json` in `PostgreSQL 9.2`
```
select row_to_json(row(productid, product)) from gtab04;
```
and this will returns below results
```
row_to_json
---------------
{"f1":3029,"f2":"DIBIZIDE M TAB"}
{"f1":3026,"f2":"MELMET 1000 SR TAB"}
{"f1":2715,"f2":"GLUCORED FORTE"}
{"f1":3377,"f2":"AZINDICA 500 TAB"}
```
* unfortunately it loses the field names and replaces them with f1, f2, f3, etc.
* How can I get the actual field names instead? | To work around this, we must either create a row type and cast the row to that type, or use a subquery. A subquery will typically be easier.
```
select row_to_json(t)
from (
select productid, product from gtab04
) t
``` | If one wants to prevent a sub-query, `json_build_object()` might be a solution. It does not map the column name, but let's your set the JSON keys explicitly.
---
**Query**
```
SELECT json_build_object('productid', productid, 'product', product) FROM gtab04;
json_build_object
------------------
{"productid":3029,"product":"DIBIZIDE M TAB"}
{"productid":3026,"product":"MELMET 1000 SR TAB"}
{"productid":2715,"product":"GLUCORED FORTE"}
{"productid":3377,"product":"AZINDICA 500 TAB"}
```
---
[View on DB Fiddle](https://www.db-fiddle.com/) | Select query in row_to_json function | [
"",
"sql",
"json",
"postgresql",
"postgresql-9.2",
"composite-types",
""
] |
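The explicit key-naming idea also exists in SQLite's JSON1 extension as `json_object()`, which allows a quick local illustration (assuming a SQLite build with JSON1 compiled in, as most Python distributions ship):

```python
import json
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE gtab04 (productid INTEGER, product TEXT)")
cur.execute("INSERT INTO gtab04 VALUES (3029, 'DIBIZIDE M TAB')")

# Name the JSON keys explicitly instead of accepting positional f1/f2 labels.
doc = cur.execute(
    "SELECT json_object('productid', productid, 'product', product) FROM gtab04"
).fetchone()[0]
print(doc)
```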
I have a string of values like :
```
$cat = "1,5,6,8,9";
```
Now I would like to use that variable in a WHERE clause. How do I do it?
```
"select category from test where category...?"
```
I am using the code below, but I get null from the number\_of\_result variable:
```
$array_of_cat = explode(",",$cat);
if($number_of_result == 0){
$mresult = mysql_query("select * from books where
GoodName like '%$title%' and GroupCode in ($array_of_cat) ");
$number_of_result = mysql_num_rows($mresult);
}
``` | Change your SQL statement to this one. Use `implode()` to pass array elements in `IN` clause
```
$mresult = mysql_query("
SELECT * FROM books
WHERE GoodName LIKE '%$title%' AND GroupCode IN (" . implode(",", $array_of_cat) . ")");
``` | You do not need to explode the variable - doing so results in an array that will result in the wrong value (the string "Array") when interpolated into the SQL string.
Despite the fact that this is unsafe ([SQL injection prone](https://stackoverflow.com/questions/60174/how-can-i-prevent-sql-injection-in-php)) you could do:
```
if($number_of_result == 0){
$mresult = mysql_query("select * from books where
GoodName like '%$title%' and GroupCode in ($cat)");
$number_of_result = mysql_num_rows($mresult);
}
```
I strongly suggest you use the [mysql\_real\_escape\_string](http://php.net/mysql_real_escape_string) function to prepare your variables. | using a string of values in WHERE clause - mysql | [
"",
"mysql",
"sql",
""
] |
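Combining the implode idea with the injection warning, one bound placeholder can be generated per value instead of splicing the string into the SQL; a sketch using Python's sqlite3 (the same pattern applies to PHP's PDO or mysqli), with hypothetical book rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE books (GoodName TEXT, GroupCode INTEGER)")
cur.executemany("INSERT INTO books VALUES (?, ?)",
                [("SQL Primer", 1), ("PHP Notes", 5), ("C Manual", 7)])

codes = [1, 5, 6, 8, 9]  # the "1,5,6,8,9" list, already split into values

# One bound placeholder per value: no string interpolation, no injection risk.
placeholders = ",".join("?" * len(codes))
rows = cur.execute(
    f"SELECT GoodName FROM books WHERE GroupCode IN ({placeholders})", codes
).fetchall()
print(rows)  # [('SQL Primer',), ('PHP Notes',)]
```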
I have a table with three dates:
```
id date1 date2 date3
1 1945-12-13 1930-11-04 null
2 1970-09-12 1971-09-13 1972-09-14
3 null null null
4 2000-01-01 2001-01-01 2002-01-01
```
My proc passes in 2 parameters:
```
@dateFrom datetime = NULL,
@dateTo datetime = NULL
```
I want to return records that have any date in between the two. NULL for either @dateFrom or @DateTo makes it open ended.
This works for the open ended case, but not for the case where both are not null.
```
SELECT *
FROM Table1
WHERE (@dateFrom IS NULL
OR ISNULL(date1,'12/31/1753') >= @dateFrom
OR ISNULL(date2,'12/31/1753') >= @dateFrom
OR ISNULL(date3,'12/31/1753') >= @dateFrom)
AND (@dateTo IS NULL
OR ISNULL(date1,'12/31/2099') <= @dateTo
OR ISNULL(date2,'12/31/2099') <= @dateTo
OR ISNULL(date3,'12/31/2099') <= @dateTo)
```
If I pass @dateFrom = '1940-01-01' and @dateTo = '1950-01-01',
I only want the first record, but I'm getting 1,2,and 4 because they all have dates > 1940.
I just can't wrap my head around how to structure this.
*Column and Table names have been changed to protect the innocent.* | You need to filter each date field individually within the range, like so:
```
WHERE
(Date1 >= ISNULL(@DateFrom,'17531231')
AND Date1 <= ISNULL(@dateTo,'20991231'))
OR
(Date2 >= ISNULL(@DateFrom,'17531231')
AND Date2 <= ISNULL(@dateTo,'20991231'))
OR
(Date3 >= ISNULL(@DateFrom,'17531231')
AND Date3 <= ISNULL(@dateTo,'20991231'))
```
Otherwise you aren't checking the range for each date field, just that a date in that row matches one of the criteria. | Just for another way to look at it. This solution would also work. It makes the where clause simpler at the expense of an additional block of code and a join.
```
CREATE TABLE #dates (id INT, date1 DATE, date2 DATE, date3 DATE)
INSERT INTO #dates
VALUES
('1','12/13/1945','11/4/1930',NULL),
('2','9/12/1970','9/13/1971','9/14/1972'),
('3',NULL,NULL,NULL),
('4','1/1/2000','1/1/2001','1/1/2002')
DECLARE
@dateFrom datetime = '1940-01-01',
@dateTo datetime = '1950-01-01'
;WITH dateFilter AS (
SELECT id,[Date],DateIndex
FROM
(SELECT
id, date1, date2, date3
FROM #dates) p
UNPIVOT([DATE] FOR DateIndex IN ([date1],[date2],[date3])) AS up
WHERE
up.[DATE] BETWEEN @dateFrom AND @dateTo
)
SELECT
d.*
FROM #dates d
INNER JOIN dateFilter df
ON df.id = d.id
DROP TABLE #dates
``` | Filtering multiple dates in row with optional to and from parameters | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
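The accepted answer's point — each date column must be range-checked on its own — can be replayed on the sample rows with SQLite (dates stored as ISO-8601 strings so `BETWEEN` compares correctly; the NULL columns simply never match):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE t (id INTEGER, date1 TEXT, date2 TEXT, date3 TEXT)")
cur.executemany("INSERT INTO t VALUES (?, ?, ?, ?)", [
    (1, '1945-12-13', '1930-11-04', None),
    (2, '1970-09-12', '1971-09-13', '1972-09-14'),
    (3, None, None, None),
    (4, '2000-01-01', '2001-01-01', '2002-01-01'),
])

# Each column must fall inside the range on its own; NULLs never match.
rows = cur.execute("""
    SELECT id FROM t
    WHERE (date1 BETWEEN :lo AND :hi)
       OR (date2 BETWEEN :lo AND :hi)
       OR (date3 BETWEEN :lo AND :hi)
""", {"lo": '1940-01-01', "hi": '1950-01-01'}).fetchall()
print(rows)  # [(1,)]
```

Only row 1 survives, matching the expected result in the question.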
I'm a rookie at Pentaho Kettle and queries. What I'm trying to do is check whether a value in file 1 also appears in file 2.
I've got 2 files that I export from my DB:
File 1:
```
Row1, Row2
A 3
B 5
C 99
Z 65
```
File 2:
```
Row1, Row2
A 3
D 11
E 22
Z 65
```
And i want to create one file output:
File Output
```
Row1, Row2
A 3
Z 65
```
What I'm doing: 2 file inputs and a merge join, but no file output. Something is missing here.
Any suggestion would be great! | You can have the two streams joined by a "Merge join" step, which lets you freely set the join keys (in your case it seems like you want both fields to be used) and also the type of join: Inner, Left Outer, Right Outer, or Full Outer. | You can use a Stream Lookup step for that. Start with the file input for file 1, then create a Stream Lookup step that uses the input stream for file 2 as its lookup stream. Now just match the columns and you can add the column from file 2 to your data stream. | Pentaho Join table values | [
"",
"sql",
"pentaho",
"kettle",
""
] |
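Outside of Kettle, the inner-join output the question expects can be sanity-checked in a few lines of Python (file contents inlined here for illustration):

```python
import csv
import io

file1 = "Row1,Row2\nA,3\nB,5\nC,99\nZ,65\n"
file2 = "Row1,Row2\nA,3\nD,11\nE,22\nZ,65\n"

rows1 = list(csv.DictReader(io.StringIO(file1)))
keys2 = {(r["Row1"], r["Row2"]) for r in csv.DictReader(io.StringIO(file2))}

# Inner join on both fields: keep only the rows of file 1 that also occur in file 2.
matched = [(r["Row1"], r["Row2"]) for r in rows1 if (r["Row1"], r["Row2"]) in keys2]
print(matched)  # [('A', '3'), ('Z', '65')]
```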
I have a table like the following:
```
id website logInTime
1 Yahoo 1/1/2001 00:00:00
2 Google 1/1/2001 00:00:01
1 Yahoo 2/1/2014 00:00:00
2 Yahoo 2/1/2014 00:00:00
```
How can I retrieve the latest login time for each user on each website? | ```
select t.id, t.website, max(logInTime)
from table t
group by t.id, t.website
``` | Is that the query you're looking for?
```
SELECT T.id
,T.website
,MAX(T.logInTime) AS [lastLogInTime]
FROM yourTable T
GROUP BY T.id, T.website
```
Hope this will help you. | Get the latest date of records from a table, SQL | [
"",
"sql",
""
] |
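The `MAX(logInTime)` grouping can be tried directly in SQLite; note the timestamps below are rewritten as ISO-8601 strings, an adjustment made so that `MAX` over text compares chronologically:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE logins (id INTEGER, website TEXT, logInTime TEXT)")
cur.executemany("INSERT INTO logins VALUES (?, ?, ?)", [
    (1, 'Yahoo',  '2001-01-01 00:00:00'),
    (2, 'Google', '2001-01-01 00:00:01'),
    (1, 'Yahoo',  '2014-01-02 00:00:00'),
    (2, 'Yahoo',  '2014-01-02 00:00:00'),
])

# One row per (id, website) pair, carrying the latest login time.
rows = cur.execute("""
    SELECT id, website, MAX(logInTime)
    FROM logins
    GROUP BY id, website
    ORDER BY id, website
""").fetchall()
print(rows)
```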
```
SELECT *
FROM pay_invoice_list p
LEFT outer JOIN paimentDetl d
on p.invoice_no= d.invoiceNo and p.invoice_no LIKE ' %$temp%'
ORDER BY iid DESC
```
This is my query, but it doesn't work correctly: the two tables are not joined properly, and the `LIKE` doesn't work either. | Try it like this:
```
SELECT *
FROM pay_invoice_list p
LEFT outer JOIN paimentDetl d ON p.invoice_no = d.invoiceNo
WHERE p.invoice_no LIKE '%$temp%'
ORDER BY iid DESC
``` | ```
SELECT *
FROM pay_invoice_list p
LEFT outer JOIN paimentDetl d
on p.invoice_no= d.invoiceNo
Where p.invoice_no LIKE '%$temp%'
ORDER BY iid DESC
```
All you needed to do was add a WHERE clause to your query. | How to use Like in left outer join | [
"",
"mysql",
"sql",
""
] |
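A small SQLite run-through (with hypothetical rows) shows why the filter belongs in `WHERE`: the `LEFT JOIN` keeps invoices without payment details, and `WHERE ... LIKE` then narrows the invoices themselves:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
    CREATE TABLE pay_invoice_list (iid INTEGER, invoice_no TEXT);
    CREATE TABLE paimentDetl (invoiceNo TEXT, amount REAL);
    INSERT INTO pay_invoice_list VALUES (1, 'INV-001'), (2, 'INV-002'), (3, 'XYZ-003');
    INSERT INTO paimentDetl VALUES ('INV-001', 10.0);
""")

# Unmatched invoices survive the LEFT JOIN (amount is NULL);
# the WHERE clause then filters on the invoice number pattern.
rows = cur.execute("""
    SELECT p.invoice_no, d.amount
    FROM pay_invoice_list p
    LEFT OUTER JOIN paimentDetl d ON p.invoice_no = d.invoiceNo
    WHERE p.invoice_no LIKE '%INV%'
    ORDER BY p.iid DESC
""").fetchall()
print(rows)  # [('INV-002', None), ('INV-001', 10.0)]
```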
Let's say I have two tables `Contracts` and `ServiceProviders`. A contract has a property `LeadServiceProvider` which references a service provider.
I also have a third table called `SubServiceProviders` that maintains a many-to-many relationship between `Contracts` and `ServiceProviders` acting as sub service providers.
I'd like to run a query on Contract that would give me a list of all service providers - Lead and Sub, together in one column:
```
Select
c.LeadServiceProvider, subSP.ServiceProvider
From
Contracts c
left join
SubServiceProviders subSP on c.ID = subSP.Contract
Where
[Some contract criteria goes here]
```
The query above gives me two columns that ideally I would like to flatten into one and run a `distinct()` on. I could do a select for `LeadServiceProvider` and then `UNION` it with a select for `SubServiceProviders`.
That is simple enough for this example, but, in my actual application, the `FROM` and `WHERE` clauses are pretty complicated, and if I'm to use `UNION`, I'm forced to repeat those clauses all over again. | It seems you could use CROSS APPLY here:
```
Select distinct
v.ServiceProvider
From
Contracts c
left join
SubServiceProviders subSP on c.ID = subSP.Contract
cross apply
(values (c.LeadServiceProvider), (subSP.ServiceProvider)) as v (ServiceProvider)
Where
[Some contract criteria goes here]
```
[CROSS APPLY](http://msdn.microsoft.com/en-us/library/ms175156%28v=sql.90%29.aspx "Using APPLY") is supported in SQL Server 2005 and later versions, the [VALUES](http://msdn.microsoft.com/en-us/library/dd776382%28v=sql.100%29.aspx "Table Value Constructor (Transact-SQL)") row constructor in SQL Server 2008 and later. You can replace the VALUES clause with an equivalent UNION ALL subquery to make the query compatible with SQL Server 2005:
```
(
select c.LeadServiceProvider
union all
select subSP.ServiceProvider
) as v (ServiceProvider)
``` | If you are looking for a simple'ish way to do this in one query, you can use a CTE and unpivot the columns. I would do this explicitly with a `join` like this:
```
with t as (
Select c.LeadServiceProvider, subSP.ServiceProvider
From Contracts c left join SubServiceProviders subSP on c.ID = subSP.Contract
Where [Some contract criteria goes here]
)
select distinct (case when n.n = 1 then LeadServiceProvider else ServiceProvider end) as SP
from t cross join
(select 1 as n union all select 2) n;
```
EDIT:
You can just as easily put this in a subquery:
```
select distinct (case when n.n = 1 then LeadServiceProvider else ServiceProvider end) as SP
from (<your query here without the order by>) t cross join
(select 1 as n union all select 2) n;
```
Or forget the subquery:
```
Select distinct (case when n.n = 1 then c.LeadServiceProvider
else subSP.ServiceProvider
end)
From Contracts c left join
SubServiceProviders subSP
on c.ID = subSP.Contract cross join
(select 1 as n union all select 2) n
Where [Some contract criteria goes here]
``` | How to join two columns without repeating the FROM/WHERE clauses | [
"",
"sql",
"sql-server",
""
] |
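The numbers-table variant from the second answer is portable enough to try in SQLite (`CROSS APPLY` itself is SQL Server-specific); the table contents here are hypothetical:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
    CREATE TABLE Contracts (ID INTEGER, LeadServiceProvider TEXT);
    CREATE TABLE SubServiceProviders (Contract INTEGER, ServiceProvider TEXT);
    INSERT INTO Contracts VALUES (1, 'Acme'), (2, 'Beta');
    INSERT INTO SubServiceProviders VALUES (1, 'Gamma'), (1, 'Beta'), (2, 'Delta');
""")

# Each joined row is duplicated twice (n = 1, 2) so lead and sub providers
# fall into a single column, then DISTINCT collapses the repeats.
rows = cur.execute("""
    SELECT DISTINCT CASE WHEN n.n = 1 THEN c.LeadServiceProvider
                         ELSE s.ServiceProvider END AS SP
    FROM Contracts c
    LEFT JOIN SubServiceProviders s ON s.Contract = c.ID
    CROSS JOIN (SELECT 1 AS n UNION ALL SELECT 2) n
    ORDER BY SP
""").fetchall()
print(rows)  # [('Acme',), ('Beta',), ('Delta',), ('Gamma',)]
```

Note that 'Beta' appears only once even though it is both a lead provider and a sub provider, which is the flattening the question asks for.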
I am using SQL Server Compact Edition 3.5 SP2.
I want to use this query:
```
SELECT DISTINCT PortalURI
FROM LoginHistory
ORDER BY LastLoginDate DESC
```
But it tells me that it requires `LastLoginDate` field to be selected too like this:
```
SELECT DISTINCT PortalURI, LastLoginDate
FROM LoginHistory
ORDER BY LastLoginDate DESC
```
But it gives me repeated `PortalURI`.
How can I keep `PortalURI` distinct?
**Edit:** I also used `GROUP BY` and `ORDER BY MAX(LastLoginDate)` but the result was this exception:
> Expressions in the ORDER BY list cannot contain aggregate functions. | Try this:
```
SELECT PortalURI, MAX(LastLoginDate) LastLoginDate
FROM LoginHistory
GROUP BY PortalURI
ORDER BY LastLoginDate DESC
``` | Please try:
```
SELECT PortalURI
FROM LoginHistory
GROUP BY PortalURI
ORDER BY MAX(LastLoginDate) DESC
```
**EDIT:** SQL Server Compact Edition does not support aggregate functions on the ORDER BY clause. So, here is an alternative solution:
```
SELECT PortalURI, MAX(LastLoginDate)
FROM LoginHistory
GROUP BY PortalURI
ORDER BY 2 DESC
``` | How to use DISTINCT for just one field when ORDER BY another field is used? | [
"",
"sql",
"sql-server",
"sql-server-ce",
"sql-order-by",
"distinct",
""
] |
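The accepted GROUP BY/MAX workaround behaves the same way in SQLite, where the aggregate can be aliased and the alias used in ORDER BY (sample rows are hypothetical; the alias is renamed to `latest` here to avoid shadowing the column name):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE LoginHistory (PortalURI TEXT, LastLoginDate TEXT)")
cur.executemany("INSERT INTO LoginHistory VALUES (?, ?)", [
    ('a.example', '2014-01-01'),
    ('b.example', '2014-03-01'),
    ('a.example', '2014-05-01'),
])

# One row per portal, sorted by each portal's most recent login.
rows = cur.execute("""
    SELECT PortalURI, MAX(LastLoginDate) AS latest
    FROM LoginHistory
    GROUP BY PortalURI
    ORDER BY latest DESC
""").fetchall()
print(rows)  # [('a.example', '2014-05-01'), ('b.example', '2014-03-01')]
```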
What's the most efficient way to run a query encapsulated in another?
*1. Querying inside a query:*
```
select
id_user
from table1
where
foo in (select foo from table2 where dt_signin > '2014-01-01 00:00')
```
*2. By using a temporary table:*
```
create table tmp_table as
select
foo
from table2
where
dt_signin > '2014-01-01 00:00'
```
*and then querying the temporary table*
```
select
id_user
from table1 t1
join tmp_table tmp
on t1.foo = tmp.foo
```
Using `method 1`, is the query inside the `where` clause run once per row of table1, or just once, with the result stored in memory for further comparisons with foo? | Your two queries do not do the same thing. For the second approach to be equivalent, you need `select distinct`:
```
select distinct id_user
from table1 t1 join
tmp_table tmp
on t1.foo = tmp.foo;
```
Because of this additional operation, I might expect the `in` to perform better. But, in general, when you have a specific performance question, you should test it on your data on your system.
As for your queries in the question, there are many different ways to approach this. Here are a few:
* Subquery with `in`
* Two tables with a `join` and `distinct`
* Two tables with `in`
* Two tables with exists
* Subquery with `exists`
* Two tables with `exists`
* CTE with a `join`
* CTE with `in`
* CTE with `exists`
In an ideal world, SQL compilers would simply study the query and get the best execution plan, regardless of how the query is expressed. That world is not the world we live in, alas.
One reason a temporary table is sometimes useful (and I much prefer single query solutions) is for optimization purposes:
1. The statistics about the temporary table are known, so the optimizer can choose a better plan.
2. You can build indexes on temporary tables to improve performance.
Your subquery is not very complicated, so these probably are not issues.
Under different circumstances, different approaches might work better. As a default, I would build an index on `tmp_Table(dt_signin, foo)` and use `exists`. | The `exists` variant frequently outperforms the others
```
select id_user
from table1
where exists (
select 1
from table2
where
dt_signin > '2014-01-01 00:00'
and
table1.foo = foo
)
```
Notice that the outer `table1.foo` is compared to `foo` in the subquery | Most efficient way to run an encapsulated query | [
"",
"sql",
"performance",
"postgresql",
""
] |
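The semantic difference called out above — a plain join duplicates rows when `foo` matches more than once, while `IN` (or `EXISTS`) does not — is easy to demonstrate with hypothetical rows in which `foo = 10` signs in twice after the cutoff:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
    CREATE TABLE table1 (id_user INTEGER, foo INTEGER);
    CREATE TABLE table2 (foo INTEGER, dt_signin TEXT);
    INSERT INTO table1 VALUES (1, 10), (2, 20), (3, 10), (4, 30);
    INSERT INTO table2 VALUES (10, '2014-02-01'), (20, '2013-12-01'), (10, '2014-03-01');
""")

# IN: each table1 row appears at most once, no matter how many matches exist.
q_in = cur.execute("""
    SELECT id_user FROM table1
    WHERE foo IN (SELECT foo FROM table2 WHERE dt_signin > '2014-01-01')
""").fetchall()

# Plain join: each table1 row is repeated once per matching table2 row.
q_join = cur.execute("""
    SELECT id_user FROM table1 t1
    JOIN table2 t2 ON t2.foo = t1.foo
    WHERE t2.dt_signin > '2014-01-01'
""").fetchall()

print(sorted(q_in), sorted(q_join))  # [(1,), (3,)] [(1,), (1,), (3,), (3,)]
```

This is why the temp-table rewrite needs `distinct` (or `exists`) to match the original query.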
I have four MySQL tables where the first three share the first table's PK, and the fourth has a FK to the third table (see below for the schema).
Given the PK of the fourth table, I need data from the first and fourth tables only.
Is there any need to join the second and third tables?
For instance, is:
```
SELECT t1.*,t4.*
FROM t1
INNER JOIN t2 ON t2.t1_idt1=t1.idt1
INNER JOIN t3 ON t3.t2_idt2=t2.idt2
INNER JOIN t4 ON t4.t3_idt3=t3.idt3
WHERE t4.idt4=123;
```
better or worse than:
```
SELECT t1.*,t4.*
FROM t1
INNER JOIN t4 ON t4.t3_idt3=t1.idt1
WHERE t4.idt4=123;
```
Please explain why one is better than the other.
SCHEMA
```
-- MySQL Script generated by MySQL Workbench
-- 08/29/14 12:34:46
-- Model: New Model Version: 1.0
SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0;
SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0;
SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='TRADITIONAL,ALLOW_INVALID_DATES';
-- -----------------------------------------------------
-- Schema mydb
-- -----------------------------------------------------
CREATE SCHEMA IF NOT EXISTS `mydb` DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci ;
USE `mydb` ;
-- -----------------------------------------------------
-- Table `mydb`.`t1`
-- -----------------------------------------------------
CREATE TABLE IF NOT EXISTS `mydb`.`t1` (
`idt1` INT NOT NULL,
`data` VARCHAR(45) NULL,
PRIMARY KEY (`idt1`))
ENGINE = InnoDB;
-- -----------------------------------------------------
-- Table `mydb`.`t2`
-- -----------------------------------------------------
CREATE TABLE IF NOT EXISTS `mydb`.`t2` (
`t1_idt1` INT NOT NULL,
`data` VARCHAR(45) NULL,
PRIMARY KEY (`t1_idt1`),
CONSTRAINT `fk_t2_t1`
FOREIGN KEY (`t1_idt1`)
REFERENCES `mydb`.`t1` (`idt1`)
ON DELETE NO ACTION
ON UPDATE NO ACTION)
ENGINE = InnoDB;
-- -----------------------------------------------------
-- Table `mydb`.`t3`
-- -----------------------------------------------------
CREATE TABLE IF NOT EXISTS `mydb`.`t3` (
`t2_t1_idt1` INT NOT NULL,
`data` VARCHAR(45) NULL,
PRIMARY KEY (`t2_t1_idt1`),
CONSTRAINT `fk_t3_t21`
FOREIGN KEY (`t2_t1_idt1`)
REFERENCES `mydb`.`t2` (`t1_idt1`)
ON DELETE NO ACTION
ON UPDATE NO ACTION)
ENGINE = InnoDB;
-- -----------------------------------------------------
-- Table `mydb`.`t4`
-- -----------------------------------------------------
CREATE TABLE IF NOT EXISTS `mydb`.`t4` (
`t3_t2_t1_idt1` INT NOT NULL,
`data` VARCHAR(45) NULL,
INDEX `fk_t4_t31_idx` (`t3_t2_t1_idt1` ASC),
CONSTRAINT `fk_t4_t31`
FOREIGN KEY (`t3_t2_t1_idt1`)
REFERENCES `mydb`.`t3` (`t2_t1_idt1`)
ON DELETE NO ACTION
ON UPDATE NO ACTION)
ENGINE = InnoDB;
SET SQL_MODE=@OLD_SQL_MODE;
SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS;
SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS;
``` | You don't need to join any tables if they're not relevant to the query. A table is relevant if you need to refer to it columns, or you want the result set limited to rows that have matches in that table. Joining with extra tables will slow down the query, because the query planner cannot determine that these joins are redundant; there's no way to tell the DBMS that there's a 1-to-1 relationship between two tables. The best you can do is declare foreign key relationships -- the foreign key is required to be a subset of the values in the other table; but there's no way to declare exact equivalence. In fact, even if you intend to have 1-to-1 equivalence, you don't actually have it -- when you insert into the three tables, you have to do them sequentially, so for a brief period there's an extra row in one or two of the tables.
You can see that the query planner makes use of the extra tables if you compare the result of `EXPLAIN` between the two queries. | If all IDs are the same in all four tables, and if you don't want to exclude records that don't appear in tables 2 and 3, then there is no reason to include them in the join. It will still use the index on table 1, even without the foreign key relationship.
I would consider renaming all the ID columns to signify the fact that they are the same. | Is joining 1-to-1 tables always necessary? | [
"",
"mysql",
"sql",
""
] |
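A SQLite sketch of the schema's essentials (simplified: the t2 hop is omitted) confirms the two query shapes agree when the FK chain is fully populated — and marks exactly where they would diverge if an intermediate row were missing:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
    CREATE TABLE t1 (idt1 INTEGER PRIMARY KEY, data TEXT);
    CREATE TABLE t3 (idt3 INTEGER PRIMARY KEY REFERENCES t1, data TEXT);
    CREATE TABLE t4 (idt4 INTEGER PRIMARY KEY, t3_idt3 INTEGER REFERENCES t3, data TEXT);
    INSERT INTO t1 VALUES (1, 'one');
    INSERT INTO t3 VALUES (1, 'three');
    INSERT INTO t4 VALUES (123, 1, 'four');
""")

# Route through the intermediate table...
with_middle = cur.execute("""
    SELECT t1.data, t4.data FROM t1
    JOIN t3 ON t3.idt3 = t1.idt1
    JOIN t4 ON t4.t3_idt3 = t3.idt3
    WHERE t4.idt4 = 123
""").fetchall()

# ...and skip it, relying on the shared key values.
without_middle = cur.execute("""
    SELECT t1.data, t4.data FROM t1
    JOIN t4 ON t4.t3_idt3 = t1.idt1
    WHERE t4.idt4 = 123
""").fetchall()

print(with_middle == without_middle, without_middle)  # True [('one', 'four')]
```

If a t3 row were deleted while its t1 and t4 rows remained, the first query would return nothing and the second would still return a row — the equivalence holds only as long as the 1-to-1 chain is intact.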
I have a table in SQL Server 2012 with one of its columns holding data of the `time` datatype.
How would you convert 00:45:00 to 0.45,
or 01:45:00 to 1.45?
Please help. | From the question, it seems you only want to replace the ':' with '.'. If so, it can be achieved in many ways in SQL Server 2012. One example is below.
If your column is 'Datetime', use this:
```
SELECT REPLACE(LEFT(CAST(DatetimeCol AS TIME),5),':',',') FROM Tbl
```
Else if it just 'Time' then remove the cast. | You could do arithmetic such as:
```
select cast(datepart(hour, col) + datepart(minute, col) / 100.00 as decimal(5, 2));
```
But as noted in the comments, "1.45" seems quite confusing. Most people think this should be "1.75". If you want the latter -- with precision to the minute -- you can do:
```
select cast(datepart(hour, col) + datepart(minute, col) / 60.00 as decimal(5, 2));
``` | convert time to decimal in SQL server 2012 | [
"",
"sql",
"time",
"decimal",
""
] |
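The two readings of 01:45 discussed in the answers differ only in the divisor, which a couple of plain functions make explicit (Python here, purely for illustration):

```python
from datetime import time

def dotted(t: time) -> float:
    # The question's "1.45" style: minutes written after a literal decimal point.
    return t.hour + t.minute / 100.0

def fractional(t: time) -> float:
    # The "1.75" style: minutes as a true fraction of an hour.
    return t.hour + t.minute / 60.0

t = time(1, 45)
print(dotted(t), fractional(t))
```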