I have the following query:
```
SELECT count(*) as 'totalCalls', HOUR(`end`) as 'Hour'
FROM callsDataTable
WHERE company IN (
SELECT number
FROM products
WHERE products.id IN (@_PRODUCTS))
AND YEAR(`end`) = @_YEAR AND MONTH(`end`) = @_MONTH
group by HOUR(`end`)
```
The above query returns only the hours in which calls were made:
```
totalCalls Hour
2 0
1 2
4 7
98 8
325 9
629 10
824 13
665 15
678 16
665 17
606 18
89 22
5 23
```
The desired output should be all the hours, and where there are no calls it should be 0 calls for that hour, like below:
```
totalCalls Hour
0 0
0 1
1 2
0 3
0 4
0 5
0 6
4 7
98 8
325 9
629 10
0 11
0 12
824 13
0 14
665 15
678 16
665 17
606 18
0 19
0 20
0 21
89 22
5 23
```
|
First, your query can be expressed in a simpler way as:
```
SELECT COUNT(*) AS totalCalls, HOUR(`end`) AS `Hour`
FROM callsDataTable c
INNER JOIN products p ON c.company = p.number
AND p.id IN (@_PRODUCTS)
AND YEAR(`end`) = @_YEAR AND MONTH(`end`) = @_MONTH
GROUP BY HOUR(`end`)
ORDER BY `Hour` ASC
```
Using the idea suggested by @NoDisplayName in [their answer](https://stackoverflow.com/a/28262689/4265352):
```
CREATE TABLE hours_table (hours INT);
INSERT INTO hours_table VALUES(0), (1), (2),
/* put the missing values here */ (23);
```
You can join the table that contains the hours to get the results you want:
```
SELECT COUNT(c.`end`) AS totalCalls, h.hours AS `Hour`
FROM callsDataTable c
INNER JOIN products p ON c.company = p.number
RIGHT JOIN hours_table h ON h.hours = HOUR(c.`end`)
AND p.id IN (@_PRODUCTS)
AND YEAR(`end`) = @_YEAR AND MONTH(`end`) = @_MONTH
GROUP BY h.hours
ORDER BY h.hours ASC
```
If it runs too slowly (and it very likely does), you should investigate using something like `end BETWEEN '2015-01-01 00:00:00' AND '2015-01-31 23:59:59'` instead of comparing `YEAR(end)` and `MONTH(end)`.
It can be accomplished like this:
```
SET @start = STR_TO_DATE(CONCAT(@_YEAR, '-', @_MONTH, '-01 00:00:00'), '%Y-%m-%d %H:%i:%s');
SET @end = DATE_SUB(DATE_ADD(@start, INTERVAL 1 MONTH), INTERVAL 1 SECOND);
SELECT ...
...
AND `end` BETWEEN @start AND @end
...
```
But this change doesn't help by itself. It needs an index on the `end` column to bring the desired speed improvement:
```
ALTER TABLE callsDataTable ADD INDEX(`end`);
```
Using `HOUR(c.end)` in the join condition is another reason the query runs slowly.
It can be improved by joining `hours_table` with the result set produced by the (simplified version of the) first query:
```
SELECT IFNULL(totalCalls, 0) AS totalCalls, h.hours AS `Hour`
FROM hours_table h
LEFT JOIN (
SELECT COUNT(*) AS totalCalls, HOUR(`end`) as `Hour`
FROM callsDataTable c
INNER JOIN products p ON c.company = p.number
AND p.id IN (@_PRODUCTS)
AND YEAR(`end`) = @_YEAR AND MONTH(`end`) = @_MONTH
GROUP BY HOUR(`end`)
) d ON h.hours = d.`Hour`
ORDER BY h.hours ASC
```
|
You need an hours table and then do a `LEFT OUTER JOIN` with it.
That ensures all hours are returned; if an hour doesn't exist in `callsDataTable`, its count will be `0`.
**Hours Table**
```
create table hours_table (hours int);
insert into hours_table values(0);
insert into hours_table values(1);
...
insert into hours_table values(23);
```
**Query:**
```
SELECT count(HOUR(`end`)) as 'totalCalls', HT.Hours as 'Hour'
FROM Hours_table HT left Outer join callsDataTable CD
on HT.Hours = HOUR(`end`)
WHERE company IN (
SELECT number
FROM products
WHERE products.id IN (@_PRODUCTS))
AND YEAR(`end`) = @_YEAR AND MONTH(`end`) = @_MONTH
group by HT.Hours
```
|
Mysql: Group by Hour, 0 if no data
|
[
"",
"mysql",
"sql",
"time-series",
"aggregate-functions",
""
] |
I have two queries I've tested:
```
SELECT [FirstName], [LastName], [PreferredName], [DOB], [Sex], LastActivityDate,LoweredEmail, EmailConfirmed FROM [Patients] p
join AspNetUsers u on p.Id=u.PatientId
join AspNetUserRoles r on u.Id=r.UserId
WHERE (p.[ClinicId] = 1) and RoleId='16874754-779a-fbae-8caf-bbbb2344b7a6'
```
it returns 1301 results
but this query
```
SELECT AspNetUsers.id FROM dbo.AspNetUsers
INNER JOIN dbo.AspNetUserRoles ON AspNetUsers.Id=AspNetUserRoles.UserId
WHERE AspNetUsers.ClinicId = 1 and AspNetUserRoles.RoleID='16874754-779a-fbae-8caf-bbbb2344b7a6'
```
this one returns 1303
I've tried this query to check the ones missing and it's getting zero results
```
SELECT [FirstName], [LastName], [PreferredName], [DOB], [Sex], LastActivityDate,LoweredEmail, EmailConfirmed FROM [Patients] p
join AspNetUsers u on p.Id=u.PatientId
join AspNetUserRoles r on u.Id=r.UserId
WHERE (p.[ClinicId] = 1) and RoleId='16874754-779a-fbae-8caf-bbbb2344b7a6'
and u.PatientId not in
(
SELECT AspNetUsers.PatientId FROM dbo.AspNetUsers INNER JOIN
dbo.AspNetUserRoles ON AspNetUsers.Id=AspNetUserRoles.UserId WHERE
AspNetUsers.ClinicId = 1 and
AspNetUserRoles.RoleID='16874754-779a-fbae-8caf-bbbb2344b7a6'
)
```
How do I find out which is correct, and why am I getting a difference?
|
The second query is returning 1303 records:
```
SELECT AspNetUsers.id
FROM dbo.AspNetUsers INNER JOIN
dbo.AspNetUserRoles
ON AspNetUsers.Id = AspNetUserRoles.UserId
WHERE AspNetUsers.ClinicId = 1 and
AspNetUserRoles.RoleID='16874754-779a-fbae-8caf-bbbb2344b7a6'
```
The first query is returning 1301 records:
```
SELECT [FirstName], [LastName], [PreferredName], [DOB], [Sex],
LastActivityDate, LoweredEmail, EmailConfirmed
FROM [Patients] p join
AspNetUsers u
on p.Id = u.PatientId join
AspNetUserRoles r
on u.Id = r.UserId
WHERE (p.[ClinicId] = 1) and RoleId='16874754-779a-fbae-8caf-bbbb2344b7a6'
```
The difference between these two is the join to `Patients`. This is an `inner join`, so it may be filtering out results. Note that duplicates could also cause problems.
You can determine the rows that are missing (because some are missing) with:
```
select *
from AspNetUsers nu
where not exists (select 1 from Patients p where p.id = nu.PatientId);
```
|
Have you tried `SELECT DISTINCT`?
Also try your checking query the other way around (move the subquery to the main query and the main query to the subquery).
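For instance, reversing the direction of the check often surfaces the extra rows; a minimal sketch of the idea (here in SQLite via Python, with made-up miniature tables) might look like:

```python
import sqlite3

# Tiny stand-ins for the real tables; user 3 points at a patient that doesn't exist.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Patients (Id INTEGER);
    CREATE TABLE AspNetUsers (Id INTEGER, PatientId INTEGER);
    INSERT INTO Patients VALUES (10);
    INSERT INTO AspNetUsers VALUES (1, 10), (2, 10), (3, 99);
""")

# One direction: patients with no matching user query row (finds nothing here).
missing_patients = conn.execute("""
    SELECT Id FROM Patients
    WHERE Id NOT IN (SELECT PatientId FROM AspNetUsers)
""").fetchall()

# The other direction: users whose PatientId matches no patient (finds the culprit).
missing_users = conn.execute("""
    SELECT Id FROM AspNetUsers u
    WHERE NOT EXISTS (SELECT 1 FROM Patients p WHERE p.Id = u.PatientId)
""").fetchall()

print(missing_patients, missing_users)  # [] [(3,)]
```

Only one direction exposes the mismatch, which is why checking both ways is worth doing.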
|
two queries returning different result counts
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I wanted to change one of the fields to currency type during select
```
SELECT AdvertID,NumWords,WeeksRunning,0.5*NumWords*WeeksRunning AS [Cost]
FROM AdvertsTb
```
As you can see, the new field won't be in the format $XX.00. Which operator must I use to change the type so that it is shown as currency?
|
You'll need to cast it
```
SELECT AdvertID,NumWords,WeeksRunning,cast(0.5*NumWords*WeeksRunning as money) AS [Cost]
FROM AdvertsTb
```
|
Just convert the `cost` to `varchar` and append a `$` symbol.
```
SELECT AdvertID,NumWords,WeeksRunning,
"$" & CStr(0.5*NumWords*WeeksRunning) AS [Cost]
FROM AdvertsTb
```
|
How to change the datatype of a field
|
[
"",
"sql",
"ms-access",
""
] |
I have the following data:
```
-------table_a-------
| id | data | value |
| 1 | 5 | 1 |
| 2 | 3 | 3 |
```
My desired output is to merge row 2 into row 1 so that where `id = 1`, `data` stays `5` but the values are added (`1 + 3`).
```
-------table_a-------
| id | data | value |
| 1 | 5 | 4 |
```
This is as far as I got with the queries, but the first doesn't seem to work. Note this is not for retrieving the data; it is for manipulating the data in the database.
**Current queries (1st not working):**
```
UPDATE table_a SET value = value + (SELECT a.value FROM table_a a WHERE a.id = 2) WHERE id = 1;
DELETE FROM table_a WHERE id = 2;
```
|
If this is the logic that you want:
```
UPDATE table_a
SET value = value + (SELECT a.value FROM table_a a WHERE a.id = 2)
WHERE id = 1;
DELETE FROM table_a WHERE id = 2;
```
The `update` is not going to work because you cannot reference the table being updated in the rest of the query. You can use a hack for the update and do:
```
UPDATE table_a
SET value = value + (SELECT value FROM (SELECT a.value FROM table_a a WHERE a.id = 2) AS t)
WHERE id = 1;
```
The double `select` fixes this problem. The more correct method is to use `join`:
```
UPDATE table_a a CROSS JOIN
      (SELECT value FROM table_a WHERE id = 2) AS a2
    SET a.value = a.value + a2.value
    WHERE a.id = 1;
```
|
Here is one way to build an accumulative column by joining the table with itself.
I'm assuming the `id` column is an identity column.
Hope this helps.
```
CREATE TABLE [dbo].[TestTable](
[id] [int] NULL,
[data] [int] NULL,
[value] [int] NULL
) ON [PRIMARY]
INSERT INTO [dbo].[TestTable]
([id]
,[data]
,[value])
VALUES
(1,5,1),
(2,3,3),
(3,4,4)
SELECT a.id , a.data , a.value + b.value as Sum
FROM [TestTable] a
left join [TestTable] b
on a.id + 1 = b.id
```
|
Combining 2 rows in table
|
[
"",
"mysql",
"sql",
"merge",
""
] |
```
INSERT INTO table1
SELECT * FROM table2
WHERE NOT EXISTS
(SELECT 1 FROM table1
WHERE table2.id = table1.id)
```
What is the role of that `1` in the fourth line of the code? I want to make an incremental update of `table1` with records from `table2`. A friendly soul advised me to use the above query, which I find very common on the web for incremental table updates. Can someone please explain how this mechanism works?
|
`EXISTS` checks for the *presence* of rows in the sub-select, not for the data returned by those rows.
So we are only interested in *whether* there is a row or not.
But as you can't have a `select` without selecting something, you need to put an expression into the select list.
That could be *any* expression; the actual expression is of no interest. You could use `select some_column`, `select *`, `select null` or `select 42` - it would all be the same.
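For example, the following sketch (SQLite via Python, with toy tables) shows three different select lists producing identical `NOT EXISTS` results:

```python
import sqlite3

# In-memory database with two toy tables, mirroring the table1/table2 pattern.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (id INTEGER);
    CREATE TABLE table2 (id INTEGER);
    INSERT INTO table1 VALUES (1), (2);
    INSERT INTO table2 VALUES (2), (3);
""")

# The select list inside EXISTS is never evaluated for its value,
# so these three variants return exactly the same rows.
queries = [
    "SELECT id FROM table2 WHERE NOT EXISTS (SELECT 1  FROM table1 WHERE table1.id = table2.id)",
    "SELECT id FROM table2 WHERE NOT EXISTS (SELECT *  FROM table1 WHERE table1.id = table2.id)",
    "SELECT id FROM table2 WHERE NOT EXISTS (SELECT 42 FROM table1 WHERE table1.id = table2.id)",
]
results = [conn.execute(q).fetchall() for q in queries]
print(results[0])  # [(3,)] -- only the id missing from table1
```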
|
You can select whatever you like inside `EXISTS (subquery)`; the only thing that matters is whether a row is found (`EXISTS` true) or no rows are found (`EXISTS` false).
|
SQL insert into, where not exists (select 1... what this "1" stands for?
|
[
"",
"sql",
"insert",
"sql-server-2012",
""
] |
```
SELECT Region ,
flag ,
Name,
COUNT(ID) AS 'CountWithFlag'
FROM Table
GROUP BY flag
```
This query gives me the following results. I am grouping by flag and am able to get the counts for English/non-English based on the flag. I also want to display the total counts of English and non-English adjacent to those counts.
OUTPUT:
```
Region Flag Name CountWithFlag
a 0 English 100
b 1 Non-English 200
c 0 English 100
d 1 Non-English 200
```
DESIRED OUTPUT:
```
Region Flag Name CountWithFlag Total
a 0 English 100 200
b 1 Non-English 200 400
c 0 English 100 200
d 1 Non-English 200 400
```
How can I do that? I want to apply group by for specific counts with flag. But I also want to get total counts in same query!
Any inputs on how I can do that?
|
Another way would be something like this:
```
;
WITH agg1
AS (
SELECT region,
flag,
name,
COUNT(id) AS 'CountWithFlag'
FROM [dbo].[t2]
GROUP BY region,
flag,
name
),
agg2
AS (
SELECT [name],
COUNT(id) AS CountByName
FROM [dbo].[t2]
GROUP BY [name]
)
SELECT [agg1].[region],
[agg1].[flag],
[agg1].[name],
[agg1].[CountWithFlag],
[agg2].[CountByName]
FROM [agg1]
INNER JOIN [agg2]
ON [agg2].[name] = [agg1].[name]
```
|
Try this:
```
;
WITH cte
AS ( SELECT DISTINCT
Region ,
flag ,
Name ,
COUNT(ID) OVER ( PARTITION BY flag, Region, Name ) AS [CountWithFlag]
FROM [Table]
)
SELECT Region ,
flag ,
Name ,
SUM([CountWithFlag]) OVER ( PARTITION BY Name ) AS Total
FROM cte
```
|
Aggregation and Total
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a table:
```
ID | fish1 | fish2 | fish3 |
1 | shark | dolfy | whale |
2 | tuna | shark | dolfy |
3 | dolfy | shark | tuna |
4 | dolfy | tuna | shark |
```
and the desired result of the query is:
```
fish | count |
shark | 4 |
tuna | 3 |
dolfy | 4 |
whale | 1 |
```
Can someone give me a proper query for this?
|
You need to create a normalized view on your de-normalized table:
```
select fish,
count(*)
from (
select fish1 as fish from the_table
union all
select fish2 from the_table
union all
select fish3 from the_table
) t
group by fish
```
In general it's a bad idea to store your data like that (numbered columns very often indicate a bad design).
You should rather think about a proper one-to-many relationship.
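As a rough sketch of what that one-to-many design could look like (table and column names invented, shown in SQLite via Python):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- One row per (catch, fish) pair instead of numbered fish1/fish2/fish3 columns.
    CREATE TABLE catches (id INTEGER PRIMARY KEY);
    CREATE TABLE catch_fish (
        catch_id INTEGER REFERENCES catches(id),
        fish     TEXT
    );
    INSERT INTO catches VALUES (1), (2), (3), (4);
    INSERT INTO catch_fish VALUES
        (1,'shark'), (1,'dolfy'), (1,'whale'),
        (2,'tuna'),  (2,'shark'), (2,'dolfy'),
        (3,'dolfy'), (3,'shark'), (3,'tuna'),
        (4,'dolfy'), (4,'tuna'),  (4,'shark');
""")

# The question's count now needs no UNION ALL at all.
counts = dict(conn.execute(
    "SELECT fish, COUNT(*) FROM catch_fish GROUP BY fish"
).fetchall())
print(counts)  # counts per fish: shark 4, tuna 3, dolfy 4, whale 1
```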
|
```
select fish, count (id)
from
(
SELECT fish1 as fish, id from my_table
union all
SELECT fish2 as fish, id from my_table
union all
SELECT fish3 as fish, id from my_table
) A
group by fish
```
|
Query to count items across multiple columns
|
[
"",
"sql",
"postgresql",
"aggregate-functions",
""
] |
I am pretty new to using MS SQL 2012 and I am trying to create a query that will:
1. Report the order id, the order date and the employee id that processed the order
2. report the maximum shipping cost among the orders processed by the same employee prior to that order
This is the code that I've come up with, but it returns the freight of the particular order date. Whereas I am trying to get the maximum freight from all the orders before the particular order.
```
select o.employeeid, o.orderid, o.orderdate, t2.maxfreight
from orders o
inner join
(
select employeeid, orderdate, max(freight) as maxfreight
from orders
group by EmployeeID, OrderDate
) t2
on o.EmployeeID = t2.EmployeeID
inner join
(
select employeeid, max(orderdate) as mostRecentOrderDate
from Orders
group by EmployeeID
) t3
on t2.EmployeeID = t3.EmployeeID
where o.freight = t2.maxfreight and t2.orderdate < t3.mostRecentOrderDate
```
|
Step one is to read the order:
```
select o.employeeid, o.orderid, o.orderdate
from orders o
where o.orderid = @ParticularOrder;
```
That gives you everything you need to go out and get the previous orders from the same employee and join each one to the row you get from above.
```
select o.employeeid, o.orderid, o.orderdate, o2.freight
from orders o
join orders o2
on o2.employeeid = o.employeeid
and o2.orderdate < o.orderdate
where o.orderid = @ParticularOrder;
```
Now you have a whole bunch of rows with the first three values the same and the fourth is the freight cost of each previous order. So just group by the first three fields and select the maximum of the previous orders.
```
select o.employeeid, o.orderid, o.orderdate, max( o2.freight ) as maxfreight
from orders o
join orders o2
on o2.employeeid = o.employeeid
and o2.orderdate < o.orderdate
where o.orderid = @ParticularOrder
group by o.employeeid, o.orderid, o.orderdate;
```
Done. Build your query in stages and many times it will turn out to be much simpler than you at first thought.
|
It is unclear why you are using t3. From the question it doesn't sound like the employee's most recent order date is relevant at all, unless I am misunderstanding (which is absolutely possible).
I believe the issue lies in t2. You are grouping by orderdate, which will return the max freight for that date and employeeid, as you describe. You need to calculate a maximum total from all orders that occurred before the date that the order occurred on, for that employee, for every row you are returning.
It probably makes more sense to use a subquery for this.
```
SELECT o.employeeid, o.orderid, o.orderdate, m.maxfreight
FROM
orders o LEFT OUTER JOIN
(SELECT max(freight) as maxfreight
FROM orders AS f
WHERE f.orderdate <= o.orderdate AND f.employeeid = o.employeeid
) AS m
```
Hoping this is syntactically correct as I'm not in front of SSMS right now. I also included a left outer join as your previous query with an inner join would have excluded any rows where an employee had no previous orders (i.e. first order ever).
|
Getting max value before given date
|
[
"",
"sql",
"sql-server-2012",
""
] |
Here I am inserting values into two tables, and now I want to get back both of the identity values that were just created:
```
Declare @ClientCompanyId int, @PersonId int
INSERT INTO ClientCompany (CompanyName)
VALUES ('WebDunia')
SELECT @ClientCompanyId = SCOPE_IDENTITY()
INSERT INTO Person (ClientCompanyId, Company, AddressLine1, AddressLine2, City, State, Country, Zip, HomePhone, WorkPhone, CellPhone, Fax, EmailId, Website)
VALUES (@ClientCompanyId, 'Google', 'Chicago', 'USA', 'Norway', 'ALASKA', 'CANADA', '12345', 12345678912, 12345, 56454, 'Fax', 'Email', 'Website')
SELECT SCOPE_IDENTITY() AS PersonId
```
I want `PersonId` and `ClientCompanyId` at the same time.
|
Use another variable, set the same way as you set `@ClientCompanyId`:
```
Declare @ClientCompanyId int,
@PersonId int
INSERT INTO ClientCompany(CompanyName) VALUES ('Google')
SELECT @ClientCompanyId=SCOPE_IDENTITY()
Insert into Person (ClientCompanyId,Company,AddressLine1, AddressLine2,
City, State, Country, Zip, HomePhone, WorkPhone,
CellPhone, Fax, EmailId, Website)
Values(@ClientCompanyId,'Google','Chicago','USA','Norway',
'ALASKA','CANADA','12345',12345678912,12345,56454,'Fax',
'Email','Website')
SELECT @ClientCompanyId, @PersonId=SCOPE_IDENTITY() <<<<<<<<<<<<<<<<<**Here**
```
|
Why not just replicate what you do with `@ClientCompanyID` for `@PersonID` after the 2nd insert?
i.e. the line `SELECT SCOPE_IDENTITY() as PersonId` should read `SELECT @PersonID = SCOPE_IDENTITY()`. Then both variables will have their values set.
I don't know in which context this SQL runs, but if you need both values out, you could either select them - or, in the case of a stored procedure, have them as output parameters.
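`SCOPE_IDENTITY()` is SQL Server specific, but the pattern itself - capture each generated key immediately after its own insert - looks the same everywhere. Here is a sketch of it using SQLite's `lastrowid` analogue via Python (table layout simplified):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ClientCompany (Id INTEGER PRIMARY KEY, CompanyName TEXT);
    CREATE TABLE Person (Id INTEGER PRIMARY KEY, ClientCompanyId INTEGER, Company TEXT);
""")

# Capture each generated key right after its own INSERT,
# just as the answer suggests doing with SCOPE_IDENTITY().
cur = conn.execute("INSERT INTO ClientCompany (CompanyName) VALUES ('WebDunia')")
client_company_id = cur.lastrowid

cur = conn.execute(
    "INSERT INTO Person (ClientCompanyId, Company) VALUES (?, 'Google')",
    (client_company_id,),
)
person_id = cur.lastrowid
print(client_company_id, person_id)  # 1 1
```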
|
How can I return ScopeIdentity in a query of two Insert statements
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have a table like
```
+------+-------+-------------------------------------+
| id | col2 | col3 |
+------+-------+-------------------------------------+
| 1 | 1 | 10 |
| 2 | 1 | 20 |
| 3 | 1 | 15 |
| 4 | 2 | 10 |
| 5 | 2 | 20 |
| 6 | 2 | 15 |
| 7 | 2 | 30 |
+------+-------+-------------------------------------+
```
I want to select the id where col3 has the maximum value and col2 = 2 (id 7 in this case, since it has the maximum value 30 where col2 = 2). I tried with a GROUP BY clause:
```
SELECT id, MAX(col3) FROM table_name
WHERE col2 =2
GROUP BY id
```
But it gives me all the ids where col2 = 2. How can I achieve the desired output?
Thanks.
|
You can use the following simple SQL query for your requirement:
```
SELECT TOP 1 ID
FROM table_name
Where col2 = 2
ORDER BY col3 DESC
```
You can also use the `ROW_NUMBER()` function; it's another way to get the same result.
|
You could use `ROW_NUMBER`:
```
CREATE TABLE temp(
ID INT,
Col2 INT,
Col3 INT
)
INSERT INTO temp VALUES
(1, 1, 10), (2, 1, 20), (3, 1, 15),
(4, 2, 10), (5, 2, 20), (6, 2, 15),
(7, 2, 30);
SELECT
ID, Col3
FROM(
SELECT *, rn = ROW_NUMBER() OVER(PARTITION BY col2 ORDER BY col3 DESC)
FROM table_name
)t
WHERE
rn = 1
AND col2 = 2
```
**RESULT**
```
ID Col3
----------- -----------
7 30
```
|
Get ID of a row having maximum value in other column
|
[
"",
"sql",
"sql-server",
""
] |
I have a table with a column that stores the date and time. I need to write a query to get only the date from that column:
```
SELECT CAST(CONVERT(VARCHAR, LoginTime, 101) AS datetime) FROM AuditTrail
```
But, when I run the query I am getting this error:
> The conversion of a varchar data type to a datetime data type resulted in an out-of-range value.
The data in the column is datetime, e.g. `2012-06-18 12:08:04.000`,
so I need to extract the date only and remove the time.
Note that the `LoginTime` column is in datetime format.
|
Try the `ISDATE()` function in SQL Server. It returns 1 for a valid date and 0 for an invalid one, so you can select only the valid dates:
```
SELECT cast(CONVERT(varchar, LoginTime, 101) as datetime)
FROM AuditTrail
WHERE ISDATE(LoginTime) = 1
```
* [Click here](http://sqlfiddle.com/#!3/5b3cb/2) to view result
**EDIT :**
As per your update **i need to extract the date only and remove the time**, then you could simply use the inner `CONVERT`
```
SELECT CONVERT(VARCHAR, LoginTime, 101) FROM AuditTrail
```
or
```
SELECT LEFT(LoginTime,10) FROM AuditTrail
```
**EDIT 2 :**
The major reason for the error will be the date in your WHERE clause, i.e.
```
SELECT cast(CONVERT(varchar, LoginTime, 101) as datetime)
FROM AuditTrail
where CAST(CONVERT(VARCHAR, LoginTime, 101) AS DATE) <=
CAST('06/18/2012' AS DATE)
```
will be different from
```
SELECT cast(CONVERT(varchar, LoginTime, 101) as datetime)
FROM AuditTrail
where CAST(CONVERT(VARCHAR, LoginTime, 101) AS DATE) <=
CAST('18/06/2012' AS DATE)
```
**CONCLUSION**
In **EDIT 2**, the first query filters in `mm/dd/yyyy` format, while the second filters in `dd/mm/yyyy` format. One of the two will fail and throw the error
> The conversion of a varchar data type to a datetime data type resulted
> in an out-of-range value.
So please make sure to filter dates either in `mm/dd/yyyy` or in `dd/mm/yyyy` format, whichever works in your db.
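The same kind of out-of-range failure can be reproduced outside SQL; for example, parsing a day-first string with a month-first format in Python fails because 18 is not a valid month:

```python
from datetime import datetime

ok = datetime.strptime("06/18/2012", "%m/%d/%Y")  # month-first string, month-first format

try:
    datetime.strptime("18/06/2012", "%m/%d/%Y")   # day-first string, month-first format
    failed = False
except ValueError:                                # "18" is out of range for a month
    failed = True
print(ok.date(), failed)  # 2012-06-18 True
```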
|
Hope this helps:
```
SELECT CAST(LoginTime AS DATE)
FROM AuditTrail
```
If you want to apply filters over this datetime or its different parts, you can use built-in functions such as `YEAR` and `MONTH`.
|
Conversion of a varchar data type to a datetime data type resulted in an out-of-range value in SQL query
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"datetime",
""
] |
I have employee hours log data. Now I have to flag all the records where an employee logs the same hours for the same task on consecutive work days (Sat/Sun being the weekend).
Let me explain my problem with the following example.
I have an employee hour log table, say EMP\_HOUR\_LOG:
```
ROW EMP_NO TASK DATE HOURS FLAG
1 1000 T1 2015-01-01 8 0
2 1000 T1 2015-01-02 8 0
3 1000 T1 2015-01-05 8 0
4 1000 T1 2015-01-06 2 0
5 1000 T2 2015-01-01 4 0
6 1000 T2 2015-01-02 3 0
7 1000 T3 2015-01-09 5 0
8 1000 T3 2015-01-12 5 0
9 1000 T3 2015-01-13 3 0
10 1001 T1 2015-01-14 3 0
11 1001 T1 2015-01-15 3 0
```
In the above sample dataset I have to update FLAG to 1 for rows 1, 2, 3, 10 and 11, as these are entries where the hours entered by the same employee for the same task on consecutive days are the same.
I have already implemented this using cursors as I couldn't think of any alternative to looping through the data record by record.
Please let me know if anyone here can suggest any better way to implement the same by avoiding cursor loops or loops in general.
Thanks
|
Rows 7 and 8 should probably also have flag = 1.
This is the query, but I think the problem has to be handled during the insert:
```
update e set e.FLAG = 1
from [dbo].[EMP_HOUR_LOG] e
where exists
(
select * from [dbo].[EMP_HOUR_LOG] e1
where e1.[TASK] = e.[TASK]
and e1.[EMP_NO] = e.[EMP_NO]
and e1.[HOURS] = e.[HOURS]
and e1.[DATE] in
(
--Next work day
dateadd(dd, case when DATENAME(dw,e.[DATE]) = 'Friday' then 3 else 1 end, e.[DATE]),
--Previous work day
dateadd(dd, case when DATENAME(dw,e.[DATE]) = 'Monday' then -3 else -1 end, e.[DATE])
)
)
```
|
I am not sure if I understand it correctly:
Same employee, same task, with the same amount of hours input one day after another (consecutive, weekends excluded).
But your described logic will also pick rows 7 and 8:
```
7 1000 T3 2015-01-09 5 0
8 1000 T3 2015-01-12 5 0
```
It is the same employee `1000`, the same task `T3`, the same amount of hours `5`, and `2015-01-09` is a Friday while `2015-01-12` is a Monday, so the days are consecutive (weekends excluded).
---
Assuming I got that right, here is an MS SQL 2008 implementation:
```
WITH EHT AS (
SELECT [ROW]
,[EMP_NO]
,[TASK]
,[DATE]
,[HOURS]
,DATEPART(DW,[DATE]) AS DayWeek /* Sunday = 1 */
,ROW_NUMBER() OVER (PARTITION BY [EMP_NO],[TASK] ORDER BY [DATE]) AS DT_RNK
FROM [EMP_HOUR_LOG]
)
SELECT
A1.*
,A2.[DATE] AS Next_Date
,A3.[DATE] AS Previous_Date
,CASE /* for Next Date logic*/
WHEN A2.DayWeek<>2 /*Tuesday to Friday*/
AND DATEDIFF(DD, A1.[DATE], A2.[DATE]) = 1
THEN 1
WHEN A2.DayWeek=2 /*Monday*/
AND DATEDIFF(DD, A1.[DATE], A2.[DATE]) = 3 /* 3 days from Friday to Monday*/
Then 1
/* for Previous Date logic*/
WHEN A2.[DATE] IS NULL
AND A3.DayWeek=6 /* Friday */
AND DATEDIFF(DD, A3.[DATE], A1.[DATE]) = 3 /* 3 days from Friday to Monday*/
THEN 1
WHEN A2.[DATE] IS NULL
AND A3.DayWeek<>6 /* Mon to Thur */
AND DATEDIFF(DD, A3.[DATE], A1.[DATE]) = 1
Then 1
ELSE 0 END
AS FLAG
FROM EHT AS A1
LEFT JOIN EHT AS A2
ON (A1.[EMP_NO]=A2.[EMP_NO]
AND A1.[TASK]=A2.[TASK]
AND A1.[HOURS]=A2.[HOURS]
AND A1.DT_RNK=A2.DT_RNK-1)
LEFT JOIN EHT AS A3
ON (A1.[EMP_NO]=A3.[EMP_NO]
AND A1.[TASK]=A3.[TASK]
AND A1.[HOURS]=A3.[HOURS]
AND A1.DT_RNK=A3.DT_RNK+1)
```
First, the CTE `EHT` adds a weekday value (to identify whether a day is Saturday or Sunday, i.e. 7 or 1) and an order number (the `ROW_NUMBER` function) from 1...n, restarting per employee and task and ordering dates from lowest to highest.
Then, in the second step, `EHT` is left joined to itself on the employee, task and hours columns (to exclude all cases where employee, task and hours do not match), shifting the second table back by one order number (`A1.DT_RNK = A2.DT_RNK - 1`). With that I am able to identify the next date in a series.
However, the last date in a series does not have a next date, because it is last, and I need to identify series from beginning to end. Thus I left join the table again, but this time shifting it forward by one order number (`A1.DT_RNK = A3.DT_RNK + 1`) to identify the previous date in a series.
Now the logic just counts the number of days between the date and the next date, or the date and the previous date: if it equals 1, they are consecutive. For Monday dates it has to be 3. Similarly, for the last entry in a series, which has no next date, we check the previous date: if it is a Friday, the difference also has to be 3.
Probably there is a simpler solution, but this works. Still, as Gordon Linoff mentioned above, you did not include rows 7 and 8 in FLAG = 1. My logic includes them because Friday to Monday counts as consecutive dates (weekends excluded). Maybe you are taking some other holidays into account.
Result details:

|
SQL Server Cursor Loop Alternative
|
[
"",
"sql",
"sql-server",
"t-sql",
"stored-procedures",
"cursors",
""
] |
I'm quite new to Oracle, so I'm not totally familiar with the `ROWNUM` pseudocolumn. I'm trying to get the latest 4 articles from my table. I'm getting 4 results, but they are 2012 articles even though my date ordering is set to `DESC`. Any help would be great.
Oracle query:
```
SELECT bt.article_id, ba.*
FROM articles_types bt
LEFT JOIN blog_articles ba
ON ba.article_id = bt.article_id
WHERE ROWNUM < 5
ORDER BY Published DESC
```
|
Just a wild guess, but order the result before applying the rownum limit:
```
select t.* from
(
SELECT *
FROM articles_types bt
LEFT JOIN blog_articles ba
ON ba.article_id = bt.article_id
ORDER BY Published DESC
) T
WHERE ROWNUM <= 4
```
This worked; the issue was a duplicate column name.
|
The `where` clause is evaluated before the `order by` clause. So what's happening here is that you're selecting the first four rows returned by the database (in completely arbitrary order), and then sorting them in descending order of `Published`.
One solution could be to move the `where` clause to an outer query:
```
SELECT *
FROM (SELECT bt.article_id, ba.*
FROM articles_types bt
LEFT JOIN blog_articles ba ON ba.article_id = bt.article_id
ORDER BY Published DESC)
WHERE ROWNUM < 5
```
Alternatively, in Oracle 12c, you can (finally!) use a `fetch first` clause:
```
SELECT bt.article_id, ba.*
FROM articles_types bt
LEFT JOIN blog_articles ba ON ba.article_id = bt.article_id
ORDER BY Published DESC
FETCH FIRST 4 ROWS ONLY
```
|
SELECT 4 latest rows from Oracle table with join
|
[
"",
"sql",
"oracle",
"date",
"join",
"rownum",
""
] |
I have two tables in a PostgreSQL database, contracts and payments. One contract can have multiple payments.
I have the following two models:
```
module.exports = function(sequelize, DataTypes) {
var contracts = sequelize.define('contracts', {
id: {
type: DataTypes.INTEGER,
autoIncrement: true
}
}, {
createdAt: false,
updatedAt: false,
classMethods: {
associate: function(models) {
contracts.hasMany(models.payments, {
foreignKey: 'contract_id'
});
}
}
});
return contracts;
};
module.exports = function(sequelize, DataTypes) {
var payments = sequelize.define('payments', {
id: {
type: DataTypes.INTEGER,
autoIncrement: true
},
contract_id: {
type: DataTypes.INTEGER,
},
payment_amount: DataTypes.INTEGER,
}, {
classMethods: {
associate: function(models) {
payments.belongsTo(models.contracts, {
foreignKey: 'contract_id'
});
}
}
});
return payments;
};
```
I would like to sum all the payments made for every contract, and used this function:
```
models.contracts.findAll({
attributes: [
'id'
],
include: [
{
model: models.payments,
attributes: [[models.sequelize.fn('sum', models.sequelize.col('payments.payment_amount')), 'total_cost']]
}
],
group: ['contracts.id']
})
```
But it generates the following query:
```
SELECT "contracts"."id", "payments"."id" AS "payments.id", sum("payments"."payment_amount") AS "payments.total_cost"
FROM "contracts" AS "contracts"
LEFT OUTER JOIN "payments" AS "payments" ON "contracts"."id" = "payments"."contract_id" GROUP BY "contracts"."id";
```
I did not ask it to select `payments.id`, because then I would have to include it in my aggregation or GROUP BY clause, as stated in the error I get:
> Possibly unhandled SequelizeDatabaseError: error: column "payments.id"
> must appear in the GROUP BY clause or be used in an aggregate function
Am I missing something here? I'm following [this answer](https://stackoverflow.com/questions/25700004/how-can-i-do-a-group-by-across-3-tables-with-sequelize) but even there I don't understand how the SQL request can be valid.
|
This issue has been fixed in Sequelize 3.0.1: the primary key of the included models must be excluded with
```
attributes: []
```
and the aggregation must be done on the main model (infos in [this github issue](https://github.com/sequelize/sequelize/issues/3441)).
Thus, for my use case, the code is the following:
```
models.contracts.findAll({
attributes: ['id', [models.sequelize.fn('sum', models.sequelize.col('payments.payment_amount')), 'total_cost']],
include: [
{
model: models.payments,
attributes: []
}
],
group: ['contracts.id']
})
```
|
Try
```
group: ['contracts.id', 'payments.id']
```
|
Using group by and joins in sequelize
|
[
"",
"sql",
"node.js",
"postgresql",
"sequelize.js",
""
] |
How do I list all movies released between `2000` and `2010`?
```
SELECT title
FROM films
WHERE releaseYear > 2000 && < 2010
```
or do I have to use `between`? I'm new to this and trying to learn. I've looked for similar questions here on Stack Overflow, but everyone seems to give a different answer. Are there different possibilities?
|
You can use the between clause like so:
```
Where ReleaseYear between 2000 and 2010
```
Or equivalent:
```
Where ReleaseYear >=2000 and ReleaseYear <= 2010
```
|
You can use it like this:
```
SELECT title FROM films WHERE releaseYear between 2000 and 2010;
```
Tip: when using `between`, write the smaller value first: `BETWEEN smaller_value AND bigger_value`.
|
SQL select query between two dates
|
[
"",
"sql",
"date",
"select",
"between",
""
] |
How do I create a function constraint for denormalized data?
```
-- graduation_class table
graduation_class_id | graduation_year_id
123 1
456 2
-- user table
user_id | graduation_class_id | graduation_year_id
555 123 1
556 123 3 <--- bad data
557 456 2
```
A graduation class can only be assigned to a single graduation year. The combination of the two is unique in the graduation\_class table.
I want to create a function constraint on the graduation\_year\_id column of the user table to make sure graduation\_year\_id stays in sync with the proper id from the graduation\_class table, so that a record like 556 in the user table can never happen.
\*\* Please assume normalization is NOT a solution to this, or I wouldn't be asking this question! :) \*\*
|
As mentioned by other answers, this schema is not ideal, but if you need to maintain data integrity in this schema before having the opportunity to clean things up, a function constraint will do this for you. First, we will create a function that takes in the `graduation_class_id` and `graduation_year_id` as parameters, and returns a bit value:
```
create function dbo.fn_check_graduation_year_id
(
@graduation_class_id int,
@graduation_year_id int
)
returns bit
as
begin
declare @return bit = 1
if @graduation_year_id != (select top 1 graduation_year_id
from graduation_class
where graduation_class_id = @graduation_class_id)
set @return = 0
return @return
end
```
This function will return true if the `graduation_year_id` matches the value on the `graduation_class` table given the `graduation_class_id` value. Next, we will add the constraint to your `user` table, making sure the result of the check function returns true:
```
alter table [user] with nocheck add constraint ck_graduation_year_id
check (dbo.fn_check_graduation_year_id(graduation_class_id,graduation_year_id) = 1)
```
|
Instead of a function, create a `composite primary key` and a `foreign key` to fix your problem.
Make `graduation_class_id, graduation_year_id` in `graduation_class` a composite `primary key`:
```
ALTER TABLE graduation_class
ADD PRIMARY KEY (graduation_class_id ,graduation_year_id)
```
Now add `foreign key` in the user table
```
ALTER TABLE [user]
ADD FOREIGN KEY (graduation_class_id ,graduation_year_id)
REFERENCES graduation_class(graduation_class_id ,graduation_year_id)
```
This ensures that a row can only be added to the `user` table if its class/year combination is already present in the `graduation_class` table.
|
SQL Function Constraint for Denormalized Data
|
[
"",
"sql",
"sql-server",
"sql-function",
""
] |
Having a difficult time putting what I am trying to do into words so searching is also difficult.
Basically I am trying to look whether a certain value exists in a column, partitioned by group, and then propagate that value forward.
In this example I want to check whether a user has completed the tutorial and set a flag that carries forward.
```
pk | user | ... | activity
1 | A | ... | "login"
2 | A | ... | "started_tutorial"
3 | A | ... | "completed_tutorial"
4 | A | ... | "some other activity"
5 | A | ... | "logout"
5 | B | ... | "login"
6 | B | ... | "logout"
```
I think this should be something like
```
select *,
check(activity in ('completed_tutorial')) as completed_activity
from tbl
```
but I don't think I can use `check` in a select statement and this would be a constant flag rather than set to true only after it has been found.
Example of what I am trying to get:
```
pk | user | ... | activity | completed_tutorial
1 | A | ... | "login" | 0
2 | A | ... | "started_tutorial" | 0
3 | A | ... | "completed_tutorial" | 1
4 | A | ... | "some other activity" | 1
5 | A | ... | "logout" | 1
5 | B | ... | "login" | 0
6 | B | ... | "logout" | 0
```
|
You can filter SQL groups with the [`HAVING` clause](http://www.postgresql.org/docs/9.4/static/queries-table-expressions.html#QUERIES-GROUP). For example, you can group your table by users and their activity, and then filter it to contain only those that have completed the tutorial:
```
SELECT user FROM tbl
GROUP BY user, activity
HAVING activity = 'completed_tutorial';
```
**EDIT:** After OP has edited their question, this is my new answer. Here, I assume that your table has a date field.
```
SELECT *, COALESCE(date >= (
SELECT date FROM tbl WHERE activity = 'completed_tutorial'
AND user = outertbl.user
), FALSE)
FROM tbl AS outertbl
ORDER BY date
```
Notice that such a query is essentially O(N²) when unoptimised, so I would recommend instead just getting the data from the database and then processing it in your program.
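If the table does have a date (or other ordering) column, a single-pass alternative is a running window aggregate; this is a sketch, assuming a `date` column and PostgreSQL 8.4+ window-function support:

```
SELECT *,
       MAX(CASE WHEN activity = 'completed_tutorial' THEN 1 ELSE 0 END)
           OVER (PARTITION BY "user"
                 ORDER BY date
                 ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS completed_tutorial
FROM tbl;
```

The flag stays 0 until the first `completed_tutorial` row for a user and is 1 from that row on, matching the example output. (`user` is a reserved word in PostgreSQL, hence the double quotes.)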
|
I am not sure about the speed of this, but what about the following solution?
```
SELECT
user
,max(CASE
WHEN activity = 'completed_tutorial' THEN 1
ELSE 0
END) AS completed_tutorial
FROM tbl
GROUP BY user
;
```
|
Check whether value exists in column for each group
|
[
"",
"sql",
"postgresql",
""
] |
I have a select statement that is showing me all the data from table original every time it does not match the values on table real\_values.
So every time it does not match, instead of showing me which routes have the wrong values for capacity, I would like the query to update it with the correct values.
Here is a shorter version to use as an example:
<http://sqlfiddle.com/#!4/6a904/1>
Instead of being a select statement, how could I just update the values? I have tried some things I've seen online but nothing seems to work.
|
@DavidFaber's answer is how most people would do this. However, for this kind of query, I prefer to use `merge` over `update`:
```
MERGE INTO original o
USING real_values rv
ON (o.origin = rv.origin AND o.destination = rv.destination)
WHEN MATCHED THEN
UPDATE SET
o.capacity_wt = rv.capacity_wt, o.capacity_vol = rv.capacity_vol
WHERE o.capacity_wt != rv.capacity_wt
OR o.capacity_vol != rv.capacity_vol
```
(It was unclear to me from your question whether you want to update `original` or `real_values`, so I chose one. If I got this wrong, reversing it should be trivial.)
I find `merge` more readable and easier to use when you want to update multiple columns.
|
The usual form of such an update query in Oracle is the following:
```
UPDATE table1 t1
SET t1.value = ( SELECT t2.value FROM table2 t2
WHERE t2.key = t1.key )
WHERE EXISTS ( SELECT 1 FROM table2 t2
WHERE t2.key = t1.key );
```
I'm confused though. You've tagged this question `oracle` and `sql-server` but your fiddle link uses MySQL.
|
SQL - Update table values from values in another table
|
[
"",
"sql",
"oracle",
"oracle11g",
""
] |
I've come across some weird behavior that's preventing me from setting up a query in the way I'd like to. Would appreciate any ideas. If there's any information about the tables that would be helpful, please let me know. I looked through them and nothing jumped out at me that might cause this but nor did I know what I was looking for. Here is the behavior.
This works fine:
```
Select *
From
SCHEMA_A.TABLE_A a,
SCHEMA_B.TABLE_B b,
SCHEMA_C.TABLE_C c,
SCHEMA_A.TABLE_D d
Where
b.friend_id = c.friend_id
AND a.group_id = d.group_id
AND b.group_cd = d.group_cd
```
But this returns ORA-00904: b.friend\_id = c.friend\_id: invalid identifier
```
Select *
From
SCHEMA_A.TABLE_A a,
SCHEMA_B.TABLE_B b,
SCHEMA_A.TABLE_D d
Join
SCHEMA_C.TABLE_C c
On
b.friend_id = c.friend_id
Where
a.group_id = d.group_id
AND b.group_cd = d.group_cd
```
This returns ORA-00904: b.group\_cd = d.group\_cd: invalid identifier
```
Select *
From
SCHEMA_A.TABLE_A a,
SCHEMA_B.TABLE_B b
Join
SCHEMA_C.TABLE_C c
On
b.friend_id = c.friend_id
Join
SCHEMA_A.TABLE_D d
On
a.group_id = d.group_id
AND b.group_cd = d.group_cd
```
And this works again:
```
Select *
From
SCHEMA_A.TABLE_A a,
SCHEMA_B.TABLE_B b
Join
SCHEMA_C.TABLE_C c
On
b.friend_id = c.friend_id
Join
SCHEMA_A.TABLE_D d
On
b.group_cd = d.group_cd
Where
a.group_id = d.group_id
```
|
Try with the `using` keyword which is designed for `join` on same column name
```
Select *
From
SCHEMA_A.TABLE_A a,
SCHEMA_B.TABLE_B b
Join
SCHEMA_C.TABLE_C USING(friend_id)
Join
SCHEMA_A.TABLE_D d using(group_id)
where b.group_cd = d.group_cd;
```
Also make sure that you execute the other queries as the right user, as a user that does not have the correct permissions will also throw `invalid identifier`.
**Edit :** The actual problem is that you are joining `TABLE_C` with `TABLE_D`, but the join condition refers to `TABLE_B`. Change it to
```
Select *
From
SCHEMA_A.TABLE_A a,
SCHEMA_A.TABLE_D d,
SCHEMA_B.TABLE_B b
Join
SCHEMA_C.TABLE_C c using(friend_id)
Where
a.group_id = d.group_id
AND b.group_cd = d.group_cd;
```
Apply the same logic to the other queries: when doing a join, you do not join to a group of tables, but to the last table mentioned in the `from` clause, which is why the join condition did not have access to `TABLE_B`.
For other people that might have an `invalid identifier` error, make sure your database is not case sensitive, else make sure to use the right case.
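A way to sidestep the scoping issue entirely is to use explicit ANSI joins throughout instead of mixing comma joins with `JOIN`; an untested sketch with the same tables and conditions as the question:

```
Select *
From
    SCHEMA_A.TABLE_A a
Join
    SCHEMA_A.TABLE_D d
On
    a.group_id = d.group_id
Join
    SCHEMA_B.TABLE_B b
On
    b.group_cd = d.group_cd
Join
    SCHEMA_C.TABLE_C c
On
    b.friend_id = c.friend_id
```

Each `On` clause only references tables already listed above it, so there is no scoping conflict.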
|
I believe that you have run into the intricacy in SQL dealing with join scoping/precedence. Since I am not myself a database programmer, I can't really tell you why it works the way it does, but I do see some logic in the way the query parser is behaving.
In your first query, all the joins are cross joins. As a result, they should have the same precedence. So all the joined columns are known when the join is evaluated.
In the second query, you have a combination of cross joins and inner joins. If we assume that inner join has precedence over cross joins, then tables c and d are joined before any other tables are added to the mix. So when the inner join's condition references `b.friend_id`, table b is outside of the inner join, and the query parser isn't able to use that column in evaluating the join.
In the third query, the inner join between tables b, c and d take precedence over the cross join. So column `a.group_id` isn't available in evaluating the inner join condition. When you take table a out of the inner join condition in the final query, the query no longer has a conflicting precedence.
|
Query throws ORA-00904 Invalid Identifier when using joins
|
[
"",
"sql",
"oracle",
""
] |
I'm getting an error as follows:
ORA-01858 - a non-numeric character was found where a numeric character was expected.
Although it isn't really accurate, I believe it's related to the date formatting in the following lines.
```
ELSIF par_report_eff_date_start IS NOT NULL AND par_report_eff_date_start = to_date(par_report_eff_date_start, 'fxdd-mm-yyyy')
```
and
```
ELSIF par_report_eff_date_end IS NOT NULL AND par_report_eff_date_end = to_date( par_report_eff_date_end ,'fxDD-MM-YYYY')
```
I'm trying to get the parameters to render date in the format of 'dd/mm/yyyy' as it is passed in, but i'm not sure how to get around this.
I have looked, but have limited web access at work, so I can't use the regular sites.
```
procedure collect_mon_comm_bal_data_part (
par_report_eff_date_start DATE DEFAULT NULL, --dd/mm/yyyy;
par_report_eff_date_end DATE DEFAULT NULL) --dd/mm/yyyy;
is
v_report_eff_date_start DATE;
v_report_eff_date_end DATE;
BEGIN
IF par_report_eff_date_start IS NULL
THEN
-- Oracle job runs at the beginning of each month
select trunc(trunc(sysdate,'Mon')-1,'Mon')
into v_report_eff_date_start
from dual; -- Start of month Var
ELSIF par_report_eff_date_start IS NOT NULL AND par_report_eff_date_start = to_char(par_report_eff_date_start, 'fxdd/mm/yyyy')
THEN
v_report_eff_date_start := par_report_eff_date_start;
DBMS_OUTPUT.PUT_LINE(par_report_eff_date_start || 'Is The Start Date');
ELSE
DBMS_OUTPUT.PUT_LINE(par_report_eff_date_start || 'Is is the wrong format, needs tp be in dd/mm/yyyy');
GOTO the_end;
END IF;
IF par_report_eff_date_end IS NULL
THEN
-- Oracle job runs at the beginning of each month
select trunc(sysdate,'MM')-1
into v_report_eff_date_end
from dual; -- Start of month Var
ELSIF par_report_eff_date_end IS NOT NULL AND par_report_eff_date_end = to_char(par_report_eff_date_end, 'fxdd/mm/yyyy')
THEN
v_report_eff_date_end := par_report_eff_date_end;
DBMS_OUTPUT.PUT_LINE(par_report_eff_date_end || 'Is The Start Date');
ELSE
DBMS_OUTPUT.PUT_LINE(par_report_eff_date_end || 'Is is the wrong format, needs tp be in dd/mm/yyyy');
GOTO the_end;
END IF;
END;
```
|
I notice that you have a parameter `par_report_eff_date_start`, and a variable `v_report_eff_date_start`. You probably want to compare those in the following line:
```
ELSIF par_report_eff_date_start IS NOT NULL AND par_report_eff_date_start = to_char(par_report_eff_date_start, 'fxdd/mm/yyyy')
```
But now you are comparing the date **parameter** with a text value derived from the parameter value. I don't think that's what you want.
A date is just a date, and you only have to think about format masks when converting it to another data type (like VARCHAR2).
So why not use:
```
ELSIF par_report_eff_date_start IS NOT NULL AND trunc(par_report_eff_date_start) = trunc(sysdate, 'Mon')
```
I've added trunc() to the parameter value, and because you want the first day of the month (I guess!!), trunc(sysdate, 'Mon') will do just fine.
|
You don't want to convert a `date` to a `date`. You want to convert it to a character string. So try:
```
to_char(par_report_eff_date_start, 'fxdd-mm-yyyy')
```
I'm not sure what the rest of the logic is supposed to be doing. But, you use `to_date()` to convert a value *to* a date and `to_char()` to convert a value to a string.
|
Oracle Date Format with input Parametres - PLSQL
|
[
"",
"sql",
"oracle",
"date",
"plsql",
""
] |
Let's say we have this table with columns `RowID` and `Call`:
```
RowID Call DesiredOut
1 A 0
2 A 0
3 B
4 A 1
5 A 0
6 A 0
7 B
8 B
9 A 2
10 A 0
```
I want to SQL query the last column `DesiredOut` as follows:
Each time `Call` is 'A' go back until 'A' is found again and count the number of records which are in between two 'A' entries.
Example: `RowID` 4 has 'A' and the nearest predecessor is in `RowID` 2. Between `RowID` 2 and `RowID` 4 we have one `Call` 'B', so we count 1.
Is there an elegant and performant way to do this with ANSI SQL?
|
You could use a query to find the previous `Call = A` row. Then, you could count the number of rows between that row and the current row:
```
select RowID
, `Call`
, (
select count(*)
from YourTable t2
where RowID < t1.RowID
and RowID > coalesce(
(
select RowID
from YourTable t3
where `Call` = 'A'
and RowID < t1.RowID
order by
RowID DESC
limit 1
),0)
)
from YourTable t1
```
[Example at SQL Fiddle.](http://sqlfiddle.com/#!2/af0f5/6/0)
|
I would approach this by first finding the `rowid` of the previous "A" value. Then count the number of values in-between.
The following query implements this logic using correlated subqueries:
```
select t.*,
(case when t.call = 'A'
then (select count(*)
from table t3
where t3.id < t.id and t3.id > prevA
)
end) as InBetweenCount
from (select t.*,
(select max(rowid)
from table t2
where t2.call = 'A' and t2.rowid < t.rowid
) as prevA
from table t
) t;
```
If you know that `rowid` is sequential with no gaps, you can just use subtraction instead of a subquery for the calculation in the outer query.
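If the platform supports window functions and `rowid` is sequential with no gaps (the assumption in the last paragraph), both correlated subqueries collapse into a single `LAG`; a sketch, with `YourTable` standing in for the real table name:

```
select RowID,
       Call,
       case when Call = 'A'
            then RowID - coalesce(lag(RowID) over (partition by Call order by RowID), 0) - 1
       end as DesiredOut
from YourTable;
```

Within the `Call = 'A'` partition, `lag(RowID)` is the previous 'A' row, so the difference minus one is the number of rows in between; `coalesce(..., 0)` makes the first 'A' come out as 0, as in the sample output.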
|
SQL: Get running row delta for records
|
[
"",
"sql",
""
] |
I have a table with a list of timestamps and I want to calculate the duration between the timestamps
The Table list looks like this
```
MYTIME
2015-01-30-08.12.51.141000
2015-01-30-08.12.51.142000
2015-01-30-08.12.51.142000
2015-01-30-08.12.51.162000
2015-01-30-08.12.51.170000
2015-01-30-08.12.51.290000
```
what I want as a result set is
```
first timestamp next timestamp duration in microseconds
2015-01-30-08.12.51.141000 2015-01-30-08.12.51.142000 1000
2015-01-30-08.12.51.142000 2015-01-30-08.12.51.142000 0000
2015-01-30-08.12.51.142000 2015-01-30-08.12.51.162000 20000
```
I am using DB2 for i SQL but not sure how to do it with pivot table or CTE?
|
Here is my DB2 solution:
I have a table with a single column filled with tags and timestamps.
First I created a CTE to filter to the records I wanted, then pulled out the tags, the timestamps, and a newly created rank column.
With this CTE table I used the rank to join row 1 to 2, then 2 to 3, etc. to get my durations.
*Thanks to Milan for heading me in the right direction.*
```
With TagTimeTable (myTag, myTime, myRank) as
(
    Select substring(SPOOLRCD FROM 34 FOR 6) AS mytag,
           CAST(substring(SPOOLRCD FROM 60 FOR 26) AS TIMESTAMP) AS mytime,
           ROW_NUMBER() OVER(ORDER BY substring(SPOOLRCD FROM 60 FOR 26)) as myrank
    FROM MYLIB.SPOOLTBL
    WHERE SPOOLRCD LIKE '%DSPLY%'
    order by mytime
)
SELECT cast('0001-01-01 00:00:00.000' as timestamp)
         + TIMESTAMPDIFF(1, CHAR(time2.myTime - time1.myTime)) MICROSECOND as TIME_IN_MICROSECONDS,
       TIME1.MYTIME,
       TIME2.MYTIME
FROM TagTimeTable TIME1
INNER JOIN TagTimeTable TIME2
    ON (TIME1.myRank = TIME2.myRank - 1);
```
|
It depends on which SQL server you are using: does it implement window functions or not?
This is a row-to-row operation, taking the difference between the previous and the next timestamp.
In MS SQL 2008:
```
WITH AUXILIERY_TBL AS (
SELECT
A.[Time_Stamp]
,ROW_NUMBER() OVER (ORDER BY A.[Time_Stamp]) AS Ordr
FROM [Time_Stamps_TBL] AS A
)
SELECT
A.[Time_Stamp]
,DATEDIFF(MINUTE ,B.[Time_Stamp] ,A.[Time_Stamp] ) AS Duration_Minutes
FROM AUXILIERY_TBL AS A
LEFT JOIN AUXILIERY_TBL AS B
ON (A.Ordr=(B.Ordr+1))
```
In MS SQL 2012/14 it would be much easier without need of temporary table:
```
SELECT
A.[Time_Stamp]
,DATEDIFF(MINUTE, (LAG(A.[Time_Stamp]) OVER (ORDER BY A.[Time_Stamp])), A.[Time_Stamp]) AS Duration_Minutes
FROM [Time_Stamps_TBL] AS A
```
You can change DATEDIFF function from minutes to hours or seconds, what suit you better.
|
How to calculate duration between a list of timestamps
|
[
"",
"sql",
"time",
"timestamp",
"duration",
""
] |
I want to query a giant table (SQL Server) for the latest version of a set of record e.g.
Table:
```
Id, key, Ver
1, A, 1
2, B, 1
3, A, 2
4, B, 2
5, B, 3
```
Result:
```
Id, Key, Ver
3, A, 2
5, B, 3
```
The join method mentioned [here](https://stackoverflow.com/questions/9430743/selecting-most-recent-and-specific-version-in-each-group-of-records-for-multipl) will work, but it only works well if you want the latest version of all keys; the join becomes too much overhead when we only care about a subset of keys. So I would like to ask whether we should do a loop of
```
select top 1 * from table where key = 'A' order by ver desc
```
or
```
select max(ver), key from table where key in ('A', 'B') group by key
```
or is there a better way of doing so? Cheers
|
Essentially, looping over a set of keys one at a time and performing an operation `n` times is a bad approach with any RDBMS. It prevents the query engine from optimizing and guarantees `n` seeks/scans of a table or index.
If you can express your query as a set based operation this will allow the query engine to fully optimize your operation, creating a much more optimal query plan.
If you are using SQL Server 2008 or greater, use [`RANK`](https://msdn.microsoft.com/en-us/library/ms176102.aspx) in a derived table (window functions cannot appear directly in a `WHERE` clause),
```
SELECT
    [Id],
    [Key],
    [Ver]
FROM (
    SELECT
        [Id],
        [Key],
        [Ver],
        RANK() OVER (PARTITION BY [Key] ORDER BY [Ver] DESC) AS [Rnk]
    FROM
        [Table]
    WHERE
        [Key] IN ('A', 'B')) [Ranked]
WHERE
    [Rnk] = 1;
```
with more generic SQL,
```
SELECT
[T1].[Id],
[T2].[Key],
[T2].[Ver]
FROM (
SELECT
[Key],
MAX([Ver]) [Ver]
FROM
[Table]
WHERE
[Key] IN ('A', 'B')
GROUP BY
[Key]) [T2]
JOIN
[Table] [T1]
ON [T1].[Key] = [T2].[Key] AND [T1].[Ver] = [T2].[Ver];
```
To ensure performance of both queries, create a covering index on `Key` then `Ver`.
```
CREATE UNIQUE NONCLUSTERED INDEX [IX_Table_Key_Ver]
ON [Table] ([Key], [Ver] DESC);
```
|
Use a sub-select to find the max ver for a key:
```
select * from table t1
where ver = (select max(ver) from table
where key = t1.key)
```
|
Best way to get the latest version of a specific list of records in SQL
|
[
"",
"sql",
"sql-server",
""
] |
```
select COUNT(analysed) from Results where analysed="True"
```
I want to display the count of rows in which the analysed value is true.
However, my query gives the error: "The multi-part identifier "Results.runId" could not be bound.".
This is the actual query:
```
select ((SELECT COUNT(*) AS 'Count'
FROM Results
WHERE Analysed = 'True')/failCount) as PercentAnalysed
from Runs
where Runs.runId=Analysed.runId
```
My table schema is:



The value I want for a particular runId is: (the number of entries where analysed=true)/failCount
**EDIT** : How to merge these two queries?
i)
```
select runId, Runs.prodId, prodDate, prodName, buildNumber, totalCount as TotalTestCases,
       (passCount*100)/(passCount+failCount) as PassPercent,
       passCount, failCount, runOwner
from Runs, Product
where Runs.prodId = Product.prodId
```
ii)
```
select (cast(counts.Count as decimal(10,4)) / cast(failCount as decimal(10,4))) as PercentAnalysed
from Runs
inner join
(
    SELECT COUNT(*) AS 'Count', runId
    FROM Results
    WHERE Analysed = 'True'
    GROUP BY runId
) counts
on counts.runId = Runs.runId
```
I tried this:
```
select runId, Runs.prodId, prodDate, prodName, buildNumber, totalCount as TotalTestCases,
       (passCount*100)/(passCount+failCount) as PassPercent,
       passCount, failCount, runOwner, counts.runId,
       (cast(counts.Count as decimal(10,4)) / cast(failCount as decimal(10,4))) as PercentAnalysed
from Runs, Product
inner join
(
    SELECT COUNT(*) AS 'Count', runId
    FROM Results
    WHERE Analysed = 'True'
    GROUP BY runId
) counts
on counts.runId = Runs.runId
where Runs.prodId = Product.prodId
```
but it gives an error.
|
Your problems are arising from improper joining of tables. You need information from both Runs and Results, but they aren't combined properly in your query. You have the right idea with a nested subquery, but it's in the wrong spot. You're also referencing the Analysed table in the outer where clause, but it hasn't been included in the from clause.
Try this instead:
```
select (cast(counts.Count as decimal(10,4)) / cast(failCount as decimal(10,4))) as PercentAnalysed
from Runs
inner join
(
SELECT COUNT(*) AS 'Count', runId
FROM Results
WHERE Analysed = 'True'
GROUP BY runId
) counts
on counts.runId = Runs.runId
```
I've set this up as an inner join to eliminate any runs which don't have analysed results; you can change it to a left join if you want those rows, but will need to add code to handle the null case. I've also added casts to the two numbers, because otherwise the query will perform integer division and truncate any fractional amounts.
|
I'd try the following query:
```
SELECT COUNT(*) AS 'Count'
FROM Results
WHERE Analysed = 'True'
```
This will count all of your rows where Analysed is `'True'`. This should work if the datatype of your `Analysed` column is either BIT (Boolean) or a string type (VARCHAR, NVARCHAR).
|
Count query giving wrong column name error
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I'm unsure the best route to take for this example:
A table that holds information for a job; salary, dates of employment etc. The field I am wondering how best to store is 'job\_title'.
* Job title is going to be used as part of an auto-complete field so
I'll be using a query to fetch results.
* The same job title will be used by multiple jobs in the DB.
* Job title is going to be a large part of many queries in the
application.
* A single job only ever has one title.
1 . Should I have a 2 tables, job and job\_title, job table referencing the job\_title table for its name.
2 . Should I have a 2 tables, job and job\_title but store title as a direct value in job, job\_title just storing a list of all preexisting values (somewhat redundant)?
3 . Or should I not use a reference table at all / other suggestion.
What is your choice of design in this situation, and how would it change in a one to many design?
This is an example, the actual design is much larger however I think this well conveys the issue.
**Update, To clarify:**
A User (outside scope of question) has many Jobs, a job (start/end date, {job title}) has a title, title ( name (ie. 'Web Developer' )
|
Your option 1 is the best design choice. Create the two tables along these lines:
* jobs (job\_id PK, title\_id FK not null, start\_date, end\_date, ...)
* job\_titles (title\_id PK, title)
The PKs should have clustered indexes; jobs.title\_id and job\_titles.title should have nonclustered or secondary indexes; job\_titles.title should have a unique constraint.
This relationship can be modeled as 1-to-1 or 1-to-many (one title, many jobs). To enforce 1-to-1 modeling, apply a unique constraint to jobs.title\_id. However, you should *not* model this as a 1-to-1 relationship, because it's not. You even say so yourself: "The same job title will be used by multiple jobs in the DB" and "A single job only ever has one title." An entry in the jobs table represents *a certain position held by a certain user during a certain period of time*. Because this is a 1-to-many relationship, a separate table is the correct way to model the data.
Here's a simple example of why this is so. Your company only has one CEO, but what happens if the current one steps down and the board appoints a new one? You'll have two entries in jobs which both reference the same title, even though there's only one CEO "position" and the two users' job date ranges don't overlap. If you enforce a 1-to-1 relationship, modeling this data is impossible.
Why these particular indexes and constraints?
* The ID columns are PKs and clustered indexes for hopefully obvious reasons; you use these for joins
* jobs.title\_id is an FK for hopefully obvious data integrity reasons
* jobs.title\_id is not null because every job should have a title
* jobs.title\_id needs an index in order to speed up joins
* job\_titles.title has an index because you've indicated you'll be querying based on this column (though I wouldn't query in such a fashion, especially since you've said there will be many titles; see below)
* job\_titles.title has a unique constraint because there's no reason to have duplicates of the same title. You can (and will) have multiple jobs with the same title, but you don't need two entries for "CEO" in job\_titles. Enforcing this uniqueness will preserve data integrity useful for reporting purposes (e.g. plot the productivity of IT's web division based on how many "web developer" jobs are filled)
Remarks:
> Job title is going to be used as part of an auto-complete field so I'll be using a query to fetch results.
As I mentioned before, use key-value pairs here. Fetch a list of them into memory in your app, and query that list for your autocomplete values. Then send the ID off to the DB for your actual SQL query. The queries will perform better that way; even with indexes, searching integers is generally quicker than searching strings.
[You've said that titles will be user created](https://stackoverflow.com/questions/28171722/db-design-for-one-to-one-single-column-table/28187015#comment44712851_28171782). Put some input sanitization and validation process in place, because you don't want redundant entries like "WEB DEVELOPER", "web developer", "web developer", etc. Validation should occur at both the application and DB levels; the unique constraint is part (but not all) of this. [Prodigitalson's remark](https://stackoverflow.com/questions/28171722/db-design-for-one-to-one-single-column-table/28187015#comment44712757_28171722) about separate machine and display columns is related to this issue.
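A minimal DDL sketch of the two-table design above (generic SQL; the names and index syntax are illustrative and may need adjusting for your RDBMS):

```
CREATE TABLE job_titles (
    title_id   INT PRIMARY KEY,
    title      VARCHAR(100) NOT NULL UNIQUE  -- no duplicate titles
);

CREATE TABLE jobs (
    job_id     INT PRIMARY KEY,
    title_id   INT NOT NULL REFERENCES job_titles (title_id),  -- every job has a title
    start_date DATE,
    end_date   DATE
);

-- nonclustered index on the FK to speed up joins
CREATE INDEX ix_jobs_title_id ON jobs (title_id);
```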
|
Edited after getting the clarification:
A table like this is enough - just add the `job_title_id` column as a foreign key in the main `member` table:
> ```
> ---- "job_title" table ---- (store the job_title)
> 1. pk - job_title_id
> 2. unique - job_title_name <- index this
> ```
**original answer**
You need to clarify what the `job_title` is going to represent:
* a person that holds this position?
* the division/department that has this position?
* a certain set of attributes? (like Sales always has a commission)
* or just a string of what it was called?
From what I read so far, you just need the `job_title` as some sort of dimension - make an id for it, make the string searchable - and that's it.
example
```
---- "employee" table ---- (store employee info)
1. pk - employee_id
2. fk - job_title_id
3. other attribute (contract_start_date, salary, sex, ... so on ...)
---- "job_title" table ---- (store the job_title)
1. pk - job_title_id
2. unique - job_title_name <- index this
---- "employee_job_title_history" table ---- (We can check the employee job history here)
1. pk - employee_id
2. pk - job_title_id
3. pk - is_effective
4. effective_date [edited: this needs to be PK too - thanks to KM.]
```
I still think you need to provide us with a use case - I believe that will greatly improve both of our understanding.
|
DB design for one-to-one single column table
|
[
"",
"mysql",
"sql",
"database",
""
] |
I have the following query:
```
SELECT
*,
(SELECT locked FROM forum_topics t WHERE t.id = topicid) as topiclocked,
forum_subtopics.locked as stlocked
FROM forum_subtopics
WHERE topicid = 1;
```
I would like to use the fields `topiclocked` and `stlocked` to create a third custom coloumn named `locked`. I tried the following, but MySQL told me that `topiclocked` does not exist:
```
SELECT
*,
(SELECT locked FROM forum_topics t WHERE t.id = topicid) as topiclocked,
forum_subtopics.locked as stlocked,
(topiclocked || stlocked) as locked
FROM forum_subtopics
WHERE topicid = 1;
```
> Unknown column 'topiclocked' in 'field list'
How can I create that third coloumn in a way that it actually works?
|
You can assign variables in the `select`-part, like this
```
SELECT
*,
@topiclocked:=(SELECT locked FROM forum_topics t WHERE t.id = topicid) as topiclocked,
@stlocked:=forum_subtopics.locked as stlocked,
(@topiclocked || @stlocked) as locked
FROM forum_subtopics
WHERE topicid = 1;
```
<http://sqlfiddle.com/#!2/572453/1>
I would rewrite that query with a `JOIN` instead, that way you don't have to do a *subquery* and you'll have `locked` from both tables available.
```
SELECT s.*,
t.locked as topiclocked,
s.locked as stlocked,
(t.locked || s.locked) as locked
FROM forum_subtopics s
JOIN forum_topics t on (s.topicid = t.id)
WHERE topicid = 1;
```
|
You cannot use the alias name (`topiclocked`) in the same `select` statement. Make the query a `sub-select` and do the concatenation in the `outer query`. Try this:
```
SELECT *,
( topiclocked || stlocked ) AS locked
FROM (SELECT *,
(SELECT locked
FROM forum_topics t
WHERE t.id = topicid) AS topiclocked,
forum_subtopics.locked AS stlocked
FROM forum_subtopics
WHERE topicid = 1) a;
```
|
MySQL SELECT use 2 newly created "coloumns" in a third
|
[
"",
"mysql",
"sql",
""
] |
This is quite a mouthful for me.
One of my challenges is that I don't know how to formulate the question - which is obvious by the title.
I'll try to illustrate my problem:
I have a table, A:
```
ID LocationID
11 185
12 185
13 206
```
And a table B:
```
ID AID Position Value
1 11 1 4
2 11 3 8
3 11 5 4
4 12 1 4
5 12 2 4
6 12 3 5
```
Table B is associated to table A by `ID` and `AID`. I would like to construct a query which has the following filters:
`Position = 1 AND Value = 4` and `Position = 3 AND Value = 5`, and which gives me a list of distinct IDs from `A.ID` that satisfy **all** the given criteria.
With this I mean that if I join the two tables together with an INNER JOIN, I only wish to have `A.ID = 12`.
My own start to solving this problem was something along the lines of:
```
SELECT DISTINCT A.ID
FROM A
INNER JOIN B ON (A.ID = B.AID)
WHERE
A.LocationID = 185 AND
(B.Position = 1 AND B.Value = 4) OR
(B.Position = 3 AND B.Value = 5)
```
Which obviously doesn't work. I thought I had a clear solution to this but when I come to think of it, I really don't.
I'm a bit stumped by this problem and I'm having a hard time searching for a strategy on how to solve it since I don't even know what keywords to use in my search.
|
You can do this with `GROUP BY` and `HAVING`:
```
SELECT A.ID
FROM A
INNER JOIN B ON (A.ID = B.AID)
GROUP BY A.ID
HAVING MAX(CASE WHEN A.LocationID = 185 THEN 1 END) = 1
AND MAX(CASE WHEN B.Position = 1 AND B.Value = 4 THEN 1 END) = 1
AND MAX(CASE WHEN B.Position = 3 AND B.Value = 5 THEN 1 END) = 1
```
Demo: [SQL Fiddle](http://www.sqlfiddle.com/#!6/f67b7/2/0)
Actually can move the `LocationID` criteria to `WHERE`:
```
SELECT A.ID
FROM Table1 A
INNER JOIN Table2 B ON (A.ID = B.AID)
WHERE A.LocationID = 185
GROUP BY A.ID
HAVING MAX(CASE WHEN B.Position = 1 AND B.Value = 4 THEN 1 END) = 1
AND MAX(CASE WHEN B.Position = 3 AND B.Value = 5 THEN 1 END) = 1
```
|
```
SELECT DISTINCT A.ID
FROM A
WHERE A.ID IN(
SELECT AID
FROM B
WHERE (B.Position = 1 AND B.Value = 4) OR
(B.Position = 3 AND B.Value = 5)
)
```
If I understand you correctly.
```
SELECT DISTINCT A.ID
FROM A
INNER JOIN B ON (A.ID = B.AID)
WHERE
    A.LocationID = 185 AND
    (B.Position = 1 AND B.Value = 4) OR
    (B.Position = 3 AND B.Value = 5)
```
|
T-SQL - Find distinct values in table only if joined rows satisfy a list of criterias
|
[
"",
"sql",
"sql-server",
"join",
"set-theory",
""
] |
I have 2 tables :
```
interests (storing the interest ID and name)
person_interests(storing the person_id and interest_id)
```
How do I select all the interests that a particular person has not selected?
I have tried the following SQL Query and am still not getting the desired result
```
SELECT *
FROM interests LEFT JOIN person_interests
ON interests.id=person_interests.person_id
WHERE person_interests.id IS NULL
AND person_id=66;
```
|
Use `NOT EXISTS`
```
SELECT *
FROM interests
WHERE NOT EXISTS (
SELECT person_interests.interest_id
FROM person_interests
WHERE person_id = 66
AND interests.id = person_interests.interest_id
)
```
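A quick way to convince yourself that `NOT EXISTS` behaves as an anti-join is a small SQLite session (the sample rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE interests (id INTEGER, name TEXT);
CREATE TABLE person_interests (person_id INTEGER, interest_id INTEGER);
INSERT INTO interests VALUES (1, 'music'), (2, 'sport'), (3, 'films');
INSERT INTO person_interests VALUES (66, 1), (66, 3), (42, 2);
""")
rows = conn.execute("""
    SELECT * FROM interests
    WHERE NOT EXISTS (
        SELECT 1 FROM person_interests
        WHERE person_id = 66
          AND interests.id = person_interests.interest_id
    )
""").fetchall()
print(rows)  # person 66 has not selected 'sport'
```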
|
```
SELECT * from interests
WHERE interest_id NOT IN
(SELECT interest_id FROM person_interests WHERE person_id=66)
```
|
Opposite of UNION SQL Query
|
[
"",
"sql",
"postgresql",
""
] |
I'm a bit confused by the stored procedure syntax in Oracle.
I started with a simple:
```
select * from test_table;
```
It works, then I put it in a proc:
```
CREATE OR REPLACE PROCEDURE example
IS
BEGIN
select * from test_table;
END;
```
Doesn't work. Expected "INTO" is the error message I get. Now, I've seen syntax examples of SQL Server code that just shoves a select statement into a proc and it works instantly, but that doesn't seem to be the case here.
|
T-SQL and PL/SQL are completely different languages. In particular, for PL/SQL you have to select the result into some variable or cursor. What you plan to do with the record data, whether you process it in the procedure or return it to the caller, will determine what you have to do.
In your example, if you want to return the record set, you would do something like this:
```
CREATE OR REPLACE PROCEDURE example (
p_recordset OUT SYS_REFCURSOR) AS
BEGIN
OPEN p_recordset FOR
select * from test_table;
END example ;
```
See [this link](http://oracle-base.com/articles/misc/using-ref-cursors-to-return-recordsets.php) for examples.
|
Hello and welcome to SO.
I assume the full error you're seeing would be `PLS-00428: an INTO clause is expected in this SELECT statement` and it is correct, you must have an `INTO` statement in a stored procedure.
I recommend [this](http://docs.oracle.com/cd/E11882_01/appdev.112/e25519/selectinto_statement.htm#LNPLS01345) link for syntax relating to the `SELECT INTO` statement.
For your code I recommend this (I've changed from your `test_table` example to dba\_user):
```
CREATE OR REPLACE PROCEDURE example
IS
l_username VARCHAR(25);
BEGIN
select username INTO l_username from dba_users where user_id=1;
END;
/
```
**Note:** A plain `SELECT ... INTO` works with exactly **1** row. You cannot select multiple rows into scalar variables this way; for that you would need the `BULK COLLECT` feature. For examples of that feel free to read [here](http://www.java2s.com/Tutorial/Oracle/0440__PL-SQL-Statements/MultiplerowSELECTcommand.htm).
|
select stored proc plsql
|
[
"",
"sql",
"database",
"oracle",
"stored-procedures",
"plsql",
""
] |
Is there a way to run this across every column in a table:
```
UPDATE dbo.stage_a
SET Statement_Name= NULL
WHERE Statement_Name='""';
```
I am trying to tidy up after a data import.
|
A `dynamic query` built from `INFORMATION_SCHEMA.COLUMNS` can do this. Try:
```
DECLARE @cols NVARCHAR(max)='UPDATE dbo.stage_a set '
SELECT @cols += COLUMN_NAME + '=case when ' + COLUMN_NAME
                + ' = ''""'' then null else '+COLUMN_NAME+' end,'
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'stage_a'
AND TABLE_SCHEMA = 'dbo'
SELECT @cols = LEFT(@cols, Len(@cols) - 1)
PRINT @cols
EXEC Sp_executesql @cols
```
|
```
UPDATE Customers
SET ContactName='Alfred Schmidt', City='Hamburg'
WHERE CustomerName='Alfreds Futterkiste';
```
As in this example, you should specify all the columns you want to change, separated by commas.
If you want to set all the columns to NULL, why not delete the rows instead:
```
DELETE FROM Customers
WHERE CustomerName='Alfreds Futterkiste'
```
|
Global Update (All Rows) On One SQL Table With The Same Criteria
|
[
"",
"sql",
"sql-server-2008",
""
] |
I have 2 queries do the same job:
```
SELECT * FROM student_info
INNER JOIN class
ON student_info.id = class.studentId
WHERE student_info.name = 'Ken'
SELECT * FROM (SELECT * FROM student_info WHERE name = 'Ken') studInfo
INNER JOIN class
ON studInfo.id = class.studentId
```
Which one is faster? I guess the second but not sure, I am using Oracle 11g.
**UPDATED:**
My tables are non-indexed and I confirm the two PLAN\_TABLE\_OUTPUTs are almost the same:

[Full size image](https://i.stack.imgur.com/yyOcR.png)
|
In the latest versions of Oracle, the **optimizer** is smart enough to do its job. So it won't matter and both of your queries would be internally **optimized** to do the task efficiently. Optimizer might do a **query re-write** and opt an **efficient execution plan**.
Let's understand this with a small example of EMP and DEPT table. I will use two similar queries like yours in the question.
I will take two cases, first a predicate having a non-indexed column, second with an indexed column.
**Case 1 - predicate having a non-indexed column**
```
SQL> explain plan for
2 SELECT * FROM emp e
3 INNER JOIN dept d
4 ON e.deptno = d.deptno
5 where ename = 'SCOTT';
Explained.
SQL>
SQL> SELECT * FROM TABLE(dbms_xplan.display);
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------
Plan hash value: 3625962092
----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 59 | 4 (0)| 00:00:01 |
| 1 | NESTED LOOPS | | | | | |
| 2 | NESTED LOOPS | | 1 | 59 | 4 (0)| 00:00:01 |
|* 3 | TABLE ACCESS FULL | EMP | 1 | 39 | 3 (0)| 00:00:01 |
|* 4 | INDEX UNIQUE SCAN | PK_DEPT | 1 | | 0 (0)| 00:00:01 |
| 5 | TABLE ACCESS BY INDEX ROWID| DEPT | 1 | 20 | 1 (0)| 00:00:01 |
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
3 - filter("E"."ENAME"='SCOTT')
4 - access("E"."DEPTNO"="D"."DEPTNO")
Note
-----
- this is an adaptive plan
22 rows selected.
SQL>
SQL> explain plan for
2 SELECT * FROM (SELECT * FROM emp WHERE ename = 'SCOTT') e
3 INNER JOIN dept d
4 ON e.deptno = d.deptno;
Explained.
SQL>
SQL> SELECT * FROM TABLE(dbms_xplan.display);
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------
Plan hash value: 3625962092
----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 59 | 4 (0)| 00:00:01 |
| 1 | NESTED LOOPS | | | | | |
| 2 | NESTED LOOPS | | 1 | 59 | 4 (0)| 00:00:01 |
|* 3 | TABLE ACCESS FULL | EMP | 1 | 39 | 3 (0)| 00:00:01 |
|* 4 | INDEX UNIQUE SCAN | PK_DEPT | 1 | | 0 (0)| 00:00:01 |
| 5 | TABLE ACCESS BY INDEX ROWID| DEPT | 1 | 20 | 1 (0)| 00:00:01 |
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
3 - filter("ENAME"='SCOTT')
4 - access("EMP"."DEPTNO"="D"."DEPTNO")
Note
-----
- this is an adaptive plan
22 rows selected.
SQL>
```
**Case 2 - predicate having an indexed column**
```
SQL> explain plan for
2 SELECT * FROM emp e
3 INNER JOIN dept d
4 ON e.deptno = d.deptno
5 where empno = 7788;
Explained.
SQL>
SQL> SELECT * FROM TABLE(dbms_xplan.display);
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------
Plan hash value: 2385808155
----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 59 | 2 (0)| 00:00:01 |
| 1 | NESTED LOOPS | | 1 | 59 | 2 (0)| 00:00:01 |
| 2 | TABLE ACCESS BY INDEX ROWID| EMP | 1 | 39 | 1 (0)| 00:00:01 |
|* 3 | INDEX UNIQUE SCAN | PK_EMP | 1 | | 0 (0)| 00:00:01 |
| 4 | TABLE ACCESS BY INDEX ROWID| DEPT | 1 | 20 | 1 (0)| 00:00:01 |
|* 5 | INDEX UNIQUE SCAN | PK_DEPT | 1 | | 0 (0)| 00:00:01 |
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
3 - access("E"."EMPNO"=7788)
5 - access("E"."DEPTNO"="D"."DEPTNO")
18 rows selected.
SQL>
SQL> explain plan for
2 SELECT * FROM (SELECT * FROM emp where empno = 7788) e
3 INNER JOIN dept d
4 ON e.deptno = d.deptno;
Explained.
SQL>
SQL> SELECT * FROM TABLE(dbms_xplan.display);
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------
Plan hash value: 2385808155
----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 59 | 2 (0)| 00:00:01 |
| 1 | NESTED LOOPS | | 1 | 59 | 2 (0)| 00:00:01 |
| 2 | TABLE ACCESS BY INDEX ROWID| EMP | 1 | 39 | 1 (0)| 00:00:01 |
|* 3 | INDEX UNIQUE SCAN | PK_EMP | 1 | | 0 (0)| 00:00:01 |
| 4 | TABLE ACCESS BY INDEX ROWID| DEPT | 1 | 20 | 1 (0)| 00:00:01 |
|* 5 | INDEX UNIQUE SCAN | PK_DEPT | 1 | | 0 (0)| 00:00:01 |
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
3 - access("EMPNO"=7788)
5 - access("EMP"."DEPTNO"="D"."DEPTNO")
18 rows selected.
SQL>
```
Is there any difference between the explain plans in each case respectively? No.
|
You'd need to show us the query plans and the execution statistics to be certain. That said, assuming `name` is indexed and statistics are reasonably accurate, I'd be shocked if the two queries didn't generate the same plan (and, thus, the same performance). With either query, Oracle is free to evaluate the predicate before or after it evaluates the join so it is unlikely that it would choose differently in the two cases.
|
Comparing two join queries in Oracle
|
[
"",
"sql",
"oracle",
"join",
""
] |
I have to run a query on a table to get counts per specific field value.
```
rechargedate originid
------------------------
2015-02-02 3
2015-02-02 3
2015-02-02 1
2015-02-02 1
2015-02-02 3
2015-02-02 2
2015-02-01 2
2015-02-01 3
2015-02-01 1
2015-02-01 1
2015-02-01 2
```
And the query result should be like:
```
rechargedate orig1 orig2 orig3
-------------------------------------
2015-02-02 2 1 3
2015-02-01 2 2 1
```
How can I accomplish that with MS SQL 2008?
|
You can use conditional aggregation:
```
CREATE TABLE #temp(
RechargeDate DATE,
OriginID INT
)
INSERT INTO #temp VALUES
('2015-02-02', 3),('2015-02-02', 3),('2015-02-02', 1),
('2015-02-02', 1),('2015-02-02', 3),('2015-02-02', 2),
('2015-02-01', 2),('2015-02-01', 3),('2015-02-01', 1),
('2015-02-01', 1),('2015-02-01', 2);
SELECT
RechargeDate,
orig1 = SUM(CASE WHEN OriginID = 1 THEN 1 ELSE 0 END),
orig2 = SUM(CASE WHEN OriginID = 2 THEN 1 ELSE 0 END),
orig3 = SUM(CASE WHEN OriginID = 3 THEN 1 ELSE 0 END)
FROM #temp
GROUP BY RechargeDate
ORDER BY RechargeDate DESC
DROP TABLE #temp
```
**RESULT**
```
RechargeDate orig1 orig2 orig3
------------ ----------- ----------- -----------
2015-02-02 2 1 3
2015-02-01 2 2 1
```
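The same conditional aggregation runs unchanged on SQLite, which makes it easy to test outside SQL Server; here it is driven from Python with the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE recharges (rechargedate TEXT, originid INTEGER);
INSERT INTO recharges VALUES
('2015-02-02', 3), ('2015-02-02', 3), ('2015-02-02', 1),
('2015-02-02', 1), ('2015-02-02', 3), ('2015-02-02', 2),
('2015-02-01', 2), ('2015-02-01', 3), ('2015-02-01', 1),
('2015-02-01', 1), ('2015-02-01', 2);
""")
rows = conn.execute("""
    SELECT rechargedate,
           SUM(CASE WHEN originid = 1 THEN 1 ELSE 0 END) AS orig1,
           SUM(CASE WHEN originid = 2 THEN 1 ELSE 0 END) AS orig2,
           SUM(CASE WHEN originid = 3 THEN 1 ELSE 0 END) AS orig3
    FROM recharges
    GROUP BY rechargedate
    ORDER BY rechargedate DESC
""").fetchall()
print(rows)  # matches the expected query output in the question
```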
|
Use `PIVOT`. Check [this](https://technet.microsoft.com/en-us/library/ms177410(v=sql.105).aspx) to learn about `PIVOT` in SQL Server.
```
SELECT rechargedate,
[1] AS orig1,
[2] AS orig2,
[3] AS orig3
FROM #TEST
PIVOT(COUNT(originid) FOR originid IN([1],[2],[3])) AS Piv
ORDER BY rechargedate DESC
```
|
How to summarize by row and specific value of a field
|
[
"",
"sql",
"sql-server",
""
] |
In Oracle [documentation](http://docs.oracle.com/cd/B28359_01/server.111/b28318/datatype.htm#i22289) it is mentioned that
> NUMBER (precision, scale)
>
> If a precision is not specified, the column stores values as given. If
> no scale is specified, the scale is zero.
But `NUMBER` (without precision and scale) also accepts floating point numbers (34.30). According to the documentation, if scale is not specified it should default to zero, so `NUMBER` should allow only integers. Am I wrong?
And in [another questions](https://stackoverflow.com/a/593218/1705895) it is mentioned that
> default precision is 38, default scale is zero
So `NUMBER` and `NUMBER(*,0)` should be equal but they are not.
Where am I wrong?
|
The scale of a plain `NUMBER` does not default to zero; it is simply left undefined, so the column accepts values of any scale between -84 and 127. If you explicitly limit the scale to zero, the fractional part is dropped even when the inserted value has one:
```
create table aaaaa
(
sno number(*,0),
sno1 number
);
```
The `user_tab_columns` view will give you the precision and scale of each column:
```
SQL> select column_name,data_precision,data_scale from user_tab_columns where ta
ble_name = 'AAAAA';
COLUMN_NAME DATA_PRECISION DATA_SCALE
------------------------------ -------------- ----------
SNO 0
SNO1
SQL>
```
Please find the below workings
```
SQL> select * from aaaaa;
no rows selected
SQL> insert into aaaaa values (123.123123,123123.21344);
1 row created.
SQL> commit;
Commit complete.
SQL> select * from aaaaa;
SNO SNO1
---------- ----------
123 123123.213
SQL>
```
|
I think the sentence in the documentation
> If a precision is not specified, the column stores values as given. If no scale is specified, the scale is zero.
is a bit confusing. The scale is zero *if a precision is specified and a scale is not specified*. So, for example, `NUMBER(19)` is equivalent to `NUMBER(19,0)`. `NUMBER`, by itself, will have 38 digits of *precision* but no defined *scale*. So a column defined as a `NUMBER` can accept values of *any* scale, as long as their precision is 38 digits or less (basically, 38 numerical digits with a decimal point in any place).
You can also specify a scale without a precision: `NUMBER(*, <scale>)`, but that just creates the column with 38 digits of precision so I'm not sure it's particularly useful.
The table **How Scale Factors Affect Numeric Data Storage** on [this page](http://docs.oracle.com/cd/B28359_01/server.111/b28318/datatype.htm#CNCPT1832) might be helpful.
|
Is "NUMBER" and "NUMBER(*,0)" the same in Oracle?
|
[
"",
"sql",
"oracle",
"types",
"oracle11g",
"sqldatatypes",
""
] |
I am trying to get the results from two columns of the same type into a single column. For example:
```
table
id secId
1 2
3 1
5 1
1 4
```
I want to retrieve them like this, where id or secId = 1:
```
id
2
3
4
5
```
I have tried an inner join but it gives me 2 columns with redundant data.
|
You could use a `union` to get that
```
select id from yourTable where secId = 1
union
select secId from yourTable where id = 1
```
Use `union all` if you don't like the way `union` also does a `distinct`
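A small SQLite check of the `union` approach (the sample table name is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE pairs (id INTEGER, secId INTEGER);
INSERT INTO pairs VALUES (1, 2), (3, 1), (5, 1), (1, 4);
""")
rows = conn.execute("""
    SELECT id FROM pairs WHERE secId = 1
    UNION
    SELECT secId FROM pairs WHERE id = 1
    ORDER BY 1
""").fetchall()
print(rows)  # [(2,), (3,), (4,), (5,)]
```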
|
You don't need any `Join` for that, this simple query would do it
```
select if(id=1, secId, id)
from yourTable
where id = 1 or secId = 1;
```
You select all data that have `id` or `secId` equal to `1` then, depending on which of both is equal to `1` you display `secId` or `id`.
This assumes that only one of the two values can be `1`.
If there is the possibility to have two values, then you can use the union syntax as @AndreasWederbrand described.
|
how to join two columns into single column from the same table
|
[
"",
"mysql",
"sql",
"join",
"inner-join",
""
] |
I am trying to select all rows that has either the same username OR email as any users EXCEPT the one with ID 7.
But now it returns the row that actually has the ID 7. I was thinking that perhaps you can do some grouping of your SQL to solve this?
Can anyone tell me whats wrong?
This is my SQL that I am trying with in MySQL:
```
SELECT user_id FROM user WHERE username = "my_username" OR email = "my@email.com" AND user_id != 7
```
Previous I was running this in SQLite3 which worked fine for the purpose. But since MySQL does not support the EXCEPT this will not work in MySQL:
```
SELECT COUNT(*) FROM user WHERE username = :username OR email = :email EXCEPT SELECT COUNT(*) FROM user WHERE user_id = :user_id
```
Thanks in advance!
|
Put parenthesis around OR:
```
SELECT user_id
FROM user
WHERE (username = "my_username" OR email = "my@email.com")
AND user_id != 7
```
Without them, `AND` will be evaluated before `OR` because it has [higher priority](http://dev.mysql.com/doc/refman/5.0/en/operator-precedence.html), so your query is equivalent to:
```
SELECT user_id
FROM user
WHERE username = "my_username"
OR (email = "my@email.com" AND user_id != 7)
```
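The precedence difference is easy to demonstrate with SQLite from Python; note how user 7 leaks into the unparenthesized result:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE user (user_id INTEGER, username TEXT, email TEXT);
INSERT INTO user VALUES
(7, 'my_username', 'my@email.com'),
(8, 'my_username', 'other@email.com'),
(9, 'someone', 'my@email.com');
""")
# AND binds tighter than OR, so row 7 slips through via the username branch:
unparenthesized = conn.execute("""
    SELECT user_id FROM user
    WHERE username = 'my_username' OR email = 'my@email.com' AND user_id != 7
    ORDER BY user_id
""").fetchall()
# With parentheses the exclusion applies to both branches:
parenthesized = conn.execute("""
    SELECT user_id FROM user
    WHERE (username = 'my_username' OR email = 'my@email.com') AND user_id != 7
    ORDER BY user_id
""").fetchall()
print(unparenthesized)  # [(7,), (8,), (9,)]
print(parenthesized)    # [(8,), (9,)]
```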
|
I think you made a general mistake here. You used OR and AND both in where clause without putting (). Following example should work i guess :
```
SELECT user_id FROM user
WHERE (username = "my_username" OR email = "my@email.com") AND user_id != 7
```
You can take a look [operator precedence in my sql](http://dev.mysql.com/doc/refman/5.0/en/operator-precedence.html)
|
Select all rows that has either A=B OR C=D EXCEPT where ID=foo in MySQL
|
[
"",
"mysql",
"sql",
""
] |
I have a table
* items: `id, userid, item_name, item_description`
I want to update a row and used the following sql statement for it.
```
$updateQuery = "UPDATE items SET item_name = '$item_name',
item_desc = '$item_desc' WHERE userid = '$userid'
AND item_name = '$old_name'";
```
But it fails. Is it because I used the `item_name` field, which is to be updated, for selecting the row?
|
I think I see the problem
```
item_desc = '$item_desc'
```
"4 columns id, userid, item\_name, **item\_description.**"
Change your query to
```
$updateQuery = "UPDATE items SET item_name = '$item_name', item_description = '$item_desc' WHERE userid = '$userid' AND item_name = '$old_name'";
```
|
Updating `item_name` while also using it in the WHERE clause is not a problem, so that is not the cause.
You can echo the query string and run it in a database terminal to verify what is actually being executed.
Try:
```
$updateQuery = "UPDATE items SET item_name = '" . $item_name . "', item_desc = '" . $item_desc . "' WHERE userid = " . $userid . " AND item_name = '" . $old_name . "';"
```
|
Error in updating database row in MYSQL
|
[
"",
"mysql",
"sql",
""
] |
I am trying to insert records from another table into a table in Access using sql. I have pasted the statement below. I want to insert the records that exist in ImportMetricsIDs01262015 but not in ShouldImportMetricsIDs. It runs perfectly without any errors but it won't insert anything even when I physically add a new record.
```
INSERT INTO ShouldImportMetricsIDsTable ( [Formulary ID], [Market Segment] )
SELECT ImportMetricsIDs01262015.[Formulary ID], ImportMetricsIDs01262015.[Market Segment]
FROM ImportMetricsIDs01262015
WHERE NOT EXISTS (SELECT *
FROM ShouldImportMetricsIDsTable);
```
|
You need a correlation clause. The subquery just checks whether or not the table is empty. Something like:
```
INSERT INTO ShouldImportMetricsIDsTable( [Formulary ID], [Market Segment] )
SELECT im.[Formulary ID], im.[Market Segment]
FROM ImportMetricsIDs01262015 as im
WHERE NOT EXISTS (SELECT 1
FROM ShouldImportMetricsIDsTable as sim
WHERE im.[Formulary ID] = sim.[Formulary ID] AND
im.[Market Segment] = sim.[Market Segment]
);
```
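The anti-join insert can be sketched with SQLite from Python (snake_case column names stand in for the bracketed Access names):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE import_ids (formulary_id INTEGER, market_segment TEXT);
CREATE TABLE should_import (formulary_id INTEGER, market_segment TEXT);
INSERT INTO import_ids VALUES (1, 'A'), (2, 'B'), (3, 'C');
INSERT INTO should_import VALUES (1, 'A');  -- already present, must be skipped
""")
conn.execute("""
    INSERT INTO should_import (formulary_id, market_segment)
    SELECT im.formulary_id, im.market_segment
    FROM import_ids im
    WHERE NOT EXISTS (
        SELECT 1 FROM should_import sim
        WHERE im.formulary_id = sim.formulary_id
          AND im.market_segment = sim.market_segment
    )
""")
rows = conn.execute(
    "SELECT * FROM should_import ORDER BY formulary_id").fetchall()
print(rows)  # only the missing rows were inserted
```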
|
You need to correlate your `NOT EXISTS` subquery with the `ImportMetricsIDs01262015` table.
This code assumes that [Formulary ID] is the key in both tables.
```
INSERT INTO ShouldImportMetricsIDsTable (
[Formulary ID]
,[Market Segment]
)
SELECT ImportMetricsIDs01262015.[Formulary ID]
,ImportMetricsIDs01262015.[Market Segment]
FROM ImportMetricsIDs01262015
WHERE NOT EXISTS (
SELECT *
FROM ShouldImportMetricsIDsTable
where ShouldImportMetricsIDsTable.[Formulary ID] = ImportMetricsIDs01262015.[Formulary ID]
);
```
|
Insert INTO NOT EXISTS SQL access
|
[
"",
"sql",
"vba",
"ms-access",
"insert",
""
] |
Sorry for that title, I don't know how to describe my problem in one sentence.
I have Table like this:
```
event | thema
-------------
1 1
1 2
2 1
2 2
2 3
3 1
3 2
3 3
3 4
4 1
4 2
4 3
```
What I want are the event IDs where the thema is exactly 1, 2 and 3, not the event IDs where it is only 1 and 2 or 1, 2, 3 and 4.
```
SELECT event WHERE thema=1 OR thema=2 OR thema=3
```
returns them all
```
SELECT event WHERE thema=1 AND thema=2 AND thema=3
```
returns nothing.
I think this should be absolutely simple, but stack is overflown...
Thanks for some help!
|
This type of query is a "set-within-sets" query (you are looking for sets of "thema" for each event). The most general approach is aggregation using a `having` clause. Because you want *exactly* the set {1, 2, 3}, you also have to rule out any other thema. This might be the shortest way to write the query using standard SQL:
```
select event
from table t
group by event
having count(distinct case when thema in (1, 2, 3) then thema end) = 3 and
       count(distinct thema) = 3;
```
|
Group by `event` and keep only the groups that contain each of thema 1, 2 and 3 at least once and no other thema:
```
SELECT event
from your_table
group by event
having sum(case when thema = 1 then 1 else 0 end) > 0
and sum(case when thema = 2 then 1 else 0 end) > 0
and sum(case when thema = 3 then 1 else 0 end) > 0
and sum(case when thema not in (1,2,3) then 1 else 0 end) = 0
```
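The exact-set logic can be verified with SQLite from Python using the question's data; only events 2 and 4 carry exactly {1, 2, 3}:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE event_thema (event INTEGER, thema INTEGER);
INSERT INTO event_thema VALUES
(1,1),(1,2),
(2,1),(2,2),(2,3),
(3,1),(3,2),(3,3),(3,4),
(4,1),(4,2),(4,3);
""")
rows = conn.execute("""
    SELECT event
    FROM event_thema
    GROUP BY event
    HAVING SUM(CASE WHEN thema = 1 THEN 1 ELSE 0 END) > 0
       AND SUM(CASE WHEN thema = 2 THEN 1 ELSE 0 END) > 0
       AND SUM(CASE WHEN thema = 3 THEN 1 ELSE 0 END) > 0
       AND SUM(CASE WHEN thema NOT IN (1,2,3) THEN 1 ELSE 0 END) = 0
    ORDER BY event
""").fetchall()
print(rows)  # [(2,), (4,)]
```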
|
SQL get all IDs where Sub-IDs are exactly specified without getting other IDs where some Sub-ID's are not present
|
[
"",
"sql",
"sql-server",
""
] |
I need to create this table function. The function needs to return single words from passed parameters like: hello, hhuu, value
The table function should return:
```
hello,
hhuu,
value
```
But I always get some errors; could you please help me?
|
you can write as:
```
DECLARE @input_char VARCHAR(255)
SET @input_char = 'hello, hhuu, value'
;WITH cte AS (
SELECT
CAST('<r>' + REPLACE(@input_char, ' ', '</r><r>') + '</r>' AS XML) AS input_char
)
SELECT
rtrim( LTRIM (xTable.xColumn.value('.', 'VARCHAR(MAX)')) ) AS input_char
FROM cte
CROSS APPLY input_char.nodes('//r') AS xTable(xColumn)
```
|
Please give a look at this article:
<http://www.codeproject.com/Tips/625872/Convert-a-CSV-delimited-string-to-table-column-in>
and you could use ' ' (space) as delimiter.
```
SELECT * FROM dbo.CSVtoTable('hello, hhuu, value', ' ')
```
|
MSSQL - Create table function, return substring
|
[
"",
"sql",
"sql-server",
"t-sql",
"stored-procedures",
""
] |
I have 2 posts in my table that can be published on homepage. First has 3 comments and the other has no comments. Query below gives 1 result row but I want 2 result rows.
Expected result of the query
1. fetch requested rows from blog table
2. fetch COUNT of comments for the related post id. IF no comment is available then output should be 0.
Simplified form of my 2 tables
```
structure              values                   expected query output    current query output
====================   ======================   ======================   ======================
blog   | comments      blog     | comments      1 - body1 - 3 comments   1 - body1 - 3 comments
--------------------   ----------------------   2 - body2 - 0 comments
id     | commentid     1, body1 | 1, comment1, 1
body   | comment       2, body2 | 2, comment2, 1
       | postid                 | 3, comment3, 1
```
Why doesn't my query work with this data?
Can you please correct it?
```
$query = 'SELECT
blog.id,
blog.body,
COUNT(comments.postid)
FROM
blog, comments
WHERE
blog.status="publish" AND comments.postid = blog.id
ORDER BY blog.id DESC';
```
|
A `left join` returns blogs even though they have no matching comment:
```
select b.id
, b.body
, count(c.commentid)
from blog b
left join
comments c
on b.id = c.blogid
where b.status = 'publish'
group by
b.id
, b.body
order by
b.id desc
```
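A quick SQLite reproduction of the fix, using the question's data, shows the zero-comment post appearing:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE blog (id INTEGER, body TEXT, status TEXT);
CREATE TABLE comments (commentid INTEGER, comment TEXT, postid INTEGER);
INSERT INTO blog VALUES (1, 'body1', 'publish'), (2, 'body2', 'publish');
INSERT INTO comments VALUES
(1, 'comment1', 1), (2, 'comment2', 1), (3, 'comment3', 1);
""")
rows = conn.execute("""
    SELECT b.id, b.body, COUNT(c.commentid)
    FROM blog b
    LEFT JOIN comments c ON b.id = c.postid
    WHERE b.status = 'publish'
    GROUP BY b.id, b.body
    ORDER BY b.id DESC
""").fetchall()
print(rows)  # body2 shows up with a count of 0
```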
|
At first glance, you may need a left join.
Edit: Confirmed. Have a look at this: <http://sqlfiddle.com/#!2/1475ab/1>
```
$query = 'SELECT
b.id,
b.body,
COUNT(c.comment)
FROM blog b
left join comments c
on b.id = c.postid
WHERE b.status="publish"
group by b.id
ORDER BY b.id DESC';
```
|
MySQL query simultaneously combined result from 2 tables (fetch if count() is zero also)
|
[
"",
"mysql",
"sql",
""
] |
I have a table that has some columns containing strings ,Let's say `nvarchar`. Now, the user passes a string to a function that searches for this string in its assigned column. I want to check if that string is present in the database but the problem is it does not necessarily have to be a 100% match.
Let's say for example:
The user passed the string `Johnathon` and string `John` is present in this database.
So, basically I want to get the number of characters that matched. In this particular case of `John` and `Johnathon`, it should be 4 matched and 5 unmatched.
Can I please get some directions to approach this problem?
Edit: What I am guessing is I can do the percentage match thing once I have retrieved the best matching string from the column. So, likewise, if we ignore the number of matched and unmatched characters and focus on retrieving the matched string from database, that should work.
For example, as `Johnathon` was passed by the user and `John` is present in the database, I definitely cannot use the `Like` operator here, but rather a piece of code that searches for the most-matched string in the column and returns it.
|
You can do it this way:
```
SELECT Name, LEN(Name) AS Equals, (LEN('Johnathon') - LEN(Name)) AS NotEquals
FROM TableName
WHERE 'Johnathon' LIKE '%' +Name +'%'
```
Or if you want to compare both ways then:
```
DECLARE @parameter NVARCHAR(MAX) = N'Johnathon'
SELECT Name,
CASE WHEN LEN(Name) > LEN(@parameter) THEN LEN(@parameter) ELSE LEN(Name) END AS Equals,
CASE WHEN LEN(Name) > LEN(@parameter) THEN LEN(Name) - LEN(@parameter) ELSE LEN(@parameter) - LEN(Name) END AS NotEquals
FROM TableName
WHERE Name LIKE '%' + @parameter + '%' OR @parameter LIKE '%' +Name +'%'
```
|
The Levenshtein distance mentioned by @DeadlyJesus might suit you, but an alternative would be just to count matching characters from the start of the 2 strings. A simple user-defined function could do this.
```
create function dbo.MatchStart(@input1 nvarchar(100), @input2 nvarchar(100)) returns int as
begin
declare @i int
set @i = 1
if (@input1 is not null and @input2 is not null)
begin
while (1 = 1)
begin
if (@i > len(@input1) or @i > len(@input2))
break
if (substring(@input1, @i, 1) <> substring(@input2, @i, 1))
break;
set @i = @i + 1
end
end
return @i - 1
end
go
declare @testTable table (text1 nvarchar(100))
declare @userInput nvarchar(100)
insert @testTable values
(null),
(''),
('John'),
('Johnathan'),
('JohXXX'),
('Fred'),
('JxOxHxN')
set @userInput = 'Johnathan'
select text1, dbo.MatchStart(text1, @userInput) as result from @testTable
```
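For comparison, here is the same prefix-count idea as a plain Python function (note that Python string comparison is case-sensitive, unlike a typical SQL Server collation):

```python
def match_start(s1, s2):
    """Length of the common prefix of two strings (None-safe)."""
    if s1 is None or s2 is None:
        return 0
    i = 0
    while i < len(s1) and i < len(s2) and s1[i] == s2[i]:
        i += 1
    return i

user_input = "Johnathan"
for candidate in [None, "", "John", "Johnathan", "JohXXX", "Fred"]:
    print(candidate, match_start(candidate, user_input))
```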
|
Matching strings in TSQL
|
[
"",
"sql",
"ssms",
""
] |
I have a table **A** which has primary key column **a** and it is a foreign key to **B** table (Table B has primary key b and column a). Table B also has a primary key b which foreign key to C table and so on.
It won't allow me to delete a row from table A because its dependent on table B and table B is dependent on table C. So I have to delete a row from C first then B and at last A manually.
Is there any way to delete directly without going to each table and checking if it has any dependencies before deleting?
|
You can use cascading referential integrity constraints (`ON DELETE CASCADE`).
**Update:**
You should enable the cascade option on the foreign key in table B that references the primary key of table A, and similarly on the foreign key in table C that references the primary key of table B. A delete in A will then cascade through B to C.

[**MSDN LIBRARY**](https://technet.microsoft.com/en-us/library/ms186973(v=sql.105).aspx)
[**CODE PROJECT ARTICALE**](http://www.codeproject.com/Articles/620086/CASCADE-in-SQL-Server-with-example)
Very Nice Article [**BLOG.SQL AUTHORITY**](http://blog.sqlauthority.com/2012/08/17/sql-server-curious-case-of-disappearing-rows-on-update-cascade-and-on-delete-cascade-t-sql-example-part-2-of-2/)
|
If you're not allowed to modify the tables, you can take a look at the `EXISTS` operator.
It will allow you to delete rows from a table only if the query inside the `EXISTS` return at least 1 result. You can use it to check dependencies.
You can write 3 queries:
```
DELETE c
FROM C c
WHERE EXISTS (SELECT 1
              FROM B b
              WHERE c.b = b.b
              AND EXISTS (SELECT 1
                          FROM A a
                          WHERE a.a = b.a
                          AND ... ));
DELETE b
FROM B b
WHERE EXISTS (SELECT 1
              FROM A a
              WHERE a.a = b.a
              AND ...);
DELETE a
FROM A a
WHERE ...
```
The first one takes care of records in `C` that reference records in `B` that reference the records in `A` you want to delete.
Then you can remove records from `B`, since there are no more dependencies in `C`.
Finally, you can delete records from `A` using the same logic.
|
How can I delete all dependent rows of a table?
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I am confused about SQL relationships, especially One to One and One to Many. I have read many articles like [this](http://code.tutsplus.com/articles/sql-for-beginners-part-3-database-relationships--net-8561) and [this](http://www.databaseprimer.com/pages/relationship_1to1/)
For an example, below are some statements from those articles and some other articles
1. The relationship between the customer and address tables is one to one; that is because one address belongs to one customer.
2. The relationship between the customer and order tables is one to many; that is because one customer can make many orders.
3. The customer and contactData tables are one to one because one contactData belongs to one customer.
Yes, true, but take my argument now.
* In point 2 it says the relationship is one to many, but one order belongs to exactly one customer, right? So you could consider it one to one as well.
* In point 3, the contactData and customer relationship is said to be one to one, but one customer can have many contactData, right? So it is one to many.
With all of these, I am seriously confused. Can someone give me a clear definition about how to find the relationship please?
|
> The relationship between customer and order tables are one to many, that is because one customer can make many orders.
This should better read
* a customer can place **many** orders
* an order belongs to **one** customer
So it is a **one to many** relationship between the tables.
> customer and contactData tables are one to one because one contactData is belong to one customer
This should better read
* a contactData belongs to **one** customer
* a customer can only have **one** contactData
So it is a **one to one** relationship between the tables.
Yes, in reality a customer of some company may have more than one contact, but in this particular database they are talking about, it is defined that a customer can have only one contactData.
EDIT: I should add that you can say "there is a one to many relationship between tables a and b", which doesn't specify which is which, so you can be more precise, saying "there is a one to many relationship from table a to b" or "there is a many to one relationship from table b to a". With one to one, you don't have such problem of course. (Adding to this, there are even more precise definitions, such as a "one to one or zero" etc. where you specify if zero records are allowed, such as a person without contact data.)
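One way to see the difference concretely is in DDL: a one-to-many link is a plain foreign key, while a one-to-one link is a foreign key with a UNIQUE constraint. A hedged SQLite sketch (table and column names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT);
-- one-to-many: many orders may carry the same customer_id
CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer(id));
-- one-to-one: the FK is also UNIQUE, so each customer gets at most one row
CREATE TABLE contact_data (
    customer_id INTEGER UNIQUE NOT NULL REFERENCES customer(id),
    phone TEXT);
INSERT INTO customer VALUES (1, 'Ann');
INSERT INTO orders (customer_id) VALUES (1), (1);  -- fine: one-to-many
INSERT INTO contact_data VALUES (1, '555-0100');
""")
try:
    # a second contactData for the same customer violates the UNIQUE constraint
    conn.execute("INSERT INTO contact_data VALUES (1, '555-0199')")
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False
order_count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(order_count, duplicate_allowed)
```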
|
Dongle, you are making the mistake of looking at the subject symmetrically. A relationship is considered to be one-directional; that's why you feel confused.
Let's take for example this *Customer <-> Order* relation. As you observed, *customer -> order* is something completely different than *order -> customer*. They are actually independent, however weird that may be. It's because in real code you never actually deal with the two objects at once, but rather "act" from the perspective of one of the two. E.g. you fetch an order object from the db and only then check who is the customer that made this order. Or you have the customer object and you want to check what his orders are.
As you see, this is very uni-directional (we call relations uni- or bi-directional). You have object X and you want to "enhance" your knowledge of it, by joining it with the other table, object Y. So it's *X->Y*. Or *Y->X*. But never actually it is *X<->Y*. That's why **you always have to look at the relations as if they were independent**.
Apart from that, you're right: it usually is the case that one relation is *One to One* and the other *One to Many*.
|
Confused on SQL Relationships
|
[
"",
"sql",
"relational-database",
""
] |
I have a table holding a hierarchical structure; basically it has 3 columns:

What I want is to select from this table by a range of IDs, for example:
```
Id IN (1,4,8,18)
```
which results in:

> In other words, a row must come into the result set with all of its parents and children.
How can I solve this problem?
Thanks in advance.
|
You can do it using CTE:
```
DECLARE @t TABLE ( ID INT, ParentID INT )
INSERT INTO @t
VALUES ( 1, NULL ),
( 2, 1 ),
( 3, 2 ),
( 4, 3 ),
( 6, NULL ),
( 7, 6 ),
( 8, 7 ),
( 9, 8 ),
( 10, 9 ),
( 11, 10 ),
( 13, NULL ),
( 14, 13 ),
( 15, 14 ),
( 17, NULL ),
( 18, 17 ),
( 19, 18 ),
( 20, 19 );
WITH ctep
AS ( SELECT *
FROM @t
WHERE ID IN ( 1, 4, 8, 18 )
UNION ALL
SELECT t.*
FROM @t t
JOIN ctep ON t.ParentID = ctep.ID
),
ctec
AS ( SELECT *
FROM @t
WHERE ID IN ( 1, 4, 8, 18 )
UNION ALL
SELECT t.*
FROM @t t
JOIN ctec ON t.ID = ctec.ParentID
)
SELECT * FROM ctep
UNION
SELECT * FROM ctec
```
Here are 2 CTEs, one for getting parents and one for children. Finally you UNION those 2 results in order to get distinct rows.
I had a little bug. Edited...
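The same two-direction walk runs on any engine with recursive CTEs. As a quick, runnable sanity check (SQLite via Python's `sqlite3` here, purely illustrative, using the sample rows from above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tree (ID INT, ParentID INT);
INSERT INTO tree VALUES (1,NULL),(2,1),(3,2),(4,3),
 (6,NULL),(7,6),(8,7),(9,8),(10,9),(11,10),
 (13,NULL),(14,13),(15,14),
 (17,NULL),(18,17),(19,18),(20,19);
""")

# one CTE walks up to the parents, the other walks down to the children;
# the final UNION removes duplicate seed rows
result = [r[0] for r in conn.execute("""
WITH RECURSIVE parents AS (
    SELECT ID, ParentID FROM tree WHERE ID IN (1, 4, 8, 18)
    UNION
    SELECT t.ID, t.ParentID FROM tree t JOIN parents p ON t.ID = p.ParentID
),
children AS (
    SELECT ID, ParentID FROM tree WHERE ID IN (1, 4, 8, 18)
    UNION
    SELECT t.ID, t.ParentID FROM tree t JOIN children c ON t.ParentID = c.ID
)
SELECT ID FROM parents UNION SELECT ID FROM children ORDER BY ID
""").fetchall()]
print(result)
```

Note that the whole branch containing 13, 14 and 15 is excluded, as expected.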
|
Split your task into two steps:
1. Find the top-most parent
2. Find all descendants
```
WITH ToTopParent AS
(
SELECT Id, ParentId
FROM yourTable
WHERE Id IN (1,4,8,18)
UNION ALL
SELECT T.Id, T.ParentId
FROM ToTopParent TTP
JOIN yourTable T ON TTP.ParentId = T.Id
),
AllDecedents AS
(
SELECT Id, Name, ParentID
FROM yourTable
WHERE id IN (SELECT Id FROM ToTopParent WHERE ParentId IS NULL)
UNION ALL
SELECT T.Id, T.Name, T.ParentID
FROM yourTable T
JOIN AllDecedents ON T.ParentId = AllDecedents.Id
)
SELECT * FROM AllDecedents
```
|
Select from a hierarchy based on a range of ID's
|
[
"",
"sql",
"sql-server",
"t-sql",
"tree",
"hierarchy",
""
] |
I have a TABLE (Classes\_Teachers) with 3 columns (classID - INT, teacherID INT , isActive BOOLEAN).
I read the contents of a new file and create a table (StudentsTemp) with the same columns but with the latest data, and I want to "merge" the two.
I can do this easily using EXCEPT and JOIN; however, this will delete any rows in Students that don't exist in StudentsTemp, and I need to be able to just set isActive to false, as I have to keep the data for reporting purposes.
This is a "JOIN" table for a many to many relationship and so it does not have a single primary key.
So for example
STUDENTS:
```
+---------------------------------+
| ClassID TeacherID isActive |
+---------------------------------+
| 1 2 1 |
| 1 12 1 |
| 1 13 0 |
| 5 10 1 |
+---------------------------------+
```
STUDENTS TEMP
```
+---------------------------------+
| ClassID TeacherID isActive |
+---------------------------------+
| 1 2 1 |
| 1 13 1 |
| 1 6 1 |
+---------------------------------+
```
In this scenario Class 5 is no longer taught by Teacher 10, so this entry would normally have been "deleted", but I only want to change "isActive" to 0.
Class 1 is also no longer taught by Teacher 12 so this should change isActive to 0.
Class 1 is now taught instead by Teacher 13; ie is now "active" so this changes isActive to 1
Finally, there is a wholly new Teacher (id = 6) who teaches classID of 1 so this inserts the row as is.
I need STUDENTS to end up like this:
```
+---------------------------------+
| ClassID TeacherID isActive |
+---------------------------------+
| 1 2 1 |
| 1 12 0 |
| 1 13 1 |
| 5 10 0 |
| 1 6 1 |
+---------------------------------+
```
I can do this by creating 3 temporary tables and using EXCEPT AND JOIN to populate those tables with the "deletions", "changes" and "insertions" and then running UPDATE AND INSERT on them, but I wondered if there was an easier way to do this without having to create 3 extra temporary tables.
|
First, update the rows in the `Students` table from the temp table (combinations missing from the new file become inactive). Then insert the new `teacherID` and `classID` combinations.
```
CREATE TABLE #Students(
ClassID INT,
TeachedID INT,
isActive BIT
)
CREATE TABLE #StudentsTemp(
ClassID INT,
TeachedID INT,
isActive BIT
)
INSERT INTO #Students VALUES
(1, 2, 1), (1, 12, 1), (1, 13, 0), (5, 10, 1);
INSERT INTO #StudentsTemp VALUES
(1, 2, 1), (1, 13, 1), (1, 6, 1);
UPDATE s
SET s.isActive = ISNULL(t.isActive, 0)
FROM #Students s
LEFT JOIN #StudentsTemp t
ON t.ClassID = s.ClassID
AND t.TeachedID = s.TeachedID
INSERT INTO #Students
SELECT
ClassID,
TeachedID,
isActive
FROM #StudentsTemp t
WHERE
NOT EXISTS(
SELECT 1
FROM #Students
WHERE
ClassID = t.ClassID
AND TeachedID = t.TeachedID
)
SELECT * FROM #Students
DROP TABLE #Students
DROP TABLE #StudentsTemp
```
**RESULT**
```
ClassID TeachedID isActive
----------- ----------- --------
1 2 1
1 12 0
1 13 1
5 10 0
1 6 1
```
|
Have you tried using a Merge statement?
<https://msdn.microsoft.com/en-us/library/bb510625.aspx>
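`MERGE` is T-SQL-specific, so it can't be demonstrated portably here; as a reference for the end state either approach should produce, here is the update-then-insert sequence run against SQLite via Python's `sqlite3` (illustrative only — SQLite has no `MERGE`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Students (ClassID INT, TeacherID INT, isActive INT);
CREATE TABLE StudentsTemp (ClassID INT, TeacherID INT, isActive INT);
INSERT INTO Students VALUES (1,2,1),(1,12,1),(1,13,0),(5,10,1);
INSERT INTO StudentsTemp VALUES (1,2,1),(1,13,1),(1,6,1);

-- rows missing from the new file become inactive; matches take the new flag
UPDATE Students
SET isActive = COALESCE(
    (SELECT t.isActive FROM StudentsTemp t
     WHERE t.ClassID = Students.ClassID AND t.TeacherID = Students.TeacherID), 0);

-- wholly new combinations are inserted as-is
INSERT INTO Students
SELECT ClassID, TeacherID, isActive FROM StudentsTemp t
WHERE NOT EXISTS (SELECT 1 FROM Students s
                  WHERE s.ClassID = t.ClassID AND s.TeacherID = t.TeacherID);
""")
rows = conn.execute(
    "SELECT ClassID, TeacherID, isActive FROM Students ORDER BY TeacherID"
).fetchall()
print(rows)
```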
|
Update one table with the results from another
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
Let's say I have a value like this:
```
Z00056ABC00044
```
I would like to know what the values are BEFORE ABC
So my query should return only:
```
Z00056
```
I have tried to create a function:
```
if CHARINDEX('ABC',@mainString) > 0
begin
--do something
end
```
But I am unsure how to grab the value before 'ABC'.
|
Try this:-
```
SELECT SUBSTRING('Z00056ABC00044', 1, CHARINDEX('ABC', 'Z00056ABC00044') -1)
```
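The same extraction, sketched in Python for comparison, including a guard in the spirit of the question's `CHARINDEX > 0` check for when 'ABC' is absent:

```python
s = 'Z00056ABC00044'
idx = s.find('ABC')              # -1 when 'ABC' is absent, like CHARINDEX = 0
prefix = s[:idx] if idx >= 0 else s
print(prefix)  # Z00056
```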
|
Try this:
```
SELECT LEFT(@MyValue, CHARINDEX('ABC', @MyValue) - 1)
```
|
Return section of a string
|
[
"",
"sql",
"sql-server-2008",
"t-sql",
""
] |
I've got a table with 3 columns: `name`, `lastname` and `date`.
For example:
```
Name Lastname Date
Ab Ab 2008-07-01
Ab Ab 2006-06-23
Kb Kb 2008-07-01
Kb Kb 2007-06-03
```
I need to find the names of those who are assigned to `2008-07` and not to `2006-06`. So for this example the output will be:
```
Name Lastname Date
Kb Kb 2008-07-01
Kb Kb 2007-06-03
```
My code:
```
select Name, Lastname, YEAR(date), MONTH(date) from MyTable
where (YEAR(date) = 2008 AND MONTH(date) = 7) AND (YEAR(date) <> 2006 AND MONTH(date) <> 6)
```
It doesn't work well; really, nothing happens as expected (it "blocks", for example, all names with month = 6). I tried to do it with a UNION statement, but that didn't work well either.
Note, I need to do this without using a subquery.
|
I think you want a `group by` and a `having` clause, because you are looking for multiple rows:
```
select Name, Lastname
from MyTable
group by Name, LastName
having
sum(case when YEAR(date) = 2008 AND MONTH(date) = 7 then 1 else 0 end) > 0 and
sum(case when YEAR(date) = 2006 AND MONTH(date) = 6 then 1 else 0 end) = 0;
```
Each condition in the `having` clause counts the number of rows that match. The `> 0` means there is at least one. The `= 0` means there are none. This generalizes easily to more conditions.
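A runnable check of this conditional aggregation (SQLite via Python's `sqlite3`; SQLite has no `YEAR`/`MONTH`, so `strftime` stands in for them — illustrative only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE MyTable (Name TEXT, Lastname TEXT, Date TEXT);
INSERT INTO MyTable VALUES
 ('Ab','Ab','2008-07-01'), ('Ab','Ab','2006-06-23'),
 ('Kb','Kb','2008-07-01'), ('Kb','Kb','2007-06-03');
""")

# keep names with at least one 2008-07 row and no 2006-06 rows
rows = conn.execute("""
SELECT Name, Lastname
FROM MyTable
GROUP BY Name, Lastname
HAVING SUM(CASE WHEN strftime('%Y-%m', Date) = '2008-07' THEN 1 ELSE 0 END) > 0
   AND SUM(CASE WHEN strftime('%Y-%m', Date) = '2006-06' THEN 1 ELSE 0 END) = 0
""").fetchall()
print(rows)  # [('Kb', 'Kb')]
```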
|
```
SELECT
t.*
FROM (
SELECT
Name,
LastName
FROM #Temp
GROUP BY Name, LastName
HAVING
SUM(CASE WHEN YEAR([Date]) = 2008 AND MONTH([Date]) = 7 THEN 1 ELSE 0 END) = 1
AND SUM(CASE WHEN YEAR([Date]) = 2006 AND MONTH([Date]) = 6 THEN 1 ELSE 0 END) = 0
)x
INNER JOIN #Temp t
ON t.Name = x.Name
AND t.LastName = x.LastName
```
**RESULT**
```
Name LastName Date
-------------------- -------------------- ----------
Kb Kb 2008-07-01
Kb Kb 2007-06-03
```
|
Two "where" statements with one table doesn't work without subqueries
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"where-clause",
""
] |
I've had a search around and have seen quite a few questions about selecting distinct values, but none of them seem close enough to my query to be able to help. This is the scenario
```
ID Product_ID Product_type
123 56789 A
123 78901 B
456 12345 A
789 45612 B
```
The SQL I need would be to search in a table similar to the above, and bring back the rows where the `Product_type` is `B` but only if the `ID` related to it exists once within the table.
So in this case it would bring back only
```
789 45612 B
```
The SQL I have tried based on what I've found so far was
```
SELECT DISTINCT(ID)
FROM "TABLE"
WHERE "PRODUCT_TYPE" = 'B'
```
As well as
```
SELECT *
FROM "TABLE"
WHERE "PRODUCT_TYPE" = 'B'
GROUP BY "ID"
HAVING COUNT(ID) = 1
```
Neither has worked.
|
One way via a list of IDs appearing once:
```
select * from T where Product_type = 'B' and id in (
select id from T
group by id
having count(id) = 1)
```
|
Solution 1: Use a sub-query to count IDs.
```
select * from table t1
where Product_type = 'B'
and (select count(*) from table
where id = t1.id) = 1
```
|
SQL - Selecting unique values from one column then filtering based on another
|
[
"",
"sql",
""
] |
I have a database that I'm trying to query from Access with ODBC. The table I need to read from has 304 columns, and I can only see the first 255 in the Query Builder. I've read elsewhere that the solution is to write an SQL query by hand rather than relying on the Builder, so I tried this query:
```
SELECT [Field1], [Field304]
FROM [ODBC;DRIVER=SQL Server;UID=USERNAME;SERVER=ServerAddress].[TabelName];
```
This query returns Field1 just as I'd expect, but still won't get Field304. What am I doing wrong?
|
You have encountered a limitation of ODBC linked tables in Access, and a query like
```
SELECT ... FROM [ODBC;...].[tableName];
```
is really just a way of creating a temporary ODBC linked table "on the fly".
When Access goes to create an ODBC linked table it queries the remote database to get the column information. The structures for holding table information in Access are limited to 255 columns, so only the first 255 columns of the remote table are available. For example, for the SQL Server table
```
CREATE TABLE manyColumns (
id int identity(1,1) primary key,
intCol002 int,
intCol003 int,
intCol004 int,
...
intCol255 int,
intCol256 int,
intCol257 int)
```
an Access query like
```
SELECT [id], [intCol002], [intCol255]
FROM [ODBC;DRIVER={SQL Server};SERVER=.\SQLEXPRESS;DATABASE=myDb].[manyColumns];
```
will work, but this query
```
SELECT [id], [intCol002], [intCol256]
FROM [ODBC;DRIVER={SQL Server};SERVER=.\SQLEXPRESS;DATABASE=myDb].[manyColumns];
```
will prompt for the "parameter" [intCol256] because Access does not know that such a column exists in the SQL Server table.
There are two ways to work around this issue:
(1) If you only need to read the information in Access you can create an Access [pass-through query](https://support.office.com/en-au/article/Process-SQL-on-a-database-server-by-using-a-pass-through-query-b775ac23-8a6b-49b2-82e2-6dac62532a42)
```
SELECT [id], [intCol002], [intCol256]
FROM [manyColumns];
```
That will return the desired columns, but pass-through queries always produce recordsets that are not updateable.
(2) If you need an updateable recordset then you'll need to create a View on the SQL Server
```
CREATE VIEW selectedColumns AS
SELECT [id], [intCol002], [intCol256]
FROM [manyColumns];
```
and then create an ODBC linked table in Access that points to the View. When creating the ODBC linked table remember to tell Access what the primary key column(s) are, otherwise the linked table will not be updateable.
|
Thanks Gord! That solves the problem of updating large tables from within Access.
If anyone is interested: I created a view inside SQL Server, selecting only the fields I wanted, then linked it inside Access.
Now any update made through this link is reflected in the original table, so it's easy to do this by running queries.
|
Access ODBC can't pull from SQL table with more than 255 columns
|
[
"",
"sql",
"ms-access",
"odbc",
"ms-access-2013",
""
] |
I'm trying to sum up all transactions for each day in my database.
```
SELECT DISTINCT
SUM(Balance) OVER (partition by Date) AS account_total,
Date
FROM tbl_FundData
ORDER BY Date;
```
The problem with the output is that if a transaction is completed at a different time, it becomes its own unique sum instead of rolling up into the one day. I'm not sure how to modify the query to fix this.
I'm using SQL Server 2008 (I think)
|
It seems you use DATETIME as the column data type, so cast it as DATE:
```
SELECT DISTINCT SUM(Balance) OVER (partition by CAST([Date] AS DATE)) AS account_total, CAST([Date] AS DATE)
FROM tbl_FundData
ORDER BY CAST([Date] AS DATE);
```
Also, you'd better use GROUP BY in this case:
```
SELECT SUM(Balance) AS account_total, CAST([Date] AS DATE)
FROM tbl_FundData
GROUP BY CAST([Date] AS DATE);
```
|
```
SELECT DISTINCT
SUM(Balance) OVER (partition by convert(varchar, Date, 103)) AS account_total,
convert(varchar, Date, 103) Date
FROM tbl_FundData
ORDER BY convert(varchar,Date,103)
```
|
Summing a column by all transactions in a day
|
[
"",
"sql",
"sql-server-2008",
""
] |
I have a list of cust\_ids and timestamps for an activity. I would like to add a column called "activity\_order" which would give an order to each cust\_id, with "1" being assigned to the max timestamp.
```
cust_id | time_stamp
________ __________
a1 2015-01-31 10:48:43
a1 2015-01-31 12:48:46
a1 2015-01-31 17:50:40
b1 2015-01-25 10:39:01
b1 2015-01-31 12:53:34
```
This is what I want my desired result to look like:
```
cust_id | time_stamp | activity_order
________ ___________________ _________________
a1 2015-01-31 10:48:43 3
a1 2015-01-31 12:48:46 2
a1 2015-01-31 17:50:40 1
b1 2015-01-25 10:39:01 2
b1 2015-01-31 12:53:34 1
```
Here is my attempt, but the problem is this CASE statement only goes so far, and I'm thinking I'll need a loop or something similar in case a cust\_id has more than 2 activities.
```
SELECT a.cust_id
,a.time_stamp
,CASE WHEN a.time_stamp = b.max_ts THEN 1 ELSE 2 END as activity_order
FROM ACTIVITY a
JOIN (SELECT
cust_id
,MAX(time_stamp) as max_ts
FROM activity_order) b
ON a.cust_id = b.cust_id
```
|
```
SELECT cust_id
, time_stamp
, RANK () OVER(PARTITION BY cust_id ORDER BY time_stamp DESC) as activity_order
FROM activity
ORDER BY cust_id ASC, activity_order DESC
```
working example: <http://sqlfiddle.com/#!6/b6d85/3>
for more info: <http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions123.htm>
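The same query can be checked locally with SQLite (3.25+, which supports window functions) through Python's `sqlite3`, using the sample data from the question (illustrative only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE activity (cust_id TEXT, time_stamp TEXT);
INSERT INTO activity VALUES
 ('a1','2015-01-31 10:48:43'),
 ('a1','2015-01-31 12:48:46'),
 ('a1','2015-01-31 17:50:40'),
 ('b1','2015-01-25 10:39:01'),
 ('b1','2015-01-31 12:53:34');
""")

# rank within each customer, newest timestamp first
rows = conn.execute("""
SELECT cust_id, time_stamp,
       RANK() OVER (PARTITION BY cust_id ORDER BY time_stamp DESC) AS activity_order
FROM activity
ORDER BY cust_id, time_stamp
""").fetchall()
for r in rows:
    print(r)
```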
|
Try this, assuming MS SQL:
```
Update B
Set Activity_Order = RN
From
(select CustID, [Time_stamp], Row_Number() Over(Partition By CustID Order By [Time_stamp] desc) as RN
From Activity) A
inner join Activity B on A.CustID = B.CustID and A.[Time_stamp] = B.[Time_stamp]
```
|
SQL Order by Timestamp and Assign Numeric Value
|
[
"",
"sql",
"hive",
"hql",
""
] |
```
class Category < ActiveRecord::Base
has_many :articles
end
class Article < ActiveRecord::Base
belongs_to :category
has_many :comments
has_many :tags
end
```
`Category.joins(:articles)` in Rails is equivalent to
```
SELECT categories.* FROM categories
INNER JOIN articles ON articles.category_id = categories.id
```
If I want Rails code that is equivalent to this SQL:
```
SELECT * FROM categories
INNER JOIN articles ON articles.category_id = categories.id
```
what should I do?
|
```
Category.select('*').joins(:articles)
```
or
```
categories = Category.select('*').includes(:articles)
```
and
```
categories.each do |category|
puts category.articles
end
```
|
Try to use `select` method
```
categories = Category.select('*').joins(:articles)
categories.first.some_column_from_articles
```
|
All columns in join in ruby on rails
|
[
"",
"sql",
"ruby-on-rails",
"join",
""
] |
I have a table with a START\_DATE column of the DATE datatype. I want to get all the SUBSCRIPTION\_IDs whose START\_DATE is more than 30 minutes before the current datetime.
Here is my query:
```
select WF.SUBSCRIPTION_ID,WF.START_DATE
from WF_WORKFLOW WF
where WF.NAME='SIGNUP_MOBILE_PRE_PAID'
and WF.STATUS_ID=0
```
|
You can amend the `where` clause with another condition:
```
start_date < sysdate - 30 / (24 * 60)
```
Oracle date arithmetic works in days, so 30 minutes is `30 / (24 * 60)` of a day.
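For readers unfamiliar with day-based date arithmetic, the equivalent cutoff logic sketched in Python (the timestamps are made up for illustration):

```python
from datetime import datetime, timedelta

# "sysdate - 30 / (24 * 60)" subtracts 30/1440 of a day, i.e. 30 minutes
now = datetime(2015, 2, 1, 12, 0, 0)          # stand-in for sysdate
cutoff = now - timedelta(minutes=30)
start_date = datetime(2015, 2, 1, 11, 15, 0)  # a sample START_DATE
print(start_date < cutoff)  # True: this row started more than 30 minutes ago
```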
|
Take a look at [INTERVAL](http://docs.oracle.com/cd/B19306_01/server.102/b14200/sql_elements003.htm#i38598)
So this might work,
```
select WF.SUBSCRIPTION_ID,WF.START_DATE
from WF_WORKFLOW WF
where WF.NAME='SIGNUP_MOBILE_PRE_PAID'
and WF.STATUS_ID=0 and WF.START_DATE < sysdate - INTERVAL '30' MINUTE
```
|
Calculate DateTime difference in query
|
[
"",
"sql",
"oracle",
"datetime",
""
] |
I have 4 tables (as shown in the picture). I want to select the Invoice, the Client for which the invoice is issued, and all the Items for which the invoice is issued.
I cannot figure out how. Should I apply joins, or should I use subqueries?
Please help me, I am really stuck.

Thanks in Advance.
|
```
Select c.ClientID, i.Invoiceid, it.itemId
from Clients c
Inner Join Invoice i ON i.ClientId = c.ClientId
Inner Join InvoiceItems ii on ii.InvoiceId = i.InvoiceId
Inner Join Items it on ii.ItemId = it.ItemId
Order by c.Clientid, i.Invoiceid, it.itemId
```
One can add additional columns as needed.
|
Simple `INNER JOIN` should work
```
SELECT I.ItemID,
C.ClientID,
IV.InvoiceID
FROM Items I
INNER JOIN InvoiceItems II
ON I.ItemID = II.ItemID
INNER JOIN Invoice IV
ON IV.InvoiceID = II.InvoiceID
INNER JOIN Client C
ON C.ClientID = IV.ClientID
```
|
SQL Multiple Joins
|
[
"",
"sql",
"sql-server",
""
] |
I am trying to convert my resultant rows into columns, but only with bit values. To explain better what I mean, my table looks like this:
```
Id Item
-------------------
1 TextLineA
1 TextLineB
2 TextLineA
```
And what I am trying to achieve is a layout like this;
```
Id TextLineA TextLineB
--------------------------
1 True True
2 True False
```
Apologies for my poor formatting, but hopefully it conveys what I am looking for. I have tried Pivot but I don't have anything to sum over as I just want the true/false values rather than a sum.
Also, I have about 15 columns and I don't mind statically writing them out. Any pointers would be greatly appreciated!
|
This sort of approach will solve this:
```
DECLARE @Example TABLE (Id int, Item varchar(20))
INSERT INTO @Example (ID, Item)
SELECT 1, 'TextLineA' UNION ALL
SELECT 1, 'TextLineB' UNION ALL
SELECT 2, 'TextLineA' UNION ALL
SELECT 3, 'TextLineA' UNION ALL
SELECT 3, 'TextLineB' UNION ALL
SELECT 3, 'TextLineC'
select ID,
MAX(CASE WHEN Item='TextLineA' THEN 1 ELSE 0 END) as TextLineA,
MAX(CASE WHEN Item='TextLineB' THEN 1 ELSE 0 END) as TextLineB,
MAX(CASE WHEN Item='TextLineC' THEN 1 ELSE 0 END) as TextLineC
FROM @Example
GROUP BY ID
```
This results in:
```
ID TextLineA TextLineB TextLineC
1 1 1 0
2 1 0 0
3 1 1 1
```
Where 1 is true and 0 is false.
|
Use `Pivot`
```
SELECT id,
CASE WHEN [TextLineA] IS NOT NULL THEN 'TRUE' ELSE 'FALSE' END [TextLineA],
CASE WHEN [TextLineB] IS NOT NULL THEN 'TRUE' ELSE 'FALSE' END [TextLineB]
FROM Yourtable
PIVOT (Max(Item)
FOR item IN ([TextLineA],[TextLineB])) piv
```
|
SQL rows to columns (Pivot with just bit values)
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I'm trying to convert `string` column to `date` in Oracle SQL.
Here is my an example of value from the data:
```
'03/12/14 11:00:00'
```
Here is my query:
```
select to_date(A,'DD/MM/YY HH24:MI:SS')
from MyTable
```
Here is an example of the output:
```
03-DEC-14
```
Why does my query return only the date but not the time?
|
Assuming you are using SQL\*Plus (or SQL Developer) the default NLS\_DATE\_FORMAT is applied when a `DATE` value is *displayed*. To verify your current format you can run:
```
select value
from nls_session_parameters
where parameter = 'NLS_DATE_FORMAT';
```
To adjust this, run:
```
alter session set nls_date_format = 'YYYY-MM-DD HH24:MI:SS';
```
Then you should see the time as well.
|
You are trying to **DISPLAY** the datetime value. Use **TO\_CHAR** along with proper **FORMAT MODEL**.
For example,
`select to_char(sysdate, 'mm/dd/yyyy hh24:mi:ss') from dual;`
A **DATE** consists of both date and time portions. You just need to make sure you use appropriate format model to display it. Else, you end up with a format which depends on your **locale-specific NLS settings**.
|
Convert string to date in Oracle SQL
|
[
"",
"sql",
"oracle",
"to-date",
""
] |
I am doing a data migration between two tables (splitting out a related table). The existing table is `reminders`, and it has a `start` column and a newly-added `dateset_id` column pointing to a new `dateset` table, which also has a `start` column. For every row in `reminders`, I want to `INSERT` a new row in `dateset` with the `start` value copied over, and `UPDATE` the corresponding row in `reminders` with the newly-inserted `dateset` ID.
Here's the SQL I tried:
```
WITH inserted_datesets AS (
INSERT INTO dateset (start)
SELECT start FROM reminder
RETURNING reminder.id AS reminder_id, id AS dateset_id
)
UPDATE reminder
SET dateset_id = ids.dateset_id
FROM inserted_datesets AS ids
WHERE reminder.id = ids.reminder_id
```
I get an error `missing FROM-clause entry for table "reminder"`, because I'm including the `reminder.id` column in the `RETURNING` clause, but not actually selecting it for the insert. This makes sense, but I can't figure out how to modify the query to do what I need. Is there a totally different approach I'm missing?
|
There are several ways to solve the problem.
**1. temporarily add a column**
As others mentioned, the straight-forward way is to temporarily add a column `reminder_id` to the `dateset`. Populate it with original `IDs` from `reminder` table. Use it to join `reminder` with the `dateset` table. Drop the temporary column.
**2. when start is unique**
If values of the `start` column is unique it is possible to do it without extra column by joining `reminder` table with the `dateset` table on the `start` column.
```
INSERT INTO dateset (start)
SELECT start FROM reminder;
WITH
CTE_Joined
AS
(
SELECT
reminder.id AS reminder_id
,reminder.dateset_id AS old_dateset_id
,dateset.id AS new_dateset_id
FROM
reminder
INNER JOIN dateset ON dateset.start = reminder.start
)
UPDATE CTE_Joined
SET old_dateset_id = new_dateset_id
;
```
**3. when start is not unique**
It is possible to do it without temporary column even in this case. The main idea is the following. Let's have a look at this example:
We have two rows in `reminder` with the same `start` value and IDs 3 and 7:
```
reminder
id start dateset_id
3 2015-01-01 NULL
7 2015-01-01 NULL
```
After we insert them into the `dateset`, there will be new IDs generated, for example, 1 and 2:
```
dateset
id start
1 2015-01-01
2 2015-01-01
```
It doesn't really matter how we link these two rows. The end result could be
```
reminder
id start dateset_id
3 2015-01-01 1
7 2015-01-01 2
```
or
```
reminder
id start dateset_id
3 2015-01-01 2
7 2015-01-01 1
```
Both of these variants are correct. Which brings us to the following solution.
Simply insert all rows first.
```
INSERT INTO dateset (start)
SELECT start FROM reminder;
```
Match/join two tables on `start` column knowing that it is not unique. "Make it" unique by adding `ROW_NUMBER` and joining by two columns. It is possible to make the query shorter, but I spelled out each step explicitly:
```
WITH
CTE_reminder_rn
AS
(
SELECT
id
,start
,dateset_id
,ROW_NUMBER() OVER (PARTITION BY start ORDER BY id) AS rn
FROM reminder
)
,CTE_dateset_rn
AS
(
SELECT
id
,start
,ROW_NUMBER() OVER (PARTITION BY start ORDER BY id) AS rn
FROM dateset
)
,CTE_Joined
AS
(
SELECT
CTE_reminder_rn.id AS reminder_id
,CTE_reminder_rn.dateset_id AS old_dateset_id
,CTE_dateset_rn.id AS new_dateset_id
FROM
CTE_reminder_rn
INNER JOIN CTE_dateset_rn ON
CTE_dateset_rn.start = CTE_reminder_rn.start AND
CTE_dateset_rn.rn = CTE_reminder_rn.rn
)
UPDATE CTE_Joined
SET old_dateset_id = new_dateset_id
;
```
I hope it is clear from the code what it does, especially when you compare it to the simpler version without `ROW_NUMBER`. Obviously, the complex solution will work even if `start` is unique, but it is not as efficient as the simple solution.
This solution assumes that `dateset` is empty before this process.
|
## Update based on changes in Postgres:
Using INSERT RETURNING in subqueries is, according to the documentation, supported, for Postgres versions 9.1 and after. The hypothetical DML subquery in the original answer should work for Postgres >= 9.1:
```
UPDATE reminder SET dateset_id = (
INSERT INTO dateset (start)
VALUES (reminder.start)
    RETURNING dateset.id);
```
## Original answer:
Here's another way of doing it, distinct from the 3 ways Vladimir suggested so far.
A temporary function will let you read the id of the new rows created as well as other values in the query:
```
--minimal demonstration schema
CREATE TABLE dateset (
id SERIAL PRIMARY KEY,
start TIMESTAMP
-- other things here...
);
CREATE TABLE reminder (
id SERIAL PRIMARY KEY,
start TIMESTAMP,
dateset_id INTEGER REFERENCES dateset(id)
-- other things here...
);
--pre-migration data
INSERT INTO reminder (start) VALUES ('2014-02-14'), ('2014-09-06'), ('1984-01-01'), ('2014-02-14');
--all at once
BEGIN;
CREATE FUNCTION insertreturning(ts TIMESTAMP) RETURNS INTEGER AS $$
INSERT INTO dateset (start)
VALUES (ts)
RETURNING dateset.id;
$$ LANGUAGE SQL;
UPDATE reminder SET dateset_id = insertreturning(reminder.start);
DROP FUNCTION insertreturning(TIMESTAMP);
ALTER TABLE reminder DROP COLUMN start;
END;
```
This approach to the problem suggested itself after I realized that writing `INSERT ... RETURNING` as a subquery would solve the issue; although `INSERT`s are not allowed as subqueries, calls to functions certainly are.
Intriguingly, this suggests that DML subqueries that return values might be broadly useful. If they were possible, we would just write:
```
UPDATE reminder SET dateset_id = (
INSERT INTO dateset (start)
VALUES (reminder.start)
    RETURNING dateset.id);
```
|
PostgreSQL - insert rows based on select from another table, and update an FK in that table with the newly inserted rows
|
[
"",
"sql",
"postgresql",
""
] |
I have a query that is already calculating a running total for new clients and clients that have left. However, if there are no new clients or no clients have left, I get blank spaces when I try to plot this on a graph.
How can I get the previous years figures if a year is null?
```
select x.Year
, case when x.TotalClients is null then 0 else x.TotalClients end as 'TotalNewClients'
, x.RunningTotal as 'RunningTotalNewClients'
, case when x2.TotalClients is null then 0 else x2.TotalClients end as 'TotalLeftClients'
, x2.RunningTotal as 'RunningTotalLeftClients'
from (
SELECT
st1.YearStart as 'Year',
st1.TotalClients,
RunningTotal = SUM(st2.TotalClients)
FROM
@TotalsStart AS st1
INNER JOIN
@TotalsStart AS st2
ON st2.YearStart <= st1.YearStart
GROUP BY st1.YearStart, st1.TotalClients) as x
left outer join
(SELECT
st1.YearStart as 'Year',
st1.TotalClients,
RunningTotal = SUM(st2.TotalClients)
FROM
@TotalsEnd AS st1
INNER JOIN
@TotalsEnd AS st2
ON st2.YearStart <= st1.YearStart
GROUP BY st1.YearStart, st1.TotalClients
) as x2 on x.Year = x2.Year
Order by x.Year
```
|
Maybe the [LAG function](https://msdn.microsoft.com/en-us/library/hh231256.aspx) can help you?
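To make the suggestion concrete: `LAG` reads a value from the previous row of the window, which lets you fall back to the prior running total when the current one is NULL. A minimal illustration (SQLite 3.25+ via Python's `sqlite3`; the table and values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE totals (yr INT, running_total INT);
INSERT INTO totals VALUES (2012, 5), (2013, NULL), (2014, 12);
""")

# when the current running total is NULL, take the previous year's value
rows = conn.execute("""
SELECT yr,
       CASE WHEN running_total IS NULL
            THEN LAG(running_total, 1, 0) OVER (ORDER BY yr)
            ELSE running_total END AS filled
FROM totals
ORDER BY yr
""").fetchall()
print(rows)  # [(2012, 5), (2013, 5), (2014, 12)]
```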
|
This is my amended code. I changed 2 lines in the SELECT to use the LAG function; I used the link above to get this working.
```
select x.Year
, case when x.TotalClients is null then 0 else x.TotalClients end as 'TotalNewClients'
, x.RunningTotal
, case when x.RunningTotal is null then lag(x.RunningTotal,1,0) over(order by x.year) else x.RunningTotal end as 'RunningTotalNewClients'
, case when x2.TotalClients is null then 0 else x2.TotalClients end as 'TotalLeftClients'
, x2.RunningTotal
, case when x2.RunningTotal is null then lag(x2.RunningTotal,1,0) over(order by x.year) else x2.RunningTotal end as 'RunningTotalLeftClients'
from (
SELECT
st1.YearStart as 'Year',
st1.TotalClients,
RunningTotal = SUM(st2.TotalClients)
FROM
@TotalsStart AS st1
INNER JOIN
@TotalsStart AS st2
ON st2.YearStart <= st1.YearStart
GROUP BY st1.YearStart, st1.TotalClients) as x
left outer join
(SELECT
st1.YearStart as 'Year',
st1.TotalClients,
RunningTotal = SUM(st2.TotalClients)
FROM
@TotalsEnd AS st1
INNER JOIN
@TotalsEnd AS st2
ON st2.YearStart <= st1.YearStart
GROUP BY st1.YearStart, st1.TotalClients
) as x2 on x.Year = x2.Year
Order by x.Year
```
|
sql get previous row value for a running total
|
[
"",
"sql",
"sql-server",
""
] |
I want to do a search in my app. Here is the general shape of my query:
```
SELECT * FROM My_Table WHERE Col1 OR col or Col3 LIKE "String";
```
My question is: what is the correct syntax for that query?
Thanks.
|
The correct syntax looks like:
```
SELECT * FROM My_Table WHERE Col1 LIKE "%String%" OR col LIKE "%String%" or Col3 LIKE "%String%";
```
|
```
SELECT * FROM My_Table
WHERE LOWER(Col1) LIKE '%string%'
OR LOWER(Col2) LIKE '%string%'
OR LOWER(Col3) LIKE '%string%';
```
It is usually recommended to keep the search case-insensitive. Here is the syntax for it.
|
Mention more than one column in a WHERE clause
|
[
"",
"sql",
"where-clause",
"sql-like",
""
] |
I'm struggling with this piece of code:
```
INSERT INTO Bier(Bier, BrowuerID, TypeID, GistingID, KleurID, Alcohol)
SELECT bb.Naam, b.BrouwerID, t.TypeID, g.GistingID, k.KleurID, bb.Alcoholperc
FROM BELGISCHBIER AS bb JOIN Brouwer AS b JOIN Type AS t JOIN Gisting AS g JOIN Kleur AS k
```
Can someone tell me why this won't work? I don't really know that much about SQL.
Thanks.
|
You have used `JOIN`s without the mandatory `ON` clause, which determines which columns the tables should be joined on:
```
INSERT INTO bier
(
bier,
browuerid,
typeid,
gistingid,
kleurid,
alcohol
)
SELECT bb.naam,
b.brouwerid,
t.typeid,
g.gistingid,
k.kleurid,
bb.alcoholperc
FROM belgischbier AS bb
JOIN brouwer AS b
ON bb.browuerid= b.browuerid
JOIN type AS t
ON b.BrouwerTypeID = t.BrouwerTypeID
JOIN gisting AS g
ON bb.GistingID = g.GistingID
JOIN kleur AS k
ON g.KleurID = k.KleurID
```
|
With `JOIN`, SQL understands `INNER JOIN`, so use of the `ON` clause is mandatory.
Only if you use `CROSS JOIN` can you omit the `ON` clause.
So, add an `ON` clause to each `JOIN` in your command.
|
MSSQL Select more Tables to INSERT
|
[
"",
"sql",
"sql-server",
"insert",
""
] |
An Oracle SQL update from one table to another table throws a syntax error for the following simple update query.
```
UPDATE Sales_Import SI
SET AccountNumber = RAN.AccountNumber
FROM RetrieveAccountNumber RAN
WHERE RAN.LeadID = SI.LeadID;
```
**Error:**
```
Error starting at line 1 in command:
Error at Command Line:2 Column:37
Error report:
SQL Error: ORA-00933: SQL command not properly ended
00933. 00000 - "SQL command not properly ended"
```
I believe the same query will work in other databases like postgres etc.
Could anyone tell me the correct query?
Also, is what I tried an ANSI-standard query?
|
> is what I tried an ANSI-standard query?
No. Oracle doesn't support joins in an UPDATE statement.
In Oracle you could do it in two ways -
**Merge statement**
with only an **update clause** in the merge statement:
```
MERGE INTO sales_import s
USING (SELECT *
FROM retrieveaccountnumber) u
ON (u.leadid = s.leadid)
WHEN matched THEN
  UPDATE SET s.accountnumber = u.accountnumber;
```
**Correlated query**
```
UPDATE sales_import t1
SET accountnumber = (SELECT t2.accountnumber
FROM retrieveaccountnumber t2
WHERE t1.leadid = t2.leadid )
WHERE EXISTS (
SELECT 1
FROM retrieveaccountnumber t2
WHERE t1.leadid = t2.leadid );
```
|
I would write your SQL like this:
```
UPDATE Sales_Import SI
SET AccountNumber = (Select RAN.AccountNumber
FROM RetrieveAccountNumber RAN
WHERE RAN.LeadID = SI.LeadID);
```
|
Oracle SQL update from one Table to another table throws syntax error
|
[
"",
"sql",
"oracle",
""
] |
I have a date format like
```
'2003-11-27 00:00:00.000'
```
How can I convert it into `YYYYMMDD`, like 20031127, with SQL Server?
|
It depends on how your date is declared.
```
DECLARE @date datetime = '2003-11-27T00:00:00.000' -- datetime
DECLARE @date2 char(23) = '2003-11-27 00:00:00.000' -- char(23)
SELECT
convert(char(8), @date, 112) datetimeconvert,
convert(char(8), convert(datetime, @date2, 121), 112) charconvert
```
Result:
```
datetimeconvert charconvert
20031127 20031127
```
|
To convert the date per the ISO standard you can write:
```
SELECT CONVERT (VARCHAR(8), GETDATE(),112) as [YYYYMMDD]
```
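For reference, the same conversion outside SQL Server, sketched in Python with the sample value from the question:

```python
from datetime import datetime

# parse the '2003-11-27 00:00:00.000' literal, then emit YYYYMMDD
d = datetime.strptime('2003-11-27 00:00:00.000', '%Y-%m-%d %H:%M:%S.%f')
print(d.strftime('%Y%m%d'))  # 20031127
```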
|
Convert format of date
|
[
"",
"sql",
"sql-server",
""
] |
I am working with SQL Server 2012. I need to remove a particular string from column values; this string has been saved multiple times for a user in the column.
I need to write a stored procedure for it. My table structure is like the following:
```
Id UserId ColumnNeedUpdate Address
1 2565 l:\xyz\sfd\mybook.png Mumbai
1 2565 l:\xyz\sfd\myook.png Mumbai
1 2565 l:\xyz\sfd\mbook.png Mumbai
1 2465 l:\xzd\sfd\mybook.png Mumbai
1 2265 C:\myz\sfd\mybook.png Mumbai
1 2965 C:\xsz\sfd\mybook.png Mumbai
1 2565 l:\xyz\sfd\maybook.png Mumbai
1 2765 C:\zxu\sfd\mybook.png Mumbai
1 2465 m:\xdz\sfd\mybook.png Mumbai
```
Now if I write the select query for user 2565, the result will be the following:
```
1 2565 l:\xyz\sfd\mybook.png Mumbai
1 2565 l:\xyz\sfd\myook.png Mumbai
1 2565 l:\xyz\sfd\mbook.png Mumbai
1 2565 l:\xyz\sfd\maybook.png Mumbai
```
The purpose of writing the stored procedure is to update the column "ColumnNeedUpdate" like the following:
```
mybook.png, myook.png,mbook.png,maybook.png
```
I am pretty new to SQL Server stored procedures, so I tried the following:
```
Create PROCEDURE UsingExistsstoredprocedure(@Id int)
AS
DECLARE @ResultValue int,@ResultFirstName nvarchar(500),@imax int,@i int;
BEGIN TRAN
IF EXISTS(SELECT Name FROM Image WHERE UserId = @Id)
BEGIN
SET @imax=@@ROWCOUNT
SET @i=1
WHILE(@i<=@imax)
BEGIN
SET @ResultFirstName =(SELECT Name FROM [picsilo].[dbo].[Image] WHERE UserId=@i);
IF CHARINDEX('\',@ResultFirstName)>0
SET @ResultFirstName=SUBSTRING(@ResultFirstName,0,CHARINDEX('\',@ResultFirstName))
INSERT Into Image (Name)VALUES(@ResultFirstName)
END
END
```
|
```
UPDATE MyTable SET ColumnNeedUpdate =
REPLACE(ColumnNeedUpdate, 'l:\xyz\sfd\', '')
WHERE UserId = 2565
```
**Disclaimer:** This code changes the data in your table. Use at your own discretion.
|
It should work even with a non-static "l:\xyz\sfd\" prefix:
```
UPDATE MyTable SET ColumnNeedUpdate =
REVERSE(SUBSTRING(REVERSE(ColumnNeedUpdate), 1,
CHARINDEX('\', REVERSE(ColumnNeedUpdate)) - 1))
WHERE UserId = 2565
```
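The intent of the REVERSE/SUBSTRING trick — keep only the text after the last backslash — can be sanity-checked with a plain Python sketch (sample paths taken from the question):

```python
# Keep only the file name after the last backslash, mirroring the
# REVERSE/SUBSTRING/CHARINDEX expression above.
paths = [r"l:\xyz\sfd\mybook.png", r"l:\xyz\sfd\myook.png"]
names = [p.rsplit("\\", 1)[-1] for p in paths]
print(names)  # ['mybook.png', 'myook.png']
```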
|
how to replace string values of column values by SQL Stored Procedure
|
[
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
I need to run a SQL query against the DB from SoapUI every day automatically.
The query to the DB uses the current date.
So, I do the following:
1) I set the current date into a property with Groovy:
```
def dateTime= new Date()
setProperty.setPropertyValue('currDate', String.format("%tF", dateTime, new Date()))
```
2) in the JDBC request I create the corresponding property and try to use it in the query:
```
select * ...
where SENDDATE = :Date
```
But trying to execute the query leads to a mismatch of datatypes:
```
2015-02-04 17:17:54 - Error getting response; java.sql.SQLException: Data type mismatch. (2015-02-04)
```
So, my question is: how do I avoid the mismatch in this case?
Note: the query works correctly with a directly supplied date such as '2015-02-02'.
Dmitry
|
You could use SQL to figure out the date, and then it will always be in the correct format:
```
select * ...
where SENDDATE = now()
```
|
First of all, make sure your MySQL field's data type is relevant (it should be 'date', not time, text, varchar, etc.).
Regarding the Groovy formatting, here is how:
```
import groovy.time.*
def dateTime= new Date().format('yyyy/MM/dd')
testRunner.testCase.testSteps["Properties"].setPropertyValue('currDate',dateTime.toString())
```
**JDBC (mysql Engine) Query:**
```
select * ...
where SENDDATE = '${Properties#currDate}'
```
**Notes:**
* The format in your db will be standard like 2015-02-05
* You can have a format like 'yyyy/MM/dd HH:mm:ss' if you want to have the time as well; in that case your MySQL field's datatype must be 'datetime'.
|
soapui jdbc: java.sql.SQLException: Data type mismatch. (2015-02-04)
|
[
"",
"sql",
"jdbc",
"groovy",
"soapui",
""
] |
I have a table `Customers` that contains the fields `customer_id`, `first_name`, `last_name`, `referred_by`.
What I want to do is display the full name of the customers who have been referred, together with the full name of the referrer.
For example I want something like this :
```
id | first_name | last_name | referred_by
1 | first1 | last1 | NULL
2 | first2 | last2 | 1
3 | first3 | last3 | 2
```
The output should be:
```
customer's name | referred by
first2 last2 | first1 last1
first3 last3 | first2 last2
```
Usually I would create a table called references and store the references there, but I can't modify the database.
I imagine something like :
```
SELECT
CONCAT(firstname, ' ', lastname) AS 'Customer''s Name',
CONCAT(firstname, ' ', lastname) AS 'Referred By '
FROM customer
WHERE 'Customer''s Name'.refferedby = 'Referred By'.customerno;
```
Thanks for any help.
|
Use a join.
```
SELECT
CONCAT(c1.firstname, ' ', c1.lastname) AS 'Customer''s Name',
CONCAT(c2.firstname, ' ', c2.lastname) AS 'Referred By '
FROM customer c1 JOIN customer c2 ON c1.referred_by = c2.customer_id
```
|
If the referred\_by column contains an id value from the customer table, then it would be something like this:
```
select concat(c.firstname,' ',c.lastname) customer
, concat(r.firstname,' ',r.lastname) referred_by_customer
from customer c
left join customer r on r.customer_id = c.referred_by
```
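The self-join can be exercised end-to-end with Python's `sqlite3` and the sample rows from the question (a sketch of the same query, not MySQL itself):

```python
import sqlite3

# Self-join: c is the referred customer, r is the referrer.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer (customer_id INT, first_name TEXT,
                       last_name TEXT, referred_by INT);
INSERT INTO customer VALUES (1,'first1','last1',NULL),
                            (2,'first2','last2',1),
                            (3,'first3','last3',2);
""")
rows = conn.execute("""
    SELECT c.first_name || ' ' || c.last_name,
           r.first_name || ' ' || r.last_name
    FROM customer c
    JOIN customer r ON r.customer_id = c.referred_by
    ORDER BY c.customer_id
""").fetchall()
print(rows)  # [('first2 last2', 'first1 last1'), ('first3 last3', 'first2 last2')]
```

Customer 1 has a `NULL` `referred_by`, so the inner join drops that row, exactly as in the expected output.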
|
How to query a table referring to itself in MySql?
|
[
"",
"mysql",
"sql",
""
] |
I am trying to arrange book titles in alphabetical order through an Oracle PL/SQL function, but I want to ignore 'a', 'an', and 'the' in the sorting. I have tried creating the function using different functions such as LTRIM, TRIM, and REPLACE, but the book titles are not sorted when I use ORDER BY with the function name.
Code for Create function is -
```
create or replace function sort_string (p_title in varchar2)
return varchar2
as
begin
return replace(replace(replace(p_title,'THE',''),'a',''),'an','');
end;
/
```
and the following SQL to retrieve data
```
select book_title
from books
order by sort_string(book_title);
```
|
Why use a function at all? Also, be sure to test with titles that begin with the letter 'A' but not the *word* 'A ' and so forth. The `ltrim( string, 'A ' )` function will return 'Great Day' when passed 'A Great Day' but it will also return 'nother Great Day' when passed 'Another Great Day'. This will not work as you expect. Note the titles I've added in the `Fiddle`
```
select *
from books
order by case when Book_title like 'A %'
then LTrim( book_title, 'A ' )
when Book_title like 'An %'
then LTrim( book_title, 'An ' )
when Book_title like 'The %'
then LTrim( book_title, 'The ' )
else Book_title end;
```
|
Try with this
```
CREATE OR REPLACE FUNCTION sort_string(
p_title IN VARCHAR2)
RETURN VARCHAR2
AS
BEGIN
RETURN trim(REPLACE(REPLACE(REPLACE(p_title,'A ',''),'An ',''),'The ',''));
END;
/
CREATE TABLE book
( bookid INT, book_title VARCHAR(100)
);
INSERT
INTO book VALUES
(
1,
'A reference to the Oracle analytical functions'
);
INSERT INTO book VALUES
(2, 'Croaking tetra from South America'
);
INSERT INTO book VALUES
(3, 'The Animals of Peru'
);
INSERT INTO book VALUES
(4, 'The Grand Medieval Bestiary'
);
INSERT INTO book VALUES
(5, 'The ancient Cities'
);
SELECT sort_string(BOOK_TITLE),book_title
FROM BOOK
ORDER BY nlssort(sort_string(BOOK_TITLE),'NLS_SORT=BINARY_CI');
```
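The same "strip only a leading article, not leading letters" rule can be sketched in Python, checking the edge case the other answer raises ('Another' must not lose its 'An'):

```python
import re

# Sort titles ignoring a leading article; the \s+ anchor ensures only a
# whole leading word ('A', 'An', 'The') is stripped, so 'Another' is untouched.
titles = ["The Animals of Peru", "Croaking tetra from South America",
          "Another Great Day", "A Great Day"]

def sort_key(title):
    return re.sub(r"^(A|An|The)\s+", "", title, flags=re.IGNORECASE).lower()

result = sorted(titles, key=sort_key)
print(result)
# ['The Animals of Peru', 'Another Great Day',
#  'Croaking tetra from South America', 'A Great Day']
```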
|
how to sort book titles in order without a, an, the using oracle pl sql function
|
[
"",
"sql",
"oracle",
"sorting",
"plsql",
"sql-order-by",
""
] |
I need to normalize a table that contains personal (anagraphic) data.
Here is a sample table with some rows:
```
Id LastName FirsName Address Email Tel Mobile Age
1 Rossi Mario Via Milano NULL 123456 NULL 41
2 Rossi Mario NULL rm@test.it 123456 NULL NULL
3 Rossi Mario Via Milano NULL NULL 254521 NULL
```
I want to merge the values in the records in order to have one unique record with all the values.
Like this:
```
Id LastName FirsName Address Email Tel Mobile Age
1 Rossi Mario Via Milano rm@test.it 123456 254521 41
```
I tried to use the MERGE statement, but I don't think this is the right solution.
Any help is appreciated.
|
If you are considering `FirstName` and `LastName` as key identifiers then you can write as:
```
SELECT MIN(Id),
T1.LastName ,
T1.FirsName,
SUBSTRING ((SELECT DISTINCT CASE WHEN T2.Address IS NULL
THEN '' ELSE ','+ T2.Address END
FROM @Test T2
WHERE T1.LastName = T2.LastName AND T1.FirsName = T2.FirsName
FOR XML PATH('')),2,8000) AS [Address],
SUBSTRING ((SELECT DISTINCT CASE WHEN T3.Email IS NULL
THEN '' ELSE ','+ T3.Email END
FROM @Test T3
WHERE T1.LastName = T3.LastName AND T1.FirsName = T3.FirsName
FOR XML PATH('')),2,8000)AS Email
FROM @Test T1
GROUP BY T1.LastName ,T1.FirsName
```
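The `FOR XML PATH('')` idiom concatenates the grouped values into one string; the analogous idea can be sketched with SQLite's `GROUP_CONCAT` via Python (illustration only — the answer above targets SQL Server):

```python
import sqlite3

# Aggregate functions skip NULLs, so the single non-NULL email survives
# when rows for the same person are grouped together.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (LastName TEXT, FirsName TEXT, Email TEXT);
INSERT INTO t VALUES ('Rossi','Mario',NULL),('Rossi','Mario','rm@test.it');
""")
rows = conn.execute("""
    SELECT LastName, FirsName, GROUP_CONCAT(Email) AS Email
    FROM t GROUP BY LastName, FirsName
""").fetchall()
print(rows)  # [('Rossi', 'Mario', 'rm@test.it')]
```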
|
I think this might help you:
```
UPDATE TAB_NAME
SET ID = MIN(ID), LastName = MAX(LastName), FirsName = MAX(FirsName),
    Address = MAX(Address), Email = MAX(Email), Tel = MAX(Tel),
Mobile = MAX(Mobile), Age = MAX(Age)
WHERE COND.;
```
|
Merge records in SQL SERVER
|
[
"",
"sql",
"sql-server",
"merge",
""
] |
I have this query which finds the number of properties handled by each staff member along with their branch number:
```
SELECT s.branchNo, s.staffNo, COUNT(*) AS myCount
FROM Staff s, PropertyForRent p
WHERE s.staffNo=p.staffNo
GROUP BY s.branchNo, s.staffNo
```
The two relations are:
```
Staff{staffNo, fName, lName, position, sex, DOB, salary, branchNO}
PropertyToRent{propertyNo, street, city, postcode, type, rooms, rent, ownerNo, staffNo, branchNo}
```
How does SQL know what `COUNT(*)` is referring to? Why does it count the number of properties and not (say for example), the number of staff per branch?
|
This is a bit long for a comment.
`COUNT(*)` is counting the number of rows in each group. It is not specifically counting any particular column. Instead, what is happening is that the `join` is producing multiple properties, because the properties are what cause multiple rows for given values of `s.branchNo` and `s.staffNo`.
It gets even a little more "confusing" if you include a column name. The following would all typically return the same value:
```
COUNT(*)
COUNT(s.branchNo)
COUNT(s.staffNo)
COUNT(p.propertyNo)
```
With a column name, `COUNT()` determines the number of rows that do not have a `NULL` value in the column.
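The NULL-counting rule is easy to verify with Python's `sqlite3` (the semantics are the same as in standard SQL):

```python
import sqlite3

# COUNT(*) counts rows; COUNT(a) counts only non-NULL values of a.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INT)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (None,), (3,)])
star, col = conn.execute("SELECT COUNT(*), COUNT(a) FROM t").fetchone()
print(star, col)  # 3 2
```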
And finally, you should learn to use proper, explicit `join` syntax in your queries. Put `join` conditions in the `on` clause, not the `where` clause:
```
SELECT s.branchNo, s.staffNo, COUNT(*) AS myCount
FROM Staff s JOIN
PropertyForRent p
ON s.staffNo = p.staffNo
GROUP BY s.branchNo, s.staffNo;
```
|
`GROUP BY` clauses partition your result set. These partitions are all the sql engine needs to know - it simply counts their sizes.
Try your query with only `count(*)` in the `select` part.
In particular, `COUNT(*)` does *not* produce the number of distinct rows/columns in your result set!
|
How does GROUP BY use COUNT(*)
|
[
"",
"sql",
""
] |
I am working on a sql statement where I'm trying to grab all projects less than or equal to the development date. However, I'm getting an error
"Data type mismatch in criteria expression"
I've searched and searched, but I haven't been able to find anything.
The raw results look similar to this:
```
| Title | devTerm | pilotTerm |
+-------+---------+-----------+
| Ex1 | 201401 | 201404 |
| Ex2 | 201301 | 201401 |
| Ex3 | 201504 | 201601 |
```
Here's my query:
```
SELECT *
FROM projects
WHERE Len(devTerm)>0
AND Len(pilotTerm)>0
AND Date() >= CDate(DateSerial(Left(devTerm,4),Right(devTerm,2),1))
```
What am I doing wrong?
p.s. I wouldn't be using Access if I didn't have to.
I am referencing this article: [Convert Text to date](https://stackoverflow.com/questions/12359395/how-to-convert-a-text-field-to-a-date-time-field-in-access-2010)
Screenshot:

|
OK, I figured out the issue. From the comments under my question, I was able to think of the Nz() function. The final query looks like:
```
SELECT *
FROM projects
WHERE Date() >= DateSerial(Left(Nz(devTerm, "1990"),4),Right(Nz(devTerm, "01"),2), 1)
AND Date() < DateSerial(Left(Nz(pilotTerm, "1990"),4),Right(Nz(pilotTerm, "01"),2), 1)
```
pilotTerm and devTerm were producing NULLs, and that's where the issue was coming from. I thought Access SQL would filter them out with
```
Len(devTerm)>0
AND Len(pilotTerm)>0
```
|
So far the only way I have been able to recreate this issue is by making a quick test function to see what the possibilities could be. From this I feel there has to be a record that is blank or invalid; the function to recreate this is as follows:
```
Public Function Test()
If Date >= DateSerial(Left("201401", 4), Right("201401", 2), 1) Then
MsgBox DateSerial(Left("", 4), Right("", 2), 1)
End If
End Function
```
It gets into the if statement, then breaks with a type mismatch. So a blank or invalid record is likely causing this.
|
Access Date Comparison Error
|
[
"",
"sql",
"date",
"ms-access",
"ms-access-2013",
""
] |
I have a table related to reviews made by a person. The table has the following fields: **reviewId**, **personId**, **isComplete**, where **isComplete** is a boolean indicating whether the particular person completed his review.
Imagine the following values:
```
ReviewID | PersonID | isComplete |
1 1 1
2 1 1
3 2 0
4 2 0
5 3 1
6 3 0
```
In this case I should get only **PersonID** = 1 as a result because only they have completed all their reviews.
I have tried many queries and the closest one was:
`SELECT * FROM reviews x WHERE 1 = ALL (SELECT isComplete FROM reviews y WHERE x.personid = y.personid AND isComplete=1);`
Any suggestions or hints will be greatly appreciated.
|
```
SELECT DISTINCT(PersonID) FROM reviews
WHERE PersonId NOT IN (
SELECT DISTINCT(PersonID) FROM reviews WHERE isComplete = 0
)
```
|
* Table A contains all records
* Table B contains all people who have at least 1 outstanding review.
* We use a left join and eliminate nulls so that what remains is only users who have records with no outstanding reviews...
```
SELECT Distinct A.PersonID
FROM TABLE A
LEFT JOIN Table B
on A.PersonID = B.PersonId
and B.isComplete = 0
WHERE B.PersonId is null
```
I used distinct to return only one record per person.
Another way to do this (I believe the most efficient) would be to use an exists statement:
```
SELECT Distinct A.PersonID
FROM table A
WHERE not exists (Select 1 from Table B where B.iscomplete=0 and A.PersonID=B.PersonID)
```
This basically says return all persons who don't have an incomplete review.
The premise in both these cases is that a single entry of an incomplete review is enough to exclude them from the result set.
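Both variants can be checked against the sample data with Python's `sqlite3`; here is the `NOT EXISTS` form (a sketch of the same query, not MySQL itself):

```python
import sqlite3

# Only person 1 has no incomplete review, so only PersonID 1 is returned.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE reviews (ReviewID INT, PersonID INT, isComplete INT);
INSERT INTO reviews VALUES (1,1,1),(2,1,1),(3,2,0),(4,2,0),(5,3,1),(6,3,0);
""")
rows = conn.execute("""
    SELECT DISTINCT a.PersonID
    FROM reviews a
    WHERE NOT EXISTS (SELECT 1 FROM reviews b
                      WHERE b.isComplete = 0 AND a.PersonID = b.PersonID)
""").fetchall()
print(rows)  # [(1,)]
```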
|
Construct SQL query to find records that all have the same value
|
[
"",
"mysql",
"sql",
"subquery",
""
] |
I am trying to compute a value in the select statement itself, but surprisingly it results in 0.
```
SELECT Top(1) Name,
LEN(Name) AS Equals,
Abs(LEN('Johny') - LEN(Name)) AS NotEquals,
LEN(Name)/(Abs(LEN('Johny') - LEN(Name)) + LEN(Name)) As Match
FROM Demo
WHERE Name = LEFT( 'Johny' , LEN(Name) )
ORDER BY LEN(Name) DESC
```
Output:
```
Name Equals NotEquals Match
John 4 1 0
```
Why exactly is value of `match` field `0` in output?
|
You are trying to divide integer values where the divisor is greater than the dividend, so integer division yields 0.
Try casting one value to float:
```
SELECT Top(1) Name,
LEN(Name) AS Equals,
Abs(LEN('Johny') - LEN(Name)) AS NotEquals,
cast(LEN(Name) as float)/(Abs(LEN('Johny') - LEN(Name)) + LEN(Name)) As Match
FROM Demo
WHERE Name = LEFT( 'Johny' , LEN(Name) )
ORDER BY LEN(Name) DESC
```
|
Because your calculation uses integers:
```
4/5 = 0
```
Everything after the decimal point is truncated.
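The truncation is easy to see with any SQL engine; here it is via Python's `sqlite3` (SQL Server behaves the same way for int/int division):

```python
import sqlite3

# Integer division truncates; casting one operand to a floating type
# preserves the fractional part.
conn = sqlite3.connect(":memory:")
int_result = conn.execute("SELECT 4/5").fetchone()[0]
float_result = conn.execute("SELECT CAST(4 AS REAL)/5").fetchone()[0]
print(int_result, float_result)  # 0 0.8
```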
|
Computing value in select statement t-sql
|
[
"",
"sql",
"ssms",
""
] |
I'm pretty new to SQL. I'm trying to write a query that will grab, in a single query, the records that share a value with the sought-after record.
For example below, if one record has a 'No' in it, I want it to return all records that share a common 'Letter':
```
Letter;Present
A;Yes
A;No
A;Yes
B;Yes
B;Yes
B;Yes
```
Returning:
```
Letter;Present
A;Yes
A;No
A;Yes
```
|
You will need to use a subquery for that..
```
SELECT a.letter, a.present
FROM yourTable a
WHERE a.letter IN (SELECT letter
FROM yourTable
WHERE present = 'No');
```
What you do here is select all letters where `present = 'No'`; then you simply re-select the rows whose letter is in that list of letters containing a `No`.
In other words, the subquery is a filter.
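The filter can be exercised with the question's sample data using Python's `sqlite3` (a sketch of the same query):

```python
import sqlite3

# The subquery finds letters that have at least one 'No';
# the outer query returns every row carrying such a letter.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (Letter TEXT, Present TEXT);
INSERT INTO t VALUES ('A','Yes'),('A','No'),('A','Yes'),
                     ('B','Yes'),('B','Yes'),('B','Yes');
""")
rows = conn.execute("""
    SELECT Letter, Present FROM t
    WHERE Letter IN (SELECT Letter FROM t WHERE Present = 'No')
""").fetchall()
print(rows)  # the three rows for letter 'A'
```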
|
Use a sub-query to find the letters that have a 'No', then select the rows whose letter is among them:
```
select * from table
where letter in (select letter from table where present = 'No');
```
|
Associated records
|
[
"",
"sql",
""
] |
I am using SQL Server. I have a table that looks like:
```
StudentName Class score
Jim a1 80
Ann a1 83
Bill a2 90
```
I want to select the students whose score is above the average score in their class. Here is my code:
```
Select a.StudentName
From Table a
inner Join Table b
On a.score>(select avg(b.score) From b
Where a.class=b.class
group by class);
```
I think there might be a problem with "where a.class=b.class". Is my inner join method correct here? Can I also use a join differently?
|
Try something like this:
```
Select a.StudentName, TableAvg.class, a.score
From TableA a
inner Join (select class, avg(score) as AvgScore
From TableA
group by class) TableAvg
On a.score > TableAvg.AvgScore
and a.class = TableAvg.class
```
Untested code...
|
Sample data:
```
SELECT 'Jim' AS StudentName, 'A1' AS Class, 80 AS Score
INTO #Temporary
UNION ALL
SELECT 'Ann', 'A1', 83
UNION ALL
SELECT 'Bill', 'A2', 90
```
Actual query (without needing to join table twice)
```
SELECT *
FROM (
SELECT StudentName, Class, Score, AVG(CAST(Score AS FLOAT)) OVER(PARTITION BY Class) AS AvgScore
FROM #Temporary
) AS T
WHERE T.Score > T.AvgScore
```
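The windowed version can be checked with Python's `sqlite3` (SQLite 3.25+ supports window functions; strictly "above the average" uses `>`):

```python
import sqlite3

# Per-class average via a window function, then keep students above it.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE s (StudentName TEXT, Class TEXT, Score INT);
INSERT INTO s VALUES ('Jim','a1',80),('Ann','a1',83),('Bill','a2',90);
""")
rows = conn.execute("""
    SELECT StudentName FROM (
        SELECT StudentName, Score,
               AVG(Score) OVER (PARTITION BY Class) AS AvgScore
        FROM s
    ) WHERE Score > AvgScore
""").fetchall()
print(rows)  # [('Ann',)]
```

Jim (80 vs 81.5) and Bill (90 vs 90) do not qualify; only Ann beats her class average.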
|
Inner join and Avg( ) function
|
[
"",
"sql",
"sql-server",
"inner-join",
"average",
""
] |
I'm attempting to perform a GROUP BY on a join table. The join table essentially looks like:
```
CREATE TABLE user_foos (
id SERIAL PRIMARY KEY,
user_id INT NOT NULL,
foo_id INT NOT NULL,
effective_at DATETIME NOT NULL
);
ALTER TABLE user_foos
ADD CONSTRAINT user_foos_uniqueness
UNIQUE (user_id, foo_id, effective_at);
```
I'd like to query this table to find all records where the `effective_at` is the max value for any pair of `user_id, foo_id` given. I've tried the following:
```
SELECT "user_foos"."id",
"user_foos"."user_id",
"user_foos"."foo_id",
max("user_foos"."effective_at")
FROM "user_foos"
GROUP BY "user_foos"."user_id", "user_foos"."foo_id";
```
Unfortunately, this results in the error:
> column "user\_foos.id" must appear in the GROUP BY clause or be used in an aggregate function
I understand that the problem relates to "id" not being used in an aggregate function and that the DB doesn't know what to do if it finds multiple records with differing ID's, but I know this could never happen due to my trinary primary key across those columns (`user_id`, `foo_id`, and `effective_at`).
To work around this, I also tried a number of other variants such as using the [`first_value` window function](http://www.postgresql.org/docs/devel/static/functions-window.html) on the `id`:
```
SELECT first_value("user_foos"."id"),
"user_foos"."user_id",
"user_foos"."foo_id",
max("user_foos"."effective_at")
FROM "user_foos"
GROUP BY "user_foos"."user_id", "user_foos"."foo_id";
```
and:
```
SELECT first_value("user_foos"."id")
FROM "user_foos"
GROUP BY "user_foos"."user_id", "user_foos"."foo_id"
HAVING "user_foos"."effective_at" = max("user_foos"."effective_at")
```
Unfortunately, these both result in a different error:
> window function call requires an OVER clause
Ideally, my goal is to fetch ALL matching `id`'s so that I can use it in a subquery to fetch the legitimate full row data from this table for matching records. Can anyone provide insight on how I can get this working?
|
Try:
```
SELECT *
FROM (
SELECT t.*,
row_number() OVER( partition by user_id, foo_id ORDER BY effective_at DESC ) x
FROM user_foos t
) sub
WHERE x = 1
```
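The `row_number()` pattern (with the derived-table alias Postgres requires) can be sketched with Python's `sqlite3` on assumed sample rows:

```python
import sqlite3

# Keep, per (user_id, foo_id) pair, the row with the latest effective_at.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE user_foos (id INT, user_id INT, foo_id INT, effective_at TEXT);
INSERT INTO user_foos VALUES (1, 1, 10, '2015-01-01'),
                             (2, 1, 10, '2015-02-01'),
                             (3, 2, 10, '2015-01-15');
""")
rows = conn.execute("""
    SELECT id FROM (
        SELECT id, ROW_NUMBER() OVER (PARTITION BY user_id, foo_id
                                      ORDER BY effective_at DESC) AS x
        FROM user_foos
    ) sub
    WHERE x = 1
    ORDER BY id
""").fetchall()
print(rows)  # [(2,), (3,)]
```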
|
Postgres has a very nice feature called `distinct on`, which can be used in this case:
```
SELECT DISTINCT ON (uf."user_id", uf."foo_id") uf.*
FROM "user_foos" uf
ORDER BY uf."user_id", uf."foo_id", uf."effective_at" DESC;
```
It returns the first row in a group, based on the values in parentheses. The `order by` clause needs to include these values as well as a third column for determining which is the first row in the group.
|
Trouble performing Postgres group by non-ID column to get ID containing max value
|
[
"",
"sql",
"postgresql",
"window-functions",
""
] |
I wanted to remove/ignore the seconds and milliseconds coming from GETDATE() SQL function.
When I executed,
```
SELECT GETDATE()
```
output will be like
```
2015-01-05 14:52:28.557
```
I want to ignore the seconds and milliseconds in the above output. What is the best and most optimized way to do this?
I have tried to do this by typecasting like this:
```
SELECT CAST(FORMAT(GETDATE(),'yyyy-MM-dd HH:mm:0') AS datetime)
```
Is this the correct and most optimized way to do this?
|
I'd either use the `DATEADD`/`DATEDIFF` trick that [Codo has shown](https://stackoverflow.com/a/28338137/15498) or just cast it to [`smalldatetime`](https://msdn.microsoft.com/en-GB/library/ms182418.aspx)1:
```
select CAST(GETDATE() as smalldatetime)
```
I'd avoid anything that involves round-tripping the value through a string.
1It may be appropriate, at this time, to change your schema to use this data type anyway, if seconds are always irrelevant.
|
Try this:
```
SELECT dateadd(minute, datediff(minute, 0, GETDATE()), 0)
```
The query uses the fact that `DATEDIFF` returns the number of minutes between two dates, ignoring the smaller units. `0` is a fixed date in the past.
It can be easily adapted to other time units.
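Outside SQL, truncating to the minute is the same idea; a Python sketch for comparison:

```python
from datetime import datetime

# Drop seconds and sub-second parts, mirroring
# dateadd(minute, datediff(minute, 0, d), 0).
d = datetime(2015, 1, 5, 14, 52, 28, 557000)
truncated = d.replace(second=0, microsecond=0)
print(truncated)  # 2015-01-05 14:52:00
```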
|
Ignore seconds and milliseconds FROM GETDATE() in SQL
|
[
"",
"sql",
"t-sql",
""
] |
I am trying to write this SQL code:
```
create table Users
{
UserID int primary key identity(200,1),
FirstName varchar(20) not null,
LastName varchar(20) not null,
BirthDate dateTime not null,
HomeTown varchar(30),
WorkPlace varchar(40),
Email not null
}
```
The problem is that next to the `{` symbol, I get the error:
> Incorrect syntax near '{'.
When I hover my mouse over the sign, it adds:
> Expecting '(' or AS
In addition, I also get an error on the values that are in the brackets:
> Incorrect syntax near '20'. Expecting '(' or Select".
The thing is that I have another SQL document (that I didn't write) and the same syntax works there! Why is that, and how can I solve it?
|
You need parentheses, not braces - <http://www.w3schools.com/sql/sql_create_table.asp>. You also need a data type for Email.
I.e.
```
create table Users
(
UserID int primary key identity(200,1),
FirstName varchar(20) not null,
LastName varchar(20) not null,
BirthDate dateTime not null,
HomeTown varchar(30),
WorkPlace varchar(40),
Email varchar(40) not null
)
```
|
You have not specified the datatype for the `Email` column.
Use `()` instead of `{}`.
|
Sql syntax error - expecting ( or as
|
[
"",
"sql",
"sql-server",
"syntax-error",
""
] |
I want to filter the results obtained from two different queries and obtain the difference between the counts. Below are my two queries:
```
Query 1
select count(*)
from table1 oow
where oow.status='completed'--134 records
Query 2
select count(*)
from table2 oow
join #temp re
on oow.order=re.order
where oow.status='completed'--77 records
```
The desired result is the subtraction of the two, i.e., 134 - 77 = 57.
How do I achieve that in Sybase?
|
Try this:
```
select count(*)
from table1 t1
where t1.status = 'completed'
and not exists
(select 1
from #temp re
where re.order = t1.order)
```
This query returns only those completed rows of table1 that have no matching order in #temp. That is equivalent to taking the total count and subtracting the filtered count.
|
Given the table alias oow, I'm going to assume that both table1 in query 1 and table2 in query 2 are actually the same table. If that is the case, you could do it this way:
```
select count(*)
from
table2 oow
left join #temp re
on oow.order=re.order
where oow.status='completed'
and re.order is null
```
I have used a left join and a check for null instead of performing a subtraction.
|
How to subtract result of first query and second query
|
[
"",
"sql",
"sybase",
""
] |
I have a SQL query that refers to the same table twice, and I need to alias the table to two separate aliases. I can't quite figure out how to compose this with Knex.
There's a 'Words' table and a 'Users' table. The Words table has two foreign keys, 'author\_id' and 'winner\_id', referencing the Users table's 'id' column.
Here's the SQL I'm trying to compose in Knex:
```
SELECT w.*, ua.name, uw.name FROM Words AS w
INNER JOIN Users AS ua ON w.author_id = ua.id
LEFT JOIN Users AS uw ON w.winner_id = uw.id
```
I'm a little lost as to how to do this in Knex. My first attempt didn't involve aliasing, and so I got a 'table used more than once' error. When I tried to use the .as() method, knex complained that there was a missing .from() clause. Is the .as() method only used for aliasing subqueries, and I shouldn't expect it to be used to alias tables?
|
There are [two ways](http://knexjs.org/#Builder-identifier-syntax) to declare an alias for identifier (table or column). One can directly give as aliasName suffix for the identifier (e.g. identifierName as aliasName) or one can pass an object { aliasName: 'identifierName' }.
So, the following code:
```
knex.select('w.*', 'ua.name', 'uw.name')
.from({ w: 'Words' })
.innerJoin({ ua: 'Users' }, 'w.author_id', '=', 'ua.id')
.leftJoin({ uw: 'Users' }, 'w.winner_id', '=', 'uw.id')
.toString()
```
will compile to:
```
select "w".*, "ua"."name", "uw"."name"
from "Words" as "w"
inner join "Users" as "ua" on "w"."author_id" = "ua"."id"
left join "Users" as "uw" on "w"."winner_id" = "uw"."id"
```
|
I think I figured it out. In knex.js, say you specify a table like:
`knex.select( '*' ).from( 'Users' )`
Then you can just add the AS keyword within the quotes of the table name to alias it, like so:
`knex.select( '*' ).from( 'Users AS u' )`
..and you can do this for column names, too; so my original SQL would look like this in knex-land:
```
knex.select( 'w.*', 'ua.name AS ua_name', 'uw.name AS uw_name' )
.from( 'Words AS w' )
.innerJoin( 'Users AS ua', 'w.author_id', 'ua.id' )
.leftJoin( 'Users AS uw', 'w.winner_id', 'uw.id' )
```
I guess I got confused by the presence of knex's .as() method, which (as far as I currently understand) is meant just for subqueries, not for aliasing tables or column names.
|
Alias a table in Knex
|
[
"",
"sql",
"left-join",
"inner-join",
"knex.js",
""
] |
I have 3 columns (Date, Flag, cost)
The date starts from the beginning of the year, the flag is either daily or monthly and the cost.
For daily values it is fine. For monthly values, I would like to:
Sum all the monthly-flagged values, divide by the number of days in that month, and populate the resulting rate across the entire month.
```
Date Flag Cost
1/1/2014
1/2/2014 DAILY 10
1/3/2014 DAILY 15
1/4/2014 DAILY 56
1/5/2014 DAILY 22
1/6/2014 DAILY 32
1/7/2014
1/8/2014 MONTHLY 3500
1/9/2014
1/10/2014
```
Result should be
---
```
Date Cost
1/1/2014 112.9032258
1/2/2014 122.9032258
1/3/2014 127.9032258
1/4/2014 168.9032258
1/5/2014 134.9032258
1/6/2014 144.9032258
1/7/2014 112.9032258
1/8/2014 112.9032258
1/9/2014 112.9032258
1/10/2014 112.9032258
.
.
.
1/30/2014 112.9032258
1/31/2014 112.9032258
```
|
*If* I understand it well, this should give you the average per day of the "monthly" values:
```
SELECT "Cost" / EXTRACT(DAY FROM LAST_DAY("Date")) "cost_per_day",
LAST_DAY("Date") "month"
FROM T
WHERE "Flag" = 'MONTHLY'
```
Once you have that, your final query could be written like:
```
WITH monthly AS (
SELECT "Cost" / EXTRACT(DAY FROM LAST_DAY("Date")) "cost_per_day",
LAST_DAY("Date") "month"
FROM T
WHERE "Flag" = 'MONTHLY'
)
SELECT T."Date", NVL("Cost",0) + NVL("cost_per_day",0) "cost"
FROM T FULL JOIN monthly ON LAST_DAY(T."Date") = "month"
WHERE T."Flag" = 'DAILY'
ORDER BY T."Date";
```
See <http://sqlfiddle.com/#!4/cea34/14>
As about getting "all day in month" this has already been answered several times ([oracle sql query to list all the dates of previous month](https://stackoverflow.com/questions/4644562/oracle-sql-query-to-list-all-the-dates-of-previous-month), [Generate a range of dates using SQL](https://stackoverflow.com/questions/418318/generate-a-range-of-dates-using-sql))
|
What about this solution?
```
SELECT THE_DATE, Flag, COST,
CASE Flag
WHEN 'DAILY' THEN COST
WHEN 'MONTHLY' THEN
COST/EXTRACT(DAY FROM LAST_DAY(THE_DATE))
ELSE NULL
END AS AVG_COST
FROM THE_TABLE;
```
|
Populating Monthly cost to daily records
|
[
"",
"sql",
"oracle",
""
] |
How do I change a column value with the `CASE` command, depending on a condition, without adding a new column to the table?
The only way I know is by adding new column:
```
SELECT
t1.*
,CASE
WHEN what='costs' THEN amount*(-1)
ELSE
sales
END AS NewAmount
FROM t1
```
Is there a way to get the results as in the picture below? Note that sometimes the condition is specified by values in more than one column (e.g. what='costs' AND country='Atlantida').

|
Yes, there is a way.
Instead of `select *`, use only the required column names:
```
SELECT
t1.what
,CASE
WHEN what='costs' THEN amount*(-1)
ELSE
sales
END AS Amount
FROM t1
```
|
Select just the columns that you want:
```
SELECT t1.what,
(CASE WHEN what = 'costs' THEN amount*(-1)
ELSE sales
END) AS Amount
FROM t1
```
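The `CASE` expression can be exercised with Python's `sqlite3` on hypothetical sample rows (the table and values are assumptions, not from the question):

```python
import sqlite3

# Negate the amount for 'costs' rows, otherwise return sales.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (what TEXT, amount INT, sales INT);
INSERT INTO t1 VALUES ('costs', 100, 0), ('sales', 0, 250);
""")
rows = conn.execute("""
    SELECT what,
           CASE WHEN what = 'costs' THEN amount * (-1) ELSE sales END AS Amount
    FROM t1
    ORDER BY what
""").fetchall()
print(rows)  # [('costs', -100), ('sales', 250)]
```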
|
How to use CASE without adding new column to table in SQL
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I need to count how many weeks there are and list them in a table with their respective date ranges.
So what I have for now is:
```
select countinous_weeks, decode(countinous_weeks-52,0,trunc(countinous_weeks),trunc(countinous_weeks)+1)
from (
select (TO_DATE('01-01-1995', 'DD/MM/YYYY') - TO_DATE('01-01-1994','DD/MM/YYYY'))/7 countinous_weeks
from dual) wks
```
It only shows how many weeks are within that range. What I'm aiming to do is show them in 53 rows with the date range for each week. So, for week one:
```
WEEK RANGE
1 01-01-1994 Until 07-01-1994 ... etc
```
Please help me with this query; much appreciated.
|
This is interesting. It involves the following -
1. **DATE ROW GENERATOR**
2. **Week number**
3. **ROW\_NUMBER()** to assign rank to dates in each set of week
4. Finally **LISTAGG** to aggregate the rows fetched from step 3
Let's see it working -
```
SQL> WITH DATA AS
2 (SELECT to_date('01/01/1994', 'DD/MM/YYYY') date1,
3 to_date('31/12/1994', 'DD/MM/YYYY') date2
4 FROM dual
5 )
6 SELECT the_week,
7 listagg(the_date, ' until ') within GROUP (
8 ORDER BY to_date(the_date, 'DD/MM/YYYY')) the_date_range
9 FROM
10 (SELECT the_week,
11 the_date,
12 row_number() over(partition BY the_week order by the_week, to_date(the_date, 'DD/MM/YYYY')) rn
13 FROM
14 (SELECT TO_CHAR(date1+level-1, 'WW') the_week ,
15 TO_CHAR(date1 +level-1, 'DD/MM/YYYY') the_date
16 FROM data
17 CONNECT BY LEVEL <= date2-date1+1
18 )
19 )
20 WHERE rn in( 1, 7)
21 GROUP BY the_week
22 /
TH THE_DATE_RANGE
-- ---------------------------------------------
01 01/01/1994 until 07/01/1994
02 08/01/1994 until 14/01/1994
03 15/01/1994 until 21/01/1994
04 22/01/1994 until 28/01/1994
05 29/01/1994 until 04/02/1994
06 05/02/1994 until 11/02/1994
07 12/02/1994 until 18/02/1994
08 19/02/1994 until 25/02/1994
09 26/02/1994 until 04/03/1994
10 05/03/1994 until 11/03/1994
11 12/03/1994 until 18/03/1994
12 19/03/1994 until 25/03/1994
13 26/03/1994 until 01/04/1994
14 02/04/1994 until 08/04/1994
15 09/04/1994 until 15/04/1994
16 16/04/1994 until 22/04/1994
17 23/04/1994 until 29/04/1994
18 30/04/1994 until 06/05/1994
19 07/05/1994 until 13/05/1994
20 14/05/1994 until 20/05/1994
21 21/05/1994 until 27/05/1994
22 28/05/1994 until 03/06/1994
23 04/06/1994 until 10/06/1994
24 11/06/1994 until 17/06/1994
25 18/06/1994 until 24/06/1994
26 25/06/1994 until 01/07/1994
27 02/07/1994 until 08/07/1994
28 09/07/1994 until 15/07/1994
29 16/07/1994 until 22/07/1994
30 23/07/1994 until 29/07/1994
31 30/07/1994 until 05/08/1994
32 06/08/1994 until 12/08/1994
33 13/08/1994 until 19/08/1994
34 20/08/1994 until 26/08/1994
35 27/08/1994 until 02/09/1994
36 03/09/1994 until 09/09/1994
37 10/09/1994 until 16/09/1994
38 17/09/1994 until 23/09/1994
39 24/09/1994 until 30/09/1994
40 01/10/1994 until 07/10/1994
41 08/10/1994 until 14/10/1994
42 15/10/1994 until 21/10/1994
43 22/10/1994 until 28/10/1994
44 29/10/1994 until 04/11/1994
45 05/11/1994 until 11/11/1994
46 12/11/1994 until 18/11/1994
47 19/11/1994 until 25/11/1994
48 26/11/1994 until 02/12/1994
49 03/12/1994 until 09/12/1994
50 10/12/1994 until 16/12/1994
51 17/12/1994 until 23/12/1994
52 24/12/1994 until 30/12/1994
53 31/12/1994
53 rows selected.
SQL>
```
|
Here is the query.
```
SELECT LEVEL,
       to_char(TO_DATE('01-01-1994', 'DD-MM-YYYY') + (level * 7) - 7) || ' until ' ||
       to_char(TO_DATE('01-01-1994', 'DD-MM-YYYY') + (level * 7) - 1) AS range
FROM DUAL
CONNECT BY LEVEL <= (SELECT (TO_DATE('01-01-1995', 'DD-MM-YYYY') - TO_DATE('01-01-1994', 'DD-MM-YYYY')) / 7 AS continuous_weeks
                     FROM dual)
```
|
querying how many weeks between a date range and showing them their dates
|
[
"",
"sql",
"oracle",
"date",
""
] |
I want to compare datetimes and delete the rows that are more than 72 hours old. Then I want to update another table's boolean "HasClone". How do I get the ints (IDs) from the first selection into the other? See code below:
```
SELECT Allocation_plan_details_Clone.Allocation_plan_id AS ID
FROM Allocation_plan_details_Clone
WHERE DATEDIFF(hour, start_date, GETDATE()) > 72
UPDATE Allocation_plan
SET HasClone = 0
WHERE allocation_plan_id = <INSERT CODE HERE!>
DELETE FROM Allocation_plan_details_Clone
WHERE DATEDIFF(hour, start_date, GETDATE()) > 72
```
So at "INSERT CODE HERE!" I want to insert the ID's I just got from Allocation\_plan\_details\_Clone
|
If I understood your question right I think what you want is this:
```
UPDATE Allocation_plan
SET HasClone = 0
WHERE allocation_plan_id IN (
SELECT Allocation_plan_details_Clone.Allocation_plan_id
FROM Allocation_plan_details_Clone
WHERE DATEDIFF(hour, start_date, GETDATE()) > 72
)
```
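As a runnable sketch of this `UPDATE ... WHERE ... IN (subquery)` pattern, here is a small SQLite demo via Python's `sqlite3`. The table and column names come from the question; the sample rows are made up, and SQLite's `datetime('now', '-72 hours')` stands in for T-SQL's `DATEDIFF` cutoff:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Allocation_plan (allocation_plan_id INTEGER, HasClone INTEGER);
CREATE TABLE Allocation_plan_details_Clone (Allocation_plan_id INTEGER, start_date TEXT);
INSERT INTO Allocation_plan VALUES (1, 1), (2, 1);
INSERT INTO Allocation_plan_details_Clone VALUES
    (1, datetime('now', '-100 hours')),   -- older than 72 hours
    (2, datetime('now', '-10 hours'));    -- recent
""")

# Clear the flag for every plan whose clone row is past the 72-hour cutoff.
conn.execute("""
UPDATE Allocation_plan
SET HasClone = 0
WHERE allocation_plan_id IN (
    SELECT Allocation_plan_id
    FROM Allocation_plan_details_Clone
    WHERE start_date < datetime('now', '-72 hours')
)
""")

rows = dict(conn.execute("SELECT allocation_plan_id, HasClone FROM Allocation_plan"))
print(rows)  # plan 1 is cleared, plan 2 untouched
```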
|
This answer is putting the Allocation\_plan\_id's into a table variable while the rows are deleted.
The transaction assures that the data doesn't get deleted without updating the Allocation\_plan table.
I rewrote your WHERE statement to perform better.
```
begin transaction t
DECLARE @deleted table(Allocation_plan_id int)
DELETE Allocation_plan_details_Clone
OUTPUT deleted.Allocation_plan_id
INTO @deleted
FROM Allocation_plan_details_Clone
WHERE start_date < dateadd(hour, -72, GETDATE())
UPDATE Allocation_plan
SET HasClone = 0
FROM Allocation_plan
JOIN
@deleted d
ON d.Allocation_plan_id = Allocation_plan.Allocation_plan_id
commit transaction t;
```
|
Get Selected values in SQL
|
[
"",
"sql",
"sql-server",
"azure-sql-database",
""
] |
The following code is throwing an error:
```
select *
from inPersonMedChangeNotes as IP
where
(IP.[Date], IP.Patient) not in (select EncounterDate, Patient
from tbl_patients_clinic_visit_records as VC
join tbl_patients as PS on VC.PatientId = PS.PatientId_this)
```
Error:
> non boolean expression where condition is expected
I am trying to find all the entries in inPersonMedChangeNotes where the combination of values is not in the tbl\_patients\_clinic\_visit\_records table. How do I do this?
|
You can do this with a left join:
```
SELECT *
FROM inPersonMedChangeNotes as IP
LEFT JOIN tbl_patients_clinic_visit_records as VC ON IP.[Date] = VC.EncounterDate AND IP.Patient = VC.Patient
left join tbl_patients as PS on VC.PatientId = PS.PatientId_this
WHERE VC.EncounterDate IS NULL
```
|
Following the same structure of your query, you can use `not exists`:
```
select *
from inPersonMedChangeNotes IP
where not exists (select 1
                  from tbl_patients_clinic_visit_records vc
                  where ip.patient = vc.patient and
ip.[date] = vc.encounterdate
);
```
I don't think the `patient` table is needed for the query.
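A runnable sketch of the `NOT EXISTS` anti-join on a (date, patient) combination, in SQLite via Python; table and column names follow the question, the sample rows are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE inPersonMedChangeNotes (Date TEXT, Patient TEXT);
CREATE TABLE tbl_patients_clinic_visit_records (EncounterDate TEXT, Patient TEXT);
INSERT INTO inPersonMedChangeNotes VALUES
    ('2020-01-01', 'A'),
    ('2020-01-02', 'B');
INSERT INTO tbl_patients_clinic_visit_records VALUES
    ('2020-01-01', 'A');   -- only the first combination has a visit record
""")

rows = conn.execute("""
SELECT ip.Date, ip.Patient
FROM inPersonMedChangeNotes ip
WHERE NOT EXISTS (
    SELECT 1
    FROM tbl_patients_clinic_visit_records vc
    WHERE vc.Patient = ip.Patient
      AND vc.EncounterDate = ip.Date
)
""").fetchall()
print(rows)  # only the note with no matching visit record
```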
|
How to check for combination of column values in sql server table
|
[
"",
"sql",
"sql-server",
""
] |
I have a question: suppose I have a `Product` table in SQL with the columns `ProductID`, `TotalPrice`, and `NumberOfUnits`.
Now I need to calculate the `UnitPrice` of the product.
This can be calculated by dividing the `TotalPrice` by `NumberOfUnits`.
Now the question is: if `TotalPrice` is `NULL` and `NumberOfUnits` is 45,
what will the output be?
```
Select
ProductID,
(TotalPrice / NumberOfUnits) as UnitPrice
from ProductTable
```
What will be the output?
And if the answer is null, how can we handle this?
|
Null = NULL.
```
SELECT NULL / 45
```
Result
```
Column1
NULL
```
Handling would be...
```
SELECT IsNull(NULL, 0) / 45
```
Result
```
Column1
0
```
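The same NULL-propagation behaviour can be checked in any engine. A minimal SQLite sketch via Python, using `COALESCE` (the portable equivalent of T-SQL's `IsNull`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# NULL propagates through arithmetic: NULL / 45 is NULL, not 0.
null_result = conn.execute("SELECT NULL / 45").fetchone()[0]
# COALESCE substitutes a default value before the division happens.
handled = conn.execute("SELECT COALESCE(NULL, 0) / 45").fetchone()[0]
print(null_result, handled)  # None 0
```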
|
1. Null/45 = NULL
2. anynumber/null = null
Your table structure is incorrect; you should create the table so that it cannot accept NULL for price or quantity.
Rest will have a big discussion :)
|
Divide Null by a number in SQL
|
[
"",
"sql",
"null",
"divide",
""
] |
I have a VARCHAR2 date field 'Created\_Date' in the format 'MM/DD/YYYY HH24:MI' in an Oracle table. Now, I want all the rows where the date is between 1-JAN-2014 and 31-MAR-2014.
I have used the below query & I know some of my dates of 31st March 2014 are getting missed.
```
Select *
from Table1
where to_date(Created_Date, 'MM/DD/YYYY HH24:MI') between '1-JAN-2014' and '31-MAR-2014'
```
Please suggest on the same. Thanks.
|
What I think is happening is that your `VARCHAR2` "dates" have time components, in which case they could fall after 31 March 2014. I would suggest you do the following:
```
SELECT * FROM table1
WHERE TO_DATE(created_date, 'MM/DD/YYYY HH24:MI') >= '01-JAN-2014'
AND TO_DATE(created_date, 'MM/DD/YYYY HH24:MI') < '01-APR-2014'
```
|
Of course dates on Mar 31 are not shown -- if they have a time component. By using `between` you are only getting up to midnight when Mar 31st begins. That is how dates work and how between works.
Here are two solutions. First, only use the date:
```
where to_date(Created_Date, 'MM/DD/YYYY') between date '2014-01-01' and date '2014-03-31'
```
Or phrase it a bit differently:
```
where to_date(Created_Date, 'MM/DD/YYYY HH24:MI') >= date '2014-01-01' and
to_date(Created_Date, 'MM/DD/YYYY HH24:MI') < date '2014-04-01'
```
Notice that I also switched from using the string constant to the `date` operator. This keyword allows you to provide a date constant in ISO-standard YYYY-MM-DD format.
My preference is a combination of these two methods:
```
where to_date(Created_Date, 'MM/DD/YYYY') >= date '2014-01-01' and
to_date(Created_Date, 'MM/DD/YYYY') < date '2014-04-01'
```
In general, I try to avoid using `between` with dates, precisely because of this type of problem.
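The `BETWEEN` pitfall can be demonstrated with a small SQLite sketch via Python; the sample dates are made up, and SQLite's lexical comparison of ISO date strings stands in for real date comparison here:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (created_date TEXT);
INSERT INTO table1 VALUES
    ('2014-01-15 09:30'),
    ('2014-03-31 14:00'),   -- last day of the range, with a time component
    ('2014-04-02 08:00');
""")

# BETWEEN ... '2014-03-31' stops at midnight, so the 31 March row is lost.
between_rows = conn.execute("""
SELECT created_date FROM table1
WHERE created_date BETWEEN '2014-01-01' AND '2014-03-31'
""").fetchall()

# The half-open form (>= start, < first day after the range) keeps it.
half_open_rows = conn.execute("""
SELECT created_date FROM table1
WHERE created_date >= '2014-01-01' AND created_date < '2014-04-01'
""").fetchall()
print(len(between_rows), len(half_open_rows))
```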
|
Last day getting dropped while casting varchar 2 date format 'mm/dd/yyyy hh24:mi'
|
[
"",
"sql",
"oracle",
"oracle11g",
""
] |
I have the following query:
```
SELECT TOP 3
Person.Name AS PersonName,
SUBSTRING(CAST(Person.BirthDate AS varchar(38)), 1, 2) AS CenteryBorn,
Person.BirthDate AS CompleteDateTimeBorn
FROM Person
```
What I want is something like this, with `CenteryBorn` column having the first two digits of the datetime:
```
PersonName CenteryBorn CompleteDateTimeBorn
Fred 20 2000-03-16 00:00:00.000
Tim 19 1900-09-27 00:00:00.000
Jenny 19 1901-06-20 00:00:00.000
```
What I get however is this:
```
PersonName CenteryBorn CompleteDateTimeBorn
Fred Ma 2000-03-16 00:00:00.000
Tim Se 1900-09-27 00:00:00.000
Jenny Ju 1901-06-20 00:00:00.000
```
The complete string representation seems to be end up being `Mar 16 2000 12:00AM`.
Is there any way to `CAST` it as a string but in the default format rather than this friendly one?
|
How about using:
```
YEAR(BirthDate)/100
```
|
you can use the DATEPART function instead (note that `datepart(y, ...)` returns the day of year in T-SQL, so use `year`):
```
substring(cast(datepart(year, person.birthdate) as varchar), 1, 2)
```
|
TSQL - Make Casting datetime as varchar not return friendly format
|
[
"",
"sql",
"t-sql",
"datetime",
"formatting",
"substring",
""
] |
I have the following procedure which is executed on a button click (button1). After being prompted to log into the database, delphi throws the the following error:
> You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''AlphaMc'
>
> SELECT \* FROM 'AlphaMc111'' at line 1'. Process Stopped. Use Step or Run to continue.
Here is the procedure:
```
procedure TMainWin.Button1Click(Sender: TObject);
begin
ADOConnection1.ConnectionString := 'Driver={MySQL ODBC 3.51 Driver};
Server=db4free.net;Port=3306;Database=inventmanager;User=******;
Password=******;Option=3;';
ADOConnection1.Connected := True;
ADOQuery1.Connection := ADOConnection1;
ADOQuery1.SQL.Add('SELECT * FROM ''AlphaMc111''');
ADOQuery1.Open;
end;
```
|
The MySql [identifier](http://dev.mysql.com/doc/refman/5.0/en/identifiers.html) quote character is the backtick, try
```
ADOQuery1.SQL.Add('SELECT * FROM `AlphaMc111`');
```
|
Don't use quotes to escape column or table names. Use backticks
```
ADOQuery1.SQL.Add('SELECT * FROM `AlphaMc111`');
```
Quotes are string delimiters.
|
Why does This SQL query give a syntax error
|
[
"",
"mysql",
"sql",
"delphi",
"delphi-7",
""
] |
I have a table like this :
```
id Date Qty
1 Feb 25 2015 12:00AM 34
2 Feb 27 2015 12:00AM 34
3 Mar 17 2015 12:00AM 153
4 Mar 27 2015 12:00AM 68
5 Apr 3 2015 12:00AM 153
6 May 6 2015 12:00AM 153
```
I want select the first date where I will have enough Qty for my need.
I made this request that works for most of the case, but not on this data when I have a need of 210. It should select me MAR 17, but I have Feb 27
```
SELECT top 1
T1.Date,
T1.Qty
,SUM(cast (T2.Qty as int)) AS cumulqte
FROM
##temptest3 T1
INNER JOIN ##temptest3 T2
ON T1.Date >= T2.Date
GROUP BY T1.id, T1.Date, T1.Qty
HAVING SUM(cast (T2.Qty as int)) >=210
ORDER BY T1.Date;
```
I have try to change the order by and a lot of things, but nothing work.
Edit :
This is the way I create my table :
```
select * into ##temptest1 from sysadm.fnSplitString('17223ü17225ü17243ü17253ü17260ü17293','ü')
update ##temptest1 set splitdata=DATEADD(day,cast(splitdata as int), '1967-12-31');
select * into ##temptest2 from sysadm.fnSplitString('34ü34ü153ü68ü153ü153','ü')
update ##temptest2 set splitdata='0' where splitdata=''
create table ##temptest3 (id int,Date nvarchar(max), Qte NVARCHAR(MAX) )
INSERT INTO ##temptest3 (id, Qte, Date) select ##temptest2.id, ##temptest2.splitdata, ##temptest1.splitdata from ##temptest2 inner join ##temptest1 on ##temptest2.id=##temptest1.id
```
|
The issue is that when you create your ##temp tables you're using the `nvarchar` type instead of `datetime` (and `int` for qty). The best remedy would be to use the correct types when you create the tables, if that won't work you can use casts in the query like this:
```
SELECT top 1
cast(t1.Date as datetime),
T1.Qty
,SUM(cast (T2.Qty as int)) AS cumulqte
FROM
##temptest3 T1
INNER JOIN ##temptest3 T2
ON cast(t1.Date as datetime) >= cast(t2.Date as datetime)
GROUP BY T1.id, cast(t1.Date as datetime), T1.Qty
HAVING SUM(cast (T2.Qty as int)) >=210
ORDER BY cast(t1.Date as datetime);
```
See this [SQL Fiddle](http://www.sqlfiddle.com/#!6/df187/1)
|
As I said in my comment you need to cast / convert your Date field to a date or datetime for the ordering to work properly:
```
SELECT top 1
T1.Date,
T1.Qty
,SUM(cast (T2.Qty as int)) AS cumulqte
FROM
##temptest3 T1
INNER JOIN ##temptest3 T2
ON T1.Date >= T2.Date
GROUP BY T1.id, T1.Date, T1.Qty
HAVING SUM(cast (T2.Qty as int)) >=210
ORDER BY Cast(T1.Date as datetime)
```
|
SQL server query, select first date where enough stock
|
[
"",
"sql",
"sql-server",
"select",
""
] |
I want to get a "totals" report for business XYZ. They want the season, term, distinct count of employees, and the total of employees' dropped hours, counting an employee's drops only when they are not exactly cancelled out by matching adds.
trying to do something like this:
```
select year,
season,
(select count(distinct empID)
from tableA
where a.season = season
and a.year = year) "Employees",
(select sum(hours)
from(
select distinct year,season,empID,hours
from tableA
where code like 'Drop%'
)
where a.season = season
and a.year = year) "Dropped"
from tableA a
-- need help below
where (select sum(hours)
from(
select distinct year,season,empID,hours
from tableA
where code like 'Drop%'
)
where a.season = season
and a.year = year
and a.emplID = emplID)
!=
(select sum(hours)
from(
select distinct year,season,empID,hours
from tableA
where code like 'Add%'
)
where a.season = season
and a.year = year
and a.emplID = emplID)
group by year,season
```
It appears I am not writing my WHERE clause correctly. I don't believe I am joining each emplID correctly to exclude those whose "drops" <> "adds".
EDIT:
sample data:
year,season,EmplID,hours,code
2015, FALL, 001,10,Drop
2015, FALL, 001,10,Add
2015,FALL,002,5,Drop
2015,FALL,003,10,Drop
The total hours should be 15. EmplID 001 should be removed from the totaling because he has drops that are exactly equal to adds.
|
I managed to work it out with a bit of analytics .. ;)
```
with tableA as (
select 2015 year, 1 season, 1234 empID, 2 hours , 'Add' code from dual union all
select 2015 year, 1 season, 1234 empID, 3 hours , 'Add' code from dual union all
select 2015 year, 1 season, 1234 empID, 4 hours , 'Add' code from dual union all
select 2015 year, 1 season, 1234 empID, 2 hours , 'Drop' code from dual union all
select 2015 year, 1 season, 2345 empID, 5 hours , 'Add' code from dual union all
select 2015 year, 1 season, 2345 empID, 3.5 hours, 'Add' code from dual union all
select 2015 year, 2 season, 1234 empID, 7 hours , 'Add' code from dual union all
select 2015 year, 2 season, 1234 empID, 5 hours , 'Add' code from dual union all
select 2015 year, 2 season, 2345 empID, 5 hours , 'Add' code from dual union all
select 2015 year, 2 season, 7890 empID, 3 hours , 'Add' code from dual union all
select 2014 year, 1 season, 1234 empID, 1 hours , 'Add' code from dual union all
select 2014 year, 1 season, 1234 empID, 2 hours , 'Add' code from dual union all
select 2014 year, 1 season, 1234 empID, 4 hours , 'Add' code from dual
),
w_group as (
select year, season, empID, hours, code,
lead(hours) over (partition by year, season, empID, hours
order by case when code like 'Drop%' then 'DROP'
when code like 'Add%' then 'ADD'
else NULL end ) new_hours
from tableA
)
select year, season, count(distinct empID),
sum(hours-nvl(new_hours,0)) total_hours
from w_group
where code like 'Add%'
group by year, season
/
YEAR SEASON COUNT(DISTINCTEMPID) TOTAL_HOURS
---------- ---------- -------------------- -----------
2015 1 2 15.5
2014 1 1 7
2015 2 3 20
```
(the first part "with tableA" is just faking some data, since you didn't provide any) :)
[edit]
corrected based on your data and your explanation; in short, you're counting the DROPs (minus the ADDs), and I was doing the reverse
[edit2] replaced the query below with a minor tweak based on comment/feedback: don't count an empID if their DROPs and ADDs zero out
```
with tableA as (
select 2015 year, 'FALL' season, '001' empID, 10 hours, 'Drop' code from dual union all
select 2015 year, 'FALL' season, '001' empID, 10 hours, 'Add' code from dual union all
select 2015 year, 'FALL' season, '002' empID, 5 hours, 'Drop' code from dual union all
select 2015 year, 'FALL' season, '003' empID, 10 hours, 'Drop' code from dual
),
w_group as (
select year, season, empID, hours, code,
lag(hours) over (partition by year, season, empID, hours
order by case when code like 'Drop%' then 'DROP'
when code like 'Add%' then 'ADD'
else NULL end ) new_hours
from tableA
)
select year, season, count(distinct empID),
sum(hours-nvl(new_hours,0)) total_hours
from w_group
where code like 'Drop%'
and hours - nvl(new_hours,0) > 0
group by year, season
/
YEAR SEAS COUNT(DISTINCTEMPID) TOTAL_HOURS
---------- ---- -------------------- -----------
2015 FALL 2 15
```
[/edit]
|
I think you can do what you want with just conditional aggregation. Something like this:
```
select year, season, count(distinct empID) as Employees,
sum(case when code like 'Drop%' then hours end) as Dropped
from tableA
group by year, season;
```
It is hard to tell exactly what you want, because you do not have sample data and desired results (or better yet, a SQL Fiddle). You might also want a `having` clause:
```
having (sum(case when code like 'Drop%' then hours end) <>
sum(case when code like 'Add%' then hours end)
)
```
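A runnable sketch of the conditional-aggregation idea against the question's sample data, in SQLite via Python. The per-employee netting in the inner query is an assumption about the desired exclusion rule (drop an empID whose drops and adds cancel out exactly):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tableA (year INTEGER, season TEXT, empID TEXT, hours INTEGER, code TEXT);
INSERT INTO tableA VALUES
    (2015, 'FALL', '001', 10, 'Drop'),
    (2015, 'FALL', '001', 10, 'Add'),   -- cancels out: excluded
    (2015, 'FALL', '002', 5,  'Drop'),
    (2015, 'FALL', '003', 10, 'Drop');
""")

rows = conn.execute("""
SELECT year, season, COUNT(*) AS employees, SUM(net) AS dropped
FROM (
    SELECT year, season, empID,
           SUM(CASE WHEN code LIKE 'Drop%' THEN hours ELSE 0 END) -
           SUM(CASE WHEN code LIKE 'Add%'  THEN hours ELSE 0 END) AS net
    FROM tableA
    GROUP BY year, season, empID
)
WHERE net > 0
GROUP BY year, season
""").fetchall()
print(rows)  # [(2015, 'FALL', 2, 15)]
```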
|
SQL SUMs in where clause with conditionals
|
[
"",
"sql",
"oracle",
"sum",
"subquery",
""
] |
Consider the following employee\_history table:
```
| ID | idEMP | startDate  | endDate    | Money |
|----|-------|------------|------------|-------|
| 1  | 1     | 2013-11-01 | 2013-11-25 | 100   |
| 2  | 1     | 2013-11-25 | 2014-01-01 | 50    |
| 3  | 1     | 2014-01-01 | 2014-01-10 | 25    |
| 4  | 1     | 2014-01-10 | 2014-01-15 | 50    |
| 5  | 1     | 2014-01-15 | 2014-01-20 | 18    |
| 6  | 1     | 2014-01-20 | 2014-02-01 | 70    |
| 7  | 1     | 2014-02-01 | NULL       | 10    |
```
*Basically, it means that from startDate to endDate the employee has the amount of 'Money'.*
What I need to do is to find for an employee in the **history table** all the records for a specific **YEAR and MONTH.** But without getting records which are ending in the first day of the next month.
For example: if I'd search for '2013-12-01' this should be the output:
```
| ID | idEMP | startDate  | endDate    | Money |
|----|-------|------------|------------|-------|
| 2  | 1     | 2013-11-25 | 2014-01-01 | 50    |
```
If I'd search for '2013-11-01' the output would be:
```
| ID | idEMP | startDate  | endDate    | Money |
|----|-------|------------|------------|-------|
| 1  | 1     | 2013-11-01 | 2013-11-25 | 100   |
| 2  | 1     | 2013-11-25 | 2014-01-01 | 50    |
```
If I'd search for '2014-01-01' the output would be:
```
| ID | idEMP | startDate  | endDate    | Money |
|----|-------|------------|------------|-------|
| 3  | 1     | 2014-01-01 | 2014-01-10 | 25    |
| 4  | 1     | 2014-01-10 | 2014-01-15 | 50    |
| 5  | 1     | 2014-01-15 | 2014-01-20 | 18    |
| 6  | 1     | 2014-01-20 | 2014-02-01 | 70    |
```
If I'd search for '2014-05-01' the output would be:
```
| ID | idEMP | startDate  | endDate    | Money |
|----|-------|------------|------------|-------|
| 7  | 1     | 2014-02-01 | NULL       | 10    |
```
I've tried all I've got in my mind and also googled a little bit more but I couldn't find a way of making the proper query for this.
Could someone help me ?
Thanks in advance,
|
Something like this should work:
```
SELECT *
FROM employee_history
WHERE startDate <= DATEADD(s,-1,DATEADD(mm, DATEDIFF(m,0,@searchDate)+1,0)) AND
(endDate > CAST(CAST(YEAR(@searchDate) AS varchar) + '-' + CAST(MONTH(@searchDate) AS varchar) + '-01' AS DATETIME)
OR endDate is null)
```
The following expresion:
```
DECLARE @searchDate DATE = '2013-11-01'
SELECT DATEADD(s,-1,DATEADD(mm, DATEDIFF(m,0,@searchDate)+1,0))
```
yields this output:
```
2013-11-30 23:59:59.000
```
i.e. the *last day* of the month of the 'search date'.
Whereas, this expression:
```
DECLARE @searchDate DATE = '2013-11-01'
SELECT CAST(CAST(YEAR(@searchDate) AS varchar) + '-' + CAST(MONTH(@searchDate) AS varchar) + '-01' AS DATETIME)
```
yields the *first day* of the month of the 'search date'.
Thus said, the query I propose fetches all records whose intervals, i.e. `startDate` - `endDate`, *overlap* with the month that corresponds to the 'search date'.
|
```
SELECT money, startdate, substring_index(startdate,'-','-1') as date
FROM table
WHERE date = '2014-01'
```
the following code sorts it by month
try this, I kind of deciphered what you meant.
I assume this is what you want :)
|
Getting history data for a specific date(YEAR AND MONTH) from a table
|
[
"",
"sql",
"sql-server-2008",
""
] |
Not sure how to title or ask this really. Say I am getting a result set like this on a join of two tables, one contains the `Id` (C), the other contains the `Rating` and `CreatedDate` (R) with a foreign key to the first table:
```
-----------------------------------
| C.Id | R.Rating | R.CreatedDate |
-----------------------------------
| 2 | 5 | 12/08/1981 |
| 2 | 3 | 01/01/2001 |
| 5 | 1 | 11/11/2011 |
| 5 | 2 | 10/10/2010 |
```
I want this result set (the newest ones only):
```
-----------------------------------
| C.Id | R.Rating | R.CreatedDate |
-----------------------------------
| 2 | 3 | 01/01/2001 |
| 5 | 1 | 11/11/2011 |
```
This is a very large data set, and my methods (I won't mention which so there is no bias) is very slow to do this. Any ideas on how to get this set? It doesn't necessarily have to be a single query, this is in a stored procedure.
Thank you!
|
You need a CTE with a ROW\_NUMBER():
```
WITH CTE AS (
SELECT ID, Rating, CreatedDate, ROW_NUMBER() OVER (PARTITION BY ID ORDER BY CreatedDate DESC) RowID
FROM [TABLESWITHJOIN]
)
SELECT *
FROM CTE
WHERE RowID = 1;
```
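The CTE + ROW_NUMBER pattern runs essentially unchanged on any engine with window functions; here it is against the question's sample rows in SQLite (3.25+) via Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ratings (Id INTEGER, Rating INTEGER, CreatedDate TEXT);
INSERT INTO ratings VALUES
    (2, 5, '1981-08-12'),
    (2, 3, '2001-01-01'),
    (5, 1, '2011-11-11'),
    (5, 2, '2010-10-10');
""")

# Rank each Id's rows newest-first, then keep only rank 1 per Id.
rows = conn.execute("""
WITH cte AS (
    SELECT Id, Rating, CreatedDate,
           ROW_NUMBER() OVER (PARTITION BY Id ORDER BY CreatedDate DESC) AS RowId
    FROM ratings
)
SELECT Id, Rating, CreatedDate FROM cte WHERE RowId = 1 ORDER BY Id
""").fetchall()
print(rows)  # newest row per Id
```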
|
If you are using SQL Server 2008 or later, you should consider using windowing functions. For example:
```
select ID, Rating, CreatedDate from (
select ID, Rating, CreatedDate,
rowseq=ROW_NUMBER() over (partition by ID order by CreatedDate desc)
from MyTable
) x
where rowseq = 1
```
Also, please understand that while this is an efficient query in and of itself, your overall performance depends even more heavily on the underlying tables and, in particular, the indexes and explain plans that are used when joining the tables in the first place, etc.
|
SQL Query: get the unique id/date combos based on latest dates - need speed improvement
|
[
"",
"sql",
"sql-server",
"performance",
""
] |
I'm developing a database search page (using JSP). In this search page users will have multiple search conditions/options in the form of a drop-down list. Those options can be left unselected (null, in other words); if they're left unselected, the complete column should be retrieved. To achieve this I'm using the IFNULL() function, and I'm using MySQL.
The problem I'm facing is that SQL can't seem to recognize that the string is null: when a null string is passed to the IFNULL function, it retrieves null values instead of retrieving the complete column. Passing NULL directly in the query, as in "IFNULL(NULL, Location)", seems to work just fine.
The List:
```
<select name="EX">
<option value="NULL">Any</option>
<option value="1">Option1</option>
<option value="2">Option2</option>
<option value="3">Option3</option>
</select>
```
How values are handled in the next page:
```
String NameVar = new String();
if(request.getParameter("EX") == "NULL"){
NameVar = null;
}
else if(request.getParameter("EX") != "NULL"){
NameVar = request.getParameter("EX");
}
```
My Query:
```
selectItem = connection.prepareStatement("SELECT Location, ItemType "
+ "FROM DBTEST "
+ "WHERE Location = IFNULL(?,Location)"
```
|
Some details on Sas's answer. In SQL, NULL is a special value, and no int or string can ever be equal to NULL.
In a query, you must use the special construct `IS NULL` to test for null values, because `column = NULL` will never be true.
When using host variables, you must explicitly set them to NULL with the `setNull` method.
You do not show how you populate your query, but what you need should not be far from:
```
selectItem = connection.prepareStatement("SELECT Location, ItemType "
+ "FROM DBTEST "
+ "WHERE Location = IFNULL(?,Location)");
if (NameVar == null) {
selectItem.setNull(1, Types.VARCHAR);
}
else {
selectItem.setString(1, NameVar);
}
```
BTW, common usages in java recommend that only class names begin with an uppercase, variable names should begin with a lowercase, so `NameVar` should be `nameVar`
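A compact sketch of the same bind-NULL-with-a-fallback pattern, using Python's `sqlite3` instead of JDBC (binding `None` plays the role of `setNull`, and `COALESCE` stands in for MySQL's `IFNULL`; table contents are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE DBTEST (Location TEXT, ItemType TEXT);
INSERT INTO DBTEST VALUES ('1', 'a'), ('2', 'b'), ('3', 'c');
""")

query = "SELECT Location, ItemType FROM DBTEST WHERE Location = COALESCE(?, Location)"

# A real value filters as usual...
filtered = conn.execute(query, ('2',)).fetchall()
# ...while binding NULL (None in Python) turns the predicate into
# Location = Location, which matches every row - the "Any" option.
all_rows = conn.execute(query, (None,)).fetchall()
print(len(filtered), len(all_rows))
```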
|
Try set null:
```
prepStmt.setNull(1, Types.VARCHAR)
```
|
SQL IfNull() function doesn't recognize null strings
|
[
"",
"mysql",
"sql",
"database",
"jsp",
"web",
""
] |
I have a table `matches` that I want to get from it the users that has the best score:
```
SELECT userId,SUM(score) AS score FROM `matches` GROUP BY userId ORDER BY score DESC
```
This output 2 columns `userId` and `score`. Good.
Now I have a `users` table, and I want to have a more detailed output of that `userId`. For example, I want to have: `userId-firstName-lastName-phone-address-score`. Is this possible with a simple sql query ?
Thank you.
|
```
SELECT userId, firstName, lastName, phone, address, SUM(score) AS score FROM matches join users on matches.user_id = users.user_id GROUP BY userId ORDER BY score DESC
```
|
You could just `JOIN` the tables like this:
```
SELECT
matches.userId,
SUM(matches.score) AS score,
users.firstName,
users.lastName,
users.phone,
users.address
FROM
`matches`
JOIN users
ON `matches`.userId=users.userId
GROUP BY
matches.userId,
users.firstName,
users.lastName,
users.phone,
users.address
ORDER BY
score DESC
```
Reference:
* [13.2.8.2 JOIN Syntax](http://dev.mysql.com/doc/refman/5.0/en/join.html)
|
SELECT query from 2 tables
|
[
"",
"mysql",
"sql",
""
] |
I have a column that is VARCHAR2 and the string includes a date and time. I have extracted the date and now wish to populate a new column solely with the date and time.
So far I have:
```
alter table t
add cb_time
as
select substr(t.notes, 24, INSTR(t.notes, 'for')-1)
from Mytable t
```
This results in error ORA-02000: missing ( keyword.
|
Since you are on 11g, and as you say that both columns are in same table. I would suggest, do not add a static column, rather add a **virtual column**.
So, you need not worry about the **insert** too. Oracle would give you the computed value without actually storing it.
Just add the virtual column as -
**EDIT** A closing parenthesis was missing at the end.
```
ALTER TABLE t ADD
column_name GENERATED ALWAYS AS (to_date(substr(t.notes, 24, INSTR(t.notes, 'for')-1))) VIRTUAL;
```
**NOTE** You need to be cautious though. The virtual column will only work if the **expression** evaluates correctly; in this case, if the **datetime literal** is malformed, the virtual column will fail. Make sure that all the values in your base column have a properly formatted datetime literal.
|
An alternative method is to create `CB_TIME` as a virtual column. To do so, an ALTER TABLE similar to the following is used:
```
ALTER TABLE T
ADD CB_TIME DATE GENERATED ALWAYS AS
(TO_DATE(SUBSTR(t.notes, 24, INSTR(t.notes, 'for')-1))) VIRTUAL;
```
Because virtual columns can't be INSERTed or UPDATEd, doing it this way means that the `NOTES` field must always contain a valid date string.
Share and enjoy.
|
How do I create a new column filled with the results of a query?
|
[
"",
"sql",
"oracle",
"oracle11g",
""
] |
How do I select from multiple tables in different databases on the same server?
Also, Is there a way to have an identifying marker so I could see where the results came from?
So for example:
```
SELECT db1.table1.name, db2.table2.name, fromTbl
FROM db1.table1, db2.table2
WHERE db1.table1.name LIKE '%j%' OR db2.table2.name LIKE '%j%'
```
So in this case, I'm selecting the names from 2 different databases and tables. I'm doing a wildcard search on those names and the fromTbl would let me know where the results came from?
```
4 john smith 4 555.555.5555 table1
17 joe schmoe 17 555.555.5555 table2
11 james doe 11 555.555.5555 table1
```
I'm using SQL Server 2005.
|
You could use a UNION ALL and add in the database name like:
```
SELECT [columns_list], 'db1.schema.table1.name' AS [fromTbl]
FROM db1.schema.table1
WHERE db1.schema.table1.name LIKE '%j%'
UNION ALL
SELECT [columns_list], 'db2.schema.table2.name' AS [fromTbl]
FROM db2.schema.table2
WHERE db2.schema.table2.name LIKE '%j%'
```
This will only work if the columns in the tables have the same types (as your example suggests); otherwise the UNION will fail.
|
Doing a union seems like your best bet here. A union will combine the results of two queries.
```
select name, 'table1' as fromTbl
from db1.schema.table1
where name like '%j%'
union --or union all depending on what you want
select name, 'table2' as fromTbl
from db2.schema.table2
where name like '%j%'
```
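A runnable sketch of the UNION ALL approach with a source-marker column, in SQLite via Python (table contents are invented to mirror the question's example; SQLite has no cross-database qualification, so plain table names stand in):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (name TEXT);
CREATE TABLE table2 (name TEXT);
INSERT INTO table1 VALUES ('john smith'), ('james doe');
INSERT INTO table2 VALUES ('joe schmoe'), ('ann lee');
""")

# Each branch tags its rows with a literal naming the source table.
rows = conn.execute("""
SELECT name, 'table1' AS fromTbl FROM table1 WHERE name LIKE '%j%'
UNION ALL
SELECT name, 'table2' AS fromTbl FROM table2 WHERE name LIKE '%j%'
""").fetchall()
print(rows)
```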
|
SQL statement to select from 2 different tables, from two different databases (same server)
|
[
"",
"sql",
"sql-server",
"sql-server-2005",
""
] |
I have a Table named Example that has over 5M rows.
I want to know the most efficient way to create a new `DateTime` column that does not allow nulls and has a default value of now. Just setting the value would fail due to the number of rows.
The plan I have in mind would involve:
1) creating a new column that allows nulls.
```
ALTER TABLE Example
ADD RecordDate datetime
GO
```
2) set the value of the column to `GETDATE()` 1000 (or more if possible) rows at a time.
3) once all rows have a value, I would alter the column to not allow nulls.
```
ALTER TABLE Example
ALTER COLUMN RecordDate datetime NOT NULL
```
I am not sure what would be the most efficient way of completing step 2, so that is what I would like some tips on.
|
To work though a large table with a sequential ID, applying updates in batches, this approach will work:
```
DECLARE @startID bigint
DECLARE @endID bigint
SELECT @startID=min(ID) from Example
WHILE @startID IS NOT NULL BEGIN
SELECT @endID=MAX(ID) FROM (
SELECT top(1000) ID from Example where ID>=@startID ORDER BY ID
) t
update Example
set RecordDate = GETDATE()
where ID between @startID and @endID AND RecordDate IS NULL
IF @@ROWCOUNT=0 BEGIN
SET @startID=NULL
END ELSE BEGIN
SET @startID=@endID
END
END
```
The batch size is controlled by
```
SELECT top(1000) ID from Example where ID>=@startID ORDER BY ID
```
Adjust the 1000 as necessary to ensure each UPDATE completes quickly. I've used this technique to update hundreds of millions of rows in batches of around 100000 per update.
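The same keyset-batching idea can be sketched outside T-SQL; here is a Python/SQLite version with the row count scaled down (`datetime('now')` stands in for `GETDATE()`, and the loop advances past each batch's highest ID):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Example (ID INTEGER PRIMARY KEY, RecordDate TEXT)")
conn.executemany("INSERT INTO Example (ID) VALUES (?)", [(i,) for i in range(1, 10001)])

BATCH = 1000
start = conn.execute("SELECT MIN(ID) FROM Example").fetchone()[0]
while start is not None:
    # End of this batch: the highest ID among the next BATCH rows.
    end = conn.execute(
        "SELECT MAX(ID) FROM (SELECT ID FROM Example WHERE ID >= ? ORDER BY ID LIMIT ?)",
        (start, BATCH)).fetchone()[0]
    cur = conn.execute(
        "UPDATE Example SET RecordDate = datetime('now') "
        "WHERE ID BETWEEN ? AND ? AND RecordDate IS NULL",
        (start, end))
    conn.commit()  # commit per batch keeps each transaction short
    start = end + 1 if cur.rowcount else None

remaining = conn.execute(
    "SELECT COUNT(*) FROM Example WHERE RecordDate IS NULL").fetchone()[0]
print(remaining)  # 0
```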
|
I would suggest:
```
ALTER TABLE Example ADD RecordDate datetime NOT NULL DEFAULT getdate();
```
No matter how you attempt this, you are going to have to rewrite all the data records to add the extra bytes on each page for the value -- even if the value is NULL.
I had a thought that the following would minimize changes to the data:
```
ALTER TABLE Example ADD _RecordDate datetime;
ALTER TABLE Example ADD FirstDateTime as (cast(<current datetime> as datetime));
ALTER TABLE Example ADD RecordDate as COALESCE(_RecordDate, FirstDateTime);
```
On second thought, you still have to reserve the space on the page for `_RecordDate`, so the first method is probably the best.
Another alternative would be to set up another table with the same primary key and the record date. This requires a `left join` to get the information, but that would only be needed when you are accessing the column.
|
Set value to a new datetime column in a table with over 5 million rows
|
[
"",
"sql",
"sql-server",
"sql-server-2008-r2",
""
] |
I am using SQL Server. I have a table that looks like:
```
Name timestamp var1
Bill 2015-02-05 10:10:10 x1
Bill 2015-02-05 10:10:11 x2
...
Jim 2015-02-05 10:10:10 y1
Jim 2015-02-05 10:10:11 y2
...
John 2015-02-05 10:10:10 z1
John 2015-02-05 10:10:11 z2
```
The table is very large, say 1 million rows, and the timestamp is updated every second. I want to select the people whose var1 delta over any one minute (i.e. x61-x1 or x62-x2, etc.) is between 5 and 7. Here is my code with a join.
```
declare @duration int
set @duration = 60
SELECT a.name,
a.var1-b.var1 AS change
From Table1 a
inner join Table1 b
on a.name = b.name
and a.timestamp = b.timestamp + @duration
Where change between 5 and 7
```
However, I know there are two major problems.
1. Timestamp can't be compared like this. Is there any way to fix it please?
2. My table is too large. If joining each time, it will take too long to run. Any idea to avoid that please?
|
Assuming `@duration` is a number of minutes that join should look like
```
....
and a.timestamp = DATEADD(mi,@duration,b.timestamp)
....
```
|
If you're on SQL Server 2012 or newer you can solve this by using the LAG() window function, since with that you can compare the value of a row to the previous one, and if you order them properly, that's the one you're interested in.
<https://msdn.microsoft.com/en-us/library/hh231256.aspx>
It should probably be possible to adjust "Gaps and Islands" problem solutions to this one too.
Edit: I found the post by Itzik Ben-Gan that I remembered reading earlier; this belongs to the "special islands" category:
<http://sqlmag.com/sql-server-2012/solving-gaps-and-islands-enhanced-window-functions>
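A hedged sketch of the LAG() approach using Python's `sqlite3` (SQLite also supports window functions from version 3.25). The toy data has one row per minute, so a lag of 1 stands in for "one minute earlier"; on the real one-row-per-second table the lag offset would be 60:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE Table1 (name TEXT, ts TEXT, var1 REAL);
    INSERT INTO Table1 VALUES
        ('Bill', '2015-02-05 10:10:10', 1.0),
        ('Bill', '2015-02-05 10:11:10', 7.0),
        ('Jim',  '2015-02-05 10:10:10', 2.0),
        ('Jim',  '2015-02-05 10:11:10', 3.0);
""")

# LAG reaches back within each person's ordered rows, so the table is
# scanned once instead of being self-joined.
rows = cur.execute("""
    SELECT name, delta FROM (
        SELECT name,
               var1 - LAG(var1) OVER (PARTITION BY name ORDER BY ts) AS delta
        FROM Table1
    )
    WHERE delta BETWEEN 5 AND 7
""").fetchall()
print(rows)
```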
|
Try to avoid joining a large table
|
[
"",
"sql",
"sql-server",
""
] |
I need to find ranges of dates within a column and serialize them in a concise format (`start` - `end` for a range or `date` for a single-date range).
I have a CTE (`readings`) that returns a data set that resembles:
```
ID VALUE DATE
1234567 A 2012-05-09
1234567 A 2012-05-10
1234567 A 2012-05-11
1234567 A 2012-05-16
1234567 A 2012-05-17
1234567 A 2012-05-20
1234567 B 2012-05-11
1234567 B 2012-05-12
1234567 B 2012-05-13
1234567 B 2012-05-14
```
I've been able to get:
```
ID VALUE TOTAL_DAYS DATES
1234567 A 6 2012-05-09; 2012-05-10; 2012-05-11; 2012-05-16; 2012-05-17; 2012-05-20
1234567 B 4 2012-05-11; 2012-05-12; 2012-05-13; 2012-05-14
```
Using:
```
readings AS (
...
)
,
readings_aggr AS (
SELECT r1.id, r1.[value]
,count(distinct r1.date) TOTAL_DAYS
,STUFF((
SELECT '; ' + cast(r0.date as varchar)
FROM readings r0
WHERE r1.id = r0.id
AND r1.value = r0.value
ORDER BY r0.date
FOR XML PATH(''),TYPE).value('(./text())[1]','VARCHAR(MAX)'
),1,2,'') AS DATES
FROM readings r1
GROUP BY r1.id, r1.[value]
)
SELECT * FROM readings_aggr
```
I would like to format it as such:
```
ID VALUE TOTAL_DAYS DATES
1234567 A 6 2012-05-09 - 2012-05-11; 2012-05-16 - 2012-05-17; 2012-05-20
1234567 B 4 2012-05-11 - 2012-05-14
```
Is this possible without using a procedural approach?
|
You can use this query:
```
SELECT ID, VALUE, MIN([DATE]) AS startDate, MAX([DATE]) AS endDate
FROM (
SELECT ID, VALUE, DATE,
DATEDIFF(Day, '1900-01-01' , [DATE])- ROW_NUMBER() OVER( PARTITION BY ID, VALUE ORDER BY [DATE] ) AS DateGroup
FROM readings ) rGroups
GROUP BY ID, VALUE, DateGroup
```
to get a table expression containing all start - end intervals of your data:
```
ID VALUE startDate endDate
--------------------------------------
1234567 A 2012-05-09 2012-05-11
1234567 A 2012-05-16 2012-05-17
1234567 A 2012-05-20 2012-05-20
1234567 B 2012-05-11 2012-05-14
```
Then use the above query within `readings_aggr`:
```
;WITH start_end_readings AS (
SELECT ID, VALUE, MIN([DATE]) AS startDate, MAX([DATE]) AS endDate
FROM (
SELECT ID, VALUE, DATE, DATEDIFF(Day, '1900-01-01' , [DATE])- ROW_NUMBER() OVER( PARTITION BY ID, VALUE ORDER BY [DATE] ) AS DateGroup
FROM readings ) rGroups
GROUP BY ID, VALUE, DateGroup
), readings_aggr AS (
SELECT ID, [VALUE]
,count(distinct date) TOTAL_DAYS
,STUFF((
SELECT '; ' + cast(startDate as varchar) +
CASE WHEN startDate <> endDate THEN ' - ' + cast(endDate as varchar)
ELSE ''
END
FROM start_end_readings r0
WHERE r1.id=r0.id AND r1.value=r0.value
ORDER BY startDate
FOR XML PATH(''),TYPE).value('(./text())[1]','VARCHAR(MAX)'
),1,2,'') AS DATES
FROM readings AS r1
GROUP BY id, [value]
)
SELECT * FROM readings_aggr
```
to get the desired result:
```
ID VALUE TOTAL_DAYS DATES
===========================================================================
1234567 A 6 2012-05-09 - 2012-05-11; 2012-05-16 - 2012-05-17; 2012-05-20
1234567 B 4 2012-05-11 - 2012-05-14
```
[SQL Fiddle Demo here](http://sqlfiddle.com/#!6/cf364/1)
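The grouping trick can also be reproduced in SQLite through Python's `sqlite3`, with `julianday()` standing in for `DATEDIFF` from a fixed epoch; the data mirrors the sample in the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE readings (id INTEGER, value TEXT, date TEXT);
    INSERT INTO readings VALUES
        (1234567, 'A', '2012-05-09'), (1234567, 'A', '2012-05-10'),
        (1234567, 'A', '2012-05-11'), (1234567, 'A', '2012-05-16'),
        (1234567, 'A', '2012-05-17'), (1234567, 'A', '2012-05-20'),
        (1234567, 'B', '2012-05-11'), (1234567, 'B', '2012-05-12'),
        (1234567, 'B', '2012-05-13'), (1234567, 'B', '2012-05-14');
""")

# Consecutive dates share the same (day number - row number) value,
# so grouping by that difference isolates each island of dates.
rows = cur.execute("""
    SELECT id, value, MIN(date) AS startDate, MAX(date) AS endDate
    FROM (
        SELECT id, value, date,
               CAST(julianday(date) AS INTEGER)
                 - ROW_NUMBER() OVER (PARTITION BY id, value ORDER BY date)
               AS DateGroup
        FROM readings
    ) AS g
    GROUP BY id, value, DateGroup
    ORDER BY id, value, startDate
""").fetchall()
for r in rows:
    print(r)
```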
|
You'll probably be able to do this using a CLR aggregate.
Here's an example from MSDN which concatenates your data together. Just by changing the comma to a semi-colon, you could have your current format with a much cleaner query.
<https://msdn.microsoft.com/en-us/library/ms165055%28v=vs.90%29.aspx>
Once that's in place, you can tweak the Accumulate and/or Terminate method to look over the data and output ranges where possible. You'd probably want to accumulate the values into something like a SortedList instead of a StringBuilder, and then do the range analysis in the Terminate method.
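Independent of the CLR plumbing, the range analysis such a Terminate method would perform can be sketched in plain Python (this is only the algorithm, not the .NET aggregate itself):

```python
from datetime import date, timedelta

def serialize_ranges(dates):
    """Collapse dates into 'start - end' runs; a single-day run appears
    as a lone date. This is the logic a CLR aggregate's Terminate method
    would run over its accumulated sorted list."""
    runs, current = [], []
    for d in sorted(set(dates)):
        if current and d - current[-1] > timedelta(days=1):
            runs.append(current)   # gap found: close the current run
            current = []
        current.append(d)
    if current:
        runs.append(current)
    return "; ".join(
        str(r[0]) if r[0] == r[-1] else f"{r[0]} - {r[-1]}" for r in runs
    )

days = [date(2012, 5, 9), date(2012, 5, 10), date(2012, 5, 11),
        date(2012, 5, 16), date(2012, 5, 17), date(2012, 5, 20)]
print(serialize_ranges(days))
# → 2012-05-09 - 2012-05-11; 2012-05-16 - 2012-05-17; 2012-05-20
```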
|
Serialize dates in a 'range' format without using a procedure
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
Let's take the following sample table:
```
Person Quantity
A 1
B 2
C 3
D 4
E 5
```
Result should be:
```
PersonAggregate
1 (0+Quantity of PersonA)=sumA
3 (sumA+Quantity of PersonB)=sumB
6 (sumB+Quantity of PersonC)=sumC
10 (sumC+Quantity of PersonD)=sumD
15 (sumD+Quantity of PersonE)
```
Is it possible to get this result in one SQL query?
|
Most modern databases support cumulative sums via window functions:
```
select person, sum(quantity) over (order by person) as cumesum
from sample;
```
You can also do this with a correlated subquery:
```
select s.person,
(select sum(s2.quantity)
from sample s2
where s2.person <= s.person
) as cumesum
from sample s;
```
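Both forms can be checked side by side with Python's `sqlite3` (SQLite supports window functions from 3.25); the sample data matches the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE sample (person TEXT, quantity INTEGER);
    INSERT INTO sample VALUES ('A',1),('B',2),('C',3),('D',4),('E',5);
""")

# Window-function form: running total over persons in order.
window = cur.execute("""
    SELECT person, SUM(quantity) OVER (ORDER BY person) AS cumesum
    FROM sample
""").fetchall()

# Correlated-subquery form gives the same result on older engines,
# at the cost of re-summing the prefix for every row.
subquery = cur.execute("""
    SELECT s.person,
           (SELECT SUM(s2.quantity) FROM sample s2
            WHERE s2.person <= s.person) AS cumesum
    FROM sample s
    ORDER BY s.person
""").fetchall()

print(window)
assert window == subquery
```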
|
This will obviously get the individual sums:
```
select person, sum(quantity)
from sample
group by person
order by person
```
I don't think your desired effect can be done in a set-based way. A procedural language with a cursor, like T-SQL or PL/SQL, can do it easily.
I'd write a stored procedure and call it.
|
SQL: Column Sum
|
[
"",
"sql",
"select",
"aggregate",
""
] |
I have a system that has a User, Message, and MessageToken models. A User can create Messages. But when any User reads the Messages of others a MessageToken is created that associates the reader (User) to the Message. MessageTokens are *receipts* that keep track of the states for the user and that particular message. All of my associations in the Models are set up properly, and everything works fine, except for structuring a very specific query that I cannot get to work properly.
---
**User.rb**
```
has_many :messages
```
**Message.rb**
```
belongs_to :user
has_many :message_tokens
```
**MessageToken.rb**
```
belongs_to :user
belongs_to :message
```
---
I am trying to structure a query to return Messages that: Do not belong to the user; AND { The user has a token with the read value set to false OR The user does not have a token at all }
The latter part of the statement is what is causing problems. I am able to successfully get results for Messages that do not belong to the user, and for Messages that the user has a token for with read => false. But I cannot get the expected result when I try to query for Messages that have no MessageToken for the user. The query does not error out, it just does not return the expected result. How would you structure such a query?
**Below are the results of my successful queries and the expected results.**
```
130 --> # Messages
Message.count
78 --> # Messages that are not mine
Message.where.not(:user_id => @user.id)
19 --> # Messages that are not mine and that I do not have a token for
59 --> # Messages that are not mine, and I have a token for
Message.where.not(:user_id => @user.id).includes(:message_tokens).where(message_tokens: {:user_id => @user.id}).count
Message.where.not(:user_id => @user.id).includes(:message_tokens).where(["message_tokens.user_id = ?", @user.id]).count
33 --> # Messages that are not mine, and I have a token for, and the token is not read
Message.where.not(:user_id => @user.id).includes(:message_tokens).where(message_tokens: {:user_id => @user.id, :read => false}).count
Message.where.not(:user_id => @user.id).includes(:message_tokens).where(["message_tokens.user_id = ? AND message_tokens.read = false", @user.id]).references(:message_tokens).count
```
---
**The Final Expected Result**
```
52 --> # Messages that are not mine and: I have a token for that is not read OR I do not have a token for
```
---
**My best attempt at a query to achieve my goal**
```
64 --> # Wrong number returned, expected 52
Message.where.not(:user_id => @user.id).includes(:message_tokens).where(["(message_tokens.user_id = ? AND message_tokens.read = false) OR message_tokens.user_id <> ?", @user.id, @user.id]).references(:message_tokens).count
```
---
**The problem lies in the query trying to find Messages that are not the users and that the user does not have a token for**
```
63 --> #This does not yield the expected result, it should == 19 (the number of Messages that are not mine and that I do not have a token for)
Message.where.not(:user_id => @user.id).includes(:message_tokens).where.not(message_tokens: {:user_id => @user.id}).count
Message.where.not(:user_id => @user.id).includes(:message_tokens).where(["message_tokens.user_id <> ?", @user.id]).references(:message_tokens).count
```
How can I solve this?
|
OK, so thanks to the help of R11 Runner I was able to come up with a solution, which required dropping down to raw SQL. I could not use the Squeel gem or plain ActiveRecord, since neither offers an equivalent of SQL's NOT EXISTS operator, which was the crucial missing piece.
The reason this works is that, unlike the other attempts, NOT EXISTS returns every row of messages for which there is no matching row at all in message_tokens for the given `user_id`, whereas `where.not` only filters the joined token rows and therefore cannot guarantee the non-existence that was needed.
```
Message.find_by_sql ["SELECT * FROM messages where messages.user_id <> ?
AND (
(EXISTS (SELECT * FROM message_tokens WHERE message_id = messages.id AND user_id = ? AND read = FALSE))
OR
(NOT EXISTS (SELECT * FROM message_tokens WHERE message_id = messages.id AND user_id = ?))
)",@user.id, @user.id, @user.id]
```
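A small sanity check of that EXISTS / NOT EXISTS shape with Python's `sqlite3`; the toy messages and tokens are invented, with user 1 as the reader:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE messages (id INTEGER PRIMARY KEY, user_id INTEGER);
    CREATE TABLE message_tokens (message_id INTEGER, user_id INTEGER,
                                 read INTEGER);
    -- messages 2-4 belong to user 2; user 1 is the reader
    INSERT INTO messages VALUES (2, 2), (3, 2), (4, 2);
    INSERT INTO message_tokens VALUES
        (2, 1, 0),   -- unread token for user 1          -> counts
        (3, 1, 1),   -- read token for user 1            -> excluded
        (4, 3, 0);   -- token for user 3, none for user 1 -> counts
""")

# Messages not written by user 1 that user 1 either has an unread
# token for, or no token at all.
rows = cur.execute("""
    SELECT id FROM messages m
    WHERE m.user_id <> 1
      AND (EXISTS (SELECT 1 FROM message_tokens t
                   WHERE t.message_id = m.id AND t.user_id = 1
                     AND t.read = 0)
           OR NOT EXISTS (SELECT 1 FROM message_tokens t
                          WHERE t.message_id = m.id AND t.user_id = 1))
    ORDER BY m.id
""").fetchall()
print(rows)  # message 2 (unread token) and message 4 (no token for user 1)
```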
|
If you don't mind using 2 queries, a possible solution would be:
```
messages_not_written_by_user = Message.where.not(:user_id => @user.id)
messages_already_read_by_user = Message.where.not(:user_id => @user.id).includes(:message_tokens).where(message_tokens: {:user_id => @user.id, :read => true})
messages_not_read_by_user_yet = messages_not_written_by_user - messages_already_read_by_user
```
I would personally find this syntax more readable:
```
messages_not_written_by_user = Message.where.not(:user => @user).count
messages_already_read_by_user = Message.where.not(:user => @user).includes(:message_tokens).where(message_tokens: {:user => @user, :read => true}).count
```
One remark on this query:
```
63 --> #This does not yield the expected result, it should == 19 (the number of Messages that are not mine and that I do not have a token for)
Message.where.not(:user_id => @user.id).includes(:message_tokens).where.not(message_tokens: {:user_id => @user.id}).count
```
This query finds all the messages that have a token for an arbitrary *other* user. (If msg1 has a token for @user and also a token for @another_user, this query will still find it.)
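That failure mode is easy to reproduce with Python's `sqlite3`: give one message tokens for two users and compare the join-based filter with NOT EXISTS (all data here is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE messages (id INTEGER PRIMARY KEY, user_id INTEGER);
    CREATE TABLE message_tokens (message_id INTEGER, user_id INTEGER);
    -- msg 1 has tokens for BOTH user 1 and user 2
    INSERT INTO messages VALUES (1, 99);
    INSERT INTO message_tokens VALUES (1, 1), (1, 2);
""")

# The where.not translation: join, then filter away user 1's token rows.
# The token for user 2 still matches, so msg 1 is wrongly returned.
joined = cur.execute("""
    SELECT DISTINCT m.id FROM messages m
    JOIN message_tokens t ON t.message_id = m.id
    WHERE t.user_id <> 1
""").fetchall()

# NOT EXISTS asks "is there no token for user 1 at all?" -> correctly empty.
not_exists = cur.execute("""
    SELECT m.id FROM messages m
    WHERE NOT EXISTS (SELECT 1 FROM message_tokens t
                      WHERE t.message_id = m.id AND t.user_id = 1)
""").fetchall()

print(joined, not_exists)
```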
|
Ruby ActiveRecord Query with has_many Association
|
[
"",
"mysql",
"sql",
"ruby-on-rails",
"ruby",
"activerecord",
""
] |