Prompt stringlengths 10 31k | Chosen stringlengths 3 29.4k | Rejected stringlengths 3 51.1k | Title stringlengths 9 150 | Tags listlengths 3 7 |
|---|---|---|---|---|
I am not really sure what I am doing wrong, but it is Monday so this is expected. I am trying to process some data in a table. Occasionally, one of the columns will contain a question mark. I would like to replace all question marks with NULL.
This is my query, but it seems to just replace every single entry.
```
Update Table
set [Hours] = REPLACE([Hours], '?', NULL)
```
I have also tried
```
Update Table
set [Hours] = REPLACE([Hours], CHAR(63), NULL)
```
I am pretty confident I am missing the simplest mistake. I just can't see it for some reason. | I would rewrite this query to not use a `REPLACE`
```
UPDATE Table SET [Hours] = NULL WHERE [Hours] = '?'
```
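Both behaviors — the column-wiping `REPLACE` and the targeted `WHERE` update — are easy to reproduce outside SQL Server, since SQLite's `replace()` propagates NULL the same way. A sketch via Python's `sqlite3` on a made-up table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (Hours TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("8",), ("?",), ("6",)])

# REPLACE with a NULL argument returns NULL for *every* row,
# which is why the original UPDATE wiped the whole column.
wiped = conn.execute("SELECT REPLACE(Hours, '?', NULL) FROM t").fetchall()

# The WHERE-based UPDATE touches only the rows that are exactly '?'.
conn.execute("UPDATE t SET Hours = NULL WHERE Hours = '?'")
fixed = [r[0] for r in conn.execute("SELECT Hours FROM t ORDER BY rowid")]
```

Here `wiped` is all NULLs while `fixed` keeps the real values.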
EDIT: As for the reason your original query didn't work, as AlexK and roryap say, [The docs for `REPLACE` say that:](http://technet.microsoft.com/en-us/library/ms186862.aspx)
> Returns NULL if any one of the arguments is NULL.
You can test this by performing a simple replace, such as:
```
SELECT REPLACE('asdf', '?', NULL)
```
This will return NULL, even though `?` isn't in the string being searched. | Use a `where` clause:
```
Update Table set [Hours] = NULL
where [Hours] like '%?%'
``` | Update Replace query is replacing every entry in the column? | [
"",
"sql",
""
] |
Given that I have a table `messages`
```
sender_id recipient_id
========== =============
1 2
1 3
2 1
```
What's the most efficient query to get records in which one value is 1 and the other is 2, i.e. for the above example dataset, it should return both (1,2) and (2,1). Currently I am using something like:
```
SELECT *
FROM messages
WHERE 1 IN (sender_id, recipient_id) AND 2 IN (sender_id, recipient_id)
```
but this appears to be sub-optimal. Is there a better way to do it?
**EDIT**
* I do not need to filter out repetitions
* I have compound-indexed sender\_id and recipient\_id | If the sender can't also be the recipient:
```
select * from messages
where sender_id in (1, 2)
and recipient_id in (1, 2)
```
If it's possible to be both, add another filter:
```
select * from messages
where sender_id in (1, 2)
and recipient_id in (1, 2)
and sender_id != recipient_id
``` | The most straightforward approach seems to me to be to use `IN`:
```
select *
from messages
where (sender_id, recipient_id) in ((1, 2), (2, 1));
```
Testing on SQL Fiddle shows that it is sometimes faster than Bohemian's answer, and sometimes slower, depending on the data present. But for readability, I think this is better. | Optimal query to find both (x,y) and (y,x) records | [
"",
"sql",
"postgresql",
""
] |
I have a table like this
The first table is the "Appointments" table with 'doctor\_id'; the second table is the "Doctors" table, also with 'doctor\_id'. The duplicate values in the appointments table represent how many appointments each doctor has; for example, doctor\_id 50 has 6 appointments.
```
doctor_id(Appointments) | doctor_id(Doctors)
50 | 50
50 | 51
50 | 52
52 | 53
50 |
50 |
52 |
53 |
50 |
```
Now, my question is: how would I show the table below (P.S. excluding doctor\_id '51')?
```
doctor_id (Doctors) | Count
50 | 6
52 | 2
53 | 1
```
This is what I have tried so far; I have managed to work out how many doctors have appointments using:
```
SELECT COUNT(*) FROM appointment
WHERE doctor_id > 50
```
I have executed other queries as well but with no luck. | Assuming `doctor_id` is unique in the `doctors` table...
```
SELECT d.doctor_id AS doctor_id
, COUNT(a.doctor_id) AS appointment_count
FROM doctors d
LEFT
JOIN appointments a
ON a.doctor_id = d.doctor_id
GROUP BY d.doctor_id
```
To get the count by doctor\_id, you need to `GROUP BY` the doctor\_id. You don't have to include doctor\_id in the SELECT list, but if you only return the `COUNT()`, you'd just get a list of counts:
```
appointment_count
-----------------
6
0
2
1
```
With no indication of which appointment\_count goes with which doctor\_id. So, the usual pattern is to include what you GROUP BY in the SELECT list.
In order to return the 0 count for doctor 51, we want an outer join to the appointments table, so the row from the doctors table is returned even when there is no match.
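A miniature reproduction of the outer-join count (sketch via Python's `sqlite3`; the data mirrors the example above, and `COUNT(a.doctor_id)` skips the NULLs produced by the outer join so unmatched doctors show 0):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE doctors (doctor_id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE appointments (doctor_id INTEGER)")
conn.executemany("INSERT INTO doctors VALUES (?)", [(50,), (51,), (52,), (53,)])
conn.executemany("INSERT INTO appointments VALUES (?)",
                 [(50,)] * 6 + [(52,)] * 2 + [(53,)])

# COUNT(a.doctor_id) counts only matched rows; doctor 51 shows 0, not 1.
rows = conn.execute("""
    SELECT d.doctor_id, COUNT(a.doctor_id)
    FROM doctors d
    LEFT JOIN appointments a ON a.doctor_id = d.doctor_id
    GROUP BY d.doctor_id
    ORDER BY d.doctor_id
""").fetchall()
```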
---
If you only want rows for doctors that have one or more appointments, then you can just query
the appointments table. (This query won't return any "zero" count for a doctor.)
```
SELECT a.doctor_id
, COUNT(1) AS appointment_count
FROM appointments a
GROUP BY a.doctor_id
```
Again, the `GROUP BY` clause is necessary to get a count of rows for each distinct value of doctor\_id. That effectively collapses the rows for each doctor\_id into a single row. The aggregate function operates on the "group" of rows. | ```
SELECT d.doctor_id, COUNT(a.doctor_id) FROM doctors d
Join appointment a
d.doctor_id =a.doctor_id
WHERE d.doctor_id > 50
GROUP BY a.doctor_id
```
try this code and hope this works. I am grouping the doctors appointments by the id | How to join Count(*) columns with another column in Mysql | [
"",
"mysql",
"sql",
"count",
""
] |
```
select COALESCE([ItemSlotX],0) as [ItemSlotX],COALESCE([ItemSlotY], 0 ) as [ItemSlotY] from [PowerUP_Items] where [ItemIndex]=16 and [ItemGroup]=255
```
In my case, there is no record.
How can I return 0 for ItemSlotX and 0 for ItemSlotY if there is no record found? | If you want the results, even when there is no matching row, use this:
```
select COALESCE([ItemSlotX],0) as [ItemSlotX],COALESCE([ItemSlotY], 0 ) as [ItemSlotY]
from (select null dummy ) d
left
outer
join [PowerUP_Items]
on [ItemIndex]=16 and [ItemGroup]=255
``` | This?
```
select ItemSlotX, ItemSlotY from PowerUP_Items where ItemIndex=16 and ItemGroup=255
if @@rowcount = 0
select 0 as ItemSlotX, 0 as ItemSlotY
```
Or more general approach:
```
if exists (select * from PowerUP_Items where ItemIndex=16 and ItemGroup=255)
select ItemSlotX, ItemSlotY from PowerUP_Items where ItemIndex=16 and ItemGroup=255
else
select 0 as ItemSlotX, 0 as ItemSlotY
``` | MSSQL Return 0 for null blank | [
"",
"sql",
"sql-server",
""
] |
To begin, yes this is for homework, and I've been trying to read and understand. Nothing has explained why one is better or worse. Anyway, in SQL Server using the Adventureworks database, the following 3 queries are run:
```
USE AdventureWorks2012;
GO
--1
SELECT LastName
FROM Person.Person
WHERE LastName = 'Smith';
--2
SELECT LastName
FROM Person.Person
WHERE LastName LIKE 'Sm%';
--3
SELECT LastName
FROM Person.Person
WHERE LastName LIKE '%mith';
```
LastName is a NonClustered index. Why do queries 1 and 2 perform identically and query 3 perform much worse than the first 2? | Open the phone book that is sorted by last names
1. Lookup for all "Smith"s
2. Lookup for all people whose last name starts with "Sm"
3. Lookup for all people whose last name ends with "mith"
Do you see now? | Keys in an index are sorted so that finding a value is easier. For string-type values they are sorted alphabetically (ascending or descending). That makes it easy to locate any value based on the whole value or its beginning.
The problem with a pattern that starts with a wildcard (`'%mith'`) is that SQL Server cannot use the sorting of the index in any way (it cannot do a seek). It has to actually go through the whole index (doing a scan).
"",
"sql",
"sql-server",
"performance",
"t-sql",
""
] |
I have an MS-SQL table, with a column titled 'ImportCount'.
Data in this column follows the below format:
```
ImportCount
[Schedules] 1376 schedule items imported from location H:\FOLDERA\AA\XX...
[Schedules] 10201 schedule items imported from location H:\FOLDERZZ\PERS\YY...
[Schedules] 999 schedule items imported from location R:\PERS\FOLDERA\AA\XX...
[Schedules] 21 schedule items imported from location H:\FOLDERA\MM\2014ZZ...
```
What I would like to do is extract that numerical portion of the data (which varies in length), but am struggling to get the right result. Would appreciate any help on this!
Thanks. | ```
select SUBSTRING(ImportCount, 13, patindex('% schedule items%', ImportCount) - 13) from YourTable
``` | Try
```
select left(ImportCount, patindex('%[^0-9]%', ImportCount+'.') - 1)
``` | MS-SQL - Extracting numerical portion of a string | [
"",
"sql",
"sql-server",
""
] |
I'm struggling to figure out the best way to write this query to use fewer queries if possible. I'm wondering if a pivot table might be the correct way?
My 3 separate queries:
```
SELECT
ISNULL(SUM(ps.UnitsSold), 0) AS UnitsSold,
ISNULL(pg.[Description], 'Other') AS [Description]
FROM dbo.ProductSales ps
LEFT OUTER JOIN dbo.Product p ON ps.ProductID = p.ProductID
LEFT OUTER JOIN dbo.ProductGroupings pg ON p.[Asin] = pg.[Asin]
WHERE (ps.OrderDate BETWEEN GETDATE() - 10 AND GETDATE() - 3) AND ps.DistributionCentreID IN (3)
GROUP BY pg.[Description], ps.DistributionCentreID
SELECT
ISNULL(SUM(ps.UnitsSold), 0) AS UnitsSold,
ISNULL(pg.[Description], 'Other') AS [Description]
FROM dbo.ProductSales ps
LEFT OUTER JOIN dbo.Product p ON ps.ProductID = p.ProductID
LEFT OUTER JOIN dbo.ProductGroupings pg ON p.[Asin] = pg.[Asin]
WHERE (ps.OrderDate BETWEEN GETDATE() - 17 AND GETDATE() - 10) AND ps.DistributionCentreID IN (3)
GROUP BY pg.[Description], ps.DistributionCentreID
SELECT
ISNULL(SUM(ps.UnitsSold), 0) AS UnitsSold,
ISNULL(pg.[Description], 'Other') AS [Description]
FROM dbo.ProductSales ps
LEFT OUTER JOIN dbo.Product p ON ps.ProductID = p.ProductID
LEFT OUTER JOIN dbo.ProductGroupings pg ON p.[Asin] = pg.[Asin]
WHERE (ps.OrderDate BETWEEN GETDATE() - 374 AND GETDATE() - 367) AND ps.DistributionCentreID IN (3)
GROUP BY pg.[Description], ps.DistributionCentreID
```
This produces results similar to this (first query):
```
UnitsSold Description
4154 desc1
764 desc2
```
etc..
Things to think about: a description (product group) might not exist in one of the queries, so I need to account for that.
Ideally I'd like it to look a little something like this:
```
Description UnitsSoldThisWeek UnitsSoldLastWeek UnitsSoldLastYear
Desc1 54 45 37
```
etc..
Any questions, issues or bitching due to bad query is acceptable, I'm happy to improve my understanding of SQL.
Thanks,
Michael | I've changed your query a little to show a different technique which will combine your 3 queries. I may not have the brackets quite right round the SUMs, but that gives you an idea. There are also better ways to do the sort of "date between" stuff you are trying, but that wasn't the question! See if that helps
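As a generic illustration of folding several filtered queries into one pass with `SUM(CASE …)` (sketch via Python's `sqlite3`; the tiny `sales` table and its integer "day" buckets are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (day INTEGER, units INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [(1, 5), (2, 3), (8, 7), (9, 2)])

# One scan of the table; each SUM(CASE ...) picks out its own date bucket.
this_week, last_week = conn.execute("""
    SELECT SUM(CASE WHEN day BETWEEN 8 AND 14 THEN units ELSE 0 END),
           SUM(CASE WHEN day BETWEEN 1 AND 7 THEN units ELSE 0 END)
    FROM sales
""").fetchone()
```

One table scan replaces the three separate filtered queries.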
```
SELECT
ISNULL(SUM(ps.UnitsSold), 0) AS UnitsSold,
ISNULL(pg.[Description], 'Other') AS [Description],
SUM(CASE WHEN (ps.OrderDate BETWEEN GETDATE() - 10 AND GETDATE() - 3) AND ps.DistributionCentreID = 3 THEN 1 ELSE 0 END) AS UnitsSoldThisWeek,
SUM(CASE WHEN (ps.OrderDate BETWEEN GETDATE() - 17 AND GETDATE() - 10) AND ps.DistributionCentreID = 3 THEN 1 ELSE 0 END) AS UnitsSoldLastWeek,
SUM(CASE WHEN (ps.OrderDate BETWEEN GETDATE() - 374 AND GETDATE() - 367) AND ps.DistributionCentreID = 3 THEN 1 ELSE 0 END) AS UnitsSoldLastYear
FROM dbo.ProductSales ps
LEFT OUTER JOIN dbo.Product p ON ps.ProductID = p.ProductID
LEFT OUTER JOIN dbo.ProductGroupings pg ON p.[Asin] = pg.[Asin]
GROUP BY pg.[Description], ps.DistributionCentreID
``` | Do this with one query and conditional aggregation:
```
SELECT coalesce(pg.[Description], 'Other') AS [Description],
sum(case when ps.OrderDate BETWEEN GETDATE() - 10 AND GETDATE() - 3 then UnitsSold
end) as ThisWeek
sum(case when ps.OrderDate BETWEEN GETDATE() - 17 AND GETDATE() - 10 then UnitsSold
else 0
end) as LastWeek,
sum(case when ps.OrderDate BETWEEN GETDATE() - 374 AND GETDATE() - 367 then UnitsSold
else 0
end) as LastYear
FROM dbo.ProductSales ps LEFT OUTER JOIN
dbo.Product p
ON ps.ProductID = p.ProductID LEFT OUTER JOIN
dbo.ProductGroupings pg
ON p.[Asin] = pg.[Asin]
WHERE ps.DistributionCentreID IN (3)
GROUP BY pg.[Description], ps.DistributionCentreID
``` | Multiple sql queries in one result | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
```
**Friends Table**
-------------------------
id seller_id buyer_id
-------------------------
1 101 102
2 102 104
3 103 101
4 104 101
**Name Table**
--------------
id name_id
--------------
101 Robin
102 Goblin
103 Ork
104 Wizard
```
I just want the Friends table to display the Name instead of name\_id and friend\_id.
I know it's easy but I can't find the proper words to google for it. Please help.. | You can `JOIN` the Name table twice to achieve that:
```
SELECT
f.id
, n1.name_id As seller
, n2.name_id As buyer
FROM Friends f
INNER JOIN Name n1 ON n1.id = f.seller_id
INNER JOIN Name n2 ON n2.id = f.buyer_id
``` | ```
select f.id,n.name_id as "seller_id",nb.name_id as "buyer_id"
from [Friends Table] f
join [Name Table] n on n.seller_id=f.id
join [Name Table] nb on nb.buyer_id=f.id
``` | how to query for the 2 id's field whose value present in the same table B | [
"",
"sql",
"sqlite",
""
] |
I want to write a query in Oracle SQL which returns every minute of the current day in one column, but I've got no idea how to start :-(
```
min
27.03.2014 00:00
27.03.2014 00:01
27.03.2014 00:02
...
27.03.2014 23:59
```
Thanks | We can use arithmetic to manipulate dates, and the simple CONNECT BY trick to generate a stream of rows.
```
alter session set nls_date_format='dd-mon-yyyy hh24:mi'
/
with cte as ( select trunc(sysdate) as start_date from dual )
select start_date + ((level-1)/(24*60)) as date_w_mins
from cte
connect by level <= (24*60)
/
``` | You can use the `numtodsinterval` function, and it is simple too:
```
SELECT to_char(TRUNC(sysdate)+
numtodsinterval (level-1,'minute'),'dd.mm.yyyy hh24:mi') min
FROM dual
CONNECT BY LEVEL <= (24*60);
``` | Oracle query with every minute a day | [
"",
"sql",
"oracle",
"date-arithmetic",
""
] |
This query currently shows max(hella) so far
```
SELECT max(hella)
FROM (
SELECT G.firstname, G.lastname, count(*) as hella
FROM guest G, Timeslot TS, shows H
where G.timeslotnum = TS.timeslotnum
AND TS.shownumber = H.shownumber
AND H.showname = 'Fitness'
Group by g.firstname, G.lastname
ORDER by hella
)
As blabla
```
I want to show the firstname and lastname of the max(hella) entry | This is much like what @maniek or @zfus already posted: it returns a single row, with an arbitrary pick if there are ties.
But with proper JOIN syntax and shorter with some syntax candy:
```
SELECT g.firstname, g.lastname, count(*) AS hella
FROM guest g
JOIN timeslot t USING (timeslotnum)
JOIN shows s USING (shownumber)
WHERE s.showname = 'Fitness'
GROUP BY 1,2
ORDER BY 3 DESC
LIMIT 1;
```
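The aggregate-then-order-then-limit idea can be checked on a toy table (sketch via Python's `sqlite3`; SQLite also accepts the positional `GROUP BY 1,2` / `ORDER BY 3` shorthand, though the sketch spells the names out):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE visits (firstname TEXT, lastname TEXT)")
conn.executemany("INSERT INTO visits VALUES (?, ?)",
                 [("Ann", "Lee")] * 3 + [("Bob", "Kay")] * 2)

# Aggregate per guest, sort by the count descending, keep the top row only.
top = conn.execute("""
    SELECT firstname, lastname, COUNT(*) AS hella
    FROM visits
    GROUP BY firstname, lastname
    ORDER BY hella DESC
    LIMIT 1
""").fetchone()
```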
[SQL Fiddle](http://sqlfiddle.com/#!15/94095/5) (reusing @sgeddes' fiddle). | ```
SELECT G.firstname, G.lastname, count(*) as hella
FROM guest G, Timeslot TS, shows H
where G.timeslotnum = TS.timeslotnum
AND TS.shownumber = H.shownumber
AND H.showname = 'Fitness'
Group by g.firstname, G.lastname
ORDER by hella desc
limit 1
``` | SQL Display info with Max(count(*)) | [
"",
"sql",
"postgresql",
"max",
"greatest-n-per-group",
""
] |
```
$a='asfda'; $b='sdgdfgd'; $c='sdfdsfg'; $d='sgdfgdsfg';
mysql_query("INSERT INTO `users`(`id`, `confirmation`, `name`, `email`, `password`) VALUES ('','$a','$b','$c','$d')");
```
I have 10 more columns in my user table that I want to leave empty, one of which is 'cityID' (a foreign key); it keeps giving me this error...
> "Cannot add or update a child row: a foreign key constraint fails (`php_first`.`users`, CONSTRAINT `users_ibfk_1` FOREIGN KEY (`city id`) REFERENCES `cities` (`id`))" | Its because you've created an index in users table which is references to `id` field in `cities` table. Remove that index and you'll get rid of this error.
I suggested that you remove the index because your index seems to be of no help otherwise when you have foreign key index, you have to make you that an entry for foreign key must exist before you can insert any entry to child record.
Logically think of it like this. You are going to create a child whose father doesn't exist ;) | ```
mysql_query("set foreign_key_checks=0");
mysql_query("INSERT INTO `users`(`id`, `confirmation`, `name`, `email`, `password`) VALUES ('','$a','$b','$c','$d')");
``` | Cannot add or update a child row... why did i get this error? | [
"",
"mysql",
"sql",
""
] |
I have this code in crystal reports that gives me last week date range based on the current date.
First day of the week:
```
If DayOfWeek(currentdate) = 2 Then
currentdate
Else If DayOfWeek(currentdate) = 3 Then
dateadd ("d",-1,currentdate)
Else If DayOfWeek(currentdate) = 4 Then
dateadd ("d",-2,currentdate)
Else If DayOfWeek(currentdate) = 5 Then
dateadd ("d",-3,currentdate)
Else If DayOfWeek(currentdate) = 6 Then
dateadd ("d",-4,currentdate)
Else If DayOfWeek(currentdate) = 7 Then
dateadd ("d",-5,currentdate)
Else If DayOfWeek(currentdate) = 1 Then
dateadd ("d",-6,currentdate)
```
Last day of week:
```
If DayOfWeek(currentdate) = 2 Then
dateadd ("d",+6,currentdate)
Else If DayOfWeek(currentdate) = 3 Then
dateadd ("d",+5,currentdate)
Else If DayOfWeek(currentdate) = 4 Then
dateadd ("d",+4,currentdate)
Else If DayOfWeek(currentdate) = 5 Then
dateadd ("d",+3,currentdate)
Else If DayOfWeek(currentdate) = 6 Then
dateadd ("d",+2,currentdate)
Else If DayOfWeek(currentdate) = 7 Then
dateadd ("d",+1,currentdate)
Else If DayOfWeek(currentdate) = 1 then currentdate
```
How can I do the same in SQL, using 2 variables to store Monday (`startdate`) and Sunday (`enddate`)?
I found this `select datepart(dw,getdate()) --6` on this site, but I do not know how to use it. | I generated some spaced-out dates in the `parms` CTE, then `SELECT` the `CurrentDate` from `parms`, the Sunday of the week prior to `CurrentDate`, and the Saturday of the week prior to `CurrentDate`. I'm assuming that you want the date range to be Sunday - Saturday.
> Sunday - Saturday Ranges
```
;WITH parms (CurrentDate) AS (
SELECT DATEADD(dd, -14, CURRENT_TIMESTAMP) UNION
SELECT DATEADD(dd, -6, CURRENT_TIMESTAMP) UNION
SELECT DATEADD(dd, 2, CURRENT_TIMESTAMP) UNION
SELECT DATEADD(dd, 8, CURRENT_TIMESTAMP) UNION
SELECT DATEADD(dd, 15, CURRENT_TIMESTAMP) UNION
SELECT DATEADD(dd, 20, CURRENT_TIMESTAMP)
)
SELECT CurrentDate
, LastWeekSunday = DATEADD(dd, -1, DATEADD(ww, DATEDIFF(ww, 0, CurrentDate) - 1, 0))
, LastWeekSaturday = DATEADD(dd, 5, DATEADD(ww, DATEDIFF(ww, 0, CurrentDate) - 1, 0))
FROM parms
```
> Monday to Sunday Ranges
```
;WITH parms (CurrentDate) AS (
SELECT DATEADD(dd, -14, CURRENT_TIMESTAMP) UNION
SELECT DATEADD(dd, -6, CURRENT_TIMESTAMP) UNION
SELECT DATEADD(dd, 2, CURRENT_TIMESTAMP) UNION
SELECT DATEADD(dd, 8, CURRENT_TIMESTAMP) UNION
SELECT DATEADD(dd, 15, CURRENT_TIMESTAMP) UNION
SELECT DATEADD(dd, 20, CURRENT_TIMESTAMP)
)
SELECT CurrentDate
, LastWeekMonday = DATEADD(dd, 0, DATEADD(ww, DATEDIFF(ww, 0, DATEADD(dd, -1, CurrentDate)) - 1, 0))
, LastWeekSunday = DATEADD(dd, 6, DATEADD(ww, DATEDIFF(ww, 0, DATEADD(dd, -1, CurrentDate)) - 1, 0))
FROM parms
```
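As a sanity check on the week arithmetic, the same "previous Monday through previous Sunday" window can be computed with Python's `datetime` (a sketch; the fixed date is arbitrary, chosen only so the result is reproducible):

```python
from datetime import date, timedelta

today = date(2014, 3, 27)  # a Thursday, picked arbitrarily

# Monday of the current week, then step back one full week.
this_monday = today - timedelta(days=today.weekday())
last_monday = this_monday - timedelta(days=7)
last_sunday = last_monday + timedelta(days=6)
```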
> If you just want the prior week's Monday to the prior week's Sunday from today rather than from a column of dates you can use this
```
SELECT CURRENT_TIMESTAMP
, LastWeekMonday = DATEADD(dd, 0, DATEADD(ww, DATEDIFF(ww, 0, DATEADD(dd, -1, CURRENT_TIMESTAMP)) - 1, 0))
, LastWeekSunday = DATEADD(dd, 6, DATEADD(ww, DATEDIFF(ww, 0, DATEADD(dd, -1, CURRENT_TIMESTAMP)) - 1, 0))
``` | This solution is tested and works. I am getting the previous week's Monday and Sunday as upper and lower bounds.
```
SELECT
-- 17530101 or 1753-01-01 is the minimum date in SQL Server
DATEADD(dd, ((DATEDIFF(dd, '17530101', GETDATE()) / 7) * 7) - 7, '17530101') AS [LowerLimit], -- Last Week's Monday
DATEADD(dd, ((DATEDIFF(dd, '17530101', GETDATE()) / 7) * 7) - 1, '17530101') AS [UpperLimit] -- Last Week's Sunday.
```
Which can be used like this in a real world query:
```
SELECT
*
FROM
SomeTable
WHERE
SomeTable.[Date] >= DATEADD(dd, ((DATEDIFF(dd, '17530101', GETDATE()) / 7) * 7) - 7, '17530101') AND
SomeTable.[Date] <= DATEADD(dd, ((DATEDIFF(dd, '17530101', GETDATE()) / 7) * 7) - 1, '17530101')
```
Here are some tests:
**1. Leap Year**
Current Date: `2016-02-29 00:00:00.000`
Results:
```
LowerLimit UpperLimit
2016-02-22 00:00:00.000 2016-02-28 00:00:00.000
```
**2. Last Week was in different year**
Current Date: `2016-01-06 00:00:00.000`
```
LowerLimit UpperLimit
2015-12-28 00:00:00.000 2016-01-03 00:00:00.000
```
**3. Lower limit in previous month and upper limit in current month**
Current Date: `2016-05-04 00:00:00.000`
```
LowerLimit UpperLimit
2016-04-25 00:00:00.000 2016-05-01 00:00:00.000
```
**4. Current Date is Sunday**
Current Date: `2016-05-08 00:00:00.000`
```
LowerLimit UpperLimit
2016-04-25 00:00:00.000 2016-05-01 00:00:00.000
``` | How to get last week date range based on current date in sql? | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have an attendance table like this in MySQL
> EmpID | EventTime
>
> 90010 | 2014-03-05 06:50:30
>
> 90010 | 2014-03-05 06:50:45
>
> 90010 | 2014-03-05 06:51:02
>
> 90020 | 2014-03-05 06:52:50
>
> 90030 | 2014-03-05 06:55:47
The employee with ID '90010' submitted more than one attendance record on March 5th.
Now, I want to make a report of attendance on March 5th, but when I run my SQL it always gives 3 records for ID '90010'.
This is my query
```
SELECT b.EmpID, a.EventTime
FROM MST_Attendance a, MST_Employee b
WHERE a.EmpID=b.EmpID
AND DATE(a.EventTime)='2014-03-05'
```
That query gives a result exactly the same as the table content. Actually, I need a result like this:
> 90010 | 2014-03-05 06:50:30
>
> 90020 | 2014-03-05 06:52:50
>
> 90030 | 2014-03-05 06:55:47
Please, can anyone help me with the query I have to use? | From your query, assuming you already have foreign keys on your MST\_Attendance table, you can just do a query on that table, since you have the two columns EmpID and EventTime there. You need to group by EmpID to obtain the minimum time:
```
SELECT EmpID, MIN(EventTime) as attendancetime
FROM MST_Attendance
WHERE DATE(EventTime)='2014-03-05'
GROUP BY EmpID
```
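A toy reproduction of the first-punch-per-day idea (sketch via Python's `sqlite3`, using the sample rows from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE att (EmpID INTEGER, EventTime TEXT)")
conn.executemany("INSERT INTO att VALUES (?, ?)", [
    (90010, "2014-03-05 06:50:30"),
    (90010, "2014-03-05 06:50:45"),
    (90010, "2014-03-05 06:51:02"),
    (90020, "2014-03-05 06:52:50"),
    (90030, "2014-03-05 06:55:47"),
])

# One row per employee: MIN() collapses the duplicates to the earliest punch.
rows = conn.execute("""
    SELECT EmpID, MIN(EventTime)
    FROM att
    WHERE DATE(EventTime) = '2014-03-05'
    GROUP BY EmpID
    ORDER BY EmpID
""").fetchall()
```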
If you need to combine other details of employees, assuming all your EmpIDs are unique in your MST\_Employee table, you can use an alias and nest the query above, followed by a RIGHT JOIN:
```
SELECT a.*, b.attendancetime
FROM MST_Employee a
RIGHT JOIN (
SELECT EmpID, MIN(EventTime) as attendancetime
FROM MST_Attendance
WHERE DATE(EventTime)='2014-03-05'
GROUP BY EmpID
) b
ON a.EmpID = b.EmpID
Using `MIN` of EventTime with `GROUP BY` will help you get the desired result:
```
SELECT b.EmpID, MIN(a.EventTime) AS EventTime
FROM MST_Attendance a, MST_Employee b
WHERE a.EmpID=b.EmpID
AND DATE(a.EventTime)='2014-03-05'
GROUP BY b.EmpID;
``` | Get minimum time on same date in MySQL | [
"",
"mysql",
"sql",
"database",
""
] |
Please see the SQL DDL below:
```
create table dbo.Test(id int, name varchar(30))
INSERT INTO Test values (1, 'Mark')
INSERT INTO Test values (2,'Williams')
```
I am trying to return: 'Mark Williams' using an SQL SELECT. I have tried using an SQL Pivot, but it has not worked. | Possibly more flexible than COALESCE would be to use the STUFF and FOR XML pattern:
```
SELECT TOP 1
STUFF((SELECT ' ' + Name AS [text()]
FROM dbo.Test
ORDER BY id
FOR XML PATH('')), 1, 1, '' ) Concatenated
FROM TEST
``` | Try this:
```
DECLARE @Return VARCHAR(MAX)
SELECT @Return = COALESCE(@Return+' ','') + name
FROM dbo.TEST
SELECT @Return
``` | Return two rows as a single row | [
"",
"sql",
"sql-server",
""
] |
I want to create a table of **325** columns:
```
CREATE TABLE NAMESCHEMA.NAMETABLE
(
ROW_ID TEXT NOT NULL, -- this is the primary key
-- 324 columns of these types:
CHAR(1),
DATE,
DECIMAL(10,0),
DECIMAL(10,7),
TEXT,
LONG,
) ROW_FORMAT=COMPRESSED;
```
I replaced all the VARCHAR columns with TEXT and I have enabled **Barracuda** in the my.ini file of MySQL; these are the attributes added:
```
innodb_file_per_table=1
innodb_file_format=Barracuda
innodb_file_format_check = ON
```
but i still have this error:
```
Error Code: 1118
Row size too large (> 8126). Changing some columns to TEXT or BLOB may help. In current row format, BLOB prefix of 0 bytes is stored inline.
```
EDIT: I can't change the structure of the database because it's a legacy application/system/database. The new table is being created as an export of the legacy database.
EDIT2: I know this question is similar to others, but it already contains the solutions I found on the internet (VARCHAR and Barracuda). Since I still have the problem, I decided to open a new question with the classic answers already inside, to see if someone has other answers. | I struggled with the same error code recently, due to a change in MySQL Server 5.6.20.
I was able to solve the problem by changing the innodb\_log\_file\_size in the my.ini text file.
In the release notes, it is explained that an innodb\_log\_file\_size that is too small will trigger a "Row size too large error."
<http://dev.mysql.com/doc/relnotes/mysql/5.6/en/news-5-6-20.html> | I tried all the solutions here, but only this parameter
```
innodb_strict_mode = 0
```
solved my day...
From the manual:
> The innodb\_strict\_mode setting affects the handling of syntax errors
> for CREATE TABLE, ALTER TABLE and CREATE INDEX statements.
> innodb\_strict\_mode also enables a record size check, so that an INSERT
> or UPDATE never fails due to the record being too large for the
> selected page size. | MySQL: Error Code: 1118 Row size too large (> 8126). Changing some columns to TEXT or BLOB | [
"",
"mysql",
"sql",
"create-table",
""
] |
```
DELETE from Users
Where Id in (
SELECT TOP 200 u.id
FROM users u JOIN
playlists pl
ON u.id = pl.UserId LEFT OUTER JOIN
playlistitems pli
ON pli.PlaylistId = pl.id
GROUP BY u.id
HAVING count(pl.id) = 1 AND
count(pli.id) = 0
)
```
Are there any pitfalls in this SQL query which would cause it to run really slowly? It takes me 1m32s to delete 200 rows, but there are 260,000 that I need to clean up. | I think it is pointless to join from `users`; why don't you do this instead:
```
DELETE from Users
Where Id in (
SELECT TOP 200 pl.UserId
FROM playlists pl
LEFT OUTER JOIN
playlistitems pli
ON pli.PlaylistId = pl.id
GROUP BY pl.UserId
HAVING count(pl.id) = 1 AND
count(pli.id) = 0
)
``` | Index each of the tables on the ID column. Preferably a clustered index, but not essential.
given that you already have indexes, try something like this:
```
;with cte as
(
select
u.id,
rank() over (partition by u.id) as rk
from users u
join playlists pl on pl.UserId = u.ID
join playlistites pli on pli.PlaylistID = pl.ID
)
delete from cte
where rk > 200
``` | Deleting rows via SQL query -- any way to speed up this query? | [
"",
"sql",
"sql-server",
""
] |
I'm trying to show a zero when no records were found for that particular id.

The image is the representation of what I'm trying to achieve. Right now I'm only getting the rows where it finds at least one record (so 3, 4 and 5).
Could someone please tell me what I'm doing wrong? Here's my attempt (one of many):
## EDIT:
```
SELECT statusses.statusses_namenl,
IFNULL(COUNT(projectrecipes.fk_projectrecipes_projectid),0) AS CntRows
FROM recipes
RIGHT JOIN projectrecipes
ON projectrecipes.fk_projectrecipes_recipeid = recipes.id
LEFT JOIN statusses
ON recipes.fk_recipe_status = statusses.id
WHERE projectrecipes.fk_projectrecipes_projectid = 213
GROUP BY recipes.fk_recipe_status
```
## EDIT:
This is the result I'm getting:
 | You want a row per status, so select from statusses. Then you want to count records from recipes and projectrecipes per status, so outer join them. Put your criteria in the ON clause, not in the WHERE clause, because when there is no matching record, fk\_projectrecipes\_projectid will be NULL. (You turn the outer join into an inner join by using an outer joined column in the where clause.)
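The ON-vs-WHERE distinction is the crux, and easy to see on a miniature schema (sketch via Python's `sqlite3`; the shortened table and column names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE status (id INTEGER, name TEXT)")
conn.execute("CREATE TABLE recipe (id INTEGER, status_id INTEGER, project INTEGER)")
conn.executemany("INSERT INTO status VALUES (?, ?)", [(1, "draft"), (2, "done")])
conn.execute("INSERT INTO recipe VALUES (1, 1, 213)")  # no recipe has status 2

# Filter in the WHERE clause: the NULL-extended row fails the test, so
# status 'done' disappears -- the outer join degenerates into an inner join.
where_rows = conn.execute("""
    SELECT s.name, COUNT(r.id) FROM status s
    LEFT JOIN recipe r ON r.status_id = s.id
    WHERE r.project = 213
    GROUP BY s.name
""").fetchall()

# Filter in the ON clause: unmatched statuses survive with a 0 count.
on_rows = conn.execute("""
    SELECT s.name, COUNT(r.id) FROM status s
    LEFT JOIN recipe r ON r.status_id = s.id AND r.project = 213
    GROUP BY s.name
    ORDER BY s.name
""").fetchall()
```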
```
select s.statusses_namenl, count(pr.fk_projectrecipes_recipeid)
from statusses s
left join recipes r on r.fk_recipe_status = s.id
left join projectrecipes pr on pr.fk_projectrecipes_recipeid = r.id and pr.fk_projectrecipes_projectid = 213
group by s.statusses_namenl;
``` | I think you don't need to use the CASE; also, use a RIGHT join so that if no match comes in the join the result will still come. Try the below, it should work.
```
SELECT recipes.fk_recipe_status, COUNT(projectrecipes.fk_projectrecipes_projectid)
FROM projectrecipes
RIGHT JOIN recipes
ON projectrecipes.fk_projectrecipes_recipeid = recipes.id
WHERE projectrecipes.fk_projectrecipes_projectid = 213
GROUP BY recipes.fk_recipe_status
``` | sql show 0 when no records where found | [
"",
"mysql",
"sql",
""
] |
I am new to MySQL but have a basic understanding of SQL, and I am trying to say that if my parameter has not been set (i.e. null) then set it to JM.
What/where am I going wrong with the below script?
Thanks in advance
```
CREATE DEFINER=`james`@`%` PROCEDURE `Project_Status`(IN engineer VARCHAR (5))
BEGIN
SELECT
PP.pid_full,
PP.description_small,
PP.project_status,
PP.est_comp_date as 'Est Comp Date',
EP.engineer
FROM
project_pid pp
JOIN
eng_project_hours EP on PP.pid_full = EP.pid_full
where engineer = ep.engineer;
IF engineer is null then update engineer = jm
set engineer = ep.engineer;
end if;
end
``` | I'm not sure I understand what you are trying to do.
If you want to use a value of `'JM'` instead of `NULL` when the input argument has a value of `NULL`, one way to get that kind of behavior is to declare a local variable in the procedure.
Check the value of the parameter, and set the value of the local variable accordingly; set the local variable to the literal value `'JM'` when the input argument is null, or to the value of the input argument when it's not null. Then reference the local variable in the SQL statement.
Something like this:
```
DELIMITER $$
CREATE DEFINER=`james`@`%` PROCEDURE `Project_Status`(IN engineer VARCHAR(5))
BEGIN
DECLARE ls_engineer VARCHAR(5);
SET ls_engineer = IFNULL(engineer,'JM');
SELECT pp.pid_full
, pp.description_small
, pp.project_status
, pp.est_comp_date as 'Est Comp Date'
, ep.engineer
FROM project_pid pp
JOIN eng_project_hours ep
ON pp.pid_full = ep.pid_full
WHERE ep.engineer = ls_engineer ;
END$$
```
---
Note that this:
```
SET ls_engineer = IFNULL(engineer,'JM');
```
is easier-to-read shorthand equivalent to this:
```
IF ( engineer IS NULL ) THEN
SET ls_engineer = 'JM';
ELSE
SET ls_engineer = engineer;
END IF;
```
---
**FOLLOWUP**
**Q:** Say, for instance, where I have 'JM' I wished for the procedure to select all the records; is that something that can be done within this?
**A:** Yes. Let's say, for example, if the input parameter has the "special value" of `'JM'`, you don't want any restriction on the `ep.engineer` column at all, you could tweak the query by adding an OR condition to the WHERE clause...
```
WHERE ep.engineer = ls_engineer
OR engineer = 'JM'
```
If the input parameter `engineer` has a value of `'JM'`, the predicate following the `OR` is going to return TRUE for all rows, so it won't matter whether the part before the `OR` returns TRUE, FALSE or NULL; the overall result of the WHERE clause is going to be TRUE for all rows.
But I would suggest that `NULL` would be more appropriate than `'JM'` as a special "return all rows" value for the input argument, with no need of a "default" value for the input parameter, i.e. no need to translate a NULL to `'JM'`. But that really depends on your use case; you might consider bypassing the `JM` default value altogether, and just do something like this in your query:
```
WHERE ep.engineer = engineer
OR engineer IS NULL
```
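That optional-filter pattern is easy to verify on any engine; a sketch using Python's `sqlite3`, with a named bound parameter standing in for the procedure argument:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hours (engineer TEXT)")
conn.executemany("INSERT INTO hours VALUES (?)", [("JM",), ("AB",), ("JM",)])

query = "SELECT COUNT(*) FROM hours WHERE engineer = :e OR :e IS NULL"

filtered = conn.execute(query, {"e": "AB"}).fetchone()[0]    # only AB rows
everything = conn.execute(query, {"e": None}).fetchone()[0]  # filter disabled
```

With a NULL parameter the second predicate is TRUE for every row, so the filter switches itself off.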
---
**Q:** What is the reason/meaning for the `ls_` prefix?
**A:**
The `ls_` prefix is just a Hungarian-style notation I've used since Oracle PL/SQL days; I just found it a convenient way to help keep track of scope, and make for variable names that didn't conflict with other variable names, or with column names in SQL.
In a SQL statement, I can qualify column names to avoid ambiguity, but there's no way to qualify variables (apart from using bind parameters).
And I can define a local variable that has the exact same name as a global variable, and my local variable overrides (hides) the global variable, which is usually not what I want.
I'm not really a fan of Hungarian notation, especially not the Windows style `lpszFoo` and `hwndBar`, but the Hungarian notation was a convenience for me in Oracle PL/SQL.
I used first letter to identify scope of the variable, "l" for local, "g" for global, "a" for argument. The next letter was shorthand for the datatype of the variable, "s" for VARCHAR (string) type, "d" for DATE, "n" for NUMBER.
So, "as\_" was an argument string, "ld\_" was for a local date, etc.
Keeping track of the datatypes was important to avoid unintended implicit data conversions in SQL, and made an explicit conversion that was wrong look "wrong", e.g. there's no need for a TO\_DATE() around a "date" or a TO\_NUMBER around a number, but there is a need to cast a string to number, etc. | Just try this:
```
SELECT
PP.pid_full,
PP.description_small,
PP.project_status,
PP.est_comp_date as 'Est Comp Date',
EP.engineer
FROM
project_pid pp
JOIN
eng_project_hours EP on PP.pid_full = EP.pid_full
where ep.engineer = coalesce(engineer,"JM");
``` | MYSQL if parameter null then set | [
"",
"mysql",
"sql",
"stored-procedures",
""
] |
I have a table like
```
+---------------+
| Title         |
+---------------+
| Blogroll      |
| Bottom Menu   |
| Business      |
| Entertainment |
| extend        |
+---------------+
```
and my search criteria is like
```
WHERE title LIKE '%blogroller%'
```
obviously I will have no result here, but can I find the count of characters at which the LIKE clause stopped matching? Here, in this case, it's the character beyond position 8 that made the LIKE clause fail.
Any hint would be appreciated.
Thank you | You can do this but only with lots of manual effort:
```
select title,
(case when title like '%blogroller%' then 10
when title like '%blogrolle%' then 9
when title like '%blogroll%' then 8
. . .
else 0
end) as MatchLen
from table t
order by MatchLen desc;
```
(Note: in some versions of SQL Server, you might need a subquery to refer to `MatchLen`.) | You don't have ability to do it at once, so you have to try all options one by one, but you can significantly optimize this process - that's what SQL for.
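The CASE ladder above is tedious to write by hand, but it can be generated. Here is a hedged sketch in Python against SQLite that builds the WHEN branches from the prefixes of the search token (sample titles taken from the question; the generated SQL mirrors the ladder above):

```python
import sqlite3

token = "blogroller"
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE titles (title TEXT)")
con.executemany("INSERT INTO titles VALUES (?)",
                [("Blogroll",), ("Bottom Menu",), ("Business",), ("extend",)])

# One WHEN branch per prefix length, longest first, mirroring the CASE ladder.
whens = " ".join(f"WHEN lower(title) LIKE '%{token[:n]}%' THEN {n}"
                 for n in range(len(token), 0, -1))
sql = (f"SELECT title, CASE {whens} ELSE 0 END AS match_len "
       f"FROM titles ORDER BY match_len DESC")
best_title, best_len = con.execute(sql).fetchone()
print(best_title, best_len)  # Blogroll 8
```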
First, convert pattern into all possible patterns, then just find MAX from all possible LIKEs. Not sure if SQL will be able to really optimize it now, but maybe in future it will.
```
-- test data
DECLARE @token NVARCHAR(100)
SET @token = 'Blogroller'
DECLARE @titles TABLE (Title NVARCHAR(100))
INSERT @titles VALUES
('Blogroll'),
('Bottom Menu'),
('Business'),
('Entertainment'),
('extend')
-- solution
DECLARE @patterns TABLE (part NVARCHAR(100) PRIMARY KEY, tokenLen int)
WHILE (LEN(@token) > 0)
BEGIN
INSERT @patterns VALUES (@token, LEN(@token))
SET @token = SUBSTRING(@token, 1, LEN(@token) - 1)
END
SELECT MAX(patterns.tokenLen)
FROM @titles titles
INNER JOIN @patterns patterns ON titles.Title LIKE '%' + patterns.part + '%'
``` | Ranking with LIKE keyword in SQL Server | [
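Roughly the same pattern-table idea, runnable against SQLite from Python (SQLite's `||` concatenation takes the place of the T-SQL `'%' + part + '%'` above; titles are from the question):

```python
import sqlite3

token = "Blogroller"
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE titles (title TEXT)")
con.execute("CREATE TABLE patterns (part TEXT PRIMARY KEY, token_len INTEGER)")
con.executemany("INSERT INTO titles VALUES (?)",
                [("Blogroll",), ("Bottom Menu",), ("Business",)])
# One row per prefix of the search token, longest first.
con.executemany("INSERT INTO patterns VALUES (?, ?)",
                [(token[:n], n) for n in range(len(token), 0, -1)])

longest = con.execute("""
    SELECT MAX(p.token_len)
    FROM titles t JOIN patterns p ON t.title LIKE '%' || p.part || '%'
""").fetchone()[0]
print(longest)  # 8
```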
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I use Rails 4. I have an application where i have a many-to-many relationship :
```
class User < ActiveRecord::Base
has_many :relationshipfollows, :foreign_key => "follower_id",
:dependent => :destroy
has_many :following, :through => :relationshipfollows, :source => :followed
end
class Project < ActiveRecord::Base
has_many :relationshipfollows, :foreign_key => "followed_id",
:dependent => :destroy
has_many :followers, :through => :relationshipfollows, :source => :follower
end
class Relationshipfollow < ActiveRecord::Base
belongs_to :follower, :class_name => "User"
belongs_to :followed, :class_name => "Project"
end
```
I follow this tutorial : <http://ruby.railstutorial.org/chapters/following-users?version=3.0#top>
But now I'd like to list all projects ordered by the number of follower. Like this :
```
1. project1 | 99 followers
2. project2 | 16 followers
3. project3 | 2 followers
...
```
I'm new to rails and I guess I keep making a mistake because I try a lot of examples like this :
[Rails 3 Order By Count on has\_many :through](https://stackoverflow.com/questions/10957025/rails-3-order-by-count-on-has-many-through)
or [has\_many , through: relationship count](https://stackoverflow.com/questions/14440769/has-many-through-relationship-count)
I try this method :
`Project.joins(:relationshipfollows).group("relationshipfollows.project_id").order("count(relationshipfollows.project_id) desc")`
But i have this error : `SQLite3::SQLException: no such column: relationshipfollows.project_id: SELECT "projects".* FROM "projects" INNER JOIN "relationshipfollows" ON "relationshipfollows"."followed_id" = "projects"."id" GROUP BY relationshipfollows.project_id ORDER BY count(relationshipfollows.project_id) desc`
And I try a another method :
```
Project.joins(:relationshipfollow).select('following.*, COUNT(followers.id) AS user_count').group('project_id').order('COUNT(followers.id) DESC')
```
But I have this error : `Association named 'relationshipfollow' was not found on Project; perhaps you misspelled it?`
Could someone please help me to find the right direction how to make it all work ?
Cordially
Edit :
I think the problem its from here. When I try this :
```
Relationshipfollow.select(:followed_id, "COUNT(follower_id) AS total").group(:followed_id).order("total DESC")
```
it returns this:
> => #<ActiveRecord::Relation [#<Relationshipfollow id: nil, followed_id: 2>, #<Relationshipfollow id: nil, followed_id: 1>, #<Relationshipfollow id: nil, followed_id: 3>]>
All projects are ordered by the number of followers, and all followed\_id values (projects) are in the correct order relative to my test data. But when I join this to my model Project like this:
```
Project.joins(:relationshipfollows).select(:followed_id, "COUNT(follower_id) AS total").group(:followed_id).order("total DESC")
```
it returns a list of projects, but with the project ids set to nil:
```
=> #<ActiveRecord::Relation [#<Project id: nil>, #<Project id: nil>, #<Project id: nil>]>
``` | My rescues solution was to add a integer numberfollowers in my Project model
But with another project and the rails guides (<http://guides.rubyonrails.org/active_record_querying.html>), I finally found the answer to my request
```
Project.joins(:relationshipfollows).select('projects.*, COUNT(followed_id) as user_count').group('projects.id').order('user_count DESC')
``` | Here's what I would do:
```
#app/models/project.rb
class Project < ActiveRecord::Base
has_many :relationshipfollows, :foreign_key => "follower_id", :dependent => :destroy
has_many :followers, :through => :relationshipfollows, :source => :follower
scope :sort_by_followers, -> { joins(:followers).select("followers.*", "COUNT(followers.id) AS follower_count").group(:project_id).order("follower_count DESC") }
end
#app/controllers/projects_controller.rb
def index
@projects = Project.sort_by_followers
end
#app/views/projects/index.html.erb
<ol>
<% @projects.each_with_index do |project, i| %>
<li><%= "Project#{i} | #{project.follower_count} followers" %></li>
<% end %>
</ol>
``` | Order By Count on has_many :through | [
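For what it's worth, the underlying SQL that the accepted `joins/select/group/order` chain produces can be sketched directly, here against SQLite from Python with toy data (ids, names and follower counts are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE projects (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE relationshipfollows (follower_id INTEGER, followed_id INTEGER);
    INSERT INTO projects VALUES (1, 'project1'), (2, 'project2'), (3, 'project3');
    INSERT INTO relationshipfollows VALUES
        (10, 2), (11, 2), (12, 2), (10, 1), (10, 3), (11, 3);
""")
# Count followers per project, most-followed first.
rows = con.execute("""
    SELECT projects.name, COUNT(followed_id) AS user_count
    FROM projects JOIN relationshipfollows ON followed_id = projects.id
    GROUP BY projects.id
    ORDER BY user_count DESC
""").fetchall()
print(rows)  # [('project2', 3), ('project3', 2), ('project1', 1)]
```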
"",
"sql",
"ruby-on-rails",
"ruby",
"count",
"has-many-through",
""
] |
So I have a bit of a small issue:
```
GO
declare @datetime varchar(25);
select @datetime = getdate();
select cast(convert(varchar(8), @datetime, 112) as varchar)
GO
```
That is returning this: Apr 1 2
however this:
```
GO
declare @datetime varchar(25);
select @datetime = getdate();
select cast(convert(varchar(8), getdate(), 112) as varchar)
GO
```
returns : 20140401
(Which is what we want), so why is the declare messing it up? and how do I get the results like the second return? (This is within a Trigger btw, but this is the messed up part) | You need to change the datatype of @datetime to DATETIME
```
declare @datetime DATETIME;
select @datetime = getdate();
select cast(convert(varchar(8), @datetime, 112) as varchar)
```
There is an implicit conversion from `datetime` to `varchar` happening at `select @datetime = getdate();` (the default conversion produces a string like `Apr  1 2014  1:05PM`), and that is what results in the weird behavior.
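A rough analogue of the bug in Python: formatting an actual date/time value yields `yyyymmdd`, while truncating the already-stringified value just chops off the first characters of whatever display format it happened to get:

```python
from datetime import datetime

now = datetime(2014, 4, 1, 13, 5, 0)
as_date = now.strftime("%Y%m%d")   # format the *value*: the analogue of style 112
as_string = str(now)[:8]           # truncate the *string*: the analogue of the bug
print(as_date, as_string)  # 20140401 2014-04-
```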
```
GO
declare @datetime DATETIME;
select @datetime = getdate();
select cast(convert(varchar(8), @datetime, 112) as varchar)
GO
```
The problem results from the varchar data type and the implicit conversion that happens when you select the result of `GETDATE()` into the varchar.
"",
"sql",
"sql-server",
""
] |
I am using SQL Server 2012,
How can I achieve following(Table 2) output from table 1 with sql script/tsql?
Table 1 :Current scenario
```
--------------------------------------------------
Year ReferredEachYear ActiveEachYear
--------------------------------------------------
2014 297 179
2013 321 144
2012 354 123
2011 317 90
2010 292 72
--------------------------------------------------
```
Table 2 : Expected OutPut
```
-------------------------------------------------------------------------
Year ReferredEachYear ActiveEachYear TotalActiveInSystem
-------------------------------------------------------------------------
2014 297 179 608
2013 321 144 429
2012 354 123 285
2011 317 90 162
2010 292 72 72
--------------------------------------------------------------------------
``` | If you are using SQL Server 2012, then you can use the cumulative sum function:
```
select cs.*,
sum(cs.ActiveEachYear) over (order by cs.[year]) as TotalActiveInSystem
from CurrentScenario cs
order by cs.[year] desc;
```
This function is not available in earlier versions of SQL Server. You would have to use some other method in those versions (I would use a correlated subquery). | ```
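For the pre-2012 route mentioned above, the correlated-subquery version can be sketched like this (SQLite from Python, with the sample rows from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE current_scenario (year INTEGER, referred INTEGER, active INTEGER);
    INSERT INTO current_scenario VALUES
        (2014, 297, 179), (2013, 321, 144), (2012, 354, 123),
        (2011, 317, 90), (2010, 292, 72);
""")
# Running total of active: sum every year up to and including this one.
rows = con.execute("""
    SELECT cs.year, cs.active,
           (SELECT SUM(c2.active) FROM current_scenario c2
             WHERE c2.year <= cs.year) AS total_active
    FROM current_scenario cs
    ORDER BY cs.year DESC
""").fetchall()
print(rows)
```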
SELECT T.Year,
T.ReferredEachYear
T.ActiveEachYear
(SELECT SUM(ActiveEachYear) FROM YourTable WHERE YourTable.Year <= t.Year) AS TotalActiveinSystem
FROM yourTable T
``` | How to calculate sum of the years into a new column for Current to previouse years | [
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2012",
""
] |
I'm very new to SQL and learing it from the book "SAMS Teach Yourself SQL in 24 hours". My concern is all about joins and grouping the select output. Here is the structure of my tables:
```
CREATE TABLE ORDERS_TBL
( ORD_NUM VARCHAR(10) NOT NULL primary key,
CUST_ID VARCHAR(10) NOT NULL,
PROD_ID VARCHAR(10) NOT NULL,
QTY INTEGER NOT NULL,
ORD_DATE DATE );
CREATE TABLE PRODUCTS_TBL
( PROD_ID VARCHAR(10) NOT NULL primary key,
PROD_DESC VARCHAR(40) NOT NULL,
COST DECIMAL(6,2) NOT NULL );
```
What I would like to achive are join operation which would group by my PROD\_ID.
```
SELECT P.PROD_DESC, SUM(O.QTY)
FROM PRODUCTS_TBL AS P LEFT JOIN ORDERS_TBL AS O
ON P.PROD_ID = O.PROD_ID
GROUP BY P.PROD_ID
ORDER BY P.PROD_ID;
```
The query above does work for MySQL but not for my MS SQL environment. What does work but is annoying as i don't want to see the PROD\_ID in my output.
```
SELECT P.PROD_ID, P.PROD_DESC, SUM(O.QTY)
FROM PROCUCTS_TBL AS P LEFT JOIN ORDERS_TBL AS O
ON P.PROD_ID = O.PROD_ID
GROUP BY P.PROD_ID, P.PROD_DESC
ORDER BY P.PROD_ID;
```
Any hints on how to I get an output table with PROD\_DESC, the sum of quantity and group by PROD\_ID? | In TSQL you cannot select a column that is not included in a group by clause (never understood why in mysql you can because it does not make sense IMHO)
A dirty trick is to use `min()` or `max()` or a similar function (based on your needs) to select just one value of the non-aggregated column per group.
```
SELECT max( P.PROD_DESC), SUM(O.QTY)
FROM PRODUCTS_TBL AS P LEFT JOIN ORDERS_TBL AS O
ON P.PROD_ID = O.PROD_ID
GROUP BY P.PROD_ID
ORDER BY P.PROD_ID;
```
Anyway, if the `P.PROD_DESC` column is always the same for a given `P.PROD_ID`, it is better to include it in the `GROUP BY` clause; the same applies if you want all distinct P.PROD\_ID, P.PROD\_DESC rows.
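A small runnable sketch of the `MAX()` trick (SQLite from Python; the product rows are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE products_tbl (prod_id TEXT PRIMARY KEY, prod_desc TEXT);
    CREATE TABLE orders_tbl (prod_id TEXT, qty INTEGER);
    INSERT INTO products_tbl VALUES ('P1', 'Widget'), ('P2', 'Gadget');
    INSERT INTO orders_tbl VALUES ('P1', 3), ('P1', 4), ('P2', 5);
""")
# MAX(prod_desc) picks the (only) description per group without
# having to list it in GROUP BY.
rows = con.execute("""
    SELECT MAX(p.prod_desc) AS prod_desc, SUM(o.qty) AS total_qty
    FROM products_tbl p LEFT JOIN orders_tbl o ON p.prod_id = o.prod_id
    GROUP BY p.prod_id
    ORDER BY p.prod_id
""").fetchall()
print(rows)  # [('Widget', 7), ('Gadget', 5)]
```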
If `PROD_DESC` is unique for each `PROD_ID`, then just group by `PROD_DESC` and you will be fine:
```
SELECT P.PROD_DESC, SUM(O.QTY)
FROM PRODUCTS_TBL AS P LEFT JOIN ORDERS_TBL AS O
ON P.PROD_ID = O.PROD_ID
GROUP BY P.PROD_DESC
ORDER BY P.PROD_ID;
```
If `PROD_DESC` is not unique for each `PROD_ID`, you can group by a column that doesn't appear with no problem. This should work:
```
SELECT P.PROD_DESC, SUM(O.QTY)
FROM PROCUCTS_TBL AS P LEFT JOIN ORDERS_TBL AS O
ON P.PROD_ID = O.PROD_ID
GROUP BY P.PROD_ID, P.PROD_DESC
ORDER BY P.PROD_ID;
``` | MS SQL - Join with group by | [
"",
"sql",
"sql-server",
"group-by",
""
] |
So I am trying to filter out IDs from these 2 tables I 'union all'ed. Basically if the ID contains a certain name, I want to filter them out of the query. Here is an example
```
ID,Name
1,Mark
1,John
2,Peter
2,Matt
3,Henry
3,Matt
4,John
4,Olaf
```
So I do not want to include IDs that contain 'Matt' meaning I would like to filter out IDs 2 and 3 out completely.
using something like:
```
select *
from name.table
where name not like 'Matt'
```
only seems to filter out the row. | You would want to do something like this:
```
select * from table where id not in(
select Id from table where name like '%Matt%'
)
```
This excludes rows with name containing Matt
Exclude rows that equal Matt
```
select * from table where id not in(
select Id from table where name ='Matt'
)
``` | Use `not exists`:
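A runnable check of the `NOT IN` approach against the sample rows from the question (SQLite from Python):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE names (id INTEGER, name TEXT)")
con.executemany("INSERT INTO names VALUES (?, ?)",
    [(1, "Mark"), (1, "John"), (2, "Peter"), (2, "Matt"),
     (3, "Henry"), (3, "Matt"), (4, "John"), (4, "Olaf")])
# Drop every id that has at least one 'Matt' row.
kept_ids = sorted({r[0] for r in con.execute("""
    SELECT id FROM names
    WHERE id NOT IN (SELECT id FROM names WHERE name = 'Matt')
""")})
print(kept_ids)  # [1, 4]
```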
```
select n.*
from names n
where not exists (select 1 from names n2 where n2.id = n.id and n2.name = 'Matt')
``` | Filtering out IDs that contain a certain value in another column | [
"",
"sql",
"sql-server-2008",
""
] |
In this SQL query:
```
SELECT * FROM CallRecords
WHERE DNIS = '3216547'
ORDER BY date DESC
```
But in the WHERE statement for DNIS ='' and I am looking to take the 3216547, but have it look at another table that I have a list of different DNIS's, for example:
```
3216547
9874560
7418523
```
So normally the statement would be:
```
SELECT * FROM CallRecords
WHERE (DNIS = '3216547' OR DNIS = '9874560' DNIS = '7418523')
ORDER BY date DESC
```
This query with three is fine, but I am looking to make this where there could be 400 DNIS entries to select from. Would I be able to do a subquery in the WHERE clause or is this even possible. | ```
SELECT * FROM CallRecords
WHERE DNIS IN (SELECT DNIS FROM DNIS_TABLE)
ORDER BY date DESC
```
If you need all DNIS from the other table you could do that query. | ```
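A compact runnable sketch of the `IN (subquery)` shape (SQLite from Python; everything beyond the `DNIS` column is invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE call_records (dnis TEXT, caller TEXT);
    CREATE TABLE dnis_table (dnis TEXT);
    INSERT INTO call_records VALUES ('3216547', 'a'), ('5550000', 'b'), ('9874560', 'c');
    INSERT INTO dnis_table VALUES ('3216547'), ('9874560'), ('7418523');
""")
# Keep only the call records whose DNIS appears in the lookup table.
matched = [r[0] for r in con.execute("""
    SELECT caller FROM call_records
    WHERE dnis IN (SELECT dnis FROM dnis_table)
    ORDER BY caller
""")]
print(matched)  # ['a', 'c']
```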
SELECT *
FROM CallRecords
WHERE DNIS IN (SELECT DNIS FROM OtherTable [...])
ORDER BY date DESC
``` | SQL Query with WHERE definitions are from another table | [
"",
"sql",
"sql-server",
""
] |
I am struggling to speed this SQL query up. I have tried removing all the fields besides the two SUM() functions and the Id field but it is still incredibly slow. It is currently taking 15 seconds to run. Does anyone have any suggestions to speed this up as it is currently causing a timeout on a page in my web app. I need the fields shown so I can't really remove them but there surely has to be a way to improve this?
```
SELECT [Customer].[iCustomerID],
[Customer].[sCustomerSageCode],
[Customer].[sCustomerName],
[Customer].[sCustomerTelNo1],
SUM([InvoiceItem].[fQtyOrdered]) AS [Quantity],
SUM([InvoiceItem].[fNetAmount]) AS [Value]
FROM [dbo].[Customer]
LEFT JOIN [dbo].[CustomerAccountStatus] ON ([Customer].[iAccountStatusID] = [CustomerAccountStatus].[iAccountStatusID])
LEFT JOIN [dbo].[SalesOrder] ON ([SalesOrder].[iCustomerID] = [dbo].[Customer].[iCustomerID])
LEFT JOIN [Invoice] ON ([Invoice].[iCustomerID] = [Customer].[iCustomerID])
LEFT JOIN [dbo].[InvoiceItem] ON ([Invoice].[iInvoiceNumber] = [InvoiceItem].[iInvoiceNumber])
WHERE ([InvoiceItem].[sNominalCode] IN ('4000', '4001', '4002', '4004', '4005', '4006', '4007', '4010', '4015', '4016', '700000', '701001', '701002', '701003'))
AND( ([dbo].[SalesOrder].[dOrderDateTime] >= '2013-01-01')
OR ([dbo].[Customer].[dDateCreated] >= '2014-01-01'))
GROUP BY [Customer].[iCustomerID],[Customer].[sCustomerSageCode],[Customer].[sCustomerName], [Customer].[sCustomerTelNo1];
``` | I don't think this query is doing what you want anyway. As written, there are no relationships between the `Invoice` table and the `SalesOrder` table. This leads me to believe that it is producing a cartesian product between invoices and orders, so customers with lots of orders would be generating lots of unnecessary intermediate rows.
You can test this by removing the `SalesOrder` table from the query:
```
SELECT c.[iCustomerID], c.[sCustomerSageCode], c.[sCustomerName], c.[sCustomerTelNo1],
SUM(it.[fQtyOrdered]) AS [Quantity], SUM(it.[fNetAmount]) AS [Value]
FROM [dbo].[Customer] c LEFT JOIN
[dbo].[CustomerAccountStatus] cas
ON c.[iAccountStatusID] = cas.[iAccountStatusID] LEFT JOIN
[Invoice] i
ON (i.[iCustomerID] = c.[iCustomerID]) LEFT JOIN
[dbo].[InvoiceItem] it
ON (i.[iInvoiceNumber] = it.[iInvoiceNumber])
WHERE it.[sNominalCode] IN ('4000', '4001', '4002', '4004', '4005', '4006', '4007', '4010', '4015', '4016', '700000', '701001', '701002', '701003') AND
c.[dDateCreated] >= '2014-01-01'
GROUP BY c.[iCustomerID], c.[sCustomerSageCode], c.[sCustomerName], c.[sCustomerTelNo1];
```
If this works and you need the `SalesOrder`, then you will need to either pre-aggregate by `SalesOrder` or find better join keys.
The above query could benefit from an index on `Customer(dDateCreated, CustomerId)`. | You have a lot of LEFT JOIN
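The fan-out problem described above is easy to reproduce. With one customer, two invoice items and two unrelated orders, the join repeats every item once per order and doubles the sum (SQLite from Python; all rows are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE invoice_items (customer_id INTEGER, amount REAL);
    CREATE TABLE sales_orders (customer_id INTEGER, order_no INTEGER);
    INSERT INTO invoice_items VALUES (1, 10.0), (1, 20.0);
    INSERT INTO sales_orders VALUES (1, 100), (1, 101);
""")
correct = con.execute(
    "SELECT SUM(amount) FROM invoice_items WHERE customer_id = 1").fetchone()[0]
# Joining through an unrelated child table produces 2 items x 2 orders = 4 rows.
inflated = con.execute("""
    SELECT SUM(i.amount) FROM invoice_items i
    JOIN sales_orders s ON s.customer_id = i.customer_id
""").fetchone()[0]
print(correct, inflated)  # 30.0 60.0
```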
I don't see CustomerAccountStatus usage. Ou can exclude it
The `[InvoiceItem].[sNominalCode]` could be null in case of LEFT JOIN so add [InvoiceItem].`[sNominalCode] is not null or <THE IN CONDITION>`
Also add the `is not null` checks to other conditions | SQL query that uses a GROUP BY and IN is too slow | [
"",
"sql",
""
] |
I hope I can explain this well enough.
Say I have this table:
```
Owner
+--------+--------+
| Name | Type |
+--------+--------+
| Bob | Cat |
| Bob | Dog |
| Bob | Cow |
| Tim | Dog |
| Tim | Cat |
| Ted | Cat |
| Joe | Dog |
| Joe | Cat |
| Joe | Sheep |
+--------+--------+
```
I am trying to find everyone who has all the animals tim has (so a cat and a dog). This means Joe and Bob would satisfy this, but not Ted as he only has one type of animal Tim has
How would I go about getting this result?
So I have a table with all the types tim owns:
```
SELECT Type FROM Owner WHERE Name= 'Tim';
```
How do I get it so that only those who have both Types tim has get selected from the list of owners?
Any guidance would be appreciated, thanks in advance. | ```
select name
from owner
where type in (select distinct type from owner where name = 'tim')
group by name
having count(distinct type) = (select count(distinct type) from owner where name = 'tim')
``` | If you are just trying to get all the persons who have more animals than Tim `has` then you can do it like
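Running the accepted query against the question's sample data confirms it (note that Tim trivially matches himself, too); SQLite from Python:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE owner (name TEXT, type TEXT)")
con.executemany("INSERT INTO owner VALUES (?, ?)",
    [("Bob", "Cat"), ("Bob", "Dog"), ("Bob", "Cow"),
     ("Tim", "Dog"), ("Tim", "Cat"), ("Ted", "Cat"),
     ("Joe", "Dog"), ("Joe", "Cat"), ("Joe", "Sheep")])
# Keep owners whose count of Tim's animal types equals Tim's full count.
rows = sorted(r[0] for r in con.execute("""
    SELECT name FROM owner
    WHERE type IN (SELECT DISTINCT type FROM owner WHERE name = 'Tim')
    GROUP BY name
    HAVING COUNT(DISTINCT type) =
           (SELECT COUNT(DISTINCT type) FROM owner WHERE name = 'Tim')
"""))
print(rows)  # ['Bob', 'Joe', 'Tim']
```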
```
select Name from owners
group by Name
having count(distinct [Type]) > (select count(distinct type) from owners
where Name='tim')
``` | SQL resulting table satisfies two conditions | [
"",
"sql",
""
] |
Pretty new to SQL here - help would be much appreciated. I have a table with `Region`, `Month`, `Member ID`, and `Sales` (with multiple transactions per member). I just want to extract the top 2 members, based on sum of sales, per region, per month....so essentially:
```
Region Month MemberID Sales
-----------------------------------------
1 1/1/2013 A $200
2 2/1/2013 B $300
1 1/1/2013 A $100
1 1/1/2013 B $50
2 1/1/2013 D $500
2 2/1/2013 C $200
```
Becomes:
```
Region Month Member ID Sales
-----------------------------------------
1 1/1/2013 A $300
1 1/1/2013 B $50
2 1/1/2013 D $500
2 1/1/2013 B $200
```
Ultimately, there will be 10 regions, and I'd like to take the top 5 sales by member for each region, each month. | You can do this with `row_number()`:
```
select region, month, MemberId, sales
from (select region, month, MemberId, sum(sales) as sales,
             row_number() over (partition by region, month order by sum(sales) desc) as seqnum
from table t
group by region, month, MemberId
) t
where seqnum <= 2;
``` | If you are worried about ties (any you probably are as @Conrad Frix pointed out) you may prefer RANK() to ROW\_NUMBER().
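A runnable sketch of the same top-n-per-group idea, restructured slightly so the window function reads pre-aggregated totals (SQLite 3.25+ from Python; sample rows adapted from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region INTEGER, month TEXT, member TEXT, amount INTEGER)")
con.executemany("INSERT INTO sales VALUES (?, ?, ?, ?)",
    [(1, "2013-01", "A", 200), (1, "2013-01", "A", 100), (1, "2013-01", "B", 50),
     (2, "2013-01", "D", 500), (2, "2013-02", "B", 300), (2, "2013-02", "C", 200)])
# Aggregate per member, number the rows within each (region, month),
# then keep the top 2 per group.
rows = con.execute("""
    SELECT region, month, member, total FROM (
        SELECT region, month, member, total,
               ROW_NUMBER() OVER (PARTITION BY region, month
                                  ORDER BY total DESC) AS seqnum
        FROM (SELECT region, month, member, SUM(amount) AS total
              FROM sales GROUP BY region, month, member)
    ) WHERE seqnum <= 2
    ORDER BY region, month, total DESC
""").fetchall()
print(rows)
```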
I'll borrow sample data, use CTEs for clarity, apply my preferred formatting, and provide a [SQLFiddle](http://sqlfiddle.com/#!6/8a7dd/1).
```
CREATE TABLE MemberSales (
Region INT
,SalesMonth DATETIME
,MemberID CHAR(1)
,Sales FLOAT
);
INSERT INTO MemberSales VALUES (1, '1/1/2013', 'A', 200);
INSERT INTO MemberSales VALUES (2, '2/1/2013', 'B', 300);
INSERT INTO MemberSales VALUES (1, '1/1/2013', 'A', 100);
INSERT INTO MemberSales VALUES (1, '1/1/2013', 'C', 300);
INSERT INTO MemberSales VALUES (1, '1/1/2013', 'D', 100);
INSERT INTO MemberSales VALUES (1, '1/1/2013', 'B', 50);
INSERT INTO MemberSales VALUES (2, '1/1/2013', 'D', 500);
INSERT INTO MemberSales VALUES (2, '2/1/2013', 'C', 200);
;WITH SalesTotalByMember AS (
SELECT Region
,SalesMonth
,MemberID
,SUM(Sales) AS Sales
FROM MemberSales
GROUP BY Region
,SalesMonth
,MemberID
), Ranked AS (
SELECT Region
,SalesMonth
,MemberID
,Sales
,RANK() OVER (PARTITION BY Region, SalesMonth ORDER BY SALES DESC) rnk
FROM SalesTotalByMember
)
SELECT *
FROM Ranked
WHERE rnk <= 2
ORDER BY region
,SalesMonth
,rnk
``` | SQL Server 2012: Select Top n based on multiple criteria | [
"",
"sql",
"sql-server-2012",
""
] |
I keep getting an incorrect syntax near the word ON, this code is from a suggested change from a previous post and I am trying to find where the error is and so far I am having no luck figuring out what punctuation mark I am missing. The code is
```
SELECT AC.REG_NR, AC.DIS_NR, AC.GEMSID, AC.TMS_ID, AC.EMP_NA, AC.EMP_SEX_TYP_CD, AC.EMP_EOC_GRP_TYP_CD, AC.DIV_NR, AC.CTR_NR,
AC.JOB_CLS_CD_DSC_TE, AC.JOB_GRP_CD, AC.Job_Function, AC.Job_Group, AC.Meeting_Readiness_Rating, AC.Manager_Readiness_Rating, CD.Employee_ID,
CD.Meeting_Readiness_Rating AS Expr1, CD.Manager_Readiness_Rating AS Expr2, CD.Meeting_End_Date, CD.EmployeeFeedback,
CD.DevelopmentForEmployee1, CD.DevelopmentForEmployee2, CD.DevelopmentForEmployee3, CD.DevelopmentForEmployee4, CD.DevelopmentForEmployee5,
CD.Justification, CD.Changed, CD.Notes
FROM dbo.AC_Source AS AC INNER JOIN
(SELECT EmployeeID AS Employee_ID, MeetingReadinessLevel AS Meeting_Readiness_Rating, ManagerReadinessLevel AS Manager_Readiness_Rating,
logdate AS Meeting_End_Date, EmployeeFeedback, DevelopmentForEmployee1, DevelopmentForEmployee2, DevelopmentForEmployee3,
DevelopmentForEmployee4, DevelopmentForEmployee5, Justification, Changed, Notes
FROM (SELECT EmployeeID AS Employee_ID, MeetingReadinessLevel AS Meeting_Readiness_Rating, ManagerReadinessLevel AS Manager_Readiness_Rating,
logdate AS Meeting_End_Date, EmployeeFeedback, DevelopmentForEmployee1, DevelopmentForEmployee2, DevelopmentForEmployee3,
DevelopmentForEmployee4, DevelopmentForEmployee5, Justification, Changed, Notes, ROW_NUMBER() OVER (PARTITION BY EmployeeID ORDER BY Meeting_End_Date DESC) As RowNum
FROM (SELECT EmployeeID AS Employee_ID, MeetingReadinessLevel AS Meeting_Readiness_Rating, ManagerReadinessLevel AS Manager_Readiness_Rating,
logdate AS Meeting_End_Date, EmployeeFeedback, DevelopmentForEmployee1, DevelopmentForEmployee2, DevelopmentForEmployee3,
DevelopmentForEmployee4, DevelopmentForEmployee5, Justification, Changed, Notes
FROM dbo.AC_CDData_1
UNION ALL
SELECT Employee_ID, Meeting_Readiness_Rating, Manager_Readiness_Rating, Meeting_End_Date, '' AS EmployeeFeedback, '' AS DevelopmentForEmployee1,
'' AS DevelopmentForEmployee2, '' AS DevelopmentForEmployee3, '' AS DevelopmentForEmployee4, '' AS DevelopmentForEmployee5, '' AS Justification,
'' AS Changed, '' AS Notes
FROM dbo.TMS_Data_Latest_Career_Meeting_Rating
WHERE (Plan_Year = '2013')) AS InnerSelect) AS CD ON AC.TMS_ID = CD.Employee_ID
WHERE RowNum = 1)
ON AC.TMS_ID = CD.EMPLOYEE_ID;
``` | There's something wrong with this query (ok, nothing new..). There is no alias on the final subquery, but it seems there's more.
The simplified and properly indented form might help you spotting the problem:
```
SELECT x
FROM dbo.AC_Source AS AC
INNER JOIN (
SELECT x
FROM (
SELECT x, ROW_NUMBER() OVER (PARTITION BY EmployeeID ORDER BY Meeting_End_Date DESC) As RowNum
FROM (
SELECT x
FROM dbo.AC_CDData_1
UNION ALL
SELECT x
FROM dbo.TMS_Data_Latest_Career_Meeting_Rating
WHERE Plan_Year = '2013'
) AS InnerSelect
) AS CD ON AC.TMS_ID = CD.Employee_ID
WHERE RowNum = 1
) ON AC.TMS_ID = CD.EMPLOYEE_ID;
``` | For one thing, all subqueries need an alias. So, give the subqueries a name. I think it is something like this:
```
SELECT AC.REG_NR, AC.DIS_NR, AC.GEMSID, AC.TMS_ID, AC.EMP_NA, AC.EMP_SEX_TYP_CD,
AC.EMP_EOC_GRP_TYP_CD, AC.DIV_NR, AC.CTR_NR,
AC.JOB_CLS_CD_DSC_TE, AC.JOB_GRP_CD, AC.Job_Function, AC.Job_Group,
AC.Meeting_Readiness_Rating, AC.Manager_Readiness_Rating, CD.Employee_ID,
CD.Meeting_Readiness_Rating AS Expr1, CD.Manager_Readiness_Rating AS Expr2,
CD.Meeting_End_Date, CD.EmployeeFeedback,
CD.DevelopmentForEmployee1, CD.DevelopmentForEmployee2, CD.DevelopmentForEmployee3,
CD.DevelopmentForEmployee4, CD.DevelopmentForEmployee5,
CD.Justification, CD.Changed, CD.Notes
FROM dbo.AC_Source AS AC INNER JOIN
(SELECT EmployeeID AS Employee_ID, MeetingReadinessLevel AS Meeting_Readiness_Rating, ManagerReadinessLevel AS Manager_Readiness_Rating,
logdate AS Meeting_End_Date, EmployeeFeedback, DevelopmentForEmployee1, DevelopmentForEmployee2, DevelopmentForEmployee3,
DevelopmentForEmployee4, DevelopmentForEmployee5, Justification, Changed, Notes
FROM (SELECT EmployeeID AS Employee_ID, MeetingReadinessLevel AS Meeting_Readiness_Rating,
ManagerReadinessLevel AS Manager_Readiness_Rating,
logdate AS Meeting_End_Date, EmployeeFeedback, DevelopmentForEmployee1,
DevelopmentForEmployee2, DevelopmentForEmployee3,
DevelopmentForEmployee4, DevelopmentForEmployee5, Justification, Changed,
Notes,
ROW_NUMBER() OVER (PARTITION BY EmployeeID ORDER BY Meeting_End_Date DESC) As RowNum
FROM (SELECT EmployeeID AS Employee_ID, MeetingReadinessLevel AS Meeting_Readiness_Rating, ManagerReadinessLevel AS Manager_Readiness_Rating,
logdate AS Meeting_End_Date, EmployeeFeedback, DevelopmentForEmployee1, DevelopmentForEmployee2, DevelopmentForEmployee3,
DevelopmentForEmployee4, DevelopmentForEmployee5, Justification, Changed, Notes
FROM dbo.AC_CDData_1
UNION ALL
SELECT Employee_ID, Meeting_Readiness_Rating, Manager_Readiness_Rating, Meeting_End_Date, '' AS EmployeeFeedback, '' AS DevelopmentForEmployee1,
'' AS DevelopmentForEmployee2, '' AS DevelopmentForEmployee3, '' AS DevelopmentForEmployee4, '' AS DevelopmentForEmployee5, '' AS Justification,
'' AS Changed, '' AS Notes
FROM dbo.TMS_Data_Latest_Career_Meeting_Rating
WHERE (Plan_Year = '2013')) AS InnerSelect
) cd
) cd
ON AC.TMS_ID = CD.EMPLOYEE_ID and rownum = 1
``` | Incorrect Syntax Error Near the word ON | [
"",
"sql",
"sql-server",
""
] |
I would like to find all records in the column of one table that are not in a column of another. I can do so with this query:
```
SELECT
kywd
FROM
from_client
WHERE
kywd NOT IN
(SELECT
kywd
FROM
from_me);
```
However, I would like to extend this to allow for some pattern matching. At the very least I'd like to see if the `from_client` `kywd` appears anywhere in the `from_me` `kywd`. So, like, `'%kywd%'`. I tried ...`WHERE '%' || kywd || '%' NOT IN`... as a wild guess, and a result set was actually returned, but I suspect this syntax is just gibberish.
How can I make this select statement allow for these wildcards? | `not exists` is likely to be faster than join
```
select kywd
from from_client
where not exists (
select 1
from from_me
where position(from_client.kywd in from_me.kywd) > 0
)
``` | ```
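Here SQLite's `instr()` stands in for Postgres `position()`/`strpos()`; a runnable sketch with invented keywords:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE from_client (kywd TEXT);
    CREATE TABLE from_me (kywd TEXT);
    INSERT INTO from_client VALUES ('red'), ('blue'), ('green');
    INSERT INTO from_me VALUES ('bright red'), ('navy blue');
""")
# Keep client keywords that appear inside no from_me keyword.
unmatched = [r[0] for r in con.execute("""
    SELECT kywd FROM from_client fc
    WHERE NOT EXISTS (
        SELECT 1 FROM from_me fm WHERE instr(fm.kywd, fc.kywd) > 0)
""")]
print(unmatched)  # ['green']
```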
SELECT from_client.kywd
FROM from_client
LEFT JOIN from_me
ON from_client.kywd LIKE '%' || from_me.kywd || '%'
WHERE from_me.kywd IS NULL
``` | select record in one table that are not in another with a pattern match | [
"",
"sql",
"regex",
"postgresql",
""
] |
Is there a good way to express within SQL to select a specific row if a column is matched, otherwise to select a more general row? Something like:
```
SELECT name FROM names
WHERE (colA = @colA OR colA = 'ALL')
```
I just want it to return the one specific record if it exists, otherwise return the generic "ALL" record if it can't find a specific column match. It seems like COALESCE would be similar to what I'm looking for, but I don't know how that could work here syntactically. | @Blorgbeard answer (using top 1 and order by) is probably the best way but just to be different you could also use a sub-query and not exists:
```
SELECT name FROM names
WHERE (colA = @colA) OR (colA = 'ALL' AND NOT EXISTS(
SELECT name FROM names
WHERE colA = @colA))
```
I guess the "advantage" of this is that it uses more standard sql. | This query, although not pretty, should do what you want:
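A runnable sketch of that fallback behaviour (SQLite from Python; the rows are invented, and the bound `:v` placeholder stands in for `@colA`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE names (col_a TEXT, name TEXT)")
con.executemany("INSERT INTO names VALUES (?, ?)",
                [("ALL", "default"), ("x1", "specific")])
# The 'ALL' row only qualifies when no specific row exists.
query = """
    SELECT name FROM names
    WHERE col_a = :v
       OR (col_a = 'ALL' AND NOT EXISTS (
               SELECT 1 FROM names WHERE col_a = :v))
"""
specific = con.execute(query, {"v": "x1"}).fetchone()[0]
fallback = con.execute(query, {"v": "zz"}).fetchone()[0]
print(specific, fallback)  # specific default
```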
```
SELECT TOP 1 name FROM names
WHERE (colA = @colA OR colA = 'ALL')
ORDER BY CASE WHEN colA='ALL' THEN 1 ELSE 0 END
```
Edit: For multiple columns, I think what you would want is this:
```
SELECT TOP 1 name FROM names
WHERE (colA = @colA AND colB = @colB) OR (colA = 'ALL')
ORDER BY CASE WHEN colA='ALL' THEN 1 ELSE 0 END
```
I'm assuming that there is just one `ALL` row, so there's no need to check for `colB='ALL'`, and that the "key" for the table is `colA` and `colB` combined - so you're not interested in rows where `colA` matches but `colB` doesn't, for example. | COALESCE in Where clause? | [
"",
"sql",
"sql-server-2008",
"stored-procedures",
""
] |
Trying to use sum to show decimals, but it's converting the result to an integer. Very unsure what is wrong. Here is the sql fiddle:
<http://sqlfiddle.com/#!2/b2961/1>
SQL query:
```
Select sum(PH) as PH from googlechart;
``` | As per the comment of the user, they didn't intend to Sum the values of the field, but rather just wanted to use the values numerically. The answer previous was
```
select SUM(CAST(REPLACE(PH, ",", ".") AS DECIMAL(5,2))) from googlechart;
```
The actual answer should have been:
```
select CAST(REPLACE(PH, ",", ".") AS DECIMAL(5,2)) from googlechart;
```
The above should work, but the fiddle is down at the moment to test it. That said, you really shouldn't store something that you want to sum later on as a string of any type. It should instead be stored as a decimal or equivalent in your case, and from there you should simply convert the `.` into a `,` when outputting the data to the user. That will both speed up your queries and let you use the database's built-in functions, instead of having to make 3 separate calls to do the same thing.
That said, you also have date and time separated into two fields. In most cases you are better off storing them as a single field and doing logic on them accordingly, since a time value on its own is often of little use unless you also know the day, and you can still apply logic to just the time portion if you need to.
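A runnable version of the comma-to-dot conversion (SQLite from Python, using `REAL` in place of `DECIMAL(5,2)`; values from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE googlechart (ph TEXT)")
con.executemany("INSERT INTO googlechart VALUES (?)", [("08,11",), ("09,34",)])
# Swap the decimal comma for a dot, cast to a number, then sum.
total = con.execute("""
    SELECT SUM(CAST(REPLACE(ph, ',', '.') AS REAL)) FROM googlechart
""").fetchone()[0]
print(round(total, 2))  # 17.45
```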
```
INSERT INTO `googlechart` (`id`, `Date`, `Time`, `PH`, `Chlorine`, `Temperature`) VALUES
(7, '2014-04-01', '09:00:00', '08,11', '9.00', '12.00'),
(9, '2014-04-02', '11:00:00', '09,34', '10.34', '2.34');
```
to
```
INSERT INTO `googlechart` (`id`, `Date`, `Time`, `PH`, `Chlorine`, `Temperature`) VALUES
(7, '2014-04-01', '09:00:00', '08.11', '9.00', '12.00'),
(9, '2014-04-02', '11:00:00', '09.34', '10.34', '2.34');
``` | Sum function not returning decimals | [
"",
"mysql",
"sql",
"sum",
"decimal",
""
] |
I am having an issue. I have declared a cursor, and when I fetch it I want to print a variable (type nvarchar), but for each row I get only the first letter.
Here is my code :
```
DECLARE @loginSoftware nvarchar
DECLARE @loginId int
DECLARE @softwareId int
DECLARE @passwordSoftware nvarchar
DECLARE @SoftwareFamily nvarchar
DECLARE @SoftwareName nvarchar
DECLARE @password nvarchar
DECLARE Get_Password cursor
for (select loginId, charly.softwareId, login, password FROM REFC_SOFTWARE_LOGIN, v_charly_HOSTS_AND_SOFTS_INFOS charly
WHERE charly.softwareId = REFC_SOFTWARE_LOGIN.softwareId
AND charly.SoftwareFamily = 'DB'
and SoftwareName in ('Oracle', 'SQL Server', 'MYSQL') )
OPEN Get_Password
FETCH NEXT FROM Get_Password INTO @loginId, @softwareId, @loginSoftware, @passwordSoftware
WHILE (@@FETCH_STATUS = 0)
BEGIN
--raiserror (@passwordSoftware, 10,1) with nowait
SET @password = @passwordSoftware
PRINT @password
FETCH NEXT FROM Get_Password INTO @loginId, @softwareId, @loginSoftware, @passwordSoftware
END
CLOSE Get_Password
DEALLOCATE Get_Password
```
What do I need to change to print the entire variable's value?
Did I make any mistake?
In advance thanks for your help and your time | ```
DECLARE @loginSoftware NVARCHAR
SET @loginSoftware = 'fdjafs'
PRINT @loginSoftware
```
will give you only the first letter.
```
DECLARE @loginSoftware NVARCHAR(2000)
SET @loginSoftware = 'fdjafs'
PRINT @loginSoftware
```
will give you all of it. So basically you also have to specify the data length too. | Make your `nvarchar` variables have a length. If you don't declare them like `nvarchar(20)` then the default length is 1. | Transact SQL Print only first letter using cursor | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I want to update multiple columns of a table in DB2 with a single UPDATE statement.
Any hint or idea will be appreciated. Thanks. | The update statement in all versions of SQL looks like:
```
update table
set col1 = expr1,
col2 = expr2,
. . .
coln = exprn
where some condition
```
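For instance, a concrete version of that template (the `employees` table and its columns here are made up for illustration; `CURRENT DATE` is DB2's special register for today's date):
```
update employees
set salary = salary * 1.05,
    bonus = 500,
    updated_on = CURRENT DATE
where dept_id = 10
```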
So, the answer is that you separate the assignments using commas and don't repeat the `set` statement. | If the values came from another table,
you might want to use
```
UPDATE table1 t1
SET (col1, col2) = (
SELECT col3, col4
FROM table2 t2
WHERE t1.col8=t2.col9
)
```
Example:
```
UPDATE table1
SET (col1, col2, col3) =(
(SELECT MIN (ship_charge), MAX (ship_charge) FROM orders),
'07/01/2007'
)
WHERE col4 = 1001;
``` | How to update multiple columns in single update statement in DB2 | [
"",
"sql",
"database",
"db2",
"sql-update",
""
] |
I want to create a view between a table that has data and another table that has a flag that the field must show or not.
**TABLE\_EXAMPLE**
```
+---------+---------+---------+-----------------+
| id | field_1 | field_2 | field_3 |
+---------+---------+---------+-----------------+
| 1 | test | 500 | another content |
+---------+---------+---------+-----------------+
| 2 | blah | 800 | text_lorem |
+---------+---------+---------+-----------------+
| 3 | hi! | 100 | lorem_impsum |
+---------+---------+---------+-----------------+
```
**REFERENCE\_TABLE** (This table is joined with the other table by table\_name, field\_name,entry\_id. The other field is show/not\_show flag)
```
+--------------+------------+----------+------+
| table_name | field_name | entry_id | show |
+--------------+------------+----------+------+
| TABLE_EXAMPLE| field_1 | 1 | 0 |
+--------------+------------+----------+------+
| TABLE_EXAMPLE| field_2 | 1 | 1 |
+--------------+------------+----------+------+
| TABLE_EXAMPLE| field_3 | 1 | 0 |
+--------------+------------+----------+------+
| TABLE_EXAMPLE| field_1 | 2 | 1 |
+--------------+------------+----------+------+
| TABLE_EXAMPLE| field_2 | 2 | 0 |
+--------------+------------+----------+------+
| TABLE_EXAMPLE| field_3 | 2 | 1 |
+--------------+------------+----------+------+
| TABLE_EXAMPLE| field_1 | 3 | 1 |
+--------------+------------+----------+------+
| TABLE_EXAMPLE| field_2 | 3 | 1 |
+--------------+------------+----------+------+
| TABLE_EXAMPLE| field_3 | 3 | 0 |
+--------------+------------+----------+------+
```
Here is the resultant view. In case the flag is 0 the field content must be NULL
**RESULT\_TABLE**
```
+---------+---------+---------+-----------------+
| id | field_1 | field_2 | field_3 |
+---------+---------+---------+-----------------+
| 1 | NULL | 500 | NULL |
+---------+---------+---------+-----------------+
| 2 | blah | NULL | text_lorem |
+---------+---------+---------+-----------------+
| 3 | hi! | 100 | NULL |
+---------+---------+---------+-----------------+
```
Any idea or suggestion? I couldn't get it. | If the `REFERENCE_TABLE` has flags for all fields of each entity, then you can use this:
```
SELECT
T1.ID,
MAX(CASE WHEN T2.field_name = 'field_1' AND T2.show = 1 THEN T1.field_1 END) field_1,
MAX(CASE WHEN T2.field_name = 'field_2' AND T2.show = 1 THEN T1.field_2 END) field_2,
    MAX(CASE WHEN T2.field_name = 'field_3' AND T2.show = 1 THEN T1.field_3 END) field_3
FROM TABLE_EXAMPLE T1
JOIN REFERENCE_TABLE T2
  ON T1.id = T2.entry_id
WHERE T2.table_name = 'TABLE_EXAMPLE'
GROUP BY T1.ID
```
or use `LEFT JOIN` if you omit some flags. Then the omitted flags will be treated as 0. | Here is one method where you join to the table three times and then use a case statement to determine what the value is for each column:
```
select e.id,
(case when r1.show then e.field_1 end) as field_1,
(case when r2.show then e.field_2 end) as field_2
(case when r3.show then e.field_3 end) as field_3
from table_example e left join
reference_table r1
on r1.table_name = 'table_name' and r1.entry_id = e.id and r1.column_name = 'field_1' left join
reference_table r2
     on r2.table_name = 'table_name' and r2.entry_id = e.id and r2.column_name = 'field_2' left join
reference_table r3
on r3.table_name = 'table_name' and r3.entry_id = e.id and r3.column_name = 'field_3';
``` | How to create a Mysql View between 2 tables | [
"",
"mysql",
"sql",
"view",
"relation",
""
] |
suppose there are records as follows:
```
Employee_id, work_start_date, work_end_date
1, 01-jan-2014, 07-jan-2014
1, 03-jan-2014, 12-jan-2014
1, 23-jan-2014, 25-jan-2014
2, 15-jan-2014, 25-jan-2014
2, 07-jan-2014, 15-jan-2014
2, 09-jan-2014, 12-jan-2014
```
The requirement is to write an SQL select statement which would summarize the work days grouped by employee_id, but exclude the overlapped periods (meaning - take them into calculation only once).
The desired output would be:
```
Employee_id, worked_days
1, 13
2, 18
```
The calculations for working days in the date range are done like this:
If work\_start\_date = 5 and work\_end\_date = 9 then worked\_days = 4 (9 - 5).
I could write a pl/sql function which solves this (manually iterating over the records and doing the calculation), but I'm sure it can be done using SQL for better performance.
Can someone please point me in the right direction?
Thanks! | This is a slightly modified query from similar question:
[compute sum of values associated with overlapping date ranges](https://stackoverflow.com/questions/22232796/compute-sum-of-values-associated-with-overlapping-date-ranges/22236718#22236718)
```
SELECT "Employee_id",
SUM( "work_end_date" - "work_start_date" )
FROM(
SELECT "Employee_id",
"work_start_date" ,
lead( "work_start_date" )
over (Partition by "Employee_id"
Order by "Employee_id", "work_start_date" )
As "work_end_date"
FROM (
SELECT "Employee_id", "work_start_date"
FROM Table1
UNION
SELECT "Employee_id","work_end_date"
FROM Table1
) x
) x
WHERE EXISTS (
SELECT 1 FROM Table1 t
WHERE t."work_start_date" > x."work_end_date"
AND t."work_end_date" > x."work_start_date"
OR t."work_start_date" = x."work_start_date"
AND t."work_end_date" = x."work_end_date"
)
GROUP BY "Employee_id"
;
```
Demo: <http://sqlfiddle.com/#!4/4fcce/2> | This is a tricky problem. For instance, you can't use `lag()`, because the overlapping period may not be the "previous" one. Or different periods can start and or stop on the same day.
The idea is to reconstruct the periods. How to do this? Find the records where the periods start -- that is, there is no overlap with any other. Then use this as a flag and count this flag cumulatively to count overlapping groups. Then getting the working days is just aggregation from there:
```
with ps as (
select e.*,
(case when exists (select 1
from emps e2
where e2.employee_id = e.employee_id and
e2.work_start_date <= e.work_start_date and
e2.work_end_date >= e.work_end_date
)
then 0 else 1
            end) as IsPeriodStart
from emps e
)
select employee_id, sum(work_end_date - work_start_date) as Days_Worked
from (select employee_id, min(work_start_date) as work_start_date,
max(work_end_date) as work_end_date
from (select ps.*,
             sum(IsPeriodStart) over (partition by employee_id
order by work_start_date
) as grp
from ps
) ps
group by employee_id, grp
) ps
group by employee_id;
``` | Sum of working days with date ranges from multiple records (overlapping) | [
"",
"sql",
"oracle",
"sum",
"overlap",
""
] |
I have these statements
```
SELECT @startdate = DATEADD(wk, DATEDIFF(wk,0,GETDATE()), -7) -- Monday of previous week
SELECT @enddate = DATEADD(wk, DATEDIFF(wk,0,GETDATE()), -1) -- Sunday of previous week
```
and I want to convert @startdate and @enddate to numbers in the following format 'yyyymmdd' | ```
SELECT CONVERT(nvarchar(8), @StartDate, 112)
```
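If an actual numeric value is needed (the title asks for a number), the converted string can additionally be cast; a sketch:
```
SELECT CAST(CONVERT(nvarchar(8), @StartDate, 112) AS int)
```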
Will convert `2014-03-31` to `20140331` | The easiest way is to use `year()`, `month()`, and `day()`:
```
select @startdate = year(getdate())*10000 + month(getdate()) * 100 + day(getdate())
``` | how can I convert a date var to a number in sql? | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
Dear all and thank you in advance.
**Problem:**
Instead of having two dropdown boxes for list A and list B. I would like to combine the two lists (they do relate) and display them to the user as 1 list. Then split it back into two so that I can store the relevant info.
List 1
```
Machines
1. Machine x
2. Machine y
```
List 2 Test Types
```
1. Test Type ab
2. Test Type ac
3. Test Type ad.
```
so Machine 1 can do test type ab, and ac. Machine 2 can do test type ac and ad.
It will be stored under 3 different tables (not really but just think that it will). 2 tables will contain the list and the third will contain the relationship between the two lists. I.e. which items from list 1 pair up with which items from list 2 etc.
To the user it would be displayed as follows
```
Machine X - ab
Machine x - ac
Machine y - ac
Machine y - ad
```
The user would then select 1 from the list and I would then decode the two items selected.
My thought so far is to use bits (and/or) as required.
There will be three functions
```
public function joinAB(a as long, b as long) as long
end function
Public function getA(ab as long) as long
end function
public function getB(ab as long)as long
end function
```
So just to clarify this is not to join text together, but to join/split ID's of the individual items in these two lists.
Anyone else have any other ideas. This will be done in a legacy system (VB6) which I have inherited. My VB6 coding skills are above average.
Thank you for any help/code snippets provided or general advice.
If you need more information please let me know. | Assuming a and b are numeric variables as your 3 functions suggest, I would use the .ItemData() property of the items in the combined list just like Mark, and use division and remainder to obtain the separate parts:
```
Public Function joinAB(a As Long, b As Long) As Long
joinAB = a * 100 + b
End Function
Public Function getA(ab As Long) As Long
getA = ab \ 100
End Function
Public Function getB(ab As Long) As Long
getB = ab Mod 100
End Function
```
this assumes that b will always be less than 100, and that neither a nor b will be negative
If a and b are string variables then I would show the joined strings as the text in the combobox and split the selected text to get the separate parts
```
Public Function joinAB(a As String, b As String) As String
joinAB = a & " - " & b
End Function
Public Function getA(ab As String) As String
getA = Left$(ab, InStr(ab, " - ") - 1)
End Function
Public Function getB(ab As String) As String
getB = Mid$(ab, InStr(ab, " - ") + 3)
End Function
``` | There are probably a few solutions to this.
The simplest is that you create a unique 32-bit integer field on your join table. This can then be embedded in the ItemData property of the VB6 ListBox.
A couple of helper functions for this are:
```
Private Sub AddListBoxItem(ByRef lst as ListBox, ByVal in_nKey As Long, ByRef in_sDisplay As String)
With lst
.AddItem in_sDisplay
.ItemData(.NewIndex) = in_nKey
End With
End Sub
Private Function GetSelectedListBoxKey(ByRef in_lst As ListBox) As Long
With in_lst
GetSelectedListBoxKey = .ItemData(.ListIndex)
End With
End Function
```
As for implementing your functions, I would simply use two collection.
m\_col\_A\_B\_to\_AB would be keyed by A & "\_" & B to return AB. m\_col\_AB\_to\_A\_B would be keyed to AB to return A and B.
Helper functions would be:
```
Private Sub AddRow(ByVal in_nA As Long, ByVal in_nB As Long, ByVal in_nAB As Long)
Dim an(0 To 1) As Long
an(0) = in_nA
an(1) = in_nB
m_col_A_B_to_AB.Add an(), CStr(in_nAB)
m_col_AB_to_A_B.Add in_nAB, CStr(in_nA) & "_" & CStr(in_nB)
End Sub
Private Sub Get_A_B(ByVal in_nAB As Long, ByRef out_nA As Long, ByRef out_nB As Long)
Dim vTmp As Variant
vTmp = m_col_A_B_to_AB.Item(CStr(in_nAB))
out_nA = vTmp(0)
out_nB = vTmp(1)
End Sub
Private Function GetA(ByVal in_nAB As Long) As Long
Get_A_B in_nAB, GetA, 0&
End Function
Private Function GetB(ByVal in_nAB As Long) As Long
Get_A_B in_nAB, 0&, GetB
End Function
Private Function JoinAB(ByVal in_nA As Long, ByVal in_nB As Long) As Long
JoinAB = m_col_AB_to_A_B.Item(CStr(in_nA) & "_" & CStr(in_nB))
End Function
``` | Two lists combined into 1 dropdown box | [
"",
"sql",
"list",
"vb6",
""
] |
In a select query I calculate a field using a nested select. I would like to order the results by the calculated field (rank), however Access does not recognize the field rank. (When I run the query, Access asks for the parameter value of rank.)
```
SELECT
*,
(select count(*)
from tbl as tbl2
where tbl.customers > tbl2.customers and tbl.dept = tbl2.dept
) + 1 as rank
FROM tbl
ORDER BY rank
```
[The example query is taken from [this](https://stackoverflow.com/questions/4463116/use-access-sql-to-do-a-grouped-ranking) post] | Use a Derived table
```
SELECT * FROM
(
SELECT
*,
(select count(*)
from tbl as tbl2
where tbl.customers > tbl2.customers and tbl.dept = tbl2.dept
) + 1 as rank
FROM tbl
) as newtbl
ORDER BY rank
``` | Think you'd have to:
```
order by ((select count(*) from tbl as tbl2 where
tbl.customers > tbl2.customers and tbl.dept = tbl2.dept) + 1)
``` | MS Access: Order by calculated field (using alias) | [
"",
"sql",
"ms-access",
""
] |
I have populated a DropDownList with values from an SQL statement and it works.
```
Try
cmdStr = "SELECT [idt],[col1] FROM [test];"
Using conn As New SqlConnection(connStr)
Using cmd As New SqlCommand(cmdStr, conn)
conn.Open()
cmd.ExecuteNonQuery()
Using da As New SqlDataAdapter(cmd)
da.Fill(ds)
DropDownList1.DataSource = ds.Tables(0)
DropDownList1.DataTextField = "idt"
DropDownList1.DataValueField = "col1"
DropDownList1.DataBind()
End Using
cmd.Dispose()
conn.Close()
conn.Dispose()
End Using
End Using
Catch ex As Exception
TextBox1.Text = ex.Message
End Try
```
The problem is that the SQL list starts with idt=68 and it shows in the DropDownList as such, but in the code below DropDownList1.DataTextField returns "" instead of 68.
```
Try
cmdStr = "SELECT [datetime],[col1],[col2],[col3] FROM [test] WHERE [idt]=@idt;"
Using conn As New SqlConnection(connStr)
Using cmd As New SqlCommand(cmdStr, conn)
conn.Open()
cmd.Parameters.AddWithValue("@idt", DropDownList1.DataTextField)
cmd.ExecuteNonQuery()
...
``` | update it to use DropDownList1.SelectedItem.Text
```
Try
cmdStr = "SELECT [datetime],[col1],[col2],[col3] FROM [test] WHERE [idt]=@idt;"
Using conn As New SqlConnection(connStr)
Using cmd As New SqlCommand(cmdStr, conn)
conn.Open()
cmd.Parameters.AddWithValue("@idt", DropDownList1.SelectedItem.Text)
cmd.ExecuteNonQuery()
``` | ```
DropDownList1.DataSource = ds.Tables(0)
DropDownList1.DataTextField = "idt"
DropDownList1.DataValueField = "idt"
DropDownList1.DataBind();
```
You don't need to select the "col1" column, because you don't use it.
Then use the `SelectedValue` property:
```
cmd.Parameters.AddWithValue("@idt", DropDownList1.SelectedValue);
``` | DropDownList showing the SQL value | [
"",
"asp.net",
"sql",
""
] |
I am trying to do a decode against an Oracle server and return a fiscal year if date ranges fall in between what is listed below. I keep getting 'ORA-00907: missing right parenthesis' and I am sure it is something simple or maybe I am not using correct syntax for the date range but any help would be appreciated. Thanks!
```
SELECT ISSUE_DT,
MAX(DECODE(ISSUE_DT >= '01-JUL-11' AND ISSUE_DT <= '30-JUN-12','2012',ISSUE_DT >= '01-JUL-12' AND ISSUE_DT <= '30-JUN-13','2013',ISSUE_DT >= '01-JUL-13' AND ISSUE_DT <= '30-JUN-14','2014','NONE'))FISCAL_YEAR
FROM PS_GSU_AWD_INFO_VW
```
UPDATE
This is my current SQL statement that returns two columns, the year with quarter and total amount. I ultimately need the quarter and fiscal year in two separate places:
```
select CASE
WHEN ISSUE_DT >= '01-JUL-11' AND ISSUE_DT <= '30-SEP-11' THEN 'FY12 Q1'
WHEN ISSUE_DT >= '01-OCT-11' AND ISSUE_DT <= '31-DEC-11' THEN 'FY12 Q2'
WHEN ISSUE_DT >= '01-JAN-12' AND ISSUE_DT <= '31-MAR-12' THEN 'FY12 Q3'
WHEN ISSUE_DT >= '01-APR-12' AND ISSUE_DT <= '30-JUN-12' THEN 'FY12 Q4'
WHEN ISSUE_DT >= '01-JUL-12' AND ISSUE_DT <= '30-SEP-12' THEN 'FY13 Q1'
WHEN ISSUE_DT >= '01-OCT-12' AND ISSUE_DT <= '31-DEC-12' THEN 'FY13 Q2'
WHEN ISSUE_DT >= '01-JAN-13' AND ISSUE_DT <= '31-MAR-13' THEN 'FY13 Q3'
WHEN ISSUE_DT >= '01-APR-13' AND ISSUE_DT <= '30-JUN-13' THEN 'FY13 Q4'
WHEN ISSUE_DT >= '01-JUL-13' AND ISSUE_DT <= '30-SEP-13' THEN 'FY14 Q1'
WHEN ISSUE_DT >= '01-OCT-13' AND ISSUE_DT <= '31-DEC-13' THEN 'FY14 Q2'
WHEN ISSUE_DT >= '01-JAN-14' AND ISSUE_DT <= '31-MAR-14' THEN 'FY14 Q3'
WHEN ISSUE_DT >= '01-APR-14' AND ISSUE_DT <= '30-JUN-14' THEN 'FY14 Q4'
ELSE 'NO DATA' END AS FISCAL_QUARTER, SUM(AMOUNT) AS TOTAL_AWARDED FROM PS_GSU_AWD_INFO_VW WHERE ISSUE_DT >= '01-JUL-11'
GROUP BY CASE WHEN ISSUE_DT >= '01-JUL-11' AND ISSUE_DT <= '30-SEP-11' THEN 'FY12 Q1'
WHEN ISSUE_DT >= '01-OCT-11' AND ISSUE_DT <= '31-DEC-11' THEN 'FY12 Q2'
WHEN ISSUE_DT >= '01-JAN-12' AND ISSUE_DT <= '31-MAR-12' THEN 'FY12 Q3'
WHEN ISSUE_DT >= '01-APR-12' AND ISSUE_DT <= '30-JUN-12' THEN 'FY12 Q4'
WHEN ISSUE_DT >= '01-JUL-12' AND ISSUE_DT <= '30-SEP-12' THEN 'FY13 Q1'
WHEN ISSUE_DT >= '01-OCT-12' AND ISSUE_DT <= '31-DEC-12' THEN 'FY13 Q2'
WHEN ISSUE_DT >= '01-JAN-13' AND ISSUE_DT <= '31-MAR-13' THEN 'FY13 Q3'
WHEN ISSUE_DT >= '01-APR-13' AND ISSUE_DT <= '30-JUN-13' THEN 'FY13 Q4'
WHEN ISSUE_DT >= '01-JUL-13' AND ISSUE_DT <= '30-SEP-13' THEN 'FY14 Q1'
WHEN ISSUE_DT >= '01-OCT-13' AND ISSUE_DT <= '31-DEC-13' THEN 'FY14 Q2'
WHEN ISSUE_DT >= '01-JAN-14' AND ISSUE_DT <= '31-MAR-14' THEN 'FY14 Q3'
WHEN ISSUE_DT >= '01-APR-14' AND ISSUE_DT <= '30-JUN-14' THEN 'FY14 Q4'
ELSE 'NO DATA' END ORDER BY FISCAL_QUARTER
``` | I figured out a query that works. Not sure if this is best practice or not, but here is my code:
```
SELECT DECODE(FISCAL_QUARTER,'FY12 Q1',2012,'FY12 Q2',2012,'FY12 Q3',2012,
'FY12 Q4',2012,'FY13 Q1',2013,'FY13 Q2',2013,'FY13 Q3',2013,'FY13 Q4',2013,'FY14
Q1',2014,'FY14 Q2',2014,'FY14 Q3',2014,'FY14 Q4',2014,2015)as "FISCAL YEAR",
DECODE(FISCAL_QUARTER,'FY12 Q1','Q1','FY12 Q2','Q2','FY12 Q3','Q3','FY12 Q4'
,'Q4','FY13 Q1','Q1','FY13 Q2','Q2','FY13 Q3','Q3','FY13 Q4','Q4','FY14 Q1','Q1',
'FY14 Q2','Q2','FY14 Q3','Q3','FY14 Q4','Q4',2015)as "FISCAL QUARTER",
DECODE(FISCAL_QUARTER,'FY12 Q1',TO_CHAR(TOTAL_AWARDED),'FY12 Q2',TO_CHAR(TOTAL_AWARDED),'FY12 Q3',TO_CHAR(TOTAL_AWARDED),'FY12 Q4',TO_CHAR(TOTAL_AWARDED),'FY13 Q1',TO_CHAR(TOTAL_AWARDED),'FY13 Q2',TO_CHAR(TOTAL_AWARDED),'FY13 Q3',TO_CHAR(TOTAL_AWARDED),'FY13 Q4',TO_CHAR(TOTAL_AWARDED),'FY14 Q1',TO_CHAR(TOTAL_AWARDED),'FY14 Q2',TO_CHAR(TOTAL_AWARDED),'FY14 Q3',TO_CHAR(TOTAL_AWARDED),'FY14 Q4',TO_CHAR(TOTAL_AWARDED),'NONE')as "TOTAL AWARDED"
FROM(
select CASE
WHEN ISSUE_DT >= '01-JUL-11' AND ISSUE_DT <= '30-SEP-11' THEN 'FY12 Q1'
WHEN ISSUE_DT >= '01-OCT-11' AND ISSUE_DT <= '31-DEC-11' THEN 'FY12 Q2'
WHEN ISSUE_DT >= '01-JAN-12' AND ISSUE_DT <= '31-MAR-12' THEN 'FY12 Q3'
WHEN ISSUE_DT >= '01-APR-12' AND ISSUE_DT <= '30-JUN-12' THEN 'FY12 Q4'
WHEN ISSUE_DT >= '01-JUL-12' AND ISSUE_DT <= '30-SEP-12' THEN 'FY13 Q1'
WHEN ISSUE_DT >= '01-OCT-12' AND ISSUE_DT <= '31-DEC-12' THEN 'FY13 Q2'
WHEN ISSUE_DT >= '01-JAN-13' AND ISSUE_DT <= '31-MAR-13' THEN 'FY13 Q3'
WHEN ISSUE_DT >= '01-APR-13' AND ISSUE_DT <= '30-JUN-13' THEN 'FY13 Q4'
WHEN ISSUE_DT >= '01-JUL-13' AND ISSUE_DT <= '30-SEP-13' THEN 'FY14 Q1'
WHEN ISSUE_DT >= '01-OCT-13' AND ISSUE_DT <= '31-DEC-13' THEN 'FY14 Q2'
WHEN ISSUE_DT >= '01-JAN-14' AND ISSUE_DT <= '31-MAR-14' THEN 'FY14 Q3'
WHEN ISSUE_DT >= '01-APR-14' AND ISSUE_DT <= '30-JUN-14' THEN 'FY14 Q4'
ELSE 'NO DATA' END AS FISCAL_QUARTER, SUM(AMOUNT) AS TOTAL_AWARDED FROM PS_GSU_AWD_INFO_VW WHERE ISSUE_DT >= '01-JUL-11'
GROUP BY CASE WHEN ISSUE_DT >= '01-JUL-11' AND ISSUE_DT <= '30-SEP-11' THEN 'FY12 Q1'
WHEN ISSUE_DT >= '01-OCT-11' AND ISSUE_DT <= '31-DEC-11' THEN 'FY12 Q2'
WHEN ISSUE_DT >= '01-JAN-12' AND ISSUE_DT <= '31-MAR-12' THEN 'FY12 Q3'
WHEN ISSUE_DT >= '01-APR-12' AND ISSUE_DT <= '30-JUN-12' THEN 'FY12 Q4'
WHEN ISSUE_DT >= '01-JUL-12' AND ISSUE_DT <= '30-SEP-12' THEN 'FY13 Q1'
WHEN ISSUE_DT >= '01-OCT-12' AND ISSUE_DT <= '31-DEC-12' THEN 'FY13 Q2'
WHEN ISSUE_DT >= '01-JAN-13' AND ISSUE_DT <= '31-MAR-13' THEN 'FY13 Q3'
WHEN ISSUE_DT >= '01-APR-13' AND ISSUE_DT <= '30-JUN-13' THEN 'FY13 Q4'
WHEN ISSUE_DT >= '01-JUL-13' AND ISSUE_DT <= '30-SEP-13' THEN 'FY14 Q1'
WHEN ISSUE_DT >= '01-OCT-13' AND ISSUE_DT <= '31-DEC-13' THEN 'FY14 Q2'
WHEN ISSUE_DT >= '01-JAN-14' AND ISSUE_DT <= '31-MAR-14' THEN 'FY14 Q3'
WHEN ISSUE_DT >= '01-APR-14' AND ISSUE_DT <= '30-JUN-14' THEN 'FY14 Q4'
ELSE 'NO DATA' END ORDER BY FISCAL_QUARTER)
``` | You cannot use `DECODE` to test anything other than equality. You can, however, use the much more robust and much more standard `CASE` function
```
CASE WHEN issue_dt >= date '2011-07-01' AND issue_dt <= date '2012-06-30'
THEN '2012'
WHEN issue_dt >= date '2012-07-01' AND issue_dt <= date '2013-06-30'
THEN '2013'
WHEN issue_dt >= date '2013-07-01' AND issue_dt <= date '2014-06-30'
THEN '2014'
ELSE 'NONE'
END
```
It probably makes sense, though, to do this via a calculation rather than hard-coding every year's range
```
to_char( issue_dt + interval '6' month, 'yyyy' )
``` | How do I decode a row of data using a date range as my parameter | [
"",
"sql",
"oracle",
"syntax-error",
"peoplesoft",
""
] |
I recently upgraded from MySQL Workbench 5.2.47 up to 6.1.
After making changes to a table via the GUI grid interface on the old version, I'd hit the 'Apply' button and get a popup confirming the change. This popup had the SQL code that would run in order to make the change that I made via the UI interface. I would then confirm the action and get a success or failure popup. If an error occurred, I could see the detailed error in order to easily troubleshoot the problem.
With the new version, I make a change in the GUI interface and hit the 'Apply' button. No confirmation popup, it just executes. The result is a small snippet in output window. This is fine unless there's an error, in which case it has a very generic message. "1 error(s) saving changes to table XYZ". I have not found how to access the detailed error message, either by right clicking the row or looking in the preferences.
So, how do I get the popup confirmation with SQL query and error messaging to work in the new version or, at the very least, be able to see detailed SQL query error details? | It is indeed a bug :-(
In the MySQL bug database:
<http://bugs.mysql.com/bug.php?id=72169>
<http://bugs.mysql.com/bug.php?id=72155> | Functionality restored in CE release: 6.1.7
<http://dev.mysql.com/downloads/file.php?id=452662> | MySQL Workbench: How can I see the detailed SQL query and errors when I use the "Apply" button? (confirmation popup) | [
"",
"mysql",
"sql",
"mysql-workbench",
""
] |
Select value from CODE row if results of both criteria are equal.
Example of a condition:
> DAYS = 30, RATE 2
> AND
> DAYS = 60, RATE 0
Table
```
ID | CODE| DAYS| RATE|
1 | 01 | 30 | 2 |
2 | 05 | 20 | 3 |
3 | 01 | 60 | 0 |
4 | 06 | 30 | 3 |
...
```
First part is
```
SELECT * FROM Tablename WHERE (DAYS = 30 AND RATE = 2)
```
But how do I combine it with the second condition - encapsulated somehow?
Result should be `01`
Any suggestions? thx in advance. | You need a self-join:
```
SELECT DISTINCT t1.code
FROM Tablename t1
INNER JOIN Tablename t2
ON t1.code = t2.code
WHERE t1.days = 30 AND t1.rate = 2
AND t2.days = 60 AND t2.rate = 0
``` | ```
SELECT DISTINCT CODE FROM Tablename
WHERE (DAYS = 30 AND RATE = 2)
OR (DAYS = 60 AND RATE = 0)
``` | select one value if result of both criteria is equal | [
"",
"sql",
"select",
""
] |
I am developing a program that uses a MySQL database to store data. I have a table (simplified here):
```
+---------+------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+---------+------------------+------+-----+---------+----------------+
| dataId | int(10) unsigned | NO | PRI | NULL | auto_increment |
| someNum | tinyint(4) | NO | | 0 | |
+---------+------------------+------+-----+---------+----------------+
```
Now, I update the table with a query like this one.
```
UPDATE table SET someNum=someNum+2 WHERE dataId=78225;
```
The ID and the amount `someNum` changes come dynamically from code.
Now, what I'd like to do is limit `someNum`'s value to between -3 and 3, particularly in that `UPDATE`. It isn't required, my software can handle it being outside that range, but the data would be clearer with that limit. If `someNum+2` would be over 3, I'd just like to set it to 3.
Numeric `MIN`/`MAX` would make it easy:
```
UPDATE table SET someNum=MAX(-3,MIN(3,someNum+2)) WHERE dataId=78225;
```
I looked at the docs [here](https://dev.mysql.com/doc/refman/5.0/en/numeric-functions.html), but there seems to be no `MIN`/`MAX` for numbers. `MIN` and `MAX` are found [here](https://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html), but they don't seem to be the right ones for this.
What would be the best way (if any) to implement such a limit in MySQL (*not* the code that calls MySQL)? | First way: use [`LEAST()`](https://dev.mysql.com/doc/refman/5.0/en/comparison-operators.html#function_least) and [`GREATEST()`](https://dev.mysql.com/doc/refman/5.0/en/comparison-operators.html#function_greatest):
```
UPDATE t SET someNum=GREATEST(-3,LEAST(3,someNum+2)) WHERE dataId=78225;
```
This is the most easy way because you'll store all the logic inside one `UPDATE` query.
Second way: create [trigger](http://dev.mysql.com/doc/refman/5.5/en/create-trigger.html):
```
DELIMITER //
CREATE TRIGGER catCheck BEFORE UPDATE ON t
FOR EACH ROW
BEGIN
IF NEW.someNum<-3 THEN
    SET NEW.someNum=-3;
END IF;
IF NEW.someNum>3 THEN
    SET NEW.someNum=3;
END IF;
END;//
DELIMITER ;
```
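To clamp values on inserts as well, a companion `BEFORE INSERT` trigger with the same body can be added; a sketch under the same assumptions (the trigger name is arbitrary):
```
DELIMITER //
CREATE TRIGGER catCheckIns BEFORE INSERT ON t
FOR EACH ROW
BEGIN
    IF NEW.someNum < -3 THEN
        SET NEW.someNum = -3;
    END IF;
    IF NEW.someNum > 3 THEN
        SET NEW.someNum = 3;
    END IF;
END;//
DELIMITER ;
```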
you can also replace `IF` with `CASE`, but I left the two separate constraints for `-3` and `3`. The benefit here is that the DBMS will handle your data by itself, so you'll be able to pass data as it is and not worry about ranges. But there's a weakness too: while in the first case you can just change the query text to adjust the desired range, in the second case you'll have to re-create the trigger if you want to change the constraints (so, less flexibility).
Also you may want to check your data not only on `UPDATE` statements, but on `INSERT` too. | use [GREATEST](https://dev.mysql.com/doc/refman/5.0/en/comparison-operators.html#function_greatest) and [LEAST](https://dev.mysql.com/doc/refman/5.0/en/comparison-operators.html#function_least) instead of MAX and MIN
You could also use a CASE WHEN
```
update table
set someNum = CASE WHEN SomeNum +2 > 3 THEN 3
WHEN SomeNum +2 < -3 THEN -3
ELSE someNum + 2
           END
``` | Limiting a field's value in MySQL? | [
"",
"mysql",
"sql",
"max",
"min",
""
] |
Question - let's say I have 2 tables.
Table 1 - name is permission\_list, columns are ID (unique ID), col\_ID, user\_ID
Table 2 - name is list\_entries, Columns are ID (unique ID), title, description, status
I want to select all the rows from table 2 that have a status of 'public', as well as all the rows from table 2 whose ID shows up in table 1 (under the column col_ID) AND where the user_ID in table 1 matches a certain value. So: anything public, or anything that this specific user has listed under the permissions table. This query should also remove duplicates - in case the user gets a public entry listed in their permission_list, it wouldn't show up twice.
Hope that makes sense! | Here you go:
```
SELECT DISTINCT table2.* from table2
LEFT JOIN table1 ON table1.col_ID = table2.ID
WHERE status='public'
OR user_ID='someuser';
``` | You need to get some education on JOIN for your first thing, and the second thing is called DISTINCT.
Start here... <https://www.google.com/>
You have not specified your join condition so we can't give you code samples really. Also the way you worded your question, I'm not entirely sure you don't want a UNION. Read up on those concepts and come back here when you can improve the question. | MYSQL - QUERY FROM TWO TABLES | [
"",
"mysql",
"sql",
"select",
"join",
""
] |
I'm creating a table in Oracle 11g like this:
```
CREATE TABLE EXAMPLE (
ID VARCHAR2(10) PRIMARY KEY,
NAME VARCHAR2(100),
SHORT VARCHAR2(50),
CURRENCY CHAR(3)
);
```
Is it possible to create a **foreign key constraint** or even a **check constraint** on `CURRENCY` to a built-in Oracle table that contains the [ISO currencies](http://en.wikipedia.org/wiki/ISO_4217)?
Not having a great understanding of databases, I am also open to other solutions that might be out there; however, I do not want to maintain my own table for this. If it's too much work, I'll live with user errors.
Thanks. | Note: Edited to include @A.B.Cade's suggestion.
Unfortunately you can't directly add a constraint, as the currencies data is available through a system view. You can, however, create your own table with that information and then create the foreign key. Here's an example of how you can do it.
```
CREATE TABLE currencies (
country VARCHAR2(30) PRIMARY KEY,
currency VARCHAR2(3) NOT NULL
);
INSERT INTO currencies
SELECT value country, utl_i18n.get_default_iso_currency(value) currency
FROM v$nls_valid_values
WHERE parameter = 'TERRITORY';
CREATE INDEX currencies_iso_idx ON currencies(currency) COMPUTE STATISTICS;
ALTER TABLE example ADD CONSTRAINT example_currency_fk FOREIGN KEY (currency)
REFERENCES currencies(currency);
```
The example above includes an index on the currency value, as I suspect that is what you'll be querying on. | You can get a list of ISO currencies from a built-in view in Oracle:
```
select utl_i18n.GET_DEFAULT_ISO_CURRENCY(value) iso_cur
from v$nls_valid_values
where parameter = 'TERRITORY'
```
But as Nuno Guerreiro said, you'll need to create a table from these results and add a foreign key to the new table. | Does oracle provide a built-in currency table for me to use as constraints? | [
"",
"sql",
"oracle",
"oracle11g",
"constraints",
""
] |
I want to count the rows for each state (0, 1, 2) in separate columns, showing 0 when a state has no matching rows.
My tables:
```
|policies|
|id| |client| |policy_business_unit_id| |cia_ensure_id| |state|
1 MATT 1 1 0
2 STEVE 2 1 0
3 BILL 3 2 1
4 LARRY 4 2 1
5 MESSI 1 1 1
6 DROGBA 1 1 1
|policy_business_units|
|id| |name| |comercial_area_id|
1 LIFE 2
2 ROB 1
3 SECURE 2
4 ACCIDENT 1
|comercial_areas|
|id| |name|
1 BANK
2 HOSPITAL
|cia_ensures|
|id| |name|
1 SPRINT
2 APPLE
```
Here is the information:
```
http://sqlfiddle.com/#!2/935e78/4
```
I'm trying to sum states = 0,1,2 in each column:
```
SELECT
IF( p.state =0, COUNT( p.state ) , 0 ) AS state_0,
IF( p.state =1, COUNT( p.state ) , 0 ) AS state_1,
IF( p.state =2, COUNT( p.state ) , 0 ) AS state_2,
p.policy_business_unit_id AS UNITS,
p.cia_ensure_id AS CIAS
FROM policies p
WHERE p.policy_business_unit_id IN ( 1 )
AND p.cia_ensure_id =1
GROUP BY p.state
```
This is the result:
```
STATE_0 STATE_1 STATE_2 UNITS CIAS
1 0 0 1 1
0 2 0 1 1
```
What can I do to get this result?
```
STATE_0 STATE_1 STATE_2 UNITS CIAS
1 2 0 1 1
```
I would appreciate any kind of help.
Thanks. | If you need to group all values for state = 0 or 1 or 2 together, then your query must be a little different. Try this.
```
SELECT
SUM(CASE WHEN p.state = 0 THEN 1 ELSE 0 END) AS state_0,
SUM(CASE WHEN p.state = 1 THEN 1 ELSE 0 END) AS state_1,
SUM(CASE WHEN p.state = 2 THEN 1 ELSE 0 END) AS state_2,
p.policy_business_unit_id AS UNITS,
p.cia_ensure_id AS CIAS
FROM policies p
WHERE p.policy_business_unit_id IN ( 1 )
AND p.cia_ensure_id =1
GROUP BY CASE WHEN p.state IN (0,1,2) THEN 1 ELSE 0 END
```
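The conditional-aggregation pattern above can be checked quickly outside MySQL — a minimal, runnable sketch using Python's built-in SQLite, with the table and sample data mirroring the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE policies (id INTEGER, policy_business_unit_id INTEGER,"
             " cia_ensure_id INTEGER, state INTEGER)")
# same rows as the question's policies table (client names omitted)
conn.executemany("INSERT INTO policies VALUES (?,?,?,?)",
                 [(1, 1, 1, 0), (2, 2, 1, 0), (3, 3, 2, 1),
                  (4, 4, 2, 1), (5, 1, 1, 1), (6, 1, 1, 1)])

row = conn.execute("""
    SELECT SUM(CASE WHEN state = 0 THEN 1 ELSE 0 END) AS state_0,
           SUM(CASE WHEN state = 1 THEN 1 ELSE 0 END) AS state_1,
           SUM(CASE WHEN state = 2 THEN 1 ELSE 0 END) AS state_2,
           policy_business_unit_id, cia_ensure_id
    FROM policies
    WHERE policy_business_unit_id = 1 AND cia_ensure_id = 1
""").fetchone()
print(row)  # (1, 2, 0, 1, 1)
```

This reproduces the single desired result row from the question.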
Here is a working [**SQL FIDDLE**](http://sqlfiddle.com/#!2/935e78/27) that shows how this works. | replace this
```
SELECT
IF( p.state =0, COUNT( p.state ) , 0 ) AS state_0,
IF( p.state =1, COUNT( p.state ) , 0 ) AS state_1,
IF( p.state =2, COUNT( p.state ) , 0 ) AS state_2,
```
by
```
SELECT
sum(CASE WHEN p.state =0 THEN 1 ELSE 0 end) AS state_0,
sum(CASE WHEN p.state =1 THEN 1 ELSE 0 END) AS state_1,
sum(CASE WHEN p.state =2 THEN 1 ELSE 0 END) AS state_2,
``` | How can sum group values? | [
"",
"mysql",
"sql",
""
] |
In my project, we are connecting to MS Access from MS Excel. In Excel we have one dropdown (medd) with Yes or No values. At the same time, in MS Access, we have a column Med_D with Y or N values.
So, if the user selects 'Yes' from the dropdown, it should fetch the MS Access rows where Med_D = Y.
If the user selects 'No' from the dropdown, it should fetch the MS Access rows where Med_D in ('Y','N').
I want to check this condition in a single query. We can't use CASE in Access; I tried IIF & Switch but failed. | ```
Sub MakeDynamicSQL()
    Const SQL As String = "select * from YourAccessTable"
    Dim sWhereClause As String

    Select Case Cells(1, 1).Text ' cell A1 contains Y or N
    Case "Y": sWhereClause = " where Med_D = 'Y'"
    Case "N": sWhereClause = " where Med_D in ('Y','N')"
    Case Else: MsgBox "error": Exit Sub
    End Select

    Debug.Print SQL & sWhereClause ' send this to Access
End Sub
``` | use this:
```
Select * from "Table Name" Where Med_D = "xxxxx"
```
xxxxx is the dropdown with Y and N
I'm guessing you're using an ADODB connection? | How to assign the value to the Access column in the where clause? | [
"",
"sql",
"vba",
"excel",
""
] |
So I have an app I'm working on with two models. Users can create objects with certain traits
I don't want a user to be able to find their own objects.
How do I limit the query?
```
def self.search(search)
where(['primarybreed LIKE ?', "%#{search}%"]).where(:user_id != current_user.id)
end
```
current_user breaks the above code. Including the sessions helper also breaks the code.
edit:
controller
```
@dog = Dog.search(params[:search],current_user.id).sample if params[:search].present?
```
query:
```
def self.search(search, current_user_id)
where(['primarybreed LIKE and user_id != ?', "%#{search}%", current_user_id])
end
```
yields the following error:
```
wrong number of bind variables (2 for 1) in: primarybreed LIKE and user_id != ?
``` | For Rails 4.x, use this:
```
def self.search(search, current_user_id)
where('primarybreed LIKE ?', "%#{search}%").where.not(user_id: current_user_id)
end
```
and you can call it as:
```
@dog = Dog.search(params[:search],current_user.id).sample if params[:search].present?
``` | May be this will solve your problem
```
def self.search(search, current_user_id)
where(['primarybreed LIKE and user_id != ?', "%#{search}%",current_user_id])
end
``` | Exclude User's Built Objects From Search | [
"",
"html",
"sql",
"ruby-on-rails",
"ruby",
""
] |
So I have a table with users
A few users are "owners"
A user can have an owner
Now I want to select the owner from the user (Both in the same table)
Example:
Owner:
```
user_id: 34
user_name: hiimowner
user_owner_id: NULL
```
User:
```
user_id: 63
user_name: randomperson
user_owner_id: 34
```
Now I'm looking for the proper SQL query to be able to find the user\_name of the owner
So, for example, requesting the owner of "randomperson" should return "hiimowner". | ```
SELECT * FROM table AS t1
JOIN table AS t2
ON t1.user_id = t2.user_owner_id
WHERE t2.user_id = 63
``` | try this,
```
Select u.*,o.user_name from User as u
left join User o on o.user_id = u.user_owner_id
```
If you want to filter a specific record, then add a where clause:
```
Select u.*,o.user_name from User as u
left join User o on o.user_id = u.user_owner_id
where u.user_id = 63
``` | Select another user from same table | [
"",
"sql",
""
] |
I got a MySQL database with approx. 1 TB of data. Table fuelinjection_stroke has approx. 1.000.000.000 rows. DBID is the primary key that is automatically incremented by one with each insert.
I am trying to delete the first 1.000.000 rows using a very simple statement:
```
Delete from fuelinjection_stroke where DBID < 1000000;
```
This query is taking very long (>24h) on my dedicated 8core Xeon Server (32 GB Memory, SAS Storage).
Any idea whether the process can be sped up? | I believe that your table becomes locked. I've faced the same problem and found out that I could delete 10k records pretty fast. So you might want to write a simple script/program which will delete records in chunks.
```
DELETE FROM fuelinjection_stroke WHERE DBID < 1000000 LIMIT 10000;
```
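Wrapped in a driver loop, the chunked approach looks roughly like this — a runnable sketch where Python's built-in SQLite stands in for MySQL (an assumption purely for illustration; stock SQLite lacks `DELETE ... LIMIT`, so the chunk is emulated with an `IN (SELECT ... LIMIT ?)` subquery, and against MySQL you would run the `LIMIT 10000` statement through your connector instead):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fuelinjection_stroke (DBID INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO fuelinjection_stroke VALUES (?)",
                 [(i,) for i in range(1, 5001)])  # toy data: DBID 1..5000

CHUNK = 1000
deleted = 0
while True:
    # one small statement per chunk keeps lock times short
    cur = conn.execute(
        "DELETE FROM fuelinjection_stroke WHERE DBID IN "
        "(SELECT DBID FROM fuelinjection_stroke WHERE DBID < 3000 LIMIT ?)",
        (CHUNK,))
    conn.commit()
    if cur.rowcount == 0:
        break
    deleted += cur.rowcount

print(deleted)  # 2999
```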
And keep executing it until it deletes everything | **Are you space deprived? Is down time impossible?**
If not, you could fit in a new INT column length 1 and default it to 1 for "active" (or whatever your terminology is) and 0 for "inactive". Actually, you could use 0 through 9 as 10 different states if necessary.
Adding this new column will take a looooooooong time, but once it's over, your UPDATEs should be lightning fast as long as you do it off the PRIMARY (as you do with your DELETE) and you don't index this new column.
The reason why InnoDB takes so long to DELETE on such a massive table as yours is because of the cluster index. It physically orders your table based upon your PRIMARY (or first UNIQUE it finds...or whatever it feels like if it can't find PRIMARY or UNIQUE), so when you pull out one row, it now reorders your ENTIRE table physically on the disk for speed and defragmentation. So it's not the DELETE that's taking so long. It's the physical reordering after that row is removed.
When you create a new INT column with a default value, the space will be filled, so when you UPDATE it, there's no need for physical reordering across your huge table.
I'm not sure exactly what your schema is exactly, but using a column for a row's state is much faster than DELETEing; however, it will take more space.
Try setting values:
```
innodb_flush_log_at_trx_commit=2
innodb_flush_method=O_DIRECT (for non-windows machine)
innodb_buffer_pool_size=25GB (currently it is close to 21GB)
innodb_doublewrite=0
innodb_support_xa=0
innodb_thread_concurrency=0...1000 (try different values, beginning with 200)
```
References:
[MySQL docs for description of different variables.](http://dev.mysql.com/doc/refman/5.0/en/innodb-parameters.html)
[MySQL Server Setting Tuning](http://www.mysqlperformanceblog.com/files/presentations/UC2007-MySQL-Server-Settings-Tuning.pdf)
[MySQL Performance Optimization basics](http://www.mysqlperformanceblog.com/2007/11/01/innodb-performance-optimization-basics/)
<http://bugs.mysql.com/bug.php?id=28382> | mysql - Deleting Rows from InnoDB is very slow | [
"",
"mysql",
"sql",
"innodb",
""
] |
I have posted something similar before, but I am approaching this from a different direction now so I opened a new question. I hope this is OK.
I have been working with a CTE that creates a sum of charges based on a Parent Charge. The SQL and details can be seen here:
[CTE Index recommendations on multiple keyed table](https://stackoverflow.com/questions/22802484/cte-index-recommendations-on-multiple-keyed-table)
I don't think I am missing anything on the CTE, but I am getting a problem when I use it with a big table of data (3.5 million rows).
The table `tblChargeShare` contains some other information that I need, such as an `InvoiceID`, so I placed my CTE in a view `vwChargeShareSubCharges` and joined it to the table.
The query:
```
Select t.* from vwChargeShareSubCharges t
inner join
tblChargeShare s
on t.CustomerID = s.CustomerID
and t.MasterChargeID = s.ChargeID
Where s.ChargeID = 1291094
```
Returns a result in a few ms.
The query:
```
Select ChargeID from tblChargeShare Where InvoiceID = 1045854
```
Returns 1 row:
```
1291094
```
But the query:
```
Select t.* from vwChargeShareSubCharges t
inner join
tblChargeShare s
on t.CustomerID = s.CustomerID
and t.MasterChargeID = s.ChargeID
Where InvoiceID = 1045854
```
Takes 2-3 minutes to run.
I saved the execution plans and loaded them into SQL Sentry. The Tree for the fast query looks like this:

The plan from the slow query is:

I have tried reindexing, running the query through tuning advisor and various combinations of sub queries. Whenever the join contains anything other than the PK, the query is slow.
I had a similar question here:
[SQL Server Query time out depending on Where Clause](https://stackoverflow.com/questions/12362221/sql-server-query-time-out-depending-on-where-clause)
Which used functions to do the summimg of child rows instead of a CTE. This is the rewrite using CTE to try and avoid the same problem I am now experiencing. I have read the responses in that answer but I am none the wiser - I read some information about hints and parameters but I can't make it work. I had thought that rewriting using a CTE would solve my problem. The query is fast when running on a tblCharge with a few thousand rows.
Tested in both SQL 2008 R2 and SQL 2012
Edit:
I have condensed the query into a single statement, but the same issue persists:
```
WITH RCTE AS
(
SELECT ParentChargeId, s.ChargeID, 1 AS Lvl, ISNULL(TotalAmount, 0) as TotalAmount, ISNULL(s.TaxAmount, 0) as TaxAmount,
ISNULL(s.DiscountAmount, 0) as DiscountAmount, s.CustomerID, c.ChargeID as MasterChargeID
from tblCharge c inner join tblChargeShare s
on c.ChargeID = s.ChargeID Where s.ChargeShareStatusID < 3 and ParentChargeID is NULL
UNION ALL
SELECT c.ParentChargeID, c.ChargeID, Lvl+1 AS Lvl, ISNULL(s.TotalAmount, 0), ISNULL(s.TaxAmount, 0), ISNULL(s.DiscountAmount, 0) , s.CustomerID
, rc.MasterChargeID
from tblCharge c inner join tblChargeShare s
on c.ChargeID = s.ChargeID
INNER JOIN RCTE rc ON c.PArentChargeID = rc.ChargeID and s.CustomerID = rc.CustomerID Where s.ChargeShareStatusID < 3
)
Select MasterChargeID as ChargeID, rcte.CustomerID, Sum(rcte.TotalAmount) as TotalCharged, Sum(rcte.TaxAmount) as TotalTax, Sum(rcte.DiscountAmount) as TotalDiscount
from RCTE inner join tblChargeShare s on rcte.ChargeID = s.ChargeID and RCTE.CustomerID = s.CustomerID
Where InvoiceID = 1045854
Group by MasterChargeID, rcte.CustomerID
GO
```
---
Edit:
More playing around, I just don't understand this.
This query is instant (2ms):
```
Select t.* from
vwChargeShareSubCharges t
Where t.MasterChargeID = 1291094
```
Whereas this takes 3 minutes:
```
DECLARE @ChargeID int = 1291094
Select t.* from
vwChargeShareSubCharges t
Where t.MasterChargeID = @ChargeID
```
Even if I put heaps of numbers in an "In", the query is still instant:
```
Where t.MasterChargeID in (1291090, 1291091, 1291092, 1291093, 1291094, 1291095, 1291096, 1291097, 1291098, 1291099, 129109)
```
---
Edit 2:
I can replicate this from scratch using this example data:
I have created some dummy data to replicate the issue. It isn't so significant, as I only added 100,000 rows, but the bad execution plan still happens (run in SQLCMD mode):
```
CREATE TABLE [tblChargeTest](
[ChargeID] [int] IDENTITY(1,1) NOT NULL,
[ParentChargeID] [int] NULL,
[TotalAmount] [money] NULL,
[TaxAmount] [money] NULL,
[DiscountAmount] [money] NULL,
[InvoiceID] [int] NULL,
CONSTRAINT [PK_tblChargeTest] PRIMARY KEY CLUSTERED
(
[ChargeID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
Insert into tblChargeTest
(discountAmount, TotalAmount, TaxAmount)
Select ABS(CHECKSUM(NewId())) % 10, ABS(CHECKSUM(NewId())) % 100, ABS(CHECKSUM(NewId())) % 10
GO 100000
Update tblChargeTest
Set ParentChargeID = (ABS(CHECKSUM(NewId())) % 60000) + 20000
Where ChargeID = (ABS(CHECKSUM(NewId())) % 20000)
GO 5000
CREATE VIEW [vwChargeShareSubCharges] AS
WITH RCTE AS
(
SELECT ParentChargeId, ChargeID, 1 AS Lvl, ISNULL(TotalAmount, 0) as TotalAmount, ISNULL(TaxAmount, 0) as TaxAmount,
ISNULL(DiscountAmount, 0) as DiscountAmount, ChargeID as MasterChargeID
FROM tblChargeTest Where ParentChargeID is NULL
UNION ALL
SELECT rh.ParentChargeID, rh.ChargeID, Lvl+1 AS Lvl, ISNULL(rh.TotalAmount, 0), ISNULL(rh.TaxAmount, 0), ISNULL(rh.DiscountAmount, 0)
, rc.MasterChargeID
FROM tblChargeTest rh
INNER JOIN RCTE rc ON rh.PArentChargeID = rc.ChargeID --and rh.CustomerID = rc.CustomerID
)
Select MasterChargeID, ParentChargeID, ChargeID, TotalAmount, TaxAmount, DiscountAmount , Lvl
FROM RCTE r
GO
```
Then run these two queries:
```
--Slow Query:
Declare @ChargeID int = 60900
Select *
from [vwChargeShareSubCharges]
Where MasterChargeID = @ChargeID
--Fast Query:
Select *
from [vwChargeShareSubCharges]
Where MasterChargeID = 60900
``` | The best SQL Server can do for you here is to push the filter on `ChargeID` down into the anchor part of the recursive CTE inside the view. That allows a seek to find the only row you need to build the hierarchy from. When you provide the parameter as a constant value SQL Server can make that optimization (using a rule called `SelOnIterator`, for those who are interested in that sort of thing):

When you use a local variable it can not do this, so the predicate on `ChargeID` gets stuck outside the view (which builds the full hierarchy starting from all `NULL` ids):

One way to get the optimal plan when using a variable is to force the optimizer to compile a fresh plan on every execution. The resulting plan is then tailored to the specific value in the variable at execution time. This is achieved by adding an `OPTION (RECOMPILE)` query hint:
```
Declare @ChargeID int = 60900;
-- Produces a fast execution plan, at the cost of a compile on every execution
Select *
from [vwChargeShareSubCharges]
Where MasterChargeID = @ChargeID
OPTION (RECOMPILE);
```
A second option is to change the view into an inline table function. This allows you to specify the position of the filtering predicate explicitly:
```
CREATE FUNCTION [dbo].[udfChargeShareSubCharges]
(
@ChargeID int
)
RETURNS TABLE AS RETURN
(
WITH RCTE AS
(
SELECT ParentChargeID, ChargeID, 1 AS Lvl, ISNULL(TotalAmount, 0) as TotalAmount, ISNULL(TaxAmount, 0) as TaxAmount,
ISNULL(DiscountAmount, 0) as DiscountAmount, ChargeID as MasterChargeID
FROM tblChargeTest
Where ParentChargeID is NULL
AND ChargeID = @ChargeID -- Filter placed here explicitly
UNION ALL
SELECT rh.ParentChargeID, rh.ChargeID, Lvl+1 AS Lvl, ISNULL(rh.TotalAmount, 0), ISNULL(rh.TaxAmount, 0), ISNULL(rh.DiscountAmount, 0)
, rc.MasterChargeID
FROM tblChargeTest rh
INNER JOIN RCTE rc ON rh.ParentChargeID = rc.ChargeID --and rh.CustomerID = rc.CustomerID
)
Select MasterChargeID, ParentChargeID, ChargeID, TotalAmount, TaxAmount, DiscountAmount , Lvl
FROM RCTE r
)
```
Use it like this:
```
Declare @ChargeID int = 60900
select *
from dbo.udfChargeShareSubCharges(@ChargeID)
```
The query can also benefit from an index on `ParentChargeID`.
```
create index ix_ParentChargeID on tblChargeTest(ParentChargeID)
```
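The effect of filtering inside the anchor member — which is what the inline function above makes explicit — can be reproduced with any engine that supports recursive CTEs. A small sketch using Python's built-in SQLite, with a toy parent/child table invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblChargeTest (ChargeID INTEGER PRIMARY KEY,"
             " ParentChargeID INTEGER, TotalAmount REAL)")
conn.executemany("INSERT INTO tblChargeTest VALUES (?,?,?)",
                 [(1, None, 10.0), (2, 1, 5.0), (3, 2, 2.5),
                  (4, None, 7.0), (5, 4, 1.0)])

rows = conn.execute("""
    WITH RECURSIVE rcte(MasterChargeID, ChargeID, Lvl, TotalAmount) AS (
        SELECT ChargeID, ChargeID, 1, TotalAmount
        FROM tblChargeTest
        WHERE ParentChargeID IS NULL
          AND ChargeID = ?        -- filter in the anchor: only one hierarchy is built
        UNION ALL
        SELECT r.MasterChargeID, c.ChargeID, r.Lvl + 1, c.TotalAmount
        FROM tblChargeTest c
        JOIN rcte r ON c.ParentChargeID = r.ChargeID
    )
    SELECT ChargeID, Lvl FROM rcte ORDER BY Lvl
""", (1,)).fetchall()
print(rows)  # [(1, 1), (2, 2), (3, 3)]
```

Because the predicate sits in the anchor, the recursion only ever walks the one requested tree — the analogue of the seek the optimizer produces for the constant-value query.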
Here is another answer about a similar optimization rule in a similar scenario.
[Optimizing Execution Plans for Parameterized T-SQL Queries Containing Window Functions](https://stackoverflow.com/questions/13635531/optimizing-execution-plans-for-parameterized-t-sql-queries-containing-window-fun/15304023#15304023) | Next to get to a solution, I would recommend to SELECT INTO the CTE into e temp table and join from there. From personal experience joining with CTE my query was returning for 5mins while simply inserting the data generated by CTE into a temp table brought it down to just 4secs. I was actually joining two CTEs together but I guess this would apply to all long running queries when a CTE is joined to a LONG table (especially outer joins).
```
--temp tables if needed to work with intermediate values
If object_id('tempdb..#p') is not null
drop table #p
;WITH cte as (
select * from t1
)
select *
into #p
from cte
--then use the temp table as you would normally use the CTE
select * from #p
``` | CTE very slow when Joined | [
"",
"sql",
"sql-server-2008-r2",
"common-table-expression",
""
] |
I have the following table structure
```
id dateChanged changeType changeID
```
which has the following data
```
1 2014-04-01 memEdit 205
2 2014-04-01 memEdit 205
3 2014-03-31 memEdit 1
4 2014-04-01 memEdit 1
5 2014-04-01 memEdit 3
```
Now I want to get the latest record for each change ID. So I'd get the records with IDs 2, 4, and 5.
I've tried grouping them, but it's giving me the first of each set (so I'm getting 1, 3, 5). | ### 1. OLD not working query
```
SELECT * FROM `your_table`
GROUP BY `changeID`
ORDER BY `id` DESC
```
### 2. Working query
The problem is with MySQL's non-standards craziness.
MySQL does not force you to `GROUP BY` every column that you include in the `SELECT` list. As a result, if you only `GROUP BY` one column but return 4 columns in total, there is no guarantee that the other column values will belong to the grouped `changeID` record that is returned. If the column is not in a `GROUP BY` MySQL chooses what value should be returned.
Using the subquery will guarantee that the correct record values are returned every time.
The correct and [working query](http://sqlfiddle.com/#!2/ab1e9/10/0) would be:
```
SELECT `t`.* FROM(
SELECT * FROM `smdbs_app`
ORDER BY `id` DESC
) `t`
GROUP BY `changeID`
ORDER BY `id`
``` | This can be done with a simple subquery:
```
select * from table t
where t.dateChanged =
(select max(dateChanged)
from table t2
where t2.id = t.id)
;
```
Let me know if this helps! | get latest record in group | [
"",
"mysql",
"sql",
""
] |
I have two tables, users and contestants. I'm trying to select the max contestant ID that has a profile picture(which is on the user table)
Heres my terrible SQL:
```
SELECT u.thumbnail, u.id FROM users AS u
INNER JOIN
(
SELECT c.id, c.user_id FROM contestants AS c
WHERE u.id = c.users_id
AND c.id = (select max(c.id))
) WHERE u.thumbnail IS NOT NULL
```
The error currently is: #1248 - Every derived table must have its own alias.
This confuses me since Users has an alias of u, and contestants has an alias of c..
What am I doing wrong here? I'm guessing a lot so some help would be really appreciated! | Whenever you are performing a join operation, you are actually joining two tables. The subquery you wrote here, for instance, is working as a separate table. Hence, you have to use an alias for this table. That's the reason behind your error message.
Your query:
```
SELECT u.thumbnail, u.id FROM users AS u
INNER JOIN
(
SELECT c.id, c.user_id FROM contestants AS c
WHERE u.id = c.users_id
AND c.id = (select max(c.id))
) WHERE u.thumbnail IS NOT NULL
```
It should contain an alias for the subquery:
```
SELECT c.id, c.user_id FROM contestants AS c
WHERE u.id = c.users_id
AND c.id = (select max(c.id))
```
Let's say, it's T.
So, your query now becomes:
```
SELECT u.thumbnail, u.id FROM users AS u
INNER JOIN
(
SELECT c.id, c.user_id FROM contestants AS c
WHERE u.id = c.users_id
AND c.id = (select max(c.id))
) AS T
WHERE u.thumbnail IS NOT NULL
```
But what you are trying to achieve, can actually be done in a neater way:
```
SELECT u.thumbnail, u.id, max(c.id)
FROM users as u
LEFT JOIN contestants as c
on u.id = c.user_id
WHERE u.thumbnail IS NOT NULL
```
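The core of this join-and-aggregate approach — the max contestant ID among users that have a thumbnail — can be sanity-checked with a few rows. A sketch using Python's built-in SQLite, reduced to just the `MAX` so the result is deterministic (the sample data is invented here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, thumbnail TEXT)")
conn.execute("CREATE TABLE contestants (id INTEGER PRIMARY KEY, user_id INTEGER)")
conn.executemany("INSERT INTO users VALUES (?,?)",
                 [(1, "a.png"), (2, None), (3, "c.png")])
conn.executemany("INSERT INTO contestants VALUES (?,?)",
                 [(10, 1), (11, 1), (12, 2), (13, 3)])

# highest contestant id whose user has a profile picture
row = conn.execute("""
    SELECT MAX(c.id)
    FROM users AS u
    LEFT JOIN contestants AS c ON u.id = c.user_id
    WHERE u.thumbnail IS NOT NULL
""").fetchone()
print(row)  # (13,)
```

Contestant 12 belongs to the user without a thumbnail, so it is correctly excluded.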
Why make all the fuss when you have a better and neater approach at your disposal? | try this:
```
SELECT u.thumbnail, u.id
FROM users AS u
INNER JOIN
(
SELECT c.id, c.user_id FROM contestants AS c
WHERE u.id = c.users_id
AND c.id = (select max(c.id))
)A
WHERE u.thumbnail IS NOT NULL
``` | SQL Join with MAX(). | [
"",
"mysql",
"sql",
"inner-join",
""
] |
I have a select statement which displays a list of companies.
```
SELECT distinct [Company]
FROM [Records]
```
How can I add the entry "ALL" as the first item on the returned list? | Use `union all` and `order by`:
```
select company
from ((select distinct company, 1 as ordering from records)
union all
(select 'ALL', 0)
) t
order by ordering;
```
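The ordering trick can be exercised directly — a runnable sketch using Python's built-in SQLite, with a secondary sort key added here only to make the example deterministic (the original orders by `ordering` alone):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (Company TEXT)")
conn.executemany("INSERT INTO records VALUES (?)",
                 [("Beta",), ("Acme",), ("Acme",)])

companies = [r[0] for r in conn.execute("""
    SELECT company FROM (
        SELECT DISTINCT Company AS company, 1 AS ordering FROM records
        UNION ALL
        SELECT 'ALL', 0
    ) t
    ORDER BY ordering, company
""")]
print(companies)  # ['ALL', 'Acme', 'Beta']
```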
In practice, the following would seem to work:
```
select 'ALL' as company
union all
select distinct company from records;
```
However, SQL Server does not guarantee that these are executed in order. In practice, I have never found a case where this statement would not put `ALL` first, but it is not guaranteed as far as I know. | You can use `UNION ALL` and add order:
```
SELECT [Company] FROM
(SELECT 'All' as [Company], 0 as RecordOrder
UNION ALL
SELECT distinct [Company], 1 as RecordOrder
FROM [Records]) X
ORDER BY RecordOrder
``` | SQL add ALL to select statement | [
"",
"sql",
"sql-server-2008",
""
] |
Here's my MySQL query and what it returns:
```
SELECT email, COUNT(*) AS num
FROM collectors_users
WHERE subscribed != 'NO'
AND lastLogin IS NULL
GROUP BY email
ORDER BY dateadded DESC;
```

I only want to return results where num > 1. I tried to change my query like this but it doesn't work saying that `num` is not a recognized column:
```
SELECT email, COUNT(*) AS num
FROM collectors_users
WHERE subscribed != 'NO'
AND lastLogin IS NULL
AND num > 1
GROUP BY email
ORDER BY dateadded DESC;
```
How can I return results where num > 1? | After the GROUP BY clause, and before the ORDER BY clause add this:
```
HAVING COUNT(*) > 1
```
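Plugged into a cut-down version of the original query, the filter does exactly what's wanted — a runnable sketch using Python's built-in SQLite (the `dateadded` ordering column is omitted to keep the sample table small):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE collectors_users (email TEXT, subscribed TEXT, lastLogin TEXT)")
conn.executemany("INSERT INTO collectors_users VALUES (?,?,?)",
                 [("a@x.com", "YES", None), ("a@x.com", "YES", None),
                  ("b@x.com", "YES", None), ("c@x.com", "NO", None)])

rows = conn.execute("""
    SELECT email, COUNT(*) AS num
    FROM collectors_users
    WHERE subscribed != 'NO' AND lastLogin IS NULL
    GROUP BY email
    HAVING COUNT(*) > 1
""").fetchall()
print(rows)  # [('a@x.com', 2)]
```

Only the duplicated address survives the `HAVING` filter.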
The `HAVING` clause gets applied nearly last in the execution plan, after all of the rows have been prepared, prior to the `LIMIT` clause. This is most useful for conditions that can't be checked while rows are being accessed, but only after the rows have been accessed, such as an aggregate function like a COUNT(\*), though it can be used for non-aggregates. | Use `HAVING` clause. Or another way:
```
SELECT * FROM
(SELECT email, COUNT(*) AS num
FROM collectors_users
WHERE subscribed != 'NO'
AND lastLogin IS NULL
GROUP BY email
ORDER BY dateadded DESC;
) T
WHERE num>1
``` | How do I show all results where the GROUP BY count is greater than 1 in MySQL? | [
"",
"mysql",
"sql",
"count",
"group-by",
""
] |
I have a table look like this :
```
acronym | word
FCN | FCN
FCN | Fourth Corner Neurosurgical Associates
FHS | FHS
HW | HW
```
As you see, some acronyms have matching words and some don't. I want to keep the acronyms that have matching words. For the acronyms that don't have a matching word, I'd like to keep the acronym itself. I expect the result table to look like:
```
acronym | word
FCN | Fourth Corner Neurosurgical Associates
FHS | FHS
HW | HW
```
I cannot think of a way to accomplish this yet. Probably grouping by "acronym" and choosing "word", but what algorithm can decide to remove "FCN" or "Fourth Corner Neurosurgical Associates" | Assuming there can't be any 'bad' words:
```
DELETE myTable
FROM myTable del
WHERE [acronym] = [word]
AND EXISTS ( SELECT *
FROM myTable lw -- Longer Word
WHERE lw.[acronym] = del.[acronym]
AND Len(lw.[word]) > Len(lw.[acronym]) )
```
Or do you want to avoid deleting `SQL|SQL` when there is a 'bad' other (longer) record that reads eg. `SQL|Strange Things Happen` ?
Rereading the question I'm now in doubt if you really want to `DELETE` those records, or simply want to `SELECT` from it with the bespoken records filtered out. In the latter case you'd have to use (including mellamokb's advice)
```
SELECT [acronym], [word]
FROM myTable mt
WHERE [acronym] <> [word]
OR NOT EXISTS ( SELECT *
FROM myTable lw
WHERE lw.[acronym] = mt.[acronym]
AND lw.[word] <> lw.[acronym] )
``` | Depending on your data, you may be able to use a combination of Distinct and a Case statement
```
select distinct acronym,
(case acronym when word then word else acronym end) as Abbr
from acronyms
``` | sql server remove duplicate acronym | [
"",
"sql",
"sql-server",
""
] |
1. Is it possible to do something like the following with SQL, not PL/pgSQL (note if it's only possible with PL/pgSQL, then how)?
```
IF password = 'swordfish' THEN
SELECT a, b, c FROM users;
ELSE
SELECT -1; -- unauthorized error code
END IF;
```
2. Ideally, could I wrap the above in a function with `TRUE` being an argument?
3. Rather, is it possible to set the [command status string](http://www.postgresql.org/docs/9.3/static/rules-status.html) to -1?
I'm asking this because I want the query to return an error code, like -1, if someone tries to get a list of all the users with the wrong password. This is for a web app with user accounts that each have a password. So, this is not something I want to manage with database roles/permissions. | Algorithm
1. Select `1` into `a` (authorized) if we find a `user_id_1`-`session_id` match.
2. Select `0, NULL, NULL` into `u` (unauthorized) if we didn't find a match in step 1.
3. Select `user_id, body, sent` into `s` (select) if we did find a match in step 1.
4. Union `u` and `s`.
Code
```
-- List messages between two users with `user_id_1`, `session_id`, `user_id_2`
CREATE FUNCTION messages(bigint, uuid, bigint) RETURNS TABLE(i bigint, b text, s double precision) AS
$$
WITH a AS (
SELECT 1
FROM sessions
WHERE user_id = $1
AND id = $2
), u AS (
SELECT 0, NULL::text, NULL::double precision
WHERE NOT EXISTS (SELECT 1 FROM a)
), s AS (
SELECT user_id, body, trunc(EXTRACT(EPOCH FROM sent))
FROM messages
WHERE EXISTS (SELECT 1 FROM a)
AND chat_id = pair($1, $3)
LIMIT 20
)
SELECT * FROM u UNION ALL SELECT * FROM s;
$$
LANGUAGE SQL STABLE;
``` | The PL/pgsql function below returns the `messages` sent between `user_id` & `with_user_id` if the `user_id`:`key` pair is authorized, as determined by the user-defined function (UDF) `user_auth`. Otherwise, it returns one row with `from = -1` . The other UDF, `pair`, is a [unique unordered pairing function](http://www.mattdipasquale.com/blog/2014/03/09/unique-unordered-pairing-function/) that, given two user IDs, returns the `chat_id` to which the messages belong.
```
--- Arguments: user_id, key, with_user_id
CREATE FUNCTION messages(bigint, uuid, bigint)
RETURNS TABLE(from bigint, body text, sent double precision) AS $$
BEGIN
IF user_auth($1, $2) THEN
RETURN QUERY SELECT from, body, trunc(EXTRACT(EPOCH FROM sent))
FROM messages WHERE chat_id = pair($1, $3);
ELSE
i := -1;
RETURN NEXT;
END IF;
END;
$$ LANGUAGE plpgsql STABLE;
```
I don't know how to translate this to an SQL function or whether that would be better. | PostgreSQL: CASE: SELECT FROM two different tables | [
"",
"sql",
"postgresql",
"select",
"case",
"create-function",
""
] |
I have a SQL problem. I have a table where a user gets a row for every experience they complete. The schema looks similar to this fiddle: <http://sqlfiddle.com/#!2/5d6a87/4>
I am trying to write a query that lists every user that has expid 1-5. So in my example it would list userids 1, 2, and 4. Since userid 3 does not have 5 rows, one for each experience, that user shouldn't be listed. | What you would like to use is "group by"
the query would look like this:
```
select userid
from some_table
group by userid
having count(*)>=5
```
you can also be creative and force 5 different expids by
```
select userid
from some_table
group by userid
having count(distinct expid)>=5
```
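The distinct-count variant can be verified with a few rows — a minimal sketch using Python's built-in SQLite (the table name `some_table` is the answer's placeholder; the sample data adds a duplicate expid for one user to show why `DISTINCT` matters):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE some_table (userid INTEGER, expid INTEGER)")
data = [(1, e) for e in (1, 2, 3, 4, 5)] \
     + [(2, e) for e in (1, 2, 3, 4, 5)] \
     + [(3, e) for e in (1, 2, 3)] \
     + [(4, e) for e in (1, 1, 2, 3, 4, 5)]  # userid 4 has a duplicate expid 1
conn.executemany("INSERT INTO some_table VALUES (?,?)", data)

users = [r[0] for r in conn.execute("""
    SELECT userid FROM some_table
    GROUP BY userid
    HAVING COUNT(DISTINCT expid) >= 5
    ORDER BY userid
""")]
print(users)  # [1, 2, 4]
```

This matches the question's expected user list (1, 2, and 4), while userid 3 is excluded.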
light reading about group by:
<http://www.w3schools.com/sql/sql_groupby.asp>
Good luck! | Here
```
SELECT * FROM some_table GROUP BY UserID having count(*)>= 5;
```
The way I see it, it should be like this, but your question is not 100% clear, since you said 5 rows for every ExpID.
```
SELECT * FROM some_table GROUP BY UserID, ExpID having count(*)>= 5;
``` | mysql query that a user has a row for every experience id | [
"",
"mysql",
"sql",
"validation",
""
] |
I created a dynamic stored procedure that starts as follows.
I can save this without errors but when I execute it it seems it doesn't recognise the @temp variable as it throws the following error: "Must declare the table variable "@temp"."
Could this be because of wrong quotes / escaping, and if so, how do I have to change this to get it right? (I am pretty new to SQL, so some of the quotes could be wrong or missing.)
**My SP (first part):**
```
@selection nvarchar(100)
AS
BEGIN
SET NOCOUNT ON;
BEGIN
DECLARE @temp AS TABLE
(
ranking int,
item nvarchar(100),
groupCount int,
groupName nvarchar(100)
)
DECLARE @sql nvarchar(max)
SET @sql = '
INSERT INTO @temp
(
ranking,
item,
groupCount,
groupName
)
SELECT RANK() OVER(ORDER BY COUNT(*) desc, policy) [Rank],
' + @selection + ',
COUNT(*) AS groupCount,
''currentMonth'' AS groupName
FROM Log_PE
WHERE CONVERT(DATE, dateEsc, 120) >= CONVERT(DATE, CONVERT(VARCHAR(6), GETDATE(), 112) + ''01'', 112)
GROUP BY ' + @selection + '
ORDER BY groupCount desc, ' + @selection + '
...
```
Many thanks in advance for any help with this, Mike. | As I have already mentioned in the comments section, dynamic SQL has its own scope; any variable declared outside of that scope isn't visible to the dynamic SQL. You have to declare the variable inside your dynamic SQL, something like the following:
```
SET @sql = N' DECLARE @temp AS TABLE
(
ranking int,
item nvarchar(100),
groupCount int,
groupName nvarchar(100)
)
INSERT INTO @temp
(
ranking,
item,
groupCount,
groupName
)
SELECT RANK() OVER(ORDER BY COUNT(*) desc, policy) [Rank],
' + @selection + ',
COUNT(*) AS groupCount,
''currentMonth'' AS groupName
FROM Log_PE
WHERE CONVERT(DATE, dateEsc, 120) >= CONVERT(DATE, CONVERT(VARCHAR(6), GETDATE(), 112) + ''01'', 112)
GROUP BY ' + @selection + '
ORDER BY groupCount desc, ' + @selection + '
``` | Hope you are doing fine,
Well, I think you may reconsider using a table variable, because SQL won't understand what the @temp variable means when you execute EXEC(@sql). I recommend using a temp table instead of a table variable. Here's the new code:
```
BEGIN
SET NOCOUNT ON;
BEGIN
if object_id('temp') is not null
drop table temp
create table temp
(
ranking int,
item nvarchar(100),
groupCount int,
groupName nvarchar(100)
)
DECLARE @sql nvarchar(max)
SET @sql = '
INSERT INTO temp
(
ranking,
item,
groupCount,
groupName
)
SELECT RANK() OVER(ORDER BY COUNT(*) desc, policy) [Rank],
' + @selection + ',
COUNT(*) AS groupCount,
''currentMonth'' AS groupName
FROM Log_PE
WHERE CONVERT(DATE, dateEsc, 120) >= CONVERT(DATE, CONVERT(VARCHAR(6), GETDATE(), 112) + ''01'', 112)
GROUP BY ' + @selection + '
ORDER BY groupCount desc, ' + @selection + '
...
if object_id('temp') is not null
drop table temp
```
I hope this will help you | SQL Server: issue with declaring variable in dynamic SQL | [
"",
"sql",
"sql-server",
"select",
"dynamic",
"dynamic-sql",
""
] |
I am trying to calculate Month over Month % change on data rows. For example my current output is:
```
DataDate |LocationId|Payment|MoM [Current placeholder column in script]
12-1-2013|LocationA |$5.00 |
1-1-2014 |LocationA |$10.00 |
2-1-2014 |LocationA |$100.00|
12-1-2013|LocationB |$50.00 |
1-1-2014 |LocationB |$25.00 |
2-1-2014 |LocationB |$50.00 |
```
I am pasting the results into Excel and then calculating the MoM by using the following formula:
((CurrentDataDate Payment/PreviousDataDate Payment)-1)
I cannot figure out where to even begin trying to accomplish this, so I can't provide any coding from what I have tried... I have read about and attempted a correlated scalar query used to calculate running totals and tried to alter it to accomplish this... no dice... I tried with a Join and a subquery, but I will admit my subquery abilities are less than adequate.
The code used to call this info is:
```
Declare @BeginDate as DateTime
Declare @EndDate as DateTime
Set @BeginDate = '12-01-2013'
Set @EndDate = '02-01-2014'
Select DataDate,LocationId,Payment,0 as MoM
From dbo.mytableview
Where DataMonth between @BeginDate and @EndDate
```
Desired output is:
```
DataDate |LocationId|Payment|MoM
12-1-2013|LocationA |$5.00 |
1-1-2014 |LocationA |$10.00 |1.0 [or 100%]
2-1-2014 |LocationA |$100.00|9.0 [or 900%]
12-1-2013|LocationB |$50.00 |
1-1-2014 |LocationB |$25.00 |-.50 [or -50%]
2-1-2014 |LocationB |$50.00 |1.0 [or 100%]
```
I am using Microsoft SQLServer 2008 R2. I also have/and can use the 2012 version if that is needed. | This works on SQL Server 2012:
```
with x as (
select datadate, locationid, payment,
lag(payment) over(partition by locationid order by datadate) as prev_payment
from table
)
select *, (payment/prev_payment)-1
from x
``` | Although dean's solution is better, I just wanted to also post a solution for people that don't have SQL Server 2012 for completeness' sake (and since I had already started on it before dean posted his).
This can be accomplished using Common Table Expressions and the `Row_Number()` function:
```
WITH CTE AS
(
SELECT Row_Number() OVER (PARTITION BY locationid ORDER BY datadate) AS RN, datadate, locationid, payment
FROM table
)
SELECT
CTE2.*,
(CTE2.payment / CTE1.payment) - 1 AS MOM
FROM
CTE AS CTE1 RIGHT OUTER JOIN
CTE AS CTE2
ON
CTE1.RN = CTE2.RN-1
AND
CTE2.locationid = CTE1.locationid
ORDER BY
locationid
``` | Trying to Calculate Month over Month percentage increase/decrease | [
"",
"sql",
"sql-server",
"excel",
"sum",
"calculated-field",
""
] |
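The month-over-month idea from the entry above can be sketched end to end. The snippet below is a hedged, runnable sketch using Python's `sqlite3` (chosen so it can be executed anywhere; `LAG` needs SQL Server 2012+, so this version uses a correlated subquery for the previous month, which pre-2012 engines also accept). Table and column names follow the question; the data is the question's sample data.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE payments (datadate TEXT, locationid TEXT, payment REAL);
INSERT INTO payments VALUES
 ('2013-12-01','LocationA',5.0),  ('2014-01-01','LocationA',10.0),
 ('2014-02-01','LocationA',100.0),('2013-12-01','LocationB',50.0),
 ('2014-01-01','LocationB',25.0), ('2014-02-01','LocationB',50.0);
""")
# (current / previous) - 1 per location; the first month has no previous
# row, so the subquery yields NULL and the whole expression stays NULL.
rows = db.execute("""
SELECT datadate, locationid, payment,
       payment / (SELECT p.payment FROM payments p
                  WHERE p.locationid = o.locationid
                    AND p.datadate  < o.datadate
                  ORDER BY p.datadate DESC LIMIT 1) - 1 AS mom
FROM payments o
ORDER BY locationid, datadate
""").fetchall()
for r in rows:
    print(r)  # e.g. ('2014-01-01', 'LocationA', 10.0, 1.0)
```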
```
SELECT .... ColumnNames ...
INTO [FOUND_DUPLICATES]
FROM [FIND_DUPLICATES] AS FD
WHERE FD.[Contract No] IN
(SELECT [Contract No],
[Vehicle Identity number (VIN)],
COUNT(*) AS Anzahl
FROM FIND_DUPLICATES
GROUP BY
[Contract No],
[Vehicle Identity number (VIN)]
HAVING COUNT(*) >1)
```
Here you can see what I want :)
Find duplicates and copy them to another table.
But with this code I get an error:
> Only one expression can be specified in the select list when the
> subquery is not introduced with EXISTS.
The SELECT statement to find the duplicates works very well, but I have a problem selecting and copying only the duplicates to the new table with SELECT INTO.
I hope you can understand what I want; any help is appreciated :)
//EDIT:
I'm using SQL Server 2008 R2 | Adding another subquery should work:
```
Select .... ColumnNames ...
INTO [FOUND_DUPLICATES]
FROM [FIND_DUPLICATES]
AS FD
where FD.[Contract No]
IN ( Select [Contract No] from (Select
[Contract No],
[Vehicle Identity number (VIN)],
COUNT(*) AS Anzahl
from FIND_DUPLICATES
group by
[Contract No],
[Vehicle Identity number (VIN)]
having count(*) >1)x)
``` | You can use this:
```
SELECT .... ColumnNames ...
INTO [FOUND_DUPLICATES]
FROM [FIND_DUPLICATES] AS FD
WHERE FD.[Contract No] IN
(
SELECT
[Contract No]
FROM FIND_DUPLICATES
GROUP BY
[Contract No],
[Vehicle Identity number (VIN)]
HAVING COUNT(*) >1
)
```
or use correlated subquery:
```
SELECT .... ColumnNames ...
INTO [FOUND_DUPLICATES]
FROM [FIND_DUPLICATES] AS FD
WHERE EXISTS
(
SELECT 1
FROM FIND_DUPLICATES FD1
WHERE FD.[Contract No] = FD1.[Contract No]
GROUP BY
FD1.[Contract No],
FD1.[Vehicle Identity number (VIN)]
HAVING COUNT(*) >1
)
``` | Select into Table from Table2 where column in (Subquery) | [
"",
"sql",
"sql-server",
"select",
"select-into",
""
] |
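The pattern from the entry above (find duplicate key combinations, then copy those rows into a new table) can be sketched in Python's `sqlite3`. This is a hedged sketch with invented sample data; `SELECT ... INTO` is T-SQL, so SQLite's `CREATE TABLE ... AS SELECT` stands in for it, and the column names are shortened versions of the question's.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE find_duplicates (contract_no TEXT, vin TEXT);
INSERT INTO find_duplicates VALUES
 ('C1','V1'), ('C1','V1'), ('C2','V2'),
 ('C3','V3'), ('C3','V3'), ('C3','V3');
""")
# The inner SELECT returns one column only (the fix from the answers),
# so it is legal inside IN; it lists keys whose (contract, vin) repeats.
db.execute("""
CREATE TABLE found_duplicates AS
SELECT *
FROM find_duplicates
WHERE contract_no IN (
    SELECT contract_no
    FROM find_duplicates
    GROUP BY contract_no, vin
    HAVING COUNT(*) > 1
)
""")
rows = db.execute("""
SELECT contract_no, COUNT(*) FROM found_duplicates
GROUP BY contract_no ORDER BY contract_no
""").fetchall()
print(rows)  # [('C1', 2), ('C3', 3)]
```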
I want to fetch all columns of a table except of columns of type serial. The closest query to this problem I was able to come up with this one:
```
SELECT column_name FROM information_schema.columns
WHERE table_name = 'table1' AND column_default NOT LIKE 'nextval%'
```
But the problem is that it also excludes rows that have NULL values for column_default. I don't know why Postgres behaves like this, so I had to change my query to something like this:
```
SELECT column_name FROM information_schema.columns
WHERE table_name = 'table1'
AND ( column_default IS NULL OR column_default NOT LIKE 'nextval%')
```
Any better suggestions or rationale behind this are welcome. | ### About `null`
`'anything' NOT LIKE null` yields `null`, not `true`.
And only `true` qualifies for filter expressions in a `WHERE` clause.
Most functions return `null` on `null` input (there are exceptions). That's the nature of `null` in *any* proper RDBMS.
If you desire a *single* expression, you *could* use:
```
AND (column_default LIKE 'nextval%') IS NOT TRUE;
```
That's hardly shorter or faster, though. [Details in the manual.](https://www.postgresql.org/docs/current/functions-comparison.html)
### Proper query
Your query is still unreliable. A table name alone is not unique in a Postgres database, you need to specify the schema name in addition or rely on the current `search_path` to find the first match in it:
Related:
* [How does the search\_path influence identifier resolution and the "current schema"](https://stackoverflow.com/questions/9067335/how-to-create-table-inside-specific-schema-by-default-in-postgres/9067777#9067777)
```
SELECT column_name
FROM information_schema.columns
WHERE  table_name = 'table1'
AND table_schema = 'public' -- your schema!
AND (column_default IS NULL
OR column_default NOT LIKE 'nextval%');
```
Better, but still not bullet-proof. A column default starting with 'nextval' does not make a `serial`, yet. See:
* [Auto increment table column](https://stackoverflow.com/questions/9875223/auto-increment-sql-function/9875517#9875517)
To be sure, check whether the sequence in use is "owned" by the column with [`pg_get_serial_sequence(table_name, column_name)`](https://www.postgresql.org/docs/current/functions-info.html#FUNCTIONS-INFO-CATALOG-TABLE).
I rarely use the information schema myself. Those slow, bloated views guarantee portability across major versions - and aim at portability to other standard-compliant RDBMS. But too much is incompatible anyway. Oracle does not even implement the information schema (as of 2015).
Also, useful Postgres-specific columns are missing in the information schema. For this case I might query the system catalogs like this:
```
SELECT *
FROM pg_catalog.pg_attribute a
WHERE attrelid = 'table1'::regclass
AND NOT attisdropped -- no dropped (dead) columns
AND attnum > 0 -- no system columns
AND NOT EXISTS (
SELECT FROM pg_catalog.pg_attrdef d
WHERE (d.adrelid, d.adnum) = (a.attrelid, a.attnum)
AND d.adsrc LIKE 'nextval%'
AND pg_get_serial_sequence(a.attrelid::regclass::text, a.attname) <> ''
);
```
Faster and more reliable, but less portable.
[The manual:](https://www.postgresql.org/docs/current/catalog-pg-attrdef.html)
> The catalog `pg_attrdef` stores column default values. The main
> information about columns is stored in `pg_attribute` (see below). Only
> columns that explicitly specify a default value (when the table is
> created or the column is added) will have an entry here.
`'table1'::regclass` uses the `search_path` to resolve the name, which avoids ambiguity. You can schema-qualify the name to overrule: `'myschema.table1'::regclass`.
Related:
* [Find the referenced table name using table, field and schema name](https://stackoverflow.com/questions/15618312/find-the-referenced-table-name-using-table-field-and-schema-name/15625024#15625024)
* [Get the default values of table columns in Postgres?](https://stackoverflow.com/questions/8146448/get-the-default-values-of-table-columns-in-postgres/8148177#8148177) | The easiest method is:
```
where length(column_name) > 0
``` | Behaviour of NOT LIKE with NULL values | [
"",
"sql",
"postgresql",
"null",
"information-schema",
""
] |
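The three-valued-logic point in the entry above is easy to demonstrate outside Postgres too. Below is a hedged sketch using Python's `sqlite3` (which follows the same SQL NULL rules): a `NOT LIKE NULL` comparison yields NULL rather than TRUE, so the row is filtered out unless an explicit `IS NULL` branch rescues it. The table and default values are invented stand-ins for `information_schema.columns`.

```python
import sqlite3

db = sqlite3.connect(":memory:")
# Any comparison against NULL yields NULL, and only TRUE passes a WHERE.
val, = db.execute("SELECT 'anything' NOT LIKE NULL").fetchone()
print(val)  # None: the expression is NULL, not TRUE

db.executescript("""
CREATE TABLE cols (column_name TEXT, column_default TEXT);
INSERT INTO cols VALUES
 ('id', 'nextval(''seq'')'),  -- serial-style default
 ('name', NULL),              -- no default at all
 ('ts', 'now()');             -- ordinary default
""")
kept = [r[0] for r in db.execute(
    "SELECT column_name FROM cols "
    "WHERE column_default IS NULL OR column_default NOT LIKE 'nextval%'")]
print(kept)  # ['name', 'ts'] -- without the IS NULL branch, 'name' is lost
```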
I have a Table which is having a column with junk values
```
DECLARE @tmp TABLE(junkValues nvarchar(max))
INSERT into @tmp (junkValues)
VALUES ('@{''id''= ''63FCE3C0-B6FE-4FFE-8BC8-506A338455DD''; Value="ABC"; Other="ddd"}')
INSERT into @tmp (junkValues)
VALUES ('@{''id''= ''C10560BA-7059-4AC5-B9ED-77BAC7182BD7''; Other="ddd"; Value="XYZ" }')
```
I wanted to convert this semicolon separated value to another temporary table.
```
DECLARE @outputTempTable TABLE(CDID uniqueidentifier, Value nvarchar(max))
```
Where CDID will be the id from @tmp and Value will be Value from @tmp
Expecting an output like below
```
CDID Value
----- -----------
63FCE3C0-B6FE-4FFE-8BC8-506A338455DD ABC
C10560BA-7059-4AC5-B9ED-77BAC7182BD7 XYZ
```
I am using SQL Server 2008, is there any way I can achieve this? | A bit complex, but it works! Please try the following:
```
DECLARE @tmp TABLE(junkValues nvarchar(max))
INSERT into @tmp (junkValues)
VALUES ('@{''id''= ''63FCE3C0-B6FE-4FFE-8BC8-506A338455DD''; Value="ABC", Other="ddd"}')
INSERT into @tmp (junkValues)
VALUES ('@{''id''= ''C10560BA-7059-4AC5-B9ED-77BAC7182BD7''; Other="ddd"; Value="XYZ" }')
;with cte as
(
select junkValues
,CHARINDEX('''id''= ''',junkValues) + 7 as IDStart
,CHARINDEX(''';',junkValues,CHARINDEX('''id''= ''',junkValues)) as IDEnd
,CHARINDEX('Value="',junkValues) + 7 as valStart
,CHARINDEX('"',junkValues,CHARINDEX('Value="',junkValues) + 8) as valEnd
from @tmp
)
select *
,SUBSTRING(junkValues,IDStart, IDEnd - IDStart ) as IDCol
,SUBSTRING(junkValues,valStart, valEnd - valStart ) as ValCol
from cte
``` | This may help you:
```
DECLARE @tmp TABLE(junkValues nvarchar(max))
INSERT into @tmp (junkValues)
VALUES ('@{''id''= ''63FCE3C0-B6FE-4FFE-8BC8-506A338455DD''; Value="ABC", Other="ddd"}')
INSERT into @tmp (junkValues)
VALUES ('@{''id''= ''C10560BA-7059-4AC5-B9ED-77BAC7182BD7''; Value="XYZ", Other="ddd"}')
SELECT
SUBSTRING(junkValues,CHARINDEX('''= ''',junkValues)+4, 36) AS CDID,
SUBSTRING(junkValues,CHARINDEX('="',junkValues)+2,((CHARINDEX('", O',junkValues))-(CHARINDEX('="',junkValues)+2))) AS Value
from @tmp
``` | Split string and save it to temporary table | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
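The `CHARINDEX`/`SUBSTRING` gymnastics in the answers above do in T-SQL what a couple of regular expressions do more directly. Below is a hedged sketch in Python, outside the database, on the question's two sample strings (the doubled quotes in the SQL literals become single quotes in the stored text); it extracts the same `CDID`/`Value` pairs.

```python
import re

junk_rows = [
    """@{'id'= '63FCE3C0-B6FE-4FFE-8BC8-506A338455DD'; Value="ABC"; Other="ddd"}""",
    """@{'id'= 'C10560BA-7059-4AC5-B9ED-77BAC7182BD7'; Other="ddd"; Value="XYZ" }""",
]
parsed = []
for junk in junk_rows:
    # GUID between the quotes after 'id'=, and the quoted Value field,
    # wherever it appears in the semicolon-separated junk.
    cdid = re.search(r"'id'=\s*'([0-9A-Fa-f-]+)'", junk).group(1)
    value = re.search(r'Value="([^"]*)"', junk).group(1)
    parsed.append((cdid, value))
print(parsed)
```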
How to convert this result:
```
Group | Sum
Services | 11120.99
Vendas | 3738.00
```
Into:
```
Group | Sum
Services | 74.84
Vendas | 25.16
```
That is, the second displays the results as percentages of total.
This is what I tried:
```
SELECT categories.cat AS 'Group', SUM(atual) AS 'Sum'
FROM `table1` INNER JOIN
categories
ON table1.category_id=categories.id
GROUP BY categoria
``` | you can left join a total sum that is not grouped or split up, and divide that by your sum query. this way you are just doing the total select once for faster runtime
```
SELECT cat, sum_atual, sum_atual/total_atual as percent_atual
FROM
( SELECT categories.cat AS cat, SUM(atual) AS sum_atual
FROM `table1`
JOIN categories ON table1.category_id=categories.id
GROUP BY categoria
) t
LEFT JOIN
( SELECT SUM(atual) as total_atual
FROM `table1`
) t1
``` | ```
SELECT categories.cat AS categoria,
SUM(atual) * 100 / (select sum(atual) from table1) AS percentages
FROM `table1`
INNER JOIN categories ON table1.category_id=categories.id
GROUP BY categoria
``` | MySQL query to calculate percentage of total column | [
"",
"mysql",
"sql",
""
] |
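The percentage-of-total query from the entry above can be verified against the question's own numbers. Below is a hedged, runnable sketch in Python's `sqlite3` (table and column names follow the question; `ROUND(..., 2)` is added only so the output matches the two-decimal figures shown).

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE categories (id INTEGER, cat TEXT);
CREATE TABLE table1 (category_id INTEGER, atual REAL);
INSERT INTO categories VALUES (1, 'Services'), (2, 'Vendas');
INSERT INTO table1 VALUES (1, 11120.99), (2, 3738.00);
""")
# Divide each group's sum by the ungrouped grand total.
rows = db.execute("""
SELECT c.cat,
       ROUND(SUM(t.atual) * 100.0 / (SELECT SUM(atual) FROM table1), 2) AS pct
FROM table1 t
JOIN categories c ON t.category_id = c.id
GROUP BY c.cat
ORDER BY pct DESC
""").fetchall()
print(rows)  # [('Services', 74.84), ('Vendas', 25.16)]
```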
I have a little problem. I have a table like this:
```
CREATE TABLE IF NOT EXISTS `t_ot_prestation` (
`id_ot` int(11) NOT NULL,
`id_prestation` int(11) NOT NULL,
PRIMARY KEY (`id_ot`,`id_prestation`)
) ENGINE=MyISAM;
```
What I would like is to get the distinct id_ot values matching multiple id_prestation values. For instance, I was expecting:
```
SELECT id_ot
FROM `t_ot_prestation`
WHERE `id_prestation` =723
AND `id_prestation` =1177;
```
To give me all id\_ot having 2 rows with 723 and 1177. But this isn't the case.
What am I doing wrong ? :( | That won't work, because no row could match that condition. Instead, use `group by` and `having`:
```
SELECT id_ot
FROM `t_ot_prestation`
GROUP BY id_ot
HAVING sum(case when `id_prestation` = 723 then 1 else 0 end) > 0 and
sum(case when `id_prestation` = 1177 then 1 else 0 end) > 0;
```
Each condition in the `having` clause counts the number of matches to each value. The `> 0` is simply saying that at least one row matches a particular value. | The WHERE clause is looked at per record. There is no single record that has both id\_prestation =723 and id\_prestation = 1177, of course.
Here is one way to solve this (provided your dbms supports INTERSECT):
```
SELECT id_ot
FROM `t_ot_prestation`
WHERE `id_prestation` =723
INTERSECT
SELECT id_ot
FROM `t_ot_prestation`
WHERE `id_prestation` =1177;
```
Other ways that come to mind are: EXISTS clause, IN clause, self join, group by. | How to get row when matching multiple AND? | [
"",
"sql",
""
] |
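The `GROUP BY` / `HAVING` answer from the entry above (a classic "relational division" pattern) runs unchanged on SQLite. Below is a hedged sketch in Python's `sqlite3` with invented `id_ot` values; only 10 and 13 carry both 723 and 1177.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE t_ot_prestation (id_ot INTEGER, id_prestation INTEGER,
                              PRIMARY KEY (id_ot, id_prestation));
INSERT INTO t_ot_prestation VALUES
 (10, 723), (10, 1177), (11, 723), (12, 1177), (13, 723), (13, 1177);
""")
# Each conditional SUM counts matches for one value within the group;
# "> 0" means at least one row of the group carried that value.
ids = [r[0] for r in db.execute("""
SELECT id_ot
FROM t_ot_prestation
GROUP BY id_ot
HAVING SUM(CASE WHEN id_prestation = 723  THEN 1 ELSE 0 END) > 0
   AND SUM(CASE WHEN id_prestation = 1177 THEN 1 ELSE 0 END) > 0
ORDER BY id_ot
""")]
print(ids)  # [10, 13]
```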
I have a basic table which outputs.
```
field1
a
b
c
```
Then I want to add single quotes and a comma, so currently I have a simple QUOTENAME.
```
QUOTENAME(field1,'''')
```
Which outputs the following results.
```
field1
'a'
'b'
'c'
```
But I can't figure out how to get QUOTENAME to output results like this:
```
field1
'a',
'b',
'c',
``` | Try:
```
QUOTENAME(field1,'''')+','
``` | I know you got your answer, but I wanted to add something in case someone else looks at this and wonders how to get rid of the comma after the last value, e.g. when using the result set in a dynamic query.
```
Declare @MyString nvarchar(max) = '' -- initialise, otherwise += leaves it NULL
select @MyString += QUOTENAME(field1, '''') + ','
from YourTableName
Set @MyString = left(@MyString, Len(@MyString) - 1)
```
And to view the results a
```
Print @MyString
```
can be added to evaluate the results.
Hope this helps others looking for this logic when using the quotename and needing that last comma removed :-) | SQL QUOTENAME adding single quotes and a comma | [
"",
"sql",
"function",
"quotename",
""
] |
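`QUOTENAME` is T-SQL-only, but the quote-and-comma effect from the entry above is just string concatenation. Below is a hedged sketch in Python's `sqlite3`: the first query mimics `QUOTENAME(field1,'''') + ','` per row, and `GROUP_CONCAT` builds the whole list with no dangling comma to trim afterwards (the table name `t` is invented).

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE t (field1 TEXT);
INSERT INTO t VALUES ('a'), ('b'), ('c');
""")
# Per row: wrap the value in single quotes and append a comma.
quoted = [r[0] for r in db.execute(
    "SELECT '''' || field1 || ''',' FROM t ORDER BY field1")]
print(quoted)  # ["'a',", "'b',", "'c',"]

# Whole list in one go, separator only between elements:
joined, = db.execute(
    "SELECT GROUP_CONCAT('''' || field1 || '''', ',') FROM t").fetchone()
print(joined)  # 'a','b','c' (element order is not guaranteed)
```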
How do I combine a SUM and a MAX in a single query?
Let's say I have an order rule table:
ProductID \ Quantity
I could say:
```
Select ProductID,SUM(Quantity) AS Sold
FROM Orderrule
GROUP BY ProductID
ORDER BY SUM(Quantity) Desc
```
However that would return all sales, and not just the most sold product (with quantity). | ```
SELECT TOP 1 ProductID, Sold FROM
(
SELECT ProductID, SUM(Quantity) AS Sold
FROM Orderrule
GROUP BY ProductID
) totals
ORDER BY Sold DESC
``` | Try this
```
SELECT TOP(1)
*
FROM
(
Select
ProductID,
MAX(Quantity) As MaxQuantity,
SUM(Quantity) AS Sold
FROM Orderrule
GROUP BY ProductID
)AS X
ORDER BY Sold DESC
``` | SQL Query: SELECT MAX SUM quantity | [
"",
"sql",
""
] |
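The "aggregate, then take the top row" idea from the entry above translates directly to SQLite, where `TOP 1` becomes `LIMIT 1`. Below is a hedged sketch in Python's `sqlite3` with invented quantities; product 1 has the largest total.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE orderrule (productid INTEGER, quantity INTEGER);
INSERT INTO orderrule VALUES (1, 5), (1, 7), (2, 3), (3, 9), (3, 2);
""")
# Sum per product, order by the total, keep only the best seller.
top = db.execute("""
SELECT productid, SUM(quantity) AS sold
FROM orderrule
GROUP BY productid
ORDER BY sold DESC
LIMIT 1
""").fetchone()
print(top)  # (1, 12)
```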
I don't really know much at all about writing SQL scripts, and the guy who normally does it is off for the next couple of weeks.
I need help trying to write a script that will change the value in a column called "UPDATE\_FLAG" from false to true wherever "xxx" is present. "xxx" could be in any column, but I need the value of "UPDATE\_FLAG" to change only in the specific row where "xxx" is present. Also, there could be multiple rows with "xxx". | Since you didn't provide much specific information you'll only get a generalized answer:
```
UPDATE your_table
SET Update_Flag = 1 -- this assumes the flag is an integer value
WHERE (column1 = 'xxx') -- if it's a string, do SET Update_Flag = 'True'
OR (column2 = 'xxx')
OR (column3 = 'xxx')
```
etcetera... | ```
update table set update_flag=1
where
column1 = 'xxx' or
column2 = 'xxx' or
...
``` | update column_B of specific row that "xxx" is present | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
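The flag update from the entry above is easy to check. Below is a hedged sketch in Python's `sqlite3` with an invented three-column table: only the rows where some column holds `'xxx'` get their flag set.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE t (col1 TEXT, col2 TEXT, update_flag INTEGER DEFAULT 0);
INSERT INTO t (col1, col2) VALUES ('xxx', 'ok'), ('ok', 'ok'), ('ok', 'xxx');
""")
# One OR per column that might contain the marker value.
db.execute("UPDATE t SET update_flag = 1 WHERE col1 = 'xxx' OR col2 = 'xxx'")
flags = [r[0] for r in db.execute("SELECT update_flag FROM t ORDER BY rowid")]
print(flags)  # [1, 0, 1]
```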
Please look at the following query:
**tbl\_Contents**
```
Content_Id Content_Title Content_Text
10002 New case Study New case Study
10003 New case Study New case Study
10004 New case Study New case Study
10005 New case Study New case Study
10006 New case Study New case Study
10007 New case Study New case Study
10008 New case Study New case Study
10009 New case Study New case Study
10010 SEO News Title SEO News Text
10011 SEO News Title SEO News Text
10012 Publish Contents SEO News Text
```
**tbl\_Media**
```
Media_Id Media_Title Content_Id
1000 New case Study 10012
1001 SEO News Title 10010
1002 SEO News Title 10011
1003 Publish Contents 10012
```
**QUERY**
```
SELECT
C.Content_ID,
C.Content_Title,
M.Media_Id
FROM tbl_Contents C
LEFT JOIN tbl_Media M ON M.Content_Id = C.Content_Id
ORDER BY C.Content_DatePublished ASC
```
**RESULT**
```
10002 New case Study 2014-03-31 13:39:29.280 NULL
10003 New case Study 2014-03-31 14:23:06.727 NULL
10004 New case Study 2014-03-31 14:25:53.143 NULL
10005 New case Study 2014-03-31 14:26:06.993 NULL
10006 New case Study 2014-03-31 14:30:18.153 NULL
10007 New case Study 2014-03-31 14:30:42.513 NULL
10008 New case Study 2014-03-31 14:31:56.830 NULL
10009 New case Study 2014-03-31 14:35:18.040 NULL
10010 SEO News Title 2014-03-31 15:22:15.983 1001
10011 SEO News Title 2014-03-31 15:22:30.333 1002
10012 Publish 2014-03-31 15:25:11.753 1000
10012 Publish 2014-03-31 15:25:11.753 1003
```
**10012 are coming twice...!**
My query is returning duplicate rows from tbl\_Contents (the left table in the join).
Some rows in tbl\_Contents have more than one associated row in tbl\_Media.
I need all rows from tbl\_Contents even if NULL values exist in tbl\_Media, BUT NO DUPLICATE RECORDS. | Try an [`OUTER APPLY`](https://learn.microsoft.com/en-us/sql/t-sql/queries/from-transact-sql?view=sql-server-ver15)
```
SELECT
C.Content_ID,
C.Content_Title,
C.Content_DatePublished,
M.Media_Id
FROM
tbl_Contents C
OUTER APPLY
(
SELECT TOP 1 *
FROM tbl_Media M
WHERE M.Content_Id = C.Content_Id
) m
ORDER BY
C.Content_DatePublished ASC
```
Alternatively, you could `GROUP BY` the results
```
SELECT
C.Content_ID,
C.Content_Title,
C.Content_DatePublished,
M.Media_Id
FROM
tbl_Contents C
LEFT OUTER JOIN tbl_Media M ON M.Content_Id = C.Content_Id
GROUP BY
C.Content_ID,
C.Content_Title,
C.Content_DatePublished,
M.Media_Id
ORDER BY
C.Content_DatePublished ASC
```
The `OUTER APPLY` selects a single row (or none) that matches each row from the left table.
The `GROUP BY` performs the entire join, but then collapses the final result rows on the provided columns. | You can do this using generic SQL with `group by`:
```
SELECT C.Content_ID, C.Content_Title, MAX(M.Media_Id)
FROM tbl_Contents C LEFT JOIN
tbl_Media M
ON M.Content_Id = C.Content_Id
GROUP BY C.Content_ID, C.Content_Title
ORDER BY MAX(C.Content_DatePublished) ASC;
```
Or with a correlated subquery:
```
SELECT C.Content_ID, C.Contt_Title,
(SELECT M.Media_Id
FROM tbl_Media M
WHERE M.Content_Id = C.Content_Id
ORDER BY M.MEDIA_ID DESC
LIMIT 1
) as Media_Id
FROM tbl_Contents C
ORDER BY C.Content_DatePublished ASC;
```
Of course, the syntax for `limit 1` varies between databases. Could be `top`. Or `rownum = 1`. Or `fetch first 1 rows`. Or something like that. | Left Join without duplicate rows from left table | [
"",
"sql",
"join",
"duplicates",
""
] |
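The `GROUP BY` variant from the entry above (collapse the one-to-many side to a single `MAX(media_id)` per content row) is portable SQL. Below is a hedged sketch in Python's `sqlite3` on a trimmed version of the question's data; `OUTER APPLY` itself is SQL Server-only, so it is not shown here.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE tbl_contents (content_id INTEGER, content_title TEXT);
CREATE TABLE tbl_media (media_id INTEGER, content_id INTEGER);
INSERT INTO tbl_contents VALUES
 (10010, 'SEO News'), (10011, 'SEO News'), (10012, 'Publish');
INSERT INTO tbl_media VALUES (1001, 10010), (1000, 10012), (1003, 10012);
""")
# LEFT JOIN keeps content rows with no media (media_id stays NULL);
# GROUP BY + MAX collapses multiple media rows to one per content row.
rows = db.execute("""
SELECT c.content_id, c.content_title, MAX(m.media_id) AS media_id
FROM tbl_contents c
LEFT JOIN tbl_media m ON m.content_id = c.content_id
GROUP BY c.content_id, c.content_title
ORDER BY c.content_id
""").fetchall()
print(rows)  # one row per content_id, 10012 no longer duplicated
```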
Having a very simple table structure:
```
CREATE TABLE dbo.employees (id int PRIMARY KEY);
CREATE TABLE dbo.assignments (id int PRIMARY KEY, employee_id int );
CREATE TABLE dbo.contracts (id int PRIMARY KEY, assignment_id int );
CREATE TABLE dbo.tasks (id int PRIMARY KEY, contract_id int );
```
Employee -< Assignments -< Contracts -< Tasks
I want to select all employees that do not have any associated tasks.
What would be the most efficient way to do that? | ```
select id
from employees E
where not exists (
select *
from assignments A
join contracts C on A.ID = C.assignment_id
join tasks T on C.ID = T.contract_id
where A.employee_id = E.id)
``` | You can `inner join` first 3 tables and use `not in` for the 4th.
```
select
e.*
from dbo.employees e
inner join dbo.assignments a on a.employee_id = e.id
inner join dbo.contracts c on c.assignment_id = a.id
where c.id not in (select distinct contract_id from dbo.tasks)
``` | Select rows without nested associated records | [
"",
"sql",
"sql-server",
""
] |
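The `NOT EXISTS` chain from the entry above can be exercised with a few rows. Below is a hedged sketch in Python's `sqlite3` using the question's schema: employee 1 has a task, employee 2 has an assignment and contract but no task, and employee 3 has nothing, so 2 and 3 qualify.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE employees  (id INTEGER PRIMARY KEY);
CREATE TABLE assignments(id INTEGER PRIMARY KEY, employee_id INTEGER);
CREATE TABLE contracts  (id INTEGER PRIMARY KEY, assignment_id INTEGER);
CREATE TABLE tasks      (id INTEGER PRIMARY KEY, contract_id INTEGER);
INSERT INTO employees   VALUES (1), (2), (3);
INSERT INTO assignments VALUES (10, 1), (20, 2);
INSERT INTO contracts   VALUES (100, 10), (200, 20);
INSERT INTO tasks       VALUES (1000, 100);
""")
# The correlated subquery joins all the way down to tasks; an employee
# qualifies only when no task can be reached through the chain.
ids = [r[0] for r in db.execute("""
SELECT e.id
FROM employees e
WHERE NOT EXISTS (
    SELECT 1
    FROM assignments a
    JOIN contracts c ON c.assignment_id = a.id
    JOIN tasks t     ON t.contract_id  = c.id
    WHERE a.employee_id = e.id)
ORDER BY e.id
""")]
print(ids)  # [2, 3]
```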
In my Database I have a table called `ARTICLES`. In `ARTICLES` I have **2** columns.


Now I want to get the max value of `NRRENDORDOK` column where the `TIP` column has value **'A'**.
e.g. **the max number of cells A in TIP column is 8.**
Any help will be appreciated. | You should make use of the `MAX` function and then GROUP BY `tip` in order to get the max value for each `tip`:
```
SELECT tip, MAX(nrreddordok) FROM table GROUP BY tip
```
If you just want the value for `A`, then you can use the following query:
```
SELECT MAX(nrreddordok) FROM table WHERE tip = 'A'
``` | ```
SELECT tip, MAX(nrreddordok) FROM table
where tip='A'
GROUP BY tip
``` | How to get max value of a column based on value of another column? | [
"",
"asp.net",
"sql",
"sql-server",
""
] |
So I am trying to pull rows from a table where there is more than one version for an ID, at least one version has a non-null person, and the versions that come after it are null.
So, if I had a statement like:
```
select ID, version, person from table1
```
the output would be:
```
ID Version Person
-- ------- ------
1 1 Tom
1 2 null
1 3 null
2 1 null
2 2 null
2 3 null
3 1 Mary
3 2 Mary
4 1 Joseph
4 2 null
4 3 Samantha
```
The version number can have an infinite value and is not limited.
I want to pull ID 1 version 2/3, and ID 4 Version 2.
So in the case of ID 2 where the person is null for all three rows I don't need these rows. And in the case of ID 3 version 1 and 2 I don't need these rows because there is never a null value.
This is a very simple version of the table I am working with but the "real" table is a lot more complicated with a bunch of joins already in it.
The desired output would be:
```
ID Version Person
-- ------- ------
1 2 null
1 3 null
4 2 null
```
The result set that I am looking for is where in a previous version for the same ID there was a person listed but is now null. | You are seeking all rows where the `person is not null` and that `id` has null rows, and the not null person `version` is less than the null `version` for the same person id:
**Edited predicate based on comment**
```
with sample_data as
(select 1 id, 1 version, 'Tom' person from dual union all
select 1, 2, null from dual union all
select 1, 3, null from dual union all
select 2, 1, null from dual union all
select 2, 2, null from dual union all
select 2, 3, null from dual union all
select 3, 1, 'Mary' from dual union all
select 3, 2, 'Mary' from dual union all
select 4, 1, 'Joseph' from dual union all
select 4, 2, null from dual union all
select 4, 3, 'Samantha' from dual)
select *
from sample_data sd
where person is null
and exists
(select 1 from sample_data
where id = sd.id
and person is not null
and version < sd.version);
/* Old predicate
and id in
(select id from sample_data where person is not null);
*/
``` | I think this query translates pretty nicely into what you asked for?
*List all the rows (R) where the person is null, but only if a previous row (P) with a non-null name exists.*
```
select *
from table1 r
where r.person is null
and exists(
select 'x'
from table1 p
where p.id = r.id
and p.version < r.version
and p.person is not null
);
``` | How do I display Rows in a table where all values but the first one for a column is null | [
"",
"sql",
"oracle",
""
] |
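The `EXISTS` predicate from the entry above ("this row is NULL and an earlier version of the same ID was not") runs as-is on SQLite. Below is a hedged sketch in Python's `sqlite3` on the question's exact sample data; it reproduces the desired three rows.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE table1 (id INTEGER, version INTEGER, person TEXT);
INSERT INTO table1 VALUES
 (1,1,'Tom'),   (1,2,NULL), (1,3,NULL),
 (2,1,NULL),    (2,2,NULL), (2,3,NULL),
 (3,1,'Mary'),  (3,2,'Mary'),
 (4,1,'Joseph'),(4,2,NULL), (4,3,'Samantha');
""")
rows = db.execute("""
SELECT id, version, person
FROM table1 r
WHERE r.person IS NULL
  AND EXISTS (SELECT 1 FROM table1 p
              WHERE p.id = r.id
                AND p.version < r.version
                AND p.person IS NOT NULL)
ORDER BY id, version
""").fetchall()
print(rows)  # [(1, 2, None), (1, 3, None), (4, 2, None)]
```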
Let's say I have some data in a SQL Server database.
```
Location PayID Year
------------------------
Loc1 100 2010
Loc1 100 2011
Loc1 101 2012
Loc2 200 2010
Loc2 201 2011
Loc2 202 2012
```
And I am trying to write a query in SQL Server that will give me a table with two columns that I can search on to find out what the previous `PayID` for a particular Location is. So the output would be:
```
PayID PrevID
-----------------
101 100
202 201
201 200
```
It only needs an entry when the previous year's ID is different from the current year's. If I don't have the right match when a user goes back more than one year, I will query it recursively: it will pull each previous ID based on the one just pulled, until it finds a PayID and Year matching the first table.
Any help on this would be much appreciated. I'll be attentively searching and will post a solution if I can find it. | This can be done pretty easily with a recursive CTE:
```
with cte as (
select Location, PayID, PayID as PrevID, Year from payhistory
union all
select p.Location, p.PayID, cte.PayID as PrevID, p.Year
from payhistory p
join cte on cte.Location = p.Location and cte.Year + 1 = p.Year
)
select distinct Location, PayID, PrevID
from cte
where PayID <> PrevID;
```
Here's the results I get:
```
| LOCATION | PAYID | PREVID |
|----------|-------|--------|
| Loc1 | 101 | 100 |
| Loc2 | 201 | 200 |
| Loc2 | 202 | 201 |
```
Demo: <http://www.sqlfiddle.com/#!3/e0ac0/4> | I didn't see a version specified, so I'd use LAG in 2012. You can filter the results if you want less info. You could replace LAG(PayID,1,NULL) with LAG(PayID,1,PayID) to alter the behavior of the first payid.
```
DECLARE @tbl TABLE (Location VARCHAR(4), PayID INT, Year INT)
INSERT INTO @tbl VALUES
('Loc1',100,2010)
,('Loc1',100,2011)
,('Loc1',101,2012)
,('Loc2',200,2010)
,('Loc2',201,2011)
,('Loc2',202,2012)
SELECT Location
,PayID
,LAG(PayID,1,NULL) OVER (PARTITION BY Location ORDER BY Year ASC) PrevID
FROM @tbl
```
<http://www.sqlfiddle.com/#!6/e0ac0/2> | How can I generate a previous value table by combining fields from SQL table? | [
"",
"sql",
"sql-server",
""
] |
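The recursive CTE from the chosen answer above is standard SQL and also runs on SQLite (which wants the `RECURSIVE` keyword spelled out). Below is a hedged sketch in Python's `sqlite3` on the question's sample data; it reproduces the three (PayID, PrevID) pairs from the answer.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE payhistory (location TEXT, payid INTEGER, year INTEGER);
INSERT INTO payhistory VALUES
 ('Loc1',100,2010), ('Loc1',100,2011), ('Loc1',101,2012),
 ('Loc2',200,2010), ('Loc2',201,2011), ('Loc2',202,2012);
""")
# The recursion walks forward one year at a time, carrying the older
# payid along as previd; DISTINCT + the <> filter keep only real changes.
rows = db.execute("""
WITH RECURSIVE cte AS (
    SELECT location, payid, payid AS previd, year FROM payhistory
    UNION ALL
    SELECT p.location, p.payid, cte.payid, p.year
    FROM payhistory p
    JOIN cte ON cte.location = p.location AND cte.year + 1 = p.year
)
SELECT DISTINCT location, payid, previd
FROM cte
WHERE payid <> previd
ORDER BY location, payid, previd
""").fetchall()
print(rows)  # [('Loc1', 101, 100), ('Loc2', 201, 200), ('Loc2', 202, 201)]
```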
Despite my little knowledge of SQL Server, I've been stuck on this problem for a while:
I'm trying to get the final sum of a product of two columns, but SQL Server won't recognize my last column. Here is the query:
```
SELECT
Products.customName AS 'Name',
Ordered_Products.scanned AS 'Sent Quantity',
Charged_Products.price AS 'Product Price',
Ordered_Products.scanned * Charged_Products.price AS 'Charged'
FROM Products
JOIN Charged_Products
ON Products.productsId = Charged_Products.productsId
JOIN Ordered_Products
ON Ordered_Products.productsId = Products.productsId
WHERE
Ordered_Products.ordersId = 500 AND
Ordered_Products.scanned > 0
UNION ALL
SELECT 'TOTAL', '', '', SUM('Charged')
```
It should look something like this:

Could somebody point me in the right direction to make the query work? Any help would be much appreciated. | You can do it this way:
```
SELECT
Products.customName AS 'Name',
Ordered_Products.scanned AS 'Sent Quantity',
Charged_Products.price AS 'Product Price',
Ordered_Products.scanned * Charged_Products.price AS 'Charged'
FROM Products
JOIN Charged_Products
ON Products.productsId = Charged_Products.productsId
JOIN Ordered_Products
ON Ordered_Products.productsId = Products.productsId
WHERE
Ordered_Products.ordersId = 500 AND
Ordered_Products.scanned > 0
UNION ALL
SELECT 'TOTAL', '', '', SUM(Ordered_Products.scanned * Charged_Products.price)
FROM Products
JOIN Charged_Products
ON Products.productsId = Charged_Products.productsId
JOIN Ordered_Products
ON Ordered_Products.productsId = Products.productsId
WHERE
Ordered_Products.ordersId = 500 AND
Ordered_Products.scanned > 0
``` | You'd be better off doing this on the client side.
But if you really want it in SQL, then use `GROUPING SETS`.
```
SELECT ISNULL (Products.customName, 'TOTAL') AS 'Name'
, Ordered_Products.scanned AS 'Sent Quantity'
, Charged_Products.price AS 'Product Price'
, SUM (Ordered_Products.scanned * Charged_Products.price) AS 'Charged'
FROM Products
INNER JOIN Charged_Products ON Products.productsId = Charged_Products.productsId
INNER JOIN Ordered_Products ON Ordered_Products.productsId = Products.productsId
WHERE Ordered_Products.ordersId = 500
AND Ordered_Products.scanned > 0
GROUP BY GROUPING SETS ((Products.customName, Ordered_Products.scanned, Charged_Products.price),())
``` | SQL Server - Get the final sum of a product of two columns | [
"",
"sql",
".net",
"sql-server",
"vb.net",
""
] |
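The `UNION ALL` total-row pattern from the chosen answer above (detail rows, then one aggregated row with its own `FROM`/`WHERE`) is portable. Below is a hedged sketch in Python's `sqlite3` with an invented two-row order table; the SUM must live in a full second query, not reference the first query's column alias.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE order_lines (name TEXT, qty INTEGER, price REAL);
INSERT INTO order_lines VALUES ('A', 2, 1.5), ('B', 1, 4.0);
""")
rows = db.execute("""
SELECT name, qty, price, qty * price AS charged FROM order_lines
UNION ALL
SELECT 'TOTAL', NULL, NULL, SUM(qty * price) FROM order_lines
""").fetchall()
for r in rows:
    print(r)  # last row: ('TOTAL', None, None, 7.0)
```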
I'm trying to INSERT a new row with these values (hotelNo, guestNo, dataform, dataTo, roomNo).
I know the hotel name, so I have to SELECT the hotelNo from another table, but it didn't work for me. Is there something wrong?
```
INSERT INTO Booking
VALUES (hotelNo,123,'3-sept-1014','3-sept-1014',121)
(SELECT hotelNo
FROM Hotel
WHERE hotelName='Ritz Carlton' AND city='Dubai');
``` | Remove `VALUES (hotelNo,...` from your query and you are good to go.
```
INSERT INTO Booking
(SELECT hotelNo,123,'3-sept-1014','3-sept-1014',121
FROM Hotel
WHERE hotelName='Ritz Carlton' AND city='Dubai')
``` | You should do it without `VALUES`
```
INSERT INTO Booking
(SELECT hotelNo, 123, '3-sept-1014','3-sept-1014',121
FROM Hotel
WHERE hotelName='Ritz Carlton' AND city='Dubai');
``` | SQL , INSERT with SELECT | [
"",
"sql",
"database",
"oracle",
""
] |
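The fix in the answers above (drop `VALUES` and let the `SELECT` supply every column) can be demonstrated directly. Below is a hedged sketch in Python's `sqlite3` with invented hotel numbers and ISO dates standing in for the question's date literals.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE hotel (hotelno INTEGER, hotelname TEXT, city TEXT);
CREATE TABLE booking (hotelno INTEGER, guestno INTEGER,
                      datefrom TEXT, dateto TEXT, roomno INTEGER);
INSERT INTO hotel VALUES (7, 'Ritz Carlton', 'Dubai'), (8, 'Other', 'Paris');
""")
# VALUES and a SELECT cannot be mixed; the SELECT yields all five columns,
# mixing the looked-up hotelno with constant values.
db.execute("""
INSERT INTO booking
SELECT hotelno, 123, '2014-09-03', '2014-09-03', 121
FROM hotel
WHERE hotelname = 'Ritz Carlton' AND city = 'Dubai'
""")
booked = db.execute("SELECT * FROM booking").fetchall()
print(booked)  # [(7, 123, '2014-09-03', '2014-09-03', 121)]
```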
Just trying to wrap my head around the logic/commands needed:
I have a contacts table. Each contact has a client in a many-to-one fashion.
I am trying to get any clientIDs where all contacts have a NULL value for email.
Example data:
```
ContactID EmailAddress ClientID
1 NULL 3
907 NULL 3
2468 NULL 3
2469 email@email.com 4
1077 NULL 4
908 email@email.com 4
2 email@email.com 4
3 email@email.com 5
909 email@email.com 5
```
Thanks! | You can do this with a grouped aggregate and a [`HAVING`](http://technet.microsoft.com/en-us/library/ms180199.aspx) clause (and assuming blank email addresses can be treated the same as nulls):
```
SELECT
ClientID
FROM ClientEmails
GROUP BY ClientID
HAVING MAX(LEN(ISNULL(EmailAddress, ''))) = 0;
```
[SqlFiddle](http://sqlfiddle.com/#!6/0c29f/7) | Would this help ?
```
select *
from
(
select clientId,
COUNT(emailAddress) as Mailz,
COUNT(contactId) as Contacts
from contacts
group by clientId
) as src
where (Mailz = 0 and contacts > 0)
``` | TSQL Counting Nulls - where all grouped items have NULL field | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
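The grouped-aggregate idea from the entry above has an even shorter portable form: `COUNT(column)` counts only non-NULL values, so a zero count means every contact of that client has a NULL email. Below is a hedged sketch in Python's `sqlite3` on a trimmed version of the question's data (note this variant does not treat blank strings as NULL, unlike the chosen answer's `LEN(ISNULL(...))` trick).

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE contacts (contact_id INTEGER, email TEXT, client_id INTEGER);
INSERT INTO contacts VALUES
 (1, NULL, 3), (907, NULL, 3), (2468, NULL, 3),
 (2469, 'a@b.com', 4), (1077, NULL, 4),
 (3, 'c@d.com', 5);
""")
ids = [r[0] for r in db.execute("""
SELECT client_id
FROM contacts
GROUP BY client_id
HAVING COUNT(email) = 0
""")]
print(ids)  # [3]
```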
I have this query
```
select
raw_item_no,
raw_item_no_2,
raw_item_no_3,
raw_item_no_4
from jtjobfil_sql
where job_no = 213418
```
which outputs like this
```
raw_item_no raw_item_no_2 raw_item_no_3 raw_item_no_4
23 24 25 26
```
how do I get the output to look like this
```
raw_item_nos
23
24
25
26
```
I looked into pivot but I couldn't figure out how to do this because I am not summing any columns. | You can use `CROSS APPLY`:
```
SELECT x.raw_item_nos
FROM jtjobfil_sql t
CROSS APPLY
(
VALUES
(t.raw_item_no),
(t.raw_item_no_2),
(t.raw_item_no_3),
(t.raw_item_no_4)
) X (raw_item_nos)
WHERE job_no = 213418;
``` | You could do something using a union.
```
with my_query (column1, column2, column3)
as
(
SELECT column1, column2, column3
FROM my_table
where id = 1
)
SELECT column1 FROM my_query
UNION
SELECT column2 FROM my_query
UNION
SELECT column3 FROM my_query
``` | How do I display fields in sql vertically instead of horizontally | [
"",
"sql",
"sql-server-2008-r2",
"unpivot",
""
] |
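`CROSS APPLY ... VALUES` from the chosen answer above is SQL Server syntax; the rejected answer's `UNION ALL` unpivot is the portable fallback, sketched below in Python's `sqlite3` on the question's row. This is a hedged sketch; on wide tables the repeated scans make `UNION ALL` the slower option.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE jtjobfil_sql (job_no INTEGER, raw_item_no INTEGER,
    raw_item_no_2 INTEGER, raw_item_no_3 INTEGER, raw_item_no_4 INTEGER);
INSERT INTO jtjobfil_sql VALUES (213418, 23, 24, 25, 26);
""")
# One SELECT per column, stacked vertically.
items = [r[0] for r in db.execute("""
SELECT raw_item_no   FROM jtjobfil_sql WHERE job_no = 213418
UNION ALL
SELECT raw_item_no_2 FROM jtjobfil_sql WHERE job_no = 213418
UNION ALL
SELECT raw_item_no_3 FROM jtjobfil_sql WHERE job_no = 213418
UNION ALL
SELECT raw_item_no_4 FROM jtjobfil_sql WHERE job_no = 213418
""")]
print(items)  # [23, 24, 25, 26]
```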
I have the following tables:
```
create table Users (
Id int
)
create table Vals1 (
UserId int,
Number int
)
create table Vals2 (
UserId int,
Number int
)
```
I have values in `Vals1` and `Vals2` but not a value for every `UserId`. What I want to do is, when available, sum the corresponding values between `Vals1` and `Vals2`.
In this example, assume I have records with `Id` 1-5 in my User table. So, say I have the following in `Vals1`:
```
UserId Number
1 10
2 15
4 20
```
And this in `Vals2`:
```
UserId Number
1 30
2 55
3 40
```
This is what I want as output:
```
UserId Number
1 40
2 70
3 40
4 20
5 0
```
My first stab at this produces correct results, but this seems really ugly:
```
;WITH AllVals1 AS (
SELECT Id, ISNULL(Number, 0) as Number
FROM Users
LEFT JOIN Vals1 ON Id = UserId
), AllVals2 AS (
SELECT Id, ISNULL(Number, 0) as Number
FROM Users
LEFT JOIN Vals2 ON Id = UserId
)
SELECT v1.Id, v1.Number + v2.Number
FROM AllVals1 v1
JOIN AllVals2 v2 ON v1.Id = v2.Id
```
Is there a more succinct/efficient way of doing this? | Here is a much simpler way of doing it; this will only work if you have one row per ID in the `Vals1` and `Vals2` tables.
```
SELECT id, COALESCE(v1.Number, 0) + COALESCE(v2.Number, 0) as NumberSum
FROM users u
LEFT OUTER JOIN Vals1 AS v1
ON u.id = v1.userid
LEFT OUTER JOIN vals2 AS v2
ON u.id = v2.userid
```
If you have more than one row per ID in the values tables, then you can add a `SUM()` and a `GROUP BY` clause to get rid of multiple rows.
```
SELECT id
,SUM(COALESCE(v1.Number, 0) + COALESCE(v2.Number, 0))
FROM users u
LEFT OUTER JOIN Vals1 AS v1
ON u.id = v1.userid
LEFT OUTER JOIN vals2 AS v2
ON u.id = v2.userid
GROUP BY id
``` | You can use a left join and then account for the nulls with coalesce or isnull
```
SELECT users.id, ISNULL(vals1.number,0) + ISNULL(vals2.number,0) as [sum]
FROM users
left join vals1 on vals1.userid = users.id
left join vals2 on vals2.userid = users.id
``` | Sum Values Between Two Tables | [
"",
"sql",
"sql-server-2008-r2",
""
] |
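The double-`LEFT JOIN`-plus-`COALESCE` answer above can be run against the question's exact data. Below is a hedged sketch in Python's `sqlite3`; it reproduces the desired output, including user 5's zero.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY);
CREATE TABLE vals1 (userid INTEGER, number INTEGER);
CREATE TABLE vals2 (userid INTEGER, number INTEGER);
INSERT INTO users VALUES (1), (2), (3), (4), (5);
INSERT INTO vals1 VALUES (1, 10), (2, 15), (4, 20);
INSERT INTO vals2 VALUES (1, 30), (2, 55), (3, 40);
""")
# COALESCE turns each missing side into 0 before adding.
rows = db.execute("""
SELECT u.id, COALESCE(v1.number, 0) + COALESCE(v2.number, 0) AS total
FROM users u
LEFT JOIN vals1 v1 ON v1.userid = u.id
LEFT JOIN vals2 v2 ON v2.userid = u.id
ORDER BY u.id
""").fetchall()
print(rows)  # [(1, 40), (2, 70), (3, 40), (4, 20), (5, 0)]
```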
I have a database where I want to SELECT everything in two columns but exclude all the rows where the values are equal to each other. Is this possible?
Let's say it looks like this:
```
+------------------+------------------+
| Login | Alias |
+------------------+------------------+
| user1@domain.com | user1@domain.com |
| user1@domain.com | user1@example.com|
| user1@domain.com | user7@domain.com |
| user2@domain.com | user2@domain.com |
| user3@domain.com | user3@domain.com |
+------------------+------------------+
```
And all I want is this:
```
+------------------+------------------+
| Login | Alias |
+------------------+------------------+
| user1@domain.com | user1@example.com|
| user1@domain.com | user7@domain.com |
+------------------+------------------+
```
Is this possible with a SQL Query? | Try this
```
SELECT *
FROM table_name
WHERE Login <> Alias
``` | ```
select name, Alias,
case
when Login=Alias then 'true'
else 'false'
end
from TableName;
``` | SQL: Exclude row if value in both columns | [
"",
"mysql",
"sql",
""
] |
I am trying out some dynamic SQL queries using R and the postgres package to connect to my DB.
Unfortunately I get an empty data frame if I execute the following statement:
```
x <- "Mean"
query1 <- dbGetQuery(con, statement = paste(
"SELECT *",
"FROM name",
"WHERE statistic = '",x,"'"))
```
I believe that there is a syntax error somewhere in the last line. I already changed the commas and quotation marks in every possible way, but nothing seems to work.
Does anyone have an idea how I can construct this SQL Query with a **dynamic WHERE Statement using a R variable**? | Try this:
```
require(stringi)
stri_paste("SELECT * ",
"FROM name ",
"WHERE statistic = '",x,"'",collapse="")
## [1] "SELECT * FROM name WHERE statistic = 'Mean'"
```
or use concatenate operator `%+%`
```
"SELECT * FROM name WHERE statistic ='" %+% x %+% "'"
## [1] "SELECT * FROM name WHERE statistic ='mean'"
``` | You should use `paste0` instead of `paste`, which produces wrong results here (its default separator inserts spaces into the query), or `paste(..., collapse='')`, which is slightly less efficient (see `?paste0` or [docs here](http://stat.ethz.ch/R-manual/R-patched/library/base/html/paste.html)).
Also, you should consider preparing your SQL statement in a separate variable. That way you can always easily check what SQL is being produced.
I would use this (and I am using this all the time):
```
x <- "Mean"
sql <- paste0("select * from name where statistic='", x, "'")
# print(sql)
query1 <- dbGetQuery(con, sql)
```
When I have SQL inside a function, I always add a `debug` parameter so I can see what SQL is used:
```
get_statistic <- function(x=NA, debug=FALSE) {
sql <- paste0("select * from name where statistic='", x, "'")
if(debug) print(sql)
query1 <- dbGetQuery(con, sql)
query1
}
```
Then I can simply use `get_statistic('Mean', debug=TRUE)` and I will see immediately if generated SQL is really what I expected. | Dynamic SQL Query in R (WHERE) | [
"",
"sql",
"r",
"postgresql",
"dynamic-sql",
""
] |
Is there a query I can run to show currently assigned privileges on a particular schema?
i.e. privileges that were assigned like so:
```
GRANT USAGE ON SCHEMA dbo TO MyUser
```
I have tried
```
SELECT *
FROM information_schema.usage_privileges;
```
but this only returns grants to the built-in PUBLIC role. Instead, I want to see which users have been granted privileges on the various schema.
Note: I'm actually using Amazon Redshift rather than pure PostgreSQL, although I will accept a pure PostgreSQL answer if this is not possible in Amazon Redshift. (Though I suspect it is) | In the psql console utility:
```
\dn+
```
will show you
```
Name | Owner | Access privileges | Description
``` | List all schemas with their privileges for the current user:
```
WITH "names"("name") AS (
SELECT n.nspname AS "name"
FROM pg_catalog.pg_namespace n
WHERE n.nspname !~ '^pg_'
AND n.nspname <> 'information_schema'
) SELECT "name",
pg_catalog.has_schema_privilege(current_user, "name", 'CREATE') AS "create",
pg_catalog.has_schema_privilege(current_user, "name", 'USAGE') AS "usage"
FROM "names";
```
The response will be for example:
```
name | create | usage
---------+--------+-------
public | t | t
test | t | t
awesome | f | f
(3 rows)
```
In this example the current user is not the owner of the `awesome` schema.
As you could guess, a similar request for a particular schema:
```
SELECT
pg_catalog.has_schema_privilege(
current_user, 'awesome', 'CREATE') AS "create",
pg_catalog.has_schema_privilege(
current_user, 'awesome', 'USAGE') AS "usage";
```
and response:
```
create | usage
--------+-------
f | f
```
As you know, it's possible to use `pg_catalog.current_schema()` for current schema.
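For instance, to see which schema that resolves to in the current session:

```
SELECT pg_catalog.current_schema();
```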
Of all the possible privileges
```
-- SELECT
-- INSERT
-- UPDATE
-- DELETE
-- TRUNCATE
-- REFERENCES
-- TRIGGER
-- CREATE
-- CONNECT
-- TEMP
-- EXECUTE
-- USAGE
```
only `CREATE` and `USAGE` are allowed for schemas.
As with `current_schema()`, `current_user` can be replaced with a particular role.
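For example, checking on behalf of a specific role instead of the current user (a sketch; `some_role` is a hypothetical role name):

```
SELECT pg_catalog.has_schema_privilege('some_role', 'awesome', 'CREATE') AS "create",
       pg_catalog.has_schema_privilege('some_role', 'awesome', 'USAGE')  AS "usage";
```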
---
**BONUS** with `current` column
```
WITH "names"("name") AS (
SELECT n.nspname AS "name"
FROM pg_catalog.pg_namespace n
WHERE n.nspname !~ '^pg_'
AND n.nspname <> 'information_schema'
) SELECT "name",
pg_catalog.has_schema_privilege(current_user, "name", 'CREATE') AS "create",
pg_catalog.has_schema_privilege(current_user, "name", 'USAGE') AS "usage",
"name" = pg_catalog.current_schema() AS "current"
FROM "names";
-- name | create | usage | current
-- ---------+--------+-------+---------
-- public | t | t | t
-- test | t | t | f
-- awesome | f | f | f
-- (3 rows)
```
---
[WITH](http://www.postgresql.org/docs/9.1/static/queries-with.html) | [System Information Functions](http://www.postgresql.org/docs/9.1/static/functions-info.html) | [GRANT (privileges)](http://www.postgresql.org/docs/9.1/static/sql-grant.html) | postgresql - view schema privileges | [
"",
"sql",
"postgresql",
"amazon-redshift",
""
] |
I have a form where I submit a start and end date to book off holidays. I then send the values across to SQL. Now I'm a bit stuck, because what I need to do is get the dates between the start and end date.
Can anyone help me with this? I just need a calculation for my select statement to return all the dates between and including the start and end date.
Thanks in advance for your answers/replies :) | Try this:
```
DECLARE @FromDate datetime
DECLARE @ToDate datetime
SELECT @FromDate=FromDateCol FROM TableName
SELECT @ToDate=ToDateCol FROM TableName
WITH cte AS
(
SELECT CAST(@FromDate AS DATETIME) DateValue
UNION ALL
SELECT DateValue + 1
FROM cte
WHERE DateValue + 1 < @ToDate
)
SELECT DateValue
FROM cte
OPTION (MAXRECURSION 0)
``` | That should not be too difficult to figure out if you are using SQL Server! Try this website; it has good documentation on how to retrieve those dates that you need.
These are the select statements:
start date (PRSTDATE) and the end date (PRENDATE)
SELECT statement
docs.oracle.com/javadb/10.6.2.1/.../rrefsqlj41360.htm... | Select Statement Gathering Dates | [
"",
"date-arithmetic",
"sql",
""
] |
I have this as a result of the query:
```
select cast(to_date(a.start_time,'mm/dd/yyyy hh:mi:ss pm') as timestamp) date_of_call,
ora_rowscn from calling_table a where rownum <= 10;
DATE_OF_CALLING ORA_ROWSCN
26-JUL-13 12.29.28.000000000 PM 8347567733892
26-JUL-13 12.29.35.000000000 PM 8347567733892
26-JUL-13 12.29.35.000000000 PM 8347567733892
26-JUL-13 12.29.38.000000000 PM 8347567733892
26-JUL-13 12.29.44.000000000 PM 8347567733892
26-JUL-13 12.29.47.000000000 PM 8347567733892
26-JUL-13 12.29.48.000000000 PM 8347567733892
26-JUL-13 12.29.48.000000000 PM 8347567733892
26-JUL-13 12.29.48.000000000 PM 8347567733892
26-JUL-13 12.29.56.000000000 PM 8347567733892
```
But when I try to convert this timestamp into an SCN using the function `timestamp_to_scn`,
I am getting the following error:
ORA-08180: no snapshot found based on specified time
ORA-06512: at "SYS.TIMESTAMP\_TO\_SCN", line 1
08180. 00000 - "no snapshot found based on specified time"
\*Cause: Could not match the time to an SCN from the mapping table.
\*Action: try using a larger time.
And when I am using `scn_to_timestamp` on `ora_rowscn` to convert that column into a timestamp, I am getting the following error:
ORA-08181: specified number is not a valid system change number
ORA-06512: at "SYS.SCN\_TO\_TIMESTAMP", line 1
08181. 00000 - "specified number is not a valid system change number"
\*Cause: supplied scn was beyond the bounds of a valid scn.
\*Action: use a valid scn.
What is it that I am doing wrong? | You're trying to look too far back. You can only convert to and from SCNs that are in the redo/flashback window maintained by your system. Once changes age out then the mapping is lost.
This is explained [in the documentation](http://docs.oracle.com/cd/E18283_01/server.112/e17118/functions161.htm):
> The association between an SCN and a timestamp when the SCN is generated is remembered by the database for a limited period of time. This period is the maximum of the auto-tuned undo retention period, if the database runs in the Automatic Undo Management mode, and the retention times of all flashback archives in the database, but no less than 120 hours. The time for the association to become obsolete elapses only when the database is open. An error is returned if the SCN specified for the argument to `SCN_TO_TIMESTAMP` is too old.
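As a quick sanity check of that window (assuming you have access to `V$DATABASE`): the current SCN always maps back successfully, because it is by definition inside the retention period.

```
-- The current SCN is always inside the retention window, so this succeeds;
-- an SCN older than the window (or an arbitrary number) raises ORA-08181.
SELECT current_scn,
       SCN_TO_TIMESTAMP(current_scn) AS now_ts
FROM   v$database;
```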
Bear in mind these are part of Oracle's internal mechanism, and so are of limited use to us; though they are useful for flashback queries of course - again within the same window. | `SCN_TO_TIMESTAMP` uses some internal algorithms to do the mapping between an SCN and the time when some event happened, and it does the job with a good approximation. But there is a limit: you cannot go too far into the past if the UNDO data does not cover your period.
In that case there is a tricky way to create your own mapping when you hit the limit of the undo data. It will not be as good as `SCN_TO_TIMESTAMP`, but it will provide an approximation depending on your data.
All you need to do is find a table with constant inserts going on. I use the audit table **sys.aud$**. You can use your own, but the table must have a time field indicating when the rows were inserted. And if you have SCN and DATE, you can map SCN and DATE with another table.
If you use **sys.aud$**, keep in mind that:
1. You may need your DBA to grant access to it, or to create a simple view with the two fields `ora_rowscn` and `ntimestamp#`
2. The more activity is going on in the database, the more accurate the mapping will be. Usually, using the sys.aud$ table, I can map old data edits that happened a year ago with an accuracy of about 60-120 minutes
3. If auditing is off then `scn_time` will not return any rows and you need to find another table for the mapping.
The query below uses sys.aud$. Replace **[YOU_TABLE]** with the table where you need to find the date of insert or update.
```
-- get scn to date interval [begin..end] mapping from audit table
with scn_time as
(
select sc sc_start,
lead(sc) over(order by sc) sc_end,
start_time,
lead(end_time) over(order by sc) end_time_sc
from
(
select n.ora_rowscn sc,
min( cast(from_tz(ntimestamp#,'00:00') at local as date) ) start_time,
max( cast(from_tz(ntimestamp#,'00:00') at local as date) ) end_time
from sys.aud$ n
-- if audit log is big you need to select only a part of the table
-- to make query faster
--where ntimestamp# > sysdate - 365
group by n.ora_rowscn
) order by sc
)
-- map scn from you table to scn_mapping
select *
from (
select t.ora_rowscn sc, t.*
from [YOU_TABLE] t
) table_inspect
inner join scn_time s
on (table_inspect.sc between s.sc_start and s.sc_end)
-- to filter out bit intervals
where (end_time_sc-start_time) < 1
```
I used this approach to restore the information about when a row was inserted, in cases where it was inserted more than a year ago. | How to use Timestamp_to_scn and Scn_to_timestamp in Oracle? | [
"",
"sql",
"oracle",
"datetime",
"timestamp",
"oracle-sqldeveloper",
""
] |
I'm making a report in which I have to find out how many times there has been a situation where one or more type 1 rows are found between type 2 row dates. The data looks like this and is sorted by date:
```
Date Row type
2014-01-17 2
2014-01-21 1*
2014-02-06 2
2014-02-11 1*
2014-02-20 1*
2014-03-19 1*
2014-03-19 1*
2014-03-19 2
2014-03-19 1*
2014-03-19 1*
2014-03-20 1*
2014-03-20 2
2014-03-20 2
2014-03-21 1*
2014-03-21 2
2014-03-21 2
2014-03-21 1*
2014-03-24 1*
2014-03-24 1*
2014-03-25 2
```
So here I can see that type 1 rows can be found between type 2 rows five times.
I have only used very basic SQL queries and I have no idea how to proceed with this. The query should be able to run in an SSRS report.
Any help? Or could somebody point me in the right direction? I have looked at WITH (CTE) and RANK() OVER but I cannot figure out whether these could be a solution to this problem. | ```
WITH AllRowNumbers AS (
SELECT RowType
,ROW_NUMBER() OVER (ORDER BY [Date]) AS RowNumber
FROM #T
)
,Type2RowNumbers AS (
SELECT RowNumber
FROM AllRowNumbers
WHERE RowType = 2
)
,Gaps AS (
SELECT RowNumber - ROW_NUMBER() OVER (ORDER BY RowNumber) AS Gap
FROM Type2RowNumbers
)
SELECT COUNT(DISTINCT Gap)
FROM Gaps
WHERE Gap > 0
``` | Here it is,...
```
;With CTE as
(
select *,Row_number() over (order by dateDt) as RNo
from mtT
)
select COUNT(*) from CTE as a
inner join CTE as b on a.RNo + 1 = b.RNo
and a.rowType <> b.rowType
where a.rowType = 1
``` | Count how many times row type 1 is found between row type 2 dates | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
So I have a history table called `Member_Phys_History` in SQL-Server 2008
Looks like this:
```
RecID, MemberID, Phys_ID, Phys_Start, Phys_End, Phys_Update
```
The first column is `identity`, the `Phys_Start`, `Phys_End`, and `Phys_Update` are dates.
I have another table called `Member_Phys_Update`
```
MemberID, Phys_ID, Phys_Start_Date
```
So once a week or so this Update table gets an update from the client, where the `Phys_ID` changes and the `Phys_Start` is later in time. So I add this information to my History table, and it would look like this:
```
1|ABC123|555|2014-01-01|NULL|NULL
2|ABC123|556|2014-04-01|NULL|NULL
```
**Here's what I need to do:**
I basically want to set the first record's `Phys_End_Date` to the day before the second record's `Phys_Start_Date`. So it would look like this:
```
1|ABC123|555|2014-01-01|2014-03-30|NULL
2|ABC123|556|2014-04-01|NULL|NULL
```
Unfortunately, I cannot use a stored procedure with a cursor; my DBA says it's inefficient. I was wondering if there was any way I could do this in a couple of queries...
A cursor may be ideal, but can I do this with a FETCH NEXT or something? | Try this
```
Select A.*, B.Phys_End_Date
from table1 A
outer apply (select (min(Phys_Start_Date) - 1) Phys_End_Date from table1 x
where x.Phys_Start_Date > A.Phys_Start_Date
AND X.MemberID = A.MemberID) B
```
[SQL DEMO](http://sqlfiddle.com/#!3/a2c7d3/1)
Edit (Adding Update SQL)
```
update A
set A.Phys_End_Date = B.Phys_End_Date
from table1 A
outer apply (select (min(Phys_Start_Date) - 1) Phys_End_Date from table1 x
where x.Phys_Start_Date > A.Phys_Start_Date
AND X.MemberID = A.MemberID) B
or
INSERT INTO table2 (memberid, phys_id,Phys_Start_Date,Phys_End_Date)
Select A.*, B.Phys_End_Date
from table1 A
outer apply (select (min(Phys_Start_Date) - 1) Phys_End_Date from table1 x
where x.Phys_Start_Date > A.Phys_Start_Date
AND X.MemberID = A.MemberID) B;
``` | As an alternative, you can do it using a Common Table Expression.
```
;WITH base
AS (
SELECT *
,ROW_NUMBER() OVER ( PARTITION BY MemberID ORDER BY Phys_Start ASC ) AS rn
FROM Member_Phys_History
),
nextDate
AS (
SELECT *
,ROW_NUMBER() OVER ( PARTITION BY MemberID ORDER BY Phys_Start ASC ) AS rn
FROM Member_Phys_History
)
SELECT b.RecID
,b.MemberID
,b.Phys_ID
,b.Phys_Start
,DATEADD(dd, -1, n.Phys_Start) AS Phys_End
,b.Phys_Update
FROM base AS b
LEFT OUTER JOIN nextDate AS n
ON b.MemberID = n.MemberID
AND b.rn = n.rn - 1;
```
Then it is really easy to turn it into an `UPDATE` statement:
```
;WITH base
AS (
SELECT *
,ROW_NUMBER() OVER ( PARTITION BY MemberID ORDER BY Phys_Start ASC ) AS rn
FROM Member_Phys_History
),
nextDate
AS (
SELECT *
,ROW_NUMBER() OVER ( PARTITION BY MemberID ORDER BY Phys_Start ASC ) AS rn
FROM Member_Phys_History
)
UPDATE b
SET b.Phys_End = DATEADD(dd, -1, n.Phys_Start)
FROM base AS b
LEFT OUTER JOIN nextDate AS n
ON b.MemberID = n.MemberID
AND b.rn = n.rn - 1;
``` | SELECT NEXT ROW, COMPARE, SET PREVIOUS ROW | [
"",
"sql",
"sql-server",
"sql-server-2008",
"conditional-statements",
""
] |
How to convert smalldatetime to varchar? I've tried everything from <http://msdn.microsoft.com/en-us/library/ms187928.aspx>, but it didn't work.
I want to convert smalldatetime into varchar, because I want to use it in select like this:
`select 'Some text'+@Date`
Thanks in advance | `121` is the style of the date, in this case `yyyy-mm-dd hh:mi:ss.mmm` (24h);
`char(16)` is the number of characters you wish to include, the first 16 in this case.
```
select 'Some text'+convert(char(16), @date, 121)
```
[Cast and Convert](http://msdn.microsoft.com/query/dev10.query?appId=Dev10IDEF1&l=EN-US&k=k%28CONVERT_TSQL%29;k%28SQL11.SWB.TSQLRESULTS.F1%29;k%28SQL11.SWB.TSQLQUERY.F1%29;k%28MISCELLANEOUSFILESPROJECT%29;k%28DevLang-TSQL%29&rd=true) | SELECT CONVERT(VARCHAR(20), YourDateColumn, 103) as NewColumnName
Here, 103 makes the date format dd/mm/yyyy;
if you want mm/dd/yyyy, you have to use 100 | Converting smalldatetime datatype to varchar datatype | [
"",
"sql",
"sql-server",
""
] |
I have two tables:
* **reports**
* **report_contents**
which are related by a foreign key **content_id** on the **reports** table.
I need to create a procedure which deletes some reports together with their contents, like this:
```
DELETE FROM report_contents WHERE id IN
(SELECT content_id FROM reports WHERE extra_value = extraValue)
DELETE FROM reports WHERE extra_value = extraValue
```
But it is impossible to delete records from the **report_contents** table first, because there is a constraint on the **content_id** column of the **reports** table.
On the other hand, when I delete records from the **reports** table first, I won't know which report_contents should be deleted...
```
CREATE OR REPLACE PROCEDURE delete_reports (extraValue NUMBER) IS
BEGIN
/* removing reports with extra_value = extraValue */
/* removing their report_contents */
END;
```
What is the best way to do this? (I don't want to add an *on delete cascade* constraint) | If the number of ids is relatively small (i.e. just a few hundred or thousand), you can comfortably store the IDs to delete temporarily in a PL/SQL array.
```
PROCEDURE delete_reports (p_extra_value NUMBER) IS
TYPE id_table IS TABLE OF reports.content_id%TYPE INDEX BY BINARY_INTEGER;
ids id_table;
BEGIN
/* which reports to delete? */
SELECT content_id BULK COLLECT INTO ids
FROM reports WHERE extra_value = p_extra_value;
/* removing reports with extra_value = extraValue */
DELETE reports WHERE extra_value = p_extra_value;
/* removing their report_contents */
FORALL i IN 1..ids.COUNT
DELETE report_contents WHERE id = ids(i);
END delete_reports;
```
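For larger id sets, the single `BULK COLLECT` above can be rewritten as a batched loop. A sketch (the 10000 batch size and the per-batch commit are assumptions, not part of the original answer):

```
PROCEDURE delete_reports_batched (p_extra_value NUMBER) IS
  TYPE id_table IS TABLE OF reports.content_id%TYPE;
  ids id_table;
  CURSOR c_ids IS
    SELECT content_id FROM reports WHERE extra_value = p_extra_value;
BEGIN
  LOOP
    OPEN c_ids;
    FETCH c_ids BULK COLLECT INTO ids LIMIT 10000;  -- assumed batch size
    CLOSE c_ids;
    EXIT WHEN ids.COUNT = 0;
    -- parent rows first (reports carries the FK), then their contents
    FORALL i IN 1..ids.COUNT
      DELETE reports WHERE content_id = ids(i) AND extra_value = p_extra_value;
    FORALL i IN 1..ids.COUNT
      DELETE report_contents WHERE id = ids(i);
    COMMIT;  -- optional: bounds undo usage per batch
  END LOOP;
END delete_reports_batched;
```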
If the number of ids is large (e.g. millions or more) then I'd probably break this into a loop and get the ids in batches. | Since its an `SP`, you could use an intermediate `TABLE` variable to store your results
```
CREATE OR REPLACE PROCEDURE delete_reports (extraValue NUMBER) IS
BEGIN
DECLARE @TABLE table
( CONTENT_ID int)
INSERT INTo @TABLE
SELECT content_id FROM reports WHERE extra_value = extraValue
DELETE FROM reports B WHERE EXISTS (SELECT * FROM @TABLE A WHERE A.Content_id=B.Content_id)
DELETE FROM report_contents C WHERE EXISTS (SELECT * FROM @TABLE A WHERE A.Content_id=C.ID)
END
```
I am assuming that you could use `CONTENT_ID` to `delete` from both tables | Removing records from related tables | [
"",
"sql",
"oracle",
"plsql",
""
] |
I have two queries:
```
SELECT
users.id,
users.gender,
users.status,
users.icon_id,
users.image_name,
coords.lat,
coords.lng,
users.mess_count
FROM
users
INNER JOIN
coords ON users.id = coords.user_id
```
then I select blocked users:
```
SELECT
first_user,
second_user
FROM
blocks
WHERE
first_user = $1 OR second_user = $1
```
From the first table I need to select all users who have coordinates and are not blocked; I also need some public information (gender, etc.). Then, because I need two-sided blocking, I have to check whether a user blocked him, or he was blocked by that user. So $1 is the current user, and I check whether my id is in the `block` table; if it is, I exclude the other user from the first query.
Then, using string operations in my programming language, I transform my string to exclude the results I get from the second query.
I could probably do it with `EXCEPT`, but I can't here, because only 2 columns are selected by the second query, and I need many more in the final result: `users.id, users.gender, users.status, users.icon_id, users.image_name, coords.lat, coords.lng, users.mess_count`. | There are several ways to do it; the only mildly compromising factor is that I *believe* you want to exclude users that appear in *either* of two columns in the `blocks` table.
SQL traditionally has weaker performance with `OR` logic, which the following query attempts to work around. *(In part due to its ability to make better use of indexes)*
```
SELECT
users.id,
users.gender,
users.status,
users.icon_id,
users.image_name,
coords.lat,
coords.lng,
users.mess_count
FROM
users
INNER JOIN
coords
ON users.id=coords.user_id
WHERE
NOT EXISTS (SELECT * FROM blocks WHERE first_user = users.id AND second_user = $1)
AND NOT EXISTS (SELECT * FROM blocks WHERE second_user = users.id AND first_user = $1)
```
Depending on the version of PostgreSQL, the optimiser *may* be less efficient with correlated sub-queries, such as those I have used above. In such a case, the following may be more performant still. *(It still avoids using **OR**.)*
```
SELECT
users.id,
users.gender,
users.status,
users.icon_id,
users.image_name,
coords.lat,
coords.lng,
users.mess_count
FROM
users
INNER JOIN
coords
ON users.id=coords.user_id
LEFT JOIN
(
SELECT first_user AS user_id FROM blocks WHERE second_user = $1
UNION
SELECT second_user AS user_id FROM blocks WHERE first_user = $1
)
AS blocks
ON blocks.user_id = users.id
WHERE
blocks.user_id IS NULL
``` | Check this out, I wrote the query using `NOT IN`.
```
SELECT
users.id,
users.gender,
users.status,
users.icon_id,
users.image_name,
coords.lat,
coords.lng,
users.mess_count
FROM
users
INNER JOIN coords ON users.id=coords.user_id
Where
users.id NOT IN (SELECT first_user FROM blocks WHERE first_user=$1)
AND
users.id NOT IN (SELECT second_user FROM blocks WHERE second_user=$1)
``` | How to replace 2 queries with except | [
"",
"sql",
"postgresql",
"except",
""
] |
Please help:
I have to create a column based on an expression from other columns,
but before that expression runs, there has to be a date check:
if the date is before a specific date, it must evaluate one expression; if the date is after said date, it must evaluate another expression.
THEN: there's a unique column that has the digits 1-10 in it; each number represents a different expression.
The inner join and selecting the rows are fine; it's just the switch and if expression that are beating me.
basically the statement needs to look like this
```
select column1 if(date<neededDate)
{select case ExpressionColumn
when 1 //do stuff
when 2 // do stuff
else// do nothing
}
select column1 if(date>neededDate)
{select case ExpressionColumn
when 1 //do stuff
when 2 // do stuff
else// do nothing
}
```
I hope this made sense. | You need two CASE expressions nested within an outer CASE expression; it can be done like the following:
```
SELECT CASE WHEN date > neededDate THEN
CASE ExpressionColumn
WHEN 1 THEN '1'
WHEN 2 THEN '2'
ELSE 'null'
END
WHEN date < neededDate THEN
CASE ExpressionColumn
WHEN 1 THEN '1'
WHEN 2 THEN '2'
ELSE 'null'
END
ELSE 'null'
END
FROM YourTable;
``` | you have your syntax wrong:
```
select case sign(datediff('s', date, neededDate)) -- precision: second
when 0 then -- case missing in your spec !
else
case ExpressionColumn
when 1 then -- case 1
when 2 then -- case 2
else -- whatever
end
end
from whatever
;
```
replace each comment with the appropriate expression over columns.
in your case a searched case expression might be more convenient:
```
select case
when (date < neededDate) then
-- note the difference: instead of 'case x when y then ...' you write 'case when b then ...'
case ExpressionColumn
when 1 then -- case 1
when 2 then -- case 2
else -- whatever
end
when (date > neededDate) then
case ExpressionColumn
when 1 then -- case 1
when 2 then -- case 2
else -- whatever
end
else -- this is missing from your spec!
end
from whatever
;
``` | SQL INNERJOIN with a switch case and if statement in select | [
"",
"sql",
"sql-server",
""
] |
I am using this query to display a column with the money datatype; how could I insert commas into the output?
```
'$ '+CONVERT(varchar,CONVERT(decimal(10,0), CONVERT(money,B2.Total_Amount)),1) as Total_Amount,
'$ '+CONVERT(varchar,CONVERT(decimal(10,0), CONVERT(money,B2.Monthly_Amount )),1) as Monthly_Amount
```
Output
```
1500
```
Expected Output:
```
1,500
``` | I think you need this
```
'$ '+ Replace(CONVERT(varchar,CAST(B2.Total_Amount AS money),1) ,'.00','') as Total_Amount,
'$ '+ Replace(CONVERT(varchar, CAST(B2.Monthly_Amount As money),1),'.00','') as Monthly_Amount
```
If B2.Monthly\_Amount is
```
1987654321
```
Then the output will be
```
1,987,654,321
``` | you can achieve this with this:
`select replace(convert(varchar,convert(Money, B2.Total_Amount),1),'.00','')`
BUT YOU SHOULDN'T DO IT
that's part of the display layer | Display comma for column values | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have a dropdown and I want to check the selected value first before I run my SQL command so here is my logic: if the selected value of the dropdown is not `'ALL'` then I want to run this sql statement:
```
select distinct Location
from myTable
where PROGRAM = '"+program+'"
```
else if selected value of the dropdown is set to `'ALL'` then I want to run this SQL command instead:
```
select distinct Location
from myTable
```
Here is my code, which works fine but not the way I want it to work. How can I modify the SQL command so I can check the condition that I mentioned above? Thanks
```
protected void BindServiceList(DropDownList ddlService)
{
DropDownList ddl = (DropDownList)GV_MJJ.HeaderRow.FindControl("ddlProgram");
string program = ddl.SelectedValue;
String strConnString = System.Configuration.ConfigurationManager
.ConnectionStrings["myConn"].ConnectionString;
SqlConnection con = new SqlConnection(strConnString);
SqlDataAdapter sda = new SqlDataAdapter();
SqlCommand cmd = new SqlCommand("select distinct LOCATION" +
" from myTable where PROGRAM = '" + program + "' ");
cmd.Connection = con;
con.Open();
ddlService.DataSource = cmd.ExecuteReader();
ddlService.DataTextField = "LOCATION";
ddlService.DataValueField = "LOCATION";
ddlService.DataBind();
con.Close();
ddlService.Items.FindByValue(ViewState["Filter_Serv"].ToString())
.Selected = true;
}
``` | Don't change your SQL command; instead, create different SQL command text based on the condition. Also, you should use [SqlParameter](http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlparameter.aspx) instead of string concatenation; your code is prone to [SQL injection](http://en.wikipedia.org/wiki/SQL_injection). Consider enclosing your connection and command objects in [`using` statements](http://msdn.microsoft.com/en-us//library/yh598w02.aspx), as that will ensure resource disposal at the end of the block.
```
using (SqlConnection con = new SqlConnection(strConnString))
{
SqlDataAdapter sda = new SqlDataAdapter();
using (SqlCommand cmd = new SqlCommand())
{
if (program.Equals("All", StringComparison.InvariantCultureIgnoreCase))
{
cmd.CommandText = "select distinct LOCATION from myTable";
}
else
{
cmd.CommandText = "select distinct LOCATION from myTable WHERE PROGRAM = @program";
cmd.Parameters.AddWithValue("@program", program);
}
cmd.Connection = con;
con.Open();
ddlService.DataSource = cmd.ExecuteReader();
ddlService.DataTextField = "LOCATION";
ddlService.DataValueField = "LOCATION";
ddlService.DataBind();
con.Close(); // can be left out because of `using` statement
ddlService.Items.FindByValue(ViewState["Filter_Serv"].ToString())
.Selected = true;
}
}
``` | Do this:
```
protected void BindServiceList(DropDownList ddlService)
{
DropDownList ddl = (DropDownList)GV_MJJ.HeaderRow.FindControl("ddlProgram");
string program = ddl.SelectedValue;
String strConnString = System.Configuration.ConfigurationManager
.ConnectionStrings["myConn"].ConnectionString;
using (var con = new SqlConnection(strConnString) )
using (var cmd = new SqlCommand("select distinct LOCATION from myTable where PROGRAM LIKE @Program", con) )
{
//guessing at column type/length here
cmd.Parameters.Add("@Program", SqlDbType.NVarChar, 20).Value = program;
con.Open();
ddlService.DataSource = cmd.ExecuteReader();
ddlService.DataTextField = "LOCATION";
ddlService.DataValueField = "LOCATION";
ddlService.DataBind();
}
ddlService.Items.FindByValue(ViewState["Filter_Serv"].ToString())
.Selected = true;
}
```
Notice that I fixed your sql injection vulnerability! Also notice that I changed the `=` to a `LIKE`. Then you can set the `value` property for the `All` item in the `ddlProgram` control to: `%`. (See the [`AppendDataBoundItems`](http://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.listcontrol.appenddatabounditems%28v=vs.110%29.aspx) property if you need help getting that working with a databound ddl). Using that wildcard with the `LIKE` operator will result in the query returning all locations. | Choose between two SQL statements using CASE or something else | [
"",
"asp.net",
"sql",
"sql-server",
""
] |
I am trying to run a query from SQL Developer and the query has bind variables (`:var`). I am having a problem with the date variables.
I have tried all the possible combinations to format the date using the `to_date()` function.
Every time I get the exception below:
```
ORA-00932: inconsistent datatypes: expected DATE got NUMBER
00932. 00000 - "inconsistent datatypes: expected %s got %s"
*Cause:
*Action:
```
Sorry, I can't post an image here. | Try changing your query to be:
```
select first_name,
last_name,
dob,
org
from emp
where dob > to_date(:highDate,'DD-MON-YYYY');
```
then when prompted, enter `'20-JAN-2010'`. | Just copying the answer from Oracle Community forum:
> You should be able to enter dates which conform to your
> `NLS_DATE_FORMAT` setting.
>
> e.g. If `NLS_DATE_FORMAT` is `DD-MON-YYYY`, you can enter
> `24-jan-2011` for today's date.
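For example (assuming you are allowed to alter the session), you can check the current setting and make it explicit before entering the bind value:

```
-- Check the session's current date format...
SELECT value
FROM   nls_session_parameters
WHERE  parameter = 'NLS_DATE_FORMAT';

-- ...and optionally pin it before running the query with the bind:
ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY';
```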
Worked for me. | How to use date variable in sql developer's "Enter Binds" dialog? | [
"",
"sql",
"oracle",
"oracle-sqldeveloper",
""
] |
I had a table named `movies` which had the fields `id` as primary key, and two varchars: `title` and `genre`.
I created a new table named `genres` with the int field `id` as primary key and a `description` varchar. I changed the field `genre` in my `movies` table so I could create a foreign key referencing a genre.
However, MySQL Workbench says there's an error when creating the foreign key.
Here's the statement:
```
ALTER TABLE `managemovies`.`movies`
ADD CONSTRAINT `genre_reference`
FOREIGN KEY (`genre` )
REFERENCES `managemovies`.`genres` (`id` )
ON DELETE NO ACTION
ON UPDATE NO ACTION
, ADD INDEX `genre_reference_idx` (`genero` ASC) ;
```
Error:
```
ERROR 1452: Cannot add or update a child row: a foreign key constraint fails (`managemovies`.`#sql-3ba_2b`, CONSTRAINT `genre_reference` FOREIGN KEY (`genre`) REFERENCES `genres` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION)
```
SQL Statement: [... same statement as above ...]
ERROR: Error when running failback script. Details follow.
```
ERROR 1046: No database selected
SQL Statement:
CREATE TABLE `movies` [...]
[... the errors above repeated again ...]
``` | Clear your table contents and try adding the foreign key again.
If your table contains data which does not match any foreign key field value, you will see this error... | It looks like your table `movies` has data in the `genre` column which is not present in the `genres.id` column.
Your statement should work after removing the invalid data.
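For example, a query along these lines (a sketch using the table names from the question) would list the orphaned rows, so you can fix or remove just those instead of clearing everything:

```
SELECT m.id, m.genre
FROM managemovies.movies m
LEFT JOIN managemovies.genres g ON g.id = m.genre
WHERE m.genre IS NOT NULL
  AND g.id IS NULL;
```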
Hope it helps
Vishad | Error when adding foreign key | [
"",
"mysql",
"sql",
"database",
"mysql-workbench",
""
] |
I have the below string. Can someone help me select only the characters after **`data/`** and before **`Cross/`**?
```
/dss/data/20131223_155909_7325/Cross-Boundary-Collaboration_Participant_Workbook_v1.4.onepkg
```
Please note that **`/dss/data/`** will be consistent, but the characters after **`7325/`** may vary. | This will select everything after `/dss/data/` up to the next slash `/`:
```
DECLARE @string NVARCHAR(MAX);
SET @string = '/dss/data/20131223_155909_7325/Cross-Boundary-Collaboration_Participant_Workbook_v1.4.onepkg';
SELECT LEFT(REPLACE(@string,'/dss/data/',''), CHARINDEX('/',REPLACE(@string,'/dss/data/',''))-1)
```
`20131223_155909_7325` in this example | USE [CHARINDEX and then SUBSTRING](http://social.technet.microsoft.com/wiki/contents/articles/17948.t-sql-right-left-substring-and-charindex-functions.aspx)
EDIT:
This will give you an idea on what to do:
```
DECLARE @URL VARCHAR(1000)
SET @URL = 'http://www.sql-server-helper.com/tips/tip-of-the-day.aspx?tid=58'
SELECT SUBSTRING(@URL, 8, CHARINDEX('/', @URL, 9) - 8) AS [Domain Name],
REVERSE(SUBSTRING(REVERSE(@URL), CHARINDEX('?', REVERSE(@URL)) + 1,
CHARINDEX('/', REVERSE(@URL)) - CHARINDEX('?', REVERSE(@URL)) - 1)) AS [Page Name],
SUBSTRING(@URL, CHARINDEX('?', @URL) + 1, LEN(@URL)) AS [Query Parameter]
```
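Applied to the question's path, the same locate-then-slice idea can be sanity-checked like this (a sketch using SQLite's `instr`/`substr` via Python; `CHARINDEX`/`SUBSTRING` are the T-SQL counterparts):

```python
import sqlite3

path = ('/dss/data/20131223_155909_7325/'
        'Cross-Boundary-Collaboration_Participant_Workbook_v1.4.onepkg')

conn = sqlite3.connect(":memory:")
# Drop the fixed 10-character '/dss/data/' prefix, then cut at the next '/'.
row = conn.execute("""
    SELECT substr(rest, 1, instr(rest, '/') - 1)
    FROM (SELECT substr(?, 11) AS rest)
""", (path,)).fetchone()
print(row[0])  # 20131223_155909_7325
```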
Source: click [here](http://www.sql-server-helper.com/tips/tip-of-the-day.aspx?tkey=4AB06421-E859-4B5F-A948-0C9640F3108D&tkw=uses-of-the-substring-string-function) | Select character in between slashes --sql | [
"",
"sql",
"sql-server",
""
] |
I have a table with the structure below. I'm using phpMyAdmin 3.4.5 and MySQL 5.5.16.
**Table**
```
Invoice_ID - PO_ID- Product - Quantity Invoice_Qty - Amount
Inv1 - PO1 - P1 - 100 - 50 - 1000
Inv2 - PO1 - P1 - 100 - 50 - 1000
Inv3 - PO2 - P2 - 50 - 20 - 500
Inv4 - PO2 - P2 - 50 - 20 - 500
Inv5 - PO2 - P3 - 50 - 10 - 250
```
What I really want to do is this:
If the previous row's **PO\_ID and Product** are the same as the current row's PO\_ID and Product,
then the **Quantity** of the current row should be zero
(so that the sum of Invoice\_Qty equals the Quantity for each group).
My expected output is given below:
```
Output:
Invoice_ID - PO_ID- Product - Quantity Invoice_Qty - Amount
Inv1 - PO1 - P1 - 100 - 50 - 1000
Inv2 - PO1 - P1 - 0 - 50 - 1000
Inv3 - PO2 - P2 - 50 - 20 - 500
Inv4 - PO2 - P2 - 0 - 20 - 500
Inv5 - PO2 - P3 - 0 - 10 - 250
```
I tried [How to get result set like Oracle lag function](https://stackoverflow.com/questions/21826908/how-to-get-result-set-like-orcle-lag-function), but it did not work for me.
I also tried to write a stored procedure for this, but I'm stuck on the result set:
I don't know how to assign and retrieve it.
Please help me with this problem.
Refer:
<http://sqlfiddle.com/#!2/5c0b0/4> | Your sqlfiddle was confusing. Please don't provide sample data here and then use different sample data in the sqlfiddle. And your desired result here is wrong, since you said in the description
> If Previous Row of PO\_ID and and Product Name is Same as current Row PO\_ID and Product then Quantity of current row should be zero
Anyway, used my own...
```
select
t.*,
if(@previd = po_id and @prevprod = Product, 0, Quantity) AS new_quantity,
@previd := po_id,
@prevprod := product
from
t
, (select @previd:=null, @prevprod:=null) var_init
order by po_id, product
```
* `sqlfiddle`
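On MySQL 8.0+ (the question uses 5.5, which does not support this), the session-variable trick can be replaced by the `LAG()` window function. A sketch of that idea, checked here with SQLite, which shares the window-function syntax:

```python
import sqlite3

# Window functions need SQLite >= 3.25 (bundled with modern Python builds).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (Invoice_ID TEXT, PO_ID TEXT, Product TEXT, Quantity INT);
    INSERT INTO t VALUES
        ('Inv1', 'PO1', 'P1', 100), ('Inv2', 'PO1', 'P1', 100),
        ('Inv3', 'PO2', 'P2', 50),  ('Inv4', 'PO2', 'P2', 50),
        ('Inv5', 'PO2', 'P3', 50);
""")

# Zero the quantity whenever the previous row (in PO_ID/Product order)
# carries the same PO_ID and Product.
rows = conn.execute("""
    SELECT Invoice_ID,
           CASE WHEN LAG(PO_ID)   OVER w = PO_ID
                 AND LAG(Product) OVER w = Product
                THEN 0 ELSE Quantity END AS new_quantity
    FROM t
    WINDOW w AS (ORDER BY PO_ID, Product, Invoice_ID)
    ORDER BY PO_ID, Product, Invoice_ID
""").fetchall()
print(rows)  # [('Inv1', 100), ('Inv2', 0), ('Inv3', 50), ('Inv4', 0), ('Inv5', 50)]
```

Unlike user variables, `LAG()` does not depend on the evaluation order of expressions in the select list.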
Note that the order in the select clause is important, as is the order by clause. | this works :):) :
```
select Invoice_ID,PO_ID,product,
case when
decode(lead(Quantity) over (order by PO_ID),Quantity,'SAME','DIFF') = 'SAME'
then Quantity
else 0
end Quantity, Amount
from <table-name>
``` | Change Value based on previous rows in mysql | [
"",
"mysql",
"sql",
"stored-procedures",
"lag",
""
] |
I have inherited a Web Forms codebase and am having trouble with a SQL query.
```
SELECT foretag.namn, foretag.epost, foretag.forlangEj, service_fakturering.*
FROM foretag
INNER JOIN service ON foretag.id = service.foretagsid
INNER JOIN service_fakturering ON service.id = service_fakturering.service_id
WHERE service_fakturering.giltighets_datum <= DATEADD(D, 30, GETDATE())
ORDER BY bestallnings_datum DESC, id DESC
```
In the table `service_fakturering` there are multiple rows with the same `service_id`
I need to select the last one, max id, to be used in the `INNER JOIN service_fakturering ON service.id = service_fakturering.service_id` | Using [`ROW_NUMBER()`](http://technet.microsoft.com/en-us/library/ms186734.aspx) function, along with [common table expression](http://technet.microsoft.com/en-us/library/ms190766%28v=sql.105%29.aspx), it can be done like this:
```
WITH cte_service_fakturering AS
(
SELECT *, ROW_NUMBER() OVER (PARTITION BY service_id ORDER BY id DESC) RN
FROM service_fakturering
)
SELECT foretag.namn, foretag.epost, foretag.forlangEj, cte.*
FROM foretag
INNER JOIN service ON foretag.id = service.foretagsid
INNER JOIN cte_service_fakturering cte ON service.id = cte.service_id AND cte.RN = 1
WHERE service_fakturering.giltighets_datum <= DATEADD(D, 30, GETDATE())
ORDER BY bestallnings_datum DESC, id DESC
``` | The Sub select will group your service\_fakturering rows and get the maxId for you. This is then used to join back into your query and filter for only those rows you are interested in.
```
SELECT foretag.namn, foretag.epost, foretag.forlangEj, service_fakturering.*
FROM foretag
INNER JOIN service ON foretag.id = service.foretagsid
INNER JOIN service_fakturering ON service.id = service_fakturering.service_id
INNER JOIN (Select service_fakturering.service_id, Max(service_fakturering.id) as Id
FROM service_fakturering
GROUP BY service_fakturering.service_id) x
ON x.service_id = service_fakturering.service_id
AND x.Id = service_fakturering.Id
WHERE service_fakturering.giltighets_datum <= DATEADD(D, 30, GETDATE())
ORDER BY bestallnings_datum DESC, id DESC
``` | Select the highest Id Where serviceId occurs more than once | [
"",
"sql",
"sql-server",
""
] |
I need an algorithm for calculating the ordinal of a given day of the week within its month (like the 1st Friday of the month, the 3rd Monday of the month, etc.).
Any ideas are appreciated.
Here is the final result:
```
declare @dt date = GetDate()
declare @DayOfWeek tinyint = datepart(weekday,@dt)
declare @DayOfMonth smallint = day(@dt)
declare @FirstDayOfMonth date = dateadd(month,datediff(month,0,@dt),0)
declare @DayOfWeekInMonth tinyint = @DayOfMonth / 7 + 1 -
(case when day(@FirstDayOfMonth) > day(@dt) then 1 else 0 end)
declare @Suffix varchar(2) =
case
when @DayOfWeekInMonth = 1 then 'st'
when @DayOfWeekInMonth = 2 then 'nd'
when @DayOfWeekInMonth = 3 then 'rd'
when @DayOfWeekInMonth > 3 then 'th'
end
select
cast(@DayOfWeekInMonth as varchar(2))
+ @Suffix
+ ' '
+ datename(weekday,@Dt)
+ ' of '
+ datename(month,@dt)
+ ', '
+ datename(year,@Dt)
```
PS: And if you can think of a better way to state the problem, please do. | The following code will give you `1st Wednesday of April 2014` for today:
```
SELECT cast((DATEPART(d, GETDATE() - 1) / 7) + 1 as varchar(12))
+ 'st ' + DATENAME(WEEKDAY, getdate()) + ' of ' +
DATENAME(month, getdate()) + ' ' + DATENAME(year, getdate());
```
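The `(day - 1) / 7 + 1` arithmetic behind the week-in-month number is easy to sanity-check outside SQL. A quick Python sketch (the `ordinal_in_month` helper is mine, not from the answer):

```python
from datetime import date

def ordinal_in_month(d: date) -> int:
    """1 for the 1st Monday/Friday/... of the month, 2 for the 2nd, etc.
    (A hypothetical helper mirroring the (day - 1) / 7 + 1 idea above.)"""
    return (d.day - 1) // 7 + 1

print(ordinal_in_month(date(2014, 4, 2)))   # 1 -> 1st Wednesday of April 2014
print(ordinal_in_month(date(2014, 4, 29)))  # 5 -> 5th Tuesday of April 2014
```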
For any date use the code below. It gives `5th Tuesday of April 2014` for `@mydate = '2014-04-29'` in the example:
```
DECLARE @mydate DATETIME;
SET @mydate = '2014-04-29';
SELECT
case
when DATEPART(d, @mydate) = 1 then cast((DATEPART(d, @mydate ) / 7) + 1 as varchar(12))
else cast((DATEPART(d, @mydate - 1) / 7) + 1 as varchar(12))
end
+
case
when (DATEPART(d, @mydate - 1) / 7) + 1 = 1 then 'st '
when (DATEPART(d, @mydate - 1) / 7) + 1 = 2 then 'nd '
when (DATEPART(d, @mydate - 1) / 7) + 1 = 3 then 'rd '
else 'th '
end
+ DATENAME(WEEKDAY, @mydate) + ' of ' +
DATENAME(month, @mydate) + ' ' + DATENAME(year, @mydate) as [Long Date Name]
``` | Okeeeey my tuuuurn ,
Please rate my answer Metaphor hhh, Here's the cooode :
```
declare @v_month nvarchar(2) = '04'
,@v_annee nvarchar(4) = '2014'
declare @v_date date = convert(date,@v_annee+'-'+@v_month+'-01')
declare @v_date_2 date = dateadd(M,1,@v_date)
if OBJECT_ID('temp') is not null
drop table temp
create table temp(_date date, _DayOfMonth nvarchar(20), _order int)
while (@v_date<@v_date_2)
begin
set @v_date =@v_date;
WITH _DayOfWeek AS (
SELECT 1 id, 'monday' Name UNION ALL
SELECT 2 id, 'tuesday' Name UNION ALL
SELECT 3 id, 'wednesday' Name UNION ALL
SELECT 4 id, 'thursday' Name UNION ALL
SELECT 5 id, 'friday' Name UNION ALL
SELECT 6 id, 'saturday' Name UNION ALL
SELECT 7 id, 'sunday' Name)
insert into temp(_date,_DayOfMonth)
SELECT
@v_date
,(select Name from _DayOfWeek where id = DATEPART(WEEKDAY,@v_date))
SET @v_date = DATEADD(DAY,1,@v_date)
END
UPDATE tmp1
SET _order = _order_2
FROM temp tmp1
INNER JOIN
(SELECT *, ROW_NUMBER() OVER(PARTITION BY _DayOfMonth ORDER BY _date ASC) AS _order_2 FROM temp) tmp2
ON tmp1._date = tmp2._date
SELECT * FROM temp
SELECT *
FROM temp
WHERE _DayOfMonth = 'thursday'
AND _order = 3
```
I hope this will help you :)
Good Luck | Get day of the week in month (2nd Tuesday, etc.) | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I'm struggling with a SQL join for parent client records (literally in my example!). My db tables are made up of something like this (excuse the very generic example).
```
Table Room
Room No
1
2
3
Table Customer
| Customer ID | Customer Name | Associated
| 1 | Joe Bloggs |
| 2 | Little Bloggs | 1
| 3 | Little bloggs pet | 2
Table Bookings
| Room No | Customer |
| 1 | 1 |
```
What I'm after, is some SQL to grab the full 3 levels and room e.g.
```
Room Customer
1 Joe Bloggs
1 Little Bloggs
1 Little Bloggs pet
```
There isn't a direct join on customers 2 and 3, but they are inherited via customer 1 to room 1. | You should be able to do this using a recursive [CTE](http://technet.microsoft.com/en-us/library/ms190766%28v=sql.105%29.aspx).
Something like below
```
;WITH Vals AS (
SELECT CustomerID,
CustomerName,
b.Room
FROM Customer c INNER JOIN
Bookings b ON c.CustomerID = b.Customer
UNION ALL
SELECT c.CustomerID,
c.CustomerName,
v.Room
FROM Vals v INNER JOIN
Customer c ON c.Associated = v.CustomerID
)
SELECT *
FROM Vals
``` | ```
;with Cte (RoomId,Name ,CustomerId)
as(
Select R.RoomId,C.Name,C.CustomerId
From Booking B
Inner Join Customer C on C.CustomerId=B.Customer
Inner Join Room R On R.RoomId=B.RoomNo
Union All
Select Cte.RoomId,Cus.Name,Cus.CustomerId
From Customer Cus
inner join Cte On Cte.CustomerId=Cus.Associated
)
Select RoomId,Name from Cte
```
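Recursive CTEs like the ones above are easy to try out in SQLite. A sketch using the question's sample data (table and column names simplified):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Customer (CustomerId INT, Name TEXT, Associated INT);
    INSERT INTO Customer VALUES
        (1, 'Joe Bloggs', NULL),
        (2, 'Little Bloggs', 1),
        (3, 'Little bloggs pet', 2);
    CREATE TABLE Booking (RoomNo INT, Customer INT);
    INSERT INTO Booking VALUES (1, 1);
""")

# Anchor: customers with a direct booking; recursive step: anyone
# associated (directly or transitively) with an already-found customer.
rows = conn.execute("""
    WITH RECURSIVE cte(RoomNo, Name, CustomerId) AS (
        SELECT b.RoomNo, c.Name, c.CustomerId
        FROM Booking b JOIN Customer c ON c.CustomerId = b.Customer
        UNION ALL
        SELECT cte.RoomNo, c.Name, c.CustomerId
        FROM Customer c JOIN cte ON c.Associated = cte.CustomerId
    )
    SELECT RoomNo, Name FROM cte ORDER BY CustomerId
""").fetchall()
print(rows)  # [(1, 'Joe Bloggs'), (1, 'Little Bloggs'), (1, 'Little bloggs pet')]
```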
Read more [Recursive Queries Using Common Table Expressions](http://technet.microsoft.com/en-us/library/ms186243%28v=sql.105%29.aspx) | SQL Join for child records | [
"",
"sql",
"sql-server",
""
] |
I have the table below. The only difference in the data is **suff, WT**; the rest of the columns look the same.
Things table
```
Prefix Baseletter suff CSCcode Description WT BNO BNO-CSCcode
EIGC A5560 BGA 04020 blah1 0 5560 5560-04020
EIGC A5560 HEA 04020 blah2 17.9 5560 5560-04020
```
Mapp table
```
BNO BNO-CSCcode EID Description
5560 5560-04020 4005 blah1
5560 5560-04020 4011 blah2
```
I'm trying to inner join them using **BNO-CSCcode** to get EID for corresponding **BNO**. But my query is returning duplicates. I'm getting 4 records, even though the first table only has two records.
My SQL query:
```
SELECT
Things.Prefix ,
Things.Baseletter,
Things.suff,
Things.CSCcode,
Things.WT,
Mapping.BNO-CSCcode,
Mapping.EID
FROM
Things
INNER JOIN Mapping ON Things.BNO-CSCcode = Mapping.BNO-CSCcode
```
Why am I getting these duplicates, and how can I fix that? | `BNO-CSCcode` contains duplicates. You are joining the first record of `Things` to both records of `Mapp`, then the second record of `Things` joins to both records of `Mapp`, giving you a total of 4 records.
If you want to join these together, you need some unique way of identifying the rows between the tables.
A Distinct should bring it back down to 2 records, but likely you need to join on a few more fields to get it to 2 records:
```
SELECT DISTINCT
Things.Prefix,
Things.Baseletter,
Things.suff,
Things.CSCcode,
Things.WT,
Mapping.BNO-CSCcode,
Mapping.EID
FROM
Things
INNER JOIN Mapping ON Things.BNO-CSCcode = Mapping.BNO-CSCcode
``` | You are getting duplicates because both records in the `Things` table have a `BNO-CSCcode` of 5560-04020, as do both records in the `Mapp` table. The only distinction between the two that I can see is the `Description` field. So, if you use the following query:
```
SELECT Things.Prefix ,Things.Baseletter,Things.suff,Things.CSCcode,Things.WT,Mapping.BNO-CSCcode,Mapping.EID
FROM Things INNER JOIN Mapping ON Things.BNO-CSCcode = Mapping.BNO-CSCcode AND
Things.Description = Mapping.Description
```
...you should eliminate the duplication.
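The 2 x 2 blow-up itself is easy to reproduce in miniature (a Python/SQLite sketch with the question's duplicated key values; column names shortened):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Things (suff TEXT, code TEXT);
    CREATE TABLE Mapp   (code TEXT, EID INT);
    INSERT INTO Things VALUES ('BGA', '5560-04020'), ('HEA', '5560-04020');
    INSERT INTO Mapp   VALUES ('5560-04020', 4005), ('5560-04020', 4011);
""")

# Each of the 2 Things rows matches both Mapp rows: 2 x 2 = 4 rows out.
n = conn.execute(
    "SELECT COUNT(*) FROM Things t JOIN Mapp m ON t.code = m.code"
).fetchone()[0]
print(n)  # 4
```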
However, using a field that is obviously a text field in a join is not best practice, as if they are separately user-generated, they are unlikely to match, and it takes longer (relatively speaking) to match fields with a higher byte-count than those with low byte-counts. | Inner join returning duplicate records access sql | [
"",
"sql",
"ms-access",
""
] |