| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I have this mysql syntax:
```
INSERT INTO `utilizatori`(
utilizator
)
SELECT
'Mama'
FROM
`utilizatori`
WHERE
NOT EXISTS (SELECT `utilizator` FROM `utilizatori` WHERE utilizator='Mama')
```
`utilizatori` is a table, `utilizator` is a column, `Mama` is a value
This syntax inserts a value into the table only if it doesn't already exist. If the value exists it won't be created, so far so good. But if there is no 'Mama' value yet, it inserts it multiple times: for example, if the table already has 4 rows, it inserts the 'Mama' value 4 times, creating 4 new rows. Any idea?
|
I would make the task easier and clearer by making the `utilizator` field `unique`.
That way, when you try to add a new row with the existing value `'Mama'` for `utilizator`, MySQL returns an error with code `1062` and won't let you have multiple rows with `Mama` in the table.
So when you run query:
```
INSERT INTO `utilizatori` (utilizator) VALUES ('Mama')
```
You can check whether `MySQL` returns an error, but it is better to check the number of affected rows: if the insert was successful it will be `1`, otherwise `0`.
The checking mechanism depends on which language and driver you use to connect to the database.
Since you had the `PHP` tag selected, you may be using `PDO`; in that case
```
$statement->rowCount(); // $statement = PDOStatement, I assume you know this thing well
```
will give you the desired result.
**Final simple example:**
```
...
if ($statement->rowCount())
{
echo "Cool! You have been added to database";
}
else
{
echo "Hmms! Are you trying to duplicate something?";
}
```
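The whole flow can be sketched end-to-end with an in-memory database. This is a minimal illustration using SQLite via Python's `sqlite3` (not the asker's MySQL/PDO stack); the table and value names mirror the question, and `INSERT OR IGNORE` is SQLite's shorthand for "skip on unique violation" (with plain MySQL you would catch error `1062` or check affected rows instead):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE utilizatori (utilizator TEXT UNIQUE)")

def add_user(name):
    # rowcount plays the role of PDOStatement::rowCount():
    # 1 if the row was inserted, 0 if the value already existed
    cur = conn.execute(
        "INSERT OR IGNORE INTO utilizatori (utilizator) VALUES (?)", (name,))
    return cur.rowcount

first = add_user("Mama")   # inserted
second = add_user("Mama")  # duplicate, ignored
```

Only one `Mama` row ends up in the table, no matter how many rows existed beforehand.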
|
Try using `GROUP BY`:
```
INSERT INTO `utilizatori`(
utilizator
)
SELECT
'Mama'
FROM
`utilizatori`
WHERE
NOT EXISTS (SELECT `utilizator` FROM `utilizatori` WHERE utilizator='Mama')
group by utilizator
```
|
Some problems with INSERT INTO statement
|
[
"",
"mysql",
"sql",
""
] |
SQL Server 2008 - I have a table with 10 columns and many rows in which i want to delete with a condition like delete all rows which are in particular column those are less than 75 characters (about 10 words)
|
The easiest solution is to use the SQL function `LEN`, used like this: `LEN(nameOfField)`.
In your case, simply add the function to the `WHERE` clause of your `DELETE` command, like this:
```
DELETE FROM yourTableName where len(aParticularColumn) < 75
```
Update to the answer: if your `aParticularColumn` is of datatype `text` or `ntext`, you can use `DATALENGTH` instead of `LEN`. In this case it would be:
```
DELETE FROM yourTableName where DATALENGTH(aParticularColumn) < 75
```
[Microsoft Documentation to the DATALENGTH function](http://msdn.microsoft.com/en-us/library/ms173486.aspx)
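The delete-by-length logic is easy to verify with an in-memory database. A minimal sketch using SQLite (whose `length()` plays the role of SQL Server's `LEN`/`DATALENGTH`; the table and column names are copied from the answer above and the data is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE yourTableName (aParticularColumn TEXT)")
conn.executemany(
    "INSERT INTO yourTableName VALUES (?)",
    [("short text",), ("x" * 80,), ("y" * 75,)])

# delete every row whose column holds fewer than 75 characters
conn.execute("DELETE FROM yourTableName WHERE length(aParticularColumn) < 75")
remaining = [len(r[0]) for r in
             conn.execute("SELECT aParticularColumn FROM yourTableName")]
```

Only the 80- and 75-character rows survive; the 10-character row is removed.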
|
As @ogixologist said...
```
DELETE FROM table_name where len (column_name) < 75
```
And here is how I did it using a CTE, check it out:
```
;with cte
as
(
    SELECT column_name, temp = LEN(CAST(column_name AS nvarchar(4000))) FROM table_name
)
delete from cte where temp < 75;
```
* You can cast to `nvarchar` before finding the length, or else simply use `LEN(column_name)`
* Replace `column_name` with whichever nvarchar column you want
* Replace `table_name` with the name of the table your data resides in
|
How to delete rows which contain less than 75 characters (about 10 words) in a particular column
|
[
"",
"sql",
"database",
"sql-server-2008",
"sql-delete",
""
] |
I'm trying to implement a simple query that should look like this:
```
select * from property join entity_area on property.id=entity_area.entity_id and entity_area.area_id=1 where property.price>300000 limit 12
```
Pretty straightforward: I want to get the joined result and then limit it to 12 rows.
In Sequelize i'm using the following function:
```
return models.property.findAll(
{
where: ["price>=?", 300000],
include: [
{
model:models.entity_area,
where: { area_id:1 }
}
],
limit:12
})
```
But this code generates the following sql:
```
select property.*, entity_area.* from (select * from property where property.price>300000 limit 12) join entity_area on property.id=entity_area.entity_id and entity_area.area_id=1
```
This has totally different logic from what I'm trying to do: the generated SQL first takes any 12 results and then tries to join with entity\_area, and of course those arbitrary 12 results don't necessarily match entity\_area, so I'm getting no results back.
Please suggest a proper way of doing this. The property table is massive, so I have to use `limit` rather than fetching all the results and slicing them in JavaScript. I'd also rather not start using raw queries.
|
Actually, I found a solution myself. I think this is a bug in the Sequelize framework.
In node\_modules/sequelize/lib/dialect/abstract/query\_generator.js there is a "selectQuery" function which has the following line:
```
subQuery = limit && (options.hasIncludeWhere || options.hasIncludeRequired || options.hasMultiAssociation) && options.subQuery !== false
```
First of all, there is an option `subQuery` that can be passed as `false` to suppress the subquery generation. The Sequelize documentation does not say a word about it. Moreover, even if you pass `subQuery: false` in the `findAll` object, it's not going to work, because for some reason it arrives as `undefined` in the `selectQuery` function.
I tried something like:
```
return models.property.findAll(
{
where: ["price>=?", 300000],
include: [
{
model:models.entity_area,
where: { area_id:1 }
}
],
limit:12,
subQuery:false
})
```
and still got options.subQuery=undefined.
So i had to change the function in query\_generator.js to be something like:
```
subQuery = limit && (options.hasIncludeWhere || options.hasIncludeRequired || options.hasMultiAssociation) && options.subQuery !== false && options.doSubQuery===true
```
So now, by default, it doesn't do this ugly subquery unless I explicitly specify `doSubQuery: true`. And finally I got the proper query, with the limit and without the subquery.
|
```
models.property.findAll(
{
where: [...],
include: [{...}],
limit:12
},
{
subQuery:false
})
```
|
Sequelize with NodeJS can't join tables with limit
|
[
"",
"sql",
"node.js",
"sequelize.js",
""
] |
I am working on a Hibernate query. This query works fine with MySQL but fails in Oracle and PostgreSQL.
```
SELECT MIN(t1.startTime) AS StartTime,
       MAX(t1.endTime) AS EndTime,
       (SELECT SUM(t2.occurances) FROM table2 t2)
FROM table1 t1
```
And I am getting error-
> ORA-00937: not a single-group group function
How do I fix this problem? This query should return only one row. If I add a `GROUP BY` with an extra column in the `SELECT`, it returns more than one row.
|
Just add `GROUP BY` for third column:
```
SELECT MIN(t1.startTime) AS StartTime,
MAX(t1.endTime) AS EndTime,
(SELECT SUM(t2.occurances) FROM table2 t2) as occurances
FROM table1 t1
GROUP BY 3
```
This query returns one row.
**EDIT:**
Alternatively you can use additional aggregate function (for example `MAX`), then `GROUP BY` is not required:
```
SELECT MIN(t1.startTime) AS StartTime,
MAX(t1.endTime) AS EndTime,
MAX( (SELECT SUM(t2.occurances) FROM table2 t2) ) as occurances
FROM table1 t1
```
|
This works fine in MySQL because of MySQL's extensions to `GROUP BY`. Unfortunately, subqueries are not treated as constants by the compiler when it looks at aggregation queries, so this is just another expression not in the `GROUP BY` clause.
Here is a way to write the query that is compatible for all three databases (but not for Hibernate):
```
SELECT MIN(t1.startTime) AS StartTime, MAX(t1.endTime) AS EndTime, MAX(occurances)
from table1 t1 cross join
     (SELECT SUM(t2.occurances) as occurances from table2 t2) t2;
```
I should add, you also cannot add the subquery to the `group by` clause, although the following will work in Oracle and Postgres:
```
SELECT MIN(t1.startTime) AS StartTime, MAX(t1.endTime) AS EndTime,
(SELECT SUM(t2.occurances) from table2 t2)
FROM table1 t1
GROUP BY 3;
```
EDIT:
HQL is quite restrictive compared to standard SQL. The following comes *close* to what you want, but it returns at least two rows instead of one:
```
select t.starttime, t.endtime, (SELECT SUM(t2.occurances) from table2 t2)
from table1 t
where t.starttime = (select min(starttime) from table1) or
t.endtime = (select max(endtime) from table1);
```
|
Oracle ORA-00937: not a single-group group function in Subquery
|
[
"",
"sql",
"oracle",
"hibernate",
"oracle11g",
"oracle10g",
""
] |
I'm trying to display a start and end time for a SQL report. I always want to display the previous month's period, to look like:
> Start Time: Aug 1st 2014 00:00:00
>
> End Time: September 1st 2014 00:00:00
If the report was run in October it would give `Sept 1 - Oct 1`. I'm not sure how to actually produce that as a `DATETIME` variable?
|
In a `DATETIME` format, that would be
```
SELECT
DATEADD(month, DATEDIFF(month, 0, GETDATE())-1, 0) AS StartTime,
DATEADD(month, DATEDIFF(month, 0, GETDATE()), 0) AS EndTime
```
If you want to play around with character formats, you can do a bunch of things, like
```
SELECT
DATENAME(MONTH,StartTime) + ' ' + CAST(YEAR(StartTime) AS VARCHAR(4)) + ' - ' + DATENAME(MONTH,EndTime) + ' ' + CAST(YEAR(EndTime) AS VARCHAR(4)),
LEFT(DATENAME(MONTH,StartTime),3) + ' ' + CAST(YEAR(StartTime) AS VARCHAR(4)) + ' - ' + LEFT(DATENAME(MONTH,EndTime),3) + ' ' + CAST(YEAR(EndTime) AS VARCHAR(4))
FROM
(
SELECT
DATEADD(month, DATEDIFF(month, 0, GETDATE())-1, 0) AS StartTime,
DATEADD(month, DATEDIFF(month, 0, GETDATE()), 0) AS EndTime
) dt
```
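The month arithmetic behind the `DATEADD`/`DATEDIFF` trick above can be cross-checked in plain Python: the period is simply "first day of the previous month" through "first day of the current month". A minimal sketch (the function name is made up for the demo):

```python
from datetime import date, timedelta

def previous_month_range(today):
    # first day of the current month = period end
    end = today.replace(day=1)
    # step one day back into the previous month, then snap to its first day
    start = (end - timedelta(days=1)).replace(day=1)
    return start, end

start, end = previous_month_range(date(2014, 9, 23))
# for a run on 2014-09-23: start is 2014-08-01, end is 2014-09-01,
# matching the "Aug 1st ... September 1st" period in the question
```

Run in October, the same function gives the Sept 1 - Oct 1 window the asker describes.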
|
I apologize if I misunderstood, but is this what you are looking for?
```
SELECT CAST(DATEADD(MONTH,-1,[date]) AS VARCHAR(12)) + '-' + CAST([date] AS VARCHAR(12)) FROM table
```
If you were to insert into your table a `SMALLDATETIME` such as
`'2014-09-23 10:28:00'`
then what would be produced is:
`'Aug 23 2014 -Sep 23 2014 '`
|
Displaying specific SQL Datetime
|
[
"",
"sql",
"datetime",
"sql-server-2005",
""
] |
I want to find the missing numbers in a table like this:
```
Sno Branch
1 ABC
2 ABC
3 ABC
5 ABC // 4th sno is missing
6 ABC
8 ABC // 7th sno is missing
10 ABC // 9th sno is missing
```
I found the missing Sno values using this query:
```
ALTER proc [dbo].[K_RT_DCNoMissing]--1,50
as
begin
declare @id int
set @id = 0
declare @maxid int
--set @id = @fromvalue
select @maxid = (select count(*) dcno from K_RT_Dailyentryretail nolock)
create table #IDSeq
(
id int
)
while 0<@maxid--whatever you max is
begin
insert into #IDSeq values(@id)
set @id = @id + 1
set @maxid = @maxid - 1
-- print @maxid
end
select
s.id
from
#idseq s
left join K_RT_Dailyentryretail t on
s.id = t.dcno
where t.dcno is null order by s.id asc
drop table #IDSeq
end
```
I am getting output like this:
```
MissingNo's
4
7
9
```
Now I want to display the Sno with the branch name, like:
```
MissingNo's Branch
4 ABC
7 ABC
9 ABC
```
How can I get the branch name?
```
I am getting output as:
4 abc
4 cde
4 fgh
7 abc
7 cde
7 fgh
but what I actually expect is:
4 abc
7 cde
. ..
. ..
```
etc.
|
You can use a CTE to build a table of all the branches and the full range of numbers for each. Then left join that to the main table to find what is missing. This will allow you to get consecutive missing numbers, e.g. 3, 4 and 5 missing. You can adjust `@minid` and `@maxid` to whatever you want your range to be. If `@maxid` can be greater than 32767, then you will need to do something with batches of ranges.
```
declare @minid int, @maxid int
set @minid = 1
set @maxid = (select count(*) dcno from K_RT_Dailyentryretail with (nolock))
; with Branches as (
select distinct Branch from K_RT_Dailyentryretail
)
, CTE as
(
select @minid as Sno, Branch from Branches
union all
select Sno + 1, Branch from CTE where Sno < @maxid
)
select CTE.Sno, CTE.Branch from CTE
left outer join K_RT_Dailyentryretail k on k.Branch = CTE.Branch and CTE.Sno = k.Sno
where k.Sno is null
option (MAXRECURSION 32767);
```
SQLFiddle here: <http://sqlfiddle.com/#!3/13653/13>
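The same approach also runs on SQLite, whose `WITH RECURSIVE` behaves like the T-SQL recursive CTE. A minimal sketch (table and column names copied from the question; the sample data reproduces the gaps at 4, 7 and 9):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE K_RT_Dailyentryretail (Sno INT, Branch TEXT)")
conn.executemany("INSERT INTO K_RT_Dailyentryretail VALUES (?, ?)",
                 [(1, "ABC"), (2, "ABC"), (3, "ABC"), (5, "ABC"),
                  (6, "ABC"), (8, "ABC"), (10, "ABC")])

missing = conn.execute("""
    WITH RECURSIVE seq(Sno) AS (
        SELECT 1
        UNION ALL
        SELECT Sno + 1 FROM seq
        WHERE Sno < (SELECT MAX(Sno) FROM K_RT_Dailyentryretail)
    )
    -- pair every number with every branch, then keep the pairs
    -- that have no matching row in the real table
    SELECT seq.Sno, b.Branch
    FROM seq
    CROSS JOIN (SELECT DISTINCT Branch FROM K_RT_Dailyentryretail) b
    LEFT JOIN K_RT_Dailyentryretail k
        ON k.Branch = b.Branch AND k.Sno = seq.Sno
    WHERE k.Sno IS NULL
    ORDER BY seq.Sno
""").fetchall()
```

This finds consecutive gaps too (e.g. 3, 4 and 5 all missing), unlike the single-row-gap assumption in the other answer.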
|
Assuming that each branch has its own set of sno numbers, this works:
```
select sno-1 as sno, branch
from t1
where sno-1 NOT in (select sno from t1 as t2 where t1.branch = t2.branch)
and sno > 1
;
```
Which you can check here: <http://sqlfiddle.com/#!3/daa81d/3>
If sno is a unique key, let me know and I will revise the answer.
EDIT: Like AK47's answer, I'm assuming that the gap is only one row.
|
How to find missing numbers in a table using SQL Server?
|
[
"",
"sql",
"sql-server",
""
] |
I have a table with IPs and a view date for websites, each with a website id:
```
+------------+-----------+---------------------+
| website_id | ip | view_date |
+------------+-----------+---------------------+
| 123 | 123123123 | 2014-01-01 10:23:00 |
+------------+-----------+---------------------+
| 123 | 987987987 | 2014-01-01 10:23:00 |
+------------+-----------+---------------------+
| 123 | 123123123 | 2014-01-01 10:26:00 |
+------------+-----------+---------------------+
| 123 | 987987987 | 2014-01-01 10:24:00 |
+------------+-----------+---------------------+
| 123 | 987987987 | 2014-01-01 10:25:00 |
+------------+-----------+---------------------+
| 123 | 123123123 | 2014-01-02 03:23:00 |
+------------+-----------+---------------------+
```
I would like to get sessions from the table: a session is the time from when someone starts on the site until they are done. To find out when they were done: a session ends when there is 30 minutes or more of no activity between two consecutive view dates, with `view_date` ordered.
So, with this example data I have 2 users `123123123` and `987987987`
* User `123123123` has 2 sessions because row 3 happened with in 30 minutes of row 1 and row 6 happened more than 30 minutes after row 3
* User `987987987` has 1 session because row 4 happened less than 30 minutes after row 2 and row 5 happened less than 30 minutes after row 4
I have no idea how to calculate that using MySQL. All I know is to order the data by `ip`, then by `view_date`. After that I am lost.
When I say session I mean:
> a period of time devoted to a particular activity.
And Not:
> a way of tracking people between multiple web pages.
Desired Output:
```
+------------+-----------+----------+
| website_id | ip | sessions |
+------------+-----------+----------+
| 123 | 123123123 | 2 |
+------------+-----------+----------+
| 123 | 987987987 | 1 |
+------------+-----------+----------+
```
|
```
select ip, sum(session) + 1 as session
from (
select
t.*,
if(@previp = ip and timestampdiff(minute, @prevview, view_date) >= 30, 1, 0) as session,
@previp := ip,
@prevview := view_date
from
Table1 t
, (select @prevview := (select view_date from Table1 order by ip, view_date limit 1),
@previp := null) var_init_subquery
order by ip, view_date
) sq
group by ip
```
* see it working live in an [sqlfiddle](http://sqlfiddle.com/#!2/ccb33/4/0)
* read more about user defined [variables](http://dev.mysql.com/doc/refman/5.1/en/user-variables.html)
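The 30-minute rule is easy to cross-check outside SQL: sort each ip's views, then count gaps of 30 minutes or more. A minimal sketch in plain Python over the question's sample data (one extra session opens per qualifying gap, plus the initial session):

```python
from datetime import datetime

views = [
    ("123123123", "2014-01-01 10:23:00"),
    ("987987987", "2014-01-01 10:23:00"),
    ("123123123", "2014-01-01 10:26:00"),
    ("987987987", "2014-01-01 10:24:00"),
    ("987987987", "2014-01-01 10:25:00"),
    ("123123123", "2014-01-02 03:23:00"),
]

def count_sessions(rows, gap_minutes=30):
    per_ip = {}
    for ip, ts in rows:
        per_ip.setdefault(ip, []).append(
            datetime.strptime(ts, "%Y-%m-%d %H:%M:%S"))
    sessions = {}
    for ip, times in per_ip.items():
        times.sort()
        n = 1  # the first view always opens a session
        for prev, cur in zip(times, times[1:]):
            if (cur - prev).total_seconds() >= gap_minutes * 60:
                n += 1  # a gap of >= 30 minutes opens a new session
        sessions[ip] = n
    return sessions

sessions = count_sessions(views)
```

This reproduces the desired output: 2 sessions for `123123123`, 1 for `987987987`.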
|
Here's my solution.
I do a count to find the number of rows for a given website id and IP address that occur between the row's date and that date minus 30 minutes. If it is 0, I assign a 1, meaning it's a new session, else a 0. Then I sum those up.
**[SQL Fiddle Demo](http://sqlfiddle.com/#!2/51b21/3)**
```
select website_id,
ip,
sum(newSession) as Sessions
from
(select *,
case
when (select count(*)
from yourTable ytb
where ytb.website_id = yta.website_id
and ytb.ip = yta.ip
and ytb.view_date < yta.view_date
and ytb.view_date > date_add(yta.view_date, INTERVAL -30 MINUTE)) = 0 then 1
else 0
end as newSession
from yourtable yta) baseTable
GROUP BY website_id, ip
```
|
Count the number of user sessions
|
[
"",
"mysql",
"sql",
""
] |
I have a table like the one below, and I need the result to look like this when I run the query:
**Results**
```
title | count
----------------
foo | 3
bar | 2
```
**Table**
```
customer_id | title
-------------------
55 | foo
22 | foo
55 | bar <-- duplicate
23 | bar
55 | bar <-- duplicate
23 | foo
```
**UPDATE** Thank you all for the quick response!
|
The trick is to count the `distinct` customer ids, so you won't count the duplicate `bar` for customer 55.
If you need to, you can order the results by that count too, or you can just leave out the `order by` clause.
```
select
title,
  count(DISTINCT customer_id) as `count`
from
yourTable
group by
title
order by
`count` desc
```
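A quick way to convince yourself that `COUNT(DISTINCT ...)` does the deduplication. This is a minimal sketch using SQLite via Python's `sqlite3` (the data is the question's; the table name is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE yourTable (customer_id INT, title TEXT)")
conn.executemany("INSERT INTO yourTable VALUES (?, ?)",
                 [(55, "foo"), (22, "foo"), (55, "bar"),
                  (23, "bar"), (55, "bar"), (23, "foo")])

# customer 55's duplicate 'bar' rows collapse to one distinct id
rows = conn.execute("""
    SELECT title, COUNT(DISTINCT customer_id) AS cnt
    FROM yourTable
    GROUP BY title
    ORDER BY cnt DESC
""").fetchall()
```

This yields exactly the desired result: `foo` 3, `bar` 2.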
|
It's as easy as this:
```
select A.title, count(*) as count      -- a usual count(*)
from
  (select distinct customer_id, title  -- this subquery removes the duplicates
   from table) A
group by A.title                       -- a usual group by
```
|
SQL - select columns and group them
|
[
"",
"sql",
""
] |
Hi, I have a complicated condition. I have a table, let's say **'Test'**:
```
ID Partner Type Amount
143854 CSTC Purchase -0.81
144029 CSTC Purchase -0.69
144030 CSTC Purchase -1.33
144031 CSTC Purchase -0.47
144032 CSTC Purchase -1.8
149527 CSTC div 1574.48
149528 CSTC Purchase -1574.48
149531 CSTC div 867.53
149532 CSTC Purchase -867.53
149539 CSTC div 76.2
149540 CSTC Purchase -76.2
149550 CSTC div 8.77
149551 CSTC Purchase -8.77
149554 CSTC div 700.45
149555 CSTC Purchase -700.45
```
I want to remove each occurrence of a row having Type = 'div' where the next row has Type = 'Purchase',
i.e. if `type = 'div'` and the next row's `type = 'Purchase'`, delete both; otherwise I want to perform an update on the row having `type = 'div'`.
I have tried `LEAD`; I can get the next row's Type value, but it is not helping.
```
select LEAD([Type]) OVER (ORDER BY ID) Next, ID, Partner,[Type],Amount from Test where date='9/18/2014' and ([Type] = 'div' or [Type] = 'Purchase')
```
|
Miller was close; the join was what was breaking it:
```
WITH TestWithRowNumber AS
(
SELECT *,
ROW_NUMBER() OVER (ORDER BY ID ASC) AS RowNumber
FROM Test
)
Select *
From TestWithRownumber FirstRow
join TestWithRownumber secondrow
on secondrow.RowNumber = firstrow.RowNumber + 1
and secondrow.type = 'Purchase'
and firstrow.type = 'div'
```
as for the delete statement
```
WITH TestWithRowNumber AS
(
SELECT *,
ROW_NUMBER() OVER (ORDER BY ID ASC) AS RowNumber
FROM Test
)
delete Test
From TestWithRownumber FirstRow
join TestWithRownumber secondrow
on secondrow.RowNumber = firstrow.RowNumber + 1
and secondrow.type = 'Purchase'
and firstrow.type = 'div'
join test
on test.ID = FirstRow.ID
or test.ID = secondrow.ID
```
|
Please remember that a table is a set, and a set has no order, neither top to bottom nor left to right. What if someone inserts a row with type = 'div' again? However, try the code below to see whether it works:
```
DECLARE @T TABLE (ID INT,[Partner] VARCHAR(10), [Type] VARCHAR(20),Amount decimal (10,2))
INSERT INTO @T
VALUES
(143854 ,'CSTC' , 'Purchase ', -0.81)
,(144029 , 'CSTC' , 'Purchase' , -0.69)
,(144030 , 'CSTC', 'Purchase' , -1.33)
,(144031 , 'CSTC', 'Purchase' , -0.47)
,(144032 , 'CSTC' , 'Purchase' , -1.8)
,(149527 , 'CSTC' , 'div' , 1574.48)
,(149528 , 'CSTC' , 'Purchase' , -1574.48)
,(149531 , 'CSTC' , 'div' , 867.53)
,(149532 , 'CSTC' , 'Purchase' , -867.53)
,(149539 ,'CSTC', 'div' , 76.2)
,(149540 ,'CSTC' , 'Purchase' , -76.2)
,(149550 , 'CSTC' , 'div' , 8.77)
,(149551 , 'CSTC' , 'Purchase' , -8.77)
,(149554 , 'CSTC', 'div' , 700.45)
,(149555 , 'CSTC', 'Purchase' , -700.45)
;with cte as
(
select *
,(SELECT TOP 1 Amount FROM @t WHERE ID > t.ID AND [type] = 'purchase' ORDER BY ID ASC) as nxt
,(SELECT TOP 1 Amount FROM @t WHERE ID < t.ID AND [type] = 'div' ORDER BY ID DESC) AS pvs
FROM @T T
)
DELETE FROM cte WHERE (Amount + nxt = 0) OR (Amount + pvs = 0)
select * FROM @T
```
|
Select complete next row based on the current row
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2012",
""
] |
Let's say I have a table with a column named KEY.
I want to find all KEYs which appear in the table exactly 3 times.
How can I do that?
I managed to get a list of how many entries I have for each KEY, like this:
```
select count(*) from my_table group by KEY;
```
but how can I filter it to show only those that have the value 3?
|
```
select KEY
from my_table
group by KEY
having count(*) = 3
```
|
The `having` clause filters after grouping (`where` filters before).
```
select `key`
from my_table
group by `KEY`
having count(*) = 3;
```
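The `GROUP BY` / `HAVING` combination can be demonstrated with an in-memory database. A minimal sketch using SQLite via Python's `sqlite3` (the column name `KEY` is quoted, and the data is made up so that only one key appears exactly 3 times):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE my_table ("KEY" TEXT)')
conn.executemany('INSERT INTO my_table VALUES (?)',
                 [("a",), ("a",), ("a",), ("b",), ("b",), ("c",)])

# HAVING filters the groups produced by GROUP BY,
# whereas WHERE would filter individual rows before grouping
rows = conn.execute('''
    SELECT "KEY" FROM my_table
    GROUP BY "KEY"
    HAVING COUNT(*) = 3
''').fetchall()
```

Only `a` (3 occurrences) survives; `b` (2) and `c` (1) are filtered out.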
|
How can I filter an SQL table to show only keys with exactly N entries?
|
[
"",
"sql",
"select",
""
] |
From what I understand after reading the related SO answers and the official docs, I may have a column type mismatch or a missing required index. However, I couldn't solve my case.
The table below is created successfully:
```
CREATE TABLE `parts` (
`partnum_rev` varchar(255) COLLATE utf8_unicode_ci NOT NULL COMMENT 'part number with revision number',
`status` char(4) COLLATE utf8_unicode_ci NOT NULL COMMENT 'part is live or dead',
`partdef` varchar(255) COLLATE utf8_unicode_ci NOT NULL COMMENT 'part definition',
`makebuy` varchar(4) COLLATE utf8_unicode_ci NOT NULL COMMENT 'part is maked or buyed',
  PRIMARY KEY (`partnum_rev`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci COMMENT='table that part specific data is hold'
```
The table below gives `MySQL - Error Code 1215, cannot add foreign key constraint`.
`partnum_rev` is the primary key of the `parts` table, so I couldn't understand the reason.
Can you help me please?
Best regards
```
CREATE TABLE `bom` (
`lcn` varchar(255) COLLATE utf8_unicode_ci NOT NULL COMMENT 'logistics control number',
`status` char(4) COLLATE utf8_unicode_ci NOT NULL COMMENT 'part is live or dead',
`partnum_rev` varchar(255) COLLATE utf8_unicode_ci NOT NULL COMMENT 'part number with revision number',
`effective_date` date NOT NULL COMMENT 'drawing effective date',
`abolish_date` date DEFAULT NULL COMMENT 'drawing abolish date',
`parent_partnum_rev` varchar(255) COLLATE utf8_unicode_ci NOT NULL COMMENT 'parent part number with revision number',
`bomlevel` tinyint(3) unsigned NOT NULL COMMENT 'bom level',
`quantity` decimal NOT NULL COMMENT 'quantity',
`quantity_unit` char(2) COLLATE utf8_unicode_ci NOT NULL COMMENT 'quantity unit',
`usage_ratio` decimal DEFAULT NULL COMMENT 'usage ratio',
`mttr` decimal DEFAULT NULL COMMENT 'mean time to repair',
`purchasecost` decimal unsigned DEFAULT NULL COMMENT 'purchasing cost',
`moneyunit` varchar(3) COLLATE utf8_unicode_ci DEFAULT NULL COMMENT 'purchasing cost monetary unit',
PRIMARY KEY (`lcn`),
FOREIGN KEY (`partnum_rev`) REFERENCES part(`partnum_rev`) ON UPDATE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci COMMENT='table for bill of material'
```
|
The `foreign key` clause is wrong - your table is called `parts` (plural!), not `part`. Just use the correct table name:
```
FOREIGN KEY (`partnum_rev`) REFERENCES parts(`partnum_rev`) ON UPDATE CASCADE
```
|
It looks like you have a typo where you missed an "s" in **parts**. Try the following line:
```
FOREIGN KEY (`partnum_rev`) REFERENCES parts(`partnum_rev`) ON UPDATE CASCADE
```
|
MySQL 5.0.11 Error Code #1215: cannot add foreign key constraint
|
[
"",
"mysql",
"sql",
"foreign-keys",
"constraints",
""
] |
I currently have this `WHERE` clause that includes the following parameters.
```
WHERE
ProjectID=110
AND ((CAST(saa.Date AS DATE) >= '09/24/2014' AND CAST(saa.Date AS DATE) <= '09/24/2014') OR saa.Date IS NULL)
```
The tricky part here is that the `saa.Date IS NULL` condition pulls up ALL `NULL` values across all dates (which is excessive). I only want to use the following date range for the `NULL` values:
```
(
(CAST(sa.StartDateTime AS DATE) >= '09/24/2014' AND CAST(sa.StartDateTime AS DATE) <= '09/24/2014')
OR
(CAST(sa.EndDateTime AS DATE) >= '09/24/2014' AND CAST(sa.EndDateTime AS DATE) <= '09/24/2014')
)
```
So I'm trying to figure out how to create a `CASE`-style condition that works like: IF saa.Date IS NULL THEN [use the date range parameters above].
|
I'll base my answer on @AHiggins's, but add performance and readability:
* [sargability](https://stackoverflow.com/questions/799584/what-makes-a-sql-statement-sargable)
* avoiding `cast`
* [using `between`](http://msdn.microsoft.com/en-US/en-es/library/ms187922.aspx)
```
WHERE
    ProjectID=110 AND
    (
        (
            saa.Date between '09/24/2014 00:00:00.000' AND '09/24/2014 23:59:59.999'
        ) OR
        (
            saa.Date IS NULL AND
            (
                (
                    sa.StartDateTime between '09/24/2014 00:00:00.000' AND '09/24/2014 23:59:59.999'
                ) OR
                (
                    sa.EndDateTime between '09/24/2014 00:00:00.000' AND '09/24/2014 23:59:59.999'
                )
            )
        )
    )
```
Make sure you have indexes on those date columns.
|
You do know that
```
>= '09/24/2014'
AND <= '09/24/2014'
```
is the same as `= '09/24/2014'`?
```
where ProjectID=110
AND CAST(saa.Date AS DATE) = '09/24/2014'
OR (
saa.Date IS NULL
AND
(
CAST(sa.StartDateTime AS DATE) = '09/24/2014'
OR
CAST(sa.EndDateTime AS DATE) = '09/24/2014'
)
)
```
This is more efficient than a cast:
```
DATEADD(dd, DATEDIFF(dd, 0, COL),0)
```
But the answer from Horaciux is even more efficient
|
Using Case in a Where Clause for Dates
|
[
"",
"sql",
"t-sql",
""
] |
I'd appreciate it if someone could help.
I have two tables that have no relationship:
Table\_1
```
ID NAME VALUE
1 abc 10
2 def 20
3 def 20
```
Table\_2
```
ID2 NAME2 VALUE2
5 ghi 30
6 gkl 40
```
I want to have a select statement that would show the data from both tables like this:
```
ID NAME VALUE ID2 NAME2 VALUE2
1 abc 10 5 ghi 30
2 def 20 6 gkl 40
3 def 20
```
The point is to show the data of each record in one row; the table can look like:
```
ID NAME VALUE ID2 NAME2 VALUE2
5 ghi 30
6 gkl 40
```
if `Table_1` has no records. The same is true for `Table_2`.
I tried to use cross join, but then the data will repeat.
Thanks a lot
|
You need to add a `join` condition. In this case, by using `row_number()` to add a sequential number on each side. Then `full outer join` to get all the records:
```
select t1.id, t1.name, t1.value, t2.id as id2, t2.name as name2, t2.value as value2
from (select t1.*, row_number() over (order by id) as seqnum
from table_1 t1
) t1 full outer join
(select t2.*, row_number() over (order by id) as seqnum
from table_2 t2
) t2
on t1.seqnum = t2.seqnum;
```
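Outside SQL, this "pair the Nth row of each table by position" idea is exactly what `itertools.zip_longest` does, which makes it handy for checking what the `row_number()` + full outer join should return. A minimal illustration over the question's data (not a replacement for the SQL, just a cross-check):

```python
from itertools import zip_longest

table_1 = [(1, "abc", 10), (2, "def", 20), (3, "def", 20)]
table_2 = [(5, "ghi", 30), (6, "gkl", 40)]

# row_number() pairs rows by position; zip_longest does the same and
# pads the shorter side with None, like the NULLs of a full outer join
paired = list(zip_longest(table_1, table_2))
```

The third pair has `None` on the right, matching the empty cells in the desired output.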
|
Try this:
```
with Table_1(ID, NAME, VALUE) as (
select 1, 'abc', 10 union all
select 2, 'def', 20 union all
select 3, 'def', 20
), Table_2(ID2, NAME2, VALUE2) as (
select 5, 'ghi', 30 union all
select 6, 'gkl', 40
), prep_table_1 (ID, NAME, VALUE, rn) as (
select id, name, value, row_number() over(order by id)
from table_1
), prep_table_2 (ID2, NAME2, VALUE2, rn) as (
select id2, name2, value2, row_number() over(order by id2)
from table_2
)
select t1.ID, t1.NAME, t1.VALUE, t2.ID2, t2.NAME2, t2.VALUE2
from prep_table_1 t1
full outer join prep_table_2 t2 on t1.rn = t2.rn
```
[SQLFiddle](http://sqlfiddle.com/#!3/d41d8/39399)
|
Select data from two tables with no join condition, t-sql
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
These are my tables
## **Table1**
* price
* city\_category\_id
* city\_product\_id
Here are three rows:
```
price | city_category_id | city_product_id
------+------------------+--------------------------
1500 | CHDELCLAPTOPDELL | CHDELCLAPTOPDELLVOSTR8
1200 | CHDELCLAPTOPDELL | CHDELCLAPTOPDELLVOSTR816
1000 | CHDELCLAPTOPDELL | CHDELCLAPTOPDELLVOSTR816
```
Here I first have to find the distinct product\_name values and then select the min price of the distinct elements. I want output like `CHDELCLAPTOPDELLVOSTR8` with `1500` and `CHDELCLAPTOPDELLVOSTR816` with `1000`.
## **QUERY**
```
select min(price)
from sellers_product
where city_product_id=
(
select distinct city_product_id
from sellers_product
where city_category_id='CHDELCLAPTOPDELL'
)
```
## **ERROR**
I know why this error occurs: the subquery returns more than one row. But is there any way to get the desired output using only one query?
|
As I understand you want something like this:
```
DECLARE @sellers_product TABLE (price INT, Category_id VARCHAR(100), product_name VARCHAR(100))
INSERT INTO @sellers_product SELECT 1500, 'DELL', 'Vostro123'
INSERT INTO @sellers_product SELECT 1200, 'DELL', 'Vostro1234'
INSERT INTO @sellers_product SELECT 1000, 'DELL', 'Vostro123'
SELECT product_name, MIN(price) AS minPrice
FROM @sellers_product
WHERE Category_id = 'DELL'
GROUP BY product_name
```
Results:
```
product_name minPrice
Vostro123 1000
Vostro1234 1200
```
This select first filters by `Category_id` for the rows/categories you need, and then groups by `product_name` to get unique names. In this case we group the 2 rows with product\_name 'Vostro123'. From these grouped rows we can also take the `MIN` price value.
|
```
;WITH CTE_RESULT AS
(
    SELECT price, city_category_id, city_product_id,        -- the select columns
           ROW_NUMBER() OVER (PARTITION BY city_product_id  -- the distinct column
                              ORDER BY price ASC            -- change the order here
           ) AS ROW_NO
    FROM #TABLE1
)
SELECT price, city_category_id, city_product_id FROM CTE_RESULT
WHERE ROW_NO = 1 -- filters the duplicate rows
```
|
Find distinct values in SQL when subquery returns more than one row?
|
[
"",
"mysql",
"sql",
"sql-server",
""
] |
The goal is to count how many user\_id's have more than one record.
The result would be: 2
*(only one record should be returned)*
**THE DATA**
```
user_id | value
12      | value1
25      | value2
25      | value3
17      | value4
17      | value5
```
**Thank you all for your quick response!**
|
I'm not sure whether I get your question right, but shouldn't the following work?
```
SELECT user_id, count(*)
FROM mytable1
GROUP BY user_id
HAVING count(*) > 1
```
-> Result: all user\_id's with more than one entry
Or, if you want to count how many entries are not unique:
```
SELECT COUNT(*) AS AreDublicate
FROM (
SELECT user_id
FROM mytable1
GROUP BY user_id
HAVING count(*) > 1
) myTable
```
-> Result: how many aren't unique (in your case, 2).
|
To arrive at a single number you need an inner and outer query, like so:
```
SELECT COUNT(*) FROM (
SELECT user_id
FROM mytable1
GROUP BY user_id
HAVING count(*) > 1
) iq
```
|
sql - count (if more than one) through a case
|
[
"",
"sql",
""
] |
I have a booking table with the following information:
BookingID,(unique, not null)
StartDate, (not null)
EndDate (not null)
I need to calculate the number of nights someone remained in residence, which I can do with a `DATEDIFF` between EndDate and StartDate. However, if someone is in residence for an entire 31-day month, we only charge them for 30.
I'm not sure how to do this in SQL. I was thinking I would have to create a variable, calculate on a monthly basis, and add to the variable, but that seems messy and slow, especially towards the end of the year. This needs to be done for about 5,000 records on a daily basis.
So:
If someone starts on 7/25/14 and ends 9/2/14, the nights is 38 not 39.
If someone starts on 10/2/14 and ends on 11/1/14, the nights is 30.
If someone starts on 10/2/14 and ends on 10/31/14, the nights is 29.
We will be calculating into the future so it doesn't matter if the end date is greater than the day the report is being ran.
Does anyone have any ideas how to accomplish this in the best way?
|
I would first create a lookup table with all the months that have 31 days, such as:
```
DECLARE @month TABLE (start_date date,end_date date)
INSERT INTO @month VALUES ('2014-07-01','2014-07-31'),('2014-08-01','2014-08-31'),('2014-10-01','2014-10-31'),('2014-12-01','2014-12-31')
-- populate all months in your calculated range
```
Then you can calculate the value with
```
DECLARE @start DATE = '2014-07-25', @end DATE = '2014-09-02'
SELECT DATEDIFF(day,@start,@end) -
(SELECT COUNT(*) FROM @month WHERE start_date >= @start AND end_date <= @end)
```
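The rule behind the lookup table can also be sketched in plain Python. This assumes, as the question's examples suggest, that the discount applies to each 31-day calendar month fully contained in the stay (the function name is made up for the demo):

```python
from datetime import date
import calendar

def billable_nights(start, end):
    nights = (end - start).days
    # subtract 1 for every 31-day month fully contained in [start, end]
    y, m = start.year, start.month
    while (y, m) <= (end.year, end.month):
        days_in_month = calendar.monthrange(y, m)[1]
        first = date(y, m, 1)
        last = date(y, m, days_in_month)
        if days_in_month == 31 and start <= first and last <= end:
            nights -= 1
        m += 1
        if m == 13:
            y, m = y + 1, 1
    return nights
```

This reproduces all three of the question's examples: 38 instead of 39 for 7/25-9/2 (August is fully contained), 30 for 10/2-11/1, and 29 for 10/2-10/31 (October is not fully contained in either case).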
|
Retrieve the integer part of the datediff divided by 31 :
```
SELECT DATEDIFF(day,'2014-07-25', '2014-09-02') - DATEDIFF(day,'2014-07-25', '2014-09-02') / 31
```
|
Sql Server 2008 Calculate nights between two dates but remove 31st night
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"datediff",
""
] |
How do I create a table with a column which is unique, not null, and not empty (CHECK)?
I tried the query below:
```
CREATE TABLE Persons
(
P_Id int NOT NULL UNIQUE,
LastName nvarchar(255) NOT NULL,
FirstName nvarchar(255),
Address nvarchar(255),
City nvarchar(255),
CHECK (P_Id>0)
)
```
When I try to create a table with both a UNIQUE and a CHECK constraint, it throws the following error. Is it possible to use two constraints in a single query?
```
Major Error 0x80040E14, Minor Error 25501
> CREATE TABLE Persons
(
P_Id int NOT NULL UNIQUE,
LastName nvarchar(255) NOT NULL,
FirstName nvarchar(255),
Address nvarchar(255),
City nvarchar(255),
CHECK (P_Id>0)
)
There was an error parsing the query. [ Token line number = 8,Token line offset = 1,Token in error = CHECK ]. I am using SQL Server 2008.
```
|
```
CREATE TABLE tab
(
id INT,
notnullandnotemptystr VARCHAR(10) NOT NULL UNIQUE CHECK (DATALENGTH(notnullandnotemptystr) > 0)
)
```
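The same combination also works in other engines that support column-level constraints. Here is an illustrative SQLite session (SQLite syntax, shown only to demonstrate that `UNIQUE` and `CHECK` can coexist on one column; the table and column names are taken from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE Persons (
        P_Id     INTEGER NOT NULL UNIQUE CHECK (P_Id > 0),
        LastName TEXT    NOT NULL CHECK (LENGTH(LastName) > 0)
    )
""")
con.execute("INSERT INTO Persons VALUES (1, 'Smith')")

# each of these violates one of the constraints and is rejected
for bad_row in [(1, 'Jones'),    # duplicate P_Id  -> UNIQUE violation
                (-5, 'Brown'),   # P_Id not > 0    -> CHECK violation
                (2, '')]:        # empty LastName  -> CHECK violation
    try:
        con.execute("INSERT INTO Persons VALUES (?, ?)", bad_row)
    except sqlite3.IntegrityError as exc:
        print(bad_row, "rejected:", exc)
```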
|
It should be something like this:
```
CREATE TABLE [dbo].[TABLE1](
[COL1] [nvarchar](50) NOT NULL UNIQUE
)
ALTER TABLE [dbo].[TABLE1] WITH CHECK
ADD CONSTRAINT [CK_TABLE1] CHECK (([COL1]<>N''))
```
|
SQL Server Create Table With Column Unique Not Null and Not Empty(Check)
|
[
"",
"sql",
"sql-server",
"database",
"sql-server-2008",
"unique",
""
] |
I was searching for queries but I can't find an answer that helps me, or an existing similar question.
I need to get the info of the customers who made their last purchase between two dates.
```
+--------+------------+------------+
| client | amt | date |
+--------+------------+------------+
| 1 | 2440.9100 | 2014-02-05 |
| 1 | 21640.4600 | 2014-03-11 |
| 2 | 6782.5000 | 2014-03-12 |
| 2 | 1324.6600 | 2014-05-28 |
+--------+------------+------------+
```
For example, if I want to know all the customers who made their last purchase between
2014-02-11 and 2014-03-16, the result must be:
```
+--------+------------+------------+
| client | amt | date |
+--------+------------+------------+
| 1 | 21640.4600 | 2014-03-11 |
+--------+------------+------------+
```
It can't be client number 2, because they have a purchase on 2014-05-28.
I tried
```
SELECT MAX(date)
FROM table
GROUP BY client
```
but that only gets the max of all dates.
I don't know if there is a function or something that can help. Thanks.
---
Well, I don't know how to mark this question as resolved, but this worked for me
to complete the original query:
```
SELECT client, MAX(date)
FROM table
GROUP BY client
HAVING MAX(date) BETWEEN date1 AND date2
```
Thanks to all who took a minute to help me with my problem;
special thanks to Ollie Jones and Peter Pei Guo.
|
Something in this format; replace date1 and date2 with the real values:
```
SELECT client, max(date)
from table
group by client
having max(date) between date1 AND date2
```
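As a quick way to verify the `HAVING MAX(date) BETWEEN ...` idea, here is a small SQLite reproduction of the question's data (SQLite is used purely for convenience; the query shape is the same in MySQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE purchases (client INTEGER, amt REAL, date TEXT)")
con.executemany("INSERT INTO purchases VALUES (?, ?, ?)",
                [(1, 2440.91,  '2014-02-05'),
                 (1, 21640.46, '2014-03-11'),
                 (2, 6782.50,  '2014-03-12'),
                 (2, 1324.66,  '2014-05-28')])

# only clients whose LAST purchase falls inside the window survive the HAVING
rows = con.execute("""
    SELECT client, MAX(date)
    FROM purchases
    GROUP BY client
    HAVING MAX(date) BETWEEN '2014-02-11' AND '2014-03-16'
""").fetchall()
print(rows)  # client 2 drops out: their last purchase is 2014-05-28
```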
|
There is more than one way to do this. Here is one of them.
```
select * from
(
select client, max(date) maxdate
from table
group by client ) temp
where maxdate between '2014-02-11' and '2014-03-16'
```
|
Get the MAX date between a range
|
[
"",
"mysql",
"sql",
""
] |
I have millions of items where I need to set whether they exist or not for many different configurations that will change over time.
I have four parameters.
1. `Production`
2. `14.15`
3. `Both-Local-General`
4. `Exists`
I want to create a table where the column name looks like `Production,14.15,Both-Local-General,Exists` to store boolean values whether or not a certain item exists for that configuration. The items will each have one row.
The parameters will constantly change throughout time, so new columns will need to be added using a C# application.
I want to create a new column where the configuration will be different each time.
Is this a bad idea?
Should I store the parameters in a different table? Or is there a better way to do this?
|
This is a bad idea. Why not have a table called "Configurations" with two fields, "Parameter" and "Value"? Then create/update/delete rows instead of columns. Doing it the other way seems like a complete misuse of an RDBMS.
|
It's more efficient to create a table with a single column for the parameter names and use a [Pivot](https://stackoverflow.com/questions/24470/sql-server-pivot-examples) or [Case When](https://stackoverflow.com/questions/5846007/sql-query-to-pivot-a-column-using-case-when) to display it as you described when querying it.
|
Best Practice to Use Multiple Parameters for SQL Column names?
|
[
"",
"sql",
"sql-server",
"sql-server-2014",
""
] |
```
Table A Table B
ID Name ID Name Last_Name
1 John 1 John Depp
2 Henry 2 David Henry
3 Ken
```
I am trying to select the name in a statement with these conditions:
1. If the `NAME` in `TableA` able to map with the `NAME` in `TableB`, then choose the `Last_Name`
2. If the `NAME` is `TableA` is equal to `LAST_NAME` in `TableB`, then select the `NAME` from `TableA`
3. If the `NAME` in `TableA` not able to map with `TableB`'s `NAME` or `LAST_NAME`, then select `NAME` from `TableA`.
Method i used:
```
SELECT * FROM
(SELECT A.ID NO, LAST_NAME REALNAME
FROM TABLEA A
INNER JOIN TABLEB B
ON A.NAME = B.NAME
UNION ALL
SELECT A.ID NO, A.NAME REALNAME
FROM TABLEA A
WHERE EXISTS (SELECT 1 FROM TABLEB B WHERE A.NAME = B.LAST_NAME)
UNION ALL
SELECT A.ID NO, A.NAME REALNAME
FROM TABLEA A
WHERE NOT EXISTS (SELECT 1 FROM TABLEB B WHERE A.NAME = B.NAME OR A.NAME = B.LAST_NAME)
)AS T
```
**QUESTION:** Is there any simpler query I can use to achieve these conditions?
|
You can use CASE to make it more readable:
```
SELECT A.ID NO, CASE WHEN B.Name IS NOT NULL THEN B.LAST_NAME
                     ELSE A.NAME
                END AS REALNAME
FROM TABLEA A LEFT JOIN TABLEB B ON A.Name = B.Name
```
|
You can use inner and outer joins rather than your exists and not exists subqueries, which may perform better:
```
select a.id, b.last_name as realname
from tablea a
join tableb b
on a.name = b.name
union all
select distinct a.id, a.name
from tablea a
join tableb b
on a.name = b.last_name
union all
select a.id, a.name
from tablea a
left join tableb b
on a.name = b.name
or a.name = b.last_name
where b.name is null
```
|
Select statement with inner join
|
[
"",
"mysql",
"sql",
"select",
"inner-join",
""
] |
I need to search for Article Tag strings that are sub-strings of a user entered string.
So in the below example, if a user searched for "normal", the query should return Article 1 and Article 3, as article 3 has a wildcard tag "norm\*".
If I searched for "normalization" then i should get back articles 3 and 4.
Let me know if I need to explain my question more clearly.
Example-
* Article 1 Tag = normal
* Article 2 Tag = apple
* Article 3 Tag = norm\*
* Article 4 Tag = normalization
* Article 5 Tag = corvette
Note - I only need to do the substring search on tags that end with an \*
|
The easiest way to do it, but perhaps not the most efficient, is to replace all `*` with `%` on the fly and use a `LIKE` statement:
```
SELECT
Tag
FROM
Article
WHERE
'normal' LIKE REPLACE(Tag, '*', '%')
```
See an example in [SqlFiddle](http://sqlfiddle.com/#!2/3a928/3/0)
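A quick way to see this in action is a small SQLite session (the `article` table layout is assumed from the question; `REPLACE` and `LIKE` behave similarly in most engines):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE article (name TEXT, tag TEXT)")
con.executemany("INSERT INTO article VALUES (?, ?)",
                [('Article 1', 'normal'),
                 ('Article 2', 'apple'),
                 ('Article 3', 'norm*'),
                 ('Article 4', 'normalization'),
                 ('Article 5', 'corvette')])

def search(term):
    # plain tags must match exactly; tags ending in * become prefix patterns
    rows = con.execute(
        "SELECT name FROM article WHERE ? LIKE REPLACE(tag, '*', '%') ORDER BY name",
        (term,))
    return [name for (name,) in rows]

print(search('normal'))         # ['Article 1', 'Article 3']
print(search('normalization'))  # ['Article 3', 'Article 4']
```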
|
I think this query should work, although I didn't test it beyond your sample data.
Also, you didn't specify what database you're using and I just tried it on MS SQL, but it should be easy to adapt to other databases as it only relies on charindex and left (or substring) and those functions should be available on most databases.
[SQL Fiddle](http://sqlfiddle.com/#!3/d8834/1)
**MS SQL Server 2008 Schema Setup**:
```
create table your_table (article varchar(10), tag varchar (20))
insert your_table values
('Article 1','normal'),
('Article 2','apple'),
('Article 3','norm*'),
('Article 4','normalization'),
('Article 5','corvette')
```
**Query 1**:
```
declare @str varchar(30) = 'normalization'
select t.article, tag
from your_table t
left join (
select
article,
left(tag, charindex('*', tag,0)-1) t,
charindex('*', tag,0)-1 as l
from your_table
where charindex('*', tag,0) > 0
) a
on t.article = a.article
where (tag = @str) or (left(@str, l) = t)
```
**[Results](http://sqlfiddle.com/#!3/d8834/1/0)**:
```
| ARTICLE | TAG |
|-----------|---------------|
| Article 3 | norm* |
| Article 4 | normalization |
```
**Query 2**:
```
declare @str varchar(30) = 'normal'
select t.article, tag
from your_table t
left join (
select
article,
left(tag, charindex('*', tag,0)-1) t,
charindex('*', tag,0)-1 as l
from your_table
where charindex('*', tag,0) > 0
) a
on t.article = a.article
where (tag = @str) or (left(@str, l) = t)
```
**[Results](http://sqlfiddle.com/#!3/d8834/1/1)**:
```
| ARTICLE | TAG |
|-----------|--------|
| Article 1 | normal |
| Article 3 | norm* |
```
|
SQL Query to do a reverse CONTAINS search?
|
[
"",
"sql",
"regex",
""
] |
I have below query to support employee pagination sorted by employee name
```
SELECT rowNumAlias
,Employee.employeeId
,Employee.NAME
FROM (
SELECT row_number() OVER (
ORDER BY Employee.NAME ASC
) rowNumAlias
,employeeId
,NAME
FROM Employee
) employeeData
INNER JOIN Employee ON Employee.employeeId = employeeData.employeeId
WHERE rowNumAlias BETWEEN ? AND ?
```
Where parameter rowNumAlias can be any integer number between 1 and 100
This query takes around 7 seconds on my SQL Server database with 1 million records. **Is there a way I can minimize query execution time?**
|
You can try like this:
```
SELECT * FROM (
SELECT (SELECT row_number() OVER (ORDER BY e2.NAME ASC) FROM Employee e2 WHERE Employee.employeeId = E2.employeeId) rowNumAlias
    ,Employee.employeeId
,Employee.NAME
FROM Employee
) e3 WHERE e3.rowNumAlias BETWEEN ? AND ?
```
|
You can try to use CTE for this.
```
;WITH employeeData as
(
SELECT row_number() OVER (ORDER BY Employee.NAME ASC) rowNumAlias,
employeeId,
NAME
FROM Employee
)
SELECT employeeData.rowNumAlias,
employeeData.employeeId,
employeeData.NAME
FROM employeeData
INNER JOIN Employee ON Employee.employeeId = employeeData.employeeId
WHERE rowNumAlias BETWEEN ? AND ?
```
|
Performance tuning of this query?
|
[
"",
"sql",
"sql-server",
"database",
"sql-optimization",
""
] |
I have three tables like this.
**userLogin**
```
userID userPassword
c01 chutop
c02 213jkd
```
**SystemUser**
```
userID FName LName CategoryID
c01 Sidy Kamal 2
c02 Bindu Namal 1
c03 Sandy Khan 3
```
**SystemUserCategory**
```
CatID CatType
1 Admin
2 General
3 Librarian
```
When I enter `userID='c01'` and `userPassword` as `chutop` in the query, what I want to get in the results set is:
```
userID FName LName catType
c01      Sidy     Kamal    General
```
I'm entering the user name and the password, and I want to get the `FName`, `LName` and `catType` for the matching record. Furthermore, when I log in as `userID` `c01` with the password `chutop`, I want to retrieve the `FName` and `LName` from the `SystemUser` table for `userID='c01'`, and the `catType` from the `SystemUserCategory` table for that user's `CategoryID`, which is 2 in this case.
Here's what I tried.
```
SELECT userFName, userLName, catID
FROM userLogin, SystemUser, SystemUserCategory
WHERE userLogin.userID='c001' AND userLogin.userPassword='chath';
```
It gives me 12 records instead of 1 record.
Any help in correcting this or any query help would be really great.
|
Here's my answer.
```
SELECT userFName,UserLName,CatName
FROM userLogin INNER JOIN SystemUser ON (userLogin.userID=SystemUSer.userID) INNER JOIN SystemUserCategory ON (SystemUser.userCatID=SystemUserCategory.catID)
WHERE UserLogin.userID='c002' AND UserLogin.userPassword='ryan';
```
Apply the necessary username and password. The userid and the password are just what I added.
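For anyone who wants to see the joins produce exactly one row, here is a runnable SQLite reproduction of the three tables from the question (SQLite chosen only for convenience; note that storing plaintext passwords as in this schema is a bad idea in real applications):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE userLogin (userID TEXT, userPassword TEXT);
    CREATE TABLE SystemUser (userID TEXT, FName TEXT, LName TEXT, CategoryID INTEGER);
    CREATE TABLE SystemUserCategory (CatID INTEGER, CatType TEXT);
    INSERT INTO userLogin VALUES ('c01', 'chutop'), ('c02', '213jkd');
    INSERT INTO SystemUser VALUES ('c01', 'Sidy', 'Kamal', 2),
                                  ('c02', 'Bindu', 'Namal', 1),
                                  ('c03', 'Sandy', 'Khan', 3);
    INSERT INTO SystemUserCategory VALUES (1, 'Admin'), (2, 'General'), (3, 'Librarian');
""")

# every table is tied to the previous one by an explicit join condition,
# so the cartesian blow-up (the 12 rows from the question) cannot happen
row = con.execute("""
    SELECT su.FName, su.LName, suc.CatType
    FROM userLogin ul
    JOIN SystemUser su ON su.userID = ul.userID
    JOIN SystemUserCategory suc ON suc.CatID = su.CategoryID
    WHERE ul.userID = ? AND ul.userPassword = ?
""", ('c01', 'chutop')).fetchone()
print(row)  # ('Sidy', 'Kamal', 'General')
```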
|
You must use inner join instead of selecting from multiple tables.
Like this:
```
select FName, LName, CatType
from userLogin UL inner join SystemUser SU
on UL.userID = SU.userID inner join SystemUserCategory SUC
on SU.CategoryID = SUC.CatID
where UL.userID = 'c01' and UL.userPassword = 'chutop'
```
|
Get data from three tables in a single query
|
[
"",
"sql",
"sql-server",
"select",
""
] |
How does one, in Oracle, find the greatest value across three columns over, say, 10 rows?
The requirement: I have three date columns and I need the greatest value across those three columns over all 10 rows. I know GREATEST finds it within a single row.
How?
|
How about
```
select max(greatest(date1, date2, date3, date4)) from my_table;
```
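If you want to experiment without an Oracle instance, SQLite's multi-argument scalar `max()` plays the role of Oracle's `GREATEST`, while one-argument `max()` is the ordinary aggregate, so the same nesting idea can be tried like this (dates stored as ISO strings so they compare correctly):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (d1 TEXT, d2 TEXT, d3 TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?, ?)",
                [('2014-01-05', '2014-03-01', '2013-12-31'),
                 ('2014-02-10', '2014-01-01', '2014-06-15'),
                 ('2013-05-05', '2013-06-06', '2013-07-07')])

# inner max(d1, d2, d3): scalar, row-wise (the GREATEST equivalent)
# outer MAX(...):        aggregate, column-wise over all rows
greatest_overall = con.execute("SELECT MAX(MAX(d1, d2, d3)) FROM t").fetchone()[0]
print(greatest_overall)  # 2014-06-15
```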
|
you can use `greatest` again in your `order by`
```
select * from (
select greatest(c1,c2,c3,c4) from mytable
order by greatest(c1,c2,c3,c4) desc
) t1 where rownum = 1
```
|
SQL query to find greatest in columns and rows
|
[
"",
"sql",
"oracle",
"oracle11g",
""
] |
I have created a betting database and I am trying to determine who the best and worst bettors are. The easiest way I found to do this is to take the sum of a user's winnings and subtract from it the total amount they have bet.
My first SQL query is
```
SELECT fromuser,SUM(itemsgiven::int)
FROM transactions
WHERE transtype='Bet Placed'
GROUP BY fromuser
ORDER BY 2 DESC
```
Which returns this
```
fromuser sum
15228328 2689
13896406 2634
55954103 308
37460340 64
66589399 62
```
Then my second query is to determine the total winnings
```
SELECT touser,SUM(itemsgiven::float)
FROM transactions
WHERE transtype='Bet Won'
GROUP BY touser
ORDER BY 2 DESC
```
which returns this
```
touser sum
15228328 4387.515
55954103 152.295
13896406 120.285
66589399 71.28
37460340 56.925
```
My question is: what is the best way to combine these queries so I have two columns, one with the user id and the other with their total winnings (or losses)? I looked up some examples, but the part that tripped me up is how to make sure, when subtracting the two sums, that you only subtract values belonging to the same user id.
EDIT: I am using postgresql
|
You could use this query:
```
SELECT usr, SUM(subtotal) AS total
FROM (
SELECT fromuser AS usr, SUM(itemsgiven::int) AS subtotal
FROM transactions
WHERE transtype='Bet Placed'
GROUP BY fromuser
UNION ALL
SELECT touser AS usr, -1*SUM(itemsgiven::float) AS subtotal
FROM transactions
WHERE transtype='Bet Won'
GROUP BY touser
) s
GROUP BY usr
ORDER BY 2 DESC
```
are you sure about itemsgiven::int and then itemsgiven::float?
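Here is a cut-down, runnable reproduction of that query in SQLite (a subset of the question's numbers; the `::int`/`::float` casts are PostgreSQL-specific and dropped here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE transactions
               (fromuser INTEGER, touser INTEGER, transtype TEXT, itemsgiven REAL)""")
con.executemany("INSERT INTO transactions VALUES (?, ?, ?, ?)",
                [(15228328, None, 'Bet Placed', 2689),
                 (13896406, None, 'Bet Placed', 2634),
                 (None, 15228328, 'Bet Won', 4387.515),
                 (None, 13896406, 'Bet Won', 120.285)])

# winnings count positive, bets negative; grouping the union nets them per user
rows = con.execute("""
    SELECT usr, SUM(subtotal) AS total FROM (
        SELECT touser AS usr, SUM(itemsgiven) AS subtotal
        FROM transactions WHERE transtype = 'Bet Won' GROUP BY touser
        UNION ALL
        SELECT fromuser AS usr, -SUM(itemsgiven) AS subtotal
        FROM transactions WHERE transtype = 'Bet Placed' GROUP BY fromuser
    ) s
    GROUP BY usr
    ORDER BY total DESC
""").fetchall()
print(rows)  # 15228328 is up ~1698.5, 13896406 is down ~2513.7
```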
|
You can do this with `UNION`:
```
SELECT user_id, SUM(itemsgiven) as total_itemsgiven
FROM
(
SELECT fromuser user_id, - itemsgiven::int itemsgiven
FROM transactions
    WHERE transtype = 'Bet Placed'
UNION
SELECT touser user_id, itemsgiven::int itemsgiven
FROM transactions
    WHERE transtype = 'Bet Won'
) t
GROUP BY user_id
ORDER BY 2 DESC
```
|
Subtracting multiple rows from two sql queries
|
[
"",
"sql",
"postgresql",
""
] |
I have a table and a view. The table has two columns, of datatypes nvarchar and money. I have been updating the table by selecting from the view, like below.
```
Insert into MyTable
Select * from MyView
```
Recently, this insert fails with the error "String or binary data would be truncated." However, when I modified my select statement to something like
```
Select * from Myview WHERE Column is not null
OR
Select * from Myview WHERE Column > 0
```
The above works, with a warning saying `Warning: Null value is eliminated by an aggregate or other SET operation.` It occurred to me that maybe one of the records contains something unexpected. My table column is of money type and accepts null. I presumed the error may be due to something that's not of the money data type. The table is huge. Is there any way I can filter and return those alien records?
I also learnt that I can eliminate the error by turning the ANSI_WARNINGS setting ON and OFF, as described [here](http://blogs.lessthandot.com/index.php/DataMgmt/DBProgramming/MSSQLServer/surpress-string-or-binary-data-would-be-/). My concern is: wouldn't that result in loss of data? Any help would be appreciated.
|
It looks like the data in some column of MyView exceeds the size limit of the corresponding column in MyTable.
|
The "String or binary data would be truncated" error happens because some of the data coming from MyView is larger than the column size in MyTable.
Use
```
Select Max(Len(FieldName)) From MyView
```
to check the maximum length of the nvarchar data coming from MyView against the column size in MyTable.
Or you can use Left when inserting the data, something like this:
```
Insert into MyTable
Select Left(FieldName,50), Column1 from MyView
```
Note the 50 should be the size of the nvarchar field in MyTable
|
How to select record of different data type from sql column
|
[
"",
"sql",
"sql-server",
""
] |
I am working on a project in which I have a date (e.g. 01-01-2013) on which an item was taken on lease for some number of months, like 60 or 70. It differs between customers. I want to send an email to all the customers whose lease is going to end in 2 days, asking them to renew. I am not able to get the expiration date of the lease: I only have the date on which the leased item was taken and the number of months for which it was taken. I am doing this in MySQL. Can anyone help with how I can do this?
|
You need to use [DATE\_ADD()](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-add) to add the lease duration to the lease start date; that gives you the lease end date. Then subtract 2 days from it with [DATE\_SUB()](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub).
To get the lease end date:
```
select DATE_ADD(lease_start,INTERVAL 60 MONTH)
```
To get the lease end date minus 2 days:
```
select DATE_SUB(DATE_ADD(lease_start,INTERVAL 60 MONTH), INTERVAL 2 DAY) as alertDate
```
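If you prefer to do the date arithmetic in application code instead, the same calculation is easy to sketch in Python (`add_months` is my own helper name; like MySQL's `DATE_ADD`, it clamps the day when the target month is shorter):

```python
from datetime import date, timedelta
import calendar

def add_months(d, months):
    """Rough equivalent of MySQL's DATE_ADD(d, INTERVAL months MONTH)."""
    y, m0 = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m0 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])  # clamp: Jan 31 + 1 month -> Feb 28
    return date(year, month, day)

lease_start = date(2013, 1, 1)
lease_end   = add_months(lease_start, 60)      # DATE_ADD(..., INTERVAL 60 MONTH)
alert_date  = lease_end - timedelta(days=2)    # DATE_SUB(..., INTERVAL 2 DAY)
print(lease_end, alert_date)  # 2018-01-01 2017-12-30
```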
|
Here is the function you're looking for:
```
DATE_ADD(taken_date,INTERVAL nb_months MONTH)
```
This function will give you the expiration date corresponding to the `taken_date` on which i add the desired number of months.
Hope this will help.
|
How to get future date if i have the past date and the number of months after which i want the date?
|
[
"",
"mysql",
"sql",
"date",
""
] |
I am selecting from two tables, a product table and a shipping table, and building the table below, but I'd like to divide the product $ value by the number of rows per ID so that the product $ is split between all the rows it appears on. Is there a way to do this in the select statement as I'm building the table?
What I have:
```
ID | Product $ | Shipping $
---------------------------------
123456 | 200.00 | 5.00
123456 | 200.00 | 10.00
123567 | 186.00 | 7.99
```
What I'd like:
```
ID | Product $ | Shipping $
---------------------------------
123456 | 100.00 | 5.00
123456 | 100.00 | 10.00
123567 | 186.00 | 7.99
```
|
It is simpler to use [windowing functions](http://msdn.microsoft.com/en-us/library/ms189461%28v=sql.90%29.aspx) instead of subqueries:
```
SELECT
[ID]
,[Product $] / COUNT(*) OVER(PARTITION BY [ID])
,[Shipping $]
FROM MyTable
```
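As a quick check of the window-function form, here is a runnable SQLite version of the question's data (window functions need SQLite 3.25+, which is bundled with recent Python builds):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER, product REAL, shipping REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(123456, 200.00, 5.00),
                 (123456, 200.00, 10.00),
                 (123567, 186.00, 7.99)])

# COUNT(*) OVER (PARTITION BY id) is evaluated per row, so no subquery is needed
rows = con.execute("""
    SELECT id,
           product / COUNT(*) OVER (PARTITION BY id) AS product_split,
           shipping
    FROM orders
    ORDER BY id, shipping
""").fetchall()
print(rows)  # [(123456, 100.0, 5.0), (123456, 100.0, 10.0), (123567, 186.0, 7.99)]
```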
|
Before joining the product & shipping tables, you can calculate the product price based on the count (as in the inner query) and then inner join it with the shipping table. You can use the following query as a reference:
```
SELECT ps.id
,t1.new_c
,ps.shipping
FROM product_shipping ps
INNER JOIN (
SELECT id
,product / count(1) new_c
FROM product_shipping
GROUP BY id
,product
) t1 ON ps.id = t1.id;
```
Since sqlfiddle is down, Here you go with [ideone](http://ideone.com/huESPk)
|
In SQL, how can I divide a value by the number of rows it is in?
|
[
"",
"sql",
"sql-server",
""
] |
I'm using a stored procedure to search my database, filtering the results on a parameter, and I need to filter on an unknown number of parameter values.
How can I write a stored procedure that handles this?
Here is my stored procedure :
```
CREATE PROCEDURE ComplaintRefListOnDistrict
@District nvarchar(max) = ''
AS
BEGIN
SET NOCOUNT ON;
SELECT
ComplaintFullID, CustomerName, Customer_Address, CustomerEmail,
Date, ContractID, CustomerPhoneNumber,ID, Complaintreference_ID, State
FROM
dbo.ComplaintsSmartObject
LEFT JOIN
dbo.UsersDistricts ON dbo.UsersDistricts.District = dbo.ComplaintsSmartObject.District
WHERE
(dbo.UsersDistricts.District = @District)
END
GO
```
Thank you!
|
I was suggesting using this kind of dynamic SQL:
```
CREATE PROCEDURE ComplaintRefListOnDistrict
@District nvarchar(max) = ''
AS
BEGIN
SET NOCOUNT ON;
declare @sql nvarchar(max)
set @sql = 'SELECT
ComplaintFullID, CustomerName, Customer_Address, CustomerEmail,
Date, ContractID, CustomerPhoneNumber,ID, Complaintreference_ID, State
FROM
dbo.ComplaintsSmartObject
LEFT JOIN
dbo.UsersDistricts ON dbo.UsersDistricts.District = dbo.ComplaintsSmartObject.District
WHERE
(dbo.UsersDistricts.District in (' + @District + ') )'
EXEC sp_executesql @sql
END
```
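One caveat with the dynamic-SQL approach: concatenating `@District` straight into the statement exposes you to SQL injection if the value comes from users. From client code, a common safer pattern is to build a placeholder list and pass the values as parameters — sketched here with Python and SQLite (an assumed `UsersDistricts` table with made-up district names):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE UsersDistricts (District TEXT)")
con.executemany("INSERT INTO UsersDistricts VALUES (?)",
                [('North',), ('South',), ('East',), ('West',)])

districts = ['North', 'West']                    # user-supplied values
placeholders = ','.join('?' for _ in districts)  # -> "?,?"
sql = "SELECT District FROM UsersDistricts WHERE District IN ({})".format(placeholders)
rows = [d for (d,) in con.execute(sql, districts)]
print(rows)
```

Only the placeholder string is interpolated; the values themselves travel as bound parameters, so they can never alter the statement.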
|
First add a helper function, that will split the string you pass to it into rows.
```
CREATE FUNCTION SplitString
(
@string NVARCHAR(MAX),
@delimiter CHAR(1)
)
RETURNS
@output TABLE(Data NVARCHAR(MAX))
BEGIN
DECLARE @start INT, @end INT
SELECT @start = 1, @end = CHARINDEX(@delimiter, @string)
WHILE @start < LEN(@string) + 1
BEGIN
IF (@end = 0) SET @end = LEN(@string) + 1
INSERT INTO @output (Data) VALUES(SUBSTRING(@string, @start, @end - @start))
SET @start = @end + 1
SET @end = CHARINDEX(@delimiter, @string, @start)
END
RETURN
END
```
And now everything is very easy. Use the SQL `IN` operator:
```
CREATE PROCEDURE ComplaintRefListOnDistrict
@District nvarchar(max) = ''
AS
BEGIN
SET NOCOUNT ON;
SELECT
ComplaintFullID, CustomerName, Customer_Address, CustomerEmail,
Date, ContractID, CustomerPhoneNumber,ID, Complaintreference_ID, State
FROM
dbo.ComplaintsSmartObject
LEFT JOIN
dbo.UsersDistricts ON dbo.UsersDistricts.District = dbo.ComplaintsSmartObject.District
WHERE
(dbo.UsersDistricts.District IN (SELECT Data FROM dbo.SplitString(@District, ',')))
END
```
HTH.
|
Handle multiple parameters in SQL Server
|
[
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
I have a sql select:
```
SELECT employee.name, employee.surname, department.name FROM employee INNER JOIN deparment
ON employee.id_department=department.id WHERE department.name = 'security'
```
But I want to select **only 5 records**. Can I do this using SQL tools?
And what if I want **>= 5 records** (more than 5 records)?
|
Here is the SQL
```
SELECT employee.name, employee.surname, department.name
FROM employee INNER JOIN deparment
ON employee.id_department=department.id
WHERE department.name = 'security'
LIMIT 5;
```
To know more about [LIMIT](http://www.mysqltutorial.org/mysql-limit.aspx)
---
## Edit
I believe **>= 5 records** means skipping the first 5 records and getting the next 5.
```
SELECT employee.name, employee.surname, department.name
FROM employee INNER JOIN deparment
ON employee.id_department=department.id
WHERE department.name = 'security'
LIMIT 5,5;
```
|
```
SELECT employee.name, employee.surname, department.name FROM employee INNER JOIN deparment
ON employee.id_department=department.id WHERE department.name = 'security'
limit 5;
```
|
How to select from DB only 5 records. Use MySQL
|
[
"",
"mysql",
"sql",
""
] |
```
SELECT
u.*,
GROUP_CONCAT(DISTINCT f.shot_id SEPARATOR ",") AS ownFavorites,
GROUP_CONCAT(DISTINCT st.shot_id SEPARATOR ",") AS ownStars,
GROUP_CONCAT(DISTINCT s.id SEPARATOR ",") AS ownShots,
( SELECT AVG(p.count)
FROM points p
LEFT JOIN shots s ON s.user_id = **U.ID** AND p.shot_id = s.id
WHERE date >= DATE_SUB(CURDATE(),INTERVAL 2 DAY)
) AS attention,
( SELECT SUM(p.count)
FROM points p
LEFT JOIN shots s ON s.user_id = **U.ID** AND s.id = p.shot_id
) AS popularity
FROM users u
LEFT OUTER JOIN shots s ON s.user_id = u.id
LEFT OUTER JOIN favorites f ON f.user_id = u.id
LEFT OUTER JOIN stars st ON st.user_id = u.id
WHERE u.username = ?;
```
I have two subselects which use the parameter `u.id` (marked in the query). If I run the SQL like this, it generates something like:
```
#1054 - Unknown column 'u.id' in 'on clause'
```
This means `u.id` is NOT visible in the `SubSelects`, even though the `MainSelect` reads from the `users` table, where `u.id` exists.
My question: is there a way to pass the selected `u.id` value to the `Subselects` with plain SQL?
|
Don't forget GROUP BY in the subqueries:
```
SELECT
u.*,
  COALESCE(a.average, 0) attention,
COALESCE(p.total, 0) popular,
GROUP_CONCAT(DISTINCT f.shot_id) AS ownFavorites,
GROUP_CONCAT(DISTINCT st.shot_id SEPARATOR ",") AS ownStars,
GROUP_CONCAT(DISTINCT s.id SEPARATOR ",") AS ownShots
FROM
users u
LEFT JOIN
(
SELECT
s.user_id,
AVG(p.count) average
FROM
shots s
JOIN
points p
ON s.id = p.shot_id
WHERE
        s.date >= CURRENT_DATE - INTERVAL 2 DAY
GROUP BY s.user_id
) a
ON u.id = a.user_id
LEFT JOIN
(
SELECT
s.user_id,
SUM(p.count) total
FROM
shots s
JOIN
points p
ON s.id = p.shot_id
GROUP BY s.user_id
) p
ON u.id = p.user_id
LEFT OUTER JOIN shots s ON s.user_id = u.id
LEFT OUTER JOIN favorites f ON f.user_id = u.id
LEFT OUTER JOIN stars st ON st.user_id = u.id
WHERE u.username = 'user'
```
|
Try turning the selects into a subselect join.
```
FROM users u
LEFT OUTER JOIN shots s ON s.user_id = u.id
LEFT OUTER JOIN favorites f ON f.user_id = u.id
LEFT OUTER JOIN stars st ON st.user_id = u.id
LEFT OUTER JOIN ( SELECT AVG(p.count) AverageOfP, p.shot_id
FROM points p
WHERE date >= DATE_SUB(CURDATE(),INTERVAL 2 DAY)
) p ON p.shot_id = s.id
LEFT OUTER JOIN ( SELECT SUM(p.count) SumOfP, p.shot_id
FROM points p
) p2 ON p2.shot_id = s.id
```
The s table is already joined to u and should be good. Then in your select you can just select AverageOfP and SumOfP.
|
Pass Value to Subselect
|
[
"",
"mysql",
"sql",
""
] |
I'm trying to filter some values, and I need to know whether they can fall between two dates or not, but I could not write SQL to do this.
I have the following date: `May 10 2010`.
I need to find if this date can be between two dates if I add some years to it.
**Example1**: can this date be between `January 15 2014` and `June 20 2014`?
Yes, because `May 10 2014` is.
**Example2**: can this date be between `May 15 2014` and `June 20 2014`?
No, because `May 10 2014` and `May 10 2015` is not between this interval.
**Example3**: can this date be between `December 15 2013` and `June 20 2014`?
Yes, because `May 10 2014` is.
|
This is a bit tricky in SQL Server. I think the best way is to normalize the dates to Jan 1, based on when the period begins. Then you can safely use `datediff()` to add the appropriate year value.
Something like this:
```
select (case when dateadd(year, datediff(year, newdate, newstart), newdate)
between newstart and newend
then 'Between' else 'NotBetween'
end)
from (select (StartDate - datepart(dayofyear, StartDate) + 1) as newstart,
             (EndDate - datepart(dayofyear, StartDate) + 1) as newend,
(TheDate - datepart(dayofyear, StartDate) + 1) as newdate
from (select cast('2013-12-15' as datetime) as StartDate,
cast('2014-06-20' as datetime) as EndDate,
cast('2010-05-10' as datetime) as thedate
) dates
) dates;
```
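The same "does the anniversary of a date land inside an interval" test is easy to express in application code too; here is a small Python sketch (the function name is mine, and the Feb 29 handling is one possible policy, not something specified in the question):

```python
from datetime import date

def anniversary_in_range(d, start, end):
    """True if d's month/day falls inside [start, end] for some year."""
    for year in range(start.year, end.year + 1):
        try:
            candidate = d.replace(year=year)
        except ValueError:      # Feb 29 in a non-leap year: skip that year
            continue
        if start <= candidate <= end:
            return True
    return False

base = date(2010, 5, 10)
print(anniversary_in_range(base, date(2014, 1, 15), date(2014, 6, 20)))   # True
print(anniversary_in_range(base, date(2014, 5, 15), date(2014, 6, 20)))   # False
print(anniversary_in_range(base, date(2013, 12, 15), date(2014, 6, 20)))  # True
```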
|
You can try something like this:
```
declare @intervals table (StartDate date, EndDate date);
declare @date date = '2010-05-10';
insert into @intervals values
('2014-01-15', '2014-06-20'),
('2014-05-15', '2014-06-20'),
('2013-12-15', '2014-06-20');
select case when dateadd(year,year(EndDate)-year(@date),@date)
between StartDate and EndDate
then 'Yes'
else 'No'
end,
StartDate,
EndDate
from @intervals;
```
---
**OUTPUT**
```
StartDate EndDate
---- ---------- ----------
Yes 2014-01-15 2014-06-20
No 2014-05-15 2014-06-20
Yes 2013-12-15 2014-06-20
```
|
How do I get a date between two dates?
|
[
"",
"sql",
"sql-server",
""
] |
So I have 4 tables that are connected via foreign keys namely result, position, student, candidates
What i need to achieve is this:
output:
```
------------------------
s_fname | count(c_id)
-----------------------
Mark | 2 -> President
France | 2 -> President
```
.. to count how many times a c\_id has been repeated in the "result" table, filtered by pos\_id from the "candidates" table
Below is my code which lacks the counting part:
```
select s_fname
from results, candidates, student, positioning
where results.c_id = candidates.c_id
AND student.sid = results.sid
AND candidates.pos_id = positioning.pos_id
AND positioning.pos_id = 1
Group BY results.sid;
```
..which I know lacks a lot of things ...
It seems very complex to me, but I know there are gurus here who can achieve this. Thanks.
result table
```
---------------------
| r_id | sid | c_id |
---------------------
1 | 1 | 1
2 | 1 | 2
3 | 1 | 4
4 | 2 | 1
5 | 2 | 2
6 | 2 | 4
7 | 3 | 3
8 | 3 | 2
9 | 5 | 3
10 | 5 | 2
----------------------
student table
----------------
| s_id| s_fname|
----------------
1 | Mark
2 | Jorge
3 | France
4 | James
--------------------
Candidates Table
------------------------
| c_id | sid | pos_id
------------------------
1 | 1 | 1
2 | 2 | 2
3 | 4 | 3
4 | 3 | 1
5 | 5 | 2
----------------------
positioning Table
-----------------------
| pos_id | po_name |
-----------------------
1 | President
2 | Vice President
3 | Secretary
4 | Treasurer
```
|
This is untested, but should return your intended result.
What it does is join all of your tables on the related `foreign keys`, effectively giving a wide table of all of your columns. Then we limit to the `candidates` running for the `President` position. Since we need to `group` because of the `count aggregate`, we `group` on the `name`. The `count` should reflect the number of votes each got, because there is a `one to many relationship` to the `result` table.
```
SELECT s_fname, Count(*)
FROM studentTable st
INNER JOIN Candidates c On c.sid = st.s_ID
INNER JOIN positioning p on c.pos_ID = p.pos_ID
INNER JOIN results r on st.s_ID = r.s_ID
WHERE po_Name = "President"
GROUP BY s_Fname
```
Due to a misunderstanding of the intended joins, the following query should show the appropriate results.
```
SELECT s_fname, Count(*)
FROM studentTable st
INNER JOIN Candidates c On c.sid = st.s_ID
INNER JOIN positioning p on c.pos_ID = p.pos_ID
INNER JOIN results r on c.c_ID = r.c_ID
WHERE po_Name = "President"
GROUP BY s_Fname
```
|
# Code:
```
SELECT s_fname AS [Student Name], COUNT(A.c_id) AS [Count], po_name AS [Position]
FROM results AS A INNER JOIN candidates AS B ON A.c_id=B.c_id
INNER JOIN student AS C ON A.sid=C.sid
INNER JOIN positioning AS D ON B.pos_id=D.pos_id
WHERE B.pos_id = 1
GROUP BY s_fname
```
|
How to write this basic sql statement
|
[
"",
"mysql",
"sql",
""
] |
I have the following Query....
```
SELECT DISTINCT
p.ProductID, p.ProductName,
od.Quantity, o.Shipcountry, o.ShippedDate
FROM
Products p, [Order Details] od, Orders o
WHERE
p.ProductID = od.ProductID
AND od.OrderID = o.OrderID
AND o.ShippedDate < '1997-01-01'
AND ShipCountry = 'Spain'
OR ShipCountry = 'Portugal'
ORDER BY
p.ProductName ASC
```
I am trying to get all orders that shipped BEFORE the date entered, yet it doesn't seem to be working at all. Below are a couple of results from the query:
```
82 Alice Mutton 91 Portugal 1996-10-17 00:00:00.000
82 Alice Mutton 110 Portugal 1997-03-04 00:00:00.000
17 Alice Mutton 21 Portugal 1996-11-18 00:00:00.000
82 Alice Mutton 4 Portugal 1997-03-04 00:00:00.000
17 Alice Mutton 28 Portugal 1998-04-13 00:00:00.000
82 Alice Mutton 130 Portugal 1997-04-08 00:00:00.000
17 Alice Mutton 10 Portugal 1996-10-25 00:00:00.000
82 Alice Mutton 22 Portugal 1996-10-25 00:00:00.000
17 Alice Mutton 1 Portugal 1997-06-06 00:00:00.000
17 Alice Mutton 36 Portugal 1998-03-26 00:00:00.000
82 Alice Mutton 49 Portugal 1997-01-02 00:00:00.000
82 Alice Mutton 66 Portugal 1997-07-29 00:00:00.000
82 Alice Mutton 39 Portugal 1997-01-02 00:00:00.000
82 Alice Mutton 120 Portugal 1997-04-08 00:00:00.000
```
|
The `OR` statement is messing up your results in your `WHERE` criteria. You can solve it with `IN` or using parentheses:
```
SELECT DISTINCT p.ProductID, p.ProductName, od.Quantity,o.Shipcountry,o.ShippedDate
FROM Products p
JOIN [Order Details] od ON p.ProductID=od.ProductID
JOIN Orders o ON od.OrderID=o.OrderID
WHERE o.ShippedDate<'1997-01-01'
AND o.ShipCountry IN ('Spain','Portugal')
ORDER BY p.ProductName ASC
```
If you'd prefer to use parentheses, then this would be the equivalent to using `IN`:
```
...
AND (ShipCountry = 'Spain' OR ShipCountry = 'Portugal')
...
```
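The underlying issue is operator precedence: `AND` binds tighter than `OR`, so the original `WHERE` was parsed as `(shipped AND Spain) OR Portugal`. A tiny SQLite session (made-up rows) makes the difference visible:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (country TEXT, shipped TEXT)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [('Spain', '1996-05-01'), ('Spain', '1998-05-01'),
                 ('Portugal', '1996-06-01'), ('Portugal', '1997-06-01')])

# AND binds tighter than OR: every Portugal row leaks through the date filter
loose = con.execute("""
    SELECT COUNT(*) FROM orders
    WHERE shipped < '1997-01-01' AND country = 'Spain' OR country = 'Portugal'
""").fetchone()[0]

# IN (equivalent to the parenthesized OR) keeps the date filter for both countries
strict = con.execute("""
    SELECT COUNT(*) FROM orders
    WHERE shipped < '1997-01-01' AND country IN ('Spain', 'Portugal')
""").fetchone()[0]
print(loose, strict)  # 3 2
```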
|
The problem is the `or` statement. But, you should rewrite the query using explicit joins. Also, `IN` will help fix the problem with the `or`:
```
SELECT DISTINCT p.ProductID, p.ProductName, od.Quantity,o.Shipcountry,o.ShippedDate
FROM Products p JOIN
[Order Details] od
     ON p.ProductID = od.ProductID JOIN
Orders o
ON od.OrderID=o.OrderID
WHERE o.ShippedDate<'1997-01-01' AND
ShipCountry IN ('Spain', 'Portugal')
ORDER BY p.ProductName ASC;
```
|
Can't get my SQL query to comply with date request
|
[
"",
"sql",
"sql-server",
""
] |
I'm trying to run a query that retrieves all records in a table that exist in a subquery.
However, it is returning all records instead of just the ones that I am expecting.
Here is the query:
```
SELECT DISTINCT x FROM T1 WHERE EXISTS
(SELECT * FROM T1 NATURAL JOIN T2 WHERE T2.y >= 3.0);
```
I've tried testing the subquery and it returns the correct number of records that meet my constraint.
But when I run the entire query it returns records that should not exist in the subquery.
Why is EXISTS evaluating true for all the records in T1?
|
You need a correlated subquery, not a join in the subquery. It is unclear what the right correlation clause is, but something like this:
```
SELECT DISTINCT x
FROM T1
WHERE EXISTS (SELECT 1 FROM T2 WHERE T2.COL = T1.COL AND T2.y >= 3.0);
```
Your query has a regular subquery. Whenever it returns at least one row, then the `exists` is true. So, there must be at least one matching row. This version "logically" runs the subquery for each row in the outer `T1`.
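The difference can be sketched with SQLite; the data and the shared column name `col` are placeholders, as in the answer:

```python
import sqlite3

# Sketch of correlated vs. uncorrelated EXISTS; T1/T2 and the shared
# column `col` are made-up placeholders matching the answer above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE T1 (x TEXT, col INTEGER)")
conn.execute("CREATE TABLE T2 (col INTEGER, y REAL)")
conn.executemany("INSERT INTO T1 VALUES (?, ?)", [("a", 1), ("b", 2), ("c", 3)])
conn.executemany("INSERT INTO T2 VALUES (?, ?)", [(1, 5.0), (2, 1.0)])

# Uncorrelated: the subquery returns at least one row on its own,
# so EXISTS is true for every row of the outer T1.
uncorrelated = conn.execute(
    "SELECT DISTINCT x FROM T1 WHERE EXISTS "
    "(SELECT * FROM T1 NATURAL JOIN T2 WHERE T2.y >= 3.0)"
).fetchall()

# Correlated: the subquery is logically re-evaluated per outer row.
correlated = conn.execute(
    "SELECT DISTINCT x FROM T1 WHERE EXISTS "
    "(SELECT 1 FROM T2 WHERE T2.col = T1.col AND T2.y >= 3.0)"
).fetchall()

print(len(uncorrelated), correlated)  # 3 [('a',)]
```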
|
**Q: Why is EXISTS evaluating true for all the records in T1?**
**A:** Because the subquery returns a row, entirely independent of anything in the outer query.
The `EXISTS` predicate is simply checking whether the subquery is returning a row or not, and returning a boolean TRUE or FALSE.
You'd get the same result with:
```
SELECT DISTINCT x FROM T1 WHERE EXISTS (SELECT 1)
```
(The only difference would be if that subquery didn't return at least one row, then you'd get no rows returned in the outer query.)
There's no correlation between the rows returned by the subquery and the rows in the outer query.
---
I expect that there's another question you want to ask. And the answer to that really depends on what result set you are wanting to return.
If you are wanting to return rows from T1 that have some "matching" row in T2, you could use an `EXISTS (correlated subquery)`.
Or, you could also use a join operation to return an equivalent result, for example:
```
SELECT DISTINCT T1.x
FROM T1
NATURAL
JOIN T2
WHERE T2.y >= 3.0
```
|
MySQL WHERE EXISTS evaluating to true for all records
|
[
"",
"mysql",
"sql",
"subquery",
"where-clause",
"exists",
""
] |
I recently started to support a project for which databases are maintained in SQL Server 2008 R2.
There is a SQL Server Agent DBMR job which was running fine until a couple of days back. Now it's been failing with the error "Could not find stored procedure 'XXX'". When I looked in the master database, that procedure is not there.
Is there a way (logs or an event-viewer-like application) from SQL Server Management Studio that I can find out whether an SP has been deleted by someone recently?
Under the strong assumption that no one would have deleted any SPs, I am really puzzled as to how this job was running fine until 2 days ago and is failing now. Any suggestions / ideas welcome. Thanks.
|
Thank you all for your suggestions/answers.
One of our engineers tried to install a different DBMR version and this deleted some of the jobs, SPs etc. Sorry if I wasted your time.
|
```
SELECT * FROM sys.fn_dblog(NULL, NULL);
```
Using the above query you will be able to identify the dropped stored procedure details.
```
SELECT CONVERT(varchar(max), SUBSTRING([RowLog Contents 0], 33, LEN([RowLog Contents 0]))) AS Script
FROM fn_dblog(NULL, NULL)
WHERE Operation = 'LOP_DELETE_ROWS'
  AND Context = 'LCX_MARK_AS_GHOST'
  AND AllocUnitName = 'sys.sysobjvalues.clst'
  AND [TRANSACTION ID] IN (SELECT DISTINCT [TRANSACTION ID]
                           FROM sys.fn_dblog(NULL, NULL)
                           WHERE Context IN ('LCX_NULL')
                             AND Operation IN ('LOP_BEGIN_XACT')
                             AND [Transaction Name] = 'DROPOBJ'
                             AND CONVERT(nvarchar(11), [Begin Time]) BETWEEN '2013/07/31' AND '2015/08/1')
  AND SUBSTRING([RowLog Contents 0], 33, LEN([RowLog Contents 0])) <> 0;
```
Using the above query you can regenerate the deleted procedures.
To be on the safe side I have taken the begin time between the 2013 and 2015 dates, so the result set should include all the dropped procedures (objects).
Hope this will help you.
for more reading
<https://serverfault.com/questions/69056/how-to-find-when-the-stored-procedure-was-deleted-and-who-deleted-it>
<http://solutioncenter.apexsql.com/recover-a-specific-dropped-object/>
<http://solutioncenter.apexsql.com/how-to-recover-stored-procedures-functions-and-triggers/>
|
SQL Server Agent Job failing
|
[
"",
"sql",
""
] |
I have a situation, where I get a result set and based on the values from the result set I need to delete records, from another table.
Example :
```
ResultSet rs = SELECT col FROM TableA;
```
Now I don't know whether `rs.getString(1)` is `null` or not,
so if it is null I have to do:
```
DELETE FROM TableB WHERE col2 IS NULL;
```
if not null then :
```
DELETE FROM TableB WHERE col2 = rs.getString(1);
```
How do I do this?
|
If you use the normal comparison operator, it always returns false if one of the operands is null.
So you have to explicitly check for null values:
```
DELETE FROM TableB
WHERE
col2 = rs.getString(1) OR -- Neither are NULL and they have the same value.
(col2 IS NULL AND rs.getString(1) IS NULL) -- or Both are null.
```
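A quick SQLite probe of why the explicit `IS NULL` branch is needed; the parameter stands in for `rs.getString(1)`:

```python
import sqlite3

# Minimal demonstration that `=` never matches NULL, while the
# explicit IS NULL branch does.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TableB (col2 TEXT)")
conn.executemany("INSERT INTO TableB VALUES (?)", [("x",), (None,), ("y",)])

def delete_matching(value):
    # value plays the role of rs.getString(1) from the question.
    cur = conn.execute(
        "DELETE FROM TableB WHERE col2 = ? OR (col2 IS NULL AND ? IS NULL)",
        (value, value),
    )
    return cur.rowcount

deleted_null = delete_matching(None)   # removes only the NULL row
deleted_x = delete_matching("x")       # removes only the 'x' row
print(deleted_null, deleted_x)  # 1 1
```

With plain `col2 = ?` the first call would delete nothing, because `col2 = NULL` is never true.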
|
Something like:
```
DELETE FROM TableB
WHERE ISNULL(col2,'VALUEISNULL') = ISNULL(rs.getString(1),'VALUEISNULL')
```
'VALUEISNULL' may be whatever you want that would not be a valid value for that field.
|
How to distinguish between NULL and NOT NULL during comparison
|
[
"",
"sql",
"compare",
"isnull",
""
] |
My DBA does not like that I am using case statements with sub-queries. Is there another approach I could take to improve performance? This update statement is part of a stored proc.
```
UPDATE dbo.CMN_PersonsFerpa
SET
IsWorkEmailFerpa = CASE WHEN (SELECT EmailType FROM dbo.CMN_PersonsEmailLinks WHERE CMN_PersonsEmailLinksID = @CMN_PersonsEmailLinksID) = 'Work' THEN @IsFERPA ELSE IsWorkEmailFerpa END,
IsPersonalEmailFerpa = CASE WHEN (SELECT EmailType FROM dbo.CMN_PersonsEmailLinks WHERE CMN_PersonsEmailLinksID = @CMN_PersonsEmailLinksID) = 'Personal' THEN @IsFERPA ELSE IsPersonalEmailFerpa END,
IsParentEmailFerpa = CASE WHEN (SELECT EmailType FROM dbo.CMN_PersonsEmailLinks WHERE CMN_PersonsEmailLinksID = @CMN_PersonsEmailLinksID) = 'Parent' THEN @IsFERPA ELSE IsParentEmailFerpa END,
IsTempEmailFerpa = CASE WHEN (SELECT EmailType FROM dbo.CMN_PersonsEmailLinks WHERE CMN_PersonsEmailLinksID = @CMN_PersonsEmailLinksID) = 'Temporary' THEN @IsFERPA ELSE IsTempEmailFerpa END,
IsFAFSAEmailFerpa = CASE WHEN (SELECT EmailType FROM dbo.CMN_PersonsEmailLinks WHERE CMN_PersonsEmailLinksID = @CMN_PersonsEmailLinksID) = 'FAFSA' THEN @IsFERPA ELSE IsFAFSAEmailFerpa END,
IsCSSProfEmailFerpa = CASE WHEN (SELECT EmailType FROM dbo.CMN_PersonsEmailLinks WHERE CMN_PersonsEmailLinksID = @CMN_PersonsEmailLinksID) = 'CSS Profile' THEN @IsFERPA ELSE IsCSSProfEmailFerpa END,
IsCommenceEmailFerpa = CASE WHEN (SELECT EmailType FROM dbo.CMN_PersonsEmailLinks WHERE CMN_PersonsEmailLinksID = @CMN_PersonsEmailLinksID) = 'Commencement' THEN @IsFERPA ELSE IsCommenceEmailFerpa END,
IsAcctHoldEmailFerpa = CASE WHEN (SELECT EmailType FROM dbo.CMN_PersonsEmailLinks WHERE CMN_PersonsEmailLinksID = @CMN_PersonsEmailLinksID) = 'Account Holder' THEN @IsFERPA ELSE IsAcctHoldEmailFerpa END
Where CMN_PersonsFerpa.cmn_personsID = (select cmn_personsID from CMN_PersonsEmailLinks WHERE CMN_PersonsEmailLinksID = @CMN_PersonsEmailLinksID)
```
After few suggestions here is what I have. Is this acceptable? Or is there a better way?:
```
SELECT @DBType = EmailType, @cmn_personsID = CMN_PersonsID FROM dbo.CMN_PersonsEmailLinks WHERE CMN_PersonsEmailLinksID = @CMN_PersonsEmailLinksID
UPDATE dbo.CMN_PersonsFerpa
SET
IsWorkEmailFerpa = CASE WHEN @DBType = 'Work' THEN @IsFERPA END,
IsPersonalEmailFerpa = CASE WHEN @DBType = 'Personal' THEN @IsFERPA END,
IsParentEmailFerpa = CASE WHEN @DBType = 'Parent' THEN @IsFERPA END,
IsTempEmailFerpa = CASE WHEN @DBType = 'Temporary' THEN @IsFERPA END,
IsFAFSAEmailFerpa = CASE WHEN @DBType = 'FAFSA' THEN @IsFERPA END,
IsCSSProfEmailFerpa = CASE WHEN @DBType = 'CSS Profile' THEN @IsFERPA END,
IsCommenceEmailFerpa = CASE WHEN @DBType = 'Commencement' THEN @IsFERPA END,
IsAcctHoldEmailFerpa = CASE WHEN @DBType = 'Account Holder' THEN @IsFERPA END,
LastChangeBy = @UserGUID,
LastChangeDateTime = GETDATE()
Where CMN_PersonsFerpa.cmn_personsID = @CMN_PersonsID
```
|
The goal would be to eliminate sub-queries. You could do so by building up a temp table before the update, or by using proper linking. Without knowing your data schema it is hard to help you design something, but here is a shot:
```
update p set
IsWorkEmailFerpa = case when e.EmailType = 'Work' then @IsFERPA else IsWorkEmailFerpa end,
IsPersonalEmailFerpa = case when e.EmailType = 'Personal' then @IsFERPA else IsPersonalEmailFerpa end,
IsParentEmailFerpa = case when e.EmailType = 'Parent' then @IsFERPA else IsParentEmailFerpa end,
IsTempEmailFerpa = case when e.EmailType = 'Temporary' then @IsFERPA else IsTempEmailFerpa end,
...
IsAcctHoldEmailFerpa = case when e.EmailType = 'Account Holder' then @IsFERPA else IsAcctHoldEmailFerpa end,
from dbo.CMN_PersonsFerpa p
join dbo.CMN_PersonsEmailLinks e
on e.cmn_personsID = p.cmn_personsID
where e.CMN_PersonsEmailLinksID = @CMN_PersonsEmailLinksID
```
|
Why wouldn't you use a join? Correlated subqueries work row by row, like a cursor, and should almost never be used.
|
Improve this SQL Query for better performance
|
[
"",
"sql",
"sql-server",
"subquery",
"case-statement",
""
] |
```
Name varchar, Value int, Active bit
-----------------------------------
'Name1',1,1
'Name2',2,1
'Name1',3,0
'Name2',4,0
'Name3',1,1
'Name4',1,1
```
I want to return where `Active` is anything but prioritize when it's `0` so I want to return this:
```
'Name1',3
'Name2',4
'Name3',1
'Name4',1
```
I tried this, but get an error to include `Active` in my return statement
```
Select Distinct Name, Value From Table Order by Active
```
So I tried this:
```
Select Distinct Name, Value, Active From Table Order by Active
```
But now it returns all the rows. I would like to prioritize `where Active = 0` in the distinct results but since it requires I put `Active` in the return statement makes this complicated.
Can someone help?
|
Your question is a little confusing, but if I'm understanding it correctly, you need to use a `group by` statement:
```
select name,
max(case when active = 0 then value end) value
from yourtable
group by name
```
* [SQL Fiddle Demo](http://sqlfiddle.com/#!3/93c89/1)
---
With your edits, you can use `coalesce` and still get it to work:
```
select name, coalesce(max(case when active = 0 then value end), max(value)) value
from yourtable
group by name
```
* [More Fiddle](http://sqlfiddle.com/#!3/86d50/4)
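The `coalesce` version can be checked end-to-end with SQLite (the table name `t` is made up):

```python
import sqlite3

# Reproducing the question's sample data to verify the
# coalesce + conditional-aggregation answer.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (Name TEXT, Value INTEGER, Active INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    ("Name1", 1, 1), ("Name2", 2, 1), ("Name1", 3, 0),
    ("Name2", 4, 0), ("Name3", 1, 1), ("Name4", 1, 1),
])

rows = conn.execute(
    "SELECT name, COALESCE(MAX(CASE WHEN active = 0 THEN value END), MAX(value)) "
    "FROM t GROUP BY name ORDER BY name"
).fetchall()
print(rows)  # [('Name1', 3), ('Name2', 4), ('Name3', 1), ('Name4', 1)]
```

The `CASE` yields `NULL` for rows with `active = 1`, so `MAX` over it picks the prioritized value, and `COALESCE` falls back to the plain max when no `active = 0` row exists.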
|
You can order by fields not contained in the select clause
```
Select Name, Value
From Table
ORDER BY Active, Name, Value
```
But you cannot use `SELECT DISTINCT` at the same time.
If you use "select distinct" there is the possibility that some rows will be discarded; when this happens, there is no longer any viable relationship retained between [Active] and the "distinct" rows. So if using select distinct and you need to order by [Active], then [Active] MUST be in the select clause.
|
Returning distinct prioritizing results with order by
|
[
"",
"sql",
"sql-server-2008",
""
] |
I have table with the columns partner, post, postvariation
Now I'd like to know how many postvariations per post every partner has on average. I tried the following, however it is not working:
```
SELECT partner,
COUNT(DISTINCT post),
COUNT(DISTINCT postvariation),
AVG(COUNT(DISTINCT post,postvariation))
FROM posts
GROUP BY partner
ORDER BY `id` DESC;
```
|
Here is the query you're looking for:
```
SELECT P.partner
, COUNT(DISTINCT P.post) AS nb_post
, COUNT(DISTINCT P.postvariation) AS nb_postvariation
, COUNT(DISTINCT P.postvariation) / COUNT(DISTINCT P.post) AS avg_postvariation
FROM posts P
GROUP BY P.partner
ORDER BY P.id DESC;
```
Here is the same query with an additional `GROUP BY` clause on the day:
```
SELECT P.partner
, DATE_FORMAT(P.datefield, '%Y-%m-%d') AS pivot_date
, COUNT(DISTINCT P.post) AS nb_post
, COUNT(DISTINCT P.postvariation) AS nb_postvariation
, COUNT(DISTINCT P.postvariation) / COUNT(DISTINCT P.post) AS avg_postvariation
FROM posts P
GROUP BY P.partner, DATE_FORMAT(P.datefield, '%Y-%m-%d')
ORDER BY P.id DESC;
```
Hope this will help you.
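The distinct-count division can be sanity-checked in SQLite with made-up data (note SQLite's `/` truncates integers, so the sketch multiplies by `1.0`; MySQL's `/` already returns a decimal):

```python
import sqlite3

# Tiny check of COUNT(DISTINCT ...) / COUNT(DISTINCT ...); data is invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (partner TEXT, post INTEGER, postvariation INTEGER)")
conn.executemany("INSERT INTO posts VALUES (?, ?, ?)", [
    ("p1", 1, 10), ("p1", 1, 11), ("p1", 2, 12), ("p1", 2, 13),
    ("p2", 3, 20),
])

rows = conn.execute(
    "SELECT partner, "
    "       COUNT(DISTINCT post) AS nb_post, "
    "       COUNT(DISTINCT postvariation) AS nb_postvariation, "
    "       COUNT(DISTINCT postvariation) * 1.0 / COUNT(DISTINCT post) AS avg_pv "
    "FROM posts GROUP BY partner ORDER BY partner"
).fetchall()
print(rows)  # [('p1', 2, 4, 2.0), ('p2', 1, 1, 1.0)]
```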
|
Well you need to use division to find average.
## **Query**
```
SELECT partner, post, postvariation, (postvariation/post) as result
FROM
(
SELECT partner,
COUNT(DISTINCT post) as post,
COUNT(DISTINCT postvariation) as postvariation
FROM posts
GROUP BY partner
ORDER BY id DESC
) AS new_table
```
|
MySQL Nested AVG over SUM
|
[
"",
"mysql",
"sql",
""
] |
How can I replace one char from a SQL Server `Select` command without changing the value in the table itself?
Example:
```
Select Col_Name
from TABLE
```
returns
```
**RANDOM/ANSWER**
```
How do I modify the `Select` query to get `**RANDOM-ANSWER**` instead, so the `/` is replaced by `-`, but the data in the table remains unchanged?
|
The [`replace`](http://msdn.microsoft.com/en-us/library/ms186862.aspx) function should do the trick:
```
SELECT REPLACE(my_column, '/', '-')
FROM my_table
```
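`REPLACE` behaves the same way in SQLite, so a one-liner confirms the transformation without touching any table data:

```python
import sqlite3

# replace() operates on the SELECT output only; the stored value is untouched.
conn = sqlite3.connect(":memory:")
result = conn.execute("SELECT REPLACE('**RANDOM/ANSWER**', '/', '-')").fetchone()[0]
print(result)  # **RANDOM-ANSWER**
```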
|
```
Select Col_Name, REPLACE(Col_Name, '/', '-') AS New_Column
from TABLE
```
|
Replacing / with - in SQL Server using Select Command
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have a table in PostgreSQL 9.2 that has a `text` column. Let's call this `text_col`. The values in this column are fairly unique (may contain 5-6 duplicates at the most). The table has ~5 million rows. About half these rows contain a `null` value for `text_col`. When I execute the following query I expect 1-5 rows. In most cases (>80%) I only expect 1 row.
## Query
```
explain analyze SELECT col1,col2.. colN
FROM table
WHERE text_col = 'my_value';
```
A `btree` index exists on `text_col`. This index is never used by the query planner and I am not sure why. This is the output of the query.
## Planner
```
Seq Scan on two (cost=0.000..459573.080 rows=93 width=339) (actual time=1392.864..3196.283 rows=2 loops=1)
Filter: (victor = 'foxtrot'::text)
Rows Removed by Filter: 4077384
```
I added another partial index to try to filter out those values that were not null, but that did not help (with or without `text_pattern_ops`. I do not need `text_pattern_ops` considering no `LIKE` conditions are expressed in my queries, but they also match equality).
```
CREATE INDEX name_idx
ON table
USING btree
(text_col COLLATE pg_catalog."default" text_pattern_ops)
WHERE text_col IS NOT NULL;
```
Disabling sequence scans using `set enable_seqscan = off;` makes the planner still pick the `seqscan` over an `index_scan`. In summary...
1. The number of rows returned by this query is small.
2. Given that the non-null rows are fairly unique, an index scan over the text should be faster.
3. Vacuuming and analyzing the table did not help the optimizer pick the index.
## My questions
1. Why does the database pick the sequence scan over the index scan?
2. When a table has a text column whose equality condition should be checked, are there any best practices I can adhere to?
3. How do I reduce the time taken for this query?
## [Edit - More information]
1. The index scan is picked up on my local database that houses about 10% of the data that is available in production.
|
A **[partial index](https://www.postgresql.org/docs/current/indexes-partial.html) is a good idea** to exclude half the rows of the table which you obviously do not need. Simpler:
```
CREATE INDEX name_idx ON table (text_col)
WHERE text_col IS NOT NULL;
```
Run `ANALYZE table` after creating the index. (`autovacuum` does that automatically after some time if you don't do it manually, but if you test right after creation, your test will fail.)
Then, to convince the query planner that a particular partial index can be used, repeat the `WHERE` condition in the query - even if it seems redundant:
```
SELECT col1,col2, .. colN
FROM table
WHERE text_col = 'my_value'
AND text_col IS NOT NULL; -- repeat condition
```
[The manual](https://www.postgresql.org/docs/current/indexes-partial.html):
> However, keep in mind that the predicate must match the conditions
> used in the queries that are supposed to benefit from the index. To be
> precise, a partial index can be used in a query only if the system can
> recognize that the `WHERE` condition of the query mathematically implies
> the predicate of the index. PostgreSQL does not have a sophisticated
> theorem prover that can recognize mathematically equivalent
> expressions that are written in different forms. (Not only is such a
> general theorem prover extremely difficult to create, it would
> probably be too slow to be of any real use.) The system can recognize
> simple inequality implications, for example "x < 1" implies "x < 2";
> otherwise the predicate condition must exactly match part of the
> query's `WHERE` condition or the index will not be recognized as usable.
> Matching takes place at query planning time, not at run time. As a
> result, parameterized query clauses do not work with a partial index.
As for parameterized queries: again, add the (redundant) predicate of the partial index as an additional, constant `WHERE` condition, and it works just fine.
---
**Update:** Postgres 9.6 or later largely improves chances for [**index-only scans**](https://www.postgresql.org/docs/current/indexes-index-only-scans.html). See:
* [PostgreSQL not using index during count(\*)](https://dba.stackexchange.com/a/128445/3684)
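SQLite also supports partial indexes, so the shape of the advice (create the filtered index, then repeat its predicate in the query) can be sketched outside Postgres; planner behaviour differs between engines, of course:

```python
import sqlite3

# Sketch: a partial index plus the repeated IS NOT NULL predicate.
# Table and index names mirror the answer above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (text_col TEXT)")
conn.execute("CREATE INDEX name_idx ON t (text_col) WHERE text_col IS NOT NULL")

plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT * FROM t WHERE text_col = 'my_value' AND text_col IS NOT NULL"
).fetchall()
print(plan)  # the plan detail should mention name_idx
```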
|
~~I figured it out~~. Upon taking a closer look at the `pg_stats` view that `analyze` helps build, I came across this excerpt on the [documentation](http://www.postgresql.org/docs/9.2/static/view-pg-stats.html).
### Correlation
> Statistical correlation between physical row ordering and logical
> ordering of the column values. This ranges from -1 to +1. When the
> value is near -1 or +1, an index scan on the column will be estimated
> to be cheaper than when it is near zero, due to reduction of random
> access to the disk. (This column is null if the column data type does
> not have a < operator.)
On my local box the correlation number is `0.97` and on production it was `0.05`. Thus the planner is estimating that it is easier to go through all those rows sequentially instead of looking up the index each time and diving into a random access on the disk block. This is the query I used to peek at the correlation number.
```
select * from pg_stats where tablename = 'table_name' and attname = 'text_col';
```
This table also has a few updates performed on its rows. The `avg_width` of the rows is estimated to be 20 bytes. If an update has a large value for a text column, it can exceed the average and also result in a slower update. My guess was that the physical and logical ordering were slowly moving apart with each update. To fix that I executed the following queries.
```
ALTER TABLE table_name SET (FILLFACTOR = 80);
VACUUM FULL table_name;
REINDEX TABLE table_name;
ANALYZE table_name;
```
The idea is that I could give each disk block a 20% buffer and `vacuum full` the table to reclaim lost space and maintain physical and logical order. After I did this the query picks up the index.
### Query
```
explain analyze SELECT col1,col2... colN
FROM table_name
WHERE text_col is not null
AND
text_col = 'my_value';
```
### Partial index scan - 1.5ms
```
Index Scan using tango on two (cost=0.000..165.290 rows=40 width=339) (actual time=0.083..0.086 rows=1 loops=1)
Index Cond: ((victor five NOT NULL) AND (victor = 'delta'::text))
```
Excluding the NULL condition picks up the other index with a bitmap heap scan.
### Full index - 0.08ms
```
Bitmap Heap Scan on two (cost=5.380..392.150 rows=98 width=339) (actual time=0.038..0.039 rows=1 loops=1)
Recheck Cond: (victor = 'delta'::text)
-> Bitmap Index Scan on tango (cost=0.000..5.360 rows=98 width=0) (actual time=0.029..0.029 rows=1 loops=1)
Index Cond: (victor = 'delta'::text)
```
### [EDIT]
While it initially looked like `correlation` plays a major role in choosing the index scan @Mike has observed that a `correlation` value that is close to 0 on his database still resulted in an index scan. Changing fill factor and vacuuming fully has helped but I'm unsure why.
|
PostgreSQL does not use a partial index
|
[
"",
"sql",
"database",
"performance",
"postgresql",
"indexing",
""
] |
I have a large customer database where customers have been added multiple times in some circumstances, which is causing problems. I am able to use a query to identify the records which are an exact match, although some records have slight variations such as different addresses or given names.
I want to query across 10 fields; some records will match all 10, which is clearly a duplicate, although other records may only match 5 fields with another record and require further investigation. Therefore I want to create a result set which has a field with a count of how many fields have been matched, basically to create a rating of the likelihood that the result is an actual match. All 10 would be a clear dup but 5 would only be a possible duplicate.
Some will only match on POSTCODE and FIRSTNAME which is generally can be discounted.
Something like this helps, but as it only returns records which explicitly match on all 3 fields it's not really useful due to the sheer amount of data.
```
SELECT field1,field2,field3, count(*)
FROM table_name
GROUP BY field1,field2,field3
HAVING count(*) > 1
```
|
You are just missing the magic of [`CUBE()`](http://technet.microsoft.com/en-us/library/bb522495%28v=sql.105%29.aspx), which generates all the combinations of columns automatically
```
DECLARE @duplicate_column_threshold int = 5;
WITH cte AS (
SELECT
field1,field2,...,field10
,duplicate_column_count = (SELECT COUNT(col) FROM (VALUES (field1),(field2),...,(field10)) c(col))
FROM table_name
GROUP BY CUBE(field1,field2,...,field10)
HAVING COUNT(*) > 1
)
SELECT *
INTO #duplicated_rows
FROM cte
WHERE duplicate_column_count >= @duplicate_column_threshold
```
---
Update: to fetch the rows from the original table, join it against the #duplicated\_rows using a technique that treats NULLs as wildcards when comparing the columns.
```
SELECT
a.*
,b.duplicate_column_count
FROM table_name a
INNER JOIN #duplicated_rows b
ON NULLIF(b.field1,a.field1) IS NULL
AND NULLIF(b.field2,a.field2) IS NULL
...
AND NULLIF(b.field10,a.field10) IS NULL
```
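The wildcard trick rests on one property of `NULLIF`: `NULLIF(b, a) IS NULL` is true when `b` is `NULL` (wildcard) or `b` equals `a`. A tiny SQLite probe:

```python
import sqlite3

# Probe of the NULLIF-as-wildcard comparison used in the join above.
conn = sqlite3.connect(":memory:")
cases = [("x", "x"), (None, "x"), ("y", "x")]  # (b, a) pairs
results = [
    conn.execute("SELECT NULLIF(?, ?) IS NULL", (b, a)).fetchone()[0]
    for b, a in cases
]
print(results)  # [1, 1, 0]: equal matches, NULL matches anything, mismatch fails
```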
|
You might try something like
```
Select field1, field2, field3, ... , field10, count(1)
from customerdatabase
group by field1, field2, field3, ... , field10
order by field1, field2, field3, ... , field10
```
Where `field1` through `field10` are ordered by the "most identifiable/important" to least.
|
SQL - Find duplicate fields and count how many fields are matched
|
[
"",
"sql",
"sql-server",
"count",
"duplicates",
""
] |
I'm working on a SQL request for an internal search engine (MySQL db). I want to allow users to search for exact expressions, so when they search for `"foo bar"` they get the documents containing `foo` AND `bar` and not just `foo` or `bar`.
The `search_documents_words` table:
```
| word_id | doc_id | word |
---------------------------
| 1 | 1 | foo |
| 2 | 1 | bar |
| 3 | 2 | foo |
```
The `search_documents` table:
```
| doc_id | doc_name |
---------------------
| 1 | mydoc1 |
| 2 | mydoc2 |
```
What I want to get:
```
| doc_id | doc_name |
--------------------
| 1 | mydoc1 |
```
Assuming that I have my keywords in a PHP array.
How can I achieve that?
|
Assuming that your search keyword is in array
```
$search = array('foo','bar');
$keywords = '';
foreach($search as $values) {
    $keywords .= "'".$values."',";
}
$keywords = rtrim($keywords,","); // builds the list as 'foo','bar'
```
then execute this query
```
$query = "SELECT B.*
FROM search_documents_words A
JOIN search_documents B ON A.doc_id = B.doc_id
WHERE A.word IN(".$keywords.")";
```
**EDITED**
This case will only work if the same doc id does not repeat a word (doc 1 with many foo entries won't work with this query).
```
$query = "SELECT B.*
FROM search_documents_words A
JOIN search_documents B ON A.doc_id = B.doc_id
WHERE A.word IN(".$keywords.")
GROUP BY A.doc_id HAVING COUNT(A.doc_id) = '".count($search)."' ";
```
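The count-equals-number-of-keywords trick can be verified end-to-end in SQLite; parameter placeholders replace the string concatenation from the PHP snippet, and `COUNT(DISTINCT A.word)` is used so repeated words in one document don't inflate the count (a tweak to the query above):

```python
import sqlite3

# The question's sample tables, searched for documents containing
# ALL of the given keywords.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE search_documents_words (word_id INTEGER, doc_id INTEGER, word TEXT)")
conn.execute("CREATE TABLE search_documents (doc_id INTEGER, doc_name TEXT)")
conn.executemany("INSERT INTO search_documents_words VALUES (?, ?, ?)",
                 [(1, 1, "foo"), (2, 1, "bar"), (3, 2, "foo")])
conn.executemany("INSERT INTO search_documents VALUES (?, ?)",
                 [(1, "mydoc1"), (2, "mydoc2")])

search = ["foo", "bar"]
placeholders = ",".join("?" for _ in search)
rows = conn.execute(
    f"SELECT B.doc_id, B.doc_name "
    f"FROM search_documents_words A "
    f"JOIN search_documents B ON A.doc_id = B.doc_id "
    f"WHERE A.word IN ({placeholders}) "
    f"GROUP BY A.doc_id HAVING COUNT(DISTINCT A.word) = ?",
    (*search, len(search)),
).fetchall()
print(rows)  # [(1, 'mydoc1')]
```

Only `mydoc1` contains both `foo` and `bar`, matching the expected result in the question.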
|
The subquery can be build like this:
```
$keywords = array('foo','bar');
foreach($keywords as $key)
{
$conditions[] = "word='".$key."'";
}
$condition = implode(" AND ",$conditions);
```
Then execute this sql query:
```
$query = "SELECT *
FROM search_documents
WHERE doc_id
IN (SELECT doc_id
FROM search_documents_words
WHERE ".$condition.")"
```
Hope this helps!!
|
How to select rows with same ID sharing a group of values?
|
[
"",
"mysql",
"sql",
""
] |
I have 4 stored procedures. I need to take the result of the first stored procedure (2 temp tables) and pass it into the second stored procedure. These temp tables need to be used in the from clause in the second stored procedure.
Similarly, the third and fourth stored procedures need results from the previous stored procedures.
Is there a way to pass temporary tables across the stored procedures?
|
Regarding this comment, "it was 1 Sp but we broke it into 4 so its easier to alter if needed", I suggest that you break it up even more. In other words, implement encapsulation.
Have a separate stored procedure for each time you want to select data from the actual tables. Do not populate temp tables in these procedures, just return the data.
Then write a stored procedure that creates and populates temp tables from the procs mentioned above, and does the necessary processing.
Here is a simple example:
```
create procedure GetData1
as
    select Field1, Field2
    from blah, blah, blah
go

create procedure AssembleAllData
as
    create table #temp1 (Field1, Field2)

    insert into #temp1
    exec GetData1

    select Field1, Field2, etc
    from #temp1 join anActualTable etc

    drop table #temp1
go
```
|
Your question sounds more like an answer than a question. Just do as you described.
You don't need to *pass* the data in the temp tables from one procedure to the next. The data is just there. In one procedure you write to the temp table and in the next procedure you read from the temp table.
I would also not create temp tables dynamically, just create them and let them wait for data. This assumes that the temp table data is local to a session (in oracle this is the case and in a way the reason why temp tables exist).
Also I would opt against passing table names between procedures. There is almost always a better way and it is a no-no anyways. If you are under the impression that you need variable temp table names, then you really want to add another column to the temp tables (you may even call it "temp\_table\_name", though it almost certainly means something different). Then you can pass the "temp\_table\_name" around and the selects would need a `where temp_table_name = ...` and the inserts would have to populate this extra column.
|
Is there a way to pass temporary tables across the stored procedures
|
[
"",
"sql",
"stored-procedures",
""
] |
```
select
distinct on (productid)p.product,
pb.productid,
pb.mrp,
pb.ptr,
pb.ssr,
pt.patent
from prod_batch pb
inner Join prod_patent pt using(patentid)
inner Join prods p using(productid)
where(qty - iqty) > 0
```
This select returns correctly and gets what I actually want!
When I add an `ORDER BY` to the above *select* to sort according to the `product`'s name,
I write it like this:
```
select
distinct on (productid)p.product,
pb.productid,
pb.mrp,
pb.ptr,
pb.ssr,
pt.patent
from prod_batch pb
inner Join prod_patent pt using(patentid)
inner Join prods p using(productid)
where(qty - iqty) > 0
order by
product -- this is my column name to order by
```
I get this error while executing:
> ERROR: SELECT DISTINCT ON expressions must match initial ORDER BY
> expressions
|
You can try like this
Choice - 1
```
select * from (
select
distinct on (productid)p.product,
pb.productid,
pb.mrp,
pb.ptr,
pb.ssr,
pt.patent
from prod_batch pb
inner Join prod_patent pt using(patentid)
inner Join prods p using(productid)
where(qty - iqty) > 0
order by
product
) t order by product
```
---
Choice - 2
```
With cte as (
select
distinct on (productid)p.product,
pb.productid,
pb.mrp,
pb.ptr,
pb.ssr,
pt.patent
from prod_batch pb
inner Join prod_patent pt using(patentid)
inner Join prods p using(productid)
where(qty - iqty) > 0
) select * from cte order by product
```
---
Choice - 3
Just make your select into a view
```
create or replace view pro_details as
select
distinct on (productid)p.product,
pb.productid,
pb.mrp,
pb.ptr,
pb.ssr,
pt.patent
from prod_batch pb
inner Join prod_patent pt using(patentid)
inner Join prods p using(productid)
where(qty - iqty) > 0
```
USAGE : `select * from pro_details order by product`
|
> DISTINCT ON ( expression [, ...] ) keeps only the first row of each set of rows where the given expressions evaluate to equal. [...] Note that the "first row" of each set is unpredictable unless ORDER BY is used to ensure that the desired row appears first. [...] The DISTINCT ON expression(s) must match the leftmost ORDER BY expression(s).

Try the following:
```
select
distinct on (productid)p.product,
pb.productid,
pb.mrp,
pb.ptr,
pb.ssr,
pt.patent
from prod_batch pb
inner Join prod_patent pt using(patentid)
inner Join prods p using(productid)
where(qty - iqty) > 0
order by
p.productid,p.product
```
|
Error when adding ORDER BY in Select Query
|
[
"",
"sql",
"postgresql",
"sql-order-by",
"distinct",
""
] |
Although one can use composite primary keys, for the case below, is it really a bad practice? The consensus on Stack Overflow *seems to go both ways on this issue*.
Why?
---
I want to store payments for the orders in a separate table. The reason is that, an order can have many items which are handled in a separate table in the form of many to many relationship. Now, if I don't use composite primary keys for my payment table, I'll lose my unique `PaymentID`:
```
[PaymentId] INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
[OrderId] INT NOT NULL PRIMARY KEY --Also a Foreign Key--
```
Now, if I just remove the primary key for the `OrderId`, I'll lose my one-to-one relationship here, so many `OrderId`s could be associated with many `PaymentId`s, and I don't want this.
This seems to be why other answers on SO have concluded (mostly) that the composite key is a bad idea. If it is bad, what's the best practice then?
|
There is no conclusion that composite primary keys are bad.
The best practice is to have *some* column or columns that uniquely identify a row. But in some tables a single column is not enough by itself to uniquely identify a row.
SQL (and the relational model) allows a composite primary key. It is a good practice is some cases. Or, another way of looking at it is that it's not a bad practice in all cases.
Some people have the opinion that *every* table should have an integer column that automatically generates unique values, and that should serve as the primary key. Some people also claim that this primary key column should always be called `id`. But those are *conventions*, not necessarily best practices. Conventions have some benefit, because it simplifies certain decisions. But conventions are also restrictive.
You may have an order with multiple payments because some people purchase [on layaway](https://en.wikipedia.org/wiki/Layaway), or else they have multiple sources of payment (two credit cards, for instance), or two different people want to pay for a share of the order (I frequently go to a restaurant with a friend, and we each pay for our own meal, so the staff process half of the order on each of our credit cards).
I would design the system you describe as follows:
```
Products : product_id (PK)
Orders : order_id (PK)
LineItems : product_id is (FK) to Products
order_id is (FK) to Orders
(product_id, order_id) is (PK)
Payments : order_id (FK)
payment_id - ordinal for each order_id
(order_id, payment_id) is (PK)
```
This is also related to the concept of [identifying relationship](https://stackoverflow.com/questions/762937/whats-the-difference-between-identifying-and-non-identifying-relationships/762994#762994). If it's definitional that a payment exists only because an order exist, then make the order part of the primary key.
Note the LineItems table also lacks its own auto-increment, single-column primary key. A many-to-many table is a classic example of a good use of a composite primary key.
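To make the junction-table case concrete, here is a minimal runnable sketch of the LineItems design above. It uses SQLite through Python's `sqlite3` purely for illustration (column types are simplified and the sample ids are invented); the point is that the composite `(product_id, order_id)` key rejects a duplicate pairing:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Products (product_id INTEGER PRIMARY KEY);
    CREATE TABLE Orders   (order_id   INTEGER PRIMARY KEY);
    CREATE TABLE LineItems (
        product_id INTEGER REFERENCES Products(product_id),
        order_id   INTEGER REFERENCES Orders(order_id),
        PRIMARY KEY (product_id, order_id)   -- composite primary key
    );
    INSERT INTO Products VALUES (1);
    INSERT INTO Orders   VALUES (10);
    INSERT INTO LineItems VALUES (1, 10);
""")

try:
    # attempt to insert the same (product, order) pair a second time
    conn.execute("INSERT INTO LineItems VALUES (1, 10)")
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False

print(duplicate_allowed)  # False: the composite key blocks the duplicate
```

The same DDL idea carries over to SQL Server or Oracle; only the type names change.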
|
This question is dangerously close to asking for opinions, which can generate religious wars. As someone who is highly biased toward having auto-increasing integer primary keys in my tables (called something like `TablenameId`, not `Id`), there is one situation where it is optional.
I think the other answers address why you want primary keys.
One very important reason is for reference purposes. In a relational database, any entity could -- in theory -- be referenced by another entity via foreign key relationships. For foreign keys, you definitely want one column to uniquely define a row. Otherwise, you have to deal with multiple columns in different tables that align with each other. This is possible, but cumbersome.
The table you are referring to is not an "entity" table; it is a "junction" table, a relational database construct for handling many-to-many relationships. Because it doesn't really represent an entity, other tables are unlikely to need to reference it via foreign keys. Hence, a composite primary key is reasonable. There are some situations, such as when you are concerned about database size, where leaving out an artificial primary key is even desirable.
|
Composite Primary Keys : Good or Bad?
|
[
"",
"sql",
"database",
"database-design",
"relational-database",
""
] |
I'm doing a SQL Union syntax and I wanted to have a result like this:
```
+-----------------+-----+---------+
| trkBusinessUnit | New | Pending |
+-----------------+-----+---------+
| AIIB            |  2  |    0    |
| Credit Control  |  1  |    3    |
| Direct Center   |  1  |    2    |
| Financial Ins   |  1  |    1    |
| Motor Acclaim   |  1  |    0    |
+-----------------+-----+---------+
```
from my code:
```
SELECT trkBusinessUnit, Count(*) as New,0 as Pending
FROM tblDTPTracker
WHERE trkStatus = 'New'
GROUP BY trkBusinessUnit
UNION
SELECT trkBusinessUnit,0 as New,Count(*) as Pending
FROM tblDTPTracker
WHERE trkStatus = 'Pending'
GROUP BY trkBusinessUnit
```
but then the current output is:
```
+-----------------+-----+---------+
| trkBusinessUnit | New | Pending |
+-----------------+-----+---------+
| AIIB            |  2  |    0    |
| Credit Control  |  1  |    0    |
| Credit Control  |  0  |    3    |
| Direct Center   |  1  |    0    |
| Direct Center   |  0  |    2    |
| Financial Ins   |  1  |    0    |
| Financial Ins   |  0  |    1    |
| Motor Acclaim   |  1  |    0    |
+-----------------+-----+---------+
```
Am I missing out something or doing something wrong? Kindly advise.
|
There are (or have been) some syntax issues in the previous answers, but the intent of both earlier answers is correct: you need to use a `GROUP BY` query and NOT use `UNION`, which simply does not do what you were hoping/expecting.
UNION or UNION ALL work ROW by ROW, and absolutely do NOT merge by COLUMN
So, the MySQL syntax for the group by based query could be any of these:
**COUNT() using an implicit NULL**
```
SELECT
trkBusinessUnit
, COUNT(CASE WHEN trkStatus = 'New' THEN 1 END) as New
, COUNT(CASE WHEN trkStatus = 'Pending' THEN 1 END) as Pending
FROM tblDTPTracker
GROUP BY trkBusinessUnit
;
```
**COUNT() using explicit NULL**
```
SELECT
trkBusinessUnit
, COUNT(CASE WHEN trkStatus = 'New' THEN 1 ELSE NULL END) as New
, COUNT(CASE WHEN trkStatus = 'Pending' THEN 1 ELSE NULL END) as Pending
FROM tblDTPTracker
GROUP BY trkBusinessUnit
;
```
**SUM() as an alternative to counting:**
```
select
trkBusinessUnit
, sum(case when trkStatus = 'New' then 1 else 0 end) as New
, sum(case when trkStatus = 'Pending' then 1 else 0 end) as Pending
from tblDTPTracker
where trkStatus in ('Pending', 'New')
group by trkBusinessUnit
;
```
Apologies to both Marc Gravell & Daniel Gadawski who preceded this answer; this answer is a derivative of yours.
`See this SQLFiddle demo of these queries`
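The conditional-`COUNT` approach is easy to verify on a small sample. The sketch below runs it through SQLite via Python's `sqlite3` purely for illustration (the `CASE` expression is portable across MySQL, SQL Server, and SQLite, and the sample rows are a subset of the question's data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblDTPTracker (trkBusinessUnit TEXT, trkStatus TEXT)")
# 2 New for AIIB; 1 New + 3 Pending for Credit Control
rows = ([("AIIB", "New")] * 2
        + [("Credit Control", "New")]
        + [("Credit Control", "Pending")] * 3)
conn.executemany("INSERT INTO tblDTPTracker VALUES (?, ?)", rows)

result = conn.execute("""
    SELECT trkBusinessUnit,
           COUNT(CASE WHEN trkStatus = 'New'     THEN 1 END) AS New,
           COUNT(CASE WHEN trkStatus = 'Pending' THEN 1 END) AS Pending
    FROM tblDTPTracker
    GROUP BY trkBusinessUnit
    ORDER BY trkBusinessUnit
""").fetchall()

print(result)  # [('AIIB', 2, 0), ('Credit Control', 1, 3)]
```

Because the `CASE` has no `ELSE`, non-matching rows yield `NULL`, which `COUNT()` skips; that is what collapses the two `UNION` branches into one row per business unit.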
|
If I understand you correctly, you don't have to use a union.
Try:
```
SELECT
trkBusinessUnit,
COUNT(CASE WHEN trkStatus = 'New' THEN 1 ELSE NULL END) as New,
COUNT(CASE WHEN trkStatus = 'Pending' THEN 1 ELSE NULL END) as Pending
FROM tblDTPTracker
GROUP BY trkBusinessUnit
```
|
Combining data in SQL Union syntax
|
[
"",
"mysql",
"sql",
"union",
""
] |
I am trying to search through a table column for **`<o:p />`** and replace it with **`</p>`**.
I know this requires the REPLACE function, but I'm unsure how to correctly format the query.
|
To replace values in a column you would do this:
```
update tablename set
columnname = replace(columnname, '<o:p />', '</p>')
WHERE columnname LIKE '%<o:p />%'
```
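A quick way to confirm the behaviour on throwaway data is shown below, using SQLite through Python's `sqlite3` purely for illustration (`replace()` works the same way in MySQL and SQL Server, and the `pages`/`body` names are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (body TEXT)")
conn.execute("INSERT INTO pages VALUES ('Hello<o:p />World'), ('untouched')")

# same shape as the answer's UPDATE: replace only where the marker occurs
conn.execute("""
    UPDATE pages
    SET body = replace(body, '<o:p />', '</p>')
    WHERE body LIKE '%<o:p />%'
""")

bodies = [r[0] for r in conn.execute("SELECT body FROM pages ORDER BY rowid")]
print(bodies)  # ['Hello</p>World', 'untouched']
```

The `WHERE ... LIKE` clause is optional for correctness (rows without the marker are unchanged anyway) but avoids rewriting every row.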
|
```
update table
set column=replace(column,'<o:p />', '</p>')
where condition
```
|
SQL Replace instance of word with another
|
[
"",
"mysql",
"sql",
""
] |
I am trying to match the first name (varchar(50)) and last name (varchar(50)) from table A to the first name and last name in table B. The issue is that both tables contain a lot of shortened first names. For example, the name might be Andrew in table A while table B has a record with a matching last name but the first name Andy, so it comes up as not a match. Is there any way to get around this in SQL? The shortened-names problem goes both ways, meaning that both Table A and Table B have some shortened names.
Here are some more examples:

This is my current code.
```
Select *
FROM TableA p
JOIN TableB e ON e.CompanyNumber = 1 and e.LastName like '%' + rtrim(ltrim(p.lastname)) + '%'
and e.FirstName like '%' + ltrim(rtrim(p.firstname)) + '%'
```
NOTE: This is the only way to match the tables together.
|
Provided you use a 3rd table to hold your long/short names, as so:
```
CREATE TABLE TableNames
([Id] int, [OfficialName] varchar(7), [Alias] varchar(7))
;
INSERT INTO TableNames
([Id], [OfficialName], [Alias])
VALUES
(1, 'Andrew', 'Andy'),
(2, 'Andrew', 'Andrew'),
(3, 'William', 'Bill'),
(4, 'William', 'William'),
(5, 'David', 'Dave'),
(6, 'David', 'David')
```
The following query should give you what you are looking for.
```
SELECT *
FROM (
SELECT TableA.Id AS T1_Id
,CompanyId AS T1_CompanyId
,FirstName AS T1_FirstName
,LastName AS T1_LastName
,TableNames.OfficialName AS OfficialName
FROM tableA
INNER JOIN tableNames ON TableA.FirstName = TableNames.Alias
) T1
,(
SELECT tableB.Id AS T2_Id
,CompanyId AS T2_CompanyId
,FirstName AS T2_FirstName
,LastName AS T2_LastName
,TableNames.OfficialName AS OfficialName
FROM tableB
INNER JOIN tableNames ON TableB.FirstName = TableNames.Alias
) T2
WHERE T1.T1_CompanyId = T2.T2_CompanyId
AND T1.OfficialName = T2.OfficialName
AND T1.T1_LastName = T2.T2_LastName
```
I set up my solution sqlfiddle at <http://sqlfiddle.com/#!3/64514/2>
I hope this helps.
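The alias-table idea can be exercised end-to-end with a few rows. The sketch below (SQLite via Python's `sqlite3`, with made-up sample people) joins both source tables through the names table, so 'Andy' and 'Andrew' resolve to the same official name:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE TableNames (OfficialName TEXT, Alias TEXT);
    INSERT INTO TableNames VALUES
        ('Andrew', 'Andy'), ('Andrew', 'Andrew'),
        ('William', 'Bill'), ('William', 'William');
    CREATE TABLE TableA (FirstName TEXT, LastName TEXT);
    CREATE TABLE TableB (FirstName TEXT, LastName TEXT);
    INSERT INTO TableA VALUES ('Andy', 'Smith'), ('William', 'Jones');
    INSERT INTO TableB VALUES ('Andrew', 'Smith'), ('Bill', 'Jones');
""")

# match on last name directly, and on first name via the alias table
matches = sorted(conn.execute("""
    SELECT a.FirstName, b.FirstName, a.LastName
    FROM TableA a
    JOIN TableNames na ON a.FirstName = na.Alias
    JOIN TableB b      ON b.LastName  = a.LastName
    JOIN TableNames nb ON b.FirstName = nb.Alias
                      AND nb.OfficialName = na.OfficialName
""").fetchall())

print(matches)  # [('Andy', 'Andrew', 'Smith'), ('William', 'Bill', 'Jones')]
```

Note that each official name must also be listed as its own alias (as in the answer's data), so exact matches still join through the lookup table.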
|
Create a third table that associates long-form and short-form names.
For example:
```
Long Form Short Form
Andrew Andy
Andrew Drew
David Dave
William Will
William Bill
William Billy
William Willy
```
|
Matching First and Last Name on two different tables
|
[
"",
"sql",
"sql-server",
"regex",
""
] |
I have a table, `SELECT * FROM data`
```
id pred name visual link, for your convenience
--------------------
1 null One
20 null Two <--+
21 20 Three -^
30 null Four <--+
31 30 Five -^ <--+
32 31 Six -^
```
In which the rows are connected via the `pred` column to the `id` column. They form only chains and not a tree -- each node has only one or zero successors (if that's important).
**I want to add a column `init` to the query where the very first element in the chain is shown**, i.e.
```
id pred name init initname
---------------------------------
1 null One 1 One
20 null Two 20 Two
21 20 Three 20 Two
30 null Four 30 Four
31 30 Five 30 Four
32 31 Six 30 Four
```
* It would be ok if rows with `pred=null` also show `null` for `init`.
* The `initname` column is completely optional and I show it here only for demonstration, I only need the `id`.
From what I gathered about **the `connect by` clause** I managed a somewhat reverse result, where for each "root" its "child" nodes are listed. I don't know how to "turn the query around".
```
SELECT id, pred, CONNECT_BY_ROOT id init, LEVEL, CONNECT_BY_ISLEAF "IsLeaf"
FROM data
CONNECT BY PRIOR pred=id
ORDER BY id, level;
```
Gives the result
```
id pred init lvl isLeaf
--------------------------
1 null 1 1 1
20 null 20 1 1
20 null 21 2 1
21 20 21 1 0
30 null 30 1 1
30 null 31 2 1
30 null 32 3 1
31 30 31 1 0
31 30 32 2 0
32 31 32 1 0
```
which somehow represents the whole "tree", obviously. **But alas, "the wrong way around".** I'd need for example
```
id pred init lvl isLeaf
21 20 0 ? ?
```
instead of
```
id pred init lvl isLeaf
21 20 21 1 0
```
---
If you need data, here is the example data:
```
create table data ( id number primary key, pred number, name varchar2(100) );
insert into data(id,pred,name) values( 1 , null , 'One');
insert into data(id,pred,name) values( 20, null , 'Two');
insert into data(id,pred,name) values(21, 20 , 'Three');
insert into data(id,pred,name) values(30, null , 'Four');
insert into data(id,pred,name) values(31, 30 , 'Five');
insert into data(id,pred,name) values(32, 31 , 'Six');
```
|
```
SQL> select id
2 , pred
3 , name
4 , connect_by_root id init
5 , connect_by_root name initname
6 , sys_connect_by_path(id,' -> ') scbp
7 from data
8 connect by prior id = pred
9 start with pred is null
10 /
ID PRED NAME INIT INITNAME SCBP
---------- ---------- ---------- ---------- ---------- ------------------------------
1 One 1 One -> 1
20 Two 20 Two -> 20
21 20 Three 20 Two -> 20 -> 21
30 Four 30 Four -> 30
31 30 Five 30 Four -> 30 -> 31
32 31 Six 30 Four -> 30 -> 31 -> 32
6 rows selected.
```
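`CONNECT BY` and `CONNECT_BY_ROOT` are Oracle-specific. If portability matters, the same "root of each chain" result can be produced with a standard recursive CTE; here is a sketch run through SQLite via Python purely for illustration (the same CTE, minus the `RECURSIVE` keyword, also works in Oracle 11gR2+ and SQL Server):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE data (id INTEGER PRIMARY KEY, pred INTEGER, name TEXT);
    INSERT INTO data VALUES
        (1, NULL, 'One'),  (20, NULL, 'Two'),  (21, 20, 'Three'),
        (30, NULL, 'Four'), (31, 30, 'Five'),  (32, 31, 'Six');
""")

rows = conn.execute("""
    WITH RECURSIVE chain(id, pred, name, init, initname) AS (
        -- anchor: the roots carry themselves as init
        SELECT id, pred, name, id, name FROM data WHERE pred IS NULL
        UNION ALL
        -- recursion: children inherit the root's id and name
        SELECT d.id, d.pred, d.name, c.init, c.initname
        FROM data d JOIN chain c ON d.pred = c.id
    )
    SELECT id, init, initname FROM chain ORDER BY id
""").fetchall()

print(rows)
# [(1, 1, 'One'), (20, 20, 'Two'), (21, 20, 'Two'),
#  (30, 30, 'Four'), (31, 30, 'Four'), (32, 30, 'Four')]
```

The anchor member plays the role of `START WITH pred IS NULL`, and carrying `init`/`initname` down the recursion plays the role of `CONNECT_BY_ROOT`.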
|
Please try the following expression to get the name of the root element:
```
substr(SYS_CONNECT_BY_PATH(name, '/'), instr(SYS_CONNECT_BY_PATH(name, '/'), '/', -1)+1)
```
Replace delimiter '/' if needed.
```
SELECT id, pred, CONNECT_BY_ROOT id init, LEVEL, CONNECT_BY_ISLEAF "IsLeaf",
substr(SYS_CONNECT_BY_PATH(name, '/'), instr(SYS_CONNECT_BY_PATH(name, '/'), '/', -1)+1)
FROM data
CONNECT BY PRIOR pred=id
-- START WITH pred is NULL --???
ORDER BY id, level;
```
You probably need to add a START WITH clause as well.
|
How to get to the "final predecessor" with a CONNECT BY query?
|
[
"",
"sql",
"oracle",
"hierarchical-data",
"connect-by",
""
] |
I'm using SQL Server 2008R2 in this problem. Here's an example dataset:
```
WIRE_ID FROM TO CLASS
05485 0.000 1.520 PL
05485 1.520 3.050 PL
05485 3.050 22.250 SL
05485 3.050 22.250 SP
05485 22.250 33.530 SL
05485 22.250 33.530 QT
05485 33.530 43.580 QT
05485 43.580 52.580 PL
05485 52.580 57.910 QT
114161 0.000 3.000 SW
114161 3.000 5.000 SL
114161 5.000 6.000 SL
114161 6.000 9.412 YN
114161 9.412 10.549 YN
114161 10.549 12.375 CM
114161 12.375 14.438 SL
114161 14.438 15.126 SL
```
So: a non-sequential ID associated with ranged values and a group/classification. As you can see, you can sometimes have duplicate intervals, as different classes may be applied. Ultimately the result I'd like to achieve would look like the following:
```
WIRE_ID FROM TO CLASS
05485 0.000 3.050 PL
05485 3.050 22.250 SL
05485 3.050 22.250 SP
05485 22.250 33.530 SL
05485 22.250 43.580 QT
05485 43.580 52.580 PL
05485 52.580 57.910 QT
114161 0.000 3.000 SW
114161 3.000 6.000 SL
114161 6.000 10.549 YN
114161 10.549 12.375 CM
114161 12.375 15.126 SL
```
Seems easy at first, and I've constructed a solution that works, but once I apply it to the entire data set it grinds to a halt. Ideally I need a solution that can handle a million rows of this style of data in a more or less efficient manner. Here's my solution:
```
Declare @WIRE_CLASS Table(WIRE_ID varchar(25), [FROM] float, [TO] float, CLASS varchar(15));
Insert @WIRE_CLASS(WIRE_ID, [FROM], [TO], CLASS) Values
('05485',0.000,1.520,'PL'),
('05485',1.520,3.050,'PL'),
('05485',3.050,22.250,'SL'),
('05485',3.050,22.250,'SP'),
('05485',22.250,33.530,'SL'),
('05485',22.250,33.530,'QT'),
('05485',33.530,43.580,'QT'),
('05485',43.580,52.580,'PL'),
('05485',52.580,57.910,'QT'),
('114161',0.000,3.000,'SW'),
('114161',3.000,5.000,'SL'),
('114161',5.000,6.000,'SL'),
('114161',6.000,9.412,'YN'),
('114161',9.412,10.549,'YN'),
('114161',10.549,12.375,'CM'),
('114161',12.375,14.438,'SL'),
('114161',14.438,15.126,'SL');
;with WIRE AS (
SELECT
    WIRE_ID,
    [FROM],
    [TO],
    CLASS
FROM
    @WIRE_CLASS
), ISLANDS AS (
SELECT
    ROW_NUMBER() OVER (ORDER BY WI.WIRE_ID, WI.[FROM]) ID,
    WI.WIRE_ID,
    WI.[FROM],
    WI.[TO],
    WI.CLASS,
    CASE WHEN WI2.WIRE_ID IS NULL THEN 1 ELSE 0 END BREAKER
FROM
    WIRE WI
    LEFT JOIN WIRE WI2 ON
        WI2.WIRE_ID = WI.WIRE_ID
        AND (WI2.[TO] = WI.[FROM])
        AND WI2.CLASS = WI.CLASS
), DATA AS(
SELECT
    IS1.WIRE_ID, IS1.[FROM], IS1.[TO], IS1.CLASS,
    (SELECT sum(BREAKER) FROM ISLANDS IS2 WHERE IS1.ID >= IS2.ID) BREAKER
FROM ISLANDS IS1
)
SELECT
    DA.WIRE_ID,
    MIN(DA.[FROM]),
    MAX(DA.[TO]),
    MIN(DA.CLASS)
FROM DATA DA
GROUP BY
    DA.WIRE_ID,
    BREAKER,
    DA.CLASS
ORDER BY
    DA.WIRE_ID,
    MIN(DA.[FROM]),
    MAX(DA.[TO])
```
Can you suggest a better way to do this??? Thanks a bunch SQL gurus!
|
As others have said, using a temp table instead of a table variable will make a world of difference to the performance. I assume however, you've only used the table variable to pose your example and in reality you have a proper user table that contains all this data.
My query will find the islands, but will provide slightly different output to your results from your sample data. My query will not split islands as it appears yours has in some instances... I see a discrepancy in your results in the way the SL/SP/QT results are handled for wire 05485. I think

```
WIRE_ID  FROM    TO      CLASS
05485    3.050   22.250  SL
05485    3.050   22.250  SP
05485    22.250  33.530  SL
05485    22.250  33.530  QT
05485    33.530  43.580  QT
```
should result in
```
WIRE_ID  FROM    TO      CLASS
05485    3.050   33.530  SL
05485    3.050   22.250  SP
05485    22.250  43.580  QT
```
not
```
WIRE_ID  FROM    TO      CLASS
05485    3.050   22.250  SL
05485    3.050   22.250  SP
05485    22.250  33.530  SL
05485    22.250  43.580  QT
```
Your results have split the SL island, but not the QT, even though there are from/to pairs matching between SL & SP (3.05/22.25) and then SL & QT (22.25/33.53).
I've run the query below on a VM on my laptop (read: a server should be quicker) and for 1M records, with ~2000 Wire/Class combinations and ~20% rows needing to have from/to combined. I've randomly generated the data a few times and it typically takes only between 80 and 100 seconds.
I've created the table as:
```
IF OBJECT_ID('tempdb.dbo.#WIRE_CLASS', 'U') IS NOT NULL
    DROP TABLE #WIRE_CLASS
GO

CREATE TABLE #WIRE_CLASS (
    WIRE_ID varchar(25),
    [FROM] float,
    [TO] float,
    CLASS varchar(15),
    PRIMARY KEY (WIRE_ID, [FROM], CLASS)
)
```
Here's the query, with explanations in comments
```
-- The Cross Join ensures we always have a pair of first and last from/to pairs
-- The left join matches all from=to combinations,
--   allowing the where clause to restrict to just the first and last
-- These first/last pairs are then grouped in the CTE
-- The final select is then quite simple
; With GroupedData AS (
    SELECT
        (Row_Number() OVER (ORDER BY W1.WIRE_ID, W1.CLASS, W1.[FROM]) - 1) / 2 Grp,
        W1.WIRE_ID, W1.[FROM], W1.[TO], W1.CLASS
    FROM #WIRE_CLASS W1
    CROSS JOIN (SELECT 0 AS [First] UNION SELECT 1) SetOrder
    LEFT OUTER JOIN #WIRE_CLASS W2
        ON W1.WIRE_ID = W2.WIRE_ID
        AND W1.CLASS = W2.CLASS
        AND ((W1.[TO] = W2.[FROM] AND [First] = 0)
          OR (W2.[TO] = W1.[FROM] AND [First] = 1))
    WHERE W2.WIRE_ID IS NULL
)
SELECT WIRE_ID, MIN([FROM]) AS [FROM], MAX([TO]) AS [TO], CLASS
FROM GroupedData
GROUP BY Grp, WIRE_ID, CLASS
ORDER BY WIRE_ID, [FROM], CLASS
```
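To sanity-check the island logic, here is the same query transliterated to SQLite and run from Python (purely for illustration; requires SQLite 3.25+ for window functions, and the reserved words FROM/TO are double-quoted rather than bracketed). Note that the SL island for wire 05485 comes back merged (3.05 to 33.53), matching the behaviour described above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE WIRE_CLASS (WIRE_ID TEXT, "FROM" REAL, "TO" REAL, CLASS TEXT)')
conn.executemany("INSERT INTO WIRE_CLASS VALUES (?, ?, ?, ?)", [
    ('05485', 0.0,   1.52,  'PL'), ('05485', 1.52,  3.05,  'PL'),
    ('05485', 3.05,  22.25, 'SL'), ('05485', 3.05,  22.25, 'SP'),
    ('05485', 22.25, 33.53, 'SL'), ('05485', 22.25, 33.53, 'QT'),
    ('05485', 33.53, 43.58, 'QT'), ('05485', 43.58, 52.58, 'PL'),
    ('05485', 52.58, 57.91, 'QT'),
])

islands = conn.execute('''
    WITH GroupedData AS (
        -- each surviving row is either the start or the end of an island;
        -- consecutive pairs share a Grp via integer division
        SELECT (ROW_NUMBER() OVER (ORDER BY W1.WIRE_ID, W1.CLASS, W1."FROM") - 1) / 2 AS Grp,
               W1.WIRE_ID, W1."FROM" AS f, W1."TO" AS t, W1.CLASS
        FROM WIRE_CLASS W1
        CROSS JOIN (SELECT 0 AS First UNION SELECT 1) SetOrder
        LEFT JOIN WIRE_CLASS W2
               ON W1.WIRE_ID = W2.WIRE_ID AND W1.CLASS = W2.CLASS
              AND ((W1."TO" = W2."FROM" AND First = 0)
                OR (W2."TO" = W1."FROM" AND First = 1))
        WHERE W2.WIRE_ID IS NULL
    )
    SELECT WIRE_ID, MIN(f), MAX(t), CLASS
    FROM GroupedData
    GROUP BY Grp, WIRE_ID, CLASS
    ORDER BY WIRE_ID, MIN(f), CLASS
''').fetchall()

for row in islands:
    print(row)
```

With the wire 05485 sample data this produces six islands, including the merged SL span and the two disjoint QT spans.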
|
Using temporary tables instead of CTE's with large datasets will improve the performance, but without your dataset I'm unable to test the performance of this.
Here's your queries using temp tables:
```
Declare @WIRE_CLASS Table(WIRE_ID varchar(25),
[FROM] float,
[TO] float,
CLASS varchar(15));
Insert @WIRE_CLASS(WIRE_ID, [FROM], [TO], CLASS)
Values ('05485',0.000,1.520,'PL'),
('05485',1.520,3.050,'PL'),
('05485',3.050,22.250,'SL'),
('05485',3.050,22.250,'SP'),
('05485',22.250,33.530,'SL'),
('05485',22.250,33.530,'QT'),
('05485',33.530,43.580,'QT'),
('05485',43.580,52.580,'PL'),
('05485',52.580,57.910,'QT'),
('114161',0.000,3.000,'SW'),
('114161',3.000,5.000,'SL'),
('114161',5.000,6.000,'SL'),
('114161',6.000,9.412,'YN'),
('114161',9.412,10.549,'YN'),
('114161',10.549,12.375,'CM'),
('114161',12.375,14.438,'SL'),
('114161',14.438,15.126,'SL');
SELECT
WIRE_ID,
[FROM],
[TO],
CLASS
INTO #tmp_WIRE
FROM
@WIRE_CLASS
SELECT
ROW_NUMBER() OVER (ORDER BY WI.WIRE_ID, WI.[FROM]) ID,
WI.WIRE_ID,
WI.[FROM],
WI.[TO],
WI.CLASS,
CASE WHEN WI2.WIRE_ID IS NULL THEN 1 ELSE 0 END as BREAKER
INTO #tmp_ISLANDS
FROM
#tmp_WIRE WI
LEFT JOIN #tmp_WIRE WI2 ON
WI2.WIRE_ID = WI.WIRE_ID
AND (WI2.[TO] = WI.[FROM])
AND WI2.CLASS = WI.CLASS
SELECT
IS1.WIRE_ID, IS1.[FROM], IS1.[TO], IS1.CLASS,
(SELECT sum(BREAKER) FROM #tmp_ISLANDS IS2 WHERE IS1.ID >= IS2.ID) BREAKER
INTO #tmp_DATA
FROM #tmp_ISLANDS IS1
SELECT
DA.WIRE_ID,
MIN(DA.[FROM]),
MAX(DA.[TO]),
MIN(DA.CLASS)
FROM #tmp_DATA DA
GROUP BY
DA.WIRE_ID,
BREAKER,
DA.CLASS
ORDER BY
DA.WIRE_ID,
MIN(DA.[FROM]),
MAX(DA.[TO])
```
**Related Reading:**
[Which are more performant, CTE or temporary tables?](https://stackoverflow.com/a/26205087/57475)
[What's the difference between a CTE and a Temp Table?](https://dba.stackexchange.com/questions/13112/whats-the-difference-between-a-cte-and-a-temp-table)
|
Finding Islands With Group and Interval Data
|
[
"",
"sql",
"sql-server",
"sql-server-2008-r2",
"aggregate",
"gaps-and-islands",
""
] |
I have transactional data that looks like this
```
Account ProductCategory
1 a
1 a
1 b
2 c
2 d
2 d
```
I need to find the ProductCategory that appears most per customer. Results:
```
Account ProductCategory
1 a
2 d
```
My query was long, with many nested subqueries. Any good ideas?
Thank you in advance for the help.
|
Most databases support the ANSI-standard window functions, particularly `row_number()`. You can use this with aggregation to get what you want:
```
select Account, ProductCategory
from (select Account, ProductCategory, count(*) as cnt,
row_number() over (partition by Account order by count(*) desc) as seqnum
from table t
group by Account, ProductCategory
) apc
where seqnum = 1;
```
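Here is a runnable sanity check of the `row_number()` approach on the question's sample data (SQLite via Python purely for illustration; SQLite 3.25+ is needed for window functions, and the count is pre-aggregated in a subquery so the window only orders by a plain column):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (Account INTEGER, ProductCategory TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(1, 'a'), (1, 'a'), (1, 'b'), (2, 'c'), (2, 'd'), (2, 'd')])

top = conn.execute("""
    SELECT Account, ProductCategory
    FROM (
        SELECT Account, ProductCategory,
               ROW_NUMBER() OVER (PARTITION BY Account
                                  ORDER BY cnt DESC) AS seqnum
        FROM (SELECT Account, ProductCategory, COUNT(*) AS cnt
              FROM t
              GROUP BY Account, ProductCategory) counted
    ) apc
    WHERE seqnum = 1
    ORDER BY Account
""").fetchall()

print(top)  # [(1, 'a'), (2, 'd')]
```

On ties, `row_number()` picks one row arbitrarily; add a tie-breaker to the window's `ORDER BY` if you need determinism.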
|
This can be done using analytic SQL, or just using count over group. The syntax depends on the RDBMS, as asked by Michael.
you can try following SQL :
```
select * from
(select account, ProductCategory, ct , ROW_NUMBER() OVER (partition by account, ProductCategory ORDER BY ct DESC ) As myRank
from (select account, ProductCategory, count(0) as ct
from <table>
group by account, ProductCategory ) t ) t2
where t2.myRank = 1
```
|
Max count by customer and category
|
[
"",
"sql",
"count",
"max",
""
] |
The table structure remains same after truncate statement and only records get deleted (with auto commit). Then what is the reason that truncate is a DDL statement ?
|
I'd add here some explanation:
By default, when truncating a table, you are not just removing rows; you are implicitly telling Oracle Database to also perform the following tasks:

* Deallocate all space used by the removed rows except that specified by the MINEXTENTS storage parameter
* Set the NEXT storage parameter to the size of the last extent removed from the segment by the truncation process

So it changes the storage definitions, altering the structure of the table by resetting the high water mark.

Also, it can't be DML, right? A WHERE clause can't be specified with, or combined into, a DDL statement. If TRUNCATE were a DML statement, then Oracle would have allowed us to use TRUNCATE along with a WHERE clause.
Note: A WHERE clause in SQL specifies that a SQL Data Manipulation Language (DML) statement should only affect rows that meet specified criteria.
|
`TRUNCATE` resets the high water mark of the table, effectively eliminating all the previously existing rows. Treating it as a DDL statement allows it to be super-fast, as it allows it to function without retaining undo (rollback) information like DML statements.
|
Why is truncate a DDL statement?
|
[
"",
"sql",
"oracle",
"ddl",
""
] |
I am running this code to import data from Access to Excel and getting a run-time error:
```
"syntax error in FROM clause."
```
The table in Access has four columns: `Date`, `Time`, `Tank`, `Comments`, and I want to import `Time` and `Tank`, based on a date in the spreadsheet.
I want to order these columns in the order `Tank`, `Time`.
The error is in the line:
```
.Open "Select [Time], [Tank] FROM [UnitOneRouting] WHERE [Date] = " & RpDate & " ORDER BY Tank, Time", cn, adOpenStatic, adLockOptimistic, adCmdTable
```
Code Snippet:
```
Sub ADOImportFromAccessTable()
Dim DBFullName As String
Dim TableName As String
Dim TargetRange As Range
Dim RpDate As Range
DBFullName = "U:\Night Sup\Production Report 2003 New Ver 5-28-10_KA.mdb"
TableName = "UnitOneRouting"
Worksheets("TankHours").Activate
Set TargetRange = Range("C5")
Set RpDate = Range("B2").Cells
Dim cn As ADODB.Connection, rs As ADODB.Recordset, intColIndex As Integer
Set TargetRange = TargetRange.Cells(1, 1)
' open the database
Set cn = New ADODB.Connection
cn.Open "Provider=Microsoft.Jet.OLEDB.4.0; Data Source=" & _
"U:\Night Sup\Production Report 2003 New Ver 5-28-10_KA.mdb" & ";"
Set rs = New ADODB.Recordset
With rs
' open the recordset
' filter rows based on date
.Open "Select [Time], [Tank] FROM [UnitOneRouting] WHERE [Date] = " & RpDate & " ORDER BY Tank, Time", cn, adOpenStatic, adLockOptimistic, adCmdTable
rs.Open , TargetRange
TargetRange.CopyFromRecordset rs
End With
rs.Close
Set rs = Nothing
cn.Close
Set cn = Nothing
End Sub
```
|
Start with a `SELECT` statement which Access will accept. Use a string variable to hold the statement. Then you can `Debug.Print` the variable and inspect the statement text in the Immediate window. For troubleshooting, you can also copy the statement text from there and paste it into SQL View of a new Access query.
Here is a code example, where I hard-coded the value for `RpDate` ... just to keep it simple.
```
Dim RpDate
Dim strSelect As String
RpDate = #9/26/2014#
strSelect = "SELECT u.Time, u.Tank" & vbCrLf & _
"FROM UnitOneRouting AS u" & vbCrLf & _
"WHERE u.Date = " & Format(RpDate, "\#yyyy-m-d\#") & vbCrLf & _
"ORDER BY u.Tank, u.Time;"
Debug.Print strSelect
```
This is the `SELECT` statement produced by that code ...
```
SELECT u.Time, u.Tank
FROM UnitOneRouting AS u
WHERE u.Date = #2014-9-26#
ORDER BY u.Tank, u.Time;
```
Once you have a valid Access SQL `SELECT` statement, you will need to fix the recordset `.Open` call to give it acceptable option values. `adCmdTable` causes an error because your recordset's data source is a `SELECT` statement, not a table.
```
' next line throws error -2147217900, "Syntax error in FROM clause."
.Open strSelect, cn, adOpenStatic, adLockOptimistic, adCmdTable
'either of the next 2 lines works ...
'.Open strSelect, cn, adOpenStatic, adLockOptimistic
.Open strSelect, cn, adOpenStatic, adLockOptimistic, adCmdText
```
So I think you're dealing with a situation where the error message is misleading. *"Syntax error in FROM clause"* suggests the problem is in the `SELECT` statement. However, once you do have a valid `SELECT`, you will still get that same error text due to `adCmdTable`. Do not use `adCmdTable` for a `SELECT`.
|
Do you have an example of your SQL request?
I think there's a problem in the date's format...
you should try to wrap your date (`RpDate`) with the `#` character, like this:
```
.Open "Select [Time], [Tank] FROM [UnitOneRouting] WHERE [Date] = #" & RpDate & "# ORDER BY Tank, Time", cn, adOpenStatic, adLockOptimistic, adCmdTable
```
|
Import Data into Excel from Access Table, Syntax error in FROM Clause
|
[
"",
"sql",
"excel",
"vba",
"ms-access",
""
] |
Given the following SQL table:
```
CREATE TABLE [dbo].[posts]
(
[id] [int] IDENTITY(1,1) NOT NULL,
[user_id] [int] NOT NULL,
[date_posted] [datetime] NOT NULL,
[date_modified] [datetime] NOT NULL,
[content] [text] NOT NULL,
CONSTRAINT [PK_posts] PRIMARY KEY CLUSTERED ( [id] ASC )
)
```
How can I retrieve the post id of the most recently modified post for each user?
|
You can get the most recent modified post for every user by using `row_number()`
```
select * from (
select * ,
row_number() over (partition by user_id order by date_modified desc) rn
from posts
) t1 where rn = 1
```
**Edit:** it appears you're using MySQL, which doesn't support `row_number()`. You can use `not exists` instead
```
select * from posts p1
where not exists (
select 1 from posts p2
where p2.user_id = p1.user_id
and p2.date_modified > p1.date_modified
)
```
Sidenote
```
Select * from posts GROUP BY user_id Order by date_modified desc
```
The query above might appear to work but it's actually unreliable because you can not influence which row from a group is returned via `order by`.
<https://dev.mysql.com/doc/refman/5.0/en/group-by-extensions.html>
> MySQL extends the use of GROUP BY so that the select list can refer to
> nonaggregated columns not named in the GROUP BY clause. This means
> that the preceding query is legal in MySQL. You can use this feature
> to get better performance by avoiding unnecessary column sorting and
> grouping. However, this is useful primarily when all values in each
> nonaggregated column not named in the GROUP BY are the same for each
> group. The server is free to choose any value from each group, so
> unless they are the same, the values chosen are indeterminate.
> Furthermore, the selection of values from each group cannot be
> influenced by adding an ORDER BY clause. Sorting of the result set
> occurs after values have been chosen, and ORDER BY does not affect
> which values within each group the server chooses.
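The `not exists` variant is easy to test on a few rows. The sketch below uses SQLite through Python purely for illustration, with made-up post dates stored as ISO-8601 text so that string comparison orders them correctly:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE posts (
    id INTEGER PRIMARY KEY, user_id INTEGER, date_modified TEXT)""")
conn.executemany("INSERT INTO posts VALUES (?, ?, ?)", [
    (1, 100, '2014-01-01'),
    (2, 100, '2014-06-01'),   # user 100's most recent post
    (3, 200, '2014-03-15'),   # user 200's only post
])

# keep a row only if no other row for the same user is newer
latest = conn.execute("""
    SELECT id, user_id FROM posts p1
    WHERE NOT EXISTS (
        SELECT 1 FROM posts p2
        WHERE p2.user_id = p1.user_id
          AND p2.date_modified > p1.date_modified)
    ORDER BY user_id
""").fetchall()

print(latest)  # [(2, 100), (3, 200)]
```

This is the portable "greatest per group" pattern: it works on any engine, with or without window-function support.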
|
Try This
```
Select
Id
,[User_id]
,[Date_Posted]
,[Date_Modified]
,[Content]
From [dbo].[posts] posts
where date_modified = (select Max(date_modified) from [dbo].[posts] posts2
                       where posts.[User_id] = posts2.[User_id])
```
|
SQL - Retrieve id of most recently modified posts
|
[
"",
"sql",
""
] |
Q: For each customer list the CustomerID and total number of orders placed.
I believe that I can execute this query with these three tables
```
Customer_T Order_T OrderLine_T
+------+------+------+- +---------+-----------+-------+ +---------+-----------------+
|CustID| CNAME | Addr | | OrderID | OrderDate |CustID | |OrderID | OrderedQuantity |
+------+------+------+ +-------+----+--------+-------+ +---------+-----------------+
```
I tried several queries; this is my latest iteration, which returns the error "Customer\_T.CustomerID is invalid because it is not contained in an aggregate function or GROUP BY clause."
```
SELECT
Customer_T.CustomerID, Order_T.OrderID
FROM
Customer_T, Order_T, OrderLine_T
WHERE
Order_T.CustomerID=Customer_T.CustomerID
AND Order_T.OrderID=OrderLine_T.OrderID
ORDER BY
COUNT(OrderLine_T.OrderedQuantity)
```
I am not sure where to go here... should i be using a Join operator or something?
EDIT: I should mention that the reason I need to pull additional CustIDs from the Customer\_T table is that there are customers who have not purchased anything and are not included in the order table (and the query is for EACH customer).
|
**For each customer list the CustomerID and total number of orders placed**
This only requires 2 tables (customers and orders)
```
SELECT
Customer_T.CustomerID
, COUNT(Order_T.OrderID) orders_per_customer
FROM Customer_T
LEFT OUTER JOIN Order_T ON Customer_T.CustomerID = Order_T.CustomerID
GROUP BY
Customer_T.CustomerID
ORDER BY
Customer_T.CustomerID
;
```
Note the use of `LEFT OUTER JOIN`, this join type allows all records in Customer\_T to be returned even if they have not placed an order. I recommend you visit [here for a visual representation of SQL joins](http://www.codeproject.com/Articles/33052/Visual-Representation-of-SQL-Joins).
You could add the third table for additional information perhaps, but you would need to take care that you don't exaggerate the number of orders, because the line items per order imply a one-to-many relationship (many line items for a single order). You could do this by introducing `COUNT(DISTINCT ...)`
```
SELECT
Customer_T.CNAME
, Customer_T.CustomerID
, COUNT(DISTINCT Order_T.OrderID) orders_per_customer
, SUM(OrderLine_T.OrderedQuantity) sum_of_quantity
FROM Customer_T
LEFT OUTER JOIN Order_T ON Customer_T.CustomerID = Order_T.CustomerID
LEFT OUTER JOIN OrderLine_T ON Order_T.OrderID = OrderLine_T.OrderID
GROUP BY
Customer_T.CNAME
, Customer_T.CustomerID
ORDER BY
Customer_T.CNAME
, Customer_T.CustomerID
;
```
Note the sum of quantity isn't a very meaningful metric, it's there merely as a demonstration of approach.
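A small runnable check of the first query's key point: `COUNT(Order_T.OrderID)` counts only matched rows, so a customer with no orders gets 0 rather than disappearing. The sketch uses SQLite via Python purely for illustration, with the schema trimmed to the relevant columns and invented ids:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Customer_T (CustID INTEGER PRIMARY KEY);
    CREATE TABLE Order_T (OrderID INTEGER PRIMARY KEY, CustID INTEGER);
    INSERT INTO Customer_T VALUES (1), (2), (3);          -- customer 3 has no orders
    INSERT INTO Order_T VALUES (10, 1), (11, 1), (12, 2);
""")

counts = conn.execute("""
    SELECT c.CustID, COUNT(o.OrderID) AS orders_per_customer
    FROM Customer_T c
    LEFT OUTER JOIN Order_T o ON c.CustID = o.CustID
    GROUP BY c.CustID
    ORDER BY c.CustID
""").fetchall()

print(counts)  # [(1, 2), (2, 1), (3, 0)]
```

Counting the join column (`o.OrderID`) rather than `COUNT(*)` is what makes the unmatched customer show 0, because `COUNT` ignores the NULLs produced by the outer join.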
|
Maybe this would help you get the right answer.
```
SELECT ord.CustID, count(ord.CustID) AS OrderCount, cust.CNAME
FROM Customer_T cust JOIN Order_T ord
  ON (ord.CustID = cust.CustID)
JOIN OrderLine_T ordLine
  ON (ord.OrderID = ordLine.OrderID)
GROUP BY ord.CustID, cust.CNAME ORDER BY OrderCount DESC
```
but it would be trickier to get the order quantity here.
|
SQL query over three tables returning an aggregate answer
|
[
"",
"sql",
"sql-server",
""
] |
My situation:
I have the following view to retrieve data from:
```
|ID | START_DATE | END_DATE |
|80 | 09-JAN-2013 15:01:52 | 20-SEP-2014 15:01:52 |
|82 | 09-SEP-2014 15:01:52 | 25-SEP-2014 15:01:52 |
```
What I want is something like this:
```
|TOTAL_TIME_IN_HOURS| MONTH| YEAR |
| 200 | 01 | 2013 |
| 250 | 02 | 2013 |
| etc..... | etc. | etc..|
| 150 | 09 | 2014 |
```
Some additional information:
I can only use select statements, but I am able to create views beforehand.
It's an Oracle DB, so I'm not able to use MYSQL functions like DATEDIFF etc.
I have done the following:
```
SELECT
ID,
SUM(END_TIME - START_TIME) * 24 AS TOTAL_TIME_IN_HOURS
FROM TABLE_X
WHERE TO_CHAR(START_TIME, 'MM') IN (1,2,3,4,5,6,7,8,9,10,11,12) AND TO_CHAR(START_TIME, 'YYYY') BETWEEN 1965 AND 2050 AND
TO_CHAR(END_TIME, 'MM') IN (1,2,3,4,5,6,7,8,9,10,11,12) AND TO_CHAR(END_TIME, 'YYYY') BETWEEN 1965 AND 2050
GROUP BY ID, TO_CHAR(START_TIME, 'YYYY'), TO_CHAR(START_TIME, 'MM') ORDER BY ID;
```
This returns the following:
```
|ID| TOTAL_TIME_IN_HOURS |
|80| 5000 |
|82| 300 |
```
(I used fictitious results, because the question isn't about the factual results)
This logic is OK as long as I only need the total amount of hours between the start and end date. However, what I need is the total amount of hours per month between the start and end date.
I thought of adding additional columns to my views, like start\_month, end\_month, start\_year and end\_year. However I ran into new problems with these options, like leap years...
My question is: Is it possible to reach the result I want? If so what kind of logic should I use to reach this result? (Preferably a dynamic query, so I don't have to enter hundreds of lines of code)
|
Try this:
```
with t(ID, START_DATE, END_DATE) as (
select 80, to_date('09/01/2013 15:01:52', 'DD/MM/YYYY HH24:MI:SS'), to_date('20/09/2014 15:01:52', 'DD/MM/YYYY HH24:MI:SS') from dual union all
select 82, to_date('09/09/2014 15:01:52', 'DD/MM/YYYY HH24:MI:SS'), to_date('25/09/2014 15:01:52', 'DD/MM/YYYY HH24:MI:SS') from dual
), t_mon(id, start_date, end_date, lvl, year, month) as (
select id
, start_date
, least(trunc(add_months(start_date, 1), 'MONTH'), end_date)
, 1
, extract(year from start_date)
, extract(month from start_date)
from t
union all
select t.id
, greatest(trunc(add_months(t.start_date, lvl), 'MONTH'), t.start_date)
, least(trunc(add_months(t.start_date, lvl+1), 'MONTH'), t.end_date)
, lvl + 1
, extract(year from greatest(trunc(add_months(t.start_date, lvl), 'MONTH'), t.start_date))
, extract(month from greatest(trunc(add_months(t.start_date, lvl), 'MONTH'), t.start_date))
from t, t_mon
where trunc(add_months(t.start_date, t_mon.lvl), 'MONTH') < t.end_date
), t_corr(id, start_date, end_date, year, month) as (
select unique id, start_date, end_date, year, month
from t_mon
)
select id, year, month, sum(end_date - start_date) * 24 hours
from t_corr
group by id, year, month
order by id, year, month
ID YEAR MONTH HOURS
--------- ---------- ---------- ----------
80 2013 1 536,968889
80 2013 2 672
80 2013 3 744
80 2013 4 720
80 2013 5 744
80 2013 6 720
80 2013 7 744
80 2013 8 744
80 2013 9 720
80 2013 10 744
80 2013 11 720
80 2013 12 744
80 2014 1 744
80 2014 2 672
80 2014 3 744
80 2014 4 720
80 2014 5 744
80 2014 6 720
80 2014 7 744
80 2014 8 744
80 2014 9 471,031111
82 2014 9 384
```
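As a quick cross-check of the expected numbers (this is plain Python, not Oracle — the function name and sample timestamps are just for the demo), the month-splitting logic can be sketched like this:

```python
from datetime import datetime, timedelta

def hours_per_month(start, end):
    """Split the span [start, end) into per-month buckets of hours."""
    out = {}
    cur = start
    while cur < end:
        # first instant of the next month: jump safely past month end, then truncate
        nxt = (cur.replace(day=1) + timedelta(days=32)).replace(
            day=1, hour=0, minute=0, second=0, microsecond=0)
        stop = min(nxt, end)
        key = (cur.year, cur.month)
        out[key] = out.get(key, 0) + (stop - cur).total_seconds() / 3600
        cur = stop
    return out

# the ID 82 range from the question: 16 full days inside September 2014
buckets = hours_per_month(datetime(2014, 9, 9, 15, 1, 52),
                          datetime(2014, 9, 25, 15, 1, 52))
# a range straddling a month boundary
split = hours_per_month(datetime(2013, 1, 31, 12, 0),
                        datetime(2013, 2, 1, 12, 0))
```

For the ID 82 range this yields 384 hours in September 2014, which matches the recursive query's output above.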
|
Another recursive solution, which will require at least Oracle 11gR2:
```
with t(id, start_date, end_date) as
(select 80, to_date('09/01/2013 15:01:52', 'DD/MM/YYYY HH24:MI:SS'), to_date('20/09/2014 15:01:52', 'DD/MM/YYYY HH24:MI:SS') from dual
union all
select 82, to_date('09/09/2014 15:01:52', 'DD/MM/YYYY HH24:MI:SS'), to_date('25/09/2014 15:01:52', 'DD/MM/YYYY HH24:MI:SS') from dual
)
, t_recur(id, start_date, end_date, month_start_date, month_end_date) as
(select id
, start_date
, end_date
, start_date
, least(add_months(trunc(start_date, 'MM'), 1), end_date)
from t
union all
select id
, start_date
, end_date
, trunc(add_months(month_start_date, 1), 'MM')
, least(add_months(trunc(month_start_date, 'MM'), 2), end_date)
from t_recur
where trunc(add_months(month_start_date, 1), 'MM') < end_date
)
select id
, extract(year from month_start_date) year
, extract(month from month_start_date) month
, (month_end_date - month_start_date) * 24 hours
from t_recur
order by id
, year
, month
```
|
Select amount of hours per month where start and end date spans across different years
|
[
"",
"sql",
"oracle",
""
] |
I trying to re-write this Microsoft Access IF statement as a CASE statement in SQL Server 2008:
```
Status: IIf(
IIf(B.[Online Class Completion Date] Is Null,
Null,
DateDiff("w",A.[Trust Start Date],B.[Online Class Completion Date])
) <=6,
"Pass",
IIf(
IIf(B.[Online Class Completion Date] Is Null,
Null,
DateDiff("w",A.[Trust Start Date],B.[Online Class Completion Date])
) >6,
"Fail",
IIf(DateDiff("w",A.[Trust Start Date],MAX(B.[Month])) <=6,
"NotApp",
"Never"
)
)
)
```
I've tried various different ways but I just can't seem to get the nesting right. I get Pass and Fail results but no NotApp or Never ones.
This is where I started:
```
CASE WHEN
(CASE WHEN CONVERT(DATETIME,LEFT(B.[Online Class Completion Date],2)+'/'+SUBSTRING(B.[Online Class Completion Date],4,2)+'/'+RIGHT(B.[Online Class Completion Date],4),103) IS NULL
THEN NULL
ELSE DATEDIFF(ww,A.[Trust Start Date],CONVERT(DATETIME,LEFT(B.[Online Class Completion Date],2)+'/'+SUBSTRING(B.[Online Class Completion Date],4,2)+'/'+RIGHT(B.[Online Class Completion Date],4),103))
END)
<=6
THEN 'Pass'
WHEN
(CASE WHEN CONVERT(DATETIME,LEFT(B.[Online Class Completion Date],2)+'/'+SUBSTRING(B.[Online Class Completion Date],4,2)+'/'+RIGHT(B.[Online Class Completion Date],4),103) IS NULL
THEN NULL
ELSE DATEDIFF(ww,A.[Trust Start Date],CONVERT(DATETIME,LEFT(B.[Online Class Completion Date],2)+'/'+SUBSTRING(B.[Online Class Completion Date],4,2)+'/'+RIGHT(B.[Online Class Completion Date],4),103))
END)
>6
THEN 'Fail'
WHEN DATEDIFF(WEEK,A.[Trust Start Date],MAX(B.[Month])) <=6
THEN 'NotApp'
ELSE 'Never'
END AS [Status]
```
Any help greatly appreciated!!
|
First off:
`CONVERT(DATETIME,LEFT(B.[Online Class Completion Date],2)+'/'+SUBSTRING(B.[Online Class Completion Date],4,2)+'/'+RIGHT(B.[Online Class Completion Date],4),103) IS NULL` will never be TRUE, since it will contain '//' at the very minimum. Just test whether the field is NULL; there's no reason to convert.
Furthermore, because a CASE statement exits after it finds a TRUE condition, you don't need to nest like you do with IIF(). Which is exactly where you were headed, so that's good.
Here's a stab at it, making minor changes to your existing attempt (updated to include your Char-to-Date conversion for that awful string date field):
```
CASE
WHEN
CASE
WHEN B.[Online Class Completion Date] IS NULL THEN NULL
ELSE datediff(ww,A.[Trust Start Date],CONVERT(DATETIME,LEFT(B.[Online Class Completion Date],2)+'/'+SUBSTRING(B.[Online Class Completion Date],4,2)+'/'+RIGHT(B.[Online Class Completion Date],4),103))
END <= 6 THEN 'Pass'
WHEN
CASE
WHEN B.[Online Class Completion Date] IS NULL THEN NULL
ELSE datediff(ww,A.[Trust Start Date],CONVERT(DATETIME,LEFT(B.[Online Class Completion Date],2)+'/'+SUBSTRING(B.[Online Class Completion Date],4,2)+'/'+RIGHT(B.[Online Class Completion Date],4),103))
END > 6 THEN 'Fail'
WHEN
DateDiff(ww,A.[Trust Start Date],MAX(B.[Month])) <= 6 THEN 'NotApp'
ELSE
'Never'
END as 'Status'
```
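To see why no nesting is needed — and why a NULL completion date falls through to the ELSE branch — here is a small demo using Python's built-in sqlite3 as a stand-in for SQL Server (searched-CASE semantics are the same in both: the first true WHEN wins, and a comparison against NULL is not true):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# CASE returns the result of the FIRST true WHEN, so the branches
# are checked in order and no nesting is required.
row = con.execute("""
    SELECT CASE
             WHEN 10 <= 6 THEN 'Pass'
             WHEN 10 >  6 THEN 'Fail'
             ELSE 'Never'
           END
""").fetchone()

# NULL <= 6 and NULL > 6 both evaluate to NULL (not true),
# so a NULL date falls through to the ELSE branch.
null_row = con.execute("""
    SELECT CASE
             WHEN NULL <= 6 THEN 'Pass'
             WHEN NULL >  6 THEN 'Fail'
             ELSE 'Never'
           END
""").fetchone()
```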
|
This would be the appropriate nested case structure, but you would still need to clean up the dateDiff functionality in SQL-Server.
```
CASE WHEN B.[Online Class Completion Date] IS NULL
THEN NULL
ELSE CASE WHEN DateDiff( 'w', A.[Trust Start Date], B.[Online Class Completion Date] ) <= 6
THEN 'Pass'
ELSE CASE WHEN DateDiff( 'w', A.[Trust Start Date], B.[Online Class Completion Date] ) > 6
THEN 'Fail'
ELSE CASE WHEN DateDiff( 'w', A.[Trust Start Date], MAX(B.[Month] )) <= 6
THEN 'NotApp'
ELSE 'Never'
END
END
END
END as Status
```
Now, that said, seeing a more complete query, and maybe having some precursory work done on the dates would further simplify this case construct and would prefer doing it after a review of full qry vs waste time re-converting all the dates in question... But hopefully a good jump for you to complete your conversion otherwise.
|
Case statement SQL
|
[
"",
"sql",
"sql-server",
"if-statement",
"case",
""
] |
Following on from my earlier question here [Case statement for Order By clause with Desc/Asc sort](https://stackoverflow.com/questions/25949837/case-statement-for-order-by-clause-with-desc-asc-sort) I have a statement like this:
```
SELECT
*
FROM
TableName
WHERE
ORDER BY
CASE @OrderByColumn WHEN 1 THEN Forename END DESC,
CASE @OrderByColumn WHEN 2 THEN Surname END ASC
```
This works well, but sometimes I need more than column in the order by. I actually need something like this:
```
.....
ORDER BY
CASE @OrderByColumn WHEN 1 THEN Forename, Date, Location END DESC
```
I can't work out how to make the `CASE` statement allow multiple columns in the `THEN` part.
|
Do you need this?
```
ORDER BY
CASE @OrderByColumn WHEN 1 THEN Forename END DESC, Date, Location,
CASE @OrderByColumn WHEN 2 THEN Surname END ASC
```
|
You can write multiple cases, even if they all have the same condition.
```
ORDER BY
CASE @OrderByColumn WHEN 1 THEN Forename END DESC,
CASE @OrderByColumn WHEN 1 THEN Date END,
CASE @OrderByColumn WHEN 1 THEN Location END,
CASE @OrderByColumn WHEN 2 THEN Surname END ASC
```
Actually, you don't specify a column to sort by, but an expression.
The case statement returns null if the condition is not met, so actually it means:
```
CASE @OrderByColumn WHEN 1 THEN Forename ELSE NULL END
```
So if @OrderByColumn is not 1 then the statement returns always NULL. That doesn't exclude it from sorting, by the way, but it puts all those rows together in the result, making 'SurName' the decisive sorting within that group of rows.
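A quick demonstration of that NULL behaviour, using Python's sqlite3 as a stand-in (the table and sample names are made up for the demo): whichever CASE matches the parameter produces real values, while the other key is all NULL and therefore ties, letting the matching key decide the order.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE people (forename TEXT, surname TEXT);
    INSERT INTO people VALUES ('Ann','M'), ('Bob','Z'), ('Cid','A');
""")

def sorted_names(order_by_column):
    rows = con.execute("""
        SELECT forename FROM people
        ORDER BY CASE :c WHEN 1 THEN forename END DESC,
                 CASE :c WHEN 2 THEN surname  END ASC
    """, {"c": order_by_column}).fetchall()
    return [r[0] for r in rows]

by_forename = sorted_names(1)  # forename DESC decides; second key is all NULL
by_surname  = sorted_names(2)  # first key is all NULL; surname ASC decides
```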
|
CASE Statement for Order By Clause with Multiple Columns and Desc/Asc Sort
|
[
"",
"sql",
"sql-server-2012",
""
] |
I have this table:
```
**ID StartDate EndDate**
1 01/01/2012 03/01/2012
2 28/09/2013 02/10/2013
3 12/06/2011 15/06/2011
```
And I need to have this table:
Date
```
**ID Date**
1 01/01/2012
1 02/01/2012
1 03/01/2012
2 28/09/2013
2 29/09/2013
2 30/09/2013
2 01/10/2013
2 02/10/2013
3 12/06/2011
3 13/06/2011
3 14/06/2011
3 15/06/2011
```
I have the following SQL code that returns the dates between the StartDate and the EndDate, including StartDate and EndDate:
```
declare @Start datetime
declare @end datetime
declare @request int
set @Start = '2014-09-28 06:53:04.560'
set @end = '2014-09-29 11:53:04.560'
set @request = 1
;with Dates as (
select @request as reqId,@Start as reqDate
union all
select reqId+1,DATEADD(hh,1,reqDate) from Dates
where reqDate < @end
)
select * from Dates
```
How can I get this result for a bulk of StartDate-EndDate inputs?
|
You can do this by using your source date table as below
```
declare @request int
set @request = 1
;with Dates as (
SELECT @request as reqId,StartDate as reqDate, EndDate
FROM yourDateTable
UNION ALL
SELECT reqId+1,DATEADD(DAY,1,reqDate),Dates.EndDate
FROM Dates
WHERE DATEADD(DAY,1,reqDate) < EndDate
)
SELECT *
FROM Dates
```
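The same range-expansion pattern can be tried out with SQLite's recursive CTE support from Python (the table and column names here are simplified stand-ins for the question's table):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE ranges (id INTEGER, start_date TEXT, end_date TEXT);
    INSERT INTO ranges VALUES (1, '2012-01-01', '2012-01-03'),
                              (3, '2011-06-12', '2011-06-15');
""")

# anchor member seeds one row per range; recursive member adds one day
# at a time until the end date is reached (inclusive)
rows = con.execute("""
    WITH RECURSIVE dates(id, d, end_date) AS (
        SELECT id, start_date, end_date FROM ranges
        UNION ALL
        SELECT id, date(d, '+1 day'), end_date FROM dates
        WHERE date(d, '+1 day') <= end_date
    )
    SELECT id, d FROM dates ORDER BY id, d
""").fetchall()
```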
|
Use a recursive query:
```
CREATE TABLE #ranges
(
Id INT ,
startDate DATE ,
ENdDate DATE
)
INSERT #ranges
( Id, startDate, ENdDate )
VALUES ( 1, '2014-5-2', '2014-5-5' ),
( 2, '2014-8-29', '2014-9-3' ),
( 3, '2014-10-2', '2014-10-8' );
WITH
cte
AS ( SELECT *
FROM #ranges
UNION ALL
SELECT id ,
DATEADD(DAY, 1, startDate) ,
EndDate
FROM cte
WHERE DATEADD(DAY, 1, startDate) <= EndDate
)
SELECT Id ,
startDate Date
FROM cte
ORDER BY Id ,
startDate
DROP TABLE #ranges
```
|
Insert a series of dates into sql startdate and enddate inputs
|
[
"",
"sql",
"sql-server",
"date",
"between",
""
] |
I have a table of answers (tbAnswers). And I have a table which stores who upvotes an answer (tbContentRanking) which has a record for every time an answer has been upvoted.
I'm querying tbAnswers to get answers for each question and I want the answers with the most upvotes (most records in tbContentRanking) to be at the top. If the recordCount in tbContentRanking is tied between answers, I want the most recent answer to win the tie.
Here's the tables:
```
**tbAnswers**
AnswerID AnswerValue QuestionID CreateDateTime
1 This is an answer 15 Sept. 01 2014
2 This is another answer 15 Sept. 03 2014
3 This is yet another 15 Sept. 09 2014
4 Here's an answer 15 Sept. 10 2014
**tbContentRanking**
ContentRankingID AnswerID QuestionID UserID
1 3 15 10
2 3 15 101
3 2 15 30
4 2 15 3
5 4 15 23
6 4 15 42
7 4 15 4
8 1 15 6
```
Based on this it would order the result:
```
AnswerID:
4, 3, 2, 1
```
(3 and two are tied but 3 is more recent)
The initial tbAnswers query (qGetAnswers) is REALLY complicated (for other business reasons) so I just want to do a query of queries on qGetAnswers, but I'm not sure how to do this. OR if a query of queries isn't the best idea I'm open to others.
Query of Queries:
```
<cfquery name="getAnswersOrder" dbtype="query">
SELECT *
FROM qGetAnswers
(SELECT Count(AnswerID) AS theCount
FROM tbContentRanking
WHERE QuestionID = #arguments.questionID#)
Order By theCount, CreateDateTime
</cfquery>
```
It's something like this but I'm pretty lost about how to construct the query of queries. OR like I said maybe a QoQ isn't even the best option.
|
Another solution is the next:
Instead of computing the num. of upvotes using a subquery you could [denormalize](http://en.wikipedia.org/wiki/Denormalization) `tbAnswers` table by adding a column named `UpvotesNum`. This means that every upvote action will "touch" two rows:
1. One row in `tbContentRanking` table (because of `INSERT INTO tbContentRanking (AnswerID, ...) VALUES (@AnswerID, ...)`) and
2. Another row in `tbAnswers` table (because of `UPDATE tbAnswers SET UpvotesNum = UpvotesNum + 1 WHERE AnswerID = @AnswerID`.
From this point of view, the problem of showing those answers with the most upvotes becomes trivial:
```
SELECT ... columns ...
FROM tbAnswers
ORDER BY UpvotesNum DESC, CreateDateTime DESC
```
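A minimal sketch of that denormalization, using Python's sqlite3 (simplified columns; the `upvote` helper is hypothetical, not from the question). The key point is that both writes happen in one transaction, so the counter and the ranking rows can't drift apart:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tbAnswers (AnswerID INTEGER PRIMARY KEY,
                            UpvotesNum INTEGER DEFAULT 0);
    CREATE TABLE tbContentRanking (AnswerID INTEGER, UserID INTEGER);
    INSERT INTO tbAnswers (AnswerID) VALUES (1), (2);
""")

def upvote(answer_id, user_id):
    with con:  # one transaction: both rows are written, or neither
        con.execute("INSERT INTO tbContentRanking VALUES (?, ?)",
                    (answer_id, user_id))
        con.execute("UPDATE tbAnswers SET UpvotesNum = UpvotesNum + 1 "
                    "WHERE AnswerID = ?", (answer_id,))

upvote(2, 10)
upvote(2, 11)
upvote(1, 12)
counts = dict(con.execute(
    "SELECT AnswerID, UpvotesNum FROM tbAnswers").fetchall())
```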
|
You want to use a GROUP BY clause:
```
SELECT QuestionID, CreateDateTime, Count(AnswerID) as theCount
FROM tbContentRanking
WHERE QuestionID = #arguments.questionID#
GROUP BY QuestionID, CreateDateTime
ORDER BY Count(AnswerID), CreateDateTime
```
|
Ordering forum answers based on quantity of upvotes
|
[
"",
"sql",
"sql-server",
"coldfusion",
"coldfusion-10",
""
] |
I am trying to insert rows if they don't exist (for specific values) in this table.
My table where I insert is insertTable:
```
date (date)
created (datetime)
category (varchar)
companyId (int)
price (decimal 6,3)
```
I select my rows from an inner join between two tables doing :
```
declare @currentDateTime datetime = getDate()
declare @currentDate date = getDate()
INSERT INTO insertTable (date, created, category, companyId, price)
SELECT @currentDate, @currentDateTime, '30 Day', company.companyId, product.price
FROM product
INNER JOIN company
ON product.companyid = company.companyid
WHERE product.price >= 0.31 AND ... other conditions on company fields ...
AND NOT EXISTS(
SELECT * FROM insertTable WHERE insertTable.price = product.price
AND insertTable.date = @currentDateTime
AND insertTable.companyid = product.companyid
AND LTRIM(RTRIM(insertTable.category)) = '30 Day'
)
```
What am I doing wrong please?
Thanks
--edited after Gordon Linoff Comment.
I insert the result of that select in my insertTable.
My issue is that I get duplicates in insertTable.
If insertTable had
```
2014-09-26 | 2014-09-26 02:25:00 | 30 Day | 32650 | 0.600
```
My select will return something like
```
2014-09-26 | 2014-09-26 02:36:00 | 30 Day | 32650 | 0.600
```
However I already have that companyID and price in the insert table
|
Your problem statement cannot be accurate:
* `rate` is not defined
* you are missing `created`
* you are comparing `date` to `@currentDateTime`
```
SELECT @currentDate, @currentDateTime, '30 Day'
, company.companyId, product.price
FROM product
JOIN company
ON product.companyid = company.companyid
and product.price >= 0.31 AND ... other conditions on company fields ...
AND NOT EXISTS(
SELECT *
FROM insertTable
WHERE insertTable.date = @currentDate
--AND insertTable.created = @currentDateTime
AND insertTable.price = product.price
AND insertTable.companyid = product.companyid
AND LTRIM(RTRIM(insertTable.category)) = '30 Day'
)
```
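The deduplication idea — compare the DATE column to a DATE value, not a DATETIME — can be exercised with Python's sqlite3 (simplified, made-up columns): running the same insert twice only inserts once.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE insertTable (d TEXT, companyId INTEGER, price REAL);
    CREATE TABLE product (companyId INTEGER, price REAL);
    INSERT INTO product VALUES (32650, 0.600);
""")

sql = """
    INSERT INTO insertTable (d, companyId, price)
    SELECT date('now'), p.companyId, p.price
    FROM product p
    WHERE NOT EXISTS (
        SELECT 1 FROM insertTable i
        WHERE i.d = date('now')      -- compare DATE to DATE, not to DATETIME
          AND i.companyId = p.companyId
          AND i.price = p.price
    )
"""
con.execute(sql)
con.execute(sql)  # second run inserts nothing: the row already exists today
n = con.execute("SELECT COUNT(*) FROM insertTable").fetchone()[0]
```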
|
I think you must modify your subquery in `NOT EXISTS`: `@currentDateTime` change to `@currentDate` and `rate.companyid` change to `company.companyid` (because when insert `insertTable.date` gets value of `@currentDate` and `insertTable.companyid` gets value of `company.companyid`):
```
...
AND NOT EXISTS(
SELECT * FROM insertTable WHERE insertTable.price = product.price
AND insertTable.date = @currentDate
AND insertTable.companyid = company.companyid
AND LTRIM(RTRIM(insertTable.category)) = '30 Day'
)
```
|
SQL Query: insert if not already exists from 2 tables
|
[
"",
"sql",
"t-sql",
""
] |
Basically I want to be able to select the Monday and the Friday for every week in the year.
So for example this coming week I want 9/29/2014 and 10/3/2014, but I want this for every week in the year.
|
Here's one way (you might need to check which day of the week is setup to be the first, here I have Sunday as the first day of the week)
You can use a table with many rows (more than 365) to `CROSS JOIN` to in order to get a run of dates (a tally table).
My sys columns has over 800 rows in, you could use any other table or even `CROSS JOIN` a table onto itself to multiply up the number of rows
Here I used the `row_number` function to get a running count of rows and incremented the date by 1 day for each row:
```
select
dateadd(d, row_number() over (order by name), cast('31 Dec 2013' as datetime)) as dt
from sys.columns a
```
With the result set of dates now, it's trivial to check the day of week using `datepart()`
```
SELECT
dt,
datename(dw, dt)
FROM
(
select
dateadd(d, row_number() over (order by name), cast('31 Dec 2013' as datetime)) as dt
from
sys.columns a
) as dates
WHERE
(datepart(dw, dates.dt) = 2 OR datepart(dw, dates.dt) = 6)
AND dt >= '01 Jan 2014' AND dt < '01 Jan 2015'
```
Edit:
Here's an example SqlFiddle
<http://sqlfiddle.com/#!6/d41d8/21757>
Edit 2:
If you want them on the same row, days of the week at least are constant, you know Friday is always 4 days after Monday so do the same but only look for Mondays, then just add 4 days to the Monday...
```
SELECT
dt as MonDate,
datename(dw, dt) as MonDateName,
dateadd(d, 4, dt) as FriDate,
datename(dw, dateadd(d, 4, dt)) as FriDateName
FROM
(
select
dateadd(d, row_number() over (order by name), cast('31 Dec 2013' as datetime)) as dt
from
sys.columns a
) as dates
WHERE
datepart(dw, dates.dt) = 2
AND dt >= '01 Jan 2014' AND dt < '01 Jan 2015'
```
Example SqlFiddle for this:
<http://sqlfiddle.com/#!6/d41d8/21764>
(note that only a few rows come back because sys.columns is quite small on the SqlFiddle server, try another system table if this is a problem)
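Outside of SQL, the "Friday is always Monday + 4 days" observation is easy to sanity-check in plain Python (2014 is used here to match the question's example dates):

```python
from datetime import date, timedelta

year = 2014
d = date(year, 1, 1)
# advance to the first Monday of the year (Monday has weekday() == 0)
d += timedelta(days=(7 - d.weekday()) % 7)

weeks = []
while d.year == year:
    friday = d + timedelta(days=4)  # Friday is always 4 days after Monday
    weeks.append((d, friday))
    d += timedelta(days=7)
```

Note that the Friday of the year's last week may spill into the next year, just like the SQL version with `dateadd(d, 4, dt)`.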
|
You can use a suitable table with numbers, like the master..spt\_values table as basis for the range generation:
```
;WITH dates AS (
SELECT DATEADD(DAY,number,CAST('2014-01-01' AS DATE)) d
FROM master..spt_values WHERE TYPE = 'p'
AND number < 366
)
SELECT
Week = DATEPART(WEEK, d),
DayOfWeek = DATENAME(dw, d),
Date = d
FROM dates
WHERE DATENAME(dw, d) IN ('Monday', 'Friday')
-- or use datepart instead as datename might be specific to language
-- WHERE DATEPART(dw, d) IN (2,6)
```
Sample output:
```
Week DayOfWeek Date
----------- ------------------------------ ----------
1 Friday 2014-01-03
2 Monday 2014-01-06
2 Friday 2014-01-10
3 Monday 2014-01-13
3 Friday 2014-01-17
4 Monday 2014-01-20
4 Friday 2014-01-24
5 Monday 2014-01-27
5 Friday 2014-01-31
```
|
How to select every Monday date and every Friday date in the year
|
[
"",
"sql",
"sql-server-2012",
""
] |
I have two tables: a category table with the columns `name` and `category`, and an entry table with the columns `entry` and `entry_name`. The `name` and `entry_name` columns share the same names (foreign key relationship). I would like to do a count of all the entries made in the entry table but only for a specific category from the category table (e.g. count and group by descending only for category 3)
I've tried some basic joins with no luck.
Any help would be greatly appreciated.
Thanks
|
Create a result with your entries and categories, then filter with where:
```
SELECT COUNT(*)
FROM entries e
LEFT JOIN categories cat
ON e.entry_name = cat.name
WHERE cat.category = ?
```
|
Try this:
```
select count(1)
from caterogy c inner join entry e on (c.name = e.entry_name)
where c.category = 'SPECIFIC_CAT'
```
|
join between two database values
|
[
"",
"mysql",
"sql",
"join",
""
] |
I am having all employees(manager and employees) under one table called Employee. Table looks as follows,
## **Table**
```
+-------+------------+---------+---------+------------+
|emp_id | name | dept_id | salary | manager_id |
+=======+============+=========+=========+============+
| 1 | Sally | 1 | 20000 | null |
| 2 | Ajit | 2 | 20000 | 1 |
| 3 | Rahul | 1 | 20000 | 1 |
| 4 | uday | 1 | 20000 | null |
| 5 | john | 1 | 20000 | null |
| 6 | netaji | 2 | 20000 | 2 |
| 7 | prakriti | 3 | 1111 | 3 |
| 8 | sachin | 3 | 1111 | 3 |
| 9 | santosh | 1 | 1111 | 2 |
| 10 | Ravi | 1 | 1111 | 2 |
+-------+------------+---------+---------+------------+
```
Both managers and employees belong to the same table; manager\_id refers to the emp\_id of the employee who is the manager.
I want to write a query to count the number of employees belonging to each manager, so even if a certain manager doesn't have any employee under her or him the count will show as 0.
Result should be as follows,
## **Expected Output**
```
+------+----------+
|Count | Manager |
+======+==========+
| 2 | Sally |
| 3 | Ajit |
| 2 | Rahul |
| 0 | Uday |
| 0 | John |
+------+----------+
```
|
You need to do a left self-join on the table. The left join will ensure that there is a row for every manager even if there are no employees under them. You need to use the `COUNT()` aggregate on a field from the employee side of the join that will be `NULL` if the manager has no employees. `COUNT()` doesn't actually count NULLs, so this should give you zeroes where you want them.
The `WHERE` clause in this query defines managers by looking if their `manager_id` is NULL or if there are any matches in the joined table which means there are people that have them set as their manager.
```
SELECT mgr.name, COUNT(emp.emp_id) AS employee_count
FROM Employee AS mgr
LEFT JOIN Employee AS emp ON emp.manager_id=mgr.emp_id
WHERE mgr.manager_id IS NULL OR emp.emp_id IS NOT NULL
GROUP BY mgr.name
```
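Here is the same query run against the question's sample data with Python's sqlite3 (LEFT JOIN and `COUNT(column)` behave the same way there):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Employee (emp_id INTEGER, name TEXT, manager_id INTEGER);
    INSERT INTO Employee VALUES
        (1,'Sally',NULL),(2,'Ajit',1),(3,'Rahul',1),(4,'uday',NULL),
        (5,'john',NULL),(6,'netaji',2),(7,'prakriti',3),(8,'sachin',3),
        (9,'santosh',2),(10,'Ravi',2);
""")

# COUNT(emp.emp_id) skips the NULLs produced by the LEFT JOIN,
# so managers without employees get 0 instead of 1
rows = con.execute("""
    SELECT mgr.name, COUNT(emp.emp_id) AS employee_count
    FROM Employee AS mgr
    LEFT JOIN Employee AS emp ON emp.manager_id = mgr.emp_id
    WHERE mgr.manager_id IS NULL OR emp.emp_id IS NOT NULL
    GROUP BY mgr.name
""").fetchall()
counts = dict(rows)
```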
|
The correct solution likely involves *fixing* the schema, as any approach will *fail* for a "sub-manager" (who is managed and thus has a manager\_id) but does not currently manage anybody.
Anyway, if the above limitation is acceptable, then people are managers *if* either
* They have a NULL manager\_id (as stated in a comment), *or*
* They currently manage other employees
Then this query (example [sqlfiddle](http://sqlfiddle.com/#!2/fa006b/24)) can be used:
```
SELECT m.name as Manager, COUNT(e.id) as `Count`
FROM employee m
LEFT JOIN employee e
ON m.id = e.manager_id
GROUP BY m.id, m.name, m.manager_id
HAVING `Count` > 0 OR m.manager_id IS NULL
```
Notes/explanation:
* The LEFT [OUTER] join is important here; otherwise managers who did not manage anybody would not be found. The filtering is then applied via the HAVING clause on the grouped result.
* The COUNT is applied to a *particular* column, instead of `*`; when done so, NULL values in that column are *not counted*. In this case that means that employees (m) without a match (e) are not automatically selected by the COUNT condition in the HAVING. (The LEFT JOIN leaves in the left-side records, even when there is no join-match - all the right-side columns are NULL in this case.)
* The GROUP BY contains all the grouping fields, even if they appear redundant. This allows the `manager_id` field to be used in the HAVING, for instance. (The group on ID was done in case two managers ever have the same name, or it is to be selected in the output clause.)
|
SQL Self Join with null values
|
[
"",
"mysql",
"sql",
""
] |
I'm trying to load a Image control from a image blob saved previously in a sql database.I have testd so many ways and i can't make it work. The image blob is saved as:
```
qry.SQL.Text := 'update tbl set pic = :blobVal where id = :idVal';
qry.Parameters.ParamByName('blobVal').LoadFromFile('c:\sample.jpg', ftBlob);
qry.Parameters.ParamByName('idVal').Value := 1;
```
any suggestion?
|
There are a lot of threads here about loading images into a database, but I did not find one with update or insert parameters.
You might simply assign a graphic object to your parameter.
If you want to store different graphic types, you should add a column recording which kind of graphic is stored (e.g. jpeg, bmp, png), so that you can create the needed TGraphic descendant when you retrieve the picture from the database.
```
uses jpeg, pngimage;
type
TitTYPES=(itJPG,itPNG,itBMP);
procedure TDEMO.Button1Click(Sender: TObject);
var
jp:TJpegimage;
g:TGraphic;
begin
jp:=TJpegimage.Create;
try
ads.Close;
jp.LoadFromFile('C:\Bilder1\PIC.jpg');
ads.SQL.Text := 'Insert into IMGBlob (ID,Blob,typ) Values (:ID,:BLOB,:typ)';
ads.Parameters[0].Value := 1;
ads.Parameters[1].Assign(jp);
ads.Parameters[2].Value := itJPG;
ads.ExecSQL;
ads.SQL.Text := 'Select * from IMGBlob where ID=:ID';
ads.Parameters[0].Value := 1;
ads.Open;
try
case TitTYPES(ads.FieldByName('typ').AsInteger) of
itJPG: g:=TJpegimage.Create;
itPNG: g:=TPNGImage.Create;
itBMP: g:=TBitmap.Create;
end;
g.Assign(ads.FieldByName('Blob'));
Image1.Picture.Assign(g);
finally
g.Free;
end;
finally
jp.Free;
end;
end;
```
|
To load a BLOB field into an image, you need to use TDataSet.CreateBlobStream.
```
var
Stream: TStream;
JPG: TJpegImage;
begin
JPG := TJpegImage.Create;
try
Stream := Qry.CreateBlobStream(Qry.FieldByName('BLOBVAL'), bmRead);
try
JPG.LoadFromStream(Stream);
finally
Stream.Free; // edited
end;
finally
JPG.Free;
end;
end;
```
To store the image back, you'll need to do the reverse:
```
var
Stream: TBlobStream;
Jpg: TJpegImage;
begin
Jpg := TJpegImage.Create;
try
Jpg.Assign(Image1.Picture.Graphic);
// Assign other query parameters here
Stream := Qry.CreateBlobStream(Qry.FieldByName('BLOBVAL'), bmWrite);
try
Jpg.SaveToStream(Stream);
Qry.ExecSQL;
finally
Stream.Free;
end;
finally
Jpg.Free;
end;
end;
```
`TDBImage` is only designed to work with bitmaps (when the field is `ftGraphic`), so it won't work with JPEG images directly. The easiest thing to do is to load the blob as a JPEG, and assign it to a standard `TImage.Picture.Graphic` in an event handler for the dataset (such as it's `AfterScroll` event).
|
Delphi load image save as blob in a sql database
|
[
"",
"sql",
"image",
"delphi",
"blob",
""
] |
I am very much confused about these GROUP BY & UNION statements.
Right now I am getting this output:

But I want my output to look like this:
```
Vendor Name Purchase Order Req Qty Inward Qty
----------------------------------------------------------------------
Philips Lighting PO20140918133011 350 350
(I) Pvt Ltd
```
**Requirement:**
Even if my vendor is null, I still have to group by on the basis of purchase order.
But in the output I am getting, this requirement is not satisfied. Is there any way that I can do a **group by only on the basis of purchase order**?
**Query:**
```
SELECT
vendor_name,
purchase_order_no,
SUM(ISNULL(InwardQty,0)) AS InwardQty,
SUM(ISNULL(ReqQty,0)) ReqQty
FROM
(
SELECT
vendor_name,
pm.purchase_order_no,
ISNULL(SUM(PIIM.qty),0) AS ReqQty,
0 InwardQty
FROM
RS_Purchase_Order_Master AS PM
LEFT OUTER JOIN RS_Purchase_Invoice_Info_Master AS PIIM ON PIIM.purchase_order_no = PM.purchase_order_no
LEFT OUTER JOIN RS_Vendor_Master AS VM ON VM.vendor_id = PM.vendor_id
WHERE
IsPicreated = 1
GROUP BY
vendor_name,
pm.purchase_order_no
UNION
SELECT
NULL AS vendor_name,
purchase_order_no,
0 AS ReqQty,
ISNULL(SUM(qty),0) AS InwardQty
FROM
RS_GIN_Master
GROUP BY
purchase_order_no
)
AS A
GROUP BY
vendor_name,
purchase_order_no
```
Any help or suggestion will appreciated.
Thank you in advance
|
You don't need to create a temp table. Just remove vendor\_name from the GROUP BY, and add MAX(vendor\_name).
```
SELECT
MAX(vendor_name) AS vendor_name,
purchase_order_no,
SUM(ISNULL(InwardQty,0)) AS InwardQty,
SUM(ISNULL(ReqQty,0)) AS ReqQty
FROM
(
SELECT
vendor_name,
pm.purchase_order_no,
ISNULL(SUM(PIIM.qty),0) AS ReqQty,
0 InwardQty
FROM
RS_Purchase_Order_Master AS PM
LEFT OUTER JOIN RS_Purchase_Invoice_Info_Master AS PIIM ON PIIM.purchase_order_no = PM.purchase_order_no
LEFT OUTER JOIN RS_Vendor_Master AS VM ON VM.vendor_id = PM.vendor_id
WHERE
IsPicreated = 1
GROUP BY
vendor_name,
pm.purchase_order_no
UNION
SELECT
NULL AS vendor_name,
purchase_order_no,
0 AS ReqQty,
ISNULL(SUM(qty),0) AS InwardQty
FROM
RS_GIN_Master
GROUP BY
purchase_order_no
)
AS A
GROUP BY
purchase_order_no
```
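The reason this works is that aggregate functions like MAX() skip NULLs, so the non-NULL vendor name from the first branch of the UNION survives the grouping. A tiny sqlite3 demo of just that trick (two pre-aggregated rows standing in for the UNION's output):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t (vendor_name TEXT, po TEXT, req INTEGER, inward INTEGER);
    INSERT INTO t VALUES
        ('Philips Lighting (I) Pvt Ltd', 'PO20140918133011', 350, 0),
        (NULL,                           'PO20140918133011', 0,   350);
""")

# MAX() ignores the NULL vendor_name, collapsing both rows into one
row = con.execute("""
    SELECT MAX(vendor_name), po, SUM(req), SUM(inward)
    FROM t GROUP BY po
""").fetchone()
```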
|
I don't know if this is the best solution but I would create a temp table (@a):
```
declare @a table (
purchase_order_no varchar(100),
TotalReqQty int,
TotalInwardQty int )
```
insert the result of:
```
SELECT purchase_order_no,
SUM(ReqQty),
SUM(InwardQty)
FROM table
group by purchase_order_no
```
And after that I would use inner join to get the vendor using something like this:
```
SELECT T.Vendor_Name,
A.purchase_order_no,
A.TotalReqQty,
A.TotalInwardQty
FROM @a a
inner join
( SELECT DISTINCT Vendor_Name, purchase_order_no
FROM table WHERE Vendor_Name IS NOT NULL ) T
ON A.purchase_order_no = T.purchase_order_no
```
|
Group by & Union In Sql
|
[
"",
"sql",
"sql-server",
"group-by",
"sql-server-2012",
"union-all",
""
] |
I have an old query that was using the `*=` operator. Right now, the query has a where clause like below:
```
Table1.Column1 *= Table2.Column1
AND Table1.Column2 *= Table3.Column1
if (some conditions in C# script) //this whole clause is generated by C# function based on different conditions
AND Table1.Column3 *= Table4.Column1
```
I have to rewrite it to use a left join, because well, we are not dinosaurs anymore, and are moving to SQL Server 2014 (from SQL Server 2000). Anyway, I have rewritten the query like this:
```
From Table1
Left Join Table2 On Table1.Column1 = Table2.Column1
Left Join Table3 On Table1.Column2 = Table3.Column1
Left Join Table4 On Table1.Column3 = Table4.Column1
```
I believe this should provide me the same resultset, but it does not. Clearly SQL Server is not following the same join order in both cases. So, I have to figure out the exact order the old query is following, and how to recreate the same order.
P.S. I don't have much understanding of the code. But I can post the complete function here, in case it helps someone understand the situation better.
Edit:
The exact query builder function, I am using.
```
public virtual FANUC.Common.BaseClasses.Row[] GetCustomersForPopup( FANUC.Common.BaseClasses.Row objListCustomerFilter, FANUC.Common.BaseClasses.PagingEventArgs e ) {
string strConnector = " WHERE ";
string strANDClause = "";
string strSQLQuery = " SELECT "
+ " TBL_Company_Master.CMPM_Company_ID,"
+ " TBL_Company_Master.CMPM_Company_Name,"
+ " " + ( ( FANUCUser )Thread.CurrentPrincipal.Identity ).DBUser + ".fnGetRefCodeValue( CMPM_Company_Type_ID ) AS CMPM_CompanyTypeID,"
+ " TBL_Company_Master.CMPM_Company_NickName,"
+ " TBL_Company_Master.CMPM_Service_Center_ID,"
+ " TBL_Company_Master.CMPM_Company_BranchName,"
+ " TBL_Company_Master.CMPM_Black_Listed_Flag,"
+ " TBL_Company_Master.CMPM_Prohibited_Company_Flag,"
+ " " + ( ( FANUCUser )Thread.CurrentPrincipal.Identity ).DBUser + ".fnGetRefCodeValue( TBL_Company_Master.CMPM_Status ) AS CMPM_Status,"
+ " TBL_Company_Master.CMPM_City_Location_ID AS CMPM_City_Location_ID,"
+ " TBL_City_Location_Master.CLIM_City_Name AS CLIM_City_Name, "
+ " TBL_Company_Master.CMPM_Country_ID AS CMPM_Country_ID,"
+ " TBL_Country_Master.CRIM_CountryName, "
+ " TBL_Company_Master.CMPM_Night_Call_Applicable_flag,"
+ " TBL_Company_Master.CMPM_Default_currency_for_transaction,"
+ " TBL_Company_Master.CMPM_Telephone_No, "
+ " TBL_Customer_Contact_Master.CNTM_ContactPersonName, "
+ " TBL_Customer_Contact_Master.CNTM_Section_Name, "
+ " TBL_Company_Master.Use_Count, "
+ " TBL_Company_Master.CMPM_Self_Company_Indicator, "
+ " TBL_Company_Master.CMPM_Transport_Time ";
string strFromClause = " FROM TBL_Company_Master, "
+ " TBL_Service_Center_Master, "
+ " TBL_City_Location_Master, "
+ " TBL_Country_Master, "
+ " TBL_Customer_Contact_Master";
strANDClause += " AND TBL_Company_Master.CMPM_Service_Center_ID *= TBL_Service_Center_Master.SCRM_Service_Center_ID "
+ " AND TBL_Company_Master.CMPM_City_Location_ID *= TBL_City_Location_Master.CLIM_City_ID "
+ " AND TBL_Company_Master.CMPM_Country_ID *= TBL_Country_Master.CRIM_CountryID ";
if ( objListCustomerFilter[ Constants.IS_CALLING_CUSTOMER ] != null || objListCustomerFilter[ Constants.IS_PAYEE_CUSTOMER ] != null || Convert.ToInt32( objListCustomerFilter[ "CUTM_Customer_Type_ID" ] ) == 120 )
strANDClause += " AND TBL_Company_Master.CMPM_Company_ID *= TBL_Customer_Contact_Master.CNTM_Customer_ID ";
else
strANDClause += " AND TBL_Company_Master.CMPM_Company_ID = TBL_Customer_Contact_Master.CNTM_Customer_ID " ;
strANDClause += " AND TBL_Customer_Contact_Master.CNTM_Default_Flag = 'Y' ";
strANDClause += " AND CMPM_Active_Flag != 'N'";
if ( objListCustomerFilter["CUTM_Customer_Type_ID"] != null && Convert.ToString(objListCustomerFilter["CUTM_Customer_Type_ID"]) != "" ) {
strFromClause += " ,TBL_Customer_Type_Mapping ";
strANDClause += " AND CUTM_Customer_ID = CMPM_Company_ID " + " AND CUTM_Customer_Type_ID = "+Convert.ToString(objListCustomerFilter["CUTM_Customer_Type_ID"]);
}
if ( objListCustomerFilter["CMPM_Company_Type_ID"] != null && Convert.ToString(objListCustomerFilter["CMPM_Company_Type_ID"]) != "" && Convert.ToString(objListCustomerFilter["CMPM_Company_Type_ID"]) != Constants.ALL ) {
strANDClause += " AND CMPM_Company_Type_ID IN ("+Convert.ToString(objListCustomerFilter["CMPM_Company_Type_ID"])+","+Constants.COMPANY_TYPE_BOTH+") ";
}
if ( !Convert.ToString( objListCustomerFilter[ Constants.PAYMENT_REQD ] ).Equals(Constants.CONST_NO ) ) {
strSQLQuery += ", TBL_Company_Payment_Terms.CMPT_Payment_Term_Description "
+ ", TBL_Company_Payment_Terms.CMPT_Payment_Term_ID ";
strFromClause += " ,TBL_Company_Payment_Terms ";
if((objListCustomerFilter[Constants.IS_CALLING_CUSTOMER] != null) ||(objListCustomerFilter[Constants.IS_END_USER] != null) )
strANDClause += " AND TBL_Company_Master.CMPM_Company_ID *= TBL_Company_Payment_Terms.CMPT_Company_ID "
+ " AND TBL_Company_Payment_Terms.CMPT_Default = 'Y' ";
else
strANDClause += " AND TBL_Company_Master.CMPM_Company_ID = TBL_Company_Payment_Terms.CMPT_Company_ID "
+ " AND TBL_Company_Payment_Terms.CMPT_Default = 'Y' ";
if ( objListCustomerFilter[ "CMPM_Company_Type_ID" ] != null && Convert.ToString( objListCustomerFilter[ "CMPM_Company_Type_ID" ] ) != Constants.COMPANY_TYPE_BOTH && Convert.ToString( objListCustomerFilter[ "CMPM_Company_Type_ID" ] ) != Constants.ALL )
strANDClause += " AND CMPT_Company_Type_ID = " + Convert.ToString( objListCustomerFilter[ "CMPM_Company_Type_ID" ] );
}
strANDClause += " AND CMPM_Subsidiary_Code = '"+((FANUCUser)Thread.CurrentPrincipal.Identity).SubsidiaryCode+"'";
Row objFilter = new Row();
objFilter["CMPM_Company_ID"] = objListCustomerFilter["CMPM_Company_ID"];
objFilter["CMPM_Black_Listed_Flag"] = objListCustomerFilter["CMPM_Black_Listed_Flag"];
objFilter["CMPM_Prohibited_Company_Flag"] = objListCustomerFilter["CMPM_Prohibited_Company_Flag"];
objFilter["CMPM_Status"] = objListCustomerFilter["CMPM_Status"];
objFilter["CMPM_Company_Name~like"] = objListCustomerFilter["CMPM_Company_Name"];
objFilter["CMPM_Company_NickName~like"] = objListCustomerFilter["CMPM_Company_NickName"];
objFilter["CMPM_Telephone_No~like"] = objListCustomerFilter["CMPM_Telephone_No"];
objFilter["CMPM_FAX_No"] = objListCustomerFilter["CMPM_FAX_No"];
objFilter["CMPM_Service_Center_ID"] = objListCustomerFilter["CMPM_Service_Center_ID"];
objFilter["CMPM_Billing_Company_ID"] = objListCustomerFilter["CMPM_Billing_Company_ID"];
objFilter["CMPM_Shipping_Company_ID"] = objListCustomerFilter["CMPM_Shipping_Company_ID"];
objFilter["CMPM_City_Location_ID"] = objListCustomerFilter["CMPM_City_Location_ID"];
objFilter["CMPM_State_ID"] = objListCustomerFilter["CMPM_State_ID"];
objFilter["CMPM_Country_ID"] = objListCustomerFilter["CMPM_Country_ID"];
objFilter["CMPM_Grp_Parent_Company_ID"] = objListCustomerFilter["CMPM_Grp_Parent_Company_ID"];
objFilter["CMPM_Night_Call_Applicable_Flag"] = objListCustomerFilter["CMPM_Night_Call_Applicable_Flag"];
objFilter["CMPM_Default_currency_for_transaction"] = objListCustomerFilter["CMPM_Default_currency_for_transaction"];
objFilter["CMPM_Company_local_registration_No~like"] = objListCustomerFilter["CMPM_Company_local_registration_No"];
objFilter["CMPM_Company_central_registration_No~like"] = objListCustomerFilter["CMPM_Company_central_registration_No"];
objFilter["CMPM_Insurance_Policy_No~like"] = objListCustomerFilter["CMPM_Insurance_Policy_No"];
objFilter["CMPM_Active_Flag"] = objListCustomerFilter["CMPM_Active_Flag"];
objFilter["CMPM_Company_BranchName~like"] = objListCustomerFilter["CMPM_Company_BranchName"];
objFilter["CMPM_Company_BranchName_LocalLanguage~like"] = objListCustomerFilter["CMPM_Company_BranchName_LocalLanguage"];
objFilter["CMPM_Postal_Code"] = objListCustomerFilter["CMPM_Postal_Code"];
objFilter["CMPM_Web_Site~like"] = objListCustomerFilter["CMPM_Web_Site"];
objFilter["CMPM_Distance"] = objListCustomerFilter["CMPM_Distance"];
if ( objListCustomerFilter["CMPM_Self_Company_Indicator"] != null && Convert.ToString(objListCustomerFilter["CMPM_Self_Company_Indicator"]) != Constants.ALL )
objFilter[ "CMPM_Self_Company_Indicator" ] = objListCustomerFilter["CMPM_Self_Company_Indicator"];
CommonBQ objCommonBQ = new CommonBQ();
string strSearchClause = objCommonBQ.CreateFilter( objFilter );
string strFinalString = "";
if ( !strSearchClause.Equals( "" ) ) strFinalString = strSQLQuery + strFromClause + strConnector + strSearchClause + strANDClause;
else {
strSQLQuery += strFromClause + strConnector + strANDClause;
int iFirstPos = strSQLQuery.IndexOf( "AND", 0 );
string strFirstPart = strSQLQuery.Substring( 0, iFirstPos );
string strSecondPart = strSQLQuery.Substring( iFirstPos + 3, strSQLQuery.Length - iFirstPos - 3 );
strFinalString = strFirstPart + strSecondPart;
}
return GetRows( strFinalString, CreateParameterArray( objListCustomerFilter ), CommandType.Text, null, e );
}
```
|
There are a few things you should update in this query:
1. Use table aliases in the select clause instead of complete table names.
2. `TBL_Customer_Contact_Master` is being joined based on this condition:
```
objListCustomerFilter[ Constants.IS_CALLING_CUSTOMER ] != null ||
objListCustomerFilter[ Constants.IS_PAYEE_CUSTOMER ] != null ||
Convert.ToInt32( objListCustomerFilter[ "CUTM_Customer_Type_ID" ] ) == 120
```
If this holds true then it is a `Left Join`, else an `Inner Join`.
3. So update the statements as:
```
string strFromClause =
    " FROM TBL_Company_Master TCM " +
    " Left Join TBL_Service_Center_Master TSC on TCM.CMPM_Service_Center_ID = TSC.SCRM_Service_Center_ID " +
    " Left Join TBL_City_Location_Master TCL on TCM.CMPM_City_Location_ID = TCL.CLIM_City_ID " +
    " Left Join TBL_Country_Master TC on TCM.CMPM_Country_ID = TC.CRIM_CountryID ";
```
4. Update condition 1 as:
```
if ( objListCustomerFilter[ Constants.IS_CALLING_CUSTOMER ] != null ||
     objListCustomerFilter[ Constants.IS_PAYEE_CUSTOMER ] != null ||
     Convert.ToInt32( objListCustomerFilter[ "CUTM_Customer_Type_ID" ] ) == 120 )
    strFromClause += " Left join TBL_Customer_Contact_Master TCCM on TCM.CMPM_Company_ID = TCCM.CNTM_Customer_ID ";
else
    strFromClause += " Inner join TBL_Customer_Contact_Master TCCM on TCM.CMPM_Company_ID = TCCM.CNTM_Customer_ID ";
```
5. Then update condition 2 as:
```
if ( objListCustomerFilter["CUTM_Customer_Type_ID"] != null &&
     Convert.ToString(objListCustomerFilter["CUTM_Customer_Type_ID"]) != "" ) {
    strFromClause += " Left join TBL_Customer_Type_Mapping on CUTM_Customer_ID = CMPM_Company_ID AND CUTM_Customer_Type_ID = " + Convert.ToString(objListCustomerFilter["CUTM_Customer_Type_ID"]);
}
```
6. And condition 3 as:
```
if ( (objListCustomerFilter[Constants.IS_CALLING_CUSTOMER] != null) ||
     (objListCustomerFilter[Constants.IS_END_USER] != null) )
    strFromClause += " Left Join TBL_Company_Payment_Terms TCPT On TCM.CMPM_Company_ID = TCPT.CMPT_Company_ID AND TCPT.CMPT_Default = 'Y' ";
else
    strFromClause += " Inner Join TBL_Company_Payment_Terms TCPT On TCM.CMPM_Company_ID = TCPT.CMPT_Company_ID AND TCPT.CMPT_Default = 'Y' ";
```
I might have missed a few commas and semicolons here and there, but it should give you an idea where things might be missing. Hope this helps!
|
The difference is in how additional `where` clause filters are applied for columns in the outer joined tables.
With this:
```
select *
from a
left outer join b on a.id = b.id
where
b.other_col = 'test'
```
The result will contain only rows where the row in `b` was found and the `other_col` column in `b` has the value `test`.
Comparing to this:
```
select *
from a, b
where
a.id *= b.id
and b.other_col = 'test'
```
This will find all rows in `a`. And it will include the columns from `b` for rows where `b.other_col = 'test'`.
So taking the second query and converting it to one with a left join, one of the following would give the same output:
```
-- 1.
select *
from a
left outer join b on a.id = b.id and b.other_col = 'test'
-- 2.
select *
from a
left outer join
(
select *
from b
where other_col = 'test'
) as b on a.id = b.id
```
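The difference between the two filter placements can be checked with a quick script. This sketch uses SQLite via Python's sqlite3 module, with invented contents for `a` and `b`:

```python
import sqlite3

# Demonstrate: a filter on the outer table's columns in WHERE silently turns
# the left join into an inner join, while the same filter in ON keeps all
# rows from `a` (table contents are made up for the demo).
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE a (id INTEGER);
    CREATE TABLE b (id INTEGER, other_col TEXT);
    INSERT INTO a VALUES (1), (2), (3);
    INSERT INTO b VALUES (1, 'test'), (2, 'nope');
""")

# Filter in WHERE: only rows where b was found AND matched survive.
where_rows = con.execute("""
    SELECT a.id, b.other_col
    FROM a LEFT OUTER JOIN b ON a.id = b.id
    WHERE b.other_col = 'test'
""").fetchall()

# Filter in ON: every row of `a` survives; b's columns are NULL when the
# extra condition fails.
on_rows = con.execute("""
    SELECT a.id, b.other_col
    FROM a LEFT OUTER JOIN b ON a.id = b.id AND b.other_col = 'test'
""").fetchall()

print(where_rows)  # [(1, 'test')]
print(on_rows)     # [(1, 'test'), (2, None), (3, None)]
```

The WHERE form drops the unmatched rows of `a`; the ON form keeps them with NULLs, which matches the `*=` behavior described above.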
|
Understanding join order for 3 or more tables
|
[
"",
"sql",
"sql-server",
"sql-server-2014",
""
] |
I have a stored procedure in which I am trying to conditionally select a specific table depending on a provided `tinyint` parameter. This is my code:
```
ALTER PROCEDURE [dbo].[GetAllStuff]
@countryId tinyint
AS
BEGIN
SET NOCOUNT ON;
DECLARE @marketSpecific TABLE
SET @marketSpecific =
(CASE (@countryId)
WHEN 1 THEN test_sv
WHEN 2 THEN test_dk
WHEN 3 THEN test_no
WHEN 4 THEN test_ge
ELSE test_sv
END)
[..]
END
```
Unfortunately this, does not work. I get the following errors:
> 'Must declare scalar variable @marketSpecific
>
> Invalid column name test\_sv
Obviously I have misunderstood how CASE works. Can anyone provide any insight?
|
You can write a dynamic query as:
```
create PROCEDURE [dbo].[GetAllStuff]
@countryId tinyint
AS
BEGIN
SET NOCOUNT ON
DECLARE @IntVariable int
DECLARE @SQLString nvarchar(500)
DECLARE @marketSpecific varchar(100)
/* Assign parameter value to local variable to avoid parameter sniffing */
SET @IntVariable = @countryId
SET @marketSpecific = (CASE (@IntVariable)
WHEN 1 THEN 'test_sv'
WHEN 2 THEN 'test_dk'
WHEN 3 THEN 'test_no'
WHEN 4 THEN 'test_ge'
ELSE 'test_sv'
END)
/* Build the SQL string one time.*/
SET @SQLString =
N'SELECT *
FROM '+ @marketSpecific
EXECUTE sp_executesql @SQLString
END
```
`Demo`
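The same idea can be sketched outside T-SQL. Because a table name cannot be a bound parameter, the country id is mapped to a name from a fixed whitelist and spliced into the SQL text; the whitelist is what keeps the dynamic SQL safe. Table names and contents here are invented for the demo (SQLite stands in for SQL Server):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE test_sv (val TEXT); INSERT INTO test_sv VALUES ('sweden');
    CREATE TABLE test_dk (val TEXT); INSERT INTO test_dk VALUES ('denmark');
""")

def get_all_stuff(country_id):
    # Whitelist lookup, mirroring the CASE in the stored procedure;
    # unknown ids fall back to test_sv like the ELSE branch.
    tables = {1: "test_sv", 2: "test_dk"}
    table = tables.get(country_id, "test_sv")
    return con.execute(f"SELECT val FROM {table}").fetchall()

print(get_all_stuff(2))   # [('denmark',)]
print(get_all_stuff(99))  # [('sweden',)]
```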
|
You can't choose a table to use in a CASE statement. You could do something like this:
```
IF @countryId = 1
SELECT * FROM test_sv
ELSE IF @countryId = 2
SELECT * FROM test_dk
ELSE IF @countryId = 3
SELECT * FROM test_no
... etc.
```
But this method might produce some odd query plans and affect performance. One alternative would be to create a view that UNIONs all the tables together (provided they have the same structure):
```
CREATE VIEW test_all
AS
SELECT 1 AS table_type, col1, col2, col3 FROM test_sv
UNION
SELECT 2 AS table_type, col1, col2, col3 FROM test_sdk
UNION
SELECT 3 AS table_type, col1, col2, col3 FROM test_no
UNION
... etc.
```
Now you can query that view in your stored procedure like this:
```
ALTER PROCEDURE [dbo].[GetAllStuff]
@countryId tinyint
AS
BEGIN
SELECT * FROM test_all WHERE table_type = @countryId
```
|
Conditionally select table by CASE
|
[
"",
"sql",
"sql-server",
"t-sql",
"stored-procedures",
""
] |
I have a table with a field `category` that contains entries like the following:
```
germany_bundesliga
germany_2nd_bundesliga
england_premier_leauge
england_championship
spain_liga
spain_liga_adelante
```
and so on...
i would like to get this output:
```
germany
england
spain
...
```
Is it possible with an SQL query?
|
This is what you want right?
```
select distinct SUBSTRING_INDEX(value,'_',1) from category;
```
[SQLFIDDLE](http://sqlfiddle.com/#!2/8a48aa/22)
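`SUBSTRING_INDEX` is MySQL-specific, but the same "everything before the first underscore" result can be reproduced elsewhere; this sketch uses SQLite's `substr()` + `instr()` with a subset of the question's data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE category (value TEXT)")
con.executemany("INSERT INTO category VALUES (?)", [
    ("germany_bundesliga",), ("germany_2nd_bundesliga",),
    ("england_premier_leauge",), ("spain_liga",),
])
# substr(value, 1, instr(value, '_') - 1) takes the prefix before the
# first '_', equivalent to SUBSTRING_INDEX(value, '_', 1) for this data.
rows = con.execute("""
    SELECT DISTINCT substr(value, 1, instr(value, '_') - 1) FROM category
""").fetchall()
print(sorted(rows))  # [('england',), ('germany',), ('spain',)]
```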
|
Something to think about...
```
SELECT SUBSTRING_INDEX('germany_2nd_bundesliga','_',1);
```
|
Make SQL query that rename output entries
|
[
"",
"mysql",
"sql",
"substring",
""
] |
I need to delete a few records which I get from a subquery.
Let's say that the `EMP` table has columns `EMPNAME`, `EMPSALARY` and the primary key is a combination of `EMPNAME,EMPSALARY`.
```
delete from emp
where exists (
select *
from EMP ***** //query which fetch few records from EMP table);
```
The above query is not working.
Thanks for the help.
|
Your query would look something like
```
delete from emp
from emp e
where exists (select *
from EMP
WHERE e.EMPNAME = EMPNAME
AND e.EMPSALARY = EMPSALARY
AND <another Condition>) --<-- Condition on which you want to delete rows
```
|
you can directly give condition in where clause
```
delete from emp where <conditions>;
```
else you can do in this way
```
DELETE FROM Table1 T1
WHERE EXISTS (SELECT column1 FROM table2 T2
WHERE T1.column1 = T2.column1
AND T1.Column2 = T2.column2 );
```
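The correlated `EXISTS` form can be checked quickly with SQLite; the table contents below are invented for the demo:

```python
import sqlite3

# Delete rows of table1 that have a matching (column1, column2) pair in
# table2; only fully matching rows are removed.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table1 (column1 INTEGER, column2 INTEGER);
    CREATE TABLE table2 (column1 INTEGER, column2 INTEGER);
    INSERT INTO table1 VALUES (1, 10), (2, 20), (3, 30);
    INSERT INTO table2 VALUES (2, 20), (3, 99);
""")
con.execute("""
    DELETE FROM table1
    WHERE EXISTS (SELECT 1 FROM table2 T2
                  WHERE table1.column1 = T2.column1
                    AND table1.column2 = T2.column2)
""")
remaining = con.execute("SELECT * FROM table1 ORDER BY column1").fetchall()
print(remaining)  # [(1, 10), (3, 30)] -- (3, 30) stays: table2 has (3, 99)
```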
|
delete records which are selected from select query on the same table
|
[
"",
"sql",
"sql-server",
"oracle",
"oracle-sqldeveloper",
""
] |
I get a lot of database information from clients in excel spreadsheets. I frequently need to insert/update this data back into the database.
I often use excel to generate the insert and update statements via concatenating a bunch of cells together. Sometimes the data includes text cells which can have single quotes in them. If not dealt with carefully, these single quotes will make the SQL statements break.
How can I escape single quotes in text data, via formulas, when concatenating a bunch of cell values together, so that my resulting SQL scripts are valid?
|
The best solution I have found is to use the following:
```
=SUBSTITUTE(A1, "'", "''")
```
This will replace all single quotes with two single quotes which, in T-SQL statements escapes the character and treats it as string data.
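The same doubling rule can be applied outside Excel. This sketch builds an INSERT literal the way the spreadsheet formula would, and also shows the safer parameterized alternative (SQLite stands in for the target database; names are invented):

```python
import sqlite3

raw = "O'Brien"
escaped = raw.replace("'", "''")  # what =SUBSTITUTE(A1, "'", "''") does
stmt = f"INSERT INTO people (name) VALUES ('{escaped}')"

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE people (name TEXT)")
con.execute(stmt)  # the generated literal parses fine despite the quote
con.execute("INSERT INTO people (name) VALUES (?)", (raw,))  # preferred
names = [r[0] for r in con.execute("SELECT name FROM people")]
print(names)  # ["O'Brien", "O'Brien"]
```

For scripts generated by hand from spreadsheets the doubling trick is fine; for application code, parameterized queries avoid the problem entirely.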
|
Needed to do something similar so I could "escape" double quotes:
```
=CONCATENATE(CHAR(34),"hello",CHAR(34))
```
Result:
[](https://i.stack.imgur.com/iaGTH.png)
|
Excel escaping quotes or apostrophes in cell values
|
[
"",
"sql",
"excel",
"replace",
"apostrophe",
"substitution",
""
] |
I am brand new to SQL; my company just kind of threw me headlong into this and said do it. So any help is greatly appreciated. I am trying to get a date to come out in the format mm/dd/yyyy hh:mm:ss AM/PM, so for example a date of 09/26/2014 11:04:54 AM. I have tried using the code:
```
Select Convert(nvarchar,EntryDate,101)
From DB1
```
However that returns just 09/26/2014.
I also tried
```
Select Convert(nvarchar,EntryDate,100)
From DB1
```
but this returns Sep 26 2014 11:04AM
Not sure where to go from here. Again thanks for the help. BTW I am using SQL Server 2012.
|
Since you're on SQL 2012 the format function should work:
```
declare @date datetime = '2014-09-26 11:04:54'
select FORMAT(@date,'MM/dd/yyyy hh:mm:s tt')
```
result: `09/26/2014 11:04:54 AM`
In your case it would be:
```
Select FORMAT(EntryDate,'MM/dd/yyyy hh:mm:s tt')
From DB1
```
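For comparison outside SQL Server, the same mm/dd/yyyy hh:mm:ss AM/PM rendering with Python's `strftime`, using the example timestamp from the answer:

```python
from datetime import datetime

# %I is the 12-hour clock, %p the AM/PM designator -- the counterparts of
# 'hh' and 'tt' in the T-SQL FORMAT pattern.
d = datetime(2014, 9, 26, 11, 4, 54)
s = d.strftime("%m/%d/%Y %I:%M:%S %p")
print(s)  # 09/26/2014 11:04:54 AM
```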
|
```
DECLARE @Date_Value DATETIME = GETDATE();
SELECT CONVERT(VARCHAR(10), @Date_Value, 101) + ' '
+ LTRIM(RIGHT(CONVERT(CHAR(20), @Date_Value, 22), 11))
RESULT: 09/26/2014 5:25:53 PM
```
## Your Query
```
SELECT CONVERT(VARCHAR(10), EntryDate, 101) + ' '
+ LTRIM(RIGHT(CONVERT(CHAR(20), EntryDate, 22), 11))
From DB1
```
|
Date format returned as mm/dd/yyyy hh:mm:ss AM/PM
|
[
"",
"sql",
"date",
"datetime",
"sql-server-2012",
""
] |
I have a MySQL table with a date of birth column, and I want to query rows based on age as of the current date. The following is a query on the table with a date range:
```
SELECT * FROM `user` WHERE `dob` > '1980-01-20' AND `dob` < '1990-01-20';
```
How can I convert this query to search by an age range relative to the current date on the same table?
|
You could use the `year` function to extract the year part of the current date. So, for example, if you're looking for users between the ages of 24 and 34:
```
SELECT *
FROM `user`
WHERE (YEAR(NOW()) - YEAR(`dob`)) BETWEEN 24 AND 34
```
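Note that subtracting years only approximates age: someone born late in the year is counted a year older before their birthday. This sketch shows that with SQLite's `strftime` standing in for MySQL's `YEAR()`, and a fixed "today" to keep it deterministic:

```python
import sqlite3
from datetime import date

today = date(2014, 9, 26)
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE user (name TEXT, dob TEXT)")
con.executemany("INSERT INTO user VALUES (?, ?)", [
    ("early", "1990-01-01"),   # already 24 on 2014-09-26
    ("late",  "1990-12-31"),   # still 23 on 2014-09-26, yet selected
])
rows = con.execute("""
    SELECT name FROM user
    WHERE CAST(strftime('%Y', ?) AS INTEGER)
        - CAST(strftime('%Y', dob) AS INTEGER) BETWEEN 24 AND 34
""", (today.isoformat(),)).fetchall()
print(sorted(rows))  # [('early',), ('late',)] -- both pass the year test
```

If exact ages matter, compare `dob` against computed cutoff dates instead of subtracting years.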
|
Here is the query you're looking for:
```
SELECT U.*
       ,DATE_FORMAT(FROM_DAYS(DATEDIFF(NOW(),U.dob)), '%Y')+0 AS age
FROM `user` U
WHERE DATE_FORMAT(FROM_DAYS(DATEDIFF(NOW(),U.dob)), '%Y')+0 BETWEEN 20 AND 30
```
In this example, you'll get every user with an age in the range `20-30`.
Hope this will help you
|
MySQL query by age range on a table which have date of birth
|
[
"",
"mysql",
"sql",
"select",
"where-clause",
""
] |
I have the following three tables representing a tree structure. Every row in `#A` is an ancestor of zero or more rows in `#B`. Similarly, every row in `#B` is an ancestor of zero or more rows in `#C`. Table `#B` contains a column `value`. I need to find the sum of `value` for all rows in `#B` that belong to an input ancestor.
For example, consider following content of tables:
```
CREATE TABLE #A (id varchar(10));
CREATE TABLE #B (id varchar(10), value int);
CREATE TABLE #C (id varchar(10), a_id varchar(10), b_id varchar(10));
INSERT INTO #A(id) VALUES ('A1'), ('A2');
INSERT INTO #B(id, value) VALUES('B1', 41), ('B2', 43), ('B3', 47);
INSERT INTO #C(id, a_id, b_id) VALUES('C1', 'A1', 'B1'), ('C2', 'A1', 'B1'),
('C3', 'A1', 'B2'), ('C4', 'A2', 'B3');
```
The above content represents following structure:
```
A1
|--- B1 (41)
| |-------- C1
| |-------- C2
|
|--- B2 (43)
|-------- C3
A2
|--- B3 (47)
|-------- C4
```
The parent-child relationship is weirdly defined. Table `#B` does not have its own column that says which row in table `#A` is its ancestor. All the mappings should be evaluated from table `#C`. Columns `a_id` and `b_id` in table `#C` designate grandparent and parent rows in table `#A` and `#B` respectively. If there is a row `Z` in `#C` where `a_id` is `X` and `b_id` is `Y`, then `X` is the ancestor of `Y` and `Y` is ancestor of `Z`. There will not be conflicting mappings in `#C`.
Problem Statement: For given id `A1`, find the sum of column `value` for all rows in `#B` whose parent is `A1`. Here there are two children of `A1`, `B1` with value `41` and `B2` with value `43` so we expect answer to be `84`.
If I do something like below:
```
SELECT SUM(#B.value) FROM #B
INNER JOIN #C ON #B.id = #C.b_id
INNER JOIN #A ON #C.a_id = #A.id
WHERE #A.id = 'A1'
```
I get `125` i.e. `41 + 41 + 43` instead of `84`, since two rows in `#C` carry the mapping `A1 -> B1`. I can write the query below to get values associated with distinct rows in `#B` i.e. `41` and `43`, but now I do not know how to sum the resultant values. Can I get the expected result without creating a temporary table?
```
SELECT MAX(#B.value) FROM #B
INNER JOIN #C ON #B.id = #C.b_id
INNER JOIN #A ON #C.a_id = #A.id
WHERE #A.id = 'A1'
GROUP BY #B.id;
```
I am not a SQL expert, so probably there might be a very simple solution to this.
|
You don't need table #A here, because the IDs are in table #C and the values in table #B. That is all you need. No need to join either. Simply select the IDs needed from #C, then use them to select from #B.
```
select sum(value)
from #B
where id in
(
select b_id
from #C
where a_id = 'A1'
);
```
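Loading the question's own data confirms the expected 84; each `#B` row is counted once, not once per matching `#C` row. (SQLite has no `#temp` tables, so the names are shortened to `B` and `C` here.)

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE B (id TEXT, value INTEGER);
    CREATE TABLE C (id TEXT, a_id TEXT, b_id TEXT);
    INSERT INTO B VALUES ('B1', 41), ('B2', 43), ('B3', 47);
    INSERT INTO C VALUES ('C1','A1','B1'), ('C2','A1','B1'),
                         ('C3','A1','B2'), ('C4','A2','B3');
""")
# IN deduplicates: B1 appears twice in C for A1 but is summed only once.
total = con.execute("""
    SELECT SUM(value) FROM B
    WHERE id IN (SELECT b_id FROM C WHERE a_id = 'A1')
""").fetchone()[0]
print(total)  # 84
```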
|
You could do this:
```
SELECT SUM(#B.value)
FROM #B
WHERE EXISTS
(
SELECT NULL FROM #C
INNER JOIN #A ON #C.a_id = #A.id
WHERE #B.id = #C.b_id
AND #A.id = 'A1'
)
```
Then you will only sum up the `#B` values where they exists in the other tables
The result will be: 84
|
Summing the distinct elements of query result
|
[
"",
"sql",
"t-sql",
""
] |
I have two tables
```
EMPLOYEE (Fname, Lname, Ssn, Salary, Dno)
DEPARTMENT (Dname, Dno, Location)
```
I want to list the names of all employees making the least in their department
I have come up with this
```
select min(E.Salary) from EMPLOYEE E group by E.Dno;
```
but how do I join the EMPLOYEE table with it and display the 'Fname' and 'Lname';
|
Analytic functions would be best but this would also work:
```
select *
from employee e
where salary = (select min(x.salary) from employee x where x.dno = e.dno)
```
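A sketch of this correlated-subquery form against invented rows; note that ties on the department minimum all come back, which is usually what you want:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employee (fname TEXT, salary INTEGER, dno INTEGER)")
con.executemany("INSERT INTO employee VALUES (?, ?, ?)", [
    ("Ann", 900, 1), ("Bob", 1200, 1), ("Cid", 700, 2), ("Dee", 700, 2),
])
# For each employee, compare their salary to the minimum in their department.
rows = con.execute("""
    SELECT fname FROM employee e
    WHERE salary = (SELECT MIN(x.salary) FROM employee x WHERE x.dno = e.dno)
""").fetchall()
print(sorted(rows))  # [('Ann',), ('Cid',), ('Dee',)] -- Cid and Dee tie
```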
|
A simple way to do it is to check that no one with a lower salary exists in the same department;
```
SELECT e1.*
FROM employee e1
WHERE NOT EXISTS(
SELECT 1 FROM employee e2 WHERE e1.dno = e2.dno AND e1.salary > e2.salary
);
```
[An SQLfiddle to test with](http://sqlfiddle.com/#!4/d456d/1).
|
Retrieve the names of employees who is making least in their departments
|
[
"",
"sql",
"oracle",
""
] |
I have this query
```
SELECT DISTINCT u.fbid, u.name,r.points
FROM users u, players_records r
WHERE u.fbid = r.user_id
ORDER BY r.points DESC LIMIT 5
```
I want to get the top players, but only 5 different players. This query is not working; it shows duplicated user ids.
Any help?
Result for the above query
```
1112222 Name 1 9310
3334444 Name 2 8380
3334444 Name 2 7010
5555666 Name 3 6080
1112222 Name 1 4890
```
so the ids are duplicated
|
It sounds like you want the maximum points per user. So you could do it something like this:
```
SELECT
users.fbid,
users.name,
maxRecords.points
FROM
users
JOIN
(
SELECT
MAX(players_records.points) AS points,
players_records.user_id
FROM
players_records
GROUP BY
players_records.user_id
) AS maxRecords
ON maxRecords.user_id=users.fbid
ORDER BY
maxRecords.points DESC
LIMIT 5
```
If I understand your data, then the output will be like this:
```
1112222 Name 1 9310
3334444 Name 2 8380
5555666 Name 3 6080
```
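The derived-table approach can be sketched with the question's sample points; one row per user comes back, carrying that user's best score (SQLite stands in for MySQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE users (fbid INTEGER, name TEXT);
    CREATE TABLE players_records (user_id INTEGER, points INTEGER);
    INSERT INTO users VALUES (1112222,'Name 1'), (3334444,'Name 2'),
                             (5555666,'Name 3');
    INSERT INTO players_records VALUES
        (1112222, 9310), (3334444, 8380), (3334444, 7010),
        (5555666, 6080), (1112222, 4890);
""")
# The inner GROUP BY collapses each user's records to a single max row
# before the join, so no user can appear twice in the top list.
rows = con.execute("""
    SELECT u.fbid, u.name, m.points
    FROM users u
    JOIN (SELECT user_id, MAX(points) AS points
          FROM players_records GROUP BY user_id) m
      ON m.user_id = u.fbid
    ORDER BY m.points DESC
    LIMIT 5
""").fetchall()
print(rows)
# [(1112222, 'Name 1', 9310), (3334444, 'Name 2', 8380), (5555666, 'Name 3', 6080)]
```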
|
You could just select max score and group by user.
```
SELECT u.fbid, u.name, MAX(r.points) max_points
FROM users u, player_records r
WHERE u.fbid = r.user_id
GROUP BY u.fbid, u.name
ORDER BY max_points DESC LIMIT 5
```
|
DISTINCT is not working in inner join
|
[
"",
"mysql",
"sql",
""
] |
I have this table

What I want to do is select the `attr_id` where a DISTINCT `name_en-GB` appears, and then SET that selected `attr_id` for that `name_en-GB`.
I can do this by writing individual queries but I want to know is there any way I can do this in one query?
|
I have finally done this
```
UPDATE sc_product_phrase
JOIN (
SELECT
`caption`,
min(design_phrase_id) AS minai,
required
FROM
sc_product_phrase WHERE LENGTH(`options`) < 1
GROUP BY
`caption`
) tt ON sc_product_phrase.`caption` = tt.`caption`
AND sc_product_phrase.required = tt.required
SET design_phrase_id = tt.minai
WHERE
sc_product_phrase.design_phrase_id <> tt.minai
AND LENGTH(
sc_product_phrase.`options`
) < 1
```
|
You can do
```
UPDATE table1
SET attr_id = CASE `name_en-GB` WHEN 'Bride Name' THEN 142
WHEN 'Child Grade' THEN 270 END
WHERE `name_en-GB` IN('Bride Name', 'Child Grade')
```
Here is **[SQLFiddle](http://sqlfiddle.com/#!2/d65eb/1)** demo
|
Issue with updating a table
|
[
"",
"mysql",
"sql",
""
] |
I have the following sql statement:
```
SELECT MODEL, COLOR, SUM(SALES)
FROM SALES
WHERE MODEL='chevy'
GROUP BY MODEL;
```
When I run this statement, I get an error message saying: `"sales.color" must appear in the GROUP BY clause or be used in an aggregate function`.
I don't understand why COLOR has to appear in Group by or used in aggregate function.
Thanks.
|
Color must appear in the group by because you are doing an aggregate select. Essentially the problem here is it doesn't know what rows to sum the sales across. You are selecting out the sum of sales for each model and the color of each model so you either need to remove the color from the selected columns if you want to know the sales for each model regardless of color, or add color to the group by if you want to know the sales for each color of each model.
In the first case:
```
SELECT MODEL, SUM(SALES)
FROM SALES
WHERE MODEL='chevy'
GROUP BY MODEL;
```
In the second case:
```
SELECT MODEL, COLOR, SUM(SALES)
FROM SALES
WHERE MODEL='chevy'
GROUP BY MODEL,COLOR;
```
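Both fixes can be run against made-up sales rows to see the difference; SQLite is used here for illustration (MySQL historically allowed the original query, but standard SQL and most engines reject it):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (model TEXT, color TEXT, sales INTEGER)")
con.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
    ("chevy", "red", 5), ("chevy", "red", 2), ("chevy", "blue", 4),
])
# First fix: drop COLOR -- one total per model.
per_model = con.execute("""
    SELECT model, SUM(sales) FROM sales WHERE model='chevy' GROUP BY model
""").fetchall()
# Second fix: group by COLOR too -- one total per (model, color).
per_color = con.execute("""
    SELECT model, color, SUM(sales) FROM sales
    WHERE model='chevy' GROUP BY model, color
""").fetchall()
print(per_model)          # [('chevy', 11)]
print(sorted(per_color))  # [('chevy', 'blue', 4), ('chevy', 'red', 7)]
```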
|
```
SELECT MODEL, COLOR, SUM(SALES)
FROM SALES
WHERE MODEL='chevy'
GROUP BY MODEL,COLOR;
```
[SQL Server: Difference between PARTITION BY and GROUP BY](https://stackoverflow.com/questions/2404565/sql-server-difference-between-partition-by-and-group-by)
|
Aggregation with Group by
|
[
"",
"sql",
""
] |
Here is my pseudo query:
```
SELECT col1, col2 FROM table WHERE number IN(number1,number2,number3);
```
**EXAMPLE**
```
SELECT name, description FROM products WHERE 5 IN(category_id);
```
NOTE: Let's assume that one of the rows in the table has **category\_id = 2,5,7**
**THE PROBLEM**
my query works if the row has "5" at the beginning of the comma separated set like this: 5,2,7
BUT same query does not return any result if the row starts with anything other than 5.
I tried various scenario, it always appears to me that when mysql encounters the comma, it is no longer checking for matches further into the comma separated numbers.
I checked everything that made sense such as properly formatted comma separated strings and collation. It is puzzling me at this point. Any solutions?
Please note I used `FIND_IN_SET()` and that worked flawlessly. But still, I'm not satisfied with ditching `IN()` without knowing what the real issue is. Why does it stop at the first encounter of a comma?
|
**EDIT** The expression examples in this answer are based entirely on a string context, that is, comparison of character datatypes. These examples do NOT take into account implicit datatype conversions, and the quirky MySQL semantics of string to numeric conversion, e.g. `'2,5,7'+0` evaluates to integer value 2. See the excellent answer from Bohemian for a demonstration of that behavior.)
---
The row in the table has category value `'2,5,7'`. That's a string value.
The expression:
```
'5' IN ('2,5,7')
```
Is equivalent to
```
'5' = '2,5,7'
```
The commas within the string value are *not* seen as SQL text, those are characters in the string.
To get the result you are looking for with `IN`, you'd need an expression like this:
```
'5' IN ('2','5','7')
```
That is three separate values, separated by commas that are part of the SQL text. That's equivalent to:
```
( '5' = '2' OR '5' = '5' OR '5' = '7' )
```
To answer the question you asked:
**Q: Why does it stop at the first encounter of a comma?**
**A:** It doesn't stop at the first comma. It compares the whole string, as a single string. You'd get the same result with this expression.
```
'5' IN ('5,2,7')
```
That will return FALSE, because it's equivalent to the expression `'5' = '5,2,7'`, and the two strings being compared are not equal.
(**EDIT:** The example above is based on string comparison. In a numeric context, the string `'5,2,7'` would evaluate to a numeric value of 5.
In that case, it's still not the `IN` that's stopping at the first comma, it's the implicit conversion from string to numeric that's "stopping at the first comma". (It's not just the comma, it's any character that's encountered where the string can no longer be converted into a numeric value, and that could be a paren, a '#', a 'b' or whatever.)
**Bottom Line:** The `IN` comparison operator doesn't give a rat's @ss about the comma characters within the string. Those are just characters within the string. The commas *within* a string value are *not* interpreted as part of the SQL text.
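The string-comparison claim is easy to verify. This sketch uses SQLite, where both operands stay strings, so MySQL's quirky implicit string-to-number conversion does not come into play:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# One element in the IN list: a single comparison against one string.
single = con.execute("SELECT '5' IN ('2,5,7')").fetchone()[0]
# Three elements: a genuine set-membership test.
three = con.execute("SELECT '5' IN ('2','5','7')").fetchone()[0]
print(single, three)  # 0 1
```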
|
[`IN`](http://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#function_in) works on a *set* of values, not a single comma-separated string value. Each separate *value* supplied to IN is *one element* in the set.
MySQL offers a non-standard [`FIND_IN_SET(str, strlist)`](http://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_find-in-set) function which *can* be used here, although proper relational database design would be to normalize the field.
> Returns a value in the range of 1 to N if the string str is in the string list strlist consisting of N substrings. A string list is a string composed of substrings separated by “,” characters.
Example:
```
WHERE FIND_IN_SET('5', category_id)
```
One issue, besides breaking normalization and throwing referential integrity out the window, is that FIND\_IN\_SET *cannot* use indices and is thus *not* scalable when used as a high-cardinality selector.
See also [FIND\_IN\_SET() vs IN()](https://stackoverflow.com/questions/4155873/find-in-set-vs-in)
|
Why does num IN("num1,num2") stop at first comma?
|
[
"",
"mysql",
"sql",
"sql-in",
""
] |
I have a table named **Documents**, with the fields **Document\_ID** (primary key) and **Cluster\_ID**. Now I want all the Document\_IDs that have the same Cluster\_ID, based on a Document\_ID passed as a parameter; the default value of Cluster\_ID is '-1'.
eg: **dbo.Documents**
```
Document ID ******** Cluster_ID
- 1****************** -1
- 2****************** 1001
- 3****************** 1001
- 4****************** -1
- 5****************** 1002
- 6****************** 1001
- 7****************** -1
- 8****************** -1
- 9****************** -1
```
now,
if Document\_ID is 2 then I want the result as rows with Document\_ID=2,3,6. Here I am not able to handle records with Cluster\_ID=-1 (because -1 is the default value in the table and there are hundreds of matching records).
I tried the query mentioned below, which works fine, but when I execute it for Document\_ID=7 it returns 5 rows with Cluster\_ID=-1:
```
SELECT d2.*
FROM dbo.Documents d1
join dbo.Documents d2 on d1.Cluster_ID=d2.Cluster_ID
WHERE d1.Document_ID =2
```
**my issue is:**
It is returning all the rows from the table which have Cluster\_ID=-1. What I want is: if Cluster\_ID=-1, return a single row; else return all rows with the same Cluster\_ID.
|
Try this query
```
SELECT
Document_ID,
Cluster_ID
FROM Documents
WHERE Cluster_ID =
(
SELECT
Cluster_ID
FROM Documents
WHERE Document_ID = @Document_ID
)
AND Cluster_ID <> -1
UNION
SELECT
Document_ID,
Cluster_ID
FROM Documents
WHERE Document_ID = @Document_ID
AND Cluster_ID = -1
```
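A sketch of this UNION approach against the question's sample data, using SQLite via Python's sqlite3 module: a clustered document returns its whole cluster, while a -1 document returns only itself.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE documents (document_id INTEGER, cluster_id INTEGER)")
con.executemany("INSERT INTO documents VALUES (?, ?)", [
    (1, -1), (2, 1001), (3, 1001), (4, -1), (5, 1002),
    (6, 1001), (7, -1), (8, -1), (9, -1),
])

def lookup(doc_id):
    # First branch: whole cluster, but only for real (non -1) clusters.
    # Second branch: the single row itself when its cluster is -1.
    return con.execute("""
        SELECT document_id FROM documents
        WHERE cluster_id = (SELECT cluster_id FROM documents
                            WHERE document_id = ?)
          AND cluster_id <> -1
        UNION
        SELECT document_id FROM documents
        WHERE document_id = ? AND cluster_id = -1
    """, (doc_id, doc_id)).fetchall()

print(sorted(lookup(2)))  # [(2,), (3,), (6,)]
print(lookup(7))          # [(7,)]
```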
|
Please try this.
here **@documentId** is the parameter which you need to pass.
```
DECLARE @Cluster_ID int
SET @Cluster_ID = (
    SELECT Cluster_ID
    FROM Documents
    WHERE Document_ID = @documentId
)
IF @Cluster_ID > 0
BEGIN
    SELECT *
    FROM Documents
    WHERE Cluster_ID = (
        SELECT Cluster_ID
        FROM Documents
        WHERE Document_ID = @documentId
    )
END
ELSE
BEGIN
    SELECT *
    FROM Documents
    WHERE Document_ID = @documentId
END
```
|
Query to get records from table on basis of one field and to check that if the field is not -1
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I keep getting the following message every time I try to run this code in oracle. The code is as follows:
```
DROP TABLE movie;
CREATE TABLE movie (movie_id NUMBER(5) PRIMARY KEY,
title VARCHAR2(45) NOT NULL,
description VARCHAR2(250) NOT NULL,
released_by NUMBER(3) NOT NULL,
released_on DATE NOT NULL);
INSERT INTO movie (movie_id, title, description, released_by, released_on)VALUES ('1', 'Edge of Tomorrow', 'Lieutenant Colonel Bill Cage is a skilled tactician who has honed his abilities through his experiences as a soldier. However, there is still much he can learn, and soon he is going to get his chance.', '1', '07-OCT-2014');
INSERT INTO movie (movie_id, title, description, released_by, released_on)VALUES('2', 'Captain America: Winter Soldier', 'Steve Rogers is finding it difficult to adjust to living life in the contemporary world. He is working for S.H.I.E.L.D. and begins to suspect a mystery is brewing there.', '2', '09-SEP-2014');
INSERT INTO movie (movie_id, title, description, released_by, released_on)VALUES('3', 'Fed Up', 'America’s problem with obesity is caused by our inactivity. Or is it? Katie Couric and Stephanie Soechtig tempt us to restructure our beliefs about the American diet, through this thought-provoking expose.', '3', '09-SEP-2014');
INSERT INTO movie (movie_id, title, description, released_by, released_on)VALUES('4', 'Godzilla', 'The legendary tale of Godzilla comes roaring back to life. This time, it's the modern era, and Godzilla is a giant lizard who has been made fearsome through the interference of radiation.', '1', '16-SEP-2014');
INSERT INTO movie (movie_id, title, description, released_by, released_on)VALUES('5', 'Neighbors', 'New parents Mac and Kelly settle into domesticity in a quiet neighborhood. The tranquility they have been afforded ceases to exist when a fraternity moves into the house next door.', '2', '14-SEP-2014');
COMMIT;
```
Below is what I get when I try to execute it in Oracle:
```
Table dropped.
Table created.
1 row created.
1 row created.
1 row created.
ERROR:
ORA-01756: quoted string not properly terminated
1 row created.
Commit complete.
SQL>
```
Any help would be greatly appreciated. Thank you.
|
Escape single quotes:
```
INSERT INTO movie (movie_id, title, description, released_by, released_on)VALUES('4', 'Godzilla', 'The legendary tale of Godzilla comes roaring back to life. This time, it''s the modern era, and Godzilla is a giant lizard who has been made fearsome through the interference of radiation.', '1', '16-SEP-2014');
```
Notice the `it''s` instead of `it's`.
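Doubling the quote is the standard SQL escape, so the same literal works in other engines too; a quick check with SQLite:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE movie (description TEXT)")
# The doubled quote '' inside the literal is stored as a single quote.
con.execute("INSERT INTO movie VALUES ('This time, it''s the modern era.')")
text = con.execute("SELECT description FROM movie").fetchone()[0]
print(text)  # This time, it's the modern era.
```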
|
Try Oracle's quoting mechanism:
The mechanism is invoked with a simple "q" prefix.
The syntax is q'[...]', where the "[" and "]" characters can be any of the following as long as they do not already appear in the string.
> •!
>
> •[ ]
>
> •{ }
>
> •( )
>
> •< >
For example:
```
INSERT INTO movie (movie_id, title, description, released_by, released_on)
VALUES('4', 'Godzilla', q'[The legendary tale of Godzilla comes roaring back to
life. This time, it's the modern era, and Godzilla is a giant lizard who has
been made fearsome through the interference of radiation.]', '1', '16-SEP-
2014');
```
It's always a headache to find all the single quotes and replace them with escape characters.
For more reference, see [THIS](http://www.oracle-developer.net/display.php?id=311)
|
ORA-01756: quoted string not properly terminated when I try to execute my code
|
[
"",
"sql",
"oracle",
"insert",
""
] |
I had a select statement which selected the oldest set of data in a table with an Inner Join
```
SELECT DateAndTime
,Val
,NumIndex
,TagName
,floattable.TagIndex
FROM taglogging.floattable
INNER JOIN tagtable ON tagtable.TagIndex = floattable.TagIndex
WHERE DateAndTime = oldest;
```
This works well and does the job in returning the data set.
I want to now delete from the data set, any record where the value column is outside its min, max range which is defined in another database.
I did this
```
DELETE
FROM taglogging.floattable
WHERE Val IN (
SELECT DateAndTime
,Val
,NumIndex
,TagName
,floattable.TagIndex
FROM taglogging.floattable
INNER JOIN tagtable ON tagtable.TagIndex = floattable.TagIndex
WHERE DateAndTime = oldest
) < taglogging_archive.tagmembers.tagMember_minValue
OR Val > taglogging_archive.tagMember_minValue;
```
and in doing so, got a syntax error:
```
Error Code: 1241. Operand should contain 1 column(s)
```
Does anyone see the error in my syntax? I am new to MySQL syntax, but after moving from MSSQL I was confident in the logic of my query.
Kind Regards,
Jordan
|
In your second query, the subquery used with `IN` must return exactly one column — the one compared against `Val`.
Your query should be like this:
```
DELETE from taglogging.floattable WHERE Val IN
(SELECT Val
FROM taglogging.floattable
INNER JOIN tagtable ON tagtable.TagIndex = floattable.TagIndex
WHERE DateAndTime = oldest) < taglogging_archive.tagmembers.tagMember_minValue
OR Val > taglogging_archive.tagMember_minValue;
```
As the error says.
|
`Operand should contain 1 column(s)` tells you everything. By the way, this is not a syntax error but a semantic one, so your syntax is in fact correct.
MySQL is complaining `SELECT DateAndTime, Val, NumIndex, TagName, floattable.TagIndex` is fetching more than one column. The `IN` operator requires exactly ONE column to be returned in the `SELECT` statement, you are returning 5.
As you use `DELETE from taglogging.floattable WHERE Val IN`, I would guess that you would want to use `SELECT Val FROM`...
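To see the single-column rule in action, here is a minimal sketch using SQLite via Python's stdlib; the tables are cut-down stand-ins for the ones in the question, and the `Val > 50` range check is purely illustrative:

```python
import sqlite3

# The IN (...) predicate expects its subquery to return exactly one column.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE floattable (Val REAL, TagIndex INTEGER);
CREATE TABLE tagtable (TagIndex INTEGER);
INSERT INTO floattable VALUES (1.5, 1), (99.0, 2), (3.0, 3);
INSERT INTO tagtable VALUES (1), (2);
""")
# Single-column subquery: valid. The range check is a stand-in for the
# min/max comparison in the question.
conn.execute("""
DELETE FROM floattable
WHERE Val IN (SELECT Val FROM floattable
              JOIN tagtable ON tagtable.TagIndex = floattable.TagIndex)
  AND Val > 50
""")
remaining = [r[0] for r in conn.execute("SELECT Val FROM floattable ORDER BY Val")]
print(remaining)  # [1.5, 3.0]
```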
|
MySQL Syntax error on a delete from a select set?
|
[
"",
"mysql",
"sql",
"select",
"dataset",
"syntax-error",
""
] |
I have a procedure that outputs a list of rows with your standard basic `SELECT` statement using a few `joins` and `where` clauses.
```
Employee Value
--------------------------
Tommy Elliott Damage
Tommy Elliott Overage
Tommy Elliott Damage
Tommy Elliott Shortage
Tommy Elliott Damage
Tommy Elliott Shortage
Trevor Gray Overage
Trevor Gray Shortage
Trevor Gray Overage
Trevor Gray Shortage
Trevor Gray Overage
Trevor Gray Shortage
```
I am wondering if anyone would know of a solution where I can either add a new SELECT statement below this one or within the same SELECT statement that would be able to:
* count the number of times Tommy and Trevor appear (which should be six based on the data above)
* and for Tommy and Trevor, count the number of times they have a value of damage, overage, or shortage.
I've been trying to figure it out and can't. I'm sure there's a quick solution to `COUNT`.
|
The analytic version of `COUNT()` can add these as column values *and* preserve the detail you want. Assuming your query starts like this:
```
SELECT
Employee,
Value
FROM ... and the rest of your query
```
... add the counts like this:
```
SELECT
Employee,
Value,
COUNT(*) OVER (PARTITION BY Employee) AS ThisEmpCount,
COUNT(CASE WHEN Value = 'Damage' THEN 1 END)
OVER (PARTITION BY Employee) AS ThisEmpDamageCount,
COUNT(CASE WHEN Value = 'Outage' THEN 1 END)
OVER (PARTITION BY Employee) AS ThisEmpOutageCount
FROM ... and the rest of your query
```
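A runnable sketch of the analytic counts, using SQLite through Python's stdlib (this assumes an SQLite build with window-function support, 3.25 or later, which modern Python releases bundle; the table name is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (Employee TEXT, Value TEXT)")
rows = [("Tommy Elliott", v) for v in
        ["Damage", "Overage", "Damage", "Shortage", "Damage", "Shortage"]]
rows += [("Trevor Gray", v) for v in ["Overage", "Shortage"] * 3]
conn.executemany("INSERT INTO results VALUES (?, ?)", rows)

out = conn.execute("""
SELECT Employee,
       Value,
       COUNT(*) OVER (PARTITION BY Employee) AS EmpCount,
       COUNT(CASE WHEN Value = 'Damage' THEN 1 END)
             OVER (PARTITION BY Employee) AS DamageCount
FROM results
""").fetchall()
tommy = [r for r in out if r[0] == "Tommy Elliott"][0]
print(tommy[2], tommy[3])  # 6 3 -- total rows and Damage rows for Tommy
```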
|
```
select T.Employee, COUNT(T.Employee)
from
(
your query
)T
group by T.Employee
```
|
Count the number of occurrences in a procedure
|
[
"",
"sql",
"sql-server",
""
] |
```
SELECT * FROM #TempMatrixTable
WHERE TIME >='07:00:00' and TIME <= '23:00:00'
ORDER BY dateperiod
```
In the above code I am fetching data between 07:00 and 23:00 for each date, and it works fine. But when I try to fetch the data from 23:00 to 07:00 (the reverse), I am not able to get the result. Please provide a solution.
Please find the attached screenshot of the same above.
```
SELECT * FROM #TempMatrixTable
WHERE TIME >='23:00:00' and TIME <= '07:00:00'
ORDER BY dateperiod
```
Using the above query I get no rows, because 23 is greater than 7. What could be the possible solution?
```
SELECT * FROM #TempMatrixTable
WHERE TIME >='00:00:00' and TIME <= '24:00:00'
ORDER BY dateperiod
```
Using the above query I am able to fetch the data.
|
When you are getting the times between 23 and 07, you are actually getting two time intervals; between 23 and 24, and between 00 and 07:
```
select
*
from
#TempMatrixTable
where
TIME >='00:00:00' and TIME < '07:00:00' or
  TIME >='23:00:00' and TIME <= '23:59:59.9999999'
order by
dateperiod
```
Naturally you don't need the 00 and 23.59 boundaries, as the time value can't be outside of those:
```
select
*
from
#TempMatrixTable
where
TIME < '07:00:00' or
TIME >='23:00:00'
order by
dateperiod
```
|
I think you want `or` instead of `and`:
```
SELECT *
FROM #TempMatrixTable
WHERE TIME >='23:00:00' OR TIME <= '07:00:00'
ORDER BY dateperiod;
```
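A quick sketch of the wrap-around filter using SQLite via Python's stdlib; zero-padded `HH:MM:SS` strings compare correctly as text, so the same `OR` works there:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (tm TEXT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [("06:30:00",), ("12:00:00",), ("23:15:00",), ("07:00:00",)])
# A wrap-around interval needs OR: the two halves lie on opposite ends of the day.
hits = [r[0] for r in conn.execute(
    "SELECT tm FROM t WHERE tm >= '23:00:00' OR tm <= '07:00:00' ORDER BY tm")]
print(hits)  # ['06:30:00', '07:00:00', '23:15:00']
```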
|
Fetching each date data for a specified time imterval in sql
|
[
"",
"sql",
""
] |
How can I remove the rows where Status = 'Pending' and the Date is less than the current date in my query? This should be a SELECT statement.
```
Status Date
Pending 9/5/2014
Pending 9/30/2014
Complete 8/28/2014
Error 8/20/2014
```
|
Since you say you wish to remove the data where Status is 'Pending' before the current date, I assume you want to delete that data. Here's the deletion query.
```
DELETE FROM my_table
WHERE status = 'Pending'
AND date < CAST(GETDATE() AS DATE);
```
However, since you mention that you need a select statement, here's the select query excluding records where Status is 'Pending' before the current date.
```
SELECT status, date FROM my_table
WHERE status <> 'Pending'
AND date < CAST(GETDATE() AS DATE)
```
(The above query will incorrectly skip all records where Status is 'Pending') The next query should work exactly as intended.
```
SELECT status, date FROM my_table
EXCEPT
SELECT status, date FROM my_table
WHERE status = 'Pending'
AND date < CAST(GETDATE() AS DATE);
```
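The same exclusion can also be written as a single `NOT (...)` predicate, which is equivalent to the `EXCEPT` form. A sketch using SQLite via Python's stdlib, with a fixed ISO date standing in for `CAST(GETDATE() AS DATE)`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (status TEXT, d TEXT)")
conn.executemany("INSERT INTO jobs VALUES (?, ?)", [
    ("Pending", "2014-09-05"), ("Pending", "2014-09-30"),
    ("Complete", "2014-08-28"), ("Error", "2014-08-20")])
today = "2014-09-10"  # stand-in for CAST(GETDATE() AS DATE)
rows = conn.execute(
    "SELECT status, d FROM jobs WHERE NOT (status = 'Pending' AND d < ?) ORDER BY d",
    (today,)).fetchall()
print(rows)  # the 2014-09-05 Pending row is gone; the 2014-09-30 one survives
```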
|
You can use
```
SELECT status, date FROM Test
WHERE status <> 'Pending'
AND date < CAST(GETDATE() AS DATE)
```
<http://sqlfiddle.com/#!6/755f0/1>
|
How can I remove the Status = Pending with the Date of less than the currentdate in a select statement
|
[
"",
"sql",
"t-sql",
"sybase",
""
] |
My stored procedure is trying to write a record in to a database on another Server. The statement is here:
```
IF @Builds > 0
BEGIN
SET @DPU = @Failures / @Builds
INSERT INTO SQL05.ManufacturingPortal.dbo.OPC.WriteRequests (ID, RegisterID, Value, RequestedDate, IsCompleted)
VALUES(@PLCID, 'F8:10' , CAST(@DPU AS NUMERIC(10,2)), GETDATE(), 0)
END
```
However when I try to create the stored procedure - I get the following error:
> The object name 'SQL05.ManufacturingPortal.dbo.OPC.WriteRequests' contains more than the maximum number of prefixes. The maximum is 3.
I have tried creating an alias of Server.DB.DBO to shorten the number of prefixes however seem to be implementing this wrong.
I cannot change my database schema on the target database. Any suggestions on how I can get around this?
Thanks
|
The correct four-part table name is server.database.schema.tablename; you have an extra part there.
It looks like the table name is actually OPC.WriteRequests? If so, you have to use brackets: `SQL05.ManufacturingPortal.dbo.[OPC.WriteRequests]`
Or maybe some part of the name is simply incorrect?
|
The accepted answer is correct, in that the example given literally has too many prefixes. In the example, one prefix had a dot in its name, and needs to be set off by square brackets (or better yet, have a name that conforms to convention).
In my case, it actually wanted to be run on the server where the INTO table would live:
```
SELECT *
INTO [database].[schema].[table]
FROM [server].[database].[schema].[table]
```
|
The object name contains more than the maximum number of prefixes. The maximum is 3
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a question related to SQL.
Can I display a custom message in MySQL phpMyAdmin, something like this:
```
SELECT * FROM table_name
PRINT 'The message that I want to write!';
```
I tried to do it with PRINT syntax, because I found something like that, but it doesn't work. Do you have some ideas?
|
Put the text in the `SELECT` statement
```
SELECT *, 'PRINT' FROM foo;
```

If you're wanting to print a long message, and want a different column name, simply use an alias by using `AS`.
```
SELECT *, 'The message i want to print' AS msg FROM foo;
```

|
You can simulate a print with the select statement like this:
```
SELECT 'some text'
```
|
Can I display a message in MySQL?
|
[
"",
"mysql",
"sql",
""
] |
I need to update a table with pre-calculated values from tables where data can be added/updated/deleted.
I could use
```
insert into precalculated(...)
select ... from ...
on duplicate key update ...
```
to add/update the pre-calculated table but is there an optimized method to delete the obsolete rows ?
|
I may have found my solution using RENAME.
Basically, I will do a simple INSERT ... SELECT into the temporary table and then:
```
rename precalculated to precalculated_temprename, precalculated_temp to precalculated, precalculated_temprename to precalculated_temp;
truncate precalculated_temp;
```
It needs some testing, but the RENAME operation seems to be fast and atomic.
|
I think you should create a stored procedure that deletes the data of your related tables if and only if the records fulfill a condition.
There's not enough information in your question to design the procedure, but I can give you a little example:
```
delimiter $$
create procedure delete_orphans()
begin
declare id_orphan int;
declare done int default false;
declare cur_orphans cursor for
select distinct d.id
from data as d
left join precalculated as p on d.id = p.id
where p.id is null;
declare continue handler for not found set done = true;
open cur_orphans;
loop_delete_orphans: loop
fetch cur_orphans into id_orphan;
if done then
leave loop_delete_orphans;
end if;
delete from data where id = id_orphan;
end loop;
close cur_orphans;
end$$
delimiter ;
```
This procedure will delete every row in the `data` table that does not have at least one related row in the `precalculated` table.
Of course, this approach might be inefficient, because it deletes the rows one by one, but as I said, this is only an example. You can customize it to fit your needs.
You can call this procedure from a trigger if you want (with `call delete_orphans()`).
Hope this helps.
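The cursor loop above can usually be collapsed into one set-based statement. A sketch using SQLite via Python's stdlib; `NOT IN` is safe here because `id` is a non-nullable primary key (with nullable columns, `NOT EXISTS` would be the safer choice):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE data (id INTEGER PRIMARY KEY);
CREATE TABLE precalculated (id INTEGER PRIMARY KEY);
INSERT INTO data VALUES (1), (2), (3), (4);
INSERT INTO precalculated VALUES (1), (3);
""")
# One statement instead of a row-by-row cursor loop.
conn.execute("DELETE FROM data WHERE id NOT IN (SELECT id FROM precalculated)")
left = [r[0] for r in conn.execute("SELECT id FROM data ORDER BY id")]
print(left)  # [1, 3]
```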
|
insert ... select ... on duplicate key update + delete obsolete rows
|
[
"",
"mysql",
"sql",
"insert-update",
""
] |
I have two MySQL queries that are finding MAX() values from nearly identical queries. I was hoping there was a way to combine the queries for better performance.
```
SELECT name, MAX(version) AS highest
FROM publish
WHERE name IN ('alpha', 'beta') AND deprecated = TRUE
GROUP BY name;
SELECT name, MAX(version) AS closed
FROM publish
WHERE name IN ('alpha', 'beta') AND open = FALSE
GROUP BY name;
```
The schema is fairly simple. For this simple example it would look like
```
CREATE TABLE publish
(
name CHAR(36) NOT NULL,
version INT NOT NULL,
deprecated TINYINT NOT NULL,
open TINYINT DEFAULT 0 NOT NULL,
  PRIMARY KEY (name, version)
);
CREATE INDEX open ON publish (open);
CREATE INDEX deprecated ON publish (deprecated);
```
|
You can use conditional aggregation to combine your queries
```
SELECT name, MAX(case when open = FALSE then version end) AS closed,
MAX(case when deprecated = TRUE then version end) AS highest
FROM publish
WHERE name IN ('alpha', 'beta')
GROUP BY name;
```
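A runnable sketch of the conditional aggregation using SQLite via Python's stdlib (the `open` column is renamed `is_open` and the booleans become 0/1 to stay portable; the data is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE publish
  (name TEXT, version INTEGER, deprecated INTEGER, is_open INTEGER)""")
conn.executemany("INSERT INTO publish VALUES (?, ?, ?, ?)", [
    ("alpha", 1, 1, 0), ("alpha", 2, 0, 0), ("alpha", 3, 1, 1),
    ("beta", 1, 0, 0), ("beta", 2, 1, 0)])
rows = conn.execute("""
SELECT name,
       MAX(CASE WHEN is_open = 0 THEN version END) AS closed,
       MAX(CASE WHEN deprecated = 1 THEN version END) AS highest
FROM publish
WHERE name IN ('alpha', 'beta')
GROUP BY name
ORDER BY name
""").fetchall()
print(rows)  # [('alpha', 2, 3), ('beta', 2, 2)]
```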
|
You can try UNION
```
SELECT name, MAX(version) AS highest
FROM publish
WHERE name IN ('alpha', 'beta') AND deprecated = TRUE
GROUP BY name
UNION ALL
SELECT name, MAX(version) AS closed
FROM publish
WHERE name IN ('alpha', 'beta') AND open = FALSE
GROUP BY name;
```
But an optimized solution would be to use CASE (for conditional functionality)
```
SELECT name, MAX(case when deprecated = TRUE then version end) AS highest,
MAX(case when open = FALSE then version end) AS closed,
FROM publish
WHERE name IN ('alpha', 'beta')
GROUP BY name;
```
|
Combine sql queries that want different filtered max values
|
[
"",
"mysql",
"sql",
""
] |
So we have the following relational database for movie ratings:
```
mysql> select * from ratings;
+-----+-----+------+
| uid | mid | rid |
+-----+-----+------+
| u1 | m1 | 4 |
| u1 | m2 | 3 |
| u2 | m2 | 5 |
| u2 | m3 | 4 |
| u3 | m1 | 4 |
| u3 | m2 | 5 |
| u3 | m3 | 2 |
+-----+-----+------+
7 rows in set (0.03 sec)
```
So, we want to find all the users who have rated at least 2 movies in common with a specific user 'u1'. In other words, if user 'u1' has rated movies m1, m2, m3, we want the user IDs of all others who have also rated at least 2 of m1, m2, m3.
So in this case, the answer should be u3, since u3 has rated m1, m2, m3, which includes 2 movies in common with 'u1'. I tried the following statement:
```
mysql> SELECT *
FROM ratings
WHERE mid IN (SELECT mid FROM ratings WHERE uid='u1');
+-----+-----+------+
| uid | mid | rid |
+-----+-----+------+
| u1 | m1 | 4 |
| u1 | m2 | 3 |
| u2 | m2 | 5 |
| u3 | m1 | 4 |
| u3 | m2 | 5 |
+-----+-----+------+
5 rows in set (0.00 sec)
```
So, this gives me all the ratings on movies in common with user u1. Now, how can I get only the users that have at least 2 entries in common with u1?
|
```
SELECT R.uid from ratings R
JOIN
(SELECT distinct mid
FROM ratings
WHERE uid='u1') T
ON R.mid = T.mid
group by R.uid
having count(R.mid) = 2;
+-----+
| uid |
+-----+
| u1 |
| u3 |
+-----+
2 rows in set (0.15 sec)
```
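To match the "at least 2" wording and exclude u1 itself, a `HAVING COUNT(DISTINCT ...) >= 2` variant can be used. A sketch with the question's data, using SQLite via Python's stdlib:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ratings (uid TEXT, mid TEXT, rid INTEGER)")
conn.executemany("INSERT INTO ratings VALUES (?, ?, ?)", [
    ("u1", "m1", 4), ("u1", "m2", 3), ("u2", "m2", 5), ("u2", "m3", 4),
    ("u3", "m1", 4), ("u3", "m2", 5), ("u3", "m3", 2)])
users = [r[0] for r in conn.execute("""
SELECT r.uid
FROM ratings r
WHERE r.mid IN (SELECT mid FROM ratings WHERE uid = 'u1')
  AND r.uid <> 'u1'
GROUP BY r.uid
HAVING COUNT(DISTINCT r.mid) >= 2
""")]
print(users)  # ['u3'] -- u2 shares only m2 with u1, so it is excluded
```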
|
A query of this form will return the specified result.
```
SELECT o.user_id
FROM ( SELECT COUNT(DISTINCT u.movie_id) AS cnt
FROM ratings u
WHERE u.user_id = '1234'
) c
CROSS
JOIN ( SELECT m.user_id
, m.movie_id
FROM ratings m
WHERE m.user_id = '1234'
GROUP
BY m.user_id
, m.movie_id
) n
JOIN ratings o
ON o.movie_id = n.movie_id
AND o.user_id <> n.user_id
GROUP
BY o.user_id
HAVING COUNT(DISTINCT o.movie_id) = c.cnt
```
Inline view `c` returns a count of the movies rated by specified user.
Inline view `n` returns the distinct list of movie\_id rated by specified user.
The join to `o` returns all rows from ratings for those same movies. The join predicate (in the ON clause) performs "matching" on values in the movie\_id column, and excludes rows for the specified user.
The GROUP BY collapses the rows to distinct user\_id.
The HAVING clause compares the "count" of the movies rated by specified user to the "count" of the number of those same movies rated by each other user, and eliminating rows for users that haven't rated *all* the movies rated by the specified user.
|
Complex query: find all users who rated at least 2 common movies rated by u1
|
[
"",
"mysql",
"sql",
""
] |
I have a SQL Server table with a column named "Name". I want to query the rows where "Name", minus its last character, is in a list.
Here is my SQL query, but it gives the error: Invalid length parameter passed to the LEFT or SUBSTRING function.
```
select * from MyTable where substring(Name, 1, len(Name) - 1) in ('ABC', 'DEF')
```
|
You should check the length of "Name" first: if the name is empty, `substring(Name, 1, len(Name) - 1)` will throw this error.
|
How about doing something like this?
```
select t.*
from mytable t
where name like 'ABC_' or name like 'DEF_';
```
|
How to: substring of a column in a list
|
[
"",
"sql",
"sql-server",
"substring",
""
] |
I created a table using the following query:
```
CREATE Table articulo
(
id_art int NOT NULL,
nom_art varchar (25) DEFAULT 'XXXXXXXXXXXXX',
prec_art decimal (10,5) DEFAULT 0.000,
peso_art decimal (10,5),
existencia float,
color_art int, CONSTRAINT chk_color1 CHECK (color_art between 0 and 20),
um_art varchar (10) DEFAULT 'DEF_PZA',
primary key (id_art)
);
```
Then, I tested the Constraint with the following line of code:
```
INSERT into articulo values (80,'Taza',70.254,10.00,10.00,25,'PZA');
```
What I see is that the constraint is not working: I can insert 25 without any problem. I looked up topics about this here and found that:
> The MySQL Reference Manual says:
>
> The CHECK clause is parsed but ignored by all storage engines.
>
> Try a trigger...
I am a beginner with SQL; do you have any suggestions for implementing this check constraint without using triggers?
Thanks in advance!
|
Check constraints do not work as expected in MySQL; see the bug report <http://bugs.mysql.com/bug.php?id=3464>.
As an alternative, you can use a trigger like the following to force out-of-range values back to a default:
```
CREATE TRIGGER trig_color_art_check BEFORE INSERT ON articulo
FOR EACH ROW
BEGIN
  IF (NEW.color_art < 0 OR NEW.color_art > 20) THEN
SET NEW.color_art=0;
END IF;
END
```
|
In many cases, you can replace a CHECK constraint with a table and a foreign key constraint. This case is one of them.
```
create table art_color (
color_art integer not null,
primary key (color_art)
);
insert into art_color values
(0), (1), (2), (3), (4), (5), (6), (7), (8), (9), (10),
(11), (12), (13), (14), (15), (16), (17), (18), (19), (20);
CREATE Table articulo
(
id_art int NOT NULL,
nom_art varchar (25) DEFAULT 'XXXXXXXXXXXXX',
prec_art decimal (10,5) DEFAULT 0.000,
peso_art decimal (10,5),
existencia float,
color_art int,
CONSTRAINT chk_color1
foreign key (color_art) references art_color (color_art),
um_art varchar (10) DEFAULT 'DEF_PZA',
primary key (id_art)
);
```
|
MySQL Constraint Check
|
[
"",
"mysql",
"sql",
"constraints",
""
] |
I want to display the department no. and the number of employees in each department from `EMP` table in a single row. I had one query which display the result in separate rows.
```
select deptno, count(*) from emp
group by deptno;
Dptno Count(*)
10 5
20 3
30 4
```
I want to display the result as a single-row. For example:
```
Dpt10 Count(*) Dpt20 Count(*) Dpt30 Count(*)
10 5 20 3 30 4
```
The formatting in this forum is not ideal, but note that the numbers 5, 3 and 4 should appear below the `count(*)` column and 10, 20 and 30 below `deptno`.
|
You can try this -
**Schema**
```
DECLARE @emp TABLE ([deptno] int NULL, [deptcount] int NULL);
INSERT @emp ([deptno], [deptcount]) VALUES (10, 5), (20, 3), (30, 4);
```
**Query**
```
SELECT STUFF((
SELECT ' ' + CAST([deptno] AS VARCHAR) + ' ' + CAST([deptcount] AS VARCHAR)
FROM @emp
FOR XML PATH('')
), 1, 1, '')
```
**OutPut**
> 10 5 20 3 30 4
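`STUFF(... FOR XML PATH(''))` is SQL Server specific; MySQL and SQLite offer `GROUP_CONCAT` for the same flattening. A sketch using SQLite via Python's stdlib:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (deptno INTEGER, deptcount INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?)", [(10, 5), (20, 3), (30, 4)])
# The inner ORDER BY fixes the concatenation order.
flat = conn.execute("""
SELECT group_concat(s, ' ')
FROM (SELECT deptno || ' ' || deptcount AS s FROM emp ORDER BY deptno)
""").fetchone()[0]
print(flat)  # 10 5 20 3 30 4
```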
|
Since pivot doesn't support several aggregates in SQL Server:
```
with t as (
select 10 id, 15 su union all
select 10 id, 10 su union all
select 10 id, 5 su union all
select 20 id, 135 su union all
select 20 id, 100 su union all
select 20 id, 15 su union all
select 30 id, 150 su union all
select 30 id, 1000 su union all
select 30 id, 500 su
)
select max(case when id = 10 then id end) dept10
, count(case when id = 10 then id end) dept10_cnt
, max(case when id = 20 then id end) dept20
, count(case when id = 20 then id end) dept20_cnt
, max(case when id = 30 then id end) dept30
, count(case when id = 30 then id end) dept30_cnt
from t
```
[SQLFiddle](http://sqlfiddle.com/#!3/d41d8/39430)
|
Multiple rows show in a single row using query in sql?
|
[
"",
"sql",
"sql-server",
""
] |
I have a question which looks easy but I can't figure it out.
I have the following:
```
Name Zipcode
ER 5354
OL 1234
AS 1234
BH 3453
BH 3453
HZ 1234
```
I want to find those rows where the zipcode does not map to a single name.
So here I want to see:
```
OL 1234
AS 1234
HZ 1234
```
Or simply returning the zipcode is enough.
**I am sorry, I forgot to mention an important part: if the name is the same it is not a problem, only if there are different names for the same zipcode.**
So this means BH 3453 should not be returned.
|
I think this is what you want
```
select zipcode
from yourTable
group by zipcode
having count(*) > 1
```
It selects the zipcodes associated to more than one record
To answer your updated question:
```
select zipcode
from
(
select name, zipcode
from yourTable
group by name, zipcode
)
group by zipcode
having count(*) > 1
```
should do it. It might not be optimal in terms of performance in which case you could use window functions as suggested by @a1ex07
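An equivalent one-pass variant uses `COUNT(DISTINCT name)` so duplicate name/zipcode pairs don't trigger a false positive. A sketch with the question's data, using SQLite via Python's stdlib:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (name TEXT, zipcode INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [
    ("ER", 5354), ("OL", 1234), ("AS", 1234),
    ("BH", 3453), ("BH", 3453), ("HZ", 1234)])
# COUNT(DISTINCT name) collapses duplicate name/zipcode pairs in one pass,
# so BH 3453 (same name twice) is correctly excluded.
zips = [r[0] for r in conn.execute("""
SELECT zipcode FROM t GROUP BY zipcode HAVING COUNT(DISTINCT name) > 1
""")]
print(zips)  # [1234]
```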
|
Try this:
```
select yt.*
from YOUR_TABLE yt
, (select zipcode
from YOUR_TABLE
group by zipcode
having count(*) > 1
) m
where yt.zipcode = m.zipcode
```
|
Find not unique rows in Oracle SQL
|
[
"",
"sql",
"oracle",
"having-clause",
""
] |
I have a calendar table which stores rows of dates and an indication of wether that date is a holiday or working day.
How can I select the date that is 5 working days into the future from the `2014-12-22` so the selected date will be `2014-12-31`
```
Date_Id Date_Date Date_JDE Is_WorkingDay
20141222 2014-12-22 114356 1
20141223 2014-12-23 114357 1
20141224 2014-12-24 114358 1
20141225 2014-12-25 114359 0
20141226 2014-12-26 114360 0
20141227 2014-12-27 114361 0
20141228 2014-12-28 114362 0
20141229 2014-12-29 114363 1
20141230 2014-12-30 114364 1
20141231 2014-12-31 114365 1
```
|
You can use a [CTE](http://technet.microsoft.com/en-us/library/ms190766(v=sql.105).aspx) like this...
```
;WITH cteWorkingDays AS
(
SELECT Date_Date, ROW_NUMBER() OVER (ORDER BY Date_Date) as 'rowNum'
FROM TableName
WHERE Is_WorkingDay = 1
and Date_Date > '20141222' -- this will be a param I suppose
)
SELECT Date_Date
FROM cteWorkingDays
    WHERE rowNum = 5 -- this can be changed to 10 (the title's value)
```
This is hand-typed, but it should be close enough.
**EDIT**: Based on comment.
```
Declare @DateToUse TYPE -- unsure if you're using a string or a date type.
SELECT @DateToUse = Date_Date
FROM cteWorkingDays
WHERE rowNum = 5
```
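Where `ROW_NUMBER()` is unavailable, `LIMIT 1 OFFSET n-1` over the ordered working days gives the same nth-day lookup (MySQL/SQLite syntax). A sketch using SQLite via Python's stdlib with the question's calendar:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cal (d TEXT, working INTEGER)")
# 2014-12-25 .. 2014-12-28 are the holidays from the question.
days = [("2014-12-%02d" % n, 0 if n in (25, 26, 27, 28) else 1)
        for n in range(22, 32)]
conn.executemany("INSERT INTO cal VALUES (?, ?)", days)
row = conn.execute("""
SELECT d FROM cal
WHERE working = 1 AND d > '2014-12-22'
ORDER BY d LIMIT 1 OFFSET 4  -- OFFSET n-1 gives the nth working day
""").fetchone()
print(row[0])  # 2014-12-31
```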
|
```
...;
WITH DatesCTE AS
(
SELECT Date_Id,
Date_Date,
Date_JDE,
Is_WorkingDay,
ROW_NUMBER() OVER(ORDER BY Date_Date) AS rn
FROM DatesTable
WHERE Is_WorkingDay = 1
AND Date_Date > '2014-12-22'
)
SELECT Date_Date
FROM DatesCTE
WHERE rn = 5
```
**[SQL Fiddle Demo](http://sqlfiddle.com/#!3/95c91/1)**
|
TSQL Calendar table, count 10 working days from date
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
Say there is a database for students and professors (a very simple one) and the relational database is the following:
```
GradStudent (_id_, name, gradStuff)
UndergradStudent (_id_, name, underGradStuff)
Professor (_id_, name)
Teaches(_prof_id_, _stud_id_)
```
Considering that the relational database above is meant to represent total disjoint specialization, i.e. there is no table `Student` but rather two completely seperate ones, when writing this in SQL in order to implement the database, how would I fetch the student id for the `Teaches` table?
I cannot figure out how to make a foreign key from two different tables.
I am writing this question assuming that the SQL language is not all the different across all the platforms. If clarification is needed: I am working on Oracle SQL Developer.
**Edit** :: additional info / Clarification:
For a more graphical, simplistic view on what I am trying to achieve:
I want to write the following in SQL code (however I do not know how is it possible and thus don't know how to)

My apologies if the picture is too simplistic, if needed I can add more attributes and details, just let me know.
|
> I cannot figure out how to make a foreign key from two different tables.
You mean, a foreign key to/referencing two different tables. But there is no such foreign key in this design.
We declare an SQL FOREIGN KEY for a table to say that (ie to tell the DBMS that) the values for a list of columns are also values of a list of corresponding columns (maybe the same list) that are unique in a table (maybe the same table). You don't have this here. You have a different constraint on your tables.
If you want exactly those base tables then you have to use triggers in SQL to enforce your constraints.
You can also have a design with:
* base table Student with NOT NULL UNIQUE or PRIMARY KEY id
* FOREIGN KEYs from GradStudent (id), UndergradStudent (id) and Teaches (stud\_id) REFERENCES Student (id)
* a constraint that the projection of Student on id is the disjoint union of the projections of GradStudent and UndergradStudent on id
You could express part of the latter constraint by a trigger. A triggerless way to express the disjointness (but not the union) is:
* a type discriminator/tag column `student_type` (say) in GradStudent, UndergradStudent & Student with additional FOREIGN (super) KEYs (id,student\_type) from GradStudent and UndergradStudent to NOT NULL UNIQUE (id,student\_type) in Student
* GradStudent CHECK( student\_type = 'grad' ) and UndergradStudent CHECK ( student\_type = 'undergrad' )
Rows in each of the two student subtype base tables are all the same (redundancy) and rows in Student are determined by their id (redundancy) but that's the cost in this case of having no triggers. Column student\_type could be a computed column.
There's really no pretty SQL way to enforce that every parent id is a child. Having only the LEFT JOIN of the above child tables instead of the parent and child tables enforces that every parent is a child but requires NULL columns and further constraints. One needs triggers to reasonably constrain SQL databases. One uses idioms to get what declarative constraints one can.
For more on subtyping idioms see [this answer and its links](https://stackoverflow.com/a/27183834/3404097). Google 'stackoverflow database sql table' plus child/parent, super/subtables, super/subtypes, inheritance and/or polymorphism. Also multiple/many/two FKs/relationships/associations/references/links (although usually as in this question the constraint wanted is *not* a FK and the design should use subtypes instead). I googled "stackoverflow two foreign keys" and got [this](https://stackoverflow.com/q/668921/3404097).
|
If by "fetch the student id for the Teaches table", you mean you want Teaches.stud\_id to be a FK that references "GradStudent or Undergradstudent as is the case", you can't. The target of a FK must be a key of a table that is not a view. You have no such table, ergo you have no such key either.
Only way I see is to code a trigger that does the check upon inserts/updates to Teaches.
|
How to implement total disjoint specialization in database?
|
[
"",
"sql",
"database",
"relational-database",
""
] |
I have 2 tables, `article` and `image`.
In order to extract all articles of a particular user, I need to:
* query all the articles
`SELECT * FROM article WHERE userid=100;`
* then loop through these articles, and extract their images with:
`SELECT * FROM image WHERE articleid = currentArticleId;`
How can I achieve the same thing with just a single query?
|
You could use a join or a subquery.
---
[Subquery:](http://dev.mysql.com/doc/refman/5.0/en/subqueries.html)
The subquery (The second query contained in brackets) will be executed and the ID(s) that are returned will be used to identify which rows from the initial query to grab. So if you imagine the subquery returns (1, 2, 3), the initial query will be looking for rows `WHERE articleid` is `IN` that list.
```
SELECT * FROM image WHERE articleid IN (SELECT id FROM article WHERE userid=100);
```
---
[Join](http://dev.mysql.com/doc/refman/5.0/en/join.html):
Here you are telling MySQL that the `articleid` in `image` is related to the `id` column in the `article` table. Then you can just pull columns from either table. This is a watered down explanation so I recommend you read the link. There are various types of `JOIN` that will result in different results, so you should read about `LEFT`, `RIGHT`, `OUTER` and `INNER` joins to make sure the resultset fits your needs. In this case, `JOIN` is equivalent to an `INNER` join.
```
SELECT
image.col1,
image.col2,
image.colX,
article.col1,
article.col2,
article.colX
FROM image
JOIN article ON article.id = image.articleid;
```
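Folding the user filter into the join answers the original question ("all articles of a particular user") in one statement. A sketch using SQLite via Python's stdlib, with illustrative columns:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE article (id INTEGER PRIMARY KEY, userid INTEGER, title TEXT);
CREATE TABLE image (id INTEGER PRIMARY KEY, articleid INTEGER, path TEXT);
INSERT INTO article VALUES (1, 100, 'a'), (2, 100, 'b'), (3, 200, 'c');
INSERT INTO image VALUES (10, 1, 'p1'), (11, 1, 'p2'), (12, 3, 'p3');
""")
# All images belonging to articles of user 100, in one query.
rows = conn.execute("""
SELECT article.id, image.path
FROM image
JOIN article ON article.id = image.articleid
WHERE article.userid = 100
ORDER BY image.id
""").fetchall()
print(rows)  # [(1, 'p1'), (1, 'p2')]
```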
|
You can read more about select statements, joins, etc. here:
[MySQL Reference - Select Syntax](http://dev.mysql.com/doc/refman/5.0/en/select.html)
```
SELECT *
FROM image
JOIN article ON article.id = image.articleid;
```
|
select from 2 tables in a single query
|
[
"",
"mysql",
"sql",
""
] |
I want to create a condition in the WHERE clause that can imitate a LEFT JOIN. For example:
```
From Table1, Table2 Where table1.ID *= table2.ID
```
needs to be written as
```
From Table1, Table2
Where Table1.ID = Table2.ID AND (some condition to get the rows from table1 where match does not exist)
```
I want to do this because the existing query builder calls a dozen functions to create the final query, and the same function is called from 100+ pages. Updating the FROM clause to use `LEFT JOIN` would break the whole flow and cause a lot of headaches, so I would rather just update the WHERE clause itself.
|
In general, a `where` clause cannot increase the number of rows in the result set. So, you cannot formally reproduce a `left join` in the `where` clause.
That said, it is possible that this would do what you want:
```
From Table1, Table2
Where (Table1.ID = Table2.ID or
not exists (select 1 from table2 t2 where t2.id = table1.id)
)
```
This doesn't quite formally replicate the `left join`. For instance, if `Table2` is empty, then the `from` clause returns no rows at all, so nothing is returned. However, it might work for your case.
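A sketch using SQLite via Python's stdlib that shows both the emulation and its limits: the row *set* survives, but the NULL-extended columns do not, and an empty `Table2` would still return nothing:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (id INTEGER);
CREATE TABLE t2 (id INTEGER, note TEXT);
INSERT INTO t1 VALUES (1), (2), (3);
INSERT INTO t2 VALUES (1, 'x'), (3, 'y');
""")
# The real LEFT JOIN keeps id 2 with a NULL note.
left_join = conn.execute(
    "SELECT t1.id, t2.note FROM t1 LEFT JOIN t2 ON t1.id = t2.id ORDER BY t1.id"
).fetchall()
# The WHERE-clause emulation preserves the id set (DISTINCT collapses the
# cross-product rows) but cannot produce the NULL-extended t2 columns.
emulated_ids = [r[0] for r in conn.execute("""
SELECT DISTINCT t1.id
FROM t1, t2
WHERE t1.id = t2.id
   OR NOT EXISTS (SELECT 1 FROM t2 b WHERE b.id = t1.id)
ORDER BY t1.id
""")]
print(left_join)     # [(1, 'x'), (2, None), (3, 'y')]
print(emulated_ids)  # [1, 2, 3] -- id 2 survives despite having no match
```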
|
```
CREATE TABLE Table1
(`Name` varchar(5), `Age` int)
;
INSERT INTO Table1
(`Name`, `Age`)
VALUES
('PVJ', 10),
('EPF', 12),
('CISCO', 13)
;
CREATE TABLE Table2
(`Name` varchar(3), `Age` int)
;
INSERT INTO Table2
(`Name`, `Age`)
VALUES
('PVJ', 10),
('EPF', 12),
('DVF', 13);
select Table1.Name, (select Table2.Age from Table2 where Table1.Name = Table2.Name) Age from Table1
```
<http://sqlfiddle.com/#!2/7dbd4/17>
|
Creating a left join condition using where clause in sql
|
[
"",
"mysql",
"sql",
"left-join",
""
] |