| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
My question is about finding a more elegant way to get the desired results, perhaps using a `CASE` expression as I do here. I'm currently getting the results I need under the new requirement, but the query is rather verbose.
This was the previous query; it worked fine for the earlier requirement. As you can see, the rows with units (12, 23, 34) are ignored:
```
declare @table table
(
ID varchar(5),
pgroup varchar(5),
Unit varchar(5),
GeneralStatus varchar(10)
)
insert into @table (id, pgroup, unit, GeneralStatus) select 'P01', 1, 11, 'OK'
insert into @table (id, pgroup, unit, GeneralStatus) select 'P01', 1, 12, 'NOK'
insert into @table (id, pgroup, unit, GeneralStatus) select 'P01', 2, 22, 'OK'
insert into @table (id, pgroup, unit, GeneralStatus) select 'P01', 2, 23, 'NOK'
insert into @table (id, pgroup, unit, GeneralStatus) select 'P01', 3, 33, 'OK'
insert into @table (id, pgroup, unit, GeneralStatus) select 'P01', 3, 34, 'OK'
--select * from @table
select
id,
case
when pgroup = 1 then 'Alpha'
when pgroup = 2 then 'Beta'
when pgroup = 3 then 'Gamma'
END,
case
when Unit = 11 then 'G1'
when Unit = 22 then 'G2'
when Unit = 33 then 'G3'
end,
case
when GeneralStatus = 'OK' then 'ENABLED'
when GeneralStatus = 'NOK' then 'DISABLED'
end
GeneralStatus
from @table where id = ('P01') and unit in (11,22,33)
```
Unfortunately, the requirement has now changed: both rows for each `pgroup` must be evaluated. So, in this example, for `P01` and `pgroup` 1 (units 11, 12), one is OK and the other NOK, so this case is DISABLED. For `P01` and `pgroup` 3 (units 33, 34), both are OK, so this case is ENABLED.
```
P01 Alpha G1 DISABLED
P01 Beta G2 DISABLED
P01 Gamma G3 ENABLED
```
The solution I have now counts the rows that are `P01` with `pgroup` 1 and 'OK'. If there are two, I insert one row with status `ENABLED`; otherwise `DISABLED`. It's ugly, but it does the job.
I was looking for a more elegant way, maybe using `CASE`.
|
A correlated subquery should do it. Seems like you should be able to replace this:
```
case
when GeneralStatus = 'OK' then 'ENABLED'
when GeneralStatus = 'NOK' then 'DISABLED'
end GeneralStatus
from @table where id = ('P01') and unit in (11,22,33)
```
With this:
```
case
when (SELECT COUNT(*) FROM @table t2
WHERE t1.ID=t2.ID
AND t1.pGroup=t2.pGroup
AND GeneralStatus = 'OK') >= 2 then 'ENABLED'
ELSE 'DISABLED'
end GeneralStatus
from @table t1 where id = ('P01')
and unit in (11,22,33)
```
|
consider
```
select
id,
case
when pgroup = 1 then 'Alpha'
when pgroup = 2 then 'Beta'
when pgroup = 3 then 'Gamma'
END,
case min(unit)
when 11 then 'G1'
when 22 then 'G2'
when 33 then 'G3'
end,
case
when min(GeneralStatus) = 'NOK' then 'DISABLED' else 'ENABLED'
end
as GeneralStatus
from
@table
where
id = ('P01')
group by id, pgroup
```
[sqlFiddle](http://www.sqlfiddle.com/#!3/52b0a/2)
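The `MIN()` trick above is easy to sanity-check outside SQL Server; here is a sketch using Python's built-in `sqlite3` with the question's data (same aggregation logic, only the table-variable syntax differs):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (id TEXT, pgroup INT, unit INT, GeneralStatus TEXT);
    INSERT INTO t VALUES
        ('P01', 1, 11, 'OK'),  ('P01', 1, 12, 'NOK'),
        ('P01', 2, 22, 'OK'),  ('P01', 2, 23, 'NOK'),
        ('P01', 3, 33, 'OK'),  ('P01', 3, 34, 'OK');
""")
# min('NOK', 'OK') = 'NOK' alphabetically, so any NOK in the group wins
rows = conn.execute("""
    SELECT id,
           CASE pgroup WHEN 1 THEN 'Alpha' WHEN 2 THEN 'Beta' WHEN 3 THEN 'Gamma' END,
           CASE MIN(unit) WHEN 11 THEN 'G1' WHEN 22 THEN 'G2' WHEN 33 THEN 'G3' END,
           CASE WHEN MIN(GeneralStatus) = 'NOK' THEN 'DISABLED' ELSE 'ENABLED' END
    FROM t
    WHERE id = 'P01'
    GROUP BY id, pgroup
    ORDER BY pgroup
""").fetchall()
for row in rows:
    print(row)
```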
|
Case with multiple values and without using dynamic sql?
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2008-r2",
""
] |
If I use `NEWID()` in a view, it takes 1 ms:
```
ALTER VIEW [dbo].[d]
AS
SELECT
NEWID() AS Id,
p.XX
,p.xx
FROM [view_XXX] as P
GO
select * from view_d where code='xxxx'
```
But if I use `ROW_NUMBER()`, it takes too much time to finish:
```
ALTER VIEW [dbo].[d]
AS
SELECT
Row_number() OVER (ORDER BY P.XXXXXX) AS Id,
p.XX
,p.xx
FROM [view_XXX] as P
GO
select * from view_d where code='xxxx'
```
|
I think `NEWID()` simply generates a random value as the ID (it doesn't need any calculation over the rows), but in the case of `ROW_NUMBER()` SQL Server needs to order the rows first and then assign the IDs.
As `ROW_NUMBER()` is more complex than `NEWID()`, SQL Server takes more time to process it.
|
`ROW_NUMBER()` will have the extra cost of sorting the result, but `NEWID()` doesn't have this cost.
If you look at the execution plan you can see the same:
> Click the `Display Estimated Execution Plan` icon on the toolbar in `SSMS`
See the pics below.
[](https://i.stack.imgur.com/0BgTD.jpg)
[](https://i.stack.imgur.com/REWUD.jpg)
|
Why adding ROW_NUMBER in View taking too much time to execute?
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I want to implement a SQL query in the following way:
```
INNER JOIN
`Product_has_ProductFeature` t ON `Product`.`id` = t.`productId` AND t.`productFeatureValueId` = 1
INNER JOIN
`Product_has_ProductFeature` t1 ON `Product`.`id` = t1.`productId` AND t1.`productFeatureValueId` = 5
```
How can I do this using `innerJoin()` or something like above mentioned?
|
`innerJoin()` is a [method](http://www.yiiframework.com/doc-2.0/yii-db-query.html#innerJoin()-detail) from the [Query](http://www.yiiframework.com/doc-2.0/yii-db-query.html) class.
You can try something like this.
```
$query = new \yii\db\Query;
$command = $query->from('Product')
    ->innerJoin(
        'Product_has_ProductFeature t',
        '`Product`.`id` = t.`productId`')
    ->andWhere('t.`productFeatureValueId` = 1')
    ->createCommand();
$queryResult = $command->query();
```
|
You can use the below code:
```
$model = Product::find()
->innerJoinWith('t', 'Product.id = T.productId')
->andWhere(['T.productFeatureValueId' => ''])
->innerJoinWith('t1', 'Product.id = T1.productId')
->andWhere(['T1.productFeatureValueId' => '5'])
->all();
```
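Either way, the underlying SQL pattern is joining the feature table twice so that a product must have *both* feature values. A quick sketch of that pattern using Python's `sqlite3` (table names from the question, product data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Product (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE Product_has_ProductFeature (productId INT, productFeatureValueId INT);
    INSERT INTO Product VALUES (1, 'widget'), (2, 'gadget');
    -- only product 1 has both feature values 1 and 5
    INSERT INTO Product_has_ProductFeature VALUES (1, 1), (1, 5), (2, 1);
""")
rows = conn.execute("""
    SELECT p.id, p.name
    FROM Product p
    INNER JOIN Product_has_ProductFeature t
        ON p.id = t.productId AND t.productFeatureValueId = 1
    INNER JOIN Product_has_ProductFeature t1
        ON p.id = t1.productId AND t1.productFeatureValueId = 5
""").fetchall()
print(rows)
```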
|
Yii2 innerJoin()
|
[
"",
"sql",
"yii2",
"inner-join",
""
] |
I am trying to get the latest record based on date and time, but the column stores the time in 12-hour AM/PM format.
Suppose my records are:
2015-09-07 12:50:37.1983315 PM - this record is returned
2015-09-07 03:12:15.1983315 PM - but this is the record I actually need.
The above result comes from the query below:
```
SELECT top 10 * FROM SHAdminMessageIndex where Appid='62001308607984608300' and
PolicyNumber='0081317' order by BeginTime desc
```
So how can I get the correct latest record when the time is stored in 12-hour AM/PM format? Thanks in advance.
|
Try this
```
SELECT top 10 * FROM SHAdminMessageIndex
where Appid='62001308607984608300' and PolicyNumber='0081317'
order by
cast(BeginTime as datetime2) desc
```
OR
```
SELECT top 10 * FROM SHAdminMessageIndex
where Appid='62001308607984608300' and PolicyNumber='0081317'
order by
cast(left(BeginTime,charindex('.',BeginTime)-1)+right(BeginTime,2) as datetime) desc
```
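The reason the plain sort fails is that '03:12' sorts before '12:50' as text even though 3 PM is later. A sketch of the same fix in Python (fractional seconds trimmed, since this only illustrates the ordering):

```python
from datetime import datetime

times = ["2015-09-07 12:50:37 PM", "2015-09-07 03:12:15 PM"]
# plain text sort: '12:50...' > '03:12...', so the wrong row comes out on top
assert max(times) == "2015-09-07 12:50:37 PM"
# parsing as a real datetime fixes the ordering (%I = 12-hour clock, %p = AM/PM)
latest = max(times, key=lambda s: datetime.strptime(s, "%Y-%m-%d %I:%M:%S %p"))
print(latest)
```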
|
Try this
```
SELECT top 10 * FROM SHAdminMessageIndex where Appid='62001308607984608300'
and PolicyNumber='0081317' order by stuff(convert(varchar(19), BeginTime,
126),11,1,' ') desc
```
|
How to get latest record based on date and time in Sql server
|
[
"",
"sql",
"sql-server",
""
] |
Please help me write this stored procedure. I have an input like the one given below.
```
{local.txt_concept_id} != "" & {local.txt_ICD_code} != "" &
{local.txt_diagnosis_uniq_id} != "" & {local.txt_ICD_desc} != ""&
{FIND( {local.txt_ICD_code} , "asp", 0 )}!=-1&
{LEN( {local.txt_diagnosis_uniq_id})}
```
From this I want to get the text enclosed between the open and close curly braces. This has to be done in SQL Server itself, so no high-level language
(C#, Java, PHP).
Please help me with the query.
The expected result table will have values like:
```
local.txt_concept_id
local.txt_ICD_code
local.txt_ICD_desc
local.txt_diagnosis_uniq_id
```
|
Use this:
**[SqlFiddle](http://sqlfiddle.com/#!6/9eecb7db59d16c80417c72d1/2843/0)**
```
DECLARE @myString NVARCHAR(MAX) = '{local.txt_concept_id} != "" & {local.txt_ICD_code} != "" &
{local.txt_diagnosis_uniq_id} != "" & {local.txt_ICD_desc} != ""&
{FIND( {local.txt_ICD_code} , "asp", 0 )}!=-1&
{LEN( {local.txt_diagnosis_uniq_id})}';
SELECT T2.X.value('.', 'varchar(50)')
FROM
(SELECT CAST(REPLACE(REPLACE((SELECT @myString FOR XML PATH('')), '{', '<X>'), '}', '</X>') AS XML).query('.')) AS T1(X)
CROSS APPLY T1.X.nodes('/X/text()') AS T2(X);
```
You may also add `WHERE` condition for more filtering.
|
This should work for you...
```
declare @s VARCHAR(MAX)=
'{local.txt_concept_id} != "" & {local.txt_ICD_code} != "" &
{local.txt_diagnosis_uniq_id} != "" & {local.txt_ICD_desc} != ""&
{FIND( {local.txt_ICD_code} , "asp", 0 )}!=-1&
{LEN( {local.txt_diagnosis_uniq_id})}';
WITH DividedByAmpersand AS
(
SELECT CAST('<root><r>' + REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(@s,'{LEN( ',''),'{FIND( ',''),CHAR(10),''),CHAR(13),''),'&','</r><r>') + '</r></root>' AS XML) AsXML
)
,TheNodes AS
(
SELECT nodes.node.query('.') AS OneNode
FROM DividedByAmpersand
CROSS APPLY AsXML.nodes('/root/r') AS nodes(node)
)
SELECT SUBSTRING(thePart.content,2,CurlyClose.position-2)
FROM TheNodes
CROSS APPLY(SELECT LTRIM(RTRIM(TheNodes.OneNode.value('(/r)[1]','varchar(max)')))) AS thePart(content)
CROSS APPLY(SELECT CHARINDEX('}',thePart.content,1)) AS CurlyClose(position)
```
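For cross-checking the expected output (the question itself requires pure T-SQL, so this is only a reference), the same extraction in Python is a one-line regex for the innermost `{...}` pairs:

```python
import re

s = '''{local.txt_concept_id} != "" & {local.txt_ICD_code} != "" &
{local.txt_diagnosis_uniq_id} != "" & {local.txt_ICD_desc} != ""&
{FIND( {local.txt_ICD_code} , "asp", 0 )}!=-1&
{LEN( {local.txt_diagnosis_uniq_id})}'''
# innermost braces only: the capture may not itself contain '{' or '}'
names = sorted(set(re.findall(r'\{([^{}]+)\}', s)))
print(names)
```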
|
Finding the text enclosed in "{" and "}" and inserting that into a table
|
[
"",
"sql",
"sql-server",
""
] |
I have this code:
```
SELECT username, first_name, last_name, NVL(salary,0) "salary"
FROM customer
WHERE NVL(salary,0) < AVG(NVL(salary, 0));
```
I'm trying to find out which users have a lower salary than average, and one user has no salary ("null", which I must convert to 0).
The last expression, "avg(nvl(salary,0))", doesn't work and I can't for the life of me figure out why. If I replace the expression with the actual average as a literal number, everything works just fine.
|
You can try to pre-select the average:
```
select username,first_name,last_name,nvl(salary,0) "salary"
from customer
where nvl(salary,0) < (select avg(nvl(salary,0)) from customer);
```
You can use `avg` only in the `select` and `having` clauses; that is why you get an error.
Also, don't use `nvl`; use `coalesce` instead, which should be faster on big data.
|
Here's a method using an analytic function:
```
select username,
first_name,
last_name,
salary
from (select username,
first_name,
last_name,
nvl(salary,0) salary,
avg(nvl(salary,0)) over () avg_all_salaries
from customer)
where salary < avg_all_salaries
```
It makes it easy to also display the average of all salaries, should that ever arise.
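The analytic version ports directly to any engine with window functions; here is a sketch with Python's `sqlite3` (requires SQLite 3.25+ for window functions; `coalesce` replaces Oracle's `nvl`, and the sample data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (username TEXT, salary INT);
    INSERT INTO customer VALUES ('alice', 100), ('bob', 300), ('carol', NULL);
""")
# average over 100, 300 and 0 is ~133.3, so alice and carol qualify
rows = conn.execute("""
    SELECT username, salary FROM (
        SELECT username,
               COALESCE(salary, 0) AS salary,
               AVG(COALESCE(salary, 0)) OVER () AS avg_all_salaries
        FROM customer)
    WHERE salary < avg_all_salaries
    ORDER BY username
""").fetchall()
print(rows)
```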
|
Find value smaller than average in Oracle SQL
|
[
"",
"sql",
"oracle",
"average",
""
] |
I've been playing around with different queries regarding duplicates but this is not really what I need. I do need a list of duplicates but where the value in another column is different.
I'm trying to do this in SQL Server 2012.
I need to get a list of "duplicate" rows where the DocId is the same but they have a different PoId in a table.
```
AuditId|DocMasterId|PoNumber
2224 |105 |11111
2374 |105 |11111
2574 |105 |11112
2624 |106 |232323
2874 |106 |242424
```
The query based on the above should return
105
106
But ideally, if I could list the first and last entry for each different PO based on the same DocMasterId, that would be the ideal solution, so I would end up with
```
AuditId|DocMasterId|PoNumber
2224 |105 |11111
2574 |105 |11112
2624 |106 |232323
2874 |106 |242424
```
Any ideas on how I can achieve this in SQL?
Thanks.
UPDATE:
I should have clarified that I wanted to list only rows that had a PONumber set and I wanted my results sorted by DocMasterId.
Based on Tim's answer, the final result looks like this:
```
WITH CTE AS
(
SELECT AuditId, DocMasterId, PoNumber,
RN_ASC = ROW_NUMBER() OVER (PARTITION BY DocMasterID ORDER BY
PoNumber ASC),
RN_DESC = ROW_NUMBER() OVER (PARTITION BY DocMasterID ORDER BY
PoNumber DESC),
CNT = COUNT(*) OVER (PARTITION BY DocMasterID)
FROM dbo.MyTable
WHERE PONumber IS NOT NULL
)
SELECT AuditId, DocMasterId, PoNumber
FROM CTE
WHERE CNT >= 2
AND (RN_ASC = 1 OR RN_DESC = 1)
ORDER BY DocMasterId
```
|
This approach uses ranking functions and a CTE:
```
WITH CTE AS
(
SELECT AuditId, DocMasterId, PoNumber,
RN_ASC = ROW_NUMBER() OVER (PARTITION BY DocMasterID ORDER BY PoNumber ASC),
RN_DESC = ROW_NUMBER() OVER (PARTITION BY DocMasterID ORDER BY PoNumber DESC),
CNT = COUNT(*) OVER (PARTITION BY DocMasterID)
FROM dbo.TableName
)
SELECT AuditId, DocMasterId, PoNumber
FROM CTE
WHERE CNT >= 2
AND (RN_ASC = 1 OR RN_DESC = 1)
ORDER BY DocMasterId
```
`Demo`
---
Update according to your comments that `NULL` values in `PoNumber` should be excluded and not counted for `CNT`:
```
WITH CTE AS
(
SELECT AuditId, DocMasterId, PoNumber,
RN_ASC = ROW_NUMBER() OVER (PARTITION BY DocMasterID
ORDER BY CASE WHEN PoNumber IS NULL THEN 1 ELSE 0 END ASC,
PoNumber ASC),
RN_DESC = ROW_NUMBER() OVER (PARTITION BY DocMasterID ORDER BY PoNumber DESC),
CNT = SUM(CASE WHEN PoNumber IS NOT NULL THEN 1 END) OVER (PARTITION BY DocMasterID)
FROM dbo.TableName
)
SELECT AuditId, DocMasterId, PoNumber
FROM CTE
WHERE CNT >= 2
AND (RN_ASC = 1 OR RN_DESC = 1)
ORDER BY DocMasterId
```
`Demo` with your sample data, which correctly doesn't return any records.
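As a quick check that the first CTE really returns the first and last PO per `DocMasterId`, here is the same query run against the sample data via Python's `sqlite3` (SQLite 3.25+ for window functions; `AuditId` is added as a tie-breaker so the result stays deterministic when a `PoNumber` repeats):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE audit (AuditId INT, DocMasterId INT, PoNumber TEXT);
    INSERT INTO audit VALUES
        (2224, 105, '11111'), (2374, 105, '11111'), (2574, 105, '11112'),
        (2624, 106, '232323'), (2874, 106, '242424');
""")
rows = conn.execute("""
    WITH CTE AS (
        SELECT AuditId, DocMasterId, PoNumber,
            ROW_NUMBER() OVER (PARTITION BY DocMasterId
                               ORDER BY PoNumber ASC, AuditId ASC)   AS rn_asc,
            ROW_NUMBER() OVER (PARTITION BY DocMasterId
                               ORDER BY PoNumber DESC, AuditId DESC) AS rn_desc,
            COUNT(*) OVER (PARTITION BY DocMasterId) AS cnt
        FROM audit)
    SELECT AuditId, DocMasterId, PoNumber
    FROM CTE
    WHERE cnt >= 2 AND (rn_asc = 1 OR rn_desc = 1)
    ORDER BY DocMasterId, PoNumber
""").fetchall()
print(rows)
```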
|
```
select AuditId, DocMasterId, PoNumber
from ( select *, ROW_NUMBER() OVER (PARTITION BY DocMasterId, PoNumber ORDER BY DocMasterId ASC) as a from tablename ) abc
where a =1
```
I created a partition using DocMasterId and PoNumber, which restarts Row\_Number for each distinct (DocMasterId, PoNumber) pair. Then I eliminated duplicate records using the where condition a = 1.
|
How to get a list of rows that have the same Id but different values in the same field
|
[
"",
"sql",
"sql-server",
""
] |
I've been racking my brain trying to come up with a solution for this, perhaps someone out there will find this an intriguing problem...
I've got a view:
```
[RecordID] [Status] [IsActive] [Company]
25791 NEW Active McDonalds
25792 NEW Terminated Rabble
25792 NEW Active Aetna
```
There are two cases
1. Only one record ID -- Simply return the record
2. Up to four records with the same ID -- IsActive and Company = 'Multiple'
This reduction must occur on a dataset of close to a million records. Here's the output I'm looking for:
```
[RecordID] [Status] [IsActive] [Company]
25791 NEW Active McDonalds
25792 NEW Multiple Multiple
```
How do I select a row, but select a summary row when there is more then one record ID?
|
Perhaps this would work:
```
SELECT
RecordID
,MAX(Status)
,CASE WHEN COUNT(IsActive) > 1
THEN 'Multiple'
ELSE MAX(IsActive)
END
,CASE WHEN COUNT(Company) > 1
THEN 'Multiple'
ELSE MAX(Company)
END
FROM [Table_1]
GROUP BY RecordID
```
[](https://i.stack.imgur.com/o0Ao9.png)
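The collapse-to-'Multiple' logic is easy to verify with the sample rows; here is a sketch using Python's `sqlite3` (using `COUNT(*)` for the group size):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE v (RecordID INT, Status TEXT, IsActive TEXT, Company TEXT);
    INSERT INTO v VALUES
        (25791, 'NEW', 'Active',     'McDonalds'),
        (25792, 'NEW', 'Terminated', 'Rabble'),
        (25792, 'NEW', 'Active',     'Aetna');
""")
rows = conn.execute("""
    SELECT RecordID,
           MAX(Status),
           CASE WHEN COUNT(*) > 1 THEN 'Multiple' ELSE MAX(IsActive) END,
           CASE WHEN COUNT(*) > 1 THEN 'Multiple' ELSE MAX(Company) END
    FROM v
    GROUP BY RecordID
    ORDER BY RecordID
""").fetchall()
print(rows)
```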
|
This will work if you have more than one 'Status'. It uses a CTE (it could also be a temporary table) in which you count the status column:
```
;with tblCount(ID, Sts, [Count])
as(
select RecordId, Status, count(Status)
from #tblTest -- Whatever table name
group by RecordId, Status
)
select distinct
RecordId,
Status,
case
when [count] > 1 then 'Multiple'
else IsActive
end,
case
when [count] > 1 then 'Multiple'
else Company
end
from tblCount t
inner join #tblTest test on ID = RecordId
and Status = Sts
```
[](https://i.stack.imgur.com/9k2HP.png)
|
Reduce a SQL Select based on Duplicates
|
[
"",
"sql",
"sql-server-2008",
""
] |
I have two tables, Countries and Sums:
```
+------+---------+
| code | country |
+------+---------+
| 01 | France |
| 02 | Germany |
| 02 | Austria |
| 03 | Belgium |
| 04 | Belgium |
| 04 | Spain |
| 05 | Italy |
+------+---------+
+------+-----+
| code | sum |
+------+-----+
| 01 | 500 |
| 02 | 400 |
| 03 | 300 |
| 04 | 200 |
+------+-----+
```
I want to create a table code-sum-country. It's very easy of course, but I need to have exactly the same number of rows as in the table Sums.
```
+------+-----+---------+
| code | sum | country |
+------+-----+---------+
| 01 | 500 | France |
| 02 | 400 | Austria |
| 02 | 400 | Germany |
| 03 | 300 | Belgium |
| 04 | 200 | Spain |
| 04 | 200 | Belgium |
+------+-----+---------+
```
I want the table above to have unique code values, so I need to remove some rows; it doesn't matter which ones. My goal is to have only one row per code. For example, the row
```
| 04 | 200 | Spain |
```
can remain or be deleted.
How can I do that?
|
Try this:
```
DELETE
FROM code_sum_country
WHERE code in
(SELECT code
FROM code_sum_country
GROUP BY code
HAVING COUNT (code) > 1)
AND country NOT IN
(SELECT MIN(country)
FROM code_sum_country
GROUP BY code
HAVING COUNT (code) > 1)
```
This will retain the country whose name is minimum in alphabetical order.
Change `MIN(country)` to `MAX(country)` if you want to retain the maximal ones.
Hope it helps :)
|
If you want to query `sums` and get one arbitrary country, you can use a correlated subquery:
```
select s.*,
(select top 1 c.country
from countries as c
where s.code = c.code
) as country
from sums as s;
```
|
How to delete some similar rows from a table?
|
[
"",
"sql",
"ms-access",
""
] |
I have a string field `NAME` and some rows.
How can I ORDER these rows by `NAME`, except for one value with ID = 3, which should always sort last?
So I need to add one value to the end:
```
A
B
C
KU-KU
```
|
You can just put the logic in the `order by`:
```
order by (id = 3), name
```
MySQL treats a boolean expression as a number in a numeric context, with true being 1 and false being 0. Since 0 sorts before 1 in (default) ascending order, the row with `id = 3` goes to the end.
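SQLite treats booleans the same way, so the trick is easy to try from Python; note that ascending order (the default) is what puts the `id = 3` row last (sample names invented to match the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (id INT, name TEXT);
    INSERT INTO t VALUES (1, 'A'), (2, 'B'), (3, 'KU-KU'), (4, 'C');
""")
# false (0) sorts before true (1), so the id = 3 row lands at the end
names = [r[0] for r in conn.execute("SELECT name FROM t ORDER BY (id = 3), name")]
print(names)
```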
|
Create a virtual column that handles the sort rule you require, such as this fairly trivial example:
```
SELECT CASE WHEN id = 3 THEN "~~~~~~~~~" ELSE name END AS sortfield,
name, ...
FROM ...
ORDER BY 1, 2
```
|
How to order rows except one row in Mysql?
|
[
"",
"mysql",
"sql",
""
] |
My query works and gives correct results, but it orders the two SELECT result sets separately and then concatenates the second onto the end of the first, and I don't know why! I want both sets ordered together.
```
(SELECT
m.m_name AS 'Merchant',
log.ship_day AS 'Date',
log.num_orders AS '# Orders',
'Sales Exec' AS 'Dept',
CONCAT(ciii.i_fname, ' ', ciii.i_lname) AS 'Rep',
log.m_sales_executive_days AS 'Days',
CONCAT(log.m_sales_executive_level,
' (',
log.m_sales_executive_com,
'*',
log.num_orders,
')') AS 'Level',
log.m_sales_executive_payout AS 'Payout'
FROM
commissions.commissions_log AS log
LEFT JOIN
acf_rds_new.merchants m ON m.m_id = log.m_id
LEFT JOIN
acf_rds_new.users uuu ON uuu.u_id = log.m_sales_executive
LEFT JOIN
acf_rds_new.contact_info ciii ON uuu.u_i_id = ciii.i_id
LEFT JOIN
acf_rds_new.users uuuu ON uuuu.u_id = log.m_account_manager
LEFT JOIN
acf_rds_new.contact_info ciiii ON uuuu.u_i_id = ciiii.i_id)
UNION ALL
(SELECT
m.m_name AS 'Merchant',
log.ship_day AS 'Date',
log.num_orders AS '# Orders',
'Account Exec' AS 'Dept',
CONCAT(ciiii.i_fname, ' ', ciiii.i_lname) AS 'Rep',
log.m_account_manager_days AS 'Days',
CONCAT(log.m_account_manager_level,
' (',
log.m_account_manager_com,
'*',
log.num_orders,
')') AS 'Level',
log.m_account_manager_payout AS 'Payout'
FROM
commissions.commissions_log AS log
LEFT JOIN
acf_rds_new.merchants m ON m.m_id = log.m_id
LEFT JOIN
acf_rds_new.users uuu ON uuu.u_id = log.m_sales_executive
LEFT JOIN
acf_rds_new.contact_info ciii ON uuu.u_i_id = ciii.i_id
LEFT JOIN
acf_rds_new.users uuuu ON uuuu.u_id = log.m_account_manager
LEFT JOIN
acf_rds_new.contact_info ciiii ON uuuu.u_i_id = ciiii.i_id)
ORDER BY 'Date' ASC , 'Merchant' ASC , '# Orders' DESC
```
|
If you want the entire result set to be ordered, you should wrap it in another `select` statement.
```
select * from
(your query with union all) t
order by `Date` ASC, `Merchant` ASC, `# Orders` DESC
```
|
If you want the two subqueries to be ordered, then don't use constants in your `order by` statement. The proper escape character is the backtick, but you only need it for the last column:
```
ORDER BY Date ASC , Merchant ASC , `# Orders` DESC
```
As a general rule, do not use single quotes for column names. *Only* use single quotes for string and date constants. Otherwise, you are prone to make errors, such as the one in your query.
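The difference between a quoted string and a bare identifier in `ORDER BY` is easy to demonstrate; here is a sketch using Python's `sqlite3`, where ordering by the string literal leaves the rows unsorted because every row gets the same constant sort key:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (merchant TEXT);
    INSERT INTO orders VALUES ('zeta'), ('alpha'), ('mid');
""")
# 'merchant' in single quotes is a constant: no real sorting happens
unsorted = [r[0] for r in conn.execute("SELECT merchant FROM orders ORDER BY 'merchant'")]
# the bare identifier actually sorts the column
ordered = [r[0] for r in conn.execute("SELECT merchant FROM orders ORDER BY merchant")]
print(unsorted, ordered)
```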
|
MySQL union sorting select statements separately instead of combining
|
[
"",
"mysql",
"sql",
"subquery",
"sql-order-by",
"union-all",
""
] |
I am writing the query below, which divides the results of two select queries to calculate a percentage. But I am getting the error `not a single-group group function`.
```
select CASE WHEN COUNT(*) = 0 THEN 0 ELSE round((r.cnt / o.cnt)*100,3) END from
(Select count(*) as cnt from O2_CDR_HEADER WHERE STATUS NOT IN(0,1) and DATE_CREATED > (SYSDATE - 1)) r cross join
(Select count(*) as cnt from O2_CDR_HEADER WHERE DATE_CREATED > (SYSDATE - 1)) o;
```
|
You don't need to use joins. If I were you, I'd do:
```
select case when count(*) = 0 then 0
else round(100 * count(case when status not in (0, 1) then 1 end) / count(*), 3)
end non_0_or_1_status_percentage
from o2_cdr_header
where date_created > sysdate - 1;
```
Here's a simple demo:
```
with t as (select 1 status from dual union all
select 2 status from dual union all
select 3 status from dual union all
select 2 status from dual union all
select 4 status from dual union all
select 5 status from dual union all
select 6 status from dual union all
select 7 status from dual union all
select 1 status from dual union all
select 0 status from dual union all
select 1 status from dual)
select case when count(*) = 0 then 0
else round(100 * count(case when status not in (0, 1) then 1 end) / count(*), 3)
end col1
from t
where 1=0;
COL1
----------
0
```
---
And just in case you aren't sure that doing the filtering of the count in the case statement returns the same as when you filter in the where clause, here's a demo that proves it:
```
with t as (select 1 status from dual union all
select 2 status from dual union all
select 3 status from dual union all
select 2 status from dual union all
select 4 status from dual union all
select 5 status from dual union all
select 6 status from dual union all
select 7 status from dual union all
select 1 status from dual union all
select 0 status from dual union all
select 1 status from dual)
select 'using case statement' how_count_filtered,
count(case when status not in (0, 1) then 1 end) cnt
from t
union all
select 'using where clause' how_count_filtered,
count(*) cnt
from t
where status not in (0, 1);
HOW_COUNT_FILTERED CNT
-------------------- ----------
using case statement 7
using where clause 7
```
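The conditional-count pattern isn't Oracle-specific; here is a sketch of the same percentage calculation using Python's `sqlite3` (2 of the 5 sample rows have a status other than 0 or 1):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE o2_cdr_header (status INT);
    INSERT INTO o2_cdr_header VALUES (0), (1), (2), (3), (1);
""")
pct = conn.execute("""
    SELECT CASE WHEN COUNT(*) = 0 THEN 0
                ELSE ROUND(100.0 * COUNT(CASE WHEN status NOT IN (0, 1) THEN 1 END)
                           / COUNT(*), 3)
           END
    FROM o2_cdr_header
""").fetchone()[0]
print(pct)  # 2 non-(0,1) rows out of 5
```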
|
You are referencing an aggregate function (`COUNT(*)`) and an individual column expression (`r.cnt` and `o.cnt`) in the same SELECT query. This is not valid SQL unless a GROUP BY clause is added for the relevant individual columns.
It would be easier to provide a valid alternative if you could clarify what you'd like this query to return (given a sample schema and set of data). As a guess, I'd say you can simply substitute `COUNT(*)` with `o.cnt` to avoid the division-by-0 issue. If there's some other logic expected to be present here, you'd need to clarify what that is.
|
not a single-group group function using select case statement
|
[
"",
"sql",
"oracle",
"group-by",
"case-when",
""
] |
I have a procedure where the user inputs a month, e.g. 'JAN', to extract data for that month. I want to compare the input month with the month part of a date. This is what I have so far.
```
CREATE PROCEDURE SavingsAccountsAllDetail
(@Yr INT=NULL,
@Mon CHAR(3)=NULL)
AS
BEGIN
SET NOCOUNT ON;
DECLARE @Year INT
DECLARE @Month CHAR(3)
SET @Year = @Yr
SET @Month = @Mon
```
and the comparing part:
```
datepart(year, A.DateOpen)=@Year AND datepart(month, A.DateOpen)=datepart(MM, @Month)
```
But when I execute my procedure it says
> Conversion failed when converting date and/or time from character
> string. Severity 16
Any idea on how to proceed?
|
You're using `DATEPART` but passing it your parameter `@Mon`, which is a `CHAR(3)`; SQL Server cannot convert this `CHAR(3)` to a date, hence the error.
Your `@Mon` parameter should be an `Int` like `@Yr`, and your query becomes:
```
datepart(year, A.DateOpen)=@Year AND datepart(month, A.DateOpen) = @Month
```
|
Your `@Year` and `@Month` variables are not dates. They are string representations of parts of a date.
When you extract the dateparts from `A.DateOpen` you can directly compare that to those variables:
```
datepart(year, A.DateOpen)=@Year AND datepart(month, A.DateOpen)=@Month
```
However, `datepart` will return a numerical value and you want to compare to text (e.g. 'JAN'). Consider adding the `DateName()` function and changing the line to:
```
datepart(year, A.DateOpen)=@Year AND left(datename(month, A.DateOpen), 3)=@Month
```
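If you'd rather keep the `CHAR(3)` parameter and convert it to a month number on the way in, the mapping is tiny; here is a sketch of that idea in Python (in T-SQL the equivalent could be a lookup table or a `CHARINDEX` into a month list):

```python
import calendar

# calendar.month_abbr is ['', 'Jan', 'Feb', ...] in the default C locale
month_num = {abbr.upper(): i for i, abbr in enumerate(calendar.month_abbr) if abbr}
print(month_num['JAN'], month_num['DEC'])
```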
|
Extract month number?
|
[
"",
"sql",
"sql-server",
""
] |
How can I select all rows, and where a row exists more than once (based on ID), take just one (it doesn't matter which one)?
So I have:
```
ID name val
===========
1 | aa | 32
2 | aa | 32
3 | ad | 32
3 | zy | 11
4 | rr | 21
5 | oi | 44
5 | df | 11
```
result should be
```
ID name val
===========
1 | aa | 32
2 | aa | 32
3 | zy | 11
4 | rr | 21
5 | df | 11
```
It doesn't matter if `5 | df | 11` or `5 | oi | 44` is taken.
On Postgres, `SELECT DISTINCT ON (ID)` would be my choice.
Many thanks in advance
|
The most convenient way to do it is using Proc Sort:
```
PROC SORT DATA=HAVE OUT=WANT NODUPKEY;
BY ID;
RUN;
```
Otherwise if it is presorted, then @Reeza's solution is nice.
If you have to use Proc SQL, then an undocumented feature may need to be used: monotonic():
```
PROC SQL;
CREATE TABLE WANT AS
SELECT * FROM HAVE
GROUP BY ID
HAVING MONOTONIC()=MIN(MONOTONIC());
QUIT;
```
|
A SAS data step answer, using First/Last processing
```
data want;
set have;
by id;
if first.id;
run;
```
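The `first.id` logic maps to ordinary group-wise "take the first row" processing; here is a sketch of the same idea in Python (data sorted by `id`, as SAS BY-group processing requires):

```python
from itertools import groupby

# rows sorted by id, mirroring the HAVE dataset
rows = [(1, 'aa', 32), (2, 'aa', 32), (3, 'ad', 32), (3, 'zy', 11),
        (4, 'rr', 21), (5, 'oi', 44), (5, 'df', 11)]
# keep the first row of each id group (which row survives doesn't matter here)
want = [next(grp) for _, grp in groupby(rows, key=lambda r: r[0])]
print(want)
```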
|
SELECT DISTINCT ON in SAS PROC SQL
|
[
"",
"sql",
"sas",
"proc-sql",
""
] |
I have 2 requirements:
1. I want to write a SQL command to fetch the column names from a Participant table where the column name matches the word `%Track%`.
2. I want to fetch all the table names in the database that contain a column name LIKE `%Track%`.
`USER_TAB_COLUMNS` does not run; it gives the error `Invalid object name 'USER_TAB_COLUMNS'.`
```
SELECT [name] AS [Column Name]
FROM syscolumns
WHERE id = (SELECT id FROM sysobjects WHERE type = 'V' AND [Name] = 'Participant')
```
This runs, but no records are fetched by the query.
Please suggest a query that will give me the desired result.
|
> I want to fetch all the Table Name in the Database where it contains Column Name LIKE %Track%
You can try this
```
SELECT c.name AS ColumnName, t.name AS TableName
FROM sys.columns c
JOIN sys.tables t ON c.object_id = t.object_id
WHERE c.name LIKE '%Track%'
```
---
> 1. I want to write a SQL Command to fetch the Column Names from a Participant Table where the Column Name must be matching the word %Track%
```
SELECT *
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'Participant' and column_name like '%Track%'
```
And if you want to find in all the tables the
```
SELECT *
FROM INFORMATION_SCHEMA.COLUMNS
WHERE column_name like '%Track%'
```
|
try this
```
SELECT *
FROM INFORMATION_SCHEMA.COLUMNS
WHERE
COLUMN_NAME = 'Your column name'
```
|
Fetch Column Name from Table
|
[
"",
"sql",
"sql-server",
""
] |
I need to connect two tables by two columns.
The first table has an integer primary key, ID.
The second table has a varchar2 column which contains the same primary key, but embedded in the middle of a string.
> For example, in the first table I have an integer column ID with values
> like 1234. In the second table I have a column with strings like
> 'abcdefgh - 1234 (ijklmno)'.
Is there a way to use that nested key?
|
Use [regexp\_replace](http://docs.oracle.com/cd/B12037_01/server.101/b10759/functions115.htm) to strip everything that isn't a digit, so only the embedded number remains. Try this query:
```
select * from table1
where table1.ID in (select regexp_replace(column2, '[^0-9]') from table2)
```
syntax of regexp\_replace is
`REGEXP_REPLACE(string, target [, replacement [, position [, occurrence [, regexp_modifiers]]]])`
|
From Oracle 11g you can use a virtual column:
[SQL Fiddle](http://sqlfiddle.com/#!4/30645/1)
**Oracle 11g R2 Schema Setup**:
```
CREATE TABLE TableA (
id NUMBER(4,0) PRIMARY KEY
);
INSERT INTO TableA VALUES ( 1234 );
CREATE TABLE TableB (
data VARCHAR2(25) NOT NULL
CHECK ( REGEXP_LIKE( data, '\w+ - (0|[1-9]\d{0,3}) \(\w+\)' ) ),
id NUMBER(4,0) GENERATED ALWAYS AS ( TO_NUMBER( REGEXP_SUBSTR( data, '\w+ - (0|[1-9]\d{0,3}) \(\w+\)', 1, 1, null, 1 ) ) ) VIRTUAL,
CONSTRAINT TableB__ID__FK FOREIGN KEY ( id ) REFERENCES TableA ( id )
);
INSERT INTO TableB ( data ) VALUES ( 'abcdefgh - 1234 (ijklmno)' );
```
**Query 1**:
```
SELECT * FROM TableB
```
**[Results](http://sqlfiddle.com/#!4/30645/1/0)**:
```
| DATA | ID |
|---------------------------|------|
| abcdefgh - 1234 (ijklmno) | 1234 |
```
|
Is there a way to use a part of varchar2 string as a primary key in oracle sql?
|
[
"",
"sql",
"oracle",
""
] |
I'm learning SQL, and I'm trying to select all the students who have the lowest test score from the table below:
```
Given TABLE STUDENTS:
id | name | test_score
1 | John | 89
2 | Marry | 0
3 | Lena | 100
4 | Peter | 0
```
I want to select both Marry and Peter because they have the lowest test score. So far I have:
```
SELECT S.name, MIN(S.test_score) FROM STUDENTS S GROUP BY S.test_score;
```
Somehow, I get the result as below:
```
John | 89
Marry | 0
Lena | 100
```
I just want to print out only the name as :
```
Marry
Peter
```
Any hints on how I can fix my query, and what am I doing wrong here?
Thank you
|
```
SELECT name FROM tableName
WHERE test_score = (SELECT MIN(test_score) FROM tableName)
```
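The subquery approach is easy to verify against the sample table; here is a sketch using Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE students (id INT, name TEXT, test_score INT);
    INSERT INTO students VALUES
        (1, 'John', 89), (2, 'Marry', 0), (3, 'Lena', 100), (4, 'Peter', 0);
""")
names = [r[0] for r in conn.execute("""
    SELECT name FROM students
    WHERE test_score = (SELECT MIN(test_score) FROM students)
    ORDER BY id
""")]
print(names)
```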
|
```
SELECT
*
FROM
STUDENTS
HAVING
min(test_score)
```
|
How to select the students who have lowest test scores?
|
[
"",
"mysql",
"sql",
""
] |
I've got the statement below to return flow id counts for today's date, joined to another table so we can see a little more metadata. With the left join, though, it's not showing the hubs that *don't* have data for today. I'm assuming it may be because of the where clause, but I'm not sure how to fix that. Suggestions?
```
SELECT O.DATE,
R.HUB,
COUNT (O.FLOWID) AS point_counts
FROM table_r R
LEFT JOIN table_o O ON O.FLOWID = R.FLOWID
WHERE O.DATE = CONVERT(date,GETDATE())
GROUP BY O.DATE, R.HUB
ORDER BY R.HUB
```
|
Move the `where` condition to the `join` clause.
```
LEFT JOIN table_o O ON O.FLOWID = R.FLOWID AND O.DATE = CONVERT(date,GETDATE())
```
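The difference between filtering in `WHERE` and filtering in the `ON` clause shows up immediately on a small sample; here is a sketch with Python's `sqlite3` (invented hub/flow data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table_r (hub TEXT, flowid INT);
    CREATE TABLE table_o (flowid INT, d TEXT);
    INSERT INTO table_r VALUES ('H1', 1), ('H2', 2);
    INSERT INTO table_o VALUES (1, '2024-01-01');  -- no row for H2 today
""")
# filter in WHERE: the NULL from the outer join fails the test, so H2 disappears
in_where = conn.execute("""
    SELECT r.hub, COUNT(o.flowid) FROM table_r r
    LEFT JOIN table_o o ON o.flowid = r.flowid
    WHERE o.d = '2024-01-01'
    GROUP BY r.hub ORDER BY r.hub""").fetchall()
# filter in ON: H2 survives with a zero count
in_join = conn.execute("""
    SELECT r.hub, COUNT(o.flowid) FROM table_r r
    LEFT JOIN table_o o ON o.flowid = r.flowid AND o.d = '2024-01-01'
    GROUP BY r.hub ORDER BY r.hub""").fetchall()
print(in_where, in_join)
```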
|
Because you are using the filter: `O.DATE = CONVERT(date,GETDATE())`, essentially making it an `INNER JOIN`, since the filter is applied on a column from the outer table.
|
Why won't NULL values return in a LEFT JOIN?
|
[
"",
"sql",
"sql-server",
""
] |
I need to reset the AutoIncrement field to 1 for tables in the database. I found that this can be done for a single table using:
```
DBCC CHECKIDENT (mytable, RESEED, 0)
```
How can I run this for all tables except tables with names that start with "\_"?
|
I'm using the following script to do that. Maybe using a `cursor` is not very performant, but it does not take much time.
```
declare @TableName varchar(100)
declare cur_Cursor CURSOR STATIC
FOR
select TABLE_NAME
from information_schema.tables
where TABLE_TYPE = 'BASE TABLE'
and OBJECTPROPERTY(OBJECT_ID(TABLE_NAME), 'TableHasIdentity') = 1
OPEN cur_Cursor
FETCH NEXT FROM cur_Cursor into @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
IF LEFT(@TableName, 1) != '_'
DBCC CHECKIDENT(@TableName, RESEED, 0)
FETCH NEXT FROM cur_Cursor into @TableName
END
CLOSE cur_Cursor
DEALLOCATE cur_Cursor
```
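The same reset-everything-except idea can be sketched on SQLite from Python, where autoincrement counters live in the `sqlite_sequence` table (the part that carries over to T-SQL is escaping the literal underscore, since `_` is a `LIKE` wildcard):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders   (id INTEGER PRIMARY KEY AUTOINCREMENT, x TEXT);
    CREATE TABLE _staging (id INTEGER PRIMARY KEY AUTOINCREMENT, x TEXT);
    INSERT INTO orders (x) VALUES ('a'), ('b');
    INSERT INTO _staging (x) VALUES ('a');
    DELETE FROM orders;
    DELETE FROM _staging;
""")
# reset the counter for every table whose name does NOT start with a
# literal underscore ('_' is a LIKE wildcard, hence the ESCAPE clause)
for (name,) in conn.execute(
        r"SELECT name FROM sqlite_sequence WHERE name NOT LIKE '\_%' ESCAPE '\'").fetchall():
    conn.execute("UPDATE sqlite_sequence SET seq = 0 WHERE name = ?", (name,))
conn.execute("INSERT INTO orders (x) VALUES ('c')")
conn.execute("INSERT INTO _staging (x) VALUES ('c')")
new_ids = {t: conn.execute(f"SELECT MAX(id) FROM {t}").fetchone()[0]
           for t in ('orders', '_staging')}
print(new_ids)  # orders restarts at 1; _staging keeps counting
```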
|
You can try it like this:
```
SELECT
IDENT_SEED(TABLE_NAME) AS Seed,
IDENT_INCR(TABLE_NAME) AS Increment,
IDENT_CURRENT(TABLE_NAME) AS Current_Identity,
TABLE_NAME,
'DBCC CHECKIDENT(' + TABLE_NAME + ', RESEED, ' + CAST(IDENT_SEED(TABLE_NAME) AS VARCHAR(10)) + ')'
FROM
INFORMATION_SCHEMA.TABLES
WHERE
OBJECTPROPERTY(OBJECT_ID(TABLE_NAME), 'TableHasIdentity') = 1
AND TABLE_TYPE = 'BASE TABLE'
AND TABLE_NAME NOT LIKE '[_]%'
```
|
Reset AutoIncrement for all tables not starting with "_"
|
[
"",
"sql",
"sql-server",
"auto-increment",
""
] |
I have two tables in a join. I have to get the mapped records when the second table has records. But when the second table has no records, I need all records from the first table.
```
Create Table #temp1
(Id1 int)
Create Table #temp2
(Id2 int)
Insert into #temp1 Values(1),(2),(3),(4)
Insert into #temp2 Values(1),(2)
Select * from #temp1 A
Inner Join #temp2 B On A.Id1=B.Id2
```
**It gives the correct output when I have records in the second table.**
[](https://i.stack.imgur.com/05Iy2.jpg)
**But when I have no records in the second table, I need to get all records from the first table.**
```
Delete #temp2
Select * from #temp1 A
Inner Join #temp2 B On A.Id1=B.Id2
```
This query returns no records. I tried a `LEFT OUTER JOIN`; it gives all records, but then I don't get the same result as in the first scenario.
```
Drop Table #temp1
Drop Table #temp2
```
**I need output like this.**
[](https://i.stack.imgur.com/5Gs7V.jpg)
Thanks in advance. If the question is not clear, please ask me.
|
If you only want columns from the first table:
```
select a.*
from #temp1 a
where exists (select 1 from #temp2 b where b.id2 = a.id1) or
not exists (select 1 from #temp2 b);
```
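As a quick sanity check of the `EXISTS` / `NOT EXISTS` logic, here's a self-contained SQLite demonstration (Python's `sqlite3`, with the question's sample data):

```python
import sqlite3

# Demonstration of the EXISTS / NOT EXISTS approach using SQLite
# (table and column names mirror the ones in the question).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE temp1 (Id1 INTEGER);
    CREATE TABLE temp2 (Id2 INTEGER);
    INSERT INTO temp1 VALUES (1), (2), (3), (4);
    INSERT INTO temp2 VALUES (1), (2);
""")

query = """
    SELECT a.Id1 FROM temp1 a
    WHERE EXISTS (SELECT 1 FROM temp2 b WHERE b.Id2 = a.Id1)
       OR NOT EXISTS (SELECT 1 FROM temp2 b)
    ORDER BY a.Id1
"""

# With rows in temp2: only the mapped ids come back.
matched = [r[0] for r in conn.execute(query)]
print(matched)            # [1, 2]

# With temp2 empty: every row of temp1 comes back.
conn.execute("DELETE FROM temp2")
all_rows = [r[0] for r in conn.execute(query)]
print(all_rows)           # [1, 2, 3, 4]
```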
If you wanted the extra columns from the second table, you could use `union all`:
```
Select a.*, b.*
from #temp1 a Inner Join
#temp2 b
On a.Id1 = b.Id2
union all
select a.*, b.*
from #temp1 a left join
#temp2 b
on a.id1 = b.id2
where not exists (select 1 from #temp2);
```
|
Actually you *can* use an `OUTER JOIN`:
```
SELECT Id1 FROM #temp1 t1
LEFT OUTER JOIN #temp2 t2
ON t1.Id1 = t2.Id2
```
|
Get all records from Table When no records in Mapping Table
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
Suppose I have the following table:
```
ID Bag Event Count Driver Time
1 XYZ Pick-up 10 A1 10:30 AM
2 XYZ Trnsfr-out 10 A1 10:40 AM
3 XYZ Trnsfr-in 10 A2 10:40 AM
4 XYZ Drop 10 A2 10:50 AM
5 ABC Pick-up 10 B1 10:30 AM
6 ABC Trnsfr-out 10 B1 10:40 AM
7 ABC Trnsfr-in 10 B2 10:40 AM
8 ABC Trnsfr-out 10 B2 10:50 AM
```
Is there any way I can get the drivers for MAX(ID) and MIN(ID) for a particular bag using the same query?
I have the following query now:
```
Select Bag, Count, Driver as FinalDriver,TIME from Bagtable where ID in (Select Max(ID) from Bagtable Group By Bag)
```
The Result is
```
Bag Count FinalDriver TIME
XYZ 10 A2 10:50 AM
ABC 10 B2 10:50 AM
```
I want the result to looks something like this:
The values for Initial driver should be obtained by using the min(ID) for a specific bag.
```
Bag Count InitialDriver FinalDriver TIME
XYZ 10 A1 A2 10:50 AM
ABC 10 B1 B2 10:50 AM
```
Kindly help me with the SQL for the above. Thanks
|
If you are using Oracle (you tagged this question with both Oracle and SQL Server which are two different databases) you could use:
```
select bag,
count,
min(decode(id, min_bag_id, driver, null)) as initialdriver,
min(decode(id, max_bag_id, driver, null)) as finaldriver,
min(decode(id, max_bag_id, time, null)) as time
from (select id,
bag,
count,
driver,
time,
min(id) over(partition by bag) as min_bag_id,
max(id) over(partition by bag) as max_bag_id
from tbl)
group by bag,
count
```
|
You can do it with `aggregate` functions.
[Fiddle with sample data](http://sqlfiddle.com/#!6/970ec/1)
```
select b.bag, max(b.count) as count,
max(case when time = mintime then driver end) as initialdriver,
max(case when time = maxtime then driver end) as finaldriver,
max(x.maxtime) as time
from
(select bag, min(TIME) as mintime, max(time) as maxtime
from Bagtable
group by bag) x
join Bagtable b
on x.bag = b.bag
group by b.bag
```
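If you want to verify the conditional-aggregation idea without Oracle or SQL Server, here's a small SQLite sketch using MIN(ID)/MAX(ID) per bag (the question's data, with times simplified to strings):

```python
import sqlite3

# SQLite demonstration: pick the initial and final driver per bag by joining
# the per-bag min/max ID back to the table, then using conditional aggregation.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Bagtable (ID INTEGER, Bag TEXT, Driver TEXT, T TEXT);
    INSERT INTO Bagtable VALUES
      (1,'XYZ','A1','10:30'),(2,'XYZ','A1','10:40'),
      (3,'XYZ','A2','10:40'),(4,'XYZ','A2','10:50'),
      (5,'ABC','B1','10:30'),(6,'ABC','B1','10:40'),
      (7,'ABC','B2','10:40'),(8,'ABC','B2','10:50');
""")

rows = conn.execute("""
    SELECT b.Bag,
           MAX(CASE WHEN b.ID = x.min_id THEN b.Driver END) AS InitialDriver,
           MAX(CASE WHEN b.ID = x.max_id THEN b.Driver END) AS FinalDriver
    FROM (SELECT Bag, MIN(ID) AS min_id, MAX(ID) AS max_id
          FROM Bagtable GROUP BY Bag) x
    JOIN Bagtable b ON b.Bag = x.Bag
    GROUP BY b.Bag
    ORDER BY b.Bag
""").fetchall()
print(rows)
```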
|
Extract Max and Min values of a column from a table in the same row
|
[
"",
"sql",
"oracle",
"sql-server-2008-r2",
""
] |
I'm trying to pass a dynamic SQL statement to spool out to a text file using SQL\*Plus, but I can't seem to execute the select statement I'm generating.
```
set linesize 10000 pagesize 0 embedded on
set heading off feedback off verify off trimspool on trimout on termout off
set underline off
COLUMN gen_sql NEW_VALUE gen_sql_
SELECT 'SELECT * FROM USER_TAB_COLS WHERE ROWNUM < 10' gen_sql_ FROM DUAL;
SPOOL 'myfilename.csv'
EXECUTE IMMEDIATE &gen_sql_
SPOOL OFF
/
```
I can't seem to use `EXECUTE IMMEDIATE`. Is there another way to execute the results of the select statement?
MORE DETAIL:
I have a set of views whose output I'd like to generate as *formatted* CSV files. I'm using dynamic SQL to create the formatting essentially. I generate something similar to:
`SELECT TRIM(col1)||','||TRIM(col2)...FROM {myview}`
I'm using the following to generate it this way:
```
COLUMN gen_sql NEW_VALUE gen_sql_
SELECT 'SELECT ' || LISTAGG ('TRIM('||COLUMN_NAME||')', '||'',''|| ')
WITHIN GROUP (ORDER BY COLUMN_ID) gen_sql FROM...
```
Anyway, I'm able to generate this SQL statement and store it into a SQL\*Plus variable, but I just need to execute it after the SPOOL statement so that it will print to the file. I'm not sure how to execute it. Normal statements work, such as:
```
SPOOL 'myfilename.csv'
SELECT 1 col1 FROM DUAL;
SPOOL OFF
/
```
So, it would seem reasonable that I could do something similar, executing the contents of my variable like:
```
SPOOL 'myfilename.csv'
--- RUN MY DYNAMIC SQL ----
SPOOL OFF
/
```
|
I think this is what you're trying to achieve:
```
set linesize 10000 pagesize 0 embedded on
set heading off feedback off verify off trimspool on trimout on termout off
set underline off
SPOOL myfilename.sql
SELECT 'SELECT table_name||'',''||column_name FROM USER_TAB_COLS WHERE ROWNUM < 10;' gen_sql_ FROM DUAL;
SPOOL OFF
spool results.csv
@myfilename.sql
SPOOL OFF
```
I.e. first you spool the results of your query into a file, and then once the spool is complete, you call the script you just created, spooling the results of that into a separate file.
|
I believe you'll find `EXECUTE IMMEDIATE` is a PL/SQL command, so cannot be used directly in SQL or SQL\*plus.
<http://docs.oracle.com/cd/B12037_01/appdev.101/b10807/13_elems017.htm>
Also, `SPOOL` is a SQL\*Plus command, and cannot be used in PL/SQL ..
So you have a problem there ;)
Can you step back a bit and explain what it is you're trying to do?
What are the requirements you have?
|
Oracle Spool using Dynamic SQL
|
[
"",
"sql",
"oracle",
""
] |
If I limit my user's input SQL statement so that it starts with SELECT, can any injection attacks be run in that case? Am I leaving a different security hole (beyond getting access to all data that the schema has permission to access)?
**Edit**
What if I also ban semi-colon(;) ?
|
In short, yes. A SQL injection attack given a simple select:
```
var sqlCommand = "SELECT * FROM Table1 WHERE Field = '" & fieldValue & "';";
```
Could be modified by replacing the value of the variable `fieldValue` with the following:
```
fieldValue = "'; DELETE FROM Table1 WHERE 0 = 0 OR '' <> '";
```
Which when the script is run will result in two statements being created and executed:
```
SELECT * FROM Table1 WHERE Field = ''; DELETE FROM Table1 WHERE 0 = 0 OR '' <> '';
```
This is just a simple example, any number of statements could be run that would insert, update, delete, etc...
To expand upon your edit about stripping semicolons, yes by changing the `fieldValue` variable to something like:
```
fieldValue = "' OR 1 = 1 OR '' <> '";
```
You would end up with the following:
```
SELECT * FROM Table1 WHERE Field = '' OR 1 = 1 OR '' <> '';
```
Which, if that were a user table, could run the risk of allowing elevated privileges or logging in as an incorrect user.
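For completeness, the usual fix is to bind the value as a parameter instead of concatenating it. A small Python/SQLite demonstration (the table contents are made up):

```python
import sqlite3

# Parameterized queries treat malicious input as a plain string value,
# not as SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Table1 (Field TEXT)")
conn.execute("INSERT INTO Table1 VALUES ('safe')")

field_value = "' OR 1 = 1 OR '' <> '"

# A bound parameter matches nothing, because no Field equals that literal text.
rows = conn.execute(
    "SELECT * FROM Table1 WHERE Field = ?", (field_value,)
).fetchall()
print(rows)        # []

# String concatenation (what the answer warns about) matches every row.
concatenated = "SELECT * FROM Table1 WHERE Field = '" + field_value + "'"
rows_bad = conn.execute(concatenated).fetchall()
print(rows_bad)    # [('safe',)] -- the OR 1 = 1 matched everything
```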
|
Yes. If you are allowing your users to input SQL into a website (which is the typical case when you are talking about SQL injection attacks) you are opening yourself up to an avenue of attack. It doesn't matter if you restrict individual characters, or limit it to only beginning with a SELECT statement, in the end SQL has simply too many ways to parse input (and the web has even more) to catch every possible malicious query without also blocking most possible non-malicious ones.
Always parse your input. Ideally, use a framework or at least a set of stored procedures (which do NOT dynamically build and execute a query within them) for database access.
Tell us a little more about the problem you are trying to solve, and we can help more.
EDIT (post-comment): If you are trying to create an internal tool for your support staff, you'll want to create a tool with basic searching capabilities. Give the user drop downs to select what fields they want to search against, and pass the input into a stored procedure or a framework to minimize your risk. If you assume your support staff can write SQL, just give them a login to your database - it's easier to restrict access to 'SELECT' statements for a user in a database access tool like Management Studio than to secure a website.
|
What are dangers from running SELECT queries supplied by users?
|
[
"",
"sql",
""
] |
What is the difference between money & numeric data type from point of practical usage. If one table has money as one of the data type (field name say BreakupAmount) & another one of numeric type (field name may or may not be same) then is it possible to post entries from these two fields into a third table where data type is numeric (field name may or may not be BreakupAmount)? Any suggestions?
|
Never use MONEY:
1) It is proprietary, so porting it is a pain. It is one of the many "Sybase Code Museum" features from decades ago. Remember the early versions of UNIX?
2) Writing code in dialect when you don't need to makes you sound like a hillbilly to people who speak the language. You are better off with DECIMAL(p,s) so you can use a properly sized column.
3) It does display and formatting in the back end, with commas and dollar signs. That defeats the purpose of a tiered architecture.
4) The MONEY data type has rounding errors.
Using more than one operation (multiplication or division) on money columns will produce severe rounding errors. A simple way to visualize money arithmetic is to place a ROUND() function call after every operation. For example,
```
Amount = (Portion / total_amt) * gross_amt
```
can be rewritten using money arithmetic as:
```
Amount = ROUND(ROUND(Portion / total_amt, 4) * gross_amt, 4)
```
Rounding to four decimal places might not seem an issue, until the numbers you are using are greater than 10,000.
check here <https://social.msdn.microsoft.com/Forums/sqlserver/en-US/de0e5cfe-b984-4700-b81f-a0478a65daf1/difference-between-numeric-and-money-data-type-in-sql-server?forum=transactsql>
|
Money doesn't provide any advantages over Decimal. If fractional units up to 5 decimal places are not valid in your currency or database schema, just use Decimal with the appropriate precision and scale.
```
DECLARE
@mon1 MONEY,
@mon2 MONEY,
@mon3 MONEY,
@mon4 MONEY,
@num1 Numeric(19,6),
@num2 Numeric(19,6),
@num3 Numeric(19,6),
@num4 Numeric(19,6)
SELECT
@mon1 = 100, @mon2 = 339, @mon3 = 10000,
@num1 = 100, @num2 = 339, @num3 = 10000
SET @mon4 = @mon1/@mon2*@mon3
SET @num4 = @num1/@num2*@num3
SELECT @mon4 AS moneyresult, @num4 AS numericresult
```
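A rough back-of-the-envelope illustration of the same effect in Python's `decimal` module. This truncates to 4 decimal places after the division to mimic the 100/339*10000 example above; it is a sketch, not an exact emulation of SQL Server's MONEY internals:

```python
from decimal import Decimal, ROUND_DOWN

# Truncating the intermediate result to 4 decimal places (MONEY-style)
# loses precision compared with keeping full precision (NUMERIC(19,6)-style).
def money4(x):
    return x.quantize(Decimal("0.0001"), rounding=ROUND_DOWN)

mon = money4(money4(Decimal(100) / Decimal(339)) * Decimal(10000))
num = (Decimal(100) / Decimal(339) * Decimal(10000)).quantize(Decimal("0.000001"))

print(mon)   # 2949.0000  -- intermediate rounding threw away ~0.85
print(num)   # 2949.852507
```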
|
difference between money & numeric data types?
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have a table as follows
```
ID | Status ID | Value
1 | 1 | 100
2 | 1 | 200
3 | 1 | 300
4 | 2 | 100
5 | 2 | 150
6 | 2 | 200
7 | 3 | 500
8 | 3 | 300
9 | 3 | 150
```
I need to get the maximum value within the status. so my result should look like the following
```
ID | Status ID | Value
3 | 1 | 300
6 | 2 | 200
7 | 3 | 500
```
I'm fairly new to SQL and would appreciate your inputs
|
```
create table #temp(id int, statusid int, value int)
insert #temp(id,statusid,value)
select 1,1,100
union select 2,1,200
union select 3,1,300
union select 4,2,100
union select 5,2,150
union select 6,2,200
union select 7,3,500
union select 8,3,300
union select 9,3,150
-- if you don't need the id
select statusid, max(value)
from #temp
group by statusid
-- if you need the id
select min(id), X.statusid, X.value
from (
select statusid, max(value) value
from #temp
group by statusid
) X
inner join #temp T
on X.statusid = T.statusid
and X.value = T.value
group by X.statusid, X.value
```
|
Give this a go:
```
SELECT t.*
FROM TEST t
INNER JOIN (
SELECT STATUS,
MAX(VALUE) AS MAX_VALUE
FROM TEST
GROUP BY STATUS) gt
ON t.STATUS = GT.STATUS
AND t.VALUE = gt.MAX_VALUE;
```
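The same groupwise-max join can be checked quickly in SQLite (Python's `sqlite3`, using the question's data):

```python
import sqlite3

# Join each row to its group's maximum value, keeping only the matching rows.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE test (id INTEGER, statusid INTEGER, value INTEGER);
    INSERT INTO test VALUES (1,1,100),(2,1,200),(3,1,300),
                            (4,2,100),(5,2,150),(6,2,200),
                            (7,3,500),(8,3,300),(9,3,150);
""")
rows = conn.execute("""
    SELECT t.id, t.statusid, t.value
    FROM test t
    JOIN (SELECT statusid, MAX(value) AS max_value
          FROM test GROUP BY statusid) g
      ON t.statusid = g.statusid AND t.value = g.max_value
    ORDER BY t.statusid
""").fetchall()
print(rows)   # [(3, 1, 300), (6, 2, 200), (7, 3, 500)]
```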
|
how to compare rows in the same sql table and get maximum value
|
[
"",
"sql",
"sql-server",
"oracle",
""
] |
I have a table on a Oracle DB with two columns. I would like to see every row repeated as many times as the number stored in the second column. The table looks like this:
```
col1 col2
a 2
b 3
c 1
```
I want to write a query that returns this:
```
col1 col2
a 2
a 2
b 3
b 3
b 3
c 1
```
So the value from col2 dictates the number of times a row is repeated. Is there a simple way to achieve this?
Thanks!
|
[SQL Fiddle](http://sqlfiddle.com/#!4/18761/3)
**Oracle 11g R2 Schema Setup**:
```
CREATE TABLE test ( col1, col2 ) AS
SELECT 'a', 2 FROM DUAL
UNION ALL SELECT 'b', 3 FROM DUAL
UNION ALL SELECT 'c', 1 FROM DUAL
```
**Query 1**:
```
SELECT col1,
col2
FROM test t,
TABLE(
CAST(
MULTISET(
SELECT LEVEL
FROM DUAL
CONNECT BY LEVEL <= t.col2
)
AS SYS.ODCINUMBERLIST
)
)
```
**[Results](http://sqlfiddle.com/#!4/18761/3/0)**:
```
| COL1 | COL2 |
|------|------|
| a | 2 |
| a | 2 |
| b | 3 |
| b | 3 |
| b | 3 |
| c | 1 |
```
|
Here's an alternative (I've added an extra column just to show that other columns can be present), assuming that col1 is unique:
```
with src (col1, col2, col3) as (
SELECT 'a', 'b', 2 FROM DUAL UNION ALL
SELECT 'b', 'c', 3 FROM DUAL UNION ALL
SELECT 'c', 'd', 1 FROM DUAL
)
select col1, col2, col3
from src
connect by level <= col3
and prior col1 = col1
and prior sys_guid() is not null;
COL1 COL2 COL3
---- ---- ----------
a b 2
a b 2
b c 3
b c 3
b c 3
c d 1
```
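If you're not tied to Oracle's `CONNECT BY`, the same unfolding can be written with a standard recursive CTE; here's a runnable SQLite sketch:

```python
import sqlite3

# Recursive CTE: each row seeds a counter n=1 and re-emits itself while
# n < col2, so every row appears col2 times.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col1 TEXT, col2 INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [("a", 2), ("b", 3), ("c", 1)])

rows = conn.execute("""
    WITH RECURSIVE r(col1, col2, n) AS (
        SELECT col1, col2, 1 FROM t
        UNION ALL
        SELECT col1, col2, n + 1 FROM r WHERE n < col2
    )
    SELECT col1, col2 FROM r ORDER BY col1, col2
""").fetchall()
print(rows)
# [('a', 2), ('a', 2), ('b', 3), ('b', 3), ('b', 3), ('c', 1)]
```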
|
How to unfold the results of an Oracle query based on the value of a column
|
[
"",
"sql",
"oracle",
"split",
""
] |
I have a table like this:
```
initial confirmatory confirmatory1 confirmatory2 confirmatory3
3.4 true 3.6 4.9 7.4
2.1 false null null null
```
I want to generate a database view like this:
```
initial confirmatory_n confirmatory_value
3.4 1 3.6
3.4 2 4.9
3.4 3 7.4
2.1 null 2.1
```
For the rows that have `confirmatory` as false, I need to display the row 1 time. For rows that have `confirmatory` at true, I need to display the data 3 times in three different rows, with an extra column identifying which value is displayed.
I've been searching for some time, but can't seem to find relevant results. I always get stuck when trying to show `confirmatory_n`. So, I don't really have any SQL samples of what isn't working, because I feel I'm so far off. BUT, I did create a [sqlfiddle](http://sqlfiddle.com/#!4/3fc39/1/1) that might be helpful. Any help would be great.
I'm using Oracle 11g.
|
Use `union all`:
```
select initial_value, 1 as confirmatory_n, confirmatory1 as confirmatory
from results t
where confirmatory = 1
union all
select initial_value, 2 as confirmatory_n, confirmatory2 as confirmatory
from results t
where confirmatory = 1
union all
select initial_value, 3 as confirmatory_n, confirmatory3 as confirmatory
from results t
where confirmatory = 1
union all
select initial_value, null as confirmatory_n, initial_value
from results t
where confirmatory = 0;
```
The SQL Fiddle is [here](http://sqlfiddle.com/#!4/8edd3d/10).
If your tables are very large and performance is a concern, then there are other approaches that only scan the table once. However, this method is usually sufficient.
|
Another UNPIVOT solution:
```
WITH src ("INITIAL", confirmatory, confirmatory1, confirmatory2, confirmatory3) as (
SELECT 3.4, 'true', 3.6, 4.9, 7.4 FROM DUAL UNION ALL
SELECT 2.1, 'false', NULL, NULL, NULL FROM DUAL
), dta as (
select "INITIAL"
, case upper(confirmatory) when 'FALSE' then "INITIAL" end confirmatory
, confirmatory1
, confirmatory2
, confirmatory3
from src
)
select *
from dta
unpivot (confirmatory_value
FOR confirmatory_n IN (CONFIRMATORY AS null,
CONFIRMATORY1 AS 1,
CONFIRMATORY2 AS 2,
CONFIRMATORY3 AS 3));
```
|
Display extra rows (conditionally) separated by column name
|
[
"",
"sql",
"oracle",
"view",
"oracle11g",
""
] |
[SqlFiddle Demo](http://sqlfiddle.com/#!6/6470e/12)
I need to repeat each barcode of the article based on the quantity of this article in the table Stock.
This is source data:
```
| BarCode | quantity |
|---------|----------|
| 5142589 | 7 |
| 123454 | 5 |
| 1111145 | 3 |
```
I want result that looks like this:
```
Barcode
-------
5142589
5142589
5142589
5142589
5142589
5142589
5142589
123454
123454
123454
123454
123454
1111145
1111145
1111145
```
How can I do this?
Thanks
|
You can get this by a simple recursive **CTE**.
```
WITH cte
AS
(
SELECT IdArticle,1 AS rn FROM TABLE_STOCK
UNION ALL
SELECT t.IdArticle,rn+1 AS rn
FROM cte c
INNER JOIN TABLE_STOCK t ON t.IdArticle = c.IdArticle and rn<t.QUANTITY
)
SELECT t.BarCode,TS.QUANTITY
FROM cte c
INNER JOIN TABLE_BARCODE t ON t.IdArticle = c.IdArticle
INNER JOIN TABLE_STOCK TS ON TS.IdArticle = C.IdArticle
ORDER BY t.IdArticle
```
Here is [SQL Fiddle](http://sqlfiddle.com/#!3/6470e/1)
|
You can use [table of numbers](http://sqlperformance.com/2013/01/t-sql-queries/generate-a-set-1). Either permanent, or generated on the fly.
Query below uses `CTE` to generate up to 1000 numbers. Here is [SQL Fiddle](http://sqlfiddle.com/#!6/6470e/14/0).
```
WITH
e1(n) AS
(
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1
) -- 10
,e2(n) AS (SELECT 1 FROM e1 CROSS JOIN e1 AS b) -- 10*10
,e3(n) AS (SELECT 1 FROM e1 CROSS JOIN e2) -- 10*100
,CTE_Numbers
AS
(
SELECT ROW_NUMBER() OVER (ORDER BY n) AS Number
FROM e3
)
SELECT b.BarCode, s.quantity
FROM
TABLE_BARCODE b
INNER JOIN TABLE_STOCK s ON b.IdArticle = s.IdArticle
CROSS APPLY
(
SELECT TOP(s.quantity) CTE_Numbers.Number
FROM CTE_Numbers
ORDER BY CTE_Numbers.Number
) AS CA
```
**Results:**
```
| BarCode | quantity |
|---------|----------|
| 5142589 | 7 |
| 5142589 | 7 |
| 5142589 | 7 |
| 5142589 | 7 |
| 5142589 | 7 |
| 5142589 | 7 |
| 5142589 | 7 |
| 123454 | 5 |
| 123454 | 5 |
| 123454 | 5 |
| 123454 | 5 |
| 123454 | 5 |
| 1111145 | 3 |
| 1111145 | 3 |
| 1111145 | 3 |
```
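The numbers-table idea is portable; here's the same join-on-`n <= quantity` pattern demonstrated in SQLite (Python's `sqlite3`):

```python
import sqlite3

# Join each stock row to a numbers table on n <= quantity,
# so each barcode repeats quantity times.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE stock (BarCode TEXT, quantity INTEGER);
    INSERT INTO stock VALUES ('5142589', 7), ('123454', 5), ('1111145', 3);
    CREATE TABLE numbers (n INTEGER);
""")
conn.executemany("INSERT INTO numbers VALUES (?)",
                 [(i,) for i in range(1, 1001)])

rows = [r[0] for r in conn.execute("""
    SELECT s.BarCode
    FROM stock s JOIN numbers n ON n.n <= s.quantity
    ORDER BY s.quantity DESC, s.BarCode
""")]
print(len(rows))               # 15
print(rows.count('5142589'))   # 7
```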
|
Repeat row based on the number in a column
|
[
"",
"sql",
"sql-server",
""
] |
How can I retrieve the date/time for two different countries in a single query? Do I need to change any NLS parameter settings?
Sample query:
```
Select sysdate ind, sysdate us from dual;
```
O/p:
```
11/09/2015 3:05 PM 11/09/2015 05:35 AM
```
|
Try:
```
select systimestamp ind, systimestamp at time zone 'US/Eastern' us
from dual;
```
If you are interested in just the date, then:
```
select
cast(systimestamp as date) ind,
cast(systimestamp at time zone 'US/Eastern' as date) us
from dual;
```
[Reference for all Time Zones](http://docs.oracle.com/cd/B19306_01/server.102/b14225/applocaledata.htm#i637736)
You can use `FROM_TZ` and `TZ_OFFSET` to get the same result:
```
select sysdate ind, FROM_TZ(TIMESTAMP '2015-09-11 15:50:42', TZ_OFFSET('US/Eastern')) us
from dual;
```
If you want to know more about the time zones then:
```
SELECT DISTINCT tzname
FROM V$TIMEZONE_NAMES;
```
Please refer these links for [`FROM_TZ`](http://www.techonthenet.com/oracle/functions/from_tz.php) and [`TZ_OFFSET`](http://www.techonthenet.com/oracle/functions/tz_offset.php)
|
I think you can do it with the third `noptions` parameter of the [to\_char](http://docs.oracle.com/cd/B28359_01/olap.111/b28126/dml_functions_2113.htm) function.
It also seems that you can use the [AT Timezone](http://docs.oracle.com/cd/B12037_01/server.101/b10749/ch4datet.htm#1007620) clause in your query
Here is an example given the [link](http://docs.oracle.com/cd/B12037_01/server.101/b10749/ch4datet.htm#1007620) I gave (I am sorry I cannot test it for the moment):
```
SELECT FROM_TZ(CAST(TO_DATE('1999-12-01 11:00:00',
'YYYY-MM-DD HH:MI:SS') AS TIMESTAMP), 'America/New_York')
AT TIME ZONE 'America/Los_Angeles' "West Coast Time"
FROM DUAL;
```
|
How to retrieve two country date/time using single query?
|
[
"",
"sql",
"oracle",
""
] |
How can I optimize this MySQL query? I'm using the `IN()` operator, but I believe it's not the best way to do it.
```
select *
from users
where usid NOT in
(
select usid
from images
where status=0
)
and us_status=0
```
|
Using a LEFT OUTER JOIN
```
SELECT users.*
FROM users
LEFT OUTER JOIN images ON users.usid = images.usid AND images.status = 0
WHERE images.usid IS NULL
AND us_status = 0
```
This avoids using IN which can perform poorly.
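Both forms can be compared side by side in SQLite (made-up sample rows). Note one caveat the anti-join avoids: `NOT IN` returns no rows at all if the subquery ever yields a NULL `usid`:

```python
import sqlite3

# Compare the LEFT JOIN anti-join with the NOT IN subquery.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (usid INTEGER, us_status INTEGER);
    CREATE TABLE images (usid INTEGER, status INTEGER);
    INSERT INTO users  VALUES (1, 0), (2, 0), (3, 1);
    INSERT INTO images VALUES (1, 0), (2, 1);
""")

anti_join = [r[0] for r in conn.execute("""
    SELECT u.usid
    FROM users u
    LEFT JOIN images i ON u.usid = i.usid AND i.status = 0
    WHERE i.usid IS NULL AND u.us_status = 0
""")]
not_in = [r[0] for r in conn.execute("""
    SELECT usid FROM users
    WHERE usid NOT IN (SELECT usid FROM images WHERE status = 0)
      AND us_status = 0
""")]
print(anti_join, not_in)   # [2] [2]
```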
|
```
SELECT users.*
FROM users
LEFT JOIN images ON users.usid = images.usid AND images.status = 0
WHERE users.us_status = 0
AND images.usid IS NULL
```
|
How can i optimize this mysql query ? i use IN() operator
|
[
"",
"mysql",
"sql",
"optimization",
""
] |
In our application we have multiline grids which have many records. For inserting or updating we call a stored procedure.
As per the current implementation, the stored procedure is called for each line in the grid. For each line it checks for existence in the table. If the data is already there, it updates the table; otherwise it inserts new data into the table.
Instead of calling the procedure for each line, we thought create a table value parameter and pass all the grid values at the same time.
My questions are:
1. Is it a good approach?
2. How to handle the existence check (for insert or update) if I pass the values as table-valued parameter? Do I need to loop through the table and check it?
3. Is it better to have separate stored procedures for `insert` and `update`?
Please provide your suggestions. Thanks in advance.
|
1) Using a TVP is a good approach, but send only new or updated rows as the TVP; there's no need to send the entire data grid.
2) For `INSERT/UPDATE` use [MERGE](https://msdn.microsoft.com/en-us/library/bb510625.aspx) example:
```
MERGE [dbo].[Contact] AS [Target]
USING @Contact AS [Source] ON [Target].[Email] = [Source].[Email]
WHEN MATCHED THEN
UPDATE SET [FirstName] = [Source].[FirstName],
[LastName] = [Source].[LastName]
WHEN NOT MATCHED THEN
INSERT ( [Email], [FirstName], [LastName] )
VALUES ( [Source].[Email], [Source].[FirstName], [Source].[LastName] );
```
3) For your case one stored procedure is enough.
|
1) TVP is a good approach. And a single stored proc call is more efficient with fewer calls to the Database.
2) You haven't made it clear if each row in the grid has some kind of ID column that determines if the data exists in the Table, however assuming there is, make sure that it is indexed then use INSERT INTO and UPDATE statements like this:
To add new rows:
```
INSERT INTO [grid_table]
SELECT * FROM [table_valued_parameter]
WHERE [id_column] NOT IN (SELECT [id_column] FROM [grid_table])
```
To update existing rows:
```
UPDATE gt
SET gt.col_A = tvp.col_A,
gt.col_B = tvp.col_B,
gt.col_C = tvp.col_C,
...
gt.col_Z = tvp.col_Z
FROM [grid_table] gt
INNER JOIN [table_valued_parameter] tvp ON gt.id_column = tvp.id_column
```
NB:
* No need to do an IF EXISTS() or anything, as the WHERE and JOIN clauses will run the same checks, so there's no need to do a 'pre-check' before running each statement.
* This assumes the TVP data is the same structure as the table in the database.
* YOU MUST make sure the id\_column is indexed.
* I've used 'INNER JOIN' instead of just 'JOIN' to make the point that it is an inner join.
3) Using the approach above you just need one stored proc. Simple and effective.
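Here's the insert-missing-then-update-existing pattern from point 2) demonstrated end to end in SQLite (made-up two-column tables standing in for the grid table and the TVP):

```python
import sqlite3

# One INSERT for ids not yet present, one UPDATE for ids that are.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE grid_table (id INTEGER PRIMARY KEY, col_a TEXT);
    CREATE TABLE tvp        (id INTEGER PRIMARY KEY, col_a TEXT);
    INSERT INTO grid_table VALUES (1, 'old'), (2, 'old');
    INSERT INTO tvp        VALUES (2, 'new'), (3, 'new');
""")

conn.execute("""
    INSERT INTO grid_table
    SELECT * FROM tvp
    WHERE id NOT IN (SELECT id FROM grid_table)
""")
conn.execute("""
    UPDATE grid_table
    SET col_a = (SELECT t.col_a FROM tvp t WHERE t.id = grid_table.id)
    WHERE id IN (SELECT id FROM tvp)
""")

rows = conn.execute("SELECT * FROM grid_table ORDER BY id").fetchall()
print(rows)   # [(1, 'old'), (2, 'new'), (3, 'new')]
```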
|
Calling stored procedure to insert multiple values
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
How is it possible to retrieve the column headers of a select query as a single column in SQL Server? (Retrieving the data type of the columns as well is preferred.)
Query example:
```
select a.PartId, a.PartName, b.GroupName
from Parts as a
inner join Groups as b on a.GroupRef = b.GroupId
```
Expected result:
```
Columns
--------
PartId
PartName
GroupName
```
|
One way is to create a temporary table with the schema of the result set and then query `tempdb`'s schema tables to get the column names and details. You can get all the details you need.
```
select a.PartId , a.PartName , b.GroupName into #yourtable
from Parts as a inner join Groups as b
on a.GroupRef = b.GroupId
where 1=2
SELECT c.name as columnname,t.name as datatype
FROM tempdb.sys.columns c
inner join tempdb.sys.systypes as t on t.xtype = c.system_type_id
WHERE [object_id] = OBJECT_ID(N'tempdb..#yourtable');
```
|
Starting from `SQL Server 2012+` you can use [sys.dm\_exec\_describe\_first\_result\_set](https://msdn.microsoft.com/en-us/library/ff878258.aspx) to get all metadata about result set:
**[DBFiddle Demo](https://dbfiddle.uk/?rdbms=sqlserver_2017&fiddle=5787cd733f5013fa69d1cc9e50e360b9)**
```
DECLARE @tsql NVARCHAR(MAX) =
N'select a.PartId , a.PartName , b.GroupName
from Parts as a inner join Groups as b
on a.GroupRef = b.GroupId';
SELECT name AS [Columns]
FROM sys.dm_exec_describe_first_result_set(@tsql, NULL, 1)
```
|
How to Retrieve Column Headers of a Select Query?
|
[
"",
"sql",
"sql-server",
"sql-server-2008-r2",
""
] |
Lets say we have the following data sets
tbl\_building:
```
id -- name
1 -- building 1
2 -- building 2
```
tbl\_rooms:
```
id -- building_id -- room_id -- light_status
1 ------ 1 ------------- 1 ----------- 0
2 ------ 1 ------------- 2 ----------- 1
3 ------ 1 ------------- 3 ----------- 0
4 ------ 2 ------------- 1 ----------- 1
```
How would I construct a single SQL statement to find out which buildings have a light switched on, in a YES/NO format, whilst grouping by building name?
Idealling I want something like the following:
```
SELECT b.name, if(light_status, 'yes', 'no') as light_status
FROM tbl_building b
JOIN tbl_rooms r on b.id = r.building_id
group by b.id
```
However, this seems to be random as to which room it will bring back for each building.
|
```
Select b.name, case when sum(a.light_status) > 0 then 'YES' else 'NO' end as LightStatus
From tbl_rooms a
Join tbl_building b
On a.building_id = b.id
Group by b.name
```
|
Try this:
```
SELECT DISTINCT B.name
FROM tbl_rooms A
INNER JOIN tbl_building B
ON A.building_id = B.id
WHERE A.light_status = 1
```
|
Finding value when multiple rows by group by
|
[
"",
"sql",
""
] |
Say I have a table like this called **inventory**, for example:
```
+----+------------+-------+
| id |Product_ID | Count |
+----+------------+-------+
| 1 | 1 | 4 |
| 2 | 2 | 5 |
| 3 | 4 | 3 |
| 4 | 5 | 3 |
| 5 | 5 | -3 |
| 6 | 2 | -5 |
+----+------------+-------+
```
And the result of my query `GroupedInventory AS (SELECT id, Product_ID, Count FROM inventory ORDER BY Product_ID)` gives me (stored in GroupedInventory):
```
+----+------------+-------+
| id |Product_ID | Count |
+----+------------+-------+
| 1 | 1 | 4 |
| 2 | 2 | 5 |
| 6 | 2 | -5 |
| 3 | 4 | 3 |
| 4 | 5 | 3 |
| 5 | 5 | -3 |
+----+------------+-------+
```
And I want to delete the matching positive and negative results. So in the end the table should turn into:
```
+----+------------+-------+
| id |Product_ID | Count |
+----+------------+-------+
| 1 | 1 | 4 |
| 3 | 4 | 3 |
+----+------------+-------+
```
I deal with sequential languages and I just can't wrap my head around this. My mind is telling me to loop through the table, store the pairs and then delete the pairs by id in the original table.
|
```
with GroupedInventory as
(
SELECT Product_id, abs(sum(case when count < 0 then count else 0 end)) Negative, abs(sum(case when count > 0 then count else 0 end)) Positive
FROM inventory
group by Product_id
)
delete from inventory
where Product_id in (select Product_id from GroupedInventory where positive = negative );
```
Although the validity also depends on unspecified rules, such as how many entries can exist for a given product\_id.
|
Try this one
```
DELETE inventory
FROM inventory
INNER JOIN (
SELECT Product_ID FROM inventory group by Product_ID HAVING SUM(Count)=0
) as t ON inventory.Product_ID = t.Product_ID
```
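The `HAVING SUM(...) = 0` approach can be verified quickly in SQLite (column `Count` renamed to `cnt`, since COUNT is better avoided as an identifier):

```python
import sqlite3

# Delete every row whose product's counts sum to zero (matching +/- pairs).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE inventory (id INTEGER, product_id INTEGER, cnt INTEGER);
    INSERT INTO inventory VALUES
      (1,1,4),(2,2,5),(3,4,3),(4,5,3),(5,5,-3),(6,2,-5);
""")
conn.execute("""
    DELETE FROM inventory
    WHERE product_id IN (
        SELECT product_id FROM inventory
        GROUP BY product_id
        HAVING SUM(cnt) = 0
    )
""")
rows = conn.execute(
    "SELECT id, product_id, cnt FROM inventory ORDER BY id"
).fetchall()
print(rows)   # [(1, 1, 4), (3, 4, 3)]
```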
|
SQL: Grouped table; delete reciprocal pairs?
|
[
"",
"sql",
"sql-server",
""
] |
I want to make a daily dump of all the databases in MySQL using the **Event Scheduler**. So far I have this query to create the event:
```
DELIMITER $$
CREATE EVENT `DailyBackup`
ON SCHEDULE EVERY 1 DAY STARTS '2015-11-09 00:00:01'
ON COMPLETION NOT PRESERVE ENABLE
DO
BEGIN
mysqldump -user=MYUSER -password=MYPASS all-databases > CONCAT('C:\Users\User\Documents\dumps\Dump',DATE_FORMAT(NOW(),%Y %m %d)).sql
END $$
DELIMITER ;
```
The problem is that MySQL seems to not recognize the command **'mysqldump'** and shows me an error like this: **Syntax error: missing 'colon'**.
I am not an expert in SQL and I've tried to find the solution, but I couldn't, hope someone can help me with this.
**Edit:**
Help to make this statement a cron task
|
For Windows, create a .bat file with the needed command, and then create a scheduled task that runs that .bat file according to a schedule.
Create a .bat file in this fashion, replacing your username, password, and database name as appropriate:
```
mysqldump --opt --host=localhost --user=root --password=yourpassword dbname > C:\some_folder\some_file.sql
```
Then go to the start menu, control panel, administrative tools, task scheduler. Hit action > create task. Go to the actions tab, hit new, browse to the .bat file and add it to the task. Then go to the triggers tab, hit new, and define your daily schedule. Refer to <http://windows.microsoft.com/en-US/windows/schedule-task>
You might want to use a tool like 7zip to compress your backups all in the same command (7zip can be invoked from the command line). An example with 7zip installed would look like:
```
mysqldump --opt --host=localhost --user=root --password=yourpassword dbname | 7z a -si C:\some_folder\some_file.7z
```
I use this to include the date and time in the filename:
```
set _my_datetime=%date:~-4%_%date:~4,2%_%date:~7,2%_%time:~0,2%_%time:~3,2%_%time:~6,2%_%time:~9,2%_
set _my_datetime=%_my_datetime: =_%
set _my_datetime=%_my_datetime::=%
set _my_datetime=%_my_datetime:/=_%
set _my_datetime=%_my_datetime:.=_%
echo %_my_datetime%
mysqldump --opt --host=localhost --user=root --password=yourpassword dbname | 7z a -si C:\some_folder\backup_with_datetime_%_my_datetime%_dbname.7z
```
|
@Drew means using a cron job. To add a cron job, just open the crontab for editing with this command:
```
crontab -e
```
then add a new entry at the end like this:
```
0 0 * * * mysqldump -u username -ppassword databasename > /path/to/file.sql
```
this will perform a database dump every day at 00:00
|
How to create a daily dump in MySQL?
|
[
"",
"sql",
"cron",
"mysql",
"mysql-event",
""
] |
I have a database table that has a Vendor\_ID column and a Vendor\_Item column.
```
Vendor_id Vendor_item
101 111
101 111
101 123
```
I need a way to show rows where the combination of vendor\_id and vendor\_item has a count greater than 1. The vendor\_item number can be in there multiple times as long as it has a different vendor\_id.
```
Vendor_id Vendor_item
101 111
101 111
```
I have done the following, but it only shows one row per combination that has a count greater than 1, and doesn't show both records like the above example.
```
SELECT vendor_id,vendor_item
From Inventory_master
group by vendor_id,vendor_item
having count(*) >1
```
If possible I would like a way to add another column ( UPC ) to the results. The system I am working on can import back into the system with UPC so I would be able to fix what is duplicated.
```
Vendor_id Vendor_item UPC
101 111 456
101 111 789
```
|
Not sure about the `UPC` column, as it's unclear where and how you are getting it, but you can change your existing query a bit like below to get the desired data (the correlated count checks the vendor\_id/vendor\_item combination, so an item shared by different vendors isn't flagged):
```
SELECT * FROM Inventory_master im
WHERE (SELECT COUNT(*)
       FROM Inventory_master i2
       WHERE i2.vendor_id = im.vendor_id
         AND i2.vendor_item = im.vendor_item) > 1;
```
|
You can use a subquery and then JOIN back to the inventory\_master table:
```
SELECT im.*
FROM
Inventory_master im INNER JOIN (
SELECT vendor_id, vendor_item
From Inventory_master
group by vendor_id,vendor_item
having count(*) >1) s
ON im.vendor_id = s.vendor_id AND im.vendor_item = s.vendor_item
```
|
SQL - Looking to show when 2 columns combined have the same data
|
[
"",
"sql",
"duplicates",
"sybase",
""
] |
I have two tables. Table 1 has about 750,000 rows and table 2 has 4 million rows. Table two has an extra ID field in which I am interested, so I want to write a query that will check if the 750,000 table 1 records exist in table 2. For all those rows in table 1 that exist in table 2, I want the respective ID based on same SSN. I tried the following query:
```
SELECT distinct b.UID, a.*
FROM [Analysis].[dbo].[Table1] A, [Proteus_8_2].dbo.Table2 B
where a.ssn = b.ssn
```
Instead of getting 750,000 rows in the output, I am getting 5.4 million records. Where am I going wrong?
|
Your join returns one row for every matching pair, so if an SSN maps to multiple UID values in table 2, DISTINCT will not collapse them into one row per table 1 record.
Also, if SSN is not unique in table 1, you can get a higher row count than the total row count of table 2.
You need to reconsider exactly what you want from table 2.
EDIT
You can try this to return distinct combinations of ssn and uid when the ssn is found in table 2, provided that ssn and uid have a 1:1 cardinality, i.e., every unique ssn has a single unique uid.
```
select distinct
a.ssn,b.[UID]
from [Analysis].[dbo].[Table1] a
cross apply
( select top 1 [uid] from [Proteus_8_2].[dbo].[Table2] where ssn = a.ssn ) b
where b.[UID] is not null
```
|
Since the order detail table is in a one-to-many relationship with the order table, that is the expected result of any join. If you want something different, you need to define for us the business rule that tells us how to select only one record from the order detail table. You cannot effectively write SQL code without understanding the business rules of what you are trying to achieve. You should never arbitrarily select one record out of the many; you need to understand which one you want.
|
SQL Query to fetch information based on one or more condition. Getting combinations instead of exact number
|
[
"",
"sql",
"sql-server",
"select",
"where-clause",
""
] |
I have this trigger to prevent similar names being inserted in a table. But it isn't working at all and doesn't let me insert any row (as if the condition were true every time). I tried with IF EXISTS and the same thing happens. Even after deleting everything in the "Actor" table (leaving it with no rows), the trigger fires when I insert a new one.
```
CREATE TRIGGER [dbo].[AvoidSimilarName]
ON [dbo].[Actor]
FOR INSERT
AS
BEGIN
DECLARE @Firstname varchar(25)
DECLARE @Lastname varchar(25)
SELECT @Firstname = Firstname, @Lastname = Lastname FROM INSERTED;
SELECT * FROM Actor WHERE @Firstname = Firstname AND @Lastname = Lastname
IF @@ROWCOUNT > 0
BEGIN
ROLLBACK TRANSACTION
END
END
```
The thing is, when I run the same SELECT that is in the trigger on its own, it works fine and returns no rows:
```
DECLARE @Firstname varchar(25)
DECLARE @Lastname varchar(25)
SET @Firstname = 'Johnny'
SET @Lastname = 'Depp'
SELECT * FROM Actor WHERE @Firstname = Firstname AND @Lastname = Lastname
```
What am I doing wrong?
|
I'm not trying to fix the trigger, because I don't think it is your best option.
Why not create a unique constraint on those two fields instead?
```
ALTER TABLE dbo.Actor ADD CONSTRAINT UQ_ACTOR_NAME UNIQUE (firstname,lastname)
```
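A minimal sketch of the effect, using Python's built-in sqlite3 (the `ALTER TABLE` syntax above is SQL Server's; in SQLite the constraint is declared at table creation, and the duplicate insert is rejected by the engine itself, no trigger needed):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Actor (Firstname TEXT, Lastname TEXT, "
             "CONSTRAINT UQ_ACTOR_NAME UNIQUE (Firstname, Lastname))")
conn.execute("INSERT INTO Actor VALUES ('Johnny', 'Depp')")
try:
    # Duplicate name pair: the unique constraint rejects it
    conn.execute("INSERT INTO Actor VALUES ('Johnny', 'Depp')")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # True
```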
|
I think your trigger always fires because it runs after the insert, so the SELECT already finds the row you just inserted and @@ROWCOUNT is at least 1. Try setting it to > 1 and see.
|
Trigger to avoid similar names inserted in tables isnt working and not let me insert any row
|
[
"",
"sql",
"sql-server",
"triggers",
""
] |
I am using MySQL and phpMyAdmin. I have a table `mytable` and a column `time` storing ~17K individual values, i.e. timestamps (integers).
I need to update each one by adding `962758` to each timestamp. What does the SQL command for that look like?
```
SELECT (*) FROM `mytable` t1
UPDATE `mytable` SET time = + 962758
PROFIT? :)
```
Would you need a `SELECT` statement for that, or does it work with `UPDATE` only?
I can't use PHP for this.
|
Considering that it's a `TIMESTAMP` datatype, you can say:
```
UPDATE `mytable` SET time = time + INTERVAL 962758 SECOND;
```
Per your comment, since it's of `INT` type, you can just do the addition directly, as you are already doing:
```
UPDATE `mytable` SET `time` = `time` + 962758;
```
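Since the column is a plain integer, the update is just arithmetic and no `SELECT` is needed; a quick sketch with Python's sqlite3 and made-up values:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (time INTEGER)")
conn.executemany("INSERT INTO mytable VALUES (?)", [(1000,), (2000,), (3000,)])
# UPDATE reads and rewrites each row's value in one statement
conn.execute("UPDATE mytable SET time = time + 962758")
updated = sorted(t for (t,) in conn.execute("SELECT time FROM mytable"))
print(updated)  # [963758, 964758, 965758]
```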
|
If the data is stored as a `datetime` value, then you simply can use:
```
select timestampadd(second, 962758, time)
```
If the value is a unix timestamp, then it is already in seconds, and you can just add 962758.
|
How to update timestamp values in column by adding a specific time (seconds) to the existing timestamp using mysql?
|
[
"",
"mysql",
"sql",
"date",
"phpmyadmin",
""
] |
I have a simple query like:
```
SELECT name FROM people;
```
The `people` table does not have a unique id column. I want to add a column `id` to the query result, with an incremental `int` starting from 0 or 1 (it doesn't matter). How can one achieve this? (PostgreSQL DB)
|
Use `ROW_NUMBER()`:
[SQLFiddle](http://sqlfiddle.com/#!15/b5bb81/2/0)
```
SELECT
name,
ROW_NUMBER() OVER (ORDER BY name) AS id
FROM people;
```
**EDIT:**
Difference between `ORDER BY 1` vs `ORDER BY column_name`
[SQLFiddleDemo](http://sqlfiddle.com/#!15/a227e/4/1)
```
SELECT
name,
ROW_NUMBER() OVER (ORDER BY name) AS id
FROM people;
/* Execution Plan */
QUERY PLAN WindowAgg (cost=83.37..104.37 rows=1200 width=38)
-> Sort (cost=83.37..86.37 rows=1200 width=38)
**Sort Key: name**
-> Seq Scan on people (cost=0.00..22.00 rows=1200 width=38)
SELECT
name,
ROW_NUMBER() OVER (ORDER BY 1) AS id
FROM people;
/* Execution Plan */
QUERY PLAN WindowAgg (cost=0.00..37.00 rows=1200 width=38)
-> Seq Scan on people (cost=0.00..22.00 rows=1200 width=38)
```
In the second case there is no sort operation.
You can also write second query as:
```
SELECT
name,
ROW_NUMBER() OVER () AS id
FROM people;
```
**Why people write `ORDER BY 1` in windowed functions?**
Because in some dialects it is required, and `ORDER BY 1` acts as a placeholder.
Oracle:
> ORA-30485: missing ORDER BY expression in the window specification
```
SELECT
name,
ROW_NUMBER() OVER (ORDER BY 1) AS id
FROM people;
```
TSQL:
> The function 'ROW\_NUMBER' must have an OVER clause with ORDER BY.
```
SELECT
name,
ROW_NUMBER() OVER (ORDER BY (SELECT 1)) AS id
FROM people;
```
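The PostgreSQL query above can be reproduced verbatim in SQLite (3.25+, which also supports window functions); a runnable sketch with made-up names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT)")
conn.executemany("INSERT INTO people VALUES (?)", [("Carol",), ("Alice",), ("Bob",)])
# ROW_NUMBER() numbers rows 1, 2, 3... in the window's ORDER BY order
rows = conn.execute(
    "SELECT name, ROW_NUMBER() OVER (ORDER BY name) AS id FROM people ORDER BY id"
).fetchall()
print(rows)  # [('Alice', 1), ('Bob', 2), ('Carol', 3)]
```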
|
Check out the `row_number()` function at <https://www.postgresql.org/docs/current/static/functions-window.html>
|
SQL: Add column with incremental id to SELECT
|
[
"",
"sql",
"postgresql",
"select",
""
] |
I am using this query
```
select id, name, parent_id
from table_p
group by parent_id
having count(parent_id) > 1
```
The problem with this is that I would need to put id and name in an aggregate function. Sorry for asking such a basic question, but is there a way to avoid this and print the ids with a duplicated parent\_id in a single query?
|
You can use a window function that counts the number of siblings:
```
SELECT id, name, COUNT(*) OVER (PARTITION BY parent_id) cnt
FROM table_p
```
then you can just use a subquery:
```
SELECT id, name
FROM (
SELECT id, name, COUNT(*) OVER (PARTITION BY parent_id) cnt
FROM table_p
) s
WHERE cnt>1
```
Please see a fiddle [here](http://sqlfiddle.com/#!15/6e4ad/1).
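As a runnable sketch of the window-function approach, using Python's sqlite3 (SQLite 3.25+) with hypothetical data: parent 10 has two children and parent 20 has one, so only ids 1 and 2 survive the `cnt > 1` filter:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_p (id INTEGER, name TEXT, parent_id INTEGER)")
conn.executemany("INSERT INTO table_p VALUES (?,?,?)",
                 [(1, "a", 10), (2, "b", 10), (3, "c", 20)])
# COUNT(*) OVER (PARTITION BY parent_id) attaches the sibling count to every row
rows = conn.execute("""
    SELECT id, name FROM (
        SELECT id, name, COUNT(*) OVER (PARTITION BY parent_id) cnt
        FROM table_p) s
    WHERE cnt > 1 ORDER BY id""").fetchall()
print(rows)  # [(1, 'a'), (2, 'b')]
```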
|
I hope you can use this SQL:
```
select id, name, parent_id
from table_p
where parent_id in (select parent_id
from table_p
group by parent_id
having count(parent_id) >1)
order by parent_id
```
|
Group by with normal fields
|
[
"",
"sql",
"postgresql",
""
] |
This seems easy, but I can't get it. Assume this dataset:
```
ID SID AddDate
1 123 1/1/2014
2 123 2/3/2015
3 123 1/4/2010
4 124
5 124
6 125 2/3/2012
7 126 2/2/2012
8 126 2/2/2011
9 126 2/2/2011
```
What I need is the most recent AddDate and the associated ID for each SID.
So, my dataset should return IDs 2, 5, 6 and 7
I tried doing a max(AddDate), but it won't give me the proper ID that's associated with it.
My SQL string:
```
SELECT First(Table1.ID) AS FirstOfID, Table1.SID, Max(Table1.AddDate) AS MaxOfAddDate
FROM Table1
GROUP BY Table1.SID;
```
|
You can use a subquery that returns the Maximum add date for each Sid, then you can join back this subquery to the dataset table:
```
SELECT
MAX(id)
FROM
ds INNER JOIN (
SELECT Sid, Max(AddDate) AS MaxAddDate
FROM ds
GROUP BY ds.Sid) mx
ON ds.Sid = mx.Sid AND (ds.AddDate=mx.MaxAddDate or MaxAddDate IS NULL)
GROUP BY
ds.Sid
```
The join still has to succeed if MaxAddDate is NULL (there's no AddDate), and in case there are multiple IDs that match, it looks like you want the biggest one.
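Here is the same idea as a runnable sketch in Python's sqlite3, using the question's sample data (dates stored as ISO strings so they compare correctly; the `OR mx.MaxAddDate IS NULL` branch keeps SID 124, which has no dates):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ds (ID INTEGER, SID INTEGER, AddDate TEXT)")
conn.executemany("INSERT INTO ds VALUES (?,?,?)", [
    (1, 123, "2014-01-01"), (2, 123, "2015-02-03"), (3, 123, "2010-01-04"),
    (4, 124, None), (5, 124, None), (6, 125, "2012-02-03"),
    (7, 126, "2012-02-02"), (8, 126, "2011-02-02"), (9, 126, "2011-02-02")])
# Per SID: keep the row(s) matching the max AddDate, then take the biggest ID
ids = [i for (i,) in conn.execute("""
    SELECT MAX(ds.ID)
    FROM ds
    JOIN (SELECT SID, MAX(AddDate) AS MaxAddDate
          FROM ds GROUP BY SID) mx
      ON ds.SID = mx.SID
     AND (ds.AddDate = mx.MaxAddDate OR mx.MaxAddDate IS NULL)
    GROUP BY ds.SID
    ORDER BY MAX(ds.ID)""")]
print(ids)  # [2, 5, 6, 7]
```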
|
You can change your query to get the grouping first and then perform a `JOIN` like
```
SELECT First(Table1.ID) AS FirstOfID,
       Table1.SID, xx.MaxOfAddDate
FROM Table1 JOIN (
     SELECT SID, Max(AddDate) AS MaxOfAddDate
     FROM Table1
     GROUP BY SID) xx
  ON Table1.SID = xx.SID AND Table1.AddDate = xx.MaxOfAddDate
GROUP BY Table1.SID, xx.MaxOfAddDate;
```
|
Getting the most recent data from a dataset based on a date field
|
[
"",
"sql",
"vba",
"ms-access",
"ms-access-2010",
""
] |
I am developing a Java application using MySQL. I need to know the week of the month for each of the stored dates. Is there any MySQL function for that? Basically, if I used it for the current date (13.09), it would show that it's in week number 2, and tomorrow it will be week number 3.
|
You can play with the [WEEK()](https://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_week) function, and see if it suits your needs. Here I'm using `WEEK(date, 3)` that will return the week of the year from 1 to 53, starting from Mondays:
```
set @my_date = '2015-09-13';
SELECT
WEEK(@my_date, 3) -
WEEK(@my_date - INTERVAL DAY(@my_date)-1 DAY, 3) + 1
AS week_number;
```
* `WEEK(date, 3)` will return the week of the year of the selected date
* `WEEK(date - INTERVAL DAY(@my_date)-1 DAY, 3)` will return the week of the year of the first day of the month of the selected date
It will return 1 for 01-March-2015 (because it's the first day of the month, so it's week 1) and 2 for 02-March-2015 (because weeks start from Mondays, so it's a new week). If this is not the desired behaviour, you should specify your requirements more precisely.
Please see a fiddle [here](http://sqlfiddle.com/#!9/9eecb7d/20346).
|
Unfortunately, there isn't a "weekofmonth" function, but you could use [`dayofmonth`](https://dev.mysql.com/doc/refman/5.6/en/date-and-time-functions.html#function_dayofmonth), and manipulate the result a bit:
```
SELECT CURRENT_DATE(),
FLOOR((DAYOFMONTH(CURRENT_DATE()) - 1) / 7) + 1 AS week_of_month
```
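The arithmetic can be checked outside the database; note it counts fixed 7-day blocks from the 1st of the month, unlike the Monday-aligned `WEEK()` version in the other answer:

```python
# week_of_month = FLOOR((day_of_month - 1) / 7) + 1
def week_of_month(day_of_month):
    return (day_of_month - 1) // 7 + 1

# Days 1-7 fall in block 1, days 8-14 in block 2, days 15-21 in block 3
print([week_of_month(d) for d in (1, 7, 8, 13, 14, 15)])  # [1, 1, 2, 2, 2, 3]
```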
|
Get the week of the month in MYSQL
|
[
"",
"mysql",
"sql",
"select",
""
] |
I have a column that contains numbers and other string values (like "?", "???", etc.)
Is it possible to add an "is number" condition to the where clause in SQLite? Something like:
```
select * from mytable where isnumber(mycolumn)
```
|
From the [documentation](https://sqlite.org/lang_corefunc.html#typeof),
> The typeof(X) function returns a string that indicates the datatype of the expression X: "null", "integer", "real", "text", or "blob".
You can use `where typeof(mycolumn) = 'integer'`
|
You could try something like this also:
```
select * from mytable where printf("%d", field1) = field1;
```
If your column is text and contains both numeric and string values, this can be somewhat helpful in extracting the integer data.
Example:
```
CREATE TABLE mytable (field1 text);
insert into mytable values (1);
insert into mytable values ('a');
select * from mytable where printf("%d", field1) = field1;
field1
----------
1
```
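A runnable check of the `typeof()` approach with Python's sqlite3 and made-up data. Note that `typeof()` reports the stored type, so this works when the column has no declared affinity and values keep their own types; a number stored as text (e.g. `'42'`) would not match:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# No declared column type: inserted values keep their own storage class
conn.execute("CREATE TABLE mytable (mycolumn)")
conn.executemany("INSERT INTO mytable VALUES (?)", [(42,), ("?",), ("???",), (7,)])
rows = conn.execute(
    "SELECT mycolumn FROM mytable WHERE typeof(mycolumn) = 'integer' ORDER BY mycolumn"
).fetchall()
print(rows)  # [(7,), (42,)]
```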
|
How to check if a value is a number in SQLite
|
[
"",
"sql",
"sqlite",
""
] |
I have the following table:
```
CREATE TABLE Customer
( `Name` varchar(7), `Address` varchar(55), `City` varchar(15),`Contact` int,`timestamp` int)
;
INSERT INTO Customer
(`Name`,`Address`, `City`, `Contact`,`timestamp`)
VALUES
('Jack','New City','LA',79878458,456125),
('Joseph','New Lane23','LA',87458458,794865),
('Rosy','Old City','Paris',79878458,215125),
('Maria','New City','LA',79878458,699125),
('Jack','New City','LA',79878458,456125),
('Rosy','Old City','Paris',79878458,845125),
('Jack','New Main Street','New York',79878458,555525),
('Joseph','Near Bank','SAn Francisco',79878458,984521)
;
```
I want to get each customer's record with the highest timestamp, without duplication.
|
> I want to get all customer record with highest timestamp without
> duplication.
To remove exact duplicate rows, use the `DISTINCT` operator with an `ORDER BY` clause like
```
select distinct `Name`,`Address`, `City`, `Contact`,`timestamp`
from customer
order by `timestamp` desc;
```
If instead you want one row per customer (the one with the highest timestamp), you can use a `JOIN` query like
```
select t1.*
from customer t1 join
(select Name, max(`timestamp`) as maxstamp
from customer
group by Name) xx
on t1.Name = xx.Name
and t1.`timestamp` = xx.maxstamp;
```
|
Try the following. In SQLite, when a query uses `max()`, the other selected columns are taken from the row that contains the maximum value:
```
select name,max(timestamp),Address,City,Contact from Customer group by name
```
|
SQL Query for timestamp
|
[
"",
"android",
"sql",
"sqlite",
"android-sql",
""
] |
I'm trying to write a query for Select from 2 tables.
Tables are the following:
```
Table_1:
id (int)
name (varchar)
status int (0,1)
Table_2:
id (int)
table_1_id (int)
name (varchar)
time (datetime)
```
I need to select all the rows from Table\_2 which are no older than 1 day and that are associated with table\_1 with status 1. The way I do it now is using 2 queries and 2 foreach arrays, which is very inefficient. Could someone help me to write a query with join? Thank you for your time.
|
No need for looping; you can do a `JOIN` between the tables like
```
select t2.*
from Table_2 t2 join Table_1 t1 on t2.table_1_id = t1.id
where t1.status = 1
and t2.`time` >= now() - interval 1 day;
```
|
```
SELECT t2.* FROM table_1 t1 INNER JOIN table_2 t2 ON t2.table_1_id=t1.id
WHERE t1.status=1 AND t2.time >= (NOW() - INTERVAL 1 DAY);
```
You have to use `ON` to join the tables since the fields in question do not have the same name; otherwise you could have joined with `USING(id_field)`. In your case an inner join is probably most useful. You could use a left join if you wanted matching results from table\_1 even when there is no counterpart in table\_2.
|
Join 2 tables SQL query
|
[
"",
"mysql",
"sql",
"join",
""
] |
I have a query that groups by (column\_a, column\_b) and selects an aggregated value. I would like to then group by column\_a and take an aggregate sum of the previously aggregated values.
Probably clearer with an example:
We have 3 tables: projects, devs, and contributors. Each project has many contributors, and each dev is a contributor to many projects:
```
+======== projects =========+ +====== devs =======+
+--------------+------------+ +--------+----------+
| project_name | project_id | | dev_id | dev_name |
+--------------+------------+ +--------+----------+
| parsalot | 1 | | 1 | Ally |
| vimplug | 2 | | 2 | Ben |
| gamify | 3 | | 3 | Chris |
+--------------+------------+ +--------+----------+
+==== contributors ===+
+------------+--------+
| project_id | dev_id |
+------------+--------+
| 1 | 2 |
| 1 | 3 |
| 2 | 1 |
| 2 | 2 |
| 3 | 3 |
+------------+--------+
```
I'm interested in how much work goes into each project. I could just count how many contributors each has, but I'd like to give more weight to contributions made by devs who aren't splitting their time over lots of other projects.
So vimplug is more actively developed than parsalot: each project has two contributors, but one of vimplug's (Ally) does nothing else, whereas parsalot's contributors are both splitting their time across other projects.
I've constructed a query that groups by (project, contributor) and calculates each contributor's "dedication" to the project:
```
SELECT
projects.project_name,
devs.dev_name,
1 / COUNT(contributions.project_id) as dedication
FROM
projects
JOIN
contributors USING (project_id)
JOIN
devs USING (dev_id)
JOIN
contributors contributions USING (dev_id)
GROUP BY projects.project_id , contributors.dev_id;
```
Which yields,
```
+--------------+----------+------------+
| project_name | dev_name | dedication |
+--------------+----------+------------+
| parsalot | Ben | 0.5000 |
| parsalot | Chris | 0.5000 |
| vimplug | Ally | 1.0000 |
| vimplug | Ben | 0.5000 |
| gamify | Chris | 0.5000 |
+--------------+----------+------------+
```
What I really want, though, is the total dedication for each project, i.e.
```
+--------------+------------------+
| project_name | total_dedication |
+--------------+------------------+
| gamify | 0.5000 |
| parsalot | 1.0000 |
| vimplug | 1.5000 |
+--------------+------------------+
```
I (naively) tried changing my select statement to
```
SELECT
projects.project_name,
SUM(1 / COUNT(contributions.project_id)) as total_dedication
```
but that doesn't work ("Invalid use of group function"). Is there a way I can do this without having to do a sub-select?
|
Just use a subquery:
```
select project_name, sum(dedication)
from (<your query here>) q
group by project_name;
```
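An equivalent way to see the subquery at work: instead of re-aggregating the per-(project, dev) result, pre-aggregate the per-dev project counts and sum `1/count` per project. A runnable sketch with the question's data, using Python's sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE projects (project_name TEXT, project_id INTEGER);
CREATE TABLE contributors (project_id INTEGER, dev_id INTEGER);
INSERT INTO projects VALUES ('parsalot',1),('vimplug',2),('gamify',3);
INSERT INTO contributors VALUES (1,2),(1,3),(2,1),(2,2),(3,3);
""")
# d.cnt = how many projects each dev contributes to; 1/cnt = dedication
rows = conn.execute("""
    SELECT p.project_name, SUM(1.0 / d.cnt) AS total_dedication
    FROM contributors c
    JOIN projects p ON p.project_id = c.project_id
    JOIN (SELECT dev_id, COUNT(*) AS cnt
          FROM contributors GROUP BY dev_id) d ON d.dev_id = c.dev_id
    GROUP BY p.project_name
    ORDER BY total_dedication""").fetchall()
print(rows)  # [('gamify', 0.5), ('parsalot', 1.0), ('vimplug', 1.5)]
```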
|
Ivan,
You asked "Is there a way I can do this without having to do a sub-select" ... is there a reason you cannot sub-select?
Unfortunately, you'll need to use a sub-select, because you cannot nest aggregate functions (which would be the only way you'd be able to accomplish this). See: [How to combine aggregate functions in MySQL?](https://stackoverflow.com/questions/3409581/how-to-combine-aggregate-functions-in-mysql)
So as the other answers have shown, you'll have to use a sub-query.
|
SQL: how can I use GROUP BY to take an aggregate of an aggregate?
|
[
"",
"mysql",
"sql",
"group-by",
""
] |
I'm having some problems optimizing a certain query in SQL (using MariaDB). To give you some context: I have a system with "events" (see them as log entries) that can occur on tickets, but also on some other objects besides tickets (which is why I separated the event and ticket\_event tables). I want to get all ticket\_events sorted by display\_time. The event table has ~20M rows right now.
```
CREATE TABLE IF NOT EXISTS `event` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`type` varchar(255) DEFAULT NULL,
`data` text,
`display_time` datetime DEFAULT NULL,
`created_time` datetime DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `index_for_display_time_and_id` (`id`,`display_time`),
KEY `index_for_display_time` (`display_time`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
CREATE TABLE IF NOT EXISTS `ticket_event` (
`id` int(11) NOT NULL,
`ticket_id` int(11) NOT NULL,
PRIMARY KEY (`id`),
KEY `ticket_id` (`ticket_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
ALTER TABLE `ticket_event`
ADD CONSTRAINT `ticket_event_ibfk_1` FOREIGN KEY (`id`) REFERENCES `event` (`id`),
ADD CONSTRAINT `ticket_event_ibfk_2` FOREIGN KEY (`ticket_id`) REFERENCES `ticket` (`id`);
```
As you can see, I have already played around with some keys (I also made one for (id, ticket\_id) that doesn't show up here since I removed it again). The query I execute:
```
SELECT * FROM ticket_event
INNER JOIN event ON event.id = ticket_event.id
ORDER BY display_time DESC
LIMIT 25
```
That query takes quite a while to execute (~30s if I filter on a specific ticket\_id; without that filter it can't even complete reliably). If I run an EXPLAIN on the query, it shows a filesort + temporary:
[](https://i.stack.imgur.com/WP6cG.png)
I played around with FORCE INDEX etc. a bit, but that doesn't seem to solve anything, or I did it wrong.
Does anyone see what I did wrong or what I can optimize here? I would very much prefer not to make "event" a wide table by adding ticket\_id/host\_id etc. as columns and just making them NULL if they don't apply.
Thanks in advance!
EDIT: Extra image of `EXPLAIN` with actual rows in the table:
[](https://i.stack.imgur.com/7lkGE.png)
|
OK what if you try to force the index?
```
SELECT * FROM ticket_event
INNER JOIN event
FORCE INDEX (index_for_display_time)
ON event.id = ticket_event.id
ORDER BY display_time DESC
LIMIT 25;
```
|
Your query selects every column from every row, even if you use a LIMIT. Have you tried to select one specific row by id?
|
Relatively simple SQL query with join refuses to be efficient
|
[
"",
"mysql",
"sql",
"query-optimization",
"mariadb",
""
] |
I need to run multiple SELECT COUNT queries against MS SQL Server to count how many people are available at certain times through the day, to plot into a table.
I have the SQL below, which works, but it returns each count as a separate result set; I would like them all in one result set, in different columns.
```
DECLARE @Day varchar(max)
SET @Day = 'Sunday'
DECLARE @Provider varchar(max)
SET @Provider = '58611'
DECLARE @sqlText varchar(max);
SET @sqlText = N'SELECT COUNT(*) AS Available0700
FROM tblCarersRota INNER JOIN tblCarersProviders ON tblCarersProviders.CarerID = tblCarersRota.CarerID
WHERE Rotation = 2 AND tblCarersProviders.ProviderID = '''+ @Provider + ''' AND ''07:00'' between ' + @Day + 'StartTime AND ' + @Day + 'EndTime '
Exec (@sqlText)
SET @sqlText = N'SELECT COUNT(*) AS Available0800
FROM tblCarersRota INNER JOIN tblCarersProviders ON tblCarersProviders.CarerID = tblCarersRota.CarerID
WHERE Rotation = 2 AND tblCarersProviders.ProviderID = '''+ @Provider + ''' AND ''08:00'' between ' + @Day + 'StartTime AND ' + @Day + 'EndTime '
Exec (@sqlText)
```
Actual current result:
```
Available0700
21
Available0800
22
```
Desired result:
```
Available0700 || Available0800
21 || 22
```
I have looked at `SELECT (select query 1), (select query 2)` but I can't get that to work with the dynamic SQL text.
How can I modify my selects to get them all to return as one result set?
Thanks
|
Option 1, put it in 2 rows with `UNION ALL`:
```
SELECT COUNT(*) as Counts, 'Available0700' AvailableTime
FROM ...
UNION all
SELECT COUNT(*) as Counts, 'Available0800' AvailableTime
FROM ...
```
Option 2, in 2 columns with subqueries:
```
SELECT (SELECT COUNT(*) FROM ...) as Available0700,
(SELECT COUNT(*) FROM ...) as Available0800
```
|
You can store these values and select them together at the end.
Below is example code in which I have used a table variable to capture the output of the dynamic queries, and finally a PIVOT query to produce the required output.
```
DECLARE @tablevar TABLE
(
Query VARCHAR(100),
Cnt INT
)
DECLARE @sqlText varchar(max)
SET @sqlText = N'SELECT ''Available Value 1'', 100 '
INSERT INTO @tablevar
Exec (@sqlText)
SET @sqlText = N'SELECT ''Available Value 2'', 200 '
INSERT INTO @tablevar
Exec (@sqlText)
SET @sqlText = N'SELECT ''Available Value 3'', 300 '
INSERT INTO @tablevar
Exec (@sqlText)
SELECT * FROM @tablevar
PIVOT(SUM(Cnt) FOR Query IN([Available Value 1], [Available Value 2], [Available Value 3])) AS PIV
```
The above code is an example; please replace it with your actual queries accordingly.
Thanks
|
Multiple Dynamic Selects queries in one return
|
[
"",
"sql",
"sql-server",
"select",
""
] |
I would like to know if there is a way to get the system username and use it directly in an MS Access query. I have made a parameter work within a query from a combo box on a form, and I have also acquired the system username in Access VBA using `Environ("USERNAME")`.
Kindly let me know if this is possible.
|
You need to create a VBA function that returns the username, and then use the function in the query.
```
Public Function GetUserName() As String
' GetUserName = Environ("USERNAME")
' Better method, see comment by HansUp
GetUserName = CreateObject("WScript.Network").UserName
End Function
```
and
```
SELECT foo FROM bar WHERE myUserName = GetUserName();
```
|
My solution kept all the work in VB.
I used a variable for the windows login username and then created a SQL string with that variable inserted. Lastly, I updated the query behind the form to use this new SQL string.
The `CHR(34)` puts quotes around the name, since it is now a string inside the SQL string and needs to be within a set of quotes.
If you have a complex SQL statement, write it in the QBE using a string for the name and all the other variables, then switch to the SQL view and replace it using a VBA variable as shown below.
```
MyName = Environ("username")
sqlstr = "SELECT * From Projects WHERE ( ((Projects.LeadEngineer)=" & Chr(34) & MyName & Chr(34) & " AND ActiveYN = True ));"
Forms![Main Form].RecordSource = sqlstr
```
|
How to use system username directly in MS Access query?
|
[
"",
"sql",
"vba",
"ms-access",
""
] |
I will have a relatively large growing dataset with 250M+ rows with a very basic table structure:
`id (unsigned int primary)`, `age (tinyint)`, `email (some type of string)`
where I will be performing `count(*)` queries based on `age` grouped by `email`. For example:
```
select count(*) as count from table
where age = 38
group by email
order by count
limit 100
```
`email` field is going to be up to 60 chars.
I don't need very precise results, so I am fine with approximations as long as the queries run as fast as possible. I am open to smart tricks and maybe even unusual approaches.
Apart from choosing the correct data type for `email` and indexing `age` and `email` together, what other tricks could help?
For example, if the `email` size were going to be larger than 32 chars on average, I would store the md5 hash instead. Furthermore, I could get rid of `@` and `.` from the `email`, as I don't mind a few emails colliding with each other (abc@xyz.com and ab@cxyz.com would both become abcxyzcom, and that is fine). Also, I thought compressing the `email` field could be an option.
|
Here is how I moved forward. As I mentioned, I don't need very precise results, as long as the queries run as fast as possible.
So I defined the email field as `bigint` (8 bytes, small size, fast operations) and saved the emails using the function `hex( substr( md5( email), 4, 8 ) )`. I checked it with a list of 700K email addresses and there were only 17 pairs of emails that collided with each other. So with a 0.004% error margin, I can run aggregate functions very fast.
Thanks for the insights.
PS. I tried the substring function with different `pos` values. Starting from position 4 seemed to cause the fewest colliding emails after the md5 hash.
|
It seems like your table is poorly designed (denormalized in a way that's going to hurt performance instead of helping it), and should be refactored into two or more tables. While denormalization might make some things easier, it's going to make this query very expensive, no matter how you do it.
If you really do have a good reason to keep the table denormalized (and that reason isn't just "it'll make inserts harder"), you'll likely still need a pseudo-normalized table that only contains unique email addresses. There you could either keep a numeric ID for each email address and foreign key that back to your original table (which you're trying to avoid), or have the email address itself be the primary key, with a column that you insert or update to reflect the count for that email address. Whenever you insert into the original table, you also insert (with an `ON DUPLICATE KEY UPDATE` clause) into that email tracking table.
My bet is that it would be better to just normalize your original table than pursue this, but it is an option. It seems likely that your issue isn't so much that email addresses are denormalized, but that user account information is denormalized. Do you really allow different users to have the same email? If you're trying to report on how many users of a particular age perform a certain action, there should be a better key to use than the email address, like a numeric user id, or a table that tracks the count of actions for a particular user.
|
How to achieve absolute fastest way when doing COUNT(*) GROUP BY string
|
[
"",
"mysql",
"sql",
""
] |
Hi, I'm obtaining data from a table through a query. I need to omit some of the values, but currently I can't write the query for that.
```
+--------+----------------------+-----------+----------------+
| aut_id | aut_name | country | home_city |
+--------+----------------------+-----------+----------------+
| AUT001 | William Norton | UK | Cambridge |
| AUT002 | William Maugham | Canada | Toronto |
| AUT003 | William Anthony | UK | Leeds |
| AUT004 | S.B.Swaminathan | India | Bangalore |
| AUT005 | Thomas Morgan | Germany | Arnsberg |
| AUT006 | Thomas Merton | USA | New York |
| AUT007 | Piers Gibson | UK | London |
| AUT008 | Nikolai Dewey | USA | Atlanta |
| AUT009 | Marquis de Ellis | Brazil | Rio De Janerio |
| AUT010 | Joseph Milton | USA | Houston |
| AUT011 | John Betjeman Hunter | Australia | Sydney |
| AUT012 | Evan Hayek | chillie | Chillie |
| AUT013 | E. Howard | Spain | Madrid |
| AUT014 | C. J. Wilde | UK | London |
| AUT015 | Butler Andre | Austria | Florida |
+--------+----------------------+-----------+----------------+
```
I want to omit Brazil, Australia and Chillie. I tried a query similar to the one below, but it didn't work:
```
SELECT d.country, COUNT(*) AS number FROM ..............
AND d.country NOT LIKE 'brazil' OR 'Austria' OR 'Chillie'
GROUP BY d.country
```
But it works for one country, like this:
```
SELECT d.country, COUNT(*) AS number FROM ..............
AND d.country NOT LIKE 'brazil'
GROUP BY d.country
```
Can anybody help me do the filtering and get rid of the terms I want excluded?
|
Use `NOT IN`:
```
SELECT d.country, COUNT(*) AS number FROM ..............
AND d.country NOT IN ('brazil', 'Austria', 'Chillie')
GROUP BY d.country
```
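A quick runnable check of `NOT IN` with Python's sqlite3 and a hypothetical country column (string comparison in SQLite is case-sensitive, so the values must match exactly):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE authors (country TEXT)")
conn.executemany("INSERT INTO authors VALUES (?)",
                 [("UK",), ("UK",), ("Brazil",), ("Austria",), ("Chillie",), ("USA",)])
# NOT IN drops every row whose country matches one of the listed values
rows = conn.execute("""
    SELECT country, COUNT(*) AS number
    FROM authors
    WHERE country NOT IN ('Brazil', 'Austria', 'Chillie')
    GROUP BY country ORDER BY country""").fetchall()
print(rows)  # [('UK', 2), ('USA', 1)]
```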
|
You can also chain `NOT LIKE` conditions with `AND`:
```
SELECT d.country, COUNT(*) AS number FROM ..............
AND d.country NOT LIKE 'brazil' AND d.country NOT LIKE 'Austria' AND d.country NOT LIKE 'Chillie'
GROUP BY d.country;
```
The above works, but the other answers are better!
|
Writing the filtering query to ignore some strings
|
[
"",
"mysql",
"sql",
""
] |
I have the below table with 2 columns
```
ID | Dept
1 | A
2 | A
3 | B
4 | B
5 | B
6 | A
```
I want to do a count such that the output should look as the table below.
```
Dept | Count
A | 2
B | 3
A | 1
```
Thanks for your help in advance!
|
Slightly different to Michael's, same result:
```
with cte1 as (
select id,
dept,
row_number() over (partition by dept order by id) -
row_number() over (order by id) group_num
from test),
cte2 as (
select dept,
group_num,
count(*) c_star,
max(id) max_id
from cte1
group by dept,
group_num)
select dept,
c_star
from cte2
order by max_id;
```
<http://sqlfiddle.com/#!4/ff747/1>
|
From your example, it looks like you want to count sequential runs of records for each department.
You can do this by taking the difference of two row numbers, one partitioned by department and one over the whole ordering (a gaps-and-islands technique).
```
create table tblDept (
id int not null,
dept varchar(50)
);
insert into tblDept values (1, 'A');
insert into tblDept values (2, 'A');
insert into tblDept values (3, 'B');
insert into tblDept values (4, 'B');
insert into tblDept values (5, 'B');
insert into tblDept values (6, 'A');
with orderedDepts as (
select
dept,
id,
row_number() over (partition by dept order by id) -
row_number() over (order by id) as rn
from tblDept
)
select
dept,
count(*) as num
from orderedDepts
group by
dept,
rn
order by
max(id)
```
Gives the output:
```
+------+-----+
| DEPT | NUM |
+------+-----+
| A | 2 |
| B | 3 |
| A | 1 |
+------+-----+
```
**[SQL Fiddle](http://sqlfiddle.com/#!4/f19ea/7)**
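The same row-number-difference trick runs unchanged in SQLite (3.25+); a runnable sketch via Python's sqlite3 with the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblDept (id INTEGER, dept TEXT)")
conn.executemany("INSERT INTO tblDept VALUES (?,?)",
                 [(1, "A"), (2, "A"), (3, "B"), (4, "B"), (5, "B"), (6, "A")])
# rn is constant within each consecutive run of the same dept
rows = conn.execute("""
    WITH orderedDepts AS (
        SELECT dept, id,
               ROW_NUMBER() OVER (PARTITION BY dept ORDER BY id) -
               ROW_NUMBER() OVER (ORDER BY id) AS rn
        FROM tblDept)
    SELECT dept, COUNT(*) AS num
    FROM orderedDepts
    GROUP BY dept, rn
    ORDER BY MAX(id)""").fetchall()
print(rows)  # [('A', 2), ('B', 3), ('A', 1)]
```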
|
SQL Grouping by sequential occurrences of a value
|
[
"",
"sql",
"database",
"oracle",
"gaps-and-islands",
""
] |
I'm creating a web app that displays a pie chart. In order to get all the data for the chart from a **PostgreSQL 9.3** database in a single HTTP request, I'm combining multiple `SELECT` statements with `UNION ALL` — here's a portion:
```
SELECT 'spf' as type, COUNT(*)
FROM (SELECT cai.id
FROM common_activityinstance cai
JOIN common_activityinstance_settings cais ON cai.id = cais.activityinstance_id
JOIN common_activitysetting cas ON cas.id = cais.id
JOIN quizzes_quiz q ON q.id = cai.activity_id
WHERE cai.end_time::date = '2015-09-12'
AND q.name != 'Exit Ticket Quiz'
AND cai.activity_type = 'QZ'
AND (cas.key = 'disable_student_nav' AND cas.value = 'True'
OR cas.key = 'pacing' AND cas.value = 'student')
GROUP BY cai.id
HAVING COUNT(cai.id) = 2) sub
UNION ALL
SELECT 'spn' as type, COUNT(*)
FROM common_activityinstance cai
JOIN common_activityinstance_settings cais ON cai.id = cais.activityinstance_id
JOIN common_activitysetting cas ON cas.id = cais.id
WHERE cai.end_time::date = '2015-09-12'
AND cai.activity_type = 'QZ'
AND cas.key = 'disable_student_nav'
AND cas.value = 'False'
UNION ALL
SELECT 'tp' as type, COUNT(*)
FROM (SELECT cai.id
FROM common_activityinstance cai
JOIN common_activityinstance_settings cais ON cai.id = cais.activityinstance_id
JOIN common_activitysetting cas ON cas.id = cais.id
WHERE cai.end_time::date = '2015-09-12'
AND cai.activity_type = 'QZ'
AND cas.key = 'pacing' AND cas.value = 'teacher') sub;
```
This produces a nice, small response for sending back to the client:
```
type | count
------+---------
spf | 100153
spn | 96402
tp | 84211
```
I wonder if my queries can be made more efficient. Each SELECT statement uses mostly the same JOIN operations. Is there a way to not repeat the JOIN for each new SELECT?
And I would actually prefer a single row with 3 columns.
Or, in general, is there some entirely different but better approach than what I'm doing?
|
You can bundle most of the cost in a single main query in a [CTE](http://www.postgresql.org/docs/current/interactive/queries-with.html) and reuse the result several times.
This returns a **single row with three columns** named after each `type` ([as requested in the comment](https://stackoverflow.com/questions/32544322/postgresql-9-3-better-way-than-multiple-select-statements/32545083#comment52946781_32544322)):
```
WITH cte AS (
SELECT cai.id, cai.activity_id, cas.key, cas.value
FROM common_activityinstance cai
JOIN common_activityinstance_settings s ON s.activityinstance_id = cai.id
JOIN common_activitysetting cas ON cas.id = s.id
WHERE cai.end_time::date = '2015-09-12' -- problem?
AND cai.activity_type = 'QZ'
AND (cas.key = 'disable_student_nav' AND cas.value IN ('True', 'False') OR
cas.key = 'pacing' AND cas.value IN ('student', 'teacher'))
)
SELECT *
FROM (
SELECT count(*) AS spf
FROM (
SELECT c.id
FROM cte c
JOIN quizzes_quiz q ON q.id = c.activity_id
WHERE q.name <> 'Exit Ticket Quiz'
AND (c.key, c.value) IN (('disable_student_nav', 'True')
, ('pacing', 'student'))
GROUP BY 1
HAVING count(*) = 2
) sub
) spf
, (
SELECT count(key = 'disable_student_nav' AND value = 'False' OR NULL) AS spn
, count(key = 'pacing' AND value = 'teacher' OR NULL) AS tp
FROM cte
) spn_tp;
```
Should work for Postgres 9.3. In Postgres 9.4 you can use the new aggregate `FILTER` clause:
```
count(*) FILTER (WHERE key = 'disable_student_nav' AND value = 'False') AS spn
, count(*) FILTER (WHERE key = 'pacing' AND value = 'teacher') AS tp
```
Details for both syntax variants:
* [How can I simplify this game statistics query?](https://stackoverflow.com/questions/27136251/how-can-i-simplify-this-game-statistics-query/27141193#27141193)
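The `FILTER` clause is also easy to try outside Postgres, since SQLite supports it for aggregates as of 3.30. A minimal sketch (the toy `settings` table is my own, not the original schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE settings (key TEXT, value TEXT);
INSERT INTO settings VALUES
  ('disable_student_nav', 'False'),
  ('disable_student_nav', 'True'),
  ('pacing', 'teacher'),
  ('pacing', 'teacher');
""")

# One pass over the table, two conditional counts
spn, tp = conn.execute("""
SELECT count(*) FILTER (WHERE key = 'disable_student_nav' AND value = 'False') AS spn,
       count(*) FILTER (WHERE key = 'pacing' AND value = 'teacher')            AS tp
FROM settings
""").fetchone()
print(spn, tp)  # 1 2
```

This is the point of the CTE approach: the table is scanned once, and each count just filters that single pass.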
The condition marked `problem?` may be a big performance problem, depending on the data type of `cai.end_time`. For one, it's not [**sargable**](https://en.wikipedia.org/wiki/Sargable). And if it's a `timestamptz` type, the expression is hard to index, because the result depends on the current time zone setting of the session - which can also lead to different results when executed in different time zones.
Compare:
* [Sustract two queries from same table](https://stackoverflow.com/questions/28422763/sustract-two-queries-from-same-table/28423238#28423238)
* [Subtract hours from the now() function](https://stackoverflow.com/questions/30894296/subtract-hours-from-the-now-function/30896121#30896121)
* [Ignoring timezones altogether in Rails and PostgreSQL](https://stackoverflow.com/questions/9571392/ignoring-timezones-altogether-in-rails-and-postgresql/9576170#9576170)
You just have to name the time zone that is supposed to define your date. Taking my time zone in Vienna as an example:
```
WHERE cai.end_time >= '2015-09-12 0:0'::timestamp AT TIME ZONE 'Europe/Vienna'
AND cai.end_time < '2015-09-13 0:0'::timestamp AT TIME ZONE 'Europe/Vienna'
```
You can provide simple `timestamptz` values as well. You could even just:
```
WHERE cai.end_time >= '2015-09-12'::date
AND cai.end_time < '2015-09-12'::date + 1
```
But the first variant does not depend on the current time zone setting.
Detailed explanation in the links above.
Now the query can use your index and should be much faster if there are many different days in your table.
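The sargability point shows up directly in plan output. A sketch with SQLite via Python (the principle carries over to Postgres `EXPLAIN`: a function wrapped around the column defeats the index, a plain half-open range does not):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (end_time TEXT);
CREATE INDEX t_end_time ON t (end_time);
""")

# Wrapping the column in a function hides it from the index: no SEARCH step
plan_expr = str(conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t WHERE date(end_time) = '2015-09-12'"
).fetchall())

# A plain half-open range on the raw column is sargable: index SEARCH
plan_range = str(conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t "
    "WHERE end_time >= '2015-09-12' AND end_time < '2015-09-13'"
).fetchall())

print(plan_expr)
print(plan_range)
```

The first plan is a scan; the second reports a `SEARCH ... USING ... INDEX` on the range.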
|
This is only a sketch of a completely different approach: construct a boolean "hypercube" for all conditions that you need
in your "crosstabulation". The logic of selecting or aggregating subsets can be done later (such as suppressing the exit\_tickets, for which the business logic is not clear to me).
---
```
SELECT DISTINCT not_exit, disabled, pacing
, COUNT(*) AS the_count
FROM (SELECT DISTINCT cai.id
, EXISTS (SELECT *
FROM quizzes_quiz q
WHERE q.id = cai.activity_id AND q.name != 'Exit Ticket Quiz'
) AS not_exit
, EXISTS ( SELECT *
FROM common_activityinstance_settings cais
JOIN common_activitysetting cas ON cas.id = cais.id
WHERE cai.id = cais.activityinstance_id
AND cas.key = 'disable_student_nav' AND cas.value = 'True'
) AS disabled
, EXISTS ( SELECT *
FROM common_activityinstance_settings cais
JOIN common_activitysetting cas ON cas.id = cais.id
WHERE cai.id = cais.activityinstance_id
               AND cas.key = 'pacing' AND cas.value = 'student'
             ) AS pacing
FROM common_activityinstance cai
WHERE cai.end_time::date = '2015-09-12' AND cai.activity_type = 'QZ'
) my_cube
GROUP BY 1,2,3
ORDER BY 1,2,3
;
```
---
Final note: This method is based on my *assumption* that the underlying data model is in fact an EAV-model, and that an attribute can occur at most once per student.
|
Better way than multiple SELECT statements?
|
[
"",
"sql",
"postgresql",
"select",
"common-table-expression",
"postgresql-performance",
""
] |
I am trying to fetch data between two dates. My query is:
```
select *
from TABLE
where CREATED_DATE >= '11-MAY-2015'
and CREATED_DATE <= '11-MAY-2015'
```
It doesn't return any value. Though the data is present for 11th May and can be fetched if I give the dates as
```
CREATED_DATE >= '11-MAY-2015' and CREATED_DATE <= '12-MAY-2015'
```
|
The `Date` datatype also has a time part.
When you say `'11-MAY-2015'`, it's actually `'11-MAY-2015 00:00:00'`.
> Date
> This datatype contains the datetime fields YEAR, MONTH, DAY,
> HOUR, MINUTE, and SECOND. It does not have fractional seconds or a
> time zone.
Try this query
```
select *
from TABLE
where
CREATED_DATE >= date '2015-05-11' and CREATED_DATE < date '2015-05-12'
```
This will be in effect
```
11-MAY-2015 00:00:00 <= CREATED_DATE < 12-MAY-2015 00:00:00
```
Or by using `Trunc` which will ignore the `time` of the date:
```
select *
from TABLE
where
Trunc(CREATED_DATE) = date '2015-05-11'
```
Edit: Updated the date field using `date literals`, as `11-MAY-2015` will only work with certain `NLS` settings (comment by `@Ben` and `@David Aldridge`).
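The time-part pitfall is easy to reproduce. A sketch with SQLite via Python, where text timestamps stand in for Oracle `DATE` values (toy data of my own; the string comparison mimics the chronological comparison):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (created_date TEXT);
-- one row at midnight, one row mid-afternoon on the same day
INSERT INTO t VALUES ('2015-05-11'), ('2015-05-11 14:30:00');
""")

# BETWEEN the same date twice only matches the midnight row
naive = conn.execute("""
SELECT count(*) FROM t
WHERE created_date >= '2015-05-11' AND created_date <= '2015-05-11'
""").fetchone()[0]

# The half-open range covers the whole day
half_open = conn.execute("""
SELECT count(*) FROM t
WHERE created_date >= '2015-05-11' AND created_date < '2015-05-12'
""").fetchone()[0]

print(naive, half_open)  # 1 2
```

Any row with a non-midnight time part falls outside the naive `<=` bound, which is exactly why the question's original query returned nothing.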
|
Depending on the CREATED\_DATE datatype, you may need to use the [TO\_DATE](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions183.htm) function. Try this:
```
select * from TABLE where
CREATED_DATE >= TO_DATE('2015-05-11 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
and CREATED_DATE <= TO_DATE('2015-05-11 23:59:59', 'YYYY-MM-DD HH24:MI:SS')
```
|
Getting data between dates is not accurate
|
[
"",
"sql",
"oracle",
""
] |
I have two tables :
PEOPLE
```
ID |NAME
|A2112 |John
|B3200 |Mary
|C2454 |Bob
|F2256 |Joe
```
JOBS
```
|ID |NAME |PEOPLE
|56565 |Taxi Driver |A2112
|23232 |Herborist |A2112
|12125 |Jumper |B3200
|25425 |Taxi Driver |C2454
|12456 |Taxi Driver |F2256
|56988 |Herborist |F2256
|45459 |Superhero |F2256
```
I wonder how I can select any records FROM People that have JOBS ID 56565 AND 23232 in a performant way.
The search pattern may be two or more jobs, and the records can have other jobs too.
So the result will be John and Joe in this example.
|
Not quite sure if I got you right. This will return people who have job 56565 and/or 23232:
```
select distinct p.name
from people p
join jobs j on p.id = j.peopleid
where j.id in (56565, 23232)
```
If BOTH jobs are required:
```
select p.name
from people p
join jobs j on p.id = j.peopleid
where j.id in (56565, 23232)
group by p.name
having count(*) > 1
```
The `HAVING` clause can also be written as
```
having max(j.id) <> min(j.id)
```
Perhaps better performance that way.
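Two caveats worth noting. In the sample data Joe's 'Taxi Driver' and 'Herborist' rows carry different job IDs (12456, 56988), so matching on IDs returns only John; to reproduce the stated "John and Joe" result you would match on the job *name*. Also, `COUNT(DISTINCT ...)` is safer than `COUNT(*) > 1` if a person can hold the same job twice. A runnable sketch (SQLite via Python):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE people (id TEXT, name TEXT);
INSERT INTO people VALUES ('A2112','John'), ('B3200','Mary'), ('C2454','Bob'), ('F2256','Joe');
CREATE TABLE jobs (id INT, name TEXT, people TEXT);
INSERT INTO jobs VALUES
  (56565,'Taxi Driver','A2112'), (23232,'Herborist','A2112'),
  (12125,'Jumper','B3200'),      (25425,'Taxi Driver','C2454'),
  (12456,'Taxi Driver','F2256'), (56988,'Herborist','F2256'),
  (45459,'Superhero','F2256');
""")

# Relational division: people holding BOTH job names
names = [r[0] for r in conn.execute("""
SELECT p.name
FROM people p
JOIN jobs j ON j.people = p.id
WHERE j.name IN ('Taxi Driver', 'Herborist')
GROUP BY p.id, p.name
HAVING COUNT(DISTINCT j.name) = 2
""")]
print(sorted(names))  # ['Joe', 'John']
```

Bob has only one of the two jobs, so the `HAVING` clause drops him.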
|
The [INNER JOIN](https://dev.mysql.com/doc/refman/5.7/en/join.html) keyword selects all rows from both tables as long as there is a match between the columns in both tables.
```
SELECT
p.NAME, COUNT(*) as tot
FROM
PEOPLE p
INNER JOIN JOBS j ON (p.ID = j.PEOPLE)
WHERE
j.ID IN (56565, 23232)
GROUP BY
p.NAME
HAVING
COUNT(*) > 1
```
[Visual explanation](http://blog.codinghorror.com/a-visual-explanation-of-sql-joins/) of INNER JOIN below:
[](https://i.stack.imgur.com/misVD.png)
|
SELECT records from two table
|
[
"",
"mysql",
"sql",
""
] |
I have following SQL for Microsoft SQL Server
```
SELECT *
FROM tblA
WHERE CREATION_DATE BETWEEN DATE'2015-09-12' AND DATE'2015-09-15'
```
But this throws a syntax error:
> Error: Incorrect syntax near '2015-09-12'. SQLState: S0001 ErrorCode:
> 102
What is wrong? I want to use ANSI literal code.
|
This is not how you cast in MS SQL Server. Instead, you should try the `cast` syntax:
```
SELECT *
FROM tblA
WHERE creation_date BETWEEN CAST('2015-09-12' AS DATE) AND
CAST('2015-09-15' AS DATE)
```
|
You can simply use this query, without adding date:
```
SELECT *
FROM tblA
WHERE CREATION_DATE BETWEEN '2015-09-12' AND '2015-09-15'
```
|
SQL Server gives syntax error
|
[
"",
"sql",
"sql-server",
"date",
"select",
""
] |
I have a table that contains the following columns:
1. ID
2. Master ID
The Master ID can be shared between different rows with different IDs.
E.g.:
```
ID | Master ID
1 | 1
2 | 1
3 | 1
4 | 2
```
Knowing the ID I want to retrieve all the rows that share the same master ID
I managed to do it using this query:
```
Select *
FROM table t
LEFT JOIN table t2
ON t.MASTER_ID = t2.MASTER_ID
Where t.ID = '1'
```
Then I also tried using:
```
Select *
FROM table t
LEFT JOIN table t2
ON t.MASTER_ID = t2.MASTER_ID and t.ID = '1'
```
In that case, it was much slower. Can anyone explain why?
|
The queries are doing different things. In the first you are saying:
```
1. give me all rows from `table` where `id = 1`
2. Also give me rows from t2 with a matching master ID
```
In the second you are saying
```
1. Give me all rows from `table`
2. Return rows from `t2` with a matching master ID and where `t1.ID = 1`
```
In a simple example you might have
```
ID Master_ID
------------------------
1 1
2 1
3 1
4 2
```
So your first query will return:
```
t1.ID t1.Master_ID t2.ID t2.Master_ID
--------------------------------------------
1 1 1 1
1 1 2 1
1 1 3 1
```
Your second query will return
```
t1.ID t1.Master_ID t2.ID t2.Master_ID
--------------------------------------------
1 1 1 1
1 1 2 1
1 1 3 1
2 1 NULL NULL
3 1 NULL NULL
4 2 NULL NULL
```
So basically in the first query you are returning a limited number of rows from your table, whereas in the second you return all rows, but only join to some of them.
|
If the `t.ID = '1'` condition is in the WHERE clause, it only has to be evaluated for the number of rows in t. If the `t.ID = '1'` condition is put into the ON clause for the join, it must be evaluated for all rows in t2. If there are a lot of rows in t2, this can significantly increase the run time of the query.
|
Why is my SQL request much slower when the condition is in the join?
|
[
"",
"sql",
"oracle",
"join",
"left-join",
""
] |
Good afternoon,
I am trying to put together a query that will select data from 2 tables when using the MAX() function to return the user's most recent login time.
The tables are as follows:
USERS:
```
USERNAME CREATED
JOHNSMITH 01/01/2015
MATTTYLER 12/12/2013
DAVIDCROSS 09/07/2014
SARAHTHOMPSON 02/05/2015
```
SESSIONS:
```
USERNAME ACTION TIMESTAMP
JOHNSMITH LOGOUT 13/09/2015 10:00:00
MATTTYLER LOGOUT 13/09/2015 05:00:00
JOHNSMITH LOGIN 12/09/2015 15:00:00
MATTTYLER LOGIN 12/09/2015 11:00:00
JOHNSMITH LOGOUT 12/09/2015 12:00:00
JOHNSMITH LOGIN 12/09/2015 05:00:00
```
Result:
```
USERNAME CREATED TIMESTAMP (as LASTLOGIN)
JOHNSMITH 01/01/2015 12/09/15 15:00:00
MATTTYLER 12/12/2013 12/09/15 11:00:00
DAVIDCROSS 09/07/2014 NULL
SARAHTHOMPSON 02/05/2015 NULL
```
If MAX() is not the most appropriate function to select the most recent login time, please feel free to suggest a better approach. If possible, can you please demonstrate how to achieve this using both the SQL-89 and SQL-92 joins?
Any help is greatly appreciated, thank you.
|
Do a `LEFT JOIN` to also get users who never logged in. `GROUP BY` with `MAX` to get each user's latest `TIMESTAMP`.
```
select u.username, u.created, max(s."TIMESTAMP")
from users u
left join sessions s on u.username = s.username
                    and s.action = 'LOGIN'
group by u.username, u.created
```
SQL-92 join used. `Timestamp` is a reserved word in SQL-92, that's why it's delimited as `"TIMESTAMP"`. The extra `s.action = 'LOGIN'` join condition keeps logout rows out of `MAX` without losing never-logged-in users.
SQL-89 (aka SQL-1): if no left join is available, do a correlated sub-query instead:
```
select u.username, u.created,
       (select max("TIMESTAMP") from sessions s
        where u.username = s.username
          and s.action = 'LOGIN')
from users u
```
(Oracle's old-style join syntax uses the `(+)` operator for outer joins.)
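A quick check of the left-join/`MAX` approach, using SQLite via Python. ISO timestamps stand in for the `DD/MM/YYYY` values so that `MAX` sorts chronologically, and the join filters on `action = 'LOGIN'` so logouts don't win while never-logged-in users still appear:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (username TEXT, created TEXT);
INSERT INTO users VALUES
  ('JOHNSMITH','2015-01-01'), ('MATTTYLER','2013-12-12'),
  ('DAVIDCROSS','2014-07-09'), ('SARAHTHOMPSON','2015-05-02');
CREATE TABLE sessions (username TEXT, action TEXT, ts TEXT);
INSERT INTO sessions VALUES
  ('JOHNSMITH','LOGOUT','2015-09-13 10:00:00'),
  ('MATTTYLER','LOGOUT','2015-09-13 05:00:00'),
  ('JOHNSMITH','LOGIN', '2015-09-12 15:00:00'),
  ('MATTTYLER','LOGIN', '2015-09-12 11:00:00'),
  ('JOHNSMITH','LOGOUT','2015-09-12 12:00:00'),
  ('JOHNSMITH','LOGIN', '2015-09-12 05:00:00');
""")

rows = conn.execute("""
SELECT u.username, u.created, MAX(s.ts) AS lastlogin
FROM users u
LEFT JOIN sessions s ON u.username = s.username AND s.action = 'LOGIN'
GROUP BY u.username, u.created
ORDER BY u.username
""").fetchall()
for r in rows:
    print(r)
```

DAVIDCROSS and SARAHTHOMPSON come back with a NULL last login, matching the expected result table.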
|
Another way to do this is via analytic functions:
```
select username,
created,
"TIMESTAMP"
from (select u.*,
s."TIMESTAMP",
row_number() over( partition by u.username
order by s."TIMESTAMP" desc ) as rnk
from users u
           left join sessions s
             on u.username = s.username
            and s.action = 'LOGIN')
where rnk = 1
```
|
Oracle SQL Query - Join when using MAX() function
|
[
"",
"sql",
"oracle",
""
] |
Code:
```
ALTER TABLE tblUser
DROP COLUMN Mobile
```
Error:
> ALTER TABLE DROP COLUMN Mobile failed because one or more objects access this column.
This column has values in the table. How can I delete all objects that access this column?
How can I `DROP COLUMN` when it has values?
How can I do it in code? How can I delete all constraints on the column automatically?
|
```
ALTER TABLE DROP COLUMN Mobile failed because one or more objects access this column.
```
Your column can't be dropped because one or more objects (such as constraints) still reference it.
So first, find where your column is referenced with the query below.
```
SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE TABLE_NAME = 'TABLENAME'
```
It will show you all constraints of all tables in your current database. You need to find the relevant one and remove that constraint. After that your column can be dropped successfully, because there is no longer any reference to it.
To remove a constraint from the column, use the query below:
```
alter table tablename
drop constraint constraintid
```
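Since the question also asks how to do this automatically: for the common case where the blocker is a default constraint with an auto-generated name, the lookup and drop can be scripted. A T-SQL sketch (assumes the `sys.default_constraints` / `sys.columns` catalog views of SQL Server 2005+; it only handles default constraints, so other referencing objects still need the `INFORMATION_SCHEMA` query above):

```sql
DECLARE @c sysname;

SELECT @c = dc.name
FROM sys.default_constraints dc
JOIN sys.columns c
  ON c.object_id = dc.parent_object_id
 AND c.column_id = dc.parent_column_id
WHERE dc.parent_object_id = OBJECT_ID('tblUser')
  AND c.name = 'Mobile';

IF @c IS NOT NULL
    EXEC ('ALTER TABLE tblUser DROP CONSTRAINT ' + @c);

ALTER TABLE tblUser DROP COLUMN Mobile;
```

The dynamic `EXEC` is needed because the constraint name is only known at run time.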
|
Use below query to find the constraints name for particular tablename
```
SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE TABLE_NAME = 'TABLENAME'
```
Now you can see the constraint names under the constraint\_name column; drop each constraint using the syntax below:
```
ALTER TABLE TABLENAME DROP CONSTRAINT CONSTRAINTNAME
```
After that you can use below statement to drop the column
```
ALTER TABLE TABLENAME DROP COLUMN COLUMNNAME
```
|
How can I DROP a COLUMN with values in Microsoft SQL Server?
|
[
"",
"sql",
"sql-server",
"alter",
""
] |
I have two tables in a database: **Events** and **Dates**, with a one-to-many relationship between them (*1 event can take place along many dates*).
I want to query the database to get the future and past events like this:
* Query to get **future events** should return every event with a date >= CURDATE() **and events with no assigned dates yet**...
* Query to get **past events** should return only events with at least one date assigned which should be, in turn, older than CURDATE()...
I am querying the database using a simple join, getting the appropriate results for past events (restrictive in not returning events without dates). However, the query for future events is a little bit more tricky, as events without dates have to be returned as well. I am open to any suggestion.
My try so far:
**FOR PAST EVENTS (working ok):**
```
SELECT dates.*, events.*
FROM events
JOIN dates on (events.eventID = dates.eventID AND dates.date < CURDATE())
GROUP BY events.eventID
ORDER BY events.eventID ASC
```
**FOR FUTURE EVENTS:** ???
```
SELECT dates.*, events.*
FROM events
LEFT JOIN dates on (events.eventID = dates.eventID AND dates.date >= CURDATE())
GROUP BY events.eventID
ORDER BY events.eventID ASC
```
(I am aware my try with **LEFT JOIN** is not valid, as it also returns past events with no date information (every field NULLED)).
Thanks in advance.
|
First off, as you laid them out, your queries don't need a `GROUP BY` clause, as there is no aggregation being done (no `count`, `max`, etc).
You have to separate "join logic" and "condition logic", so I suggest:
Past events:
```
SELECT dates.*, events.*
FROM events
JOIN dates on (events.eventID = dates.eventID)
WHERE dates.date < CURDATE()
ORDER BY events.eventID ASC
```
Future events:
```
SELECT dates.*, events.*
FROM events
LEFT JOIN dates on events.eventID = dates.eventID
WHERE dates.date >= CURDATE() OR dates.date IS NULL
ORDER BY events.eventID ASC
```
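The future-events behavior (including date-less events) is easy to verify. A sketch with SQLite via Python, using a fixed "today" of 2015-09-14 in place of `CURDATE()`; toy data where event 1 has past and future dates, event 2 only past dates, and event 3 none:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (eventID INT);
INSERT INTO events VALUES (1), (2), (3);
CREATE TABLE dates (eventID INT, date TEXT);
INSERT INTO dates VALUES (1,'2015-09-01'), (1,'2015-09-20'), (2,'2015-09-01');
""")

future = conn.execute("""
SELECT e.eventID, d.date
FROM events e
LEFT JOIN dates d ON e.eventID = d.eventID
WHERE d.date >= '2015-09-14' OR d.date IS NULL
ORDER BY e.eventID
""").fetchall()
print(future)  # [(1, '2015-09-20'), (3, None)]
```

Event 2 drops out because all of its join rows carry a past (non-NULL) date, while event 3 survives through the NULL row the left join produces.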
|
If a record in events table will always have a record in the dates table, then you could use
```
SELECT dates.*, events.*
FROM events
JOIN dates on (events.eventID = dates.eventID AND IFNULL(dates.date, CURDATE()) >= CURDATE())
GROUP BY events.eventID
ORDER BY events.eventID ASC
```
I guess I don't follow why past events also do not have dates.
|
Join + date condition in SQL query
|
[
"",
"mysql",
"sql",
"join",
"left-join",
""
] |
```
Select *
from tableA
inner join tableB on tableA.id = tableB.aid
and cast(a.date AS DATETIME) = CAST('2015-08-24' AS DATETIME)
```
The values stored in `tableA.date` are like '2015-08-24', so the data itself has no issue.
When I execute the above statement, I get
> The conversion of a date data type to a datetime data type resulted in an out-of-range value
May I know why I can't cast a `date` column to `datetime`?
|
The **root cause** of the problem is this:
* the data type `DATE` has a range of accepted values from `01-01-0001` through `12-31-9999`
* the data type `DATETIME` has a range of accepted values from `01-01-1753` through `12-31-9999`
So if you happen to have a `DATE` from before 1753, or an empty / NULL value - this will be outside the range that `DATETIME` can handle.
You should *stop using* `DATETIME` In SQL Server **2008** and newer. Use `DATETIME2(n)` instead (where `n` stands for the number of fractional seconds you need).
So try this:
```
select *
from tableA
inner join tableB on tableA.id = tableB.aid
and cast(a.date AS DATETIME2(3)) = CAST('2015-08-24' AS DATETIME2(3))
```
and I'm sure this'll work just fine.
|
Use the format yyyymmdd, which is universal in SQL Server:
```
Select * from tableA inner join tableB
on tableA.id = tableB.aid
and cast(a.date AS DATETIME) = CAST('20150824' AS DATETIME)
```
I did not initially see the problem with the '0001-01-01' value in `a.Date`. You could do a bit of a dirty trick like this:
```
Select * from tableA inner join tableB
on tableA.id = tableB.aid
and case when substring(a.date, 1, 2) not in ('19', '20') then null else CAST(a.date AS DATETIME) end = CAST('20150824' AS DATETIME)
```
|
SQL doesn't allow to cast date column to datetime?
|
[
"",
"sql",
"sql-server",
"date",
"datetime",
""
] |
I've the following tables with some data:
```
Collector:
dcid name hostid userid
123 test host1 234
567 hello host2 345
CollectorConfiguration:
ID propertyname propertyvalue collector_id(foreign key)
id1 c_source local 123
id2 c_createdby admin 123
id3 c_pattern JBoss 123
id4 c_source remote 567
id5 c_createdby admin 567
id6 c_pattern Apache 567
```
Now I need to get all records from the Collector table, sorted by the value of the "c\_pattern" property in the CollectorConfiguration table.
I tried writing a query using an inner join but I couldn't get the desired result. Please help.
Note: the returned result should contain only the columns of the Collector table, i.e., it should behave like `select * from Collector` but with sorting on the c\_pattern property value.
```
Desired output(with ascending order on c_pattern):
567 hello host2 345
123 test host1 234
```
|
With the EAV model you should have a bunch of helper views to overcome the problems like this. The views will act as tables, such as collector\_patterns, collector\_sources, etc.
```
SELECT c.*
FROM Collector c LEFT JOIN
CollectorConfiguration cc on c.dcid = cc.collector_id
where cc.propertyname = 'c_pattern'
ORDER BY cc.propertyvalue
```
So, to make a view from this query you would write it like this:
```
CREATE VIEW collector_pattern AS
SELECT c.*, cc.propertyvalue AS pattern
FROM Collector c LEFT JOIN
CollectorConfiguration cc on c.dcid = cc.collector_id
where cc.propertyname = 'c_pattern'
```
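The sort-by-property query can be checked against the sample data. A sketch with SQLite via Python, sorting ascending on the property value so that Apache (567) comes before JBoss (123), as in the desired output:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Collector (dcid INT, name TEXT, hostid TEXT, userid INT);
INSERT INTO Collector VALUES (123,'test','host1',234), (567,'hello','host2',345);
CREATE TABLE CollectorConfiguration (id TEXT, propertyname TEXT, propertyvalue TEXT, collector_id INT);
INSERT INTO CollectorConfiguration VALUES
  ('id1','c_source','local',123),  ('id2','c_createdby','admin',123),
  ('id3','c_pattern','JBoss',123), ('id4','c_source','remote',567),
  ('id5','c_createdby','admin',567), ('id6','c_pattern','Apache',567);
""")

rows = conn.execute("""
SELECT c.*
FROM Collector c
LEFT JOIN CollectorConfiguration cc ON c.dcid = cc.collector_id
WHERE cc.propertyname = 'c_pattern'
ORDER BY cc.propertyvalue
""").fetchall()
print(rows)  # [(567, 'hello', 'host2', 345), (123, 'test', 'host1', 234)]
```

Filtering the EAV rows to `propertyname = 'c_pattern'` leaves exactly one join row per collector, so the result keeps only the Collector columns and one row per collector.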
|
```
SELECT a.* FROM Collector a
LEFT JOIN CollectorConfiguration b ON b.collector_id = a.dcid
WHERE b.propertyname = 'c_pattern'
ORDER BY b.propertyvalue
```
The question is not quite clear to me, but I guess this is what you are looking for.
|
SQl sorting based on certain property value of joined table
|
[
"",
"sql",
"hsqldb",
""
] |
Is there any way to get *all* records matching these conditions:
* (`userid = 1 AND (status < 2 OR time > INTERVAL '1 day')`)
And if there are less than 10 records and more (if available) for:
* `userid = 1`
Get up to 10 (last, by `time`)?
|
You want to `get all records matching`. A hard `LIMIT 10` would be wrong for the purpose. You only want to add rows up to a maximum of 10 according to *secondary* conditions ***if*** there are not enough rows for your *primary* conditions. But there can be more than 10 rows already for the primary condition alone.
### PL/pgSQL solution
The *fastest* way I can think of is a plpgsql function:
```
CREATE OR REPLACE FUNCTION f_tbl_top(_uid int = 1, _min int = 10)
RETURNS SETOF tbl AS
$func$
DECLARE
_ct int;
BEGIN
RETURN QUERY
SELECT *
FROM tbl
WHERE userid = _uid
AND (status < 2 OR time > interval '1 day')
ORDER BY time DESC;
GET DIAGNOSTICS _ct = ROW_COUNT;
_ct := _min - _ct; -- calculate diff
IF _ct > 0 THEN
RETURN QUERY
SELECT *
FROM tbl
WHERE userid = _uid
AND (status < 2 OR time > interval '1 day') IS NOT TRUE
ORDER BY time DESC
LIMIT _ct;
END IF;
END
$func$ LANGUAGE plpgsql;
```
Call:
```
SELECT * FROM f_tbl_top();
```
I added two parameters: `_uid` for the user id and `_min` for the minimum row count. They default to `1` / `10` respectively. So you get at least 10 rows if there are enough for `userid = 1` when you call the function without providing parameters. Or, to do the same for user 7 and a minimum of 9 rows:
```
SELECT * FROM f_tbl_top(7, 9);
```
Or:
```
SELECT * FROM f_tbl_top(_uid := 7, _min := 9);
```
Also be careful if your column `time` can be NULL. Then you need:
```
ORDER BY time DESC NULLS LAST
```
But don't use a basic type name like `time` as column name to begin with, even less for an `interval`, which is rather misleading.
The condition for the second `SELECT` might be optimized depending on your actual table definition.
### Pure SQL
If you prefer a pure SQL solution, you could use a [CTE](http://www.postgresql.org/docs/current/interactive/queries-with.html):
```
WITH cte AS (
SELECT *
FROM tbl
WHERE userid = 1
AND (status < 2 OR time > interval '1 day')
ORDER BY time DESC
)
TABLE cte
UNION ALL
( -- parentheses required!
SELECT *
FROM tbl
WHERE userid = 1
AND (status < 2 OR time > interval '1 day') IS NOT TRUE
ORDER BY time DESC
LIMIT GREATEST((SELECT 10 - count(*) FROM cte), 0)
);
```
The [`GREATEST(...)`](http://www.postgresql.org/docs/current/interactive/functions-conditional.html#FUNCTIONS-GREATEST-LEAST) expression in the 2nd `LIMIT` avoids an illegal negative number in the `LIMIT` clause.
|
You can do this by prioritization using `order by`. If you want 10 records:
```
select t.*
from table t
where userid = 1
order by (case when status < 2 OR time > INTERVAL '1 day' then 1 else 2 end),
time desc
limit 10;
```
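The priority-then-limit mechanics of this answer can be sketched with SQLite via Python. The `time` column is simplified to a date string and the interval condition is dropped, since the ordering is the point here; note that the first answer's caveat still applies, because a hard `LIMIT` also caps the primary matches at 10:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbl (userid INT, status INT, t TEXT);
INSERT INTO tbl VALUES
  (1, 0, '2015-09-10'), (1, 1, '2015-09-09'), (1, 0, '2015-09-01'),  -- primary matches
  (1, 2, '2015-09-12'), (1, 3, '2015-09-11'),                        -- filler rows
  (2, 0, '2015-09-12');                                              -- other user
""")

rows = conn.execute("""
SELECT status, t
FROM tbl
WHERE userid = 1
ORDER BY (CASE WHEN status < 2 THEN 1 ELSE 2 END), t DESC
LIMIT 10
""").fetchall()
print(rows)
```

All three primary rows sort first (newest first), then the filler rows pad the result toward the limit.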
|
Get all records matching some conditions OR at least 10 by adding some others
|
[
"",
"sql",
"postgresql",
""
] |
I have a hash-table:
```
CREATE TABLE hash_table ( hash_id bigserial,
user_name varchar(80),
hash varchar(80),
exp_time bigint,
PRIMARY KEY (hash_id));
INSERT INTO hash_table (hash_id, user_name, exp_time) VALUES
(1, 'one', 10),
(2, 'two', 20),
(3, 'three', 31),
(4, 'three', 32),
(5, 'three', 33),
(6, 'three', 33),
(7, 'three', 35),
(8, 'three', 36),
(9, 'two', 40),
(10, 'two', 50),
(11, 'one', 60),
(12, 'three', 70);
```
exp\_time - expiration time of hash. `exp_time = now() + delta_time` when the row creates
I need a result:
```
(1, 'one', 10),
(2, 'two', 20),
(7, 'three', 35),
(8, 'three', 36),
(9, 'two', 40),
(10, 'two', 50),
(11, 'one', 60),
(12, 'three', 70);
```
It contains a lot of user\_name-hash pairs. A user\_name may be duplicated many times.
How do I delete all rows for a given user\_name except the newest few (e.g. 10)?
I found [this](https://stackoverflow.com/a/578926/1979882) solution (for MySQL, but I hope it works) but it removes all other user\_names:
```
DELETE FROM `hash_table`
WHERE user_name NOT IN (
SELECT user_name
FROM (
SELECT user_name
FROM `hash_table`
ORDER BY exp_time DESC
LIMIT 10 -- keep this many records
)
);
```
|
You can use `row_number` to number each user's rows by `exp_time` in descending order, and then delete the rows that rank beyond 10 for their user.
[Fiddle with sample data](http://sqlfiddle.com/#!15/ad4ea/1)
```
delete from hash_table as h
using
(
SELECT user_name, exp_time,
row_number() over(partition by user_name order by exp_time desc) as rn
FROM hash_table
) as t
where h.user_name = t.user_name and h.exp_time <= t.exp_time and t.rn > 10;
```
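The accepted approach translates to other engines as well. A sketch with SQLite via Python (SQLite has no `DELETE ... USING`, so the window query feeds an `IN` list instead; keeping 2 rows per user rather than 10 so the small sample actually shows deletions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hash_table (hash_id INT PRIMARY KEY, user_name TEXT, exp_time INT)")
conn.executemany("INSERT INTO hash_table VALUES (?,?,?)", [
    (1, 'one', 10),  (2, 'two', 20),  (3, 'three', 31), (4, 'three', 32),
    (5, 'three', 33), (6, 'three', 33), (7, 'three', 35), (8, 'three', 36),
    (9, 'two', 40),  (10, 'two', 50), (11, 'one', 60), (12, 'three', 70),
])

# Keep the 2 newest rows per user_name; keying on the primary key
# avoids over-deleting when exp_time values tie (rows 5 and 6 here).
conn.execute("""
DELETE FROM hash_table
WHERE hash_id IN (
  SELECT hash_id FROM (
    SELECT hash_id,
           ROW_NUMBER() OVER (PARTITION BY user_name
                              ORDER BY exp_time DESC, hash_id DESC) AS rn
    FROM hash_table
  ) WHERE rn > 2
)
""")
kept = [r[0] for r in conn.execute("SELECT hash_id FROM hash_table ORDER BY hash_id")]
print(kept)  # [1, 8, 9, 10, 11, 12]
```

'one' keeps both of its rows (it only has two), while 'two' and 'three' are trimmed to their two newest.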
|
This will keep the 3 newest records for each user\_name:
```
DELETE FROM hash_table h
USING
(
SELECT
user_name,
exp_time,
row_number() OVER (PARTITION BY user_name ORDER BY exp_time DESC) AS r
FROM
hash_table
) AS s
WHERE
h.user_name = s.user_name
AND h.exp_time<s.exp_time
AND s.r=3
```
|
Delete all rows with specified parameters but the newest in PostgreSQL
|
[
"",
"sql",
"postgresql",
"sql-delete",
""
] |
```
CREATE PROCEDURE [dbo].[AdminCheck] (@Name nvarchar, @Password int)
AS
Select * from Admin
where AdminName = @Name AND AdminPassword = @Password
Go
```
Above is my `stored procedure`
I got `Admin` as `AdminName` and `123` as `AdminPassword` in `Admin` table.
When I run it, it does not work.
|
When you say
```
@Name nvarchar
```
The size of the variable will be 1 by default.
Hence your query will compare against only the first character of the `@Name` variable.
Provide a proper size for your variable:
```
@Name nvarchar(150)
```
|
You defined your parameter as an NVARCHAR without a length - that means ONE character.
```
CREATE PROCEDURE [dbo].[AdminCheck] (@Name nvarchar, @Password int)
```
So when you call `AdminCheck('Admin', 123)`, `@Name` is actually `'A'`, not `'Admin'`.
It should be for example:
```
CREATE PROCEDURE [dbo].[AdminCheck] (@Name nvarchar(100), @Password int)
```
|
stored procedure does not work
|
[
"",
"sql",
"sql-server",
"t-sql",
"stored-procedures",
""
] |
I have got a table with 3 columns: (int) `test1`, (int) `test2` and (int) `test3`. The content is an integer or `NULL`.
For example:
```
╔═══╦════════════╦═════════════╦═════════════╗
║ ║ test1 ║ test2 ║ test3 ║
╠═══╬════════════╬═════════════╬═════════════╣
║ 1 ║ 123 ║ 456 ║ NULL ║
║ 2 ║ 456 ║ 456 ║ NULL ║
║ 3 ║ NULL ║ NULL ║ NULL ║
╚═══╩════════════╩═════════════╩═════════════╝
```
I want to get every entry where `test1` is not "123" and `test2` is not "123" and `test3` is not "123". I need every entry with a number other than "123", or `NULL`.
I tried the following SQL syntax:
```
SELECT * FROM table WHERE test1 != "123" AND test2 != "123" AND test3 != "123";
```
The problem is that I get no entries, because the comparison requires a number in every column. In this example, I want to get entries 2 and 3 but not the first one, because there is a "123" in column `test1`.
How can I make this?
|
`null` is not a value - it's the lack thereof. It returns "unknown" when used in any context expecting a value, even the `!=` operator. In order to handle `null`s, you should do so explicitly with the `is` operator:
```
SELECT *
FROM mytable
WHERE (test1 IS NULL OR test1 != 123) AND
(test2 IS NULL OR test2 != 123) AND
(test3 IS NULL OR test3 != 123)
```
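The three-valued logic at work here is easy to demonstrate. A sketch with SQLite via Python, using the question's sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mytable (test1 INT, test2 INT, test3 INT);
INSERT INTO mytable VALUES (123, 456, NULL), (456, 456, NULL), (NULL, NULL, NULL);
""")

# NULL != 123 evaluates to NULL, which is not true, so rows 2 and 3 vanish
naive = conn.execute(
    "SELECT count(*) FROM mytable WHERE test1 != 123 AND test2 != 123 AND test3 != 123"
).fetchone()[0]

# Handling NULL explicitly keeps them
fixed = conn.execute("""
SELECT count(*) FROM mytable
WHERE (test1 IS NULL OR test1 != 123)
  AND (test2 IS NULL OR test2 != 123)
  AND (test3 IS NULL OR test3 != 123)
""").fetchone()[0]

print(naive, fixed)  # 0 2
```

The naive version matches nothing at all, since every row has at least one NULL column; the explicit version returns exactly rows 2 and 3.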
|
Use `ifnull()` to convert nulls to an included number:
```
SELECT * FROM table
WHERE ifnull(test1, 0) != "123"
AND ifnull(test2, 0) != "123"
AND ifnull(test3, 0) != "123";
```
|
SQL SELECT is not a number but can be NULL
|
[
"",
"mysql",
"sql",
"select",
"null",
""
] |
Wondering how to write a SQL function to find the N-th largest element in a table, and if no N-th largest element, return Null.
Using MySQL/MySQL workbench.
BTW, my question is different from the N-th highest salary question, since I have one additional requirement: return NULL if the N-th largest element does not exist. Any thoughts are appreciated.
thanks in advance,
Lin
|
You could do this:
```
SELECT t1.*
FROM (
SELECT *
FROM my_table
ORDER BY value DESC
LIMIT 1
OFFSET N -- Set your value for N here, N being 0-based
) t1
RIGHT OUTER JOIN (
SELECT null -- This will guarantee that you have at least one row
) t2
ON TRUE
```
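If the caller is application code rather than pure SQL, the NULL-when-missing requirement becomes trivial: an empty result set simply maps to None. A sketch (SQLite via Python; `my_table` and `value` as in the answer above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE my_table (value INT);
INSERT INTO my_table VALUES (10), (20), (30), (40), (50);
""")

def nth_largest(n):
    """1-based: nth_largest(1) is the maximum; None if fewer than n rows exist."""
    row = conn.execute(
        "SELECT value FROM my_table ORDER BY value DESC LIMIT 1 OFFSET ?",
        (n - 1,),
    ).fetchone()
    return row[0] if row else None

print(nth_largest(2))   # 40
print(nth_largest(99))  # None
```

The parameterized `OFFSET` keeps the query reusable for any `n` without string interpolation.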
|
I can think of no instance where I would want to do this...
```
SELECT * FROM ints;
+---+
| i |
+---+
| 0 |
| 1 |
| 2 |
| 3 |
| 4 |
| 5 |
| 6 |
| 7 |
| 8 |
| 9 |
+---+
SELECT CASE WHEN COUNT(*) = 9 THEN x.i ELSE NULL END i
FROM ints x
JOIN ints y
ON y.i <= x.i
GROUP
BY x.i
ORDER
BY i DESC
LIMIT 1;
+------+
| i |
+------+
| 8 |
+------+
SELECT CASE WHEN COUNT(*) = 11 THEN x.i ELSE NULL END i
FROM ints x
JOIN ints y
ON y.i <= x.i
GROUP
BY x.i
ORDER
BY i DESC
LIMIT 1;
+------+
| i |
+------+
| NULL |
+------+
```
|
find the N-th largest element in SQL
|
[
"",
"mysql",
"sql",
""
] |
I have two queries that return a single value, from different tables (and not joined via a relationship in any way), and I'm trying to combine both queries' outputs onto a single row, however I'm getting a syntax error. This is what I'm trying:
```
SELECT
(SELECT Timestamp As StartDate
FROM Events
WHERE Description = 'Inserted') AS StartDate,
(SELECT TOP (1) Timestamp As EndDate
FROM DataStore
ORDER BY Timestamp DESC) AS EndDate
```
And this is what I'm getting back:
> There was an error parsing the query. [ Token line number = 2,Token
> line offset = 2,Token in error = SELECT ]
Query 1 on its own returns: "2015-06-10 11:43:34.000" and Query 2 returns: "2015-06-11 13:59:47.000"
I want to return a single row with two columns, with the output of query 1 as the "StartDate" column, and the output of query 2 as the "EndDate" column.
|
SQL CE does not support nesting SELECT statements like this, so you have to use two SELECT statements combined with UNION, or execute the two queries separately (e.g. `ExecuteScalar` twice).
|
The first query might return 2 or more values, unlike the second query.
Try putting a `TOP (1)` also in the first query, since I think you're just after the top result.
```
SELECT
(SELECT TOP (1) Timestamp As StartDate FROM Events WHERE Description = 'Inserted') AS StartDate,
(SELECT TOP (1) Timestamp As EndDate FROM DataStore Order by Timestamp DESC) AS EndDate
```
See this [SQL Fiddle link](http://sqlfiddle.com/#!3/1e1e2/2) for the test that I did.
|
SQL Server CE : how can I combine results of two queries into one row?
|
[
"",
"sql",
"sql-server",
"sql-server-ce",
""
] |
I have a `users` table with fields user\_id and username and I am querying it like so:
```
SELECT user_id, username FROM users WHERE user_id=$1
```
I also have a `votes` table that has fields post\_id and user\_id.
user\_id is a foreign key referencing the user\_id field in the users table.
I would like to add to my initial query to select all votes that are child votes of this user and have it all run in the same query. Maybe this needs to go into a separate query? I want to do it all at once to get the most accurate results, but I have not found a way...
Database is postgres btw.
**EDIT**
Okay, first I tried both of the inner join methods provided and neither worked when there were no votes.
Once there are votes, and I am using some left outer / full outer variant, it returns a table like this:
```
user_id | username | post_id
---------+----------+---------
6 | dbot77 | 3
6 | dbot77 | 5
```
BUT IT RETURNS USELESS REPETITIVE INFORMATION
I just want, just out of my grasp...
```
user_id | username | post_id
---------+----------+---------
6 | dbot77 | 3
| | 5
```
**OR EVEN BETTER**
```
user_id | username | vote_post_ids
---------+----------+---------
6 | dbot77 | [5, 3, etc.]
| |
```
|
Okay this finally worked for me...
```
SELECT u.user_id
,u.username
,ARRAY(SELECT v.post_id FROM votes v WHERE v.user_id=u.user_id) AS votes
FROM users u
WHERE u.user_id=$1
```
it returned:
```
user_id | username | votes
---------+----------+-------
6 | dbot77 | {3,5}
```
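The `ARRAY(...)` constructor is PostgreSQL-specific, but the correlated-subquery shape is portable. A hedged SQLite sketch with `group_concat` standing in for the array constructor (sample data copied from above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (user_id INTEGER, username TEXT);
CREATE TABLE votes (post_id INTEGER, user_id INTEGER);
INSERT INTO users VALUES (6, 'dbot77');
INSERT INTO votes VALUES (3, 6), (5, 6);
""")

# One row per user; the correlated subquery folds all votes into one value
row = conn.execute("""
SELECT u.user_id, u.username,
       (SELECT group_concat(v.post_id)
        FROM votes v WHERE v.user_id = u.user_id) AS votes
FROM users u
WHERE u.user_id = ?
""", (6,)).fetchone()
print(row)  # e.g. (6, 'dbot77', '3,5')
```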
|
Try this:
```
SELECT users.user_id, users.username, votes.column_name
FROM votes
INNER JOIN users
ON votes.user_id = users.user_id
WHERE votes.user_id = $1;
```
Refer to:
<http://www.tutorialspoint.com/postgresql/postgresql_using_joins.htm>
|
select user and all votes SQL
|
[
"",
"sql",
"postgresql",
""
] |
Here's a sample of my PostgreSQL in CSV format.
```
row,latitude,longitude
1,42.082513,-72.621498
2,42.058588,-72.633386
3,42.061118,-72.631541
4,42.06035,-72.634145
```
I have thousands more rows like these spanning coordinates across the world.
I want to query the table only for coordinates within a certain radius. How do I do this with PostGIS and PostgreSQL?
|
I did a combo of Erwin's and Patrick's answers.
```
-- Add geography column
ALTER TABLE googleplaces ADD COLUMN gps geography;
UPDATE googleplaces SET gps = ST_SetSRID(ST_MakePoint(longitude, latitude), 4326);
CREATE INDEX googleplaces_gps ON googleplaces USING gist(gps);
SELECT *
FROM googleplaces
WHERE ST_DWithin(gps, ST_SetSRID(ST_MakePoint(-72.657, 42.0657), 4326), 5 * 1609);
```
|
You want *"all rows within a 5-mile radius of a coordinate"*, so this is *not* exactly a [K-nearest-neighbour (KNN) problem](http://boundlessgeo.com/2011/09/indexed-nearest-neighbour-search-in-postgis/). Related, but your case is simpler. *"Find the 10 rows closest to my coordinates"* would be a KNN problem.
Convert your coordinates to `geography` values:
```
ST_SetSRID(ST_MakePoint(longitude, latitude),4326)::geography
```
Alternatively you could use the simpler `geometry` type. Consider:
[4.2.2. When to use Geography Data type over Geometry data type](http://postgis.net/docs/using_postgis_dbmanagement.html#PostGIS_GeographyVSGeometry)
Then we have a table like:
```
CREATE TABLE tbl (
tbl_id serial PRIMARY KEY
, geog geography NOT NULL
);
```
All you need is [**`ST_DWithin()`**](http://postgis.net/docs/ST_DWithin.html) - and a **spatial index** to make it fast:
```
CREATE INDEX tbl_geog_gist ON tbl USING gist(geog);
```
Query:
```
SELECT *, ST_Distance(c.x, geog) AS distance -- distance is optional
FROM tbl t, (SELECT ST_GeographyFromText('SRID=4326;POINT(-72.63 42.06)')) AS c(x)
WHERE ST_DWithin(c.x, geog, 8045) -- distance in meter
ORDER BY distance; -- order is optional, you did not ask for that
```
*Or* you can use your original columns and create a functional index ...
This and other details in this closely related answer on dba.SE:
* [Order by distance](https://dba.stackexchange.com/a/60711/3684)
|
How do I query all rows within a 5-mile radius of my coordinates?
|
[
"",
"sql",
"postgresql",
"indexing",
"postgis",
""
] |
I have a table with two columns: `EVENT_DATE` (date) and `RANG` (number). The first column holds the date of an event, while the second column holds the period of that event. Here is a sample of the data:
```
| EVENT_DATE | RANG |
|------------|------|
| 03/01/2015 | 1 |
| 09/04/2015 | 3 |
| 15/10/2015 | 2 |
```
Is there any way to expand `EVENT_DATE` by incrementing it based on the `RANG` value, so the output will look like:
```
| EVENT_DATE |
|------------|
| 03/01/2015 |
| 04/01/2015 |
| 09/04/2015 |
| 10/04/2015 |
| 11/04/2015 |
| 12/04/2015 |
| 15/10/2015 |
| 16/10/2015 |
| 17/10/2015 |
```
|
Here you go.
```
select to_char(event_date + (l - 1),'dd/mm/yyyy') from tab1 t
left outer join (
select level l from dual
connect by level <= (select max(rang) + 1 from tab1)
) on l <= rang + 1
order by event_date, 1;
```
[SQL Fiddle](http://sqlfiddle.com/#!4/27fef0/15)
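`CONNECT BY` is Oracle-only; on engines without it, a recursive CTE gives the same expansion. A quick SQLite check of the logic (dates rewritten as ISO `yyyy-mm-dd` so SQLite's `date()` arithmetic works):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tab1 (event_date TEXT, rang INTEGER);
INSERT INTO tab1 VALUES
  ('2015-01-03', 1), ('2015-04-09', 3), ('2015-10-15', 2);
""")

# Each row is repeated rang + 1 times with an increasing day offset n
rows = conn.execute("""
WITH RECURSIVE expanded(event_date, rang, n) AS (
  SELECT event_date, rang, 0 FROM tab1
  UNION ALL
  SELECT event_date, rang, n + 1 FROM expanded WHERE n < rang
)
SELECT date(event_date, '+' || n || ' day') AS event_date
FROM expanded
ORDER BY event_date
""").fetchall()
print(len(rows))  # 9 rows: 2 + 4 + 3
```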
|
This should work:
```
select (t.event_date + t2.value) as event_date
from t, (select rownum -1 as value from all_objects) t2
where t2.value <= t.rang
order by 1 asc;
```
|
Expand table rows based on column value
|
[
"",
"sql",
"database",
"oracle",
""
] |
I have the sample data [see the SQL Fiddle](http://sqlfiddle.com/#!6/5d6e6/1). I want to find the lowest value from the value column and the date of that lowest value from the uploaded-date column. I tried the following query, but it is not working properly and shows all values.
```
SELECT
username,
MIN(Value1) AS MinValue1,
CASE
WHEN value1 = MIN(value1) THEN uploaded_date
END
AS MinDate
FROM
test
GROUP BY
username
```
|
You could use the [`RANK`](https://msdn.microsoft.com/en-us/library/ms176102.aspx) window function to find the minimal value for each username:
```
SELECT username, value1, uploaded_date
FROM (SELECT username, value1, uploaded_date,
RANK() OVER (PARTITION BY username ORDER BY value1 ASC) AS rk
FROM test) t
WHERE rk = 1
```
**[SQLFIddle](http://sqlfiddle.com/#!6/5d6e6/6/0)**
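Since the fiddle contents aren't reproduced here, a small self-contained check of the same `RANK` pattern (made-up rows; window functions require SQLite 3.25+):

```python
import sqlite3  # window functions require SQLite 3.25+

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE test (username TEXT, value1 INTEGER, uploaded_date TEXT);
INSERT INTO test VALUES
  ('alice', 5, '2015-01-01'),
  ('alice', 2, '2015-02-01'),
  ('bob',   7, '2015-03-01');
""")

# rk = 1 marks the row(s) holding the minimal value1 per username
rows = conn.execute("""
SELECT username, value1, uploaded_date
FROM (SELECT username, value1, uploaded_date,
             RANK() OVER (PARTITION BY username
                          ORDER BY value1 ASC) AS rk
      FROM test) t
WHERE rk = 1
ORDER BY username
""").fetchall()
print(rows)  # [('alice', 2, '2015-02-01'), ('bob', 7, '2015-03-01')]
```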
|
```
SELECT test. username,value1,MIN(uploaded_date )FROM
(Select MIN(value1) MinValue1,username from test group by username ) A
INNER JOIN
test
ON A.MinValue1 = test.Value1
AND A.username = test.username
GROUP BY
test. username,value1
```
Your query did not work because any non-aggregated column referenced in the select list (here `uploaded_date`) must also appear in the `GROUP BY` clause, and grouping by it would produce one row per distinct value rather than the single minimum row per user.
|
How to find the lowest value and the corresponding date SQL-Server 2014
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"select",
""
] |
I'm using MySQL 5.6.23-enterprise-commercial-advanced. I have a table called `FILE` as:
```
+---------+
| file_id |
+---------+
| 11 |
| 12 |
| 13 |
| 14 |
+---------+
```
I try to run the query `SELECT A.FILE_ID FROM FILE A;` under **root@localhost**, and it is successfully executed. But there is no such table as **A** (any name will work). So why can it be executed, and why does it seem to have the same effect as `SELECT FILE.FILE_ID FROM FILE;`? I'm working on an existing project where I have seen this. Is there any benefit to using it?
|
Because you're giving the table an alias of `A` so you're essentially temporarily renaming the table during the select statement.
See link for more information - [MYSQL ALIAS](http://www.mysqltutorial.org/mysql-alias/)
|
In your case `A` is an alias for the table. You can optionally write `AS` between `FILE` and `A`, so your query is equivalent to:
```
SELECT A.FILE_ID FROM FILE as A;
```
|
Why does such statement work in MySQL?
|
[
"",
"mysql",
"sql",
"select",
""
] |
Sorry for the confusing title; however a description and illustration should hopefully clear it up.
Essentially, I have the table `A` representing instances of a transfer of an 'amount' between rows of table `B`. I wish to join `A` with `B` so that I can display the details of the transfer:
```
================== A ===================
+-----+------------+----------+--------+
| AID | fromID(FK) | toID(FK) | amount |
+-----+------------+----------+--------+
| 1   | 1          | 5        | 100    |
| 2   | 1          | 3        | 150    |
| 3   | 5          | 3        | 500    |
| 4   | 1          | 5        | 200    |
| 5   | 4          | 5        | 800    |
| 6   | 3          | 5        | 15     |
+-----+------------+----------+--------+
```
and
```
===== B ======
+-----+------+
| BID | name |
+-----+------+
| 1   | a    |
| 2   | b    |
| 3   | c    |
| 4   | d    |
| 5   | e    |
+-----+------+
```
I wish to join them and produce a "from name" column and a "to name" like:
```
+-----+------+----+--------+
| AID | from | to | amount |
+-----+------+----+--------+
| 1 | a | e | 100 |
| 2 | a | c | 150 |
| 3 | e | c | 500 |
| 4 | a | e | 200 |
| 5 | d | e | 800 |
| 6 | c | e | 15 |
+-----+------+----+--------+
```
|
You can join `a` on `b` twice:
```
SELECT aid, from_b.name, to_b.name, amount
FROM a
JOIN b from_b ON from_b.bid = a.fromid
JOIN b to_b ON to_b.bid = a.toid
```
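A runnable check of the double join, using the sample rows from the question (SQLite via Python here, but the SQL itself is portable):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE b (bid INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE a (aid INTEGER PRIMARY KEY,
                fromid INTEGER, toid INTEGER, amount INTEGER);
INSERT INTO b VALUES (1,'a'), (2,'b'), (3,'c'), (4,'d'), (5,'e');
INSERT INTO a VALUES
  (1,1,5,100), (2,1,3,150), (3,5,3,500),
  (4,1,5,200), (5,4,5,800), (6,3,5,15);
""")

# b is joined twice under two aliases: once for the sender, once for the receiver
rows = conn.execute("""
SELECT aid, from_b.name, to_b.name, amount
FROM a
JOIN b from_b ON from_b.bid = a.fromid
JOIN b to_b   ON to_b.bid   = a.toid
ORDER BY aid
""").fetchall()
print(rows[0], rows[-1])  # (1, 'a', 'e', 100) (6, 'c', 'e', 15)
```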
|
You can do this without a `join`.
[Fiddle with sample data](http://www.sqlfiddle.com/#!6/c15af/1)
```
select aid,
(select name from b where a.fromid = bid) as "from",
(select name from b where a.toid = bid) as "to",
amount
from a
```
|
Join a table representing a "transfer" between two rows of another table
|
[
"",
"sql",
"sql-server",
"select",
"join",
""
] |
I am using a SQLLite Database and I want to copy values from one table to another. So I found this:
```
INSERT INTO `table1` (`row_id`, `daystamp`, `column1`, `column2`) SELECT `position`, 2015813, `column1`, `column2` FROM `table2`
```
The problem is that `table1` has more columns than these four.
As a result, the other columns are null, but I didn't want them to be null.
So I added `NOT NULL` to every column, and I get this error after executing the SQL query above:
```
NOT NULL constraint failed: table1.colum3: INSERT INTO `table1` (`row_id`, `daystamp`, `column1`, `column2`) SELECT `position`, 2015813, `column1`, `column2` FROM `table2`
```
I want the cells to be empty. How can I do this without putting every column in the query?
|
Define a default value for those columns while creating your table, or perform an `ALTER` and provide a default value, like:
```
colum3 varchar(10) default ''
```
You can also change your `INSERT` statement to supply values for those columns explicitly. Note, though, that the `DEFAULT()` function is MySQL syntax; SQLite does not support it in a `SELECT` list, so there you would list the literal default values instead:
```
INSERT INTO `table1` (`row_id`, `daystamp`, `column1`, `column2`, `column3`, `column4`, `column5`)
SELECT `position`, 2015813, `column1`, `column2`,
DEFAULT(), DEFAULT(), DEFAULT()
FROM `table2`;
```
|
You need to specify the columns and the values in the `insert`:
```
INSERT INTO `table1` (`row_id`, `daystamp`, `column1`, `column2`, column3, column4)
SELECT `position`, 2015813, `column1`, `column2`, <some value>, <some value>
FROM `table2`;
```
Alternatively, when you define the table, you need to specify a `DEFAULT` value along with `NOT NULL`.
|
Insert empty data not null
|
[
"",
"sql",
"sqlite",
""
] |
I have a SQL query where I have to get record counts. I am getting the counts correctly with the current query, but I want to return only the rows where the 'Unchecked Count' is not zero.
```
SELECT
dbo.Customer.AccountNo AS Cust_Acc_No,
dbo.Customer.Name AS [Customer Name],
dbo.Customer.Adrs1 AS Cust_Address_1,
dbo.Customer.Adrs2 AS Cust_Address_2,
dbo.Customer.City AS Cust_City,
dbo.Customer.Province,
dbo.Customer.PostalCode AS Cust_Postal_Code,
dbo.Customer.Email1 AS Email,
CAST(dbo.Customer.AccStatus AS int) AS [Account Status],
dbo.Customer.ID AS CID,
case when COUNT(dbo.Manifest.ID) <
sum( CASE WHEN (dbo.Manifest.CheckedBy IS NULL OR
(LTRIM(RTRIM(dbo.Manifest.CheckedBy)) = '')) THEN 1 ELSE 0 END)
then COUNT(dbo.Manifest.ID) else
sum( CASE WHEN (dbo.Manifest.CheckedBy IS NULL OR
(LTRIM(RTRIM(dbo.Manifest.CheckedBy)) = '')) THEN 1 ELSE 0 END)
end as [Unchecked Count],
case when COUNT(dbo.Manifest.ID) <
sum( CASE WHEN (dbo.Manifest.CheckedBy IS not NULL OR
(LTRIM(RTRIM(dbo.Manifest.CheckedBy))! = '')) THEN 1 ELSE 0 END)
then COUNT(dbo.Manifest.ID) else
sum( CASE WHEN (dbo.Manifest.CheckedBy IS not NULL OR
(LTRIM(RTRIM(dbo.Manifest.CheckedBy))! = '')) THEN 1 ELSE 0 END)
end as [Checked_Count],
COUNT(dbo.Manifest.ID) as Total
FROM
dbo.Customer inner JOIN dbo.Manifest ON dbo.Customer.AccountNo = dbo.Manifest.FKAccountNo
GROUP BY
dbo.Customer.AccountNo,
dbo.Customer.Name,
dbo.Customer.Adrs1,
dbo.Customer.City,
dbo.Customer.Province,
dbo.Customer.Adrs2,
dbo.Customer.Email1,
dbo.Customer.PostalCode,
dbo.Customer.AccStatus,
dbo.Customer.ID,Manifest.FKAccountNo
```
|
You can query the result of the other query.
`SELECT * FROM ( <your query> ) AS T WHERE [Unchecked Count] > 0`
|
Try a `HAVING` clause after your `GROUP BY` clause. Note that SQL Server does not allow a column alias in `HAVING`, so repeat the aggregate expression instead of writing `[Unchecked Count]`:
```
having sum(case when dbo.Manifest.CheckedBy is null
                  or ltrim(rtrim(dbo.Manifest.CheckedBy)) = '' then 1 else 0 end) > 0
```
Full query
```
SELECT dbo.Customer.AccountNo AS Cust_Acc_No, dbo.Customer.Name AS [Customer Name], dbo.Customer.Adrs1 AS Cust_Address_1, dbo.Customer.Adrs2 AS Cust_Address_2, dbo.Customer.City AS Cust_City,
dbo.Customer.Province, dbo.Customer.PostalCode AS Cust_Postal_Code, dbo.Customer.Email1 AS Email, CAST(dbo.Customer.AccStatus AS int) AS [Account Status], dbo.Customer.ID AS CID,
case when COUNT(dbo.Manifest.ID) < sum( CASE WHEN (dbo.Manifest.CheckedBy IS NULL OR
(LTRIM(RTRIM(dbo.Manifest.CheckedBy)) = '')) THEN 1 ELSE 0 END)
then COUNT(dbo.Manifest.ID) else
sum( CASE WHEN (dbo.Manifest.CheckedBy IS NULL OR
(LTRIM(RTRIM(dbo.Manifest.CheckedBy)) = '')) THEN 1 ELSE 0 END)
end
as [Unchecked Count],
case when COUNT(dbo.Manifest.ID) < sum( CASE WHEN (dbo.Manifest.CheckedBy IS not NULL OR
(LTRIM(RTRIM(dbo.Manifest.CheckedBy))! = '')) THEN 1 ELSE 0 END)
then COUNT(dbo.Manifest.ID) else
sum( CASE WHEN (dbo.Manifest.CheckedBy IS not NULL OR
(LTRIM(RTRIM(dbo.Manifest.CheckedBy))! = '')) THEN 1 ELSE 0 END)
end
as [Checked_Count],
COUNT(dbo.Manifest.ID) as Total
FROM dbo.Customer inner JOIN
dbo.Manifest ON dbo.Customer.AccountNo = dbo.Manifest.FKAccountNo
GROUP BY dbo.Customer.AccountNo, dbo.Customer.Name, dbo.Customer.Adrs1, dbo.Customer.City, dbo.Customer.Province, dbo.Customer.Adrs2, dbo.Customer.Email1, dbo.Customer.PostalCode, dbo.Customer.AccStatus,
dbo.Customer.ID,Manifest.FKAccountNo
having sum(case when dbo.Manifest.CheckedBy is null or ltrim(rtrim(dbo.Manifest.CheckedBy)) = '' then 1 else 0 end) > 0
```
|
SQL QUERY case with a condition
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
The sql tables are as follows: likes with columns drinker and beer, sells with columns bar and beer and frequents with columns drinker and bar.
I have previously answered the following statement:
```
"Drinkers who frequent bars which serve some beer they like"
```
with the SQL Query:
```
SELECT DISTINCT y.drinker
FROM likes a, sells, frequents y
WHERE a.beer = sells.beer
AND sells.bar = y.bar
AND y.drinker = a.drinker;
```
Now, I am trying to modify the query listed above to answer a similar, yet different statement:
```
"Drinkers who only frequent bars which serve beers they like"
```
The modification, logically, is to include only those who frequent bars that serves beers they like and to eliminate those that visit any bar that doesn't serve a beer they like.
I am having the hardest time modifying the above query to satisfy the second statement.
My thought process was to perhaps do a double negation, such as: Get the list of drinkers who do not exist in the following group: drinkers who do not frequent bars that don't sell beers they like. But the correct implementation is escaping me.
Here's is my best attempt at solving this query, but I am well aware it is not correct, for this query still returns drinkers who frequent some bars (not only) that serves beer they like:
```
SELECT distinct x.drinker
FROM frequents x
WHERE exists (SELECT *
              FROM frequents y, sells z, likes u
              WHERE x.drinker = y.drinker
              AND y.bar = z.bar and z.beer = u.beer
              AND y.drinker = u.drinker);
```
Any help would be great, thank you for any insight you can provide.
|
Drinkers who frequent exactly the same number of bars as those they frequent which sell their favorites:
```
select drinker
from frequents
group by drinker
having count(bar) = (
select count(distinct f.bar)
from
sells s
inner join likes l on l.beer = s.beer
inner join frequents f on f.bar = s.bar and f.drinker = l.drinker
where f.drinker = frequents.drinker
)
```
The inner count will be lower when you can't match up all three sides of the relationship triangle. (Notice how the tests for beer, bar, and drinker each appear once in the inner query.) If you wanted to look for the opposite set of people, you'd just change the equality test to greater than.
EDIT:
I was thinking about alternate ways of approaching this. Interestingly, this is one where you might find a right outer join useful (rather than the parentheses around the joined tables).
The `frequents` table is still the focus of the query below, and the logic is driven by the idea of finding out which frequented bars are also bars where a liked beer is available for sale. Because of the outer join, the "frequents" side will never be null, but the "likes-sells" side will have nulls whenever a pair can't be matched for that visit. The final test in the `having` clause is redundant, but I wanted to show the symmetry in being able to match the counts on both columns of `frequents`, even though only one is strictly necessary.
```
select f.drinker
from
frequents f left outer join (
likes l inner join sells s on s.beer = l.beer
) on f.drinker = l.drinker and f.bar = s.bar
group by f.drinker
having count(f.drinker) = count(l.drinker) and count(f.bar) = count(s.bar)
```
|
I think this is a valid solution...
The subquery is used to filter out drinkers who frequent a bar that has a 0 count of beers they like.
```
select distinct drinker
from frequents
where drinker not in (
select f.drinker
from frequents f
join sells s on f.bar = s.bar
left join likes l on l.drinker = f.drinker and l.beer = s.beer
group by f.drinker, f.bar
having count(l.drinker) = 0
);
```
[Sample SQL Fiddle](http://sqlfiddle.com/#!9/2e0cb/25)
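The double-negation logic is easy to get wrong, so here is a tiny runnable check of the query above (SQLite, with invented names): `amy` only visits a bar stocking a beer she likes, while `bob` also visits one that doesn't, so only `amy` should come back.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE likes (drinker TEXT, beer TEXT);
CREATE TABLE sells (bar TEXT, beer TEXT);
CREATE TABLE frequents (drinker TEXT, bar TEXT);
INSERT INTO likes VALUES ('amy', 'ipa'), ('bob', 'ipa');
INSERT INTO sells VALUES ('joes', 'ipa'), ('dive', 'stout');
INSERT INTO frequents VALUES ('amy', 'joes'), ('bob', 'joes'), ('bob', 'dive');
""")

# Exclude anyone with at least one (drinker, bar) pair having zero liked beers
rows = conn.execute("""
select distinct drinker
from frequents
where drinker not in (
  select f.drinker
  from frequents f
  join sells s on f.bar = s.bar
  left join likes l on l.drinker = f.drinker and l.beer = s.beer
  group by f.drinker, f.bar
  having count(l.drinker) = 0
)
""").fetchall()
print(rows)  # [('amy',)]
```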
|
SQL Beginner Query Logic With 3 Tables
|
[
"",
"mysql",
"sql",
"querying",
""
] |
My question is easy, but I can't solve it by myself.
I have 2 tables: et\_pics, where ob\_no is a user id, and et\_thanks\_2014, where e\_to is a link to et\_pics.ob\_no.
I need to find the ob\_no values in et\_pics that are absent from e\_to in et\_thanks\_2014.
```
SELECT pics.ob_no, thnk.e_to FROM et_pics pics
left join et_thanks_2014 thnk on thnk.e_to = pics.ob_no
WHERE e_to is null
```
This code works, but I think it's not the best way to solve my task. I've tried to solve it with the IN predicate:
```
SELECT pics.ob_no FROM et_pics pics
WHERE pics.ob_no in ((SELECT e_to FROM et_thanks_2014))
```
and not exists
```
SELECT ob_no from et_pics
WHERE not exists (SELECT DISTINCT (e_to) FROM et_thanks_2014 thnk)
```
but both return nothing. Why?
|
You need correlation. For instance:
```
SELECT ob_no
from et_pics
WHERE not exists (SELECT 1 FROM et_thanks_2014 thnk WHERE thnk.e_to = pics.ob_no);
```
Without correlation, your subquery (presumably) always returns at least one row, assuming the table is not empty. Hence, `not exists` always returns false.
The equivalent `not in` is:
```
SELECT ob_no
FROM et_pics
WHERE pics.ob_no NOT IN (SELECT thnk.e_to FROM et_thanks_2014 thnk);
```
This is almost equivalent. However, if `thnk.e_to` is *ever* `NULL`, then `NOT IN` will never return true, so everything will be filtered out. Hence, I tend to recommend `NOT EXISTS` or `LEFT JOIN` for semantic reasons.
Also note that `DISTINCT` is unnecessary in subqueries using `IN` or `EXISTS`.
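Both points — the need for correlation and the `NULL` trap — can be demonstrated in a few lines (SQLite, invented ids, one deliberate `NULL`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE et_pics (ob_no INTEGER);
CREATE TABLE et_thanks_2014 (e_to INTEGER);
INSERT INTO et_pics VALUES (1), (2), (3);
INSERT INTO et_thanks_2014 VALUES (1), (NULL);
""")

# Correlated NOT EXISTS: NULLs in e_to are harmless
exists_rows = conn.execute("""
SELECT ob_no FROM et_pics pics
WHERE NOT EXISTS (SELECT 1 FROM et_thanks_2014 thnk
                  WHERE thnk.e_to = pics.ob_no)
""").fetchall()
print(exists_rows)  # [(2,), (3,)]

# NOT IN: the single NULL makes every comparison unknown, filtering everything
not_in_rows = conn.execute("""
SELECT ob_no FROM et_pics
WHERE ob_no NOT IN (SELECT e_to FROM et_thanks_2014)
""").fetchall()
print(not_in_rows)  # []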
|
I believe you should be using `NOT IN` rather like below since you are trying to get the uncommon values
```
SELECT pics.ob_no FROM et_pics pics
WHERE pics.ob_no NOT IN (SELECT e_to FROM et_thanks_2014);
```
Moreover, I am not sure why you think that your `LEFT JOIN` solution is not the best solution.
```
SELECT pics.ob_no,
thnk.e_to
FROM et_pics pics
left join et_thanks_2014 thnk on thnk.e_to = pics.ob_no
WHERE thnk.e_to is null;
```
|
Not in and Exists not working as i expect
|
[
"",
"sql",
"sql-server-2005",
""
] |
I need to trim leading zeros in a column using MS Access SQL.
I've found the topic
[Better techniques for trimming leading zeros in SQL Server?](https://stackoverflow.com/questions/662383/better-techniques-for-trimming-leading-zeros-in-sql-server)
but
```
SUBSTRING(str_col, PATINDEX('%[^0]%', str_col+'.'), LEN(str_col))
```
doesn't work in Access. How to "translate" it to Access SQL?
I changed the function `SUBSTRING` to `MID` and `PATINDEX` to `INSTR`, but it doesn't work
```
MID(str_col, INSTR(1, str_col+'.', '%[^0]%'), LEN(str_col))
```
The data type of my column is string and all rows look like "002345/200003" or "0000025644/21113", and I need to extract "2345" and "25644".
|
Check whether the zeros really exist; they may if the field is text, in which case you can use:
```
Val(NameOfField)
```
Result
```
Field1 ValField1
ab 0
0000123 123
```
If the field is numeric, you probably have a format added to the table, which is a very bad idea.
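SQLite's `CAST` happens to behave like Access's `Val()` here — both read the leading numeric prefix and stop at the first non-numeric character — which makes the idea quick to verify:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# CAST to INTEGER parses the longest numeric prefix, dropping leading zeros
row = conn.execute("""
SELECT CAST('002345/200003' AS INTEGER),
       CAST('0000025644/21113' AS INTEGER)
""").fetchone()
print(row)  # (2345, 25644)
```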
|
If all of the records in the varchar column are numeric, then you can simply multiply the string by 1.
Be wary though. Sooner or later something non-numeric will sneak in and break things by causing an invalid cast. If possible, convert the column to an integer type and leave the formatting up to the client.
|
Trimming leading zeros in MS Access SQL
|
[
"",
"sql",
"ms-access",
""
] |
I have a table contains the following Data
```
id student class
1 English 3-B
2 null null
3 null null
4 Jack OuMe
5 Richard DSA
6 Jhon Ross
7 French 5-B
8 null null
9 null null
10 Cena ADI
11 Mike Rock
12 Philip DSK
```
I need a select query to get the resultant as below:
```
id student class classSTUD
1 English 3-B 3-B
2 null null 3-B
3 null null 3-B
4 Jack OuMe 3-B
5 Richard DSA 3-B
6 Jhon Ross 3-B
7 French 5-B 5-B
8 null null 5-B
9 null null 5-B
10 Cena ADI 5-B
11 Mike Rock 5-B
12 Philip DSK 5-B
```
I tried the query below but didn't get the right result:
```
SELECT ID, STUDENT, CLASS,
NVL (CLASS, LAG(CLASS IGNORE NULLS) OVER (ORDER BY ID)) CLASSSTUD
FROM MYTABLE
ORDER BY ID;
```
Thanks for your help
|
I fixed the problem by improving the answer from Gordon Linoff:
```
SELECT ID, STUDENT, CLASS,
NVL(CASE WHEN CLASS LIKE '%-%' THEN CLASS END, LAG(CASE WHEN CLASS LIKE '%-%' THEN CLASS END) IGNORE NULLS OVER
(ORDER BY ID)) as CLASSSTUD
FROM MYTABLE
ORDER BY ID;
```
|
There appears to be a serious flaw in your database design. You need two tables to make this work properly. You are mixing two different categories of data in the same column which is why you are having an issue. You have classes and students in the same column.
Create a separate table for classes and one for students.
I have no idea why you have rows that contain only null values.
Classes table:
```
id class classcode
1 English 3-B
2 French 5-B
```
Students table:
```
id student classid
1 Jack 1
2 Richard 1
3 Jhon 1
4 Cena 2
5 Mike 2
6 Philip 2
```
If, as I suspect, your class codes are unique, then they are a natural key anyway, so you could do it like this:
Classes table:
```
classcode class
3-B English
5-B French
```
Students table:
```
id student classcode
1 Jack 3-B
2 Richard 3-B
3 Jhon 3-B
4 Cena 5-B
5 Mike 5-B
6 Philip 5-B
```
I have left the id field in the Students table because it is probable that you will have two Jacks/Richards/Jhons/whatevers at some point, so having a unique id makes sense.
If you are stuck with the data as shown I think you are in real trouble.
|
Get a value of a row in a column
|
[
"",
"sql",
"oracle",
"oracle11g",
""
] |
I have the data in the following format:
```
Tran No Date Store ID Register ID Tender
01 23-10-2015 1000 001 CASH
01 23-10-2015 1000 001 CRDT
02 23-10-2015 1000 001 CASH
02 23-10-2015 1000 001 GIFT
```
A new column has been added (Tran Seq) to the table
Such that the new data would become like
```
Tran No Date Store ID Register ID Tender Tran Seq
01 23-10-2015 1000 001 CASH 0
01 23-10-2015 1000 001 CRDT 1
02 23-10-2015 1000 001 CASH 0
02 23-10-2015 1000 001 GIFT 1
```
For every identical combination of TRAN NO, DATE, STORE ID and REGISTER ID, each line item gets a new sequence number in the Tran Seq field.
How do I achieve the results shown above?
|
You could use **ROW\_NUMBER()** analytic function:
For example,
**Setup**
```
CREATE TABLE t
(tran_no NUMBER,
"DATE" VARCHAR2(10),
store_id NUMBER,
Register_ID NUMBER,
Tender varchar2(4));
INSERT ALL
INTO t (tran_no, "DATE", store_id, Register_ID, Tender)
VALUES (01, '23-10-2015', 1000, 001, 'CASH')
INTO t (tran_no, "DATE", store_id, Register_ID, Tender)
VALUES (01, '23-10-2015', 1000, 001, 'CRDT')
INTO t (tran_no, "DATE", store_id, Register_ID, Tender)
VALUES (02, '23-10-2015', 1000, 001, 'CASH')
INTO t (tran_no, "DATE", store_id, Register_ID, Tender)
VALUES (02, '23-10-2015', 1000, 001, 'GIFT')
INTO t (tran_no, "DATE", store_id, Register_ID, Tender)
VALUES (02, '23-10-2015', 1000, 001, 'CRDT')
SELECT * FROM dual;
```
**Query**
```
SQL> SELECT t.*,
2 row_number() OVER(PARTITION BY tran_no,
3 "DATE",
4 store_id,
5 register_id
6 ORDER BY tender) - 1 tran_seq
7 FROM t;
TRAN_NO DATE STORE_ID REGISTER_ID TEND TRAN_SEQ
---------- ---------- ---------- ----------- ---- ----------
1 23-10-2015 1000 1 CASH 0
1 23-10-2015 1000 1 CRDT 1
2 23-10-2015 1000 1 CASH 0
2 23-10-2015 1000 1 CRDT 1
2 23-10-2015 1000 1 GIFT 2
SQL>
```
**Note** :
1. You cannot have a space in a column name. I have used an underscore instead.
2. You cannot use Oracle keywords as identifiers. If you want to, then you need to use double-quotation marks, as with the "DATE" column. Remember, you need to use the double-quotation marks everywhere you reference that column.
From documentation on [Database Object Naming Rules](http://docs.oracle.com/database/121/SQLRF/sql_elements008.htm#SQLRF51129):
> A quoted identifier begins and ends with double quotation marks ("). If you name a schema object using a quoted identifier, then you must use the double quotation marks whenever you refer to that object.
**UPDATE**
OP wants to update the new column with the sequence generated. You could use **MERGE** statement.
```
SQL> ALTER TABLE t ADD tran_seq NUMBER;
Table altered.
SQL> MERGE INTO t
2 USING(
3 SELECT TRAN_NO, "DATE", STORE_ID, REGISTER_ID, TENDER,
4 row_number() OVER(PARTITION BY tran_no,
5 "DATE",
6 store_id,
7 register_id
8 ORDER BY tender) - 1 tran_seq
9 FROM t
10 ) s
11 ON (t.tran_no = s.tran_no
12 AND t."DATE" = s."DATE"
13 AND t.store_id = s.store_id
14 AND t.register_id = s.register_id
15 AND t.tender = s.tender
16 )
17 WHEN MATCHED THEN
18 UPDATE SET t.tran_seq = s.tran_seq
19 /
5 rows merged.
SQL> SELECT * FROM t ORDER BY tran_no, tran_seq;
TRAN_NO DATE STORE_ID REGISTER_ID TEND TRAN_SEQ
---------- ---------- ---------- ----------- ---- ----------
1 23-10-2015 1000 1 CASH 0
1 23-10-2015 1000 1 CRDT 1
2 23-10-2015 1000 1 CASH 0
2 23-10-2015 1000 1 CRDT 1
2 23-10-2015 1000 1 GIFT 2
```
|
You can use row\_number with partition by clause
```
select tran_no,date,store_id,register_id,tender, row_number() over(partition
by tran_no,date,store_id,register_id order by tran_no)-1 tran_seq
from table_name
```
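The same partitioned `ROW_NUMBER` runs unchanged on other engines with window functions. A quick SQLite check using the question's rows (column `DATE` renamed to `tran_date` to avoid the keyword; window functions require SQLite 3.25+):

```python
import sqlite3  # window functions require SQLite 3.25+

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (tran_no INTEGER, tran_date TEXT,
                store_id INTEGER, register_id INTEGER, tender TEXT);
INSERT INTO t VALUES
  (1, '23-10-2015', 1000, 1, 'CASH'),
  (1, '23-10-2015', 1000, 1, 'CRDT'),
  (2, '23-10-2015', 1000, 1, 'CASH'),
  (2, '23-10-2015', 1000, 1, 'GIFT');
""")

# Numbering restarts at 0 for each (tran_no, tran_date, store_id, register_id)
rows = conn.execute("""
SELECT tran_no, tender,
       ROW_NUMBER() OVER (PARTITION BY tran_no, tran_date,
                                       store_id, register_id
                          ORDER BY tender) - 1 AS tran_seq
FROM t
ORDER BY tran_no, tran_seq
""").fetchall()
print(rows)
# [(1, 'CASH', 0), (1, 'CRDT', 1), (2, 'CASH', 0), (2, 'GIFT', 1)]
```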
|
Column Update Oracle SQL
|
[
"",
"sql",
"oracle",
""
] |
I have a table [see the Fiddle](http://sqlfiddle.com/#!6/a63c3/1). It contains details of each user's size on 3 different visits. My question is: how can I select the minimum size across the three size columns for each user, along with the respective date from the 3 date columns? I can get the minimum size with the following query, but I'm not sure how to get the respective date:
```
Select
Name,
Case
When Size1 < Size2 And Size1 < Size3 Then Size1
When Size2 < Size1 And Size2 < Size3 Then Size2
Else Size3
End As MinSize
From
Test
```
This is what I am looking for
```
╔══════╦════════════╦═════════╗
║ name ║ visit ║ minSize ║
╠══════╬════════════╬═════════╣
║ ABC ║ 2014-02-01 ║ 100 ║
║ DEF ║ 2014-01-01 ║ 100 ║
║ GHI ║ 2014-02-01 ║ 50 ║
║ JKL ║ 2014-03-01 ║ 300 ║
╚══════╩════════════╩═════════╝
```
|
Try like this.
```
CREATE TABLE test
(
username VARCHAR(50),
visit1 DATETIME,
visit2 DATETIME,
visit3 DATETIME,
size1 INT,
size2 INT,
size3 INT
);
INSERT INTO test
VALUES ( 'ABC',
'2014-01-01',
'2014-02-01',
'2014-03-01',
200,
100,
300 );
INSERT INTO test
VALUES ( 'EFG',
'2014-01-01',
'2014-02-01',
'2014-03-01',
100,
200,
300 );
INSERT INTO test
VALUES ( 'HIJ',
'2014-01-01',
'2014-02-01',
'2014-03-01',
400,
50,
100 );
INSERT INTO test
VALUES ( 'KLM',
'2014-01-01',
'2014-02-01',
'2014-03-01',
600,
100,
300 );
SELECT UserName,
CASE
WHEN Size1 < Size2
AND Size1 < Size3 THEN Visit1
WHEN Size2 < Size1
AND Size2 < Size3 THEN Visit2
ELSE visit3
END AS Date,
CASE
WHEN Size1 < Size2
AND Size1 < Size3 THEN Size1
WHEN Size2 < Size1
AND Size2 < Size3 THEN Size2
ELSE Size3
END AS MinSize
FROM Test
```
|
You can unpivot your 3 visits into 3 rows using a [table value constructor](https://msdn.microsoft.com/en-GB/library/dd776382.aspx), then select the top 1 in order of size:
```
SELECT t.username,
ms.Visit,
MinSize = ms.Size
FROM Test AS t
CROSS APPLY
( SELECT TOP 1 m.visit, m.size
FROM (VALUES
(1, t.visit1, t.size1),
(2, t.visit2, t.size2),
(3, t.visit3, t.size3)
) AS m (VisitNo, Visit, Size)
ORDER BY m.Size
) AS ms;
```
**EDIT**
If you have more columns, just add more rows to your table valued constructor
```
SELECT t.username,
ms.Visit,
MinSize = ms.Size
FROM Test AS t
CROSS APPLY
( SELECT TOP 1 m.visit, m.size
FROM (VALUES
(1, t.visit1, t.size1),
(2, t.visit2, t.size2),
(3, t.visit3, t.size3),
(4, t.visit4, t.size4), -- New values added for more columns
(5, t.visit5, t.size5),
(6, t.visit6, t.size6),
(7, t.visit7, t.size7)
) AS m (VisitNo, Visit, Size)
ORDER BY m.Size
) AS ms;
```
|
How can I select two columns data on one condition in SQL Server
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I would like to filter my table with the `MIN()` function but still keep columns which can't be grouped.
I have table:
```
+----+----------+----------------------+
| ID | distance | geom |
+----+----------+----------------------+
| 1 | 2 | DSDGSAsd23423DSFF |
| 2 | 11.2 | SXSADVERG678BNDVS4 |
| 2 | 2 | XCZFETEFD567687SDF |
| 3 | 24 | SADASDSVG3423FD |
| 3 | 10 | SDFSDFSDF343DFDGF |
| 4 | 34 | SFDHGHJ546GHJHJHJ |
| 5 | 22 | SDFSGTHHGHGFHUKJYU45 |
| 6 | 78 | SDFDGDHKIKUI45 |
| 6 | 15 | DSGDHHJGHJKHGKHJKJ65 |
+----+----------+----------------------+
```
This is what I would like to achieve:
```
+----+----------+----------------------+
| ID | distance | geom |
+----+----------+----------------------+
| 1 | 2 | DSDGSAsd23423DSFF |
| 2 | 2 | XCZFETEFD567687SDF |
| 3 | 10 | SDFSDFSDF343DFDGF |
| 4 | 34 | SFDHGHJ546GHJHJHJ |
| 5 | 22 | SDFSGTHHGHGFHUKJYU45 |
| 6 | 15 | DSGDHHJGHJKHGKHJKJ65 |
+----+----------+----------------------+
```
It is possible when I use `MIN()` on the distance column and group by `ID`, but then I lose my geom column, which is essential.
The query looks like this:
```
SELECT "ID", MIN(distance) AS distance FROM somefile GROUP BY "ID"
```
the result is:
```
+----+----------+
| ID | distance |
+----+----------+
| 1 | 2 |
| 2 | 2 |
| 3 | 10 |
| 4 | 34 |
| 5 | 22 |
| 6 | 15 |
+----+----------+
```
but this is not what I want.
Any suggestions?
|
You need a window function to do this:
```
SELECT "ID", distance, geom
FROM (
SELECT "ID", distance, geom, rank() OVER (PARTITION BY "ID" ORDER BY distance) AS rnk
FROM somefile) sub
WHERE rnk = 1;
```
This effectively orders the entire set of rows first by the `"ID"` value, then by the distance and returns the record for each `"ID"` where the distance is minimal - no need to do a `GROUP BY`.
|
One common approach to this is to find the minimum values in a derived table that you join with:
```
SELECT somefile."ID", somefile.distance, somefile.geom
FROM somefile
JOIN (
SELECT "ID", MIN(distance) AS distance FROM somefile GROUP BY "ID"
) t ON t.distance = somefile.distance AND t.ID = somefile.ID;
```
[Sample SQL Fiddle](http://www.sqlfiddle.com/#!15/302819/4)
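A runnable check of the derived-table approach against a subset of the question's rows (SQLite via Python; note that ties on the minimum distance would return multiple rows per `ID`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE somefile (id INTEGER, distance REAL, geom TEXT);
INSERT INTO somefile VALUES
  (1, 2,    'DSDGSAsd23423DSFF'),
  (2, 11.2, 'SXSADVERG678BNDVS4'),
  (2, 2,    'XCZFETEFD567687SDF'),
  (3, 24,   'SADASDSVG3423FD'),
  (3, 10,   'SDFSDFSDF343DFDGF');
""")

# Join each row back to its group's minimum to keep the ungrouped geom column
rows = conn.execute("""
SELECT s.id, s.distance, s.geom
FROM somefile s
JOIN (SELECT id, MIN(distance) AS distance
      FROM somefile GROUP BY id) t
  ON t.id = s.id AND t.distance = s.distance
ORDER BY s.id
""").fetchall()
print(rows)
# [(1, 2.0, 'DSDGSAsd23423DSFF'),
#  (2, 2.0, 'XCZFETEFD567687SDF'),
#  (3, 10.0, 'SDFSDFSDF343DFDGF')]
```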
|
Filtering using aggregation functions
|
[
"",
"sql",
"postgresql",
"postgis",
""
] |
Example:
```
select c_id,'VIP Customer' [Customer Level] from customer where ...
```
union
```
select c_id,'Regular Customer' [Customer Level] from customer where ...
```
If a `c_id` is part of the first level (VIP), it can't be in the 2nd level (Regular). I am using `UNION`, but it's not working because the second column's data is different.
Can someone suggest?
|
This query finds the clients that incorrectly satisfy both criteria:
```
select A.RC_id
from (
select c_id RC_id
from customer
where 'Regular Customer' criteria) A
inner join (
select c_id VIP_id
from customer
where 'VIP Customer' criteria) B
on A.RC_id=B.VIP_id
```
Or you can apply both criteria to the client; I think this is the shortest solution:
```
select c_id
from customer
where ('VIP Customer criteria' AND 'Regular Customer criteria')
```
|
CASE seems to be a nice solution here, you can add more Customer Levels if needed also.
```
select c_id, [Customer Level] =
CASE
WHEN ... THEN 'VIP Customer'
ELSE 'Regular Customer'
END
from customer
```
More about CASE on MSDN: <https://msdn.microsoft.com/en-us/library/ms181765.aspx>
|
Compare query result on the basis of single column
|
[
"",
"mysql",
"sql",
""
] |
Every thread I've seen so far has been about checking for duplicate rows and avoiding them. I'm trying to get a query that returns only the duplicate rows. I thought it would be as simple as a subquery, but I was wrong. Then I tried the following:
```
SELECT * FROM a
WHERE EXISTS
(
SELECT * FROM b
WHERE b.id = a.id
)
```
That was a bust too. How do I return only the duplicate rows? I'm currently going through two tables, but I'm afraid there are a large number of duplicates.
|
I am sure your posted code would work too, for example:
```
SELECT * FROM a
WHERE EXISTS
(
SELECT 1 FROM b WHERE id = a.id
)
```
You can also do an `INNER JOIN`:
```
SELECT a.* FROM a
JOIN b on a.id = b.id;
```
You can also use an `IN` operator:
```
SELECT * FROM a where id in (select id from b);
```
If none of those fit, and both tables satisfy the `UNION` restriction (same column list), you can combine `UNION ALL` with the `ROW_NUMBER()` function:
```
SELECT * FROM (
SELECT *,
ROW_NUMBER() OVER(PARTITION BY id ORDER BY id) AS rn
FROM (
select * from a
union all
select * from b) xx ) yy
WHERE rn > 1; -- rows with rn > 1 are the duplicates
```
|
You can use this query; it may be better if you restrict the comparison to the relevant columns:
```
SELECT * FROM a
INTERSECT
SELECT * FROM b
```
|
Returning only duplicate rows from two tables
|
[
"",
"sql",
"sql-server",
""
] |
I have a query where I need to search the numerical part of a string in SQL Server.
[](https://i.stack.imgur.com/X74TV.png)
The number column above needs to be searchable as a variable in the query.
Wildcards do not work:
```
SELECT PK_Story
FROM Story
WHERE ProductId = @productParam
AND Number LIKE '%' + @numberParam + '%';
```
because this would also return 132 and 232, for example.
So how can I search for a specific number after the '-'? As you can see, I can't use charindex alone because of the variable prefix length.
|
What about `LIKE '%-' + @numberParam`?
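Applied to the query in the question, that suggestion would look like this (a sketch, reusing the question's parameter names, assuming `@numberParam` holds only the digits after the dash):

```
SELECT PK_Story
FROM Story
WHERE ProductId = @productParam
  AND Number LIKE '%-' + @numberParam;
```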
|
You can use `substring` and `charindex` combination to get the result.
```
SELECT PK_Story
FROM Story
WHERE ProductId = @productParam
AND @numberParam like
       '%' + case when charindex('-', Number) > 0
                  then substring(Number, charindex('-', Number) + 1, len(Number))
                  else Number
             end + '%'
```
|
WHERE clause based on part of a VARCHAR column
|
[
"",
"sql",
"sql-server",
""
] |
**[UPDATED]: I have updated the question with the `WHERE` clause.**
I have the following SQL Fiddle. The result is:
```
| User | Name | Time |
|--------|---------------|--------|
| 00001 | Mary Jane | 12 |
| 00002 | Joana Smith | 7 |
| 00003 | George Andrz | 2 |
| 00004 | Julia Roberts | 4 |
```
I am expecting this:
```
| User | Name | Time |
|--------|---------------|--------|
| 00001 | Mary Jane | 12 |
| 00002 | Joana Smith | 7 |
| 00003 | George Andrz | 2 |
| 00004 | Julia Roberts | 4 |
| 90000 | Anderson Math | 0 |
| 90001 | Josh Xin | 0 |
```
> The difference is: there are some users in the MainUsers table who may
> not have filled in a TimeSheet yet, so the TimeSheet table is empty for
> them. But I would still like to show their names, with Time = 0
---
[SQL Fiddle](http://sqlfiddle.com/#!3/26592/1)
**MS SQL Server 2008 Schema Setup**:
```
CREATE TABLE MainUsers
(
id int identity primary key,
UserID varchar(5),
FullName varchar(500),
);
INSERT INTO MainUsers
(UserID, FullName)
VALUES
('00001', 'Mary Jane'),
('00002', 'Joana Smith'),
('00003', 'George Andrz'),
('00004', 'Julia Roberts'),
('90000', 'Anderson Math'),
('90001', 'Josh Xin');
CREATE TABLE TimeSheet
(
id int identity primary key,
UserID varchar(5),
FullName varchar(500),
Minutes int,
TimeStamp Datetime,
);
INSERT INTO TimeSheet
(UserID, FullName, Minutes, TimeStamp)
VALUES
('00001', 'Mary Jane', 240, '2015-09-16 08:16:00'),
('00001', 'Mary Jane', 480, '2015-09-16 08:16:00'),
('00002', 'Joana Smith', 320, '2015-09-16 08:16:00'),
('00002', 'Joana Smith', 120, '2015-09-16 08:16:00'),
('00003', 'George Andrz', 120, '2015-09-16 08:16:00'),
('00004', 'Julia Roberts', 240, '2015-09-16 08:16:00');
```
**Query 1**:
```
SELECT u.UserID AS [User], u.FullName AS Name,
SUM(t.Minutes) / 60 AS [Time]
FROM MainUsers u LEFT JOIN
TimeSheet t
ON u.UserID = t.UserID
WHERE
month(TimeStamp) = 9 and year(TimeStamp) = 2015
GROUP BY u.UserID, u.FullName
ORDER BY u.UserID ASC
```
**[Results](http://sqlfiddle.com/#!3/26592/1/0)**:
```
| User | Name | Time |
|-------|---------------|------|
| 00001 | Mary Jane | 12 |
| 00002 | Joana Smith | 7 |
| 00003 | George Andrz | 2 |
| 00004 | Julia Roberts | 4 |
```
Thanks.
|
try this:
```
SELECT
u.UserID AS 'User',
u.FullName AS Name,
isnull(SUM(Minutes) / 60,0) AS [Time]
FROM
MainUsers u left OUTER JOIN
TimeSheet t ON
u.UserID = t.UserID
GROUP BY
u.UserID,
u.FullName
ORDER BY
u.UserID
```
[SQL Fiddle](http://sqlfiddle.com/#!3/8cfee/15)
if you want to include conditions on your timesheet table, such as
`month(timestamp) = 9 and year(timestamp) = 2015`
and you do it in the `WHERE` clause, it converts your `outer join` into an `inner join`, because the `WHERE` clause requires fields from the TimeSheet table. To limit by month and year of your `left outer join`ed table, put the conditions in the `JOIN` clause instead of the `WHERE`, like:
```
SELECT
u.UserID AS 'User',
u.FullName AS Name,
isnull(SUM(Minutes) / 60,0) AS [Time]
FROM
MainUsers u left OUTER JOIN
TimeSheet t ON
u.UserID = t.UserID and
month(timestamp) = 9 and year(timestamp) = 2015
GROUP BY
u.UserID,
u.FullName
ORDER BY
u.UserID
```
[sql fiddle](http://sqlfiddle.com/#!3/26592/6)
|
I think you want a `left join`. Presumably, everyone with a timesheet is a valid user. And you seem to want to keep all the users.
In general, when you use `full outer join` you have to be very careful about `NULL` values. `COALESCE()` ends up being used extensively. So, your query can be written as:
```
SELECT u.UserID AS [User], u.FullName AS Name,
SUM(t.Minutes) / 60 AS [Time]
FROM MainUsers u LEFT JOIN
TimeSheet t
ON u.UserID = t.UserID
GROUP BY u.UserID, u.FullName
ORDER BY u.UserID ASC;
```
Also note that the query is much easier to follow when your table aliases are abbreviations for the table names. `A` and `B` don't mean anything. But it is clear that `t` stands for `TimeSheet`.
Finally, the `time` column is probably hours as an integer -- assuming that `Minutes` is an integer. (And it would be better called something like "hours".) If you want decimal hours, then divide by 60.0, rather than 60.
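To illustrate that last point, here is the difference between integer and decimal division (the values below are made up for demonstration):

```
-- Integer division truncates; dividing by 60.0 keeps the fraction
SELECT 500 / 60   AS int_hours,      -- 8
       500 / 60.0 AS decimal_hours;  -- 8.333333
```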
|
Full Outer Join with Group By
|
[
"",
"sql",
"sql-server-2008-r2",
"group-by",
""
] |
Can you please advise on this?
```
CREATE TABLE TEST2 (
JOB_NAME VARCHAR2(50) NULL,
RUNTIME NUMBER(22) NULL,
STARTTIME1 VARCHAR2(50) NULL,
ENDTIME1 VARCHAR2(50) NULL,
STARTTIME_READ DATE NULL,
ENDTIME_READ DATE NULL
)
GO
insert into TEST2 values ('TEST JOB',37,'08/18/2015 20:12:24','08/18/2015 20:13:01',null,null)
go
insert into TEST2 values ('TEST JOB',37,'08/18/2015 20:12:24','08/18/2015 20:13:01',null,null)
go
insert into TEST2 values ('TEST JOB',195,'08/20/2015 19:17:05','08/20/2015 19:20:20',null,null)
go
insert into TEST2 values ('TEST JOB',171,'08/19/2015 19:16:52','08/19/2015 19:19:43',null,null)
go
insert into TEST2 values ('TEST JOB',195,'08/21/2015 19:19:08','08/21/2015 19:22:23',null,null)
go
insert into TEST2 values ('TEST JOB',32,'08/24/2015 19:23:52','08/24/2015 19:24:24',null,null)
go
insert into TEST2 values ('TEST JOB',19,'08/26/2015 19:19:00','08/26/2015 19:19:19',null,null)
go
insert into TEST2 values ('TEST JOB',92,'08/25/2015 19:18:59','08/25/2015 19:20:31',null,null)
go
insert into TEST2 values ('TEST JOB',198,'08/27/2015 19:29:58','08/27/2015 19:33:16',null,null)
go
insert into TEST2 values ('TEST JOB',19,'08/28/2015 19:18:13','08/28/2015 19:18:32',null,null)
go
insert into TEST2 values ('TEST JOB',44,'08/31/2015 19:49:22','08/31/2015 19:50:06',null,null)
go
insert into TEST2 values ('TEST JOB',266,'09/11/2015 19:17:10','09/01/2015 19:21:36',null,null)
go
insert into TEST2 values ('TEST JOB',18,'09/20/2015 19:17:33','09/02/2015 19:17:51',null,null)
go
insert into TEST2 values ('TEST JOB',33,'09/22/2015 19:16:50','09/03/2015 19:17:23',null,null)
go
insert into TEST2 values ('TEST JOB',170,'08/17/2015 19:18:07','08/17/2015 19:20:57',null,null)
```
query:
```
select job_name,to_char(to_date(avg(to_number(to_char(to_date(starttime1,'mm/dd/yyyy hh24:mi:ss'),'SSSSS'))),'SSSSS'),'HH24:MI:SS')AVG_STARTIME
from TEST2 group by job_name
```
Error:
> ORA-01830: date format picture ends before converting entire input string
It works for some records in the table, and those dates look exactly like the ones above. I don't see what is wrong with this date format.
Note: I am using Oracle 8i.
|
Try this. You had several problems: the format mask for `to_number` should be '99999', not 'SSSSS', and the result of the `avg` function was not a whole number, so `to_date` was not working; the value needs to be rounded.
```
SELECT job_name,
TO_CHAR(to_date(ROUND(AVG(to_number(TO_CHAR(to_date(starttime1,'mm/dd/yyyy hh24:mi:ss'),
'SSSSS'),'99999')),0),'SSSSS'),'HH24:MI:SS')AVG_STARTIME
FROM TEST2
GROUP BY job_name
```
|
I think this is what you are looking for
```
select job_name,to_char(
to_date(
TRUNC(avg(
to_number(
to_char(
to_date(starttime1,'mm/dd/yyyy hh24:mi:ss'),
'SSSSS')
)
)),'SSSSS'),'HH24:MI:SS')AVG_STARTIME
from TEST2 group by job_name
```
That average returns numbers with decimals; just use a TRUNC to fix that.
[sqlfiddle](http://sqlfiddle.com/#!4/c6398/9)
|
ORA-01830: date format picture ends before converting entire input string
|
[
"",
"sql",
"oracle",
"oracle8i",
""
] |
I'd like to query the database as to whether or not one or more rows exist that satisfy a given predicate. However, I am not interested in the distinction between there being one such row, two rows or a million - just if there are 'zero' or 'one or more'. And I do not want Postgres to waste time producing an exact count that I do not need.
In DB2, I would do it like this:
```
SELECT 1 FROM SYSIBM.SYSDUMMY1 WHERE EXISTS
(SELECT 1 FROM REAL_TABLE WHERE COLUMN = 'VALUE')
```
and then checking if zero rows or one row was returned from the query.
But Postgres has no dummy table available, so what is the best option?
If I create a one-row dummy table myself and use that in place of SYSIBM.SYSDUMMY1, will the query optimizer be smart enough to not actually read that table when running the query, and otherwise 'do the right thing'?
|
PostgreSQL doesn't have a dummy table because you don't need one.
```
SELECT 1 WHERE EXISTS
(SELECT 1 FROM REAL_TABLE WHERE COLUMN = 'VALUE')
```
Alternatively if you want a true/false answer:
```
SELECT EXISTS(SELECT 1 FROM REAL_TABLE WHERE COLUMN = 'VALUE')
```
|
How about just doing this?
```
SELECT (CASE WHEN EXISTS (SELECT 1 FROM REAL_TABLE WHERE COLUMN = 'VALUE') THEN 1 ELSE 0 END)
```
`1` means there is a value. `0` means no value.
This will always return one row.
|
Efficiently determine if any rows satisfy a predicate in Postgres
|
[
"",
"sql",
"postgresql",
"exists",
""
] |
I'm using Oracle, where I have a table (`FE_IMPORT_LC`); below are a few of its columns with data:
```
TRANSMIT_LC_NO LIAB_AMT_LCY REM_LC_AMT_LCY IMP_AMEND_NO
108615020048 10022000 10022112 00
108615020048 10022000 10022112 01
108615020048 10022000 10022112 02
108615020048 11692000 8351760 03
```
I want to find the data of the highlighted rows, i.e. those whose IMP\_AMEND\_NO column value is the maximum. In other words, I want to retrieve one or two columns' data from the row where the corresponding IMP\_AMEND\_NO column has the maximum value.
So I already created the following query:
```
SELECT l1.liab_amt_lcy
FROM fe_import_lc l1
WHERE l1.transmit_lc_no = '108615020048'
AND l1.imp_amend_no = (SELECT MAX(l2.imp_amend_no)
FROM fe_import_lc l2
WHERE l2.transmit_lc_no = l1.transmit_lc_no)
```
But I would like a more efficient query for this, if anyone knows one.
|
Try:
```
select liab_amt_lcy
from (
SELECT l1.liab_amt_lcy, imp_amend_no
FROM fe_import_lc l1
WHERE l1.transmit_lc_no = '108615020048'
order by imp_amend_no desc
)
where rownum < 2
```
|
Try something like the query below, where l1 would be your FE\_IMPORT\_LC table. It is better to create a view with the logic of the l2 table given below and then select from it.
```
with l1(TRANSMIT_LC_NO, LIAB_AMT_LCY, REM_LC_AMT_LCY, IMP_AMEND_NO) as(
select 108615020048,10022000,10022112,00 from dual union
select 108615020048,10022000,10022112,01 from dual union
select 108615020048,10022000,10022112,02 from dual union
select 108615020048,10022000,10022112,03 from dual
), l2 as(
select l1.*,row_number() over (partition by TRANSMIT_LC_NO order by IMP_AMEND_NO desc) as rno from l1)
select TRANSMIT_LC_NO, LIAB_AMT_LCY,REM_LC_AMT_LCY,IMP_AMEND_NO from l2
where rno=1;
```
If two rows have the same max(IMP\_AMEND\_NO) and you want both, use the query below (instead of row\_number I am using rank here; the rest is the same):
```
with l1(TRANSMIT_LC_NO, LIAB_AMT_LCY, REM_LC_AMT_LCY, IMP_AMEND_NO) as(
select 108615020048,10022000,10022112,00 from dual union all
select 108615020048,10022000,10022112,01 from dual union all
select 108615020048,10022000,10022112,03 from dual union all
select 108615020048,10022000,10022112,03 from dual
), l2 as(
select l1.*,rank() over (partition by TRANSMIT_LC_NO order by IMP_AMEND_NO desc) as rno from l1)
select TRANSMIT_LC_NO, LIAB_AMT_LCY,REM_LC_AMT_LCY,IMP_AMEND_NO from l2
where rno=1;
```
Here you don't have to specify TRANSMIT\_LC\_NO explicitly; even if you have many records, you still get only the rows corresponding to max(IMP\_AMEND\_NO). But if you want to use this in a PL/SQL block, put the TRANSMIT\_LC\_NO in the where clause of the select from FE\_IMPORT\_LC and proceed in the same way.
|
In oracle How can I Find out one/two Columns data which corresponding other columns have maximum value
|
[
"",
"sql",
"oracle",
"greatest-n-per-group",
""
] |
I have a table `test` with the following columns: `id`, `startDate`, `stopDate`, `startOvertime`, `stopOvertime`.
`startDate` and `stopDate` are of type datetime and `startOvertime`, `stopOvertime` are of type `time(7)`.
I want to create a new value with the date portion from `stopDate` and the time portion from `startOvertime`, and store it in a newly created column `test-fill`.
If I use a computed-column formula on (stopDate, startOvertime), the value of the new datetime column is not right. Any proposals?
|
If you always want the new column with this information, then you can use a computed column:
```
alter table test add [test-fill] as
    (cast(cast(stopDate as date) as datetime) + cast(startOvertime as datetime));
```
This doesn't actually add a new column. It just computes the values when you need them.
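If you do want the values physically stored (and indexable), SQL Server also lets you mark a computed column as `PERSISTED`. A sketch, assuming the question's intent (date portion from `stopDate`, time portion from `startOvertime`):

```
-- Stored and recomputed automatically when the base columns change
alter table test add [test-fill] as
    (cast(cast(stopDate as date) as datetime) + cast(startOvertime as datetime))
    persisted;
```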
|
This should do it:
```
alter table [table-name] add [test-fill] datetime
update [table-name]
set [test-fill] = cast(cast(stopDate as date) as datetime) + cast(startOvertime as datetime)
```
First we add column to the table, then we update created cells with data.
|
T-SQL Add new column in existing table and fill with value from two another existing column
|
[
"",
"sql",
"sql-server",
""
] |
If I have a Vacation table with the following structure:
```
emp_num start_date end_date
234 8-2-2015 8-5-2015
234 6-28-2015 7-1-2015
234 8-29-2015 9-2-2015
115 6-7-2015 6-7-2015
115 8-7-2015 8-10-2015
```
considering the date format is `m/dd/yyyy`.
How could I get the total of vacation days for every employee during a specific month?
Say I want to get the vacations in August 2015.
I want the result like this
```
emp_num sum
234 7
115 4
```
`7` = all days between `8-2-2015` and `8-5-2015` plus all days between `8-29-2015` and `8-31-2015` (the end of the month).
|
This will work for SQL Server 2012+:
```
DECLARE @t table
(emp_num int, start_date date, end_date date)
INSERT @t values
( 234, '8-2-2015' , '8-5-2015'),
( 234, '6-28-2015', '7-1-2015'),
( 234, '8-29-2015', '9-2-2015'),
( 115, '6-7-2015' , '6-7-2015'),
( 115, '8-7-2015' , '8-10-2015')
DECLARE @date date = '2015-08-01'
SELECT
emp_num,
SUM(DATEDIFF(day,
CASE WHEN @date > start_date THEN @date ELSE start_date END,
CASE WHEN EOMONTH(@date) < end_date
THEN EOMONTH(@date)
ELSE end_date END)+1) [sum]
FROM @t
WHERE
start_date <= EOMONTH(@date)
and end_date >= @date
GROUP BY emp_num
```
|
I hope this will help you:
```
declare @temp table
(emp_num int, startdate date, enddate date)
insert into @temp values (234,'8-2-2015','8-5-2015')
insert into @temp values (234,'6-28-2015','7-1-2015')
insert into @temp values (234,'8-29-2015','9-2-2015')
insert into @temp values (115,'6-7-2015','6-7-2015')
insert into @temp values (115,'8-7-2015','8-10-2015')
-- I am passing 8 as the month number; in your case that is August
select emp_num,
SUM(
DATEDIFF (DAY , startdate,
case when MONTH(enddate) = 8
then enddate
else DATEADD(s,-1,DATEADD(mm, DATEDIFF(m,0,startdate)+1,0))--end date of month
end
)+1) AS Vacation from @temp
where (month(startdate) = 8 OR month(enddate) = 8) AND (Year(enddate)=2015 AND Year(enddate)=2015)
group by emp_num
```
**UPDATE** after a valid comment: *This will fail with these dates: 2015-07-01, 2015-09-30* – @t-clausen.dk
I had assumed the OP wants only the single month he passes in; the query below also handles ranges spanning the month:
```
declare @temp table
(emp_num int, startdate date, enddate date)
insert into @temp values (234,'8-2-2015','8-5-2015')
insert into @temp values (234,'6-28-2015','7-1-2015')
insert into @temp values (234,'8-29-2015','9-2-2015')
insert into @temp values (115,'6-7-2015','6-7-2015')
insert into @temp values (115,'8-7-2015','8-10-2015')
insert into @temp values (116,'07-01-2015','9-30-2015')
select emp_num,
SUM(
DATEDIFF (DAY , startdate,
case when MONTH(enddate) = 8
then enddate
else DATEADD(s,-1,DATEADD(mm, DATEDIFF(m,0,startdate)+1,0))
end
)+1) AS Vacation from @temp
where (Year(enddate)=2015 AND Year(enddate)=2015)
AND 8 between MONTH(startdate) AND MONTH(enddate)
group by emp_num
```
|
How to get the summation of days in specific month of year in range
|
[
"",
"sql",
"sql-server",
"date",
"informix",
""
] |
I have procedure like:
```
create or replace procedure smth
is
some_variable varchar2(2);
begin
insert into **table**(col1,col2) (Select id,'text' from TABLE2);
insert into table(col1,col2) (Select id,'text' from TABLE3);
end;
```
What I want to do is add some if-else (or something else) for the case when e.g. Table2 doesn't exist.
If Table2 doesn't exist, then insert into **table** the values ('smth','nameoftable') selected from DUAL. I know exactly what the name of table2 is (but for some reason this table could have been deleted from the database).
It would be great if I could do that this way:
```
insert into **table**(col1,col2) if table2 doesn't exist then Select (1,'TABLE2') else (Select id,'text' from TABLE2),
if table3 doesn't exist then Select (1,'TABLE3') insert into table(col1,col2) else (Select id,'text' from TABLE3)
```
For each case.
## EDIT
```
insert into **table**(col1,col2) (Select id,'text' from TABLE2),
insert into table(col1,col2) (Select id,'text' from TABLE3)
insert into table(col1,col2) (Select id,'text' from TABLE4)
insert into table(col1,col2) (Select id,'text' from TABLE4)
```
Assume that table3 doesn't exist; then all I want is to insert into the table (select 'text','text2' from dual).
|
In order to get what you want you need two things:
1. Find out whether the table exists: you can query `user_tables` against the
`table_name` column if the procedure reads from a table in the
same schema.
2. You need to use dynamic SQL, because if the table does not exist,
static PL/SQL gives you the error **PL/SQL: ORA-00942: table or
view does not exist** - when Oracle compiles the procedure it
does not find the table.
An example of dynamic plsql is
```
sql_stmt := 'insert into table(col1,col2) (Select id,''text'' from ' || <the right table_name> || ')';
EXECUTE IMMEDIATE sql_stmt;
```
Here is a link to the oracle documentation for [dynamic sql](http://docs.oracle.com/cd/E11882_01/appdev.112/e25519/dynamic.htm#LNPLS011).
**EDIT**
After your clarification, you can end up with a procedure like this one:
```
create or replace procedure insert_from_dual_if_not_exists(table_name_in in varchar2)
is
  sql_stmt varchar2(4000);
begin
  if table_exists(table_name_in) then
    sql_stmt := 'insert into my_table(col1,col2) (select id, ''text'' from ' || table_name_in || ')';
  else
    sql_stmt := 'insert into my_table(col1,col2) (select ''text1'', ''text2'' from dual)';
  end if;
  execute immediate sql_stmt;
end;
```
and call `insert_from_dual_if_not_exists` instead of your simple insert; you must also create a function (or a simple statement) that tells your code whether a table exists.
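A minimal sketch of such a helper function, here called `table_exists` (the name and the `user_tables` lookup are assumptions; adjust for tables in other schemas):

```
create or replace function table_exists(p_table_name in varchar2)
  return boolean
is
  v_cnt number;
begin
  -- user_tables stores names in upper case for unquoted identifiers
  select count(*)
    into v_cnt
    from user_tables
   where table_name = upper(p_table_name);
  return v_cnt > 0;
end;
```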
|
You could query the **USER\_TABLES** view to see whether the table actually exists or not. And, you must (ab)use **EXECUTE IMMEDIATE** to execute the dynamic sql.
For example,
```
SELECT COUNT(*)
INTO v_cnt
FROM USER_TABLES
WHERE TABLE_NAME = '<TABLE_1>';
v_sql := 'INSERT INTO TABLE(col1,col2) SELECT id, ''text'' FROM ';
IF v_cnt > 0
THEN
   v_sql := v_sql || 'TABLE_1';
   EXECUTE IMMEDIATE v_sql;
ELSE
   v_sql := v_sql || 'TABLE_2';
   EXECUTE IMMEDIATE v_sql;
END IF;
```
**UPDATE** The OP wants to dynamically use the table\_name to insert into another table from multiple tables.
Loop through all the tables and use the table\_name as a variable in the dynamic SQL.
For example,
```
FOR i IN (SELECT table_name FROM user_tables WHERE table_name <> 'INSERTING_TABLE')
LOOP
   v_sql := 'INSERT INTO inserting_table SELECT column_list FROM ' || i.table_name;
   EXECUTE IMMEDIATE v_sql;
END LOOP;
```
|
If table doesn't exists then insert into other table
|
[
"",
"sql",
"oracle",
"plsql",
""
] |
I am trying to update a column with the value of a different column under certain circumstances. It's complicated; I've tried to figure this out on my own for weeks now (on and off).
Here is how it currently looks:
[](https://i.stack.imgur.com/BpIDf.png)
I need a SQL statement that will look up the LotNo at OperationCode 1280 and assign it to all rows with the same CastTID.
Here is what it should look like after
[](https://i.stack.imgur.com/r4qNi.png)
I would really appreciate any help! This is my first post; if I've left out anything important, please let me know so that I can help you help me.
|
You can use a join in your update statement to make this easier.
```
update t
set CastLotNo = t2.LotNo
from yourTable t
inner join yourTable t2 on t2.CastTID = t.CastTID and t2.OperationCode = 1280
```
|
```
UPDATE myTable
SET CastLotNo = (SELECT TOP 1 LotNo FROM myTable WHERE OperationCode = '1280')
WHERE CastTID = (SELECT TOP 1 CastTID FROM myTable WHERE OperationCode = '1280')
```
It seems like you could use something like that. I can't test this at the moment, so let me know if it needs tweaking.
|
UPDATE SQL column based on values of another column. (It's not that simple)
|
[
"",
"sql",
"sql-server",
""
] |
I have a stored procedure which takes in five parameters, of which two can be null; we will call these parameters A and B.
What I would like to do is select records based on the following logic.
If Parameter A is NULL then only return records that match Parameter B
I know that I can do something similar to the following
```
IF A IS NULL
BEGIN
SELECT * FROM TABLE WHERE Param=B
END
ELSE
BEGIN
SELECT * FROM TABLE WHERE Param=A
END
```
However, the real SQL query is much more complex than the one above, and this approach would mean huge duplication in the proc, which I want to avoid.
Thanks in advance.
===============================
EDIT - Sorry, I should have mentioned that in the example the Param are based on separate columns e.g.
My table consists of four columns of which two separate columns map to the two parameters - basic schema below
ID
PersonName
GroupID
DeliveryID
In my procedure I want to retrieve those records that match the GroupID *however* in the scenario where the GroupID is null then I want to return those records that match the DeliveryID
Thanks again
|
Functionally, something like this should work. If either parameter is `NULL`, the condition becomes a self-identity (assuming neither `groupID` nor `deliveryID` is NULL).
```
SELECT *
FROM table_name
WHERE groupID = coalesce(@groupIDParameter, groupID)
AND deliveryID = coalesce(@deliveryIDParameter, deliveryID)
```
|
Try
```
SELECT * FROM my_table WHERE Param = COALESCE(A,B)
```
`COALESCE` will give you `A` if it's not null. Otherwise `B`.
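A quick illustration with constant values (just for demonstration):

```
SELECT COALESCE(NULL, 'B');  -- returns 'B'
SELECT COALESCE('A', 'B');   -- returns 'A'
```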
|
SQL - Retrieve records based on parameters where either parameter can be null
|
[
"",
"sql",
"sql-server",
"stored-procedures",
"where-clause",
""
] |
I am trying to show my `int` variables `@MaxAmount` and `@MinAmount` in a RAISERROR message:
```
Raiserror('Total Amount should be less than %s and Greater than %s',16,1,@MaxAmount,@MinAmount)
```
But I'm getting the error:
> Must declare the scalar variable "@MaxAmount".
|
`%s` is used for `varchar`, and your variable is of type `int`, so you need to use the correct format specifier, i.e. `%d`:
```
DECLARE @MaxAmount int = 16;
DECLARE @minAmount int = 1;
Raiserror('Total Amount should be less than %d and Greater than %d',16,1,@MaxAmount,@MinAmount)
```
Check [RAISERROR](https://msdn.microsoft.com/en-us/library/ms178592.aspx) for details.
|
You need to use `%i` for integers and, as mentioned, declare the variables before use.
```
declare @MaxAmount int, @MinAmount int
select @MaxAmount = 50, @MinAmount = 5
Raiserror('Total Amount should be less than %i and Greater than %i',16,1,@MaxAmount,@MinAmount)
```
|
How to use variables in SQL raiserror
|
[
"",
"sql",
"sql-server",
"raiserror",
""
] |
I am trying to use a subquery / join to compare a set of rows in a table to other rows in the same table.
Excerpt data set:
```
Guid MonitorsetID groupName
26464009405210800000000000 2162 ~templates.root
26464009405210800000000000 2161 ~templates.root
26464009405210800000000000 1464 ~templates.root
26464009405210800000000000 1224 ~templates.root
321794737607583 2162 lab.root.abc
321794737607583 2161 lab.root.abc
321794737607583 1464 lab.root.abc
321794737607583 1224 lab.root.abc
500311571061532 2196 lab.root.abc
500311571061532 2195 lab.root.abc
500311571061532 1464 lab.root.abc
500311571061532 1224 lab.root.abc
129478194721498 1464 lab.root.def
129478194721498 1224 lab.root.def
```
I need to find which MonitorsetID(s) exist for one particular Guid, '26464009405210800000000000', but are "missing" for a selected group of other Guid records - for this example, all Guids with groupName 'lab.root.abc'. In the excerpt above, there are currently 4 MonitorsetIDs matching this Guid: 2162, 2161, 1464, 1224; 2162 and 2161 are "missing" from the Guid '500311571061532'.
The result set I would like is:
```
Guid MonitorsetID groupName
500311571061532 2162 lab.root.abc
500311571061532 2161 lab.root.abc
```
Or, the following would also work:
```
Guid MonitorsetID groupName Guid MonitorsetID groupName
26464009405210800000000000 2162 ~templates.root 500311571061532 NULL lab.root.abc
26464009405210800000000000 2161 ~templates.root 500311571061532 NULL lab.root.abc
```
I'm able to get the inverse of the result I want with the following:
```
SELECT VMAA.agentguid, VMAA.MonitorsetID
FROM [vMonitorsetAgentAssignment] VMAA
LEFT JOIN [vMonitorsetAgentAssignment] VMAA2
ON VMAA.MonitorsetID = VMAA2.MonitorsetID
WHERE
VMAA.agentguid in
(
SELECT AgentGuid FROM vMonitorsetAgentAssignment VMAA
WHERE VMAA.groupName = 'lab.root.abc'
)
AND
VMAA2.agentguid = '26464009405210876452365122'
ORDER BY agentGuid, MonitorsetID
```
My attempts at getting the needed results by adding an "IS NULL" condition to the JOIN's ON clause just return empty results.
|
This query will extract missing ids for both guids 129478194721498 and 500311571061532.
```
with createddata as (
select *
from
(
select distinct guid, groupname from vMonitorsetAgentAssignment
where guid <> '26464009405210800000000000'
) a
cross join
(
select monitorsetid from vMonitorsetAgentAssignment
where guid = '26464009405210800000000000'
) b
)
select x.*
from createddata x
left join vMonitorsetAgentAssignment y
on x.guid = y.guid
and x.monitorsetid = y.monitorsetid
and x.groupname = y.groupname
and y.guid <> '26464009405210800000000000'
where y.guid is null
Results
| guid | groupname | monitorsetid |
|-----------------|--------------|--------------|
| 129478194721498 | lab.root.def | 2162 |
| 129478194721498 | lab.root.def | 2161 |
| 500311571061532 | lab.root.abc | 2162 |
| 500311571061532 | lab.root.abc | 2161 |
```
Example: <http://sqlfiddle.com/#!3/80719/33>
**Ignore the answer below**
Take a look at this query for the results you described as desired:
```
select a.* from
vMonitorsetAgentAssignment a
left join (
select monitorsetid as masterids from vMonitorsetAgentAssignment
where guid = '26464009405210800000000000'
) b on a.monitorsetid = b.masterids
where not guid = '26464009405210800000000000'
and masterids is null
Result:
| Guid | MonitorsetID | groupName |
|-----------------|--------------|--------------|
| 500311571061532 | 2196 | lab.root.abc |
| 500311571061532 | 2195 | lab.root.abc |
```
|
Get all possible combinations of the GUIDs in the groupName you are filtering over along with the MonitorsetID's of the GUID you are filtering on (by cross-joining from the base table to a distinct set of MonitorsetIDs for the GUID you are filtering on).
Once you have that data set, it becomes a simple left join from the derived data set back to your base table, keeping the guid/MonitorsetID combinations that don't exist:
```
with root_data as
(
select distinct a.agentguid, b.MonitorsetID, a.groupName
from vMonitorsetAgentAssignment a
cross join (
select distinct MonitorsetID
from vMonitorsetAgentAssignment d
where d.agentguid = '26464009405210800000000000'
) b
where a.groupName = 'lab.root.abc'
)
select *
from root_data d
left join vMonitorsetAgentAssignment a
on a.agentguid = d.agentguid
and a.MonitorsetID = d.MonitorsetID
where a.agentguid is null;
```
|
SQL subquery / join
|
[
"",
"sql",
"sql-server",
""
] |
I want to use a `FULL OUTER JOIN` between two tables on several columns, but when both columns are null they are not considered equal during the join, so I obtain two different rows. How can I write my join so that null columns are considered equal?
I have set up a simplified example :
```
create table t1 (
id number(10) NOT NULL,
field1 varchar2(50),
field2 varchar2(50),
CONSTRAINT t1_pk PRIMARY KEY (id)
);
create table t2 (
id number(10) NOT NULL,
field1 varchar2(50),
field2 varchar2(50),
extra_field number(1),
CONSTRAINT t2_pk PRIMARY KEY (id)
);
insert into t1 values(1, 'test', 'test2');
insert into t2 values(1, 'test', 'test2', null);
insert into t1 values(2, 'test1', 'test1');
insert into t2 values(2, 'test1', 'test1', null);
insert into t1 values(3, 'test0', null);
insert into t2 values(3, 'test0', null, 1);
insert into t2 values(4, 'test4', 'test0', 1);
select *
from t1
full outer join t2 using (id, field1, field2);
```
Result obtained : [](https://i.stack.imgur.com/wWam2.png)
Result expected : [](https://i.stack.imgur.com/515M8.png)
[SQLFiddle](http://sqlfiddle.com/#!4/5d2e5/1/0)
|
Use [NVL()](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions105.htm) and a Unique String to substitute NULL:
```
select t1.id,t1.field1,t1.field2,t2.extra_field
from t1
full outer join t2 ON
t1.id=t2.id
AND NVL(t1.field1,'UID_INSTEAD_OF_NULL')=NVL(t2.field1,'UID_INSTEAD_OF_NULL')
AND NVL(t1.field2,'UID_INSTEAD_OF_NULL')=NVL(t2.field2,'UID_INSTEAD_OF_NULL')
```
`SQLFiddle demo`
|
NVL can be applied to the result columns instead, so no function is needed in the join condition:
```
select
nvl(t1.id, t2.id) id,
nvl(t1.field1, t2.field1) field1,
nvl(t1.field2, t2.field2) field2,
extra_field
from t1
full outer join t2 on t1.id = t2.id AND t1.field1 = t2.field1 AND (t1.field2 = t2.field2 OR (t1.field2 IS NULL AND t2.field2 IS NULL));
```
|
SQL : FULL OUTER JOIN on null columns
|
[
"",
"sql",
"oracle",
"join",
"outer-join",
""
] |
I need to insert data from my table variable into a temp table, but it gives me the following error:
> Msg 208, Level 16, State 0, Procedure sp\_CreateScenario\_q2, Line 70
> Invalid object name '#tmpp1'.
Here is the code
```
IF OBJECT_ID('tempdb..#tmpp1') IS NOT NULL
DROP TABLE #tmpp1
INSERT INTO #tmpp1
SELECT
[PlanningHierarchyId]
,[ProductReferenceId]
,[PlanningYear]
,[PlanningSeason]
,[UpdatedBy]
FROM
@paramTable
```
Is there any way to do this?
|
The error 'Invalid object name #tmpp1' occurs because you drop the temp table and then try to insert into it without creating it first.
Try using `SELECT ... INTO`, which creates the table:
```
IF OBJECT_ID('tempdb..#tmpp1') IS NOT NULL
DROP TABLE #tmpp1
SELECT
[PlanningHierarchyId]
,[ProductReferenceId]
,[PlanningYear]
,[PlanningSeason]
,[UpdatedBy]
INTO #tmpp1
FROM @paramTable
```
|
You are dropping the table. Either create one with `CREATE TABLE` first, or use `select ... into` instead of `insert into`:
```
IF OBJECT_ID('tempdb..#tmpp1') IS NOT NULL
DROP TABLE #tmpp1
SELECT
[PlanningHierarchyId]
,[ProductReferenceId]
,[PlanningYear]
,[PlanningSeason]
,[UpdatedBy] into #tmpp1
FROM @paramTable
```
|
SQL Server Insert Into Temp table from a table variable
|
[
"",
"sql",
"sql-server",
""
] |