| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I have an optimisation issue: I would like the subquery to execute first.
```
EXPLAIN
SELECT *
FROM `References_galaxia`
, Link_galaxia
WHERE linkReferenced IN ( SELECT id
FROM Link_galaxia
WHERE idConceptStart IN (616269,616268,615721)
AND idConceptLink = 315
AND idConceptTarget = 29209
)
AND `References_galaxia`.linkReferenced = Link_galaxia.id
AND `References_galaxia`.idConcept IN (416,36053,36088,36037)
```
The main query returns a huge amount of data, about a million records, into memory, while the subquery
> SELECT id FROM Link\_galaxia WHERE idConceptStart IN
> (616269,616268,615721) AND idConceptLink = 315 AND idConceptTarget =
> 29209
returns a small amount of data. How can I make the subquery execute first?
The result of Explain
```
1 PRIMARY References_galaxia ALL unique_ref,linkReferenced NULL NULL NULL 9163156 Using where
1 PRIMARY Link_galaxia eq_ref PRIMARY PRIMARY 4 eds_sandra.References_galaxia.linkReferenced 1
2 DEPENDENT SUBQUERY Link_galaxia eq_ref PRIMARY,idx_name,idConceptStart,idConceptStart_4,idConceptTarget,idConceptLink PRIMARY 4 References_galaxia.linkReferenced 1 Using where
```
Thank you very much for your help!
|
A query such as the following should be more efficient:
```
SELECT columns, I, actually, want
FROM `References_galaxia` r
JOIN Link_galaxia y
    ON y.id = r.linkReferenced
WHERE r.idConcept IN (416,36053,36088,36037)
  AND y.idConceptStart IN (616269,616268,615721)
  AND y.idConceptLink = 315
  AND y.idConceptTarget = 29209
```
For help on improving the performance of this query, we would need to see table creation statements for all relevant tables, as well as the EXPLAIN.
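The equivalence of the IN-subquery form and the flattened JOIN form is easy to sanity-check. Below is a minimal sketch using Python's `sqlite3` as a stand-in engine (table and column names are taken from the question; the data is invented): both forms return the same rows, which is why the optimizer is free to pick a plan that filters `Link_galaxia` first.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE References_galaxia (idConcept INTEGER, linkReferenced INTEGER);
CREATE TABLE Link_galaxia (id INTEGER PRIMARY KEY, idConceptStart INTEGER,
                           idConceptLink INTEGER, idConceptTarget INTEGER);
INSERT INTO Link_galaxia VALUES (1, 616269, 315, 29209), (2, 99, 315, 29209);
INSERT INTO References_galaxia VALUES (416, 1), (416, 2), (9999, 1);
""")

subquery_form = conn.execute("""
    SELECT r.idConcept, r.linkReferenced
    FROM References_galaxia r, Link_galaxia l
    WHERE r.linkReferenced IN (SELECT id FROM Link_galaxia
                               WHERE idConceptStart IN (616269)
                                 AND idConceptLink = 315
                                 AND idConceptTarget = 29209)
      AND r.linkReferenced = l.id
      AND r.idConcept IN (416)
""").fetchall()

join_form = conn.execute("""
    SELECT r.idConcept, r.linkReferenced
    FROM References_galaxia r
    JOIN Link_galaxia y ON y.id = r.linkReferenced
    WHERE r.idConcept IN (416)
      AND y.idConceptStart IN (616269)
      AND y.idConceptLink = 315
      AND y.idConceptTarget = 29209
""").fetchall()
```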
|
```
SELECT *
FROM `References_galaxia`
, Link_galaxia
WHERE `References_galaxia`.linkReferenced = Link_galaxia.id
AND `References_galaxia`.idConcept IN (416,36053,36088,36037)
AND Link_galaxia.idConceptStart IN (616269,616268,615721)
AND Link_galaxia.idConceptLink = 315
AND Link_galaxia.idConceptTarget = 29209
```
|
SQL subqueries optimisation, how to execute subquery first
|
[
"",
"mysql",
"sql",
"subquery",
""
] |
It's hard to explain my question, but here is the issue. I have a table of 350k records, filled with SKUs and other part information. Many rows have the same SKU; for example, there are 5 rows with the SKU 45666. I need to put a unique identifier on each of these, e.g. 45666~1, 45666~2, etc. The solution I came up with uses a while loop inside a while loop, handling them one at a time, like this:
```
$i = 0
while ($i -lt $array.Length) {
    $j = 0
    $mfp = $array.itemnumber[$i]
    while ($i -lt $array.Length -and $array.itemnumber[$i] -eq $mfp) {
        Invoke-SqlCmd -ServerInstance $Server -Database $Database -Query "update whiproducts set number = $j"
        $i++
        $j++
    }
}
```
What I would do after this is concatenate number with the SKU when I select it. The problem is that this has been running for about 30 minutes, which just won't work for this project. Is there some quick and easy way to do this that I'm missing?
Thanks for any help!
|
Give this query a try (please don't run this on production data before testing or backing up):
```
update t
set t.sku = t.sku + '~' + cast(t.RowNumber as varchar)
from (
    select sku, row_number() over(partition by sku order by (select null)) as RowNumber
from whiproducts) t
```
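The `ROW_NUMBER()` approach can be sketched against any engine with window functions. Here is a minimal example using Python's `sqlite3` (SQLite 3.25+ supports window functions; the table name is from the question, the data is invented), computing the `sku~n` labels in one set-based pass instead of a row-by-row loop:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE whiproducts (sku TEXT)")
conn.executemany("INSERT INTO whiproducts VALUES (?)",
                 [("45666",), ("45666",), ("45666",), ("12345",)])

# number duplicates within each sku in a single query
rows = conn.execute("""
    SELECT sku || '~' || ROW_NUMBER() OVER (PARTITION BY sku ORDER BY rowid)
    FROM whiproducts
    ORDER BY sku, rowid
""").fetchall()
labels = [r[0] for r in rows]
```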
|
How about appending a unique GUID? Then you can do it with a single update statement, no looping at all.
`UPDATE whiproducts
SET number = number + '_' + cast(NEWID() as varchar(100))`
|
How can I append a unique identifier onto rows with same column value
|
[
"",
"sql",
"powershell",
"unique",
"identifier",
""
] |
I'm using the chinook database and sqlite3. My goal is to return a list of invoices with the invoice id, invoice date and the number of items on the invoice for a specific customer. The first two are pretty simple,
```
SELECT InvoiceId, InvoiceDate
FROM invoices
WHERE CustomerId = 2;
```
returns:
```
1 |2009-01-01 00:00:00
12 |2009-02-11 00:00:00
67 |2009-10-12 00:00:00
196|2011-05-19 00:00:00
219|2011-08-21 00:00:00
241|2011-11-23 00:00:00
293|2012-07-13 00:00:00
```
However, the invoice line items are in another table. I can count the ones that correspond to specific invoices with:
```
SELECT count(*)
FROM invoice_items
WHERE Invoiceid = 12;
```
which returns `14`
But I want to return a list like:
```
1 |2009-01-01 00:00:00|2
12|2009-02-11 00:00:00|14
```
|
You need an inner join on your query; it will look something like this:
```
SELECT
    invoices.InvoiceId, invoices.InvoiceDate,
    COUNT(items.InvoiceLineId) AS Items
FROM
    invoices
JOIN
    invoice_items AS items ON invoices.InvoiceId = items.InvoiceId
WHERE invoices.CustomerId = 2
GROUP BY invoices.InvoiceId
```
Of course I'm guessing the names of your `ID` columns
|
Try a subquery like the following:
`select invoiceid, invoicedate,
(select count(*) from invoice_items where invoiceid = i.invoiceid) as CountInvoiceItems
from invoices`
This returns the counts for all invoices; add your WHERE clause back in if you only want invoice 12.
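Here is a small runnable sketch of the correlated-subquery approach, using Python's `sqlite3` with a tiny invented fixture in place of the real chinook data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE invoices (InvoiceId INTEGER, InvoiceDate TEXT, CustomerId INTEGER);
CREATE TABLE invoice_items (InvoiceLineId INTEGER, InvoiceId INTEGER);
INSERT INTO invoices VALUES (1, '2009-01-01 00:00:00', 2),
                            (12, '2009-02-11 00:00:00', 2);
INSERT INTO invoice_items VALUES (1, 1), (2, 1), (3, 12);
""")

rows = conn.execute("""
    SELECT i.InvoiceId, i.InvoiceDate,
           (SELECT COUNT(*) FROM invoice_items ii
            WHERE ii.InvoiceId = i.InvoiceId) AS Items
    FROM invoices i
    WHERE i.CustomerId = 2
    ORDER BY i.InvoiceId
""").fetchall()
```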
|
Get information about an invoice and return the number of invoice items on it with a single SELECT comand
|
[
"",
"sql",
"sqlite",
""
] |
I am trying to do SQL Injection testing but I am currently testing a command line that separates parameters by spaces, so I'm trying to write a sql statement without any spaces. I've gotten it down to:
`create table"aab"("id"int,"notes"varchar(100))`
But I cannot figure out how to get rid of the space between CREATE and TABLE. The same would apply obviously for DROP and TABLE, etc.
Does anyone have any ideas? This is for Microsoft SQL Server 2014. Thanks!
[Update]: We are evaluating a third party product for vulnerabilities. I am not doing this to test my own code for weaknesses.
|
This is not possible; you have to check every argument to make sure it is as intended.
If they are supposed to be numbers, make sure they are numbers; if they are supposed to be strings that may contain specific characters (like ' or ,), you should escape them when executing the request.
There should be a dedicated mechanism in your programming language to take care of that (like PreparedStatement in Java).
|
You can write comments instead of spaces in many cases. So `/**/` in place of each space.
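As a sanity check of the comment trick: SQLite's parser also treats `/**/` as token-separating whitespace, so the idea can be sketched in Python (SQL Server's parser handles block comments similarly, but verify against your target server):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# /**/ acts as whitespace, so this statement contains no literal spaces
conn.execute('create/**/table"aab"("id"int,"notes"varchar(100))')
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
```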
|
Can you write a sql statement without spaces between keywords
|
[
"",
"sql",
"sql-server",
"database",
"security",
""
] |
Suppose I have a table like:
```
id | parentId | name
1 NULL A
2 1 B
3 2 C
4 1 E
5 3 E
```
I am trying to write a scalar function I can call as:
`SELECT dbo.GetId('A/B/C/E')` which would produce "5" if we use the above reference table. The function would do the following steps:
> 1. Find the ID of 'A' which is 1
> 2. Find the ID of 'B' whose parent is 'A' (id:1) which would be id:2
> 3. Find the ID of 'C' whose parent is 'B' (id:2) which would be id:3
> 4. Find the ID of 'E' whose parent is 'C' (id:3) which would be id:5
I was trying to do it with a WHILE loop but it was getting very complicated very fast... Just thinking there must be a simple way to do this.
|
I think I have it based on @SeanLange's recommendation to use a recursive CTE (above in the comments):
```
CREATE FUNCTION GetID
(
@path VARCHAR(MAX)
)
/* TEST:
SELECT dbo.GetID('A/B/C/E')
*/
RETURNS INT
AS
BEGIN
DECLARE @ID INT;
WITH cte AS (
SELECT p.id ,
p.parentId ,
CAST(p.name AS VARCHAR(MAX)) AS name
FROM tblT p
WHERE parentId IS NULL
UNION ALL
SELECT p.id ,
p.parentId ,
CAST(pcte.name + '/' + p.name AS VARCHAR(MAX)) AS name
FROM dbo.tblT p
INNER JOIN cte pcte ON
pcte.id = p.parentId
)
SELECT @ID = id
FROM cte
WHERE name = @path
RETURN @ID
END
```
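The recursive-CTE lookup can be sketched end to end with Python's `sqlite3` (SQLite spells it `WITH RECURSIVE`; the sample rows are the ones from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tblT (id INTEGER, parentId INTEGER, name TEXT);
INSERT INTO tblT VALUES (1, NULL, 'A'), (2, 1, 'B'), (3, 2, 'C'),
                        (4, 1, 'E'), (5, 3, 'E');
""")

found = conn.execute("""
    WITH RECURSIVE cte(id, path) AS (
        SELECT id, name FROM tblT WHERE parentId IS NULL
        UNION ALL
        SELECT t.id, cte.path || '/' || t.name
        FROM tblT t JOIN cte ON cte.id = t.parentId
    )
    SELECT id FROM cte WHERE path = 'A/B/C/E'
""").fetchone()[0]
```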
|
A recursive `CTE` is not an optimized way to get hierarchical data. ([Refer MSDN Blog](https://blogs.msdn.microsoft.com/sqlcat/2011/04/28/optimize-recursive-cte-query/))
You should do something like what is shown below. It was tested on 10 million records and was about 300 times faster than the CTE version. :)
```
Declare @table table(Id int, ParentId int, Name varchar(10))
insert into @table values(1,NULL,'A')
insert into @table values(2,1,'B')
insert into @table values(3,2,'C')
insert into @table values(4,1,'E')
insert into @table values(5,3,'E')
DECLARE @Counter tinyint = 0;
IF OBJECT_ID('TEMPDB..#ITEM') IS NOT NULL
DROP TABLE #ITEM
CREATE TABLE #ITEM
(
ID int not null
,ParentID int
,Name VARCHAR(MAX)
,lvl int not null
,RootID int not null
)
INSERT INTO #ITEM
(ID,lvl,ParentID,Name,RootID)
SELECT Id
,0 AS LVL
,ParentId
,Name
,Id AS RootID
FROM
@table
WHERE
ISNULL(ParentId,-1) = -1
WHILE @@ROWCOUNT > 0
BEGIN
SET @Counter += 1
insert into #ITEM(ID,ParentId,Name,lvl,RootID)
SELECT ci.ID
,ci.ParentId
,ci.Name
,@Counter as cntr
,ch.RootID
FROM
@table AS ci
INNER JOIN
#ITEM AS pr
ON
CI.ParentId=PR.ID
LEFT OUTER JOIN
#ITEM AS ch
ON ch.ID=pr.ID
WHERE
ISNULL(ci.ParentId, -1) > 0
AND PR.lvl = @Counter - 1
END
select * from #ITEM
```
|
How to traverse a path in a table with id & parentId?
|
[
"",
"sql",
"sql-server",
"hierarchical-data",
"recursive-query",
""
] |
I have two tables:
`table1`: (ID, Code, Name)
`table2`: (ID, Code, Name)
with same columns
I want to insert data from table1 into table2, or update the columns if the row already exists in table2 (table1.ID = table2.ID).
What is the simplest way to do this?
WITHOUT **MERGE**
|
```
Merge table2 as target
using table1 as source
on
target.id=source.id
When matched
Then
update
set target.id=source.id,
target.name=source.name
When not matched by Target Then
INSERT (id, name) VALUES (id, name);
```
There are some issues with the MERGE statement, so it should be used with [caution](https://www.mssqltips.com/sqlservertip/3074/use-caution-with-sql-servers-merge-statement/).
Since you want to avoid MERGE, I recommend writing it as two separate DML statements, like below:
```
insert into table2
select * from table1 t1 where not exists (select 1 from table2 t2 where t2.id=t1.id)
update t2
set
t2.id=t1.id,
t2.name=t1.name
from
table1 t1
join
table2 t2
on t1.id=t2.id
```
The reasons are stated by [Paul White](https://stackoverflow.com/users/440595/paul-white) in his detailed [answer](https://dba.stackexchange.com/questions/30633/merge-a-subset-of-the-target-table).
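The two-statement alternative is easy to demonstrate. Below is a minimal sketch with Python's `sqlite3` (the update is written with a correlated subquery, a portable equivalent of the `UPDATE ... FROM` join used in the answer above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE table2 (id INTEGER PRIMARY KEY, name TEXT);
INSERT INTO table1 VALUES (1, 'new-one'), (2, 'two');
INSERT INTO table2 VALUES (1, 'old-one');
""")

# 1) insert the rows that are not in table2 yet
conn.execute("""
    INSERT INTO table2
    SELECT * FROM table1 t1
    WHERE NOT EXISTS (SELECT 1 FROM table2 t2 WHERE t2.id = t1.id)
""")
# 2) update the rows that already exist
conn.execute("""
    UPDATE table2
    SET name = (SELECT t1.name FROM table1 t1 WHERE t1.id = table2.id)
    WHERE EXISTS (SELECT 1 FROM table1 t1 WHERE t1.id = table2.id)
""")

final = conn.execute("SELECT id, name FROM table2 ORDER BY id").fetchall()
```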
|
```
MERGE table2 t2
USING table1 t1
ON t1.ID = t2.ID
WHEN MATCHED THEN
UPDATE
SET t2.Code = t1.Code, t2.Name = t1.Name
WHEN NOT MATCHED BY TARGET THEN
INSERT (ID, Name, Code)
VALUES (t1.ID, t1.Name, t1.Code);
```
|
Update and insert to one table from another
|
[
"",
"sql",
"sql-server",
"sql-server-2005",
""
] |
How do I combine these two queries without them giving the same output?
The first query is:
```
select vwemployee.directorateName , count(vwemployeeCourse.employeeId) as t1
from vwemployee , vwemployeeCourse
where vwemployee.directorateName = vwemployeeCourse.directorateName
GROUP BY vwemployee.directorateName
```
This is the second query:
```
select vwemployee.directorateName , count(vwemployee.directorateName) as t2
from vwemployee , employeeCourse
where vwemployee.Id = employeeCourse.employeeId
GROUP BY vwemployee.directorateName
```
I will be using the combined query to generate a report.
The first column is the name of the directorate.
**By the way, when I combined them myself, t1 had the same result as t2, but when they are separate each gives the right result.**
> The t1 column should display how many courses this specific
> directorate took, and the t2 column should display how many employees
> under this directorate took these courses.
>
> **So the table of the combined query should have 3 columns in total.**
|
If you are purposely using employeeCourse instead of vwemployeeCourse in the second query, I would use the following. I use a full outer join in case some directorates are in one table/view but not the other. If it is a typo and the name should be the same in both queries, an inner (regular) join would work.
```
With cteQuery1 As
(
Select vwemployee.directorateName , Count(vwemployeeCourse.employeeId) as t1
From vwemployee
Join vwemployeeCourse On vwemployee.directorateName = vwemployeeCourse.directorateName
Group By vwemployee.directorateName
)
Select cteQuery1.directorateName, t1, t2
From
(
Select vwemployee.directorateName , Count(vwemployee.directorateName) as t2
From vwemployee
Join employeeCourse On vwemployee.Id = employeeCourse.employeeId
Group By vwemployee.directorateName
) Query2
Full Outer Join cteQuery1 On cteQuery1.directorateName = Query2.directorateName
```
|
You can use conditional aggregation with `SUM(CASE ...)`:
```
SELECT vwemployee.directorateName,
       SUM(CASE WHEN vwemployee.directorateName = vwemployeeCourse.directorateName THEN 1 ELSE 0 END) AS directorateCount,
       SUM(CASE WHEN vwemployee.Id = employeeCourse.employeeId THEN 1 ELSE 0 END) AS idCount
FROM vwemployee, vwemployeeCourse, employeeCourse
WHERE vwemployee.directorateName = vwemployeeCourse.directorateName
   OR vwemployee.Id = employeeCourse.employeeId
GROUP BY vwemployee.directorateName
```
Note: I didn't test it, so it might need a bit of work.
|
SQL QUERY- multiple COUNT returns wrong (same) results
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
Sorry to disturb you, but I have a problem with a count of a union. I tried to implement the same logic that I read in another post, but it's not working for me. Some help, please?
This is my code:
/\*Declaration of data\*/
```
/*where i should make the count*/
/*first select*/
UNION
/*second select*/
/*in the where of the second select I have a case with the following data*/
CASE
WHEN ((@case='other') AND (cfv.value like '%,' + cast (@today as VARCHAR) or cfv.value like '%' + cast (@today as VARCHAR)
or cfv.value like '%' + cast (@today as VARCHAR)+',0'
or cfv.value like '%' + cast (@today as VARCHAR)+',0,1'
or cfv.value like '%' + cast (@today as VARCHAR)+',1'))
then 1
WHEN ((@case ='zero') AND(cfv.value='0')) THEN 1
WHEN ((@case ='one') AND(cfv.value='1' or cfv.value='0,1')) THEN 1
ELSE 0
END = 1
```
and this is the result without the count, so I suppose it should be very easy, but I don't get it. :S
[Just a column of elements, but I'd like to have the number of elements present, in this case 2](https://i.stack.imgur.com/qv52z.jpg)
Thanks so much in advance.
|
Having a case statement in your WHERE should not mess up the syntax; it is probably something else.
Make sure that when using a subquery you give it an alias.
```
SELECT COUNT(*)
FROM (
/*first select*/
UNION
/*second select*/
WHERE
CASE
WHEN ((@case='other')
AND (cfv.value LIKE '%,' + CAST(@today AS VARCHAR)
OR cfv.value LIKE '%' + CAST(@today AS VARCHAR)
OR cfv.value LIKE '%' + CAST(@today AS VARCHAR)+',0'
OR cfv.value LIKE '%' + CAST(@today AS VARCHAR)+',0,1'
OR cfv.value LIKE '%' + CAST(@today AS VARCHAR)+',1'))
THEN 1
WHEN ((@case ='zero') AND (cfv.value='0')) THEN 1
WHEN ((@case ='one') AND (cfv.value='1' OR cfv.value='0,1')) THEN 1
ELSE 0
END = 1
) AS alias --<<--
```
|
If you're trying to get a count of all the records selected, you could do this:
```
SELECT COUNT(*) FROM
(
SELECT First...
UNION
SELECT Second...
) AS CountTable
```
Just wrap your whole query in another `SELECT` statement and count from there.
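A runnable sketch of the wrap-and-count pattern, using Python's `sqlite3` with two invented selects. Note that `UNION` removes duplicates, so the count here is 3 rather than 4 (use `UNION ALL` if duplicates should be counted):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE a (v INTEGER);
CREATE TABLE b (v INTEGER);
INSERT INTO a VALUES (1), (2);
INSERT INTO b VALUES (2), (3);
""")

total = conn.execute("""
    SELECT COUNT(*)
    FROM (SELECT v FROM a
          UNION
          SELECT v FROM b) AS CountTable
""").fetchone()[0]
```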
|
SQL query - Count of union of select
|
[
"",
"sql",
"sql-server",
"count",
"union",
""
] |
I have two tables
User Table:
```
UserID Username FirstName Lastname
1001 KiranReddy Kiran Reddy
1002 Arvind Arvind Kumar
1003 Arun Arrun Swamy
1004 Ramesh Ramesh Naidu
1005 Ramesh Ramesh Naidu
1006 Ajay1233 Ajay Sharma
```
Friend Table:
```
UserID1 UserID2
1001 1002
1001 1003
1001 1004
1001 1005
1001 1006
```
How do I write the following query: get the usernames of users who are friends of user 1001 and whose username is like 'A%'?
|
```
Select friend.UserID2 from User
left join Friend on friend.userid1 = user.userid
Where user.username like '%A' and friend.UserID1 ='1001'
```
|
```
SELECT u.Username
FROM Friend as f
INNER JOIN User as u
ON f.UserID2 = u.UserID
WHERE f.UserID1 = '1001'
  AND u.Username LIKE 'A%'
ORDER BY u.Username
```
That should do it. Of course you could return more than just the Username.
|
SQL Server : Subquery returned more than 1 value
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have 5 tables: A, B, C, D, E
A: {ID, Value, GroupID, FilterID, In\_Date}
B: {ID, Description, Out\_Date, From\_Date }
C: {ID, Category}
D: {GroupID, GroupName}
E: {FilterID, FilterName}
There could be missing IDs in B and C; as a result, I'm trying to do a LEFT JOIN to get the required info. In my dataset, I've explicitly added a row in A with an ID that's not in B or C. I expect it to be picked up by the following query, but that doesn't happen. I believe I'm going wrong in the WHERE clause due to null values in B and C.
```
SELECT A.Value, A.Group, B.Description, C.Category, D.GroupName, E.FilterName
FROM A LEFT JOIN B ON A.ID = B.ID
LEFT JOIN C ON A.ID = C.ID,
D,E
WHERE
B.Out_Date>A.In_Date,
A.GroupID=D.GroupID,
A.FilterID=E.FilterID
```
How can I fix this to retrieve the fields I want with null values when the ID is not in B or C , but in A?
|
1) Don't mix old comma separated join syntax with modern explicit join syntax!
2) When you `LEFT JOIN`, put the right-side table's conditions in the `ON` clause to get true left join behavior. (When they are in the `WHERE`, you get an inner join result.)
```
SELECT A.Value, A.Group, B.Description, C.Category, D.GroupName, E.FilterName
FROM A LEFT JOIN B ON A.ID = B.ID AND B.Out_Date > A.In_Date
LEFT JOIN C ON A.ID = C.ID
JOIN D ON A.GroupID = D.GroupID
JOIN E ON A.FilterID = E.FilterID
```
`Value` and `Group` are reserved words in ANSI SQL, so you may need to delimit them, as `"Value"` and `"Group"`.
|
Here is your table details
A: {ID, Value, GroupID, FilterID, In\_Date}
B: {ID, Description, Out\_Date, From\_Date }
C: {ID, Category}
D: {GroupID, GroupName}
E: {FilterID, FilterName}
Now you are trying to retrieve data using a left join, so try the following script:
```
SELECT A.Value, A.Group, B.Description, C.Category, D.GroupName,E.FilterName from A left join B on A.ID=B.ID
left Join C on A.ID=C.ID
Left Join D on A.GroupID=D.GroupID
Left Join E on A.FilterID=E.FilterID
where B.Out_Date>A.In_Date
```
I hope this is helpful for you.
|
What is wrong with the following SQL left join?
|
[
"",
"sql",
"join",
""
] |
```
SELECT CASE WHEN ISNUMERIC('1a') =1 THEN 1 ELSE 'A' END
```
I'm getting this error:
> Conversion failed when converting the varchar value 'A' to data type int.
|
You are a victim of data type precedence. Taken from BOL:
> When an operator combines two expressions of different data types, the rules for data type precedence specify that the data type with the lower precedence is converted to the data type with the higher precedence. If the conversion is not a supported implicit conversion, an error is returned. When both operand expressions have the same data type, the result of the operation has that data type
So in your case, CASE is an expression, and when two or more data types are combined in it, it returns the data type with the highest precedence; in your case that is INT. So change your query to the one below:
```
SELECT CASE WHEN ISNUMERIC('1a') =1 THEN cast(1 as varchar) ELSE 'A' END
```
|
```
SELECT CASE WHEN ISNUMERIC('1a') =1 THEN 1 ELSE 2 END
```
|
Sql ISNUMERIC()
|
[
"",
"sql",
"sql-server",
""
] |
I have 2 tables. Table1 would be the main table.
Table2 contains data related to table1.
Table1:
```
WONUM
123
124
125
```
Table2:
```
wonum task holdstatus
123 1 APPR
123 2 APPR
123 3 APPR
124 1 COMP
124 2 APPR
125 1 COMP
125 2 COMP
```
I want to select ALL wonum from table1 where table1.wonum = table2.wonum and there are NO records with a table2.HOLDSTATUS = 'COMP'
Any help would be great.
The closest I got was:
```
select * from table1 where
exists (select 1 from table2 where table1.wonum=table2.wonum and holdstatus != 'COMP');
```
|
You've almost got the right answer.
Try this query:
```
SELECT t1.wonum
FROM table1 t1
WHERE t1.wonum NOT IN (
SELECT t2.wonum
FROM table2 t2
WHERE t2.wonum = t1.wonum
AND t2.holdstatus = 'COMP'
);
```
This should give you all of the records you need. In this case, just record 123.
You can also do it using a NOT EXISTS query. Generally, they perform better, but if you have a small table, then it wouldn't make that much of a difference.
```
SELECT t1.wonum
FROM table1 t1
WHERE NOT EXISTS (
SELECT t2.wonum
FROM table2 t2
WHERE t2.wonum = t1.wonum
AND t2.holdstatus = 'COMP'
);
```
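Here is the `NOT EXISTS` version run against the exact sample data from the question, sketched with Python's `sqlite3`; only wonum 123 survives:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (wonum INTEGER);
CREATE TABLE table2 (wonum INTEGER, task INTEGER, holdstatus TEXT);
INSERT INTO table1 VALUES (123), (124), (125);
INSERT INTO table2 VALUES
    (123, 1, 'APPR'), (123, 2, 'APPR'), (123, 3, 'APPR'),
    (124, 1, 'COMP'), (124, 2, 'APPR'),
    (125, 1, 'COMP'), (125, 2, 'COMP');
""")

rows = conn.execute("""
    SELECT t1.wonum FROM table1 t1
    WHERE NOT EXISTS (SELECT 1 FROM table2 t2
                      WHERE t2.wonum = t1.wonum
                        AND t2.holdstatus = 'COMP')
""").fetchall()
```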
|
You are close, you just need to use a `NOT EXISTS` and reverse your `holdstatus` condition:
```
Select *
From table1 t1
Where Not Exists
(
Select *
From table2 t2
Where t1.wonum = t2.wonum
And t2.holdstatus = 'COMP'
);
```
|
Oracle SQL - Finding records from table1 where table2 condition does not exist
|
[
"",
"sql",
"oracle",
""
] |
Title sucks, I couldn't come up with a good one explaining what I want, sorry.
I have a complex query, which I have simplified below. Basically, it must always return one record. The main table has a left join (it can't be an inner join) with another table, from which it retrieves the count of records it matches; so far this has also returned one record. But now there is a case where one property from the secondary table can be null, and if so, that record must also be included. Then the query returns two rows, and I somehow need to merge them into one record where the matching-records count and the null-value count are summed.
So, given these tables:
```
TableA TableB
+----+------+ +----+------+
| id | type | | id | type |
+----+------+ +----+------+
| 1 |Email | | 1 |Email |
| 2 |Twitt | | 2 |null |
+----+------+ +----+------+
```
And this query:
```
select * from TableA a
left join
(select count(distinct b.id) count, b.type
from TableB b
Group by b.type) as asd
on a.type = asd.type or asd.type is null
where a.type = 'Email'
```
which returns:
```
+----+------+-------+------+
| id | type | count | type |
+----+------+-------+------+
| 1 |Email | 1 |(null)|
| 1 |Email | 1 |Email |
+----+------+-------+------+
```
is it possible to return it like this:?
```
+----+------+-------+------+
| id | type | count | type |
+----+------+-------+------+
| 1 |Email | 2 |Email |
+----+------+-------+------+
```
I actually don't even need to return the secondary table's type, just the right count.
Here is a fiddle I made: <http://sqlfiddle.com/#!3/8cf5c/2>
|
I don't know why you need the type at the end; if you can remove it, the query would look something like this:
```
select a.id, a.type, count(asd.count) as count
from TableA a join
(select count(distinct b.id) as count, b.type
from TableB b
Group by b.type
) asd
on a.type = asd.type OR asd.type IS NULL
where a.type = 'Email'
group by a.id, a.type
```
But if you want exactly the result you expect, I think you can try this query:
```
select a.id, a.type, count(asd.count) as count, ISNULL(asd.type,a.type) as type
from TableA a join
(select count(distinct b.id) as count, b.type
from TableB b
Group by b.type
) asd
on a.type = asd.type OR asd.type IS NULL
where a.type = 'Email'
group by a.id, a.type, ISNULL(asd.type,a.type)
```
|
It should be this:
```
select a.Type, count(*) from TableA a
left join
(select count(distinct b.id) count, b.type
from TableB b
Group by b.type) as asd
on a.type = asd.type or asd.type is null
where a.type = 'Email'
group by a.type
```
|
GROUP BY - add result with null group by key to other result
|
[
"",
"sql",
"sql-server-2008",
"t-sql",
""
] |
I am learning to use recursive queries in DB2. I found a practice problem online: print the following pattern
```
*
**
***
****
*****
******
*******
********
*********
**********
***********
```
up to 20 levels. I solved it in Oracle using the following query:
```
select lpad('*', level, '*') result from dual connect by level <= 20
```
but I have no idea how to do it in DB2; I would really appreciate any help.
Also, how do I do it in reverse order, i.e. how do I print the star pyramid from the 20th level down to the 1st?
|
Here is the correct query for DB2, using `REPEAT` instead of `LPAD`:
```
with x(id,val) as
(
select 1, REPEAT('*',1) from sysibm.sysdummy1
union all
select id+1, REPEAT('*',id+1) from x where id < 20
)
select val from x
```
And in reverse order, from the 20th level down to the 1st:
```
with x(id,val) as
(
select 20, REPEAT('*',20) from sysibm.sysdummy1
union all
select id-1, REPEAT('*',id-1) from x where id > 1
)
select val from x
```
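The recursion translates almost verbatim to other engines. A sketch with Python's `sqlite3`, which uses `WITH RECURSIVE` and the `||` concatenation operator instead of `REPEAT`; the reverse order falls out of `ORDER BY id DESC`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

CTE = """
WITH RECURSIVE x(id, val) AS (
    SELECT 1, '*'
    UNION ALL
    SELECT id + 1, val || '*' FROM x WHERE id < 20
)
SELECT val FROM x {order}
"""

pyramid = [r[0] for r in conn.execute(CTE.format(order="ORDER BY id"))]
reverse = [r[0] for r in conn.execute(CTE.format(order="ORDER BY id DESC"))]
```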
|
Replace `dual` with `sysibm.sysdummy1`
```
select lpad('*', level, '*') result
from sysibm.sysdummy1
connect by level <= 20
```
|
How to use Recursive query in db2
|
[
"",
"sql",
"db2",
""
] |
I'm probably fundamentally misunderstanding temp tables -
But from within SSMS, I have the following:
```
Create Table #temp(FromUserId int, ToUserId int, FromAction int, ToAction int, IsMatch int)
```
If I Execute this twice, I get the error:
```
There is already an object named '#temp' in the database.
```
Why is this happening if the table is #temporary?
|
Temp tables are scoped either local to the connection or global to all connections. When I build procedures with temp tables, I work in a normal query window and put a drop statement before each create. Then, when it is all set, I add the create-procedure code and comment out the drop table statements, since during normal execution the temp table will not exist yet. Here is a really good article on the subject of temporary tables:
<https://www.simple-talk.com/sql/t-sql-programming/temporary-tables-in-sql-server/>
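The per-connection scoping can be demonstrated outside SQL Server too. SQLite temp tables are also connection-scoped, so the behavior can be sketched in Python: two connections may each create a temp table with the same name, but creating it twice on the same connection fails, just like re-executing the batch in one SSMS tab:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
c1 = sqlite3.connect(path)
c2 = sqlite3.connect(path)

c1.execute("CREATE TEMP TABLE t (x INT)")
c2.execute("CREATE TEMP TABLE t (x INT)")  # different connection: no conflict

# re-running the CREATE on the same connection fails
failed = False
try:
    c1.execute("CREATE TEMP TABLE t (x INT)")
except sqlite3.OperationalError:
    failed = True
```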
|
Each SSMS tab is its own connection. Any temp objects you create won't get dropped until you close it or drop it explicitly. It's actually quite useful behavior.
|
Why can't I create a temp table twice in a row from SSMS?
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a query (SQL) to pull out a street name from a string. It's looking for the last occurrence of a digit, and then pulling the following text as the street name. I keep getting the Oracle
> "argument '0' is out of range"
error but I'm struggling to figure out how to fix it.
the part of the query in question is
```
substr(address,regexp_instr(address,'[[:digit:]]',1,regexp_count(address,'[[:digit:]]'))+2)
```
Any help would be amazing. (Using SQL Developer.)
|
The fourth parameter of [`regexp_instr`](https://docs.oracle.com/cd/B12037_01/server.101/b10759/functions114.htm) is the occurrence:
> occurrence is a positive integer indicating which occurrence of
> pattern in source\_string Oracle should search for. The default is 1,
> meaning that Oracle searches for the first occurrence of pattern.
In this case, if an address has no digits within, the `regexp_count` will return 0, that's not a valid occurrence.
|
> looking for the last occurrence of a digit, and then pulling the following text as the street name
You could simply do:
```
SELECT REGEXP_REPLACE( address, '^(.*)\d+\D*$', '\1' )
AS street_name
FROM address_table;
```
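The same extraction, with the zero-occurrence case handled, can be sketched as a plain regular expression (shown here in Python): `\d(\D*)$` anchors on the last digit, and the guard covers addresses with no digits at all, the case that makes `REGEXP_COUNT` return the invalid occurrence 0.

```python
import re

def street_after_last_digit(address):
    # anchor on the LAST digit: everything after it must be non-digits
    m = re.search(r"\d(\D*)$", address)
    return m.group(1).strip() if m else None  # None: no digit at all
```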
|
Argument '0' is out of range error
|
[
"",
"sql",
"oracle",
"arguments",
""
] |
I have a table with 372 million rows. I want to delete old rows, starting from the first ones, without blocking the DB. How can I achieve that?
The table has:
```
id | memberid | type | timeStamp | message |
1 123 10 2014-03-26 13:17:02.000 text
```
UPDATE:
I deleted about 30 GB of space in the DB, but my disk still shows only 6 GB free.
Any suggestion on how to reclaim that free space?
Thank you in advance!
|
```
select 1;
while(@@ROWCOUNT > 0)
begin
WAITFOR DELAY '00:00:10';
delete top(10) from tab where <your-condition>;
end
```
Delete in chunks using the SQL above.
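The chunked-delete loop can be sketched in Python against `sqlite3` (the `DELETE ... WHERE id IN (SELECT ... LIMIT n)` form is a portable stand-in for SQL Server's `DELETE TOP(n)`); each small batch commits separately so locks stay short-lived:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tab (id INTEGER PRIMARY KEY, ts INTEGER)")
conn.executemany("INSERT INTO tab (ts) VALUES (?)", [(i,) for i in range(1000)])
conn.commit()

CHUNK = 100
deleted_total = 0
while True:
    # one small batch per transaction, so each lock is short-lived
    cur = conn.execute(
        "DELETE FROM tab WHERE id IN "
        "(SELECT id FROM tab WHERE ts < 500 LIMIT ?)", (CHUNK,))
    conn.commit()
    if cur.rowcount == 0:
        break
    deleted_total += cur.rowcount

remaining = conn.execute("SELECT COUNT(*) FROM tab").fetchone()[0]
```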
|
You may want to consider another approach:
1. Create a table based on the existing one
2. Adjust the identity column in the empty table to start from the latest value from the old table (if there is any)
3. Swap the two tables using `sp_rename`
4. Copy the records in batches into the new table from the old table
5. You can do whatever you want with the old table.
**BACKUP your database before you start deleting records / play with tables.**
|
SQL - Delete table with 372 million rows starting from first row
|
[
"",
"sql",
"sql-server",
"large-data",
""
] |
These two queries output different results.
My question: for each `AND m.status = $1 ...` condition, does it make a difference whether it follows its LEFT JOINed table or is moved to the final WHERE clause?
query 1
```
SELECT count(mua.*)
AS total_row_count
FROM media_user_action mua
LEFT JOIN media m ON m.id = mua.media_id
AND m.status = $1
LEFT JOIN gallery_media gm ON gm.id = mua.media_id
LEFT JOIN gallery g ON g.id = gm.gallery_id
AND g.status = $1
LEFT JOIN "user" mcbu ON mcbu.id = m.create_by_user_id
AND mcbu.status = $1
LEFT JOIN "user" gcbu ON gcbu.id = g.create_by_user_id
AND gcbu.status = $1
WHERE mua.user_id = $2
```
query 2
```
SELECT count(mua.*)
AS total_row_count
FROM media_user_action mua
LEFT JOIN media m ON m.id = mua.media_id
LEFT JOIN gallery_media gm ON gm.id = mua.media_id
LEFT JOIN gallery g ON g.id = gm.gallery_id
LEFT JOIN "user" mcbu ON mcbu.id = m.create_by_user_id
LEFT JOIN "user" gcbu ON gcbu.id = g.create_by_user_id
WHERE
m.status = $1
AND g.status = $1
AND mcbu.status = $1
AND gcbu.status = $1
AND mua.user_id = $2
```
**UPDATE**
Based on the answer below: if I want to make sure a result row is returned even when a joined record could be null, but still require each joined table's status to equal $1 when it matches, I have to put the `AND x.status ...` condition after its join, right?
|
When you use a left outer join, values from the right table may be `NULL`.
For simplicity's sake, let's say we have
`Table A (id, name)` and `Table B (fid, status)`
Then query1 will be like
```
select A.id, B.status
from A
left join (select * from B where status = $1)
on A.id = B.fid;
```
so the result could have a `NULL` B.status.
And query2 will be like
```
select C.*
from (select A.id, B.status
from A
left join B
on A.id = B.fid
) C
where C.status = $1;
```
It's equal to
```
select *
from A
inner join B
on A.id = B.fid
where B.status = $1;
```
So B.status must be `exactly` $1, and is never `NULL`.
|
When you put `WHERE` conditions on tables that you have `LEFT JOIN`-ed, and those conditions require some of their fields to have non-NULL values, then you are actually converting the `LEFT JOIN` into an `INNER JOIN`.
This is because a `LEFT JOIN` may produce results where the joined table has no matching record. In that case all fields of that "virtual" record have value `NULL`. So by requiring that one of those fields is not null, you remove those instances from the result.
If, on the contrary, you put such conditions in the `LEFT JOIN`'s `ON` clause, then you do not break this mechanism: a non-match will still give a result, albeit with `NULL`.
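Both answers can be verified with a few lines of Python and `sqlite3`: the same `b.status` condition gives different results depending on whether it lives in the `ON` clause or the `WHERE` clause:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE a (id INTEGER);
CREATE TABLE b (fid INTEGER, status INTEGER);
INSERT INTO a VALUES (1), (2);
INSERT INTO b VALUES (1, 1);  -- no b row for a.id = 2
""")

# condition in ON: the unmatched left row survives, with NULLs
on_rows = conn.execute("""
    SELECT a.id, b.status
    FROM a LEFT JOIN b ON a.id = b.fid AND b.status = 1
    ORDER BY a.id
""").fetchall()

# same condition in WHERE: NULL fails the test, so it acts like INNER JOIN
where_rows = conn.execute("""
    SELECT a.id, b.status
    FROM a LEFT JOIN b ON a.id = b.fid
    WHERE b.status = 1
    ORDER BY a.id
""").fetchall()
```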
|
select left join table use and condition
|
[
"",
"sql",
"postgresql",
""
] |
We are doing some validation of data which has been migrated from one SQL Server to another SQL Server. One of the things that we are validating is that some numeric data has been transferred properly. The numeric data is stored as a float datatype in the new system.
We are aware that there are a number of issues with float datatypes, that exact numeric accuracy is not guaranteed, and that one cannot use exact equality comparisons with float data. We don't have control over the database schemas nor data typing and those are separate issues.
What we are trying to do in this specific case is verify that some ratio values were transferred properly. One of the specific data validation rules is that all ratios should be transferred with no more than 4 digits to the right of the decimal point.
So, for example, valid ratios would look like:
```
.7542
1.5423
```
Invalid ratios would be:
```
.12399794301
12.1209377
```
What we would like to do is count the number of digits to the right of the decimal point and find all cases where the float values have more than four digits to the right of it. We've been using the SUBSTRING, LEN, STR, and a couple of other functions to achieve this, and I am sure it would work if we had numeric fields typed as decimal which we were casting to char.
However, what we have found when attempting to convert a float to a char value is that SQL Server seems to always convert to decimal in between. For example, the field in question shows this value when queried in SQL Server Enterprise Manager:
```
1.4667
```
Attempting to convert to a string using the recommended function for SQL Server:
```
LTRIM(RTRIM(STR(field_name, 22, 17)))
```
Returns this value:
```
1.4666999999999999
```
The value which I would expect **if** SQL Server were directly converting from float to char (which we could then trim trailing zeroes from):
```
1.4667000000000000
```
Is there any way in SQL Server to convert directly from a float to a char without going through what appears to be an intermediate conversion to decimal along the way? We also tried the CAST and CONVERT functions and received similar results to the STR function.
SQL Server Version involved: SQL Server 2012 SP2
Thank you.
|
Your validation rule seems to be misguided.
An SQL Server `FLOAT`, or `FLOAT(53)`, is stored internally as a 64-bit floating-point number according to the IEEE 754 standard, with 53 bits of mantissa ("value") plus an exponent. Those 53 binary digits correspond to approximately 15 decimal digits.
Floating-point numbers have limited precision, which does not mean that they are "fuzzy" or inexact in themselves, but that not all numbers can be exactly represented, and instead have to be represented using *another* number.
For example, there is no exact representation for your **1.4667**, and it will instead be stored as a binary floating-point number that (exactly) corresponds to the decimal number **1.466699999999999892708046900224871933460235595703125**. Correctly rounded to 16 decimal places, that is **1.4666999999999999**, which is precisely what you got.
Since the "exact character representation of the float value that is in SQL Server" is **1.466699999999999892708046900224871933460235595703125**, the validation rule of "no more than 4 digits to the right of the decimal point" is clearly flawed, at least if you apply it to the "exact character representation".
What you might be able to do, however, is to round the stored number to fewer decimal places, so that the small error at the end of the decimals is hidden. Converting to a character representation rounded to 15 instead of 16 places (remember those "15 decimal digits" mentioned at the beginning?) will give you **1.466700000000000**, and then you can check that all decimals after the first four are zeroes.
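A quick way to see this effect outside SQL Server is Python, whose `float` is the same IEEE 754 double. The validation helper below is a sketch of the "round to 15 places, then require zeros after the fourth decimal" idea; `has_at_most_4_decimals` is my own name, not anything from SQL Server:

```python
# 1.4667 has no exact binary representation; printing with enough
# places exposes the stored neighbour, and 15 places hides it again.
x = 1.4667
print(f"{x:.16f}")  # 1.4666999999999999
print(f"{x:.15f}")  # 1.466700000000000

def has_at_most_4_decimals(v: float, places: int = 15) -> bool:
    """Round to `places` decimals, then require zeros after the 4th."""
    frac = f"{v:.{places}f}".split(".")[1]
    return set(frac[4:]) <= {"0"}

print(has_at_most_4_decimals(1.4667))      # True
print(has_at_most_4_decimals(12.1209377))  # False
```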
|
You can try using `cast` to `varchar`.
```
select case when
len(
substring(cast(col as varchar(100))
,charindex('.',cast(col as varchar(100)))+1
,len(cast(col as varchar(100)))
)
) = 4
then 'true' else 'false' end
from tablename
where charindex('.',cast(col as varchar(100))) > 0
```
|
How Can I Get An Exact Character Representation of a Float in SQL Server?
|
[
"",
"sql",
"sql-server",
"string",
"floating-point",
""
] |
I am working on building a new SSIS project from scratch. I want to work on it with a couple of my teammates. I was hoping to get a suggestion on how we can set up source control so that a few of us can work concurrently on the same SSIS project (same dtsx file, building new packages).
Version:
SQL Server Integration Service v11
Microsoft Visual Studio 2010
|
It is my experience that there are two opportunities for any source control system and SSIS projects to get out of whack: adding new items to the project and concurrent changes to an existing package.
## Adding new items
An SSIS project has the .dtproj extension. Inside there, it's "just" XML defining what all belongs to the project. At least for 2005/2008 and 2012+ on the package deployment model. The 2012+ *project* deployment model carries a good bit more information about the state of the packages in the project.
When you add new packages (or project level connection managers or .biml files) the internal structure of the .dtproj file is going to change. Diff tools generally don't handle merging XML well. Or at all really. So, to prevent the need for merging the project definition, you need to find a strategy that works for your team.
I've seen two approaches work well. The first is to upfront define all the packages you think you'll need. DimFoo, DimDate, DimBar, FactBlee. Check that project and the associated empty packages in and everyone works on what is out there. When the initial cut of packages is complete, then you'll ensure everyone is sync'ed up and then add more empty packages to the project. The idea here is that there is one person, usually the lead, who is responsible for changing the "master" project definition and everyone consumes from their change.
The other approach requires communication between team members. If you discover a package needs to be added, communicate with your mates "I need to add a new package - has anyone modified the project?" The answer should be No. Once you've notified that a change to the project definition is coming, make it and immediately commit it. The idea here is that people commit and sync/check in whatever terminology with great frequency. If you as a developer don't keep your local repository up to date, you're going to be in for a bad time.
## Concurrent edits
Don't. Really, that's about it. The general problem with concurrent changes to an SSIS package is that in addition to the XML diff issue above, SSIS *also* includes layout data alongside tasks so I can invert the layout and make things flow from bottom to top or right to left and there's no material change to SSIS package but as Siyual notes "Merging changes in SSIS is nightmare fuel"
If you find your packages are so large and that developers need to make concurrent edits, I would propose that you are doing too much in there. Decompose your packages into smaller, more tightly focused units of work and then control their execution through a parent package. That would allow a better level of granularity to your development and debugging process in addition to avoiding the concurrent edit issue.
|
A dtsx file is basically just an XML file. Compare it to a bunch of people trying to write the same book. The solution I suggest is to use Team Foundation Server as source control. That way everyone can check in and out and merge packages. If you really don't have that option, try to split your ETL process into logical parts and at the end create a master package that calls each sub package in the right order.
An example: let's say you need to import stock data from one source, branches and other company information from an internal server, and sale amounts from different external sources. After you have gathered all information, you want to connect those and run some analyses.
You first design the target database entities that you need and the relations. One of your members creates a package that does all the imports to staging tables. Another one maybe handles external sources and parallelizes/optimizes the loading. You would build a package that merges your staging and production tables, maybe historicizing and so on.
At the end you have a master package that calls each of the mentioned packages and maybe some additional logging or such.
|
Source control in SSIS and Concurrent work on dtsx file
|
[
"",
"sql",
"sql-server",
"ssis",
"ssis-2012",
""
] |
## Explanation
I have a table which does not have a primary key (or not even a composite key).
The table is for storing the time slots (opening hours and food delivery available hours) of the food shops. Let's call the table "business\_hours" and the main fields are as below.
* shop\_id
* day (0 - 6, means Sunday - Saturday)
* type (open, delivery)
* start\_time
* end\_time
As an example, if shop A is open on Monday from 9.00am - 01.00pm and 05.00pm to 10.00pm, there will be two records in the business\_hours table for this scenario.
```
-----------------------------------------------
| shop_id | day | type | start_time | end_time
-----------------------------------------------
| 1000 | 1 | open | 09:00:00 | 13:00:00
-----------------------------------------------
| 1000 | 1 | open | 17:00:00 | 22:00:00
-----------------------------------------------
```
When I query this table, I will use shop\_id always as the first condition in where clause.
Ex:
```
SELECT COUNT(*) FROM business_hours WHERE shop_id = 1000 AND day = 1 AND type = 'open' AND start_time <= '13.29.00' AND end_time > '13.29.00';
```
## Question
Is applying an index on "shop\_id" enough, or should the "day" & "type" fields also be indexed?
It would also be great if you could explain how the indexing really works.
|
It depends on several factors that you should specify:
1. How fast will the data grow
2. What is the estimated table size in rows
3. What queries will be run against that table
4. How fast do you expect the queries to run
It is more about thinking like: some service will make thousands of inserts of new records per hour, the old records will be archived nightly, and reports are to be created nightly from that table. In such a case you may prefer not to create many indexes, since they slow down inserts.
On the other hand if your table will grow and change slowly and many users will run queries against it, you need to have proper indexes to speed up queries.
If you can, try to create a clustered unique primary key that most queries can benefit from. If you have data that forms some timeline and most queries will get ranges of data using datetime criteria (like from - to), it is better to include the datetime in the clustered index - you will get the fastest query performance.
So something like this will grant you the best performance for the mentioned select. (But you cannot store duplicate business hours for one shop and type.)
```
CREATE TABLE Business_hours
( shop_id INT NOT NULL
, day INT NOT NULL
--- other columns
, CONSTRAINT Business_hours_PK
PRIMARY KEY (shop_id, day, type, start_time, end_time) -- your clustered index
)
```
Just creating an index on fields used in the SELECT (all of them or just some of them most used), will speed up your query too:
```
CREATE INDEX BusinessHours_IX ON business_hours (shop_id,day,type, start_time, end_time);
```
The difference between a clustered and a non-clustered index is that a clustered index affects the order in which records are stored on disk.
You can use EXPLAIN to find missing indexes in your database, see [this answer](https://stackoverflow.com/a/30627050/2224701).
For more detail, see [this blog](http://mysql.rjweb.org/doc.php/index_cookbook_mysql).
|
Having decided against a primary key means the following would be allowed:
```
| shop_id | day | type | start_time | end_time
+---------+-----+--------+------------+---------
| 1000 | 1 | open | 09:00:00 | 13:00:00
| 1000 | 1 | open | 09:00:00 | 13:00:00
| 1000 | 1 | open | 17:00:00 | 22:00:00
| 1000 | 1 | closed | 17:00:00 | 22:00:00
```
So you can have duplicate entries that may lead to strange query results and even have a shop open *and* closed in the very same time range. (But well, we all know that even with a primary key you'd still need a before-insert trigger to detect a range *overlapping*, e.g. 12:00-15:00 vs. 13:00-16:00, and throw an error in case. - How I wish there were some built-in range detection, so we could, say, have a unique index on `(shop_id, day, range(start_time, end_time))`.)
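As a sanity check, here is the composite key doing its duplicate-rejection job in SQLite, and also the overlap case slipping through exactly as described (a sketch with invented times):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE business_hours (
        shop_id INTEGER, day INTEGER, type TEXT,
        start_time TEXT, end_time TEXT,
        PRIMARY KEY (shop_id, day, type, start_time, end_time)
    )
""")
con.execute("INSERT INTO business_hours VALUES (1000, 1, 'open', '09:00:00', '13:00:00')")

# An exact duplicate violates the composite key.
try:
    con.execute("INSERT INTO business_hours VALUES (1000, 1, 'open', '09:00:00', '13:00:00')")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True

# An overlapping (but not identical) range sails straight through.
con.execute("INSERT INTO business_hours VALUES (1000, 1, 'open', '12:00:00', '15:00:00')")
count = con.execute("SELECT COUNT(*) FROM business_hours").fetchone()[0]
print(duplicate_rejected, count)  # True 2
```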
As to your question: Provided your database is built well, you already have a foreign key on `shop_id`. You don't need any further index as long as you consider your queries fast enough.
Once you think you need to speed them up, you can add composite indexes as needed. That would usually be an index on all columns in the slow query's `WHERE` clause. If that still doesn't suffice add the columns that are in the `GROUP BY` clause, if any. Next step would be to add the columns of the `HAVING` clause, if any. Next would be the columns of the `ORDER BY` clause. And the last step would be to even add all columns in your `SELECT` clause, which would give you a covering index, i.e. all data needed for the query would be in the index and the table itself would hence not have to be accessed any longer.
But as mentioned: As long as you don't have performance issues, you don't have to add any composite indexes.
|
How to decide which fields must be indexed in a database table
|
[
"",
"mysql",
"sql",
"database",
"indexing",
""
] |
I have a Grades table where I have the following fields:
-STUDENT\_ID
-COURSE\_ID
-FIRST\_TERM
-SECOND\_TERM
-FINAL
And a Course table:
-COURSE\_ID
-NAME
-DEPARTMENT\_ID
I'm trying to get all the grades for a particular student, with the grades for each course specified. I was wondering: how do I get the name of each course?
This is how I get the grades but I want to include the course name also:
```
SELECT student_id,
course_id,
(first_term+second_term+final) AS "Total Mark"
FROM MARKS
WHERE student_id = 1;
```
|
You can use this query:
```
SELECT s.student_id,
s.course_id,
c.course_name,
(s.first_term+s.second_term+s.final) AS "Total Mark"
FROM marks s
INNER JOIN course c ON c.course_id = s.course_id
WHERE s.student_id = 1
```
Make sure to prefix field names with table names when they are used in both tables (like for `course_id`). I have prefixed all fields with table aliases.
Table aliases are like short names for tables and you define them right after the table name in the `FROM` clause.
|
```
SELECT m.student_id,
       m.course_id,
       c.name AS course_name,
       (m.first_term + m.second_term + m.final) AS "Total Mark"
FROM MARKS m
INNER JOIN course c ON m.course_id = c.course_id
WHERE m.student_id = 1;
```
Use an inner join between marks and course table to get name from course table.
|
How to get each course name in a grades table where I only have the id for course?
|
[
"",
"sql",
""
] |
I have a table with records as below:
[](https://i.stack.imgur.com/GPG1D.png)
I need to verify that, for each combination of Number and Name, if the Code is null then there is another record with the same Number and Name that has Code 1. A record with Code 1 without a corresponding Code-null record is allowed.
I tried
```
select t1.name,t1.number,t1.code, t2.code
from
(select distinct name,number,code from dummy
where code is null) t1,
(select distinct name,number,code from dummy
where code is not null) t2
where t1.name=t2.name and t1.number=t2.number
```
My idea was to check if for every null in t1.code there is a 1 in t2.code. But I see repetitions in my result. Can anyone please help?
|
If you are just trying to verify and make sure there are none (or find the ones which don't have a code = 1) then you can simply do this:
```
select number, name from table where code is NULL
minus
select number, name from table where code = 1
```
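`MINUS` is Oracle's spelling; SQL Server, PostgreSQL and SQLite call the same set operator `EXCEPT`. A small SQLite check of the approach, with invented sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dummy (number INTEGER, name TEXT, code INTEGER);
    INSERT INTO dummy VALUES
        (1, 'a', NULL), (1, 'a', 1),   -- NULL covered by a code-1 row
        (2, 'b', NULL),                -- violation: no code-1 row
        (3, 'c', 1);                   -- code 1 with no NULL row: allowed
""")

# Rows whose NULL-code entry has no matching code = 1 entry.
violations = con.execute("""
    SELECT number, name FROM dummy WHERE code IS NULL
    EXCEPT
    SELECT number, name FROM dummy WHERE code = 1
""").fetchall()

print(violations)  # [(2, 'b')]
```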
|
```
SELECT
T1.number,
T1.name
FROM
My_Table T1
WHERE
T1.code IS NULL AND
NOT EXISTS (
SELECT *
FROM My_Table T2
WHERE
T2.number = T1.number AND
T2.name = T1.name AND
T2.code = 1
)
```
|
How can I check if each record of a database has a similar record for all columns except one?
|
[
"",
"sql",
""
] |
How can I replace the value in the 'projectId' column with the project name, assuming there is another table named 'project', and the two tables are related by the number after ':' in the 'projectId' column of the employee output?
```
> select * from employee;
+----+-----------+
| id | projectId |
+----+-----------+
| 1 | project:1 |
+----+-----------+
```
Desired output :
```
+----+-----------------------+
| id | projectId |
+----+-----------------------+
| 1 | project:internProject |
+----+-----------------------+
```
Steps required:
> extract the integer [done]
```
update employee SET projectId = substring_index(projectId,':', -1);
```
> update the projectID with name [not sure :( ]
```
update employee SET projectId = concat('project:', select projectName from projects where projects.projectID = employee.projectId);
```
|
You need parentheses around your subquery.
```
update employee e
set projectId = concat('project:',
(select projectName
from projects
where projectId = substring_index(e.projectId, ':', -1)));
```
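The same correlated-subquery update can be sketched in SQLite, which lacks `substring_index`, so `instr`/`substr` stand in for the "everything after the colon" split:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE employee (id INTEGER, projectId TEXT);
    CREATE TABLE projects (projectID INTEGER, projectName TEXT);
    INSERT INTO employee VALUES (1, 'project:1');
    INSERT INTO projects VALUES (1, 'internProject');
""")

# MySQL's substring_index(x, ':', -1) is emulated with instr/substr.
con.execute("""
    UPDATE employee
    SET projectId = 'project:' || (
        SELECT projectName FROM projects
        WHERE projectID = CAST(substr(employee.projectId,
                                      instr(employee.projectId, ':') + 1) AS INTEGER)
    )
""")

result = con.execute("SELECT projectId FROM employee").fetchone()[0]
print(result)  # project:internProject
```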
|
This should do the job:
```
UPDATE employee e, projects p
SET e.projectId = CONCAT('project:',p.projectName)
WHERE substring_index(e.projectId,':', -1) = p.projectID
```
Is **project:** a constant or does it change? If it changes, you need to take it over from `employee`.
|
How to update mysql column with another value from another table?
|
[
"",
"mysql",
"sql",
""
] |
I have two columns, ID and Description. I want to get only the mail IDs for each value in the ID column.
```
ID Description
1 I have 2 mailID please note this mai1: anto1@gmail.com and mai1: anto2@gmail.com and mai1: anto3@gmail.com abbaaabbbbbbb.
2 I have 2 mailID please note this mai1: sample1@gmail.com and mai1: sample2@gmail.com and mai1: sample3@gmail.com abbaaabbbbbbb.
```
Expected output
```
ID Description
1 anto1@gmail.com
1 anto2@gmail.com
1 anto3@gmail.com
2 sample1@gmail.com
2 sample2@gmail.com
2 sample3@gmail.com
```
I have tried this below query.
```
SELECT id,Description
FROM sample_for
WHERE CHARINDEX('@', Description) > 0
```
But please suggest a valid alternative query.
|
Maybe something like this...
## Test Data
```
Declare @table TABLE(ID int , Description Varchar(8000))
INsert into @table values
(1 , 'I have 2 mailID please note this mai1: anto1@gmail.com and mai1: anto2@gmail.com and mai1: anto3@gmail.com abbaaabbbbbbb'),
(2 , 'I have 2 mailID please note this mai1: sample1@gmail.com and mai1: sample2@gmail.com and mai1: sample3@gmail.com abbaaabbbbbbb')
```
## Query
```
Select ID
,LEFT(RTRIM(LTRIM(Emails)) , CHARINDEX(' ' , RTRIM(LTRIM(Emails)))) Emails
from
(
SELECT t.ID
,Split.a.value('.', 'VARCHAR(100)') Emails
FROM
(SELECT Cast ('<X>' + Replace(Description, ':', '</X><X>') + '</X>' AS XML) AS Data
,ID
FROM @table
) AS t CROSS APPLY Data.nodes ('/X') AS Split(a)
)a
Where a.emails LIKE '%@%'
```
## Result Set
```
╔════╦════════════════════╗
║ ID ║ Emails ║
╠════╬════════════════════╣
║ 1 ║ anto1@gmail.com ║
║ 1 ║ anto2@gmail.com ║
║ 1 ║ anto3@gmail.com ║
║ 2 ║ sample1@gmail.com ║
║ 2 ║ sample2@gmail.com ║
║ 2 ║ sample3@gmail.com ║
╚════╩════════════════════╝
```
|
```
DECLARE @xml xml
--- Remove from here...
;WITH cte AS (
SELECT *
FROM (VALUES
(1, 'I have 2 mailID please note this mai1: anto1@gmail.com and mai1: anto2@gmail.com and mai1: anto3@gmail.com abbaaabbbbbbb.'),
(2, 'I have 2 mailID please note this mai1: sample1@gmail.com and mai1: sample2@gmail.com and mai1: sample3@gmail.com abbaaabbbbbbb.')
) as t(ID, [Description])
)
-- To here
SELECT @xml = (
SELECT CAST('<i id="' + CAST(id as nvarchar(10)) + '"><a>' +REPLACE([Description],' ','</a><a>') +'</a></i>' as xml)
FROM cte -- here change cte to your table name
FOR XML PATH ('')
)
SELECT t.v.value('../@id', 'int') as id,
t.v.value('.', 'nvarchar(100)') as email
FROM @xml.nodes('/i/a') as t(v)
WHERE t.v.value('.', 'nvarchar(100)') like '%@%'
```
Output:
```
id email
1 anto1@gmail.com
1 anto2@gmail.com
1 anto3@gmail.com
2 sample1@gmail.com
2 sample2@gmail.com
2 sample3@gmail.com
```
|
How to find the string in large description value
|
[
"",
"sql",
"sql-server",
""
] |
I wrote a new query which gets me a `LISTAGG` result such as the one below:
```
Col A
1000016932,1000020056,1000020100,1000020144,1000020243
```
Now what I want to do with that result is as follows:
```
col C        col D
1000016932   1000020056
1000016932   1000020100
1000016932   1000020144
1000016932   1000020243
1000020056   1000020100
1000020056   1000020144   ...and so on
```
Please note that I can't hard-code the levels as each string can be of any given length
|
```
with table_1 (colA) as (
select '1000016932,1000020056,1000020100,1000020144,1000020243' from dual
),
prep (lvl, token) as (
select level, regexp_substr(colA, '[^,]+', 1, level) from table_1
connect by level <= regexp_count(colA, ',') + 1
and colA = prior colA
and prior sys_guid() is not null
)
select p1.token as token_1, p2.token as token_2
from prep p1 join prep p2 on p1.lvl < p2.lvl;
```
This assumes there are no nulls between commas (you don't have two consecutive commas with nothing between them, marking a "null" in the sequence).
Result:
```
TOKEN_1 TOKEN_2
---------- ----------
1000016932 1000020056
1000016932 1000020100
1000016932 1000020144
1000016932 1000020243
1000020056 1000020100
1000020056 1000020144
1000020056 1000020243
1000020100 1000020144
1000020100 1000020243
1000020144 1000020243
```
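The join condition `p1.lvl < p2.lvl` is what yields each unordered pair exactly once; in procedural terms it is the same thing `itertools.combinations` does:

```python
from itertools import combinations

col_a = '1000016932,1000020056,1000020100,1000020144,1000020243'
tokens = col_a.split(',')

# p1.lvl < p2.lvl in the SQL self-join == each unordered pair once.
pairs = list(combinations(tokens, 2))

print(len(pairs))   # 10, i.e. C(5, 2)
print(pairs[0])     # ('1000016932', '1000020056')
print(pairs[-1])    # ('1000020144', '1000020243')
```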
To allow several rows in the input table (assuming there is a row\_id column of some sort in the initial table):
```
with table_1 (row_id, colA) as (
select 101, '1000016932,1000020056,1000020100,1000020144,1000020243' from dual union all
select 102, '1000040042,1000045543,1000045664' from dual
),
prep (lvl, row_id, token) as (
select level, row_id, regexp_substr(colA, '[^,]+', 1, level) from table_1
connect by level <= regexp_count(colA, ',') + 1
and row_id = prior row_id
and prior sys_guid() is not null
)
select p1.row_id, p1.token as token_1, p2.token as token_2
from prep p1 join prep p2 on p1.row_id = p2.row_id and p1.lvl < p2.lvl
order by row_id, token_1;
```
Result:
```
ROW_ID TOKEN_1 TOKEN_2
---------- ---------- ----------
101 1000016932 1000020144
101 1000016932 1000020056
101 1000016932 1000020100
101 1000016932 1000020243
101 1000020056 1000020243
101 1000020056 1000020100
101 1000020056 1000020144
101 1000020100 1000020243
101 1000020100 1000020144
101 1000020144 1000020243
102 1000040042 1000045543
102 1000040042 1000045664
102 1000045543 1000045664
```
|
If I understand this correctly, you need to get all pairs of values inside a comma-separated string where order is not significant, excluding same-value pairs like (1,1), (2,2), etc.
The first step is to convert the string into rows and select a rownumber along with the values -
```
SELECT ROWNUM AS r,
REGEXP_SUBSTR (col_A,
'(.*?)(,|$)',
1,
LEVEL,
NULL,
1)
val
FROM my_table
CONNECT BY LEVEL <= REGEXP_COUNT (COL_A, ',') + 1;
```
Then do a cross join with itself. However this would give you the same pair twice. So something like `{(1,1), (1,2), (1,3), (2,1), (2,2), (2,3), (3,1), (3,2), (3,3)}`. In order to eliminate duplicates and retrieve the rows in a way you want - make sure that the second table's rownumber is greater than the first. This way you would get - {(1,2), (1,3), (2,3)}.
So the final query looks like -
```
WITH my_table
AS (SELECT '1000016932,1000020056,1000020100,1000020144,1000020243'
AS col_A
FROM DUAL),
vals
AS ( SELECT ROWNUM AS r,
REGEXP_SUBSTR (col_A,
'(.*?)(,|$)',
1,
LEVEL,
NULL,
1)
val
FROM my_table
CONNECT BY LEVEL <= REGEXP_COUNT (COL_A, ',') + 1)
SELECT v_a.val AS col_B, v_B.val AS col_C
FROM vals v_A
CROSS JOIN vals v_B
WHERE v_B.val > v_A.val;
```
**EDIT**:
Because there could be multiple rows, it's a good idea to have some kind of ID column with which you can tie rows together. So in this example -
```
ID COL_A
1 1,2,3,4
2 5,6,7
```
The only thing that you need to do is to select unique rows based on the ID when splitting the comma separated string.
```
WITH my_table
AS (SELECT 1 AS id, '1,2,3,4' AS col_A FROM DUAL
UNION ALL
SELECT 2, '5,6,7' FROM DUAL),
vals
AS ( SELECT DISTINCT id,
REGEXP_SUBSTR (col_A,
'(.*?)(,|$)',
1,
LEVEL,
NULL,
1)
val
FROM my_table
CONNECT BY LEVEL <= REGEXP_COUNT (COL_A, ',') + 1)
SELECT v_a.val AS col_B, v_B.val AS col_C
FROM vals v_A
JOIN vals v_B ON v_A.id = v_B.id
WHERE v_B.val > v_A.val;
```
**EDIT 2**:
I realized I'm comparing the actual values and that's not correct. It would force all values to be an integer. Here is a query that would allow integers or strings.
```
WITH my_table
AS (SELECT 1 AS id, '1,2,3,4' AS col_A FROM DUAL
UNION ALL
SELECT 2, '5,6,7' FROM DUAL
UNION ALL
SELECT 3, 'a,b,c' FROM DUAL),
vals
AS ( SELECT DISTINCT id,
REGEXP_SUBSTR (col_A,
'(.*?)(,|$)',
1,
LEVEL,
NULL,
1)
val
FROM my_table
CONNECT BY LEVEL <= REGEXP_COUNT (COL_A, ',') + 1
ORDER BY id, val),
vals_r AS (SELECT ROWNUM AS r, vals.* FROM vals)
SELECT v_a.val AS col_B, v_B.val AS col_C
FROM vals_r v_A
JOIN vals_r v_B ON v_A.id = v_B.id
WHERE v_B.r > v_A.r;
```
|
SQL query to listAgg and group by
|
[
"",
"sql",
"oracle",
"listagg",
""
] |
Each item (an item is identified by its Serial) in my table has many records, and I need to get the last record of each item, so I ran the code below:
```
SELECT ID,Calendar,Serial,MAX(ID)
FROM store
GROUP BY Serial DESC
```
It should show one record for each item, in which all column values come from the last record for that item, but the result is like this:
```
-------------------------------------------------------------+
ID | Calendar | Serial | MAX(ID) |
-------------------------------------------------------------|
7031053 | 2016-05-14 14:05:14 79.5 | N10088 | 7031056 |
7053346 | 2016-05-14 15:17:28 79.8 | N10078 | 7053346 |
7051349 | 2016-05-14 15:21:29 86.1 | J20368 | 7051349 |
7059144 | 2016-05-14 15:50:27 89.6 | J20367 | 7059144 |
7045551 | 2016-05-14 15:15:15 89.2 | J20366 | 7045551 |
7056243 | 2016-05-14 15:25:34 85.2 | J20358 | 7056245 |
7042652 | 2016-05-14 15:18:33 83.9 | J20160 | 7042652 |
7039753 | 2016-05-14 11:48:16 87 | J20158 | 7039753 |
7036854 | 2016-05-14 15:18:35 87.5 | J20128 | 7036854 |
7033955 | 2016-05-14 15:20:45 83.4 | 9662 | 7033955 |
-------------------------------------------------------------+
```
The problem is: why, for example, in the record related to Serial N10088 is the ID "7031053", but MAX(ID) is "7031056"? The same happens for J20358.
Each row should show the last record of each item, but in my output that is not true!
|
If you want the row with the max value, then you need a join or some other mechanism.
Here is a simple way using a correlated subquery:
```
select s.*
from store s
where s.id = (
select max(s2.id)
from store s2
where s2.serial = s.serial
);
```
Your query uses a (mis)feature of MySQL that generates lots of confusion and is not particularly helpful: you have columns in the `select` that are not in the `group by`. What value do these get?
Well, in most databases the answer is simple: the query generates an error as ANSI specifies. MySQL pulls the values for the additional columns from indeterminate matching rows. That is rarely what the writer of the query intends.
For performance, add an index on `store(serial, id)`.
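The correlated-subquery pattern works the same way in SQLite; here is a runnable cut-down version of the table (just `id` and `serial`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE store (id INTEGER, serial TEXT);
    INSERT INTO store VALUES
        (7031053, 'N10088'), (7031056, 'N10088'),
        (7053346, 'N10078');
""")

# Keep only the row whose id equals the per-serial maximum.
rows = con.execute("""
    SELECT s.id, s.serial
    FROM store s
    WHERE s.id = (SELECT MAX(s2.id) FROM store s2 WHERE s2.serial = s.serial)
    ORDER BY s.serial
""").fetchall()

print(rows)  # [(7053346, 'N10078'), (7031056, 'N10088')]
```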
|
Try this one:
```
SELECT MAX(id), tbl.*
FROM store tbl
GROUP BY Serial
```
|
List Last record of each item in mysql
|
[
"",
"mysql",
"sql",
"sorting",
""
] |
In our SSDT project we have a script that is huge and contains a lot of INSERT statements for importing data from an old system. Using sqlcmd variables, I'd like to be able to conditionally include the file into the post deployment script.
We're currently using the `:r` syntax which includes the script inline:
```
IF '$(ImportData)' = 'true'
BEGIN
:r .\Import\OldSystem.sql
END
```
This is a problem because the script is being included **inline** regardless of whether `$(ImportData)` is true or false and the file is so big that it's slowing the build down by about 15 minutes.
Is there another way to conditionally include this script file so it doesn't slow down the build?
|
I ended up using a mixture of our build tool (Jenkins) and SSDT to accomplish this. This is what I did:
1. Added a build step to each environment-specific Jenkins job that writes to a text file. I either write a SQLCMD command that includes the import file or else I leave it blank depending on the build parameters the user chooses.
2. Include the new text file in the Post Deployment script via `:r`.
That's it! I also use this same approach to choose which pre and post deploy scripts to include in the project based on the application version, except that I grab the version number from the code and write it to the file using a pre-build event in VS instead of in the build tool. (I also added the text file name to `.gitignore` so it doesn't get committed)
|
Rather than muddy up my prior answer, I'll add this separately: there is a special case with a VERY simple option.
Create separate SQLCMD input files for each execution possibility.
The key here is to name the execution input files using the value of your control variable.
So, for example, your publish script defines variable 'Config' which may have one of these values: 'Dev','QA', or 'Prod'.
Create 3 post deployment scripts named 'DevPostDeploy.sql', 'QAPostDeploy.sql' and 'ProdPostDeploy.sql'.
Code your actual post deploy file like this:
```
:r .\$(Config)PostDeploy.sql
```
This is very much like the build event mechanism where you overwrite scripts with appropriate ones except you don't need a build event. But you are dependent upon naming your scripts very specifically.
|
How can I conditionally include large scripts in my ssdt post deployment script?
|
[
"",
"sql",
"sql-server",
"sql-server-2014",
"sql-server-data-tools",
""
] |
I have this select:
```
SELECT
*,
(SELECT some_value FROM other_table) as a1,
(SELECT some_value FROM other_table2) as a2
FROM some_table;
```
Is there any way to use the values a1, a2 and work with them like this?
```
SELECT
*,
(SELECT some_value FROM other_table) as a1,
(SELECT some_value FROM other_table2) as a2,
a1+a2,
a1/a2
FROM some_table;
```
Those subqueries are quite big so it's not good if I have to do this:
```
SELECT
*,
(SELECT some_value FROM other_table) as a1,
(SELECT some_value FROM other_table2) as a2,
    (SELECT some_value FROM other_table)+(SELECT some_value FROM other_table2),
    (SELECT some_value FROM other_table)/(SELECT some_value FROM other_table2)
FROM some_table;
```
|
Use `WITH` in Oracle. It's very helpful. [Link](https://oracle-base.com/articles/misc/with-clause)
```
WITH a1 AS (SELECT some_value FROM other_table),
a2 AS (SELECT some_value FROM other_table2)
SELECT *
FROM some_table t
JOIN a1 ON a1.key = t.key
JOIN a2 ON a2.key = t.key
```
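A runnable sketch of the CTE version, using SQLite and invented one-row tables (note the subqueries must expose a join key, here simply `key`, for this rewrite to work):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE some_table (key INTEGER, val TEXT);
    CREATE TABLE other_table (key INTEGER, some_value INTEGER);
    CREATE TABLE other_table2 (key INTEGER, some_value INTEGER);
    INSERT INTO some_table VALUES (1, 'x');
    INSERT INTO other_table VALUES (1, 10);
    INSERT INTO other_table2 VALUES (1, 5);
""")

# The CTE aliases a1/a2 can be reused freely in the select list.
row = con.execute("""
    WITH a1 AS (SELECT key, some_value FROM other_table),
         a2 AS (SELECT key, some_value FROM other_table2)
    SELECT t.val,
           a1.some_value + a2.some_value,
           a1.some_value / a2.some_value
    FROM some_table t
    JOIN a1 ON a1.key = t.key
    JOIN a2 ON a2.key = t.key
""").fetchone()

print(row)  # ('x', 15, 2)
```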
|
Try
```
SELECT t.*,
a1.some_value + a2.some_value,
a1.some_value / a2.some_value
FROM some_table t
cross join (SELECT some_value FROM other_table) as a1
cross join (SELECT some_value FROM other_table2) as a2
```
|
sql working with alias from other select
|
[
"",
"sql",
"oracle11g",
""
] |
I have a quick question: why can't I use the `HAVING` keyword on `distance`? I need to somehow check whether distance < 20, for example.
```
SELECT
Id, Lat, Lng,
(6367 * acos( cos( radians(45.444) )
* cos( radians( Lat ) ) * cos( radians( Lng ) - radians(158.554) )
+ sin( radians(4545) ) * sin( radians( Lat ) ) ) ) AS distance
FROM
Posts
HAVING
    distance < 15 -- Invalid column name
ORDER BY
distance
```
|
Try this
```
SELECT *
FROM
(SELECT
Id, Lat, Lng,
(6367 * acos(cos(radians(45.444)) * cos(radians(Lat)) *
cos(radians(Lng) - radians(158.554)) + sin(radians(4545)) *
sin(radians(Lat)))) AS distance
FROM Posts) p
WHERE
p.distance < 15
ORDER BY
p.distance
```
|
I might suggest `outer apply` for this purpose:
```
SELECT p.Id, p.Lat, p.Lng, d.distance
FROM Posts p OUTER APPLY
     (SELECT (6367 * acos( cos( radians(45.444) )
              * cos( radians( p.Lat ) ) * cos( radians( p.Lng ) - radians(158.554) )
              + sin( radians(4545) ) * sin( radians( p.Lat ) ) ) ) AS distance
     ) d
WHERE d.distance < 15
ORDER BY d.distance;
```
The use of `HAVING` as a substitute for `WHERE` is an extension for MySQL. In other databases, you can use a subquery, CTE, or lateral join (which is the technical name for what `APPLY` does). In this case, I think the lateral join is convenient, because it separates the logic for this quite complicated formula.
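SQLite behaves like SQL Server here, so the derived-table workaround from the first answer can be demonstrated with it. The math functions are registered by hand, `acos` is clamped to dodge rounding just outside [-1, 1], and I use 45.444 as the reference latitude (the `4545` in the question's formula looks like a typo); the sample posts are invented:

```python
import math
import sqlite3

con = sqlite3.connect(":memory:")
con.create_function("radians", 1, math.radians)
con.create_function("cos", 1, math.cos)
con.create_function("sin", 1, math.sin)
# Clamp: cos^2 + sin^2 can land a hair outside [-1, 1] in floating point.
con.create_function("acos", 1, lambda v: math.acos(max(-1.0, min(1.0, v))))

con.executescript("""
    CREATE TABLE Posts (Id INTEGER, Lat REAL, Lng REAL);
    INSERT INTO Posts VALUES (1, 45.444, 158.554), (2, -45.0, -20.0);
""")

# The alias `distance` is invisible to WHERE in the same scope,
# but visible once the SELECT is wrapped as a derived table.
rows = con.execute("""
    SELECT * FROM (
        SELECT Id,
               (6367 * acos(cos(radians(45.444)) * cos(radians(Lat))
                 * cos(radians(Lng) - radians(158.554))
                 + sin(radians(45.444)) * sin(radians(Lat)))) AS distance
        FROM Posts
    ) p
    WHERE p.distance < 15
""").fetchall()

print(rows)  # only post 1, which sits at the reference point, survives
```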
|
Invalid column name on Having in SQL Server
|
[
"",
"sql",
".net",
"sql-server",
""
] |
I have two queries below which, to me, are the same and so should return the same results. However they do not: the bottom query is not returning any data, and I can't see why.
query below return the correct data
```
SELECT myDate, myTicker, SUM(myUnits) AS mUnits
FROM myTbl
where myName = 'abc4' and myDate >= '2016-04-13'
GROUP BY myDate, myTicker
```
The query below returns zero rows, although I don't see why; it's the same query, to me, as the one above.
```
declare @myF nvarchar, @FromDate date
set @myF = 'abc4'
set @FromDate = '2016-04-13'
SELECT myDate, myTicker, SUM(myUnits) AS mUnits
FROM myTbl
where myName = @myF and myDate >= @FromDate
GROUP BY myDate, myTicker
```
**update**
The query below works; it seems the troublesome query doesn't like the `@myF` variable.
```
declare @myF nvarchar, @FromDate date
set @myF = 'abc4'
set @FromDate = '2016-04-13'
SELECT myDate, myTicker, SUM(myUnits) AS mUnits
FROM myTbl
where myName = 'abc4' and myDate >= @FromDate
GROUP BY myDate, myTicker
```
|
Set size for `NVARCHAR(4)`
```
DECLARE @myF NVARCHAR(4), @FromDate DATE
SET @myF = 'abc4'
```
|
The main reason is that the default length for `NVARCHAR` is `1`.
So this query of yours
```
declare @myF nvarchar, @FromDate date
set @myF = 'abc4'
set @FromDate = '2016-04-13'
SELECT myDate, myTicker, SUM(myUnits) AS mUnits
FROM myTbl
where myName = @myF and myDate >= @FromDate
GROUP BY myDate, myTicker
```
actually behaves like this, compared to the 1st query:
```
SELECT myDate, myTicker, SUM(myUnits) AS mUnits
FROM myTbl
where myName = 'a' and myDate >= '2016-04-13'
GROUP BY myDate, myTicker
```
That's why your two queries are not equal.
You need to specify the length of your `NVARCHAR` variable, like:
```
declare @myF nvarchar(MAX), @FromDate date
set @myF = 'abc4'
set @FromDate = '2016-04-13'
SELECT myDate, myTicker, SUM(myUnits) AS mUnits
FROM myTbl
where myName = @myF and myDate >= @FromDate
GROUP BY myDate, myTicker
```
Then you'll get the expected result.
|
two queries returning different data
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
Suppose that we have two tables:
TABLE TA
```
AID BID1 BID2
-- ---- ----
01 01 02
02 01 03
03 02 01
```
TABLE TB
```
BID Name
--- ----
01 FOO
02 BOO
03 LOO
```
If I want to return the following:
```
AID Name1
-- -----
01 FOO
02 FOO
03 BOO
```
I write the following:
```
SELECT TA.AID, TB.Name as Name1
FROM TB
INNER JOIN TA on TB.BID = TA.BID1
```
However, I cannot figure out how to return the TB.Name that correspond to both the BID1 and BID2. More specifically I want to return the following:
```
AID Name1 Name2
-- ----- -----
01 FOO BOO
02 FOO LOO
03 BOO FOO
```
|
You could join multiple times:
```
SELECT TA.AID, tb1.Name AS Name1, tb2.Name AS Name2
FROM TA
LEFT JOIN TB tb1
ON TA.BID1 = tb1.BID
LEFT JOIN TB tb2
ON TA.BID2 = tb2.BID;
```
Note: `LEFT OUTER JOIN` will ensure you always get all records from `TA` even if there is no match.
`LiveDemo`
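As a quick sanity check, the double join above can be reproduced on the question's sample data with SQLite (via Python's `sqlite3`); table and column names are taken from the question:

```python
import sqlite3

# Build the question's TA/TB sample tables in memory.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE TA (AID TEXT, BID1 TEXT, BID2 TEXT);
    CREATE TABLE TB (BID TEXT, Name TEXT);
    INSERT INTO TA VALUES ('01','01','02'), ('02','01','03'), ('03','02','01');
    INSERT INTO TB VALUES ('01','FOO'), ('02','BOO'), ('03','LOO');
""")

# Join TB twice, once per foreign-key column, each with its own alias.
rows = conn.execute("""
    SELECT TA.AID, tb1.Name AS Name1, tb2.Name AS Name2
    FROM TA
    LEFT JOIN TB tb1 ON TA.BID1 = tb1.BID
    LEFT JOIN TB tb2 ON TA.BID2 = tb2.BID
    ORDER BY TA.AID
""").fetchall()
# rows == [('01','FOO','BOO'), ('02','FOO','LOO'), ('03','BOO','FOO')]
```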
|
Just use one more join
```
SELECT TA.AID, TB.Name as Name1, T1.Name as Name2
FROM TB
INNER JOIN TA on TB.BID=TA.BID1
INNER JOIN TB T1 on T1.BID=TA.BID2;
```
|
INNER JOIN for two columns of the same foreign key
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
The simplified version is this: I have a table with two fields. The first field, `trx`, will always have a value. The second field, `tstop`, can be either null or a timestamp.
I would like to organize the output from the select such that the first "group" of records all have tstop of null, the remaining records have a non-null value of `tstop`. Each group is ordered by `trx` desc.
How is this done?
```
TABLE rx
(
recid serial NOT NULL,
trx timestamp without time zone NOT NULL,
tstop timestamp without time zone
)
Example values:
recid trx tstop
36; "2014-06-10 13:05:16"; "";
113759; "2014-06-10 13:05:16"; "";
33558; "2014-03-31 18:08:15"; "2014-03-31 18:08:15";
12535; "2014-03-31 18:08:15"; "";
660; "2014-03-31 18:05:59"; "";
144209; "2014-03-30 19:21:14"; "";
```
Desired Output:
```
recid trx tstop
36; "2014-06-10 13:05:16"; "";
113759; "2014-06-10 13:05:16"; "";
12535; "2014-03-31 18:08:15"; "";
660; "2014-03-31 18:05:59"; "";
144209; "2014-03-30 19:21:14"; "";
33558; "2014-03-31 18:08:15"; "2014-03-31 18:08:15";
```
This obviously will not work:
```
select * from rx order by trx desc;
```
|
You could use `IS NULL`:
```
SELECT *
FROM rx
ORDER BY tstop IS NULL DESC, trx DESC
```
`SqlFiddleDemo`
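For a quick check that the boolean sort key behaves as described, here is a small sketch using SQLite (which also accepts `expr IS NULL` inside `ORDER BY`); the data is a subset of the question's sample:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE rx (recid INTEGER, trx TEXT, tstop TEXT);
    INSERT INTO rx VALUES
        (36,     '2014-06-10 13:05:16', NULL),
        (33558,  '2014-03-31 18:08:15', '2014-03-31 18:08:15'),
        (12535,  '2014-03-31 18:08:15', NULL),
        (144209, '2014-03-30 19:21:14', NULL);
""")

# `tstop IS NULL` evaluates to 1/0, so DESC puts the NULL group first;
# within each group rows are ordered by trx descending.
order = [r[0] for r in conn.execute("""
    SELECT recid FROM rx
    ORDER BY tstop IS NULL DESC, trx DESC
""")]
# order == [36, 12535, 144209, 33558]
```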
|
Just `order by` the columns and use the option `nulls first` to make `null` values appear first:
```
SELECT *
FROM rx
ORDER BY tstop DESC NULLS FIRST, trx DESC
```
|
Grouping select output by null value in PostgreSQL
|
[
"",
"sql",
"database",
"postgresql",
"postgresql-9.5",
""
] |
I have read a lot of other posts here on stackoverflow and google but I could not find a solution.
It all started when I changed the model from a CharField to a ForeignKey.
The error I receive is:
```
Operations to perform:
Synchronize unmigrated apps: gis, staticfiles, crispy_forms, geoposition, messages
Apply all migrations: venues, images, amenities, cities_light, registration, auth, admin, sites, sessions, contenttypes, easy_thumbnails, newsletter
Synchronizing apps without migrations:
Creating tables...
Running deferred SQL...
Installing custom SQL...
Running migrations:
Rendering model states... DONE
Applying venues.0016_auto_20160514_2141...Traceback (most recent call last):
File "/Users/iam-tony/.envs/venuepark/lib/python3.4/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
psycopg2.IntegrityError: column "venue_city" contains null values
```
My model is as follows:
```
class Venue(models.Model):
venue_city = models.ForeignKey(City, null=True,)
venue_country=models.ForeignKey(Country, null=True)
```
venue\_country did not exist before so that migration happened successfully. But venue\_city was a CharField.
I made some changes to my migration file so that it would execute the sql as follows:
```
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('venues', '0011_venue_map_activation'),
]
migrations.RunSQL(''' ALTER TABLE venues_venue ALTER venue_city TYPE integer USING venue_city::integer '''),
migrations.RunSQL(''' ALTER TABLE venues_venue ALTER venue_city RENAME COLUMN venue_city TO venue_city_id '''),
migrations.RunSQL(''' ALTER TABLE venues_venue ADD CONSTRAINT venues_venus_somefk FOREIGN KEY (venue_city_id) REFERENCES cities_light (id) DEFERRABLE INITIALLY DEFERRED'''),
```
Thanks in advance!
UPDATE: my new migration file:
```
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('cities_light', '0006_compensate_for_0003_bytestring_bug'),
('venues', '0024_remove_venue_venue_city'),
]
operations = [
migrations.AddField(
model_name='venue',
name='venue_city',
field=models.ForeignKey(null=True, to='cities_light.City'),
),
]
```
|
Looks like you added `null=True` after creating the migration file, because `venue_city` is not a nullable field in your migration file.
Follow these steps:
```
1) Drop venue_city & venue_country from your local table
2) Delete all the migration files you created for this `CharField to ForeignKey` change
3) Execute `python manage.py makemigrations`
4) Execute `python manage.py migrate`
```
It should work
|
Had a similar problem; I resolved it by removing the previous migration files. No technical explanation.
|
django.db.utils.IntegrityError: column "venue_city" contains null values
|
[
"",
"sql",
"django",
"postgresql",
"django-models",
"django-migrations",
""
] |
I'm using this PostgreSQL table to store configuration variables:
```
CREATE TABLE SYS_PARAM(
SETTING_KEY TEXT NOT NULL,
VALUE_TYPE TEXT,
VALUE TEXT
)
;
```
[](https://i.stack.imgur.com/Dxq2M.png)
How can I update all configuration setting values using one SQL statement?
|
If you plan on performing these updates more than once or twice over time, it would be good to have a function handle this for you. You could use the table itself as a type for a variadic parameter within a function, like so:
```
-- The function
CREATE OR REPLACE FUNCTION update_sys_param(VARIADIC params sys_param[])
RETURNS VOID
AS $$
BEGIN
UPDATE sys_param
SET value_type = upd.value_type, value = upd.value
FROM
sys_param src
INNER JOIN
UNNEST(params) upd
ON (src.setting_key = upd.setting_key);
END; $$ LANGUAGE PLPGSQL;
-- To call it
SELECT update_sys_param(('SMTP_PORT','int','123'),('SMTP_OTHER','text','435343'));
```
However, if this is a one-time update you can try either of these two:
**UPDATE using JOIN**
```
UPDATE sys_param
SET
value_type = new.value_type,
value = new.value
FROM
sys_param src
INNER JOIN
new_params new --< this table/view/cte must already exist or you must create it.
ON (src.setting_key = new.setting_key);
```
**UPDATE using CASE**
```
UPDATE sys_param
SET value = CASE setting_key
WHEN 'SMTP_PORT' THEN '2100'
(..and so on..)
END;
-- You would need to repeat the case statement if you intend on updating the value_type, too.
```
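One pitfall with the `CASE` variant above: a `CASE` with no `ELSE` yields `NULL` for every key you did not list, wiping out the other settings. A sketch with SQLite showing the safe form, with an explicit `ELSE value`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sys_param (setting_key TEXT NOT NULL, value_type TEXT, value TEXT);
    INSERT INTO sys_param VALUES ('SMTP_PORT', 'int', '25'), ('SMTP_HOST', 'text', 'old');
""")

# Update one key; ELSE value keeps every other row's value untouched
# instead of overwriting it with NULL.
conn.execute("""
    UPDATE sys_param
    SET value = CASE setting_key
                    WHEN 'SMTP_PORT' THEN '2100'
                    ELSE value
                END
""")
values = dict(conn.execute("SELECT setting_key, value FROM sys_param"))
# values == {'SMTP_PORT': '2100', 'SMTP_HOST': 'old'}
```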
|
You can use `where true` at the end, and it will update all rows in your table.
For example:
```
UPDATE table_name set table_column = value where true;
```
It will update all rows in one SQL query.
|
Update all rows with one SQL query
|
[
"",
"sql",
"postgresql",
"sql-update",
"postgresql-9.1",
"postgresql-9.3",
""
] |
I'm trying to write a query which prevents inserting duplicated rows, as below:
```
INSERT INTO RSS_SETTING_ADMIN (ID_PRODUCT , ID_RSS , ID_CATEGORY , ID_TYPE_USER)
VALUES (384 , 3, 283 , 1)WHERE NOT EXISTS
(SELECT * FROM RSS_SETTING_ADMIN
WHERE ID_PRODUCT = 384 , ID_RSS = 3 , ID_CATEGORY = 283, ID_TYPE_USER = 1)
```
but I receive
> SQL command not properly ended
error
|
Your command should be:
```
INSERT INTO RSS_SETTING_ADMIN (ID_PRODUCT , ID_RSS , ID_CATEGORY , ID_TYPE_USER)
SELECT 384 , 3, 283 , 1 FROM DUAL
WHERE NOT EXISTS
(SELECT * FROM RSS_SETTING_ADMIN
WHERE ID_PRODUCT = 384
AND ID_RSS = 3
AND ID_CATEGORY = 283
AND ID_TYPE_USER = 1
);
```
Note also that this alone is not sufficient to prevent duplicate rows; you need a unique key for that.
|
If you want to prevent the insertion of duplicate values, then you should use a unique constraint/index on the table:
```
alter table RSS_SETTING_ADMIN
add constraint unq_RSS_SETTING_ADMIN_4 unique(id_product, id_rss, id_category, id_type_user);
```
It is a much better approach to let the database maintain relational integrity rather than the application.
If you do want to still use `not exists`, I would recommend:
```
insert into RSS_SETTING_ADMIN (ID_PRODUCT , ID_RSS , ID_CATEGORY , ID_TYPE_USER)
select x.id_product, x.id_rss, x.id_category, x.id_type_user
from (select 384 as id_product, 3 as id_rss, 283 as id_category, 1 as id_type_user
from dual
) x
where not exists (select 1
from RSS_SETTING_ADMIN rsa
where rsa.id_product = x.id_product and
rsa.id_rss = x.id_rss and
rsa.id_category = x.id_category and
rsa.id_type_user = x.id_type_user
);
```
With this structure, the values are only included in the query once, which makes it easier to avoid typos.
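The same `INSERT ... SELECT ... WHERE NOT EXISTS` pattern can be exercised quickly with SQLite (which, unlike Oracle, allows a `SELECT` without `FROM DUAL`); running the statement twice inserts only one row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE RSS_SETTING_ADMIN
                (ID_PRODUCT INT, ID_RSS INT, ID_CATEGORY INT, ID_TYPE_USER INT)""")

insert = """
    INSERT INTO RSS_SETTING_ADMIN (ID_PRODUCT, ID_RSS, ID_CATEGORY, ID_TYPE_USER)
    SELECT 384, 3, 283, 1
    WHERE NOT EXISTS (SELECT 1 FROM RSS_SETTING_ADMIN
                      WHERE ID_PRODUCT = 384 AND ID_RSS = 3
                        AND ID_CATEGORY = 283 AND ID_TYPE_USER = 1)
"""
conn.execute(insert)  # first run inserts the row
conn.execute(insert)  # second run finds the row and is a no-op
count = conn.execute("SELECT COUNT(*) FROM RSS_SETTING_ADMIN").fetchone()[0]
# count == 1
```

As the answer notes, this still does not replace a unique constraint under concurrent writers; two sessions can pass the `NOT EXISTS` check simultaneously.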
|
writing oracle insert query which prevents from inserting duplicated rows
|
[
"",
"sql",
"oracle",
""
] |
I am trying to write a SQL query to extract records from a single table that have the same `model_number` field, but different `primary_use` fields. I have been able to query for duplicates, but my query makes no distinction if a record is truly a duplicate or if `primary_use` is different. I'm not interested in where records have the same information in both fields, only where there is a mis-match.
|
With SQLite I believe your best bet is a self join:
```
SELECT DISTINCT t1.*, t2.*
FROM yourtable as t1
INNER JOIN yourtable as t2 ON
t1.model_number = t2.model_number AND
t1.primary_use <> t2.primary_use;
```
Joining the table to itself where model\_number is the same, but primary\_use is different and then returning the distinct results should give you what you are after.
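A minimal check of the self join with SQLite: only the model that appears with two different `primary_use` values is returned, while the true duplicate is filtered out (the table name `products` and the sample values are illustrative assumptions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE products (model_number TEXT, primary_use TEXT);
    INSERT INTO products VALUES
        ('M1','home'), ('M1','office'),   -- same model, different use -> wanted
        ('M2','home'), ('M2','home'),     -- true duplicate -> not wanted
        ('M3','lab');
""")

rows = conn.execute("""
    SELECT DISTINCT t1.model_number, t1.primary_use
    FROM products t1
    JOIN products t2
      ON t1.model_number = t2.model_number
     AND t1.primary_use <> t2.primary_use
    ORDER BY t1.model_number, t1.primary_use
""").fetchall()
# rows == [('M1', 'home'), ('M1', 'office')]
```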
|
Some examples would help, but it sounds like
```
select distinct model_number, primary_use from [table]
```
or perhaps
```
select model_number, primary_use from [table]
group by model_number, primary_use
```
|
Query for records that are the same in one field, but different in another?
|
[
"",
"sql",
"sqlite",
""
] |
I am using MySQL.
I have a table that has a column id.
Let us say I have an input set of ids. I want to know which all ids are missing in the table.
If the set is "ida", "idb", "idc" and the table only contains "idb", then the returned value should be "ida", "idc".
Is this possible with a single sql query? If not, what is the most efficient way to execute this.
Note that I am not allowed to use stored procedures.
|
MySQL will only return rows that exist. To return missing rows you must have two tables.
The first table can be temporary (session/connection specific) so that multiple instances can run simultaneously.
```
create temporary table tmpMustExist (id text);
insert into tmpMustExist select "ida";
insert into tmpMustExist select "idb";
-- etc
select a.id from tmpMustExist as a
left join table b on b.id=a.id
where b.id is null; -- returns rows from table a that are missing from table b.
```
> Is this possible with a single sql query?
Well, yes it is. Let me work my way to that, first with a `union all` to combine the `select` statements.
```
create temporary table tmpMustExist (id text);
insert into tmpMustExist select "ida" union all select "idb" union all select "etc...";
select a.id from tmpMustExist as a left join table as b on b.id=a.id where b.id is null;
```
Note that I use `union all` which is a bit faster than `union` because it skips over deduplication.
You can use `create table`...`select`. I do this frequently and really like it. (It is a great way to copy a table as well, but it will drop indexes.)
```
create temporary table tmpMustExist as select "ida" union all select "idb" union all select "etc...";
select a.id from tmpMustExist as a left join table as b on b.id=a.id where b.id is null;
```
And finally you can use what's called a "derived" table to bring the whole thing into a single, portable select statement.
```
select a.id from (select "ida" union all select "idb" union all select "etc...") as a left join table as b on b.id=a.id where b.id is null;
```
Note: the `as` keyword is optional, but clarifies what I'm doing with `a` and `b`. I'm simply creating short names to be used in the `join` and `select` field lists
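The derived-table form is easy to verify with SQLite (the hypothetical table name `t` stands in for your real table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (id TEXT);
    INSERT INTO t VALUES ('idb');
""")

# Left join the inline id list against the table; rows with no match
# come back with b.id NULL, which is exactly the "missing" set.
missing = [r[0] for r in conn.execute("""
    SELECT a.id
    FROM (SELECT 'ida' AS id UNION ALL SELECT 'idb' UNION ALL SELECT 'idc') AS a
    LEFT JOIN t AS b ON b.id = a.id
    WHERE b.id IS NULL
    ORDER BY a.id
""")]
# missing == ['ida', 'idc']
```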
|
```
-- you can pass each set string to the query
-- programmatically you can insert quoted strings
-- columns must use utf8 collation
select * from
(SELECT 'ida' as col
union
SELECT 'idb' as col
union
SELECT 'idc' as col ) as setresult where col not in (SELECT value FROM `tbl`)
```
|
Find values missing in a column from a set (mysql)
|
[
"",
"mysql",
"sql",
""
] |
I have following **[RestaurantOffer]** table.
Offers are valid only within a date range (i.e. FromDate to ToDate) and at a particular time of day (FromTime, ToTime).
[](https://i.stack.imgur.com/sqzQt.png)
So I want to write a query which would give me all Offers on **current date(Today)** and the time will be more than **current Time(Today's Current Time)**. Because I don't want to get the expired Offers.
Database :
```
FromDate(date),
ToDate(date),
FromTime(time),
ToTime(time)
```
**UPDATE :**
**Note:** These offers are not valid all 24 hours; they are valid only within the FromTime-ToTime range.
What I want :
**1st:** Give me all of today's records if today satisfies the FromDate-ToDate range.
**2nd :** After getting all the records for today, I want to get all the records which are at or after today's current time.
|
Try this:
```
--DROP TABLE #temp
CREATE TABLE #temp
(
FROMDate DATE
, ToDate DATE
, FromTime TIME
, ToTime TIME
);
INSERT INTO #temp
( FROMDate, ToDate, FromTime, ToTime )
VALUES ( '2016-05-01' -- FROMDate - date
, '2016-06-01' -- ToDate - date
, '11:30:00:000' -- FromTime - time
, '17:30:00:000' -- ToTime - time
),
( '2016-05-01' -- FROMDate - date
, '2016-06-01' -- ToDate - date
, '11:30:00:000' -- FromTime - time
, '17:30:00:000' -- ToTime - time
),
( '2016-05-01' -- FROMDate - date
, '2016-06-01' -- ToDate - date
, '11:30:00:000' -- FromTime - time
, '17:30:00:000' -- ToTime - time
),
( '2016-05-01' -- FROMDate - date
, '2016-06-01' -- ToDate - date
, '11:30:00:000' -- FromTime - time
, '19:30:00:000' -- ToTime - time
);
SELECT FROMDate
, ToDate
, FromTime
, ToTime
FROM #temp
WHERE ( CAST(GETDATE() AS DATE) >= FROMDate
AND CAST(GETDATE() AS DATE) <= ToDate
)
AND ( CAST(GETDATE() AS TIME) >= FromTime
AND CAST(GETDATE() AS TIME) <= ToTime
);
```
|
Try following query:
```
SELECT * FROM RestaurantOffer
WHERE (CAST(GETDATE() AS DATE) >= FromDate AND CAST(GETDATE() AS DATE) <= ToDate)
  AND (CAST(GETDATE() AS TIME) >= FromTime AND CAST(GETDATE() AS TIME) <= ToTime);
```
|
Get database records which satisfy two date ranges and within a specific timing in SQL server
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2005",
"sql-server-2012",
""
] |
In my database I am renaming a table. I tried to rename it by using
```
exec sp_rename Student,Students
```
but I got the message:
> Caution: Changing any part of an object name could break scripts and stored procedures.
How can I avoid getting this message?
|
Unfortunately, this message cannot be suppressed.\*
There was a [Microsoft Connect ticket](https://connect.microsoft.com/SQLServer/feedback/details/266048/sp-rename-suppress-caution-warning-message) submitted by our very own [Aaron Bertrand](https://stackoverflow.com/users/61305/aaron-bertrand), but it was closed as won't fix.
Keep in mind that this message isn't an error - it's not even a warning - it's just a cautionary message saying, "Be careful."
---
\*Okay, it *can* be suppressed, but it involves modifying the source code of `sp_rename`, which is **[not recommended](https://social.msdn.microsoft.com/Forums/sqlserver/en-US/bb121a87-2681-4a9f-b38a-6870657c95e4/sprename-suppress-warning-message?forum=transactsql)**.
|
The way to do it is this:
Extract the source code of sp\_rename from your Master database.
Make a copy of it somewhere else under a different name e.g. "quiet\_rename"
Remove / comment out this line from the code:
```
raiserror(15477,-1,-1)
```
now you have your own local copy of sp\_rename without the annoying error message included.
|
How to hide Caution: Changing any part of an object name could break scripts and stored procedures
|
[
"",
"sql",
"sql-server",
""
] |
The below query gives a set of rows with 3 columns.
```
Select
c.CaseID, i.ImageID, i.ImagePath
From
tblCase c WITH (NOLOCK)
inner join
tblName n WITH (NOLOCK) on c.CaseID = n.CaseID
inner join
tblImage i WITH (NOLOCK) on n.NameID = i.NameID
where
n.NameCode = '70'
```
The ImagePath column will have data like this with semi colon separated values.
```
ImageID=3215;FilePath=\2016\5\13\test.tif;ImageType=Original;PageNumber=1
```
The ImageType value needs to be changed to "duplicate" as below for all the rows returned from the query.
```
ImageID=3215;FilePath=\2016\5\13\test.tif;ImageType=duplicate;PageNumber=1
```
Any ideas? Is a cursor a good way to do this kind of update? The number of rows will be a couple of thousand.
Thank you!
|
The best idea would be to stop storing multiple values in one column and normalize the database so that you don't run into issues like this.
Barring that, you can try to use `REPLACE()`, but you need to be very careful that you don't inadvertently change parts of the string that happen to match with your string.
```
REPLACE(ImagePath, ';ImageType=Original;', ';ImageType=duplicate;')
```
might work, but if that attribute can appear at the end of the string (without a trailing semicolon) then it might fail. It could also fail if there is a space in between the "=" and the attribute/value in some cases. It could also fail if it might be the first attribute without a leading semicolon. There might be some other possible failure cases as well - which is why this isn't how you should be storing your data.
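A sketch of the `REPLACE` approach with SQLite, using the sample string from the question; the semicolon delimiters on both sides of the search string are what keeps the match anchored to the whole attribute:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblImage (ImagePath TEXT)")
conn.execute(
    "INSERT INTO tblImage VALUES (?)",
    ('ImageID=3215;FilePath=\\2016\\5\\13\\test.tif;ImageType=Original;PageNumber=1',),
)

# Replace the delimited attribute; strings that merely contain
# 'Original' elsewhere are left alone.
conn.execute("""
    UPDATE tblImage
    SET ImagePath = REPLACE(ImagePath, ';ImageType=Original;', ';ImageType=duplicate;')
""")
path = conn.execute("SELECT ImagePath FROM tblImage").fetchone()[0]
# path now contains ';ImageType=duplicate;'
```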
|
Here is a try:
```
declare @s varchar(max) = 'ImageID=3215;FilePath=\2016\5\13\test.tif;ImageType=original;PageNumber=1'
select substring(@s, 0, charindex('imagetype=', @s)) + 'ImageType=duplicate' +
substring(@s, charindex(';', @s, charindex('imagetype=',@s)), len(@s))
```
<http://rextester.com/edit/AVQO58898>
Description: take everything up to `imagetype=`, append `ImageType=duplicate`, then append everything from the first semicolon that occurs after `imagetype=` to the end.
|
How to update a column for a set of rows?
|
[
"",
"sql",
"sql-server",
"sql-server-2008-r2",
""
] |
Using MSSQL
I have a table of users, and a table of products to which they are subscribed. Those subscriptions are either Free (F) or Paid (P). I have joined the tables, converted the F/P value to numeric using a case statement, and then summed up those values by user ID, the idea being that anyone with only free accounts will have a sum of 0, those with at least one paid account a value 1 or greater. I've gotten this far with the following:
```
SELECT t1.user_id, SUM(
CASE
WHEN t2.free_paid = 'P'
THEN 1
ELSE 0
END ) as highest
FROM users t1 INNER JOIN accounts t2
ON t1.user_id = t2.user_id
WHERE t2.account = 'A'
GROUP BY t1.user_id
ORDER BY t1.user_id
```
This yields a result like:
```
755 2
1259 2
2031 1
3888 0
```
Meaning that all but 3888 have at least one paid account.
But now I would like to simply add those up somehow to get our two values, one a count of users with at least one paid account (3 in example), and a count of those with only free accounts (1 in example).
I tried declaring two variables, e.g. @free and @paid, and using a case statement to add to those values by wrapping the above in a subquery, but I am unable to get that to run.
Any ideas?
|
Re-using the query from the question you can create a CTE (or a subquery) and aggregate the results:
```
;WITH CTE_UserAccounts AS (
SELECT t1.user_id, SUM(
CASE
WHEN t2.free_paid = 'P'
THEN 1
ELSE 0
END ) as highest
FROM users t1 INNER JOIN accounts t2
ON t1.user_id = t2.user_id
WHERE t2.account = 'A'
GROUP BY t1.user_id
)
SELECT
COUNT(CASE WHEN highest > 0 THEN 1 END) AS [Paid],
COUNT(CASE WHEN highest = 0 THEN 1 END) AS [Free]
FROM
CTE_UserAccounts;
```
|
```
SELECT
COUNT(DISTINCT CASE WHEN t2.free_paid = 'P' THEN t1.user_id END) as atleast_one_paid,
COUNT(DISTINCT CASE WHEN t2.free_paid <> 'P' THEN t1.user_id END) as onlyfree
FROM users t1
INNER JOIN accounts t2 ON t1.user_id = t2.user_id
WHERE t2.account = 'A'
```
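Note a subtle difference between the two counts above: a user holding both paid and free accounts (like 2031 in the question) matches both `CASE` branches and would be counted in both buckets. Subtracting the paid count from the total distinct users gives the "only free" figure instead; a sketch with SQLite on data shaped like the question's:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (user_id INT);
    CREATE TABLE accounts (user_id INT, account TEXT, free_paid TEXT);
    INSERT INTO users VALUES (755), (1259), (2031), (3888);
    INSERT INTO accounts VALUES
        (755,  'A', 'P'), (755,  'A', 'P'),
        (1259, 'A', 'P'), (1259, 'A', 'P'),
        (2031, 'A', 'P'), (2031, 'A', 'F'),   -- mixed: paid AND free
        (3888, 'A', 'F');
""")

# paid = users with at least one paid account;
# free = remaining users, i.e. total distinct minus paid.
paid, free = conn.execute("""
    SELECT
        COUNT(DISTINCT CASE WHEN t2.free_paid = 'P' THEN t1.user_id END),
        COUNT(DISTINCT t1.user_id)
          - COUNT(DISTINCT CASE WHEN t2.free_paid = 'P' THEN t1.user_id END)
    FROM users t1 JOIN accounts t2 ON t1.user_id = t2.user_id
    WHERE t2.account = 'A'
""").fetchone()
# (paid, free) == (3, 1)
```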
|
Aggregating SQL results, two tables, then summarizing results
|
[
"",
"sql",
"sql-server",
""
] |
I am running this SQL query:
```
SELECT a.retail, b.cost
from call_costs a, call_costs_custom b
WHERE a.sequence = b.parent
AND a.sequence = '15684'
AND b.customer_seq = '124'
```
which returns both `a.retail` and `b.cost` if the row exists in `call_costs_custom`, but if the row does not exist, I want to show just `a.retail`, keeping the `WHERE` clauses on `a` (`call_costs`).
|
You want an outer join, i.e. a join that keeps records from the first table even when there is no match in the second table. Use `LEFT OUTER JOIN` or short `LEFT JOIN` hence:
```
select cc.retail, ccc.cost
from call_costs cc
left join call_costs_custom ccc on ccc.parent = cc.sequence and ccc.customer_seq = '124'
where cc.sequence = '15684';
```
|
From W3Schools:
> The LEFT JOIN keyword returns all rows from the left table (table1),
> with the matching rows in the right table (table2). The result is NULL
> in the right side when there is no match.
```
SELECT
a.retail,
b.cost
FROM
call_costs a
LEFT JOIN
call_costs_custom b
ON
a.sequence = b.parent
AND
b.customer_seq = '124'
WHERE
a.sequence = '15684'
```
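The key detail in both answers is that the `customer_seq` filter sits in the `ON` clause rather than in `WHERE`; moving it to `WHERE` would turn the left join back into an inner join and drop the unmatched row. A quick SQLite check (sample values are assumptions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE call_costs (sequence TEXT, retail REAL);
    CREATE TABLE call_costs_custom (parent TEXT, customer_seq TEXT, cost REAL);
    INSERT INTO call_costs VALUES ('15684', 9.99);
    -- no matching custom row for customer 124
""")

# Filter on the outer side in ON: the base row survives with cost NULL.
row = conn.execute("""
    SELECT a.retail, b.cost
    FROM call_costs a
    LEFT JOIN call_costs_custom b
      ON a.sequence = b.parent AND b.customer_seq = '124'
    WHERE a.sequence = '15684'
""").fetchone()
# row == (9.99, None)

# Same filter in WHERE: NULL <> '124', so the row disappears entirely.
row_where = conn.execute("""
    SELECT a.retail, b.cost
    FROM call_costs a
    LEFT JOIN call_costs_custom b ON a.sequence = b.parent
    WHERE a.sequence = '15684' AND b.customer_seq = '124'
""").fetchone()
# row_where is None
```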
|
sql query selecting from two tables - return results from one table if there are none in the other table
|
[
"",
"mysql",
"sql",
""
] |
Here are my tables:
```
CREATE TABLE IF NOT EXISTS `category` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`title` varchar(50) NOT NULL,
`post_limit` int(11) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=5 ;
INSERT INTO `category` (`id`, `title`, `post_limit`) VALUES
(1, 'News', 2),
(2, 'Sport', 2),
(3, 'Science', 1),
(4, 'Games', 1);
CREATE TABLE IF NOT EXISTS `article` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`title` varchar(50) NOT NULL,
`category_id` int(11) NOT NULL,
PRIMARY KEY (`id`),
KEY `category_id` (`category_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=11 ;
INSERT INTO `article` (`id`, `title`, `category_id`) VALUES
(1, 'news article 1', 1),
(2, 'news article 2', 1),
(3, 'news article 3', 1),
(4, 'sports article 1', 2),
(5, 'sports article 2', 2),
(6, 'sports article 3', 2),
(7, 'Science article 1', 3),
(8, 'Science article 2', 3),
(9, 'games article 1', 4),
(10, 'games article 2', 4);
```
What I need to do is select 10 articles (`ORDER BY article.id DESC`) but bearing in mind that every category has `post_limit`, so for example we cannot take for `category_id=1` 5 posts if `post_limit=2`.
Thank you in advance.
**UPDATE 1:**
result should be:
```
10 games article 2 4
8 science article 2 3
6 sports article 3 2
5 sports article 2 2
3 news article 3 1
```
|
I think you need to enumerate the articles to apply the `post_limit`. You can do this with a subquery:
```
select a.*
from (select a.*,
(@rn := if(@c = a.category_id, @rn + 1,
if(@c := a.category_id, 1, 1)
)
) as rn
from articles a cross join
(select @rn := 0, @c := -1) params
order by category_id
) a join
category c
on c.id = a.category_id
where a.rn <= c.post_limit
limit 10;
```
In practice, you probably want an `order by` before the `limit` to have more control over which articles are chosen. Similarly, you probably want another key in the `order by` in the subquery to control the articles, such as `order by category_id, id desc` to get the most recent articles.
|
To limit the count of each category to the category's `post_limit`:
```
select c.id, c.title, least(post_limit, count(a.id))
from category c
left join article a on category_id = c.id
group by 1, 2 -- or "group by c.id, c.title" if you prefer the verbose style
```
See [SQLFiddle](http://sqlfiddle.com/#!9/c4f739/2).
|
How do I select an exact number of articles for each category?
|
[
"",
"mysql",
"sql",
""
] |
I want to round the value of one select.
For example:
```
select width from Home
```
The result of `width` is "3.999999999999999"
I want it to be "3.99" or "3.9".
|
If you need only two decimal values you can multiply by 100, floor the result, and divide by 100 (this is necessary because `floor` only floors a number to an integer):
```
select floor(width * 100) / 100 from Home
```
Here are the steps
```
3.99999999 * 100 = 399.999999 --- Multiply by 100
floor(399.999999) = 399 --- floor
399 / 100 = 3.99 --- Divide by 100
```
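The same multiply-floor-divide truncation is easy to express outside SQL too; a small Python sketch of the arithmetic (the function name and sample values are illustrative):

```python
import math

def truncate(x, places):
    """Truncate (not round) x to the given number of decimal places,
    mirroring the floor(width * 100) / 100 trick from the answer."""
    factor = 10 ** places
    return math.floor(x * factor) / factor

# truncate(3.9999, 2) -> 3.99
# truncate(3.9999, 1) -> 3.9
```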
---
It is also possible using a different form of [round function with a third parameter](http://msdn.microsoft.com/en-us/library/ms175003(SQL.90).aspx).
When the third parameter is different from 0 the result is truncated instead of rounded
> **Syntax**
>
> `ROUND ( numeric_expression , length [ ,function ] )`
>
> **Arguments**
>
> `numeric_expression` Is an expression of the exact numeric or
> approximate numeric data type category, except for the bit data type.
>
> `length` Is the precision to which numeric\_expression is to be rounded.
> length must be an expression of type tinyint, smallint, or int. When
> length is a positive number, numeric\_expression is rounded to the
> number of decimal positions specified by length. When length is a
> negative number, numeric\_expression is rounded on the left side of the
> decimal point, as specified by length.
>
> `function` Is the type of
> operation to perform. function must be tinyint, smallint, or int. When
> function is omitted or has a value of 0 (default), numeric\_expression
> is rounded. **When a value other than 0 is specified, numeric\_expression**
> **is truncated.**
Here is the select using this version of `round`:
```
select round(width, 2, 1) from Home
```
|
`ROUND(width, 1 or 2, 1)` should work best for you. Example:
```
SELECT CONVERT(NUMERIC(13,2), ROUND(width, 2, 1))
FROM Home
```
will return 3.99, and
```
SELECT CONVERT(NUMERIC(13,1), ROUND(width, 1, 1))
FROM Home
```
will return 3.9
|
SQL Server round()
|
[
"",
"sql",
"sql-server",
"rounding",
""
] |
```
SELECT
table_comments1.*, table_comments2.*
FROM
[table_comments1], [table_comments2]
INNER JOIN
[users] ON [table_comments1].fk_user_id = [users].user_id
AND [table_comments2].fk_user_id = [users].user_id
ORDER BY
comment_date DESC
```
What am I doing wrong? I'm getting this error:
> The multi-part identifier "table\_comments1.fk\_user\_id" could not be bound.
I want to end up sorting all comments from both tables by datetime, but got that problem.
|
try this
```
select t1.*, t2.*
from [table_comments1] t1 inner join users u
on t1.fk_user_id = u.user_id
inner join [table_comments2] t2
on t2.fk_user_id = u.user_id
```
|
FYI There are several types of `join`
`t1 inner join t2 on t1.id = t2.fkId` (this is default if `inner` or anything else is ommited) returns records from t1 and t2 where exist matching records in t1 and t2. Old style (`ANSI 89`) looks like this: `...from t1,t2 where t1.id = t2.fkId`
`t1 left join t2 on t1.id = t2.fkId` All records from t1 (including non-matching) and records from t2 where match exists. Non-matching records from t2 are shown as `null`. Old style: `... from t1,t2 where t1.id *= t2.fkId`.
`t1 right join t2 on t1.id = t2.fkId` similar logic as `left join` but all from t2 and matching records from t1 and `null` for non-matching t1. Old style: `...from t1,t2 where t1.id =* t2.fkId`.
> `t1 full join t2 on t1.id = t2.fkId` left + right. Everything from t1
> and t2, `null` for non-matching. Old style: `from t1,t2 where t1.id
> *=* t2.fkId`.
> I was wrong. At least at compatibility level 80, `*=*` is not supported by MS SQL 2008. I don't have an older version to check.
`t1 cross join t2` (no `on` condition) returns `Cartesian Product` of the two tables. For each record from t1 all records from t2. Old style is `... from t1,t2`.
Note that "New Style" was introduced as recently as in 1992 and MS SQL Server 2012 stop supporting "Old Style".
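A small SQLite sketch of the row counts each join type produces (SQLite only gained `RIGHT`/`FULL JOIN` in 3.39, so inner, left, and cross are shown here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t1 (id INT);
    CREATE TABLE t2 (fkId INT);
    INSERT INTO t1 VALUES (1), (2);
    INSERT INTO t2 VALUES (1), (1);
""")

inner = conn.execute("SELECT COUNT(*) FROM t1 JOIN t2 ON t1.id = t2.fkId").fetchone()[0]
left  = conn.execute("SELECT COUNT(*) FROM t1 LEFT JOIN t2 ON t1.id = t2.fkId").fetchone()[0]
cross = conn.execute("SELECT COUNT(*) FROM t1 CROSS JOIN t2").fetchone()[0]
# inner = 2 (id 1 matches twice), left = 3 (adds unmatched id 2 with NULLs),
# cross = 4 (2 x 2 Cartesian product)
```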
|
Error when selecting from two tables and performing an inner join on another
|
[
"",
"sql",
"sql-server",
""
] |
So I am trying to run a query, but I'm having some problems with it because I'm using an `nvarchar` column to compute a percentage column, which gives me the percentage of the different data that I have in the database. That column is called "Filetype" and holds all the extensions, e.g. .exe, .zip, etc.
Then I thought I could get the `MAX` and `MIN` percentage in the same query; the problem is that it is not so easy with that data type. I've made a query in Microsoft Visual Studio:
```
SELECT
Filetype AS [Extensão],
COUNT(*) AS [Nº de ficheiros],
CAST(((COUNT(Filetype) * 100.0) / (SELECT COUNT(*) FROM infofile)) AS DECIMAL(10,2)) AS [Percentagem (%)],
SUM(Filesize) AS [Total(KB)],
NULL AS [Convertido para MB],
MIN(COUNT(*)) OVER () * 100.0 / (SUM(COUNT(*)) OVER ()) AS [Min. Percentagem (%)],
MAX(COUNT(*)) OVER () * 100.0 / SUM(COUNT(*)) OVER () AS [Max. Percentagem (%)]
FROM infofile
GROUP BY Filetype
UNION ALL
SELECT '---------------',
COUNT('Nº de extensões'),
((COUNT(Filetype) * 100) / (SELECT COUNT(Filetype) FROM infofile)),
SUM(Filesize),
SUM(Filesize) / 1024,
NULL,
NULL
FROM infofile
```
But if I use this query, it fills all the rows, while what I want is to fill only the summary row produced by the lines after `UNION ALL`. Here is the current output: [](https://i.stack.imgur.com/Vk87B.png)
And I want to display that `MAX` and `MIN` as I will show you with arrows.
[](https://i.stack.imgur.com/tia3E.png)
That row is where I display all the final results. And I want change it there by adding `MAX` and `MIN` values
Your query result
[](https://i.stack.imgur.com/zZJWI.png)
|
You can do something like this.
```
with cte as
(
SELECT
Filetype AS [Extensão],
COUNT(*) AS [Nº de ficheiros],
CAST(((COUNT(Filetype) * 100.0) / (SELECT COUNT(*) FROM infofile)) AS DECIMAL(10,2)) AS [Percentagem (%)],
SUM(Filesize) AS [Total(KB)],
NULL AS [Convertido para MB],
MIN(COUNT(*)) OVER () * 100.0 / (SUM(COUNT(*)) OVER ()) AS [Min. Percentagem (%)],
MAX(COUNT(*)) OVER () * 100.0 / SUM(COUNT(*)) OVER () AS [Max. Percentagem (%)]
FROM infofile
GROUP BY Filetype
)
select [Extensão],[Nº de ficheiros],[Percentagem (%)],[Total(KB)],[Convertido para MB],NULL AS [Min. Percentagem (%)],NULL AS [Max. Percentagem (%)] from cte
UNION ALL
SELECT '---------------',
COUNT('Nº de extensões'),
((COUNT(Filetype) * 100) / (SELECT COUNT(Filetype) FROM infofile)),
SUM(Filesize),
SUM(Filesize) / 1024,
(Select MAX([Min. Percentagem (%)]) from cte) as [Min. Percentagem (%)],
(Select MAX([Max. Percentagem (%)]) from cte) as [Max. Percentagem (%)]
FROM infofile
```
I have done nothing but put your first query in a CTE and used it to return your min and max % for the query after `UNION ALL` as well. I hope this is your expected output.
|
There is probably no need for the min/max percentage columns in the first part, just in the second. As far as I understand from the comments, look at
```
WITH totals_by_ext AS (
SELECT
Filetype AS [Extensão],
COUNT(*) AS [Nº de ficheiros],
CAST(((COUNT(Filetype) * 100.0) / (SELECT COUNT(*) FROM infofile)) AS DECIMAL(10,2)) AS [Percentagem (%)],
SUM(Filesize) AS [Total(KB)],
CAST(NULL AS DECIMAL(10,2)) AS [Convertido para MB],
CAST(NULL AS DECIMAL(10,2)) AS [Min. Percentagem (%)],
CAST(NULL AS DECIMAL(10,2)) AS [Max. Percentagem (%)]
FROM infofile
GROUP BY Filetype
)
SELECT *
FROM totals_by_ext
UNION ALL
SELECT
'---------------',
COUNT([Extensão]),
SUM([Nº de ficheiros]) * 100 / (SELECT COUNT(Filetype) FROM infofile),
SUM([Total(KB)]),
SUM([Total(KB)]) / 1024,
MIN([Percentagem (%)]) AS [Min. Percentagem (%)],
MAX([Percentagem (%)]) AS [Max. Percentagem (%)]
FROM totals_by_ext;
```
|
Query to run max and min
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
What is the most efficient way to get prime numbers from 2 to 1000 via SQL in MongoDB/MySQL/CockroachDB?
My query in Oracle:
```
WITH NUM AS (
SELECT LEVEL N
FROM DUAL CONNECT BY LEVEL <= 1000
)
SELECT LISTAGG(B.N,'-') WITHIN GROUP(ORDER BY B.N) AS PRIMES
FROM (
SELECT N,
CASE WHEN EXISTS (
SELECT NULL
FROM NUM N_INNER
WHERE N_INNER.N > 1
AND N_INNER.N < NUM.N
AND MOD(NUM.N, N_INNER.N)=0
) THEN
'NO PRIME'
ELSE
'PRIME'
END IS_PRIME
FROM NUM
) B
WHERE B.IS_PRIME='PRIME'
AND B.N!=1;
```
|
In PostgreSQL, probably the fastest query that prints prime numbers up to 1000 is:
```
SELECT regexp_split_to_table('2,3,5,7,11,13,17,19,23,29,31,37,41,43,47,53,59,61,67,71,73,79,83,89,97,101,103,107,109,113,127,131,137,139,149,151,157,163,167,173,179,181,191,193,197,199,211,223,227,229,233,239,241,251,257,263,269,271,277,281,283,293,307,311,313,317,331,337,347,349,353,359,367,373,379,383,389,397,401,409,419,421,431,433,439,443,449,457,461,463,467,479,487,491,499,503,509,521,523,541,547,557,563,569,571,577,587,593,599,601,607,613,617,619,631,641,643,647,653,659,661,673,677,683,691,701,709,719,727,733,739,743,751,757,761,769,773,787,797,809,811,821,823,827,829,839,853,857,859,863,877,881,883,887,907,911,919,929,937,941,947,953,967,971,977,983,991,997',E',')::int
AS x
;
```
It took only 16 ms on my computer.
* Note: a list of prime numbers was copied from <https://en.wikipedia.org/wiki/Prime_number>
and pasted into this long string
---
If you prefer SQL, then this works
```
WITH x AS (
SELECT * FROM generate_series( 2, 1000 ) x
)
SELECT x.x
FROM x
WHERE NOT EXISTS (
SELECT 1 FROM x y
WHERE x.x > y.x AND x.x % y.x = 0
)
;
```
It's two times slower - 31 ms.
---
And an equivalent version for Oracle:
```
WITH x AS(
SELECT level+1 x
FROM dual
CONNECT BY LEVEL <= 999
)
SELECT x.x
FROM x
WHERE NOT EXISTS (
SELECT 1 FROM x y
WHERE x.x > y.x AND remainder( x.x, y.x) = 0
)
;
```
|
The most obvious improvement is that instead of checking divisors from 2 up to n you can check only from 2 up to the square root of n.
A second major optimization would be to use a temporary table to store the results and check them first. This way you can iterate incrementally from 1 to n, and only check the known primes from 1 to square root of n (recursively doing that until you have a list). If you go about things this way you would probably want to set up the prime detection in a function and then do the same with your number series generator.
That second one though means extending SQL and so I don't know if that fits your requirements.
For PostgreSQL I would use `generate_series` to generate the list of numbers. I would then create functions that either store the list of primes in a temporary table or pass them in and out as an ordered array, and couple them together like that.
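The square-root bound described above can be sketched directly in SQL. Below is a hedged, runnable version using Python's bundled SQLite (a recursive CTE stands in for `generate_series`, which SQLite lacks by default); a trial divisor `b` only needs checking while `b*b <= n`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
sql = """
WITH RECURSIVE n(n) AS (
    SELECT 2 UNION ALL SELECT n + 1 FROM n WHERE n < 1000
)
SELECT a.n FROM n a
WHERE NOT EXISTS (
    -- only trial divisors up to sqrt(a.n) need checking
    SELECT 1 FROM n b WHERE b.n * b.n <= a.n AND a.n % b.n = 0
)
ORDER BY a.n
"""
primes = [row[0] for row in conn.execute(sql)]
print(len(primes), primes[:5], primes[-1])  # 168 [2, 3, 5, 7, 11] 997
```

There are 168 primes below 1000, which gives a quick correctness check for any of the variants above.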
|
Best Logic to Print Prime Numbers with SQL query
|
[
"",
"sql",
"select",
"subquery",
""
] |
I want to convert seconds to `minute : seconds` format in sql select statement.
At the moment I am using:
```
SELECT SEC_TO_TIME(duration) from messages;
```
It works perfectly but it gives me this format of time: `hh:mm:ss`
but I need `mm:ss`
Is it possible to convert seconds into `mm:ss` format using sql query?
|
If the value is less than an hour, then just do:
```
SELECT RIGHT(SEC_TO_TIME(duration), 5) from messages;
```
If you might go over an hour, then do the arithmetic:
```
SELECT CONCAT_WS(':', FLOOR(duration / 60),
                      LPAD(duration % 60, 2, 0))
```
|
I recently had a similar project where I needed to convert stored seconds to `m:ss` format. No matter the amount, there needed to be at least one digit representing minutes and two digits representing seconds. The hours placeholder was forbidden, so the minutes value could acceptably go beyond `59` and also contain more than 2 digits. The minute value must not be zero-padded.
This is what I used: ([SQLFiddle Demo](http://sqlfiddle.com/#!9/f0e9ad/1))
```
CONCAT(FLOOR(seconds/60), ':', LPAD(MOD(seconds,60), 2, 0)) AS `m:ss`
```
aka
```
CONCAT(FLOOR(seconds/60), ':', LPAD(seconds%60, 2, 0)) AS `m:ss`
seconds | m:ss
-----------------
0 | 0:00
1 | 0:01
10 | 0:10
60 | 1:00
61 | 1:01
71 | 1:11
3599 | 59:59
3600 | 60:00
5999 | 99:59
6000 | 100:00
```
`TIME_FORMAT(SEC_TO_TIME(seconds),'%i:%s')` was unsuitable because the project specifications did not want the minute portion to be zero-padded. Here is [a good post relating to this technique](https://stackoverflow.com/a/8193923/2943403).
There is no single-digit minute option in [TIME\_FORMAT()](https://www.techonthenet.com/mysql/functions/time_format.php) or [DATE\_FORMAT()](https://www.techonthenet.com/mysql/functions/date_format.php).
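The minute/second arithmetic behind these expressions can be sanity-checked outside MySQL. A small sketch using SQLite's `printf` (SQLite has no `LPAD`, so `%02d` does the zero-padding of the seconds part):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

def mmss(seconds):
    # FLOOR(seconds / 60) ':' zero-padded (seconds % 60), expressed via printf
    return conn.execute(
        "SELECT printf('%d:%02d', ? / 60, ? % 60)", (seconds, seconds)
    ).fetchone()[0]

print([mmss(s) for s in (0, 61, 3599, 6000)])  # ['0:00', '1:01', '59:59', '100:00']
```

Note how the minutes value goes past 59 (and past two digits) without an hours placeholder, matching the table above.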
|
MySQL: How to convert seconds to mm:ss format?
|
[
"",
"mysql",
"sql",
"format",
""
] |
I've scoured the forums, but couldn't quite find a proper solution.
I have two tables with the following information:
-TableA-
```
Id | Created
11111 | 2016-01-01
22222 | 2016-02-02
33333 | 2016-03-03
```
-TableB-
```
Id | Created | Comment
11111 | 2016-01-01 | Blah Blah Blah
11111 | 2016-01-02 | Blah Blah Blah
11111 | 2016-01-15 | Blah Blah Blah
11111 | 2016-01-17 | Blah Blah Blah
22222 | 2016-02-02 | Blah Blah Blah
22222 | 2016-02-05 | Blah Blah Blah
22222 | 2016-02-09 | Blah Blah Blah
33333 | 2016-03-03 | Blah Blah Blah
33333 | 2016-03-14 | Blah Blah Blah
```
TableA is the master table (it has a whole bunch of other fields, but the important thing is the ID and the Created date field), while TableB is a comment table that gets tied back to TableA.
What I'm trying to do is to calculate the time difference between two rows in TableB and then isolate the very first row where the record is created. I figured the best way to do this would be to use TableA to provide the definitive Created date and somehow use that against TableB after I obtain all of the calculated time differences.
I've written out a reasonable query for TableB to give me the calculated date differences:
```
SELECT C1.Id,
C1.Created,
MIN(C2.Created) AS Created2,
DATEDIFF(C1.Created, MIN(C2.Created)) AS DaysDiff
FROM TableB C1
LEFT JOIN TableB C2
ON C1.Id = C2.Id
AND C2.Created > C1.Created
GROUP BY C1.Id, C1.Created
```
-TableB Queried-
```
Id | Created | Created2 | DaysDiff
11111 | 2016-01-01 | 2016-01-02 | 1
11111 | 2016-01-02 | 2016-01-15 | 13
11111 | 2016-01-15 | 2016-01-17 | 2
11111 | 2016-01-17 | |
22222 | 2016-02-02 | 2016-02-05 | 3
22222 | 2016-02-05 | 2016-02-09 | 4
22222 | 2016-02-09 | |
33333 | 2016-03-03 | 2016-03-14 | 11
33333 | 2016-03-14 | |
```
But I need to take this one step further and only get the earliest Created record, so it looks like this:
```
Id | Created | Created2 | DaysDiff
11111 | 2016-01-01 | 2016-01-02 | 1
22222 | 2016-02-02 | 2016-02-05 | 3
33333 | 2016-03-03 | 2016-03-14 | 11
```
I'm pretty sure I need to do one more JOIN here, but any JOIN I've done usually ends up where I get no records or I get just the Id and Created columns and nothing else.
Thanks for the help!
|
To get your expected result, you should use this:
```
SELECT C1.Id,
C1.Created,
MIN(C2.Created) AS Created2,
DATEDIFF(C1.Created, MIN(C2.Created)) AS DaysDiff
FROM (select id, min(created) created from TableB group by id) C1
JOIN TableB C2
ON C1.Id = C2.Id
AND C2.Created > C1.Created
GROUP BY C1.Id, C1.Created
;
```
At first I thought that the derived C1 table should be `tableA`, not the subquery selecting min(created) from `tableB`. If that's the case, then change the line `FROM (....) C1` to `FROM tableA C1`.
|
Try something like this:
```
select A.id, A.created, B.created, DATEDIFF(B.created, A.created) AS DaysDiff
from TableA A
join TableB B on B.id = A.id and
B.created = (select min(created) from TableB t where
                          t.id = A.id and t.created > A.created)
```
Here's a SQLFiddle to try it: <http://www.sqlfiddle.com/#!9/859113/3>
|
SQL - Getting date difference between two rows between three tables
|
[
"",
"sql",
""
] |
Is it possible to check whether a time falls between, let's say, 19:00 and 08:00 of the next day using `BETWEEN`? I'm only using `time` (not `datetime`).
|
Sure:
WHERE NOT time BETWEEN '8:00' AND '19:00'
|
```
select *
from MyTable
where CAST(mytime as time) >= '19:00:00'
   or CAST(mytime as time) < '08:00:00'
```
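Because the 19:00→08:00 range wraps past midnight, the two comparisons must be combined with OR, not AND (no time value is simultaneously ≥ 19:00 and < 08:00). A quick SQLite sketch with a hypothetical `MyTable` of time strings:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (mytime TEXT)")
conn.executemany("INSERT INTO MyTable VALUES (?)",
                 [("07:59",), ("08:00",), ("12:00",), ("18:59",), ("19:00",), ("23:30",)])

# overnight window 19:00 -> 08:00: the wrap-around forces OR
night = [r[0] for r in conn.execute(
    "SELECT mytime FROM MyTable WHERE mytime >= '19:00' OR mytime < '08:00' ORDER BY mytime")]
print(night)  # ['07:59', '19:00', '23:30']
```

ISO-style `HH:MM` strings compare correctly as text, which is why plain string comparison works here.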
|
BETWEEN IN SQL to check a time between two times
|
[
"",
"sql",
"sql-server",
"between",
""
] |
Provided the following structure and data:
```
CREATE TABLE "CHANGES" (
"ID" NUMBER(38),
"LAST_UPD_DATE" DATE DEFAULT SYSDATE
);
CREATE TABLE "EXPORT_LOG" (
"ID" NUMBER(38),
"LAST_EXPORT" DATE DEFAULT SYSDATE
);
```
Table `CHANGES` contains:
```
----------------------------
| ID | LAST_UPD_DATE |
----------------------------
| 123 | 12-MAY-16 12.23.23 |
| 124 | 12-MAY-16 12.24.23 |
| 125 | 12-MAY-16 12.11.23 |
----------------------------
```
and `EXPORT_LOG`
```
----------------------------
| ID | LAST_EXPORT |
----------------------------
| 124 | 12-MAY-16 12.23.12 |
| 125 | 12-MAY-16 12.12.24 |
----------------------------
```
I need to get the records in `CHANGES` that either don't exist in `EXPORT_LOG` or, if exists, get records with `LAST_UPD_DATE` later than `LAST_EXPORT`.
So in the above example, I should be getting 123 and 124.
I'm trying different JOINs but I cannot seem to get the result I want:
`INNER JOIN` is used for intersections, `LEFT JOIN` gets ALL of first table, but only those of second table that match a condition you set - none of these is what I want. So is the solution some sort of UNION?
|
Try this:
```
SELECT t1.*
FROM CHANGES AS t1
LEFT JOIN EXPORT_LOG AS t2 ON t1.ID = t2.ID
WHERE (t2.ID IS NULL) OR (t1.LAST_UPD_DATE > t2.LAST_EXPORT)
```
This will return all records of `CHANGES` table that don't have a match in `EXPORT_LOG` table **plus** the records of `CHANGES` table that have a `LAST_UPD_DATE` that is later than `LAST_EXPORT`.
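A runnable sketch of this LEFT JOIN pattern against the sample data from the question (SQLite here, with dates stored as ISO strings so string comparison matches chronological order):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE changes (id INTEGER, last_upd_date TEXT);
CREATE TABLE export_log (id INTEGER, last_export TEXT);
INSERT INTO changes VALUES
    (123, '2016-05-12 12:23:23'),
    (124, '2016-05-12 12:24:23'),
    (125, '2016-05-12 12:11:23');
INSERT INTO export_log VALUES
    (124, '2016-05-12 12:23:12'),
    (125, '2016-05-12 12:12:24');
""")
ids = [r[0] for r in conn.execute("""
    SELECT c.id FROM changes c
    LEFT JOIN export_log e ON c.id = e.id
    WHERE e.id IS NULL OR c.last_upd_date > e.last_export
    ORDER BY c.id
""")]
print(ids)  # [123, 124]
```

123 has no export row (the `IS NULL` branch), 124 was updated after its export, and 125 is excluded — matching the expected result.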
|
One method is to translate the conditions directly using `exists`:
```
select c.*
from changes c
where not exists (select 1 from export_log el where c.id = el.id) or
not exists (select 1 from export_log el where c.id = el.id and el.last_export > c.last_upd_date);
```
This can be simplified to:
```
select c.*
from changes c
where not exists (select 1 from export_log el where c.id = el.id and el.last_export > c.last_upd_date);
```
|
SQL: Join/union of two tables, on multiple conditions
|
[
"",
"sql",
"oracle",
"join",
""
] |
For some reason, these two SQL statements are not equivalent. Can anybody help me understand why?
```
SELECT N, IF(P IS NULL, 'Root', IF(N NOT IN (SELECT P FROM BST), 'Leaf', 'Inner'))
FROM BST
ORDER BY N;
SELECT N, IF(P IS NULL, 'Root', IF(N IN (SELECT P FROM BST), 'Inner', 'Leaf'))
FROM BST
ORDER BY N;
```
P.S. - This is the Binary Search Tree problem on HackerRank.
<https://www.hackerrank.com/challenges/binary-search-tree-1>
|
jarlh's comment above is correct, i.e. nothing compares as TRUE with NULL except an IFNULL or 'IS NULL' test. So if the query:
```
(SELECT P FROM BST)
```
returns a single NULL value, the two SQL statements will return different results. This can easily be demonstrated with the following two SQL statements:
```
select if (1 in (select null), 'a', 'b') => 'b'
select if (1 not in (select null), 'b', 'a') => 'a'
```
|
Nope. `NOT IN` has strange semantics when the underlying table has `NULL` values. In this case, the clause filters out all rows.
For this reason, I think `NOT EXISTS` is a better alternative:
```
NOT EXISTS (SELECT 1 FROM BST b2 WHERE b2.P = BST.N)
```
You can also fix this by adding `WHERE P IS NOT NULL` to the two subqueries.
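The asymmetry between `IN` and `NOT IN` is easy to demonstrate. A sketch with a tiny BST table where the root's parent is NULL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bst (n INTEGER, p INTEGER)")
conn.executemany("INSERT INTO bst VALUES (?, ?)",
                 [(1, None), (2, 1), (3, 1)])

in_rows = conn.execute("SELECT n FROM bst WHERE n IN (SELECT p FROM bst)").fetchall()
# NOT IN against a list containing NULL can never evaluate TRUE -> zero rows
not_in_rows = conn.execute("SELECT n FROM bst WHERE n NOT IN (SELECT p FROM bst)").fetchall()
print(in_rows, not_in_rows)  # [(1,)] []
```

`IN` still finds the inner node (1 matches a non-NULL parent), but `NOT IN` filters out *all* rows because every comparison against NULL is UNKNOWN — exactly the behavior that makes the two HackerRank queries differ.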
|
Use of IN / NOT IN in mySQL
|
[
"",
"mysql",
"sql",
""
] |
I am not sure what the best way is to store a color in SQL.
It can be represented in 3 ways: color name, hex color code, or RGB color code. What is the right datatype for the color?
Please advise; I am working on an eCommerce site with ASP.NET MVC and SQL Server 2012, and I need to store product colors.
[](https://i.stack.imgur.com/qUncX.png)
|
You will probably need a color table with, say, 20 colors, the clients can use to filter the products.
Here is one rather simple database design where a product has various colors:
**table color** - these are the generic colors used for filtering
* color\_id
* color\_name - *e.g. 'red', a text you can use to show in the filter options*
* color\_code - *e.g. #FF0000, a color you can use to show in the filter options*
**table product** - the product (irrespective of its colors)
* product\_id
* product\_description
**table product\_color** - the product's colors; these are the items that are actually ordered
* product\_id
* color\_id
* color\_name, *product-specific e.g. 'dark red'*
* price
(That may get more complicated when the product has various sizes, too.)
|
If you only use the 0xRRGGBB format, you can store the color as an integer in the database and convert it back to hexadecimal when you select it. How best to store the value otherwise depends on your use case.
|
what is the best way color store in sql
|
[
"",
"html",
"sql",
"asp.net",
"asp.net-mvc-4",
"colors",
""
] |
In Excel and other similar software you can use a total to compute percentages. Can anyone tell me the most efficient way to replicate the total function?
I have used a nested query but I am not getting the right result
```
select retpre04recency,
count(*) as CustomerCount,
(select count(*) from extractsummary) as Total,
round(count(*)/(select count(*) from extractsummary),2) as CustomerCount
from extractsummary
group by retpre04recency
order by retpre04recency asc
;
```
My result in the percentage column is zero. Can anyone help?
[](https://i.stack.imgur.com/yHrAQ.png)
|
This is a type problem. The expression
```
count(*)
```
results in type `bigint`. The expression
```
(select count(*) from extractsummary)
```
also results in type `bigint`. Unlike some programming languages (e.g. R), the division operator in PostgreSQL does not automatically promote integer operands to a fractional type. So you must cast it yourself.
```
select
retpre04recency,
count(*) as CustomerCount,
(select count(*) from extractsummary) as Total,
round(count(*)::numeric/(select count(*) from extractsummary),2) as CustomerCount
from
extractsummary
group by
retpre04recency
order by
retpre04recency asc
;
```
---
Example:
```
drop table if exists extractsummary;
create table extractsummary (retpre04recency int);
insert into extractsummary (retpre04recency) values (1), (1), (2), (2), (2), (3), (3), (3), (3), (4), (4), (4), (5), (5), (5), (5), (5), (6), (6), (6), (99);
select
retpre04recency,
count(*) as CustomerCount,
(select count(*) from extractsummary) as Total,
round(count(*)::numeric/(select count(*) from extractsummary),2) as CustomerCount
from
extractsummary
group by
retpre04recency
order by
retpre04recency asc
;
```
[](https://i.stack.imgur.com/jgWY8.png)
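The truncation is easy to reproduce; SQLite shows the same integer-division behavior, and one explicit cast fixes it (a sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# integer / integer truncates toward zero -> the "always zero" symptom
truncated = conn.execute("SELECT 113 / 331").fetchone()[0]
# casting one operand promotes the whole division to a fractional type
ratio = conn.execute("SELECT CAST(113 AS REAL) / 331").fetchone()[0]
print(truncated, round(ratio, 2))  # 0 0.34
```

In PostgreSQL the idiomatic cast is `count(*)::numeric`, as in the corrected query above; the principle is identical.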
|
I'm not sure what problems you are having, but analytic functions are a simpler method than subqueries:
```
select retpre04recency,
count(*) as CustomerCount,
sum(count(*)) over () as Total,
round(count(*)/sum(count(*)) over (), 2) as CustomerCount
from extractsummary
group by retpre04recency
order by retpre04recency asc
```
|
What is the best way to use Total to get percentage of groups in PostgreSQL
|
[
"",
"sql",
"postgresql",
""
] |
I need to count the percentage of finalized transactions compared to total transactions (e.g. including in-process and finalized transactions). From looking around the web, I arrived at:
```
SELECT 100 * (SELECT COUNT(transaction_id) from t_transaction_main
WHERE due_date = '2016-05-16' and (suspend_status !='' OR
close_date != '0-0000-00'))/COUNT(transaction_id) from t_transaction_main as test;
```
And it worked thus far; however, while the finalized-transaction query is as intended, I actually need to divide it by the number of transactions on a particular date only (here, 2016-05-16). But when I added a WHERE clause...
```
SELECT 100 * (SELECT COUNT(transaction_id) from t_transaction_main
WHERE due_date = '2016-05-16' and (suspend_status !='' OR
close_date !='0-0000-00'))/COUNT(transaction_id) WHERE due_date = '2016-05-16' from t_transaction_main as test;
```
It immediately stopped working and returned a syntax error. I've also tried wrapping the `COUNT(transaction_id) WHERE due_date = '2016-05-16'` inside parentheses, but it didn't help.
Any solution would be welcomed, thank you.
|
Wrong clause sequence: you put the WHERE clause before the FROM clause.
```
SELECT 100 * (SELECT COUNT(transaction_id)
from t_transaction_main
WHERE due_date = '2016-05-16' and (suspend_status !='' OR
close_date !='0-0000-00'))/COUNT(transaction_id)
FROM t_transaction_main as test
WHERE due_date = '2016-05-16' ;
```
|
Why would you use a subquery? Or even two aggregation functions?
```
SELECT AVG(CASE WHEN suspend_status <> '' OR close_date <> '0-0000-00'
THEN 100.0 ELSE 0
END)
FROM t_transaction_main test
WHERE due_date = '2016-05-16' ;
```
This is *much* simpler than your original query.
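Why the AVG trick works: averaging 100.0 for matching rows and 0 for the rest yields the percentage in a single aggregate pass. A sketch with hypothetical data (2 of 5 rows finalized):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (finalized INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (1,), (0,), (0,), (0,)])

# AVG of (100.0 or 0) per row == finalized percentage
pct = conn.execute(
    "SELECT AVG(CASE WHEN finalized = 1 THEN 100.0 ELSE 0 END) FROM t"
).fetchone()[0]
print(pct)  # 40.0
```

Using `100.0` (not `100`) also sidesteps any integer-division truncation.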
|
Calculating percentage from two SQL Count() results
|
[
"",
"sql",
"count",
"percentage",
""
] |
this is my first stackoverflow question so please be gentle ;-)
I have a table with unique customerIDs and a table including their transactions with a certain transaction type (purchase, presell etc.)
What I want to count is all customers that have done a specific transaction type at least once. But if a customer made an eligible transaction in 2014, I don't want this customer to be counted again in 2015. Does that make sense?
I tried the following statement:
```
SELECT
datepart(yyyy,t.TransactionDate)
,count(DISTINCT c.customerID)
FROM Customers as c
JOIN Transactions as t
ON c.CustomerID = t.CustomerID
WHERE t.TransactionType = 'presell'
GROUP BY datepart(yyyy,t.TransactionDate)
```
The issue is that, of course, a customer can do the same transactiontype once a year. So with this statement I count the customers distinct per year...and not just once in total.
**EDIT:** Lets make it a bit easier. There is only one table and that table looks a bit like this:
[](https://i.stack.imgur.com/l2gSx.jpg)
So if I'm filtering for "Presell" my result should look a bit like this
[](https://i.stack.imgur.com/N3lUl.jpg)
in 2014 customer A made a presell, in 2015 customer B made a presell, and in 2016 customer A made a presell again, but I don't count this customer because I already counted them in 2014. Hope that makes things a bit clearer.
Thanks for any advice and help here.
|
When you look at it, it is the first presell per customer you are interested in. You can get that easily by selecting `MIN(TransactionDate)` per customer. Once you've done this, you can count.
```
select year, count(*)
from
(
select customerid, min(transactiondate) as year
from transactions
where transactiontype = 'Presell'
group by customerid
) first_presells
group by year
order by year;
```
|
Since you want to count `Customers` by `TransactionType`, you should be grouping by `CustomerID` and `TransactionType`, not the `TransactionDate`:
```
SELECT datepart(yyyy,t.TransactionDate) -- this part won't work anymore without grouping it. Doesn't seem to be relevant to what you're selecting
,count(DISTINCT c.customerID)
FROM Customers as c
JOIN Transactions as t
ON c.CustomerID = t.CustomerID
WHERE t.TransactionType = 'presell'
GROUP BY c.CustomerID, t.TransactionType
```
This should return the count of people who have completed a transaction of type at least once, regardless of the year.
|
Count distinct with repeating entries over time
|
[
"",
"sql",
"t-sql",
"sql-server-2012",
""
] |
I want to retrieve the string after the first dot, from a table column `pos` which has values like:
```
Office.Secretary
XYZ.Manager - Liaison
ABC.Community Relations Officer
```
This should be converted to
```
Secretary
Manager - Liaison
Community Relations Officer
```
I am using this :
```
SELECT REGEXP_REPLACE(REGEXP_SUBSTR (name, '..+' , 1), '.', '', 1, 1) FROM table_POS;
```
But I am getting the column details as it is.
|
```
select regexp_replace('ABC.Community Relations Officer','([^\.]{0,}\.)(.*)','\2') from dual;
```
Try regexp group capturing.
`1st group ([^\.]{0,}\.)` - matches everything up to and including the first dot.
`2nd group (.*)` - matches everything after that dot.
`'\2'` - replace whole string with value from 2nd group.
With this feature you can, for example, change the order of your groups.
```
select regexp_replace('ABC.Community Relations Officer','([^\.]{0,}\.)(.*)','\2 -- \1') from dual;
```
|
If you need to remove a chunk of string from the beginning and up to the *first* dot, you need to use a negated character class regex solution:
```
SELECT REGEXP_REPLACE(name, '^[^.]+[.]', '') FROM table_POS;
```
Note that here,
* `^` - matches the start of a string
* `[^.]+` - matches 1 or more characters other than a literal dot
* `[.]` - matches a literal dot.
The whole matched substring is replaced with an empty string, and is thus removed.
If you need to remove a chunk of string up to the *last* dot, you need rock's solution, something like `SELECT REGEXP_REPLACE(name, '^.+[.]', '') FROM table_POS;`.
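The negated-character-class pattern behaves the same in any regex engine; here it is checked with Python's `re` module against the question's sample values:

```python
import re

names = [
    "Office.Secretary",
    "XYZ.Manager - Liaison",
    "ABC.Community Relations Officer",
]
# strip everything up to and including the FIRST dot
stripped = [re.sub(r"^[^.]+[.]", "", n) for n in names]
print(stripped)  # ['Secretary', 'Manager - Liaison', 'Community Relations Officer']
```

Because `[^.]+` cannot cross a dot, greediness never eats past the first one — which is why this variant stops at the first dot while `^.+[.]` runs to the last.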
|
To get the string after . through regex in oracle
|
[
"",
"sql",
"regex",
"oracle",
""
] |
I have a table of about 50k records. It looks something like this:
```
Animal | Name | Color | Legs
Cat |George| Black | 4
Cat | Bob | Brown | 4
Cat | Dil | Brown | 4
Bird | Irv | Green | 2
Bird | Van | Red | 2
```
etc.
I want to only insert Cat once and Bird only once and so on. The Name / Color / Legs etc. should be the first value it finds.
This table has 10 columns and 50k rows.
I tried `insert into MyNewTable Select Distinct * From MyAnimalTable`, but that didn't work. I also tried `group by`, but did not work either.
|
You can GROUP BY the animal name alone and select the rest of the columns via MAX() to keep one value per animal. Note that MAX() is applied to each column independently, so the kept values may come from different rows rather than literally the first row found.
```
insert into MyNewTable
Select MAT.Animal,max(MAT.Name),max(MAT.Color),max(MAT.Legs)
From MyAnimalTable MAT GROUP BY MAT.Animal
```
|
Use `ROW_NUMBER` to number the rows per animal and only keep the ones numbered 1.
```
insert into mynewtable (animal, name, color, legs)
select animal, name, color, legs
from
(
select
animal, name, color, legs,
row_number() over (partition by animal order by animal) as rn
from myanimaltable a
) numbered
where rn = 1;
```
(This numbers the records per animal arbitrarily. So you get the first record per animal "the DBMS finds". If you want any certain order, you'd have to specify this after the partition clause.)
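A runnable sketch of the ROW_NUMBER de-duplication on the sample data (SQLite 3.25+ for window functions; ordering by `rowid` stands in for "the first row the DBMS finds"):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyAnimalTable (Animal TEXT, Name TEXT, Color TEXT, Legs INTEGER)")
conn.executemany("INSERT INTO MyAnimalTable VALUES (?, ?, ?, ?)", [
    ("Cat", "George", "Black", 4),
    ("Cat", "Bob", "Brown", 4),
    ("Bird", "Irv", "Green", 2),
    ("Bird", "Van", "Red", 2),
])
rows = conn.execute("""
    SELECT Animal, Name FROM (
        SELECT Animal, Name,
               ROW_NUMBER() OVER (PARTITION BY Animal ORDER BY rowid) AS rn
        FROM MyAnimalTable
    ) numbered
    WHERE rn = 1
    ORDER BY Animal
""").fetchall()
print(rows)  # [('Bird', 'Irv'), ('Cat', 'George')]
```

Unlike the MAX() approach, every kept column value is guaranteed to come from the same source row.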
|
How can I insert values as a distinct column?
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I want to have a select that will not have fixed number of columns.
Column `OptionalColumn` should be selected only if variable *@check* = 1
This is what I had (syntax is bad but maybe it explains my problem)
```
SELECT
Column1, Column2,
(IF @Check = 1
BEGIN OptionalColumn END)
FROM table
```
It is crucial to have it only if @Check = 1 and not having it if @Check = 0.
Is it doable?
|
You could do this with an IF ELSE block like this;
```
CREATE TABLE #table (Column1 varchar(10), Column2 varchar(10))
INSERT INTO #table
VALUES
('column1', 'column2')
DECLARE @Check bit; SET @Check = 0
IF @Check = 1
(SELECT Column1, Column2
FROM #table)
ELSE
(SELECT Column1
FROM #table)
```
Change the variable from 0 to 1 for testing to see the change in number of columns.
|
Rich Benner gave you a good solution, as an alternative you can use dynamic sql like that :
```
DECLARE @sqlvar NVARCHAR(200)
select @sqlvar = N'SELECT Column1, Column2 ' + IIF(@check = 1, N', OptionalColumn', N'') + N' from TABLE';
EXECUTE sp_executesql @sqlvar
```
For reference on sp\_executesql see:
<https://msdn.microsoft.com/it-it/library/ms188001(v=sql.120).aspx>
|
Select certain column only if condition met
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
This question has already been answered multiple times but I just can't get it to work. I tried to use some answers from [this](https://stackoverflow.com/questions/16423883/how-to-retrieve-same-column-twice-with-different-conditions-in-same-table) question but I always get the error "more than one row returned by a subquery used as an expression".
I have following sql query:
```
SELECT DISTINCT p.name, pma.time AS goal, pma.time AS assist
FROM player p
INNER JOIN player_match pm
ON p.player_id = pm.player_id
INNER JOIN matches m
ON m.match_id = pm.match_id
INNER JOIN team_match tm
ON tm.team_id = p.team_id
FULL JOIN player_match_activity pma
ON pma.player_id = p.player_id
AND pma.activity_id = '1'
AND pma.match_id = m.match_id
WHERE m.match_id = '163'
AND tm.home_away = 'home'
```
The query gives me following result:
```
name | goal | assist
-------------------------------------
Ronaldo 1 1
Messi 3 3
Vardy
```
The column "assist" show same values like the column "goal".
Line pma.activity\_id = '1' select just goals.
How can I set that the column "assist" use exact same conditions like the column "goal" BUT instead of pma.activity\_id = '1' I want to change it to '2" ?
|
You could add another join to the `player_match_activity` table, or you could change `pma.activity_id = '1'` to `pma.activity_id IN ('1','2')` and use `CASE` expressions to populate the proper columns:
```
SELECT DISTINCT p.name, pma_goal.time AS goal, pma_assist.time AS assist
FROM player p
INNER JOIN player_match pm
ON p.player_id = pm.player_id
INNER JOIN matches m
ON m.match_id = pm.match_id
INNER JOIN team_match tm
ON tm.team_id = p.team_id
FULL JOIN player_match_activity pma_goal
ON pma_goal.player_id = p.player_id
AND pma_goal.activity_id = '1'
AND pma_goal.match_id = m.match_id
FULL JOIN player_match_activity pma_assist
ON pma_assist.player_id = p.player_id
AND pma_assist.activity_id = '2'
AND pma_assist.match_id = m.match_id
WHERE m.match_id = '163'
AND tm.home_away = 'home'
```
Alternatively:
```
SELECT p.name, MAX(CASE WHEN pma.activity_id = '1' THEN pma.time END) AS goal
, MAX(CASE WHEN pma.activity_id = '2' THEN pma.time END) AS assist
FROM player p
INNER JOIN player_match pm
ON p.player_id = pm.player_id
INNER JOIN matches m
ON m.match_id = pm.match_id
INNER JOIN team_match tm
ON tm.team_id = p.team_id
FULL JOIN player_match_activity pma
ON pma.player_id = p.player_id
AND pma.activity_id IN ('1','2')
AND pma.match_id = m.match_id
WHERE m.match_id = '163'
AND tm.home_away = 'home'
GROUP BY p.name
```
Also, not sure you need to be using `FULL JOIN` here.
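A minimal sketch of the conditional-aggregation variant, with hypothetical data (activity 1 = goal, activity 2 = assist, `time` as minutes):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pma (name TEXT, activity_id INTEGER, time INTEGER)")
conn.executemany("INSERT INTO pma VALUES (?, ?, ?)", [
    ("Ronaldo", 1, 10),   # goal at minute 10
    ("Ronaldo", 2, 55),   # assist at minute 55
    ("Messi",   1, 30),   # goal only
])
rows = conn.execute("""
    SELECT name,
           MAX(CASE WHEN activity_id = 1 THEN time END) AS goal,
           MAX(CASE WHEN activity_id = 2 THEN time END) AS assist
    FROM pma
    GROUP BY name
    ORDER BY name
""").fetchall()
print(rows)  # [('Messi', 30, None), ('Ronaldo', 10, 55)]
```

A player with no matching activity simply gets NULL in that column, mirroring the empty "Vardy" row in the question.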
|
The way this query is created, the easiest way to do it is to join the player\_match\_activity table twice:
```
SELECT DISTINCT p.name, pma_goal.time AS goal, pma_assist.time AS assist
FROM player p JOIN player_match pm ON p.player_id = pm.player_id
JOIN matches m ON m.match_id = pm.match_id
JOIN team_match tm ON tm.team_id = p.team_id
LEFT JOIN player_match_activity pma_goal ON pma_goal.player_id = p.player_id AND pma_goal.activity_id = '1' AND pma_goal.match_id = m.match_id
LEFT JOIN player_match_activity pma_assist ON pma_assist.player_id = p.player_id AND pma_assist.activity_id = 2 AND pma_assist.match_id = m.match_id
WHERE m.match_id = '163' AND tm.home_away = 'home';
```
This is probably not the best way to do this SELECT, but it is my minimal change suggestion. ;)
|
Retrieve same column twice with different conditions
|
[
"",
"sql",
"database",
"postgresql",
""
] |
I have a query that joins 3 tables. Without DISTINCT the total number of records is 331, but with DISTINCT the total is 113. I want to get only the 113 total, which is the count of distinct records. I used COUNT but it gives me the total number of non-unique records. Please help me get the total of distinct records. Here's my query.
> Without distinct (331 records)
```
SELECT
uf.OrigFileName,
uf.CreatedOn,
sdiTran.Status,
sdiFS.FileName,
sdiFile.ArchiveLogID,
COUNT(*) over() as totalRows
FROM [SDI].dbo.UploadedFile uf
inner join [SDI].dbo.SDIFile sdiFile on uf.UploadedFullFileName = sdiFile.OriginalName
left join [SDI].dbo.SDITransaction sdiTran on sdiFile.ID = sdiTran.SDIFileID
inner join [SDI].dbo.SDIFSArchive sdiFS on sdiFile.ID = sdiFS.SDIFileID
WHERE uf.CommunityID = '7cc67de8-e5c2-4055-958b-f604c6a40cf1'
AND uf.OrganizationID='e5750df1-0409-46b6-9aba-7f07be7c890c'
```
[](https://i.stack.imgur.com/98AEV.png)
> After Distinct (113 records). totalRows should be 113 only since what I'm getting are distinct records.
```
SELECT distinct
uf.OrigFileName,
uf.CreatedOn,
sdiTran.Status,
sdiFS.FileName,
sdiFile.ArchiveLogID,
COUNT(*) over() as totalRows
FROM [SDI].dbo.UploadedFile uf
inner join [SDI].dbo.SDIFile sdiFile on uf.UploadedFullFileName = sdiFile.OriginalName
left join [SDI].dbo.SDITransaction sdiTran on sdiFile.ID = sdiTran.SDIFileID
inner join [SDI].dbo.SDIFSArchive sdiFS on sdiFile.ID = sdiFS.SDIFileID
WHERE uf.CommunityID = '7cc67de8-e5c2-4055-958b-f604c6a40cf1'
AND uf.OrganizationID='e5750df1-0409-46b6-9aba-7f07be7c890c'
```
[](https://i.stack.imgur.com/k6IDQ.png)
|
You can use the DISTINCT in a subquery:
```
SELECT *
, COUNT(*) over() as totalRows
FROM (
SELECT DISTINCT
uf.OrigFileName,
uf.CreatedOn,
sdiTran.Status,
sdiFS.FileName,
sdiFile.ArchiveLogID
FROM [SDI].dbo.UploadedFile uf
inner join [SDI].dbo.SDIFile sdiFile on uf.UploadedFullFileName = sdiFile.OriginalName
left join [SDI].dbo.SDITransaction sdiTran on sdiFile.ID = sdiTran.SDIFileID
inner join [SDI].dbo.SDIFSArchive sdiFS on sdiFile.ID = sdiFS.SDIFileID
WHERE uf.CommunityID = '7cc67de8-e5c2-4055-958b-f604c6a40cf1'
AND uf.OrganizationID='e5750df1-0409-46b6-9aba-7f07be7c890c'
) A
```
|
If you want 331 rows with the count of 113 in each row, you can use a trick. Use `row_number()` to enumerate the rows with the same values and then use a window function to count when the number is 1:
```
SELECT t.*,
SUM(CASE WHEN seqnum = 1 THEN 1 ELSE 0 END) OVER () as CountDistinct
FROM (SELECT uf.OrigFileName, uf.CreatedOn, sdiTran.Status, sdiFS.FileName,
sdiFile.ArchiveLogID,
ROW_NUMBER() OVER (PARTITION BY uf.OrigFileName, uf.CreatedOn, sdiTran.Status, sdiFS.FileName, sdiFile.ArchiveLogID
ORDER BY (SELECT NULL)) as seqnum
FROM [SDI].dbo.UploadedFile uf inner join
[SDI].dbo.SDIFile sdiFile
on uf.UploadedFullFileName = sdiFile.OriginalName left join
[SDI].dbo.SDITransaction sdiTran
on sdiFile.ID = sdiTran.SDIFileID inner join
[SDI].dbo.SDIFSArchive sdiFS on sdiFile.ID = sdiFS.SDIFileID
WHERE uf.CommunityID = '7cc67de8-e5c2-4055-958b-f604c6a40cf1' AND
uf.OrganizationID='e5750df1-0409-46b6-9aba-7f07be7c890c'
) t;
```
If you only want the count of the distinct records, you can use `COUNT(DISTINCT)`. Because SQL Server doesn't allow multiple arguments, a subquery is perhaps the simplest method:
```
SELECT COUNT(*)
FROM (SELECT DISTINCT uf.OrigFileName, uf.CreatedOn, sdiTran.Status, sdiFS.FileName,
sdiFile.ArchiveLogID
FROM [SDI].dbo.UploadedFile uf inner join
[SDI].dbo.SDIFile sdiFile
on uf.UploadedFullFileName = sdiFile.OriginalName left join
[SDI].dbo.SDITransaction sdiTran
on sdiFile.ID = sdiTran.SDIFileID inner join
[SDI].dbo.SDIFSArchive sdiFS on sdiFile.ID = sdiFS.SDIFileID
WHERE uf.CommunityID = '7cc67de8-e5c2-4055-958b-f604c6a40cf1' AND
uf.OrganizationID='e5750df1-0409-46b6-9aba-7f07be7c890c'
) t;
```
|
Get Total Number of Records after Distinct SQL Server
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2014",
""
] |
I have a table which does not have any auto increments. I have to insert data into this table, each time incrementing the ID by 1000.
So I have
```
SELECT @maxId = Max(ID)
INSERT INTO TABLE1
(ID, DATA)
VALUES
(@maxId + 1000, DATA),
(@maxId + 2000, DATA),
(@maxId + 3000, DATA)
```
Instead of explicitly incrementing the ID for each insert, is there a way to have it 'auto-increment'? I cannot use `LAST_INSERT_ID()` or anything like that, simply because the IDs are created in a weird way
|
There is nothing stopping you from doing the following and getting the data inserted correctly.
```
insert into table1(ID,DATA)
VALUES ((select max(id) from table1 as T) +1000, DATA),
       ((select max(id) from table1 as T) +2000, DATA);
```
Or is it something else that you meant?
|
You can declare the field this way:
```
MyID INT IDENTITY (0,1000);
```
This will auto increment each record by 1000.
For example:
```
CREATE TABLE MyTable
(
MyID INT IDENTITY(0,1000),
SField VARCHAR(128)
);
INSERT INTO MyTable (SField) VALUES ('TEST');
INSERT INTO MyTable (SField) VALUES ('TEST1');
INSERT INTO MyTable (SField) VALUES ('TEST2');
SELECT * FROM MyTable
```
Will yield the following result:
```
| MyID | SField |
-----------------
| 0 | TEST |
| 1000 | TEST1 |
| 2000 | TEST2 |
```
|
Increment Variable in SQL
|
[
"",
"sql",
"sql-server",
""
] |
Why the result of this two nvarchar to DateTime is different?
```
Declare @ApplyDate nvarchar(max)
Set @ApplyDate = N'2014-01-01'
Print CONVERT(datetime, @ApplyDate , 120)
Exec(' print CONVERT(datetime, ' + @ApplyDate + ', 120) ')
```
result is:
```
Jan 1 2014 12:00AM
Jul 6 1905 12:00AM
```
|
In this query:
```
Exec(' print CONVERT(datetime, ' + @ApplyDate + ', 120) ')
```
you are inserting `@ApplyDate` without quotes, which effectively turns into:
```
print CONVERT(datetime, 2014-01-01, 120)
```
which is of course equivalent to:
```
print CONVERT(datetime, 2012, 120)
```
where 2012 is read as number of days passed starting on 1900-01-01.
You are probably looking for (note additional pair of quotes):
```
Exec(' print CONVERT(datetime, ''' + @ApplyDate + ''', 120) ')
```
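The arithmetic behind the wrong date can be checked in plain Python: without quotes, `2014-01-01` is integer subtraction, and SQL Server then reads the integer as days from its `datetime` epoch, 1900-01-01.

```python
from datetime import date, timedelta

# Without quotes, 2014-01-01 is integer arithmetic:
days = 2014 - 1 - 1
assert days == 2012

# SQL Server interprets the integer as days since 1900-01-01 (datetime day 0):
result = date(1900, 1, 1) + timedelta(days=days)
print(result)  # 1905-07-06, matching "Jul 6 1905" in the question
```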
|
You have missed the extra single quotes around the string value in your second example, so it doesn't get recognized as a string.
It should be
```
Exec(' print CONVERT(datetime, ''' + @ApplyDate + ''', 120) ')
```
In this case both examples output the same date.
|
Convert nvarchar to DateTime returns back different dates
|
[
"",
"sql",
"sql-server",
"datetime",
"type-conversion",
""
] |
I'm trying to Left Join Multiple tables with two sum columns that both join on a mid step table.
The tables look like:
**Table1**
```
ID Value1
1 3
2 2
3 3
```
**Table2**
```
ID Value1
1 5
2 2
3 2
4 1
```
**Jointable**
```
ID
1
2
3
4
5
6
```
I'm trying to output:
```
Table1Value1SUM Table2Value1Sum
8 | 10
```
With the SQL:
```
SELECT SUM(Table1.Value1) Table1Value1SUM,SUM(Table2.Value1) Table2Value1Sum From Table1
Left Join JoinTable
On JoinTable.ID = Table1.ID
Left Join Table2
On Table2.ID = Table1.ID
```
I'm getting these results:
```
Table1Value1SUM Table2Value1Sum
8 | 9
```
|
`jointable` has to be the table you `left join` from, with the other 2 tables joined to it.
```
SELECT SUM(Table1.Value1) Table1Value1SUM,SUM(Table2.Value1) Table2Value1Sum
From JoinTable
Left Join Table1 On JoinTable.ID = Table1.ID
Left Join Table2 On JoinTable.ID = Table2.ID
```
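The corrected join can be verified against the question's sample data, for example with an in-memory SQLite database from Python:

```python
import sqlite3

# Reproduce the question's three tables and run the corrected query.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Table1 (ID INTEGER, Value1 INTEGER);
CREATE TABLE Table2 (ID INTEGER, Value1 INTEGER);
CREATE TABLE JoinTable (ID INTEGER);
INSERT INTO Table1 VALUES (1,3),(2,2),(3,3);
INSERT INTO Table2 VALUES (1,5),(2,2),(3,2),(4,1);
INSERT INTO JoinTable VALUES (1),(2),(3),(4),(5),(6);
""")

row = conn.execute("""
    SELECT SUM(Table1.Value1), SUM(Table2.Value1)
    FROM JoinTable
    LEFT JOIN Table1 ON JoinTable.ID = Table1.ID
    LEFT JOIN Table2 ON JoinTable.ID = Table2.ID""").fetchone()
print(row)  # (8, 10)
```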
|
`Left Join` the `Table2` with `JoinTable` instead of `Table1`
```
SELECT SUM(Table1.Value1) Table1Value1SUM,SUM(Table2.Value1) Table2Value1Sum From Table1
Left Join JoinTable
On JoinTable.ID = Table1.ID
Left Join Table2
On JoinTable.ID = Table2.ID
```
|
Left Joining Table with Sum - Causing Issues
|
[
"",
"sql",
"sql-server",
""
] |
I am trying to get the date part from a timestamp field.
I used this SQL query:
```
select timestamp, CAST(timestamp as date) as date from messages
```
I got the following result:
```
--------------------------------------------
| timestamp | date |
--------------------------------------------
| 2016-05-15 10:22:54 | 2016-05-16 |
--------------------------------------------
```
As shown above, the date field produced returns the wrong date `2016-05-16` whereas the original date is `2016-05-15`.
How can we resolve this issue?
|
Use the `DATE()` function instead of `CAST` to extract the date part:
```
select timestamp, date(timestamp) as my_date from messages
```
|
That's not an issue! The wrong `time_zone` is simply set. See this sample:
**get current time\_zone**
```
SHOW GLOBAL VARIABLES LIKE 'time_zone'; -- systemwide setting
SHOW VARIABLES LIKE 'time_zone'; -- session setting
```
**sample**
```
MariaDB [mysql]> select t, CAST(t as date) FROM groupme LIMIT 1;
+---------------------+-----------------+
| t | CAST(t as date) |
+---------------------+-----------------+
| 2016-05-15 20:22:54 | 2016-05-15 |
+---------------------+-----------------+
1 row in set (0.00 sec)
MariaDB [mysql]> SET time_zone ='-12:00';
Query OK, 0 rows affected (0.00 sec)
MariaDB [mysql]> select t, CAST(t as date) FROM groupme LIMIT 1;
+---------------------+-----------------+
| t | CAST(t as date) |
+---------------------+-----------------+
| 2016-05-14 20:22:54 | 2016-05-14 |
+---------------------+-----------------+
1 row in set (0.00 sec)
MariaDB [mysql]>
```
|
Why does the CAST() function return the wrong date?
|
[
"",
"mysql",
"sql",
"date",
""
] |
Trying to use a `CASE` statement in the `WHERE` clause and it's giving me errors. What did I miss?
```
SELECT DISTINCT
Commodity,
Commodity_ID,
[Description],
Train,
Truck
FROM dbo.List_Commodity
WHERE
CASE WHEN @Truck = 1 THEN
Truck = @Truck
WHEN @Train = 1 THEN
Train = @Train
END
```
|
Instead of `CASE` statement use `AND/OR` logic to do this
```
SELECT DISTINCT commodity,
commodity_id,
[description],
train,
truck
FROM dbo.list_commodity
WHERE ( @Truck = 1
AND truck = @Truck )
OR ( @Train = 1
AND train = @Train )
```
In case you want to retrieve all the records when both `@Truck` and `@Train` are not equal to `1` then add the below condition to above query
```
OR ( @Truck <> 1
     AND @Train <> 1)
```
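The AND/OR rewrite is easy to try out on a tiny hypothetical table, here via SQLite from Python, with two variables playing the role of `@Truck` and `@Train`:

```python
import sqlite3

# Hypothetical commodity table; column and table names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE c (name TEXT, train INTEGER, truck INTEGER)")
conn.executemany("INSERT INTO c VALUES (?,?,?)",
                 [("coal", 1, 0), ("grain", 0, 1), ("oil", 0, 0)])

truck_flag, train_flag = 1, 0  # stand-ins for @Truck and @Train
rows = conn.execute("""
    SELECT name FROM c
    WHERE (? = 1 AND truck = ?) OR (? = 1 AND train = ?)""",
    (truck_flag, truck_flag, train_flag, train_flag)).fetchall()
print(rows)  # [('grain',)]
```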
|
If you are expecting `THEN Truck = @Truck` to return `true` or `false`, then you are missing the fact that MS SQL has no boolean data type and no standalone boolean expressions.
|
CASE-Statement in WHERE Clause not working what am I missing?
|
[
"",
"sql",
"case",
""
] |
I would like to replace a character based on its HEX value (x96) in a SELECT statement.
Please help. Thanks in advance.
|
Hexadecimal `96` is decimal `150`; the corresponding character is `chr(150)`.
To remove this character from a string, use the following replace (I simulate the character in the string with concatenation):
```
select replace ('test'||chr(150)||'text',chr(to_number( '96', 'xx' )), '') from dual;
testtext
```
|
The `ASCII` function returns the NUMBER code that represents the specified character.
`xxxxxx` - converts decimal to hex.
`fm` - removes the padding spaces.
```
select replace('Test text to replace','e',to_char (ascii('e'), 'fmxxxxxx')) from dual
```
|
SQL ORACLE: How to replace a character by its HEX value
|
[
"",
"sql",
"oracle",
""
] |
I was interested in writing a query for an application where I need to use a `NOT EXISTS` clause to check if a row exists.
I am using Sybase but I would like to know if there is an example in SQL in general where you can write a query having a `NOT EXISTS` clause without a nested subquery for the `NOT EXISTS`.
So instead of
```
SELECT * FROM TABLE
WHERE NOT EXISTS (SOME SUBQUERY)
```
is there a way to write this without a subquery?
**EDIT**: unfortunately, I cannot show you the query since it is confidential information but what I can explain is that I am trying to do this:
```
SELECT t1.a
FROM (select t2.a from table t2 ,table t3 where t2.b = t3.b ) as t1
where not exists (select t1.a from table t1 )
```
hope that is clear.
|
You could write an anti-join using LEFT JOIN instead of an EXISTS:
```
SELECT t1.*
FROM Table1 t1
LEFT JOIN Table2 t2
ON t2.Id = t1.Id
WHERE t2.Id IS NULL
```
But with the EXISTS operator, [you *must* have a subquery](http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc32300.1570/html/sqlug/sqlug195.htm).
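The anti-join shape can be demonstrated on a minimal pair of tables, here with SQLite from Python; rows of `Table1` that find no match in `Table2` survive the `IS NULL` filter:

```python
import sqlite3

# LEFT JOIN anti-join: keep Table1 rows with no matching Id in Table2.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Table1 (Id INTEGER);
CREATE TABLE Table2 (Id INTEGER);
INSERT INTO Table1 VALUES (1),(2),(3);
INSERT INTO Table2 VALUES (2);
""")
rows = conn.execute("""
    SELECT t1.Id FROM Table1 t1
    LEFT JOIN Table2 t2 ON t2.Id = t1.Id
    WHERE t2.Id IS NULL ORDER BY t1.Id""").fetchall()
print(rows)  # [(1,), (3,)]
```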
|
No, there is no way to use the EXISTS function in the way you are asking without a subquery.
|
Writing a query with a NOT EXISTS clause without a subquery for the NOT EXISTS
|
[
"",
"sql",
"sybase",
""
] |
I feel as if this should be quite easy, but can't seem to find a solution.
Suppose I have the following table:
```
|--------||---||---||---||---||---||---||---|
|Company ||q1 ||q2 ||q3 ||q4 ||q5 ||q6 ||q7 |
|--------||---||---||---||---||---||---||---|
|abc ||1 ||2 ||1 ||3 ||2 ||2 ||1 |
|abc ||2 ||2 ||1 ||2 ||3 ||1 ||1 |
|abc ||1 ||1 ||3 ||3 ||1 ||2 ||2 |
|abc ||1 ||2 ||1 ||3 ||0 ||1 ||3 |
```
I want to count the number of times '1' appears in the table, so the query should, in this case, result with 12. I tried 'hardcoding' it, like the following query. But that just results in the rows containing a 1, so in this case 4. How do I count the number of times '1' occurs, thus resulting in a count of 12?
```
SELECT COUNT(*)
FROM table
WHERE Company = 'abc'
AND (
q1 = '1'
OR q2 = '1'
OR q3 = '1'
OR q4 = '1'
OR q5 = '1'
OR q6 = '1'
OR q7 = '1'
)
```
|
```
SELECT SUM(
IF(q1 = 1, 1, 0) +
IF(q2 = 1, 1, 0) +
IF(q3 = 1, 1, 0) +
IF(q4 = 1, 1, 0) +
IF(q5 = 1, 1, 0) +
IF(q6 = 1, 1, 0) +
IF(q7 = 1, 1, 0)
)
FROM table
WHERE Company = 'abc'
```
|
This is a very weird assignment, but:
<http://sqlfiddle.com/#!9/2e7aa/3>
```
SELECT SUM((q1='1')+(q2='1')+(q3='1')+(q4='1')+(q5='1')+(q6='1')+(q7='1'))
FROM table
WHERE Company = 'abc'
AND '1' IN (q1,q2,q3,q4,q5,q6,q7)
```
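The boolean-sum trick also works in SQLite, where a comparison like `(q1='1')` evaluates to 0 or 1 and can be summed directly. A reduced demonstration from Python (three question columns instead of seven):

```python
import sqlite3

# Each comparison yields 0/1, so summing the comparisons counts the matches.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (Company TEXT, q1, q2, q3)")
conn.executemany("INSERT INTO t VALUES (?,?,?,?)",
                 [("abc", "1", "2", "1"),
                  ("abc", "2", "1", "1"),
                  ("xyz", "1", "1", "1")])

(total,) = conn.execute(
    "SELECT SUM((q1='1')+(q2='1')+(q3='1')) FROM t WHERE Company='abc'"
).fetchone()
print(total)  # 4
```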
|
SQL count specific value over multiple columns and rows
|
[
"",
"mysql",
"sql",
""
] |
I'm trying to retrieve the records from a table in my MySQL database, where:
* the *timestamp* is the closest to a variable I provide; and,
* grouped by the fields keyA, keyB, keyC and keyD
I've hard-coded the variable as below to test this, however I cannot get the query to work.
[**SQLFiddle**](http://sqlfiddle.com/#!9/37fad2/1)
My current schema is:
```
CREATE TABLE dataHistory (
timestamp datetime NOT NULL,
keyA varchar(10) NOT NULL,
keyB varchar(10) NOT NULL,
keyC varchar(25) NOT NULL,
keyD varchar(10) NOT NULL,
value int NOT NULL,
PRIMARY KEY (timestamp,keyA,keyB,keyC,keyD)
);
INSERT INTO dataHistory
(timestamp, keyA, keyB, keyC, keyD, value)
VALUES
('2016-05-12 04:15:00', 'value1', 'all', 'value2', 'domestic', 96921),
('2016-05-12 04:05:00', 'value1', 'all', 'value2', 'domestic', 96947),
('2016-05-12 04:20:00', 'value1', 'all', 'value2', 'domestic', 96954),
('2016-05-12 04:15:00', 'value1', 'all', 'value3', 'domestic', 2732),
('2016-05-12 04:10:00', 'value1', 'all', 'value3', 'domestic', 2819),
('2016-05-12 04:20:00', 'value1', 'all', 'value3', 'domestic', 2802);
```
and the query I currently have is:
```
SELECT e.difference, e.timestamp, e.keyA, e.keyB, e.keyC, e.keyD, e.value
FROM (SELECT TIMESTAMPDIFF(minute, '2016-05-12 04:11:00', d.timestamp) as difference, d.timestamp, d.keyA, d.keyB, d.keyC, d.keyD, d.value
FROM dataHistory d
GROUP BY d.keyA, d.keyB, d.keyC, d.keyD) as e;
```
All I can seem to extract from the sample data is the earliest two records and not the two closest to the datetime.
What I receive:
```
difference timestamp keyA keyB keyC keyD value
-10 May, 12 2016 04:05:00 value1 all value2 domestic 96947
-5 May, 12 2016 04:10:00 value1 all value3 domestic 2819
```
I am expecting to see:
```
timestamp keyA keyB keyC keyD value
May, 12 2016 04:15:00 value1 all value2 domestic 96921
May, 12 2016 04:10:00 value1 all value3 domestic 2819
```
Any assistance would be appreciated!
|
```
SELECT e.difference, e.timestamp, e.keyA, e.keyB, e.keyC, e.keyD, e.value
FROM (SELECT ABS(TIMESTAMPDIFF(minute, '2016-05-12 04:11:00', d.timestamp)) as difference, d.timestamp, d.keyA, d.keyB, d.keyC, d.keyD, d.value
FROM dataHistory d
ORDER BY difference) as e
GROUP BY e.keyA, e.keyB, e.keyC, e.keyD;
```
This query is returning the values you want.
|
Does this help?
```
SELECT
TIMESTAMPDIFF (MINUTE , '2016-05-12 04:15:00' , MainTable.timestamp) AS Difference ,
MainTable.timestamp ,
MainTable.KeyA ,
MainTable.KeyB ,
MainTable.KeyC ,
MainTable.KeyD ,
MainTable.value
FROM
dataHistory AS MainTable
LEFT OUTER JOIN
dataHistory AS SecondaryTable
ON
MainTable.KeyA = SecondaryTable.KeyA
AND
MainTable.KeyB = SecondaryTable.KeyB
AND
MainTable.KeyC = SecondaryTable.KeyC
AND
MainTable.KeyD = SecondaryTable.KeyD
AND
ABS (TIMESTAMPDIFF (MINUTE , '2016-05-12 04:15:00' , MainTable.timestamp)) > ABS (TIMESTAMPDIFF (MINUTE , '2016-05-12 04:15:00' , SecondaryTable.timestamp))
WHERE
SecondaryTable.timestamp IS NULL;
```
---
Guy Glantser,
Data Professional,
Madeira - Data Solutions,
<http://www.madeiradata.com>
|
SQL query to retrieve records closest to timestamp
|
[
"",
"mysql",
"sql",
"database",
""
] |
I am not sure what I am doing wrong here. I am trying to get the count of usertb.id for each month. I want 0s to be displayed if the count is null. I am using a left outer join with a month table that consists of just the id and name of each month. Please help. I am using SQL SERVER 2008.
```
SELECT
monthTB.name
,subTB.name
,COUNT(userTB.ID)
FROM TDP_NetOps.dbo.sub_activity AS subTB
,TDP_NetOps.dbo.month AS monthTB
LEFT OUTER JOIN TDP_NetOps.dbo.user_sub_activity AS userTB ON monthTB.ID = DATEPART(month, userTB.date_of_activity)
WHERE
subTB.sub_activity_id = userTB.sub_activity_id
AND userTB.emp_id = 'xxxx'
GROUP BY
monthTB.name
,monthTB.ID
,subTB.name
ORDER BY
monthTB.ID ASC
,subTB.name ASC
```
|
Well, there are a few things wrong with your query:
Firstly, do not use the implicit `JOIN` syntax (comma-separated); use the proper explicit join syntax.
Secondly, if you do use it, never mix it with the explicit syntax; that will always lead to mistakes.
And lastly, conditions on the right table of a left join belong inside the `ON` clause; when specified in the `WHERE` clause, the join turns into an inner join:
```
SELECT monthTB.name
, subTB.name
, COUNT(userTB.ID)
FROM TDP_NetOps.dbo.sub_activity AS subTB
CROSS JOIN TDP_NetOps.dbo.month AS monthTB
LEFT OUTER JOIN TDP_NetOps.dbo.user_sub_activity AS userTB
ON (monthTB.ID = DATEPART(month, userTB.date_of_activity)
AND subTB.sub_activity_id = userTB.sub_activity_id
AND userTB.emp_id = 'xxxx')
GROUP BY monthTB.name
, monthTB.ID
, subTB.name
ORDER BY monthTB.ID ASC
, subTB.name ASC
```
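The ON-versus-WHERE point is easy to see on a minimal SQLite example from Python (hypothetical table names): the same filter placed in `WHERE` discards the NULL-extended rows, silently turning the `LEFT JOIN` into an inner join.

```python
import sqlite3

# Two months, activity recorded for only one of them.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE months (id INTEGER, name TEXT);
CREATE TABLE activity (month_id INTEGER, emp TEXT);
INSERT INTO months VALUES (1,'Jan'),(2,'Feb');
INSERT INTO activity VALUES (1,'xxxx');
""")

in_where = conn.execute("""
    SELECT m.name, COUNT(a.month_id) FROM months m
    LEFT JOIN activity a ON a.month_id = m.id
    WHERE a.emp = 'xxxx'
    GROUP BY m.id, m.name ORDER BY m.id""").fetchall()

in_on = conn.execute("""
    SELECT m.name, COUNT(a.month_id) FROM months m
    LEFT JOIN activity a ON a.month_id = m.id AND a.emp = 'xxxx'
    GROUP BY m.id, m.name ORDER BY m.id""").fetchall()

print(in_where)  # [('Jan', 1)] -- the zero-count month is lost
print(in_on)     # [('Jan', 1), ('Feb', 0)]
```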
|
Your `WHERE` clause is undoing the `LEFT JOIN`. I think this is the logic that you want:
```
SELECT monthTB.name, subTB.name
COUNT(userTB.ID)
FROM (SELECT DISTINCT subTB.sub_activity_id
FROM TDP_NetOps.dbo.sub_activity subTB
WHERE subTB.emp_id = 'xxxx'
) subTB CROSS JOIN
TDP_NetOps.dbo.month AS monthTB LEFT OUTER JOIN
TDP_NetOps.dbo.user_sub_activity userTB
ON monthTB.ID = DATEPART(month, userTB.date_of_activity) AND
subTB.sub_activity_id = userTB.sub_activity_id AND
subTB.emp_id = 'xxxx'
GROUP BY monthTB.name, monthTB.ID, subTB.name
ORDER BY monthTB.ID ASC, subTB.name ASC;
```
I'm not sure if the second filter on `emp_id` is really needed. That depends on what the data looks like.
|
Unable to get null values for the count per month
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have table for which I want to check the **count** for particular user for dates between `01/04/2016 - 17/05/2016`
I have added the query like below
```
select count(CUser_Id) from inward_doc_tracking_trl
where CSTATUS_flag = 4
and NStatus_Flag = 1
and CUser_Id = 1260
```
now how to add the `date` part and check ??
I am using `SQL server 2005`
|
Use `BETWEEN` to filter between two dates:
```
select count(CUser_Id) from inward_doc_tracking_trl
where CSTATUS_flag = 4
and NStatus_Flag = 1
and CUser_Id = 1260
and YourDateCol between '01/04/2016' and '17/05/2016'
```
**EDIT:** If I understood you correctly, you need a group by clause:
```
select CUser_Id,count(CUser_Id) from inward_doc_tracking_trl
where CSTATUS_flag = 4
and NStatus_Flag = 1
and YourDateCol between '01/04/2016' and '17/05/2016'
GROUP BY CUser_Id
```
|
Just add the date column to the conditions list:
```
select count(CUser_Id) from inward_doc_tracking_trl
where CSTATUS_flag = 4
and NStatus_Flag = 1
and CUser_Id = 1260
and U_datetime >= '2016-04-01'
and U_datetime <= '2016-05-17'
```
Note I've used a different string format to represent dates than the one you used in your question.
The reason for this is that `yyyy-MM-dd` is an ISO standard, so SQL Server will always treat it the same, whereas the format `dd/mm/yyyy` is ambiguous:
does `01/04/2016` stand for April 1st or January 4th of 2016?
Another thing: I always prefer using `>=` and `<=` instead of `between...and` - although `between...and` includes both endpoints, its bounds are easy to misread, while with `>=`, `<=` they are explicit.
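A quick way to check `BETWEEN`'s exact bounds on an engine (here SQLite from Python, where both endpoints are included) is to compare it against the explicit `>=` / `<=` pair:

```python
import sqlite3

# Dates stored as ISO strings so lexicographic comparison equals date order.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE d (day TEXT)")
conn.executemany("INSERT INTO d VALUES (?)",
                 [("2016-03-31",), ("2016-04-01",),
                  ("2016-05-17",), ("2016-05-18",)])

between = conn.execute(
    "SELECT day FROM d WHERE day BETWEEN '2016-04-01' AND '2016-05-17' "
    "ORDER BY day").fetchall()
explicit = conn.execute(
    "SELECT day FROM d WHERE day >= '2016-04-01' AND day <= '2016-05-17' "
    "ORDER BY day").fetchall()
print(between == explicit, len(between))  # True 2
```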
|
Check count of user in a table for particular date
|
[
"",
"sql",
"sql-server-2005",
""
] |
I have some string output which contains alphanumeric values. I want to get only the digits from each string. How can I fetch this with a query? Which MySQL function can I use?
My query is like :
```
select DISTINCT SUBSTRING(referrerURL,71,6)
from hotshotsdblog1.annonymoustracking
where advertiserid = 10
limit 10;
```
Output :
```
100683
101313
19924&
9072&h
12368&
5888&h
10308&
100664
1&hash
101104
```
And I Want output like :
```
100683
101313
19924
9072
12368
5888
10308
100664
1
101104
```
|
If the string starts with a number, then contains non-numeric characters, you can use the [`CAST()`](http://dev.mysql.com/doc/refman/5.7/en/cast-functions.html#function_cast) function or convert it to a numeric implicitly by adding a `0`:
```
SELECT CAST('1234abc' AS UNSIGNED); -- 1234
SELECT '1234abc'+0; -- 1234
```
To extract numbers out of an *arbitrary* string you could add a custom [function](https://dev.mysql.com/doc/refman/5.7/en/adding-functions.html) like [this](https://ushastry.blogspot.com.by/2009/10/mysql-extract-numbers-out-of-string.html):
```
DELIMITER $$
CREATE FUNCTION `ExtractNumber`(in_string VARCHAR(50))
RETURNS INT
NO SQL
BEGIN
DECLARE ctrNumber VARCHAR(50);
DECLARE finNumber VARCHAR(50) DEFAULT '';
DECLARE sChar VARCHAR(1);
DECLARE inti INTEGER DEFAULT 1;
IF LENGTH(in_string) > 0 THEN
WHILE(inti <= LENGTH(in_string)) DO
SET sChar = SUBSTRING(in_string, inti, 1);
SET ctrNumber = FIND_IN_SET(sChar, '0,1,2,3,4,5,6,7,8,9');
IF ctrNumber > 0 THEN
SET finNumber = CONCAT(finNumber, sChar);
END IF;
SET inti = inti + 1;
END WHILE;
RETURN CAST(finNumber AS UNSIGNED);
ELSE
RETURN 0;
END IF;
END$$
DELIMITER ;
```
Once the function is defined, you can use it in your query:
```
SELECT ExtractNumber("abc1234def") AS number; -- 1234
```
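For comparison, the same digit-extraction idea is a one-liner in Python with a regular expression: keep every digit character and cast, defaulting to 0 when no digits are present.

```python
import re

def extract_number(s: str) -> int:
    """Concatenate all digit characters in s and return them as an int."""
    digits = re.sub(r"\D", "", s)
    return int(digits) if digits else 0

print(extract_number("abc1234def"))  # 1234
print(extract_number("19924&"))      # 19924
print(extract_number("no digits"))   # 0
```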
|
To whoever is still looking, use regex:
```
select REGEXP_SUBSTR(name,"[0-9]+") as amount from `subscriptions`
```
|
How to get only Digits from String in mysql?
|
[
"",
"mysql",
"sql",
""
] |
I'm running MySQL 5.1.71. In my database there are three tables - load, brass and mfg, with load being my "main" table. My goal is to query load and have mfg.name included in the results. I've tried various iterations of `JOIN` clauses vs sub-queries, both with and without `WHERE` clauses. It seems this should be pretty trivial, so I'm not sure why I can't arrive at the solution.
```
load
-------------------------
| id | desc | brass_id |
-------------------------
| 1 | One | 2 |
| 2 | Two | 1 |
-------------------------
brass
---------------
| id | mfg_id |
---------------
| 1 | 6 |
| 2 | 8 |
---------------
brass_mfg
------------------------
| id | name |
------------------------
| 6 | This Company |
| 8 | That Company |
------------------------
```
My desired results would be...
```
results
---------------------------
| load | mfg |
---------------------------
| One | That Company |
| Two | This Company |
---------------------------
```
* A load ID will always have only a single brass ID
* A brass ID will always have only a single mfg ID
**EDIT**
**The previously provided sample data (above) has been updated. Also, below are the query I'm running and the results I'm getting. The company is wrong in each record that is returned. I've included in the query and the results the IDs across the tables. The company names that appear are not the names in for the IDs in the mfg table.**
```
SELECT
load.id AS "load.id",
load.brass_id AS "load.brass_id",
brass.id AS "brass.id",
brass.mfg_id AS "brass.mfg_id",
brass_mfg.id AS "brass_mfg.id",
brass_mfg.name AS "brass_mfg.name"
FROM `load`
LEFT JOIN brass ON load.brass_id = brass.id
LEFT JOIN brass_mfg ON brass.id = brass_mfg.id
-----------------------------------------------------------------------------------------
| load.id | load.brass_id | brass.id | brass.mfg_id | brass_mfg.id | brass_mfg.name |
-----------------------------------------------------------------------------------------
| 1 | 2 | 2 | 6 | 2 | Wrong Company |
| 2 | 1 | 1 | 8 | 1 | Incorrect Company |
-----------------------------------------------------------------------------------------
```
|
Look at your tables and see what data relates to one another, then build up joins table by table to get your desired output.
```
SELECT p.desc AS Product, m.name AS mfg
FROM product p
INNER JOIN lot l ON p.lot_id = l.id
INNER JOIN mfg m ON l.mfg_id = m.id
```
|
You have an error in your join query.
Try this one:
```
Select
l.id AS "load.id",
l.brass_id AS "load.brass_id",
b.id AS "brass.id",
b.mfg_id AS "brass.mfg_id",
m.id AS "brass_mfg.id",
m.`name` AS "brass_mfg.name"
FROM `load` as l
LEFT JOIN brass as b ON l.brass_id = b.id
LEFT JOIN brass_mfg as m ON b.mfg_id = m.id
```
[](https://i.stack.imgur.com/KTxgf.png)
You need `LEFT JOIN` only
|
SQL Join vs Sub-query
|
[
"",
"mysql",
"sql",
""
] |
I have a table where I have to get the output as follows:
```
ID - amount - TotalAmount
1 - 400 - 400
2 - 600 - 1000
3 - 400 - 1400
```
The table has two columns: ID & amount. The 'TotalAmount' column should be computed when the SQL script runs; I hope the rest of the sum logic is clear from the above.
How can I do the above sum? Please share ideas. Thanks.
|
This is a cumulative sum. The ANSI standard method is as follows:
```
select id, amount, sum(amount) over (order by id) as TotalAmount
from t;
```
Most, but not all databases, support this syntax.
The above is the "right" solution. If your database doesn't support it, then a correlated subquery is one method:
```
select t.id, t.amount,
(select sum(t2.amount) from t t2 where t2.id <= t.id) as TotalAmount
from t;
```
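The ANSI running sum can be tried directly on the question's sample data, for example on SQLite (3.25 or newer, which supports window functions) from Python:

```python
import sqlite3  # window functions require SQLite 3.25+

# Running total with SUM(...) OVER (ORDER BY id), as in the answer.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, amount INTEGER)")
conn.executemany("INSERT INTO t VALUES (?,?)", [(1, 400), (2, 600), (3, 400)])

rows = conn.execute(
    "SELECT id, amount, SUM(amount) OVER (ORDER BY id) FROM t ORDER BY id"
).fetchall()
print(rows)  # [(1, 400, 400), (2, 600, 1000), (3, 400, 1400)]
```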
|
You didn't state your DBMS, so this is ANSI SQL:
```
select id, amount,
sum(amount) over (order by id) as totalamount
from the_table
```
|
SQL query to find sum of a column
|
[
"",
"sql",
""
] |
I got a table with the following structure:
```
Id | clientid | type | timeStamp | message |
```
I'm using this query to get the first rows of the table, to start deleting rows, but it is crashing the DB:
```
SELECT TOP 10 [id]
,[clientid]
,[type]
,[timeStamp]
,[message]
FROM [db].[dbo].[table]
WHERE timeStamp LIKE '%2014-01-01 00:00:00.000%'
```
Is there any way to get the first rows without crashing and delete them before arrive to `timeStamp` `'2016-01-01 00:00:00.000'`?
|
I'm not sure if I understand this correctly:
If you set an index on your TimeStamp column, it should be very fast to filter rows greater or smaller than a given date.
These lines will delete everything from your table where the TimeStamp is smaller than 2016-01-01. Only newer entries will remain...
### Attention: Be careful! Don't test against real data! :-)
```
DELETE FROM myTable
WHERE ID IN(SELECT ID
FROM myTable AS innerTbl
WHERE innerTbl.Timestamp<{ts'2016-01-01 00:00:00'}
)
```
### Update
This will delete 1000 rows per call. The number behind "GO" will execute this snippet 377000 times. Test with smaller numbers...
```
BEGIN TRANSACTION;
DELETE FROM myTable
WHERE ID IN(SELECT TOP 1000
ID
FROM myTable AS innerTbl
WHERE innerTbl.Timestamp<{ts'2016-01-01 00:00:00'}
);
COMMIT;
GO 377000
```
|
Simple?
```
WHERE timeStamp = '2014-01-01 00:00:00.000'
```
|
SQL - Poor Performance SELECT Query on 377 million table
|
[
"",
"sql",
"sql-server",
"performance",
"bigdata",
""
] |
I referenced many questions which have the same title as mine, but they have a different approach and different issue so this question is not a duplicate.
I have a table in which `column` `fm_sctrdate` is a `date` type and has the default value `0000-00-00`.
[](https://i.stack.imgur.com/guT7F.png)
Insertion via the website is working fine, but when I try to insert any value through `phpmyadmin` I get the following error.
[](https://i.stack.imgur.com/p4wjS.png)
The `MySQL` version is `5.7.11`. One more thing: recently our server has been upgraded from `mysqlnd 5.0.12` to `5.7.11`.
Here is the query
```
INSERT INTO `iavlif_fmp_clientquote` (`jm_cqid`, `fmsq_id`, `fmsg_id`,
`fm_sctrdate`, `fm_sctrtime`, `fm_sctbaggage_weight`,
`fm_sctfreight_weight`, `fm_sctpassenger`, `fm_sctinfant`,
`fm_sctinfant_details`, `fm_sctinfant_dob`, `fmtp_id`, `fmtpi_id`,
`jmcnt_id`, `fm_name`, `fm_company`, `fm_email`, `fm_phone`, `fmts_id`,
`jmts_id`, `fm_pax_nbr`, `fm_hours_nbr`, `fmqt_id`, `fmtr_id`,
`fm_sctnotes`, `fm_locdepart`, `fm_locarrive`, `fm_sctconsignment_weight`,
`fm_sctconsignment_dimensions`, `fm_sctconsignment_desc`, `fm_sctdangerous`,
`fm_scturgent`, `fm_sctspecial_instructions`, `fm_sctquote_type`,
`fm_sctwork_type`, `fm_sctreoccuring`, `fm_sctaccommodation`, `fm_sctcar`,
`fm_recdate`, `fm_recenddate`, `fm_recfrequency`, `fm_rectime`,
`fm_medical`, `fm_medical_details`, `fm_user_ip`, `fm_dang_details`,
`fm_sctsubstance`, `fm_sctpurchase_number`, `fm_role_id`, `fm_myrole_id`,
`jm_myrole_id`, `fm_sctwork_point`, `fm_locdepartarrive`, `fm_sctnbr`,
`fm_dateCreated`, `fm_cc`, `fm_gl`, `fm_timeCreated`, `jm_qtid`, `jmtp_id`,
`jmtpi_id`, `jmsg_id`, `jms_id`, `jmsq_id`, `fms_id`, `fmcq_id`) VALUES
(NULL, '1', '1', '0000-00-00', '1', '1', '1', 'sdfasd', 'No',
'sdafdsafdsaf', 'dsfas', 'sdfasd', 'dsafds', '0', 'asdfds', 'sdfasd',
'sdfads', 'sdaf', 'sdaf', 'sdaf', '0', '0', '0', '1sadfasdsda', 'sdfadsf',
'as', 'as', 'as', 'asas', 'asd', 'No', 'No', 'adsfsd', 'eqwrqew', 'qewrqew',
'No', 'No', 'No', '0000-00-00', '0000-00-00', 'ewqr', 'qewrw', 'No', 'eqwr',
'ewqr', 'qewr', '', '', '', '', '', '', '', '0', '0000-00-00', '', '',
'00:00:00.000000', '', '', '', '', '', '', '', NULL)
```
We started facing this issue just after the `MySQL` upgrade.
|
The error is because of the SQL mode, which may be strict mode, as per the latest MySQL 5.7 documentation.
To disable strict mode and other strict features, run this query:
```
SET GLOBAL sql_mode = '';
```
For more information read [this.](https://stackoverflow.com/a/36374690/5139222)
Hope it helps.
|
You have 3 options:
1. The SQL mode may be strict mode. **Go to mysql/bin/**,
open my.ini or my.cnf (depending on whether you are on Windows or Linux),
change **sql\_mode = "STRICT\_TRANS\_TABLES,NO\_AUTO\_CREATE\_USER,NO\_ENGINE\_SUBSTITUTION"**
to **sql\_mode = ""**,
then restart MySQL and run **set global sql\_mode='';**
2. Select NULL from the dropdown to keep it blank.
3. Select CURRENT\_TIMESTAMP to set current datetime as default value.
|
#1292 - Incorrect date value: '0000-00-00'
|
[
"",
"mysql",
"sql",
"date",
"phpmyadmin",
""
] |
I have this table in SQL Server
```
name type date
aaa A 2016-05-05
aaa A 2016-05-22
aaa B 2016-05-21
bbb A 2016-05-15
bbb B 2016-05-01
```
and I want to make a query to get this result
```
name count(type)
aaa 2.5
bbb 1.5
```
NB: for A the count must increase by 1, and for B by 0.5, because I have this rule:
```
count(type)=count(A)+count(B)/2
```
|
Try this:
```
SELECT SUM(CASE type WHEN 'A' THEN 1.0 WHEN 'B' THEN 0.5 END)
FROM mytable
GROUP BY name
```
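The CASE-based weighted count can be checked against the question's rows, for instance with SQLite from Python:

```python
import sqlite3

# A counts as 1.0, B counts as 0.5, summed per name.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (name TEXT, type TEXT)")
conn.executemany("INSERT INTO t VALUES (?,?)",
                 [("aaa", "A"), ("aaa", "A"), ("aaa", "B"),
                  ("bbb", "A"), ("bbb", "B")])

rows = conn.execute("""
    SELECT name, SUM(CASE type WHEN 'A' THEN 1.0 WHEN 'B' THEN 0.5 END)
    FROM t GROUP BY name ORDER BY name""").fetchall()
print(rows)  # [('aaa', 2.5), ('bbb', 1.5)]
```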
|
```
SELECT
SUM(
CASE type
WHEN 'B' THEN 0.5
WHEN 'A' THEN 1
END)
FROM <Table Name>
GROUP BY name
```
|
How can I get the count depending on the column value in SQL Server
|
[
"",
"sql",
"sql-server",
"count",
""
] |
I'm having trouble achieving something that seems like it should be simple. The example below shows people enrolled in classes, and a person can be enrolled in multiple classes, but only one currently:
```
DECLARE @Test TABLE
(
PersonId int NOT NULL,
LocationId int NOT NULL,
ClassId int NOT NULL,
IsEnrolled bit NOT NULL,
IsExited bit NOT NULL
)
```
Add Data:
```
INSERT INTO @Test
SELECT 1,5,6,1,0
UNION SELECT 1,6,7,0,1
UNION SELECT 2,5,8,1,0
UNION SELECT 2,5,9,0,1
UNION SELECT 3,5,9,0,1
UNION SELECT 3,6,9,1,0
```
I want to get all the records (current or not) for the people whose enrollment is all at the same location (having the same LocationId), but only the current values (IsEnrolled = 1) where the locations differ.
In other words, for the same PersonId I'd like all the records if the LocationId is unique, and only the current record (IsEnrolled = 1) if the LocationId is not unique for that PersonId.
Data I want to get back from a query:
```
SELECT
1 AS PersonId,
6 AS ClassId,
1 As IsEnrolled,
0 AS IsExited
UNION SELECT 2, 8, 1, 0
UNION SELECT 2, 9, 0, 1
UNION SELECT 3, 9, 1, 0
```
|
I think it's a lot simpler than previously tried methods:
```
SELECT Test.*
FROM @Test Test
JOIN
(
SELECT PersonID, LocationID
FROM @Test T
WHERE ISENROLLED = 1
    GROUP BY PersonID, LocationID
) T
ON Test.PersonID = T.PersonID
AND Test.LocationID = T.LocationID
```
Since you can only have one "isenrolled" record per person, the inner query is guaranteed to return one person/location combination for each person. Thus, joining to it on person and location ensures that you get every record for that person which was at the location of their currently enrolled class.
|
If I understand correctly you want all records that are either current or belong to a person who has always been in the same location. Hence:
```
select personid, classid, isenrolled, isexited
from mytable
where isenrolled = 1
or personid in
(
select personid
from mytable
group by personid
having min(locationid) = max(locationid)
)
order by personid, classid;
```
The same with window functions, so the table has to be read just once:
```
select personid, classid, isenrolled, isexited
from
(
select
personid, classid, isenrolled, isexited,
min(locationid) over (partition by personid) as minloc,
max(locationid) over (partition by personid) as maxloc
from mytable
)
where isenrolled = 1 or minloc = maxloc
order by personid, classid;
```
|
SELECT values from table with grouped id but differences in non-grouped columns
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
My problem occurs with a specific table with 3 columns.
```
itemnr --item number (int) 7 chars like 1111111
ccyymm --century year month (int) 6 chars like 201605
amount --amount of the specific item for that year month combo.
```
Basically I want to create a table that shows the amount of the past 12 months.
I created 12 virtual tables using the following code
```
SELECT *
FROM items
WHERE ccyymm = year(now())||right('00'||month(now()),2) -1
```
That shows me all the items with `where ccyymm = 201604`
and it works perfectly.
The problem is that when the month I am subtracting goes back past the current month, I have to subtract 1 from the year as well, so I used the following:
```
SELECT *
FROM items
WHERE ccyymm = (case
when month(now()) < 12
then year(now())- 1||right('00'||month(now()),2)
else year(now())||right('00'||month(now()),2) -12
end)
```
So if I want to get the data from 12 months ago and the month is less than 12, it just subtracts 1 from the year, so basically it has to give me `201505`. It says my SQL is valid, but it returns no values, yet when I look in the database there is data for that `ccyymm`.
|
```
SELECT * FROM ITEMS WHERE CCYYMM = (year(now()) * 100 + month(now())) - 100
```
|
You have tagged the question as both MySQL & SQL.
If you are using MySQL, then you could use the date functions :
```
SELECT * FROM ITEMS WHERE CCYYMM = date_format(date_sub(now(),INTERVAL 6 MONTH), '%Y%m');
```
|
SQL statement not returning any values
|
[
"",
"sql",
""
] |
I have a table which contains datetime rows like below.
```
ID | DateTime
1 | 12:00
2 | 12:02
3 | 12:03
4 | 12:04
5 | 12:05
6 | 12:10
```
I want to identify those rows where there is a 'gap' of 5 minutes between rows (for example, row 5 and 6).
I know that we need to use `DATEDIFF`, but how can I only get those rows which are consecutive with each other?
|
You can use [**`LAG`**](https://msdn.microsoft.com/en-us/library/hh231256.aspx), [**`LEAD`**](https://msdn.microsoft.com/en-us/library/hh213125.aspx) window functions for this:
```
SELECT ID
FROM (
SELECT ID, [DateTime],
DATEDIFF(mi, LAG([DateTime]) OVER (ORDER BY ID), [DateTime]) AS prev_diff,
DATEDIFF(mi, [DateTime], LEAD([DateTime]) OVER (ORDER BY ID)) AS next_diff
FROM mytable) AS t
WHERE prev_diff >= 5 OR next_diff >= 5
```
**Output:**
```
ID
==
5
6
```
**Note:** The above query assumes that order is defined by `ID` field. You can easily substitute this field with any other field that specifies order in your table.
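A runnable sketch of the same LAG/LEAD approach, executed here on SQLite (3.25+ for window functions) since it needs no server; `DATEDIFF` is emulated with `strftime('%s')`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mytable (ID INTEGER, dt TEXT)")
con.executemany("INSERT INTO mytable VALUES (?, ?)",
                [(1, "12:00"), (2, "12:02"), (3, "12:03"),
                 (4, "12:04"), (5, "12:05"), (6, "12:10")])

# prev_diff / next_diff in minutes, as in the T-SQL version above.
rows = sorted(con.execute("""
    SELECT ID FROM (
        SELECT ID,
            (strftime('%s', dt) - strftime('%s', LAG(dt) OVER (ORDER BY ID))) / 60 AS prev_diff,
            (strftime('%s', LEAD(dt) OVER (ORDER BY ID)) - strftime('%s', dt)) / 60 AS next_diff
        FROM mytable)
    WHERE prev_diff >= 5 OR next_diff >= 5
""").fetchall())
print(rows)  # [(5,), (6,)]
```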
|
## update SS2012: Use LAG
```
DECLARE @tbl TABLE(ID INT, T TIME)
INSERT INTO @tbl VALUES
(1,'12:00')
,(2,'12:02')
,(3,'12:03')
,(4,'12:04')
,(5,'12:05')
,(6,'12:10');
WITH TimesWithDifferenceToPrevious AS
(
SELECT ID
,T
,LAG(T) OVER(ORDER BY T) AS prev
,DATEDIFF(MI,LAG(T) OVER(ORDER BY T),T) AS MinuteDiff
FROM @tbl
)
SELECT *
FROM TimesWithDifferenceToPrevious
WHERE ABS(MinuteDiff) >=5
```
The result
```
6 12:10:00.0000000 12:05:00.0000000 5
```
|
Calculate Date difference between two consecutive rows
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I have 8 records below:
```
ID | Common ID | Reject
-------------------------
AB-1 | AB | NULL
AB-2 | AB | YES
AB-3 | AB | NULL
BB-1 | BB | YES
BB-2 | BB | YES
BB-3 | BB | YES
CB-1 | CB | YES
CB-2 | CB | YES
DB-1 | DB | NULL
```
My expected result is:
```
ID | Common ID | Reject
-------------------------
BB-1 | BB | YES
CB-1 | CB | YES
```
I only want to obtain distinct records when the reject column is yes for all of the records with the same Common ID.
|
```
select min(ID), [Common ID], max(Reject)
from tablename
group by [Common ID]
having count(*) = count(case when Reject = 'YES' then 1 end)
```
If a [Common ID] has the same number of rows as the number of YES, then return it!
The `HAVING` clause's `count(*)` returns the total number of rows for a `[Common ID]`. The `case` expression returns 1 if Reject = Yes, otherwise null. The right side `count` returns the number of rows where the case returns a non-null value (i.e. when Reject is yes!) When the same number of rows, `HAVING` is true!
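The counting trick can be verified quickly with SQLite; the table name is made up, but the data mirrors the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (ID TEXT, CommonID TEXT, Reject TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    ("AB-1", "AB", None), ("AB-2", "AB", "YES"), ("AB-3", "AB", None),
    ("BB-1", "BB", "YES"), ("BB-2", "BB", "YES"), ("BB-3", "BB", "YES"),
    ("CB-1", "CB", "YES"), ("CB-2", "CB", "YES"), ("DB-1", "DB", None)])

# COUNT(*) counts every row in the group; COUNT(CASE ...) counts only the YES rows.
rows = sorted(con.execute("""
    SELECT MIN(ID), CommonID, MAX(Reject)
    FROM t
    GROUP BY CommonID
    HAVING COUNT(*) = COUNT(CASE WHEN Reject = 'YES' THEN 1 END)
""").fetchall())
print(rows)  # [('BB-1', 'BB', 'YES'), ('CB-1', 'CB', 'YES')]
```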
**Edit:**
In this specific case, when the Reject column's values seem to be either YES or NULL, the `HAVING` can be simplified as:
```
having count(*) = count(Reject)
```
However, if other values (like NO) later will be found in the column, this won't work. So I recommend the original `HAVING` clause!
|
```
SELECT MIN(ID), CommonID, MIN(Reject) as Reject
FROM yourtable
GROUP BY CommonID
HAVING MIN(ISNULL(Reject, '')) = MAX(ISNULL(Reject, ''))
AND MIN(ISNULL(Reject, '')) = 'Yes'
```
EDIT: as you have NULL values, you will need to use ISNULL() on the column
|
Select distinct group of records only when all of the records' certain column is of a certain value
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I need to get a result set out of a MySQL database where there are multiple pictures referenced to each item. The pictures have a picture\_no field where the lowest number is the one I want to return with the result.
Currently I am using this query:
```
SELECT
ca.*,
ca.ID AS ad_id,
ca.cat_id,
UNIX_TIMESTAMP(ca.date_edited) AS date,
cp.*,
ar.area_name
FROM
$DB.$T4 ca
LEFT JOIN $T6 AS cp ON ca.ID = cp.classified_id AND cp.picture_no = 0
LEFT JOIN $T60 zip ON ca.zip_id = zip.zip_id
LEFT JOIN $T3 ar ON zip.area_id = ar.ID
WHERE
ca.extern = 0
AND ca.confirmed = 1
AND ca.advertiser_id = ".$user['uid']."
ORDER BY
date_created
LIMIT 10
```
As you can see, I am currently using `AND cp.picture_no = 0` since I could not figure out how to get the picture with the lowest number.
How could this be done? The function min() did not help me to solve the problem.
|
You could use this:
```
SELECT
ca.*,
ca.ID AS ad_id,
ca.cat_id,
UNIX_TIMESTAMP(ca.date_edited) AS date,
cp.*,
ar.area_name
FROM
$DB.$T4 ca
LEFT JOIN (SELECT tmp.*
FROM $T6 tmp
INNER JOIN (SELECT classified_id, MIN(picture_no) min_picture_no
FROM $T6
GROUP BY classified_id) tmp1
ON tmp.classified_id = tmp1.classified_id
and tmp.picture_no = tmp1.min_picture_no
) AS cp
ON ca.ID = cp.classified_id
LEFT JOIN $T60 zip ON ca.zip_id = zip.zip_id
LEFT JOIN $T3 ar ON zip.area_id = ar.ID
WHERE
ca.extern = 0
AND ca.confirmed = 1
AND ca.advertiser_id = ".$user['uid']."
ORDER BY
date_created
LIMIT 10
```
Edited: removed `AND cp.picture_no = 0` as it was only the asker's temporary workaround. Thanks `Paul Spiegel` for the suggestion.
|
You could change the join into something like this:
```
LEFT JOIN (select * from $T6 AS cp where ca.ID = cp.classified_id order by picture_no asc limit 1) on ca.ID = cp.classified_id
```
|
How to get MySQL result set with lowest picture number?
|
[
"",
"mysql",
"sql",
""
] |
I have two tables with the following structure:
```
|=================|
| posts |
|=================|
| ID | Title |
|-----------------|
| 1 | Title #1 |
|-----------------|
| 2 | Title #1 |
|-----------------|
| 3 | Title #1 |
|-----------------|
| 4 | Title #1 |
|-----------------|
| 5 | Title #1 |
|-----------------|
```
and
```
|==========================================|
| meta |
|==========================================|
| id | post_id | meta_key | meta_value |
|------------------------------------------|
| 1 | 1 | key_one | value for #1 |
|------------------------------------------|
| 2 | 1 | key_two | value for #1 |
|------------------------------------------|
| 3 | 1 | key_three | value for #1 |
|------------------------------------------|
| 4 | 2 | key_one | value for #2 |
|------------------------------------------|
| 5 | 2 | key_three | value for #2 |
|------------------------------------------|
| 6 | 3 | key_one | value for #3 |
|------------------------------------------|
| 7 | 3 | key_three | value for #3 |
|------------------------------------------|
```
and I need to get the following single result:
```
|----------------------------------------------------------------|
| ID | Post Title | Meta Key One | Meta Key Two | Meta Key Three |
|----------------------------------------------------------------|
| 1 | Title #1 | value for #1 | value for #1 | value for #1 |
|----------------------------------------------------------------|
| 2 | Title #2 | value for #2 | null | value for #2 |
|----------------------------------------------------------------|
| 3 | Title #3 | value for #3 | null | value for #3 |
|----------------------------------------------------------------|
```
But I don't know how to do this.
The SQL Query I have build until now is this:
```
SELECT
`p`.`ID` AS `ID`,
`p`.`Title` AS `Post Title`,
`mt1`.`meta_value` AS `Meta Key One`,
`mt2`.`meta_value` AS `Meta Key One`,
FROM
posts AS `p`
LEFT JOIN `meta` AS `mt1` ON ( `p`.`ID` = `mt1`.`post_id` )
LEFT JOIN `meta` AS `mt2` ON ( `p`.`ID` = `mt2`.`post_id` )
WHERE
1 = 1
AND `mt1`.`meta_key` = 'key_one'
AND `mt2`.`meta_key` = 'key_three';
```
The problem is that if I add a third `LEFT JOIN` in `meta` table to use it later on in `WHERE` clause and say `mt1.meta_key = 'key_two'` I get only one record instead of three.
Does anyone know how can I achieve this with a single query ?
I don't know if that helps, but I have created an SQL Fiddle here: <http://sqlfiddle.com/#!9/af591f/1>
Note that the column names in the fiddle don't match the ones in my example, but the problem remains the same.
|
<http://sqlfiddle.com/#!9/af591f/4>
```
SELECT
`b`.`id` AS `ID`,
`b`.`title` AS `Title`,
mt1.meta_value `KeyOne`,
mt2.meta_value `KeyTwo`,
mt3.meta_value `KeyThree`
FROM
`base` as `b`
LEFT JOIN `meta` mt1 ON b.id = mt1.base_id AND mt1.meta_key = 'key_one'
LEFT JOIN `meta` mt2 ON b.id = mt2.base_id AND mt2.meta_key = 'key_two'
LEFT JOIN `meta` mt3 ON b.id = mt3.base_id AND mt3.meta_key = 'key_three'
```
|
You can use *conditional aggregation* for this:
```
SELECT p.ID, p.Title AS 'Post Title',
MAX(CASE WHEN meta_key = 'key_one' THEN meta_value END) AS 'Meta Key One',
MAX(CASE WHEN meta_key = 'key_two' THEN meta_value END) AS 'Meta Key Two',
MAX(CASE WHEN meta_key = 'key_three' THEN meta_value END) AS 'Meta Key Three'
FROM posts AS p
LEFT JOIN meta AS m ON p.ID = m.post_id
GROUP BY p.ID, p.Title
```
The benefit of this method is that you use `LEFT JOIN` just once, so it is easily extensible in order to accommodate additional key values.
[**Demo here**](http://sqlfiddle.com/#!9/631f2/1)
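A self-contained check of the conditional-aggregation pivot, run here on SQLite with the question's data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE posts (ID INTEGER, Title TEXT);
    CREATE TABLE meta (id INTEGER, post_id INTEGER, meta_key TEXT, meta_value TEXT);
    INSERT INTO posts VALUES (1, 'Title #1'), (2, 'Title #2'), (3, 'Title #3');
    INSERT INTO meta VALUES
        (1, 1, 'key_one',   'value for #1'),
        (2, 1, 'key_two',   'value for #1'),
        (3, 1, 'key_three', 'value for #1'),
        (4, 2, 'key_one',   'value for #2'),
        (5, 2, 'key_three', 'value for #2'),
        (6, 3, 'key_one',   'value for #3'),
        (7, 3, 'key_three', 'value for #3');
""")

# One LEFT JOIN; each MAX(CASE ...) picks out a single key per post.
rows = sorted(con.execute("""
    SELECT p.ID, p.Title,
           MAX(CASE WHEN meta_key = 'key_one'   THEN meta_value END),
           MAX(CASE WHEN meta_key = 'key_two'   THEN meta_value END),
           MAX(CASE WHEN meta_key = 'key_three' THEN meta_value END)
    FROM posts p
    LEFT JOIN meta m ON p.ID = m.post_id
    GROUP BY p.ID, p.Title
""").fetchall())
print(rows[1])  # (2, 'Title #2', 'value for #2', None, 'value for #2')
```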
|
How to join those two tables in MySQL
|
[
"",
"mysql",
"sql",
""
] |
There is DDL script in PostgreSQL that creates tables.
For example, if first table exists,
**How to stop SQL script execution for PostgreSQL (within script)?**
|
If you want to abort a script based on a condition you can do that using a `DO` block that raises an error:
```
do
$$
declare
l_count integer;
begin
select count(*)
into l_count
from information_schema.tables
where table_name = 'foobar'
and table_schema = 'public';
if (l_count > 0) then
raise exception 'Table foobar already exists!';
end if;
end;
$$
```
This requires that your SQL client will abort a script if an error occurs.
---
Another option is to change your script such that it doesn't do anything if the table already exists by using `create table if not exists ....`.
But that depends on what exactly you are trying to achieve.
|
You can do something like this:
```
SELECT 'Step 1' as step;
DO $$
BEGIN
assert (SELECT 'B'::text) = 'A'::text, 'Expected A';
END;
$$;
SELECT 'Step 2' as step;
```
Returns:
```
ERROR: Expected A
CONTEXT: PL/pgSQL function inline_code_block line 3 at ASSERT
SQL state: P0004
```
You can use any type to compare assert with, not just text, apparently.
|
How to stop SQL script execution for PostgreSQL?
|
[
"",
"sql",
"postgresql",
""
] |
I was trying to create a view and needed a column that shows whether the 'someNumber' value exists in another table or not. The code below worked, but very slowly. Is it possible to declare a table as (select someNumber from veryYugeTable) and check against that one instead of running this query for every single record, or to use some other way to speed up the view?
```
case
when someOtherTable.someNumber in(select someNumber from veryYugeTable) then 'exists'
else 'doesn't exist'
end as "someColumn"
```
|
You might find you get some benefit with scalar subquery caching if you do something like:
```
coalesce((select 'exists'
from veryyugetable vyt
where vyt.somenumber = someOtherTable.someNumber
and rownum = 1),
'doesn''t exist') somecolumn
```
N.B. the `and rownum = 1` is not necessary if `vyt.somenumber` is a unique column. Also, I second the suggestion to index the `vyt.somenumber` column.
|
The query looks fine. You should have an index on `veryYugeTable.someNumber`.
Sometimes the optimizer handles correlated subqueries better than non-correlated ones, so you can try:
```
case
when exists
(
select *
from veryYugeTable
where veryYugeTable.someNumber = someOtherTable.someNumber
) then 'exists'
else 'doesn''t exist'
end as "someColumn"
```
(Well, as this query does exactly the same as yours, the optimizer should get to the same execution plan, but this is not always the case.)
But as mentioned: Make sure first to have that index.
|
Oracle 11g, how to speed up an 'in' query
|
[
"",
"sql",
"oracle",
"if-statement",
"view",
"oracle11g",
""
] |
I want to check which users have the most records in a database. So every user has a specific Id, and that Id is used as a reference in a few tables.
There are a few tables that contain a column UserId, like the table Exams, Answers, Questions, Classes, etc. Is it possible to count all the records with a specific UserId in all those tables?
|
```
;with cte as (
select rowsCount = count(*) from A where UserId = 1
union all
select rowsCount = count(*) from B where UserId = 1
union all
select rowsCount = count(*) from C where UserId = 1
)
select sum(rowsCount) from cte
```
|
I would do it in this way:
```
With AllRecords AS
(
SELECT TableName = 'Exams'
FROM dbo.Exams
WHERE UserId = @YouruserID
UNION ALL
SELECT TableName = 'Answers'
FROM dbo.Answers
WHERE UserId = @YouruserID
UNION ALL
SELECT TableName = 'Questions'
FROM dbo.Questions
WHERE UserId = @YouruserID
UNION ALL
SELECT TableName = 'Classes'
FROM dbo.Classes
WHERE UserId = @YouruserID
)
SELECT COUNT(*) FROM AllRecords
```
The table-name is not needed if you just want the count, it's just for the case that you want to know the source.
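A minimal sketch of the UNION ALL counting approach, run on SQLite with two hypothetical tables:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Exams   (UserId INTEGER);
    CREATE TABLE Answers (UserId INTEGER);
    INSERT INTO Exams   VALUES (1), (1), (2);
    INSERT INTO Answers VALUES (1);
""")

# Each branch contributes one row per matching record; COUNT(*) sums them all.
total = con.execute("""
    SELECT COUNT(*) FROM (
        SELECT UserId FROM Exams   WHERE UserId = 1
        UNION ALL
        SELECT UserId FROM Answers WHERE UserId = 1)
""").fetchone()[0]
print(total)  # 3
```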
|
How to get amount of records of a specific ID in a database?
|
[
"",
"sql",
"sql-server",
""
] |
I'm trying to aggregate a list of product skus with a query that relates through a line\_items table. I've abstracted a simple example of my use case:
**my expected result would look like this:**
```
id name skus
1 mike bar sku1,sku2,sku3
2 bort baz sku4
```
given a schema and data like:
products
```
id sku
1 sku1
2 sku2
3 sku3
4 sku4
```
line\_items
```
id order_id product_id
1 1 1
2 1 2
3 1 3
4 2 4
```
addresses
```
id name
1 'bill foo'
2 'mike bar'
3 'bort baz'
```
orders
```
id address_id total
1 2 66
2 3 99
```
Here's a working query, but it's not correct; I'm getting ALL products for each order. My `WHERE` should be using `orders.id`:
<http://sqlfiddle.com/#!15/70cd7/3/0>
However, I can't seem to use `orders.id`. I'm guessing I need to use a `JOIN` or `LEFT JOIN`, or somehow change the order of things in my query...
<http://sqlfiddle.com/#!15/70cd7/4>
|
You can use a correlated subquery with a `JOIN` to get the list of `sku`s for each `order`
```
SELECT
o.id,
a.name,
(SELECT array_to_string(array_agg(sku), ',') AS Skus
FROM products p
INNER JOIN line_items li
ON li.product_id = p.id
WHERE li.order_id = o.id
) AS Skus
FROM orders o
INNER JOIN addresses a
ON a.id = o.address_id
```
`ONLINE DEMO`
|
<http://sqlfiddle.com/#!15/70cd7/12>
```
SELECT orders.id,
addresses.name,
array_agg(DISTINCT products.sku )
FROM orders
LEFT JOIN addresses
ON orders.address_id = addresses.id
LEFT JOIN line_items
ON line_items.order_id = orders.id
LEFT JOIN products
ON products.id = line_items.product_id
GROUP BY orders.id,addresses.name
```
|
SQL aggregate query with one-to-many relationship with postgres
|
[
"",
"sql",
"postgresql",
"postgresql-9.3",
""
] |
I have a stored procedure that selects the top X rows from a table:
```
declare @i int ;
set @i = 10 ;
select top @i from tableNam
```
> Incorrect syntax near '@i'.
What should I do?
Suppose that @i comes from the parameters of my stored procedure.
|
You have to enclose the variable in parenthesis to make it work.
```
declare @i int ;
set @i = 10 ;
select top (@i) * from tableNam
```
If you hover the cursor over the error line you will see why you need the parentheses:
[](https://i.stack.imgur.com/tWjc6.png)
So the value should be an integer. Also, if you look at the [TOP keyword](https://msdn.microsoft.com/en-us/library/ms189463.aspx) on MSDN, it says to use parentheses. The syntax given by MSDN is:
```
[
TOP (expression) [PERCENT]
[ WITH TIES ]
]
```
|
Following syntax will work
```
declare @i int ;
set @i = 10 ;
select top (@i) * from sys.tables
```
|
Sql server use variable in selection top row
|
[
"",
"sql",
"sql-server",
""
] |
I want to use postgresql `to_number(numberString, format_mask)`. My numberString may contain leading zeros (I don't know the actual length of these zeros and the numberString in total). I want to trim these zeros and get the value as number.
I've read about this function [here](http://www.techonthenet.com/postgresql/functions/to_number.php) and currently using the format:
```
select to_number('00005469', '9999999999')
```
But if the length of '9's is less than the length of the numberString then I can't get the correct number. How can I make this work without writing a long list of '9' in format\_mask?
|
You don't need `to_number()` for this. Just cast the string to an integer:
```
select '00005469'::integer;
```
or using standard SQL:
```
select cast('00005469' as integer);
```
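The cast drops the leading zeros in other databases too; for example, the same `CAST` checked here on SQLite:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# The cast parses the digits and drops the leading zeros.
n = con.execute("SELECT CAST('00005469' AS INTEGER)").fetchone()[0]
print(n)  # 5469
```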
|
To be safe whether your column has decimals or not, use
```
select NULLIF('00005469', '')::decimal
```
This `select '00005469'::integer;` will not work if there are decimals
|
Postgresql to_number() function format
|
[
"",
"sql",
"postgresql",
"type-conversion",
"sqldatatypes",
""
] |
How to encapsulate Between statement in SQL server.
I have a lot of code to refactor that has a long list of WHERE clauses with the same BETWEEN statement full of magic numbers, and I want to refactor it. Is it possible to encapsulate the BETWEEN?
Example: the query has something similar to this. There is no possibility of a join; this is just a basic example to show that I would like to have something in the WHERE clause that does not repeat the BETWEEN:
```
SELECT ...
FROM Table0
WHERE Id BETWEEN 3456 AND 15500;
...
SELECT ...
FROM Table1
WHERE Id BETWEEN 3456 AND 15500;
...
SELECT ...
FROM Table2
WHERE Id BETWEEN 3456 AND 15500;
```
|
If you want a function you can do it.
```
create function dbo.isValid(@val int)
returns int
as
begin
    if @val between 3456 and 15500
        return 1
    return 0
end
```
And obvious usage:
```
select * from table1
where dbo.isValid(id) = 1
```
|
As @Rion said, either create variables or use a separate table
```
CREATE TABLE check_range
("range_id" int, "begin" int, "end" int)
;
INSERT INTO check_range
("range_id", "begin", "end")
VALUES
(1, 3456, 15500)
;
```
Then your query should be
```
SELECT ...
FROM Table0
CROSS JOIN check_range cr
WHERE Id BETWEEN cr.begin AND cr.end;
```
|
How to encapsulate Between statement in SQL server
|
[
"",
"sql",
"between",
""
] |
My example query is:
```
SELECT
tab1.col1, tab2.col1, tab3.col2
FROM
tab1 JOIN tab2 ON tab1.col1=tab2.col1
left JOIN tab3 ON tab1.col2=tab3.col2
WHERE blah blah GROUP BY blah blah HAVING blah blah;
```
I want to apply a condition (such as a `"where" or "having"` to just one of the joins but not the other. How do I do so?
Something like `"SELECT ___, ___, ___ FROM tab1 JOIN tab2 ON ___ WHERE ___ JOIN tab3 ON ___ WHERE ___ GROUP BY ___"` is impossible it seems.
Thanks for your help.
|
There are multiple ways to achieve that
First would be to simply put the filter clause at the end of the query. In most cases this will produce the same results, although sometimes it might not.
```
SELECT
tab1.col1, tab2.col1, tab3.col2
FROM
tab1
JOIN
tab2
ON tab1.col1=tab2.col1
LEFT JOIN
tab3
ON tab1.col2=tab3.col2
WHERE
blah blah
AND
desired_column=filter_value
GROUP BY
blah blah
HAVING
blah blah
;
```
Second option is to use nested queries. It is more likely that this method will produce the results of your liking. An intelligent query optimizer *may* rewrite this query to the one above or vice versa.
```
SELECT
tab1.col1, tab2.col1, tab3.col2
FROM
tab1
JOIN
(SELECT col1, col2 FROM tab2 WHERE desired_column=filter_value) tab2
ON tab1.col1=tab2.col1
LEFT JOIN
tab3
ON tab1.col2=tab3.col2
WHERE
blah blah
GROUP BY
blah blah
HAVING
blah blah
;
```
A third option, in case you are using `INNER JOIN`, is to mention this as part on join condition
```
SELECT
tab1.col1, tab2.col1, tab3.col2
FROM
tab1
JOIN
tab2
ON tab1.col1=tab2.col1
AND
desired_column=filter_value
LEFT JOIN
tab3
ON tab1.col2=tab3.col2
WHERE
blah blah
GROUP BY
blah blah
HAVING
blah blah
;
```
You may also use the last one with `OUTER JOINs` if you do not want to filter the set and only relate based upon the mentioned criteria.
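The difference between filtering in the `ON` clause and in the `WHERE` clause matters for outer joins, and is easy to see on a toy example (SQLite here, with hypothetical tables `a` and `b`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE a (id INTEGER);
    CREATE TABLE b (id INTEGER, flag TEXT);
    INSERT INTO a VALUES (1), (2);
    INSERT INTO b VALUES (1, 'keep');
""")

# Filter in ON: unmatched rows of a survive with NULLs.
on_rows = sorted(con.execute(
    "SELECT a.id, b.id FROM a LEFT JOIN b ON a.id = b.id AND b.flag = 'keep'"
).fetchall())
# Same filter in WHERE: the NULL rows are discarded, so the LEFT JOIN
# effectively degrades to an INNER JOIN.
where_rows = sorted(con.execute(
    "SELECT a.id, b.id FROM a LEFT JOIN b ON a.id = b.id WHERE b.flag = 'keep'"
).fetchall())
print(on_rows)     # [(1, 1), (2, None)]
print(where_rows)  # [(1, 1)]
```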
|
```
SELECT
tab1.col1, tab2.col1, tab3.col2
FROM
(select * from tab1 where / having) tab1 JOIN tab2 ON tab1.col1=tab2.col1
left JOIN
(select * from tab3 where / having) tab3 ON tab1.col2=tab3.col2
```
Is this what you want?
|
how can I apply a condition to just one part of a multiple join in SQL?
|
[
"",
"sql",
"join",
"conditional-statements",
"where-clause",
"having",
""
] |
I have a table with values:
```
--Product--
ASUS22
ASUSI522
ASUSI7256
ASUSI2262
ASUSI1267
ASUSI764
ASUSI712
```
and so on. I'm trying to select products starting with ASUSI followed by exactly 3 digits.
Somebody said I can use \d\d\d in order to achieve that, but it doesn't work (below):
```
select product from products where product like '%ASUS\d\d\d%'
```
So I want to select values :
```
ASUSI712
ASUSI764
ASUSI522
```
How can do this?
Thanks,
Regards,
|
In order to use character classes, you need to use `SIMILAR TO` instead of `LIKE`:
```
select product from products where product similar to 'ASUSI\d\d\d'
```
As @lad2025 notes, your original pattern does not match your expectation: `SIMILAR TO` must match the whole string, so drop the wildcards and include the `I`, leaving exactly three digits at the end.
|
The regular expression you want to match the string `ASUSI` with exactly three digits after it is:
```
^ASUSI\d{3}$
^ASUSI - starts with 'ASUSI'
\d{3} - followed by exactly three digits
$ - followed by the end of the string
SELECT product
FROM products
WHERE product ~ '^ASUSI\d{3}$'
```
|
Selecting values using \d
|
[
"",
"sql",
"postgresql",
""
] |
I have a select statement that gives a set of rows whose count is always a multiple of 8.
What I want to do is find the sum of the first 8 rows, the second 8 rows, and so on. Is there a way to do this?
|
```
select a from test;
select r/8, SUM(a) from (select ROW_NUMBER() over (order by a)-1 as r,a from test) tab group by r/8;
```
[](https://i.stack.imgur.com/kz53i.jpg)
```
update test set a=t2.v
from test t1,(select (ROW_NUMBER() over (order by a)-1)/8 as v,a from test) as t2
where t1.a=t2.a;
```
[](https://i.stack.imgur.com/osomr.jpg)
|
Idea: you declare `groups of 8 rows` by getting the `row number` of every row, subtracting 1, then dividing by 8. Then you calculate the sum of each group you just declared.
Assume that your original query is
```
select col1, col2, ..., value1, value2,...
from table
order by col1, col3,...
```
You could use this:
```
select number_group, sum(value1) , sum(value2)
from (
select (row_number() over (order by col1, col3,...) - 1)/8 number_group,
t.*
from table t
) tmp
group by number_group;
```
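The (ROW_NUMBER() - 1) / 8 grouping can be checked compactly on SQLite with 16 rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(16)])

# Integer division puts rows 1-8 into group 0, rows 9-16 into group 1, etc.
rows = sorted(con.execute("""
    SELECT number_group, SUM(a) FROM (
        SELECT (ROW_NUMBER() OVER (ORDER BY a) - 1) / 8 AS number_group, a
        FROM t)
    GROUP BY number_group
""").fetchall())
print(rows)  # [(0, 28), (1, 92)]
```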
|
How to sum 8 rows at a time in a select statement
|
[
"",
"sql",
"sql-server",
""
] |
I have a parent model **Post** and a child model **Comment**. Posts have privacy setting - column `privacy` in the DB. Any time when I have to deal with a child model **Comment** I have to check privacy settings if the parent model: `$comment->post->privacy`.
My app is becoming bigger and bigger, and this approach needs more and more SQL queries. Eager loading helps, but sometimes there is no other reason to touch the parent model except checking the privacy field.
My question is: Is it a good practice to duplicate the privacy column into the **Comments** table and keep them in sync? It will allow me to simply use `$comment->privacy` without touching the **Posts** table.
|
Planned redundancy (denormalization of the model) for a specific purpose can be good.
You specifically mention keeping the `privacy` column on the child table "in sync" with the `privacy` column in the parent table. That implies you have control of the redundancy. That's acceptable practice, especially for improved performance.
If it doesn't improve performance, then there wouldn't really be a need.
Uncontrolled redundancy can be bad.
|
Assuming that the privacy property has to be on the parent (if "Post" is not used directly on its own, you can always move the "privacy" property to all the children):
**First** you should try enhance the performance using optimization techniques (like indexes, materialized views.. etc.)
**Second**, if that didn't help much with the performance (a very rare case), you can start thinking about duplicating the information. But that should be your last option, and you need to take all possible measures to preserve data consistency (using constraints, triggers or whatever).
|
Duplicating columns from parent to child model. Good or bad practice?
|
[
"",
"mysql",
"sql",
"laravel",
"optimization",
""
] |
I am fairly new to SQL and am having some trouble with the last step in a slightly complicated SQL Query. I want to count how many times in the table, two distinct values appear in more than 1 row.
My specific scenario is that my table stores Messages/Alerts to and from a system. These alerts are sent to multiple people who are required to respond. I want to count how many of the recipients responded to each Alert.
I have edited down my query to the part i am stuck on, which is this:
```
SELECT DISTINCT AlertID, count(RecipientID) - count(Distinct RecipientID) as Replies,
FROM [myDB].[dbo].[Alerts]
```
This query shows how many messages each alert has, not including the original message to each recipient from the system. The issue is that if someone responds twice to a single alert, it is counted as two responses as opposed to 1 (which is what I want).
I thought I could do this by counting each DISTINCT AlertID and how many times a DISTINCT recipient's ID appears with the AlertID in more than 1 row. It wouldn't matter if someone responded twice, because as long as there is more than one it is counted as a single response. I'm having trouble getting this to work.
My data looks like this:
```
RecipientID MsgContents SentBy AlertID
12345 Msg1 mySystem 11111
98765 Msg1 MySystem 11111
12345 1st Reply to Msg1 John Doe 11111
12345 2nd Reply to Msg1 John Doe 11111
98765 reply to Msg1 Mike Smith 11111
12345 Msg3 mySystem 33333
12345 Reply to Msg3 John Doe 33333
12345 Msg2 mySystem 99999
```
I would like the output of the query to be:
```
AlertID Replies
11111 2
33333 1
99999 0
```
I thought [this answer](https://stackoverflow.com/questions/7029747/sql-query-to-get-count-of-rows-with-more-than-one-row-in-another-table) would help, but couldn't get it to work for me. I also would like to keep it in one query if possible, as the system will be dealing with large amounts of data. If this is not the easiest way to do it, i am open to all suggestions.
Any help would be greatly appreciated.
|
You can do this using a `CASE` expression inside `COUNT`.
```
SELECT
AlertID,
Replies = COUNT(DISTINCT CASE WHEN SentBy <> 'mySystem' THEN RecepientID END)
FROM alerts
GROUP BY AlertID
```
`ONLINE DEMO`
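The `COUNT(DISTINCT CASE ...)` behaviour can be verified on SQLite (note that SQLite string comparison is case-sensitive, so the sender is normalised to `mySystem` in the sample data):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE alerts (RecipientID TEXT, SentBy TEXT, AlertID TEXT)")
con.executemany("INSERT INTO alerts VALUES (?, ?, ?)", [
    ("12345", "mySystem",   "11111"), ("98765", "mySystem", "11111"),
    ("12345", "John Doe",   "11111"), ("12345", "John Doe", "11111"),
    ("98765", "Mike Smith", "11111"), ("12345", "mySystem", "33333"),
    ("12345", "John Doe",   "33333"), ("12345", "mySystem", "99999")])

# The CASE yields NULL for system messages; COUNT(DISTINCT ...) ignores NULLs
# and collapses repeat replies from the same recipient.
rows = sorted(con.execute("""
    SELECT AlertID,
           COUNT(DISTINCT CASE WHEN SentBy <> 'mySystem' THEN RecipientID END)
    FROM alerts
    GROUP BY AlertID
""").fetchall())
print(rows)  # [('11111', 2), ('33333', 1), ('99999', 0)]
```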
|
First, don't use `select distinct` when you should be using `group by`.
Then this produces the results you seem to want:
```
SELECT AlertID, (count(Distinct RecipientID) - 1) as Replies
FROM [myDB].[dbo].[Alerts]
GROUP BY AlertId;
```
You might actually want:
```
SELECT AlertID, count(Distinct case when sentBy <> 'mySystem' then RecipientID end) as Replies
FROM [myDB].[dbo].[Alerts]
GROUP BY AlertId;
```
|
Count number of times 2 distinct values appear in more than 1 row in SQL Table
|
[
"",
"sql",
"sql-server",
"vb.net",
"count",
""
] |
I'm wondering if there is a better, more optimal way to retrieve a number from a string,
eg.
```
"O5_KK/text1/1239312/006_textrandom"
"O5_KK/text1/1239315/0109_textrandom123"
"O5_KK/text1/1239318/0110_textrandom432"
'O5_KK/text1' - hardcoded, never change.
1239312,1239315,1239318 - random number, unique within row
textrandom,textrandom123,textrandom432 - random string
```
as output I would like to get only numbers:
```
006
0109
0110
```
I know how to do it by using the instr, substr and replace functions, but it looks terrible to read. I'm looking for another solution; any hints?
Thanks
|
You can use `regexp_substr()`:
```
select regexp_substr(val, '/[0-9]+_', 1, 1)
```
And then remove the extra characters:
```
select replace(replace(regexp_substr(val, '/[0-9]+_', 1, 1), '/', ''), '_', '')
```
|
This is simply the part after the third slash before the second underscore:
```
substr(str, instr(str, '/', 1, 3) + 1, instr(str, '_', 1, 2) - instr(str, '/', 1, 3) - 1)
```
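For readability, the same "after the third slash, before the next underscore" rule can be sketched outside SQL (Python here, purely to illustrate the parsing logic; the function name is made up):

```python
def extract(s):
    # part after the third '/', then the piece before the first '_' in it
    return s.split('/')[3].split('_')[0]

print(extract("O5_KK/text1/1239312/006_textrandom"))      # 006
print(extract("O5_KK/text1/1239315/0109_textrandom123"))  # 0109
```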
|
Retrieve number from string most effective way
|
[
"",
"sql",
"oracle",
""
] |
[SQL FIDDLE DEMO HERE](http://%20%20[1]:%20http://sqlfiddle.com/#!6/4c7c2)
I have this table structure for Workers table:
```
CREATE TABLE Workers
(
[Name] varchar(250),
[IdWorker] varchar(250),
[work] varchar(250)
);
INSERT INTO Workers ([Name], [IdWorker], [work])
values
('Sam', '001', 'Director'),
('Julianne', '002', 'Recepcionist'),
('Jose', '003', 'Recepcionist');
```
What I want is to get, for each job, the names of the workers separated by commas, like this:
```
Director Recepcionist
------- ------------
Sam Julianne, Jose
```
I tried to used this query:
```
DECLARE @rec VARCHAR(MAX)
SELECT @rec = COALESCE(@rec + ', ', '') + Name
FROM Workers WHERE job = 'Recepcionist'
SELECT @rec AS Recepcionist
```
And I got this result:
```
Recepcionist
------------
Julianne, Jose
```
This works only for one job, but I need to add more, so I then tried to use this query:
```
SELECT [Director] , [Recepcionist]
FROM
(SELECT [job], [Name],RANK() OVER (PARTITION BY [job] ORDER BY [job],[Name]) as rnk
FROM Workers ) p
PIVOT(
Min([Name])
FOR [job] IN
( [Director] , [Recepcionist] )
) AS pvt
```
And I get this result:
```
Director Recepcionist
-------- ------------
Sam Julianne
Jose
```
I need to get the results in the same row separated by commas. How can I combine the two queries?
I am open to suggestions, thanks.
|
I am assuming in your example query you meant job to reference the work column. The following query should do the job as per your sql fiddle.
```
SELECT STUFF(
(
SELECT ', ' + cast([Name] as varchar(max))
FROM Workers
WHERE [work] = 'Recepcionist'
FOR XML PATH('')
), 1, 2, ''
) AS Recepcionist
,STUFF(
(
SELECT ', ' + cast([Name] as varchar(max))
FROM Workers
WHERE [work] = 'Director'
FOR XML PATH('')
), 1, 2, '') AS Director;
```
|
the way you would do this using PIVOT would be like this.
```
SELECT *
FROM (SELECT [work],
STUFF((SELECT ', ' + [Name]
FROM Workers s
WHERE s.WORK = w.WORK
FOR XML PATH('')),
1, 2, '') AS [workers]
FROM Workers w) t
PIVOT (
MAX([workers])
FOR [work] IN ([Director], [Recepcionist])
) p
```
another alternative to PIVOT is MAX(CASE)
```
SELECT MAX(CASE WHEN [work] = 'Director' THEN [workers] END) AS [Director],
MAX(CASE WHEN [work] = 'Recepcionist' THEN [workers] END) AS [Recepcionist]
FROM (SELECT [work],
STUFF((SELECT ', ' + [Name]
FROM Workers s
WHERE s.WORK = w.WORK
FOR XML PATH('')),
1, 2, '') AS [workers]
FROM Workers w) t
```
both of these allow you to separate the data by other fields like Company or Department
|
How can I join together this querys sql server?
|
[
"",
"sql",
"sql-server",
"join",
"pivot",
"coalesce",
""
] |
I'm just writing a query to look through my client's customers database and to list how many orders they have made, etc.
What I'm struggling to add to this query is to show only the most recent OrderID for that email.
Any ideas?
Here is my query
```
select top 1000
BuyerEMail
,COUNT(*) HowMany
,Name
from Orders
where
Pay != 'PayPal'
group by
BuyerEmail
,Name
order by
HowMany Desc
```
|
Give this a go;
```
SELECT TOP 1000
o.BuyerEMail
,COUNT(*) HowMany
,o.Name
,o2.OrderID
FROM Orders o
JOIN
(
SELECT
BuyerEmail
,MAX(OrderDate) Latest
FROM Orders
GROUP BY BuyerEmail
) l
ON o.BuyerEmail = l.BuyerEmail
JOIN Orders o2
ON l.BuyerEmail = o2.BuyerEmail
AND l.Latest = o2.OrderDate
WHERE Pay != 'PayPal'
GROUP BY
    o.BuyerEmail
    ,o.Name
    ,o2.OrderID
ORDER BY
COUNT(*) DESC
```
It works out the latest order for each email address in a subquery; you can then use this in the SELECT. I've also aliased the tables to make things easier.
You can do this another way too, by nesting subqueries;
```
SELECT TOP 1000
o.BuyerEMail
,COUNT(*) HowMany
,o.Name
,l.OrderID
FROM Orders o
JOIN
(
SELECT
BuyerEmail
,OrderID
FROM
Orders ord
JOIN
(
SELECT
BuyerEmail
,MAX(OrderDate) Latest
FROM Orders
GROUP BY BuyerEmail
) ma
ON ord.BuyerEmail = ma.BuyerEmail
AND ord.OrderDate = ma.OrderDate
) l
ON o.BuyerEmail = l.BuyerEmail
WHERE Pay != 'PayPal'
GROUP BY
o.BuyerEmail
,o.Name
,l.OrderID
ORDER BY
COUNT(*) DESC
```
|
If you are having trouble writing SQL queries, try to break your needs up into single statements.
First you wanted the number of orders per buyer, which you already solved.
```
SELECT BuyerEMail
, Name
, COUNT(*) as TotalOrders
FROM Orders
WHERE Pay <> 'PayPal'
GROUP BY BuyerEmail, Name
Order By TotalOrders Desc
```
Now you wanted to display the latest order for each buyer. Something like this would do:
```
SELECT BuyerEMail
, Name
, MAX(OrderDate) LatestOrder
FROM Orders
GROUP BY BuyerEmail, Name
```
Next, you need to combine your output to one statement. If you compare the two statements, both are grouped by the same set (Buyer and Name), so you could sum it up to:
```
SELECT BuyerEMail
, Name
, COUNT(*) as TotalOrders
, MAX(OrderDate) LatestOrder
FROM Orders
WHERE Pay <> 'PayPal'
GROUP BY BuyerEmail, Name
```
If you only want to count the orders having Pay != 'PayPal', you could do:
```
SELECT BuyerEMail
, Name
, COUNT(CASE WHEN Pay != 'PayPal' THEN 1 END) as TotalOrders
, MAX(OrderDate) LatestOrder
FROM Orders
GROUP BY BuyerEmail, Name
```
Now you commented that you would also want the OrderID of the latest order. A `LEAD()` function on SQL Server 2012+ could do it, or a subselect, or my preferred option, a CROSS APPLY:
```
SELECT o.*
, OrderID as LastOrderID
, OrderDate as LastOrderDate
FROM (
SELECT BuyerEMail
, Name
, COUNT(*) as TotalOrders
FROM Orders
WHERE Pay != 'PayPal'
GROUP BY BuyerEmail, Name
) o
CROSS APPLY (
SELECT TOP 1 OrderID, OrderDate
FROM Orders s
WHERE s.BuyerEmail = o.BuyerEmail
ORDER BY OrderDate DESC
) ca
```
As you can see, things become easier if you split it up in smaller logical parts.
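SQLite has no `CROSS APPLY`, but the same "count plus latest OrderID" result can be sketched with a correlated subquery (toy data, hypothetical column values):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Orders (OrderID INTEGER, BuyerEmail TEXT, Name TEXT,
                         OrderDate TEXT, Pay TEXT);
    INSERT INTO Orders VALUES
        (1, 'a@x', 'Ann', '2016-01-01', 'Card'),
        (2, 'a@x', 'Ann', '2016-02-01', 'Card'),
        (3, 'b@x', 'Bob', '2016-01-15', 'Card');
""")

# The correlated subquery plays the role of the CROSS APPLY: it fetches
# the single most recent OrderID for each grouped buyer.
rows = con.execute("""
    SELECT o.BuyerEmail, o.Name, COUNT(*) AS HowMany,
           (SELECT OrderID FROM Orders s
            WHERE s.BuyerEmail = o.BuyerEmail
            ORDER BY OrderDate DESC LIMIT 1) AS LastOrderID
    FROM Orders o
    WHERE Pay <> 'PayPal'
    GROUP BY o.BuyerEmail, o.Name
    ORDER BY HowMany DESC
""").fetchall()
print(rows)  # [('a@x', 'Ann', 2, 2), ('b@x', 'Bob', 1, 3)]
```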
|
SQL most recent order? MS SQL
|
[
"",
"sql",
"sql-server",
""
] |