I have these three tables in SQL Server 2012.
```
CREATE TABLE [dbo].[ClientInfo]
(
[ClientID] [int] IDENTITY(1,1) NOT NULL,
[ClientName] [varchar](50) NULL,
[ClientAddress] [varchar](50) NULL,
[City] [varchar](50) NULL,
[State] [varchar](50) NULL,
[DOB] [date] NULL,
[Country] [varchar](50) NULL,
[Status] [bit] NULL,
PRIMARY KEY (ClientID)
)
CREATE TABLE [dbo].[ClientInsuranceInfo]
(
[InsID] [int] IDENTITY(1,1) NOT NULL,
[ClientID] [int] NULL,
[InsTypeID] [int] NULL,
[ActiveDate] [date] NULL,
[InsStatus] [bit] NULL,
PRIMARY KEY (InsID)
)
CREATE TABLE [dbo].[TypeInfo]
(
[TypeID] [int] IDENTITY(1,1) NOT NULL,
[TypeName] [varchar](50) NULL,
PRIMARY KEY (TypeID)
)
```
Some sample data to execute the query
```
insert into ClientInfo (ClientName, State, Country, Status)
values ('Lionel Van Praag', 'NSW', 'Australia', 'True')
insert into ClientInfo (ClientName, State, Country, Status)
values ('Bluey Wilkinson', 'NSW', 'Australia', 'True')
insert into ClientInfo (ClientName, State, Country, Status)
values ('Jack Young', 'NSW', 'Australia', 'True')
insert into ClientInfo (ClientName, State, Country, Status)
values ('Keith Campbell', 'NSW', 'Australia', 'True')
insert into ClientInfo (ClientName, State, Country, Status)
values ('Tom Phillis', 'NSW', 'Australia', 'True')
insert into ClientInfo (ClientName, State, Country, Status)
values ('Barry Smith', 'NSW', 'Australia', 'True')
insert into ClientInfo (ClientName, State, Country, Status)
values ('Steve Baker', 'NSW', 'Australia', 'True')
insert into TypeInfo (TypeName) values ('CarInsurance')
insert into TypeInfo (TypeName) values ('MotorcycleInsurance')
insert into TypeInfo (TypeName) values ('HeavyVehicleInsurance')
insert into ClientInsuranceInfo (ClientID, InsTypeID, ActiveDate, InsStatus)
values ('1', '1', '2000-01-11', 'True')
insert into ClientInsuranceInfo (ClientID, InsTypeID, ActiveDate, InsStatus)
values ('1', '2', '2000-01-11', 'True')
insert into ClientInsuranceInfo (ClientID, InsTypeID, ActiveDate, InsStatus)
values ('2', '1', '2000-01-11', 'True')
insert into ClientInsuranceInfo (ClientID, InsTypeID, ActiveDate, InsStatus)
values ('2', '2', '2000-01-11', 'True')
insert into ClientInsuranceInfo (ClientID, InsTypeID, ActiveDate, InsStatus)
values ('2', '3', '2000-01-11', 'True')
insert into ClientInsuranceInfo (ClientID, InsTypeID, ActiveDate, InsStatus)
values ('3', '1', '2000-01-11', 'True')
insert into ClientInsuranceInfo (ClientID, InsTypeID, ActiveDate, InsStatus)
values ('4', '1', '2000-01-11', 'True')
insert into ClientInsuranceInfo (ClientID, InsTypeID, ActiveDate, InsStatus)
values ('4', '3', '2000-01-11', 'True')
insert into ClientInsuranceInfo (ClientID, InsTypeID, ActiveDate, InsStatus)
values ('5', '2', '2000-01-11', 'True')
```
I have written the following query which returns only those clients who have 'MotorcycleInsurance' type:
```
select distinct
ClientInfo.ClientID, ClientInfo.ClientName, TypeInfo.TypeName
from
ClientInfo
left join
ClientInsuranceInfo on ClientInfo.ClientID = ClientInsuranceInfo.ClientID
left join
TypeInfo on ClientInsuranceInfo.InsTypeID = TypeInfo.TypeID
and TypeInfo.TypeID = 2
where
typeinfo.TypeName is not null
```
But I want to do the following things:
* Return all clients who have 'MotorcycleInsurance', along with the rest of the clients who do not have it.
* `TypeName` should be returned as NULL for clients who do not have 'MotorcycleInsurance'.
* `ClientID` has to be unique in the result set.
* I do not want to use `UNION` / `UNION ALL`.
How can I do this?
My required result is as follows:
[](https://i.stack.imgur.com/REy6Q.jpg)
|
Try this one using `ROW_NUMBER`:
```
select a.ClientID, a.ClientName, a.TypeName
from
(
select distinct ClientInfo.ClientID, ClientInfo.ClientName, TypeInfo.TypeName,
       ROW_NUMBER() over (partition by ClientInfo.ClientID
                          order by case when TypeName is null then 1 else 0 end) as rn
from ClientInfo
left join ClientInsuranceInfo on ClientInfo.ClientID = ClientInsuranceInfo.ClientID
left join TypeInfo on ClientInsuranceInfo.InsTypeID = TypeInfo.TypeID and TypeInfo.TypeID = 2
) a
where rn = 1
```
Edit: updated the ordering to use a `case` expression inside `ROW_NUMBER`.
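For readers who want to poke at the pick-one-row-per-client idea without a SQL Server instance, here is a minimal sketch in Python with SQLite (window functions need SQLite 3.25+); the single flattened `client` table is a hypothetical stand-in for the question's three-table join:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE client (ClientID INTEGER, ClientName TEXT, TypeName TEXT);
    -- client 1 has the wanted type plus a non-matching row; client 2 has none
    INSERT INTO client VALUES
        (1, 'Lionel', 'MotorcycleInsurance'),
        (1, 'Lionel', NULL),
        (2, 'Bluey',  NULL);
""")

# Rows with a non-NULL TypeName sort first, so rn = 1 prefers them.
rows = con.execute("""
    SELECT ClientID, ClientName, TypeName
    FROM (SELECT ClientID, ClientName, TypeName,
                 ROW_NUMBER() OVER (
                     PARTITION BY ClientID
                     ORDER BY CASE WHEN TypeName IS NULL THEN 1 ELSE 0 END
                 ) AS rn
          FROM client)
    WHERE rn = 1
    ORDER BY ClientID
""").fetchall()
print(rows)   # [(1, 'Lionel', 'MotorcycleInsurance'), (2, 'Bluey', None)]
```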
|
Just remove your `where` clause and change the join condition slightly:
```
select distinct ClientInfo.ClientID, ClientInfo.ClientName, TypeInfo.TypeName
from ClientInfo
left join ClientInsuranceInfo
on ClientInfo.ClientID = ClientInsuranceInfo.ClientID
and ClientInsuranceInfo.InsTypeID = 2
left join TypeInfo on ClientInsuranceInfo.InsTypeID = TypeInfo.TypeID
```
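The key point is that putting the type filter in the `ON` clause (not the `WHERE`) keeps clients without that type in the result. A small sketch in Python with SQLite, using abbreviated stand-ins for the question's tables:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE ClientInfo (ClientID INTEGER, ClientName TEXT);
    CREATE TABLE ClientInsuranceInfo (ClientID INTEGER, InsTypeID INTEGER);
    INSERT INTO ClientInfo VALUES (1, 'Lionel'), (3, 'Jack');
    INSERT INTO ClientInsuranceInfo VALUES (1, 1), (1, 2), (3, 1);
""")

# Filtering in ON keeps every client; the same filter in WHERE would drop Jack.
rows = con.execute("""
    SELECT c.ClientID, c.ClientName, i.InsTypeID
    FROM ClientInfo c
    LEFT JOIN ClientInsuranceInfo i
           ON c.ClientID = i.ClientID AND i.InsTypeID = 2
    ORDER BY c.ClientID
""").fetchall()
print(rows)   # [(1, 'Lionel', 2), (3, 'Jack', None)]
```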
|
SQL Server query help needed for joining
|
[
"",
"sql",
"sql-server",
"database",
"t-sql",
""
] |
From the following table, I would like to return the set of all books that have topic 3 but do not have topic 4.
```
id book topic
10 1000 3
11 1000 4
12 1001 2
13 1001 3
14 1002 4
15 1003 3
```
The correct table should be:
```
book
1001
1003
```
I made a SQL Fiddle with this for testing [here](http://sqlfiddle.com/#!3/0bf7a/2).
So far, I tried the following, but it returns `1000`, `1001`, `1003` (because I am not comparing two rows with each other, and do not know how to do so):
```
SELECT DISTINCT a.id
FROM TOPICS a, TOPICS b
WHERE a.id = b.id
AND a.topic = 3
AND NOT EXISTS (
SELECT DISTINCT c.id
FROM TOPICS c
WHERE a.id = c.id
AND b.topic = 4
)
```
|
Since you're using SQL Server (at least in the fiddle) you could use the `EXCEPT` set operator:
```
SELECT DISTINCT book FROM TOPICS WHERE topic = 3
EXCEPT
SELECT DISTINCT book FROM TOPICS WHERE topic = 4
```
This would return 1001 and 1003.
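The behaviour is easy to verify with the question's sample data; SQLite supports `EXCEPT` too, so a quick check in Python looks like this:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE topics (id INTEGER, book INTEGER, topic INTEGER);
    INSERT INTO topics VALUES
        (10, 1000, 3), (11, 1000, 4), (12, 1001, 2),
        (13, 1001, 3), (14, 1002, 4), (15, 1003, 3);
""")

# Books with topic 3, minus books with topic 4.
books = [r[0] for r in con.execute("""
    SELECT book FROM topics WHERE topic = 3
    EXCEPT
    SELECT book FROM topics WHERE topic = 4
    ORDER BY book
""")]
print(books)   # [1001, 1003]
```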
|
You're almost there. The `not exists` condition is definitely the right idea, you just need to apply another one of these for checking the existence of topic 3:
```
SELECT DISTINCT a.book
FROM topics a
WHERE EXISTS (SELECT *
FROM topics b
WHERE a.book = b.book AND b.topic = 3) AND
NOT EXISTS (SELECT *
FROM topics b
WHERE a.book = b.book AND b.topic = 4)
```
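A quick check of this `EXISTS` / `NOT EXISTS` query against the question's sample data, using Python with SQLite:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE topics (id INTEGER, book INTEGER, topic INTEGER);
    INSERT INTO topics VALUES
        (10, 1000, 3), (11, 1000, 4), (12, 1001, 2),
        (13, 1001, 3), (14, 1002, 4), (15, 1003, 3);
""")

# Books that have a topic-3 row and no topic-4 row.
books = [r[0] for r in con.execute("""
    SELECT DISTINCT a.book
    FROM topics a
    WHERE EXISTS (SELECT 1 FROM topics b
                  WHERE a.book = b.book AND b.topic = 3)
      AND NOT EXISTS (SELECT 1 FROM topics b
                      WHERE a.book = b.book AND b.topic = 4)
    ORDER BY a.book
""")]
print(books)   # [1001, 1003]
```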
|
Select values in SQL that have a column value in one row and not a specific other column value in another
|
[
"",
"mysql",
"sql",
"sql-server",
"select",
""
] |
Can someone suggest how to add a condition in `WHERE` clause of my stored procedure?
```
CREATE Procedure getAllEmployeesByDeptAndFlag
@Dept int,
@sal int,
@Flag int
AS
if @flag = 1
select *
from employee
where Department = @dept and @sal < 10000
else
select *
from employee
where Department = @dept
```
Is there any way to simplify above procedure?
|
You could use the `or` logical operator to unify both branches of the `if` statement:
```
select * from employee where Department = @dept AND (@flag != 1 OR @sal < 10000)
```
|
You could define that if you pass in `NULL` for `@sal`, then the salary is not being checked. No need for an extra `@flag` parameter...
```
CREATE Procedure getAllEmployeesByDeptAndFlag
@Dept int,
@sal int
AS
SELECT
(list of columns)
FROM
dbo.employee
WHERE
Department = @dept
AND (@sal IS NULL OR salary <= @sal)
```
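The `(@sal IS NULL OR salary <= @sal)` trick works with any parameterized query, not just stored procedures. A minimal sketch in Python with SQLite, using a hypothetical `employee` table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE employee (name TEXT, Department INTEGER, salary INTEGER);
    INSERT INTO employee VALUES ('ann', 1, 8000), ('bob', 1, 12000), ('cid', 2, 9000);
""")

query = """
    SELECT name FROM employee
    WHERE Department = :dept
      AND (:sal IS NULL OR salary <= :sal)
    ORDER BY name
"""
# With a salary cap, and with the cap "switched off" by passing NULL:
capped = [r[0] for r in con.execute(query, {"dept": 1, "sal": 10000})]
allrows = [r[0] for r in con.execute(query, {"dept": 1, "sal": None})]
print(capped, allrows)   # ['ann'] ['ann', 'bob']
```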
|
Condition based where clause SQL Server stored procedure
|
[
"",
"sql",
"sql-server",
"select",
"if-statement",
"stored-procedures",
""
] |
I have a table `itemsInShippment` with the following data:
```
itemid shippmentid qty
10 1 100
20 1 200
10 2 300
10 3 1000
```
and table `shippments`
```
shippmentid date shippmentstatus supplierid
1 2015-01-12 OK 5000
2 2015-01-17 OK 5000
3 2015-01-17 Cancelled 5000
```
I need to write a query that shows these details about a specific shipment, say `shippmentid 1`. My given parameters are `supplierid` and `date`; together they identify one unique shipment.
For `supplierid=5000` and `date=2015-01-12` I want to get:
```
itemid qty qtyInOtherShipments
10 100 300 //1000 is canceled.
20 200 0
```
My query works fine without considering the cancelled:
```
SELECT cte.*
FROM
(SELECT
a.itemid, b.date, a.qty,
(coalesce( SUM(a.qty) OVER (PARTITION BY a.itemid), 0) -
coalesce( SUM(a.qty) OVER (PARTITION BY a.itemid, a.shippmentid), 0)) AS qtyInOtherShipments
FROM
itemsInShippment a
LEFT JOIN
shippments b using (shippmentid)
WHERE
b.supplierid = 5000) AS cte
WHERE
cte.date = '2015-01-12'
```
The `cte` has to be structured this way: in `qtyInOtherShipments` I sum the total `qty` and then subtract my own shipment's `qty`. In order to sum the total `qty` I can't filter with `WHERE b.date = ...` inside the subquery; I must do that outside.
This query gives:
```
itemid qty qtyInOtherShipments
10 100 1300
20 200 0
```
I'm having trouble taking under consideration the `cancelled` shipments.
If I change the `WHERE` to:
```
where b.supplierid = 5000 and b.shippmentstatus not like 'cancelled'
```
it works... I will see:
```
itemid qty qtyInOtherShipments
10 100 300
20 200 0
```
but if I run the query on cancelled shipments (`supplierid=5000 and date=2015-01-17`) I will get:
```
itemid qty qtyInOtherShipments
nothing
```
What I should have gotten is:
```
itemid qty qtyInOtherShipments
10 1000 300
```
So my problem is that I don't want to sum quantities from cancelled shipments, but I still want to see those rows.
How do I get the correct result?
|
You want to exclude cancelled items only from the sums. So do not filter them out with `WHERE`; just filter them inside the sums:
```
SUM(case when b.shippmentstatus <> 'cancelled' then a.qty end) OVER (PARTITION BY ...
```
`SUM` ignores NULL; that's why the above works. (When the status is cancelled, the `CASE` expression returns NULL.)
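A minimal demonstration of the conditional-sum idea in Python with SQLite (window functions need SQLite 3.25+); the table is a cut-down stand-in for the question's schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE shipments (qty INTEGER, status TEXT);
    INSERT INTO shipments VALUES (100, 'OK'), (300, 'OK'), (1000, 'Cancelled');
""")

# Cancelled rows contribute NULL, which SUM ignores;
# every row still appears in the output.
rows = con.execute("""
    SELECT qty, status,
           SUM(CASE WHEN status <> 'Cancelled' THEN qty END) OVER () AS total_ok
    FROM shipments
""").fetchall()
print(rows)   # total_ok is 400 on every row, the cancelled one included
```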
|
A more efficient variant of Florian's answer exists for PostgreSQL 9.4+: the `FILTER` clause for an aggregate.
```
SUM (a.qty) FILTER (WHERE b.shippmentstatus <> 'cancelled') OVER (PARTITION BY ...
```
See [`FILTER`](http://www.postgresql.org/docs/current/static/sql-expressions.html) in the docs for aggregates. It's basically a mini-`WHERE` clause that applies only for that aggregate.
Thanks to @a\_horse\_with\_no\_name for pointing it out earlier.
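SQLite (3.30+) accepts the same `FILTER` syntax, which makes the clause easy to try locally; again with a cut-down stand-in for the question's table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE shipments (qty INTEGER, status TEXT);
    INSERT INTO shipments VALUES (100, 'OK'), (300, 'OK'), (1000, 'Cancelled');
""")

# FILTER keeps cancelled rows out of the aggregate, but every row still shows up.
rows = con.execute("""
    SELECT qty, status,
           SUM(qty) FILTER (WHERE status <> 'Cancelled') OVER () AS total_ok
    FROM shipments
""").fetchall()
print(rows)   # total_ok is 400 on every row, the cancelled one included
```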
|
How to exclude rows from sum but still show them?
|
[
"",
"sql",
"postgresql",
""
] |
I'm trying to make this SQL statement simpler, and am looking for a way to change ONLY the `WHERE` clause when there are no results, so I don't have to repeat the rest.
I've already tried with the `OR` and it's not what I need.
It searches first for a record that exactly matches the `ID` field. If there is no result, it will then look for an `ID LIKE`. If there is still no result, it will check the same input against the `Nombre` column.
```
IF EXISTS (SELECT ID FROM StockDetalles WHERE ID = '112')
BEGIN
SELECT StockDetalles.ID, Negocios.NombreNegocio AS [Nombre Local], StockDetalles.Nombre, Stock.[Precio de Venta], Stock.Cantidad,
Stock.[Fecha Actualizacion de Precio], Stock.[Fecha Actualizacion de Cantidad], StockDetalles.Proveedor, StockDetalles.[Precio de Compra],
Stock.[Cantidad Reposicion], StockDetalles.CategoriaID, StockFotos.Foto
FROM Stock INNER JOIN
StockDetalles ON Stock.ID = StockDetalles.ID INNER JOIN
Negocios ON Stock.IDNegocio = Negocios.IDNegocio INNER JOIN
StockFotos ON StockDetalles.ID = StockFotos.IDProducto
WHERE (StockDetalles.ID = '112')
AND Stock.IDNegocio = 1
END
ELSE IF EXISTS(SELECT ID FROM StockDetalles WHERE ID LIKE '%112%')
BEGIN
SELECT StockDetalles.ID, Negocios.NombreNegocio AS [Nombre Local], StockDetalles.Nombre, Stock.[Precio de Venta], Stock.Cantidad,
Stock.[Fecha Actualizacion de Precio], Stock.[Fecha Actualizacion de Cantidad], StockDetalles.Proveedor, StockDetalles.[Precio de Compra],
Stock.[Cantidad Reposicion], StockDetalles.CategoriaID, StockFotos.Foto
FROM Stock INNER JOIN
StockDetalles ON Stock.ID = StockDetalles.ID INNER JOIN
Negocios ON Stock.IDNegocio = Negocios.IDNegocio INNER JOIN
StockFotos ON StockDetalles.ID = StockFotos.IDProducto
WHERE (StockDetalles.ID LIKE '%112%')
AND Stock.IDNegocio = 1
END
ELSE IF EXISTS(SELECT ID FROM StockDetalles WHERE Nombre LIKE '%112%')
BEGIN
SELECT StockDetalles.ID, Negocios.NombreNegocio AS [Nombre Local], StockDetalles.Nombre, Stock.[Precio de Venta], Stock.Cantidad,
Stock.[Fecha Actualizacion de Precio], Stock.[Fecha Actualizacion de Cantidad], StockDetalles.Proveedor, StockDetalles.[Precio de Compra],
Stock.[Cantidad Reposicion], StockDetalles.CategoriaID, StockFotos.Foto
FROM Stock INNER JOIN
StockDetalles ON Stock.ID = StockDetalles.ID INNER JOIN
Negocios ON Stock.IDNegocio = Negocios.IDNegocio INNER JOIN
StockFotos ON StockDetalles.ID = StockFotos.IDProducto
WHERE (StockDetalles.Nombre LIKE '%112%' )
AND Stock.IDNegocio = 1
END
```
|
`OR` will work, you just have to get all the conditions right:
```
SELECT StockDetalles.ID, Negocios.NombreNegocio AS [Nombre Local], StockDetalles.Nombre, Stock.[Precio de Venta], Stock.Cantidad,
Stock.[Fecha Actualizacion de Precio], Stock.[Fecha Actualizacion de Cantidad], StockDetalles.Proveedor, StockDetalles.[Precio de Compra],
Stock.[Cantidad Reposicion], StockDetalles.CategoriaID, StockFotos.Foto
FROM Stock INNER JOIN
StockDetalles ON Stock.ID = StockDetalles.ID INNER JOIN
Negocios ON Stock.IDNegocio = Negocios.IDNegocio INNER JOIN
StockFotos ON StockDetalles.ID = StockFotos.IDProducto
WHERE
StockDetalles.ID = '112' OR -- the ID equals 112
(StockDetalles.ID LIKE '%112%' AND NOT EXISTS (SELECT 1 FROM StockDetalles WHERE ID = '112')) OR -- the ID is like 112, and no ID exists that equals 112
(StockDetalles.Nombre LIKE '%112%' AND NOT EXISTS (SELECT 1 FROM StockDetalles WHERE ID LIKE '%112%')) -- the Nombre is like 112, and no ID exists that is like 112 (which would include any ID that exactly matched 112)
```
|
One more way with window functions:
```
;with cte1 as(select StockDetalles.ID,
Negocios.NombreNegocio AS [Nombre Local],
StockDetalles.Nombre,
Stock.[Precio de Venta],
Stock.Cantidad,
Stock.[Fecha Actualizacion de Precio],
Stock.[Fecha Actualizacion de Cantidad],
StockDetalles.Proveedor,
StockDetalles.[Precio de Compra],
Stock.[Cantidad Reposicion],
StockDetalles.CategoriaID,
StockFotos.Foto,
row_number() over(order by case when id = '112' then 1
when id like '%112%' then 2
when nombre like '%112%' then 3
else 4 end) rn1
from Stock
join StockDetalles on Stock.ID = StockDetalles.ID
join Negocios on Stock.IDNegocio = Negocios.IDNegocio
join StockFotos on StockDetalles.ID = StockFotos.IDProducto
where Stock.IDNegocio = 1),
cte2 as(select *, row_number() over(order by rn1) rn2 from cte1)
select * from cte2
where rn2 = 1 and rn1 <> 4
```
|
Use a different WHERE in no results
|
[
"",
"sql",
"sql-server",
""
] |
I want to get `NULL` when the number in my table is 0. How can I achieve this using SQL? I tried
```
SELECT ID,
Firstname,
Lastname,
CASE Number
WHEN 0 THEN NULL
END
FROM tPerson
```
But this results in an error:
> At least one of the result expressions in a CASE specification must be
> an expression other than the NULL constant.
|
As others have mentioned, you forgot to tell your CASE expression to return the number when it is not 0.
However, in SQL Server you can use NULLIF, which I consider more readable:
```
select
id,
firstname,
lastname,
nullif(number, 0) as number
from tperson;
```
If you want to stick to standard SQL then stay with CASE:
```
case when number = 0 then null else number end as number
```
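Both forms are easy to compare side by side; `NULLIF` is standard enough that even SQLite has it, so here is a quick check in Python on a two-row toy table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tPerson (ID INTEGER, Number INTEGER)")
con.executemany("INSERT INTO tPerson VALUES (?, ?)", [(1, 0), (2, 7)])

# NULLIF(x, 0) and the searched CASE agree on every row.
rows = con.execute("""
    SELECT ID,
           NULLIF(Number, 0) AS via_nullif,
           CASE WHEN Number = 0 THEN NULL ELSE Number END AS via_case
    FROM tPerson ORDER BY ID
""").fetchall()
print(rows)   # [(1, None, None), (2, 7, 7)]
```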
|
```
SELECT ID, Firstname, Lastname,
CASE WHEN Number!=0 THEN Number END
FROM tPerson
```
|
SQL case 0 then NULL
|
[
"",
"sql",
"case",
""
] |
Hi, I'm looking to count, for each app, how many of the users who use only a single app use that app.
```
username  clientname   date        time        publishedapp
akirk     hplaptop1    30/07/2015  8:42:30.04  PB Desktop service
john      dellPC1      27/07/2015  9:41:30.04  desktop@Work2-1
john      dellPC1      27/07/2015  9:41:30.04  Word 2013
karl      delllaptop2  27/07/2015  9:40:21.00  Chrome
karl      delllaptop2  27/07/2015  9:40:21.00  Desktop with acrobat
jdoe      HPPC1        27/07/2015  9:40:15.00  Powerplan
mrt       P2000        31/02/2015  10:03.20    PB Desktop service
```
I would be expecting something like this:
```
PB Desktop service: 2
Powerplan: 1
```
I've managed to get
```
PB Desktop service: 2
desktop@Work2-1: 1
Desktop with acrobat: 1
Chrome: 1
Word 2013: 1
Powerplan: 1
```
with this query:
```
SELECT publishedapp, COUNT(DISTINCT username) as cnt
FROM tbl_name
GROUP BY publishedapp
ORDER BY cnt DESC
```
|
Here is one method that doesn't require a join:
```
select publishedapp, count(*) as NumberOfUsers
from (select username, min(publishedapp) as publishedapp
from tbl_name t
group by username
having count(*) = 1
) u
group by publishedapp
order by count(*) desc;
```
If a user only has one app, then the minimum will be that app.
If a user could have an app multiple times (and you still want them), then change the `count(*)` in the subquery to `count(distinct publishedapp)`.
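Here is the approach run against the question's data in Python with SQLite; `app_usage` is a hypothetical name standing in for the question's `tbl_name`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE app_usage (username TEXT, publishedapp TEXT);
    INSERT INTO app_usage VALUES
        ('akirk', 'PB Desktop service'),
        ('john',  'desktop@Work2-1'), ('john', 'Word 2013'),
        ('karl',  'Chrome'),          ('karl', 'Desktop with acrobat'),
        ('jdoe',  'Powerplan'),
        ('mrt',   'PB Desktop service');
""")

# Users with exactly one row keep their single app via MIN(); then count per app.
rows = con.execute("""
    SELECT publishedapp, COUNT(*) AS n
    FROM (SELECT username, MIN(publishedapp) AS publishedapp
          FROM app_usage
          GROUP BY username
          HAVING COUNT(*) = 1) u
    GROUP BY publishedapp
    ORDER BY n DESC, publishedapp
""").fetchall()
print(rows)   # [('PB Desktop service', 2), ('Powerplan', 1)]
```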
|
Try this
```
SELECT publishedapp, COUNT(*) as cnt
FROM
(
select username from tbl_name
group by username
having count(*)=1
) as t1 inner join tbl_name as t2
on t1.username=t2.username
GROUP BY publishedapp
ORDER BY cnt DESC
```
|
mysql count having distinct
|
[
"",
"mysql",
"sql",
""
] |
I want to drop all the entries from the field `customers_fax` and then move all numbers beginning `07` from the `customers_telephone` field to the `customers_fax` field.
The table structure is below
```
CREATE TABLE IF NOT EXISTS `zen_customers` (
`customers_id` int(11) NOT NULL,
`customers_gender` char(1) NOT NULL DEFAULT '',
`customers_firstname` varchar(32) NOT NULL DEFAULT '',
`customers_lastname` varchar(32) NOT NULL DEFAULT '',
`customers_dob` datetime NOT NULL DEFAULT '0001-01-01 00:00:00',
`customers_email_address` varchar(96) NOT NULL DEFAULT '',
`customers_nick` varchar(96) NOT NULL DEFAULT '',
`customers_default_address_id` int(11) NOT NULL DEFAULT '0',
`customers_telephone` varchar(32) NOT NULL DEFAULT '',
`customers_fax` varchar(32) DEFAULT NULL,
`customers_password` varchar(40) NOT NULL DEFAULT '',
`customers_newsletter` char(1) DEFAULT NULL,
`customers_group_pricing` int(11) NOT NULL DEFAULT '0',
`customers_email_format` varchar(4) NOT NULL DEFAULT 'TEXT',
`customers_authorization` int(1) NOT NULL DEFAULT '0',
`customers_referral` varchar(32) NOT NULL DEFAULT '',
`customers_paypal_payerid` varchar(20) NOT NULL DEFAULT '',
`customers_paypal_ec` tinyint(1) unsigned NOT NULL DEFAULT '0'
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=18346 ;
```
Dropping any existing data is simple enough as I will just do
```
UPDATE zen_customers SET customers_fax = ''
```
I've no idea how to move only the numbers starting with `07` to the `customers_fax` field while ensuring they stay with the relevant `customers_id`.
Is there a simple way to do this as an SQL query only?
|
Try something like this:
```
UPDATE zen_customers
SET customers_fax = customers_telephone
WHERE customers_telephone like '07%'
```
|
This should work:
```
UPDATE zen_customers
SET customers_fax = customers_telephone
WHERE customers_telephone LIKE '07%';
```
But if you want to test on one customer first to be sure it does what you expect, you can always run the update on a specific user that has a '07' telephone number:
```
UPDATE zen_customers
SET customers_fax = customers_telephone
WHERE customers_id = N;
```
Where N is an integer.
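The update is easy to dry-run first; here is the pattern in Python with SQLite on a two-row toy version of the table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE zen_customers (customers_id INTEGER,
                                customers_telephone TEXT,
                                customers_fax TEXT DEFAULT '');
    INSERT INTO zen_customers (customers_id, customers_telephone)
    VALUES (1, '0712345678'), (2, '0201234567');
""")

# Only the 07-prefixed number is copied; the other fax stays empty.
con.execute("""
    UPDATE zen_customers
    SET customers_fax = customers_telephone
    WHERE customers_telephone LIKE '07%'
""")
result = con.execute(
    "SELECT customers_id, customers_fax FROM zen_customers ORDER BY customers_id"
).fetchall()
print(result)   # [(1, '0712345678'), (2, '')]
```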
|
Update a column using values from another column that match a specific criteria
|
[
"",
"mysql",
"sql",
"sql-update",
""
] |
This is my table:
```
Column1 Column2
1 A
2 B
3 B
2 C
4 B
```
I have following SQL query to get all rows with column1 matching an input list with numbers, for example "1, 2, 3":
```
SELECT * FROM table WHERE column1 IN (1, 2, 3);
```
This query is generated by a program appending every number.
Now I want all rows where column1 matches a number while column2 also matches the paired character, e.g. "1 and A, 2 and B, 4 and B".
A possible query is:
```
SELECT * FROM table
WHERE (column1 = 1 and column2 = 'A')
OR (column1 = 2 and column2 = 'B')
OR (column1 = 4 and column2 = 'B')
```
What if my list has a thousand value pairs and my table several hundred thousand entries?
Is there a better way to get the same result, or do I have to append a thousand `OR`s?
Sadly this isn't working:
```
SELECT * FROM table
WHERE (column1, column2) IN ((1, 'A'), (2, 'B'), (4, 'B'))
```
|
This is what I would do. It generally performs better than `IN` with `OR`s and is much more readable in my opinion:
```
SELECT * FROM TABLE
INNER JOIN
(
SELECT 1 AS col1, 'a' AS col2 UNION
SELECT 2, 'b' UNION
SELECT 4, 'b'
) AS tbl
ON tbl.col1=TABLE.col1 AND tbl.col2=TABLE.col2
```
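The derived-table idea also combines nicely with bound parameters when the pairs come from application code. A sketch in Python with SQLite (the question is about SQL Anywhere, so treat this as an illustration of the pattern, not that product's syntax):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t (column1 INTEGER, column2 TEXT);
    INSERT INTO t VALUES (1,'A'), (2,'B'), (3,'B'), (2,'C'), (4,'B');
""")

pairs = [(1, 'A'), (2, 'B'), (4, 'B')]
# Build the derived table from placeholders so many pairs can be
# passed safely, without string concatenation.
selects = " UNION ALL ".join(["SELECT ? AS c1, ? AS c2"] * len(pairs))
params = [v for pair in pairs for v in pair]
rows = con.execute(f"""
    SELECT t.column1, t.column2
    FROM t JOIN ({selects}) p ON p.c1 = t.column1 AND p.c2 = t.column2
    ORDER BY t.column1
""", params).fetchall()
print(rows)   # [(1, 'A'), (2, 'B'), (4, 'B')]
```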
|
Create a table or a table variable with a matching index, load the user's filters into it, and join it to the main table.
|
SQL IN-Operator: consider multiple columns
|
[
"",
"sql",
"sqlanywhere",
""
] |
This is my first post, and I attempted to do a thorough search for this issue, so please accept my apologies if it has been posted elsewhere many times. I'm wondering if anyone has encountered the following issue when attempting to remove NULLs from a result set:
```
case Occurrence
when NULL then '0'
else occurrence
end as Occurrence,
case Aggregate
when NULL then '0'
else Aggregate
end as Aggregate,
```
This didn't do anything to my NULLs; however, this did the trick:
```
case
when occurrence is NULL then '0'
else occurrence
end as Occurrence,
case
when aggregate is NULL then '0'
else Aggregate
end as Aggregate
```
Does anyone have any idea why it behaves this way? I'm using SQL Server 2012.
I'm also not very versed in programming and only have less than a year SQL experience.
Thanks!
|
You should be using the ISNULL() or COALESCE() system functions for handling NULLs,
something like:
```
SELECT ISNULL(Occurrence , 0) AS Occurrence
,ISNULL(Aggregate , 0) AS Aggregate
FROM Table
```
OR
```
SELECT COALESCE(Occurrence , 0) AS Occurrence
,COALESCE(Aggregate , 0) AS Aggregate
FROM Table
```
The reason it didn't work in the case statement with
```
case Occurrence
when NULL then '0'
else occurrence
end as Occurrence,
```
is because it is interpreted as
```
CASE
WHEN Occurrence = NULL THEN 0
ELSE Occurrence
END
```
NULL is checked in SQL Server using `IS NULL` or `IS NOT NULL`. If you use any other comparison operator with NULL, such as `=`, `<>`, `<` or `>`, it yields NULL, hence the unexpected results.
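This three-valued behaviour is easy to see in any SQL engine; for instance with Python and SQLite (NULL comes back as `None`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# = and <> against NULL yield NULL, never true:
triple = con.execute("SELECT NULL = NULL, NULL <> 1, NULL IS NULL").fetchone()
print(triple)   # (None, None, 1)

# ...and a NULL condition behaves like false in WHERE:
kept = con.execute("SELECT 'kept' WHERE NULL = NULL").fetchall()
print(kept)     # []
```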
## Only for SQL Server 2012 and Later
In SQL Server 2012 and later versions you also have the IIF function:
```
SELECT IIF(Occurrence IS NULL, 0, Occurrence) AS Occurrence
,IIF(Aggregate IS NULL, 0, Aggregate) AS Aggregate
FROM Table
```
|
You use [simple case](https://msdn.microsoft.com/en-us/library/ms181765.aspx):
> The simple CASE expression operates by comparing the first expression to the expression in each WHEN clause for equivalency. If these expressions are equivalent, the expression in the THEN clause will be returned.
>
> Allows **only an equality check**.
```
case Occurrence
when NULL then '0'
else occurrence
end as Occurrence,
```
Which is executed as:
```
case
when occurrence = NULL then '0'
else occurrence
end as Occurrence
```
The expression `occurrence = NULL` returns NULL, which is treated *like false*.
Your second query uses a searched CASE with a full condition and works fine:
```
case
when occurrence IS NULL then '0'
else occurrence
end as Occurrence,
```
So your question comes down to the difference between **column IS NULL and column = NULL**.
|
CASE logic when removing NULLs
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
The Video table stores an id and video data.
The Tag table stores an id and tag_name.
The video_tag table connects video_ids and tag_ids to represent which video belongs to which tag.
For example, with the query below I can get videos which belong to **both** tags 3 and 4.
I also want to know how many rows there are. How should I modify the query?
```
SELECT *
FROM video
INNER JOIN video_tag ON video.id = video_tag.video_id
INNER JOIN tag ON tag.id = video_tag.tag_id
WHERE video_tag.tag_id IN (3,4)
GROUP BY video.id
HAVING COUNT(video.id)=2
ORDER BY video.id DESC
```
---
**Table Structures:**
```
--
-- Table structure for table `video`
--
CREATE TABLE IF NOT EXISTS `video` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`original_id` varchar(20) COLLATE utf8_turkish_ci NOT NULL COMMENT 'id on the source site',
`source` tinyint(2) NOT NULL,
`title` varchar(160) COLLATE utf8_turkish_ci NOT NULL,
`link` varchar(250) COLLATE utf8_turkish_ci NOT NULL,
`image` varchar(300) COLLATE utf8_turkish_ci NOT NULL,
`seconds` smallint(6) NOT NULL,
`fullscreen` varchar(100) COLLATE utf8_turkish_ci NOT NULL,
PRIMARY KEY (`id`),
KEY `source` (`source`,`seconds`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_turkish_ci AUTO_INCREMENT=122987 ;
--
-- Table structure for table `tag`
--
CREATE TABLE IF NOT EXISTS `tag` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`tag_name` varchar(24) COLLATE utf8_turkish_ci NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `tag_name` (`tag_name`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_turkish_ci AUTO_INCREMENT=191 ;
--
-- Table structure for table `video_tag`
--
CREATE TABLE IF NOT EXISTS `video_tag` (
`video_id` int(11) NOT NULL,
`tag_id` int(11) NOT NULL,
KEY `video_id` (`video_id`,`tag_id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
```
|
Your query should be doing what you want. But, you can simplify it:
```
SELECT v.*
FROM video v INNER JOIN
video_tag vt
ON v.id = vt.video_id
WHERE vt.tag_id IN (3, 4)
GROUP BY v.id
HAVING COUNT(v.id) = 2
ORDER BY v.id DESC ;
```
The only time this would not work is when a video can have duplicate tags of the same type. In that case, use `COUNT(DISTINCT vt.tag_id)` instead.
If you want to return the query with the number of rows for, say, pagination, use `SQL_CALC_FOUND_ROWS`:
```
SELECT SQL_CALC_FOUND_ROWS v.*
. . .
```
Then use `FOUND_ROWS()`.
If you just want the number of rows, you can use a subquery, and further simplification:
```
SELECT COUNT(*)
FROM (SELECT vt.video_id
      FROM video_tag vt
      WHERE vt.tag_id IN (3, 4)
      GROUP BY vt.video_id
      HAVING COUNT(*) = 2
     ) t
```
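The core both-tags trick only needs the junction table, which makes it handy to verify in isolation; a quick check in Python with SQLite on hypothetical data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE video_tag (video_id INTEGER, tag_id INTEGER);
    -- video 1 has both tags; videos 2 and 3 have only one of them
    INSERT INTO video_tag VALUES (1,3), (1,4), (2,3), (3,4), (3,5);
""")

# Videos carrying BOTH tag 3 and tag 4:
rows = con.execute("""
    SELECT video_id
    FROM video_tag
    WHERE tag_id IN (3, 4)
    GROUP BY video_id
    HAVING COUNT(*) = 2
""").fetchall()
print(rows)   # [(1,)]
```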
|
Here is the complete query. It will show you every video with its total tag count. Hope it helps.
```
SELECT v.id, COUNT(v.id) AS Number
FROM video AS v -- using alias
INNER JOIN video_tag vt ON v.id = vt.video_id
INNER JOIN tag t ON t.id = vt.tag_id
WHERE vt.tag_id IN (3,4) -- this is optional; you can remove it
GROUP BY v.id
```
Feel free to ask further.
|
SQL COUNT with 2 INNER JOINS
|
[
"",
"mysql",
"sql",
""
] |
I have a table tbldeptdivision as follows:
```
ID DEPTID DIVISIONID FROMDATE TODATE REMARKS
--- ------- ----------- ----------- ----------- --------
21 21 5 31-AUG-99 01-JAN-80 NULL
```
I have the query
```
select *
from tbldeptdivision
where deptid = 21
and trunc(sysdate) between to_date(fromdate,'dd-Mon-yyyy')
and to_date(todate,'dd-mon-yyyy');
```
It returns no rows. Can anybody say why? `'31-AUG-99'` is actually `'31-AUG-1999'` and `'01-JAN-80'` is actually `'01-JAN-2080'`. What would the correct query be?
|
As your `todate` is a `date` your problem stems from the useless conversion of the column's value from a date to a varchar and back to a date:
`to_date()` converts a **`VARCHAR`** to a `DATE` value. If the value you pass to that function is *already* a `DATE`, Oracle will first implicitly convert your date to a varchar by applying the default NLS format, and will then convert *that* varchar back to a date using the format mask you supplied.
In the first (implicit) conversion you lose the century of your year, which is consequently wrong when the varchar is converted back to a `date`.
So in your case the following is done for the call `to_date(todate,'dd-mon-yyyy')`:
* `todate` contains the (real) date value 2080-01-01
* the implicit conversion to a varchar makes that `'01-JAN-80'`
* the conversion from that varchar back to a date can no longer recover the original century, so the resulting year is wrong.
The general rule is:
### Do NOT use to\_date() on a `DATE` (or `TIMESTAMP`) column
If you need to get rid of the time part in the `DATE` column use `trunc()` instead:
```
where trunc(sysdate) between trunc(fromdate) and trunc(todate)
```
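The lossy round trip is not specific to Oracle; any format-then-reparse through a two-digit year drops the century. A small illustration with Python's `datetime` (whose `%y` parsing rule maps 69-99 to 19xx, so 2080 comes back as 1980):

```python
from datetime import datetime

original = datetime(2080, 1, 1)
as_text = original.strftime("%d-%b-%y")        # '01-Jan-80' -- century gone
round_trip = datetime.strptime(as_text, "%d-%b-%y")
print(as_text, round_trip.year)                # the year is now 1980, not 2080
```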
|
Assume the `FROMDATE`/`TODATE` datatype is `varchar2`; then when you do `to_date`:
```
select to_date('01-JAN-80','dd-mon-yyyy') from dual;
Output: January, 01 0080 00:00:00
```
So it won't be `'01-JAN-2080'` but `'01-JAN-0080'`.
Even if the `FROMDATE`/`TODATE` datatype is `date`, using `to_date` is not a good idea:
```
create table t(c date);
insert into t select sysdate from dual;
select c, to_date(c,'dd-mon-yyyy') from t;
```
Output:
```
C TO_DATE(C,'DD-MON-YYYY')
August, 25 2015 10:55:36 August, 25 0015 00:00:00
```
Still the year is `0015` not `2015`.
If your column's datatype is `date`, then use `trunc` to get the date portion; don't use `to_date`:
```
select *
from tbldeptdivision
where deptid=21
and trunc(sysdate) between trunc(fromdate)
and trunc(todate)
```
|
Retrieve data within a date range in Oracle
|
[
"",
"sql",
"oracle",
"date",
"between",
"date-range",
""
] |
I have this stored procedure:
```
ALTER PROCEDURE spCertificationType
@result nvarchar(15) output,
@mode int
AS
BEGIN
if @mode = 1
begin
exec spGeneratedID 2, @result output
print @result
end
END
```
but when I tried to execute it, I got this error:
> The formal parameter "@mode" was not declared as an OUTPUT parameter, but the actual parameter passed in requested output.
I tried to set `@mode` as output like this:
```
ALTER PROCEDURE spCertificationType
@result nvarchar(15) output,
@mode int output
AS
BEGIN
if @mode = 1
begin
exec spGeneratedID 2, @result output
print @result
end
END
```
but then it returns a null value.
Any fix for this? Thanks in advance.
|
The parameter order matters here: declare input parameters first, then output parameters, and make sure the caller passes them in the same order.
You can see this link for more about stored procedures:
<http://www.codeproject.com/Articles/126898/Sql-Server-How-To-Write-a-Stored-Procedure-in-SQL>
```
ALTER PROCEDURE spCertificationType
@mode int,
@result nvarchar(15) output
AS
BEGIN
if @mode = 1
begin
exec spGeneratedID 2, @result output
print @result
end
END
```
|
I fixed this error a different way.
1. I had removed the OUTPUT statement after a parameter in my SP.
2. I then got the error.
3. I then went back to the SQLDataSource Configure Data Source wizard and went through the steps again. I discovered that changing the SP made the wizard delete the settings associated with the parameter that used to have OUTPUT after it.
|
The formal parameter "@mode" was not declared as an OUTPUT parameter, but the actual parameter passed in requested output
|
[
"",
"sql",
"sql-server-2008-r2",
""
] |
I want to insert all the rows from the cursor into a table, but it is not inserting all the rows; only some rows get inserted. Please help.
I have created a procedure BPS_SPRDSHT which takes 3 input parameters.
```
PROCEDURE BPS_SPRDSHT(p_period_name VARCHAR2,p_currency_code VARCHAR2,p_source_name VARCHAR2)
IS
CURSOR c_sprdsht
IS
SELECT gcc.segment1 AS company, gcc.segment6 AS prod_seg, gcc.segment2 dept,
gcc.segment3 accnt, gcc.segment4 prd_grp, gcc.segment5 projct,
gcc.segment7 future2, gljh.period_name,gljh.je_source,NULL NULL1,NULL NULL2,NULL NULL3,NULL NULL4,gljh.currency_code Currency,
gjlv.entered_dr,gjlv.entered_cr, gjlv.accounted_dr, gjlv.accounted_cr,gljh.currency_conversion_date,
NULL NULL6,gljh.currency_conversion_rate ,NULL NULL8,NULL NULL9,NULL NULL10,NULL NULL11,NULL NULL12,NULL NULL13,NULL NULL14,NULL NULL15,
gljh.je_category ,NULL NULL17,NULL NULL18,NULL NULL19,tax_code
FROM gl_je_lines_v gjlv, gl_code_combinations gcc, gl_je_headers gljh
WHERE gjlv.code_combination_id = gcc.code_combination_id
AND gljh.je_header_id = gjlv.je_header_id
AND gljh.currency_code!='STAT'
AND gljh.currency_code=NVL (p_currency_code, gljh.currency_code)
AND gljh.period_name = NVL (p_period_name, gljh.period_name)
AND gljh.je_source LIKE p_source_name||'%';
type t_spr is table of c_sprdsht%rowtype;
v_t_spr t_spr :=t_spr();
BEGIN
OPEN c_sprdsht;
LOOP
FETCH c_sprdsht BULK COLLECT INTO v_t_spr limit 50000;
EXIT WHEN c_sprdsht%notfound;
END LOOP;
CLOSE c_sprdsht;
FND_FILE.PUT_LINE(FND_FILE.OUTPUT,'TOTAL ROWS FETCHED FOR SPREADSHEETS- '|| v_t_spr.count);
IF v_t_spr.count > 0 THEN
BEGIN
FORALL I IN v_t_spr.FIRST..v_t_spr.LAST SAVE EXCEPTIONS
INSERT INTO custom.pwr_bps_gl_register
VALUES v_t_spr(i);
EXCEPTION
WHEN OTHERS THEN
l_error_count := SQL%BULK_EXCEPTIONS.count;
fnd_file.put_line(fnd_file.output,'Number of failures: ' || l_error_count);
FOR l IN 1 .. l_error_count LOOP
DBMS_OUTPUT.put_line('Error: ' || l ||
' Array Index: ' || SQL%BULK_EXCEPTIONS(l).error_index ||
' Message: ' || SQLERRM(-SQL%BULK_EXCEPTIONS(l).ERROR_CODE));
END LOOP;
END;
END IF;
fnd_file.put_line(fnd_file.output,'END TIME: '||TO_CHAR (SYSDATE, 'DD-MON-YYYY HH24:MI:SS'));
END BPS_SPRDSHT;
```
Total rows to be inserted=568388
No of rows getting inserted=48345.
|
Oracle uses two engines to process PL/SQL code. All procedural code is handled by the **PL/SQL engine** while all SQL is handled by the SQL statement executor, or **SQL engine**. There is an overhead associated with each **context switch** between the two engines.
The entire **PL/SQL** block could be rewritten in plain **SQL**, which will be much **faster** and require **less code**.
```
INSERT INTO custom.pwr_bps_gl_register
SELECT gcc.segment1 AS company,
gcc.segment6 AS prod_seg,
gcc.segment2 dept,
gcc.segment3 accnt,
gcc.segment4 prd_grp,
gcc.segment5 projct,
gcc.segment7 future2,
gljh.period_name,
gljh.je_source,
NULL NULL1,
NULL NULL2,
NULL NULL3,
NULL NULL4,
gljh.currency_code Currency,
gjlv.entered_dr,
gjlv.entered_cr,
gjlv.accounted_dr,
gjlv.accounted_cr,
gljh.currency_conversion_date,
NULL NULL6,
gljh.currency_conversion_rate ,
NULL NULL8,
NULL NULL9,
NULL NULL10,
NULL NULL11,
NULL NULL12,
NULL NULL13,
NULL NULL14,
NULL NULL15,
gljh.je_category ,
NULL NULL17,
NULL NULL18,
NULL NULL19,
tax_code
FROM gl_je_lines_v gjlv,
gl_code_combinations gcc,
gl_je_headers gljh
WHERE gjlv.code_combination_id = gcc.code_combination_id
AND gljh.je_header_id = gjlv.je_header_id
AND gljh.currency_code != 'STAT'
AND gljh.currency_code =NVL (p_currency_code, gljh.currency_code)
AND gljh.period_name = NVL (p_period_name, gljh.period_name)
AND gljh.je_source LIKE p_source_name
||'%';
```
**Update**
It is a myth that **frequent commits** in PL/SQL are good for performance.
Thomas Kyte explained it beautifully [here](https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:4951966319022):
> Frequent commits -- sure, "frees up" that undo -- which invariabley
> leads to ORA-1555 and the failure of your process. Thats good for
> performance right?
>
> Frequent commits -- sure, "frees up" locks -- which throws
> transactional integrity out the window. Thats great for data
> integrity right?
>
> Frequent commits -- sure "frees up" redo log buffer space -- by
> forcing you to WAIT for a sync write to the file system every time --
> you WAIT and WAIT and WAIT. I can see how that would "increase
> performance" (NOT). Oh yeah, the fact that the redo buffer is
> flushed in the background
>
> * every three seconds
> * when 1/3 full
> * when 1meg full
>
> would do the same thing (free up this resource) AND not make you wait.
>
> * frequent commits -- there is NO resource to free up -- undo is undo,
> big old circular buffer. It is not any harder for us to manage 15
> gigawads or 15 bytes of undo. Locks -- well, they are an attribute
> of the data itself, it is no more expensive in Oracle (it would be in
> db2, sqlserver, informix, etc) to have one BILLION locks vs one lock.
> The redo log buffer -- that is continously taking care of itself,
> regardless of whether you commit or not.
|
First of all, let me point out that there is a serious bug in the code you are using; it is the reason why you are not inserting all the records:
```
BEGIN
OPEN c_sprdsht;
LOOP
FETCH c_sprdsht
BULK COLLECT INTO v_t_spr -- this OVERWRITES your array!
-- it does not add new records!
limit 50000;
EXIT WHEN c_sprdsht%notfound;
END LOOP;
CLOSE c_sprdsht;
```
Each iteration OVERWRITES the contents of v\_t\_spr with the next 50,000 rows read.
The 48,345 records you are inserting are simply the last block read during the final iteration.
The `INSERT` statement should be inside the same loop: you should perform an insert for each block of 50,000 rows read.
You should have written it this way:
```
BEGIN
OPEN c_sprdsht;
LOOP
FETCH c_sprdsht BULK COLLECT INTO v_t_spr limit 50000;
EXIT WHEN v_t_spr.count = 0; -- with LIMIT, %notfound here would skip the last, partial batch
FORALL I IN v_t_spr.FIRST..v_t_spr.LAST SAVE EXCEPTIONS
INSERT INTO custom.pwr_bps_gl_register
VALUES v_t_spr(i);
...
...
END LOOP;
CLOSE c_sprdsht;
```
If you were expecting to have the whole table loaded in memory for a single insert, then you wouldn't have needed any loop or any `limit 50000` clause... and you could simply have used the `insert ... select` approach.
Now: a VERY GOOD reason for NOT using `insert ... select` could be that the source table has so many rows that such an insert would make the rollback segments grow so much that there is simply not enough physical space on your server to hold them. But if that is the issue (you can't hold that much rollback data for a single transaction), you should also perform a COMMIT for each 50,000-record block; otherwise your loop would not solve the problem: it would just be slower than `insert ... select` and it would generate the same "out of rollback space" error.
Issuing a commit every 50,000 records is not the nicest thing to do, but if your system genuinely cannot handle the needed rollback space, you have no other way out (or at least none that I am aware of).
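Putting both points together, a sketch of the complete batched pattern (same cursor and collection as in the question; the per-block COMMIT is only for the rollback-space scenario discussed above, and the SAVE EXCEPTIONS handler from the original would wrap the FORALL):

```sql
OPEN c_sprdsht;
LOOP
    FETCH c_sprdsht BULK COLLECT INTO v_t_spr LIMIT 50000;
    EXIT WHEN v_t_spr.COUNT = 0;  -- testing the collection avoids losing the last, partial batch

    FORALL i IN v_t_spr.FIRST .. v_t_spr.LAST SAVE EXCEPTIONS
        INSERT INTO custom.pwr_bps_gl_register VALUES v_t_spr(i);

    COMMIT;  -- one commit per 50,000-row block keeps rollback usage bounded
END LOOP;
CLOSE c_sprdsht;
```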
|
Bulk Collect with million rows to insert.......Missing Rows?
|
[
"",
"sql",
"oracle",
"plsql",
""
] |
I have column values as
AVG,ABC, AFG, 3/M, 150,RFG,567, 5HJ
Requirement is to sort as below:
ABC,AFG,AVG,RFG,3/M,5HJ,150,567
Any help?
|
If you want to sort letters before numbers, then you can test each character. Here is one method:
```
order by (case when substr(col, 1, 1) between 'A' and 'Z' then 1 else 2 end),
(case when substr(col, 2, 1) between 'A' and 'Z' then 1 else 2 end),
(case when substr(col, 3, 1) between 'A' and 'Z' then 1 else 2 end),
col
```
|
This doesn't produce exactly the requested output, but for a lexicographic sort with numbers after letters, `TRANSLATE` is a simple solution:
<http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions196.htm>
```
select value
from (
select 'AVG' as value from dual
union all
select 'ABC' as value from dual
union all
select 'AFG' as value from dual
union all
select '3/M' as value from dual
union all
select '150' as value from dual
union all
select 'RFG' as value from dual
union all
select '567' as value from dual
union all
select '5HJ' as value from dual
)
order by translate(upper(value), 'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789', '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ')
;
```
This shifts all the letters down and numbers to the end.
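To get exactly the order requested in the question (letters alphabetically first, then digit-leading values ordered by their leading number), a hedged Oracle sketch is to replace the `ORDER BY` above with `REGEXP` tests (assuming every value starts with either a letter or digits):

```sql
ORDER BY
    CASE WHEN REGEXP_LIKE(value, '^[A-Za-z]') THEN 0 ELSE 1 END,  -- letters before numbers
    TO_NUMBER(REGEXP_SUBSTR(value, '^\d+')),                      -- digit-leading values by leading number
    value                                                         -- letters (and ties) alphabetically
```

With the sample values this would yield ABC, AFG, AVG, RFG, 3/M, 5HJ, 150, 567.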
|
Oracle Order By Sorting: Column Values with character First Followed by number
|
[
"",
"sql",
"oracle",
"oracle-sqldeveloper",
""
] |
I think I may be missing the obvious here, but I'm trying to create a SQL query to pull data out in a particular way and can't work it out.
I have a table which is made up with the following columns:
```
Name, StageDate, Stage
```
I want to create in sql an output something like this:
```
Name, Stage, Stage1Date, Stage2Date, Stage3Date
Person1, 1, 01/01/2015, NULL, NULL
Person1, 2, 01/01/2015, 02/01/2015, NULL
Person1, 3, 01/01/2015, 02/01/2015, 03/01/2015
```
My query is currently as follows:
```
select Name
, Stage
, Case when Stage = 1 then StageDate end as Stage1Date
, Case when Stage = 2 then StageDate end as Stage2Date
, Case when Stage = 3 then StageDate end as Stage3Date
From Details
```
Data export for the above query currently looks like this:
```
Name, Stage, Stage1Date, Stage2Date, Stage3Date
Person1, 1, 01/01/2015, NULL, NULL
Person1, 2, NULL, 02/01/2015, NULL
Person1, 3, NULL, NULL, 03/01/2015
```
|
Try this:
```
DECLARE @Stages TABLE (
Name varchar(20),
Stage int,
StageDate datetime);
INSERT INTO @Stages (Name, Stage, StageDate)
VALUES
('Player 1', 1, '2015-04-01'),
('Player 1', 2, '2015-05-01'),
('Player 1', 3, '2015-06-01'),
('Player 2', 1, '2015-04-01'),
('Player 2', 2, '2015-05-01');
SELECT NAME, Stage,
CASE WHEN Stage >=1 THEN (SELECT s2.StageDate FROM @Stages s2 Where s2.Name = s1.Name and s2.Stage =1) Else NULL END As Stage1Date,
CASE WHEN Stage >=2 THEN (SELECT s2.StageDate FROM @Stages s2 Where s2.Name = s1.Name and s2.Stage =2) Else NULL END As Stage2Date,
CASE WHEN Stage >=3 THEN (SELECT s2.StageDate FROM @Stages s2 Where s2.Name = s1.Name and s2.Stage =3) Else NULL END As Stage3Date
FROM @Stages s1
```
* [Fiddle](http://sqlfiddle.com/#!9/406b6/4)
|
The only thing I could come up with for now is using `UNION`s, as an alternative to the other answers using subqueries:
```
SELECT Name,
stage,
stagedate AS Stage1Date,
NULL AS Stage2Date,
NULL AS Stage3Date
FROM details
WHERE stage = 1
UNION
SELECT d1.Name,
d1.stage,
d1.stagedate AS Stage1Date,
d2.stagedate AS Stage2Date,
NULL AS Stage3Date
FROM details d1
INNER JOIN details d2 ON d1.NAME = d2.NAME
AND d1.stage = 1 AND d2.stage = 2
UNION
SELECT d1.Name,
d1.stage,
d1.stagedate AS Stage1Date,
d2.stagedate AS Stage2Date,
d3.stagedate AS Stage3Date
FROM details d1
INNER JOIN details d2 ON d1.NAME = d2.NAME
AND d1.stage = 1 AND d2.stage = 2
INNER JOIN details d3 ON d1.NAME = d3.NAME
AND d1.stage = 1 AND d3.stage = 3
```
* See [this fiddle](http://sqlfiddle.com/#!9/406b6/3)
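As a further alternative (a sketch, not taken from the answers above): a single self-join with conditional aggregation produces the same cumulative stage dates, assuming each `Name` has at most one row per `Stage`:

```sql
SELECT d.Name,
       d.Stage,
       MAX(CASE WHEN h.Stage = 1 THEN h.StageDate END) AS Stage1Date,
       MAX(CASE WHEN h.Stage = 2 THEN h.StageDate END) AS Stage2Date,
       MAX(CASE WHEN h.Stage = 3 THEN h.StageDate END) AS Stage3Date
FROM Details d
INNER JOIN Details h
        ON h.Name = d.Name
       AND h.Stage <= d.Stage   -- only stages already reached contribute a date
GROUP BY d.Name, d.Stage
```

For the `Stage = 2` row, only the stage 1 and 2 dates join in, so `Stage3Date` stays NULL, matching the desired output.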
|
sql If statement or case
|
[
"",
"sql",
"if-statement",
"case",
""
] |
Example:
```
<root>
<StartOne>
<Value1>Lopez, Michelle MD</Value1>
<Value2>Spanish</Value2>
<Value3>
<a title="49 west point" href="myloc.aspx?id=56" target="_blank">49 west point</a>
</Value3>
<Value4>908-783-0909</Value4>
<Value5>
<a title="CM" href="myspec.aspx?id=78" target="_blank">CM</a>
</Value5>
<Value6 /> /* No anchor link exists, but I would like to add the same format as Value5 */
</StartOne>
</root>
```
Sql (currently only sees if the anchor link already exist and updates):
```
BEGIN
SET NOCOUNT ON;
--Declare @xml xml;
Select @xml = cast([content_html] as xml)
From [Db1].[dbo].[zTable]
Declare @locID varchar(200);
Declare @locTitle varchar(200);
Declare @locUrl varchar(255);
Select @locID = t1.content_id From [westmedWebDB-bk].[dbo].[zTempLocationTable] t1
INNER JOIN [Db1].[dbo].[zTableFromData] t2 On t2.Value3 = t1.content_title
Where t2.Value1 = @ProviderName --@ProviderName is a parameter
Select @locTitle = t1.content_title From [Db1].[dbo].[zTempLocationTable] t1
Where @locID = t1.content_id
Set @locUrl = 'theloc.aspx?id=' + @locID + '';
--if Value5 has text inside...
Set @xml.modify('replace value of (/root/StartOne/Value5/a/text())[1] with sql:variable("@locTitle")');
Set @xml.modify('replace value of (/root/StartOne/Value5/a/@title)[1] with sql:variable("@locTitle")');
Set @xml.modify('replace value of (/root/StartOne/Value5/a/@href)[1] with sql:variable("@locUrl")');
--otherwise... create a new anchor
set @locAnchor = ('<a href="theloc.aspx?id=' + @locID + '" title="' + @locTitle + '">' + @locTitle + '</a>');
set @xml.modify('replace value of (/root/StartOne/Value1/text())[1] with sql:variable("@locAnchor")'); --the issue: this inserts the anchor escaped, as &lt; and &gt; instead of literal < and >
Update [Db1].[dbo].[zTable]
Set [content_html] = cast(@xml as nvarchar(max))
Where [content_title] = @ProviderName --@ProviderName is a parameter
END
```
How can I modify it so that, if the anchor link already exists, it updates; otherwise it creates a new anchor link with literal `<` and `>` instead of the escaped `&lt;` and `&gt;`?
Update: this is working for me now (not sure if there is a more efficient method):
```
If @xml.exist('/root/StartOne/Value6/a/text()') = 1 --if there is an anchor link/text in the node
BEGIN
--modify the text of the link
Set @xml.modify('replace value of (/root/StartOne/Value6/a/text())[1] with sql:variable("@locTitle")');
--modify the title of the link
Set @xml.modify('replace value of (/root/StartOne/Value6/a/@title)[1] with sql:variable("@locTitle")');
--modify the url of the link
Set @xml.modify('replace value of (/root/StartOne/Value6/a/@href)[1] with sql:variable("@locUrl")');
END
Else --otherwise create a new anchor link
BEGIN
--Set @locAnchor = ('<a href="theloc.aspx?id=' + @locID + '" title="' + @locTitle + '">' + @locTitle + '</a>');
--Set @xml.modify('insert <a title="Value6" href="Value6.aspx?id=78" target="_blank">Value6</a> into (/root/StartOne/Value6)[1]');
declare @a xml;
Set @a = N'<a title="' + @locTitle+ '" href="' +@locUrl+ '" target="_blank">'+@locTitle+'</a>';
Set @xml.modify('insert sql:variable("@a") into (/root/StartOne/Value6)[1]');
END
```
|
Try to delete the anchor element first and then insert the new one. It does not matter if it is there or not for the delete statement. I also provided a better way to build your new anchor element. It takes care of creating entities for characters like `&`.
```
-- Delete the anchor node from the XML
set @xml.modify('delete /root/StartOne/Value6/a');
-- Build the XML for the new anchor node
declare @a xml;
set @a = (
select @locTitle as 'a/@title',
@locUrl as 'a/@href',
'_blank' as 'a/@target',
@locTitle as 'a'
for xml path(''), type
);
-- Insert the new anchor node
set @xml.modify('insert sql:variable("@a") into (/root/StartOne/Value6)[1]');
```
|
I hope this will help you.
```
Declare @locUrl varchar(255);
Set @locUrl = 'xyz.aspx?id=' + '444' + '';
declare @xml xml;
set @xml = '<root>
<StartOne>
<Value1>Lopez, Michelle MD</Value1>
<Value2>Spanish</Value2>
<Value3>
<a title="49 west point" href="myloc.aspx?id=56" target="_blank">49 west point</a>
</Value3>
<Value4>908-783-0909</Value4>
<Value5>
<a title="CM" href="myspec.aspx?id=78" target="_blank">CM</a>
</Value5>
<Value6>
</Value6>
</StartOne>
</root>';
declare @chk nvarchar(max);
-- here implementing for Value6
set @chk = (select
C.value('(Value6/a/text())[1]', 'nvarchar(max)') col
from
@xml.nodes('/root/StartOne') as X(C))
-- make sure here
select @chk;
if @chk is null
begin
-- INSERT
SET @xml.modify('
insert <a title="Value6" href="Value6.aspx?id=78" target="_blank">Value6</a>
into (/root/StartOne/Value6)[1]')
end
else
begin
-- UPDATE
Set @xml.modify('replace value of (/root/StartOne/Value6/a/@href)[1] with sql:variable("@locUrl")');
end
select @xml
```
UPDATE: following your comment below, this is the way to update dynamically:
```
Declare @locUrl nvarchar(255);
Set @locUrl = 'xyz.aspx?id=' + '444' + '';
declare @xml xml;
set @xml = '<root>
<StartOne>
<Value1>Lopez, Michelle MD</Value1>
<Value2>Spanish</Value2>
<Value3>
<a title="49 west point" href="myloc.aspx?id=56" target="_blank">49 west point</a>
</Value3>
<Value4>908-783-0909</Value4>
<Value5>
<a title="CM" href="myspec.aspx?id=78" target="_blank">CM</a>
</Value5>
<Value6>
</Value6>
</StartOne>
</root>';
declare @a xml;
set @a = N'<a title="' + @locUrl+ '" href="' +@locUrl+ '" target="_blank">'+@locUrl+'</a>';
SET @xml.modify
('insert sql:variable("@a")
into (/root/StartOne/Value6)[1]');
select @xml;
```
|
How to use IF/ELSE statement to update or create new xml node entry in Sql
|
[
"",
"sql",
"sql-server",
"xml",
""
] |
I am trying to retrieve all records between two dates and the MySQL query is bringing back records with a completely different date.
My Query Is
```
SELECT datecreated FROM TABLE_COMPLAINTS WHERE datecreated BETWEEN '01/08/2015' AND '24/08/2015'
```
The result from that query is this:
[](https://i.stack.imgur.com/95Yys.png)
As you can tell the dates returned are not in the boundary specified in the SQL Query.
My Question is how do I fix this?
|
Try this
```
SELECT datecreated FROM TABLE_COMPLAINTS WHERE datecreated BETWEEN
STR_TO_DATE('01/08/2015','%d/%m/%Y') AND STR_TO_DATE('24/08/2015','%d/%m/%Y')
```
**Solution 2**
```
SELECT datecreated FROM TABLE_COMPLAINTS WHERE datecreated >='01/08/2015' AND datecreated <= '24/08/2015'
```
**Solution 3**
```
SELECT datecreated FROM TABLE_COMPLAINTS
where
datecreated >='01/08/2015 06:42:10' and datecreated <='24/08/2015 06:42:50';
```
Forgive me, I didn't notice the time part; I thought it was a different column. This should work (I experimented with the time field to see if it makes a difference).
|
Give this format a try:
```
SELECT datecreated
FROM TABLE_COMPLAINTS
WHERE datecreated BETWEEN '2015-08-01' AND '2015-08-24'
SELECT datecreated
FROM TABLE_COMPLAINTS
WHERE datecreated BETWEEN
STR_TO_DATE('01/08/2015','%d/%m/%Y')
and STR_TO_DATE('24/08/2015','%d/%m/%Y');
```
|
MySQL return All records between two dates
|
[
"",
"mysql",
"sql",
"database",
""
] |
Table: `parent_id, parent_name, child_id, child_gender`
How to list `parent_id` who have at least one boy and one girl.
|
*Group by* the `parent_id` and take only those *having* children with at least 2 *distinct* genders
```
select parent_id
from your_table
group by parent_id
having count(distinct child_gender) = 2
```
|
To get the parents that have a boy:
```
select parent_id from table where child_gender = 'M'
```
To get the parents that have a girl:
```
select parent_id from table where child_gender = 'F'
```
So to get the parents that are in both result sets:
```
select parent_id from table where child_gender = 'M'
intersect
select parent_id from table where child_gender = 'F'
```
Note: the two stand-alone queries can return duplicates, but `intersect` will make sure each parent appears at most once.
|
select parent_id who have at least one boy and one girl
|
[
"",
"sql",
""
] |
I'm trying to return all rows which are associated with a user who is associated with ALL of the queried 'tags'. My table structure and desired output is below:
```
admin.tags:
user_id | tag | detail | date
2 | apple | blah... | 2015/07/14
3 | apple | blah. | 2015/07/17
1 | grape | blah.. | 2015/07/23
2 | pear | blahblah | 2015/07/23
2 | apple | blah, blah | 2015/07/25
2 | grape | blahhhhh | 2015/07/28
system.users:
id | email
1 | joe@test.com
2 | jane@test.com
3 | bob@test.com
queried tags:
'apple', 'pear'
desired output:
user_id | tag | detail | date | email
2 | apple | blah... | 2015/07/14 | jane@test.com
2 | pear | blahblah | 2015/07/23 | jane@test.com
2 | apple | blah, blah | 2015/07/25 | jane@test.com
```
Since user\_id 2 is associated with both 'apple' and 'pear' each of her 'apple' and 'pear' rows are returned, joined to `system.users` in order to also return her email.
I'm confused on how to properly set up this postgresql query. I have made several attempts with left anti-joins, but cannot seem to get the desired result.
|
The query in the derived table gets you the user ids for users that have all specified tags and the outer query gets you the details.
```
select *
from "system.users" s
join "admin.tags" a on s.id = a.user_id
join (
select user_id
from "admin.tags"
where tag in ('apple', 'pear')
group by user_id
having count(distinct tag) = 2
) t on s.id = t.user_id;
```
Note that this query would include users who have both of the tags you search for, but they may have other tags too, as long as they have at least the two specified.
With your sample data the output would be:
```
| id | email | user_id | tag | detail | date | user_id |
|----|---------------|---------|-------|------------|------------------------|---------|
| 2 | jane@test.com | 2 | grape | blahhhhh | July, 28 2015 00:00:00 | 2 |
| 2 | jane@test.com | 2 | apple | blah, blah | July, 25 2015 00:00:00 | 2 |
| 2 | jane@test.com | 2 | pear | blahblah | July, 23 2015 00:00:00 | 2 |
| 2 | jane@test.com | 2 | apple | blah... | July, 14 2015 00:00:00 | 2 |
```
If you want to exclude the row with `grape`, just add `where tag in ('apple', 'pear')` to the outer query too.
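That is (a sketch of the same query with the extra filter applied; for an inner join the condition can live in the `ON` clause or a `WHERE`):

```sql
select s.id, s.email, a.tag, a.detail, a.date
from "system.users" s
join "admin.tags" a
  on s.id = a.user_id
 and a.tag in ('apple', 'pear')      -- the extra filter excluding the grape row
join (
    select user_id
    from "admin.tags"
    where tag in ('apple', 'pear')
    group by user_id
    having count(distinct tag) = 2
) t on s.id = t.user_id;
```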
If you want only users that have only the searched for tags and none other (eg. exact division) you can change the query in the derived table to:
```
select user_id
from "admin.tags"
group by user_id
having sum(case when tag = 'apple' then 1 else 0 end) >= 1
and sum(case when tag = 'pear' then 1 else 0 end) >= 1
and sum(case when tag not in ('apple','pear') then 1 else 0 end) = 0
```
This would not return anything given your sample data as user 2 also has *grape*
[Sample SQL Fiddle](http://www.sqlfiddle.com/#!15/84097/1)
|
Standard double-negation method for the *must-have-them-all* kind of relational-division problem (I renamed `date` to `zdate` to avoid using a keyword as an identifier):
---
```
-- For convenience: put search arguments into a temp table or CTE
-- I cheat by extracting this from the admin_tags table
-- (in fact, there should be a table with all possible tags somwhere)
-- WITH needed_tags AS (
-- SELECT DISTINCT tag
-- FROM admin_tags
-- WHERE tag IN ('apple' , 'pear' )
-- )
```
---
```
-- Even better: directly use a VALUES() as a constructor
-- (thanks to @jpw )
WITH needed_tags(tag) AS (
VALUES ('apple' ) , ( 'pear' )
)
SELECT at.user_id , at.tag , at.detail , at.zdate
, su.email
FROM admin_tags at
JOIN system_users su ON su.id = at.user_id
WHERE NOT EXISTS (
SELECT * FROM needed_tags nt
WHERE NOT EXISTS (
SELECT * FROM admin_tags nx
WHERE nx.user_id = at.user_id
AND nx.tag = nt.tag
)
)
;
```
|
Postgres exclusive tag search
|
[
"",
"sql",
"postgresql",
"relational-division",
""
] |
Suppose I have some data like:
```
id status activity_date
--- ------ -------------
101 R 2014-01-12
101 Mt 2014-04-27
101 R 2014-05-18
102 R 2014-02-19
```
Note that for rows with id = 101 we have activity between 2014-01-12 to 2014-04-26 and 2014-05-18 to current date.
Now I need to select that data where status = 'R' and the date is the most current date as of a given date, e.g. if I search for 2014-02-02, I would find the status row created on 2014-01-12, because that was the status that was still valid at the time for entity ID 101.
|
If I understand correctly:
**Step 1:** Convert the start and end date rows into columns. For this, you must join the table with itself based on this criteria:
```
SELECT
dates_fr.id,
dates_fr.activity_date AS date_fr,
MIN(dates_to.activity_date) AS date_to
FROM test AS dates_fr
LEFT JOIN test AS dates_to ON
dates_to.id = dates_fr.id AND
dates_to.status = 'Mt' AND
dates_to.activity_date > dates_fr.activity_date
WHERE dates_fr.status = 'R'
GROUP BY dates_fr.id, dates_fr.activity_date
```
```
+------+------------+------------+
| id | date_fr | date_to |
+------+------------+------------+
| 101 | 2014-01-12 | 2014-04-27 |
| 101 | 2014-05-18 | NULL |
| 102 | 2014-02-19 | NULL |
+------+------------+------------+
```
**Step 2:** The rest is simple. Wrap the query inside another query and use appropriate where clause:
```
SELECT * FROM (
SELECT
dates_fr.id,
dates_fr.activity_date AS date_fr,
MIN(dates_to.activity_date) AS date_to
FROM test AS dates_fr
LEFT JOIN test AS dates_to ON
dates_to.id = dates_fr.id AND
dates_to.status = 'Mt' AND
dates_to.activity_date > dates_fr.activity_date
WHERE dates_fr.status = 'R'
GROUP BY dates_fr.id, dates_fr.activity_date
) AS temp WHERE '2014-02-02' >= temp.date_fr and ('2014-02-02' < temp.date_to OR temp.date_to IS NULL)
```
```
+------+------------+------------+
| id | date_fr | date_to |
+------+------------+------------+
| 101 | 2014-01-12 | 2014-04-27 |
+------+------------+------------+
```
[SQL Fiddle](http://www.sqlfiddle.com/#!9/46983/2)
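If window functions are available (MySQL 8+ — an assumption, since the question doesn't state the version), `LEAD()` can derive the same from/to ranges without the self-join:

```sql
SELECT id, date_fr, date_to
FROM (
    SELECT id,
           status,
           activity_date AS date_fr,
           LEAD(activity_date) OVER (PARTITION BY id ORDER BY activity_date) AS date_to
    FROM test
) t
WHERE status = 'R'
  AND '2014-02-02' >= date_fr
  AND ('2014-02-02' < date_to OR date_to IS NULL);
```

For the sample data this keeps the `R` row of 2014-01-12 for id 101, whose validity window (up to the `Mt` row of 2014-04-27) contains 2014-02-02.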
|
You can try
```
select id, status, activity_date
from TABLE
where status = "R" and activity_date = "2014-02-02"
```
where TABLE is name of your table
|
MySQL select rows where given date lies between the dates stored in table
|
[
"",
"mysql",
"sql",
"date",
"datetime",
""
] |
I am trying to filter some rows of a table with SSDT (left-click on the table, View Data, Sort and Filter).
Here I simply need to add `IS NULL` as a condition on an `nvarchar` field.
But as soon as I apply filter I get the error:
> Incorrect syntax near the keyword SET
Looking at the query written by the editor, I see that the condition is `fldName =`; there is no sign of my `NULL` check.
How can I do it?
This is the result:
```
SELECT TOP 1000 [Ktyi_TS002_IdTipoDocumento] ,
[nvc_TS002_TipoDocumento] ,[nvc_TS002_IdFunzioneControllo] ,[bit_TS002_Annullato]
FROM [dbo].[TS002_TipoDocumento]
WHERE [nvc_TS002_IdFunzioneControllo] =
```
Here are some images of the data editor (found via Google) to show what I am talking about, for those who don't know SSDT:
[](https://i.stack.imgur.com/6Ubaq.png)
[](https://i.stack.imgur.com/9LLWH.png)
|
It seems to be a bug that the IS (NOT) NULL expression is not supported in the filter.
|
This is a very ugly hack, but it may work for you.
It seems like you need a column name on the left of the = sign to keep the filter parser from changing the query. In my case my column that I was looking for nulls in was an integer, so I needed to get an integer on the left hand side.
I also needed a value for the columns that I was looking for nulls in that would not exist for any non-null row. In my case this was 0.
```
Create MyTable
( Id int primary key,
...
MyNum int
);
```
To search for rows with nulls in column MyNum, I did this:
```
[Id] - [Id] = IsNull([MyNum],0)
```
The [Id] - [Id] was used to produce 0 and not trigger the parser to re-write the statement as [MyNum] = stuff
The right hand side was not re-written by the parser so the NULL values were changed to 0's.
I assume for strings you could do something similar, maybe
```
concatenate([OtherStringCol],'XYZZY') = ISNull([MyStrCol],concatenate([OtherStringCol],'XYZZY'))
```
The 'XYZZY' part is used to ensure that you don't get cases where [MyStrCol] = [OtherStringCol]. I am assuming that the string 'XYZZY' doesn't exist in these columns.
|
Visual Studio 2013 SSDT - Edit data - IS NULL not work as filter
|
[
"",
"sql",
"visual-studio",
"sql-server-data-tools",
""
] |
I have data with users' tracking time. The data is in segments and each row represents one segment. Here is the sample data:
<http://sqlfiddle.com/#!6/2fa61>
How can I get the data on a daily basis? I.e. if a complete day is 1440 minutes, I want to know how many minutes the user was tracked each day. I also want to show 0 on days with no data.
I am expecting the following output
[](https://i.stack.imgur.com/Vk6Hw.png)
|
Use [table of numbers](http://sqlperformance.com/2013/01/t-sql-queries/generate-a-set-1). I personally have a permanent table `Numbers` with 100K numbers in it.
Once you have a set of numbers you can generate a set of dates for the range that you need. In this query I'll take `MIN` and `MAX` dates from your data, but since you may not have data for some dates, it is better to have explicit parameters defining the range.
For each date I have the beginning and ending of a day - our grouping interval.
For each date we are searching among `track` rows for those that intersect with this interval. Two intervals `(DayStart, DayEnd)` and `(StartTime, EndTime)` intersect if `StartTime < DayEnd` and `EndTime > DayStart`. This goes into `WHERE`.
For each intersecting intervals we are calculating the range that belongs to both intervals: from `MAX(DayStart, StartTime)` to `MIN(DayEnd, EndTime)`.
Finally, we group by day and sum up durations of all ranges.
**I added a row to your sample data to test the case when interval covers the whole day.** From `2015-02-14 20:50:43` to `2015-02-16 19:49:59`. I chose this interval to be well before intervals in your sample, so that results for the dates in your example are not affected. Here is [SQL Fiddle](http://sqlfiddle.com/#!3/d88a4/1/0).
```
DECLARE @track table
(
Email varchar(20),
StartTime datetime,
EndTime datetime,
DurationInSeconds int,
FirstDate datetime,
LastUpdate datetime
);
Insert into @track values ( 'ABC', '2015-02-20 08:49:43.000', '2015-02-20 14:49:59.000', 21616, '2015-02-19 00:00:00.000', '2015-02-28 11:45:27.000')
Insert into @track values ( 'ABC', '2015-02-20 14:49:59.000', '2015-02-20 22:12:07.000', 26528, '2015-02-19 00:00:00.000', '2015-02-28 11:45:27.000')
Insert into @track values ( 'ABC', '2015-02-20 22:12:07.000', '2015-02-21 07:00:59.000', 31732, '2015-02-19 00:00:00.000', '2015-02-28 11:45:27.000')
Insert into @track values ( 'ABC', '2015-02-21 09:49:43.000', '2015-02-21 16:30:10.000', 24027, '2015-02-19 00:00:00.000', '2015-02-28 11:45:27.000')
Insert into @track values ( 'ABC', '2015-02-21 16:30:10.000', '2015-02-22 09:49:30.000', 62360, '2015-02-19 00:00:00.000', '2015-02-28 11:45:27.000')
Insert into @track values ( 'ABC', '2015-02-22 09:55:43.000', '2015-02-22 11:49:59.000', 5856, '2015-02-19 00:00:00.000', '2015-02-28 11:45:27.000')
Insert into @track values ( 'ABC', '2015-02-22 11:49:10.000', '2015-02-23 08:49:59.000', 75649, '2015-02-19 00:00:00.000', '2015-02-28 11:45:27.000')
Insert into @track values ( 'ABC', '2015-02-23 10:59:43.000', '2015-02-23 12:49:59.000', 6616, '2015-02-19 00:00:00.000', '2015-02-28 11:45:27.000')
Insert into @track values ( 'ABC', '2015-02-23 12:50:43.000', '2015-02-24 19:49:59.000', 111556, '2015-02-19 00:00:00.000', '2015-02-28 11:45:27.000')
Insert into @track values ( 'ABC', '2015-02-28 08:49:43.000', '2015-02-28 14:49:59.000', 21616, '2015-02-19 00:00:00.000', '2015-02-28 11:45:27.000')
Insert into @track values ( 'ABC', '2015-02-14 20:50:43.000', '2015-02-16 19:49:59.000', 0, '2015-02-19 00:00:00.000', '2015-02-28 11:45:27.000')
```
```
;WITH
CTE_Dates
AS
(
SELECT
Email
,CAST(MIN(StartTime) AS date) AS StartDate
,CAST(MAX(EndTime) AS date) AS EndDate
FROM @track
GROUP BY Email
)
SELECT
CTE_Dates.Email
,DayStart AS xDate
,ISNULL(SUM(DATEDIFF(second, RangeStart, RangeEnd)) / 60, 0) AS TrackMinutes
FROM
Numbers
CROSS JOIN CTE_Dates -- this generates list of dates without gaps
CROSS APPLY
(
SELECT
DATEADD(day, Numbers.Number-1, CTE_Dates.StartDate) AS DayStart
,DATEADD(day, Numbers.Number, CTE_Dates.StartDate) AS DayEnd
) AS A_Date -- this is midnight of each current and next day
OUTER APPLY
(
SELECT
-- MAX(DayStart, StartTime)
CASE WHEN DayStart > StartTime THEN DayStart ELSE StartTime END AS RangeStart
-- MIN(DayEnd, EndTime)
,CASE WHEN DayEnd < EndTime THEN DayEnd ELSE EndTime END AS RangeEnd
FROM @track AS T
WHERE
T.Email = CTE_Dates.Email
AND T.StartTime < DayEnd
AND T.EndTime > DayStart
) AS A_Track -- this is all tracks that intersect with the current day
WHERE
Numbers.Number <= DATEDIFF(day, CTE_Dates.StartDate, CTE_Dates.EndDate)+1
GROUP BY DayStart, CTE_Dates.Email
ORDER BY DayStart;
```
**Result**
```
Email xDate TrackMinutes
ABC 2015-02-14 189
ABC 2015-02-15 1440
ABC 2015-02-16 1189
ABC 2015-02-17 0
ABC 2015-02-18 0
ABC 2015-02-19 0
ABC 2015-02-20 910
ABC 2015-02-21 1271
ABC 2015-02-22 1434
ABC 2015-02-23 1309
ABC 2015-02-24 1189
ABC 2015-02-25 0
ABC 2015-02-26 0
ABC 2015-02-27 0
ABC 2015-02-28 360
```
You can still get a `TrackMinutes` value of more than 1440 if two or more intervals in your data overlap.
**update**
You said in the comments that you have a few rows in your data where intervals do overlap and the result has values greater than 1440. You can wrap `SUM` in a `CASE` to hide these errors in the data, but ultimately it is better to find the problematic rows and fix the data. You saw only a few rows with values above 1440, but there could be many other rows with the same problem that is just less visible. So it is better to write a query that finds such overlapping rows, check how many there are, and then decide what to do with them. The danger here is that at the moment you think there are only a few, but there could be a lot. This is beyond the scope of this question.
To hide the problem replace this line in the query above:
```
,ISNULL(SUM(DATEDIFF(second, RangeStart, RangeEnd)) / 60, 0) AS TrackMinutes
```
with this:
```
,CASE
WHEN ISNULL(SUM(DATEDIFF(second, RangeStart, RangeEnd)) / 60, 0) > 1440
THEN 1440
ELSE ISNULL(SUM(DATEDIFF(second, RangeStart, RangeEnd)) / 60, 0)
END AS TrackMinutes
```
|
You should group by the day value. You can get the day with the `DATEPART` function, as in: `DATEPART(d, [StartTime])`.
```
SELECT cast([StartTime] as date) as date ,sum(datediff(n,[StartTime],[EndTime])) as "min"
FROM [test].[dbo].[track]
group by DATEPART(d,[StartTime]),cast([StartTime]as date)
```
|
How can I get aggregate values for all dates, even when missing data for some days?
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have a table called `ClientUrls` that has the following structure:
```
+------------+----------------+----------+
| ColumnName | DataType | Nullable |
+------------+----------------+----------+
| ClientId | INT | No |
| CountryId | INT | Yes |
| RegionId | INT | Yes |
| LanguageId | INT | Yes |
| URL | NVARCHAR(2048) | NO |
+------------+----------------+----------+
```
I have a stored procedure `up_GetClientUrls` that takes the following parameters:
```
@ClientId INT
@CountryId INT
@RegionId INT
@LanguageId INT
```
**Information about the proc**
1. All of the parameters are required by the proc and none of them will be NULL
2. The aim of the proc is to return a single matching row in the table based on a pre-defined priority. The priority being ClientId>Country>Region>Language
3. Three of the columns in the ClientUrls table are nullable. If a column contains NULL, it means "All". E.g. if LanguageId IS NULL, it refers to "AllLanguages". So if we send a LanguageId of 5 to the proc, we look for that first; otherwise we try to find the row where it is NULL.
**Matrix of priority (1 being first)**
```
+---------+----------+-----------+----------+------------+
| Ranking | ClientId | CountryId | RegionId | LanguageId |
+---------+----------+-----------+----------+------------+
| 1 | NOT NULL | NOT NULL | NOT NULL | NOT NULL |
| 2 | NOT NULL | NULL | NOT NULL | NOT NULL |
| 3 | NOT NULL | NOT NULL | NULL | NOT NULL |
| 4 | NOT NULL | NULL | NULL | NOT NULL |
| 5 | NOT NULL | NOT NULL | NOT NULL | NULL |
| 6 | NOT NULL | NULL | NOT NULL | NULL |
| 7 | NOT NULL | NULL | NULL | NULL |
+---------+----------+-----------+----------+------------+
```
Here is some **example data**:
```
+----------+-----------+----------+------------+-------------------------------+
| ClientId | CountryId | RegionId | LanguageId | URL |
+----------+-----------+----------+------------+-------------------------------+
| 1 | 1 | 1 | 1 | http://www.Website.com |
| 1 | 1 | 1 | NULL | http://www.Otherwebsite.com |
| 1 | 1 | NULL | 2 | http://www.Anotherwebsite.com |
+----------+-----------+----------+------------+-------------------------------+
```
**Example stored proc call**
```
EXEC up_GetClientUrls @ClientId = 1
,@CountryId = 1
,@RegionId = 1
,@LanguageId = 2
```
**Expected response (based on example data)**
```
+----------+-----------+----------+------------+-------------------------------+
| ClientId | CountryId | RegionId | LanguageId | URL |
+----------+-----------+----------+------------+-------------------------------+
| 1 | 1 | NULL | 2 | http://www.Anotherwebsite.com |
+----------+-----------+----------+------------+-------------------------------+
```
This row is returned because matching on a NULL RegionId with the correct LanguageId is a higher priority than matching on a NULL LanguageId with the correct RegionId.
Here is the code for the proc (which works). To actually get to my question: is there a better way to write this? If I extend this table in the future, I'm just going to keep multiplying the number of UNION statements, so it isn't really scalable.
**Actual stored procedure**
```
CREATE PROC up_GetClientUrls
(
@ClientId INT
,@CountryId INT
,@RegionId INT
,@LanguageId INT
)
AS
BEGIN
SELECT TOP 1
prioritised.ClientId
,prioritised.CountryId
,prioritised.RegionId
,prioritised.LanguageId
,prioritised.URL
FROM
(
SELECT
c.ClientId
,c.CountryId
,c.RegionId
,c.LanguageId
,c.URL
,1 [priority]
FROM ClientUrls c
WHERE c.ClientId = @ClientId
AND c.CountryId = @CountryId
AND c.RegionId = @RegionId
AND c.LanguageId = @LanguageId
UNION
SELECT
c.ClientId
,c.CountryId
,c.RegionId
,c.LanguageId
,c.URL
,2 [priority]
FROM ClientUrls c
WHERE c.ClientId = @ClientId
AND c.CountryId IS NULL
AND c.RegionId = @RegionId
AND c.LanguageId = @LanguageId
UNION
SELECT
c.ClientId
,c.CountryId
,c.RegionId
,c.LanguageId
,c.URL
,3 [priority]
FROM ClientUrls c
WHERE c.ClientId = @ClientId
AND c.CountryId = @CountryId
AND c.RegionId IS NULL
AND c.LanguageId = @LanguageId
UNION
SELECT
c.ClientId
,c.CountryId
,c.RegionId
,c.LanguageId
,c.URL
,4 [priority]
FROM ClientUrls c
WHERE c.ClientId = @ClientId
AND c.CountryId IS NULL
AND c.RegionId IS NULL
AND c.LanguageId = @LanguageId
UNION
SELECT
c.ClientId
,c.CountryId
,c.RegionId
,c.LanguageId
,c.URL
,5 [priority]
FROM ClientUrls c
WHERE c.ClientId = @ClientId
AND c.CountryId = @CountryId
AND c.RegionId = @RegionId
AND c.LanguageId IS NULL
UNION
SELECT
c.ClientId
,c.CountryId
,c.RegionId
,c.LanguageId
,c.URL
,6 [priority]
FROM ClientUrls c
WHERE c.ClientId = @ClientId
AND c.CountryId IS NULL
AND c.RegionId = @RegionId
AND c.LanguageId IS NULL
UNION
SELECT
c.ClientId
,c.CountryId
,c.RegionId
,c.LanguageId
,c.URL
,7 [priority]
FROM ClientUrls c
WHERE c.ClientId = @ClientId
AND c.CountryId IS NULL
AND c.RegionId IS NULL
AND c.LanguageId IS NULL
) prioritised
ORDER BY prioritised.[Priority]
END
```
|
This is easy (if I understand you correctly). You can do it with very little code, and it would be easy to extend if needed.
Here is a working example
```
--Make a table
CREATE TABLE #ClientUrls (ClientId INT NOT NULL,CountryId INT NULL,RegionId INT NULL,LanguageId INT NULL,URL NVARCHAR(2048) NOT NULL)
--Put some data into it
INSERT INTO #ClientUrls (ClientId, CountryId, RegionId, LanguageId, URL)
VALUES
(1,1,1,1,'http://www.Website.com'),
(1,1,1,NULL,'http://www.Otherwebsite.com'),
(1,1,NULL,2,'http://www.Anotherwebsite.com')
--This would all be in your proc
----------------------------------------------
DECLARE @ClientId INT = 1
DECLARE @CountryId INT = 1
DECLARE @RegionId INT = 1
DECLARE @LanguageId INT = 2
--This is the interesting bit
----------------------------------------------
SELECT TOP 1 C.*
FROM #ClientUrls AS C
ORDER BY
--Order the ones with the best hit count near the top
IIF(ISNULL(C.ClientId, @ClientId) = @ClientId ,1,0) +
IIF(ISNULL(C.CountryId, @CountryId) = @CountryId ,2,0) +
IIF(ISNULL(C.RegionId, @RegionId) = @RegionId ,4,0) +
IIF(ISNULL(C.LanguageId,@LanguageId) = @LanguageId,8,0) DESC,
--Order the ones with the least nulls of each hit count near the top
IIF(C.ClientId IS NULL,0,1) +
IIF(C.CountryId IS NULL,0,2) +
IIF(C.RegionId IS NULL,0,4) +
IIF(C.LanguageId IS NULL,0,8) DESC
DROP TABLE #ClientUrls
```
That's it. In older versions of SQL Server you can't use `IIF`, but you can replace it with a `CASE` expression if needed.
It works like this.
Each matching item is given a value (a bit like a bit in a binary number).
Then, for each item, we use that value if it matches, or 0 if it doesn't.
By adding up the total, we always pick the best combination of matches.
```
value 1 2 4 8 Total value
+---------+----------+-----------+----------+------------+
| Ranking | ClientId | CountryId | RegionId | LanguageId |
+---------+----------+-----------+----------+------------+
| 1 | NOT NULL | NOT NULL | NOT NULL | NOT NULL | 15
| 2 | NOT NULL | NULL | NOT NULL | NOT NULL | 13
| 3 | NOT NULL | NOT NULL | NULL | NOT NULL | 11
| 4 | NOT NULL | NULL | NULL | NOT NULL | 9
| 5 | NOT NULL | NOT NULL | NOT NULL | NULL | 7
| 6 | NOT NULL | NULL | NOT NULL | NULL | 5
| 7 | NOT NULL | NULL | NULL | NULL | 1
+---------+----------+-----------+----------+------------+
```
I just updated this to make sure that you get the non-null version over a null option.
If you edit the query to return more than the top 1, you can see the items in the correct order, i.e. if you change the language from 2 to 1, you will get the 1,1,1,1 line over the 1,1,1,NULL option.
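To sanity-check the scoring on the sample rows, here is the same ordering replayed outside SQL Server (a sketch using SQLite through Python, with `CASE` expressions standing in for `IIF`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE ClientUrls (ClientId INT, CountryId INT, RegionId INT, LanguageId INT, URL TEXT);
INSERT INTO ClientUrls VALUES
  (1, 1, 1,    1,    'http://www.Website.com'),
  (1, 1, 1,    NULL, 'http://www.Otherwebsite.com'),
  (1, 1, NULL, 2,    'http://www.Anotherwebsite.com');
""")

params = {"client": 1, "country": 1, "region": 1, "language": 2}
row = con.execute("""
    SELECT URL FROM ClientUrls c
    ORDER BY
      -- best hit count first (NULL counts as a wildcard match)
      (CASE WHEN COALESCE(c.ClientId,   :client)   = :client   THEN 1 ELSE 0 END) +
      (CASE WHEN COALESCE(c.CountryId,  :country)  = :country  THEN 2 ELSE 0 END) +
      (CASE WHEN COALESCE(c.RegionId,   :region)   = :region   THEN 4 ELSE 0 END) +
      (CASE WHEN COALESCE(c.LanguageId, :language) = :language THEN 8 ELSE 0 END) DESC,
      -- then prefer the row with the fewest NULLs
      (CASE WHEN c.ClientId   IS NULL THEN 0 ELSE 1 END) +
      (CASE WHEN c.CountryId  IS NULL THEN 0 ELSE 2 END) +
      (CASE WHEN c.RegionId   IS NULL THEN 0 ELSE 4 END) +
      (CASE WHEN c.LanguageId IS NULL THEN 0 ELSE 8 END) DESC
    LIMIT 1
""", params).fetchone()
print(row[0])  # http://www.Anotherwebsite.com
```

Both the (1,1,1,NULL) and (1,1,NULL,2) rows score 15 on the first key; the tie-break on non-null columns then picks the row matching the requested language.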
|
Not tested, but you could do something like this:
```
SELECT TOP 1 c.ClientId,
c.CountryId,
c.RegionId,
c.LanguageId,
c.URL
FROM ClientUrls c
ORDER BY CASE
WHEN c.ClientId = @ClientId
THEN 1000
ELSE 0
END +
CASE
WHEN c.CountryId = @CountryId
THEN 200
WHEN c.CountryId IS NULL
THEN 100
ELSE 0
END +
CASE
WHEN c.RegionId = @RegionId
THEN 20
WHEN c.RegionId IS NULL
THEN 10
ELSE 0
END +
CASE
WHEN c.LanguageId = @LanguageId
THEN 2
WHEN c.LanguageId IS NULL
THEN 1
ELSE 0
END DESC
```
By giving a value to each match and selecting the highest total, you can reduce the code needed. You will increase the number of CASE expressions, instead of the number of UNIONs.
This could also be a function instead of a stored procedure, so it could be used more easily in other queries.
|
Stored proc to return data based on nullable columns by a priority
|
[
"",
"sql",
"sql-server",
"t-sql",
"stored-procedures",
""
] |
I have a query in `postgresql` which generates:
```
statusid,qty
17 25
30
17 45
18 20
```
I want to remove all rows whose `statusid` is `deleted` (`17`) and keep rows with `NULL`.
What I did is:
```
WHERE statusid not in (select statusid from statuses where statusname='deleted')
```
What I expected is:
```
statusid,qty
30
18 20
```
but what I got is:
```
statusid,qty
18 20
```
Is there another operator that can get this desired result?
|
You cannot compare `NULL` with any value (and `NOT IN` performs such comparisons); you have to use `IS NULL`:
```
WHERE statusid IS NULL OR statusid not in (select statusid from statuses where statusname='deleted')
```
`NULL` is neither equal nor unequal to any other value, not even to `NULL` itself. The result of the comparison is undefined.
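The difference is easy to demonstrate on a toy copy of the data (a sketch using SQLite through Python; the table names are hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders (statusid INTEGER, qty INTEGER);
INSERT INTO orders VALUES (17, 25), (NULL, 30), (17, 45), (18, 20);
CREATE TABLE statuses (statusid INTEGER, statusname TEXT);
INSERT INTO statuses VALUES (17, 'deleted'), (18, 'open');
""")

# NOT IN alone: NULL NOT IN (17) evaluates to UNKNOWN,
# which WHERE treats like FALSE, so the NULL row is silently dropped.
rows = con.execute(
    "SELECT statusid, qty FROM orders WHERE statusid NOT IN "
    "(SELECT statusid FROM statuses WHERE statusname = 'deleted')"
).fetchall()
print(rows)  # only (18, 20)

# An explicit IS NULL check keeps the NULL row.
rows2 = con.execute(
    "SELECT statusid, qty FROM orders WHERE statusid IS NULL OR statusid NOT IN "
    "(SELECT statusid FROM statuses WHERE statusname = 'deleted')"
).fetchall()
print(rows2)  # (None, 30) and (18, 20)
```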
|
Add `is null` explicitly:
```
<your query> or statusid is null
```
|
How to handle NULL in Where?
|
[
"",
"sql",
"postgresql",
""
] |
My SQL query needs to return a list of values alongside the date, but with my limited knowledge I have only been able to get this far.
This is my SQL:
```
select lsu_students.student_grouping,lsu_attendance.class_date,
count(lsu_attendance.attendance_status) AS count
from lsu_attendance
inner join lsu_students
ON lsu_students.student_grouping="Central1A"
and lsu_students.student_id=lsu_attendance.student_id
where lsu_attendance.attendance_status="Present"
and lsu_attendance.class_date="2015-02-09";
```
This returns:
```
student_grouping class_date count
Central1A 2015-02-09 23
```
I want it to return:
```
student_grouping class_date count
Central1A 2015-02-09 23
Central1A 2015-02-10 11
Central1A 2015-02-11 21
Central1A 2015-02-12 25
```
This query gets the list of the dates according to the student grouping:
```
select distinct(class_date)from lsu_attendance,lsu_students
where lsu_students.student_grouping like "Central1A"
and lsu_students.student_id = lsu_attendance.student_id
order by class_date
```
|
Just take out `and lsu_attendance.class_date="2015-02-09"` or change it to a range, and then add (at the end) `GROUP BY lsu_students.student_grouping,lsu_attendance.class_date`.
|
I think you just want a `group by`:
```
select s.student_grouping, a.class_date, count(a.attendance_status) AS count
from lsu_attendance a inner join
lsu_students s
ON s.student_grouping = 'Central1A' and
s.student_id = a.student_id
where a.attendance_status = 'Present'
group by s.student_grouping, a.class_date;
```
Comments:
* Use single quotes for string constants, unless you have a good reason not to.
* If you want a range of class dates, then use a `where` with appropriate filtering logic.
* Notice the table aliases. The query is easier to write and to read.
* I added student grouping to the `group by`. This would be required by any SQL engine other than MySQL.
|
Complex SQL Select query with inner join
|
[
"",
"mysql",
"sql",
""
] |
I have `SELECT` query below:
```
select * from table t
Where t.policy_date = @var1 and t.insurer_id = @var2
```
Here `@var1` could be empty, and `@var2` could also be empty. If `@var1` is empty, the condition must be only `t.insurer_id = @var2`; if `@var2` is empty, the condition must be `t.policy_date = @var1`. They can also both be empty, or both be non-empty.
`@var1` and `@var2` are parameters that I will pass to the procedure.
Question: how do I write a query that satisfies these conditions?
|
Use this:
```
select * from table t
Where (@var1 IS NULL OR t.policy_date = @var1) and (@var2 IS NULL OR t.insurer_id = @var2)
```
when `@var1` is null, the first part of the where clause becomes `true`, so only the second part has to be evaluated, and vice versa.
**Note:** If you mean `''` by "empty", use `@var = ''` instead of `@var IS NULL`.
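A quick way to see the short-circuit behaviour is to run the pattern against a toy table (a sketch using SQLite through Python; the table and parameter names are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE policy (policy_date TEXT, insurer_id INTEGER);
INSERT INTO policy VALUES ('2015-01-01', 1), ('2015-02-01', 2);
""")

def get_policies(var1=None, var2=None):
    # Each (param IS NULL OR col = param) term becomes TRUE when the
    # parameter is not supplied, i.e. that filter is skipped entirely.
    return con.execute(
        "SELECT policy_date, insurer_id FROM policy "
        "WHERE (:v1 IS NULL OR policy_date = :v1) "
        "  AND (:v2 IS NULL OR insurer_id = :v2)",
        {"v1": var1, "v2": var2},
    ).fetchall()

print(get_policies())        # both rows: no filtering
print(get_policies(var2=2))  # only the insurer_id = 2 row
```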
|
```
select * from table t
where (@var1 is null or t.policy_date = @var1)
and (@var2 is null or t.insurer_id = @var2)
```
|
Condition in SQL statement
|
[
"",
"sql",
"sql-server-2014",
""
] |
I need to query a table in order to return rows, but I'm not able to query the table correctly. Here is my table view:
```
Id Name Date Subject TrackingToken RegardingObjectId Type TypeName
1 XXXX 8/26/2015 RE: XXXXXX CRM:0030062 496BF810-4DBE-E311-9357-00505686395E 112 RE: YYYY
1 XXXX 8/27/2015 RE: XXXXXX CRM:0030055 AA8C2F71-CDD1-E311-894A-005056863ADA 112 RE: YYYY
1 XXXX 8/28/2015 RE: XXXXXX CRM:0030055 4DF02C89-2FBE-E311-9357-00505686395E 112 RE: YYYY
1 XXXX 8/29/2015 RE: XXXXXX CRM:0030049 496BF810-4DBE-E311-9357-00505686395E 112 RE: YYYY
1 XXXX 8/30/2015 RE: XXXXXX CRM:0030049 06393EF9-71CC-E311-894A-005056863ADA 112 RE: YYYY
1 XXXX 8/31/2015 RE: XXXXXX CRM:0030047 8BE51823-52BE-E311-9357-00505686395E 112 RE: YYYY
1 XXXX 9/1/2015 RE: XXXXXX CRM:0030003 6ABE11CA-BABF-E311-89E9-005056863ADA 112 RE: YYYY
```
The result set should return:
```
Id Name Date Subject TrackingToken RegardingObjectId Type TypeName
1 XXXX 8/27/2015 RE: XXXXXX CRM:0030055 AA8C2F71-CDD1-E311-894A-005056863ADA 112 RE: YYYY
1 XXXX 8/28/2015 RE: XXXXXX CRM:0030055 4DF02C89-2FBE-E311-9357-00505686395E 112 RE: YYYY
1 XXXX 8/29/2015 RE: XXXXXX CRM:0030049 496BF810-4DBE-E311-9357-00505686395E 112 RE: YYYY
1 XXXX 8/30/2015 RE: XXXXXX CRM:0030049 06393EF9-71CC-E311-894A-005056863ADA 112 RE: YYYY
```
In other words: Select all records where column TrackingToken is repeating and RegardingObjectId has different value.
Current Query:
```
select [OwnerId],[OwnerIdName],[CreatedOn],[Subject],
[TrackingToken],[RegardingObjectId],
[RegardingObjectTypeCode],[RegardingObjectIdName]
from [TableX].[dbo].[Email] a
where not exists (select [TrackingToken], [RegardingObjectId]
from [TableX].[dbo].[Email] b
where a.[TrackingToken] = b.[TrackingToken]
and a.[RegardingObjectId] = b.[RegardingObjectId]
AND RegardingObjectTypeCode = 112
group by [TrackingToken],[RegardingObjectId]
having count(*)> 1)
and a.TrackingToken in (select TrackingToken
from [TableX].[dbo].[Email]
group by TrackingToken
having count(*) > 1)
and a.RegardingObjectId is not null
and a.RegardingObjectTypeCode = 112
and a.TrackingToken is not null
order by a.TrackingToken desc
```
|
You can use the analytic version of COUNT to count records over a certain range (within the TrackingToken, in your case), then keep only those records where the count of distinct RegardingObjectId values is greater than one.
```
select *
from
(
select mytable.*
, count(distinct regardingobjectid) over (partition by trackingtoken) as cnt
from mytable
) counted
where cnt > 1;
```
EDIT: As Felix Pamittan pointed out, SQL Server doesn't fully support `COUNT` as a window function: you cannot use `DISTINCT` with it. So here is the same with the normal `COUNT` aggregation:
```
select mytable.*
from mytable
where
(
select count(distinct token.regardingobjectid)
from mytable token
where token.trackingtoken = mytable.trackingtoken
) > 1;
```
Or without a correlated subquery, but a derived table instead:
```
select mytable.*
from mytable
join
(
select trackingtoken
from mytable
group by trackingtoken
having count(distinct regardingobjectid) > 1
) tokens on tokens.trackingtoken = mytable.trackingtoken;
```
|
You can count the distinct number of [RegardingObjectId]:
```
select TrackingToken
from [TableX].[dbo].[Email]
group by TrackingToken
having count(distinct [RegardingObjectId]) > 1
```
|
How to select rows based on column value is repeating and has different values in another column
|
[
"",
"sql",
"sql-server",
"t-sql",
"select",
""
] |
I have a CASE statement in PL/SQL,
and for more than one case I am calling the same procedure. I don't want to call the same procedure again and again, so I want to club together the cases that call the same procedure.
```
LOOP
FETCH cu_my_cur INTO lr_my_rec;
EXIT WHEN cu_my_cur%NOTFOUND;
CASE lr_my_rec.method_name
WHEN 'ProductName' THEN
-- population record with product name , product version and product type
p_required_det(pn_product_reference => pr_mi_exits.policy_reference
,pv_product_name => pr_out_rec.product_name
,pn_product_version => pr_out_rec.product_version
,pv_product_type => pr_out_rec.product_type);
WHEN 'ProductVersion' THEN
-- population record with product name , product version and product type
p_required_det(pn_product_reference => pr_mi_exits.policy_reference
,pv_product_name => pr_out_rec.product_name
,pn_product_version => pr_out_rec.product_version
,pv_product_type => pr_out_rec.product_type);
WHEN 'ProductType' THEN
-- population record with product name , product version and product type
p_required_det(pn_product_reference => pr_mi_exits.policy_reference
,pv_product_name => pr_out_rec.product_name
,pn_product_version => pr_out_rec.product_version
,pv_product_type => pr_out_rec.product_type);
ELSE
NULL;
END CASE;
END LOOP;
```
I want something like this:
```
CASE lr_my_rec.method_name
WHEN 'ProductName' OR 'ProductVersion' OR 'ProductType' THEN
-- population record with product name , product version and product type
p_required_det(pn_product_reference => pr_mi_exits.policy_reference
,pv_product_name => pr_out_rec.product_name
,pn_product_version => pr_out_rec.product_version
,pv_product_type => pr_out_rec.product_type);
ELSE
NULL;
END CASE;
```
But when I try this, I get an error:
Expression is of wrong type.
Can you please help me get it done?
Note: I have to use CASE only;
I can't use IF/ELSE.
|
Just replace your `case` as below:
```
CASE WHEN lr_my_rec.method_name in ('ProductName','ProductVersion','ProductType') THEN
-- population record with product name , product version and product type
p_required_det(pn_product_reference => pr_mi_exits.policy_reference
,pv_product_name => pr_out_rec.product_name
,pn_product_version => pr_out_rec.product_version
,pv_product_type => pr_out_rec.product_type);
WHEN `Other condition` THEN
----DO for other cases
ELSE
NULL;
END CASE;
```
EDIT:
From the [documentation](https://dev.mysql.com/doc/refman/5.0/en/case.html), there are two syntaxes for `CASE`:
```
CASE case_value
WHEN when_value THEN statement_list
[WHEN when_value THEN statement_list] ...
[ELSE statement_list]
END CASE
```
Or:
```
CASE
WHEN search_condition THEN statement_list
[WHEN search_condition THEN statement_list] ...
[ELSE statement_list]
END CASE
```
> For the first syntax, **case\_value** is an expression. This value is compared to the when\_value expression in each WHEN clause until one of them is equal
>
> For the second syntax, each WHEN clause **search\_condition** expression is evaluated until one is true, at which point its corresponding THEN clause statement\_list executes.
Here we want the second form, since we are using a search condition. So if you want to do this, you have to change the `CASE` structure.
|
```
declare
V_TEST varchar2(10) := 'a';
V_TEST2 varchar2(10) := 'a';
begin
case
when v_test in ('a','b','c')then
null;
when V_TEST2 in ('a','b','c')then
null;
else
null;
end case;
end;
```
|
how to club multiple cases with OR in case statement in plsql
|
[
"",
"sql",
"oracle",
"plsql",
""
] |
I have a table with team names, the date the team was created, and the date the team was dissolved:
```
TeamName TeamStartDate TeamEndDate
Business Analysis 2012-12-31 00:00:00.000 2013-06-30 00:00:00.000
Business Systems 2012-06-18 00:00:00.000 2015-01-31 00:00:00.000
Business Systems and Portfolio 2012-12-31 00:00:00.000 2099-12-31 00:00:00.000
Data Administration/eCommerce/Testing 2012-10-29 00:00:00.000 2013-10-10 00:00:00.000
Data Solutions 2013-10-11 00:00:00.000 2099-12-31 00:00:00.000
Data Solutions-Reporting 2012-12-31 00:00:00.000 2013-10-10 00:00:00.000
```
Some teams get renamed (as in the case of Business Systems and Portfolio) and I want to be able to select the correct team for a specific date. For example, if my report is to run for 1st August 2015, I want to see "Business Systems and Portfolio", but if the report is to run for 12th December 2014, I want to see "Business Systems". I've been trying to figure out how to do this, but can't quite get there. Can anyone help?
|
The answer to your question is: **unable to determine; not enough information.**
You could have renamed "Business Analysis" to "Business Systems"... or renamed it to "Pie", for all we know. You need an identifier that tells us that both names actually belong to the same team.
If the data included a team's unique identifier, then the following example would provide the oldest name for a given team:
```
DECLARE @teams TABLE(TeamID INT, TeamName varchar(100), TeamStartDate date, TeamEndDate date)
INSERT INTO @teams VALUES(1, 'Business Analysis ','2012-12-31 00:00:00.000','2013-06-30 00:00:00.000')
INSERT INTO @teams VALUES(2, 'Business Systems ','2012-06-18 00:00:00.000','2015-01-31 00:00:00.000')
INSERT INTO @teams VALUES(2, 'Business Systems and Portfolio ','2012-12-31 00:00:00.000','2099-12-31 00:00:00.000')
INSERT INTO @teams VALUES(3,'Data Administration/eCommerce/Testing','2012-10-29 00:00:00.000','2013-10-10 00:00:00.000')
INSERT INTO @teams VALUES(4,'Data Solutions ','2013-10-11 00:00:00.000','2099-12-31 00:00:00.000')
INSERT INTO @teams VALUES(5, 'Data Solutions-Reporting ','2012-12-31 00:00:00.000','2013-10-10 00:00:00.000')
SELECT *
FROM ( SELECT TeamName, TeamStartDate, TeamEndDate, ROW_NUMBER() OVER (PARTITION BY TeamID ORDER BY TeamStartDate) num
FROM @teams
WHERE '2014-12-12 00:00:00' BETWEEN TeamStartDate AND TeamEndDate) a
WHERE a.num = 1
```
Results:
```
TeamName TeamStartDate TeamEndDate num
------------------- ------------- ----------- ---
Business Systems 2012-06-18 2015-01-31 1
Data Solutions 2013-10-11 2099-12-31 1
```
|
```
SELECT TeamName, TeamStartDate, TeamEndDate
FROM [TableName]
WHERE '2015-08-01 00:00:00' BETWEEN TeamStartDate AND TeamEndDate
```
01/08/2015 is the date you want to search on
|
How to select oldest item from records with overlapping dates
|
[
"",
"sql",
"sql-server",
""
] |
I have a column in Oracle DB which is `varchar2` data type. Typical value stored in this column is like `06/16/2015 02:14:18 AM`.
I am trying to get all records where this column holds a date after 1st August 2015.
```
select *
from MYTABLE
where to_date(substr(MYCOLUMN,1,10),'dd-mm-yyyy') > to_date('01-08-2015','dd-mm-yyyy');
```
But I am getting `ORA-01843`. Where am I going wrong?
|
Respect the format in your VARCHAR
```
....where to_date(substr(MYCOLUMN,1,10),'mm/dd/yyyy')
```
|
> I have a column in Oracle DB which is varchar2 data type. Typical value stored in this column is like 06/16/2015 02:14:18 AM.
The first question is: why do you store a **DATE** as a **string**? Using appropriate **data types** is one of the most important parts of **database design** and **performance**.
Understand that a **DATE** doesn't carry the **format** you see; it is stored internally in `7 bytes`, Oracle's proprietary format. Storing a date as a **string** in order to have a fixed format is not recommended.
I would suggest first fixing the design so that you don't have this overhead every time you **compare dates**. In the long run it will help you.
**1. Add a new column as DATE data type.**
```
ALTER TABLE table_name
ADD new_column DATE;
```
**2. Update the new column.**
```
UPDATE table_name
SET new_column = TO_DATE(old_column, 'mm/dd/yyyy hh:mi:ss pm');
```
**3. DROP the old column.**
```
ALTER TABLE table_name
DROP COLUMN old_column;
```
**4. Rename the new column to old column name.**
```
ALTER TABLE table_name
RENAME COLUMN old_name to new_name;
```
Now, you could compare dates easily:
```
SELECT * FROM MYTABLE WHERE mycolumn > to_date('01-08-2015','dd-mm-yyyy');
```
This will also use any regular **index** on the date column.
**From performance point of view**:
If you don't fix it now, you will keep facing performance issues. Because the immediate fix of `SUBSTR` will not let you use any regular **index**, you need to create a **function-based index**.
|
ORA-01843: not a valid month error
|
[
"",
"sql",
"oracle",
"date",
"date-comparison",
""
] |
```
select Date(createdOn) Date, count(createdOn) "No of Questions"
from Questions
group by Date(createdOn);
```
```
Date         No of Questions
2015-08-25   3
2015-08-26   8
2015-06-27   13
2015-06-28   35
```
I am not an expert in MySQL. Based on my requirement, I want the number of questions asked on a specific day. When I execute the above query, I get results like the above.
**How do I get the result for only the current date, i.e. 2015-06-28?**
|
Instead of `Group By`, use a `where` condition like this:
```
select Date(createdOn) `Dated`, count(createdOn) "No of Questions"
from Questions
where Date(createdOn) = CURDATE();
```
Check whether you have data for `CURDATE()`, i.e. 2015-08-28; otherwise try it like this:
```
select Date(createdOn) `Dated`, count(createdOn) "No of Questions"
from Questions
where Date(createdOn) = '2015-08-26';
```
|
Try this:
```
select Date(createdOn) Date, count(createdOn) "No of Questions"
from Questions
group by Date(createdOn)
having Date(createdOn) = CURDATE();
```
|
SQL query to find number of questions asked on a specific day
|
[
"",
"mysql",
"sql",
""
] |
Hi, I have a string like this:
```
1;2;12;15;6;77;
```
which contains ids of table A,
and table A is
```
id setcolumn
1 false
2 false
3 false
...
```
and I want to set `setcolumn` to true for each row whose id is in my input string.
Can anyone help me with this?
|
You can do something like this:
---
**SAMPLE DATA**
```
CREATE TABLE #Test
(
Id NVARCHAR(100)
)
INSERT INTO #Test VALUES ('1;2;12;15;6;77')
CREATE TABLE #Test2
(
Id NVARCHAR(100),
Setcolumn NVARCHAR(100)
)
INSERT INTO #Test2 VALUES
(1, 'false'), (2, 'false'), (3, 'false'), (4, 'false')
```
---
**QUERY**
```
;WITH cte AS(
SELECT
Split.a.value('.', 'VARCHAR(100)') AS Data
FROM
(
SELECT
CAST ('<M>' + REPLACE(Id, ';', '</M><M>') + '</M>' AS XML) AS Data
FROM #Test
) AS A CROSS APPLY Data.nodes ('/M') AS Split(a)
)
UPDATE t
SET t.Setcolumn = 'true'
FROM cte
JOIN #Test2 t ON cte.Data = t.Id
SELECT *
FROM #Test2
```
---
**DEMO**
You can test it at `SQL FIDDLE`
|
Here is an example
```
CREATE TABLE #tt(id INT, flag BIT)
INSERT INTO #tt VALUES (1,0),(2,0),(3,0)
DECLARE @S VARCHAR(max),
@Split CHAR(1),
@X XML
SELECT @S = '1;2;3;4;5',
@Split = ';'
SELECT @X = CONVERT(xml,' <root> <s>' + REPLACE(@S,@Split,'</s> <s>') + '</s> </root> ')
UPDATE tbl
SET tbl.flag=1
FROM #tt tbl
INNER JOIN(
SELECT [Value] = T.c.value('.','varchar(20)')
FROM @X.nodes('/root/s') T(c))k on tbl.id = k.Value
```
|
Filter Columns which have id in splitted String in sqlserver 2008
|
[
"",
"sql",
"sql-server-2008",
""
] |
I have a table #tmpName with just one column, [Name].
Without using a cursor, I would like to take each row of #tmpName (which is technically one name), do something with that name, and then delete it from #tmpName.
I began with something like this, but it's very slow and I eventually get an out-of-memory exception, which doesn't make sense for a 700-row table. I assume it has to do with the fact that I'm not ordering when doing `DELETE TOP (1)`.
```
drop table #tmpName ;
with cte as (
select Name from [Employees]
group by Name
)
select Name into #tmpName From cte
declare @count int = 0
while @@rowcount <> 0
begin
select top 1 Name From #tmpName order by Name asc
--do stuff with top 1
delete top (1) from #tmpName
select @count = @count + 1
select @count
END
```
I figured then that I could use this DELETE to better remove the TOP 1 row, but I'm not sure how to process each row I delete.
```
WITH q AS
(
SELECT TOP 1 Name into @Name
FROM #tmpName
order by Name
)
DELETE
FROM q
```
Any help is appreciated.
|
Once you have data in the temp table, you could do something like this:
```
-- populate temp table
select Name INTO #tmpName
from [Employees]
group by Name
DECLARE @Name VARCHAR(100);
while EXISTS(SELECT * FROM #tmpName)
BEGIN
select top 1 @Name = Name From #tmpName
-- do stuff with top 1
delete from #tmpName WHERE Name = @Name
END
```
|
```
drop table #tmpName ;
select row_number() over(order by Name) as rn, Name
into #tmpName From [Employees];
declare @count int = (select count(*) from #tmpName)
declare @rn int = 1
while @count > 0
begin
select Name From #tmpName where rn = @rn
--do stuff with top 1
delete from #tmpName where rn = @rn
set @count = @count - 1
set @rn = @rn+1
select @count
end
```
A `cte` is not needed. You can try this.
|
Use top 1 and then delete?
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I'm using PostgreSQL.
I need to implement a function that merges two rows in a table.
Assume that I already have the id of the two rows and that it is OK to merge them.
**Example:**
```
id quantity isOK
5 20 FALSE
5 10 FALSE
6 30 TRUE
6 35 FALSE
```
after calling on id 5 the table will **CHANGED** to:
```
id quantity isOK
5 30 FALSE
6 30 TRUE
6 35 FALSE
```
If calling **again** on id 6 table will **CHANGED** to:
```
id quantity isOK
5 30 FALSE
6 65 TRUE
```
so basically each call takes two rows and makes one **combined** row out of them.
What I have so far simply adds the quantity of the first row to the second and deletes the first.
I need to modify my function so that a `BOOLEAN OR` operation is also applied to the `isOK` field.
Is there a way to do that without using `IF`?
This is how I did it with `IF`:
```
CREATE OR REPLACE FUNCTION func()
RETURNS integer AS
......
select isOK into x from A where firstrecord
if x then
update A set isOK=x where secondrecord
end if;
update quantity
delete the first record
return 0;
end;
$BODY$
LANGUAGE plpgsql VOLATILE
```
|
`EXISTS()` yields a boolean, which can be used in the result
```
-- The table
CREATE TABLE t_test(id INTEGER,value INTEGER, fld Boolean);
-- The data
INSERT INTO t_test (id, value, fld)
VALUES (1, 23, 'TRUE'), (1, 22, 'FALSE'), (2, 2, 'FALSE'), (2, 23, 'FALSE');
-- The query
SELECT t.id
, SUM(t.value)
, EXISTS( SELECT 1
FROM t_test x
WHERE x.id = t.id AND x.fld = True
) AS fld
FROM t_test t
GROUP by t.id
;
```
---
Another way is to cast to-and-fro to integer (booleans are unordered, so you cannot use max on them)
```
SELECT t.id
, sum(t.value) AS value
, MAX( t.fld::integer )::boolean AS fld
FROM t_test t
GROUP by t.id
;
```
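The integer-max trick is quick to sanity-check outside PostgreSQL (a sketch using SQLite through Python, where booleans are already stored as 0/1 integers, so no cast is needed; in PostgreSQL you would cast as above, or use the built-in `bool_or` aggregate):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t_test (id INTEGER, value INTEGER, fld BOOLEAN);
INSERT INTO t_test VALUES (1, 23, 1), (1, 22, 0), (2, 2, 0), (2, 23, 0);
""")

rows = con.execute("""
    SELECT id,
           SUM(value),
           MAX(fld)   -- behaves like boolean OR: 1 if any row in the group is true
    FROM t_test
    GROUP BY id
    ORDER BY id
""").fetchall()
print(rows)  # [(1, 45, 1), (2, 25, 0)]
```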
|
First, take the desired id as a parameter:
```
create function myfunc(selectedid integer) returns integer as
```
Declare two local variables:
```
declare mergedquantity integer; mergedisOK boolean;
```
Then merge the records:
```
select sum(quantity), case when sum(isOK::int) > 0 then true else false end
into mergedquantity, mergedisOK
from a
where id = selectedid;
```
Now you can delete old ones and insert a merged record:
```
delete from a where id = selectedid;
insert into a values(selectedid, mergedquantity, mergedisOK);
```
|
How to implement a boolean OR update instraction?
|
[
"",
"sql",
"postgresql",
""
] |
I have a table that contains a list of properties that will be visible to the user, but I inserted data in the wrong format. I want to swap the values of `[ActionName]` and `[ControllerName]` without creating a temporary table.
```
SELECT [MenuID]
,[MenuName]
,[MenuMasterID]
,[ActionName]
,[ControllerName]
,[ImageClassName]
,[MainOrderID]
,[SubOrderID]
FROM [DEV_CMS_Medical].[dbo].[SEC_Menus]
```
|
Use assignment/aliasing if you want to swap only when querying:
```
SELECT
t.[MenuID]
,t.[MenuName]
,t.[MenuMasterID]
,[ActionName] = t.[ControllerName]
,[ControllerName] = t.[ActionName]
,t.[ImageClassName]
,t.[MainOrderID]
,t.[SubOrderID]
FROM [DEV_CMS_Medical].[dbo].[SEC_Menus] t;
```
Use `UPDATE` when you want to swap the underlying data:
```
UPDATE [DEV_CMS_Medical].[dbo].[SEC_Menus]
SET [ActionName] = [ControllerName]
,[ControllerName] = [ActionName];
```
|
```
UPDATE [DEV_CMS_Medical].[dbo].[SEC_Menus]
SET ActionName = ControllerName,
ControllerName = ActionName
```
Or, if you want to show the data without any manipulation, just use alias names for the required columns.
|
swapping values of database table
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I'm trying to replace multiple white spaces and carriage returns with a single white space in SQL.
I've come up with following so far:
```
select
replace(
replace(
replace(
replace(
replace(
replace(
LTrim(RTrim(' 6 Spaces 6 Spaces.
abcde ')),char(13),''),--Trim the field
char(10),''),
char(13) + char(10),''),
' ',' |'),
--Mark double spaces
'| ',''),
--Delete double spaces offset by 1
'|','')
```
Now the expected output for the above string is:
6 Spaces 6 Spaces. abcde
However I'm getting `6 Spaces 6 Spaces. [multiple white spaces] abcde` (Stack Overflow is trimming the white spaces here, ergo I have to write it out).
This seems like a conundrum to me.
What's wrong?
|
Well I'll just put this out there as an alternative since I just finished it up the second an answer was accepted.
This will also give you the results you desired by trimming and replacing in the correct orders:
```
Select Replace(replace(replace(replace(
RTRIM(LTRIM(this)),
char(13) + char(10), ''),
' ', ' |'),
'| ', ''),
'|','')
from
(select ' 6 Spaces 6 Spaces.
abcde ' as this) a
```
|
This type of problem is tricky to solve with simple replace functionality but becomes very easy with a regex function.
Sadly, Microsoft hasn't included this as a built-in function for SQL Server, but with some SQLCLR work it can be made available.
[SQL Server Regular expressions in T-SQL](https://stackoverflow.com/questions/194652/sql-server-regular-expressions-in-t-sql) has an example of a SQLCLR function to search strings but here you would need a regex\_replace function
```
using System.Data.SqlTypes;
namespace Public.SQLServer.SQLCLR
{
public class Regex
{
#region Regex_IsMatch Function
/// <summary>
/// Searches an expression for another regular expression and returns a boolean value of true if found.
/// </summary>
/// <param name="expressionToFind">Is a character expression that contains the sequence to be found. This expression leverages regular expression pattern matching syntax. This expression may also be simple expression.</param>
/// <param name="expressionToSearch">Is a character expression to be searched.</param>
/// <param name="start_location">Is an integer expression at which the search starts. If start_location is not specified, is a negative number, or is 0, the search starts at the beginning of expressionToSearch.</param>
/// <returns>Bit.</returns>
[Microsoft.SqlServer.Server.SqlFunction(IsDeterministic = true, IsPrecise = true)]
public static SqlBoolean Regex_IsMatch(SqlString expressionToFind, SqlString expressionToSearch, SqlInt32 start_location)
{
// Process expressionToFind parameter
string etf;
if (expressionToFind.IsNull)
{
return SqlBoolean.Null;
}
else if (expressionToFind.Value == string.Empty)
{
return new SqlBoolean(0);
}
else
{
etf = expressionToFind.Value;
}
// Process expressionToSearch parameter
string ets;
if (expressionToSearch.IsNull)
{
return SqlBoolean.Null;
}
else if (expressionToSearch.Value == string.Empty)
{
return new SqlBoolean(0);
}
else
{
ets = expressionToSearch.Value;
}
// Process start_location parameter
int sl;
if (start_location.IsNull)
{
sl = 0;
}
else if (start_location.Value < 1)
{
sl = 0;
}
else
{
sl = (int)start_location.Value -1;
if (sl > expressionToSearch.Value.Length + 1)
{
sl = expressionToSearch.Value.Length;
}
}
// execute the regex search
System.Text.RegularExpressions.Regex regex = new System.Text.RegularExpressions.Regex(etf);
return regex.IsMatch(ets, sl);
}
#endregion
#region Regex_Replace Function
/// <summary>
/// Replaces all occurrences of a specified regular expression pattern with another regular expression substitution.
/// </summary>
/// <param name="expression">Is the string expression to be searched.</param>
/// <param name="pattern">Is a character expression that contains the sequence to be replaced. This expression leverages regular expression pattern matching syntax. This expression may also be simple expression.</param>
/// <param name="replacement">Is a character expression that contains the sequence to be inserted. This expression leverages regular expression substitution syntax. This expression may also be simple expression.</param>
/// <returns>String of nvarchar(max), the length of which depends on the input.</returns>
[Microsoft.SqlServer.Server.SqlFunction(IsDeterministic = true, IsPrecise = true)]
public static SqlString Regex_Replace(SqlString expression, SqlString pattern, SqlString replacement)
{
// Process null inputs
if (expression.IsNull)
{
return SqlString.Null;
}
else if (pattern.IsNull)
{
return SqlString.Null;
}
else if (replacement.IsNull)
{
return SqlString.Null;
}
// Process blank inputs
else if (expression.Value == string.Empty)
{
return expression;
}
else if (pattern.Value == string.Empty)
{
return expression;
}
// Process replacement parameter
System.Text.RegularExpressions.Regex regex = new System.Text.RegularExpressions.Regex(pattern.Value);
return regex.Replace(expression.Value, replacement.Value);
}
#endregion
}
}
```
Once available you can achieve your result with a query like the following;
```
select [library].[Regex_Replace]('String with many odd spacing
issues.
!','\s{1,}',' ')
```
which returns
> String with many odd spacing issues. !
The expression `\s{1,}` means: match any whitespace character (`\s`) repeated one or more times (`{1,}`); each match is replaced with your single space character.
There is more to using SQLCLR than the code included here and further research into creating the assemblies and SQLCLR functions is required.
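Outside SQL Server, the same pattern is easy to verify with any regex engine. A minimal sketch in Python (the sample string is made up) showing what `\s{1,}`, equivalently `\s+`, does:

```python
import re

s = "String with many odd spacing\n   issues.\n  !"
# Replace every run of whitespace (spaces, tabs, newlines) with a single
# space, then trim the ends -- the same effect as the Regex_Replace call.
result = re.sub(r"\s+", " ", s).strip()
print(result)  # String with many odd spacing issues. !
```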
|
How to replace multiple white spaces and carriage returns with a single white space in SQL
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have two separate select queries I'm looking to combine into one query, meaning I'd like my output formatted like:
col 1 | col2
252 ---- 05
One idea is to write a CTE, and while these two are small queries, I have about 4 more like this for date ranges, which is why I thought to avoid using one or more CTEs to get the data.
Here are my select queries:
```
SELECT
count(*) as pastDueRepl
FROM TBLPTS_APPDATA
WHERE APPV_PTSSTATUS = '2'
AND (APPD_NEXTREPLDATE IS NOT NULL)
AND APPD_NEXTREPLDATE between DATEADD(Day,-30,GETDATE()) and GETDATE()
SELECT
count(*) as pastDueInsp
FROM TBLPTS_APPDATA
WHERE APPV_PTSSTATUS = '2'
AND (APPD_NEXTINSPDATE IS NOT NULL)
AND APPD_NEXTINSPDATE between DATEADD(Day,-30,GETDATE()) and GETDATE()
```
|
Add a `0` column to each query, then `SUM` over a `UNION ALL` of them:
```
SELECT SUM(pastDueRepl) as pastDueRepl, sum(pastDueInsp) as pastDueInsp FROM(
SELECT
count(*) as pastDueRepl , 0 as pastDueInsp
FROM TBLPTS_APPDATA
WHERE APPV_PTSSTATUS = '2'
AND (APPD_NEXTREPLDATE IS NOT NULL)
AND APPD_NEXTREPLDATE between DATEADD(Day,-30,GETDATE()) and GETDATE()
UNION ALL
SELECT
0 as pastDueRepl, count(*) as pastDueInsp
FROM TBLPTS_APPDATA
WHERE APPV_PTSSTATUS = '2'
AND (APPD_NEXTINSPDATE IS NOT NULL)
AND APPD_NEXTINSPDATE between DATEADD(Day,-30,GETDATE()) and GETDATE()
) t
```
|
This can be a good way to minimize the repetition and reduce the number of times you have to hardcode your date range boundaries:
```
with DateRangeBoundaries as (select DATEADD(Day,-30,GETDATE()) as LowerBoundDate,
GETDATE() as UpperBoundDate)
select count(case when t.APPD_NEXTREPLDATE between drb.LowerBoundDate and drb.UpperBoundDate
then 'X' end) as pastDueRepl,
count(case when t.APPD_NEXTINSPDATE between drb.LowerBoundDate and drb.UpperBoundDate
then 'X' end) as pastDueInsp
from TBLPTS_APPDATA t
join DateRangeBoundaries drb on 1=1
where t.APPV_PTSSTATUS = '2'
```
The above may provide you with more compact syntax. However, if you are relying on indexes on your date columns, then the above may actually hurt your performance. Make sure you profile this if you decide to try it.
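The conditional-count idea above can be checked quickly in any database. A tiny sketch using SQLite with made-up table and flag names (not the question's schema):

```python
import sqlite3

# COUNT(expr) counts only non-NULL values, so a CASE with no ELSE
# turns it into a conditional count.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (repl_due INTEGER, insp_due INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(1, 0), (1, 1), (0, 1), (0, 0)])
row = con.execute("""
    SELECT COUNT(CASE WHEN repl_due = 1 THEN 'X' END) AS pastDueRepl,
           COUNT(CASE WHEN insp_due = 1 THEN 'X' END) AS pastDueInsp
    FROM t
""").fetchone()
print(row)  # (2, 2)
```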
|
Rewrite into single query
|
[
"",
"sql",
"sql-server-2012",
"subquery",
"correlated-subquery",
""
] |
These are my tables:
```
Product:
ID PRICE NAME DESCRIPTION
1 100 laptop laptop_desc
2 200 mouse mouse_desc
Product_Translations:
PID LANG NAME DESCRIPTION
1 ch δΌεΎ δΌεΎδΌεΎ
```
Please don't worry about name and description in the Product table. We are keeping them to avoid a join if the default language is selected by the user.
Now I need to write a query to get all the products from **Product** according to the user's language, with a fallback: if no translations for name and description are found in **Product\_Translations**, get them from the **Product** table. I tried a couple of different ways, but couldn't make it work.
Update:
---
I require all the columns from the Product table (in this example I only gave 2, but my actual table has more columns). One more restriction is that I need to generate this query using the JPA Criteria API, so any SQL keywords not supported by JPA may not work for me.
Thanks for your help.
|
```
SELECT p.ID, p.Price,
COALESCE(pt.Name, p.Name) Name,
COALESCE(pt.Description, p.Description) Description
FROM Product p
INNER JOIN User u on u.ID = @MyUserID
LEFT JOIN Product_Translations pt ON pt.PID = p.ID AND pt.LANG = u.LANG
```
|
Use LEFT OUTER JOIN:
```
SELECT
p.id
,p.price
,ISNULL(t.name, p.name) AS translation
,ISNULL(t.description, p.description) AS description
FROM Product p
LEFT JOIN Translations t
ON p.id = t.pid
AND t.lang = ?
```
This will work on SQL Server if you use other DB change to COALESCE().
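The LEFT JOIN + COALESCE fallback is portable. A minimal runnable sketch using SQLite; the table and column names mirror the question, but the data is made up:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Product (id INTEGER, price INTEGER, name TEXT, description TEXT);
    CREATE TABLE Product_Translations (pid INTEGER, lang TEXT, name TEXT, description TEXT);
    INSERT INTO Product VALUES (1, 100, 'laptop', 'laptop_desc'), (2, 200, 'mouse', 'mouse_desc');
    INSERT INTO Product_Translations VALUES (1, 'ch', 'ch_laptop', 'ch_laptop_desc');
""")
# Product 1 has a translation and uses it; product 2 falls back to the
# default name/description from Product.
rows = con.execute("""
    SELECT p.id, p.price,
           COALESCE(t.name, p.name)               AS name,
           COALESCE(t.description, p.description) AS description
    FROM Product p
    LEFT JOIN Product_Translations t ON t.pid = p.id AND t.lang = 'ch'
    ORDER BY p.id
""").fetchall()
print(rows)  # [(1, 100, 'ch_laptop', 'ch_laptop_desc'), (2, 200, 'mouse', 'mouse_desc')]
```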
|
SQL query to get default values if translations are not found
|
[
"",
"sql",
"hsqldb",
""
] |
Here is my CREATE VIEW statement and the related error message, using MySQL/MySQL Workbench. The plain SELECT ... FROM ... WHERE works on its own. Any hints are appreciated. :)
```
CREATE VIEW [Products Above Average Price] AS
SELECT ProductName,Price
FROM Products
WHERE Price>(SELECT AVG(Price) FROM Products)
```
> Error Code: 1064. You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '[Products Above Average Price] AS #how to? SELECT ProductName,Price ' at line 1
|
You have to use *ANSI standard* double quotes `""` instead of `[]` brackets.
```
CREATE VIEW "Products Above Average Price" AS
SELECT ProductName,Price
FROM Products
WHERE Price>(SELECT AVG(Price) FROM Products)
```
|
Here is how view creation is supposed to look in MySQL:
```
Create View `MyViewName` as
Select
col1, col2, col3
From
myTable T
```
In your case, the problem occurs because of the `[]` brackets, which aren't used in MySQL. Replace your query with:
```
CREATE VIEW `Products Above Average Price` AS
SELECT ProductName,Price
FROM Products
WHERE Price>(SELECT AVG(Price) FROM Products)
```
|
MySQL create view error
|
[
"",
"mysql",
"sql",
""
] |
I am trying to remove the text between the square brackets, however it only seems to remove the brackets.
```
SELECT Replace(Replace(aud_desc, '[', ''), ']', '') from _audit
```
`aud_desc` is `This is [a] test`; however, with the above I have got it to show `This is a test`. I am not sure why it's not removing the text in the brackets as well.
Have I missed something?
|
Use the `REPLACE`, `LENGTH`, `LOCATE`, `REVERSE` & `SUBSTRING` functions.
```
SELECT REPLACE(aud_desc, SUBSTRING(aud_desc, LOCATE('[', aud_desc), LENGTH(aud_desc) - LOCATE(']', REVERSE(aud_desc)) - LOCATE('[', aud_desc) + 2), '') AS aud_desc
FROM _audit
```
Input:
aud\_desc
```
word [brakcet] word
[brakcet] word
word [brakcet]
```
Output:
aud\_desc
```
word word
word
word
```
SQL Fiddle: <http://sqlfiddle.com/#!9/178bb/1/0>
|
Here's a user-defined function that does it and works with multiple bracket pairs in the string:
```
DELIMITER //
CREATE FUNCTION REMOVE_BRACKETS(input TEXT CHARACTER SET utf8 COLLATE utf8_bin)
RETURNS TEXT CHARACTER SET utf8 COLLATE utf8_bin
BEGIN
DECLARE output TEXT CHARACTER SET utf8 COLLATE utf8_bin DEFAULT '';
DECLARE in_brackets BOOL DEFAULT FALSE;
DECLARE length INT;
IF(input IS NULL)
THEN
RETURN NULL;
END IF;
WHILE(TRUE) DO
SET length = LOCATE(CASE WHEN in_brackets THEN ']' ELSE '[' END, input);
IF(length = 0)
THEN
RETURN CONCAT(output, input);
END IF;
IF(in_brackets)
THEN
SET in_brackets = FALSE;
ELSE
SET output = CONCAT(output, SUBSTRING(input, 1, length - 1));
SET in_brackets = TRUE;
END IF;
SET input = SUBSTRING(input, length + 1);
END WHILE;
END //
DELIMITER ;
```
|
remove text between square brackets
|
[
"",
"mysql",
"sql",
""
] |
I have a result as follows
```
DATE EID TIME TYPE
2015-07-26 1 10:01:00 IN
2015-07-26 1 15:01:00 OUT
2015-07-26 1 18:33:00 IN
2015-07-26 1 23:11:00 OUT
```
I want to split IN and OUT into different columns, `ORDER BY date, eid, time`. The expected result should be as follows:
```
DATE EID IN TIME OUT TIME
2015-07-26 1 10:01:00 15:01:00
2015-07-26 1 18:33:00 23:11:00
```
This is what I tried so far
```
SELECT `date` AS 'DATE', `eid` AS 'EID',
CASE WHEN `type` = 'IN' THEN `time` END AS 'IN TIME',
CASE WHEN `type` = 'OUT' THEN `time` END AS 'OUT TIME'
FROM `attendance`
ORDER BY `date`, `eid`, `time`;
```
It's fetching some ridiculous result as follows
```
DATE EID IN TIME OUT TIME
2015-07-26 1 10:01:00 null
2015-07-26 1 null 15:01:00
2015-07-26 1 18:33:00 null
2015-07-26 1 null 23:11:00
```
## UPDATED:
This is my table structure
```
Field Type Null Key Default Extra
id int(10) unsigned NO PRI NULL auto_increment
eid int(10) unsigned NO NULL
time time NO 00:00:00
date date NO 0000-00-00
type enum('IN', 'OUT') NO NULL
state tinyint(1) unsigned NO 1
```
Here are more tuples...
```
DATE EID TIME TYPE
2015-07-26 1 10:01:00 IN
2015-07-26 1 15:01:00 OUT
2015-07-26 1 18:33:00 IN
2015-07-26 1 23:11:00 OUT
2015-07-26 3 09:42:00 IN
2015-07-26 3 15:29:00 OUT
2015-07-26 3 18:20:00 IN
2015-07-26 3 00:34:00 OUT
2015-07-26 6 14:16:00 IN
2015-07-26 6 23:08:00 OUT
2015-07-26 8 13:32:00 IN
2015-07-26 8 23:57:00 OUT
2015-07-26 12 09:14:00 IN
2015-07-26 12 15:07:00 OUT
2015-07-26 12 17:28:00 IN
2015-07-26 12 23:53:00 OUT
2015-07-26 13 13:47:00 IN
2015-07-26 13 23:25:00 OUT
2015-07-26 15 11:07:00 IN
2015-07-26 15 19:50:00 OUT
```
|
I have written a query for you. I am hoping it will solve your problem :
**SQL FOR TABLE SCHEMA**
```
CREATE TABLE `attendance` (
`date` date DEFAULT NULL,
`eid` int(11) DEFAULT NULL,
`time` time(6) DEFAULT NULL,
`type` varchar(5) DEFAULT NULL,
`id` int(11) NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=latin1;
```
**SQL Query FOR RESULT**
I have assumed that there is an auto increment column available in your table as `id`
```
SELECT * FROM (
SELECT a.`date` AS 'DATE', a.`eid` AS 'EID',
CASE WHEN `type` = 'IN' THEN a.`time` END AS 'IN TIME',
CASE WHEN `type` = 'IN' THEN
(select `time` from `attendance` where id > a.id AND `type` = 'OUT' LIMIT 1) END AS 'OUT TIME'
FROM `attendance` a
ORDER BY a.`date`, a.`eid`, a.`time`
) as t WHERE t.`IN TIME` IS NOT NULL;
```
|
Assuming the data are valid (there is not more than one `IN` between two `OUT`s), using correlated subqueries:
```
SELECT DATE, EID, `IN`, `OUT` FROM
(SELECT date AS 'DATE', eid AS 'EID',
(SELECT MAX(TIME) from attendance where
DATE = a.DATE AND TIME <= a.TIME AND TYPE = 'IN' AND EID = a.EID) as 'IN',
(SELECT MAX(TIME) from attendance where
DATE = a.DATE AND TIME <= a.TIME AND TYPE = 'OUT' AND EID = a.EID) as 'OUT',
TYPE
FROM attendance a
WHERE TYPE = 'OUT'
ORDER BY date, eid, time) t
```
here is a [fiddle](http://sqlfiddle.com/#!9/23548/41) for it.
Edit: I did not check the `EID`, here is the new [fiddle](http://sqlfiddle.com/#!2/da4a5d/4).
|
MySQL: splitting result into two different columns
|
[
"",
"mysql",
"sql",
""
] |
In my Employee table, I wanted to find the 3rd highest salary. Someone provided me with the following query to do this:
```
SELECT *
FROM employee C1
WHERE 3 = (SELECT Count(DISTINCT( C2.salary ))
FROM employee C2
WHERE C2.salary >= C1.salary)
```
This query works, but I don't know how it works. What kind of query is this?
|
As others have said, this type of query is called a correlated sub-query. It's a sub-query because there is a query within a query and it's correlated because the inner query references the outer query in its definition.
Consider the inner query:
```
SELECT Count(DISTINCT( C2.salary ))
FROM employee C2
WHERE C2.salary >= C1.salary
```
Conceptually, this inner query will be evaluated once for every row produced by the outer query before the `WHERE` clause is applied, basically once for every row in `employee`. It will produce a single value: the count of rows from `employee` whose salary is greater than or equal to the salary of the outer row.
The outer query will only return records where the value produced by the inner query is exactly 3. Assuming unique salary values, there is only one row from the `employee` table where there will be exactly 3 records with a salary value greater than or equal to it (the one row) and that one row is necessarily the third-highest salary value.
It's clever, but unnecessarily weird and probably not as optimal as something more straightforward.
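To see it work, here is the same correlated sub-query run against a toy employee table in SQLite (names and salaries are made up); the one row with exactly 3 distinct salaries greater than or equal to its own is the third-highest:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employee (name TEXT, salary INTEGER)")
con.executemany("INSERT INTO employee VALUES (?, ?)",
                [("ann", 900), ("bob", 800), ("cid", 700), ("dee", 600)])
# For cid (700) the distinct salaries >= 700 are {900, 800, 700}: count 3.
rows = con.execute("""
    SELECT * FROM employee c1
    WHERE 3 = (SELECT COUNT(DISTINCT c2.salary)
               FROM employee c2
               WHERE c2.salary >= c1.salary)
""").fetchall()
print(rows)  # [('cid', 700)]
```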
|
Maybe a better solution would have been
```
SELECT TOP 1 *
FROM (
SELECT TOP 3 * FROM employee ORDER BY Salary DESC
) t
ORDER BY Salary ASC
```
Easier to read and more efficient than a correlated sub-query.
|
What kind of query is this?
|
[
"",
"sql",
"sql-server",
""
] |
I have a table:
```
Contacts:
    Contact_ID (int)
    ContactName (nvarchar)
```
I am given a list of contact IDs to select from. Usually, I would just do:
```
SELECT *
FROM Contacts
WHERE Contact_ID IN (List of contact IDs)
```
The problem is, the list of contact IDs could potentially get very big, like 50k or more.
So my question is, Is there a way to deal with a large list of contact IDs without having to use the IN clause?
EDIT: I'm using Microsoft sql server. The query and the contact IDs are built during runtime and passed on to sqlCommand class (c#) to be executed.
|
I'd create a table type with a single column and a clustered primary key.
```
CREATE TYPE dbo.ContactId AS TABLE
(
ContactId INT NOT NULL PRIMARY KEY
);
```
Pass the values into the query using a table valued parameter.
Change your query to
```
SELECT *
FROM Contacts
WHERE contactID IN (SELECT y.contactID FROM @yourtablevaluedparameter y)
OPTION (RECOMPILE)
```
The `OPTION (RECOMPILE)` is there to get the number of rows taken into account as the optimal plan for 50K may well be different than for 1.
[You can find some example C# code for populating a TVP here](https://stackoverflow.com/a/10779567/73226)
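SQLite has no table-valued parameters, but the underlying idea, putting the IDs into a (temp) table and letting the optimizer join against it instead of parsing a huge `IN` list, can be sketched like this (all names and data here are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Contacts (Contact_ID INTEGER, ContactName TEXT)")
con.executemany("INSERT INTO Contacts VALUES (?, ?)",
                [(1, "Ann"), (2, "Bob"), (3, "Cid")])
# Stand-in for the table-valued parameter: a keyed temp table of IDs.
con.execute("CREATE TEMP TABLE wanted_ids (ContactId INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO wanted_ids VALUES (?)", [(1,), (3,)])
rows = con.execute("""
    SELECT c.Contact_ID, c.ContactName
    FROM Contacts c
    WHERE c.Contact_ID IN (SELECT ContactId FROM wanted_ids)
    ORDER BY c.Contact_ID
""").fetchall()
print(rows)  # [(1, 'Ann'), (3, 'Cid')]
```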
|
If you want performance, load the IDs into a temp table (here called `#ContactIds`) and use an `EXISTS` clause:
```
SELECT c.Contact_ID, c.ContactName
FROM Contacts c
WHERE EXISTS (SELECT 1
              FROM #ContactIds i
              WHERE i.ContactId = c.Contact_ID)
```
|
SQL, trying to get rid of large IN clause
|
[
"",
"sql",
"sql-server",
"sql-in",
""
] |
Here is my table "puntos":
```
Equipo Liga Puntos
---------------------------
At. Madrid 1 68
Espanyol 1 64
Barcelona 1 63
Real Madrid 1 61
CastellΓ³n 1 48
Murcia 2 78
Elche 2 70
Sevilla 2 60
Valladolid 2 57
```
I just want to get two rows per distinct value of the attribute "Liga": the two that have the greatest values of "Puntos" within each "Liga".
So it should return something like that:
```
Equipo Liga Puntos
---------------------------
At. Madrid 1 68
Espanyol 1 64
Murcia 2 78
Elche 2 70
```
I get the result with that query:
```
SELECT * FROM puntos GROUP BY Liga HAVING max(puntos)
UNION
SELECT p1.Equipo, p1.Liga, max(p1.puntos) FROM puntos p1, (SELECT * FROM puntos GROUP BY Liga HAVING max(puntos) ORDER BY Liga,puntos DESC) p2
WHERE p1.Liga=p2.Liga AND p1.puntos<p2.puntos GROUP BY Liga
ORDER BY Liga,puntos DESC
```
But that solution does not work if the selected values of the "puntos" column are equal.
Is there any other way to do it that works for any values, equal or not?
|
Another option is to use [9.4 User-Defined Variables](http://dev.mysql.com/doc/refman/5.6/en/user-variables.html):
```
SELECT
`der`.`Equipo`,
`der`.`Liga`,
`der`.`Puntos`
FROM (
SELECT
`p`.`Equipo`,
`p`.`Liga`,
`p`.`Puntos`,
IF(@`prev_liga` != `p`.`Liga`,
@`rownum` := 1,
@`rownum` := @`rownum` + 1
) `rank`,
@`prev_liga` := `p`.`Liga`
FROM (
SELECT
`Equipo`,
`Liga`,
`Puntos`
FROM `puntos`
GROUP BY `Liga`, `Puntos`
ORDER BY `Liga`, `Puntos` DESC
) `p`, (SELECT
@`rownum` := NULL,
@`prev_liga` := 0) `r`
) `der`
WHERE `der`.`rank` <= 2
ORDER BY `der`.`Liga`, `der`.`rank`;
```
`SQL Fiddle demo`
If two teams from the same "Liga" have equal "Puntos":
```
SELECT
`der`.`Equipo`,
`der`.`Liga`,
`der`.`Puntos`
FROM (
SELECT
`p`.`Equipo`,
`p`.`Liga`,
`p`.`Puntos`,
IF(@`prev_liga` != `p`.`Liga`,
@`rownum` := 1,
@`rownum` := @`rownum` + 1
) `rank`,
@`prev_liga` := `p`.`Liga`
FROM (
SELECT
`Equipo`,
`Liga`,
`Puntos`
FROM `puntos`
ORDER BY `Liga`, `Puntos` DESC
) `p`, (SELECT
@`rownum` := NULL,
@`prev_liga` := 0) `r`
) `der`
WHERE `der`.`rank` <= 2
ORDER BY `der`.`Liga`, `der`.`rank`;
```
`SQL Fiddle demo`
|
First, this solution will only work if `Equipo` is unique across all leagues. If not, you should add an `equipoID` and use it instead.
Now, we first select the leader of each league. Then we create a union with the same select again, only using the WHERE clause to filter out the teams that we already got in the first part of the union. That will essentially give us the follower.
Here's the query:
```
SELECT * FROM
(SELECT * FROM puntos ORDER BY `liga`,`puntos` DESC) `leader`
GROUP BY `liga`
UNION
SELECT * FROM
(SELECT * FROM puntos ORDER BY `liga`,`puntos` DESC) `follower`
WHERE `follower`.`equipo` NOT IN
(SELECT equipo FROM
(SELECT * FROM puntos ORDER BY `liga`,`puntos` DESC) `leader`
GROUP BY `liga`
)
GROUP BY `liga`
ORDER BY `liga`
```
|
How to select more than one row per group in MySQL?
|
[
"",
"mysql",
"sql",
"greatest-n-per-group",
""
] |
I am fairly new to using joins and selecting from multiple tables. I have successfully figured out how to get data from 2 tables based on a number of conditions. Now my issue is that I need to return a count only if another condition is also met. The idea is like this:
I have a query that returns the number of items in a specific status at a specific time. My query for this is:
```
SELECT count(*),
e.campus_id,
e.course_id
FROM statuses_history AS sh,
enrolments AS e
WHERE sh.date_added > '2015-08-01 00:00:00'
AND sh.date_added < '2015-08-20 23:59:59'
AND sh.status_id = 57
AND sh.item_id = e.enrolment_id
AND (e.course_id = 2
OR e.course_id =7
OR e.course_id = 8
OR e.course_id = 9)
GROUP BY e.campus_id,
e.course_id;
```
Now I have to check that it has been in a different status, let's say sh.status\_id = 50, before it was in 57. That earlier status does not have to fall within the date range specified. So I basically need to change my query in some way to also check whether it has ever been in status\_id 50, and then only return the result if both statuses are found. sh.status\_id = 57 will still be bound by the dates specified.
Thanks in advance for the help.
|
You can adjust the subselect as needed, but this will return a row only if another status value is found in the database.
I don't like subselects, but this should work; I have not tested the syntax.
```
select count(*), e.campus_id, e.course_id
FROM statuses_history as sh, enrolments as e
WHERE sh.date_added > '2015-08-01 00:00:00'
AND sh.date_added < '2015-08-20 23:59:59'
AND sh.status_id = 57
AND sh.item_id = e.enrolment_id
AND (e.course_id = 2 OR e.course_id =7
OR e.course_id = 8 OR e.course_id = 9)
AND EXISTS (select sh2.status_id from statuses_history as sh2
WHERE NOT sh2.status_id = 57
AND sh2.item_id = e.enrolment_id)
GROUP BY e.campus_id, e.course_id;
```
|
I assume that the primary key is item\_id?
You just have to join the same table again on that key and status\_id = 50.
For this case I will use an INNER JOIN (my preference), but you can use the FROM/WHERE form.
```
select count(*), e.campus_id, e.course_id
FROM statuses_history as sh, enrolments as e
INNER JOIN statuses_history AS oldSh ON oldSh.item_id = sh.item_id AND oldSh.status_id = 50
WHERE sh.date_added > '2015-08-01 00:00:00' AND sh.date_added < '2015-08-20 23:59:59' AND sh.status_id = 57 and sh.item_id = e.enrolment_id AND (e.course_id = 2 OR e.course_id =7 OR e.course_id = 8 OR e.course_id = 9)
GROUP BY e.campus_id, e.course_id;
```
But if you prefer WHERE, you can do the following:
```
select count(*), e.campus_id, e.course_id
FROM statuses_history as sh, enrolments as e, statuses_history AS oldSh
WHERE oldSh.item_id = sh.item_id AND oldSh.status_id = 50 AND
sh.date_added > '2015-08-01 00:00:00' AND sh.date_added < '2015-08-20 23:59:59' AND sh.status_id = 57 and sh.item_id = e.enrolment_id AND (e.course_id = 2 OR e.course_id =7 OR e.course_id = 8 OR e.course_id = 9)
GROUP BY e.campus_id, e.course_id;
```
|
SQL query with multiple criteria and sub selects
|
[
"",
"mysql",
"sql",
""
] |
I have two queries, but I want the results to be in the same return table.
The first query returns a list of all posts with their comments as a list. The comments are stored in a different table.
```
SELECT E.id, E.time, E.title, E.body, E.type, C.comments
FROM elements E
LEFT JOIN(
SELECT elementID, GROUP_CONCAT(body SEPARATOR '|-|') AS comments
FROM comments
GROUP BY elementID
) C on C.elementID = E.id
```
The second query returns a list of all posts with a count of how many likes/upvotes the post has as a new column. The likes are stored in a different table.
```
SELECT E.id, E.googleID, E.title, L.likeCount
FROM elements E
LEFT JOIN (
SELECT elementID, COUNT(id) AS likeCount
FROM likes
GROUP BY elementID
) L ON L.elementID = E.id;
```
How would I get a concatenated list of comments as one column, and the number of likes as another column, in the same query?
|
Well, why can't you `JOIN` both queries, like you have already done in each query:
```
SELECT E.id, E.time, E.title, E.body, E.type, C.comments, E.googleID, L.likeCount
FROM elements E
LEFT JOIN(
SELECT elementID, GROUP_CONCAT(body SEPARATOR '|-|') AS comments
FROM comments
GROUP BY elementID
) C on C.elementID = E.id
LEFT JOIN (
SELECT elementID, COUNT(id) AS likeCount
FROM likes
GROUP BY elementID
) L ON L.elementID = E.id;
```
|
You can "stack" joins of the same level together, like this:
```
SELECT
E.id
, E.time
, E.title
, E.googleID
, E.body
, E.type
, C.comments
, L.likeCount
FROM elements E
LEFT JOIN(
SELECT elementID, GROUP_CONCAT(body SEPARATOR '|-|') AS comments
FROM comments
GROUP BY elementID
) C on C.elementID = E.id
LEFT JOIN (
SELECT elementID, COUNT(id) AS likeCount
FROM likes
GROUP BY elementID
) L ON L.elementID = E.id;
```
Simply put a set of columns you wanted to return from both joins into the select list, and put the joins next to each other.
|
How to use two LEFT JOINs on one query?
|
[
"",
"mysql",
"sql",
"database",
"database-design",
""
] |
I have a database table that I have used to store the data returned from a web spider. I have a column that contains ticket prices for different events, all stored as varchar (as the scrapy spider has to scrape the data as unicode). I'm trying to return the min price of the column, and since the min() function only works properly on numeric data, I tried to convert the column to integers using a solution from [this SO post](https://stackoverflow.com/questions/13170570/change-type-of-varchar-field-to-integer-cannot-be-cast-automatically-to-type-i):
```
ALTER TABLE vs_tickets ALTER COLUMN ticketprice TYPE integer USING (ticketprice::integer);
```
but I got the error: *ERROR: invalid input syntax for integer:*
I also tried: `change_column :vs_tickets, :ticketprice, 'integer USING CAST(ticketprice AS integer)'` but that didn't work either.
What is the proper way to convert the column to type INT?
### Edit:
[](https://i.stack.imgur.com/r8A0K.png)
|
You have decimal places in the string, so a simple cast is not going to work. You *can* do a double conversion:
```
cast(cast(ticketprice as decimal(10, 2)) as int)
```
or:
```
(ticketprice::decimal(10, 2))::int
```
(The parens are not strictly necessary.)
EDIT:
Or, as Erwin points out, just use `numeric`:
```
(ticketprice::numeric)::int
```
Postgres is much smarter about `numeric` than most other databases . . . after all, it supports numbers that are egregiously large ;)
The final query is:
```
ALTER TABLE vs_tickets
ALTER COLUMN ticketprice TYPE integer USING (ticketprice::numeric::integer);
```
|
I'm going to bet your column has invalid characters.
Also, you may want to use float or numeric, because you will lose decimals if you convert to integer.
You can create a function to check whether a text value is numeric, like this: [*isnumeric-with-postgresql*](https://stackoverflow.com/questions/16195986/isnumeric-with-postgresql)
Then check each row like this
```
select ticketprice
from vs_tickets
where ISNUMERIC(ticketprice) = false;
```
As per your comment, you could also try:
```
SELECT ticketprice::float
```
|
Can't convert postgresql table column from type varchar to int
|
[
"",
"sql",
"postgresql",
"scrapy",
""
] |
How should HTML entities be stored on SQL Server? Should they be stored as the entity or as the character? Which is correct, and does it really matter?
For example
**Storage Solution 1**
```
& > <
```
**Storage Solution 2**
```
& < >
```
|
I feel like storage solution 1 is more secure, as it helps to drastically cut down on your risk of accidentally executing some script unintentionally if you have someone embedding javascript or the like in their input.
Which, granted, you should be stripping out, but it's a best practice kind of thing to store the entity, not the character.
|
Whenever storing any kind of data, you should consider this guideline that:
* Data/info should not be tampered with or changed from its original form
* It should be independent from any processing/conversion logic
This may not always be applicable, especially when storing sensitive info like passwords, credit cards, or any other financial data; hence it will be case dependent.
In the mentioned scenario,
**Solution 1 will have 2 drawbacks:**
* Increase in data size
* Implementing logic to convert data at each read & write operation (which may also increase processing time)
**Solution 2 will have the drawback** of security issues, which may not be a concern or applicable if the system is designed appropriately.
Hence, a decision cannot be made simply from the provided information; it will depend on the architecture of the system and long-term usage analysis.
|
How should HTML Entities be stored on SQL server?
|
[
"",
"sql",
"sql-server",
"html-entities",
""
] |
I wrote the following query
```
SELECT UNIQUE
CITY
FROM
STATION
WHERE
(CITY LIKE '%A' OR
CITY LIKE '%E' OR
CITY LIKE '%I' OR
CITY LIKE '%O' OR
CITY LIKE '%U')
ORDER BY
CITY;
```
What is wrong with this?
|
If you want it to work with lower/upper letters, you could use `UPPER(CITY)`, otherwise it's all good.
|
I think you can use `REGEXP_LIKE` like this:
```
SELECT UNIQUE CITY
FROM STATION
WHERE REGEXP_LIKE(CITY, '[AaEeIiOoUu]$')
ORDER BY CITY;
```
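The same vowel-ending pattern is easy to check outside the database; a quick sketch in Python (the city names here are examples, not the question's data):

```python
import re

cities = ["Tokyo", "Madrid", "Osaka", "Perth", "Havana"]
# Anchor the character class to the end of the string, matching either case.
ending_in_vowel = sorted(c for c in cities if re.search(r"[AEIOUaeiou]$", c))
print(ending_in_vowel)  # ['Havana', 'Osaka', 'Tokyo']
```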
|
In SQL I want to display the names of all those cities which end with a vowel
|
[
"",
"sql",
"oracle",
""
] |
I've the following code
```
select * from weeks
where case
when wet>1000 then wenumber=1
when wet>500 then wenumber=2
when wet>100 then wenumber=3
else wenumber= 22
end
```
It gives me the result of both branches (when > 1000 and else).
Through my search I understood that this is a searched CASE, but
I need to make it a simple case:
if the first condition is true, stop evaluating the "else" statement.
Any help?
|
Try this:
```
SELECT *
FROM weeks
WHERE wenumber = CASE WHEN wet > 1000 THEN 1
WHEN wet BETWEEN 500 AND 1000 THEN 2
WHEN wet BETWEEN 100 AND 500 THEN 3
ELSE 22
END;
```
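A `CASE` stops at the first true branch, so overlapping conditions are evaluated top to bottom. A quick check against made-up `wet` values, using SQLite:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# 1500 hits the first branch; 600 skips it and hits the second;
# 50 falls through to the ELSE.
rows = con.execute("""
    SELECT wet,
           CASE WHEN wet > 1000 THEN 1
                WHEN wet > 500  THEN 2
                WHEN wet > 100  THEN 3
                ELSE 22 END AS wenumber
    FROM (SELECT 1500 AS wet UNION ALL SELECT 600 UNION ALL SELECT 50) AS t
    ORDER BY wet DESC
""").fetchall()
print(rows)  # [(1500, 1), (600, 2), (50, 22)]
```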
|
A better way of doing it:
```
select *
from weeks
where ( wet>1000 and wenumber = 1 ) or wenumber = 22
```
|
Convert searched case to simple case else
|
[
"",
"mysql",
"sql",
"select",
"case",
""
] |
I'm trying to return a set of rows after doing `UPDATE`.
Something like this.
```
UPDATE Notis new_noti SET notis = '{}'::noti_record_type[]
FROM (SELECT * FROM Notis WHERE user_id = 2 FOR UPDATE) old_noti
WHERE old_noti.user_id = new_noti.user_id RETURNING unnest(old_noti.notis);
```
but postgres complains, rightly so:
> set-valued function called in context that cannot accept a set
How am I supposed to go about implementing this?
That is, `RETURNING` a set of rows from `SELECT`ed array after `UPDATE`?
I'm aware that a function can achieve this using `RETURNS SETOF` but rather prefer not to if possible.
|
Use [WITH statement](http://www.postgresql.org/docs/9.4/static/queries-with.html):
```
WITH upd AS (
UPDATE Notis new_noti SET notis = '{}'::noti_record_type[]
FROM (SELECT * FROM Notis WHERE user_id = 2 FOR UPDATE) old_noti
WHERE old_noti.user_id = new_noti.user_id RETURNING old_noti.notis
)
SELECT unnest(notis) FROM upd;
```
|
Use a [data-modifying CTE](https://www.postgresql.org/docs/current/queries-with.html#QUERIES-WITH-MODIFYING).
You *can* use a set-returning function in the `SELECT` list, but it is cleaner to move it to the `FROM` list with a `LATERAL` subquery since Postgres **9.3**. Especially if you need to extract *multiple* columns (from a *row type* like you commented). It would also be inefficient to call `unnest()` multiple times.
```
WITH upd AS (
UPDATE notis n
SET notis = '{}'::noti_record_type[] -- explicit cast optional
FROM (
SELECT user_id, notis
FROM notis
WHERE user_id = 2
FOR UPDATE
) old_n
WHERE old_n.user_id = n.user_id
RETURNING old_n.notis
)
SELECT n.*
FROM upd u, unnest(u.notis) n; -- implicit CROSS JOIN LATERAL
```
If the array can be empty and you want to preserve empty / NULL results use `LEFT JOIN LATERAL ... ON true`. See:
* [What is the difference between LATERAL JOIN and a subquery in PostgreSQL?](https://stackoverflow.com/questions/28550679/what-is-the-difference-between-lateral-and-a-subquery-in-postgresql/28557803#28557803)
* [Call a set-returning function with an array argument multiple times](https://stackoverflow.com/questions/26107915/call-a-set-returning-function-with-an-array-argument-multiple-times/26514968#26514968)
Also, multiple set-returning functions in the same `SELECT` can exhibit surprising behavior. Avoid that.
This has been sanitized with Postgres 10. See:
* [What is the expected behaviour for multiple set-returning functions in SELECT clause?](https://stackoverflow.com/questions/39863505/what-is-the-expected-behaviour-for-multiple-set-returning-functions-in-select-cl/39864815#39864815)
Alternative to unnest *multiple* arrays in parallel before and after Postgres 10:
* [Unnest multiple arrays in parallel](https://stackoverflow.com/questions/27836674/passing-arrays-to-stored-procedures-in-postgres/27854382#27854382)
Related:
* [Return pre-UPDATE column values using SQL only](https://stackoverflow.com/questions/7923237/return-pre-update-column-values-using-sql-only-postgresql-version/7927957#7927957)
## Behavior of composite / row values
Postgres has an **oddity** when assigning a row type (or composite or record type) from a set-returning function to a column list. One might expect that the row-type field is treated as *one* column and assigned to the respective column, but that is not so. It is decomposed automatically (one row-layer only!) and assigned element-by-element.
So this does not work as expected:
```
SELECT (my_row).*
FROM upd u, unnest(u.notis) n(my_row);
```
But this does ([like @klin commented)](https://stackoverflow.com/questions/32273661/returning-rows-using-unnest/32278725?noredirect=1#comment52446876_32278725):
```
SELECT (my_row).*
FROM upd u, unnest(u.notis) my_row;
```
Or the simpler version I ended up using:
```
SELECT n.*
FROM upd u, unnest(u.notis) n;
```
Another oddity: A composite (or row) type with a **single field** is decomposed automatically. Thus, table alias and column alias end up doing the same in the outer `SELECT` list:
```
SELECT n FROM unnest(ARRAY[1,2,3]) n;
SELECT n FROM unnest(ARRAY[1,2,3]) n(n);
SELECT n FROM unnest(ARRAY[1,2,3]) t(n);
SELECT t FROM unnest(ARRAY[1,2,3]) t(n); -- except output column name is "t"
```
For more than one field, the row-wrapper is preserved:
```
SELECT t FROM unnest(ARRAY[1,2,3]) WITH ORDINALITY t(n); -- requires 9.4+
```
Confused? There is more. For composite types (the case at hand) like:
```
CREATE TYPE my_type AS (id int, txt text);
```
While this works as expected:
```
SELECT n FROM unnest(ARRAY[(1, 'foo')::my_type, (2, 'bar')::my_type]) n;
```
You are in for a **surprise** here:
```
SELECT n FROM unnest(ARRAY[(1, 'foo')::my_type, (2, 'bar')::my_type]) n(n);
```
And that's **the error I had**: When providing a column list, Postgres decomposes the row and assigns provided names one-by-one. Referring to `n` in the `SELECT` list does not return the composite type, but only the (renamed) first element. I had mistakenly expected the row type and tried to decompose with `(my_row).*` - which only returns the first element nonetheless.
Then again:
```
SELECT t FROM unnest(ARRAY[(1, 'foo')::my_type, (2, 'bar')::my_type]) t(n);
```
(Be aware that the first element has been renamed to `"n"`!)
With the new form of `unnest()` taking multiple array arguments (**Postgres 9.4+**):
```
SELECT *
FROM unnest(ARRAY[(1, 'foo')::my_type, (2, 'bar')::my_type]
, ARRAY[(3, 'baz')::my_type, (4, 'bak')::my_type]) n;
```
Column aliases only for the first two output columns:
```
SELECT *
FROM unnest(ARRAY[(1, 'foo')::my_type, (2, 'bar')::my_type]
, ARRAY[(3, 'baz')::my_type, (4, 'bak')::my_type]) n(a, b);
```
Column aliases for all output columns:
```
SELECT *
FROM unnest(ARRAY[(1,'foo')::my_type, (2,'bar')::my_type]
, ARRAY[(3,'baz')::my_type, (4,'bak')::my_type]) n(a,b,c,d);
```
*db<>fiddle [here](https://dbfiddle.uk/?rdbms=postgres_14&fiddle=33793113c98d01512b509d012097924e)*
Old [sqlfiddle](http://sqlfiddle.com/#!17/52ad0/1)
|
RETURNING rows using unnest()?
|
[
"",
"sql",
"database",
"postgresql",
"set-returning-functions",
"sql-returning",
""
] |
In my table I have two fields: one is **id**, the other is **strNum**. I want to order **strNum** like this:
```
One
Two
Three
```
My table data
```
id strNum
------ ------------
1 Two
2 One
3 Five
4 Nine
```
I want to get the output without adding any extra field. How can I order it like this:
```
One
Two
Three
Four
Five
```
|
You can use this query :
```
SELECT strNum FROM your_table
ORDER BY FIELD(strNum,'One','Two','Three','Four','Five','Six','Seven','Eight','Nine','Ten') ASC
```
|
You can use `Case`:
```
select strNum
from A
order by case when strNum='one' then 1
when strNum='two' then 2
when strNum='three' then 3
when strNum='four' then 4
when strNum='five' then 5
when strNum='six' then 6
when strNum='seven' then 7
when strNum='eight' then 8
when strNum='nine' then 9
when strNum='ten' then 10 end
```
You didn't mention which DB you are using. This is the PostgreSQL version.
See [SQLFiddle](http://sqlfiddle.com/#!15/c995f/1)
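For dialects without MySQL's `FIELD()`, the `CASE` approach is the portable one. A minimal sketch using SQLite through Python's built-in `sqlite3` module (the table name `a` follows the answer above; building the `CASE` expression from a word list is just a convenience):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE a (id INTEGER, strNum TEXT)")
cur.executemany("INSERT INTO a VALUES (?, ?)",
                [(1, "Two"), (2, "One"), (3, "Five"), (4, "Nine")])

# Build the CASE expression from the desired word order
order = ["One", "Two", "Three", "Four", "Five",
         "Six", "Seven", "Eight", "Nine", "Ten"]
case = " ".join(f"WHEN '{w}' THEN {i}" for i, w in enumerate(order, 1))
rows = cur.execute(f"SELECT strNum FROM a ORDER BY CASE strNum {case} END").fetchall()
print([r[0] for r in rows])  # ['One', 'Two', 'Five', 'Nine']
```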
|
SQL order by string order
|
[
"",
"mysql",
"sql",
""
] |
Hey Stackoverflow community!
I currently use a temporary table to store the list of values and compare which ones don't exist within the table, then insert those, but I'm hoping there is a quicker, more optimized way.
My goal is to get this down to one query.
1.) I have an array of c\_no INTs (1,2,3,4,5) I would like to add to a group\_join table.
2.) In the groups\_join table I need to perform the following query to get all c\_no values currently in the table:
```
select c_no from groups_join where g_no = (INT)
```
to see what INTs are in the groups.
3.) I need some way to check whether any of the c\_no INTs I will be inserting are already in the groups\_join table with the same g\_no, so I don't have duplicates.
|
I cannot comment due to rep, but in response to your comment on Didier's answer: instead of `INSERT` you can use `INSERT IGNORE` after you've created your `unique index`. When you use `INSERT IGNORE`, MySQL will not insert the duplicate value, but will also suppress the error thrown by attempting to insert a duplicate into the `unique index`. This lets you keep the unique index without having to write code to handle the error.
```
INSERT [LOW_PRIORITY | DELAYED | HIGH_PRIORITY] [IGNORE]
[INTO] tbl_name [(col_name,...)]
{VALUES | VALUE} ({expr | DEFAULT},...),(...),...
[ ON DUPLICATE KEY UPDATE
col_name=expr
[, col_name=expr] ...
```
<https://dev.mysql.com/doc/refman/5.5/en/insert.html>
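As a portable sketch of the same idea, here it is in SQLite via Python's `sqlite3` module (SQLite spells it `INSERT OR IGNORE`; MySQL uses `INSERT IGNORE`; the index name is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE groups_join (g_no INTEGER, c_no INTEGER)")
# The unique index is what makes duplicate inserts fail (or here, be ignored)
cur.execute("CREATE UNIQUE INDEX ux_groups_join ON groups_join (g_no, c_no)")
cur.execute("INSERT INTO groups_join VALUES (7, 1)")  # pre-existing row

# SQLite's INSERT OR IGNORE silently skips rows that would violate the index
for c_no in (1, 2, 3):
    cur.execute("INSERT OR IGNORE INTO groups_join VALUES (7, ?)", (c_no,))

count = cur.execute("SELECT COUNT(*) FROM groups_join WHERE g_no = 7").fetchone()[0]
print(count)  # 3 -- the duplicate (7, 1) was silently skipped
```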
|
I guess you are looking for the `IN` Keyword.
```
SELECT * FROM myTable WHERE field IN (myList)
```
If you want the g\_no field to have unique values, why don't you add a `unique` index? [MySQL Document - Create Index](https://dev.mysql.com/doc/refman/5.0/en/create-index.html)
|
Check if list is in table
|
[
"",
"mysql",
"sql",
""
] |
I want to join these tables and count if the Status is 'Y'
```
table1(date, status)
8/23/2015 Y
8/24/2015 Y
8/24/2015 N
table2(date, status)
8/23/2015 Y
8/23/2015 Y
table3(date, status)
8/23/2015 Y
8/25/2015 N
8/25/2015 Y
```
The result I expect is like . . .
```
DATE count(table1.status) count(table2.status) count(table3.status)
--------- -------------------- -------------------- --------------------
8/23/2015 1 2 1
8/24/2015 1 0 0
8/25/2015 0 0 1
```
|
Perhaps the easiest way is to `union all` the tables together and then aggregation:
```
select date, sum(status1) as status1, sum(status2) as status2,
sum(status3) as status3
from ((select date, 1 as status1, 0 as status2 , 0 as status3
from table1
where status = 'Y') union all
(select date, 0 as status1, 1 as status2 , 0 as status3
from table2
where status = 'Y') union all
(select date, 0 as status1, 0 as status2 , 1 as status3
from table3
where status = 'Y')
) t
group by date
order by date;
```
If you want to do this with `full join`, you have to be very careful. You are tempted to write:
```
select date,
sum(case when t1.status1 = 'Y' then 1 else 0 end) as status1,
sum(case when t2.status1 = 'Y' then 1 else 0 end) as status2,
sum(case when t3.status1 = 'Y' then 1 else 0 end) as status3
from table1 t1 full join
table2 t2
using (date) full join
table3 t3
using (date)
group by date
order by date;
```
But this has problems when there are multiple counts on the same date in different tables (a cartesian product for the date). So, the next temptation is to add `count(distinct)` . . . you can do that in this case, because there is no unique column. Even if there were, this adds overhead.
Finally, you can solve this by pre-aggregating each table, if you want to go down this path.
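The union-all-then-aggregate approach can be sketched end to end with SQLite via Python's built-in `sqlite3` module, using the sample data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
for t in ("table1", "table2", "table3"):
    cur.execute(f"CREATE TABLE {t} (date TEXT, status TEXT)")
cur.executemany("INSERT INTO table1 VALUES (?, ?)",
                [("2015-08-23", "Y"), ("2015-08-24", "Y"), ("2015-08-24", "N")])
cur.executemany("INSERT INTO table2 VALUES (?, ?)",
                [("2015-08-23", "Y"), ("2015-08-23", "Y")])
cur.executemany("INSERT INTO table3 VALUES (?, ?)",
                [("2015-08-23", "Y"), ("2015-08-25", "N"), ("2015-08-25", "Y")])

# Each branch contributes a 1 in its own flag column; SUM counts them per date
rows = cur.execute("""
    SELECT date, SUM(s1), SUM(s2), SUM(s3)
    FROM (SELECT date, 1 AS s1, 0 AS s2, 0 AS s3 FROM table1 WHERE status = 'Y'
          UNION ALL
          SELECT date, 0, 1, 0 FROM table2 WHERE status = 'Y'
          UNION ALL
          SELECT date, 0, 0, 1 FROM table3 WHERE status = 'Y') t
    GROUP BY date
    ORDER BY date
""").fetchall()
print(rows)  # [('2015-08-23', 1, 2, 1), ('2015-08-24', 1, 0, 0), ('2015-08-25', 0, 0, 1)]
```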
|
Here's another alternative:
```
with table1 as (select to_date('23/08/2015', 'dd/mm/yyyy') dt, 'Y' status from dual union all
select to_date('24/08/2015', 'dd/mm/yyyy') dt, 'Y' status from dual union all
select to_date('25/08/2015', 'dd/mm/yyyy') dt, 'N' status from dual),
table2 as (select to_date('23/08/2015', 'dd/mm/yyyy') dt, 'Y' status from dual union all
select to_date('23/08/2015', 'dd/mm/yyyy') dt, 'Y' status from dual union all
select to_date('26/08/2015', 'dd/mm/yyyy') dt, 'Y' status from dual),
table3 as (select to_date('23/08/2015', 'dd/mm/yyyy') dt, 'Y' status from dual union all
select to_date('25/08/2015', 'dd/mm/yyyy') dt, 'Y' status from dual union all
select to_date('25/08/2015', 'dd/mm/yyyy') dt, 'N' status from dual)
select coalesce(t1.dt, t2.dt, t3.dt) dt,
coalesce(t1.cnt, 0) t1_cnt,
coalesce(t2.cnt, 0) t2_cnt,
coalesce(t3.cnt, 0) t3_cnt
from (select dt, count(case when status = 'Y' then 1 end) cnt
from table1
group by dt) t1
full outer join (select dt, count(case when status = 'Y' then 1 end) cnt
from table2
group by dt) t2 on t1.dt = t2.dt
full outer join (select dt, count(case when status = 'Y' then 1 end) cnt
from table3
group by dt) t3 on t1.dt = t3.dt
order by coalesce(t1.dt, t2.dt, t3.dt);
DT T1_CNT T2_CNT T3_CNT
---------- ---------- ---------- ----------
23/08/2015 1 2 1
24/08/2015 1 0 0
25/08/2015 0 0 1
26/08/2015 0 1 0
```
(It aggregates the rows down to one row per dt *before* joining to the other tables, so you don't get duplicated rows included in the counts).
|
How to join three tables and count the record with UNION one column
|
[
"",
"sql",
"oracle",
""
] |
I have two database tables - `Categories` and `LocalizedCategories`. The `Categories` table has `Name` and `Description` columns for all categories in the default language. The `LocalizedCategories` table also has these columns which contain translations of these columns into other languages (every content language has its id, so primary key of the table consists of `ContentLanguageId` and `CategoryId`). I would like now to perform a SELECT query to get all categories so that the textual data in the results are localized in case a translation is available for the given category and content language. If the translation is not available, or the content language is the default one, I would like to fall back to the defaults in the `Categories` table.
What would such a query look like? Is there a better way to structure the tables so that querying and working with the data is simpler?
|
Your table design seems fine. Getting data only when it exists is done with an **outer join**; then use COALESCE to take one value or the other.
```
select
coalesce(lc.name, c.name),
coalesce(lc.description, c.description)
from categories c
left outer join localizedcategories lc
on lc.categoryid = c.categoryid and lc.contentlanguageid = 'EN';
```
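A minimal runnable sketch of this fallback pattern, using SQLite via Python's `sqlite3` module (the language id `'DE'` and the sample rows are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE categories (categoryid INTEGER, name TEXT, description TEXT)")
cur.execute("""CREATE TABLE localizedcategories
               (contentlanguageid TEXT, categoryid INTEGER, name TEXT, description TEXT)""")
cur.executemany("INSERT INTO categories VALUES (?, ?, ?)",
                [(1, "Books", "All books"), (2, "Music", "All music")])
# Only category 1 has a German translation; category 2 falls back to the default
cur.execute("INSERT INTO localizedcategories VALUES ('DE', 1, 'Buecher', 'Alle Buecher')")

rows = cur.execute("""
    SELECT COALESCE(lc.name, c.name), COALESCE(lc.description, c.description)
    FROM categories c
    LEFT OUTER JOIN localizedcategories lc
      ON lc.categoryid = c.categoryid AND lc.contentlanguageid = 'DE'
    ORDER BY c.categoryid
""").fetchall()
print(rows)  # [('Buecher', 'Alle Buecher'), ('Music', 'All music')]
```

Note that the language filter belongs in the join condition, not the WHERE clause; otherwise the outer join degenerates into an inner join and untranslated categories disappear.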
|
Simple, just do a left join:
```
select Description = coalesce(lc.Description, c.Description )
, Name = coalesce(lc.Name, c.Name)
, Lang = coalesce(lc.Lang, DefaultContentLanguageId )
from Categories c
left join LocalizedCategories lc on c.CategoryId=lc.CategoryId and lc.ContentLanguageId = ?
```
|
Conditional SQL query column mapping
|
[
"",
"sql",
"sql-server",
"database",
"join",
""
] |
There is a table with four columns:
```
SrNo Descript item1 item2
1 | AA | 45 | 25
2 | BB | 25 | 51
3 | CC | 41 | 22
```
I want get results like this:
```
SrNo| Descript| item1 |item2| totalitems
1 | AA | 45 | 25 | 70
2 | BB | 25 | 51 | 76
3 | CC | 41 | 22 | 63
4 | Total | 111 | 98 | 209
```
|
Try:
```
select
    SrNo, Descript, item1, item2,
    item1+item2 as totalitems
from tbl
union all
select
    max(SrNo) + 1, 'Total', sum(item1), sum(item2),
    sum(item1+item2) as totalitems
from tbl
ORDER BY SrNo
```
Note that the `ORDER BY` must come after the final branch of the `UNION ALL`; it applies to the whole result.
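The same totals-row technique runs unchanged on SQLite; a quick check via Python's `sqlite3` module with the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tbl (SrNo INTEGER, Descript TEXT, item1 INTEGER, item2 INTEGER)")
cur.executemany("INSERT INTO tbl VALUES (?, ?, ?, ?)",
                [(1, "AA", 45, 25), (2, "BB", 25, 51), (3, "CC", 41, 22)])

# First branch: per-row totals; second branch: one aggregate 'Total' row
rows = cur.execute("""
    SELECT SrNo, Descript, item1, item2, item1 + item2 AS totalitems FROM tbl
    UNION ALL
    SELECT MAX(SrNo) + 1, 'Total', SUM(item1), SUM(item2), SUM(item1 + item2) FROM tbl
    ORDER BY SrNo
""").fetchall()
for r in rows:
    print(r)
# last row: (4, 'Total', 111, 98, 209)
```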
|
Something like this
```
select SrNo, Descript, item1, item2, item1+item2 as totalitems
from yourtable
Union all
select max(SrNo)+1, 'Total', sum(item1), sum(item2), sum(item1)+sum(item2) as totalitems
from yourtable
```
**Note:** If the data type of `item1` and `item2` is `varchar` then you may have to `cast` it to `int` before addition.
|
How do I get the sum of a column as a new row with column data
|
[
"",
"sql",
""
] |
I have a table of items with some values, among them cost and purchase date. I'm trying to get a list of the most expensive items, one per item type, ordered by the purchase date of that specific item, *without* the purchase date in the results.
My table (simplified):
```
CREATE TABLE Purchases
(ItemType varchar(25),
Cost int,
PurchaseDate smalldatetime)
```
My sample data:
```
INSERT INTO Purchases VALUES
('Hat', 0, '2007-05-20 15:22'),
('Hat', 0, '2007-07-01 15:00'),
('Shirt', 3500, '2007-07-30 08:43'),
('Pants', 2000, '2008-07-30 12:00'),
('Pants', 4000, '2009-03-15 07:30'),
('Sweater', 3000, '2011-05-20 15:22'),
('Sweater', 3750, '2012-07-01 22:00'),
('Sweater', 2700, '2014-06-12 11:00'),
('Hat', 4700, '2015-06-29 07:10')
```
My expected output (dates added for clarity):
```
ItemType MostExpensivePerType
------------------------- --------------------
Shirt 3500 (2007-07-30 08:43)
Pants 4000 (2009-03-15 07:30)
Sweater 3750 (2012-07-01 22:00)
Hat 4700 (2015-06-29 07:10)
```
My work so far:
I've tried things back and forth, and my best result is with this query:
```
SELECT
ItemType, MAX(Cost) AS MostExpensivePerType
FROM
Purchases
GROUP BY
ItemType
ORDER BY
MostExpensivePerType DESC
```
Which yields the most expensive items per item type, but orders them by cost. Without the `ORDER BY` clause, they seem to be ordered alphabetically. I realize that I need the date column in my query as well, but can I enter it and 'hide' it in the results? Or do I need to save the results I have so far in a temporary table and join with my regular table? What is the best way to go about this?
[SQL Fiddle here](http://sqlfiddle.com/#!6/9668a/3/0)!
|
Use window functions:
```
select ItemType, Cost MostExpensivePerType
from (select p.*,
row_number() over (partition by itemtype order by cost desc) as seqnum
from purchases p
) t
where seqnum = 1
order by PurchaseDate;
```
SQLFiddle [here](http://sqlfiddle.com/#!6/9668a/38).
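The `ROW_NUMBER()` approach works the same way outside SQL Server; a sketch with the question's sample data using SQLite (3.25+ for window functions) via Python's `sqlite3` module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE purchases (itemtype TEXT, cost INTEGER, purchasedate TEXT)")
cur.executemany("INSERT INTO purchases VALUES (?, ?, ?)", [
    ("Hat", 0, "2007-05-20 15:22"),
    ("Shirt", 3500, "2007-07-30 08:43"),
    ("Pants", 2000, "2008-07-30 12:00"),
    ("Pants", 4000, "2009-03-15 07:30"),
    ("Sweater", 3750, "2012-07-01 22:00"),
    ("Sweater", 2700, "2014-06-12 11:00"),
    ("Hat", 4700, "2015-06-29 07:10"),
])

# seqnum = 1 picks the most expensive row per itemtype; its purchasedate
# is still available in the subquery for ordering, yet hidden from the output
rows = cur.execute("""
    SELECT itemtype, cost
    FROM (SELECT p.*,
                 ROW_NUMBER() OVER (PARTITION BY itemtype ORDER BY cost DESC) AS seqnum
          FROM purchases p) t
    WHERE seqnum = 1
    ORDER BY purchasedate
""").fetchall()
print(rows)  # [('Shirt', 3500), ('Pants', 4000), ('Sweater', 3750), ('Hat', 4700)]
```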
|
```
SELECT
ItemType,
MAX(Cost) AS MostExpensivePerType
FROM
Purchases
GROUP BY
ItemType
ORDER BY
Max(PurchaseDate) DESC
```
|
How to GROUP BY one column and ORDER BY another in T-SQL
|
[
"",
"sql",
"t-sql",
"ssms",
"sql-server-2014",
""
] |
this seems like it should be extraordinarily simple, so I apologize in advance if this information is easily accessible on the transact-sql documentation pages. I searched myself, but couldn't seem to find anything.
I'm trying to modify a transact-sql statement that currently runs on our Windows server 2000 box. I want to check if a table in another database exists, and then do a bunch of stuff. The database name is given as a string argument, '@dbName'
```
CREATE PROCEDURE CopyTables
@dbName char(4)
AS
IF EXISTS (SELECT * FROM @dbName.INFORMATION_SCHEMA.TABLES WHERE
TABLE_NAME = N'MainTable')
BEGIN
--Do Stuff
```
In its current state, it doesn't like using the bare @dbName variable within the select statement. Is there special syntax for doing this?
Thanks in advance.
|
Try doing the following:
```
DECLARE @dbName NVARCHAR(MAX) = 'master', @TableName NVARCHAR(MAX) = N'spt_monitor';
DECLARE @sql NVARCHAR(MAX) = N'SELECT * FROM [' + @dbName + N'].INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = ''' + REPLACE(@TableName,N'''',N'''''') + N'''';
SET NOCOUNT OFF;
EXEC(@sql);
IF @@ROWCOUNT > 0 BEGIN;
-- DO STUFF
SELECT NULL;
END;
```
There are a few shortcomings to this solution:
1) It requires that the user executing the statement has `SELECT` access to the other database's `INFORMATION_SCHEMA.TABLES`
2) It has the side-effect of actually selecting the rows, so if you're using a reader to access the results, you'll have to call `reader.NextResult()` or `await reader.NextResultAsync()` because it actually outputs the results of the `SELECT` statement, rather than doing it in an `IF EXISTS` context.
|
The below code should do what you want. As was mentioned previously, the account running the query would need the privilege to query the INFORMATION\_SCHEMAs in the target database.
To future-proof your stored procedure, I'd also suggest increasing the length of the database name parameter and declaring it as an nchar or nvarchar instead of char.
```
CREATE PROCEDURE CopyTables
@dbName char(4)
AS
DECLARE
@SQLStr nvarchar (max),
@Params nvarchar (max),
@Count tinyint;
SET
@Count = 0;
SET @SQLStr = N'SELECT @qCount = 1 FROM [' + @dbName + N'].INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = N''MainTable''';
SET @Params = N'@qdbName char (4), @qCount tinyint OUTPUT';
EXECUTE sp_executesql @SQLStr, @Params, @qdbName = @dbName, @qCount = @Count OUTPUT;
IF @Count = 1
BEGIN
--Do Stuff
END; -- if
GO
```
|
Using variables in Transact-sql exists subquery
|
[
"",
"sql",
"t-sql",
"subquery",
""
] |
I am trying to compare a column `col1` and a variable `@myvar` in a `WHERE` clause. Both usually contain GUIDs, but may also have NULL values.
I thought I could get around the fact that `NULL=NULL` evaluates to FALSE by using `WHERE ISNULL(col1, '')=ISNULL(@myvar, '')`. That would compare two empty strings instead, and evaluate to TRUE.
This will, however, produce the following error message:
> Msg 8169, Level 16, State 2, Line 3 Conversion failed when converting
> from a character string to uniqueidentifier.
I tried
```
DECLARE @myvar uniqueidentifier = NULL
SELECT ISNULL(@myvar,'') as col1
```
Same error message.
Two questions:
First, I am trying to convert a uniqueidentifier variable - even though it has a NULL value - to an (empty!) string, not the other way around, as the error message suggests. What gives?
Second, is there a better way to word that WHERE clause I need, to allow for comparing uniqueidentifiers that might be NULL?
|
Since the first argument you are passing `isnull` is not a **literal `null`**, it will determine the return type of that call, a `uniqueidentifier` in your case. The second argument, `''`, cannot be cast to this type, hence the error you're getting.
One way around this is just to explicitly check for `null`s:
```
WHERE (@myvar IS NULL AND col1 IS NULL) OR (col1 = @myvar)
```
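The explicit-check pattern is portable across databases; a small SQLite demonstration via Python's `sqlite3` module (using text values in place of uniqueidentifiers, since the NULL semantics are what matter):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (col1 TEXT)")
cur.executemany("INSERT INTO t VALUES (?)", [("abc",), (None,)])

# Plain equality never matches NULL -- the comparison yields UNKNOWN
assert cur.execute("SELECT COUNT(*) FROM t WHERE col1 = NULL").fetchone()[0] == 0

# Explicit NULL checks make the intent clear and avoid any type conversion
myvar = None
n = cur.execute(
    "SELECT COUNT(*) FROM t WHERE (? IS NULL AND col1 IS NULL) OR col1 = ?",
    (myvar, myvar)).fetchone()[0]
print(n)  # 1 -- the NULL row now matches
```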
|
I think below expression can be used to check if the GUID column is empty
```
CAST(0x0 AS UNIQUEIDENTIFIER)
```
something like
```
...WHERE GuidId <> CAST(0x0 AS UNIQUEIDENTIFIER)
```
|
SQL Server: ISNULL on uniqueidentifier
|
[
"",
"sql",
"sql-server",
"null",
"uniqueidentifier",
"isnull",
""
] |
I have tried looking around for this issue and couldn't find much. I was wondering the best method of removing duplicate parent/child rows in an SQL table.
For example I have a table like so:
```
Id | CompanyId | DuplicateId
1 | 1 | 2
2 | 2 | 1 <------ CompanyId 2 is already a duplicate of 1
3 | 2 | 3
```
I wish to select all non duplicates:
```
Id | CompanyId | DuplicateId
1 | 1 | 2
3 | 2 | 3
```
Any help or pointing in the right direction would be great. Thanks!
Edit: I'm using Microsoft SQL Server
|
Here is one method, that works in most databases:
```
delete from sqltable
where duplicateid > companyid and
exists (select 1
from sqltable st2
where st2.duplicateid = sqltable.companyid and
st2.companyid = sqltable.duplicateid
);
```
|
Try this
```
SELECT * FROM tablename t1 where CompanyId not IN (SELECT Duplicateid from
tablename WHERE CompanyId <> t1.CompanyId and duplicateid > companyid)
```
|
SQL Remove duplicate parent child entries
|
[
"",
"sql",
"sql-server",
""
] |
Unfortunately I am not very strong when it comes to SQL, and not very strong when it comes to explaining myself. I will try my best and hopefully someone can help.
I have a table called soc\_stat which has a lot of columns. One of those is "timestamp", which is just a timestamp, and another is date\_of\_post, which is just a `date_format();` of the timestamp, looking like this (example): 2015-08-28 00:18:52
What I would love to do is make a query that returns the number of entries per day/hour.
Something along the lines of these examples:
```
2015-08-28 00:01:00 = 218 entries
2015-08-28 00:02:00 = 327 entries
2015-08-28 00:03:00 = 487 entries
2015-08-28 00:04:00 = 118 entries
```
I need it for a graph.
I currently use the following code to get the last 24 hours out from my database
```
SELECT * FROM `soc_stat` WHERE soc_stat.date_of_post > DATE_SUB(CURDATE(), INTERVAL 1 DAY)
```
*Edit*
Thanks a lot for the answers. They got the information out exactly how I wanted. However, I have one more question. As it is now, if there are no entries within an hour or day, nothing is returned at all. Is it possible to get a 0 returned instead? I have a graph that depends on the values I get from the SQL query, so if I want a 24-hour graph, some hours are not going to be displayed if they have no entries.
|
Try:
```
SELECT date(t.date_of_post) AS date,
hour(t.date_of_post) as hour,
count(*) as entries
FROM `soc_stat` t
WHERE t.date_of_post > DATE_SUB(CURDATE(), INTERVAL 1 DAY)
GROUP BY date(t.date_of_post), hour(t.date_of_post)
```
OR:
```
SELECT DATE_ADD(date(t.date_of_post), INTERVAL hour(t.date_of_post) HOUR) AS dateTime,
count(*) as entries
FROM `soc_stat` t
WHERE t.date_of_post > DATE_SUB(CURDATE(), INTERVAL 1 DAY)
GROUP BY date(t.date_of_post), hour(t.date_of_post)
```
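The same grouping idea can be sketched in SQLite via Python's `sqlite3` module; SQLite has no `hour()`/`date()` functions, so `strftime()` stands in here (sample timestamps are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE soc_stat (date_of_post TEXT)")
cur.executemany("INSERT INTO soc_stat VALUES (?)", [
    ("2015-08-28 00:18:52",), ("2015-08-28 00:45:10",), ("2015-08-28 01:02:33",)])

# strftime() truncates each timestamp to its hour bucket before grouping
rows = cur.execute("""
    SELECT strftime('%Y-%m-%d %H:00:00', date_of_post) AS hour_bucket, COUNT(*)
    FROM soc_stat
    GROUP BY hour_bucket
    ORDER BY hour_bucket
""").fetchall()
print(rows)  # [('2015-08-28 00:00:00', 2), ('2015-08-28 01:00:00', 1)]
```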
|
This is for per day and per hour
```
SELECT date(soc_stat.date_of_post) as date,hour(soc_stat.date_of_post) as hour,
count(*) FROM `soc_stat`
WHERE soc_stat.date_of_post > DATE_SUB(CURDATE(), INTERVAL 1 DAY)
group by date(soc_stat.date_of_post),hour(soc_stat.date_of_post)
```
|
Fetching entries by hours from MySQL
|
[
"",
"mysql",
"sql",
"database",
""
] |
I have two tables, messages and users. I want to find out which users received the messages; however, the query is only returning one recipient per message.
My Schemas are as follow
**Messages**
```
msg_id | msg_content | recipients |
-----------------------------------
1 | Hello world | 1,2,3,4,5
2 | Test | 1,3,5
3 | Welcome | 1,2,4
```
**Users**
```
uid | fname | lname |
---------------------------
1 | John |Doe |
2 | Jane |Doe |
3 | Mark |Someone |
4 | Mary |lady |
5 | Anthony |Doe |
```
So I would love to see my results simply as
```
msg_id | msg_content | recipients |
-----------------------------------
1 | Hello world | John,Jane,Mark,Mary,Anthony
2 | Test | John,Mark,Anthony
3 | Welcome | John,Jane,Mary
```
So I am doing my query as so
```
SELECT msg_id,msg_content,fname AS recepients FROM messages a
LEFT JOIN users ON uid IN(a.recipients)
```
When I run that query I only get one recipient. Please advise. Thanks.
|
I think you have to use an alternative way of structuring your tables.
Messages
```
msg_id | msg_content |
----------------------
1 | Hello world |
2 | Test |
3 | Welcome |
```
Users
```
uid | fname | lname |
---------------------------
1 | John |Doe |
2 | Jane |Doe |
3 | Mark |Someone |
4 | Mary |lady |
5 | Anthony |Doe |
```
users\_has\_messages
```
uhm_id | uid | msg_id |
---------------------------
1 | 1 | 1 |
2 | 2 | 1 |
3 | 3 | 1 |
4 | 2 | 2 |
5 | 1 | 3 |
```
Then you can use an ordinary join in your query.
|
Okay, so this schema isn't the best (using comma separated lists of IDs is not a great idea, and the performance of any joins will get pretty bad pretty quick). Best bet is to have a third table mapping uid's to msg\_id's as mentioned by @Thilina.
That said, this query will do probably what you're after:
```
SELECT msg_id,msg_content,GROUP_CONCAT(fname) AS recepients FROM messages a
LEFT JOIN users ON FIND_IN_SET(uid, a.recipients)
GROUP BY msg_id
```
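For comparison, here is how the normalized design reads once the comma list is replaced by a junction table, sketched in SQLite via Python's `sqlite3` module (`message_recipients` is a hypothetical table name; SQLite's `GROUP_CONCAT` matches MySQL's here, though it lacks `FIND_IN_SET`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE messages (msg_id INTEGER, msg_content TEXT);
CREATE TABLE users (uid INTEGER, fname TEXT);
CREATE TABLE message_recipients (msg_id INTEGER, uid INTEGER);
INSERT INTO messages VALUES (1, 'Hello world'), (2, 'Test');
INSERT INTO users VALUES (1, 'John'), (2, 'Jane'), (3, 'Mark');
INSERT INTO message_recipients VALUES (1, 1), (1, 2), (1, 3), (2, 1), (2, 3);
""")

# Plain equi-joins replace FIND_IN_SET once recipients live in their own table
rows = cur.execute("""
    SELECT m.msg_id, m.msg_content, GROUP_CONCAT(u.fname) AS recipients
    FROM messages m
    JOIN message_recipients r ON r.msg_id = m.msg_id
    JOIN users u ON u.uid = r.uid
    GROUP BY m.msg_id, m.msg_content
    ORDER BY m.msg_id
""").fetchall()
for r in rows:
    print(r)
```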
|
MySQL Select use IN and GROUP BY
|
[
"",
"mysql",
"sql",
""
] |
Please help me with this problem; I've been searching for a couple of hours and can't solve it.
Here is my code :
```
Private Sub search_btn_Click(sender As Object, e As EventArgs) Handles search_btn.Click
connectionString = "Data Source=MININT-BJDM4MI;AttachDbFilename=C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\QCDB.mdf;Database=QCDB;Trusted_Connection=Yes;"
sql = "SELECT * FROM PRODUCT WHERE SAMPLE_SOURCE = @SOURCE"
Dim connection As New SqlConnection(connectionString)
Dim dataadapter As New SqlDataAdapter(sql, connection)
Dim ds As New System.Data.DataSet()
Dim cmd = New SqlCommand(sql, connection)
cmd.Parameters.AddWithValue("@SOURCE", spl_source_combox.Text)
Try
connection.Open()
dataadapter.Fill(ds, "PRODUCT")
DataGridView1.Refresh()
cmd.ExecuteNonQuery()
connection.Close()
DataGridView1.DataSource = ds.Tables(0)
Catch ex As Exception
MessageBox.Show(ex.ToString)
End Try
End Sub
```
|
When you create the SqlDataAdapter, you should be passing in your command object instead of the SQL string.
```
Private Sub search_btn_Click(sender As Object, e As EventArgs) Handles search_btn.Click
connectionString = "Data Source=MININT-BJDM4MI;AttachDbFilename=C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\QCDB.mdf;Database=QCDB;Trusted_Connection=Yes;"
sql = "SELECT * FROM PRODUCT WHERE SAMPLE_SOURCE = @SOURCE"
Dim connection As New SqlConnection(connectionString)
Dim cmd = New SqlCommand(sql, connection)
Dim dataadapter As New SqlDataAdapter(cmd)
Dim ds As New System.Data.DataSet()
cmd.Parameters.AddWithValue("@SOURCE", spl_source_combox.Text)
Try
connection.Open()
dataadapter.Fill(ds, "PRODUCT")
DataGridView1.Refresh()
connection.Close()
DataGridView1.DataSource = ds.Tables(0)
Catch ex As Exception
MessageBox.Show(ex.ToString)
End Try
End Sub
```
|
It would be even better if you used a stored procedure rather than passing SQL in your code, but the reason for your exception is that the @SOURCE variable isn't declared in your SQL statement. Try this:
`sql = "DECLARE @SOURCE varchar(50);SELECT * FROM PRODUCT WHERE SAMPLE_SOURCE = @SOURCE"`
|
Must declare the scalar variable (SQL)
|
[
"",
"sql",
"vb.net",
""
] |
I have an Oracle database with many tables that have identical structure (columns are all the same). The table names are similar also. The names of the tables are like table\_1, table\_2, table\_3...
I know this isn't the most efficient design, but I don't have the option of changing this at this time.
In this case, is it possible to write a single SQL query to extract all rows matching the same condition across multiple tables (hundreds of tables) without explicitly naming each table?
I realize I could use something like
select \* from table\_1 UNION select \* from table\_2 UNION select \* from table\_3...select \* from table\_1000
But is there a more elegant sql statement that can be run that extracts from all matching table names into one result without having to name each table explicitly.
Something like
select \* from table\_%
Is something like that possible? If not, what is the most efficient way to write this query?
|
You can use `dbms_xmlgen` to query tables using a pattern, which generates an XML document as a CLOB:
```
select dbms_xmlgen.getxml('select * from ' || table_name
|| ' where some_col like ''%Test%''') as xml_clob
from user_tables
where table_name like 'TABLE_%';
```
You said you wanted a condition, so I've included a dummy one, `where some_col like '%Test%'`.
You can then use XMLTable to extract the values back as relational data, converting the CLOB to XMLType on the way:
```
select x.*
from (
select xmltype(dbms_xmlgen.getxml('select * from ' || table_name
|| ' where some_col like ''%Test%''')) as xml
from user_tables
where table_name like 'TABLE_%'
) t
cross join xmltable('/ROWSET/ROW'
passing t.xml
columns id number path 'ID',
some_col varchar2(10) path 'SOME_COL'
) x;
```
[SQL Fiddle demo](http://sqlfiddle.com/#!4/f4a99/1) which retrieves one matching row from each of two similar tables. Of course, this assumes your table names follow a useful pattern like `table_%`, but you suggest they do.
This is the only way I know to do something like this without resorting to PL/SQL (and having searched back a bit, was probably inspired [by this answer to count multiple tables](https://stackoverflow.com/a/10705088/266304)). Whether it's efficient (enough) is something you'd need to test with your data.
|
This is kind of messy and best performed in a middle-tier, but I suppose you could basically loop over the tables and use [`EXECUTE IMMEDIATE`](http://docs.oracle.com/cd/B12037_01/appdev.101/b10807/13_elems017.htm) to do it.
Something like:
```
for t in (select table_name from all_tables where table_name like 'table_%') loop
execute immediate 'select blah from ' || t.table_name;
end loop;
```
|
one query for many similar tables
|
[
"",
"sql",
"oracle",
"union",
""
] |
The logical error here is that the output displays like this:
```
+-----------------+-------+----------+-------+
| registration_no | color | producer | count |
+-----------------+-------+----------+-------+
| X23-234 | green | VW | 2 |
| BC-343 | red | BMW | 2 |
| TW-435 | blue | Ford | 1 |
| 232-444 | pink | Army | 2 |
+-----------------+-------+----------+-------+
```
As you can see the minimum count is the color blue and I want it to display only this:
```
| TW-435 | blue | Ford |
```
The desired query should not use the LIMIT keyword, only a nested query.
Here is my sql query:
```
mysql> SELECT registration_no, color, producer, COUNT(*) as count
FROM car, car_type, booking
WHERE car.car_type_id = car_type.car_type_id and booking.car_id = car.car_id
GROUP BY car.car_id;
```
I just can't figure out the solution.
|
Order by count(\*) and then limit to the first row:
```
SELECT registration_no, color, producer, COUNT(*) as count
FROM car, car_type, booking
WHERE car.car_type_id = car_type.car_type_id and booking.car_id =car.car_id
GROUP BY car.car_id
ORDER BY COUNT(*) ASC
LIMIT 1;
```
This works if you only want one row. In case you have more rows with `count(*) = 1`, you should use another method.
If you want to show the highest count, just sort descending:
```
....
ORDER BY COUNT(*) DESC
```
but in this case you'll get only one of the 3 rows having `count = 2`, not all three.
|
You can use this query to select the minimum count (note that MySQL requires an alias on the derived table):
```
select min(count) from (select COUNT(*) as count
FROM car, car_type, booking
WHERE car.car_type_id = car_type.car_type_id and booking.car_id = car.car_id
GROUP BY car.car_id) t;
```
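If you also need the full row(s) having that minimum count, not just the number itself, the `min(count)` subquery can be moved into a `HAVING` clause, which avoids `LIMIT` entirely. A minimal sketch using Python's `sqlite3` with a hypothetical single denormalized `booking` table (the three-table join from the question is omitted for brevity):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE booking (car_id INTEGER, registration_no TEXT, color TEXT, producer TEXT);
INSERT INTO booking VALUES
  (1, 'X23-234', 'green', 'VW'),  (1, 'X23-234', 'green', 'VW'),
  (2, 'BC-343',  'red',   'BMW'), (2, 'BC-343',  'red',   'BMW'),
  (3, 'TW-435',  'blue',  'Ford'),
  (4, '232-444', 'pink',  'Army'), (4, '232-444', 'pink',  'Army');
""")

# Keep only the group(s) whose booking count equals the global minimum -- no LIMIT.
rows = con.execute("""
    SELECT registration_no, color, producer
    FROM booking
    GROUP BY car_id
    HAVING COUNT(*) = (SELECT MIN(c)
                       FROM (SELECT COUNT(*) AS c FROM booking GROUP BY car_id) t)
""").fetchall()
print(rows)  # [('TW-435', 'blue', 'Ford')]
```

Unlike `ORDER BY ... LIMIT 1`, this returns every row tied for the minimum.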
|
MySQL: How to display only the minimum count without the use of limit?
|
[
"",
"mysql",
"sql",
"count",
"output",
"min",
""
] |
Is it possible to run all open scripts in SSMS? Is there a shortcut for that, just like `F5` runs only the active script? The reason I want it is a temporary change in all scripts with `CTRL+H`: I want to run all scripts without saving the changes, which is why running all scripts in a directory won't work for me.
**Update.** Nope, manually won't be faster. The reason why I need it is exactly here: [VBA clear just pivot table cache, but leaving pivot table structure](https://stackoverflow.com/q/32271306/1903793)
I have to run it on 6 scripts not just once, but every time I make changes to either the SQL code or the Excel file. Doing it manually is frustrating.
|
You can do this:
`Ctrl`+`E` (execute), `Ctrl`+`F6` (next query window), `Ctrl`+`E`, `Ctrl`+`F6`, ... repeated for each open window.
|
You can kind of do this with SQLCMD mode.
1. Create a new query. Make sure Query --> SQLCMD Mode is enabled.
2. In the new query, enter `:r` followed by the path to the scripts. So:
`:r C:\Path\To\Script\1.sql`
`:r C:\Path\To\Script\2.sql`
`:r C:\Path\To\Script\3.sql`
`:r C:\Path\To\Script\4.sql`
`:r C:\Path\To\Script\5.sql`
`:r C:\Path\To\Script\6.sql`
3. Run your script. Each file should be executed in order.
You can't enable SQLCMD mode from within an SQL script, but you can [get it to yell at you or stop executing if it's not enabled](https://dba.stackexchange.com/questions/5468/can-i-enable-sqlcmd-mode-from-inside-a-script). You can also set SSMS to always start in SQLCMD mode in Tools --> Options --> Query Execution.
|
How to run all scripts in SSMS
|
[
"",
"sql",
"sql-server",
"ssms",
""
] |
I want to sum the weight of rows with the same id1 and then compute the ratio for each row (sort of like a probability) in a column prob.
Table data:
```
id1 weight id2
1 0.1 3
1 0.2 4
1 0.3 5
1 0.8 6
2 0.5 7
2 0.6 8
2 0.7 9
```
Output should be:
```
id1 weight id2 prob
1 0.1 3 0.0714
1 0.2 4 0.1429
1 0.3 5 0.2143
1 0.8 6 0.5714
2 0.5 7 0.2778
2 0.6 8 0.3333
2 0.7 9 0.3889
```
|
This can easily be solved with a window function which is typically faster than any solution with a sub-query or derived table:
```
select id1,
weight,
id2,
weight / sum(weight) over (partition by id1) as prob
from items
order by id1, id2;
```
SQLFiddle example: <http://sqlfiddle.com/#!15/64453/1>
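As a quick sanity check, the same window-function query runs unchanged on SQLite 3.25+ as well; here is a small self-contained sketch using Python's `sqlite3` with the sample data from the question (table name `items` as in the answer):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE items (id1 INTEGER, weight REAL, id2 INTEGER);
INSERT INTO items VALUES
  (1, 0.1, 3), (1, 0.2, 4), (1, 0.3, 5), (1, 0.8, 6),
  (2, 0.5, 7), (2, 0.6, 8), (2, 0.7, 9);
""")

rows = con.execute("""
    SELECT id1, weight, id2,
           weight / SUM(weight) OVER (PARTITION BY id1) AS prob
    FROM items
    ORDER BY id1, id2
""").fetchall()

for id1, weight, id2, prob in rows:
    print(id1, weight, id2, round(prob, 4))
# first row: 1 0.1 3 0.0714  (i.e. 0.1 / 1.4)
```

Note that the probabilities within each `id1` group sum to 1, which is an easy way to verify the result.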
|
Try this
```
SELECT t.*, t.weight / t1.Weight AS Prob FROM TABLE_NAME t INNER JOIN
(
SELECT id1, SUM(weight) AS Weight FROM TABLE_NAME GROUP BY id1
) t1 ON t.id1 = t1.id1
```
|
postgres alter table and insert row probabilities
|
[
"",
"sql",
"database",
"postgresql",
"calculated-columns",
""
] |
I have this code which currently returns 0 regardless of whether the string contains the substring. It should return 1 if the substring is found.
```
CREATE FUNCTION dbo.checkLetters (@MESSAGE VARCHAR)
RETURNS INTEGER
WITH RETURNS NULL ON NULL INPUT
AS
BEGIN
DECLARE @value INTEGER;
IF @MESSAGE LIKE '%findMe%'
SET @value = 1
ELSE
SET @value = 0
RETURN @value
END;
```
I also tried using `charindex` in my `IF` statement to no avail. Am I missing something simple here?
Testing like so:
```
SELECT dbo.checkletters('dLHLd');
```
|
Use `@MESSAGE VARCHAR(max)` as the input parameter, or specify a length instead of `max`; in your function the parameter is declared as plain `VARCHAR`, which defaults to a length of 1. That is the issue.
|
Modify your function as given below:
```
ALTER FUNCTION dbo.checkLetters (@MESSAGE VARCHAR(max))
RETURNS INTEGER
WITH RETURNS NULL ON NULL INPUT
AS
BEGIN
DECLARE @value INTEGER;
DECLARE @ExpressionToFind VARCHAR(50)
SET @ExpressionToFind = 'findme'
IF @MESSAGE LIKE '%' + @ExpressionToFind + '%'
SET @value = 1
ELSE
SET @value = 0
RETURN @value
END;
```
The problem was in your input parameter.
|
User Defined Function to determine whether string contains a substring
|
[
"",
"sql",
"sql-server",
""
] |
How to get the sum from both the tables?
1. `res_transactions`
2. `TRANS_ADD`
Queries:
```
select
tranid, Qty, Price
from
res_transactions
where
order_no = '16104'
and tranid = '506060'
order by
tranid asc
select
FTRN, Qty, Price
from
TRANS_ADD
where
FTRN = '506060'
```
Kindly find the attached snapshot for more info
Expected output is
```
Qty * Price
```
Total Price should be: **13.6**
[](https://i.stack.imgur.com/guRvM.jpg)
|
```
select sum(result) as sumresult from
(select Qty * Price as result from res_transactions where order_no='16104' and tranid='506060'
union all
select Qty * Price as result from TRANS_ADD where FTRN='506060'
)t
```
|
To get sum from res\_transactions:
```
SELECT TRANID AS ID, SUM(QTY*PRICE) AS TOTAL
FROM RES_TRANSACTIONS WHERE ORDER_NO='16104' AND TRANID='506060'
GROUP BY TRANID
```
Same for TRANS\_ADD:
```
SELECT FTRN AS ID, SUM(QTY*PRICE) AS TOTAL
FROM TRANS_ADD WHERE FTRN='506060'
GROUP BY FTRN
```
So if you want to find sum of these two values you can use [union all](https://msdn.microsoft.com/en-us/library/ms180026.aspx) keyword. **Do not use union** because it will delete duplicated rows.
```
SELECT ID, SUM(TOTAL) AS TOTAL FROM (
SELECT TRANID AS ID, SUM(QTY*PRICE) AS TOTAL FROM RES_TRANSACTIONS WHERE ORDER_NO='16104' AND TRANID='506060' GROUP BY TRANID
UNION ALL
SELECT FTRN AS ID, SUM(QTY*PRICE) AS TOTAL FROM TRANS_ADD WHERE FTRN='506060' GROUP BY FTRN
) TBL
```
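The `UNION ALL` vs `UNION` warning is worth demonstrating: with plain `UNION`, two identical subtotals collapse into a single row and the grand total is silently halved. A minimal sketch of just that effect with Python's `sqlite3` (the literal 6.8 subtotals are hypothetical stand-ins for the two per-table sums):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Two identical subtotals, kept separate by UNION ALL...
union_all = con.execute(
    "SELECT SUM(total) FROM (SELECT 6.8 AS total UNION ALL SELECT 6.8) t"
).fetchone()[0]

# ...but collapsed into one row by UNION.
union = con.execute(
    "SELECT SUM(total) FROM (SELECT 6.8 AS total UNION SELECT 6.8) t"
).fetchone()[0]

print(union_all)  # 13.6 -- correct grand total
print(union)      # 6.8  -- duplicate subtotal dropped
```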
|
How to get the sum from these two tables?
|
[
"",
"sql",
"sql-server",
""
] |
In EXCEL/VBA I can program my way out of a thunderstorm, but in SQL I am still a novice. So apologies, after much Googling I can only get partway to a solution which I presume ultimately will be pretty simple, just not wrapping my head around it.
I need to create an INSERT script to add multiple rows in a 3-column table. A simple insert would be:
```
INSERT INTO table VALUES(StoreId, ItemID, 27)
```
The first hurdle is to dynamically repeat this for every StoreID in a different table, which I think becomes this:
```
INSERT INTO table
SELECT (SELECT StoreID FROM Directory.Divisions), ItemID, 27)
```
If that is actually correct and would effectively create the 50-60 rows for each store, then I'm almost there. The problem is the ItemID. This will actually be an array of ItemIDs I want to feed in manually. So if there are 50 stores and 3 ItemIDs, it would enter 150 rows. Something like:
```
ItemID = (123,456,789,246,135)
```
So how can I merge these two ideas? Pull the StoreIDs from another table, feed in the array of items for the second parameter, then my hardcoded 27 at the end. 50 stores and 10 items should create 500 rows. Thanks in advance.
|
You can use `into` to insert into the target table. To generate itemid's you will have to use `union all` with your values and `cross join` on the `divisions` table.
```
select
d.storeid,
x.itemid,
27 as somecolumn
into targettablename
from Directory.Divisions d
cross join (select 123 as itemid union all select 456 union all select 789...) x
```
Edit: If the table to `insert` into isn't created yet, it should be created before inserting the data.
```
create table targettable (store_id varchar(20), item_id varchar(20),
somecolumn int);
insert into targettable (store_id, item_id, somecolumn)
select
d.storeid,
x.itemid,
27
from Directory.Divisions d
cross join (select 123 as itemid union all select 456 union all select 789...) x
```
|
Firstly you need your array of item ids in a table of some sort. Either a permanent table, table variable or temporary table. For example using a temporary table, which you prefix with a hash symbol:
```
CREATE TABLE #ItemIds (item_id int)
INSERT INTO #ItemIds VALUES (1)
INSERT INTO #ItemIds VALUES (2)
...
INSERT INTO #ItemIds VALUES (10)
```
Then this should do the trick:
```
INSERT INTO table
SELECT StoreId, item_Id, 27
FROM Directory.Divisions, #ItemIds
```
The results set from the SELECT will be inserted into 'table'. This is an example of a cartesian join. Because there is no join condition, every row from Directory.Divisions is joined to every row in #ItemIds. Hence if you have 50 stores and 10 items, that will result in 50 x 10 = 500 rows.
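The row-count arithmetic (50 stores x 10 items = 500 rows) is easy to verify: a join with no join condition multiplies the row counts. A tiny sketch with Python's `sqlite3` using made-up `stores` and `items` tables:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE stores (store_id INTEGER);
CREATE TABLE items  (item_id  INTEGER);
INSERT INTO stores VALUES (1), (2), (3);
INSERT INTO items  VALUES (123), (456);
""")

# Cartesian join: every store paired with every item, plus the constant column.
rows = con.execute("""
    SELECT s.store_id, i.item_id, 27
    FROM stores s CROSS JOIN items i
""").fetchall()

print(len(rows))  # 3 stores * 2 items = 6 rows
```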
|
INSERT multiple rows with SELECT and an array
|
[
"",
"sql",
"arrays",
"sql-server",
"bulkinsert",
""
] |
I'm trying the following and as expected, I get two columns with the values.
```
declare @aa varchar(10) = 'bb'
declare @bb varchar(10) = 'aa'
select @aa, @bb
```
Then I tried the following and I can't explain the result. I see nothing, except the command executed successfully.
```
declare @aa varchar(10) = 'bb'
declare @bb varchar(10) = 'aa'
select @aa = @bb
```
What have I done here? I was expecting a true/false or one/zero or an error.
|
To do comparison in a query:
```
select
case
when @aa = @bb then 'MATCH'
else 'NOT MATCH'
end
```
With what you were doing:
```
select @aa, @bb
```
The above selects both values stored in the variables as separate "columns".
```
select @aa = @bb
```
The above sets the variable `@aa` equal to the value of `@bb`.
|
`select @aa = @bb` is assignment like `SET @aa = @bb`
Compare using [IIF](https://msdn.microsoft.com/en-us/library/hh213574.aspx) `SELECT IIF(@aa = @bb, 'Same', 'Different')`
**[SqlFiddleDemo](http://sqlfiddle.com/#!6/9eecb7/1918/0)**
Inequality `SELECT IIF(@aa <> @bb, 'Different', 'Same')`
In General `SELECT IIF((condition), 1, 0)` where 1 is true, 0 is false.
|
What does the comparison mean and why isn't it showing true nor false?
|
[
"",
"sql",
"sql-server",
""
] |
I have an example query (fully modified) below:
```
with projects as (
select * from projectdetails
)
select * from projects
```
When I run the above query, it runs fine.
But when I wrap it in one more select query, it throws an error:
```
select * from
(
with projects as (
select * from projectdetails
)
select * from projects
)
```
> Error: Incorrect syntax near the keyword 'with'. If this statement is
> a common table expression, an xmlnamespaces clause or a change
> tracking context clause, the previous statement must be terminated
> with a semicolon.
The same query runs fine in Oracle but not in SQL Server.
|
The query which you are trying to execute uses a `Common Table Expression`, and the syntax you are using is not valid: you cannot use the `WITH` clause inside a `SELECT` statement (i.e. in a subquery).
```
with projects as (
select * from projectdetails
)
select * from projects
```
The first query is fine and will work. The second query is not valid syntax, and it is also unnecessary, since it is just doing the same thing as the first.
|
SQL Server and Oracle have different syntax requirements for common table expressions.
In Oracle, these can come before any `SELECT`. In SQL Server, they need to come at the beginning of the query. Hence, you cannot have a subquery with `WITH` in it.
Usually, you can just move the `WITH` statement before the first `SELECT` and the query will work in both databases.
|
Query on WITH Clause in SQL Server
|
[
"",
"sql",
"sql-server",
"oracle",
"common-table-expression",
""
] |
I usually code a join query as follows:
```
Select 'field' from 'table' join 'references' on 'table'.'id' = 'references'.'reference'
```
Now suddenly I wondered, does the order in which you do `on 'table'.'id' = 'references'.'reference'` have an impact on execution speed or not?
I mainly use sqlserver, mysql and sqlite and I can imagine there might be different implementations in those three if there is a difference at all.
So basically it boils down to:
is `on 'table'.'id' = 'references'.'reference'` different from `on 'references'.'reference' = 'table'.'id'`
This question is purely out of curiosity, I'd like to know how it is treated so I can understand it better.
|
There's no difference in execution if you write the SQL-Statement
```
Select 'field'
from 'table'
join 'references'
on 'table'.'id' = 'references'.'reference'
```
or
```
Select 'field'
from 'table'
join 'references'
on 'references'.'reference' = 'table'.'id'
```
You also could write it as a cross join (only possible for inner joins)
```
Select 'field'
from 'table'
cross join 'references'
where 'table'.'id' = 'references'.'reference'
```
I wouldn't use the third example for inner joins, but it shows that for equality joins the order makes no difference if you think of the join condition as a logical expression.
Of course, if the on-clause contains an asymmetric comparison such as `x > y`, then swapping the operands means flipping the operator to `y < x`.
|
There is no difference in that order.
Join condition `ON (..here..)` is just a logical expression, nothing else.
|
Is the order of comparison important in a join query?
|
[
"",
"mysql",
"sql",
"sql-server",
"sqlite",
""
] |
In SQL Server I have a table (userMessageTbl) with this structure:
```
User UserMessage UserMessageDate AdminMessage AdminMessageDate
```
Now I want to select the messages (`UserMessage` and `AdminMessage`) from this table ordered by their dates descending.
For example:
```
User UserMessage UserMessageDate AdminMessage AdminMessageDate
test hi 2015-03-1 thanks 2015-10-4
test ok 2015-08-2 car 2015-09-1
test u 2015-10-2 book 2015-10-3
```
I want to get this:
```
thanks
book
u
car
ok
hi
```
Thanks for your help.
|
```
; WITH MD AS
(
SELECT UserMessage AS [Message], UserMessageDate AS [Date] FROM userMessageTbl
UNION ALL
SELECT AdminMessage AS [Message], AdminMessageDate AS [Date] FROM userMessageTbl
)
SELECT [Message], [Date] FROM
MD
ORDER BY [Date] DESC
```
|
Do a `UNION ALL` in a derived table, then order the result descending:
```
select msg from
(
select UserMessage as msg, UserMessageDate as msgdate from userMessageTbl
union all
select AdminMessage as msg, AdminMessageDate as msgdate from userMessageTbl
) as dt
order by msgdate desc
```
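This pattern is easy to try end to end; a self-contained sketch with Python's `sqlite3`, using the sample rows (dates zero-padded so text ordering matches chronological order) and sorting descending to match the desired output:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE userMessageTbl (
    User TEXT, UserMessage TEXT, UserMessageDate TEXT,
    AdminMessage TEXT, AdminMessageDate TEXT);
INSERT INTO userMessageTbl VALUES
  ('test', 'hi', '2015-03-01', 'thanks', '2015-10-04'),
  ('test', 'ok', '2015-08-02', 'car',    '2015-09-01'),
  ('test', 'u',  '2015-10-02', 'book',   '2015-10-03');
""")

# Stack both message/date column pairs, then sort the combined rows by date.
msgs = [r[0] for r in con.execute("""
    SELECT msg FROM (
        SELECT UserMessage  AS msg, UserMessageDate  AS msgdate FROM userMessageTbl
        UNION ALL
        SELECT AdminMessage AS msg, AdminMessageDate AS msgdate FROM userMessageTbl
    ) dt
    ORDER BY msgdate DESC
""")]
print(msgs)  # ['thanks', 'book', 'u', 'car', 'ok', 'hi']
```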
|
Select messages from two columns ordered by date
|
[
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
I have a robot script which inserts some sql statements from a sql file; some of these statements contain utf8 characters. If I insert this file manually into database using navicat tool, everything's fine. But when I try to execute this file using database library of robot framework, utf8 characters go crazy!
This is my utf8 included sql statement:
```
INSERT INTO "MY_TABLE" VALUES (2, 'ΨͺΨ³Ψͺ1');
```
This is how I use database library:
```
Connect To Database Using Custom Params cx_Oracle ${dbConnection}
Execute Sql Script ${sqlFile}
Disconnect From Database
```
This is what I get in the database:
```
οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½ 1
```
I have tried to execute the SQL file using `cx_Oracle` directly and it's still failing! It seems there is a problem in the original library. This is what I've used for importing SQL file:
```
import cx_Oracle
if __name__ == "__main__":
dsn_tns = cx_Oracle.makedsn(ip, port, sid)
db = cx_Oracle.connect(username, password, dsn_tns)
sql_commands = open(sql_file_addr, 'r').read().split(";")
cr = db.cursor()
for command in sql_commands:
if not command in ["", "\t", "\n", "\r", "\n\r", "\r\n", None]:
print "Executing SQL command:", command
cr.execute(command)
db.commit()
```
I have found that I can define the character set in the connection string. I've done it for a MySQL database and the framework successfully inserted UTF-8 characters; this is my connection string for MySQL:
```
database='db_name', user='db_username', password='db_password', host='db_ip', port=3306, charset='utf8'
```
But I don't know how to define character-set for Oracle connection string. I have tried this:
```
'db_username','db_password','db_ip:1521/db_sid','utf8'
```
And I've got this error:
```
TypeError: an integer is required
```
|
As @Yu Zhang suggested, I read discussion in this [link](https://community.oracle.com/thread/502949) and I found out that I should set an environment variable `NLS_LANG` in order to have a `UTF-8` connection to the database. So I've added below line in my test setup:
```
os.environ["NLS_LANG"] = "AMERICAN_AMERICA.AL32UTF8"
```
|
There can be several problems here.
The first problem might be that you don't save the test files using UTF-8 encoding.
Robot framework expects plain text test files to be saved using UTF-8 encoding, yet most text editors will not save by default using UTF-8.
Verify that your editor saves that way - for example, by opening the file using NotePad++ and choosing Encoding -> UTF-8
Another problem might be the connection to the Oracle database. It doesn't seem like you can configure the connection's custom properties to explicitly state UTF-8.
This means you probably need to ensure that the database schema itself is UTF-8.
|
How to insert UTF-8 characters into an Oracle database using the Robot Framework database library
|
[
"",
"sql",
"database",
"utf-8",
"robotframework",
""
] |
I am stuck with a recursive query and wondering if you can help me out.
I have the query below, which expands each row based on ShipQuantity: for example, if mfgPN "ABC1" has a ShipQuantity of 4, it lists 4 records numbered 1, 2, 3, 4.
```
WITH feedInfo
AS (
SELECT df1.RecID, MfgPN, LinkID, ShipQuantity, 1 AS Number
FROM EXT_DistributorFeed df1
WHERE 1 = 1
AND df1.mfgPN IN ('ABC1', 'ABC2')
UNION ALL
SELECT df2.RecID, df2.MfgPN, df2.LinkID, df2.ShipQuantity, feedInfo.number + 1 AS Number
FROM EXT_DistributorFeed df2
INNER JOIN feedInfo ON df2.RecID = feedInfo.RecID
WHERE 1 = 1
AND number < feedInfo.ShipQuantity
AND df2.mfgPN IN ('ABC1', 'ABC2')
)
Select feedInfo.*
From feedInfo
OPTION (maxrecursion 20000);
```
Let's say the result is
```
RecID MfgPN LinkID ShipQuantity Number
101 ABC1 L11111 4 1
102 ABC1 L11111 4 2
103 ABC1 L11111 4 3
104 ABC1 L11111 4 4
105 ABC2 L22222 2 1
106 ABC2 L22222 2 2
```
Now, I have another table "EXT\_DistributorFeedDetail" which may contain serial numbers (some part numbers have serial numbers and some don't). This table has only two columns: (1) LinkID and (2) SerialNo. Like this:
```
EXT_DistributorFeedDetail
LinkID SerialNo
L22222 S999999
L22222 S888888
```
I would like to join the feedInfo with EXT\_DistributorFeedDetail table to get the result like this:
```
RecID MfgPN LinkID ShipQuantity Number Serial
101 ABC1 L11111 4 1 NULL
102 ABC1 L11111 4 2 NULL
103 ABC1 L11111 4 3 NULL
104 ABC1 L11111 4 4 NULL
105 ABC2 L22222 2 1 S999999
106 ABC2 L22222 2 2 S888888
```
Any help from an expert out there would be greatly appreciated.
Thank you,
|
Looks like you're trying to match up the LinkID and Number from the recursive query to the LinkID and a Row Number in the `EXT_DistributorFeedDetail` table
```
WITH feedInfo
AS (
SELECT df1.RecID, MfgPN, LinkID, ShipQuantity, 1 AS Number
FROM EXT_DistributorFeed df1
WHERE 1 = 1
AND df1.mfgPN IN ('ABC1', 'ABC2')
UNION ALL
SELECT df2.RecID, df2.MfgPN, df2.LinkID, df2.ShipQuantity, feedInfo.number + 1 AS Number
FROM EXT_DistributorFeed df2
INNER JOIN feedInfo ON df2.RecID = feedInfo.RecID
WHERE 1 = 1
AND number < feedInfo.ShipQuantity
AND df2.mfgPN IN ('ABC1', 'ABC2')
)
Select fi.*,
dfd.SerialNo [Serial]
From feedInfo fi
LEFT JOIN (SELECT *, ROW_NUMBER() OVER (PARTITION BY LinkID ORDER BY SerialNo) rn
           FROM EXT_DistributorFeedDetail) dfd
ON dfd.LinkID = fi.LinkID AND dfd.rn = fi.Number
OPTION (maxrecursion 20000);
```
Depending on the order in which you want the serial numbers from the EXT\_DistributorFeedDetail table to be assigned, change the ORDER BY in the window function `ROW_NUMBER() OVER (PARTITION BY LinkID ORDER BY SerialNo)`; with an arbitrary ORDER BY the assignment is nondeterministic and could change between runs.
|
It looks like you want to match the serial numbers, in descending order, against the generated row numbers. You can do this by changing the last part of the query to this:
```
with feedinfo as (
....
)
select f.*, e.*
from feedinfo f
left join (
select *, rn = row_number() over (partition by linkid order by serialno desc)
from ext_distributorfeeddetail
) e on f.linkid = e.linkid and f.number = e.rn
option (maxrecursion 20000);
```
|
Recursive Query and INNER JOIN
|
[
"",
"sql",
"sql-server-2008",
""
] |
I have a SQL Server table with call records and I'd like to get the number of calls in total and the total calls answered.
Here is what the table looks like:
```
Extension | Status
-----------------------
300 Answered
200 Not Answered
.... ...
```
What's the most efficient way to write a query that would return the extension, number of total calls (count on the entire table) and number of answered calls (count where `Status = 'Answered'`)?
I created a subquery and joined to it, but it seems kind of inefficient. So far I have:
```
SELECT c.Extension, COUNT(*) AS total, answered.num AS totalAnswered
FROM calls c INNER JOIN (SELECT Extension, COUNT(*) AS num FROM calls
WHERE Status = 'answered' GROUP BY Extension) answered ON c.Extension = answered.Extension
GROUP BY c.Extension, answered.num
```
Thanks
|
By using a `case` statement inside the `count` aggregate function for the total answered calls, you can keep your query very simple with a `group by`.
```
select extension,
count(*) as total,
count(case when status = 'answered' then 'X' end) as totalAnswered
from calls
group by extension
```
I am assuming you are trying to return totals *per extension*.
**EDIT**
I have to admit that your post in its current form is not 100% clear about your intent. The query you posted implies that you want the counts to be *per extension*. If that's the case, then the above query will work great.
But in the text of your post you say:
> number of total calls (count on the entire table)
... which seems to imply something different: that you *don't* want the counts to *per extension*.
For completeness, here is the query you can use if you want the counts to be global to the entire table, instead of *per extension*:
```
select distinct extension,
count(*) over () as total,
count(case when status = 'answered' then 'X' end) over () as totalAnswered
from calls
```
And, if for some reason, you need a combination of both kinds of counts, then you can use something like this:
```
select extension,
count(*) as totalPerExtension,
count(case when status = 'answered' then 'X' end) as totalAnsweredPerExtension,
totalGlobal,
totalAnsweredGlobal
from (select *,
count(*) over () as totalGlobal,
count(case when status = 'answered' then 'X' end) over () as totalAnsweredGlobal
from calls) c
group by extension, totalGlobal, totalAnsweredGlobal
```
|
```
SELECT DISTINCT Extension,
       COUNT(*) OVER (PARTITION BY Extension) as total,
       SUM(answered) OVER (PARTITION BY Extension) as answered
FROM (
    SELECT Extension,
           CASE WHEN Status = 'answered' THEN 1 ELSE 0 END as answered
    FROM calls
) T
```
|
Sql Server Join to Sub Query
|
[
"",
"sql",
"sql-server",
""
] |
I have a table with the following data.
```
1
AAAAA01
AAAAA01
AAAAA01
B21
AAAAAA1
B3
AB100
```
and I want to sort the data in the following order:
```
AAAAAA1
AAAAA01
AAAAA01
AAAAA01
AB100
B21
B3
1
```
I wrote a statement but it does not give me the correct result.
```
Select
*
from
dbo.Section
order by
CASE
WHEN not Section like '%[^0-9]%' THEN CONVERT(int,Section)
WHEN Section like '[0-9]%' THEN CONVERT(int,SUBSTRING(Section,1,PATINDEX('%[A-Z]%',Section)-900000))
END
```
To help, here is a script for the table:
```
INSERT [dbo].[Section] ([Section]) VALUES (N'1')
INSERT [dbo].[Section] ([Section]) VALUES (N'AAAAA01')
INSERT [dbo].[Section] ([Section]) VALUES (N'AAAAA01')
INSERT [dbo].[Section] ([Section]) VALUES (N'AAAAA01')
INSERT [dbo].[Section] ([Section]) VALUES (N'AAAAA01')
INSERT [dbo].[Section] ([Section]) VALUES (N'AAAAA01')
INSERT [dbo].[Section] ([Section]) VALUES (N'B21')
INSERT [dbo].[Section] ([Section]) VALUES (N'AAAAAA1')
INSERT [dbo].[Section] ([Section]) VALUES (N'B3')
INSERT [dbo].[Section] ([Section]) VALUES (N'AB100')
INSERT [dbo].[Section] ([Section]) VALUES (N'2')
INSERT [dbo].[Section] ([Section]) VALUES (N'B1')
INSERT [dbo].[Section] ([Section]) VALUES (N'B32')
INSERT [dbo].[Section] ([Section]) VALUES (N'11')
INSERT [dbo].[Section] ([Section]) VALUES (N'A10')
INSERT [dbo].[Section] ([Section]) VALUES (N'ABAAAA')
```
Can you please tell me how to get the correct result?
|
Please try:
```
Select
*
from
dbo.Section
order by
REPLACE
(REPLACE
(REPLACE
(REPLACE
(REPLACE
(REPLACE
(REPLACE
(REPLACE
(REPLACE
(REPLACE ([Section], '0', 'ZZ0'),
'1', 'ZZ1'),
'2', 'ZZ2'),
'3', 'ZZ3'),
'4', 'ZZ4'),
'5', 'ZZ5'),
'6', 'ZZ6'),
'7', 'ZZ7'),
'8', 'ZZ8'),
'9', 'ZZ9'),
[Section]
```
[Sqlfiddle](http://sqlfiddle.com/#!3/d37e1/4)
Result of select for new test data:
[](https://i.stack.imgur.com/u0cwD.png)
Please check whether it is correct for you.
|
This is another way. Assuming your number strings are greater than -100000000.
```
SELECT YourString
FROM YourTable
ORDER BY CASE WHEN YourString LIKE '[0-9]%' THEN
CONVERT(int, YourString) ELSE -100000000 END, YourString
```
|
Sort varchar data in alphabetical order
|
[
"",
"sql",
"sql-server-2008",
"sql-server-2008-r2",
""
] |
I'm after a CTE that returns two columns: one with the total number of 1's and one with the total number of 0's. Currently I can get it to return one column with the total number of 1's using:
```
WITH getOnesAndZerosCTE
AS (
SELECT COUNT([message]) AS TotalNo1s
FROM dbo.post
WHERE dbo.checkletters([message]) = 1
--SELECT COUNT([message]) AS TotalNo0s
--FROM dbo.post
--WHERE dbo.checkletters([message]) = 0
)
SELECT * FROM getOnesAndZerosCTE;
```
How do I add a second column called TotalNo0s in the same CTE? I have commented out the second query to show what I mean.
|
Using conditional aggregation:
```
WITH getOnesAndZerosCTE AS(
SELECT
TotalNo1s = SUM(CASE WHEN dbo.checkletters([message]) = 1 THEN 1 ELSE 0 END),
TotalNo0s = SUM(CASE WHEN dbo.checkletters([message]) = 0 THEN 1 ELSE 0 END)
FROM post
)
SELECT * FROM getOnesAndZerosCTE;
```
|
If you use COUNT() directly, just be aware that it counts only NON-NULL values. You can omit the ELSE branch of the CASE, which then implicitly returns NULL:
```
SELECT
COUNT(CASE WHEN dbo.checkletters([message]) = 1 THEN 1 END) TotalNo1s
, COUNT(CASE WHEN dbo.checkletters([message]) = 0 THEN 1 END) TotalNo0s
FROM post
```
or, explicitly state NULL
```
SELECT
COUNT(CASE WHEN dbo.checkletters([message]) = 1 THEN 1 ELSE NULL END) TotalNo1s
, COUNT(CASE WHEN dbo.checkletters([message]) = 0 THEN 1 ELSE NULL END) TotalNo0s
FROM post
```
|
Count rows for two columns using two different clauses
|
[
"",
"sql",
"sql-server",
"common-table-expression",
""
] |
I have an events table with start and end date columns (events do not overlap); sample data:
```
if object_id('tempdb..#SourceTable') is not null
begin
drop table #SourceTable
end
create table #SourceTable
(
Id int identity(1,1) not null,
WindowRange varchar(15) not null,
StartDatetime datetime null,
EndDatetime datetime null
)
insert into #SourceTable
(
WindowRange,
StartDatetime,
EndDatetime
)
values
('04:20 - 05:36', '2015-08-31 04:20:01.890', '2015-08-31 05:36:14.290' ),
('00:20 - 01:24', '2015-08-31 00:20:01.487', '2015-08-31 01:24:52.983' ),
('20:20 - 21:27', '2015-08-30 20:20:01.177', '2015-08-30 21:27:53.317' ),
('16:20 - 17:28', '2015-08-30 16:20:01.133', '2015-08-30 17:28:24.173' ),
('12:20 - 13:30', '2015-08-30 12:20:01.273', '2015-08-30 13:30:38.370' )
```
**Sample Output**
```
Id WindowRange StartDatetime EndDatetime
1 04:20 - 05:36 2015-08-31 04:20:01.890 2015-08-31 05:36:14.290
2 00:20 - 01:24 2015-08-31 00:20:01.487 2015-08-31 01:24:52.983
3 20:20 - 21:27 2015-08-30 20:20:01.177 2015-08-30 21:27:53.317
4 16:20 - 17:28 2015-08-30 16:20:01.133 2015-08-30 17:28:24.173
5 12:20 - 13:30 2015-08-30 12:20:01.273 2015-08-30 13:30:38.370
```
I would like to have additional rows that fill the gaps in the ranges, for the example above
**Expected Output**
```
Id WindowRange StartDatetime EndDatetime
1 04:20 - 05:36 2015-08-31 04:20:01.890 2015-08-31 05:36:14.290
2 01:24 - 04:20 2015-08-31 01:24:52.983 2015-08-31 04:20:01.890
3 00:20 - 01:24 2015-08-31 00:20:01.487 2015-08-31 01:24:52.983
4 00:00 - 00:20 2015-08-31 00:00:00.000 2015-08-31 00:20:01.487
5 21:27 - 23:59 2015-08-30 21:27:53.317 2015-08-30 23:59:59.999
6 20:20 - 21:27 2015-08-30 20:20:01.177 2015-08-30 21:27:53.317
7 17:28 - 20:20 2015-08-30 17:28:24.173 2015-08-30 20:20:01.177
8 16:20 - 17:28 2015-08-30 16:20:01.133 2015-08-30 17:28:24.173
9 13:30 - 16:20 2015-08-30 13:30:38.370 2015-08-30 16:20:01.133
10 12:20 - 13:30 2015-08-30 12:20:01.273 2015-08-30 13:30:38.370
```
I have tried using a common table expression with a window function but can't seem to get it right
```
;with myCTE as
(
select
row_number() over (order by EndDatetime desc) as SeqNo,
StartDatetime,
EndDatetime
from #SourceTable
)
select
t1.SeqNo as [T1SeqNo],
t2.SeqNo as [T2SeqNo],
t1.StartDatetime as [T1Start],
t1.EndDatetime as [T1End],
t2.StartDatetime as [T2Start],
t2.EndDatetime as [T2End]
from myCTE t1
left join myCTE t2
on t1.SeqNo = t2.SeqNo - 1
```
Any suggestion/help would be greatly appreciated.
|
```
DECLARE @WindowRange varchar(15),
@StartDatetime datetime,
@EndDatetime datetime,
@StartDate date,
@EndDate date,
@NextStartDatetime datetime,
@NextEndDatetime datetime
DECLARE yourCursor CURSOR FORWARD_ONLY READ_ONLY FOR
SELECT StartDatetime, EndDatetime FROM #SourceTable order by StartDatetime
OPEN yourCursor
FETCH NEXT FROM yourCursor INTO @StartDatetime, @EndDatetime
IF @@FETCH_STATUS = 0
BEGIN
FETCH NEXT FROM yourCursor INTO @NextStartDatetime, @NextEndDatetime
WHILE @@FETCH_STATUS = 0
BEGIN
SET @StartDate = @StartDatetime
SET @EndDate = @EndDatetime
IF @EndDate > @StartDate
BEGIN
SET @WindowRange = LEFT(CONVERT(varchar, @StartDatetime, 108), 5) + ' - ' + LEFT(CONVERT(varchar, @EndDate, 108), 5)
INSERT INTO #SourceTable (WindowRange, StartDatetime, EndDatetime) VALUES (@WindowRange, @StartDatetime, @EndDate)
SET @WindowRange = '00:00' + ' - ' + LEFT(CONVERT(varchar, @EndDatetime, 108), 5)
INSERT INTO #SourceTable (WindowRange, StartDatetime, EndDatetime) VALUES (@WindowRange, @EndDate, @EndDatetime)
END
SET @StartDate = @EndDatetime
SET @EndDate = @NextStartDatetime
IF @EndDate > @StartDate
BEGIN
SET @WindowRange = LEFT(CONVERT(varchar, @EndDatetime, 108), 5) + ' - ' + '00:00' --@StartDate
INSERT INTO #SourceTable (WindowRange, StartDatetime, EndDatetime) VALUES (@WindowRange, @EndDatetime, @StartDate)
SET @WindowRange = '00:00' + ' - ' + LEFT(CONVERT(varchar, @NextStartDatetime, 108), 5) --@EndDate
INSERT INTO #SourceTable (WindowRange, StartDatetime, EndDatetime) VALUES (@WindowRange, @EndDate, @NextStartDatetime)
END
ELSE IF @NextStartDatetime > @EndDatetime
BEGIN
SET @WindowRange = LEFT(CONVERT(varchar, @EndDatetime, 108), 5) + ' - ' + LEFT(CONVERT(varchar, @NextStartDatetime, 108), 5)
INSERT INTO #SourceTable (WindowRange, StartDatetime, EndDatetime) VALUES (@WindowRange, @EndDatetime, @NextStartDatetime)
END
SET @StartDatetime = @NextStartDatetime
SET @EndDatetime = @NextEndDatetime
FETCH NEXT FROM yourCursor INTO @NextStartDatetime, @NextEndDatetime
END
END
CLOSE yourCursor;
DEALLOCATE yourCursor;
select * from #SourceTable order by StartDatetime
```
|
```
;with myCTE as
(
select
row_number() over (order by EndDatetime desc) as SeqNo,
StartDatetime,
EndDatetime
from #SourceTable
)
select ROW_NUMBER() over (order by T1Start DESC), *
from (
select
t1.StartDatetime as [T1Start],
t1.EndDatetime as [T1End]
from myCTE t1
UNION ALL
select
t1.EndDatetime as [T1Start],
t2.StartDatetime as [T1SEnd]
from myCTE t1
inner join myCTE t2
on t1.SeqNo = t2.SeqNo + 1
) as t
order by T1Start DESC
```
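On SQL Server 2012+ (and SQLite 3.25+) the same adjacent-row pairing can be written with the `LEAD()` window function instead of a self-join; it is not available on the asker's SQL Server 2008, but shown here for comparison as a sketch in Python's `sqlite3` with simplified sample rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE SourceTable (StartDatetime TEXT, EndDatetime TEXT);
INSERT INTO SourceTable VALUES
  ('2015-08-31 04:20:01', '2015-08-31 05:36:14'),
  ('2015-08-31 00:20:01', '2015-08-31 01:24:52'),
  ('2015-08-30 20:20:01', '2015-08-30 21:27:53');
""")

# Each gap runs from a row's end to the next row's start, in chronological order.
pairs = con.execute("""
    SELECT EndDatetime AS gap_start,
           LEAD(StartDatetime) OVER (ORDER BY StartDatetime) AS gap_end
    FROM SourceTable
    ORDER BY StartDatetime
""").fetchall()
gaps = [p for p in pairs if p[1] is not None]  # the latest row has no successor

print(gaps)
# [('2015-08-30 21:27:53', '2015-08-31 00:20:01'),
#  ('2015-08-31 01:24:52', '2015-08-31 04:20:01')]
```

Note this produces only the gap rows between events; the midnight-boundary rows from the expected output would still need to be added separately.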
|
Filling gaps with date ranges in SQL
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"gaps-and-islands",
""
] |
Really hard to even explain this question. I have data sets in SQL Server 2008 that contain rows with a start and end date, where the date ranges can overlap across rows. I am trying to figure out a simple query, without the use of a cursor, to group these into distinct non-overlapping ranges.
I have a query that will handle some situations but not all.
**Example 1:**
In this example, product A runs for about 30 days but product B runs for the entire year. I need the distinct list of date ranges. Day 1 - 30, Day 30 - 365
```
Data in Table
product start end
------- ---------- ----------
A 09/02/2015 10/01/2015
B 09/02/2015 08/31/2016
Desired Query Result
start end
---------- ----------
09/02/2015 10/01/2015
10/01/2015 08/31/2016
```
**Example 2:**
This example is similar to Example 1 above but product A already has two ranges defined. The result of the query would be the same, 2 date ranges. Day 1 - 30, Day 30 - 365
```
Data in Table
product start end
------- ---------- ----------
A 09/02/2015 10/01/2015
A 10/01/2015 08/31/2016
B 09/02/2015 08/31/2016
Desired Query Result
start end
---------- ----------
09/02/2015 10/01/2015
10/01/2015 08/31/2016
```
**Current Query**
This very simple query works on example 2 because there are two periods already defined. I cannot figure out how to make it work for example 1.
```
select start_on, min(end_on)
from product p
group by start_on
order by start_on
```
Any help would be much appreciated! I really prefer not to use a cursor. I have been thinking about an OVER/PARTITION BY but so far have not been able to make any progress on it.
|
How about:
```
WITH dates (dt) as
(
SELECT start [dt] from product
UNION
SELECT [end] [dt] from product)

SELECT d1.dt [start_on],
       (select min(d2.dt) from dates d2 where d2.dt > d1.dt) [end_on]
FROM dates d1
WHERE not (select min(d2.dt) from dates d2 where d2.dt > d1.dt) is NULL
ORDER BY [dt]
```
|
I think this is what you need.
```
DECLARE @t TABLE
(PRODUCT CHAR(1)
,START DATE
,[END] DATE
)
INSERT INTO @t VALUES ('A','09/02/2015','10/01/2015')
,('B','09/02/2015','08/31/2016')
,('C','09/02/2015','12/31/2016')
,('D','09/02/2015','10/21/2017')
SELECT * FROM @t /*See sample data*/
SELECT START,[END]
FROM ( SELECT START, [END], ROW_NUMBER() OVER (PARTITION BY START ORDER BY START ASC) Rn
FROM @t) X
WHERE X.Rn = 1 /*Pick the first record with initial start date*/
UNION
SELECT X.END1, X.END2
FROM
(
SELECT DISTINCT t1.[END] END1,t2.[END] END2, ROW_NUMBER() OVER (PARTITION BY t1.[END] ORDER BY T1.[END], T2.[END]) Rn /*Partition by t1.END date to get unique combinations ranked as 1 */
FROM @t t1
JOIN @t t2 /*Self join on @t to pull distinct END date combination. Select the first End date (from t1) as Start Date and Second End Date (from t2) as End Date */
ON t1.START = t2.START
WHERE t1.[END] < t2.[END] /*Make sure t1.END date is less than t2.END date */
)X
WHERE X.Rn = 1 /*Pull the unique combination of t1.END date (which is technically the new Start date) */
```
[](https://i.stack.imgur.com/EEE9X.png)
|
SQL to group by date range of various rows
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I don't know how to format this, but I believe it's easy to understand.
I have the following table, lets call it "sales"
```
|Item| |Price| |PriceDate|
ItemA 801.36 09/23/2011
ItemA 800.64 09/23/2011
ItemA 803.55 09/22/2011
ItemB 4701.36 09/22/2011
ItemB 1101.36 09/22/2011
ItemB 4801.36 09/20/2011
ItemB 401.36 09/22/2011
ItemC 9601.36 09/21/2011
ItemC 201.36 09/19/2011
ItemC 301.36 09/17/2011
```
I'm given a date and I need to retrieve the records with the closest date, and only those, for example, if 09/24/2011 is the input, the output should be only the records from the 23rd for item A, 22nd for itemB, and 21st for itemC.
Using SQL Server 2012.
|
One way to go about it is to calculate the difference between the row's data and the given date by using [`datediff`](https://msdn.microsoft.com/en-us/library/ms189794.aspx), assign a [`rank`](https://msdn.microsoft.com/en-us/library/ms176102.aspx) to each row accordingly and filter by it.
Here I'm using `?` as a placeholder for the required date, just switch it with the correct syntax of the language you're using:
```
SELECT item, price, pricedate
FROM (SELECT item, price, pricedate,
RANK() OVER (PARTITION BY item
ORDER BY ABS(DATEDIFF(day, pricedate, ?))) AS rk
      FROM sales) t
WHERE rk = 1
```
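The same keep-the-closest-date logic can be sanity-checked outside the database. A minimal Python sketch (using a trimmed copy of the question's sample data) that mirrors the `RANK ... ORDER BY ABS(DATEDIFF(...))` and `WHERE rk = 1` steps:

```python
from datetime import date

# (item, price, pricedate) rows, trimmed from the question's sample data
rows = [
    ("ItemA", 801.36, date(2011, 9, 23)),
    ("ItemA", 800.64, date(2011, 9, 23)),
    ("ItemA", 803.55, date(2011, 9, 22)),
    ("ItemB", 4701.36, date(2011, 9, 22)),
    ("ItemB", 4801.36, date(2011, 9, 20)),
    ("ItemC", 9601.36, date(2011, 9, 21)),
    ("ItemC", 201.36, date(2011, 9, 19)),
]

def closest_rows(rows, target):
    # Smallest absolute day difference per item (the RANK ... ORDER BY ABS(DATEDIFF(...)) step)
    best = {}
    for item, _, d in rows:
        diff = abs((d - target).days)
        best[item] = min(best.get(item, diff), diff)
    # Keep only the rows that achieve that minimum (the WHERE rk = 1 step)
    return [r for r in rows if abs((r[2] - target).days) == best[r[0]]]

result = closest_rows(rows, date(2011, 9, 24))
```

`closest_rows` keeps both ItemA rows from the 23rd, since `RANK` (unlike `ROW_NUMBER`) retains ties.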
|
```
DECLARE @theTable TABLE (Item VARCHAR(10), price DECIMAL(10,2), priceDate DATE)
INSERT @theTable ( Item, price, priceDate )
VALUES
('ItemA',801.36,'2011-09-23'),
('ItemA',800.64,'2011-09-23'),
('ItemA',803.55,'2011-09-22'),
('ItemB',4701.36,'2011-09-22'),
('ItemB',1101.36,'2011-09-22'),
('ItemB',4801.36,'2011-09-20'),
('ItemB',401.36,'2011-09-22'),
('ItemC',9601.36,'2011-09-21'),
('ItemC',201.36,'2011-09-19'),
('ItemC',301.36,'2011-09-17')
DECLARE @inputDate DATE
SET @inputDate = '2011-09-24'
SELECT X.Item, X.price, X.priceDate FROM (
SELECT TT.Item, TT.price, TT.priceDate,
RANK() OVER (PARTITION BY [Item]
ORDER BY ABS(DATEDIFF(DAY, @inputDate, TT.priceDate))) AS RN
FROM @theTable TT
) AS X
WHERE RN = 1
```
---
> ```
> (10 row(s) affected)
> Item price priceDate
> ---------- --------------------------------------- ----------
> ItemA 801.36 2011-09-23
> ItemA 800.64 2011-09-23
> ItemB 4701.36 2011-09-22
> ItemB 1101.36 2011-09-22
> ItemB 401.36 2011-09-22
> ItemC 9601.36 2011-09-21
> ```
|
How to get distinct closest date in SQL Server?
|
[
"",
"sql",
"sql-server",
"select",
"sql-server-2012",
""
] |
The project is using Postgres 9.3
I have tables (that I have simplified) as follows:
```
t_person (30 million records)
- id
- first_name
- last_name
- gender
t_city (70,000 records)
- id
- name
- country_id
t_country (20 records)
- id
- name
t_last_city_visited (over 200 million records)
- person_id
- city_id
- country_id
- There is a unique constraint on person_id, country_id to
ensure that each person only has one last city per country
```
What I need to do are variations on the following:
**Get the ids of Person who are female who have visited country 'UK'
but have never visited country 'USA'**
I have tried the following, but it is too slow.
```
select t_person.id from t_person
join t_last_city_visited
on (
t_last_city_visited.person_id = t_person.id
and country_id = (select id from t_country where name = 'UK')
)
where gender = 'female'
except
(
select t_person.id from t_person
join t_last_city_visited
on (
t_last_city_visited.person_id = t_person.id
and country_id = (select id from t_country where name = 'USA')
)
)
```
I would really appreciate any help.
|
Hint: What you want to do here is to find the females for whom there EXISTS a visit to the UK, but where NOT EXISTS a visit to the US.
Something like:
```
select ...
from t_person
where ...
and exists (select null
from t_last_city_visited join
t_country on (...)
where t_country.name = 'UK')
and not exists (select null
from t_last_city_visited join
t_country on (...)
where t_country.name = 'US')
```
Another approach, to find the people who have visited the UK and not the US, which you can then join to the people to filter by gender:
```
select person_id
from t_last_city_visited join
t_country on t_last_city_visited.country_id = t_country.id
where t_country.name in ('US','UK')
group by person_id
having max(t_country.name) = 'UK'
```
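The `GROUP BY ... HAVING` variant is easy to verify with SQLite from Python; the table and column names come from the question, while the sample people are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t_country (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE t_last_city_visited (person_id INTEGER, city_id INTEGER, country_id INTEGER);
INSERT INTO t_country VALUES (1, 'UK'), (2, 'US');
-- person 10 visited both countries, person 20 only the UK, person 30 only the US
INSERT INTO t_last_city_visited VALUES
  (10, 100, 1), (10, 200, 2),
  (20, 100, 1),
  (30, 200, 2);
""")
rows = con.execute("""
    SELECT person_id
    FROM t_last_city_visited
    JOIN t_country ON t_last_city_visited.country_id = t_country.id
    WHERE t_country.name IN ('US', 'UK')
    GROUP BY person_id
    HAVING MAX(t_country.name) = 'UK'
""").fetchall()
```

The trick relies on `'US'` sorting after `'UK'`, so anyone with a US visit ends up with `MAX(name) = 'US'` and is filtered out; here only person 20 (UK-only) survives.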
|
Could you please run analyze and execute this query?
```
-- females who visited UK
with uk_person as (
select distinct person_id
from t_last_city_visited t
inner join t_person p on t.person_id = p.id and 'F' = p.gender
where country_id = (select id from t_country where name = 'UK')
),
-- females who visited US
us_person as (
select distinct person_id
from t_last_city_visited t
inner join t_person p on t.person_id = p.id and 'F' = p.gender
where country_id = (select id from t_country where name = 'US')
)
-- females who visited UK but not US
select uk.person_id
from uk_person uk
left join us_person us on uk.person_id = us.person_id
where us.person_id is null
```
This is one of the many ways this query can be formed. You might have to run them to find out which one works best and indexing tweaks you may need to make to have them run faster.
|
PostgreSQL Select Join Not in List
|
[
"",
"sql",
"postgresql",
"join",
""
] |
There are tables `Employees`
```
CREATE TABLE Employees
(
id int NOT NULL IDENTITY(1, 1) PRIMARY KEY,
name nvarchar(100) NOT NULL,
depID int NOT NULL,
salary money NOT NULL,
FOREIGN KEY (depID) REFERENCES Departments(id)
);
```
and `Payments`
```
CREATE TABLE Payments
(
id int NOT NULL IDENTITY(1, 1) PRIMARY KEY,
userID int NOT NULL,
createdDate date DEFAULT GETDATE(),
sum money NOT NULL,
FOREIGN KEY (userID) REFERENCES Employees(id)
);
```
I need to get names of Employees with the top three salaries for the last two years.
I tried to use the query below, but it doesn't work and I got an error.
```
SELECT TOP 3 name
FROM Employees
WHERE id in (SELECT id, SUM(sum) as SumTotal FROM Payments
WHERE (createdDate BETWEEN '2015-09-01' AND '2013-09-01')
ORDER BY SumTotal);
```
Error message:
> The ORDER BY clause is invalid in views, inline functions, derived tables, subqueries, and common table expressions, unless TOP or FOR XML is also specified.
How to make it work?
|
People are way over-complicating things. I'm pretty sure this will get you what you want:
```
SELECT TOP 3 employees.id, Name,
Sum([sum]) AS [TotalPayments]
FROM Employees
inner join Payments on employees.id = payments.userid
WHERE createdDate BETWEEN '2013-09-01' and '2015-09-01'
Group By employees.id, Name
Order BY TotalPayments DESC
```
[SqlFiddle](http://www.sqlfiddle.com/#!3/71773e) to test it
If you want just the names column, you could wrap that query with another select:
```
select Name from (
SELECT TOP 3 employees.id, Name,
Sum([sum]) AS [TotalPayments]
FROM Employees
inner join Payments on employees.id = payments.userid
WHERE createdDate BETWEEN '2013-09-01' and '2015-09-01'
Group By employees.id, Name
Order BY TotalPayments DESC
) q
```
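SQLite has no `TOP`, but `ORDER BY ... LIMIT` expresses the same idea, which makes the join/group/sum shape easy to test from Python (the payment figures are invented, and the question's `sum` column is renamed `amount` here to sidestep the reserved word):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Employees (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE Payments (userID INTEGER, createdDate TEXT, amount REAL);
INSERT INTO Employees VALUES (1, 'Ann'), (2, 'Bob'), (3, 'Cy'), (4, 'Dee');
INSERT INTO Payments VALUES
  (1, '2014-01-01', 100), (1, '2014-06-01', 50),
  (2, '2014-02-01', 300),
  (3, '2014-03-01', 200),
  (4, '2014-04-01', 10);
""")
rows = con.execute("""
    SELECT e.id, e.name, SUM(p.amount) AS TotalPayments
    FROM Employees e
    JOIN Payments p ON e.id = p.userID
    WHERE p.createdDate BETWEEN '2013-09-01' AND '2015-09-01'
    GROUP BY e.id, e.name
    ORDER BY TotalPayments DESC
    LIMIT 3
""").fetchall()
```

Ann's two payments are summed to 150, so the top three totals come out as Bob (300), Cy (200), Ann (150), and Dee is cut off by the `LIMIT`.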
|
This is one way to do it using `cte`s.
[Demo](http://www.sqlfiddle.com/#!3/71773e/24)
```
with pymt as
(
SELECT userid, sum(sum) as sumtotal
FROM Payments
WHERE createdDate BETWEEN '2013-09-01' AND '2015-09-01'
group by userid
)
, ename as
(
select e.name, pymt.sumtotal, row_number() over(order by pymt.sumtotal desc) as rn
from pymt join employees e
on pymt.userid = e.id
)
select name
from ename
where rn <= 3;
```
|
SELECT Employees with the top three salaries for the last two years
|
[
"",
"sql",
"sql-server",
""
] |
I created 3 mysql tables:
table 'city': contains cities
```
i.e | name |
+========+
| Athens |
+--------+
| Rome |
+--------+
| Paris |
+--------+
```
table 'category': contains categories
```
i.e | name |
+============+
| category_1 |
+------------+
| category_2 |
+------------+
```
table 'shops': contains shops
```
i.e | name | category_name | city_name |
+========+=================+=============+
| shop_1 | category_2 | Rome |
+--------+-----------------+-------------|
| shop_2 | category_2 | Rome |
+--------+-----------------+-------------+
| shop_3 | category_1 | Paris |
+--------+-----------------+-------------+
```
I'm trying to create a single query that will allow me to create the following html array(that shows how many shops there are in each city grouped by category):
```
| || category_1 | category_2 |
+========++================+=============+
| Athens || 0 | 0 |
+--------++----------------+-------------|
| Rome || 0 | 2 |
+--------++----------------+-------------+
| Paris || 1 | 0 |
+--------++----------------+-------------+
```
Thank you in advance,
Giannis
|
Try this
```
select c.name,
sum(case when s.category_name='category_1' then 1 else 0 end) as category_1,
sum(case when s.category_name='category_2' then 1 else 0 end) as category_2
from city as c left join shops as s on c.name = s.city_name
group by c.name
```
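The conditional-aggregation pivot can be checked end to end with SQLite and the question's sample data (joining on `c.name = s.city_name`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE city (name TEXT);
CREATE TABLE shops (name TEXT, category_name TEXT, city_name TEXT);
INSERT INTO city VALUES ('Athens'), ('Rome'), ('Paris');
INSERT INTO shops VALUES
  ('shop_1', 'category_2', 'Rome'),
  ('shop_2', 'category_2', 'Rome'),
  ('shop_3', 'category_1', 'Paris');
""")
rows = con.execute("""
    SELECT c.name,
           SUM(CASE WHEN s.category_name = 'category_1' THEN 1 ELSE 0 END) AS category_1,
           SUM(CASE WHEN s.category_name = 'category_2' THEN 1 ELSE 0 END) AS category_2
    FROM city c LEFT JOIN shops s ON c.name = s.city_name
    GROUP BY c.name
    ORDER BY c.name
""").fetchall()
```

The LEFT JOIN keeps Athens in the output with zero counts even though it has no shops, which matches the desired HTML table.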
|
Look into this
[SQL select data from multiple tables](https://stackoverflow.com/questions/32248980/sql-select-data-from-multiple-tables/32249225)
|
Special connection between 3 tables
|
[
"",
"mysql",
"sql",
""
] |
I work on software that allows users to manage estates they want to sell. I have a database like this (simplified example):
```
estates
-------------
id
price
photos
-------------
photoName
estateId
documents
-------------
docName
estateId
```
I'd like to get, in one request, all the information for an estate (let's say id = 1), plus its photos and its documents, so I do something like this:
```
SELECT * FROM estates
LEFT OUTER JOIN photos ON photos.estateId = id
LEFT OUTER JOIN documents ON documents.estateId = id
WHERE id = 1
```
Which returns something like :
```
id | price | photoName | estateId | docName
----------------------------------------------
1 | 10000 | photo1.jpg| 1 | doc1.pdf
1 | 10000 | photo1.jpg| 1 | doc2.pdf
1 | 10000 | photo1.jpg| 1 | doc3.pdf
1 | 10000 | photo2.jpg| 1 | doc1.pdf
1 | 10000 | photo2.jpg| 1 | doc2.pdf
1 | 10000 | photo2.jpg| 1 | doc3.pdf
(assuming estate n°1 has 2 photos and 3 docs for the example)
```
Then, with a PHP script, I loop through the result to keep only the unique values and that's it: I loaded my estate with only one request (which is faster than launching 3 requests).
But that was only a simple example. In the real database, I have a LOT of other tables that I want to load, with a LOT of values inside. So many tables that the same type of request returns over 2,000,000 rows for a single estate. Which is, believe it or not, very slow to load ^^'.
So I was wondering if there were some kind of magic that could give me a result like this:
```
id | price | photoName | estateId | docName
----------------------------------------------
1 | 10000 | photo1.jpg| 1 | doc1.pdf
1 | 10000 | photo2.jpg| 1 | doc2.pdf
1 | 10000 | NULL* | 1 | doc3.pdf
*NULL or whatever you want, I don't care.
```
So: a request where the number of rows returned is the number of rows in the column with the most values (in the example, there are more docs than photos, so there are as many rows as docs).
Does something like this exist?
~ MetalFox Dioxymore
|
It is a bit more complicated and it uses variables, but the answer is "yes":
```
select e.*,
       max(photoname) as photoname, max(docname) as docname
from estates e join
((select p.estateid, p.photoname, NULL as docname, (@rnp := @rnp + 1) as seqnum
from photos p cross join
(select @rnp := 0) params
where p.estateid = 1
) union all
(select d.estateid, NULL, d.docname, (@rnd := @rnd + 1) as seqnum
from documents d cross join
(select @rnd := 0) params
where d.estateid = 1
)
) dp
     on dp.estateid = e.id
group by e.id, dp.seqnum
order by e.id, dp.seqnum;
```
This adds an enumerated value to the photos and documents for a given estate. These are then aggregated.
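Conceptually, the enumerate-then-aggregate step is just pairing the two lists by position and padding the shorter one with NULLs; a plain Python sketch of the same idea:

```python
from itertools import zip_longest

# Hypothetical photo and document lists for one estate
photos = ["photo1.jpg", "photo2.jpg"]
docs = ["doc1.pdf", "doc2.pdf", "doc3.pdf"]

# seqnum pairing: row i carries the i-th photo and the i-th doc,
# with None (SQL NULL) once the shorter list runs out
paired = list(zip_longest(photos, docs))
```

Each tuple in `paired` corresponds to one output row of the grouped query, with `None` where the SQL result would show NULL.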
|
You can even reduce this to one row per estate by simple aggregation.
```
SELECT
e.*,
group_concat(distinct p.photoname) as photonames,
group_concat(distinct d.docname) as docnames
FROM estates e
LEFT OUTER JOIN photos p ON p.estateId = e.id
LEFT OUTER JOIN documents d ON d.estateId = e.id
WHERE e.id = 1
GROUP BY e.id;
```
This gives you strings such as `'photo1.jpg,photo2.jpg'` and `'doc1.pdf,doc2.pdf,doc3.pdf'`. Use PHP's `explode` function to get arrays from the strings.
As to performance: The above produces a large intermediate result set due to joining all photos with all docs etc. So using subqueries instead may be faster. And the use of DISTINCT is no longer needed - a good indicator for a more straight-forward way to retrieve the data.
```
SELECT
e.*,
(select group_concat(p.photoname) from photos p where p.estateId = e.id) as photonames,
(select group_concat(d.docname) from documents d where d.estateId = e.id) as docnames
FROM estates e
WHERE e.id = 1;
```
One more remark: With many photos and docs or large file names, you may need to extend the maximum allowed length for GROUP\_CONCAT:
```
SET SESSION group_concat_max_len = 10000;
```
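SQLite supports `GROUP_CONCAT` too, so the whole round trip (concatenate in SQL, split in application code) can be sketched in Python, with `str.split(',')` standing in for PHP's `explode`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE estates (id INTEGER PRIMARY KEY, price INTEGER);
CREATE TABLE photos (photoName TEXT, estateId INTEGER);
CREATE TABLE documents (docName TEXT, estateId INTEGER);
INSERT INTO estates VALUES (1, 10000);
INSERT INTO photos VALUES ('photo1.jpg', 1), ('photo2.jpg', 1);
INSERT INTO documents VALUES ('doc1.pdf', 1), ('doc2.pdf', 1), ('doc3.pdf', 1);
""")
row = con.execute("""
    SELECT e.id, e.price,
           (SELECT GROUP_CONCAT(p.photoName) FROM photos p WHERE p.estateId = e.id),
           (SELECT GROUP_CONCAT(d.docName) FROM documents d WHERE d.estateId = e.id)
    FROM estates e WHERE e.id = 1
""").fetchone()
photo_names = row[2].split(",")   # explode() equivalent
doc_names = row[3].split(",")
```

One row per estate comes back, and the application splits the concatenated columns into arrays.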
|
LEFT JOIN : is there a way to reduce the line numbers?
|
[
"",
"mysql",
"sql",
"left-join",
""
] |
EDIT: Thanks guys. It was truly just a formatting error of the single-quotation mark from the source code I copied. Thanks a lot!
**Codes:**
```
USE Library;
INSERT INTO myLibrary VALUES (
βSQL Bibleβ
,βAlex Kriegelβ
,βBoris M. Trukhnovβ
,βWileyβ
,888
,βApril 7,2008β
,β978-0470229064β
,βEnglishβ
);
```
**Output:**
```
Msg 102, Level 15, State 1, Line 3
Incorrect syntax near 'β'.
```
**Question**
What is the problem here? I am new to SQL. Thanks in advance!
|
In SQL, strings are defined with the `'` characters, not `β` and `β`
|
Looks like you're using the wrong character to encapsulate your strings. Instead of the `β` character, you need to use either a `'` or a `"`:
```
USE Library;
INSERT INTO myLibrary VALUES (
"SQL Bible"
,"Alex Kriegel"
,"Boris M. Trukhnov"
,"Wiley"
,888
,"April 7,2008"
,"978-0470229064"
,"English"
);
```
|
SQL - populating table
|
[
"",
"sql",
"sql-server",
""
] |
When a user inserts a Russian word like 'пример' into the database, the database saves it as '??????'. If they insert it with the 'N' prefix, or I select it with the 'N' prefix, e.g. `exec Table_Name N'ΠΈΡΡΠ°Π½ΠΈΠ΅'`, there is no problem. But I don't want to use 'N' in every query, so is there any solution for this? I will be using stored procedures, by the way.
**UPDATE:**
Now I can use Russian letters after altering the collation. But I can't alter the collation for every language, and I just want to know whether there is any trigger or function that automatically adds N in front of the text on insert. I.e., when I insert 'пример', SQL should treat it as N'пример' automatically.
|
You have to use the `NVARCHAR` datatype for the column to store Unicode letters, and you also have to use `N'value'` when inserting.
You can test it in following:
```
CREATE TABLE #test
(
varcharCol varchar(40),
nvarcharCol nvarchar(40)
)
INSERT INTO #test VALUES (N'ΠΈΡΡΠ°Π½ΠΈΠ΅', N'ΠΈΡΡΠ°Π½ΠΈΠ΅')
SELECT * FROM #test
```
**OUTPUT**
```
varcharCol nvarcharCol
??????? ΠΈΡΡΠ°Π½ΠΈΠ΅
```
As you can see, the `varchar` column returns question marks `??????`, while the `nvarchar` column returns the Russian characters `ΠΈΡΡΠ°Π½ΠΈΠ΅`.
---
**UPDATE**
The problem is that your database collation does not support Russian letters.
1. In Object Explorer, connect to an instance of the SQL Server Database Engine, expand that instance, and then expand Databases.
2. Right-click the database that you want and click Properties.
3. Click the Options page, and select a collation from the Collation
drop-down list.
4. After you are finished, click OK.
`MORE INFO`
|
It would be very difficult to put this in a comment, so I would recommend this link: [Info](https://dba.stackexchange.com/questions/12475/n-prefix-before-string-in-transact-sql-query)
```
declare @test TABLE
(
Col1 varchar(40),
Col2 varchar(40),
Col3 nvarchar(40),
Col4 nvarchar(40)
)
INSERT INTO @test VALUES
('ΠΈΡΡΠ°Π½ΠΈΠ΅',N'ΠΈΡΡΠ°Π½ΠΈΠ΅','ΠΈΡΡΠ°Π½ΠΈΠ΅',N'ΠΈΡΡΠ°Π½ΠΈΠ΅')
SELECT * FROM @test
```
**RESULT**
[](https://i.stack.imgur.com/ghdfg.png)
|
Select cyrillic character in SQL
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have the following SQL Server tables.
```
CREATE TABLE workingSchedule
(
[workingDate] DATETIME NULL,
[openTime] TIME (7) NULL,
[closeTime] TIME (7) NULL
);
INSERT INTO workingSchedule
(workingDate, openTime, closeTime)
VALUES
('10/1/2015','9:00','17:00'),
('10/2/2015','9:00','17:00');
CREATE TABLE breakTable
(
[breakDate] DATETIME NULL,
[breakStart] TIME NULL,
[breakEnd] TIME NULL
);
INSERT INTO breakTable
(breakDate, breakStart, breakEnd)
VALUES
('10/1/2015','12:00','13:00'),
('10/1/2015','15:00','15:30'),
('10/2/2015','12:00','13:00');
```
I am trying to split the time intervals in [workingSchedule] into several rows considering the [breakTable]. The result I desire looks like this:
```
Date Start End
10/1/2015 09:00 12:00
10/1/2015 13:00 15:00
10/1/2015 15:30 17:00
10/2/2015 09:00 12:00
10/2/2015 13:00 17:00
```
I am not sure if I should use CTE, a function, or temporary tables. I appreciate if you can share your solution code. I was able to split time interval when there is only one break, but I failed when it came to multiple break times per day.
|
In situations like this, I use ROW\_NUMBER (<https://msdn.microsoft.com/en-us/library/ms186734.aspx>) to help join splits inside a single day. I used a UNION, but I think you could use the same join logic I have, but get away with a SELECT statement.
```
SELECT workingDate as [date], openTime as [Start], COALESCE(breakStart, closeTime) as [End]
FROM workingSchedule
LEFT JOIN (
SELECT breakDate, breakStart, breakEnd, ROW_NUMBER() OVER (PARTITION BY breakDate ORDER BY breakStart) AS ROWNUM
FROM breakTable
) as firstBreak ON workingSchedule.workingDate = firstBreak.breakDate AND firstBreak.ROWNUM = 1
UNION
SELECT breakStart.breakDate, breakStart.breakEnd, coalesce(breakEnd.breakStart, endTime.closeTime)
FROM (
SELECT breakDate, breakStart, breakEnd, ROW_NUMBER() OVER (PARTITION BY breakDate ORDER BY breakStart) AS ROWNUM
FROM breakTable
) as breakStart
LEFT JOIN (
SELECT breakDate, breakStart, breakEnd, ROW_NUMBER() OVER (PARTITION BY breakDate ORDER BY breakStart) AS ROWNUM
FROM breakTable
) as breakEnd ON breakStart.breakDate = breakEnd.breakDate AND breakStart.ROWNUM = breakEnd.ROWNUM - 1
LEFT JOIN (
SELECT workingDate, closeTime
FROM workingSchedule
) AS endTime ON breakStart.breakDate = endTime.workingDate
```
The idea here is to pull the start time, and the first break if there is one. If there is no break, the COALESCE will pull the closeTime instead. We then union on the breaks throughout the day. Finally, we join the closeTime onto the final break, again using COALESCE to use the closeTime when there is no "breakEnd.breakStart".
Here's a SQL Fiddle of it in action: <http://sqlfiddle.com/#!6/5a4765/14>
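The underlying interval arithmetic (carving each day's breaks out of the open window) can also be stated directly; a small Python sketch using the question's sample times:

```python
def split_day(open_t, close_t, breaks):
    """Yield (start, end) work segments after removing sorted, non-overlapping breaks."""
    segments, cursor = [], open_t
    for b_start, b_end in sorted(breaks):
        if b_start > cursor:
            segments.append((cursor, b_start))  # work segment before this break
        cursor = max(cursor, b_end)             # resume after the break
    if cursor < close_t:
        segments.append((cursor, close_t))      # tail segment up to closing time
    return segments

# HH:MM strings compare correctly lexicographically
day1 = split_day("09:00", "17:00", [("12:00", "13:00"), ("15:00", "15:30")])
day2 = split_day("09:00", "17:00", [("12:00", "13:00")])
```

`day1` and `day2` reproduce the desired rows for 10/1 and 10/2 respectively.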
|
You can try this:
```
DECLARE @Tab TABLE
(
[Date] [datetime],
[Time] [time],
[Row] int
)
DECLARE @Tab2 TABLE
(
[Date] [datetime],
[Row] int
)
INSERT INTO @Tab2
SELECT DISTINCT(workingDate) , ROW_NUMBER() OVER(ORDER BY [workingDate])
FROM workingSchedule
DECLARE @Count int;
DECLARE @Num [int];
DECLARE @Dat [datetime];
SET @Num=1;
SET @Count=(SELECT count(*) FROM @Tab2 t)
WHILE @Num<=@Count
BEGIN
SET @Dat=(SELECT [Date] FROM @Tab2 t WHERE t.[Row]=@Num)
INSERT INTO @Tab
SELECT * , ROW_NUMBER() OVER( ORDER BY Cte4.breakStart) Number FROM (
SELECT * FROM(
SELECT Cte.breakDate, Cte.breakStart FROM
(SELECT * FROM breakTable
UNION ALL
SELECT * FROM workingSchedule
)Cte
WHERE Cte.breakDate=@Dat
)Cte2
UNION ALL
SELECT * FROM(
SELECT Cte.breakDate, Cte.breakEnd FROM
(SELECT * FROM breakTable
UNION ALL
SELECT * FROM workingSchedule
)Cte
WHERE Cte.breakDate=@Dat
)Cte3
)Cte4
ORDER BY Cte4.breakStart
SET @Num=@Num+1;
END
;WITH Cte AS
(
SELECT * FROM @Tab t
WHERE t.[Row]%2=0
),
Cte2 AS
(
SELECT * FROM @Tab t
WHERE t.[Row]%2=1
)
SELECT DISTINCT Cte.[Date],Cte2.[Time] AS Start, Cte.[Time] AS [End] FROM Cte
INNER JOIN
Cte2
ON
(Cte2.[Row]+1)=Cte.[Row]
```
|
SQL Server split time interval to multiple rows
|
[
"",
"sql",
"sql-server",
"time",
"split",
""
] |
I have table with the following column:
```
[name_of_pos] varchar,
[date_from] datetime,
[date_to] datetime
```
Below is my sample data:
```
name_of_pos date_from date_to
----------------------------------------------------------------
Asystent 2015-08-26 08:57:49.000 2015-09-04 08:57:49.000
Biuro 2015-09-01 08:53:32.000 2015-09-01 08:53:32.000
Biuro 2015-09-02 09:00:41.000 2015-09-02 09:00:41.000
Biuro 2015-09-03 11:46:03.000 2015-09-03 11:46:03.000
Biuro 2015-09-10 09:02:11.000 2015-09-15 09:02:11.000
Koordynator 2015-09-01 09:04:06.000 2015-09-01 09:04:06.000
Projektant 2015-08-31 08:59:46.000 2015-09-01 08:59:46.000
Projektant 2015-09-02 08:00:54.000 2015-09-02 08:00:54.000
Projektant 2015-09-14 12:34:50.000 2015-09-14 12:34:50.000
```
What I want to return is the date range (min of `date_from` to max of `date_to`) for each `name_of_pos`, but only where the date values are continuous (the time part is not important and can be ignored in result).
The desired output would be:
```
name_of_pos date_from date_to
------------------------------------
Asystent 2015-08-26 2015-09-04
Biuro 2015-09-01 2015-09-03
Biuro 2015-09-10 2015-09-15
Koordynator 2015-09-01 2015-09-01
Projektant 2015-08-31 2015-09-02
Projektant 2015-09-14 2015-09-14
```
I tried a solution using something similar to this question:
[How do I group on continuous ranges](https://stackoverflow.com/questions/5662545/how-do-i-group-on-continuous-ranges)
But had no luck as I have two datetime columns.
|
Here's a solution using a `cte` to iterate over the rows (after they have been ordered) and check for consecutive days before grouping:
```
-- dummy table
CREATE TABLE #TableA
(
[name_of_pos] VARCHAR(11) ,
[date_from] DATETIME ,
[date_to] DATETIME
);
-- insert dummy data
INSERT INTO #TableA
( [name_of_pos], [date_from], [date_to] )
VALUES ( 'Asystent', '2015-08-26 08:57:49', '2015-09-04 08:57:49' ),
( 'Biuro', '2015-09-01 08:53:32', '2015-09-01 08:53:32' ),
( 'Biuro', '2015-09-02 09:00:41', '2015-09-02 09:00:41' ),
( 'Biuro', '2015-09-03 11:46:03', '2015-09-03 11:46:03' ),
( 'Biuro', '2015-09-10 09:02:11', '2015-09-15 09:02:11' ),
( 'Koordynator', '2015-09-01 09:04:06', '2015-09-01 09:04:06' ),
( 'Projektant', '2015-08-31 08:59:46', '2015-09-01 08:59:46' ),
( 'Projektant', '2015-09-02 08:00:54', '2015-09-02 08:00:54' ),
( 'Projektant', '2015-09-14 12:34:50', '2015-09-14 12:34:50' );
-- new temp table used to add row numbers for data order
SELECT name_of_pos, CAST(date_from AS DATE) date_from, CAST(date_to AS DATE) date_to,
ROW_NUMBER() OVER ( ORDER BY name_of_pos, date_from ) rn
INTO #temp
FROM #TableA
-- GroupingColumn in cte used to identify and group consecutive dates
;WITH cte
AS ( SELECT name_of_pos ,
date_from ,
date_to ,
1 AS GroupingColumn ,
rn
FROM #temp
WHERE rn = 1
UNION ALL
SELECT t2.name_of_pos ,
t2.date_from ,
t2.date_to ,
CASE WHEN t2.date_from = DATEADD(day, 1, cte.date_to)
AND cte.name_of_pos = t2.name_of_pos
THEN cte.GroupingColumn
ELSE cte.GroupingColumn + 1
END AS GroupingColumn ,
t2.rn
FROM #temp t2
INNER JOIN cte ON t2.rn = cte.rn + 1
)
SELECT name_of_pos, MIN(date_from) AS date_from, MAX(date_to) AS date_to
FROM cte
GROUP BY name_of_pos, GroupingColumn
DROP TABLE #temp
DROP TABLE #TableA
```
Produces your desired output:
```
name_of_pos date_from date_to
Asystent 2015-08-26 2015-09-04
Biuro 2015-09-01 2015-09-03
Biuro 2015-09-10 2015-09-15
Koordynator 2015-09-01 2015-09-01
Projektant 2015-08-31 2015-09-02
Projektant 2015-09-14 2015-09-14
```
|
This is a **gaps and islands** issue. This is the *tuned [official](http://social.technet.microsoft.com/wiki/contents/articles/18399.t-sql-gaps-and-islands-problem.aspx)* way to accomplish it and this would be checked as solution:
```
;with
cte as (
SELECT *,
dateadd( day,
- (ROW_NUMBER() OVER (
partition by name_of_pos
ORDER BY t.date_from
) + -- here starts tuned part --
isnull(
sum( datediff(day, date_from, date_to ) ) OVER (
partition by name_of_pos
ORDER BY t.date_from
ROWS BETWEEN UNBOUNDED PRECEDING and 1 PRECEDING
) ,0) -- here ends tuned part --
),
date_from
) as Grp
FROM t
)
SELECT name_of_pos
,min(date_from) AS date_from
,max(date_to) AS date_to
FROM cte
GROUP BY name_of_pos, Grp
ORDER BY name_of_pos, date_from
```
Here [tested on sqlfiddle](http://sqlfiddle.com/#!3/4b738c/7/0) (with some few different sample data).
|
Grouping by consecutive dates in SQL Server
|
[
"",
"sql",
"sql-server",
"t-sql",
"gaps-and-islands",
""
] |
I need to compare two tables and get the pn's that are not done or non-existent.
I can't seem to find an answer online anywhere for exactly what I need. Here are the tables and the example output that I need. Thank you so much to whoever can help me out.
Table1:
```
+------------+----------+------------+----+---------------------+
| cert | job | pcmk | pn | stat |
+------------+----------+------------+----+---------------------+
| MF21600001 | 6216 | A148 | 1 | 2015-08-14 13:20:29 |
| MF21600001 | 6216 | A148 | 2 | |
+------------+----------+------------+----+---------------------+
```
Table2:
```
+-----------+----------+----+
| job | pcmk | pn |
+-----------+----------+----+
| 6216 | A148 | 1 |
| 6216 | A148 | 2 |
| 6216 | A148 | 3 |
+-----------+----------+----+
```
Example output for rows in Table2 that are not in Table1 or status = blank/NULL:
```
+------------+------+------+----+
| cert | job | pcmk | pn |
+------------+------+------+----+
| MF21600001 | 6216 | A148 | 2 |
| MF21600001 | 6216 | A148 | 3 |
+------------+------+------+----+
```
OK, I took the first idea and played with it a bit.
```
SELECT Table1.cert, pninput.job, pninput.pcmk, pninput.pn, pn.stat
FROM Table1, Table2
WHERE NOT EXISTS (SELECT *
FROM Table1
WHERE Table1.pcmk = Table2.pcmk AND Table1.job = Table2.job AND Table1.stat = '')
```
However, now every cert gets paired up with every pcmk and job, even though each cert can only have 1 job and 1 pcmk, and it also takes 35+ seconds to run.
|
Clarification: one job can have multiple `pcmk` values, and the comparison should be done for each `pcmk`.
```
SELECT t1.cert,t2.job,t2.pcmk,t2.pn
FROM table1 t1
join table2 t2 on (t1.pcmk=t2.pcmk)
WHERE (t1.pn=t2.pn and (t1.stat="" or t1.stat is null))
or t2.pn NOT IN(
SELECT pn
FROM table1
WHERE table1.pcmk=t2.pcmk)
group by t2.pcmk,t2.pn;
```
Assumption: the query is for a single `job`. For multiple jobs, add a `job` check in the `where` clause, the `on` clause, and the `group by` as well.
|
If I rephrase your condition, you want the rows from `table2` that don't have a corresponding row in `table1` with a non-`null` status. This sounds like an `exists` condition:
```
SELECT *
FROM table2
WHERE NOT EXISTS (SELECT *
FROM table1
WHERE table1.pn = table2.pn AND stat IS NOT NULL)
```
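With SQLite standing in for MySQL (tables shortened to `t1`/`t2`, and the blank status stored as NULL), the query reproduces the desired `pn` values from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t1 (cert TEXT, job INTEGER, pcmk TEXT, pn INTEGER, stat TEXT);
CREATE TABLE t2 (job INTEGER, pcmk TEXT, pn INTEGER);
INSERT INTO t1 VALUES
  ('MF21600001', 6216, 'A148', 1, '2015-08-14 13:20:29'),
  ('MF21600001', 6216, 'A148', 2, NULL);
INSERT INTO t2 VALUES (6216, 'A148', 1), (6216, 'A148', 2), (6216, 'A148', 3);
""")
rows = con.execute("""
    SELECT * FROM t2
    WHERE NOT EXISTS (SELECT * FROM t1
                      WHERE t1.pn = t2.pn AND t1.stat IS NOT NULL)
""").fetchall()
```

pn 1 is excluded (it has a dated status), pn 2 passes because its status is NULL, and pn 3 passes because it has no `t1` row at all.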
|
How to select records that are not in another table?
|
[
"",
"mysql",
"sql",
"select",
""
] |
I have this query:
```
SELECT t_ticket.ticketID, t_ticket.addedDate, t_ticket.question,
t_ticket.code, t_ticket.priority, t_actionTicket.addedDateAction, t_actionTicket.title
FROM t_actionTicket INNER JOIN
t_ticket ON t_actionTicket.ticketID_FK = t_ticket.ticketID INNER JOIN
(SELECT ticketID_FK, MAX(addedDateAction) AS maxDate
FROM t_actionTicket AS t_actionTicket_1
WHERE (t_actionTicket.userID_FK <> @userid)
GROUP BY ticketID_FK) AS b ON t_actionTicket.ticketID_FK = b.ticketID_FK AND t_actionTicket.addedDateAction = b.maxDate
WHERE (t_ticket.supporterID_FK IN
(SELECT supporterID
FROM t_Supporter
WHERE (userID_FK = @userid)))
```
I want to return just the latest record in the t\_actionTicket table for each row in the t\_ticket table, where t\_actionTicket.userID\_FK <> @userid.
but I have this error:
> The multi-part identifier "t\_actionTicket.userID\_FK" could not be
> bound.
|
The problem in your query is
```
FROM t_actionTicket AS t_actionTicket_1
WHERE t_actionTicket.userID_FK <> @userid -- here
```
You cannot reference the `t_actionTicket` alias inside the `INNER JOIN`'s derived-table query; you need to use `t_actionTicket_1` there. Referencing an outer alias like that is only possible in a correlated sub-query.
Try this better way of doing it
```
;WITH cte
AS (SELECT t_ticket.ticketID,
t_ticket.addedDate,
t_ticket.question,
t_ticket.code,
t_ticket.priority,
t_actionTicket.addedDateAction,
t_actionTicket.title,
Row_number()
OVER(
partition BY ticketID_FK
ORDER BY addedDateAction DESC) RN
FROM t_actionTicket
INNER JOIN t_ticket
ON t_actionTicket.ticketID_FK = t_ticket.ticketID
WHERE t_ticket.supporterID_FK IN (SELECT supporterID
FROM t_Supporter
WHERE userID_FK = @userid))
SELECT *
FROM cte
WHERE rn = 1
```
|
You can write this logic using `row_number()` instead of additional nested queries:
```
SELECT t.ticketID, t.addedDate, t.question, t.code, t.priority,
ta.addedDateAction, ta.title AS Expr1
FROM t_Ticket t INNER JOIN
(SELECT ta.*,
ROW_NUMBER() OVER (PARTITION BY ta.ticketID_FK ORDER BY ta.addedDateAction DESC) as seqnum
FROM t_actionTicket ta
) ta
ON t.ticketId = ta.ticketId_FK and ta.seqnum = 1
WHERE t.supporterID_FK IN (SELECT supporterID
FROM t_Supporter
WHERE userID_FK = @userid
);
```
Note that table aliases make the query easier to write and to read.
|
Join two tables returning only one row from the second table
|
[
"",
"sql",
"sql-server",
"join",
""
] |
I have a table with data every hour:
[](https://i.stack.imgur.com/Qb7qm.png)
and a table with data every 30 minutes:
[](https://i.stack.imgur.com/9MOEZ.png)
I would like to join these two tables (by date) to get batVolt and TA in the same table, repeating the batVolt value for the 30-minute readings within each hour.
|
```
SELECT *
FROM HourTable t
INNER JOIN HalfHourTable ht
ON CAST(t.repDate AS Date) = CAST(ht.repDate AS Date)
AND DATEPART(HOUR, t.repDate) = DATEPART(HOUR, ht.repDate)
```
## Edit
Your query should be
```
SELECT n.repDate
, n.TA
, a.batVolt
FROM [DAP].[dbo].[ARRMet] AS n
FULL JOIN [DAP].[dbo].[array3] AS a
ON DATEPART(HOUR, n.repDate) = DATEPART(HOUR, a.repDate)
AND CAST(n.repDate AS DATE) = CAST(a.repDate AS DATE)
WHERE CAST(n.repDate AS DATE) = '20150831'
ORDER BY n.repDate DESC
```
|
I would do this slightly differently than M.Ali. This uses fewer functions and seems a bit simpler to me.
```
SELECT *
FROM HourTable t
INNER JOIN HalfHourTable ht on
t.repDate = dateadd(hour, datediff(hour, 0, ht.repDate), 0)
```
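Both answers reduce to "truncate the half-hour timestamp to the hour, then join". A sketch of the same idea in SQLite via Python's `sqlite3`, using `strftime` for the truncation; the sample rows are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE HourTable (repDate TEXT, batVolt REAL);
CREATE TABLE HalfHourTable (repDate TEXT, TA REAL);
INSERT INTO HourTable VALUES ('2015-08-31 09:00:00', 12.1);
INSERT INTO HalfHourTable VALUES
  ('2015-08-31 09:00:00', 20.5), ('2015-08-31 09:30:00', 21.0);
""")
# truncate the half-hour timestamp to the top of its hour, then join;
# batVolt is repeated for each 30-minute row in that hour
rows = conn.execute("""
SELECT ht.repDate, ht.TA, t.batVolt
FROM HourTable t
JOIN HalfHourTable ht
  ON t.repDate = strftime('%Y-%m-%d %H:00:00', ht.repDate)
ORDER BY ht.repDate
""").fetchall()
```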
|
Join data from hourly and 30 minute tables
|
[
"",
"sql",
"sql-server",
"database",
""
] |
SQL keeps throwing me a syntax error and I can't seem to figure out what's wrong.
```
ALTER TABLE `productList` ALTER COLUMN `ASIN` VARCHAR(32);
```
The error is the normal sql syntax error
> ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'VARCHAR(32)' at line 1
|
Try this:
```
ALTER TABLE `productList` MODIFY COLUMN `ASIN` VARCHAR(32);
```
The syntax to [change a column's definition](http://dev.mysql.com/doc/refman/5.1/en/alter-table.html) is
> ALTER TABLE tablename MODIFY COLUMN column\_name new-column-definition
|
You need to use `MODIFY COLUMN` instead of `ALTER COLUMN` if you want to change the column definition; in MySQL, `ALTER COLUMN` can only set or drop a column's default value.
<https://dev.mysql.com/doc/refman/5.1/en/alter-table.html>
|
Mysql syntax error in ALTER TABLE
|
[
"",
"mysql",
"sql",
""
] |
I have a single SQL table that contains multiple entries for each customerID (some customerID's only have one entry which I want to keep). I need to remove all but the most recent entry per customerID, using the invoiceDate field as my marker.
So I need to go from this:
```
+------------+-------------+-----------+
| customerID | invoiceDate | invoiceID |
+------------+-------------+-----------+
| 1 | 1393995600 | xx |
| 1 | 1373688000 | xx |
| 1 | 1365220800 | xx |
| 2 | 1265220800 | xx |
| 2 | 1173688000 | xx |
| 3 | 1325330800 | xx |
+------------+-------------+-----------+
```
To this:
```
+------------+-------------+-----------+
| customerID | invoiceDate | invoiceID |
+------------+-------------+-----------+
| 1 | 1393995600 | xx |
| 2 | 1265220800 | xx |
| 3 | 1325330800 | xx |
+------------+-------------+-----------+
```
Any guidance would be greatly appreciated!
|
1. Write a query to select all the rows you want to delete:
```
SELECT * FROM t
WHERE invoiceDate NOT IN (
SELECT MAX(invoiceDate)
-- "FROM t AS t2" isn't supported by MySQL, see http://stackoverflow.com/a/14302701/227576
FROM (SELECT * FROM t) AS t2
WHERE t2.customerId = t.customerId
GROUP BY t2.customerId
)
```
This may take a long time on a big database.
2. If you're satisfied, change the query to a DELETE statement:
```
DELETE FROM t
WHERE invoiceDate NOT IN (
SELECT MAX(invoiceDate)
-- "FROM t AS t2" isn't supported by MySQL, see http://stackoverflow.com/a/14302701/227576
FROM (SELECT * FROM t) AS t2
WHERE t2.customerId = t.customerId
GROUP BY t2.customerId
)
```
See <http://sqlfiddle.com/#!9/6e031/1>
If you have multiple rows whose date is the most recent for the same customer, you would have to look for duplicates and decide which one you want to keep yourself. For instance, look at customerId 2 on the SQL fiddle link above.
|
Try out this one (note: `DELETE` through a CTE like this is SQL Server syntax; MySQL does not support it):
```
with todelete as
(
    select CustomerId, InvoiceId, InvoiceDate,
           Row_Number() over (partition by CustomerId order by InvoiceDate desc) as rn
    from DeleteDuplicate
)
delete from todelete
where rn > 1
```
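The same "keep only the newest row per customer" delete can be tried safely in SQLite (3.25+ for window functions) via Python's `sqlite3`, using the question's sample data and SQLite's implicit `rowid`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE invoices (customerID INTEGER, invoiceDate INTEGER, invoiceID TEXT);
INSERT INTO invoices (customerID, invoiceDate, invoiceID) VALUES
  (1, 1393995600, 'a'), (1, 1373688000, 'b'), (1, 1365220800, 'c'),
  (2, 1265220800, 'd'), (2, 1173688000, 'e'),
  (3, 1325330800, 'f');
""")
# delete everything except the row with the latest invoiceDate per customer
conn.execute("""
DELETE FROM invoices
WHERE rowid NOT IN (
    SELECT rowid FROM (
        SELECT rowid,
               ROW_NUMBER() OVER (PARTITION BY customerID
                                  ORDER BY invoiceDate DESC) AS rn
        FROM invoices)
    WHERE rn = 1)
""")
rows = conn.execute(
    "SELECT customerID, invoiceID FROM invoices ORDER BY customerID").fetchall()
```

Only the most recent invoice per customer survives the delete.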
|
Deleting all but the most recent entry from single SQL table
|
[
"",
"mysql",
"sql",
""
] |
I have a query like this:
```
select F_Exhibition_Name, F_dtFrom as startdate, F_dtTo as enddate
from T_Exhibition
```
My output looks like this:
```
Exhibiton startdate enddate
A 2015-05-04 2015-05-21
B 2015-06-10 2015-06-20
C 2015-07-10 2015-09-11
```
I want to get an exhibition's name only while the current date is no more than 7 days after its end date.
|
The query below keeps showing an event until 7 days after its end date, comparing against the current date with `GETDATE()`:
```
select F_Exhibition_Name
from T_Exhibition
where GETDATE() <= DATEADD(day, 7, enddate)
```
For example:
**Case 1:**
```
Event end date : 7
Current date   : 1
Show event till: 7 + 7 = 14
So it will be displayed on 1, as 1 <= 14
```
**Case 2:**
```
Event end date : 8
Current date   : 18
Show event till: 8 + 7 = 15
So it will not be displayed, as 18 > 15
```
Hope that sheds light on the issue.
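The condition boils down to "today is no later than end date + 7 days". A tiny Python sketch of that rule (the function name and dates are mine, for illustration):

```python
from datetime import date, timedelta

def still_visible(end_date: date, today: date) -> bool:
    # show the exhibition until 7 days after its end date
    return today <= end_date + timedelta(days=7)

# end date 2015-05-21: visible on the 28th, gone on the 29th
assert still_visible(date(2015, 5, 21), date(2015, 5, 28))
assert not still_visible(date(2015, 5, 21), date(2015, 5, 29))
```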
|
Please use the query below:
```
select F_Exhibition_Name, F_dtFrom as startdate,
       CASE WHEN CONVERT(date, F_dtTo) = CONVERT(date, GETDATE())
            THEN DATEADD(DAY, 7, F_dtTo)
            ELSE F_dtTo
       END as enddate
from T_Exhibition
```
|
Get data comparing endate with current date
|
[
"",
"sql",
"sql-server",
""
] |
Using Postgres 8.4. I have a table of user activity that looks kinda like this:
```
userid | timestamp | action
---------------------------------------------
0001 | 11/11/2015 9:00:02 | X
0001 | 11/11/2015 9:00:22 | Y
0002 | 11/11/2015 9:01:02 | Z
0002 | 11/11/2015 9:03:02 | W
0003 | 11/11/2015 9:04:02 | X
0004 | 11/11/2015 9:05:02 | Y
```
What I need to do is find counts of the number of users that performed a series of actions **X** and then **Y**
or **X** and then **Y** and then **Z** with a count of how many users made it to the next step.
So I have as input an ordered set of actions, and I want a count of how many users got through each of those actions in order (1st step: action 1, 2nd step: action 2, and so on).
I'm trying to get a results like
```
step | action | count
=======================
1 | X | 100 <---- 100 users did X
2 | Y | 55 <-----55 did X and then Y (45 dropped away)
3 | Z | 12 <-----12 did X and then Y and then Z (43 more dropped)
```
As you can see, the count is always decreasing: 100 users did X; of those, 55 did Y, and of those, 12 did Z.
How can I achieve this?
|
The simplest solution will probably be to use a `LEFT JOIN` of the table with itself:
```
WITH actions(action) AS(
VALUES ('X'),('Y'),('Z'))
SELECT d.action
,Count(DISTINCT a.userid)
FROM table1 as a
LEFT JOIN table1 AS b
ON a.userid = b.userid AND b.action = 'Y' AND a.timestamp < b.timestamp
LEFT JOIN table1 AS c
ON a.userid = c.userid AND c.action = 'Z' AND b.timestamp < c.timestamp
JOIN actions AS d
ON d.action IN (a.action, b.action, c.action)
WHERE a.action = 'X'
GROUP BY d.action
```
[fiddle](http://sqlfiddle.com/#!15/2451a/1)
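The self-join can also be exercised in SQLite via Python's `sqlite3` (the sample events are made up, and `timestamp` is shortened to `ts` since it is a keyword in some engines):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (userid INTEGER, ts INTEGER, action TEXT);
INSERT INTO table1 VALUES
  (1, 1, 'X'), (1, 2, 'Y'), (1, 3, 'Z'),
  (2, 1, 'X'), (2, 2, 'Y'),
  (3, 1, 'X');
""")
# b and c only match when the next step happened after the previous one,
# so each user is counted at a step only if all earlier steps preceded it
rows = conn.execute("""
WITH actions(action) AS (VALUES ('X'), ('Y'), ('Z'))
SELECT d.action, COUNT(DISTINCT a.userid)
FROM table1 AS a
LEFT JOIN table1 AS b
  ON a.userid = b.userid AND b.action = 'Y' AND a.ts < b.ts
LEFT JOIN table1 AS c
  ON a.userid = c.userid AND c.action = 'Z' AND b.ts < c.ts
JOIN actions AS d
  ON d.action IN (a.action, b.action, c.action)
WHERE a.action = 'X'
GROUP BY d.action
ORDER BY d.action
""").fetchall()
```

With these rows the funnel comes out as X: 3, Y: 2, Z: 1.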
|
Here is one rather brute force approach. Use `listagg()` to create the sequences, and then look for them:
```
select p.pattern, count(t.actions)
from (select 'X' as pattern union all select 'XY' union all SELECT 'XYZ'
) p left join
(select userid, listagg(action, '') within group (order by timestamp) actions
from table t
group by userid
) t
on t.actions like concat('%', p.pattern, '%')
group by p.pattern;
```
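The funnel logic can also be prototyped in plain Python before committing to SQL; the event data below is invented for illustration. Each user advances through the pattern in timestamp order, and is counted at a step only after completing all earlier steps:

```python
from collections import defaultdict

events = [  # (userid, timestamp, action)
    (1, 1, 'X'), (1, 2, 'Y'), (1, 3, 'Z'),
    (2, 1, 'X'), (2, 2, 'Y'),
    (3, 1, 'X'),
    (4, 1, 'Y'),  # never did X, so counts nowhere
]
pattern = ['X', 'Y', 'Z']

# group each user's actions in timestamp order
by_user = defaultdict(list)
for uid, ts, action in sorted(events, key=lambda e: (e[0], e[1])):
    by_user[uid].append(action)

# walk each user through the pattern; intervening other actions are allowed
counts = [0] * len(pattern)
for actions in by_user.values():
    step = 0
    for a in actions:
        if step < len(pattern) and a == pattern[step]:
            counts[step] += 1
            step += 1
```

With this data, `counts` is `[3, 2, 1]`, always non-increasing, as in the desired output.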
|
SQL query time series data to find trend counts for user activity
|
[
"",
"sql",
"amazon-redshift",
""
] |
I am getting an error while executing the query below in SQL Server 2012:
```
DECLARE @Id NVARCHAR(max) = 2147483648
DECLARE @Result BIGINT = 0
IF (
@Id = 0
OR @Id = ''
)
BEGIN
SET @Result = NULL
END
ELSE
BEGIN
SET @Result = convert(bigint,@id)
END
```
Output:
> Msg 248, Level 16, State 1, Line 3 The conversion of the nvarchar
> value '2147483648' overflowed an int column.
Can someone suggest why this is happening?
Thanks in advance
|
The comparison `@Id = 0` fails because the literal `0` is typed as `INT`, and data-type precedence makes SQL Server convert the `NVARCHAR` value to `INT`, which overflows for 2147483648. Compare against a `BIGINT` (or a string literal) instead:
```
DECLARE @Id NVARCHAR(max) = 2147483648
DECLARE @Result BIGINT = 0
IF ( @Id = cast(0 as bigint) -- or = '0'
OR @Id = '' )
BEGIN
SET @Result = NULL
END
ELSE
BEGIN
SET @Result = CONVERT(BIGINT, @id)
END
```
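The root cause is the `INT` range boundary: 2147483648 is exactly one past the maximum of a signed 32-bit integer, so the implicit `NVARCHAR`-to-`INT` conversion overflows while `BIGINT` has plenty of room. A quick check of the ranges in plain Python, just to illustrate:

```python
# SQL Server INT is a signed 32-bit integer; BIGINT is signed 64-bit
INT_MAX = 2**31 - 1
BIGINT_MAX = 2**63 - 1

value = int("2147483648")     # the value stored in @Id
assert value == INT_MAX + 1   # one past INT range -> overflow on implicit cast
assert value <= BIGINT_MAX    # fits comfortably in BIGINT
```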
|
You have to compare with the string literal `'0'` rather than the number `0`, as in the following:
```
DECLARE @Id NVARCHAR(max) = 2147483648
DECLARE @Result BIGINT = 0
IF (
@Id = '0'
OR @Id = ''
)
BEGIN
SET @Result = NULL
END
ELSE
BEGIN
SET @Result = convert(bigint,@id)
END
```
|
Parsing/Conversion issue in sql server 2012?
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
Here `f` refers to `air_flight` and `fp` refers to `air_flight_details`.
As I'm a beginner in SQL, I understand the concept except for one thing: why do they put `fp` after the subquery, and not write it as `air_flight_details fp`?
```
SELECT f.flight_id, f.from_location, f.to_location, fp.Month_Name, fp.Average_Price
FROM air_flight f,
(
SELECT flight_id, MONTHNAME(flight_departure_date) AS Month_Name,
AVG(price) as Average_Price
FROM air_flight_details
GROUP BY flight_id, MONTHNAME(flight_departure_date)
) fp
WHERE f.flight_id = fp.flight_id AND f.airline_name = 'ABC AIRLINES'
ORDER BY f.flight_id, fp.Month_Name
```
|
The subquery in your example has an *alias* called `fp`, which allows the results from that subquery to be referenced in the outer query. For example, the query selects `fp.Average_Price`, which is a column contained in the subquery. Also, the `WHERE` clause contains the condition `f.flight_id = fp.flight_id`, which also uses the alias `fp`.
To be clear, `fp` names the derived result set produced by the subquery, not the `air_flight_details` table itself, which is why it cannot be written as `air_flight_details fp`.
|
Just like the first `FROM` statement names the table `f`, the entire subquery is given the alias `fp`.
The difference is that in the first `FROM` statement, the entire table `air_flight` is given the name `f`, while the subquery essentially creates an entire table to which `fp` refers.
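The derived-table alias is easy to see in action in SQLite via Python's `sqlite3`; the flight data below is hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE air_flight_details (flight_id INTEGER, price REAL);
INSERT INTO air_flight_details VALUES (1, 100.0), (1, 300.0), (2, 50.0);
""")
# the subquery's result set is named fp; the outer query can only
# reach its columns (flight_id, Average_Price) through that alias
rows = conn.execute("""
SELECT fp.flight_id, fp.Average_Price
FROM (SELECT flight_id, AVG(price) AS Average_Price
      FROM air_flight_details
      GROUP BY flight_id) fp
ORDER BY fp.flight_id
""").fetchall()
```

Dropping the `fp` alias makes the query invalid in most engines: a derived table must be named.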
|
Subqueries in SQL from airline management scheme
|
[
"",
"mysql",
"sql",
"subquery",
""
] |