| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I have this table in my database:

I want to get 2 columns: `id_chantier` and `id_chef`.
Conditions: `date_fin` is not null and the row has the latest `date_deb`.
So the rows that I want to get are number `1` and `11`.
How can I do that?
|
```
SELECT DISTINCT ON (id_chef)
id_chantier, id_chef
FROM tbl
WHERE date_fin IS NOT NULL
ORDER BY id_chef, date_deb DESC NULLS LAST;
```
Details for `DISTINCT ON`
* [Select first row in each GROUP BY group?](https://stackoverflow.com/questions/3800551/select-first-row-in-each-group-by-group/7630564#7630564)
Depending on data distribution there may be faster solutions:
* [Optimize GROUP BY query to retrieve latest record per user](https://stackoverflow.com/questions/25536422/optimize-group-by-query-to-retrieve-latest-record-per-user/25536748#25536748)
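`DISTINCT ON` is Postgres-specific, but the same greatest-n-per-group logic can be sanity-checked portably with `ROW_NUMBER()`. A sketch against an in-memory SQLite database (the sample data is invented here, since the question's table isn't shown; needs SQLite 3.25+ for window functions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbl (id_chantier INT, id_chef INT, date_deb TEXT, date_fin TEXT);
INSERT INTO tbl VALUES
  (1, 100, '2015-03-01', '2015-03-05'),   -- latest date_deb for chef 100 with date_fin
  (2, 100, '2015-02-01', '2015-02-05'),
  (3, 100, '2015-04-01', NULL),           -- excluded: date_fin is null
  (11, 200, '2015-03-02', '2015-03-06');  -- only qualifying row for chef 200
""")

# One row per id_chef: the qualifying row with the latest date_deb.
rows = conn.execute("""
SELECT id_chantier, id_chef FROM (
  SELECT id_chantier, id_chef,
         ROW_NUMBER() OVER (PARTITION BY id_chef ORDER BY date_deb DESC) AS rn
  FROM tbl
  WHERE date_fin IS NOT NULL
) WHERE rn = 1
ORDER BY id_chef
""").fetchall()
print(rows)  # [(1, 100), (11, 200)]
```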
|
You can do this with `rank()`:
```
select id_chantier, id_chef
from (select t.*, rank() over (order by date_deb desc) as rnk
from tbl t
) t
where date_fin is not null and rnk = 1;
```
|
Find the latest qualifying row for each key
|
[
"",
"sql",
"postgresql",
"greatest-n-per-group",
""
] |
I'm new to SQL, and am trying to execute a query that would only return each Category if it appears 4 times or more. From googling, it seems like I should be using `HAVING COUNT`, but it doesn't seem to return the categories.
Any ideas?
```
SELECT Category
FROM myTable
HAVING COUNT(*) >= 3;
```
|
This is the query that will "return each Category if each one appears 4 times or more."
```
SELECT Category
FROM mytable
GROUP BY Category
HAVING count(*) >= 4;
```
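A quick way to verify the `GROUP BY ... HAVING` behaviour, using made-up sample data in an in-memory SQLite database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE myTable (Category TEXT)")
# Hypothetical data: 'A' appears 4 times, 'B' only 2.
conn.executemany("INSERT INTO myTable VALUES (?)",
                 [("A",)] * 4 + [("B",)] * 2)

# Only categories with 4 or more rows survive the HAVING filter.
rows = conn.execute("""
SELECT Category FROM myTable
GROUP BY Category
HAVING COUNT(*) >= 4
""").fetchall()
print(rows)  # [('A',)]
```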
|
Just modify your query like this by adding the `GROUP BY` clause:
```
SELECT Category
FROM myTable
group by category
HAVING COUNT(*) >= 4; -- 4 or more appearances
```
|
SQL query where count is >= 3
|
[
"",
"sql",
""
] |
Consider this table:
```
Name Color1 Color2 Color3 Prize
-----------------------------------------
Bob Red Blue Green Stapler
Bob Red Blue NA Pencil
Bob Red NA NA Lamp
Bob Red NA NA Chair
Bob NA NA NA Mouse Pad
```
Bob has 3 colors. This is what I'm trying to get:
```
(#1) If Bob has Red, Blue, Green (match 3) .... Return Stapler
(#2) If Bob has Red, Blue, Purple (match 2) ... Return Pencil
(#3) If Bob has Red, Orange, Purple (match 1) . Return Lamp AND Chair rows
(#4) If Bob has Brown, Pink, Black (match 0) .. Return Mouse Pad
```
Colors would only appear in their own Columns. So in the example above, Red would only be in Color1 column and never in Color2 or Color3. Black would only be in Color3 and never in Color1 or Color2. Etc...
I only want the row(s) with the most matches.
I would really prefer not to do this with 4 separate SELECT statements and check each time if they return a row. This is how I do it in a stored procedure and it's clunky.
How can I do this in 1 SQL statement? Using Oracle if that matters...
Thanks!!!
|
> I only want the row(s) with the most matches.
You can use the `rank()` function for that:
[SQLFiddle](http://sqlfiddle.com/#!4/f2135/2)
```
select name, color1, color2, color3, prize
from (
select t.*, rank() over (order by decode(color1, 'Red', 1, 0)
+ decode(color2, 'Blue', 1, 0) + decode(color3, 'Green', 1, 0) desc) rnk
from t)
where rnk = 1
```
This returns the row or rows with the most matches.
|
7 Years later, I think I'll answer my question with what I came up with. Thanks to Ponder Stibbons for his answer to get me started. Posting here in case anyone else stumbles upon this.
Ponder's answer will return multiple rows. For example, if you match on Red and Blue, it will return both Stapler and Pencil. Only Pencil should be returned.
So, in my solution, exact matches get 2 points for the ranking and "NA" for the column gets you 1. This bumps up the NA rows and creates "catch all" scenarios for the non-matching columns, which was my intent.
```
select *
from (
select t.*, rank() over (order by
decode(color1, 'Red', 2, 0)
+ decode(color2, 'Blue', 2, 0)
+ decode(color3, 'Purple', 2, 0)
+ decode(color1, 'NA', 1, 0)
+ decode(color2, 'NA', 1, 0)
+ decode(color3, 'NA', 1, 0)
desc) rnk
from t
where t.name = 'Bob'
)
where rnk = 1
```
|
SQL - Return rows with most column matches
|
[
"",
"sql",
"oracle",
""
] |
I have this stored procedure:
```
ALTER PROCEDURE [dbo].[GetCalendarEvents]
(@StartDate datetime,
@EndDate datetime,
@Location varchar(250) = null)
AS
BEGIN
SELECT *
FROM Events
WHERE EventDate >= @StartDate
AND EventDate <= @EndDate
AND (Location IS NULL OR Location = @Location)
END
```
Now, I have the location parameter, what I want to do is if the parameter is not null then include the parameter in where clause. If the parameter is null I want to completely ignore that where parameter and only get the result by start and end date.
Because when I'm doing this for example:
```
EXEC GetCalendarEvents '02/02/2014', '10/10/2015', null
```
I'm not getting any results because there are other locations which are not null and since the location parameter is null, I want to get the results from all the locations.
Any idea how can I fix this?
|
```
ALTER PROCEDURE [dbo].[GetCalendarEvents]
( @StartDate DATETIME,
@EndDate DATETIME,
@Location VARCHAR(250) = NULL
)
AS
BEGIN
SELECT *
FROM events
WHERE EventDate >= @StartDate
AND EventDate <= @EndDate
AND Location = ISNULL(@Location, Location )
END
```
If a NULL `Location` column is a possibility, then this would work instead:
```
ALTER PROCEDURE [dbo].[GetCalendarEvents]
( @StartDate DATETIME,
@EndDate DATETIME,
@Location VARCHAR(250) = NULL
)
AS
BEGIN
IF ( @Location IS NULL )
BEGIN
SELECT *
FROM events
WHERE EventDate >= @StartDate
AND EventDate <= @EndDate
END
ELSE
BEGIN
SELECT *
FROM events
WHERE EventDate >= @StartDate
AND EventDate <= @EndDate
AND Location = @Location
END
END
```
An `OR` clause is best avoided here, as it can cause performance issues (it tends to prevent efficient index use).
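The `ISNULL(@Location, Location)` trick can be tried out with SQLite's equivalent, `COALESCE` (sample rows invented here). Note the caveat: if the `Location` column itself can be NULL, those rows never satisfy `Location = Location` and are silently dropped:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (EventDate TEXT, Location TEXT);
INSERT INTO events VALUES
 ('2014-05-01', 'Paris'),
 ('2015-01-15', 'London'),
 ('2016-01-01', 'Paris');
""")

def get_events(start, end, location=None):
    # COALESCE plays the role of T-SQL's ISNULL: a NULL parameter
    # degenerates to Location = Location, i.e. no location filter.
    return conn.execute("""
      SELECT * FROM events
      WHERE EventDate >= ? AND EventDate <= ?
        AND Location = COALESCE(?, Location)
    """, (start, end, location)).fetchall()

print(len(get_events('2014-02-02', '2015-10-10')))      # 2 (all locations in range)
print(get_events('2014-02-02', '2015-10-10', 'Paris'))  # [('2014-05-01', 'Paris')]
```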
|
Try this
```
SELECT *
FROM Events
WHERE EventDate >= @StartDate
AND EventDate <= @EndDate
AND Location = Case When LEN(@Location) > 0 Then @Location Else Location End
```
|
SQL Server stored procedure optional parameters, include all if null
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"stored-procedures",
"sql-server-2008-r2",
""
] |
A parameter table is created and one row is added in Postgres.
This table should always have exactly one row, otherwise SQL queries using this table will produce incorrect results. `DELETE` or `INSERT` on this table are disallowed; only `UPDATE` is allowed.
How do I add a single-row constraint to this table?
Maybe `DELETE` and `INSERT` triggers can raise an exception, or is there a simpler way?
|
The following will create a table where you can only insert one single row. Any update of the `id` column will result in an error, as will any insert with a value other than 42. The actual `id` value doesn't matter (unless there is some special meaning that you need).
```
create table singleton
(
id integer not null primary key default 42,
parameter_1 text,
parameter_2 text,
constraint only_one_row check (id = 42)
);
insert into singleton values (default);
```
To prevent deletes you can use a rule:
```
create or replace rule ignore_delete
AS on delete to singleton
do instead nothing;
```
You could also use a rule to make `insert` do nothing as well if you want to make an `insert` "fail" silently. Without the rule, an `insert` would generate an error. If you want a `delete` to generate an error as well, you would need to create a trigger that simply raises an exception.
---
**Edit**
If you want an error to be thrown for inserts or deletes, you need a trigger for that:
```
create table singleton
(
id integer not null primary key,
parameter_1 text,
parameter_2 text
);
insert into singleton (id) values (42);
create or replace function raise_error()
returns trigger
as
$body$
begin
RAISE EXCEPTION 'No changes allowed';
end;
$body$
language plpgsql;
create trigger singleton_trg
before insert or delete on singleton
for each statement execute procedure raise_error();
```
Note that you have to insert the single row before you create the trigger, otherwise you can't insert that row.
This will only partially work for a superuser or the owner of the table. Both have the privilege to drop or disable the trigger. But that is the nature of a superuser - he can do anything.
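A rough SQLite rendition of the same pattern, for experimentation: a `CHECK` constraint pins the single row to one key value, and triggers raise on `INSERT`/`DELETE`. As noted above, the row must be inserted before the triggers are created:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE singleton (
  id INTEGER NOT NULL PRIMARY KEY DEFAULT 42,
  parameter_1 TEXT,
  CHECK (id = 42)            -- pins the single row to one key value
);
INSERT INTO singleton (id) VALUES (42);

-- Insert the row first, then forbid further INSERT/DELETE.
CREATE TRIGGER singleton_ins_trg BEFORE INSERT ON singleton
BEGIN SELECT RAISE(ABORT, 'No changes allowed'); END;

CREATE TRIGGER singleton_del_trg BEFORE DELETE ON singleton
BEGIN SELECT RAISE(ABORT, 'No changes allowed'); END;
""")

# UPDATE is still allowed; INSERT and DELETE now raise.
conn.execute("UPDATE singleton SET parameter_1 = 'x'")
try:
    conn.execute("INSERT INTO singleton (id) VALUES (43)")
except sqlite3.DatabaseError as e:
    print(e)  # No changes allowed
```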
|
To make any table a singleton just add this column:
```
just_me bool NOT NULL DEFAULT TRUE UNIQUE CHECK (just_me)
```
This allows exactly one row. Plus add the trigger [@a\_horse provided](https://stackoverflow.com/a/29429083/939860).
But I would **rather use a function** instead of the table for this purpose. Simpler and cheaper.
```
CREATE OR REPLACE FUNCTION one_row()
RETURNS TABLE (company_id int, company text) LANGUAGE sql IMMUTABLE AS
$$SELECT 123, 'The Company'$$;
ALTER FUNCTION one_row() OWNER TO postgres;
```
Set the owner to the user that should be allowed to change it.
* [Give a user permission to ALTER a function](https://stackoverflow.com/questions/24065749/give-a-user-permission-to-alter-a-function/24072637#24072637)
Nobody else can change it - except superusers, of course. Superusers can do anything.
You can use this function just like you would use the table:
```
SELECT * FROM one_row();
```
If you need a "table", create a view (which is actually a special table internally):
```
CREATE VIEW one_row AS SELECT * FROM one_row();
```
|
How to add constraint to sql table so that table has exactly one row
|
[
"",
"sql",
"postgresql",
"database-design",
""
] |
I created about 15 tables and set up relationships and constraints among them using the *Wizard* in **SQL Server 2012 Express**.
Now I want to see the *query* that was used to create those tables, including relationships and constraints.
Can you provide help?
Thank you.
|
1. Connect to your database using SQL Server Management Studio
2. Right-click the table or the view in the Object Explorer panel
3. From the context menu choose Script Table as.../CREATE to.../< SomeDestination >
4. Choose a destination (a file, the clip board, etc.)

This would give you access to the DDL SQL that can be used to create this table.
|
There are two ways within SSMS to view the SQL statement (known as Data Definition Language, or DDL) used to create a table.
1. Right-click the table and choose "Script Table as", "CREATE To" and choose your destination. This method is easiest if you just want to view the DDL for a single table quickly.
2. Right-click the database and choose "Tasks", "Generate Scripts" and follow the prompts. This method will generate DDL for all tables and many other database objects depending on your selection.
Either method will show constraints, indexes and metadata.
|
How to view query that was used to create a table?
|
[
"",
"sql",
"sql-server",
"sql-server-2012-express",
""
] |
In my SQL table there is a column named IsApproved and it's all NULL. I want to turn the values to 'True'. I wrote this SQL statement but it didn't work:
```
INSERT INTO [persondb].[dbo].[Person] (IsApproved) VALUES ('True')
```
How can I make this work? Thanks.
|
Update the table with the `'True'` value:
```
update [persondb].[dbo].[Person]
set IsApproved = 'True' where IsApproved is null
```
|
you need to update it not insert:
```
update [persondb].[dbo].[Person] set IsApproved ='True' -- or 1, depends on the field type
where IsApproved is null
```
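For illustration, the `UPDATE`-not-`INSERT` point with a throwaway SQLite table (sample values invented): the update rewrites existing rows in place, whereas the original `INSERT` would have added a new row and left the NULLs untouched.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Person (IsApproved TEXT)")
conn.executemany("INSERT INTO Person VALUES (?)", [(None,), (None,), ("False",)])

# UPDATE rewrites the existing NULL rows; non-NULL rows are left alone.
conn.execute("UPDATE Person SET IsApproved = 'True' WHERE IsApproved IS NULL")
print(conn.execute(
    "SELECT COUNT(*) FROM Person WHERE IsApproved = 'True'").fetchone())  # (2,)
```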
|
How to insert a value into all cells under a column name
|
[
"",
"sql",
"sql-server",
""
] |
I have a table called 'Workspaces' where the columns 'AreaID' and 'SurfaceID' form a composite primary key. AreaID references another table called 'Areas', which has AreaID as its only primary key. What I want now is for the SurfaceID to restart counting from 1 for every new AreaID. Right now I'm using the following code for the tables 'Areas' and 'Workspaces':
```
--Table 'Areas'
CREATE TABLE Areas (
AreaID INT IDENTITY(1,1) PRIMARY KEY,
Areaname VARCHAR(60) UNIQUE NOT NULL
)
--Table 'Workspaces'
CREATE TABLE Workspaces (
AreaID INT
CONSTRAINT ck_a_areaid REFERENCES Areas(AreaID)
ON DELETE CASCADE
ON UPDATE NO ACTION,
SurfaceID INT IDENTITY(1,1)
CONSTRAINT ck_surfaceid CHECK (surfaceid > 0 AND surfaceid < 1001),
Description VARCHAR(300) NOT NULL,
CONSTRAINT ck_workspaces PRIMARY KEY (AreaID, SurfaceID)
)
```
When I use the code above I get a result like this when creating new workspaces in different areas:
```
AreaID SurfaceID
1 1
1 2
1 3
2 4
2 5
3 6
Etc...
```
But I want the SurfaceID to recount from 1 on every new areaID, so my desired result would be like this:
```
AreaID SurfaceID
1 1
1 2
1 3
2 1
2 2
3 1
Etc...
```
Does anyone know how this can be fixed?
|
I agree with [Mr. Linoff's answer](https://stackoverflow.com/a/29432023/3208640), but if you want to store it physically, you can do it within an `insert trigger`:
```
Update Your_Table
set SurfaceID = ( select max(isnull(SurfaceID,0))+1 as max
from Workspaces t
where t.AreaID = INSERTED.AreaID )
```
---
**EDIT:** (an example of how to implement it, as requested)
In the question I saw two tables, which is why I wrote the code above, but the following is a sample of what I meant:
*Sample table:*
```
CREATE TABLE testTbl
(
AreaID INT,
SurfaceID INT, --we want this to be auto increment per specific AreaID
Dsc VARCHAR(60)NOT NULL
)
```
*Trigger:*
```
CREATE TRIGGER TRG
ON testTbl
INSTEAD OF INSERT
AS
DECLARE @sid INT
DECLARE @iid INT
DECLARE @dsc VARCHAR(60)
SELECT @iid=AreaID FROM INSERTED
SELECT @dsc=DSC FROM INSERTED
--check if inserted AreaID exists in table -for setting SurfaceID
IF NOT EXISTS (SELECT * FROM testTbl WHERE AreaID=@iid)
SET @sid=1
ELSE
SET @sid=( SELECT MAX(T.SurfaceID)+1
FROM testTbl T
WHERE T.AreaID=@Iid
)
INSERT INTO testTbl (AreaID,SurfaceID,Dsc)
VALUES (@iid,@sid,@dsc)
```
*Insert:*
```
INSERT INTO testTbl(AreaID,Dsc) VALUES (1,'V1');
INSERT INTO testTbl(AreaID,Dsc) VALUES (1,'V2');
INSERT INTO testTbl(AreaID,Dsc) VALUES (1,'V3');
INSERT INTO testTbl(AreaID,Dsc) VALUES (2,'V4');
INSERT INTO testTbl(AreaID,Dsc) VALUES (2,'V5');
INSERT INTO testTbl(AreaID,Dsc) VALUES (2,'V6');
INSERT INTO testTbl(AreaID,Dsc) VALUES (2,'V7');
INSERT INTO testTbl(AreaID,Dsc) VALUES (3,'V8');
INSERT INTO testTbl(AreaID,Dsc) VALUES (4,'V9');
INSERT INTO testTbl(AreaID,Dsc) VALUES (4,'V10');
INSERT INTO testTbl(AreaID,Dsc) VALUES (4,'V11');
INSERT INTO testTbl(AreaID,Dsc) VALUES (4,'V12');
```
*Check the values:*
```
SELECT * FROM testTbl
```
*Output:*
```
AreaID SurfaceID Dsc
1 1 V1
1 2 V2
1 3 V3
2 1 V4
2 2 V5
2 3 V6
2 4 V7
3 1 V8
4 1 V9
4 2 V10
4 3 V11
4 4 V12
```
**IMPORTANT NOTICE:** this trigger **does not handle multi-row inserts**; records must be inserted one at a time, as in the example. To handle multi-row inserts, the trigger body needs to be changed to use `row_number()`.
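A SQLite sketch of the same `MAX(SurfaceID) + 1` idea, using an `AFTER INSERT` trigger that backfills the missing `SurfaceID` per `AreaID` (table and column names follow the example above; since SQLite triggers fire per row, the single-row caveat doesn't apply in this rendition):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE testTbl (AreaID INT, SurfaceID INT, Dsc TEXT);
-- Backfill SurfaceID with the next number for that AreaID.
CREATE TRIGGER trg AFTER INSERT ON testTbl WHEN NEW.SurfaceID IS NULL
BEGIN
  UPDATE testTbl
  SET SurfaceID = (SELECT COALESCE(MAX(SurfaceID), 0) + 1
                   FROM testTbl WHERE AreaID = NEW.AreaID)
  WHERE rowid = NEW.rowid;
END;
""")

for area, dsc in [(1, 'V1'), (1, 'V2'), (2, 'V3'), (1, 'V4'), (2, 'V5')]:
    conn.execute("INSERT INTO testTbl (AreaID, Dsc) VALUES (?, ?)", (area, dsc))

rows = conn.execute(
    "SELECT AreaID, SurfaceID, Dsc FROM testTbl ORDER BY AreaID, SurfaceID").fetchall()
print(rows)
# [(1, 1, 'V1'), (1, 2, 'V2'), (1, 3, 'V4'), (2, 1, 'V3'), (2, 2, 'V5')]
```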
|
Here is the solution that works with Multiple Rows.
Thanks to jFun for the work done for the single row insert, but the trigger is not really safe to use like that.
OK, Assuming this table:
```
create table TestingTransactions (
id int identity,
transactionNo int null,
contract_id int not null,
Data1 varchar(10) null,
Data2 varchar(10) null
);
```
In my case I needed "transactionNo" to always have the correct next value for each CONTRACT. Important for me in a legacy financial system is that there are no gaps in the transactionNo numbers.
So, we need the following trigger to ensure the integrity of the transactionNo column.
```
CREATE TRIGGER dbo.Trigger_TransactionNo_Integrity
ON dbo.TestingTransactions
INSTEAD OF INSERT
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
-- Discard any incoming transactionNo's and ensure the correct one is used.
WITH trans
AS (SELECT F.*,
Row_number()
OVER (
ORDER BY contract_id) AS RowNum,
A.*
FROM inserted F
CROSS apply (SELECT Isnull(Max(transactionno), 0) AS
LastTransaction
FROM dbo.testingtransactions
WHERE contract_id = F.contract_id) A),
newtrans
AS (SELECT T.*,
NT.minrowforcontract,
( 1 + lasttransaction + ( rownum - NT.minrowforcontract ) ) AS
NewTransactionNo
FROM trans t
CROSS apply (SELECT Min(rownum) AS MinRowForContract
FROM trans
WHERE T.contract_id = contract_id) NT)
INSERT INTO dbo.testingtransactions
SELECT Isnull(newtransactionno, 1) AS TransactionNo,
contract_id,
data1,
data2
FROM newtrans
END
GO
```
OK, I'll admit this is a pretty complex trigger with just about every trick in the book here, but this version should work all the way back to SQL 2005. The script utilises 2 CTEs, 2 cross applies and a `Row_Number() over` to work out the correct "next" TransactionNo for all of the rows in `Inserted`.
It works using an `instead of insert` trigger and discards any incoming transactionNo and replaces them with the "NEXT" transactionNo.
So, we can now run these updates:
```
delete from dbo.TestingTransactions
insert into dbo.TestingTransactions (transactionNo, Contract_id, Data1)
values (7,213123,'Blah')
insert into dbo.TestingTransactions (transactionNo, Contract_id, Data2)
values (7,333333,'Blah Blah')
insert into dbo.TestingTransactions (transactionNo, Contract_id, Data1)
values (333,333333,'Blah Blah')
insert into dbo.TestingTransactions (transactionNo, Contract_id, Data2)
select 333 ,333333,'Blah Blah' UNION All
select 99999,44443,'Blah Blah' UNION All
select 22, 44443 ,'1' UNION All
select 29, 44443 ,'2' UNION All
select 1, 44443 ,'3'
select * from dbo.TestingTransactions
order by Contract_id,TransactionNo
```
We are updating single rows, and multiple rows with mixed contract numbers - but the correct TransactionNo overrides the passed in value and we get the expected result:
```
id transactionNo contract_id Data1 Data2
117 1 44443 NULL Blah Blah
118 2 44443 NULL 1
119 3 44443 NULL 2
120 4 44443 NULL 3
114 1 213123 Blah NULL
115 1 333333 NULL Blah Blah
116 2 333333 Blah Blah NULL
121 3 333333 NULL Blah Blah
```
I am interested in people's opinions regarding concurrency. I am pretty certain the two CTEs will be treated as a single pass, so I am 99.99% certain the integrity will always be maintained.
|
auto increment on composite primary key
|
[
"",
"sql",
"sql-server",
"increment",
"identity",
""
] |
I have the following table in my DB
```
|id|name |company_id|role |
|1 |test1|11 |READADMIN |
|2 |test2|11 |WRITEADMIN |
|3 |test3|11 |READWRITEADMIN |
|4 |test4|12 |READADMIN |
|5 |test5|12 |WRITEADMIN |
|6 |test6|12 |READWRITEADMIN |
|7 |test6|11 |READADMIN |
|8 |test7|13 |READADMIN |
|9 |test8|12 |READADMIN |
|10|test9|12 |READADMIN |
```
My goal is to extract all admins that have role `READADMIN` and share the same `company_id`, where the count of such users is more than 1.
In general I expect to get the following result:
```
|id|name |company_id|role |
|1 |test1|11 |READADMIN|
|7 |test6|11 |READADMIN|
|4 |test4|12 |READADMIN|
|9 |test8|12 |READADMIN|
|10|test9|12 |READADMIN|
```
I've created the select below, but I am not sure about its accuracy and performance:
```
SELECT id, name, company_id, role
FROM admin
WHERE company_id IN(SELECT CO.id
FROM company CO
join admin AU
ON CO.id = AU.company_id
WHERE role = 'READADMIN'
GROUP BY CO.id
HAVING Count(AU.id) > 1)
AND role = 'READADMIN';
```
Would be good to know optimal approach for the mentioned task.
|
```
SELECT a.*
FROM admin a
WHERE a.role = 'READADMIN'
AND EXISTS ( SELECT 'a'
FROM admin a2
WHERE a2.role = a.role
AND a2.company_id = a.company_id
AND a2.id <> a.id
)
```
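The `EXISTS` approach can be verified against the sample rows from the question (only the `READADMIN` rows are loaded here, for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE admin (id INT, name TEXT, company_id INT, role TEXT);
INSERT INTO admin VALUES
 (1,'test1',11,'READADMIN'), (4,'test4',12,'READADMIN'),
 (7,'test6',11,'READADMIN'), (8,'test7',13,'READADMIN'),
 (9,'test8',12,'READADMIN'), (10,'test9',12,'READADMIN');
""")

# Keep a row only if another admin shares its role and company_id.
rows = conn.execute("""
SELECT a.id FROM admin a
WHERE a.role = 'READADMIN'
  AND EXISTS (SELECT 1 FROM admin a2
              WHERE a2.role = a.role
                AND a2.company_id = a.company_id
                AND a2.id <> a.id)
ORDER BY a.id
""").fetchall()
print(rows)  # [(1,), (4,), (7,), (9,), (10,)] -- id 8 is alone at company 13
```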
|
I guess that'll work :
```
SELECT DISTINCT id, name, company_id, role
FROM admin WHERE role='READADMIN' GROUP BY CO.id
```
I'm not sure if that's what you meant. What I understood from your question is that you want to get all the users whose role is 'READADMIN', without any duplicates in the results.
|
Extract all rows when specific column matches more then once
|
[
"",
"sql",
"oracle",
""
] |
I want to count the rows where a column contains a value, i.e. is not null or empty. I am a beginner at SQL; my SQL query is as below:
```
select count(news) AS news_count, count(`msg`) as msg_count , count('req') as req_count
from news_msg_activity
where news!=''
UNION
select count(news) AS news_count, count(`msg`) as msg_count , count('req') as req_count
from news_msg_activity
where msg!=''
UNION
select count(news) AS news_count, count(`msg`) as msg_count , count('req') as req_count
from news_msg_activity
where req!=''
```
When I run the query it gives several rows in the outcome, but I need a single-row result that counts the records for each column. I don't know how to write that query. Can anybody help me?
I need something like:
```
news_count || msg_count || req_count
2 || 2 || 3
```
|
Wrap your query in a subquery.
```
SELECT * -- here you can sum, count or whatever elese you need
FROM (
-- your query goes here
) as src
```
Or simply
```
select
sum(news!='') AS news_count
, sum( msg!='' ) as msg_count
, sum(req!='') as req_count
from news_msg_activity
```
as boolean expressions evaluate to integers 0/1 (false/true) in MySQL, the sum acts as a count of the rows where the condition is met.
check the fiddle: <http://sqlfiddle.com/#!9/8c780/1>
|
`COUNT(column)` will already return the number of non-null records, so unless I'm misunderstanding what you're trying to do, you can make this much simpler. The following query should return the number of non-null records for each field:
```
select count(news) AS news_count
, count(`msg`) as msg_count
, count(req) as req_count
from news_msg_activity
```
If your concern is eliminating empty strings from the count, you can use the `NULLIF` function:
```
select count(nullif(news, '')) AS news_count
, count(nullif(`msg`, '')) as msg_count
, count(nullif(req, '')) as req_count
from news_msg_activity
```
|
Sql Count rows which contains some value
|
[
"",
"mysql",
"sql",
""
] |
I have a table whose primary key is referenced as a foreign key in multiple tables.
I want to delete the rows that are not referenced by any FK in the other tables.
How can I delete all rows from the table which have no FK relation?
|
First, you can select like this:
```
select * from some_table where some_fk_column not in (
select some_column from second_table
)
```
If you get the expected result, then:
```
delete from some_table where some_fk_column not in (
select some_column from second_table
)
```
|
If you want to do this without checking all the other related tables, there is a way, but you should take care while using it:
1. loop through your table
2. try to delete each record; if any FK reference exists, the record will
not be deleted (use `TRY/CATCH` blocks)
This way you do not need to check all the FKs and tables.
**Notice:** this approach assumes that cascade delete is disabled.
```
Select *
Into #Tmp
From YOUR_TABLE
Declare @Id int
While EXISTS(SELECT * From #Tmp)
Begin
Select Top 1 @Id = Id From #Tmp
BEGIN TRY
DELETE FROM YOUR_TABLE WHERE ID=@ID
END TRY
BEGIN CATCH
END CATCH
Delete FROM #Tmp Where Id = @Id
End
```
|
How to delete all rows from table which has no FK relation
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-delete",
""
] |
I think I have a misunderstanding of how NOT EXISTS works and hope it can be clarified for me.
Here is the sample code I am running (also on [SQL Fiddle](http://sqlfiddle.com/#!4/9eecb/2495))
```
select sum(col1) col1, sum(col2) col1, sum(col3) col3
from (
select 1 col1, 1 col2, 1 col3
from dual tbl1
)
where not exists(
select 2 col1, 1 col2, 1 col3
from dual tbl2
)
```
I thought that it should return:
```
1, 1, 1
```
But instead it returns nothing.
I made this assumption based only on the fact that I thought NOT EXISTS would give me a list of all the rows in the first query that do not exist in the second query (in this case 1,1,1).
1. Why does this not work?
2. What would be the appropriate way to make it work the way I am expecting it to?
|
You are performing an uncorrelated subquery in your `NOT EXISTS()` condition. It always returns exactly one row, therefore the `NOT EXISTS` condition is never satisfied, and your query returns zero rows.
Oracle has a rowset difference operator, `MINUS`, that should do what you wanted:
```
select sum(col1) col1, sum(col2) col2, sum(col3) col3
from (
select 1 col1, 1 col2, 1 col3
from dual tbl1
MINUS
select 2 col1, 1 col2, 1 col3
from dual tbl2
)
```
SQL Server has an `EXCEPT` operator that does the same thing as Oracle's `MINUS`. Some other databases implement one or the other of these.
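SQLite also supports `EXCEPT`, so the set-difference rewrite can be checked in-memory from Python (SQLite allows a `SELECT` without `FROM`, standing in for Oracle's `dual`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# (1,1,1) is not in the second set, so EXCEPT keeps it; the sums survive.
row = conn.execute("""
SELECT SUM(col1), SUM(col2), SUM(col3) FROM (
  SELECT 1 AS col1, 1 AS col2, 1 AS col3
  EXCEPT
  SELECT 2, 1, 1
)
""").fetchone()
print(row)  # (1, 1, 1)
```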
|
`EXISTS` just returns true if a record exists in the result set; it does not do any value checking. Since the sub-query returns one record, `EXISTS` is true, `NOT EXISTS` is false, and you get no records in your result.
Typically you have a `WHERE` clause in the sub-query to compare values to the outer query.
One way to accomplish what you want is to use `EXCEPT`:
```
select sum(col1) col1, sum(col2) col2, sum(col3) col3
from (
select 1 col1, 1 col2, 1 col3
from dual tbl1
)
EXCEPT(
select 2 col1, 1 col2, 1 col3
from dual tbl2
)
```
|
SQL Where Not Exists
|
[
"",
"sql",
"oracle",
"oracle11g",
"where-clause",
"not-exists",
""
] |
Table: TEST

**Select rows having a time difference of less than 2 hours for the same day (group by date).**
Here the output should be the first two rows, because the time difference of the first two rows (18-JAN-15 01.08.40.000000000 PM - 18-JAN-15 11.21.28.000000000 AM) is less than 2 hours.
```
NB: compare rows of same date.
```
OUTPUT:

```
CREATE TABLE TEST
( "ID" VARCHAR2(20 BYTE),
"CAM_TIME" TIMESTAMP (6)
)
Insert into TEST (ID,CAM_TIME) values ('1',to_timestamp('18-JAN-15 11.21.28.000000000 AM','DD-MON-RR HH.MI.SSXFF AM'));
Insert into TEST (ID,CAM_TIME) values ('2',to_timestamp('18-JAN-15 01.08.40.000000000 PM','DD-MON-RR HH.MI.SSXFF AM'));
Insert into TEST (ID,CAM_TIME) values ('3',to_timestamp('23-JAN-15 09.18.40.000000000 AM','DD-MON-RR HH.MI.SSXFF AM'));
Insert into TEST (ID,CAM_TIME) values ('4',to_timestamp('23-JAN-15 04.22.22.000000000 PM','DD-MON-RR HH.MI.SSXFF AM'));
```
|
This self-join query does the job:
[SQL Fiddle](http://sqlfiddle.com/#!4/5b268/2)
```
select distinct t1.id, t1.cam_time
from test t1 join test t2 on t1.rowid <> t2.rowid
and trunc(t1.cam_time) = trunc(t2.cam_time)
where abs(t1.cam_time-t2.cam_time) <= 2/24
order by t1.id
```
Edit:
If `cam_time` is of TIMESTAMP type, then the condition should be:
```
where t1.cam_time between t2.cam_time - interval '2' Hour
and t2.cam_time + interval '2' Hour
```
|
I took a slightly different tack and employed the `LAG()` and `LEAD()` analytic functions:
```
WITH mydata AS (
SELECT 1 AS id, timestamp '2015-01-15 11:21:28.000' AS cam_time
FROM dual
UNION ALL
SELECT 2 AS id, timestamp '2015-01-15 13:08:40.000' AS cam_time
FROM dual
UNION ALL
SELECT 3 AS id, timestamp '2015-01-23 09:18:40.000' AS cam_time
FROM dual
UNION ALL
SELECT 4 AS id, timestamp '2015-01-23 16:22:22.000' AS cam_time
FROM dual
)
SELECT id, cam_time FROM (
SELECT id, cam_time
, LAG(cam_time) OVER ( PARTITION BY TRUNC(cam_time) ORDER BY cam_time ) AS lag_time
, LEAD(cam_time) OVER ( PARTITION BY TRUNC(cam_time) ORDER BY cam_time ) AS lead_time
FROM mydata
) WHERE CAST(lead_time AS DATE) - CAST(cam_time AS DATE) < 1/12
OR CAST(cam_time AS DATE) - CAST(lag_time AS DATE) < 1/12
```
|
select rows having time difference less than 2 hour of a single column
|
[
"",
"sql",
"oracle",
""
] |
I have two tables with different structure (table1 confirmed items, table2 items waiting for confirmation, each user may have more items in either table):
```
table1
id (primary) | user_id | group_id | name | description | active_from | active_to
```
and
```
table2
id (primary) | user_id | group_id | name | description | active_from
```
What I try to have is the list of all the items of a certain user - i.e. rows from both tables with the same `user_id` (e.g. 1) prepared to be shown grouped by `group_id` (i.e. first group 1, then group 2 etc.) ordered by `name` (inside each group). Output should look like that:
```
(all the groups below belong to the same user - with certain user_id)
# Group 1 (group_id) #
Item 67 (id): Apple (name), healthy fruit (description) (item stored in table1)
Item 29: Pear, rounded fruit (item stored in table2)
# Group 2 #
Item 14: Grape, juicy fruit (item stored in table2)
# Group 3 #
Item 116: Blackberry, shining fruit (item stored in table2)
Item 14: Plum, blue fruit (item stored in table1)
Item 7: Raspberry, red fruit (item stored in table1)
```
I haven't been able to find a working solution; I tried using `JOIN` as well as a simple `SELECT` from two tables using a `WHERE` clause.
I ended up with the following code, which is obviously not working since it returns the wrong (much higher, redundant) number of results (not to mention the non-implemented ordering of the results from `table2`):
```
SELECT table1.id, table1.user_id, table1.group_id, table1.active_from, table1.active_to, table2.id, table2.user_id, table2.group_id, table2.active_from
FROM table1
LEFT JOIN table2
ON table1.user_id = table2.user_id
WHERE (table1.group_id='".$group_id."' OR table2.group_id='".$group_id."') AND (table1.user_id='".$user_id."' OR table2.user_id='".$user_id."')
ORDER BY table1.property_name ASC
```
|
A `union` would suit your problem. Some data massage is required to give both sides of the union the same number and type of columns:
```
select group_id
, id as item_id
, name
, description
, source_table
from (
select id
, user_id
, group_id
, name
, description
, 'from table1' source_table
from table1
union all
select id
, user_id
, group_id
, name
, description
, 'from table2' -- Column name is already defined above
from table2
) as SubQueriesMustBeNamed
where user_id = 1
order by
group_id
, name
```
[Working example at SQL Fiddle.](http://sqlfiddle.com/#!9/89d84/4/0)
To format the result set as you like, iterate over the result set. When the `group_id` changes, print a `# Group N #` header.
There should be no need to have other loops or iterations client-side, just one `foreach` or equivalent over the set of rows returned by the query.
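The union-then-filter shape can be verified with a cut-down version of the sample data (a few rows invented here to cover both tables and a second user):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (id INT, user_id INT, group_id INT, name TEXT);
CREATE TABLE table2 (id INT, user_id INT, group_id INT, name TEXT);
INSERT INTO table1 VALUES (67,1,1,'Apple'), (14,1,3,'Plum');
INSERT INTO table2 VALUES (29,1,1,'Pear'), (14,1,2,'Grape'), (99,2,1,'Kiwi');
""")

# Tag each side of the union so the client can tell the source table apart.
rows = conn.execute("""
SELECT group_id, id, name, source_table FROM (
  SELECT id, user_id, group_id, name, 'table1' AS source_table FROM table1
  UNION ALL
  SELECT id, user_id, group_id, name, 'table2' FROM table2
)
WHERE user_id = 1
ORDER BY group_id, name
""").fetchall()
print(rows)
# [(1, 67, 'Apple', 'table1'), (1, 29, 'Pear', 'table2'),
#  (2, 14, 'Grape', 'table2'), (3, 14, 'Plum', 'table1')]
```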
|
```
select * from a t1 , b t2 where t1.user_id=t2.user_id and t1.group_id='' ORDER BY t1.name ASC
```
|
SELECT from two tables based on the same id and grouped
|
[
"",
"mysql",
"sql",
""
] |
How to left join two tables, selecting from second table only the first row?

My question is a follow up of:
[SQL Server: How to Join to first row](https://stackoverflow.com/q/2043259)
I used the query suggested in that thread.
```
CREATE TABLE table1(
id INT NOT NULL
);
INSERT INTO table1(id) VALUES (1);
INSERT INTO table1(id) VALUES (2);
INSERT INTO table1(id) VALUES (3);
GO
CREATE TABLE table2(
id INT NOT NULL
, category VARCHAR(1)
);
INSERT INTO table2(id,category) VALUES (1,'A');
INSERT INTO table2(id,category) VALUES (1,'B');
INSERT INTO table2(id,category) VALUES (1,'C');
INSERT INTO table2(id,category) VALUES (3,'X');
INSERT INTO table2(id,category) VALUES (3,'Y');
GO
------------------
SELECT
table1.*
,FirstMatch.category
FROM table1
CROSS APPLY (
SELECT TOP 1
table2.id
,table2.category
FROM table2
WHERE table1.id = table2.id
ORDER BY id
)
AS FirstMatch
```
However, with this query, I get inner join results. I want left join results: table1.id `2` should appear in the desired results with NULL. How can I do that?
|
Following the comment of [t-clausen.dk](https://stackoverflow.com/users/806969/t-clausen-dk) this does the job:
change `CROSS APPLY` to `OUTER APPLY`
|
use `row_number` and `left join`
```
with cte as(
select id,
category,
row_number() over(partition by id order by category) rn
from table2
)
select t.id, cte.category
from table1 t
left outer join cte
on t.id=cte.id and cte.rn=1
```
**OUTPUT:**
```
id category
1 A
2 (null)
3 X
```
[SQLFIDDLE DEMO](http://www.sqlfiddle.com/#!6/20b4a/3)
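The CTE/`row_number` approach ports directly to SQLite, so the expected output above can be reproduced from Python (needs SQLite 3.25+ for window functions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (id INT); INSERT INTO table1 VALUES (1),(2),(3);
CREATE TABLE table2 (id INT, category TEXT);
INSERT INTO table2 VALUES (1,'A'),(1,'B'),(1,'C'),(3,'X'),(3,'Y');
""")

# rn = 1 picks the first category per id; LEFT JOIN keeps id 2 with NULL.
rows = conn.execute("""
WITH cte AS (
  SELECT id, category,
         ROW_NUMBER() OVER (PARTITION BY id ORDER BY category) AS rn
  FROM table2
)
SELECT t.id, cte.category
FROM table1 t
LEFT JOIN cte ON t.id = cte.id AND cte.rn = 1
ORDER BY t.id
""").fetchall()
print(rows)  # [(1, 'A'), (2, None), (3, 'X')]
```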
|
How to left join to first row in SQL Server
|
[
"",
"sql",
"sql-server",
"left-join",
""
] |
I used `procedure` as the name of a table in SQL Server, and now I can't do anything with it; I get an error.
I didn't know it was a reserved keyword.
What should I do?
|
You can use reserved keywords for table and field names in SQL server when you surround them with square brackets.
So
```
drop table [procedure];
```
should solve the problem. The other option is to find the table dbo.procedure in Object Explorer, right click it, click Delete and choose OK.
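If the goal is to rename the table rather than drop it, `sp_rename` accepts the bracketed name as well (a sketch; the new name here is just an example):

```
EXEC sp_rename 'dbo.[procedure]', 'procedure_table';
```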
|
You can use brackets `[]` or `QUOTENAME()` with dynamic SQL. But it is bad practice to use [RESERVED WORDS](https://www.drupal.org/node/141051) as table or column names at all.
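A minimal sketch of the `QUOTENAME()` route with dynamic SQL, using the table name from the question:

```
DECLARE @sql nvarchar(max) = N'DROP TABLE ' + QUOTENAME('procedure') + N';';
EXEC sp_executesql @sql;
```

`QUOTENAME('procedure')` returns `[procedure]`, so the keyword is safely delimited.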
|
How to change the name of table in SQL Server that is a keyword
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
As the title states, I'm trying to grab all available hotel rooms when a user specifies a check-in and a check-out date. I've made some progress, but I'm struggling to understand the logic behind this process.
Here's what I have:
```
SELECT r.FLOOR, r.ROOM
FROM BOOKING b, ROOMS r
WHERE TO_DATE('2015-03-28', 'YYYY-MM-DD')
BETWEEN TO_DATE(b.CHECKIN, 'YY-MM-DD') AND TO_DATE(b.CHECKOUT, 'YY-MM-DD')
AND r.ROOMID = b.ROOMID;
```
This simply returns back all taken rooms on the specified date. (2015-03-28)
How can I change this code to take in two dates, check-in and check-out, while also returning available rooms instead of taken rooms?
Any help is much appreciated!
|
You can use Oracle's wm\_overlaps function, which finds overlapping time spans:
```
select *
from rooms
where roomid not in
(
select b.room_id
from booking b
where wm_overlaps (
wm_period(b.checkin, b.checkout),
wm_period(
to_date('2014-01-01', 'yyyy-mm-dd'),
to_date('2014-01-05', 'yyyy-mm-dd')
)
) = 1
)
```
This query returns the rooms that have no bookings overlapping the range between the two given dates.
|
This might be closer. Substitute the parameters (marked with @) as appropriate:

```
SELECT r.FLOOR, r.ROOM
FROM ROOMS r
WHERE r.ROOMID NOT IN (
  -- exclude rooms with a booking that overlaps the desired dates
  SELECT b.ROOMID
  FROM BOOKING b
  WHERE (
    b.CHECKIN BETWEEN TO_DATE(@CHECKIN, 'YY-MM-DD') AND TO_DATE(@CHECKOUT, 'YY-MM-DD')
    OR b.CHECKOUT BETWEEN TO_DATE(@CHECKIN, 'YY-MM-DD') AND TO_DATE(@CHECKOUT, 'YY-MM-DD')
    -- also catch bookings that span the whole requested range
    OR (b.CHECKIN <= TO_DATE(@CHECKIN, 'YY-MM-DD') AND b.CHECKOUT >= TO_DATE(@CHECKOUT, 'YY-MM-DD'))
  )
)
```
|
SQL Grabbing hotel room availability between two dates
|
[
"",
"sql",
"oracle",
"select",
"join",
"to-date",
""
] |
```
proc SQL;
CREATE TABLE DATA.DUMMY AS
SELECT *,
CASE
WHEN (Discount IS NOT NULL)
THEN (Total_Retail_Price - (Total_Retail_Price * Discount)) * Quantity AS Rev
ELSE (Total_Retail_Price * Quantity) AS Rev
END
FROM DATA.Cumulative_Profit_2013 AS P
;
```
I am trying to factor a potentially NULL column into the expression for revenue, but my CASE statement throws errors. I've checked other examples, but I can't see why they would help.
|
It looks like you can use [`COALESCE`](https://msdn.microsoft.com/en-us/library/ms190349.aspx) to achieve your goal without an explicit conditional:
```
SELECT *,
(Total_Retail_Price - (Total_Retail_Price * COALESCE(Discount, 0))) * Quantity AS Rev
FROM DATA.Cumulative_Profit_2013 AS P
```
|
Without knowing SAS, the standard SQL syntax would be:
```
SELECT *,
CASE
WHEN (Discount IS NOT NULL)
THEN (Total_Retail_Price - (Total_Retail_Price * Discount)) * Quantity
ELSE (Total_Retail_Price * Quantity)
END AS Rev
```
That is, with the column alias after the end of the case expression.
|
SQL Case Statement NULL column
|
[
"",
"sql",
"sas",
"enterprise-guide",
""
] |
I am trying to create a query that returns userIds that have not received an offer. For example I would like to offer the `productId 1019` to my users but I do not want to offer the product to users that already received it. But the query below keeps returning the `userId = 1054` and it should only return `userId=3333`. I will appreciate any help.
**Users**:
```
Id Status
-----------------
1054 Active
2222 Active
3333 Active
```
**Offers**:
```
userId ProductId
--------------------
1054 1019
1054 1026
2222 1019
3333 1026
```
Query
```
DECLARE @i int = 1019
SELECT Distinct c.id
FROM Users c
INNER JOIN offers o ON c.id = o.UserId
WHERE o.ProductId NOT IN (@i)
ORDER BY c.id
```
|
You can use a LEFT JOIN:
```
DECLARE @i int = 1019
SELECT c.id
FROM
Users c LEFT JOIN offers o
ON c.id = o.UserId
AND o.ProductId IN (@i)
WHERE
o.UserId IS NULL
ORDER BY
c.id
```
Please see an example [here](http://sqlfiddle.com/#!6/280cf/1).
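An equivalent formulation uses `NOT EXISTS`, which reads closely to the requirement "users with no offer for this product":

```
DECLARE @i int = 1019

SELECT c.id
FROM Users c
WHERE NOT EXISTS (
    SELECT 1
    FROM offers o
    WHERE o.UserId = c.id
      AND o.ProductId = @i
)
ORDER BY c.id
```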
|
Here is a SQLFiddle to show how it works: <http://sqlfiddle.com/#!6/cb79b/3>
```
select
c.id
from
Users c
where
c.id not in (
select
o.userid
from
Offers o
where o.ProductId = @i
)
order by c.id
```
|
Guidance creating a SQL query
|
[
"",
"sql",
""
] |
I have a table storing events occurring to users, as shown in <http://sqlfiddle.com/#!15/2b559/2/0>:
```
event_id(integer)
user_id(integer)
event_type(integer)
timestamp(timestamp)
```
A sample of the data looks as follows:
```
+-----------+----------+-------------+----------------------------+
| event_id | user_id | event_type | timestamp |
+-----------+----------+-------------+----------------------------+
| 1 | 1 | 1 | January, 01 2015 00:00:00 |
| 2 | 1 | 1 | January, 10 2015 00:00:00 |
| 3 | 1 | 1 | January, 20 2015 00:00:00 |
| 4 | 1 | 1 | January, 30 2015 00:00:00 |
| 5 | 1 | 1 | February, 10 2015 00:00:00 |
| 6 | 1 | 1 | February, 21 2015 00:00:00 |
| 7 | 1 | 1 | February, 22 2015 00:00:00 |
+-----------+----------+-------------+----------------------------+
```
I would like to get, for each event, the number of events of the same user and the same event\_type that occurred within 30 days before the event.
It should look like the following:
```
+-----------+----------+-------------+-----------------------------+-------+
| event_id | user_id | event_type | timestamp | count |
+-----------+----------+-------------+-----------------------------+-------+
| 1 | 1 | 1 | January, 01 2015 00:00:00 | 1 |
| 2 | 1 | 1 | January, 10 2015 00:00:00 | 2 |
| 3 | 1 | 1 | January, 20 2015 00:00:00 | 3 |
| 4 | 1 | 1 | January, 30 2015 00:00:00 | 4 |
| 5 | 1 | 1 | February, 10 2015 00:00:00 | 3 |
| 6 | 1 | 1 | February, 21 2015 00:00:00 | 3 |
| 7 | 1 | 1 | February, 22 2015 00:00:00 | 4 |
+-----------+----------+-------------+-----------------------------+-------+
```
The table contains millions of rows so I cannot go with a correlated subquery as suggested by @jpw in the answers below.
So far I managed to get the total number of events that occurred before with the same user\_id and same event\_type by using the following query:
```
SELECT event_id, user_id,event_type,"timestamp",
COUNT(event_type) OVER w
FROM events
WINDOW w AS (PARTITION BY user_id,event_type ORDER BY timestamp
ROWS UNBOUNDED PRECEDING);
```
With the following result:
```
+-----------+----------+-------------+-----------------------------+-------+
| event_id | user_id | event_type | timestamp | count |
+-----------+----------+-------------+-----------------------------+-------+
| 1 | 1 | 1 | January, 01 2015 00:00:00 | 1 |
| 2 | 1 | 1 | January, 10 2015 00:00:00 | 2 |
| 3 | 1 | 1 | January, 20 2015 00:00:00 | 3 |
| 4 | 1 | 1 | January, 30 2015 00:00:00 | 4 |
| 5 | 1 | 1 | February, 10 2015 00:00:00 | 5 |
| 6 | 1 | 1 | February, 21 2015 00:00:00 | 6 |
| 7 | 1 | 1 | February, 22 2015 00:00:00 | 7 |
+-----------+----------+-------------+-----------------------------+-------+
```
Do you know if there a way to change the window frame specification or the COUNT function so only the number of events which occurred within x days is returned?
As a second step, I would also like to exclude duplicate events, i.e. events with the same event\_type and the same timestamp.
|
I provided a more detailed answer plus fiddle under the [**duplicate question on dba.SE**](https://dba.stackexchange.com/q/97076/3684).
Basically:
```
CREATE INDEX events_fast_idx ON events (user_id, event_type, ts);
```
And either:
```
SELECT *
FROM events e
, LATERAL (
SELECT count(*) AS ct
FROM events
WHERE user_id = e.user_id
AND event_type = e.event_type
AND ts >= e.ts - interval '30 days'
AND ts <= e.ts
) ct
ORDER BY event_id;
```
Or:
```
SELECT e.*, count(*) AS ct
FROM events e
JOIN events x USING (user_id, event_type)
WHERE x.ts >= e.ts - interval '30 days'
AND x.ts <= e.ts
GROUP BY e.event_id
ORDER BY e.event_id;
```
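As a direct answer to the frame-specification part of the question: since PostgreSQL 11 (not available in the 9.x version the fiddle uses), the 30-day window can be expressed with a `RANGE` frame, with no join or subquery:

```
SELECT event_id, user_id, event_type, "timestamp",
       count(*) OVER (PARTITION BY user_id, event_type
                      ORDER BY "timestamp"
                      RANGE BETWEEN INTERVAL '30 days' PRECEDING
                            AND CURRENT ROW) AS count
FROM events
ORDER BY event_id;
```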
|
Maybe you already know how to solve this using a subquery and are asking specifically for a solution using a window function, in which case this answer might be invalid, but if your interest is in any possible solution, it's easy to do with a correlated subquery, although I suspect performance might be bad:
```
select
event_id, user_id,event_type,"timestamp",
(
select count(distinct timestamp)
from events
where timestamp >= e.timestamp - interval '30 days'
and timestamp <= e.timestamp
and user_id = e.user_id
and event_type = e.event_type
group by event_type, user_id
) as "count"
FROM events e
order by event_id;
```
[Sample SQL Fiddle](http://sqlfiddle.com/#!15/2b559/14)
|
Counting preceding occurences of an event within a given interval for each event row with a window function
|
[
"",
"sql",
"postgresql",
"aggregate-functions",
"window-functions",
"postgresql-performance",
""
] |
I have a ranking system where I save the users rank and points for every game day.
Now my problem is that I want to fetch the number of rank positions a user has climbed since the previous day. In this example, user\_id = 1 has dropped 3 positions since yesterday. My current query gives me roughly what I want, but with some extra calculation that I want to remove. So my question is: how do I calculate the difference in rank for every user (between today and yesterday)?
[SQL FIDDLE](http://sqlfiddle.com/#!2/b040c/5)
|
```
SELECT current.user_id,(last.rank -current.rank)
FROM ranking as current
LEFT JOIN ranking as last ON
last.user_id = current.user_id
WHERE current.rank_date = (SELECT max(rank_date) FROM ranking)
and
last.rank_date = (SELECT max(rank_date) FROM ranking
where rank_date < (SELECT max(rank_date) FROM ranking)
)
```
|
I think the simplest way is:
```
SELECT today.user_id, (yest.rank - today.rank) as diff
FROM ranking today JOIN
ranking yest
on today.user_id = yest.user_id
WHERE today.rank_date = CURRENT_DATE AND
yest.rank_date = date_sub(CURRENT_DATE, interval 1 day);
```
|
Query issue ranking system MYSQL
|
[
"",
"mysql",
"sql",
"ranking",
""
] |
I am new to Oracle, coming from MySQL, and I am trying to get auto-increment to work in Oracle with a sequence and a trigger, so that it increments my field by one each time I do an insert.
```
CREATE SEQUENCE proposals_seq MINVALUE 1
START WITH 1 INCREMENT BY 1 CACHE 10;
CREATE OR REPLACE TRIGGER proposals_before_insert
BEFORE INSERT
ON proposals
FOR EACH ROW
BEGIN
SELECT proposals_seq.nextval INTO :new.proposal_id FROM dual;
END;
```
But when I run the script, I get the error:
```
Error code 900, SQL state 42000: ORA-00900: invalid SQL statement
```
If I remove the ";" after the select statement, I get no error, until I try to insert data into the table, then I get this error:
```
INSERT INTO proposals (target_audience, line_of_business, activity_description, cost, comments, objectives_and_results)
VALUES ('test', 'test', 'test', 15, 'test', 'test');
Error code 4098, SQL state 42000: ORA-04098: trigger 'PROPOSALS_BEFORE_INSERT' is invalid and failed re-validation
```
I am using this version of Oracle:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
All the articles I have found about it, seems to be doing it like this and answers in here aswell: [How to create id with AUTO\_INCREMENT on Oracle?](https://stackoverflow.com/questions/11296361/how-to-create-id-with-auto-increment-on-oracle)
Can it be because of my version of Oracle? Is there another way for me to autoincrement this? Or will I have to increment manually with the sequence in my sql?
My table looks like this:
```
CREATE TABLE proposals (
proposal_id INT NOT NULL,
target_audience VARCHAR2(50) NOT NULL,
line_of_business VARCHAR2(50),
activity_description VARCHAR2(250) NOT NULL,
cost DECIMAL(19, 4) NOT NULL,
comments VARCHAR2(250),
objectives_and_results VARCHAR2(250),
PRIMARY KEY (proposal_id)
);
```
|
I suspect the problem is that your client tool is reading every semicolon as the end of a command, causing PL/SQL code (which requires semicolons as statement terminators) to be incorrectly transmitted to the server.
When you remove the semicolon, the statement is correctly sent to the server, but you end up with an invalid object because the PL/SQL is incorrect.
I duplicated your problem on SQL Fiddle. Then I changed the statement terminator to `/` instead of `;`, used a slash to execute each statement, and it worked without error:
```
CREATE TABLE proposals (
proposal_id INT NOT NULL,
target_audience VARCHAR2(50) NOT NULL,
line_of_business VARCHAR2(50),
activity_description VARCHAR2(250) NOT NULL,
cost NUMBER(19, 4),
comments VARCHAR2(250),
objectives_and_results VARCHAR2(250),
PRIMARY KEY (proposal_id)
)
/
CREATE SEQUENCE proposals_seq MINVALUE 1
START WITH 1 INCREMENT BY 1 CACHE 10
/
CREATE OR REPLACE TRIGGER proposals_before_insert
BEFORE INSERT ON proposals FOR EACH ROW
BEGIN
select proposals_seq.nextval into :new.proposal_id from dual;
END;
/
```
|
This is the code I use, adapted to your table. It will work on any Oracle version, but does not take advantage of the new functionality in 12c that lets a sequence back an auto-increment ID directly.
```
CREATE OR REPLACE TRIGGER your_schema.proposal_Id_TRG BEFORE INSERT ON your_schema.proposal
FOR EACH ROW
BEGIN
if inserting and :new.Proposal_Id is NULL then
SELECT your_schema.proposal_Id_SEQ.nextval into :new.Proposal_Id FROM DUAL;
end if;
END;
/
```
and usage is
```
INSERT INTO proposals (proposal_id,target_audience, line_of_business, activity_description, cost, comments, objectives_and_results)
VALUES (null,'test', 'test', 'test', 15, 'test', 'test');
```
Note that we are deliberately inserting null into the primary key to initiate the trigger.
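For reference, on Oracle 12c and later the sequence/trigger pair can be replaced entirely by an identity column (a sketch; not available on the 11g release in the question):

```
CREATE TABLE proposals (
    proposal_id NUMBER GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
    target_audience VARCHAR2(50) NOT NULL,
    line_of_business VARCHAR2(50),
    activity_description VARCHAR2(250) NOT NULL,
    cost DECIMAL(19, 4) NOT NULL,
    comments VARCHAR2(250),
    objectives_and_results VARCHAR2(250)
);
```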
|
Autoincrement in oracle with seq and trigger - invalid sql statement
|
[
"",
"sql",
"oracle",
"oracle11g",
""
] |
I have a case in which I need to order the query result in a customized order, like the following:

The `DEPARTMENT_ID` values need to appear in this order (10, then 50, then 20).
Is there a way to get this result?
|
You could use a **CASE** expression in the **ORDER BY** clause.
I answered a similar question here, <https://stackoverflow.com/a/26033176/3989608>; you could just tweak it to have your customized conditions in the CASE expression.
For example,
```
SQL> SELECT ename,
2 deptno
3 FROM emp
4 ORDER BY
5 CASE deptno
6 WHEN 20 THEN 1
7 WHEN 10 THEN 2
8 WHEN 30 THEN 3
9 END
10 /
ENAME DEPTNO
---------- ----------
SMITH 20
FORD 20
ADAMS 20
JONES 20
SCOTT 20
CLARK 10
KING 10
MILLER 10
ALLEN 30
TURNER 30
WARD 30
MARTIN 30
JAMES 30
BLAKE 30
14 rows selected.
SQL>
```
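Applied to the ordering from the question (10, then 50, then 20), assuming the `EMPLOYEES` table shown in the screenshot:

```
SELECT job_id, department_id
FROM employees
ORDER BY CASE department_id
           WHEN 10 THEN 1
           WHEN 50 THEN 2
           WHEN 20 THEN 3
         END;
```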
|
One way you can do it is to add another column, DISPLAY\_ORDER, holding sequence numbers in the order that you want.
The SQL would then be:
```
select JOB_ID, DEPARTMENT_ID
from EMPLOYEES
order by DISPLAY_ORDER;
```
|
ORACLE SQL: is it possible to customize the order of a query result?
|
[
"",
"sql",
"oracle-sqldeveloper",
""
] |
How do I write a Postgresql query to find the counts for users by hour?
Table:
```
date name
------------------- ----
2015-01-01 23:11:11 John
2015-02-02 23:22:22 John
2015-02-02 23:00:00 Mary
2015-02-02 23:59:59 Mary
2015-03-03 00:33:33 Mary
```
Desired output:
```
hour | name | count
---------------------+---------+-------
2015-01-01 23:00:00 | John | 1
2015-02-02 23:00:00 | Mary | 2
2015-02-02 23:00:00 | John | 1
2015-03-03 00:00:00 | Mary | 1
```
I tried this <http://www.sqlfiddle.com/#!12/a50d4/2>:
```
CREATE TABLE my_table (
date TIMESTAMP WITHOUT TIME ZONE,
name TEXT
);
INSERT INTO my_table (date, name) VALUES ('2015-01-01 23:11:11', 'John');
INSERT INTO my_table (date, name) VALUES ('2015-02-02 23:22:22', 'John');
INSERT INTO my_table (date, name) VALUES ('2015-02-02 23:00:00', 'Mary');
INSERT INTO my_table (date, name) VALUES ('2015-02-02 23:59:59', 'Mary');
INSERT INTO my_table (date, name) VALUES ('2015-03-03 00:33:33', 'Mary');
SELECT DISTINCT
date_trunc('hour', "date") AS hour,
name,
count(*) OVER (PARTITION BY date_trunc('hour', "date")) AS count
FROM my_table
ORDER BY hour, count;
```
but it gives me:
```
hour | name | count |
---------------------|------|-------|
2015-01-01 23:00:00 | John | 1 |
2015-02-02 23:00:00 | Mary | 3 |
2015-02-02 23:00:00 | John | 3 |
2015-03-03 00:00:00 | Mary | 1 |
```
Similar:
* [Select distinct users group by time range](https://stackoverflow.com/questions/16050847/select-distinct-users-group-by-time-range/16052114#16052114)
* [PostgreSQL: running count of rows for a query 'by minute'](https://stackoverflow.com/questions/8193688/postgresql-running-count-of-rows-for-a-query-by-minute)
|
If you want to stick with window functions, you need to add `name` to the `PARTITION BY` list, like this:
<http://www.sqlfiddle.com/#!12/a50d4/51>
```
SELECT DISTINCT
date_trunc('hour', "date") AS "hour",
name,
count(name) OVER (PARTITION BY name, date_trunc('hour', "date")) AS "cnt"
FROM my_table
ORDER BY hour, cnt DESC
```
|
```
select
date_trunc('hour', "date") as "hour",
name,
count(*) as "count"
from my_table
group by 1, 2
order by 1, 3 desc;
```
<http://www.sqlfiddle.com/#!12/a50d4/62>
|
How to select count of each user by time period (e.g., hour)?
|
[
"",
"sql",
"postgresql",
"select",
"postgresql-9.2",
""
] |
I am using the CodeIgniter framework and have been trying to execute a stored procedure (no params) that I wrote on Microsoft SQL Server 2008 from PHP, but I get an error. The stored procedure runs fine in Microsoft SQL Server itself.
I am using sqlsrv as the driver with PHP version 5.2, if that helps.
This is the error I get:
```
Error Number: 01000
[Microsoft][SQL Server Native Client 10.0][SQL Server]Executing SQL directly; no cursor
Exec sp_test
```
The following is part of the code I have
```
function index(){
$this->load->database();
$query=$this->db->query("Exec sp_test");
var_dump($query->result());
}
```
If I replace the stored-procedure call with a plain query, it works, but not with the SP call. Any help will be appreciated; I've tried for a long time and can't get it to work. Thanks.
|
I had faced the same issue. Removing the `EXEC` worked for me:

```
$query = $this->db->query("procedure_name parameter_1, parameter_2");
```
|
I've been stumbling over a similar error for some time. The execution of the stored procedure returned state code 01000/0, which is not an error but a warning, and I still don't know why it is returned.
Anyway, the SP was a simple select, and whenever I ran it in the query tool the select worked just fine. Even invoking the select from PHP with CodeIgniter returned the values correctly, but when I tried to use the stored procedure, it kept failing.
Finally I got it working, and the solution was simply to configure warnings not to be returned as errors. Here's a sample code:
```
$sp = "MOBILE_COM_SP_GetCallXXX ?,?,?,?,?,?,?,?,?,? "; //No exec or call needed
//No @ needed. Codeigniter gets it right either way
$params = array(
'PARAM_1' => NULL,
'PARAM_2' => NULL,
'PARAM_3' => NULL,
'PARAM_4' => NULL,
'PARAM_5' => NULL,
'PARAM_6' => NULL,
'PARAM_7' => NULL,
'PARAM_8' => NULL,
'PARAM_9' => NULL,
'PARAM_10' =>NULL);
//Here's the magic...
sqlsrv_configure('WarningsReturnAsErrors', 0);
//Even if I don't make the connect explicitly, I can configure sqlsrv
//and get it running using $this->db->query....
$result = $this->db->query($sp,$params);
```
That's it. Hope it helped
|
Issue executing stored procedure from PHP to a Microsoft SQL SERVER
|
[
"",
"sql",
"codeigniter",
"sqlsrv",
""
] |
New to the community and have limited experience with the subject. I'm trying to create a column that gets the sum of indicators row by row. So the column would total each indicator, giving me a total of 3 for the first customer, and 2 for the second. Using Microsoft Sql Server Mgmt Studio. Any help would be greatly appreciated!
```
Customer Date Ind1 Ind2 Ind3 Ind4
12345 1-1-15 1 0 1 1
12346 1-2-15 0 1 1 0
```
|
You can use
```
SELECT Customer
, Date
, Ind1
, Ind2
, Ind3
, Ind4
, Ind1+Ind2+Ind3+Ind4 As Indicators
FROM TABLE_NAME
```
Replace `TABLE_NAME` with whatever name the table has. If you don't want all the `Ind1,Ind2,Ind3,Ind4` columns reported, use
```
SELECT Customer
, Date
, Ind1+Ind2+Ind3+Ind4 As Indicators
FROM TABLE_NAME
```
|
Do you mean this:
```
select customer, date, ind1+ind2+ind3+ind4 as Indicators from table_name order by Indicators
```
Note: your columns may have `null` values, so use this:
```
select customer, date, isnull(ind1,0)+isnull(ind2,0)+isnull(ind3,0)+isnull(ind4,0) as Indicators
from table_name order by Indicators
```
|
Getting the Sum of Multiple Indicators by Row
|
[
"",
"sql",
"sql-server-2005",
""
] |
I have the following query:
```
SELECT CC.phone_ID,
COUNT(CC.phone_id) "Count",
PR.manuf_id
FROM CONTRACT_CELLPHONE CC
INNER JOIN product PR ON CC.phone_id = PR.product_id
GROUP BY CC.phone_id,
PR.manuf_id
ORDER BY 3;
```
which gives me the following output:
```
PHONE_ID COUNT(CC.PHONE_ID) MANUF_ID
---------- ------------------ ----------
87555 6 567000
43342 2 567001
58667 3 567001
46627 5 567002
11243 3 567003
87549 3 567003
86865 2 567005
65267 4 567006
8 rows selected.
```
I want to obtain the `phone_id` of the phone that has the highest `count` for every `manufacturer`. Something like this:
```
PHONE_ID COUNT(CC.PHONE_ID) MANUF_ID
---------- ------------------ ----------
87555 6 567000
58667 3 567001
46627 5 567002
11243 3 567003
87549 3 567003
86865 2 567005
65267 4 567006
```
This is the master data set:
```
SQL> SELECT * FROM CONTRACT_CELLPHONE CC INNER JOIN product PR ON CC.phone_id = PR.product_id;
CONTRACT_ID PHONE_ID SEQ# PAIDPRICE ESN PRODUCT_ID NAME MANUF_ID COSTPAID BASEPRICE TYPE
----------- ---------- ---------- ---------- ---------- ---------- ------------------------------ ---------- ---------- ---------- ---------
10010 11243 1 310 1234567890 11243 Galaxy 567003 276 345 CellPhone
10011 11243 1 310 1232145654 11243 Galaxy 567003 276 345 CellPhone
10011 87549 2 320 2323565678 87549 Galaxy 567003 280 350 CellPhone
10012 58667 1 300 3452123533 58667 Droid 567001 275 320 CellPhone
10013 87555 1 425 3445421789 87555 iPhone 567000 360 450 CellPhone
10014 65267 1 85 8752570865 65267 Bold 567006 63.75 75 CellPhone
10014 65267 2 85 5421785345 65267 Bold 567006 63.75 75 CellPhone
10014 65267 3 85 3454323457 65267 Bold 567006 63.75 75 CellPhone
10016 46627 1 250 9876554321 46627 HTC One 567002 200 250 CellPhone
10016 65267 2 85 1002938475 65267 Bold 567006 63.75 75 CellPhone
10017 46627 1 250 8766543289 46627 HTC One 567002 200 250 CellPhone
10018 87555 1 425 3454334532 87555 iPhone 567000 360 450 CellPhone
10019 43342 1 450 2334567654 43342 Droid 567001 400 500 CellPhone
10020 87549 1 320 2345678912 87549 Galaxy 567003 280 350 CellPhone
10021 87555 1 425 3456129642 87555 iPhone 567000 360 450 CellPhone
10021 87555 2 425 8732786480 87555 iPhone 567000 360 450 CellPhone
10022 46627 1 250 5634512345 46627 HTC One 567002 200 250 CellPhone
10023 11243 1 300 1276349812 11243 Galaxy 567003 276 345 CellPhone
10024 46627 1 250 3456123457 46627 HTC One 567002 200 250 CellPhone
10025 58667 1 300 5438767651 58667 Droid 567001 275 320 CellPhone
10026 87555 1 425 6541835680 87555 iPhone 567000 360 450 CellPhone
10027 86865 1 210 9826485932 86865 Lumia 567005 160 200 CellPhone
10028 86865 1 210 3218759604 86865 Lumia 567005 160 200 CellPhone
10029 87549 1 320 4328753902 87549 Galaxy 567003 280 350 CellPhone
10030 58667 1 300 9742467907 58667 Droid 567001 275 320 CellPhone
10031 46627 1 250 2938465831 46627 HTC One 567002 200 250 CellPhone
10032 87555 1 425 2319347891 87555 iPhone 567000 360 450 CellPhone
10033 43342 1 450 2319752032 43342 Droid 567001 400 500 CellPhone
28 rows selected.
```
I tried using `MAX` but it gives me an error.
Can someone help?
|
You could use the analytic **ROW\_NUMBER()** to assign ranking based on the counts.
**Update**: to keep rows that tie on the count, you need **DENSE\_RANK**.
Test case:
**DENSE\_RANK**
```
SQL> WITH DATA AS(
2 SELECT 87555 PHONE_ID, 6 count_phone_id, 567000 manuf_id FROM dual UNION ALL
3 SELECT 43342, 2, 567001 FROM dual UNION ALL
4 SELECT 58667, 3, 567001 FROM dual UNION ALL
5 SELECT 46627, 5, 567002 FROM dual UNION ALL
6 SELECT 11243, 3, 567003 FROM dual UNION ALL
7 SELECT 87549, 3, 567003 FROM dual UNION ALL
8 SELECT 86865, 2, 567005 FROM dual UNION ALL
9 SELECT 65267, 4, 567006 FROM dual
10 )
11 SELECT phone_id,
12 count_phone_id,
13 manuf_id
14 FROM
15 (SELECT t.*,
16 DENSE_RANK() OVER(PARTITION BY manuf_id ORDER BY count_phone_id DESC) rn
17 FROM DATA t
18 )
19 WHERE rn = 1;
PHONE_ID COUNT_PHONE_ID MANUF_ID
---------- -------------- ----------
87555 6 567000
58667 3 567001
46627 5 567002
11243 3 567003
87549 3 567003
86865 2 567005
65267 4 567006
7 rows selected.
SQL>
```
So, using **DENSE\_RANK** you keep the rows that share the same count within each group.
**ROW\_NUMBER**
```
SQL> WITH DATA AS(
2 SELECT 87555 PHONE_ID, 6 count_phone_id, 567000 manuf_id FROM dual UNION ALL
3 SELECT 43342, 2, 567001 FROM dual UNION ALL
4 SELECT 58667, 3, 567001 FROM dual UNION ALL
5 SELECT 46627, 5, 567002 FROM dual UNION ALL
6 SELECT 11243, 3, 567003 FROM dual UNION ALL
7 SELECT 87549, 3, 567003 FROM dual UNION ALL
8 SELECT 86865, 2, 567005 FROM dual UNION ALL
9 SELECT 65267, 4, 567006 FROM dual
10 )
11 SELECT phone_id,
12 count_phone_id,
13 manuf_id
14 FROM
15 (SELECT t.*,
16 row_number() OVER(PARTITION BY manuf_id ORDER BY count_phone_id DESC) rn
17 FROM DATA t
18 )
19 WHERE rn = 1;
PHONE_ID COUNT_PHONE_ID MANUF_ID
---------- -------------- ----------
87555 6 567000
58667 3 567001
46627 5 567002
11243 3 567003
86865 2 567005
65267 4 567006
6 rows selected.
SQL>
```
So, the inner query using **ROW\_NUMBER()** function first assigns rank to the rows based on the counts in **descending order**, that too in **each group** of `manuf_id`. Thus, the highest count in each group will have rank 1. Finally, we filter the required rows in the outer query.
|
Your original query is as follows:
```
SELECT CC.phone_ID,
COUNT(CC.phone_id) "Count",
PR.manuf_id
FROM CONTRACT_CELLPHONE CC
INNER JOIN product PR ON CC.phone_id = PR.product_id
GROUP BY CC.phone_id,
PR.manuf_id
ORDER BY 3;
```
A couple of things here: first, you can just use `COUNT(*)` (unless `phone_id` can be `NULL`, which I doubt since you're grouping on it). Second, you can simply add an analytic function to this query, then make it a subquery:
```
SELECT phone_id, manuf_id, phone_id_cnt AS "Count" FROM (
SELECT cc.phone_id, pr.manuf_id, COUNT(*) AS phone_id_cnt
, RANK() OVER ( PARTITION BY manuf_id ORDER BY COUNT(*) DESC ) AS rn
FROM contract_cellphone cc INNER JOIN product pr
ON cc.phone_id = pr.product_id
GROUP BY cc.phone_id, pr.manuf_id
) WHERE rn = 1
ORDER BY manuf_id;
```
|
Oracle11g - Getting Maximum values for Group By and Order By
|
[
"",
"sql",
"oracle",
"join",
"oracle11g",
"max",
""
] |
I want to merge this result into 2 rows. Is there any way to do that? Thanks in advance!
Current result:
```
cust-no document order-no Black Description Black CR Yellow Description Yellow CR
CE074L00 10012107 0 NULL NULL 841437 P.CART YLW C3501S; -5
CE074L00 10012107 0 NULL NULL 841696 P.CART YLW C5502S; -7
CE074L00 10012107 0 841436 P.CART BLK C3501S; -8 NULL NULL
CE074L00 10012107 0 841695 P.CART BLK C5502S; -3 NULL NULL
```
Expected result:
```
cust-no document order-no Black Description Black CR Yellow Description Yellow CR
CE074L00 10012107 0 841436 P.CART BLK C3501S; -8 841437 P.CART YLW C3501S; -5
CE074L00 10012107 0 841695 P.CART BLK C5502S; -3 841696 P.CART YLW C5502S; -7
```
The current SQL query:
```
select
a.[cust-no], a.[document],a.[order-no],
a.[Black Description],a.[Black CR],a.[Yellow Description],a.[Yellow CR]
from
(select
i1.[cust-no], i1.[document], i1.[order-no],
il1.[description] [Black Description],
il1.[qty-shipped] [Black CR], null [Yellow Description],
null [Yellow CR]
from
invoice i1
inner join
[invoice-line] il1 on il1.[document] = i1.[document]
inner join
toner t on t.[edp code] = il1.[item-no]
and t.[color] = 'black' and i1.[dbill-type] = 'PS'
and i1.[invoice-date] > '2015-01-01'
and i1.[order-code] = 'FOCA'
and i1.[cust-no] = 'CE074L00'
union
select
i1.[cust-no], i1.[document], i1.[order-no],
null [Black Description], null [Black CR],
il1.[description] [Yellow Description],il1.[qty-shipped] [Yellow CR] from invoice i1
inner join [invoice-line] il1 on il1.[document] = i1.[document] inner join
toner t on t.[edp code] = il1.[item-no] and t.[color] = 'yellow' and i1.[dbill-type] = 'PS'
and i1.[invoice-date] > '2015-01-01'
and i1.[order-code] = 'FOCA'
and i1.[cust-no] = 'CE074L00') a
```
---
The modified query, however, still shows only one row; I'm not sure how to use **RowNo**:
```
SELECT
a.[cust-no],
a.[document],
a.[order-no],
MAX(CASE WHEN a.[color] = 'black' THEN a.[description] END) AS [Black Description],
MAX(CASE WHEN a.[color] = 'black' THEN a.[qty-shipped] END) AS [Black CR],
MAX(CASE WHEN a.[color] = 'yellow' THEN a.[description] END) AS [Yellow Description],
MAX(CASE WHEN a.[color] = 'yellow' THEN a.[qty-shipped] END) AS [Yellow CR]
FROM
(
SELECT
ROW_NUMBER() OVER (PARTITION BY i1.[cust-no],i1.[document],i1.[order-no] ORDER BY i1.[cust-no] desc) AS RowNo,
i1.[cust-no],i1.[document],i1.[order-no],t.[color],il1.[description],il1.[qty-shipped]
FROM
invoice i1
INNER JOIN
[invoice-line] il1
ON
il1.[document] = i1.[document]
INNER JOIN
toner t
ON
t.[edp code] = il1.[item-no]
WHERE
t.[color] IN('black', 'yellow')
and i1.[dbill-type] = 'PS'
and i1.[invoice-date] > '20150101'
and i1.[order-code] = 'FOCA'
and i1.[cust-no] = 'CE074L00'
) AS a
GROUP BY
a.[cust-no],
a.[document],
a.[order-no]
```
Result of the modified query:
```
cust-no document order-no Black Description Black CR Yellow Description Yellow CR
CE074L00 10012107 0 841695 P.CART BLK C5502S; -3 841696 P.CART YLW C5502S; -5
```
---
Data for testing:
```
create table #Invoice(
[document] int,
[cust-no] varchar(15),
[order-no] int,
[dbill-type] varchar(15),
[invoice-date] datetime,
[order-code] varchar(15))
create table #Invoice_line(
[document] int,
[item-no] int,
[description] varchar(100),
[qty-shipped] int)
create table #toner(
[edp code] int,
[color] varchar(15))
insert into #invoice values (10012107,'CE074L00',0,'PS','2015-03-01','FOCA')
insert into #Invoice_line values (10012107,841436,'841436 P.CART BLK C3501S;',-8)
insert into #Invoice_line values (10012107,841695,'841695 P.CART BLK C5502S;',-3)
insert into #Invoice_line values (10012107,841437,'841437 P.CART YLW C3501S;',-5)
insert into #Invoice_line values (10012107,841696,'841696 P.CART YLW C5502S;',-7)
insert into #toner values(841436,'black')
insert into #toner values(841695,'black')
insert into #toner values(841437,'yellow')
insert into #toner values(841696,'yellow')
```
---
Query to test:
```
SELECT
a.[cust-no],
a.[document],
a.[order-no],
MAX(CASE WHEN a.[color] = 'black' THEN a.[description] END) AS [Black Description],
MAX(CASE WHEN a.[color] = 'black' THEN a.[qty-shipped] END) AS [Black CR],
MAX(CASE WHEN a.[color] = 'yellow' THEN a.[description] END) AS [Yellow Description],
MAX(CASE WHEN a.[color] = 'yellow' THEN a.[qty-shipped] END) AS [Yellow CR]
FROM
(
SELECT
ROW_NUMBER() OVER (PARTITION BY i1.[cust-no],i1.[document],i1.[order-no] ORDER BY i1.[cust-no] desc) AS RowNo,
i1.[cust-no],i1.[document],i1.[order-no],t.[color],il1.[description],il1.[qty-shipped]
FROM
#invoice i1
INNER JOIN
#invoice_line il1
ON
il1.[document] = i1.[document]
INNER JOIN
#toner t
ON
t.[edp code] = il1.[item-no]
WHERE
t.[color] IN('black', 'yellow')
and i1.[dbill-type] = 'PS'
and i1.[invoice-date] > '20150101'
and i1.[order-code] = 'FOCA'
and i1.[cust-no] = 'CE074L00'
) AS a
GROUP BY
a.[cust-no],
a.[document],
a.[order-no]
```
|
@OP, it seems that this is working just fine; correct me if I'm wrong.
```
SELECT a.[cust-no]
, a.[document]
, a.[order-no]
, MAX(CASE WHEN a.[color] = 'black' THEN a.[description] END) AS [Black Description]
, MAX(CASE WHEN a.[color] = 'black' THEN a.[qty-shipped] END) AS [Black CR]
, MAX(CASE WHEN a.[color] = 'yellow' THEN a.[description] END) AS [Yellow Description]
, MAX(CASE WHEN a.[color] = 'yellow' THEN a.[qty-shipped] END) AS [Yellow CR]
FROM (
SELECT ROW_NUMBER() OVER (PARTITION BY i1.[cust-no], i1.[document], i1.[order-no], t.[color] ORDER BY i1.[cust-no] DESC) AS RowNo
, i1.[cust-no]
, i1.[document]
, i1.[order-no]
, t.[color]
, il1.[description]
, il1.[qty-shipped]
FROM #invoice i1
INNER JOIN #invoice_line il1
ON il1.[document] = i1.[document]
INNER JOIN #toner t
ON t.[edp code] = il1.[item-no]
WHERE t.[color] IN ('black', 'yellow')
AND i1.[dbill-type] = 'PS'
AND i1.[invoice-date] > '20150101'
AND i1.[order-code] = 'FOCA'
AND i1.[cust-no] = 'CE074L00'
) AS a
GROUP BY a.[cust-no]
, a.[document]
, a.[order-no]
, a.[RowNo]
```
|
You could use `case-based aggregation` or `conditional aggregation`:
```
SELECT
i1.[cust-no],
i1.[document],
i1.[order-no],
MAX(CASE WHEN t.[color] = 'black' THEN il1.[description] END) AS [Black Description],
MAX(CASE WHEN t.[color] = 'black' THEN il1.[qty-shipped] END) AS [Black CR],
MAX(CASE WHEN t.[color] = 'yellow' THEN il1.[description] END) AS [Yellow Description],
MAX(CASE WHEN t.[color] = 'yellow' THEN il1.[qty-shipped] END) AS [Yellow CR]
FROM invoice i1
INNER JOIN [invoice-line] il1
ON il1.[document] = i1.[document]
INNER JOIN toner t
ON t.[edp code] = il1.[item-no]
WHERE
t.[color] IN('black', 'yellow')
and i1.[dbill-type] = 'PS'
and i1.[invoice-date] > '20150101'
and i1.[order-code] = 'FOCA'
and i1.[cust-no] = 'CE074L00'
GROUP BY
i1.[cust-no],
i1.[document],
i1.[order-no]
```
|
SQL Server : merge subqueries without duplicate
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I'm using the following query to insert multiple rows into a table `my_employee`:
```
insert all into
my_employee (id, last_name, first_name, userid, salary) values (&id, '&last_name', '&firstname', '&userid', &salary)
my_employee (id, last_name, first_name, userid, salary) values (&id, '&last_name', '&firstname', '&userid', &salary)
my_employee (id, last_name, first_name, userid, salary) values (&id, '&last_name', '&firstname', '&userid', &salary)
select * from DUAL;
```
Yet I followed similar questions step by step, but I don't know why I'm getting this error:
```
Erreur SQL : ORA-00928: missing SELECT keyword
00928. 00000 - "missing SELECT keyword"
```
|
You need to specify `into` for each insert. This should work:
```
insert all
into my_employee (id, last_name, first_name, userid, salary) values (&id, '&last_name', '&firstname', '&userid', &salary)
into my_employee (id, last_name, first_name, userid, salary) values (&id, '&last_name', '&firstname', '&userid', &salary)
into my_employee (id, last_name, first_name, userid, salary) values (&id, '&last_name', '&firstname', '&userid', &salary)
select * from DUAL;
```
|
You can do this with three insert statements:
```
insert into my_employee (id, last_name, first_name, userid, salary)
values (&id, '&last_name', '&firstname', '&userid', &salary);
insert into my_employee (id, last_name, first_name, userid, salary)
values (&id, '&last_name', '&firstname', '&userid', &salary);
insert into my_employee (id, last_name, first_name, userid, salary)
values (&id, '&last_name', '&firstname', '&userid', &salary);
```
Or with one statement using `union all`:
```
insert into my_employee (id, last_name, first_name, userid, salary)
select &id, '&last_name', '&firstname', '&userid', &salary
from dual union all
select &id, '&last_name', '&firstname', '&userid', &salary
from dual union all
select &id, '&last_name', '&firstname', '&userid', &salary;
```
I don't see an application for `insert all` because you want to insert into a single table.
|
Oracle SQL issue while inserting multiple rows into a table
|
[
"",
"sql",
"oracle",
""
] |
I have a SQL puzzle and can't figure out a proper way to do it except a brute-force case statement. Hope there are good ideas on how to achieve this. Thanks for your thoughts. I am on SQL Server 2012.
Basically, I have groups of rows and each group has a fixed 6 rows with a value of either 1 or 0. Now, I need to insert a new row after each group and fill in the maximum **consecutive row count** for that group. See below:
```
group_name, row_number, yes_no
A, 1, 1
A, 2, 0
A, 3, 1
A, 4, 1
A, 5, 1
A, 6, 0
B, 1, 1
B, 2, 1
B, 3, 0
B, 4, 1
B, 5, 0
B, 6, 0
```
Now I would like the results to be:
```
group_name, row_number, yes_no
A, 1, 1
A, 2, 0
A, 3, 1
A, 4, 1
A, 5, 1
A, 6, 0
**A, 7, 3**
B, 1, 1
B, 2, 1
B, 3, 0
B, 4, 1
B, 5, 0
B, 6, 0
**B, 7, 2**
```
Notice row\_number 7 is a new row with the maximum number of consecutive rows of 1s. Any idea how to do this? Thank you!
|
You can get the maximum consecutive row count by subtracting a sequence number from `row_number` and assigning a group. For instance, the following gets information about all consecutive values in the data:
```
select group_name, yes_no, min(row_number), max(row_number), count(*)
from (select t.*,
(row_number - row_number() over (partition by group_name, yes_no
order by row_number)
) as grp
from table t
) t
group by group_name, grp, yes_no;
```
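As a quick sanity check of this row-number-difference (gaps-and-islands) idea, here is a small, hypothetical SQLite demo driven from Python — table and column names are made up to mirror the question's sample, and it assumes SQLite 3.25+ for window-function support:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tbl (group_name TEXT, row_num INTEGER, yes_no INTEGER);
INSERT INTO tbl VALUES
 ('A',1,1),('A',2,0),('A',3,1),('A',4,1),('A',5,1),('A',6,0),
 ('B',1,1),('B',2,1),('B',3,0),('B',4,1),('B',5,0),('B',6,0);
""")

# Subtracting a per-value sequence number from row_num gives every run
# of consecutive equal values the same grp key; count the runs of 1s.
rows = con.execute("""
SELECT group_name, MAX(cnt) FROM (
  SELECT group_name, COUNT(*) AS cnt FROM (
    SELECT group_name, yes_no,
           row_num - ROW_NUMBER() OVER (PARTITION BY group_name, yes_no
                                        ORDER BY row_num) AS grp
    FROM tbl) t
  WHERE yes_no = 1
  GROUP BY group_name, grp) s
GROUP BY group_name
""").fetchall()
print(sorted(rows))  # [('A', 3), ('B', 2)]
```

The result matches the expected rows 7 in the question (3 for group A, 2 for group B).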
To get what you want, you need an `insert` and one more level of aggregation -- to get the maximum count:
```
insert into table(group_name, row_number, yes_no)
select group_name, maxrn + 1, max(cnt)
from (select group_name, yes_no, count(*) as cnt, max(row_number) as maxrn
from (select t.*,
(row_number - row_number() over (partition by group_name, yes_no
order by row_number)
) as grp
from table t
) t
group by group_name, grp, yes_no
) t
group by group_name
```
Note: your question is unclear on whether you want the longest group of 1s and 0s or just 1s. This gets both. If you want just 1s, you can insert a `where` clause before the last `group by`.
|
I hope this helps.
I just take the greatest distance between 0s, grouping by group\_name:
```
INSERT INTO yourtable
(group_name, row_number, yes_no)
SELECT
t1.group_name,
7 AS row_number,
MAX(t2.row_number - t1.row_number) - 1 as yes_no
FROM yourtable t1
INNER JOIN yourtable t2
ON t1.group_name = t2.group_name AND
t1.row_number < t2.row_number AND
t1.yes_no = 0 AND
t1.yes_no = t2.yes_no
GROUP BY t1.group_name
```
|
how to get maximum consecutive row count using sql
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
```
select
//
from
//
where
//this is the place i need help with
```
I have a table `person` with column `dob date`.
I want to be able to select rows of people aged 1 or more.
Some mock data:
```
name dob
```
```
person 1 13-DEC-2014
person 2 24-JAN-2011
person 3 05-MAY-2013
person 4 17-APR-2014
person 5 21-DEC-2013
person 6 11-NOV-2014
```
In this scenario I would expect the names 'person 2', 'person 3' and 'person 5' to be listed in the output. I know how to write the SELECT and FROM parts in my scenario, just not the WHERE. Any help would be greatly appreciated.
|
I would use `ADD_MONTHS()`:
```
SELECT * FROM person
WHERE dob <= TRUNC(ADD_MONTHS(SYSDATE, -12));
```
In the above I'm actually adding -12 months (or 1 year -- equivalent to subtracting 12 months) to the value of `SYSDATE`.
|
```
select *
from person
where dob <= add_months( trunc(sysdate), -12 )
```
will return everyone whose birth date is more than 12 months before the current date. `sysdate` returns the current date and time. `trunc` removes the time component (setting it to midnight). Then `add_months` subtracts 12 months.
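Oracle's `add_months` isn't portable, but the same cutoff logic can be sketched in SQLite via Python. This is a hypothetical check using a fixed "today" of 2015-04-06 and the mock data from the question (SQLite's `date(..., '-12 months')` modifier plays the role of `add_months(..., -12)`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE person (name TEXT, dob TEXT);
INSERT INTO person VALUES
 ('person 1','2014-12-13'), ('person 2','2011-01-24'),
 ('person 3','2013-05-05'), ('person 4','2014-04-17'),
 ('person 5','2013-12-21'), ('person 6','2014-11-11');
""")

# Age >= 1 year  <=>  dob is on or before (today minus 12 months).
rows = con.execute("""
SELECT name FROM person
WHERE dob <= date('2015-04-06', '-12 months')
ORDER BY name
""").fetchall()
print([r[0] for r in rows])  # ['person 2', 'person 3', 'person 5']
```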
|
checking age of a person, the person has date of birth stored as a date
|
[
"",
"sql",
"oracle",
"date",
""
] |
I have the following stored procedure:
```
CREATE PROCEDURE myProc
@nameList varchar(500)
AS
BEGIN
create table #names (Name varchar(20))
-- split @nameList up into #names table
END
GO
```
`@nameList` would basically look like this:
```
'John, Samantha, Bob, Tom'
```
|
Use CONVERT to `XML` and `CROSS APPLY`:
```
DECLARE @str varchar(50)
SET @str='John, Samantha, Bob, Tom'
SELECT names = y.i.value('(./text())[1]', 'nvarchar(1000)')
FROM
(
SELECT
n = CONVERT(XML, '<i>'
+ REPLACE(@str, ',' , '</i><i>')
+ '</i>')
) AS a
CROSS APPLY n.nodes('i') AS y(i)
```
*OUTPUT:*
```
names
-----
John
Samantha
Bob
Tom
```
**EDIT:** the temp table is not needed inside the proc, so the proc becomes:
```
CREATE PROCEDURE myProc
(@nameList varchar(500))
AS
BEGIN
SELECT names = y.i.value('(./text())[1]', 'nvarchar(1000)')
FROM
(
SELECT
n = CONVERT(XML, '<i>'
+ REPLACE(@nameList, ',' , '</i><i>')
+ '</i>')
) AS a
CROSS APPLY n.nodes('i') AS y(i)
END
```
but if you want to insert it into a temp table, below is a sample:
```
create table #names
(
Name varchar(20)
)
DECLARE @str varchar(50)
SET @str='John, Samantha, Bob, Tom'
insert into #names
SELECT names = y.i.value('(./text())[1]', 'nvarchar(1000)')
FROM
(
SELECT
n = CONVERT(XML, '<i>'
+ REPLACE(@str, ',' , '</i><i>')
+ '</i>')
) AS a
CROSS APPLY n.nodes('i') AS y(i)
select * from #names
drop table #names
```
**EDIT 2:** the input string may contain special characters like `'<'`, `'>'`, etc. They are not standard for names, but if the given string contains them, you can remove them using the `replace` function: `replace(@str,'<','')`
|
With recursive cte:
```
DECLARE @nameList NVARCHAR(MAX) = 'John, Samantha, Bob, Tom'
SET @nameList = @nameList + ',';
WITH cte
AS ( SELECT SUBSTRING(@nameList, 0, CHARINDEX(',', @nameList)) AS n ,
CHARINDEX(',', @nameList) AS i
UNION ALL
SELECT SUBSTRING(@nameList, i + 2,CHARINDEX(',', @nameList, i + 2) - i - 2) ,
CHARINDEX(',', @nameList, i + 2)
FROM cte
WHERE CHARINDEX(',', @nameList, i + 2) > 0
)
SELECT n FROM cte
```
Output:
```
n
John
Samantha
Bob
Tom
```
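The same recursive-split idea carries over to SQLite; here is a quick, hypothetical check via Python, peeling one comma-separated token off the front of the string per recursion step (names and delimiter taken from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# A trailing delimiter is appended so every token ends with a comma.
rows = con.execute("""
WITH RECURSIVE split(name, rest) AS (
  SELECT '', 'John, Samantha, Bob, Tom' || ','
  UNION ALL
  SELECT trim(substr(rest, 1, instr(rest, ',') - 1)),
         substr(rest, instr(rest, ',') + 1)
  FROM split WHERE rest <> ''
)
SELECT name FROM split WHERE name <> ''
""").fetchall()
print([r[0] for r in rows])  # ['John', 'Samantha', 'Bob', 'Tom']
```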
|
Split comma separated varchar parameter up into temp table
|
[
"",
"sql",
"sql-server",
""
] |
What are the implications of using SQL Server's DateTime2 with a precision of 0 to represent a date rather than the built in Date field.
In either case, my concern is to prevent accidental time entries, but are there storage or performance considerations I should take note of?
|
It won't work. According to MSDN, the minimum size of DateTime2 is six bytes, and it will contain hh:mm:ss, so it can, and will, contain a time component (defaulting to midnight). As other responders have noted, you must use the Date type to guarantee that no time portion is saved; it occupies three bytes.
<https://technet.microsoft.com/en-us/library/bb677335%28v=sql.105%29.aspx>
|
`DateTime2(0)` will store datetime with no decimal values i.e `YYYY-MM-DD hh:mm:ss`
```
SELECT CONVERT(DateTime2(0) , GETDATE())
RESULT: 2015-04-06 20:47:17
```
Storing data just as dates will only store dates i.e `YYYY-MM-DD` without any time values.
```
SELECT CONVERT(Date , GETDATE())
RESULT: 2015-04-06
```
If you are only interested in dates then use `DATE` data type.
`DATETIME2` will use `6 bytes` for precisions `less than 3` and `DATE` will use `3 bytes`.
DATE is half the size of DATETIME2(0), so it will also perform better, since SQL Server will process less data, and it will save disk space as well.
|
SQL Server DateTime2(0) vs Date
|
[
"",
"sql",
"sql-server",
""
] |
Hi, I have an issue handling some data in SQL and returning some values by the nearest date. I have two tables:
Table 1
```
ID Content Date
--------------------------------------------
123 X 2013-11-18
123 ZE 2013-11-29
233 YX 2013-12-30
233 XX 2013-12-28
444 Z 2014-02-24
```
Table 2
```
ID Value Validation Date
--------------------------------------------
123 0.54 2013-11-11
123 0.42 2013-11-18
123 0.32 2013-11-27
233 1.2 2013-12-4
233 1.1 2013-12-28
233 1.0 2013-12-29
444 4 2014-02-11
444 3 2014-02-15
444 2 2014-02-23
```
The output that i pretend is something like:
```
ID Content Date Value Validation Date
------------------------------------------------------------------------
123 X 2013-11-18 0.42 2013-11-18
123 ZE 2013-11-29 0.32 2013-11-27
233 YX 2013-12-30 1.0 2013-12-29
233 XX 2013-12-28 1.1 2013-12-28
444 Z 2014-02-24 2 2014-02-23
```
So I would like to return the value whose validation date is nearest to the date (the validation date always has to be smaller than the date). Can you please help me? The ID in tables 1 and 2 is not unique.
|
You can use the following query:
```
SELECT ID, Content, [Date], Value, [Validation Date]
FROM (
SELECT t1.ID, Content, [Date], Value, [Validation Date],
ROW_NUMBER() OVER (PARTITION BY t1.ID, Content
ORDER BY DATEDIFF(d, [Validation Date], [Date])) AS rn
FROM Table1 AS t1
INNER JOIN Table2 AS t2 ON t1.ID = t2.ID AND [Validation Date] <= [Date]
) t
WHERE t.rn = 1
```
`ROW_NUMBER()` is used to track the record with the smallest `[Date]` -`[Validation Date]` difference per (`ID`, `Content`) pair of values.
|
I think the only way to do this is correlation. Something like this:
```
SELECT a.id, a.content, a.date,
       (SELECT TOP 1 b.value
        FROM table2 b
        WHERE b.id = a.id
          AND b.[validation date] <= a.date
        ORDER BY b.[validation date] DESC) AS value
FROM table1 a
```
|
SQL Nearest Past Date
|
[
"",
"sql",
"sql-server",
"date",
""
] |
I have three tables User, office and a role map. The user Id will be FK to the office table against the Office manager and office head. The role and the user Id will be mapped in the role map table as shown below.
When an office is deleted, the office table's Active column will be updated to 'N'.
When an office is deleted, I have to check if the office manager is present as office manager for any other office. (The same person can be manager for another market too)
If he is not, then I have to delete the entry in Role map table by updating the Active flag to 'N'
Table User
```
User ID | User Name |
--------------------------
1 | Mark |
2 | George |
3 | Rick |
4 | Alex |
```
Table Office
```
Office ID | Office Name | Office Manager| Office Head | Active |
---------------------------------------------------------------------------
1 | Off1 | 1 | 2 | Y |
2 | Off2 | 1 | 4 | Y |
```
Role Map
```
User ID | User Role | Active |
---------------------------------------------
1 | Office Manager | Y |
2 | Office Head | Y |
4 | Office Head | Y |
```
So in this example, if I delete Office 2, then I need to update the third entry in the role map table as shown below.
The Office Manager role mapping will stay intact, as the same user is the manager for Off1 too.
Role Map -- After update
```
User ID | User Role | Active |
-------------------------------------------
1 | Office Manager | Y |
2 | Office Head | Y |
4 | Office Head | N |
```
I am unsure how to implement this query in PL/SQL. Can someone please guide me here?
|
I think that it would be safer not to use `merge` in this case. After deleting the office, or updating its status to inactive,
just run this `update`:
```
update role_map set active = 'N'
where (user_role = 'Office Head'
and not exists (
select 1 from office
where office_head = role_map.user_id and active = 'Y'))
or (user_role = 'Office Manager'
and not exists (
select 1 from office
where office_manager = role_map.user_id and active = 'Y'));
```
I'm not sure if the table `role_map` is needed at all; it could probably be replaced with a view, but maybe you store some other data.
|
If you have already deactivated the office row, you can just check whether the user still appears against any active office and do a simple update like this -
```
UPDATE ROLE_MAP
SET active = 'N'
WHERE user_id NOT IN
   (SELECT office_manager FROM office WHERE active = 'Y'
    UNION ALL
    SELECT office_head FROM office WHERE active = 'Y')
```
|
PLSQL Merge - Update while row not existing
|
[
"",
"sql",
"oracle",
"merge",
""
] |
I'm trying to write a query that returns the last non-null value against all subsequent dates until a new non-null value is encountered. The input table would look something like this:
```
DATE VALUE
========== ======
01/01/2015 1
02/01/2015 NULL
03/01/2015 NULL
04/01/2015 2
05/01/2015 NULL
```
And I would like the resulting query table would look like this:
```
DATE CURRENT VALUE
========== =============
01/01/2015 1
02/01/2015 1
03/01/2015 1
04/01/2015 2
05/01/2015 2
```
I have tried searching around for an answer, but I haven't come up with anything. Forgive me if this kind of question is common. Thanks.
|
Probably the easiest way is using `outer apply`:
```
select t.date, coalesce(t.value, tt.value) as current_value
from table t outer apply
(select top 1 t2.value
from table t2
where t2.value is not null and
t2.date <= t.date
order by t2.date desc
) tt;
```
If you know that the values are increasing, then in SQL Server 2012+ you can use `max()`:
```
select date, max(value) over (order by date) as current_value
from table t;
```
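For the increasing-values variant, here is a quick, hypothetical check in SQLite via Python (a running `MAX` ignores the NULLs, so it carries the last non-null value forward; assumes SQLite 3.25+ for window functions, and dates are rewritten in ISO form for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (d TEXT, v INTEGER);
INSERT INTO t VALUES
 ('2015-01-01',1), ('2015-01-02',NULL), ('2015-01-03',NULL),
 ('2015-01-04',2), ('2015-01-05',NULL);
""")

# The window MAX aggregates over all rows up to the current one,
# skipping NULLs, so each row sees the latest non-null value.
rows = con.execute(
    "SELECT d, MAX(v) OVER (ORDER BY d) FROM t ORDER BY d").fetchall()
print([r[1] for r in rows])  # [1, 1, 1, 2, 2]
```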
|
*Another way*: if the date field is `unique` and increases by one month `with no gaps`, then you can use a `recursive cte`:
```
with cte (dt,value) as
(
select top 1 date , value from tbl where value is not null order by date
union all
select t.date, isnull(t.value,cte.value)
from tbl t
join cte on t.date=dateadd(month,1,cte.dt)
)
select * from cte
```
[the FIDDLE DEMO](http://sqlfiddle.com/#!6/ad0e4/5)
|
SQL query running current value by date
|
[
"",
"sql",
"sql-server",
""
] |
I am trying to write a query that has a **where clause**. The problem is that I need to use the IN operator inside the WHERE condition, but I can't figure out what I'm missing.
Can someone give me a hand, please?
here is my query
```
DECLARE @OP INT = 1
SELECT * FROM Table
WHERE
Table.[status] IN (CASE WHEN @OP = 1 THEN (5,6) ELSE (12) END)
```
|
There's no need for a case statement.
```
DECLARE @OP INT = 1;
SELECT * FROM Table
WHERE (@OP = 1 AND Table.[status] IN (5,6))
OR (@OP !=1 AND Table.[status] IN (12))
```
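A quick check of this boolean rewrite, sketched in SQLite via Python — `@OP` becomes a bound parameter, and the table and status values are made up to mirror the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (status INTEGER);
INSERT INTO t VALUES (5), (6), (12), (7);
""")

def pick(op):
    # (op = 1 AND status IN (5,6)) OR (op <> 1 AND status = 12)
    q = """SELECT status FROM t
           WHERE (? = 1 AND status IN (5, 6))
              OR (? <> 1 AND status = 12)
           ORDER BY status"""
    return [r[0] for r in con.execute(q, (op, op))]

print(pick(1))  # [5, 6]
print(pick(2))  # [12]
```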
|
Try:
```
DECLARE @OP INT = 1
SELECT * FROM TABLE
WHERE
((@OP = 1 AND TABLE.[status] IN (5,6)) OR (@OP <> 1 AND TABLE.[status] = 12))
```
|
T-SQL Case Condition in Where Clause
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
table 1
table 2
Here's my query statement:
```
select * from table1
inner join table2
on table2.rollnumber = table1.rollnumber
where table1.Name <> table2.Name
```
My objective here is to compare table1 to table2's highest ID with reference to rollnumber and in this case the result with this query should be none... Your help is highly appreciated...
|
Try:
```
select * from table1 t1
cross apply ( select * from (select top 1 * from table2 t2 where t1.rollnumber = t2.rollnumber order by t2.id desc) t where t.name <> t1.name) c
```
|
You could try this:
```
DECLARE @table1 TABLE (ID INT, RollNumber INT, Name VARCHAR(100))
DECLARE @table2 TABLE (ID INT, RollNumber INT, Name VARCHAR(100))
INSERT INTO @table1 VALUES(1,4,'John Doe')
INSERT INTO @table2
VALUES (1,4,'Jane Doe'),
(2,4,'John Doe');
SELECT A.RollNumber,
A.ID,
A.Name,
C.max_id
FROM @table1 A
CROSS APPLY (SELECT MAX(ID) FROM @table2 B WHERE A.RollNumber = B.RollNumber) C(max_id)
WHERE A.ID = C.max_id
```
|
comparing two table only the max id to be consider on the other table
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I've the Table:
```
SSID distrProdName distrBranchID distrID salesQty weekID
1    Product 1     7             1       434      10
2    Product 2     7             1       223      10
3    Product 1     7             1       23       11
4    Product 2     7             1       23       11
5    Product 1     7             1       43       12
6    Product 2     7             1       12       12
7    Product 1     7             1       232      13
8    Product 2     7             1       47       13
```
I want to add a new calculate column "AvSales4LastW"
In MS Excel I resolved like this:
```
SSID distrProdName distrBranchID distrID salesQty weekID AvSales4LastW code          code1         code2         code3
1    Product 1     7             1       434      10     108,5         Product 17110 Product 1719  Product 1718  Product 1717
2    Product 2     7             1       223      10     55,75         Product 27110 Product 2719  Product 2718  Product 2717
3    Product 1     7             1       23       11     114,25        Product 17111 Product 17110 Product 1719  Product 1718
4    Product 2     7             1       23       11     61,5          Product 27111 Product 27110 Product 2719  Product 2718
5    Product 1     7             1       43       12     125           Product 17112 Product 17111 Product 17110 Product 1719
6    Product 2     7             1       12       12     64,5          Product 27112 Product 27111 Product 27110 Product 2719
7    Product 1     7             1       232      13     183           Product 17113 Product 17112 Product 17111 Product 17110
8    Product 2     7             1       47       13     76,25         Product 27113 Product 27112 Product 27111 Product 27110
```
Folmula in H9: **=B9&C9&D9&F9**
Formula in I9: **=B9&C9&D9&F9-1**
Formula in J9: **=B9&C9&D9&F9-2**
Formula in K9: **=B9&C9&D9&F9-3**
AvSales4LastW(G9) = ***AVERAGE(E9;SUMIF($H:$H;$I9;$E:$E);SUMIF($H:$H;$J9;$E:$E);SUMIF($H:$H;$K9;$E:$E)) = (47+12+23+223)/4 = 76,25***
How can I add a new column (AvSales4LastW) in MS SQL Server?
Sorry, I can't add images.
Please help me with any ideas.
Thanks.
|
As far as I know, you will not be able to add a COMPUTED column to the table, which reasons over previous rows in the table. You can however derive the [rolling average](https://msdn.microsoft.com/en-us/library/ms189461.aspx) using an average over a partition, ordered by the week number, such as the following:
```
SELECT *, avg(salesQty) over (partition by distrProdName order by weekID
ROWS between 3 preceding and current row) AS AvSales4LastW
FROM MyTable;
```
Note however that this requires a consecutive / contiguous row for each product for each week (no gaps).
[SqlFiddle here](http://sqlfiddle.com/#!6/6bd4a/1)
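The same windowed rolling average can be sanity-checked in SQLite via Python — table and column names here are hypothetical, the qty values come from the question's "Product 2" rows, and SQLite 3.25+ is assumed for window functions. Note this averages over the rows actually available, so the first weeks divide by fewer than 4:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sales (prod TEXT, wk INTEGER, qty INTEGER);
INSERT INTO sales VALUES
 ('P1',10,434), ('P2',10,223), ('P1',11,23), ('P2',11,23),
 ('P1',12,43),  ('P2',12,12),  ('P1',13,232), ('P2',13,47);
""")

# Average of the current week and up to 3 preceding weeks, per product.
rows = con.execute("""
SELECT prod, wk,
       AVG(qty) OVER (PARTITION BY prod ORDER BY wk
                      ROWS BETWEEN 3 PRECEDING AND CURRENT ROW) AS avg4
FROM sales WHERE prod = 'P2' ORDER BY wk
""").fetchall()
print([r[2] for r in rows])  # [223.0, 123.0, 86.0, 76.25]
```

The final week's value (76,25) matches the question's expected result for Product 2 in week 13.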
|
You can add a `COMPUTED COLUMN` that references a `FUNCTION` to derive the result. Here is a sample mockup of this working:
Test data:
```
Create Table TestComputed
(
PK int not null identity(1,1),
ProductName Varchar (100),
B int,
C int,
Qty int,
Wk int
)
Go
Insert TestComputed (ProductName, B, C, Qty, Wk)
Values ('1', 7, 1, 434, 10),
('2', 7, 1, 223, 10),
('1', 7, 1, 23, 11),
('2', 7, 1, 23, 11),
('1', 7, 1, 43, 12),
('2', 7, 1, 12, 12),
('1', 7, 1, 232, 13),
('2', 7, 1, 47, 13)
```
Function:
```
Create Function dbo.fnGet4WeekAverage(@WeekId Int, @ProductName Varchar (100))
Returns int
As Begin
Declare @Average Int = 0
Select @Average = Avg(Qty)
From TestComputed
Where Wk Between @WeekId - 3 And @WeekId
And ProductName = @ProductName
Return @Average
End
Go
```
Alter Table:
```
Alter Table TestComputed Add AvSales4LastW As dbo.fnGet4WeekAverage(wk, ProductName)
```
Results:
```
Select * From TestComputed
PK ProductName B C Qty Wk AvSales4LastW
1 1 7 1 434 10 434
2 2 7 1 223 10 223
3 1 7 1 23 11 228
4 2 7 1 23 11 123
5 1 7 1 43 12 166
6 2 7 1 12 12 86
7 1 7 1 232 13 183
8 2 7 1 47 13 76
```
|
Add new calculate column (Average Sales 4 last weeks) to an existing table
|
[
"",
"sql",
"sql-server",
"calculated-columns",
""
] |
I am getting the error
> syntax error at or near "where" LINE 5: where zip in (select zipcode
> from zips where city = 'Sacra
when I try to run this code.
```
update listings
set price = CASE WHEN (listings.price IS NOT NULL) THEN (price *
(((100+(select price_change from zips where zips.zipcode=listings.zip))/100)))
where zip in (select zipcode from zips where city = 'Sacramento');
```
Does anybody see any easy-to-fix errors? Or did I come up with some garbage code?
|
Add an **END** keyword just before the **WHERE** clause:
```
update listings
set price = CASE WHEN (listings.price IS NOT NULL) THEN (price *
(((100+(select price_change from zips where zips.zipcode=listings.zip))/100))) END
where zip in (select zipcode from zips where city = 'Sacramento');
```
---
As per @a\_horse\_with\_no\_name's Comment
```
update listings
set price = (price * (((100+(select price_change from zips where zips.zipcode=listings.zip))/100)))
where zip in (select zipcode from zips where city = 'Sacramento') and listings.price IS NOT NULL
```
|
An [SQL `CASE` expression](http://www.postgresql.org/docs/current/static/functions-conditional.html) needs to be ended with `END` keyword.
|
Postgresql Syntax error at or near "where"
|
[
"",
"sql",
"postgresql",
"syntax-error",
""
] |
> Table storage engine for <TABLE> doesn't have this option.
This is the error returned by MySQL on an `order by` query. The column type is `varchar(2000)`.
Query:
```
select * from `dbo.table_1` order by textT;
```
Error returned:
> ERROR 1031 (HY000): Table storage engine for 'dbo.table\_1' doesn't have this option.
Why does this happen? And how can I fix it?
|
This problem appears to occur when you're importing a table definition that had been created with MyISAM but later was switched to InnoDB; the resulting `ROW_FORMAT` options appear to be invalid.
If you're trying to import an exported database and encounter this problem, you can simply search and replace `ROW_FORMAT=FIXED` with nothing.
I used the following to do so really quickly:
```
sed -ie 's/ROW_FORMAT=FIXED//g' backup.sql
```
Problem solved! Thanks to jbrahy for pointing out that it was the ROW\_FORMAT that was the problem.
EDIT: Updated to work for more platforms as per @seven's suggestion
EDIT2: Also note, as per @Steen-Schütt, this may be a safer fix
```
sed -ie 's/ROW_FORMAT=FIXED/ROW_FORMAT=COMPACT/g' backup.sql
```
|
You can also try this:
```
ALTER TABLE `dbo.table_1` ROW_FORMAT = DEFAULT ;
```
|
Table storage engine for <TABLE> doesn't have this option on order by query (ERROR 1031)
|
[
"",
"mysql",
"sql",
""
] |
I have this table:
* MyTable(ID, FK, ...)
I am using this query:
```
select ID from Mytable where FK <> 1
group by ID, FK
order by ID
```
This gives me the result that I want:
* 255
* 255
* 267
* 268
* 790
* ...
The 255 is duplicated because it has two different FKs. The rest of the IDs have the same FK. I would like to get the IDs which have more than one FK with different values.
If an ID has two rows with FK = 2 and FK = 3 then get this ID, but if the ID has FK = 2, FK = 2, FK = 2 I don't want this ID because it has the same FK.
How could I get this IDs?
Thank you so much.
|
You should count distinct `FK`s
```
select ID from Mytable where FK <> 1
group by ID
having count(distinct FK) > 1
order by ID
```
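A quick, hypothetical check of the distinct-count approach in SQLite via Python, using IDs similar to the question's sample (the FK values are made up, including a repeated-same-FK case for 267):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE MyTable (ID INTEGER, FK INTEGER);
INSERT INTO MyTable VALUES
 (255,2),(255,3),(267,4),(267,4),(268,5),(790,6);
""")

# Only IDs whose rows carry more than one distinct FK qualify;
# 267 appears twice but with the same FK, so it is excluded.
rows = con.execute("""
SELECT ID FROM MyTable WHERE FK <> 1
GROUP BY ID
HAVING COUNT(DISTINCT FK) > 1
ORDER BY ID
""").fetchall()
print([r[0] for r in rows])  # [255]
```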
|
Use `HAVING` to find only the IDs that exist more than once:
```
select DISTINCT ID
from Mytable
where FK <> 1
group by ID, FK
having count(*) >= 2
order by ID
```
|
how to get duplicates when I group rows?
|
[
"",
"sql",
"t-sql",
""
] |
Let's say I have a `posts` table and a `comments` table. I want to get the first two posts from my database, along with all comments that go with them. The SQL query could look like this:
```
SELECT C.CommentId, P.PostId FROM Post AS P
LEFT JOIN Comment AS C ON P.PostId = C.PostId
ORDER BY P.PostId ASC
OFFSET 0 ROWS FETCH NEXT 6 ROWS ONLY
```
And that gives me this return set:
```
PostId CommentId
1 1
1 2
1 3
2 1
2 2
2 3
```
Instead of hardcoding the "6" in `FETCH NEXT 6 ROWS`, how can I write my query in such a way that I am getting all associated comments for the first two posts in my database, regardless of what the `P.PostId` value is?
|
Add the `where` line below.
```
SELECT C.CommentId, P.PostId FROM Post AS P
LEFT JOIN Comment AS C ON P.PostId = C.PostId
where p.postID in (select top 2 postID from post order by postID)
ORDER BY P.PostId ASC
```
|
```
;WITH CTE AS (
SELECT C.CommentId
, P.PostId
, DENSE_RANK() OVER (ORDER BY P.PostId ASC) rn
FROM Post AS P
LEFT JOIN Comment AS C ON P.PostId = C.PostId
)
SELECT CommentId
    ,PostId
FROM CTE
WHERE rn <= 2
ORDER BY PostId ASC
```
## OR
```
SELECT CommentId
,PostId
FROM
(
SELECT C.CommentId
, P.PostId
, DENSE_RANK() OVER (ORDER BY P.PostId ASC) rn
FROM Post AS P
LEFT JOIN Comment AS C ON P.PostId = C.PostId
) A
WHERE rn <= 2
ORDER BY PostId ASC
```
|
How to control number of returned rows in an SQL query, without knowing how many rows we need
|
[
"",
"sql",
"sql-server",
""
] |
I have two tables
**WAC table**
```
ID wac_inc item
-- ----------------- ----
1 2.310000000000000 A
2 1.100000000000000 A
3 2.130000000000000 A
4 1.340000000000000 A
```
**Baseline Table**
```
item baseline
---- ------------------
A 10.000000000000000
```
**Expected Result**
```
ID wac_inc item Running_Mul
-- ----------------- ---- -----------
1 2.310000000000000 A 10.231 -- 10 * (1+(2.310000000000000/100))
2 1.100000000000000 A 10.343541 -- 10.231 * (1+(1.100000000000000/100))
3 2.130000000000000 A 10.563858 -- 10.343541 * (1+(2.130000000000000/100))
4 1.340000000000000 A 10.705413 -- 10.563858 * (1+(1.340000000000000/100))
```
Formula to find `running_mul` is
> Baseline \* (1 + (wac\_inc/100))
**[SQLFIDDLE](http://sqlfiddle.com/#!6/6ee92)**
Here, for every row, the previous row's `Running_Mul` value is the `baseline`, and for the first row the `baseline` comes from the `baseline table`.
Hope I made it clear. AFAIK we can do this using a `CURSOR`, but I want to **avoid** `RBAR` as much as possible.
Can anyone suggest a better way of doing it?
|
Try:
```
DECLARE @t TABLE
(
ID INT ,
wac DECIMAL(30, 10) ,
item CHAR(1)
)
DECLARE @b TABLE
(
item CHAR(1) ,
baseline DECIMAL(30, 10)
)
INSERT INTO @t
VALUES ( 1, 2.31, 'A' ),
( 2, 1.10, 'A' ),
( 3, 2.13, 'A' ),
( 4, 1.34, 'A' )
INSERT INTO @b
VALUES ( 'A', 10 );
WITH ordercte
AS ( SELECT * ,
ROW_NUMBER() OVER ( PARTITION BY item ORDER BY ID ) AS rn
FROM @t
),
rec
AS ( SELECT t.item ,
t.ID ,
t.wac ,
t.rn ,
b.baseline * ( 1 + ( t.wac / 100 ) ) AS m
FROM ordercte t
JOIN @b b ON b.item = t.item
WHERE t.rn = 1
UNION ALL
SELECT t.item ,
t.ID ,
t.wac ,
t.rn ,
c.m * ( 1 + ( t.wac / 100 ) )
FROM ordercte t
JOIN rec c ON t.item = c.item
AND t.rn = c.rn + 1
)
SELECT id ,
wac ,
item ,
m
FROM rec
```
Output:
```
id wac item m
1 2.3100000000 A 10.231000
2 1.1000000000 A 10.343541
3 2.1300000000 A 10.563858
4 1.3400000000 A 10.705414
```
**EDIT1**
I was trying to implement the LOG/EXP trick but could not manage until @usr led me to the solution, so all credit to user @usr:
```
WITH ordercte
AS ( SELECT t.ID ,
t.wac ,
t.item ,
b.baseline ,
ROW_NUMBER() OVER ( PARTITION BY t.item ORDER BY ID ) AS rn
FROM @t t
JOIN @b b ON b.item = t.item
)
SELECT baseline
* EXP(SUM(LOG(( 1 + ( wac / 100 ) ))) OVER ( PARTITION BY item ORDER BY rn )) AS m
FROM ordercte
```
Or just:
```
SELECT t.ID, t.wac, t.item, baseline
* EXP(SUM(LOG(( 1 + ( wac / 100 ) ))) OVER ( PARTITION BY t.item ORDER BY t.ID )) AS m
FROM @t t
JOIN @b b ON b.item = t.item
```
if ID is the field you order by.
Output:
```
ID wac item m
1 2.3100000000 A 10.231
2 1.1000000000 A 10.343541
3 2.1300000000 A 10.5638584233
4 1.3400000000 A 10.7054141261722
```
**EDIT2**
For SQL 2008 use:
```
WITH cte
AS ( SELECT t.ID ,
t.wac ,
t.item ,
baseline ,
( SELECT SUM(LOG(( 1 + ( wac / 100 ) )))
FROM @t it
WHERE it.item = t.item AND it.ID <= t.ID
) AS e
FROM @t t
JOIN @b b ON b.item = t.item
)
SELECT ID, wac, item, baseline * EXP(e) AS m
FROM cte
```
**EDIT3**
Here is a complete solution for SQL Server 2008, dealing with NULLs and negative values:
```
WITH cte
AS ( SELECT t.ID ,
t.wac ,
t.item ,
b.baseline ,
ca.e,
ca.n,
ca.m
FROM @t t
JOIN @b b ON b.item = t.item
CROSS APPLY(SELECT SUM(LOG(ABS(NULLIF( 1 + wac / 100 , 0)))) as e,
SUM(SIGN(CASE WHEN 1 + wac / 100 < 0 THEN 1 ELSE 0 END)) AS n,
MIN(ABS(1 + wac / 100)) AS m
FROM @t it
WHERE it.item = t.item AND it.ID <= t.ID
) ca
)
SELECT ID, wac, item, baseline *
CASE
WHEN m = 0 THEN 0
WHEN n % 2 = 1 THEN -1 * EXP(e)
ELSE EXP(e)
END as Result
FROM cte
```
|
You can transform a series of multiplications into a series of additions with the following math trick:
```
exp(log(a) + log(b)) = a * b
```
So `MUL(a)` is `EXP(SUM(LOG(a)))`.
```
SELECT SUM(val) AS [Sum], EXP(SUM(LOG(val))) AS Product
FROM (VALUES
(1), (2), (3), (4)
) x(val)
```
This emits `sum = 10, product = 24`.
Potential problems are rounding errors and zero factors.
You can now use one of the usual ways to achieve a running aggregate such as windowing functions. That's a solved problem.
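The identity behind the trick is easy to verify outside SQL; here is a minimal Python check (using the same 1, 2, 3, 4 factors from the example), plus an illustration of the zero-factor caveat:

```python
import math

vals = [1, 2, 3, 4]

# exp(sum(log(v))) reproduces the product of positive factors.
product_via_logs = math.exp(sum(math.log(v) for v in vals))
print(round(product_via_logs, 6))  # 24.0

# Caveat mentioned above: a zero factor breaks log() entirely,
# so zero (and negative) factors need special handling in SQL too.
try:
    math.log(0)
except ValueError:
    print("log(0) is undefined; zero factors need special handling")
```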
|
How to calculate Running Multiplication
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have a self-referencing table `Foo`
```
[Id] int NOT NULL,
[ParentId] int NULL, --Foreign key to [Id]
[Type] char(1) NOT NULL
```
`[Id]` is clustered primary key, indexes on `[ParentId]` and `[Type]` columns.
Assume a maximum depth of 1 on the hierarchy (child nodes cannot have child nodes).
---
I want to get all rows of Foo that satisfy the following:
* Type is **A**
* Has a **B** in its family tree
* Has a **C** *or* **D** in its family tree
The following query using JOINs returns the desired results, but the performance is terrible:
```
SELECT DISTINCT [Main].*
FROM Foo AS [Main]
--[Main] may not be root node
LEFT OUTER JOIN Foo AS [Parent]
ON [Parent].[Id] = [Main].[ParentId]
--Must have a B in tree
INNER JOIN Foo AS [NodeB]
ON (
[NodeB].[ParentId] = [Main].[ParentId] --Sibling
OR [NodeB].[ParentId] = [Main].[Id] --Child
OR [NodeB].[Id] = [Parent].[Id] --Parent
)
AND [NodeB].[Type] = 'B'
--Must have a C or D in tree
INNER JOIN Foo AS [NodeCD]
ON (
[NodeCD].[ParentId] = [Main].[ParentId] --Sibling
OR [NodeCD].[ParentId] = [Main].[Id] --Child
OR [NodeCD].[Id] = [Parent].[Id] --Parent
)
AND [NodeCD].[Type] IN ('C', 'D')
WHERE [Main].[Type] = 'A'
```
From the actual execution plan, limited to looking at only the first 10,000 of 650,000 rows:

---
If I remove the --Parent lines from the query
```
OR [NodeB].[Id] = [Parent].[Id] --Parent
OR [NodeCD].[Id] = [Parent].[Id] --Parent
```
then execution becomes almost instantaneous, but it misses the cases where the A is a child and has only one sibling
```
Misses this: Catches this:
B B
├A ├A
└C ├B
└C
```
---
I have tried to come up with a CTE to do this, as it seems more promising in terms of performance, but I have been unable to figure how to exclude those trees that do not satisfy the criteria.
CTE so far
```
WITH [Parent] AS
(
SELECT *
FROM [Foo]
WHERE [ParentId] IS NULL
UNION ALL
SELECT [Child].*
FROM Foo AS [Child]
JOIN [Parent]
ON [Child].[ParentId] = [Parent].Id
WHERE [Child].[Type] = 'B'
UNION ALL
SELECT [ChildCD].*
FROM Foo AS [ChildCD]
JOIN [Parent]
ON [ChildCD].[ParentId] = [Parent].Id
WHERE [ChildCD].[Type] IN ('C', 'D')
)
SELECT *
FROM [Parent]
WHERE [Type] = 'A';
```
However, if I try to add the Sibling-Child-Parent OR statements, I hit the maximum recursion level of 100.
---
[SQL Fiddle with test data](http://sqlfiddle.com/#!6/ebdab/1)
|
The case where the node being examined is a root node is sufficiently distinct from the case where it is a child node that you will probably be better off querying the two separately and forming the `UNION ALL` of the two sets. You can simplify, however, with a common table expression that identifies those trees that contain the nodes you're after. Overall, that might look like this:
```
WITH [TargetFamilies] AS (
SELECT
COALESCE(ParentId, Id) AS FamilyId
FROM Foo
GROUP BY COALESCE(ParentId, Id)
HAVING
COUNT(CASE Type WHEN 'B' THEN 1 END) > 0
AND COUNT(CASE Type WHEN 'C' THEN 1 WHEN 'D' THEN 1 END) > 0
)
-- root nodes
SELECT [Main].*
FROM
Foo AS [Main]
JOIN [TargetFamilies] ON [Main].Id = [TargetFamilies].FamilyId
WHERE
[Main].Type = 'A'
UNION ALL
-- child nodes
SELECT
[Main].*
FROM
Foo AS [Main]
JOIN [TargetFamilies] ON [Main].ParentId = [TargetFamilies].FamilyId
WHERE
[Main].Type = 'A'
```
|
How about something like this?
```
select
[F].[Id]
from
[Foo] [F]
where
[F].[Type] = 'A' and
(
(
[F].[ParentId] is null and
exists (select 1 from [Foo] [Child] where [F].[Id] = [Child].[ParentId] and [Child].[Type] = 'B') and
exists (select 1 from [Foo] [Child] where [F].[Id] = [Child].[ParentId] and [Child].[Type] in ('C', 'D'))
) or
(
[F].[ParentId] is not null and
exists (select 1 from [Foo] [ParentOrSibling] where [F].[ParentId] in ([ParentOrSibling].[Id], [ParentOrSibling].[ParentId]) and [ParentOrSibling].[Type] = 'B') and
exists (select 1 from [Foo] [ParentOrSibling] where [F].[ParentId] in ([ParentOrSibling].[Id], [ParentOrSibling].[ParentId]) and [ParentOrSibling].[Type] in ('C', 'D'))
)
);
```
|
Select rows from hierarchy based on related nodes
|
[
"",
"sql",
"sql-server",
"hierarchy",
""
] |
What would anyone think of this query?
```
SELECT * FROM ta LEFT JOIN
(SELECT * FROM tb WHERE tb.end > 10000) AS tb
ON ta.id = tb.id
```
Is it OK to use the table name ("tb") as an alias, or are there some invisible consequences?
Reason: I have static methods that map a ResultSet to an object and use the table name as a prefix. It would be nice to leave these methods alone.
|
Yes, you can use any alias, as long as it doesn't conflict with other aliases (tables without an alias are given an alias equal to their name). This applies within a single query scope. In your query you have two scopes: one for the outer query and one for the inner query, so you will not have a conflict even though the same alias appears in both scopes.
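As a sanity check, here is a hedged sketch of the same query shape against SQLite via Python's stdlib `sqlite3` (the scoping rule is the same as in MySQL; the column is renamed `end_` because `END` is a reserved word in SQLite, and the sample rows are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE ta (id INTEGER);
    CREATE TABLE tb (id INTEGER, end_ INTEGER);
    INSERT INTO ta VALUES (1), (2);
    INSERT INTO tb VALUES (1, 20000), (2, 5000);
""")

# The inner "tb" alias lives in the subquery's scope, the outer one
# in the main query's scope, so reusing the name is legal.
rows = con.execute("""
    SELECT ta.id, tb.end_
    FROM ta LEFT JOIN
         (SELECT * FROM tb WHERE tb.end_ > 10000) AS tb
    ON ta.id = tb.id
    ORDER BY ta.id
""").fetchall()
print(rows)  # [(1, 20000), (2, None)]
```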
|
Your query looks fine but why don't you use a simple `LEFT JOIN`
```
SELECT ta.*
FROM ta
LEFT JOIN tb ON ta.id = tb.id
WHERE tb.end > 10000
```
|
Can table name be used as alias?
|
[
"",
"mysql",
"sql",
""
] |
I am a beginner at SQL, trying to find the difference between the average price of CDs made by the artist "Skillet" and the average price of all CDs in the database.
E.g. if average of skillet cds = £6, and average price of all cds = £8 then there is a £2 gap (absolute value)
so something like this:
```
SELECT AVG(cdPrice) FROM cd
```
subtract
```
SELECT AVG(cdPrice)
FROM cd, artist
WHERE artName = "Skillet" AND artist.artID = cd.artID
```
Thanks
|
If I'm understanding your question correctly, you're looking for something like:
`SELECT AVG(cdPrice)-(SELECT AVG(cdPrice) FROM cd JOIN artist ON artist.artID = cd.artID WHERE artName = "Skillet") FROM cd;`
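A quick way to convince yourself that the scalar-subquery subtraction works is to run the same shape against SQLite from Python; the table contents below are made up for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE artist (artID INTEGER, artName TEXT);
    CREATE TABLE cd (cdPrice REAL, artID INTEGER);
    INSERT INTO artist VALUES (1, 'Skillet'), (2, 'Other');
    INSERT INTO cd VALUES (6.0, 1), (6.0, 1), (10.0, 2);
""")

# Overall average is 22/3, Skillet's average is 6.
(diff,) = con.execute("""
    SELECT AVG(cdPrice) - (SELECT AVG(cdPrice)
                           FROM cd JOIN artist ON artist.artID = cd.artID
                           WHERE artName = 'Skillet')
    FROM cd
""").fetchone()
print(diff)
```

Wrap the whole expression in `ABS(...)` if you want the absolute gap, as in the question.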
|
```
select x.avg_overall - y.avg_skillet as net_avg
from (select avg(cdprice) as avg_overall from cd) x
cross join (select avg(cdprice) as avg_skillet
from cd
join artist
using (artid)
where artname = 'Skillet') y
```
|
SQL aggregate function average
|
[
"",
"sql",
""
] |
I have a massive table I'm trying to thin out in MS SQL Server Management Studio. Currently there is a row for every 5 seconds of data. I want to shrink the table and only save 1 row for every 30 minutes.
For example, instead of having thousands of rows at
```
Timestamp A B C
2015-01-01 00:00:00 1 5 6
2015-01-01 00:00:05 5 7 1
2015-01-01 00:00:10 2 2 3
2015-01-01 00:00:15 2 1 0
......
2015-04-10 13:55:55 4 5 6
2015-04-10 13:56:00 6 4 2
```
I want to thin it out so I only have
```
Timestamp A B C
2015-01-01 00:00:00 1 5 6
2015-01-01 00:30:00 2 5 7
2015-01-01 01:00:00 1 7 6
......
2015-04-10 13:30:00 4 5 6
2015-04-10 14:00:00 6 4 2
```
Thank you!
|
Given your sample data, you could do:
```
select t.*
from table t
where datepart(minute, timestamp) in (0, 30) and
datepart(second, timestamp) = 0;
```
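The same minute/second filter can be sketched in SQLite (via Python's `sqlite3`), where `strftime` plays the role of `datepart`; the sample rows are illustrative:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE readings (ts TEXT, a INTEGER);
    INSERT INTO readings VALUES
        ('2015-01-01 00:00:00', 1),
        ('2015-01-01 00:00:05', 5),
        ('2015-01-01 00:30:00', 2),
        ('2015-01-01 00:30:05', 3);
""")

# Keep only rows whose minute is 0 or 30 and whose second is 0.
rows = con.execute("""
    SELECT ts, a FROM readings
    WHERE strftime('%M', ts) IN ('00', '30')
      AND strftime('%S', ts) = '00'
    ORDER BY ts
""").fetchall()
print(rows)  # [('2015-01-01 00:00:00', 1), ('2015-01-01 00:30:00', 2)]
```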
|
`delete from [massive Table] where datepart(mi,timestamp) not in (0,30)`
will delete all rows that don't match your criteria.
|
SQL: Filter data by datetime
|
[
"",
"sql",
"sql-server",
"datetime",
""
] |
I have three tables in my DB - `APPLICATION`, `APPLICANT` and `ADDRESS`
There is 1 row in `APPLICATION`.
`APPLICANT` can have 1 or 2 rows linked back to `APPLICATION` via `APPLICATION_ID`.
`ADDRESS` can have 1, 2 or 3 rows linked back to `APPLICANT` via `APPLICANT_ID`.
> `APPLICATION` -> (1-to-many on `APPLICATION_ID`) -> `APPLICANT` -> (1-to-many on `APPLICANT_ID`) -> `ADDRESS`
I need to write a query which extracts specific fields from each table (changed from 'all information') into 1 result set. The result needs to contain all possible info for each application in one result row. Can someone please point me in the direction of the best solution for this?
I hope the question is clear. I searched through SO already but could only really find case-specific answers and nothing general regarding one-to-many joins.
OK I thought I should elaborate a little to help anyone who actually takes the time to have a think about this. Here's some example dummy data from all three tables.
```
APPLICATION
-----------
APPLICATION_ID|APP1|APP2|OTHER_STUFF
1 |1 |1 |x
APPLICANT
---------
APPLICANT_ID|APPLICATION_ID|FORENAME|OTHER_STUFF
1 |1 |Homer |x
2 |1 |Marge |x
ADDRESS
-------
ADDRESS_ID|APPLICANT_ID|STREET |OTHER_STUFF
1 |1 |Sesame Street |x
2 |1 |Evergreen Terrace|x
3 |2 |Evergreen Terrace|x
```
The result from the SQL query would look something like this (hopefully);
```
APPLICATION_ID|APPLICANT_ID1|FORENAME1|ADDRESS_ID1|STREET1 |ADDRESS_ID2|STREET2 |APPLICANT_ID2|FORENAME_2|ADDRESS_ID3|STREET3
1 |1 |Homer |1 |Sesame Street|2 |Evergreen Terrace|2 |Marge |3 |Evergreen Terrace
```
Thanks
|
```
; WITH applicants AS (
SELECT applicant_id
, application_id
, forename
, other_stuff
, Row_Number() OVER (PARTITION BY application_id ORDER BY applicant_id) As sequence
FROM applicant
)
, addresses AS (
SELECT address_id
, applicant_id
, street
, other_stuff
, Row_Number() OVER (PARTITION BY applicant_id ORDER BY address_id) As sequence
FROM address
)
SELECT application.application_id
, first_applicants.applicant_id As applicant_id1
, first_applicants.forename As forename1
, first_applicants_first_addresses.address_id As address_id1
, first_applicants_first_addresses.street As street1
, first_applicants_second_addresses.address_id As address_id2
, first_applicants_second_addresses.street As street2
, second_applicants.applicant_id As applicant_id2
, second_applicants.forename As forename2
, second_applicants_first_addresses.address_id As address_id3
, second_applicants_first_addresses.street As street3
, second_applicants_second_addresses.address_id As address_id4
, second_applicants_second_addresses.street As street4
FROM application
LEFT
JOIN applicants As first_applicants
ON first_applicants.application_id = application.application_id
AND first_applicants.sequence = 1
LEFT
JOIN addresses As first_applicants_first_addresses
ON first_applicants_first_addresses.applicant_id = first_applicants.applicant_id
AND first_applicants_first_addresses.sequence = 1
LEFT
JOIN addresses As first_applicants_second_addresses
ON first_applicants_second_addresses.applicant_id = first_applicants.applicant_id
AND first_applicants_second_addresses.sequence = 2
LEFT
JOIN applicants As second_applicants
ON second_applicants.application_id = application.application_id
AND second_applicants.sequence = 2
LEFT
JOIN addresses As second_applicants_first_addresses
ON second_applicants_first_addresses.applicant_id = second_applicants.applicant_id
AND second_applicants_first_addresses.sequence = 1
LEFT
JOIN addresses As second_applicants_second_addresses
ON second_applicants_second_addresses.applicant_id = second_applicants.applicant_id
AND second_applicants_second_addresses.sequence = 2
WHERE application.application_id = 1
;
```
|
You will surely get it working with the following query. Hope this helps you.
```
SELECT * FROM APPLICATION as App
INNER JOIN APPLICANT as A1 on A1.APPLICATION_ID = App.APPLICATION_ID
INNER JOIN ADDRESS as A2 on A2.APPLICANT_ID = A1.APPLICANT_ID
```
|
One-to-Many Join
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"join",
"sql-server-2008-r2",
""
] |
I know `LIKE` can be used instead of `CONTAINS`, but `CONTAINS` is relatively faster than `LIKE`. Here, the following query doesn't return any results. Why?
Query:
```
SELECT CustomerName FROM members WHERE CONTAINS(Country, 'Mexico');
```
DATABASE:

|
**MySQL Solution**
```
select customername
from members
where match(country) against ('Mexico')
```
**MS SQL Server Solution**
Full text indexes aren't necessarily always populated after creation.
Use the following code to ensure the index updates:
```
ALTER FULLTEXT INDEX ON members SET CHANGE_TRACKING AUTO;
```
More info: <https://msdn.microsoft.com/en-us/library/ms142575.aspx>
Full example (including change tracking option on index creation rather than in a later alter statement):
```
use StackOverflow
go
create table myTable
(
id bigint not null identity(1,1)
constraint myTable_pk primary key clustered
, name nvarchar(100)
);
create fulltext catalog StackOverflow_Catalog;
create fulltext index
on myTable(name)
key index myTable_pk
on StackOverflow_Catalog
with change_tracking auto;
insert myTable
select 'Mexico'
union select 'London';
select *
from myTable
where contains(name,'Mexico');
```
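For readers without a SQL Server instance handy, a rough SQLite analog of full-text matching uses the FTS5 module (assuming it is compiled into your SQLite build, as it is in most CPython distributions; the sample rows are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE VIRTUAL TABLE members USING fts5(CustomerName, Country);
    INSERT INTO members VALUES ('Ana', 'Mexico'), ('Thomas', 'Germany');
""")

# 'Country: Mexico' restricts the full-text match to one column.
rows = con.execute(
    "SELECT CustomerName FROM members WHERE members MATCH 'Country: Mexico'"
).fetchall()
print(rows)  # [('Ana',)]
```

Like SQL Server full-text indexes, FTS5 is a separate structure from the base table; here the virtual table itself is the index.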
|
Have you tried using **IN**? It might be even faster if it works in your case:
```
SELECT CustomerName
FROM members
WHERE Country IN ('Mexico')
```
But in your case you may use a wildcard in `CONTAINS`, which is still faster than `LIKE`, so just use this:
```
SELECT CustomerName
FROM members
WHERE CONTAINS(Country, '"*Mexico"');
```
|
CONTAINS doesn't return any result
|
[
"",
"mysql",
"sql",
""
] |
I am attempting to delete duplicate rows from my table. For example, if my customer table contained the following:
```
first_name last_name email
fred wilford wilford@xchange.co.uk
fred wilford wilford@xchange.co.uk
Damian Jones jones@xchange.co.uk
```
the ideal result should be the following:
```
first_name last_name email
fred wilford wilford@xchange.co.uk
Damian Jones jones@xchange.co.uk
```
This should be fairly straightforward to do with an intermediary table: create it containing the duplicate rows, delete the duplicates from the master table, and finally insert the rows from the intermediary table back into the master table. However, I would prefer to avoid the intermediary table and just use something like a WITH statement.
consider the following example:
```
with dups as
(
select name,last_name,email from customer group by 1,2,3 having
count(*) > 1
)
delete from customer
using
(
select name,last_name,email from customer group by 1,2,3 having
count(*) > 1
)b
where b.name = customer.name;
insert into customer
(
select name,last_name,email from dups
)
```
The trouble is that the final insert statement fails, as "dups" is not recognised. Is there a way to fix this? Thanks in advance.
|
You could also do it in this way:
*Schema:*
```
create table tbl (first_name varchar(50),
last_name varchar(50),
email varchar(50));
insert into tbl values
('fred','wilford','wilford@xchange.co.uk'),
('fred','wilford','wilford@xchange.co.uk'),
('Damian','Jones','jones@xchange.co.uk');
```
*DO this:*
```
CREATE TABLE temp (first_name varchar(50),
last_name varchar(50),
email varchar(50));
INSERT INTO temp SELECT DISTINCT * FROM tbl;
DROP TABLE tbl;
ALTER TABLE temp RENAME TO tbl;
```
*check:*
```
select * from tbl;
```
*result:*
```
first_name last_name email
fred wilford wilford@xchange.co.uk
Damian Jones jones@xchange.co.uk
```
|
You can [chain the CTE](http://www.depesz.com/2011/03/16/waiting-for-9-1-writable-cte/) if you want to:
```
WITH dups AS
(
select name,last_name,email from customer group by 1,2,3 having
count(*) > 1
),
del AS(
DELETE FROM customer USING dups WHERE dups.name = customer.name RETURNING dups.*
),
ins AS(
INSERT INTO customer(name,last_name,email) SELECT name,last_name,email FROM del RETURNING del.*
)
SELECT * FROM ins;
```
|
using "with" statement on multiple queries?
|
[
"",
"sql",
"postgresql",
""
] |
I have a snippet of SQL that compares the last two records and gives the `datediff` in seconds, but the way I have it is quite slow, taking up to 20 seconds to execute depending on how many controllerIDs I need to check.
What would be a more efficient way of doing this?
```
select
T.controllerID,
datediff(ss, T.Max_dtReading, T1.Max_dtReading) As ElaspedTime
from
(select
controllerID,
max(dtReading) as Max_dtReading
from
ReaderData
where
CardID = 'FFFFFFF0' AND (controllerID in(2,13,28,30,37,40))
group by
controllerID) as T
outer apply
(select
max(T1.dtReading) as Max_dtReading
from
ReaderData as T1
where
T1.CardID = 'FFFFFFF0' AND (controllerID in(2,13,28,30,37,40))
and T1.controllerID = T.controllerID
and T1.dtReading < T.Max_dtReading) as T1
```
|
I might suggest using `row_number()` with aggregation for this:
```
select controllerID,
datediff(second, max(dtReading), min(dtReading)
) As ElaspedTime
from (select controllerID, dtReading,
row_number() over (partition by controllerID order by dtReading desc) as seqnum
from ReaderData
where CardID = 'FFFFFFF0' AND
controllerID in (2, 13, 28, 30, 37, 40)
) r
where seqnum <= 2
group by controllerID
```
|
```
;WITH CTE AS
(select controllerID
,dtReading
,ROW_NUMBER() OVER (PARTITION BY controllerID ORDER BY dtReading DESC) rn
from ReaderData
where CardID = 'FFFFFFF0'
AND controllerID IN (2,13,28,30,37,40)
)
select C1.controllerID
,datediff(ss, C1.dtReading, C2.dtReading) As ElaspedTime
from CTE C1
LEFT JOIN CTE C2 ON C1.controllerID = C2.controllerID
AND C1.rn = 1
AND C1.rn < C2.rn
```
|
Bad performance with sql query
|
[
"",
"sql",
"sql-server",
""
] |
What is the most efficient SQL query to do the following?
Following are the two tables joined on ItemID:
Item { ItemID, OtherData }
Love { ItemID, Username }
I pass in a "someuser" as Username in the query.
Query should return ItemID, OtherData, and a boolean indicating whether there exists a Love for the ItemID.
|
Try something like this:
```
SELECT Item.*
,CASE WHEN t.ItemID IS NOT NULL THEN 'YES' ELSE 'NO' END AS flag
FROM Item LEFT JOIN (SELECT DISTINCT ItemID FROM Love WHERE Username = someuser) t
ON Item.ItemId = t.ItemId
```
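The same pattern can be sketched against SQLite via Python's `sqlite3` (the `DISTINCT` subquery is what prevents duplicated `Item` rows when a user loves an item more than once; sample data is illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Item (ItemID INTEGER, OtherData TEXT);
    CREATE TABLE Love (ItemID INTEGER, Username TEXT);
    INSERT INTO Item VALUES (1, 'a'), (2, 'b');
    -- duplicate loves, to show why DISTINCT matters
    INSERT INTO Love VALUES (1, 'someuser'), (1, 'someuser');
""")

rows = con.execute("""
    SELECT Item.ItemID, Item.OtherData,
           CASE WHEN t.ItemID IS NOT NULL THEN 1 ELSE 0 END AS loved
    FROM Item LEFT JOIN
         (SELECT DISTINCT ItemID FROM Love WHERE Username = 'someuser') t
    ON Item.ItemID = t.ItemID
    ORDER BY Item.ItemID
""").fetchall()
print(rows)  # [(1, 'a', 1), (2, 'b', 0)]
```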
|
```
SELECT I.ItemID, I.OtherData,
       CASE WHEN L.ItemID IS NULL THEN 0 ELSE 1 END AS Loved
FROM Item I
LEFT JOIN Love L ON I.ItemID = L.ItemID
```
To the op stop changing your requirements!
|
What is the most efficient SQL query to do the following?
|
[
"",
"sql",
""
] |
I have strings like 'keepme:cutme' or 'string-without-separator' which should become respectively 'keepme' and 'string-without-separator'. Can this be done in PostgreSQL? I tried:
```
select substring('first:last' from '.+:')
```
But this leaves the `:` in and won't work if there is no `:` in the string.
|
Use [**`split_part()`**](https://www.postgresql.org/docs/current/functions-string.html):
```
SELECT split_part('first:last', ':', 1) AS first_part
```
Returns the whole string if the delimiter is not there. And it's simple to get the 2nd or 3rd part etc.
Substantially faster than functions using regular expression matching. And since we have a fixed delimiter we don't need the magic of regular expressions.
Related:
* [Split comma separated column data into additional columns](https://stackoverflow.com/questions/8584967/split-comma-separated-column-data-into-additional-columns/8612456#8612456)
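Outside the database, the same keep-everything-before-the-first-delimiter behaviour (including returning the whole string when the delimiter is absent, like `split_part`) maps to `str.partition` in Python; a minimal sketch:

```python
def first_part(s: str, delim: str = ":") -> str:
    # str.partition mirrors split_part(s, delim, 1): when the
    # delimiter is absent it returns the whole string.
    return s.partition(delim)[0]

print(first_part("keepme:cutme"))              # keepme
print(first_part("string-without-separator"))  # string-without-separator
```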
|
regexp\_replace() may be overkill for what you need, but it also gives the additional benefit of *regex*. For instance, if strings use multiple delimiters.
Example use:
```
select regexp_replace( 'first:last', E':.*', '');
```
|
Cut string after first occurrence of a character
|
[
"",
"sql",
"string",
"postgresql",
"pattern-matching",
"delimiter",
""
] |
```
create table Emp(emp_name varchar(20),emp_sal int)
insert into Emp values
('amit',100),('animesh',200),('ashish',300),('ashok',400),
('ramesh',500),('ratan',600),('rajesh',700),
('lokesh',300),('lalit',900),('lakhan',800)
```
> Can I calculate the sum of the salaries of employees whose names start with 'A', 'R', or 'L', and show it in a table with a unique id, like
```
emp_id total_sal
A 1000
R 1800
L 2000
```
> I have tried like this
```
select sum(emp_sal)as from emp where emp_name like 'a%'
```
> but I don't know what to do next
|
```
SELECT LEFT(emp_name, 1) first_letter, sum(emp_sal)
FROM emp
WHERE LEFT(emp_name, 1) IN ('a', 'r', 'l')
GROUP BY LEFT(emp_name, 1);
```
example fiddle here: <http://sqlfiddle.com/#!9/436b8/2>
And it works in both MySQL and SQL Server.
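The portable `substr(name, 1, 1)` spelling of `LEFT(name, 1)` can be checked end-to-end in SQLite from Python; this reuses the question's sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE emp (emp_name TEXT, emp_sal INTEGER);
    INSERT INTO emp VALUES
        ('amit',100),('animesh',200),('ashish',300),('ashok',400),
        ('ramesh',500),('ratan',600),('rajesh',700),
        ('lokesh',300),('lalit',900),('lakhan',800);
""")

rows = con.execute("""
    SELECT substr(emp_name, 1, 1) AS first_letter, SUM(emp_sal)
    FROM emp
    WHERE substr(emp_name, 1, 1) IN ('a', 'r', 'l')
    GROUP BY first_letter
    ORDER BY first_letter
""").fetchall()
print(rows)  # [('a', 1000), ('l', 2000), ('r', 1800)]
```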
|
This works in almost all RDBMSs (MySQL, MS SQL Server, Oracle, Postgres, ...), as you tagged more than one RDBMS in the question:
```
select q.emp_id, sum(q.emp_sal)
from (
select case
when emp_name like 'A%' then 'A'
when emp_name like 'R%' then 'R'
when emp_name like 'L%' then 'L'
end as emp_id,
emp_sal
from emp where emp_name like 'A%' or emp_name like 'R%' or emp_name like 'L%'
) q
group by q.emp_id
```
|
How to calculate sal of employe whose name starts with a,b,c.....?
|
[
"",
"mysql",
"sql",
"sql-server",
""
] |
I have a stored procedure that just returns a `varchar(1000)`, but when I run it using the following block:
```
declare @P1 VARCHAR(1000)
SET @P1=''
PRINT 'P1='+@P1
exec Test @PhoneNumber='1234567890',@AgentID=N'Test AgentID',@SP_RETURN=@P1 output
PRINT 'P1='+@P1
```
I get
> Msg 50000, Level 16, State 1, Procedure Test, Line 50
> Conversion failed when converting the varchar value 'Complete' to data type int.
>
> Msg 245, Level 16, State 1, Procedure Test, Line 60
> Conversion failed when converting the varchar value 'Exception raised' to data type int.
This is the stored procedure:
```
Create PROCEDURE [dbo].[Test]
@PhoneNumber as CHAR(10),
@AgentID as VARCHAR(100),
@SP_RETURN as VARCHAR(1000) OUTPUT,
@UTCDateTimeOverride as Datetime = NULL
AS
BEGIN
BEGIN TRY
SET @SP_RETURN = N'Complete'
RETURN @SP_RETURN
END TRY
BEGIN CATCH
SET @SP_RETURN = N'Exception raised'
RETURN @SP_RETURN
END CATCH
END
go
```
Any suggestions?
|
A stored procedure always returns an integer, which is used to return an error code.
You are returning a `varchar` value, which causes the exception.
The error is caused by this
```
RETURN @SP_RETURN
```
do this instead
```
BEGIN TRY
SET @SP_RETURN = N'Complete'
RETURN 0 -- success
END TRY
BEGIN CATCH
SET @SP_RETURN = N'Exception raised'
RETURN ERROR_NUMBER() -- error code
```
|
A stored procedure can only return an int value. Varchar values are "returned" via OUTPUT parameters.
```
Create PROCEDURE [dbo].[Test]
@PhoneNumber as CHAR(10),
@AgentID as VARCHAR(100),
@SP_RETURN as VARCHAR(1000) OUTPUT,
@UTCDateTimeOverride as Datetime = NULL
AS
BEGIN
BEGIN TRY
SET @SP_RETURN = N'Complete'
--RETURN @SP_RETURN not needed
END TRY
BEGIN CATCH
SET @SP_RETURN = N'Exception raised'
-- no need for return statement here
END CATCH
END
go
```
Now when executing the proc you will do something like ...
```
DECLARE @RETURN_Value VARCHAR(1000);
Exec [dbo].[Test] @PhoneNumber = 'Some Value'
,@AgentID = 'Some Value'
,@SP_RETURN = @RETURN_Value OUTPUT
```
Now the `@RETURN_Value` variable will have the value.
|
Stored procedure return conversion error when there should be no conversion
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I am trying to retrieve the `UserInfo` with the `UserId=10004` with all his friends and all the posts of his friends and all the likes on those posts. And for that I am using a query :
```
select *,
(select * ,
(select * ,
(select * from PostLikes where PostId=UserPosts.PostId) as likes
from UserPosts where UserId=FriendsRelation.PersonId1 or UserId=FriendsRelation.PersonId2) as posts
from FriendsRelation where PersonId1=UserId or PersonId2=UserId) as friends
from UserInfo
where UserId=10004
```
but it is returning with an error
> Only one expression can be specified in the select list when the subquery is not introduced with EXISTS
How can I solve it?
Here are the tables that I am using:
```
CREATE TABLE [dbo].[UserInfo]
(
[UserId] [bigint] IDENTITY(1,1) NOT NULL,
[Username] [nvarchar](max) NULL,
[Email] [nvarchar](max) NOT NULL,
[UserPassword] [nvarchar](max) NOT NULL,
[Name] [nvarchar](max) NULL,
[Gender] [nchar](10) NOT NULL,
[ContactNo] [bigint] NOT NULL,
[DateOfBirth] [date] NOT NULL,
[RelationshipStatus] [nchar](10) NULL,
[InterestedIn] [nchar](10) NULL,
[Address] [nvarchar](max) NULL,
[Country] [nchar](10) NOT NULL,
[FavouriteQuote] [nvarchar](max) NULL,
[DisplayPhoto] [nvarchar](max) NULL,
[Guid] [nvarchar](max) NOT NULL,
[Status] [tinyint] NOT NULL CONSTRAINT [DF_UserInfo_Status] DEFAULT ((1)),
[CreatedDate] [datetime] NOT NULL,
[LastLogIn] [datetime] NULL,
CONSTRAINT [PK_UserInfo]
PRIMARY KEY CLUSTERED ([UserId] ASC)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON,
ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
```
`UserPosts` table:
```
CREATE TABLE [dbo].[UserPosts]
(
[PostId] [bigint] IDENTITY(1,1) NOT NULL,
[UserId] [bigint] NOT NULL,
[PostText] [nvarchar](max) NULL,
[PostPicture] [nvarchar](max) NULL,
[Time] [time](7) NOT NULL,
[Date] [date] NOT NULL,
[LikeCount] [int] NULL CONSTRAINT [DF_UserPosts_LikeCount] DEFAULT ((0)),
CONSTRAINT [PK_UserPosts]
PRIMARY KEY CLUSTERED ([PostId] ASC)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON,
ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
ALTER TABLE [dbo].[UserPosts] WITH CHECK
ADD CONSTRAINT [FK_UserPosts_UserInfo]
FOREIGN KEY([UserId]) REFERENCES [dbo].[UserInfo] ([UserId])
ON DELETE CASCADE
GO
ALTER TABLE [dbo].[UserPosts] CHECK CONSTRAINT [FK_UserPosts_UserInfo]
GO
```
and `PostLikes` table:
```
CREATE TABLE [dbo].[PostLikes]
(
[LikeId] [bigint] IDENTITY(1,1) NOT NULL,
[PostId] [bigint] NOT NULL,
[UserId] [bigint] NOT NULL,
CONSTRAINT [PK_PostLike]
PRIMARY KEY CLUSTERED ([PostId] ASC, [UserId] ASC)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON,
ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[PostLikes] WITH CHECK
ADD CONSTRAINT [FK_PostLikes_UserInfo]
FOREIGN KEY([UserId]) REFERENCES [dbo].[UserInfo] ([UserId])
GO
ALTER TABLE [dbo].[PostLikes] CHECK CONSTRAINT [FK_PostLikes_UserInfo]
GO
ALTER TABLE [dbo].[PostLikes] WITH CHECK
ADD CONSTRAINT [FK_PostLikes_UserPosts]
FOREIGN KEY([PostId]) REFERENCES [dbo].[UserPosts] ([PostId])
ON DELETE CASCADE
GO
ALTER TABLE [dbo].[PostLikes] CHECK CONSTRAINT [FK_PostLikes_UserPosts]
GO
```
|
Similar to other people's answers, but use left joins for everything (in case the user has no friends), and some changes to join logic.
```
declare @userID int = 10004 --For easy swapping out if you want to query someone else
Select * from UserInfo a
left join FriendsRelation b
on b.PersonID1 = @userID or b.PersonID2 = @userID --Faster than joining and then filtering, if you're only looking for one person at a time
left join UserPosts c
on (c.UserID = b.PersonID1 or c.UserID = b.PersonID2)
and c.UserID <> @userID --returns only friends' posts, not user's posts, as specified in original question
left join postLikes d
on d.PostID = c.PostID
where a.UserID = @userID
```
|
As already mentioned, you need to use joins instead of subqueries. This query will probably be useful for you:
```
select *
from UserInfo
inner join FriendsRelation
on UserInfo.UserId = FriendsRelation.PersonId1
inner join UserPosts
on FriendsRelation.PersonId1 = UserPosts.UserId
left outer join PostLikes
on UserPosts.PostId = PostLikes.PostId
where UserInfo.UserId=10004
union
select *
from UserInfo
inner join FriendsRelation
on UserInfo.UserId = FriendsRelation.PersonId2
inner join UserPosts
on FriendsRelation.PersonId2 = UserPosts.UserId
left outer join PostLikes
on UserPosts.PostId = PostLikes.PostId
where UserInfo.UserId=10004
```
|
Query multilayer table in sql
|
[
"",
"sql",
"sql-server",
""
] |
How can I populate the column `Week` with `Today` until `Today -6`? See the fiddle below with the actual result and the expected result:
[SQL Fiddle](http://sqlfiddle.com/#!6/eff373/2)
**MS SQL Server 2008 Schema Setup**:
```
CREATE TABLE Registry
([RegistryDate] datetime)
;
INSERT INTO Registry
([RegistryDate])
VALUES
('2015-04-07'),
('2015-04-07'),
('2015-04-06'),
('2015-04-02'),
('2015-04-01'),
('2015-04-01'),
('2015-03-31'),
('2015-03-31'),
('2015-03-30'),
('2015-03-29')
;
```
**Query**:
```
SELECT DISTINCT
TOP 7
CASE WHEN CAST(RegistryDate AS DATE) = CAST(GETDATE() AS DATE) THEN 1 ELSE 0 END AS Indicator,
CAST (RegistryDate AS DATE) AS LatestUpdates,
CAST(GETDATE() AS DATE) AS Week
FROM
Registry
ORDER BY
1 DESC
```
**Result**:
```
| Indicator | LatestUpdates | Week |
|-----------|---------------|------------|
| 1 | 2015-04-07 | 2015-04-07 |
| 0 | 2015-03-29 | 2015-04-07 |
| 0 | 2015-03-30 | 2015-04-07 |
| 0 | 2015-03-31 | 2015-04-07 |
| 0 | 2015-04-01 | 2015-04-07 |
| 0 | 2015-04-02 | 2015-04-07 |
| 0 | 2015-04-06 | 2015-04-07 |
```
**Expected Result**:
```
| Indicator | LatestUpdates | Week |
|-----------|---------------|------------|
| 1 | 2015-04-07 | 2015-04-07 | < Today
| 0 | 2015-03-29 | 2015-04-06 | < Today -1
| 0 | 2015-03-30 | 2015-04-05 | < Today -2
| 0 | 2015-03-31 | 2015-04-04 | < Today -3
| 0 | 2015-04-01 | 2015-04-03 | < Today -4
| 0 | 2015-04-02 | 2015-04-02 | < Today -5
| 0 | 2015-04-06 | 2015-04-01 | < Today -6
```
|
I think you want a query like this:
```
WITH SevenRowsWeek as (
SELECT LatestUpdates, CAST((GETDATE() - ROW_NUMBER() OVER (ORDER BY Latestupdates DESC) + 1) as DATE) As [Week]
FROM (SELECT DISTINCT TOP 7
CAST (RegistryDate AS DATE) AS LatestUpdates
FROM
#Registry
ORDER BY
CAST(RegistryDate AS DATE) DESC ) SevenRows
)
Select CASE WHEN CAST(LatestUpdates AS DATE) = [Week] THEN 1 ELSE 0 END AS Indicator,
LatestUpdates, [week]
FROM SevenRowsWeek
ORDER BY LatestUpdates DESC
```
|
Wrap your query in a subquery, use it within a CTE, and apply `dateadd(day, -(row_number - 1), Week)`:
```
with cte as
(
select q.Indicator ,q.LatestUpdates, q.Week,
row_number() over(order by q.Indicator desc,q.LatestUpdates)rn
from (
SELECT DISTINCT
TOP 7
CASE WHEN CAST(RegistryDate AS DATE) = CAST(GETDATE() AS DATE) THEN 1 ELSE 0 END AS Indicator,
CAST (RegistryDate AS DATE) AS LatestUpdates,
CAST(GETDATE() AS DATE) AS Week
FROM
Registry
ORDER BY
1 DESC
)q
)
select Indicator,LatestUpdates,dateadd(day,-(rn-1),Week) Week
from cte
order by 1 desc,2 asc
```
the [fiddle demo](http://sqlfiddle.com/#!6/eff373/36)
|
Populate a column with today until today-6
|
[
"",
"sql",
"sql-server",
"sql-server-2008-r2",
""
] |
Suppose there is a table named 'a' with following data:
```
col1, col2
-----------
1 1
1 2
1 3
2 2
2 3
3 4
```
Then, to achieve the following results:
```
col1, col2
--------------
1 6
2 5
3 4
```
I can run a query like `select col1, sum(col2) from a group by col1`.
But suppose my table is:
```
col1, col2
---------
1 a
1 b
1 c
2 d
2 e
3 f
```
Here col2 is of varchar type, not numeric.
What SQL query will give the following results?
```
col1, col2
------------
1 a,b,c
2 d,e
3 f
```
I have tried GROUP BY on col1, but how do I concatenate the values in col2? The problem is that col2 is of varchar type.
|
In case of `MySQL` you can use `GROUP_CONCAT` like this:
```
SELECT
col1,
GROUP_CONCAT(col2) as col2
FROM demo
GROUP BY col1;
```
Here is the [sqlfiddle](http://sqlfiddle.com/#!9/be464/2).
In case of `SQL Server` you can use `STUFF` like this:
```
SELECT t1.col1,
stuff((SELECT ',' + CAST(t2.col2 as VARCHAR(10))
FROM demo t2 WHERE t1.col1 = t2.col1
FOR xml path('')),1,1,'') col2
FROM demo t1
GROUP BY t1.col1;
```
Here is the [sqlfiddle](http://sqlfiddle.com/#!6/be464/10).
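SQLite also ships `group_concat`, so the MySQL variant can be tried directly from Python with the question's data; note that SQLite documents the concatenation order as arbitrary:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE demo (col1 INTEGER, col2 TEXT);
    INSERT INTO demo VALUES
        (1,'a'),(1,'b'),(1,'c'),(2,'d'),(2,'e'),(3,'f');
""")

rows = con.execute("""
    SELECT col1, group_concat(col2) AS col2
    FROM demo
    GROUP BY col1
    ORDER BY col1
""").fetchall()
print(rows)  # e.g. [(1, 'a,b,c'), (2, 'd,e'), (3, 'f')]
```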
|
You can use `group_concat` function in mysql
```
select
col1,
group_concat(col2) as col2
from table_name
group by col1
```
|
how to achieve concatenation using group by
|
[
"",
"mysql",
"sql",
"sql-server",
""
] |
I would like to transform one Sql server table into another.
Original table
```
Period Date Portfolio Benchmark
Pre0Month 12/31/2014 -0.0001 -0.0025
Pre1Month     11/30/2014  0.0122      0.0269
Pre2Month 10/31/2014 0.0176 0.0244
```
After transformation
```
Returns Pre0Month Pre1Month Pre2Month
Portfolio -0.0001 0.0122 0.0176
Benchmark -0.0025 0.0269 0.0244
```
|
Considering the name of the table to be MyTable, you can pivot it the following way:
```
SELECT * FROM
(
SELECT Period, [Returns], value
FROM MyTable
CROSS APPLY
(
SELECT 'Portfolio', CAST(Portfolio as varchar(10))
UNION ALL
SELECT 'Benchmark', CAST(Benchmark as varchar(10))
) c([Returns], value)
) d
PIVOT
(
MAX(value)
FOR Period IN (Pre0Month, Pre1Month, Pre2Month)
) piv;
```
|
This requires a combination of PIVOT and UNPIVOT:
```
DECLARE @t TABLE(period VARCHAR(32),[date] DATETIME, portfolio DECIMAL(28,4), benchmark DECIMAL(28,4));
INSERT INTO @t(period,[date],portfolio,benchmark)VALUES('Pre0Month','2014-12-31',-0.0001,-0.0025);
INSERT INTO @t(period,[date],portfolio,benchmark)VALUES('Pre1Month','2014-11-30',0.0122,0.0269);
INSERT INTO @t(period,[date],portfolio,benchmark)VALUES('Pre2Month','2014-10-31',0.0176,0.0244);
SELECT
*
FROM
(
SELECT
*
FROM
(
SELECT
period,
portfolio,
benchmark
FROM
@t
) AS t
UNPIVOT(
value
FOR Returns IN (portfolio,benchmark)
) AS up
) AS t
PIVOT(
MAX(value)
FOR period IN ([Pre0Month],[Pre1Month],[Pre2Month])
) AS p;
```
Result is the following:
```
Returns Pre0Month Pre1Month Pre2Month
benchmark -0.0025 0.0269 0.0244
portfolio -0.0001 0.0122 0.0176
```
|
How to transform the table columns to vertical data in Sql server tables?
|
[
"",
"sql",
"sql-server",
"pivot",
"crosstab",
"unpivot",
""
] |
I wanted to insert data into PostgreSQL from a text file with Pro*C. After inserting the data with my Pro*C program, the order of the rows in PostgreSQL was sometimes different from the order in the text file.
There was no SQL error in the process; I found the problem when I ran `SELECT *` on the table.
Is there some method to ensure that the order in which I inserted the rows is preserved in the database (psql)?
Thanks!
for example:
> table: testtable
> id int
> name char(20)
>
> txt file:
> 100 amy
> 200 john
> 300 joy
> 400 ruby
but the data in psql is like:
> 200 john
> 300 joy
> 400 ruby
> 100 amy
my program:
```
EXEC SQL BEGIN DECLARE SECTION;
int id;
char name[20];
int id_ind, name_ind;
EXEC SQL END DECLARE SECTION;
main ()
{
    EXEC SQL CONNECT TO ....
while ( still have data ){
        read_data_from_file();
        put_data_in_host_variables();
        EXEC SQL INSERT INTO testtable( id, name )
        VALUES (:id INDICATOR :id_ind, :name INDICATOR :name_ind);
}
EXEC SQL COMMIT WORK;
EXEC SQL DISCONNECT ALL;
}
```
|
Check the [documentation:](http://www.postgresql.org/docs/9.0/static/queries-order.html) *"If sorting is not chosen, the rows will be returned in an unspecified order. The actual order in that case will depend on the scan and join plan types and the order on disk, but it must not be relied on.*
So the typical solution is to add a unique, incremented integer identifier and sort by it in your `SELECT` statement:
```
SELECT *
FROM table
ORDER BY id;
```
Alternatively you could use a column like `INSERT_DATE` and fill it with a time stamp.
***p.s.***: check data type `SERIAL` in chapter 8.1.4 at the [Numeric Types](http://www.postgresql.org/docs/9.1/static/datatype-numeric.html) documentation
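Putting both suggestions together, a minimal sketch (assuming PostgreSQL and the `testtable` from the question; the column name `seq` is made up):
```
-- Add an auto-incrementing column that records insertion order.
ALTER TABLE testtable ADD COLUMN seq SERIAL;

-- From now on every insert gets the next sequence value automatically,
-- so reading back in insertion order is just:
SELECT id, name
FROM testtable
ORDER BY seq;
```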
|
Data loaded into a relational database has no inherent order. If you want a specific order for the selected data, you should use "order by" in your select. It is not an error.
|
insert wrong order in postgresql (using pro c)
|
[
"",
"sql",
"c",
"postgresql",
"embedded-sql",
""
] |
Let's say I have a table with a two-part composite key, and 4 records, like the following:
```
KEY_PART_1 KEY_PART_2
A 1
B 1
C 2
C 3
```
I want to write some dynamic SQL to select only the records B,1 and C,2 using a "WHERE IN" clause, *without* selecting A,1 or C,3.
Is there some way to do this without a temp table?
Not that it matters, but we are currently using Oracle, and hoping to move to PostgreSQL soon.
|
This syntax works for Oracle and PostgreSQL:
```
SELECT *
FROM table_name
WHERE (key_part_1, key_part_2) IN ( ('B',1), ('C',2) );
```
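If the same filter ever needs to run on an engine without row-value constructors in `IN` (SQL Server, for example), an equivalent sketch spells the pairs out with `OR`:
```
SELECT *
FROM table_name
WHERE (key_part_1 = 'B' AND key_part_2 = 1)
   OR (key_part_1 = 'C' AND key_part_2 = 2);
```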
|
Following @Justin Cave's answer, here is a small test case to show that **Oracle** would do an **INDEX RANGE SCAN** followed by an **INLIST ITERATOR** for the following **filter predicate**:
```
WHERE (key_part_1, key_part_2) IN ( ('B',1), ('C',2) )
```
**Setup**
```
SQL> CREATE TABLE t(key1 VARCHAR2(1), key2 NUMBER);
Table created.
SQL>
SQL> INSERT INTO t VALUES('A', 1);
1 row created.
SQL> INSERT INTO t VALUES('B', 1);
1 row created.
SQL> INSERT INTO t VALUES('C', 2);
1 row created.
SQL> INSERT INTO t VALUES('C', 3);
1 row created.
SQL>
SQL> COMMIT;
Commit complete.
SQL>
```
A **composite index** on **key1 and key2**:
```
SQL> CREATE INDEX t_idx ON t(key1, key2);
Index created.
SQL>
```
**Gather stats:**
```
SQL> EXEC DBMS_STATS.gather_table_stats('LALIT', 'T');
PL/SQL procedure successfully completed.
SQL>
```
**Execute the query:**
```
SQL> SELECT * FROM t
2 WHERE (key1, key2) IN ( ('B',1), ('C',2) );
K KEY2
- ----------
B 1
C 2
SQL>
```
So, it gives the correct output.
Let's see the **explain plan**:
**Case# 1** Key-value pair in the same order of the index. Leading key in the lead.
```
SQL> SELECT * FROM TABLE(dbms_xplan.display);
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Plan hash value: 2301620486
---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2 | 10 | 1 (0)| 00:00:01 |
| 1 | INLIST ITERATOR | | | | | |
|* 2 | INDEX RANGE SCAN| T_IDX | 2 | 10 | 1 (0)| 00:00:01 |
---------------------------------------------------------------------------
Predicate Information (identified by operation id):
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
---------------------------------------------------
2 - access(("KEY1"='B' AND "KEY2"=1 OR "KEY1"='C' AND "KEY2"=2))
14 rows selected.
```
**Case# 2** Key-value pair in opposite order of the index. Leading key in reverse.
```
SQL> EXPLAIN PLAN FOR SELECT * FROM t
2 WHERE (key2, key1) IN ( (1, 'B'), (2, 'C') );
Explained.
SQL>
SQL> SELECT * FROM TABLE(dbms_xplan.display);
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Plan hash value: 2301620486
---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2 | 10 | 1 (0)| 00:00:01 |
| 1 | INLIST ITERATOR | | | | | |
|* 2 | INDEX RANGE SCAN| T_IDX | 2 | 10 | 1 (0)| 00:00:01 |
---------------------------------------------------------------------------
Predicate Information (identified by operation id):
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
---------------------------------------------------
2 - access(("KEY1"='B' AND "KEY2"=1 OR "KEY1"='C' AND "KEY2"=2))
14 rows selected.
```
In either cases, **Oracle uses the index**.
|
WHERE_IN query with a composite key?
|
[
"",
"sql",
"oracle",
""
] |
I have a table named `wt_simple_data` that has millions of mobile numbers in a column named `MSISDN`, but it contains duplicates.
All rows have a `FLAG` column set to zero.
I want to update all duplicate rows by setting the `FLAG` value to `2`, except one row per number.
```
CREATE TABLE `wt_simple_data` (
`SIMPLEID` int(10) NOT NULL AUTO_INCREMENT,
`CAMPID` bigint(20) NOT NULL,
`MSISDN` bigint(20) NOT NULL,
`FLAG` enum('0','1','2') DEFAULT '0' COMMENT '0 FOR NOT SENT, 1 FOR SENT',
PRIMARY KEY (`SIMPLEID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
```
Some sample data:
```
SIMPLEID CAMPID MSISDN FLAG
15 311 9874123411 0
16 311 9874123411 0
17 311 9874123411 0
18 311 9874123412 0
19 311 9874123412 0
20 311 9874123413 0
21 311 9874123413 0
22 311 9874123414 0
```
My desired result
```
SIMPLEID CAMPID MSISDN FLAG
15 311 9874123411 0
16 311 9874123411 2
17 311 9874123411 2
18 311 9874123412 0
19 311 9874123412 2
20 311 9874123413 0
21 311 9874123413 2
22 311 9874123414 0
```
|
You can do it with a JOIN, keeping the first record as `0`:
```
update
wt_simple_data w1
join (
select
min(SIMPLEID) as SIMPLEID,
CAMPID,
MSISDN
from wt_simple_data
group by CAMPID,MSISDN
)w2
on w1.SIMPLEID > w2.SIMPLEID
and w1.CAMPID = w2.CAMPID
and w1.MSISDN = w2.MSISDN
set w1.FLAG = '2'
```
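After the update, a quick sanity check (a sketch against the question's table) should report exactly one unsent row per number:
```
SELECT CAMPID, MSISDN,
       SUM(CASE WHEN FLAG = '0' THEN 1 ELSE 0 END) AS unsent_rows -- expected: 1 per group
FROM wt_simple_data
GROUP BY CAMPID, MSISDN;
```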
|
general SQL:
```
UPDATE wt_simple_data
SET flag='2'
WHERE simpleid IN (
SELECT simpleid FROM wt_simple_data WHERE flag = '0'
MINUS
SELECT MIN(simpleid) FROM wt_simple_data WHERE flag = '0'
GROUP BY msisdn
)
```
using analytical function:
```
UPDATE wt_simple_data
SET flag='2'
WHERE simpleid IN (
SELECT simpleid
FROM (
SELECT MIN(simpleid) OVER (PARTITION BY msisdn) minsimpleid, simpleid
FROM wt_simple_data
WHERE flag = '0')
WHERE minsimpleid <> simpleid
)
```
|
Update duplicate rows except for one
|
[
"",
"mysql",
"sql",
""
] |
The table in the link below:

The `Result` column is the result of a `T-SQL` query in which each record reflects the movement of a "product-model" from one "employee" to another through a bill: if the amount is moved away from the `employee`, it is filled in the `Amount_Out` column, and vice versa.
For example: rows 3 and 4 show that 40 pieces of the product `hp` moved from `emp1` to `emp3` through the bill with code `111`. This movement reduces the total amount of this product for `emp1` to be `60`, and increases the total amount of the same product for `emp3` to be `40`.
Now I need to calculate in the query the result in each record to show the total amount of a product in an employee's custody until the date in the record:
> Result = The amount calculated for all previous bills + Amount\_In - Amount\_Out
which means the result depends on previous records' values. How can this be achieved in a query in SQL Server 2008? And which is better for achieving it: a stored procedure or a view?
|
You can do this with a correlated subquery. Change `AND it.No <= ot.No` to `AND it.Date <= ot.Date` if `No`s are not sequential:
```
DECLARE @t TABLE
(
No INT ,
Employee CHAR(4) ,
Model CHAR(4) ,
Date DATE ,
Bill CHAR(3) ,
Income MONEY ,
Outcome MONEY
)
INSERT INTO @t
VALUES ( 1, 'emp1', 'hp', '20140101', '000', 100, 0 ),
( 2, 'emp1', 'dell', '20140101', '000', 100, 0 ),
( 3, 'emp1', 'hp', '20140308', '111', 0, 40 ),
( 4, 'emp3', 'hp', '20140308', '111', 40, 0 ),
( 5, 'emp3', 'hp', '20140408', '222', 0, 20 ),
( 6, 'emp2', 'hp', '20140408', '222', 20, 0 ),
( 7, 'emp1', 'hp', '20140608', '333', 0, 5 ),
( 8, 'emp2', 'hp', '20140608', '333', 5, 0 ),
( 9, 'emp1', 'dell', '20150101', '444', 0, 40 )
SELECT * ,
( SELECT SUM(Income - Outcome)
FROM @t it
WHERE it.Model = ot.Model
AND it.Employee = ot.Employee
AND it.No <= ot.No
) AS Result
FROM @t ot
```
Output:
```
No Employee Model Date Bill Income Outcome Result
1 emp1 hp 2014-01-01 000 100.00 0.00 100.00
2 emp1 dell 2014-01-01 000 100.00 0.00 100.00
3 emp1 hp 2014-03-08 111 0.00 40.00 60.00
4 emp3 hp 2014-03-08 111 40.00 0.00 40.00
5 emp3 hp 2014-04-08 222 0.00 20.00 20.00
6 emp2 hp 2014-04-08 222 20.00 0.00 20.00
7 emp1 hp 2014-06-08 333 0.00 5.00 55.00
8 emp2 hp 2014-06-08 333 5.00 0.00 25.00
9 emp1 dell 2015-01-01 444 0.00 40.00 60.00
```
|
You can get the Result by a simple subquery.
```
SELECT No,Employee,Product_Model,[Date],Bill_Code,Amount_In,Amount_Out,
(
SELECT SUM
(
SELECT D1.Amount_In - D0.Amount_Out AS subTotal
FROM Table_Name AS D1
WHERE
(
D1.[Date] < D0.[Date]
)
OR
(
D1.[Date] = D0.[Date]
AND
D1.No <= D0.No
)
)
) AS Result
FROM Table_Name D0
ORDER BY D0.No, D0.[Date]
```
And the actual output should be:
```
No Employee Product_Model Date Bill_Code Amount_In Amount_Out Result
1 emp1 hp 2014-01-01 000 100.00 0.00 100.00
2 emp1 dell 2014-01-01 000 100.00 0.00 200.00
3 emp1 hp 2014-03-08 111 0.00 40.00 160.00
4 emp3 hp 2014-03-08 111 40.00 0.00 200.00
5 emp3 hp 2014-04-08 222 0.00 20.00 180.00
6 emp2 hp 2014-04-08 222 20.00 0.00 200.00
7 emp1 hp 2014-06-08 333 0.00 5.00 195.00
8 emp2 hp 2014-06-08 333 5.00 0.00 200.00
9 emp1 dell 2015-01-01 444 0.00 40.00 160.00
```
|
Calculating Column Depending On Previous Rows
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have an activities table like below (also contains some other columns):
```
+------------------+---------------+------------+
| data | event | logged_on |
+------------------+---------------+------------+
| 12345 | File | 2015-04-08 |
| 25232 | Bulletin | 2015-04-08 |
| 7890 | File | 2015-04-08 |
| youtube | Search | 2015-04-07 |
| 13568 | Bulletin | 2015-04-07 |
| 25232 | Bulletin | 2015-04-07 |
| 7890 | File | 2015-04-07 |
+------------------+---------------+------------+
```
I want to fetch unique data values for latest date. So the required result would be:
```
+------------------+---------------+------------+
| data | event | logged_on |
+------------------+---------------+------------+
| 12345 | File | 2015-04-08 |
| 25232 | Bulletin | 2015-04-08 |
| 7890 | File | 2015-04-08 |
| youtube | Search | 2015-04-07 |
| 13568 | Bulletin | 2015-04-07 |
+------------------+---------------+------------+
```
But a select distinct on all three columns above returns the same `data` value for different dates. I cannot remove `logged_on` from the select query, as it's required elsewhere.
In MySQL, I tried using the following query:
```
SELECT DISTINCT
data,
event,
MAX(logged_on) AS latest_log_date
FROM
activites
WHERE
username = %%(username)s
GROUP BY
data
ORDER BY id DESC, latest_log_date DESC
LIMIT %%(limit)s
;
```
But this does not seem to work. I'm getting some results that look correct, but they aren't, and I'm not able to reason about why. Any ideas?
|
Use `NOT EXISTS` to return only rows that have no later row with the same data/event.
```
SELECT data, event, logged_on AS latest_log_date
from activites a1
where not exists (select 1 from activites a2
where a1.data = a2.data
and a1.event = a2.event
and a2.logged_on > a1.logged_on)
```
I'm not quite sure if this is what you want. Perhaps you need to remove `and a1.event = a2.event` from the sub-query.
|
```
SELECT data, event, logged_on
FROM activities A1
WHERE logged_on = (SELECT A2.logged_on FROM activities A2
WHERE A2.data = A1.data AND A2.event = A1.event
ORDER BY A2.logged_on DESC LIMIT 1);
```
|
MySQL aggregation by strings
|
[
"",
"mysql",
"sql",
""
] |
Since Redshift does not support `ALTER COLUMN`, I would like to know if it's possible to remove the NOT NULL constraints from columns in Redshift.
|
You cannot alter an existing column in Redshift.
There is an alternative approach. You can create a new column with NULL constraint. Copy the values from your old column to this new column and then drop the old column.
Something like this:
```
ALTER TABLE table1 ADD COLUMN somecolumn (definition as per your reqm);
UPDATE table1 SET somecolumn = oldcolumn;
ALTER TABLE table1 DROP COLUMN oldcolumn;
ALTER TABLE table1 RENAME COLUMN somecolumn TO oldcolumn;
```
|
There is no way to alter a column on *Redshift*.
I suggest you create a new column, copy the values from the old column to the new one, and drop the old column.
```
ALTER TABLE Table1 ADD COLUMN new_column (___correct_column_definition___);
UPDATE Table1 SET new_column = column;
ALTER TABLE Table1 DROP COLUMN column;
ALTER TABLE Table1 RENAME COLUMN new_column TO column;
```
|
Redshift - How to remove NOT NULL constraint?
|
[
"",
"sql",
"amazon-redshift",
""
] |
I have the following two relations:
* EMP(**ENO**, ENAME, JOB, DATEJOB, SAL, DNO)
* DEPT(**DNO**, DNAME, DIR)
EMP is a relation of the employees with their number ENO, their names ENAME, their job titles JOB, the dates when they were hired, their salary SAL, and the number DNO of the department they work in (foreign key which references DNO in DEPT).
DEPT is a relation of the departments with the number of each department DNO, the name of the department DNAME, and the director of the department DIR (foreign key which references ENO).
My question is:
* Write the following query in SQL.
Find the names of the employees that have the same job and the same director as 'Joe'.
My attempt was:
```
SELECT ENAME
FROM EMP, DEPT
WHERE EMP.DNO = DEPT.DNO
AND (DIR, JOB) IN (
SELECT DIR, JOB
FROM EMP, DEPT
WHERE ENAME = 'Joe'
AND EMP.DEPT = DEPT.DNO
)
AND ENO NOT IN (
SELECT ENO
FROM EMP, DEPT
WHERE ENAME = 'Joe'
AND EMP.DEPT = DEPT.DNO
)
```
I found the solution of this problem but I couldn't agree of it.
This is what I found:
```
SELECT ENAME
FROM EMP, DEPT
WHERE ENAME <> 'Joe'
AND EMP.DNO = DEPT.DNO
AND (DIR, JOB) = (
SELECT DIR, JOB
FROM EMP, DEPT
WHERE ENAME = 'Joe'
AND EMP.DEPT = DEPT.DNO
)
```
The thing is, we have to not consider 'Joe' in the result. But which 'Joe'?
|
It looks like there's a potential for a "director" to head multiple departments. At least, there doesn't seem to be anything in the information model to restrict that (i.e. no unique constraint on `DIR`)
Presumably, we identify employee 'Joe' by finding the tuples(s) in `EMP` with `ENAME` attribute equal to `'Joe'`.
And presumably, we would identify Joe's "director" by getting the value of the `DIR` attribute from the `DEPT` relation.
If we wanted employees in the "same department" as `Joe`, we could just use the value of the `DNO` attribute,... but the requirement says "same director". So, just in case the same director heads multiple departments, we'll get all the departments headed by that director.
Then, it's a simple matter of getting all of the employees in those departments, and check for a "job" that matches Joe's "job".
```
SELECT e.ENAME
FROM EMP j
JOIN DEPT i
ON i.DNO = j.DNO
JOIN DEPT d
ON d.DIR = i.DIR
JOIN EMP e
ON e.DNO = d.DNO
AND e.JOB = j.JOB
WHERE j.ENAME = 'Joe'
```
Again, if we wanted only the employees in the "same department" as Joe, we could dispense with one of those references to `DEPT`. The result from this would be different, if Joe's director heads another department, and there's an employee in that other department has the same job... that employee would be excluded from this query:
```
SELECT e.ENAME
FROM EMP j
JOIN DEPT i
ON i.DNO = j.DNO
-- JOIN DEPT d
-- ON d.DIR = i.DIR
JOIN EMP e
-- ON e.DNO = d.DNO
ON e.DNO = i.DNO
AND e.JOB = j.JOB
WHERE j.ENAME = 'Joe'
```
If there's a requirement to exclude `Joe` from the resultset, then we could add another predicate to the WHERE clause. If we don't assume that `ENAME` can't have a NULL value...
```
AND ( e.ENAME IS NULL OR e.ENAME <> 'Joe')
```
|
You're correct in that the second solution is wrong. If there are two 'Joe's it won't work right. That's why you should exclude based on the unique ENO instead of the non-unique name. The first query won't work for the same reason. In order to be certain, you can't select either just by names or titles or departments, because those can be duplicate. We have three Chris programmers in our department.
Also, that join syntax is obsolete because it can cause confusion to the database in certain circumstances. Please see <http://www.w3schools.com/sql/sql_join_inner.asp> for an explanation of the current syntax.
|
How to solve this query in SQL?
|
[
"",
"sql",
""
] |
I have a table in sql like this:
```
id billpay
-------------------------
1024 0
1024 0
1024 1
1025 1
1025 1
```
I want to retrieve only those ids for which every row has billpay 1 (so only `1025` in the sample above).
Please help me with this.
|
Try this:
```
SELECT id
FROM mytable
GROUP BY id
HAVING COUNT(CASE WHEN COALESCE(billpay, 0) <> 1 THEN 1 END) = 0
```
The above will select only those `id`s associated to `billpay=1` and nothing but `billpay=1`.
[**SQL Fiddle Demo**](http://sqlfiddle.com/#!6/c44fc/4)
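Since `billpay` in the sample holds only 0 and 1, an equivalent and slightly shorter formulation (a sketch, assuming the same table) is:
```
SELECT id
FROM mytable
GROUP BY id
HAVING MIN(COALESCE(billpay, 0)) = 1;
```
An id is returned only if its smallest `billpay` value is 1, i.e. every one of its rows has `billpay = 1`.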
|
It should be like this:
`SELECT id FROM tabel WHERE billpay = 1;`
|
Find out particular id
|
[
"",
"sql",
""
] |
Is there any sql script to find out when the database in SQL server is last updated?
I want to know the last updated date time for the changes done on meta data of the database rather than actual data inside the table.
Particularly when:
* Any new table is created/dropped from Database.
* Any new column is added/removed from table in the Database.
* Any new views/Stored Procedures/Functions are added/altered inside the Database.
|
Looking in sys.objects should be enough; try this query:
```
select * from sys.objects
order by modify_date desc
```
|
This will return last modified date time + name of updated item + description what was updated (table, stored procedure, etc)
```
SELECT TOP 1 name, modify_date, type_desc
FROM sys.objects
ORDER BY modify_date DESC
```
|
SQL Server database last updated date time
|
[
"",
"sql",
"sql-server",
"database",
"sql-scripts",
""
] |
Suppose I declare two timestamp variables, say:
```
DECLARE
var_t1 timestamp :=null;
var_t2 timestamp :=null;
BEGIN
---Suppose we assign a timestamp to var_t1.
select some_date into var_t1 from sometable where id='some_id';
--The following IF (var_t1 <> var_t2) will always be false.
IF(var_t1 <> var_t2 ) THEN
dbms_output.put_line('Inside if');
END IF;
END;
```
If either variable is null, the IF condition always returns false.
|
> If either variable is null, the IF condition always returns false.
You are comparing a value with **NULL**. It is neither **TRUE** nor **FALSE**. It is a tri-valued logic, which is **unknown**.
* nothing is equal to null
* nothing is NOT equal to null
More meaningful code would be:
```
IF a <> b or (a is null and b is not null) or (a is not null and b is null)
THEN
```
**UPDATE** Adding an example.
```
SQL> set serveroutput on
SQL> DECLARE
2 A NUMBER;
3 b NUMBER;
4 BEGIN
5 A :=NULL;
6 B :=10;
7 IF a <> b OR (a IS NULL AND b IS NOT NULL) OR (a IS NOT NULL AND b IS NULL) THEN
8 dbms_output.put_line('False');
9 END IF;
10 END;
11 /
False
PL/SQL procedure successfully completed.
SQL>
```
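Another common workaround is to map NULL to a sentinel with `NVL` so plain equality works. This is a sketch; the sentinel must be a value that can never occur in your real data:
```
IF NVL(var_t1, TIMESTAMP '0001-01-01 00:00:00')
   <> NVL(var_t2, TIMESTAMP '0001-01-01 00:00:00') THEN
    dbms_output.put_line('Different');
END IF;
```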
|
Just an add-on to Lalit's answer:
```
var_t1 <> var_t2 or var_t1 is null or var_t2 is null
```
is equivalent to
```
LNNVL(var_t1 = var_t2)
```
just shorter. But note, you can use `LNNVL` only in `WHERE` clause of an `SELECT`, not in PL/SQL `IF LNNVL()` (don't ask what is the reason for this limitation - I don't know).
|
Comparing time stamps in oracle
|
[
"",
"sql",
"oracle",
"plsql",
""
] |
There is a database with the following tables:
```
table_1: id, atr1
table_2: id, atr2
table_3: id, table_1_id(Foreign Key), table_2_id(Foreign Key), atr3
```
The operation I want to perform is a bit complicated, and I could implement it with some extra programming aside from the SQL queries.
The first part of the operation would be to get all the identifiers of table\_2 in table\_3 which meet some criteria.
```
SELECT table_3.table_2_id
FROM table_3
WHERE
table_3.atr3 = 'value1' AND
table_3.table_1_id = 'value2'
```
The next part would be to take each of the values I have just obtained and then use them in the following query. I would do this in a loop.
```
SELECT table_3.id, table_3.atr3, table_1.atr1, table_2.atr2
FROM table_3
JOIN table_1 ON table_3.table_1_id = table_1.id
JOIN table_2 ON table_3.table_2_id = table_2.id
WHERE
table_3.table_2_id = 'current_value_i_am_iterating_over' AND
(table_3.table_1_id = 'value4' OR
table_3.table_1_id = 'value5')
```
After the query, I would concatenate the obtained rows in an array that contains all the rows I obtain in the successive iterations of the loop.
What I would like to know is if there's a way to avoid the second loop, ideally making the whole operation in only one SQL query.
|
Can't you use a simple subquery? I mean, you can have
```
table_3.table_2_id = (your first query with LIMIT 1)
```
The `LIMIT 1` is just to avoid multiple records being returned from the subquery.
Or else you can use `IN`:
```
table_3.table_2_id IN (your first query)
```
|
You can phrase the query as `in`:
```
SELECT t3.id, t3.atr3, t1.atr1, t2.atr2
FROM table_3 t3 JOIN
table_1 t1
ON t3.table_1_id = t1.id JOIN
table_2 t2
ON t3.table_2_id = t2.id
WHERE t3.table_2_id IN (SELECT table_2_id
FROM table_3
WHERE atr3 = 'value1' AND table_1_id = 'value2'
) AND
t3.table_1_id IN ('value4', 'value5');
```
This is the most direct way to add the logic into a single query.
|
How to do consecutive SELECTS in the same query
|
[
"",
"mysql",
"sql",
"database-performance",
""
] |
What is wrong with:
```
select count (a.*, b.*)
from tableA a, tableB b
where a.x = 'blah'
and b.x = 'blah'
```
I keep getting the error:
> Msg 102, Level 15, State 1, Line 1
> Incorrect syntax near '\*'.
I have a bunch of table names to select from so wanted to use table name aliases? Any help appreciated: I'm not very good with sql.
|
You cannot use multiple arguments in `COUNT`. In your case you are asking it to count `a.col1, a.col2, a.col3...`, but `COUNT` expects just one argument. The only exception is `*`; in that case it counts all the records from the table.
If you would like to count all the records from `a`, then do something like `COUNT(a.id)`. If you want to count unique records of a table, then do `COUNT(DISTINCT a.id)`.
If you want to count all the records from both the `a` and `b` tables, you can do `COUNT(a.id) AS quantityOfAtable, COUNT(b.id) AS quantityOfBtable`, or you can sum them: `COUNT(a.id) + COUNT(b.id)`.
One more thing you need to know about `COUNT`: it counts only the `NOT NULL` rows. So if you are using a `LEFT JOIN`, or a column that can be NULL, it will only count the records where that column is `NOT NULL`. Of course, you can use `DISTINCT` to count unique records as well.
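A tiny illustration of that NULL behaviour (standard SQL):
```
CREATE TABLE demo (x INT);
INSERT INTO demo VALUES (1), (NULL), (2), (NULL);

SELECT COUNT(*) AS all_rows,     -- 4: counts every row
       COUNT(x) AS non_null_rows -- 2: NULL values are skipped
FROM demo;
```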
|
* `COUNT()` function takes `only one parameter`.
**count (a.\*)**: means `count(a.field1,a.field2,a.field3,....)` which is `invalid`.
you have to use **count(\*)**: means `count all` .
|
Get count for different tables in the same query
|
[
"",
"sql",
"sql-server",
""
] |
I have 2 tables and need to group the result on brand code.
In the database I have, for example, 2 different brands whose code is the same, the only difference being lower and upper case. For example:
code Name
ab Nike
AB Adidas
How can I inner join the 2 tables on code to get these 2 separately?
Right now after the inner join I get the total sum of the 2.
```
SELECT Code, BrandName, Count(*) QTY, SUM(Price) TOTAL
FROM A
INNER JOIN B
ON A.Code=B.Code
GROUP BY Code, BrandName
```
This query gives me the wrong result, as it does not join case-sensitively.
Please help :)
|
There are at least two quick ways you can solve this.
**1.** You specify a case-sensitive collation (rules for comparing strings across characters in a character set) for A.Code and B.Code. In MySQL and a few other database management systems, the default collation is case-insensitive.
That is, assuming that you're using MySQL or similar, you'll have to modify your statement as such:
```
SELECT Code, BrandName, Count(*) QTY, SUM(Price) TOTAL
FROM A
INNER JOIN B
ON A.Code=B.Code COLLATE latin1_bin
GROUP BY Code, BrandName
```
If, however, you plan on only performing case-sensitive queries on A and B, it may be in your interest to set the default collation on those two tables to case-sensitive.
Please see [How can I make SQL case sensitive string comparison on MySQL?](https://stackoverflow.com/q/5629111/3800937)
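If you decide the columns should always compare case-sensitively, you can change the collation once at the column level instead of in every query (a sketch, assuming MySQL with a latin1 character set; the column definitions are assumptions):
```
ALTER TABLE A MODIFY Code VARCHAR(20) CHARACTER SET latin1 COLLATE latin1_bin;
ALTER TABLE B MODIFY Code VARCHAR(20) CHARACTER SET latin1 COLLATE latin1_bin;
```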
**2.** Cast A.Code and B.Code to a binary string and compare the two. This is a simple way to compare two strings, byte-by-byte, thus achieving case-sensitivity.
```
SELECT Code, BrandName, Count(*) QTY, SUM(Price) TOTAL
FROM A
INNER JOIN B
ON BINARY A.Code=B.Code
GROUP BY Code, BrandName
```
|
Since you use a collation that is case insensitive and want to differentiate on case try using the `collate` keyword with a suitable case-sensitive collation:
```
INNER JOIN B
ON A.Code COLLATE Latin1_General_CS_AS_KS_WS = B.Code COLLATE Latin1_General_CS_AS_KS_WS
```
|
join 2 tables case sensitive upper and lower case
|
[
"",
"sql",
"sql-server",
"inner-join",
"case-sensitive",
""
] |
I have a table named Order. The following query gives me just the one column:
```
select min(OrderDate) from "order"
```
However I would like to view all the columns for that record.
Any help appreciated,
Thanks
|
```
SELECT *
from [order]
WHERE OrderDate = (SELECT min(OrderDate) from [order])
```
|
While [M. Ali's answer](https://stackoverflow.com/a/29494544/2266979) is accurate, depending on the size of the table and the index configuration, it might result in poor performance. It requires two reads through the table.
On SQL Server 2005 and up, you can use window functions and limit the plan to a single read through the table. For very small data sets, the query cost is actually higher, but the number of reads is halved. For large data sets, that should result in significantly superior performance.
Code example:
```
SET NOCOUNT ON;
-- Populate Test Data
DECLARE @Orders TABLE (OrderNum int IDENTITY, OrderDate datetime);
INSERT INTO @Orders (OrderDate) VALUES ('2015-04-04');
INSERT INTO @Orders (OrderDate) VALUES ('2015-04-04');
INSERT INTO @Orders (OrderDate) VALUES ('2015-04-04');
INSERT INTO @Orders (OrderDate) VALUES ('2015-04-05');
INSERT INTO @Orders (OrderDate) VALUES ('2015-04-05');
INSERT INTO @Orders (OrderDate) VALUES ('2015-04-06');
INSERT INTO @Orders (OrderDate) VALUES ('2015-04-07');
INSERT INTO @Orders (OrderDate) VALUES ('2015-04-07');
-- Run Tests
SET STATISTICS IO ON;
PRINT 'Full Table';
SELECT * FROM @Orders;
PRINT 'Results using MIN';
SELECT *
FROM @Orders
WHERE OrderDate = (SELECT min(OrderDate) FROM @Orders);
PRINT 'Results using RANK';
WITH BaseData AS
(
SELECT
*,
RANK() OVER (ORDER BY OrderDate) AS OrderDateRank
FROM @Orders
)
SELECT *
FROM BaseData
WHERE OrderDateRank = 1;
SET STATISTICS IO OFF;
SET NOCOUNT OFF;
```
Query costs:
MIN:
0.0065718
RANK:
0.014645
Statistics:
```
Full Table
Table '#1E0C7C2B'. Scan count 1, logical reads 1, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Results using MIN
Table '#1E0C7C2B'. Scan count 2, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Results using RANK
Table '#1E0C7C2B'. Scan count 1, logical reads 1, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
```
|
Selecting all rows in the oldest record
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have two fields:
```
-name
-address
```
How do I change a specific letter in these column names? For example, `'a'` changed to `'o'`, so the field names become:
```
-nome
-oddress
```
I've tried [this](https://stackoverflow.com/questions/3443156/how-to-replace-specific-values-in-a-oracle-database-column) but it doesn't work. Any help is appreciated.
|
This script replaces all `A`'s in all column names of all tables belonging to owner `SCOTT` by `O`'s:
```
DECLARE
CURSOR alters IS
SELECT 'ALTER TABLE ' || owner || '.' || table_name || ' RENAME COLUMN ' ||
column_name ||' TO ' || REPLACE (column_name, 'A', 'O') AS statement
FROM dba_tab_columns
WHERE owner = 'SCOTT'
AND column_name LIKE '%A%';
BEGIN
FOR rec IN alters LOOP
EXECUTE IMMEDIATE rec.statement;
END LOOP;
END;
```
You need `SELECT` rights on table `dba_tab_columns`. Note that it is case-sensitive.
Have a lot of fun ;)
|
does this do what you want?
```
ALTER TABLE table_name
RENAME COLUMN old_name to new_name;
```
|
Replace a specific character in column names
|
[
"",
"sql",
"oracle",
"replace",
""
] |
I'm using SQL Server 2014 and I have the following query which runs fine:
```
USE MyDatabase
SELECT [Room Nights],
COUNT([Room Nights]) AS 'Count of RN'
FROM HOLDINGS2
GROUP BY [Room Nights]
```
The output is as follows:
```
Room Nights Count of RN
1 3
4 10
5 6
7 25
```
Now I want to show another column that gives me the percentage distribution of the `Count of RN`. Hence, my output will need to be like this:
```
Room Nights Count of RN % Distribution
1 3 6.8
4 10 22.7
5 6 13.6
7 25 56.8
```
I had a look at the following discussion to try to find out a solution:
[percent distribution with counted values](https://stackoverflow.com/questions/29170489/sql-percent-distribution-with-counted-values).
I came up with the following amendments to my existing code, but it's not working: I'm getting only zeroes in the `% Distribution` column.
```
USE MyDatabase
SELECT [Room Nights],
COUNT([Room Nights]) AS 'Count of RN',
CAST(COUNT([Room Nights])/(SELECT COUNT([Room Nights])*100. FROM HOLDINGS2) AS DECIMAL (9,0)) AS '% Distribution'
FROM HOLDINGS2
GROUP BY [Room Nights]
```
Basically, the `% Distribution` column should take the `Count of RN` and divide it by the TOTAL `Count of RN`.
|
This would work:
```
select [Room Nights],
count([Room Nights]) AS 'Count of RN',
cast(
(count([Room Nights])
/
(Select Count([Room Nights]) * 1.0 from HOLDINGS2)
) * 100 as decimal(6,1)
) as '% Distribution'
FROM HOLDINGS2
GROUP BY [Room Nights]
```
The `* 1.0` in the subquery forces a floating point division, and the outer cast limits the precision.
Or, as you're using a modern version of MSSQL you could use window functions:
```
cast(count([Room Nights])/(sum(count([Room Nights])*1.0) over ()) * 100 as decimal(6,1))
```
|
Try:
```
DECLARE @t TABLE
(
[Room Nights] INT ,
[Count of RN] INT
)
INSERT INTO @t
VALUES ( 1, 3 ),
( 4, 10 ),
( 5, 6 ),
( 7, 25 )
SELECT * ,
ROUND([Count of RN] * 100.0
/ SUM([Count of RN]) OVER ( ORDER BY ( SELECT NULL ) ), 1) AS [Percent]
FROM @t
```
Output:
```
Room Nights Count of RN Percent
1 3 6.800000000000
4 10 22.700000000000
5 6 13.600000000000
7 25 56.800000000000
```
EDIT: I missed that Count of RN is the result of a grouping query. Here is the modified statement:
```
SELECT [RN] ,
COUNT(S) AS C ,
CAST(COUNT(S) * 100.0 / SUM(COUNT(S)) OVER () AS DECIMAL(10, 1)) AS [Percent]
FROM @t
GROUP BY [RN]
```
|
SQL syntax to SUM a Count column for calculating percentage distribution
|
[
"",
"sql",
"sql-server",
"count",
"sql-server-2014",
""
] |
First: yes, I know number formatting shouldn't be done in the data layer, but I'm working with an application I can't modify, nor can I format the result set in the application. I can only write a query.
For a SQL Server 2008 query I want to use a space as the thousands separator and also round to two decimals. Example: `123363.249999` should be displayed as `123 363,25`.
I've tried:
```
SELECT REPLACE(CONVERT(varchar, CONVERT(money, 123363.249999), 1), '.00', '')
```
But that results in `123,363.25`. I'm looking for nicer solution than just replacing comma with space and dot with comma. It would be nice to set the locale in the query.
|
For SQL Server 2008, I think your best bet for a "nicer" solution within SQL Server would be to write a custom [CLR function](https://msdn.microsoft.com/en-us/library/ms131043.aspx) to handle the formatting based on region/locale.
For SQL Server 2012 or later, the [FORMAT](https://msdn.microsoft.com/en-us/library/hh213505.aspx) function has been introduced.
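For comparison, the target format from the question (space as the thousand separator, comma as the decimal mark) is a one-liner in most general-purpose languages; a Python sketch:

```python
def format_se(value):
    """Format with space as thousand separator and comma as decimal mark."""
    # Build the standard comma/dot format first, then swap the separators.
    return f"{value:,.2f}".replace(",", " ").replace(".", ",")

print(format_se(123363.249999))  # 123 363,25
```

This mirrors the double-REPLACE trick from the other answer, just applied in the client instead of the data layer.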
|
I think this should work for you:
```
SELECT REPLACE(REPLACE(CONVERT(varchar, CONVERT(money, 12765763363.249999), 1), ',', ' '), '.', ',');
```
It will output the following:
```
12 765 763 363,25
```
|
Format number with space as thousand separator
|
[
"",
"sql",
"sql-server",
""
] |
I have a table like this:
```
create table person (
id integer,
Email varchar(100)
);
insert into person values (1, 'john@example.com');
insert into person values (2, 'bob@example.com');
insert into person values (3, 'john@example.com');
insert into person values (4, 'john@example.com');
```
Now , I need delete duplicate lines keep the min Id which means after delete the table will be:
```
+----+------------------+
| Id | Email |
+----+------------------+
| 1 | john@example.com |
| 2 | bob@example.com |
+----+------------------+
```
I wrote this to get a list of non-duplicate ids:
```
select Id from Person group by Email;
```
And delete code is:
```
delete from Person
where Person.Id not in
(select Id from Person group by Email);
```
But it does not work for me. Why can't I write it like this, and how can I fix it?
|
This query will delete all duplicated rows, and keep the one with the lowest ID:
```
DELETE p1.*
FROM
person p1 INNER JOIN person p2
ON p1.Email=p2.Email
AND p1.id>p2.id
```
Please see it working [here](http://sqlfiddle.com/#!9/1e902/1).
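MySQL rejects the question's self-referencing form (error 1093, "You can't specify target table for update in FROM clause"), which is why the self-join is used. SQLite has no such restriction, so the original NOT IN idea can be demonstrated there directly; a runnable check against the example data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE person (id INTEGER, email TEXT)")
con.executemany("INSERT INTO person VALUES (?, ?)", [
    (1, "john@example.com"), (2, "bob@example.com"),
    (3, "john@example.com"), (4, "john@example.com"),
])

# Keep only the row with the minimum id for each email.
con.execute("""
    DELETE FROM person
    WHERE id NOT IN (SELECT MIN(id) FROM person GROUP BY email)
""")
rows = con.execute("SELECT id, email FROM person ORDER BY id").fetchall()
print(rows)  # [(1, 'john@example.com'), (2, 'bob@example.com')]
```

Note that the subquery must select `MIN(id)` per email; the question's plain `group by Email` picks an arbitrary id per group, which is the underlying logic error.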
|
You can easily do it with a JOIN:
```
delete p from person p
join (
select min(id) as id ,email from person group by email
)p1
on p1.id < p.id and p1.Email = p.Email ;
```
|
MySQL delete with where not in error
|
[
"",
"mysql",
"sql",
""
] |
Say I have a table with all sales called sales with columns itemid, storeid, sale, and date
I then have a table called storeregion that has the columns storeid and region. If I want to get the sales in a region for a specific date range:
```
select region, sum(sale)
from sales s
inner join storeregion sr on s.storeid=sr.storeid
where date between 'whatever' and 'whatever'
group by region
```
So I get a result like this:
East|500
West|400
OK cool. Now, I also have a table called itemcategory with columns itemid and category. I want to see the sales of each category in each region. I can do something like this.
```
select sr.region, ic.category, sum(sale)
from sales s
inner join storeregion sr on s.storeid=sr.storeid
inner join itemcategory ic on s.itemid=ic.itemid
where date between 'whatever' and 'whatever'
group by sr.region, ic.category
```
So I get a result like this:
East|Toys|100
East|Books|200
East|Games|200
West|Toys|300
West|Games|100
Now what I really want to do is find where the sales of one category in one region is more than 50% of the total sales in that same region. So as per my example in the first query I get the result:
West|400
and in the second query I get the result:
West|Toys|300
which is greater than 50% of the total sales in the region.
I want to write one query that will only give me the result
West|Toys|300
because it is more than 50% of the sales in the region. Any ideas?
|
You do that with a *subquery*. join your query to a subquery that calculates the total by region, with a condition that sales > 50% of that total:
```
select sr.region, ic.category, sum(sale) sales
from sales s
inner join storeregion sr on s.storeid=sr.storeid
inner join itemcategory ic on s.itemid=ic.itemid
INNER JOIN
(
select region, sum(sale) sales
from sales s
inner join storeregion sr on s.storeid=sr.storeid
where date between 'whatever' and 'whatever'
group by region
) st ON sr.region = st.region
where date between 'whatever' and 'whatever'
group by sr.region,ic.category, st.sales
having sum(sale) > st.sales * 0.50
```
Note that the condition needs to be in a HAVING clause since it applies to the group total, not a record value.
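The subquery-per-region approach can be verified on a toy dataset. The sketch below flattens the three tables into a single `sales(region, category, sale)` table to stay self-contained (an assumption for the demo; the real query keeps the joins), but the per-category versus per-region comparison is the same:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, category TEXT, sale INTEGER)")
con.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
    ("East", "Toys", 100), ("East", "Books", 200), ("East", "Games", 200),
    ("West", "Toys", 300), ("West", "Games", 100),
])

# Join per-category totals to per-region totals and keep categories
# holding more than 50% of their region's sales.
rows = con.execute("""
    SELECT rc.region, rc.category, rc.catsale
    FROM (SELECT region, category, SUM(sale) AS catsale
          FROM sales GROUP BY region, category) rc
    JOIN (SELECT region, SUM(sale) AS regsale
          FROM sales GROUP BY region) rt
      ON rc.region = rt.region
    WHERE rc.catsale > 0.5 * rt.regsale
""").fetchall()
print(rows)  # [('West', 'Toys', 300)]
```

Only West/Toys survives: 300 of West's 400 is 75%, while no East category exceeds half of East's 500.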
|
You should do this using window functions:
```
select rc.*
from (select sr.region, ic.category, sum(sale) as catsale,
sum(sum(sale)) over (partition by region) as regsale
from sales s inner join
storeregion sr
on s.storeid = sr.storeid inner join
itemcategory ic
on s.itemid = ic.itemid
where date between 'whatever' and 'whatever'
group by sr.region, ic.category
) rc
where catsale >= 0.5 * regsale;
```
In general, window functions not only result in shorter queries, but they also perform better than the equivalent queries using multiple joins and aggregations.
|
comparison of joins
|
[
"",
"sql",
"sql-server",
""
] |
I am researching how to read in data from a server directly to a data frame in R. In the past I have written SQL queries that were over 50 lines long (with all the selects and joins). Any advice on how to write long queries in R? Is there some way to write the query elsewhere in R, then paste it in to the "sqlQuery" part of the code?
|
Keep long SQL queries in .sql files and read them in using readLines + paste with collapse='\n'
```
my_query <- paste(readLines('your_query.sql'), collapse='\n')
results <- sqlQuery(con, my_query)
```
|
You can paste any SQL query into R as is and then simply replace the newlines + spaces with a single space. For instance:
```
## Connect ot DB
library(RODBC)
con <- odbcConnect('MY_DB')
## Paste the query as is (you can have how many spaces and lines you want)
query <-
"
SELECT [Serial Number]
,[Series]
,[Customer Name]
,[Zip_Code]
FROM [dbo].[some_db]
where [Is current] = 'Yes' and
[Serial Number] LIKE '5%' and
[Series] = '20'
order by [Serial Number]
"
## Simply replace the new lines + spaces with a space and you good to go
res <- sqlQuery(con, gsub("\\n\\s+", " ", query))
close(con)
```
|
Writing Lengthy SQL queries in R
|
[
"",
"mysql",
"sql",
"sql-server",
"r",
"rodbc",
""
] |
I have a very simple (I think it's simple, but I'm still struggling!) problem. I've got a table of cars. I've got another table of classes of cars. I want to join one to the other to get the class for each car.
The cars table is simple with a model and a make. The class table also has a model and a make and a type of class. The problem arises when I want to group cars of the same make together regardless of their model.
So for example, I have two vehicles:
```
id vehiclemake vehiclemodel
1 AUDI R8
2 AUDI Quattro
```
And I have two classes:
```
id vehiclemake vehiclemodel classtype
1 AUDI R8 A
2 AUDI NULL B
```
The AUDI R8 will match to class type A. I want all other AUDI's regardless of their model to match to class type B.
I've got some sample code here so you can have a play!
```
create table #vehicle(id int, vehiclemake varchar(10), vehiclemodel varchar(10))
create table #vehicleclass(id int, vehiclemake varchar(10), vehiclemodel varchar(10), classtype varchar(1))
insert into #vehicle values(1, 'AUDI', 'R8')
insert into #vehicle values(2, 'AUDI', 'Quattro')
insert into #vehicleclass values(1, 'AUDI', 'R8', 'A')
insert into #vehicleclass values(2, 'AUDI', null, 'B')
select
*
from
#vehicle v
left join #vehicleclass vc on
(v.vehiclemake = vc.vehiclemake and v.vehiclemodel = vc.vehiclemodel)
drop table #vehicle
drop table #vehicleclass
```
The statement above doesn't join the Quattro record to the B class record.
|
Maybe this is what you want:
```
SELECT *
FROM #vehicle v
JOIN #vehicleclass vc ON v.vehiclemake = vc.vehiclemake
AND ( v.vehiclemodel = vc.vehiclemodel
OR ( vc.vehiclemodel IS NULL
AND v.vehiclemodel NOT IN (
SELECT vehiclemodel
FROM #vehicleclass
WHERE vehiclemodel IS NOT NULL )
)
)
```
Output:
```
id vehiclemake vehiclemodel id vehiclemake vehiclemodel classtype
1 AUDI R8 1 AUDI R8 A
2 AUDI Quattro 2 AUDI NULL B
```
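The same fallback-join pattern runs unchanged (minus the temp-table syntax) in SQLite; a quick Python check against the question's data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE vehicle (id INT, make TEXT, model TEXT);
    CREATE TABLE vehicleclass (id INT, make TEXT, model TEXT, classtype TEXT);
    INSERT INTO vehicle VALUES (1, 'AUDI', 'R8'), (2, 'AUDI', 'Quattro');
    INSERT INTO vehicleclass VALUES (1, 'AUDI', 'R8', 'A'),
                                    (2, 'AUDI', NULL, 'B');
""")

# Exact model match wins; a NULL-model class row acts as the catch-all
# for models that have no specific class entry.
rows = con.execute("""
    SELECT v.model, vc.classtype
    FROM vehicle v
    JOIN vehicleclass vc
      ON v.make = vc.make
     AND (v.model = vc.model
          OR (vc.model IS NULL
              AND v.model NOT IN (SELECT model FROM vehicleclass
                                  WHERE model IS NOT NULL)))
    ORDER BY v.id
""").fetchall()
print(rows)  # [('R8', 'A'), ('Quattro', 'B')]
```

The R8 hits its specific class A, and the Quattro falls through to the catch-all class B.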
|
Use a [COALESCE](https://msdn.microsoft.com/en-us/library/ms190349.aspx) on the join of `vehiclemodel` so that when the model is NULL on the `vehicleclass` table, the vehicle will instead match on its own model.
**[EDIT]**: After reading the user's comment, try this:
```
SELECT *
FROM #vehicle v
INNER JOIN #vehicleclass vc
ON v.vehiclemake = vc.vehiclemake
AND v.vehiclemodel = vc.vehiclemodel
UNION
SELECT *
FROM #vehicle v
INNER JOIN #vehicleclass vc
ON v.vehiclemake = vc.vehiclemake
AND vc.vehiclemodel IS NULL
WHERE NOT EXISTS(SELECT 1
FROM #vehicleclass vc2
where vc2.vehiclemake = v.vehiclemake
and vc2.vehiclemodel = v.vehiclemodel);
```
|
Catch All Record in a Join expression
|
[
"",
"sql",
"t-sql",
"join",
"sql-server-2008-r2",
"left-join",
""
] |
I'm having some trouble with a stored procedure I'm running in SQL. It's for a class I'm taking and the stored procedure is from the textbook, but the textbook has had a lot of errors in their SQL code so I wouldn't be surprised if that was the case here, too. I asked the professor and he seems to be having trouble figuring it out as well.
Here is the stored procedure I'm using, as generated by SQL Server Management Studio's "Create to" function:
```
USE [VGA]
GO
/****** Object: StoredProcedure [dbo].[InsertCustomerWithTransaction] Script Date: 4/8/2015 7:55:00 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[InsertCustomerWithTransaction]
@NewCustomerLastName CHAR(25),
@NewCustomerFirstName CHAR(25),
@NewCustomerAreaCode CHAR(3),
@NewCustomerPhoneNumber CHAR(8),
@NewCustomerEmail VARCHAR(100),
@ArtistLastName CHAR(25),
@WorkTitle CHAR(35),
@WorkCopy CHAR(12),
@TransSalesPrice NUMERIC(8,2)
AS
DECLARE @RowCount AS Int,
@ArtistID AS Int,
@CustomerID AS Int,
@WorkID AS Int,
@TransactionID AS Int
SELECT @RowCount = COUNT(*)
FROM dbo.CUSTOMER
WHERE LastName = @NewCustomerLastName
AND FirstName = @NewCustomerFirstName
AND AreaCode = @NewCustomerAreaCode
AND PhoneNumber = @NewCustomerPhoneNumber
AND Email = @NewCustomerEmail
IF (@RowCount > 0)
BEGIN
PRINT '****************************************************'
PRINT ''
PRINT ' The Customer is already in the database. '
PRINT ''
PRINT ' CustomerLastName = '+@NewCustomerLastName
PRINT ' CustomerFirstName = '+@NewCustomerFirstName
PRINT ''
PRINT '****************************************************'
END
ELSE
BEGIN TRANSACTION
INSERT INTO dbo.CUSTOMER
(LastName, FirstName, AreaCode, PhoneNumber, Email)
VALUES
(@NewCustomerLastName, @NewCustomerFirstName, @NewCustomerAreaCode, @NewCustomerPhoneNumber, @NewCustomerEmail)
SELECT @CustomerID = CustomerID
FROM dbo.CUSTOMER
WHERE LastName = @NewCustomerLastName
AND FirstName = @NewCustomerFirstName
AND AreaCode = @NewCustomerAreaCode
AND PhoneNumber = @NewCustomerPhoneNumber
AND Email = @NewCustomerEmail
SELECT @ArtistID = ArtistID
FROM dbo.ARTIST
WHERE LastName = @ArtistLastName
IF @ArtistID IS NULL
BEGIN
PRINT '****************************************************'
PRINT ''
PRINT ' Invalid ArtistID '
PRINT ''
PRINT '****************************************************'
ROLLBACK TRANSACTION
RETURN
END
SELECT @WorkID = WorkID
FROM dbo.WORK
WHERE ArtistID = @ArtistID
AND Title = @WorkTitle
AND Copy = @WorkCopy
IF @WorkID IS NULL
BEGIN
PRINT '****************************************************'
PRINT ''
PRINT ' Invalid WorkID '
PRINT ''
PRINT '****************************************************'
ROLLBACK TRANSACTION
RETURN
END
ELSE
BEGIN
PRINT '****************************************************'
PRINT ''
PRINT ' WorkID = '+CONVERT(CHAR(6), @WorkID)
PRINT ''
PRINT '****************************************************'
END
SELECT @TransactionID = TransactionID
FROM dbo.TRANS
WHERE WorkID = @WorkID
AND SalesPrice = NULL
IF @TransactionID IS NULL
BEGIN
PRINT '****************************************************'
PRINT ''
PRINT ' Invalid TransactionID '
PRINT ''
PRINT '****************************************************'
ROLLBACK TRANSACTION
RETURN
END
BEGIN
UPDATE dbo.TRANS
SET DateSold = GETDATE(),
SalesPrice = @TransSalesPrice,
CustomerID = @CustomerID
WHERE TransactionID = @TransactionID
INSERT INTO dbo.CUSTOMER_ARTIST_INT (CustomerID, ArtistID)
VALUES (@CustomerID, @ArtistID)
END
COMMIT TRANSACTION
BEGIN
PRINT '****************************************************'
PRINT ''
PRINT ' The new Customer is now in the database. '
PRINT ''
PRINT ' Customer Last Name = '+@NewCustomerLastName
PRINT ' Customer First Name = '+@NewCustomerFirstName
PRINT ''
PRINT '****************************************************'
PRINT '****************************************************'
PRINT ''
PRINT ' Transaction complete. '
PRINT ''
PRINT ' TransactionID = '+CONVERT(CHAR(6), @TransactionID)
PRINT ' ArtistID = '+CONVERT(CHAR(6), @ArtistID)
PRINT ' WorkID = '+CONVERT(CHAR(6), @WorkID)
PRINT ' Sales Price = '+CONVERT(CHAR(12), @TransSalesPrice)
PRINT ''
PRINT '****************************************************'
PRINT '****************************************************'
PRINT ''
PRINT ' New CUSTOMER_ARTIST_INT row added. '
PRINT ''
PRINT ' ArtistID = '+CONVERT(CHAR(6), @ArtistID)
PRINT ' CustomerID = '+CONVERT(CHAR(6), @CustomerID)
PRINT ''
PRINT '****************************************************'
END
GO
```
And here is the code that I'm running:
```
EXEC InsertCustomerWithTransaction
@NewCustomerLastName = 'Gliddens',
@NewCustomerFirstName = 'Melinda',
@NewCustomerAreaCode = '360',
@NewCustomerPhoneNumber = '765-8877',
@NewCustomerEmail = 'Melinda.Gliddens@somewhere.com',
@ArtistLastName = 'Sargent',
@WorkTitle = 'Spanish Dancer',
@WorkCopy = '588/750',
@TransSalesPrice = 350.00;
```
When I run the code, it works fine up until the section that starts with
```
SELECT @TransactionID = TransactionID
```
At this point it gives me the error "Invalid TransactionID". Since the input requires the WorkID and that the SalesPrice column be NULL, I added a bit of code to give me the WorkID at this point, and the output from this was the correct WorkID. [screenshot](https://i.stack.imgur.com/nkKww.png) I went into the database and double-checked that the SalesPrice column in that row is NULL, and it is. [screenshot](https://i.stack.imgur.com/PpDD4.png) But for some reason the stored procedure cannot find the correct TransactionID, even though I can manually find it in the table. Can someone help me figure out what I'm doing wrong here? Thank you!
|
It's `IS NULL` not = NULL
in this bit
```
SELECT @TransactionID = TransactionID
FROM dbo.TRANS
WHERE WorkID = @WorkID
AND SalesPrice = NULL
```
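The difference is easy to demonstrate in any database with standard NULL semantics (SQLite behaves like SQL Server with `ANSI_NULLS ON`): `= NULL` evaluates to UNKNOWN and matches nothing, while `IS NULL` matches NULL rows. A minimal sketch (table and values invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE trans (id INT, salesprice REAL)")
con.execute("INSERT INTO trans VALUES (1, NULL)")

# '= NULL' yields UNKNOWN for every row, so nothing matches;
# 'IS NULL' is the correct test.
eq  = con.execute("SELECT COUNT(*) FROM trans WHERE salesprice =  NULL").fetchone()[0]
isn = con.execute("SELECT COUNT(*) FROM trans WHERE salesprice IS NULL").fetchone()[0]
print(eq, isn)  # 0 1
```

This is exactly why `@TransactionID` stayed NULL in the stored procedure even though the row was visibly there.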
|
CRLast86... just wanted to point out that `= NULL` and `IS NULL` are both valid. `IS NULL` is the ANSI standard, but MS SQL allows `= NULL`, except when you have changed the ANSI_NULLS mode, as you have with the `SET ANSI_NULLS ON` statement near the top. Best practice says use `SET ANSI_NULLS ON` so that it will not let you use `= NULL`. So, in the future, when you see an `= NULL` that is working, you will know that conditionally both are valid. Do a search on `SET ANSI_NULLS ON` and you will get the whole long story.
|
Stored Procedure error SQL Server 2014
|
[
"",
"sql",
"sql-server",
"t-sql",
"stored-procedures",
"ssms",
""
] |
How to:
1. Get values from table1.column1 (e.g. abc)
`table1.column1=abc`
2. Concatenate them with certain fixed strings, e.g.
`xxx
yyy
zzz`
3. Insert the results as separate rows in table2.column2. The final result should be rows with values like this:
`table2.column2=abc_xxx
table2.column2=abc_yyy
table2.column2=abc_zzz`
(table2 has a connecting column indicating to which ID the table2.column2 record corresponds in case this matters)
4. Repeat this process for all records in **table1.column1** which have **table1.item\_id > 100**
**EDIT**: For certain convenience I would like the final result rows sequence to look like:
`source1_xxx
source1_yyy
source1_zzz
source2_xxx
source2_yyy
source2_zzz`
and not like:
`source1_xxx
source2_xxx
source1_yyy
source2_yyy
source1_zzz
source2_zzz`
|
If I understand you correctly, you want N (e.g. 3) entries for every existing row in Table1. If so, you can CROSS JOIN Table1 to a projection of the values, like so:
```
INSERT INTO Table2(column2)
SELECT CONCAT(t1.column1, '_', x.col)
FROM Table1 t1
CROSS JOIN
(SELECT 'xxx' AS col
UNION
SELECT 'yyy'
UNION
SELECT 'zzz') x;
```
[SqlFiddle here](http://sqlfiddle.com/#!2/7b526/1)
**Edit**
The query was updated to be cognisant of the ordering and filtering requirements as well:
```
INSERT INTO Table2(column2)
SELECT CONCAT(t1.column1, '_', x.col)
FROM Table1 t1
CROSS JOIN
(SELECT 'xxx' AS col
UNION
SELECT 'yyy'
UNION
SELECT 'zzz') x
WHERE t1.ID > 100
ORDER BY t1.column1 ASC, x.col ASC;
```
[With an updated SqlFiddle](http://sqlfiddle.com/#!2/9be7b/1)
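The cross-join, filter, and ordering can be checked end-to-end in SQLite (which uses `||` where MySQL uses `CONCAT`; the table data below is invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1 (item_id INT, column1 TEXT)")
con.executemany("INSERT INTO table1 VALUES (?, ?)",
                [(101, "source1"), (102, "source2"), (50, "skipped")])

# CROSS JOIN each qualifying row against the three fixed suffixes,
# ordered source-first as the question's edit requests.
rows = con.execute("""
    SELECT t1.column1 || '_' || x.col
    FROM table1 t1
    CROSS JOIN (SELECT 'xxx' AS col UNION
                SELECT 'yyy' UNION
                SELECT 'zzz') x
    WHERE t1.item_id > 100
    ORDER BY t1.column1 ASC, x.col ASC
""").fetchall()
print([r[0] for r in rows])
# ['source1_xxx', 'source1_yyy', 'source1_zzz',
#  'source2_xxx', 'source2_yyy', 'source2_zzz']
```

The `item_id > 100` filter drops the `skipped` row, and the ORDER BY groups all suffixes of one source together.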
|
Modifying the answer. Credit goes to StuartLC; he is right, you would need to use a cross join:
```
INSERT INTO Table2(column2)
SELECT CONCAT(t1.column1, '_', x.col)
FROM Table1 t1
CROSS JOIN
(SELECT 'xxx' AS col
UNION
SELECT 'yyy'
UNION
SELECT 'zzz') x;
```
|
How to get values from one table column, concatenate them with a string and insert them in another table?
|
[
"",
"mysql",
"sql",
"percona",
""
] |
I have a query that selects particular `stored procedures`, `triggers` and `functions` in my SQL database. I have retrieved 86 in total.
I need to find which of these are using a certain string inside.
This string is a name of a `table`.
I do not want to open each of these individually and search for it.
I'm thinking of having a script that looks inside of these and find what I need.
What I think I might need is something like `from tableName` or `inner join on tableName` or `join on tableName`, etc.
What is a good way to do this in `SQL`?
Thank you
|
To search one SQL Server database for a specific piece of text in a stored procedure you could do this:
```
DECLARE @SearchText VARCHAR(100) = 'TableXYZ';
SELECT DISTINCT
o.name,
o.type_desc
FROM
sys.sql_modules m
INNER JOIN sys.objects o ON m.object_id = o.object_id
WHERE
m.[definition] Like '%' + @SearchText + '%';
```
Obviously this is a bit of a hack and won't work out if your table name is called something daft like "SELECT" as that will just cause loads of false-positive results.
One alternative is to use the sys.dm\_sql\_referencing\_entities dynamic management function, but my experience is that it can't be trusted. In general, dependencies don't work very well in SQL Server due to some poor design decisions, e.g. deferred name resolution.
|
Microsoft SQL would cover Access, Sybase and SQL Server (and possibly more) .... But I suspect you are using SQL Server, as such I would suggest [sys.dm\_sql\_referencing\_entities](https://msdn.microsoft.com/en-gb/library/bb630351.aspx) rather than searching a definition for a particular string -
```
SELECT r.* , o.type_desc, m.definition
FROM sys.dm_sql_referencing_entities('dbo.TableName', 'OBJECT') AS r
INNER JOIN sys.all_objects AS o
ON o.[object_id] = r.referencing_id
INNER JOIN sys.sql_modules AS m
ON m.[object_id] = o.[object_id];
```
|
How to search for a specific string occurrence in stored procedures
|
[
"",
"sql",
"sql-server",
"function",
"t-sql",
"stored-procedures",
""
] |
OK, so what I am trying to do is pull the decimal values from a string. My issue is that the strings are not uniform: some may be 6.9% or 5.2mg/L, and some may have no number values at all. What I would like to do is return just the decimal (or integer) value from the string, and if that does not exist then return NULL.
I have tried this function:
```
CREATE FUNCTION dbo.udf_GetNumeric
(@strAlphaNumeric VARCHAR(256))
RETURNS VARCHAR(256)
AS
BEGIN
DECLARE @intAlpha INT
SET @intAlpha = PATINDEX('%[^0-9]%', @strAlphaNumeric)
BEGIN
WHILE @intAlpha > 0
BEGIN
SET @strAlphaNumeric = STUFF(@strAlphaNumeric, @intAlpha, 1, '' )
SET @intAlpha = PATINDEX('%[^0-9]%', @strAlphaNumeric )
END
END
RETURN ISNULL(@strAlphaNumeric,0)
END
```
But that only returns the numbers with no decimal place.
|
An alternative approach is to remove the characters after the number and before the number. The following expression does this:
```
select val,
stuff(stuff(val+'x', patindex('%[0-9][^0-9.]%', val+'x') + 1, len(val), ''
), 1, patindex('%[0-9]%', val) - 1, '')
from (values ('test123 xxx'), ('123.4'), ('123.4yyyyy'), ('tasdf 8.9'), ('asdb'), ('.2345')) as t(val);
```
The inner `stuff()` removes the characters after the number. The `+'x'` handles the problem that occurs when the number is at the end of the string. The outer `stuff()` handles the part before the number.
This does assume that there is only one number in the string. You can check this with a where clause like:
```
where val not like '%[0-9]%[^0-9.]%[0-9]%'
```
|
You just need to add a `.` (dot) in both `PATINDEX` expressions:
```
CREATE FUNCTION dbo.Udf_getnumeric (@strAlphaNumeric VARCHAR(256))
returns VARCHAR(256)
AS
BEGIN
DECLARE @intAlpha INT
SET @intAlpha = Patindex('%[^0-9.]%', @strAlphaNumeric)
BEGIN
WHILE @intAlpha > 0
BEGIN
SET @strAlphaNumeric = Stuff(@strAlphaNumeric, @intAlpha, 1, '')
SET @intAlpha = Patindex('%[^0-9.]%', @strAlphaNumeric)
END
END
RETURN Isnull(@strAlphaNumeric, 0)
END
```
|
T-SQL to pull decimal values from a string
|
[
"",
"sql",
"t-sql",
""
] |
I have records as follows:
1) BEL
1) MERSEN
A) VISHAY-SPRAGUE
CIRCUIT PARTNERS
BENTEK
CIRCUIT TEST
I want to return a distinct set where, if the record has a closing bracket, the entire bracket prefix is removed (`1) MERSEN` becomes `MERSEN`); otherwise the record is returned as is. This is an ad hoc, one-off query. I've tried something like this:
```
IF (CHARINDEX(')', (SELECT [MANUFACTURER] FROM [dbo].[QPL_ITMSUPAC_NF]), 1) > 0)
SELECT DISTINCT SUBSTRING([dbo].[QPL_ITMSUPAC_NF].[MANUFACTURER], 4, 99)
FROM [dbo].[QPL_ITMSUPAC_NF]
ELSE
SELECT DISTINCT [dbo].[QPL_ITMSUPAC_NF].[MANUFACTURER]
FROM [dbo].[QPL_ITMSUPAC_NF]
```
...but get the error:
> Subquery returned more than 1 value...
The above was in a procedure.
Thoughts?
|
Use `exists` and move the function into the subquery. In this case, the `charindex()` is equivalent to `like ')%'`:
```
IF (EXISTS (SELECT 1 FROM [dbo].[QPL_ITMSUPAC_NF] WHERE MANUFACTURER LIKE ')%') )
SELECT DISTINCT SUBSTRING([dbo].[QPL_ITMSUPAC_NF].[MANUFACTURER], 4, 99)
FROM [dbo].[QPL_ITMSUPAC_NF]
ELSE
SELECT DISTINCT [dbo].[QPL_ITMSUPAC_NF].[MANUFACTURER]
FROM [dbo].[QPL_ITMSUPAC_NF]
```
|
You could do it with using `right` function, with taking help from `len` and `charindex` functions:
```
select right(MANUFACTURER,len(MANUFACTURER)-charindex(')',MANUFACTURER))
from QPL_ITMSUPAC_NF
```
*Example:*
```
select right('1)a sample name',len('1)a sample name')-charindex(')','1)a sample name'))
```
*Output:*
```
a sample name
```
|
Return records based on the result of a function
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I am brushing up on my SQL and using the Chinook database for practice.
the data model can be found here: <https://chinookdatabase.codeplex.com/wikipage?title=Chinook_Schema&referringTitle=Documentation>
The goal is to write a trigger so that the Total in the Invoice table is updated when I insert or delete invoicelines.
```
CREATE TRIGGER UpdateTotal ON InvoiceLine
AFTER INSERT, DELETE
AS
UPDATE Invoice
SET Total = (
SELECT sum(LineSum) AS InvoiceTotal
FROM (
SELECT InvoiceId, (UnitPrice * Quantity) AS LineSum
FROM InvoiceLine
) AS WithLineSum
GROUP BY InvoiceId
HAVING WithLineSum.InvoiceId = Invoice.InvoiceId
)
```
and this works great when I insert and delete invoiceline records, except for when I delete the very last invoiceline for an invoice. When I do that, i get the error:
Cannot insert the value NULL into column 'Total', table 'Chinook.dbo.Invoice'; column does not allow nulls.
UPDATE fails.
so basically I need to set it to zero if sum(LineSum) is null
I am struggling to figure out how to structure the conditional, can anyone please help?
|
Your approach is inefficient as you are updating every Invoice every time instead of only those affected by the operation. You need to leverage the special [Inserted and Deleted tables](https://msdn.microsoft.com/en-us/library/ms191300.aspx) within the trigger.
```
CREATE TRIGGER UpdateTotal ON InvoiceLine
AFTER INSERT, DELETE
AS
SET NOCOUNT ON;
UPDATE Inv
SET Total = Total + (i.UnitPrice * i.Quantity)
FROM Inserted i
INNER JOIN Invoice Inv
ON i.InvoiceId = Inv.InvoiceId;
UPDATE Inv
SET Total = Total - (d.UnitPrice * d.Quantity)
FROM Deleted d
INNER JOIN Invoice Inv
ON d.InvoiceId = Inv.InvoiceId;
```
|
You should wrap your subquery in a COALESCE, basically
```
SET Total = COALESCE(...subquery..., 0)
```
|
SQL Trigger for Chinook database: condition when sum is null
|
[
"",
"sql",
"sql-server",
"triggers",
"conditional-statements",
""
] |
```
SELECT COUNT(dev_link.id) c, pattern.direction, pattern.section, (
SELECT to_stop AS `stop`
FROM dev_link d
WHERE d.section = section
ORDER BY d.sequence DESC
LIMIT 1
) code
FROM pattern
INNER JOIN dev_link ON dev_link.section = pattern.section
WHERE pattern.service = "YSEO252"
GROUP BY pattern.code
ORDER BY c DESC
```

Above is a query and its results. Currently it selects too much data; I need to narrow it down to the following:
1 row with direction `outbound` and another row with direction `inbound`. If it was just that I could easily `GROUP BY` the direction. However, I need to select an `inbound` and an `outbound` row with the highest `c` value. So based on the results above the 1st row and the third row would be selected.
How can I change my query so that it returns these rows?
|
```
( SELECT ... WHERE ... AND pattern.direction='inbound' ... LIMIT 1)
UNION ALL
( SELECT ... WHERE ... AND pattern.direction='outbound' ... LIMIT 1 );
```
Otherwise, each SELECT is the same as the original one.
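The one-row-per-direction shape can be checked on a toy table. SQLite requires each limited branch to be wrapped in a subquery rather than bare parentheses, but the idea is identical (data invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE results (direction TEXT, section TEXT, c INT)")
con.executemany("INSERT INTO results VALUES (?, ?, ?)", [
    ("outbound", "s1", 14), ("outbound", "s2", 9),
    ("inbound",  "s3", 12), ("inbound",  "s4", 7),
])

# One branch per direction, each keeping only its highest-count row.
rows = con.execute("""
    SELECT * FROM (SELECT direction, section, c FROM results
                   WHERE direction = 'outbound' ORDER BY c DESC LIMIT 1)
    UNION ALL
    SELECT * FROM (SELECT direction, section, c FROM results
                   WHERE direction = 'inbound' ORDER BY c DESC LIMIT 1)
""").fetchall()
print(rows)  # [('outbound', 's1', 14), ('inbound', 's3', 12)]
```

Each branch is an independent greatest-per-group lookup, so an index on `(direction, c)` would serve both.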
|
I would use variables to simulate row\_number()
```
SELECT * FROM (
SELECT * ,
@rn := if(@prevDirection = direction, @rn + 1, 1) as rn,
@prevDirection := direction
FROM (
SELECT COUNT(dev_link.id) c, pattern.direction, pattern.section, (
SELECT to_stop AS `stop`
FROM dev_link d
WHERE d.section = section
ORDER BY d.sequence DESC
LIMIT 1
) code
FROM pattern
INNER JOIN dev_link ON dev_link.section = pattern.section
WHERE pattern.service = "YSEO252"
GROUP BY pattern.code
) t1 ORDER BY direction, c DESC
) t1 WHERE rn = 1
```
|
MySQL GROUP BY field with highest COUNT (based on JOIN)
|
[
"",
"mysql",
"sql",
"innodb",
""
] |
I have a program that needs to run queries on a number of very large Oracle tables (the largest with tens of millions of rows). The output of these queries is fed into another process which (as a side effect) can record the progress of the query (i.e., the last row fetched).
It would be nice if, in the event that the task stopped half way through for some reason, it could be restarted. For this to happen, the query has to return rows in a consistent order, so it has to be sorted. The obvious thing to do is to sort on the primary key; however, there is probably going to be a penalty for this in terms of performance (an index access) versus a non-sorted solution. Given that a restart may never happen this is not desirable.
Is there some trick to ensure consistent ordering in another way? Any other suggestions for maintaining performance in this case?
EDIT: I have been looking around and seen "order by rowid" mentioned. Is this useful or even possible?
EDIT2: I am adding some benchmarks:
* With no order by: 17 seconds.
* With order by PK: 46 seconds.
* With order by rowid: 43 seconds.
So any `ORDER BY` has a savage effect on performance, and using rowid makes little difference. The accepted answer is: there is no easy way to do it.
|
The best advice I can think of is to reduce the chance of a problem occurring that might stop the process, and that means keeping the code simple. No cursors, no commits, no trying to move part of the data, just straight SQL statements.
Unless a complete restart would be a completely unacceptable disaster, I'd go for simplicity without any part-way restart code at all.
|
If you want ordered output and the queried data is unsorted, then you need to sort it somewhere and spend some resources on sorting.
So, there are at least two variants for optimization:
1. Minimize resources spent on sorting;
2. Query already sorted data.
For the first variant, Oracle on its own calculates the best plan to minimize data access and overall query time. It may be possible to choose a sort order that matches a unique index already used by the optimizer, but it's a very questionable tactic.
The second variant is about index-organized tables and about forcing Oracle, with hints, to use a specific index. That seems OK if you need to process nearly all records in a table, but if the selectivity of the query is high it significantly slows the process, even on a single table.
Think about a table with a surrogate primary key that holds a 10-year transaction history. If you need data only for the previous year and you force ordering by primary key, then Oracle needs to process records from all 10 years one by one to find the records that belong to a single year.
But if you need data for 9 of those years, then a full table scan may be faster than the index-based choice.
So the selectivity of your query is the key to choosing between a full table scan and result sorting.
For storing results and restarting the query, a good solution is to use Oracle [Streams Advanced Queuing](https://docs.oracle.com/cd/E11882_01/server.112/e11013/aq_intro.htm#ADQUE0100) to feed the other process.
All unprocessed messages in a queue are redirected to the Exception Queue, where they may be processed separately.
Because you don't specify an exact ordering for the selected messages, I suppose you need ordering only to track the unprocessed part of the records. If that's true, then with AQ you don't need ordering at all and may even process records in parallel.
So, finally, from my point of view a [Buffered Queue](https://docs.oracle.com/cd/E11882_01/server.112/e11013/aq_intro.htm#ADQUE0102) is what you really need.
|
What is the best way to ensure consistent ordering in an Oracle query?
|
[
"",
"sql",
"oracle",
"sorting",
"sql-order-by",
"data-consistency",
""
] |
I have the following query:
```
select * from table_2
where
user_1 not in (select user_id from users) or
user_2 not in (select user_id from users)
```
The problem is that it takes a very long time to execute. The tables have a few million records each. Is there any optimization that I can use for the query?
|
First, rewrite these as `not exists` . . . this often has better performance:
```
select t2.*
from table_2 t2
where not exists (select 1 from users u where u.user_id = t2.user_1) or
not exists (select 1 from users u where u.user_id = t2.user_2);
```
More importantly, create an index on `users(user_id)`, if one does not already exist.
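A quick functional check of the NOT EXISTS rewrite (SQLite shown from Python; the column names follow the question, the data is invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE users (user_id INTEGER PRIMARY KEY);
    CREATE TABLE table_2 (id INT, user_1 INT, user_2 INT);
    INSERT INTO users VALUES (1), (2), (3);
    INSERT INTO table_2 VALUES (10, 1, 2),   -- both users exist
                               (11, 1, 99),  -- user_2 missing
                               (12, 98, 2);  -- user_1 missing
""")

# Rows where either referenced user is absent from users.
rows = con.execute("""
    SELECT t2.id
    FROM table_2 t2
    WHERE NOT EXISTS (SELECT 1 FROM users u WHERE u.user_id = t2.user_1)
       OR NOT EXISTS (SELECT 1 FROM users u WHERE u.user_id = t2.user_2)
    ORDER BY t2.id
""").fetchall()
print([r[0] for r in rows])  # [11, 12]
```

With `user_id` as the primary key, each existence probe is an index lookup rather than a scan, which is where the speedup over NOT IN typically comes from.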
|
You should try this:
```
select * from table_2
LEFT JOIN `users` AS u ON u.user_id = table_2.user_1
LEFT JOIN `users` AS u2 ON u2.user_id = table_2.user_2
WHERE u.user_id is NULL and u2.user_id is NULL
```
|
Optimizing sql double "Not in" clauses
|
[
"",
"mysql",
"sql",
"database",
""
] |
So the scenario here is, I have 4 tables in the database namely:
1. "question\_info":
`CREATE TABLE question_info (
q_id mediumint(9) NOT NULL,
q_type_id int(11) NOT NULL,
q_options_id mediumint(9) NOT NULL,
q_category_id int(11) NOT NULL,
q_text varchar(2048) NOT NULL,
status tinyint(4) NOT NULL DEFAULT '0',
q_date_added date NOT NULL DEFAULT '2013-01-01',
q_difficulty_level tinyint(4) NOT NULL DEFAULT '0',
PRIMARY KEY(q_id)
);`
2. "question\_options\_info":
`CREATE TABLE question_options_info (
q_options_id mediumint(9) NOT NULL,
q_options_1 varchar(255) NOT NULL,
q_options_2 varchar(255) NOT NULL,
q_options_3 varchar(255) NOT NULL,
q_options_4 varchar(255) NOT NULL,
q_options_ex_1 varchar(1024) DEFAULT NULL,
q_options_ex_2 varchar(1024) DEFAULT NULL,
q_options_ex_3 varchar(1024) DEFAULT NULL,
q_options_ex_4 varchar(1024) DEFAULT NULL,
PRIMARY KEY(q_options_id)
);`
3. "question\_answer\_info":
```
CREATE TABLE question_answer_info (
    q_id mediumint(9) NOT NULL,
    q_options mediumint(9) NOT NULL
);
```
4. "trivia\_data":
```
CREATE TABLE trivia_data (
    q_id mediumint(9) NOT NULL,
    q_text varchar(2048) NOT NULL,
    q_options_1 varchar(255) NOT NULL,
    q_options_2 varchar(255) NOT NULL,
    q_options_3 varchar(255) NOT NULL,
    q_options_4 varchar(255) NOT NULL,
    q_options mediumint(9) NOT NULL,
    q_difficulty_level tinyint(4) NOT NULL DEFAULT '0',
    q_date_added date NOT NULL DEFAULT '2015-04-8',
    PRIMARY KEY(q_id)
);
```
So what I need is to insert data into the `trivia_data` table.
The data is returned by this query:
```
SELECT question_info.q_id, question_info.q_text,
       question_options_info.q_options_1, question_options_info.q_options_2,
       question_options_info.q_options_3, question_options_info.q_options_4,
       question_answer_info.q_options, question_info.q_difficulty_level,
       question_info.q_date_added
FROM question_info
JOIN question_options_info ON question_info.q_options_id = question_options_info.q_options_id
JOIN question_answer_info ON question_info.q_id = question_answer_info.q_id;
```
This query would return data somewhat like this:

I have already tried this specific query to insert the data:
```
INSERT INTO trivia_data VALUES(q_id, q_text, q_options_1, q_options_2,
    q_options_3, q_options_4, q_options, q_difficulty_level, q_date_added)
SELECT question_info.q_id, question_info.q_text,
       question_options_info.q_options_1, question_options_info.q_options_2,
       question_options_info.q_options_3, question_options_info.q_options_4,
       question_answer_info.q_options, question_info.q_difficulty_level,
       question_info.q_date_added
FROM question_info
JOIN question_options_info on question_info.q_options_id = question_options_info.q_options_id
JOIN question_answer_info on question_info.q_id = question_answer_info.q_id;
```
But it always returns this error:
`near "SELECT": syntax error:`
Honestly I am a novice to SQL. So please try to explain as simply as possible.
Any help would be appreciated.
Thank You.
|
You don't need the `VALUES` keyword, as you are selecting from a query:
```
INSERT INTO trivia_data (
q_id,
q_text,
q_options_1,
q_options_2,
q_options_3,
q_options_4,
q_options,
q_difficulty_level,
q_date_added)
SELECT
question_info.q_id,
question_info.q_text,
question_options_info.q_options_1,
question_options_info.q_options_2,
question_options_info.q_options_3,
question_options_info.q_options_4,
question_answer_info.q_options,
question_info.q_difficulty_level,
question_info.q_date_added
FROM question_info
JOIN question_options_info on question_info.q_options_id = question_options_info.q_options_id
JOIN question_answer_info on question_info.q_id = question_answer_info.q_id;
```
In general if you are inserting a record then the syntax is
```
INSERT INTO <tablename> (<column1>, <column2>, ..., <columnN>)
VALUES (<value1>, <value2>, ..., <valueN>)
```
If you are inserting the results the syntax is like this:
```
INSERT INTO <tablename> (<column1>, <column2>, ..., <columnN>)
SELECT <value1>, <value2>, ..., <valueN> FROM ...
```
As you see there is no `VALUES` keyword in this case
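For illustration, a tiny runnable sketch of `INSERT INTO ... SELECT` via Python's `sqlite3` (the `src`/`dst` tables and their data are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE src (id INTEGER, txt TEXT);
    CREATE TABLE dst (id INTEGER, txt TEXT);
    INSERT INTO src VALUES (1, 'a'), (2, 'b');
""")

# No VALUES keyword here: the SELECT supplies the rows.
con.execute("INSERT INTO dst (id, txt) SELECT id, txt FROM src")

rows = con.execute("SELECT * FROM dst ORDER BY id").fetchall()
print(rows)  # → [(1, 'a'), (2, 'b')]
```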
|
Remove `VALUES` from your SQL as the values come from SELECT in this case.
```
INSERT INTO trivia_data (
q_id,
q_text,
q_options_1,
q_options_2,
q_options_3,
q_options_4,
q_options,
q_difficulty_level,
q_date_added
)
SELECT
question_info.q_id,
question_info.q_text,
question_options_info.q_options_1,
question_options_info.q_options_2,
question_options_info.q_options_3,
question_options_info.q_options_4,
question_answer_info.q_options,
question_info.q_difficulty_level,
question_info.q_date_added
FROM question_info
JOIN question_options_info
ON question_info.q_options_id = question_options_info.q_options_id
JOIN question_answer_info
ON question_info.q_id = question_answer_info.q_id;
```
|
SQLite - INSERT INTO SELECT - how to insert data of "join of 3 existing tables into a new table"?
|
[
"",
"sql",
"database",
"sqlite",
"sql-insert",
""
] |


Hello there, I have a question for you guys. You can see the enrollment and student tables in the images below. I want to run a query that returns the first and last names from the student table, but only for students who belong to a section with fewer than 5 enrolled students.
Does that make sense? If I am not clear please do ask me a question. Any help with this is greatly appreciated.
|
This should work.
```
SELECT s.FIRST_NAME, s.LAST_NAME
FROM student s
WHERE s.STUDENT_ID IN (
SELECT e1.STUDENT_ID
    FROM enrollment e1
WHERE e1.SECTION_ID IN (
SELECT e2.SECTION_ID
        FROM enrollment e2
GROUP BY e2.SECTION_ID HAVING COUNT(DISTINCT e2.STUDENT_ID) < 5
)
)
```
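A runnable sketch of this nested-`IN` approach with tiny made-up data, using Python's `sqlite3` (the column names `student_id`, `first_name`, `last_name`, `section_id` are assumed from the query, since the real schema is only shown in the screenshots):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE student (student_id INTEGER PRIMARY KEY,
                          first_name TEXT, last_name TEXT);
    CREATE TABLE enrollment (student_id INTEGER, section_id INTEGER);
    INSERT INTO student VALUES
        (1,'Ann','Ames'), (2,'Bob','Best'), (3,'Cal','Cole'),
        (4,'Dee','Dunn'), (5,'Eve','Earl'), (6,'Fay','Ford');
    -- section 100 has 5 students, section 200 has only 1
    INSERT INTO enrollment VALUES
        (1,100),(2,100),(3,100),(4,100),(5,100),(6,200);
""")

rows = con.execute("""
    SELECT s.first_name, s.last_name
    FROM student s
    WHERE s.student_id IN (
        SELECT e1.student_id FROM enrollment e1
        WHERE e1.section_id IN (
            SELECT e2.section_id FROM enrollment e2
            GROUP BY e2.section_id
            HAVING COUNT(DISTINCT e2.student_id) < 5))
""").fetchall()
print(rows)  # → [('Fay', 'Ford')]
```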
|
Why access one table more than once, if you can save some work by using analytic functions?
```
select
first_name, last_name
from
(
select
s.first_name, s.last_name, count(*) over(partition by e.section_id) as enrollment_count
from
student s
join enrollment e using (student_id)
)
where
enrollment_count < 5;
```
|
Getting student name from student table belonging to a section with less than 5 students enrolled
|
[
"",
"sql",
"oracle",
"count",
""
] |
I am making a Reddit clone and I'm having trouble querying my list of posts so that, given a logged-in user, the result shows for every post whether or not that user upvoted it. I made a small example to make things simpler.
I am trying to return only one row per distinct `post_id`, but prioritize the `upvoted` column to be `t > f > null`.
For this example data:
```
> select * from post;
id
----
1
2
3
> select * from users;
id
----
1
2
> select * from upvoted;
user_id | post_id
---------+---------
1 | 1
2 | 1
```
If I am given `user_id = 1` I want my query to return:
```
postid | user_upvoted
--------+--------------
1 | t
2 | f
3 | f
```
Since user1 upvoted post1, `upvoted` is `t`. Since user1 did not upvote post2, `upvoted` is `f`. Same for post3.
## Schema
```
CREATE TABLE IF NOT EXISTS post (
id bigserial,
PRIMARY KEY (id)
);
CREATE TABLE IF NOT EXISTS users (
id serial,
PRIMARY KEY (id)
);
CREATE TABLE IF NOT EXISTS upvoted (
user_id integer
REFERENCES users(id)
ON DELETE CASCADE ON UPDATE CASCADE,
post_id bigint
REFERENCES post(id)
ON DELETE CASCADE ON UPDATE CASCADE,
PRIMARY KEY (user_id, post_id)
);
```
## What I tried so far
```
SELECT post.id as postid,
CASE WHEN user_id=1 THEN true ELSE false END as user_upvoted
FROM post LEFT OUTER JOIN upvoted
ON post_id = post.id;
```
Which gives me:
```
postid | user_upvoted
--------+--------------
1 | t
1 | f
2 | f
3 | f
```
Due to the join, there are two "duplicate" rows that result from the query. I want to prioritize the row with `t > f > null`. So I want to keep the `1 | t` row.
[Full script with schema+data.](http://pastebin.com/raw.php?i=KzcKiPsk)
|
You should be able to do this with `distinct on`:
```
SELECT distinct on (p.id) p.id as postid,
(CASE WHEN user_id = 1 THEN true ELSE false END) as upvoted
FROM post p LEFT OUTER JOIN
upvoted u
ON u.post_id = p.id
ORDER BY p.id, upvoted desc;
```
|
The `exists()` operator yields a boolean value:
```
SELECT p.id
, EXISTS (SELECT * FROM upvoted x
WHERE x.post_id = p.id
AND x.user_id = 1) AS it_was_upvoted_by_user1
FROM post p
;
```
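A quick `sqlite3` sketch of this `EXISTS` approach with the question's sample data (SQLite renders the booleans as `1`/`0` rather than `t`/`f`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE post (id INTEGER PRIMARY KEY);
    CREATE TABLE upvoted (user_id INTEGER, post_id INTEGER,
                          PRIMARY KEY (user_id, post_id));
    INSERT INTO post VALUES (1), (2), (3);
    INSERT INTO upvoted VALUES (1, 1), (2, 1);  -- both users upvoted post 1
""")

# EXISTS yields one boolean per post, so each post appears exactly once.
rows = con.execute("""
    SELECT p.id,
           EXISTS (SELECT * FROM upvoted x
                   WHERE x.post_id = p.id
                     AND x.user_id = 1) AS user_upvoted
    FROM post p
    ORDER BY p.id
""").fetchall()
print(rows)  # → [(1, 1), (2, 0), (3, 0)]
```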
|
Query conditionally return only one row per distinct id
|
[
"",
"sql",
"postgresql",
"left-join",
"greatest-n-per-group",
""
] |
I have a table with the following structure:
```
Contents (
id
name
desc
tdate
categoryid
...
)
```
I need to do some statistics with the data in this table. For example, I want to get the number of rows per category (by grouping), together with the id of that category. Also I want to limit the result to `n` rows in descending order, and if there are more categories available I want to sum them up as "Others". So far I have come up with 2 queries to the database:
*Select `n` rows in descending order:*
```
SELECT COALESCE(ca.NAME, 'Unknown') AS label
,ca.id AS catid
,COUNT(c.id) AS data
FROM contents c
LEFT OUTER JOIN category ca ON ca.id = c.categoryid
GROUP BY label
,catid
ORDER BY data DESC LIMIT 7
```
*Select other rows as one:*
```
SELECT 'Others' AS label
,COUNT(c.id) AS data
FROM contents c
LEFT OUTER JOIN category ca ON ca.id = c.categoryid
WHERE c.categoryid NOT IN ($INCONDITION)
```
But when I have no category groups left in the db table I still get an "Others" record. Is it possible to make it in one query and make the "Others" record optional?
|
The **specific difficulty** here: Queries with one or more aggregate functions in the `SELECT` list and no `GROUP BY` clause produce **exactly one row, even if no row is found** in the underlying table.
There is nothing you can do in the `WHERE` clause to suppress that row. You have to exclude such a row *after the fact*, i.e. in the `HAVING` clause, or in an outer query.
[Per documentation:](http://www.postgresql.org/docs/current/interactive/queries-table-expressions.html#QUERIES-GROUP)
> If a query contains aggregate function calls, but no `GROUP BY` clause,
> grouping still occurs: the result is a single group row (or perhaps no
> rows at all, if the single row is then eliminated by `HAVING`). The same
> is true if it contains a `HAVING` clause, even without any aggregate
> function calls or `GROUP BY` clause.
It should be noted that adding a `GROUP BY` clause with only a constant expression (which is otherwise completely pointless!) works, too. *See example below.* But I'd rather not use that trick, even if it's short, cheap and simple, because it's hardly obvious what it does.
The following query only needs a *single table scan* and returns the top 7 categories ordered by count. If (**and only if**) there are more categories, the rest is summarized into 'Others':
```
WITH cte AS (
SELECT categoryid, count(*) AS data
, row_number() OVER (ORDER BY count(*) DESC, categoryid) AS rn
FROM contents
GROUP BY 1
)
( -- parentheses required again
SELECT categoryid, COALESCE(ca.name, 'Unknown') AS label, data
FROM cte
LEFT JOIN category ca ON ca.id = cte.categoryid
WHERE rn <= 7
ORDER BY rn
)
UNION ALL
SELECT NULL, 'Others', sum(data)
FROM cte
WHERE rn > 7 -- only take the rest
HAVING count(*) > 0; -- only if there actually is a rest
-- or: HAVING sum(data) > 0
```
* You need to break ties if multiple categories can have the same count across the 7th / 8th rank. In my example, categories with the smaller `categoryid` win such a race.
* Parentheses are required to include a `LIMIT` or `ORDER BY` clause to an individual leg of a `UNION` query.
* You only need to join to table `category` for the top 7 categories. And it's generally cheaper to aggregate first and join later in this scenario. So don't join in the base query in the [CTE (common table expression)](http://www.postgresql.org/docs/current/interactive/queries-with.html) named `cte`, only join in the first `SELECT` of the `UNION` query, that's cheaper.
* Not sure why you need the `COALESCE`. If you have a foreign key in place from `contents.categoryid` to `category.id` and both `contents.categoryid` and `category.name` are defined `NOT NULL` (like they probably should be), then you don't need it.
### The odd `GROUP BY true`
This would work, too:
```
...
UNION ALL
SELECT NULL , 'Others', sum(data)
FROM cte
WHERE rn > 7
GROUP BY true;
```
And I even get slightly faster query plans. But it's a rather odd hack ...
[**SQL Fiddle**](http://sqlfiddle.com/#!15/312cc/6) demonstrating all.
Related answer with more explanation for the `UNION ALL` / `LIMIT` technique:
* [Sum results of a few queries and then find top 5 in SQL](https://stackoverflow.com/questions/8528787/sum-results-of-a-few-queries-and-then-find-top-5-in-sql/8528920#8528920)
|
The quick fix, to make the `'Others'` row conditional would be to add a simple `HAVING` clause to that query.
```
HAVING COUNT(c.id) > 0
```
(If there are no other rows in the `contents` table, then `COUNT(c.id)` is going to be zero.)
That only answers half the question, how to make the return of that row conditional.
---
The second half of the question is a little more involved.
To get the whole resultset in one query, you could do something like this
(this is *not* tested yet; desk checked only. I'm not sure if postgresql accepts a LIMIT clause in an inline view... if it doesn't, we'd need to implement a different mechanism to limit the number of rows returned.)
```
SELECT COALESCE(t.name,'Others') AS name
, t.catid AS catid
, COUNT(o.id) AS data
FROM contents o
LEFT
JOIN category oa
ON oa.id = o.category_id
LEFT
JOIN ( SELECT COALESCE(ca.name,'Unknown') AS name
, ca.id AS catid
, COUNT(c.id) AS data
FROM contents c
LEFT
JOIN category ca
ON ca.id = c.categoryid
GROUP
BY COALESCE(ca.name,'Unknown')
, ca.id
ORDER
BY COUNT(c.id) DESC
, ca.id DESC
LIMIT 7
) t
ON ( t.catid = oa.id OR (t.catid IS NULL AND oa.id IS NULL))
GROUP
BY ( t.catid = oa.id OR (t.catid IS NULL AND oa.id IS NULL))
, t.catid
ORDER
BY COUNT(o.id) DESC
, ( t.catid = oa.id OR (t.catid IS NULL AND oa.id IS NULL)) DESC
, t.catid DESC
LIMIT 7
```
The inline view `t` basically gets the same result as the first query, a list of (up to) 7 `id` values from the category table, or 6 `id` values from the category table and a NULL.
The outer query basically does the same thing, joining `content` with `category`, but also doing a check whether there's a matching row from `t`. Because `t` might be returning a NULL, we have a slightly more complicated comparison, where we want a NULL value to match a NULL value. (MySQL conveniently gives us a shorthand operator for this, the null-safe comparison operator `<=>`, but I don't think that's available in postgresql, so we have to express it differently.)
```
a = b OR (a IS NULL AND b IS NULL)
```
The next bit is getting a GROUP BY to work: we want to group by the 7 values returned by the inline view `t`, or, if there's no matching value from `t`, group the "other" rows together. We can get that to happen by using a boolean expression in the GROUP BY clause.
We're basically saying "group by 'if there was a matching row from t'" (true or false) and then group by the row from 't'. Get a count, and then order by the count descending.
This isn't tested, only desk checked.
|
Get n grouped categories and sum others into one
|
[
"",
"sql",
"postgresql",
"query-optimization",
"aggregate",
"sql-limit",
""
] |
I have a table that has about 50 rows. The table holds results from track meets. Here is the structure:

What I am doing is getting a top ten list for each event. Here is what I have so far:
```
SELECT * FROM Times WHERE event='1600m' GROUP BY fname, lname ORDER BY time, event LIMIT 10
```
Each person can be on the list once per event. This works, but if you look at all the times for the 1600m below...

you can see that `Kyle Rummens` has a time of `4:32:00`. The image below is the result of the sql statement above:

Kyle's time for that is `4:36:00`.
How can I make it a list of the top 10 fastest times for the 1600m `event`, where each person can only be on the list once, with each person's fastest time?
EDIT: I was asked to add my results in text not image so here they are:
This is the result from the above query:
```
fname lname event time meetName date
====== =========== ===== ======== ==================== ==========
Jeron Tucker 1600m 04:09:00 Coaches Meet 2015-02-12
Chris Hughes 1600m 04:27:00 Distance Chalenge 2015-02-10
Andrew Rummens 1600m 04:28:00 Simplot Games 2015-02-20
Kyle Rummens 1600m 04:36:00 Simplot Games 2015-02-19
Alex Southerland 1600m 04:52:00 Coaches Meet 2015-02-12
Canyon Schmerse 1600m 04:54:00 Arcadia Invitational 2015-05-12
Joseph Stone 1600m 05:08:00 Coaches Meet 2015-02-12
Teron Tucker 1600m 05:09:00 Harvard Track Meet 2015-02-19
Little Jacob 1600m 05:12:00 Arcadia Invitational 2015-05-12
Connor Kleinman 1600m 05:54:00 BYU Invitational 2015-04-03
```
|
Use `min(time)` for each runner to find each runner's fastest time:
```
SELECT fname, lname, min(time) as time
FROM Times
WHERE event = '1600m'
GROUP BY fname, lname
ORDER BY time
LIMIT 10
```
No need to order by `event` too, since it's always going to be `'1600m'`.
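A minimal runnable sketch of this grouped `MIN` query via Python's `sqlite3`, with a couple of made-up rows (times stored as `HH:MM:SS` strings sort correctly as text):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Times (fname TEXT, lname TEXT, event TEXT, time TEXT);
    INSERT INTO Times VALUES
        ('Kyle',  'Rummens', '1600m', '04:36:00'),
        ('Kyle',  'Rummens', '1600m', '04:32:00'),
        ('Jeron', 'Tucker',  '1600m', '04:09:00');
""")

rows = con.execute("""
    SELECT fname, lname, MIN(time) AS time
    FROM Times
    WHERE event = '1600m'
    GROUP BY fname, lname
    ORDER BY time
    LIMIT 10
""").fetchall()
print(rows)  # each runner appears once, with their fastest time
```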
|
Using correlated sub-query:
```
SELECT t1.*
FROM Times t1
WHERE t1.event='1600m' AND t1.time = (SELECT MIN(time)
FROM Times t2
WHERE t2.event='1600m' AND
t1.fname = t2.fname AND
                                        t1.lname = t2.lname)
ORDER BY t1.time
LIMIT 10;
```
|
get lowest value while using group by and order by
|
[
"",
"mysql",
"sql",
"group-by",
"sql-order-by",
""
] |
I'm using OrientDB 2.0. The query is simple, but I can't seem to figure it out. I have 2 classes: `User` and `UserGroup`. To make it simple, `User` has only `username` and `password` properties and `UserGroup` has only one `id` property. The relationship is simple:
`User --- user_group --> UserGroup`
where `user_group` is the class name of the edges connecting `User` and `UserGroup` vertices.
What I would like to do is get a user with a certain username and password where `UserGroup.id` is equal to *group1*.
What I have so far is:
```
select expand(in('user_group')[username='foo' and password='bar'])
from UserGroup
where id = 'group1'
```
But that does not work for me. What am I doing wrong?
|
```
create class User extends V
create property User.username string
create property User.password string
create class UserGroup extends V
create property UserGroup.id string
create class user_group extends E
create vertex User set username = 'foo', password = 'bar'
create vertex UserGroup set id = 'group1'
create edge user_group from (select from User where username = 'foo') to (select from UserGroup where id = 'group1')
select expand(in('user_group')[username='foo'][password='bar'])
from UserGroup
where id = 'group1'
```
|
I think, after a couple of tries, I found the answer:
```
select from
(select expand(in('user_group'))
from UserGroup
where id = 'group1')
where username='foo' and password='bar'
```
If anyone can suggest a more elegant solution, that would be nice too
|
How to query a neighbor vertex, which has certain attributes?
|
[
"",
"sql",
"graph",
"orientdb",
""
] |
Here is my query:
```
SELECT DISTINCT `post_data`. * , pv.`seller_id` , pv.`islimited` , pv.`isquantity` , pv.`isslider`, `price`.`original_price` , `price`.`discount_percentage` , `timelimit`.`start_date` , `timelimit`.`expire_date` , `quantity`.`in_stock`, `currency`.`currency_symbol`, `seller`.`directory`, `post_to_cat`.`cat_id`, count(`sales`.`sales_id`) as sale FROM `post_view` AS pv
INNER JOIN `post_data` ON pv.`post_id` = `post_data`.`post_id` AND pv.`status` = 1
INNER JOIN `price` ON pv.`post_id` = `price`.`post_id`
INNER JOIN `currency` ON `price`.`currency_id` = `currency`.`currency_id`
INNER JOIN `seller` ON pv.`seller_id` = `seller`.`seller_id`
INNER JOIN `post_to_cat` ON `post_to_cat`.`cat_id` = 1 AND `post_to_cat`.`post_id` = `post_data`.`post_id`
LEFT JOIN `timelimit` ON ( CASE WHEN pv.`islimited` = 1 THEN `timelimit`.`post_id` ELSE -1 END ) = pv.`post_id`
LEFT JOIN `quantity` ON ( CASE WHEN pv.`isquantity` = 1 THEN `quantity`.`post_id` ELSE -1 END ) = pv.`post_id`
LEFT JOIN `sales` ON `sales`.`post_id` = pv.`post_id` AND `sales`.`status` = 1
WHERE pv.`status` = 1
ORDER BY pv.`post_id` DESC LIMIT 1
```
The `ORDER BY DESC` is not working, it just returns the first row from the table, but I want to get the highest post\_id value row. What is the mistake I am making?
|
I believe the issue here is mostly a result of performing aggregation (using the COUNT() function) without any GROUP BY. Although it seems like you don't necessarily need it, because you want that count only for the post in question.
If you're trying to gather all of that information for a single post, I would adjust your `WHERE` clause to have a condition to only gather that information for the post with the largest ID.
Instead of ordering by ID and limiting by 1, use a subquery to get the largest id, like this:
```
...
WHERE pv.status = 1 AND post_data.slug = 'abc' AND pv.post_id = (SELECT MAX(post_id) FROM post_view);
```
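A cut-down `sqlite3` sketch of the idea (a hypothetical `post_view` table with just the two relevant columns):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE post_view (post_id INTEGER, status INTEGER);
    INSERT INTO post_view VALUES (1, 0), (2, 1), (3, 1);
""")

# Filter to MAX(post_id) in the WHERE clause
# instead of ORDER BY post_id DESC LIMIT 1.
rows = con.execute("""
    SELECT post_id FROM post_view pv
    WHERE pv.status = 1
      AND pv.post_id = (SELECT MAX(post_id) FROM post_view)
""").fetchall()
print(rows)  # → [(3,)]
```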
|
As @McAdam331 said, we need a data sample and an SQL fiddle to investigate what is wrong with your query. But at the moment I have some suggestions on how to improve and debug it.
First of all, the leftmost table in your query is `post_view`, so all other tables should be joined with `LEFT JOIN` if you want to get the max id. You should use `INNER JOIN` only if you think that the other table could *filter* your main table somehow, so that the order or result could be *dependent on the other table*. But in your case I see no reason to use `INNER JOIN`.
Second point is your very weird `ON` conditions:
```
LEFT JOIN `timelimit` ON ( CASE WHEN pv.`islimited` = 1 THEN `timelimit`.`post_id` ELSE -1 END ) = pv.`post_id`
LEFT JOIN `quantity` ON ( CASE WHEN pv.`isquantity` = 1 THEN `quantity`.`post_id` ELSE -1 END ) = pv.`post_id`
```
I have converted them into another one
```
CASE WHEN pv.`islimited`=1 THEN `timelimit`.`start_date` ELSE NULL END as start_date ,
CASE WHEN pv.`islimited`=1 THEN `timelimit`.`expire_date` ELSE NULL END as expire_date,
CASE WHEN pv.`isquantity`=1 THEN `quantity`.`in_stock` ELSE NULL END as in_stock,
```
But I still don't like it. It seems useless to me, and makes no sense when I read `CASE WHEN pv.islimited=1 THEN timelimit.start_date ELSE NULL END as start_date`: so if the flag is `pv.islimited=0` you don't need `start_date`? Are you sure?
And the last thing I can suggest: try to use my query, or even your own, but add tables one step at a time while debugging. So the first query is just:
```
SELECT
pv.`post_id`, pv.`seller_id` , pv.`islimited` , pv.`isquantity` ,
pv.`isslider`
FROM `post_view` AS pv
WHERE pv.`status` = 1
ORDER BY pv.`post_id` DESC
LIMIT 1
```
If it returns correct `post_id` add next table:
```
SELECT
pv.`post_id`, pv.`seller_id` , pv.`islimited` , pv.`isquantity` ,
pv.`isslider`,
`post_data`. *
FROM `post_view` AS pv
LEFT JOIN `post_data`
ON pv.`post_id` = `post_data`.`post_id`
WHERE pv.`status` = 1
AND `post_data`.`slug` = 'abc'
ORDER BY pv.`post_id` DESC
LIMIT 1
```
Check the result. And continue step by step.
Yes it takes time. But that is the debugging process. It could be the fastest way to get that query done. :-)
```
SELECT `post_data`. * ,
pv.`post_id`, pv.`seller_id` , pv.`islimited` , pv.`isquantity` ,
pv.`isslider`, `price`.`original_price` , `price`.`discount_percentage` ,
CASE WHEN pv.`islimited`=1 THEN `timelimit`.`start_date` ELSE NULL END as start_date ,
CASE WHEN pv.`islimited`=1 THEN `timelimit`.`expire_date` ELSE NULL END as expire_date,
CASE WHEN pv.`isquantity`=1 THEN `quantity`.`in_stock` ELSE NULL END as in_stock,
`currency`.`currency_symbol`, `seller`.`directory`, `post_to_cat`.`cat_id`, count(`sales`.`sales_id`) as sale
FROM `post_view` AS pv
LEFT JOIN `post_data`
ON pv.`post_id` = `post_data`.`post_id`
LEFT JOIN `price`
ON pv.`post_id` = `price`.`post_id`
LEFT JOIN `currency`
ON `price`.`currency_id` = `currency`.`currency_id`
LEFT JOIN `seller`
ON pv.`seller_id` = `seller`.`seller_id`
LEFT JOIN `post_to_cat`
ON `post_to_cat`.`cat_id` = 1
AND `post_to_cat`.`post_id` = pv.`post_id`
LEFT JOIN `timelimit`
ON `timelimit`.`post_id` = pv.`post_id`
LEFT JOIN `quantity`
ON `quantity`.`post_id` = pv.`post_id`
LEFT JOIN `sales`
ON `sales`.`post_id` = pv.`post_id`
AND `sales`.`status` = 1
WHERE pv.`status` = 1
AND `post_data`.`slug` = 'abc'
GROUP BY pv.`post_id`
ORDER BY pv.`post_id` DESC
LIMIT 1
```
*EDIT 1* - last `GROUP BY pv.post_id` was added as per @McAdam331 notice about `count()` function without `GROUP BY`
|
Order by clause not behaving correctly after several joins
|
[
"",
"mysql",
"sql",
"join",
"phpmyadmin",
"left-join",
""
] |
I need to write a function with a table parameter.
An example:
```
CREATE FUNCTION getParentByBrandList
( @_BrandList TABLE(
BR_ID INT, BR_Name NVARCHAR(150), BR_ParentBrandID INT, BR_MasterBrandID INT, BR_Role INT,
BR_State INT, BR_OwnerID INT, BR_OwnerIP NVARCHAR(50), BR_CreateDate DATETIME, BR_UpdaterID INT,
BR_UpdaterIP NVARCHAR(50), BR_UpdateDate DATETIME
)
)
```
How can I do this?
Thanx
|
Beginning with `SQL Server 2008` you can use table-valued parameters:
```
CREATE TYPE [dbo].[TableType] AS TABLE(
[ID] [INT] NULL
)
GO
CREATE FUNCTION fnTest
(
@t [dbo].[TABLETYPE] READONLY
)
RETURNS INT
AS
BEGIN
RETURN (SELECT TOP 1 ID FROM @t ORDER BY id DESC)
END
GO
DECLARE @t [dbo].[TABLETYPE]
INSERT INTO @t
VALUES ( 1 ),
( 2 )
SELECT dbo.fnTest(@t) AS ID
```
Output:
```
ID
2
```
|
Check this [tutorial](http://www.sailajareddy-technical.blogspot.in/2012/09/passing-table-valued-parameter-to.html)
Example : **Passing Table Valued Parameter to a function**
```
/* CREATE USER DEFINED TABLE TYPE */
CREATE TYPE StateMaster AS TABLE
(
StateCode VARCHAR(2),
StateDescp VARCHAR(250)
)
GO
/*CREATE FUNCTION WHICH TAKES TABLE AS A PARAMETER */
CREATE FUNCTION TableValuedParameterExample(@TmpTable StateMaster READONLY)
RETURNS VARCHAR(250)
AS
BEGIN
DECLARE @StateDescp VARCHAR(250)
SELECT @StateDescp = StateDescp FROM @TmpTable
RETURN @StateDescp
END
GO
```
|
Create T-SQL Function with table parameter
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"sql-server-2005",
""
] |
This is my query:
```
SELECT Cottage15.CNUM,Cottage15.Rent,Assignment15.Hours
FROM Cottage15
INNER JOIN Assignment15
ON Cottage15.CNUM=Assignment15.CNUM
ORDER BY Assignment15.AID;
```
It works, but I can't figure out how to apply the `Max` function to `hours`? I was hoping that `MAX(Assignment15.Hours)` would work!?
This query shows all of the rows, but I only want it to show the rows that have the highest value in the hour field.
|
If you want to get `max(hour)` for each `Cottage15.CNUM, Cottage15.Rent` pair,
then use `max(Assignment15.Hours)` and `group by Cottage15.CNUM, Cottage15.Rent`:
```
SELECT Cottage15.CNUM,Cottage15.Rent,MAX(Assignment15.Hours)
FROM Cottage15
INNER JOIN Assignment15
ON Cottage15.CNUM=Assignment15.CNUM
GROUP BY Cottage15.CNUM,Cottage15.Rent
```
*But* if you want to get the `Cottage15.CNUM` and `Cottage15.Rent` which have `max(Assignment15.Hours)` over the whole result set, then you can get it by:
```
SELECT Cottage15.CNUM,Cottage15.Rent,Assignment15.Hours
FROM Cottage15
INNER JOIN Assignment15
ON Cottage15.CNUM=Assignment15.CNUM
WHERE Assignment15.Hours = ( SELECT MAX(Assignment15.Hours)
FROM Cottage15
INNER JOIN Assignment15
ON Cottage15.CNUM=Assignment15.CNUM
)
```
|
I think this is what you want
```
SELECT Cottage15.CNUM,Cottage15.Rent,Assignment15.Hours
FROM Cottage15
INNER JOIN Assignment15
ON Cottage15.CNUM=Assignment15.CNUM
WHERE Assignment15.Hours = (SELECT MAX(Assignment15.Hours) FROM Assignment15)
ORDER BY Assignment15.Hours
```
|
How to use MAX function in SQL?
|
[
"",
"sql",
"max",
""
] |
I am new to SQL.
I have written the following query in MySQL:
```
select * from msgs where (msgs.toid = 1 or msgs.fid = 1) group by fid,toid;
```
this query returns values as shown below.
```
|message_id | toid | fid |
68 4 1
70 1 9
72 1 4
78 5 1
80 9 1
```
My expected result should be
```
|message_id | toid | fid |
72 1 4
78 5 1
80 9 1
```
It means I should not get repeated values (based on the two column values toid, fid), and each returned row should have the highest message id.
*The combination of toid and fid should be unique.*
|
You can use `greatest()`/`least()` to check that no other row with the same combination of ids and a higher message id exists (which means the selected row has the highest message id for the given combination).
```
select * from msgs m where (toid = 1 or fid = 1)
and not exists (
select 1 from msgs m2
where greatest(m2.toid,m2.fid) = greatest(m.toid,m.fid)
and least(m2.toid,m2.fid) = least(m.toid,m.fid)
and m2.message_id > m.message_id
)
```
EDIT - another way using `row_number()`
```
select * from (
select * ,
row_number() over (partition by greatest(toid,fid),
least(toid,fid) order by message_id desc) rn
from msgs m where (toid = 1 or fid = 1)
) t1 where rn = 1
```
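SQLite spells `greatest()`/`least()` as the two-argument `max()`/`min()`, so the `row_number()` variant can be sketched with the question's sample data via Python's `sqlite3` (window functions need SQLite ≥ 3.25):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE msgs (message_id INTEGER, toid INTEGER, fid INTEGER);
    INSERT INTO msgs VALUES (68,4,1), (70,1,9), (72,1,4), (78,5,1), (80,9,1);
""")

# Partition by the unordered (toid, fid) pair; keep the row with
# the highest message_id in each partition.
rows = con.execute("""
    SELECT message_id, toid, fid FROM (
        SELECT *, row_number() OVER (
                    PARTITION BY max(toid, fid), min(toid, fid)
                    ORDER BY message_id DESC) AS rn
        FROM msgs WHERE toid = 1 OR fid = 1
    ) WHERE rn = 1
    ORDER BY message_id
""").fetchall()
print(rows)  # → [(72, 1, 4), (78, 5, 1), (80, 9, 1)]
```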
|
You could also try this.
```
select * from msgs m
where (toid = 1 or fid = 1)
and not exists ( select 1 from msgs m2
where (m2.toid + m2.fid) = (m.toid + m.fid)
and m2.message_id > m.message_id
)
```
|
Select unique values(based on two columns) in group by mysql
|
[
"",
"mysql",
"sql",
""
] |