| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I have a table like following:
```
id value date
1 5 2015-01-10
2 5 2015-06-13
3 5 2015-09-05
4 11 2015-02-11
5 11 2015-01-10
6 11 2015-01-25
```
As can be seen, every `value` appears 3 times with a different `date`. I want to write a query that returns, for each unique `value`, the row with the maximum `date`, which would be the following for the above table:
```
id value date
3 5 2015-09-05
4 11 2015-02-11
```
How could I do it?
# This is the updated question:
The real question I am encountering is a little bit more complicated than the simplified version above. I thought I could move a step further once I knew the answer to the simplified version, but I guess I was wrong. So, I am updating the question herein.
I have 2 tables like following:
```
Table 1
id id2 date
1 2 2015-01-10
2 5 2015-06-13
3 9 2015-09-05
4 10 2015-02-11
5 26 2015-01-10
6 65 2015-01-25
Table 2
id id2 data
1 2 A
2 5 A
3 9 A
4 10 B
5 26 B
6 65 B
```
Here, `Table 1` and `Table 2` are joined by `id2`
What I want to get is two records as follows:
```
id2 date data
9 2015-09-05 A
10 2015-02-11 B
```
|
You can use `row_number` to select the row with the greatest date per group:
```
select * from (
  select t2.id2, t1.date, t2.data,
  row_number() over (partition by t2.data order by t1.date desc) rn
  from table1 t1
  join table2 t2 on t1.id2 = t2.id2
) t where rn = 1
```
|
```
select a.id, a.value, a.date
from mytable a,
( select value, max(date) maxdate
from mytable b
group by value) b
where a.value = b.value
and a.date = b.maxdate;
```
|
How do I need to change my sql to get what I want in this case?
|
[
"sql",
"database",
"oracle",
"group-by"
] |
I have a table named xyz, in which I have 2 records as below.
```
Record Prod_available FromTime ToTime
1 Pizza 08:00 21:59
2 Beer 22:00 07:59
```
Now I want to select the record based on the current hour.
Kindly help me with the above case.
|
This should work. Adapted from @Gordon Linoff's code.
```
where
(
((FromTime < ToTime) and @time between FromTime and ToTime) or
((FromTime > ToTime) and @time not between ToTime and FromTime) -- here swapped the totime and fromtime.
)
```
|
You can try something along these lines:
```
select * from xyz where
dateadd (hour, datepart(hour, getdate()), '2000-01-01') between
dateadd (hour, left(fromtime,2), '2000-01-01') and
dateadd (hour, case when left(totime,2) < left(fromtime,2) then left(totime,2)+24 else left(totime,2) end, '2000-01-01')
```
|
Select record based on Current hour between hours stored in Sql server 2008 table
|
[
"sql"
] |
I want to have some information available for any stored procedure, such as current user. Following the temporary table method indicated [here](https://stackoverflow.com/a/25582641/2780791), I have tried the following:
1) create temporary table when connection is opened
```
private void setConnectionContextInfo(SqlConnection connection)
{
if (!AllowInsertConnectionContextInfo)
return;
var username = HttpContext.Current?.User?.Identity?.Name ?? "";
var commandBuilder = new StringBuilder($@"
CREATE TABLE #ConnectionContextInfo(
AttributeName VARCHAR(64) PRIMARY KEY,
AttributeValue VARCHAR(1024)
);
INSERT INTO #ConnectionContextInfo VALUES('Username', @Username);
");
using (var command = connection.CreateCommand())
{
command.Parameters.AddWithValue("Username", username);
command.ExecuteNonQuery();
}
}
/// <summary>
/// checks if current connection exists / is closed and creates / opens it if necessary
/// also takes care of the special authentication required by V3 by building a windows impersonation context
/// </summary>
public override void EnsureConnection()
{
try
{
lock (connectionLock)
{
if (Connection == null)
{
Connection = new SqlConnection(ConnectionString);
Connection.Open();
setConnectionContextInfo(Connection);
}
if (Connection.State == ConnectionState.Closed)
{
Connection.Open();
setConnectionContextInfo(Connection);
}
}
}
catch (Exception ex)
{
if (Connection != null && Connection.State != ConnectionState.Open)
Connection.Close();
throw new ApplicationException("Could not open SQL Server Connection.", ex);
}
}
```
2) Tested with a procedure which is used to populate a `DataTable` using `SqlDataAdapter.Fill`, by using the following function:
```
public DataTable GetDataTable(String proc, Dictionary<String, object> parameters, CommandType commandType)
{
EnsureConnection();
using (var command = Connection.CreateCommand())
{
if (Transaction != null)
command.Transaction = Transaction;
SqlDataAdapter adapter = new SqlDataAdapter(proc, Connection);
adapter.SelectCommand.CommandTimeout = CommonConstants.DataAccess.DefaultCommandTimeout;
adapter.SelectCommand.CommandType = commandType;
if (Transaction != null)
adapter.SelectCommand.Transaction = Transaction;
ConstructCommandParameters(adapter.SelectCommand, parameters);
DataTable dt = new DataTable();
try
{
adapter.Fill(dt);
return dt;
}
catch (SqlException ex)
{
var err = String.Format("Error executing stored procedure '{0}' - {1}.", proc, ex.Message);
throw new TptDataAccessException(err, ex);
}
}
}
```
3) called procedure tries to get the username like this:
```
DECLARE @username VARCHAR(128) = (select AttributeValue FROM #ConnectionContextInfo where AttributeName = 'Username')
```
but `#ConnectionContextInfo` is no longer available in the context.
I have put a SQL profiler against the database, to check what is happening:
* temporary table is created successfully using a certain SPID
* procedure is called using the same SPID
**Why is temporary table not available within the procedure scope?**
In T-SQL doing the following works:
* create a temporary table
* call a procedure that needs data from that particular temporary table
* temporary table is dropped only explicitly or after current scope ends
Thanks.
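For comparison, the plain T-SQL sequence that works looks like this (the procedure name is hypothetical; the table matches the one created in the C# code above):
```
CREATE TABLE #ConnectionContextInfo(
    AttributeName VARCHAR(64) PRIMARY KEY,
    AttributeValue VARCHAR(1024)
);
INSERT INTO #ConnectionContextInfo VALUES ('Username', 'jdoe');

-- the procedure runs in a child scope, so it can read #ConnectionContextInfo
EXEC dbo.SomeProcedure;

DROP TABLE #ConnectionContextInfo; -- or let it drop when the current scope ends
```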
|
MINOR ISSUE: I am going to assume for the moment that the code posted in the Question isn't the full piece of code that is running. Not only are there variables used that we don't see getting declared (e.g. `AllowInsertConnectionContextInfo`), but there is a glaring omission in the `setConnectionContextInfo` method: the `command` object is created but never is its `CommandText` property set to `commandBuilder.ToString()`, hence it appears to be an empty SQL batch. I'm sure that this is actually being handled correctly since 1) I believe submitting an empty batch will generate an exception, and 2) the question does mention that the temp table creation appears in the SQL Profiler output. Still, I am pointing this out as it implies that there could be additional code that is relevant to the observed behavior that is not shown in the question, making it more difficult to give a precise answer.
THAT BEING SAID, as mentioned in @Vladimir's fine [answer](https://stackoverflow.com/a/40648178/577765), due to the query running in a sub-process (i.e. `sp_executesql`), local temporary objects -- tables and stored procedures -- do not survive the completion of that sub-process and hence are not available in the parent context.
Global temporary objects and permanent/non-temporary objects will survive the completion of the sub-process, but both of those options, in their typical usage, introduce concurrency issues: you would need to test for the existence first before attempting to create the table, *and* you would need a way to distinguish one process from another. So these are not really a great option, at least not in their typical usage (more on that later).
Assuming that you cannot pass in any values into the Stored Procedure (else you could simply pass in the `username` as @Vladimir suggested in his answer), you have a few options:
1. The easiest solution, given the current code, would be to separate the creation of the local temporary table from the `INSERT` command (also mentioned in @Vladimir's answer). As previously mentioned, the issue you are encountering is due to the query running within `sp_executesql`. And the reason `sp_executesql` is being used is to handle the parameter `@Username`. So, the fix could be as simple as changing the current code to be the following:
```
string _Command = @"
CREATE TABLE #ConnectionContextInfo(
AttributeName VARCHAR(64) PRIMARY KEY,
AttributeValue VARCHAR(1024)
);";
using (var command = connection.CreateCommand())
{
command.CommandText = _Command;
command.ExecuteNonQuery();
}
_Command = @"
INSERT INTO #ConnectionContextInfo VALUES ('Username', @Username);";
using (var command = connection.CreateCommand())
{
command.CommandText = _Command;
// do not use AddWithValue()!
SqlParameter _UserName = new SqlParameter("@Username", SqlDbType.NVarChar, 128);
_UserName.Value = username;
command.Parameters.Add(_UserName);
command.ExecuteNonQuery();
}
```
Please note that temporary objects -- local and global -- cannot be accessed in T-SQL User-Defined Functions or Table-Valued Functions.
2. A better solution (most likely) would be to use `CONTEXT_INFO`, which is essentially session memory. It is a `VARBINARY(128)` value but changes to it survive any sub-process since it is not an object. Not only does this remove the current complication you are facing, but it also reduces `tempdb` I/O considering that you are creating and dropping a temporary table each time this process runs, and doing an `INSERT`, and all 3 of those operations are written to disk twice: first in the Transaction Log, then in the data file. You can use this in the following manner:
```
string _Command = @"
DECLARE @User VARBINARY(128) = CONVERT(VARBINARY(128), @Username);
SET CONTEXT_INFO @User;
";
using (var command = connection.CreateCommand())
{
command.CommandText = _Command;
// do not use AddWithValue()!
SqlParameter _UserName = new SqlParameter("@Username", SqlDbType.NVarChar, 128);
_UserName.Value = username;
command.Parameters.Add(_UserName);
command.ExecuteNonQuery();
}
```
And then you get the value within the Stored Procedure / User-Defined Function / Table-Valued Function / Trigger via:
```
DECLARE @Username NVARCHAR(128) = CONVERT(NVARCHAR(128), CONTEXT_INFO());
```
That works just fine for a single value, but if you need multiple values, or if you are already using `CONTEXT_INFO` for another purpose, then you either need to go back to one of the other methods described here, OR, if using SQL Server 2016 (or newer), you can use [SESSION\_CONTEXT](https://msdn.microsoft.com/en-us/library/mt590806.aspx), which is similar to `CONTEXT_INFO` but is a HashTable / Key-Value pairs.
Another benefit of this approach is that `CONTEXT_INFO` (at least, I haven't yet tried `SESSION_CONTEXT`) is available in T-SQL User-Defined Functions and Table-Valued Functions.
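If you do end up on SQL Server 2016 (or newer), a minimal `SESSION_CONTEXT` sketch would be the following (the key name `Username` and value `jdoe` are just examples):
```
-- run once per connection, analogous to SET CONTEXT_INFO:
EXEC sys.sp_set_session_context @key = N'Username', @value = N'jdoe';

-- read it later, e.g. inside a stored procedure or function:
DECLARE @Username NVARCHAR(128) =
    CONVERT(NVARCHAR(128), SESSION_CONTEXT(N'Username'));
```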
3. Finally, another option would be to create a global temporary table. As mentioned above, global objects have the benefit of surviving sub-processes, but they also have the drawback of complicating concurrency. A seldom-used trick to get the benefit without the drawback is to give the temporary object a unique, session-based *name*, rather than add a column to hold a unique, session-based *value*. Using a name that is unique to the session removes any concurrency issues while allowing you to use an object that will get automatically cleaned up when the connection is closed (so no need to worry about a process that creates a global temporary table and then runs into an error before completing, whereas using a permanent table would require cleanup, or at least an existence check at the beginning).
Keeping in mind the restriction that we cannot pass any value into the Stored Procedure, we need to use a value that already exists at the data layer. The value to use would be the `session_id` / SPID. Of course, this value does not exist in the app layer, so it has to be retrieved, but there was no restriction placed on going in that direction.
```
int _SessionId;
using (var command = connection.CreateCommand())
{
command.CommandText = @"SET @SessionID = @@SPID;";
SqlParameter _paramSessionID = new SqlParameter("@SessionID", SqlDbType.Int);
_paramSessionID.Direction = ParameterDirection.Output;
command.Parameters.Add(_paramSessionID);
command.ExecuteNonQuery();
_SessionId = (int)_paramSessionID.Value;
}
string _Command = String.Format(@"
CREATE TABLE ##ConnectionContextInfo_{0}(
AttributeName VARCHAR(64) PRIMARY KEY,
AttributeValue VARCHAR(1024)
);
INSERT INTO ##ConnectionContextInfo_{0} VALUES('Username', @Username);", _SessionId);
using (var command = connection.CreateCommand())
{
command.CommandText = _Command;
SqlParameter _UserName = new SqlParameter("@Username", SqlDbType.NVarChar, 128);
_UserName.Value = username;
command.Parameters.Add(_UserName);
command.ExecuteNonQuery();
}
```
And then you get the value within the Stored Procedure / Trigger via:
```
DECLARE @Username NVARCHAR(128),
@UsernameQuery NVARCHAR(4000);
SET @UsernameQuery = CONCAT(N'SELECT @tmpUserName = [AttributeValue]
FROM ##ConnectionContextInfo_', @@SPID, N' WHERE [AttributeName] = ''Username'';');
EXEC sp_executesql
@UsernameQuery,
N'@tmpUserName NVARCHAR(128) OUTPUT',
@Username OUTPUT;
```
Please note that temporary objects -- local and global -- cannot be accessed in T-SQL User-Defined Functions or Table-Valued Functions.
4. Finally, it is possible to use a real / permanent (i.e. non-temporary) Table, provided that you include a column to hold a value specific to the current session. This additional column will allow for concurrent operations to work properly.
You can create the table in `tempdb` (yes, you can use `tempdb` as a regular DB, doesn't need to be only temporary objects starting with `#` or `##`). The advantages of using `tempdb` is that the table is out of the way of everything else (it is just temporary values, after all, and doesn't need to be restored, so `tempdb` using `SIMPLE` recovery model is perfect), and it gets cleaned up when the Instance restarts (FYI: `tempdb` is created brand new as a copy of `model` each time SQL Server starts).
Just like with Option #3 above, we can again use the `session_id` / SPID value since it is common to all operations on this Connection (as long as the Connection remains open). But, unlike Option #3, the app code doesn't need the SPID value: it can be inserted automatically into each row using a Default Constraint. This simplifies the operation a little.
The concept here is to first check to see if the permanent table in `tempdb` exists. If it does, then make sure that there is no entry already in the table for the current SPID. If it doesn't, then create the table. Since it is a permanent table, it will continue to exist, even after the current process closes its Connection. Finally, insert the `@Username` parameter, and the SPID value will populate automatically.
```
// assume _Connection is already open
using (SqlCommand _Command = _Connection.CreateCommand())
{
_Command.CommandText = @"
IF (OBJECT_ID(N'tempdb.dbo.Usernames') IS NOT NULL)
BEGIN
IF (EXISTS(SELECT *
FROM [tempdb].[dbo].[Usernames]
WHERE [SessionID] = @@SPID
))
BEGIN
DELETE FROM [tempdb].[dbo].[Usernames]
WHERE [SessionID] = @@SPID;
END;
END;
ELSE
BEGIN
CREATE TABLE [tempdb].[dbo].[Usernames]
(
[SessionID] INT NOT NULL
CONSTRAINT [PK_Usernames] PRIMARY KEY
CONSTRAINT [DF_Usernames_SessionID] DEFAULT (@@SPID),
[Username] NVARCHAR(128) NULL,
[InsertTime] DATETIME NOT NULL
CONSTRAINT [DF_Usernames_InsertTime] DEFAULT (GETDATE())
);
END;
INSERT INTO [tempdb].[dbo].[Usernames] ([Username]) VALUES (@UserName);
";
SqlParameter _UserName = new SqlParameter("@Username", SqlDbType.NVarChar, 128);
_UserName.Value = username;
_Command.Parameters.Add(_UserName);
_Command.ExecuteNonQuery();
}
```
And then you get the value within the Stored Procedure / User-Defined Function / Table-Valued Function / Trigger via:
```
SELECT [Username]
FROM [tempdb].[dbo].[Usernames]
WHERE [SessionID] = @@SPID;
```
Another benefit of this approach is that permanent tables are accessible in T-SQL User-Defined Functions and Table-Valued Functions.
|
As was shown in this [answer](https://dba.stackexchange.com/a/124053/57105), `ExecuteNonQuery` uses `sp_executesql` when `CommandType` is `CommandType.Text` and command has parameters.
The C# code in this question doesn't set the `CommandType` explicitly and it is [`Text` by default](https://msdn.microsoft.com/en-us/library/system.data.commandtype(v=vs.110).aspx), so most likely end result of the code is that `CREATE TABLE #ConnectionContextInfo` is wrapped into `sp_executesql`. You can verify it in the SQL Profiler.
It is well-known that `sp_executesql` is running in its own scope (essentially it is a nested stored procedure). Search for "sp\_executesql temp table". Here is one example: [Execute sp\_executeSql for select...into #table but Can't Select out Temp Table Data](https://stackoverflow.com/questions/8040105/execute-sp-executesql-for-select-into-table-but-cant-select-out-temp-table-da)
So, a temp table `#ConnectionContextInfo` is created in the nested scope of `sp_executesql` and is automatically deleted as soon as `sp_executesql` returns.
The query that is subsequently run by `adapter.Fill` therefore doesn't see this temp table.
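This scoping behavior is easy to reproduce in isolation; a minimal sketch you can run in SSMS:
```
-- the temp table is created inside sp_executesql's own scope...
EXEC sp_executesql N'CREATE TABLE #ConnectionContextInfo (AttributeName VARCHAR(64));';

-- ...so it has already been dropped by the time the next statement runs:
SELECT * FROM #ConnectionContextInfo;
-- fails with "Invalid object name '#ConnectionContextInfo'"
```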
---
What to do?
Make sure that `CREATE TABLE #ConnectionContextInfo` statement is not wrapped into `sp_executesql`.
In your case you can try to split a single batch that contains both `CREATE TABLE #ConnectionContextInfo` and `INSERT INTO #ConnectionContextInfo` into two batches. The first batch/query would contain only `CREATE TABLE` statement without any parameters. The second batch/query would contain `INSERT INTO` statement with parameter(s).
I'm not sure it would help, but worth a try.
If that doesn't work you can build one T-SQL batch that creates a temp table, inserts data into it and calls your stored procedure. All in one SqlCommand, all in one batch. This whole SQL will be wrapped in `sp_executesql`, but it would not matter, because the scope in which temp table is created will be the same scope in which stored procedure is called. Technically it will work, but I wouldn't recommend following this path.
---
This is not an answer to the question, but a suggestion to solve the problem.
To be honest, the whole approach looks strange. If you want to pass some data into the stored procedure, why not use parameters of this stored procedure? This is what they are for - to pass data into the procedure. There is no real need to use a temp table for that. You can use a table-valued parameter ([T-SQL](https://msdn.microsoft.com/en-us/library/bb510489.aspx), [.NET](https://msdn.microsoft.com/en-us/library/bb675163(v=vs.110).aspx)) if the data that you are passing is complex. It is definitely overkill if it is simply a `Username`.
Your stored procedure needs to be aware of the temp table anyway - it needs to know its name and structure - so I don't understand what the problem is with having an explicit table-valued parameter instead. Even the code of your procedure would not change much. You'd use `@ConnectionContextInfo` instead of `#ConnectionContextInfo`.
Using temp tables for what you described makes sense to me only if you are using SQL Server 2005 or earlier, which doesn't have table-valued parameters. They were added in SQL Server 2008.
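As a sketch of that alternative (the type and procedure names here are made up for illustration, mirroring the temp table's shape from the question), the T-SQL side could look like:
```
-- one-time setup: a table type mirroring the temp table's shape
CREATE TYPE dbo.ConnectionContextInfoType AS TABLE
(
    AttributeName  VARCHAR(64) PRIMARY KEY,
    AttributeValue VARCHAR(1024)
);
GO

-- the procedure receives the data as a READONLY table-valued parameter
CREATE PROCEDURE dbo.MyProcedure
    @ConnectionContextInfo dbo.ConnectionContextInfoType READONLY
AS
BEGIN
    DECLARE @Username VARCHAR(1024) =
        (SELECT AttributeValue
           FROM @ConnectionContextInfo
          WHERE AttributeName = 'Username');
    -- ... use @Username ...
END;
```
On the ADO.NET side you would pass a `DataTable` through a `SqlParameter` with `SqlDbType.Structured` and `TypeName = "dbo.ConnectionContextInfoType"`.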
|
SQL Server connection context using temporary table cannot be used in stored procedures called with SqlDataAdapter.Fill
|
[
"sql",
"sql-server",
"stored-procedures",
"ado.net",
"temp-tables"
] |
Let's say I have the following table:
```
First_Name Last_Name Age
John Smith 50
Jane Smith 40
Bill Smith 12
Freda Jones 30
Fred Jones 35
David Williams 50
Sally Williams 20
Peter Williams 35
```
How do I design a query that will give me the first name, last name and age of the oldest in each family? It must be possible but it's driving me nuts.
I'm looking for a general SQL solution, although I've tagged ms-access as that is what I am actually using.
|
Simple answer, do a correlated sub-select to get the "current" families highest age:
```
select *
from tablename t1
where t1.Age = (select max(t2.Age) from tablename t2
where t2.Last_Name = t1.Last_Name)
```
However, Tim Biegeleisen's query is probably a bit faster.
|
```
SELECT t1.First_Name, t1.Last_Name, t1.Age
FROM family t1
INNER JOIN
(
SELECT Last_Name, MAX(Age) AS maxAge
FROM family
GROUP BY Last_Name
) t2
ON t1.Last_Name = t2.Last_Name AND t1.Age = t2.maxAge
```
Note: This will give multiple records per family in the event of a tie.
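If exactly one row per family is required, one possible tie-break (a sketch that keeps the alphabetically first `First_Name`; adjust the rule to taste) is:
```
SELECT t1.First_Name, t1.Last_Name, t1.Age
FROM family t1
INNER JOIN
(
    SELECT Last_Name, MAX(Age) AS maxAge
    FROM family
    GROUP BY Last_Name
) t2
ON t1.Last_Name = t2.Last_Name AND t1.Age = t2.maxAge
WHERE t1.First_Name =
(
    SELECT MIN(t3.First_Name)
    FROM family t3
    WHERE t3.Last_Name = t1.Last_Name AND t3.Age = t1.Age
);
```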
|
How do I select the record with the largest X for each of a group of records
|
[
"sql",
"ms-access"
""
] |
I have tabA:
```
________________________
|ID |EMPLOYEE|CODE |
|49 |name1 |mobile |
|393 |name2 |none |
|3002|name3 |intranet|
________________________
```
The ID column (tabA) is based on a counter in the below tabB:
```
_________________
|TYPE |ID |
|intranet |3003|
|mobile |50 |
|none |394 |
__________________
```
I want to insert a new row in tabA using the ID counter (as it is the next available ID). How do I insert into a table based on a counter value?
I am trying this method, which results in trying to insert a duplicate type instead of the max(ID):
```
INSERT INTO tabA (ID, EMPLOYEE, CODE)
VALUES ((select max(ID) from tabB where TYPE = 'A'), name4, 'intranet');
```
I expect to see tabA:
```
________________________
|ID |EMPLOYEE|CODE |
|3000|name1 |mobile |
|3001|name2 |none |
|3002|name3 |intranet|
|3003|name4 |intranet|
________________________
```
tabB:
```
_________________
|TYPE |ID |
|mobile |2999|
|none |3002|
|intranet |3004|
__________________
```
|
**I would suggest the following answer:**
**The @tabA and @tabB below stand in for your tabA and tabB above.**
```
declare @tabA table
(
id integer,
employee nvarchar(10),
code nvarchar(10)
)
insert into @tabA values (49, 'name1','mobile');
insert into @tabA values (393, 'name2','none');
insert into @tabA values (3002, 'name3','intranet');
declare @tabB table
(
type_ nvarchar(10),
ID integer
)
insert into @tabB values ('intranet',3003);
insert into @tabB values ('mobile',50);
insert into @tabB values ('none', 394);
insert into @tabA
select MAX(b.ID), 'name4','intranet'
from @tabB B
where b.type_ = 'intranet'
declare @max integer
select @max = MAX(ID) from @tabB where type_ = 'intranet'
declare @max_1 integer
set @max_1 = @max + 1
update @tabB
set ID = @max_1
from @tabB B
where type_ = 'intranet'
select * from @tabA
select * from @tabB
----------------------------
what I got from @tabA
49 |name1| mobile
393 |name2| none
3002 |name3| intranet
3003 |name4| intranet
what I got from @tabB
intranet|3004
mobile |50
none |394
```
|
This will solve your problem: (Assuming your ID is incremental)
```
INSERT INTO tabA (ID, EMPLOYEE, CODE)
VALUES ((SELECT TOP 1 ID + 1 FROM tabA ORDER BY ID DESC),'name4', 'intranet')
```
|
SQL - Insert Into table based on a counter value
|
[
"sql",
"sql-server",
"sql-server-2008",
"sql-update",
"sql-insert"
] |
In my SQL server query I want to always get 0 results. I only want to get the metadata.
How to write an always false evaluation in TSql? Is there a built-in function, or a recommended way of doing so?
Currently I use `WHERE 1 = 0`, but I'm looking for something like `WHERE 'false'`.
|
Although you can use `where 1 = 0` for this purpose, I think `top 0` is more common:
```
select top 0 . . .
. . .
```
This also prevents an "accident" in the `where` clause. If you change this:
```
where condition x or condition y
```
to:
```
where 1 = 0 and condition x or condition y
```
then, because `and` binds more tightly than `or`, it is evaluated as `(1 = 0 and condition x) or condition y`, so rows matching `condition y` still come back; you would need explicit parentheses to make it always false.
|
Have a look at `SET FMTONLY ON`.
If you set that and run your query, only the metadata is returned:
```
USE AdventureWorks2012;
GO
SET FMTONLY ON;
GO
SELECT *
FROM HumanResources.Employee;
GO
SET FMTONLY OFF;
GO
```
|
Readable "always false" evaluation in TSQL
|
[
"sql",
"sql-server",
"boolean"
] |
```
select
distinct TagName as new, *,(REPLACE(TagName,' ','-')) as SeoProduct_Name
from
dbo.tbl_Image_Master
inner join
dbo.tbl_size
on tbl_size.Size_Id=tbl_Image_Master.Size_Id
inner join
tbl_category
on tbl_category.Cat_Id = tbl_Image_Master.Cat_Id
```
I want to select the distinct tag name together with all columns.
|
```
SELECT * FROM TBL WHERE TAGNAME IN (
SELECT DISTINCT TAGNAME FROM TBL)
```
OR
```
SELECT T.* FROM (
SELECT *, ROW_NUMBER() OVER (PARTITION BY TAGNAME ORDER BY (SELECT 1)) AS ROWNUM FROM TBL) T
WHERE T.ROWNUM = 1
```
Hope it solves the purpose!
|
If you want to get all columns from a table in a query you can use this syntax :
```
TableName.*
```
Example :
```
Select Table1.Col1, Table1.* from Table1;
```
I would post what your query becomes, but I am not sure what table you want all columns from.
|
getting distinct value with all columns in sql server
|
[
"sql"
] |
I've created a view to show the author who wrote all the books in the database with the word 'python' in the title. The issue I'm having is making it return nothing if there's more than one such author. This is the working code for the view; I know I need to either implement a subquery using an aggregate function (count) or use EXISTS, but I'm not sure how to get it to work.
```
CREATE VIEW sole_python_author(author_first_name, author_last_name)
AS SELECT first_name, last_name
FROM authors, books
WHERE authors.author_id = books.author_id AND
title LIKE '%Python%'
GROUP BY authors.author_id;
```
The 'authors' table:
```
CREATE TABLE "authors" (
"author_id" integer NOT NULL,
"last_name" text,
"first_name" text,
Constraint "authors_pkey" Primary Key ("author_id")
);
```
The 'books' table:
```
CREATE TABLE "books" (
"book_id" integer NOT NULL,
"title" text NOT NULL,
"author_id" integer REFERENCES "authors" (author_id) MATCH SIMPLE
ON UPDATE CASCADE ON DELETE CASCADE,
"subject_id" integer REFERENCES "subjects" (subject_id) MATCH SIMPLE
ON UPDATE CASCADE ON DELETE CASCADE ,
Constraint "books_id_pkey" Primary Key ("book_id")
);
```
If there is only one author who has written a book with 'python' in the title, it should return their name. If there is more than one, it should return nothing at all.
Any help would be very much appreciated!
|
So only return a row if there's no other author?
I think this matches your description:
```
SELECT min(a.first_name), min(a.last_name)
FROM authors AS a JOIN books AS b
ON a.author_id = b.author_id
WHERE b.title LIKE '%Python%'
HAVING COUNT (DISTINCT b.author_id) = 1;
```
|
```
CREATE VIEW sole_python_author(author_first_name, author_last_name)
AS
SELECT first_name, last_name
FROM authors, books
WHERE authors.author_id = books.author_id
AND title LIKE '%Python%'
GROUP BY first_name, last_name
HAVING COUNT(*) = 1
```
|
SQL - Select query to return only a unique answer or nothing
|
[
"sql",
"postgresql",
"subquery"
] |
Please help me. I wrote the query
```
with cte as
(
select
*,
row_number() over ( partition by product order by date desc ) as rownumber
from
saleslist
where
datediff( month, date, getdate() ) < 2
)
select
product,
((max(case when rownumber = 1 then price end) -
max(case when rownumber = maxn then price)) /
max(case when rownumber = maxn then price end)
)
from
(select cte.*, max(rownumber) over (partition by product) as maxn
from cte)
group by product
```
and I got the following messages
> Msg 102, Level 15, State 1, Line 13
> Incorrect syntax near ')'.
>
> Msg 156, Level 15, State 1, Line 18
> Incorrect syntax near the keyword 'group'.
Could someone please kindly tell me how to fix this?
|
```
with cte as
(
select *,
row_number() over ( partition by product order by date desc ) as rownumber
from saleslist
where datediff( month, [date], getdate() ) < 2
)
select product,
(
(max(case when rownumber = 1 then price end) -
max(case when rownumber = maxn then price end) --< missing end here
) /
max(case when rownumber = maxn then price end)
)
from
(select cte.*, max(rownumber) over (partition by product) as maxn
from cte ) t --< needs an alias here
group by product
```
|
SQL Server 2014 supports FIRST\_VALUE/LAST\_VALUE:
```
with cte as
(
    select product,
           price as first_price,
           row_number() over (partition by product order by date) as rownumber,
           last_value(price) -- price of the row with the latest date
             over (partition by product
                   order by date
                   rows between unbounded preceding and unbounded following) as last_price
    from saleslist sl
    where datediff( month, date, getdate() ) < 2
)
select product,
       (last_price - first_price) / first_price
from cte
where rownumber = 1;
```
|
SQL Server : error MSG 102 and MSG 156
|
[
"sql",
"sql-server"
] |
The following code generates alter-password statements to change all standard passwords in an Oracle database. In version 12.1.0.2.0 it's no longer possible to change them to invalid password values. That's why I had to build this switch/case construct. Toad gives a warning (rule 5807) on the join construct at the end: “Avoid cartesian queries – use a where clause...”. Any ideas on a “where clause” that works on all Oracle database versions?
```
SET TERMOUT OFF
SET ECHO OFF
SET LINESIZE 140
SET FEEDBACK OFF
SET PAGESIZE 0
SPOOL user.sql
SELECT 'alter user '
|| username
|| ' identified by values '
|| CHR (39)
|| CASE
WHEN b.version = '12.1.0.2.0' THEN '462368EA9F7AD215'
ELSE 'Invalid Password'
END
|| CHR (39)
|| ';'
FROM DBA_USERS_WITH_DEFPWD a,
(SELECT DISTINCT version
FROM PRODUCT_COMPONENT_VERSION) b;
SPOOL OFF
@user.sql
```
|
On my machine (a laptop with the free XE version of Oracle - simplest arrangement possible) the table PRODUCT\_COMPONENT\_VERSION has FOUR rows, not one. There are versions for different products, including the Oracle Database and PL/SQL. On my machine the version is the same in all four rows, but I don't see why that should be expected in general.
If they may be different, FIRST you can ignore all the answers that tell you the cross join is not a problem because you return only one row. You don't; you may return more than one row. SECOND, in your query itself, why are you returning the version from ALL the rows of PRODUCT\_COMPONENT\_VERSION, and not just the version for the Oracle Database? I imagine something like
```
WHERE PRODUCT LIKE 'Oracle%'
```
should work - and you won't need DISTINCT in the SELECT clause. Good luck!
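Applied to the query from the question, the filtered subquery would look something like this (assuming the Oracle Database row's `PRODUCT` value starts with 'Oracle'):
```
SELECT 'alter user ' || username || ' identified by values ' || CHR (39)
       || CASE WHEN b.version = '12.1.0.2.0' THEN '462368EA9F7AD215'
               ELSE 'Invalid Password'
          END
       || CHR (39) || ';'
  FROM DBA_USERS_WITH_DEFPWD a,
       (SELECT version
          FROM PRODUCT_COMPONENT_VERSION
         WHERE product LIKE 'Oracle%') b;
```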
|
Toad is simply warning you that you are doing a join with no `JOIN` condition.
In this case you really are doing that, but one of the involved entities (the subquery on `PRODUCT_COMPONENT_VERSION`) should return only one row, so the warning is a false alarm.
You are doing a cartesian product with an entity that has only one row, so it IS a cartesian, but it does not multiply your values.
|
Avoid cartesian queries - find a proper "where clause"
|
[
"sql",
"oracle",
"cartesian"
] |
In our business we have a main base account and then subordinate accounts under the baseaccount.
1.) How do I get all the accounts into a single column including the base account (comma delimited)?
I've used this code before on other datasets and it works great. I just can't figure out how to make this work with all the multiple joins.
```
SELECT DISTINCT
A.acctnbr as baseacctnbr,
STUFF((SELECT ', '+c1.ACCTNBR
FROM [USBI].[vw_FirmAccount] a1
inner join [USBI].[vw_RelatedAccount] b1 on a1.firmaccountid = b1.basefirmaccountid
inner join [USBI].[vw_FirmAccount] a2 on a2.firmaccountid = b1.relatedfirmaccountid
inner join USBI.vw_NameAddressBase c1 on b1.relatedfirmaccountid = c1.firmaccountid
where c1.AcctNbr = c.ACCTNBR
FOR XML PATH ('')),1,1, '') AS ALLACCTS
FROM [USBI].[vw_FirmAccount] a
inner join [USBI].[vw_RelatedAccount] b on a.firmaccountid = b.basefirmaccountid
inner join [USBI].[vw_FirmAccount] a1 on a1.firmaccountid = b.relatedfirmaccountid
inner join USBI.vw_NameAddressBase c on b.relatedfirmaccountid = c.firmaccountid
where a.acctnbr = '11727765'
and c.restrdate <> '99999999'
and c.closerestrictind <> 'c'
and c.iscurrent = '1'
and b.iscurrent = '1'
```
My Output:
[](https://i.stack.imgur.com/xIEjT.jpg)
I would like to see the comma delimited list like this: `11727765, 11727799, 11783396, 12192670`
I've gone through all the other questions on adding data to a single string and I can't find a solution here. Not a duplicate.
|
Here's what I did to resolve this issue and build the comma-delimited list in a single string/column. I wrote the code in a CTE, using a substring over the `FOR XML PATH` output. It works for what I need, and I'll use the code within an SSRS report.
```
;with t1 as
(
SELECT b.relatedfirmaccountid, b.basefirmaccountid, c.firmaccountid, a.acctnbr as baseacctnbr, a1.acctnbr as relatedacct,
rn = row_number() over (partition by a.acctnbr order by a.acctnbr),
substring((select ', '+c.acctnbr as 'data()'
FROM [USBI].[vw_FirmAccount] a
inner join [USBI].[vw_RelatedAccount] b on a.firmaccountid = b.basefirmaccountid
inner join [USBI].[vw_FirmAccount] a1 on a1.firmaccountid = b.relatedfirmaccountid
inner join USBI.vw_NameAddressBase c on b.relatedfirmaccountid = c.firmaccountid
where a.acctnbr = '11727765'
and c.restrdate <> '99999999'
and c.closerestrictind <> 'c'
and c.iscurrent = '1'
and b.iscurrent = '1'
for xml path('')),2,255) as "AllRelatedAccts"
FROM [USBI].[vw_FirmAccount] a
inner join [USBI].[vw_RelatedAccount] b on a.firmaccountid = b.basefirmaccountid
inner join [USBI].[vw_FirmAccount] a1 on a1.firmaccountid = b.relatedfirmaccountid
inner join USBI.vw_NameAddressBase c on b.relatedfirmaccountid = c.firmaccountid
where a.acctnbr = '11727765'
and c.restrdate <> '99999999'
and c.closerestrictind <> 'c'
and c.iscurrent = '1'
and b.iscurrent = '1'
),
t2 as
(
select distinct a.acctnbr, a.name
from USBI.vw_NameAddressBase a
where a.acctnbr = '11727765'
and a.restrdate <> '99999999'
and a.closerestrictind <> 'c'
and a.iscurrent = '1'
group by a.acctnbr, a.name
)
select
a.baseacctnbr,
a.relatedacct,
a.allrelatedaccts,
a.rn
from t1 a
inner join t2 b on a.baseacctnbr = b.acctnbr
--where rn = 1
```
**My Results:**
[](https://i.stack.imgur.com/kTbSB.jpg)
**My Results after changing my where rn = 1**
[](https://i.stack.imgur.com/5tj8K.jpg)
|
You can right click the box in the top left (to the left of baseacctnbr and above 1) and Save Results As a csv file, and open it with Notepad to view it that way.
Personally I use python a lot for this type of thing. With pyodbc (<https://code.google.com/archive/p/pyodbc/wikis/GettingStarted.wiki>) you can execute your query and access the data directly. For the following code, all you have to do is type the server and database name in the connection string
```
import pyodbc
connection = pyodbc.connect('DRIVER={SQL Server Native Client 11.0};SERVER=YOURSERVERHERE;DATABASE=YOURDATABASENAMEHERE;TRUSTED_CONNECTION=yes')
cursor = connection.cursor()
query = """
SELECT DISTINCT
A.acctnbr as baseacctnbr,
STUFF((SELECT ', '+c1.ACCTNBR
FROM [USBI].[vw_FirmAccount] a1
inner join [USBI].[vw_RelatedAccount] b1 on a1.firmaccountid = b1.basefirmaccountid
inner join [USBI].[vw_FirmAccount] a2 on a2.firmaccountid = b1.relatedfirmaccountid
inner join USBI.vw_NameAddressBase c1 on b1.relatedfirmaccountid = c1.firmaccountid
where c1.AcctNbr = c.ACCTNBR
FOR XML PATH ('')),1,1, '') AS ALLACCTS
FROM [USBI].[vw_FirmAccount] a
inner join [USBI].[vw_RelatedAccount] b on a.firmaccountid = b.basefirmaccountid
inner join [USBI].[vw_FirmAccount] a1 on a1.firmaccountid = b.relatedfirmaccountid
inner join USBI.vw_NameAddressBase c on b.relatedfirmaccountid = c.firmaccountid
where a.acctnbr = '11727765'
and c.restrdate <> '99999999'
and c.closerestrictind <> 'c'
and c.iscurrent = '1'
and b.iscurrent = '1'
"""
cursor.execute(query)
resultList = [row[0] for row in cursor.fetchall()]
print(", ".join(str(i) for i in resultList))
```
|
SQL comma delimited list as a single string
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I am new to VBA in Access. The following code tries to combine two columns in Table\_2, but one of the column names needs to come from a field value in Table\_1. When I run the code, it returns "Error updating: Too few parameters. Expected 1." and I am not sure where the problem is.
Appreciate if someone can help. Thanks a lot.
```
Function test()
On Error Resume Next
Dim strSQL As String
Dim As String
Dim txtValue As String
txtValue = Table_1![Field_A]
Set ws = DBEngine.Workspaces(0)
Set db = ws.Databases(0)
On Error GoTo Proc_Err
ws.BeginTrans
strSQL = "UPDATE Table_2 INNER JOIN Table_1 ON Table_2.id = Table_1.id SET Table_2.Field_Y = Table_2!txtValue & Table_2![Field_Z]"
db.Execute strSQL, dbFailOnError
ws.CommitTrans
Proc_Exit:
Set ws = Nothing
Set db = Nothing
Exit Function
Proc_Err:
ws.Rollback
MsgBox "Error updating: " & Err.Description
Resume Proc_Exit
End Function
```
**UPDATE:** The following codes are with actual field names:
```
Function CombineVariableFields()
On Error Resume Next
Dim ws As Workspace
Dim strSQL As String
Dim fieldname As String
fieldname = Table_1![SelectCombineField]
Set ws = DBEngine.Workspaces(0)
Set db = ws.Databases(0)
On Error GoTo Proc_Err
ws.BeginTrans
strSQL = "UPDATE Table_2 INNER JOIN Table_1 ON Table_2.BookType = Table_1.BookType SET Table_2.CombinedField = [Table_2]!fieldname & [Table_2]![BookName]"
Debug.Print strSQL
db.Execute strSQL, dbFailOnError
ws.CommitTrans
Proc_Exit:
Set ws = Nothing
Exit Function
Proc_Err:
ws.Rollback
MsgBox "Error updating: " & Err.Description
Resume Proc_Exit
End Function
```
**Below are screenshots of the two tables**
[](https://i.stack.imgur.com/juSmQ.jpg)
[](https://i.stack.imgur.com/xWY7E.jpg)
|
As advised by @adam-silenko, I used the DLookup function and the "Too few parameters" error is fixed. Here is the code I used. Though it fixed the error, it produces another issue, which I raised in another question [here](https://stackoverflow.com/questions/36581104/access-vba-concatenate-dynamic-columns-and-execute-in-loop).
```
Function CombineVariableFields_NoLoop()
On Error Resume Next
Dim ws As Workspace
Dim strSQL As String
Dim fieldname As String
fieldname = DLookup("[SelectCombineField]", "Table_1")
Set ws = DBEngine.Workspaces(0)
Set db = CurrentDb()
On Error GoTo Proc_Err
ws.BeginTrans
strSQL = "UPDATE Table_2 INNER JOIN Table_1 ON Table_2.BookType = Table_1.BookType SET Table_2.CombinedField = [Table_2]![" & fieldname & "] & ' - ' & [Table_2]![BookName]"
db.Execute strSQL, dbFailOnError
ws.CommitTrans
Proc_Exit:
Set ws = Nothing
Exit Function
Proc_Err:
ws.Rollback
MsgBox "Error updating: " & Err.Description
Resume Proc_Exit
End Function
```
|
Try using the [DLookup Function](https://support.office.com/en-us/article/DLookup-Function-8896cb03-e31f-45d1-86db-bed10dca5937) and concatenate its result into your UPDATE string to build the expected command.
Edit:
You can also open a recordset, build your update dynamically and execute it in a loop. For example:
```
Set rst = db.OpenRecordset("Select distinct SelectCombinetField FROM Table_1", dbOpenDynaset)
with rst
do while not .eof
strSQL = "UPDATE Table_2 SET Table_2.[" & !SelectCombinetField & "] = (select txtValue from Table_1 where SelectCombinetField = '" & SelectCombinetField & "' and id = Table_2.id Where somting....)"
db.Execute strSQL, dbFailOnError
.MoveNext
Loop
end with
```
This is only an example, because your description is unclear to me.
|
Acess VBA - Update Variable Column (Too few parameters error)
|
[
"",
"sql",
"ms-access",
"vba",
""
] |
I'm trying to insert an ID that depends on another table, but I don't know how to do it. I have to do it this way since I am inserting multiple values at a time.
This is my failed attempt at doing so:
```
INSERT INTO producto (NumeroEconomico, Order_Id,Marca,Modelo,ano,placas,Product_Id,Inventario_Id)
values ('ad-101', '27', 'Nissan','NP300','2016','aer3457','1',SELECT Id from Inventario WHERE Serie = '5161017293'),
('ad-102', '27', 'Nissan','NP300','2015','aer5647','1',SELECT Id from Inventario WHERE Serie = '5161019329')
```
Thanks.
|
You need to wrap your subqueries in parentheses.
```
INSERT INTO producto VALUES
('ad-101', '27', 'Nissan','NP300','2016','aer3457','1',(SELECT Id from Inventario WHERE Serie = '5161017293')),
('ad-102', '27', 'Nissan','NP300','2015','aer5647','1',(SELECT Id from Inventario WHERE Serie = '5161019329'))
```
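If it helps to see the parenthesised scalar subqueries in action, here is a reduced sketch (only two of the columns, invented IDs) using SQLite via Python — the same form works there:

```
import sqlite3

# Cut-down schema: one lookup table and a two-column target table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Inventario (Id INTEGER, Serie TEXT);
INSERT INTO Inventario VALUES (10, '5161017293'), (20, '5161019329');
CREATE TABLE producto (NumeroEconomico TEXT, Inventario_Id INTEGER);
""")

# Each parenthesised subquery is evaluated to a single scalar value per row
conn.execute("""
INSERT INTO producto VALUES
  ('ad-101', (SELECT Id FROM Inventario WHERE Serie = '5161017293')),
  ('ad-102', (SELECT Id FROM Inventario WHERE Serie = '5161019329'))
""")
rows = conn.execute(
    "SELECT * FROM producto ORDER BY NumeroEconomico").fetchall()
```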
|
Have you tried an INSERT INTO SELECT?
```
INSERT INTO producto
(NumeroEconomico, Order_Id,Marca,Modelo,ano,placas,Product_Id,Inventario_Id)
SELECT 'ad-101', '27', 'Nissan','NP300','2016','aer3457','1',Id
FROM Inventario WHERE Serie = '5161017293'
```
|
inserting from select and static elements
|
[
"",
"sql",
"sql-server",
"sql-insert",
""
] |
I have `COLUMN1 CHAR(1)` in `MYTABLE` where all rows are `NULL` and here is my sql query:
```
SELECT COLUMN1 FROM MYTABLE WHERE COLUMN1 != 'A'
```
it returns nothing because all rows have `NULL` in `COLUMN1`. I would expect it to return everything. How do I make it work? I don't want to use
```
COALESCE(NULLIF(COLUMN1, ''), '*')
```
because it slows down the query. Is there any other alternatives?
|
If you really want NULLs, then why not
```
SELECT COLUMN1 FROM MYTABLE WHERE (COLUMN1 != 'A' OR COLUMN1 IS NULL)
```
???
|
Think about the semantics of `NULL`. You can use it the way you want, but it is meant to mean "unknown". Unknown = 'A'? Nope. But unknown != 'A'? Nope. We simply don't know what it is, so almost all comparisons will fail.
You can use `IS NULL` and `IS NOT NULL`, two special operators designed to work with `NULL`. So you can do something like:
```
SELECT COLUMN1 FROM MYTABLE WHERE COLUMN1 != 'A' OR COLUMN1 IS NULL
```
But first ask yourself: is this what you want? Are you really using `NULL` for unknown values? SQL semantics for `NULL` are perfectly logical if you consider it an unknown value, but they complicate everything unnecessarily in all other cases. Basically, introducing `NULL` in SQL was probably a mistake, because the problems it causes in one day outweigh the problems it has solved in the entire history of relational theory.
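A small demonstration of that three-valued logic, sketched with SQLite via Python (sample rows invented) — the `!=` predicate silently drops the NULL row until the explicit `IS NULL` check is added:

```
import sqlite3

# Three rows: one NULL, one 'B', one 'A'. For the NULL row the comparison
# column1 != 'A' evaluates to NULL, which is not true, so it is filtered out.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (column1 TEXT)")
conn.executemany("INSERT INTO mytable VALUES (?)", [(None,), ('B',), ('A',)])

without_null = conn.execute(
    "SELECT COUNT(*) FROM mytable WHERE column1 != 'A'").fetchone()[0]
with_null = conn.execute(
    "SELECT COUNT(*) FROM mytable "
    "WHERE column1 != 'A' OR column1 IS NULL").fetchone()[0]
```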
|
DB2 Select where not equal to string that is NULL
|
[
"",
"sql",
"db2",
""
] |
I want to select all of the unique Users where `coach is true` and `available is true` but if there are ANY Sessions for that user where `call_ends_at is null` I don't want to include that user.
`call_ends_at` can be NULL or have any number of various dates.
## Users
id:integer
name:string
coach: boolean
available: boolean
## Sessions
id:integer
coach\_id: integer
call\_ends\_at:datetime
Here's what I tried:
```
SELECT DISTINCT "users".* FROM "users"
INNER JOIN "sessions" ON "sessions"."coach_id" = "users"."id"
WHERE "users"."coach" = true
AND "users"."available" = true
AND ("sessions"."call_ends_at" IS NOT NULL)
```
But this will still include users if there are sessions with non-null `call_ends_at` columns.
|
```
-- I want to select all of the unique Users where coach is true and available is true
SELECT *
FROM users u
WHERE u.coach = true AND u.available = true
-- but if there are ANY Sessions for that user where call_ends_at is null
-- I don't want to include that user.
AND NOT EXISTS (
SELECT *
FROM sessions s
WHERE s.coach_id = u.id
AND s.call_ends_at IS NULL
) ;
```
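Here is a minimal reproduction of the `NOT EXISTS` approach, using SQLite via Python with invented sample data (user 1 has an open session, user 2 only has finished sessions, user 3 is not a coach):

```
import sqlite3

# Booleans modelled as 0/1 integers for SQLite.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER, coach INTEGER, available INTEGER);
CREATE TABLE sessions (coach_id INTEGER, call_ends_at TEXT);
INSERT INTO users VALUES (1, 1, 1), (2, 1, 1), (3, 0, 1);
INSERT INTO sessions VALUES (1, NULL), (1, '2016-01-01'), (2, '2016-01-02');
""")

# Any user with at least one NULL call_ends_at is excluded by NOT EXISTS
ids = [r[0] for r in conn.execute("""
SELECT id
FROM users u
WHERE u.coach = 1 AND u.available = 1
  AND NOT EXISTS (SELECT *
                  FROM sessions s
                  WHERE s.coach_id = u.id
                    AND s.call_ends_at IS NULL)
""")]
```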
|
I think you can do what you want using `EXISTS` and `NOT EXISTS`:
```
SELECT u.*
FROM "users" u
WHERE u."coach" = true AND u."available" = true AND
EXISTS (SELECT 1
FROM "sessions" s
WHERE s."coach_id" = u."id"
                       ) AND
          NOT EXISTS (SELECT 1
FROM "sessions" s
WHERE s."coach_id" = u."id" AND
s."call_ends_at" IS NULL
);
```
|
How do I exclude results if ANY associated row from a join table has NULL for a specific column?
|
[
"",
"sql",
"postgresql",
""
] |
I have a problem with the `ISNUMERIC` function in SQL Server.
I have a table which contains one column of type `nvarchar`. This column holds values such as `123`, `241`, ... and sometimes string values like `LK`.
My Query:
```
WITH CTE AS (
SELECT *
FROM MyTable
WHERE ISNUMERIC([Column1]) = 1
)
SELECT *
FROM CTE
WHERE CAST(cte.Column1 AS INT) > 8000
```
but this query returns this error message:
```
error converting data type nvarchar 'LK' to int
```
My expectation is that the common table expression filters out all rows where Column1 is not numeric. Is that correct?
Why do I receive this error?
|
This is actually not a bug.
The CTE is a temporary view (or acts like one), so think of your query as querying a view. SQL Server does not guarantee that the `ISNUMERIC` filter is evaluated before the `CAST`; most likely it evaluates both scalar expressions for every row and only then applies the filtering.
That being said, the query can fail on the cast well before it gets to filter the data.
If you want to make it work reliably, filter the data first into a temporary table or a table variable.
|
```
IF OBJECT_ID('dbo.tbl', 'U') IS NOT NULL
DROP TABLE dbo.tbl
GO
CREATE TABLE dbo.tbl (val NVARCHAR(1000))
INSERT INTO dbo.tbl
VALUES ('123'), ('234'), ('LK'), ('8001')
ALTER TABLE dbo.tbl
ADD val2 AS CASE WHEN ISNUMERIC(val) = 1 THEN CAST(val AS INT) END
GO
SELECT *
FROM dbo.tbl
WHERE val2 > 8000
```
|
bug in ISNUMERIC function?
|
[
"",
"sql",
"sql-server",
"sql-server-2014",
""
] |
How can I filter SQL results with `!=` in a PostgreSQL query? Example:
```
SELECT * FROM "A" WHERE "B" != 'C'
```
This works, but it also filters out all records where `"B" IS NULL`. When I changed the query to:
```
SELECT * FROM "A" WHERE "B" != 'C' OR "B" IS NULL
```
I got the right result. O\_o. Do I always have to add an `OR "field" IS NULL` check whenever I use `!=`? Really?
It's uncomfortable in Sequelize: `{ B: { $or: [null, { $not: 'C' }] } }` instead of `{ B: { $not: 'C' } }` :(
|
You can use the "null safe" operator `is distinct from` instead of `<>`
```
SELECT *
FROM "A"
WHERE "B" is distinct from 'C'
```
<http://www.postgresql.org/docs/current/static/functions-comparison.html>
---
You should also avoid quoted identifiers. They are much more trouble than they are worth.
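For illustration, SQLite spells the same null-safe comparison `IS NOT`; this sketch in Python (sample rows invented) shows it keeping the NULL row that plain `!=` drops, just as `IS DISTINCT FROM` does in Postgres:

```
import sqlite3

# Three rows: 'C', 'D' and NULL.
conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE "A" ("B" TEXT)')
conn.executemany('INSERT INTO "A" VALUES (?)', [('C',), ('D',), (None,)])

# Plain inequality: NULL != 'C' is NULL, so the NULL row disappears
plain = conn.execute(
    """SELECT COUNT(*) FROM "A" WHERE "B" != 'C'""").fetchone()[0]

# Null-safe inequality: NULL IS NOT 'C' is true, so the NULL row is kept
null_safe = conn.execute(
    """SELECT COUNT(*) FROM "A" WHERE "B" IS NOT 'C'""").fetchone()[0]
```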
|
You can use `CASE WHEN` in the WHERE clause; it treats NULL values as an empty string, so the NULL rows are returned as well.
```
select * from table_A
where ( case when "column_a" is null then '' else "column_a" end !='yes')
```
It's pretty fast as well.
|
Not equal and null in Postgres
|
[
"",
"sql",
"database",
"postgresql",
"null",
"comparison-operators",
""
] |
I use a SQL query to get the number of messages per user per forum.
```
SELECT count(idpost), iduser, idforum
FROM post
group by iduser, idforum
```
And I get this result:
[](https://i.stack.imgur.com/9AWR2.png)
But I want to get the best poster in each forum. How can I get that?
I want to get the user who has the highest number of posts in ONE forum, like this:
[](https://i.stack.imgur.com/dltTy.png)
|
According to the edited question:
Please try the query below. What we need is a subquery finding `max(idpost)` per `idforum`. Consider the query below:
```
select max(idpost) as IDPOST,idforum
from post
group by idforum
```
This query finds the highest post count for each forum. It should give you output like:
```
idpost idforum
3 1
2 2
3 4
```
Then we need to find the related iduser for these rows:
```
select p2.IDPOST, p1.iduser, p2.idforum
from post p1 inner join
( --the query above comes here as subquery.
select max(idpost) as IDPOST,idforum
from post
group by idforum
) p2 on p1.idforum = p2.idforum and p1.idpost = p2.IDPOST
```
This matches the rows from your main table against the temporary result of the subquery on the idforum and idpost values, and adds the iduser value from your original table:
```
idpost iduser idforum
3 2 1
2 6 2
3 2 4
```
|
Well, assuming you supply the idForum of the forum in question, just take the top row of your query ordered by the count descending:
```
SELECT TOP 1 * FROM
(
    SELECT count(idpost) AS PostsCnt, iduser, idforum FROM post
    GROUP BY iduser, idforum
) PostsCount
WHERE PostsCount.idForum = @theForumIdIamLookingFor
ORDER BY PostsCnt DESC
```
|
Get max with one count
|
[
"",
"sql",
""
] |
This is a simple question and I can't seem to think of a solution.
I have this defined in my stored procedure:
```
@communityDesc varchar(255) = NULL
```
@communityDesc is "aaa,bbb,ccc"
and in my actual query I am trying to use `IN`
```
WHERE AREA IN (@communityDesc)
```
but this will not work because the commas are inside a single string rather than separate values like "aaa", "bbb", "ccc".
So my question is: is there anything I can do to @communityDesc so it works with my IN statement, like reformatting the string?
|
First you must create a function to split the string, something like this:
```
CREATE FUNCTION dbo.splitstring ( @stringToSplit VARCHAR(MAX) )
RETURNS
@returnList TABLE ([Name] [nvarchar] (500))
AS
BEGIN
DECLARE @name NVARCHAR(255)
DECLARE @pos INT
WHILE CHARINDEX(',', @stringToSplit) > 0
BEGIN
SELECT @pos = CHARINDEX(',', @stringToSplit)
SELECT @name = SUBSTRING(@stringToSplit, 1, @pos-1)
INSERT INTO @returnList
SELECT @name
SELECT @stringToSplit = SUBSTRING(@stringToSplit, @pos+1, LEN(@stringToSplit)-@pos)
END
INSERT INTO @returnList
SELECT @stringToSplit
RETURN
END
```
Then you can use this function in your query. Note that a table-valued function cannot be used directly inside `IN`; select from it instead:
```
WHERE AREA IN (SELECT [Name] FROM dbo.splitstring(@communityDesc))
```
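The CHARINDEX/SUBSTRING loop above can be mirrored in a few lines of Python, if that helps to follow the algorithm (function name is mine):

```
def split_string(string_to_split, delimiter=","):
    """Python mirror of the T-SQL loop: CHARINDEX finds the delimiter,
    SUBSTRING takes the prefix, and the loop continues on the remainder."""
    parts = []
    while delimiter in string_to_split:
        pos = string_to_split.index(delimiter)       # CHARINDEX
        parts.append(string_to_split[:pos])          # SUBSTRING(..., 1, pos-1)
        string_to_split = string_to_split[pos + 1:]  # drop prefix + delimiter
    parts.append(string_to_split)                    # trailing element
    return parts

pieces = split_string("aaa,bbb,ccc")
```

Like the T-SQL version, it assumes a single-character delimiter.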
|
This article could help you with your problem:
<http://sqlperformance.com/2012/07/t-sql-queries/split-strings>
In this article Aaron Bertrand writes about your problem.
It's really long and very detailed.
One way would be this:
```
CREATE FUNCTION dbo.SplitStrings_XML
(
@List NVARCHAR(MAX),
@Delimiter NVARCHAR(255)
)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
(
SELECT Item = y.i.value('(./text())[1]', 'nvarchar(4000)')
FROM
(
SELECT x = CONVERT(XML, '<i>'
+ REPLACE(@List, @Delimiter, '</i><i>')
+ '</i>').query('.')
) AS a CROSS APPLY x.nodes('i') AS y(i)
);
GO
```
With this function you only call:
```
WHERE AREA IN (SELECT Item FROM dbo.SplitStrings_XML(@communityDesc, N','))
```
Hope this could help you.
|
SQL Stored Procedure LIKE
|
[
"",
"sql",
""
] |
I have data in my table as shown in the "currently" section below. I need the selected column's comma-delimited data converted into the format marked in green (the read and write of a category grouped together).
[](https://i.stack.imgur.com/l7BTX.png)
Is there any way to do this in SQL Server?
Please look at the proposed data carefully.
Maybe I wasn't clear before: the issue isn't merely the splitting, but grouping all reads and writes of a category together (sometimes there are only reads or only writes); it's not just putting comma-separated values into multiple rows.
```
-- script:
use master
Create table prodLines(id int , prodlines varchar(1000))
--drop table prodLines
insert into prodLines values(1, 'Automotive Coatings (Read), Automotive Coatings (Write), Industrial Coatings (Read), S.P.S. (Read), Shared PL''s (Read)')
insert into prodLines values(2, 'Automotive Coatings (Read), Automotive Coatings (Write), Industrial Coatings (Read), S.P.S. (Read), Shared PL''s (Read)')
select * from prodLines
```
|
Using Jeff's [DelimitedSplit8K](http://www.sqlservercentral.com/articles/Tally+Table/72993/)
```
;
with cte as
(
select id, prodlines, ItemNumber, Item = ltrim(Item),
grp = dense_rank() over (partition by id order by replace(replace(ltrim(Item), '(Read)', ''), '(Write)', ''))
from #prodLines pl
cross apply dbo.DelimitedSplit8K(prodlines, ',') c
)
select id, prodlines, prod = stuff(prod, 1, 1, '')
from cte c
cross apply
(
select ',' + Item
from cte x
where x.id = c.id
and x.grp = c.grp
order by x.Item
for xml path('')
) i (prod)
```
|
Take a look at [STRING\_SPLIT](https://msdn.microsoft.com/en-us/library/mt684588.aspx), You can do something like:
```
SELECT user_access
FROM Product
CROSS APPLY STRING_SPLIT(user_access, ',');
```
or whatever you need.
|
Split column data into multiple rows
|
[
"",
"sql",
"sql-server",
"sql-server-2008-r2",
""
] |
I have a table with two columns: the maximum number of places (capacity) and the number of places available (availablePlaces).
I want to calculate availablePlaces as a percentage of capacity.
```
availablePlaces capacity
1 20
5 18
4 15
```
Desired Result:
```
availablePlaces capacity Percent
1 20 5.0
5 18 27.8
4 15 26.7
```
Any ideas of a SELECT SQL query that will allow me to do this?
|
Try this:
```
SELECT availablePlaces, capacity,
ROUND(availablePlaces * 100.0 / capacity, 1) AS Percent
FROM mytable
```
You have to multiply by 100.0 instead of 100, so as to avoid integer division. Also, you have to use [**`ROUND`**](http://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_round) to round to the first decimal digit.
[**Demo here**](http://sqlfiddle.com/#!9/0cc3e/1)
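The integer-division pitfall is easy to reproduce — this sketch uses SQLite via Python, where the same arithmetic rules apply, on the 5/18 row from the question:

```
import sqlite3

# Both operands integer => integer division truncates the result.
conn = sqlite3.connect(":memory:")
int_div = conn.execute("SELECT 5 * 100 / 18").fetchone()[0]

# Multiplying by 100.0 forces decimal arithmetic; ROUND keeps one digit.
dec_div = conn.execute("SELECT ROUND(5 * 100.0 / 18, 1)").fetchone()[0]
```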
|
The following SQL query will do this for you:
```
SELECT availablePlaces, capacity, (availablePlaces/capacity) as Percent
from table_name;
```
|
Calculate percentage between two columns in SQL Query as another column
|
[
"",
"sql",
"percentage",
""
] |
I'm trying to get a date diff (`date_diff`) in Presto from this data.
```
timespent | 2016-04-09T00:09:07.232Z | 1000 | general
timespent | 2016-04-09T00:09:17.217Z | 10000 | general
timespent | 2016-04-09T00:13:27.123Z | 250000 | general
timespent | 2016-04-09T00:44:21.166Z | 1144020654000 | general
```
This is my query
```
select _t, date_diff('second', from_iso8601_timestamp(_ts), SELECT from_iso8601_timestamp(f._ts) from logs f
where f._t = 'timespent'
and f.dt = '2016-04-09'
and f.uid = 'd2de01a1-8f78-49ce-a065-276c0c24661b'
order by _ts)
from logs d
where _t = 'timespent'
and dt = '2016-04-09'
and uid = 'd2de01a1-8f78-49ce-a065-276c0c24661b'
order by _ts;
```
This is the error I get
```
Query 20160411_150853_00318_fmb4r failed: line 1:61: no viable alternative at input 'SELECT'
```
|
I think you want [`lag()`](https://prestosql.io/docs/current/functions/window.html#lag):
```
select _t,
date_diff('second', from_iso8601_timestamp(_ts),
lag(from_iso8601_timestamp(f._ts)) over (partition by uid order by dt)
)
from logs d
where _t = 'timespent' and dt = '2016-04-09' and
uid = 'd2de01a1-8f78-49ce-a065-276c0c24661b'
order by _ts;
```
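A rough equivalent can be sketched in SQLite (3.25+ for window functions) via Python — `date_diff` and `from_iso8601_timestamp` don't exist there, so `julianday` stands in for the second-level difference (timestamps simplified from the question):

```
import sqlite3

# julianday() converts ISO-8601 timestamps to fractional days; the difference
# to the previous row (via lag) times 86400 gives seconds. The first row has
# no previous row, so its diff is NULL.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE logs (_ts TEXT);
INSERT INTO logs VALUES ('2016-04-09T00:09:07'),
                        ('2016-04-09T00:09:17'),
                        ('2016-04-09T00:13:27');
""")
diffs = [row[0] for row in conn.execute("""
SELECT CAST(ROUND((julianday(_ts) -
            julianday(lag(_ts) OVER (ORDER BY _ts))) * 86400) AS INTEGER)
FROM logs
ORDER BY _ts
""")]
```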
|
```
select date_diff('Day',from_iso8601_date(substr(od.order_date,1,10)),CURRENT_DATE) AS "diff_Days"
from order od;
```
|
How to get date_diff from previous rows in Presto?
|
[
"",
"sql",
"presto",
""
] |
In `SQL Server` you can use sum(field1) to get the sum of all values that match the group by clause.
But I need the values subtracted, not summed.
Is there something like subtract(field1) that I can use instead of sum(field1)?
For example, table1 has this content :
```
name field1
A 1
A 2
B 4
```
Query:
```
select name, sum(field1), subtract(field1)
from table1
group by name
```
would give me:
```
A 3 -1
B 4 4
```
I hope my question is clear.
EDIT :
There is also a sort field that I can use.
This makes sure that the values 1 and 2 will always lead to -1 and not to 1.
What I need is all values for A subtracted; in my example 1 - 2 = -1.
EDIT2 :
If the A-group has values 1, 2, 3, 4 the result must be 1 - 2 - 3 - 4 = -8.
|
Assuming that you have some column to indicate order (needed to select the first element per group), you could use windowed functions to calculate your `substract`:
```
CREATE TABLE tab(ID INT IDENTITY(1,1) PRIMARY KEY, name CHAR(1), field1 INT);
INSERT INTO tab(name, field1) VALUES ('A', 1), ('A', 2), ('B', 4);
SELECT DISTINCT
name
,[sum] = SUM(field1) OVER (PARTITION BY name)
,[substract] = SUM(-field1) OVER (PARTITION BY name)
+ 2*FIRST_VALUE(field1) OVER(PARTITION BY name ORDER BY ID)
FROM tab;
```
`LiveDemo`
Output:
```
╔══════╦═════╦═══════════╗
║ name ║ sum ║ substract ║
╠══════╬═════╬═══════════╣
║ A ║ 3 ║ -1 ║
║ B ║ 4 ║ 4 ║
╚══════╩═════╩═══════════╝
```
Warning:
`FIRST_VALUE` is available from `SQL Server 2012+`
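The `SUM(-field1) + 2*FIRST_VALUE(field1)` trick rests on a simple identity: first − sum(rest) = 2·first − sum(all). A plain Python check on the question's extended example:

```
# The A-group values from the extended example: 1 - 2 - 3 - 4 = -8.
values = [1, 2, 3, 4]

# Direct form: first value minus the sum of the remaining values.
direct = values[0] - sum(values[1:])

# Windowed form from the query: twice the first value minus the whole sum,
# i.e. SUM(-field1) plus 2 * FIRST_VALUE(field1).
rewritten = 2 * values[0] - sum(values)
```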
|
So you want to subtract the sum of the second to last from the first value?
You need a column to indicate the order. If you don't have a logical column like a `datetime` column you could use the primary-key.
Here's an example which uses common table expression(CTE's) and the `ROW_NUMBER`-function:
```
WITH CTE AS
(
SELECT Id, Name, Field1,
RN = ROW_NUMBER() OVER (Partition By name ORDER BY Id)
FROM dbo.Table1
), MinValues AS
(
SELECT Id, Name, Field1
FROM CTE
WHERE RN = 1
)
, OtherValues AS
(
SELECT Id, Name, Field1
FROM CTE
WHERE RN > 1
)
SELECT mv.Name,
MIN(mv.Field1) - COALESCE(SUM(ov.Field1), 0) AS Subtract
FROM MinValues mv LEFT OUTER JOIN OtherValues ov
ON mv.Name = ov.Name
GROUP BY mv.Name
```
`Demo`
|
What is the counterpart for SUM() in SQL Server?
|
[
"",
"sql",
"sql-server",
"t-sql",
"aggregate-functions",
""
] |
I have a table which contains data like this:
```
MinFormat(int) MaxFormat(int) Precision(nvarchar)
-2 3 1/2
```
The values in precision can be 1/2, 1/4, 1/8, 1/16, 1/32, 1/64 only.
Now I want result from query as -
```
-2
-3/2
-1
-1/2
0
1/2
1
3/2
2
5/2
3
```
Any query to get the result as follows?
The idea is to create the result going from the minimum boundary (the MinFormat column value, an integer) to the maximum boundary (the MaxFormat column value, an integer) according to the precision value.
Hence, in the above example, the values should start at -2 and step by the precision value (1/2) until reaching 3.
|
Note this will only work for Precision 1/1, 1/2, 1/4, 1/8, 1/16, 1/32 and 1/64
```
DECLARE @t table(MinFormat int, MaxFormat int, Precision varchar(4))
INSERT @t values(-2, 3, '1/2')
DECLARE @numerator INT, @denominator DECIMAL(9,7)
DECLARE @MinFormat INT, @MaxFormat INT
-- put a where clause on this to get the needed row
SELECT @numerator = 1,
@denominator = STUFF(Precision, 1, charindex('/', Precision), ''),
@MinFormat = MinFormat,
@MaxFormat = MaxFormat
FROM @t
;WITH N(N)AS
(SELECT 1 FROM(VALUES(1),(1),(1),(1),(1),(1),(1),(1),(1),(1))M(N)),
tally(N)AS(SELECT ROW_NUMBER()OVER(ORDER BY N.N)FROM N,N a,N b,N c,N d,N e,N f)
SELECT top(cast((@MaxFormat- @MinFormat) / (@numerator/@denominator) as int) + 1)
CASE WHEN val % 1 = 0 THEN cast(cast(val as int) as varchar(10))
WHEN val*2 % 1 = 0 THEN cast(cast(val*2 as int) as varchar(10)) + '/2'
WHEN val*4 % 1 = 0 THEN cast(cast(val*4 as int) as varchar(10)) + '/4'
WHEN val*8 % 1 = 0 THEN cast(cast(val*8 as int) as varchar(10)) + '/8'
WHEN val*16 % 1 = 0 THEN cast(cast(val*16 as int) as varchar(10)) + '/16'
WHEN val*32 % 1 = 0 THEN cast(cast(val*32 as int) as varchar(10)) + '/32'
WHEN val*64 % 1 = 0 THEN cast(cast(val*64 as int) as varchar(10)) + '/64'
END
FROM tally
CROSS APPLY
(SELECT @MinFormat +(N-1) *(@numerator/@denominator) val) x
```
|
Sorry, I'm late, but this was my approach:
I'd actually wrap this in a TVF and call it like
```
SELECT * FROM dbo.FractalStepper(-2,1,'1/4');
```
or join it with your actual table like
```
SELECT *
FROM SomeTable
CROSS APPLY dbo.FractalStepper(MinFormat,MaxFormat,[Precision]) AS Steps
```
But anyway, this was the code:
```
DECLARE @tbl TABLE (ID INT, MinFormat INT,MaxFormat INT,Precision NVARCHAR(100));
--Inserting two examples
INSERT INTO @tbl VALUES(1,-2,3,'1/2')
,(2,-4,-1,'1/4');
--Test with example 1, just set it to 2 if you want to try the other example
DECLARE @ID INT=1;
--If you want your steps numbered, just un-comment the three occurrences of "Step"
WITH RecursiveCTE as
(
SELECT CAST(tbl.MinFormat AS FLOAT) AS RunningValue
,CAST(tbl.MaxFormat AS FLOAT) AS MaxF
,1/CAST(SUBSTRING(LTRIM(RTRIM(tbl.Precision)),3,10) AS FLOAT) AS Prec
--,1 AS Step
FROM @tbl AS tbl
WHERE tbl.ID=@ID
UNION ALL
SELECT RunningValue + Prec
,MaxF
,Prec
--,Step + 1
FROM RecursiveCTE
WHERE RunningValue + Prec <= MaxF
)
SELECT RunningValue --,Step
,CASE WHEN CAST(RunningValue AS INT)<>RunningValue
THEN CAST(RunningValue / Prec AS VARCHAR(10)) + '/' + CAST(CAST(1/Prec AS INT) AS VARCHAR(MAX))
ELSE CAST(RunningValue AS VARCHAR(10))
END AS RunningValueFractal
FROM RecursiveCTE;
```
The result
```
Value ValueFractal
-2 -2
-1,5 -3/2
-1 -1
-0,5 -1/2
0 0
0,5 1/2
1 1
1,5 3/2
2 2
2,5 5/2
3 3
```
|
SQL Server 2008 - Query to get result in fraction format
|
[
"",
"sql",
"sql-server",
""
] |
I am stuck a little bit with my SQLite3 query.
I have the following table, from which I want to select only the rows that are invalid.
The table contains servers in various countries, and each server can host a number of application instances. Each application instance can be valid (e.g. a successful check) or invalid (if the check went wrong).
My table is something like this:
```
Country ServerID Application Validation
usa ABC1 Mailserver 1
usa ABC1 Mailserver 1
usa ABC1 Mailserver 0
usa ABC1 LDAPserver 0
usa ABC1 Voiceserver 0
usa ABC2 Voiceserver 1
canada BCA1 Mailserver 1
canada BCA2 LDAPserver 1
canada BCA2 Voiceserver 0
..............................................
```
Now I want to select Country, ServerID and Application where at least one of instances is invalid. So output should be:
```
Country ServerID Application
usa ABC1 Mailserver
usa ABC1 LDAPserver
usa ABC1 Voiceserver
canada BCA2 Voiceserver
```
I've already written a few Ruby scripts for similar problems, but I had the idea to solve it directly in the database.
I was unsuccessful with some inner joins, distinct queries and so on.
So I would be very glad if someone could give a helping hand.
Thanks in advance :)
|
I would use aggregation:
```
select Country, ServerID, Application
from t
group by Country, ServerID, Application
having sum(case when validation = 0 then 1 else 0 end) > 0;
```
This counts the number of rows where `validation = 0` and the `> 0` ensures that there is at least one.
Assuming validation is a number that takes on only 0 and 1, you can also do:
```
having sum(validation) < count(*)
```
Or:
```
select distinct Country, ServerID, Application
from t
where validation = 0;
```
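To see the conditional-aggregation `HAVING` at work, here is a sketch with SQLite via Python on a cut-down copy of the question's data:

```
import sqlite3

# Only groups containing at least one invalid (Validation = 0) row survive.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (Country TEXT, ServerID TEXT, Application TEXT, Validation INTEGER);
INSERT INTO t VALUES
  ('usa',    'ABC1', 'Mailserver',  1),
  ('usa',    'ABC1', 'Mailserver',  1),
  ('usa',    'ABC1', 'Mailserver',  0),
  ('usa',    'ABC2', 'Voiceserver', 1),
  ('canada', 'BCA2', 'Voiceserver', 0);
""")
rows = conn.execute("""
SELECT Country, ServerID, Application
FROM t
GROUP BY Country, ServerID, Application
HAVING SUM(CASE WHEN Validation = 0 THEN 1 ELSE 0 END) > 0
ORDER BY Country, ServerID
""").fetchall()
```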
|
How about just doing this?
```
select distinct Country, ServerID, Application
from table
where Validation = 0
```
|
SQL level nested SELECT
|
[
"",
"sql",
"sqlite",
""
] |
I Have 3 tables like:
**ProductCategory** [1 - m] **Product** [1-m] **ProductPrice**
a simple script like this :
```
select pc.CategoryId ,pp.LanguageId , pp.ProductId ,pp.Price
from ProductCategory as pc
inner join Product as p on pc.ProductId = p.Id
inner join ProductPrice as pp on p.Id = pp.ProductId
order by CategoryId , LanguageId , ProductId
```
shows these tabular data :
```
CategoryId LanguageId ProductId Price
----------- ----------- ----------- ---------------------------------------
1 1 1 55.00
1 1 2 55.00
1 2 1 66.00
1 2 2 42.00
2 1 3 76.00
2 1 4 32.00
2 2 3 89.00
2 2 4 65.00
4 1 4 32.00
4 1 5 77.00
4 2 4 65.00
4 2 5 85.00
```
Now what I need is:
for each category, get the *full row as is* but only for the product that has the minimum price.
I just wrote a simple query that does this like :
```
with dbData as
(
select pc.CategoryId ,pp.LanguageId , pp.ProductId ,pp.Price
from ProductCategory as pc
inner join Product as p on pc.ProductId = p.Id
inner join ProductPrice as pp on p.Id = pp.ProductId
)
select distinct db1.*
from dbData as db1
inner join dbData as db2 on db1.CategoryId = db2.CategoryId
where db1.LanguageId = db2.LanguageId
and db1.Price = (select Min(Price)
from dbData
where CategoryId = db2.CategoryId
and LanguageId = db2.LanguageId)
```
and its result is correct:
```
CategoryId LanguageId ProductId Price
----------- ----------- ----------- ---------------------------------------
1 1 1 55.00
1 1 2 55.00
1 2 2 42.00
2 1 4 32.00
2 2 4 65.00
4 1 4 32.00
4 2 4 65.00
```
**Is there a cooler way of doing this?**
Note: The query must be compatible with *SQL Server 2008 R2+*.
|
You could use windowed function like [`RANK()`](https://msdn.microsoft.com/en-us/library/ms176102.aspx):
```
WITH cte AS
(
select pc.CategoryId, pp.LanguageId, pp.ProductId, pp.Price,
rnk = RANK() OVER(PARTITION BY pc.CategoryId ,pp.LanguageId ORDER BY pp.Price)
from ProductCategory as pc
join Product as p on pc.ProductId = p.Id
join ProductPrice as pp on p.Id = pp.ProductId
)
SELECT CategoryId, LanguageId, ProductId, Price
FROM cte
WHERE rnk = 1;
```
`LiveDemo`
|
You are not really using the `Product` table in your query (only to join through), so it doesn't seem necessary. I would write this as:
```
select ppc.*
from (select pc.CategoryId, pp.LanguageId , pp.ProductId, pp.Price,
row_number() over (partition by pc.CategoryId order by pp.Price) as seqnum
from ProductCategory pc inner join
ProductPrice pp
on pc.ProductId = pp.ProductId
) ppc
where seqnum = 1
order by CategoryId, LanguageId, ProductId;
```
|
T-Sql select and group by MIN()
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"greatest-n-per-group",
""
] |
So, I have some data that needs to be counted with a RIGHT JOIN. The query looks like this:
```
SELECT COUNT(trans_id) FROM database2..transaction a
RIGHT JOIN database1..sellers b
ON a.products = b.products
GROUP BY sellers
ORDER BY sellers
```
The result ends up perfectly like this one:
[](https://i.stack.imgur.com/ZfaRH.png)
As you can see, there are 10 names on the result and there are 3 zero results because it was NULL data in my database.
And then I tried to add some conditions on the query, like this:
```
SELECT COUNT(trans_id) FROM database2..transaction a
RIGHT JOIN database1..sellers b
ON a.products = b.products
WHERE trans_status = 'success'
GROUP BY sellers
ORDER BY sellers
```
The result ends up differently:
[](https://i.stack.imgur.com/Mt0rv.png)
It ended up with only 6 results, and the zero counts from before were gone after I added the condition. Does that mean SQL only counts rows that actually exist in both tables once I add a condition like that?
|
I find it much easier to follow `left join` rather than `right join`. But your problem is that the `where` conditions should go in the `on` clause -- otherwise they turn the outer join in to an inner join:
```
SELECT COUNT(trans_id)
FROM database1..sellers s LEFT JOIN
database2..transaction t
ON s.products = t.products AND trans_status = 'success'
GROUP BY sellers
ORDER BY sellers;
```
I also encourage you to:
* Use table abbreviations for aliases (rather than generic letters such as "a" or "b").
* qualify all column names (I haven't done that, even though `trans_status` is obviously from `transaction`).
* `Left join` is preferable to `right join` because SQL `from` clauses are parsed from "left" to "right".
|
Your `WHERE` clause is applied after your `JOIN`. If you want the 0 counts to still show up, you should add it to your `JOIN` instead.
```
SELECT COUNT(trans_id) FROM database2..transaction a
RIGHT JOIN database1..sellers b
ON a.products = b.products
AND trans_status = 'success'
GROUP BY sellers
ORDER BY sellers
```
|
SQL RIGHT JOIN Won't Count Null Data
|
[
"",
"sql",
"sql-server",
""
] |
i think its easier to explain on an example. so here are my tables
```
Table A
id | Name
1 | row1
2 | row2
3 | row3
Table B
id | A_id | date
1 | 1 | NULL
2 | 1 | NULL
3 | 2 | 2016-04-01
4 | 2 | 2016-04-01
5 | 3 | 2016-04-01
6 | 3 | NULL
```
What I'm trying to accomplish: my select on Table A should only return rows where at least one of the associated rows in Table B has a NULL date.
```
Select * from Table_A Where (?)
```
Is this accomplishable? I would only get row1 and row3 back. Can it be done?
|
You can use `EXISTS`:
```
SELECT *
FROM TableA a
WHERE EXISTS(
SELECT 1
FROM TableB b
WHERE
b.A_id = a.id
AND b.date IS NULL
)
```
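A quick check of this `EXISTS` query against the sample data, sketched here through SQLite from Python; it returns exactly row1 and row3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TableA (id INT, Name TEXT);
INSERT INTO TableA VALUES (1,'row1'),(2,'row2'),(3,'row3');
CREATE TABLE TableB (id INT, A_id INT, date TEXT);
INSERT INTO TableB VALUES
 (1,1,NULL),(2,1,NULL),(3,2,'2016-04-01'),
 (4,2,'2016-04-01'),(5,3,'2016-04-01'),(6,3,NULL);
""")
rows = conn.execute("""
SELECT a.id, a.Name
FROM TableA a
WHERE EXISTS (SELECT 1 FROM TableB b
              WHERE b.A_id = a.id AND b.date IS NULL)
ORDER BY a.id
""").fetchall()
# Only ids 1 and 3 have at least one NULL-dated row in TableB
```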
|
```
SELECT *
FROM TableA a
WHERE ID IN (SELECT A_id
FROM TableB
WHERE date IS NULL
)
```
The subquery retrieves the IDs that have null dates, which are in turn passed to the main query to get what you want
|
SQL where clause dependant on other table
|
[
"",
"sql",
"sql-server",
""
] |
I have a table called stock with columns tran_id, trancode, item_id, barcode, prodqty, created in SQL Server as below. Each time an item is sold, a row is added with a trancode of sales, and each time an item is produced, a row is added with a trancode of Production. I need to get the remaining stock for a particular item barcode from this stock table. Please help; I'm stranded.
```
tran_id trancode item_id barcode item_desc Packid Packing prodqty userr Prodate
1 Production 114 12349090898 cofeekarak CTN 10xCTN 6 emma 2016-04-05
8 sales 114 12349090898 cofeekarak CTN 10xCTN 2 emma 2016-04-11
```
|
Sum the quantities, positive if production, negative if sales:
```
select sum(case when trancode = 'sales' then -prodqty else prodqty end)
from stock
where barcode = ?
```
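A minimal check of the signed-sum logic with the two sample rows (6 produced, 2 sold), sketched via SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE stock (trancode TEXT, barcode TEXT, prodqty INT);
INSERT INTO stock VALUES
 ('Production','12349090898',6),
 ('sales','12349090898',2);
""")
# Sales count as negative quantities, production as positive
(remaining,) = conn.execute("""
SELECT SUM(CASE WHEN trancode = 'sales' THEN -prodqty ELSE prodqty END)
FROM stock
WHERE barcode = ?
""", ("12349090898",)).fetchone()
```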
|
Try this
```
select (qty1.Produced - qty2.Sold) as difference from
(select sum(prodqty) as Produced from yourtable where barcode='xxxx' and trancode='Production') qty1,
(select sum(prodqty) as Sold from yourtable where barcode='xxxx' and trancode='sales') qty2
```
|
how to get the difference between to columns in sql
|
[
"",
"sql",
""
] |
I am trying to insert data into a table, however I would like to select the ID which is auto incremented when the row is made, and then I would like to set it into a field within another table.
For example
1. INSERT INTO playerdata (Username) VALUES ('%e')
2. SELECT ID FROM playerdata p WHERE Username = '%e' LIMIT 1
3. UPDATE masterdata SET Slot = p.ID
I have been trying to find out a way for a hours now, but still no luck, so if there is a way to do this, please reply with it or if there is an easier way, then please explain. All replies are gratefully appreciated.
|
you can use a trigger after insert like this
```
create trigger update_masterdata after insert on playerdata
for each row
begin
UPDATE masterdata SET Slot = NEW.id;
end/
```
[sqlfiddle](http://sqlfiddle.com/#!9/d6a77/1)
|
It seems you only insert one row.
You can try the [LAST\_INSERT\_ID](http://dev.mysql.com/doc/refman/5.7/en/information-functions.html#function_last-insert-id) function
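For illustration, the same insert-then-reuse-the-id pattern sketched in Python with SQLite, whose `cursor.lastrowid` plays the role of MySQL's `LAST_INSERT_ID()` (the table layout here is a guess based on the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE playerdata (ID INTEGER PRIMARY KEY AUTOINCREMENT, Username TEXT);
CREATE TABLE masterdata (Slot INTEGER);
INSERT INTO masterdata (Slot) VALUES (0);
""")
# Insert, then grab the auto-incremented id from the same connection
cur = conn.execute("INSERT INTO playerdata (Username) VALUES (?)", ("someplayer",))
new_id = cur.lastrowid  # SQLite analogue of MySQL's LAST_INSERT_ID()
conn.execute("UPDATE masterdata SET Slot = ?", (new_id,))
(slot,) = conn.execute("SELECT Slot FROM masterdata").fetchone()
```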
|
Using INSERT, SELECT and UPDATE in one query?
|
[
"",
"mysql",
"sql",
""
] |
I am trying to combine this SQL backup script/query that I have with my PowerShell script, and I'm not sure how to convert it, as I don't know much about SQL, only PowerShell. I have been trying to use `invoke-sqlcmd` before every line in the script, but I don't think that's how you do it. I don't fully understand the syntax of `invoke-sqlcmd`, and the Microsoft documentation was not helpful. This is the SQL database backup script I need to use:
```
DECLARE @name VARCHAR(50) -- database name
DECLARE @path VARCHAR(256) -- path for backup files
DECLARE @fileName VARCHAR(256) -- filename for backup
DECLARE @fileDate VARCHAR(20) -- used for file name
-- specify database backup directory
SET @path = 'F:\Backups\'
-- specify filename format
SELECT @fileDate = CONVERT(VARCHAR(20),GETDATE(),112)
DECLARE db_cursor CURSOR FOR
SELECT name
FROM master.dbo.sysdatabases
WHERE name IN ('dbname','dbname','dbname','dbname','dbname','dbname') -- exclude these databases
OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @name
WHILE @@FETCH_STATUS = 0
BEGIN
SET @fileName = @path + @name + '_' + @fileDate + '.BAK'
BACKUP DATABASE @name TO DISK = @fileName
WITH COMPRESSION
FETCH NEXT FROM db_cursor INTO @name
END
CLOSE db_cursor
DEALLOCATE db_cursor
```
Is it correct to just add `invoke-sqlcmd` in the front of every line of sql query? Also whatever solution there is to this needs to be compatible with Windows Server 2008 and up. I thought it might be as easy as just putting the whole script in quotes with `invoke-sqlcmd`, but I doubt it. Whenever I google this the answers are way too complex and don't really explain exactly how the SQL ties into Powershell. They just kind of assume you know. The only field in this script that needs to change is the path when you run it on SQL studio.
|
`Invoke-Sqlcmd` is designed to be a replacement for the `sqlcmd` command-line tool.
> Much of what you can do with sqlcmd can also be done using
> Invoke-Sqlcmd.
Therefore you will execute your commands in one go. Either using external file or multi-line SQL string. See below for examples.
## External File Example
Save your sql backup script to another file like backupDaily.sql. After that use following syntax.
```
Invoke-Sqlcmd -InputFile "C:\backupScripts\backupDaily.sql" | Out-File -filePath "C:\backupScripts\backupDailyCmd.rpt"
```
Command is taken from [Microsoft documentation](https://msdn.microsoft.com/en-us/library/cc281720.aspx).
The good thing is you can try this sql file in Sql Server Management Studio to see if it works.
## Multi-line SQL String Example
```
$SQL_QUERY = @"
YOUR QUERY HERE.
"@
Invoke-Sqlcmd -Query $SQL_QUERY -ServerInstance "MyComputer\MyInstance"
```
You may easily use string substitution here.
```
$name = 'atilla'
$SQL_QUERY = @"
SELECT * FROM Employees WHERE NAME LIKE '%$($name)%'
"@
Invoke-Sqlcmd -Query $SQL_QUERY -ServerInstance "MyComputer\MyInstance"
```
|
I wouldn't use powershell for doing a backup.
I would recommend using a SQL stored procedure to schedule a job.
```
USE msdb ;
GO
-- creates a schedule named NightlyJobs.
-- Jobs that use this schedule execute every day when the time on the server is 01:00.
EXEC sp_add_schedule
@schedule_name = N'NightlyJobs' ,
@freq_type = 4,
@freq_interval = 1,
@active_start_time = 010000 ;
GO
-- attaches the schedule to the job BackupDatabase
EXEC sp_attach_schedule
@job_name = N'BackupDatabase',
@schedule_name = N'NightlyJobs' ;
GO
```
Hopefully this link will help.
<https://msdn.microsoft.com/en-GB/library/ms191439.aspx>
|
Combining a SQL database backup script/query with powershell
|
[
"",
"sql",
"sql-server",
"powershell",
""
] |
I have 3 tables with below format.
```
Table A Table B Table C
id id1 id2 id name
1 1 null 1.1 john
2 1 1.1
2 null
```
With the query
```
select a.id,b.id1,b.id2,c.id,c.name
from TableA a
join TableB b on a.id = b.id1
left join TableC c on b.id2 = c.id
```
I will see the below data
```
a.id b.id1 b.id2 c.id c.name
1 1 null null null
1 1 1.1 1.1 john
2 2 null null null
```
My intention is to get rid of the first row in the result set; i.e., if an id has non-null rows then display only those, and if it has no data at all then display the null row.
Please let me know if you need any clarity.
|
We can build a solution using a derived table which stores, for each `id1` in `TableB`, whether there is at least one non-null value in `id2`; if so, we drop the rows where `id2` is null.
This table would be :
```
select sum(case when id2 is null then 0 else 1 end) as test,
id1
from TableB
group by id1
```
And the query :
```
select a.id,b.id1,b.id2,c.id,c.name
from TableA a
join TableB b on a.id = b.id1
left join TableC on b.id2 = c.id
left join (
select sum(case when id2 is null then 0 else 1 end) as test,
id1
from TableB
group by id1
) AS Table_test on Table_test.id1 = b.id1
where b.id2 is not null or Table_test.test = 0
```
Thus, when an `id1` has non-null `id2` values you keep only those rows (no null row), and when it has only nulls you keep the single null row.
|
If I'm understanding correctly, here's one option using `rank`:
```
select *
from (
select a.id,b.id1,b.id2,c.id,c.name,
rank() over (partition by b.id1
order by case when b.id2 is null then 1 else 0 end) rnk
from TableA a
join TableB b on a.id = b.id1
         left join TableC c on b.id2 = c.id
) t
where id2 is not null or rnk = 1
```
|
display data from 3 tables with conditional joining
|
[
"",
"sql",
"oracle11g",
"oracle-sqldeveloper",
"plsqldeveloper",
""
] |
I hope somebody can help me I am new in sql database.
I have the following table:
```
--------------------------------------
id from to version
---------------------------------------
0 01.01.70 31.12.79 v1
1 01.01.80 31.12.89 v2
```
Now I would like to reach the following behavior:
when the user selected some date then the version should be returned, where the selected user date is between the corresponding `from` and `to` dates.
examples:
```
user_date = 01.01.70 ------> v1 should be returned
user_date = 01.01.75 ------> v1 should be returned
user_date = 31.12.79 ------> v1 should be returned
user_date = 01.01.80 ------> v2 should be returned
user_date = 01.01.85 ------> v2 should be returned
user_date = 31.12.89 ------> v2 should be returned
```
I tried this
```
select version from versions_table where user_date between from and to;
```
|
You will need to pass a variable into the query; depending on your programming language, this is easily accomplished.
```
select version
from versions_table
where to_date('<myInputDate>', 'dd.mm.yy') between to_date("from", 'dd.mm.yy') and to_date("to", 'dd.mm.yy');
```
Make sure to substitute the date you are interested in for `<myInputDate>` (without the angle brackets).
[edit] If you don't to\_date them, it will perform an alphabetic ordering, which will cause issues with the query.
|
`from` could be a bad idea for a column name.
You could test this SQL Query:
```
SELECT version
FROM versions_table
WHERE user_date between versions_table.from
AND versions_table.to;
```
With this you indicates that you mean columns from table `versions_table` and not the keyword `FROM`
|
select a date between two dates
|
[
"",
"sql",
"oracle",
""
] |
I have this:
```
SELECT * FROM history JOIN value WHERE history.the_date >= value.the_date
```
Is it possible to somehow ask this like: where history.the\_date is bigger than or equal to the biggest possible value of value.the\_date?
```
HISTORY
the_date amount
2014-02-27 200
2015-02-26 2000
VALUE
the_date interest
2010-02-10 2
2015-01-01 3
```
I need to pair the correct interest with the amount!
|
So `value.the_date` is the date since when the interest is valid. Interest 2 was valid from 2010-02-10 till 2014-12-31, because since 2015-01-01 the new interest 3 applies.
To get the current interest for a date you'd use a subquery where you select all interest records with a valid-from date up to then and only keep the latest:
```
select
the_date,
amount,
(
select v.interest
from value v
where v.the_date <= h.the_date
order by v.the_date desc
limit 1
) as interest
from history h;
```
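Run against the sample data, the correlated subquery pairs each amount with the interest in force on that date. A sketch of the same query through SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE history (the_date TEXT, amount INT);
INSERT INTO history VALUES ('2014-02-27',200),('2015-02-26',2000);
CREATE TABLE value (the_date TEXT, interest INT);
INSERT INTO value VALUES ('2010-02-10',2),('2015-01-01',3);
""")
rows = conn.execute("""
SELECT h.the_date, h.amount,
       (SELECT v.interest
        FROM value v
        WHERE v.the_date <= h.the_date
        ORDER BY v.the_date DESC
        LIMIT 1) AS interest
FROM history h
ORDER BY h.the_date
""").fetchall()
# 200 pairs with interest 2, 2000 pairs with interest 3
```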
|
Use the join condition after `ON`, not in the `WHERE` clause:
```
SELECT * FROM history JOIN (select max(value.the_date) as d from value) as x on history.the_date >= x.d
WHERE 1=1
```
|
Joining two tables by date MySQL
|
[
"",
"mysql",
"sql",
""
] |
Can you tell me how I can generate an ER diagram for my database1 (see below) created with VS 2015
[](https://i.stack.imgur.com/6wx9J.png)
Thanks in advance
|
Objects dragged into the diagram editor keep the relationships defined in the database, so just relate the objects in the DB and drag them into the editor.
|
Ensure you installed either Microsoft SQL Server Data Tools or Microsoft Web Developer Tools in order to get the Entity Data Model Designer.
These are the steps to generate entity relationship diagram. It was tested in `VS2012`
1. Open `Visual Studio`
2. Create a project or open an existing project
(must be Visual Basic, Visual C# project, or Console Application)
3. Right-click the project and choose `Add` -> `New Item…`
4. Under Visual C# Items select `“Data”`
5. Select the template `“ADO.NET Entity Data Model”`
6. Give it a name and click `“Add”`
7. Select `“Generate from database”` or `“Empty model”`
8. If `“Generate from database”` selected enter connection
info, choose the database objects and done!
The model is stored as a `“.edmx”` file.
|
Generating entity relationship diagram in Visual Studio 2015
|
[
"",
"sql",
"database",
"visual-studio",
"visual-studio-2015",
"entity-relationship",
""
] |
I have the following tables in my database:
**tags**
```
id | name
---------
1 | tag1
2 | tag2
3 | tag3
4 | tag4
```
**map\_posts\_tags**
```
post_id | tag_id
----------------
123 | 1
123 | 2
234 | 1
345 | 3
345 | 4
456 | 2
456 | 1
```
Is it possible to get all the posts with the same related tags as the passed `post_id` by using a SQL query?
**For example:**
I have my post with id 123 and want to get a list of all the posts that have the same tags related (excluding from the list my post with id 123 if possible from SQL).
|
```
SELECT * FROM tags t
INNER JOIN map_posts_tags mpt
ON t.id = mpt.tag_id
WHERE tag_id in (
select tag_id from map_posts_tags
where post_id = 123)
AND mpt.post_id <> 123
```
|
This will give you what you want. You can join it with `tags` table to get `tag name` too
```
select distinct post_id from map_posts_tags where tag_id in
(select tag_id from map_posts_tags where post_id = 123)
and post_id <> 123
```
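A quick check of this subquery against the sample mapping table, sketched via SQLite: posts 234 and 456 share a tag with post 123, and 123 itself is excluded.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE map_posts_tags (post_id INT, tag_id INT);
INSERT INTO map_posts_tags VALUES
 (123,1),(123,2),(234,1),(345,3),(345,4),(456,2),(456,1);
""")
rows = conn.execute("""
SELECT DISTINCT post_id
FROM map_posts_tags
WHERE tag_id IN (SELECT tag_id FROM map_posts_tags WHERE post_id = 123)
  AND post_id <> 123
ORDER BY post_id
""").fetchall()
```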
|
Complex SQL query with JOIN
|
[
"",
"mysql",
"sql",
"database",
"join",
""
] |
I have a table that looks like this
```
userid | eventid | description | date | starttime | endtime
1 1 Event 1 2016-02-02 09:30:00 11:00:00
1 2 Event 2 2016-02-02 13:30:00 15:00:00
1 3 Event 3 2016-02-02 17:30:00 21:00:00
2 4 Event 4 2016-02-03 13:00:00 14:00:00
2 5 Event 5 2016-02-03 15:00:00 16:00:00
```
I need to find what is the sum of time between the events on the same day by the user.
Like this:
```
userid | timeBetween
1 05:00:00
2 01:00:00
```
I should also assume that there may be overlapping times for example event1 starts at 11:00 ends 13:00 and event2 starts 12:00 and ends 14:00 by the same user on the same day. These cases are rare and I believe returning 00:00 here is the appropriate answer.
I solved a similar problem, finding the sum of the length of all events per day.
```
SELECT *,
SEC_TO_TIME( SUM( TIME_TO_SEC(TIMEDIFF(`endtime`,`starttime`)))) as sumtime
FROM `events`
group by userid, date
order by sumtime desc
```
|
Given this sample data:
```
CREATE TABLE t
(`userid` int, `eventid` int, `description` varchar(7), `date` date, `starttime` time, `endtime` time)
;
INSERT INTO t
(`userid`, `eventid`, `description`, `date`, `starttime`, `endtime`)
VALUES
(1, 1, 'Event 1', '2016-02-02', '09:30:00', '11:00:00'),
(1, 2, 'Event 2', '2016-02-02', '13:30:00', '15:00:00'),
(1, 3, 'Event 3', '2016-02-02', '17:30:00', '21:00:00'),
(2, 4, 'Event 4', '2016-02-03', '13:00:00', '14:00:00'),
(2, 5, 'Event 5', '2016-02-03', '15:00:00', '16:00:00')
;
```
this query
```
SELECT userid, SEC_TO_TIME(SUM(TIME_TO_SEC(diff))) AS time_between
FROM (
SELECT
TIMEDIFF(starttime, COALESCE(IF(userid != @prev_userid, NULL, @prev_endtime), starttime)) AS diff,
@prev_endtime := endtime,
@prev_userid := userid AS userid
FROM
t
, (SELECT @prev_endtime := NULL, @prev_userid := NULL) var_init_subquery
ORDER BY userid
) sq
GROUP BY userid;
```
will return
```
+--------+--------------+
| userid | time_between |
+--------+--------------+
| 1 | 05:00:00 |
| 2 | 01:00:00 |
+--------+--------------+
```
Explanation:
In this part
```
, (SELECT @prev_endtime := NULL, @prev_userid := NULL) var_init_subquery
ORDER BY userid
```
we initialize our variables. The `ORDER BY` is very important, since there's no order in a relational database unless you specify it. It is so important, because the `SELECT` clause processes the rows in this order.
In the `SELECT` clause the order is also very important. Here
```
@prev_endtime := endtime,
@prev_userid := userid AS userid
```
we assign the values of the current row to the variables. Since this happens after this line
```
TIMEDIFF(starttime, COALESCE(IF(userid != @prev_userid, NULL, @prev_endtime), starttime)) AS diff,
```
the variables still hold the values of the previous row in the `timediff()` function. Therefore we also have to use `COALESCE()`, because in the very first row and when the userid changes, there is no value to calculate the diff from. To get a diff of 0 there, `COALESCE()` exchanges the `NULL` value with the starttime.
The last part is obviously to simply sum the seconds of the "between times".
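The user-variable trick above is MySQL-specific. On engines with window functions (MySQL 8+, or SQLite ≥ 3.25 as sketched below), `LAG()` expresses the "previous row's endtime" directly; this sketch reproduces the expected 05:00:00 and 01:00:00 gaps, in seconds:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (userid INT, date TEXT, starttime TEXT, endtime TEXT);
INSERT INTO events VALUES
 (1,'2016-02-02','09:30:00','11:00:00'),
 (1,'2016-02-02','13:30:00','15:00:00'),
 (1,'2016-02-02','17:30:00','21:00:00'),
 (2,'2016-02-03','13:00:00','14:00:00'),
 (2,'2016-02-03','15:00:00','16:00:00');
""")
rows = conn.execute("""
SELECT userid,
       SUM(strftime('%s', date || ' ' || starttime)
         - strftime('%s', date || ' ' || prev_end)) AS seconds_between
FROM (
  SELECT userid, date, starttime,
         LAG(endtime) OVER (PARTITION BY userid, date
                            ORDER BY starttime) AS prev_end
  FROM events
)
WHERE prev_end IS NOT NULL
GROUP BY userid
ORDER BY userid
""").fetchall()
# user 1: 2.5h + 2.5h = 18000s; user 2: 1h = 3600s
```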
|
Here's one way you can get the `timeBetween` value in `SECONDS`
```
SELECT
firsttable.userid,
SEC_TO_TIME(SUM(TIME_TO_SEC(secondtable.starttime) - TIME_TO_SEC(firsttable.endtime))) timeBetween
FROM
(
SELECT
*,
IF(@prev = userid, @rn1 := @rn1 + 1, @rn1 := 1) rank,
@prev := userid
FROM eventtable,(SELECT @prev := 0,@rn1 := 1) var
ORDER BY userid,starttime DESC
) firsttable
INNER JOIN
(
SELECT
*,
IF(@prev2 = userid, @rn2 := @rn2 + 1, @rn2 := 1) rank,
@prev2 := userid
FROM eventtable,(SELECT @prev2 := 0,@rn2 := 1) var
ORDER BY userid,endtime DESC
) secondTable
ON firsttable.userid = secondtable.userid AND firsttable.rank = secondtable.rank + 1 AND
firsttable.date = secondtable.date
GROUP BY firsttable.userid;
```
---
**TEST:**
Unable to add a **fiddle**.
So here's test data with schema:
```
DROP TABLE IF EXISTS `eventtable`;
CREATE TABLE `eventtable` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`userid` int(11) NOT NULL,
`eventid` int(11) NOT NULL,
`description` varchar(100) CHARACTER SET utf8 NOT NULL,
`date` date NOT NULL,
`starttime` time NOT NULL,
`endtime` time NOT NULL,
PRIMARY KEY (`id`)
) ;
INSERT INTO `eventtable` VALUES ('1', '1', '1', 'Event 1', '2016-02-02', '09:30:00', '11:00:00');
INSERT INTO `eventtable` VALUES ('2', '1', '2', 'Event 2', '2016-02-02', '13:30:00', '15:00:00');
INSERT INTO `eventtable` VALUES ('3', '1', '3', 'Event 3', '2016-02-02', '17:30:00', '21:00:00');
INSERT INTO `eventtable` VALUES ('4', '2', '4', 'Event 4', '2016-02-03', '13:00:00', '14:00:00');
INSERT INTO `eventtable` VALUES ('5', '2', '5', 'Event 5', '2016-02-03', '15:00:00', '16:00:00');
```
**Result:**
Executing the above query on the given test data you will get output like below:
```
userid timeBetween
1 05:00:00
2 01:00:00
```
**Note:**
For overlapping events the above query will give you negative `timeBetween` value.
You can replace the the `SEC_TO_TIME...`line by the following:
```
SEC_TO_TIME(IF(SUM(TIME_TO_SEC(secondtable.starttime) - TIME_TO_SEC(firsttable.endtime)) < 0, 0,SUM(TIME_TO_SEC(secondtable.starttime) - TIME_TO_SEC(firsttable.endtime)))) timeBetween
```
|
Finding in between time in a list of times
|
[
"",
"mysql",
"sql",
""
] |
I have a MySQL table that is filled with mails from a postfix mail log. The table is updated very often, some times multiple times per second. Here's the `SHOW CREATE TABLE` output:
```
Create Table postfix_mails CREATE TABLE `postfix_mails` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`mail_id` varchar(20) COLLATE utf8_danish_ci NOT NULL,
`host` varchar(30) COLLATE utf8_danish_ci NOT NULL,
`queued_at` datetime NOT NULL COMMENT 'When the message was received by the MTA',
`attempt_at` datetime NOT NULL COMMENT 'When the MTA last attempted to relay the message',
`attempts` smallint(5) unsigned NOT NULL,
`from` varchar(254) COLLATE utf8_danish_ci DEFAULT NULL,
`to` varchar(254) COLLATE utf8_danish_ci NOT NULL,
`source_relay` varchar(100) COLLATE utf8_danish_ci DEFAULT NULL,
`target_relay` varchar(100) COLLATE utf8_danish_ci DEFAULT NULL,
`target_relay_status` enum('sent','deferred','bounced','expired') COLLATE utf8_danish_ci NOT NULL,
`target_relay_comment` varchar(4098) COLLATE utf8_danish_ci NOT NULL,
`dsn` varchar(10) COLLATE utf8_danish_ci NOT NULL,
`size` int(11) unsigned NOT NULL,
`delay` float unsigned NOT NULL,
`delays` varchar(50) COLLATE utf8_danish_ci NOT NULL,
`nrcpt` smallint(5) unsigned NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `mail_signature` (`host`,`mail_id`,`to`),
KEY `from` (`from`),
KEY `to` (`to`),
KEY `source_relay` (`source_relay`),
KEY `target_relay` (`target_relay`),
KEY `target_relay_status` (`target_relay_status`),
KEY `mail_id` (`mail_id`),
KEY `last_attempt_at` (`attempt_at`),
KEY `queued_at` (`queued_at`)
) ENGINE=InnoDB AUTO_INCREMENT=111592 DEFAULT CHARSET=utf8 COLLATE=utf8_danish_ci
```
I want to know how many mails were relayed through a specific host on a specific date, so I'm using this query:
```
SELECT COUNT(*) as `count`
FROM `postfix_mails`
WHERE `queued_at` LIKE '2016-04-11%'
AND `host` = 'mta03'
```
**The query takes between 100 and 110 ms.**
Currently the table contains about 70 000 mails, and the query returns around 31 000. This is only a couple of days' worth of mails, and I plan to keep at least a month. The query cache doesn't help much because the table is getting updated constantly.
I have tried doing this instead:
```
SELECT SQL_NO_CACHE COUNT(*) as `count`
FROM `postfix_mails`
WHERE `queued_at` >= '2016-04-11'
AND `queued_at` < '2016-04-12'
AND `host` = 'mta03'
```
But the query takes the exact same time to run. I have made these changes to the MySQL configuration:
```
[mysqld]
query_cache_size = 128M
key_buffer_size = 256M
read_buffer_size = 128M
sort_buffer_size = 128M
innodb_buffer_pool_size = 4096M
```
And confirmed that they are all in effect (`SHOW VARIABLES`) but the query doesn't run any faster.
Am I doing something stupid that makes this query take this long? Can you spot any obvious or non-obvious ways to make it faster? Is there another database engine that works better than InnoDB in this scenario?
---
```
mysql> EXPLAIN SELECT SQL_NO_CACHE COUNT(*) as `count`
-> FROM `postfix_mails`
-> WHERE `queued_at` >= '2016-04-11'
-> AND `queued_at` < '2016-04-12'
-> AND `host` = 'mta03';
+----+-------------+---------------+------+--------------------------+----------------+---------+-------+-------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+---------------+------+--------------------------+----------------+---------+-------+-------+-------------+
| 1 | SIMPLE | postfix_mails | ref | mail_signature,queued_at | mail_signature | 92 | const | 53244 | Using where |
+----+-------------+---------------+------+--------------------------+----------------+---------+-------+-------+-------------+
1 row in set (0.00 sec)
```
|
If you need your query to be really fast, you'll need to materialize it.
MySQL lacks a way to do that natively, so you'll have to create a table like that:
```
CREATE TABLE mails_host_day
(
host VARCHAR(30) NOT NULL,
day DATE NOT NULL,
mails BIGINT NOT NULL,
PRIMARY KEY (host, day)
)
```
and update it either in a trigger on `postfix_mails` or with a script once in a while:
```
INSERT
INTO mails_host_day (host, day, mails)
SELECT host, CAST(queued_at AS DATE), COUNT(*)
FROM postfix_mails
WHERE id > :last_sync_id
GROUP BY
host, CAST(queued_at AS DATE)
ON DUPLICATE KEY
UPDATE mails = mails + VALUES(mails)
```
This way, querying a host-day entry is a single primary key seek.
Note that trigger-based solution will affect DML performance, while the script-based solution will result in slightly less actual data.
However, you can improve the script-based solution actuality if you union the most recent actual data with the stored results:
```
SELECT host, day, SUM(mails) AS mails
FROM (
SELECT host, day, mails
FROM mails_host_day
UNION ALL
       SELECT host, CAST(queued_at AS DATE) AS day, COUNT(*) AS mails
FROM postfix_mails
WHERE id >= :last_sync_id
       GROUP BY
              host, CAST(queued_at AS DATE)
) q
```
It's not a single index seek anymore, however, if you run the update script often enough, there will be less actual records to read.
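An end-to-end sketch of the script-based sync described above, using SQLite's `ON CONFLICT ... DO UPDATE` (≥ 3.24) as a stand-in for MySQL's `ON DUPLICATE KEY UPDATE`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mails_host_day (host TEXT, day TEXT, mails INT,
                             PRIMARY KEY (host, day));
CREATE TABLE postfix_mails (id INTEGER PRIMARY KEY, host TEXT, queued_at TEXT);
INSERT INTO postfix_mails (host, queued_at) VALUES
 ('mta03','2016-04-11 09:00:00'),
 ('mta03','2016-04-11 10:00:00'),
 ('mta03','2016-04-12 08:00:00');
""")

def sync(last_id):
    # Fold newly arrived rows into the per-host-per-day aggregate
    conn.execute("""
        INSERT INTO mails_host_day (host, day, mails)
        SELECT host, date(queued_at), COUNT(*)
        FROM postfix_mails
        WHERE id > ?
        GROUP BY host, date(queued_at)
        ON CONFLICT (host, day) DO UPDATE SET mails = mails + excluded.mails
    """, (last_id,))

sync(0)   # initial load of ids 1..3
conn.execute("INSERT INTO postfix_mails (host, queued_at) "
             "VALUES ('mta03','2016-04-11 23:00:00')")
sync(3)   # incremental sync picks up only id 4
rows = conn.execute(
    "SELECT host, day, mails FROM mails_host_day ORDER BY day").fetchall()
```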
|
`queued_at` is a datetime value. Don't use `LIKE`. That converts it to a string, preventing the use of indexes and imposing a full-table scan. Instead, you want an appropriate index and to fix the query.
The query is:
```
SELECT COUNT(*) as `count`
FROM `postfix_mails`
WHERE `queued_at` >= '2016-04-11' AND `queued_at` < DATE_ADD('2016-04-11', interval 1 day) AND
`host` = 'mta03';
```
Then you want a composite index on `postfix_mails(host, queued_at)`. The `host` column needs to be first.
Note: If your current version is counting 31,000 out of 70,000 emails, then an index will not be much help for that. However, this will make the code more scalable for the future.
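A sketch of the suggested composite index plus sargable range predicate, demonstrated in SQLite (`EXPLAIN QUERY PLAN` output differs across engines, but the idea of an index seek on `(host, queued_at)` carries over):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE postfix_mails "
             "(id INTEGER PRIMARY KEY, host TEXT, queued_at TEXT)")
conn.executemany(
    "INSERT INTO postfix_mails (host, queued_at) VALUES (?, ?)",
    [("mta03", "2016-04-11 10:00:00"),
     ("mta03", "2016-04-12 10:00:00"),
     ("mta01", "2016-04-11 10:00:00")])
conn.execute("CREATE INDEX idx_host_queued ON postfix_mails (host, queued_at)")

query = """
SELECT COUNT(*) FROM postfix_mails
WHERE host = 'mta03'
  AND queued_at >= '2016-04-11' AND queued_at < '2016-04-12'
"""
# The plan's detail column should mention the composite index
plan = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
uses_index = any("idx_host_queued" in row[-1] for row in plan)
(count,) = conn.execute(query).fetchone()
```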
|
Is there any way to optimize this SELECT query any further?
|
[
"",
"mysql",
"sql",
""
] |
I am new to SSRS and I am facing an issue in SSRS 2008. I am hiding and showing a table depending upon the condition, but after hiding its shows empty space/row.
|
It seems the space for the table is always reserved and you can't get rid of it. What you can do instead is display a message like 'no data' when you want to hide the table. Put the message in a text box inside the table and drive its expression from the same condition.
|
Are you saying that when your table is invisible, you're still seeing white space that the table would've occupied if it were visible? Have you tried setting the following in the Report properties?
```
ConsumeContainerWhitespace = True
```
|
SSRS dynamic rows
|
[
"",
"sql",
"reporting-services",
"asp-classic",
"report",
""
] |
I need to return all values for: select...where IN (1,2,3,3,3,1)
I have a table with unique IDs. I have the following query:
```
Select * From Table_1 Where ID IN (1,2,3,3,3,1);
```
So far I'm returning only 3 records for unique values (1,2,3)
I want to return 6 records.
I need to get result set shown on the picture.
[](https://i.stack.imgur.com/penDh.jpg)
|
You cannot do this using the `IN` condition, because `IN` treats your items as a set (i.e. ensures uniqueness).
You can produce the desired result by joining to a `UNION ALL`, like this:
```
SELECT t.*
FROM Table_1 t
JOIN ( -- This is your "IN" list
SELECT 1 AS ID
UNION ALL SELECT 2 AS ID
UNION ALL SELECT 3 AS ID
UNION ALL SELECT 3 AS ID
UNION ALL SELECT 3 AS ID
UNION ALL SELECT 1 AS ID
) x ON x.ID = t.ID
```
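A quick check that joining against the `UNION ALL` list really yields one output row per list entry, duplicates included (sketched in SQLite, with a hypothetical `val` column standing in for the rest of the table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Table_1 (ID INT, val TEXT);
INSERT INTO Table_1 VALUES (1,'a'),(2,'b'),(3,'c');
""")
rows = conn.execute("""
SELECT t.*
FROM Table_1 t
JOIN (SELECT 1 AS ID UNION ALL SELECT 2 UNION ALL SELECT 3
      UNION ALL SELECT 3 UNION ALL SELECT 3 UNION ALL SELECT 1) x
  ON x.ID = t.ID
""").fetchall()
# Six rows come back: one per entry in the list, including repeats
```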
|
You can't do that with the `IN` operator. You can create a temporary table and JOIN:
```
CREATE TABLE #TempIDs
(
ID int
)
INSERT INTO #TempIDs (ID) VALUES (1)
INSERT INTO #TempIDs (ID) VALUES (2)
INSERT INTO #TempIDs (ID) VALUES (3)
INSERT INTO #TempIDs (ID) VALUES (3)
INSERT INTO #TempIDs (ID) VALUES (3)
INSERT INTO #TempIDs (ID) VALUES (1)
Select Table_1.* From Table_1
INNER JOIN #TempIDs t ON Table_1.ID = t.ID;
```
Another (maybe uglier) option is to do a `UNION`:
```
Select * From Table_1 Where ID = 1
UNION ALL
Select * From Table_1 Where ID = 2
UNION ALL
Select * From Table_1 Where ID = 3
UNION ALL
Select * From Table_1 Where ID = 3
UNION ALL
Select * From Table_1 Where ID = 3
UNION ALL
Select * From Table_1 Where ID = 1
```
|
Return all for where IN (1,2,3,3,3,1) clause with duplicates in the IN condition
|
[
"",
"sql",
"sql-server",
"duplicates",
""
] |
Below is my query:
```
SELECT * FROM [TEMPDB].[dbo].[##TEMPMSICHARTFORMATTED]
```
It gives me following result:
[](https://i.stack.imgur.com/eyiu1.jpg)
The problem is that if I do `order by MonthCycle`, the column is a string, so it sorts alphabetically by the month name, but I want an `order by` based on calendar month: `Jan` first, then `Feb`, and so on.
|
```
SELECT * FROM [TEMPDB].[dbo].[##TEMPMSICHARTFORMATTED]
ORDER BY CASE
WHEN LEFT(MONTHCYCLE,3) = 'Jan' then 1
WHEN LEFT(MONTHCYCLE,3) = 'Feb' then 2
WHEN LEFT(MONTHCYCLE,3) = 'Mar' then 3
WHEN LEFT(MONTHCYCLE,3) = 'Apr' then 4
WHEN LEFT(MONTHCYCLE,3) = 'May' then 5
WHEN LEFT(MONTHCYCLE,3) = 'Jun' then 6
WHEN LEFT(MONTHCYCLE,3) = 'Jul' then 7
WHEN LEFT(MONTHCYCLE,3) = 'Aug' then 8
WHEN LEFT(MONTHCYCLE,3) = 'Sep' then 9
WHEN LEFT(MONTHCYCLE,3) = 'Oct' then 10
WHEN LEFT(MONTHCYCLE,3) = 'Nov' then 11
WHEN LEFT(MONTHCYCLE,3) = 'Dec' then 12
ELSE 0 END;
```
|
The question doesn't specify if the report will be spanning several years, but if so I'd use something like this:
```
ORDER BY CONVERT(DATETIME, '1 ' + REPLACE(monthCycle, '''', ' '))
```
|
sql server order by clause
|
[
"",
"sql",
"sql-server",
""
] |
I want to access another SQL Server to import and update data of that server's database through a stored procedure.
For that I created linked server using `sp_addlinkedserver`.
```
EXEC sp_addlinkedserver
@server = N'LINKEDSERVER'
,@srvproduct=N''
,@provider=N'SQLOLEDB'
,@datasrc=N'[DB-Instance.cuprxxxlj.ap-xxxxxxx-x.rds.amazonaws.com]';
```
Then try to access by SQL query
```
SELECT *
FROM [LINKEDSERVER].[DATABASE].[dbo].[TABLE]
```
And getting errors like
> OLE DB provider "SQLNCLI11" for linked server "LINKEDSERVER" returned message "Login timeout expired".
>
> OLE DB provider "SQLNCLI11" for linked server "LINKEDSERVER" returned message "A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online.".
>
> Msg 53, Level 16, State 1, Line 0
> Named Pipes Provider: Could not open a connection to SQL Server [53].
I had created database instance on amazon server and another is also on the amazon server and both are publicly accessible but don't know why I'm getting `Server is not found or not accessible`. Am I missing any server site setting or something else?
Please guide me if I am going in wrong direction.
|
I found the answer. The problem was the instance type: I had created an "Amazon RDS for SQL Server" instance, which currently does not support linked servers.
So I created a new AWS EC2 instance running Windows Server 2012 R2, created the linked server on it, and accessed it remotely. Now it works as I wanted.
Here is detailed [document](https://aws.amazon.com/blogs/apn/running-sql-server-linked-servers-on-aws/) to Running SQL Server Linked Servers on AWS
Thanks.
|
This article explains in detail how to link one AWS Sql Server to another (<https://aws.amazon.com/blogs/apn/running-sql-server-linked-servers-on-aws/>). I am not certain what is wrong in your case, but this has a lot of steps that I do not see in your description.
And the one thing that does jump out at me is that you haven't configured any login security/access control. So unless the target server is open to the world, I would guess that you have to add that as a minimum.
|
Access another SQL Server through stored procedure
|
[
"",
"sql",
"sql-server",
"stored-procedures",
"sql-server-2012",
"linked-server",
""
] |
I have this structure of table Diary:
```
CREATE TABLE Diary
(
[IdDiary] bigint,
[IdDay] numeric(18,0)
);
INSERT INTO Diary ([IdDiary], [IdDay])
values
(51, 1),
(52, 2),
(53, 5);
```
And this other structure for table DiaryTimetable:
```
CREATE TABLE DiaryTimetable
(
[IdDiary] bigint,
[Hour] varchar(50)
);
INSERT INTO DiaryTimetable ([IdDiary], [Hour])
VALUES
(51, '09:00'),
(51, '09:30'),
(51, '10:00'),
(51, '10:30'),
(51, '11:00'),
(51, '11:30'),
(52, '11:00'),
(52, '11:30'),
(52, '12:00'),
(52, '12:30'),
(52, '13:00'),
(52, '13:30'),
(53, '15:00'),
(53, '15:30'),
(53, '16:00'),
(53, '16:30');
```
The table Diary contains an IdDiary and the IdDay is the number of day, for example:
```
Monday --> 1
Tuesday --> 2
Wednesday --> 3
Thursday --> 4
Friday --> 5
Saturday --> 6
Sunday --> 7
```
The table DiaryTimetable contains the iddiary, and the hour.
I want want to get the max hour and the min hour in the table DiaryTimetable for each day appears in the Diary table, If I put this query the result will be only the max hour and the min hour for all the query:
```
select MAX(Hour), MIN(Hour) from DiaryTimetable
inner join Diary on
DiaryTimetable.IdDiary = Diary.IdDiary
```
The result I need would be something like this:
```
IdDiary IdDay Min Hour Max Hour
----- ----- -------- ---------
51 1 09:00 11:30
52 2 11:00 13:30
53 5 15:00 16:30
```
How can I get this, thanks?
[SQL FIDDLE DEMO HERE](http://sqlfiddle.com/#!6/5ca9c9)
|
You use a `GROUP BY` clause:
```
SELECT d.IdDiary, d.IdDay, MIN(Hour), MAX(Hour)
FROM Diary AS d
LEFT JOIN DiaryTimetable AS dt ON d.IdDiary = dt.IdDiary
GROUP BY d.IdDiary, d.IdDay
```
[**Demo here**](http://sqlfiddle.com/#!6/5ca9c9/7)
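The same GROUP BY approach can be checked quickly in SQLite via Python's `sqlite3`, using the question's sample data (MIN/MAX compare the `HH:MM` strings lexically, which works here):

```python
import sqlite3

# A minimal SQLite sketch of the accepted GROUP BY answer; table and column
# names follow the question.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Diary (IdDiary INTEGER, IdDay INTEGER);
INSERT INTO Diary VALUES (51,1),(52,2),(53,5);
CREATE TABLE DiaryTimetable (IdDiary INTEGER, Hour TEXT);
INSERT INTO DiaryTimetable VALUES
 (51,'09:00'),(51,'09:30'),(51,'10:00'),(51,'10:30'),(51,'11:00'),(51,'11:30'),
 (52,'11:00'),(52,'11:30'),(52,'12:00'),(52,'12:30'),(52,'13:00'),(52,'13:30'),
 (53,'15:00'),(53,'15:30'),(53,'16:00'),(53,'16:30');
""")
rows = conn.execute("""
SELECT d.IdDiary, d.IdDay, MIN(dt.Hour), MAX(dt.Hour)
FROM Diary AS d
LEFT JOIN DiaryTimetable AS dt ON d.IdDiary = dt.IdDiary
GROUP BY d.IdDiary, d.IdDay
ORDER BY d.IdDiary
""").fetchall()
```

This returns one min/max pair per diary, matching the expected result in the question.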
|
You just missed `Group by` in your query.
```
SELECT b.IdDiary,a.IdDay, MIN(b.Hour), MAX(b.hour)
from Diary A INNER JOIN DiaryTimetable B ON A.IdDiary=B.IdDiary
GROUP by B.IdDiary ,a.IdDay
```
|
How to get the MAX(Hour) and The MIN(Hour) for each day in this query?
|
[
"",
"sql",
"sql-server",
"max",
"min",
"hour",
""
] |
How to replace characters other than specified without using Regular Expression in oracle sql?
For Example:
```
select replace ('rhythm','^h') from dual;
Expected result:
hh
```
|
Seems like a non-problem since regular expressions exist, but you could do something like:
```
select listagg(c) within group (order by lvl) from
(
select level as lvl,
case when substr('rhythm', level, 1) in ('h', 'H')
then substr('rhythm', level, 1) end as c
from dual
connect by level <= length('rhythm')
);
```
Any characters except `h` and `H` are reported as null by the subquery, and the `listagg()` ignores those nulls. You don't need to specify the characters to exclude, as you would with `translate()`.
I know your example only had lower-case `h` but having an `in()` with a single entry would look odd, so I've included upper-case `H` to match what @Mottor did. It's extensible, anyway, so you can put any characters you're interested in into that list.
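The keep-only-these-characters logic of the query can be sanity-checked in a few lines of Python (the function name here is mine, not part of the query):

```python
def keep_only(s, allowed):
    # Walk the string and keep only characters in the allowed set,
    # preserving their order -- the same effect as the listagg query.
    return "".join(c for c in s if c in allowed)

result = keep_only("rhythm", {"h", "H"})  # -> "hh"
```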
|
You can use [TRANSLATE](https://docs.oracle.com/cd/B19306_01/server.102/b14200/functions196.htm)
```
SELECT TRANSLATE('rhythm', 'hHABCDEFGIJKLMNOPQRSTUVWXYZabcdefgijklmnopqrstuvwxyz', 'hH')
FROM DUAL;
```
And this here is for @Aleksej ;)
```
WITH t1 AS (SELECT 'rhythm bythm dubidah' s, 'ht' c FROM DUAL),
t2 AS (SELECT TRANSLATE (s, '@' || c, '@') st FROM t1),
t3 AS (SELECT LISTAGG (str, '') WITHIN GROUP (ORDER BY str) AS sm
FROM ( SELECT UNIQUE SUBSTR (st, LEVEL, 1) str
FROM t2
CONNECT BY LEVEL <= LENGTH (st)))
SELECT TRANSLATE (s, c || sm, c) r
FROM t1 CROSS JOIN t3;
```
|
How to replace characters other than specified without using Regex in oracle sql?
|
[
"",
"sql",
"oracle",
""
] |
I have values in col1 as below:
[](https://i.stack.imgur.com/1Rw0p.png)
If I'm doing select:
```
select col1 from table1
```
How can I display **X** instead of NULL?
Is it possible then to join another table using the values from col1 (A, B and X)?
Like: table1.col1 = table2.col1?
|
Try,
```
Select x.Col1
FROM Table1 x INNER JOIN Table2 y
ON (x.Col1 = y.Col1 or x.Col1 is NULL and y.Col1 is NULL)
```
|
Use `ISNULL(col1, 'X')`; this will return the value of col1 if it is not null, else it will return the default value 'X'.
```
select ISNULL(col1,'X') from table1
```
|
Display and join table using column with null values
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"join",
"null",
""
] |
I have this query:
```
SELECT *
FROM Schedule_OverdueTasks
WHERE Job_No LIKE (
SELECT DISTINCT AREA_ID
FROM V_CONSTAT_PROJ_DATES
          WHERE AREA_DESC IN ('aaa', 'bbb')) + '%'
```
but this gives me this error:
`Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.`
When I run my query like so, I get no errors, because there is only 1 `AREA_I`
```
SELECT *
FROM Schedule_OverdueTasks
WHERE Job_No LIKE (
SELECT DISTINCT AREA_ID
FROM V_CONSTAT_PROJ_DATES
WHERE AREA_DESC = 'aaa') + '%'
```
I have tried redoing my query like so:
```
SELECT *
FROM Schedule_OverdueTasks
INNER JOIN V_CONSTAT_PROJ_DATES
ON V_CONSTAT_PROJ_DATES.JOB_NUMBER = Schedule_OverdueTasks.Job_No
WHERE V_CONSTAT_PROJ_DATES.AREA_DESC IN ('aaa', 'bbb')
```
But I do not get the same number of results. What am I doing wrong?
|
I would recommend that you use `exists`:
```
SELECT ot.*
FROM Schedule_OverdueTasks ot
WHERE EXISTS (SELECT 1
FROM V_CONSTAT_PROJ_DATES pd
WHERE ot.Job_No LIKE pd.area_id + '%' AND
AREA_DESC IN ('aaa', 'bbb')
);
```
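A SQLite sketch of this EXISTS pattern via Python's `sqlite3`; note that SQLite concatenates strings with `||` rather than T-SQL's `+`, and the sample rows below are invented to illustrate the prefix match:

```python
import sqlite3

# EXISTS with a LIKE-prefix condition; only jobs whose Job_No starts with an
# AREA_ID belonging to area 'aaa' or 'bbb' are returned.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Schedule_OverdueTasks (Job_No TEXT);
INSERT INTO Schedule_OverdueTasks VALUES ('A1-001'),('B2-002'),('C3-003');
CREATE TABLE V_CONSTAT_PROJ_DATES (AREA_ID TEXT, AREA_DESC TEXT);
INSERT INTO V_CONSTAT_PROJ_DATES VALUES ('A1','aaa'),('B2','bbb'),('C3','ccc');
""")
rows = conn.execute("""
SELECT ot.Job_No
FROM Schedule_OverdueTasks ot
WHERE EXISTS (SELECT 1
              FROM V_CONSTAT_PROJ_DATES pd
              WHERE ot.Job_No LIKE pd.AREA_ID || '%'
                AND pd.AREA_DESC IN ('aaa','bbb'))
ORDER BY ot.Job_No
""").fetchall()
```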
|
I can't comment so I need to use the answer section, but `LIKE` takes a scalar value, not a table-valued result.
Meaning you can only compare it to one value.
So, without knowing your data structure, the best I can recommend at the moment is:
```
SELECT *
FROM Schedule_OverdueTasks
INNER JOIN (
SELECT DISTINCT AREA_ID
FROM V_CONSTAT_PROJ_DATES
WHERE AREA_DESC = 'aaa'
) vcpd ON Job_No LIKE Area_ID + '%'
```
Edit:
As per Gordon's answer, an `EXISTS` would be ideal for performance while not changing the result set. The optimizer might interpret this join as an exists anyway, but it is better to write what you mean.
|
SQL Subquery returned more than 1 value. This is not permitted when the subquery
|
[
"",
"sql",
"sql-server",
""
] |
I have this problem, I've got database table that looks like this:
```
"63";"CLINICAL...";"Please...";Blah...;"2014-09-23 13:15:59";37;8
"64";"CLINICAL...";"Please...";Blah...;"2014-09-23 13:22:51";37;9
```
The values that matter are the second to last and last one.
As you can see, the second to last (abstract\_category\_numbers) are the same, but the last differs (version\_numbers)
Here is the problem:
When I make a scope, it returns all of the records, but I need to focus on the one with the maximum version number.
In SQL i would do something like this:
```
SELECT * FROM Category c WHERE
NOT EXISTS (SELECT * FROM Category c1
            WHERE c.version_number < c1.version_number
            AND c.abstract_category_id = c1.abstract_category_id)
```
But i'm totally lost at Ruby, more specifically how to do this kind of select in the scope (I understand it should be a relation)
Thanks
|
We can create a scope to select the `category` with max `version_number` like this:
```
scope :with_max_version_number, -> {
joins("JOIN ( SELECT abstract_category_id, max(version_number) AS max_version
FROM categories
GROUP BY abstract_category_id
) AS temp
ON temp.abstract_category_id = categories.abstract_category_id
AND temp.max_version = categories.version_number"
)
}
```
Basically, we will select the category with the `max_version` value on `temp` table in the subquery.
Btw, I expect the table name is `categories`, you may correct it. Then the final query will be:
```
Category.with_max_version_number
```
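The SQL this scope generates can be sketched in SQLite via Python's `sqlite3`; the `categories` table below contains only the columns needed for the join, with sample ids loosely following the question:

```python
import sqlite3

# Join each category row to the per-group maximum version; only rows whose
# version_number equals that maximum survive.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE categories (id INTEGER, abstract_category_id INTEGER,
                         version_number INTEGER);
INSERT INTO categories VALUES (63,37,8),(64,37,9),(70,40,1);
""")
rows = conn.execute("""
SELECT c.id
FROM categories c
JOIN (SELECT abstract_category_id, MAX(version_number) AS max_version
      FROM categories
      GROUP BY abstract_category_id) t
  ON t.abstract_category_id = c.abstract_category_id
 AND t.max_version = c.version_number
ORDER BY c.id
""").fetchall()
```

Row 63 (version 8) is dropped in favour of row 64 (version 9) for the same abstract category.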
|
Scopes are supposed to return an array of values even if there is only 1 record.
If you want to ALWAYS return 1 value, use a static method instead.
```
def self.max_abstract_category
<your_scope>.max_by{ |obj| obj.version_number }
end
```
|
Rails: Need to scope by max version
|
[
"",
"sql",
"ruby-on-rails",
"scope",
""
] |
How can I make a check constraint that checks if last\_name has the last 2 letters capitalized?
```
alter table clienti modify (nume_client constraint che_d check(nume_client=upper(substr(nume_client, -2, 1))));
```
I did it like this, but I am getting the following error:
> 1. 00000 - "cannot validate (%s.%s) - check constraint violated"
|
Your constraint is comparing the whole name to the upper-cased second-to-last character. It's only looking at one character, because you're supplying the [third argument](http://docs.oracle.com/cd/E11882_01/server.112/e41084/functions181.htm#SQLRF06114) `substring_length` as 1. You need to check the last two characters; so you need to compare only those with the same two characters in upper-case:
```
substr(nume_client, -2) = upper(substr(nume_client, -2))
```
The error you are getting is because you have existing data which does not satisfy the constraint you are trying to add. That may be because your constraint isn't doing what you intended, as it will always return false in your original version.
If you get the same error with the modified check then you either need to remove or correct that data before you add the constraint, or use [the `novalidate` clause](http://docs.oracle.com/cd/E11882_01/server.112/e41084/clauses002.htm#i1010237):
```
check (substr(nume_client, -2) = upper(substr(nume_client, -2))) novalidate
```
Any existing constraint-violating rows will remain untouched, but you won't be able to add new rows that violate the constraint, or update existing rows to still-invalid values.
You can use your `alter table modify (column...)` syntax, or the simpler syntax Gordon Linoff showed; they do the same thing ultimately.
|
You might already have records there in table, that do not pass the check constraint. If it's OK to have the check only for future transactions you can use NOVALIDATE clause to constraint. E.g.
```
CREATE TABLE names (last_name VARCHAR2(100));
--Table created
INSERT INTO names VALUES ('Rambo');
--1 row inserted
INSERT INTO names VALUES ('GatES');
--1 row inserted
alter table names add constraint chk_che_d
check (SUBSTR(last_name,-2,1) = upper(substr(last_name, -2, 1))) NOVALIDATE;
--Table altered
INSERT INTO names VALUES ('Travolta');
--ORA-02290: check constraint (RO.CHK_CHE_D) violated
INSERT INTO names VALUES ('SkywalkER');
--1 row inserted
```
|
Check constraint last_name capitalized
|
[
"",
"sql",
"oracle",
""
] |
I'm trying to do something similar to this:
```
SELECT SUM(
CASE WHEN (<some_condition>
AND EXISTS(SELECT 1
FROM <tableA> as tA
WHERE tA.Id = tB.tAId and
<some_other_condition>
)
) THEN 1 ELSE 0 END
) as <column_name>
FROM <tableB> as tB
```
I need to avoid the use of joins to achieve what I need, because I don't want to count/sum duplicates returned by the results I get through join clauses in this case. When I try to execute my query, I get the following error message:
> "cannot perform an aggregate function on an expression containing an aggregate or a subquery"
I don't know what I am doing wrong.
Any help appreciated.
> EDIT: I added a possible condition on the WHERE statement only to show I need to relate the columns of the outside table (tableB) with the inside table (tableA)
Here's an example of tableA and tableB and what happens when I JOIN both:
```
tableA: tableB: tableA JOIN tableB
Id Id tAId Time Id(A) Id(B) taId Time
1 1 1 5 1 1 1 5
2 2 2 6 2 2 2 6
3 3 2 10 2 3 2 10
4 4 3 11 3 4 3 11
5 5 4 14 4 5 4 14
6 4 13 4 6 4 13
7 4 12 4 7 4 12
8 5 10 5 8 5 10
```
So, when I try to count/sum some property (i.e. tableA.Id) from tableA using the results of the JOIN operation as input, I end up counting duplicates. Here's why I can't use JOIN unless I can select DISTINCT elements. I liked the answer that shows this so I'm trying to implement that with no errors.
(Note: I really appreciate all the other answers since they show possible solutions that can fit my situation)
|
```
SELECT SUM(
CASE WHEN (<some_condition>
AND tA.Id IS NOT NULL
) THEN 1 ELSE 0 END
) as <column_name>
FROM <tableB> as tB
JOIN (SELECT DISTINCT ta.Id
FROM <tableA>) as tA ON tA.Id = tB.Id AND <some_other_condition>
```
|
Since, as the message says, you cannot use a subquery in an aggregate function (`SUM()`), consider using a join similar to the following:
```
SELECT SUM(
CASE WHEN (<some_condition>
AND tA.<matching_column> IS NOT NULL
) THEN 1 ELSE 0 END
) as <column_name>
FROM <tableB> tB
LEFT JOIN <tableA> tA
ON tA.Id = tB.Id
WHERE tA.<some_other_condition>
```
|
EXISTS inside CASE inside AGGREGATE FUNCTION
|
[
"",
"sql",
"case",
"aggregate-functions",
"exists",
""
] |
I have two tables, as follows:
**People**:
```
First_Name Last_Name
Bill Smith
David Williams
Fred Jones
Freda Jones
Jane Smith
John Smith
Peter Williams
Sally Williams
```
**Pets**:
```
Species Title Owner
Dog Fido Jones
Cat Tibs Jones
Dog Jock Williams
```
I know it's a weird database design, but please don't tell me to redesign the database. It's an example that represents the real problem, which is more complicated and can't be redesigned.
I am trying to formulate a query that will return exactly one row for each person, and a column containing the title of their Cat or null if they do not own a cat (I used `Title` because `Name` is a reserved word and Access complained).
The nearest I have come up with is:
```
SELECT People.*, Pets.Title
FROM People
LEFT JOIN Pets ON People.Last_Name = Pets.Owner
WHERE Pets.Species IS NULL OR Pets.Species = "Cat"
```
However this doesn't list any rows for the Williams family because they own a dog.
I have tagged this as MS Access because I am actually using Access 97 (don't ask!), but, other than Access's rather limited Join facilities, it should be a general SQL question.
|
EDIT: The left join syntax in Access causes trouble, which means that for Access we need to involve a subquery. It will look something like this (untested):
```
SELECT People.*, Pets.Title
FROM People LEFT JOIN
[SELECT Pets.* FROM Pets WHERE Pets.Species = 'Cat']. AS Pets
ON People.Last_Name = Pets.Owner
```
-- Original answer --
What you seem to want is a regular left join with an additional condition that the pet needs to be a cat;
```
SELECT People.*, Pets.Title
FROM People LEFT JOIN Pets
ON People.Last_Name = Pets.Owner AND Pets.Species = 'Cat'
```
The reason for *not* putting the cat condition in a `WHERE` clause is that a where clause eliminates all non matches, while putting it in the `ON` clause makes the result NULL if it can't find a match (just as you describe your desired result)
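A small SQLite sketch of that difference via Python's `sqlite3`: with the species filter in the ON clause, non-cat-owners still appear, just with a NULL title (a trimmed-down version of the question's sample data):

```python
import sqlite3

# Filter in the ON clause keeps every person; only the joined Title is
# NULLed out when no cat matches.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE People (First_Name TEXT, Last_Name TEXT);
INSERT INTO People VALUES ('Fred','Jones'),('Peter','Williams'),('Bill','Smith');
CREATE TABLE Pets (Species TEXT, Title TEXT, Owner TEXT);
INSERT INTO Pets VALUES ('Dog','Fido','Jones'),('Cat','Tibs','Jones'),
                        ('Dog','Jock','Williams');
""")
rows = conn.execute("""
SELECT p.Last_Name, pt.Title
FROM People p
LEFT JOIN Pets pt ON p.Last_Name = pt.Owner AND pt.Species = 'Cat'
ORDER BY p.Last_Name
""").fetchall()
```

The Williams row survives with a NULL title; with the filter in a WHERE clause it would be eliminated.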
|
> this doesn't list any rows for the Williams family because they own a Dog
That's because of the `WHERE` clause:
```
WHERE Pets.Species IS NULL OR Pets.Species = "Cat"
```
If `Pets.Species` is `"Dog"` then this would evaluate to false, so those records are not shown. First remove the clause entirely:
```
SELECT People.*, Pets.Title
FROM People
LEFT JOIN Pets ON People.Last_Name = Pets.Owner
```
Now you're getting the *records* you want, but not specifically the *data* you want. The `Title` column is showing any value, and you only want it to show `"Cat"` or `NULL`. You can adjust the `JOIN` to achieve that:
```
SELECT People.*, Pets.Title
FROM People
LEFT JOIN Pets ON People.Last_Name = Pets.Owner
AND Pets.Species = 'Cat'
```
This will filter the *joined* table, but not the entire result set.
|
How to get the Joins right in a SQL query
|
[
"",
"sql",
"ms-access",
""
] |
I have a column of times in 24-hour format and I need to convert them to 12-hour format. Please help.
```
Start time
174300
035800
023100
```
The result should be
```
Start time
05.43 PM
03.58 AM
02.31 AM
```
|
Use `STUFF` function to convert string to `Time` format
```
SELECT CONVERT(VARCHAR,CAST(STUFF(STUFF(ColumnName,3,0,':'),6,0,':') AS TIME),100)
```
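Outside of T-SQL, the same conversion is easy to sanity-check with Python's `datetime`; the dot-separated `hh.MM AM/PM` format follows the question's expected output:

```python
from datetime import datetime

def to_12h(hhmmss):
    # Parse the six-digit 24-hour string and format as hh.MM AM/PM.
    return datetime.strptime(hhmmss, "%H%M%S").strftime("%I.%M %p")

results = [to_12h(s) for s in ("174300", "035800", "023100")]
# -> ['05.43 PM', '03.58 AM', '02.31 AM']
```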
|
Using one of the examples above, the following will work.
You need to split the data into hours and minutes, cast it to the time format, then convert it to the relevant type:
```
declare @data int
set @data = 174300
select convert(VARCHAR(15),cast(cast(left(@data, 2 )as varchar(2)) + ':' + cast(substring(cast(@data as nvarchar(6)), 3,2 )as varchar(2) ) as time),100)
```
|
how to convert the 24 hr to 12 hr in sql
|
[
"",
"sql",
"sql-server",
"database",
""
] |
I have four tables: `Books`, `Magazine`, `Member` and `Transaction` and each has a column `Title`.
**Books** table
```
BookId, Title, author, edition, no_of_books
1, Harry Potter, Jk rowling, 5, 11
```
**Magazine** table:
```
MagazineId, Title, Publisher, edition, no_of_magazine
1, filmfare, bollywood, 5, 8
```
**Member** table:
```
MemberId, name, address, contact, email
4, Arjun, NO 56 park street, 071487845, arjun56@gmail.com
8, khan, no 21 new lane pune, 07487547, khan@gmail.com
```
When members borrow a `Book` or a `Magazine`, the `Transaction` table keeps records (`Trans_item_id` is `BookId` or `MagazineId`)
For example:
**Transaction** table:
```
Trans_Id, Trans_Mem_Id, Trans_item_Id, Issue_date, Receive_date,Category
1, 4, 1,06-04-2016, 10-04-2016,book
2, 8, 1,08-04-2016, 11-04-2016,magazine
```
So how can I select data from the database using SQL and display it in this form, e.g.:
```
Tran_id, Title, member name, issue date, receive date
```
Thank you.
|
Try this:
```
SELECT t.trans_id,
       case when t.category = 'book' then b.title else m.title end AS title,
       u.name,
       t.issue_date,
       t.receive_date
FROM transaction t
INNER JOIN member u
        ON t.trans_mem_id = u.memberid
LEFT JOIN books b
       ON t.trans_item_id = b.bookid
      AND t.category = 'book'
LEFT JOIN magazine m
       ON t.trans_item_id = m.magazineid
      AND t.category = 'magazine'
```
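A SQLite sketch of the two-LEFT-JOIN idea via Python's `sqlite3`. The transaction table is named `txn` here because TRANSACTION is a reserved word in SQLite, only the columns needed for the join are kept, and the category values follow the sample data (`'book'` / `'magazine'`):

```python
import sqlite3

# One LEFT JOIN per item table, each guarded by the row's category; COALESCE
# picks whichever title matched.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE books (BookId INTEGER, Title TEXT);
INSERT INTO books VALUES (1,'Harry Potter');
CREATE TABLE magazine (MagazineId INTEGER, Title TEXT);
INSERT INTO magazine VALUES (1,'filmfare');
CREATE TABLE member (MemberId INTEGER, name TEXT);
INSERT INTO member VALUES (4,'Arjun'),(8,'khan');
CREATE TABLE txn (Trans_Id INTEGER, Trans_Mem_Id INTEGER,
                  Trans_Item_Id INTEGER, Category TEXT);
INSERT INTO txn VALUES (1,4,1,'book'),(2,8,1,'magazine');
""")
rows = conn.execute("""
SELECT t.Trans_Id, COALESCE(b.Title, m.Title) AS Title, u.name
FROM txn t
JOIN member u ON t.Trans_Mem_Id = u.MemberId
LEFT JOIN books b ON t.Trans_Item_Id = b.BookId AND t.Category = 'book'
LEFT JOIN magazine m ON t.Trans_Item_Id = m.MagazineId AND t.Category = 'magazine'
ORDER BY t.Trans_Id
""").fetchall()
```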
|
```
select
    t.trans_item_id as tran_id,
    coalesce(b.title, ma.title),
    m.name, t.issue_date, t.receive_date
from
    transaction t
inner join
    member m on m.memberid = t.trans_mem_id
left join
    books b on t.trans_item_id = b.bookid
left join
    magazine ma on t.trans_item_id = ma.magazineid
```
|
How to get data from two table that using same primary key
|
[
"",
"sql",
"database",
""
] |
I have to fetch duplicate records if the date difference between the duplicates is more than 96 hours (4 days); otherwise ignore the duplicate entry and return the record with the first entry, i.e. the oldest date. My table looks like this:
```
ID SDATE
----------- -----------------------
1 2016-04-13 14:54:18.983
1 2016-04-08 12:55:47.907
2 2016-04-13 14:54:18.983
3 2016-04-13 14:54:18.983
4 2016-04-13 14:54:18.983
5 2016-04-13 14:54:18.983
5 2016-04-11 12:55:47.907
6 2016-04-13 14:54:18.983
6 2016-04-13 14:54:18.983
```
Expected result:
```
ID SDATE
----------- -----------------------
1 2016-04-13 14:54:18.983
1 2016-04-08 12:55:47.907
2 2016-04-13 14:54:18.983
3 2016-04-13 14:54:18.983
4 2016-04-13 14:54:18.983
5 2016-04-11 12:55:47.907
6 2016-04-13 14:54:18.983
```
I tried the following query but it is not working.
```
WITH tt AS (
SELECT 1 as ID, GETDATE() as SDATE
UNION ALL
SELECT 1 as ID, '2016-04-09 12:55:47.907' as SDATE
UNION ALL
SELECT 2 as ID, GETDATE() as SDATE
UNION ALL
SELECT 3 as ID, GETDATE() as SDATE
UNION ALL
SELECT 4 as ID, GETDATE() as SDATE
UNION ALL
SELECT 5 as ID, GETDATE() as SDATE
UNION ALL
SELECT 5 as ID, '2016-04-11 12:55:47.907' as SDATE
UNION ALL
SELECT 6 as ID, GETDATE() as SDATE
UNION ALL
SELECT 6 as ID, GETDATE() as SDATE
)
SELECT MIN(SDATE) as SDATE, ID FROM tt as tbl
GROUP BY ID, DATEADD(HH, DATEDIFF(HH,0,SDATE) + 96,0)
```
|
The below query returns the expected result, added the inline comments:
```
-- Simply grouping each ID and get unique row with minimum date
SELECT MIN(SDATE) [SDate], ID
FROM tt
GROUP BY ID
UNION
-- Get the row with each ID's difference is more than 96 hours
SELECT D.MaxDate [SDate], D.ID
FROM (
SELECT MIN(SDATE) [MinDate], MAX(SDATE) [MaxDate], ID
FROM tt
GROUP BY ID
) D
WHERE DATEDIFF(HH, D.MinDate, D.MaxDate) >= 96
```
|
```
declare @table table
(
ID int, SDATE datetime)
insert into @table
(
ID ,SDATE )
values
(1,'2016-04-13 14:54:18'),
(1,'2016-04-08 12:55:47'),
(2,'2016-04-13 14:54:18'),
(3,'2016-04-13 14:54:18'),
(4,'2016-04-13 14:54:18'),
(5,'2016-04-13 14:54:18'),
(5,'2016-04-11 12:55:47'),
(6,'2016-04-13 14:54:18'),
(6,'2016-04-13 14:54:18')
;with cte as
(
select id,min(sdate) mindate,max(sdate) maxdate, datediff(dd,min(sdate),max(sdate)) daysdiff,count(*) as Dups
from @table
group by id
)
select cte.id, t.sdate
from cte
join @table t on t.id = cte.id
where cte.dups > 1 and cte.daysdiff > 4
union all
select cte.id,
mindate
from cte
where (cte.dups > 1 and cte.daysdiff <= 4) or
cte.dups = 1
```
|
how to return duplicate rows if date difference is equal or more than 4 days or 96 hours
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"group-by",
""
] |
I have the following data:
```
CREATE TABLE offer (
id INTEGER,
product_id VARCHAR,
created_at TIMESTAMP,
amount INTEGER,
PRIMARY KEY (id));
INSERT INTO offer (id, product_id, created_at, amount)
VALUES
(1, '123', '2016-03-12', 990),
(2, '136', '2016-02-01', 1056),
(3, '111', '2016-01-01', 1000),
(4, '123', '2016-01-02', 500);
```
And I would like to get rows with the highest amount per product\_id.
If I take these previous rows I would like to get IDs: 2, 3 and 1 because row 1 contains a greater amount than row 4.
```
id | product_id | created_at | amount
----+------------+---------------------+--------
2 | 136 | 2016-02-01 00:00:00 | 1056
3 | 111 | 2016-01-01 00:00:00 | 1000
1 | 123 | 2016-03-12 00:00:00 | 990
```
I tried something like that but I'm not sure about it:
```
SELECT id, product_id, created_at, amount
FROM offer
ORDER BY 4, 2 DESC, 1, 3
```
And I can't try it live yet.
|
If I understand correctly, you can use `distinct on`:
```
select distinct on (product_id) o.*
from offers o
order by product_id, amount desc;
```
`distinct on` is a Postgres extension. In this case, it returns one row per `product_id`. The particular row is the one with the largest amount, as determined by `amount desc`.
|
You could use `RANK()`:
```
WITH cte AS
(
SELECT * , RANK() OVER (PARTITION BY product_id ORDER BY amount DESC) AS rnk
FROM Offers
)
SELECT id, product_id, created_at, amount
FROM cte
WHERE rnk = 1
ORDER BY amount DESC;
```
`LiveDemo`
Keep in mind that if there are two or more rows for a `product_id` with different dates but the same highest `amount`, they will all be returned.
Using positional in `ORDER BY` is not best practice.
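For anyone without Postgres at hand, the `RANK()` approach can be checked with SQLite 3.25+ (via Python's `sqlite3`), which also supports window functions, using the question's sample data:

```python
import sqlite3

# Rank offers within each product_id by amount descending; rank 1 is the
# greatest-amount row per product.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE offer (id INTEGER PRIMARY KEY, product_id TEXT,
                    created_at TEXT, amount INTEGER);
INSERT INTO offer VALUES
 (1,'123','2016-03-12',990),
 (2,'136','2016-02-01',1056),
 (3,'111','2016-01-01',1000),
 (4,'123','2016-01-02',500);
""")
rows = conn.execute("""
WITH cte AS (
  SELECT *, RANK() OVER (PARTITION BY product_id ORDER BY amount DESC) AS rnk
  FROM offer
)
SELECT id, product_id, amount FROM cte WHERE rnk = 1 ORDER BY amount DESC
""").fetchall()
```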
|
PostgreSQL select max from rows
|
[
"",
"sql",
"postgresql",
"greatest-n-per-group",
""
] |
When building a stored procedure that inserts into a simple link table, should it check if the FK's exist and gracefully return an error or just let SQL throw an exception?
* What is best practice?
* What is most efficient?
Just in case someone doesn't understand my question:
* `Table A`
* `Table B`
* `Table AB`
Should I do:
```
IF EXISTS(SELECT 1 FROM A WHERE Id = @A) AND EXISTS(SELECT 1 FROM B WHERE Id = @B)
BEGIN
INSERT AB (AId,BId) VALUES (@A, @B)
END
ELSE
--handle gracefully, return error code or something
```
or
```
INSERT AB (AId,BId) VALUES (@A, @B)
```
and let SQL throw an exception
Thanks
|
**If the tables are under your control, there is no reason to perform an extra check.** Just assume they are set up correctly, and let SQL handle any error. Constantly checking that you have indeed done what you intended to do is overly defensive programming that adds unnecessary complexity to your code.
For example, you wouldn't write code like this:
```
i = 1;
if (i != 1)
{
print "Error: i is not 1!";
}
```
And I view this situation as similar.
**If the tables are not under your control it may be useful to handle the error gracefully.** For example, if this procedure can run on an arbitrary set of tables created by the user, or if it will be distributed to external users who are required to set up the tables in their own database, you may want to add some custom error handling. The purpose of this would be to give the user a clearer description of what went wrong.
|
As a basic concept, validating values before a potentially error raising code is a good thing. However, in this case, there could be (at least theoretically) a change in table a or table b between the exists checks and the insert statement, that would raise the fk violation error.
I would do something like this:
```
BEGIN TRY
INSERT AB (AId,BId) VALUES (@A, @B)
SELECT NULL As ErrorMessage
END TRY
BEGIN CATCH
SELECT ERROR_MESSAGE() AS ErrorMessage
END CATCH
```
The [`ERROR_MESSAGE()`](https://msdn.microsoft.com/en-us/library/ms190358.aspx) function returns the error that was rasied in the try block.
Then in the executing code you can simply check if the returned error message is null. If it is, you know the insert was successful. If not, you can handle this exception how ever you see fit.
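An analogous let-it-fail-and-catch pattern from application code, sketched with Python's `sqlite3` (SQLite needs `PRAGMA foreign_keys = ON` to enforce FKs; the schema is a minimal A/B/AB link table as in the question):

```python
import sqlite3

# Insert without pre-checking the FKs; catch the engine's error instead.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE A (Id INTEGER PRIMARY KEY);
CREATE TABLE B (Id INTEGER PRIMARY KEY);
CREATE TABLE AB (AId INTEGER REFERENCES A(Id), BId INTEGER REFERENCES B(Id));
INSERT INTO A VALUES (1); INSERT INTO B VALUES (2);
""")

def try_insert(a, b):
    try:
        conn.execute("INSERT INTO AB (AId, BId) VALUES (?, ?)", (a, b))
        return None                      # success: no error message
    except sqlite3.IntegrityError as e:
        return str(e)                    # failure: surface the engine's error

ok = try_insert(1, 2)
bad = try_insert(1, 99)                  # 99 does not exist in B
count = conn.execute("SELECT count(*) FROM AB").fetchone()[0]
```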
|
Stored procedure best practice? Should they check if foreign keys exist before inserting?
|
[
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
I have a query :
```
insert into xx_tab_abc
(wr_flag,actual_term_date,person_num)
SELECT BOL_FLAG, ACTUAL_TERM_DATE, PERSON_NUMBER FROM WR_TAB
```
Can I modify this select query in such a way that if `actual_term_date` in `wr_tab` is not null then 'Y' is inserted into `wr_flag`, else no value is passed?
|
```
SELECT case when ACTUAL_TERM_DATE is not null
then 'y'
else BOL_FLAG
end,
ACTUAL_TERM_DATE, PERSON_NUMBER
FROM WR_TAB
```
|
Are you saying that you don't want to insert anything if `ACTUAL_TERM_DATE` is null? If so, you can just do this
```
insert into xx_tab_abc (wr_flag,
actual_term_date,
person_num
)
SELECT 'Y',
ACTUAL_TERM_DATE,
PERSON_NUMBER
FROM WR_TAB
WHERE ACTUAL_TERM_DATE IS NOT NULL
```
|
Insert into one table to another and use case in select
|
[
"",
"sql",
"oracle",
"sql-insert",
""
] |
I'm having trouble figuring out the evaluation order of boolean predicates in SQL.
Consider the following selection predicate over our (imaginary) car database:
```
WHERE
make='Honda' AND model='Accord' OR make='VW' AND model='Golf';
```
I know that AND has precedence over OR, however I'm confused if this expression would be evaluated as follows:
```
((make='Honda' AND model='Accord') OR make='VW') AND model='Golf';
```
or as:
```
(make='Honda' AND model='Accord') OR (make='VW' AND model='Golf');
```
Or something completely different?!
Your help if very much appreciated.
|
This should be evaluated like
```
WHERE
(make='Honda' AND model='Accord' ) OR (make='VW' AND model='Golf');
```
**Explanation:** In SQL Server, [`AND` has precedence over `OR`](https://msdn.microsoft.com/en-in/library/ms190276.aspx), so you can imagine the AND parts being inside parentheses and evaluated first, with the ORs evaluated after them.
Details based on your comments
> AND has precedence over OR, it's something I already mentioned in my post. This precedence is left to right, therefore it is still not clear which evaluation order takes place here: ((make='Honda' AND model='Accord') OR make='VW') AND model='Golf'; or (make='Honda' AND model='Accord' ) OR (make='VW' AND model='Golf'); –
**L2R parsing**
1. `WHERE (make='Honda' AND model='Accord') OR make='VW' AND model='Golf';`
because first all ANDs and leftmost
2. `WHERE`result1`OR (make='VW' AND model='Golf');`
because first all ANDs
3. `WHERE`result1`OR`result2`;`
finally OR
**R2L parsing**
1. `WHERE make='Honda' AND model='Accord' OR (make='VW' AND model='Golf');`
because first all ANDs and rightmost AND first
2. `WHERE (make='Honda' AND model='Accord') OR`result1`;`
because first all ANDs over OR
3. `WHERE`result2`OR`result1`;`
finally OR
So in both cases the condition evaluates to
```
WHERE
(make='Honda' AND model='Accord' ) OR (make='VW' AND model='Golf');
```
So I evaluated all three expressions in below query
```
-- create table t(make varchar(100), model varchar(100))
-- insert into t values ('Honda','Golf'),('Honda','Accord'),('VW','Golf'),('VW','Accord')
select *,
case when make='Honda' AND model='Accord' OR make='VW' AND model='Golf' then 1 else 0 end as result,
case when (make='Honda' AND model='Accord') OR (make='VW' AND model='Golf') then 1 else 0 end as result1,
case when ((make='Honda' AND model='Accord') OR make='VW' ) AND model='Golf' then 1 else 0 end as result2
from t
;
```
And the results show that `result = result1` all the time, proving that it is evaluated as
```
WHERE
(make='Honda' AND model='Accord' ) OR (make='VW' AND model='Golf');
```
**[See sqlfiddle demo](http://sqlfiddle.com/#!3/9126b/3)**
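The same precedence claim can be checked outside SQL: Python's `and`/`or` have the same relative precedence as SQL's AND/OR, so comparing the different readings over every truth assignment of the four comparisons confirms it:

```python
from itertools import product

# AND binds tighter than OR: the bare expression matches the (a AND b) OR
# (c AND d) grouping on every assignment, while the left-grouped reading
# ((a AND b) OR c) AND d differs on some assignments.
cases = list(product([False, True], repeat=4))
all_match = all((a and b or c and d) == ((a and b) or (c and d))
                for a, b, c, d in cases)
differs = any((((a and b) or c) and d) != ((a and b) or (c and d))
              for a, b, c, d in cases)
```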
|
If you are unsure about the evaluation order (me too, btw), you can always set parentheses as needed. So you "define" your evaluation order yourself, and even if the SQL interpreter changes its evaluation behaviour, the result will still be the same.
I know that this does not really answer your question, but why bother with evaluation order, if you can define it yourself with the use of ()?
|
Order of evaluation of boolean expressions in SQL
|
[
"",
"sql",
""
] |
For some reason I have a hard time grasping joins and this one should be very simple with the knowledge that I have in SQL.
Anyway, I have 2 tables. We will call them TableA and TableB. One of the columns in TableA is "ID". TableB only consists of the column "ID". I want to return all rows in TableA whose ID is present in TableB.
I know this should be very simple to figure out, but my brain doesn't want to work today.
|
You can do this using an `EXISTS`:
```
Select A.*
From TableA A
Where Exists
(
Select *
From TableB B
Where A.Id = B.Id
)
```
You can also use a `JOIN` if you wish, but depending on your data, you may want to couple that with a `SELECT DISTINCT`:
```
Select Distinct A.*
From TableA A
Join TableB B On A.Id = B.Id
```
One thing to keep in mind is that the `ID` of `TableA` is not necessarily related to the `ID` of `TableB`.
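A quick SQLite sketch (via Python's `sqlite3`, with invented sample rows) of why the `DISTINCT` matters: `EXISTS` returns each TableA row at most once even when TableB contains duplicate IDs, while a plain JOIN multiplies rows:

```python
import sqlite3

# TableB deliberately contains a duplicate ID to show the difference.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TableA (ID INTEGER, Val TEXT);
INSERT INTO TableA VALUES (1,'a'),(2,'b'),(3,'c');
CREATE TABLE TableB (ID INTEGER);
INSERT INTO TableB VALUES (1),(1),(3);
""")
exists_rows = conn.execute("""
SELECT A.ID FROM TableA A
WHERE EXISTS (SELECT * FROM TableB B WHERE A.ID = B.ID)
ORDER BY A.ID
""").fetchall()
join_rows = conn.execute("""
SELECT A.ID FROM TableA A JOIN TableB B ON A.ID = B.ID ORDER BY A.ID
""").fetchall()
```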
|
this should work
```
SELECT B.ID
FROM TableA A
JOIN TableB B
ON (A.ID=B.ID)
WHERE A.ID=B.ID
```
|
Join of Two Tables where Data Matches in One Column
|
[
"",
"sql",
"sql-server",
""
] |
I have a table with a UserID and ImageID Column. I want to select all the UserId's with only one ImageID. I have the code below.
```
Select UserID From AF.UserImageTable
Where Count(ImageId) = 1
```
When I run the code I get the error.
```
An aggregate may not appear in the WHERE clause unless it is in a subquery contained in a HAVING clause or a select list, and the column being aggregated is an outer reference.
```
The table looks like this
```
_________________________
: ImageID : UserID :
: 12345 : 10002 :
: 56789 : 10002 :
: 29292 : 10002 :
: 12345 : 10001 :
: 56789 : 10001 :
: 56789 : 10003 :
________________________
```
If I run the Query on the table above I'm trying to get it to return `10003`
|
When you use the below GROUP BY clause together with the HAVING clause, the GROUP BY clause divides the rows into groups (in this case by UserID) and aggregates their values, and then the HAVING clause eliminates the groups that you don't need.
```
Select UserID,count(ImageId) From AF.UserImageTable
group by UserID having Count(ImageId) = 1
```
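A quick SQLite check (via Python's `sqlite3`) of the GROUP BY/HAVING approach on the question's sample data:

```python
import sqlite3

# Only users with exactly one image survive the HAVING filter.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE UserImageTable (ImageID INTEGER, UserID INTEGER);
INSERT INTO UserImageTable VALUES
 (12345,10002),(56789,10002),(29292,10002),
 (12345,10001),(56789,10001),(56789,10003);
""")
rows = conn.execute("""
SELECT UserID FROM UserImageTable
GROUP BY UserID HAVING COUNT(ImageID) = 1
""").fetchall()
```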
|
try :
```
Select UserID
From AF.UserImageTable
Group By UserID
Having Count(ImageId) = 1
```
|
Select ID Where only one value exists for that ID
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I have this condition in my stored procedure to determine which `WHERE` clause to use:
```
IF (@communityDesc = 'All Areas')
BEGIN
WHERE V_CONSTAT_ACTUAL_DATES.AREA_DESC IN (SELECT name
FROM dbo.splitstring(@communityDesc))
AND V_CONSTAT_ACTUAL_DATES.DATE_TO_END >= GETDATE()
END
ELSE
BEGIN
WHERE V_CONSTAT_ACTUAL_DATES.DATE_TO_END >= GETDATE()
END
ORDER BY V_CONSTAT_ACTUAL_DATES.DATE_TO_END
```
but I get a long list of errors:
> Msg 156, Level 15, State 1, Procedure GetProductionSchedule, Line 256
> Incorrect syntax near the keyword 'WHERE'.
>
> Msg 156, Level 15, State 1, Procedure GetProductionSchedule, Line 256
> Incorrect syntax near the keyword 'AND'.
What am I doing wrong?
|
Or just put all of the logic into a single WHERE
```
SELECT *
FROM [Table]
WHERE V_CONSTAT_ACTUAL_DATES.DATE_TO_END >= GETDATE()
AND (@communityDesc = 'All Areas'
OR V_CONSTAT_ACTUAL_DATES.AREA_DESC IN (SELECT name
FROM dbo.splitstring(@communityDesc)))
```
I'm not sure you have your logic right: you probably want to filter by the split of @communityDesc only when it's *not* equal to `All Areas`. I've updated my answer to reflect what I mean.
|
You can't separate out a query by a conditional. You'd have to do something like:
```
if(@communityDesc = 'All Areas')
BEGIN
SELECT * FROM Table
WHERE V_CONSTAT_ACTUAL_DATES.AREA_DESC IN
(select name from dbo.splitstring(@communityDesc))
AND V_CONSTAT_ACTUAL_DATES.DATE_TO_END>=GETDATE()
ORDER BY V_CONSTAT_ACTUAL_DATES.DATE_TO_END
END
else
BEGIN
SELECT * FROM Table
WHERE V_CONSTAT_ACTUAL_DATES.DATE_TO_END>=GETDATE()
ORDER BY V_CONSTAT_ACTUAL_DATES.DATE_TO_END
END
```
Your other option would be to conditionally build the query:
```
DECLARE @Query NVARCHAR(1000)
SET @Query = 'SELECT * FROM TABLE '
IF (@communityDesc = 'All Areas')
BEGIN
    SET @Query = @Query +
        'WHERE V_CONSTAT_ACTUAL_DATES.AREA_DESC IN (SELECT name FROM dbo.splitstring(@communityDesc)) AND V_CONSTAT_ACTUAL_DATES.DATE_TO_END >= GETDATE() '
END
ELSE
BEGIN
    SET @Query = @Query +
        'WHERE V_CONSTAT_ACTUAL_DATES.DATE_TO_END >= GETDATE() '
END
SET @Query = @Query + 'ORDER BY V_CONSTAT_ACTUAL_DATES.DATE_TO_END'
-- sp_executesql needs an NVARCHAR statement, and @communityDesc must be passed in as a parameter
EXEC sp_executesql @Query, N'@communityDesc VARCHAR(100)', @communityDesc = @communityDesc
```
|
SQL Server : IF condition not working in stored procedure
|
[
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
I have written PL/SQL code to print only the records with id greater than 4. I used a goto statement in the program, but its label is not detected from the exception handler. Please help me continue the program after encountering the exception.
My code is
```
declare
cursor cur is select * from student;
c student%rowtype;
lt3 exception;
PRAGMA
exception_init(lt3,-77);
begin
open cur;
loop
<<backup>>
fetch cur into c;
if c.id < 4 then
raise lt3;
else
dbms_output.put_line(c.name);
end if;
end loop;
close cur;
exception
when lt3 then
dbms_output.put_line('Exception encountered');
goto backup;
end;
/
```
Where should I make the change?
I got the error at
```
ERROR at line 24:
ORA-06550: line 24, column 7:
PLS-00201: identifier 'BACKUP' must be declared
ORA-06550: line 24, column 2:
PL/SQL: Statement ignored
```
|
When an exception propagates out of a cursor loop, the cursor is closed, and a `goto` cannot branch from the exception handler back into the loop (the label is out of scope there), so you cannot achieve the expected behaviour this way.
From [Doc](https://docs.oracle.com/cd/B19306_01/appdev.102/b14261/goto_statement.htm)
> If you use the GOTO statement to exit a cursor FOR loop prematurely,
> the cursor is closed automatically. The cursor is also closed
> automatically if an exception is raised inside the loop.
One thing you can do is to use [`continue`](http://docs.oracle.com/cd/E11882_01/appdev.112/e25519/continue_statement.htm#LNPLS01360) or `exit` inside the loop to control the execution:
```
open cur;
loop
fetch cur into c;
exit when cur%notfound;
if c.id < 4 then
continue;
else
dbms_output.put_line(c.name);
end if;
end loop;
close cur;
```
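Another option worth noting: a cursor FOR loop removes the explicit open/fetch/close (and the end-of-data test) entirely. A minimal sketch against the same `student` table, with the original `c.id < 4` test inverted:
```
begin
  for c in (select * from student) loop
    if c.id >= 4 then
      dbms_output.put_line(c.name);
    end if;
  end loop;
end;
/
```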
|
You should avoid using the goto statement in any code if you can.
The code below should achieve what you are trying to do by handling the exception inside the loop body, so the loop keeps running. I don't have access to a database right now, so there might be some incorrect syntax.
```
declare
cursor cur is select * from student;
c student%rowtype;
lt3 exception;
PRAGMA
exception_init(lt3,-77);
begin
open cur;
loop
begin
fetch cur into c;
exit when cur%notfound;
if c.id < 4 then
raise lt3;
else
dbms_output.put_line(c.name);
end if;
exception
when lt3 then
dbms_output.put_line('Exception encountered');
end;
end loop;
close cur;
exception
when others then
dbms_output.put_line('other error encountered');
end;
```
|
How do I continue running my program after encountering an exception in SQL?
|
[
"",
"sql",
"oracle",
"exception",
"plsql",
"goto",
""
] |
I have the following sql query which gets back unique days from a column of time stamp values for a particular user. What I am trying to get back is the count of timestamp values for each unique day for a particular user.
```
select distinct DAY(time) from users_has_activities
where usersID = 47;
```
|
Using a count `COUNT(column_name)` with a GROUP BY should do the trick:
```
SELECT DAY(time), COUNT(DAY(time))
FROM users_has_activities
WHERE usersID = 47
GROUP BY DAY(time);
```
|
```
select count(distinct DAY(time) )
from users_has_activities
where usersID = 47;
```
|
SQL query with where and distinct
|
[
"",
"mysql",
"sql",
""
] |
I made two tables like this
```
degree_plan student_record
------------------------------ ----------------------------
major course course_no ID course_no grade
------------------------------ ----------------------------
COE COE200 1 4455 1 A
COE COE305 2 4455 2 C
COE COE400 3 3333 4 B
SWE SWE214 4
SWE SWE344 5
SWE SWE444 6
```
`course_no` is the relationship between the tables
How I can write query in SQL to give me like this `where ID=4455 and the major = COE`
```
--------------------------
major course grade
--------------------------
COE COE200 A
COE COE305 C
COE COE400 NULL
```
|
Try this way
```
select a.major, a.course, b.grade
from degree_plan as a
left join student_record as b on a.course_no=b.course_no and b.id=4455
where a.major='COE'
```
|
```
select *
from degree_plan
left join student_record
    on student_record.course_no = degree_plan.course_no
where major = 'COE'
```
You need `major` in the WHERE clause to filter.
|
How can I restrict LEFT JOIN?
|
[
"",
"mysql",
"sql",
""
] |
These are my **tblEmp** and **tblDept** tables (I'm using MS SQL Server 2012). When I try a cross join on these two tables, it gives me a result I didn't expect. I just want to know why the cross join produces this kind of result. Thank you.
```
ID Name Gender Salary Dept_id
1 abc male 2004 1
2 Tom female 5004 2
3 Sara female 29404 2
4 Jim male 8604 3
5 Lisan male 2078 1
6 Brad male 9804 3
7 Diana female 2095 2
8 Henry male 28204 2
9 Mark male 20821 1
10 Miley female 9456 1
11 Richie male 8604 NULL
12 Lisan female 20776 NULL
```
> tblDept
```
ID Dept_Name Location
1 IT Mumbai
2 HR Delhi
3 Accounts London
4 OtherDepartment NewYork
```
this is the cross join query and it's output
```
select Name, Gender, Salary, Dept_Name
from tblEmp
CROSS JOIN tblDept
where tblEmp.Dept_id is NULL
```
**OUTPUT**
```
Name Gender Salary Dept_Name
Richie male 8604 IT
Richie male 8604 HR
Richie male 8604 Accounts
Richie male 8604 OtherDepartment
Lisan female 20776 IT
Lisan female 20776 HR
Lisan female 20776 Accounts
Lisan female 20776 OtherDepartment
```
What I expected was something like this
```
Name Gender Salary Dept_Name
Richie male 8604 NULL
Richie male 8604 NULL
Richie male 8604 NULL
Richie male 8604 NULL
Lisan female 20776 NULL
Lisan female 20776 NULL
Lisan female 20776 NULL
Lisan female 20776 NULL
```
|
The result is correct, the `cross join` will give you all combinations based on two tables: `tblEmp` and `tblDept`.
And since you use `Dept_Name` as the combination, without where clause, it will give you every combination possible between your two tables:
```
Name Gender Salary Dept_Name
abc male 2004 IT
abc male 2004 HR
abc male 2004 Accounts
abc male 2004 OtherDepartment
Tom female 5004 IT
Tom female 5004 HR
Tom female 5004 Accounts
Tom female 5004 OtherDepartment
... and so on
Richie male 8604 IT
Richie male 8604 HR
Richie male 8604 Accounts
Richie male 8604 OtherDepartment
Lisan female 20776 IT
Lisan female 20776 HR
Lisan female 20776 Accounts
Lisan female 20776 OtherDepartment
```
That is, by cross-joining, you would actually get 12 (from `tblEmp`) x 4 (from `tblDept`) = 48 rows
Then your where clause will simply take away everybody except `Richie` and `Lisan`, since the two of them are the only ones having `Dept_id = NULL`
```
Name Gender Salary Dept_Name
Richie male 8604 IT
Richie male 8604 HR
Richie male 8604 Accounts
Richie male 8604 OtherDepartment
Lisan female 20776 IT
Lisan female 20776 HR
Lisan female 20776 Accounts
Lisan female 20776 OtherDepartment
```
If you query `Dept_id` column too,
```
select Name, Gender, Salary, Dept_id, Dept_Name
from tblEmp
CROSS JOIN tblDept
where tblEmp.Dept_id is NULL
```
The result will be clearer, as you actually only get the employees with `Dept_id = NULL`:
```
Name Gender Salary Dept_id Dept_Name
Richie male 8604 NULL IT
Richie male 8604 NULL HR
Richie male 8604 NULL Accounts
Richie male 8604 NULL OtherDepartment
Lisan female 20776 NULL IT
Lisan female 20776 NULL HR
Lisan female 20776 NULL Accounts
Lisan female 20776 NULL OtherDepartment
```
Your `Dept_Name` column comes from 4 `tblDept` entries, not from `tblEmp` entries.
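If the NULL `Dept_Name` rows in your expected output are actually the goal, a `LEFT JOIN` on `Dept_id` is the tool for that rather than a cross join — a sketch:
```
select Name, Gender, Salary, Dept_Name
from tblEmp
left join tblDept on tblEmp.Dept_id = tblDept.ID
where tblEmp.Dept_id is NULL
```
Note this returns one row per unmatched employee (two rows total here), not one row per department.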
|
A [`CROSS JOIN`](https://technet.microsoft.com/en-us/library/ms190690(v=sql.105).aspx) gives you each row of the first table joined with each row of the second table (a Cartesian product), unless you add a condition in the WHERE clause to connect the two tables (in which case it behaves like an inner join).
Here is a quick demonstration of Cross join:
```
DECLARE @A table
(
A1 int identity(1,1),
A2 int
)
DECLARE @B table
(
B1 int identity(1,1),
B2 int
)
INSERT INTO @A VALUES (1), (2), (NULL)
INSERT INTO @B VALUES (4), (5), (6)
SELECT *
FROM @A
CROSS JOIN @B
```
Results:
```
A1 A2 B1 B2
----------- ----------- ----------- -----------
1 1 1 4
2 2 1 4
3 NULL 1 4
1 1 2 5
2 2 2 5
3 NULL 2 5
1 1 3 6
2 2 3 6
3 NULL 3 6
```
As you can see, for each record in table @A, you join each record of table @B
```
SELECT *
FROM @A
CROSS JOIN @B
WHERE A2 IS NULL
```
Results:
```
A1 A2 B1 B2
----------- ----------- ----------- -----------
3 NULL 1 4
3 NULL 2 5
3 NULL 3 6
```
As you can see, for each record in table @A where A2 is null, you join each record of table @B.
|
Unexpected result of Cross join
|
[
"",
"sql",
"sql-server",
"cross-join",
""
] |
How do I query a table on a column of CLOB type? I need to search that column for a certain string (in addition to other conditions — I might need to use a CASE statement). My initial idea was to query the data in a sub-query and then find a match using CASE in the top query; however, I am now stuck, as I am not sure how to query CLOB data in a SELECT!
Edit:
The CLOB column in the table holds a set of paragraphs, and the string I am trying to search for may be at any location in a paragraph. I am also unsure of the size of the CLOB.
|
I hope this example clearly illustrates what I am trying to explain.
```
SET SQLBL ON;
SET DEFINE OFF;
CREATE TABLE CLOB_TEST
(
clob_in CLOB
);
INSERT
INTO CLOB_TEST VALUES
(
'I am working as a DBA and senior database resource in L&T Infotech in Pune India'
);
SELECT DBMS_LOB.SUBSTR(CLOB_IN,3000) ot FROM CLOB_TEST;
-----------------------------OUTPUT------------------------------------------
OT
I am working as a DBA and senior database resource in L&T Infotech in Pune India
-----------------------------OUTPUT------------------------------------------
```
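If the goal is to filter on the CLOB's contents rather than just display it, `DBMS_LOB.INSTR` returns the position of the first match (0 when not found) and can be used directly in the WHERE clause:
```
SELECT DBMS_LOB.SUBSTR(clob_in, 3000) ot
FROM CLOB_TEST
WHERE DBMS_LOB.INSTR(clob_in, 'DBA') > 0;
```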
|
In many respects, the same way you "query a column" (odd terminology!) of type varchar2.
table structure:
```
SQL> describe t
Name Null? Type
------------------------- -------- --------------------------------------------
COL1 VARCHAR2(20)
COL2 CLOB
```
table content:
```
SQL> select * from t;
COL1 COL2
-------------------- -------------------------------------------------------
abc afdreed
azx; ffare21
```
query the table (where clause on the CLOB column):
```
SQL> select * from t where col2 like '%dre%';
COL1 COL2
-------------------- -----------------------------------------------------------
abc afdreed
```
|
Querying Oracle Clob datatype
|
[
"",
"sql",
"oracle",
"plsql",
""
] |
I'm currently running an aggregate query, summing amounts sold (say) on a given date.
```
select convert(date, datetimesold), sum(amountsold) from tblSold
group by convert(date, datetimesold)
```
where datetimesold is a datetime value.
The `convert(date,...)` gets rid of the time value, so the `group by` can group by whole day.
Already this isn't efficient, as it requires a table scan for the convert per row - a better approach would be to add a 'datesold' column containing just the date value, indexed, and include this value on insert each time. But this would lose precision on that column, and that is important, because...
`datetimesold` is a UTC datetime. So my question is: say I wanted to group by day, but on Eastern US time. I would have to add an offset in hours to datetimesold before doing the convert in the group by - `group by convert(date, dateadd(hours, -5, datetimesold))` - but even then this wouldn't be always accurate due to daylight savings - EDT -4 hours, EST -5 hours.
Do I have any efficient options to do this in SQL? Are there any timezone-aware functions that I can use here?
EDIT: To further clarify, I'm operating on Azure SQL databases.
|
You already noticed that it is difficult to convert from UTC to local time zone correctly. In fact, it is very difficult, because rules for daylight savings change. You need to maintain a historical database of timezones to do it properly.
I store two timestamps - in UTC and in local time zone. In some reports we need UTC, in some local.
Usually it is easy to convert between UTC and local time zone when the row is inserted and the OS of the client computer, which generates the data is in correct local time zone. At that moment OS knows both local and UTC time. But, if you have historical data from previous years it becomes much more difficult to perform such conversion.
SQL Server 2016 promises to add somewhat better support for time zones, see: [AT TIME ZONE](https://msdn.microsoft.com/en-us/library/mt612795.aspx).
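As a sketch of that 2016+ feature, applied to the column names from the question (assuming the Windows time zone name 'Eastern Standard Time', which switches between EST and EDT automatically):
```
SELECT CAST(datetimesold AT TIME ZONE 'UTC' AT TIME ZONE 'Eastern Standard Time' AS date) AS daysold,
       SUM(amountsold) AS total
FROM tblSold
GROUP BY CAST(datetimesold AT TIME ZONE 'UTC' AT TIME ZONE 'Eastern Standard Time' AS date);
```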
---
As for your concern about table scan - you'll always have to scan the whole table to calculate the `SUM`, so extra `CONVERT` to `date` doesn't really matter.
On the other hand,
If you have a separate column that stores just `date`, not `datetime`, the query will be a bit more efficient, because `date` takes less bytes than `datetime`, so less bytes to read from disk.
If you add an index on `(datesold, amountsold)`, then the `GROUP BY` would not have to do an extra sort, which also makes a query more efficient.
---
So, in the current version of SQL Server I'd add an indexed `date` column which would contain a date in the time zone that you need for your reports. If there is a need for reports in UTC and Eastern US time zones, I'd add two separate `date` columns.
|
If you have access to the timezone name somehow, you can use `AT TIME ZONE` (SQL Server 2016+) to shift the UTC value before casting it to a date, e.g.
```
select
    cast(datetimesold at time zone 'UTC' at time zone @timezone_name as date) as dte,
    sum(amountsold) as total
from tblSold
group by cast(datetimesold at time zone 'UTC' at time zone @timezone_name as date)
```
where `@timezone_name` is any value from:
```
select * from sys.time_zone_info
```
(SQL 2016+)
|
How to efficiently group by date with a timezone specified?
|
[
"",
"sql",
"sql-server",
"t-sql",
"date",
"azure-sql-database",
""
] |
I'm trying to sum a certain column over a certain date range. The kicker is that I want this to be a CTE, because I'll have to use it multiple times as part of a larger query. Since it's a CTE, it has to have the date column as well as the sum and ID columns, meaning I have to group by date AND ID. That will cause my results to be grouped by ID and date, giving me not a single sum over the date range, but a bunch of sums, one for each day.
To make it simple, say we have:
```
create table orders (
id int primary key,
itemID int foreign key references items.id,
datePlaced datetime,
salesRep int foreign key references salesReps.id,
price int,
amountShipped int);
```
Now, we want to get the total money a given sales rep made during a fiscal year, broken down by item. That is, ignoring the fiscal year bit:
```
select itemName, sum(price) as totalSales, sum(totalShipped) as totalShipped
from orders
join items on items.id = orders.itemID
where orders.salesRep = '1234'
group by itemName
```
Simple enough. But when you add anything else, even the price, the query spits out way more rows than you wanted.
```
select itemName, price, sum(price) as totalSales, sum(totalShipped) as totalShipped
from orders
join items on items.id = orders.itemID
where orders.salesRep = '1234'
group by itemName, price
```
Now, each group is (name, price) instead of just (name). This is kind of pseudocode, but in my database, just this change causes my result set to jump from 13 to 32 rows. Add to that the date range, and you really have a problem:
```
select itemName, price, sum(price) as totalSales, sum(totalShipped) as totalShipped
from orders
join items on items.id = orders.itemID
where orders.salesRep = '1234'
and orderDate between 150101 and 151231
group by itemName, price
```
This is identical to the last example. The trouble is making it a CTE:
```
with totals as (
select itemName, price, sum(price) as totalSales, sum(totalShipped) as totalShipped, orderDate as startDate, orderDate as endDate
from orders
join items on items.id = orders.itemID
where orders.salesRep = '1234'
and orderDate between startDate and endDate
group by itemName, price, startDate, endDate
)
select totals_2015.itemName as itemName_2015, totals_2015.price as price_2015, ...
totals_2016.itemName as itemName_2016, ...
from (
select * from totals
where startDate = 150101 and endDate = 151231
) totals_2015
join (
select *
from totals
where startDate = 160101 and endDate = 160412
) totals_2016
on totals_2015.itemName = totals_2016.itemName
```
Now the grouping in the CTE is way off, more than adding the price made it. I've thought about breaking the price query into its own subquery inside the CTE, but I can't escape needing to group by the dates in order to get the date range. Can anyone see a way around this? I hope I've made things clear enough. This is running against an IBM iSeries machine. Thank you!
|
I eventually found a solution, and it doesn't need a CTE at all. I wanted the CTE to avoid code duplication, but this works almost as well. Here's a [thread explaining summing conditionally](https://stackoverflow.com/questions/21421973/sum-for-multiple-date-ranges-in-a-single-call) that does exactly what I was looking for.
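In outline, conditional aggregation applied to the schema from the question looks like this (one pass over `orders`, one CASE per date range; the `orderDate` name follows the question's own queries, and the numeric date literals are illustrative):
```
SELECT itemName,
       SUM(CASE WHEN orderDate BETWEEN 150101 AND 151231 THEN price ELSE 0 END) AS totalSales_2015,
       SUM(CASE WHEN orderDate BETWEEN 160101 AND 160412 THEN price ELSE 0 END) AS totalSales_2016
FROM orders
JOIN items ON items.id = orders.itemID
WHERE orders.salesRep = '1234'
GROUP BY itemName;
```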
|
Depending on what you are looking for, this might be a better approach:
```
select 'by sales rep' breakdown
, salesRep
, '' year
, sum(price * amountShipped) amount
from etc
group by salesRep
union
select 'by sales rep and year' breakdown
, salesRep
, convert(char(4),orderDate, 120) year
, sum(price * amountShipped) amount
from etc
group by salesRep, convert(char(4),orderDate, 120)
etc
```
|
Summing a column over a date range in a CTE?
|
[
"",
"sql",
"function",
"aggregate",
"common-table-expression",
""
] |
I currently have six queries in which I take the results and use a spreadsheet to calculate two different final percentages. I believe that it can be done in a single query, and without a spreadsheet, but I am not knowledgeable enough in SQL to figure it out. I am hoping for some direction from the amazing SQL Gods here on SO.
We have several locations, and calculate a Past Due % and a Past Due Fallout %, per location, based on the average of two other percentages:
* Past Due Dollars ÷ Projected Dollars = *Past Due Float %*
* Past Due Units ÷ Total Active Units = *Past Due Unit %*
* ( *Past Due Unit %* + *Past Due Dollar %* ) / 2 = **Past Due %**
Fallout uses the same calculations, but looks at what the amounts will be tomorrow.
> **SOLVED:** I spent time learning about sub-queries and joined them by the STID. Thanks to all who assisted and helped guide me in the
> correct direction.
Here is my final code:
```
SET DATEFIRST 1;
DECLARE @Today date = dbo.getdateparam(92,999);
DECLARE @TodayNum int = DATEPART(dw, @Today);
DECLARE @Saturday date = DATEADD(DAY, (6-@TodayNum)%7, @Today);
DECLARE @PrevSat date = DATEADD(DAY, -7, @Saturday);
Select store.STID As Store,
Proj.ProjRent As Projected,
PDRent.PastDueDollars As PDRent,
UOR.Units As UOR,
PDUnits.UnitsPD As PDUnits,
(PDRent.PastDueDollars / Proj.ProjRent) * 100 As FloatPerc,
(Cast(PDUnits.UnitsPD As Decimal) / Cast(UOR.Units As Decimal)) *
100 As UnitPerc,
Cast(Round((((PDRent.PastDueDollars / Proj.ProjRent) * 100) +
((Cast(PDUnits.UnitsPD As Decimal(18,4)) / Cast(UOR.Units As Decimal(18,4))) *
100)) / 2, 2) As Decimal(18,2)) As PDPerc,
Reds.RedsPD As PDReds,
Round(Cast(Reds.RedsPD As Float) / Cast(UOR.Units As Float) * 100,
2) As RedsPerc
From
-- Stores
(Select Distinct Stores.STID,
Stores.StoreName,
Stores.EMail,
Stores.ManagersName
From Stores
Where Stores.STID Not In (7, 999)) As store
-- Projected Rent
Left Join (Select CashProj.STID,
Sum(CashProj.ProjectedRental) As ProjRent
From CashProj
Where CashProj.ProjectionDate Between DateAdd(mm, DateDiff(mm, 0, @Today),
0) And DateAdd(mm, DateDiff(mm, 0, @Today) + 1, 0)
Group By CashProj.STID) As Proj On store.STID = Proj.STID
-- Past Due Float
Left Join (Select Agreemnt.STID As STID,
Sum(DateDiff(d, Agreemnt.DueDate, (Case DatePart(dw, @Today)
When '1' Then DateAdd(DAY, -7, DateAdd(DAY, (6 - DatePart(dw,
@Today)) % 7, @Today)) When '6' Then @Today
Else DateAdd(DAY, (6 - DatePart(dw, @Today)) % 7, @Today)
End)) * Round(Agreemnt.WeeklyRate / 7, 2)) As PastDueDollars,
DatePart(dw, @Today) As TodayNum,
DateAdd(DAY, -7, DateAdd(DAY, (6 - DatePart(dw, @Today)) % 7,
@Today)) As PrevSat,
DateAdd(DAY, (6 - DatePart(dw, @Today)) % 7, @Today) As Saturday
From Agreemnt
Where Agreemnt.AStatID = 1 And Agreemnt.DueDate < (Case DatePart(dw,
@Today)
When '1' Then DateAdd(DAY, -7, DateAdd(DAY, (6 - DatePart(dw,
@Today)) % 7, @Today)) When '6' Then @Today
Else DateAdd(DAY, (6 - DatePart(dw, @Today)) % 7, @Today)
End) And Agreemnt.RentToRent = 0
Group By Agreemnt.STID) As PDRent On store.STID = PDRent.STID
-- Units On Rent
Left Join (Select Invntry.STID,
Cast(Count(Invntry.StockNumber) As Int) As Units
From Invntry
Inner Join AgreHist On Invntry.InvID = AgreHist.InvID And
Invntry.STID = AgreHist.STID
Inner Join Agreemnt On Agreemnt.STID = AgreHist.STID And
Agreemnt.AgreeID = AgreHist.AgreeID And Agreemnt.AStatID =
AgreHist.AStatID
Where Invntry.InvStatID = 11 And Invntry.DisposalDate Is Null And
Agreemnt.AStatID = 1
Group By Invntry.STID) As UOR On store.STID = UOR.STID
-- Past Due Units
Left Join (Select Invntry.STID,
Cast(Count(Invntry.StockNumber) As Int) As UnitsPD
From Invntry
Inner Join AgreHist On Invntry.InvID = AgreHist.InvID And
Invntry.STID = AgreHist.STID
Inner Join Agreemnt On Agreemnt.STID = AgreHist.STID And
Agreemnt.AgreeID = AgreHist.AgreeID And AgreHist.AStatID =
Agreemnt.AStatID
Where Invntry.InvStatID = 11 And Invntry.DisposalDate Is Null And
Agreemnt.AStatID = 1 And Agreemnt.DueDate < (Case @TodayNum When '1' Then @PrevSat When '6' Then @Today Else @Saturday End) And Agreemnt.RentToRent = 0
Group By Invntry.STID) As PDUnits On store.STID = PDUnits.STID
-- Reds
Left Join (Select Invntry.STID,
Count(Invntry.StockNumber) As RedsPD
From Invntry
Inner Join AgreHist On Invntry.InvID = AgreHist.InvID And
Invntry.STID = AgreHist.STID
Inner Join Agreemnt On Agreemnt.STID = AgreHist.STID And
Agreemnt.AgreeID = AgreHist.AgreeID And Agreemnt.AStatID =
AgreHist.AStatID
Where Invntry.InvStatID = 11 And Invntry.DisposalDate Is Null And
Agreemnt.AStatID = 1 And Agreemnt.DueDate < DateAdd(day, -15, Case
Cast(DatePart(dw, @Today) As Int)
When '1' Then Cast(DateAdd(DAY, -7, DateAdd(DAY, (6 - DatePart(dw,
@Today)) % 7, @Today)) As Date)
When '6' Then Cast(@Today As Date)
Else Cast(DateAdd(DAY, (6 - DatePart(dw, @Today)) % 7, @Today) As
Date) End) And Agreemnt.RentToRent = 0
Group By Invntry.STID) As Reds On store.STID = Reds.STID
Order By Store
```
|
You can't use variables like that and you can't do this in one aggregate query because it's going to create a massive Cartesian product and give you incorrect results.
You could use CTE's or subqueries for each query you have listed and then join them together on the STID and apply your formulas.
example...
```
/* declare all variables here */
DECLARE @dayNum INT;
SET @dayNum = datepart(dw, getdate());
with PastDueDollars as (
select ... from ... group by STID
), ProjectedDollers as (
select ... from ... group by STID
), PastDueUnits as (
select ... from ... group by STID
), preFinal as (
select
PastDueDollarRatio = pdd.PastDueDollars / pd.ProjRent,
PastDueUnitRatio = pdu.UnitsPD / tau.TotalActiveUnits
/* add the other calculations */
from
PastDueDollars pdd
inner join ProjectedDollers pd on pdd.STID = pd.STID
inner join PastDueUnits pdu on pdu.STID = pd.STID
/* join more CTEs */
)
select
f.*,
PastDueRatio = (f.PastDueDollarRatio + f.PastDueUnitRatio) / 2
/* and so on for the other calculations of calculations... */
from
preFinal f
```
|
You never set the variables to any real value, then you proceed to select the useless variables in an irrelevant SELECT statement.
In the following line
```
Set @pdD = Sum( Case When a.DueDate < GetDate() Then DateDiff(d,a.DueDate,@runDate) * (a.WeeklyRate/7) )
```
You are not stating where `a.DueDate` comes from, and the CASE expression has no END. It won't even compile.
In this SELECT the table is completely irrelevant
```
Select a.STID as STID,
@pdU As PastDueUnits,
@activeU As ActiveUnits,
@pdD As PastDueDollars,
@projRent As ProjRent,
@pdP As PastDuePerc,
@foU As FalloutUnits,
@foD As FalloutDollars,
@foP As FalloutPerc
FROM Agreemnt a INNER JOIN CashProj on a.STID = CashProj.STID JOIN Invntry i ON a.STID = i.STID JOIN AgreHist h On i.InvID = h.InvID And i.STID = h.STID INNER JOIN Agreemnt a On a.STID = h.STID AND a.AgreeID = h.AgreeID AND a.AStatID = h.AStatID
WHERE a.RentToRent = 0 AND i.InvStatID = 11 AND i.DisposalDate IS NULL AND a.AStatID = 1 AND a.DueDate < DateAdd(d, @runDate, GetDate())
GROUP BY a.STID
ORDER BY a.STID
```
This is an example of how you should set the values before you try to do calculations with the variables.
```
DECLARE @dayNum INT;
SET @dayNum = datepart(dw, getdate());
-- a variable assignment cannot be mixed with retrieving columns in the same SELECT
Select @foU = COUNT(Invntry.StockNumber)
From Invntry
Inner Join AgreHist On Invntry.InvID = AgreHist.InvID And
Invntry.STID = AgreHist.STID
Inner Join Agreemnt On Agreemnt.STID = AgreHist.STID And Agreemnt.AgreeID =
AgreHist.AgreeID And Agreemnt.AStatID = AgreHist.AStatID
Where Invntry.InvStatID = 11 And Invntry.DisposalDate Is Null And
Agreemnt.AStatID = 1 And Agreemnt.DueDate < DateAdd(d, Case @dayNum When '1' Then -2 When '2' Then 1 When '3' Then 1 When '4' Then 1
When '5' Then 1 When '6' Then 0 When '7' Then -1 End, GetDate()) And Agreemnt.RentToRent = 0
Group By Invntry.STID
```
|
TSQL calculating from combined queries
|
[
"",
"sql",
"t-sql",
""
] |
I have the following SQL and it returns 349017 records. The number of records will increase on a daily basis.
Currently I used pagination to display only 12 records. It took around 2 to 3 secs to return every 12 records. How do I optimize the query to 0 second? Kindly give any solution / suggestions.
I have a lot of image data that would be shown in the application. When the user scrolls, the images have to loaded quickly based on this query.
```
SELECT
vInfo.UserId
,vInfo.VehicleInfoId
,vImage.GuidedTourTemplateId
,vImage.VehicleImageId
,vInfo.Value AS VehicleName
,vInfo.Value AS ImageName
,Prop.PropertyId
,COALESCE(Loc.LocationName, 'NA') AS VehicleLocation
,Prop.PropertyName
,vImage.Latitude
,vImage.Longitude
,vImage.IsMain
,CASE
WHEN (DATEADD(DAY, tPlan.BackupDays, CAST(vImage.CreatedDate AS DATETIME)) > (GETDATE()))
THEN 1
ELSE 0
END AS IsAccess
,vImage.ImageURL
,vImage.ThumbImageURL
,CASE
WHEN vImage.AudioURL <> ''
THEN 1
ELSE 0
END AS IsAudio
,vImage.AudioURL
,CASE
WHEN vImage.Comments <> ''
THEN 1
ELSE 0
END AS IsComment
,vImage.Comments
,COALESCE(DImage.Damaged, 0) AS IsDamaged
,tImage.TotalAdditionalImages
,CONVERT(VARCHAR(12), CAST(vInfo.CreatedDate AS DATETIME), 107) AS CreateDate
,vInfo.EditDate
,0 AS RowNumber
FROM
[dbo].[VehicleInfo] vInfo
INNER JOIN
[dbo].[Property] Prop ON Prop.PropertyId = vInfo.PropertyId
INNER JOIN
[dbo].[Location] Loc ON Loc.LocationId = vInfo.LocationId
INNER JOIN
[dbo].[VehicleImage] vImage ON vInfo.VehicleInfoId = vImage.VehicleInfoId
INNER JOIN
[dbo].[PropertyPlan] tPropPlan ON vImage.PropertyId = tPropPlan.PropertyId
INNER JOIN
[dbo].[PlanType] tPlan ON tPropPlan.PlanTypeId = tPlan.PlanTypeId
OUTER APPLY
(SELECT COUNT(VehicleInfoId) AS TotalAdditionalImages
FROM [dbo].[VehicleImage]
WHERE VehicleInfoId = vInfo.VehicleInfoId) tImage
OUTER APPLY
(SELECT
CASE
WHEN COUNT(VehicleInfoId) > 0
THEN 1
ELSE 0
END AS Damaged
FROM [dbo].[DamagedVehicleImage]
WHERE VehicleInfoId = vInfo.VehicleInfoId) DImage
WHERE
vInfo.TenantId = '72c79d03-6692-4813-8c5a-4649cb34ca2f'
AND Prop.DeleteStatus = 0
AND vImage.IsMain = 1
ORDER BY
CAST(vInfo.CreatedDate AS DATETIME) DESC
,Prop.PropertyName
OFFSET 0 ROWS
FETCH NEXT 12 ROWS ONLY
```
Execution plan
[](https://i.stack.imgur.com/gU9X6.png)
|
The solution is really simple - you should have `vInfo.CreatedDate` as an indexed DATETIME column.
If modifying column type isn't possible for some reason, you can create a [`PERSISTENT` computed column](https://technet.microsoft.com/en-us/library/ms191250%28v=sql.105%29.aspx) and then define an index on it.
Example:
```
ALTER TABLE dbo.vInfo
ADD _CreatedDateAsDateTime AS CONVERT(DATETIME, CreatedDate) PERSISTED;
CREATE INDEX [IX_...] ON dbo.vInfo(_CreatedDateAsDateTime) INCLUDE(...);
```
|
Without a schema and sample data, it's just a guess, but it appears that
```
ORDER BY
CAST(vInfo.CreatedDate AS DATETIME) DESC
```
is the biggest time sink.
Ideally, you wouldn't store dates as anything other than datetime - I'm guessing you'll need to sort on CreatedDate in more places.
If you cannot modify the data type, you could create a [function-based index](https://stackoverflow.com/questions/22168213/function-based-indexes-in-sql-server) to mitigate your CAST operation (which is almost certainly not indexed today).
|
SQL Query Performance Optimization
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I've managed to use this query
```
SELECT
PartGrp,VendorPn, customer, sum(sales) as totalSales,
ROW_NUMBER() OVER (PARTITION BY partgrp, vendorpn ORDER BY SUM(sales) DESC) AS seqnum
FROM
BG_Invoice
GROUP BY
PartGrp, VendorPn, customer
ORDER BY
PartGrp, VendorPn, totalSales DESC
```
To get a result set like this. A list of sales records grouped by a group, a product ID (`VendorPn`), a customer, the customer's sales, and a sequence number which is partitioned by the group and the productID.
```
PartGrp VendorPn Customer totalSales seqnum
------------------------------------------------------------
AGS-AS 002A0002-252 10021013 19307.00 1
AGS-AS 002A0006-86 10021013 33092.00 1
AGS-AS 010-63078-8 10020987 10866.00 1
AGS-SQ B71040-39 10020997 7174.00 1
AGS-SQ B71040-39 10020998 2.00 2
AIRFRAME 0130-25 10017232 1971.00 1
AIRFRAME 0130-25 10000122 1243.00 2
AIRFRAME 0130-25 10008637 753.00 3
HARDWARE MS28775-261 10005623 214.00 1
M250 23066682 10013266 175.00 1
```
How can I filter the result set to only return rows which have more than 1 `seqnum`? I would like the result set to look like this
```
PartGrp VendorPn Customer totalSales seqnum
------------------------------------------------------------
AGS-SQ B71040-39 10020997 7174.00 1
AGS-SQ B71040-39 10020998 2.00 2
AIRFRAME 0130-25 10017232 1971.00 1
AIRFRAME 0130-25 10000122 1243.00 2
AIRFRAME 0130-25 10008637 753.00 3
```
Out of the first result set example, only rows with `VendorPn` "B71040-39" and "0130-25" had multiple customers purchase the product. All products which had only 1 customer were removed. Note that my desired result set isn't simply `seqnum > 1`, because I still need the first `seqnum` per partition.
|
I would change your query so that the window count is computed in a derived table and filtered outside it (SQL Server doesn't allow window functions or column aliases in `HAVING`):
```
SELECT PartGrp, VendorPn, customer, totalSales, seqnum
FROM (
    SELECT PartGrp,
           VendorPn,
           customer,
           SUM(sales) AS totalSales,
           ROW_NUMBER() OVER (PARTITION BY partgrp, vendorpn ORDER BY SUM(sales) DESC) AS seqnum,
           COUNT(*) OVER (PARTITION BY partgrp, vendorpn) AS cnt
    FROM BG_Invoice
    GROUP BY PartGrp, VendorPn, customer
) t
WHERE cnt > 1
ORDER BY PartGrp, VendorPn, totalSales DESC
```
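This filtering approach is easy to sanity-check outside SQL Server: SQLite supports the same window functions, so here is a reduced version of the question's data run through Python's `sqlite3` (the window results are filtered in an outer query, since window functions aren't allowed in `HAVING`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE BG_Invoice (PartGrp TEXT, VendorPn TEXT, customer TEXT, sales REAL);
INSERT INTO BG_Invoice VALUES
  ('AGS-SQ', 'B71040-39', '10020997', 7174),
  ('AGS-SQ', 'B71040-39', '10020998', 2),
  ('M250',   '23066682',  '10013266', 175);
""")

# Compute the per-partition customer count in a derived table,
# then keep only partitions with more than one customer.
rows = con.execute("""
SELECT PartGrp, VendorPn, customer, totalSales, seqnum
FROM (
    SELECT PartGrp, VendorPn, customer,
           SUM(sales) AS totalSales,
           ROW_NUMBER() OVER (PARTITION BY PartGrp, VendorPn
                              ORDER BY SUM(sales) DESC) AS seqnum,
           COUNT(*)    OVER (PARTITION BY PartGrp, VendorPn) AS cnt
    FROM BG_Invoice
    GROUP BY PartGrp, VendorPn, customer
) t
WHERE cnt > 1
ORDER BY PartGrp, VendorPn, totalSales DESC
""").fetchall()

for r in rows:
    print(r)
```

The single-customer `M250` partition drops out; both `B71040-39` rows survive, including `seqnum = 1`.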
|
You can try something like:
```
SELECT PartGrp,VendorPn, customer, sum(sales) as totalSales,
ROW_NUMBER() OVER (PARTITION BY partgrp,vendorpn ORDER BY SUM(sales) DESC) as seqnum
FROM BG_Invoice
GROUP BY PartGrp,VendorPn, customer
HAVING seqnum <> '1'
ORDER BY PartGrp,VendorPn, totalSales desc
```
|
T-SQL: Select partitions which have more than 1 row
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I'm using a Control Flow and Data Flow tasks to record the number of rows read from an Excel data source in Visual Studio SSIS. The data is then processed into good and bad tables, the rows in these counted and the results written into a statistics table via a parameterised SQL statement.
For reasons unknown the data seems to be getting written into the wrong fields in the statistics table and despite recreating the variables and explicitly setting the columns for each variable I can't fix or identify the problem.
Three variables are set:
1. Total rows read from source Excel via a Row Count task (approx 28964 rows)
2. Rows written to table as 'good' data after processing (most of the source files, approx 28,540)
3. Rows written to table as 'bad' data after processing (approx 424)
Then the variables are stored in a separate table via a SQL command that reads parameters set from the variables. A final percentage field is calculated from the total rows and the errors.
However, the results in the table seem to be in the wrong fields (see image).
I've checked this several times and recreated all the tables and variables but get the same result. All tables are Access.
Any ideas?
Any help is much appreciated.
[](https://i.stack.imgur.com/Ez1pZ.jpg)
|
Is that an Access parameterised query?
I've never run one of those from SSIS. I do know that SSIS can be weird about mapping the values from the variables to the query parameters. Have you noticed that the display order of your variables (in the variable-to-parameter mapping) is the same as how they get assigned to parameters?
It looks as though the GoodRows value (28540) is going to P1, BadRows to P2 and TotalRows to P3. That's the order that the variables appear in the mapping.
This is exactly the bizarre, infuriating thing that I've seen SSIS do - though not specifically with Access SQL statements. SSIS sometimes maps your variables to the parameters *in the order that they appear in the mapping list, completely ignoring what you specify in the Parameter Name column*.
Try deleting all the mappings, and mapping the variables one after another so that they appear in the order P1, P2, P3 in the mapping table.
|
I recommend that you create a fourth variable for the fourth parameter rather than trying to do math in the ExecuteSQL task.
Instead of using P1, P2 & P3 in the Parameter Names column of the Parameter-mapping tab, try using their zero-based ordinal position.
In the query itself, use question marks for the parameters:
```
...VALUES ("France", ?, ?, ?, ?)
```
In other words, for the parameter used first in the query, use 0 for the name. Use 1 for the next parameter, 2 for the next parameter, and so on.
If that doesn't work, you can use your variables to build a string variable that holds the entire SQL string that you want to execute, and use the "SQL from Variable" option in the ExecuteSQL task.
|
SSIS Variables and Parameters written to wrong Access Fields
|
[
"",
"sql",
"visual-studio",
"ms-access",
"ssis",
""
] |
I have a table 'floating\_options', and I want to create a spatial index on a column 'area\_geo' (which is a sdo\_geometry column, with two rows of data that appear as expected when I select \* from floating\_options).
I have used the following code but I am receiving the error below. I would be very grateful for any help! Thanks!
```
CREATE INDEX area_idx ON floating_options(area_geo)
INDEXTYPE IS MDSYS.SPATIAL_INDEX;
Error report -
SQL Error: ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine
ORA-13203: failed to read USER_SDO_GEOM_METADATA view
ORA-13203: failed to read USER_SDO_GEOM_METADATA view
ORA-06512: at "MDSYS.SDO_INDEX_METHOD_10I", line 10
29855. 00000 - "error occurred in the execution of ODCIINDEXCREATE routine"
*Cause: Failed to successfully execute the ODCIIndexCreate routine.
*Action: Check to see if the routine has been coded correctly.
```
|
Before indexing the table you should have it 'spatially enabled'.
Try to check if it is shown in spatial metadata:
```
SELECT * FROM USER_SDO_GEOM_METADATA
WHERE TABLE_NAME = UPPER('floating_options')
AND COLUMN_NAME = UPPER('area_geo');
```
If there are no results - then a couple of options are available.
One dirty way - insert data directly
```
INSERT INTO USER_SDO_GEOM_METADATA
VALUES (UPPER('floating_options'),UPPER('area_geo'),
mdsys.SDO_DIM_ARRAY(
mdsys.SDO_DIM_ELEMENT('Easting', <lowest_x>, <highest_x>, <x_tolerance>),
mdsys.SDO_DIM_ELEMENT('Northing', <lowest_y>, <highest_y>, <y_tolerance>)
), <SRID>);
```
Please change the < **placeholders** > accordingly
Please take a look also at <https://community.oracle.com/thread/836452?tstart=0> or
<http://gerardnico.com/wiki/oracle_spatial/metadata>
|
The following should also be considered: Oracle has case-sensitive names [see this post](https://stackoverflow.com/questions/563090/oracle-what-exactly-do-quotation-marks-around-the-table-name-do).
The next problem is that Oracle's USER\_SDO\_GEOM\_METADATA view does not support lower-case names (at least in 11g).
So for a table definition like this
```
CREATE TABLE "cola_markets" (
"mkt_id" NUMBER PRIMARY KEY,
"name" VARCHAR2(32),
"shape" SDO_GEOMETRY);
```
you cannot create a spatial index.
When inserting the metadata
```
INSERT INTO USER_SDO_GEOM_METADATA
VALUES (
'cola_markets',
'shape',
SDO_DIM_ARRAY( -- 20X20 grid
SDO_DIM_ELEMENT('X', 0, 20, 0.005),
SDO_DIM_ELEMENT('Y', 0, 20, 0.005)
),
NULL -- SRID
);
```
the names are converted to upper-case.
If you then create the index
```
CREATE INDEX cola_spatial_idx
ON "cola_markets"("shape")
INDEXTYPE IS MDSYS.SPATIAL_INDEX;
```
You will get the error mentioned above
```
ORA-13203: failed to read USER_SDO_GEOM_METADATA view
```
because it cannot find the lower-case names in the metadata table.
Conclusion:
* use only upper-case names (or no double quotes)
* Oracle guys are blockheads
|
Creating a spatial index on oracle
|
[
"",
"sql",
"oracle",
"geospatial",
"spatial-index",
""
] |
I have two tables with the following formats:
Table Name: "Awards"
```
id | Name | Exp_1 | Exp_2 | Exp_3
1 | Joe | 1 | 2 | 3
2 | Bob | 1 | | 3
3 | James | | 2 |
```
Table Name: "Exp"
```
id | Exp
1 | Service
2 | Integrity
3 | Timeliness
```
The result table I need to create:
```
id | Name | Exp_1_val | Exp_2_val | Exp_3_val
1 | Joe | Service | Integrity | Timeliness
2 | Bob | Service | | Timeliness
3 | James | | Integrity |
```
So basically, if one of the "Exp\_" columns in my Awards table is not null, I need its corresponding value from the Exp table. But I can't match these with ID numbers, only their values. Is there a type of join that can accomplish this?
|
You would seem to need multiple left joins:
```
select a.id, a.name, e1.exp as exp1, e2.exp as exp2, e3.exp as exp3
from awards a left join
exp e1
on a.exp_1 = e1.id left join
exp e2
on a.exp_2 = e2.id left join
exp e3
on a.exp_3 = e3.id ;
```
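As a quick check that the three left joins line up as intended, here is the same query run against the question's sample data in SQLite through Python's `sqlite3` (table and column names lower-cased for convenience):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE awards (id INT, name TEXT, exp_1 INT, exp_2 INT, exp_3 INT);
CREATE TABLE exp (id INT, exp TEXT);
INSERT INTO awards VALUES (1,'Joe',1,2,3), (2,'Bob',1,NULL,3), (3,'James',NULL,2,NULL);
INSERT INTO exp VALUES (1,'Service'), (2,'Integrity'), (3,'Timeliness');
""")

# One LEFT JOIN per Exp_ column; NULL codes simply produce NULL values.
rows = con.execute("""
SELECT a.id, a.name, e1.exp, e2.exp, e3.exp
FROM awards a
LEFT JOIN exp e1 ON a.exp_1 = e1.id
LEFT JOIN exp e2 ON a.exp_2 = e2.id
LEFT JOIN exp e3 ON a.exp_3 = e3.id
ORDER BY a.id
""").fetchall()

for r in rows:
    print(r)
```

The NULL `Exp_` codes come back as NULL result columns, matching the desired output.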
|
Try this
```
SELECT a.id, a.Name, exp1.Exp, exp2.Exp, exp3.Exp FROM Awards a
LEFT JOIN Exp exp1 on exp1.id = a.Exp_1
LEFT JOIN Exp exp2 on exp2.id = a.Exp_2
LEFT JOIN Exp exp3 on exp3.id = a.Exp_3
```
You will need to join each of your Exp fields in Awards table to the Exp table,
use LEFT JOIN in case there is no value to link them with.
|
SQL joining between columns and rows
|
[
"",
"sql",
""
] |
Print 1 to 100 in SQL and display 'Fizz' if the number is divisible by 3, 'Buzz' if divisible by 5, and 'FizzBuzz' if divisible by both. Below is my code:
```
select
Case when Remainder (rownum,3)=0 then 'Fizz'
when Remainder (rownum,5)=0 then 'Buzz'
when (remainder (rownum,3)=0 and remainder(ROWNUM,5) = 0) then 'FizzBuzz'
else rownum end
from DUAL
Connect by level <=100;
```
It gives me error - ORA-00932: inconsistent datatypes: expected CHAR got NUMBER
00932. 00000 - "inconsistent datatypes: expected %s got %s"
\*Cause:
\*Action:
Error at Line: 5 Column: 18
|
You need to convert `ROWNUM` to a string in your `ELSE` case by either:
* `CAST( ROWNUM AS VARCHAR2(3) )`
* `''||ROWNUM`
* `TO_CHAR( ROWNUM )`
You also need the case when it is divisible by both 3 and 5 to be at the top of the list (otherwise the preceding cases will take precedence). Like this:
```
SELECT CASE
WHEN remainder (rownum,15)=0 THEN 'FizzBuzz'
WHEN Remainder (rownum,3)=0 THEN 'Fizz'
WHEN Remainder (rownum,5)=0 THEN 'Buzz'
ELSE TO_CHAR(rownum)
END
FROM DUAL
CONNECT BY LEVEL <= 100;
```
However, you can shorten the code a lot (taken from an answer I posted for [Code Golf](https://codegolf.stackexchange.com/a/58969/15968)):
```
SELECT NVL(
DECODE(MOD(LEVEL,3),0,'Fizz')||DECODE(MOD(LEVEL,5),0,'Buzz'),
''||LEVEL
)
FROM DUAL
CONNECT BY LEVEL<101
```
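`CONNECT BY`, `DECODE` and `NVL` are Oracle-specific. In engines without them, the same row generation can be done with a recursive CTE and a plain `CASE` — a sketch run in SQLite via Python's `sqlite3`:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# A recursive CTE generates 1..100, replacing Oracle's CONNECT BY LEVEL.
rows = con.execute("""
WITH RECURSIVE n(i) AS (
    SELECT 1
    UNION ALL
    SELECT i + 1 FROM n WHERE i < 100
)
SELECT CASE
         WHEN i % 15 = 0 THEN 'FizzBuzz'   -- divisible by both, tested first
         WHEN i % 3  = 0 THEN 'Fizz'
         WHEN i % 5  = 0 THEN 'Buzz'
         ELSE CAST(i AS TEXT)              -- keep the column a single type
       END
FROM n
""").fetchall()

print([r[0] for r in rows[:15]])
```

Note the same two points as in the Oracle answer: the combined case is checked first, and the numeric branch is cast to text.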
|
It's in your else statement. Rownum is a NUMBER whereas Fizz, Buzz, and FizzBuzz are CHAR.
|
Fizz buzz SQL logic error
|
[
"",
"sql",
"oracle",
""
] |
I want to add an incremental value in an insert statement. I have tried the following code, but it is not working. I want to increment `COLUMN1` by 1 each time. `COLUMN1` is not a key column.
```
@var=0;
INSERT INTO TABLE_A (COLUMN1,COLUMN2,COLUMN3,COLUMN4)
SELECT @var++,COLUMN6,COLUMN7,COLUMN8
FROM TABLE_B;
```
Any idea?
|
You can use `ROW_NUMBER` to generate sequential value for `COLUMN1`:
```
INSERT INTO TABLE_A (COLUMN1,COLUMN2,COLUMN3,COLUMN4)
SELECT
ROW_NUMBER() OVER(ORDER BY(SELECT NULL)) - 1,
COLUMN6,
COLUMN7,
COLUMN8
FROM TABLE_B;
```
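For a quick check outside SQL Server: SQLite (via Python's `sqlite3`) accepts an empty `OVER ()` when the numbering order doesn't matter, whereas SQL Server needs the `ORDER BY (SELECT NULL)` trick. The sample tables below are reduced to the columns that matter:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table_b (column6 TEXT);
CREATE TABLE table_a (column1 INT, column2 TEXT);
INSERT INTO table_b VALUES ('x'), ('y'), ('z');
""")

# ROW_NUMBER() generates 1, 2, 3, ...; subtracting 1 starts the sequence at 0.
con.execute("""
INSERT INTO table_a (column1, column2)
SELECT ROW_NUMBER() OVER () - 1, column6
FROM table_b
""")

rows = con.execute("SELECT column1, column2 FROM table_a ORDER BY column1").fetchall()
print(rows)
```

Each inserted row gets a distinct sequential `column1` starting at 0.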
|
You can try to make the column as `IDENTITY` column. If it is an existing column then you can add it like this:
```
ALTER TABLE (TABLE_A) ADD NewColumn INT IDENTITY(1,1)
ALTER TABLE (TABLE_A) DROP COLUMN COLUMN1
sp_rename 'TABLE_A.NewColumn', 'COLUMN1', 'COLUMN'
```
and then simply do like
```
INSERT INTO TABLE_A (COLUMN2,COLUMN3,COLUMN4)
SELECT
COLUMN6,
COLUMN7,
COLUMN8
FROM TABLE_B;
```
So now you don't have to provide the `COLUMN1` while doing insert. It will be populated automatically whenever a new row is inserted into the table and it would be incremental by 1.
|
SQL Insert Function (Insert incremental value)
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
This is my table **tblEmployee**. When I use a self-join query, the result changes when I swap the join columns. Can someone please explain the difference between these two queries? Thanks.
```
EmployeeID Name ManagerID
1 Mike 3
2 Rob 1
3 Todd NULL
5 Ben 1
6 Sam 1
```
> Query\_1
```
SELECT E.Name AS Employee, M.Name AS Manager
from tblEmployee E
LEFT JOIN tblEmployee M
on E.ManagerID = M.EmployeeID
```
**OUTPUT**
```
Employee Manager
Mike Todd
Rob Mike
Todd NULL
Ben Mike
Sam Mike
```
> QUERY\_2
```
SELECT E.Name AS Employee, M.Name AS Manager
from tblEmployee E
LEFT JOIN tblEmployee M
on E.EmployeeID = M.ManagerID
```
**OUTPUT**
```
Employee Manager
Mike Rob
Mike Ben
Mike Sam
Rob NULL
Todd Mike
Ben NULL
Sam NULL
```
|
First, a simple explanation: a left join keeps every row of the first table and attaches the matching rows from the second table on the columns you selected (or NULL when there is no match).
In the **first query** you take the *Manager ID* of every row and look for a match on the *Employee ID* in the second table: since *Employee ID* is a ***key*** (at least in the example definition), you can find at most one row that matches that ID.
Eg: First row Mike has *Manager ID* = 3; looking at the table, there is only one row with *Employee ID* = 3, and that is Todd.
In the **second query** you take the *Employee ID* of every row and look for a match on the *Manager ID* in the second table: since there can be more than one row with the same *Manager ID*, you can get several rows back for each employee.
Eg: First row Mike has *Employee ID* = 1; looking at the table, there are three rows with *Manager ID* = 1, and those are Rob, Ben and Sam.
|
With **Query1** you are selecting each employee (E) and their manager (M) if there is one (left join).
With **Query2** you reversed that. You are selecting each employee (E) and their subordinates (M). The alias for `M.Name` is incorrect now, because it's not the manager but a subordinate. As some employees have more than one subordinate, they are listed once for each of them (Mike).
|
Not quite understanding the query after just shifting column names
|
[
"",
"sql",
"sql-server",
"join",
"self-join",
""
] |
I have a table named CompApps and I wish to retrieve particular records based on a multiple-condition query. My SQL is very rusty, which is why I am asking on Stack Overflow. I need to amend the SQL below with a WHERE clause that excludes records that have no relevant information in the fields Interface, ExAPI, ExtraInfo and OpenCol. That is, in the image of the current query results below, I want rows `170, 173, 174, 175, 177, 182, 185, 190` and NOT rows that only have the value None, N/A or an empty value in Interface, ExAPI, ExtraInfo and OpenCol.
```
SELECT RefNum, Interface, ExAPI, ExtraInfo, OpenCol
FROM CompApps
```
[](https://i.stack.imgur.com/gBnzq.jpg)
|
Or maybe like this
Correcting this according to dnoeth's comment:
```
SELECT RefNum, Interface, ExAPI, ExtraInfo, OpenCol
FROM CompApps
WHERE Interface NOT IN('None','N/A')
OR ExAPI NOT IN('None','N/A')
OR ExtraInfo NOT IN('None','N/A')
OR OpenCol NOT IN('None','N/A');
```
|
Try this:
```
SELECT
RefNum, Interface, ExAPI, ExtraInfo, OpenCol
FROM
CompApps
WHERE
(Interface IS NOT NULL AND (Interface !='None' AND Interface !='N/A'))
OR
(ExAPI IS NOT NULL AND (ExAPI != 'None' AND ExAPI != 'N/A'))
OR
(ExtraInfo IS NOT NULL AND (ExtraInfo != 'None' AND ExtraInfo != 'N/A'))
OR
(OpenCol IS NOT NULL AND (OpenCol != 'None' AND OpenCol != 'N/A'))
```
|
Complex SQL Query Filter
|
[
"",
"sql",
"sql-server",
""
] |
I am having a lot of trouble with PostgreSQL trying to figure out how to find the most common value that fits specific criteria. The `ID` is the ID number of the book, meaning repeated numbers mean there are multiple copies of the book.
I have 2 tables here:
```
Table A:
=====+===================
ID | Condition
-------------------------
1 | Taken
1 |
1 | Taken
1 |
2 | Taken
3 | Taken
3 |
3 | Taken
3 | Taken
4 |
4 | Taken
etc.
Table B:
=====+===================
ID | Name
-------------------------
1 | BookA
2 | BookB
3 | BookC
4 | BookD
etc.
```
What I need is to find which book has the most copies taken and simply print out the name of the book. In this case all I need is:
```
BookC
```
The problem is that I can't figure out how to count how many copies of each individual ID are taken. I tried using a temp table, something like so:
```
CREATE TEMP TABLE MostCommon AS
(SELECT ID
FROM TableA
WHERE SUM(CASE WHEN Condition>0 then 1 else 0 END)
)
SELECT NAME FROM TableB, MostCommon WHERE
MostCommon.ID = TableB.ID;
```
But it either throws an error or simply doesn't give me what I need. Any help would be greatly appreciated.
|
Ok, so firstly I assumed that your column and table names are case sensitive, which means you must use double quotation marks. To print the most-taken book name with the number of "taken" copies, you can use the simple aggregate `count()`, then order the output descending, and at the end limit the output to 1 row, like:
```
SELECT
b."ID",
b."Name",
count(*) as takenCount
FROM "TableA" a
JOIN "TableB" b ON a."ID" = b."ID"
WHERE a."Condition" = 'Taken'
GROUP BY b."ID", b."Name"
ORDER BY 3 DESC
LIMIT 1;
```
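Here is the same query run on the question's data in SQLite via Python's `sqlite3` (unquoted lower-case names are used here, so no double-quoting is needed):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table_a (id INT, condition TEXT);
CREATE TABLE table_b (id INT, name TEXT);
INSERT INTO table_a VALUES
  (1,'Taken'), (1,NULL), (1,'Taken'), (1,NULL),
  (2,'Taken'),
  (3,'Taken'), (3,NULL), (3,'Taken'), (3,'Taken'),
  (4,NULL), (4,'Taken');
INSERT INTO table_b VALUES (1,'BookA'), (2,'BookB'), (3,'BookC'), (4,'BookD');
""")

# Count 'Taken' rows per book, sort descending, keep the top one.
row = con.execute("""
SELECT b.name, COUNT(*) AS taken_count
FROM table_a a
JOIN table_b b ON a.id = b.id
WHERE a.condition = 'Taken'
GROUP BY b.id, b.name
ORDER BY taken_count DESC
LIMIT 1
""").fetchone()

print(row)
```

BookC wins with three taken copies, matching the expected answer.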
|
```
CREATE TEMP TABLE MostCommon AS
(SELECT id, (sum(ID)/id) book_taken FROM tableA where condition = 'Taken' group by id);
select name from tableB t2 join MostCommon mc on mc.id = t2.id where mc.id in (select max(book_taken) from MostCommon)
```
|
Finding value with most common condition
|
[
"",
"sql",
"postgresql",
"aggregate-functions",
""
] |
I would like to find data in a table with no matching records based on two fields (ContractState, TransID).
For instance, assume this data set (in reality, all data sets contain hundreds of records, I'm just including a few):
```
AccountNbr ContractState TransID Product
3335477 AL 80079 DPPO, DHMO
3335477 AL 80080 PPO
3335477 AR 80079 DPPO, DHMO
3335477 AR 80080 PPO
```
This should return 0 records, because there are 2 records for AL (one for each TransID) and 2 records for AR.
However, given the following data set:
```
AccountNbr ContractState TransID Product
3335477 AL 80079 DPPO, DHMO
3335477 AL 80080 PPO
3335477 DE 80079 DHMO
3335477 WA 80080 DHMO
```
I would like to return only the following data set:
```
AccountNbr ContractState TransID Product
3335477 DE 80079 DHMO
3335477 WA 80080 DHMO
```
because for each state, there is only 1 TransID.
I have this code, but it also includes records with matching data:
```
SELECT
'tblSQLContractState' as TableName,
TransID,
ContractState,
Product,
COUNT(*) AS [NumOfMessage]
FROM tblSQLContractState
WHERE TransID IN (80079, 80080)
GROUP BY
TransID,
ContractState,
Product
HAVING COUNT(*) = 1
```
|
Use window functions. For the data you provided, this should work:
```
select cs.*
from (select cs.*,
count(*) over (partition by AccountNbr, ContractState) as cnt
from tblSQLContractState cs
where TransID IN (80079, 80080)
) cs
where cnt = 1;
```
|
You can use a `NOT EXISTS` to select records where there is another record with this `TransId` but not with this `ContractState`:
```
SELECT
'tblSQLContractState' as TableName,
cs.AccountNbr, cs.ContractState, cs.TransID, cs.Product
FROM tblSQLContractState cs
WHERE cs.TransID IN (80079, 80080)
AND NOT EXISTS -- no other record
(
SELECT 1 FROM tblSQLContractState cs2
WHERE cs2.TransID <> cs.TransID -- with other TransId
AND cs2.ContractState = cs.ContractState -- and this ContractState
)
```
`NOT EXISTS` is quite efficient in SQL-Server and can be modified/extended easily. Another advantage is that you can select all columns as opposed to a `GROUP BY`.
|
SQL to find only data in a table with no matching records
|
[
"",
"sql",
"sql-server",
""
] |
I have a large database with connections between cities. Each connection has a start and destination town, a start date, and a price for that connection.
I'd like to compute all combinations of outgoing + return connections, for any city pair, where the return departs between 1 and 20 days after the outgoing connection. Then select the best price for each *date combination*.
Example:
Table:
```
city_start, city_end, date_start, price
Hamburg Berlin 01.01.2016 100.00
Berlin Hamburg 10.01.2016 112.00
Berlin Hamburg 10.01.2016 70.00
Berlin Hamburg 12.01.2016 50.00
Berlin Hamburg 30.02.2016 20.00
Paris Madrid ...
Madrid Paris
London Paris
```
Desired result:
```
Hamburg-Berlin-Hamburg, 01.01.2016, 10.01.2016, 170.00 (100+70)
Hamburg-Berlin-Hamburg, 01.01.2016, 12.01.2016, 150.00 (100+50)
...
(not Berlin-Hamburg on 30.02.2016 because it's >20 days from the departure date)
(not London-Paris, as there is no return Paris-London)
```
I can get the possible combinations by:
```
SELECT DISTINCT city_start, city_end, city_end, city_start from table
```
But how can I now compute their permutations?
|
The query to get all pairs uses a `join`:
```
select tto.city_start, tto.city_end, tto.date_start, tfrom.date_start as date_return,
       (tto.price + tfrom.price) as price
from t tto join
     t tfrom
     on tto.city_end = tfrom.city_start and
        tto.city_start = tfrom.city_end and
        tfrom.date_start >= tto.date_start + interval '1 day' and
        tfrom.date_start <= tto.date_start + interval '20 day';
```
To get the cheapest price, use window functions:
```
select tt.*
from (select tto.city_start, tto.city_end, tto.date_start, tfrom.date_start as date_return,
             (tto.price + tfrom.price) as price,
             row_number() over (partition by tto.city_start, tto.city_end order by (tto.price + tfrom.price) asc) as seqnum
      from t tto join
           t tfrom
           on tto.city_end = tfrom.city_start and
              tto.city_start = tfrom.city_end and
              tfrom.date_start >= tto.date_start + interval '1 day' and
              tfrom.date_start <= tto.date_start + interval '20 day'
) tt
where seqnum = 1;
```
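A runnable sketch of the pairing join, using SQLite via Python's `sqlite3`: the Postgres `interval` arithmetic becomes `date(..., '+n day')` offsets, and the question's impossible `30.02.2016` row is replaced with a valid late-February date for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (city_start TEXT, city_end TEXT, date_start TEXT, price REAL);
INSERT INTO t VALUES
  ('Hamburg', 'Berlin',  '2016-01-01', 100),
  ('Berlin',  'Hamburg', '2016-01-10', 112),
  ('Berlin',  'Hamburg', '2016-01-10', 70),
  ('Berlin',  'Hamburg', '2016-01-12', 50),
  ('Berlin',  'Hamburg', '2016-02-28', 20);
""")

# Self-join: the return leg must mirror the cities and depart 1-20 days later.
rows = con.execute("""
SELECT tto.city_start, tto.date_start, tfrom.date_start,
       tto.price + tfrom.price AS total
FROM t tto
JOIN t tfrom
  ON tto.city_end   = tfrom.city_start
 AND tto.city_start = tfrom.city_end
 AND tfrom.date_start >= date(tto.date_start, '+1 day')
 AND tfrom.date_start <= date(tto.date_start, '+20 day')
ORDER BY total
""").fetchall()

for r in rows:
    print(r)
```

The 2016-02-28 return falls outside the 20-day window and is dropped; the remaining pairs are priced 150, 170 and 212.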
|
Here is a solution without the row\_number partition part:
```
SELECT
a.city_start, a.city_end, b.city_end, a.date_start, b.date_start,
min(a.price + b.price)
FROM
flight AS a
JOIN
flight AS b ON a.city_start = b.city_end AND a.city_end = b.city_start
WHERE b.date_start BETWEEN a.date_start + 1 AND a.date_start + 20
GROUP BY a.city_start, a.city_end, b.city_end, a.date_start, b.date_start;
```
|
How to compute permutations with postgresql?
|
[
"",
"sql",
"postgresql",
""
] |
I'm trying to query the sum of the populations of all cities where the CONTINENT is 'Asia'.
The two tables CITY and COUNTRY are as follows,
```
city - id, countrycode, name population
country - code, name, continent, population
```
Here's my query
```
SELECT SUM(POPULATION) FROM COUNTRY CITY
JOIN ON COUNTRY.CODE = CITY.COUNTRYCODE
WHERE CONTINENT = "Asia";
```
This doesn't work. What am I doing wrong. I'm new to SQL.
|
It isn't working because the way you've written it `CITY` is being interpreted as a table alias for `COUNTRY`. Additionally, it looks like you've got a POPULATION column in each table so you need to disambiguate it. Let me rewrite the query for you:
```
SELECT SUM(CITY.POPULATION)
FROM COUNTRY
JOIN CITY
ON COUNTRY.CODE = CITY.COUNTRYCODE
WHERE COUNTRY.CONTINENT = "Asia";
```
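For a quick sanity check, the corrected join runs essentially unchanged in SQLite via Python's `sqlite3`; the populations below are made-up sample figures, and the string literal uses single quotes, which is the portable SQL form:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE country (code TEXT, name TEXT, continent TEXT, population INT);
CREATE TABLE city (id INT, countrycode TEXT, name TEXT, population INT);
INSERT INTO country VALUES ('JPN','Japan','Asia',126000000), ('FRA','France','Europe',67000000);
INSERT INTO city VALUES (1,'JPN','Tokyo',9000000), (2,'JPN','Osaka',2700000), (3,'FRA','Paris',2100000);
""")

# Qualify POPULATION with the city table, since both tables have that column.
total = con.execute("""
SELECT SUM(CITY.POPULATION)
FROM COUNTRY
JOIN CITY ON COUNTRY.CODE = CITY.COUNTRYCODE
WHERE COUNTRY.CONTINENT = 'Asia'
""").fetchone()[0]

print(total)
```

Only the two Japanese cities contribute, so the sum is 11,700,000.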
|
I know the question was already answered, but I would like to put out the optimised solution. The below solution will decrease the execution time and at the same time it will take less resource to perform the SQL query.
```
select sum(a.population) from city a
inner join(select * from country where continent = 'Asia') b
on a.countrycode=b.code;
```
I would like to explain a bit on top of that, as you see I'm applying the filter condition before performing Join operation. So during reshuffling phase, the data would be very less and this way query will take less time to execute. You will not see a drastic performance changes in less data size, however while running this queries in large dataset, you can see the performance improvement.
|
Simple inner join not working
|
[
"",
"mysql",
"sql",
""
] |
I have a varchar value like this:
```
DECLARE @TestConvert VARCHAR(MAX) = '1234.94-'
```
And I want to convert this value to a `decimal(5,2)` like this:
```
SELECT CAST(@TestConvert AS DECIMAL(18, 4))
```
The problem is the sign at the end of the value.
If the sign is at the beginning like this:
```
DECLARE @TestConvert VARCHAR(MAX) = '-1234.94'
```
I have no problem. Is there any solution or workaround for this?
Thanks.
|
For SQL2012 and above `PARSE` will handle this. (My test box is running 2016 so apologies if this doesn't work for you)
```
DECLARE @TestConvert VARCHAR(MAX) = '1234.94-'
SELECT PARSE(@TestConvert AS DECIMAL(18, 4)) AS Converted
Converted
---------------------------------------
-1234.9400
(1 row(s) affected)
```
`PARSE` documentation <https://msdn.microsoft.com/en-us/library/hh213316(v=sql.110).aspx>
and related
`TRY_PARSE` <https://msdn.microsoft.com/en-us/library/hh213126(v=sql.110).aspx>
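`PARSE` leans on .NET's culture-aware parsing, so it is SQL Server-only. Outside the database, the trailing sign is easy to normalise by hand — a small Python sketch (the function name is mine, not part of any API):

```python
def parse_trailing_sign(s: str) -> float:
    """Convert strings like '1234.94-' (trailing minus) to a number.

    Leading-sign and unsigned strings are handled by float() directly.
    """
    s = s.strip()
    if s.endswith('-'):
        # Move the trailing minus to the front by negating the rest.
        return -float(s[:-1])
    return float(s)


print(parse_trailing_sign('1234.94-'))
```

The same move-the-sign idea is what the `REPLACE`-based T-SQL workaround below implements.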
|
```
DECLARE @TestConvert VARCHAR(MAX) = '1234.94-'
if(charindex('-', @TestConvert) = len(@TestConvert))
begin
set @TestConvert = replace(@TestConvert, '-', '')
select 0 - cast(@TestConvert as DECIMAL(6, 2))
end
else
select cast(@TestConvert as DECIMAL(6, 2))
```
|
SQL Server : convert varchar to numeric with sign at the end
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
```
UPDATE
WEB
SET
WEB.ID_D_SUPPORT = DIM.ID_D_SUPPORT
FROM
dbo.F_SUPPORT_WEB WEB
INNER JOIN
dbo.D_SUPPORT DIM
ON WEB.id_support = DIM.id_support
AND (WEB.date_creation BETWEEN DIM.Date_Deb_Valid AND DIM.Date_Fin_Valid)
```
The query above updates 300 million rows in the table F\_SUPPORT\_WEB. I am unable to run it; each time I do, I get a transaction log issue.
I think the between operator is the key, but I don't know how to optimize it.
Can someone help me ?
|
If your transaction log drive isn't big enough to support that single transaction, you need to break the update up into multiple transactions. I had a similar issue and tested until I found a sweet spot (100,000 rows updated per batch for me). Keep looping until all records are done. You could start by simply modifying your query:
```
UPDATE WEB
SET WEB.ID_D_SUPPORT = DIM.ID_D_SUPPORT
FROM dbo.F_SUPPORT_WEB WEB
INNER JOIN dbo.D_SUPPORT DIM ON WEB.id_support = DIM.id_support
AND (WEB.date_creation BETWEEN DIM.Date_Deb_Valid AND DIM.Date_Fin_Valid)
WHERE WEB.ID_D_SUPPORT <> DIM.ID_D_SUPPORT
```
You might want to add that where in the first place to see if you are updating unnecessary records.
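The batch-and-commit idea is easy to prototype outside SQL Server. This Python/SQLite sketch (table and batch size are invented for illustration) loops until an `UPDATE` touches no more rows, committing after each batch so no single transaction grows large:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE f (id INTEGER PRIMARY KEY, flag INT)")
con.executemany("INSERT INTO f (flag) VALUES (?)", [(0,)] * 10)

batch_size = 3
total_updated = 0
while True:
    # Only touch rows that still need the change, a few at a time.
    cur = con.execute("""
        UPDATE f SET flag = 1
        WHERE id IN (SELECT id FROM f WHERE flag <> 1 LIMIT ?)
    """, (batch_size,))
    con.commit()            # each batch commits separately, keeping the log small
    if cur.rowcount == 0:   # nothing left to update
        break
    total_updated += cur.rowcount

print(total_updated)
```

The `WHERE flag <> 1` filter is what makes the loop terminate — the same role the `WEB.ID_D_SUPPORT <> DIM.ID_D_SUPPORT` predicate plays above.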
|
You'll want to run updates on a loop. When you try to update all those records at once, it has to copy all of them to the transaction log in case of an error so it can roll back the changes. If you do updates in batches, you won't run into that issue. See below
```
SELECT 1 --Just to get a @@ROWCOUNT established
WHILE (@@ROWCOUNT > 0)
BEGIN
UPDATE TOP (5000000) WEB --Change this number to however many you want to update at a time
SET WEB.ID_D_SUPPORT = DIM.ID_D_SUPPORT
FROM dbo.F_SUPPORT_WEB WEB
INNER JOIN dbo.D_SUPPORT DIM ON WEB.id_support = DIM.id_support
AND (WEB.date_creation BETWEEN DIM.Date_Deb_Valid AND DIM.Date_Fin_Valid)
WHERE WEB.ID_D_SUPPORT != DIM.ID_D_SUPPORT --Don't update records that have already been updated, otherwise the loop will run forever
END
```
|
How can I run this UPDATE query faster?
|
[
"",
"sql",
"sql-server-2008-r2",
"sql-update",
"inner-join",
"between",
""
] |
I have a strange situation with a simple select by column `pqth_scan_code` from the following table:
**table pqth\_**
```
Field Type Null Key Default Extra
pqth_id int(11) NO PRI NULL auto_increment
pqth_scan_code varchar(250) NO NULL
pqth_info text YES NULL
pqth_opk int(11) NO 999
```
**query 1**
This query took 12.7221 seconds to execute
```
SELECT * FROM `pqth_` WHERE pqth_scan_code = "7900722!30@3#6$EN"
```
**query 2**
This query took 0.0003 seconds to execute
```
SELECT * FROM `pqth` WHERE `pqth_id`=27597
```
Based on data from table `pqth_` I have created the following table, where `pqthc_id` = `pqth_id` and `pqthc_scan_code`=`pqth_scan_code`
**table pqthc**
```
Field Type Null Key Default Extra
pqthc_id int(11) NO PRI NULL
pqthc_scan_code tinytext NO NULL
```
The same query, **query 1**, on table `pqthc` took 0.0259 seconds to run
```
SELECT * FROM `pqthc` WHERE pqthc_scan_code = "7900722!30@3#6$EN"
```
If I run the following query, it takes 0.0971 seconds, which is very strange.
**query 3**
```
SELECT * FROM `pqth` WHERE pqth_id = (SELECT pqthc_id From pqthc where pqthc_scan_code = "7900722!30@3#6$EN")
```
My question is **why a SELECT by `pqth_scan_code` is slow and SELECT by `pqth_id` is fastest?** Both columns are indexed.
For testing please get the export from this [link](https://soft2u.ro/dev/user205_test.zip)
The same behavior is with MySQL and MariaDB server
|
```
SELECT * FROM `pqth_` WHERE pqth_scan_code = "7900722!30@3#6$EN"
```
needs `INDEX(pqth_scan_code)`. Period. End of discussion.
```
SELECT * FROM `pqth` WHERE `pqth_id`=27597
```
has a useful index, since a `PRIMARY KEY` is an index (and it is unique).
```
SELECT * FROM `pqthc` WHERE pqthc_scan_code = "7900722!30@3#6$EN"
```
also needs `INDEX(pqthc_scan_code)`. But it may have been faster because (1) the table is smaller, or (2) you ran the query before, thereby caching what was needed in RAM.
Please don't prefix column names with the table name.
Please don't have table names so close to each other that they are hard to distinguish. (`pqth` and `pqthc`)
```
SELECT *
FROM `pqth`
WHERE pqth_id =
( SELECT pqthc_id
From pqthc
where pqthc_scan_code = "7900722!30@3#6$EN"
)
```
The construct `IN ( SELECT ... )` is not efficient.
It is rare to have two tables with the same `PRIMARY KEY`; are you sure you meant that?
Use a `JOIN` instead:
```
SELECT a.*
FROM `pqth` AS a
JOIN pqthc AS c ON a.id = c.id
where c.scan_code = "7900722!30@3#6$EN"
```
If that is 'correct', then I recommend this 'covering' index:
```
INDEX(scan_code, id)
```
instead of the shorter `INDEX(scan_code)` I previously recommended.
[More on indexing](http://mysql.rjweb.org/doc.php/index_cookbook_mysql).
|
you have to understand the concept of primary key and indexes and how they help in searching,
reference docs [here](http://dev.mysql.com/doc/refman/5.7/en/optimization-indexes.html)
|
SQL - Strange issue with SELECT
|
[
"",
"mysql",
"sql",
"mariadb",
""
] |
I am able using the code below to select data from any number of days in the past but what if I only want the data from the previous month, e.g. from 60 days ago to 30 days ago.
I thought I might be able to use `INTERVAL 60 - 30`, but I am not sure that is working...
```
SELECT
product,
COUNT(OrderNumber) AS CountOf
FROM
orders
WHERE
STATUS = 'booking' AND
Date(OrderDate) <= CURDATE() AND
Date(OrderDate) > DATE_SUB(CURDATE(),INTERVAL 30 DAY)
GROUP BY
product
ORDER BY CountOf DESC
```
Thoughts?
|
This is too long for a comment:
```
Date(OrderDate) < DATE_SUB(CURDATE(), INTERVAL 30 DAY) AND
Date(OrderDate) >= DATE_SUB(CURDATE(), INTERVAL 60 DAY)
```
Note: if `OrderDate` is already a date, then don't use the `DATE()` function. It can prevent the use of indexes.
Even if `OrderDate` has a time component, you probably still don't need the `DATE()` function.
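The same 30-to-60-day window can be sketched in SQLite via Python's `sqlite3`, where `date('now', '-30 day')` plays the role of MySQL's `DATE_SUB(CURDATE(), INTERVAL 30 DAY)`; the orders below are invented sample rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders (product TEXT, OrderNumber INT, status TEXT, OrderDate TEXT);
INSERT INTO orders VALUES
  ('widget', 1, 'booking', date('now', '-45 day')),  -- inside the 30-60 day window
  ('widget', 2, 'booking', date('now', '-10 day')),  -- too recent
  ('gadget', 3, 'booking', date('now', '-90 day'));  -- too old
""")

rows = con.execute("""
SELECT product, COUNT(OrderNumber) AS CountOf
FROM orders
WHERE status = 'booking'
  AND OrderDate <  date('now', '-30 day')
  AND OrderDate >= date('now', '-60 day')
GROUP BY product
ORDER BY CountOf DESC
""").fetchall()

print(rows)
```

Only the 45-day-old order survives the two boundary conditions.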
|
Do you know about the MySQL functions `MONTH()` and `NOW()`? You can use these in your where clause:
```
MONTH(OrderDate) = MONTH(NOW()) - 1
```
Of course, this needs to be fine tuned for example for years (to get December if it is January).
|
SQL Select data from previous month
|
[
"",
"mysql",
"sql",
""
] |
I want to query the list of `CITY` names from the table `STATION(id, city, longitude, latitude)` which have vowels as both their first and last characters. The result cannot contain duplicates.
For this I wrote a query like `WHERE NAME LIKE 'a%'` with 25 conditions, one for each first/last vowel pair, which is quite unwieldy. Is there a better way to do it?
|
You could use a [regular expression](http://dev.mysql.com/doc/refman/5.7/en/regexp.html#operator_regexp):
```
SELECT DISTINCT city
FROM station
WHERE city RLIKE '^[aeiouAEIOU].*[aeiouAEIOU]$'
```
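`RLIKE` is MySQL-specific. In SQLite the `REGEXP` operator exists but only works once you register a `regexp(pattern, string)` function, which Python's `sqlite3` makes easy (the sample cities are invented):

```python
import re
import sqlite3

con = sqlite3.connect(":memory:")

# SQLite rewrites `x REGEXP y` into a call to regexp(y, x), so supplying
# this function enables the operator.
con.create_function("regexp", 2,
                    lambda pattern, s: re.search(pattern, s) is not None)

con.executescript("""
CREATE TABLE station (id INT, city TEXT);
INSERT INTO station VALUES (1,'Oslo'), (2,'Atlanta'), (3,'Berlin'), (4,'Atlanta');
""")

rows = con.execute("""
SELECT DISTINCT city
FROM station
WHERE city REGEXP '^[aeiouAEIOU].*[aeiouAEIOU]$'
""").fetchall()

print(rows)
```

Only cities that both start and end with a vowel survive, and `DISTINCT` removes the duplicate Atlanta.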
|
in **Microsoft SQL server** you can achieve this from below query:
```
SELECT distinct City FROM STATION WHERE City LIKE '[AEIOU]%[AEIOU]'
```
**Or**
```
SELECT distinct City FROM STATION WHERE City LIKE '[A,E,I,O,U]%[A,E,I,O,U]'
```
**Update --Added Oracle Query**
> --Way 1 --It should work in all Oracle versions
```
SELECT DISTINCT CITY FROM STATION WHERE REGEXP_LIKE(LOWER(CITY), '^[aeiou]') and REGEXP_LIKE(LOWER(CITY), '[aeiou]$');
```
> --Way 2 --it may fail in some versions of Oracle
```
SELECT DISTINCT CITY FROM STATION WHERE REGEXP_LIKE(LOWER(CITY), '^[aeiou].*[aeiou]');
```
> --Way 3 --it may fail in some versions of Oracle
```
SELECT DISTINCT CITY FROM STATION WHERE REGEXP_LIKE(CITY, '^[aeiou].*[aeiou]', 'i');
```
|
SQL query to check if a name begins and ends with a vowel
|
[
"",
"mysql",
"sql",
"select",
""
] |
I am working on a SQL assignment in Oracle. There are two tables.
table1 is called Person10:
fields include: ID, Fname, Lname, State, DOH, JobTitle, Salary, Cat.
table2 is called StateInfo:
fields include: State, Statename, Capital, Nickname, Pop2010, pop2000, pop1990, sqmiles.
Question:
> Create a view named A10T2 that will display the StateName, Capital and Nickname of the states that have at least 25 people in the Person10 table with a Cat value of N and an annual salary between $75,000 and $125,000. The three column headings should be StateName, Capital and Nickname. The rows should be sorted by the name of the state.
What I have:
```
CREATE VIEW A10T2 AS
SELECT StateName, Capital, Nickname
FROM STATEINFO INNER JOIN PERSON10 ON
STATEINFO.STATE = PERSON10.STATE
WHERE Person10.CAT = 'N' AND
Person10.Salary in BETWEEN (75000 AND 125000) AND
count(Person10.CAT) >= 25
ORDER BY STATE;
```
It gives me an error saying missing expression. I may need a group expression... but i dont know what I am doing wrong.
|
Yeah, I originally messed this up when I first answered because it was on the fly and I didn't have a chance to test what I was putting down. An aggregate condition like `COUNT` belongs in a `HAVING` clause, which requires a `GROUP BY` and has to appear before the `ORDER BY`; mixing that up is what throws the error. And since you want the results ordered by the state, the `ORDER BY` should use `StateName`.
```
SELECT S.StateName, S.Capital, S.Nickname
FROM STATEINFO S
INNER JOIN PERSON10 P ON S.STATE = P.STATE
WHERE P.CAT = 'N'
AND P.Salary BETWEEN 75000 AND 125000
GROUP BY S.StateName, S.Capital, S.Nickname
HAVING COUNT(P.CAT) >= 25
ORDER BY S.StateName;
```
|
You can try using a Sub Query like this.
```
CREATE VIEW A10T2 AS
SELECT statename, capital, nickname
FROM stateinfo
WHERE statename IN (SELECT statename
FROM person10
WHERE Cat = 'N'
AND Salary BETWEEN 75000 AND 125000
GROUP BY statename
HAVING COUNT(*) >= 25)
ORDER BY statename
```
|
SQL Assignment about joining tables
|
[
"",
"sql",
"oracle",
"inner-join",
""
] |
I've searched StackOverflow for all the possible solutions concerning how to insert a linebreak in a SQL text string. I've referred this link but to no avail. [How to insert a line break in a SQL Server VARCHAR/NVARCHAR string](https://stackoverflow.com/questions/31057/how-to-insert-a-line-break-in-a-sql-server-varchar-nvarchar-string)
But none of the solutions are working for me.
This is what I'm trying to do:
```
insert into sample (dex, col)
values (2, 'This is line 1.' + CHAR(13)+CHAR(10) + 'This is line 2.')
```
But this is the output generated: (Select Col from sample where dex = 2)
> This is line 1. This is line 2.
This is the output that I desire:
> This is line 1.
> This is line 2.
I'm using SQL server and SSMS if that helps.
Any ideas why it isn't working?
|
Well your query works perfectly fine.
SSMS by default shows all query output in the grid view, which does not display the line break character.
To see it you can switch to text view using the `Ctrl` + `T` shortcut, or like below
[](https://i.stack.imgur.com/CBXDB.png)
The results I got for your query are below( and they work)
[](https://i.stack.imgur.com/kpT3v.png)
|
It works perfectly:
```
CREATE TABLE sample(dex INT, col VARCHAR(100));
INSERT INTO sample(dex, col)
VALUES (2, 'This is line 1.' + CHAR(13)+CHAR(10) + 'This is line 2.');
SELECT *
FROM sample;
```
`LiveDemo`
Output:
[](https://i.stack.imgur.com/64L3H.png)
The "problem" is `SSMS` grid view that skips newline characters (and others too). Otherwise you will get different rows height like in `Excel`.
---
You could observe the same behaviour in `SEDE`.
`LiveDemo-SEDE`, `LiveDemo-SEDE-TextView`
Output:
[](https://i.stack.imgur.com/E6fvz.png)
[](https://i.stack.imgur.com/X6RG1.png)
---
You could compare it using:
```
SELECT 'This is line 1.' + CHAR(13)+CHAR(10) + 'This is line 2.';
PRINT 'This is line 1.' + CHAR(13)+CHAR(10) + 'This is line 2.';
```
|
SQL: Insert a linebreak in varchar string
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a query that gives me the following results. This result set contains the times when jobs on a server started as well as when they finished.
```
JobName LatestStartTime LatestEndTime
Job1 2016-04-15 00:00:40.000 2016-04-15 00:07:40.000
Job2 2016-04-15 00:01:23.000 2016-04-15 00:17:37.000
Job3 2016-04-15 08:00:03.000 2016-04-15 08:18:05.000
Job4 2016-04-15 08:30:06.000 2016-04-15 08:57:21.000
Job5 2016-04-15 09:00:03.000 2016-04-15 09:07:49.000
Job6 2016-04-15 03:53:40.000 2016-04-15 03:53:41.000
Job7 2016-04-15 09:30:07.000 2016-04-15 11:36:35.000
```
On the other hand, I have a query that creates a temp table with 15 min intervals. As the following:
```
Increment
2016-04-15 00:00:00.000
2016-04-15 00:15:00.000
2016-04-15 00:30:00.000
2016-04-15 00:45:00.000
2016-04-15 01:00:00.000
2016-04-15 01:15:00.000
2016-04-15 01:30:00.000
2016-04-15 01:45:00.000
2016-04-15 02:00:00.000
2016-04-15 02:15:00.000
2016-04-15 02:30:00.000
2016-04-15 02:45:00.000
```
I want to know which jobs run at a determined increment. For example, my final result set should look something like:
```
Increment NumberOfJobs JobNames
2016-04-15 00:00:00.000 2 Job1, Job2
2016-04-15 00:15:00.000 1 Job2
2016-04-15 00:30:00.000 0 NULL
```
OR
```
Increment NumberOfJobs JobNames
2016-04-15 00:00:00.000 2 Job1
2016-04-15 00:00:00.000 2 Job2
2016-04-15 00:15:00.000 1 Job2
2016-04-15 00:30:00.000 0 NULL
```
|
The difficult part is getting the list of jobs on a single row. The counts are easy:
```
select t.increment, j.jobname,
sum(count(*)) over (partition by t.increment) as countOfJobs
from times t left join
jobs j
     on t.increment >= j.lateststarttime and
        t.increment <= j.latestendtime
group by t.increment, j.jobname
order by increment, jobname;
```
Getting the list requires a weird subquery in SQL Server:
```
select t.increment, count(*) as numJobs,
stuff((select ', ' + j2.jobname
from jobs j2
              where t.increment >= j2.lateststarttime and
                    t.increment <= j2.latestendtime
              for xml path ('')
             ), 1, 2, '') as jobs
from times t left join
jobs j
     on t.increment >= j.lateststarttime and
        t.increment <= j.latestendtime
group by t.increment
order by increment;
```
If the job names contain unusual characters (those that need escaping for XML), then the XML logic is slightly more complex. That seems unlikely for job names, though.
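For a portable illustration of the same idea, here is a sketch in SQLite (bundled with Python) that uses `GROUP_CONCAT` in place of the SQL Server `FOR XML PATH` trick, and a bucket-overlap join (the job overlaps the 15-minute window) rather than the instant-membership join above; the job times are invented:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
CREATE TABLE times (increment TEXT);
CREATE TABLE jobs (jobname TEXT, starttime TEXT, endtime TEXT);
INSERT INTO times VALUES
  ('2016-04-15 00:00:00'), ('2016-04-15 00:15:00'), ('2016-04-15 00:30:00');
INSERT INTO jobs VALUES
  ('Job1', '2016-04-15 00:01:00', '2016-04-15 00:08:00'),
  ('Job2', '2016-04-15 00:02:00', '2016-04-15 00:18:00');
""")
rows = conn.execute("""
    SELECT t.increment,
           COUNT(j.jobname)              AS numjobs,
           GROUP_CONCAT(j.jobname, ', ') AS jobs
    FROM times t
    LEFT JOIN jobs j
      ON j.starttime < datetime(t.increment, '+15 minutes')
     AND j.endtime  >= t.increment
    GROUP BY t.increment
    ORDER BY t.increment
""").fetchall()
print(rows)
```

Empty buckets come back with a count of 0 and a `NULL` job list, matching the shape the question asked for.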
|
try this:
```
SELECT [increment], JobName
, Count(1) over (Partition by [increment]) AS NumberOfJobs
FROM temp_table
INNER JOIN (
SELECT JobName, st.t AS t1, LatestEndTime AS t2
FROM query AS q
OUTER APPLY (
SELECT Max([increment]) AS t
FROM temp_table
WHERE [increment] < q.LatestStartTime) as st
) as j ON Increment BETWEEN j.t1 and j.t2
```
it returns:
```
increment JobName NumberOfJobs
2016-04-15 00:00:00.000 Job1 2
2016-04-15 00:00:00.000 Job2 2
2016-04-15 00:15:00.000 Job2 1
2016-04-15 02:45:00.000 Job3 5
2016-04-15 02:45:00.000 Job4 5
2016-04-15 02:45:00.000 Job5 5
2016-04-15 02:45:00.000 Job6 5
2016-04-15 02:45:00.000 Job7 5
```
with the left join :
```
SELECT [increment], JobName
, Count(JobName) over (Partition by [increment]) AS NumberOfJobs
FROM temp_table
LEFT JOIN (
SELECT JobName, st.t AS t1, LatestEndTime AS t2
FROM query AS q
OUTER APPLY (
SELECT Max([increment]) AS t
FROM temp_table
WHERE [increment] < q.LatestStartTime) as st
) as j ON Increment BETWEEN j.t1 and j.t2
```
returns:
```
2016-04-15 00:00:00.000 Job1 2
2016-04-15 00:00:00.000 Job2 2
2016-04-15 00:15:00.000 Job2 1
2016-04-15 00:30:00.000 NULL 0
2016-04-15 00:45:00.000 NULL 0
2016-04-15 01:00:00.000 NULL 0
2016-04-15 01:15:00.000 NULL 0
2016-04-15 01:30:00.000 NULL 0
2016-04-15 01:45:00.000 NULL 0
2016-04-15 02:00:00.000 NULL 0
2016-04-15 02:15:00.000 NULL 0
2016-04-15 02:30:00.000 NULL 0
2016-04-15 02:45:00.000 Job3 5
2016-04-15 02:45:00.000 Job4 5
2016-04-15 02:45:00.000 Job5 5
2016-04-15 02:45:00.000 Job6 5
2016-04-15 02:45:00.000 Job7 5
```
|
Doing Counts based on timestamps
|
[
"",
"sql",
"sql-server",
"timestamp",
""
] |
I'd like to order pairs (or group of 3,4 etc.) of rows given the SUM of a certain value.
The rows are consecutive based on the concatenation of Name+Surname+Age
To better understand given the following table:
```
ID Name Surname Age Salary
------------------------------
1 John Smith 30 2
2 John Smith 30 10
3 Rick James 22 300
4 Rick James 22 1000
5 Rick James 22 5
6 Mike Brown 50 200
7 Mike Brown 50 20
```
I'd like to have a final table that should be ordered DESC by the sum of Salary of each Name+Surname+Age and keeping the rows with same Name+Surname+Age next to each others despite the ID column is different. This would be the expected result:
```
ID Name Surname Age Salary
------------------------------
3 Rick James 22 300
4 Rick James 22 1000
5 Rick James 22 5
6 Mike Brown 50 200
7 Mike Brown 50 20
1 John Smith 30 2
2 John Smith 30 10
```
As you can see the rows with Name+Surname+Age = "Rick James 22" are on the top since their total sum would be 1305, followed by "Mike Brown 50" (sum = 220) and "John Smith 30" (sum = 12).
Additionally, the number of rows has to be the same in the resulting table.
How can I do that using Oracle SQL?
Thanks for any help
|
```
SELECT t.*,
COALESCE(SUM(salary) OVER (PARTITION BY name, surname, age), 0) ss
FROM mytable t
ORDER BY
ss DESC
```
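A quick way to verify the ordering, using SQLite (3.25+ supports window functions) with the sample rows from the question:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
CREATE TABLE people (id INTEGER, name TEXT, salary INTEGER);
INSERT INTO people VALUES
 (1,'John',2),(2,'John',10),
 (3,'Rick',300),(4,'Rick',1000),(5,'Rick',5),
 (6,'Mike',200),(7,'Mike',20);
""")
# Order rows by the per-group salary sum, keeping group members adjacent.
ids = [r[0] for r in conn.execute("""
    SELECT id FROM people
    ORDER BY SUM(salary) OVER (PARTITION BY name) DESC, id
""")]
print(ids)
```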
|
Or try this:
```
SELECT ID, Name, Surname, Age, Salary
FROM mytable
ORDER BY SUM(Salary) OVER (PARTITION BY Name, Surname, Age) DESC, ID
```
|
Order pairs/triplets of rows given the sum of a column
|
[
"",
"sql",
"oracle",
""
] |
I have a table in a SQL Server 2008 R2 database that defines transitions between different states.
Example relevant table columns:
```
TransitionID | SourceStateID | DestinationStateID | WorkflowID
--------------------------------------------------------------
1 | 17 | 18 | 3
2 | 17 | 21 | 3
3 | 17 | 24 | 3
4 | 17 | 172 | 3
5 | 18 | 17 | 3
6 | 18 | 24 | 3
7 | 18 | 31 | 3
8 | 21 | 17 | 3
9 | 21 | 20 | 3
10 | 21 | 141 | 3
11 | 21 | 184 | 3
```
... etc.
My goal is to be able to define a *starting* StateID, an *ending* StateID, and the WorkflowID, and hopefully find a logical path from the starting StateID to the ending StateID within that workflow.
e.g. For the table above, if I supplied a starting StateID of 17, and ending StateID of 184, and a WorkflowID of 3, I could wind up with a 'path' of 17>21>184 (or more ideally, TransitionID 2 > TransitionID 11)
**The Good:**
* The Starting and Ending StateID's defined will *always* have a **possible** path within the defined WorkflowID
**The Bad:**
* There are certainly circular references on virtually every source/destination StateID (i.e. there's likely a transition from SourceStateID 1 to DestinationStateID 2, and also from SourceStateID 2 to DestinationStateID 1)
* There are certainly multiple possible paths from virtually any defined starting StateID and ending StateID
I realize it's some manner of CTE I'm after, but admittedly I find CTE's hard to grasp in general, and doubly so when circular references are a guaranteed problem.
The perfect solution would be one that selects the shortest path from the starting StateID to the ending StateID, but to be honest at this point if I can get something working that will reliably give me **any** valid path between the two states I'll be so happy.
Any SQL gurus out there have some direction you could point me in? I honestly don't even know where to begin to prevent circular issues like getting a path along the lines of 17>18>17>18>17>18...
**DISGUSTING UPDATE**
I came up with this query which hurts me on an emotional level (with some help from this post: <https://stackoverflow.com/a/11042012/3253311>), but appears to be working.
```
DECLARE @sourceStatusId INT = 17
DECLARE @destinationStatusId INT = 24
DECLARE @workflowId INT = 3
DECLARE @sourceStatusIdString NVARCHAR(MAX) = CAST(@sourceStatusId AS NVARCHAR(MAX))
DECLARE @destinationStatusIdString NVARCHAR(MAX) = CAST(@destinationStatusId AS NVARCHAR(MAX))
DECLARE @workflowIdString NVARCHAR(MAX) = CAST(@workflowId AS NVARCHAR(MAX))
;WITH CTE ([Source], [Destination], [Sentinel]) AS
(
SELECT
CAST(t.[Source] AS NVARCHAR(MAX)) AS [Source],
CAST(t.[Destination] AS NVARCHAR(MAX)) AS [Destination],
[Sentinel] = CAST([Source] AS NVARCHAR(MAX))
FROM
Transitions t
WHERE
CAST([Source] AS NVARCHAR(MAX)) = @sourceStatusIdString AND
CAST([WorkflowID] AS NVARCHAR(MAX)) = @workflowIdString
UNION ALL
SELECT
CAST(CTE.[Destination] AS NVARCHAR(MAX)),
CAST(t.[Destination] AS NVARCHAR(MAX)),
CAST([Sentinel] AS NVARCHAR(MAX)) + CAST('|' AS NVARCHAR(MAX)) + CAST(CTE.[Destination] AS NVARCHAR(MAX))
FROM
CTE
JOIN Transitions t
ON CAST(t.[Source] AS NVARCHAR(MAX)) = CAST(CTE.[Destination] AS NVARCHAR(MAX))
WHERE
CHARINDEX(CTE.[Destination], Sentinel)=0
)
SELECT TOP 1
[Sentinel]
FROM
CTE
WHERE
LEFT([Sentinel], LEN(@sourceStatusIdString)) = @sourceStatusIdString AND
RIGHT([Sentinel], LEN(@destinationStatusIdString)) = @destinationStatusIdString
ORDER BY
LEN([Sentinel])
```
|
Similar to Quassnoi's answer, but prevents circular references that start after the first element:
```
DECLARE @src int = 18, @dst int = 184;
WITH cte AS
(Select TransitionId, SourceStateId, DestinationStateID, SourceStateID as StartingState, 0 as Depth
, cast(TransitionId as varchar(max)) + ',' as IDPath
, cast(SourceStateID as varchar(max)) + ',' as States
FROM Transitions
WHERE SourceStateId = @src
UNION ALL
Select t.TransitionId, t.SourceStateId, t.DestinationStateID, StartingState, cte.Depth + 1
, cte.IDPath + cast(t.TransitionId as varchar(max)) + ','
, cte.States + cast(t.SourceStateID as varchar(max)) + ','
FROM cte JOIN Transitions t on cte.DestinationStateID = t.SourceStateId
and CHARINDEX(',' + cast(t.SourceStateID as varchar(max)) + ',', States) = 0 --prevent loop starting after first element
and t.DestinationStateID <> StartingState --prevent loop starting with first element
where cte.Depth < 10 -- prevent going too deep
)
select TransitionId, SourceStateId, DestinationStateID, Depth, left(IDPath, len(IDPath) - 1) IDPath, left(States, len(States) - 1) States
from cte
where DestinationStateID = @dst
order by Depth
```
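The sentinel-path idea translates directly to any database with standard recursive CTEs; below is a minimal SQLite sketch over a made-up transitions table, where the delimited path string doubles as the cycle guard:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
CREATE TABLE transitions (src INTEGER, dst INTEGER);
-- Deliberately includes the 17 <-> 18 circular reference.
INSERT INTO transitions VALUES (17,18),(18,17),(17,21),(21,184);
""")
row = conn.execute("""
WITH RECURSIVE walk(node, path) AS (
    SELECT 17, '17'
    UNION ALL
    SELECT t.dst, walk.path || '>' || t.dst
    FROM walk
    JOIN transitions t ON t.src = walk.node
    -- Cycle guard: never revisit a node already on the path.
    WHERE ('>' || walk.path || '>') NOT LIKE ('%>' || t.dst || '>%')
)
SELECT path FROM walk
WHERE node = 184
ORDER BY length(path)
LIMIT 1
""").fetchone()
print(row[0])
```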
|
```
WITH q (id, src, dst, sid, cnt, path) AS
(
SELECT transitionId, sourceStateId, destinationStateId, sourceStateId, 1,
CAST(transitionId AS VARCHAR(MAX)) path
FROM mytable
WHERE sourceStateId = 18
UNION ALL
SELECT m.transitionId, m.sourceStateId, m.destinationStateId,
CASE WHEN sid < sourceStateId THEN sid ELSE sourceStateId END, cnt + 1,
path + ', ' + CAST(transitionId AS VARCHAR(MAX))
FROM q
JOIN mytable m
ON m.sourceStateId = q.dst
AND m.sourceStateId <> q.sid
)
SELECT TOP 1
*
FROM q
WHERE dst = 184
ORDER BY
cnt DESC
```
See the fiddle: <http://www.sqlfiddle.com/#!6/9342e/17>
|
SQL Server CTE to find path from one ID to another ID
|
[
"",
"sql",
"sql-server",
"common-table-expression",
"hierarchical",
"hierarchical-query",
""
] |
I have the following tables:
```
Employees
-------------
ClockNo int
CostCentre varchar
Department int
```
and
```
Departments
-------------
DepartmentCode int
CostCentreCode varchar
Parent int
```
Departments can have other departments as parents, meaning there is infinite hierarchy. All departments belong to a cost centre and so will always have a `CostCentreCode`. If `parent = 0`, it is a top-level department.
Employees *must* have a `CostCentre` value but may have a `Department` of 0, meaning they are not in a department.
**What I want to try and generate is a query that will give the up to four levels of hierarchy. Like this:**
```
EmployeesLevels
-----------------
ClockNo
CostCentre
DeptLevel1
DeptLevel2
DeptLevel3
DeptLevel4
```
I've managed to get something to display the department structure on its own, but I can't work out how to link this to the employees without creating duplicate employee rows:
```
SELECT d1.Description AS lev1, d2.Description as lev2, d3.Description as lev3, d4.Description as lev4
FROM departments AS d1
LEFT JOIN departments AS d2 ON d2.parent = d1.departmentcode
LEFT JOIN departments AS d3 ON d3.parent = d2.departmentcode
LEFT JOIN departments AS d4 ON d4.parent = d3.departmentcode
WHERE d1.parent=0;
```
---
SQL To create Structure and some sample data:
```
CREATE TABLE Employees(
ClockNo integer NOT NULL PRIMARY KEY,
CostCentre varchar(20) NOT NULL,
Department integer NOT NULL);
CREATE TABLE Departments(
DepartmentCode integer NOT NULL PRIMARY KEY,
CostCentreCode varchar(20) NOT NULL,
Parent integer NOT NULL
);
CREATE INDEX idx0 ON Employees (ClockNo);
CREATE INDEX idx1 ON Employees (CostCentre, ClockNo);
CREATE INDEX idx2 ON Employees (CostCentre);
CREATE INDEX idx0 ON Departments (DepartmentCode);
CREATE INDEX idx1 ON Departments (CostCentreCode, DepartmentCode);
INSERT INTO Employees VALUES (1, 'AAA', 0);
INSERT INTO Employees VALUES (2, 'AAA', 3);
INSERT INTO Employees VALUES (3, 'BBB', 0);
INSERT INTO Employees VALUES (4, 'BBB', 4);
INSERT INTO Employees VALUES (5, 'CCC', 0);
INSERT INTO Employees VALUES (6, 'AAA', 1);
INSERT INTO Employees VALUES (7, 'AAA', 5);
INSERT INTO Employees VALUES (8, 'AAA', 15);
INSERT INTO Departments VALUES (1, 'AAA', 0);
INSERT INTO Departments VALUES (2, 'AAA', 1);
INSERT INTO Departments VALUES (3, 'AAA', 1);
INSERT INTO Departments VALUES (4, 'BBB', 0);
INSERT INTO Departments VALUES (5, 'AAA', 3);
INSERT INTO Departments VALUES (12, 'AAA', 5);
INSERT INTO Departments VALUES (15, 'AAA', 12);
```
This gives the following structure (employee clock numbers in square brackets):
```
Root
|
|---AAA [1]
| \---1 [6]
| |---2
| \---3 [2]
| \---5 [7]
| \---12
| \---15 [8]
|
|---BBB [3]
| \---4 [4]
|
\---CCC [5]
```
**The query should return the following:**
```
ClockNo CostCentre Level1 Level2 Level3 Level4
1 AAA
2 AAA 1 3
3 BBB
4 BBB 4
5 CCC
6 AAA 1
7 AAA 1 3 5
8 AAA 1 3 5 12 *
```
`*` In the case of Employee 8, they are in level5. Ideally I would like to show all their levels down to level4, but I am happy just to show the CostCentre in this case
|
SunnyMagadan's query is good. But depending on the number of employees in a department, you may wish to try the following one, which leaves the DB optimizer an opportunity to traverse the department hierarchy only once per department instead of repeating it for every employee in the department.
```
SELECT e.ClockNo, e.CostCentre, Level1, Level2, Level3, Level4
FROM Employees e
LEFT JOIN
(SELECT
d1.departmentcode
, d1.CostCentreCode
, coalesce (d4.departmentcode, d3.departmentcode
, d2.departmentcode, d1.departmentcode) AS Level1
, case when d4.departmentcode is not null then d3.departmentcode
when d3.departmentcode is not null then d2.departmentcode
when d2.departmentcode is not null then d1.departmentcode end as Level2
, case when d4.departmentcode is not null then d2.departmentcode
when d3.departmentcode is not null then d1.departmentcode end as Level3
, case when d4.departmentcode is not null then d1.departmentcode end as Level4
FROM departments AS d1
LEFT JOIN departments AS d2 ON d1.parent = d2.departmentcode
LEFT JOIN departments AS d3 ON d2.parent = d3.departmentcode
LEFT JOIN departments AS d4 ON d3.parent = d4.departmentcode) d
ON d.DepartmentCode = e.Department AND d.CostCentreCode = e.CostCentre
;
```
**EDIT** Regarding level 5+ departments.
Any fixed-step query cannot get the top 4 levels for them. So change the above query just to mark them in some way, -1 for example.
```
, case when d4.Parent > 0 then NULL else
coalesce (d4.departmentcode, d3.departmentcode
, d2.departmentcode, d1.departmentcode) end AS Level1
```
and so on.
|
When we join the tables, we should stop further traversal of the path once we have found the proper department that belongs to the Employee at the previous level.
We also have an exceptional case when `Employee.Department = 0`. In this case we shouldn't join any of the departments, because the Department is the Root.
We need to choose only those records which contain the employee's Department at one of the levels.
In case the employee's department level is greater than 4, we should expand all 4 levels of departments and show them as is (even if we can't reach the desired department level within the expanded ones).
```
select e.ClockNo,
e.CostCentre,
d1.DepartmentCode as Level1,
d2.DepartmentCode as Level2,
d3.DepartmentCode as Level3,
d4.DepartmentCode as Level4
from Employees e
left join Departments d1
on e.CostCentre=d1.CostCentreCode
and d1.Parent=0
and ((d1.DepartmentCode = 0 and e.Department = 0) or e.Department <> 0)
left join Departments d2
on d2.parent=d1.DepartmentCode
and (d1.DepartMentCode != e.Department and e.Department<>0)
left join Departments d3
on d3.parent=d2.DepartmentCode
and (d2.DepartMentCode != e.Department and e.Department<>0)
left join Departments d4
on d4.parent=d3.DepartmentCode
and (d3.DepartMentCode != e.Department and e.Department<>0)
where e.Department=d1.DepartmentCode
or e.Department=d2.DepartmentCode
or e.Department=d3.DepartmentCode
or e.Department=d4.DepartmentCode
or e.Department=0
or (
(d1.DepartmentCode is not null) and
(d2.DepartmentCode is not null) and
(d3.DepartmentCode is not null) and
(d4.DepartmentCode is not null)
)
order by e.ClockNo;
```
|
How can I Ascertain the structure for each person from a self referencing table
|
[
"",
"sql",
"hierarchy",
"self-join",
"pervasive-sql",
""
] |
I have a crosstab query in MS Access that I want to replicate in T-SQL.
The T-SQL table `#tmpZSPO_DMD` has Part, Location, Qty, and FiscalMonthPeriod columns; when I query it, the data looks like below.
```
Part LOCATION Qty FiscalMonthPeriod
123 4040_0086 1 CON00
123 4040_0086 1 CON00
123 4200_0010 1 CON00
123 2070_0060 2 CON01
123 2080_0061 1 CON01
123 4040_0070 1 CON02
123 4040_0070 2 CON02
123 4040_0086 1 CON02
123 2020_0060 2 CON03
123 2020_0064 1 CON03
123 2040_0060 1 CON03
123 4040_0061 1 CON03
123 4040_0061 1 CON03
123 4040_0069 1 CON03
123 4040_0070 1 CON03
```
I am looking to achieve the below result.
```
Part LOCATION CON00 CON01 CON02 CON03
123 2020_0060 2
123 2020_0064 1
123 2040_0060 1
123 2070_0060 2
123 2080_0061 1
123 4040_0061 2
123 4040_0069 1
123 4040_0070 3 1
123 4040_0086 2 1
123 4200_0010 1
```
|
A very simple PIVOT will do the job.
```
SELECT *
FROM
(
SELECT Part, LOCATION, Qty, FiscalMonthPeriod
FROM @Table
) t
PIVOT
(
SUM(Qty)
FOR FiscalMonthPeriod IN ([CON00], [CON01], [CON02], [CON03])
) p
```
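On databases without `PIVOT` (MySQL, SQLite, PostgreSQL), the usual equivalent is conditional aggregation; here is a minimal SQLite sketch over a cut-down copy of the sample data:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
CREATE TABLE demand (part TEXT, location TEXT, qty INTEGER, period TEXT);
INSERT INTO demand VALUES
 ('123','4040_0070',1,'CON02'),
 ('123','4040_0070',2,'CON02'),
 ('123','4040_0070',1,'CON03');
""")
# One SUM(CASE ...) per pivoted column plays the role of PIVOT.
rows = conn.execute("""
    SELECT part, location,
           SUM(CASE WHEN period = 'CON02' THEN qty END) AS CON02,
           SUM(CASE WHEN period = 'CON03' THEN qty END) AS CON03
    FROM demand
    GROUP BY part, location
""").fetchall()
print(rows)
```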
|
```
SELECT tm.PART, tm.Location,
SUM(IIF(tm.FiscalMonthPeriod= 'CON00', [Sum], NULL)) As CON00,
SUM(IIF(tm.FiscalMonthPeriod= 'CON01', [Sum], NULL)) As CON01,
SUM(IIF(tm.FiscalMonthPeriod= 'CON02', [Sum], NULL)) As CON02,
SUM(IIF(tm.FiscalMonthPeriod= 'CON03', [Sum], NULL)) As CON03
FROM #tmpZSPO_DMD tm
GROUP BY tm.PART, tm.Location;
```
|
Cross tab Pivot in TSQLwith group by and order by
|
[
"",
"sql",
"sql-server",
"crosstab",
""
] |
I have a table called SOITEM. In that table the column TOTALPRICE has to be summed and result in the total sales by month, where the column with the dates is called DATELASTFULFILLMENT.
I want to compare sales from Jan 2014 with Jan 2015, then Feb 2014 with Feb 2015 and so forth.
I got this so far, but I'm not sure how to continue.
```
Select SUM(SOITEM.TOTALPRICE)
FROM SOITEM
WHERE DATELASTFULFILLMENT>='2014-01-31' AND DATELASTFULFILLMENT<='2014-01-31'
```
but it only results in totals from Jan 2014....
Thank you.
|
You could consider grouping your results using the Month/Year from your date field and then using calculating the [`SUM()`](https://msdn.microsoft.com/en-us/library/ms187810.aspx) for each of those groups :
```
SELECT DATEPART(Year, DATELASTFULFILLMENT) AS [Year],
DATEPART(Month, DATELASTFULFILLMENT) AS [Month],
SUM(TOTALPRICE) AS Total
FROM SOITEM
GROUP BY DATEPART(Year, DATELASTFULFILLMENT), DATEPART(Month, DATELASTFULFILLMENT)
ORDER BY [Year], [Month]
```
You can [see an interactive example of this here](http://sqlfiddle.com/#!6/52119/2) and results demonstrated below :
[](https://i.stack.imgur.com/jfiOf.png)
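As a rough cross-check, the same month/year grouping can be reproduced in SQLite (bundled with Python), where `strftime` plays the role of `DATEPART`; the sample figures below are invented:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE soitem (totalprice REAL, datelastfulfillment TEXT)")
conn.executemany("INSERT INTO soitem VALUES (?, ?)", [
    (100.0, '2014-01-10'), (50.0, '2014-01-20'),
    (75.0, '2015-01-05'), (25.0, '2015-02-11'),
])
rows = conn.execute("""
    SELECT strftime('%Y', datelastfulfillment) AS yr,
           strftime('%m', datelastfulfillment) AS mo,
           SUM(totalprice) AS total
    FROM soitem
    GROUP BY yr, mo
    ORDER BY yr, mo
""").fetchall()
print(rows)
```

With the groups lined up by month, comparing Jan 2014 to Jan 2015 is just a matter of reading adjacent rows (or self-joining on the month).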
|
This works for MySQL. I assume it should be the same for MS SQL.
```
Select SUM(SOITEM.TOTALPRICE)
FROM SOITEM
WHERE DATELASTFULFILLMENT>='2014-01-31' AND DATELASTFULFILLMENT<='2014-01-31'
UNION
Select SUM(SOITEM.TOTALPRICE)
FROM SOITEM
WHERE DATELASTFULFILLMENT>='2015-01-31' AND DATELASTFULFILLMENT<='2015-01-31'
UNION
Select SUM(SOITEM.TOTALPRICE)
FROM SOITEM
WHERE DATELASTFULFILLMENT>='2014-02-31' AND DATELASTFULFILLMENT<='2014-02-31'
UNION
Select SUM(SOITEM.TOTALPRICE)
FROM SOITEM
WHERE DATELASTFULFILLMENT>='2014-02-31' AND DATELASTFULFILLMENT<='2015-02-31'
```
|
How to sum sales by months and compare them
|
[
"",
"sql",
"sql-server",
""
] |
I created a table using this query
`CREATE TABLE Store (id number(11) primary key not null, opening_time timestamp CHECK (EXTRACT(HOUR FROM opening_time) > 8 || NULL));`
Now, when I try to insert some data using `insert into Store values(1, '04/04/2012 13:35 PM');`, I encounter this error `SQL Error: ORA-01843: not a valid month`.
What is it that I am doing wrong?
|
`'04/04/2012 13:35 PM'` is not a date - it is a string.
Oracle will do an implicit `TO_DATE( string_value, format_mask )` on non-date literals when inserting them into a `DATE` column using the value of the `NLS_DATE_FORMAT` session parameter as the format mask (*note: this is a session parameter and belongs to the client; it is not a global setting*). If the non-date literal matches this format then it will work (and if it doesn't then it won't) - however, if the `NLS_DATE_FORMAT` is ever changed then it will immediately break (any be a huge pain to debug as the code that was working won't but no-one will have changed the code).
You can find out your current `NLS_DATE_FORMAT` with the query:
```
SELECT VALUE
FROM NLS_SESSION_PARAMETERS
WHERE PARAMETER = 'NLS_DATE_FORMAT';
```
It is better to explicitly use `TO_DATE()` with the correct format mask or to use an ANSI/ISO date literal (i.e. `DATE '2012-04-04'` or `TIMESTAMP '2012-04-04 13:35'`).
You can do:
```
INSERT INTO STORE ( id, opening_time )
VALUES( 1, TO_DATE( '04/04/2012 13:35', 'DD/MM/YYYY HH24:MI' );
```
*(you do not need the `AM/PM` as the hour component is already on a 24 hour clock)*
or
```
INSERT INTO STORE ( id, opening_time )
VALUES( 1, TIMESTAMP '2012-04-04 13:35:00' );
```
*(using the ANSI/ISO timestamp literal which Oracle will implicitly convert to a date)*
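The same explicit-format discipline applies client-side too; Python's `strptime` is the analogue of `TO_DATE` with a format mask:

```python
from datetime import datetime

# Explicit format mask, like TO_DATE('04/04/2012 13:35', 'DD/MM/YYYY HH24:MI').
dt = datetime.strptime('04/04/2012 13:35', '%d/%m/%Y %H:%M')
print(dt.isoformat())
```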
|
> SQL Error: ORA-01843: not a valid month
`'04/04/2012 13:35 PM'` is a **string** and not a **date**. You should always use **TO\_DATE** to explicitly convert a string into date. Never ever rely on **implicit datatype conversion**. You might just be lucky to depend on your **locale-specific NLS settings**. However, it won't work if the NLS settings are different. So, always use **TO\_DATE** to convert a literal into date using proper **format mask**.
Looking at the datetime value, there is no need to use **timestamp**; simply use the **DATE** data type. It holds precision down to seconds.
```
CREATE TABLE Store (id number(11) primary key not null,
opening_time timestamp
CHECK (EXTRACT(HOUR FROM opening_time) > 8 || NULL));
insert into Store
values(1, TO_DATE('04/04/2012 13:35 PM', 'DD/MM/YYYY HH:MI', 'nls_date_language=ENGLISH'));
```
You could also use the **[ANSI date/timestamp literal](http://docs.oracle.com/cd/B19306_01/server.102/b14200/sql_elements003.htm#BABGIGCJ)** which uses a **fixed format** and therefore it is **NLS independent**.
```
DATE '2012-04-04'
```
or
```
TIMESTAMP '2012-04-04 13:35:00'
```
|
Encountering SQL Error: ORA-01843: not a valid month
|
[
"",
"sql",
"oracle",
"date-formatting",
""
] |
I want to delete all rows in both tables where the chart\_id is 1, but it won't work and I don't have any clue why.
```
DELETE `cms_module_charts`
FROM `cms_module_charts`
INNER JOIN `cms_module_charts_kategorie`
ON `cms_module_charts_kategorie`.`chart_id`=`cms_module_charts`.`chart_id`
WHERE `chart_id`= 1
```
This is the error:
Unexpected character. (near "`cms_module_charts`" at position 7)
|
From the [MySQL Docs](http://dev.mysql.com/doc/refman/5.7/en/delete.html) it looks like you can do this easily:
```
DELETE t1, t2
FROM t1 INNER JOIN t2 INNER JOIN t3
WHERE t1.id=t2.id AND t2.id=t3.id;
```
OR
```
DELETE
FROM t1, t2
USING t1 INNER JOIN t2 INNER JOIN t3
WHERE t1.id=t2.id AND t2.id=t3.id;
```
It also looks like the newer, preferable, `JOIN` standard is acceptable and I have no idea why your query is complaining. Are you sure you haven't got any strange characters in your query?
|
I think in your database schema you should use `ON DELETE CASCADE`; then when you delete a row, its references are deleted too. Deleting via a join like this doesn't really make sense.
|
How to delete rows from two tables using INNER JOIN in mysql?
|
[
"",
"mysql",
"sql",
"database",
"join",
"inner-join",
""
] |
I have a view with this structure:
```
EntryId | EntryName | ParentEntryId | Depth | DatePosted
```
What I want to do is to write an SQL query that will bring the first 2 entries with Depth=0 along with the first descendants (based on the ParentEntryId). Below, I provided an example output.
```
EntryId | EntryName | ParentEntryId | Depth | DatePosted | ChildCount
1 | a | NULL | 0 | 1/12/2012 | 2
4 | b | 1 | 1 | 1/14/2012 | 5
13 | c | 1 | 1 | 1/15/2012 | 3
3 | d | NULL | 0 | 1/11/2012 | 1
12 | e | 3 | 1 | 1/14/2012 | 0
```
I know I can bring the entries with depth = 0 easily like this:
```
SELECT TOP 10 FROM Entries WHERE Depth=0 ORDER BY DatePosted DESC
```
However, I am not sure how to bring the associated sub-entries. For example for the main entry with Id=1, I want to bring the entries (first descendants) whose ParentEntryId = 1. I also need to bring the count of the child entries of these first descendants. Any ideas?
|
You can also do this with a recursive CTE. You should make sure the performance meets your standards, though, if you're using this on a large record set.
```
;WITH cte AS (
SELECT [EntryId],
[EntryName],
[ParentEntryId],
[Depth],
[DatePosted],
[EntryId] [Root],
ROW_NUMBER() OVER (ORDER BY DatePosted DESC) [Rn],
CAST(EntryId AS VARCHAR(MAX)) [Path]
FROM Entries
WHERE [Depth] = 0
UNION ALL
SELECT e.[EntryId],
e.[EntryName],
e.[ParentEntryId],
e.[Depth],
e.[DatePosted],
[Root],
Rn,
[Path] + ',' + CAST(e.EntryId AS VARCHAR(MAX))
FROM Entries e
JOIN cte ON cte.EntryID = e.ParentEntryId
)
SELECT [EntryId],
[EntryName],
[ParentEntryId],
[Depth],
[DatePosted],
ChildCount
FROM cte c1
OUTER APPLY (SELECT COUNT (*) - 1 AS ChildCount
FROM cte c2
WHERE c2.[Path] LIKE c1.[Path] + '%'
) oa
WHERE Rn <= 2 -- only gets the first 2 records with depth = 0
AND Depth <= 1 -- limit to only top level child records
ORDER BY [Root],
[ParentEntryID]
```
|
Based on your updated question, the below query will yield the desired results:
```
SELECT
EntryId , EntryName , ParentEntryId , Depth , DatePosted, ChildCount
FROM
(
SELECT
TOP 10
E1.EntryId , E1.EntryName , E1.ParentEntryId , E1.Depth , E1.DatePosted,
(
SELECT
COUNT(1)
FROM Entries E2
WHERE E2.ParentEntryID =E1.EntryID
) as ChildCount
FROM Entries E1
WHERE E1.Depth=0
UNION
SELECT
E1.EntryId , E1.EntryName , E1.ParentEntryId , E1.Depth , E1.DatePosted,
(
SELECT
COUNT(1)
FROM Entries E3
WHERE E3.ParentEntryID =E1.EntryID
) as ChildCount
FROM Entries E1
LEFT JOIN Entries E2 ON E1.ParentEntryID= E2.EntryID AND E2.Depth=0
)
ORDER BY ParentEntryID , Depth ASC, DatePosted
```
|
SQL Query to read a certain number of records with nested data
|
[
"",
"sql",
"sql-server",
""
] |
How to combine select distinct and order by a non-selected attribute or any alternative way?
I have a table called message
```
+----+------+-----------+-------------+
| id | body | sender_id | receiver_id |
+----+------+-----------+-------------+
| 10 | ... | 1 | 2 |
| 28 | ... | 1 | 3 |
| 29 | ... | 2 | 1 |
| 30 | ... | 2 | 1 |
| 31 | ... | 1 | 2 |
| 32 | ... | 3 | 1 |
| 37 | ... | 1 | 47 |
+----+------+-----------+-------------+
```
Is there a way to select a unique list of the latest receiver\_ids (ORDER BY id DESC) and limiting the list by 10 items ?
The result should be
```
+-------------+
| receiver_id |
+-------------+
| 47 |
| 1 |
| 2 |
| 3 |
+-------------+
```
As we know:
```
SELECT DISTINCT receiver_id FROM message ORDER BY id
```
Is not a valid sql because the id attribute must be in the select list
|
Here's one way to do it:
```
select receiver_id from
(select receiver_id, max(id) max_id from message group by receiver_id) ilv
order by max_id desc limit 10;
```
|
I'm not sure what "order by not-selected attribute" means. But the basic query can be handled using `where` and a subquery (or a `join` and explicit aggregation):
```
select m.receiver_id
from message m
where m.id = (select max(m2.id) from message m2 where m2.sender = m.sender)
order by id desc;
```
I am guessing that the ordering criterion is `id desc`.
|
How to combine select distinct and order by a non-selected attribute or any alternative way?
|
[
"",
"mysql",
"sql",
"select",
""
] |
I have the following two tables (populated with more data):
**rating**
```
ID | user_id | rating_value | date
1 1 0.6 2016-04-02
2 2 0.75 2016-04-05
3 1 0.4 2016-04-08
4 2 0.5 2016-04-12
```
**recommendation**
```
ID | user_id | recommendation_text | date
1 1 'a' 2016-04-03
2 2 'b' 2016-04-07
3 1 'c' 2016-04-09
```
I would like to select the recommendation\_text, user\_id, and latest rating value from the rating table for each row in the recommendation table.
I am having trouble returning only the latest rating value:
```
SELECT rec.user_id, rec.recommendation_text, rec.date, rating.rating_value, rating.date
FROM recommendation AS rec
JOIN rating
ON rec.user_id = rating.user_id;
```
Returns (as expected) all values joined to the recommendations. The end result I'd like to produce is:
```
user_id | recommendation_text | recommendation_date | rating_value | rating_date
1 'a' 2016-04-03 0.6 2016-04-02
2 'b' 2016-04-07 0.75 2016-04-05
1 'c' 2016-04-09 0.4 2016-04-08
```
|
You can try the query below, which will work in **PostgreSQL**, **MySQL**, and **SQL Server**:
```
SELECT rec.user_id, rec.recommendation_text, rec.date, rating.rating_value, rating.date
FROM recommendation AS rec
JOIN rating ON rec.user_id = rating.user_id
JOIN
(SELECT rec.id as id1, max(rating.id) as id2
FROM recommendation AS rec
JOIN rating
ON rec.user_id = rating.user_id AND rec.date > rating.date
GROUP BY rec.id ) t
on t.id1 = rec.id and t.id2 = rating.id;
```
**[Demo link here](http://sqlfiddle.com/#!15/157e4/1)**
**update based on comment:**
> FYI, that query will only work if the id's in your rating table are in the same order as the dates in the rating table. If you end up with a record with a higher ID but an earlier date, then it will return the higher ID instead of the later date
```
SELECT rec.user_id, rec.recommendation_text, rec.date, rating.rating_value, rating.date
FROM recommendation AS rec
JOIN rating ON rec.user_id = rating.user_id
JOIN
(SELECT rec.id as id1, max(rating.date) as max_date
FROM recommendation AS rec
JOIN rating
ON rec.user_id = rating.user_id AND rec.date > rating.date
GROUP BY rec.id ) t
on t.id1 = rec.id and t.max_date = rating.date;
```
**[updated Demo link](http://sqlfiddle.com/#!15/157e4/4)**
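To double-check the date-based version against the expected output, here's a small SQLite reproduction in Python using the rows from the question (SQLite stands in for PostgreSQL here; ISO date strings compare correctly either way):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rating (id INT, user_id INT, rating_value REAL, date TEXT)")
conn.execute("CREATE TABLE recommendation (id INT, user_id INT, recommendation_text TEXT, date TEXT)")
conn.executemany("INSERT INTO rating VALUES (?,?,?,?)",
                 [(1, 1, 0.6, '2016-04-02'), (2, 2, 0.75, '2016-04-05'),
                  (3, 1, 0.4, '2016-04-08'), (4, 2, 0.5, '2016-04-12')])
conn.executemany("INSERT INTO recommendation VALUES (?,?,?,?)",
                 [(1, 1, 'a', '2016-04-03'), (2, 2, 'b', '2016-04-07'),
                  (3, 1, 'c', '2016-04-09')])

# Derived table t pairs each recommendation with the latest rating date
# that precedes it; the outer join then pulls that rating's value
rows = conn.execute("""
    SELECT rec.user_id, rec.recommendation_text, rec.date,
           rating.rating_value, rating.date
    FROM recommendation AS rec
    JOIN rating ON rec.user_id = rating.user_id
    JOIN (SELECT rec.id AS id1, MAX(rating.date) AS max_date
          FROM recommendation AS rec
          JOIN rating ON rec.user_id = rating.user_id AND rec.date > rating.date
          GROUP BY rec.id) t
      ON t.id1 = rec.id AND t.max_date = rating.date
    ORDER BY rec.id
""").fetchall()
for r in rows:
    print(r)
```

This matches the table the asker wanted: each recommendation is paired with the most recent rating dated before it.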
|
Another method, without a subselect:
```
select rec.user_id, rec.recommendation_text, rec.dates, rat.rating_value, rat.dates
from recommendation as rec
left join rating as rat on rat.user_id = rec.user_id and rat.dates < rec.dates
left join rating as rat1
on rat.user_id = rat1.user_id and rat.dates < rat1.dates and rat1.dates < rec.dates
where rat1.id is null
```
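The same sample data can be used to verify the double-LEFT-JOIN trick (column name `dates` kept as in the answer; SQLite is used here purely as a quick check):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rating (id INT, user_id INT, rating_value REAL, dates TEXT)")
conn.execute("CREATE TABLE recommendation (id INT, user_id INT, recommendation_text TEXT, dates TEXT)")
conn.executemany("INSERT INTO rating VALUES (?,?,?,?)",
                 [(1, 1, 0.6, '2016-04-02'), (2, 2, 0.75, '2016-04-05'),
                  (3, 1, 0.4, '2016-04-08'), (4, 2, 0.5, '2016-04-12')])
conn.executemany("INSERT INTO recommendation VALUES (?,?,?,?)",
                 [(1, 1, 'a', '2016-04-03'), (2, 2, 'b', '2016-04-07'),
                  (3, 1, 'c', '2016-04-09')])

# rat = a rating before the recommendation; rat1 = a later rating that is
# still before the recommendation. Keeping only rat1 IS NULL rows leaves
# the latest qualifying rating per recommendation (an anti-join).
rows = conn.execute("""
    SELECT rec.user_id, rec.recommendation_text, rat.rating_value
    FROM recommendation AS rec
    LEFT JOIN rating AS rat
      ON rat.user_id = rec.user_id AND rat.dates < rec.dates
    LEFT JOIN rating AS rat1
      ON rat.user_id = rat1.user_id AND rat.dates < rat1.dates
     AND rat1.dates < rec.dates
    WHERE rat1.id IS NULL
    ORDER BY rec.id
""").fetchall()
print(rows)
```

One nice property of this variant: because `rat` is LEFT JOINed, a recommendation with no earlier rating would still appear, just with a NULL rating value.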
|
SQL Join Sorting and Limiting Results
|
[
"",
"sql",
"postgresql",
"join",
""
] |
I have the following code:
```
SELECT cr_ts, msg_id, info
FROM messaging_log
WHERE UPPER(INFO) LIKE '%' || TRIM(UPPER(P_INFO)) || '%'
AND cr_ts >= NVL(P_DATE, SYSDATE-10)
AND msg_id = NVL(P_ID, msg_id)
ORDER BY cr_ts desc;
```
where P\_INFO, P\_DATE and P\_ID are parameters. In the normal case every parameter is NULL. But then the where clause `AND msg_id = NVL(P_ID, msg_id)` doesn't return the rows where msg\_id itself is NULL. What did I miss?
How can I return all entries when P\_ID and msg\_id are both NULL?
|
Try this;)
```
SELECT
cr_ts,
msg_id,
info
FROM messaging_log
WHERE UPPER(INFO) LIKE '%' || TRIM(UPPER(P_INFO)) || '%'
AND ((P_DATE IS NULL AND cr_ts >= (SYSDATE - 10)) OR cr_ts >= P_DATE)
AND (P_ID IS NULL OR msg_id = P_ID)
ORDER BY cr_ts DESC;
```
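To see concretely why the `NVL` rewrite drops the NULL rows (NULL = NULL is never true in SQL), here's a small check in Python with SQLite. SQLite has no `NVL`, so the equivalent `COALESCE` stands in for it, and the table is trimmed to the relevant columns:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messaging_log (msg_id INT, info TEXT)")
conn.executemany("INSERT INTO messaging_log VALUES (?,?)",
                 [(1, 'a'), (None, 'b')])

p_id = None  # the parameter is NULL, as in the asker's "normal case"

# NVL-style predicate: when p_id is NULL this becomes msg_id = msg_id,
# which is NOT true for a NULL msg_id, so row 'b' disappears
nvl_style = conn.execute(
    "SELECT info FROM messaging_log WHERE msg_id = COALESCE(?, msg_id)",
    (p_id,)).fetchall()

# Explicit NULL test: skip the comparison entirely when the parameter
# is NULL, so NULL msg_id rows survive
explicit = conn.execute(
    "SELECT info FROM messaging_log WHERE ? IS NULL OR msg_id = ?",
    (p_id, p_id)).fetchall()

print(nvl_style)  # [('a',)]
print(explicit)   # [('a',), ('b',)]
```

The `P_ID IS NULL OR msg_id = P_ID` form in the query above is exactly this second predicate.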
|
```
SELECT cr_ts, msg_id, info
FROM messaging_log
WHERE UPPER(INFO) LIKE '%' || TRIM(UPPER(P_INFO)) || '%'
AND cr_ts >= NVL(P_DATE, SYSDATE-10)
AND NVL(msg_id,'*') = NVL(P_ID, '*')
ORDER BY cr_ts desc;
```
|
How to use NVL in where clause
|
[
"",
"sql",
"oracle",
""
] |