| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
How do I update a table column to invert its value (set true if the value is false and false if it is true; null remains null)?
Please exclude solutions that use `case when` or `IIF()`; I want something like the following:
```
UPDATE mytable SET IsEditable = !IsEditable
```
|
You can use the [bitwise NOT operator](https://msdn.microsoft.com/en-us/library/ms173468.aspx):
```
update mytable set IsEditable = ~IsEditable
```
|
Just use this:
```
UPDATE mytable SET IsEditable=IsEditable^1
```
If you want to test it:
```
SELECT CONVERT(bit,0)^1, CONVERT(bit,1)^1
```
|
How to negate a bit column value in SQL
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a holiday record that has a start\_date and an end\_date.
I want to write a query with two dates, '2015-05-05' and '2015-06-05', that finds which holiday records fall between these dates or overlap them. For example, one holiday record may have a start\_date of '2015-06-01' and an end\_date of '2015-07-10'; I would want that to be picked up in the query.
How would you write a query to do so?
|
Try something like this:
```
select * from tablename
where (start_date, end_date) OVERLAPS ('2015-05-05', '2015-06-05')
```
You can find details about `OVERLAPS` on this page: <http://www.postgresql.org/docs/current/static/functions-datetime.html>
|
Try this
```
select * from tablename
where
'2015-05-05' between start_date and end_date or
'2015-06-05' between start_date and end_date or
(start_date<='2015-05-05' and end_date>='2015-06-05')
```
|
Query to see if records start_date and end_date fall in between date range
|
[
"",
"sql",
"postgresql",
""
] |
Since `NULL` is just an unknown or missing value, maybe it's `TRUE` and we simply don't know. Why the presumption of `FALSE`? Does it have any justification besides "it's obvious" (since it is not), or should it be considered a kind of SQL bad-design artifact?
for ex.:
```
SELECT * FROM `rainbow_table` WHERE `show_me`
```
and some rows have a null `show_me`. We really don't know whether we should output such rows; maybe it is better to show them (as a last chance to prevent data loss)? It seems like SQL was developed by pessimists.
|
Within the context of a `SELECT` statement (so, in `ON` clauses, the `WHERE` clause, and within `CASE` expressions), predicates must be `TRUE` (not `FALSE` or `UNKNOWN`1) for the predicate to be satisfied.
However, within `CHECK` constraints, predicates must not be `FALSE` in order to be satisfied.
I.e. the following script will work:
```
CREATE TABLE T (
ID int not null,
Val varchar(10) null,
constraint CK_Vals CHECK (Val in ('abc','def'))
);
INSERT INTO T(ID,Val) VALUES (10,NULL);
```
So we can see that it is not *universally* true in SQL that `UNKNOWN` results are treated as `FALSE`. It's also trivially demonstrated by the fact that wrapping a predicate that produces `UNKNOWN` with `NOT (<existing predicate>)` does not produce `TRUE`.
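A minimal sketch of that double-negation point (any predicate yielding `UNKNOWN` will do):
```
-- NULL = 1 evaluates to UNKNOWN, and NOT (UNKNOWN) is still UNKNOWN,
-- so neither WHEN branch is satisfied and the ELSE value is returned.
SELECT CASE
         WHEN NULL = 1       THEN 'TRUE'
         WHEN NOT (NULL = 1) THEN 'TRUE via NOT'
         ELSE 'UNKNOWN'
       END AS result;
```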
The wikipedia page on [Three-Valued logic](https://en.wikipedia.org/wiki/Three-valued_logic) covers a lot of details.
---
1I'm assuming your question is about `UNKNOWN` rather than `NULL`, since you've tagged [sql](/questions/tagged/sql "show questions tagged 'sql'") and [relational-algebra](/questions/tagged/relational-algebra "show questions tagged 'relational-algebra'"). In standard SQL, `UNKNOWN` and `NULL` are two distinctly different concepts. Only (so far as I'm aware) [mysql](/questions/tagged/mysql "show questions tagged 'mysql'") conflates the two.
|
SQL does not really treat `NULL` as false. Instead, conditional statements are considered true only when the condition evaluates to true.
The effect is that `NULL` is treated as false. But this does not mean that `NULL` is equivalent to false.
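To illustrate the distinction (a sketch, assuming some table `t` with a nullable column `x`):
```
-- Both queries exclude rows where x is NULL:
-- x = 1 and NOT (x = 1) each evaluate to UNKNOWN for a NULL x,
-- and a WHERE clause keeps only rows whose predicate is TRUE.
SELECT * FROM t WHERE x = 1;
SELECT * FROM t WHERE NOT (x = 1);
```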
|
Why does SQL treat a NULL value as FALSE?
|
[
"",
"mysql",
"sql",
"null",
""
] |
I would like to check if the trigger exists on [tbl], drop it, and create another one. I tried it this way but it didn't work. What am I doing wrong?
```
IF EXISTS (SELECT * FROM sys.objects WHERE [name] = '[dbo].[trg]' AND [type] = 'TR')
DROP TRIGGER [dbo].[trg] ON [dbo].[tbl]
GO
CREATE TRIGGER [dbo].[trg] ON [dbo].[tbl]
AFTER DELETE
AS
BEGIN
//
END
GO
```
|
The `[name]` field in `sys.objects` will contain only the actual name (i.e. `trg`), *not* including the schema (i.e. `dbo` in this case) or any text qualifiers (i.e. `[` and `]` in this case).
AND, you don't specify the table name for `DROP TRIGGER` since the trigger is an object by itself (unlike indexes). So you need to remove the `ON` clause (which is only used with DDL and Logon triggers).
```
IF EXISTS (SELECT * FROM sys.objects WHERE [name] = N'trg' AND [type] = 'TR')
BEGIN
DROP TRIGGER [dbo].[trg];
END;
```
Please note that you should prefix the object name string literal with an `N` since the `[name]` field is a `sysname` datatype which equates to `NVARCHAR(128)`.
If you did want to incorporate the schema name, you could use the `OBJECT_ID()` function which does allow for schema names and text qualifiers (you will then need to match against `object_id` instead of `name`):
```
IF EXISTS (SELECT * FROM sys.objects WHERE [object_id] = OBJECT_ID(N'[dbo].[trg]')
AND [type] = 'TR')
BEGIN
DROP TRIGGER [dbo].[trg];
END;
```
And to simplify, since the object name needs to be unique within the schema, you really only need to test for its existence. If for some reason a different object type exists with that name, the `DROP TRIGGER` will fail since that other object is, well, not a trigger ;-). Hence, I use the following:
```
IF (OBJECT_ID(N'[dbo].[trg]') IS NOT NULL)
BEGIN
DROP TRIGGER [dbo].[trg];
END;
```
|
If you use SQL Server 2016, you can use a shorter variant.
```
DROP TRIGGER IF EXISTS [dbo].[trg]
```
<https://learn.microsoft.com/en-us/sql/t-sql/statements/drop-trigger-transact-sql>
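From SQL Server 2016 SP1 onwards there is also `CREATE OR ALTER`, which avoids the separate drop step entirely; a minimal sketch:
```
-- Creates the trigger if missing, replaces it if present
CREATE OR ALTER TRIGGER [dbo].[trg] ON [dbo].[tbl]
AFTER DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- trigger body here
END
```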
|
drop trigger if exists and create
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"triggers",
""
] |
I am creating a keywording module where I want to search data using comma-separated words. The search terms are categorized by comma `,` (include) and minus `-` (exclude).
I know a relational database engine is designed on the principle that a cell holds a single value, and obeying this rule helps performance. But in this case the table is already in production with millions of rows, and I can't change the table structure.
Here is an example of exactly what I want to do.
I have a main table name `tbl_main` in SQL
```
AS_ID KWD
1 Man,Businessman,Business,Office,confidence,arms crossed
2 Man,Businessman,Business,Office,laptop,corridor,waiting
3 man,business,mobile phone,mobile,phone
4 Welcome,girl,Greeting,beautiful,bride,celebration,wedding,woman,happiness
5 beautiful,bride,wedding,woman,girl,happiness,mobile phone,talking
6 woman,girl,Digital Tablet,working,sitting,online
7 woman,girl,Digital Tablet,working,smiling,happiness,hand on chin
```
> If search text is = Man,Businessman then result AS\_ID is =1,2
>
> If search text is = Man,-Businessman then result AS\_ID is =3
>
> If search text is = woman,girl,-Working then result AS\_ID is =4,5
>
> If search text is = woman,girl then result AS\_ID is =4,5,6,7
What is the best way to do this? Help is much appreciated. Thanks in advance.
|
Here is my attempt using Jeff Moden's [**DelimitedSplit8k**](http://www.sqlservercentral.com/articles/Tally+Table/72993/) to split the comma-separated values.
First, here is the splitter function *(check the article for updates of the script)*:
```
CREATE FUNCTION [dbo].[DelimitedSplit8K](
@pString VARCHAR(8000), @pDelimiter CHAR(1)
)
RETURNS TABLE WITH SCHEMABINDING AS
RETURN
WITH E1(N) AS (
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1
)
,E2(N) AS (SELECT 1 FROM E1 a, E1 b)
,E4(N) AS (SELECT 1 FROM E2 a, E2 b)
,cteTally(N) AS(
SELECT TOP (ISNULL(DATALENGTH(@pString), 0)) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E4
)
,cteStart(N1) AS(
SELECT 1 UNION ALL
SELECT t.N+1 FROM cteTally t WHERE SUBSTRING(@pString, t.N, 1) = @pDelimiter
),
cteLen(N1, L1) AS(
SELECT
s.N1,
ISNULL(NULLIF(CHARINDEX(@pDelimiter, @pString, s.N1),0) - s.N1, 8000)
FROM cteStart s
)
SELECT
ItemNumber = ROW_NUMBER() OVER(ORDER BY l.N1),
Item = SUBSTRING(@pString, l.N1, l.L1)
FROM cteLen l
```
Here is the complete solution:
```
-- search parameter
DECLARE @search_text VARCHAR(8000) = 'woman,girl,-working'
-- split comma-separated search parameters
-- items starting in '-' will have a value of 1 for exclude
DECLARE @search_values TABLE(ItemNumber INT, Item VARCHAR(8000), Exclude BIT)
INSERT INTO @search_values
SELECT
ItemNumber,
CASE WHEN LTRIM(RTRIM(Item)) LIKE '-%' THEN LTRIM(RTRIM(STUFF(Item, 1, 1 ,''))) ELSE LTRIM(RTRIM(Item)) END,
CASE WHEN LTRIM(RTRIM(Item)) LIKE '-%' THEN 1 ELSE 0 END
FROM dbo.DelimitedSplit8K(@search_text, ',') s
;WITH CteSplitted AS( -- split each KWD to separate rows
SELECT *
FROM tbl_main t
CROSS APPLY(
SELECT
ItemNumber, Item = LTRIM(RTRIM(Item))
FROM dbo.DelimitedSplit8K(t.KWD, ',')
)x
)
SELECT
cs.AS_ID
FROM CteSplitted cs
INNER JOIN @search_values sv
ON sv.Item = cs.Item
GROUP BY cs.AS_ID
HAVING
-- all parameters should be included (Relational Division with no Remainder)
COUNT(DISTINCT cs.Item) = (SELECT COUNT(DISTINCT Item) FROM @search_values WHERE Exclude = 0)
-- no exclude parameters
AND SUM(CASE WHEN sv.Exclude = 1 THEN 1 ELSE 0 END) = 0
```
[**SQL Fiddle**](http://sqlfiddle.com/#!6/5269c2/1/0)
This one uses a solution from the **Relational Division with no Remainder** problem discussed in this [**article**](https://www.simple-talk.com/sql/learn-sql-server/high-performance-relational-division-in-sql-server/) by Dwain Camps.
|
I think you can easily solve this by creating a FULL TEXT INDEX on your `KWD` column. Then you can use the [CONTAINS](https://msdn.microsoft.com/en-us/library/ms187787.aspx) query to search for phrases. The FULL TEXT index takes care of the punctuation and ignores the commas automatically.
```
-- If search text is = Man,Businessman then the query will be
SELECT AS_ID FROM tbl_main
WHERE CONTAINS(KWD, '"Man" AND "Businessman"')
-- If search text is = Man,-Businessman then the query will be
SELECT AS_ID FROM tbl_main
WHERE CONTAINS(KWD, '"Man" AND NOT "Businessman"')
-- If search text is = woman,girl,-Working the query will be
SELECT AS_ID FROM tbl_main
WHERE CONTAINS(KWD, '"woman" AND "girl" AND NOT "working"')
```
To search for multi-word terms (like `mobile phone` in your case), use quoted phrases:
```
SELECT AS_ID FROM tbl_main
WHERE CONTAINS(KWD, '"woman" AND "mobile phone"')
```
As noted in the comments, the quoted phrases are important in all searches to avoid false matches, e.g. when a search term is "tablet working" and the KWD value is `woman,girl,Digital Tablet,working,sitting,online`.
There is a special case for a single `-` search term: `NOT` cannot be used as the first term in `CONTAINS`. Therefore, a query like this should be used:
```
-- If search text is = -Working the query will be
SELECT AS_ID FROM tbl_main
WHERE NOT CONTAINS(KWD, '"working"')
```
|
SQL Customized search with special characters
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
So in my table, I have a column that looks like this:
```
45
NULL
234234
123
NULL
NULL
75
```
I want to set all the `NULL` values to `0`:
```
45
0
234234
123
0
0
75
```
I tried this:
```
UPDATE TABLE comm_call_count c
SET c.messageCountFri = 0
WHERE c.messageCountFri = NULL;
```
and then I tried this:
```
UPDATE TABLE comm_call_count c
SET c.messageCountFri = IFNULL(c.messageCountFri,0);
```
But neither worked. How would I do this?
|
`null` is not a value, it is the lack thereof. Therefore, you cannot evaluate it with operators designed to compare values (such as `=` or `!=`). Instead, you need to check for it explicitly by using the `is` operator. Also note that the syntax is `update <table name>`, not `update table <table name>`:
```
UPDATE comm_call_count c
SET c.messageCountFri = 0
WHERE c.messageCountFri IS NULL;
```
|
Try removing the keyword **`TABLE`** from your UPDATE statement. I don't think that is allowed there. (I never use the word `TABLE` in an update like that.)
Seems like that would throw a specific error, rather than just "not work".
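For example, the OP's second attempt becomes valid MySQL once `TABLE` is dropped (a sketch of that correction only; `IFNULL` already handles the NULL comparison):
```
UPDATE comm_call_count c
SET c.messageCountFri = IFNULL(c.messageCountFri, 0);
```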
|
SQL - Setting column entry to 0 if it equals NULL?
|
[
"",
"mysql",
"sql",
"null",
"sql-update",
""
] |
I have a table with a column whose default value is 0. However, when null is inserted into this column, I want it to be set to the default.
As the insert query is generic and used by other databases too, I cannot make any changes to the insert statement.
Can I have a constraint or case statement on create table so that the default value of 0 is inserted whenever null is passed?
|
If you cannot change the `INSERT` statement, you have no option other than creating an `INSTEAD OF INSERT` trigger:
```
CREATE TRIGGER trTableName
ON SchemaName.TableName
INSTEAD OF INSERT
AS
BEGIN
INSERT INTO TableName (ColumnA, ColumnB, ...)
SELECT ISNULL(ColumnA, 0), ISNULL(ColumnB, 0), ...
FROM INSERTED
END
```
|
You can do an update using a trigger on insert.
```
CREATE TRIGGER [dbo].[YourTriggerName]
ON [dbo].[YourTable]
AFTER INSERT
AS
BEGIN
UPDATE t SET YourCol = 0
FROM YourTable t
JOIN Inserted i ON t.Id = i.Id
WHERE i.YourCol IS NULL
END
```
|
Set value of the column to default for NULL values
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I'm looking for a way to calculate the days between two dates, counting weekdays only. Here is the formula, but it counts weekend days:
```
DATEDIFF(DAY,STARTDATE,ENDDATE)
SELECT DATEDIFF(DAY,'2015/06/01' , '2015/06/30')
```
The result of the above `DATEDIFF` query is 29 days, which includes weekend days, but I need weekdays only; that should be 21 after removing the Saturdays and Sundays (8 days).
Any suggestions?
|
Put it in the WHERE clause
```
SELECT DATEDIFF(DAY,'2015/06/01' , '2015/06/30')
FROM yourtable
WHERE DATENAME(dw, StartDate) != 'Saturday'
AND DATENAME(dw, StartDate) != 'Sunday'
```
Or all in a SELECT statement
```
SELECT (DATEDIFF(dd, StartDate, EndDate) + 1)
     - (DATEDIFF(wk, StartDate, EndDate) * 2)
     - (CASE WHEN DATENAME(dw, StartDate) = 'Sunday'   THEN 1 ELSE 0 END)
     - (CASE WHEN DATENAME(dw, EndDate)   = 'Saturday' THEN 1 ELSE 0 END)
```
|
This returns `21`:
```
DECLARE @StartDate AS DATE = '20150601'
DECLARE @EndDate AS DATE = '20150630'
SELECT
(DATEDIFF(DAY, @StartDate, @EndDate))
-(DATEDIFF(WEEK, @StartDate, @EndDate) * 2)
-(CASE WHEN DATENAME(dw, @StartDate) = 'Sunday' THEN 1 ELSE 0 END)
-(CASE WHEN DATENAME(dw, @EndDate) = 'Saturday' THEN 1 ELSE 0 END)
```
Read this [article](http://www.sqlservercentral.com/articles/Advanced+Querying/calculatingworkdays/1660/) by Jeff Moden for more information.
Explanation:
First, `(DATEDIFF(DAY, @StartDate, @EndDate))` will return the difference in number of days. In this case, it'll be `29`. Now, depending on your interpretation of whole days, you may want to add `1 day` to its result.
Next, `(DATEDIFF(WEEK, @StartDate, @EndDate) * 2)`:
To quote the article:
> `DATEDIFF` for the `WEEK` datepart doesn't actually calculate weeks, it
> calculates the number of times a date range contains dates that
> represent pairs of Saturdays and Sundays. To think of it in more
> simple terms, it only counts WHOLE WEEKENDS!
So, to exclude the weekends, you must subtract twice the result of this from the first `DATEDIFF`, which gives: `29 - (2 * 4) = 21`.
Finally, this:
```
-(CASE WHEN DATENAME(dw, @StartDate) = 'Sunday' THEN 1 ELSE 0 END)
-(CASE WHEN DATENAME(dw, @EndDate) = 'Saturday' THEN 1 ELSE 0 END)
```
removes the partial weekends, which only matters when the `@StartDate` falls on a Sunday or the `@EndDate` falls on a Saturday.
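To see the `WEEK` behaviour described above in isolation, you can check the whole-weekend count for the example range directly (a sketch):
```
-- June 2015 contains four Saturday/Sunday pairs
-- (Sundays on the 7th, 14th, 21st and 28th), so this returns 4.
SELECT DATEDIFF(WEEK, '20150601', '20150630') AS whole_weekends;
```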
|
count week days in sql by datediff
|
[
"",
"sql",
"sql-server",
""
] |
My order table has two fields called buyerid and sellerid.
I want to get both ids using the same condition. I could use two SELECT statements and UNION them, but then I search the table twice. How can I get them in one pass, in a single SELECT statement? My current statement is:
```
select sellerid from order where status=2 and createtime>'2015-05-25 8:00:00'
union all
select buyerid from order where status=2 and createtime>'2015-05-25 8:00:00'
```
How can I change this into a single statement?
```
-- This is what I try to change to equal to above,but it is wrong.
select (buyerid+sellerid) from order where status=2 and createtime>'2015-05-25 8:00:00'
```
|
This is what you're looking for, if I understood you right:
```
SELECT q.id FROM
(select sellerid as id, status, createtime from order
union all
select buyerid as id, status, createtime from order) AS q
WHERE status=2 and createtime>'2015-05-25 8:00:00'
```
I'd still go with your first attempt; I'm not sure whether my solution is more efficient.
|
You probably want this:
```
select sellerid, buyerid from order
where status=2 and createtime>'2015-05-25 8:00:00'
```
This will return the two columns you want.
Or am I missing something about how you want the results returned?
If you can concat, use:
```
select CONCAT(sellerid, ' ', buyerid) from order
where status=2 and createtime>'2015-05-25 8:00:00'
```
|
How to change my Query to a single command in MySql?
|
[
"",
"mysql",
"sql",
""
] |
I have some data in the following format (table name - `ORDERS`):
```
╔═════════╦═════════╦═════════════════════════════╗
║ OrderID ║ CustNum ║ OrderDate                   ║
╠═════════╬═════════╬═════════════════════════════╣
║       1 ║     100 ║ 2015-02-05 00:00:00.0000000 ║
║       2 ║     101 ║ 2015-03-05 00:00:00.0000000 ║
║       4 ║     102 ║ 2015-04-05 00:00:00.0000000 ║
║       5 ║     102 ║ 2015-05-05 00:00:00.0000000 ║
║       6 ║     102 ║ 2015-06-05 00:00:00.0000000 ║
║      10 ║     101 ║ 2015-06-05 00:00:00.0000000 ║
║      34 ║     100 ║ 2015-06-05 00:00:00.0000000 ║
╚═════════╩═════════╩═════════════════════════════╝
```
and I have a `customers` table having customer information in the following format:
```
╔═════════╦══════════╗
║ CustNum ║ CustName ║
╠═════════╬══════════╣
║     100 ║ ABC      ║
║     101 ║ DEF      ║
║     102 ║ GHI      ║
╚═════════╩══════════╝
```
This data spans many years, the `ORDERS` table itself has 5 Million+ records. I would like to know the number of customers who haven't transacted (or placed an order) in the past two months.
In the above example, `CustNum` `100` and `101` haven't transacted for the past two months (if we look at June 2015) and `102` had transacted in the past two months (again, if we look at June 2015), so I would like the output to be in the following format:
```
Time NumberOfCustNotPlacingOrders
Apr-2015 1 (CustNum 102, didn't place orders in Feb and Mar)
May-2015 1 (CustNum 100, didn't place orders in Mar and Apr)
Jun-2015 2 (CustNum 100 and 101, didn't place orders in Apr and May)
```
In other words, I would like to look at an entire month, say June 2015. I would then look back 2 months (April and May) and count how many customers (`CustNum`) didn't place orders: `100` and `101` in this case. **I would like to do this for every month from Jan 2012 through Jun 2015.**
I would post the initial query I tried, but I really have no idea how to achieve this, so it's pretty much a blank slate for me. It seems like a `self join` may be in order, but I'm not entirely sure.
Any help would be highly appreciated.
|
Here is the first solution, which could be used as a working base.
```
CREATE TABLE #orders(OrderId int identity(1,1), CustNum int, Orderdate date)
-- using system columns to populate demo data (I'm lazy)
INSERT INTO #orders(CustNum,Orderdate)
SELECT system_type_id, DATEADD(month,column_id*-1,GETDATE())
FROM sys.all_columns
-- Possible Solution 1:
-- Getting all your customers who haven't placed an order in the last 2 months
SELECT *
FROM (
-- All your customers
SELECT DISTINCT CustNum
FROM #orders
EXCEPT
-- All customers who have a transaction in the last 2 months
SELECT DISTINCT CustNum
FROM #orders
WHERE Orderdate >= DATEADD(month,-2,GETDATE())
) dat
DROP TABLE #orders
```
Based on the fact that a customer table is available, this can also be a solution:
```
CREATE TABLE #orders(OrderId int identity(1,1), CustNum int, Orderdate date)
-- using system columns to populate demo data (I'm lazy)
INSERT INTO #orders(CustNum,Orderdate)
SELECT system_type_id, DATEADD(month,column_id*-1,GETDATE())
FROM sys.all_columns
CREATE TABLE #customers(CustNum int)
-- Populate customer table with demo data
INSERT INTO #customers(CustNum)
SELECT DISTINCT custNum
FROM #orders
-- Possible Solution 2:
SELECT
COUNT(*) as noTransaction
FROM #customers as c
LEFT JOIN(
-- All customers who have a transaction in the last 2 months
SELECT DISTINCT CustNum
FROM #orders
WHERE Orderdate >= DATEADD(month,-2,GETDATE())
) t
ON c.CustNum = t.CustNum
WHERE t.CustNum IS NULL
DROP TABLE #orders
DROP TABLE #customers
```
You'll receive a count of the customers who haven't bought anything in the last 2 months. As I read it, you want to run this query regularly (maybe for a special newsletter or something like that). If you skip the counting, you'll get the customer numbers themselves, which can be used for further processing.
**Solution with rolling months**
After the question was clarified, this should do what you're looking for. It generates output based on rolling months.
```
CREATE TABLE #orders(OrderId int identity(1,1), CustNum int, Orderdate date)
-- using system columns to populate demo data (I'm lazy)
INSERT INTO #orders(CustNum,Orderdate)
SELECT system_type_id, DATEADD(month,column_id*-1,GETDATE())
FROM sys.all_columns
CREATE TABLE #customers(CustNum int)
-- Populate customer table with demo data
INSERT INTO #customers(CustNum)
SELECT DISTINCT custNum
FROM #orders
-- Possible Solution with rolling months:
-- first of all, get all available months
-- this can be also achieved with an temporary table (which may be better)
-- but in case, that you can't use an procedure, I'm using the CTE this way.
;WITH months AS(
SELECT DISTINCT DATEPART(month,orderdate) as allMonths,
DATEPART(year,orderdate) as allYears
FROM #orders
)
SELECT m.allMonths,m.allYears, monthyCustomers.noBuyer
FROM months m
OUTER APPLY(
SELECT N'01/'+CONVERT(nvarchar(max),m.allMonths)+N'/'+CONVERT(nvarchar(max),m.allYears) as monthString, COUNT(c.CustNum) as noBuyer
FROM #customers as c
LEFT JOIN(
-- All customers who have a transaction in the last 2 months
SELECT DISTINCT CustNum
FROM #orders
-- to get the 01/01/2015 out of 03/2015
WHERE Orderdate BETWEEN DATEADD(month,-2,
CONVERT(date,N'01/'+CONVERT(nvarchar(max),m.allMonths)
+N'/'+CONVERT(nvarchar(max),m.allYears)))
-- to get the 31/03/2015 out of the 03/2015
AND DATEADD(day,-1,
DATEADD(month,+1,CONVERT(date,N'01/'+
CONVERT(nvarchar(max),m.allMonths)+N'/'+
CONVERT(nvarchar(max),m.allYears))))
-- NOTICE: the conversion to nvarchar is needed
-- After extracting the dateparts in the CTE, they are INT not DATE
-- A explicit conversion from INT to DATE isn't allowed
-- This way we cast it to NVARCHAR and convert it afterwards to DATE
) t
ON c.CustNum = t.CustNum
WHERE t.CustNum IS NULL
-- optional: Count only users which were present in the counting month.
AND t.CustRegdate >= CONVERT(date,N'01/'+CONVERT(nvarchar(max),m.allMonths)+N'/'+CONVERT(nvarchar(max),m.allYears))
) as monthyCustomers
ORDER BY m.allYears, m.allMonths
DROP TABLE #orders
DROP TABLE #customers
```
|
If you want the customers that have not placed orders, you will need a customer table and an outer join to the orders table. This should be a starting point until you make your requirements clearer...
<http://sqlfiddle.com/#!3/3aeb9/12>
```
select c.CustNum ,odata.omonth from
Customer c
left outer join
(select o.CustNum,
REPLACE(RIGHT(CONVERT(VARCHAR(11), o.OrderDate, 106), 8), ' ', '-')
as OMONTH
from Orders o
where o.OrderDate between '2015-05-01' and '2015-06-30'
) odata
on c.CustNum = odata.CustNum
where odata.omonth is null;
```
Note that your code will need to be more complicated than this... but it should give you an idea of how to start.
|
Query to get the customers who haven't transacted in the past two months
|
[
"",
"sql",
"sql-server",
""
] |
I have two entities, Office and Employee, and I want to design the database schema for WorkingHours. Offices have default WorkingHours for their employees, but some employees might have different WorkingHours.
What is the best way to model this? Do I have WorkingHours in both tables?
|
What you can do is create a schema similar to the one below. Of course, you need to add additional columns to hold additional data if you have any and also adjust the queries with datatypes specific to the RDBMS you're using.
```
CREATE TABLE Office(OfficeID integer
, OfficeName VARCHAR(10))
CREATE TABLE Employee(EmployeeID integer
, EmployeeName VARCHAR(10)
, OfficeID integer
, WorkingHoursID integer
, UseOfficeDefaultWorkingHours Boolean)
CREATE TABLE WorkingHours(ID integer
, StartTime TIME
, EndTime TIME
, OfficeID integer
, OfficeDefaultWorkingHours Boolean)
```
Also, don't forget to implement constraints and primary keys on your unique columns, in each of your tables.
In the `Employee` table you add a column to specify if the Employee is working under Default Office working hours (`UseOfficeDefaultWorkingHours`).
In the `WorkingHours` table you add a column to specify if the row contains the default working hours for the office with the help of another Boolean column, in this case `OfficeDefaultWorkingHours`.
You can query this schema to get the working hours for an employee with a query similar to the one below:
```
SELECT
E.EmployeeName
, W.StartTime
, W.EndTime
FROM Employee E
INNER JOIN WorkingHours W ON E.OfficeID = W.OfficeID
AND E.UseOfficeDefaultWorkingHours = W.OfficeDefaultWorkingHours
AND W.ID = CASE
WHEN E.WorkingHoursID IS NOT NULL
THEN E.WorkingHoursID
ELSE W.ID
END
WHERE E.EmployeeID = 1
```
This query will work under a SQL Server RDBMS, but I am not sure if it will work on other RDBMS products and you might need to adjust accordingly.
|
One thing you can do is create a new table, WorkingHours, that is independent from the employee:
```
Working hours
Id   WorkingHours
1    10 - 8
2    12 - 9
```
Then assign the corresponding id value in the employee table:
```
Employee
ID WorkingHoursID
1 2
2 2
3 1
```
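A rough DDL sketch of that idea (the names and types here are assumptions; storing the hours as two `TIME` columns is preferable to a single text value):
```
CREATE TABLE WorkingHours (
    Id        INT PRIMARY KEY,
    StartTime TIME NOT NULL,
    EndTime   TIME NOT NULL
);

CREATE TABLE Employee (
    Id             INT PRIMARY KEY,
    WorkingHoursId INT NOT NULL REFERENCES WorkingHours(Id)
);
```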
|
Database schema
|
[
"",
"sql",
"database",
"database-schema",
""
] |
I am trying to merge three values into one column in a select query. Without the `getdate` function the query works fine, but when I add `YEAR(GETDATE())` it gives the error:
> Conversion failed when converting the varchar value 'FA/118,' to data
> type int
Here is the query which is raising the error:
```
select top 1 ([Casetype] +'/'+ CaseNo +','+ YEAR(GETDATE()) )as CaseNo
from tbl_RecordRequisition
where Casetype='FA'
order by id desc
```
Please help!
|
Try this query:
```
select top 1 ([Casetype] +'/'+ convert(varchar(50),CaseNo) +','+ convert(varchar(50),YEAR(GETDATE()))) as CaseNo
from tbl_RecordRequisition
where Casetype='FA'
order by id desc
```
|
You have to convert the value of `YEAR(GETDATE())` to a string:
```
select top 1 ([Casetype] +'/'+ CaseNo +','+ CONVERT(varchar, YEAR(GETDATE()))) as CaseNo
from tbl_RecordRequisition
where Casetype='FA'
order by id desc
```
Otherwise SQL Server tries to convert the result of the string expression `[Casetype] + '/' + CaseNo + ','` to an int, because of the integer returned by `YEAR`.
|
Error in select query merge
|
[
"",
"sql",
"sql-server",
""
] |
This one is a doozy, so stick with me.
I have two tables that track people in locations. I've successfully merged them using LEAD and LAG to create a seamless transition in a single table.
My issue now is that for one of the tables, there are additional activity items I need to include, which sit within some segments.
So for simplicity, I have the following normal case:
```
| System | ID | Item | Start | End
| Alpha | 987 | 123 | May, 20 2015 07:00:00 | May, 20 2015 08:00:00
| Alpha | 374 | 123 | May, 20 2015 08:00:00 | May, 20 2015 10:00:00
| Beta | 184 | 123 | May, 20 2015 10:00:00 | May, 20 2015 11:00:00
| Beta | 798 | 123 | May, 20 2015 11:00:00 | May, 20 2015 12:00:00
```
Now, these extra items sit WITHIN certain records, so the data I have looks something like this:
```
| System | ID | Item | Start | End
| Alpha | 987 | 123 | May, 20 2015 07:00:00 | May, 20 2015 08:00:00
| Alpha | 374 | 123 | May, 20 2015 08:00:00 | May, 20 2015 10:00:00
| Beta | 184 | 123 | May, 20 2015 10:00:00 | May, 20 2015 11:00:00
| Charlie | 874 | 123 | May, 20 2015 10:20:00 | May, 20 2015 10:25:00
| Charlie | 984 | 123 | May, 20 2015 10:37:00 | May, 20 2015 10:54:00
| Beta | 798 | 123 | May, 20 2015 11:00:00 | May, 20 2015 12:00:00
```
Note that the two Charlie events occur within Beta 184.
I've currently got the table exploded into a record for every minute (needed for the next step, don't ask), and it displays the correct records in that form, but I can't get my head around how to put it back together with the correct start and end times on each segment.
Also, new records will somehow need to be created to fill the gaps.
My desired end result of the above situation is this:
```
| System | ID | Item | Start | End
| Alpha | 987 | 123 | May, 20 2015 07:00:00 | May, 20 2015 08:00:00 |
| Alpha | 374 | 123 | May, 20 2015 08:00:00 | May, 20 2015 10:00:00 |
| Beta | 184 | 123 | May, 20 2015 10:00:00 | May, 20 2015 10:20:00 |
| Charlie | 874 | 123 | May, 20 2015 10:20:00 | May, 20 2015 10:25:00 |
| Beta | 184 | 123 | May, 20 2015 10:25:00 | May, 20 2015 10:37:00 | new
| Charlie | 984 | 123 | May, 20 2015 10:37:00 | May, 20 2015 10:54:00 |
| Beta | 184 | 123 | May, 20 2015 10:54:00 | May, 20 2015 11:00:00 | new
| Beta | 798 | 123 | May, 20 2015 11:00:00 | May, 20 2015 12:00:00 |
```
Does this make sense?
Hope someone can help me.
|
You could use something like this:
```
DECLARE @Source TABLE (
[System] VARCHAR(50),
ID INT PRIMARY KEY,
Item INT NOT NULL,
Start DATETIME NOT NULL,
[End] DATETIME NOT NULL,
CHECK (Start<[End])
)
INSERT INTO @Source (System, ID, Item, Start, [End]) VALUES ('Alpha', 987, 123, '2015-05-20 07:00', '2015-05-20 08:00')
INSERT INTO @Source (System, ID, Item, Start, [End]) VALUES ('Alpha', 374, 123, '2015-05-20 08:00', '2015-05-20 10:00')
--INSERT INTO @Source (System, ID, Item, Start, [End]) VALUES ('Delta', 555, 123, '2015-05-20 09:30', '2015-05-20 10:00')
INSERT INTO @Source (System, ID, Item, Start, [End]) VALUES ('Beta', 184, 123, '2015-05-20 10:00', '2015-05-20 11:00')
--INSERT INTO @Source (System, ID, Item, Start, [End]) VALUES ('Charlie', 111, 123, '2015-05-20 10:05', '2015-05-20 10:07')
--INSERT INTO @Source (System, ID, Item, Start, [End]) VALUES ('Charlie', 222, 123, '2015-05-20 10:10', '2015-05-20 10:20')
INSERT INTO @Source (System, ID, Item, Start, [End]) VALUES ('Charlie', 874, 123, '2015-05-20 10:20', '2015-05-20 10:25')
INSERT INTO @Source (System, ID, Item, Start, [End]) VALUES ('Charlie', 984, 123, '2015-05-20 10:37', '2015-05-20 10:54')
INSERT INTO @Source (System, ID, Item, Start, [End]) VALUES ('Beta', 798, 123, '2015-05-20 11:00', '2015-05-20 12:00')
;WITH CTE AS (
SELECT *
FROM @Source s1
OUTER APPLY (
SELECT MIN(s2.Start) AS NextStart
FROM @Source s2
WHERE s2.Start>s1.Start AND s2.Start<s1.[End]
) q2
OUTER APPLY (
SELECT MAX(s3.[End]) AS PreviousEnd
FROM @Source s3
WHERE s3.[End]>s1.Start AND s3.[End]<s1.[End]
) q3
)
SELECT System, ID, Item, Start, [End]
FROM CTE WHERE NextStart IS NULL AND PreviousEnd IS NULL
UNION ALL
SELECT System, ID, Item, Start, NextStart
FROM CTE WHERE NextStart IS NOT NULL
UNION ALL
SELECT System, ID, Item, PreviousEnd, [End]
FROM CTE WHERE PreviousEnd IS NOT NULL
UNION ALL
SELECT s4.System, s4.ID, s4.Item, q5.[End], q6.Start
FROM @Source s4
CROSS APPLY (
SELECT *
FROM @Source s5
WHERE s5.Start>s4.Start AND s5.Start<s4.[End]
) q5
CROSS APPLY (
SELECT TOP 1 *
FROM @Source s6
WHERE s6.Start>q5.Start AND s6.Start<s4.[End]
ORDER BY s6.Start
) q6
WHERE q5.[End]<q6.Start
ORDER BY [Start]
```
The first part of the UNION processes the intervals which are not overlapped with any other intervals.
The second part processes the rows that are overlapped at the end of the interval.
The third part processes the rows that are overlapped at the beginning of the interval.
The last part produces the gap between two other intervals that are overlapping with the base interval (when the two intervals are not adjacent).
|
It seems @RazvanSocol beat me, but since I made this and it looks simpler than his, I'll post it here too:
```
create table #times (
Item int,
EndTime datetime,
primary key (Item, EndTime)
)
insert into #times
select distinct Item, StartTime from timetable
union
select distinct Item, EndTime from timetable
;with CTE as (
select
System, ID, Item, StartTime
from
timetable T1
union all
select
T1.System, T1.ID, T1.Item, T2.EndTime
from
timetable T1
join timetable T2 on T1.Item = T2.Item and
T1.StartTime < T2.StartTime and T1.EndTime > T2.EndTime
where
-- This check added to handle cases with adjacent ranges in the dates
-- as pointed out by Razvan Socol
not exists (select 1 from timetable T3 where T3.StartTime = T2.EndTime)
)
select
System, ID, Item, StartTime, E.EndTime
from
CTE
outer apply (
select top 1 EndTime from #times T
where T.Item = CTE.Item and T.EndTime > CTE.StartTime
order by EndTime asc
) E
order by Item, StartTime
```
I used a temp table to collect all distinct start/end times per item, then used the second select in the CTE to create the missing rows; the outer apply at the end recalculates the end date for each row by finding the earliest collected time after that row's start for the same item.
[SQL Fiddle](http://sqlfiddle.com/#!6/de699/3)
Edit: Added check for adjacent ranges
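The collect-all-boundaries idea can also be sketched outside SQL: gather every distinct start/end time, sort them, and keep each consecutive pair that is covered by some original interval. A hypothetical Python illustration using times from the question's sample data (not a translation of the T-SQL above):

```python
# Hypothetical sketch of the boundary-splitting idea: collect every
# distinct start/end time, then pair consecutive boundaries.
intervals = [("07:00", "08:00"), ("08:00", "10:00"), ("10:00", "11:00"),
             ("10:20", "10:25"), ("10:37", "10:54"), ("11:00", "12:00")]

# All distinct boundary times, sorted (strings sort correctly in HH:MM form).
bounds = sorted({t for iv in intervals for t in iv})

# Consecutive boundary pairs form the candidate sub-intervals; keep only
# those covered by at least one original interval.
pieces = [(a, b) for a, b in zip(bounds, bounds[1:])
          if any(s <= a and b <= e for s, e in intervals)]
print(pieces)
```

Each piece would still need to be mapped back to the System/ID of the interval that covers it, which is what the SQL self-joins do.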
|
Find/create correct enddate for series of tracking records
|
[
"",
"sql",
"sql-server",
"t-sql",
"datetime",
""
] |
I have two tables as follows:
```
Product GroupSize
------------------
1 10
2 15
GroupSize Size1 Size2 Size3 Size4
--------------------------------------
10 100 200
15 50 100 300
```
And I want to get a table like this:
```
Product Size
--------------
1 100
1 200
2 50
2 100
2 300
```
How can I do this in SQL?
|
The results that you have would come from this query:
```
select 1 as product, size1 as size from table2 where size1 is not null union all
select 2 as product, size2 as size from table2 where size2 is not null union all
select 3 as product, size3 as size from table2 where size3 is not null;
```
This is ANSI standard SQL and should work in any database.
EDIT:
Given the revised question, you can use `CROSS APPLY`, which is easier than the `UNION ALL`:
```
select t1.product, s.size
from table1 t1 join
table2 t2
on t1.groupsize = t2.groupsize
cross apply
(values(t2.size1), (t2.size2), (t2.size3)) as s(size)
where s.size is not null;
```
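The `CROSS APPLY (VALUES ...)` step is an unpivot: each SizeN column becomes its own row, and the NULLs are filtered out. The same reshaping can be sketched in plain Python (data taken from the question's tables):

```python
# Unpivot sketch: turn SizeN columns into (product, size) rows,
# dropping NULL (None) entries, mirroring CROSS APPLY (VALUES ...).
products = [(1, 10), (2, 15)]                # (product, groupsize)
sizes = {10: [100, 200, None, None],         # groupsize -> Size1..Size4
         15: [50, 100, 300, None]}

rows = [(product, s)
        for product, groupsize in products
        for s in sizes[groupsize]
        if s is not None]
print(rows)  # [(1, 100), (1, 200), (2, 50), (2, 100), (2, 300)]
```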
|
```
SELECT [Product],Size FROM tbl1
INNER JOIN(
SELECT GroupSize,Size1 Size from tbl2 where Size1 is not null
UNION
SELECT GroupSize,Size2 from tbl2 where Size2 is not null
UNION
SELECT GroupSize,Size3 from tbl2 where Size3 is not null
UNION
SELECT GroupSize,Size4 from tbl2 where Size4 is not null
)table2
ON tbl1.GroupSize=table2.GroupSize
```
|
sql join one row per column
|
[
"",
"sql",
"sql-server",
""
] |
Building my first Microsoft Access SQL queries. That should not be this hard!
I have 2 tables:


A user belonging to `GroupA` logged in. I want to show him only those `Data` table rows and columns which `GroupA` is assigned to, like this:
```
+--------+--------+--------+
| Group | Data3 | Data4 |
+--------+--------+--------+
| GroupA | 9 | 4 |
| GroupA | 1 | 5 |
+--------+--------+--------+
```
I tried this silly option:
```
SELECT (select Data from AccessRights where GroupA = "y")
FROM Data
WHERE Data.Group = "GroupA";
```
|
I use this query:
```
SELECT
Data.[Group],
IIf((SELECT GroupA FROM AccessRights WHERE Data = "Data1")="y",[Data1],Null) AS Data_1,
IIf((SELECT GroupA FROM AccessRights WHERE Data = "Data2")="y",[Data2],Null) AS Data_2,
IIf((SELECT GroupA FROM AccessRights WHERE Data = "Data3")="y",[Data3],Null) AS Data_3,
IIf((SELECT GroupA FROM AccessRights WHERE Data = "Data4")="y",[Data4],Null) AS Data_4
FROM
Data
WHERE
((Data.[Group])="GroupA");
```
For this result:
```
Group | Data_1 | Data_2 | Data_3 | Data_4
--------+--------+--------+--------+--------
GroupA | | | 9 | 4
GroupA | | | 1 | 5
```
I just hide values of `Data1` and `Data2`.
---
If you really want to hide your columns you need to use VBA, so I created a VBA function that builds the final query string based on your group:
```
Function myQuery(groupName As String) As String
Dim strResult As String
Dim rs As Recordset
strResult = "SELECT [DATA].[Group]"
Set rs = CurrentDb.OpenRecordset("SELECT [Data], [" & groupName & "] FROM AccessRights WHERE [" & groupName & "] = ""y""")
Do While Not rs.EOF
strResult = strResult & "," & rs.Fields("Data").Value
rs.MoveNext
Loop
strResult = strResult & " FROM [Data] WHERE ((Data.[Group])=""" & groupName & """)"
myQuery = strResult
End Function
```
For example; `myQuery("GroupA")` will be
```
SELECT [DATA].[Group],Data3,Data4 FROM [Data] WHERE ((Data.[Group])="GroupA")
```
|
It would probably be better just to pivot your data table and add a column named data. Do the same for access rights.
You data table would look something like this:
```
Group, Data, Value
Groupa,Data1,1
Groupb,Data2,7
...
```
AccessRights like this:
```
Data, Group, Valid
Data1, GroupA, Y
Data2, GroupA, N
```
Then you could just join the two tables together and filter as needed.
```
Select *
FROM Data D
JOIN AccessRights A
on D.data = A.data and D.Group = A.Group
WHERE A.Valid = 'Y'
and D.Group = 'GroupA'
```
|
Combining 2 queries - getting column names in one and using results in another query
|
[
"",
"sql",
"ms-access",
"select",
""
] |
I need help converting a datetime to varchar in the given format, from
```
2015-01-04 16:07:37.000
```
to
```
01/04/2015 16:07PM
```
Here is what I tried:
```
convert(varchar(20),datetime,103)+ ' '+convert(varchar(20),datetime,108)+ ' ' +right(convert(varchar(30),datetime,109),2)
```
|
This will work in SQL Server; see this [SQL Fiddle](http://sqlfiddle.com/#!3/4e26e/1) demo:
```
SELECT convert(varchar, getdate(), 103)
+' '+ CONVERT(varchar(15),CAST(getdate() AS TIME),100)
```
|
This will get your string from current datetime in the format `06/22/15 1:46:07 PM`.
```
SELECT CONVERT(VARCHAR(50), GETDATE(), 22)
```
Try using Format
```
SELECT FORMAT(GETDATE(), 'g')
```
which will get `6/22/2015 1:57 PM`.
|
convert datetime to varchar(50)
|
[
"",
"sql",
"sql-server-2008-r2",
""
] |
```
SELECT SUM(imps.[Count]) AS A , COUNT (imps.Interest_Name) AS B, ver.Vertical_name
FROM Impressions imps
INNER JOIN Verticals ver
ON imps.Campaign_id = ver.Campaign_id
WHERE ver.Vertical_name = 'Retail' OR ver.Vertical_name = 'Travel'
GROUP BY imps.Interest_Name, ver.Vertical_name;
```
The above query returns records such as:
```
A B Vertical_name
6 6 Retail
3 2 Retail
7 1 Travel
13 10 Travel
```
I want to modify this query to get a result such as :
```
A B Vertical_name
9 8 Retail
20 11 Travel
```
That is, further grouping by the vertical name and taking the SUM of columns A and B. I guess this has to be done with a subquery, but I'm not sure how.
|
Just group by `vertical_name` and remove `imps.Interest_Name` from your `GROUP BY`, since you are doing a `COUNT (imps.Interest_Name)` on it.
```
SELECT SUM(imps.[Count]) AS A , COUNT (imps.Interest_Name) AS B, ver.Vertical_name
FROM Impressions imps
INNER JOIN Verticals ver
ON imps.Campaign_id = ver.Campaign_id
WHERE ver.Vertical_name = 'Retail' OR ver.Vertical_name = 'Travel'
GROUP BY ver.Vertical_name;
```
|
You are grouping by two columns, i.e. imps.Interest\_Name and ver.Vertical\_name.
I modified your SQL; I think this will be useful to you.
```
SELECT SUM(imps.[Count]) AS A , COUNT (imps.Interest_Name) AS B, ver.Vertical_name
FROM Impressions imps
INNER JOIN Verticals ver
ON imps.Campaign_id = ver.Campaign_id
WHERE ver.Vertical_name = 'Retail' OR ver.Vertical_name = 'Travel'
GROUP BY ver.Vertical_name;
```
Thank you..
|
SQL: Sub Query to filter the result of a Query
|
[
"",
"sql",
"sql-server",
""
] |
I have to find the matching records (NPI) that are in both tables. I tried both of the queries below, but they give different result counts (say 12000 and 200000 respectively).
Between:
```
SELECT * FROM Emblem_Attested
WHERE NPINUMBER IN (SELECT NPINUMBER FROM Empire_Attested)
```
and
```
SELECT * FROM Empire_Attested emp JOIN Emblem_Attested emb ON emp.NPINUMBER = emb.NPINUMBER
```
I don't know which one is correct.
|
The `IN` clause will only give you the rows in `Emblem_Attested` that have an `NPINUMBER` in `Empire_Attested`. If there are multiple records in `Empire_Attested` with the same `NPINUMBER`, this will not affect the result.
For example, if `Emblem_Attested` has an `NPINUMBER` of `1` and the `Empire_Attested` table has multiple records with an `NPINUMBER` of `1`, then only one record will be returned with an `NPINUMBER` of `1`.
The `JOIN` will give you more records if there are multiple records with the same `NPINUMBER`. If there are duplicate `NPINUMBER` values, you will get more records in your results from the `JOIN`. This is what you are seeing.
With the `JOIN`, if `Emblem_Attested` has an `NPINUMBER` of `1` and the `Empire_Attested` table has multiple records with an `NPINUMBER` of `1`, then multiple records will be returned, as the join returns a record for each record in `Empire_Attested` with an `NPINUMBER` of `1`.
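The duplicate-multiplication effect described above is easy to reproduce with an in-memory SQLite database (table names shortened and data made up for illustration):

```python
import sqlite3

# One NPINUMBER in emblem, duplicated in empire: IN yields 1 row,
# JOIN yields one row per matching row in empire.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE emblem(npinumber INT);
    CREATE TABLE empire(npinumber INT);
    INSERT INTO emblem VALUES (1);
    INSERT INTO empire VALUES (1), (1), (1);
""")
in_rows = con.execute(
    "SELECT * FROM emblem WHERE npinumber IN (SELECT npinumber FROM empire)"
).fetchall()
join_rows = con.execute(
    "SELECT emblem.* FROM empire JOIN emblem ON empire.npinumber = emblem.npinumber"
).fetchall()
print(len(in_rows), len(join_rows))  # 1 3
```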
|
Joins will duplicate the parent rows (Emblem\_Attested) if many child rows (Empire\_Attested) are present.
The IN query will return a single parent row.
I'd be looking for more than one entry on the Empire\_Attested table.
You should see it with the following query
```
SELECT NPINUMBER, COUNT(*) Qty
FROM Empire_Attested
GROUP BY NPINumber
ORDER BY COUNT(*) DESC
```
|
What is the difference between these two queries(Using IN and JOIN)?
|
[
"",
"sql",
"sql-server",
""
] |
A table contains unique records for a specific field (FILENAME). Although the records are unique, really they are just duplicates that only have some text appended. How can you return and group similar records and update the empty fields?
The table below is typical of the records. Every record has a file name but it is not a key field. There is one database record with metadata that I would like to populate to document metadata that is only identifiable by the first n characters.
The variable part is that the original file names vary in length.
The constant is that the prefix is always the same.
```
FILENAME / DWGNO / PROJECT
52349 / 52349 / Ford
52349-1.dwg / /
52349-2.DWG / /
52349-3.dwg / /
52351 / 52351 / Toyota
52351_C01_REV- / /
52351_C01_REV2- / /
123 / 123 / Nissan
123_rev1 / /
123_rev2 / /
123_rev3 / /
```
The table should look like this.
```
FILENAME / DWGNO / PROJECT
52349 / 52349 / Ford
52349-1.dwg / 52349 / Ford
52349-2.DWG / 52349 / Ford
52349-3.dwg / 52349 / Ford
52351 / 52351 / Toyota
52351_C01_REV- / 52351 / Toyota
52351_C01_REV2-/ 52351 / Toyota
123 / 123 / Nissan
123_rev1 / 123 / Nissan
123_rev2 / 123 / Nissan
123_rev3 / 123 / Nissan
```
I first tried to join the table on itself and check the length but "LEFT(FILENAME, 10)" is not returning all the results.
```
USE MyDatabase
SELECT x.DWGNO AS X_DWGNO,
y.DWGNO AS Y_DWGNO,
       x.FILENAME AS X_FILENAME,
       y.FILENAME AS Y_FILENAME,
x.DWGTITLE,
x.REV,
x.PROJECT
FROM dbo.DocShare x
-- want all the files from the left table... I think
LEFT JOIN dbo.DocShare y
ON LEFT(FILENAME, LEN(CHARINDEX('.', FILENAME, 1))) = LEFT(FILENAME, 10)
```
Also tried something else based on a similar post, but it doesn't really work either.
```
USE MyDatabase
SELECT X.E_DWGNO,
y.DWGNO AS Y_DWGNO,
       x.FILENAME AS X_FILENAME,
       y.FILENAME AS Y_FILENAME,
x.DWGTITLE,
x.REV,
x.PROJECT
FROM dbo.DocShare x
WHERE EXISTS(SELECT x.FILENAME
FROM dbo.DocShare
WHERE x.FILENAME = LEFT(y.FILENAME, LEN(CHARINDEX('.', y.FILENAME, 0))))
ORDER BY y.FILENAME
```
|
Try this
[Sql Fiddle](http://sqlfiddle.com/#!3/3a0c7/5)
```
select f2.Filename,f1.DWGNO,f1.Project
from File1 f2 left join File1 f1 on
f2.Filename like f1.Filename+'%'
where f1.DWGNO != ''
```
|
First, you want to get the base files or those rows where `DWGNO IS NOT NULL`. Then, get the revisions (`DWGNO IS NULL`) and do a `JOIN` on the base files:
[**SQL Fiddle**](http://sqlfiddle.com/#!6/e3e93/1/0)
```
WITH CteBase AS (
SELECT * FROM Tbl WHERE DWGNO IS NOT NULL
),
CteRev AS(
SELECT
t.FileName,
DWGNO = cb.DWGNO,
Project = cb.Project
FROM Tbl t
INNER JOIN CteBase cb
ON t.FileName LIKE cb.FileName + '%'
WHERE t.DWGNO IS NULL
)
SELECT * FROM CteBase
UNION ALL
SELECT * FROM CteRev
ORDER BY FileName
```
|
Return all records where a field has the same value plus additional text
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
Hi, I am trying to insert into a table tester3. It fails when I use the syntax below:
```
insert into tester3 (UN0, UN1) values ( 1, 'jishnu1');
```
but the insert below
```
insert into tester3 values ( 1, 'jishnu1');
```
works fine.
```
mydb=# CREATE TABLE tester3
mydb-# (
mydb(# "UN0" integer,
mydb(# "UN1" VARCHAR(40)
mydb(# );
CREATE TABLE
mydb=# insert into tester3 (UN0, UN1) values ( 1, 'jishnu1');
ERROR: column "un0" of relation "tester3" does not exist
mydb=# \d tester3
Table "public.tester3"
Column | Type | Modifiers
--------+-----------------------+-----------
UN0 | integer |
UN1 | character varying(40) |
```
I think I am missing something very trivial. I tried several other column names; some of them work fine and some do not. I am confused.
Does PostgreSQL have restrictions on column names that determine whether the first insert syntax works?
---
**Edit :**
Check out Gordon Linoff's [answer](https://stackoverflow.com/a/31072263/3651739) here; as [Frank Heikens](https://stackoverflow.com/users/271959/frank-heikens) pointed out, the other column names that worked without quotes were in **lower case**.
> Lower case column is the standard within PostgreSQL and also works without quotes
|
If you define the columns with double quotes, then you generally need to use them when you refer to the column:
```
insert into tester3 ("UN0", "UN1")
values ( 1, 'jishnu1');
```
I would suggest you remove the double quotes from the column names in the `CREATE TABLE` statement.
You don't need the double quotes if the name is all lower case.
|
> Use double quotes for your column names and single quotes for your
> values
```
insert into tester3 ("UN0", "UN1") values ( 1, 'jishnu1');
```
|
ERROR: column of relation does not exist PostgreSQL ,Unable to run insert query
|
[
"",
"sql",
"postgresql",
"sql-insert",
"quoted-identifier",
""
] |
I'm making a query that fetches the records for the current date regardless of the time; how can I do that in SQL Server? My version of SQL Server doesn't support **DATE** for conversion. I want a format like:
```
2013-06-20
```
So for example my query is:
```
SELECT * FROM sometable where date = GETDATE();
```
Where `GETDATE()` is in a format **2015-06-19 19:12:10.953**.
I want it to be **2015-06-19** in the query.
Thanks.
|
`CONVERT(VARCHAR, GETDATE(), 120)` formats the date in ISO format; SUBSTRING then keeps only the date part:
```
SELECT *
FROM sometable
WHERE date = SUBSTRING(CONVERT(VARCHAR, GETDATE(), 120), 1, 10)
```
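The take-the-first-10-characters trick relies on the ISO format putting the date first; a quick Python sketch of the same idea:

```python
from datetime import datetime

# An ISO-style timestamp, like CONVERT(..., 120) produces in T-SQL.
ts = datetime(2015, 6, 19, 19, 12, 10)
iso = ts.strftime("%Y-%m-%d %H:%M:%S")   # '2015-06-19 19:12:10'
date_only = iso[:10]                      # keep the first 10 characters
print(date_only)  # 2015-06-19
```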
|
You *could* express the query as:
```
SELECT *
FROM sometable
WHERE date = cast(GETDATE() as date);
```
However, an alternative way to phrase this is:
```
SELECT *
FROM sometable
WHERE date >= cast(GETDATE() as date) and
date < cast(dateadd(day, 1, GETDATE()) as date)
```
This version is more likely to use an appropriate index, if available.
|
Remove date in Datetime from SQL Server GETDATE() Function
|
[
"",
"sql",
"sql-server",
"date",
"datetime",
""
] |
I have a table with the below structure
```
+------------+--------+------------------+-----+
| E_DATE | CLIENT | TIME | INS |
+------------+--------+------------------+-----+
| 2015-06-04 | comA | 0.00478515625 | a1 |
| 2015-06-04 | comA | 0.0025390625 | a1 |
| 2015-06-04 | comA | 0.0 | a1 |
| 2015-06-04 | comA | 0.0 | a1 |
| 2015-06-04 | comB | 0.0115234375 | a2 |
| 2015-06-04 | comB | 1.953125E-4 | a2 |
| 2015-06-04 | comB | 0.0103515625 | a3 |
| 2015-06-04 | comB | 0.0 | a3 |
| 2015-06-05 | comA | 0.00478515625 | a4 |
| 2015-06-05 | comA | 0.0025390625 | a4 |
| 2015-06-05 | comA | 0.0 | a1 |
| 2015-06-05 | comA | 0.0 | a2 |
| 2015-06-05 | comB | 0.010351 | a1 |
| 2015-06-05 | comB | 0.05625 | a1 |
+------------+--------+------------------+-----+
```
I am looking to get the following output -
```
+------------+--------+-----+-----------------------------------------------------------+
| E_DATE | CLIENT | INS | TOTAL_TIME |
+------------+--------+-----+-----------------------------------------------------------+
| 2015-06-04 | comA | a1 | SUM of TIME for a1 for comA for the date in 'E_DATE' column |
| 2015-06-04 | comB | a2 | SUM of TIME for a2 for comA for the date in 'E_DATE' column |
| 2015-06-04 | comB | a3 | SUM of TIME for a3 for comA for the date in 'E_DATE' column |
| 2015-06-05 | comA | a1 | SUM of TIME for a1 for comA for the date in 'E_DATE' column |
| 2015-06-05 | comA | a2 | SUM of TIME for a2 for comA for the date in 'E_DATE' column |
| 2015-06-05 | comA | a4 | SUM of TIME for a2 for comA for the date in 'E_DATE' column |
| 2015-06-05 | comB | a1 | SUM of TIME for a1 for comA for the date in 'E_DATE' column |
+------------+--------+-----+-----------------------------------------------------------+
```
Is this the right query to achieve this?
```
select E_DATE, CLIENT,INS,SUM(INS) AS TOTAL_TIME GROUP BY E_DATE,CLIENT
```
|
If I understood it correctly then this should be the answer; otherwise we will need the actual figures of the `TOTAL_TIME` column in your expected result:
```
SELECT E_DATE,
CLIENT,
INS,
(SELECT SUM(time)
FROM mytable b
WHERE a.E_DATE = b.E_DATE
AND a.INS = b.INS
AND b.CLIENT = 'comA') TOTAL_TIME
FROM mytable a
GROUP BY E_DATE,CLIENT,INS
```
NOTE : Assumed that `INS` in 2nd last row is `a4` as per logic.
[SQL Fiddle](http://sqlfiddle.com/#!9/754c7/4)
|
I think it should be:
```
select E_DATE, CLIENT, INS, SUM(TIME) AS TOTAL_TIME
from Tablename
group by E_DATE, CLIENT, INS
```
Because in your example you are grouping by 3 columns `E_DATE, CLIENT, INS` and accumulating `TIME` column.
|
SQL GROUPBY & SUM OF A COLUMN
|
[
"",
"mysql",
"sql",
""
] |
How can one use SQL to count values higher than a group average?
For example:
I have table `A` with:
```
q t
1 5
1 6
1 2
1 8
2 6
2 4
2 3
2 1
```
The average for group 1 is 5.25. There are two values higher than 5.25 in the group, 8 and 6; so the count of values that are higher than average for the group is 2.
The average for group 2 is 3.5. There are two values higher than 3.5 in the group, 6 and 4; so the count of values that are higher than average for the group is 2.
|
Try this :
```
select count (*) as countHigher,a.q from yourtable a join
(select AVG(t) as AvgT,q from yourtable a group by q) b on a.q=b.q
where a.t > b.AvgT
group by a.q
```
The subquery computes the average value for each group; joining it to your table, you then select the count of all values that are bigger than their group's average.
|
My answer is very similar to the other answers except the average is calculated with decimal values by adding the multiplication with `1.0`.
This has no negative impact on the values, but forces an implicit conversion from integer to a float value, so the comparison is done with `5.25` for the first group and `3.5` for the second group, instead of the truncated `5` and `3` respectively.
```
SELECT count(test.q) GroupCount
,test.q Q
FROM test
INNER JOIN (
SELECT q
,avg(t * 1.0) averageValue
FROM test
GROUP BY q
) result ON test.q = result.q
WHERE test.t > result.averageValue
GROUP BY test.q
```
Here is a [**working SQLFiddle of this code**](http://sqlfiddle.com/#!4/ac846/1).
This query should work on the most common RDBMS systems (SQL Server, Oracle, Postgres).
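The integer-division pitfall that the `* 1.0` multiplication guards against is easy to demonstrate in plain Python (values from group 1 of the question):

```python
# Group 1 values from the question; SUM is 21 over 4 rows.
values = [5, 6, 2, 8]

int_avg = sum(values) // len(values)   # integer division: 5
true_avg = sum(values) / len(values)   # float division: 5.25

# Against the true average 5.25, only 6 and 8 qualify; against the
# truncated average 5, the value 5 would wrongly tie instead of
# falling below the real average.
above = [v for v in values if v > true_avg]
print(int_avg, true_avg, above)  # 5 5.25 [6, 8]
```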
|
SQL: Count values higher than average for a group
|
[
"",
"sql",
"sql-server",
"average",
""
] |
I have values like the ones below; I need to take only the highlighted part (the three digits before the decimal point) in SQL.
38,**635**.123
90,**232**.89
123,**456**.47888
I need to take result as below.
635
232
456
|
```
SELECT CONVERT(INT,YourColumn) % 1000
FROM dbo.YourTable
```
|
Cast it as an `int` so that we not only drop the decimal places off, but also ensure integer division takes place:
```
SELECT CAST(YourColumn as int) % 1000
```
The `%` operator (modulo) essentially divides the left side by the right side and returns the remainder. So, if we divide `123,456` by `1000`, using integer division, the result would be 123 with a remainder of 456. Using the `%` operator, we just get the `456` part returned.
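The cast-then-modulo logic can be sanity-checked in Python with the values from the question:

```python
# int() truncates the decimals, % 1000 keeps the last three digits,
# mirroring CAST(col AS int) % 1000.
values = [38635.123, 90232.89, 123456.47888]
thousands = [int(v) % 1000 for v in values]
print(thousands)  # [635, 232, 456]
```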
|
Take thousand value in SQL
|
[
"",
"sql",
"sql-server",
""
] |
Hi there, I have a table to store tags like this:
```
state: publish:1 / unpublish:0
id | name | releated_content_id | state
1 a 1 1
2 a 2 1
3 a 3 1
4 a 4 1
5 b 1 1
6 b 2 1
7 b 3 1
8 c 1 1
.
.
.
```
Now I'm trying to get the names of the 7 most repeated tags with their counts.
I do it with this query:
```
SELECT name, COUNT(name) count
FROM Tags
WHERE state = '1'
GROUP BY name
ORDER BY count
DESC LIMIT 7
```
It works well but it is too slow (more than 10 seconds to load)
because I have a big number of tags... around 1 million...
How can I optimize it?
any solution?
---
**Edit:**
@Allendar and @spencer7593 and @jlhonora
thanks for your answers... they are very helpful for me...
But I do not know which answer is the best, because of the great notes and tests...
The first one (indexing by state and then dropping the extra clauses) was very helpful...
but the average time became about 1 second...
That is too much for my page load time (my average page load time is less than 1 second... and it had a bad effect on time to first byte).
Finally I had to store the data in a file (refreshed every hour via a cron job) and then print the data from that file on every page load!...
thanks everyone.
|
Create an INDEX on your `state` field. Here's why;
A BTREE INDEX is made on the `state` field for *search queries* (i.e. WHERE clauses). What will happen now is that the BTREE will index your `state` values like so:
1 -> 11 -> 11 -> 112
2 -> 21 -> 22 -> 221
Now say 100k of your rows have `state` ID `1`. The engine asks the BTREE index branch starting with `1`; it doesn't need to go deeper because it has already found it. Under that branch it now instantly knows all the records it needs from your table, and it will be blazing fast to find names based on your state.
For future reference: if you also do a WHERE on `name` *and* `state`, then you need to make a combined INDEX on `name` and `state`, so the BTREE will build a more complex combined INDEX of them both and will improve those queries too.
Hope this helps.
Good luck!
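The effect of the suggested index can be observed with SQLite's `EXPLAIN QUERY PLAN` (a toy schema standing in for the real table):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tags(id INTEGER PRIMARY KEY, name TEXT, state INT);
    CREATE INDEX idx_tags_state ON tags(state);
""")
# The equality filter on state should show up as an index search in the plan.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT name, COUNT(name) FROM tags "
    "WHERE state = 1 GROUP BY name"
).fetchall()
print(plan)
```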
|
You can do the following: Add an index on the `name` column
|
how can optimize a sql query for being faster on big data?
|
[
"",
"mysql",
"sql",
""
] |
```
select cast(datediff(minute,convert(time,'09:35 AM'),
convert(time,'07:06 PM'))/60 as varchar)
+':'+cast(datediff(minute,convert(time,'09:35 AM'),
convert(time,'07:06 PM'))%60 as varchar)
```
**Output** : `9:31`
Is there any other function by which I can shorten the above script? Also, what should I do to get the output as `09:31` instead of `9:31`?
|
You can use [`CONVERT()`](https://msdn.microsoft.com/library/ms187928.aspx) with style 114 to get the `HH:mm`
```
SELECT
CONVERT(nvarchar(5),
-- Get difference of time
CONVERT(datetime,'07:06 PM') - CONVERT(datetime,'09:35 AM')
, 114)
```
[SQL Fiddle](http://sqlfiddle.com/#!3/9eecb7db59d16c80417c72d1/1222)
**EDIT**
As noted in the comments, to avoid the datetime arithmetic:
```
SELECT CONVERT(varchar(5),
DATEADD(minute,
DATEDIFF(minute, convert(time,'09:35 AM'), convert(time,'07:06 PM'))
, 0)
, 114)
```
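For comparison, a Python sketch of the same difference; zero-padding the hour gives `09:31` directly:

```python
from datetime import datetime

start = datetime.strptime("09:35 AM", "%I:%M %p")
end = datetime.strptime("07:06 PM", "%I:%M %p")

minutes = int((end - start).total_seconds()) // 60
hhmm = f"{minutes // 60:02d}:{minutes % 60:02d}"  # pad to two digits
print(hhmm)  # 09:31
```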
|
You may use the [Format](https://msdn.microsoft.com/en-us/library/hh213505.aspx) function in SQL Server.
**Update (example)**: this may be what you want:
```
select FORMAT(CONVERT(datetime,'9:31'),'HH:mm') AS 'time'
```
|
Better option to get time in hh:mm format from two given time
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"time",
""
] |
A bit of background: I'm working for an organization with a large number of users running reports in excel. Often they require macros which compare two sets of data against each other. A large portion of my work involves tuning specific change report macros. Not pretty, but it pays the bills.
I had an idea today to pull excel tables into an ADODB object in vba and then run SQL against the object. Everything seemed to work well... but then I came across odd behavior.
This returns all the names in table1 where a.Name isn't in table2. And it works fine, but some of the names come up multiple times.
```
SELECT a.Name FROM [Table1] AS a
LEFT JOIN [Table2] AS b
ON a.Name = b.Name
WHERE b.Name IS NULL
```
Adding a DISTINCT clause:
```
SELECT DISTINCT a.Name FROM [Table1] AS a
LEFT JOIN [Table2] AS b
ON a.Name = b.Name
WHERE b.Name IS NULL
```
Completely changes which names appear. What shows up is DISTINCT, but present in both tables.
I tried rewording it as a GROUP BY to replace the DISTINCT and received the same results. I checked with the local guru and got nowhere. Next step is to install a real database here to run some tests on.
I'm perplexed though.
|
From the comments:
> If you happen to have more than 65,536 rows, are you using the correct Excel 2007 onwards connection string (where the provider is "Microsoft.ACE.OLEDB.12.0")? If you use the older Excel 2003 connection string (with provider "Microsoft.Jet.OLEDB.4.0") for worksheets with more than 65K rows then problems tend to ariseβ barrowc
barrowc nailed it. It was the old connection string causing the issue. Since I had more than 64k rows of data the output was behaving strangely. Swapping the Jet string for "Microsoft.ACE.OLEDB.12.0" fixed the issue.
|
I'm not sure if this will work with an ADODB object in VBA, but I would do something like the following in SQL Server or MySQL...
If I read your question right, you're trying to find names from table A that aren't in table B.
```
SELECT DISTINCT a.Name
FROM [Table1] AS a
WHERE a.Name not in (
SELECT b.Name
FROM [Table2] AS b
)
```
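Both the LEFT JOIN ... IS NULL anti-join and this NOT IN version can be compared on a toy in-memory SQLite table (an illustration with made-up names, not the original data):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t1(name TEXT);
    CREATE TABLE t2(name TEXT);
    INSERT INTO t1 VALUES ('alice'), ('bob'), ('carol');
    INSERT INTO t2 VALUES ('bob');
""")
# Anti-join: keep t1 rows with no match in t2.
anti_join = con.execute("""
    SELECT DISTINCT a.name FROM t1 AS a
    LEFT JOIN t2 AS b ON a.name = b.name
    WHERE b.name IS NULL
""").fetchall()
# Equivalent NOT IN form (beware NULLs in t2.name with NOT IN).
not_in = con.execute(
    "SELECT DISTINCT name FROM t1 WHERE name NOT IN (SELECT name FROM t2)"
).fetchall()
print(sorted(anti_join) == sorted(not_in))  # True
```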
|
Addition of DISTINCT clause in SQL returning wrong values
|
[
"",
"sql",
"performance",
"vba",
"excel",
""
] |
My column structure:
```
Column0 Column1
aaa abc
aaa abc
aaa xyx
aaa NA
bbb fgh
bbb NA
bbb NA
bbb NA
ccc NA
ccc NA
ccc NA
ccc NA
```
What I wish to get is, for each distinct 'Column0' value, the 'Column1' value whose count is highest, unless that value is NA, in which case get the second highest.
If all 'Column1' values for a 'Column0' value are NA, then the result can be NA.
So expected value:
```
Column0 Column1
aaa abc
bbb fgh
ccc NA
```
|
This will give the correct result:
```
DECLARE @t table(Column0 char(3), Column1 varchar(3))
INSERT @t values
('aaa','abc'),('aaa','abc'),('aaa','xyx'),('aaa','NA')
,('bbb','fgh'),('bbb','NA'),('bbb','NA'),('bbb','NA')
,('ccc','NA'),('ccc','NA'),('ccc','NA'),('ccc','NA')
;WITH CTE as
(
SELECT
column0,
column1,
count(case when column1 <> 'NA' THEN 1 end) over (partition by column0, column1) cnt
FROM @t
), CTE2 as
(
SELECT
column0,
column1,
row_number() over (partition by column0 order by cnt desc) rn
FROM CTE
)
SELECT column0, column1
FROM CTE2
WHERE rn = 1
```
Result:
```
column0 column1
aaa abc
bbb fgh
ccc NA
```
|
You can use two CTEs and the ranking function `ROW_NUMBER`:
```
WITH CTE1 AS
(
SELECT Column0, Column1, Cnt = COUNT(*) OVER (PARTITION BY Column0, Column1)
FROM dbo.TableName
)
, CTE2 AS
(
SELECT Column0, Column1,
RN = ROW_NUMBER() OVER (PARTITION BY Column0
ORDER BY CASE WHEN Column1 = 'NA' THEN 1 ELSE 0 END ASC
, Cnt DESC)
FROM CTE1
)
SELECT Column0, Column1
FROM CTE2
WHERE RN = 1
```
`Demo`
|
Get row for each user where the count of a value in a column is maximum
|
[
"",
"sql",
"sql-server",
""
] |
Ok so I have a table called PEOPLE that has a name column. In the name column is a name, but it's totally a mess. For some reason it's not listed as "last, first middle". It's stored like "last,first,middle": last, first, and middle (if there) are separated by a comma, or two commas if the person has a middle name.
example:
```
smith,steve
smith,steve,j
smith,ryan,tom
```
I'd like the second comma taken away (for parsing reasons) and a space put after the existing first comma, so the above would come out looking like:
```
smith, steve
smith, steve j
smith, ryan tom
```
Ultimately I'd like to be able to parse the names into first, middle, and last name fields, but that's for another post :\_0. I appreciate any help.
thank you.
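Before committing to T-SQL, the desired transformation is easy to prototype in Python; a hypothetical sketch of the "keep the last name, join the rest with spaces" rule:

```python
def fix_name(raw: str) -> str:
    # 'last,first,middle' -> 'last, first middle'
    last, *rest = raw.split(",")
    return f"{last}, {' '.join(rest)}" if rest else last

names = ["smith,steve", "smith,steve,j", "smith,ryan,tom"]
print([fix_name(n) for n in names])
# ['smith, steve', 'smith, steve j', 'smith, ryan tom']
```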
|
```
Drop table T1;
Create table T1(Name varchar(100));
Insert T1 Values
('smith,steve'),
('smith,steve,j'),
('smith,ryan,tom');
UPDATE T1
SET Name =
    CASE CHARINDEX(',', Name, CHARINDEX(',', Name) + 1)
        WHEN 0 THEN Name
        ELSE LEFT(Name, CHARINDEX(',', Name, CHARINDEX(',', Name) + 1) - 1) + ' ' +
             RIGHT(Name, LEN(Name) - CHARINDEX(',', Name, CHARINDEX(',', Name) + 1))
    END
Select * from T1
```
|
This seems to work. Not the most concise but avoids cursors.
```
DECLARE @people TABLE (name varchar(50))
INSERT INTO @people
SELECT 'smith,steve'
UNION
SELECT 'smith,steve,j'
UNION
SELECT 'smith,ryan,tom'
UNION
SELECT 'commaless'
SELECT name,
CASE
WHEN CHARINDEX(',',name) > 0 THEN
CASE
WHEN CHARINDEX(',',name,CHARINDEX(',',name) + 1) > 0 THEN
STUFF(STUFF(name, CHARINDEX(',',name,CHARINDEX(',',name) + 1), 1, ' '),CHARINDEX(',',name),1,', ')
ELSE
STUFF(name,CHARINDEX(',',name),1,', ')
END
ELSE name
END AS name2
FROM @people
```
|
deleting second comma in data
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have the following table `stops`. How can I check whether the stop name order `GHI, JKL, MNO` occurs in it?
**stops table:**
```
CREATE TABLE IF NOT EXISTS stops
(
stop_id INT(11) NOT NULL AUTO_INCREMENT PRIMARY KEY,
name varchar(30) NOT NULL,
lat double(10,6),
longi double(10,6)
);
```
**Sample:**
```
1 ABC
2 DEF
3 GHI
4 JKL
5 MNO
6 PQR
7 SDU
8 VWX
```
|
This query will return `1` when there is an ordered sequence of `'GHI','JKL','MNO'`:
```
SELECT 1
FROM stops s1
JOIN stops s2 ON s1.stop_id = s2.stop_id - 1
JOIN stops s3 ON s2.stop_id = s3.stop_id - 1
WHERE CONCAT(s1.name, s2.name, s3.name) = CONCAT('GHI','JKL','MNO')
```
`SQL Fiddle Demo`
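Outside SQL, the same check is just a consecutive-sublist test; a Python sketch over the stop names in `stop_id` order:

```python
stops = ["ABC", "DEF", "GHI", "JKL", "MNO", "PQR", "SDU", "VWX"]
route = ["GHI", "JKL", "MNO"]

# True when `route` appears as consecutive stops, in order.
found = any(stops[i:i + len(route)] == route
            for i in range(len(stops) - len(route) + 1))
print(found)  # True
```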
|
This is a variation of the well known "find equal sets" task.
You need to insert the searched route into a table with a sequenced `stop_id`:
```
create table my_stops(stop_id INT NOT NULL,
name varchar(30) NOT NULL);
insert into my_stops (stop_id, name)
values (1, 'GHI'),(2, 'JKL'),(3, 'MNO');
```
Then you join and calculate the difference between both sequences. This returns a totally meaningless number, but always the same for consecutive values:
```
select s.*, s.stop_id - ms.stop_id
from stops as s join my_stops as ms
on s.name = ms.name
order by s.stop_id;
```
Now group by that meaningless number and search for a count equal to the number of searched steps:
```
select min(s.stop_id), max(s.stop_id)
from stops as s join my_stops as ms
on s.name = ms.name
group by s.stop_id - ms.stop_id
having count(*) = (select count(*) from my_stops)
```
See [Fiddle](http://sqlfiddle.com/#!9/a4b0a/5)
|
Check whether particular name order is available in my table
|
[
"",
"mysql",
"sql",
""
] |
This is the table structure for the seven tables I'm trying to join into just one:
```
-- tables: en, fr, de, zh_cn, es, ru, pt_br
`geoname_id` INT (11),
`continent_code` VARCHAR (200),
`continent_name` VARCHAR (200),
`country_iso_code` VARCHAR (200),
`country_name` VARCHAR (200),
`subdivision_1_name` VARCHAR (200),
`subdivision_2_name` VARCHAR (200),
`city_name` VARCHAR (200),
`time_zone` VARCHAR (200)
```
And this is the new table structure, where all data will be stored:
```
CREATE TABLE `geo_lists` (
`city_id` int (11), -- en.geoname_id (same for all 7 tables)
`continent_code` varchar (2), -- en.continent_code (same for all 7 tables)
`continent_name` varchar (200), -- en.continent_name (just in english)
`country_code` varchar (2), -- en.country_iso_code (same for all 7 tables)
`en_country_name` varchar (200), -- en.country_name
`fr_country_name` varchar (200), -- fr.country_name
`de_country_name` varchar (200), -- de.country_name
`zh_country_name` varchar (200), -- zh_cn.country_name
`es_country_name` varchar (200), -- es.country_name
`ru_country_name` varchar (200), -- ru.country_name
`pt_country_name` varchar (200), -- pt_br.country_name
`en_state_name` varchar (200), -- en.subdivision_1_name
`fr_state_name` varchar (200), -- fr.subdivision_1_name
`de_state_name` varchar (200), -- de.subdivision_1_name
`zh_state_name` varchar (200), -- zh_cn.subdivision_1_name
`es_state_name` varchar (200), -- es.subdivision_1_name
`ru_state_name` varchar (200), -- ru.subdivision_1_name
`pt_state_name` varchar (200), -- pt_br.subdivision_1_name
`en_province_name` varchar (200), -- en.subdivision_2_name
`fr_province_name` varchar (200), -- fr.subdivision_2_name
`de_province_name` varchar (200), -- de.subdivision_2_name
`zh_province_name` varchar (200), -- zh_cn.subdivision_2_name
`es_province_name` varchar (200), -- es.subdivision_2_name
`ru_province_name` varchar (200), -- ru.subdivision_2_name
`pt_province_name` varchar (200), -- pt_br.subdivision_2_name
`en_city_name` varchar (200), -- en.city_name
`fr_city_name` varchar (200), -- fr.city_name
`de_city_name` varchar (200), -- de.city_name
`zh_city_name` varchar (200), -- zh_cn.city_name
`es_city_name` varchar (200), -- es.city_name
`ru_city_name` varchar (200), -- ru.city_name
`pt_city_name` varchar (200), -- pt_br.city_name
`time_zone` varchar (30) -- en.time_zone (same for all 7 tables)
);
```
I'd like to join them all, using the locale (language) code as prefix for the column names.
|
Oh! @GabrielBlanca you are right, in that case try this query and let me know if it worked. You can copy and paste:
```
insert into geo_lists
-- columns
(city_id, continent_code, continent_name, country_code, time_zone,
en_country_name,
fr_country_name,
de_country_name,
zh_country_name,
es_country_name,
ru_country_name,
pt_country_name,
en_state_name,
fr_state_name,
de_state_name,
zh_state_name,
es_state_name,
ru_state_name,
pt_state_name,
en_province_name,
fr_province_name,
de_province_name,
zh_province_name,
es_province_name,
ru_province_name,
pt_province_name,
en_city_name,
fr_city_name,
de_city_name,
zh_city_name,
es_city_name,
ru_city_name,
pt_city_name)
-- end columns
select
en.city_id, en.continent_code, en.continent_name, en.country_code, en.time_zone,
en.country_name as en_country_name,
fr.country_name as fr_country_name,
de.country_name as de_country_name,
zh.country_name as zh_country_name,
es.country_name as es_country_name,
ru.country_name as ru_country_name,
pt.country_name as pt_country_name,
en.state_name as en_state_name,
fr.state_name as fr_state_name,
de.state_name as de_state_name,
zh.state_name as zh_state_name,
es.state_name as es_state_name,
ru.state_name as ru_state_name,
pt.state_name as pt_state_name,
en.province_name as en_province_name,
fr.province_name as fr_province_name,
de.province_name as de_province_name,
zh.province_name as zh_province_name,
es.province_name as es_province_name,
ru.province_name as ru_province_name,
pt.province_name as pt_province_name,
en.city_name as en_city_name,
fr.city_name as fr_city_name,
de.city_name as de_city_name,
zh.city_name as zh_city_name,
es.city_name as es_city_name,
ru.city_name as ru_city_name,
pt.city_name as pt_city_name
from en, fr, de, zh_cn, es, ru, pt_br
where en.city_id = fr.city_id
and fr.city_id = de.city_id
and de.city_id = zh_cn.city_id
and zh_cn.city_id = es.city_id
and es.city_id = ru.city_id
and ru.city_id = pt_br.city_id
```
|
Ok Gabriel where do we stand now with optimization.
1. What have you accomplished with data load
2. table structures with indexes
3. row counts
I will propose an answer here as I dragged you back here from a duplicate question you willingly deleted (thx)
|
insert from multiple tables into other
|
[
"",
"mysql",
"sql",
""
] |
I have a couple hundred line stored procedure that takes a single parameter (`@id`) and is heavily simplified to something like:
```
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
INSERT INTO #new_result
EXEC pr_do_a_ton_of_calculations
DELETE FROM result WHERE id = @id
INSERT INTO result
SELECT * FROM #new_result
```
Multiple processes may invoke this procedure concurrently, with the same parameters. I'm experiencing that both executions delete the rows one after the other, and then try to insert the same data one after the other. The result is that one errors out, because it's inserting duplicate data and violating a unique constraint.
Ideally, I'd like to ensure that two connections executing the procedure with the same `@id` parameter will execute both the `DELETE` and `INSERT` serially, without locking the entire table. It's also fine if the two procedures are completely serialized, as long as they aren't preventing the execution of other invocations with a different parameter.
Is there any way I can achieve this?
|
Add this to the beginning of your stored procedure:
```
DECLARE @lid INT
SELECT @lid = id
FROM result WITH (UPDLOCK, ROWLOCK)
WHERE id = @id
```
and get rid of the `READ UNCOMMITTED` above.
Make sure your `id` is indexed. If it's a reference to another table where it is a `PRIMARY KEY`, use the lock on that table instead.
Better yet, use application locks ([sp\_getapplock](https://msdn.microsoft.com/en-us/library/ms189823.aspx?f=255&MSPPError=-2147217396)).
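The per-resource exclusive lock that `sp_getapplock` provides can be pictured in plain application code. The sketch below (Python, names invented) serializes the delete-then-insert for one `id` while other ids remain unblocked — the same guarantee you want from the stored procedure:

```python
import threading

locks = {}        # one named lock per id -- the role sp_getapplock's @Resource plays
results = {}

def refresh(id_, value):
    # dict.setdefault is atomic in CPython, so all threads share one lock per id
    lock = locks.setdefault(id_, threading.Lock())
    with lock:                     # serialize same-id refreshes; different ids don't block
        results.pop(id_, None)     # DELETE FROM result WHERE id = @id
        results[id_] = value       # INSERT INTO result SELECT ...

threads = [threading.Thread(target=refresh, args=(7, v)) for v in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # exactly one surviving row for id 7, never a duplicate-key clash
```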
|
You can use application locks, for example:
```
DECLARE @ResourceName VARCHAR(200) = 'MyResource' + CONVERT(VARCHAR(20), @id)
EXEC sp_getapplock @Resource = @ResourceName, @LockMode = 'Exclusive'
---- Do your thing ----
EXEC sp_releaseapplock @Resource = @ResourceName
```
|
How can I serialize multiple executions of a stored procedure with the same arguments?
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I'm hoping someone who is actually good at math can help me out with this. I get a different result in SQL than I do in VBA. In SQL, I have this function that calculates a payment. Variables for both VBA and SQL are:
* Principal = 239762.05
* Rate = 0.03 (aka 3%) ETA: in both cases, 0.03 is divided by 12, so really this is .0025
* Period = 268
```
@principal / (power(1+@rate,@period)-1) * (@rate*power(1+@rate,@period))
```
In SQL this gives a value of **1228.76** (rounded)
In VBA, I do not have the POWER function. So I copied this public function from the internet:
```
Public Function Power(ByVal number As Double, ByVal exponent As Double) As Double
Power = number ^ exponent
End Function
```
And I am calling it in a sub here like so:
```
NewPI = PrinBal / (Power(1 + IntCalc, Term) - 1) * (IntCalc * Power(1 + IntCalc, Term))
```
but here, the answer I get is **1228.63** (rounded). Only 13 cents off!
I have tried lots of adjustments and either ended up with this same figure, or a much worse result. I am thinking it's some sort of Order of Operations mistake, but I'm not sure.
**EDIT**
I am adding this to possibly get to the bottom of the problem, which might be the data types in the SQL version. This is the full function
```
create function [dbo].[PMT] (@rate numeric(15,9), @periods smallint, @principal numeric(20,2) )
returns numeric(16,2)
as
begin
declare @pmt numeric (38,9)
select @pmt = @principal / (power(1+@rate,@periods)-1) * (@rate*power(1+@rate,@periods))
return @pmt
end
```
|
Short answer: Data types in SQL matter.
Worse, you can experience some implicit data conversion. Check this out...
```
DECLARE @RealPrincipal real
SET @RealPrincipal = 239762.05
DECLARE @RealRate real
SET @RealRate = 0.0025 --0.03 --(aka 3%) ETA: in both cases, 0.03 is divided by 12, so really this is .0025
DECLARE @Period int
SET @Period = 268
SELECT @RealPrincipal / (power(1.0+@RealRate,@Period)-1.0) * (@RealRate*power(1.0+@RealRate,@Period))
```
Result = 1228.61333410069
Compare that to your formula from your OP comment with all literals and no variables...
```
SELECT 239762.05 / (power(1+0.0025,268)-1) * (0.0025*power(1+0.0025,268)) as 'Value!'
```
Result = 1228.761629
Now, use the same exact structure as the first code block, but replace the `real` type variables with `money` type...
```
DECLARE @moneyPrincipal money
SET @moneyPrincipal = 239762.05
DECLARE @moneyRate money
SET @moneyRate = 0.0025 --(aka 3%) ETA: in both cases, 0.03 is divided by 12, so really this is .0025
DECLARE @Period int
SET @Period = 268
SELECT @moneyPrincipal / (power(1+@moneyRate,@Period)-1) * (@moneyRate*power(1+@moneyRate,@Period))
```
Result = 1233.2921
Now, using the money data types, watch what happens when you replace the literal `1` values in the formula with `1.0` ...
```
SELECT @moneyPrincipal / (power(1.0+@moneyRate,@Period)-1.0) * (@moneyRate*power(1.0+@moneyRate,@Period))
```
Result = 1228.761629
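As a cross-check outside SQL entirely: computing the payment in plain IEEE double precision (Python sketch below, using the numbers from the question) shows that the original formulation and the simplified one agree to far below a cent, so the diverging SQL results come from data-type handling, not from the formula itself:

```python
principal = 239762.05
rate = 0.03 / 12          # 0.0025 per period
period = 268

f = (1.0 + rate) ** period                                  # compound factor
pmt_original   = principal / (f - 1.0) * (rate * f)         # formula from the question
pmt_simplified = principal * rate / (1.0 - (1.0 + rate) ** -period)

print(round(pmt_original, 2), round(pmt_simplified, 2))     # both land near the VBA result
```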
|
I think you should simplify your calculation:
In VBA:
```
function payment(principal as double, rate as double, term as integer) as double
' Inputs:
' principal: Amount of the loan
' rate: Effective interest rate per period
' term: Number of periods / payments
dim v as double, a as double
v = 1 / (1 + rate)
a = (1 - (v ^ term)) / rate
payment = principal / a
end function
```
or, if you want it squeezed into a single line:
```
function payment(principal as double, rate as double, term as integer) as double
' Inputs:
' principal: Amount of the loan
' rate: Effective interest rate per period
' term: Number of periods / payments
payment = principal * rate / (1 - ((1 + rate) ^ (-term)))
end function
```
and, in SQL:
```
@principal * @rate / (1 - power(1 + @rate, -@term))
-- @rate is the effective rate per period (in your example: 0.0025)
```
Minimize the number of arithmetic operations.
---
If you're having problems with data type (as LDMJoe mentions in his answer), you should try casting each variable to an appropriate data type. In Access you can do something like this (assuming you're writing the expression in a query):
```
CDbl(principal) * CDbl(rate) / (1 - power(1 + CDbl(rate), -CInt(term)))
```
I'm also assuming that `principal`, `rate` and `term` are columns in a table.
|
Different results in SQL vs. VBA using POWER function and VBA equivalent
|
[
"",
"sql",
"vba",
"function",
"ms-access",
""
] |
I'm running some queries and not getting expected results.
```
SELECT Name, Population FROM City
WHERE Population = (SELECT Min(Population) FROM City);
```
Which brings me Adamstown 42.
Then I tried this (not being really sure about it)
```
SELECT Name, Min(Population) FROM City;
```
And it brings Kabul 42.
I understand first query OK, but I don't get what second query should bring or if it is useful at all.
|
I just downloaded the dataset to take a look.
The first query is correctly working out that the minimum population is 42, and then returning all rows with population 42 (a single row fits this condition).
The second query is determining the minimum population is 42, and then needs to return some value in the 'name' column. Unfortunately, we haven't grouped by name (or anything); the rows in the group (all in the table) each have their own name. Rather than spit out an error, MySQL seems to be returning the name of the first row: Kabul.
But obviously, this second query is a bit meaningless. The first query is preferable.
This behaviour is actually described in the [documentation](https://dev.mysql.com/doc/refman/5.0/en/group-by-handling.html):
> MySQL extends the use of GROUP BY so that the select list can refer to nonaggregated columns not named in the GROUP BY clause. This means that the preceding query is legal in MySQL. ... The server is free to choose any value from each group, so unless they are the same, the values chosen are indeterminate.
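The safe pattern — the subquery form from the question — behaves the same way in other engines too. A sketch with Python's `sqlite3` (data invented to mimic the World dataset):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE City (Name TEXT, Population INT);
INSERT INTO City VALUES ('Kabul', 1780000), ('Adamstown', 42), ('Oranjestad', 29034);
""")

# the subquery pins Population to the true minimum, so Name is unambiguous
rows = con.execute("""
    SELECT Name, Population FROM City
    WHERE Population = (SELECT MIN(Population) FROM City)
""").fetchall()
print(rows)  # [('Adamstown', 42)]
```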
|
There are subtle differences between the queries you have posted.
First one:
```
SELECT Name, Population FROM City
WHERE Population = (SELECT Min(Population) FROM City);
```
You are trying to fetch a record where `Population = Min(population)`; which will in turn get evaluated as `Population = 42`
Second one:
```
SELECT Name, Min(Population) FROM City;
```
You are just trying to fetch `Name` and the `minimum population`, which doesn't mean that particular record holds the minimum population.
|
MySQL World database query
|
[
"",
"mysql",
"sql",
""
] |
> How can I solve this kind of problem? Please help me:
**error:**
> Msg 512, Level 16, State 1, Procedure trg\_pricebase, Line 13 Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression. The statement has been terminated.
---
and my trigger is :
```
ALTER TRIGGER [dbo].[trg_pricebase]
ON [dbo].[tbl_model2]
AFTER UPDATE
AS
BEGIN
DECLARE @price_base NVARCHAR(50) = (SELECT tbl_model2.price_base FROM tbl_model2)
DECLARE @tipid int = (SELECT tbl_model2.id FROM tbl_model2)
INSERT INTO tbl_price_history (tbl_price_history.price_base,tbl_price_history.tipid)
VALUES (@price_base, @tipid )
END
```
|
My psychic powers tell me that you want
```
alter trigger[dbo].[trg_pricebase] on [dbo].[tbl_model2]after update as
begin
insert into dbo.tbl_price_history (
price_base,
tipid
) select
price_base,
id
from
inserted
end
```
You have two basic problems.
Firstly, to read only the rows affected by the update statement, use the `inserted` and `deleted` pseudo tables.
Secondly, a trigger can fire off multiple rows, you can't assume there will just be one affected row.
|
Execute the following [sub]queries to see which query returns more than one row:
```
SELECT tbl_model2.price_base FROM tbl_model2
SELECT tbl_model2.id FROM tbl_model2
```
I assume you want to insert into the \*history table either the old values (table `deleted`) or the new values (table `inserted`):
```
ALTER TRIGGER [dbo].[trg_pricebase]
ON [dbo].[tbl_model2]
AFTER UPDATE
AS
BEGIN
INSERT INTO tbl_price_history (price_base, tipid)
SELECT price_base, tipid
FROM deleted -- for old values
-- or FROM inserted -- for new values
END
```
References: [inserted and deleted table](https://msdn.microsoft.com/en-us/library/ms191300.aspx)
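SQLite's row-level triggers use `NEW`/`OLD` instead of SQL Server's `inserted`/`deleted` pseudo tables, but the history-logging idea translates directly. A runnable sketch with Python's `sqlite3` (column set trimmed for brevity):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tbl_model2 (id INTEGER PRIMARY KEY, price_base TEXT);
CREATE TABLE tbl_price_history (price_base TEXT, tipid INT);
CREATE TRIGGER trg_pricebase AFTER UPDATE ON tbl_model2
BEGIN
    -- NEW is SQLite's per-row analogue of SQL Server's inserted table
    INSERT INTO tbl_price_history (price_base, tipid) VALUES (NEW.price_base, NEW.id);
END;
INSERT INTO tbl_model2 (price_base) VALUES ('100'), ('200');
UPDATE tbl_model2 SET price_base = '150' WHERE id = 1;
""")
history = con.execute("SELECT * FROM tbl_price_history").fetchall()
print(history)  # only the updated row was logged
```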
|
Msg 512, Level 16, State 1, Procedure trg_pricebase, Line 13 Subquery returned more than 1 value
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"triggers",
""
] |
I have a table with the following columns:
```
Date ReportDate Received Answered Average Wait Time
5/1/2015 5/3/2015 10 10 0:00:04
5/1/2015 5/3/2015 10 10 0:00:10
5/1/2015 5/3/2015 4 4 0:00:02
5/1/2015 5/3/2015 5 5 0:00:03
5/2/2015 5/3/2015 10 10 0:00:09
5/2/2015 5/3/2015 9 9 0:00:03
5/2/2015 5/3/2015 12 12 0:00:09
5/2/2015 5/3/2015 15 15 0:00:02
5/2/2015 5/3/2015 20 20 0:00:10
Total 95 95 0:00:07
```
I would like to calculate the totals and store it in a different table based on distinct Report Date, like: -
```
ReportDate TotalReceivedContacts TotalAnsweredContacts TotalAverageWaitTime
5/3/2015 95 95 0:00:07
```
Like this I have many rows based on ReportDate. Please help.
|
This should work:
```
SELECT ReportDate, SUM(Received) as ReceivedSum,
SUM(Answered) as AnsweredSum ,AVG(WaitTime) as WaitTimeAVG
FROM a
GROUP BY ReportDate
ORDER BY ReportDate
```
see [SQL Fiddle](http://sqlfiddle.com/#!15/c56c2/1)
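The same aggregation can be sketched with Python's `sqlite3`. One assumption made here: the wait time is stored as integer seconds, because `AVG()` cannot aggregate an `h:mm:ss` text column directly:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# wait time kept as integer seconds -- AVG() cannot average 'h:mm:ss' strings
con.execute("CREATE TABLE calls (ReportDate TEXT, Received INT, Answered INT, WaitSeconds INT)")
data = [("2015-05-03", 10, 10, 4), ("2015-05-03", 10, 10, 10), ("2015-05-03", 4, 4, 2),
        ("2015-05-03", 5, 5, 3),  ("2015-05-03", 10, 10, 9),  ("2015-05-03", 9, 9, 3),
        ("2015-05-03", 12, 12, 9), ("2015-05-03", 15, 15, 2), ("2015-05-03", 20, 20, 10)]
con.executemany("INSERT INTO calls VALUES (?,?,?,?)", data)

rows = con.execute("""
    SELECT ReportDate, SUM(Received), SUM(Answered), AVG(WaitSeconds)
    FROM calls GROUP BY ReportDate ORDER BY ReportDate
""").fetchall()
print(rows)  # one total row per ReportDate; Received and Answered both sum to 95
```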
|
Try this:
```
SELECT ReportDate, SUM(Received) AS TotalReceivedContacts,
SUM(Answered) AS TotalAnsweredContacts, AVG(AverageWaitTime) AS TotalAverageWaitTime
FROM Table
GROUP BY ReportDate
```
|
compute data based on column and row
|
[
"",
"sql",
"t-sql",
""
] |
I am working in Postgres with a table like this:
```
mon yyyy weather
Apr 2014 78.45
Apr 2015 77.32
May 2014 79.56
May 2015 78.43
```
I would like to be able to query some results, ordered by "mon", where the `weather` column values are divided according to year-on-year by month.
In other words I want to query weather where Apr 2015 is divided by Apr 2014.
However, I would like to write the query in such a way that I do not have to specify month or year, and the query automatically divides `weather` values according to: Apr 2015/Apr 2014, then May 2015/May 2014, without having to key in every month and every year, which is laborious.
I have the following code, but this expands columns which is not what I want:
```
select (select "Weather" from yoy
where mon = 'Apr' and yyyy = '2015'
)/(select "Weather" from yoy
where mon = 'Apr' and yyyy = '2014'
) as "weather_apr",
(select "Weather" from yoy
where mon = 'May' and yyyy = '2015'
)/(select "Weather" from yoy
where mon = 'May' and yyyy = '2014'
) as "weather_may"
from yoy;
```
|
In my opinion this is the right scenario to take advantage of analytical [window function](http://www.postgresql.org/docs/9.3/static/functions-window.html). Here the magic without joins:
```
SELECT yyyy,
weather,
mon,
lead( weather ) over (partition by mon order by mon, yyyy desc),
weather / lead( weather ) over (partition by mon order by mon, yyyy desc)
FROM yoy
```
|
I think you need a self join like in the below example:
```
SELECT j1."yyyy" As year,
j2."yyyy" As next_year,
j1."mon",
j1."weather",
j2."weather" As "weather from next year",
j1."weather"::float / j2."weather" As divide
FROM yoy j1
JOIN yoy j2
ON j1."yyyy" = j2."yyyy" - 1 AND j1."mon" = j2."mon"
```
demo: <http://sqlfiddle.com/#!15/e02ec/1>
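The self-join can be exercised quickly with Python's `sqlite3` and the sample rows from the question (no cast is needed here because `weather` is declared REAL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE yoy (mon TEXT, yyyy INT, weather REAL);
INSERT INTO yoy VALUES ('Apr',2014,78.45),('Apr',2015,77.32),
                       ('May',2014,79.56),('May',2015,78.43);
""")

# pair each month with the same month one year earlier
rows = con.execute("""
    SELECT cur.mon, cur.yyyy, cur.weather / prev.weather AS ratio
    FROM yoy AS cur JOIN yoy AS prev
      ON cur.mon = prev.mon AND cur.yyyy = prev.yyyy + 1
    ORDER BY cur.mon
""").fetchall()
print(rows)  # Apr 2015/Apr 2014 and May 2015/May 2014
```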
|
How to divide between rows in one table in Postgres
|
[
"",
"sql",
"postgresql",
""
] |
I have a tables like this

I want to select the Acct\_id and the number of transactions of the user who has done the maximum number of transactions.
`select Acct_id,max(num) From (select Acct_id, count(Acct_id) as num from Transactions group by Acct_id);`
My above SQL query is giving me the error:
**Every derived table must have its own alias**
> Why does my query fail without an alias?
|
Your query must be:
```
SELECT Acct_id, MAX(num)
FROM (SELECT Acct_id, COUNT(Acct_id) AS num
FROM Transactions
GROUP BY Acct_id) TAB;
```
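A sketch of the aliased derived table with Python's `sqlite3` (invented data). Note it uses `ORDER BY ... LIMIT 1` instead of `MAX(num)`, so the returned `Acct_id` is guaranteed to belong to the maximum count rather than being an indeterminate nonaggregated column:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Transactions (Acct_id INT);
INSERT INTO Transactions VALUES (1),(1),(1),(2),(3),(3);
""")
row = con.execute("""
    SELECT Acct_id, num
    FROM (SELECT Acct_id, COUNT(*) AS num
          FROM Transactions GROUP BY Acct_id) AS t   -- the alias MySQL insists on
    ORDER BY num DESC LIMIT 1
""").fetchone()
print(row)  # account 1 with 3 transactions
```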
|
You need to give the subselect an alias:
```
SELECT ...
FROM (SELECT ...) AS anyAlias
GROUP BY ...;
```
|
sql Query for max transaction
|
[
"",
"mysql",
"sql",
""
] |
I have two models: an `owner` and a `pet`. An owner `has_many :pets` and a pet `belongs_to :owner`.
What I want to do is grab *only those owners* that have *pets which ALL weigh over 30lbs*.
```
#app/models/owner.rb
class Owner < ActiveRecord::Base
has_many :pets
#return only those owners that have heavy pets
end
#app/models/pet.rb
class Pet < ActiveRecord::Base
belongs_to :owner
scope :heavy, ->{ where(["weight > ?", 30])}
end
```
Here is what is in my database. I have three owners:
1. *Neil*, whose pets ALL ARE heavy;
2. *John*, whose pets ALL ARE NOT heavy;
3. *Bob*, SOME of whose pets ARE heavy and SOME ARE NOT heavy.
The query should return only *Neil*. Right now my attempts return *Neil* and *Bob*.
|
What if you do it in two steps, first you get all `owner_ids` that have at least 1 heavy pet, then get all `owner_ids` that have at least 1 not-heavy pet and then grab the owners where id exists in the first array but not in the second?
Something like:
```
scope :not_heavy, -> { where('weight <= ?', 30) }
```
...
```
owner_ids = Pet.heavy.pluck(:owner_id) - Pet.not_heavy.pluck(:owner_id)
owners_with_all_pets_heavy = Owner.where(id: owner_ids)
```
|
You can form a group for each `owner_id` and check, if all rows within group match required condition *or at least one row doesn't match it*, you can achieve it with `group by` and `having` clauses:
```
scope :heavy, -> { group("owner_id").having(["count(case when weight <= ? then weight end) = 0", 30]) }
```
There is also another option, more of a Rails-ActiverRecord approach:
```
scope :heavy, -> { where.not(owner_id: Pet.where(["weight <= ?", 30]).distinct.pluck(:owner_id)).distinct }
```
Here you get all `owner_id`s that don't fit condition (*searching by contradiction*) and exclude them from the result of original query.
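The `group by`/`having` idea reduces to plain SQL that can be checked anywhere. A sketch with Python's `sqlite3`, collapsing owners and pets into one table for brevity (names taken from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE pets (owner TEXT, weight INT);
INSERT INTO pets VALUES ('Neil',40),('Neil',35),('John',10),('John',20),
                        ('Bob',50),('Bob',5);
""")

# an owner qualifies only if no pet in the group weighs 30 or less
rows = con.execute("""
    SELECT owner FROM pets
    GROUP BY owner
    HAVING COUNT(CASE WHEN weight <= 30 THEN 1 END) = 0
""").fetchall()
print(rows)  # only Neil: every one of his pets is heavy
```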
|
ActiveRecord: Exclude group if at least one record within it doesn't meet condition
|
[
"",
"sql",
"ruby-on-rails",
"ruby",
"activerecord",
"rails-activerecord",
""
] |
I'm trying to write a proper SQL query in MS SQL Server.
First of all, I have the following tables: Towns, Employees, Addresses. Almost every employee has a manager, whose ManagerID is also a foreign key in Employees (a self-relation). My goal is to display the number of managers from each town. So far I have this code:
```
SELECT t.Name, COUNT(*) AS [Managers from each town]
FROM Towns t
JOIN Addresses a
ON t.TownID = a.TownID
JOIN Employees e
ON a.AddressID = e.AddressID
GROUP BY t.Name
ORDER BY [Managers from each town] DESC
```
This query returns the number of Employees, from each town, not Managers.
If I try the second query below, I get something totally wrong:
```
SELECT t.Name, COUNT(*) AS [Managers from each town]
FROM Towns t
JOIN Addresses a
ON t.TownID = a.TownID
JOIN Employees e
ON a.AddressID = e.AddressID
JOIN Employees m
ON e.ManagerID = m.ManagerID
GROUP BY t.Name
ORDER BY [Managers from each town] DESC
```
Here is the structure of 'Employees' table:
EmployeeID, FirstName, LastName, MiddleName, JobTitle,DepartamentID, ManagerID, HireDate, Salary, AddressID
The correct query must return this result set:
```
Town | Managers from each town
Issaquah | 3
Kenmore | 5
Monroe | 2
Newport Hills | 1
```
|
I think the following variation on your original query should count the managers who reside in each town:
```
SELECT t.Name, COUNT(DISTINCT e.EmployeeId) AS [Managers from each town]
FROM Towns t JOIN
Addresses a
ON t.TownID = a.TownID JOIN
Employees e
ON a.AddressID = e.AddressID
WHERE e.EmployeeId IN (SELECT e2.ManagerId FROM Employees e2)
GROUP BY t.Name
ORDER BY [Managers from each town] DESC;
```
The `DISTINCT` may not be necessary, but it is hard to say without better understanding the data structure.
|
If I understand your structure correctly, the only indication an Employee is a manager is if its id is used as managerid for other employees. Your first query already correctly displays the count, then all that is needed is to filter the results with something like
```
where EmployeeID in (select ManagerID from Employees)
```
therefore turning your first query into :
```
SELECT t.Name, COUNT(*) AS [Managers from each town] FROM Towns t
JOIN Addresses a
ON t.TownID = a.TownID
JOIN Employees e
ON a.AddressID = e.AddressID
where EmployeeID in (select ManagerID from Employees)
GROUP BY t.Name
ORDER BY [Managers from each town] DESC
```
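A compact way to check the `WHERE EmployeeID IN (SELECT ManagerID ...)` filter is a sketch with Python's `sqlite3`; the town is stored directly on the employee here to keep the example short (the question's Addresses/Towns join is orthogonal to the filter):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Employees (EmployeeID INT, Town TEXT, ManagerID INT);
INSERT INTO Employees VALUES
  (1,'Kenmore',NULL),   -- 1 manages 2 and 3
  (2,'Monroe',1),
  (3,'Kenmore',1),      -- 3 manages 4
  (4,'Monroe',3);
""")

# an employee is a manager iff someone lists them as ManagerID
rows = con.execute("""
    SELECT Town, COUNT(*) FROM Employees
    WHERE EmployeeID IN (SELECT ManagerID FROM Employees)
    GROUP BY Town ORDER BY 2 DESC
""").fetchall()
print(rows)  # both managers (1 and 3) live in Kenmore
```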
|
Troubles with using GROUP BY in SQL Query
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a database column Model that has 2000+ rows
```
Model
------
EH2
EF
F3
RM
RX
RX2
R3
....
```
I want to make it like:
```
Model
---------
Primo EH2
Primo EF
Primo F3
....
```
I have tried:
```
update Table
set Model = REPLACE(Model,' ','Primo EF2')
```
to replace all EF, EF2, etc. with a Primo prefix (Primo EF, Primo EF2, etc.).
In short, I want to add Primo before every model,
but it did not work.
|
If you want to replace **all models** (!) to `PRIMO`+Model you can run the following code:
```
UPDATE yourTable
SET Model = N'Primo '+Model
```
If you want to set Primo just to **a specific list of models** try this instead:
```
UPDATE yt
SET Model = N'Primo '+Model
FROM yourTable yt
WHERE yt.Model IN(N'EF',N'EF2')
```
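The same prefix update in miniature, via Python's `sqlite3`. One dialect note: SQLite (and standard SQL) concatenates with `||` where SQL Server uses `+`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE models (Model TEXT);
INSERT INTO models VALUES ('EH2'),('EF'),('F3');
""")
con.execute("UPDATE models SET Model = 'Primo ' || Model")  # SQL Server: 'Primo ' + Model
rows = con.execute("SELECT Model FROM models").fetchall()
print(rows)  # [('Primo EH2',), ('Primo EF',), ('Primo F3',)]
```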
|
`REPLACE(Model,' ','Primo EF2')` does not replace anything. There is no `' '` in your data.
Use
```
update Table set Model='Primo EH2' WHERE Model='EH2'
```
**UPDATE** regarding your new specification:
Use
`update Table set Model = 'Primo ' + Model`
|
Changing the value of row in sql server
|
[
"",
"sql",
"sql-server",
""
] |
I have the following problem.
There is a String:
`There is something 2015.06.06. in the air 1234567 242424 2015.06.07. 12125235`
I need to show just the last date from this string: `2015.06.07`.
I tried `regexp_substr` with `instr` but it doesn't work.
This is just a test; once it is solved, I will use the same approach on a CLOB query containing multiple dates, where I need only the last one. I know `regexp_count` would help with this, but the database I use is Oracle 10g, so it won't work.
Can somebody help me?
|
The key to find the solution of this problem is the idea of reversing the words in the string presented in [this answer](https://stackoverflow.com/a/31099527/2074605).
Here is the possible solution:
```
WITH words AS
(
SELECT regexp_substr(str, '[^[:space:]]+', 1, LEVEL) word,
rownum rn
FROM (SELECT 'There is something 2015.06.06. in the air 1234567 242424 2015.06.07. 2015.06.08 2015.06.17. 2015.07.01. 12345678999 12125235' str
FROM dual) tab
CONNECT BY LEVEL <= LENGTH(str) - LENGTH(REPLACE(str, ' ')) + 1
)
, words_reversed AS
(
SELECT *
FROM words
ORDER BY rn DESC
)
SELECT regexp_substr(word, '\d{4}\.\d{2}\.\d{2}', 1, 1)
FROM words_reversed
WHERE regexp_like(word, '\d{4}\.\d{2}\.\d{2}')
AND rownum = 1;
```
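When the string is available outside the database, the same "take the last match" idea is a one-liner; a Python sketch with the string from the question:

```python
import re

s = "There is something 2015.06.06. in the air 1234567 242424 2015.06.07. 12125235"
dates = re.findall(r"\d{4}\.\d{2}\.\d{2}", s)  # escaped dots: literal periods only
last_date = dates[-1]
print(last_date)  # 2015.06.07
```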
|
From the documentation on [regexp\_substr](http://docs.oracle.com/cd/B12037_01/server.101/b10759/functions116.htm), I see one problem immediately:
The `.` (period) matches any character. You need to escape those with a backslash: `\.` in order to match only a period character.
For reference, I am linking [this post](https://stackoverflow.com/a/3911037/2074605) which appears to be the approach you are taking with `substr` and `instr`.
[Relevant documentation](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions068.htm) from Oracle:
`INSTR(string , substring [, position [, occurrence]])`
> When position is negative, then INSTR counts and searches backward from the end of string. The default value of position is 1, which means that the function begins searching at the beginning of string.
The problem here is that your regular expression only returns a single value, as explained [here](https://stackoverflow.com/a/17597049/2074605), so you will not be able to hand the `instr` function the appropriate match in the case of multiple dates.
Now, because of this limitation, I recommend using the approach that was proposed in [this question](https://stackoverflow.com/q/3908955/2074605), namely reverse the entire string (and your regular expression, i.e. `\d{2}\.\d{2}\.\d{4}`) and then the first match will be the 'last match'. Then, perform another string reversal to get the original date format.
Maybe this isn't the best solution, but it should work.
|
PLSQL show digits from end of the string
|
[
"",
"sql",
"regex",
"oracle",
"plsql",
"regexp-substr",
""
] |
I have a list of numbers in some fields in a table, for example something like this:
```
2033,1869,1914,1913,19120,1911,1910,1909,1908,1907,1866,1921,1922,1923
```
Now, I'm trying to write a query to check if a number is found in the row. However, I can't use `LIKE`, since it may return false positives: a search for `1912` in the field above would match because of the number `19120`. Obviously we don't want that, and we can't simply append or prepend a comma, because the first and last numbers don't have them.
So, onto using `REGEXP` I go... I tried this, but it doesn't work (it returns a result):
```
SELECT * FROM cat_listing WHERE cats REGEXP '[^0-9]*1912[^0-9]*';
```
I imagine it still finds something because of the `*` quantifier: it found `[^0-9]` 0 times **after** `1912`, so it considers it a match.
I'm not sure how to modify it to do what I want.
|
In your case, it seems word boundaries are necessary:
```
SELECT * FROM cat_listing WHERE cats REGEXP '[[:<:]]1912[[:>:]]';
```
`[[:<:]]` is the beginning of a word and `[[:>:]]` is the end. See [reference](https://dev.mysql.com/doc/refman/5.1/en/regexp.html):
> `[[:<:]], [[:>:]]`
>
> These markers stand for word boundaries. They match the beginning and end of >words, respectively. A word is a sequence of word characters that is not >preceded by or followed by word characters. A word character is an alphanumeric >character in the alnum class or an underscore (`_`).
|
You have another option called [find\_in\_set()](https://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_find-in-set)
```
SELECT * FROM cat_listing WHERE find_in_set('1912', cats) <> 0;
```
> Returns 0 if str is not in strlist or if strlist is the empty string. Returns NULL if either argument is NULL. This function does not work properly if the first argument contains a comma (β,β) character.
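The difference between a substring match and the exact list membership that `FIND_IN_SET()` gives you is easy to see outside SQL; a Python sketch with the list from the question:

```python
cats = "2033,1869,1914,1913,19120,1911,1910,1909,1908,1907,1866,1921,1922,1923"

substring_hit  = "1912" in cats             # True: false positive from 19120, like LIKE '%1912%'
membership_hit = "1912" in cats.split(",")  # False: exact element check, like FIND_IN_SET
real_hit       = "1907" in cats.split(",")  # True: 1907 really is in the list

print(substring_hit, membership_hit, real_hit)  # True False True
```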
|
Using REGEXP within MySQL to find a certain number within a comma separated list
|
[
"",
"mysql",
"sql",
"regex",
""
] |
I have multiple rows in table, I'm using select statement to fetch the details.
```
select * from pdetails where pcode='P000437'
```
**Result**

My query prints 3 records, but I don't want to print the first row.
Is there any option to use a select statement with a condition to skip the first row?
|
try this query
```
SELECT * FROM (
select ROW_NUMBER()OVER(ORDER BY id ASC) row,* from pdetails where pcode='P000437'
)t
WHERE row != 1;
```
|
You can get this using a Temp Table,
```
IF OBJECT_ID('tempdb..#temp') IS NOT NULL
DROP TABLE #temp
SELECT *, ROW_NUMBER() over(order by id) as ROWID into #temp FROM pdetails
WHERE pcode='P000437'
SELECT * FROM #temp WHERE ROWID != 1
```
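A note on dialects: `ROW_NUMBER()` is the SQL Server way; in SQLite the same skip can be done with `OFFSET`, as this Python sketch shows (`LIMIT -1` means "no limit" in SQLite; data invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE pdetails (id INTEGER PRIMARY KEY, pcode TEXT, qty INT);
INSERT INTO pdetails (pcode, qty) VALUES ('P000437',5),('P000437',8),('P000437',2);
""")
rows = con.execute("""
    SELECT * FROM pdetails WHERE pcode = 'P000437'
    ORDER BY id LIMIT -1 OFFSET 1   -- skip the first row of the ordered result
""").fetchall()
print(rows)  # rows with id 2 and 3 only
```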
|
SQL Select statment with remove first row
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have two tables in my database - `MPRS` and `ALL_SSC`. I want to update the `ALL_SSC` table with data from `MPRS` and in Access (where I test my SQL) it works.
However, when I run it in my VB 2010 program on the SQL Server database, it says table `MPRS` already exists.
I know it does! I'm not trying to create it. I'm updating `FROM` it... any ideas where this SQL is wrong?
```
SQL = "UPDATE ALL_SSC LEFT JOIN MPRS ON MPRS.MPAN = ALL_SSC.MPAN1 SET ALL_SSC.PC1 = Format([mprs].[pc],'00'), ALL_SSC.MSPCDC1 = Mid([mprs].[PC_EFD],7,4) & Mid([mprs].[PC_EFD],4,2) & Mid([mprs].[PC_EFD],1,2), ALL_SSC.MTSC1 = [mprs].[MTC], ALL_SSC.MSMCDC1 = Mid([mprs].[MTC_EFD],7,4) & Mid([mprs].[MTC_EFD],4,2) & Mid([mprs].[MTC_EFD],1,2), ALL_SSC.LLF1 = [mprs].[LLF], ALL_SSC.SUPPLIER1 = [mprs].[SUPPLIER], ALL_SSC.REGI1 = Mid([mprs].[SSD],7,4) & Mid([mprs].[SSD],4,2) & Mid([mprs].[SSD],1,2), ALL_SSC.ENG_STATUS1 = 0 WHERE (((ALL_SSC.MPAN1) Is Not Null) AND ([mprs].[ENERG_STATUS]='E'));"
cmd = New SqlCommand(sSQL, cNN)
Try
Try
If cNN.State <> ConnectionState.Open Then
cNN.Open()
End If
Catch exCnn As Exception
MsgBox(exCnn.Message)
End Try
cmd.ExecuteNonQuery()
Catch ex As Exception
MsgBox("Cannot continue. " & ex.Message)
Exit Sub
End Try
```
|
You have accidentally used variable "sSQL" when you meant to use "SQL".
Yet another example of why meaningful variable names are important.
|
The correct syntax for a `JOIN` with `UPDATE` in SQL Server is:
```
UPDATE a
SET PC1 = . . .
FROM ALL_SSC a LEFT JOIN
MPRS m
ON m.MPAN = a.MPAN1
WHERE . . .;
```
That is, you need a `FROM` clause and it comes after the `SET`.
|
SQL Server Update Query says table already exits
|
[
"",
"sql",
"sql-server",
"vb.net",
""
] |
I'm getting the 150 error when attempting to run the following setup script.
What's wrong?
serverPermissions seems to trigger the error but I'm not exactly sure why.
I think the foreign key constraints are all properly mapped as far as I can tell. I am probably missing something obvious.
```
-- Setup my members database
DROP DATABASE IF EXISTS mymembers;
CREATE DATABASE IF NOT EXISTS mymembers;
/*
* The list of all members in the community
*/
DROP TABLE IF EXISTS members;
CREATE TABLE members
(
id INT NOT NULL AUTO_INCREMENT,
name VARCHAR(30),
steamID3 INT NOT NULL UNIQUE,
PRIMARY KEY (id)
);
/*
* Lists all servers in operation
*/
DROP TABLE IF EXISTS servers;
CREATE TABLE servers
(
sid INT NOT NULL AUTO_INCREMENT,
sname VARCHAR(30), -- name of server (short name)
PRIMARY KEY(sid)
);
/*
* Maps a member and a server together along with a flag.
* Record (0, 12, 1, 0) indicates member 12 has flag 0 on server 1.
*/
DROP TABLE IF EXISTS serverPermissions;
CREATE TABLE serverPermissions
(
mid INT,
sid INT,
flagid INT,
PRIMARY KEY(mid, sid, flagid),
FOREIGN KEY (mid) REFERENCES members(id),
FOREIGN KEY (sid) REFERENCES servers(sid),
FOREIGN KEY (flagid) REFERENCES flags(id)
);
/*
* flags are used to give custom permissions to players.
* For example a record may be: (0, "VIP", "This play is very important")
*/
DROP TABLE IF EXISTS flags;
CREATE TABLE flags
(
id INT NOT NULL AUTO_INCREMENT,
name VARCHAR(10),
description VARCHAR(100),
PRIMARY KEY(id)
);
```
|
Change the order of table creation. You should create `flags` table before creating `serverPermissions`. See a [Demo Here](http://sqlfiddle.com/#!9/67223).
|
When adding foreign key constraints, both tables must use the InnoDB storage engine. Change the engine for both tables from MyISAM to InnoDB.
|
errno: 150 "Foreign key constraint is incorrectly formed"
|
[
"",
"mysql",
"sql",
"mariadb",
""
] |
I have a view called `a` with this data:
```
ID tDate name task val
23 2015-06-14
23 2015-06-25
126 2015-06-18
126 2015-06-22
126 2015-06-24
```
`ID` is integer and `tDate` is timestamp.
Basically I want to get for each `ID` the min value of `tDate` and present this row.
meaning:
```
ID tDate name task val
23 2015-06-14
126 2015-06-18
```
I wrote this query:
```
select ID, min(tDate)
from a
group by ID
order by ID
```
This is working **BUT** it doesn't allow me to present all other columns of `a`
for example if I do:
```
select ID, min(tDate), name
from a
group by ID
order by ID
```
it says that `name` must appear in the GROUP BY clause. So I wrote this query:
```
select ID, MIN(tDate), name, task, val , ....
from a
group by ID, name, task, val , ....
order by ID
```
And this one **doesn't work**: it gives incorrect results.
How do I solve it?
|
Postgres has the very convenient `distinct on` for this type of problem:
```
select distinct on (id) a.*
from a
order by id, tdate;
```
This will return one row for each `id`. The row is the first one determined by the ordering defined in the `order by` clause.
|
Do a join from the one table to a sub-query table on just the ID / Min Date
```
select
YT.ID,
YT.tDate as OriginalDate,
PQ.MinDate,
YT.name,
YT.task,
YT.val
from
YourTable YT
JOIN ( select ID, min( tdate ) as MinDate
from YourTable
group by ID ) as PQ
on YT.ID = PQ.ID
AND YT.tDate = PQ.MinDate
order by
ID
```
|
Group by with MIN value in same query while presnting all other columns
|
[
"",
"sql",
"postgresql",
""
] |
How can a `select` be crafted so that the first and last row of each group are returned when grouping with `GROUP BY`?
I've this table with the following data:
```
id group val start end
1 10 36 465 89
2 10 35 55 11
3 10 34 20 456
4 20 38 1140 1177
5 20 22 566 788
6 20 1235 789 4796
7 20 7894 741 1067
```
What I need to get is the first value of the column start and last value of the column end with group by the group column.
The resultant table should be as below:
```
id group val start end
1 10 36 465 89
3 10 34 20 456
4 20 38 1140 1177
7 20 7894 741 1067
```
I did a query but with `FIRST_VALUE` and `LAST_VALUE` and `over (partition by)`. It works in SQL Server 2012 but didn't work in SQL Server 2008. I need a query that can be executed in SQL Server 2008.
|
How about using `ROW_NUMBER`:
[**SQL Fiddle**](http://sqlfiddle.com/#!3/9eecb7/1253/0)
```
WITH Cte AS(
SELECT *,
RnAsc = ROW_NUMBER() OVER(PARTITION BY [group] ORDER BY val),
RnDesc = ROW_NUMBER() OVER(PARTITION BY [group] ORDER BY val DESC)
FROM tbl
)
SELECT
id, [group], val, start, [end]
FROM Cte
WHERE
RnAsc = 1 OR RnDesc = 1
ORDER BY [group], val
```
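The same `ROW_NUMBER` pattern can be exercised in SQLite (3.25+ supports window functions) through Python; here the sample data is the question's table, partitioning by group and ordering by `id` so the first and last row of each group survive:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tbl (id INT, grp INT, val INT);
INSERT INTO tbl VALUES
  (1,10,36),(2,10,35),(3,10,34),
  (4,20,38),(5,20,22),(6,20,1235),(7,20,7894);
""")
# Two row numbers per partition: one ascending, one descending by id.
rows = con.execute("""
WITH cte AS (
  SELECT *,
         ROW_NUMBER() OVER (PARTITION BY grp ORDER BY id)      AS rn_asc,
         ROW_NUMBER() OVER (PARTITION BY grp ORDER BY id DESC) AS rn_desc
  FROM tbl
)
SELECT id, grp, val FROM cte WHERE rn_asc = 1 OR rn_desc = 1 ORDER BY id
""").fetchall()
print(rows)  # [(1, 10, 36), (3, 10, 34), (4, 20, 38), (7, 20, 7894)]
```

Ordering by `id` (rather than `val`) is what matches the expected output in the question.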
|
This is one way -
```
select t.*
from tbl t
join (
select [group],
min(val) as val_1,
max(val) as val_2
from tbl
group by [group]
) v
on t.[group] = v.[group]
and (t.val = v.val_1
or t.val = v.val_2);
```
Fiddle: <http://sqlfiddle.com/#!3/c682f/1/0>
Another approach:
```
select id, [group], val, [start], [end]
from(
select t.*,
max(val) over(partition by [group]) as max_grp,
min(val) over(partition by [group]) as min_grp
from tbl t
) x
where val in (max_grp,min_grp)
```
|
How to get the first and the last record per group in SQL Server 2008?
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
Please don't downvote this, as it is a bit complex for me to explain. I'm working on data migration, so some of the structures look weird because that is how they were designed.
For example, I have a table Person with PersonID and PersonName as columns. I have duplicates in the table.
I have Details table where I have PersonName stored in a column. This PersonName may or may not exist in the Person table. I need to retrieve PersonID from the matching records otherwise put some hardcode value in PersonID.
I can't write below query because PersonName is duplicated in Person Table, this join doubles the rows if there is a matching record due to join.
```
SELECT d.Fields, PersonID
FROM Details d
JOIN Person p ON d.PersonName = p.PersonName
```
The below query works but I don't know how to replace "NULL" with some value I want in place of NULL
```
SELECT d.Fields, (SELECT TOP 1 PersonID FROM Person where PersonName = d.PersonName )
FROM Details d
```
So, there are some PersonNames in the Details table which do not exist in the Person table. How do I write CASE WHEN in this case?
I tried below but it didn't work
```
SELECT d.Fields,
CASE WHEN (SELECT TOP 1 PersonID
FROM Person
WHERE PersonName = d.PersonName) = null
THEN 123
ELSE (SELECT TOP 1 PersonID
FROM Person
WHERE PersonName = d.PersonName) END Name
FROM Details d
```
This query still shows the same output as the second query. Please advise me on this. Let me know if I'm unclear anywhere. Thanks
|
A left outer join from Details to Person (so every Details row is kept), with a ROW_NUMBER to collapse duplicate name matches and a CASE expression to supply the fallback value, should work. Note that ROW_NUMBER requires an ORDER BY in its OVER clause:
```
SELECT
    *
FROM
(
    SELECT
        Instance=ROW_NUMBER() OVER (PARTITION BY d.PersonName ORDER BY p.PersonID),
        PersonID=CASE WHEN p.PersonID IS NULL THEN 123 ELSE p.PersonID END,
        d.Fields
    FROM
        Details d
        LEFT OUTER JOIN Person p ON p.PersonName=d.PersonName
) AS X
WHERE
    Instance=1
```
|
Well, I figured I can wrap the subquery in ISNULL to make it work.
```
SELECT d.Fields,
       ISNULL((SELECT TOP 1 p.PersonID
               FROM Person p WHERE p.PersonName = d.PersonName), 124) id
FROM Details d
FROM Details d
```
|
SELECT Statement in CASE
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I have a table which is missing some entries for a certain language. How can I get a list of all language text in english (lang 1 in table) which is missing the foreign translation counterpart (lang 2)
My table is as follows
```
PageName | LanguageNo | TranslationName | TranslationText |
main | 1 | SomeName | some english text |
main | 2 | SomeName | some foreign text |
main | 1 | SomeName2 | some english 2 |
other | 1 | SomeName3 | some english 3 |
other | 2 | SomeName3 | some foreign 3 |
```
For example, using the above table data, only the following should be returned:
```
main | 1 | SomeName2 | some english 2 |
```
How can I write a SQL statement to achieve this?
Thanks
|
There are several methods, but here is one that uses `not exists`:
```
select t.*
from mytable t
where t.LanguageNo = 1 and
not exists (select 1
from mytable t2
where t2.pagename = t.pagename and
t2.translationname = t.translationname and
t2.LanguageNo = 2
);
```
[Here](http://www.sqlfiddle.com/#!6/6c771/2) is a SQL Fiddle.
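The `not exists` anti-join can be demonstrated end to end with SQLite via Python; the table and column names below are shortened stand-ins for the ones in the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (page TEXT, lang INT, name TEXT, txt TEXT);
INSERT INTO t VALUES
 ('main',1,'SomeName','some english text'),
 ('main',2,'SomeName','some foreign text'),
 ('main',1,'SomeName2','some english 2'),
 ('other',1,'SomeName3','some english 3'),
 ('other',2,'SomeName3','some foreign 3');
""")
# English rows (lang 1) that have no lang-2 counterpart for the same page/name.
rows = con.execute("""
SELECT page, lang, name, txt
FROM t
WHERE lang = 1
  AND NOT EXISTS (SELECT 1 FROM t t2
                  WHERE t2.page = t.page AND t2.name = t.name AND t2.lang = 2)
""").fetchall()
print(rows)  # [('main', 1, 'SomeName2', 'some english 2')]
```

Only the untranslated English row comes back, matching the expected output.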
|
You can try the following:
```
-- Create demo data
CREATE TABLE #translation(pageName nvarchar(10), LanguageNo int, TranslationName nvarchar(25), TranslationText nvarchar(50))
INSERT INTO #translation(pageName, LanguageNo, TranslationName, TranslationText)
VALUES ('main',1,'SomeName','some english text'),
('main',2,'SomeName','some foreign text'),
('main',1,'SomeName2','some english 2'),
('other',1,'SomeName3','some english 3'),
('other',2,'SomeName3','some foreign 3')
--,('other',3,'SomeName3','some foreign 3') -- uncomment for language3 demo
-- your work:
SELECT availTrans.*
FROM #translation t
-- get all needed combinations
RIGHT JOIN(
SELECT DISTINCT t.pageName, t.TranslationName, langs.LanguageNo
FROM #translation as t
CROSS JOIN (SELECT DISTINCT LanguageNo FROM #translation) as langs
) as availTrans
ON t.pageName = availTrans.pageName
AND t.TranslationName = availTrans.TranslationName
AND t.LanguageNo = availTrans.LanguageNo
WHERE t.pageName IS NULL
-- Cleanup
DROP TABLE #translation
```
Given input:
```
pageName LanguageNo TranslationName TranslationText
---------- ----------- ------------------------- ---------------------
main 1 SomeName some english text
main 2 SomeName some foreign text
main 1 SomeName2 some english 2
other 1 SomeName3 some english 3
other 2 SomeName3 some foreign 3
```
Which produces this result:
```
pageName TranslationName LanguageNo
---------- ------------------------- -----------
main SomeName2 2
```
|
SQL find missing language entries in table
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I want to select records whose value begins with the same prefix as the first record, excluding the first record itself. The first record can change depending on the user's choice, so the solution must be 'dynamic'. I tried passing a substring containing the first three characters of the first record to NOT LIKE, but it doesn't work; I got a message that 'like (text) doesn't exist'. Could I get some help resolving this problem? Thanks in advance.
Edit: I'm using PostgreSQL 9.4.4, and I no longer get the error; instead, my query (posted below) returns the entire table.
```
select * from codes where code not like substring((select code from codes where id = 1) from 1 for 3);
Table:
+--------+-----------+
|code_id |code |
|[PK]int |varchar(10)|
+--------+-----------+
|1 |00011111 |
|2 |11111111 |
|3 |11122222 |
|4 |00022222 |
|5 |00033333 |
+--------+-----------+
Result:
+--------+-----------+
|code_id |code |
|[PK]int |varchar(10)|
+--------+-----------+
|4 |00022222 |
|5 |00033333 |
+--------+-----------+
```
|
Given this table definition:
```
CREATE TABLE codes (
code_id int PRIMARY KEY
, code text);
```
This does the job:
```
SELECT c.*
FROM (SELECT left(code, 3) || '%' AS pattern FROM codes WHERE code_id = 1) x
JOIN codes c ON c.code LIKE x.pattern
WHERE c.code_id <> 1;
```
[**SQL Fiddle.**](http://sqlfiddle.com/#!15/284ec/1)
Use [`left()`](http://www.postgresql.org/docs/current/interactive/functions-string.html#FUNCTIONS-STRING-OTHER) (simpler, cheaper), [`LIKE`](http://www.postgresql.org/docs/current/interactive/functions-matching.html#FUNCTIONS-LIKE) - `NOT LIKE` would be backwards for (quote) `records that begin same as first` - and append the wildcard `%` to the pattern.
If your table is big (no use for small tables), an **index** will make this *fast*:
```
CREATE INDEX codes_text_pattern_ops_idx ON codes(code text_pattern_ops);
```
Details:
* [PostgreSQL LIKE query performance variations](https://stackoverflow.com/questions/1566717/postgresql-like-query-performance-variations/13452528#13452528)
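The join-against-a-derived-pattern idea is portable; here is a quick sketch using SQLite via Python (`substr(code, 1, 3)` plays the role of `left(code, 3)`, and the sample rows are the question's table):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE codes (code_id INTEGER PRIMARY KEY, code TEXT);
INSERT INTO codes VALUES
  (1,'00011111'),(2,'11111111'),(3,'11122222'),(4,'00022222'),(5,'00033333');
""")
# Derive the pattern '000%' from row 1, then match every other row against it.
rows = con.execute("""
SELECT c.code_id, c.code
FROM codes c
JOIN (SELECT substr(code, 1, 3) || '%' AS pattern
      FROM codes WHERE code_id = 1) p
  ON c.code LIKE p.pattern
WHERE c.code_id <> 1
ORDER BY c.code_id
""").fetchall()
print(rows)  # [(4, '00022222'), (5, '00033333')]
```

This reproduces the Result table from the question: rows sharing the `000` prefix, minus row 1 itself.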
|
[[NOT] LIKE](http://www.postgresql.org/docs/current/static/functions-matching.html) (which as a commenter stated is not written as `like()` because it isn't a function) is a bit different when it comes to matching. The percent symbol is your wild card and you need it to match the rest of the field.
```
SELECT * FROM table WHERE code LIKE '000%';
```
If you were to just do `LIKE '000'`, that would be equivalent to `code = '000'`, which wouldn't match anything here. Or, conversely, `NOT LIKE '000'` would match everything except an exact value of '000'.
To get what you're after involves something a bit messier to achieve the previous example by using [concat() and substr()](http://www.postgresql.org/docs/current/static/functions-string.html):
```
SELECT *
FROM codes
WHERE code NOT LIKE concat(
(SELECT substr(code::text, 1, 3) FROM codes WHERE id = 1), '%'
);
```
We use substr() in the sub-select to return just the first 3 characters in the code column up to concat(). Then we use concat() to generate a suitable string for `NOT LIKE` to use.
If the `code` column is already text-like, you can drop the `::text` cast, but it wouldn't hurt anything to keep it there. substring() and substr() are effectively the same, but we can shorten up the query a bit with substr().
|
Selecting values that begins with specified value located in the same or other table
|
[
"",
"sql",
"database",
"postgresql",
""
] |
I am rather new to writing SQL scripts and would like some advice on the best way to check statuses.
I need to find all Delivery Areas that have at least one Delivery with a Status of 2 and check those same areas to see if any other deliveries for those areas have a status of 1, 3, or 4.
Here is what I have written currently...
```
SELECT
DR.AREA,
T.STTS_ID
FROM TASK T
LEFT OUTER JOIN DLY_RTE_ASGMNT DRA
ON T.TASK_ID = DRA.TASK_ID
LEFT OUTER JOIN DLY_RTE DR
ON DR.DLY_RTE_ID = DRA.DLY_RTE_ID
WHERE
DR.AREA IS NOT NULL
ORDER BY DR.AREA
```
and here is the output that I get from it:
```
AREA STTS_ID
000-ST 1
000-ST 3
000-ST 1
000-ST 1
000-ST 2
100 1
100 1
100 1
100 1
100 1
```
There is more data then what is shown above, but it illustrates what i get back.
What would be the best way to check each unique AREA to see if it contains a STTS\_ID of 2 and also has a delivery that contains a STTS\_ID of 1, 3, or 4.
Coding samples are not necessarily what I am looking for here, although they would be appreciated... I am more trying to wrap my head around the most efficient way of going about the task.
Thank you in advance for any help given.
|
Pre-select the data with status 2 and make a standard inner JOIN.
E.g. pseudo-SQL:
```
select st134.area,st134.status
from (select area
from mytable
where status = 2) st2,
mytable st134
where st134.area = st2.area
```
This should be the cleanest and the most performance efficient way to do that.
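To make the idea concrete, here is a minimal sketch in SQLite through Python; the single `area`/`status` table is an invented simplification of the three joined tables in the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (area TEXT, status INT);
INSERT INTO t VALUES
  ('000-ST',1),('000-ST',3),('000-ST',1),('000-ST',1),('000-ST',2),
  ('100',1),('100',1),('100',1),('100',1),('100',1);
""")
# Pre-select areas that have at least one status-2 row, then look for
# statuses 1, 3, or 4 within those same areas.
rows = con.execute("""
SELECT DISTINCT t.area, t.status
FROM t
JOIN (SELECT DISTINCT area FROM t WHERE status = 2) s2 ON t.area = s2.area
WHERE t.status IN (1, 3, 4)
ORDER BY t.status
""").fetchall()
print(rows)  # [('000-ST', 1), ('000-ST', 3)]
```

Area `100` drops out because it has no status-2 delivery, exactly the filtering the question asks for.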
|
I don't have a clear understanding of the two different statuses you are referring to, but I have separated the two out. With a better understanding, the IN clause could be eliminated with an EXISTS clause.
```
with tab_stts2 as
(select * from task where status=2)
select dr.area, tab_stts2.stts_id from tab_stts2, dly_rte_asgmnt dra, dly_rte dr
where tab_stts2.task_id = dra.task_id
and dra.dly_rte_id = dr.dly_rte_id
and (status table alias).status in (1,3,4);
```
|
Checking Statuses
|
[
"",
"sql",
"oracle",
""
] |
My input is a two column table with headers `_id` and `change_num`. `change_num` is a string of comma-separated numbers that correspond to change IDs. For example:
```
_id change_num
123 4354, 3243, 7893
456 920, 1232, 9834, 2323
```
I want to get the first value in each row of `change_num`, so my output looks like this:
```
_id change_num
123 4354
456 920
```
How can I stop at the first comma and neglect everything thereafter? Furthermore, if a `change_num` starts with `CN`, can I ignore it and just get the number?
```
_id change_num
123 CN4354, 3243, 7893
456 920, 1232, 9834, 2323
```
to return
```
_id change_num
123 4354
456 920
```
|
This is string manipulation. Something like this should work:
```
select t.id,
replace(substr(change_num, 1, instr(change_num, ',') - 1), 'CN', '')
from table t;
```
As you can tell, storing ids in a comma-separated list is a bad idea. If you have any control over the data structure, you should add a junction table.
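The same string functions exist in SQLite, so the expression can be tested directly from Python; the table and data below are the question's sample:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (_id TEXT, change_num TEXT);
INSERT INTO t VALUES
  ('123','CN4354, 3243, 7893'),
  ('456','920, 1232, 9834, 2323');
""")
# Take everything before the first comma, then strip a leading 'CN' if present.
rows = con.execute("""
SELECT _id,
       REPLACE(SUBSTR(change_num, 1, INSTR(change_num, ',') - 1), 'CN', '') AS first_change
FROM t
ORDER BY _id
""").fetchall()
print(rows)  # [('123', '4354'), ('456', '920')]
```

Note `REPLACE(..., 'CN', '')` removes `CN` anywhere in the extracted token, which is fine here because the token is a single change number.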
|
Try this:
```
SELECT _id,SUBSTRING_INDEX(change_num, ',', 1);
```
|
Get the first value from a list of values a table index
|
[
"",
"sql",
"oracle-sqldeveloper",
""
] |
I want to update top 100 rows of `sterms` table `order by datetime desc` along with `inner join concept`
Query is:
```
update top (100) sterms set st.code=cd.conceptid
from sterms st inner join Description cd on st.str=cd.term order by datetime desc
```
However this is not working
|
you can use CTE to first get `TOP 100` rows from `sterms` and then `UPDATE` like this
```
;WITH CTE AS
(
SELECT TOP 100 * FROM sterms ORDER BY datetime desc
)
UPDATE st
set st.code=cd.conceptid
FROM CTE as st inner join Description cd on st.str=cd.term
```
|
Normally I would think it's just a small error. Try this instead:
```
UPDATE TOP (100) st
SET code = cd.conceptid
FROM sterms st
INNER JOIN Description cd
ON st.str=cd.term
ORDER BY st.datetime DESC
```
|
update top 100 rows order by Datetime desc
|
[
"",
"sql",
"sql-server",
""
] |
So I have a table contains log activity for users over the course of a certain period and looks as such:
message\_table
```
+--------+------------+----------------------+
| Userid | Message_Id | Timestamp |
+--------+------------+----------------------+
| 3433 | 10051 | 05-Jun-2015 04:00:00 |
| 6321 | 10052 | 05-Jun-2015 04:01:00 |
| 83821 | 10053 | 05-Jun-2015 04:01:15 |
| ... | ... | ... |
| 2041 | 20052 | 15-Jun-2015 23:59:00 |
+--------+------------+----------------------+
```
I also have a list of users that I'm interested in doing activity counts for.
interesting\_userid
```
ββββββββββ
β Userid β
β βββββββββ£
β 3433 β
β 83821 β
β 1454 β
ββββββββββ
```
My goal: I want to pick a certain timeframe (let's say between Jun 7 and Jun 9 for example). I want to have a table that has for each day in the timeframe, all userid's, and then their respective counts.
This is the view I'm looking for:
```
+-----------+--------+--------------+
| DayOfWeek | Userid | num_messages |
+-----------+--------+--------------+
| Jun 7 | 3433 | 2 |
| Jun 7 | 83821 | 5 |
| Jun 7 | 1454 | 0 |
| Jun 8 | 3433 | 1 |
| Jun 8 | 83821 | 5 |
| Jun 8 | 1454 | 2 |
| Jun 9 | 3433 | 0 |
| Jun 9 | 83821 | 3 |
| Jun 9 | 1454 | 1 |
+-----------+--------+--------------+
```
Instead, this is what I get; rows where the count is 0 are being excluded:
```
+-----------+--------+--------------+
| DayOfWeek | Userid | num_messages |
+-----------+--------+--------------+
| Jun 7 | 3433 | 2 |
| Jun 7 | 83821 | 5 |
| Jun 8 | 3433 | 1 |
| Jun 8 | 83821 | 5 |
| Jun 8 | 1454 | 2 |
| Jun 9 | 83821 | 3 |
| Jun 9 | 1454 | 1 |
+-----------+--------+--------------+
```
query looks something like this:
```
select some_date_interval_function(me.timestamp) as DayOfWeek, iu.userid, count(me.message_id)
from interesting_userid iu
left join message_table me
on iu.userid = me.userid
where me.timestamp between '07-Jun-2015' and '09-Jun-2015'
group by DayOfWeek, iu.userid
```
|
You can test the query here: [SQL Fiddle](http://sqlfiddle.com/#!4/82c8a/3)
```
WITH valid_date_range(valid_date) AS (
SELECT x.range_start + LEVEL - 1
FROM (SELECT TO_DATE('2015-06-07', 'YYYY-MM-DD') AS range_start,
TO_DATE('2015-06-09', 'YYYY-MM-DD') AS range_end
FROM dual) x
CONNECT BY x.range_start + LEVEL - 1 <= x.range_end),
message_count_by_user_and_date(message_date, userid, num_messages) AS (
SELECT d.valid_date, iu.userid, COUNT(me.message_id)
FROM interesting_userid iu
JOIN valid_date_range d ON 1 = 1
LEFT JOIN message_table me
ON me.userid = iu.userid
AND me.timestamp >= d.valid_date
AND me.timestamp < d.valid_date + 1
GROUP BY d.valid_date, iu.userid)
SELECT some_date_interval_function(m.message_date) AS DayOfWeek,
m.userid,
m.num_messages
FROM message_count_by_user_and_date m
ORDER BY m.message_date, m.userid;
```
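The calendar-table idea (generate every date, cross join to every user, then left join the messages) can be sketched in SQLite through Python with a recursive CTE standing in for the `CONNECT BY` date generator; tables and data are invented miniatures of the question's:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE users (userid INT);
INSERT INTO users VALUES (1),(2);
CREATE TABLE msgs (userid INT, ts TEXT);
INSERT INTO msgs VALUES (1,'2015-06-07'),(1,'2015-06-07'),(2,'2015-06-08');
""")
# Every (day, user) pair appears even when no message matches, so COUNT is 0.
rows = con.execute("""
WITH RECURSIVE days(d) AS (
  SELECT '2015-06-07'
  UNION ALL SELECT date(d, '+1 day') FROM days WHERE d < '2015-06-09'
)
SELECT days.d, u.userid, COUNT(m.ts)
FROM days
CROSS JOIN users u
LEFT JOIN msgs m ON m.userid = u.userid AND date(m.ts) = days.d
GROUP BY days.d, u.userid
ORDER BY days.d, u.userid
""").fetchall()
print(rows)
```

`COUNT(m.ts)` (not `COUNT(*)`) is what makes the unmatched pairs come out as 0 rather than 1.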
|
You can try this:
```
select u.userid, e.DayOfWeek, e.tot_messages
from selected_users u
left join ( select userid,
                   some_date_interval_function(tstamp) as DayOfWeek,
                   count(*) tot_messages
            from messages where tstamp between '07-Jun-2015' and '09-Jun-2015'
            group by userid, some_date_interval_function(tstamp)) e
on u.userid=e.userid;
```
change the tables and columns to yours and add the required columns to the final select.
|
Viewing counts for all users in an activity table by the day without losing users who have 0 counts
|
[
"",
"sql",
"oracle",
"sas",
""
] |
I am trying to retrieve records from the customer table with custname and city:
```
custname|city
Anand|London
Paul|Rome
.
.
.
```
However, when retrieving, if the city is London then Brussels should be displayed in its place; otherwise the original city name should be displayed.
I have tried the following queries:
```
select custname,case city when 'London' then 'Brussels' end from customer;
```
and
```
select custname,decode(city,'London','Brussels') from customer;
```
Both are giving the result as:
```
custname|city
Anand|Brussels
Paul|
```
Other cities are not being displayed. How do I write this query correctly? Please help me. Thanks in advance.
|
You should use `ELSE`:
```
select custname, case
when city = 'London' then 'Brussels'
else city
end as city
from customer;
```
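A quick way to confirm the `ELSE` behavior is a tiny SQLite session from Python, using the question's two sample rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE customer (custname TEXT, city TEXT);
INSERT INTO customer VALUES ('Anand','London'),('Paul','Rome');
""")
# Without ELSE the non-matching branch yields NULL; with ELSE it keeps the city.
rows = con.execute("""
SELECT custname,
       CASE WHEN city = 'London' THEN 'Brussels' ELSE city END AS city
FROM customer
""").fetchall()
print(rows)  # [('Anand', 'Brussels'), ('Paul', 'Rome')]
```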
|
Use the `else` clause:
```
select custname,
(case city when 'London' then 'Brussels'
else city
end) as city
from customer;
```
|
usage of case in SQL
|
[
"",
"sql",
""
] |
I'm trying to get each of the following sums as fields, but I'm only getting the outer one; the other two return an undefined index.
How do I expose them as separate fields?
```
SELECT
(SELECT SUM(amount) AS income
FROM transaction
WHERE
type = '1' && state = 'a')
- (SELECT SUM(amount) AS expense
FROM transaction
WHERE
type = '2' && state = 'a')
AS balance
```
Balance works, but for income and expense I get undefined index.
|
Use conditional aggregation:
```
SELECT SUM(CASE WHEN type = '1' AND state = 'a' THEN amount ELSE 0
END) as income,
SUM(CASE WHEN type = '2' AND state = 'a' THEN amount ELSE 0
END) as balance
FROM transaction;
```
If you still want the difference, then you need to repeat the logic:
```
SELECT SUM(CASE WHEN type = '1' AND state = 'a' THEN amount ELSE 0
END) as income,
SUM(CASE WHEN type = '2' AND state = 'a' THEN amount ELSE 0
END) as expense,
(SUM(CASE WHEN type = '1' AND state = 'a' THEN amount ELSE 0 END) -
SUM(CASE WHEN type = '2' AND state = 'a' THEN amount ELSE 0 END)
) as balance
FROM transaction;
```
EDIT:
Actually, it is better to factor out the common conditions into a `WHERE` clause:
```
SELECT SUM(CASE WHEN type = '1' THEN amount ELSE 0 END) as income,
SUM(CASE WHEN type = '2' THEN amount ELSE 0 END) as expense,
(SUM(CASE WHEN type = '1' THEN amount ELSE 0 END) -
SUM(CASE WHEN type = '2' THEN amount ELSE 0 END)
) as balance
FROM transaction
WHERE state = 'a' AND type IN ('1', '2');
```
This can take advantage of an index on `transaction(state, type, amount)`.
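Conditional aggregation is easy to verify in SQLite from Python; the table name (`trans`, to avoid the reserved word) and the amounts are invented sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE trans (type TEXT, state TEXT, amount INT);
INSERT INTO trans VALUES
  ('1','a',100),('1','a',50),('2','a',30),('2','b',999);
""")
# One pass over the table: each SUM(CASE ...) picks out one slice of rows.
row = con.execute("""
SELECT SUM(CASE WHEN type = '1' THEN amount ELSE 0 END) AS income,
       SUM(CASE WHEN type = '2' THEN amount ELSE 0 END) AS expense
FROM trans
WHERE state = 'a' AND type IN ('1', '2')
""").fetchone()
print(row)  # (150, 30)
```

The state-'b' row is excluded by the `WHERE` clause, so it contributes to neither sum.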
|
You can use `case` and `sum` to get each result:
```
select sum(case when `type` = '1' then `amount` else 0 end) as `income`
, sum(case when `type` = '2' then `amount` else 0 end) as `expense `
, sum((case when `type` = '2' then -1 else 1 end) * `amount`) as `difference`
from `transaction`
where `type` in ('1', '2')
where `state` = 'a'
```
|
MySQL multiples SUM() functions inside one query as fields
|
[
"",
"mysql",
"sql",
"select",
"sum",
""
] |
I created a trial account on [Azure](http://azure.microsoft.com/en-us/), and I deployed my database from `SmarterAsp`.
When I run a pivot query on `SmarterAsp\MyDatabase`, the results appeared in **2 seconds**.
However, running the same query on `Azure\MyDatabase` took **94 seconds**.
I use the SQL Server 2014 Management Studio (trial) to connect to the servers and run query.
Is this difference of speed because my account is a trial account?
**Some related info to my question**
the query is:
```
ALTER procedure [dbo].[Pivot_Per_Day]
@iyear int,
@imonth int,
@iddepartment int
as
declare @columnName Nvarchar(max) = ''
declare @sql Nvarchar(max) =''
select @columnName += quotename(iDay) + ','
from (
Select day(idate) as iDay
from kpivalues where year(idate)=@iyear and month(idate)=@imonth
group by idate
)x
set @columnName=left(@columnName,len(@columnName)-1)
set @sql ='
Select * from (
select kpiname, target, ivalues, convert(decimal(18,2),day(idate)) as iDay
from kpi
inner join kpivalues on kpivalues.idkpi=kpi.idkpi
inner join kpitarget on kpitarget.idkpi=kpi.idkpi
inner join departmentbscs on departmentbscs.idkpi=kpi.idkpi
where iddepartment='+convert(nvarchar(max),@iddepartment)+'
group by kpiname,target, ivalues,idate)x
pivot
(
avg(ivalues)
for iDay in (' + @columnName + ')
) p'
execute sp_executesql @sql
```
Running this query on 3 different servers gave me different results in terms of Elapsed time till my pivot table appear on the screen:
Azure - Elapsed time = 100.165 sec
Smarterasp.net - Elapsed time = 2.449 sec
LocalServer - Elapsed time = 1.716 sec
Regarding my trial account on Azure, I made it with the main goal to check if I will have a better speed than Smarter when running stored procedure like the above one.
I choose for my database Service Tier - Basic, Performance level -Basic(5DTUs) and Max. Size 2GB.
My database has 16 tables, 1 table has 145284 rows, and the database size is 11mb. Its a test database for my app.
My questions are:
1. What can I do, to optimize this query (sp)?
2. Is Azure recommended for small databases (100mb-1Gb)? I mean performance vs. cost!
**Conclusions based on your inputs:**
* I made suggested changes to the query and the performance was improved with more than 50% - Thank you Remus
* I tested my query on Azure S2 and the Elapsed time for updated query was 11 seconds.
* I tested again my query on P1 and the Elapsed time was 0.5 seconds :)
* the same updated query on SmarterASP had Elapsed time 0.8 seconds.
Now its clear for me what are the tiers in Azure and how important is to have a very good query (I even understood what is an Index and his advantage/disadvantage)
Thank you all,
Lucian
|
This is first and foremost a question of performance. You are dealing with poorly performing code on your part and you must identify the bottleneck and address it. I'm talking about the bad **2 seconds** performance now. Follow the guidelines at [How to analyse SQL Server performance](http://rusanu.com/2014/02/24/how-to-analyse-sql-server-performance/). Once you get this query to execute locally at a level acceptable for a web app (less than 5 ms) then you can ask the question of porting it to Azure SQL DB. Right now your trial account is only highlighting the existing inefficiencies.
## After update
```
...
@iddepartment int
...
iddepartment='+convert(nvarchar(max),@iddepartment)+'
...
```
so what is it? is the `iddepartment` column an `int` or an `nvarchar`? And why use `(max)`?
Here is what you should do:
* parameterize `@iddepartment` in the inner dynamic SQL
* stop doing `nvarchar(max)` conversion. Make the `iddepartment` and `@iddertment` types match
* ensure indexes on `iddepartment` and all `idkpi`s
Here is how to parameterize the inner SQL:
```
set @sql =N'
Select * from (
select kpiname, target, ivalues, convert(decimal(18,2),day(idate)) as iDay
from kpi
inner join kpivalues on kpivalues.idkpi=kpi.idkpi
inner join kpitarget on kpitarget.idkpi=kpi.idkpi
inner join departmentbscs on departmentbscs.idkpi=kpi.idkpi
where iddepartment=@iddepartment
group by kpiname,target, ivalues,idate)x
pivot
(
avg(ivalues)
for iDay in (' +@columnName + N')
) p'
execute sp_executesql @sql, N'@iddepartment INT', @iddepartment;
```
The covering indexes is, by far, the most important fix. That obviously requires more info than is here present. Read [Designing Indexes](https://technet.microsoft.com/en-us/library/ms190804(v=sql.105).aspx) including all sub-chapters.
As a more general comment: this sort of queries befit [columnstores](https://msdn.microsoft.com/en-us/library/gg492088.aspx) more than rowstore, although I reckon the data size is, basically, tiny. Azure SQL DB supports updateable clustered columnstore indexes, you can experiment with it in anticipation of serious data size. They do require Enterprise/Development on the local box, true.
|
(**Update**: the original question has been changed to also ask how to optimise the query - which is a good question as well. The original question was *why the difference* which is what this answer is about).
The performance of individual queries is heavily affected by the performance tiers. I know the documentation implies the tiers are about load, that is not strictly true.
I would re-run your test with an S2 database as a starting point and go from there.
Being on a trial subscription does not in itself affect performance, but with the free account you are probably using a B level which isn't really useable by anything real - certainly not for a query that takes 2 seconds to run locally.
Even moving between, say, S1 and S2 will show a noticeable difference in performance of an individual query.
If you want to experiment, do remember you are charged a day for "any part of a day", which is probably okay for S level but be careful when testing P level.
For background; when Azure introduced the new tiers last year, they changed the hosting model for SQL. It used to be that many databases would run on a shared sqlserver.exe. In the new model, each database effectively gets its own sqlserver.exe that runs in a resource constrained sandbox. That is how they control the "DTU usage" but also affects general performance.
|
Why is running a query on SQL Azure so much slower?
|
[
"",
"sql",
"azure",
"ssms",
"azure-sql-database",
"sql-server-2014",
""
] |
Lets say, we have this table :
```
+------+------+
| COL1 | COL2 |
+------+------+
| A | B |
+------+------+
| B | A |
+------+------+
| C | D |
+------+------+
```
I want to count the number of times either `letter1, letter2` or `letter2, letter1` appears in the two columns.
I want the result :
```
+------+------+------+
| COL1 | COL2 | COL3 |
+------+------+------+
| A | B | 2 |
+------+------+------+
| C | D | 1 |
+------+------+------+
```
**NOTE:** It can be either `AB` or `BA` doesn't matter.
I tried:
```
SELECT
    COL1,COL2,COUNT(*) AS COL3
FROM
X
GROUP BY COL1,COL2;
```
But that gets me :
```
+------+------+------+
| COL1 | COL2 | COL3 |
+------+------+------+
| A | B | 1 |
+------+------+------+
| B | A | 1 |
+------+------+------+
| C | D | 1 |
+------+------+------+
```
|
You can do this by swapping the columns if you need to:
```
SELECT Col1, Col2, COUNT(*)
FROM
(
SELECT
CASE WHEN Col1 < Col2 THEN Col1 ELSE Col2 END AS Col1,
CASE WHEN Col1 < Col2 THEN Col2 ELSE Col1 END AS Col2
FROM T
) t
GROUP BY Col1, Col2
```
[Fiddle](http://sqlfiddle.com/#!9/e6fda/2)
|
Another try
```
SELECT LEAST(col1, col2) col11, GREATEST(col1, col2) col12 , COUNT(1) FROM X
GROUP BY col11, col12
```
[SqlFiddle](http://sqlfiddle.com/#!9/4bd6a/26)
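SQLite's two-argument `MIN()`/`MAX()` scalar functions behave like MySQL's `LEAST()`/`GREATEST()`, so the normalize-then-group trick can be sketched from Python with the question's sample rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE x (col1 TEXT, col2 TEXT);
INSERT INTO x VALUES ('A','B'),('B','A'),('C','D');
""")
# Put each pair into a canonical (smaller, larger) order before grouping,
# so (A,B) and (B,A) land in the same group.
rows = con.execute("""
SELECT MIN(col1, col2) AS a, MAX(col1, col2) AS b, COUNT(*) AS n
FROM x
GROUP BY a, b
ORDER BY a
""").fetchall()
print(rows)  # [('A', 'B', 2), ('C', 'D', 1)]
```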
|
How to count the number of times two values appear in two columns in any order
|
[
"",
"mysql",
"sql",
"database",
"count",
""
] |
With SQL Server 2016 supporting [Temporal Tables](https://msdn.microsoft.com/en-us/library/dn935015.aspx "Temporal Tables") I wonder if there is a way to determine if a table is currently temporal? Something like
`select * from sys.objects where object_id('dbo.MyTable', 'u') = parent_object_id and type_desc = "SYSTEM_VERSIONED"`
|
> ```
> SELECT temporal_type
> FROM sys.tables
> WHERE object_id = OBJECT_ID('dbo.MyTable', 'u')
> ```
>
> 0 = NON\_TEMPORAL\_TABLE
>
> 1 = HISTORY\_TABLE
>
> 2 = SYSTEM\_VERSIONED\_TEMPORAL\_TABLE
[Documentation](https://msdn.microsoft.com/en-us/library/ms187406.aspx)
|
Another way of listing temporal tables with their history tables together is given in this SQL tutorial as [List Temporal and History Tables in a SQL Server Database](http://www.kodyaz.com/sql-server-2016/create-sql-server-2016-temporal-table.aspx)
```
select
t.object_id,
t.name,
t.temporal_type,
t.temporal_type_desc,
h.object_id,
h.name,
h.temporal_type,
h.temporal_type_desc
from sys.tables t
inner join sys.tables h on t.history_table_id = h.object_id
```
|
How to determine if sql table is Temporal?
|
[
"",
"sql",
"sql-server",
"sql-server-2016",
""
] |
Can I have an alias for the result of the join of 2 tables, so that I can refer to the joined result by its alias name?
```
SELECT emp_table.Name, dept_table.dept
FROM dept_table
inner join emp_table
on emp_table.dept_id = dept_table.dept_id;
```
|
You can use a subquery:
```
SELECT YourAlias.Name
,YourAlias.dept
FROM (SELECT emp_table.Name, dept_table.dept
FROM dept_table
INNER JOIN emp_table
on emp_table.dept_id = dept_table.dept_id) AS YourAlias;
```
Or use a CTE:
```
;WITH YourAlias AS
(
SELECT emp_table.Name, dept_table.dept
FROM dept_table
INNER JOIN emp_table
on emp_table.dept_id = dept_table.dept_id
)
SELECT Name, dept
FROM YourAlias;
```
More information on CTE [here](https://technet.microsoft.com/en-us/library/ms190766(v=sql.105).aspx).
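The derived-table alias works the same way in any engine; here is a small runnable sketch in SQLite via Python, with invented sample data for the two tables:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dept_table (dept_id INT, dept TEXT);
CREATE TABLE emp_table (name TEXT, dept_id INT);
INSERT INTO dept_table VALUES (1,'HR'),(2,'IT');
INSERT INTO emp_table VALUES ('Ann',1),('Bob',2);
""")
# The whole join result is wrapped in a subquery and referred to as 'ya'.
rows = con.execute("""
SELECT ya.name, ya.dept
FROM (SELECT e.name, d.dept
      FROM dept_table d
      INNER JOIN emp_table e ON e.dept_id = d.dept_id) AS ya
ORDER BY ya.name
""").fetchall()
print(rows)  # [('Ann', 'HR'), ('Bob', 'IT')]
```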
|
This is not exactly an alias for the join result, but you can use column and table aliases like this:
```
SELECT e.Name as aliasname, d.dept as aliasdept
FROM dept_table d
inner join emp_table e
on e.dept_id = d.dept_id;
```
|
can i have an alias for the result table of inner join
|
[
"",
"sql",
"sql-server",
""
] |
I have a log table that contains columns for exception and stack trace data in varchar format.
I would like to query this log table to get counts of similar exceptions.
How would I aggregate similar but not exact matches together?
```
MyApp.MyCustomException: UserId 1 not found
MyApp.MyCustomException: UserId 2 not found
MyApp.MyCustomException: UserId 3 not found
MyApp.MyCustomException: UserId 1 login failed
MyApp.MyCustomException: UserId 2 login failed
MyApp.MyCustomException: UserId 3 login failed
```
The above 6 rows should be counted as
```
"MyApp.MyCustomException: UserId not found" Count:3
"MyApp.MyCustomException: UserId login failed" Count:3
```
The LEFT function would work on the above simple example but not for an exception such as NullReferenceException where the error could happen in several different places in the code.
EDIT: Updated example to more clearly represent the problem.
|
I would just use `like` with a `case` (note the outer query must select and group by the canonical alias, not the raw `trace` column):
```
select canonical_trace, count(*)
from (select l.*,
             (case when trace like 'MyApp.MyCustomException: UserId % not found'
                   then 'MyApp.MyCustomException: UserId not found'
                   when trace like 'MyApp.MyCustomException: UserId % login failed'
                   then 'MyApp.MyCustomException: UserId login failed'
                   else trace
              end) as canonical_trace
      from log l
     ) l
group by canonical_trace;
```
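The same normalize-then-group idea can be checked quickly with Python's `sqlite3` (the log rows are the ones from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE log (trace TEXT)")
con.executemany("INSERT INTO log VALUES (?)", [
    ("MyApp.MyCustomException: UserId 1 not found",),
    ("MyApp.MyCustomException: UserId 2 not found",),
    ("MyApp.MyCustomException: UserId 3 not found",),
    ("MyApp.MyCustomException: UserId 1 login failed",),
    ("MyApp.MyCustomException: UserId 2 login failed",),
    ("MyApp.MyCustomException: UserId 3 login failed",),
])

# Map each trace onto a canonical form with CASE/LIKE, then group on that form.
rows = con.execute("""
    SELECT canonical_trace, COUNT(*)
    FROM (SELECT CASE
                   WHEN trace LIKE 'MyApp.MyCustomException: UserId % not found'
                     THEN 'MyApp.MyCustomException: UserId not found'
                   WHEN trace LIKE 'MyApp.MyCustomException: UserId % login failed'
                     THEN 'MyApp.MyCustomException: UserId login failed'
                   ELSE trace
                 END AS canonical_trace
          FROM log)
    GROUP BY canonical_trace
    ORDER BY canonical_trace
""").fetchall()
```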
|
You could try using
```
patindex('%pattern%',column)
```
The whole select could be something like
```
SELECT * FROM tbl
WHERE patindex('%MyApp.MyCustomException: % not found%',err)>0
```
Make sure not to forget the `%` before and after the end of the pattern. The function will give you the position the pattern was found in a column or `0` if not found.
See here for an example: <http://sqlfiddle.com/#!3/1a70e/1>
**Edit:**
It can be done with CTE like
```
WITH msgs AS(
SELECT err,CASE
WHEN patindex('%MyApp.MyCustomException: % not found%',err)>0 THEN 1
WHEN patindex('%Wrong password for %, please try again%',err)>0 THEN 2
ELSE 0 END msgno FROM tbl )
SELECT msgno, MIN(err) msg1, COUNT(*) cnt FROM msgs GROUP BY msgno
```
see here: <http://sqlfiddle.com/#!3/9565c/2>
**2. Edit:**
Or, in a more general way:
```
WITH pats as (SELECT 'UserId' pat -- define various patterns for
UNION ALL SELECT 'IP' -- words to be removed after ...
), pos1 AS ( -- find position of pattern
SELECT pat,err msg,patindex('%'+pat+'%',err)+len(pat) p1 FROM tbl,pats
), pos2 AS ( -- remove word after pattern
SELECT LEFT(msg,p1)
+'<'+pat+'> '
+SUBSTRING(msg,charindex(' ',SUBSTRING(msg,p1+1,256))+p1,256) msg
FROM pos1 WHERE p1>len(pat)
), nonames AS ( -- find non-specific messages
SELECT err FROM tbl WHERE NOT EXISTS
(SELECT 1 FROM pos1 WHERE msg=err AND p1>len(pat))
)
SELECT msg, count(*) cnt FROM -- combine all, group and count
( SELECT msg FROM pos2 UNION ALL SELECT err FROM nonames ) m
GROUP BY msg
```
From all the messages this will remove the first word (=character sequences without blanks) appearing after one of a number of predefined patterns (`pat`). This will make the messages of a certain type look exactly the same so they can be grouped.
You can try it out here (**my final solution**): <http://sqlfiddle.com/#!3/a2fb9/4>
|
Is there a way to group like strings using TSQL
|
[
"",
"sql",
"sql-server-2008",
"t-sql",
""
] |
Can anyone tell me what the difference (besides the obvious) is between these two queries:
1)
```
declare @coy_oid varchar
declare @field_name varchar
set @coy_oid = '10'
set @field_name = 'ResultReason'
SELECT OID, POSITION, DESCRIPTION, FIELD_NAME
FROM T_FIELD_TEXT_CHOICES
WHERE COY_OID = @coy_oid AND FIELD_NAME = @field_name
```
2)
```
declare @coy_oid varchar
declare @field_name varchar
set @coy_oid = '10'
set @field_name = 'ResultReason'
SELECT OID, POSITION, DESCRIPTION, FIELD_NAME
FROM T_FIELD_TEXT_CHOICES
WHERE COY_OID = @coy_oid AND FIELD_NAME = 'ResultReason'
```
The first one returns nothing and the second returns the expected results. I am sure it has to do with the FIELD\_NAME being a variable, but I don't know why.
Guess I should add this is SQL Server 2008 R2, but maybe it doesn't matter.
|
Your variables are declared as `varchar`. That's a single character, so in the first query you're comparing with 'R'. You probably meant to use something like `varchar(100)`...
|
`varchar` is the equivalent of `varchar(1)`
To see this consider
```
DECLARE @v1 VARCHAR
SET @v1 = '12345'
DECLARE @v2 VARCHAR (5)
SET @v2 = '12345'
SELECT @v1 AS v1, @v2 AS v2
```
which returns
```
'1' for v1
'12345' for v2
```
|
Difference in SQL queries with or without parameters
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I am cracking my skull as to how to query records where the date field returns data from Thursday to the previous 7 days (Thursday). This is for reporting and I need assistance.
I had a look at the `DATEDIFF` function but not too sure how to have this date generic to run the SSRS report automatically.
|
```
SELECT *
FROM <TABLE_NAME>
WHERE
<YOURDATE> >= DATEADD(wk, DATEDIFF(wk,0,GETDATE()), -4) AND
<YOURDATE> <= DATEADD(wk, DATEDIFF(wk,0,GETDATE()), 3)
```
The code will select all the rows from the current Thursday to the previous Thursday.
DATEADD(wk, DATEDIFF(wk,0,GETDATE()), -4): Gives the date for the last Thursday
DATEADD(wk, DATEDIFF(wk,0,GETDATE()), 3): Gives the date for the current Thursday
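If it helps to sanity-check the window outside SQL, here is a hedged sketch of the same "most recent Thursday and the one before it" arithmetic in plain Python (`thursday_window` is a made-up helper name, not part of the report):

```python
import datetime

def thursday_window(today):
    """Return (previous Thursday, most recent Thursday) for a given date."""
    # Monday is weekday 0, so Thursday is weekday 3.
    days_since_thursday = (today.weekday() - 3) % 7
    current_thursday = today - datetime.timedelta(days=days_since_thursday)
    return current_thursday - datetime.timedelta(days=7), current_thursday

# 2015-06-26 was a Friday; the window should be 2015-06-18 .. 2015-06-25.
start, end = thursday_window(datetime.date(2015, 6, 26))
```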
|
For the last 7 days, something like this:
```
SELECT *
FROM Tbl
WHERE YourDate BETWEEN (GETDATE() - 6) AND GETDATE()
```
|
Date from Thursday to last Thursday
|
[
"",
"sql",
"sql-server",
"ssrs-2008",
"datediff",
""
] |
I have a high performance DB (on SQL Server 2012). One of my views had a virtual column represented by an inline scalar valued UDF computing custom hash from the previous column value. I moved this calculation from the view into one of the underlying tables to improve the performance. I added a new computed column to the table and persisted the data. Then, created index on this column and referenced it back in the corresponding view (basically migrating the logic from the view into the table).
Now I am wondering why wouldn't I just add a regular VARCHAR(32) column to the table instead of the computed PERSISTED column? I could create a DEFAULT on this column with the above mentioned UDF for all new inserts and recalculate the historical records.
Is there any advantage of the indexed computed column with PERSISTED data vs. regular NC indexed column?
Thx.
|
The computed column will keep your field up to date if the field data it is based on is changed. Adding just a default will update the field on insert if no value is provided for the field.
If you know that your data is not going to change (which I think you are implying but did not specify in your question), then they are functionally the same for you. The computed column would probably be preferred though to prevent accidental update of the field with an incorrect value (bypassing the default). Also it is clear to any other developers what the field is to be used for.
|
You could switch to a "normal" column with a default value or insert trigger. The one potential issue is that unlike a computed column anyone with insert/update access could (potentially accidentally) change the value of the column.
Performance is going to be the same either way. In essence that is what the db is doing behind the scenes with a persisted computed column. As a developer a column that is persisted computed is clearer in the intent than a default value. Default value implies it is one of many possible values not the only possible value.
Be sure to declare the UDF With SchemaBinding. This will allow SQL Server to determine if the function is deterministic and flag it as so. That can improve query plan optimization in some cases.
|
PERSISTED Computed Column vs. Regular Column
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2012",
""
] |
Here is a simplified version of the table I am looking at:
```
CREATE TABLE [dbo].[FrustratingTable]
(
[Id] Uniqueidentifier NOT NULL
, [SecondField] [datetime]
, [ThirdField] varchar(128)
)
```
I want to insert new records into this table. I have tried 3 approaches:
```
INSERT INTO [dbo].[FrustratingTable] (Id, SecondField, ThirdField)
SELECT newid() as Id,
'6/25/2015' as SecondField, 'Example' as ThirdField
```
This approach inserts, but the resulting key isn't a nice sequential GUID like the other ones in the table
```
INSERT INTO [dbo].[FrustratingTable] (Id, SecondField, ThirdField)
SELECT NEWSEQUENTIALID() as Id, '6/25/2015' as SecondField, 'Example' as ThirdField
```
This fails with error
> The newsequentialid() built-in function can only be used in a DEFAULT expression for a column of type 'uniqueidentifier' in a CREATE TABLE or ALTER TABLE statement. It cannot be combined with other operators to form a complex scalar expression.
```
INSERT INTO [dbo].[FrustratingTable] (SecondField,ThirdField)
SELECT '6/25/2015' as SecondField, 'Example' as ThirdField
```
This fails with the error
> Cannot insert the value NULL into column 'id', table 'mydatabase.dbo.frustratingtable'; column does not allow nulls. INSERT fails.
Is it possible to solve this without altering the table definition?
|
You may be able to do this by way of using a table variable:
```
declare @t table (
ID uniqueidentifier not null default newsequentialid(),
SecondField datetime,
ThirdField varchar(128)
)
insert into @t (SecondField,ThirdField)
output inserted.ID,inserted.SecondField,inserted.ThirdField
into FrustratingTable
values
('20150101','abc'),
('20150201','def'),
('20150301','ghi')
select * from FrustratingTable
```
Results:
```
Id SecondField ThirdField
------------------------------------ ----------------------- ------------
1FEBA239-091C-E511-9B2F-78ACC0C2596E 2015-01-01 00:00:00.000 abc
20EBA239-091C-E511-9B2F-78ACC0C2596E 2015-02-01 00:00:00.000 def
21EBA239-091C-E511-9B2F-78ACC0C2596E 2015-03-01 00:00:00.000 ghi
```
Since the table variable sets the value via a `default`, we're allowed to use `NEWSEQUENTIALID()`.
Of course, for very large data sets, there's a penalty in temporarily having two copies of the data lurking around.
---
An alternative would be to use an older solution, called COMBs, which were used before `NEWSEQUENTIALID()` was introduced:
```
SELECT CAST(CAST(NEWID() AS BINARY(10)) + CAST(GETDATE() AS BINARY(6)) AS UNIQUEIDENTIFIER)
```
Generates `uniqueidentifiers` with better locality than `NEWID()` by itself does.
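For illustration only, a COMB-style value can be built the same way in client code: random bytes with a big-endian millisecond timestamp suffix, so values sort roughly by creation time. This is a sketch of the idea, not SQL Server's exact `uniqueidentifier` byte ordering:

```python
import os
import time
import uuid

def comb_uuid():
    """10 random bytes + 6 bytes of millisecond timestamp (illustrative layout)."""
    millis = int(time.time() * 1000)
    return uuid.UUID(bytes=os.urandom(10) + millis.to_bytes(6, "big"))

a = comb_uuid()
time.sleep(0.01)           # later creation -> later timestamp suffix
b = comb_uuid()
```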
|
Ok, if you first take the `[IncrementGuid]` function from [this answer](https://stackoverflow.com/a/9116354/659190),
then you can do something like this,
[**Fiddle Here**](http://sqlfiddle.com/#!6/998cf/7)
```
INSERT [dbo].[FrustratingTable]
SELECT
[dbo].[IncrementGuid](MAX([Id])),
'01/01/01',
'3'
FROM
[dbo].[FrustratingTable];
```
## Caveat:
Once you've reviewed the function in the other answer, you'll agree there must be a better way.
Change the code that needs the GUIDs to be sequential.
|
How to INSERT into a table that uses sequential GUIDs as a primary key?
|
[
"",
"sql",
"sql-server-2012",
"guid",
""
] |
I am using SQL Server 2008 R2. I have a table called `Messages` where I store user messages each user send to other users. The table structure is like below.
```
+--------+----------+-----------------+------------+
| Sender | Receiver | Message | Date |
+--------+----------+-----------------+------------+
| John | Dennis | How are you | 2015-06-06 |
| John | Dennis | Hi | 2015-06-05 |
| Tom | John | How much is it? | 2015-06-04 |
| Tom | John | Did you buy it? | 2015-06-03 |
| Robin | Tom | Hey man | 2015-06-03 |
| Dennis | John | What up | 2015-06-02 |
| John | Tom | Call me | 2015-06-01 |
+--------+----------+-----------------+------------+
```
I want to get the newest message and the other participant's name for a selected user for each conversation. For example, there are three conversations: one between "John-Dennis", a 2nd one between "John-Tom", and a 3rd one between "Robin-Tom".
If I want to get conversations for user `john`, I want to get the latest conversation message with the name of other user who is in the conversation.
The expected result for the above scenario should be like this.
```
+-------------+-----------------+------------+
| Participant | Message | Date |
+-------------+-----------------+------------+
| Dennis | How are you | 2015-06-06 |
| Tom | How much is it? | 2015-06-04 |
+-------------+-----------------+------------+
```
How can I achieve this using a SQL query in SQL Server? I have been struggling with this part for days. Please help. Thanks in advance.
|
It's possible to compress this a bit, but I've split it into simple steps to hopefully make it a little easier to follow.
```
-- Sample data from the question.
declare @msg table (Sender varchar(32), Receiver varchar(32), [Message] varchar(max), [Date] date);
insert @msg
(Sender, Receiver, [Message], [Date])
values
('John','Dennis', 'How are you', '2015-06-06'),
('Dennis', 'John', 'Hi', '2015-06-05'),
('Tom', 'John', 'How much is it?', '2015-06-04'),
('Tom', 'John', 'Did you buy it?', '2015-06-03'),
('Robin', 'Tom', 'Hey man', '2015-06-03'),
('Dennis', 'John', 'What up', '2015-06-02'),
('John', 'Tom', 'Call me', '2015-06-01');
-- The name of the user whose conversations you want to find.
declare @UserName varchar(32) = 'John';
-- Step 1: Create columns [Participant1] and [Participant2] that will be the same for
-- each pair of users regardless of who's the sender and who the receiver.
with NameOrderCTE as
(
select
Participant1 = case when Sender < Receiver then Sender else Receiver end,
Participant2 = case when Sender < Receiver then Receiver else Sender end,
*
from
@msg
),
-- Step 2: For each distinct pair of participants, create a [Sequence] number that
-- puts the messages in reverse chronological order.
MessageSequenceCTE as
(
select
*,
[Sequence] = row_number() over (partition by Participant1, Participant2 order by [Date] desc)
from
NameOrderCTE
)
-- Step 3: Get the most recent ([Sequence] = 1) messages for each conversation
-- involving the target user.
select
Participant = case @UserName when Sender then Receiver else Sender end,
[Message],
[Date]
from
MessageSequenceCTE
where
@UserName in (Sender, Receiver) and
[Sequence] = 1;
```
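The canonical-pair plus `ROW_NUMBER()` approach is portable; here is the same idea checked with Python's `sqlite3` (requires SQLite 3.25+ for window functions; the two-argument scalar `MIN(sender, receiver)`/`MAX(sender, receiver)` play the role of the CASE expressions above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE msg (sender TEXT, receiver TEXT, message TEXT, date TEXT)")
con.executemany("INSERT INTO msg VALUES (?,?,?,?)", [
    ("John",   "Dennis", "How are you",     "2015-06-06"),
    ("John",   "Dennis", "Hi",              "2015-06-05"),
    ("Tom",    "John",   "How much is it?", "2015-06-04"),
    ("Tom",    "John",   "Did you buy it?", "2015-06-03"),
    ("Robin",  "Tom",    "Hey man",         "2015-06-03"),
    ("Dennis", "John",   "What up",         "2015-06-02"),
    ("John",   "Tom",    "Call me",         "2015-06-01"),
])

# Number messages per unordered participant pair, newest first, then keep seq = 1.
rows = con.execute("""
    SELECT CASE WHEN sender = :u THEN receiver ELSE sender END AS participant,
           message, date
    FROM (SELECT *,
                 ROW_NUMBER() OVER (
                     PARTITION BY MIN(sender, receiver), MAX(sender, receiver)
                     ORDER BY date DESC) AS seq
          FROM msg)
    WHERE :u IN (sender, receiver) AND seq = 1
    ORDER BY date DESC
""", {"u": "John"}).fetchall()
```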
|
Try this query
```
select a.Receiver as Participant, a.message, a.Date from message a
join (select sender, Receiver, max(date) as date from message where Sender = 'John' group by sender, Receiver) b
on a.sender = b.sender and a.Receiver = b.Receiver and a.Date = b.date
```
|
Group result by matching two columns in SQL Server
|
[
"",
"sql",
"sql-server",
"sql-server-2008-r2",
""
] |
**table1 - doctors**
```
+---------+--------+------+
| country | state | doc |
+---------+--------+------+
| india | AP | 20 |
+---------+--------+------+
| india | TN | 30 |
+---------+--------+------+
| india | KA | 10 |
+---------+--------+------+
| US | LA | 30 |
+---------+--------+------+
| US | CA | 10 |
+---------+--------+------+
| US | NY | 50 |
+---------+--------+------+
```
**table2 - engineers**
```
+---------+--------+-------+
| country | state | engg |
+---------+--------+-------+
| india | AP | 100 |
+---------+--------+-------+
| india | TN | 400 |
+---------+--------+-------+
| india | KA | 250 |
+---------+--------+-------+
| US | LA | 140 |
+---------+--------+-------+
| US | CA | 120 |
+---------+--------+-------+
| US | NY | 150 |
+---------+--------+-------+
```
**Desired output:**
```
+---------+------+-------+
| country | doc | engg |
+---------+------+-------+
| india | 60 | 750 |
+---------+------+-------+
| US | 90 | 410 |
+---------+------+-------+
```
I tried with the below query but am getting more count of docs and engg. Someone please correct me..
```
select country, sum(a.doc), sum(b.engg)
from table1 a join table2 b on (a.country = b.country)
```
|
You can use `UNION ALL`
```
SELECT
country,
SUM(doc) AS doc,
SUM(engg) AS engg
FROM
(SELECT
country,
doc,
0 AS engg
FROM
doctors
UNION ALL
SELECT
country,
0,
engg
FROM
engineers
) a
GROUP BY
country
```
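The `UNION ALL` trick avoids the row-multiplication the question ran into; here is a quick check with Python's `sqlite3`, using the question's data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE doctors   (country TEXT, state TEXT, doc  INTEGER);
    CREATE TABLE engineers (country TEXT, state TEXT, engg INTEGER);
    INSERT INTO doctors VALUES
      ('india','AP',20), ('india','TN',30), ('india','KA',10),
      ('US','LA',30),    ('US','CA',10),    ('US','NY',50);
    INSERT INTO engineers VALUES
      ('india','AP',100), ('india','TN',400), ('india','KA',250),
      ('US','LA',140),    ('US','CA',120),    ('US','NY',150);
""")

# Stack both tables with zero-padding for the missing measure, then aggregate once.
rows = con.execute("""
    SELECT country, SUM(doc) AS doc, SUM(engg) AS engg
    FROM (SELECT country, doc, 0 AS engg FROM doctors
          UNION ALL
          SELECT country, 0, engg FROM engineers)
    GROUP BY country
    ORDER BY country
""").fetchall()
```

('US' sorts before 'india' because SQLite's default collation compares uppercase letters first.)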
|
I think your problem is that you are getting a cross-product of both the tables with these set of values.
Try using:
```
tableA NATURAL JOIN tableB.
```
|
Join between two tables
|
[
"",
"sql",
"join",
""
] |
I have 2 tables with the same header, table1 and table2.
Table1:
```
AMS nr. sample pos
G242 16
G243 14
G246 18
```
Table2:
```
AMS nr. sample pos
G144 45
G789 32
G189 8
```
I want to add the data from Table2 into Table1 and store all data in Table1. Is this possible using SQL?
|
Take a look at below query
```
INSERT INTO table1
SELECT * FROM table2
```
|
If you are trying to insert the data:
```
INSERT INTO Table1
SELECT * FROM Table2
```
If you are trying to update the table content (for mysql):
```
UPDATE Table1 T1
JOIN Table2 T2 ON T1.`AMS nr.` = T2.`AMS nr.`
SET T1.`sample pos` = T1.`sample pos` + T2.`sample pos`
```
|
SQL join two tables with same header
|
[
"",
"sql",
"join",
""
] |
I have 3 tables without foreign keys (it's a legacy db, so I can't change that).
The model will be something like this (sql code):
```
Select
PROD.ProductoId,
PROD.Descripcion,
STK.StockActual,
DEPO.DepositoId,
DEPO.Descripcion
From
Productos PROD,
Stock STOK,
Depositos DEPO
where
PROD.ProductoId = STOK.ProductoId
and DEPO.DepositoId = STOK.DepositoId
```
How can I do to get the same results with Linq on C#?
|
Try this:
```
var result = from prod in _context.Productos
join stok in _context.Stocks on prod.ProductoId equals stok.ProductoId
join depo in _context.Depositos on stok.DepositoId equals depo.DepositoId
select new
{
ProductoId = prod.ProductoId,
ProductoDescripcion = prod.Descripcion,
StockActual = stok.StockActual,
DepositoId = depo.DepositoId,
DepositoDescripcion = depo.Descripcion
};
```
|
Why don't you create a View in Sql and then update your model, that way you can query from the View class.
|
Linq to 3 tables with no foreign keys
|
[
"",
"sql",
"sql-server",
"linq",
""
] |
In this example, how do I return the count of employees by salary band using SQL?
```
Table : EmployeeSalary
Employee Salary
1 21,000
2 24,000
3 30,000
4 31,000
5 32,000
6 50,000
Table: SalaryBands
ID Name MinSalary MaxSalary
1 Tier1 20,000 24,999
2 Tier2 25,000 44,999
3 Tier3 45,000 150,000
```
|
Try this:
```
select s.name, count(*) as count
from employeesalary e
inner join salarybands s on e.salary >= s.minsalary and e.salary <=s.maxsalary
group by s.name
```
The key idea is to join the tables based on which salary range each employee's salary falls in. The counting is then implemented in the usual way.
[Demo](http://rextester.com/KRB76306)
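The range join works the same way in most engines; a quick check with Python's `sqlite3`, using the question's data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE employeesalary (employee INTEGER, salary INTEGER);
    CREATE TABLE salarybands (id INTEGER, name TEXT,
                              minsalary INTEGER, maxsalary INTEGER);
    INSERT INTO employeesalary VALUES (1,21000),(2,24000),(3,30000),
                                      (4,31000),(5,32000),(6,50000);
    INSERT INTO salarybands VALUES (1,'Tier1',20000,24999),
                                   (2,'Tier2',25000,44999),
                                   (3,'Tier3',45000,150000);
""")

# Join each salary to the band whose [min, max] range contains it, then count.
rows = con.execute("""
    SELECT s.name, COUNT(*)
    FROM employeesalary e
    JOIN salarybands s ON e.salary BETWEEN s.minsalary AND s.maxsalary
    GROUP BY s.name
    ORDER BY s.name
""").fetchall()
```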
|
A nonequi join is a perfect fit for your needs. It's a join that does not use equivalence operator to match rows.
```
SELECT
sb.name,
count(*)
FROM
EmployeeSalary es
INNER JOIN SalaryBands sb ON
es.salary BETWEEN sb.minsalary AND sb.maxsalary
GROUP BY sb.name
```
|
Custom SQL Query to Count by Range or Group
|
[
"",
"sql",
""
] |
I am trying to run a query to update the `DATA` table where the `MAXNVQ` is different it will update a different value into the Burnham Grade column.
This is my code so far:-
```
UPDATE Data
SET Data.BurnhamGrade = (CASE WHEN Data.[MAXNVQ] > 3 THEN 'Grade II/III'
WHEN Data.[MAXNVQ] = 3 THEN 'Grade IV'
WHEN Data.[MAXNVQ] < 3 THEN 'Grade IV' END)
WHERE (MAXNVQ > 3) OR
(MAXNVQ = 3) OR
(MAXNVQ < 3)
```
I get an error saying SQL Execution Error:
> String or binary data would be truncated the statement has been
> terminated.
It looks like I have made a mistake in the SQL.
|
Perhaps this should be a comment, but the comments seem to be about other topics. Your query (or the equivalent) is:
```
UPDATE Data
SET Data.BurnhamGrade = (CASE WHEN Data.[MAXNVQ] > 3 THEN 'Grade II/III'
WHEN Data.[MAXNVQ] = 3 THEN 'Grade IV'
WHEN Data.[MAXNVQ] < 3 THEN 'Grade IV' END)
WHERE MAXNVQ IS NOT NULL;
```
If you are getting a data truncation error, it would be because the strings assigned to `Data.BurnhamGrade` are too long for the column. So, check the length of the column.
A common reason for this is when the length is left off the variable. So, if your table is defined as:
```
CREATE TABLE data (
. . .
BurnhamData varchar,
. . .
);
```
This assigns a default length to the column, which depends on context and is often "1". There is no error, just a column that is shorter than you intend. Instead:
```
CREATE TABLE data (
. . .
BurnhamData varchar(255),
. . .
);
```
Or, if this mapping is *always* true, store the values in a reference table and use a `JOIN`, Or, use a computed column:
```
ALTER TABLE data
ADD BurnhamGrade AS (CASE WHEN Data.[MAXNVQ] > 3 THEN 'Grade II/III'
WHEN Data.[MAXNVQ] = 3 THEN 'Grade IV'
WHEN Data.[MAXNVQ] < 3 THEN 'Grade IV'
END)
```
With this approach, you don't have to worry about keeping the value up-to-date. It will be correct whenever you query the table and use the column.
|
As stated in the comments it's likely that the column you are updating (`BurnhamGrade`) is not big enough to hold the data that you are inserting in to it.
E.g. if your column definition is: `BurnhamGrade VARCHAR(10)` you wouldn't be able to insert `'Grade II/III'` as it's 12 characters long.
This recreates the error:
```
CREATE TABLE #data
(
MAXNVQ INT ,
BurnhamGrade VARCHAR(10)
)
INSERT INTO #data
( [MAXNVQ], BurnhamGrade )
VALUES ( 1, '' ),
( 3, '' ),
( 4, '' )
UPDATE #data
SET #data.BurnhamGrade = ( CASE WHEN MAXNVQ > 3 THEN 'Grade II/III'
WHEN MAXNVQ = 3 THEN 'Grade IV'
WHEN MAXNVQ < 3 THEN 'Grade IV'
END )
-- NOTE THE WHERE CLAUSE ISN'T REQUIRED UNLESS HANDLING NULLS
SELECT *
FROM #data
DROP TABLE #data
```
**Produces:**
> Msg 8152, Level 16, State 14, Line 11
> String or binary data would be truncated.
Modifying the column specification to: `BurnhamGrade VARCHAR(12)` allows it to work.
After the change, it produces:
```
MAXNVQ BurnhamGrade
1 Grade IV
3 Grade IV
4 Grade II/III
```
So altering the column definition to a larger value should fix the issue.
One final thing, you can merge 2 of your cases so:
```
WHEN MAXNVQ = 3 THEN 'Grade IV'
WHEN MAXNVQ < 3 THEN 'Grade IV'
```
Can use `<= 3` as they set the same value, like so:
```
WHEN MAXNVQ <= 3 THEN 'Grade IV'
```
|
UPDATE multiple WHERE SQL not working
|
[
"",
"sql",
"sql-server",
""
] |
I do not know what is wrong with this query but it does not give any error and does not update the row. ENDDATETIME field datatype is TIMESTAMP.
```
UPDATE TEST_BANK set STATUS = 'RECEIVED', ENDDATETIME = '16-JUN-15 11.21.06.000000000' WHERE ENDDATETIME = null;
```


|
I believe in Oracle the syntax for checking for null is as following
```
WHERE ENDDATETIME IS NULL
```
instead of the
```
WHERE ENDDATETIME = NULL
```
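The difference is easy to demonstrate with any SQL engine, since `= NULL` is never true in three-valued logic; here is a sketch with Python's `sqlite3` (table and column names mirror the question, the rows are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE test_bank (status TEXT, enddatetime TEXT)")
con.executemany("INSERT INTO test_bank VALUES (?,?)",
                [("SENT", None), ("SENT", "2015-06-16 11:21:06")])

# '= NULL' evaluates to UNKNOWN for every row, so nothing is updated.
cur = con.execute("UPDATE test_bank SET status='RECEIVED' WHERE enddatetime = NULL")
updated_with_equals = cur.rowcount

# 'IS NULL' matches the row whose end time is missing.
cur = con.execute("UPDATE test_bank SET status='RECEIVED' WHERE enddatetime IS NULL")
updated_with_is = cur.rowcount
```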
|
You can try this
```
UPDATE TEST_BANK set STATUS = 'RECEIVED', ENDDATETIME = '16-JUN-15 11.21.06.000' WHERE ENDDATETIME is null ;
```
|
Why does Oracle DB not update the fields
|
[
"",
"sql",
"database",
"oracle",
""
] |
I have the following query, it is working fine to show the cricket time played per day. All I need is to show 0 when no cricket is played. At the moment it is skipping those dates. For more reference please see this [link](https://stackoverflow.com/questions/30663600/how-to-sum-the-time-grouped-by-individual-day-in-sql-server).
```
;WITH CTE AS (
SELECT email, last_update, activity, starttime, endtime, duration as [Totaltime] from users
WHERE activity='cricket' and email='abc'
GROUP BY email, activity, duration, starttime, endtime, last_update
)
Select activity, cast(starttime as date) as date,
SUM(datediff(second, starttime, endtime))/60.0 as TimePerDay
from cte
where starttime >= dateadd(day, -15, last_update)
group by activity, cast(starttime as date)
```
current output of the query is
```
activity | date | TimePerDay
cricket | 2015-06-16 | 80.01
cricket | 2015-06-17 | 90.01
cricket | 2015-06-18 | 120.01
cricket | 2015-06-22 | 10.01
```
The desired output is
```
activity | date | TimePerDay
cricket | 2015-06-16 | 80.01
cricket | 2015-06-17 | 90.01
cricket | 2015-06-18 | 120.01
cricket | 2015-06-19 | 0
cricket | 2015-06-20 | 0
cricket | 2015-06-21 | 0
cricket | 2015-06-22 | 10.01
```
|
you could try this also :- (this works only for one particular activity)
```
Set Nocount On;
Declare @MinDate Date
,@MaxDate Date
Declare @test Table
(
activity Varchar(100)
,date Date
,TimePerDay Decimal(5,2)
)
Declare @result Table
(
activity Varchar(100)
,date Date
,TimePerDay Decimal(5,2) Default 0
)
;WITH CTE AS
(
SELECT email
,last_update
,activity
,starttime
,endtime
,duration As Totaltime
From users With (Nolock)
WHERE activity ='cricket'
And email = 'abc'
GROUP BY email
,activity
,duration
,starttime
,endtime
,last_update
)
Insert Into @test(activity,date,TimePerDay)
Select activity
,Cast(starttime as date) As date
,SUM(datediff(second, starttime, endtime))/60.0 As TimePerDay
From cte With (Nolock)
where starttime >= dateadd(day, -15, last_update)
group by activity
,cast(starttime as date)
Select @MinDate = Min(Date)
,@MaxDate = Max(Date)
From @test
;With AllDates As
(
Select @MinDate As xDate
From @test As t1
Where t1.date = @MinDate
Union All
Select Dateadd(Day, 1, xDate)
From AllDates As ad
Where ad.xDate < @MaxDate
)
```
**One way is :-** (left join)
```
Select 'cricket' As activity
,ad.xDate
,Isnull(t.TimePerDay,0) As TimePerDay
From AllDates As ad With (Nolock)
Left Join @test As t On ad.xDate = t.date
```
**another way is :-** (insert with all dates and update)
```
Insert Into @result(activity,date)
Select 'cricket'
,ad.xDate
From AllDates As ad With (Nolock)
Update t
Set t.TimePerDay = t1.TimePerDay
From @result As t
Join @test As t1 On t.date = t1.date
Select *
From @result As r
```
**output**

**Update**
```
Declare @MinDate Date
,@MaxDate Date
Select @MaxDate = Getdate()
,@MinDate = Dateadd(Day, -14, @MaxDate)
;With AllDates As
(
Select @MinDate As xDate
Union All
Select Dateadd(Day, 1, xDate)
From AllDates As ad
Where ad.xDate < @MaxDate
)
Select @activity As activity ---- @activity (your stored procedure parameter)
,ad.xDate
,Isnull(t.TimePerDay,0) As TimePerDay
From AllDates As ad With (Nolock)
Left Join @test As t On ad.xDate = t.date
```
|
**AS PREPARATION,** you need to generate a list of dates. There are multiple ways for that. One is a recursive CTE. For example, this one creates a list of the last 15 days:
```
with datelist ([date]) as
(
select dateadd(dd, -15, cast(getdate() as date)) as [date] -- 15 days back
union all
select dateadd(dd, 1, [date]) from datelist where dateadd(dd, 1, [date]) <= getdate()
)
select * from datelist option (maxrecursion 400)
```
**FOR THE FINAL SOLUTION,** you now need to create a `LEFT JOIN` between the `datelist` and with a subquery with your `user` table.
An overall solution for your question therefore goes as follows.
I have also put everything together in a **SQLFiddle**:
<http://sqlfiddle.com/#!3/36510/1>
```
with datelist ([date]) as
(
select dateadd(dd, -15, cast(getdate() as date)) as [date] -- 15 days back
union all
select dateadd(dd, 1, [date]) from datelist where dateadd(dd, 1, [date]) <= getdate()
)
select 'cricket' as activity,
d.[date],
coalesce(SUM(datediff(second, u.starttime, u.endtime)/60.0), 0)
as TimePerDay
from datelist d
left join
(
select [starttime], [endtime], cast(starttime as date) as [date]
from [users]
where activity='cricket' and email='abc'
) u
on d.[date] = u.[date]
group by d.[date]
option (maxrecursion 400)
```
So this query will give you the table for the cricket activity of the last 15 days of one specific user.
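The recursive-date-list plus `LEFT JOIN` pattern is portable and can be verified with Python's `sqlite3` (SQLite's `date(d, '+1 day')` stands in for `DATEADD`, and the per-day minutes table below is a simplified, made-up stand-in for the aggregated subquery):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE activity_day (date TEXT, minutes REAL)")
con.executemany("INSERT INTO activity_day VALUES (?,?)", [
    ("2015-06-16", 80.01), ("2015-06-17", 90.01),
    ("2015-06-18", 120.01), ("2015-06-22", 10.01),
])

# Generate every date in the range, then LEFT JOIN so missing days become 0.
rows = con.execute("""
    WITH RECURSIVE datelist(d) AS (
        SELECT '2015-06-16'
        UNION ALL
        SELECT date(d, '+1 day') FROM datelist WHERE d < '2015-06-22'
    )
    SELECT dl.d, COALESCE(a.minutes, 0) AS minutes
    FROM datelist dl
    LEFT JOIN activity_day a ON a.date = dl.d
    ORDER BY dl.d
""").fetchall()
```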
|
How to extend the query to add 0 in the cell when no activity is performed
|
[
"",
"sql",
"sql-server",
""
] |
```
SELECT CAST ((SUM(r.SalesVolume)/1000) AS decimal(3,3)) FROM RawData r
```
The above is a part of a query that I am trying to run but returns an error:
```
Lookup Error - SQL Server Database Error: Arithmetic overflow error converting int to data type numeric.
```
Not sure what this means.
The result column looks like(Without dividing by 1000 and casting):
```
Total_Sales_Volume
64146
69814
68259
56318
66585
51158
44365
49855
49553
88998
102739
55713
```
Tried casting as float but doesnt help.
|
Try this:
```
SELECT CAST ((SUM(r.SalesVolume)/1000.0) AS decimal(6,3)) FROM RawData r
```
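The key point is that the division happens before the cast: integer / integer truncates, while dividing by `1000.0` forces a real result. This is easy to reproduce with Python's `sqlite3` (SQLite truncates integer division the same way):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE rawdata (salesvolume INTEGER)")
con.executemany("INSERT INTO rawdata VALUES (?)", [(64146,), (69814,)])

# SUM = 133960: integer division drops the fraction, /1000.0 keeps it.
int_div, real_div = con.execute(
    "SELECT SUM(salesvolume) / 1000, SUM(salesvolume) / 1000.0 FROM rawdata"
).fetchone()
```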
|
The problem is also `decimal(3,3)` --> this means a number with 3 digits in total, all 3 of them behind the decimal point. If you want a number like `1234567.123` you would have to declare it as `decimal(10,3)`.
|
SQL: Unable to CAST a query
|
[
"",
"sql",
"sql-server",
""
] |
My understanding is that a clause is a means of filtering the data set to certain conditions. Therefore `WHERE` being a clause makes total sense to me.
But in some of the tutorials I've watched, I've seen `AND`/`OR` referred to as keywords. It seems to me that they would both be a clause.
Could anyone help me understand this better?
|
Making it clear by an example
```
SELECT col1
,col2
,col3
FROM yourTable1 AS t1
INNER JOIN yourTable2 AS t2 ON t1.Id = t2.Id
WHERE col1 = 'aaaa'
AND col2 = 'bbbb'
ORDER BY col1
```
This is a `SELECT` statement or query
`FROM yourTable1 AS t1 INNER JOIN yourTable2` ==> `FROM` clause
`ON t1.Id = t2.Id` ==> `ON` clause
`WHERE col1 = 'aaaa' AND col2 = 'bbbb'` ==> `WHERE` clause
`ORDER BY col1` ==> `ORDER BY` clause
`SELECT`, `FROM`, `WHERE`, `ON`, `AS`, `AND`, `ORDER BY` are all keywords
|
The words keyword, clause, and statement are used as follows:
A keyword refers to an individual SQL element. For example, `SELECT` and `FROM` are keywords.
A clause is a part of a SQL statement.For example, `SELECT employee_id, last_name, ...` is a clause.
|
What is the difference between a keyword and a clause in SQL?
|
[
"",
"mysql",
"sql",
"sql-server",
""
] |
Here's a simplified example of what I'm trying to do. I have two tables, A and B.
```
A B
----- -----
id id
name a_id
value
```
I want to select only the rows from A where ALL the values of the rows from B match a where clause. Something like:
```
SELECT * from A INNER JOIN B on B.a_id = A.id WHERE B.value > 2
```
The problem with the above query is that if ANY row from B has a value > 2 I'll get the corresponding row from A, and I only want the row from A if
1.) **ALL** the rows in B **for B.a\_id = A.id** match the WHERE, **OR**
2.) There are no rows in B that reference A
B is basically a table of filters.
|
```
SELECT *
FROM a
WHERE NOT EXISTS
(
SELECT NULL
FROM b
        WHERE b.a_id = a.id
AND (b.value <= 2 OR b.value IS NULL)
)
```
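A quick check of the `NOT EXISTS` logic with Python's `sqlite3` (table and column names follow the question's tables A and B; the rows are made up to cover all three cases):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE a (id INTEGER);
    CREATE TABLE b (id INTEGER, a_id INTEGER, value INTEGER);
    INSERT INTO a VALUES (1), (2), (3);
    INSERT INTO b VALUES (1, 1, 5), (2, 1, 6),   -- all values > 2  -> keep a.id = 1
                         (3, 2, 5), (4, 2, 1);   -- one value <= 2  -> drop a.id = 2
    -- a.id = 3 has no rows in b at all          -> keep a.id = 3
""")

# Keep rows of A for which no row in B fails the value > 2 test.
rows = con.execute("""
    SELECT id FROM a
    WHERE NOT EXISTS (SELECT 1 FROM b
                      WHERE b.a_id = a.id
                        AND (b.value <= 2 OR b.value IS NULL))
    ORDER BY id
""").fetchall()
```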
|
Answering this question (which it seems you actually meant to ask):
*Return all rows from `A`, where all rows in `B` with `B.a_id = A.id` also pass the test `B.value > 2`.*
Which is equivalent to:
*Return all rows from `A`, where no row in `B` with `B.a_id = A.id` fails the test `B.value > 2`.*
```
SELECT a.* -- "rows from A" (so don't include other columns)
FROM a
LEFT JOIN b ON b.a_id = a.id
AND (b.value > 2) IS NOT TRUE -- safe inversion of logic
WHERE b.a_id IS NULL;
```
When inverting a `WHERE` condition carefully consider null values. [**`IS NOT TRUE`**](https://www.postgresql.org/docs/current/functions-comparison.html) is the **simple and safe way** to perfectly invert a `WHERE` condition. The alternative would be `(b.value <= 2 OR b.value IS NULL)` which is longer but may be faster (easier to support with index).
* [Select rows which are not present in other table](https://stackoverflow.com/questions/19363481/select-rows-which-are-not-present-in-other-table/19364694#19364694)
|
INNER JOIN where **every** row must match the WHERE clause?
|
[
"",
"sql",
"postgresql",
"left-join",
"exists",
""
] |
Let's say I have table called `tbl` which looks like this:
```
attr1 attr2
A 2
A 1
A 1
B 1
C 1
C 3
```
I need to select only those data from `attr1` that for each of the same data in `attr1` have `1` or `2` in `attr2`.
So the result should be:
```
attr1
A
B
```
`C` should not be in result because in the group `C` one `attr2` has value `3`.
I'm doing this using SQL in MS Access.
|
You can try with `not exists` and a correlated subquery:
```
select distinct attr1
from tbl t
where attr2 in (1, 2)
and not exists ( select 1
from tbl
where attr1 = t.attr1
and attr2 not in(1, 2) )
```
|
You can count the instances of the allowed `attr2` and compare with the size of the group overall. If they are the same, all `attr2` values must have been "allowed":
```
SELECT attr1
FROM
(
SELECT attr1,
COUNT(*) AS group_count,
SUM(IIF(attr2 IN (1, 2), 1, 0)) AS allowed_count
FROM tbl
GROUP BY attr1
) AS allowed
WHERE allowed.group_count = allowed.allowed_count
```
I haven't got Access installed here (Linux!), so I'd be interested to see if I've got the syntax correct.
The subquery returns:
```
attr1 group_count allowed_count
A 3 3
B 1 1
C 2 1
```
And the main query just selects those values where `group_count` and `allowed_count` are equal:
```
attr1
A
B
```
|
SQL select data grouped by attr1 with criteria on attr2
|
[
"",
"sql",
"ms-access",
"select",
""
] |
I am working on a SQL query where I select distinct categories and count how many rows are in each category. I want to achieve a result like:
```
X.category | Amount
--------------------
Hi | 3
--------------------
Hello | 2
--------------------
```
```
SELECT DISTINCT company.category, category.desc2 FROM company, category
```
This query works but lacks amount (count).
|
Okay so I think I have come up with a solution to my own problem.
```
SELECT DISTINCT company.category, count(DISTINCT company.id) as amount, category.name, category.desc2 FROM company, category WHERE company.category = category.name GROUP BY company.category;
```
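As a quick sanity check of the `GROUP BY` + `COUNT` pattern, here is a minimal sketch with Python's built-in sqlite3 (table contents invented for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE company (id INTEGER, category TEXT)")
con.executemany("INSERT INTO company VALUES (?, ?)",
                [(1, "Hi"), (2, "Hi"), (3, "Hi"), (4, "Hello"), (5, "Hello")])

# One row per category with the number of companies in it.
rows = con.execute(
    "SELECT category, COUNT(*) FROM company "
    "GROUP BY category ORDER BY COUNT(*) DESC").fetchall()
print(rows)  # [('Hi', 3), ('Hello', 2)]
```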
|
I think the code you're looking for is the `GROUP BY` clause.
```
SELECT DISTINCT company.category, COUNT(*)
FROM company, category
GROUP BY company.category
```
You can read more about in the [docs](https://dev.mysql.com/doc/refman/5.7/en/group-by-handling.html).
|
SQL query - select distinct category and count
|
[
"",
"mysql",
"sql",
"count",
"distinct",
""
] |
I've got 2 tables. There is a constraint between these 2 tables.
`Table1 : Table2 = 1 : n`
I would like to select the keys of the `Table1` elements which have no entry in `Table2`.
Example:
```
| Table 1 |
|Key |Col1 |Col2 |...|
|0001|.... |.... |...|
|0002|.... |.... |...|
|0003|.... |.... |...|
|0004|.... |.... |...|
| Table 2 |
|Tab1|Col1 |Col2 |...|
|0001|.... |.... |...|
|0001|.... |.... |...|
|0003|.... |.... |...|
|0002|.... |.... |...|
Desired output:
0004
```
The output is the only entry of `table1` that has no child entry in `table2`.
I've tried many different options like:
```
SELECT
Key
FROM Table1
WHERE
( SELECT COUNT(*) FROM Table2
WHERE Table1~Key = Table2~Tab1
GROUP BY Table2~Tab1 )
SELECT COUNT(Table2)
Key
FROM Table1
INNER JOIN Table2
ON Table1~Key = Table2~Tab1.
...
```
But none of them work.
Does anyone have a working query for **ABAP**?
|
```
select * from table1 where key not in (select tab1 from table2)
```
|
Try this query
```
select key from table1 where not EXISTS (SELECT *
FROM table2
WHERE table1.key=table2.tab1);
```
|
select rows with zero child elements (join)
|
[
"",
"sql",
"abap",
""
] |
```
SELECT p.Distributor, SUM(r.SalesVolume) as Sales,
((Cast(SUM(r.SalesVolume) as decimal(14, 4)) / (SELECT SUM(r.SalesVolume) FROM RawData r)) * 100) as MarketSharesVolume
FROM RawData r
INNER JOIN Product p
ON r.ProductId = p.ProductId
WHERE p.Distributor in ('TF1','WARNER')
GROUP BY p.Distributor;
```
The result for the above query is:
```
Distributor Sales MarketSharesVolume
WARNER 2836192 58.131470300744400
TF1 268668 5.506702600797200
```
Basically I want the `MarketSharesVolume` to have a values like `58.1`
I tried changing the precision `decimal(14, 4)` but every other combination gives me an `Arithmetic Overflow` error.
Is there a way I could truncate the column?
|
You can convert data type of the column:
```
SELECT p.Distributor, SUM(r.SalesVolume) as Sales,
CAST( (Cast(SUM(r.SalesVolume) as decimal(14, 4)) / (SELECT SUM(r.SalesVolume) FROM RawData r)) * 100
as decimal(20,1)) as MarketSharesVolume
FROM RawData r
INNER JOIN Product p
ON r.ProductId = p.ProductId
WHERE p.Distributor in ('TF1','WARNER')
GROUP BY p.Distributor;
```
|
[Here is an example](http://sqlfiddle.com/#!3/9eecb7/1345). Using `ROUND()` to get the correct value and `CONVERT()` to get rid of trailing zeros. You can apply this to your query.
```
DECLARE @Val DECIMAL(28, 18) = 58.131470300744400
SELECT CONVERT(decimal(28, 1), ROUND(@Val, 1))
```
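The one-decimal rounding itself can be checked outside the database; for example, in Python with the values from the question:

```python
# Round the market-share figures from the question to one decimal place.
values = [58.131470300744400, 5.506702600797200]
rounded = [round(v, 1) for v in values]
print(rounded)  # [58.1, 5.5]
```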
|
SQL: Reduce the size of result set
|
[
"",
"sql",
"sql-server",
""
] |
I have payments per cnsmr\_accnt\_id going back several months. I wish to see the total payments per month for the last 4 months. Something like this:
```
Cnsmr_accnt_id | Wrkgrp_nm | Apr15_Tot | May15_Tot | Jun15_Tot | Jul15_Tot
12345 |Workgrp1 | 123424 | 1243255 | 232342 | 23232323
12347 |Workgrp4 | 123323 | 1244455 | 324342 | 232323
```
I am trying to use a Pivot to do this but keep getting an error when using the following code. I suspect it's something to do with the variable @Dates scope.
@Dates is giving me the value Apr15,May15,Jun15,Jul15 which is right.
If I substitute this directly into the Select in the place of @Dates, the query works. But the moment I use @Dates, it gives an error. (Incorrect syntax near '@Dates'.)
I have also tried putting the whole "formula" used to Set @Dates into the select statement but this also gives an error. (Incorrect syntax near the keyword 'left')
```
Declare @Dates as nvarchar(max);
--Builds up a string of dates for the last 4 months.
Set @Dates = left(datename(Month, datefromparts(year(dateadd(month, -3, getdate())), month(getdate())-3, 1)), 3) +
right(datename(year, datefromparts(year(dateadd(month, -3, getdate())), month(getdate())-3, 1)), 2) + ',' +
left(datename(Month, datefromparts(year(dateadd(month, -2, getdate())), month(getdate())-2, 1)), 3) +
right(datename(year, datefromparts(year(dateadd(month, -2, getdate())), month(getdate())-2, 1)), 2)+ ',' +
left(datename(Month, datefromparts(year(dateadd(month, -1, getdate())), month(getdate())-1, 1)), 3) +
right(datename(year, datefromparts(year(dateadd(month, -1, getdate())), month(getdate())-1, 1)), 2)+ ',' +
left(datename(Month, getdate()), 3) +
right(datename(year, getdate()), 2);
--Print @dates
--Example output = Apr15,May15,Jun15,Jul15
--Use a pivot to get the totals per cnsmr_accnt_id for the last 4 months.
Select *
from
(
Select
capj.cnsmr_accnt_id,
wrkgrp_nm,
max(UDEFINST_AMT) as Instlmnt,
sum(cnsmr_accnt_pymnt_amnt) as Mnth_Tot,
--Gives the Month & year (Eg Jul15)
left(datename(Month, cnsmr_accnt_pymnt_pstd_dt), 3) + right(datename(year, cnsmr_accnt_pymnt_pstd_dt), 2) as Month_Yr
from
dbo.cnsmr_accnt_pymnt_jrnl capj
inner join
UDEFGENERAL UG
on
capj.cnsmr_accnt_id = UG.cnsmr_accnt_id
Inner join
wrkgrp w
on
capj.wrkgrp_id = w.wrkgrp_id
where
cnsmr_accnt_pymnt_stts_cd in (5)
and cnsmr_accnt_pymnt_pstd_dt between
--Go back to the 1st day 4 months back
datefromparts(year(dateadd(month, -3, getdate())), month(getdate())-3, 1)
and
getdate()
group by capj.cnsmr_accnt_id,
left(datename(Month, cnsmr_accnt_pymnt_pstd_dt), 3) + right(datename(year, cnsmr_accnt_pymnt_pstd_dt), 2),
wrkgrp_nm) as Mnth_Tot_Tbl
pivot
(
sum(Mnth_Tot)
for
Month_Yr in (@Dates)) as Piv
```
|
If you want the last 4 months' worth of data I would suggest you approach this differently: rather than grouping by month and year, group by the number of months since this month. So rather than trying to dynamically pivot, you are just pivoting on static numbers (0, 1, 2, 3):
```
SELECT *
FROM ( SELECT capj.cnsmr_accnt_id,
wrkgrp_nm,
Instlmnt = MAX(udefinst_amt),
Mnth_Tot = SUM(cnsmr_accnt_pymnt_amnt),
MonthsSinceToday = DATEDIFF(MONTH, cnsmr_accnt_pymnt_pstd_dt, GETDATE())
FROM dbo.cnsmr_accnt_pymnt_jrnl AS capj
INNER JOIN Udefgeneral AS ug
ON capj.cnsmr_accnt_id = ug.cnsmr_accnt_id
INNER JOIN wrkgrp AS w
ON capj.wrkgrp_id = w.wrkgrp_id
WHERE cnsmr_accnt_pymnt_stts_cd IN (5)
AND cnsmr_accnt_pymnt_pstd_dt >= DATEADD(MONTH, DATEDIFF(MONTH, '19000401', GETDATE()), '19000101')
AND cnsmr_accnt_pymnt_pstd_dt < GETDATE()
GROUP BY cnsmr_accnt_id, wrkgrp_nm, DATEDIFF(MONTH, cnsmr_accnt_pymnt_pstd_dt, GETDATE())
) AS d
PIVOT
( SUM(Mnth_Tot)
FOR MonthsSinceToday IN ([0], [1], [2], [3])
) AS pvt;
```
The downside of this approach is that your dates won't be column headers, but the upside is you don't need to use dynamic sql, and column names should be fairly easily handled by your presentation layer anyway.
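Note that `DATEDIFF(MONTH, ...)` counts month boundaries crossed, not full 30-day periods. A sketch of the equivalent arithmetic in plain Python (dates invented for the demo):

```python
from datetime import date

def months_between(start, end):
    # Number of month boundaries crossed, like T-SQL's DATEDIFF(MONTH, ...).
    return (end.year - start.year) * 12 + (end.month - start.month)

today = date(2015, 7, 15)
print(months_between(date(2015, 7, 1), today))   # 0 -> current month bucket
print(months_between(date(2015, 6, 30), today))  # 1 -> last month's bucket
print(months_between(date(2015, 4, 1), today))   # 3 -> oldest bucket kept
```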
|
Convert the query into a single dynamic string, then execute it with `EXEC`:
```
EXEC('Select *
from
(
Select
capj.cnsmr_accnt_id,
wrkgrp_nm,
max(UDEFINST_AMT) as Instlmnt,
sum(cnsmr_accnt_pymnt_amnt) as Mnth_Tot,
--Gives the Month & year (Eg Jul15)
left(datename(Month, cnsmr_accnt_pymnt_pstd_dt), 3) + right(datename(year, cnsmr_accnt_pymnt_pstd_dt), 2) as Month_Yr
from
dbo.cnsmr_accnt_pymnt_jrnl capj
inner join
UDEFGENERAL UG
on
capj.cnsmr_accnt_id = UG.cnsmr_accnt_id
Inner join
wrkgrp w
on
capj.wrkgrp_id = w.wrkgrp_id
where
cnsmr_accnt_pymnt_stts_cd in (5)
and cnsmr_accnt_pymnt_pstd_dt between
--Go back to the 1st day 4 months back
datefromparts(year(dateadd(month, -3, getdate())), month(getdate())-3, 1)
and
getdate()
group by capj.cnsmr_accnt_id,
left(datename(Month, cnsmr_accnt_pymnt_pstd_dt), 3) + right(datename(year, cnsmr_accnt_pymnt_pstd_dt), 2),
wrkgrp_nm) as Mnth_Tot_Tbl
pivot
(
sum(Mnth_Tot)
for
Month_Yr in (' + @Dates + ')) as Piv')
```
|
Error when using a declared variable in SQL Query
|
[
"",
"sql",
"ssrs-2008",
""
] |
I have been trying to put together an SQL query that shows one line for each record with the values from another field broken out into their own columns. How would I be able to show multiple columns and a single row for each record?
I have a table with data similar to the following sample:
```
+--------------+------------+---------------+
| Employee_Num | Client_Num | Deduction_Num |
+--------------+------------+---------------+
| 1305 | 1000 | 1 |
| 1305 | 1000 | 30 |
| 1312 | 1000 | 1 |
| 1320 | 1000 | 1 |
| 1320 | 1000 | 30 |
| 1323 | 1000 | 30 |
| 1323 | 1000 | 1 |
+--------------+------------+---------------+
```
I have attempted a union but the results still show multiple records for each employee. Here's what I have tried thus far:
```
SELECT Employee_Num, Client_Num, Deduction_1, Deduction_30
FROM ( SELECT
Employee_Num,
Client_Num,
Deduction_Num AS Deduction_1,
Deduction_Num AS Deduction_30
FROM Employee_Deductions
WHERE client_num = 1000
AND Deduction_Num IN (1)
UNION
SELECT
Employee_Num,
Client_Num,
Deduction_Num AS Deduction_1,
Deduction_Num AS Deduction_30
FROM Employee_Deductions
    WHERE Client_Num = 1000
AND Deduction_Num IN (30)
) AS Datum
WHERE Client_Num = 1000
ORDER BY Employee_Num
```
I would like this to be the desired result:
```
+--------------+------------+-------------+--------------+
| Employee_Num | Client_Num | Deduction_1 | Deduction_30 |
+--------------+------------+-------------+--------------+
| 1305 | 1000 | 1 | 30 |
| 1312 | 1000 | 1 | |
| 1320 | 1000 | 1 | 30 |
| 1323 | 1000 | 1 | 30 |
+--------------+------------+-------------+--------------+
```
Any help would be appreciated.
|
To literally get what you asked for a case statement would work:
```
select Employee_Num
, Client_Num
, max(case when deduction_num = 1 then deduction_num end) as Deduction_1
, max(case when deduction_num = 30 then deduction_num end) as Deduction_30
From Employee_Deductions
Group by Employee_Num
, Client_Num
```
However it sounds like what you want to do is pivot the data as per this [Technet article](https://technet.microsoft.com/en-us/library/ms177410%28v=sql.105%29.aspx?f=255&MSPPError=-2147217396). Hope that helps.
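The conditional-aggregation pivot above translates directly to most databases; here is a runnable sketch with Python's built-in sqlite3, using the data from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Employee_Deductions "
            "(Employee_Num INTEGER, Client_Num INTEGER, Deduction_Num INTEGER)")
con.executemany(
    "INSERT INTO Employee_Deductions VALUES (?, ?, ?)",
    [(1305, 1000, 1), (1305, 1000, 30), (1312, 1000, 1),
     (1320, 1000, 1), (1320, 1000, 30), (1323, 1000, 30), (1323, 1000, 1)])

# One output column per deduction number, one row per employee.
# MAX over an all-NULL group yields NULL, which appears as None.
rows = con.execute("""
    SELECT Employee_Num,
           MAX(CASE WHEN Deduction_Num = 1  THEN Deduction_Num END) AS Deduction_1,
           MAX(CASE WHEN Deduction_Num = 30 THEN Deduction_Num END) AS Deduction_30
    FROM Employee_Deductions
    GROUP BY Employee_Num
    ORDER BY Employee_Num""").fetchall()
print(rows)
# [(1305, 1, 30), (1312, 1, None), (1320, 1, 30), (1323, 1, 30)]
```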
|
You can try:
```
SELECT DISTINCT Employee_Num,
Client_Num,
CASE WHEN Deduction_Num = 1
THEN 1
ELSE ''
END Deduction_1,
CASE WHEN Deduction_Num = 30
THEN 30
ELSE ''
END Deduction_30
FROM Employee_Deductions
WHERE Client_Num = 1000
AND Deduction_Num IN (1,30)
```
|
SQL Select - How to show multiple columns based on field value?
|
[
"",
"sql",
""
] |
**Output to be produced**

I am using this as a reference, but now with a different scenario: [SQL Server query : get the sum conditionally](https://stackoverflow.com/questions/29597690/sql-server-query-get-the-sum-conditionally)
**explanation:**
The Item, Sales, and Remarks columns come from the database; the New Sales column is a computed column that sums the items sharing the same keyword in Remarks, except for the default n/a remarks.
(Even if the remarks are not fully identical, there is at least a common keyword, as in the image above: item 5 has "new" in it, yet it still sums with item 6 because of their shared keyword "small".)
**code used**
**FIRST OPTION - using PARTITION** - This doesn't work because when the remarks are not identical to each other it does not compute the sum properly (for item5 and item6).
```
CASE
WHEN ([remarks] not like '%big%') AND ([remarks] not like '%PAENGS%')
THEN sales
ELSE SUM(sales) OVER(PARTITION BY [remarks])
END as 'New Sales'
```
**SECOND OPTION - using CROSS APPLY** - So that leaves me with this, but I got lost, as it does not produce the desired output.
```
CROSS APPLY
SELECT
d.*,
NewSales =
CASE
WHEN ([remarks] not like '%big%') or ([remarks] not like '%small%')
THEN Sales
ELSE x.NewSales
END
FROM #MSRSuperFinal3 d
CROSS APPLY(SELECT NewSales = SUM(Sales)
FROM #MSRSuperFinal3
WHERE ([remarks] like '%big%') or ([remarks] like '%small%')
)x
```
Any help will be highly appreciated
|
Using `CROSS APPLY`
```
SELECT *
FROM temp t
CROSS APPLY(
SELECT SUM(sales)
FROM temp
WHERE
remarks LIKE '%' + t.remarks + '%'
OR t.remarks LIKE '%' + remarks + '%'
)x(NewSales)
WHERE remarks <> 'n/a'
UNION ALL
SELECT *,
NewSales = sales
FROM temp
WHERE remarks = 'n/a'
ORDER BY item
```
---
Based on your comment, this should be your final query:
```
SELECT *
FROM #MSRSuperFinal3 t
CROSS APPLY(
SELECT
SUM(CurrentMonth)
FROM #MSRSuperFinal3
WHERE
t.misc LIKE '%' + misc + '%'
OR misc LIKE '%' + t.misc + '%'
)x(NewSales)
WHERE
([misc] LIKE '%BIGKAHUNA%')
or ([misc] LIKE '%PAENGS%')
UNION ALL
SELECT *,
NewSales = CurrentMonth
FROM #MSRSuperFinal3
WHERE
([misc] not like '%BIGKAHUNA%')
AND ([misc] not like '%PAENGS%')
AND ([misc] not like '%ROBINSONS%')
ORDER BY location, name
```
|
Try **Left Join** clause instead of Cross Apply:
```
SELECT a.item,
a.sales,
a.remarks,
CASE
WHEN a.remarks = 'n/a'
THEN a.sales
ELSE SUM( b.sales )
END AS [New Sales]
FROM query_using_cross_apply_to_get_sum_conditionally a
LEFT JOIN query_using_cross_apply_to_get_sum_conditionally b
ON(a.remarks LIKE '%' + b.remarks + '%'
AND b.remarks != 'n/a')
GROUP BY a.item, a.sales, a.remarks;
```
|
SQL Query using Cross Apply to get Sum Conditionally
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"cross-apply",
""
] |
```
SELECT Category, SUM (Volume) as Volume
FROM Product
GROUP BY Category;
```
The above query returns this result:
```
Category Volume
-------------------
Oth 2
Tv Kids 4
{null} 1
Humour 3
Tv 5
Theatrical 13
Doc 6
```
I want to combine some of the columns as one colum as follows:
* Oth,{null}, Humour, Doc as **Others**
* Tv Kids, Tv as **TV**
* Theatrical as **Film**
So my result would look like:
```
Category Volume
-------------------
Others 12
Tv 9
Film 13
```
How would I go about this?
|
You need a CASE here, like this:
```
SELECT
CASE
WHEN Category IN ('Oth','Humour','Doc')
OR Category IS NULL THEN 'Others'
WHEN Category IN ('Tv Kids','Tv') THEN 'TV'
WHEN Category = 'Theatrical' THEN 'Film'
END as category ,
SUM (Volume) as Volume
from Product
GROUP BY
CASE
WHEN Category IN ('Oth','Humour','Doc')
OR Category IS NULL THEN 'Others'
WHEN Category IN ('Tv Kids','Tv') THEN 'TV'
WHEN Category = 'Theatrical' THEN 'Film'
END;
```
Null must be dealt with outside the `IN` list as it is a special value.
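That remark is easy to verify: `NULL IN (...)` never evaluates to true, so without the explicit `IS NULL` branch the NULL row silently vanishes. Illustration with Python's built-in sqlite3 (data shortened from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Product (Category TEXT, Volume INTEGER)")
con.executemany("INSERT INTO Product VALUES (?, ?)",
                [("Oth", 2), (None, 1), ("Theatrical", 13)])

# Without the IS NULL branch the NULL row is silently dropped.
wrong = con.execute(
    "SELECT SUM(Volume) FROM Product WHERE Category IN ('Oth')").fetchone()[0]

# With it, the NULL row is counted as 'Others' too.
right = con.execute(
    "SELECT SUM(Volume) FROM Product "
    "WHERE Category IN ('Oth') OR Category IS NULL").fetchone()[0]

print(wrong, right)  # 2 3
```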
|
I think you need to use a `case` statement to group categories together.
```
select case category when 'Tv' then 'Tv'
when 'Film' then 'Film'
else 'Other'
end as Category,
sum(Volume) as Volume
from (
SELECT Category, SUM (Volume) as Volume
FROM Product
GROUP BY Category
) subcategoryTotals
group by Category
```
(I think most DBs will allow you to group by the alias `Category`. (If not you can re-use the case statement)
**Edit:** Just a final thought (or two):
You should consider normalizing your database - for example, the `Category` column should really be a foreign key to a `Categories` table.
Also, this sql is reasonably ok because the `case` statement isn't too long or complex. If you wanted to split things up further it could quickly get to be unmanageable. I'd be inclined to use the idea of categories and subcategories in my database.
|
SQL: Combine result columns
|
[
"",
"sql",
"sql-server",
""
] |
I have two tables
1. Category:
cat\_id, cat\_name
2. Topics:
topic\_id, cat\_id, topic\_name
I want to get a list of all cat\_names along with a count of topics in the Topics table for each particular cat\_id, like:
```
Category Topics
----------------------
ABC 2
CDE 5
```
I would really appreciate some quick help.
Thanks
|
Use `GROUP BY` Clause
```
SELECT COUNT(*) AS 'Topics', cat_id AS 'Category' FROM Category JOIN Topics ON Category.cat_id = Topics.cat_id
GROUP BY cat_id
```
|
Try this query
```
select a.cat_name as category, count(*) as Topics
from category a
join Topics b on a.cat_id=b.cat_id
group by a.cat_name
```
|
Writing an SQL Query with a count
|
[
"",
"mysql",
"sql",
""
] |
I have a strange situation with a field that doesn't allow NULL values. If I insert a row, the field defaults to the empty string, even when mysql claims that the default is NULL.
```
mysql> describe user;
+---------------------------+------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+---------------------------+------------------+------+-----+---------+----------------+
| id | int(30) | NO | PRI | NULL | auto_increment |
| username | varchar(45) | NO | UNI | NULL | |
| city | varchar(45) | NO | | NULL | |
+---------------------------+------------------+------+-----+---------+----------------+
mysql> show triggers;
Empty set (0.00 sec)
mysql> insert into user (username) values ('just_testing');
Query OK, 1 row affected, 17 warnings (0.01 sec)
```
This is the point where I go *WHAT?!* - city should default to NULL which isn't allowed, but look here:
```
mysql> select username, city from user where username = 'just_testing' and city is null;
Empty set (0.00 sec)
mysql> select username, city from user where username = 'just_testing' and city='';
+--------------+------+
| username | city |
+--------------+------+
| just_testing | |
+--------------+------+
1 row in set (0.00 sec)
```
MySQL has decided to use the empty string as the default even though that's not what is declared, and there aren't any triggers.
And further:
```
mysql> insert into user (username, city) values ('just_testing3', NULL);
ERROR 1048 (23000): Column 'city' cannot be null
```
What am I overlooking? How does the city column default to ''?
|
From the [docs](https://dev.mysql.com/doc/refman/5.7/en/insert.html):
> If you are not running in strict SQL mode, any column not explicitly given a value is set to its default (explicit or implicit) value. For example, if you specify a column list that does not name all the columns in the table, unnamed columns are set to their default values. Default value assignment is described in Section 11.6, βData Type Default Valuesβ. See also Section 1.8.3.3, βConstraints on Invalid Dataβ.
and [further](https://dev.mysql.com/doc/refman/5.7/en/data-type-defaults.html):
> For string types other than `ENUM`, the default value is the empty string. For `ENUM`, the default is the first enumeration value.
|
You are making a column that cannot be null have a default value of `NULL`. Alter the column to allow null as a value like this:
```
ALTER TABLE `user`
ALTER COLUMN `city` varchar(45) DEFAULT NULL
```
|
NULL defaults to empty string in mysql?
|
[
"",
"mysql",
"sql",
""
] |
Within my Crystal Report details section I have several values from the field `amount`. I added a simple formula field to my group header to calculate `SUM({Amount})`, which works; however, I only want it to `SUM` the positive values.
There is always a negative version of the positive.
Data
```
10
30
60
-10
-30
-60
```
Current Output with `SUM({Amount})`
```
0
```
Desired Output
```
100
```
Something like but in crystal variant
```
SUM({Amount}) FROM mytable WHERE {Amount} > 0
```
|
What I did was create a new formula field called `ABSAmount`:
```
ABS({AMOUNT})
```
Then another
```
SUM({@ABSamount})/2
```
This gave me the required output.
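The arithmetic behind the two formula fields: since every positive amount has exactly one matching negative, summing absolute values and halving recovers the positive total. A quick check in Python with the question's data:

```python
amounts = [10, 30, 60, -10, -30, -60]

# Sum of absolute values is twice the positive total
# when each positive has exactly one negative counterpart.
positive_total = sum(abs(a) for a in amounts) / 2
print(positive_total)  # 100.0
```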
|
Another option would be a running total that sums the {Table.amount} and evaluates on a formula. `{Table.amount} > 0`
Reset it on the group if your report is grouped.
|
Crystal Reports Formula fields SUM only positive values using WHERE
|
[
"",
"sql",
"crystal-reports",
"crystal-reports-xi",
""
] |
Table1
```
ID Name Tags
----------------------------------
1 Customer1 Tag1,Tag5,Tag4
2 Customer2 Tag2,Tag6,Tag4,Tag11
3 Customer5 Tag6,Tag5,Tag10
```
and Table2
```
ID Name Tags
----------------------------------
1 Product1 Tag1,Tag10,Tag6
2 Product2 Tag2,Tag1,Tag5
3 Product5 Tag1,Tag2,Tag3
```
What is the best way to join Table1 and Table2 on the Tags column?
The join should match each comma-separated tag in Table1's Tags column against the comma-separated tags in Table2's Tags column.
Note: Tables are not full-text indexed.

|
I developed a solution as follows:
```
CREATE TABLE [dbo].[Table1](
Id int not null,
Name nvarchar(250) not null,
Tag nvarchar(250) null,
) ON [PRIMARY]
GO
CREATE TABLE [dbo].[Table2](
Id int not null,
Name nvarchar(250) not null,
Tag nvarchar(250) null,
) ON [PRIMARY]
GO
```
Get sample data for Table1; it will insert **28000** records:
```
INSERT INTO Table1
SELECT CustomerID,CompanyName, (FirstName + ',' + LastName)
FROM AdventureWorks.SalesLT.Customer
GO 3
```
Sample data for Table2; I need the same tags in Table2:
```
declare @tag1 nvarchar(50) = 'Donna,Carreras'
declare @tag2 nvarchar(50) = 'Johnny,Caprio'
```
Get sample data for Table2; it will insert **9735** records:
```
INSERT INTO Table2
SELECT ProductID,Name, (case when(right(ProductID,1)>=5) then @tag1 else @tag2 end)
FROM AdventureWorks.SalesLT.Product
GO 3
```
# My Solution
```
create TABLE #dt (
Id int IDENTITY(1,1) PRIMARY KEY,
Tag nvarchar(250) NOT NULL
);
```
I've created a temp table and will fill it with the **distinct** tags from Table1:
```
insert into #dt(Tag)
SELECT distinct Tag
FROM Table1
```
Now I need a vertical (one tag per row) table for the tags:
```
create TABLE #Tags ( Tag nvarchar(250) NOT NULL );
```
Now I fill the #Tags table with a **WHILE** loop; you could use a cursor, but WHILE is faster:
```
declare @Rows int = 1
declare @Tag nvarchar(1024)
declare @Id int = 0
WHILE @Rows>0
BEGIN
Select Top 1 @Tag=Tag,@Id=Id from #dt where Id>@Id
set @Rows =@@RowCount
if @Rows>0
begin
insert into #Tags(Tag) SELECT Data FROM dbo.StringToTable(@Tag, ',')
end
END
```
Last step: join Table2 with #Tags:
```
select distinct t.*
from Table2 t
inner join #Tags on (',' + t.Tag + ',') like ('%,' + #Tags.Tag + ',%')
```
Table1 rowcount = 28000, Table2 rowcount = 9735; the select takes less than 2 seconds.
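The key trick in the final join is wrapping both sides in commas so that `Tag1` does not accidentally match `Tag11`. A hedged sketch with Python's built-in sqlite3 (which uses `||` instead of `+` for concatenation; data invented for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t2 (name TEXT, tags TEXT)")
con.executemany("INSERT INTO t2 VALUES (?, ?)",
                [("Product1", "Tag1,Tag10,Tag6"), ("Product2", "Tag2,Tag11")])

# Naive LIKE: 'Tag1' wrongly matches the substring inside 'Tag11'.
naive = con.execute(
    "SELECT name FROM t2 WHERE tags LIKE '%' || 'Tag1' || '%'").fetchall()

# Comma-delimited LIKE: only whole tags match.
exact = con.execute(
    "SELECT name FROM t2 "
    "WHERE ',' || tags || ',' LIKE '%,' || 'Tag1' || ',%'").fetchall()

print(naive)  # [('Product1',), ('Product2',)]
print(exact)  # [('Product1',)]
```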
|
The best way is not to have comma separated values in a column. Just use normalized data and you won't have trouble with querying like this - each column is supposed to only have one value.
Without this, there's no way to use any indices, really. Even a full-text index behaves quite differently from what you might think, and they are inherently clunky to use - they're designed for searching *text*, not structured data. In the end, you will not get much better than something like
```
where (Col like 'txt,%' or Col like '%,txt' or Col like '%,txt,%')
```
Using a `xml` column might be another alternative, though it's still quite a bit silly. It would allow you to treat the values as a collection at least, though.
|
What is the best way to join between two table which have coma seperated columns
|
[
"",
"sql",
"sql-server",
""
] |
I have two tables like this
Table 1
```
+-----------+----------+-------+-------------------+
| AGENCY_ID | LOCAL_ID | CLASS | LRS_ID |
+-----------+----------+-------+-------------------+
| 651 | 1 | NULL | 10200000690000001 |
| 651 | 2 | NULL | 10200000690205B01 |
| 651 | 3 | NULL | 10200000690293C01 |
| 651 | 4 | NULL | 10200000690293D01 |
+-----------+----------+-------+-------------------+
```
Table 2
```
+-----------+----------+-------+-------------------+
| AGENCY_ID | LOCAL_ID | CLASS | LRS_ID |
+-----------+----------+-------+-------------------+
| 651 | NULL | 1 | 10200000690000001 |
| 651 | NULL | NULL | 10200000690000091 |
| 651 | NULL | 7 | 10200000690205B01 |
| 651 | NULL | 1 | 10200000690293C01 |
+-----------+----------+-------+-------------------+
```
And I want the result to be
```
+-----------+----------+-------+-------------------+
| AGENCY_ID | LOCAL_ID | CLASS | LRS_ID |
+-----------+----------+-------+-------------------+
| 651 | 1 | 1 | 10200000690000001 |
| 651 | 2 | 7 | 10200000690205B01 |
| 651 | 3 | 1 | 10200000690293C01 |
| 651 | 4 | NULL | 10200000690293D01 |
+-----------+----------+-------+-------------------+
```
Taking table 2 and merging the non-NULL values into table 1. Here I only specify the `CLASS` field, but there are 50+ fields that are always NULL in Table 1 and may or may not be NULL in table 2. Specifying manually which fields I want is the problem; I want it to automatically replace a value if it's NULL and table 2 has it.
Key things to note: LRS\_ID is the JOIN key. LRS\_ID values that exist in table 2 but not table 1 don't appear in the output. LRS\_ID values that exist in table 1 but not table 2 remain, but CLASS stays NULL.
|
You could use `left join` to get your expected output like this:
```
select
t1.AGENCY_ID,
ISNULL(t1.LOCAL_ID,t2.LOCAL_ID)LOCAL_ID,
ISNULL(t1.CLASS,t2.CLASS)CLASS,
t1.LRS_ID
from table1 t1
left join table2 t2 on
t1.LRS_ID = t2.LRS_ID
```
[**SQL Fiddle Demo**](http://www.sqlfiddle.com/#!3/99591/5/0)
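The same pattern in portable SQL uses `COALESCE` instead of the T-SQL-specific `ISNULL`; here is a small runnable sketch with Python's built-in sqlite3 (simplified column names, data invented for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (lrs_id TEXT, local_id INTEGER, class INTEGER)")
con.execute("CREATE TABLE t2 (lrs_id TEXT, local_id INTEGER, class INTEGER)")
con.executemany("INSERT INTO t1 VALUES (?, ?, ?)",
                [("A1", 1, None), ("A2", 2, None)])
con.executemany("INSERT INTO t2 VALUES (?, ?, ?)",
                [("A1", None, 7)])

# t1 drives the result; t2 fills in the NULL columns where it can.
rows = con.execute("""
    SELECT t1.lrs_id,
           COALESCE(t1.local_id, t2.local_id),
           COALESCE(t1.class,    t2.class)
    FROM t1 LEFT JOIN t2 ON t1.lrs_id = t2.lrs_id
    ORDER BY t1.lrs_id""").fetchall()
print(rows)  # [('A1', 1, 7), ('A2', 2, None)]
```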
|
Just join the tables and find the first value you want.
Select:
```
select
t1.AGENCY_ID,
ISNULL(t1.LOCAL_ID,t2.LOCAL_ID)LOCAL_ID,
ISNULL(t1.CLASS,t2.CLASS)CLASS,
t1.LRS_ID
from table1 t1
inner join table2 t2 on
t1.LRS_ID = t2.LRS_ID
```
Reverse ISNULL to give priority to t2.
To update:
```
update t
set LOCAL_ID = ISNULL(t.LOCAL_ID,t2.LOCAL_ID),
    CLASS = ISNULL(t.CLASS,t2.CLASS)
from table1 t
inner join table2 t2 on
    t.LRS_ID = t2.LRS_ID
```
|
Merge tables where NULL
|
[
"",
"sql",
"sql-server",
"join",
"merge",
""
] |
How do I find rows that contain the `'%'` symbol in a table column?
For example, I have a column containing data such as `'amylase 50% of dose'`.
How do I select and filter all the rows containing the `%` symbol in a given column?
|
try this query
```
select * from tablename where columnname like '%[%]%'
```
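Note that the `[%]` character-class syntax is specific to SQL Server's `LIKE`; the portable alternative is an `ESCAPE` clause. Illustration with Python's built-in sqlite3 (data from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE doses (descr TEXT)")
con.executemany("INSERT INTO doses VALUES (?)",
                [("amylase 50% of dose",), ("plain dose",)])

# Escape the wildcard so '%' is matched literally.
rows = con.execute(
    r"SELECT descr FROM doses WHERE descr LIKE '%\%%' ESCAPE '\'").fetchall()
print(rows)  # [('amylase 50% of dose',)]
```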
|
You can use `WHERE Id LIKE '%[%]%'`. An example will be something like this:
```
CREATE TABLE #Test
(
Id NVARCHAR(100)
)
INSERT INTO #Test VALUES ('100%'), ('1000')
SELECT * FROM #Test
WHERE Id LIKE '%[%]%'
DROP TABLE #Test
```
OUTPUT:
```
Id
100%
```
|
Finding Rows containing % symbol in a table
|
[
"",
"sql",
"sql-server",
""
] |
I am stuck in a situation where I need to insert data into a BLOB column by reading a file from the filesystem in DB2 (DB2 Express-C on Windows 7).
Somewhere on the internet I found `INSERT INTO ... VALUES ( ..., readfile('filename'), ...);`, but `readfile` is not a built-in function - I would need to create it as a UDF (using C language libraries), which is not a practical solution for me.
Can somebody explain how to insert BLOB values using the INSERT command?
|
1) You could use [LOAD](http://www-01.ibm.com/support/knowledgecenter/SSEPGG_9.7.0/com.ibm.db2.luw.sql.rtn.doc/doc/r0023577.html?cp=SSEPGG_9.7.0%2F3-6-1-3-1-0-12) or [IMPORT](http://www-01.ibm.com/support/knowledgecenter/SSEPGG_9.7.0/com.ibm.db2.luw.sql.rtn.doc/doc/r0023575.html?cp=SSEPGG_9.7.0%2F3-6-1-3-1-0-10) via [ADMIN\_CMD](http://www-01.ibm.com/support/knowledgecenter/SSEPGG_9.7.0/com.ibm.db2.luw.sql.rtn.doc/doc/r0012547.html?cp=SSEPGG_9.7.0%2F3-6-1-3-1-0). In this way, you can use SQL to call the administrative stored procedure that will call the tool. Import or Load can read files and then put that in a row.
You can also wrap this process by using a temporary table that will read the binary data from the file, insert it in the temporary table, and then return it from the table.
2) You can create an external stored procedure or UDF implemented in Java or C, that will read the data, and then insert that in the row.
|
Also, you can insert blob values casting characters to its corresponding hex values:
```
CREATE TABLE BLOB_TEST (COL1 BLOB(50));
INSERT INTO BLOB_TEST VALUES (CAST('test' AS BLOB));
SELECT COL1 FROM BLOB_TEST;
DROP TABLE BLOB_TEST;
```
And this gives this result:
```
COL1
-------------------------------------------------------------------------------------------------------
x'74657374'
1 record(s) selected.
```
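The `x'74657374'` shown in the result is simply the ASCII bytes of `'test'` in hexadecimal, which is easy to confirm:

```python
# Each character of 'test' as a hex byte pair, matching x'74657374'.
hex_blob = "test".encode("ascii").hex()
print(hex_blob)  # 74657374
```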
|
Inserting BLOB data in DB2 using SQL Query
|
[
"",
"sql",
"db2",
"blob",
"sql-insert",
""
] |
I have two tables, Projects and Plans, and I need to get the recent projects from the last plan by create date:
```
Projects:
ProjectId, PlanId, StartDate, EndDate
(guid) (guid) (datetime) (datetime)
-------------------------------------
00001, 00001, 1/1/2015 31/1/2015
00001, 00002, 3/2/2015 15/2/2015
00002, 00001, 1/2/2015 20/2/2015
00002, 00002, 1/2/2015 21/2/2015
00003, 00001, 1/3/2015 10/3/2015
Plans:
PlanId, CreateDate
(guid) (datetime)
--------------------
00001, 1/1/2015
00002, 5/2/2015
```
I wrote a query that takes a single project from the last plan, but I can't write a query that gets many projects in a single query.
Here my query:
```
SELECT TOP 1 pr.ProjectId,
pl.CreateDate,
pr.StartDate,
pr.EndDate
FROM Projects pr
INNER JOIN Plans pl
ON pr.PlanId = pl.PlanId
WHERE ProjectId = '000002'
ORDER BY pl.CreateDate DESC
```
Desired result is (all projects from the last plans):
```
ProjectId, PlanId, StartDate, EndDate
--------------------------------------
00001, 00002, 3/2/2015, 15/2/2015
00002, 00002, 1/2/2015, 21/2/2015
00003, 00001, 1/3/2015, 10/3/2015
```
**UPDATE:**
Gordon Linoff gave a good answer, but it didn't solve my question, because neither of his queries returns project '00003' (its last plan is '00001').
I wrote my query with the '[OVER clause](https://msdn.microsoft.com/library/ms189461.aspx)' (see Stanislovas Kalašnikovas's note about it).
So I post full answer that solves my question for future googlers:
```
SELECT * FROM
(SELECT
result.ProjectId,
result.CreateDate,
result.StartDate,
result.EndDate,
ROW_NUMBER() OVER (PARTITION BY ProjectId ORDER BY CreateDate DESC) AS RowNumber
FROM (
SELECT pr.ProjectId AS ProjectId,
pl.CreateDate AS CreateDate,
pr.StartDate AS StartDate,
pr.EndDate AS EndDate
FROM Projects pr
INNER JOIN Plans pl ON pr.PlanId = pl.PlanId
--WHERE ProjectId IN ('000001', '000003') --Filter
) AS result
) AS result
WHERE result.RowNumber = 1
```
|
This is an example of `ROW_NUMBER` with one table; you can easily adapt it to your case.
```
CREATE TABLE #Test
(
Id NVARCHAR(100),
Data DATE
)
INSERT INTO #Test VALUES ('1', '2015-01-04'), ('1', '2015-01-07'), ('2', '2015-01-05'), ('2', '2015-01-08')
SELECT Id, Data
FROM (
SELECT Id, Data, ROW_NUMBER() OVER (PARTITION BY Id ORDER BY Data DESC) rn
FROM #Test
)x
WHERE rn > 1
DROP TABLE #Test
```
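The `ROW_NUMBER() ... PARTITION BY` idiom picks one row per partition according to the `ORDER BY`. The same "latest row per group" selection sketched in plain Python, using the question's project/plan data:

```python
from itertools import groupby
from operator import itemgetter

# (project_id, plan_id, create_date) -- plan 00002 is newer than 00001.
rows = [("00001", "00001", "2015-01-01"), ("00001", "00002", "2015-02-05"),
        ("00002", "00001", "2015-01-01"), ("00002", "00002", "2015-02-05"),
        ("00003", "00001", "2015-01-01")]

latest = []
for project, grp in groupby(sorted(rows), key=itemgetter(0)):
    # Equivalent of ORDER BY create_date DESC ... rn = 1 within the partition.
    latest.append(max(grp, key=itemgetter(2)))

print(latest)
# [('00001', '00002', '2015-02-05'),
#  ('00002', '00002', '2015-02-05'),
#  ('00003', '00001', '2015-01-01')]
```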
|
You can use a subquery to get the most recent plan. Then just join this to projects:
```
SELECT pr.ProjectId, pl.CreateDate, pr.StartDate, pr.EndDate
FROM (SELECT TOP 1 pl.*
FROM plans pl
ORDER BY pl.CreateDate DESC
) pl JOIN
Projects pr
     ON pr.PlanId = pl.PlanId
WHERE ProjectId = '000002'
```
An alternative method is to just use `TOP WITH TIES`:
```
SELECT TOP 1 WITH TIES pr.ProjectId, pl.CreateDate, pr.StartDate, pr.EndDate
FROM plans pl JOIN
     Projects pr
     ON pr.PlanId = pl.PlanId
WHERE ProjectId = '000002'
ORDER BY pl.CreateDate DESC
```
|
How to Select the most recent items by dates from another table
|
[
"",
"sql",
"sql-server-2008",
""
] |
I have this query
```
select adate, factoryid, purchid, itemname, max(price) as price
from tableb where catnum = 9
group by adate, factoryid, purchid, itemname
order by adate, factoryid, purchid, itemname
```
But I want the id for that row. So in a perfect world:
```
select id, adate, factoryid, purchid, itemname, max(price) as price
from tableb where catnum = 9
group by adate, factoryid, purchid, itemname
order by adate, factoryid, purchid, itemname
```
But I know that won't work.
So I tried this:
```
select id, adate, factoryid, purchid, itemname,
max(price) over(partition by adate, factoryid, purchid, itemname) as price
from tableb where catnum = 9
```
That doesn't work. The price is duplicated for all the ids. And the query result set goes from 4000 rows to 11000.
So obviously, I got the window function wrong somehow. First, what did I do wrong, and second, how do I fix it?
|
```
SELECT *
FROM (
SELECT *,
ROW_NUMBER() OVER (PARTITION BY adate, factoryid, purchid, itemname ORDER BY price DESC, id DESC) rn
FROM tableb
WHERE catnum = 9
) q
WHERE rn = 1
```
|
You can use windowed version of `MAX` along with [**FIRST\_VALUE**](https://msdn.microsoft.com/en-us/library/hh213018%28v=sql.110%29.aspx), available from SQL Server 2012+:
```
SELECT DISTINCT adate, factoryid, purchid, itemname,
MAX(price) OVER (PARTITION BY adate, factoryid, purchid, itemname) AS price,
FIRST_VALUE(id) OVER (PARTITION BY adate, factoryid, purchid, itemname
ORDER BY price DESC) AS id
FROM tableb
WHERE catnum = 9
ORDER BY adate, factoryid, purchid, itemname
```
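As a rough illustration of the `FIRST_VALUE` plus windowed `MAX` idea, here is a SQLite sketch (SQLite also implements `first_value`); a single `itemname` column stands in for the four-column `PARTITION BY`:

```python
import sqlite3

def max_price_with_id(rows):
    # For each itemname, return the max price and the id of the row holding it.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE tableb (id INTEGER, itemname TEXT, price REAL)")
    con.executemany("INSERT INTO tableb VALUES (?, ?, ?)", rows)
    cur = con.execute("""
        SELECT DISTINCT itemname,
               MAX(price)      OVER (PARTITION BY itemname) AS price,
               FIRST_VALUE(id) OVER (PARTITION BY itemname
                                     ORDER BY price DESC) AS id
        FROM tableb
        ORDER BY itemname
    """)
    out = cur.fetchall()
    con.close()
    return out

rows = [(1, "bolt", 2.0), (2, "bolt", 5.0), (3, "nut", 1.0)]
print(max_price_with_id(rows))  # [('bolt', 5.0, 2), ('nut', 1.0, 3)]
```

`DISTINCT` collapses the duplicated per-row results into one row per group, mirroring the answer above.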
|
How do I use window function to get max price and its id?
|
[
"",
"sql",
"t-sql",
"sql-server-2012",
"aggregate-functions",
"window-functions",
""
] |
I have three tables: let's call them CUSTOMER, LOG and REVIEW.
The CUSTOMER table is:
```
id name
== ====
1 John
2 Jane
3 Mike
```
The LOG table is
```
id customer_id created_at
== =========== ==========
1 1 2015-06-10
2 1 2015-06-10
3 2 2015-06-11
4 1 2015-06-13
5 2 2015-06-15
6 1 2015-06-15
```
The REVIEW table is
```
id customer_id created_at
== =========== ==========
1 1 2015-06-10
2 2 2015-06-10
3 2 2015-06-11
4 1 2015-06-13
5 1 2015-06-15
6 1 2015-06-15
7 1 2015-06-18
```
What I wanted
```
CUSTOMER_ID NAME LOG_QTY REVIEW_QTY
=========== ==== ======= ==========
1 John 4 5
2 Jane 2 2
3 Mike 0 0
```
What I got:
```
CUSTOMER_ID NAME LOG_QTY REVIEW_QTY
=========== ==== ======= ==========
1 John 20 20
2 Jane 4 4
3 Mike 0 0
```
My query:
```
select CUSTOMER.ID, CUSTOMER.NAME,
count(REVIEW.CUSTOMER_ID) as REVIEW_QTY,
count(LOG.CUSTOMER_ID) as LOG_QTY
from CUSTOMER
left join REVIEW
on REVIEW.CUSTOMER_ID = CUSTOMER.ID
left join LOG
on LOG.CUSTOMER_ID = CUSTOMER.ID
group by CUSTOMER.ID
order by CUSTOMER.ID
```
|
If you run your query without the `COUNT()` and `GROUP BY`, you'll see what's happening:
```
select CUSTOMER.ID, CUSTOMER.NAME,
REVIEW.CUSTOMER_ID as REVIEW_QTY,
LOG.CUSTOMER_ID as LOG_QTY
from CUSTOMER
left join REVIEW on REVIEW.CUSTOMER_ID = CUSTOMER.ID
left join LOG on LOG.CUSTOMER_ID = CUSTOMER.ID
order by CUSTOMER.ID
```
This returns a row for each possible combination of rows with the same `CUSTOMER_ID` from the three tables (that's just what a `JOIN` does). Then `COUNT` simply counts them!
This should give you what you need:
```
select CUSTOMER.ID, CUSTOMER.NAME,
(select count(*) from REVIEW where CUSTOMER_ID = CUSTOMER.ID) as REVIEW_QTY,
(select count(*) from LOG where CUSTOMER_ID = CUSTOMER.ID) as LOG_QTY
from CUSTOMER
order by CUSTOMER.ID
```
|
What your query is doing is joining the reviews and logs on the customer, with no join condition between the two. This means you are creating a Cartesian product of each log with each review of a given customer (e.g. the 4 logs you expect for John multiplied by his 5 reviews explains the 20 you are getting).
One way to solve this is to perform a `group by` on the logs and on the reviews separately, in subqueries:
```
SELECT c.id, c.name, review_qty, log_qty
FROM customer c
LEFT JOIN (SELECT customer_id, COUNT(*) AS review_qty
FROM review
GROUP BY customer_id) r ON r.customer_id = c.id
LEFT JOIN (SELECT customer_id, COUNT(*) AS log_qty
FROM log
GROUP BY customer_id) l ON l.customer_id = c.id
ORDER BY c.id
```
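A minimal runnable sketch of the pre-aggregated `LEFT JOIN` approach, using SQLite from Python. Table and column names follow the question; `COALESCE` turns the `NULL`s for customers with no rows into 0:

```python
import sqlite3

def per_customer_counts(customers, log_ids, review_ids):
    # Aggregate logs and reviews separately, then LEFT JOIN the counts,
    # avoiding the log x review Cartesian blow-up described above.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE customer (id INTEGER, name TEXT)")
    con.execute("CREATE TABLE log (customer_id INTEGER)")
    con.execute("CREATE TABLE review (customer_id INTEGER)")
    con.executemany("INSERT INTO customer VALUES (?, ?)", customers)
    con.executemany("INSERT INTO log VALUES (?)", [(x,) for x in log_ids])
    con.executemany("INSERT INTO review VALUES (?)", [(x,) for x in review_ids])
    cur = con.execute("""
        SELECT c.id, c.name,
               COALESCE(l.log_qty, 0), COALESCE(r.review_qty, 0)
        FROM customer c
        LEFT JOIN (SELECT customer_id, COUNT(*) AS log_qty
                   FROM log GROUP BY customer_id) l ON l.customer_id = c.id
        LEFT JOIN (SELECT customer_id, COUNT(*) AS review_qty
                   FROM review GROUP BY customer_id) r ON r.customer_id = c.id
        ORDER BY c.id
    """)
    out = cur.fetchall()
    con.close()
    return out

data = per_customer_counts(
    [(1, "John"), (2, "Jane"), (3, "Mike")],
    [1, 1, 2, 1, 2, 1],        # customer_ids of log rows
    [1, 2, 2, 1, 1, 1, 1])     # customer_ids of review rows
print(data)  # [(1, 'John', 4, 5), (2, 'Jane', 2, 2), (3, 'Mike', 0, 0)]
```

The result matches the "What I wanted" table from the question, including the zero counts for Mike.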
|
MySQL Join three tables return multiple results
|
[
"",
"mysql",
"sql",
"select",
"join",
"group-by",
""
] |
I have a following MySQL query:
```
SELECT p.*, MATCH (p.description) AGAINST ('random text that you can use in sample web pages or typography samples') AS score
FROM posts p
WHERE p.post_id <> 23
AND MATCH (p.description) AGAINST ('random text that you can use in sample web pages or typography samples') > 0
ORDER BY score DESC LIMIT 1
```
With 108,000 rows, it takes **~200ms**. With 265,000 rows, it takes **~500ms**.
Under performance testing(~80 concurrent users) it shows **~18sec** average latency.
Is any way to improve performance for this query ?
**EXPLAIN OUTPUT:**

**UPDATED**
We have added one new mirror MyISAM table with `post_id`, `description` and synchronized it with `posts` table via triggers. Now, fulltext search on this new MyISAM table works **~400ms**(with the same performance load where InnoDB shows **~18sec**.. this is a huge performance boost) Look like MyISAM is much more quicker for fulltext in MySQL than InnoDB. Could you please explain it ?
**MySQL profiler results:**
Tested on **AWS RDS db.t2.small** instance
**Original InnoDB `posts` table:**

**MyISAM mirror table with post\_id, description only:**

|
[Here](https://blogs.oracle.com/mysqlinnodb/entry/performance_enhancement_in_full_text) are a few tips on what to look for in order to maximise the speed of such queries with InnoDB:
> 1. Avoid redundant sorting. Since InnoDB already sorted the result according to ranking. MySQL Query Processing layer does not need to
> sort to get top matching results.
> 2. Avoid row by row fetching to get the matching count. InnoDB provides all the matching records. All those not in the result list
> should all have ranking of 0, and no need to be retrieved. And InnoDB
> has a count of total matching records on hand. No need to recount.
> 3. Covered index scan. InnoDB results always contains the matching records' Document ID and their ranking. So if only the Document ID and
> ranking is needed, there is no need to go to user table to fetch the
> record itself.
> 4. Narrow the search result early, reduce the user table access. If the user wants to get top N matching records, we do not need to fetch
> all matching records from user table. We should be able to first
> select TOP N matching DOC IDs, and then only fetch corresponding
> records with these Doc IDs.
I don't think you can get much faster by looking only at the query itself; maybe try removing the `ORDER BY` part to avoid unnecessary sorting. To dig deeper into this, try profiling the query using [MySQL's built-in profiler](http://www.bjoerns-choice.de/archives/501?lang=en).
Other than that, you might look into the configuration of your MySQL server. Have a look at [this chapter of the MySQL manual](https://dev.mysql.com/doc/refman/5.7/en/fulltext-fine-tuning.html); it contains some good information on how to tune the fulltext index to your needs.
If you've already maximized the capabilities of your MySQL server configuration, then consider looking at the hardware itself - sometimes even a low-cost solution like moving the tables to another, faster hard drive can work wonders.
|
My best guess for the performance hit is the number of rows being returned by the query. To test this, simply remove the `order by score` and see if that improves the performance.
If it does not, then the issue is the full text index. If it does, then the issue is the `order by`. If so, the problem becomes a bit more difficult. Some ideas:
* Determine a hardware solution to speed up the sorts (getting the intermediate files to be in memory).
* Modifying the query so it returns fewer values. This might involve changing the stop-word list, changing the query to boolean mode, or other ideas.
* Finding another way of pre-filtering the results.
|
Improve performance on MySQL fulltext search query
|
[
"",
"mysql",
"sql",
"full-text-search",
""
] |
I've created my database like this:

I want to select the person and movie names where the same person acts as both "Oyuncu" and "Senarist".
Please help me. I wrote a query like this:
```
select
MovieName,PersonName,RoleName
from
Movies,
MoviesPersonRole,
PersonRole,
Person,
Role
where
Movies.Id = MoviesPersonRole.MovieId
and
PersonRole.Id = MoviesPersonRole.PersonRoleId
and
PersonRole.RoleId = Role.Id
and
PersonRole.PersonId = Person.Id
```
and the results are
```
MovieName PersonName RoleName
Pulp Fiction Mehmet Oyuncu
Pulp Fiction Mehmet Senarist
Matrix Aylin Oyuncu
LOTR Gökberk Oyuncu
LOTR Gökberk Senarist
Pulp Fiction Aylin Oyuncu
```
|
You can try something like this:
```
SELECT X.MovieName, X.PersonName FROM (
SELECT M.MovieName, PE.PersonName, COUNT(*) AS RoleCount
FROM Movies M
INNER JOIN MoviesPersonRole MPR ON (MPR.MovieId=M.Id)
INNER JOIN PersonRole PR ON (PR.Id = MPR.PersonRoleId)
INNER JOIN Person PE ON (PE.Id = PR.PersonId)
INNER JOIN Role R ON (R.Id = PR.RoleId)
WHERE R.RoleName='Oyuncu' OR R.RoleName='Senarist'
GROUP BY M.MovieName, PE.PersonName
) X WHERE X.RoleCount>1
```
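The group-and-count idea can be sketched on a flattened `(movie, person, role)` view; note this sketch uses `COUNT(DISTINCT role) = 2` rather than `COUNT(*) > 1`, which also guards against duplicate role rows:

```python
import sqlite3

def acted_and_wrote(rows):
    # Keep (movie, person) pairs that appear with both target roles.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE mpr (movie TEXT, person TEXT, role TEXT)")
    con.executemany("INSERT INTO mpr VALUES (?, ?, ?)", rows)
    cur = con.execute("""
        SELECT movie, person
        FROM mpr
        WHERE role IN ('Oyuncu', 'Senarist')
        GROUP BY movie, person
        HAVING COUNT(DISTINCT role) = 2
        ORDER BY movie
    """)
    out = cur.fetchall()
    con.close()
    return out

rows = [("Pulp Fiction", "Mehmet", "Oyuncu"),
        ("Pulp Fiction", "Mehmet", "Senarist"),
        ("Matrix", "Aylin", "Oyuncu"),
        ("LOTR", "Gökberk", "Oyuncu"),
        ("LOTR", "Gökberk", "Senarist")]
print(acted_and_wrote(rows))
```

The flattened table here stands in for the five-way join in the answers above; the filtering logic is the same.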
|
A second pass through PersonRole (aliased twice) should do it, since a single PersonRole row can only carry one role:
```
SELECT  MovieName ,
        PersonName
FROM    Movies
        JOIN MoviesPersonRole MPR1 ON Movies.Id = MPR1.MovieId
        JOIN PersonRole PR1 ON PR1.Id = MPR1.PersonRoleId
        JOIN Person ON PR1.PersonId = Person.Id
        JOIN Role Role1 ON PR1.RoleId = Role1.Id
        JOIN PersonRole PR2 ON PR2.PersonId = PR1.PersonId
        JOIN MoviesPersonRole MPR2 ON MPR2.PersonRoleId = PR2.Id
                                  AND MPR2.MovieId = Movies.Id
        JOIN Role Role2 ON PR2.RoleId = Role2.Id
WHERE   Role1.RoleName = 'Oyuncu'
        AND Role2.RoleName = 'Senarist'
```
|
complex sql query many to many
|
[
"",
"sql",
"sql-server",
"t-sql",
"many-to-many",
""
] |
I have a SQL query in SQL Server where I am trying to build a large query with `UNION` across a large number of databases. However, the query keeps getting truncated. According to my research this shouldn't happen if all the `varchar` values are cast to `varchar(MAX)`. I tried doing this, but it still gets truncated. The final query should end up in the `@finalQuery` variable. Can anyone help with the query below?
```
DECLARE @name VARCHAR(MAX) -- database name
DECLARE @path VARCHAR(MAX) -- path for backup files
DECLARE @fileName VARCHAR(MAX) -- filename for backup
DECLARE @fileDate VARCHAR(MAX) -- used for file name
DECLARE @executeQuery VARCHAR(MAX)
DECLARE @finalQuery VARCHAR(MAX)
SET @finalQuery = ''
DECLARE db_cursor CURSOR FOR
SELECT name
FROM master..sysdatabases
WHERE name NOT IN (CAST('master' AS VARCHAR(MAX)),CAST('model' AS VARCHAR(MAX)),CAST('msdb' AS VARCHAR(MAX)),CAST('tempdb' AS VARCHAR(MAX)))
OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @name
WHILE @@FETCH_STATUS = 0
BEGIN
SET @executeQuery=CAST(
'SELECT TOP 1000
[EmailAddress] as ''Email Address''
,[FirstName] as ''First Name''
,[LastName] as ''Last Name''
,[LastLogin] as ''Last Login'',
Name as ''User Role''
FROM '+@name+'.[dbo].[User] c
INNER JOIN
( SELECT * FROM '+@name+'.[dbo].[SecurityRole] as a
INNER JOIN '+@name+'.[dbo].[SecurityRoleToUser] as b
ON (a.ID=b.SecurityRoleID)
) d
ON (c.ID=d.UserID)
WHERE IsActive=1' AS VARCHAR(MAX))
--PRINT @executeQuery
--PRINT @name
--PRINT @executeQuery
SET @finalQuery = CAST(@executeQuery+' UNION ALL ' +@finalQuery AS VARCHAR(MAX))
--PRINT @executeQUery
--EXEC (@executeQuery)
FETCH NEXT FROM db_cursor INTO @name
END
CLOSE db_cursor
DEALLOCATE db_cursor
PRINT @finalQuery
--EXEC(@finalQuery)
```
|
You are better off without doing a `UNION` across so many databases, and you don't need to. Also, database names etc. are all `sysname`, which equates to `NVARCHAR(128)`, so it's best to use `NVARCHAR(MAX)` instead of `VARCHAR(MAX)`.
**Step 1: Less complicated query generation**
```
DECLARE @DatabaseName sysname;
DECLARE @Query NVARCHAR(MAX),
@Template NVARCHAR(MAX);
SET @Query = '';
SET @Template = N'USE [?];
SELECT TOP 1000
[EmailAddress] as [Email Address]
,[FirstName] as [First Name]
,[LastName] as [Last Name]
,[LastLogin] as [Last Login],
Name as [User Role]
FROM [dbo].[User] c
INNER JOIN
( SELECT * FROM [dbo].[SecurityRole] as a
INNER JOIN [dbo].[SecurityRoleToUser] as b
ON (a.ID=b.SecurityRoleID)
) d
ON (c.ID=d.UserID)
WHERE IsActive = 1;
';
SELECT @Query = (@Query + REPLACE(@Template, N'?', sd.[name]))
FROM sys.databases sd
WHERE sd.[name] NOT IN (N'master', N'model', N'msdb', N'tempdb')
AND HAS_DBACCESS(sd.[name]) = 1;
--EXEC(@Query); -- uncomment when not debugging
SELECT LEN(@Query); -- 9506 on my system -- comment out if debugging
print @query; -- truncates at 4000 chars for NVARCHAR -- comment out if debugging
```
**Step 2: Not needing a UNION**
Instead of using a UNION to get everything into a single result set, just insert multiple result sets into a local temporary table.
```
CREATE TABLE #tmp (DatabaseName sysname NOT NULL,
EmailAddress NVARCHAR(200), FirstName NVARCHAR(50),
LastName NVARCHAR(50), LastLogin DATETIME, UserRole VARCHAR(50));
DECLARE @Query NVARCHAR(MAX),
@Template NVARCHAR(MAX);
SET @Query = '';
SET @Template = N'USE [?];
SELECT TOP 1000
DB_NAME() AS [DatabaseName],
[EmailAddress] as [Email Address]
,[FirstName] as [First Name]
,[LastName] as [Last Name]
,[LastLogin] as [Last Login],
Name as [User Role]
FROM [dbo].[User] c
INNER JOIN
( SELECT UserID, Name--* -- see Step #3 below
FROM [dbo].[SecurityRole] sr
INNER JOIN [dbo].[SecurityRoleToUser] srtu
ON sr.ID = srtu.SecurityRoleID
) d
ON c.ID = d.UserID
WHERE IsActive = 1;
';
SELECT @Query = (@Query + REPLACE(@Template, N'?', sd.[name]))
FROM sys.databases sd
WHERE sd.[name] NOT IN (N'master', N'model', N'msdb', N'tempdb')
AND HAS_DBACCESS(sd.[name]) = 1;
INSERT INTO #tmp (DatabaseName, EmailAddress, FirstName, LastName, LastLogin, UserRole)
EXEC(@Query);
SELECT * FROM #tmp;
```
**Step 3:**
It is probably best not to use `SELECT *` in your `SELECT * FROM [dbo].[SecurityRole] as a` subquery. Just select the fields that you need, as it is more likely to use indexes. It looks like you just need two fields: `UserID, Name`.
|
It's `PRINT` that gets truncated, not your variable:
From the docs on [`PRINT`](https://msdn.microsoft.com/en-us/library/ms176047.aspx):
> A message string can be up to 8,000 characters long if it is a non-Unicode string, and 4,000 characters long if it is a Unicode string. Longer strings are truncated. The `varchar(max)` and `nvarchar(max)` data types are truncated to data types that are no larger than `varchar(8000)` and `nvarchar(4000)`.
|
Trying to create a large dynamic query, keeps getting truncated
|
[
"",
"sql",
"sql-server",
"t-sql",
"casting",
"dynamic-sql",
""
] |
I am attempting to query the following. Basically, clients can be in a number of categories. I have a list of check boxes for the categories and want to display only the clients in the selected categories. I have been trying to use an inner join, but my SQL is still in its infancy and I am clearly getting it wrong. Any advice greatly appreciated. Thanks for reading.
my tables layout : <http://www.uk-wired.co.uk/images/sample.jpg>

|
```
select *
from clients
join clientscategories on clients.idclient = clientscategories.idclient
join categories on clientscategories.idcat = categories.idcat
where clientscategories.idcat IN (v1, v2, ...)
```
I think you can try this. Substitute the values you are selecting from the check boxes for v1, v2, ...
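One practical detail on the application side: the checkbox values should be bound as parameters, with one placeholder generated per value, rather than spliced into the SQL string. A minimal Python/SQLite sketch (table and column names assumed from the answer):

```python
import sqlite3

def clients_in_categories(links, cat_ids):
    # Build one "?" placeholder per checkbox value and bind the list.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE clientscategories (idclient INTEGER, idcat INTEGER)")
    con.executemany("INSERT INTO clientscategories VALUES (?, ?)", links)
    placeholders = ", ".join("?" * len(cat_ids))
    cur = con.execute(
        f"SELECT DISTINCT idclient FROM clientscategories "
        f"WHERE idcat IN ({placeholders}) ORDER BY idclient",
        cat_ids)
    out = [r[0] for r in cur.fetchall()]
    con.close()
    return out

links = [(1, 10), (1, 20), (2, 20), (3, 30)]
print(clients_in_categories(links, [10, 30]))  # [1, 3]
```

Parameter binding avoids SQL injection and quoting bugs that come from concatenating the checkbox values into the query text.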
|
```
select *
from clients
join clientscategories on clientscategories.clientid = clients.clientid
where clientscategories.idcat IN (value1, value2, ...)
```
I hope this example query will help you achieve your goal.
|
Many to Many SQL Query Join
|
[
"",
"mysql",
"sql",
""
] |
Question
I am trying to create an UPDATE query that will run daily via SQL Server Agent; however, I'm having trouble selecting the right value from the table.
Tables:
```
CREATE TABLE Staff (StaffID INT IDENTITY (1,1) NOT NULL,
Name VARCHAR(70))
CREATE TABLE EntryLine (EntryID INT IDENTITY (1,1) NOT NULL,
StaffID INT,
EntryDetails VARCHAR(500),
EntryDate DATETIME,
Count INT)
CREATE TABLE Timesheet (TimesheetID INT IDENTITY (1,1) NOT NULL,
StaffID INT,
StartDate DATETIME,
EndDate DATETIME)
```
Example Data
```
StaffID Name
1 John
2 Ian
EntryID StaffID EntryDetails EntryDate count
1 1 test1 28/06/2015 07:58:06 1
2 1 test2 28/06/2015 12:03:47 2
3 1 test3 28/06/2015 13:01:12 3
4 1 test4 28/06/2015 17:34:56 4
5 2 test5 28/06/2015 07:48:24 1
6 2 test6 28/06/2015 17:31:42 2
TimesheetID StaffID StartDate EndDate
1 1 28/06/2015 07:58:06 NULL
2 2 28/06/2015 07:48:24 NULL
```
As you can see, I have currently set EndDate to NULL in the Timesheet table. What should happen is that the query UPDATEs EndDate to the last entry of the day from the EntryLine table, so the final data would be:
```
TimesheetID StaffID StartDate EndDate
1 1 28/06/2015 07:58:06 28/06/2015 17:34:56
2 2 28/06/2015 07:48:24 28/06/2015 17:31:42
```
I already have a counter column (`Count`) which counts the number of entries per staff member per day, so I don't know whether I should utilize it or base the UPDATE query on the latest date of the day.
Please could someone advise me on how I could use this query to UPDATE my table where:
* EndDate IS NULL
* The StaffID matches
* The EndDate is the SAME DAY as the StartDate
```
select
EntryLine.StaffID,
COUNT(count)FieldCounter,
MAX(EntryLine.EntryDate)EntryDate,
dateadd(DAY,0, datediff(day,0, EntryLine.EntryDate))GenericDate
from EntryLine
GROUP BY EntryLine.StaffID, dateadd(DAY,0, datediff(day,0, EntryLine.EntryDate))
```
|
You can use a correlated subquery for the `UPDATE`:
[**SQL Fiddle**](http://sqlfiddle.com/#!6/0d1d2/2/0)
```
UPDATE t
SET t.EndDate = (
SELECT MAX(EntryDate)
FROM EntryLine
WHERE
StaffID = t.StaffID
AND CAST(EntryDate AS DATE) = CAST(t.StartDate AS DATE)
)
FROM Timesheet t
WHERE t.EndDate IS NULL
```
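The correlated-subquery `UPDATE` translates almost directly to other engines. As a sketch, here is the same logic in SQLite, where `date()` plays the role of `CAST(... AS DATE)` (schema simplified from the question, dates as ISO strings):

```python
import sqlite3

def close_timesheets(entries, sheets):
    # Set each open timesheet's EndDate to that staff member's
    # last entry on the same day as StartDate.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE EntryLine (StaffID INTEGER, EntryDate TEXT)")
    con.execute("CREATE TABLE Timesheet (StaffID INTEGER, StartDate TEXT, EndDate TEXT)")
    con.executemany("INSERT INTO EntryLine VALUES (?, ?)", entries)
    con.executemany("INSERT INTO Timesheet VALUES (?, ?, NULL)", sheets)
    con.execute("""
        UPDATE Timesheet
        SET EndDate = (SELECT MAX(EntryDate)
                       FROM EntryLine
                       WHERE StaffID = Timesheet.StaffID
                         AND date(EntryDate) = date(Timesheet.StartDate))
        WHERE EndDate IS NULL
    """)
    cur = con.execute("SELECT StaffID, EndDate FROM Timesheet ORDER BY StaffID")
    out = cur.fetchall()
    con.close()
    return out

entries = [(1, "2015-06-28 07:58:06"), (1, "2015-06-28 17:34:56"),
           (2, "2015-06-28 07:48:24"), (2, "2015-06-28 17:31:42")]
sheets = [(1, "2015-06-28 07:58:06"), (2, "2015-06-28 07:48:24")]
print(close_timesheets(entries, sheets))
```

With the example data from the question, each timesheet picks up the staff member's last same-day entry time as its EndDate.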
---
Another approach using `JOIN`:
[**SQL Fiddle**](http://sqlfiddle.com/#!6/0d1d2/5/0)
```
UPDATE t
SET t.EndDate = a.MaxEntryDate
FROM Timesheet t
INNER JOIN(
SELECT
StaffId,
CAST(EntryDate AS DATE) AS EntryDate,
MAX(EntryDate) AS MaxEntryDate
FROM EntryLine
GROUP BY StaffId, CAST(EntryDate AS DATE)
)a
ON a.StaffID = t.StaffID
AND CAST(t.StartDate AS DATE) = a.EntryDate
WHERE t.EndDate IS NULL
```
|
You can use a subquery for that.
```
UPDATE T
SET T.EndDate = X.ExitTime
FROM Timesheet T
JOIN (SELECT StaffID, CAST(EntryDate AS DATE) AS EntryDay, MAX(EntryDate) AS ExitTime
      FROM EntryLine
      GROUP BY StaffID, CAST(EntryDate AS DATE)) X
  ON X.StaffID = T.StaffID
 AND CAST(T.StartDate AS DATE) = X.EntryDay
WHERE T.EndDate IS NULL
```
|
SQL Server selection and update query issue
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |