| Prompt (string, 10-31k chars) | Chosen (string, 3-29.4k chars) | Rejected (string, 3-51.1k chars) | Title (string, 9-150 chars) | Tags (list, 3-7 items) |
|---|---|---|---|---|
I have a table named WORKOUTS with two columns, `install_time` and `local_id`. Both are of type integer, and together they form a UNIQUE key. I have written two queries like the ones below:
```
SELECT * from workouts where install_time = 123456 and local_id > 5
SELECT * from workouts where install_time = 987643 and local_id > 19
```
Now I want to combine both of these queries into a single query with one WHERE clause that handles both conditions, but I can't find a way to do it.
**EDIT**
```
SELECT * from workouts where install_time = 123456 and local_id > 5 and date = xyz
SELECT * from workouts where install_time = 987643 and local_id > 19 and date = xyz
```
|
It depends on what you mean by **"combine both these queries"**. If you want a result set containing the rows returned by both of the above queries, you can use the `OR` operator. The following assumes that the `date` field is the same for both `local_id`s.
```
SELECT * FROM workouts
WHERE date = xyz
AND (
(install_time = 123456 AND local_id > 5)
OR (install_time = 987643 AND local_id > 19)
);
```
|
```
SELECT *
FROM workouts
WHERE
(install_time = 123456 AND local_id > 5 AND date = xyz)
OR
(install_time = 987643 AND local_id > 19 AND date = xyz)
```
Also be careful when using the `OR` keyword: you have to place your parentheses carefully.
Here is more info on the subject <http://www.w3schools.com/sql/sql_and_or.asp>
```
SELECT *
FROM workouts
WHERE
date = xyz
AND (
(install_time = 123456 AND local_id > 5)
OR
(install_time = 987643 AND local_id > 19)
)
```
|
Combining two SELECT queries from same table but different WHERE clause
|
[
"",
"sql",
"postgresql",
""
] |
My goal is to test whether the `grp` values generated by one query are the same `grp` values produced by a second run of the same query. However, when I change a single variable name, I get different results.
Below is an example of the **same query** compared against itself, where we know the results should be identical. However, if you run this query, you will find that one run produces different results than the other.
```
SELECT grp
FROM
(
SELECT CONCAT(word, corpus) AS grp, rank1, rank2
FROM (
SELECT
word, corpus,
ROW_NUMBER() OVER (PARTITION BY word ORDER BY test1 DESC) AS rank1,
ROW_NUMBER() OVER (PARTITION BY word ORDER BY word_count DESC) AS rank2,
ROW_NUMBER() OVER (PARTITION BY word ORDER BY corpus DESC) AS rank3,
ROW_NUMBER() OVER (PARTITION BY word ORDER BY corpus_date DESC) AS rank4
FROM
(
SELECT *, (word_count * word_count * corpus_date) AS test1
FROM [bigquery-public-data:samples.shakespeare]
)
)
)
WHERE rank1 <= 3 OR rank2 <= 3
HAVING grp NOT IN
(
SELECT grp FROM (
SELECT CONCAT(word, corpus) AS grp, rank1, rank2
FROM
(
SELECT
word, corpus,
ROW_NUMBER() OVER (PARTITION BY word ORDER BY test2 DESC) AS rank1,
ROW_NUMBER() OVER (PARTITION BY word ORDER BY word_count DESC) AS rank2,
ROW_NUMBER() OVER (PARTITION BY word ORDER BY corpus DESC) AS rank3,
ROW_NUMBER() OVER (PARTITION BY word ORDER BY corpus_date DESC) AS rank4
FROM
(
SELECT *, (word_count * word_count * corpus_date) AS test2
FROM [bigquery-public-data:samples.shakespeare]
)
)
)
WHERE rank1 <= 3 OR rank2 <= 3
)
```
Far worse: now if you run the exact same query but simply change the alias **test1** to **test3**, you will get completely different results.
```
SELECT grp
FROM
(
SELECT CONCAT(word, corpus) AS grp, rank1, rank2
FROM (
SELECT
word, corpus,
ROW_NUMBER() OVER (PARTITION BY word ORDER BY test3 DESC) AS rank1,
ROW_NUMBER() OVER (PARTITION BY word ORDER BY word_count DESC) AS rank2,
ROW_NUMBER() OVER (PARTITION BY word ORDER BY corpus DESC) AS rank3,
ROW_NUMBER() OVER (PARTITION BY word ORDER BY corpus_date DESC) AS rank4
FROM
(
SELECT *, (word_count * word_count * corpus_date) AS test3
FROM [bigquery-public-data:samples.shakespeare]
)
)
)
WHERE rank1 <= 3 OR rank2 <= 3
HAVING grp NOT IN
(
SELECT grp FROM (
SELECT CONCAT(word, corpus) AS grp, rank1, rank2
FROM
(
SELECT
word, corpus,
ROW_NUMBER() OVER (PARTITION BY word ORDER BY test2 DESC) AS rank1,
ROW_NUMBER() OVER (PARTITION BY word ORDER BY word_count DESC) AS rank2,
ROW_NUMBER() OVER (PARTITION BY word ORDER BY corpus DESC) AS rank3,
ROW_NUMBER() OVER (PARTITION BY word ORDER BY corpus_date DESC) AS rank4
FROM
(
SELECT *, (word_count * word_count * corpus_date) AS test2
FROM [bigquery-public-data:samples.shakespeare]
)
)
)
WHERE rank1 <= 3 OR rank2 <= 3
)
```
I can think of no explanation that accounts for both of these bizarre behaviors, and this is preventing me from validating my data. Any ideas?
**EDIT**:
I've updated the BigQuery SQL in the way the responses would suggest, and the same inconsistencies occur.
|
The problem is nondeterminism in your row numbering.
There are many examples in this table where `(word_count * word_count * corpus_date)` is the same for several corpuses. So when you partition by `word` and order by `test2`, the ordering you use for assigning row numbers is nondeterministic.
When you run the same subquery twice within the same top-level query, BigQuery actually executes that subquery twice and may yield different results between the two runs due to that nondeterminism.
Changing the alias might have just caused your query to *not* hit in the cache, resulting in a different set of nondeterministic choices and a different amount of overlap between the results.
You can confirm this by changing the `ORDER BY` clause in your analytic functions to include `corpus`. For example, change `ORDER BY test2` to `ORDER BY test2, corpus`. Then the row numbering will be deterministic, and the queries will return zero results regardless of what aliases you use.
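To illustrate the fix, here is a small sketch (Python's sqlite3 standing in for BigQuery, with invented rows): two corpuses tie on the score, so `ORDER BY score` alone leaves their relative row numbers unspecified, while adding `corpus` as a tie-breaker pins the numbering down:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE shakespeare (word TEXT, corpus TEXT, score INTEGER)")
# 'hamlet' and 'othello' tie on score; without a tie-breaker, their
# relative ordering in ROW_NUMBER() is up to the engine.
conn.executemany(
    "INSERT INTO shakespeare VALUES (?,?,?)",
    [("the", "hamlet", 10), ("the", "othello", 10), ("the", "macbeth", 7)],
)

rows = conn.execute("""
    SELECT corpus,
           ROW_NUMBER() OVER (PARTITION BY word
                              ORDER BY score DESC, corpus) AS rn
    FROM shakespeare
    ORDER BY rn
""").fetchall()
print(rows)  # [('hamlet', 1), ('othello', 2), ('macbeth', 3)]
```

With `corpus` appended to the `ORDER BY` inside the window, every run must produce this exact numbering, so two executions of the same subquery can no longer disagree.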
|
I've noticed you always ask tough questions and are then tough about accepting or even voting for answers.
That's OK! I want to try again, so let's get to the subject:
It looks like referencing an alias from within the same SELECT statement is undocumented and not supported.
Note below in [SELECT clause](https://cloud.google.com/bigquery/query-reference#select) documentation:
> Each expression can be given an alias by adding a space followed by an
> identifier after the expression. The optional AS keyword can be added
> between the expression and the alias for improved readability. Aliases
> defined in a SELECT clause can be referenced in the GROUP BY, HAVING,
> and ORDER BY clauses of the query, but not by the FROM, WHERE, or OMIT
> RECORD IF clauses nor by other expressions in the same SELECT clause.
Thus there is strange behavior here, without any error being thrown.
So you can use it at your own risk, but better not to (it would still be great to hear from the Google team, but since this is unsupported, don't expect much information explaining the behavior).
In the meantime, I propose sticking to what is supported and transforming your query into the "stable" version below.
It doesn't have the problem you faced in your original one!
(Note that I've changed the WHERE clause in the first subquery; otherwise it always returns zero rows, which makes total sense.)
```
SELECT grp
FROM
(
SELECT CONCAT(word, corpus) AS grp, rank2,
ROW_NUMBER() OVER (PARTITION BY word ORDER BY [try_any_alias_1] DESC) AS rank1
FROM (
SELECT
word, corpus,
(word_count * word_count * corpus_date) AS [try_any_alias_1],
ROW_NUMBER() OVER (PARTITION BY word ORDER BY word_count DESC) AS rank2,
ROW_NUMBER() OVER (PARTITION BY word ORDER BY corpus DESC) AS rank3,
ROW_NUMBER() OVER (PARTITION BY word ORDER BY corpus_date DESC) AS rank4
FROM [bigquery-public-data:samples.shakespeare]
)
)
WHERE rank1 <= 3 OR rank2 <= 4 // if rank2 <= 3 as in second subquery - result is always empty as expected
HAVING grp NOT IN
(
SELECT grp FROM (
SELECT CONCAT(word, corpus) AS grp, rank2,
ROW_NUMBER() OVER (PARTITION BY word ORDER BY [try_any_alias_2] DESC) AS rank1
FROM
(
SELECT
word, corpus,
(word_count * word_count * corpus_date) AS [try_any_alias_2],
ROW_NUMBER() OVER (PARTITION BY word ORDER BY word_count DESC) AS rank2,
ROW_NUMBER() OVER (PARTITION BY word ORDER BY corpus DESC) AS rank3,
ROW_NUMBER() OVER (PARTITION BY word ORDER BY corpus_date DESC) AS rank4
FROM [bigquery-public-data:samples.shakespeare]
)
)
WHERE rank1 <= 3 OR rank2 <= 3
)
```
|
Google Bigquery inconsistent when variable names changes in ORDER BY clause
|
[
"",
"sql",
"google-bigquery",
"partitioning",
"ranking",
"in-operator",
""
] |
The question I am working on is as follows:
What is the difference in the amount received for each month of 2004 compared to 2003?
This is what I have so far,
```
SELECT @2003 = (SELECT sum(amount) FROM Payments, Orders
WHERE YEAR(orderDate) = 2003
AND Payments.customerNumber = Orders.customerNumber
GROUP BY MONTH(orderDate));
SELECT @2004 = (SELECT sum(amount) FROM Payments, Orders
WHERE YEAR(orderDate) = 2004
AND Payments.customerNumber = Orders.customerNumber
GROUP BY MONTH(orderDate));
SELECT MONTH(orderDate), (@2004 - @2003) AS Diff
FROM Payments, Orders
WHERE Orders.customerNumber = Payments.customerNumber
Group By MONTH(orderDate);
```
In the output I am getting the months, but for Diff I am getting NULL. Please help. Thanks.
|
I cannot test this because I don't have your tables, but try something like this:
```
SELECT a.orderMonth, (a.orderTotal - b.orderTotal ) AS Diff
FROM
(SELECT MONTH(orderDate) as orderMonth,sum(amount) as orderTotal
FROM Payments, Orders
WHERE YEAR(orderDate) = 2004
AND Payments.customerNumber = Orders.customerNumber
GROUP BY MONTH(orderDate)) as a,
(SELECT MONTH(orderDate) as orderMonth,sum(amount) as orderTotal FROM Payments, Orders
WHERE YEAR(orderDate) = 2003
AND Payments.customerNumber = Orders.customerNumber
GROUP BY MONTH(orderDate)) as b
WHERE a.orderMonth=b.orderMonth
```
|
Q: How do I subtract two declared variables in MySQL?
A: You'd first have to DECLARE them, which is only possible in the context of a MySQL stored program. But those variable names wouldn't begin with an at sign. Variable names that start with an at sign (**@**) are user-defined variables; there is no DECLARE statement for them, and we can't declare them to be a particular type.
To subtract them within a SQL statement
```
SELECT @foo - @bar AS diff
```
Note that MySQL user-defined variables are *scalar* values.
Assignment of a value to a user-defined variable in a SELECT statement is done with the Pascal style assignment operator **:=**. In an expression in a SELECT statement, the equals sign is an equality comparison operator.
As a simple example of how to assign a value in a SQL SELECT statement
```
SELECT @foo := '123.45' ;
```
In the OP's queries, there's no assignment being done. The equals sign is a comparison of the scalar value to the return from a subquery. Are those first statements actually running without throwing an error?
User-defined variables are probably not necessary to solve this problem.
You want to return how many rows? Sounds like you want one for each month. We'll assume that by "year" we're referring to a calendar year, as in January through December. (We might want to check that assumption. Just so we don't find out way too late, that what was meant was the "fiscal year", running from July through June, or something.)
How can we get a list of months? Looks like you've got a start. We can use a GROUP BY or a DISTINCT.
The question was... "What is the difference in the **amount received** ... "
So, we want amount *received*. Would that be the amount of *payments* we received? Or the amount of orders that we received? (Are we *taking* orders and *receiving* payments? Or are we *placing* orders and *making* payments?)
When I think of "amount received", I'm thinking in terms of income.
Given the only two tables that we see, I'm thinking we're *filling* orders and *receiving* payments. (I probably want to check that, so when I'm done, I'm not told... "oh, we meant the number of orders we received" and/or "the payments table is the payments we made; the 'amount we received' is in some other table".)
We're going to assume that there's a column that identifies the "date" that a payment was received, and that the datatype of that column is DATE (or DATETIME or TIMESTAMP), some type that we can reliably determine what "month" a payment was received in.
To get a list of months that we received payments in, in 2003...
```
SELECT MONTH(p.payment_received_date)
FROM payment_received p
WHERE p.payment_received_date >= '2003-01-01'
AND p.payment_received_date < '2004-01-01'
GROUP BY MONTH(p.payment_received_date)
ORDER BY MONTH(p.payment_received_date)
```
That should get us twelve rows. Unless we didn't receive any payments in a given month. Then we might only get 11 rows. Or 10. Or, if we didn't receive any payments in all of 2003, we won't get any rows back.
For performance, we want our predicates (conditions in the WHERE clause) to reference bare columns. With an appropriate index available, MySQL can make effective use of an index range scan operation. If we wrap the columns in a function, e.g.
```
WHERE YEAR(p.payment_received_date) = 2003
```
With that, we force MySQL to evaluate that function on *every* flipping row in the table, and then compare the function's return to the literal. We prefer not to do that, and instead reference bare columns in predicates (conditions in the WHERE clause).
We could repeat the same query to get the payments received in 2004. All we need to do is change the date literals.
Or, we could get all the rows in 2003 and 2004 all together, and collapse that into a list of distinct months.
We can use conditional aggregation. Since we're using calendar years, I'll use the YEAR() shortcut (rather than a range check). Here, we're not as concerned with using a bare column inside the expression.
```
SELECT MONTH(p.payment_received_date) AS `mm`
, MAX(MONTHNAME(p.payment_received_date)) AS `month`
, SUM(IF(YEAR(p.payment_received_date)=2004,p.payment_amount,0)) AS `2004_month_total`
, SUM(IF(YEAR(p.payment_received_date)=2003,p.payment_amount,0)) AS `2003_month_total`
, SUM(IF(YEAR(p.payment_received_date)=2004,p.payment_amount,0))
- SUM(IF(YEAR(p.payment_received_date)=2003,p.payment_amount,0)) AS `2004_2003_diff`
FROM payment_received p
WHERE p.payment_received_date >= '2003-01-01'
AND p.payment_received_date < '2005-01-01'
GROUP
BY MONTH(p.payment_received_date)
ORDER
BY MONTH(p.payment_received_date)
```
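Here is the conditional-aggregation idea on a toy table (a sketch using Python's sqlite3, where `strftime` stands in for MySQL's `YEAR()`/`MONTH()` and the payment rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE payment_received (payment_received_date TEXT, payment_amount REAL)"
)
conn.executemany("INSERT INTO payment_received VALUES (?,?)", [
    ("2003-01-10", 100.0), ("2003-01-20", 50.0),   # Jan 2003 total: 150
    ("2004-01-05", 300.0),                          # Jan 2004 total: 300
    ("2003-02-14", 80.0), ("2004-02-02", 60.0),     # Feb: 80 vs 60
])

# One pass over both years; each SUM only counts rows from "its" year.
rows = conn.execute("""
    SELECT CAST(strftime('%m', payment_received_date) AS INTEGER) AS mm,
           SUM(CASE WHEN strftime('%Y', payment_received_date) = '2004'
                    THEN payment_amount ELSE 0 END)
         - SUM(CASE WHEN strftime('%Y', payment_received_date) = '2003'
                    THEN payment_amount ELSE 0 END) AS diff
    FROM payment_received
    WHERE payment_received_date >= '2003-01-01'
      AND payment_received_date <  '2005-01-01'
    GROUP BY mm
    ORDER BY mm
""").fetchall()
print(rows)  # [(1, 150.0), (2, -20.0)]
```

January is up 150 (300 vs 150) and February is down 20 (60 vs 80), computed in a single grouped scan rather than two per-year queries joined by month.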
---
If this is a homework problem, I strongly recommend you work on this problem yourself. There are other query patterns that will return an equivalent result.
|
How do I subtract two declared variables in MYSQL
|
[
"",
"mysql",
"sql",
""
] |
I need to check whether the dates stored in an SQL table are in YYYY-MM-DD format.
|
try this way (note the pattern uses dashes, matching the `YYYY-MM-DD` format in question):
```
SELECT CASE WHEN ISDATE(@string) = 1
             AND @string LIKE '[1-2][0-9][0-9][0-9]-[0-1][0-9]-[0-3][0-9]'
       THEN 1 ELSE 0 END;
```
`@string` holds the date string to check.
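For comparison, the same two-part check (a shape test plus an actual date-validity test) can be sketched with Python's sqlite3, where `GLOB` plays the role of T-SQL's `LIKE` character ranges and `date()` returning non-NULL plays the role of `ISDATE()`; the sample strings are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
checks = conn.execute("""
    SELECT s,
           CASE WHEN s GLOB '[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]'
                 AND date(s) IS NOT NULL
                THEN 1 ELSE 0 END AS is_yyyy_mm_dd
    FROM (SELECT '2016-04-01' AS s UNION ALL
          SELECT '2016/04/01' UNION ALL
          SELECT '2016-13-01')
    ORDER BY s
""").fetchall()
print(checks)
# [('2016-04-01', 1), ('2016-13-01', 0), ('2016/04/01', 0)]
```

The pattern alone would accept `2016-13-01`; the validity test alone would accept differently delimited dates. Combining both, as the T-SQL answer does, rejects each kind of bad value.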
|
You don't store a specific format (internally, I believe SQL Server stores a datetime as two integers); you choose a format for output. By "choose" I mean you have a default (usually set automatically when installing MS SQL or whatever you use, based on your country, time zone, etc., and changeable) plus whatever format you pick when executing scripts.
|
Check if format is YYYY-MM-DD
|
[
"",
"sql",
"sql-server",
""
] |
Here is my testing table data:
Testing
```
ID Name Payment_Date Fee Amt
1 BankA 2016-04-01 100 20000
2 BankB 2016-04-02 200 10000
3 BankA 2016-04-03 100 20000
4 BankB 2016-04-04 300 20000
```
I am trying to compare the Name, Fee, and Amt fields of each record to see whether any other record has the same values. If they do, I'd like to mark those records with something like 'Y'. Here is the expected result:
```
ID Name Payment_Date Fee Amt SameDataExistYN
1 BankA 2016-04-01 100 20000 Y
2 BankB 2016-04-02 200 10000 N
3 BankA 2016-04-03 100 20000 Y
4 BankB 2016-04-04 300 20000 N
```
I have tried the two methods below, but I am looking for other solutions so I can pick the best one for my work.
Method 1.
```
select t.*, iif((select count(*) from testing where name=t.name and fee=t.fee and amt=t.amt)=1,'N','Y') as SameDataExistYN from testing t
```
Method 2.
```
select t.*, case when ((b.Name = t.Name)
and (b.Fee = t.Fee) and (b.Amt = t.Amt)) then 'Y' else 'N' end as SameDataExistYN
from testing t
left join ( select Name, Fee, Amt
from testing
Group By Name, Fee, Amt
Having count(*)>1 ) as b on b.Name = t.Name
and b.Fee = t.Fee
and b.Amt = t.Amt
```
|
Here is another method, but I think you have to run tests on your data to find out which is best:
```
SELECT
t.*,
CASE WHEN EXISTS(
SELECT * FROM testing WHERE id <> t.id AND Name = t.Name AND Fee = t.Fee AND Amt = t.Amt
) THEN 'Y' ELSE 'N' END SameDataExistYN
FROM
testing t
;
```
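Running this `EXISTS` version against the sample data from the question (a sketch with Python's sqlite3) reproduces the expected `SameDataExistYN` column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE testing (id INTEGER, name TEXT, fee INTEGER, amt INTEGER)")
conn.executemany("INSERT INTO testing VALUES (?,?,?,?)", [
    (1, "BankA", 100, 20000), (2, "BankB", 200, 10000),
    (3, "BankA", 100, 20000), (4, "BankB", 300, 20000),
])

# A row is flagged 'Y' when some OTHER row (id <> t.id) shares its
# name/fee/amt triple.
rows = conn.execute("""
    SELECT t.id,
           CASE WHEN EXISTS (SELECT 1 FROM testing s
                             WHERE s.id <> t.id
                               AND s.name = t.name
                               AND s.fee  = t.fee
                               AND s.amt  = t.amt)
                THEN 'Y' ELSE 'N' END AS SameDataExistYN
    FROM testing t
    ORDER BY t.id
""").fetchall()
print(rows)  # [(1, 'Y'), (2, 'N'), (3, 'Y'), (4, 'N')]
```

Rows 1 and 3 match each other, rows 2 and 4 match nothing, exactly as in the question's expected output.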
|
There are several approaches, with differences in performance characteristics.
One option is to run a correlated subquery. This approach is best suited if you have a suitable index, and you are pulling a relatively small number of rows.
```
SELECT t.id
, t.name
, t.payment_date
, t.fee
, t.amt
, ( SELECT 'Y'
FROM testing s
WHERE s.name = t.name
AND s.fee = t.fee
AND s.amt = t.amt
AND s.id <> t.id
LIMIT 1
) AS SameDataExist
FROM testing t
WHERE ...
LIMIT ...
```
The correlated subquery in the SELECT list will return a 'Y' when there is at least one "matching" row found. If no "matching" row is found, the SameDataExist column will have a value of NULL. To convert the NULL to an 'N', you can wrap the subquery in an IFNULL() function.
---
Your method 2 is a workable approach. The expression in the SELECT list doesn't need to repeat all those comparisons; they have already been done in the join predicates. All you need to know is whether a matching row was found, and testing just one of the columns for NULL/NOT NULL is sufficient.
```
SELECT t.id
, t.name
, t.payment_date
, t.fee
, t.amt
, IF(s.name IS NOT NULL,'Y','N') AS SameDataExists
FROM testing t
LEFT
JOIN ( -- tuples that occur in more than one row
SELECT r.name, r.fee, r.amt
FROM testing r
GROUP BY r.name, r.fee, r.amt
HAVING COUNT(1) > 1
) s
ON s.name = t.name
AND s.fee = t.fee
AND s.amt = t.amt
WHERE ...
```
---
You could also make use of an `EXISTS` (correlated subquery), as shown in the other answer.
|
How do I Compare columns of records from the same table?
|
[
"",
"mysql",
"sql",
"subquery",
"self-join",
"nested-select",
""
] |
I have this query, which basically goes through a bunch of tables to get me some formatted results, but I can't seem to find the bottleneck. The easiest bottleneck to remove was the `ORDER BY RAND()`, but performance is still bad.
The query takes 10 to 20 seconds without the `ORDER BY RAND()`:
```
SELECT
c.prix AS prix,
ST_X(a.point) AS X,
ST_Y(a.point) AS Y,
s.sizeFormat AS size,
es.name AS estateSize,
c.title AS title,
DATE_FORMAT(c.datePub, '%m-%d-%y') AS datePub,
dbr.name AS dateBuiltRange,
m.myId AS meuble,
c.rawData_id AS rawData_id,
GROUP_CONCAT(img.captionWebPath) AS paths
FROM
immobilier_ad_blank AS c
LEFT JOIN PropertyFeature AS pf ON (c.propertyFeature_id = pf.id)
LEFT JOIN Adresse AS a ON (c.adresse_id = a.id)
LEFT JOIN Size AS s ON (pf.size_id = s.id)
LEFT JOIN EstateSize AS es ON (pf.estateSize_id = es.id)
LEFT JOIN Meuble AS m ON (pf.meuble_id = m.id)
LEFT JOIN DateBuiltRange AS dbr ON (pf.dateBuiltRange_id = dbr.id)
LEFT JOIN ImageAd AS img ON (img.commonAd_id = c.rawData_id)
WHERE
c.prix != 0
AND pf.subCatMyId = 1
AND (
(
c.datePub > STR_TO_DATE('01-04-2016', '%d-%m-%Y')
AND c.datePub < STR_TO_DATE('30-04-2016', '%d-%m-%Y')
)
OR date_format(c.datePub, '%d-%m-%Y') = '30-04-2016'
)
AND a.validPoint = 1
GROUP BY
c.id
#ORDER BY
# RAND()
LIMIT
5000
```
Here is the explain query:
[](https://i.stack.imgur.com/bCvzl.png)
Visual Portion:
[](https://i.stack.imgur.com/h5Lsd.png)
And here is a screenshot of `mysqltuner`
[](https://i.stack.imgur.com/l0UEJ.png)
**EDIT 1**
I have many indexes Here they are:
[](https://i.stack.imgur.com/gS6QW.png)
[](https://i.stack.imgur.com/woLlB.png)
**EDIT 2:**
So you guys did it: down to **0.5 to 2.5 seconds.**
I followed most of your advice, changed some settings in my.cnf, and ran OPTIMIZE TABLE on my tables.
|
You're searching for dates in a very suboptimal way. Try this.
```
... c.datePub >= STR_TO_DATE('01-04-2016', '%d-%m-%Y')
AND c.datePub < STR_TO_DATE('30-04-2016', '%d-%m-%Y') + INTERVAL 1 DAY
```
That allows a range scan on an index on the `datePub` column. You should create a compound index for that table on `(datePub, prix, adresse_id, rawData_id)` and see if it helps.
Also try an index on `a (validPoint)`. Notice that your use of a geometry data type in that table is probably not helping anything.
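The point of the half-open range (`>=` the first day, `<` the day after the last day) is that it still catches rows with a time-of-day late on the last day, which a plain `<` on the last day would drop. A small sketch with Python's sqlite3 and invented timestamps:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ads (datePub TEXT)")
conn.executemany("INSERT INTO ads VALUES (?)", [
    ("2016-04-01 09:00:00",),
    ("2016-04-30 23:15:00",),  # late on the last day of the range
    ("2016-05-01 00:00:00",),  # just outside the range
])

# Half-open range: >= first day, < (last day + 1 day).
n = conn.execute("""
    SELECT COUNT(*) FROM ads
    WHERE datePub >= '2016-04-01'
      AND datePub <  date('2016-04-30', '+1 day')
""").fetchone()[0]
print(n)  # 2: both April rows, including the 23:15 one on April 30
```

This predicate compares the indexed column directly against constants, so the engine can use a range scan; wrapping `datePub` in a formatting function (as in the original `date_format(...) = '30-04-2016'` branch) would defeat the index.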
|
To begin with, you have quite a lot of indexes, but many of them are not useful. Remember that more indexes mean slower inserts and updates. Also, MySQL is not good at using more than one index per table in complex queries. The following indexes have a cardinality < 10 and should probably be dropped:
```
IDX_...E88B
IDX....62AF
IDX....7DEE
idx2
UNIQ...F210
UNIQ...F210..
IDX....0C00
IDX....A2F1
At this point I got tired of the exercise; there are many more
```
Then you have some duplicated data: `point`, `lat`, and `lng`.
The `point` field already contains the `lat` and `lng`, so the latter two are not needed. That means you can lose two more indexes, `idxlat` and `idxlng`. I am not quite sure how `idxlng` appears twice in the index list for the same table.
These optimizations will lead to an overall increase in performance for INSERTs and UPDATEs, and possibly for all SELECTs as well, because the query planner needs to spend less time deciding which index to use.
Then we notice from your EXPLAIN that the query does not use any index on the table `Adresse` (a), yet your WHERE clause has `a.validPoint = 1`; clearly you need an index on it, as suggested by @Ollie-Jones.
However, I suspect that this index may have low cardinality. In that case, I recommend that you create a composite index on this column plus another.
|
MySQL Slow query ~ 10 seconds
|
[
"",
"mysql",
"sql",
""
] |
I have the below table, named session:
```
SessionID SessionName
100 August
101 September
102 October
103 November
104 December
105 January
106 May
107 June
108 July
```
I executed the following query and got the output below.
```
Select SessionID, SessionName
From dbo.Session
SessionID SessionName
100 August
101 September
102 October
103 November
104 December
105 January
106 May
107 June
108 July
```
the results are ordered by SessionID. But I need the output as below,
```
SessionID SessionName
106 May
107 June
108 July
100 August
101 September
102 October
103 November
104 December
105 January
```
How do I achieve this in SQL Server? Thanks for the help.
|
I'd use a `case` expression, like:
```
order by case SessionName when 'August' then 1
when 'September' then 2
...
when 'July' then 12
end
```
August gets 1 because "*in the application logic a session started with August*"; it's easy to renumber if you want to start with January and end with December.
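A sketch of the same `case` ordering with Python's sqlite3 (sample rows invented, and the numbering started at May to match the output the question asks for):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE session (SessionID INTEGER, SessionName TEXT)")
conn.executemany("INSERT INTO session VALUES (?,?)", [
    (100, "August"), (101, "September"), (105, "January"),
    (106, "May"), (108, "July"),
])

# Map each month name to its position in the session year (May first).
rows = conn.execute("""
    SELECT SessionID FROM session
    ORDER BY CASE SessionName
             WHEN 'May' THEN 1  WHEN 'June' THEN 2      WHEN 'July' THEN 3
             WHEN 'August' THEN 4 WHEN 'September' THEN 5 WHEN 'October' THEN 6
             WHEN 'November' THEN 7 WHEN 'December' THEN 8 WHEN 'January' THEN 9
             END
""").fetchall()
print([r[0] for r in rows])  # [106, 108, 100, 101, 105]
```

The `CASE` turns each name into a sortable integer, so no extra column or lookup table is needed; the trade-off is that the mapping is hard-coded in the query.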
|
To avoid any hassle with culture dependencies, you might get the month's index out of `sys.syslanguages` with a query like this:
(I'd eventually create a TVF from this and pass in the `langid` as parameter)
```
SELECT ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) MyMonthIndex
,Mnth.value('.','varchar(100)') AS MyMonthName
FROM
(
SELECT CAST('<x>' + REPLACE(months,',','</x><x>') + '</x>' AS XML) AS XmlData
FROM sys.syslanguages
WHERE langid=0
) AS DataSource
CROSS APPLY DataSource.XmlData.nodes('/x') AS The(Mnth)
```
The result
```
1 January
2 February
3 March
4 April
5 May
6 June
7 July
8 August
9 September
10 October
11 November
12 December
```
### EDIT: A UDF for direct usage (e.g. in an `ORDER BY`)
```
CREATE FUNCTION dbo.GetMonthIndexFromMonthName(@MonthName VARCHAR(100),@langId INT)
RETURNS INT
AS
BEGIN
RETURN
(
SELECT MyMonthIndex
FROM
(
SELECT CAST(ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) AS INT) MyMonthIndex
,Mnth.value('.','varchar(100)') AS MyMonthName
FROM
(
SELECT CAST('<x>' + REPLACE(months,',','</x><x>') + '</x>' AS XML) AS XmlData
FROM sys.syslanguages
WHERE langid=@langId
) AS DataSource
CROSS APPLY DataSource.XmlData.nodes('/x') AS The(Mnth)
) AS tbl
WHERE MyMonthName=@MonthName
);
END
GO
SELECT dbo.GetMonthIndexFromMonthName('February',0)
```
|
Order By Month Name in Sql-Server
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-order-by",
""
] |
I have a subset of records that look like this:
```
ID DATE
A 2015-09-01
A 2015-10-03
A 2015-10-10
B 2015-09-01
B 2015-09-10
B 2015-10-03
...
```
For each ID, the first (minimum) date is the first index record. Now I need to exclude records within 30 days of an index record, and any record with a date more than 30 days later becomes the next index record.
For example, for ID A, 2015-09-01 and 2015-10-03 are both index records and would be retained since they are more than 30 days apart. 2015-10-10 would be dropped because it's within 30 days of the 2nd index case.
For ID B, 2015-09-10 would be dropped and would NOT be an index case because it's within 30 days of the 1st index record. 2015-10-03 would be retained because it's greater than 30 days of the 1st index record and would be considered the 2nd index case.
The output should look like this:
```
ID DATE
A 2015-09-01
A 2015-10-03
B 2015-09-01
B 2015-10-03
```
How do I do this in SQL Server 2012? There's no limit to how many dates an ID can have; it could be just 1 or as many as 5 or more. I'm fairly new to SQL, so any help would be greatly appreciated.
|
This works for your example; `#test` is your table with the data:
```
;with cte1
as
(
select
ID, Date,
row_number()over(partition by ID order by Date) groupID
from #test
),
cte2
as
(
select ID, Date, Date as DateTmp, groupID, 1 as getRow from cte1 where groupID=1
union all
select
c1.ID,
c1.Date,
case when datediff(Day, c2.DateTmp, c1.Date) > 30 then c1.Date else c2.DateTmp end as DateTmp,
c1.groupID,
case when datediff(Day, c2.DateTmp, c1.Date) > 30 then 1 else 0 end as getRow
from cte1 c1
inner join cte2 c2 on c2.groupID+1=c1.groupID and c2.ID=c1.ID
)
select ID, Date from cte2 where getRow=1 order by ID, Date
```
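The same recursive CTE can be exercised with Python's sqlite3 (a sketch: `julianday` differences stand in for `datediff`, and the temp table is a plain table here); on the question's sample data it yields exactly the expected four rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (ID TEXT, Date TEXT)")
conn.executemany("INSERT INTO test VALUES (?,?)", [
    ("A", "2015-09-01"), ("A", "2015-10-03"), ("A", "2015-10-10"),
    ("B", "2015-09-01"), ("B", "2015-09-10"), ("B", "2015-10-03"),
])

rows = conn.execute("""
    WITH RECURSIVE cte1 AS (
        SELECT ID, Date,
               ROW_NUMBER() OVER (PARTITION BY ID ORDER BY Date) AS groupID
        FROM test
    ),
    cte2 AS (
        -- anchor: the first date per ID is always an index record
        SELECT ID, Date, Date AS DateTmp, groupID, 1 AS getRow
        FROM cte1 WHERE groupID = 1
        UNION ALL
        -- walk forward; DateTmp carries the most recent index date
        SELECT c1.ID, c1.Date,
               CASE WHEN julianday(c1.Date) - julianday(c2.DateTmp) > 30
                    THEN c1.Date ELSE c2.DateTmp END,
               c1.groupID,
               CASE WHEN julianday(c1.Date) - julianday(c2.DateTmp) > 30
                    THEN 1 ELSE 0 END
        FROM cte1 c1
        JOIN cte2 c2 ON c2.groupID + 1 = c1.groupID AND c2.ID = c1.ID
    )
    SELECT ID, Date FROM cte2 WHERE getRow = 1 ORDER BY ID, Date
""").fetchall()
print(rows)
# [('A', '2015-09-01'), ('A', '2015-10-03'),
#  ('B', '2015-09-01'), ('B', '2015-10-03')]
```

The key design point is that `DateTmp` carries the *last index record's* date forward, rather than comparing each row to its immediate predecessor; that is why B's 2015-10-03 (33 days after 2015-09-01) is kept even though it is only 23 days after the dropped 2015-09-10 row.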
|
```
select * from
(
select ID,DATE_, case when DATE_DIFF is null then 1 when date_diff>30 then 1 else 0 end comparison from
(
select ID, DATE_ ,DATE_-LAG(DATE_, 1) OVER (PARTITION BY ID ORDER BY DATE_) date_diff from trial
)
)
where comparison=1 order by ID,DATE_;
```
This was tried in an Oracle database; similar functions exist in SQL Server too.
I am partitioning by the ID column and, based on the DATE field, comparing the date in the current row with the one in the previous row. The very first row for a given ID returns NULL for the difference, and that row is required in our output as the first index. For all other rows, we return 1 when the date difference from the previous row is greater than 30.
[Lag function in transact sql](https://msdn.microsoft.com/en-IN/library/hh231256.aspx)
[Case function in transact sql](https://msdn.microsoft.com/en-us/library/ms181765.aspx)
|
How to get next minimum date that is not within 30 days and use as reference point in SQL?
|
[
"",
"sql",
"sql-server",
"loops",
"while-loop",
""
] |
I need to create a table from the result schema of a SQL query, where the query has multiple joins.
Example: in the scenario below, create a table whose schema has the columns `r` and `t`.
```
select a.x as r, b.y as t
from a
JOIN b
ON a.m = b.m
```
I cannot use a plain 'SELECT INTO' statement because I receive an input SQL SELECT statement and need to copy that query's output to the destination table at runtime.
|
If I'm reading your problem correctly, you're getting SQL from an external source and you want to run that into a table (maybe with data, maybe without). This should do:
```
use tempdb;
declare @userSuppliedSQL nvarchar(max) = N'select top 10 * from Util.dbo.Numbers';
declare @sql nvarchar(max);
set @sql = concat('
with cte as (
', @userSuppliedSQL, '
)
select *
into dbo.temptable
from cte
where 9=0 --delete this line if you actually want data
;');
print @sql
exec sp_executesql @sql;
select * from dbo.temptable;
```
This assumes that the supplied query is legal for use as the body of a common table expression (e.g. all columns are named and unique). Note that you can't select into a temp table (i.e. #temp) because the temp table only exists for the duration of the `sp_executesql` call.
Also, for the love of all that is holy, please understand that by running arbitrary SQL that a user passes in, you're opening yourself up to SQL injection.
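The same wrap-in-a-CTE trick can be sketched outside SQL Server too; here with Python's sqlite3, using `CREATE TABLE ... AS` in place of `SELECT ... INTO` (table and column names invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE numbers (n INTEGER, squared INTEGER)")
conn.executemany("INSERT INTO numbers VALUES (?,?)", [(1, 1), (2, 4), (3, 9)])

user_supplied_sql = "SELECT n, squared FROM numbers"  # untrusted input in real life!

# Same idea as the sp_executesql version: wrap the user query in a CTE
# and materialize zero rows of it, capturing only the schema.
conn.execute(f"""
    CREATE TABLE temptable AS
    WITH cte AS ({user_supplied_sql})
    SELECT * FROM cte WHERE 9 = 0
""")

cols = [row[1] for row in conn.execute("PRAGMA table_info(temptable)")]
row_count = conn.execute("SELECT COUNT(*) FROM temptable").fetchone()[0]
print(cols, row_count)  # ['n', 'squared'] 0
```

As in the T-SQL answer, this only works when the supplied query is legal as a CTE body (all columns named and unique), and pasting user SQL into a string like this is exactly the injection risk warned about above.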
|
Use the INTO clause here, like:
```
Select col1, col2, col3
into newtable
from oldtable;

select a.x as r, b.y as t
into c
from a
JOIN b ON a.m = b.m
```
|
Create Table using Schema of SQL Query
|
[
"",
"sql",
"sql-server",
""
] |
I have a table that looks like this:
```
method year segment
ABC 2014 AB
CAB 2014 AB
PAU 2013 AB
COR 2015 CD
PRK 2016 IK
```
All rows in a segment should have the same year, so I need to identify how many of them have a different year; those are mistakes.
Result should be
```
method year segment
PAU 2013 AB
```
or
> Error = 1
Can you help me with the code?
So far I have tried something like this, but it gives me the whole list:
```
create table E1 as
select segment, dat_start
from pd_segment a
where a.segment in (select b.segment from pd_segment b
group by b.segment
having count (b.dat_start NE a.dat_start-1))
```
|
Let's say our table name is "MyTable".
This query will report segments with more than one year:
```
select distinct segment from MyTable
Group by segment
having count(distinct year)>1
```
Then, if you want all the other columns' data, you can join this result with the table itself:
```
select Mytable.* from MyTable join (
select distinct segment from MyTable
Group by segment
having count(distinct year)>1
) as x on x.segment=Mytable.segment
```
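On the question's sample data, the grouping step alone already flags the offending segment (a sketch with Python's sqlite3):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (method TEXT, year INTEGER, segment TEXT)")
conn.executemany("INSERT INTO MyTable VALUES (?,?,?)", [
    ("ABC", 2014, "AB"), ("CAB", 2014, "AB"), ("PAU", 2013, "AB"),
    ("COR", 2015, "CD"), ("PRK", 2016, "IK"),
])

# A segment is inconsistent when it spans more than one distinct year.
bad_segments = conn.execute("""
    SELECT segment FROM MyTable
    GROUP BY segment
    HAVING COUNT(DISTINCT year) > 1
""").fetchall()
print(bad_segments)  # [('AB',)]
```

Only segment AB mixes years (2013 and 2014); joining this result back to the table, as the answer shows, then recovers the full rows, including the PAU row from the expected output.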
|
Try this
```
SELECT *
FROM (
SELECT *,ROW_NUMBER() OVER(PARTITION BY [YEAR]
ORDER BY SEGMENT DESC) ROW_NO FROM @TABLE1
) T WHERE row_no <> 1
```
|
sql one segment has many methods but all of them should have one date how to check for the ones that dont have
|
[
"",
"sql",
"sas",
""
] |
I want to extract part of the text from my field `description`.
Could someone offer some advice?
The whole string is `'Version100][BuildNumber:666][SubBuild:000]'` and the build number is what I want to single out (however the number may change).
I have tried `SUBSTRING` with `CHARINDEX` but can't figure it out, and about 30 minutes of googling hasn't helped either.
|
You can try this:
```
SELECT SUBSTRING([description],CHARINDEX('BuildNumber:',[description])+12,
CHARINDEX(']',[description], CHARINDEX('BuildNumber:',[description]))
-(CHARINDEX('BuildNumber:',[description])+12))
FROM YOURTABLE
```
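The same start/length arithmetic, transcribed into plain Python (a sketch; the helper name is invented):

```python
def build_number(description: str) -> str:
    # Same arithmetic as the CHARINDEX/SUBSTRING query: start just after
    # 'BuildNumber:' (12 characters long) and stop at the next ']'.
    marker = "BuildNumber:"
    start = description.index(marker) + len(marker)
    end = description.index("]", start)
    return description[start:end]

print(build_number("[Version100][BuildNumber:666][SubBuild:000]"))  # 666
```

Because the end is located by searching for `]` *after* the marker, the extraction keeps working when the number changes length (e.g. `BuildNumber:12345`).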
|
A little long, but you could do this:
```
DECLARE @Description VARCHAR(MAX)= '[Version100][BuildNumber:666][SubBuild:000]'
SELECT LEFT(STUFF(@Description, 1, PATINDEX('%BuildNumber%', @Description) + 11, '' )
,PATINDEX('%]%', STUFF(@Description, 1, PATINDEX('%BuildNumber%', @Description) + 11, '' )) - 1)
```
|
How can I find part of a string between two words?
|
[
"",
"sql",
"sql-server",
""
] |
I have this table:
```
// votes
+----+---------+---------+
| id | user_id | post_id |
+----+---------+---------+
| 1 | 12345 | 12 |
| 2 | 12345 | 13 |
| 3 | 52344 | 12 |
+----+---------+---------+
```
Also this is a part of my query:
```
EXISTS (select 1 from votes v where u.id = v.user_id and p.id = v.post_id)
```
To make my query more efficient, I have added a composite index on `user_id` and `post_id`:
```
ALTER TABLE `votes` ADD INDEX `user_id,post_id` (`user_id,post_id`)
```
---
What's my question? I also want to prevent duplicate votes from one user on one post, so I have to create a unique index on `user_id` and `post_id` too. Now I want to know: should I create another index, or is the unique index alone enough, so that I should remove the previous one?
|
You do not need two indexes serving a similar purpose. Only one of them would be used during a `select` operation, and both would have to be modified on `insert`, `update`, and `delete`: unnecessary overhead. Go with the unique index, since it serves both purposes. A range scan is almost guaranteed when using uniquely indexed columns in a `where` clause.
**EDIT :**
The term used for the index does not matter. When you create an index, a B-tree structure is built, selecting a convenient root node and rearranging the column values. If all entries in the given column are unique, a normal index would be the same size as a unique index and would give the same performance.
A primary index is also a unique index, with the exception that it does not allow null values. Null values are permitted in a unique index.
|
if you're trying to prevent multiple votes from the same `user_id` to the same `post_id`, then why don't you use a [`UNIQUE` constraint](http://www.w3schools.com/sql/sql_unique.asp)?
```
ALTER TABLE votes
ADD CONSTRAINT uc_votes UNIQUE (user_id,post_id)
```
with regards to whether you should remove your index, you should review [`EXPLAIN`](https://dev.mysql.com/doc/refman/5.5/en/execution-plan-information.html) concepts for query plan execution paths and performance. I suspect it will be better to keep them, but it will require testing.
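To see that a single unique index both blocks duplicate votes and serves the `EXISTS` lookup, here is a quick sketch (SQLite used only because it is easy to run; the behaviour of a composite `UNIQUE` constraint in MySQL is the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE votes (
        id INTEGER PRIMARY KEY,
        user_id INTEGER,
        post_id INTEGER,
        UNIQUE (user_id, post_id)   -- one constraint: prevents duplicates AND indexes lookups
    )
""")
conn.execute("INSERT INTO votes (user_id, post_id) VALUES (12345, 12)")

# A second vote by the same user on the same post is rejected
try:
    conn.execute("INSERT INTO votes (user_id, post_id) VALUES (12345, 12)")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True

# The same index also answers the EXISTS-style lookup efficiently
found = conn.execute(
    "SELECT EXISTS (SELECT 1 FROM votes WHERE user_id = ? AND post_id = ?)",
    (12345, 12)).fetchone()[0]
print(duplicate_rejected, found)
```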
|
A unique can be used as index?
|
[
"",
"mysql",
"sql",
"indexing",
""
] |
```
table: users
id
table: tasks
id
table: tasks_users
user_id
task_id
is_owner
```
I have a `users` table, a `tasks` table and a pivot table `tasks_users`.
I would like to select all the users given a `task_id` and ordering by `tasks_users.is_owner`.
How would I accomplish this?
|
Try something like this ...
```
select u.id
from users u
join tasks_users tu
  on u.id = tu.user_id
join tasks t
  on t.id = tu.task_id
where t.id = your_id
order by tu.is_owner
```
|
I think it is simple
```
select u.id
from users u
inner join tasks_users tu on u.id = tu.user_id
inner join tasks t on t.id = tu.task_id
where t.id = :task_id
order by tu.is_owner;
```
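With made-up sample data, the accepted pattern can be exercised end to end in SQLite (ordering with `DESC` so the owner comes first, on the assumption that `is_owner` is a 0/1 flag):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY);
    CREATE TABLE tasks (id INTEGER PRIMARY KEY);
    CREATE TABLE tasks_users (user_id INTEGER, task_id INTEGER, is_owner INTEGER);
    INSERT INTO users VALUES (1), (2), (3);
    INSERT INTO tasks VALUES (10);
    INSERT INTO tasks_users VALUES (1, 10, 0), (2, 10, 1), (3, 10, 0);
""")
rows = conn.execute("""
    SELECT u.id, tu.is_owner
    FROM users u
    JOIN tasks_users tu ON u.id = tu.user_id
    JOIN tasks t        ON t.id = tu.task_id
    WHERE t.id = ?
    ORDER BY tu.is_owner DESC   -- owner first
""", (10,)).fetchall()
print(rows)
```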
|
Using JOINS, ORDER BY, and pivot tables
|
[
"",
"mysql",
"sql",
"database",
"join",
""
] |
I have only been using PhpStorm for a week or so; so far all my SQL queries have been working fine with no errors after setting up the database connection. This current code actually uses a second database (one is for users, the other for the specific product), so I added that connection in the database tab too, but it's still giving me an 'unable to resolve column' warning.
Is there a way to see which database it's looking at? Will it work with multiple databases? Or have I done something else wrong?
Error below:
[](https://i.stack.imgur.com/huVMY.png)
```
$this->db->setSQL("SELECT T1.*, trunc(sysdate) - trunc(DATE_CHANGED) EXPIRES FROM " . $this->tableName . " T1 WHERE lower(" . $this->primaryKey . ")=lower(:id)")
```
Also here is what my database settings window looks like as seen some people having problems with parameter patterns causing this error but I'm fairly sure that is not the issue here:
[](https://i.stack.imgur.com/Juw8B.png)
Using PhpStorm 10.0.3
|
So the short answer is that it can't resolve the table name when it's held in a variable, even though the variable is set just above. The only way to remove the error is either to completely turn off SQL inspections (obviously not ideal, as I use them throughout my project) or to temporarily disable the inspection for this statement only using the doc comment:
```
/** @noinspection SqlResolve */
```
I was hoping to find a more focused comment, much like the @var or @method ones, to tell PhpStorm what the table should be so it could still inspect the rest of the statement. Something like:
`/** @var $this->tableName TABLE_IM_USING */`
Maybe in the future JetBrains will add that or make PhpStorm clever enough to look at the variable 3 lines above.
|
You can set the SQL resolution scope in `File -> Settings -> Languages & Frameworks -> SQL Resolution Scopes`.
[](https://i.stack.imgur.com/dT7C9.png)
This allows you to provide a default for the entire project and you can optionally define specific mappings to certain paths in the project.
|
PhpStorm unable to resolve column for multiple database connections
|
[
"",
"sql",
"oracle",
"phpstorm",
""
] |
How to get Oracle database version from **sqlplus**, Oracle SQL developer, SQL Navigator or other IDE?
|
Execute this statement from SQL\*Plus, SQLcl, Oracle SQL Developer, SQL Navigator or other IDE:
```
select * from product_component_version
```
And you'll get:
```
PRODUCT VERSION VERSION_FULL STATUS
-------------------------------------- ---------- ------------ ----------
Oracle Database 18c Enterprise Edition 18.0.0.0.0 18.3.0.0.0 Production
```
|
Try running this query in SQLPLUS -
```
select * from v$version
```
|
How to get Oracle database version?
|
[
"",
"sql",
"oracle",
"version",
""
] |
I have a table that contains 4 columns. I need to remove some of the rows based on the Code and ID columns. A code of 1 initiates the process I'm trying to track and a code of 2 terminates it. I would like to remove all rows for a specific ID when a code of 2 comes after a code of 1 and there is not an additional code 1. For example, my current data set looks like this:
```
Code Deposit Date ID
1 $100 3/2/2016 5
2 $0 3/1/2016 5
1 $120 2/8/2016 5
1 $120 3/22/2016 4
2 $70 2/8/2016 3
1 $120 1/3/2016 3
2 $0 6/15/2015 2
1 $120 3/22/2016 2
1 $50 8/15/2015 1
2 $200 8/1/2015 1
```
After I run my script I would like it to look like this:
```
Code Deposit Date ID
1 $100 3/2/2016 5
2 $0 3/1/2016 5
1 $120 2/8/2016 5
1 $120 3/22/2016 4
1 $50 8/15/2015 1
2 $200 8/1/2015 1
```
In all I have about 150,000 ID's in my actual table but this is the general idea.
|
You can get the ids using logic like this:
```
select t.id
from t
group by t.id
having max(case when code = 2 then date end) > min(case when code = 1 then date end) and -- code 2 after code 1
max(case when code = 2 then date end) > max(case when code = 1 then date end) -- no code 1 after code2
```
It is then easy enough to incorporate this into a query to get the rest of the details:
```
select t.*
from t
where t.id not in (select t.id
from t
group by t.id
having max(case when code = 2 then date end) > min(case when code = 1 then date end) and -- code 2 after code 1
max(case when code = 2 then date end) > max(case when code = 1 then date end)
);
```
|
The approach I took was to add up the Code per each ID. If it equals 3 exactly, it should be removed.
```
;WITH keepID as (
Select
ID
,SUM(code) as 'sumCode'
From #testInit
Group by ID
HAVING SUM(code) <> 3
)
Select *
From #testInit
Where ID IN (Select ID from keepID)
```
Your post showed keeping ID = 1, which does not seem to fit the criteria. Are you sure you would be keeping ID = 1? It only has 2 records, with a code of 1 and a code of 2, which adds up to 3... thus, remove it.
I just showed the approach in logic ... let me know if you need help with the delete code.
|
Conditional Row Deleting in SQL
|
[
"",
"sql",
"sql-server-2012",
"conditional-statements",
"sql-delete",
"delete-row",
""
] |
This potentially might be too large of a question for a complete solution, and I've got a bit of a strange set up. I'm using HP OO to create a text-based RPG just to practice getting used to database design on this platform.
So it's basically a flow script that runs once. When the script starts, a player (user) is created, and then a character is created. The player inputs a name for its character, and this is stored in the `character` table. I then call that character name with `SELECT name FROM character WHERE character.character_id=x`. How can I retrieve the name from the correct (most recently created) character. The `character_id` is an auto-incrementing identity column.
|
There's nothing *guaranteeing* that the highest value in an identity column is the most recently created record. You should add a `date_created` column to your table and give it a default value of the current date and time (`current_timestamp` for a `datetime2` field). That actually does what you want.
OK, your question changed a bit and, Tab's comment here is also correct. If you want to insert and get the identity inserted back, you should follow [the advice here](https://stackoverflow.com/questions/42648/best-way-to-get-identity-of-inserted-row) that he linked.
However, if you want to be able to determine *the order of creation* -- which is what you originally asked -- then you should use a `date_created` field. It's possible to get around `IDENTITY` and insert any value you want, and things like UPDATEs and DELETEs can change things as well. Essentially, it's a bad idea to assign meaning to a record's value of an IDENTITY column relative to other records in the table (i.e., this was created before or after these other records) because you can actually get around that.
Personally, I would either use the `OUTPUT` clause to have my INSERTs send the ID back:
```
INSERT INTO Character (...)
OUTPUT INSERTED.Id
VALUES (....);
```
Or I'd reuse the same connection and return the `SCOPE_IDENTITY()`.
```
INSERT INTO Character (...)
VALUES (....);
SELECT SCOPE_IDENTITY() AS [SCOPE_IDENTITY];
```
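The "reuse the same connection and read the identity back" idea has a direct analogue in most client libraries; for instance, Python's sqlite3 cursor exposes the last inserted key (the `characters` table name here is made up for the sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE characters (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")

cur = conn.execute("INSERT INTO characters (name) VALUES (?)", ("Hero",))
new_id = cur.lastrowid  # the key just generated on this connection
conn.execute("INSERT INTO characters (name) VALUES (?)", ("Villain",))

# Later lookups use the captured key, not "the highest id"
name = conn.execute("SELECT name FROM characters WHERE id = ?", (new_id,)).fetchone()[0]
print(new_id, name)
```

Capturing the key at insert time is robust even after other rows are inserted, which is exactly why `SELECT max(id)` is discouraged above.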
|
```
SELECT row FROM table WHERE id=(
SELECT max(id) FROM table
)
```
This should work. Make sure the id is unique (auto increments? great!)
|
Find a value based on the most recently created row in a table (SQL Server)
|
[
"",
"sql",
"sql-server",
""
] |
I am trying to select \* tasks that are not due. That been said, anything thats past this exact date **and time** should be selected taking in consideration that I have a separate columns for date and time.
Currently I am using this where **it does not** select all instances of today:
```
SELECT * FROM `tasks` WHERE `due_date` >= NOW() AND `due_time` >= NOW()
```
I have no access in alternating the database for a DATETIME field. I can only select.
The ***due\_date*** has **DATE** as type and ***due\_time*** has **TIME** as type
|
To get the behavior it seems like you're looking for, you could do something like this:
```
SELECT t.* FROM `tasks` t
WHERE t.`due_date` >= DATE(NOW())
AND ( ( t.`due_date` = DATE(NOW()) AND t.`due_time` >= TIME(NOW()) )
OR ( t.`due_date` > DATE(NOW()) )
)
```
The first cut is the comparison to due\_date, all tasks that are due today or later. That includes too many, we need to get rid of tasks that are due today but before the current time.
There are other query approaches that may seem "simpler". The approach above keeps the predicates on bare columns, so MySQL can make effective use of range scan operations on suitable indexes.
**FOLLOWUP**
I've done this type of query before, for paging with multiple columns in the key. Something was bothering me about that original query, it had more conditions than were really required. (What threw me off was the >= condition on due\_time.)
I believe this is equivalent:
```
SELECT t.* FROM `tasks` t
WHERE t.`due_date` >= DATE(NOW())
AND ( t.`due_date` > DATE(NOW()) OR t.`due_time` >= TIME(NOW()) )
```
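The follow-up predicate can be checked with a frozen "now" (ISO-formatted strings compare correctly as text, which is why plain string dates and times work in this SQLite sketch; the sample rows are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tasks (due_date TEXT, due_time TEXT)")
conn.executemany("INSERT INTO tasks VALUES (?, ?)", [
    ("2016-05-01", "09:00"),   # today, earlier than "now" -> overdue, excluded
    ("2016-05-01", "18:00"),   # today, later than "now"   -> included
    ("2016-05-02", "08:00"),   # tomorrow                  -> included
])
today, now_time = "2016-05-01", "12:00"  # frozen "now" for reproducibility
rows = conn.execute("""
    SELECT due_date, due_time FROM tasks
    WHERE due_date >= :d
      AND (due_date > :d OR due_time >= :t)
    ORDER BY due_date, due_time
""", {"d": today, "t": now_time}).fetchall()
print(rows)
```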
|
```
SELECT * FROM `tasks` WHERE `due_date` >= DATE(NOW()) AND `due_time` >= TIME(NOW());
```
DATE() extracts the year-month-date part, and TIME() the hours:mins:secs part
<http://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html>
|
SQL: Select * WHERE due date and due time after NOW() without DATETIME()
|
[
"",
"mysql",
"sql",
"date",
"datetime",
"select",
""
] |
I have a query -
```
SELECT * FROM TABLE WHERE Date >= DATEADD (day, -7, -getdate()) AND Date <= getdate();
```
This would return all records for each day except day 7. If I ran this query on a Sunday at 17:00 it would only produce results going back to Monday 17:00. How could I include results from Monday 08:00.
|
Try it like this:
```
SELECT *
FROM SomeWhere
WHERE [Date] > DATEADD(HOUR,8,DATEADD(DAY, -7, CAST(CAST(GETDATE() AS DATE) AS DATETIME))) --7 days back, 8 o'clock
AND [Date] <= GETDATE(); --now
```
|
That's because you are comparing date+time, not only the date.
If you want to include whole days, you can truncate the time portion from `getdate()`: you can accomplish that with a conversion to **date**:
```
SELECT * FROM TABLE
WHERE Date >= DATEADD(day, -7, convert(date, getdate()))
AND Date <= convert(date, getdate());
```
If you want to start from 8 in the morning, the best approach is to add 8 hours back onto the truncated date:
```
declare @t datetime = dateadd(HH, 8, convert(datetime, convert(date, getdate())))
SELECT * FROM TABLE
WHERE Date >= DATEADD(day, -7, @t) AND Date <= @t;
```
NOTE: with the conversion `convert(date, getdate())` you get a datatype **date** and you cannot add hours directly to it; you must re-convert it to **datetime**.
|
Getdate() functionality returns partial day in select query
|
[
"",
"sql",
"sql-server",
""
] |
I am executing this query in Oracle. I have added screenshots of my data and the returned results, but the returned result is wrong: it returns 1 when it should return 0.52. The customer (see the attached screenshot) has codes 1, 2, 4, 31; for 1, 2, 4 he should get the 0.70 value and for 31 he should get 0.75, so after multiplication the returned result should be 0.52 instead of 1.
I am really stuck here. Please help me. I will be very thankful to you.
Here is my query. What I actually want to do is I want to calculate points value given to every customer on the basis of codes they got.
If a customer has code = 1 then he will get 0.70 points, and if he has codes 2 and 4 too then I do not want to give him an extra 0.70 for codes 2 and 4.
Let me keep it simple. If a customer has all of the codes 1, 2, 4 then he will only get 0.70 points once; but if he has code 4 only then he will get 0.90; and if he also has code 31 then he will get an extra 0.75 for having code 31. Does it make sense now?
```
SELECT
RM_LIVE.EMPLOYEE.EMPNO, RM_LIVE.EMPNAME.FIRSTNAME,
RM_LIVE.EMPNAME.LASTNAME, RM_LIVE.CRWBASE.BASE ,RM_LIVE.CRWCAT.crwcat AS "Rank",
nvl(nullif(MAX(CASE WHEN RM_LIVE.CRWSPECFUNC.IDCRWSPECFUNC IN (29,721) THEN 0.25 ELSE 1 END),0),1) *
nvl(nullif(MAX(CASE WHEN RM_LIVE.CRWSPECFUNC.IDCRWSPECFUNC IN (31,723) THEN 0.75 ELSE 1 END),0),1) *
nvl(nullif(MAX(CASE WHEN RM_LIVE.CRWSPECFUNC.IDCRWSPECFUNC = 861 THEN 0.80 ELSE 1 END),0),1) *
nvl(nullif(MAX(CASE WHEN RM_LIVE.CRWSPECFUNC.IDCRWSPECFUNC IN (17,302,16) THEN 0.85 ELSE 1 END),0),1) *
nvl(nullif(MAX(CASE WHEN RM_LIVE.CRWSPECFUNC.IDCRWSPECFUNC IN (3,7) THEN 0.90 ELSE 1 END),0),1)*
nvl(nullif(MAX(CASE WHEN RM_LIVE.CRWSPECFUNC.IDCRWSPECFUNC IN (921,301,30,722,601,581) THEN 0.50 ELSE 1 END),0),1) *
nvl(nullif(MAX(CASE WHEN RM_LIVE.CRWSPECFUNC.IDCRWSPECFUNC IN (2,1, 4) THEN 0.70 ELSE 1 END),0),1) *
nvl(nullif(MIN(CASE WHEN RM_LIVE.CRWSPECFUNC.IDCRWSPECFUNC IN (1,2) then 0 else 1 END) *
MAX(CASE WHEN RM_LIVE.CRWSPECFUNC.IDCRWSPECFUNC IN (4) then 0.20 else 0 END),0),1) AS "FTE VALUE"
FROM RM_LIVE.EMPBASE,
RM_LIVE.EMPLOYEE,
RM_LIVE.CRWBASE,
RM_LIVE.EMPNAME,
RM_LIVE.CRWSPECFUNC,
RM_LIVE.EMPSPECFUNC,RM_LIVE.EMPQUALCAT,RM_LIVE.CRWCAT
where RM_LIVE.EMPBASE.IDEMPNO = RM_LIVE.EMPLOYEE.IDEMPNO
AND RM_LIVE.EMPBASE.IDCRWBASE = RM_LIVE.CRWBASE.IDCRWBASE
AND RM_LIVE.EMPLOYEE.IDEMPNO = RM_LIVE.EMPNAME.IDEMPNO
AND RM_LIVE.EMPSPECFUNC.IDCRWSPECFUNC =RM_LIVE.CRWSPECFUNC.IDCRWSPECFUNC
AND RM_LIVE.EMPSPECFUNC.IDEMPNO =RM_LIVE.EMPLOYEE.IDEMPNO
AND RM_LIVE.EMPQUALCAT.IDEMPNO=RM_LIVE.EMPLOYEE.IDEMPNO
AND RM_LIVE.CRWCAT.IDCRWCAT = RM_LIVE.EMPQUALCAT.IDCRWCAT
AND RM_LIVE.CRWCAT.crwcat IN ('CP','FO','CM','MC')
AND RM_LIVE.CRWBASE.BASE <> 'XYZ'
AND RM_LIVE.CRWSPECFUNC.IDCRWSPECFUNC IN
('921','2' ,'1','301','17','4','3','7','302' ,'861','31',
'723','30','722 ','29 ','721','16','601','581')
AND RM_LIVE.EMPBASE.STARTDATE <= SYSDATE
AND RM_LIVE.EMPBASE.ENDDATE >= SYSDATE
AND RM_LIVE.EMPSPECFUNC.STARTDATE <= SYSDATE
AND RM_LIVE.EMPSPECFUNC.ENDDATE >= SYSDATE
AND RM_LIVE.EMPNAME.FROMDATE <=SYSDATE
AND RM_LIVE.EMPQUALCAT.STARTDATE <= SYSDATE
AND RM_LIVE.EMPQUALCAT.ENDDATE >= SYSDATE
GROUP BY RM_LIVE.EMPLOYEE.EMPNO, RM_LIVE.EMPNAME.FIRSTNAME,
RM_LIVE.EMPNAME.LASTNAME, RM_LIVE.CRWBASE.BASE,RM_LIVE.CRWCAT.crwcat;
```
[](https://i.stack.imgur.com/b2eKj.jpg)
[](https://i.stack.imgur.com/St9fy.jpg)
|
According to the desired-result comment, try this:
```
SELECT [id]
,[name]
, r = max(CASE WHEN [code] IN (1,2,4) then 100 else 0 end)
+ max(CASE WHEN [code] IN (8) then 80 else 0 end)
FROM
-- your table here
(values (1, 'ali',4)
,(1, 'ali',1)
,(1, 'ali',8)
) as t(id, name,code)
GROUP BY id, name;
```
**EDIT** another story for excluding something.
Any of 1,2,4 give 100 plus if it was only 4 without (1,2) add 400.
```
SELECT [id]
,[name]
, r = max(CASE WHEN [code] IN (1,2,4) then 100 else 0 end)
+ min(CASE WHEN [code] IN (1,2) then 0 else 1 end)
* max(CASE WHEN [code] IN (4) then 400 else 0 end)
+ max(CASE WHEN [code] IN (8) then 80 else 0 end)
FROM
-- your table here
(values (1, 'ali',4)
,(1, 'ali',1)
,(1, 'ali',8)
,(2, 'ali',4)
,(2, 'ali',8)
) as t(id, name,code)
GROUP BY id, name;
```
**EDIT 2** If you need to multiply the scores, replace + with \* and convert 0 into 1.
```
SELECT [id]
,[name]
,r = isnull(nullif(
max(CASE WHEN [code] IN (1,2,4) then 100 else 0 end)
,0),1)
* isnull(nullif(
min(CASE WHEN [code] IN (1,2) then 0 else 1 end)
* max(CASE WHEN [code] IN (4) then 400 else 0 end)
,0),1)
* isnull(nullif(
max(CASE WHEN [code] IN (8) then 80 else 0 end)
,0),1)
FROM
-- your table here
(values (1, 'ali',4)
,(1, 'ali',1)
,(1, 'ali',8)
,(2, 'ali',4)
,(2, 'ali',8)
) as t(id, name,code)
GROUP BY id, name;
```
|
You're **already** selecting from the `testcode` table - no need to do any subqueries in your `CASE` expression - just use this code:
```
SELECT
[id], [name],
SUM(CASE
WHEN [code] IN (1, 2, 4)
THEN 100
WHEN [code] = 8
THEN 80
END) AS [total]
FROM
[Test].[dbo].[testcode] AS t
GROUP BY
id, name
```
|
SQL Error "Cannot perform an aggregate function on an expression containing an aggregate or a sub query."
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2005",
"sql-server-2008-r2",
""
] |
I am working on performance tuning all the slow running queries. I am new to Oracle have been using sql server for a while. Can someone help me tune the query to make it run faster.
```
Select distinct x.a, x.b
from xyz_view x
where x.date_key between 20101231 AND 20160430
```
Appreciate any help or suggestions
|
First, I'd start by looking at why the `DISTINCT` is there. In my experience many developers tack on the `DISTINCT` because they know that they need unique results, but don't actually understand why they aren't already getting them.
Second, a clustered index on the column would be ideal **for this specific query** because it puts all of the rows right next to each other on disk and the server can just grab them all at once. The problem is, that might not be possible because you already have a clustered index that's good for other uses. In that case, try a non-clustered index on the date column and see what that does.
Keep in mind that indexing has wide-ranging effects, so using a single query to determine indexing isn't a good idea.
|
I would also add if you are pulling from a VIEW, you should really investigate the design of the view. It typically has a lot of joins that may not be necessary for your query. In addition, if the view is needed, you can look at creating an indexed view which can be very fast.
|
How to Performance tune a query that has Between statement for range of dates
|
[
"",
"sql",
"sql-server",
"oracle",
"performance",
"query-tuning",
""
] |
I'm looking for any way to be able to round or trunc the numbers to 2 digits after comma. I tried with `round`, `trunc` and `to_char`. But didn't get what I wanted.
```
select round(123.5000,2) from dual;
select round(123.5000,2) from dual;
```
Works fine, but when I have zero as the second digit after the comma, I get only one digit after the comma in the output number.
```
select to_char(23.5000, '99.99') from dual;
```
Works fine, but if the number before comma has 3 digits, I'm getting '###' as output.
Apart from that, I'm getting spaces at the beginning. Is there a clean way to remove these spaces?
I'm looking for a way to always get a number with two digits after the comma, for all numbers (1, 10, 100, etc.).
|
You can use [the `FM` number format modifier](http://docs.oracle.com/cd/E11882_01/server.112/e41084/sql_elements004.htm#SQLRF00216) to suppress the leading spaces, but note that you also then need to use `.00` rather than `.99`, and you may want the last element of the format model before the decimal point to be a zero too if you want numbers less that 1 to be shown as, say, `0.50` instead of `.50`:
```
with t (n) as (
select 123.5678 from dual
union all select 123.5000 from dual
union all select 23.5000 from dual
union all select 0 from dual
union all select 1 from dual
union all select 10 from dual
union all select 100 from dual
)
select n,
round(n, 2) as n2,
to_char(round(n, 2), '99999.99'),
to_char(round(n, 2), 'FM99999.00') as str2,
to_char(round(n, 2), 'FM99990.00') as str3
from t;
N N2 TO_CHAR(R STR2 STR3
---------- ---------- --------- --------- ---------
123.5678 123.57 123.57 123.57 123.57
123.5 123.5 123.50 123.50 123.50
23.5 23.5 23.50 23.50 23.50
0 0 .00 .00 0.00
1 1 1.00 1.00 1.00
10 10 10.00 10.00 10.00
100 100 100.00 100.00 100.00
```
You don't strictly need the `round()` as well, since that's the default behaviour, but it doesn't hurt to be explicit (aside from a tiny performance impact from the extra function call, perhaps).
This gives you a string, not a number. A number does not have trailing zeros. It doesn't make sense to describe an actual number in those terms. It only makes sense to have the trailing zeros when you're converting the number to a string for display.
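The same point — that two trailing decimals are a property of the display string, not of the number — holds in any language and is easy to verify; a quick Python check:

```python
# Rounding yields a number, which drops trailing zeros when displayed;
# fixed-point *formatting* is what pins two digits after the separator.
values = [123.5678, 123.5000, 23.5000, 0, 1, 10, 100]
formatted = [f"{round(v, 2):.2f}" for v in values]
print(formatted)
```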
|
Have you tried with a cast operation?
You can try for example with this, obviously substituting 30,2 with your desired precision.
```
SELECT CAST (someNumber AS DECIMAL(30,2)) FROM dual
```
You can find the documentation of the cast operation for the oracle sql [here](https://docs.oracle.com/javadb/10.8.3.0/ref/rrefsqlj33562.html)
|
Rounding numbers to 2 digits after comma in oracle
|
[
"",
"sql",
"oracle",
"oracle11g",
""
] |
So I have a situation where a table `Partners` has a one-to-one relationship with a table called `Regions` and also a one-to-many relationship with the same table through an intersection table called `Destinations`. My nice naming conventions below should help you figure out what I mean.
```
Regions
======================
id | name
======================
1 | "United States"
2 | "Mother Russia"
3 | "Belize"
Partners
=================================
id | name | region_id
=================================
1 | "B Obama" | 1
2 | "V Putin" | 2
Destinations
==============================
partner_id | region_id
==============================
1 | 2
1 | 3
2 | 1
2 | 3
```
What I want is a query that returns a result like
```
=======================================================
partner_name | partner_region | destination_region
=======================================================
"B Obama" | "United States" | "Mother Russia"
"B Obama" | "United States" | "Belize"
"V Putin" | "Mother Russia" | "United States"
"V Putin" | "Mother Russia" | "Belize"
```
The problem is that I can't figure out how to join twice on the `Regions` table in order to make this query. I know that what I want is like
```
SELECT Partners.name AS partner_name,
Regions.name AS partner_region,
??? AS destination_region
FROM
Partners INNER JOIN Regions ON Partners.region_id=Regions.id
INNER JOIN Destinations ON Partners.id=Destinations.partner_id
```
but what I'm confused on is what to fill in for `???` above because `Regions` is already joined to `Partners`.
|
You need another `join`:
```
SELECT p.name AS partner_name,
rd.name AS partner_region,
rd.name AS destination_region
FROM Partners p INNER JOIN
Regions rp
ON p.region_id = rp.id INNER JOIN
Destinations d
ON p.id = d.partner_id INNER JOIN
Regions rd
ON d.region_id = rd.id;
```
Note that table aliases make the query easier to write and to read.
|
You'll need to add another join from Destinations to Regions again:
```
SELECT Partners.name AS partner_name,
Regions.name AS partner_region,
Regions2.name AS destination_region
FROM
Partners INNER JOIN Regions ON Partners.region_id=Regions.id
INNER JOIN Destinations ON Partners.id=Destinations.partner_id
INNER JOIN Regions AS Regions2 on Destinations.region_id = Regions2.id
```
Note that I added a second alias to the Regions table named `Regions2`. You need that alias so you can be unambiguous in telling the server which columns you want (e.g., `Regions2.name`).
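Both answers use the same alias-the-table-twice technique; with the question's sample data it can be run end to end in SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE regions (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE partners (id INTEGER PRIMARY KEY, name TEXT, region_id INTEGER);
    CREATE TABLE destinations (partner_id INTEGER, region_id INTEGER);
    INSERT INTO regions VALUES (1, 'United States'), (2, 'Mother Russia'), (3, 'Belize');
    INSERT INTO partners VALUES (1, 'B Obama', 1), (2, 'V Putin', 2);
    INSERT INTO destinations VALUES (1, 2), (1, 3), (2, 1), (2, 3);
""")
rows = conn.execute("""
    SELECT p.name, rp.name, rd.name
    FROM partners p
    JOIN regions rp     ON p.region_id = rp.id   -- partner's own region
    JOIN destinations d ON p.id = d.partner_id
    JOIN regions rd     ON d.region_id = rd.id   -- destination region
    ORDER BY p.id, rd.id
""").fetchall()
print(rows)
```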
|
How can I join on a table twice and reference a column name differently each time?
|
[
"",
"sql",
"sql-server",
"t-sql",
"database-design",
""
] |
I have a messages table that looks something like this:
```
| id | sender_id | recipient_id |
|-------------------|---------------| ...
| 1 | 23 | 20 |
| 2 | 11 | 5 | ...
| 3 | 20 | 23 |
| 4 | 23 | 20 | ...
| 5 | 7 | 11 |
```
I'm hoping to find the first message between any two user IDs (the IDs in the `sender_id` and `recipient_id` columns). So the result for the above sample would be:
```
| id | sender_id | recipient_id |
|-------------------|---------------| ...
| 1 | 23 | 20 |
| 2 | 11 | 5 | ...
| 5 | 7 | 11 |
```
At first I thought I could group by a checksum of `sender_id` and `recipient_id`, and then take the min message ID (`id`), but because checksum is different depending upon order of the inputs, that returns both the first message (the intro) and the first reply. Is there an alternative to checksum in which order of inputs is irrelevant?
Or maybe there's a better way to arrive at a solution.
Any help is much appreciated.
|
You can use `ROW_NUMBER`:
```
WITH CTE AS(
SELECT *,
ROW_NUMBER() OVER(
PARTITION BY
CASE WHEN sender_id < recipient_id THEN sender_id ELSE recipient_id END,
CASE WHEN sender_id > recipient_id THEN sender_id ELSE recipient_id END
ORDER BY id
) AS rn
FROM messages
)
SELECT
id, sender_id, recipient_id
FROM CTE
WHERE rn = 1
ORDER BY id
```
You need to partition by the smaller id and then the greater one, using `CASE` expressions.
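SQLite happens to have scalar two-argument `MIN`/`MAX` functions, so the same pair-normalisation can be tried there directly (window functions require SQLite 3.25+; the `CASE` expressions above are the portable SQL Server equivalent):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (id INTEGER PRIMARY KEY, sender_id INTEGER, recipient_id INTEGER)")
conn.executemany("INSERT INTO messages VALUES (?, ?, ?)",
                 [(1, 23, 20), (2, 11, 5), (3, 20, 23), (4, 23, 20), (5, 7, 11)])
rows = conn.execute("""
    SELECT id, sender_id, recipient_id
    FROM (SELECT *,
                 ROW_NUMBER() OVER (
                     PARTITION BY MIN(sender_id, recipient_id),  -- scalar min/max in SQLite
                                  MAX(sender_id, recipient_id)
                     ORDER BY id) AS rn
          FROM messages)
    WHERE rn = 1
    ORDER BY id
""").fetchall()
print(rows)
```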
|
```
; WITH CTE(ID,SENDER_ID, RECIEPENT_ID) AS
(
SELECT 1,23,20 UNION
SELECT 2,11,5 UNION
SELECT 3,20,23 UNION
SELECT 4,23,20 UNION
SELECT 5,7 ,11
)
SELECT *, ROW_NUMBER() OVER (PARTITION BY ABS(SENDER_ID - RECIEPENT_ID) ORDER BY ID) RN FROM CTE
```
From this, use WHERE RN = 1.
|
How to find the first message between 2 parties in SQL?
|
[
"",
"sql",
"sql-server",
""
] |
I thought it was a bug but after reading this article <http://www.codeproject.com/Tips/668042/SQL-Server-2012-Auto-Identity-Column-Value-Jump-Is>, I found that it's a new feature of SQL Server 2012.
This feature increments your last identity column value by 1000 (for ints) on new rows (10000 for bigints) automatically.
[](https://i.stack.imgur.com/Yf6ty.png)
I am still trying the solution given in the article, but I don't have any problem if this jump happens on the client side, because I am showing a hashed version of the IDs to the client. It's his own demand, not mine.
But I am wondering: what if the values of these identity columns go beyond the range of the data type (int or bigint)? How does it handle the range and size of the column?
|
Existing Identity columns will fail with "Server: Msg 8115, Level 16, State 1, Line 2 Arithmetic overflow error converting IDENTITY to data type int. Arithmetic overflow occurred." See <http://www.sql-server-performance.com/2006/identity-integer-scope/> for discussion.
There isn't a reason to suspect that the identity jump will behave differently. I would not want it to go and hunt for unused identities in an earlier sequence.
|
Why don't you use a sequence in SQL Server 2012?
Sample code for a sequence is as follows; note that you don't need ADMIN permission to create a sequence.
```
CREATE SEQUENCE SerialNumber AS BIGINT
START WITH 1
INCREMENT BY 1
MINVALUE 1
MAXVALUE 9999999
CYCLE;
GO
```
If you need to add leading zeros to the sequence value, simply do it with the following code:
```
RIGHT ('0000' + CAST (NEXT VALUE FOR SerialNumber AS VARCHAR(5)), 4) AS SerialNumber
```
|
How new Identity Jump feature of Microsoft SQL Server 2012 handles the range of data type?
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
"auto-increment",
"identity-column",
""
] |
I am new to Microsoft SQL Server and need a query to return all records listed in the WHERE clause even duplicates. What I have will only return 3 rows.
I am reading in and parsing a text file using C#, and with that text file I am creating a query to get results from a database, then using the results to rebuild that text file. The original text file contains duplicate rows, and each row needs to be associated with the data retrieved from the database.
```
SELECT tbl1.HdrCode, tbl1.HdrName
FROM Table1 tbl1
WHERE tbl1.HdrCode
IN ('000520',
'000531',
'000531',
'000636')
```
What I need returned is :
```
000520 Name1
000531 Name2
000531 Name2
000636 Name3
```
Thanks
|
Try something like this.
You need an inline table with your values and a `JOIN` with your table instead of the `IN` clause:
```
SELECT tbl1.*
FROM (VALUES ('000520'),
             ('000531'),
             ('000531'),
             ('000636')) tc (hdrcode)
JOIN table1 tbl1
  ON tc.hdrcode = tbl1.hdrcode
```
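The key point — that joining against a values list preserves duplicates, while `IN` collapses them — can be demonstrated in SQLite via a CTE over `VALUES` (table and names are the question's sample data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (HdrCode TEXT, HdrName TEXT);
    INSERT INTO table1 VALUES ('000520', 'Name1'), ('000531', 'Name2'), ('000636', 'Name3');
""")
rows = conn.execute("""
    WITH tc(hdrcode) AS (VALUES ('000520'), ('000531'), ('000531'), ('000636'))
    SELECT t.HdrCode, t.HdrName
    FROM tc
    JOIN table1 t ON t.HdrCode = tc.hdrcode
""").fetchall()
print(rows)   # '000531' appears twice, once per value-list row
```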
|
This is not how things work in SQL. A query will only return what is there. If you only have 3 rows in your table and only one of them has `HdrCode 000531` it will be returned only once by that kind of query.
---
If you only want to solve this specific example, you could use:
```
SELECT tbl1.HdrCode, tbl1.HdrName FROM Table1 tbl1 WHERE tbl1.HdrCode = '000520'
UNION ALL
SELECT tbl1.HdrCode, tbl1.HdrName FROM Table1 tbl1 WHERE tbl1.HdrCode = '000531'
UNION ALL
SELECT tbl1.HdrCode, tbl1.HdrName FROM Table1 tbl1 WHERE tbl1.HdrCode = '000531'
UNION ALL
SELECT tbl1.HdrCode, tbl1.HdrName FROM Table1 tbl1 WHERE tbl1.HdrCode = '000636'
```
|
SQL query results Need to Return all records in WHERE clause even duplicates
|
[
"",
"sql",
"sql-server",
""
] |
I have an order table that looks like the below
```
order_id pre_pay_time pre_pay_amount pre_pay_type final_payment_time final_payment_amount final_payment_type
==============================================================================================================================
1 1234123413 10 1 1234123913 25 2
2 1234123414 25 1 0 100 0
3 1234123417 75 2 1234125416 155 1
4 0 0 0 1234126418 60 2
```
Here the customer can either make a pre payment on the order and then pay the remainder at the end, or they can just pay the full amount at the end.
The pre\_pay\_time and final\_payment\_time columns are UNIX timestamps.
What I'm trying to do is produce an output table that has the sum amounts for each calendar day. To do this I am joining with a calendar table.
Currently I am able to successfully output the data only for the sum of the final payment, as well as sums for cash and card payment (based on final\_payment\_type column) for each day of the month.
```
SELECT calendar.datefield AS DATE, IFNULL( SUM( orders.final_payment_amount ) , 0 ) AS total_sales,
IFNULL(sum(if(final_payment_type=1,orders.final_payment_amount,0)),0)AS total_cash,
IFNULL(sum(if(final_payment_type=2,orders.final_payment_amount,0)),0)AS total_card,
count(orders.id) AS order_counter
FROM orders
RIGHT JOIN calendar ON ( DATE( FROM_UNIXTIME( cast(orders.final_payment_time as signed) ) ) = calendar.datefield )
WHERE calendar.datefield >= '2016-4-1' AND calendar.datefield <= '2016-4-31'
GROUP BY DATE
```
What I'm hoping to do is expand the query so that I also get sum values for each day for the pre\_pay\_amount based on the pre\_pay\_time. This will allow me to calculate total revenue for the day as a combination of final\_payment\_amount and pre\_pay\_amount.
Since the pre payment may be made on a different day to the final payment I believe that I will have to do another JOIN to the same calendar table using the pre\_pay\_time column.
Is this possible to do with one query?
|
You could do something like this. You alias your pre- and final-payment queries and then join them by datefield:
```
SELECT *
FROM ( SELECT calendar.datefield AS FinalDate,
IFNULL (SUM (orders.final_payment_amount), 0)
AS total_final_sales,
IFNULL (
SUM (
if (final_payment_type = 1,
orders.final_payment_amount,
0)),
0)
AS total_final_cash,
IFNULL (
SUM (
if (final_payment_type = 2,
orders.final_payment_amount,
0)),
0)
AS total_final_card,
COUNT (orders.id) AS order_final_counter
FROM orders
RIGHT JOIN calendar
ON (DATE (
FROM_UNIXTIME (
CAST(orders.final_payment_time AS signed))) =
calendar.datefield)
WHERE calendar.datefield >= '2016-4-1'
AND calendar.datefield <= '2016-4-31'
GROUP BY FinalDate) finalPay,
( SELECT calendar.datefield AS PreDate,
IFNULL (SUM (orders.pre_payment_amount), 0) AS total_pre_sales,
IFNULL (
SUM (
if (pre_payment_type = 1, orders.pre_payment_amount, 0)),
0)
AS total_pre_cash,
IFNULL (
SUM (
if (pre_payment_type = 2, orders.pre_payment_amount, 0)),
0)
AS total_pre_card,
COUNT (orders.id) AS order_pre_counter
FROM orders
RIGHT JOIN calendar
ON (DATE (
FROM_UNIXTIME (
CAST(orders.pre_payment_time AS signed))) =
calendar.datefield)
WHERE calendar.datefield >= '2016-4-1'
AND calendar.datefield <= '2016-4-31'
GROUP BY PreDate) prePay
WHERE prePay.PreDate = finalPay.FinalDate
```
|
You can do it with a UNION. The second query will be similar to the one you have for final payments and use the pre\_payment columns. Then connect them with a UNION and SUM with a HAVING clause. Here's a good example of how to do that: [Sum a union query](https://stackoverflow.com/questions/5613728/sum-a-union-query)
|
MySQL Multiple Joins to a Calendar Table For Payments Data
|
[
"",
"mysql",
"sql",
""
] |
I googled a lot, but I did not find an exact, straightforward answer with an example.
An example for this would be most helpful.
|
The primary key is a unique key in your table that you choose that best uniquely identifies a record in the table. All tables should have a primary key, because if you ever need to update or delete a record you need to know how to uniquely identify it.
A surrogate key is an artificially generated key. They're useful when your records essentially have no natural key (such as a `Person` table, since it's possible for two people born on the same date to have the same name, or records in a log, since it's possible for two events to happen such they they carry the same timestamp). Most often you'll see these implemented as integers in an automatically incrementing field, or as GUIDs that are generated automatically for each record. ID numbers are almost always surrogate keys.
Unlike primary keys, not all tables need surrogate keys, however. If you have a table that lists the states in America, you don't really need an ID number for them. You could use the state abbreviation as the primary key.
The main advantage of surrogate keys is that they're easy to guarantee as unique. The main disadvantage is that they don't carry any meaning. There's nothing about "28" that means Wisconsin, for example, but when you see 'WI' in the State column of your Address table, you know which state you're talking about without needing to look up which state is which in your State table.
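To make the distinction concrete, here is a small sketch using SQLite through Python (the tables, names, and dates are made up for illustration): the person table needs a surrogate key because two people can share every natural attribute, while the state table can use its abbreviation as a natural primary key.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Surrogate key: an auto-generated integer with no business meaning.
cur.execute("""CREATE TABLE person (
    id   INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT,
    born TEXT)""")

# Natural key: the state abbreviation itself uniquely identifies the row.
cur.execute("CREATE TABLE state (code TEXT PRIMARY KEY, name TEXT)")

# Two people with identical natural attributes still get distinct ids.
cur.execute("INSERT INTO person (name, born) VALUES ('John Smith', '1980-01-01')")
cur.execute("INSERT INTO person (name, born) VALUES ('John Smith', '1980-01-01')")
cur.execute("INSERT INTO state VALUES ('WI', 'Wisconsin')")

ids = [r[0] for r in cur.execute("SELECT id FROM person ORDER BY id")]
```

Here `ids` comes back as two distinct surrogate values even though the natural data is identical, which is exactly why the surrogate key is needed.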
|
A **surrogate key** is a made up value with the sole purpose of uniquely identifying a row. Usually, this is represented by an auto incrementing ID.
Example code:
```
CREATE TABLE Example
(
SurrogateKey INT IDENTITY(1,1) -- A surrogate key that increments automatically
)
```
A **primary key** is the identifying column or set of columns of a table. **Can be surrogate key** or any other unique combination of columns (for example a compound key). MUST be unique for any row and cannot be `NULL`.
Example code:
```
CREATE TABLE Example
(
PrimaryKey INT PRIMARY KEY -- A primary key is just an unique identifier
)
```
|
What is the difference between a primary key and a surrogate key?
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2005",
"sql-server-2012",
""
] |
I am using the below query to insert data from one table to another:
```
DECLARE @MATNO NVARCHAR(10), @GLOBALREV INT, @LOCALREP INT
SET @MATNO = '7AGME'
SET @GLOBALREV = 11
SET @LOCALREP = 1
INSERT INTO CIGARETTE_HEADER
VALUES
(SELECT *
FROM CIGARETTE_HEADER_BK1
WHERE MATERIAL_NUMBER = @MATNO
AND GLOBAL_REVISION = @GLOBALREV
AND LOCAL_REVISION = @LOCALREP)
```
The column in both the tables are same, but I am getting the following error:
> Msg 156, Level 15, State 1, Line 7
> Incorrect syntax near the keyword 'SELECT'.
>
> Msg 102, Level 15, State 1, Line 7
> Incorrect syntax near ')'.
Can you please let me know the mistake here?
|
You don't need `VALUES` keyword:
```
INSERT INTO CIGARETTE_HEADER
SELECT * FROM CIGARETTE_HEADER_BK1
WHERE MATERIAL_NUMBER = @MATNO AND
GLOBAL_REVISION = @GLOBALREV AND
LOCAL_REVISION = @LOCALREP
```
It is also preferable to *explicitly* cite every field name of both tables participating in the `INSERT` statement.
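A runnable sketch of the same pattern, using SQLite through Python with a simplified two-column version of the tables (the table and column names are shortened for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE cigarette_header    (material_number TEXT, global_revision INTEGER)")
cur.execute("CREATE TABLE cigarette_header_bk (material_number TEXT, global_revision INTEGER)")
cur.executemany("INSERT INTO cigarette_header_bk VALUES (?, ?)",
                [("7AGME", 11), ("7AGME", 12), ("OTHER", 11)])

# INSERT ... SELECT: no VALUES keyword, and the columns are listed explicitly.
cur.execute("""
    INSERT INTO cigarette_header (material_number, global_revision)
    SELECT material_number, global_revision
    FROM cigarette_header_bk
    WHERE material_number = ? AND global_revision = ?
""", ("7AGME", 11))

copied = cur.execute("SELECT COUNT(*) FROM cigarette_header").fetchone()[0]
```

Only the single matching backup row is copied into the target table.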
|
You don't need to use the VALUES () notation. You only use that when you want to insert literal values, and only one row at a time.
Example:

```
INSERT INTO Table
VALUES ('value1', 12, newid());
```
Also, I recommend writing the names of the columns you plan to insert into, like this:
```
INSERT INTO Table
(String1, Number1, id)
VALUES('value1',12, newid());
```
In your case, do the same but only with the select:
```
DECLARE @MATNO NVARCHAR(10), @GLOBALREV INT, @LOCALREP INT;
SET @MATNO = '7AGME';
SET @GLOBALREV = 11;
SET @LOCALREP = 1;
INSERT INTO CIGARETTE_HEADER
(ColumnName1, ColumnName2)
SELECT ColumnNameInTable1, ColumnNameInTable2
FROM CIGARETTE_HEADER_BK1
WHERE MATERIAL_NUMBER = @MATNO
AND GLOBAL_REVISION = @GLOBALREV
  AND LOCAL_REVISION = @LOCALREP;
```
|
Insert into from select query error in SQL Server
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have this 2 simple tables
[](https://i.stack.imgur.com/y97OE.png)
I want to select unmatching data from SAMPLE1 by comparing FruitName in SAMPLE2
So far I have tried
```
SELECT * FROM SAMPLE1,SAMPLE2 WHERE SAMPLE1.FruitName NOT LIKE '%' + dbo.SAMPLE2.FruitName +'%'
```
But this gives me 7 records in total
[](https://i.stack.imgur.com/HtxYq.png)
What I want the output is
[](https://i.stack.imgur.com/wN6qy.png)
|
```
SELECT *
FROM SAMPLE1 s1
WHERE NOT EXISTS (
SELECT NULL
FROM SAMPLE2 s2
WHERE s1.FruitName LIKE '%' + s2.FruitName + '%'
)
```
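The same anti-join can be exercised end-to-end in SQLite through Python (note that SQLite concatenates strings with `||` instead of `+`; the sample data is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sample1 (fruitname TEXT)")
cur.execute("CREATE TABLE sample2 (fruitname TEXT)")
cur.executemany("INSERT INTO sample1 VALUES (?)",
                [("Green Apple",), ("Banana",), ("Mango",)])
cur.executemany("INSERT INTO sample2 VALUES (?)", [("Apple",), ("Banana",)])

# NOT EXISTS anti-join: keep rows of sample1 with no LIKE match in sample2.
rows = cur.execute("""
    SELECT fruitname FROM sample1 s1
    WHERE NOT EXISTS (
        SELECT 1 FROM sample2 s2
        WHERE s1.fruitname LIKE '%' || s2.fruitname || '%'
    )
""").fetchall()
```

Only the unmatched fruit survives the filter.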
|
Maybe that help:
```
select SAMPLE1.* from SAMPLE1
Left join SAMPLE2 ON SAMPLE1.fruitName LIKE concat('%', SAMPLE2.fruitName, '%')
Where SAMPLE2.id is null
```
[SQLFiddle](http://sqlfiddle.com/#!9/5269ac/11/0)
|
Find unmatched data in SQL
|
[
"",
"sql",
"sql-server",
""
] |
Is it possible to display just certain table names in the query:
```
USE [WebContact]
SELECT COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'memberEmails'
```
Most of the time I need all the column names, but there are certain situations where I only need specific ones.
When I try doing the following:
```
USE [WebContact]
SELECT COLUMN_NAME, ContactDateTime
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'memberEmails'
```
It tell me that
> Invalid column name 'ContactDateTime'
even though that is one of the column names in the table.
Is this possible to do?
|
The column `ContactDateTime` may be a column in your table but it is not a column in the `INFORMATION_SCHEMA.COLUMNS` view.
Since it is not a column there, SQL Server is going to error out saying that it is invalid.
I think what you're trying to do is add another `WHERE` clause to your statement:
```
USE [WebContact]
SELECT COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'memberEmails'
AND [COLUMN_NAME] = 'ContactDateTime'; -- Here!
```
Or if you want to add multiple columns...
```
USE [WebContact]
SELECT COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'memberEmails'
AND [COLUMN_NAME]
IN ('ContactDateTime', 'column2', 'column3', ... 'column(n)'); -- Here!
```
Also see here for the case against using [INFORMATION\_SCHEMAS](https://sqlblog.org/2011/11/03/the-case-against-information_schema-views).
|
If ContactDateTime is a column that you are looking for in the memberEmails table, then you can do this:
```
SELECT COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'memberEmails'
and COLUMN_NAME='ContactDateTime'
```
|
SQL Server : getting certain table names from query
|
[
"",
"sql",
"sql-server",
"vb.net",
"information-schema",
"tablename",
""
] |
**EDIT**
*the values in the table can be negative numbers (sorry for the oversight when asking the question)*
Having exhausted all search efforts, I am very stuck with the following:
I would like to calculate a running total based on the initial value. For instance:
My table would look like:
```
Year Percent Constant
==== ===== ========
2000 1.40 100
2001 -1.08 100
2002 1.30 100
```
And the desired results would be:
```
Year Percent Constant RunningTotal
==== ====== ======== ============
2000 1.40 100 140
2001 -1.08 100 128.8
2002 1.30 100 167.44
```
Taking the calculated value of 1.40\*100 and applying the percent of the next line, -1.08, and so on.
I am using Sql Server 2012. I've looked into using a common table expression, but can't seem to get the correct syntax sadly.
|
You can accomplish this task using a recursive CTE
```
;WITH values_cte AS (
SELECT [Year]
,[Percent]
,[Constant]
,CASE WHEN [v].[Percent] < 0 THEN
[v].[Constant] - (([v].[Percent] + 1) * [v].[Constant])
ELSE
[v].[Percent] * [v].[Constant]
END
AS [RunningTotal]
FROM [#tmp_Values] v
WHERE [v].[Year] = 2000
UNION ALL
SELECT v2.[Year]
,v2.[Percent]
,v2.[Constant]
,CASE WHEN [v2].[Percent] < 0 THEN
[v].[RunningTotal] + (([v2].[Percent] + 1) * [v].[RunningTotal])
ELSE
[v2].[Percent] * [v].[RunningTotal]
END
AS [RunningTotal]
FROM values_cte v
INNER JOIN [#tmp_Values] v2 ON v2.[Year] = v.[Year] + 1
)
SELECT *
FROM [values_cte]
```
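The same recursion can be tried out in SQLite through Python. This is an illustrative sketch: the columns are renamed (`percent` and `year` can clash with reserved words), and the numbers reproduce the example table from the question.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE vals (yr INTEGER, pct REAL, konst REAL)")
cur.executemany("INSERT INTO vals VALUES (?, ?, ?)",
                [(2000, 1.40, 100), (2001, -1.08, 100), (2002, 1.30, 100)])

rows = cur.execute("""
    WITH RECURSIVE rt AS (
        -- anchor row: the first year
        SELECT yr, pct,
               CASE WHEN pct < 0 THEN konst + (pct + 1) * konst
                    ELSE pct * konst END AS running
        FROM vals WHERE yr = 2000
        UNION ALL
        -- each step feeds the previous running total forward
        SELECT v.yr, v.pct,
               CASE WHEN v.pct < 0 THEN rt.running + (v.pct + 1) * rt.running
                    ELSE v.pct * rt.running END
        FROM rt JOIN vals v ON v.yr = rt.yr + 1
    )
    SELECT yr, running FROM rt ORDER BY yr
""").fetchall()
```

The running totals come out as 140, 128.8, and 167.44, matching the desired result table.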
|
In SQL Server 2012+, you would use a cumulative sum:
```
select t.*,
(const * sum(1 + percent / 100) over (order by year)) as rolling_sum
from t
order by t.year;
```
EDIT:
Ooops, I notice you really seem to want a cumulative product. Assuming `percent` is always greater than 0, then just use logs:
```
select t.*,
(const * exp(sum(log(1 + percent / 100)) over (order by year))) as rolling_product
from t
order by t.year;
```
|
sql (beginner) - use value calculated from above cell
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
Lets say I have a table like this:
```
--------------------------------------------------
| id | text |
--------------------------------------------------
| 01 | Test string <div src="0124"> |
--------------------------------------------------
| 02 | Another type <div src="667"> test string |
--------------------------------------------------
```
I need to write a SELECT that gets the value that is inside src="" from the string in the text column. So My end result should be:
```
------
|text|
------
|0124|
-----
| 667|
```
In my case, plain [SUBSTRING](https://msdn.microsoft.com/en-us/library/ms187748.aspx) does not suit me, since I don't know my sub-strings in advance.
Also, since I'm looking for a value that comes after a specific sub-string, I can't use a fixed char position.
So how can I get a value that comes after a specific sub-string inside a string?
|
You can use SUBSTRING by finding the initial position of the number and then finding the length of the string:
```
SELECT SUBSTRING(text, (CHARINDEX('src=', text) + 5), (CHARINDEX(CHAR(34) + '>', text) - (CHARINDEX('src=', text) + 5))) AS text
FROM yourTable;
```
This will get your starting position (notice I add 5 to it):
```
(CHARINDEX('src=', text) + 5)
```
The following will get your length:
```
(CHARINDEX(CHAR(34) + '>', text) - (CHARINDEX('src=', text) + 5))
```
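The same start-position/length arithmetic can be checked in SQLite through Python, where `instr` and `substr` stand in for `CHARINDEX` and `SUBSTRING` (both 1-based, like T-SQL); the data reproduces the question's table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (id INTEGER, txt TEXT)")
cur.executemany("INSERT INTO t VALUES (?, ?)", [
    (1, 'Test string <div src="0124">'),
    (2, 'Another type <div src="667"> test string'),
])

# Start 5 characters past 'src="', stop just before the next double quote.
rows = cur.execute("""
    SELECT substr(txt, instr(txt, 'src="') + 5,
                  instr(substr(txt, instr(txt, 'src="') + 5), '"') - 1)
    FROM t ORDER BY id
""").fetchall()
```

This extracts the value between the quotes regardless of its length.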
|
Create function to get numeric
```
CREATE FUNCTION dbo.udf_GetNumeric
(@strAlphaNumeric VARCHAR(256))
RETURNS VARCHAR(256)
AS
BEGIN
DECLARE @intAlpha INT
SET @intAlpha = PATINDEX('%[^0-9]%', @strAlphaNumeric)
BEGIN
WHILE @intAlpha > 0
BEGIN
SET @strAlphaNumeric = STUFF(@strAlphaNumeric, @intAlpha, 1, '' )
SET @intAlpha = PATINDEX('%[^0-9]%', @strAlphaNumeric )
END
END
RETURN ISNULL(@strAlphaNumeric,0)
END
GO
```
Call it
```
/* Run the UDF with different test values */
SELECT dbo.udf_GetNumeric('') AS 'EmptyString';
SELECT dbo.udf_GetNumeric('asdf1234a1s2d3f4@@@') AS 'asdf1234a1s2d3f4@@@';
SELECT dbo.udf_GetNumeric('123456') AS '123456';
SELECT dbo.udf_GetNumeric('asdf') AS 'asdf';
SELECT dbo.udf_GetNumeric(NULL) AS 'NULL';
--In your case
SELECT id, dbo.udf_GetNumeric([text]) from table
GO
```
|
Get dynamic length value in a string after a specific sub-string,
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I've got 2 tables:
adresses, and a log of files I've sent (table named send).
For a given file, I want to get all adresses, and whether they received the file or not.
What I've got so far is this:
```
SELECT *
, CASE
WHEN send.fileid = 1 THEN 1
ELSE send.fileid = NULL
END as file1
FROM send
RIGHT OUTER JOIN `adress`
ON `send`.adressid = `adress`.`id`
```
The problem is, when an adress got two different files, it gets listed twice. How can I alter the statement to get around this?
Example Data
```
*adress*
1 Adrian
2 Christian
3 Max
4 Alex
*file*
1 music
2 video
3 document
*send*
adress:1 file:1
adress:1 file:2 -
adress:3 file:1
adress:4 file:2 -
adress:4 file:3
when i browse the file 2, i want to see:
X Adrian
X Alex
Christian
Max
```
TLDR: I want all my adresses (once) with either the specific file id or null.
Thanks in advance.
|
```
SELECT *
FROM adress
LEFT JOIN send ON send.adressid = adress.id
AND send.fileid =1
LIMIT 0 , 30
```
that seems to be it
|
One way of going about is is putting the condition in a subquery and letting the outer join do all the heavy lifting:
```
SELECT a.*, s.fileid
FROM address a
LEFT JOIN (SELECT fileid, addressid
           FROM send
           WHERE fileid = 1) s ON s.addressid = a.id
```
|
Remove double entry from orignial table when joining
|
[
"",
"mysql",
"sql",
"join",
""
] |
How can I tell the LAG function to get the last "not null" value?
For example, see my table bellow where I have a few NULL values on column B and C.
I'd like to fill the nulls with the last non-null value. I tried to do that by using the LAG function, like so:
```
case when B is null then lag (B) over (order by idx) else B end as B,
```
but that doesn't quite work when I have two or more nulls in a row (see the NULL value on column C row 3 - I'd like it to be 0.50 as the original).
Any idea how can I achieve that?
(it doesn't have to be using the LAG function, any other ideas are welcome)
A few assumptions:
* The number of rows is dynamic;
* The first value will always be non-null;
* Once I have a NULL, is NULL all up to the end - so I want to fill it with the latest value.
Thanks
[](https://i.stack.imgur.com/VMq0m.png)
|
If it is null all the way up to the end, then you can take a shortcut:
```
declare @b varchar(20) = (select top 1 b from table where b is not null order by id desc);
declare @c varchar(20) = (select top 1 c from table where c is not null order by id desc);

select id, isnull(b, @b) as b, isnull(c, @c) as c
from table;
```
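The shortcut translates almost directly to SQLite through Python (`TOP 1` becomes `ORDER BY ... LIMIT 1`, `ISNULL` becomes `COALESCE`; the data is made up, with nulls only at the tail as the question assumes):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (id INTEGER, b REAL, c REAL)")
cur.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    (1, 0.25, 0.50),
    (2, 0.75, None),   # once a column goes null, it stays null to the end
    (3, None, None),
])

# Grab the last non-null value per column.
last_b = cur.execute("SELECT b FROM t WHERE b IS NOT NULL ORDER BY id DESC LIMIT 1").fetchone()[0]
last_c = cur.execute("SELECT c FROM t WHERE c IS NOT NULL ORDER BY id DESC LIMIT 1").fetchone()[0]

# Fill the trailing nulls with those values.
rows = cur.execute("SELECT id, COALESCE(b, ?), COALESCE(c, ?) FROM t ORDER BY id",
                   (last_b, last_c)).fetchall()
```

Every trailing null is filled with the latest non-null value of its column.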
|
You can do it with `outer apply` operator:
```
select t.id,
t1.colA,
t2.colB,
t3.colC
from table t
outer apply(select top 1 colA from table where id <= t.id and colA is not null order by id desc) t1
outer apply(select top 1 colB from table where id <= t.id and colB is not null order by id desc) t2
outer apply(select top 1 colC from table where id <= t.id and colC is not null order by id desc) t3;
```
This will work, regardless of the number of nulls or null "islands". You may have values, then nulls, then again values, again nulls. It will still work.
---
If, however the assumption (in your question) holds:
> Once I have a `NULL`, is `NULL` all up to the end - so I want to fill it with the latest value.
there is a more efficient solution. We only need to find the latest (when ordered by `idx`) values. Modifying the above query, removing the `where id <= t.id` from the subqueries:
```
select t.id,
colA = coalesce(t.colA, t1.colA),
colB = coalesce(t.colB, t2.colB),
colC = coalesce(t.colC, t3.colC)
from table t
outer apply (select top 1 colA from table
where colA is not null order by id desc) t1
outer apply (select top 1 colB from table
where colB is not null order by id desc) t2
outer apply (select top 1 colC from table
where colC is not null order by id desc) t3;
```
|
LAG functions and NULLS
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2016",
""
] |
What is the correct syntax for an upsert with PostgreSQL 9.5? The query below shows the error `column reference "gallery_id" is ambiguous`. Why?
```
var dbQuery = `INSERT INTO category_gallery (
category_id, gallery_id, create_date, create_by_user_id
) VALUES ($1, $2, $3, $4)
ON CONFLICT (category_id)
DO UPDATE SET
category_id = $1,
last_modified_date = $3,
last_modified_by_user_id = $4
WHERE gallery_id = $2`;
```
I tried changing `WHERE gallery_id = $2;` to `WHERE category_gallery.gallery_id = $2;`, and then it shows the error `there is no unique or exclusion constraint matching the ON CONFLICT specification`, but **I don't want to set gallery\_id or category\_id as unique** because I want to make sure both columns match and then do the update....
How to correctly do upsert in postgres 9.5?
If `ON CONFLICT` needs a unique column, should I use some other method, and how?
I want to do the update only when multiple columns conflict together. What is the correct usage?
```
var dbQuery = `INSERT INTO category_gallery (
category_id, gallery_id, create_date, create_by_user_id
) VALUES ($1, $2, $3, $4)
ON CONFLICT (category_id, gallery_id)
DO UPDATE SET
category_id = $1,
last_modified_date = $3,
last_modified_by_user_id = $4
WHERE gallery_id = $2`;
var dbQuery = `INSERT INTO category_gallery (
category_id, gallery_id, create_date, create_by_user_id
) VALUES ($1, $2, $3, $4)
ON CONFLICT (category_id AND gallery_id)
DO UPDATE SET
category_id = $1,
last_modified_date = $3,
last_modified_by_user_id = $4
WHERE gallery_id = $2`;
```
table (category\_id , gallery\_id not unique column)
```
category_id | gallery_id | create_date | create_by_user_id | last_modified_date | last_modified_by_user_id
1 | 1 | ...
1 | 2 | ...
2 | 2 | ...
1 | 3 | ...
```
|
The `ON CONFLICT` construct requires a `UNIQUE` constraint to work. From the documentation on [`INSERT .. ON CONFLICT` clause](http://www.postgresql.org/docs/9.5/static/sql-insert.html#SQL-ON-CONFLICT):
> The optional `ON CONFLICT` clause specifies an alternative action to raising a **unique violation** or **exclusion constraint** violation error. For each individual row proposed for insertion, either the insertion proceeds, or, if an arbiter constraint or index specified by conflict\_target is violated, the alternative conflict\_action is taken. `ON CONFLICT DO NOTHING` simply avoids inserting a row as its alternative action. `ON CONFLICT DO UPDATE` updates the existing row that conflicts with the row proposed for insertion as its alternative action.
Now, the question is not very clear but you probably need a `UNIQUE` constraint on the 2 columns combined: `(category_id, gallery_id)`.
```
ALTER TABLE category_gallery
ADD CONSTRAINT category_gallery_uq
UNIQUE (category_id, gallery_id) ;
```
If the row to be inserted matches **both** values with a row already on the table, then instead of `INSERT`, do an `UPDATE`:
```
INSERT INTO category_gallery (
category_id, gallery_id, create_date, create_by_user_id
) VALUES ($1, $2, $3, $4)
ON CONFLICT (category_id, gallery_id)
DO UPDATE SET
last_modified_date = EXCLUDED.create_date,
last_modified_by_user_id = EXCLUDED.create_by_user_id ;
```
You can use either the columns of the UNIQUE constraint:
```
ON CONFLICT (category_id, gallery_id)
```
or the constraint name:
```
ON CONFLICT ON CONSTRAINT category_gallery_uq
```
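SQLite (3.24+) borrowed the same `ON CONFLICT ... DO UPDATE` syntax, so the whole flow can be verified through Python. This is a sketch with the composite `UNIQUE` constraint in place and made-up dates and user ids:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE category_gallery (
    category_id        INTEGER,
    gallery_id         INTEGER,
    create_date        TEXT,
    create_by_user_id  INTEGER,
    last_modified_date TEXT,
    last_modified_by_user_id INTEGER,
    UNIQUE (category_id, gallery_id))""")

upsert = """
    INSERT INTO category_gallery
        (category_id, gallery_id, create_date, create_by_user_id)
    VALUES (?, ?, ?, ?)
    ON CONFLICT (category_id, gallery_id) DO UPDATE SET
        last_modified_date       = excluded.create_date,
        last_modified_by_user_id = excluded.create_by_user_id
"""
cur.execute(upsert, (1, 1, "2016-01-01", 7))   # plain insert
cur.execute(upsert, (1, 1, "2016-02-01", 8))   # same pair: conflicts, so it updates
cur.execute(upsert, (1, 2, "2016-03-01", 9))   # different pair: new row

n, modified = cur.execute(
    "SELECT COUNT(*), MAX(last_modified_date) FROM category_gallery").fetchone()
```

The conflicting pair produces an update instead of a duplicate row.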
|
As a simplified alternative to [the currently accepted answer](https://stackoverflow.com/a/36799500/3169029), the `UNIQUE` constraint can be anonymously added upon creation of the table:
```
CREATE TABLE table_name (
id TEXT PRIMARY KEY,
col TEXT,
UNIQUE (id, col)
);
```
Then, the upsert query becomes (similar to what was already answered):
```
INSERT INTO table_name (id, col) VALUES ($1, $2)
ON CONFLICT (id, col)
DO UPDATE SET col = $2;
```
|
How to correctly do upsert in postgres 9.5
|
[
"",
"sql",
"postgresql",
"upsert",
"postgresql-9.5",
""
] |
The following SQL statement selects all the orders from the customer with `CustomerID=4` ("Around the Horn"). We use the `Customers` and `Orders` tables, and give them the table aliases of `c` and `o`, respectively. (Here we have used aliases to make the SQL shorter):
```
SELECT o.OrderID, o.OrderDate, c.CustomerName
FROM Customers AS c, Orders AS o
WHERE c.CustomerName="Around the Horn"
AND c.CustomerID=o.CustomerID;
```
Can you please explain this syntax considering I am complete beginner.
|
With the `SELECT` statement you tell which columns of the table(s) you need. If multiple tables have columns with the same name, you also have to specify the table name (tableName.columnName).
With the `FROM` statement you tell the tables from which you want to select the data. In this query there are two tables separated by a comma; this is not a good idea, because the `JOIN` statement exists to combine two tables in the correct way.
Finally, with the `WHERE` statement you can specify conditions to filter the data. In this query you want only the customers with name = 'Around the Horn', and you also restrict the result to rows with the same customer id in the customer and order tables.
This last condition should be specified with a join condition in the `FROM`:
```
FROM Customer AS c JOIN Orders AS o ON c.CustomerID = o.CustomerID
```
Otherwise, using the comma in the `FROM` computes the Cartesian product of the two tables, and only the second `WHERE` condition then keeps the correct rows.
I suggest you read an SQL guide for beginners.
|
Here is a simpler version of the above code without using aliases (because that is what confused me at the start). Without aliases, the code is much easier to read:
```
SELECT OrderID, OrderDate, CustomerName
FROM Customers, Orders
WHERE Customers.CustomerID=Orders.CustomerID
```
|
Having query in SQL aliases
|
[
"",
"sql",
""
] |
How do I look up DISTINCT values of one table, look for each name in another table and get both the values and their names as a result?
The Beleg table looks like this:
```
SELECT DISTINCT Ursprungskonto FROM Beleg
WHERE YEAR ( Valuta ) = 2016
```
gets me:
```
1000
1210
1220
1230
```
For each of these values, I need to lookup its name:
```
SELECT Name FROM Geldkonto
WHERE Kontonr = 1000
```
results in:
```
Kasse
```
At the end of the query, I need to have this result:
```
1000 Kasse
1210 OneBankName
1220 AnotherBankName
1230 YABN
```
I'm using SQL-92 (Filemaker).
Thanks a lot for any help!
|
You could try a subquery:
```
SELECT Kontonr , Name FROM Geldkonto
WHERE Kontonr in (SELECT DISTINCT Ursprungskonto FROM Beleg
WHERE YEAR ( Valuta ) = 2016)
```
|
Instead of applying DISTINCT *after* the join, it is better to apply it *before*:
```
SELECT k.Kontonr, k.Name
FROM Geldkonto AS k
JOIN
(
SELECT DISTINCT Ursprungskonto
FROM Beleg
WHERE YEAR ( Valuta ) = 2016
) AS b
ON k.Kontonr = b.Ursprungskonto
```
This is similar to @rev\_dihazum's solution, simply using a join instead of a subquery, useful if you need any additional columns from `Beleg`.
|
Combining two nested SELECT with DISTINCT?
|
[
"",
"sql",
"select",
"nested",
"filemaker",
""
] |
I have an nvarchar(MAX) field in a SQL table. It has numbers such as 717.08064182582, 39.0676048113, etc., of which I need only 3 places after the decimal, for instance 717.080, 39.067.
Without converting the field type, I would like to get rid of the trailing characters; however, every row has a different number of characters. I believe I could use ROUND (correct me if I'm wrong), but I would rather not.
|
Try this
```
SELECT CAST(ColumnName AS DECIMAL(18,3))
```
Without converting it data type As per **@vkp** Comment
```
SELECT SUBSTRING(ColumnName ,0,CHARINDEX('.', ColumnName )+4)
```
|
```
select CASE WHEN CHARINDEX('.', Your_column) > 0
THEN SUBSTRING(Your_column, 1, CHARINDEX('.', Your_column) + 3)
ELSE Your_column
END
```
This is similar to the previous answers, but the CASE expression also handles values that contain no decimal point.
|
SQL Server - Delete Values after Decimal (non-int field)
|
[
"",
"sql",
"sql-server",
"decimal",
"rounding",
"nvarchar",
""
] |
I'm using PostgreSQL. If I have an **id in** query say:
```
select * from public.record where id in (1,5,1);
```
this will only give me two rows, because the id 1 has a duplicate. But what if I want to display a set of records containing :
```
id | value
1 | A
5 | B
1 | A
```
Regardless of the reason why I would do this, is this possible?
|
You can do that by joining the values:
```
with ids (id) as (
values (1),(5),(1)
)
select r.*
from public.record r
join ids on r.id = ids.id;
```
If you need to keep the order of the parameter list, you need to add a column to sort on:
```
with ids (id, sort_order) as (
values
(1, 1),
(5, 2),
(1, 1)
)
select r.*
from public.record r
join ids on r.id = ids.id
order by ids.sort_order;
```
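The join-against-VALUES trick also works in SQLite, so it can be demonstrated through Python (the sample data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE record (id INTEGER PRIMARY KEY, value TEXT)")
cur.executemany("INSERT INTO record VALUES (?, ?)", [(1, "A"), (5, "B"), (9, "C")])

# Joining against a VALUES list keeps the duplicate: id 1 appears twice.
rows = cur.execute("""
    WITH ids(id) AS (VALUES (1), (5), (1))
    SELECT r.id, r.value
    FROM record r
    JOIN ids ON r.id = ids.id
""").fetchall()
```

Unlike `IN (1, 5, 1)`, the join returns three rows, one per entry in the parameter list.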
|
You can use `JOIN` on a subquery:
```
SELECT
r.id, r.value
FROM public.record r
INNER JOIN (
SELECT 1 AS id UNION ALL
SELECT 5 AS id UNION ALL
SELECT 1 AS id
) t
ON t.id = r.id
```
|
Retrieve same number of records in an ID IN query even with a duplicate
|
[
"",
"sql",
"postgresql",
"duplicates",
""
] |
I got a numeric(8,0) date column, with values like `20130101`. I need to cast it to some date format and do some queries.
My test query looks like this
```
SELECT *
FROM hund
WHERE ISDATE(hund.hfdat) = 1
and cast((left(convert(varchar(8),hund.hfdat),4) +
substring(convert(varchar(8),hund.hfdat),5,2) + right(hund.hfdat,2))
as datetime) between '20050101' and '20300101'
```
I get this error
> Conversion failed when converting date and/or time from character string.
I guess my 'date' column got some bad data. Any suggestion for writing it some other way?
I want to plug this into the query below: dogs not older than 10 years.
```
SELECT Ras_.rasnamn as 'Ras', count(distinct person.personid) as 'Antal
ägare', count(distinct JBV_Aegare.hundid) as 'Antal djur'
FROM JBV_Aegare
INNER JOIN hund ON JBV_Aegare.hundID=hund.hundID
INNER JOIN ras_ ON hund.ras=ras_.raskod
INNER JOIN person ON JBV_Aegare.personID=person.personid
INNER JOIN PostnummerLan ON person.postnr=PostnummerLan.PN_Postnummer
INNER JOIN land ON PostnummerLan.PN_Lan=land.landkod
where postnr <> 0 and person.landkod=0 and HERE ->>> hund.hfdat >=
convert(CHAR(8),DATEADD(YEAR, -1, GETDATE()),112) and
hund.hfdat <= (year(getdate()) + 10)
group by Ras_.rasnamn
order by Ras_.rasnamn
```
|
```
hund.hfdat >= replace(CONVERT(date, DATEADD(year, -10, getdate())),'-','')
```
|
It should be a simple double cast
```
DECLARE @WhyIsThisNumeric decimal(8,0) = 20130101
SELECT CAST(CAST(@WhyIsThisNumeric AS varchar(8)) AS datetime)
```
When you attempt to cast `20130101` (not `'20130101'`: it's a number, not a string) then it is evaluated as 20,130,101 days after 01 Jan 1900 which is an utterly invalid datetime value.
Also, SQL is a declarative language and the optimiser will not necessarily evaluate left to right. The CAST can be evaluated before the ISDATE filter, and thus the conversion fails.
In this case, you really need to store dates as dates.
Otherwise you need force the ISDATE to be evaluated first by forcing a materialisation of the ISDATE before the CAST is evaluated
```
SELECT *
FROM
(
SELECT TOP 2000000000 *
FROM hund
WHERE ISDATE(hund.hfdat) = 1
ORDER BY hund.hfdat --better to use the PK
) X
WHERE
CAST(CAST(X.hfdat AS varchar(8)) AS datetime) between '20050101' and '20300101'
```
|
sql server casting some data
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I would like to get the values without the smallest and the biggest ones, so without the entries with 2 and 29 in the NumberOfRepeating column.
[](https://i.stack.imgur.com/p8Pie.png)
My query is:
```
SELECT Note, COUNT(*) as 'NumberOfRepeating'
WHERE COUNT(*) <> MAX(COUNT(*))AND COUNT(*) <> MIN(COUNT(*))
FROM Note GROUP BY Note;
```
|
```
SELECT Note, COUNT(*) as 'NumberOfRepeating'
FROM Notes
GROUP BY Note
HAVING count(*) <
(
SELECT max(t.maxi)
FROM (select
Note, COUNT(Note) maxi FROM Notes
GROUP BY Note
) as t
)
AND
count(*) >
(
SELECT min(t.min)
FROM (select
Note, COUNT(Note) min FROM Notes
GROUP BY Note
) as t
)
```
try this code.
|
One method would use `order by` and `limit`, twice:
```
select t.*
from (select t.*
from t
order by NumberOfRepeating asc
limit 99999999 offset 1
) t
order by NumberOfRepeating desc
limit 99999999 offset 1;
```
|
How to write sql query to get items from range
|
[
"",
"mysql",
"sql",
""
] |
For instance, I have a datetime like this `'2016-04-02 00:00:00'` and another like this `'2016-04-02 15:10:00'`. I don't care about the time-part, I want them to match just by the date-part.
I have tried with `date()`, `to_date`, `datepart`, nothing works.
|
Truncating the date to the day should do the trick. Note that the format model for day precision is 'DD'; 'DAY' would truncate to the first day of the week instead. Documentation here:
<https://docs.oracle.com/cd/B19306_01/server.102/b14200/functions201.htm>
For example
```
SELECT TRUNC(SYSDATE, 'DD') FROM DUAL;
```
|
Do it like this:
```
where yourField >= the start of your date range
and yourField < the day after the end of your date range
```
**Edit starts here:**
While you could use `trunc`, as suggested by others, bear in mind that filtering on function results tends to be slow.
|
How to transform Oracle DateTime to Date
|
[
"",
"sql",
"oracle",
""
] |
I'm having an issue with my stored procedure.
I am getting error:
> Cannot perform an aggregate function on an expression containing an
> aggregate or a subquery
Here's the part of my stored procedure where I believe the error occurs:
```
SELECT column_1, column_2,
SUM(CASE WHEN column_2 NOT IN (SELECT product FROM table_products) THEN 1
ELSE 0
END) AS Total
FROM my_table
WHERE is_rated = '1'
GROUP BY column_1, column_2
```
Thank you.
|
You'll get much better performance generally if you try to avoid correlated subqueries anyway:
```
SELECT
MT.column_1,
MT.column_2,
SUM(CASE WHEN P.product IS NULL THEN 1 ELSE 0 END) AS total
FROM
My_Table MT
LEFT OUTER JOIN Products P ON P.product = MT.column_2
WHERE
MT.is_rated = '1'
GROUP BY
MT.column_1,
MT.column_2
```
This assumes that there will only ever be at most one match in the Products table (Products, not Table\_Products - of course it's a table so don't put that in the name). In other words, this will work if product is the PK (or an AK) of the Products table.
If that's not the case and you might have multiple matches in the `Products` table then you can `JOIN` to a subquery that uses `DISTINCT` on the `product` column.
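A minimal, runnable version of the rewrite in SQLite through Python (table and column names follow the question; the data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE my_table (column_1 TEXT, column_2 TEXT, is_rated TEXT)")
cur.execute("CREATE TABLE products (product TEXT PRIMARY KEY)")
cur.executemany("INSERT INTO my_table VALUES (?, ?, ?)",
                [("a", "x", "1"), ("a", "y", "1"), ("b", "y", "1"), ("b", "y", "0")])
cur.execute("INSERT INTO products VALUES ('x')")

# LEFT JOIN + IS NULL replaces the NOT IN subquery inside the aggregate.
rows = cur.execute("""
    SELECT mt.column_1, mt.column_2,
           SUM(CASE WHEN p.product IS NULL THEN 1 ELSE 0 END) AS total
    FROM my_table mt
    LEFT JOIN products p ON p.product = mt.column_2
    WHERE mt.is_rated = '1'
    GROUP BY mt.column_1, mt.column_2
    ORDER BY mt.column_1, mt.column_2
""").fetchall()
```

Rows whose `column_2` has a match in `products` contribute 0 to the sum; unmatched rows contribute 1.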
|
```
SELECT
column_1, column_2,
SUM(
CASE
WHEN table_products.product IS NULL THEN 1
ELSE 0
END
) AS Total
FROM my_table
left join table_products on my_table.column_2 = table_products.product
WHERE is_rated = '1'
GROUP BY column_1, column_2
```
|
SQL Server Cannot perform an aggregate function on an expression containing an aggregate or a subquery
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a table where monthly sales are recorded. See image for a sample of the table.[](https://i.stack.imgur.com/HI7oA.jpg)
So this data is partial, I have months 1-12 for every year since 2000. What I would normally do is query total sales for any one year like this:
```
SELECT SUM(total_sales) as TotalSalesYear FROM sales_report WHERE year = '2008'
```
What I want to do here is determine the highest year in sales and the average sales per year since 2000. So I need to do a SUM per year, and determine which year is the highest. Then, once I have the SUM for each year, I can come with an average of sales per year.
I'm just not sure how to query this.
Thank you.
|
Group by year in order to get a result record per year. Order it by the sales sum descending and limit your results to 1 row, so you get the year with the maximum sales sum.
Additionally, we use the analytic version of `AVG` to get the average over all sums with each year record (of which we show only one in the end). Remove the last line (i.e. the fetch only clause) from the query to see how it works.
```
select
year,
sum(total_sales) as sum_of_year,
avg(sum(total_sales)) over () as avg_sum
from sales_report
group by year
order by sum(total_sales) desc
fetch first 1 row only;
```
SQL fiddle: <http://sqlfiddle.com/#!15/fee30/5>
This is standard SQL but doesn't work in every DBMS. Some DBMS are more standard compliant, others are less. Some use `TOP` or `LIMIT` or something else to get the top row only. And some don't even feature analytic functions.
|
Here is a solution for the second part of your question. This will compute the average of sum of sales over all years:
```
SELECT avg(sum_sales)
FROM (
SELECT year as year, sum(total_sales) as sum_sales
FROM sales
GROUP BY year
) AS T;
```
For getting the year with the highest sales, you could extend this using order by and limit as follows:
```
SELECT year, sum_sales
FROM (
SELECT year as year, sum(total_sales) as sum_sales
FROM sales
GROUP BY year) AS T
ORDER BY sum_sales desc
LIMIT 1;
```
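As a rough check of the derived-table approach (sample data invented here; SQLite stands in for the asker's database, so `LIMIT` replaces `FETCH FIRST`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sales (year INT, total_sales REAL);
    INSERT INTO sales VALUES (2014, 10), (2014, 20), (2015, 50), (2015, 10);
""")

# Average of the per-year sums: (30 + 60) / 2 = 45
avg_sales = con.execute("""
    SELECT AVG(sum_sales)
    FROM (SELECT year, SUM(total_sales) AS sum_sales
          FROM sales
          GROUP BY year) AS T
""").fetchone()[0]

# Year with the highest total sales.
best_year, best_sum = con.execute("""
    SELECT year, SUM(total_sales) AS sum_sales
    FROM sales
    GROUP BY year
    ORDER BY sum_sales DESC
    LIMIT 1
""").fetchone()
```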
|
SQL SUM and Average Sale by Years
|
[
"",
"sql",
""
] |
I need to update the `customdate1` column with the time from the `starttime` column. The issue I'm having is formatting. The current format for `starttime` is HH:MM:SS.SSSSSSS; I need it to be in HH:MM.
[](https://i.stack.imgur.com/UOmYN.jpg)
Thank you
Brian
|
I suggest that you first change your data type and then perform an update:
```
ALTER TABLE TableName
ALTER COLUMN CustomDate1 time(0) NOT NULL;

UPDATE TableName
SET CustomDate1 = StartTime;
```
If you don't want to store seconds you can change your data type to char(5) and then perform an update. But this is not recommended; any formatting should be done at the application level.
See example:
```
ALTER TABLE TableName
ALTER COLUMN CustomDate1 char(5) NOT NULL;

UPDATE TableName
SET CustomDate1 = CONVERT(char(5), StartTime, 108);
```
See a quick example:
```
DECLARE @time char(5) = '12:15';
SELECT @time AS '@time'
```
|
```
UPDATE tablename
SET customdate1 = SUBSTRING(CONVERT(varchar(8), starttime, 108), 1, 5);
```
Try the one above. The time will be truncated to 5 characters: `hh:mm`.
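For illustration only (SQLite here rather than SQL Server, so `strftime` plays the role of `CONVERT(..., 108)`), truncating a time value to HH:MM looks like this:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (starttime TEXT, customdate1 TEXT)")
con.execute("INSERT INTO t (starttime) VALUES ('08:00:00'), ('09:30:45')")

# Keep only hours and minutes of starttime.
con.execute("UPDATE t SET customdate1 = strftime('%H:%M', starttime)")
values = [r[0] for r in con.execute("SELECT customdate1 FROM t ORDER BY customdate1")]
```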
|
Sql update column with time from
|
[
"",
"sql",
"sql-server",
""
] |
I have a piece of SQL Server code which I am using. I'm not very familiar with PostgreSQL. Below is the code which I want to convert to PostgreSQL. I'm using DBeaver 3.5.4.
```
Update tablename
set record_hash = cast(hashbytes('MD5',
coalesce(id, '') +
coalesce(name, '') +
coalesce(created_date, '') +
coalesce(last_modified_date, '')
) as bigint)
;
```
|
You can do it like this:
```
Update tablename
set record_hash = ('x'|| substr(
md5(
coalesce(id, '') ||
coalesce(name, '') ||
coalesce(created_date, '') ||
coalesce(last_modified_date, '')
),1,16)::bit(64)::bigint )
```
Found [here](https://stackoverflow.com/questions/9809381/hashing-a-string-to-a-numeric-value-in-postgressql) how to do the hash => bigint conversion.
|
I assume that this `hashbytes()` thing generates a hash value.
To create an MD5 checksum in Postgres you can use the `md5()` function, e.g.:
```
md5(concat(id::text, name, created_date::text, last_modified_date::text))
```
`concat()` will take care of null values automatically, no need for `coalesce()`
Unfortunately there is no direct conversion from a hex value to an integer in Postgres
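The concatenate-then-hash idea can be mimicked in plain Python with `hashlib` (a sketch, not the Postgres function itself); note how `None` values are treated as empty strings, matching `concat()`'s NULL handling:

```python
import hashlib

def record_hash(*fields):
    # None behaves like SQL NULL inside concat(): it is simply skipped.
    joined = "".join("" if f is None else str(f) for f in fields)
    return hashlib.md5(joined.encode()).hexdigest()

h1 = record_hash(1, "alice", "2015-01-01", None)
h2 = record_hash(1, "alice", "2015-01-01", "")
```

Since `None` and the empty string contribute nothing to the concatenation, both calls produce the same 32-character hex digest.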
|
Sql to postgresql
|
[
"",
"sql",
"postgresql",
""
] |
I am working with Teradata SQL. I would like to get the duplicate fields with their count and other variables as well. I can only find ways to get the count, but not the variables as well.
Available input
```
+---------+----------+----------------------+
| id | name | Date |
+---------+----------+----------------------+
| 1 | abc | 21.03.2015 |
| 1 | def | 22.04.2015 |
| 2 | ajk | 22.03.2015 |
| 3 | ghi | 23.03.2015 |
| 3 | ghi | 23.03.2015 |
```
Expected output :
```
+---------+----------+----------------------+
| id | name | count | // Other fields
+---------+----------+----------------------+
| 1 | abc | 2 |
| 1 | def | 2 |
| 2 | ajk | 1 |
| 3 | ghi | 2 |
| 3 | ghi | 2 |
```
What am I looking for :
**I am looking for all duplicate rows, where duplication is decided by ID and to retrieve the duplicate rows as well.**
All I have till now is :
```
SELECT
id, name, other-variables, COUNT(*)
FROM
Table_NAME
GROUP BY
id, name
HAVING
COUNT(*) > 1
```
This is not showing correct data. Thank you.
|
You could use a [window aggregate function](http://www.info.teradata.com/htmlpubs/DB_TTU_13_10/index.html#page/SQL_Reference/B035_1145_109A/ch09.13.039.html), like this:
```
SELECT *
FROM (
SELECT id, name, other-variables,
COUNT(*) OVER (PARTITION BY id) AS duplicates
FROM users
) AS sub
WHERE duplicates > 1
```
Using a Teradata extension to ISO SQL syntax, you can simplify the above to:
```
SELECT id, name, other-variables,
COUNT(*) OVER (PARTITION BY id) AS duplicates
FROM users
QUALIFY duplicates > 1
```
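SQLite (3.25+) supports the same window aggregate, so the non-QUALIFY version can be sketched like this, using rows shaped like the question's sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE users (id INT, name TEXT);
    INSERT INTO users VALUES (1,'abc'), (1,'def'), (2,'ajk'), (3,'ghi'), (3,'ghi');
""")

# Every row of an id that occurs more than once, with its count attached.
rows = con.execute("""
    SELECT *
    FROM (SELECT id, name,
                 COUNT(*) OVER (PARTITION BY id) AS duplicates
          FROM users) AS sub
    WHERE duplicates > 1
    ORDER BY id, name
""").fetchall()
```

The single-occurrence id 2 is filtered out; all rows of ids 1 and 3 survive with their counts.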
|
As an alternative to the accepted and perfectly correct answer, you can use:
```
SELECT {all your required 'variables' (they are not variables, but attributes)}
, cnt.Count_Dups
FROM Table_NAME TN
INNER JOIN (
    SELECT id
         , COUNT(1) Count_Dups
      FROM Table_NAME
     GROUP BY id
    HAVING COUNT(1) > 1 -- If you want only duplicates
) cnt
 ON cnt.id = TN.id
```
**edit**: According to your edit, duplicates are on `id` only. Edited my query accordingly.
|
SQL : Getting duplicate rows along with other variables
|
[
"",
"sql",
"database",
"teradata",
""
] |
I was handed a scenario that had the schema of
Product:
```
maker model type
A 1232 PC
A 1233 PC
A 1276 Printer
A 1298 Laptop
A 1401 Printer
A 1408 Printer
A 1752 Laptop
B 1121 PC
B 1750 Laptop
C 1321 Laptop
D 1288 Printer
D 1433 Printer
E 1260 PC
E 1434 Printer
E 2112 PC
E 2113 PC
```
And the query was to
> Get the makers who produce only one product type and more than one
> model.
>
> Output columns should have been maker and type.
And this is what I came up with.
```
select distinct maker, type
from Product
where maker in (select maker
from Product
group by maker, type
having count(model) > 1
except
select maker
from
(
select distinct maker, type
from Product
) A
group by maker
having count(type) > 1)
```
I know this doesn't seem elegant in any way, so I was wondering whether someone could come up with a better alternative and explain as to why is it better than the query stated above.
---
EDIT: Please make sure that the answer is just two columns wide which is
Maker, Type
|
One method uses `exists` and `not exists`:
```
select distinct p.maker, p.type
from product p
where exists (select 1
from product p2
where p2.maker = p.maker and p2.type = p.type and p2.model <> p.model
) and
not exists (select 1
from product p2
where p2.maker = p.maker and p2.type <> p.type
);
```
Another version uses explicit aggregation:
```
select p.maker, p.type
from product p
where not exists (select 1
from product p2
where p2.maker = p.maker and p2.type <> p.type
)
group by p.maker, p.type
having min(model) <> max(model);
```
And, for completeness, here is a version that only uses window functions:
```
select p.model, p.type
from (select p.*,
min(type) over (partition by maker) as mintype,
max(type) over (partition by maker) as maxtype,
row_number() over (partition by maker, type order by model) as seqnum,
count(*) over (partition by maker, type) as cnt
from product p
) p
where seqnum = 1 and
mintype = maxtype and
cnt > 1;
```
|
```
SELECT maker, MIN(type) as type
FROM Product
GROUP BY maker
HAVING COUNT(DISTINCT type) = 1 AND COUNT(DISTINCT model) > 1;
```
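Running this aggregate version against the question's sample data (here in SQLite) shows it returns exactly the makers with one type and several models:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Product (maker TEXT, model INT, type TEXT)")
con.executemany("INSERT INTO Product VALUES (?,?,?)", [
    ('A',1232,'PC'), ('A',1233,'PC'), ('A',1276,'Printer'), ('A',1298,'Laptop'),
    ('A',1401,'Printer'), ('A',1408,'Printer'), ('A',1752,'Laptop'),
    ('B',1121,'PC'), ('B',1750,'Laptop'), ('C',1321,'Laptop'),
    ('D',1288,'Printer'), ('D',1433,'Printer'),
    ('E',1260,'PC'), ('E',1434,'Printer'), ('E',2112,'PC'), ('E',2113,'PC'),
])

# One type only, but more than one model.
rows = con.execute("""
    SELECT maker, MIN(type) AS type
    FROM Product
    GROUP BY maker
    HAVING COUNT(DISTINCT type) = 1 AND COUNT(DISTINCT model) > 1
""").fetchall()
```

Only maker D qualifies: two models, both printers.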
|
Looking for a more elegant way of solving this sql query
|
[
"",
"sql",
""
] |
Hey guys suppose I have a data frame
```
Year Month 1_month_sub 3_month_sub 12_month_sub
2014 1 3 1 1
2014 2 1 0 0
2014 3 1 0 0
2014 4 1 0 0
2014 5 4 0 0
2014 6 1 0 0
2014 7 5 0 0
2014 8 1 0 0
2014 9 1 0 0
2014 10 6 0 0
2014 11 1 0 0
2014 12 3 0 0
```
Where 1\_month sub indicates that 1 month subscription was purchased, 3 month sub indicates that a 3 month subscription was purchased etc.
I need to add a column that gives me a # of monthly subscribers at any given unit of time. Thus the results would look like:
```
Year Month 1_month_sub 3_month_sub 12_month_sub subs
2014 1 3 1 1 5
2014 2 1 0 0 3
2014 3 1 0 0 3
2014 4 1 0 0 2
2014 5 4 0 0 5
2014 6 1 0 0 2
2014 7 5 0 0 6
2014 8 1 0 0 2
2014 9 1 0 0 2
2014 10 6 0 0 7
2014 11 1 0 0 2
2014 12 3 0 0 4
2015 1 1 0 0 1
```
I have used the COALESCE, LAG, LEAD functions with no real success. Any ideas on how I can approach this?
|
I speculate that the data is in a table and 1 month subs only exist for one month, 3 month for 3 months, and 12 months for 12 months.
And, further, I will assume that every month has a row.
You can do this in Postgres using a windowing clause on a cumulative sum:
```
select t.*,
       ("1_month_sub" +
        sum("3_month_sub") over (order by year, month rows between 2 preceding and current row) +
        sum("12_month_sub") over (order by year, month rows between 11 preceding and current row)
) as total_subs
from t;
```
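The expected numbers for the first four months can be reproduced in SQLite (note the column names must be double-quoted because they start with a digit, and the frames are ordered by year *and* month):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript('''
    CREATE TABLE subs (year INT, month INT,
                       "1_month_sub" INT, "3_month_sub" INT, "12_month_sub" INT);
    INSERT INTO subs VALUES
        (2014, 1, 3, 1, 1), (2014, 2, 1, 0, 0), (2014, 3, 1, 0, 0), (2014, 4, 1, 0, 0);
''')

# 1-month subs count this month only; 3- and 12-month subs stay active
# for 3 and 12 rows respectively via the window frames.
rows = con.execute('''
    SELECT year, month,
           "1_month_sub"
           + SUM("3_month_sub") OVER w3
           + SUM("12_month_sub") OVER w12 AS subs
    FROM subs
    WINDOW w3  AS (ORDER BY year, month ROWS BETWEEN 2 PRECEDING AND CURRENT ROW),
           w12 AS (ORDER BY year, month ROWS BETWEEN 11 PRECEDING AND CURRENT ROW)
    ORDER BY year, month
''').fetchall()
```

The per-month totals come out as 5, 3, 3, 2, matching the expected output.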
|
It is possible to do that without window functions. Below has been tested on PostgreSQL and on MS SQL.
See on SQL Fiddle how it works: <http://sqlfiddle.com/#!15/74862/4/0>
Simple SQL Join
```
select
t1.Year,
t1.Month,
sum(case when ((t2.Year-2014)*12+t2.Month) <= ((t1.Year-2014)*12+t1.Month) and ((t2.Year-2014)*12+t2.Month) - ((t1.Year-2014)*12+t1.Month) > -1 then 1 else 0 end * t2.one_ms) +
sum( case when ((t2.Year-2014)*12+t2.Month) <= ((t1.Year-2014)*12+t1.Month) and ((t2.Year-2014)*12+t2.Month) - ((t1.Year-2014)*12+t1.Month) > -3 then 1 else 0 end * t2.three_ms ) +
sum( case when ((t2.Year-2014)*12+t2.Month) <= ((t1.Year-2014)*12+t1.Month) and ((t2.Year-2014)*12+t2.Month) - ((t1.Year-2014)*12+t1.Month) > -12 then 1 else 0 end * t2.twelve_ms ) as subs
from Test t1
join Test t2
on 1=1
group by t1.Year, t1.Month, ((t1.Year-2014)*12+t1.Month)
order by ((t1.Year-2014)*12+t1.Month)
```
and the following characteristic function over Cartesian product:
```
1 2 3 4 5
+---------+
1|O . . . .|
2|O O . . .|
3|O O O . .|
4|. O O O .|
5|. . O O O|
+---------+
```
work:
```
year month subs
2014 1 5
2014 2 3
2014 3 3
2014 4 2
2014 5 5
2014 6 2
2014 7 6
2014 8 2
2014 9 2
2014 10 7
2014 11 2
2014 12 4
2015 1 1
```
---
To understand it better you might want to give an alias to `(t1.Year-2014)*12+t1.Month` such as `num`:
```
alter table Test add column num int NULL
update Test
set num = (Year-2014)*12+Month
select
t1.Year,
t1.Month,
sum(case when t2.num <= t1.num and t2.num - t1.num > -1 then 1 else 0 end * t2.one_ms) +
sum( case when t2.num <= t1.num and t2.num - t1.num > -3 then 1 else 0 end * t2.three_ms ) +
sum( case when t2.num <= t1.num and t2.num - t1.num > -12 then 1 else 0 end * t2.twelve_ms ) as subs
from Test t1
join Test t2
on 1=1
group by t1.Year, t1.Month, t1.num
order by t1.num
```
|
Coalesing with lags/leads over rows?
|
[
"",
"sql",
"postgresql",
""
] |
I have a table A with ID col. Here is sample data -
```
ID
NT-QR-1499-1(2015)
NT-XYZ-1503-1
NT-RET-546-1(2014)
```
I need to select everything after first '-' from left and before '(' from the right. However, some records do not have '(', in which case, the second condition would not apply.
Here is what I need -
```
QR-1499-1
XYZ-1503-1
RET-546-1
```
|
You could get it done in a CASE statement, although I'd definitely take any advice from Aaron;
```
CREATE TABLE #TestData (ID nvarchar(50))
INSERT INTO #TestData (ID)
VALUES
('NT-QR-1499-1(2015)')
,('NT-XYZ-1503-1')
,('NT-RET-546-1(2014)')
SELECT
ID
,CASE
WHEN CHARINDEX('(',ID) = 0
THEN RIGHT(ID, LEN(ID)-CHARINDEX('-',ID))
ELSE LEFT(RIGHT(ID, LEN(ID)-CHARINDEX('-',ID)),CHARINDEX('(',RIGHT(ID, LEN(ID)-CHARINDEX('-',ID)))-1)
END Result
FROM #TestData
```
|
```
SELECT CASE
         WHEN CHARINDEX('(', ID) > 0
           THEN SUBSTRING(ID, CHARINDEX('-', ID) + 1, CHARINDEX('(', ID) - CHARINDEX('-', ID) - 1)
         ELSE SUBSTRING(ID, CHARINDEX('-', ID) + 1, LEN(ID))
       END AS New_Column_Name
FROM Table_Name
```
First it checks whether "(" is present or not.
If present, it fetches the data from the position after "-" up to just before the position of "(".
Otherwise it fetches the data from the position after "-" to the end.
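The same "between the first dash and the optional parenthesis" logic, expressed in Python for comparison (string methods standing in for CHARINDEX/SUBSTRING):

```python
def trim_id(s):
    # Drop everything up to and including the first '-' ...
    s = s[s.index('-') + 1:]
    # ... and everything from '(' onwards, if present.
    paren = s.find('(')
    return s[:paren] if paren != -1 else s

results = [trim_id(x) for x in
           ["NT-QR-1499-1(2015)", "NT-XYZ-1503-1", "NT-RET-546-1(2014)"]]
```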
|
How to use substring conditionally before and after two different symbols in SQL SERVER
|
[
"",
"sql",
"sql-server",
""
] |
I have 2 tables with the same columns (id, country\_code), example :
```
Table A
--------
id country_code
1 fr
2 fr
3 fr
Table B
--------
id country_code
1 ua
2 fr
3 uk
```
I would like to get all fields in B where the country\_code is different of the one in A for each same id,
Example expected :
```
id country_code
1 ua
3 uk
```
I tried with inner join but without any success, any idea?
Here is the error I get :
```
Illegal mix of collations (utf8_unicode_ci,IMPLICIT) and (utf8_general_ci,IMPLICIT) for operation '<>'
```
But in Workbench those fields and tables have the same collation (set as "Table default"), it's strange..
Answer : I've checked all colations by doing
```
show table status;
```
and I updated the Collation column, now it works okay.
|
You need to use Collate keyword to change collation:
```
select b.id,b.country_code
from b join a
on b.id = a.id and b.country_code <> a.country_code collate utf8_unicode_ci;
```
For more information about collation: [What does character set and collation mean exactly?](https://stackoverflow.com/questions/341273/what-does-character-set-and-collation-mean-exactly)
|
You can use `join`:
```
select b.*
from b join
a
on b.id = a.id and b.country_code <> a.country_code;
```
|
SQL : error with an inner join, probably sthg simple
|
[
"",
"mysql",
"sql",
"database",
""
] |
I've been playing around with MySQL and sqlalchemy to gather and store data. Over the the weekend I was collecting tweets at about 20,000 tweets/hour and placing them in a table `raw_tweets` indexed by their tweet id. I am expecting ~1,000,000 rows, but when I run
```
SELECT COUNT(*) from raw_tweets;
```
the query just hangs. I've waited a few minutes and still nothing. My knowledge of sql is fairly limited. Is it possible there is a lock on the table? Would that cause it to hang without giving rise to an error? How can I go about diagnosing/fixing this problem?
P.S. it looks like all my queries on that table are hanging.
|
Most probably it doesn't hang, but just needs a lot of time to execute.
If the table engine is `InnoDB`, `SELECT COUNT(*)` must read all the rows from the table (in order to count them) and, if the database is under heavy use then the operation takes a lot of time.
This is documented in the [Limits on InnoDB Tables](http://dev.mysql.com/doc/refman/5.7/en/innodb-restrictions.html) page:
> InnoDB does not keep an internal count of rows in a table because concurrent transactions might “see” different numbers of rows at the same time. To process a `SELECT COUNT(*) FROM t` statement, InnoDB scans an index of the table, which takes some time if the index is not entirely in the buffer pool. To get a fast count, you have to use a counter table you create yourself and let your application update it according to the inserts and deletes it does. If an approximate row count is sufficient, `SHOW TABLE STATUS` can be used.
As explained above, if an approximate row count is enough for you then run:
```
SHOW TABLE STATUS WHERE NAME = 'raw_tweets'
```
and look into the `Rows` column of the result.
Please note:
* the number of rows returned by `SHOW TABLE STATUS` is approximate; it can be off of the real value by several percents (the difference is higher when the table is small);
* the value returned by `SHOW TABLE STATUS` changes on each subsequent run, even if there is no write activity on the table.
|
you could run the following in another database connection (if you have sufficient rights to do so):
```
SHOW FULL PROCESSLIST;
```
which might show all the queries/processes that are currently running on your database. In that list you might see if there are some locks set on a table
```
mysql> show full processlist;
+---------+------------+-----------------+------------+---------+------+-------+-----------------------+
| Id | User | Host | db | Command | Time | State | Info |
+---------+------------+-----------------+------------+---------+------+-------+-----------------------+
| 121904 | user01 | localhost | user_db | Locked | 0 | | SELECT * FROM usr_tbl |
| 1186598 | root | localhost | NULL | Query | 0 | NULL | show full processlist |
```
you should take a closer look at the `Command` and `Info` columns.
|
MySQL query hangs on `SELECT COUNT(*)`
|
[
"",
"mysql",
"sql",
""
] |
I have a SQL Server 2012 table that holds session per user and I need to find out monthly total of minutes for activity and if there are no results for the month then to show "0". Table below:
```
User SessionStart SessionEnd
1 2014-03-01 08:00:00.000 2014-03-01 08:10:00.000
1 2014-03-15 09:00:00.000 2014-03-15 09:30:00.000
1 2014-05-01 04:00:00.000 2014-05-01 04:50:00.000
1 2014-06-01 02:00:00.000 2014-06-01 02:05:00.000
1 2014-07-01 09:00:00.000 2014-07-01 10:30:00.000
1 2014-09-01 01:00:00.000 2014-09-01 01:07:00.000
1 2014-12-05 08:00:00.000 2014-12-05 08:10:00.000
2 2014-01-01 01:01:00.000 2014-01-01 01:11:00.000
1 2015-03-01 08:00:00.000 2015-03-01 08:10:00.000
1 2015-05-01 04:00:00.000 2015-05-01 04:50:00.000
1 2015-06-01 02:00:00.000 2015-06-01 02:05:00.000
... ... ...
```
What I'm ending up with is:
```
User Month Year Minutes
1 3 2014 40
1 5 2014 50
1 6 2014 5
1 7 2014 90
1 9 2014 7
1 12 2014 10
1 3 2015 10
1 5 2015 50
1 6 2015 5
```
What I'd like to get:
```
User Month Year Minutes
1 1 2014 0
1 2 2014 0
1 3 2014 40
1 4 2014 0
1 5 2014 50
1 6 2014 5
1 7 2014 90
1 8 2014 0
1 9 2014 7
1 10 2014 0
1 11 2014 0
1 12 2014 10
1 1 2015 0
1 1 2015 0
1 3 2015 10
1 4 2015 0
1 5 2015 50
1 6 2015 5
```
I've tried using:
```
SELECT User
,MONTH(SessionStart)
,YEAR(SessionStart)
,SUM(DATEDIFF(minute, SessionStart, SessionEnd)) AS Minutes
FROM SessionTable
WHERE User = 1 AND YEAR(SessionStart) >= 2014
GROUP BY SessionStart, User
```
I have a table called "Months" that holds the number of the month (Month) and name of the month (MonthName) and have tried performing a left outer join on this table and have also tried using case selects. I've searched on this issue on Stackoverflow and see somewhat similar issues, but not one that involves using the date. Struggling to think today...
|
You can generate all the rows with a `cross join` and then use `left join` to bring in your summaries. Assuming that some row exists for each month, you can get this information from the session table itself:
```
SELECT u.[User], yyyymm.mm, yyyymm.yyyy,
       COALESCE(SUM(DATEDIFF(minute, ss.SessionStart, ss.SessionEnd)), 0) AS Minutes
FROM (SELECT 1 AS [User]) u CROSS JOIN
     (SELECT DISTINCT YEAR(SessionStart) AS yyyy, MONTH(SessionStart) AS mm
      FROM SessionTable
      WHERE SessionStart >= '2014-01-01'
     ) yyyymm LEFT JOIN
     SessionTable ss
     ON u.[User] = ss.[User] AND
        YEAR(ss.SessionStart) = yyyymm.yyyy AND MONTH(ss.SessionStart) = yyyymm.mm
GROUP BY u.[User], yyyymm.yyyy, yyyymm.mm;
```
|
You're on the right track with your `months` table. Could you post your example that's not working using the left outer join? If you start with `months` and left outer join to your `SessionTable` you should be fine.
Something like...
```
SELECT s.[User]
      ,m.month
      ,m.year
      ,SUM(DATEDIFF(minute, s.SessionStart, s.SessionEnd)) AS Minutes
FROM Months m
LEFT OUTER JOIN SessionTable s
    ON m.year = YEAR(s.SessionStart)
    AND m.month = MONTH(s.SessionStart)
    AND s.[User] = 1
WHERE m.year >= 2014
GROUP BY s.[User], m.year, m.month
```
NOTE:
* Don't filter your `sessionTable` data in the where clause when you're using `month` as the primary data source - filter it in the `LEFT OUTER JOIN`. If you put it in the `WHERE`, you're turning things into an `INNER JOIN`.
* You want to use your `months` table when returning your data for the month/year info
That work?
|
Showing month value when no data exists SQL Server
|
[
"",
"sql",
"sql-server",
"datetime",
""
] |
I have the following SQL statement:
```
select
DOCUMENT.DOCUMENT_ID,
(case
when DOCUMENT.CLASSIFICATION_CODE is not null
then DOCUMENT.CLASSIFICATION_CODE
else TEMPLATE.CLASSIFICATION_CODE end) CLASSIFICATION_CODE,
CLASSIFICATION.NAME CLASSIFICATION_NAME
from
DOCUMENT,
TEMPLATE,
CLASSIFICATION
where
DOCUMENT.TEMPLATE_ID = TEMPLATE.TEMPLATE_ID and
DOCUMENT.CLASSIFICATION_CODE = CLASSIFICATION.CLASSIFICATION_CODE(+)
```
What I would like to do is along the lines of:
```
if DOCUMENT.CLASSIFICATION_CODE IS NOT NULL THEN
DOCUMENT.CLASSIFICATION_CODE = CLASSIFICATION.CLASSIFICATION_CODE(+),
ELSE
TEMPLATE.CLASSIFICATION_CODE = CLASSIFICATION.CLASSIFICATION_CODE(+)
```
Is this possible?
|
You can use [NVL](http://www.oradev.com/nvl.html) or the more standard [COALESCE](https://docs.oracle.com/cd/B28359_01/server.111/b28286/functions023.htm) for that:
```
COALESCE(DOCUMENT.CLASSIFICATION_CODE, TEMPLATE.CLASSIFICATION_CODE)
= CLASSIFICATION.CLASSIFICATION_CODE(+)
```
In ANSI syntax:
```
select
DOCUMENT.DOCUMENT_ID,
COALESCE(DOCUMENT.CLASSIFICATION_CODE, TEMPLATE.CLASSIFICATION_CODE)
CLASSIFICATION_CODE,
CLASSIFICATION.NAME CLASSIFICATION_NAME
from
DOCUMENT
inner join
TEMPLATE on DOCUMENT.TEMPLATE_ID = TEMPLATE.TEMPLATE_ID
left join
CLASSIFICATION on
COALESCE(DOCUMENT.CLASSIFICATION_CODE, TEMPLATE.CLASSIFICATION_CODE)
= CLASSIFICATION.CLASSIFICATION_CODE
```
|
While you can use `coalesce` or `nvl`, as other answers have suggested, I'd prefer to join to the `classification` table twice. Yes, the engine has to perform an extra join, but avoiding the functions allows the engine to use an index.
```
SELECT d.document_id,
COALESCE (d.classification_code, t.classification_code)
AS classification_code,
COALESCE (d_c.name, t_c.name) AS classification_name
FROM document d
JOIN template t ON d.template_id = t.template_id
LEFT JOIN classification d_c
ON d.classification_code = d_c.classification_code
LEFT JOIN classification t_c
ON t.classification_code = t_c.classification_code
```
|
Oracle SQL conditional join
|
[
"",
"sql",
"oracle",
"join",
""
] |
I need to update a single column over a thousand rows in the database. Normally when I need to do this, I'll do the following:
```
UPDATE [table] SET DATA="FOO" WHERE ID=23;
UPDATE [table] SET DATA="ASD" WHERE ID=47;
UPDATE [table] SET DATA="FGH" WHERE ID=83;
UPDATE [table] SET DATA="JKL" WHERE ID=88;
UPDATE [table] SET DATA="QWE" WHERE ID=92;
UPDATE [table] SET DATA="BAR" WHERE ID=97;
```
I feel like there should be a way to do this easily, but after searching around online, I cannot find a solution.
What I was hoping for was this:
```
-- THIS DOES NOT WORK
UPDATE [table]
SET DATA=("FOO", "ASD", "FGH", "JKL", "QWE", "BAR")
WHERE ID=(23, 47, 83, 88, 92, 9);
```
An important piece of information to note is that I am always setting one column, the same column across a specific set of rows.
Does anyone know the best way to do this?
|
You can actually do it using **insert into ...on duplicate key update**
```
insert into [table] (ID, DATA)
values (23,'FOO'), (47,'ASD'), (54,'DSF'), ...
on duplicate key update DATA = values(DATA)
```
|
You could use the [`MERGE`](https://msdn.microsoft.com/en-us/library/bb510625%28v=sql.100%29.aspx) statement which is in the [SQL:2003](https://en.wikipedia.org/wiki/Merge_%28SQL%29) standard and available in Transact-SQL since SQL Server 2008:
```
MERGE mytable
USING (VALUES (23, 'FOD'),
(47, 'ASD'),
(83, 'FGH'),
(88, 'JKL'),
(92, 'QWE'),
( 9, 'BAR')) AS pairs(id2, data2)
ON id = id2
WHEN MATCHED
THEN UPDATE SET data = data2
```
The `USING` clause allows to specify a derived table using a [table value constructor](https://msdn.microsoft.com/en-us/library/dd776382%28v=sql.100%29.aspx) (See example under point D on that page).
Alternatively, the more commonly implemented [SQL:92 standard](https://en.wikipedia.org/wiki/SQL-92) syntax to do this would be:
```
UPDATE mytable
SET data =
CASE id
WHEN 23 THEN 'FOD'
WHEN 47 THEN 'ASD'
WHEN 83 THEN 'FGH'
WHEN 88 THEN 'JKL'
WHEN 92 THEN 'QWE'
WHEN 9 THEN 'BAR'
END
WHERE id IN (23, 47, 83, 88, 92, 9);
```
The obvious downside is that you end up specifying the id values twice. You could do without the `WHERE` clause and add `ELSE data` in the `CASE` construct, but then you would actually update all rows, which is inefficient and may have undesired side-effects (via triggers).
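The CASE-based form works in almost any engine; a quick SQLite run (toy table, same ids as the question) confirms only the listed rows change:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mytable (id INT PRIMARY KEY, data TEXT)")
con.executemany("INSERT INTO mytable VALUES (?, ?)",
                [(9, 'x'), (23, 'x'), (47, 'x'), (100, 'keep')])

# One statement, several per-id values; the WHERE keeps other rows untouched.
con.execute("""
    UPDATE mytable
    SET data = CASE id
                 WHEN 23 THEN 'FOO'
                 WHEN 47 THEN 'ASD'
                 WHEN  9 THEN 'BAR'
               END
    WHERE id IN (23, 47, 9)
""")
rows = con.execute("SELECT id, data FROM mytable ORDER BY id").fetchall()
```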
|
Update a single column on multiple rows with one SQL query
|
[
"",
"sql",
""
] |
I have written a query
```
delete from Table1 where Tableid in (select Tableid from Table1 group by Tableid having count(*) > 1)
```
but this query removes all the data having count greater than 1.
Can someone help me with a single query that deletes the duplicate data and resets the count to 1?
I have table `Table1` with
```
Tableid Count
1 10
2 2
3 1
4 NULL
5 31
```
---
Post Delete it should be
```
Tableid Count
1 1
2 1
3 1
4 NULL
5 1
```
---
|
I think this is what you are looking for
```
DECLARE @Table TABLE
(
Name VARCHAR(20),
Value INT
);
;WITH T AS (
SELECT CONCAT('a',1) AS Name, 1 AS Value
UNION ALL
SELECT CONCAT('a',T.Value + 1) AS Name, T.Value + 1 FROM T
WHERE T.Value < 5
)
INSERT INTO @Table
SELECT T.Name ,
T.Value
FROM T
INSERT INTO @Table
( Name, Value )
VALUES ( 'a5', -- Name - varchar(20)
5 -- Value - int
),( 'a5', -- Name - varchar(20)
5 -- Value - int
)
INSERT INTO @Table
SELECT * FROM @Table
INSERT INTO @Table
SELECT * FROM @Table
SELECT
COUNT(*) AS TotalCount
, Name
, Value
FROM
@Table
GROUP BY
Name ,
Value
ORDER BY
Name
DELETE T
FROM (
SELECT
Name
, Value
, ROW_NUMBER() OVER(PARTITION BY Name, Value ORDER BY Value) AS RN
FROM
@Table
) AS T
WHERE T.RN > 1
SELECT COUNT(*) AS TotalCount, Name, Value
FROM @Table
GROUP BY Name, Value
ORDER BY Name, Value
```
|
To delete all the duplicate data: Group the column that may have the same data.
```
DELETE FROM table
WHERE id IN (SELECT id FROM table GROUP BY column HAVING COUNT(column) > 1)
```
To delete the duplicate and keep one of it: Get at least (1) data from the duplicate and grouped column.
```
DELETE t1 FROM table t1, table t2
WHERE t1.id <> t2.id AND t1.column = t2.column
```
**Back-up your data first before testing anything.**
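SQLite has no multi-table DELETE, so the same "keep one of each" effect there uses `rowid`; this is an adaptation of the idea, not the MySQL statement above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INT, name TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(1, 'a'), (1, 'a'), (2, 'b'), (3, 'c'), (3, 'c'), (3, 'c')])

# Keep the first physical row of each (id, name) pair, delete the rest.
con.execute("""
    DELETE FROM t
    WHERE rowid NOT IN (SELECT MIN(rowid) FROM t GROUP BY id, name)
""")
rows = con.execute("SELECT id, name FROM t ORDER BY id").fetchall()
```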
|
I want a single query that deletes duplicate entry from my database
|
[
"",
"mysql",
"sql",
"count",
"sql-delete",
"having",
""
] |
I have a table with the next structure:
```
Data structure:
| CONTRACT | CONNECTION | STATE |
| 1 | AAA | Y |
| 2 | AAA | Y |
| 3 | BBB | N |
| 4 | BBB | N |
| 5 | BBB | N |
| 6 | BBB | N |
| 7 | AAA | Y |
| 8 | CCC | Y |
| 9 | CCC | N |
| 10 | AAA | Y |
| 11 | CCC | N |
```
Considering the groups defined by the CONNECTION column (AAA, BBB and CCC), I'd like the query to select only the groups in which **all** rows have the value 'N' in the STATE column, returning all rows of each such group.
So the result on the table would be:
```
Result needed:
| CONTRACT | CONNECTION | STATE |
| 3 | BBB | N |
| 4 | BBB | N |
| 5 | BBB | N |
| 6 | BBB | N |
```
I've been looking at functions like decode, exists and so on, but I just don't see how to solve this problem.
Any idea of how I could write a query for this purpose?
|
You can use `SUM` and `HAVING`:
```
SELECT *
FROM tbl t
WHERE
CONNECTION IN(
SELECT CONNECTION
FROM tbl
GROUP BY CONNECTION
HAVING
SUM(CASE WHEN STATE = 'N' THEN 1 ELSE 0 END) > 0
AND SUM(CASE WHEN STATE <> 'N' THEN 1 ELSE 0 END) = 0
)
```
The first condition in the `HAVING` clause makes sure that the `CONNECTION` has at least one `STATE = 'N'`. The second one makes sure that the `CONNECTION` has no other `STATE`s than `'N'`.
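Checked against the question's data (SQLite standing in for Oracle), the subquery keeps only the all-'N' connection:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl (CONTRACT INT, CONNECTION TEXT, STATE TEXT)")
con.executemany("INSERT INTO tbl VALUES (?,?,?)", [
    (1,'AAA','Y'), (2,'AAA','Y'), (3,'BBB','N'), (4,'BBB','N'), (5,'BBB','N'),
    (6,'BBB','N'), (7,'AAA','Y'), (8,'CCC','Y'), (9,'CCC','N'),
    (10,'AAA','Y'), (11,'CCC','N'),
])

# Only connections with at least one 'N' and nothing but 'N'.
rows = con.execute("""
    SELECT * FROM tbl
    WHERE CONNECTION IN (
        SELECT CONNECTION FROM tbl
        GROUP BY CONNECTION
        HAVING SUM(CASE WHEN STATE = 'N' THEN 1 ELSE 0 END) > 0
           AND SUM(CASE WHEN STATE <> 'N' THEN 1 ELSE 0 END) = 0)
    ORDER BY CONTRACT
""").fetchall()
```

Only the four BBB contracts (3 through 6) are returned.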
|
Try this:
```
SELECT CONTRACT, CONNECTION, STATE
FROM (
SELECT CONTRACT, CONNECTION, STATE,
COUNT(CASE WHEN STATE <> 'N' THEN 1 END) OVER (PARTITION BY CONNECTION) AS cnt
FROM mytable) t
WHERE t.cnt = 0
```
|
Select all rows, with specific characteristics, with same subdata by grouping them through a same row value in oracle
|
[
"",
"sql",
"oracle",
"group-by",
"subquery",
""
] |
Table 'volcado': id, reserva, cantidad
With the following query:
```
SELECT (CASE
WHEN sum(cantidad) > 0 THEN 1
WHEN sum(cantidad) <= 0 THEN 0
END) AS suma
FROM volcado
GROUP BY reserva
```
The result is:
```
╔══════╗
║ suma ║
╠══════╣
║ 1 ║
║ 1 ║
║ 0 ║
║ 1 ║
║ 0 ║
╚══════╝
```
I need the result of the query would be: 3
How can I SUM() the rows of 'suma' on the same query?
I don't get it run with:
```
SELECT SUM(SELECT (CASE
WHEN sum(cantidad) > 0 THEN 1
WHEN sum(cantidad) <= 0 THEN 0
END) AS suma FROM volcado GROUP BY reserva)
FROM volcado
```
Error:
> Subquery returns more than 1 row
Thanks in advance.
|
Try this way
```
SELECT SUM(a.suma) from
(SELECT (CASE
WHEN sum(cantidad) > 0 THEN 1
WHEN sum(cantidad) <= 0 THEN 0
END) AS suma FROM volcado GROUP BY reserva)a
```
|
This should work; use the subquery as a derived table instead of as a column. Note that MySQL requires an alias for the derived table.
```
SELECT SUM(suma) from
(SELECT (CASE
      WHEN sum(cantidad) > 0 THEN 1
      WHEN sum(cantidad) <= 0 THEN 0
    END) AS suma FROM volcado GROUP BY reserva) t
```
|
SUM() the results of GROUP BY column
|
[
"",
"mysql",
"sql",
""
] |
For some reason, I get the mentioned error message while using two internal tables, declared at the beginning of the script.
I can insert data into the tables, but when I try to join them on a field present in both tables, it won't let me.
I am working in SQL Server 2012.
```
SELECT *
FROM @ITAB01
JOIN @ITAB02
on @ITAB01.country=@ITAB02.country
```
> Error: Must declare the scalar variable "@ITAB01".. Error 137. SQLSTATE 42000. Severity 15. MsgState 2. Line 83.
|
You need to give a name to the table variable.
NOT LIKE THIS:
```
declare @t1 table
(
p1 int
)
declare @t2 table
(
p2 int
)
select *
from @t1,
@t2
where @t1.p1 = @t2.p2
```
BUT LIKE THIS:
```
declare @t1 table
(
p1 int
)
declare @t2 table
(
p2 int
)
select *
from @t1 t1,
@t2 t2
where t1.p1 = t2.p2
```
So in your case:
```
SELECT *
FROM @ITAB01 ITAB01
JOIN @ITAB02 ITAB02
on ITAB01.country=ITAB02.country
```
|
If you want to refer to the tables elsewhere in the query (outside of the `FROM` clause), introduce aliases:
```
SELECT *
FROM @ITAB01 t1
JOIN @ITAB02 t2
on t1.country=t2.country
```
Aliases are also useful because you can shorten long names. They're also required once you want to use the same table more than once in a single query.
|
Must declare the scalar variable using internal tables
|
[
"",
"sql",
"sql-server-2012",
""
] |
I was reading articles about rollback transactions and can't find something like what I need. Assume I have a user table. Users can update their records, but I want to roll back if the entered value is null. I know there are lots of simple ways to do this, like checking with C# whether it is null and things like that, but the point here is understanding rollback. How can I use rollback with an if-null check?
|
Rollback is used with a transaction; a rollback restores the state to what it was when `BEGIN TRAN` was executed.
```
BEGIN TRAN tran1
INSERT INTO Whatever table...
IF(... check for null)
BEGIN
   ROLLBACK TRAN tran1
   RETURN 0
END
COMMIT TRAN tran1
```
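The same pattern in application code (Python/SQLite here, purely to illustrate begin/rollback semantics, not T-SQL syntax):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INT, name TEXT)")
con.commit()

new_name = None  # pretend this came from user input

# The INSERT opens a transaction implicitly; we decide afterwards
# whether to keep it or throw it away.
con.execute("INSERT INTO users VALUES (1, ?)", (new_name,))
if new_name is None:
    con.rollback()   # undo everything since the transaction began
else:
    con.commit()

count = con.execute("SELECT COUNT(*) FROM users").fetchone()[0]
```

Because the value was null, the rollback leaves the table empty.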
|
There are three ways. First, if your checks are just on one column, use the `check` constraint on the column definition:
```
create table test (
age int check (age > 40),
...
);
```
For more complex constraints, you will need `triggers`. The kind of triggers available depends on the technology (Oracle, SQL Server, MySQL). Inside a *before insert* or *before update* trigger, you can raise an exception, which will automatically end the transaction (equivalent to a rollback). Here is a MySQL example:
```
delimiter //
drop trigger if exists trg_trigger_test_ins //
create trigger trg_trigger_test_ins before insert on test
for each row
begin
declare msg varchar(128);
if new.age <= 40 then
set msg = concat('MyTriggerError: age incorrect : ', cast(new.id as char));
signal sqlstate '45000' set message_text = msg;
end if;
end
//
```
Finally, you can start a *transaction*, make your changes, then decide if you want to *commit* or *rollback*.
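A minimal illustration of the first option, the column-level CHECK constraint (SQLite enforces these too):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE test (age INT CHECK (age > 40))")

con.execute("INSERT INTO test VALUES (50)")      # fine
try:
    con.execute("INSERT INTO test VALUES (30)")  # violates the constraint
    failed = False
except sqlite3.IntegrityError:
    failed = True

count = con.execute("SELECT COUNT(*) FROM test").fetchone()[0]
```

The offending insert is rejected before it ever reaches the table.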
|
Rollback if null
|
[
"",
"sql",
"rollback",
""
] |
I'm new to database design and as far as I could research, using `null` values to represent absent data is not a good idea. The problem I'm having now is that I don't know how to represent absent data instead of `null`.
For example, I have a user table and a `FavoritColor` table; the user has a column called FC which is a key to the ID of the FavoritColor table. Now I'm giving the user the possibility of not choosing a color. How would I note that? What would the FC column be filled with if the user chose not to choose a color?
|
`NULL` is a perfectly fine value to use. If you're worried about defaults, make sure to use an `OUTER JOIN` and (for SQL Server, anyway) you can do something like:
`SELECT user_table.name, COALESCE(preferences.color_preference, 'DEFAULT_VALUE') FROM user_table LEFT OUTER JOIN preferences ON user_table.id = preferences.id;`
This type of query will allow you to set a default and store `NULL` as the preferred color, you will get the default in this case if the color\_preference is `NULL` or if there is no row in the preferences table.
You say "what would the FC column be filled with if the user chose to not choose a color?". I ask, why do you care? They either have a favorite color specified or do not. Do you care if they had a choice of specifying a favorite color but did not tell you?
|
`NULL` is usually meant to represent "unknown", which is why `'Ellen' <> NULL` doesn't result in `TRUE`, but in `NULL`. It is *not known* whether the value still unknown to us is 'Ellen' or not. An example would be a middle name; as long as the field is empty we don't know whether Mary's middle name is Ellen or not.
Often, however, we use `NULL` also for "not applicable" like the recommended retail price in a product table - some products simply don't have any. So we *do know* there is no RRP, it's not "unknown", but we use `NULL` still. What else could we do? Use 0 instead - and then mistakenly show in our online shop that the recommended retail price is zero dollars? Or add a flag has\_rrp? Two columns for one content? `NULL` is often the simpler solution.
And then, we can use `NULL` to mean "no value". Say an image in a user table. Some users just don't have a photo in our database, so the value remains empty. There is no other value then `NULL` for empty for binary data. We cannot put a zero in there or so, because the column is supposed to contain image data, say jpeg data or the like.
There are some other ways to represent "no data" for a single field. In the image example we could add a user\_image table with a 1:1 relation and either the record is there or not. For a string we'd use '' and for numbers we can sometimes - not always - use zero. For dates, like in a price table containing past and future prices with a from\_date and to\_date, people sometimes put in extreme dates (0001-01-01, 9999-12-31) to avoid complicated queries.
For IDs as in your case we have one more option: have an ID for "no value" and a corresponding entry in the other table. As long as we don't want any special treatment for "no value", this is a fine solution. In your example you could show a combobox with 'black', 'red', 'blue', ..., and 'no color' in your GUI and you can select 'no color' just as easily as you can select 'blue'. But if you ever want special treatment, then you will have queries with `and color_id <> (select id from colors where value = 'no color')` or the like, which can be annoying.
By the way, sometimes people use `NULL` even for "all values". Let's say you have a table with product prices per shop. Make shop\_id NULL and you have a default price for all shops, fill in a shop\_id then you have a shop-specific price.
NULLs often need special treatment, as with `IS` instead of `=`, outer joins, and constructs like `COALESCE(color, 'no color')` etc. This is neither good nor bad per se. If you want to count distinct favorite colors in your users table, it is good: NULL for "no favorite color" just won't be counted with `COUNT(DISTINCT color)`; you will only count actual favorite colors.
After all it's a decision to make. Do you need the distinction between "not known yet" and "known that no value applies"? Do you want to treat "no color" differently from "red"? `NULL` for "no value" *is* an option and it is often used as such. Decide whether it is good in your case. There is no rule saying `NULL` must never be used to represent "no value".
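A small runnable sketch of the `COALESCE` and `COUNT(DISTINCT ...)` points above, using SQLite from Python; the `users` table and its columns are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, color TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("Ann", "red"), ("Bob", None), ("Cid", "red"), ("Dee", "blue")])

# COALESCE supplies a display default without storing one in the table
rows = conn.execute(
    "SELECT name, COALESCE(color, 'no color') FROM users ORDER BY name"
).fetchall()

# COUNT(DISTINCT ...) skips NULLs: only actual favorite colors are counted
(distinct_colors,) = conn.execute(
    "SELECT COUNT(DISTINCT color) FROM users").fetchone()
```

Bob comes back as `('Bob', 'no color')`, and `distinct_colors` is 2 ('red' and 'blue'; the NULL row is ignored).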
|
What to use instead of Null if no data is present in SQL?
|
[
"",
"mysql",
"sql",
"null",
""
] |
I'm working on a query where I need to look at patient vitals (specifically blood pressure) that were entered when a patient visited a clinic. I'm pulling results for the entire year of 2015, and of course there are certain patients who visited multiple times, and I need to only see the vitals that were entered at the most recent visit. Another slight twist is that systolic and diastolic pressures are entered separately, so I end up with results like:
```
Patient ID Name DOB Test Results Date
---------------------------------------------------------------------------------
1000 John Smith 1/1/1955 BP - Diastolic 120 2/10/2015
1000 John Smith 1/1/1955 BP - Systolic 70 2/10/2015
1000 John Smith 1/1/1955 BP - Diastolic 128 7/12/2015
1000 John Smith 1/1/1955 BP - Systolic 75 7/12/2015
1000 John Smith 1/1/1955 BP - Diastolic 130 10/22/2015
1000 John Smith 1/1/1955 BP - Systolic 76 10/22/2015
9999 Jane Doe 5/4/1970 BP - Diastolic 130 4/2/2015
9999 Jane Doe 5/4/1970 BP - Systolic 60 4/2/2015
9999 Jane Doe 5/4/1970 BP - Diastolic 127 11/20/2015
9999 Jane Doe 5/4/1970 BP - Systolic 65 11/20/2015
```
There are 26,000+ results so obviously I don't want to go through every single patient and see when their most recent results were. I'd like my results to look like this:
```
Patient ID Name DOB Test Results Date
---------------------------------------------------------------------------------
1000 John Smith 1/1/1955 BP - Diastolic 130 10/22/2015
1000 John Smith 1/1/1955 BP - Systolic 76 10/22/2015
9999 Jane Doe 5/4/1970 BP - Diastolic 127 11/20/2015
9999 Jane Doe 5/4/1970 BP - Systolic 65 11/20/2015
```
I know the name and date of birth and whatnot would get repeated, but I'm mainly focused on the results column.
Here's my query:
```
SELECT DISTINCT
pd.PatientID as [Patient ID],
pd.PatientName as Name,
pd.DateOfBirth as DOB,
v.Test as Test,
v.Results as Results,
v.TestDate as Date
FROM PatientDemographic pd JOIN Vitals v ON pd.PatientID = v.PatientID
WHERE v.TestDate BETWEEN '01/01/2015' AND '12/31/2015'
AND v.Test LIKE 'BP%'
ORDER BY pd.PatientID, v.TestDate
```
After looking around for other answers, I tried doing a `GROUP BY` and the `MAX()` aggregate function for the `v.TestDate` column in the `SELECT` statement (I was specifically referencing [this link](https://stackoverflow.com/questions/189213/sql-selecting-rows-by-most-recent-date), though it's for Oracle and I'm using SQL Server so I'm not entirely sure if the syntax would be the same). My query then looked like:
```
SELECT DISTINCT
pd.PatientID as [Patient ID],
pd.PatientName as Name,
pd.DateOfBirth as DOB,
v.Test as Test,
v.Results as Results,
MAX(v.TestDate) as Date
FROM PatientDemographic pd JOIN Vitals v ON pd.PatientID = v.PatientID
WHERE v.TestDate BETWEEN '01/01/2015' AND '12/31/2015'
AND v.Test LIKE 'BP%'
GROUP BY pd.PatientID
```
Admittedly, I've always struggled a little with using `GROUP BY`. In this particular case, I get an error stating that I need to add the Patient Name column to the `GROUP BY` clause as well, so I do, and then it asks for DOB. Then the Test name. Basically, it wants me to add everything from my `SELECT` statement to the `GROUP BY`.
What's the best way to proceed and get my most recent patient visits?
|
One simple method uses `ROW_NUMBER()` to find the more recent record for each test:
```
SELECT pd.PatientID as [Patient ID], pd.PatientName as Name, pd.DateOfBirth as DOB,
v.Test as Test, v.Results as Results, v.TestDate as Date
FROM PatientDemographic pd JOIN
(SELECT v.*,
ROW_NUMBER() OVER (PARTITION BY PatientId, Test ORDER BY TestDate DESC) as seqnum
FROM Vitals v
WHERE v.TestDate BETWEEN '2015-01-01' AND '2015-12-31' AND
v.Test LIKE 'BP%'
) v
ON pd.PatientID = v.PatientID
WHERE seqnum = 1
ORDER BY pd.PatientID, v.TestDate;
```
|
You can also use a Common Table Expression (CTE) to achieve this.
```
IF OBJECT_ID('tempdb..#RecentPatientVitals') IS NOT NULL
DROP TABLE #RecentPatientVitals;
GO
CREATE TABLE #RecentPatientVitals
(
Patient_ID INT
, Name VARCHAR(100)
, DOB DATE
, Test VARCHAR(150)
, Results INT
, [Date] DATE
);
INSERT INTO #RecentPatientVitals
( Patient_ID, Name, DOB, Test, Results, [Date] )
VALUES ( 1000, 'John Smith', '1/1/1955', 'BP - Diastolic', 120, '2/10/2015' )
, ( 1000, 'John Smith', '1/1/1955', 'BP - Systolic', 70, '2/10/2015' )
, ( 1000, 'John Smith', '1/1/1955', 'BP - Diastolic', 128, '7/12/2015' )
, ( 1000, 'John Smith', '1/1/1955', 'BP - Systolic', 75, '7/12/2015' )
, ( 1000, 'John Smith', '1/1/1955', 'BP - Diastolic', 130, '10/22/2015' )
, ( 1000, 'John Smith', '1/1/1955', 'BP - Systolic', 76, '10/22/2015' )
, ( 9999, 'Jane Doe', '5/4/1970', 'BP - Diastolic', 130, '4/2/2015' )
, ( 9999, 'Jane Doe', '5/4/1970', 'BP - Systolic', 60, '4/2/2015' )
, ( 9999, 'Jane Doe', '5/4/1970', 'BP - Diastolic', 127, '11/20/2015' )
, ( 9999, 'Jane Doe', '5/4/1970', 'BP - Systolic', 65, '11/20/2015' );
SELECT *
FROM #RecentPatientVitals;
WITH PatVitals1
AS ( SELECT Patient_ID
, Name
, DOB
, Test
, MAX(Date) AS Date
FROM #RecentPatientVitals
GROUP BY Patient_ID
, Name
, DOB
, Test
) ,
PatVitals2
AS ( SELECT Patient_ID
, Test
, Results
, Date
FROM #RecentPatientVitals
)
SELECT P1.Patient_ID
, P1.Name
, P1.DOB
, P1.Test
, P2.Results
, P1.Date
FROM PatVitals1 P1
INNER JOIN PatVitals2 P2
ON P2.Patient_ID = P1.Patient_ID
AND P2.Date = P1.Date
AND P2.Test = P1.Test
GROUP BY P1.Patient_ID
, P1.Name
, P1.DOB
, P1.Test
, P2.Results
, P1.Date;
```
|
Find the most recent date in a result set
|
[
"",
"sql",
"sql-server",
"group-by",
"max",
"where-clause",
""
] |
I have simple table:
```
| Val1 | Val2 |
--------------------------
| 10 | 20 |
--------------------------
| 20 | 30 |
--------------------------
```
How can I select the data from one row of this table and get the result as a table where the first column holds the column names from the original table and the second column holds that row's values?
Like this:
```
| Name | value |
----------------
| val1 | 10 |
----------------
| val2 | 20 |
```
|
This SQL Script will create a temporary table, insert some values and unpivot the result.
```
CREATE TABLE #B
( VAL1 INT, VAL2 INT)
INSERT INTO #B VALUES(10,20),(20,30)
SELECT * FROM #B
SELECT U.NAME, U.VALUE
FROM (SELECT * FROM #B WHERE VAL1 = 10) AS SEL
UNPIVOT
(
VALUE
FOR NAME IN (VAL1, VAL2)
) U;
DROP TABLE #B
```
You did not explicitly specify how to "select data from one row", so I added a sample WHERE clause in a sub-select.
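Note that `UNPIVOT` is T-SQL-specific. On engines without it, a plain `UNION ALL` produces the same (name, value) shape; a sketch using SQLite from Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE b (val1 INT, val2 INT)")
conn.executemany("INSERT INTO b VALUES (?, ?)", [(10, 20), (20, 30)])

# Each branch turns one column into a (name, value) pair for the chosen row
rows = conn.execute("""
    SELECT 'val1' AS name, val1 AS value FROM b WHERE val1 = 10
    UNION ALL
    SELECT 'val2', val2 FROM b WHERE val1 = 10
    ORDER BY name
""").fetchall()
```

This is more verbose than `UNPIVOT` (one branch per column), but it is portable across databases.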
|
Consider this scenario :
Table 1:
```
DEPARTMENT EMPID ENAME SALARY
A/C 1 TEST1 2000
SALES 2 TEST2 3000
```
Table 2:
```
ColumnName 1 2
DEPARTMENT A/C SALES
EMPID 1 2
ENAME TEST1 TEST2
SALARY 2000 3000
```
If we are required to transform a result set from the Table1 format to the Table2 format:
How to display dynamically horizontal rows vertically:
To display horizontal rows vertically and dynamically, I have used the technique of dynamic unpivoting (using XQuery and the nodes() method) and then dynamic pivoting.
The code block below will transform a result set from the Table1 format to the Table2 format.
```
DECLARE @EMPLOYEE TABLE (DEPARTMENT VARCHAR(20),EMPID INT,ENAME VARCHAR(20),SALARY INT)
INSERT @EMPLOYEE SELECT 'A/C',01,'TEST1',2000
INSERT @EMPLOYEE SELECT 'SALES',02,'TEST2',3000
SELECT * FROM @EMPLOYEE
DECLARE @Xmldata XML = (SELECT * FROM @EMPLOYEE FOR XML PATH('') )
--Dynamic unpivoting
SELECT * INTO ##temp FROM (
SELECT
ROW_NUMBER()OVER(PARTITION BY ColumnName ORDER BY ColumnValue) rn,* FROM (
SELECT i.value('local-name(.)','varchar(100)') ColumnName,
i.value('.','varchar(100)') ColumnValue
FROM @xmldata.nodes('//*[text()]') x(i) ) tmp ) tmp1
--SELECT * FROM ##temp
--Dynamic pivoting
DECLARE @Columns NVARCHAR(MAX),@query NVARCHAR(MAX)
SELECT @Columns = STUFF(
(SELECT ', ' +QUOTENAME(CONVERT(VARCHAR,rn)) FROM
(SELECT DISTINCT rn FROM ##temp ) AS T FOR XML PATH('')),1,2,'')
SET @query = N'
SELECT ColumnName,' + @Columns + '
FROM
(
SELECT * FROM ##temp
) i
PIVOT
(
MAX(ColumnValue) FOR rn IN ('
+ @Columns
+ ')
) j ;';
EXEC (@query)
--PRINT @query
DROP TABLE ##temp
```
I hope this has helped you understand your problem.
|
How to select data from one row in sql as table?
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I wish to order the results of the query below by a substring of the first 8 characters of the 'version' number. I understand SUBSTRING(), so don't bother me with that. My problem is trying to actually place the ORDER BY with regard to the UNION.
UPDATE: I need the data returned in order of Version, but with rows that share the same GUID still kept together.
The current query is like this, but versions come in random order.
```
/**** PLAYER MATCHUPS TWO ***/
SELECT e.[GUID], e.[KEY], e.[VALUE]
FROM db e INNER JOIN
(SELECT[GUID] FROM db
WHERE[Key] = 'Session.Type' and[Value] = 'EndMatchTypA') g
ON e.[GUID] = g.[GUID]
WHERE [KEY] IN('CharacterID',
'OpponentID',
'Version')
UNION ALL
/**** PLAYER MATCHUPS ONE ***/
SELECT e.[GUID], e.[KEY], e.[VALUE]
FROM db e INNER JOIN
(SELECT[GUID] FROM db
WHERE[Key] = 'Session.Type' and [Value] = 'EndMatchTypeB') g
ON e.[GUID] = g.[GUID]
WHERE[KEY] IN('CharacterID',
'OpponentID',
'Version')
```
This is how the data returns at the moment.
```
GUID Key Value
-------------------------------------------
1313-2212 Version 3.0.4.0_x64_!#
1313-2212 CharacterID 3
1313-2212 OpponentID 5
4321-1567 Version 1.0.0.0_x64_!#
4321-1567 CharacterID 11
4321-1567 OpponentID 2
```
|
You need to do another `JOIN` to the `db` table to get the `Version` for each `[KEY]` sort by that:
```
SELECT
e.[GUID], e.[KEY], e.[VALUE]
FROM db e
INNER JOIN db g
ON e.[GUID] = g.[GUID]
AND g.[Key] = 'Session.Type'
AND g.[Value] IN ('EndMatchTypeA', 'EndMatchTypeB')
INNER JOIN (
SELECT [GUID], [KEY], [VALUE]
FROM db
WHERE
[KEY] = 'Version'
)v
ON e.[GUID] = v.[GUID]
WHERE
e.[KEY] IN ('CharacterID', 'OpponentID', 'Version')
ORDER BY
SUBSTRING(v.[VALUE], 1, 8);
```
---
You can also use `CROSS APPLY`
```
SELECT
e.[GUID], e.[KEY], e.[VALUE]
FROM db e
INNER JOIN db g
ON e.[GUID] = g.[GUID]
AND g.[Key] = 'Session.Type'
AND g.[Value] IN ('EndMatchTypeA', 'EndMatchTypeB')
CROSS APPLY (
SELECT [GUID], [KEY], [VALUE]
FROM db
WHERE
[KEY] = 'Version'
AND [GUID] = e.[GUID]
)v
WHERE
e.[KEY] IN ('CharacterID', 'OpponentID', 'Version')
ORDER BY
SUBSTRING(v.[VALUE], 1, 8);
```
*Credits to Gordon for getting rid of the `UNION ALL`.*
|
You can wrap the entire query and then `SELECT` from it using `ORDER BY`:
```
SELECT t.[GUID], t.[KEY], t.[VALUE]
FROM
(
SELECT e.[GUID], e.[KEY], e.[VALUE]
FROM db e INNER JOIN
(SELECT[GUID] FROM db
WHERE[Key] = 'Session.Type' and[Value] = 'EndMatchTypA') g
ON e.[GUID] = g.[GUID]
WHERE [KEY] IN('CharacterID',
'OpponentID',
                     'Version')
UNION ALL
SELECT e.[GUID], e.[KEY], e.[VALUE]
FROM db e INNER JOIN
(SELECT[GUID] FROM db
WHERE[Key] = 'Session.Type' and [Value] = 'EndMatchTypeB') g
ON e.[GUID] = g.[GUID]
WHERE[KEY] IN('CharacterID',
'OpponentID',
                    'Version')
) t
ORDER BY SUBSTRING(t.[VALUE], 1, 8)
```
Note: I tested ordering string version numbers, and the numerical order appears to remain intact. However, you should be aware that if your version number length should change, this could be a problem. For example, if versions could become two digits, this would break the `ORDER BY` I gave (as well as given by the other answers).
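To illustrate that warning, here is a quick Python sketch with invented version strings, showing how ordering them as text misplaces multi-digit components, and one way to sort them numerically instead:

```python
# Hypothetical version strings in the same shape as the question's data
versions = ["3.0.4.0_x64_!#", "10.0.0.0_x64_!#", "1.0.0.0_x64_!#"]

# Plain string sort: '1' < '3' as characters, so '10.*' lands before '3.*'
lexicographic = sorted(versions)

# Numeric sort: strip the suffix, split on '.', compare integer tuples
numeric = sorted(versions,
                 key=lambda v: tuple(int(p) for p in v.split("_")[0].split(".")))
```

`lexicographic` puts `10.0.0.0` before `3.0.4.0`, while `numeric` orders the versions as humans expect. In SQL you would get the same effect by casting each dotted component to a number before ordering.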
|
SQL: Order By, Substring, Union
|
[
"",
"sql",
"sql-order-by",
"union",
""
] |
I'm trying to insert the row below into Oracle database, but I get error
`[22008][1830] ORA-01830: date format picture ends before converting entire input string`
```
insert into tbl (coldate, start, end)
values (
TO_DATE('2005-03-04 02:04:30', 'YYYY-MM-DD HH24:MI:SS'),
TO_TIMESTAMP('2005-03-23 09:06:51.055000', 'YYYY-MM-DD HH24:MI:SS'),
TO_TIMESTAMP('2005-04-26 23:32:59.430000', 'YYYY-MM-DD HH24:MI:SS')
);
```
|
The error message tells you what you should do: your format mask is too "short" for the value you provide. You need to include the fractional seconds in the format mask for `to_timestamp()`
```
TO_TIMESTAMP('2005-03-23 09:06:51.055000', 'YYYY-MM-DD HH24:MI:SS.FF6')
```
For more details on the format model used in `to_timestamp()` please see the manual:
<https://docs.oracle.com/database/121/SQLRF/sql_elements004.htm#CDEHIFJA>
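As an analogy only (Python, not Oracle): `datetime.strptime` behaves the same way, failing when the format ends before the fractional seconds and succeeding once `%f`, the rough analogue of Oracle's `FF`, is included:

```python
from datetime import datetime

s = "2005-03-23 09:06:51.055000"

# Without the fractional part in the format, parsing fails: the format
# runs out before the whole input is consumed, much like ORA-01830.
err = None
try:
    datetime.strptime(s, "%Y-%m-%d %H:%M:%S")
except ValueError as e:
    err = str(e)

# Including %f parses the full string, fractional seconds and all.
ts = datetime.strptime(s, "%Y-%m-%d %H:%M:%S.%f")
```

The point is the same in both systems: the format mask must account for every character of the input.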
|
You are missing the fractional second specification in your format string:
```
insert into tbl (coldate, start, end)
values (
TO_DATE('2005-03-04 02:04:30', 'YYYY-MM-DD HH24:MI:SS'),
TO_TIMESTAMP('2005-03-23 09:06:51.055000', 'YYYY-MM-DD HH24:MI:SS.FF'),
-- Here ----------------------------------------------------------^
TO_TIMESTAMP('2005-04-26 23:32:59.430000', 'YYYY-MM-DD HH24:MI:SS.FF')
-- And here ------------------------------------------------------^
);
```
|
Inserting timestamp in Oracle
|
[
"",
"sql",
"oracle",
"datetime",
"timestamp",
""
] |
I have to search a table ("NamesAges") for the names and ages of a set of names. The problem is that the table has thousands of names, the set of names that I am searching with has hundreds, and not all the names in the set are in the table. How can I get an explicit NULL entry for the missing names?
Specifically:
```
NamesAges
=========
Allan 44
Brenda 33
Carl 21
Daniel 34
```
Set of Names == (Allan, Bonita, Chandra, Daniel)
I can do:
```
SELECT Name, Age
FROM [NamesAges]
WHERE Names IN ('Allan', 'Bonita', 'Chandra', 'Daniel')
```
but I want to get some indication that Bonita and Chandra are missing in the table.
|
One way to do it:
```
SELECT a.Name, b.Age, case when b.Name IS NULL THEN 'Missing' ELSE 'OK' End Status
FROM (
SELECT 'Allan' Name
UNION SELECT 'Bonita'
UNION SELECT 'Chandra'
UNION SELECT 'Daniel'
) a
LEFT JOIN [NamesAges] b ON b.Name = a.Name
```
|
Assuming the set of names is in another table, you could probably use a left join:
```
select s.name, n.age
from set_of_names s
left join names_and_ages n on s.name = n.name
```
This will give you all the names in the set of names, along with the age from the other table if it finds a name match (and null otherwise).
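A runnable sketch of this LEFT JOIN on the sample data, assuming the set of names lives in a table called `wanted` (an invented name):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE names_ages (name TEXT, age INT)")
conn.executemany("INSERT INTO names_ages VALUES (?,?)",
                 [("Allan", 44), ("Brenda", 33), ("Carl", 21), ("Daniel", 34)])
conn.execute("CREATE TABLE wanted (name TEXT)")   # the set of names to look up
conn.executemany("INSERT INTO wanted VALUES (?)",
                 [("Allan",), ("Bonita",), ("Chandra",), ("Daniel",)])

rows = conn.execute("""
    SELECT w.name, n.age
    FROM wanted w
    LEFT JOIN names_ages n ON n.name = w.name
    ORDER BY w.name
""").fetchall()
```

Names missing from the table come back with an explicit NULL age (`None` in Python), exactly the indication the question asks for.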
|
SQL List with empty items
|
[
"",
"sql",
""
] |
I have a table of names and I want to generate another column "Group" that increases by 1 every 5 records. Below is an example of the desired output.
```
Name Group
Joe 1
Frank 1
Susan 1
Tom 1
Kim 1
Mike 2
John 2
Henry 2
Rick 2
Quinn 2
```
|
Creating a CTE with a row number will help:
```
;WITH cte AS
(
SELECT
Name,
ROW_NUMBER() OVER (ORDER BY Name) AS RowNum
FROM YourTable
)
SELECT
Name,
(RowNum - 1) / 5 + 1 AS [Group]
FROM cte
```
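A runnable sketch of the `(RowNum - 1) / 5 + 1` bucketing with SQLite from Python (3.25+ for `ROW_NUMBER`); it orders by `rowid` to mirror the sample output's insertion order, whereas the answer above orders by Name:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # window functions need SQLite >= 3.25
conn.execute("CREATE TABLE t (name TEXT)")
names = ["Joe", "Frank", "Susan", "Tom", "Kim",
         "Mike", "John", "Henry", "Rick", "Quinn"]
conn.executemany("INSERT INTO t VALUES (?)", [(n,) for n in names])

# Integer division buckets rows 1-5 into group 1, rows 6-10 into group 2
rows = conn.execute("""
    WITH cte AS (
        SELECT name, ROW_NUMBER() OVER (ORDER BY rowid) AS rownum
        FROM t
    )
    SELECT name, (rownum - 1) / 5 + 1 AS grp FROM cte ORDER BY rownum
""").fetchall()
```

The `- 1` shift is what keeps row 5 in group 1 and row 6 in group 2; without it, every fifth row would spill into the next group.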
|
You could try something like this
```
select a.name,
       ceiling(a.myrow / 5.0) as [group]
from
  (select name, row_number() over (order by name) as myrow from YourTable) a
```
|
SQL Update Segment of 5 Records
|
[
"",
"sql",
"sql-update",
"row-number",
""
] |
I have the following query in Hive which throws "FAILED: SemanticException [Error 10017]: Line 4:28 Both left and right aliases encountered in JOIN 'status\_cd'".
The whole query seems to be correct; I executed a similar query in MySQL and it works fine. Only Hive throws the error.
Is there a limitation in Hive that is causing the problem?
Please look into below query, Any help will be appreciated.
```
INSERT INTO TABLE stg_dim_gate_package SELECT
`16_1693_418`.`package_id` AS `6896_package_id`,
`16_1723_432`.`status_cd` AS `7075_status_cd`,
`16_1723_432`.`load_dt` AS `7076_load_dt`,
`16_1723_432`.`mod_dt` AS `7077_mod_dt`,
(
COUNT(`16_1693_418`.`package_id`)
) AS `7078_package_count`
FROM
`16_1693_418`
LEFT JOIN `16_1723_432` ON `16_1693_418`.`candidateid` > `16_1723_432`.`status_cd`
GROUP BY
`16_1693_418`.`package_id`,
`16_1723_432`.`status_cd`,
`16_1723_432`.`load_dt`,
`16_1723_432`.`mod_dt`;
```
|
Hive currently supports only equi-joins. Thus you can't use `ON tableA.col1 > tableB.col2`; you can only do `ON tableA.col1 = tableB.col2`.
Then, to achieve your goal, you have to rewrite the query, using a case statement or a WHERE-clause condition to handle the non-equi join...
|
Hive supports only equi joins. Any condition other than an equality can be put in the WHERE clause.
|
Hive Error: FAILED: SemanticException [Error 10017]: Line 4:28 Both left and right aliases encountered in JOIN 'status_cd'
|
[
"",
"mysql",
"sql",
"hadoop",
"hive",
""
] |
I have one table named `tbservicecallallocation` and below is data for that table.
[](https://i.stack.imgur.com/wStg8.png)
From the above data I want, for each technician, the row with their `MAX AllocationTime`.
Below image shows what result i want..
[](https://i.stack.imgur.com/3GsDa.png)
Please help me to write MySQL SELECT query for retrieve above data.
|
Have a sub-query to return each technician's max AllocationTime. Join with that result:
```
select t1.*
from tbservicecallallocation t1
    join (select TechnicianIDF, max(AllocationTime) as MaxAllocationTime
          from tbservicecallallocation
          group by TechnicianIDF) t2
      on t1.TechnicianIDF = t2.TechnicianIDF
     and t1.AllocationTime = t2.MaxAllocationTime
```
Optionally add `ORDER BY` clause at the end:
```
ORDER BY AllocationStatus DESC
```
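The same max-per-group join can be sketched with SQLite from Python; the table and column names below are simplified stand-ins for `tbservicecallallocation`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE alloc (technician_id INT, allocation_time TEXT)")
conn.executemany("INSERT INTO alloc VALUES (?,?)", [
    (1, "2016-01-05 09:00"), (1, "2016-01-07 14:00"),
    (2, "2016-01-06 11:00"), (2, "2016-01-04 08:00"),
])

# The sub-query finds each technician's max time; the join keeps only
# the rows that carry that max time.
rows = conn.execute("""
    SELECT t1.technician_id, t1.allocation_time
    FROM alloc t1
    JOIN (SELECT technician_id, MAX(allocation_time) AS max_time
          FROM alloc GROUP BY technician_id) t2
      ON t1.technician_id = t2.technician_id
     AND t1.allocation_time = t2.max_time
    ORDER BY t1.technician_id
""").fetchall()
```

One row per technician comes back, each carrying that technician's latest allocation time.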
|
Try this:
```
SELECT *
FROM tbservicecallallocation t1
WHERE NOT EXISTS(
SELECT 'NEXT'
FROM tbservicecallallocation t2
WHERE t1.TechnicianIDF = t2.TechnicianIDF
AND t2.AllocationTime > t1.AllocationTime
)
```
|
How to use MySql MAX() in a WHERE clause
|
[
"",
"mysql",
"sql",
"select",
""
] |
I have a table similar to the one shown below.
```
-----------------------------
JOB ID | parameter | result |
-----------------------------
1 | xyz | 10 |
1 | abc | 15 |
2 | xyz | 12 |
2 | abc | 8 |
2 | mno | 20 |
-----------------------------
```
I want the result as shown below.
```
parameter | result 1 | result 2 |
----------------------------------
xyz | 10 | 12 |
mno | NULL | 20 |
abc | 15 | 8 |
----------------------------------
```
My goal is to have a single table which can compare the result values of two different jobs. It can be two or more jobs.
|
You want to simulate a pivot table, since MySQL doesn't have a PIVOT operator.
```
select
param,
max(case when id = 1 then res else null end) as 'result 1',
max(case when id = 2 then res else null end) as 'result 2'
from table
group by param
```
[SQL FIDDLE TO PLAY WITH](http://sqlfiddle.com/#!9/7c2c6d/6)
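A runnable sketch of this conditional-aggregation pivot on the sample data, using SQLite from Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (job_id INT, parameter TEXT, result INT)")
conn.executemany("INSERT INTO jobs VALUES (?,?,?)", [
    (1, "xyz", 10), (1, "abc", 15),
    (2, "xyz", 12), (2, "abc", 8), (2, "mno", 20),
])

# One MAX(CASE ...) column per job id; missing combinations come out NULL
rows = conn.execute("""
    SELECT parameter,
           MAX(CASE WHEN job_id = 1 THEN result END) AS result_1,
           MAX(CASE WHEN job_id = 2 THEN result END) AS result_2
    FROM jobs
    GROUP BY parameter
    ORDER BY parameter
""").fetchall()
```

`mno` has no result for job 1, so that cell comes out NULL (`None` in Python), matching the desired output in the question. Adding a third job means adding one more `MAX(CASE ...)` column.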
|
In Postgres, you can use something like:
```
select parameter, (array_agg(result))[1], (array_agg(result))[2] from my_table group by parameter;
```
The idea is: aggregate all the results for a given parameter into an array of results, and then fetch individual elements from those arrays.
I think that you can achieve something similar in MySQL by using `GROUP_CONCAT()`, although it returns a string instead of an array, so you cannot easily index it. But you can split by commas after that.
|
SQL Query to compare two values which are in the same column but returned by two different set of queries
|
[
"",
"mysql",
"sql",
""
] |
I'm trying to determine the minimum effective date of an employee's record in their most recent department (see data below). I'm looking to get 8/25/2014 as the result and would like some assistance from the experts out there on how to pull it off. In this data, the employee was in department 70260 for a while, transferred briefly to 70210, and then came back to 70260. The effective date of 8/25/2014 is the date the employee came back, and I'd like to see if there are any creative ways of writing the SQL to show the minimum date of someone in their most recent department. Thanks in advance!!!
```
╔═══════════╦════════════╦════════╗
║ EMPLID ║ EFFDT ║ DEPTID ║
╠═══════════╬════════════╬════════╣
║ 000123338 ║ 10/25/2015 ║ 70260 ║
║ 000123338 ║ 4/2/2015 ║ 70260 ║
║ 000123338 ║ 2/24/2015 ║ 70260 ║
║ 000123338 ║ 11/1/2014 ║ 70260 ║
║ 000123338 ║ 8/25/2014 ║ 70260 ║ <--- I need the SQL to show 8/25/2014
║ 000123338 ║ 4/27/2014 ║ 70210 ║
║ 000123338 ║ 3/16/2014 ║ 70210 ║
║ 000123338 ║ 3/6/2014 ║ 70260 ║
║ 000123338 ║ 11/1/2013 ║ 70260 ║
║ 000123338 ║ 1/24/2013 ║ 70260 ║
║ 000123338 ║ 1/1/2013 ║ 70260 ║
║ 000123338 ║ 11/1/2012 ║ 70260 ║
║ 000123338 ║ 11/1/2011 ║ 70260 ║
║ 000123338 ║ 8/1/2010 ║ 70260 ║
║ 000123338 ║ 8/5/2009 ║ 70260 ║
║ 000123338 ║ 7/1/2009 ║ 70260 ║
║ 000123338 ║ 7/7/2008 ║ 70260 ║
║ 000123338 ║ 5/5/2008 ║ 70260 ║
║ 000123338 ║ 1/1/2008 ║ 70260 ║
║ 000123338 ║ 10/29/2007 ║ 70260 ║
╚═══════════╩════════════╩════════╝
```
|
You need to find the first group of rows:
```
SELECT EMPLID, MIN(first_dates)
FROM
(
SELECT EMPLID, EFFDT, DEPTID,
CASE -- check if all previous rows return the same value
WHEN MIN(DEPTID) OVER (PARTITION BY EMPLID ORDER BY EFFDT DESC ROWS UNBOUNDED PRECEDING)
= MAX(DEPTID) OVER (PARTITION BY EMPLID ORDER BY EFFDT DESC ROWS UNBOUNDED PRECEDING)
THEN EFFDT
END AS first_dates -- all other rows return NULL
FROM tab
) dt
GROUP BY 1;
```
If you want the full row you need to add another ROW\_NUMBER:
```
SELECT ...
FROM
(
SELECT ...,
ROW_NUMBER() OVER (PARTITION BY EMPLID ORDER BY first_dates) AS rn
FROM
(
SELECT EMPLID, EFFDT, DEPTID,
CASE
WHEN MIN(DEPTID) OVER (PARTITION BY EMPLID ORDER BY EFFDT DESC ROWS UNBOUNDED PRECEDING)
= MAX(DEPTID) OVER (PARTITION BY EMPLID ORDER BY EFFDT DESC ROWS UNBOUNDED PRECEDING)
THEN EFFDT
END AS first_dates
FROM tab
) dt
) dt
WHERE rn = 1
```
|
You could do something like this
```
select * from
(select empid,effdt,deptid,
row_number() over ( partition by deptid order by effdt desc) as mostRecent
from yourtable) a
where a.mostRecent=1
```
|
How to find a specific date using Oracle SQL for employee in most current department?
|
[
"",
"sql",
"oracle",
"oracle11g",
"greatest-n-per-group",
""
] |
I am getting this error when executing a `stored procedure` that I have created
> Msg 241, Level 16, State 1, Procedure UDP\_INITIAL\_CUSTOMER\_DIM\_POPULATION, Line 28
> Conversion failed when converting date and/or time from character string.
The code for the procedure is:
```
CREATE PROCEDURE UDP_INITIAL_CUSTOMER_DIM_POPULATION
AS
BEGIN
SET NOCOUNT ON
EXEC UDP_REMOVE_NULLS_FROM_CUSTOMER_STG
INSERT INTO dbo.customer_dim(customer_no, first_name, middle_name, last_name,
street_number, street_name, po_address, zip_code, city,
region, country)
SELECT DISTINCT
CAST(customer_no AS VARCHAR),
first_name,
middle_name,
last_name,
street_number,
street_name,
po_address,
zip_code,
city,
region,
country
FROM
dbo.customer_stg
PRINT 'Initial customer data in place!'
SET NOCOUNT OFF
END
```
and the code to create the `customer_dim` table is:
```
CREATE PROCEDURE dbo.UDP_CREATE_CUSTOMER_DIM
AS
BEGIN
IF OBJECT_ID('customer_dim', 'U') IS NOT NULL
BEGIN
DROP TABLE customer_dim
END
CREATE TABLE [dbo].[customer_dim]
(
[customer_sk] INT IDENTITY (1,1) PRIMARY KEY,
[customer_no] [varchar](50) NULL,
[first_name] [varchar](50) NULL,
[middle_name] [varchar](50) NULL,
[last_name] [varchar](50) NULL,
[customer_name] [varchar](50) NULL,
[street_number] [varchar](50) NULL,
[street_name] [varchar](50) NULL,
[customer_address] [varchar](100) NULL,
[po_address] [varchar](50) NULL,
[zip_code] [varchar](50) NULL,
[city] [varchar](50) NULL,
[region] [varchar](50) NULL,
[country] [varchar](50) NULL,
[effective_date] DATE NULL DEFAULT GETDATE(),
[expiry_date] DATE NULL DEFAULT '999-12-31'
) ON [PRIMARY]
END
```
I have tried looking for a solution, but because I am not doing anything with dates or times, I cannot find anything that makes sense to me.
|
The error is caused by the default value for the `expiry_date`. Running
```
SELECT CAST('999-12-31'AS DATE)
```
will produce an error:
> Conversion failed when converting date and/or time from character
> string.
What you want is
```
[expiry_date] DATE NULL DEFAULT '9999-12-31'
```
|
I'm doing string-to-date conversion in SQL Server 2017, and it properly handles strings in m/d/yyyy format with single- or double-digit days and months and two- or four-digit years. For example, the following strings converted cleanly:
```
2/6/2012
1/20/2020
12/21/2017
6/14/21
```
In my case I found the problem to be a record with a value of '5/15/161'. To find the bad data, use the ISDATE() function. In my case:
```
SELECT [EMPLOYER NUMBER],[PLAN],[EMPLOYER NAME],[LAST UPDATED]
FROM [CONTACT NAME ADDRESS]
where [last updated] is not null
and isdate([last updated]) <> 1
```
|
SQL Server Msg 241, Level 16, State 1, Conversion failed when converting date and/or time from character string
|
[
"",
"sql",
"sql-server",
"type-conversion",
""
] |
I have 3 tables that all need to interact for a bike inventory: bike shop, shop inventory, and bike model. I want to find the cheapest bike in each shop's inventory. The code below gives me the result I expect, but I also want to show the bike name.
```
SELECT
BS.BIKESHOPNAME, MIN(BM.PRICE)
FROM
BIKESHOP BS, BIKEMODEL BM, SHOPINVENTORY SI
WHERE
SI.BIKESHOPID = BS.BIKESHOPID
AND SI.BIKEID = BM.BIKEID
GROUP BY
BS.BIKESHOPNAME;
```
If I change the select statement to look like:
```
SELECT BS.BIKESHOPNAME, BM.NAME, MIN(BM.PRICE)
```
I get too many results. Do I have to check that the name matches the bike model?
|
I think that this should solve your problem:
```
SELECT DISTINCT
BS.BIKESHOPNAME,
FIRST_VALUE(BM.NAME) OVER (PARTITION BY BS.BIKESHOPNAME ORDER BY BM.PRICE ASC),
FIRST_VALUE(BM.PRICE) OVER (PARTITION BY BS.BIKESHOPNAME ORDER BY BM.PRICE ASC)
FROM BIKESHOP BS, BIKEMODEL BM, SHOPINVENTORY SI
WHERE SI.BIKESHOPID = BS.BIKESHOPID AND SI.BIKEID = BM.BIKEID
```
I'm not sure that this is the best solution, but I think it will do. It can also be solved using sub-queries, but I think this one is better.
**UPDATE:**
you can use sub-queries like this:
```
SELECT
    BS.BIKESHOPNAME,
    (SELECT TOP 1 BM.NAME FROM BIKEMODEL BM, SHOPINVENTORY SII WHERE SII.BIKEID = BM.BIKEID AND SII.BIKESHOPID = BS.BIKESHOPID ORDER BY BM.PRICE ASC) BIKENAME,
    (SELECT TOP 1 BM.PRICE FROM BIKEMODEL BM, SHOPINVENTORY SII WHERE SII.BIKEID = BM.BIKEID AND SII.BIKESHOPID = BS.BIKESHOPID ORDER BY BM.PRICE ASC) PRICE
FROM
BIKESHOP BS
```
|
Give this a try maybe? This is an example of how you can use **join** as well :)
```
SELECT BS.BIKESHOPNAME, MIN(BM.PRICE), BM.NAME
FROM
BIKESHOP AS BS
INNER JOIN
SHOPINVENTORY AS SI
ON SI.BIKESHOPID = BS.BIKESHOPID
INNER JOIN
BIKEMODEL AS BM
ON BM.BIKEID = SI.BIKEID
GROUP BY BS.BIKESHOPNAME, BM.NAME;
```
|
Adding an extra SQL select statement is returning too many results
|
[
"",
"sql",
"select",
""
] |
I am trying to select all records in TABLEC and their equivalent values in TABLEA or TABLEB using a right join. I am using MySQL 5.5.47.
--Table data as follows
```
TABLEA TABLEB TABLEC
ID FNAME ID MNAME ID LNAME
0 ANOOP 0 N 0 SINGH
1 BIMA 2 SITA 3 RAJ
4 CIMI 4 B 5 KUMAR
6 RAVI 5 A 6 D
```
--Using below query and trying to select all records in TABLEC and its equivalent value in TABLEA or TABLEB
```
SELECT A.FNAME, B.MNAME, C.LNAME
FROM TABLEA AS A
RIGHT JOIN TABLEB AS B ON A.ID = B.ID
RIGHT JOIN TABLEC AS C ON C.ID = B.ID
```
--I am getting the following result
```
ANOOP N SINGH
NULL NULL RAJ
NULL A KUMAR
***NULL*** NULL D
```
The highlighted value doesn't show 'RAVI'; instead it shows NULL in MySQL 5.5.47. I tried to modify the '=' condition in the second join relating C and A, but still no luck. What am I doing wrong here? How do I get the value 'RAVI' in place of NULL? Any suggestion would be highly helpful.
|
```
DROP TABLE IF EXISTS table_a;
DROP TABLE IF EXISTS table_b;
DROP TABLE IF EXISTS table_c;
CREATE TABLE table_a
(id INT NOT NULL PRIMARY KEY
,fname VARCHAR(12) NULL
);
INSERT INTO table_a VALUES
(0,'ANOOP'),
(1,'BIMA'),
(4,'CIMI'),
(6,'RAVI');
CREATE TABLE table_b
(id INT NOT NULL PRIMARY KEY
,mname VARCHAR(12) NULL
);
INSERT INTO table_b VALUES
(0,'N'),
(2,'SITA'),
(4,'B'),
(5,'A');
CREATE TABLE table_c
(id INT NOT NULL PRIMARY KEY
,lname VARCHAR(12) NULL
);
INSERT INTO table_c VALUES
(0,'SINGH'),
(3,'RAJ'),
(5,'KUMAR'),
(6,'D');
SELECT a.fname
, b.mname
, c.lname
FROM table_c c
LEFT
JOIN table_a a
ON a.id = c.id
LEFT
JOIN table_b b
ON b.id = c.id;
+-------+-------+-------+
| fname | mname | lname |
+-------+-------+-------+
| ANOOP | N | SINGH |
| NULL | NULL | RAJ |
| NULL | A | KUMAR |
| RAVI | NULL | D |
+-------+-------+-------+
4 rows in set (0.02 sec)
```
|
You are trying to select all records in TABLEC and their equivalent values in TABLEA or TABLEB. So TABLEA and TABLEB should be joined to TABLEC's records, which means a LEFT JOIN starting from TABLEC (you get all records of TABLEC plus the matching records of TABLEA and TABLEB). For more info please refer to this [link](https://en.wikipedia.org/wiki/Join_(SQL))
```
SELECT
    ifnull(A.FNAME,""),
    ifnull(B.MNAME,""),
    ifnull(C.LNAME,"")
FROM
    TABLEC AS C
LEFT JOIN
    TABLEA AS A
ON
    A.ID = C.ID
LEFT JOIN
    TABLEB AS B
ON
    B.ID = C.ID
```
|
Two right joins
|
[
"",
"mysql",
"sql",
""
] |
I want to delete row and I get this error:
> the row values updated or deleted either do not make the row unique or
> they alter multiple rows
[](https://i.stack.imgur.com/c1TVX.png)
|
There are duplicate rows in your table. When this is the case you cannot edit the table using the UI. First delete rows with matching data using SQL, then try to edit. Delete rows with matching data one by one until you are left with one row. Use the following query to delete a matching row where the column IdSeminar has the value 1:
```
Delete top(1) from tab where IdSeminar=1
```
do the same with other matching rows.
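For reference, the same one-row-at-a-time deletion can be sketched in standalone SQL. This is only a hedged SQLite sketch (table and column names taken from the query above; SQLite has no `DELETE TOP(1)`, so the implicit `rowid` is used to target a single duplicate instead):

```python
import sqlite3

# SQLite stand-in for DELETE TOP(1): delete exactly one of the duplicate rows
# by targeting the smallest rowid among the matches.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tab (IdSeminar INTEGER)")
conn.executemany("INSERT INTO tab VALUES (?)", [(1,), (1,), (1,), (2,)])

conn.execute(
    "DELETE FROM tab WHERE rowid = "
    "(SELECT MIN(rowid) FROM tab WHERE IdSeminar = 1)"
)
remaining = conn.execute(
    "SELECT COUNT(*) FROM tab WHERE IdSeminar = 1"
).fetchone()[0]
print(remaining)  # two of the three duplicates are left
```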
|
This might be a bit late but it could help someone. I ran into the same issue today but Akshey's code didn't work for me. My database table didn't contain an ID column, so I added one in and set its 'Identity Specification' to 'Yes'. I reloaded the table with this new column in it and was then able to delete whichever rows I wanted. Afterwards I deleted the ID column, reloaded the table, and everything was fine.
|
the row values updated or deleted either do not make the row unique or they alter multiple rows
|
[
"",
"sql",
"sql-server",
""
] |
I have the following tables:
```
table1
========
rpid | fname | lname | tu | fu | tu_id | start_time
table2
========
tu_id | tu | fu | start_time
```
I want to populate table1's tu, fu and tu\_id using the matching records in table2. I match them based on a time stamp.
```
UPDATE table1
INNER JOIN table1
ON date_trunc('hour', table1.start_time) date_trunc('hour', table2.start_time) AND table1.rpid=table2.tu
SET table1.tu_id= table2.tu_id, table1.fu = table2.fu, table1.tu=table2.tu;
```
I'm getting a syntax error right now like this:
ERROR: syntax error at or near "INNER"
LINE 1: UPDATE table1 INNER JOIN table1 on date\_trunc('hour', table1.s...
I've never tried something like this before, so I'm sure I'm missing something obvious.
Any suggestions?
I know that the INNER JOIN itself is correct because I tried it first in a SELECT statement. It returns the data I want... Now I just to update the fields in table1.
thanks.
EDIT 1
Also just tried this:
```
UPDATE table1
SET tu_id, fu, tu FROM (
SELECT table2.tu_id, table2.fu, table2.tu
FROM table1, INNER JOIN table1 on date_trunc('hour', table1.start_time) = date_trunc('hour', table2.start_time) AND table1.rpid=table2.tu
);
```
That gives me the syntax error:
```
ERROR: syntax error at or near ","
LINE 1: UPDATE table1 SET tu_id, fu, tu FROM ( SELECT table2.t...
```
|
```
UPDATE table1 SET tu_id= table2.tu_id, fu = table2.fu, tu=table2.tu
from table2
where
date_trunc('hour', table1.start_time) = date_trunc('hour', table2.start_time) and table1.rpid=table2.tu
;
```
|
You cannot update multiple columns with only one select before postgres 9.5 (the last stable version)
So the syntax will be something like :
Before 9.5:
```
UPDATE table1
SET tu_id = (SELECT table2.tu_id
FROM table1
INNER JOIN table2
ON date_trunc('hour', table1.start_time) = date_trunc('hour', table2.start_time)
AND table1.rpid=table2.tu),
fu = (SELECT table2.fu
FROM table1
INNER JOIN table2
ON date_trunc('hour', table1.start_time) = date_trunc('hour', table2.start_time)
AND table1.rpid=table2.tu),
tu = (SELECT table2.tu
FROM table1
INNER JOIN table2
ON date_trunc('hour', table1.start_time) = date_trunc('hour', table2.start_time)
AND table1.rpid=table2.tu);
```
And 9.5+ :
```
UPDATE table1
SET (tu_id, fu, tu) = (
SELECT table2.tu_id, table2.fu, table2.tu
FROM table1
INNER JOIN table2
ON date_trunc('hour', table1.start_time) = date_trunc('hour', table2.start_time)
AND table1.rpid=table2.tu
);
```
Edit :
I forgot the UPDATE ... FROM ..., thanks to other repliers for refreshing that in my memory !
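For what it's worth, the per-column-subquery style can be sketched in runnable form. This is only a hedged SQLite sketch (sample data invented for the sketch, and `strftime('%Y-%m-%d %H', ...)` standing in for `date_trunc('hour', ...)`); note that each subquery is correlated to the row being updated, otherwise every row would receive the same value:

```python
import sqlite3

# Multi-column update via one correlated subquery per column (portable style).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (rpid INT, tu INT, fu INT, tu_id INT, start_time TEXT);
CREATE TABLE table2 (tu_id INT, tu INT, fu INT, start_time TEXT);
INSERT INTO table1 VALUES (7, NULL, NULL, NULL, '2016-05-01 10:15:00');
INSERT INTO table2 VALUES (42, 7, 99, '2016-05-01 10:45:00');
""")

conn.executescript("""
UPDATE table1 SET
  tu_id = (SELECT t2.tu_id FROM table2 t2
           WHERE strftime('%Y-%m-%d %H', t2.start_time) =
                 strftime('%Y-%m-%d %H', table1.start_time)
             AND table1.rpid = t2.tu),
  fu    = (SELECT t2.fu FROM table2 t2
           WHERE strftime('%Y-%m-%d %H', t2.start_time) =
                 strftime('%Y-%m-%d %H', table1.start_time)
             AND table1.rpid = t2.tu),
  tu    = (SELECT t2.tu FROM table2 t2
           WHERE strftime('%Y-%m-%d %H', t2.start_time) =
                 strftime('%Y-%m-%d %H', table1.start_time)
             AND table1.rpid = t2.tu);
""")
row = conn.execute("SELECT tu_id, fu, tu FROM table1").fetchone()
print(row)  # (42, 99, 7)
```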
|
how to insert bulk data based on bulk select
|
[
"",
"sql",
"postgresql",
"sql-update",
""
] |
In my database there are 3 column which is Name, Age, Gender.
In the program, I only want to use 1 search button. When the button is clicked, the program determines which of the 3 textboxes have input and searches for the right data.
How do you work with the query? For example if Name and Gender has text, the query :
"`Select * from table Where (Name = @name) AND (Gender = @gender)`"
And when only the name is entered, I only query for the name. Must I check textbox by textbox whether there is user input and then write a separate query for each combination? Or is there a better way to do this?
Edit (29/5/16) : I tried doing this another way like this
```
myCommand = New MySqlCommand("Select * from project_record Where
(FloatNo = @floatNo OR FloatNo = 'None') AND
(DevCompanyName = @devCompanyName OR DevCompanyName = 'None') AND
(DevType = @devType OR DevType = 'None') AND
(LotPt = @lotPt OR LotPt = 'None') AND
(Mukim = @mukim OR Mukim = 'None') AND
(Daerah = @daerah OR Daerah = 'None') AND
(Negeri = @negeri OR Negeri = 'None') AND
(TempReference = @tempRef OR TempReference = 'None')", sqlConn)
```
But as you can already guess, this does not work either, because if I only enter input for `DevType` and leave the other textboxes blank, the query will not pull up all the records for `DevType`. It just displays no records.
|
## Original answer
(scroll down to see update)
Can you try the following:
* build a list only including values of the textboxes that have an input
* set a string of the join the items of that list together with the " AND " string
* append that string to your standard SELECT statement
The code looks like this:
```
Private Sub Button1_Click(sender As Object, e As EventArgs) Handles Button1.Click
Dim Predicate1 As String = Me.TextBox1.Text
Dim Predicate2 As String = Me.TextBox2.Text
Dim Predicate3 As String = Me.TextBox3.Text
Dim PredicateList As New List(Of String)
Dim WhereClause As String
Dim Query As String
If Predicate1 <> String.Empty Then
PredicateList.Add("Name=""" & Predicate1 & """")
End If
If Predicate2 <> String.Empty Then
PredicateList.Add("Age=""" & Predicate2 & """")
End If
If Predicate3 <> String.Empty Then
PredicateList.Add("Gender=""" & Predicate3 & """")
End If
WhereClause = String.Join(" AND ", PredicateList.ToArray)
Query = "SELECT * FROM TABLE WHERE " & WhereClause
MessageBox.Show(Query)
End Sub
```
## Update
Further to the comments re SQL injection, here is an updated sample.
```
Dim Command As SqlClient.SqlCommand
Dim Predicate1 As String = Me.TextBox1.Text
Dim Predicate2 As String = Me.TextBox2.Text
Dim Predicate3 As String = Me.TextBox2.Text
Dim ParameterList As New List(Of SqlClient.SqlParameter)
Dim PredicateList As New List(Of String)
Dim BaseQuery As String = "SELECT * FROM TABLE WHERE "
If Predicate1 <> String.Empty Then
PredicateList.Add("name = @name")
ParameterList.Add(New SqlClient.SqlParameter("@name", Predicate1))
End If
If Predicate2 <> String.Empty Then
PredicateList.Add("age = @age")
ParameterList.Add(New SqlClient.SqlParameter("@age", Predicate2))
End If
If Predicate3 <> String.Empty Then
PredicateList.Add("gender = @gender")
ParameterList.Add(New SqlClient.SqlParameter("@gender", Predicate3))
End If
Command = New SqlClient.SqlCommand(BaseQuery & String.Join(" AND ", PredicateList.ToArray))
Command.Parameters.AddRange(ParameterList.ToArray)
```
|
```
Select * from table
Where (Name = @name OR @name is Null)
AND (Gender = @gender OR @gender is Null)
...
```
it should be one query
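A hedged, runnable sketch of this catch-all pattern (SQLite via Python, with invented sample data; pass `None`/`NULL` for every textbox the user left blank):

```python
import sqlite3

# "(col = :p OR :p IS NULL)" lets one query serve any combination of filters.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (Name TEXT, Age INTEGER, Gender TEXT)")
conn.executemany("INSERT INTO people VALUES (?, ?, ?)", [
    ("Alice", 30, "F"),
    ("Bob", 25, "M"),
    ("Alice", 41, "F"),
])

query = """
SELECT Name, Age, Gender FROM people
WHERE (Name = :name OR :name IS NULL)
  AND (Age = :age OR :age IS NULL)
  AND (Gender = :gender OR :gender IS NULL)
"""
# Only the name textbox was filled in; the other two filters fall away.
rows = conn.execute(query,
                    {"name": "Alice", "age": None, "gender": None}).fetchall()
print(rows)  # both Alice rows, nothing else
```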
|
How to implement a more efficient search feature?
|
[
"",
"mysql",
"sql",
"database",
"vb.net",
""
] |
I am trying to measure the growth over time of some sales data. I've grouped current and previous sales together based on the last n months, yielding the following (abbreviated) table:
```
+-------------------------------+---------------+-------------+
| CUSTOMER_NAME | TIMEFRAME | GROSS_SALES |
+-------------------------------+---------------+-------------+
| SALLY'S SALES INC | CURRENT | 1207.76 |
| SALLY'S SALES INC | PREVIOUS | 8139.49 |
| DAVES PRODUCTS LLC | CURRENT | 909.92 |
| DAVES PRODUCTS LLC | PREVIOUS | 2867.41 |
| MEGACORP | CURRENT | 8369.05 |
| MEGACORP | PREVIOUS | 19123.75 |
+-------------------------------+---------------+-------------+
```
I'm trying to calculate the quotient between the current and previous sales, something like
```
+-------------------------------+----------------+
| CUSTOMER_NAME | SALES_FACTOR |
+-------------------------------+----------------+
| SALLY'S SALES INC | 0.148382761082 |
| DAVES PRODUCTS LLC | 0.317331668649 |
| MEGACORP | 0.690302092999 |
+-------------------------------+----------------+
```
But, I can't figure out a query which correctly divides the two values for each customer name.
|
You can use `GROUP BY` to get one row per customer, then `SUM(CASE)` to pick out the current and previous values.
This has the advantage of only scanning the table once, rather than needing a join.
```
SELECT
CUSTOMER_NAME,
SUM(CASE WHEN TIMEFRAME = 'CURRENT' THEN GROSS_SALES END)
/
SUM(CASE WHEN TIMEFRAME = 'PREVIOUS' THEN GROSS_SALES END) AS SALES_FACTOR
FROM
yourTable
GROUP BY
CUSTOMER_NAME
```
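As a runnable check against the sample data (SQLite via Python; the table name `sales` is invented for the sketch):

```python
import sqlite3

# SUM(CASE ...) pivots the CURRENT/PREVIOUS rows onto one line per customer,
# then the two sums are divided in place.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE sales (CUSTOMER_NAME TEXT, TIMEFRAME TEXT, GROSS_SALES REAL)"
)
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
    ("SALLY'S SALES INC", "CURRENT", 1207.76),
    ("SALLY'S SALES INC", "PREVIOUS", 8139.49),
    ("DAVES PRODUCTS LLC", "CURRENT", 909.92),
    ("DAVES PRODUCTS LLC", "PREVIOUS", 2867.41),
    ("MEGACORP", "CURRENT", 8369.05),
    ("MEGACORP", "PREVIOUS", 19123.75),
])

factors = dict(conn.execute("""
    SELECT CUSTOMER_NAME,
           SUM(CASE WHEN TIMEFRAME = 'CURRENT'  THEN GROSS_SALES END) /
           SUM(CASE WHEN TIMEFRAME = 'PREVIOUS' THEN GROSS_SALES END)
    FROM sales
    GROUP BY CUSTOMER_NAME
""").fetchall())
print(round(factors["SALLY'S SALES INC"], 6))  # 0.148383
```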
|
Use the table twice with different names...
```
select CUSTOMER_NAME, t1.TIMEFRAME/t2.TIMEFRAME
from TABLE_NAME as t1, TABLE_NAME as t2
where t1.CUSTOMER_NAME = t2.CUSTOMER_NAME
and t1.TIMEFRAME = 'PREVIOUS'
and t2.TIMEFRAME = 'CURRENT'
```
|
Dividing column values based on other column values
|
[
"",
"mysql",
"sql",
""
] |
This query is working good, but the issue is I need to get the room number which is in another table called `rooms`. The query below fetched data from `reservations` table which only has `room_id`:
```
$select_table = "SELECT id, name, phone, email, address, country, adults, childs, purpose, booking_type, remarks, checkin, checkout, room_id, status, paid, date FROM reservations WHERE DATE_FORMAT(date, '%Y-%m') = '$date' ORDER BY id DESC";
$stmt = $db->prepare($select_table);
$stmt->execute();
$rows = $stmt->fetch(PDO::FETCH_ASSOC);
if ($rows)
{
getcsv(array_keys($rows));
}
while($rows)
{
getcsv($rows);
$rows = $stmt->fetch(PDO::FETCH_ASSOC);
}
```
How on earth can I get the column `number` inside the above query from the `rooms` table?
|
`Join` the rooms table!
```
SELECT r.id, r.name, r.phone, r.email, r.address, r.country, r.adults, r.childs, r.purpose,
r.booking_type, r.remarks, r.checkin, r.checkout, r.room_id, r.status, r.paid, r.date,
rooms.number
FROM reservations r
JOIN rooms ON r.room_id = rooms.id
WHERE DATE_FORMAT(r.date, '%Y-%m') = '$date'
ORDER BY r.id DESC
```
(I got tired of typing `reservations` over and over again, so I created a table alias `r` for that table.)
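A hedged runnable sketch of the join (SQLite via Python; `strftime('%Y-%m', ...)` stands in for MySQL's `DATE_FORMAT(date, '%Y-%m')`, and the table columns are trimmed down for the sketch):

```python
import sqlite3

# Join reservations to rooms on the foreign key to pull rooms.number along.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE rooms (id INTEGER PRIMARY KEY, number TEXT);
CREATE TABLE reservations (id INTEGER PRIMARY KEY, name TEXT,
                           room_id INTEGER, date TEXT);
INSERT INTO rooms VALUES (1, '101'), (2, '202');
INSERT INTO reservations VALUES
    (10, 'Ann', 1, '2016-05-02 14:00:00'),
    (11, 'Bob', 2, '2016-04-30 09:00:00');
""")

rows = conn.execute("""
    SELECT r.id, r.name, rooms.number
    FROM reservations r
    JOIN rooms ON r.room_id = rooms.id
    WHERE strftime('%Y-%m', r.date) = ?
    ORDER BY r.id DESC
""", ("2016-05",)).fetchall()
print(rows)  # [(10, 'Ann', '101')]
```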
|
Change the query to include the room number by joining the two tables:
```
$query = "SELECT id, name, phone, email, address, country, adults, childs, purpose, booking_type, remarks, checkin, checkout, room_id, status, paid, date , ro.number AS number FROM reservations AS r JOIN rooms AS ro ON r.room_id=ro.id WHERE DATE_FORMAT(date, '%Y-%m') = '$date' ORDER BY id DESC";
```
Assuming that the `rooms` table has an `id` field and that `reservation.room_id` is a foreign key referencing `rooms.id` the query will work.
|
Fetch value from another table inside this query
|
[
"",
"mysql",
"sql",
""
] |
I am a newbie to SQL. I got stuck in this question.
I have 2 SQL tables: users with (name, userid) and blacklistID with (userid). I want to create an SQL query that will only allow non-blacklisted users to log in.
Any suggestions are highly welcome. Thanks!
|
You can join the `users` and `blacklist` tables together, and then check to make sure that a given user did not match the blacklist:
```
SELECT u.name, u.userid
FROM users u LEFT JOIN blacklist b
ON u.userid = b.userid
WHERE u.name = 'username' AND b.userid IS NULL
```
If the above query returns a record for a given `username` then it means he is not blacklisted.
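A runnable sketch of that check (SQLite via Python, with invented sample rows):

```python
import sqlite3

# LEFT JOIN anti-join: a user who matches the blacklist gets a non-NULL
# b.userid and is filtered out by "b.userid IS NULL".
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (name TEXT, userid INTEGER);
CREATE TABLE blacklist (userid INTEGER);
INSERT INTO users VALUES ('alice', 1), ('mallory', 2);
INSERT INTO blacklist VALUES (2);
""")

def may_log_in(username):
    row = conn.execute("""
        SELECT u.name, u.userid
        FROM users u LEFT JOIN blacklist b ON u.userid = b.userid
        WHERE u.name = ? AND b.userid IS NULL
    """, (username,)).fetchone()
    return row is not None

print(may_log_in("alice"), may_log_in("mallory"))  # True False
```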
However, I think a better database design would be to add a column to your original `users` table (call it `isActive`) which will monitor whether or not a user's account has become inactive ("blacklisted"). In this case, you can do a simple query only on the `users` table to get what you need:
```
SELECT u.name, u.userid
FROM users u
WHERE u.name = 'username' AND u.isActive
```
|
You can use a `LEFT OUTER JOIN` to return your condition in the form of a variable using:
```
SELECT
u.userID,
isOnBlackList=CASE WHEN b.userID IS NULL THEN 1 ELSE 0 END
FROM
users u
LEFT OUTER JOIN blacklistID b ON b.userID=u.userID
WHERE
u.userID=@UserID
```
Or, if you wish to return an empty set or null to indicate "no userID exists" or "the user is invalid because they are on the blacklist", then use a condition in your WHERE clause to exclude blacklist matches.
```
SELECT
u.userID
FROM
users u
LEFT OUTER JOIN blacklistID b ON b.userID=u.userID
WHERE
b.userID IS NULL
AND
u.userID=@UserID
```
|
Create an SQL query which allows only specific users to login
|
[
"",
"sql",
"mysqli",
""
] |
Here's the situation:
So, in my database, a person is "responsible" for job X and "linked" to job Y. What I want is a query that returns: the name of the person, their ID, and the number of jobs they are linked to or responsible for. So far I got this:
```
select id_job, count(id_job) number_jobs
from
(
select responsible.id
from responsible
union all
select linked.id
from linked
GROUP BY id
) id_job
GROUP BY id_job
```
And it returns a table with the id in the first column and the number of occurrences in the second. Now, what I can't do is associate the name of the person with the table. When I put that in the "select" at the beginning it gives me all the possible combinations... How can I solve this? Thanks in advance!
Example data and desirable output:
```
| Person |
id | name
1 | John
2 | Francis
3 | Chuck
4 | Anthony
| Responsible |
process_no | id
100 | 2
200 | 2
300 | 1
400 | 4
| Linked |
process_no | id
101 | 4
201 | 1
301 | 1
401 | 2
OUTPUT:
| OUTPUT |
id | name | number_jobs
1 | John | 3
2 | Francis | 3
3 | Chuck | 0
4 | Anthony | 2
```
|
Try this way
```
select prs.id, prs.name, count(*) from Person prs
join(select process_no, id
from Responsible res
Union all
select process_no, id
from Linked lin ) a on a.id=prs.id
group by prs.id, prs.name
```
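As a runnable check against the sample data (SQLite via Python). One caveat: with the inner join above, people with no jobs at all (Chuck) drop out entirely, so this sketch deliberately switches to a `LEFT JOIN` onto the `UNION ALL` so zero counts survive, which matches the desired output:

```python
import sqlite3

# COUNT(j.id) counts only matched rows, so unmatched people get 0.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Person (id INT, name TEXT);
CREATE TABLE Responsible (process_no INT, id INT);
CREATE TABLE Linked (process_no INT, id INT);
INSERT INTO Person VALUES (1,'John'),(2,'Francis'),(3,'Chuck'),(4,'Anthony');
INSERT INTO Responsible VALUES (100,2),(200,2),(300,1),(400,4);
INSERT INTO Linked VALUES (101,4),(201,1),(301,1),(401,2);
""")

rows = conn.execute("""
    SELECT p.id, p.name, COUNT(j.id) AS number_jobs
    FROM Person p
    LEFT JOIN (SELECT id FROM Responsible
               UNION ALL
               SELECT id FROM Linked) j ON j.id = p.id
    GROUP BY p.id, p.name
    ORDER BY p.id
""").fetchall()
print(rows)
# [(1, 'John', 3), (2, 'Francis', 3), (3, 'Chuck', 0), (4, 'Anthony', 2)]
```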
|
```
select id, name, count(process_no) FROM (
select pr.id, pr.name, res.process_no from Person pr
LEFT JOIN Responsible res on pr.id = res.id
UNION
select pr.id, pr.name, lin.process_no from Person pr
LEFT JOIN Linked lin on pr.id = lin.id) src
group by id, name
order by id
```
Query ain't tested, give it a shot, but this is the way you want to go
|
Count how many times a value appears in tables SQL
|
[
"",
"sql",
""
] |
I have some search filters on my project. You can search Users by roles. If you search for `:admin` it will only return Users with role: `[:admin]`. But it WONT add the users to the search results with role: `[:admin, :mentor]`.
**Does anyone know how to make an SQL query that will find me all Users that have a role admin, they must include Users that have a role: [:admin, :mentor] as these users are still admin.**
```
20: custom_filter :role, lambda { |records, value|
=> 21: binding.pry
22: records.where(roles_mask: User.mask_for(value))
23: }
24:
25: class << self
26: def secure_find!(token:)
4.2.5.1@2.2.4 (User)> records
=> User(id: integer, email: string, encrypted_password: string, reset_password_token: string, roles_mask: integer)
4.2.5.1@2.2.4 (User)> value
=> "admin"
```
Thank you!
|
You are probably passing only one role in `value`:
```
records.where(roles_mask: User.mask_for(value))
```
Make sure that `value` includes both:
```
records.where(roles_mask: User.mask_for([:admin, :mentor]))
```
|
I found a way to do this
```
custom_filter :role, lambda { |records, value|
records.where("roles_mask & ? != 0", User.mask_for(value))
}
```
This will return all `Users` that have the role `admin`. Even if the User has 2 or even 3 roles, like `[:admin, :mentor]`, the user will be included in the result.
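A hedged sketch of the bitmask test in plain SQL (SQLite via Python; the mask values 1 for `:admin` and 2 for `:mentor` are invented for the sketch, since the real values come from `User.mask_for`):

```python
import sqlite3

# roles_mask is a bit field; "roles_mask & mask != 0" matches every user
# whose mask has the admin bit set, regardless of any other roles.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT, roles_mask INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [
    ("admin_only@example.com", 1),    # [:admin]
    ("mentor_only@example.com", 2),   # [:mentor]
    ("admin_mentor@example.com", 3),  # [:admin, :mentor]
])

admin_mask = 1  # hypothetical User.mask_for(:admin)
admins = [r[0] for r in conn.execute(
    "SELECT email FROM users WHERE roles_mask & ? != 0", (admin_mask,)
)]
print(admins)  # both the pure admin and the admin+mentor user match
```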
|
Role Model Gem: How to get all Users with role: [:admin, :mentor]
|
[
"",
"sql",
"ruby-on-rails",
"ruby",
""
] |
```
CREATE OR REPLACE FUNCTION sumNumbers(p VARCHAR2) RETURN NUMBER IS
...
SELECT sumNumbers('3 + 6 + 13 + 0 + 1') FROM dual;
```
So the output should be: `23`
|
You can use a 'trick' with XML and XPath evaluation to do this without manually tokenising the string:
```
select * from xmltable('3 + 6 + 13 + 0 + 1' columns result number path '.');
RESULT
----------
23
```
Or:
```
select to_number(xmlquery('3 + 6 + 13 + 0 + 1' returning content).getStringVal()) as result
from dual;
RESULT
----------
23
```
This is more flexible as other operators can be used:
```
select * from xmltable('2 * (5 + 7) - 3' columns result number path '.');
RESULT
----------
21
```
It doesn't like division using the normal `/` operator though, and you need to use `div` instead, so you might need to do a replace on your source string if it might contain a slash:
```
select * from xmltable('2 * (5 + 7) div 3' columns result number path '.');
RESULT
----------
8
```
Read more about [Oracle's XPath handling](http://docs.oracle.com/cd/E11882_01/appdev.112/e23094/xdb_xquery.htm#ADXDB5123), and [XPath numeric operators](http://commons.oreilly.com/wiki/index.php/XPath_and_XPointer/XPath_Functions_and_Numeric_Operators#XPath_Numeric_Operators).
You can probably call that directly, without a function; but if you particularly wanted to wrap it in a function it's a little more complicated as the XPath has to be fixed at parse time, so you would need to use dynamic SQL:
```
CREATE OR REPLACE FUNCTION sumNumbers(p VARCHAR2) RETURN NUMBER IS
l_result NUMBER;
BEGIN
execute immediate q'[select * from xmltable(']'
|| replace(p, '/', ' div ')
|| q'[' columns result number path '.')]'
into l_result;
return l_result;
END;
/
SELECT sumNumbers('3 + 6 + 13 + 0 + 1') FROM dual;
SUMNUMBERS('3+6+13+0+1')
---------------------------------------
23
```
I've used [the alternative quoting mechanism](http://docs.oracle.com/cd/E11882_01/server.112/e41084/sql_elements003.htm#SQLRF00218) to avoid escaping the single quotes in the statement, though I'm not sure it's much clearer here. And I've included the `replace()` in case you need to be able to divide, but it you don't want that then just concatenate `p` straight into the SQL statement. With the replace you can do:
```
SELECT sumNumbers('2 * (5 + 7) / 3') FROM dual;
```
If you are going to allow it to be flexible the function should have a different name...
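For comparison, outside the database the plain split-and-sum the title asks about is tiny (Python; addition only, unlike the XPath trick above):

```python
# Split on '+' and sum the tokens; int() tolerates the surrounding spaces.
expr = "3 + 6 + 13 + 0 + 1"
total = sum(int(tok) for tok in expr.split("+"))
print(total)  # 23
```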
|
You can extract the numbers with a hierarchical query and a regular expression and sum the results. Something like this:
```
WITH t AS (
SELECT '3 + 6 + 13 + 0 + 1' numbers FROM DUAL
)
SELECT SUM(TRIM(REGEXP_SUBSTR(numbers,'( ?[[:digit:]]+ ?)',1,LEVEL)))
FROM t
CONNECT BY LENGTH(SUBSTR(numbers,DECODE(REGEXP_INSTR(numbers,'( ?[[:digit:]]+ ?)',1,LEVEL),0,NULL,REGEXP_INSTR(numbers,'( ?[[:digit:]]+ ?)',1,LEVEL)))) <= LENGTH(SUBSTR(numbers,DECODE(REGEXP_INSTR(numbers,'( ?[[:digit:]]+ ?)',1,LEVEL),0,NULL,REGEXP_INSTR(numbers,'( ?[[:digit:]]+ ?)',1,LEVEL))));
```
|
PL/SQL split a string by " + " and sum the numbers in it?
|
[
"",
"sql",
"oracle",
"plsql",
""
] |
I'm looking for a SQL solution for the following problem.
I want a list of employees who are more then 14 days sick in a row.
I've a sql table with the following:
```
First_name, Last_Name, INDIRECT_ID, SHIFT_DATE
John, Doe, Sick, 2016-01-01
John, Doe, Sick, 2016-01-02
John, Doe, working, 2016-01-03
John, Doe, Sick, 2016-01-04
John, Doe, Sick, 2016-01-05
etc.
```
I thought to do this by checking whether they are sick 10 times (2 x 5 working days) in two weeks. But maybe there is a much simpler solution for it. Right now I'm also getting duplicate answers.
```
select FIRST_NAME, LAST_NAME
from (select t.*
,(select count(*)
from LABOR_TICKET t2
where t2.EMPLOYEE_ID = t.EMPLOYEE_ID and
t2.INDIRECT_ID = t.INDIRECT_ID and
t2.SHIFT_DATE >= t.SHIFT_DATE and
t2.SHIFT_DATE < DATEADD(day, 14, t.SHIFT_DATE)) NumWithin14Days
from LABOR_TICKET t
where SHIFT_DATE between '2016-01-01' and '2016-04-01'
) LABOR_TICKET
INNER JOIN
EMPLOYEE ON LABOR_TICKET.EMPLOYEE_ID = EMPLOYEE.ID
where NumWithin14Days >= 10 AND INDIRECT_ID = 'SICK'
```
|
Try this,
First, create all the 14-day intervals between the from date and the to date.
Then check whether the count of 'Sick' rows reaches 14 in each interval for every employee.
```
DECLARE @ST_DATE DATE='2016-01-01'
,@ED_DATE DATE='2016-04-01'
;WITH CTE_DATE AS (
SELECT @ST_DATE AS ST_DATE,DATEADD(DAY,13,@ST_DATE) AS ED_DATE
UNION ALL
SELECT DATEADD(DAY,1,ED_DATE),DATEADD(DAY,14,ED_DATE)
FROM CTE_DATE
WHERE DATEADD(DAY,14,ED_DATE) <= @ED_DATE
)
SELECT FIRST_NAME, LAST_NAME
FROM CTE_DATE
INNER JOIN LABOR_TICKET ON SHIFT_DATE BETWEEN ST_DATE AND ED_DATE
WHERE INDIRECT_ID = 'Sick'
GROUP BY FIRST_NAME, LAST_NAME
HAVING COUNT(*) >= 14
```
|
```
declare @t table(First_name varchar(50), Last_Name varchar(50), INDIRECT_ID varchar(50), SHIFT_DATE date)
insert into @t values
('John', 'Doe', 'Sick', '2016-01-01')
,('John', 'Doe', 'Sick', '2016-01-02')
,('John','Doe','working','2016-01-03')
,('John', 'Doe', 'Sick', '2016-04-04')
,('John', 'Doe', 'Sick', '2016-05-05')
select s.*
,u.*
,Sickdays =
case
when s.indirect_id = 'Sick' and u.indirect_id = 'Sick' then datediff(dd,u.shift_date,s.shift_date)
else 0
end
from
(
select t.*,
row_number() over(partition by last_name,first_name order by shift_date desc) rn
from @t t
) s
join
(select t.*,
row_number() over(partition by last_name,first_name order by shift_date desc) rn
from @t t
) u on s.last_name = u.last_name and s.first_name = u.first_name and s.rn = u.rn - 1
where
case
when s.indirect_id = 'Sick' and u.indirect_id = 'Sick' then datediff(dd,u.shift_date,s.shift_date)
else 0
end > 13
```
|
SQL getting status of a period
|
[
"",
"sql",
"sql-server",
""
] |
I have a table with some data. It could look for example like this:
```
7 Gelb
8 Schwarz
9 Weiß my color
10 Grau
16 Gelb I
17 Gelb II
18 Gelb III
19 Gelb IV
27 Schwarz I
28 Schwarz II
29 Schwarz III
30 Schwarz IV
31 Schwarz V
32 Schwarz VI
39 Weiß my color III
40 Weiß my color IV
41 Weiß my color V
42 Weiß my color VI
```
As you can see, in some records we have roman numbers in convention `<name><space><roman number>`
For instance, there are *"Gelb"*, *"Weiß my color"* and *"Schwarz"* and there are also records for them in roman convention. For some, like *"Grau"*, there are no duplicates.
So there will always be a record with the unique color name and no roman number, e.g. the record "Grau", and the table may or may not also contain records with that name plus roman numbers.
Roman numbers would be always at the end like: `<name><space><romannumber>`
My goal is only to get the unique names. So from the example above I want to extract only:
```
7 Gelb
8 Schwarz
9 Weiß my color
10 Grau
```
How can i achieve that?
I started with this, would it be enough?
```
Select Id, Name From MyTable Where Name Not Like = '%<space><anyromancharacter>'
```
I cannot change structure of the database.
|
**update**
```
select * from dbo.test
Where value not Like '%[MDILXV]_' Collate SQL_Latin1_General_CP1_CS_AS
```
Step 1 :
```
select * from dbo.test
id value
1 Gelb
2 Gelb I
3 Weiß my color III
4 Weiß my color
```
When I run
```
select * from dbo.test
Where value not Like '%[IXLV]' Collate SQL_Latin1_General_CP1_CS_AS
id value
1 Gelb
4 Weiß my color
```
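The underlying idea — treat the trailing word as a sub-location only when it consists entirely of roman digits — can also be sketched outside SQL (plain Python, with an invented sample list):

```python
import re

# Strip a trailing all-roman word; otherwise keep the name unchanged.
ROMAN = re.compile(r"[IVXLCDM]+")

def main_location(name):
    head, _, tail = name.rpartition(" ")
    if head and ROMAN.fullmatch(tail):
        return head
    return name

codes = ["Gelb", "Gelb I", "Gelb II", "Weiß my color", "Weiß my color III",
         "Schwarz IV", "Grau"]
unique = sorted({main_location(c) for c in codes})
print(unique)  # ['Gelb', 'Grau', 'Schwarz', 'Weiß my color']
```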
|
Here is my solution:
First, generate a list of Roman Numerals up to a specified limit. Then, extract the last *word* from your table and check if it exists in the list of Roman Numerals:
`ONLINE DEMO`
```
;WITH E1(N) AS(
SELECT 1 FROM(VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1))t(N)
),
E2(N) AS(SELECT 1 FROM E1 a CROSS JOIN E1 b),
E4(N) AS(SELECT 1 FROM E2 a CROSS JOIN E2 b),
CteTally(N) AS(
SELECT TOP(1000) -- Replace value inside TOP for MAX roman numbers
ROW_NUMBER() OVER(ORDER BY(SELECT NULL))
FROM E4
),
CteRoman(N, Roman) AS(
SELECT *
FROM CteTally t
CROSS APPLY(
SELECT
REPLICATE('M', t.N/1000)
+ REPLACE(REPLACE(REPLACE(
REPLICATE('C', t.N%1000/100),
REPLICATE('C', 9), 'CM'),
REPLICATE('C', 5), 'D'),
REPLICATE('C', 4), 'CD')
+ REPLACE(REPLACE(REPLACE(
REPLICATE('X', t.N%100 / 10),
REPLICATE('X', 9),'XC'),
REPLICATE('X', 5), 'L'),
REPLICATE('X', 4), 'XL')
+ REPLACE(REPLACE(REPLACE(
REPLICATE('I', t.N%10),
REPLICATE('I', 9),'IX'),
REPLICATE('I', 5), 'V'),
REPLICATE('I', 4),'IV')
) r(a)
),
CteLastWord AS(
SELECT *,
LastWord = CASE
WHEN CHARINDEX(' ', Name) = 0 THEN Name
ELSE REVERSE(LEFT(REVERSE(Name), CHARINDEX(' ', REVERSE(Name)) - 1))
END
FROM MyTable
)
SELECT
id, Name
FROM CteLastWord w
WHERE
NOT EXISTS(
SELECT 1
FROM CteRoman
WHERE
Roman = w.LastWord
)
ORDER BY w.Id
```
Reference:
* [Convert Number to Roman Numerals in Sql Server](https://itdeveloperzone.wordpress.com/2011/03/26/convert-number-to-roman-numerals-in-sql-server/)
|
Extract values which do not end by specific words
|
[
"",
"sql",
"sql-server",
""
] |
I have a table which has duplicate values in one column, title, so the title column has multiple rows with the same values.
I want to delete all duplicates except one where the title is the same.
What sort of query can I perform to accomplish this?
```
Title Subject Description Created_at
Something Somethingsubject Somethingdescription 2016-04-13 16:37:10
Something Anothersubject Anotherdescription 2016-04-11 16:37:10
Something Thirdsubject Thirdsubject 2016-04-14 16:37:10
NumberTwo NumberTwoSubject NumberTwoSubject 2016-04-12 16:37:10
NumberTwo AnotherNumberTwo AnotherNumberTwoDescripti 2016-04-15 16:37:10
```
I would like to delete all duplicates, leaving just one, preferably the oldest record, so that the only remaining records would be:
```
Title Subject Description Created_at
Something Anothersubject Anotherdescription 2016-04-11 16:37:10
NumberTwo NumberTwoSubject NumberTwoSubject 2016-04-12 16:37:10
```
|
You can do a self-join `DELETE`:
```
DELETE t1
FROM mytable t1
JOIN (SELECT Title, MIN(Created_at) AS min_date
      FROM mytable
      GROUP BY Title) t2
ON t1.Title = t2.Title AND t1.Created_at > t2.min_date
```
[**Demo here**](http://sqlfiddle.com/#!9/e0c649/1)
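The multi-table `DELETE ... JOIN` above is MySQL syntax. As a portable, runnable check of the same keep-one-per-title idea (SQLite via Python, written with a correlated `MIN`/`>` comparison so the oldest row survives, as the question asks):

```python
import sqlite3

# Delete every row that is newer than the oldest row sharing its Title.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (Title TEXT, Subject TEXT, Created_at TEXT)")
conn.executemany("INSERT INTO mytable VALUES (?, ?, ?)", [
    ("Something", "Somethingsubject", "2016-04-13 16:37:10"),
    ("Something", "Anothersubject",   "2016-04-11 16:37:10"),
    ("Something", "Thirdsubject",     "2016-04-14 16:37:10"),
    ("NumberTwo", "NumberTwoSubject", "2016-04-12 16:37:10"),
    ("NumberTwo", "AnotherNumberTwo", "2016-04-15 16:37:10"),
])

conn.execute("""
    DELETE FROM mytable
    WHERE Created_at > (SELECT MIN(m2.Created_at)
                        FROM mytable m2
                        WHERE m2.Title = mytable.Title)
""")
rows = conn.execute(
    "SELECT Title, Created_at FROM mytable ORDER BY Title"
).fetchall()
print(rows)
# [('NumberTwo', '2016-04-12 16:37:10'), ('Something', '2016-04-11 16:37:10')]
```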
|
Do a backup first, for obvious reasons, but this should work:
```
delete from your_table where id not in (select id from your_table group by title)
```
Where `id` is the column that stores the primary key for `your_table`
|
How can I delete all rows with duplicates in one column using MySQL?
|
[
"",
"mysql",
"sql",
""
] |
I need to write SQL to extract repeat location codes and separate out the sub-location detail. However, the data I am working with does not follow a set pattern.
Here's a sample of what the location codes look like (the real table has over 5,000 locations):
```
JR-DY-TIN
DY-RHOLD
DY-PREQ-TIN
GLVCSH
GLFLR
GLBOX1
GLBOX2
GLBOX3
GLBOXA
GLBOXB
GLBOXC
GLBOXD
GL
GL0001
GL0002
GL0003
GL0014
…
```
I was able to create a new column for the sub-location detail when it is numeric but that's all I have so far.
```
select
LocationCode,
REVERSE(LEFT(REVERSE(LocationCode),PATINDEX('%[A-Za-z]%',
REVERSE(LocationCode))-1)) AS PaddedNumbers
from LocationTable
Results...
LocationCode PaddedNumbers
------------ -------------
JR-DY-TIN
DY-RHOLD
DY-PREQ-TIN
GLVCSH
GLFLR
GLBOX1 1
GLBOX2 2
GLBOX3 3
GLBOXA
GLBOXB
GLBOXC
GLBOXD
GL
GL0001 0001
GL0002 0002
GL0003 0003
GL0014 0014
```
I still need to figure out how to display the following in two separate columns:
* Location codes without the sub-locations detail, e.g. GLBOX , or just
the original location code if there is no sub-location, e.g. GLFLR.
* Numeric and non-numeric sub-location detail at the same time, e.g. for
  GLBOX have a column that displays 1, 2, 3, A, B, C, D, E, F.
Edit: If I am able to accomplish this the data should look like this:
```
LocationCode MainLoc SubLoc
------------ --------- ------
JR-DY-TIN JR-DY-TIN
DY-RHOLD DY-RHOLD
DY-PREQ-TIN DY-PREQ-TIN
GLVCSH GLVCSH
GLFLR GLFLR
GLBOX1 GLBOX 1
GLBOX2 GLBOX 2
GLBOX3 GLBOX 3
GLBOXA GLBOX A
GLBOXB GLBOX B
GLBOXC GLBOX C
GLBOXD GLBOX D
GL GL
GL0001 GL 0001
GL0002 GL 0002
GL0003 GL 0003
GL0014 GL 0014
```
Any help is appreciated.
Environment: SQL Server 2008 R2.
|
```
DECLARE @LocationRef TABLE (Location NVARCHAR(20), Ref INT)
INSERT INTO @LocationRef VALUES
('JR-DY-TIN',0)
,('DY-RHOLD',0)
,('DY-PREQ-TIN',0)
,('GLVCSH',0)
,('GLFLR',0)
,('GLBOX1',6)
,('GLBOX2',6)
,('GLBOX3',6)
,('GLBOXA',6)
,('GLBOXB',6)
,('GLBOXC',6)
,('GLBOXD',6)
,('GL',0)
,('GL0001',3)
,('GL0002',3)
,('GL0003',3)
,('GL0014',3)
SELECT Location AS LocationCode
,LEFT(Location,CASE Ref WHEN 0 THEN LEN(Location) ELSE Ref - 1 END)
,RIGHT(Location,CASE Ref WHEN 0 THEN 0 ELSE LEN(Location) - Ref + 1 END)
FROM @LocationRef
```
[](https://i.stack.imgur.com/2A35d.png)
|
It seems like you want to use something like a `parseInt` feature, which is not available in `SQL Server 2008`. You can attempt to use `cast`, but that won't work with your datatype - `varchar`.
I'd suggest using a case statement to parse the complex logic you need. ie:
```
select
LocationCode,
case when left(LocationCode,5) like 'GLBOX%' then substring(LocationCode,5,2)
when left(LocationCode,3) like 'GL0%' then substring(LocationCode,3,4)
else 'null' end as ParsedLocationCode end
from LocationTable
```
|
Parsing inconsistent data with SQL
|
[
"",
"sql",
"sql-server",
"t-sql",
"text-parsing",
""
] |
I have a log table like this:
```
line Date status Type
1 26.04.2016 08:58 IN 4
2 26.04.2016 08:59 OUT 80
3 26.04.2016 09:05 REZ 7
4 26.04.2016 09:06 IN 7
5 26.04.2016 09:22 EDIT 81
6 26.04.2016 09:23 EDIT 80
7 26.04.2016 09:24 OUT 80
8 26.04.2016 09:25 OUT 80
9 26.04.2016 09:25 OUT 80
```
"date" is the key.
I want to get the date of the last type change. In this example the last type is "80" and its min date is 26.04.2016 09:23 (line 6).
I know the last type at this point (@lasttype = 80).
But if I run `select min(date) from table where type = @lasttype`, then I get line 2 instead of line 6.
How can I get the min date of the last block of "type" (line 6) **in one query**? I don't want to run several selects one after another.
I have just tried this, but it still doesn't return line 6 :(
```
select max(date)
from
(
select min(date) as date, type
from MYTABLE
where type = '80'
group by type
) as t1
```
Is there an easy way?
Many thanks.
Edit:
Ok, I can get line 6:
```
select min(date) from MYTABLE where date > (select max(date) from MYTABLE where type <> @lasttype)
```
Any more effective code?
Many thanks for your answers...
|
Ok, I can get line 6:
```
select min(date) from MYTABLE where date > (select max(date) from MYTABLE where type <> @lasttype)
```
Any more effective code?
Many thanks for your answers...
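A runnable check of this on the sample log (SQLite via Python; the `dd.mm.yyyy hh:mm` strings compare correctly here only because every row shares the same date prefix):

```python
import sqlite3

# Start of the last contiguous block of @lasttype = the first row after the
# latest row of any other type.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (date TEXT, status TEXT, type INTEGER)")
conn.executemany("INSERT INTO log VALUES (?, ?, ?)", [
    ("26.04.2016 08:58", "IN",   4),
    ("26.04.2016 08:59", "OUT",  80),
    ("26.04.2016 09:05", "REZ",  7),
    ("26.04.2016 09:06", "IN",   7),
    ("26.04.2016 09:22", "EDIT", 81),
    ("26.04.2016 09:23", "EDIT", 80),
    ("26.04.2016 09:24", "OUT",  80),
    ("26.04.2016 09:25", "OUT",  80),
])

last_type = 80
start_of_last_block = conn.execute("""
    SELECT MIN(date) FROM log
    WHERE date > (SELECT MAX(date) FROM log WHERE type <> ?)
""", (last_type,)).fetchone()[0]
print(start_of_last_block)  # 26.04.2016 09:23 -- line 6 of the sample
```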
|
Try running this query
```
select max(a.Date) from
(
Select top 4 * from table order by Date desc
)a;
```
|
SQL: Find min date of a last type change
|
[
"",
"sql",
"sql-server",
"sql-server-2014",
""
] |
I don't know how to subtract two `time(hh:mm:ss)` values in SQL Server.
This is my statement:
```
where
( CONVERT(TIME(7), [EndWork], 102)) - ( CONVERT(TIME(7), [StartWork], 102)) <
CONVERT(TIME(7), ' 8:30:00', 102)
```
|
```
DECLARE @END_DATE TIME = '' ,
@START_DATE TIME = ''
SELECT CONVERT(TIME,DATEADD(MS,DATEDIFF(SS, @START_DATE, @END_DATE )*1000,0),114)
```
|
Using the [DATEDIFF](https://msdn.microsoft.com/en-us/library/ms189794.aspx) function you will get the difference of two dates/times in years/months/days/hours/minutes/seconds as required.
Eg:- `DATEDIFF ( MINUTE , startdate , enddate )` --will return the diff in minutes
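A portable sketch of the same elapsed-time comparison (SQLite via Python, not T-SQL: `strftime('%s', ...)` accepts bare `HH:MM:SS` values, so subtracting two of them gives the difference in seconds):

```python
import sqlite3

# Filter shifts whose worked duration is under 8:30:00 (= 30600 seconds).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE shifts (StartWork TEXT, EndWork TEXT)")
conn.executemany("INSERT INTO shifts VALUES (?, ?)", [
    ("08:30:00", "16:00:00"),  # 7.5 hours  -> under 8:30
    ("08:00:00", "17:15:00"),  # 9.25 hours -> over 8:30
])

short_shifts = conn.execute("""
    SELECT StartWork, EndWork FROM shifts
    WHERE strftime('%s', EndWork) - strftime('%s', StartWork)
          < 8 * 3600 + 30 * 60
""").fetchall()
print(short_shifts)  # [('08:30:00', '16:00:00')]
```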
|
how to subtract two time in sql server?
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
How to add 1 month to the date value (1st row) and use it as the input to the next row, adding a month each time until it reaches the maximum month_id.
```
declare @tmp table (date date,month_id int);
insert into @tmp values('2014-11-30',1),('2014-11-30',2),('2014-11-30',3),('2014-11-30',4),('2014-11-30',5),('2014-11-30',6),('2015-01-01',1),('2015-01-01',2),('2015-01-01',3),('2015-01-01',4);
```
[](https://i.stack.imgur.com/d9cLB.png)
Expected output:
```
DATE month_id derived_date1
2014-11-30 1 2014-12-30
2014-11-30 2 2015-01-30
2014-11-30 3 2015-02-28
2014-11-30 4 2015-03-28
2014-11-30 5 2015-04-28
2014-11-30 6 2015-05-28
2015-01-01 1 2015-02-01
2015-01-01 2 2015-03-01
2015-01-01 3 2015-04-01
2015-01-01 4 2015-05-01
```
|
Recursive CTE
```
declare @tmp table (date date,month_id int);
insert into @tmp values('2014-11-30',1),('2014-11-30',2),('2014-11-30',3),('2014-11-30',4),('2014-11-30',5),('2014-11-30',6),('2015-01-01',1),('2015-01-01',2),('2015-01-01',3),('2015-01-01',4);
;with cte as
(
select date, month_id, DATEADD(MONTH, 1, date) as derived_date1 from @tmp where month_id = 1
union all select t.date, t.month_id, DATEADD(MONTH, 1, cte.derived_date1) from cte inner join @tmp t on cte.date = t.date and cte.month_id = t.month_id - 1
)
select * from cte order by date, month_id
```
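The same chaining runs in SQLite (via Python's `sqlite3`), using `date(x, '+1 month')` in place of `DATEADD`. Note that SQLite normalizes month-end overflow (e.g. Jan 30 + 1 month becomes Mar 02) instead of clamping to the last day of the month as SQL Server does, so this sketch uses only the `2015-01-01` group from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tmp (date TEXT, month_id INTEGER);
INSERT INTO tmp VALUES ('2015-01-01',1),('2015-01-01',2),
                       ('2015-01-01',3),('2015-01-01',4);
""")
rows = con.execute("""
WITH RECURSIVE cte AS (
    -- anchor: the month_id = 1 row gets one month added to its date
    SELECT date, month_id, date(date, '+1 month') AS derived_date1
    FROM tmp WHERE month_id = 1
    UNION ALL
    -- recursion: each following month_id adds a month to the previous result
    SELECT t.date, t.month_id, date(cte.derived_date1, '+1 month')
    FROM cte JOIN tmp t ON cte.date = t.date AND cte.month_id = t.month_id - 1
)
SELECT * FROM cte ORDER BY date, month_id
""").fetchall()
print(rows)
```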
|
You can try this
The `DATEADD()` function adds or subtracts a specified time interval from a date.
```
Declare @tmp table (date date,month_id int);
Insert into @tmp values('2014-11-30',1),('2014-11-30',2),('2014-11-30',3),('2014-11-30',4),('2014-11-30',5),('2014-11-30',6),('2015-01-01',1),('2015-01-01',2),('2015-01-01',3),('2015-01-01',4);
SELECT date as Original_Date,month_id as Month_Id,DATEADD(MONTH,month_id,date) as Derived_date from @tmp
```
[](https://i.stack.imgur.com/gMVZc.png)
|
Add 1 month to the date value
|
[
"",
"sql",
"sql-server",
""
] |
I have a query that pulls sales information for employees.
An example of the query would be:
```
Select Name, Customer_count, item1, item2, item3
from Invoices
```
Output:
```
Name Customer_Count item1 item2 item3
Rob 10 1 2 0
Bill 10 3 0 2
Jim 10 2 1 3
```
I have a second query that pulls the percentage of each item.
```
Select
Name,
Customer_count,
Cast(Cast(item1 as float) / Cast(Customer_Count as float) as decimal (10,2)) as item1 ,
Cast(Cast(item2 as float) / Cast(Customer_Count as float) as decimal (10,2)) as item2,
Cast(Cast(item3 as float) / Cast(Customer_Count as float) as decimal (10,2)) as item3
from Invoices
Name Customer_Count item1 item2 item3
Rob 10 10.00 20.00 0.00
Bill 10 30.00 0.00 20.00
Jim 10 20.00 10.00 30.00
```
How could I get the following output: query 1 listed with the name, and query 2 below it without listing the name?
```
Name Customer_Count item1 item2 item3
Rob 10 1 2 0
10 10.00 20.00 0.00
Bill 10 3 0 2
10 30 0.00 20.00
Jim 10 2 1 3
10 20 10.00 30.00
```
|
You can use the `UNION ALL` clause to combine the two queries, and then use the result in the `FROM` clause. To order the rows the way you want, simply add the primary key column (or columns) to each `SELECT` statement and add a column that will contain the visible name. This `Visible_name` column will contain the `Name` value in the first select of the union, and an empty string in the second.
The `ORDER BY` clause must order first by the primary key column(s) and second by the `Visible_name` column in descending order.
Supposing that `Invoices` table has an `Id` column as a primary key, the query can be as follows:
```
SELECT * FROM
(Select Id, Name, Name AS Visible_name, Customer_count, item1, item2, item3
from Invoices
UNION ALL
Select
Id,
Name,
'' AS Visible_Name,
Customer_count,
Cast(Cast(item1 as float) / Cast(Customer_Count as float) as decimal (10,2)) as item1 ,
Cast(Cast(item2 as float) / Cast(Customer_Count as float) as decimal (10,2)) as item2,
Cast(Cast(item3 as float) / Cast(Customer_Count as float) as decimal (10,2)) as item3
from Invoices
)
ORDER BY Id, Visible_name DESC
```
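A runnable sketch of the union-and-order idea in SQLite (via Python's `sqlite3`); here an explicit `row_rank` column replaces the `Visible_name DESC` trick, and the table/column names come from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE invoices "
            "(name TEXT, customer_count INTEGER, item1 INTEGER, item2 INTEGER, item3 INTEGER)")
con.executemany("INSERT INTO invoices VALUES (?,?,?,?,?)",
                [('Rob', 10, 1, 2, 0), ('Bill', 10, 3, 0, 2), ('Jim', 10, 2, 1, 3)])
rows = con.execute("""
SELECT visible_name, customer_count, item1, item2, item3 FROM (
    -- raw counts, row_rank 0 so they sort first within each name
    SELECT name AS sort_key, name AS visible_name, 0 AS row_rank,
           customer_count, item1 * 1.0 AS item1, item2 * 1.0 AS item2, item3 * 1.0 AS item3
    FROM invoices
    UNION ALL
    -- percentages, row_rank 1, blank name so only the first row shows it
    SELECT name, '', 1, customer_count,
           ROUND(item1 * 100.0 / customer_count, 2),
           ROUND(item2 * 100.0 / customer_count, 2),
           ROUND(item3 * 100.0 / customer_count, 2)
    FROM invoices
)
ORDER BY sort_key, row_rank
""").fetchall()
for r in rows:
    print(r)
```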
|
This will get you pretty close.
```
select name, customer_count, item1, item2, item3
from
(
select
rownum = row_number() over (order by name),
name, customer_count, item1, item2, item3
from Invoices
union
Select rownum = row_number() over (order by name),
'', Customer_count,
Cast(Cast(item1 as float) / Cast(Customer_Count as float) as decimal (10,2)) * 100 as item1 ,
Cast(Cast(item2 as float) / Cast(Customer_Count as float) as decimal (10,2)) * 100 as item2,
Cast(Cast(item3 as float) / Cast(Customer_Count as float) as decimal (10,2)) * 100 as item3
from Invoices
) x
order by rownum, name desc
```
Note the item[n] values all have decimals. I am not sure how to get it to look like your example, but maybe that doesn't matter that much.
Noel
|
Multiple Queries using name as a link
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have the table below (no primary key in this table).
```
ID | IC | Name | UGCOS | MCOS
---------------------------------------------------------
1AA | A123456B | Edmund | Australia | Denmark
1AA | A123456B | Edmund | Australia | France
2CS | C435664C | Grace | Norway | NULL
3TG | G885595H | Rae | NULL | Japan
```
I need to get the result like this.
```
ID | IC | Name | UGCOS | MCOS | MCOS1
--------------------------------------------------------------------
1AA | A123456B | Edmund | Australia | Denmark | France
2CS | C435664C | Grace | Norway | NULL | NULL
3TG | G885595H | Rae | NULL | Japan | NULL
```
I googled around and it seems like PIVOT is what I need. However, I am not sure how it can be implemented for my table. It would be a great help if somebody could help me with it. Thanks!
|
I'll create a second answer, as this approach is something completely different from my first:
This dynamic query will first find the max count of a distinct ID and then build a dynamic pivot
```
CREATE TABLE #tmpTbl (ID VARCHAR(100),IC VARCHAR(100),Name VARCHAR(100),UGCOS VARCHAR(100),MCOS VARCHAR(100))
INSERT INTO #tmpTbl VALUES
('1AA','A123456B','Edmund','Australia','Denmark')
,('1AA','A123456B','Edmund','Australia','France')
,('1AA','A123456B','Edmund','Australia','OneMore')
,('2CS','C435664C','Grace','Norway',NULL)
,('3TG','G885595H','Rae',NULL,'Japan');
GO
DECLARE @maxCount INT=(SELECT TOP 1 COUNT(*) FROM #tmpTbl GROUP BY ID ORDER BY COUNT(ID) DESC);
DECLARE @colNames VARCHAR(MAX)=
(
STUFF
(
(
SELECT TOP(@maxCount)
',MCOS' + CAST(ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) AS VARCHAR(10))
FROM sys.objects --take any large table or - better! - an numbers table or a tally CTE
FOR XML PATH('')
),1,1,''
)
);
DECLARE @cmd VARCHAR(MAX)=
'SELECT p.*
FROM
(
SELECT *
,''MCOS'' + CAST(ROW_NUMBER() OVER(PARTITION BY ID ORDER BY (SELECT NULL)) AS VARCHAR(10)) AS colName
FROM #tmpTbl
) AS tbl
PIVOT
(
MIN(MCOS) FOR colName IN(' + @colNames + ')
) AS p';
EXEC(@cmd);
GO
DROP TABLE #tmpTbl;
```
The result
```
1AA A123456B Edmund Australia Denmark France OneMore
2CS C435664C Grace Norway NULL NULL NULL
3TG G885595H Rae NULL Japan NULL NULL
```
|
This is a suggestion with a concatenated result:
```
CREATE TABLE #tmpTbl (ID VARCHAR(100),IC VARCHAR(100),Name VARCHAR(100),UGCOS VARCHAR(100),MCOS VARCHAR(100))
INSERT INTO #tmpTbl VALUES
('1AA','A123456B','Edmund','Australia','Denmark')
,('1AA','A123456B','Edmund','Australia','France')
,('2CS','C435664C','Grace','Norway',NULL)
,('3TG','G885595H','Rae',NULL,'Japan');
SELECT ID,IC,Name,UGCOS,
(
STUFF(
(
SELECT ' ,' + x.MCOS
FROM #tmpTbl AS x
WHERE x.ID=outerTbl.ID
FOR XML PATH('')
),1,2,''
)
) AS MCOS
FROM #tmpTbl AS outerTbl
GROUP BY ID,IC,Name,UGCOS;
GO
DROP TABLE #tmpTbl;
```
The result
```
1AA A123456B Edmund Australia Denmark ,France
2CS C435664C Grace Norway NULL
3TG G885595H Rae NULL Japan
```
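The `FOR XML PATH('')` trick is SQL Server's pre-2017 way of doing string aggregation; SQL Server 2017+ has `STRING_AGG`, and SQLite/MySQL have `GROUP_CONCAT`. A sketch of the same concatenated result in SQLite via Python's `sqlite3`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (ID TEXT, IC TEXT, Name TEXT, UGCOS TEXT, MCOS TEXT);
INSERT INTO t VALUES
 ('1AA','A123456B','Edmund','Australia','Denmark'),
 ('1AA','A123456B','Edmund','Australia','France'),
 ('2CS','C435664C','Grace','Norway',NULL),
 ('3TG','G885595H','Rae',NULL,'Japan');
""")
rows = con.execute("""
SELECT ID, IC, Name, UGCOS,
       GROUP_CONCAT(MCOS, ',') AS MCOS   -- NULLs are skipped automatically
FROM t
GROUP BY ID, IC, Name, UGCOS
ORDER BY ID
""").fetchall()
for r in rows:
    print(r)
```

A group whose `MCOS` values are all NULL (Grace) yields NULL rather than an empty string, matching the answer's output.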
|
Use Dyamic Pivot query for this?
|
[
"",
"sql",
"sql-server",
"pivot",
"pivot-table",
""
] |
I need a method for managing an ordering sequence in a table: I must be able to insert a row between two other rows.
For example, if I have a numbered sequence from 1 to 10000, I may want to insert a row between 100 and 101, or move 99 to between 1 and 2.
Can this be managed in SQL, or do I need to manage it programmatically?
|
SQL Server tables **do not have any implicit order!**
The only way to enforce a sequence is a sortable column (or a combination of columns) with unique values.
One way to achieve this is to number your rows after an existing sort key:
* add a column SortKey BIGINT (after filling it with values you should set it to `NOT NULL` and create an `unique index`)
This fully working example will show you how to do this:
```
CREATE TABLE #Dummy(SomeExistingSortField INT, SomeContent VARCHAR(100));
INSERT INTO #Dummy VALUES(3,'insert first'),(1,'insert second'),(2,'insert third');
SELECT * FROM #Dummy;
ALTER TABLE #Dummy ADD SortKey BIGINT;
WITH CTE AS
(
SELECT ROW_NUMBER() OVER(ORDER BY (SELECT SomeExistingSortField)) * 10 AS nr
,SortKey
FROM #Dummy
)
UPDATE CTE SET SortKey=nr;
SELECT * FROM #Dummy ORDER BY SortKey;
INSERT INTO #Dummy VALUES(99,'between 1 and 2',15);
SELECT * FROM #Dummy ORDER BY SortKey;
WITH CTE AS
(
SELECT ROW_NUMBER() OVER(ORDER BY (SELECT SortKey)) * 10 AS nr
,SortKey
FROM #Dummy
)
UPDATE CTE SET SortKey=nr;
SELECT * FROM #Dummy ORDER BY SortKey;
GO
DROP TABLE #Dummy;
```
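The two ideas from the answers, a gapped sort key plus periodic renumbering, can be sketched in SQLite via Python's `sqlite3` (the table and column names here are illustrative, not from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE items (content TEXT, sort_key INTEGER)")
con.executemany("INSERT INTO items VALUES (?, ?)",
                [('first', 10), ('second', 20), ('third', 30)])

# insert between 'first' and 'second' by using the gap in the sort keys
con.execute("INSERT INTO items VALUES ('between 1 and 2', 15)")

# renumber back to multiples of 10 so the gaps are never exhausted
ordered = con.execute("SELECT rowid FROM items ORDER BY sort_key").fetchall()
con.executemany("UPDATE items SET sort_key = ? WHERE rowid = ?",
                [((i + 1) * 10, rid) for i, (rid,) in enumerate(ordered)])

rows = con.execute("SELECT content, sort_key FROM items ORDER BY sort_key").fetchall()
print(rows)
```

The renumbering step is done from the client side here for determinism; the answer's CTE-based `UPDATE` is the SQL Server way of doing the same thing.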
|
One way to accomplish this would be to have your sequence be non-contiguous. That is, specify an increment on the sequence to leave gaps enough to put your interleaving rows. If course this isn't perfect as you may exhaust your intra-row gaps, but it's a start.
|
MSSQL controlling sequence
|
[
"",
"sql",
"sql-server",
""
] |
I have a following table structure
```
id | sessionId | event | created_on
---|-----------|-------|--------------------
1 | 1 | view | 2016-01-01 12:24:01
2 | 1 | buy | 2016-01-01 12:25:05
3 | 2 | view | 2016-01-01 12:25:09
4 | 1 | view | 2016-01-01 12:27:10
......
```
I'm trying to get time between two events, in this particular case I want to know how much time have passed between FIRST view and FIRST buy event within a session.
How do I apply WHERE for data within GROUP BY? Basically, I want to group by sessionId and get only sessions which include both view and buy actions, and I want to get the time between the FIRST view and the FIRST buy event for a single session.
How do I achieve the required result?
Thank you
|
A simple solution could be:
Take the minimum created\_on per sessionID where event is 'view' and join it with the minimum created\_on per sessionID where event is 'buy'. Use an inner join to keep only sessions that have both...
> select A.sessionID,A.firstView,B.firstBuy, datedif(*depend on rdbms*) from
>
> (select sessionID,min(created\_on) as firstView from tblName where event ='view' group by sessionID ) A
>
> Inner join
>
> (select sessionID,min(created\_on) as firstBuy from tblName where event ='buy' group by sessionID ) B
>
> On A.sessionID = B.sessionID
|
This is rather common SQL syntax which hopefully Google BigQuery supports:
```
select sessionId,
min(case event when 'view' then created_on end) as firstView,
min(case event when 'buy' then created_on end) as firstBuy
from t
where event in ('view', 'buy')
group by sessionId
having max(event) != min(event)
```
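The conditional-aggregation answer runs as-is on SQLite too; here is a sketch via Python's `sqlite3` that also derives the elapsed seconds (the `strftime('%s', …)` subtraction is a SQLite-specific addition, not part of the original answer):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (id INTEGER, sessionId INTEGER, event TEXT, created_on TEXT);
INSERT INTO t VALUES
 (1, 1, 'view', '2016-01-01 12:24:01'),
 (2, 1, 'buy',  '2016-01-01 12:25:05'),
 (3, 2, 'view', '2016-01-01 12:25:09'),
 (4, 1, 'view', '2016-01-01 12:27:10');
""")
rows = con.execute("""
SELECT sessionId,
       MIN(CASE event WHEN 'view' THEN created_on END) AS firstView,
       MIN(CASE event WHEN 'buy'  THEN created_on END) AS firstBuy,
       strftime('%s', MIN(CASE event WHEN 'buy'  THEN created_on END))
     - strftime('%s', MIN(CASE event WHEN 'view' THEN created_on END)) AS seconds_to_buy
FROM t
WHERE event IN ('view', 'buy')
GROUP BY sessionId
HAVING MAX(event) != MIN(event)   -- keeps only sessions with both events
""").fetchall()
print(rows)
```

Session 2 has only a view, so `MAX(event) = MIN(event)` and it is filtered out by the `HAVING` clause.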
|
Calculate time between two actions within session
|
[
"",
"sql",
"google-bigquery",
""
] |
I want to group all the hours between 0 and 40 into one total sum, 41-50 into another total sum, and 50+ into a third sum.
```
select hours,
sum(hours)
from employee
where hours between 0 and 40
group by hours;
```
The above query groups by the hours, so the results are split by hour value, e.g. 1, 2.3, 0.5, 35.5, 30, etc.
```
1 403
2.3 4.6
0.5 53
35.5 284
30 1230
```
But I want something like
`403+4.6+53+284+1230 = 1974.6` because they all fall under 40
How can I do it ?
|
You can use conditional aggregation, grouping by a value that builds intervals of hours.
Based on your example, you can have non-integer values, so you should use explicit relational operators so that, for example, 40.1 falls in the 41-50 group:
```
select sum(hours),
case
when hours <= 40 then '0-40'
when hours > 40 and hours <= 50 then '41-50'
when hours > 50 then '50-...'
end
from employee
group by case
when hours <= 40 then '0-40'
when hours > 40 and hours <= 50 then '41-50'
when hours > 50 then '50-...'
end
```
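The accepted CASE bucketing runs unchanged on most engines; a sketch in SQLite via Python's `sqlite3` (the 45 and 60 rows are added here to exercise all three buckets, and the labels are shortened to `50+`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employee (hours REAL)")
con.executemany("INSERT INTO employee VALUES (?)",
                [(1,), (2.3,), (0.5,), (35.5,), (30,), (45,), (60,)])
rows = con.execute("""
SELECT CASE
         WHEN hours <= 40 THEN '0-40'
         WHEN hours <= 50 THEN '41-50'
         ELSE '50+'
       END AS bucket,
       SUM(hours) AS total
FROM employee
GROUP BY bucket
ORDER BY bucket
""").fetchall()
for r in rows:
    print(r)
```

The 0-40 bucket sums 1 + 2.3 + 0.5 + 35.5 + 30 = 69.3, matching what the question asks for.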
|
```
select sum(case when hours between 0 and 40 then hours else 0 end) hours_1,
sum(case when hours between 41 and 50 then hours else 0 end) hours_41,
sum(case when hours > 50 then hours else 0 end) hours_51
from employee
```
|
Sum the total hours and group by hours
|
[
"",
"sql",
"oracle",
""
] |
When just looking up Oracle's INSERT syntax, I noticed you can insert into a query, e.g.
```
insert into (select * from dept) (deptno, dname) values (99, 'new department');
```
Can anybody shed some light on what this is meant for? What can I achieve with inserting into a query that I can't with inserting into the table directly?
**UPDATE:** So far it seems this is just an alternative syntax, so I can write
```
insert into (select deptno, dname from dept) values (99, 'new department');
```
instead of
```
insert into dept (deptno, dname) values (99, 'new department');
```
Same thing, same execution plan. It doesn't matter whether the query returns a record or not. This:
```
insert into (select deptno, dname from dept where 1 = 0) values (99, 'new department');
```
leads again to the same execution plan. So we might assume that it really doesn't matter what the subquery looks like as long as we only select columns from one table. But no. This:
```
insert into (select deptno, dname from dept cross join some_table)
values (99, 'new department');
```
leads to "ORA-01779: cannot modify a column which maps to a non key-preserved table" or "ORA-01732: data manipulation operation not legal on this view".
I have the impression that Oracle decided to allow inserting into a query because they allow inserting into views, for what else is a subquery here than an ad hoc view? So since you can insert into a view, surely they let you insert into an ad hoc view too, but nobody in their right mind would ever use this syntax, of course :-)
But maybe I am wrong? Maybe this syntax does offer something that I am not yet aware of? If so tell me :-)
|
The subquery defines the columns of the table into which the rows are to be inserted. As [oracle's doc (12c)](https://docs.oracle.com/database/121/SQLRF/statements_9014.htm#SQLRF55057) says:
> Specify the name of the [...] column or columns returned by a
> subquery, into which rows are to be inserted. If you specify a view or object
> view, then the database inserts rows into the base table of the view.
***Example***
```
create table test_isq (
pk integer not null primary key, data_1 varchar2(40), data_2 varchar2(40), data_3 varchar2(40)
);
-- ok
insert into (select pk, data_2, data_3 from test_isq) (pk, data_2) values ( 1, 'Test');
insert into (select pk, data_2, data_3 from test_isq) values ( 2, 'Another', 'Test' );
-- fail
insert into (select data_1 from test_isq) values ( 'This', 'one', 'fails');
insert into (select data_1 from test_isq) (pk, data_1) values ( 42, 'Nope');
drop table test_isq;
```
|
Inserting into a subquery allows restricting results using [`WITH CHECK OPTION`](http://docs.oracle.com/database/121/SQLRF/statements_10002.htm#i2126149).
For example, let's say you want to allow any department name *except* for "new department". This example using a different value works fine:
```
SQL> insert into
2 (select deptno, dname from dept where dname <> 'new department' WITH CHECK OPTION)
3 values (98, 'old department');
1 row created.
```
But if that bad value is inserted it throws an error:
```
SQL> insert into
2 (select deptno, dname from dept where dname <> 'new department' WITH CHECK OPTION)
3 values (99, 'new department');
(select deptno, dname from dept where dname <> 'new department' WITH CHECK OPTION)
*
ERROR at line 2:
ORA-01402: view WITH CHECK OPTION where-clause violation
```
I've never seen this feature used in the wild. Views have this option so it makes sense that you should be able to do the same thing with a subquery. I'm not sure why anyone would want to do this though, it's just as easy to put the limit on the SELECT statement that feeds the INSERT. And if the INSERT uses VALUES it's trivial to convert it to a SELECT statement.
You have to really dig into the syntax diagrams to see this feature: [insert](http://docs.oracle.com/database/121/SQLRF/statements_9014.htm#SQLRF01604) --> single\_table\_insert --> subquery --> query\_block --> table\_reference --> query\_table\_expression --> subquery\_restriction\_clause.
|
Insert into a query
|
[
"",
"sql",
"oracle",
"insert",
""
] |
I'm trying to select data from a row and then, based on that value, assign a predefined value to a variable. I've got this; I'm not sure if it's the correct syntax for SQL, though.
```
DECLARE @ID INT
DECLARE @server VARCHAR(15)
DECLARE response_cursor CURSOR
FOR SELECT col1, col3, col4 FROM [dbo].[tbl] WHERE [status] <> 'Decommission'
IF col3 ='ABCD'
THEN @server = 'Server1';
ELSIF col3 ='DEFS'
THEN @server = 'Server2';
ELSIF col3 ='THSE'
THEN @server = 'Server3';
...
ELSE col3 = NULL
THEN @server = Null
END IF;
```
I get this when I try and run it:
> Msg 156, Level 15, State 1, Line 9 Incorrect syntax near the keyword
> 'then'.
This is running on a MS SQL 2012 server
Here's the entire sql statement I'm working on; might help to better explain what I'm trying to do.
```
USE [DB]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER procedure [dbo].[sp_anotherprocedure]
as
begin
-- begin loop processing to get a response for each record in the validation table
DECLARE @ID INT
DECLARE @server varchar(15)
DECLARE @port nvarchar(5)
declare response_cursor cursor
for select ID, COL3, Assigned_Port from [dbo].[tbl] where [Status] <> 'Decommission' -- we dont want to process decommissioned ports
if COL3='123'
begin @server = 'ABC'; end
IF COL3='456'
begin @server = 'DEF'; end
IF COL3='789'
begin @server = 'HIJ'; end
ELSE COL3 = NULL
begin @server = '' END
open response_cursor
fetch next from response_cursor -- get the first record
INTO @ID, @SERVER, @PORT
WHILE @@FETCH_STATUS = 0
BEGIN
-- make the call
DECLARE @RC int
DECLARE @request nvarchar(100)
DECLARE @out nvarchar(max)
--set @server = 'xxx.xxx.xxx.xxx' --ip address of the server
--set @port = '101010' --port of the server
set @request = '"some_parmamet'
EXECUTE @RC = [dbo].[sp_stored]
@server
,@port
,@request
,@out output
-- UPDATE THE RESPONSE FIELD FOR THE CURRENT RECORD WITH THE RESPONSE FROM THE REQUEST
/* UPDATE [dbo].[tbl]
set response = @out
where ID = @ID
fetch next from response_cursor -- get the next record and repeat request
into @ID, @SERVER, @PORT
*/
Print @out
END
close response_cursor
deallocate response_cursor
end
```
|
There are a few issues here:
```
DECLARE @ID INT
DECLARE @server VARCHAR(15)
DECLARE response_cursor CURSOR
FOR SELECT col1, col3, col4 FROM [dbo].[tbl] WHERE [status] <> 'Decommission'
```
**For your first if**
```
IF col3 ='ABCD'
BEGIN SET @server = 'Server1'; END
```
**There is no ELSIF keyword in T-SQL; chain `ELSE IF` blocks or separate IFs instead.**
```
IF col3 ='DEFS'
BEGIN SET @server = 'Server2'; END
IF col3 ='THSE'
BEGIN SET @server = 'Server3'; END
```
You don't need an END IF. You use a BEGIN/END block for each IF. Also, if you need to, you can use ELSE;
the structure is
```
IF something BEGIN dosomething END
ELSE
BEGIN doSomethingElse END
```
If you really need to use the IF/THEN/ELSE model, you can always do it using CASE
That structure is
```
SELECT CASE WHEN someCritera then dosomething
WHEN someCriteria then doSomething
...
--after your final when/then end with an END
```
[For reference purposes](https://msdn.microsoft.com/en-us/library/ms182587.aspx)
|
You can use a `CASE` expression for cleaner code.
```
SELECT @server = (
CASE col3
WHEN '123' THEN 'ABC'
WHEN '456' THEN 'DEF'
WHEN '789' THEN 'GHI'
ELSE ''
END)
```
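The `CASE` mapping is portable; a minimal sketch in SQLite via Python's `sqlite3`, using the placeholder col3 values and server names from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
rows = con.execute("""
SELECT col3,
       CASE col3
         WHEN '123' THEN 'ABC'
         WHEN '456' THEN 'DEF'
         WHEN '789' THEN 'GHI'
         ELSE ''               -- unmatched values fall through to empty
       END AS server
FROM (SELECT '123' AS col3 UNION ALL SELECT '456' UNION ALL SELECT '999')
ORDER BY col3
""").fetchall()
print(rows)
```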
|
Conditional Select Statement with if else
|
[
"",
"sql",
"sql-server-2012",
""
] |
I have two column in sql
```
year month
2016 4
2014 5
```
What I want to do now is combine these two columns to get the period.
Output
```
year month result
2016 4 201604
2014 5 201405
```
Are there ways to do this?
|
If the column data types are integer, do
```
select year * 100 + month from tablename
```
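The arithmetic is easy to verify outside SQL: multiplying the year by 100 shifts it two decimal digits left, leaving room for a zero-padded month. A Python sketch (the function name is illustrative):

```python
def period(year: int, month: int) -> int:
    # year * 100 shifts the year two decimal places; month fills the gap
    return year * 100 + month

print(period(2016, 4))  # 201604
print(period(2014, 5))  # 201405
```

Single-digit months come out zero-padded automatically because of the place-value shift; this only works while month < 100, which always holds here.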
|
First, you need to `CAST` them to `VARCHAR` to allow concatenation. Then use `RIGHT` for `month` to pad the value with `0`:
```
WITH Tbl AS(
SELECT 2016 AS [year], 4 AS [month] UNION ALL
SELECT 2014, 5
)
SELECT *,
CAST([year] AS VARCHAR(4)) + RIGHT('0' + CAST([month] AS VARCHAR(2)), 2)
FROM Tbl
```
|
How to combine two columns in sql
|
[
"",
"sql",
"sql-server",
""
] |
How do I write a query which will return the next date?
Here is an example; I want the query to populate the Next\_Date column.
Thanks
```
Employee_ID Date Point Next_Date
53 07/31/2015 1 12/02/2015
53 12/02/2015 1 01/12/2016
53 01/12/2016 1 02/10/2016
53 02/10/2016 1
```
I used the following query, but I am getting NULL:
```
SELECT
TOP 1 att.attend_date
FROM
Attendance att
WHERE
att.ID_Employee=att.ID_Employee and
att.attend_date > att.attend_date
ORDER BY
att.attend_date ASC
```
|
`LEAD` should work, depending on which DB you are using:
```
select
employee_id,
date,
point,
lead(date) over (partition by employee_id order by date) as next_date
from your_table
```
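SQLite 3.25+ also supports `LEAD`, so the accepted query can be sketched via Python's `sqlite3` (assuming a reasonably recent Python build), using the sample data from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE attendance (employee_id INTEGER, date TEXT)")
con.executemany("INSERT INTO attendance VALUES (?, ?)",
                [(53, '2015-07-31'), (53, '2015-12-02'),
                 (53, '2016-01-12'), (53, '2016-02-10')])
rows = con.execute("""
SELECT employee_id, date,
       LEAD(date) OVER (PARTITION BY employee_id ORDER BY date) AS next_date
FROM attendance
ORDER BY employee_id, date
""").fetchall()
for r in rows:
    print(r)
```

The last row per employee has no following date, so its `next_date` is NULL, exactly as in the expected output.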
|
You need a column to specify the order, since a table has no inherent order. It seems not to be ordered by the `Date` column, otherwise your first two records would have the wrong `Next_Date` since they are in the past.
So I assume that you have a primary key to determine the order.
Then you can use this query (`TOP` presumes SQL Server, so it must be changed for other RDBMSs):
```
UPDATE t SET Next_Date = (SELECT TOP 1 t2.Date
FROM TableName t2
WHERE t2.ID > t.ID)
FROM TableName t
WHERE t.Next_Date IS NULL
```
|
SQL Query to Show next date
|
[
"",
"sql",
"sql-server",
""
] |