| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I don't understand why this query doesn't work.
I have a table which is full of rows where 3 of the column values are set to `NULL`.
But when I run the following query, it returns 0 (it should return 96)
```
SELECT COUNT(*) FROM SEAT WHERE BOOKED=null;
```
Do you know what I am doing wrong? | It depends on your database settings and the specific RDBMS you are using, but if you are using ANSI NULL syntax you cannot directly compare a value to NULL with an equality test (`=`) -- it will always fail.
Use `WHERE BOOKED IS NULL` instead. | You have to use `IS NULL` instead of `= null`
Since null technically isn't a value, you can't compare null using the `=` operator. | Checking tables for null returns 0 regardless | [
"",
"sql",
""
] |
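The accepted behaviour is easy to reproduce outside any particular RDBMS. Here is a quick sketch using Python's built-in `sqlite3` module (SQLite follows the same ANSI NULL semantics); the table and column names are hypothetical:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE seat (id INTEGER, booked TEXT)")
cur.executemany("INSERT INTO seat VALUES (?, ?)", [(1, None), (2, "X"), (3, None)])

# '= NULL' evaluates to UNKNOWN for every row, so nothing matches
print(cur.execute("SELECT COUNT(*) FROM seat WHERE booked = NULL").fetchone()[0])   # 0

# 'IS NULL' is the correct null test and finds both unbooked seats
print(cur.execute("SELECT COUNT(*) FROM seat WHERE booked IS NULL").fetchone()[0])  # 2
```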
I have a column in a table containing gene names. Example of gene names are dnaA, BN\_001,BN\_0023 etc...
I want to have all genes names that do not contain the underscore. The statement I used was this:
```
SELECT Gene from cds WHERE Gene NOT LIKE '%_%';
```
However, this does not return the correct names of genes. What is the problem? I am using MYSQL as RDBMS.
Thank you | The problem is that `_` is a placeholder in the `LIKE` statement for any single character.
Instead you can use
```
SELECT Gene from cds
WHERE instr(Gene, '_') = 0
``` | You should write
```
LIKE '%\_%' ESCAPE '\'
```
because \_ means something in SQL, so you have to escape it with \ | SQL LIKE statement not working | [
"",
"mysql",
"sql",
""
] |
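Both answers can be checked in SQLite as well, which supports the same `LIKE ... ESCAPE` syntax. A small `sqlite3` sketch using made-up gene names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE cds (gene TEXT)")
cur.executemany("INSERT INTO cds VALUES (?)", [("dnaA",), ("BN_001",), ("BN_0023",)])

# '_' matches any single character, so NOT LIKE '%_%' rejects every non-empty name
print(cur.execute("SELECT gene FROM cds WHERE gene NOT LIKE '%_%'").fetchall())
# []

# Escaping the underscore makes it a literal character again
print(cur.execute(r"SELECT gene FROM cds WHERE gene NOT LIKE '%\_%' ESCAPE '\'").fetchall())
# [('dnaA',)]
```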
I am trying to write a Breeze query for the Northwind database to show orders that contain all three products that a user specifies. So if a user selects ProductID 41, 51, and 65 from drop downs, the query would return order id 10250.
This is just a sample scenario that I am looking to base another query on in a project I am working on, but I thought using Northwind to explain it would be easier than describing the project. I can easily do it in T-SQL using derived tables, but I need to get the parameters from the client. Any thoughts? Thanks in advance! | If you're still interested in doing this on the client, you can try the following Breeze query.
```
var listOfProductIds = [41, 51, 65];
var preds = listOfProductIds.map(function (id) {
return Predicate.create("OrderDetails", "any", "ProductID", "==", id);
});
var whereClause = Predicate.and(preds);
var query = EntityQuery.from('Orders').where(whereClause);
```
This will retrieve all Orders that have **at least all 3 of the Products specified**.
To further filter this so you have all Orders that have **only all 3 of the Products specified**, you can write,
```
entityManager.executeQuery(query)
.then(filterOrders);
//once you get results on the client
function filterOrders(data) {
var allOrders = data.results;
var filteredOrders = allOrders.filter(function (o) {
return o.OrderDetails.length == listOfProductIds.length;
});
}
```
You can only filter on the client since OData doesn't yet support the Aggregate Count function like Linq to Entities does. This is probably not ideal but it's an option if you decide not to do it on the server. | Sorry, in my comment I said to look at the `and()` method, but in actuality for your use case you need the `or()` method:
```
var p1 = breeze.Predicate("Product.ProductID", "==", 41);
var p2 = breeze.Predicate("Product.ProductID", "==", 51);
var p3 = breeze.Predicate("Product.ProductID", "==", 65);
var newPred = breeze.Predicate.or(p1, p2, p3);
var query = EntityQuery.from('Order_Details').select('Order.OrderID').where(newPred);
```
The only issue with these queries on the client is that depending on how you are building them client-side and how many predicates you are adding they can get *very* long in some situations, such as `select all 200 records except 1 id` type queries which can bite you on IE8. | Breeze - Writing complex query with multiple parameters from client | [
"",
"sql",
"entity-framework",
"breeze",
""
] |
I have two tables `trackings` and `responses`. I am running the query below to join the two tables based on the case/code\_2 columns.
Because there will sometimes be multiple records in the `response` table for every record in the `trackings` table, I only wanted one row returned, not duplicates for each row in the response table as would normally happen.
I accomplished this using the query below which works great.
```
SELECT T0.timestamp AS 'Creation Date', T0.ipaddress, T0.code_1 AS 'Alias', T0.code_2 AS 'Case ID', COUNT(T0.ipaddress) AS each_amount, T0.first, MAX(T1.res_id) AS 'responses'
FROM `trackings` AS T0
LEFT JOIN `responses` AS T1
ON T0.code_2 = T1.case
JOIN (
SELECT T2.case, MAX(T2.timestamp) AS max_date
FROM `responses` AS T2
GROUP BY T2.case
) x_temp_response_table
ON x_temp_response_table.case = T1.case
AND x_temp_response_table.max_date = T1.timestamp
WHERE T0.timestamp >= '2014-04-20 00:00:00'
AND T0.timestamp <= '2014-04-30 23:59:59'
GROUP BY code_2
```
However because of the second join to limit the response rows to just one, it now doesn't return `trackings` rows when there is no corresponding record in the response table.
Basically before adding this second join, it would return all rows from the `trackings` table, and just stick a NULL in the 'responses' column if there was no corresponding row in the `responses` table <- This is probably obvious as it's what a left join does :-)
So ideally I would like the query above to still return all rows from the `trackings` table even if there is no corresponding row in the responses table.
Any help would be really appreciated. | You may do this with an awful subquery (not as performant, but)...
```
SELECT
T0.timestamp AS 'Creation Date',
T0.ipaddress, T0.code_1 AS 'Alias',
T0.code_2 AS 'Case ID',
COUNT(T0.ipaddress) AS each_amount,
T0.first,
(SELECT r.res_id from responses r
where r.case = T0.code_2
order by r.timestamp desc
LIMIT 1) as responses
FROM `trackings` AS T0
WHERE T0.timestamp >= '2014-04-20 00:00:00'
AND T0.timestamp <= '2014-04-30 23:59:59'
GROUP BY code_2
``` | This is untested, but moving the `responses` join into a Derived Table should work:
```
SELECT T0.timestamp AS 'Creation Date', T0.ipaddress, T0.code_1 AS 'Alias', T0.code_2 AS 'Case ID', COUNT(T0.ipaddress) AS each_amount, T0.first, MAX(T1.res_id) AS 'responses'
FROM `trackings` AS T0
LEFT JOIN
(
SELECT T1.case, T1.res_id
FROM `responses` AS T1
JOIN
(
SELECT T2.CASE, MAX(T2.TIMESTAMP) AS max_date
FROM `responses` AS T2
GROUP BY T2.CASE
) x_temp_response_table
ON x_temp_response_table.CASE = T1.CASE
AND x_temp_response_table.max_date = T1.TIMESTAMP
) AS T1
ON T0.code_2 = T1.CASE
WHERE T0.TIMESTAMP >= '2014-04-20 00:00:00'
AND T0.timestamp <= '2014-04-30 23:59:59'
GROUP BY code_2
``` | Limiting rows returned in a left join in MySQL issue | [
"",
"mysql",
"sql",
"join",
"left-join",
""
] |
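The correlated-subquery idea from the first answer is easy to verify on a toy schema. A `sqlite3` sketch (the column `case` is renamed to `case_id` here since `CASE` is a reserved word, and the timestamps are simplified):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
    CREATE TABLE trackings (code_2 TEXT, ts TEXT);
    CREATE TABLE responses (case_id TEXT, res_id INTEGER, ts TEXT);
    INSERT INTO trackings VALUES ('A', '2014-04-21'), ('B', '2014-04-22');
    INSERT INTO responses VALUES ('A', 1, '2014-04-21'), ('A', 2, '2014-04-25');
""")

# The subquery picks the latest response per tracking; 'B' still appears, with NULL
rows = cur.execute("""
    SELECT t.code_2,
           (SELECT r.res_id FROM responses r
             WHERE r.case_id = t.code_2
             ORDER BY r.ts DESC LIMIT 1) AS latest_res
      FROM trackings t
     ORDER BY t.code_2
""").fetchall()
print(rows)  # [('A', 2), ('B', None)]
```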
I have 2 tables (USERS and ACCOUNTS) with the following data in them:
**USERS**
```
UserID Name Account_Number
10 John Smith 13
20 Alex Brown 14
30 Mary Wade 34
```
**ACCOUNTS**
```
Account number Amount
13 40
34 30
14 30
13 60
14 10
```
I would like to know how I can write a query to return the following results:
```
UserID Name Total amount
13 John Smith 100
14 Alex Brown 40
34 Mary Wade 30
```
The query that I have tried is:
```
SELECT USER_ID, NAME, (SELECT SUM(AMOUNT) FROM ACCOUNTS GROUP BY ACCOUNT) AS TOTAL_AMOUNT
FROM USERS
JOIN ACCOUNTS
USING(ACCOUNT_NUMBER)
ORDER BY TOTAL_AMOUNT DESC;
```
When I execute this I get the following error: ORA-01427: single-row subquery returns more than one row.
Does anyone know how I might be able to modify the query so that it works as intended?
Thanks! | Please try:
```
select
Account_Number,
Name,
(select SUM(Amount) from ACCOUNTS b where b.[Account number]=a.Account_Number) Total
from USERS a
order by Account_Number
``` | Remove your subquery.. It is returning more than one row..
```
SELECT U.USERID, U.NAME, SUM(A.AMOUNT) AS TOTAL_AMOUNT
FROM USERS U
INNER JOIN ACCOUNT A on U.ACCOUNT_NUMBER=A.ACCOUNT_NUMBER
GROUP BY A.ACCOUNT_NUMBER,U.USERID, U.NAME
ORDER BY SUM(A.AMOUNT) DESC;
```
[Fiddle](http://sqlfiddle.com/#!6/620d7/3) | ORA-01427: single-row subquery returns more than one row error | [
"",
"sql",
"oracle",
""
] |
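The join-plus-`GROUP BY` fix from the second answer reproduces the requested output exactly. A `sqlite3` sketch with the question's sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
    CREATE TABLE users (user_id INTEGER, name TEXT, account_number INTEGER);
    CREATE TABLE accounts (account_number INTEGER, amount INTEGER);
    INSERT INTO users VALUES (10,'John Smith',13),(20,'Alex Brown',14),(30,'Mary Wade',34);
    INSERT INTO accounts VALUES (13,40),(34,30),(14,30),(13,60),(14,10);
""")

# Aggregating over the join removes the need for the scalar subquery entirely
rows = cur.execute("""
    SELECT u.account_number, u.name, SUM(a.amount) AS total
      FROM users u JOIN accounts a ON u.account_number = a.account_number
     GROUP BY u.account_number, u.name
     ORDER BY total DESC
""").fetchall()
print(rows)
# [(13, 'John Smith', 100), (14, 'Alex Brown', 40), (34, 'Mary Wade', 30)]
```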
How to get two values from string in PL/SQL, like this:
```
DECLARE
context VARCHAR2(50) := 'param_a=Value1;param_b=Value2,Value3;';
paramA VARCHAR(50);
paramB VARCHAR(50);
BEGIN
paramA = ... -- expected value: Value1
paramB = ... -- expected value: Value2,Value3
dbms_output.put_line(context);
END;
``` | You can use something like:
```
DECLARE
context VARCHAR2(50) := 'param_a=Value1;param_b=Value2,Value3;';
paramA VARCHAR(50);
paramB VARCHAR(50);
BEGIN
paramA := SUBSTR('param_a=Value1;param_b=Value2,Value3;',instr('param_a=Value1;param_b=Value2,Value3;','=',1,1)+1,instr('param_a=Value1;param_b=Value2,Value3;',';',1,1)-instr('param_a=Value1;param_b=Value2,Value3;','=',1,1)-1);
paramB := SUBSTR('param_a=Value1;param_b=Value2,Value3;',instr('param_a=Value1;param_b=Value2,Value3;','=',1,2)+1,instr('param_a=Value1;param_b=Value2,Value3;',';',1,2)-instr('param_a=Value1;param_b=Value2,Value3;','=',1,2)-1);
dbms_output.put_line(paramA);
dbms_output.put_line(paramb);
END;
```
Hope it helps,
Vishad | A better way to do this kind of separation is to use REGEX (regular expressions). Please try this code; it may help, plus it will reduce the code length too.
```
SET serveroutput ON;
DECLARE
context VARCHAR2(50) := 'param_a=Value1;param_b=Value2,Value3;';
paramA VARCHAR(50);
paramB VARCHAR(50);
BEGIN
FOR rec IN
(SELECT regexp_substr(context,'[^;"]+', 1, level) AS AV_TEST
FROM dual
CONNECT BY regexp_substr(context,'[^;"]+', 1, level) IS NOT NULL
)
LOOP
dbms_output.put_line(rec.av_test);
END LOOP;
END;
``` | Splitting two semicolons separated params | [
"",
"sql",
"oracle",
"plsql",
""
] |
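If the parsing can happen outside the database, the same split is a one-liner in a general-purpose language. A hypothetical Python sketch of the equivalent logic:

```python
import re

context = "param_a=Value1;param_b=Value2,Value3;"

# Each 'name=value' pair ends at a semicolon; commas stay inside the value
params = dict(re.findall(r"([^=;]+)=([^;]*)", context))

print(params["param_a"])  # Value1
print(params["param_b"])  # Value2,Value3
```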
I have a database wherein I have the following columns:
```
id
name
date
time
item1
item2
item3
```
There are many 'names' but different 'id'. For each 'name', there are different 'date'. The 'date' may also duplicate. Under each 'date', there are different 'time'.
So an example data in the database is like this:
```
name date time
Name_1 2013-01-01 1:30
Name_1 2013-01-01 4:30
Name_1 2013-01-01 7:30
Name_1 2013-01-02 12:30
Name_1 2013-01-02 14:30
Name_1 2013-01-02 17:30
Name_2 2013-01-01 1:30
Name_2 2013-01-01 4:30
Name_2 2013-01-01 7:30
Name_2 2013-01-02 12:30
Name_2 2013-01-02 14:30
Name_2 2013-01-02 17:30
```
If I want my output to be:
```
Name_1 2013-01-01 7:30 /* last time checked for 2013-01-01 of Name_1 */
Name_1 2013-01-02 17:30 /* last time checked for 2013-01-02 of Name_1 */
Name_2 2013-01-01 7:30 /* last time checked for 2013-01-01 of Name_2 */
Name_2 2013-01-02 17:30 /* last time checked for 2013-01-02 of Name_2 */
```
What should be my query? Unfortunately, I cannot change the database structure anymore.
Thank you so much! | Try
```
SELECT name, date, MAX(time) FROM mytable
GROUP BY name, date
ORDER BY name,date
```
Demo at <http://sqlfiddle.com/#!3/470ce/5/0> | ```
SELECT
`name`, `date`, MAX(`time`) AS last_checked
FROM your_table
GROUP BY `name`, `date`
``` | SQL Query how to get data | [
"",
"sql",
""
] |
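Both answers are the same `GROUP BY` with `MAX(time)`. A `sqlite3` sketch — note the times here are zero-padded, since with a plain string column `'7:30'` would compare greater than `'12:30'` lexicographically, so a real TIME type or padded strings matter:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE checks (name TEXT, day TEXT, t TEXT)")
cur.executemany("INSERT INTO checks VALUES (?, ?, ?)", [
    ("Name_1", "2013-01-01", "01:30"), ("Name_1", "2013-01-01", "07:30"),
    ("Name_1", "2013-01-02", "17:30"), ("Name_2", "2013-01-01", "04:30"),
])

rows = cur.execute(
    "SELECT name, day, MAX(t) FROM checks GROUP BY name, day ORDER BY name, day"
).fetchall()
print(rows)
# [('Name_1', '2013-01-01', '07:30'), ('Name_1', '2013-01-02', '17:30'),
#  ('Name_2', '2013-01-01', '04:30')]
```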
Simple SQL query:
```
(select e.lastName as Name, count(o.orderid) as numOrd from
Employees e join Orders o on e.employeeid=o.employeeid
group by e.lastName)
```
and result
Buchanan 42
Callahan 104
Davolio 123
Dodsworth 43
My question is how to achieve in SQL something like that:
```
let queryResult =
(select e.lastName as Name, count(o.orderid) as numOrd from
Employees e join Orders o on e.employeeid=o.employeeid
group by e.lastName)
```
and after that to write something like this, which will be the output:
```
select AVG(qr.numOrd) from queryResult qr
```
Is it possible without creating any new tables? | yes, but why not just something like:
```
select count(o.orderid) / count(distinct o.employeeid) AvgNumOrder
from orders o
```
Derived tables, CTEs, subqueries, temp tables, and table variables all do what you ask, but none are needed. | Look at the [SQL Server Views](http://technet.microsoft.com/en-us/library/ms187956.aspx)
You can define view as
```
create view queryResult as
select e.lastName as Name, count(o.orderid) as numOrd from
Employees e join Orders o on e.employeeid=o.employeeid
group by e.lastName;
``` | Is it possible to encapsulate sql-select result? | [
"",
"sql",
"sql-server",
"select",
""
] |
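Besides views, a common non-persistent option is a CTE (`WITH ...`), which names the intermediate result for a single statement only. A `sqlite3` sketch with made-up order counts:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE orders (employee TEXT)")
cur.executemany("INSERT INTO orders VALUES (?)",
                [("Buchanan",)] * 2 + [("Callahan",)] * 4)

# The CTE plays the role of 'queryResult' without creating any table or view
row = cur.execute("""
    WITH query_result AS (
        SELECT employee, COUNT(*) AS num_ord FROM orders GROUP BY employee
    )
    SELECT AVG(num_ord) FROM query_result
""").fetchone()
print(row[0])  # 3.0 -- (2 + 4) orders over 2 employees
```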
I have the following SQL statement:
```
SELECT name, SUM(growth) AS sum_buy_price, SUM(recovery) AS sum_msrp, SUM(growth)+SUM(recovery) AS total
FROM orders
WHERE id = ?
GROUP BY name
```
My data is coming from a CSV file that I have no control over and either 'growth' or 'recovery' can be NULL in the data, but not at the same time. I need to use ISNULL to convert the possible NULL values to zero in order for the SUM to work correctly, but I'm unsure of how/where to add the ISNULL since the SELECT is indexing another record (name). | ISNULL returns whether the argument passed is null (i.e., it is analogous to true or false). I suppose, what you need is IFNULL:
```
SELECT
name,
SUM(IFNULL(growth, 0)) AS sum_buy_price,
SUM(IFNULL(recovery, 0)) AS sum_msrp,
SUM(IFNULL(growth, 0))+SUM(IFNULL(recovery,0)) AS total
FROM
orders
WHERE
id = ?
GROUP BY
name
``` | The `SUM()` function ignores `NULL` values, so you don't need to change a `NULL` to a `0` in order for it to work properly.
If however, all values that you're aggregating are `NULL` and you want to return a `0` instead of `NULL` you can use `IFNULL()` or the more common `COALESCE()` to show `0` as the sum instead of `NULL`:
```
SELECT COALESCE(SUM(growth),0)
```
`ISNULL()` is a valid SQL Server function, `IFNULL()` is the equivalent in MySQL, but all major databases make use of `COALESCE()` which returns the first non-NULL value in a set, ie: `COALESCE(NULL,NULL,5)` would return `5`. | NULL to zero with ISNULL | [
"",
"mysql",
"sql",
""
] |
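The claim that `SUM()` skips NULLs (and that `COALESCE` only matters when *every* aggregated value is NULL) is easy to confirm. A `sqlite3` sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE orders (growth REAL, recovery REAL)")
cur.executemany("INSERT INTO orders VALUES (?, ?)", [(10, None), (None, 5)])

# SUM skips NULLs on its own
print(cur.execute("SELECT SUM(growth), SUM(recovery) FROM orders").fetchone())  # (10.0, 5.0)

# COALESCE only matters when every value in the group is NULL
print(cur.execute(
    "SELECT COALESCE(SUM(growth), 0) FROM orders WHERE growth IS NULL"
).fetchone()[0])  # 0
```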
I have the following column, `date_period2` in my query which displays date in `yyyy-mm` format but as a string.
```
date_period2
201304
201305
201306
201307
```
How can i convert that to a DATE format so I can use it to get the working days of the specific month. So for example, `201304` would be converted to `04-2013` as a DATE instead of a string.
This is my query which is calculating an expression based on the `date_period2` field and it's not working because it's a string format:
```
SELECT TOP 1000 [date_period2]
,[amount]
,[amount]/(SELECT 20 + COUNT(*) FROM
(SELECT DATEADD(MONTH, DATEDIFF(MONTH, 0, [date_period2]), 28) AS theDate
UNION
SELECT DATEADD(MONTH, DATEDIFF(MONTH, 0, [date_period2]), 29)
UNION
SELECT DATEADD(MONTH, DATEDIFF(MONTH, 0, [date_period2]), 30) ) AS d
WHERE DATEPART(DAY, [date_period2]) > 28 AND DATEDIFF(DAY, 0, [date_period2]) % 7 < 5) AS [Weekly Charge]
FROM [database].[dbo].[table]
```
The issue is the `amount` is being divided by 20 and not by the working days for the month-year.
Example:
```
date_period2 amount charge average (amount/total working days of the month)
201304 1750359.95 87517.9975
```
So according to the result above the charge average is supposed to be `1750359.95/22`, but instead it is being divided by 20 giving it the wrong output.
How can I either edit the query or convert the date to ensure the output is done correctly? | Just put the string in an unambiguous date format like `yyyy-mm-dd`:
```
SELECT
CONVERT(DATETIME,
SUBSTRING(date_period2,1,4) + '-'
+ SUBSTRING(date_period2,5,2) + '-01')
```
That will convert `201304` into `2013-04-01` which will convert directly to a `DateTime`.
**EDIT**
Since your source column is *actually* an `integer` column, a cleaner method is:
```
select CONVERT(DATETIME,CONVERT(CHAR(6),date_period2)+'01')
```
that will convert `201304` to `"20130401"` which is still unambiguous to the datetime parser. | You can convert date\_period2 to a datetime data type with the following logic.
Please change as per your need.
```
declare @i int
select @i = CONCAT('201305','01')
select CONVERT (datetime,convert(char(8),@i))
```
In `CONCAT('201305','01')` '01' will be constant, only '201305' will change. | How to convert a string into date format | [
"",
"sql",
"sql-server",
"date",
"ssms",
""
] |
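The substring-rebuild approach translates directly to other engines. A `sqlite3` sketch of the same idea (`||` is SQLite's string concatenation, as in the SQL standard):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Same idea as the accepted answer: rebuild 'YYYY-MM-DD' and let the date parser take it
row = cur.execute(
    "SELECT date(substr('201304', 1, 4) || '-' || substr('201304', 5, 2) || '-01')"
).fetchone()
print(row[0])  # 2013-04-01
```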
```
st.executeUpdate("insert into groupperformance (Date,Gid,GName) values('"+yourDate+"',select Gid,GroupName from grouping)");
```
I use the above query to insert table values. The variable yourDate holds the current system date, and I copy the data from the grouping table. Without the yourDate variable, the query works fine and inserts values from the grouping table into the groupperformance table. What can I do to insert the current date as the default value? | ```
st.executeUpdate(
"insert into groupperformance (Date,Gid,GName)
select '" + yourDate + "', Gid,GroupName from grouping"
);
```
But, make sure that `yourDate` string is in valid MySQL date format `YYYY-MM-DD`.
And, if you want to insert current date from the database server, you can use `now()` or `curdate()` instead of your selected date.
```
st.executeUpdate(
"insert into groupperformance (Date,Gid,GName)
select curdate(), Gid,GroupName from grouping"
);
``` | You can use `SELECT GETDATE() AS CurrentDateTime`, which will return `2008-11-11 12:45:34.243`
OR
something like this:
```
CREATE TABLE Orders
(
OrderId int NOT NULL PRIMARY KEY,
ProductName varchar(50) NOT NULL,
OrderDate datetime NOT NULL DEFAULT GETDATE()
)
``` | Inserting Current Date as default in MySQL table | [
"",
"mysql",
"sql",
""
] |
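The `INSERT ... SELECT` with a server-side date function works the same way in SQLite, with `date('now')` playing the role of MySQL's `curdate()`. A sketch with hypothetical group data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
    CREATE TABLE grouping (gid INTEGER, group_name TEXT);
    CREATE TABLE groupperformance (day TEXT, gid INTEGER, gname TEXT);
    INSERT INTO grouping VALUES (1, 'alpha'), (2, 'beta');
""")

# The date function goes straight into the SELECT list, one value per copied row
cur.execute(
    "INSERT INTO groupperformance SELECT date('now'), gid, group_name FROM grouping"
)
rows = cur.execute("SELECT * FROM groupperformance ORDER BY gid").fetchall()
print(len(rows))    # 2
print(rows[0][1:])  # (1, 'alpha')
```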
I have this table with this data
```
DECLARE @tbl TABLE
(
IDX INTEGER,
VAL VARCHAR(50)
)
--Inserted values for testing
INSERT INTO @tbl(IDX, VAL) VALUES(1,'A')
INSERT INTO @tbl(IDX, VAL) VALUES(2,'A')
INSERT INTO @tbl(IDX, VAL) VALUES(3,'A')
INSERT INTO @tbl(IDX, VAL) VALUES(4,'B')
INSERT INTO @tbl(IDX, VAL) VALUES(5,'B')
INSERT INTO @tbl(IDX, VAL) VALUES(6,'B')
INSERT INTO @tbl(IDX, VAL) VALUES(7,'A')
INSERT INTO @tbl(IDX, VAL) VALUES(8,'A')
INSERT INTO @tbl(IDX, VAL) VALUES(9,'A')
INSERT INTO @tbl(IDX, VAL) VALUES(10,'C')
INSERT INTO @tbl(IDX, VAL) VALUES(11,'C')
INSERT INTO @tbl(IDX, VAL) VALUES(12,'A')
INSERT INTO @tbl(IDX, VAL) VALUES(13,'A')
--INSERT INTO @tbl(IDX, VAL) VALUES(14,'A') -- this line has bad binary code
INSERT INTO @tbl(IDX, VAL) VALUES(14,'A') -- replace with this line and it works
INSERT INTO @tbl(IDX, VAL) VALUES(15,'D')
INSERT INTO @tbl(IDX, VAL) VALUES(16,'D')
Select * From @tbl -- to see what you have inserted...
```
And the output I'm looking for is the FIRST and LAST Idx and Val in each group of Val's after ordering by Idx. Note that Val's may repeat, and Idx may not be in ascending order in the table as it is in the insert statements. No cursors please!
i.e
```
Val First Last
=================
A 1 3
B 4 6
A 7 9
C 10 11
A 12 14
D 15 16
``` | If the `idx` values are guaranteed to be sequential, then try this:
```
Select f.val, f.idx first, l.idx last
From @tbl f
join @tbl l
on l.val = f.val
and l.idx > f.idx
and not exists
(Select * from @tbl
Where val = f.val
and idx = l.idx + 1)
and not exists
(Select * from @tbl
Where val = f.val
and idx = f.idx - 1)
and not exists
(Select * from @tbl
Where val <> f.val
and idx Between f.idx and l.idx)
order by f.idx
```
if the `idx` values are not sequential, then it needs to be a bit more complex...
```
Select f.val, f.idx first, l.idx last
From @tbl f
join @tbl l
on l.val = f.val
and l.idx > f.idx
and not exists
(Select * from @tbl
Where val = f.val
and idx = (select Min(idx)
from @tbl
where idx > l.idx))
and not exists
(Select * from @tbl
Where val = f.val
and idx = (select Max(idx)
from @tbl
where idx < f.idx))
and not exists
(Select * from @tbl
Where val <> f.val
and idx Between f.idx and l.idx)
order by f.idx
A way to do it - at least for SQL Server 2008 - without using special functionality would be to introduce a helper table and a helper variable.
Whether that's actually possible for you as-is (due to your other requirements) I don't know, but it might lead you toward a solution, and it does meet your current requirements of no cursor and no lead/lag:
So basically what I do is make a helper table and a helper grouping variable:
(sorry about the naming)
```
DECLARE @grp TABLE
(
idx INTEGER ,
val VARCHAR(50) ,
gidx INT
)
DECLARE @gidx INT = 1
INSERT INTO @grp
( idx, val, gidx )
SELECT idx ,
val ,
0
FROM @tbl AS t
```
I populate this with the values from your source table @tbl.
Then I do an update trick to assign a value to gidx based on when VAL changes value:
```
UPDATE g
SET @gidx = gidx = CASE WHEN val <> ISNULL(( SELECT val
FROM @grp AS g2
WHERE g2.idx = g.idx - 1
), val) THEN @gidx + 1
ELSE @gidx
END
FROM @grp AS g
```
What this does is assign a value of 1 to gidx until VAL changes, then it assigns gidx + 1 which is also assigned to the @gidx variable. And so on.
This gives you the following usable result:
```
idx val gidx
1 A 1
2 A 1
3 A 1
4 B 2
5 B 2
6 B 2
7 A 3
8 A 3
9 A 3
10 C 4
11 C 4
12 A 5
13 A 5
14 A 5
15 D 6
16 D 6
```
Notice that gidx now is a grouping factor.
Then it's a simple matter of extracting the data with a sub select:
```
SELECT ( SELECT TOP 1
VAL
FROM @GRP g3
WHERE g2.gidx = g3.gidx
) AS Val ,
MIN(idx) AS First ,
MAX(idx) AS Last
FROM @grp AS g2
GROUP BY gidx
```
This yields the result:
```
A 1 3
B 4 6
A 7 9
C 10 11
A 12 14
D 15 16
```
[Fiddler link](http://sqlfiddle.com/#!3/d41d8/33847) | Pull out first index record in each REPEATING group ordered by index | [
"",
"sql",
"sql-server",
"group-by",
"gaps-and-islands",
""
] |
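With window functions available (SQLite 3.25+, SQL Server 2005+), the standard gaps-and-islands trick — the difference of two `ROW_NUMBER()`s is constant within each run — replaces both answers. A `sqlite3` sketch on the question's data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE t (idx INTEGER, val TEXT)")
data = [(1,'A'),(2,'A'),(3,'A'),(4,'B'),(5,'B'),(6,'B'),(7,'A'),(8,'A'),(9,'A'),
        (10,'C'),(11,'C'),(12,'A'),(13,'A'),(14,'A'),(15,'D'),(16,'D')]
cur.executemany("INSERT INTO t VALUES (?, ?)", data)

# Within each run of equal vals, the two row numbers grow in lockstep,
# so their difference identifies the island
rows = cur.execute("""
    SELECT val, MIN(idx), MAX(idx)
      FROM (SELECT idx, val,
                   ROW_NUMBER() OVER (ORDER BY idx)
                 - ROW_NUMBER() OVER (PARTITION BY val ORDER BY idx) AS grp
              FROM t)
     GROUP BY val, grp
     ORDER BY MIN(idx)
""").fetchall()
print(rows)
# [('A', 1, 3), ('B', 4, 6), ('A', 7, 9), ('C', 10, 11), ('A', 12, 14), ('D', 15, 16)]
```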
I have pored over numerous questions and answers but can't seem to find a solution that works for me. Here is what I am trying to do:
Say I have a table called "Store" that holds Store IDs, store names, and the annual sales of that store. The business wants each store to increase its sales by 50% over the next year, and wants to calculate monthly target sales amounts that each store needs to reach to hit the 50% increase. For example, if store number 5 had $1000 in sales last year, the target for the end of the next year would be $1500. And to reach that, the amount for each month would need to be something like $1042, $1084, $1126... $1500.
What I am trying to do is insert 12 records for each store into a 'monthly plan fact' table. I am trying to select a store from the Store table, grab the annual sales amount, then do a loop inside that where I calculate what each month value would be so I can insert it into the 'monthly plan fact' table, then select the next store from the store table... and so on. As a C# developer, it seems a very simple task that could be accomplished with a 'for each' style loop with another 'for' loop inside it.
But I cannot for the life of me figure out how to do this in SQL. I know about temp tables, but I can't seem to figure out how to get multiple rows inserted into my 'monthly plan fact' table using the records from the Store table, and the values calculated in the loop to determine the monthly plan values. I've read some on cursors, but it seems that most people advise against them in the SQL community, so I am at a loss on this one. | In `Sql` you really should try to *avoid using loops*. Instead, you should use a set-based approach. [This post](https://stackoverflow.com/questions/5912346/type-of-loops-in-sql-server) (first answer) has a good link on why you shouldn't do it, then an example that could help you do it. Take a look at using a [CURSOR](http://en.wikipedia.org/wiki/Cursor_(databases)) instead.
@paqogomez was right :) | The T-SQL below shows how to easily get the sales targets for each store for each month. I've included two columns here: one called 'salesTargetYourExample', which follows a pattern like 1042, 1084, ... for the sales targets, and one called 'linearGrowthSales', which takes the new annual sales target (e.g. 1500 for the store that made 1000 last year) and simply looks at what sales should be at the end of each month to reach that target, assuming linear growth.
The short answer, though, is to always think in terms of sets and set-based operations when working with databases. Don't think in terms of loops.
```
-- Create the store table
create table dbo.Store
(
storeId int not null primary key clustered,
storeName varchar(100) not null,
annualSales money not null
);
-- Populate the data in the store table
insert into dbo.Store(storeId,storeName,annualSales)
values (1, 'My first store', 2000),
(5, 'Store number five', 1000),
(6, 'The sixth store', 2500);
-- Get the sales targets for each store on a monthly basis
select s.storeId, s.storeName, months.mth,
(s.annualSales * 0.5 * months.mth/12) + s.annualSales as salesTargetYourExample,
s.annualSales * 1.5 * months.mth/12 as linearGrowthSales
from dbo.Store as s
cross apply
(
values (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),(11),(12)
) as months(mth)
``` | How can I do loop within a loop in SQL, similar to a for-each loop? | [
"",
"sql",
"loops",
"t-sql",
"database-cursor",
""
] |
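The set-based `CROSS JOIN` against a months list works in any engine. A `sqlite3` sketch with store 5's $1000 annual sales (only three of the twelve months shown for brevity):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE store (store_id INTEGER, annual_sales REAL)")
cur.execute("INSERT INTO store VALUES (5, 1000.0)")

# One output row per (store, month) pair -- no loop anywhere
rows = cur.execute("""
    SELECT s.store_id, m.mth,
           s.annual_sales + s.annual_sales * 0.5 * m.mth / 12 AS target
      FROM store s
      CROSS JOIN (SELECT 1 AS mth UNION ALL SELECT 6 UNION ALL SELECT 12) m
     ORDER BY m.mth
""").fetchall()
print(rows[-1])  # (5, 12, 1500.0)
```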
I need to remove some attributes from a json type column.
The Table:
```
CREATE TABLE my_table( id VARCHAR(80), data json);
INSERT INTO my_table (id, data) VALUES (
'A',
'{"attrA":1,"attrB":true,"attrC":["a", "b", "c"]}'
);
```
Now, I need to remove `attrB` from column `data`.
Something like `alter table my_table drop column data->'attrB';` would be nice. But a way with a temporary table would be enough, too. | **Update**: for 9.5+, there are explicit operators you can use with `jsonb` (if you have a `json` typed column, you can use casts to apply a modification):
Deleting a key (or an index) from a JSON object (or, from an array) can be done with the `-` operator:
```
SELECT jsonb '{"a":1,"b":2}' - 'a', -- will yield jsonb '{"b":2}'
jsonb '["a",1,"b",2]' - 1 -- will yield jsonb '["a","b",2]'
```
Deleting, from deep in a JSON hierarchy can be done with the `#-` operator:
```
SELECT '{"a":[null,{"b":[3.14]}]}' #- '{a,1,b,0}'
-- will yield jsonb '{"a":[null,{"b":[]}]}'
```
For 9.4, you can use a modified version of the original answer (below), but instead of aggregating a JSON string, you can aggregate into a `json` object directly with `json_object_agg()`.
Related: other JSON manipulations whithin PostgreSQL:
* [How do I modify fields inside the new PostgreSQL JSON datatype?](https://stackoverflow.com/a/23500670/1499698)
**Original answer** (applies to PostgreSQL 9.3):
If you have at least PostgreSQL 9.3, you can split your object into pairs with `json_each()` and filter your unwanted fields, then build up the json again manually. Something like:
```
SELECT data::text::json AS before,
('{' || array_to_string(array_agg(to_json(l.key) || ':' || l.value), ',') || '}')::json AS after
FROM (VALUES ('{"attrA":1,"attrB":true,"attrC":["a","b","c"]}'::json)) AS v(data),
LATERAL (SELECT * FROM json_each(data) WHERE "key" <> 'attrB') AS l
GROUP BY data::text
```
With 9.2 (or lower) it is not possible.
**Edit**:
A more convenient form is to create a function, which can remove any number of attributes in a `json` field:
**Edit 2**: `string_agg()` is less expensive than `array_to_string(array_agg())`
```
CREATE OR REPLACE FUNCTION "json_object_delete_keys"("json" json, VARIADIC "keys_to_delete" TEXT[])
RETURNS json
LANGUAGE sql
IMMUTABLE
STRICT
AS $function$
SELECT COALESCE(
(SELECT ('{' || string_agg(to_json("key") || ':' || "value", ',') || '}')
FROM json_each("json")
WHERE "key" <> ALL ("keys_to_delete")),
'{}'
)::json
$function$;
```
With this function, all you need to do is to run the query below:
```
UPDATE my_table
SET data = json_object_delete_keys(data, 'attrB');
``` | This has gotten much easier with PostgreSQL 9.5 using the JSONB type. See JSONB operators documented [here](http://www.postgresql.org/docs/9.5/static/functions-json.html#FUNCTIONS-JSONB-OP-TABLE).
You can remove a top-level attribute with the "-" operator.
```
SELECT '{"a": {"key":"value"}, "b": 2, "c": true}'::jsonb - 'a'
// -> {"b": 2, "c": true}
```
You can use this within an update call to update an existing JSONB field.
```
UPDATE my_table SET data = data - 'attrB'
```
You can also provide the attribute name dynamically via parameter if used in a function.
```
CREATE OR REPLACE FUNCTION delete_mytable_data_key(
_id integer,
_key character varying)
RETURNS void AS
$BODY$
BEGIN
UPDATE my_table SET
data = data - _key
WHERE id = _id;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
```
The reverse operator is the "||", in order to concatenate two JSONB packets together. Note that the right-most use of the attribute will overwrite any previous ones.
```
SELECT '{"a": true, "c": true}'::jsonb || '{"a": false, "b": 2}'::jsonb
// -> {"a": false, "b": 2, "c": true}
``` | PostgreSQL: Remove attribute from JSON column | [
"",
"sql",
"json",
"postgresql",
"postgresql-json",
""
] |
I'm not even sure how exactly to ask this question so bear with me.
I have two tables, meets and locations, they are linked by the meet's 'loc' and the location's 'id'
I'm using this query:
```
$query = "SELECT * FROM meets LEFT JOIN locations ON locations.id=meets.loc ORDER BY date DESC";
$meets = mysqli_query($con, $query);
```
though it joins the table successfully I lose the meet's 'id' because it's being overwritten by the location's 'id'. So I end up with two identical entries 'id' and 'loc'.
Is there any way to avoid this because I need to call on the meet id? | Do not select \*; select the columns you need and rename them using the `AS` keyword, like so:
```
SELECT locations.id as loc_id, meets.id as meets_id, ... FROM meets LEFT JOIN locations ON locations.id=meets.loc ORDER BY date DESC
```
replacing `...` with other columns you would like to select and possible renames of them. | You could alias the locations.id with a query like:
```
SELECT meets.*, locations.id AS loc_id FROM meets LEFT JOIN locations ON locations.id=meets.loc ORDER BY date DESC
```
You could add any other columns from `locations` that you might also need. You could further only select explicit columns from `meet` rather than `meet.*` | Losing data when joining two tables in mysql? | [
"",
"mysql",
"sql",
""
] |
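Aliasing as both answers suggest keeps the two `id` columns distinguishable. A `sqlite3` sketch with hypothetical rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
    CREATE TABLE meets (id INTEGER, loc INTEGER);
    CREATE TABLE locations (id INTEGER, name TEXT);
    INSERT INTO meets VALUES (7, 1);
    INSERT INTO locations VALUES (1, 'Gym');
""")

row = cur.execute("""
    SELECT meets.id AS meet_id, locations.id AS loc_id, locations.name
      FROM meets LEFT JOIN locations ON locations.id = meets.loc
""").fetchone()

# Both ids survive under distinct names
print([d[0] for d in cur.description])  # ['meet_id', 'loc_id', 'name']
print(row)                              # (7, 1, 'Gym')
```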
I've not posted here before so forgive me if this is incorrect or needs altering.
I'm currently trying to work on a way to split one table into two seperate tables after a certain row count.
So for example I have 2 columns of data with 2000 rows, I'd like to split that into two tables where the first table has the same 2 columns with the first 1000 results and the second table has the same two colums but with the last 1000 results. Hopefully that will make sense.
Is there any way I can do this?
Thanks | Using ROW_NUMBER():
Fiddle: <http://sqlfiddle.com/#!3/6230e/3>
```
-- First 1000
select * from
(SELECT ROW_NUMBER() OVER(ORDER BY Col1 asc) AS Row,
Col1, Col2
FROM ur_table)d
where d.Row between 1 and 1000
-- Last 1000
select * from
(SELECT ROW_NUMBER() OVER(ORDER BY Col1 asc) AS Row,
Col1, Col2
FROM ur_table)d
where d.Row between 1001 and 2000
``` | I might use this approach. Please comment if you need explanation of any of the operations.
```
CREATE TABLE #TableA (Col1 INT)
CREATE TABLE #TableB (Col1 INT)
INSERT INTO #TableA (Col1)
SELECT 1 UNION ALL
SELECT 2 UNION ALL
SELECT 3 UNION ALL
SELECT 4 UNION ALL
SELECT 5
DELETE #TableA
OUTPUT DELETED.Col1 INTO #TableB
WHERE Col1 < 3
``` | Split Table rows over two Tables | [
"",
"sql",
"sql-server",
""
] |
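The `ROW_NUMBER()` split generalizes to any row count; just note that the second window has to start one past the first window's end (1001, not 1000) to avoid duplicating the boundary row. A `sqlite3` sketch splitting 6 rows at row 3:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE src (col1 INTEGER)")
cur.executemany("INSERT INTO src VALUES (?)", [(i,) for i in range(1, 7)])  # 6 rows

first = cur.execute("""
    SELECT col1 FROM (SELECT col1, ROW_NUMBER() OVER (ORDER BY col1) AS rn FROM src)
     WHERE rn BETWEEN 1 AND 3
""").fetchall()
second = cur.execute("""
    SELECT col1 FROM (SELECT col1, ROW_NUMBER() OVER (ORDER BY col1) AS rn FROM src)
     WHERE rn BETWEEN 4 AND 6
""").fetchall()
print(first)   # [(1,), (2,), (3,)]
print(second)  # [(4,), (5,), (6,)]
```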
I'm using Rails 4.0.4 with Ruby 2.1 and Thin 1.6.2 on Ubuntu 14.04 through my terminal "Terminator" and my shell "Fish Shell".
When I'm launching my Rails server in development mode I don't have the SQL queries in my logs only the JS and HTML files are loaded.
I'm searching for something like that:
```
User Load (3.0ms) SELECT "users".* FROM "users" WHERE "users"."id" = 3 ORDER BY "users"."id" ASC LIMIT 1
(2.0ms) SELECT COUNT(*) FROM "users" WHERE (driver_register_state_cd = -2)
``` | The Rails console never writes to the log file, but you can achieve it quite easily. For example, if you execute the following after starting the Rails console
```
ActiveRecord::Base.logger = Logger.new STDOUT
```
Rails will log all SQL statements to stdout, thus displaying them in your terminal. And since Logger.new accepts any stream as its first argument, you could just let it write to the Rails development.log:
```
ActiveRecord::Base.logger = Logger.new File.open('log/development.log', 'a')
``` | Set `config.log_level` in config/environments/development.rb to `:debug` and restart your local server/console:
```
config.log_level = :debug
``` | Display SQL queries in log with Rails 4 | [
"",
"sql",
"ruby-on-rails",
"ruby-on-rails-4",
""
] |
I am new to SQL Server/used to MySQL databases and I am running into an issue that I never ran into with MySQL. I am looking to pull all current policy numbers, the name of the company/person it belongs to, their total premium, and whether or not they have what we call 'equipment breakdown' coverage. This is all pretty simple, the issue I am having is with grouping. I want to group by one column only, aka one distinct policy number, the company name, a sum of the premium (it is possible to have several premium amounts both negative and positive so I want to sum these to see what the true total is), and a simple Yes or No column for equipment breakdown.
Here is the query I am running:
```
SELECT pol_num as policy_number,
insd_name as insureds_name,
SUM(amt) as 'total_premium',
(SELECT
CASE
WHEN cvg_desc = 'Equipment Breakdown'
THEN 'Y'
ELSE 'N'
END) as 'equipment_breakdown'
FROM bapu.dbo.fact_prem
WHERE '2014-05-06' between d_pol_eff and d_pol_exp
AND amt_type = 'Premium'
AND amt_desc = 'Written Premium'
GROUP BY pol_num
ORDER BY policy_number
```
I get an error saying that I need to group by insd\_name and cvg\_desc as well, but I DON'T want that, as it gives me duplicate policy numbers.
**Here is an example of what I get when I group everything it tells me to:**
```
policy_number insureds_name total_premium equipment_breakdown
001 company a 0.00 n
001 company a 25,000.00 n
001 company a -10,000.00 n
002 company b 100.00 y
002 company b 10,000.00 y
```
**Here is an example of the results I want:**
```
policy_number insureds_name total_premium equipment_breakdown
001 company a 15,000.00 n
002 company b 10,100.00 y
```
Basically, I just want to group by the policy number and sum the premium amounts. Above is how I would achieve this in MySQL, how can I achieve the results I am looking for in SQL Server?
Thanks | You'll need an aggregate function on the fields you don't want to group by. A simple one to use is `MAX` which works with most types;
```
SELECT pol_num as policy_number,
MAX(insd_name) as insureds_name,
SUM(amt) as 'total_premium',
(SELECT
CASE
WHEN MAX(cvg_desc) = 'Equipment Breakdown'
THEN 'Y'
ELSE 'N'
END) as 'equipment_breakdown'
FROM bapu.dbo.fact_prem
WHERE '2014-05-06' between d_pol_eff and d_pol_exp
AND amt_type = 'Premium'
AND amt_desc = 'Written Premium'
GROUP BY pol_num
ORDER BY policy_number
```
The reason SQL Server wants this is that it likes to give deterministic answers, for example
```
column_a | column_b
1 | 1
1 | 2
```
...grouped by only `column_a` would in MySQL give either 1 or 2 as an answer for `column_b`, while SQL Server wants you to tell it explicitly which one to use. | MySQL doesn't require all non-aggregate fields to be included in the `GROUP BY` clause, even though not doing so can yield unexpected results. SQL Server requires this, so you are forced to decide how you want to handle multiple `insd_name` values for a given `pol_num`, you can use `MAX()`, `MIN()`, or if the values are always the same, just add them to your `GROUP BY`:
```
SELECT pol_num AS policy_number
, MAX(insd_name) AS insureds_name
, SUM(amt) AS 'total_premium'
, MAX(CASE WHEN cvg_desc = 'Equipment Breakdown' THEN 'Y'
ELSE 'N'
END) AS 'equipment_breakdown'
FROM bapu.dbo.fact_prem
WHERE '2014-05-06' BETWEEN d_pol_eff AND d_pol_exp
AND amt_type = 'Premium'
AND amt_desc = 'Written Premium'
GROUP BY pol_num
ORDER BY policy_number
```
Or:
```
SELECT pol_num AS policy_number
, insd_name AS insureds_name
, SUM(amt) AS 'total_premium'
, CASE WHEN cvg_desc = 'Equipment Breakdown' THEN 'Y'
ELSE 'N'
END AS 'equipment_breakdown'
FROM bapu.dbo.fact_prem
WHERE '2014-05-06' BETWEEN d_pol_eff AND d_pol_exp
AND amt_type = 'Premium'
AND amt_desc = 'Written Premium'
GROUP BY pol_num
, insd_name
, CASE WHEN cvg_desc = 'Equipment Breakdown' THEN 'Y'
ELSE 'N'
END
ORDER BY policy_number
``` | Group by T-SQL vs. MySQL (single column) | [
"",
"sql",
"sql-server",
"t-sql",
"group-by",
"aggregate",
""
] |
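The MAX(CASE ...) conditional-aggregation trick from the answers above is portable across engines. A minimal sketch with Python's sqlite3 — the coverage names other than 'Equipment Breakdown' are invented sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE fact_prem (pol_num TEXT, insd_name TEXT, amt REAL, cvg_desc TEXT)")
conn.executemany("INSERT INTO fact_prem VALUES (?, ?, ?, ?)", [
    ("001", "company a",      0.0, "Property"),
    ("001", "company a",  25000.0, "Property"),
    ("001", "company a", -10000.0, "Property"),
    ("002", "company b",    100.0, "Equipment Breakdown"),
    ("002", "company b",  10000.0, "Equipment Breakdown"),
])

# One row per policy: summed premium plus a Y/N flag folded in with MAX(CASE ...).
rows = conn.execute("""
    SELECT pol_num,
           MAX(insd_name) AS insureds_name,
           SUM(amt)       AS total_premium,
           MAX(CASE WHEN cvg_desc = 'Equipment Breakdown'
                    THEN 'Y' ELSE 'N' END) AS equipment_breakdown
    FROM fact_prem
    GROUP BY pol_num
    ORDER BY pol_num
""").fetchall()
print(rows)  # [('001', 'company a', 15000.0, 'N'), ('002', 'company b', 10100.0, 'Y')]
```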
I need to search for and display a part of a string field. The string value from record to record may be different. For example:
---
Record #1
String Value:
```
IA_UnsafesclchOffense0IA_ReceivedEdServDuringExp0IA_SeriousBodilyInjuryN
```
---
Record #2
String Value:
```
IA_ReasonForRemovalTIA_Beh_Inc_Num1392137419IA_RemovalTypeNIA_UnsafesclchOffense0IA_ReceivedEdServDuringExp0IA_SeriousBodilyInjuryN
```
---
Record #3
String Value:
```
IA_UnsafesclchOffense0IA_RemovalTypeSIA_ReasonForRemovalPIA_ReceivedEdServDuringExp0IA_Beh_Inc_Num1396032888IA_SeriousBodilyInjuryN
```
---
In each case, I need to search for `IA_Beh_Inc_Num`. Assuming it's found, and IF it's followed by numeric data, I want to RETURN the numeric portion of that data. The numeric data, when present, will always be 10 characters.
In other words, record #1 should return no value, record #2 should return 1392137419 and record #3 should return 1396032888
Is there a way to do this within a select statement without having to write a full function with PL/SQL? | This would work:
```
SELECT
CASE WHEN instr(value, 'IA_Beh_Inc_Num') > 0
THEN substr(substr(value, instr(value, 'IA_Beh_Inc_Num'), 25),15,10)
ELSE 'not found'
END AS result
FROM example
```
See this [SQL Fiddle](http://sqlfiddle.com/#!4/b9c0ed/9/0). | This should be easy with a Regular Expression: find a search string and check if it's followed by 10 digits:
```
REGEXP_SUBSTR(col, '(?<=IA_Beh_Inc_Num)([0-9]{10})')
```
but Oracle doesn't seem to support RegEx *lookbehind*, so it's a bit more complicated:
```
REGEXP_SUBSTR(value, '(IA_Beh_Inc_Num)([0-9]{10})',1,1,'i',2)
```
Remarks: the search is case-insensitive and if there are less than 10 digits NULL will be returned. | Search for substring, return another substring | [
"",
"sql",
"oracle",
"substr",
""
] |
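For comparison, regex engines that do support lookbehind make the extraction above a one-liner. A sketch with Python's `re` module, using the sample strings from the question:

```python
import re

rows = [
    "IA_UnsafesclchOffense0IA_ReceivedEdServDuringExp0IA_SeriousBodilyInjuryN",
    "IA_ReasonForRemovalTIA_Beh_Inc_Num1392137419IA_RemovalTypeN"
    "IA_UnsafesclchOffense0IA_ReceivedEdServDuringExp0IA_SeriousBodilyInjuryN",
    "IA_UnsafesclchOffense0IA_RemovalTypeSIA_ReasonForRemovalP"
    "IA_ReceivedEdServDuringExp0IA_Beh_Inc_Num1396032888IA_SeriousBodilyInjuryN",
]

# Lookbehind for the marker, then exactly 10 digits.
pattern = re.compile(r"(?<=IA_Beh_Inc_Num)\d{10}")

results = [m.group(0) if (m := pattern.search(row)) else None for row in rows]
print(results)  # [None, '1392137419', '1396032888']
```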
Ok so I have searched for an answer to this on Technet, to no avail.
I just want to print an integer variable concatenated with two String variables.
This is my code, that doesn't run:
```
print 'There are ' + @Number + ' alias combinations did not match a record'
```
It seems like such a basic feature, I couldn't imagine that it is not possible in T-SQL. But if it isn't possible, please just say so. I can't seem to find a straight answer. | ```
declare @x INT = 1 /* Declares an integer variable named "x" with the value of 1 */
PRINT 'There are ' + CAST(@x AS VARCHAR) + ' alias combinations did not match a record' /* Prints a string concatenated with x casted as a varchar */
``` | If you don't want to manually cast types, you can use the [`CONCAT`-function](https://learn.microsoft.com/en-us/sql/t-sql/functions/concat-transact-sql) for this.
```
PRINT CONCAT('There are ', @Number, ' alias combinations did not match a record')
``` | Printing integer variable and string on same line in SQL | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
Assume I have two tables:
1. users

2. sells

I need to alter this query: **Show top three users who bought copies of software**
My SQL is this:
```
select u.name, u.age, u.sex, u.email, s.selluid, max(count(u.uid)) FROM users u, sells s where u.usrid = s.selluid
```
Any idea about how to solve this problem? Thanks | Try this
```
select u.usrid, u.name, count(s.sellid)
from users u left join sells s on u.usrid=s.selluid
group by u.usrid, u.name order by count(s.sellid) desc;
``` | ```
SELECT x.*
FROM (
SELECT u.name
, u.age
, u.sex
, u.email
, s.selluid
, COUNT(*) as t
FROM users u JOIN sells s ON u.usrid = s.selluid
       GROUP BY u.name, u.age, u.sex, u.email, s.selluid
ORDER BY COUNT(*) DESC
) x
WHERE ROWNUM <= 3
``` | max(count(*)) Error: single-group group function | [
"",
"sql",
"database",
"oracle",
"group-by",
"oracle10g",
""
] |
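To actually cap the result at the top three, the grouped-and-ordered query also needs a row limit (ROWNUM in Oracle 10g, as in the second answer, or LIMIT elsewhere). A minimal sketch with Python's sqlite3 and made-up users and sales:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (usrid INTEGER, name TEXT);
    CREATE TABLE sells (sellid INTEGER, selluid INTEGER);
""")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "ann"), (2, "bob"), (3, "cat"), (4, "dan")])
# ann sold 3 copies, bob 2, cat 1, dan 0
conn.executemany("INSERT INTO sells VALUES (?, ?)",
                 [(10, 1), (11, 1), (12, 1), (13, 2), (14, 2), (15, 3)])

top3 = conn.execute("""
    SELECT u.usrid, u.name, COUNT(s.sellid) AS n
    FROM users u LEFT JOIN sells s ON u.usrid = s.selluid
    GROUP BY u.usrid, u.name
    ORDER BY n DESC
    LIMIT 3
""").fetchall()
print(top3)  # [(1, 'ann', 3), (2, 'bob', 2), (3, 'cat', 1)]
```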
Given these 2 tables:
Users:
```
Name UserId
-----------
Pump 1 1
Pump 1 2
Pump 1 3
Pump 2 4
Pump 2 5
Pump 2 6
```
Posts:
```
PostId UserId Score
1 3 5
2 1 8
3 3 3
4 1 2
5 2 8
6 2 1
```
How do I get the post with the highest score made by each user?
Best I can do is:
```
select Users.UserId as UserID, Posts.PostId as PostsID, Max(Posts.Score) as Score
from Users
inner join Posts on Posts.UserId = Users.UserId
Group by Users.UserId, Posts.PostsId
```
which doesn't give me the right answer. | Query:
**[SQLFIDDLEExample](http://sqlfiddle.com/#!2/62a4fb/6)**
```
SELECT u.Name,
u.UserId,
p.PostId,
p.Score
FROM Users u
JOIN Posts p
ON p.UserId = u.UserId
LEFT JOIN Posts p1
ON p1.UserId = p.UserId
AND p1.Score > p.Score
WHERE p1.Score is null
```
Result:
```
| NAME | USERID | POSTID | SCORE |
|--------|--------|--------|-------|
| Pump 1 | 3 | 1 | 5 |
| Pump 1 | 1 | 2 | 8 |
| Pump 1 | 2 | 5 | 8 |
``` | If you're using SQL-Server you can use a ranking function like `ROW_NUMBER` or `DENSE_RANK`:
```
WITH CTE AS
(
SELECT u.UserId as UserID, p.PostId as PostsID, p.Score,
RN = ROW_NUMBER() OVER (PARTITION BY u.UserId ORDER BY Score DESC)
FROM dbo.Users u
INNER JOIN dbo.Posts p on p.UserId = u.UserId
)
SELECT UserID, PostsID, Score
FROM CTE
WHERE RN = 1
```
`DEMO`
In MySql this should work:
```
SELECT Users.userid AS UserID,
Posts.postid AS PostsID,
Max(Posts.score) AS Score
FROM Users
INNER JOIN Posts
ON Posts.userid = Users.userid
GROUP BY Users.userid
```
Although the `PostID` is somewhat arbitrary here.
`DEMO` | Get the Post with highest score for each user | [
"",
"mysql",
"sql",
"sql-server",
"database",
""
] |
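The self-join trick from the accepted answer — keep a row only when no other row for the same user has a higher score — is portable across engines. A minimal sketch with Python's sqlite3, using the question's Posts data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (postid INTEGER, userid INTEGER, score INTEGER)")
conn.executemany("INSERT INTO posts VALUES (?, ?, ?)", [
    (1, 3, 5), (2, 1, 8), (3, 3, 3), (4, 1, 2), (5, 2, 8), (6, 2, 1),
])

# Keep p only if no p1 for the same user outscores it.
best = conn.execute("""
    SELECT p.userid, p.postid, p.score
    FROM posts p
    LEFT JOIN posts p1
           ON p1.userid = p.userid AND p1.score > p.score
    WHERE p1.score IS NULL
    ORDER BY p.userid
""").fetchall()
print(best)  # [(1, 2, 8), (2, 5, 8), (3, 1, 5)]
```

Note that if two posts tie for a user's top score, this query returns both rows.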
As I can understand [documentation](https://www.postgresql.org/docs/current/indexes-unique.html) the following definitions are equivalent:
```
create table foo (
id serial primary key,
code integer,
label text,
constraint foo_uq unique (code, label));
create table foo (
id serial primary key,
code integer,
label text);
create unique index foo_idx on foo using btree (code, label);
```
However, a note in [the manual for Postgres 9.4](https://www.postgresql.org/docs/9.4/indexes-unique.html) says:
> The preferred way to add a unique constraint to a table is `ALTER TABLE ... ADD CONSTRAINT`. The use of indexes to enforce unique constraints
> could be considered an implementation detail that should not be
> accessed directly.
(Edit: this note was removed from the manual with Postgres 9.5.)
Is it only a matter of good style? What are the practical consequences of choosing one of these variants (e.g. in performance)? | I had some doubts about this basic but important issue, so I decided to learn by example.
Let's create test table *master* with two columns, *con\_id* with unique constraint and *ind\_id* indexed by unique index.
```
create table master (
con_id integer unique,
ind_id integer
);
create unique index master_unique_idx on master (ind_id);
Table "public.master"
Column | Type | Modifiers
--------+---------+-----------
con_id | integer |
ind_id | integer |
Indexes:
"master_con_id_key" UNIQUE CONSTRAINT, btree (con_id)
"master_unique_idx" UNIQUE, btree (ind_id)
```
In table description (\d in psql) you can tell unique constraint from unique index.
**Uniqueness**
Let's check uniqueness, just in case.
```
test=# insert into master values (0, 0);
INSERT 0 1
test=# insert into master values (0, 1);
ERROR: duplicate key value violates unique constraint "master_con_id_key"
DETAIL: Key (con_id)=(0) already exists.
test=# insert into master values (1, 0);
ERROR: duplicate key value violates unique constraint "master_unique_idx"
DETAIL: Key (ind_id)=(0) already exists.
test=#
```
It works as expected!
**Foreign keys**
Now we'll define *detail* table with two foreign keys referencing to our two columns in *master*.
```
create table detail (
con_id integer,
ind_id integer,
constraint detail_fk1 foreign key (con_id) references master(con_id),
constraint detail_fk2 foreign key (ind_id) references master(ind_id)
);
Table "public.detail"
Column | Type | Modifiers
--------+---------+-----------
con_id | integer |
ind_id | integer |
Foreign-key constraints:
"detail_fk1" FOREIGN KEY (con_id) REFERENCES master(con_id)
"detail_fk2" FOREIGN KEY (ind_id) REFERENCES master(ind_id)
```
Well, no errors. Let's make sure it works.
```
test=# insert into detail values (0, 0);
INSERT 0 1
test=# insert into detail values (1, 0);
ERROR: insert or update on table "detail" violates foreign key constraint "detail_fk1"
DETAIL: Key (con_id)=(1) is not present in table "master".
test=# insert into detail values (0, 1);
ERROR: insert or update on table "detail" violates foreign key constraint "detail_fk2"
DETAIL: Key (ind_id)=(1) is not present in table "master".
test=#
```
Both columns can be referenced in foreign keys.
**Constraint using index**
You can add table constraint using existing unique index.
```
alter table master add constraint master_ind_id_key unique using index master_unique_idx;
Table "public.master"
Column | Type | Modifiers
--------+---------+-----------
con_id | integer |
ind_id | integer |
Indexes:
"master_con_id_key" UNIQUE CONSTRAINT, btree (con_id)
"master_ind_id_key" UNIQUE CONSTRAINT, btree (ind_id)
Referenced by:
TABLE "detail" CONSTRAINT "detail_fk1" FOREIGN KEY (con_id) REFERENCES master(con_id)
TABLE "detail" CONSTRAINT "detail_fk2" FOREIGN KEY (ind_id) REFERENCES master(ind_id)
```
Now there is no difference between column constraints description.
**Partial indexes**
In table constraint declaration you cannot create partial indexes.
It comes directly from the [definition](http://www.postgresql.org/docs/9.1/static/sql-createtable.html) of `create table ...`.
In a unique index declaration you can add a `WHERE` clause to create a partial index.
You can also [create index](http://www.postgresql.org/docs/9.1/static/sql-createindex.html) on expression (not only on column) and define some other parameters (collation, sort order, NULLs placement).
You **cannot** add table constraint using partial index.
```
alter table master add column part_id integer;
create unique index master_partial_idx on master (part_id) where part_id is not null;
alter table master add constraint master_part_id_key unique using index master_partial_idx;
ERROR: "master_partial_idx" is a partial index
LINE 1: alter table master add constraint master_part_id_key unique ...
^
DETAIL: Cannot create a primary key or unique constraint using such an index.
``` | One more advantage of using `UNIQUE INDEX` vs. `UNIQUE CONSTRAINT` is that you can easily `DROP`/`CREATE` an index [`CONCURRENTLY`](http://www.postgresql.org/docs/9.3/static/sql-createindex.html#SQL-CREATEINDEX-CONCURRENTLY), whereas with a constraint you can't. | Postgres unique constraint vs index | [
"",
"sql",
"postgresql",
"indexing",
"unique",
""
] |
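However the uniqueness is declared, violations surface identically to client code. A small sketch using Python's sqlite3 (used here only as a stand-in — SQLite, like Postgres, enforces a plain unique index just as it does a unique constraint):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE master (con_id INTEGER UNIQUE, ind_id INTEGER)")
conn.execute("CREATE UNIQUE INDEX master_unique_idx ON master (ind_id)")

conn.execute("INSERT INTO master VALUES (0, 0)")
for row in [(0, 1), (1, 0)]:  # each violates one of the two uniqueness rules
    try:
        conn.execute("INSERT INTO master VALUES (?, ?)", row)
    except sqlite3.IntegrityError as exc:
        print(row, "->", exc)  # both inserts are rejected

count = conn.execute("SELECT COUNT(*) FROM master").fetchone()[0]
print(count)  # 1 -- only the first insert survived
```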
I have two tables. An account table and a transactions table. I want to list all accounts with their **last transaction**. Need a left join since cases may exist with no transaction.
I have something like
```
select * from Account ac left join Transaction trans on ac.id = trans.acc_id
```
But that will list all the transaction data. I have no idea how to modify it. | The first left join gets you the last (max) transaction ID for the account. The second left join gets you all the transaction information for that last ID.
```
select ac.*, lasttrans.*
from Account ac
left join (select acc_id, max(id) as id from Transaction group by acc_id) transmax
on ac.id = transmax.acc_id
left join Transaction lasttrans on lasttrans.id = transmax.id
``` | You can try by joining the Account table with only those acc\_ids of your Transaction table that correspond to the maximum ID per acc\_id:
```
select ac.*, t.*
from Account ac
left join (select acc_id, max(id) as id from Transaction group by acc_id) trans
       on ac.id = trans.acc_id
left join Transaction t on trans.id = t.id
``` | SQL Left Outer Join with a One to Many Row | [
"",
"sql",
"oracle",
""
] |
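The double-left-join pattern from the accepted answer — first join to the per-account max(id), then back to the transaction row — can be sanity-checked quickly. A sketch with Python's sqlite3 and made-up data (the table is named `trans` here only because `transaction` is a reserved word in some engines):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE account (id INTEGER);
    CREATE TABLE trans (id INTEGER, acc_id INTEGER, amount REAL);
""")
conn.executemany("INSERT INTO account VALUES (?)", [(1,), (2,), (3,)])
conn.executemany("INSERT INTO trans VALUES (?, ?, ?)",
                 [(10, 1, 5.0), (11, 1, 7.5), (12, 2, 3.0)])  # account 3 has none

rows = conn.execute("""
    SELECT ac.id, lasttrans.id, lasttrans.amount
    FROM account ac
    LEFT JOIN (SELECT acc_id, MAX(id) AS id FROM trans GROUP BY acc_id) tmax
           ON ac.id = tmax.acc_id
    LEFT JOIN trans lasttrans ON lasttrans.id = tmax.id
    ORDER BY ac.id
""").fetchall()
print(rows)  # [(1, 11, 7.5), (2, 12, 3.0), (3, None, None)]
```

The left joins keep account 3, which has no transactions, in the result with NULLs.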
I have created an user defined functions in SQL server. Is it possible to call these function in select statement, just like any other built-in function?
```
Create function [dbo].[TIME_TO_SEC](@inputTime time)
returns INT
As
Begin
DECLARE @secDiff INT = datediff(second, '00:00:00', @inputTime)
return @secDiff;
end
```
Currently calling this as below:
```
DECLARE @ret int;
EXEC TIME_TO_SEC @inputTime='01:00:00'
select @ret;
```
I am attempting something as simple as:
```
select TIME_TO_SEC('01:00:00') from TimeTable
```
Is it feasible in SQL Server? | It's possible, but you need to reference the function with its schema name:
```
select dbo.TIME_TO_SEC('01:00:00') from TimeTable
``` | If you get an issue with your function, try adding "WITH EXECUTE AS CALLER":
```
Create function [dbo].[TIME_TO_SEC](@inputTime time)
Returns int
WITH EXECUTE AS CALLER
As
Begin
DECLARE @secDiff INT = datediff(second, '00:00:00', @inputTime)
return @secDiff;
end
```
Then call the function with a select:
```
select dbo.[TIME_TO_SEC](getdate())
``` | Call User defined function as System Function in SQL server | [
"",
"sql",
"sql-server",
"sql-function",
""
] |
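The same calling convention — a scalar function used inline in a SELECT like any built-in — can be mimicked with Python's sqlite3, where `create_function` registers a user-defined scalar function:

```python
import sqlite3

def time_to_sec(t):
    """Convert an 'HH:MM:SS' string to seconds since midnight."""
    h, m, s = (int(part) for part in t.split(":"))
    return h * 3600 + m * 60 + s

conn = sqlite3.connect(":memory:")
conn.create_function("time_to_sec", 1, time_to_sec)

conn.execute("CREATE TABLE timetable (t TEXT)")
conn.executemany("INSERT INTO timetable VALUES (?)", [("01:00:00",), ("00:01:30",)])

secs = [r[0] for r in
        conn.execute("SELECT time_to_sec(t) FROM timetable ORDER BY rowid")]
print(secs)  # [3600, 90]
```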
I have a table `OrderUser`
```
UserName | OrderNo
-----------------------
User1 | 1
User1 | 2
User1 | 3
User2 | 4
User2 | 5
```
I need only the user with the minimum number of rows, meaning `user2` in this case.
A simple T-SQL statement is required because I will use the code in a `SqlCommand` in C#.
Please help. | ```
SELECT TOP 1 UserName, COUNT(*)
FROM myTable
GROUP BY UserName
ORDER BY COUNT(*) ASC
``` | Try the following code. Its *derived table* (inner `select`)
* summarizes the data by user name
* orders it by `count(*)` in ascending sequence
* then assigns each discrete frequency value a rank (1-N) based on that ordering
The outer `select` then keeps only the row or rows with a rank of 1 (meaning those with the lowest frequency value), ordering them by user name
```
select *
from ( select Rank = rank() over ( order by count(*) ) ,
UserName = t.UserName ,
Frequency = count(*)
from dbo.OrderUser t
group by t.UserName
) summary
where summary.Rank = 1
order by summary.UserName
```
If multiple users share the same lowest frequency, you'll get 1 row back for each such user; otherwise you'll get a single row. | How to find lowest count of rows for a value in SQL Server 2005 | [
"",
"sql",
"sql-server",
""
] |
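The TOP 1 / ORDER BY COUNT(*) ASC idea from the accepted answer translates directly to other engines (LIMIT instead of TOP). A sketch with Python's sqlite3, using the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orderuser (username TEXT, orderno INTEGER)")
conn.executemany("INSERT INTO orderuser VALUES (?, ?)", [
    ("User1", 1), ("User1", 2), ("User1", 3), ("User2", 4), ("User2", 5),
])

# Group, order by frequency ascending, keep the first row.
row = conn.execute("""
    SELECT username, COUNT(*) AS n
    FROM orderuser
    GROUP BY username
    ORDER BY n ASC
    LIMIT 1
""").fetchone()
print(row)  # ('User2', 2)
```

As in the second answer, note that ties for the lowest count would be broken arbitrarily here; a ranking approach returns all tied users.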
I have Query:
```
SELECT Stage1
,Stage2
,Stage3
FROM dbo.Cases
WHERE Number='6913'
```
result:
```
Stage1 Stage2 Stage3
==========================
NULL Settlement Stage1
```
But when I try:
```
SELECT COALESCE(Stage1 ,Stage2 ,Stage3) AS [Exit]
FROM dbo.Cases
WHERE Number = '6913'
```
result still NULL:
```
EXIT
====
NULL
```
Why is it coming out like this? The result is supposed to be 'Settlement'.
Can anybody help? | The only explanation I can think of for that is that you have the string value of NULL in your Stage1 field. See if this query returns your row to determine if that is the case:
```
SELECT Stage1
,Stage2
,Stage3
FROM dbo.Cases
WHERE Number='6913'
AND Stage1 = 'NULL'
```
If it is a string value of NULL you could try this:
```
SELECT COALESCE(NULLIF(Stage1, 'NULL'), NULLIF(Stage2, 'NULL'), NULLIF(Stage3, 'NULL'))
FROM dbo.Cases
WHERE Number='6913'
``` | Check that the value is an actual NULL and not the varchar value 'NULL'.
You can do this with:
```
SELECT * FROM dbo.Cases WHERE Stage1 IS NULL
SELECT * FROM dbo.Cases WHERE Stage1 = 'NULL'
```
And see what comes back. | why my coalesce function not working | [
"",
"sql",
"sql-server-2008",
"t-sql",
""
] |
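The "string 'NULL' versus real NULL" trap above is easy to reproduce. A sketch with Python's sqlite3 showing how wrapping each column in NULLIF rescues COALESCE:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cases (stage1 TEXT, stage2 TEXT, stage3 TEXT)")
# stage1 holds the *string* 'NULL', not an actual NULL
conn.execute("INSERT INTO cases VALUES ('NULL', 'Settlement', 'Stage1')")

# Plain COALESCE happily returns the non-null string 'NULL'.
plain = conn.execute(
    "SELECT COALESCE(stage1, stage2, stage3) FROM cases").fetchone()[0]

# NULLIF turns the literal 'NULL' back into a real NULL first.
fixed = conn.execute("""
    SELECT COALESCE(NULLIF(stage1, 'NULL'),
                    NULLIF(stage2, 'NULL'),
                    NULLIF(stage3, 'NULL'))
    FROM cases
""").fetchone()[0]
print(plain, fixed)  # NULL Settlement
```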
Let's imagine we have a small MySQL database:
**Women:**
* id\_woman
* name
* surname
**women\_shoes:**
* id\_woman (FK)
* id\_shoe (FK)
**Shoes:**
* id\_shoe
* name
**woman\_handbags:**
* id\_woman (FK)
* id\_handbag (FK)
**Handbags:**
* id\_handbag
* name
There are 2 relations many-to-many. What I want to get from such DB is: select all women which have shoes: nike, puma, mango and handbags: versace. I am interested in women who have every presented pair of shoes and every handbag. I know names of stuff and I want to find out the names. | Maybe something like this? if i understand your question correctly.
```
SELECT
r.*
FROM recipe r
JOIN recipe_ingredients ri ON ri.id_recipe = r.id_recipe
JOIN ingredients i ON i.id_ingredient = ri.id_ingredient
JOIN recipes_tags rt ON rt.id_recipe = r.id_recipe
JOIN tags t ON t.id_tag = rt.id_tag
    WHERE (i.name = 'ziemniaki'
        OR i.name = 'cebula')
      AND (t.tag = "tani"
        OR t.tag = "łatwy")
GROUP BY r.id_recipe
HAVING COUNT(r.id_recipe) > 3 -- all 4 of the criteria have been met
;
```
see working [FIDDLE](http://sqlfiddle.com/#!9/ad5c0/23) for clarification
Basically what this does is it returns a row when one of the four criteria is met. along with that it also will only return recipes that have at least one of the ingredients and at least one of the tags. so when 4 (or more) rows are returned then the criteria is met for a recipe with the requested params | Since the 2nd criteria only has 1 handbag you don't really need to select from a subquery but I figured it might change to multiple handbags.
```
SELECT
w.*
FROM women w
JOIN
(
SELECT
id_woman
FROM woman_shoes ws
JOIN shoes s ON ws.id_shoe = s.id_shoe
WHERE s.name IN ('puma','nike','mango')
GROUP BY id_woman
HAVING COUNT(*) = 3
    ) hs ON hs.id_woman = w.id_woman
JOIN
(
SELECT
id_woman
FROM woman_handbags wh
JOIN handbags h ON wh.id_handbag = h.id_handbag
WHERE h.name IN ('versace')
GROUP BY id_woman
HAVING COUNT(*) = 1
    ) hb ON hb.id_woman = w.id_woman
``` | Select in many to many relations in MySQL with multiple conditions | [
"",
"mysql",
"sql",
"many-to-many",
""
] |
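The HAVING COUNT = <number of required items> pattern (relational division) is the core of both answers above. A minimal sketch with Python's sqlite3 (shoes only, with made-up ownership data), finding the women who own every listed shoe:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE shoes (id_shoe INTEGER, name TEXT);
    CREATE TABLE women_shoes (id_woman INTEGER, id_shoe INTEGER);
""")
conn.executemany("INSERT INTO shoes VALUES (?, ?)",
                 [(1, "nike"), (2, "puma"), (3, "mango")])
# woman 1 owns all three, woman 2 owns only two
conn.executemany("INSERT INTO women_shoes VALUES (?, ?)",
                 [(1, 1), (1, 2), (1, 3), (2, 1), (2, 2)])

# Keep only women matching ALL three required shoe names.
owners = [r[0] for r in conn.execute("""
    SELECT ws.id_woman
    FROM women_shoes ws JOIN shoes s ON ws.id_shoe = s.id_shoe
    WHERE s.name IN ('nike', 'puma', 'mango')
    GROUP BY ws.id_woman
    HAVING COUNT(DISTINCT s.name) = 3
""")]
print(owners)  # [1]
```

COUNT(DISTINCT ...) guards against an owner being counted twice for the same item.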
It's my task to count the total number of absent days for each employee.
I've written the following query:
```
SELECT name
,surname
,absentdays
FROM (
SELECT name
,surname
,DATEDIFF("d", fromdate, todate) AS absentdays
FROM DiseaseReport D
, Employees E
WHERE E.number = D.empnumber
)
```
Although this query works just fine, it doesn't give me the 'TOTAL' number of absent days per employee. Instead I get multiple rows (where applicable) per employee.
I tried applying a SUM(absentdays) or a GROUP BY 'name', but I'm guessing these aren't options since the 'absentdays' column is generated.
Can you guys give me an alternative solution for this 'challenge'? | There's nothing wrong with applying a function (in your case, `sum`) to a "generated" column:
```
SELECT name,
surname,
SUM(absentdays) AS total_absent_days
FROM (SELECT name,
surname,
DATEDIFF("d", fromdate, todate) AS absentdays
FROM DiseaseReport D, Employees E
WHERE E.number = D.empnumber
      ) AS t
GROUP BY name, surname
``` | Why have a redundant select statement when you can just use your inner select?
Just delete everything other than
```
SELECT name
,surname
,DATEDIFF("d", fromdate, todate) AS absentdays
FROM DiseaseReport D
, Employees E
WHERE E.number = D.empnumber ;
``` | SQL SUM on 'generated' column | [
"",
"mysql",
"sql",
"select",
"sum",
""
] |
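Summing an alias computed in a derived table works the same way in most engines. A sketch with Python's sqlite3 (julianday stands in for DATEDIFF, and the sample data is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE report (name TEXT, fromdate TEXT, todate TEXT)")
conn.executemany("INSERT INTO report VALUES (?, ?, ?)", [
    ("Ann", "2014-01-01", "2014-01-04"),   # 3 days
    ("Ann", "2014-02-01", "2014-02-03"),   # 2 days
    ("Bob", "2014-03-01", "2014-03-02"),   # 1 day
])

# SUM over the 'generated' absentdays column from the derived table.
rows = conn.execute("""
    SELECT name, SUM(absentdays) AS total
    FROM (SELECT name,
                 CAST(julianday(todate) - julianday(fromdate) AS INTEGER)
                     AS absentdays
          FROM report) AS t
    GROUP BY name
    ORDER BY name
""").fetchall()
print(rows)  # [('Ann', 5), ('Bob', 1)]
```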
I have a table with a `varchar` column, PaymentRef, containing a variety of text and numeric data. Furthermore, there's a column, CurrencyAmount of datatype `float`:
```
PaymentRef CurrencyAmount
---------- --------------
EUR 100,00 100
EUR 50,00 50
USD 25,00 25.2
Auth#: 98103 NULL
Auth#: 98104 NULL
Transferred from 2356 NULL
Transferred to 1356 NULL
```
Now whenever a record contains a PaymentRef on the form "EUR ##,##", "USD ##,##", etc. I need to compare the numeric value after the currency code, to the value of the CurrencyAmount column.
The following query works just fine:
```
SELECT CAST(REPLACE(SUBSTRING(PaymentRef, 5, 100), ',', '.') AS FLOAT) AS PaymentRefNumeric,
CurrencyAmount
FROM MyTable
WHERE LEFT(PaymentRef, 3) IN ('EUR', 'USD', 'SEK')
```
However, if I try to compare the casted value to the CurrencyAmount column, as in:
```
WITH CTE AS (
SELECT CAST(REPLACE(SUBSTRING(PaymentRef, 5, 100), ',', '.') AS FLOAT) AS PaymentRefNumeric,
CurrencyAmount
FROM MyTable
WHERE LEFT(PaymentRef, 3) IN ('EUR', 'USD', 'SEK')
)
SELECT * FROM CTE WHERE PaymentRefNumeric <> CurrencyAmount
```
I get the following error:
```
Msg 8114, Level 16, State 5, Line 2
Error converting data type varchar to float.
```
I know that I can easily put the result of the first query into a temporary table, and then perform my comparison, but I'm wondering what the cause of this error is. I believe it has something to do with the query optimizer attempting to CAST the PaymentRef values to FLOAT, before applying the first WHERE filter. Is there a way to ensure the filter is applied before the values are cast? (That's why I tried using CTEs, but even without CTEs I still get the same error.) | Here's a workaround, as described in my comment:
```
WITH CTE AS (
SELECT case when LEFT(PaymentRef, 3) IN ('EUR', 'USD', 'SEK')
then CAST(REPLACE(SUBSTRING(PaymentRef, 5, 100), ',', '.') AS FLOAT)
else null
end AS PaymentRefNumeric,
CurrencyAmount
FROM #t WHERE LEFT(PaymentRef, 3) IN ('EUR', 'USD', 'SEK')
)
SELECT * FROM CTE WHERE PaymentRefNumeric <> CurrencyAmount
``` | Can't you change the where condition of the CTE to
```
WHERE CurrencyAmount IS NOT NULL
```
? Your sample data looks like that would be OK, and the query works with the changed condition.
You could also change the other where condition to
```
WHERE currencyamount is not null and PaymentRefNumeric <> CurrencyAmount
``` | Error converting to float ignoring WHERE filter | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
There is a table `likes`:
```
like_user_id | like_post_id | like_date
----------------------------------------
1 | 2 | 1399274149
5 | 2 | 1399271149
....
1 | 3 | 1399270129
```
I need to make **one** `SELECT` query that counts records for a specific `like_post_id`, grouped into periods of 1 day, 7 days, 1 month, and 1 year.
The result must be like:
```
period | total
---------------
1_day | 2
7_days | 31
1_month | 87
1 year | 141
```
Is it possible?
Thank you. | I have created a query in Oracle syntax; please adapt it to your DB.
```
select '1_Day' as period , count(*) as Total
from likes
where like_date>(sysdate-1)
union
select '7_days' , count(*)
from likes
where like_date>(sysdate-7)
union
select '1_month' , count(*)
from likes
where like_date>(sysdate-30)
union
select '1 year' , count(*)
from likes
where like_date>(sysdate-365)
```
The idea is to have a single subquery per period and apply the matching date filter in each WHERE clause. | This code shows how to build a cross-tab style query, which is likely what you need. It aggregates by like\_post\_id, and you may want to put restrictions on it. Further, regarding the last month, I don't know whether you mean month to date, the last 30 days, or the last calendar month, so I've left that to you.
```
SELECT
like_post_id,
-- cross-tab example, rinse and repeat as required
-- aside of date logic, the SUM(CASE logic is designed to be ANSI compliant but you could use IF instead of CASE
SUM(CASE WHEN FROM_UNIXTIME(like_date)>=DATE_SUB(CURRENT_DATE(), interval 1 day) THEN 1 ELSE 0 END) as 1_day,
...
FROM likes
-- to restrict the number of rows considered
WHERE FROM_UNIXTIME(like_date)>=DATE_SUB(CURRENT_DATE(), interval 1 year)
GROUP BY like_post_id
``` | Is it possible to group by a few different date periods in mysql? | [
"",
"mysql",
"sql",
"database",
""
] |
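Since like_date is a Unix timestamp, the cross-tab idea from the second answer can be tested with plain integer arithmetic. A sketch with Python's sqlite3; the reference time is hard-coded purely so the result is deterministic (an assumption — in practice you would use the current time):

```python
import sqlite3

NOW = 1_000_000_000  # fixed reference point (a Unix timestamp)
DAY = 86_400

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE likes (like_post_id INTEGER, like_date INTEGER)")
conn.executemany("INSERT INTO likes VALUES (1, ?)", [
    (NOW - DAY // 2,),    # within 1 day
    (NOW - 3 * DAY,),     # within 7 days
    (NOW - 20 * DAY,),    # within 1 month (30 days)
    (NOW - 200 * DAY,),   # within 1 year
])

# One pass over the table; each SUM counts rows inside one window.
row = conn.execute("""
    SELECT SUM(like_date >= ? - 1   * 86400) AS d1,
           SUM(like_date >= ? - 7   * 86400) AS d7,
           SUM(like_date >= ? - 30  * 86400) AS m1,
           SUM(like_date >= ? - 365 * 86400) AS y1
    FROM likes
    WHERE like_post_id = 1
""", (NOW, NOW, NOW, NOW)).fetchone()
print(row)  # (1, 2, 3, 4)
```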
I have a table like this
```
id code text
100 1 short description 100
100 2 long descr 100
101 1 short description 101
```
Now I want to fetch data from the `text` field depending on the value in the `code` field: if the code is `1` it's the short description, and if the code is `2` it's the long description.
My desired output is like this:
```
id shortdescription longdescription
100 short description 100 long descr 100
101 short description 101 null
``` | Try this:
```
SELECT id,
MAX(CASE WHEN code=1 THEN text END) as shortdescription,
MAX(CASE WHEN code=2 THEN text END) as longdescription
FROM TableName
GROUP BY id
```
Result in [**SQL Fiddle**](http://www.sqlfiddle.com/#!3/42181/1).
This query is the same as:
```
SELECT id,MAX(shortdescription) as shortdescription,MAX(longdescription) as longdescription
FROM
(
SELECT id,
CASE WHEN code=1 THEN text END as shortdescription,
CASE WHEN code=2 THEN text END as longdescription
FROM TableName
) T
GROUP BY id
```
**NB:** To understand the working of this query, please execute the inner query first.
Result:
```
ID SHORTDESCRIPTION LONGDESCRIPTION
100 short description 100 long descr 100
101 short description 101 (null)
```
See result in [**SQL Fiddle**](http://www.sqlfiddle.com/#!3/c947e/1). | I think the simplest approach is to treat this as conditional aggregation . . . put the `case` statement as an argument to the `max()` function:
```
select id,
max(case when code = 1 then text end) as ShortDescription,
max(case when code = 2 then text end) as LongDescription
from table t
group by id;
``` | Get Value from column depending on another column value in SQL | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have database on Sql Server 2008 R2.
On that database, a delete query on 400 million records has been running for 4 days, but I need to reboot the machine. How can I force it to commit whatever has been deleted so far? I want the rows already deleted by the running query to stay deleted.
But problem is that query is still running and will not complete before the server reboot.
Note: I have not set any isolation level or begin/end transaction for the query. The query is running in SSMS.
If the machine reboots or I cancel the query, the database will go into recovery mode and keep recovering for the next 2 days; then I will need to re-run the delete and it will cost me another 4 days.
I really appreciate any suggestion / help or guidance in this.
I am novice user of sql server.
Thanks in Advance
Regards | There is no way to stop SQL Server from trying to bring the database into a transactionally consistent state. Every single statement is implicitly a transaction itself (if not part of an outer transaction) and executes either completely or not at all. So if you cancel the query, disconnect, or reboot the server, SQL Server will use the transaction log to write the original values back to the updated data pages.
Next time you need to delete so many rows, don't do it in one statement. Divide the job into smaller chunks (I always use 5,000 as a magic number, meaning I delete 5,000 rows at a time in the loop) to minimize transaction log use and locking.
```
set rowcount 5000
delete table
while @@rowcount = 5000
delete table
set rowcount 0
``` | If you are deleting that many rows, you may have a better time with truncate. Truncate deletes all rows from the table very efficiently. However, I'm assuming that you would like to keep some of the records in the table. The stored procedure below backs up the data you would like to keep into a temp table, truncates the table, and then re-inserts the saved records. This can clean a huge table very quickly.
Note that truncate doesn't play well with Foreign Key constraints so you may need to drop those then recreate them after cleaned.
```
CREATE PROCEDURE [dbo].[deleteTableFast] (
@TableName VARCHAR(100),
@WhereClause varchar(1000))
AS
BEGIN
-- input:
-- table name: is the table to use
-- where clause: is the where clause of the records to KEEP
declare @tempTableName varchar(100);
set @tempTableName = @tableName+'_temp_to_truncate';
-- error checking
if exists (SELECT [Table_Name] FROM Information_Schema.COLUMNS WHERE [TABLE_NAME] =(@tempTableName)) begin
print 'ERROR: already temp table ... exiting'
return
end
if not exists (SELECT [Table_Name] FROM Information_Schema.COLUMNS WHERE [TABLE_NAME] =(@TableName)) begin
print 'ERROR: table does not exist ... exiting'
return
end
-- save wanted records via a temp table to be able to truncate
exec ('select * into '+@tempTableName+' from '+@TableName+' WHERE '+@WhereClause);
exec ('truncate table '+@TableName);
exec ('insert into '+@TableName+' select * from '+@tempTableName);
exec ('drop table '+@tempTableName);
end
GO
``` | How to force a running t-sql query (half done) to commit? | [
"",
"sql",
"sql-server",
""
] |
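The chunked-delete idea from the answer above is easy to prototype outside SQL Server. Below is a minimal sketch in Python with SQLite (the table name, batch size, and the rowid-batch trick are illustrative assumptions; SQLite has no `SET ROWCOUNT`, so each loop iteration deletes one small batch in its own transaction):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE big_log (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO big_log (payload) VALUES (?)",
                 [("row %d" % i,) for i in range(12)])

BATCH = 5  # tiny batch size for the demo; the answer suggests ~5,000
deleted = 0
while True:
    cur = conn.execute(
        "DELETE FROM big_log WHERE id IN (SELECT id FROM big_log LIMIT ?)",
        (BATCH,))
    if cur.rowcount == 0:
        break
    deleted += cur.rowcount
    conn.commit()  # each batch commits separately, keeping the log small

print(deleted)  # 12
print(conn.execute("SELECT COUNT(*) FROM big_log").fetchone()[0])  # 0
```

The same shape carries over to T-SQL: loop while the last batch was full, committing between batches.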
In a SQL Server table with `Item`, `Supplier`, `Value` and `Date` columns, which holds records of all item purchases, I need, per item, only the record with the minimum value over the last 180 days.
A sample table with only one Item:
```
Item | Supplier | Value | Date
---------------------------------------
123 | 28 | 115 | 2013-09-25
123 | 25 | 125 | 2013-11-30
123 | 30 | 120 | 2014-01-15
123 | 25 | 130 | 2014-04-30
```
The query result should be:
```
123 | 30 | 120 | 2014-01-15
```
It is possible to have records that seem to be duplicated, because there can be more than one origin document from the same supplier and date with the same Item and Value. I only need one of these records.
How can I get it? | Use a ranking function like `Row_Number`:
```
WITH CTE AS
(
SELECT Item, Supplier,Value, Date,
RN = ROW_NUMBER() OVER (PARTITION BY Item
ORDER BY Supplier DESC)
FROM dbo.TableName
WHERE DATEDIFF(day, Date, GetDate()) <= 180
)
SELECT Item, Supplier,Value, Date
FROM CTE
WHERE RN = 1
```
[**Demo**](http://sqlfiddle.com/#!6/6b3bc/12/0)
However, it's not that clear what you want of the duplicates or what a duplicate is at all. I have presumed that the `Item` determines a duplicate and that the maximum `Supplier` is taken from each duplicate group.
If you want to use all columns to determine a duplicate your desired result would contain multiple rows because there is no duplicate at all. | Another simple way of achieving this:
```
SELECT TOP 1 * FROM TableName
WHERE DATEDIFF(day, Date, GetDate()) <= 180
ORDER BY VALUE
```
[Demo](http://sqlfiddle.com/#!6/6b3bc/20) | Multiple aggregate functions | [
"",
"sql",
"sql-server",
""
] |
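For a quick sanity check of the "one row per `Item` at the minimum `Value`" idea above, here is a portable join-back sketch in Python with SQLite, using the sample rows from the question (the fixed date filter is a stand-in for the 180-day window):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE purchases (Item INT, Supplier INT, Value INT, Date TEXT)")
conn.executemany("INSERT INTO purchases VALUES (?,?,?,?)", [
    (123, 28, 115, "2013-09-25"),
    (123, 25, 125, "2013-11-30"),
    (123, 30, 120, "2014-01-15"),
    (123, 25, 130, "2014-04-30"),
])

# join each item's rows back to its minimum value within the window
rows = conn.execute("""
    SELECT p.Item, p.Supplier, p.Value, p.Date
    FROM purchases p
    JOIN (SELECT Item, MIN(Value) AS mv
          FROM purchases
          WHERE Date >= '2013-11-01'      -- stand-in for the 180-day window
          GROUP BY Item) m
      ON m.Item = p.Item AND m.mv = p.Value
""").fetchall()
print(rows)  # [(123, 30, 120, '2014-01-15')]
```

This matches the expected result from the question; with duplicates on the minimum value you would still need a tie-breaker such as `ROW_NUMBER`.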
I'm beyond stuck with some SQL questions on my Bsc. CS revision exam paper - I know you all dislike answering academic questions but this is my last resort (and revision for Monday's exam) and I have no idea how to go about it.
I have 2 tables:
```
Department (deptId: string, deptName: string, managerId: string)
Employee (empId: string, empName: string, jobName: string, salary: integer, deptId: string)
```
and 2 queries I need to run:
1) Display the name and the id of the department that has the largest number of
employees
2) Display the names of the departments whose employees' average salary is at least
40000
I believe I need to use `join` and `having` here but those are things I can't quite wrap my big head around.
So my question is, how would I write these queries? I don't need an answer, per se, but an explanatory method to **know** how to achieve the objective. Again, this is not academic work but revision for a final exam. | The first thing you must work out is what connects the 2 tables and how.
If you dig into the table structure:
```
Department
(deptId: string, deptName: string, managerId: string)
Employee
(empId: string, empName: string, jobName: string, salary: integer, deptId: string)
```
you can see that `deptId` is the common field, and thus the joining key for those tables.
(Probably the primary key for Department and a foreign key for Employee.)
If it is indeed a primary key, then a string is a reasonable `datatype` selection.
Now you can move on to writing the `join` query.
I won't give the exact solution to your problem; I'm just giving you the syntax for a join statement:
```
SELECT * FROM Table1 JOIN Table2 ON Table1.samefield = Table2.samefield WHERE <condition>
``` | # 1
Use [Rank](https://stackoverflow.com/questions/1466698/how-should-i-handle-ranked-x-out-of-y-data-in-postgresql) and just take the rows where the rank = 1. For simplicity, try creating a subquery: group by Department ID and get a count of employees.
# 2 - yes, you need having
```
Select
    d.deptid, d.deptName, AVG(e.salary)
from Department as d
inner join Employee as e
on d.deptId = e.deptId
group by d.deptid, d.deptName
having AVG(e.salary) >= 40000
``` | SQL Join technique | [
"",
"sql",
"postgresql",
""
] |
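A tiny way to check the `GROUP BY`/`HAVING` pattern for question 2 above is to run it against toy data; here is a Python + SQLite sketch (the sample departments, salaries, and ids are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Department (deptId TEXT, deptName TEXT, managerId TEXT);
CREATE TABLE Employee (empId TEXT, empName TEXT, jobName TEXT,
                       salary INTEGER, deptId TEXT);
INSERT INTO Department VALUES ('d1','Sales','m1'), ('d2','IT','m2');
INSERT INTO Employee VALUES
  ('e1','Ann','rep',30000,'d1'),
  ('e2','Bob','rep',35000,'d1'),
  ('e3','Cid','dev',50000,'d2'),
  ('e4','Dee','dev',45000,'d2');
""")

# HAVING filters on the aggregate after grouping, unlike WHERE
rows = conn.execute("""
    SELECT d.deptName, AVG(e.salary)
    FROM Department d
    JOIN Employee e ON e.deptId = d.deptId
    GROUP BY d.deptId, d.deptName
    HAVING AVG(e.salary) >= 40000
""").fetchall()
print(rows)  # [('IT', 47500.0)]
```

Sales (average 32,500) is filtered out by `HAVING`, which is exactly the behavior the exam question asks for.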
I have a table that contains color options for a product. The color options include a hex color code, which is used to generate the UI (HTML).
I would like to sort the rows so that the colors in the UI look like a rainbow, instead of the current order that sorts based off of the Name of the color (not very useful).
Here is what my query looks like. I get the R G B decimal values from the hex code. I just don't know how to order it.
I've looked into color difference algorithms. They seem more useful to compare 2 colors' similarity, not sort.
I'm using MySQL:
```
select a.*, (a.c_r + a.c_g + a.c_b) color_sum
from (
select co.customization_option_id,
co.designer_image_url,
concat(co.name, " (",cog.name, ")") name,
co.customization_option_group_id gr,
designer_hex_color,
conv(substr(designer_hex_color, 1, 2), 16, 10) c_r,
conv(substr(designer_hex_color, 3, 2), 16, 10) c_g,
conv(substr(designer_hex_color, 5, 2), 16, 10) c_b
from customization_options co
left join customization_option_groups cog
on cog.id = co.customization_option_group_id
where co.customization_id = 155
and co.customization_option_group_id
in (1,2,3,4)) a
order by ????
``` | You want to sort hex codes by wavelength; this roughly maps onto the hue value. Given a hex code as a six-character string: `RRGGBB`.
You just need to make a function that takes in a hex code string and outputs the hue value; here's the formula [from this Math.SO answer](https://math.stackexchange.com/questions/556341/rgb-to-hsv-color-conversion-algorithm):
R' = R/255
G' = G/255
B' = B/255
Cmax = max(R', G', B')
Cmin = min(R', G', B')
Δ = Cmax - Cmin

I wanted to see if this would work, so I whipped up a sample program in Ruby. It samples 200 random colors uniformly from RGB space and sorts them, and the output looks like a rainbow!
Here's the Ruby source:
```
require 'paint'
def hex_to_rgb(hex)
/(?<r>..)(?<g>..)(?<b>..)/ =~ hex
[r,g,b].map {|cs| cs.to_i(16) }
end
def rgb_to_hue(r,g,b)
# normalize r, g and b
r_ = r / 255.0
g_ = g / 255.0
b_ = b / 255.0
c_min = [r_,g_,b_].min
c_max = [r_,g_,b_].max
delta = (c_max - c_min).to_f
# compute hue
hue = 60 * ((g_ - b_)/delta % 6) if c_max == r_
hue = 60 * ((b_ - r_)/delta + 2) if c_max == g_
hue = 60 * ((r_ - g_)/delta + 4) if c_max == b_
return hue
end
# sample uniformly at random from RGB space
colors = 200.times.map { (0..255).to_a.sample(3).map { |i| i.to_s(16).rjust(2, '0')}.join }
# sort by hue
colors.sort_by { |color| rgb_to_hue(*hex_to_rgb(color)) }.each do |color|
puts Paint[color, color]
end
```
Note, make sure to `gem install paint` to get the colored text output.
Here's the output:

It should be relatively straightforward to write this as a SQL user-defined function and `ORDER BY RGB_to_HUE(hex_color_code)`; however, my SQL knowledge is pretty basic.
EDIT: I posted [this question on dba.SE](https://dba.stackexchange.com/q/64264/5595) about converting the Ruby to a SQL user defined function. | This is based on the [answer by @dliff](https://stackoverflow.com/questions/23397594/sql-order-by-color-hex-code/23620824#23620824). I initially edited it, but it turns out my edit [was rejected](https://stackoverflow.com/review/suggested-edits/14009328) saying "*it should have been written as a comment or an answer*". Seeing this would be too large to post as a comment, here goes.
The reason for editing (and now posting) is this: there seems to be a problem with colors like 808080 because their R, G and B channels are equal. If one needs this to sort or group colors and keep the passed grayscale/non-colors separate, that answer won't work, so I edited it.
```
DELIMITER $$
DROP FUNCTION IF EXISTS `hex_to_hue`$$
CREATE FUNCTION `hex_to_hue`(HEX VARCHAR(6)) RETURNS FLOAT
BEGIN
DECLARE r FLOAT;
DECLARE b FLOAT;
DECLARE g FLOAT;
DECLARE MIN FLOAT;
DECLARE MAX FLOAT;
DECLARE delta FLOAT;
DECLARE hue FLOAT;
IF(HEX = '') THEN
RETURN NULL;
END IF;
SET r = CONV(SUBSTR(HEX, 1, 2), 16, 10)/255.0;
SET g = CONV(SUBSTR(HEX, 3, 2), 16, 10)/255.0;
SET b = CONV(SUBSTR(HEX, 5, 2), 16, 10)/255.0;
SET MAX = GREATEST(r,g,b);
SET MIN = LEAST(r,g,b);
SET delta = MAX - MIN;
SET hue=
(CASE
WHEN MAX=r THEN (60 * ((g - b)/delta % 6))
WHEN MAX=g THEN (60 * ((b - r)/delta + 2))
WHEN MAX=b THEN (60 * ((r - g)/delta + 4))
ELSE NULL
END);
IF(ISNULL(hue)) THEN
SET hue=999;
END IF;
RETURN hue;
END$$
DELIMITER ;
```
Again, I initially wanted to edit the original answer, not post as a separate one. | ORDER BY Color with Hex Code as a criterion in MySQL | [
"",
"mysql",
"sql",
"sql-order-by",
"similarity",
""
] |
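The hue formula used in both answers above ports almost line-for-line to other languages. A small Python sketch (the `999` fallback for greys follows the convention of the second answer; everything else mirrors the formula from the accepted one):

```python
def hex_to_hue(hex_code):
    """Map an RRGGBB string to a hue in degrees; greys get 999 (sorted last)."""
    r, g, b = (int(hex_code[i:i + 2], 16) / 255.0 for i in (0, 2, 4))
    c_max, c_min = max(r, g, b), min(r, g, b)
    delta = c_max - c_min
    if delta == 0:          # R == G == B: a grey, hue is undefined
        return 999
    if c_max == r:
        return 60 * (((g - b) / delta) % 6)
    if c_max == g:
        return 60 * ((b - r) / delta + 2)
    return 60 * ((r - g) / delta + 4)

colors = ["0000ff", "808080", "00ff00", "ff0000"]
print(sorted(colors, key=hex_to_hue))
# ['ff0000', '00ff00', '0000ff', '808080']
```

Sorting by this key gives the red-to-blue rainbow order, with greys pushed to the end, matching the MySQL function's behavior.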
I have a PostgreSQL database and want to insert the same value for multiple records based on record IDs I have.
Is there a way to make a `WHERE` condition in the `INSERT` statement? For example:
```
insert into Customers (new-customer) values ('t') where customer_id in (list)
``` | Yes, you can do something like:
```
INSERT INTO customers(customer_id, customer_name)
select 13524, 'a customer name'
where 13524 = ANY ('{13524,5578,79654,5920}'::BIGINT[])
```
Here, a customer with id `13524` will be added because its ID is in the list: `{13524,5578,79654,5920}`
I hope that's what you are looking for! | To insert a row for every id in your list, you can use [`unnest()`](http://www.postgresql.org/docs/current/interactive/functions-array.html#ARRAY-FUNCTIONS-TABLE) to produce a set of rows:
```
INSERT INTO customers(customer_id, column1)
SELECT id, 't'
FROM unnest ('{123,456,789}'::int[]) id;
```
If you misspoke and actually meant to `UPDATE` existing rows:
```
UPDATE customers
SET column1 = 't'
WHERE customer_id = ANY ('{123,456,789}'::int[]);
``` | Insert same values for multiple records based on list of ids | [
"",
"sql",
"postgresql",
"sql-insert",
"unnest",
""
] |
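Since the second answer above points out this is really an `UPDATE` over an id list, here is a minimal Python + SQLite sketch (the table and column names are made up; the placeholder string is built to match the length of the id list):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, new_customer TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, 'f')", [(i,) for i in (1, 2, 3, 4, 5)])

ids = [1, 3, 5]
placeholders = ",".join("?" * len(ids))          # builds "?,?,?" safely
cur = conn.execute(
    f"UPDATE customers SET new_customer = 't' WHERE customer_id IN ({placeholders})",
    ids)
print(cur.rowcount)  # 3
print(conn.execute(
    "SELECT customer_id FROM customers WHERE new_customer = 't' ORDER BY customer_id"
).fetchall())
# [(1,), (3,), (5,)]
```

The `IN (...)` list plays the role of PostgreSQL's `= ANY ('{...}'::int[])` from the answer.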
I have a table that manages conversations of a chat between users; the structure is the following.
```
id | user_id | conversation_id
```
Let's say that in the conversation with ID 1 there are 3 people chatting, and in the conversation with ID 2 there are 2 people.
Conversations\_users table will look like this
```
id | user_id | conversation_id
1 1 1
2 2 1
3 4 1
4 3 2
5 4 2
```
Now, having only the ids of users `3` and `4` and **not the conversation ID**, I would like to select the conversation that belongs to those users, so a verbal query would be:
`Select from conversations_users, where in user_id = 3 and 4 and conversation_id is equals to conversation id of user 3 and 4`
How can I build this "verbal query" in MySQL? | Here is one method:
```
select uc.conversation_id
from UserConversions uc
where uc.user_id in (3, 4)
group by uc.conversation_id
having count(*) = 2;
```
If the table could have duplicates, you'll want: `having count(distinct user_id) = 2`.
EDIT:
If you want a specific list, just move the `where` condition to the `having` clause:
```
select cu.conversation_id
from conversations_users cu
group by cu.conversation_id
having sum(cu.user_id in (3, 4)) = 2 and
sum(cu.user_id not in (3, 4)) = 0;
``` | to get all the users in the conversations that user 3 and 4 are part of you could use this:
```
select distinct(user_id) from conversation_table where conversation_id in (select distinct(conversation_id) from conversation_table where user_id in (3,4));
```
it won't be very fast though
to get their actual conversations, I'm assuming you have a different table with the text in it:
you probably want something like this
```
select distinct(u.user_id), c.text from conversation_table u left join conversations c on c.id=u.conversation_id where u.conversation_id in (select distinct(conversation_id) from conversation_table where user_id in (3,4));
```
here is an [sqlfiddle](http://sqlfiddle.com/#!2/63cb3/3) | select rows in mysql having another column with same value | [
"",
"mysql",
"sql",
""
] |
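Both variants from the accepted answer above can be checked against the sample data in the question; a Python + SQLite sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE conversations_users (id INT, user_id INT, conversation_id INT)")
conn.executemany("INSERT INTO conversations_users VALUES (?,?,?)", [
    (1, 1, 1), (2, 2, 1), (3, 4, 1), (4, 3, 2), (5, 4, 2),
])

# conversations that contain BOTH user 3 and user 4 (others may also be present)
both = conn.execute("""
    SELECT conversation_id
    FROM conversations_users
    WHERE user_id IN (3, 4)
    GROUP BY conversation_id
    HAVING COUNT(DISTINCT user_id) = 2
""").fetchall()
print(both)  # [(2,)]

# conversations whose members are EXACTLY users 3 and 4
exact = conn.execute("""
    SELECT conversation_id
    FROM conversations_users
    GROUP BY conversation_id
    HAVING SUM(user_id IN (3, 4)) = 2
       AND SUM(user_id NOT IN (3, 4)) = 0
""").fetchall()
print(exact)  # [(2,)]
```

Conversation 1 fails both tests because user 3 is not in it; `SUM(condition)` works here because the boolean evaluates to 0/1, as in MySQL.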
I have a database, with a table for venues and bookings. The relationship is 1:M.
I want to select all venues that don't have a booking on a specific date. The date field is called booking_date and is present on the bookings table. I'm not the strongest in SQL; I found NOT EXISTS, but that seems to give me either no venues or all the venues, depending on whether the date is present in just one of the booking_date fields.
To sum up, what I need is a query that selects all venues that don't have a booking with a booking_date field = ?.
Venues table:
id, int
name, string
other unimportant fields
Bookings table:
id, int
customer_id, int
venue_id, int
booking_date, date
So a venue belongs to a booking. I want all venues that don't have a booking where the booking_date field is equal to a specified date.
For instance, if I have 5 venues, one of which has a booking on 2014-06-09, and I supply that date, I want the other 4 venues that don't have a booking on this date.
If anyone is interested, the use for this is to show the venues that are available on a given date that the users specify. | `NOT EXISTS` sounds like exactly what you need:
```
SELECT *
FROM Venues V
WHERE NOT EXISTS(SELECT 1 FROM bookings
WHERE booking_date = 'SomeDate'
                 AND venue_id = V.id)
``` | I would take care of this in the WHERE (making some assumptions on your tables):
```
DECLARE @DateCheck date = '2014-05-09';
SELECT
*
FROM
Venues
WHERE
VenueId NOT IN
(
SELECT
VenueId
FROM
Bookings
WHERE
BookingDate = @DateCheck
);
``` | SQL: Selecting all from a table, where related table, doesn't have a specified field value | [
"",
"sql",
"select",
"relationship",
""
] |
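The accepted `NOT EXISTS` pattern above can be verified on a toy copy of the schema; a Python + SQLite sketch matching the 5-venues example from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE venues (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE bookings (id INTEGER PRIMARY KEY, customer_id INT,
                       venue_id INT, booking_date TEXT);
INSERT INTO venues (name) VALUES ('A'), ('B'), ('C'), ('D'), ('E');
INSERT INTO bookings (customer_id, venue_id, booking_date)
VALUES (7, 2, '2014-06-09');
""")

# correlated subquery: a venue survives only if no booking exists for the date
free = conn.execute("""
    SELECT v.name
    FROM venues v
    WHERE NOT EXISTS (SELECT 1 FROM bookings b
                      WHERE b.booking_date = ?
                        AND b.venue_id = v.id)
    ORDER BY v.name
""", ("2014-06-09",)).fetchall()
print(free)  # [('A',), ('C',), ('D',), ('E',)]
```

Venue `B` (id 2) is booked on that date, so the other four come back, which is exactly the availability query the question describes.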
I'm trying to count many different values in one column in one table, all in one query.
**EDIT:** The query now works with suggestions by @Red Edit to move the aliases outside of each select and by @dnoeth to move the 'from' line. Thanks! I have one more question, which I will add as a comment.
```
select
m1.mods_in_study,
(select count(*)
from dbo.study as m2
where m2.mods_in_study like '%CT%'
and study_datetime >= DATEADD(day, datediff(day, 1, getdate()), 0)
and study_datetime < DATEADD(day, datediff(day, 0, getdate()), 0)) as CT,
(select count(*)
from dbo.study as m3
where m3.mods_in_study like '%MR%'
and study_datetime >= DATEADD(day, datediff(day, 1, getdate()), 0)
and study_datetime < DATEADD(day, datediff(day, 0, getdate()), 0)) as MR,
(select count(*)
from dbo.study as m4
where m4.mods_in_study like '%CR%'
and study_datetime >= DATEADD(day, datediff(day, 1, getdate()), 0)
and study_datetime < DATEADD(day, datediff(day, 0, getdate()), 0)) as CR,
(select count(*)
from dbo.study as m5
where m5.mods_in_study like '%DX%'
and study_datetime >= DATEADD(day, datediff(day, 1, getdate()), 0)
and study_datetime < DATEADD(day, datediff(day, 0, getdate()), 0)) as DX,
(select count(*)
from dbo.study as m6
where m6.mods_in_study like '%US%'
and study_datetime >= DATEADD(day, datediff(day, 1, getdate()), 0)
and study_datetime < DATEADD(day, datediff(day, 0, getdate()), 0)) as US,
(select count(*)
from dbo.study as m7
where m7.mods_in_study like '%PT%'
and study_datetime >= DATEADD(day, datediff(day, 1, getdate()), 0)
and study_datetime < DATEADD(day, datediff(day, 0, getdate()), 0)) as PT,
(select count(*)
from dbo.study as m8
where m8.mods_in_study like '%NM%'
and study_datetime >= DATEADD(day, datediff(day, 1, getdate()), 0)
and study_datetime < DATEADD(day, datediff(day, 0, getdate()), 0)) as NM,
(select count(*)
from dbo.study as m9
where m9.mods_in_study like '%MG%'
and study_datetime >= DATEADD(day, datediff(day, 1, getdate()), 0)
and study_datetime < DATEADD(day, datediff(day, 0, getdate()), 0)) as MG
from dbo.study as m1
group by m1.mods_in_study
```
Right now, SQL is complaining about the '(' on the 2nd count. Should I be using aliases for this query? I know that a 'union' would work, but it won't display correctly in my report as I need a column name for each count.
What am I doing wrong, or is there just a better way of going about this?
Here is what I am expecting the results to be:
```
CT MR CR DX US PT NM MG
130 39 240 12 45 7 17 121
``` | After adding the expected result set it's easier to understand :-)
To get a single row you don't need the `FROM` or `GROUP BY`, so simply remove the `m1.mods_in_study,` and the `from dbo.study as m1 group by m1.mods_in_study` parts.
But as you always use the same conditions on the same table this can be simplified to
```
SELECT
COUNT(CASE WHEN mods_in_study LIKE '%CT%' THEN 1 END) AS CT,
COUNT(CASE WHEN mods_in_study LIKE '%MR%' THEN 1 END) AS MR,
...
FROM dbo.study
WHERE study_datetime >= DATEADD(day, datediff(day, 1, getdate()), 0)
AND study_datetime < DATEADD(day, datediff(day, 0, getdate()), 0)
```
MarkD's solution is quite similar... | I don't have a terminal at hand where I can test MSSQL, but reading through the query it would seem like you are just missing a comma.
"",
"sql",
"sql-server",
"count",
""
] |
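The conditional-aggregation rewrite from the accepted answer above is easy to try on sample data; a Python + SQLite sketch (the sample modality strings are made up, and the date filter is left out for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE study (mods_in_study TEXT)")
conn.executemany("INSERT INTO study VALUES (?)",
                 [("CT",), ("CT/MR",), ("MR",), ("CR",), ("CT",)])

# COUNT ignores NULLs, so each CASE counts only the matching rows
row = conn.execute("""
    SELECT
      COUNT(CASE WHEN mods_in_study LIKE '%CT%' THEN 1 END) AS CT,
      COUNT(CASE WHEN mods_in_study LIKE '%MR%' THEN 1 END) AS MR,
      COUNT(CASE WHEN mods_in_study LIKE '%CR%' THEN 1 END) AS CR
    FROM study
""").fetchone()
print(row)  # (3, 2, 1)
```

One pass over the table produces every column, instead of one correlated subquery per modality, which is why the rewritten query is so much cheaper.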
```
SELECT account_name ,sum(total_balance) from transaction
where account_name IN('cash','a/r','supplies','prepaid','land','gold')
AS total_assets
ORDER BY account_name
```
> 1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'as total\_assets order by account\_name LIMIT 0, 30' at line 3 | You have an alias in the wrong place.
```
SELECT account_name ,sum(total_balance) from transaction
where account_name IN('cash','a/r','supplies','prepaid','land','gold')
AS total_assets
ORDER BY account_name
```
should be
```
SELECT account_name, sum(total_balance) AS total_assets
from transaction
where account_name IN('cash','a/r','supplies','prepaid','land','gold')
GROUP BY account_name
ORDER BY account_name
``` | remove `AS total_assets`
```
SELECT account_name ,sum(total_balance) from transaction
where account_name IN('cash','a/r','supplies','prepaid','land','gold')
ORDER BY account_name
``` | how can i fix this syntax error in sql? | [
"",
"mysql",
"sql",
"syntax-error",
""
] |
I think this is mostly a terminology issue, where I'm having a hard time articulating a problem.
I've got a table with a couple of columns that manage some historical log data. The two columns I'm interested in are timestamp (or Id, as the id is generated sequentially) and terminalID.
I'd like to supply a list of terminal ids and find only the latest data, that is, the highest id or timestamp per terminalID.
---
I ended up using the GROUP BY solution @Danny suggested, and the other solution he referenced.
I found the time difference to be quite noticeable, so I'm posting both results here as an FYI for anyone interested.
# S1:
```
SELECT UR.* FROM(
SELECT TerminalID, MAX(ID) as lID
FROM dbo.Results
WHERE TerminalID in (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24)
GROUP BY TerminalID
) GT left join dbo.Results UR on UR.id=lID
```
# S2
```
SELECT *
FROM (
SELECT TOP 100
Row_Number() OVER (PARTITION BY terminalID ORDER BY Id DESC) AS [Row], *
FROM dbo.Results
WHERE TerminalID in (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24)
ORDER BY Id DESC
) a
WHERE a.row=1
```
the results were:
S1:
* CPU time = 297 ms, elapsed time = 343 ms.
* Query Cost 36%
* Missing index impact - 94%
S2:
* CPU time = 562 ms, elapsed time = 1000 ms.
* Query Cost 64%
* Missing index impact - 41%
After adding the missing index to solution one (indexing ID only, as opposed to S2, where multiple columns needed an index), I got the query down to 15 ms. | I think you're on the right track with `GROUP BY`. Sounds like you want:
```
SELECT TerminalID, MAX(Timestamp) AS LastTimestamp
FROM [Table_Name]
WHERE TerminalID IN (.., .., .., ..)
GROUP BY TerminalID
``` | use the `TOP` keyword:
```
SELECT TOP 1 ID, terminalID FROM MyTable WHERE <your condition> ORDER BY <something that orders it like you need so that the correct top row is returned>
``` | Getting a single row based on unique column | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I want to write a query to insert dynamic values if they are NOT present therein already. So far I have tried this.
```
INSERT MTB_AML..tb_aml_codes (aml_code, aml_desc)
SELECT 'NRA', 'Non-Resident Alien'
UNION
SELECT 'DOM', 'Resident Alien'
FROM MTB_AML..tb_aml_codes t1
WHERE t1.aml_code NOT IN (SELECT t2.aml_code from MTB_AML..tb_aml_codes t2)
```
But this is returning only the upper select (which is already present in the table). What am I doing wrong? | I think the problem you're having is that in your version the WHERE clause was only applied to the last SELECT in the UNION.
Take the records you want:
```
SELECT 'NRA' AS aml_code, 'Non-Resident Alien' AS aml_desc
UNION
SELECT 'DOM' AS aml_code, 'Resident Alien' AS aml_desc
```
And then wrap them up as a subquery (aliased as [src] in the example) you can then check which ones don't have matching keys in the destination table (aliased as [dst])
```
INSERT MTB_AML..tb_aml_codes (aml_code, aml_desc)
SELECT src.aml_code, src.aml_desc
FROM
(
SELECT 'NRA' AS aml_code, 'Non-Resident Alien' AS aml_desc
UNION
SELECT 'DOM' AS aml_code, 'Resident Alien' AS aml_desc
) src
WHERE src.aml_code NOT IN (SELECT dst.aml_code from MTB_AML..tb_aml_codes dst)
```
Personally I'd do it with a left join like this but it's up to you
```
INSERT MTB_AML..tb_aml_codes (aml_code, aml_desc)
SELECT src.aml_code, src.aml_desc
FROM
(
SELECT 'NRA' AS aml_code, 'Non-Resident Alien' AS aml_desc
UNION
SELECT 'DOM' AS aml_code, 'Resident Alien' AS aml_desc
) src
LEFT JOIN MTB_AML..tb_aml_codes dst
ON dst.aml_code = src.aml_code
WHERE dst.aml_code IS NULL
```
Both would work, but if you had to match on a multi-column key you'd need to use the join method. | You are HARDCODING the values of the result set; wrapping values in quotes means a literal value, so you probably want the COLUMN name.
```
SELECT 'DOM', 'Resident Alien'
```
You need to have your SELECT like this:
```
SELECT T1.ColName, T1.AndAnotherColName
....
``` | How to avoid duplicate insert of dynamic values in SQL server | [
"",
"sql",
"sql-server",
"select",
"duplicates",
"notin",
""
] |
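The LEFT JOIN anti-join insert from the accepted answer above works the same way in other engines; a Python + SQLite sketch seeding one already-existing code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tb_aml_codes (aml_code TEXT, aml_desc TEXT)")
conn.execute("INSERT INTO tb_aml_codes VALUES ('NRA', 'Non-Resident Alien')")

# insert only the candidate rows whose code is not in the table yet
conn.execute("""
    INSERT INTO tb_aml_codes (aml_code, aml_desc)
    SELECT src.aml_code, src.aml_desc
    FROM (SELECT 'NRA' AS aml_code, 'Non-Resident Alien' AS aml_desc
          UNION ALL
          SELECT 'DOM', 'Resident Alien') src
    LEFT JOIN tb_aml_codes dst ON dst.aml_code = src.aml_code
    WHERE dst.aml_code IS NULL
""")
rows = conn.execute("SELECT aml_code FROM tb_aml_codes ORDER BY aml_code").fetchall()
print(rows)  # [('DOM',), ('NRA',)]
```

`'NRA'` matches an existing row, so the anti-join filters it out and only `'DOM'` is inserted; no duplicate ever appears.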
So I'm trying to find the total amount spent on Cuts and Products by each Customer.
I don't know if my query is wrong or my entire database schema. Any ideas?
**My Query**
```
Select First_Name, SUM(B.Cost), SUM(C.Cost)
FROM bookings A, cuts B, products C, customers D
Where A.Customer_ID= D.Customer_ID
AND A.Cut_ID = B.Cut_ID
AND A.Product_ID= C.Product_ID;
```
**My Database**
```
Table: bookings
Booking_N0, Customer_ID, Cut_ID, Product_ID, TimeTaken

Table: customers
Customre_ID, First_Name, Sex

Table: products
Product_ID, Products, Cost

Table: cuts
Cut_ID, Cut, Cost
``` | You should `GROUP BY` to `SUM` by each customer:
```
Select D.First_Name
, SUM(B.Cost)
, SUM(C.Cost)
FROM bookings A LEFT JOIN cuts B ON A.Cut_ID = B.Cut_ID
JOIN products C ON A.Product_ID = C.Product_ID
JOIN customers D ON A.Customer_ID = D.Customer_ID
GROUP BY D.First_Name;
```
Also, look forward to using *explicit join notation* (`FROM table1 t1 JOIN table2 t2 ON t1.field1 = t2.field2`) instead of *implicit join notation* (`FROM table1 t1, table2 t2 WHERE t1.field1 = t2.field2`), because it has a more intuitive form (tables are listed next to the conditions on which they are joined). | Start using the recommended `JOIN / ON` syntax for joining instead of the `WHERE` clause. You also need a `GROUP BY` clause:
```
Select First_Name, SUM(B.Cost), SUM(C.Cost)
FROM bookings A
INNER JOIN cuts B
ON A.Cut_ID = B.Cut_ID
INNER JOIN products C
ON A.Product_ID= C.Product_ID
INNER JOIN customers D
ON A.Customer_ID= D.Customer_ID
GROUP BY First_Name
``` | SQL Query or Table Error? | [
"",
"sql",
"database",
"sqlite",
""
] |
I have an `sql` statement below which groups rows based on country names.
```
SELECT COUNTRY,count(*) FROM DRUG_SEIZURE WHERE COUNTRY IS NOT NULL GROUP BY COUNTRY
```
**Result Sample:**
```
Country Count
------- -----
America 20
Saudi Arabia 28
China 10
Japan 14
Kenya 10
Pakistan 12
India 11
```
I want the top three countries with the maximum counts. In the above case I only want:
```
Country Count
------- -----
Saudi Arabia 28
America 20
Japan 14
``` | Depending on what RDBMS you are using:
***SQL SERVER:***
```
SELECT TOP 3 COUNTRY, count(*)
FROM DRUG_SEIZURE
WHERE COUNTRY IS NOT NULL
GROUP BY COUNTRY
ORDER BY count(*) DESC
```
***MySQL:***
```
SELECT COUNTRY, count(*)
FROM DRUG_SEIZURE
WHERE COUNTRY IS NOT NULL
GROUP BY COUNTRY
ORDER BY count(*) DESC
LIMIT 3
```
***Oracle:***
```
SELECT *
FROM (
SELECT COUNTRY, count(*)
FROM DRUG_SEIZURE
WHERE COUNTRY IS NOT NULL
GROUP BY COUNTRY
ORDER BY count(*) DESC
) mr
WHERE rownum <= 3
ORDER BY rownum;
``` | ```
SELECT *
FROM (SELECT COUNTRY,count(*)
FROM DRUG_SEIZURE
WHERE COUNTRY IS NOT NULL
GROUP BY COUNTRY
ORDER BY 2 DESC)
WHERE rownum <= 3;
``` | Select top 3 most count group by - SQL | [
"",
"mysql",
"sql",
"sql-server",
"oracle",
"subquery",
""
] |
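The MySQL-style `LIMIT` variant from the answer above can be checked quickly against the sample counts in the question; a Python + SQLite sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE drug_seizure (country TEXT)")
data = {"America": 20, "Saudi Arabia": 28, "China": 10, "Japan": 14,
        "Kenya": 10, "Pakistan": 12, "India": 11}
for country, n in data.items():
    conn.executemany("INSERT INTO drug_seizure VALUES (?)", [(country,)] * n)

# group, order by the count descending, keep the first three groups
top3 = conn.execute("""
    SELECT country, COUNT(*) AS cnt
    FROM drug_seizure
    WHERE country IS NOT NULL
    GROUP BY country
    ORDER BY cnt DESC
    LIMIT 3
""").fetchall()
print(top3)  # [('Saudi Arabia', 28), ('America', 20), ('Japan', 14)]
```

`TOP 3` (SQL Server) and the `ROWNUM` wrapper (Oracle) are just the engine-specific spellings of the same order-then-cut step.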
Products are made up of one or many components, and components have individual prices.
The cost of a product is calculated by adding the cost of the components used in that product.
Should I add a column named cost to the Product table, as the costs are not dynamic, or should I calculate it every time a product is purchased?
Which one of these would be the more suitable approach in terms of speed, redundancy and performance?
I hope the scenario is clear. | Unless you are absolutely sure the cost will never change, calculate it dynamically | None of the answers so far really seem to address the issue. Price is generally a slowly changing dimension. That means that the same price for a component can be different at different points of time. When bringing up slowly changing dimensions, I usually recommend Ralph Kimball's book "The Data Warehouse Toolkit", whatever the latest edition is.
In other words, you should have an `EffDate` and `EndDate` for the prices in your `PriceList` table. That way, you know what the price is at any point in time.
In addition to the price for the components at any given point in time, you should also track the price of components within a product. You can handle this with a `ProductComponent` association/junction table. This would have columns such as:
* ProductId
* ComponentId
* Currency (if appropriate)
* Discount (as pointed out by Jonathan)
* Tax
* Full Price
* Net Price
The full price would come from the `PriceList` table. I know that "Full Price" is now redundant with the `ProductComponent` table, but it is amazing how useful this bit of redundancy can be.
Then, you would simply use this table to bring together the cost for any given product at any given time. You can readily calculate the price for any product using the `ProductComponent` table. | SQL - Optimization Priority, Which of these apply? | [
"",
"sql",
"database",
""
] |
I have the following code which is connecting to my database and retrieving some data from a table:
```
string connectionString = "Data Provider=SQLOLEDB;Data Source=myserver;Initial Catalog=Db;Integrated Security=FALSE;user=zh;pwd=zh12;";
using (OleDbConnection connection = new OleDbConnection(connectionString))
{
connection.Open();
OleDbCommand command = new OleDbCommand();
command.Connection = connection;
command.CommandText = "SELECT [Location], [URL], [TAGS] FROM [Db].[dbo].[BOOKINGTABLE]";
command.CommandType = CommandType.Text;
using (OleDbDataReader reader = command.ExecuteReader())
{
menu_ul_1.DataSource = reader;
menu_ul_1.DataBind();
}
}
```
I get the following error:
`Exception Details: System.ArgumentException: An OLE DB Provider was not specified in the ConnectionString. An example would be, 'Provider=SQLOLEDB;'.`
When I change the connectionstring line to:
```
string connectionString = "Provider=SQLOLEDB;Data Source=myserver;Initial Catalog=Db;Integrated Security=FALSE;user=zh;pwd=zh12;";
```
I get the following error:
```
Exception Details: System.Data.OleDb.OleDbException: No error message available, result code: DB_E_ERRORSOCCURRED(0x80040E21).
Source Error:
Line 23: using (OleDbConnection connection = new OleDbConnection(connectionString))
Line 24: {
Line 25: connection.Open();
Line 26:
Line 27: OleDbCommand command = new OleDbCommand();
```
How can I resolve the issue?
My Web.config file has the following line:
```
<add key="ConnStringTEST" value="Data Source=myserver;Initial Catalog=Db;Integrated Security=FALSE;user=zh;pwd=zh12;" />
```
How, if at all, can I use the above line in my C# code? | After much troubleshooting, I was able to figure out why it wasn't working. I rewrote the string like this:
```
string cString = "Provider=sqloledb;Data Source=myserver;Initial Catalog=mydatabase;User Id=myid;Password=mypassword;";
```
That worked like a charm, in case someone else is having the same issue. | Don't use "Integrated Security" when you are supplying the user ID and password. | Why connecting to an OLEDB is giving me an connection error | [
"",
"sql",
"sql-server",
"oledb",
""
] |
I’m using MS SQL Server 2012 and I need to bulk update varchar data in a single column based on a text pattern.
Basically my single column contains comma-separated, variable-length text data (see below) and I need to strip out some of the data based on a pattern. For example, I have “ProductA-106, ProductB-107” and need to transform it to “ProductA, ProductB”, stripping off the “-Number” text.
Can you help me to do this? Thanks a lot.
* Existing:
**Single Column**
```
ProductA-106, ProductB-107
ProductC-108, ProductD-109, ProductDA-109, ProductDB-109
ProductF-1011, ProductE-2015
```
* End result:
**Single Column**
```
ProductA, ProductB
ProductC, ProductD
ProductF, ProductE
``` | This is a perfect example of why you don't want to have a database design like that.
It can be solved, but it is not easy, pretty or fast:
Create this function:
```
CREATE function dbo.f_cleanup (@a varchar(max))
returns varchar(max)
as
BEGIN
WHILE @a like '%-%'
SELECT @a = STUFF(@a, dash, comma - dash, '')
FROM
(
SELECT CHARINDEX('-', @a) dash,
CHARINDEX(',', @a + ',', CHARINDEX('-', @a)) comma
) x
RETURN @a
END
go
```
Test here:
```
DECLARE @t TABLE(SingleColumn varchar(100))
INSERT @t VALUES
('ProductA-106, ProductB-107'),
('ProductC-108, ProductD-109, ProductDA-109, ProductDB-109'),
('ProductF-1011, ProductE-2015')
UPDATE @t
SET SingleColumn = dbo.f_cleanup (SingleColumn)
SELECT * FROM @t
```
Result:
```
ProductA, ProductB
ProductC, ProductD, ProductDA, ProductDB
ProductF, ProductE
``` | This would be much easier if your data was structured correctly - ie: your products in a single table, and related to your other table by IDs in a separate join table.
```
update yourtable
set yourfield = replace (yourfield, '-', '')
update yourtable
set yourfield = replace (yourfield, '0', '')
update yourtable
set yourfield = replace (yourfield, '1', '')
...
``` | How can I bulk update varchar data based on a pattern in a MS SQL 2012 database? | [
"",
"sql",
"sql-server",
""
] |
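If the cleanup can run outside the database, the same "-number" stripping from the record above is a one-line regular expression; a Python sketch (the `-\d+` pattern plays the role of the dash-to-comma scan in the accepted T-SQL function):

```python
import re

def strip_suffixes(value):
    """Remove '-<digits>' after each product name in a comma-separated list."""
    return re.sub(r"-\d+", "", value)

rows = [
    "ProductA-106, ProductB-107",
    "ProductC-108, ProductD-109, ProductDA-109, ProductDB-109",
    "ProductF-1011, ProductE-2015",
]
for row in rows:
    print(strip_suffixes(row))
# ProductA, ProductB
# ProductC, ProductD, ProductDA, ProductDB
# ProductF, ProductE
```

This also illustrates why storing one value per row (instead of a delimited string) would make the whole task a plain `UPDATE`.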
I found a bizarre snippet which is confusing me, so I thought I'd ask the experts.
Let's assume a tableA has the following columns with data:
```
"START_TIME":1399075198
"END_TIME":1399075200
"START_DATE":"02-MAY-14"
"END_DATE":"03-MAY-14"
```
Now query 1:
```
SELECT MIN(start_date) INTO sdate FROM tableA;
```
query 2:
```
SELECT TRUNC(sdate, 'HH24') + INTERVAL '30' MINUTE from dual;
```
So if start-date = '02-MAY-14', how would that truncate to 'HH24'? | The expression:
```
TRUNC(sdate, 'HH24')
```
cuts off everything from a date that is smaller than an hour, i.e. the minutes and seconds. For the specific date:
```
TRUNC('02-MAY-14','HH24')
```
it returns the date unchanged. It only makes sense if the Oracle date contains a time component.
Possibly, your SQL tool (SQL Developer, TOAD etc.) is configured to not display the time part of Oracle dates. So the original date might in fact be `02-MAY-14 09:03:25`. Then it would return:
```
02-MAY-14 09:00:00
```
You mention the columns *START_TIME* and *END_TIME* but don't use them in the SQL queries. What are they for? | In Oracle the date datatype inherently stores the time as well.
Try executing the below query. It should clear things up a bit:
`SELECT TO_CHAR(SYSDATE,'DD-MON-YYYY HH:MI:SS'), TO_CHAR(TRUNC(SYSDATE,'HH24'),'DD-MON-YYYY HH:MI:SS') FROM DUAL;` | What should be the outcome of TRUNC('02-MAY-14','HH24')? | [
"",
"sql",
"oracle",
"plsql",
"oracle11g",
""
] |
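As a quick illustration of the accepted answer above, here is a rough Python sketch of what `TRUNC(date, 'HH24')` does (the helper name `trunc_hh24` is my own, not an Oracle API): it keeps the date and hour and zeroes out the smaller fields, so a value that is already at midnight passes through unchanged.

```python
from datetime import datetime, timedelta

def trunc_hh24(dt):
    """Rough Python analogue of Oracle's TRUNC(date, 'HH24'):
    keep the date and hour, zero out minutes/seconds/microseconds."""
    return dt.replace(minute=0, second=0, microsecond=0)

# A value with a time component gets cut back to the hour...
start = datetime(2014, 5, 2, 9, 3, 25)
truncated = trunc_hh24(start)
shifted = truncated + timedelta(minutes=30)   # TRUNC(...) + INTERVAL '30' MINUTE
# ...while a pure date (midnight) comes back unchanged.
midnight = datetime(2014, 5, 2)
unchanged = trunc_hh24(midnight)
```

This matches the answer's point: the truncation is only visible when the stored DATE carries a time part.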
Hi, I'm fairly new to SQL and have been learning by trial and error, but I have walked into a wall with this issue. I hope someone can give me some advice.
1) Check if table already exist in database, if it does drops it [works]
```
IF OBJECT_ID (N'PWC_L6_Daily',N'U') IS NOT NULL
DROP TABLE PWC_L6_Daily
```
2) Create table with SELECT INTO statement from original table in same database, ORDER BY clause in this statement works without issues DESC or ASC [works]
```
SELECT Time_Stamp,Plate_Number,OP_Tgt_Wgt_Difference AS Sample_Tgt_Wgt_Difference,Plate_Weight_Target,Plate_Geometry,OP_NOP_Wgt_Difference
INTO PWC_L6_Daily
FROM PWC_L6
WHERE ((Time_Stamp BETWEEN '05/09/2014 07:00:00' And '05/10/2014 06:59:59') AND (OP_Tgt_Wgt_Difference BETWEEN -6 And 6) AND (NOP_Tgt_Wgt_Difference BETWEEN -6 And 6) AND (Plate_Number <> 0) AND (Plate_Geometry <> 'Error'))
UNION ALL
SELECT Time_Stamp,Plate_Number,NOP_Tgt_Wgt_Difference AS Sample_Tgt_Wgt_Difference,Plate_Weight_Target,Plate_Geometry,OP_NOP_Wgt_Difference
FROM PWC_L6
WHERE ((Time_Stamp BETWEEN '05/09/2014 07:00:00' And '05/10/2014 06:59:59') AND (OP_Tgt_Wgt_Difference BETWEEN -6 And 6) AND (NOP_Tgt_Wgt_Difference BETWEEN -6 And 6) AND (Plate_Number <> 0) AND (Plate_Geometry <> 'Error'))
ORDER BY Time_Stamp ASC
```
3) Check if table already exist in database, if it does drops it [works]
```
IF OBJECT_ID (N'PWC_L6_Report',N'U') IS NOT NULL
DROP TABLE PWC_L6_Report
```
4) Create table with SELECT INTO statement from table previously created with SELECT INTO statement in same database, GROUP BY clause will work but it will give a random order, ORDER BY clause will do nothing. [does not ORDER BY]
```
SELECT MAX(Time_Stamp) AS Last_Produced,Plate_Number,COUNT(Plate_Number) AS Sample_Count,AVG(Sample_Tgt_Wgt_Difference) AS Avg_StT_Wgt_Difference,AVG(OP_NOP_Wgt_Difference) AS Avg_StS_Wgt_Difference,STDEV(Sample_Tgt_Wgt_Difference) AS Std_Dev_StT_Wgt_Difference
INTO PWC_L6_Report
FROM PWC_L6_Daily
GROUP BY Plate_Number
ORDER BY Last_Produced DESC
```
5) I did the following for troubleshooting and found the first ORDER BY clause (in 2nd step) affects the PWC\_L6\_Report table if I change to ASC or DESC but the second ORDER BY clause in this SELECT INTO statement won't have any effect at all. [Temporary statement]
```
SELECT Time_Stamp
INTO PWC_L6_Report
FROM PWC_L6_Daily
ORDER BY Time_Stamp DESC
```
I would really appreciate any suggestions. Thank you. | How can you tell if data is in the order you want? The statement:
```
select *
from table t
```
Returns the data *as an unordered* result set. The only way you can impose an ordering is with `order by`. You do *not* get data in insert order in SQL Server. This is ANSI-compliant behavior and consistent with how almost all databases work. If you want results in a particular order, then use `order by`.
`order by` can be used for an `insert` with good reason. If you do:
```
insert into t2(col1 . . .)
select col1 . . .
from t
order by col1;
```
Then an *identity* column will be incremented in the proper order. (This is a nice feature, but probably slows down the insert.) You can get a similar effect using `row_number()` in your query:
```
SELECT row_number() over (order by Time_Stamp desc) as id, Time_Stamp
INTO PWC_L6_Report
FROM PWC_L6_Daily;
```
Now you can `order by id` instead of `time_stamp`, if you wanted. | ORDER BY is only for presentation purposes. When used on an insert, it has no effect: think of a table as a set. | ORDER BY clause won't work in table created with SELECT INTO | [
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
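For the accepted answer's `row_number()` suggestion, here is a minimal runnable sketch using SQLite (window functions need SQLite 3.25+); the table and data are invented stand-ins for the OP's schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE daily (time_stamp TEXT)")
con.executemany("INSERT INTO daily VALUES (?)",
                [("2014-05-09 08:00",), ("2014-05-09 10:00",), ("2014-05-09 09:00",)])
# Materialize the desired ordering as an explicit id column:
con.execute("""
    CREATE TABLE report AS
    SELECT ROW_NUMBER() OVER (ORDER BY time_stamp DESC) AS id, time_stamp
    FROM daily
""")
# Readers must still ask for an order; the id column makes that cheap:
rows = con.execute("SELECT time_stamp FROM report ORDER BY id").fetchall()
```

Because `id` was materialized in the desired order, any later reader can recover that order explicitly with `ORDER BY id` instead of relying on insert order.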
I have 2 similar tables that contain campaign names. I know I can do a UNION ALL to combine the tables, but I was wondering if there is a way to do this using some form of JOIN instead. I want to create a table Z with campaign names from table A plus campaign names from table B (which are not in A). Can I do this with a join, or is UNION ALL the only way? | UNION is the easier and correct way to do that. Purely for the exercise you can do it with a JOIN, but it is a lot more complex and unreadable, and the performance will typically be worse. | ```
SELECT * INTO TABLEZ
FROM
(
SELECT Column1, Column2, Column3.... FROM TABLEA
UNION ALL
SELECT Column1, Column2, Column3.... FROM TABLEB
)Q
``` | Join or Union All To Combine Records? | [
"",
"sql",
"sql-server",
""
] |
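A small runnable illustration of the accepted answer's point, using SQLite and made-up campaign data: `UNION` already expresses "A plus the B rows not in A", while the join spelling needs an anti-join glued on with `UNION ALL`.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE a (campaign TEXT)")
con.execute("CREATE TABLE b (campaign TEXT)")
con.executemany("INSERT INTO a VALUES (?)", [("x",), ("y",)])
con.executemany("INSERT INTO b VALUES (?)", [("y",), ("z",)])

# UNION de-duplicates: A plus whatever in B is not already in A.
union = con.execute(
    "SELECT campaign FROM a UNION SELECT campaign FROM b ORDER BY campaign"
).fetchall()

# The equivalent join spelling: all of A, plus an anti-join for B-only rows.
anti = con.execute("""
    SELECT campaign FROM a
    UNION ALL
    SELECT b.campaign FROM b LEFT JOIN a ON a.campaign = b.campaign
    WHERE a.campaign IS NULL
    ORDER BY campaign
""").fetchall()
```

Both queries return the same set, but the join version is clearly the more convoluted of the two.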
How do I convert a table to an atom? I have a 1x1 table with a header. I want to get rid of the header and just have the number, so that I can do things like anotherTable\*thisNumber and so on.
I've tried:
```
(raze raze tableName)
```
but this gives me `enlist 10.5`. How do I get 10.5 (as a number)? | ```
q)t:([] head:enlist 1)
q)t
head
----
1
q)first exec head from t
1
q) // or the shortest way
q)t[`head]0
1
q)type first exec head from t
-7h
q)100*first exec head from t
100
q)first t`head
1
q)raze/[t]0
1
``` | Think you can use `over` and indexing to do this:
```
q)raze/[([]a:1#1)]0
1
``` | kdb: convert 1x1 table to an atom | [
"",
"sql",
"kdb",
""
] |
My actual table structures are much more complex but following are two simplified table definitions:
Table `invoice`
```
CREATE TABLE invoice (
id integer NOT NULL,
create_datetime timestamp with time zone NOT NULL,
total numeric(22,10) NOT NULL
);
id create_datetime total
----------------------------
100 2014-05-08 1000
```
Table `payment_invoice`
```
CREATE TABLE payment_invoice (
invoice_id integer,
amount numeric(22,10)
);
invoice_id amount
-------------------
100 100
100 200
100 150
```
I want to select the data by joining above 2 tables and selected data should look like:-
```
month total_invoice_count outstanding_balance
05/2014 1 550
```
The query I am using:
```
select
to_char(date_trunc('month', i.create_datetime), 'MM/YYYY') as month,
count(i.id) as total_invoice_count,
(sum(i.total) - sum(pi.amount)) as outstanding_balance
from invoice i
join payment_invoice pi on i.id=pi.invoice_id
group by date_trunc('month', i.create_datetime)
order by date_trunc('month', i.create_datetime);
```
Above query is giving me incorrect results as `sum(i.total) - sum(pi.amount)` returns (1000 + 1000 + 1000) - (100 + 200 + 150) = **2550**.
I want it to return (1000) - (100 + 200 + 150) = **550**
And I cannot change it to `i.total - sum(pi.amount)`, because then I am forced to add `i.total` column to group by clause and that I don't want to do. | You need a single row per invoice, so aggregate `payment_invoice` first - best before you join.
When the whole table is selected, it's typically fastest to [*aggregate first* and *join later*](https://stackoverflow.com/questions/16018237/aggregate-a-single-column-in-query-with-many-columns/16023336#16023336):
```
SELECT to_char(date_trunc('month', i.create_datetime), 'MM/YYYY') AS month
, count(*) AS total_invoice_count
, (sum(i.total) - COALESCE(sum(pi.paid), 0)) AS outstanding_balance
FROM invoice i
LEFT JOIN (
SELECT invoice_id AS id, sum(amount) AS paid
FROM payment_invoice pi
GROUP BY 1
) pi USING (id)
GROUP BY date_trunc('month', i.create_datetime)
ORDER BY date_trunc('month', i.create_datetime);
```
[**`LEFT JOIN`**](http://www.postgresql.org/docs/current/interactive/queries-table-expressions.html#QUERIES-FROM) is essential here. You do not want to lose invoices that have no corresponding rows in `payment_invoice` (yet), which would happen with a plain `JOIN`.
Accordingly, use [`COALESCE()`](http://www.postgresql.org/docs/current/interactive/functions-conditional.html#FUNCTIONS-COALESCE-NVL-IFNULL) for the sum of payments, which might be NULL.
[**SQL Fiddle**](http://sqlfiddle.com/#!15/3eab4/1) with improved test case. | See [sqlFiddle](http://sqlfiddle.com/#!15/d2524/4)
```
SELECT TO_CHAR(invoice.create_datetime, 'MM/YYYY') as month,
COUNT(invoice.create_datetime) as total_invoice_count,
invoice.total - payments.sum_amount as outstanding_balance
FROM invoice
JOIN
(
SELECT invoice_id, SUM(amount) AS sum_amount
FROM payment_invoice
GROUP BY invoice_id
) payments
ON invoice.id = payments.invoice_id
GROUP BY TO_CHAR(invoice.create_datetime, 'MM/YYYY'),
invoice.total - payments.sum_amount
``` | Using a column in sql join without adding it to group by clause | [
"",
"sql",
"postgresql",
"aggregate-functions",
""
] |
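The accepted answer's "aggregate first, join later" pattern can be checked end to end with SQLite, using the numbers from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE invoice (id INTEGER, total REAL)")
con.execute("CREATE TABLE payment_invoice (invoice_id INTEGER, amount REAL)")
con.execute("INSERT INTO invoice VALUES (100, 1000)")
con.executemany("INSERT INTO payment_invoice VALUES (?, ?)",
                [(100, 100), (100, 200), (100, 150)])

# Aggregate payments to one row per invoice *before* joining, so the
# invoice total is not repeated once per payment row:
row = con.execute("""
    SELECT COUNT(*)                                AS invoices,
           SUM(i.total) - COALESCE(SUM(p.paid), 0) AS outstanding
    FROM invoice i
    LEFT JOIN (SELECT invoice_id, SUM(amount) AS paid
               FROM payment_invoice
               GROUP BY invoice_id) p ON p.invoice_id = i.id
""").fetchone()
```

One invoice row joins exactly one pre-aggregated payment row, so the total is counted once and the outstanding balance comes out as 550, not 2550.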
I have a table with sample data below:
```
ID | RecordID | Time | Start/End
1 1111 09:00:10
5 1111 09:00:12
13 1111 09:01:10
24 1111 09:02:30
27 9999 10:00:10
29 9999 10:01:22
30 9999 10:03:10
38 7777 10:20:10
59 7777 10:21:10
60 7777 10:24:10
71 1111 14:20:10
72 1111 14:21:10
75 1111 14:24:10
```
How do I query the table to set a 1 in the `Start/End` column when the row is the first or last instance of a `RecordID` run?
```
ID | RecordID | Time | Start/End
1 1111 09:00:10 1
5 1111 09:00:12
13 1111 09:01:10
24 1111 09:02:30 1
27 9999 10:00:10 1
29 9999 10:01:22
30 9999 10:03:10 1
38 7777 10:20:10 1
59 7777 10:21:10
60 7777 10:24:10 1
71 1111 14:20:10 1
72 1111 14:21:10
75 1111 14:24:10 1
```
Do I need to loop through the table?
Please note that the RecordID can be in the table more than once throughout the day.
**EDIT**
Apologies, I originally stated the ID would be incremental, but that is not the case - the table above is what it would look like if I pulled out the data for one user.
**[SQLFIDDLEEXample](http://sqlfiddle.com/#!3/bce28/1)**
```
UPDATE t
SET [Start/End] = CASE WHEN t1.ID is null or t2.ID is null
THEN 1
WHEN t.RecordID <> t1.RecordID OR t.RecordID <> t2.RecordID
THEN 1 END
FROM Table1 t
LEFT JOIN Table1 t1
ON t1.ID = t.ID - 1
LEFT JOIN Table1 t2
ON t2.ID = t.ID + 1
```
Result:
```
| ID | RECORDID | TIME | START/END |
|----|----------|----------|-----------|
| 1 | 1111 | 09:00:10 | 1 |
| 2 | 1111 | 09:00:12 | (null) |
| 3 | 1111 | 09:01:10 | (null) |
| 4 | 1111 | 09:02:30 | 1 |
| 5 | 9999 | 10:00:10 | 1 |
| 6 | 9999 | 10:01:22 | (null) |
| 7 | 9999 | 10:03:10 | 1 |
| 8 | 7777 | 10:20:10 | 1 |
| 9 | 7777 | 10:21:10 | (null) |
| 10 | 7777 | 10:24:10 | 1 |
| 11 | 1111 | 14:20:10 | 1 |
| 12 | 1111 | 14:21:10 | (null) |
| 13 | 1111 | 14:24:10 | 1 |
```
**EDIT**
New Query:
**[SQLFIDDLEExample](http://sqlfiddle.com/#!3/d41dd/1)**
```
;WITH CTE AS (
SELECT *,
ROW_NUMBER()OVER(ORDER BY ID) rnka
FROM Table1)
UPDATE t
SET [Start/End] = CASE WHEN t1.ID is null or t2.ID is null
THEN 1
WHEN t.RecordID <> t1.RecordID OR t.RecordID <> t2.RecordID
THEN 1 END
FROM CTE t
LEFT JOIN CTE t1
ON t1.rnka = t.rnka - 1
LEFT JOIN CTE t2
ON t2.rnka = t.rnka + 1
```
Result:
```
| ID | RECORDID | TIME | START/END |
|----|----------|----------|-----------|
| 1 | 1111 | 09:00:10 | 1 |
| 5 | 1111 | 09:00:12 | (null) |
| 13 | 1111 | 09:01:10 | (null) |
| 24 | 1111 | 09:02:30 | 1 |
| 27 | 9999 | 10:00:10 | 1 |
| 29 | 9999 | 10:01:22 | (null) |
| 30 | 9999 | 10:03:10 | 1 |
| 38 | 7777 | 10:20:10 | 1 |
| 59 | 7777 | 10:21:10 | (null) |
| 60 | 7777 | 10:24:10 | 1 |
| 71 | 1111 | 14:20:10 | 1 |
| 72 | 1111 | 14:21:10 | (null) |
| 75 | 1111 | 14:24:10 | 1 |
``` | Edited My answer:
Well, you can use the two update queries below:
`UPDATE t SET t.[Start/End] = 1 FROM table t INNER JOIN (SELECT MIN(Id) IdMin, MIN(Time) FROM table GROUP BY RecordId) t1 ON t.Id = t1.IdMin`
`UPDATE t SET t.[Start/End] = 1 FROM table t INNER JOIN (SELECT MAX(Id) IdMax, MAX(Time) FROM table WHERE [Start/End] IS NULL GROUP BY RecordId) t1 ON t.Id = t1.IdMax`
"",
"sql",
"sql-server-2008",
"t-sql",
"reportbuilder3.0",
""
] |
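Where window functions are available, the self-join in the accepted answer can also be written with `LAG`/`LEAD`. A sketch with SQLite (3.25+ for window functions) on a trimmed copy of the question's data; the column names are simplified stand-ins:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, record_id INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(1, 1111), (5, 1111), (13, 1111),
                 (27, 9999), (29, 9999), (71, 1111), (72, 1111)])

# A row opens or closes a run when the neighbouring row (by id) carries a
# different record_id; IS NOT is SQLite's NULL-safe "distinct from", which
# also flags the very first and very last rows (their neighbour is NULL).
rows = con.execute("""
    SELECT id,
           CASE WHEN record_id IS NOT LAG(record_id)  OVER (ORDER BY id)
                  OR record_id IS NOT LEAD(record_id) OVER (ORDER BY id)
                THEN 1 END AS start_end
    FROM t ORDER BY id
""").fetchall()
```

This avoids both the self-join and the assumption of contiguous IDs.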
Question 1:
Do I have to use `TO_DATE` when inserting a date?
```
INSERT INTO some_table (date1, date2)
VALUES (to_date('2012-10-24','YYYY-MM-DD'), to_date('2012-10-24','YYYY-MM-DD'));
```
Or can I just insert it as a string? Will everything be OK that way too? I've tried it and it worked.
```
INSERT INTO some_table (date1, date2)
VALUES ('2012-10-24','2012-10-24');
```
Question 2:
What happens if I don't name the columns I'm inserting into? It works, but does it insert the values in a random order, or does it use the column order from when the table was created?
```
INSERT INTO some_table
VALUES ('2012-10-24','2012-10-24');
``` | 1 seems to only work with the 'YYYY-MM-DD' format:
<http://docs.oracle.com/cd/B28359_01/server.111/b28286/sql_elements003.htm#SQLRF51049> says
> You can specify a DATE value as a string literal ... to specify a DATE value as a literal, you must use the Gregorian calendar. You can specify an ANSI literal... The ANSI date literal contains no time portion, and must be specified in the format 'YYYY-MM-DD'.
However, it might work with time if you use the
> Alternatively you can specify an Oracle date value... The default date format for an Oracle DATE value is specified by the initialization parameter NLS\_DATE\_FORMAT.
For question 2, it uses the column order from the table definition. However, you then have to give values for all columns.
```
ALTER SESSION SET NLS_DATE_FORMAT = 'MM:YYYY:DD';
INSERT INTO some_table (date1) VALUES ('05:2014:10');
```
...but doing it like this is not recommended. Use `TO_DATE` or DATE Literal, e.g. `DATE '2014-05-10'` instead. It makes your life easier. | SQL oracle beginner questions | [
"",
"sql",
"oracle",
""
] |
Here is the example SQL in question; the SQL should run on any Oracle DBMS (I'm running 11.2.0.2.0).
Note how the UUID values are different (one has 898 the other has 899) in the resultset despite being built from within the inline view/with clause. Further below you can see how DBMS\_RANDOM.RANDOM() does not have this side effect.
**SQL:**
```
WITH data AS (SELECT SYS_GUID () uuid FROM DUAL)
SELECT uuid, uuid
FROM data
```
**Output:**
```
UUID UUID_1
F8FCA4B4D8982B55E0440000BEA88F11 F8FCA4B4D8992B55E0440000BEA88F11
```
**In Contrast DBMS\_RANDOM** the results are the same
**SQL:**
```
WITH data AS (SELECT DBMS_RANDOM.RANDOM() rand FROM DUAL)
SELECT rand, rand
FROM data
```
**Output:**
```
RAND RAND_1
92518726 92518726
```
Even more interesting is I can change the behavior / stabilize sys\_guid by including calls to DBMS\_RANDOM.RANDOM:
```
WITH data AS (
SELECT SYS_GUID () uuid,
DBMS_RANDOM.random () rand
FROM DUAL)
SELECT uuid a,
uuid b,
rand c,
rand d
FROM data
```
SQL Fiddle That Stabilizes SYS\_GUID:
<http://sqlfiddle.com/#!4/d41d8/29409>
SQL Fiddle That shows the odd SYS\_GUID behavior:
<http://sqlfiddle.com/#!4/d41d8/29411> | The [documentation gives a reason](http://docs.oracle.com/cd/E16655_01/appdev.121/e17620/adfns_packages.htm#ADFNS00908) as to why you may see a discrepancy (emphasis mine):
> **Caution:**
>
> Because SQL is a declarative language, rather than an imperative (or procedural) one, **you cannot know how many times a function invoked by a SQL statement will run**—even if the function is written in PL/SQL, an imperative language.
> If your application requires that a function be executed a certain number of times, do not invoke that function from a SQL statement. Use a cursor instead.
>
> For example, if your application requires that a function be called for each selected row, then open a cursor, select rows from the cursor, and call the function for each row. This technique guarantees that the number of calls to the function is the number of rows fetched from the cursor.
Basically, Oracle doesn't specify how many times a function will be called inside a sql statement: it may be dependent upon the release, the environment, the access path among other factors.
However, there are ways to limit query rewrite as explained in the chapter [Unnesting of Nested Subqueries](http://docs.oracle.com/cd/E11882_01/server.112/e26088/queries008.htm#SQLRF52358):
> Subquery unnesting unnests and merges the body of the subquery into the body of the statement that contains it, allowing the optimizer to consider them together when evaluating access paths and joins. The optimizer can unnest most subqueries, **with some exceptions**. Those exceptions include hierarchical subqueries and subqueries that contain a ROWNUM pseudocolumn, one of the set operators, a nested aggregate function, or a correlated reference to a query block that is not the immediate outer query block of the subquery.
As explained above, you can use [`ROWNUM`](http://docs.oracle.com/cd/E16655_01/server.121/e17209/pseudocolumns009.htm#SQLRF00255) pseudo-column to prevent Oracle from unnesting a subquery:
```
SQL> WITH data AS (SELECT SYS_GUID() uuid FROM DUAL WHERE ROWNUM >= 1)
2 SELECT uuid, uuid FROM data;
UUID UUID
-------------------------------- --------------------------------
1ADF387E847F472494A869B033C2661A 1ADF387E847F472494A869B033C2661A
``` | The NO\_MERGE hint "fixes" it. Prevents Oracle from re-writing the inline view.
```
WITH data AS (SELECT /*+ NO_MERGE */
SYS_GUID () uuid FROM DUAL)
SELECT uuid, uuid
FROM data
```
[From the docs:](http://docs.oracle.com/cd/E11882_01/server.112/e41084/sql_elements006.htm#SQLRF50507)
> The NO\_MERGE hint instructs the optimizer not to combine the outer
> query and any inline view queries into a single query.This hint lets
> you have more influence over the way in which the view is accessed.
[SQL Fiddle with the NO\_MERGE hint applied](http://sqlfiddle.com/#!4/d41d8/29545):
I'm still struggling to understand/articulate how the query is being re-written in such a way that `sys_guid()` would be called twice. Perhaps it is a bug; but I tend to assume it is a bug in my own thoughts/code. | Oracle SQL: Understanding the behavior of SYS_GUID() when present in an inline view? | [
"",
"sql",
"oracle",
""
] |
Running MS SQL Server 2008
I have this query:
```
select count(*) from dbo.study
where study_datetime >= (GETDATE() -1)
```
that comes back with all of yesterday's exams written to my study table. How would I make it come back with everything done 'today' up to the current time? For example, I want everything for today from 00:00:00.000 to the current time.
My values in the 'study_datetime' column look like: 2014-05-06 10:40:31.000
I can't seem to figure this one out. I have tried replacing the '-1' with a '0', but I get back 0 results.
thanks | To get everything that strictly happened today, just use:
```
select count(*) from dbo.study
where study_datetime >= cast(getDate() As Date)
and study_datetime < cast(DATEADD(day,1,getdate()) as Date)
```
When you're working with continuous data, it's almost always better to switch to using semi-open intervals, to ensure that data falls into one and exactly one interval. Usually, when you want "all day", you don't want to exclude things that occurred during the final minute of the day (at e.g. 23:59:37.223 or even at 23:59:59.993). So you'd normally write your query to be `>= midnight at the start of the day` and `< midnight at the start of the following day` (note the different types of comparisons)
This is usually a far better idea than trying to compute the last moment of today and use `<=` (or `BETWEEN`) for your comparisons. | unfortunately there is no trunc() like in oracle, but since you have the 2008 version you can use:
`select count(*) from dbo.study
where study_datetime >= cast(getDate() As Date)` | mssql select GETDATE - all day from 00:00 - 23:59 | [
"",
"sql",
"sql-server",
""
] |
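A runnable sketch of the half-open interval from the accepted answer, using SQLite with a fixed stand-in for `GETDATE()` so the result is reproducible; table and dates are toy data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE study (study_datetime TEXT)")
con.executemany("INSERT INTO study VALUES (?)",
                [("2014-05-05 23:59:59",), ("2014-05-06 00:00:00",),
                 ("2014-05-06 10:40:31",), ("2014-05-07 00:00:00",)])

today = "2014-05-06"   # stand-in for CAST(GETDATE() AS DATE)
# Half-open interval [midnight today, midnight tomorrow):
count, = con.execute("""
    SELECT COUNT(*) FROM study
    WHERE study_datetime >= ? AND study_datetime < date(?, '+1 day')
""", (today, today)).fetchone()
```

Midnight today is included, midnight tomorrow is excluded, and nothing during the day's final minute is lost.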
I want to insert the scholar's id into `tblinbox`. Here is my query:
```
$sql = "INSERT INTO tblinbox VALUES ('','$sender','$type','$subject','$LRN','$content','$date', '$newyearLevel','','$userType','THIS_IS_FOR_THE_ID_OF_THE_SCHOLAR')
SELECT id FROM tblscholar WHERE schoYear = '$newyearLevel'";
```
My problem is that it is not inserting. What should I change in my query? | `INSERT ... SELECT` syntax does not allow a `VALUES` clause. The values ARE the results returned from the `SELECT`.
See the documentation here: <http://dev.mysql.com/doc/refman/5.6/en/insert-select.html>
I honestly am not fully sure what you are trying to do with your insert. If you are trying to insert the same values held in your variables for each id value from the `tblscholar` table then perhaps you need to do something like this:
```
INSERT INTO tblinbox
/*
maybe add column definitions here to make it clearer
column definitions could look like this:
(
someField,
type,
subject,
LRN,
content,
`date`,
newyearLevel,
someOtherField,
userType,
id
)
*/
SELECT
'',
'$sender',
'$type',
'$subject',
'$LRN',
'$content',
'$date',
'$newyearLevel',
'',
'$userType',
id
FROM tblscholar
WHERE schoYear = '$newyearLevel'
``` | considering id is the first column in your insert statement, try this
```
$sql = "INSERT INTO tblinbox VALUES ((SELECT id FROM tblscholar WHERE schoYear = '$newyearLevel'),'$sender','$type','$subject','$LRN','$content','$date', '$newyearLevel','','$userType')";
``` | INSERT ... SELECT(insert many rows) | [
"",
"mysql",
"sql",
""
] |
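A minimal SQLite sketch of the accepted answer's `INSERT ... SELECT` shape; the table layout is a trimmed-down stand-in for the OP's schema, and 'Welcome' is an invented constant value:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tblscholar (id INTEGER, schoYear TEXT)")
con.execute("CREATE TABLE tblinbox (scholar_id INTEGER, subject TEXT)")
con.executemany("INSERT INTO tblscholar VALUES (?, ?)",
                [(1, "2014"), (2, "2014"), (3, "2015")])

# In INSERT ... SELECT the select list *is* the values: constants for the
# fixed columns, the scanned column for the per-row scholar id.
con.execute("""
    INSERT INTO tblinbox (scholar_id, subject)
    SELECT id, 'Welcome' FROM tblscholar WHERE schoYear = ?
""", ("2014",))
rows = con.execute(
    "SELECT scholar_id, subject FROM tblinbox ORDER BY scholar_id").fetchall()
```

One row is inserted per matching scholar, without any `VALUES` clause.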
I have a query that returns a date, e.g. 2013-10-12.
I want to check if the date is between June 1st of the year and the last day of the year.
If it is between these dates increase the year by 1 else leave the year as it is.
So, taking the example above `2013-10-12` the result I want is 2014.
If it was `2013-01-12` the result I want is 2013.
I hope this makes sense. | ```
Declare @Yourdate datetime = '2013-10-13'
SELECT CASE WHEN
(@Yourdate between '2013-06-01' and DATEADD(yy, DATEDIFF(yy,0,getdate()) + 1, -1) )
THEN YEAR(@Yourdate) + 1 ELSE (YEAR(@Yourdate) ) END AS [YEAR]
``` | ```
declare @arg datetime
set @arg = '01/12/2013'
select
case when month(@arg) > 5 then year(@arg) + 1 else year(@arg) end
``` | Check if date is between the start of May this year and the end of May next year | [
"",
"sql",
"sql-server",
"date",
""
] |
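The rule both answers implement boils down to "month on or after June bumps the year". A generic Python sketch (the helper name is my own, and it assumes the June 1st boundary stated in the question rather than the hard-coded 2013 dates in the accepted answer):

```python
from datetime import date

def billing_year(d):
    """If the date falls on or after 1 June, roll the year forward by one;
    otherwise keep it (my reading of the question's rule)."""
    return d.year + 1 if d.month >= 6 else d.year
```

This generalises to any year, whereas the hard-coded '2013-06-01' boundary only works for 2013 input dates.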
I have a stored procedure in an ASP.NET application, as follows:
```
CREATE PROCEDURE [dbo].[step2-e]
@PI varchar(50),
@Balance int output ,
@Shipment_status varchar(50) output,
@ETA varchar(50) output,
@Forwarder varchar(50) output,
@Transit_time Time output,
@Shipping_date date output,
@Shipping_method varchar(50) output,
@Clearance_location varchar(50) output,
@Advance_payment varchar(50) output
@Balance_t varchar(50) output,
@Loading_date date output
@Balance_d date output
AS
Begin
select
@Advance_payment = [advance_payment] @Balance = [Balance],
@Shipment_status = [Shipment_status],
@ETA = [Eta], @Forwarder = [Forwarder],
@Transit_time = [Transit_time], @Shipping_date = [Shipping_date],
@Shipping_method = [Shipping_method],
@Clearance_location = [Clearance_location],
@Balance_d = [Balance_due_d],
@Balance_t = [Balance_due_t],
@Loading_date = [Loading_date]
from
Inbound
where
[Pi1] = @PI
End
GO
Select convert(date, [dbo].[step2-e] ,3);
GO
```
But I get an error message on the SELECT after the GO keyword:
> Error SQL70001: This statement is not recognized in this context
OK, I think the problem is the use of the GO keyword.
When I searched, I found a solution, but for an ASP.NET website, not an ASP.NET application.
I found the solution [here](http://arcanecode.com/2013/03/28/ssdt-error-sql70001-this-statement-is-not-recognized-in-this-context/), but I can't find the script file in an ASP.NET application; I can only find it in an ASP.NET website. What can I do?
```
select
@Advance_payment = [advance_payment] @Balance = [Balance],
^^^^
|
here there should be a comma!
```
So try this instead:
```
select
@Advance_payment = [advance_payment],
@Balance = [Balance],
..... (rest of your statement) ....
``` | This error is due to the fact that Build Action is set To Build inside the Properties. Set it to None and it should work fine. [](https://i.stack.imgur.com/AlRJl.png) | Error SQL70001: This statement is not recognized in this context | [
"",
"sql",
"sql-server",
"t-sql",
"stored-procedures",
""
] |
I have created the following 3 tables for a database:
```
CREATE TABLE [dbo].[Buyer] (
[Buyer_Id] INT IDENTITY (1, 1) NOT NULL,
[Last_Name] NVARCHAR (50) NOT NULL,
[First_Name] NVARCHAR (50) NOT NULL,
[Social_No] NVARCHAR (50) NOT NULL,
[Phone] NVARCHAR (50) NOT NULL,
[User_Id] INT NOT NULL,
PRIMARY KEY CLUSTERED ([Buyer_Id] ASC),
CONSTRAINT [FK_User_Id] FOREIGN KEY ([User_Id]) REFERENCES [dbo].[User] ([User_Id])
);
CREATE TABLE [dbo].[Type] (
[Type_Id] INT IDENTITY (1, 1) NOT NULL,
[Type_Name] NCHAR (10) NULL,
PRIMARY KEY CLUSTERED ([Type_Id] ASC)
);
CREATE TABLE [dbo].[User] (
[User_Id] INT IDENTITY (1, 1) NOT NULL,
[User_Name] NCHAR (10) NOT NULL,
[Pass] NVARCHAR (50) NOT NULL,
[Type_Id] INT NOT NULL,
PRIMARY KEY CLUSTERED ([User_Id] ASC),
CONSTRAINT [FK__Type_Id] FOREIGN KEY ([Type_Id]) REFERENCES [dbo].[Type] ([Type_Id])
);
```
and I want to make a stored procedure to be used in my Addcustumer form in C#.
Below is the code for my stored procedure:
```
CREATE PROCEDURE [dbo].[InsertCustomer]
@Buyer_Id int output,
@Last_Name varchar(50),
@Fist_Name varchar(50),
@Social_No varchar(50),
@Phone varchar(50),
@User_Id int output,
@User_Name nchar(10),
@Pass varchar(50),
@Type_id int output,
@Type_Name nchar(10)
AS
insert into Buyer(Buyer_Id,Last_Name,First_Name,Social_No,Phone)
values (@Last_Name,@Fist_Name,@Social_No)
set @Buyer_Id=SCOPE_IDENTITY
insert into User(User_Name,Pass,Type_Id)
values(@User_Id,@Pass)
set @Type_id=SCOPE_IDENTITY
insert into Type(Type_Id,Type_Name)
values (@Type_Name)
set @Type_id=SCOPE_IDENTITY
RETURN 0
```
But when I try to run my SQL stored procedure I get the following error:
> Incorrect syntax near 'User'
Please tell me what is wrong with my code. | The usual: `User` is a [reserved word in T-SQL](http://technet.microsoft.com/en-us/library/ms189822.aspx), so you have to quote it:
```
insert into [User] (User_Name,Pass,Type_Id)
``` | There are a number of issues with your procedure. Yes what VMai has mentioned is very important and you also have the following errors.
```
CREATE PROCEDURE [dbo].[InsertCustomer]
@Buyer_Id int output,
@Last_Name varchar(50),
@Fist_Name varchar(50),
@Social_No varchar(50),
@Phone varchar(50),
@User_Id int output,
@User_Name nchar(10),
@Pass varchar(50),
@Type_id int output,
@Type_Name nchar(10)
AS
BEGIN
SET NOCOUNT ON; --<-- to suppress the default sql server message (N) row(s) effected.
insert into Buyer(Buyer_Id,Last_Name,First_Name,Social_No,Phone)
values (@Last_Name,@Fist_Name,@Social_No)
set @Buyer_Id = SCOPE_IDENTITY(); --<-- Missing Parenthesis
insert into [User](User_Name,Pass,Type_Id)
values(@User_Id,@Pass)
set @User_Id = SCOPE_IDENTITY(); --<-- Missing Parenthesis and I think
--it was suppose to be @User_Id
insert into [Type](Type_Id,Type_Name)
values (@Type_Name)
set @Type_id = SCOPE_IDENTITY(); --<-- Missing Parenthesis
RETURN 0
END
``` | Incorrect syntax near table when trying to save a stored procedure in C# | [
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
I am trying to do a multi-table query, but I don't want to use subqueries, i.e.:
```
SELECT column1
FROM table1
WHERE
EXISTS (SELECT column1 FROM table2 WHERE table1.column1 = table2.column1);)
```
I thought of using a `JOIN` but so far my best result was this:
```
SELECT *
FROM table1
JOIN table2 ON table1.t1id = table2.t2id
WHERE table1.id = 5;
```
This would be good except for the fact that I get a duplicate column (the `id` columns in tables 1 and 2 are foreign keys).
How do I remove the duplicate column, if possible?
> UPDATE:
```
Table1:
tableA_ID, TABLEB_ID
1, 1
1, 4
3, 2
4, 3
TableA: ID, COL1, COL2
1, A, B
2, A, B
3, A, B
4, A, B
TableB: ID, Col3, COL4
1, C, D
2, C, D
3, C, D
4, C, D
```
I want to get all or some of the columns from `TableA` according to a condition.
Sample: let's say the condition is `tableA_ID = 1`, which matches the first two rows of `Table1`; then I want to get all or some of the columns in `TableA` that correspond to the `ID` values I got from `Table1`.
Sample: The result from before was `[{1,1}{1,4}]` which means I want from `TableA` the results:
```
TableA.ID, TableA.COL1, TableA.COL2
1,A,B
4,A,B
```
The actual results I get is:
```
Table1.tableA_ID, Table1.TABLEB_ID, TableA.ID, TableA.COL1, TableA.COL2
1,1,1,A,B
1,4,4,A,B
``` | Is this what you're looking for?
```
select a.id, a.column1, b.column2
from table1 a
left join table2 b on a.id = b.otherid;
``` | You can't change the column list of a query based on the values it returns. It just isn't the way that SQL is designed to operate. At best, you can return all of the columns from the second table and ignore the ones that aren't relevant based on other values in that row.
I'm not even sure how a variable column list would work. In your scenario, you're looking for two discrete values separately. But that's not the only scenario: what if the condition is `tableA_ID in (1,2)`. Would you want different numbers of columns in different rows as part of a single result set? | Is there a way to do a multi table query and get result just from specific tables? | [
"",
"sql",
"join",
""
] |
I have an invoice table in my database into which new invoices are inserted. Recently we have upgraded our services and have many users inserting invoices into the table. I have written a function that calculates the next invoice number in the sequence (the sequence isn't just sequential numbering).
```
MAX(CAST(SUBSTRING([InvoiceNumber], 3, LEN([InvoiceNumber]) - 2) AS INT)) FROM [Invoice] WHERE [InvoiceNumber] LIKE @Prefix + '%'
```
I now want to use this function in an insert statement as follows
```
INSERT INTO [Invoice]
([InvoicedTo]
,[InvoiceNumber]
,[InvoiceDate]
,[Description]
,[Amount])
VALUES
(@InvoicedTo
,NextInvoiceNumber(@Prefix)
,@InvoiceDate
,@Description
,@Amount)
```
I want to make sure that using this insert statement will guarantee that the invoice number cannot be duplicated when multiple users are generating invoices. If it can result in duplicates, I would appreciate suggestions on how to stop any duplicates occurring.
Many thanks, Paul | I don't know if this is helpful:
```
begin tran
declare @nextInvoiceNumber int = dbo.NextInvoiceNumber(@Prefix)
INSERT INTO [Invoice]
([InvoicedTo]
,[InvoiceNumber]
,[InvoiceDate]
,[Description]
,[Amount])
VALUES
(@InvoicedTo
,@nextInvoiceNumber
,@InvoiceDate
,@Description
,@Amount)
commit
```
Another thing you could/should do is add a unique constraint on the invoice number column.
Then add error handling so that, in the event a duplicate does occur, you just fetch another number.
You should wrap the call inside a `serializable transaction`. | Using SQL Functions inside Insert Statements | [
"",
"sql",
"sql-server",
"function",
""
] |
I am new to SQL.
I have used ORDER BY to sort two large tables on IBM Netezza.
The table is:
```
col1 INT
col2 INT
col3 INT
col4 DOUBLE PRECISION
INSERT INTO mytable
SELECT *
FROM table1 AS t1
ORDER BY t1.col1 , t1.col2, t1.col3, t1.col4 ASC
```
After sorting, I check the top 100 rows:
```
SELECT *
FROM mytable
LIMIT 100;
```
But I get different results every time I run the SQL query for the top 100 rows.
When I export the table to a txt file, the same thing happens.
Why?
Thanks ! | The order in which you insert data into the table is meaningless. Running a query has absolutely no guarantee on the order rows are returned, unless you explicitly specify it using the `order by` clause. Since there's no guarantee on the order of rows, there's no guarantee what the "top 100" are, and hence you may very well get different results each time you run the query.
If you do specify the order in your query, however, you should get consistent results (assuming that there's only one possible outcome for the top 100 rows, and not, e.g., 200 rows which any 100 of which could be considered valid results):
```
SELECT *
FROM mytable
ORDER BY col1, col2, col3, col4 ASC
LIMIT 100;
``` | The sequence is not guaranteed, although it may match the order in which the data was inserted. If you need it in a particular sequence, use ORDER BY. | SQL check top 100 rows different every time running the SQL query? | [
"",
"sql",
"sql-server",
"database",
"sql-server-2008",
"netezza",
""
] |
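The accepted answer's point can be demonstrated with SQLite: only a query-level `ORDER BY` pins down which rows `LIMIT` returns. Toy table, invented values:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mytable (col1 INTEGER, col2 INTEGER)")
con.executemany("INSERT INTO mytable VALUES (?, ?)",
                [(3, 30), (1, 10), (2, 20)])

# LIMIT without ORDER BY returns *some* two rows in an unspecified order;
# repeating the ORDER BY in the reading query makes the top-N deterministic:
top = con.execute("SELECT col1 FROM mytable ORDER BY col1 LIMIT 2").fetchall()
```

Sorting at insert time buys nothing; the sort has to travel with every query that reads the table.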
I am trying to get only the first row from this query. The query does not return the top row. Here is the query.
```
SELECT DISTINCT name, age
FROM donates, persons
WHERE name = donor
AND name IN (SELECT receiver FROM donates)
AND ROWNUM <= 1
ORDER BY age DESC;
```
When I run the query it returns `Chris | 35`.
Without the `ROWNUM <=1` this is what the table looks like.
```
NAME | AGE
-------------
george | 62
Chris | 35
zara | 24
``` | I think the best way to get this is to use a subquery as the rownum is being looked at before the order by so:
```
select * from (
SELECT DISTINCT name, age
FROM donates, persons
WHERE name = donor
AND name IN (SELECT receiver FROM donates)
ORDER BY age DESC
) where ROWNUM <= 1;
```
For a longer read look at <http://www.oracle.com/technetwork/issue-archive/2006/06-sep/o56asktom-086197.html> | Try to use `ROWID` rather than `ROWNUM` like this:
```
SELECT * FROM TABLE_NAME WHERE ROWID = (SELECT MIN(ROWID) FROM TABLE_NAME)
``` | Get first row in Oracle Sql | [
"",
"sql",
"oracle",
""
] |
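For contrast with Oracle's `ROWNUM` (which is assigned *before* the `ORDER BY` runs, hence the wrap-in-a-subquery fix in the accepted answer), SQLite's `LIMIT` is applied *after* sorting, so a sketch of the same "first row" query is direct (sample data from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT, age INT)")
conn.executemany("INSERT INTO people VALUES (?, ?)",
                 [("george", 62), ("Chris", 35), ("zara", 24)])

# LIMIT runs after ORDER BY, so this reliably returns the oldest person,
# which is exactly what the Oracle subquery trick achieves with ROWNUM.
row = conn.execute(
    "SELECT name, age FROM people ORDER BY age DESC LIMIT 1").fetchone()
```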
I *THINK* I'm having some trouble.
I'm trying to query 2 tables for customers that haven't appeared in either table for the last 3 years. The data spans 7+ years, so customers appear multiple times.
I think the issue with my current query is that it finds customers with data older than 3 years, but it doesn't account for those same customers also having data within the last 3 years.
Can someone possibly help me? I'm guessing the answer is to use only the data of the customer with the latest date and ignore previous data.
```
SELECT DISTINCT
tbl_Customer.CustomerID
, tbl_Customer.CustomerName
, Table1.ImportDate
, Table2.ImportDate
FROM
tbl_Customer
LEFT JOIN
Table1 ON tbl_Customer.CustomerName = Table1.CustomerName
LEFT JOIN
Table2 ON tbl_Customer.CustomerName = Table2.CustomerName
WHERE
(((DateAdd("yyyy", 3, [Table2].[ImportDate])) < Now())
AND
((DateAdd("yyyy", 3, [Table1].[ImportDate])) < Now()))
ORDER BY
Table1.ImportDate DESC,
Table2.ImportDate DESC;
``` | The core problem with the initial query is that, for no imports (which will happen for "no order" customers) the condition
```
DateAdd("yyyy", 3, ImportDate) < Now()
--> DateAdd("yyyy", 3, NULL) < Now()
--> NULL < Now()
--> NULL (or not true)
```
is not true. A simple fix is to add a guard
```
([Table1].[ImportDate] IS NULL
OR DateAdd("yyyy", 3, [Table1].[ImportDate]) < Now())
```
around such expressions or to coalesce the NULL value before using it.
The ordering will also be wrong, as that means order by one value and *then* the other, not "by the greater of both" values. Compare with
```
ORDER BY
IIF(Table1.ImportDate > Table2.ImportDate, Table1.ImportDate, Table2.ImportDate)
```
---
However, I would use a LEFT JOIN on customers/orders, GROUP BY with a MAX on the order dates. Then you can use *that* result (as a derived subquery) to complete the query asked fairly trivially.
```
SELECT
c.CustomerID
, MAX(o.ImportDate) as lastImport
FROM tbl_Customer as c
-- The UNION is to simply "normalize" to a single table.
-- (Also, shouldn't the join be on a customer "ID"?)
LEFT JOIN (
SELECT CustomerName, ImportDate from Table1
UNION
SELECT CustomerName, ImportDate from Table2) as o
ON c.CustomerName = o.CustomerName
GROUP BY c.CustomerID
```
Then,
```
SELECT s.CustomerID
FROM (thatSubQuery) as s
WHERE
-- no orders
s.lastImport IS NULL
-- only old orders
OR DateAdd("yyyy", 3, s.lastImport) < Now()
ORDER BY s.lastImport
```
(YMMV with MS Access, this will work in a "real" database ;-) | ```
SELECT DISTINCT
tbl_Customer.CustomerID,
tbl_Customer.CustomerName,
Table1.ImportDate,
Table2.ImportDate
FROM (tbl_Customer
LEFT JOIN Table1
ON tbl_Customer.CustomerName = Table1.CustomerName)
LEFT JOIN Table2
ON tbl_Customer.CustomerName = Table2.CustomerName
WHERE DateAdd("yyyy",3,[Table2].[ImportDate]) < Now()
AND DateAdd("yyyy",3,[Table1].[ImportDate]) < Now()
AND tbl_Customer.CustomerID NOT IN (
SELECT DISTINCT
tbl_Customer.CustomerID
FROM (tbl_Customer
LEFT JOIN Table1
ON tbl_Customer.CustomerName = Table1.CustomerName)
LEFT JOIN Table2
ON tbl_Customer.CustomerName = Table2.CustomerName
WHERE DateAdd("yyyy",3,[Table2].[ImportDate]) >= Now()
AND DateAdd("yyyy",3,[Table1].[ImportDate]) >= Now()
)
ORDER BY Table1.ImportDate DESC , Table2.ImportDate DESC;
``` | SQL Query For Customers Not Used For Last 3 Years | [
"",
"sql",
"ms-access",
""
] |
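The accepted answer's two ideas, normalizing the order tables with a `UNION` and then guarding the `NULL` case around a per-customer `MAX`, can be sketched in SQLite (table and column names are simplified stand-ins, and the "active" date is set far in the future so the example stays stable over time):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE table1 (customer_name TEXT, import_date TEXT);
CREATE TABLE table2 (customer_name TEXT, import_date TEXT);
INSERT INTO customer VALUES (1, 'old'), (2, 'active'), (3, 'never');
INSERT INTO table1 VALUES ('old', '2010-05-01');
INSERT INTO table2 VALUES ('active', '2999-01-01');
""")

# Latest import per customer across both tables; the IS NULL guard keeps
# "no orders at all" customers, which a bare date comparison would drop.
rows = conn.execute("""
    SELECT c.id, MAX(o.import_date) AS last_import
    FROM customer c
    LEFT JOIN (SELECT customer_name, import_date FROM table1
               UNION
               SELECT customer_name, import_date FROM table2) o
      ON c.name = o.customer_name
    GROUP BY c.id
    HAVING MAX(o.import_date) IS NULL
        OR MAX(o.import_date) < date('now', '-3 years')
""").fetchall()
stale_ids = {r[0] for r in rows}
```

Customer 2 has a recent import and is excluded; customers 1 (only old data) and 3 (no data, so `MAX` is `NULL`) are the stale ones.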
I have a scenario like this:
```
Account_Number | Ins1 |Ins2 | Ins3
8923647324 | 3 | 1 | 5
128736244 | 5 | 2 | 6
9238475323 | 6 | 3 | 7
```
I wanted to achieve something like
```
8923647324 | 3
8923647324 | 1
8923647324 | 5
128736244 | 5
128736244 | 2
128736244 | 6
```
It can be done using `UNION`, but I am looking for some built-in feature that won't affect data-retrieval performance, and I don't have any clue. Can anyone help me with this?
Using `UNION`
```
SELECT Account_Number,
Ins1 AS Ins
FROM YourTable
UNION ALL
SELECT Account_Number,
Ins2
FROM YourTable
UNION ALL
SELECT Account_Number,
Ins3
FROM YourTable
```
Using `CROSS APPLY`:
```
SELECT t.Accoun_Number,
x.Ins
FROM YourTable t
CROSS APPLY
(
VALUES
(t.Ins1),
(t.Ins2),
(t.Ins3)
) x (Ins);
```
I would add one way using `UNPIVOT`, but I'll leave that for you to research. | You can use `UNPIVOT`
```
SELECT Account_Number, Ins
FROM tbl
UNPIVOT
(
Ins FOR Val IN (Ins1, Ins2, Ins3)
) as unp
``` | transverse column to rows like matrix in sql server | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
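Of the options listed, the `UNION ALL` formulation is also the most portable one; it runs unchanged in SQLite, where `UNPIVOT` and `CROSS APPLY` don't exist. A runnable sketch with the question's sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (account_number TEXT, ins1 INT, ins2 INT, ins3 INT)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?, ?)",
                 [("8923647324", 3, 1, 5), ("128736244", 5, 2, 6)])

# One SELECT per Ins column, stacked with UNION ALL (keeps duplicates).
rows = conn.execute("""
    SELECT account_number, ins1 AS ins FROM t
    UNION ALL
    SELECT account_number, ins2 FROM t
    UNION ALL
    SELECT account_number, ins3 FROM t
""").fetchall()
```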
I have a query where data comes from the front end into an IN condition. The value arrives comma-separated, e.g. 002,003 or 002q, 4335f, 123d, shd4 or yuw98.
My query takes the value as `select * from tbl1 where userid in ('002,004')`, whereas it should be `userid in ('002','004')`.
I tried the below query to replace the string, but it does not work:
`Select * from tbl1 where UserId in (''''|| Replace('004,002', ',', ''',''') || '''');`
The same value, if I use it in
`Select (''''|| Replace('004,002', ',', ''',''') || '''') from dual;`
returns
`'004','002'`
Then why does the value not work in my original query? | For those who were looking for a solution, I found one:
`WHERE INSTRC('001,002', userid, 1, 1) > 0` | If you have a query like this:
```
select * from tbl1 where userid in ('002,004')
```
And, for some reason, you *have* to represent the list of values as a string (assuming you want to search for two values in this case), then you can use `like` to find matches:
```
where ','||userid||',' like ','||'002,004'||','
```
I encourage you to try to find a way to make `in` work, by properly constructing the query. For one thing, this would allow the query to take advantage of an index. But, if not reasonable, then the `like` approach will work. | Oracle Query - Replace not working | [
"",
"sql",
"oracle",
"oracle11g",
""
] |
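Both answers' behavior is easy to reproduce with sqlite3; note that SQLite's `instr(haystack, needle)` takes two arguments, unlike the four-argument Oracle form above, and that a plain substring match has a false-positive risk the second answer's delimiter-wrapping avoids (sample userids invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl1 (userid TEXT)")
conn.executemany("INSERT INTO tbl1 VALUES (?)", [("002",), ("004",), ("9",), ("00",)])

# IN ('002,004') compares against one literal string: nothing matches.
literal = conn.execute("SELECT userid FROM tbl1 WHERE userid IN ('002,004')").fetchall()

# instr() finds any userid occurring inside the list string...
found = {r[0] for r in conn.execute(
    "SELECT userid FROM tbl1 WHERE instr('002,004', userid) > 0")}

# ...but '00' also matches, since it is a substring of '002'. Wrapping both
# sides in the delimiter removes that false positive.
safe = {r[0] for r in conn.execute(
    "SELECT userid FROM tbl1 "
    "WHERE instr(',' || '002,004' || ',', ',' || userid || ',') > 0")}
```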
Table names: RANKTITLE, RANK
```
//RANKTITLE
TITLE
--------------
TEST1
TEST2
TEST3
TEST4
//RANK
TITLE SALARY
---------------------
TEST1 4500000
TEST2 80000
TEST3 26000
TEST4 1800000
```
So I want to display all the titles except the one with the highest salary.
The output will be like:
```
TITLE SALARY
--------------------
TEST2 80000
TEST3 26000
TEST4 1800000
```
Using this query
```
SELECT r.TITLE FROM RANKTITLE r WHERE
r.TITLE IN (SELECT TITLE FROM RANK) AND
(SELECT MAX(SALARY) FROM RANK WHERE r.TITLE=p.TITLE AND SALARY < MAX(SALARY));
```
I can't get the output i want. isn't any mistake here? | Test this:
```
SELECT r.TITLE, r.SALARY
FROM RANK r
WHERE r.SALARY < (SELECT MAX(s.SALARY) FROM RANK s)
``` | ```
SELECT r.TITLE, salary FROM RANK r WHERE
salary not in (SELECT MAX(SALARY) FROM RANK);
```
If you have only one record with the highest rank, only it will be excluded.
If you have more than one record with the highest rank, all will be excluded.
```
select title from (
SELECT TITLE, rownum rnum FROM RANK
ORDER by salary desc, title)
where rnum <> 1;
```
will exclude only unlucky first record. You don't even need RANKTITLE unless there would be some more fields that you want to grab. | SELECT clause with exist condition | [
"",
"sql",
"oracle",
""
] |
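The accepted answer's shape, comparing each row's salary against a scalar `MAX` subquery, runs as-is in SQLite; the table is renamed `rank_tbl` here since RANK is a reserved word in several dialects:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rank_tbl (title TEXT, salary INT)")
conn.executemany("INSERT INTO rank_tbl VALUES (?, ?)",
                 [("TEST1", 4500000), ("TEST2", 80000),
                  ("TEST3", 26000), ("TEST4", 1800000)])

# Every row strictly below the maximum salary.
rows = conn.execute("""
    SELECT title, salary FROM rank_tbl
    WHERE salary < (SELECT MAX(salary) FROM rank_tbl)
    ORDER BY salary DESC
""").fetchall()
```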
I'm having a slight issue merging the following statements.
```
declare @From DATE
SET @From = '01/01/2014'
declare @To DATE
SET @To = '31/01/2014'
--ISSUED SB
SELECT
COUNT(pm.DateAppIssued) AS Issued,
pm.Lender,
pm.AmountRequested,
p.CaseTypeID
FROM BPS.dbo.tbl_Profile_Mortgage AS pm
INNER JOIN BPS.dbo.tbl_Profile AS p
ON pm.FK_ProfileId = p.Id
WHERE CaseTypeID = 2
AND (CONVERT(DATE,DateAppIssued, 103)
Between CONVERT(DATE,@From,103) and CONVERT(DATE,@To,103))
And Lender > ''
GROUP BY pm.Lender,p.CaseTypeID,pm.AmountRequested;
--Passed
SELECT
COUNT(pm.DatePassed) AS Passed,
pm.Lender,
pm.AmountRequested,
p.CaseTypeID
FROM BPS.dbo.tbl_Profile_Mortgage AS pm
INNER JOIN BPS.dbo.tbl_Profile AS p
ON pm.FK_ProfileId = p.Id
WHERE CaseTypeID = 2
AND (CONVERT(DATE,DatePassed, 103)
Between CONVERT(DATE,@From,103) and CONVERT(DATE,@To,103))
And Lender > ''
GROUP BY pm.Lender,p.CaseTypeID,pm.AmountRequested;
--Received
SELECT
COUNT(pm.DateAppRcvd) AS Received,
pm.Lender,
pm.AmountRequested,
p.CaseTypeID
FROM BPS.dbo.tbl_Profile_Mortgage AS pm
INNER JOIN BPS.dbo.tbl_Profile AS p
ON pm.FK_ProfileId = p.Id
WHERE CaseTypeID = 2
AND (CONVERT(DATE,DateAppRcvd, 103)
Between CONVERT(DATE,@From,103) and CONVERT(DATE,@To,103))
And Lender > ''
GROUP BY pm.Lender,p.CaseTypeID,pm.AmountRequested;
--Offered
SELECT
COUNT(pm.DateOffered) AS Offered,
pm.Lender,
pm.AmountRequested,
p.CaseTypeID
FROM BPS.dbo.tbl_Profile_Mortgage AS pm
INNER JOIN BPS.dbo.tbl_Profile AS p
ON pm.FK_ProfileId = p.Id
WHERE CaseTypeID = 2
AND (CONVERT(DATE,DateOffered, 103)
Between CONVERT(DATE,@From,103) and CONVERT(DATE,@To,103))
And Lender > ''
GROUP BY pm.Lender,p.CaseTypeID,pm.AmountRequested;
```
Ideally I would like the results of these queries to show as follows:
Issued, Passed, Offered, Received
All in one table.
Any help on this would be greatly appreciated.
Thanks
Rusty | I'm fairly certain in this case the query can be written without the use of any `CASE` statements, actually:
```
DECLARE @From DATE = '20140101'
declare @To DATE = '20140201'
SELECT Mortgage.lender, Mortgage.amountRequested, Profile.caseTypeId,
COUNT(Issue.issued) as issued,
COUNT(Pass.passed) as passed,
COUNT(Receive.received) as received,
COUNT(Offer.offered) as offered
FROM BPS.dbo.tbl_Profile_Mortgage as Mortgage
JOIN BPS.dbo.tbl_Profile as Profile
ON Mortgage.fk_profileId = Profile.id
AND Profile.caseTypeId = 2
LEFT JOIN (VALUES (1, @From, @To)) Issue(issued, rangeFrom, rangeTo)
ON Mortgage.DateAppIssued >= Issue.rangeFrom
AND Mortgage.DateAppIssued < Issue.rangeTo
LEFT JOIN (VALUES (2, @From, @To)) Pass(passed, rangeFrom, rangeTo)
ON Mortgage.DatePassed >= Pass.rangeFrom
AND Mortgage.DatePassed < Pass.rangeTo
LEFT JOIN (VALUES (3, @From, @To)) Receive(received, rangeFrom, rangeTo)
ON Mortgage.DateAppRcvd >= Receive.rangeFrom
AND Mortgage.DateAppRcvd < Receive.rangeTo
LEFT JOIN (VALUES (4, @From, @To)) Offer(offered, rangeFrom, rangeTo)
ON Mortgage.DateOffered >= Offer.rangeFrom
AND Mortgage.DateOffered < Offer.rangeTo
WHERE Mortgage.lender > ''
AND (Issue.issued IS NOT NULL
OR Pass.passed IS NOT NULL
OR Receive.received IS NOT NULL
OR Offer.offered IS NOT NULL)
GROUP BY Mortgage.lender, Mortgage.amountRequested, Profile.caseTypeId
```
(not tested, as I lack a provided data set).
... Okay, some explanations are in order, because some of this is slightly non-intuitive.
First off, [read this blog entry](https://sqlblog.org/2011/10/19/what-do-between-and-the-devil-have-in-common) for tips about dealing with date/time/timestamp ranges (interestingly, this also applies to all other non-integral types). This is why I modified the `@To` date - so the range could be safely queried without needing to convert types (and thus ignore indices). I've also made sure to choose a safe format - depending on how you're calling this query, this is a non issue (ie, parameterized queries taking an actual `Date` type are essentially format-less).
```
......
COUNT(Issue.issued) as issued,
......
LEFT JOIN (VALUES (1, @From, @To)) Issue(issued, rangeFrom, rangeTo)
ON Mortgage.DateAppIssued >= Issue.rangeFrom
AND Mortgage.DateAppIssued < Issue.rangeTo
.......
```
What's the difference between `COUNT(*)` and `COUNT(<expression>)`? If `<expression>` evaluates to `null`, it's ignored. Hence the `LEFT JOIN`s; if the entry for the mortgage isn't in the given date range for the column, the dummy table doesn't attach, and there's no column to count. Unfortunately, I'm not sure how the interplay between the dummy table, `LEFT JOIN`, and `COUNT()` here will appear to the optimizer - the joins should be able to use indices, but I don't know if it's smart enough to be able to use that for the `COUNT()` here too....
```
(Issue.issued IS NOT NULL
OR Pass.passed IS NOT NULL
OR Receive.received IS NOT NULL
OR Offer.offered IS NOT NULL)
```
This is essentially telling it to ignore rows that don't have at least *one* of the columns. They wouldn't be "counted" in any case (well, they'd likely have `0`) - there's no data for the function to consider - but they **would** show up in the results, which probably isn't what you want. I'm not sure if the optimizer is smart enough to use this to restrict which rows it operates over - that is, turn the `JOIN` conditions into a way to restrict the various date columns, as if they were in the `WHERE` clause too. If the query runs slow, try adding the date restrictions to the `WHERE` clause and see if it helps. | You could either as Dan Bracuk states use a union, or you could use a case-statement.
```
declare @From DATE = '01/01/2014'
declare @To DATE = '31/01/2014'
select
sum(case when (CONVERT(DATE,DateAppIssued, 103) Between CONVERT(DATE,@From,103) and CONVERT(DATE,@To,103)) then 1 else 0 end) as Issued
, sum(case when (CONVERT(DATE,DatePassed, 103) Between CONVERT(DATE,@From,103) and CONVERT(DATE,@To,103)) then 1 else 0 end) as Passed
, sum(case when (CONVERT(DATE,DateAppRcvd, 103) Between CONVERT(DATE,@From,103) and CONVERT(DATE,@To,103)) then 1 else 0 end) as Received
, sum(case when (CONVERT(DATE,DateOffered, 103) Between CONVERT(DATE,@From,103) and CONVERT(DATE,@To,103)) then 1 else 0 end) as Offered
, pm.Lender
, pm.AmountRequested
, p.CaseTypeID
FROM BPS.dbo.tbl_Profile_Mortgage AS pm
INNER JOIN BPS.dbo.tbl_Profile AS p
ON pm.FK_ProfileId = p.Id
WHERE CaseTypeID = 2
And Lender > ''
GROUP BY pm.Lender,p.CaseTypeID,pm.AmountRequested;
```
**Edit:**
What I've done is looked at your queries.
All four queries have identical Where Clause, with the exception of the date comparison. Therefore I've created a new query, which selects all your data which *might* be used in one of the four counts.
The last clause; the data-comparison, is moved into a case statement, returning 1 if the row is between the selected date-range, and 0 otherwise. This basically indicates whether the row would be returned in your previous queries.
Therefore a sum of this column would return the equivalent of a count(\*), with this date-comparison in the where-clause.
**Edit 2 (After comments by Clockwork-muse):**
Some notes on performance, (tested on MS-SQL 2012):
Changing BETWEEN to ">=" and "<" inside a case-statement does not affect the cost of the query.
Depending on the size of the table, the query might be optimized quite a lot, by adding the dates in the where clause.
In my sample data (~20,000,000 rows, spanning from 2001 to today), I got a 48% increase in speed by adding:
```
or (DateAppIssued BETWEEN @From and @to )
or (DatePassed BETWEEN @From and @to )
or (DateAppRcvd BETWEEN @From and @to )
or (DateOffered BETWEEN @From and @to )
```
(There were no difference using BETWEEN and ">=" and "<".)
It is also worth noting that I got a 6% increase when changing @From = '01/01/2014' to @From = '2014-01-01' and thus omitting the convert().
Eg. an optimized query could be:
```
declare @From DATE = '2014-01-01'
declare @To DATE = '2014-01-31'
select
sum(case when (DateAppIssued >= @From and DateAppIssued < @To) then 1 else 0 end) as Issued
, sum(case when (DatePassed >= @From and DatePassed < @To) then 1 else 0 end) as Passed
, sum(case when (DateAppRcvd >= @From and DateAppRcvd < @To) then 1 else 0 end) as Received
, sum(case when (DateOffered >= @From and DateOffered < @To) then 1 else 0 end) as Offered
, pm.Lender
, pm.AmountRequested
, p.CaseTypeID
FROM BPS.dbo.tbl_Profile_Mortgage AS pm
INNER JOIN BPS.dbo.tbl_Profile AS p
ON pm.FK_ProfileId = p.Id
WHERE 1=1
and CaseTypeID = 2
and Lender > ''
and (
(DateAppIssued >= @From and DateAppIssued < @To)
or (DatePassed >= @From and DatePassed < @To)
or (DateAppRcvd >= @From and DateAppRcvd < @To)
or (DateOffered >= @From and DateOffered < @To)
)
GROUP BY pm.Lender,p.CaseTypeID,pm.AmountRequested;
```
I do however really like Clockwork-muse's answer, as I prefer joins to case-statements, where possible :) | SQL Merging 4 Queries to one | [
"",
"sql",
"sql-server",
"merge",
""
] |
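The conditional-aggregation pattern from the second answer (one `SUM(CASE ...)` per date column) can be sketched with two of the four columns, using the half-open date range that the discussion recommends over `BETWEEN`; table and lender values are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE m (lender TEXT, issued TEXT, passed TEXT)")
conn.executemany("INSERT INTO m VALUES (?, ?, ?)", [
    ("A", "2014-01-10", "2014-02-05"),   # issued in range, passed outside
    ("A", "2014-01-20", "2014-01-25"),   # both in range
    ("B", "2013-12-30", "2014-01-02"),   # only passed in range
])

# Each SUM(CASE ...) counts rows whose date falls in [2014-01-01, 2014-02-01).
rows = conn.execute("""
    SELECT lender,
           SUM(CASE WHEN issued >= '2014-01-01' AND issued < '2014-02-01'
                    THEN 1 ELSE 0 END) AS n_issued,
           SUM(CASE WHEN passed >= '2014-01-01' AND passed < '2014-02-01'
                    THEN 1 ELSE 0 END) AS n_passed
    FROM m
    GROUP BY lender
    ORDER BY lender
""").fetchall()
```

One pass over the table produces all the per-column counts at once, which is the whole point of merging the four queries.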
I am a little stuck with this SQL query. I have two tables:
```
TbleName - usrs TbleName - idtb
name | cid cname | cid
------------- ----------------
james | 1100 IT | 1100
john | 1200 HR | 1300
jack | 1100 QA | 1200
bill | 1300 HD | 1400
troy | 1100
SELECT COUNT(*) as 'Total' FROM usrs u WHERE u.cid = 1100;
SELECT c.cname FROM idtb c WHERE c.cid = 1100;
```
My first query returns 3 and my second query returns IT. Now I want to join these 2 queries into one that would produce this result:
```
Total | Cname
------------------
3 | IT
```
I tried several ways and this worked
```
SELECT COUNT(*) as 'Total',c.cname FROM usrs u JOIN
idtb c ON u.cid = c.cid WHERE u.cid = 1100
GROUP BY u.cid
```
But the query does not seem to work when u.cid = 1400, since there are no names in the usrs table with cid value 1400; it returns an empty result, but I want the result to be:
```
Total | Cname
-------------------
0 | HD
```
The query does not work if there are no records in usrs. I tried using left, right and full joins but did not figure it out. Any help is greatly appreciated. | As you want to print all `cname` but counting `usrs` this is the query you need:
```
select a.cname, count(b.name)
from idtb a left join usrs b on (a.cid = b.cid)
group by a.cname
```
If you want to add a filter for some of them, just add a where clause.
See it here on fiddle: <http://sqlfiddle.com/#!2/0acec/2> | ```
SELECT COUNT(*) as 'Total', idtb.cname
FROM idtb LEFT JOIN usrs
ON usrs.cid=idtb.cid
WHERE usrs.cid=1100
GROUP BY idtb.cname
``` | Count (*) on two tables | [
"",
"mysql",
"sql",
""
] |
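The accepted answer's trick, `LEFT JOIN` plus `COUNT(b.name)` rather than `COUNT(*)`, is worth seeing run: `COUNT(column)` skips the `NULL`s the outer join produces, so HD counts as 0 instead of vanishing. A sqlite3 sketch with the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE idtb (cname TEXT, cid INT);
CREATE TABLE usrs (name TEXT, cid INT);
INSERT INTO idtb VALUES ('IT', 1100), ('HR', 1300), ('QA', 1200), ('HD', 1400);
INSERT INTO usrs VALUES ('james', 1100), ('john', 1200), ('jack', 1100),
                        ('bill', 1300), ('troy', 1100);
""")

# COUNT(b.name) ignores the NULLs a LEFT JOIN produces for unmatched cids.
rows = conn.execute("""
    SELECT a.cname, COUNT(b.name) AS total
    FROM idtb a LEFT JOIN usrs b ON a.cid = b.cid
    GROUP BY a.cname
    ORDER BY a.cname
""").fetchall()

# The single-department case the asker wanted: cid 1400 yields 0 | HD.
hd = conn.execute("""
    SELECT COUNT(b.name), a.cname
    FROM idtb a LEFT JOIN usrs b ON a.cid = b.cid
    WHERE a.cid = 1400
    GROUP BY a.cname
""").fetchone()
```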
I am trying to do a wildcard search on a value from a joined item, i.e.:
```
SELECT
(
SELECT
count(*)
FROM
user_contacts e
WHERE
e.user = 1 AND
(
e.name LIKE '%u.lastname%'
OR
e.name LIKE '%u.firstname%'
)
) as friends_count,
u.user_id,
u.firstname,
u.lastname
FROM
users u
```
But it doesn't work. I could instead do:
`e.name LIKE u.lastname`
That would work, but it would not include the `%` wildcards, which I need.
Any ideas? | You could concatenate it.
```
e.name LIKE CONCAT('%', u.lastname, '%')
``` | You just need to use the right syntax for concatenation for MySQL:
```
SELECT (SELECT count(*)
FROM user_contacts e
WHERE e.user = 1 AND
(e.name LIKE concat('%', u.lastname, '%') or
e.name LIKE concat('%', u.firstname, '%')
)
) as friends_count,
u.user_id, u.firstname, u.lastname
FROM users u;
``` | Wilcard Search LIKE %e.name% from INNER JOIN | [
"",
"mysql",
"sql",
""
] |
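Both answers hinge on concatenating the wildcards around the column value. SQLite spells MySQL's `CONCAT()` as the `||` operator, but the idea is identical; a sketch with invented names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (user_id INT, firstname TEXT, lastname TEXT);
CREATE TABLE user_contacts (user INT, name TEXT);
INSERT INTO users VALUES (1, 'Ann', 'Smith');
INSERT INTO user_contacts VALUES (1, 'Bob Smithson'), (1, 'Ann Jones'), (1, 'Carl Doe');
""")

# '%' || u.lastname || '%' builds the pattern from the joined row's value,
# the SQLite equivalent of CONCAT('%', u.lastname, '%').
rows = conn.execute("""
    SELECT u.user_id,
           (SELECT COUNT(*) FROM user_contacts e
            WHERE e.user = 1
              AND (e.name LIKE '%' || u.lastname || '%'
                   OR e.name LIKE '%' || u.firstname || '%')) AS friends_count
    FROM users u
""").fetchall()
```

'Bob Smithson' matches on the lastname pattern and 'Ann Jones' on the firstname pattern, so the count is 2.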
I sort of understand how this works but need more help
original question:
Use a correlated subquery to return one row per customer, representing the customer’s oldest order (the one with the earliest date). Each row should include these
three columns: email\_address, order\_id, and order\_date.
My Answer:
```
select email_address, order_id, order_date
from customers as T natural join orders
where order_date =
(select min(order_date)
from orders as S
where T.customer_id = S.customer_id
)
```
These are the schema:
customers: customer\_id, email\_address, password, first\_name, last\_name
orders: order\_id, customer\_id, order\_date, ship\_amount
My understanding is, I natural join customers and orders first. This gives me only the customers who actually have orders. Then I enter the where. I take the orders table, and select only tuples where customer\_id matches the parent one (but this part seems redundant to me since the parent and subquery should have exactly the same customer\_id's)
Then select the min(order\_date) from those tuples, then check for parent order\_date equal to that (the subquery returns only one tuple). This means that (with my understanding) I should have only one tuple in my result (since only one tuple will exactly match that order\_date)
I don't understand how I end up with 7 tuples (note: my answer is apparently correct).
Thanks for any help | so along with the responses from everyone, I figured this would be helpful to understand the OP's question `I don't understand how having the subquery return a single tuple allows me to get more than one tuple in my final result.`
think of it like this... you have multiple orders on one date.
```
+-------+----------------+----------------+----------------+
| id | order_date | order_quantity | email_address |
+-------+----------------+----------------+----------------+
| 1 | 5/16 | 5 |bill@win.ing |
| 2 | 5/16 | 6 |jim@win.ing |
| 3 | 5/16 | 6 |stinky@win.ing |
| 4 | 5/12 | 1 |tom@win.ing |
| 5 | 5/12 | 7 |jeremy@win.ing |
| 6 | 5/12 | 3 |silly@win.ing |
+-------+----------------+----------------+----------------+
```
There are **three** orders sharing the same date, so when you run a `SELECT` like this:
```
select email_address, order_id, order_date
from customers as T natural join orders
where order_date =
(select min(order_date)
from orders as S
where T.customer_id = S.customer_id
)
```
What this is actually saying is this
```
SELECT
email_address,
order_id,
order_date
FROM customers AS T
NATURAL JOIN orders
WHERE order_date = "5/16"
```
As you can see, there are multiple records in the table with the date May 16th; the subquery returns **one tuple** (that date), so all of the records matching it will be returned.
the result of the query with my sample data would look like this
```
+-------+----------------+----------------+----------------+
| id | order_date | order_quantity | email_address |
+-------+----------------+----------------+----------------+
| 1 | 5/16 | 5 |bill@win.ing |
| 2 | 5/16 | 6 |jim@win.ing |
| 3 | 5/16 | 6 |stinky@win.ing |
+-------+----------------+----------------+----------------+
```
**KEY NOTE:** **all of the records on that specific date will be returned**
Hope that helps clarify :) | Here is a breakdown of what is happening.
These are the columns of data you wish to get:
`select email_address, order_id, order_date`
Selecting data from the customers table:
`from customers as T`
Matching only customers who have made an order:
`natural join orders`
The above could have been better represented like this:
`inner join orders as O ON O.customer_id = T.customer_id`
This says to get orders matching a specific date, in this case their first order; if they made multiple orders on the same date they would show up more than once:
`where order_date = (select min(order_date) from orders as S where T.customer_id = S.customer_id)`
if you want to eliminate the duplicates you can do so by doing either of the following:
(1) adding the keyword "DISTINCT" without quotes after select.
```
select DISTINCT email_address, order_id, order_date
from customers as T natural join orders
where order_date =
(select min(order_date)
from orders as S
where T.customer_id = S.customer_id
)
```
OR
(2) by adding a GROUP BY clause after the WHERE clause
```
select email_address, order_id, order_date
from customers as T natural join orders
where order_date =
(select min(order_date)
from orders as S
where T.customer_id = S.customer_id
)
GROUP BY email_address, order_id, order_date
```
However you will likely find that you are getting multiple customer_id with multiple order_id, and that would be because they have the same order_date.
You might be able to rewrite the query like the following, assuming your order_id are sequential.
```
select email_address, order_id, order_date
from customers as T natural join orders
where order_id =
(select min(order_id)
from orders as S
where T.customer_id = S.customer_id
)
```
Hope this helps. | Understanding this MySQL Query | [
"",
"mysql",
"sql",
""
] |
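The behaviour the asker is puzzled by, one row per customer because the correlated subquery computes a *per-customer* minimum, can be sketched with sqlite3 (two customers, two orders each; seven customers would give seven rows the same way):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (customer_id INT, email_address TEXT);
CREATE TABLE orders (order_id INT, customer_id INT, order_date TEXT);
INSERT INTO customers VALUES (1, 'a@x'), (2, 'b@x');
INSERT INTO orders VALUES (10, 1, '2020-01-05'), (11, 1, '2020-03-01'),
                          (20, 2, '2021-06-06'), (21, 2, '2021-02-02');
""")

# The subquery is re-evaluated for each outer row with that row's customer_id,
# so each customer's orders are compared against that customer's own minimum.
rows = conn.execute("""
    SELECT c.email_address, o.order_id, o.order_date
    FROM customers c JOIN orders o ON c.customer_id = o.customer_id
    WHERE o.order_date = (SELECT MIN(s.order_date) FROM orders s
                          WHERE s.customer_id = c.customer_id)
    ORDER BY c.customer_id
""").fetchall()
```

Exactly one row survives per customer here; a tie on a customer's earliest date would yield duplicates, which is where the answers' `DISTINCT`/`GROUP BY` suggestions come in.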
For the below query, the MAX function seems to be completely ignored; it has no effect on the result of the query.
```
SELECT alarm_severity_id
FROM alarm_notification
WHERE alarm_life_cycle_id = 25
having MAX(event_timestamp);
``` | This is your query:
```
SELECT alarm_severity_id
FROM alarm_notification
WHERE alarm_life_cycle_id = 25
HAVING MAX(event_timestamp);
```
Your `HAVING` clause is saying: take the maximum of the timestamp. If the maximum is not equal to zero or NULL, then everything is fine. This is exactly equivalent:
```
HAVING MAX(event_timestamp) <> 0
```
I suspect you want:
```
SELECT alarm_severity_id
FROM alarm_notification
WHERE alarm_life_cycle_id = 25
ORDER BY event_timestamp DESC
LIMIT 1;
```
EDIT:
To optimize performance, create an index on `alarm_life_cycle_id` and `event_timestamp`:
```
create index alarm_notification_alci_et on alarm_notification(alarm_life_cycle_id, event_timestamp)
``` | 1. There should be correct condition like `having MAX(event_timestamp) > 100`
2. There should be `group by` if you use `MAX`
3. Or do as @Gordon Linoff said (with `order by` + `limit 1`). | SQL Function calls ignored with "HAVING" clause | [
"",
"mysql",
"sql",
""
] |
Table B has the following column: `Email Address`
and Table A has the following column: `email`.
How do I update Table A's `newsletter` to 0 where A.`email` = B.`Email Address`?
I've tried the following, which seems to fail:
```
UPDATE TableA
SET TableA.newsletter = 0
FROM TableB, TableA
WHERE TableA.email = TableB.Email Address
``` | This is how the query should look like while updating data joining 2 tables with a key
```
update
TableA t1
inner join TableB t2 on t2.`Email Address` = t1.email
set t1.newsletter = 0
``` | You're just missing backticks for the TableB.Email Address column, anytime you have special characters or spaces you will need to add backticks for columns\tablenames\schemas.
```
UPDATE
TableA A, TableB B
SET A.newsletter = 0
WHERE A.email = B.`Email Address`
``` | update a certain column in table A when table A email equals to table B email | [
"",
"mysql",
"sql",
""
] |
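Multi-table UPDATE syntax differs per dialect (the accepted answer shows MySQL's `UPDATE ... JOIN`); a portable equivalent moves the other table into a subquery in the WHERE clause. A sqlite3 sketch with invented rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table_a (email TEXT, newsletter INT);
CREATE TABLE table_b ("Email Address" TEXT);
INSERT INTO table_a VALUES ('x@x', 1), ('y@y', 1), ('z@z', 1);
INSERT INTO table_b VALUES ('x@x'), ('z@z');
""")

# Clear the newsletter flag only for emails that appear in table_b.
# (SQLite quotes the space-containing column with double quotes,
# where MySQL would use backticks.)
conn.execute("""
    UPDATE table_a SET newsletter = 0
    WHERE email IN (SELECT "Email Address" FROM table_b)
""")
rows = conn.execute("SELECT email, newsletter FROM table_a ORDER BY email").fetchall()
```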
For example, I have these 2 tables:
```
//DEPARTMENT
DNAME CITY
--------------------
RESEARCH BOSTON
IT CITY
SCIENCE DALLAS
RESEARCH DALLAS
```
So I use this statement:
```
(SELECT DNAME FROM DEPARTMENT WHERE CITY='BOSTON')
UNION
(SELECT DNAME FROM DEPARTMENT WHERE CITY='DALLAS');
```
Using this statement I get this output:
```
DNAME
-----
RESEARCH
SCIENCE
```
But my question is to choose departments in either 'BOSTON' or 'DALLAS', but not in both. My output should be like this:
```
DNAME
--------
SCIENCE
```
What should I change in order to get the correct output? | You should use `INTERSECT` and `MINUS` (for Oracle):
```
-- At least at Boston or at Dallas
(SELECT DNAME
FROM DEPARTMENT
WHERE CITY = 'DALLAS'
UNION
SELECT DNAME
FROM DEPARTMENT
WHERE CITY = 'BOSTON')
MINUS
-- At Boston and at Dallas
(SELECT DNAME
FROM DEPARTMENT
WHERE CITY = 'DALLAS'
INTERSECT
SELECT DNAME
FROM DEPARTMENT
WHERE CITY = 'BOSTON')
```
This works since `UNION` *adds* subqueries up while `EXCEPT`/`MINUS` *subtracts* them. | You can filter out the dnames first and then use GROUP BY to select only those dnames that are present in one city only.
```
select dname
from department
where city in ('BOSTON', 'DALLAS')
group by dname
having count(city) = 1;
``` | SELECT using UNION Clause | [
"",
"sql",
"oracle",
""
] |
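The GROUP BY/HAVING formulation from the second answer is easy to verify in SQLite, which spells MINUS as `EXCEPT` and evaluates compound set operators left-to-right, so the grouped form is also the simpler one to port; `COUNT(DISTINCT city)` guards against duplicate rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE department (dname TEXT, city TEXT);
INSERT INTO department VALUES ('RESEARCH', 'BOSTON'), ('IT', 'CITY'),
                              ('SCIENCE', 'DALLAS'), ('RESEARCH', 'DALLAS');
""")

# Departments in Boston or Dallas, but not in both cities (exclusive or).
rows = conn.execute("""
    SELECT dname FROM department
    WHERE city IN ('BOSTON', 'DALLAS')
    GROUP BY dname
    HAVING COUNT(DISTINCT city) = 1
""").fetchall()
```

RESEARCH appears in both cities (2 distinct values) and is dropped; SCIENCE survives.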
I have a SQL database that it is possible to update. As the SQL will be server-based, I only want to update values as I need to: if only one column changes, only one column updates.
I have a mechanism that detects whether values have changed, and currently I have a messy function that constructs the SQL syntax and then binds the data to it. What I'm wondering is: is there any value I can bind to the syntax so that SQL doesn't update that column? For example:
```
@"update TABLE1 Set col1 = ?, col2 = ? Where ROWID = '%d'",ROWID];
```
So if col2 hasn't changed, is there any value I can bind to it so it doesn't update that column? | Basically no, there is no "magic" value you can bind that will leave col2 unchanged.
Binding the old value (i.e. updating with the same value col2 already has) will give the desired effect but requires you to update a single row at a time and know the old value for each row.
Rebuilding the SQL to not include `col2` is most likely the right thing to do in most circumstances. | Get all column values in some variables...say col1,col2.(for that specific row)
> If column1 is changed you may update variable col1 value.
>
> Similarly,If column2 is changed you may update variable col2 value.
and the single UPDATE QUERY you mentioned above should do the required update.
"",
"sql",
"objective-c",
""
] |
I am trying to select some values from a database table and then convert them to human-readable words. I am getting many errors in the third CASE statement. Can somebody tell me where I am going wrong?
Errors
> Msg 156, Level 15, State 1, Line 10
> Incorrect syntax near the keyword 'or'.
> Msg 156, Level 15, State 1, Line 11
> Incorrect syntax near the keyword 'AS'.
Code:
```
SELECT
P.id AS Ref,
CASE
WHEN P.listed = 1 THEN 'Yes'
ELSE 'No'
END AS Listed,
CONVERT(DATE, CONVERT(VARCHAR, P.listeddate, 12)) AS DateListed,
CASE
WHEN P.premium = 1 THEN 'Premium'
ELSE 'Free'
END AS Type,
P.postcode AS London,
CONVERT(DATE, CONVERT(VARCHAR, (SELECT TOP 1 moveindate
FROM propertytenant
WHERE P.id = propertyid
ORDER BY id), 12)) AS MoveInDate,
(SELECT FirstName
FROM UserDetails
WHERE UserId = (SELECT UserId
FROM aspnet_Membership
WHERE LoweredEmail = P.CreatedBy)) AS FirstName,
(SELECT LastName
FROM UserDetails
WHERE UserId = (SELECT UserId
FROM aspnet_Membership
WHERE LoweredEmail = P.CreatedBy)) AS LastName,
(SELECT Mobile
FROM UserDetails
WHERE UserId = (SELECT UserId
FROM aspnet_Membership
WHERE LoweredEmail = P.CreatedBy)) AS Mobile,
CASE
WHEN (SELECT TOP 1 DepositScheme
FROM [Rent].[dbo].[Contract]
WHERE PropertyID = P.Id) = 1
THEN 'DPS' or = 2
THEN 'DPS2'
ELSE 'Other'
AS DPSType,
(SELECT DPSNumber
FROM UserDetails
WHERE UserId = (SELECT UserId
FROM aspnet_Membership
WHERE LoweredEmail = P.CreatedBy)) AS DPSNumber
P.createdby AS Email
FROM
[Rent].[dbo].[property] AS P;
``` | For Multiple options in one `CASE` statement use more `WHEN` conditions
So Try
```
SELECT P.id AS Ref,
CASE
WHEN P.listed = 1 THEN 'Yes'
ELSE 'No'
END AS Listed,
CONVERT(DATE, CONVERT(VARCHAR, P.listeddate, 12)) AS DateListed,
CASE
WHEN P.premium = 1 THEN 'Premium'
ELSE 'Free'
END AS Type,
P.postcode AS London,
CONVERT(DATE, CONVERT(VARCHAR, (SELECT TOP 1 moveindate
FROM propertytenant
WHERE P.id = propertyid
ORDER BY id), 12)) AS MoveInDate,
(SELECT firstname
FROM userdetails
WHERE userid = (SELECT userid
FROM aspnet_membership
WHERE loweredemail = P.createdby)) AS FirstName,
(SELECT lastname
FROM userdetails
WHERE userid = (SELECT userid
FROM aspnet_membership
WHERE loweredemail = P.createdby)) AS LastName,
(SELECT mobile
FROM userdetails
WHERE userid = (SELECT userid
FROM aspnet_membership
WHERE loweredemail = P.createdby)) AS Mobile,
CASE
WHEN (SELECT TOP 1 depositscheme
FROM [Rent].[dbo].[contract]
WHERE propertyid = P.id) = 1 THEN 'DPS'
WHEN (SELECT TOP 1 depositscheme
FROM [Rent].[dbo].[contract]
WHERE propertyid = P.id) = 2 THEN 'DPS2'
ELSE 'Other'
END AS DPSType,
(SELECT dpsnumber
FROM userdetails
WHERE userid = (SELECT userid
FROM aspnet_membership
WHERE loweredemail = P.createdby)) AS DPSNumber,
P.createdby AS Email
FROM [Rent].[dbo].[property] AS P;
``` | Try this if it works
```
SELECT P.id AS Ref,
CASE
WHEN P.listed = 1 THEN 'Yes'
ELSE 'No'
END AS Listed,
CONVERT(DATE, CONVERT(VARCHAR, P.listeddate, 12)) AS DateListed,
CASE
WHEN P.premium = 1 THEN 'Premium'
ELSE 'Free'
END AS Type,
P.postcode AS London,
CONVERT(DATE, CONVERT(VARCHAR, (SELECT TOP 1 moveindate
FROM propertytenant
WHERE P.id = propertyid
ORDER BY id), 12)) AS MoveInDate,
(SELECT firstname
FROM userdetails
WHERE userid = (SELECT userid
FROM aspnet_membership
WHERE loweredemail = P.createdby)) AS FirstName,
(SELECT lastname
FROM userdetails
WHERE userid = (SELECT userid
FROM aspnet_membership
WHERE loweredemail = P.createdby)) AS LastName,
(SELECT mobile
FROM userdetails
WHERE userid = (SELECT userid
FROM aspnet_membership
WHERE loweredemail = P.createdby)) AS Mobile,
CASE (SELECT TOP 1 depositscheme
FROM [Rent].[dbo].[contract]
WHERE propertyid = P.id)
WHEN 1 THEN 'DPS'
WHEN 2 THEN 'DPS2'
ELSE 'Other'
END AS DPSType,
(SELECT dpsnumber
FROM userdetails
WHERE userid = (SELECT userid
FROM aspnet_membership
WHERE loweredemail = P.createdby)) AS DPSNumber,
P.createdby AS Email
FROM [Rent].[dbo].[property] AS P;
``` | Case statement in SQL Server causing error | [
"",
"sql",
"sql-server",
"database",
""
] |
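The accepted answer's multi-`WHEN` form can be sanity-checked outside SQL Server; below is a minimal sketch using Python's bundled SQLite, where the table name and rows are invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE contract (propertyid INTEGER, depositscheme INTEGER)")
con.executemany("INSERT INTO contract VALUES (?, ?)", [(1, 1), (2, 2), (3, 9)])

# One CASE expression with several WHEN branches, instead of "THEN 'DPS' or = 2"
rows = con.execute("""
    SELECT propertyid,
           CASE
               WHEN depositscheme = 1 THEN 'DPS'
               WHEN depositscheme = 2 THEN 'DPS2'
               ELSE 'Other'
           END AS dpstype
    FROM contract
    ORDER BY propertyid
""").fetchall()
print(rows)  # [(1, 'DPS'), (2, 'DPS2'), (3, 'Other')]
```

The same expression drops into the larger query unchanged; only the first matching `WHEN` branch fires.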
I have the following tables:
## MOVIES
```
MOVIE_ID TITLE
---------- -----------------------------
1 The Shawshank Redemption
2 The Godfather
3 The Godfather: Part II
4 The Dark Knight
5 Pulp Fiction
6 The Good, the Bad and the Ugly
7 Schindler's List
8 Angry Men
9 Fight Club
10 Inception
11 Forrest Gump
```
## DIRECTORS
```
DIRECTOR_ID NAME
----------- -------------------------
1 Tim Robbins
2 Morgan Freeman
3 Marlon Brando
4 Al Pachino
5 Robert De Niro
6 Christian Bale
7 Heath Ledger
8 John Travola
9 Uma Thurman
10 Clint Eastwood
11 Eli Wallach
```
## Direct
```
MOVIE_ID DIRECTOR_ID
---------- -----------
1 1
1 2
2 3
2 4
3 4
3 5
4 6
4 7
5 8
5 9
6 10
```
I'd like a query that returns all movies that have directors x, y and z as their director:
Example:
> If I look for a movie with **Al Pachino** and **Clint Eastwood**, it
> should return **nothing** because I don't have a movie that has both of
> them as it's director.
>
> However if I were to look for a movie with **Tim Robbins** and **Morgan
> Freeman**, it should return **The Shawshank Redemption**
Please suggest alternative if it cannot be done using the above design.
I've attempted it, but my query returns results even when it shouldn't :(
```
SELECT m.title FROM Movie m
WHERE m.movie_id IN (
SELECT d.movie_id FROM Direct d
WHERE d.director_id IN (
SELECT director_id FROM Director dir
WHERE name IN('Clint Eastwood', 'Al Pachino')));
``` | This is one way to achieve this, viz to group, filter and then count the directors:
```
SELECT m.title
FROM Movie m
INNER JOIN Direct md
on md.movie_id = m.movie_id
INNER JOIN Directors d
on md.director_id = d.director_id
WHERE
d.name IN('Clint Eastwood', 'Al Pachino')
GROUP BY m.title
HAVING COUNT(DISTINCT d.director_id) = 2;
```
[SqlFiddle here](http://sqlfiddle.com/#!2/26cac/1)
{Out of interest, aren't these the actors in the movies?} | With the IN operator, your query returns movies that have *any* of these directors.
You have to check for each director separately:
```
SELECT *
FROM Movie
WHERE movie_id IN (SELECT movie_id
FROM Direct
WHERE director_id = (SELECT director_id
FROM Directors
WHERE name = 'Clint Eastwood'))
AND movie_id IN (SELECT movie_id
FROM Direct
WHERE director_id = (SELECT director_id
FROM Directors
WHERE name = 'Al Pachino'))
```
Alternatively, use a [compound query](http://www.sqlite.org/lang_select.html#compound) to construct a list of movie IDs for both directors:
```
SELECT *
FROM Movie
WHERE movie_id IN (SELECT movie_id
FROM Direct
WHERE director_id = (SELECT director_id
FROM Directors
WHERE name = 'Clint Eastwood')
INTERSECT
SELECT movie_id
FROM Direct
WHERE director_id = (SELECT director_id
FROM Directors
WHERE name = 'Al Pachino'))
```
Alternatively, get all records for these two directors from the `Direct` table, and then group by the movie to be able to count the directors per movie; we need to have two left:
```
SELECT *
FROM Movie
WHERE movie_id IN (SELECT movie_id
FROM Direct
WHERE director_id IN (SELECT director_id
FROM Directors
WHERE name IN ('Clint Eastwood',
'Al Pachino')
GROUP BY movie_id
HAVING COUNT(*) = 2)
``` | SQL query records containing all items in list | [
"",
"mysql",
"sql",
"oracle",
"sqlite",
"relational-division",
""
] |
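The grouped "relational division" query from the accepted answer can be exercised in-memory (SQLite via Python; the sample rows are a cut-down version of the question's tables):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE movie (movie_id INTEGER, title TEXT);
    CREATE TABLE directors (director_id INTEGER, name TEXT);
    CREATE TABLE direct (movie_id INTEGER, director_id INTEGER);
    INSERT INTO movie VALUES (1, 'The Shawshank Redemption'),
                             (6, 'The Good, the Bad and the Ugly');
    INSERT INTO directors VALUES (1, 'Tim Robbins'), (2, 'Morgan Freeman'),
                                 (10, 'Clint Eastwood');
    INSERT INTO direct VALUES (1, 1), (1, 2), (6, 10);
""")

# Group per movie and demand that both requested names matched.
query = """
    SELECT m.title
    FROM movie m
    JOIN direct md ON md.movie_id = m.movie_id
    JOIN directors d ON d.director_id = md.director_id
    WHERE d.name IN (?, ?)
    GROUP BY m.title
    HAVING COUNT(DISTINCT d.director_id) = 2
"""
both = con.execute(query, ("Tim Robbins", "Morgan Freeman")).fetchall()
print(both)     # [('The Shawshank Redemption',)]
neither = con.execute(query, ("Clint Eastwood", "Al Pachino")).fetchall()
print(neither)  # []
```

For N requested names, the `HAVING` count becomes N; `DISTINCT` guards against duplicate rows in the link table.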
I have a complex select that - when simplified - looks like this:
```
select m.ID,
(select sum(AMOUNT) from A where M_ID = m.ID) sumA,
(select sum(AMOUNT) from B where M_ID = m.ID) sumB,
.....
from M;
```
The tables A,B,... have a foreign key M\_ID pointing into table M.
The problem is that this select is very slow. I'd like to rewrite it using table joins, but I don't know how, because
```
select m.ID,
sum(a.AMOUNT),
sum(b.AMOUNT),
.....
from M
join A on a.M_ID = m.ID
join B on b.M_ID = m.ID
....
group by m.ID;
```
gives incorrect (much higher) sum results, as each row in A or B can be counted multiple times.
Is there a way how to write that select optimally using e.g. analytical functions or some other ways?
**Edit:**
The explain plan for the original (not simplified) select looks like this:
```
| 0 | SELECT STATEMENT | |
| 1 | SORT AGGREGATE | |
|* 2 | FILTER | |
|* 3 | TABLE ACCESS BY INDEX ROWID| WORKITEM |
|* 4 | INDEX SKIP SCAN | WORKITEM_U01 |
|* 5 | FILTER | |
|* 6 | TABLE ACCESS FULL | RPRODUCT_INVENTORY_MASTER |
.....
| 31 | SORT AGGREGATE | |
|* 32 | FILTER | |
|* 33 | TABLE ACCESS BY INDEX ROWID| WORKITEM |
|* 34 | INDEX SKIP SCAN | WORKITEM_U01 |
|* 35 | FILTER | |
|* 36 | TABLE ACCESS FULL | RPRODUCT_INVENTORY_MASTER |
| 37 | SORT GROUP BY | |
| 38 | TABLE ACCESS FULL | RPRODUCT |
```
That's why I want to optimize it. Moreover, the AWR report shows that this select has 50000 gets/exec.
**Edit2,3:**
The whole select looks like this:
```
SELECT rprd.ID,
rprd.NAME,
(select sum(AMOUNT) from WORKITEM
where ACTION='REMOVE'
and trunc(CREATED_DATE) = to_date(:1,'DDMMYYYY')
and PAYEE_ID in
(select rim.RPRODUCT_ID from RPRODUCT_INVENTORY_MASTER rim
where rprd.ID = rim.RPRODUCT_ID
and rim.INVENTORY_DATE = to_date(:2,'DDMMYYYY'))),
.....
(select sum(AMOUNT) from WORKITEM
where ACTION='COLLECT'
and trunc(CREATED_DATE) < to_date(:11,'DDMMYYYY')
and PAYEE_ID in
(select rim.RPRODUCT_ID from RPRODUCT_INVENTORY_MASTER rim
where rprd.ID = rim.RPRODUCT_ID
and rim.INVENTORY_DATE < to_date(:12,'DDMMYYYY')))
FROM RPRODUCT rprd
GROUP BY rprd.ID, rprd.NAME
ORDER BY rprd.ID
;
```
I didn't write it :-), I'm about to re-write it. Note, there are differences in comparison operators, in ACTION values, in dates to compare INVENTORY\_DATE to.
**Edit4:**
I tried to rewrite the query like this (and the exec plan looks better), but have run into the "row multiplicity" issues described above:
```
with RPRODUCT_INVENTORY_MASTER# as (
select RPRODUCT_ID, min(INVENTORY_DATE) INVENTORY_DATE
from RPRODUCT_INVENTORY_MASTER
group by RPRODUCT_ID),
WORKITEM# as (
select AMOUNT, PAYEE_ID, ACTION, trunc(CREATED_DATE) CREATED_DATE
from WORKITEM
where ACTION in ('REMOVE','ADD','COLLECT')
)
select rprd.ID,
rprd.NAME,
-- sum(wip2.AMOUNT), -- this is singular because of '=' in inventory_date comparison
sum(abs(wip4.AMOUNT)),
.....
sum(wip12.AMOUNT)
from RPRODUCT rprd
left join RPRODUCT_INVENTORY_MASTER# rim4 on rim4.RPRODUCT_ID = rprd.ID
and rim4.INVENTORY_DATE <= to_date(:4 ,'DDMMYYYY')
left join WORKITEM# wip4 on wip4.PAYEE_ID = rim4.RPRODUCT_ID
and wip4.ACTION='REMOVE'
and wip4.CREATED_DATE = to_date(:3 ,'DDMMYYYY')
.....
left join RPRODUCT_INVENTORY_MASTER# rim12 on rim12.RPRODUCT_ID = rprd.ID
and rim12.INVENTORY_DATE < to_date(:12 ,'DDMMYYYY')
left join WORKITEM# wip12 on wip12.PAYEE_ID = rim12.RPRODUCT_ID
and wip12.ACTION='COLLECT'
and wip12.CREATED_DATE < to_date(:11 ,'DDMMYYYY')
group by rprd.ID, rprd.NAME
order by rprd.ID
;
```
RPRODUCT\_INVENTORY\_MASTER# always gives at most one row for each rprd.ID. WORKITEM# can have any number of rows for each RPRODUCT\_ID = rprd.ID. | Yes, this is a typical problem. I like your original query for its clarity. However, if running into performance issues, one has to think of other options.
Here is one option. As A and B get multiplied you could simply divide the sum by the related count. Well, admittedly this looks kind of strange though.
```
select m.ID,
sum(a.AMOUNT) / count(distinct b.id),
sum(b.AMOUNT) / count(distinct a.id),
.....
from M
join A on a.M_ID = m.ID
join B on b.M_ID = m.ID
....
group by m.ID;
```
The other option, which I would prefer is to build groups, so as not to have multiple A and B per m.id in the first place:
```
select m.ID,
a_agg.SUM_AMOUNT,
b_agg.SUM_AMOUNT,
.....
from M
join (select M_ID, sum(AMOUNT) as SUM_AMOUNT from A group by M_ID) a_agg
on a_agg.M_ID = m.ID
join (select M_ID, sum(AMOUNT) as SUM_AMOUNT from B group by M_ID) b_agg
on b_agg.M_ID = m.ID
```
EDIT: In case an M\_ID might not have any A or any B, you would have to replace the joins with LEFT JOIN in both queries. Then in the first query select:
```
nvl(sum(a.AMOUNT), 0) / greatest(count(distinct b.id), 1),
nvl(sum(b.AMOUNT), 0) / greatest(count(distinct a.id), 1),
```
And in the second query:
```
nvl(a_agg.SUM_AMOUNT, 0),
nvl(b_agg.SUM_AMOUNT, 0),
```
EDIT: Here is your query modified. The trick is to join with distinct rims.
```
SELECT
rprd.ID,
rprd.NAME,
nvl(same_date.SUM_AMOUNT, 0),
.....
nvl(earlier_date.SUM_AMOUNT, 0)
FROM RPRODUCT rprd
LEFT JOIN
(
select rim.RPRODUCT_ID, sum(w.AMOUNT) as SUM_AMOUNT
from
(
select distinct RPRODUCT_ID
from RPRODUCT_INVENTORY_MASTER
where INVENTORY_DATE = to_date(:2,'DDMMYYYY')
) rim
left join WORKITEM w
on w.PAYEE_ID = rim.RPRODUCT_ID
and w.ACTION = 'REMOVE'
and trunc(w.CREATED_DATE) = to_date(:1,'DDMMYYYY')
) same_date on same_date.RPRODUCT_ID = rprd.ID
LEFT JOIN
(
select rim.RPRODUCT_ID, sum(w.AMOUNT) as SUM_AMOUNT
from
(
select distinct RPRODUCT_ID
from RPRODUCT_INVENTORY_MASTER
where INVENTORY_DATE < to_date(:12,'DDMMYYYY')
) rim
left join WORKITEM w
on w.PAYEE_ID = rim.RPRODUCT_ID
and w.ACTION = 'REMOVE'
and trunc(w.CREATED_DATE) < to_date(:11,'DDMMYYYY')
) earlier_date on earlier_date.RPRODUCT_ID = rprd.ID
GROUP BY rprd.ID, rprd.NAME
ORDER BY rprd.ID
;
``` | This should work
```
select m.ID,
a.aamount,
b.bamount
from M
inner join
(
select M_ID,sum(AMOUNT) as aamount
from A group by M_ID
) a
on a.M_ID = m.ID
inner join
(
select M_ID,sum(AMOUNT) as bamount
from B group by M_ID
) b
on b.M_ID = m.ID;
``` | Count/sum rows in multiple related tables | [
"",
"sql",
"oracle11g",
""
] |
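The row-multiplication problem and the pre-aggregation fix from the accepted answer can both be reproduced with a tiny made-up dataset (SQLite standing in for Oracle):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE M (id INTEGER);
    CREATE TABLE A (m_id INTEGER, amount INTEGER);
    CREATE TABLE B (m_id INTEGER, amount INTEGER);
    INSERT INTO M VALUES (1), (2);
    INSERT INTO A VALUES (1, 10), (1, 20), (2, 5);
    INSERT INTO B VALUES (1, 7), (2, 1), (2, 2);
""")

# Naive double join: rows multiply, so the sums are inflated.
naive = con.execute("""
    SELECT m.id, SUM(a.amount), SUM(b.amount)
    FROM M m JOIN A a ON a.m_id = m.id JOIN B b ON b.m_id = m.id
    GROUP BY m.id ORDER BY m.id
""").fetchall()
print(naive)  # [(1, 30, 14), (2, 10, 3)]  -- 14 and 10 are wrong

# Pre-aggregating in derived tables avoids the multiplication.
agg = con.execute("""
    SELECT m.id, a_agg.s, b_agg.s
    FROM M m
    JOIN (SELECT m_id, SUM(amount) AS s FROM A GROUP BY m_id) a_agg
      ON a_agg.m_id = m.id
    JOIN (SELECT m_id, SUM(amount) AS s FROM B GROUP BY m_id) b_agg
      ON b_agg.m_id = m.id
    ORDER BY m.id
""").fetchall()
print(agg)  # [(1, 30, 7), (2, 5, 3)]
```

The derived-table version also tends to scan each detail table once, which is the performance win the question is after.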
I have these two tables (the names have been pluralized for the sake of the example):
Table `Locations`:
```
idlocation varchar(12)
name varchar(50)
```
Table `Answers`:
```
idlocation varchar(6)
question_number varchar(3)
answer_text1 varchar(300)
answer_text2 varchar(300)
```
This table can hold answers for multiple locations according a list of numbered questions that repeat on each of them.
What I am trying to do is to add up the values residing in the `answer_text1` and `answer_text2` columns, for each location available on the Locations table but for **only** an specific question and then output a value based on the result (1 or 0).
The query goes as follows using a nested table Answers to perform the SUM operation:
```
select
l.idlocation,
'RESULT' = (
case when (
select
sum(cast(isnull(c.answer_text1,0) as int)) +
sum(cast(isnull(c.answer_text2,0) as int))
from Answers c
where b.idlocation=c.idlocation and c.question_number='05'
) > 0 then
1
else
0
end
)
from Locations l, Answers b
where l.idlocation=b.idlocation and b.question_number='05'
```
In the table `Answers` I am saving sometimes a date string type of value for its field `answer_text2` but on a different question number.
When I run the query I get the following error:
> Conversion failed when converting the varchar value '27/12/2013' to data type int
I do have that value `'27/12/2013'` on the `answer_text2` field but for a different question, so my filter gets ignored on the nested select statement after this: `b.idlocation=c.idlocation`, and it's adding up apparently **more questions** hence the error posted.
**Update**
According to Steve's suggested solution, I ended up implementing the filter to avoid char/varchar considerations into my SUM statement with a little variant:
Every possible not INT string value has a length greater than 2 ('00' to '99' for my question numbers) so I use this filter to determine when I am going to apply the cast.
```
'RESULT' =
case when (
select sum(
case when len(c.answer_text1) <= 2 then
cast(isnull(c.answer_text1,'0') as int)
else
0
end
) +
sum(
case when len(c.answer_text2) <= 2 then
cast(isnull(c.answer_text2,'0') as int)
else
0
end
)
from Answers c
where c.idlocation=b.idlocation and c.question_number='05'
) > 0
then
1
else
0
end
``` | This is an unfortunate result of how the SQL Server query processor/optimizer works. In my opinion, the SQL standard prohibits the calculation of SELECT list expressions before the rows that will contribute to the result set have been identified, but the optimizer considers plans that violate this prohibition.
What you're observing is an error in the evaluation of a SELECT list item on a row that is not in the result set of your query. While this shouldn't happen, it does, and it's somewhat understandable, because to protect against it in every situation would exclude many efficient query plans from consideration. The vast majority of SELECT expressions will never raise an error, regardless of data.
What you can do is try to protect against this with an additional CASE expression. To protect against strings with the '/' character, for example:
```
... SUM(CASE WHEN c.answer_text1 IS NOT NULL and c.answer_text1 NOT LIKE '%/%' THEN CAST(c.answer_text1 as int) ELSE 0 END)...
```
If you're using SQL Server 2012, you have a better option: TRY\_CONVERT:
```
...SUM(COALESCE(TRY_CONVERT(int,c.answer_text1),0)...
```
In your particular case, the overall database design is flawed, because numeric information should be stored in number-type columns. (This, of course, may not be your fault.) So redesign is an option, putting integer answers in integer-type columns and non-integer answer\_text elsewhere. A compromise, if you can't redesign the tables, that I think will work, is to add a persisted computed column with value TRY\_CONVERT(int,c.answer\_text1) (or its best equivalent, based on what you know about the actual data in the table - perhaps the integer value of only columns containing no non-digit character and having length less than 9). | Your query appears correct enough, which means you have a Question 05 record with a datetime in either the `answer_text1` or `answer_text2` field.
Give this a shot to figure out which row has a date:
```
select *
from Answers
where question_number='05'
and (isdate(answer_text1) = 1 or isdate(answer_text2) = 1)
```
Furthermore, you could filter out any rows that have dates in them
```
where isdate(c.answer_text1) = 0
and isdate(c.answer_text2) = 0
and ...
``` | Filter on a nested aggregate SUM function not working | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
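SQLite's `CAST` does not raise on bad input the way SQL Server's does, so the divide-by-error itself cannot be reproduced here, but the guarded-`SUM` pattern from the accepted answer can still be exercised (invented table and rows):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE answers (question_number TEXT, answer_text1 TEXT)")
con.executemany("INSERT INTO answers VALUES (?, ?)",
                [("05", "3"), ("05", "4"), ("06", "27/12/2013")])

# Guard the CAST so date-like strings never reach the conversion.
total = con.execute("""
    SELECT SUM(CASE WHEN answer_text1 IS NOT NULL
                     AND answer_text1 NOT LIKE '%/%'
                    THEN CAST(answer_text1 AS INTEGER) ELSE 0 END)
    FROM answers
    WHERE question_number = '05'
""").fetchone()[0]
print(total)  # 7
```

On SQL Server 2012+, `TRY_CONVERT(int, answer_text1)` replaces the whole `CASE` guard and returns `NULL` for anything unconvertible.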
In short, I am using `JDBC` and I am trying to write a query that will return some values from an SQL Developer db table.
So far i have this:
```
#Query for getting data
sql <- paste("select *
FROM GRID Z
where Z.LAT = Xlat AND Z.LON = Xlon")
fun <- dbGetQuery(jdbcConnection, sql)
attach(fun)
```
Problem is that `Xlat` and `Xlon` are variables in R and their values change frequently so I can't really hard-pass them into the query. Apparently, `Z.LAT` and `Z.LON` correspond to `GRID` table.
Question is: Is it possible to use R variables into a query as such?
I would also like to know if instead of `=` is there something to match the closest or nearest values.
Appreciate any suggestions. Thanks
**Edit:** Another approach to this would be to `SELECT *` from my table, and then 'play' with `fun` in order to get my values. Any thoughts or practices on that?
**Note:** `jdbcConnection` is asking for a remote connection. | Are you looking for this?
```
sql <- paste0("select *
FROM GRID Z
where Z.LAT ='", Xlat,"' AND Z.LON = '", Xlon,"'")
```
I assumed that your variables are character. In case the above is running behind a web server, there are options for URL encode and escape to avoid code injections... like [this](https://stackoverflow.com/questions/14836754/is-there-an-r-function-to-escape-a-string-for-regex-characters)
**EDIT**: About this:
`I would also like to know if instead of = is there something to match the closest or nearest values.`
Since you are executing your query via a SQL engine that is more a SQL question than a R one. Like @Vivek says you can do that in `sqldf` but I guess your data are in a remote database, so it wouldn't help in this case.
All SQL flavours have `like`, so just use it in your query. Please tell me if I'm misunderstanding your question.
```
sql <- paste0("select *
FROM GRID Z
where Z.LAT like '", Xlat,"' AND Z.LON like '", Xlon,"'")
``` | [gsubfn](http://gsubfn.googlecode.com)'s `fn$` operator supports a quasi-perl sort of string interpolation like this:
```
library(gsubfn)
sql0 <- "select * FROM GRID Z where Z.LAT = $Xlat AND Z.LON = $Xlon"
fun <- fn$dbGetQuery(jdbcConnection, sql0)
```
or like this to allow examination of `sql` after substitution:
```
sql0 <- "select * FROM GRID Z where Z.LAT = $Xlat AND Z.LON = $Xlon"
sql <- fn$identity(sql0)
fun <- dbGetQuery(jdbcConnection, sql)
```
See [?fn](http://www.inside-r.org/packages/cran/gsubfn/docs/fn) and also the examples on the [sqldf home page](http://sqldf.googlecode.com). | Use R variables to an SQL query | [
"",
"sql",
"r",
"oracle",
"jdbc",
""
] |
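The same idea ports beyond R: rather than pasting `Xlat`/`Xlon` into the SQL string, most database drivers accept bound parameters, and "nearest value" can be done with `ORDER BY` on a distance expression. A small sketch with Python's `sqlite3` (table and values invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE grid (lat REAL, lon REAL, val TEXT)")
con.executemany("INSERT INTO grid VALUES (?, ?, ?)",
                [(10.0, 20.0, "a"), (10.5, 20.5, "b")])

# Bind the variables as parameters; no quoting or injection issues.
xlat, xlon = 10.0, 20.0
rows = con.execute("SELECT val FROM grid WHERE lat = ? AND lon = ?",
                   (xlat, xlon)).fetchall()
print(rows)  # [('a',)]

# "Closest" match: order by a distance measure and take the first row.
nearest = con.execute(
    "SELECT val FROM grid ORDER BY ABS(lat - ?) + ABS(lon - ?) LIMIT 1",
    (10.4, 20.4)).fetchone()
print(nearest)  # ('b',)
```

In R, `RJDBC`'s `dbGetQuery(conn, sql, x, y)` and `dbSendQuery` support the same placeholder binding on most drivers.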
Can someone tell me why I'm getting an Incorrect Syntax error at the last ELSE statement.
```
SELECT
CASE WHEN PolType = 'PKG' THEN
CASE WHEN PkgDef & 1 = 1 THEN 'BA ' ELSE
CASE WHEN PkgDef & 2 = 2 THEN 'BAT' ELSE
CASE WHEN PkgDef & 4 = 4 THEN 'GS ' ELSE
CASE WHEN PkgDef & 8 = 8 THEN 'DLR' ELSE
'ERR' END
ELSE
poltype
END AS 'PolType'
FROM Parallel_Test.dbo.PolicyG
WHERE rowid = (SELECT MAX(rowid) FROM Parallel_Test.dbo.policyg) - 10
``` | If you are using Microsoft SQL Server then you might be trying to do this:
```
SELECT
CASE
WHEN PolType = 'PKG' THEN
CASE
WHEN PkgDef & 1 = 1 THEN 'BA '
WHEN PkgDef & 2 = 2 THEN 'BAT'
WHEN PkgDef & 4 = 4 THEN 'GS '
WHEN PkgDef & 8 = 8 THEN 'DLR'
ELSE 'ERR'
END
ELSE poltype
END AS 'PolType'
FROM Parallel_Test.dbo.PolicyG
WHERE rowid = (SELECT MAX(rowid) FROM Parallel_Test.dbo.policyg) - 10
```
Let me know if I have interpreted your intended logic wrongly. | Not tested, but try:
```
SELECT CASE WHEN PolType <> 'PKG' THEN poltype
WHEN PkgDef & 1 = 1 THEN 'BA '
WHEN PkgDef & 2 = 2 THEN 'BAT'
WHEN PkgDef & 4 = 4 THEN 'GS '
WHEN PkgDef & 8 = 8 THEN 'DLR'
ELSE 'ERR'
END AS 'PolType'
FROM Parallel_Test.dbo.PolicyG
WHERE rowid = (SELECT MAX(rowid) FROM Parallel_Test.dbo.policyg) - 10
``` | Nested CASE ELSE END Syntax Error | [
"",
"sql",
"sql-server",
""
] |
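The corrected nested `CASE` with the bitmask tests also runs as-is on SQLite, which shares T-SQL's `&` operator; here is a runnable sketch with made-up rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE policy (poltype TEXT, pkgdef INTEGER)")
con.executemany("INSERT INTO policy VALUES (?, ?)",
                [("PKG", 1), ("PKG", 2), ("PKG", 4), ("AUTO", 0)])

# Inner CASE lists all bitmask branches; outer CASE falls back to poltype.
rows = con.execute("""
    SELECT CASE
             WHEN poltype = 'PKG' THEN
               CASE
                 WHEN pkgdef & 1 = 1 THEN 'BA '
                 WHEN pkgdef & 2 = 2 THEN 'BAT'
                 WHEN pkgdef & 4 = 4 THEN 'GS '
                 WHEN pkgdef & 8 = 8 THEN 'DLR'
                 ELSE 'ERR'
               END
             ELSE poltype
           END AS poltype
    FROM policy
    ORDER BY rowid
""").fetchall()
print([r[0] for r in rows])  # ['BA ', 'BAT', 'GS ', 'AUTO']
```

Each `CASE` needs exactly one `END`; the original error came from stacking `ELSE CASE` arms without closing them.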
Say I have a table with columns: `id, group_id, type, val`
Some example data from the select:
`1, 1, 'budget', 100`
`2, 1, 'budget adjustment', 10`
`3, 2, 'budget', 500`
`4, 2, 'budget adjustment', 30`
I want the result to look like
`1, 1, 'budget', 100`
`2, 1, 'budget adjustment', 10`
`5, 1, 'budget total', 110`
`3, 2, 'budget', 500`
`4, 2, 'budget adjustment', 30`
`6, 2, 'budget total', 530`
Please advise,
Thanks. | As @Serpiton suggested, it seems the functionality you're really looking for is the ability to add sub-totals to your result set, which indicates that `rollup` is what you need. The usage would be something like this:
```
SELECT id,
group_id,
coalesce(type, 'budget total') as type,
sum(val) as val
FROM your_table
GROUP BY ROLLUP (group_id), id, type
``` | This will get you the two added lines desired, but not the values for ID and type that you want.
Oracle examples: <http://docs.oracle.com/cd/B19306_01/server.102/b14223/aggreg.htm>
```
Select id, group_id, type as myType, sum(val) as sumVal
FROM Table name
Group by Grouping sets ((id, group_id, type, val), (group_ID))
``` | Oracle SQL: Adding row to select result | [
"",
"sql",
"oracle",
""
] |
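SQLite has no `ROLLUP`, so the subtotal row the accepted answer gets from Oracle's `GROUP BY ROLLUP` is emulated here with `UNION ALL`; the data matches the question's example:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, group_id INTEGER, type TEXT, val INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?, ?, ?)", [
    (1, 1, "budget", 100), (2, 1, "budget adjustment", 10),
    (3, 2, "budget", 500), (4, 2, "budget adjustment", 30)])

# Detail rows plus one synthesized 'budget total' row per group.
rows = con.execute("""
    SELECT group_id, type, val FROM t
    UNION ALL
    SELECT group_id, 'budget total', SUM(val) FROM t GROUP BY group_id
    ORDER BY group_id, type
""").fetchall()
print(rows)
```

On Oracle, `ROLLUP` produces the extra row directly and `COALESCE(type, 'budget total')` labels it, as shown in the answer above.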
Here is my challenge:
I have a log table which every time a record is changed adds a new record but puts a NULL value for each non-changed value in each record. In other words only the changed value is set, the rest unchanged fields in each row simply has a NULL value.
Now I would like to replace each NULL value with the value above it that is NOT a NULL value like below:
Source table: Task\_log
```
ID Owner Status Flag
1 Bob Registrar T
2 Sue NULL NULL
3 NULL NULL F
4 Frank Admission T
5 NULL NULL F
6 NULL NULL T
```
Desired output table: Task\_log
```
ID Owner Status Flag
1 Bob Registrar T
2 Sue Registrar T
3 Sue Registrar F
4 Frank Admission T
5 Frank Admission F
6 Frank Admission T
```
How do I write a query which will generate the desired output table? | One of the new windowed functions of SQL Server 2012 is `FIRST_VALUE`, which has quite a direct name. It can be partitioned through the `OVER` clause; before using it, it is necessary to divide every column into data blocks, where a block for a column begins when a value is found.
```
With Block As (
Select ID
, Owner
, OBlockID = SUM(Case When Owner Is Null Then 0 Else 1 End)
OVER (ORDER BY ID)
, Status
, SBlockID = SUM(Case When Status Is Null Then 0 Else 1 End)
OVER (ORDER BY ID)
, Flag
, FBlockID = SUM(Case When Flag Is Null Then 0 Else 1 End)
OVER (ORDER BY ID)
From Task_log
)
Select ID
, Owner = FIRST_VALUE(Owner) OVER (PARTITION BY OBlockID ORDER BY ID)
, Status = FIRST_VALUE(Status) OVER (PARTITION BY SBlockID ORDER BY ID)
, Flag = FIRST_VALUE(Flag) OVER (PARTITION BY FBlockID ORDER BY ID)
FROM Block
```
[SQLFiddle](http://www.sqlfiddle.com/#!6/06d63/16) demo
The `UPDATE` query is easily derived | As I mentioned in my comment, I would try to fix the process that is creating the records rather than fixing the junk data. If that is not an option, the code below should get you pointed in the right direction.
```
UPDATE t1
set t1.owner = COALESCE(t1.owner, t2.owner),
t1.Status = COALESCE(t1.status, t2.status),
t1.Flag = COALESCE(t1.flag, t2.flag)
FROM Task_log as t1
INNER JOIN Task_log as t2
ON t1.id = (t1.id + 1)
where t1.owner is null
OR t1.status is null
OR t1.flag is null
``` | Replace NULL with values | [
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
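The block-numbering-plus-`FIRST_VALUE` technique from the accepted answer is portable to any engine with window functions (SQLite 3.25+ included). A single-column sketch using the question's `Owner` data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE task_log (id INTEGER, owner TEXT)")
con.executemany("INSERT INTO task_log VALUES (?, ?)",
                [(1, "Bob"), (2, "Sue"), (3, None),
                 (4, "Frank"), (5, None), (6, None)])

# Step 1: a running SUM that increments on every non-NULL value numbers the
# "blocks"; step 2: FIRST_VALUE per block fills the NULLs downward.
rows = con.execute("""
    WITH block AS (
        SELECT id, owner,
               SUM(CASE WHEN owner IS NULL THEN 0 ELSE 1 END)
                   OVER (ORDER BY id) AS block_id
        FROM task_log
    )
    SELECT id, FIRST_VALUE(owner) OVER (PARTITION BY block_id ORDER BY id)
    FROM block
    ORDER BY id
""").fetchall()
print(rows)  # [(1, 'Bob'), (2, 'Sue'), (3, 'Sue'), (4, 'Frank'), (5, 'Frank'), (6, 'Frank')]
```

The `Status` and `Flag` columns get their own block counters in exactly the same way, as in the answer's `OBlockID`/`SBlockID`/`FBlockID`.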
I am attempting to calculate the average of a percentage that needs to take into account percentages that are zero.
I am currently using the following formula to exclude those that will ultimately end up as zero, but this is skewing the figures.
```
SET Average = (SELECT ROUND(AVG((CAST(Figure1 AS FLOAT)/CAST(Figure2 AS FLOAT))*100),2)
FROM [Schema].[Table] m
WHERE m.Figure1 > 0 and m.Figure2 > 0)
```
When I include the possible zero based percentages
```
SET Average = (SELECT ROUND(AVG((CAST(Figure1 AS FLOAT)/CAST(Figure2 AS FLOAT))*100),2)
FROM [Schema].[Table] m)
```
I obviously get the following error;
> Divide by zero error encountered. The statement has been terminated.
How can I change this to include the zero based percentages without the error? | The problem would be dividing by zero - so only `figure2` is relevant, since it is the divisor.
Use a `case` statement to set a static value for that specific case (I chose `0`):
```
SET Average = (SELECT ROUND(AVG(case when Figure2 = 0
then 0
else CAST(Figure1 AS FLOAT) / CAST(Figure2 AS FLOAT)
end * 100)
,2)
FROM [Schema].[Table] m)
``` | While the answer of
```
SET Average = (SELECT ROUND(AVG(case when Figure2 = 0
then 0
else CAST(Figure1 AS FLOAT) / CAST(Figure2 AS FLOAT)
end * 100)
,2)
FROM [Schema].[Table] m)
```
is good if you want the records where Figure2 is zero to be average in as a zero, but I personally wouldn't want them at all and would use
```
SET Average = (SELECT ROUND(AVG(CAST(Figure1 AS FLOAT) / CAST(Figure2 AS FLOAT)
* 100)
,2)
FROM [Schema].[Table] m where Figure2 <> 0 and Figure2 is not null)
```
or perhaps this is a bonus points situation where you really have 4 points out of 0 points. Then I would do the following. Still taking into account if Figure2 really has all zeros in its results.
```
SET Average = ((SELECT CASE WHEN (SUM(CAST(Figure2 AS FLOAT))) <> 0
THEN ROUND(SUM(CAST(Figure1 AS FLOAT)) / SUM(CAST(Figure2 AS FLOAT))
* 100
,2) ELSE 0 END
FROM [Schema].[Table] m)
``` | Include zero figures in calculation without getting divide by zero errors | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
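The `CASE` guard from the accepted answer can be verified quickly; note SQLite returns `NULL` on division by zero where SQL Server raises, but the guard works identically (rows invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (figure1 REAL, figure2 REAL)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(1.0, 4.0), (3.0, 4.0), (2.0, 0.0)])

# Zero divisors contribute 0 to the average instead of raising an error.
avg = con.execute("""
    SELECT ROUND(AVG(CASE WHEN figure2 = 0 THEN 0
                          ELSE figure1 / figure2 END * 100), 2)
    FROM t
""").fetchone()[0]
print(avg)  # 33.33  (average of 25, 75 and 0)
```

Whether a zero-divisor row should count as 0 or be excluded entirely is the business decision discussed in the answers; the `CASE` (or a `WHERE figure2 <> 0`) encodes whichever is chosen.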
I have an SQL statement I run for my website that inserts data into an access database. Is there a way during the insert to replace a space " " if one of the date textboxes contains a space? I need to do this because it throws off my date queries if there is a space in one of the columns?
```
INSERT INTO tblOpen (DateColumn1, DateColumn2) Values (@Value1, @Value2)
``` | You should use the 'datetime' type for your DateColumn. It solves all your problems. It is good to use the proper data type for each field. | If you mean leading and trailing spaces, then:
```
myString = myString.Trim()
```
in your vb.net code will be fine. Even though I would follow Steve's suggestion and converting to date. | SQL Insert and Replace? | [
"",
"asp.net",
"sql",
"vb.net",
"ms-access",
""
] |
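Trimming at the application layer before the `INSERT`, as suggested above, keeps stray spaces out of the date columns; here is a sketch in Python, where `str.strip()` plays the role of VB.NET's `Trim()` and the table is invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl_open (date_column1 TEXT)")

# Trim the user input before binding it, so ' 2014-01-10 ' never reaches the table.
value1 = " 2014-01-10 "
con.execute("INSERT INTO tbl_open VALUES (?)", (value1.strip(),))
stored = con.execute("SELECT date_column1 FROM tbl_open").fetchone()[0]
print(repr(stored))  # '2014-01-10'
```

Better still, per the accepted answer, parse the textbox value into a real date/datetime before inserting, so malformed input fails early instead of corrupting date queries later.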
Can't seem to get this working. If the count is null I want to set it to zero... else set it to the count. I am adding multiple counts in another part of my code so I cannot have null values when I do this.
$table = "
...
```
LEFT JOIN
(SELECT user_id, IF(count(user_id) = '(null)',0,count(user_id)) as t1count
FROM screenshot_logs
GROUP BY user_id) as t_screenshots
on t_screenshots.user_id = users.user_id
...
```
"; | In the outer query, you can replace a `NULL` with a zero using the **`IFNULL()`** function, e.g.
```
SELECT ...
, IFNULL(v.t1count,0) AS t1count
FROM ...
LEFT
JOIN ( SELECT ... AS t1count
...
) v
ON ...
```
---
The NULL you are getting returned by the outer query isn't from the inline view query. The NULL is a result of "no match" being found by the `LEFT [OUTER] JOIN` operation.
If you are referencing **`v.t1count`** in other expressions in the outer query, you can replace those references with **`IFNULL(v.t1count,0)`** as well. | The aggregate COUNT() will always return a value.
Reference: [Does COUNT(\*) always return a result?](https://stackoverflow.com/questions/2552086/does-count-always-return-a-result) | if count value is null set it to zero - sql select statement | [
"",
"mysql",
"sql",
"count",
""
] |
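The point of the accepted answer — the `NULL` comes from the outer `LEFT JOIN`, not from `COUNT()` — is easy to demonstrate (SQLite, invented rows):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE users (user_id INTEGER);
    CREATE TABLE screenshot_logs (user_id INTEGER);
    INSERT INTO users VALUES (1), (2);
    INSERT INTO screenshot_logs VALUES (1), (1);
""")

# User 2 has no log rows, so the LEFT JOIN yields NULL; IFNULL maps it to 0.
rows = con.execute("""
    SELECT u.user_id, IFNULL(t.c, 0)
    FROM users u
    LEFT JOIN (SELECT user_id, COUNT(*) AS c
               FROM screenshot_logs GROUP BY user_id) t
           ON t.user_id = u.user_id
    ORDER BY u.user_id
""").fetchall()
print(rows)  # [(1, 2), (2, 0)]
```

`COALESCE(t.c, 0)` is the standard-SQL spelling of the same thing and works across MySQL, SQLite, and most other engines.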
I'm trying to calculate the average time a person is a member, before he/she becomes a paying member.
I'm trying to select the ID-Date of creation - Date of first payment - time between creation and first payment - Average of the time between creation and first payment:
```
SELECT C.CompanyId, C.CreatedDate, CO.CreatedDate,
CAST(CO.CreatedDate - C.CreatedDate as int) AS 'Diff', FROM XXX.Company C
INNER JOIN -- some innerjoins--
WHERE -- Some conditions --
```
The first four selects work fine.
```
ID Date of creation Date of first payment 'Diff'
1 2013-12-29 00:00:00.000 2014-01-10 18:34:18.000 13
2 2013-12-09 00:00:00.000 2014-01-10 21:07:29.000 33
3 2013-12-12 00:00:00.000 2014-01-10 21:14:01.000 30
```
I would like to add AVG(Diff) AS Average to my select query
```
SELECT C.CompanyId, C.CreatedDate, CO.CreatedDate,
CAST(CO.CreatedDate - C.CreatedDate as int) AS 'Diff', AVG(CAST(Diff as int)) AS Average FROM XXX.Company C
INNER JOIN -- some innerjoins--
WHERE -- Some conditions --
```
Which gives "invalid column name"
I tried to change it to
```
SELECT C.CompanyId, C.CreatedDate, CO.CreatedDate,CAST(CO.CreatedDate - C.CreatedDate as int) AS 'Diff', AVG(CAST(CO.CreatedDate - C.CreatedDate as int)) AS Average FROM XXX.Company C
```
which gives the following error column is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
So basicly, how can I get the AVG() of a my CAST() as Diff?
SOLUTION
I used DATEDIFF <http://www.w3schools.com/sql/func_datediff.asp>
DATEDIFF(DAY,C.CreatedDate, CO.CreatedDate)
in order to remove the different datetime values, and get the difference
and used AVG around it:
AVG(DATEDIFF(DAY, C.CreatedDate,CO.CreatedDate)) | Didn't know how to mark my question as answered.
But I found the solution:
I used DATEDIFF <http://www.w3schools.com/sql/func_datediff.asp>
DATEDIFF(DAY,C.CreatedDate, CO.CreatedDate) in order to remove the different datetime values, and get the difference and used AVG around it: AVG(DATEDIFF(DAY, C.CreatedDate,CO.CreatedDate))
mods, Please close this topic :) | You have to group by some values to let the database know where the `avg` is applied to:
```
SELECT C.CompanyId
, AVG(CAST(CO.CreatedDate - C.CreatedDate as int)) AS Average
FROM XXX.Company C
GROUP
BY C.CompanyId
```
Also, you average and diff are conflicting, so you have to disable the cast then.
If you want all rows grouped on company and the two date created fields:
```
SELECT C.CompanyId
, C.CreatedDate
, CO.CreatedDate
, CAST(CO.CreatedDate - C.CreatedDate as int) AS 'Diff'
, AVG(CAST(CO.CreatedDate - C.CreatedDate as int)) AS Average
FROM XXX.Company C
GROUP
BY C.CompanyId
, C.CreatedDate
, CO.CreatedDate
``` | Get AVG() of CAST() | [
"",
"sql",
"sql-server-2008",
""
] |
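The `AVG(DATEDIFF(...))` solution is SQL Server syntax; SQLite spells the same day difference as `julianday(b) - julianday(a)`, which lets the idea be checked locally with the question's dates:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE pairs (created TEXT, first_payment TEXT)")
con.executemany("INSERT INTO pairs VALUES (?, ?)", [
    ("2013-12-29", "2014-01-11"),   # 13 days
    ("2013-12-09", "2014-01-11"),   # 33 days
    ("2013-12-12", "2014-01-11")])  # 30 days

# AVG wraps the whole-difference expression, which is why the raw
# datetime subtraction in the question could not be averaged directly.
avg_days = con.execute("""
    SELECT AVG(julianday(first_payment) - julianday(created)) FROM pairs
""").fetchone()[0]
print(avg_days)  # about 25.33
```

The key point carries over: compute the difference as a single numeric expression, then aggregate it, rather than casting raw datetime arithmetic.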
I have to find a composition of data stored in a database to fullfill a input value using sql.
Table first
```
ID| Value |
1 | 8 |
2 | 10 |
3 | 12 |
4 | 14 |
```
Table values
```
ID| Value |
1 | 2 |
2 | 3 |
3 | 4 |
4 | 5 |
5 | 6 |
```
If the input value is 8, i have to find possible compositions of the values in the table that equals this.
The possibilities:
```
ID, 1 and 5
ID, 3 and 3
ID, 2 and 4
```
how do i get this from an sql query? | If you are looking for pairs, you do a join with a condition:
```
select t1.Value, t2.Value
from table t1 join
table t2
on t1.Value + t2.Value = 8;
```
EDIT:
If you actually want to get 1 and 5 but not 5 and 1, then you need to use distinct and another condition in the `on` clause:
```
select distinct t1.Value, t2.Value
from table t1 join
table t2
on t1.Value + t2.Value = 8 and
t1.Id <= t2.Id;
```
The `distinct` is so "3, 3" doesn't appear twice. | You can do a self-`CROSS JOIN` and check against the total. The `<=` condition is to prevent duplicates arising from the reciprocal:
```
SELECT t1.ID, t2.ID
FROM tbl t1 CROSS JOIN tbl t2
WHERE t1.value + t2.value = 8
AND t1.ID <= t2.ID
``` | SQL query composition search | [
"",
"mysql",
"sql",
""
] |
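The self-join with the `<=` tie-breaker reproduces the question's expected pairs exactly; here it is run against the question's `values` table in SQLite:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE v (id INTEGER, value INTEGER)")
con.executemany("INSERT INTO v VALUES (?, ?)",
                [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6)])

# t1.id <= t2.id keeps each unordered pair once, including "4 + 4" as (3, 3).
rows = con.execute("""
    SELECT t1.id, t2.id
    FROM v t1 JOIN v t2
      ON t1.value + t2.value = 8 AND t1.id <= t2.id
    ORDER BY t1.id
""").fetchall()
print(rows)  # [(1, 5), (2, 4), (3, 3)]
```

For compositions of three or more values the same pattern extends with another self-join per term, though the combinatorics grow quickly.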
So in a facebook like wall we have the entities:
WallPost. (id, user\_id, message)
User. (id, name)
Connections. (user1\_id, user2\_id)
I want with one SQL query to get all the wall posts of a user with id: x (example 5)
I tried:
```
select wallPost.* from
wallPost, Connections
Where
wallPost.user_id = x
or
(Connections.user1_id =x and wallPost.user_id = connections.user2_id)
or
(Connections.user2_id =x and wallPost.user_id = connections.user1_id)
ORDER BY <field_name>
```
So beside the performance problem that this may create, it's also not working correctly when the user doesn't have any connections. I don't want to use two queries because I don't want to sort the results.
Thank you | You only want records from `wallPost`. I would suggest that you remove `Connections` table in the `from` clause and put the logic in subqueries using `exists`:
```
select wp.*
from wallPost wp
where wp.user_id = x or
exists (select 1 from Connections c where c.user1_id = x and wp.user_id = c.user2_id) or
exists (select 1 from Connections c where c.user2_id = x and wp.user_id = c.user1_id)
order by <field_name>;
```
For best performance, you would want two indexes, one on `Connections(user2_id, user1_id)` and the other on `Connections(user1_id, user2_id)`. | Why can't you do this?
```
SELECT WP.* FROM [WallPost] WP WITH (NOLOCK)
LEFT JOIN [Connections] CN WITH (NOLOCK)
ON ((WP.[User_Id] = CN.[User1_Id]) OR (WP.[User_Id] = CN.[User2_Id))
WHERE (WP.[User_Id] = X)
```
The LEFT JOIN will take care of users who have no connections. | Is there a way to improve this SQL query with 3 entities? | [
"",
"sql",
"join",
""
] |
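The accepted `EXISTS` query can be sanity-checked with an in-memory SQLite sketch — schema from the question, sample rows invented, including a user with no connections:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE wallPost (id INTEGER, user_id INTEGER, message TEXT);
    CREATE TABLE Connections (user1_id INTEGER, user2_id INTEGER);
    INSERT INTO wallPost VALUES (1, 5, 'mine'), (2, 7, 'friend'), (3, 9, 'loner');
    INSERT INTO Connections VALUES (5, 7);  -- user 9 has no connections
""")

def wall(x):
    # The EXISTS-based query from the accepted answer, parameterized on x.
    return [r[0] for r in conn.execute("""
        SELECT wp.id
        FROM wallPost wp
        WHERE wp.user_id = :x
           OR EXISTS (SELECT 1 FROM Connections c
                      WHERE c.user1_id = :x AND wp.user_id = c.user2_id)
           OR EXISTS (SELECT 1 FROM Connections c
                      WHERE c.user2_id = :x AND wp.user_id = c.user1_id)
        ORDER BY wp.id
    """, {"x": x})]

# user 5 sees their own post plus their connection's;
# user 9, with no connections, still sees their own post
```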
I've been tasked with writing a query to display report assignments from our company's database. There are three tables I need to query: REPORT, which contains the top-level information of the report, PROCEDURE, which can contain multiple results per report on the individual report procedures, and TECHNICIAN, which can contain multiple technicians per procedure based on who is assigned to it.
My problem is that if there are no technicians assigned to a procedure, it's not returning anything for that procedure, whereas ideally I'd like it to return a row with the technician field being "null".
The current code I have is this:
```
SELECT
rep.RPT_ID
,tech.TECH_ID
,proc.PROC_ID
FROM REPORT rep
LEFT JOIN TECHNICIAN tech ON tech.RPT_ID = rep.RPT_ID
LEFT JOIN PROCEDURE proc ON proc.RPT_ID = rep.RPT_ID
WHERE rep.LAB_ID in ('test_lab')
AND proc.PROC_ID = tech.PROC_ID
```
I'd like this to return something like this if no tech is assigned:
```
RPT_ID | TECH_ID | PROC_ID
12345 456 1
12345 NULL 2
67890 123 1
67890 345 1
```
But currently I'm not getting that second row. | Here is the query.
```
SELECT
rep.RPT_ID
,tech.TECH_ID
,proc.PROC_ID
FROM REPORT rep
LEFT JOIN TECHNICIAN tech ON tech.RPT_ID = rep.RPT_ID
LEFT JOIN PROCEDURE proc ON proc.RPT_ID = rep.RPT_ID AND proc.PROC_ID = tech.PROC_ID
WHERE rep.LAB_ID in ('test_lab');
``` | The problem is that you have `and proc.PROC_ID = tech.PROC_ID` in the `WHERE` clause.
You should move this into the `LEFT JOIN` of proc
`LEFT JOIN PROCEDURE proc ON proc.RPT_ID = rep.RPT_ID and proc.PROC_ID = tech.PROC_ID`
Think of it this way: in your query, SQL joins the tables using the join criteria and then filters the results based on the WHERE clause. | How to return null value if no record found on joined table | [
"",
"sql",
"sql-server",
"t-sql",
"left-join",
""
] |
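The NULL row the asker wants falls out of joining the procedures table first and left-joining technicians with the `PROC_ID` match inside the `ON` clause. A sketch with invented rows (`PROCEDURE` is renamed `proc_tbl` here only to avoid the reserved word):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE report (rpt_id INTEGER, lab_id TEXT);
    CREATE TABLE proc_tbl (rpt_id INTEGER, proc_id INTEGER);
    CREATE TABLE technician (rpt_id INTEGER, proc_id INTEGER, tech_id INTEGER);
    INSERT INTO report VALUES (12345, 'test_lab');
    INSERT INTO proc_tbl VALUES (12345, 1), (12345, 2);
    INSERT INTO technician VALUES (12345, 1, 456);  -- procedure 2 is unassigned
""")

rows = conn.execute("""
    SELECT rep.rpt_id, tech.tech_id, proc.proc_id
    FROM report rep
    JOIN proc_tbl proc ON proc.rpt_id = rep.rpt_id
    LEFT JOIN technician tech
           ON tech.rpt_id = rep.rpt_id AND tech.proc_id = proc.proc_id
    WHERE rep.lab_id = 'test_lab'
    ORDER BY proc.proc_id
""").fetchall()
# the unassigned procedure keeps its row, with a NULL tech_id
```

Had the `tech.proc_id = proc.proc_id` condition been in the `WHERE` clause instead, the NULL row would be filtered out — which is exactly the asker's symptom.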
For example, my table data looks like this:
```
ID intValue date
1 5 1/1/1
1 10 22/22/22
```
What I want to do is multiply the `intValue` column by 5, 10, and 15 for each instance of intValue.
So my result set would look like:
```
ID intValue date
1 25 1/1/1
1 50 1/1/1
1 75 1/1/1
1 50 22/22/22
1 100 22/22/22
1 150 22/22/22
```
How would I get a result set like this?
**SOLUTION**
[qxg's answer](https://stackoverflow.com/a/23636393/1504882) below most accurately answered how to best handle the question above. Another way to manipulate the data, and the method by which I used it was:
```
SELECT CASE WHEN CrossTable.Value = 5 THEN 'By Five!'
            WHEN CrossTable.Value = 10 THEN 'By Ten!'
            WHEN CrossTable.Value = 15 THEN 'By Fifteen!' END AS ByWhat,
       CrossTable.Value AS [Base Value],
       intValue * CrossTable.Value
FROM table1
CROSS JOIN (VALUES (5), (10), (15)) CrossTable(Value)
``` | Leverage CROSS JOIN and Values
```
SELECT ID, table1.intValue * Val.N, date
FROM table1
CROSS JOIN (VALUES (5), (10), (15)) Val(N)
``` | This solution only uses a single `union`, rather than unioning the table to itself:
```
WITH numbers
AS ( SELECT 5 AS n
UNION ALL
SELECT n + 5
FROM numbers
WHERE n + 5 <= 15
)
SELECT id ,
intValue * n AS intValue ,
date
FROM numbers
CROSS JOIN your_table
```
This solution is limited to providing sequences that can be expressed mathematically. However, its advantage is that with only a little adaptation the number of values can be supplied at run-time via a variable. | Apply Computation And Return As Another Row Without Union | [
"",
"sql",
"sql-server",
"row",
""
] |
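The CROSS JOIN fan-out from the accepted answer can be demonstrated in SQLite — which lacks the `Val(N)` table-alias column list, so the sketch below uses a named CTE instead; the `date` column is renamed `d` and the rows come from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (id INTEGER, intValue INTEGER, d TEXT);
    INSERT INTO table1 VALUES (1, 5, '1/1/1'), (1, 10, '22/22/22');
""")

rows = conn.execute("""
    WITH val(n) AS (VALUES (5), (10), (15))
    SELECT id, intValue * n, d
    FROM table1 CROSS JOIN val
    ORDER BY d, n
""").fetchall()
# each source row fans out into three rows, one per multiplier
```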
In Oracle SQL, you can easily do an update based on a NOT EXISTS condition in a correlated subquery. This is useful for doing updates based on another query or list of ids.
The mechanism for subqueries is different in Postgres... how can I achieve the same goal?
<http://sqlfiddle.com/#!1/1dbb8/55>
**How you would do it on Oracle**
```
UPDATE UserInfo a
SET a.username = 'not found'
WHERE NOT EXISTS (SELECT 'X'
FROM UserOrder b
WHERE b.userid = a.userid)
AND a.userid in (1,2,3);
```
**Postgres NOT EXISTS query: this works**
```
SELECT u.userid, u.username
FROM UserInfo AS u
WHERE NOT EXISTS
( SELECT *
FROM UserOrder AS o
WHERE o.userid = u.userid
);
```
**Postgres NOT EXISTS update : does not work**
```
UPDATE UserInfo
SET username = 'not found'
FROM (SELECT u.userid
FROM UserInfo AS u
WHERE NOT EXISTS
( SELECT *
FROM UserOrder AS o
WHERE o.userid = u.userid
)) em
WHERE em.userid = UserInfo.userid;
``` | Sqlfiddle regenerates the db contents each time you "run script", apparently. Run the select in the same script as the update.
Your Oracle query from sqlfiddle almost works, but you forgot to qualify the userid column with the table name. Corrected query:
```
/* How you would do it on Oracle */
UPDATE UserInfo
SET username = 'not found'
WHERE NOT EXISTS (SELECT 'X'
FROM UserOrder b
WHERE b.userid = userinfo.userid);
``` | ```
UPDATE UserInfo a
SET username = 'not found'
WHERE NOT EXISTS (
SELECT 'X'
FROM UserOrder b
WHERE b.userid = a.userid
)
AND a.userid in (1,2,3)
;
```
Should definitely work in Postgres. | Postgres: correlated subquery with NOT EXISTS | [
"",
"sql",
"oracle",
"postgresql",
""
] |
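The corrected answer's correlated `NOT EXISTS` UPDATE runs unchanged in SQLite too — a sketch with invented rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE UserInfo (userid INTEGER, username TEXT);
    CREATE TABLE UserOrder (userid INTEGER);
    INSERT INTO UserInfo VALUES (1, 'alice'), (2, 'bob'), (3, 'carol');
    INSERT INTO UserOrder VALUES (1), (3);  -- user 2 has no orders
""")

conn.execute("""
    UPDATE UserInfo
    SET username = 'not found'
    WHERE NOT EXISTS (SELECT 1 FROM UserOrder o
                      WHERE o.userid = UserInfo.userid)
""")

rows = conn.execute("SELECT userid, username FROM UserInfo ORDER BY userid").fetchall()
# only user 2, who has no orders, is rewritten
```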
I'm trying to add a condition to the WHERE clause in SQL Server 2005, but the issue is that I have to add it only when a given condition holds, like
```
where condition 1
and condition 2
if(@val !=null and @val = 'test')
begin
and condition 3
end
```
but when I try this SQL Server gives me error:
```
Incorrect syntax near the keyword 'and'. Severity 15 Procedure Name
Incorrect syntax near 'END'. Severity 15 Procedure Name
```
I did some R&D and found that I should use case statement instead and then I tried this
```
where condition 1
and condition 2
CASE @val
WHEN 'test' THEN AND condition 3
END
```
but now it's giving error:
```
Incorrect syntax near the keyword 'CASE'. Severity 15 Procedure Name
```
Can anyone help me sort out the problem? Thanks.
**EDIT**
```
select *
from Tables with joins
where ac_jd_date_closed >= convert(VARCHAR(10),dateadd(dd, -@period, getdate()), 101)
and ac_jd_date_closed <= convert(VARCHAR(10),dateadd(dd, 0, getdate()), 101)
and lo_pc_last_name not in ('Shroder')
and lm_ap_pr_id NOT IN (4743, 2683)
```
I want to add this condition only when @spm is David; otherwise this condition should not be there. | Thanks to everyone who answered the question, but I used a little trick to solve the problem in this way:
```
if(@spm='David')
begin
select *
from Tables with joins
where ac_jd_date_closed >= convert(VARCHAR(10),dateadd(dd, -@period, getdate()), 101)
and ac_jd_date_closed <= convert(VARCHAR(10),dateadd(dd, 0, getdate()), 101)
and lo_pc_last_name not in ('Shroder')
and lm_ap_pr_id NOT IN (4743, 2683)
end
else
begin
select *
from Tables with joins
where ac_jd_date_closed >= convert(VARCHAR(10),dateadd(dd, -@period, getdate()), 101)
and ac_jd_date_closed <= convert(VARCHAR(10),dateadd(dd, 0, getdate()), 101)
and lo_pc_last_name not in ('Shroder')
end
```
This means I used the same full query with the extra condition; hope this trick will help someone solve a similar issue. | You can remove the `CASE` statement and substitute `AND (@Val <> 'test' OR (@Val = 'test' AND condition 3))`:
```
WHERE <condition 1>
AND <condition 2>
AND (@Val<> 'test' OR (@Val = 'test' AND <condition 3>))
``` | using if/case in sql server where clause | [
"",
"sql",
"sql-server",
""
] |
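The rejected answer's single-query pattern — folding the optional branch into the WHERE clause — can be sketched with a bound parameter standing in for @spm (table, column, and rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, pr_id INTEGER);
    INSERT INTO orders VALUES (1, 4743), (2, 2683), (3, 100);
""")

def fetch(spm):
    # (:spm <> 'David' OR ...) means the extra filter only bites for 'David'.
    return [r[0] for r in conn.execute("""
        SELECT id FROM orders
        WHERE (:spm <> 'David' OR pr_id NOT IN (4743, 2683))
        ORDER BY id
    """, {"spm": spm})]
```

One caveat worth noting: if the parameter can be NULL, `:spm <> 'David'` evaluates to NULL, so a real query would need an explicit NULL check as well.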
If I have the string:
```
[30.345, -97.345, 4],[30.345, -97.345, 5],[30.345, -97.345, 6],[30.345, -97.345, 7]
```
How would I remove every third comma, so that the string would look like this?
```
[30.345, -97.345, 4][30.345, -97.345, 5][30.345, -97.345, 6][30.345, -97.345, 7] ?
```
Thanks in advance. | how about replacing `],[` with `][`
```
SELECT REPLACE('[30.345, -97.345, 4],[30.345, -97.345, 5],[30.345, -97.345, 6],[30.345, -97.345, 7].','],[','][');
``` | sql is too general; need more info.
Here is ORACLE:
select replace('your string', '],[', '][') from dual | Removing commas from a string using SQL | [
"",
"sql",
""
] |
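The accepted `REPLACE('],[' -> '][')` trick, run through SQLite with a bound parameter (shortened sample string):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
s = "[30.345, -97.345, 4],[30.345, -97.345, 5]"
# REPLACE swaps every "],[" for "][", removing exactly the between-group commas
(out,) = conn.execute("SELECT REPLACE(?, '],[', '][')", (s,)).fetchone()
```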
SQL is not my strong point and I am struggling to find a solution to this issue. I am trying to figure out how I can get a result set based on the following logic: select record A when A is not in B, or select B if the record appears in both B and A. I tried the following union, which returns all the records that match from the current day in the two tables, but I cannot figure out how to pull the data I need from the two tables.
```
SELECT 'a',PurchaseOrderLine.iPurchaseOrderLineId, sProductDescription
FROM PurchaseOrderLine
WHERE PurchaseOrderLine.dRequiredDate = convert(date, getdate())
UNION
SELECT 'b',PurchaseOrderLine.iPurchaseOrderLineId, sProductDescription
FROM GoodsIn
INNER JOIN PurchaseOrderLine
ON PurchaseOrderLine.iPurchaseOrderLineId = GoodsIn.iPurchaseOrderLineId
WHERE GoodsIn.dDateDelivered = getdate()
``` | You can do a left outer join, and use an ISNULL or CASE statement in the select to return the required values.
I'll demonstrate:
```
SELECT
CASE WHEN b.iPurchaseOrderLineId IS NOT NULL THEN 'b' ELSE 'a' END AS [Source],
a.iPurchaseOrderLineId,
ISNULL(b.sProductDescription, a.sProductDescription) AS [sProductDescription]
FROM PurchaseOrderLine AS a
LEFT JOIN GoodsIn AS b ON a.iPurchaseOrderLineId = b.iPurchaseOrderLineId
AND b.dDateDelivered = GETDATE()
WHERE b.iPurchaseOrderLineId IS NOT NULL
OR a.dRequiredDate = CONVERT(DATE, GETDATE())
```
Hope that helps! | Hope this will help you; just an example similar to yours.
```
create table A(id int , name char(12))
go
create table B(id int , name char(12))
go
insert into A values (1,'ABC'),(3,'WXY')
insert into B values (1,'ABC'),(2,'AAA')
SELECT a.id,a.name FROM A EXCEPT SELECT * FROM B
UNION
SELECT a.id,a.name FROM A inner join b on a.id=b.id and a.name=b.name
```
Thanks!!! | SQL: Select A when in A and not B or select B when in A and B | [
"",
"sql",
"sql-server",
""
] |
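The accepted LEFT JOIN idea can be sketched in SQLite with `COALESCE` standing in for the T-SQL-only `ISNULL` (two-column tables and rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (line_id INTEGER, descr TEXT);
    CREATE TABLE b (line_id INTEGER, descr TEXT);
    INSERT INTO a VALUES (1, 'a-one'), (2, 'a-two');
    INSERT INTO b VALUES (2, 'b-two');  -- only line 2 appears in both
""")

rows = conn.execute("""
    SELECT CASE WHEN b.line_id IS NOT NULL THEN 'b' ELSE 'a' END,
           a.line_id,
           COALESCE(b.descr, a.descr)
    FROM a LEFT JOIN b ON a.line_id = b.line_id
    ORDER BY a.line_id
""").fetchall()
# line 1 comes from A; line 2 is overridden by its B match
```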
I have the following data:
```
Id: 1 Name: apple ForeignKey: 10
Id: 2 Name: apple ForeignKey: 10
Id: 3 Name: apple ForeignKey: 15
Id: 4 Name: peach ForeignKey: 11
Id: 5 Name: peach ForeignKey: 12
```
Rows with same Name and ForeignKey are duplicates in my case. Now I want to remove all the duplicates except one instance from the table.
In other words: I want to remove all rows but one where Name and ForeignKey are equal.
With the data above, only the row with Id 1 or Id 2 should be removed.
With
```
select count(Name), Name, ForeignKey group by Name, ForeignKey having count(Name)>1
```
I am able to find items where more than 1 row with same Name and Foreign key exist, but how to get the IDs of those rows? And how to get the IDs of those rows except the first/last occurrence of that row with same Name and ForeignKey? | The answer is database-specific, but here is how you can do it by joining the table to itself:
```
delete t1
from yourtable t1
join yourtable t2 on t1.id > t2.id
and t1.name = t2.name
and t1.foreignkey = t2.foreignkey
```
* [SQL Fiddle Demo](http://sqlfiddle.com/#!3/7c122/1) | You can also do it with a CTE & window function, deleting the duplicate rows by counting the number of rows that are the same, and then deleting all but one.
[SQL Fiddle demo](http://www.sqlfiddle.com/#!3/fc217/2)
```
;WITH myvals
AS (
SELECT [id]
,[name]
,[foreignkey]
,ROW_NUMBER() OVER (
PARTITION BY [name]
,[foreignkey] ORDER BY [id]
,[name]
,[foreignkey]
) AS inst_count
FROM yourtable
)
DELETE
FROM myvals
WHERE inst_count > 1;
``` | How to delete duplicate rows in SQL | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
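The accepted answer's multi-table DELETE is MySQL syntax and doesn't port everywhere; an equivalent correlated-`EXISTS` form (keeping the lowest id per Name/ForeignKey group) runs in SQLite, sketched here with the question's rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE fruit (id INTEGER, name TEXT, fk INTEGER);
    INSERT INTO fruit VALUES
        (1, 'apple', 10), (2, 'apple', 10), (3, 'apple', 15),
        (4, 'peach', 11), (5, 'peach', 12);
""")

# Delete any row for which a same-name, same-fk row with a smaller id exists.
conn.execute("""
    DELETE FROM fruit
    WHERE EXISTS (SELECT 1 FROM fruit f
                  WHERE f.name = fruit.name AND f.fk = fruit.fk
                    AND f.id < fruit.id)
""")

ids = [r[0] for r in conn.execute("SELECT id FROM fruit ORDER BY id")]
# only the duplicate (id 2) is removed
```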
I have a pivot table on one sheet that is coming from a Microsoft Query MySQL datafeed on another one of my sheets.
Consider the information from the datafeed to be like so:
```
date | order | SKU | Quantity
-----------------------------------
5/1/14 123456 11111 1
5/1/14 234567 22222 1
5/1/14 456789 33333 2
5/2/14 987654 44444 1
5/2/14 876543 55555 3
```
When I make a pivot table for this information, I use the date for the row labels. I then want to count the amount of SKUs for that day, and add the quantity of SKUs for that day. So I drag the SKU column into the values section and make sure that COUNT is selected. I then drag the Quantity column into the value section, and when I select SUM, my values wind up being zero. See below for what is happening:
```
Row Labels | Count of SKUs | Sum of Quantity
------------------------------------------------
5/1/14 3 0
5/2/14 2 0
```
The Sum of Quantity column should not be zeros, it should be 4 for 5/1 and 4 for 5/2. I have never encountered this problem before, and I am assuming it has something to do with the datafeed being linked to a MySQL query.
I have tried to change the numbers in the Quantity column to number format with no luck. I have absolutely no idea what is causing this, and I'm assuming it's probably something simple that I am overlooking. But I was hoping someone else has encountered this problem and/or has a solution to this.
Help please!
Thanks in advance!
-Anthony
SOLUTION (below):
The data type of the Quantity column in the MySQL database table was VARCHAR. I changed the data type to INT and refreshed the datafeed, and now the pivot table works fine. | The solution was that the "quantity" column in the MySQL database was VARCHAR. Once the column was changed to INT and the datafeed was refreshed, the pivot table worked fine. | I am guessing that your query returns these numbers as text. If you input three 1's in cells preceded by an apostrophe to signal Excel that you want them to be considered strings and then use SUM() over that range it will yield zero. Consider converting your column to numbers. | SUM function in Pivot Table Not Working | [
"",
"mysql",
"sql",
"excel",
"pivot-table",
"ms-query",
""
] |
I am trying to create a query that returns the **accountservice\_id** with the MAX **fromdate** for each **service\_id**. Each **service\_id** can have multiple **accountservice\_id**'s tied to it and unfortunately the MAX **accountservice\_id** doesn't always have the MAX **fromdate**.
For example:
```
service_id accountservice_id fromdate
---------------------------------------------------
3235 1081 2009-12-01 12:00:00
3235 1007 2013-03-15 12:00:00
3235 2104 2012-10-25 12:00:00
3340 1047 2009-12-15 13:50:00
```
Below is my current query.
```
SELECT service.service_id, accountservice.accountservice_id, accountservice.fromdate
FROM service
INNER JOIN accountservice ON service.service_id = accountservice.service_id
WHERE (service.servicetype_id IN (1, 74571, 74566))
ORDER BY service.service_id, accountservice.fromdate
``` | You can do subqueries to get the data. Use one query to get the maximum fromdate for the service\_id, then join that to a search for the accountservice\_id that matches the service\_id and max fromdate.
```
SELECT maxfromdate.service_id, correct_account.accountservice_id, maxfromdate.maxfromdate
FROM (SELECT
Service.service_id,
MAX (accountservice.fromdate) AS maxfromdate
FROM service
JOIN accountservice ON service.service_id = accountservice.service_id
GROUP BY Service.service_id) maxfromdate
JOIN (SELECT
Service.service_id,
accountservice.accountservice_id,
accountservice.fromdate
FROM service
JOIN accountservice ON service.service_id = accountservice.service_id
)correct_account ON
(maxfromdate.service_id = correct_account.service_id
AND maxfromdate.maxfromdate = correct_account.fromdate)
``` | CTE + `ROW_NUMBER()` is a winning combo.
```
;with cte as (
SELECT service_id,
accountservice_id,
max_date_rank = row_number()
over(partition by service_id
order by fromdate desc)
FROM service
)
SELECT *
FROM cte
WHERE max_date_rank = 1
```
ETA: With a derived table:
```
SELECT *
FROM (
SELECT service_id,
accountservice_id,
max_date_rank = row_number()
over(partition by service_id
order by fromdate desc)
FROM service
) t
WHERE t.max_date_rank = 1
``` | Choose row based on MAX date (multiple tables) | [
"",
"sql",
"sql-server",
"reporting-services",
"greatest-n-per-group",
"ssrs-2008-r2",
""
] |
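The accepted "join back to the per-group MAX" approach can be checked against the question's sample rows with an in-memory SQLite sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accountservice (service_id INTEGER, accountservice_id INTEGER,
                                 fromdate TEXT);
    INSERT INTO accountservice VALUES
        (3235, 1081, '2009-12-01 12:00:00'),
        (3235, 1007, '2013-03-15 12:00:00'),
        (3235, 2104, '2012-10-25 12:00:00'),
        (3340, 1047, '2009-12-15 13:50:00');
""")

rows = conn.execute("""
    SELECT a.service_id, a.accountservice_id, a.fromdate
    FROM accountservice a
    JOIN (SELECT service_id, MAX(fromdate) AS maxdate
          FROM accountservice GROUP BY service_id) m
      ON m.service_id = a.service_id AND m.maxdate = a.fromdate
    ORDER BY a.service_id
""").fetchall()
# one row per service_id: the one with the latest fromdate,
# even though 1007 is not the MAX accountservice_id for 3235
```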