Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
This seems like it should be a classic invoice-item problem that has already been solved but maybe I'm not using the right words in my searches.
I am running a query like this (this is just a simple example, my real query is much more complex but it returns the same results):
```
select invoice.inv_num, item.name, item.qty
from invoice invoice, item
where invoice.inv_num = item.inv_num
order by invoice.inv_num
```
I need to generate an item number column that increments for each item but starts over at 1 for each new invoice number. So, for example, I need the end result to look something like this:
```
inv_num item_num name qty
------- -------- ------------- ---
111 1 red widgets 10
111 2 blue widgets 5
222 1 green_widgets 7
222 2 red_widgets 16
222 3 black_widgets 10
333 1 blue_widgets 8
333 2 red_widgets 12
```
We are still using Oracle 9i in case that makes a difference. | You can use the Oracle rank or row\_number analytic functions (depending on how you want to deal with duplicates/equally ranked items).
Here's how you would add a 4th column item\_number to your query:
```
select invoice.inv_num, item.name, item.qty ,
row_number() over (partition by inv_num order by qty desc) item_num
from invoice invoice, item
where invoice.inv_num = item.inv_num
order by invoice.inv_num
```
* The counter resets at each new invoice number because of the
partition by clause.
* Within an invoice, the rank/item number is decided by qty (highest
to lowest).
* In the above query, rank or row\_number will give the same result with your data. But if there are multiple items with the same quantity in an invoice (10 red, 10 blue widgets), rank will give you equal item numbers, so in this case row\_number is appropriate.
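The tie-handling difference can be checked with a quick sketch using Python's bundled sqlite3 (an illustrative environment, not Oracle 9i; sqlite gained window functions in 3.25, and their semantics match here):

```python
import sqlite3

# Two items in invoice 111 share qty = 10, so they tie under ORDER BY qty DESC.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE item (inv_num INT, name TEXT, qty INT);
INSERT INTO item VALUES
  (111, 'red widgets', 10),
  (111, 'blue widgets', 10),  -- tie on qty within the invoice
  (222, 'green widgets', 7);
""")
rows = con.execute("""
SELECT inv_num, name,
       ROW_NUMBER() OVER (PARTITION BY inv_num ORDER BY qty DESC) AS rn,
       RANK()       OVER (PARTITION BY inv_num ORDER BY qty DESC) AS rk
FROM item
ORDER BY inv_num, rn
""").fetchall()
# invoice 111: row_number gives 1,2 on the tie; rank gives 1,1
for r in rows:
    print(r)
```

The counter restarts at 1 for invoice 222 in both cases; only the tied rows differ between the two functions.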
<http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions001.htm#i81407> | ```
select invoice.inv_num,
item.name,
item.qty,
row_number() OVER(PARTITION BY invoice.inv_num order by item.qty desc) as item_num
from invoice invoice, item
where invoice.inv_num = item.inv_num
order by invoice.inv_num
```
`row_number()` generates numbers starting at 1, and we restart the sequence for every \*INV\_NUM\* using the `PARTITION BY` clause; the numbering is ordered by qty. | How do I generate line item numbers in an Oracle SQL query | [
"",
"sql",
"oracle",
""
] |
I initially thought this would be fairly simple but for some reason I'm struggling with it.
If I have a table that looks like this:
```
table1
Date ID Quantity
2/21 1 100
2/21 2 500
2/21 3 200
2/20 2 600
2/20 3 400
2/20 5 2000
```
And I want to join this data to look like:
```
ID prev_date prev_quantity curr_date curr_quantity
1 2/20 0 2/21 100
2 2/20 600 2/21 500
3 2/20 400 2/21 200
5 2/20 2000 2/21 0
```
The tricky part is that ID 1 doesn't have an entry for the previous day (so make it 0) and ID 5 doesn't have an entry for the current day (make that 0 as well).
What is the best way to do this? Thanks in advance! | This may help you:
```
SELECT
ISNULL(T.ID,T1.ID),
ISNULL(T.DATE,GETDATE()) AS 'CurrDate',
ISNULL(T.Quantity,0) AS 'CurrQty',
ISNULL(T1.DATE,GETDATE()-1) AS 'PrevDate' ,
ISNULL(T1.Quantity,0) AS 'PrevQty'
FROM
(
SELECT
T.ID,
T.Quantity,
T.DATE
FROM @TABLE T
WHERE T.DATE = CONVERT(VARCHAR,GETDATE(),100)
)T
FULL OUTER JOIN
(
SELECT
T1.ID,
T1.Quantity,
MAX(T1.DATE) AS [DATE]
FROM @TABLE T1
WHERE T1.DATE <> CONVERT(VARCHAR,GETDATE(),100)
GROUP BY T1.ID,T1.Quantity
) T1 ON T.ID = T1.ID
``` | ```
SELECT curr.ID, ISNULL(Prev.date, DATEADD(DD, -1, curr.Date)) AS Prev_Date,
ISNULL(prev.Quantity, 0) AS Prev_Quantity,
ISNULL(curr.date, DATEADD(DD, 1, prev.Date)) AS Curr_Date, ISNULL(curr.Quantity, 0) AS Curr_Quantity
FROM table1 curr
FULL OUTER JOIN
table1 prev
ON curr.ID = prev.ID AND prev.Date = DATEADD(DD, -1, curr.Date)
``` | SQL Best way to JOIN day over day data into same row including new and deleted IDs | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
How can I get the columns that an index of a table uses in DB2?
I tried:
```
DESCRIBE INDEXES FOR TABLE 'MYTABLE' SHOW DETAIL;
```
But I get the error message
> ILLEGAL SYMBOL "INDEXES". SOME SYMBOLS THAT MIGHT BE LEGAL ARE: PROCEDURE PROC. SQLCODE=-104, SQLSTATE=42601, DRIVER=4.16.53
Ideally I want information of all indexes a table uses with their corresponding columns.
I am using DB2 for z/OS V9.1 | You can use this query to show the indexes and their columns for your tables:
```
SELECT IX.tbname,
KEY.ixname,
KEY.colname
FROM sysibm.syskeys KEY
JOIN sysibm.sysindexes IX
ON KEY.ixname = IX.name
WHERE IX.tbname IN ( 'SOMETABLE', 'ANOTHERTABLE' )
ORDER BY IX.tbname,
KEY.ixname,
KEY.colname;
``` | ```
SELECT * FROM SYSIBM.SYSKEYS WHERE IXNAME IN
(SELECT NAME FROM SYSIBM.SYSINDEXES WHERE TBNAME = 'your_table_name')
```
I have tested it; it gives all the columns that are used in indexes. | Get columns of index on DB2 | [
"",
"sql",
"indexing",
"db2",
""
] |
Consider a database of accounts and deposits:
```
CREATE TABLE accounts (
id int not null primary key,
name varchar(63)
);
CREATE TABLE deposits (
id int not null primary key,
account int references accounts(id),
dollars decimal(15, 2),
status enum('pending','complete')
);
insert into accounts values
(0, 'us'),
(1, 'europe'),
(2, 'asia');
insert into deposits values
(0, 0, 10, 'pending'),
(1, 0, 20, 'complete'),
(2, 1, 100, 'complete'),
(3, 1, 200, 'pending'),
(4, 1, 300, 'complete'),
(5, 2, 1000, 'pending');
```
I would like to get a total of all the `complete` deposits per bank. This is the expected result:
```
+--------+-----+
| us | 20 |
| europe | 400 |
| asia | 0 |
+--------+-----+
```
This is the SQL that I tried, but it does not work as expected:
```
SELECT
a.name, SUM(d.dollars)
FROM
accounts a
INNER JOIN
deposits d ON (a.id = d.account AND d.status='complete');
```
This is the result that it gave:
```
+--------+-----+
| us | 420 |
+--------+-----+
```
[Here is an SQLfiddle](http://sqlfiddle.com/#!2/ce81f/5/0) of the current code.
What have I done wrong, and how can I get the expected sum? | Try this:
```
SELECT
a.name, coalesce(SUM(d.dollars),0) as sums
FROM
accounts a
left JOIN
deposits d ON (a.id = d.account AND d.status='complete')
group by a.name
order by sums desc
```
You should use LEFT JOIN, and you should also use GROUP BY.
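As a sanity check, the LEFT JOIN + COALESCE + GROUP BY pattern can be reproduced with Python's sqlite3 (a sketch of the same query; sqlite has no ENUM type, so status is plain TEXT here):

```python
import sqlite3

# Rebuild the question's accounts/deposits data and run the fixed query.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE accounts (id INT PRIMARY KEY, name TEXT);
CREATE TABLE deposits (id INT PRIMARY KEY, account INT, dollars REAL, status TEXT);
INSERT INTO accounts VALUES (0,'us'),(1,'europe'),(2,'asia');
INSERT INTO deposits VALUES
  (0,0,10,'pending'),(1,0,20,'complete'),(2,1,100,'complete'),
  (3,1,200,'pending'),(4,1,300,'complete'),(5,2,1000,'pending');
""")
rows = con.execute("""
SELECT a.name, COALESCE(SUM(d.dollars), 0) AS sums
FROM accounts a
LEFT JOIN deposits d ON a.id = d.account AND d.status = 'complete'
GROUP BY a.name
ORDER BY sums DESC
""").fetchall()
print(rows)  # europe 400, us 20, asia 0
```

The LEFT JOIN keeps `asia` even though it has no complete deposits, and COALESCE turns its NULL sum into 0.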
[**LOOK DEMO**](http://sqlfiddle.com/#!2/91103/6) | You should use grouping by `a.name` (or maybe even `a.id`) and `LEFT OUTER JOIN` (if you want to get non-present values).
EDIT:
```
SELECT
a.name, SUM(d.dollars)
FROM
accounts a
LEFT OUTER JOIN
deposits d ON (a.id = d.account AND d.status='complete')
GROUP BY a.name;
``` | Get total of referenced tables' column | [
"",
"mysql",
"sql",
"join",
"sum",
""
] |
I have a situation where I have to get data from three different tables, and I want to use a single query for that. How can I do this?
Here in the situation:
```
table_one
emp_id dept_id name
GT102 206 Test
GT103 201 Test1
GT105 111 Test2
GT106 340 Test3
table_two
Dept_id department
111 Finance
340 Product
201 IT
table_three
emp_id Salary
GT102 10,000
GT103 20,000
GT105 40,000
GT106 50,000
```
I want to fetch the department and salary of Test2. | Try this:
```
SELECT t1.name,
t2.department,
t3.salary
FROM table_one t1
INNER JOIN table_two t2
ON t2.dept_id = t1.dept_id
INNER JOIN table_three t3
ON t1.emp_id = t3.emp_id
WHERE t1.name = 'Test2'
``` | Try this:
```
SELECT table_two.department, table_three.Salary
FROM table_one, table_two, table_three
WHERE table_one.emp_id = table_three.emp_id
  AND table_one.dept_id = table_two.Dept_id
  AND table_one.name = 'Test2'
``` | How to get data from 3 different tables in MySQL | [
"",
"mysql",
"sql",
""
] |
I'm only able to SELECT on a table.
The table has a column called inty with a value of 0 or 1
I am currently selecting inty as:
```
SELECT inty AS InternetApproved FROM Table1
```
Is there a way to reformat the data within the SQL SELECT so that a value of 0 is displayed as No and a value of 1 as Yes in the SELECT output? | A simple and easy way to achieve this is:
```
SELECT IF(inty = 1, 'YES', 'No') AS internetApproved FROM Table1
``` | ```
SELECT
CASE
WHEN inty = 0 then 'No'
WHEN inty = 1 then 'Yes'
ELSE 'Maybe'
END
AS InternetApproved
FROM Table1
``` | Change value from 1 To Yes In MySQL select statement | [
"",
"mysql",
"sql",
"case",
""
] |
I want to add a foreign key from table `Customers`, column `CustomerID`, to table `Pet`, column `CustomerID`.
```
-- Table structure for table `Customers`
CREATE TABLE IF NOT EXISTS `Customers` (
`CustomerID` varchar(50) NOT NULL,
`Fname` varchar(50) DEFAULT NULL,
`LName` varchar(20) DEFAULT NULL,
`Tel` varchar(20) DEFAULT NULL,
`Fax` varchar(20) DEFAULT NULL,
`CustType` varchar(20) DEFAULT NULL,
`AdState` varchar(50) DEFAULT NULL,
`City` varchar(20) DEFAULT NULL,
`Zip` varchar(20) DEFAULT NULL,
`Street` varchar(20) DEFAULT NULL,
PRIMARY KEY (`CustomerID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
-- Dumping data for table `Customers`
INSERT INTO `Customers` (`CustomerID`, `Fname`, `LName`, `Tel`, `Fax`, `CustType`, `AdState`, `City`, `Zip`, `Street`) VALUES
('AC001', 'All', 'Creatures', '206 555-6622', '206 555-7854', '2', 'WA', 'Tall Pines', '98746', '21 Grace St.'),
('AD001', 'Johnathan', 'Adams', '206 555 7623', '206 555 8855', '1', 'WA', 'Mountain View', '984101012', '66 10th St'),
('AD002', 'William', 'Adams', '503 555 7623', '503 555 7319', '1', 'OR', 'Lakewille', '9740110011', '1122 10th_St'),
('AK001', 'Animal', 'Kingdom', '208 555 7108', '', '2', 'ID', 'Borderville', '834835646', '15 Marlin Lane');
CREATE TABLE IF NOT EXISTS `Pet` (
`ID` varchar(50) NOT NULL,
`CustomerID` varchar(50) NOT NULL,
`Gender` varchar(20) DEFAULT NULL,
`Race` varchar(20) DEFAULT NULL,
`Name` varchar(20) DEFAULT NULL,
`Kind` varchar(20) DEFAULT NULL,
`Birthday` varchar(20) DEFAULT NULL,
PRIMARY KEY (`ID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
-- Dumping data for table `Pet`
INSERT INTO `Pet` (`ID`, `CustomerID`, `Gender`, `Race`, `Name`, `Kind`, `Birthday`) VALUES
('AC001-01', '0', 'M', 'Long Ear', 'Bobo', 'Rabbit', '4/8/92'),
('AC001-02', '0', 'F', 'Chameleon', 'Presto Chango', 'Lizard', '5/1/92'),
('AC001-03', '0', 'M', '', 'Stinky', 'Skunk', '8/1/91'),
('AC001-04', '0', 'M', 'German Shepherd', 'Fido', 'Dog', '6/1/90'),
('AD001-01', '0', 'F', 'Potbelly', 'Patty', 'Pig', '2/15/91'),
('AD001-02', '0', 'M', 'Palomino', 'Rising Sun', 'Horse', '4/10/90'),
('AD002-01', '0', 'F', 'Mixed', 'Dee Dee', 'Dog', '2/15/91'),
('AK001-03', '0', 'M', '', 'Jerry', 'Rat', '2/1/88'),
('AK001-07', '0', 'M', 'Beagle', 'Luigi', 'Dog', '8/1/92');
```
This is the code that I have been using to add the foreign key:
```
ALTER TABLE Pet ADD CONSTRAINT Pet_FK
FOREIGN KEY (CustomerID) REFERENCES Customers (CustomerID);
```
And the error message from this is:
```
#1452 - Cannot add or update a child row: a foreign key constraint fails
(`hospital`.`#sql-523_76e`, CONSTRAINT `Pet_FK` FOREIGN KEY (`CustomerID`)
REFERENCES `Customers` (`CustomerID`))
```
I am quite a beginner with databases and I have no idea what I should try next.
I think that's all. I'm still new to Stack Overflow, so if I missed any necessary information please tell me and I will add it.
UPDATE\*\*\*
```
ALTER TABLE Customers ADD CONSTRAINT Customers_FK
FOREIGN KEY (CustomerID) REFERENCES Pet (CustomerID);
```
*I swapped some positions and the error code I receive is:*
```
#1215 - Cannot add foreign key constraint
``` | Simple one.
There is a row that contains a CustomerID that can't be matched (every row in `Pet` was inserted with CustomerID '0', which does not exist in `Customers`). So first you need to remove/edit/handle those entries and then add the foreign key. | The `CustomerID` you're trying to enter in the `Pet` table does not exist in the `Customers` table, and that is why your foreign key constraint fails and throws an error.
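The failure mode can be reproduced in miniature with Python's sqlite3 (a sketch, not MySQL/InnoDB; sqlite enforces the same referential rule once foreign keys are switched on, and it also accepts NULL child values):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # sqlite needs this per connection
con.execute("CREATE TABLE Customers (CustomerID TEXT PRIMARY KEY)")
con.execute("""CREATE TABLE Pet (
    ID TEXT PRIMARY KEY,
    CustomerID TEXT REFERENCES Customers(CustomerID))""")
con.execute("INSERT INTO Customers VALUES ('AC001')")

con.execute("INSERT INTO Pet VALUES ('AC001-01', 'AC001')")  # parent exists: OK
con.execute("INSERT INTO Pet VALUES ('AK001-03', NULL)")     # NULL child: OK
try:
    con.execute("INSERT INTO Pet VALUES ('XX-01', '0')")     # '0' has no parent
    failed = False
except sqlite3.IntegrityError:
    failed = True
print("unmatched insert rejected:", failed)
```

The same logic explains the `#1452` error: adding the constraint validates existing child rows, and the `'0'` values have no parent to match.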
You need to ensure that the CustomerIDs you're entering in your `Pet` table exist in the `Customers` table, OR simply insert `NULL` in the `Pet.CustomerID` field. | Adding Foreign Key Error | [
"",
"mysql",
"sql",
"database",
""
] |
I have a table `student(name_std,age_std,address_std)`. When i write query
```
select * from Student
```
it displays
```
name_std age_std address_std
Deepak 22 London
Anjali 16 Srilanka
```
My requirement is that when I query `select * from Student`, it should give me output like:
```
name_std age_std address_std
**Name Age Address** <<alias row>>
Deepak 22 London
Anjali 16 Srilanka
```
Please suggest a solution. | Try `UNION` like below:
```
SELECT 'Name' as name_std,'Age' as age_std, 'Address' as address_std FROM dual
UNION
SELECT name_std,to_char(age_std),address_std FROM Student
``` | I think you can just apply an `alias` to each column to achieve your desired output.
```
SELECT name_std as Name, age_std as Age, address_std as Address
FROM Student
``` | adding alias row in select query using union | [
"",
"sql",
"oracle",
""
] |
Hello, I would like to select each auction ID that is associated with a bid with status = 'Won', and also each auction ID whose status is 'Expired', as two separate columns in my statement. Can someone help, please? The first column will be called Won and the second one Expired.
```
SELECT
Auction.AuctionID,
DATENAME(mm, BID.Date) AS Month
FROM
BID INNER JOIN Auction ON BID.AuctionID = Auction.AuctionID
WHERE
BID.Status = 'Won'
ORDER BY Month
```
Desired result:
```
AuctionID Month Won Expired
----------- ------------------------------ ---- -------
1 January 32 22
2 March 10 22
3 April 0 2
``` | If the result you want is something like this:
```
AuctionID Month Won Expired
----------- ------------------------------ ---- -------
1 January Won NULL
2 January Won NULL
3 January NULL Expired
```
Then you could use this query:
```
SELECT
A.AuctionID,
DATENAME(mm, B.Date) AS Month,
CASE Status WHEN 'Won' THEN 'Won' ELSE NULL END AS Won,
CASE Status WHEN 'Expired' THEN 'Expired' ELSE NULL END AS Expired
FROM BID b
INNER JOIN Auction a ON B.AuctionID = A.AuctionID
ORDER BY Month, AuctionID
```
But if the result you want is like this:
```
AuctionID Month Status
----------- ------------------------------ --------------------
1 January Won
2 February Won
3 January Expired
```
Then this query would do:
```
SELECT
A.AuctionID,
DATENAME(mm, B.Date) AS Month,
Status
FROM BID b
INNER JOIN Auction a ON B.AuctionID = A.AuctionID
ORDER BY Status Desc, Month DESC, AuctionID
```
Third try. To get this:
```
Month Won Expired
------------------------------ ----------- -----------
January 1 1
February 1 0
```
Use this query:
```
SELECT
DATENAME(mm, B.Date) AS Month,
SUM(CASE WHEN Status = 'Won' THEN 1 ELSE 0 END) AS Won,
SUM(CASE WHEN Status = 'Expired' THEN 1 ELSE 0 END) AS Expired
FROM BID b
INNER JOIN Auction a ON B.AuctionID = A.AuctionID
GROUP BY DATENAME(mm, B.Date), B.Date
ORDER BY b.Date
Can you try something like this? It's not exact, just an overview:
```
SELECT
Auction.AuctionID,
DATENAME(mm, b1.Date) AS Month,
COUNT(*) AS `Won`,
(SELECT count(*) from BID b2 WHERE DATENAME(mm, b1.Date) = DATENAME(mm, b2.Date))-COUNT(*) AS `EXPIRED`
FROM
BID b1
INNER JOIN Auction ON BID.AuctionID = Auction.AuctionID
WHERE
BID.Status = 'Won'
GROUP BY Month
``` | Select 2 different value from 2 table | [
"",
"sql",
"sql-server",
""
] |
My problem is this (using SQL Server 2008 R2).
There are some date columns whose type is `datetime`.
The original intention with these columns was to store a date without any time,
but `datetime` was chosen as the datatype.
Sure, it works, but as the database is also logically connected to a UML diagram, I want to use the right datatype.
As an example, the column `Parcel.DateofArrival` has the type `datetime`.
There may be rows such as
```
2011-08-05 00:00:00.000
```
this is a date. But if there are rows like
```
2011-08-05 07:30:00.000
```
it is a `datetime`.
Now I want to find a query that lists rows only containing dates, not `datetime`.
Any hint? | An easy way to do this is:
```
select p.*
from Parcel p
where DateOfArrival = cast(DateOfArrival as Date);
```
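The same cast-and-compare idea can be sketched with Python's sqlite3, where the `'start of day'` modifier plays the role of T-SQL's `CAST(... AS DATE)` (an illustrative model, not SQL Server):

```python
import sqlite3

# A value is "date only" if normalizing it to the start of its day leaves it unchanged.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Parcel (DateOfArrival TEXT)")
con.executemany("INSERT INTO Parcel VALUES (?)",
                [("2011-08-05 00:00:00",), ("2011-08-05 07:30:00",)])
rows = con.execute("""
SELECT DateOfArrival FROM Parcel
WHERE datetime(DateOfArrival) = datetime(DateOfArrival, 'start of day')
""").fetchall()
print(rows)  # only the midnight row survives
```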
By casting the value to a date, the time portion is lost. If the original value equals this, then there is no time component. | A way to do this would be to convert it in your SELECT statement, for example:
```
SELECT CONVERT(DATE, DateOfArrival) Date, *other columns*
FROM Parcel P
```
Another solution, would be converting it to VARCHAR, formatting it in a yyyyMMdd format, like:
```
SELECT CONVERT(VARCHAR(8), DateOfArrival, 112) Date, *other columns*
FROM Parcel P
```
And if you want to sort it or group it, you have to use the conversion.
I hope it was helpful! | Find rows in SQL with a date when it is stored as datetime? | [
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2008-r2",
""
] |
My query:
```
SET @s_query =
'Select ROW_NUMBER() OVER (ORDER BY ' + @ColNames + ') AS ##RowNum,
'+@ColNames+' INTO ##Results' +
' From TableA
Where FirstName = ' +@Search+ '
ORDER BY FirstName';
```
I am running the stored procedure with parameters:
```
@Search = 'Adam', @ColName = 'FirstName','LastName'
```
And getting the error:
```
Invalid column name 'Adam'.
``` | Looks like you're just not quoting your string. The way your code comes out, to SQL it looks like:
```
Where FirstName = Adam
```
But you want it to look like:
```
Where FirstName = 'Adam'
```
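The quoting rule generalises: any single quote embedded in the value must itself be doubled. A sketch of that escaping in Python (illustrative only; in practice, parameterized queries are the safer way to pass values like `@Search`):

```python
def sql_literal(value: str) -> str:
    """Render a string as a T-SQL single-quoted literal, doubling embedded quotes."""
    return "'" + value.replace("'", "''") + "'"

print(sql_literal("Adam"))     # 'Adam'
print(sql_literal("O'Brien"))  # 'O''Brien'
```

Without the doubling, a name like O'Brien would break the dynamic SQL string just as the unquoted Adam did.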
So you'd want to change that line of your code to give it the single-quotes it needs. See the modified codeset below:
```
SET @s_query =
'Select ROW_NUMBER() OVER (ORDER BY ' + @ColNames + ') AS ##RowNum,
'+@ColNames+' INTO ##Results' +
' From TableA
Where FirstName = ''' +@Search+ '''
ORDER BY FirstName';
```
Now your query will read:
```
Select ROW_NUMBER() OVER (ORDER BY FirstName, LastName) AS ##RowNum,
FirstName, LastName INTO ##Results From TableA
Where FirstName = 'Adam'
ORDER BY FirstName, LastName
``` | The problem appears to be where you specify the criteria for `FirstName`, you have:
```
Where FirstName = ' +@Search+ '
```
Which will literally be translated to:
```
Where FirstName = Adam
```
Note the missing quote around the search criteria `Adam`.
Try this instead, to ensure the additional quotes are included:
```
Where FirstName = ''' +@Search+ '''
```
Which will finally give:
```
Where FirstName = 'Adam'
``` | TSQL Where clause inside string query | [
"",
"sql",
"t-sql",
""
] |
I have a query problem with count. I want to have a column with the number of persons registered to the course.
So far, this is my query:
```
select
courses.id,
name,
location,
capacity,
(
SELECT count(courses_requests.IDcourse)
FROM courses_requests, courses
WHERE courses_requests.IDcourse = courses.id AND status != "rejected"
) as Registered,
begin_date,
end_date,
price,
active
from courses
```
But this is giving me problems: it displays the same value for all rows, even for courses that have nobody registered.
E.g.:
```
Capacity Registered
2 1
30 1
It may be simpler to aggregate in the outer select, eliminating the subquery, with something like:
```
SELECT c.id,
c.name,
c.location,
c.capacity,
       COUNT(cr.IDcourse) AS RequestCount,
c.begin_date,
c.end_date,
c.price,
c.active
FROM courses c
INNER JOIN courses_requests cr
ON cr.IDcourse = c.id
AND status != "rejected"
GROUP BY c.id,
c.name,
c.location,
c.capacity,
c.begin_date,
c.end_date,
c.price,
c.active
``` | You should connect your subquery to main query:
```
select courses.id,
courses.name,
courses.location,
courses.capacity,
(SELECT count(courses_requests.IDcourse)
    FROM courses_requests
    WHERE courses_requests.IDcourse = courses.id
and status != "rejected" ) as Registered,
begin_date,
end_date,
price,
active
from courses
``` | SQL count problems | [
"",
"mysql",
"sql",
"count",
""
] |
My CTE gives the result below:
```
Name | StartDateTime | EndDateTime
--------------------+-------------------------+------------------------
Hair Massage | 2014-02-15 09:00:00.000 | 2014-02-15 10:00:00.000
Hair Massage | 2014-02-15 10:00:00.000 | 2014-02-15 11:00:00.000
(X)Hair Massage | 2014-02-23 09:00:00.000 | 2014-02-23 10:00:00.000
(X)Hair Cut | 2014-02-20 12:15:00.000 | 2014-02-20 13:00:00.000
Hair Cut | 2014-03-07 11:30:00.000 | 2014-03-07 12:15:00.000
```
Also I have **Holidays**
```
Id | StartDateTime | EndDateTime
-------------+--------------------+-------------------
1 | 20140223 00:00:00 | 20140224 23:59:00
```
And **EventBooking**
```
EventId | StartDateTime | EndDateTime
-------------+-------------------------+------------------------
1 | 2014-02-20 12:15:00.000 | 2014-02-20 13:00:00.000
```
I want to remove the dates that fall under `Holidays` and `EventBooking` from my CTE.
I mean, remove the `(X)` records from my CTE:
`RESULT = CTE - BookedSchedule - Holidays`
```
with HoliDaysCte2 as
(
select StartdateTime,EndDateTime from Holidays
union all
select StartdateTime,EndDateTime from EventBooking
)
SELECT
Name,
StartDateTime,
EndDateTime
FROM CTE WHERE not exists (select 1
from HoliDaysCte2 h
where cast(a.RepeatEventDate as DATETIME) between
cast(h.startdatetime as DATETIME)
and cast(h.enddatetime as DATETIME)
)
```
Here is my [SQL FIDDLE DEMO](http://sqlfiddle.com/#!3/07698/44) | Okay Assuming this is your schema
```
CREATE TABLE dbo.StaffSchedule
( ID INT IDENTITY(1, 1) NOT NULL,
Name Varchar(50),
StartdateTime DATETIME2 NOT NULL,
EndDateTime DATETIME2 NOT NULL
);
CREATE TABLE dbo.BookedSchedules
( ID INT IDENTITY(1, 1) NOT NULL,
StaffId INT,
StartdateTime DATETIME2 NOT NULL,
EndDateTime DATETIME2 NOT NULL
);
CREATE TABLE dbo.Holidays
( ID INT,
StartdateTime DATETIME2 NOT NULL,
EndDateTime DATETIME2 NOT NULL
);
INSERT dbo.StaffSchedule (Name, StartdateTime, EndDateTime)
VALUES
('Hair Massage','2014-02-15 09:00:00.000','2014-02-15 10:00:00.000'),
('Hair Massage','2014-02-15 10:00:00.000','2014-02-15 11:00:00.000'),
('(X)Hair Massage','2014-02-23 09:00:00.000','2014-02-23 10:00:00.000'),
('(X)Hair Cut','2014-02-20 12:15:00.000','2014-02-20 13:00:00.000'),
('Hair Cut','2014-03-07 11:30:00.000','2014-03-07 12:15:00.000');
INSERT dbo.BookedSchedules (StaffId, StartdateTime, EndDateTime)
VALUES
(1,'2014-02-20 12:15:00.000','2014-02-20 13:00:00.000');
INSERT dbo.Holidays (ID,StartdateTime, EndDateTime)
VALUES
(1,'20140223 00:00:00','20140224 23:59:00');
```
Does this solve your issue?
```
select * from StaffSchedule SS -- NonBookingSlots below is assumed to be a view/CTE unioning Holidays and BookedSchedules
where
not exists(
select * from NonBookingSlots NBS
where (dateadd(MICROSECOND,1,ss.StartdateTime)
between nbs.StartdateTime and nbs.EndDateTime)
or (dateadd(MICROSECOND,-1,ss.EndDateTime)
between nbs.StartdateTime and nbs.EndDateTime))
``` | OK, try this:
> Create one more CTE:
```
,cte2 as
(
select * from @Holidays
union all
select BookingID,StartdateTime,EndDateTime from @EventBooking
)
```
> Then, as usual:
```
AND not exists (select 1
from cte2 h
where cast(a.RepeatEventDate as date) between cast(h.startdatetime as date) and cast(h.enddatetime as date)
)
```
> This one is the latest (the datetime conversion is very confusing; I just
> started from @Gordon's query):
```
AND not exists (select 1
from cte2 h
where cast(DATEADD(SECOND, DATEDIFF(SECOND, 0, StartTime), RepeatEventDate) as datetime) between cast(h.startdatetime as datetime) and cast(h.enddatetime as datetime)
)
``` | Ignore date list from CTE | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I need to add a `case` statement in a `where` clause. I want it to apply either condition below, depending on the value of TermDate.
```
Select *
from myTable
where id = 12345
AND TermDate CASE
WHEN NULL THEN
AND getdate() BETWEEN StartDate AND DATEADD(dd, 30, StartDate)
ELSE
AND GETDATE < TermDate
END
``` | Why not just use an `OR` condition?
```
SELECT *
FROM myTable
WHERE id = 12345
AND ((TermDate IS NULL AND
getdate() BETWEEN StartDate AND DATEADD(dd, 30, StartDate)) OR
GETDATE() < TermDate)
``` | Since we all posted three nearly identical answers, obviously too many, here is a version that uses your `case when` construction.
use this:
```
select *
from myTable
where id = 12345
AND case
when TermDate IS NULL
AND getdate() BETWEEN StartDate AND DATEADD(dd, 30, StartDate)
then 1
      when GETDATE() < TermDate
then 1
else 0
end
= 1
``` | Add Case Statement in Where Clause | [
"",
"sql",
"select",
"switch-statement",
"case",
""
] |
I am trying to insert some records into a many-to-many table; the condition, though, is that a particular Id (uniqueidentifier) is in a comma-separated list of Ids (nvarchar).
I had thought it was working, but it only seems to compare against the first Id in the list.
How can I correctly use the list to insert the data as I expect?
Here is a snippet of my SQL:
```
INSERT LineItemToSalesTax (SalesTax_Id, LineItem_Id)
SELECT st.Id, li.LineItem_Id
FROM @LineItems li
,SalesTax st
WHERE li.AppliedSalesTaxes IS NOT NULL
AND st.Id in (li.AppliedSalesTaxes)
```
li.AppliedSalesTaxes has a value in the following format:
`'C23E8882-A630-49B2-9E96-0313A4EC1C6A,3B96B6D4-85E7-4735-8351-1BD07FCFC395'`
I've tried a few scenarios and not quite got it working. I did spot a few articles that use SQL functions, but I would like to avoid this if possible. | I use the following CSV-to-rows function:
```
CREATE FUNCTION dbo.CsvToRows
(
@string NVARCHAR(MAX)
)
RETURNS @ret TABLE
(
[Index] INT NOT NULL PRIMARY KEY,
Value NVARCHAR(500)
)
AS
BEGIN
DECLARE @pos INT,
@i INT = 0,
@piece NVARCHAR(500)
IF RIGHT(RTRIM(@string),1) <> ','
BEGIN
SET @string = @string + ','
END
SET @pos = PATINDEX('%,%' , @string)
WHILE @pos <> 0
BEGIN
SET @piece = LEFT(@string, @pos-1)
INSERT INTO @ret ([Index], Value)
VALUES (@i, @piece)
SET @string = STUFF(@string, 1, @pos, '')
SET @pos = PATINDEX('%,%', @string)
SET @i = @i + 1
END
RETURN
END
```
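For reference, the splitting behaviour this T-SQL function implements can be modelled in a few lines of Python (an illustrative sketch, not part of the answer; note that only a trailing empty piece is dropped, because the loop appends a single comma before splitting):

```python
def csv_to_rows(s: str):
    """Python model of dbo.CsvToRows: pairs of (zero-based index, piece)."""
    parts = s.split(",")
    if parts and parts[-1] == "":  # the T-SQL loop discards a trailing empty piece
        parts.pop()
    return list(enumerate(parts))

print(csv_to_rows("C23E8882-A630-49B2-9E96-0313A4EC1C6A,"
                  "3B96B6D4-85E7-4735-8351-1BD07FCFC395"))
```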
So your query can become:
```
INSERT LineItemToSalesTax (SalesTax_Id, LineItem_Id)
SELECT st.Id, li.LineItem_Id
FROM @LineItems li
,SalesTax st
WHERE li.AppliedSalesTaxes IS NOT NULL
AND st.Id in (SELECT CAST(value as UNIQUEIDENTIFIER) FROM dbo.CsvToRows(li.AppliedSalesTaxes))
``` | I would put the comma-delimited GUID list into a table and then use that in your `IN` operator:
```
-- get comma delimited list of GUIDs
declare @GUIDs nvarchar(369) -- 10 GUIDs and 9 commas
set @GUIDs = cast(newid() as varchar(36))
while len(@GUIDs) < 369
begin
set @GUIDs = @GUIDs + ',' + cast(newid() as varchar(36))
end
-- put GUIDs in a table with dynamic sql
declare @GUIDTable table(ID uniqueidentifier)
declare @sql nvarchar(max)
set @sql = 'select ''' + replace(@GUIDs,',',''' union select ''') + ''''
insert into @GUIDTable exec(@sql)
```
Then your code would be updated to:
```
INSERT LineItemToSalesTax (SalesTax_Id, LineItem_Id)
SELECT st.Id, li.LineItem_Id
FROM @LineItems li
,SalesTax st
WHERE li.AppliedSalesTaxes IS NOT NULL
AND st.Id in (select ID from @GUIDTable)
``` | How can I check a uniqueidentifier is in a comma delimited list of ids? | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
This is my SQL query:
```
CREATE TABLE items(
id int(3) ,
name VARCHAR(255) ,
amount INT(4),
PRIMARY KEY (`id`)
);
-- MELEE WEAPONS
INSERT INTO items (id, name, amount) VALUES ('1', 'KATANA','0');
INSERT INTO items (id, name, amount) VALUES ('2', 'HATCHET','0');
INSERT INTO items (id, name, amount) VALUES ('3', 'TACTICAL KNIFE','0');
INSERT INTO items (id, name, amount) VALUES ('4', 'MACHETE','0');
-- HANDGUNS
INSERT INTO items (id, name, amount) VALUES ('5', '1911','0');
INSERT INTO items (id, name, amount) VALUES ('6', 'B92','0');
INSERT INTO items (id, name, amount) VALUES ('7', 'B93R','0');
INSERT INTO items (id, name, amount) VALUES ('8', 'DESERT EAGLE','0');
INSERT INTO items (id, name, amount) VALUES ('9', 'FN FIVESEVEN','0');
INSERT INTO items (id, name, amount) VALUES ('10', 'SIG SAUER P226','0');
INSERT INTO items (id, name, amount) VALUES ('11', 'STI Eagle Elite .45 ACP','0');
-- SHOTGUNS
INSERT INTO items (id, name, amount) VALUES ('12', 'AA-12','0');
INSERT INTO items (id, name, amount) VALUES ('13', 'KT DECIDER','0');
INSERT INTO items (id, name, amount) VALUES ('14', 'MOSSBERG 590','0');
INSERT INTO items (id, name, amount) VALUES ('15', 'SAIGA','0');
-- ASSAULT RIFLES
INSERT INTO items (id, name, amount) VALUES ('16', 'AK-74M','0');
INSERT INTO items (id, name, amount) VALUES ('17', 'AKM','0');
INSERT INTO items (id, name, amount) VALUES ('18', 'FN SCAR CQC','0');
INSERT INTO items (id, name, amount) VALUES ('19', 'FN SCAR NIGHT STALKER','0');
INSERT INTO items (id, name, amount) VALUES ('20', 'G36','0');
INSERT INTO items (id, name, amount) VALUES ('21', 'IMI TAR-21','0');
INSERT INTO items (id, name, amount) VALUES ('22', 'M16','0');
INSERT INTO items (id, name, amount) VALUES ('23', 'M4','0');
INSERT INTO items (id, name, amount) VALUES ('24', 'M4 Semi','0');
INSERT INTO items (id, name, amount) VALUES ('25', 'MASADA','0');
INSERT INTO items (id, name, amount) VALUES ('26', 'SIG SAUER 556','0');
-- SNIPER RIFLES
INSERT INTO items (id, name, amount) VALUES ('27', 'BLASER R93','0');
INSERT INTO items (id, name, amount) VALUES ('28', 'M107','0');
INSERT INTO items (id, name, amount) VALUES ('29', 'MAUSER SP66','0');
INSERT INTO items (id, name, amount) VALUES ('30', 'MAUSER SRG DESERT','0');
INSERT INTO items (id, name, amount) VALUES ('31', 'SVD','0');
INSERT INTO items (id, name, amount) VALUES ('32', 'VSS VINTOREZ','0');
-- SUBMACHINE GUNS
INSERT INTO items (id, name, amount) VALUES ('33', 'BIZON','0');
INSERT INTO items (id, name, amount) VALUES ('34', 'EVO-3','0');
INSERT INTO items (id, name, amount) VALUES ('35', 'FN P90','0');
INSERT INTO items (id, name, amount) VALUES ('36', 'FN P90 S','0');
INSERT INTO items (id, name, amount) VALUES ('37', 'HONEY BADGER','0');
INSERT INTO items (id, name, amount) VALUES ('38', 'MP5/10','0');
INSERT INTO items (id, name, amount) VALUES ('39', 'MP7','0');
INSERT INTO items (id, name, amount) VALUES ('40', 'UZI','0');
INSERT INTO items (id, name, amount) VALUES ('41', 'VERESK SR-2','0');
-- LIGHT MACHINE GUNS
INSERT INTO items (id, name, amount) VALUES ('42', 'FN M249','0');
INSERT INTO items (id, name, amount) VALUES ('43', 'PKM','0');
INSERT INTO items (id, name, amount) VALUES ('44', 'RA H23','0');
INSERT INTO items (id, name, amount) VALUES ('45', 'RPK-74','0');
-- AMMO
-- ARROWS
INSERT INTO items (id, name, amount) VALUES ('46', 'ARROW', '0');
INSERT INTO items (id, name, amount) VALUES ('47', 'EXPLOSIVE ARROW', '0');
-- HANDGUN AMMO
INSERT INTO items (id, name, amount) VALUES ('48', '9MM MAG', '0');
INSERT INTO items (id, name, amount) VALUES ('49', '.45 ACP STI EAGLE ELITE', '0');
INSERT INTO items (id, name, amount) VALUES ('50', '5.7 FN M240 MAG', '0');
INSERT INTO items (id, name, amount) VALUES ('51', 'DESERT EAGLE AMMO', '0');
INSERT INTO items (id, name, amount) VALUES ('52', '9X19 PARA MAG', '0');
-- SHOTGUN AMMO
INSERT INTO items (id, name, amount) VALUES ('53', '2X 12 GAUGE', '0');
INSERT INTO items (id, name, amount) VALUES ('54', 'SHOTGUN SHELL 2X', '0');
INSERT INTO items (id, name, amount) VALUES ('55', 'SHOTGUN SHELL 8X', '0');
INSERT INTO items (id, name, amount) VALUES ('56', 'SAIGA 10', '0');
INSERT INTO items (id, name, amount) VALUES ('57', '12 GAUGE SLUG', '0');
INSERT INTO items (id, name, amount) VALUES ('58', 'AA-12 DRUM', '0');
-- ASSAULT RIFLE AMMO
INSERT INTO items (id, name, amount) VALUES ('59', 'SG 30 ROUND', '0');
INSERT INTO items (id, name, amount) VALUES ('60', '5.45 AK 30', '0');
INSERT INTO items (id, name, amount) VALUES ('61', '5.45 AK 45', '0');
INSERT INTO items (id, name, amount) VALUES ('62', '5.45 AK DRUM', '0');
INSERT INTO items (id, name, amount) VALUES ('63', '7.62 AKM CLIP', '0');
INSERT INTO items (id, name, amount) VALUES ('64', 'STANAG 30', '0');
INSERT INTO items (id, name, amount) VALUES ('65', 'STANAG 45', '0');
INSERT INTO items (id, name, amount) VALUES ('66', 'STANAG 60', '0');
INSERT INTO items (id, name, amount) VALUES ('67', 'STANAG C-MAG', '0');
INSERT INTO items (id, name, amount) VALUES ('68', 'G36 AMMO', '0');
INSERT INTO items (id, name, amount) VALUES ('69', 'G36 C-MAG', '0');
-- SMG AMMO
INSERT INTO items (id, name, amount) VALUES ('70', 'SMG-20', '0');
INSERT INTO items (id, name, amount) VALUES ('71', 'SMG-40', '0');
INSERT INTO items (id, name, amount) VALUES ('72', 'MP5 10MM MAG', '0');
INSERT INTO items (id, name, amount) VALUES ('73', 'CZ SCORPION EVO-3 AMMO', '0');
INSERT INTO items (id, name, amount) VALUES ('74', 'MP7 30', '0');
INSERT INTO items (id, name, amount) VALUES ('75', 'MP7 40', '0');
INSERT INTO items (id, name, amount) VALUES ('76', 'P90 50 ROUNDS', '0');
INSERT INTO items (id, name, amount) VALUES ('77', 'BIZON 64 AMMO', '0');
--SNIPER RIFLE AMMO
INSERT INTO items (id, name, amount) VALUES ('78', '.308 WINCHESTER', '0');
INSERT INTO items (id, name, amount) VALUES ('79', 'SVD AMMO', '0');
INSERT INTO items (id, name, amount) VALUES ('80', 'VSS-10', '0');
INSERT INTO items (id, name, amount) VALUES ('81', 'VSS-20', '0');
INSERT INTO items (id, name, amount) VALUES ('82', 'AWM .338 MAGNUM AMMO', '0');
INSERT INTO items (id, name, amount) VALUES ('83', '.50 BMG', '0');
--LIGHT MACHINE AMMO
INSERT INTO items (id, name, amount) VALUES ('84', 'M249 AMMO BOX', '0');
INSERT INTO items (id, name, amount) VALUES ('85', 'PKM AMMO BOX', '0');
-- ATTACHMENTS
INSERT INTO items (id, name, amount) VALUES ('86', 'ACOG','0');
INSERT INTO items (id, name, amount) VALUES ('87', 'BLACKWATER LONG RANGE','0');
INSERT INTO items (id, name, amount) VALUES ('88', 'COMPACT SCOPE','0');
INSERT INTO items (id, name, amount) VALUES ('89', 'FLASH HIDER','0');
INSERT INTO items (id, name, amount) VALUES ('90', 'FORWARD GRIP','0');
INSERT INTO items (id, name, amount) VALUES ('91', 'HOLOGRAPHIC','0');
INSERT INTO items (id, name, amount) VALUES ('92', 'PSO-1','0');
INSERT INTO items (id, name, amount) VALUES ('93', 'PISTOL FLASHLIGHT','0');
INSERT INTO items (id, name, amount) VALUES ('94', 'PISTOL LASER','0');
INSERT INTO items (id, name, amount) VALUES ('95', 'RED DOT SP','0');
INSERT INTO items (id, name, amount) VALUES ('96', 'REFLEX SIGHT','0');
INSERT INTO items (id, name, amount) VALUES ('97', 'RIFLE FLASHLIGHT','0');
INSERT INTO items (id, name, amount) VALUES ('98', 'RIFLE LASER','0');
INSERT INTO items (id, name, amount) VALUES ('99', 'SMG GRIP','0');
INSERT INTO items (id, name, amount) VALUES ('100', 'SMG GRIP 2','0');
INSERT INTO items (id, name, amount) VALUES ('101', 'SILENCER','0');
INSERT INTO items (id, name, amount) VALUES ('102', 'SWISS ARMS SCOPE 8X','0');
INSERT INTO items (id, name, amount) VALUES ('103', 'TACTICAL SNIPER SCOPE','0');
INSERT INTO items (id, name, amount) VALUES ('104', 'RPK-74','0');
-- MISC
INSERT INTO items (id, name, amount) VALUES ('105', 'FRAG GRENADE','0');
INSERT INTO items (id, name, amount) VALUES ('106', 'COMPUND CROSSBOW','0');
```
(I can't really post error output image so here's a link:)
<https://i.stack.imgur.com/cIVjT.png>
**#1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '--HANDGUN AMMO
INSERT INTO items (id, name, amount) VALUES ('48', '9MM MAG', '0' at line 1** | To write a comment, `--` must be followed by a whitespace character.
See more in the [MySQL manual - 9.6 Comment Syntax](https://dev.mysql.com/doc/refman/5.1/en/comments.html).
BTW, you have the same problem in other lines:
* `--HANDGUN AMMO` change to `-- HANDGUN AMMO`
* `--SHOTGUN AMMO` change to `-- SHOTGUN AMMO`
* `--ASSAULT RIFLE AMMO` change to `-- ASSAULT RIFLE AMMO`
* `--SMG AMMO` change to `-- SMG AMMO`
* `--SNIPER RIFLE AMMO` change to `-- SNIPER RIFLE AMMO`
* `--LIGHT MACHINE AMMO` change to `-- LIGHT MACHINE AMMO` | The [MySQL documentation](https://dev.mysql.com/doc/refman/5.1/en/comments.html) specifies a space between the dashes and the comment:
```
-- HANDGUN
```
Note that 'Shotgun', 'SMG', etc. will also fail, so you'll have to adjust those comments as well. | SQL newbie gets SQL error, don't know why | [
"",
"mysql",
"sql",
"syntax",
""
] |
I have a factory table with factoryname: `select factoryname from factory`
I then have a products table: `select productcode from products`
I want to create a list that has the products for all factories.
so output :
```
mill 1 product 1
mill 1 product 2
mill 1 product 3
mill 2 product 1
mill 2 product 2
mill 2 product 3
mill 3 product 1
mill 3 product 2
mill 3 product 3
```
I have something like this:
```
DECLARE @numrows int
DECLARE @i int
DECLARE @department int
SET @numrows = (SELECT COUNT(*) FROM factory)
WHILE (@i <= @numrows)
BEGIN
SELECT factoryname,product FROM products,factory )
SET @i = @i + 1
END
```
I am obviously off the mark, here, any advice? Thanks as always. | If you want to list all the factory names and all products, you can use a cross join like this:
```
select factoryname, productname from factory cross join products
```
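As a quick sanity check of the result size, here is a sketch using Python's built-in sqlite3 (column names as in the question; the row values are invented):

```python
import sqlite3

# In-memory database with the two tables from the question.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE factory (factoryname TEXT);
    CREATE TABLE products (productcode TEXT);
    INSERT INTO factory VALUES ('mill 1'), ('mill 2'), ('mill 3');
    INSERT INTO products VALUES ('product 1'), ('product 2'), ('product 3');
""")

# Cross join: every factory is paired with every product, 3 * 3 = 9 rows.
rows = con.execute("""
    SELECT factoryname, productcode
    FROM factory CROSS JOIN products
    ORDER BY factoryname, productcode
""").fetchall()

print(len(rows))   # 9
print(rows[0])     # ('mill 1', 'product 1')
```

Each of the 3 factories is paired with each of the 3 products, giving the 9 rows the question asks for.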
A `Cross Join` is basically a Cartesian product of 2 tables, resulting in (m \* n) records if table 1 has m and table 2 has n records. | SQL is a set-based language, so you should try not to use loops in SQL wherever possible. It will hurt performance.
What you want to do is use a JOIN to join the 2 tables together:
```
SELECT factory.factoryname, products.productcode
FROM factory
INNER JOIN products ON products.factory_id = factory.factory_id
```
Something like this.
This will give you a collection of products that are made in a certain factory.
Displaying this in a nice way to the user should be done in your UI layer in your application. Do not try to use SQL to properly format data. | MSSQL - Loop through table and use field value in new query | [
"",
"sql",
"loops",
""
] |
I am using the following stored procedure to get the newest date out of a column which works fine.
Can someone tell me how I have to amend this so that I also get the rest of the data of this record selected (i.e. the one with the max date)?
The column in question is called "updated" and is formatted as datetime so the values there should be unique.
**Example:**
The max value for column "updated" is the row with "itemID" = 3.
In this case I also want the rest of the data in this row selected, say the columns are called col1, col2, col3, col4 + the column "updated".
**My stored procedure:**
```
SET NOCOUNT ON;
SELECT CONVERT(VARCHAR(11), MAX(updated), 106) AS lastUpdated
FROM MeetingDetails
WHERE itemStatus = 'active'
FOR XML PATH('updates'), ELEMENTS, TYPE, ROOT('root')
``` | You could take the analytic approach:
```
SELECT *
FROM (SELECT col1, col2, col3, col4,
RANK() OVER (PARTITION BY col1, col2, col3
ORDER BY updated DESC) AS rk
FROM MeetingDetails
WHERE itemstatus = 'active') t
WHERE rk = 1
``` | ```
SELECT col1, col2, col3, col4, updated
FROM MeetingDetails
WHERE updated in (
SELECT MAX(updated)
FROM MeetingDetails
WHERE itemStatus = 'active'
)
```
This should be what you're looking for. From here, if you need to narrow it down further, just add more criteria to your where clause. | SQL Server: Get data from record with max date / newest date | [
"",
"sql",
"sql-server",
"datetime",
"select",
"max",
""
] |
This is my sqldatasource
```
<asp:SqlDataSource runat="server" ID="get_client_users" ConnectionString="<%$ ConnectionStrings:local %>" SelectCommand="SELECT * FROM users WHERE clientid = @clientid" CancelSelectOnNullParameter="False">
<SelectParameters>
<asp:Parameter Name="clientid" Type="Int32" ConvertEmptyStringToNull="true"/>
</SelectParameters>
</asp:SqlDataSource>
```
Now what I want is that when clientid is null, it should return all the rows where clientid is null, but when I watch the query passed in Profiler I get this:
```
exec sp_executesql N'SELECT * FROM users WHERE clientid = @clientid',N'@clientid int',@clientid=NULL
```
but it will only return values if it is
```
clientid is NULL
```
Since I have ConvertEmptyStringToNull="true", the command is executing, but it returns an empty dataset. | **UPDATE:**
```
SELECT *
FROM users
WHERE ISNULL(clientID, -1) = ISNULL(@ClientID, -1)
```
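To see why the plain `=` fails and the null-safe version works, here is a sketch using Python's sqlite3; `COALESCE` stands in for T-SQL's `ISNULL`, and the table contents are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (name TEXT, clientid INTEGER)")
con.executemany("INSERT INTO users VALUES (?, ?)",
                [("alice", 1), ("bob", None), ("carol", None)])

param = None  # simulates @clientid being passed as NULL

# Plain equality never matches NULL rows: NULL = NULL evaluates to NULL.
plain = con.execute(
    "SELECT name FROM users WHERE clientid = ?", (param,)).fetchall()

# Null-safe comparison via COALESCE (standing in for T-SQL's ISNULL).
safe = con.execute(
    "SELECT name FROM users WHERE COALESCE(clientid, -1) = COALESCE(?, -1) "
    "ORDER BY name", (param,)).fetchall()

print(plain)  # []
print(safe)   # [('bob',), ('carol',)]
```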
This is assuming that clientID will never be -1 in the database. | Since you have complete control over the `SELECT` command, I'd have thought:
```
SELECT * FROM users
WHERE clientid = @clientid OR
(@clientid is null and clientid is null)
```
Would work for you. | Pass is NULL instead of = Null | [
"",
"asp.net",
"sql",
"sql-server",
"visual-studio-2012",
"sqldatasource",
""
] |
I have a table with over a billion records. In order to improve performance, I partitioned it into 30 partitions. The most frequent queries have `(id = ...)` in their where clause, so I decided to partition the table on the `id` column.
Basically, the partitions were created in this way:
```
CREATE TABLE foo_0 (CHECK (id % 30 = 0)) INHERITS (foo);
CREATE TABLE foo_1 (CHECK (id % 30 = 1)) INHERITS (foo);
CREATE TABLE foo_2 (CHECK (id % 30 = 2)) INHERITS (foo);
CREATE TABLE foo_3 (CHECK (id % 30 = 3)) INHERITS (foo);
.
.
.
```
I ran `ANALYZE` for the entire database and in particular, I made it collect extra statistics for this table's `id` column by running:
```
ALTER TABLE foo ALTER COLUMN id SET STATISTICS 10000;
```
However when I run queries that filter on the `id` column the planner shows that it's still scanning all the partitions. `constraint_exclusion` is set to `partition`, so that's not the problem.
```
EXPLAIN ANALYZE SELECT * FROM foo WHERE (id = 2);
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------
Result (cost=0.00..8106617.40 rows=3620981 width=54) (actual time=30.544..215.540 rows=171477 loops=1)
-> Append (cost=0.00..8106617.40 rows=3620981 width=54) (actual time=30.539..106.446 rows=171477 loops=1)
-> Seq Scan on foo (cost=0.00..0.00 rows=1 width=203) (actual time=0.002..0.002 rows=0 loops=1)
Filter: (id = 2)
-> Bitmap Heap Scan on foo_0 foo (cost=3293.44..281055.75 rows=122479 width=52) (actual time=0.020..0.020 rows=0 loops=1)
Recheck Cond: (id = 2)
-> Bitmap Index Scan on foo_0_idx_1 (cost=0.00..3262.82 rows=122479 width=0) (actual time=0.018..0.018 rows=0 loops=1)
Index Cond: (id = 2)
-> Bitmap Heap Scan on foo_1 foo (cost=3312.59..274769.09 rows=122968 width=56) (actual time=0.012..0.012 rows=0 loops=1)
Recheck Cond: (id = 2)
-> Bitmap Index Scan on foo_1_idx_1 (cost=0.00..3281.85 rows=122968 width=0) (actual time=0.010..0.010 rows=0 loops=1)
Index Cond: (id = 2)
-> Bitmap Heap Scan on foo_2 foo (cost=3280.30..272541.10 rows=121903 width=56) (actual time=30.504..77.033 rows=171477 loops=1)
Recheck Cond: (id = 2)
-> Bitmap Index Scan on foo_2_idx_1 (cost=0.00..3249.82 rows=121903 width=0) (actual time=29.825..29.825 rows=171477 loops=1)
Index Cond: (id = 2)
.
.
.
```
What could I do to make the planner produce a better plan? Do I need to run `ALTER TABLE foo ALTER COLUMN id SET STATISTICS 10000;` for all the partitions as well?
**EDIT**
After using Erwin's suggested change to the query, the planner only scans the correct partition; however, the execution time is actually worse than a full scan (at least of the index).
```
EXPLAIN ANALYZE select * from foo where (id = 2);
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
Result (cost=0.00..8106617.40 rows=3620981 width=54) (actual time=32.611..224.934 rows=171477 loops=1)
-> Append (cost=0.00..8106617.40 rows=3620981 width=54) (actual time=32.606..116.565 rows=171477 loops=1)
-> Seq Scan on foo (cost=0.00..0.00 rows=1 width=203) (actual time=0.002..0.002 rows=0 loops=1)
Filter: (id = 2)
-> Bitmap Heap Scan on foo_0 foo (cost=3293.44..281055.75 rows=122479 width=52) (actual time=0.046..0.046 rows=0 loops=1)
Recheck Cond: (id = 2)
-> Bitmap Index Scan on foo_0_idx_1 (cost=0.00..3262.82 rows=122479 width=0) (actual time=0.044..0.044 rows=0 loops=1)
Index Cond: (id = 2)
-> Bitmap Heap Scan on foo_1 foo (cost=3312.59..274769.09 rows=122968 width=56) (actual time=0.021..0.021 rows=0 loops=1)
Recheck Cond: (id = 2)
-> Bitmap Index Scan on foo_1_idx_1 (cost=0.00..3281.85 rows=122968 width=0) (actual time=0.020..0.020 rows=0 loops=1)
Index Cond: (id = 2)
-> Bitmap Heap Scan on foo_2 foo (cost=3280.30..272541.10 rows=121903 width=56) (actual time=32.536..86.730 rows=171477 loops=1)
Recheck Cond: (id = 2)
-> Bitmap Index Scan on foo_2_idx_1 (cost=0.00..3249.82 rows=121903 width=0) (actual time=31.842..31.842 rows=171477 loops=1)
Index Cond: (id = 2)
-> Bitmap Heap Scan on foo_3 foo (cost=3475.87..285574.05 rows=129032 width=52) (actual time=0.035..0.035 rows=0 loops=1)
Recheck Cond: (id = 2)
-> Bitmap Index Scan on foo_3_idx_1 (cost=0.00..3443.61 rows=129032 width=0) (actual time=0.031..0.031 rows=0 loops=1)
.
.
.
-> Bitmap Heap Scan on foo_29 foo (cost=3401.84..276569.90 rows=126245 width=56) (actual time=0.019..0.019 rows=0 loops=1)
Recheck Cond: (id = 2)
-> Bitmap Index Scan on foo_29_idx_1 (cost=0.00..3370.28 rows=126245 width=0) (actual time=0.018..0.018 rows=0 loops=1)
Index Cond: (id = 2)
Total runtime: 238.790 ms
```
Versus:
```
EXPLAIN ANALYZE select * from foo where (id % 30 = 2) and (id = 2);
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------
Result (cost=0.00..273120.30 rows=611 width=56) (actual time=31.519..257.051 rows=171477 loops=1)
-> Append (cost=0.00..273120.30 rows=611 width=56) (actual time=31.516..153.356 rows=171477 loops=1)
-> Seq Scan on foo (cost=0.00..0.00 rows=1 width=203) (actual time=0.002..0.002 rows=0 loops=1)
Filter: ((id = 2) AND ((id % 30) = 2))
-> Bitmap Heap Scan on foo_2 foo (cost=3249.97..273120.30 rows=610 width=56) (actual time=31.512..124.177 rows=171477 loops=1)
Recheck Cond: (id = 2)
Filter: ((id % 30) = 2)
-> Bitmap Index Scan on foo_2_idx_1 (cost=0.00..3249.82 rows=121903 width=0) (actual time=30.816..30.816 rows=171477 loops=1)
Index Cond: (id = 2)
Total runtime: 270.384 ms
``` | For non-trivial expressions you have to repeat the condition more or less verbatim in queries to make the Postgres query planner understand that it can rely on the `CHECK` constraint. Even if it seems redundant!
[Per documentation](http://www.postgresql.org/docs/current/static/ddl-partitioning.html):
> With constraint exclusion enabled, the planner will examine the
> constraints of each partition and try to prove that the partition need
> not be scanned because it could not contain any rows meeting the
> query's `WHERE` clause. **When the planner can prove this**, it excludes
> the partition from the query plan.
Bold emphasis mine. The planner does not understand complex expressions.
Of course, this has to be met, too:
> Ensure that the [constraint\_exclusion](http://www.postgresql.org/docs/current/static/runtime-config-query.html#GUC-CONSTRAINT-EXCLUSION) configuration parameter is not
> disabled in `postgresql.conf`. If it is, queries will not be optimized as desired.
Instead of
```
SELECT * FROM foo WHERE (id = 2);
```
Try:
```
SELECT * FROM foo WHERE id % 30 = 2 AND id = 2;
```
And:
> The default (and recommended) setting of [constraint\_exclusion](http://www.postgresql.org/docs/current/static/runtime-config-query.html#GUC-CONSTRAINT-EXCLUSION) is
> actually neither `on` nor `off`, but an intermediate setting called
> `partition`, which causes the technique to be applied only to queries
> that are likely to be working on partitioned tables. The on setting
> causes the planner to examine `CHECK` constraints in all queries, even
> simple ones that are unlikely to benefit.
You can experiment with `constraint_exclusion = on` to see if the planner catches on without the redundant verbatim condition. But you have to weigh the cost and benefit of this setting.
The alternative would be simpler conditions for your partitions as already [outlined by @harmic](https://stackoverflow.com/a/21964762/939860).
And no, increasing the number for `STATISTICS` will not help in this case. Only the `CHECK` constraints and your `WHERE` conditions in the query matter. | Unfortunately, partitioning in PostgreSQL is fairly primitive. It only works for range- and list-based constraints. Your partition constraints are too complex for the query planner to use to decide to exclude some partitions.
In the [manual](http://www.postgresql.org/docs/current/static/ddl-partitioning.html) it says:
> Keep the partitioning constraints simple, else the planner may not be
> able to prove that partitions don't need to be visited. Use simple
> equality conditions for list partitioning, or simple range tests for
> range partitioning, as illustrated in the preceding examples. A good
> rule of thumb is that partitioning constraints should contain only
> comparisons of the partitioning column(s) to constants using
> B-tree-indexable operators.
You might get away with changing your WHERE clause so that the modulus expression is explicitly mentioned, as Erwin suggested. I haven't had much luck with that in the past, although I have not tried recently and as he says, there have been improvements in the planner. That is probably the first thing to try.
Otherwise, you will have to rearrange your partitions to use ranges of id values instead of the modulus method you are using now. Not a great solution, I know.
One other solution is to store the modulus of the id in a separate column, which you can then use a simple value equality check for the partition constraint. Bit of a waste of disk space, though, and you would also need to add a term to the where clauses to boot. | Partitioned table query still scanning all partitions | [
"",
"sql",
"database",
"postgresql",
"partitioning",
""
] |
### What I am doing
I have a query in Access which as SQL view of:
```
SELECT Projects.*, Projects.MySortField
FROM Projects
ORDER BY Projects.MySortField DESC;
```
This query works fine and sorts my data correctly. It is also very, very clean to look at and to understand what is happening.
However, when I load it into my form, I have run into an unexpected problem. Prior to adding `Projects.MySortField` and the associated sorting, I was able to refer to `MySortField` as follows:
```
Me.Recordset("MySortField")
```
However, now that I've added it to the sort criteria, the query returns slightly different field names, as the recordset's associated field is:
```
Me.Recordset("Projects.MySortField")
```
I refer to this somewhat often in VBA which is where problems happen.
### What I am trying to do and why
I would like my stakeholders to be able to more easily add or modify sort criteria. Unfortunately some of these are also referred to in the code more explicitly, which means if at any time I want to add sort criteria to my list, it will also adopt the `Projects.mFieldName` syntax in the RecordSet, which means any code referring to that data will break.
This would be nice to avoid, obviously, and if it was possible to modify the query somehow to facilitate my users adding fields and not having to change any code that would be wonderful.
### Specific Question
How can I include `Projects.*` *and* still have specific fields from that table for sorting but keep all RecordSet fields reflected as just the field name? | The problem you are running into is that by having Projects.\* as well as a named field, you are ending up with two output fields of the same name. I'm not sure why you feel you must use "\*" instead of just explicitly naming the fields (which is generally considered the best practice) which would allow you to sort without the problem.
If you want to stick with "\*" then you need to uncheck the show box in the query grid for your named fields, which will allow you to have the field explicitly enumerated for sorting or use as a criteria.
The SQL view when you do this should look like:
```
SELECT Projects.*
FROM Projects
ORDER BY Projects.MySortField DESC;
```
You can use the show-box unchecking trick both for sorting and for criteria when you are using "\*". | This
```
SELECT Projects.*
FROM Projects
ORDER BY Projects.MySortField DESC;
```
can be done.
But if you insist on adding **, Projects.MySortField**, then
```
SELECT Projects.*, Projects.MySortField as whatever
FROM Projects
ORDER BY Projects.MySortField DESC;
``` | Access query using * and sort criteria on columns - how to make field show just column name? | [
"",
"sql",
"sql-server",
"vba",
"ms-access",
""
] |
I'm running on a 2012 MS-SQL server and have a table USER with Age, Gender among other fields and a SALES table with sales records.
I'm currently calculating the Sales Leaderboard, showing a list of sales people ordered by their top sales. To give an example, this list returns the various sales reps ranked by their top sale; somewhere in the middle of the list we have Mr. Thomas, who, let's say, is #4.
My current task is to display how Thomas compares to sales reps that have the same age as him, and also how he compares with sales reps that have the same gender as him. The calculation will return a different result from the overall list described above.
My ideal stored procedure would receive 1 param (UserId) and return the following single record
values: OverallPosition, OverallPositionTotalCount, AgePosition, AgeTotalCount, GenderPosition, GenderTotalCount
DATA SAMPLE:
```
CREATE TABLE dbo.User
(
UserId int NOT NULL IDENTITY (1, 1),
Name nvarchar(50) NOT NULL,
Age int NULL,
Gender nvarchar(10) NULL
)
1, James, 30, 'male'
2, Monica, 27, 'female'
3, Paul, 30, 'male'
4, Thomas, 30, 'male'
5, Mike, 22, 'male'
6, Sabrina, 30, 'female'
CREATE TABLE dbo.Sales
(
SalesId int NOT NULL IDENTITY (1, 1),
UserId int NOT NULL,
TotalSale int NOT NULL
) ON [PRIMARY]
1, 1, $900,000
2, 1, $1,000,000
3, 2, $900,000
4, 2, $400,000
5, 3, $750,000
6, 3, $300,000
7, 4, $875,000
8, 5, $700,000
9, 5, $1,200,000
10, 6, $850,000
```
**Sales Leaderboard list**
```
SELECT u.UserId, u.Name, MAX(s.TotalSale) as TopSale, Count(*) OVER () AS TotalCount
FROM User u
INNER JOIN Sales s on s.UserId = u.UserId
GROUP BY u.UserID, u.Name
ORDER BY TopSale DESC
OFFSET (@PageIndexSelected) * @PageCountSelected ROWS
FETCH NEXT @PageCountSelected ROWS ONLY
```
**Ideal Calculation Results**
Since Thomas (userId 4) is 30 years old and 'male', his stats should look like this:
```
OverallPosition = 4; OverallPositionTotalCount = 6 (i.e 4 out of 6)
$1,200,000 Mike
$1,000,000 James
$900,000 Monica
$875,000 Thomas
$850,000 Sabrina
$750,000 Paul
AgePosition = 2; AgeTotalCount = 4 (i.e. 2 out of 4)
$1,000,000 James
$875,000 Thomas
$850,000 Sabrina
$750,000 Paul
GenderPosition = 3; GenderTotalCount = 4 (i.e 3 out of 4)
$1,200,000 Mike
$1,000,000 James
$875,000 Thomas
$750,000 Paul
```
**Note**
The expected result is ONLY the values for OverallPosition, OverallPositionTotalCount, AgePosition, AgeTotalCount, GenderPosition, GenderTotalCount for a single user (the stored procedure will receive the UserId as param) and NOT the actual list.
**EXPECTED RETURN**
OverallPosition = 4,
OverallPositionTotalCount = 6,
AgePosition = 2,
AgeTotalCount = 4,
GenderPosition = 3,
GenderTotalCount = 4
As I stated in my comments, I really don't know how to approach this problem. I hope that somebody will be willing to help! | The first CTE gets the max sales for each person. The second uses the windowing functions `rank()` and `count()` with an appropriate `over()` clause to calculate the position and totals.
```
with C1 as
(
select U.UserId,
U.Gender,
U.Age,
max(S.TotalSale) as TotalSale
from dbo.[User] as U
inner join dbo.Sales as S
on U.UserId = S.UserId
group by U.UserId,
U.Gender,
U.Age
), C2 as
(
select C1.UserId,
C1.TotalSale,
rank() over(order by C1.TotalSale desc) as OverallPosition,
rank() over(partition by C1.Age order by C1.TotalSale desc) as AgePosition,
rank() over(partition by C1.Gender order by C1.TotalSale desc) as GenderPosition,
count(*) over() as OverallPositionTotalCount,
count(*) over(partition by C1.Age) as AgeTotalCount,
count(*) over(partition by C1.Gender) as GenderTotalCount
from C1
)
select C2.OverallPosition,
C2.OverallPositionTotalCount,
C2.AgePosition,
C2.AgeTotalCount,
C2.GenderPosition,
C2.GenderTotalCount
from C2
where C2.UserId = 4;
```
[SQL Fiddle](http://sqlfiddle.com/#!6/53404/2)
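If you want to play with the windowing behaviour locally, here is a cut-down sketch using Python's sqlite3 (window functions need SQLite 3.25+); the four rows are invented but mirror the question's data:

```python
import sqlite3

con = sqlite3.connect(":memory:")  # needs SQLite >= 3.25 for window functions
con.executescript("""
    CREATE TABLE top_sales (name TEXT, gender TEXT, total INTEGER);
    INSERT INTO top_sales VALUES
        ('Mike',   'male',   1200000),
        ('James',  'male',   1000000),
        ('Monica', 'female',  900000),
        ('Thomas', 'male',    875000);
""")

rows = con.execute("""
    SELECT name,
           rank()   OVER (ORDER BY total DESC)                     AS overall_pos,
           count(*) OVER ()                                        AS overall_cnt,
           rank()   OVER (PARTITION BY gender ORDER BY total DESC) AS gender_pos,
           count(*) OVER (PARTITION BY gender)                     AS gender_cnt
    FROM top_sales
""").fetchall()

# Thomas is 4th of 4 overall, and 3rd of the 3 males.
thomas = next(r for r in rows if r[0] == 'Thomas')
print(thomas)  # ('Thomas', 4, 4, 3, 3)
```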
Alternative:
```
select C.OverallPosition,
C.OverallPositionTotalCount,
C.AgePosition,
C.AgeTotalCount,
C.GenderPosition,
C.GenderTotalCount
from (
select U.UserId,
S.TotalSale,
rank() over(order by S.TotalSale desc) as OverallPosition,
rank() over(partition by U.Age order by S.TotalSale desc) as AgePosition,
rank() over(partition by U.Gender order by S.TotalSale desc) as GenderPosition,
count(*) over() as OverallPositionTotalCount,
count(*) over(partition by U.Age) as AgeTotalCount,
count(*) over(partition by U.Gender) as GenderTotalCount
from dbo.[User] as U
cross apply (
select max(S.TotalSale) as TotalSale
from dbo.Sales as S
where U.UserId = S.UserId
) as S
) as C
where C.UserId = 4;
```
[SQL Fiddle](http://sqlfiddle.com/#!6/53404/3) | [jsFiddle](http://sqlfiddle.com/#!6/d41d8/14970) -- edit: it's a sqlFiddle, not jsFiddle :)
```
DECLARE @UserId INT = 4
;with overall as
(
SELECT u.Name, u.UserId, RANK() OVER (ORDER BY max(s.TotalSale) DESC) OverallRank
FROM User u
JOIN Sales s on u.UserId = s.UserId
group by u.Name, u.UserId
),
age as (
SELECT u.Name, u.UserId, RANK() OVER (ORDER BY max(s.TotalSale) DESC) AgeRank
FROM User u
JOIN Sales s on u.UserId = s.UserId
where u.age = (select age from @User where UserId = @UserId)
group by u.Name, u.UserId
),
gender as (
SELECT u.Name, u.UserId, RANK() OVER (ORDER BY max(s.TotalSale) DESC) GenderRank
FROM User u
JOIN Sales s on u.UserId = s.UserId
where u.Gender = (select gender from @User where UserId = @UserId)
group by u.Name, u.UserId
)
SELECT o.OverallRank as OverallPosition,
(select count(*) from overall) as OverallTotalCount,
a.AgeRank as AgePosition,
(select count(*) from age) as AgeTotalCount,
g.GenderRank GenderPosition,
(select count(*) from gender) as GenderTotalCount
FROM overall o
JOIN age a on o.UserId = a.UserId
JOIN gender g on o.UserId = g.UserId
WHERE o.UserId = @UserId
``` | TSQL calculating various % based on different fields | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a basic stored procedure that adds a new record to a table with a structure like the following. The table also has a column "itemID" that is set to auto-incrementing to create a unique id for each record.
I would like to use Output to get this id from the new record I inserted and then use this to add a new record to another table (**Table2 with columns colD, colE, colF**). colF in Table2 should be the Output from the below as this is the id that links both tables.
Can someone here tell me how this would work as I am pretty new to SQL and have never done this before ?
**My stored procedure (example):**
```
ALTER PROCEDURE [dbo].[CreateStuff]
@colA datetime,
@colB varchar(50),
@colC nvarchar(20)
AS
BEGIN
SET NOCOUNT ON;
INSERT INTO Table1
(
colA,
colB,
colC
)
SELECT @colA,
@colB,
@colC
END
```
Many thanks for any help with this, Tim. | ```
BEGIN
SET NOCOUNT ON;
/* Here declare a Table Variable */
DECLARE @Table_Var TABLE(itemID INT)
INSERT INTO Table1(colA,colB,colC)
OUTPUT inserted.itemID INTO @Table_Var(itemID)
SELECT @colA,@colB,@colC
/* Now do the insert into Table 2*/
INSERT INTO TABLE2
SELECT itemID FROM @Table_Var
END
```
`SCOPE_IDENTITY()` is only good when you are doing a single insert into a table with an IDENTITY column whose value you want to capture. It will only return the last generated identity value.
Other than that, if you are doing multiple inserts or it isn't an identity column, then you should use the `OUTPUT` clause along with a table variable/temp table to capture the inserted values, and then do whatever you want with those values later on (insert in another table, logging, whatever).
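For a quick illustration of the capture-then-reuse flow outside T-SQL, here is a sketch using Python's sqlite3 with invented column names; the cursor's `lastrowid` plays the role of the captured identity value:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table1 (itemID INTEGER PRIMARY KEY AUTOINCREMENT, colA TEXT);
    CREATE TABLE table2 (colD TEXT, colF INTEGER REFERENCES table1(itemID));
""")

cur = con.cursor()
cur.execute("INSERT INTO table1 (colA) VALUES (?)", ("first record",))
new_id = cur.lastrowid  # the auto-generated itemID for the row just inserted

# Use the captured id as the linking column in the second table.
cur.execute("INSERT INTO table2 (colD, colF) VALUES (?, ?)",
            ("child row", new_id))

print(new_id)  # 1
print(con.execute("SELECT colD, colF FROM table2").fetchone())  # ('child row', 1)
```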
To learn more about [`OUTPUT Clause`](http://technet.microsoft.com/en-us/library/ms177564.aspx) have a look at this link. | Try with `SCOPE_IDENTITY()`:
```
ALTER PROCEDURE [dbo].[CreateStuff]
@colA datetime,
@colB varchar(50),
@colC nvarchar(20),
@Valueout int output
AS
BEGIN
SET NOCOUNT ON;
INSERT INTO Table1
(
colA,
colB,
colC
)
SELECT @colA,
@colB,
@colC
SET @Valueout = SCOPE_IDENTITY()
END
``` | SQL Server: Use output from first insert to insert into second table | [
"",
"sql",
"sql-server",
"t-sql",
"output",
"sql-insert",
""
] |
I have an `Insert` stored procedure which feeds data to `Table1`, gets the `Column1` value from `Table1`, and calls a second stored procedure which feeds `Table2`.
But when I call The second stored procedure as:
```
Exec USPStoredProcName
```
I get the following error:
> Transaction count after EXECUTE indicates a mismatching number of BEGIN and COMMIT statements. Previous count = 1, current count = 0.
I have read the answers in other such questions and am unable to find where exactly the commit count is getting messed up. | If you have a TRY/CATCH block then the likely cause is that you are catching a transaction abort exception and continuing. In the CATCH block you must always check [`XACT_STATE()`](http://technet.microsoft.com/en-us/library/ms189797.aspx) and handle aborted and uncommittable (doomed) transactions appropriately. If your caller starts a transaction and the callee hits, say, a deadlock (which aborted the transaction), how is the callee going to communicate to the caller that the transaction was aborted and it should not continue with 'business as usual'? The only feasible way is to re-raise an exception, forcing the caller to handle the situation. If you silently swallow an aborted transaction and the caller continues assuming it is still in the original transaction, only mayhem can ensue (and the error you get is the way the engine tries to protect itself).
I recommend you go over [Exception handling and nested transactions](http://rusanu.com/2009/06/11/exception-handling-and-nested-transactions/) which shows a pattern that can be used with nested transactions and exceptions:
```
create procedure [usp_my_procedure_name]
as
begin
set nocount on;
declare @trancount int;
set @trancount = @@trancount;
begin try
if @trancount = 0
begin transaction
else
save transaction usp_my_procedure_name;
-- Do the actual work here
lbexit:
if @trancount = 0
commit;
end try
begin catch
declare @error int, @message varchar(4000), @xstate int;
select @error = ERROR_NUMBER(), @message = ERROR_MESSAGE(), @xstate = XACT_STATE();
if @xstate = -1
rollback;
if @xstate = 1 and @trancount = 0
rollback
if @xstate = 1 and @trancount > 0
rollback transaction usp_my_procedure_name;
raiserror ('usp_my_procedure_name: %d: %s', 16, 1, @error, @message) ;
end catch
end
go
``` | I had this problem too. For me, the reason was that I was doing
```
return
commit
```
instead of
```
commit
return
```
in one stored procedure. | Transaction count after EXECUTE indicates a mismatching number of BEGIN and COMMIT statements. Previous count = 1, current count = 0 | [
"",
"sql",
"sql-server-2012",
"sqlexception",
""
] |
I have two tables, `BankMaster` and `#tmp_BankMaster`. The `#tmp_BankMaster` table is derived from the `BankMaster` table, so the data structures are the same, and some of the data in `#tmp_BankMaster` is the same as in `BankMaster`. Now I want to insert all non-existing data from `#tmp_BankMaster` into `BankMaster`. I have created a query, but I don't know why it gives a primary key constraint error: "Cannot insert duplicate key".
```
INSERT INTO BankMaster
SELECT *
FROM #tmp_BankMaster
WHERE
BankID NOT IN (SELECT BankID FROM BankMaster
WHERE BankMaster.BankID = #tmp_BankMaster.BankID
AND BankMaster.CompanyID = #tmp_BankMaster.CompanyID )
```
I am using SQL Server 2005 and the primary key is created on the columns `CompanyID` and `BankID`. It is a composite key. | No, you can't use NOT IN, but you can use NOT EXISTS. Since you are using a composite key, you need to make sure that only those records are selected from #tmp_BankMaster whose composite key (BankID, CompanyID) doesn't exist in BankMaster. Try this:
```
insert into BankMaster
Select * From #tmp_BankMaster tmp
where not exists (select 1 from BankMaster
master where master.BankID= tmp.BankID
and master.CompanyID = tmp.CompanyID )
```
or
```
insert into BankMaster
SELECT *
FROM #tmp_BankMaster
WHERE NOT EXISTS
(SELECT *
FROM BankMaster
WHERE BankMaster.BankID= #tmp_BankMaster.BankID
AND BankMaster.CompanyID = #tmp_BankMaster.CompanyID
)
```
This should also work:
```
INSERT INTO BankMaster
SELECT *
FROM #tmp_BankMaster
EXCEPT
SELECT *
FROM BankMaster
``` | You need to check the values you are trying to insert into BankMaster. It appears as if you are trying to insert a combination of CompanyID and BankID that already exists in the table. | What's going wrong with this sql query? | [
"",
"sql",
"sql-server",
""
] |
I have been at this for the past two hours and have tried many different ways in regards to subquery and joins. Here's the exact question "Get the name and city of customers who live in the city where the least number of products are made"
Here is a snapshot of the database tables

I know how to get the min
```
select min(quantity)
from products
```
but this returns just the min without the city attached to it so I can't search for the city in the customers table.
I have also tried GROUP BY and found it gave me 3 minimums (one for each group of cities), which I believe may help me
```
select city,min(quantity)
from products
group by city
```
Putting everything together I got something that looks like
```
SELECT
c.name,c.city
FROM
customers c
INNER JOIN
(
SELECT
city,
MIN(quantity) AS min_quantity
FROM
products
GROUP BY
city
) AS SQ ON
SQ.city = c.city
```
But this returns multiple customers, which isn't correct. Looking at the database, the city where the lowest number of products are made seems to be Newark, and there are no customers who reside in Newark, so I assume this query should return 0 hits. Thank you for your time.
Example
Here is an example "Get the pids of products ordered through any agent who makes at least one order for a customer in Kyoto"
and the answer I provided is
```
select pid
from orders
inner join agents
on orders.aid = agents.aid
inner join customers
on customers.cid = orders.cid
where customers.city = 'Kyoto'
``` | I have just figured out my own answer. I guess taking a break and coming back to it was all I needed. For future readers this answer will use a subquery to help you get the min of a column and compare a different column (of that same row) to a different tables column.
This example gets the city where the least number of products are made (the quantity column) in the products table, compares that city to the city column in the customers table, and then prints the names and cities of those customers. (To help clarify, use the link in the original question to look at the structure of the database I am talking about.) The first step is to sum all the products by their respective cities, then take the min of that, and then find the customers in that city. Here was my solution:
```
with citySum as(
select city,sum(quantity) as sum
from products
group by city)
select name,city
from customers
where city
in
(select city
from citySum
where sum =(
select min(sum)
from citySum))
```
Here is another solution I have found today that works as well using only Sub queries
```
select c.name,c.city
from customers c
where c.city
in
(select city
from
(select p.city,sum(p.quantity) as lowestSum
from products p
group by p.city) summedCityQuantities
order by lowestsum asc
limit 1)
``` | In Postgresql you have sophisticated tools, viz., windowing and CTEs.
```
WITH
find_least_sumq AS
(SELECT city, RANK() OVER ( PARTITION BY city ORDER BY SUM(quantity) ) AS r
FROM products)
SELECT name, city
FROM customers NATURAL JOIN find_least_sumq /* ON city */
WHERE r=1; /* rank 1 is smallest summed quantity including ties */
```
In Drew's [answer](http://stackoverflow.com/a/21898953/717898), you are zeroing in on the cities where the smallest number of any particular item is made. I interpret the question as wanting the sum of items made in that city. | PostgreSQL: get the min of a column with its associated city | [
"",
"sql",
"postgresql",
""
] |
I want a full backup of my SQL Server database using a scheduled batch file. I tried many codes from this website but nothing worked. I just want to know which exe file actually creates the backup so I can directly execute that file from cmd. The codes which I tried are
```
echo -- BACKUP DATABASE --
set /p DATABASENAME=Enter database name:
:: filename format Name-Date (eg MyDatabase.bak)
set BACKUPFILENAME=%CD%\%DATABASENAME%.bak
set SERVERNAME='server name'
echo.
sqlcmd -E -S %SERVERNAME% -d master -Q "BACKUP DATABASE [%DATABASENAME%] TO DISK =
N'%BACKUPFILENAME%' WITH INIT , NOUNLOAD,
NAME = N'%DATABASENAME% backup', NOSKIP , STATS = 10, NOFORMAT"
```
I also tried this one
```
SqlCmd -E -S sql 'server_name' –Q “BACKUP DATABASE 'db_name' TO DISK=’f:\2014\02\19\db_name.bak’”
``` | Here's the code for above solution
```
echo off
cls
echo -- BACKUP DATABASE --
::set db name
set DATABASENAME=db_name
:: set path and format
set BACKUPFILENAME=PATH\%DATABASENAME%.bak
:: set server name
set SERVERNAME=server_name
echo.
::backup execution
sqlcmd -S %SERVERNAME% -d COFI -U vutagi -P vu1234 -Q "BACKUP DATABASE [%DATABASENAME%] TO DISK = N'%BACKUPFILENAME%' WITH INIT , NOUNLOAD , NAME = N'%DATABASENAME% backup', NOSKIP , STATS = 10, NOFORMAT"
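:: One hedged option for scheduling (not part of the original answer): a
:: Windows Task Scheduler entry that runs this batch nightly, for example:
:: schtasks /create /sc daily /st 02:00 /tn "NightlyDbBackup" /tr "C:\scripts\backup.bat"
:: The task name, start time and path here are placeholders, not real values.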
``` | > I just want to know which exe file actually creates the backup so I can directly execute that file from cmd.
You have zero understanding of how SQL Server works, right?
The file that runs the backup is the server executable, which you cannot execute from the command line: it is basically the exe that is already running as the database server and waits for a SQL command to execute, or 100 at the same time, or 1000; it has no limits.
SQL Server is a proper server, not some honky-tonk small database system.
In your above examples:
* SqlCmd is a tool that comes with SQL Server (it is sqlcmd.exe) and is used to send a command to the ALREADY RUNNING SQL Server.
* The command you are sending is "BACKUP DATABASE", which tells the already running server to back up a database. Interestingly enough, whoever wrote both has not really spent a minute thinking.
* I do not know what you supposedly looked up on the internet, but the syntax is at
<http://technet.microsoft.com/en-us/library/ms186865(v=sql.100).aspx>
for your particular version.
* I suggest you give it a more serious try. Do not use SqlCmd; use SQL Server Management Studio. Try to make it work and, cough, if it fails with an error, read and fix what it says. If you do not understand the error message, ask for help; but hey, without an error message we are kind of out of ideas, you know. Once it works, transpose it to your batch.
* Note that BACKUP is a command sent to the server. So your path is not YOUR path, but the server's path, under the server's identity, without user-mapped network paths.
The proper way from the command line is indeed using SqlCmd. Now, I do not know how insecure you like your servers, but mine require a password. Thus I assume you never bothered to read the SqlCmd syntax. It is at
<http://technet.microsoft.com/en-us/library/ms162773.aspx>
and I am also quite sure SqlCmd gives you an error output that you, assuming we all here have the gift of reading your mind, are kindly skipping as totally useless. Read it.
And that sort of makes it complete. Not the nicest answer, but maybe with it you learn to not ignore what is in front of you on your screen - namely error messages. And possibly reading the documentation once in a while instead of copy/pasting "stuff from the internet". | Which file is executed in SQL Server 2005 while backing up data | [
"",
"sql",
"sql-server",
"sql-server-2005",
""
] |
I am a beginner in SQL and I need help finding the solution for this problem.
I want the output to be the name of any person who managed to ride the Super Rollercoaster more than 50 times on a single day.
```
create table visitor(
visitID char(n),
name char(n) not null,
primary key (visitID)
);
create table ticket(
ticketID char(n),
ticketType char(n),
day int(n),
month char(n),
year int(n),
seasonID char(n),
visitID char(n),
primary key (ticketID),
foreign key (seasonID) references seasonPerClass,
foreign key (visitID) references visitor
);
create table attractionVisit(
attractionID char(n),
ticketID char(n),
foreign key (attractionID) references attraction,
foreign key (ticketID) references ticket
);
create table attraction(
attractionID char(n),
attractionName char(n) not null,
primary key (attractionID)
);
```
I am unable to test my code, but so far I have
```
SELECT
distinct Name
FROM
attractionVisit
Natural Join
ticket
Natural Join
visitor
Natural Join
attraction group by VisitID, day
WHERE
attractionName = ‘Super Rollercoaster’ having count(*) > 50;
```
I don't know if this is right or not. Please help. | WHERE should be put before GROUP BY
```
SELECT
distinct Name
FROM
attractionVisit
Natural Join
ticket
Natural Join
visitor Natural
Join attraction
WHERE
attractionName = 'Super Rollercoaster'
group by
VisitID, day
having
count(*) > 50;
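-- Why the ordering matters: WHERE filters individual rows before they are
-- grouped, so the non-aggregated predicate (attractionName) belongs there;
-- HAVING filters whole groups after aggregation, which is why the
-- count(*) > 50 predicate can only live in HAVING.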
``` | ```
SELECT
distinct(v.Name)
FROM
attractionVisit av
inner Join attraction a
on av.attractionID=a.attractionID
inner join ticket t
on av.ticketID=t.ticketID
inner join visitor v
on t.visitID=v.visitID
WHERE
attractionName = 'Super Rollercoaster'
group by
v.name, t.day
having
count(*) > 50;
``` | SQL statement to output record of person that has more than x counts of a condition | [
"",
"sql",
""
] |
How do I join userid (from tblB) to user (in tblA) and get the username?
I really appreciate any help. Thanks in advance.
<http://sqlfiddle.com/#!2/ac600/1>
```
CREATE TABLE if not exists tblA
(
id int(11) NOT NULL auto_increment ,
user varchar(255),
category int(255),
PRIMARY KEY (id)
);
CREATE TABLE if not exists tblB
(
id int(11) NOT NULL auto_increment ,
username varchar(255),
userid int(255),
PRIMARY KEY (id)
);
INSERT INTO tblA (user, category ) VALUES
('1', '1'),
('1', '2'),
('1', '3'),
('1', '1'),
('2', '1'),
('2', '1'),
('2', '1'),
('2', '1'),
('3', '1'),
('2', '1'),
('4', '1'),
('4', '1'),
('2', '1');
INSERT INTO tblB (userid, username ) VALUES
('1', 'A'),
('2', 'B'),
('3', 'C'),
('4', 'D'),
('5', 'E');
```
query:
```
SELECT
groups.*,
@rank:=@rank+1 AS rank
FROM
(select
user,
category,
count(*) as num
from
tblA
where
category=1
group by
user,
category
order by
num desc,
user) AS groups
CROSS JOIN (SELECT @rank:=0) AS init
```
the table looks like :
```
username category num Ascending rank
B 1 6 2
A 1 2 1
D 1 2 4
C 1 1 3
``` | Use `JOIN`, for example:
```
SELECT
tblB.username,
groups.*,
@rank:=@rank+1 AS rank
FROM
(select
user,
category,
count(*) as num
from
tblA
where
category=1
group by
user,
category
order by
num desc,
user) AS groups
-- left join: in case if data integrity fails:
left join
tblB ON groups.user=tblB.userid
CROSS JOIN (SELECT @rank:=0) AS init
```
-check your modified [demo](http://sqlfiddle.com/#!2/ac600/2). | You just need to do left join
```
SELECT
groups.*,
@rank:=@rank+1 AS rank
FROM
(select
user,
category,
count(*) as num,
tblB.username
from
tblA
left join tblB on tblA.id = tblB.userid
where
category=1
group by
user,
category
order by
num desc,
user) AS groups
CROSS JOIN (SELECT @rank:=0) AS init
```
See [Demo](http://sqlfiddle.com/#!2/ac600/5) | join user id from tblB to user from tblA and get username | [
"",
"mysql",
"sql",
""
] |
Using the script below we can create (or alter) a procedure, but then how do we delete it, alter it, etc.? Can anyone help me with how to delete a procedure?
```
USE [PSI Data]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER Procedure[dbo].[savepsi]
@Psiid int,
@Name varchar(50),
@DOB int,
@Sex varchar(50),
@Mobile int,
@Cusaddress varchar(50),
@Product varchar(50),
@Amount int
as
begin
insert Psitable(Psiid,Name,DOB,Sex,Mobile,Cusaddress,Product,Amount)
Values (@Psiid,@Name,@DOB,@Sex,@Mobile,@Cusaddress,@Product,@Amount)
end
``` | Try this
USE [PSI Data]
DROP PROCEDURE [dbo].[savepsi];
GO | To remove a stored procedure you can use the command [`DROP PROCEDURE`](http://technet.microsoft.com/en-us/library/ms174969.aspx)
```
DROP PROCEDURE [dbo].[savepsi]
``` | How to delete a stored procedure? | [
"",
"mysql",
"sql",
"sql-server",
"stored-procedures",
""
] |
My Query looks like
```
$search_query = db_query("SELECT nd.nid, users.name, nd.type FROM node as nd
LEFT JOIN node_revisions as nd_rev ON nd_rev.nid = nd.nid AND nd_rev.vid = nd.vid
LEFT JOIN users ON nd.uid = users.uid
WHERE nd.status = 1 AND nd_rev.body LIKE LOWER('%node/100%')
AND nd.nid NOT IN(SELECT DISTINCT nid FROM term_node WHERE tid = 293)");
```
This query actually returns all the matches from the node\_revisions.body field, which includes
node/1000, node/1001, etc.
I want to get only the result of exact match where possible like
```
"node/100"
"node/100/"
"/node/100"
"/node/100/"
'node/100'
'node/100/'
'/node/100'
'/node/100/'
```
and not like
```
"node/1006"
"node/10064/"
"/node/1000"
"/node/10001/"
'node/10023'
'node/1005/'
'/node/1001'
'/node/10069/'
```
The above query returned me a result which has a string like the one below:
..a href="/node/1006"
How can I avoid this kind of error? Please help. | Oh yeah... I found a resolution for this. I redefined my query as below and it gives the results I expected:
```
$search_query = db_query("SELECT nd.nid, users.name, nd.type FROM node as nd
LEFT JOIN node_revisions as nd_rev ON nd_rev.nid = nd.nid AND nd_rev.vid = nd.vid
LEFT JOIN users ON nd.uid = users.uid
WHERE nd.status = 1 AND nd_rev.body RLIKE '[[:<:]]" . $search_string . "[[:>:]]'
AND nd.nid NOT IN(SELECT DISTINCT nid FROM term_node WHERE tid = 293)");
```
Look at
```
nd_rev.body RLIKE '[[:<:]]" . $search_string . "[[:>:]]'
```
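One version caveat worth noting (an assumption beyond the original answer): the `[[:<:]]` and `[[:>:]]` word-boundary markers work in MySQL up to 5.7, but MySQL 8.0 switched to the ICU regex library, where the equivalent boundary is `\b` (written `\\b` inside a string literal). A rough sketch of the same condition for MySQL 8.0, reusing the question's tables:

```
-- MySQL 8.0 (ICU) equivalent of the [[:<:]] ... [[:>:]] word boundaries:
-- 'node/100' matches, but 'node/1006' does not ('6' continues the word).
SELECT nd.nid
FROM node_revisions nd_rev
JOIN node nd ON nd.nid = nd_rev.nid AND nd.vid = nd_rev.vid
WHERE nd_rev.body REGEXP '\\bnode/100\\b';
```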
This is what i expected | Try removing the `%` after 100 so the search won't consider any digit after 100, like this:
```
LOWER('%node/100')
``` | How to search exact string using MySql Query which avoids extended values? | [
"",
"mysql",
"sql",
""
] |
I'm trying to convert a date column stored as m/d/yyyy to a 'yyyymmdd' character format, without the need to cast the date column every time I want to use the column in my query.
Currently I'm casting and formatting dates as chars: `((cast(cast(invitation_date as CHAR(8)) as date format 'YYYYMMDD'))`.
Is there a way to convert the data column once and call the converted character value later in the query?
I'm using Teradata in Aqua Data Studio 13.0.3. | If source is a DECIMAL yyyymmdd you can do
```
CAST(invitation_date - 19000000 AS DATE) AS newcol
```
Teradata allows an alias to be used in any place, so you can simply do
```
WHERE newcol > DATE
```
Of course best case would be to change those columns to DATE during load. | Navigate to File->Options -> Results Format -> Teradata. Then select the datatype Date and enter yyyyMMdd. Your result set will now return the specified date format. Let me know if this will solve your issue.

 | Converting date to character format in SQL | [
"",
"sql",
"teradata",
""
] |
I'm working in SQL Server 2012, where I have a table `Students` which contains the fields `StudentId, Name, City, Username, Password`.
I need to split the `Students` table into two tables, `Students` and `Users`. The `Users` table must have the fields `UserId, Username, Password`,
and the `Students` table must have `StudentId, Name, City, UserId`.
My question is: how can I do this in code?
P.S.: I'm new in sql. I know maybe this is dublicate, but I don't understand how to do this. | You can do it in the following steps:
Create Users table. Since you don't have UserID, you can generate it using an identity column.
```
CREATE TABLE Users (
UserID bigint IDENTITY(1,1) PRIMARY KEY NOT NULL,
Username nvarchar(255),
Password nvarchar(255)
)
```
Insert data into Users table
```
INSERT INTO Users (
Username,
Password
)
SELECT
Username,
Password
FROM
[Existing table]
```
Now, create Students table
```
CREATE TABLE Students (
StudentID bigint PRIMARY KEY NOT NULL,
Name nvarchar(255),
City nvarchar(255),
UserID bigint FOREIGN KEY REFERENCES Users(UserID)
)
```
and insert data into Students table
```
INSERT INTO Students (
StudentID,
Name,
City,
UserID
)
SELECT
StudentID,
Name,
City,
UserID
FROM
[Existing table] A INNER JOIN [Users] B
ON A.Username = B.Username
```
If the original table name is Students, create the above table as Students\_New.
Then, after creation and insertion, you can use the below script:
```
DROP TABLE dbo.Students
GO
EXEC sp_rename 'Students_New', 'Students'
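-- Caveat (an assumption beyond the steps above): dropping the old Students
-- table also drops anything defined on it (permissions, indexes, constraints,
-- and any foreign keys referencing it); recreate those against the renamed
-- table if your schema needs them.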
``` | Populate Students Table 1st , since you already have Students table in your table database you cannot have another table called students, so call it students\_new inititally.
**Add a UserID Identity Column to your Original Table**
```
ALTER TABLE Students
ADD UserID INT IDENTITY(1,1)
```
**Students Table**
```
SELECT StudentId, Name, City, UserId INTO Students_New
FROM Students
```
**Users Table**
```
SELECT UserId, Username, Password INTO Users
FROM Students
```
**Drop original Students Table and Rename New Table**
Now you can Drop the actual Students Table and Rename the `Students_New` Table to `Students`
```
DROP TABLE Students
GO
EXECUTE sp_rename 'Students_New','Students';
GO
``` | How to split one table into two | [
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I am loading data from Excels into database on SQL Server 2008. There is one column which is in `nvarchar` data type. This field contains the data as
```
Text text text text text text text text text text.
(ABC-2010-4091, ABC-2011-0586, ABC-2011-0587, ABC-2011-0604)
Text text text text text text text text text text.
(ABC-2011-0562, ABC-2011-0570, ABC-2011-0575, ABC-2011-0588)
```
so it is text with many sentences of this kind.
For each row I need to get the data matching `ABC-####-####`; specifically, I only need the last part. So e.g. for `ABC-2010-4091` I need to obtain `4091`. This number I will need to join to another table. I guess it would be enough to get the last parts in the `ABC-####-####` format; then I should be able to handle the rest.
So the example of given above, the result should be `4091, 0586, 0587, 0604, 0562, 0570, 0575, 0588` in the row instead of the whole nvarchar value field.
Is this possible somehow? The text in the nvarchar field differ, but the text format (ABC-####-####) I want to work with is still the same. Only the count of characters for the last part may vary so its not only 4 numbers, but could be 5 or more.
What is the best approach to get these data? Should I parse it in SSIS or on the SQL server side with SQL Query? And how?
I am aware this is a tough task. I appreciate any help or advice on how to deal with this. I have not tried anything yet, as I do not know where to start. I read articles about SQL parsing, but I want to ask for the best approach to this task. | I have worked this problem out with the following guides:
[Split Multi Value Column into Multiple Records](http://microsoft-ssis.blogspot.se/2012/11/split-multi-value-column-into-multiple.html)
&
[Remove Multiple Spaces with Only One Space](http://social.msdn.microsoft.com/Forums/sqlserver/en-US/34ce1b87-fee7-48c9-b3aa-9615de138bbc/how-to-replace-multiple-blanks-into-one?forum=sqlintegrationservices) | If I understand correctly, "ABS-####-####" will be the value coming through in the column and the numeric part is variable in length.
If that is the case, maybe this will work.
Use a "Derived Column" transformation.
Lets say we call "ABC-####-####" = Column1
```
SUBSTRING("Column1",(FINDSTRING("Column1","-",2)+1),LEN(Column1)-(FINDSTRING("Column1","-",2)))
```
If I am not mistaken, that should give you the last # values in a new column no matter how long that value is.
HTH | SQL Parse NVARCHAR Field | [
"",
"sql",
"sql-server",
"parsing",
"ssis",
""
] |
Hi, can anyone help with the problem below? I have a stored procedure which, depending on the month and date, determines which image is returned. I have found plenty of examples of how to solve this when the data is in a table, but no solutions for the way I'm doing it.
The reason why I'm doing it this way is because it will be easier to modify the sql in the procedure than having to change and upload the new code to the website.
```
DECLARE @CurrentMonth int
SET @CurrentMonth = ( MONTH(GETDATE()))
DECLARE @CurrentDate int
SET @CurrentDate = ( Day(GETDATE()))
--DECLARE @Url varchar(100)
--SET @URL = ''
SET NOCOUNT ON;
SELECT
--Set image for xmas
CASE WHEN @CurrentMonth = 12 AND @CurrentDate = 25
THEN (SELECT 'imagetest.png' AS URL)
--Set for easter
WHEN @CurrentMonth = 3 AND @CurrentDate = 19
THEN (SELECT 'imagetest2.png' AS url)
--Keep setting images for events
WHEN @CurrentMonth = 3 AND @CurrentDate = 19
THEN (SELECT 'imagetest3.png' AS url)
--If no match, return default image
ELSE (SELECT 'logo.png' AS url)
END
-- return @URL
END
```
The sp executes ok, but the column is (No Column Name) when what I want is url as the column name.
Any help from someone more experienced would be appreciated.
I'm using SQL2008R2 | You don't need the inner `SELECT`s:
```
SELECT
--Set image for xmas
CASE WHEN @CurrentMonth = 12 AND @CurrentDate = 25
THEN 'imagetest.png'
--Set for easter
WHEN @CurrentMonth = 3 AND @CurrentDate = 19
THEN 'imagetest2.png'
--Keep setting images for events
WHEN @CurrentMonth = 3 AND @CurrentDate = 19
THEN 'imagetest3.png'
--If no match, return default image
ELSE 'logo.png'
END AS url
```
BTW, March 19th will NEVER be an Easter Sunday. March 22 is the earliest possible [Easter date](http://www.smart.net/~mmontes/freq3.html) | The `AS URL` needs to be at the end of your case statement
```
CASE
...
END AS URL
``` | Return Column name in case statement but no table involved | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have a large database of URLs, and I have duplicates from trailing slashes. I would like to find duplicate values of those ending with a trailing slash, but not URLs with text after the trailing slash, such as `http://www.google.com/asdfasdf`.
```
CREATE TABLE link_info (
id INT,
url VARCHAR(32)
);
INSERT INTO link_info VALUES
(1, 'http://www.yahoo.com/'),
(2, 'http://www.google.com/'),
(3, 'http://www.google.com/asdfasdf'),
(4, 'http://www.yahoo.com');
```
And I am trying to select duplicates without the trailing slash, but it selects `http://www.google.com/asdfasdf` as a duplicate.
```
SELECT DISTINCT TRIM(TRAILING '/' FROM url) url
FROM link_info
```
I was hoping to use regexp, but that doesn't work.
```
SELECT DISTINCT TRIM(TRAILING REGEXP('[/]$') FROM url) url
FROM link_info
``` | Your query will return every url trimmed. I think you need something like this:
```
SELECT TRIM(TRAILING '/' FROM url) trimmed_url
FROM link_info
GROUP BY trimmed_url
HAVING COUNT(DISTINCT url)>1
```
Please see fiddle [here](http://sqlfiddle.com/#!2/81038/25).
**Edit**
If there are no exact duplicates, and you just want to keep the row with no trailing slash, you could use this delete query:
```
DELETE l1.*
FROM
link_info l1 INNER JOIN link_info l2
ON l1.url = CONCAT(l2.url, '/')
```
Please see fiddle [here](http://sqlfiddle.com/#!2/a8829/1). Notice that this query will just remove the duplicated yahoo.com with the trailing slash, but it won't remove the trailing slash from www.google.com/ | you may use this
```
SELECT TRIM(TRAILING '/' FROM url) url
FROM link_info
group by SUBSTRING_INDEX(url, '.com', 1)
```
But this works only with links which have `.com`, so with `.net` or `.something` you add a
union
[**DEMO HERE**](http://sqlfiddle.com/#!2/81038/22) | select near duplicates with trailing slashes | [
"",
"mysql",
"sql",
"trim",
""
] |
Every time I try to create a new database in SQL Server it fails; I haven't managed to create any so far. Please help me with a detailed solution for this.
## TITLE: Microsoft SQL Server Management Studio
Cannot write property IsFileStream.This property is not available on SQL Server 7.0. (Microsoft.SqlServer.Smo)
For help, click: <http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=10.50.1600.1+((KJ_RTM).100402-1539+)&EvtID=IsFileStream&LinkId=20476>
---
BUTTONS:
## OK | This is due to you using older MS SQL Management studio. Download the new version in this link:
<https://learn.microsoft.com/es-es/sql/ssms/download-sql-server-management-studio-ssms?view=sql-server-ver15>
I faced the same issue with having SQL 2008R2 and when I installed 2019, I had the same error as you on management studio, now I'm installing new version. | SQL Server Management is unable to set up the default property while creating a database using previous version of SQL Server Management Studio. | Cannot write property IsFileStream | [
"",
"sql",
"sql-server",
"ssms",
""
] |
I have no idea how to finish this query....
The question asks:
> "For each person who acted in a movie in 2010, find their name
> and total pay in all movies in which they have acted (i.e. including
> those not in 2010)."
The table required: (where title = Movie title, and year = Movie year, and Pay = actors pay)
```
ActedIn (name:varchar, title:varchar, year:int, pay:real)
```
My query so far:
```
SELECT A.name, A.pay FROM ActedIn A WHERE A.year = 2010;
```
This returns:
```
+--------+--------+
| Name | Pay |
+--------+--------+
| Dino | 12.22 |
+--------+--------+
| Miro | 1238.22|
+--------+--------+
```
But, this only returns the pay for the money received for the specific movie made by the actor in 2010.
Now that I have the names of the actors I want to look up, I need it to find all the other movies they made (before or after 2010), and add TOTAL PAY for all the movies they were in.
I know I need to make another table to find all the movies those 2 actors acted in, and then merge the columns I think.
Any help? | You can do this with a single aggregation and `having` clause:
```
select ai.name, sum(ai.pay)
from actedin ai
group by ai.name
having sum(case when ai.year = 2010 then 1 else 0 end) > 0;
```
The `having` clause counts the number of movies that each "name" acted in in 2010. If this number is greater than 0, then the name is kept for the result set. | You can also use exists with a subquery.
```
select name, sum(pay)
from actedin a
where exists (select 1 from actedin b where a.name = b.name and year = 2010)
group by name
``` | SQL query help select from multiple tables | [
"",
"mysql",
"sql",
""
] |
Let's say i've got this database:
```
book
| idBook | name |
|--------|----------|
| 1 |Book#1 |
category
| idCateg| category |
|--------|----------|
| 1 |Adventures|
| 2 |Science F.|
book_categ
| id | idBook | idCateg | DATA |
|--------|--------|----------|--------|
| 1 | 1 | 1 | (null) |
| 2 | 1 | 2 | (null) |
```
I'm trying to select only the books which are in category 1 AND category 2 something like this
```
SELECT book.* FROM book,book_categ
WHERE book_categ.idCateg = 1 AND book_categ.idCateg = 2
```
Obviously, this gives 0 results because each row has only one idCateg. It does work with OR, but the results are not what I need. I've also tried to use a join, but I just can't get the results I expect.
Here is the SQLFiddle of my current project with my current DB; the data at the beginning is just a sample. [SQLFiddle](http://sqlfiddle.com/#!2/3e6bc/49)
Any help will be really appreciated. | Solution using EXISTS:
```
select *
from book b
where exists (select 'x'
from book_categ x
where x.idbook = b.idbook
and x.idcateg = 1)
and exists (select 'x'
from book_categ x
where x.idbook = b.idbook
and x.idcateg = 2)
```
Solution using join with an inline view:
```
select *
from book b
join (select idbook
from book_categ
where idcateg in (1, 2)
group by idbook
having count(*) = 2) x
on b.idbook = x.idbook
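-- One edge case (not covered above): if book_categ could hold duplicate
-- (idBook, idCateg) rows, count(*) = 2 may be satisfied by one repeated
-- category; counting distinct categories instead avoids that:
-- having count(distinct idcateg) = 2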
``` | You could try using `ALL` instead of `IN` (if you only want values that match all criteria to be returned):
```
SELECT book.*
FROM book, book_categ
WHERE book_categ.idCateg = ALL(1 , 2)
``` | Select from one table but filtering other two | [
"",
"mysql",
"sql",
""
] |
**QUERY #1**
```
SELECT
dbo.CLIENT.CLIENT_ID, dbo.CLIENT.GOC, dbo.SALES_UW_REGION.SALES_UNDERWRITING
FROM dbo.CLIENT LEFT OUTER JOIN
dbo.SALES_UW_REGION ON dbo.CLIENT.GOC = dbo.SALES_UW_REGION.GOC
WHERE (dbo.CLIENT.CLIENT_ID = 23721)
CLIENT_ID, GOC, SALES_UNDERWRITING
23721 332 Underwriting
23721 332 Sales
```
I can understand why this would return only one row, the reason being that despite the LEFT outer join which ensures both CLIENT records are returned even if they are unmatched, the FILTER is applied AFTER the join, so the resultset only has one row.
**Query #2**
```
SELECT
dbo.CLIENT.CLIENT_ID, dbo.CLIENT.GOC, dbo.SALES_UW_REGION.SALES_UNDERWRITING
FROM dbo.CLIENT LEFT OUTER JOIN
dbo.SALES_UW_REGION ON dbo.CLIENT.GOC = dbo.SALES_UW_REGION.GOC
WHERE (dbo.CLIENT.CLIENT_ID = 23721)
and SALES_UW_REGION.SALES_UNDERWRITING = 'Sales '
CLIENT_ID GOC SALES_UNDERWRITING
23721 332 Sales
```
However, when I move the 'SALES' filter to the JOIN clause, I was surprised to see that still only one row is returned.
**Query #3**
```
SELECT
dbo.CLIENT.CLIENT_ID, dbo.CLIENT.GOC, dbo.SALES_UW_REGION.SALES_UNDERWRITING
FROM dbo.CLIENT LEFT OUTER JOIN
dbo.SALES_UW_REGION ON dbo.CLIENT.GOC = dbo.SALES_UW_REGION.GOC
and SALES_UW_REGION.SALES_UNDERWRITING = 'Sales '
WHERE (dbo.CLIENT.CLIENT_ID = 23721)
CLIENT_ID GOC SALES_UNDERWRITING
23721 332 Sales
```
I expected that since it was part of the JOIN clause and the join was a LEFT OUTER, I would have gotten 2 rows. In general, if a JOIN condition involving an equality match on two columns, one from the LEFT table and one from the RIGHT table, is not met, then in a LEFT OUTER JOIN the LEFT table's row would still be returned, with NULL values for the columns selected from the RIGHT. Why should it be any different if we are matching a value from the RIGHT table to a literal? Shouldn't the row from the LEFT table still be returned?
Man, I thought this was basic stuff that I had down....
Here's what you need to recreate what I did:
```
CREATE TABLE [dbo].[CLIENT](
[CLIENT_ID] [bigint] NOT NULL,
[GOC] [char](3) NULL
)
go
CREATE TABLE [dbo].[SALES_UW_REGION](
[GOC] [char](3) NOT NULL,
[SALES_UNDERWRITING] [varchar](12) NULL
)
go
INSERT INTO [dbo].[CLIENT]([CLIENT_ID], [GOC])
SELECT 23721, N'332'
go
INSERT INTO [dbo].[SALES_UW_REGION]([GOC], [SALES_UNDERWRITING])
SELECT N'332', N'Underwriting' UNION ALL
SELECT N'332', N'Sales'
go
``` | I think your confusion involves a misunderstanding of the first result set:
```
CLIENT_ID, GOC, SALES_UNDERWRITING
23721 332 Underwriting
23721 332 Sales
```
Though there are two rows in this result set, the results represent data from only one row in the Client table. The join condition allowed the single row from the Client table to match two rows in the Sales\_UW\_Region table, and so the data for that row from the Client table is duplicated in the result set. There is only one Client record here in the first place, evidenced by the fact that there is only one Client\_ID, but the data for the record is shown twice: once for each matching record in Sales\_UW\_Region.
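To make the difference concrete, here is a side-by-side sketch (my own illustration, reusing the question's tables) of the two predicate placements; the `ON`-clause version would still preserve unmatched CLIENT rows, while the `WHERE`-clause version filters after the join:

```
-- Extra predicate in ON: the LEFT JOIN still preserves every CLIENT row;
-- non-'Sales' matches just come back with NULLs instead of disappearing.
SELECT c.CLIENT_ID, s.SALES_UNDERWRITING
FROM dbo.CLIENT c
LEFT OUTER JOIN dbo.SALES_UW_REGION s
    ON c.GOC = s.GOC
   AND s.SALES_UNDERWRITING = 'Sales'
WHERE c.CLIENT_ID = 23721;

-- Extra predicate in WHERE: rows where s.SALES_UNDERWRITING is NULL are
-- filtered out afterwards, effectively turning the outer join into an inner one.
SELECT c.CLIENT_ID, s.SALES_UNDERWRITING
FROM dbo.CLIENT c
LEFT OUTER JOIN dbo.SALES_UW_REGION s
    ON c.GOC = s.GOC
WHERE c.CLIENT_ID = 23721
  AND s.SALES_UNDERWRITING = 'Sales';
```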
Later, when you include the `and SALES_UW_REGION.SALES_UNDERWRITING = 'Sales '` condition as part of the join's `ON` clause, the original single record in the Client table only matches one record from the Sales\_UW\_Region table. The data for the row no longer needs to be duplicated, and so only one row is returned. | You have one row in your `CLIENT` table for `CLIENT_ID = 23721`.
You're left joining the other table which has multiple rows for `CLIENT_ID = 23721` (when joined with criteria in your first query), but when the joining criteria is altered it excludes one of those rows from the `RIGHT` table.
You're getting all records from your `LEFT` table regardless of whether they join to records in the `RIGHT` table, just as expected. | LEFT OUTER JOINs not acting as expected | [
"",
"sql",
"sql-server",
""
] |
Below are the tables in MySQL
```
Table:ShopInfo
ShopCode ShopName
-------- -------
X Xylophon
Y Yellow
Z Zoota
K Kite
Table: Income
Shop Date Earnings
----- ---------- --------
X 2014-10-17 4000
Y NULL NULL
Y 2014-11-09 5000
Y 2014-11-18 3000
Y 2014-12-05 2000
Z 2014-12-25 6000
K NULL NULL
```
I want to find the **Total Earning for each Shop in the month of November only**, Null earning will be replaced with zero. The result table will be
```
ShopName NovEarn
-------- -------
Yellow 8000
Kite 0
```
This is What I am doing:
```
SELECT ShopInfo.ShopName, IFNULL( SUM(Income.Earnings), 0 ) AS NovEarn
FROM Income
JOIN ShopInfo ON Income.Shop=ShopName.ShopCode
GROUP BY Income.Shop
HAVING (MONTH(Income.Date)=11 AND YEAR(Income.Date)=2014) OR Date IS NULL
ORDER BY NovEarn DESC
```
And its giving me
```
ShopName NovEarn
-------- -------
Yellow 10000
Kite 0
```
That is, the sum of earnings for all the months. What am I doing wrong? | Try putting the having clauses under where clauses :
```
SELECT ShopInfo.ShopName, IFNULL( SUM(Income.Earnings), 0 ) AS NovEarn
FROM Income
JOIN ShopInfo ON Income.Shop=ShopName.ShopCode
WHERE (MONTH(Income.Date)=11 AND YEAR(Income.Date)=2014) OR Date IS NULL
GROUP BY Income.Shop
ORDER BY NovEarn DESC
``` | Filtering income by month has to be done BEFORE you group by show code. Use a where statement for that part. But then you will have the problem of keeping shops with no income. So you should select first from ShopInfo, and then left join on income, so that if there are no acceptable income, the shop will still be displayed.
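To illustrate why the filter has to run before grouping, here is a sketch using SQLite via Python (`MONTH()`/`YEAR()` are translated to `strftime()`, which SQLite uses instead; the data comes from the question):

```python
import sqlite3

# WHERE filters rows before GROUP BY aggregates them, which is why
# the November filter must be a WHERE, not a HAVING clause.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE ShopInfo (ShopCode TEXT, ShopName TEXT);
CREATE TABLE Income (Shop TEXT, Date TEXT, Earnings INTEGER);
INSERT INTO ShopInfo VALUES ('X','Xylophon'),('Y','Yellow'),('Z','Zoota'),('K','Kite');
INSERT INTO Income VALUES
  ('X','2014-10-17',4000),('Y',NULL,NULL),('Y','2014-11-09',5000),
  ('Y','2014-11-18',3000),('Y','2014-12-05',2000),('Z','2014-12-25',6000),
  ('K',NULL,NULL);
""")

rows = con.execute("""
SELECT s.ShopName, IFNULL(SUM(i.Earnings), 0) AS NovEarn
FROM Income i JOIN ShopInfo s ON i.Shop = s.ShopCode
WHERE (strftime('%m', i.Date) = '11' AND strftime('%Y', i.Date) = '2014')
   OR i.Date IS NULL
GROUP BY i.Shop
ORDER BY NovEarn DESC
""").fetchall()
print(rows)  # [('Yellow', 8000), ('Kite', 0)]
```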
```
SELECT ShopInfo.ShopName, IFNULL( SUM(Income.Earnings), 0 ) AS NovEarn
FROM ShopInfo
LEFT JOIN Income ON ShopName.ShopCode = Income.Shop
WHERE (MONTH(Income.Date)=11 AND YEAR(Income.Date)=2014)
GROUP BY ShopInfo.ShopCode
ORDER BY NovEarn DESC
``` | Filtering data after GROUP BY() or vice versa? | [
"mysql",
"sql",
"database"
] |
I'm trying to create the following pseudocode in MySQL. I have the following tables:
* specials
* product\_filter
The `product_filter` table only has two columns, `product_id` and `filter_id`
I'd like to come up with a SQL that:
1. Reads all product\_ids from the `specials` table and puts them in the `product_filter` table
2. When reading the product id from the `specials` table, it needs to look at the `price` column too
3. If the price is under $100, the filter id would be 1
4. If the price is between $100 and $500, the filter id would be 2
5. If the price is between $500 and $1000, the filter id would be 3
Here is what I have so far:
```
INSERT INTO product_filter (product_id,filter_id)
SELECT product_id,
FROM specials;
```
Any help is appreciated. | Try this, using a `CASE` to determine what the `filter_id` is (this will set anything that doesn't have a price in those boundaries equal to 0):
```
INSERT INTO product_filter (product_id,filter_id)
SELECT product_id,
(CASE WHEN price < 100 then 1
WHEN price >= 100 AND price < 500 then 2
WHEN price >= 500 AND price < 1000 then 3
ELSE 0 END) AS filter_id
FROM specials;
```
Here's a working [SQLFiddle](http://sqlfiddle.com/#!2/086a7/1). | try this query (you didn't specify for values higher than 1000 so it'll make filter\_id 4 if values are greater than 1000).
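If you'd like to verify the behaviour locally, here is a sketch of the same `INSERT ... SELECT` with `CASE`, using SQLite via Python (the sample prices are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE specials (product_id INTEGER, price REAL);
CREATE TABLE product_filter (product_id INTEGER, filter_id INTEGER);
INSERT INTO specials VALUES (1, 50), (2, 250), (3, 750), (4, 2000);
""")

# Bucket each product's price into a filter_id while copying the rows.
con.execute("""
INSERT INTO product_filter (product_id, filter_id)
SELECT product_id,
       CASE WHEN price < 100 THEN 1
            WHEN price >= 100 AND price < 500 THEN 2
            WHEN price >= 500 AND price < 1000 THEN 3
            ELSE 0 END
FROM specials
""")
rows = con.execute("SELECT * FROM product_filter ORDER BY product_id").fetchall()
print(rows)  # [(1, 1), (2, 2), (3, 3), (4, 0)]
```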
```
INSERT INTO product_filter (product_id,filter_id)
SELECT product_id,
CASE WHEN price < 100 THEN 1
WHEN price BETWEEN 100 AND 500 THEN 2
WHEN price BETWEEN 500 AND 1000 THEN 3
ELSE 4
END as filter_id
FROM specials;
``` | Insert into table values from another table only if they meet certain criteria | [
"mysql",
"sql",
"opencart"
] |
i have the following tables:
```
**entries**
entry_id | date | engineer | project
**entries_allowanes_map**
entry_id | allowance_id
**allowances**
allowance_id | allowance_name
```
I want to create a SELECT query that will give the following result:
```
entry_id | date | engineer | project | allowance_name1 | allowance_name2 | allowance_name_n...
```
The queries I have tried return a row for each allowance an entry has registered with. I want just one row with all allowances attached to it.
Thanks in advance | I would propose doing this with `group_concat()`. It doesn't put the values in separate columns, but it does put everything for a given entry on one row:
```
select e.entry_id, e.date, e.engineer, e.project,
group_concat(a.allowance_name) as allowances
from entries e join
     entries_allowances_map eam
     on e.entry_id = eam.entry_id join
     allowances a
     on eam.allowance_id = a.allowance_id
group by e.entry_id;
``` | Here is the query that I got:
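A runnable sketch of the `group_concat()` approach using SQLite via Python (table and column names follow the question; the sample data is invented):

```python
import sqlite3

# One output row per entry, with all allowance names collapsed into one column.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE entries (entry_id INTEGER, date TEXT, engineer TEXT, project TEXT);
CREATE TABLE entries_allowances_map (entry_id INTEGER, allowance_id INTEGER);
CREATE TABLE allowances (allowance_id INTEGER, allowance_name TEXT);
INSERT INTO entries VALUES (1, '2014-01-01', 'eng1', 'proj1');
INSERT INTO allowances VALUES (1, 'travel'), (2, 'meals');
INSERT INTO entries_allowances_map VALUES (1, 1), (1, 2);
""")

rows = con.execute("""
SELECT e.entry_id, e.date, e.engineer, e.project,
       group_concat(a.allowance_name) AS allowances
FROM entries e
JOIN entries_allowances_map eam ON e.entry_id = eam.entry_id
JOIN allowances a ON eam.allowance_id = a.allowance_id
GROUP BY e.entry_id
""").fetchall()
print(rows)  # one row per entry, allowances in a single comma-separated column
```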
It outputs your expected results in different columns:
```
SET @sql = NULL;
SELECT
GROUP_CONCAT(DISTINCT
CONCAT(
'(SELECT max(CASE WHEN AL.ALLOWANCE_ID = ''',
ALLOWANCE_ID,
''' THEN 1 END) AS `',
ALLOWANCE_ID, '` FROM entries_allowanes_map AL WHERE E.ENTRY_ID = AL.ENTRY_ID ) AS `',
ALLOWANCE_NAME, '`'
)
) INTO @sql
FROM allowances;
SET @sql
= CONCAT('SELECT E.ENTRY_ID, E.DATE, E.ENGINEER, E.PROJECT, ', @sql, '
FROM entries as e');
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
```
Here is the [SqlFiddle](http://sqlfiddle.com/#!2/1e01a5/2) | MySQL - Select Query with junction table to one row | [
"mysql",
"sql",
"select",
"join"
] |
This is table structure
```
id
1
2
3
4
5
6
```
I need result like this
```
id even odd
1 0 1
2 1 0
3 0 1
4 1 0
5 0 1
6 1 0
```
I tried
```
select id %2=0 then 1 else 0 end or id%2 <>0 then 1 else 0 odd
from table
``` | How about
```
select
id,
~id & 1,
id & 1
from t
``` | Take a look at the [`CASE`](http://technet.microsoft.com/en-us/library/ms181765.aspx) keyword. It works very similarly to what you're trying to do in your `SELECT` statement. In addition, if you want to select multiple columns, separate them with a comma. The [`OR`](http://technet.microsoft.com/en-us/library/ms188361.aspx) keyword is used for combining logical conditions in your query, not for specifying multiple columns.
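A quick sketch checking the bit trick with SQLite via Python:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1, 7)])

# ~id & 1 is 1 for even ids, id & 1 is 1 for odd ids (two's complement).
rows = con.execute("SELECT id, ~id & 1, id & 1 FROM t ORDER BY id").fetchall()
print(rows)  # [(1, 0, 1), (2, 1, 0), (3, 0, 1), (4, 1, 0), (5, 0, 1), (6, 1, 0)]
```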
An example of how you could use `CASE` in your query would be as follows:
```
SELECT id,
CASE WHEN id %2=0 THEN 1 ELSE 0 END AS Even,
[column2]
FROM [TableName]
``` | Even or odd in SQL | [
"sql",
"sql-server",
"t-sql",
"select"
] |
So im trying to add fields to a database. It is .mdb database, microsoft access.
The Name of the table is Contacts.
```
Dim con As New OleDb.OleDbConnection
Dim dbProvider As String
Dim dbSource As String
Dim ds As New DataSet
Dim da As OleDb.OleDbDataAdapter
Dim sql As String
dbProvider = "PROVIDER=Microsoft.Jet.OLEDB.4.0;"
dbSource = "Data Source= C:\Users\Owner\Desktop\Contacts.mdb"
con.ConnectionString = dbProvider & dbSource
con.Open()
sql = "INSERT INTO Contacts (FName, LName, Age, Address Line 1, Address Line 2, City, State, Zip, Home Phone, Work Phone, Email, Sex) VALUES (a, b, c,d,e,f,g,h,i,j,k)"
da = New OleDb.OleDbDataAdapter(Sql, con)
da.Fill(ds, "Contacts")
```
My Error is Syntax error in INSERT INTO statement. Which makes no sense, whatsoever. What am i doing wrong?
EDIT\*
I solved my original problem by adding [] around certain fields as suggested, thanks. Now I am getting...
No value given for one or more required parameters.
The database has a primary ID field that autoincrements, does this change anything? | As other answers have already explained you need to use square brackets around column names that contain spaces, but also you need to add a value for the fields otherwise you cannot execute the command.
I will try to show a complete example
```
Dim dbProvider = "PROVIDER=Microsoft.Jet.OLEDB.4.0;"
Dim dbSource = "Data Source= C:\Users\Owner\Desktop\Contacts.mdb"
Dim sql = "INSERT INTO Contacts (FName, LName, Age, " & _
"[Address Line 1], [Address Line 2], City, State, Zip, " & _
"[Home Phone], [Work Phone], Email, Sex) " & _
"VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)"
Using con = New OleDb.OleDbConnection(dbProvider & dbSource)
Using cmd = new OleDb.OleDbCommand(sql, con)
con.Open()
cmd.Parameters.AddWithValue("@p1", "Value For FName")
cmd.Parameters.AddWithValue("@p2", "Value For LName")
cmd.Parameters.AddWithValue("@p3", Convert.ToInt32("Value For Age"))
.... and so on for the other parameters .....
cmd.ExecuteNonQuery()
End Using
End Using
```
In this example I have inserted 12 placeholders for the parameters (?) and then added the first 3 parameters out of 12 required. Note that with OleDb the parameter collection still requires to add the parameters with a name (*@pX*), but when executing the query the parameter value is picked following the same order of the placeholder.
Also I have used the Using statement to close and dispose the disposable objects like the connection and the command.
Finally, an Insert query is normally executed using ExecuteNonQuery from the OleDbCommand and there is no need to use an OleDbAdapter and call Fill to load a DataSet when no SELECT query is executed | Put column names with spaces between squares brackets []
For example `[Address Line 1]`
Cheers | Inserting MS Access Row Into Database using vb.net | [
"sql",
"database",
"vb.net",
"oledbcommand"
] |
I am looking for a query which fetches me the data that is different compared to the previous row,
A sample code (with table creation and data)
```
create table #temp
(id int, eid int, name char(10),estid int, ecid int, epid int, etc char(5) )
insert into #temp values (1,1,'a',1,1,1,'a')
insert into #temp values (2,1,'a',1,1,1,'a')
insert into #temp values (3,1,'a',2,1,1,'a')
insert into #temp values (4,1,'a',1,1,1,'a')
insert into #temp values (5,1,'a',1,1,1,'a')
insert into #temp values (6,1,'a',1,2,1,'a')
insert into #temp values (7,1,'a',1,1,1,'a')
insert into #temp values (8,1,'a',2,1,1,'a')
insert into #temp values (9,1,'a',1,1,1,'a')
insert into #temp values (10,1,'a',1,1,1,'a')
insert into #temp values (11,2,'a',1,1,1,'a')
insert into #temp values (12,2,'a',1,1,1,'a')
insert into #temp values (13,2,'a',2,1,1,'a')
insert into #temp values (14,2,'a',1,1,1,'a')
insert into #temp values (15,2,'a',1,1,1,'a')
insert into #temp values (16,2,'a',1,2,1,'a')
insert into #temp values (17,2,'a',1,1,1,'a')
insert into #temp values (18,2,'a',2,1,1,'a')
insert into #temp values (19,2,'a',1,1,1,'a')
insert into #temp values (20,2,'a',1,1,1,'a')
```
I tried a few ways of getting the data in the way that I expected
```
SELECT * INTo #Temp_Final
FROM #temp
WHERE #temp.%%physloc%%
NOT IN (SELECT Min(b.%%physloc%%)
FROM #temp b
GROUP BY eid,name,estid,ecid,epid,etc)
ORDER BY id
SELECT * FROM #temp WHERE id not in (SELECT id FROM #Temp_Final) ORDER BY id
```
But I wasn't getting the result I expected...
This is how the result needs to be
```
select * from #temp where id in (1,3,4,6,7,8,9,11,13,14,16,17,18,19)
``` | You can do this with a simple self-join and appropriate comparison:
```
select t.*
from #temp t left outer join
#temp tprev
on t.id = tprev.id + 1
where tprev.id is null or
      t.eid <> tprev.eid or
      t.name <> tprev.name or
t.estid <> tprev.estid or
t.ecid <> tprev.ecid or
t.epid <> tprev.epid or
t.etc <> tprev.etc;
```
This assumes that the ids are sequential with no gaps. If the ids are not, you can get the previous id using a correlated subquery or the `lag()` function.
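A runnable check of the self-join idea with the question's data, using SQLite via Python (`#temp` becomes a plain table here; the comparison covers `eid` as well, since rows 10 and 11 differ only in `eid`):

```python
import sqlite3

# Keep a row when it is the first one or when any column differs from
# the row with id - 1.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tmp (id INT, eid INT, name TEXT, estid INT, ecid INT, epid INT, etc TEXT)")
data = [
    (1,1,'a',1,1,1,'a'), (2,1,'a',1,1,1,'a'), (3,1,'a',2,1,1,'a'), (4,1,'a',1,1,1,'a'),
    (5,1,'a',1,1,1,'a'), (6,1,'a',1,2,1,'a'), (7,1,'a',1,1,1,'a'), (8,1,'a',2,1,1,'a'),
    (9,1,'a',1,1,1,'a'), (10,1,'a',1,1,1,'a'), (11,2,'a',1,1,1,'a'), (12,2,'a',1,1,1,'a'),
    (13,2,'a',2,1,1,'a'), (14,2,'a',1,1,1,'a'), (15,2,'a',1,1,1,'a'), (16,2,'a',1,2,1,'a'),
    (17,2,'a',1,1,1,'a'), (18,2,'a',2,1,1,'a'), (19,2,'a',1,1,1,'a'), (20,2,'a',1,1,1,'a'),
]
con.executemany("INSERT INTO tmp VALUES (?,?,?,?,?,?,?)", data)

kept = [r[0] for r in con.execute("""
SELECT t.id
FROM tmp t LEFT JOIN tmp tprev ON t.id = tprev.id + 1
WHERE tprev.id IS NULL
   OR t.eid <> tprev.eid OR t.name <> tprev.name OR t.estid <> tprev.estid
   OR t.ecid <> tprev.ecid OR t.epid <> tprev.epid OR t.etc <> tprev.etc
ORDER BY t.id
""")]
print(kept)  # [1, 3, 4, 6, 7, 8, 9, 11, 13, 14, 16, 17, 18, 19]
```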
Your title says "delete" but the question seems to just want the list of such rows. You can phrase this as a `delete` query if you need to. | For SQL Server 2012 ([SQL Fiddle](http://sqlfiddle.com/#!6/0c09c/1))
```
WITH CTE
AS (SELECT *,
LAG(eid) OVER (ORDER BY id) AS prev_eid,
LAG(name) OVER (ORDER BY id) AS prev_name,
LAG(estid) OVER (ORDER BY id) AS prev_estid,
LAG(ecid) OVER (ORDER BY id) AS prev_ecid,
LAG(epid) OVER (ORDER BY id) AS prev_epid,
LAG(etc) OVER (ORDER BY id) AS prev_etc
FROM #temp)
DELETE FROM CTE
WHERE EXISTS (SELECT eid,
name,
estid,
ecid,
epid,
etc
INTERSECT
SELECT prev_eid,
prev_name,
prev_estid,
prev_ecid,
prev_epid,
prev_etc)
``` | Deleting records that are similar with previous one SQL Server | [
"sql",
"sql-server"
] |
I've tried searching for this and I'm not quite sure how to phrase exactly what I'm after, but I'll show what I've got and what I want. Also my first question here so I apologise if I get anything wrong!
Firstly, I have this link-table with foreign keys:
```
|ID|Value|
+--+-----+
|1 | 70|
|1 | 130|
|2 | 60|
```
Now, I'm trying to show me all ID's in this table where `Value!=70`. I've tried;
```
SELECT * FROM table WHERE Value != 70
```
This removes the 70 value, and now will show rows `1:130` and `2:60`. But how would I go about excluding all values in the column where this has matched. E.g. if I say don't show me any ID's with the value 70 associated with them, I should only get the row with the ID of 2 out. I've tried;
```
SELECT DISTINCT * FROM table WHERE Value != 70
```
and
```
SELECT * FROM table WHERE Value != 70 GROUP BY ID
```
and
```
SELECT DISTINCT * FROM table WHERE Value != 70 GROUP BY ID
```
but none of these work. Like I said I'm not sure exactly how to word what I'm after, but any help will be appreciated. | ```
Select * From table
where id Not In
(Select distinct id
From table
Where value = 70)
```
or
```
Select * From table t
where Not exists
(Select * From table
where id = t.id
and value = 70)
``` | You can use a subquery and [`NOT IN`](http://www.w3resource.com/mysql/comparision-functions-and-operators/not-in.php) condition
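Both forms can be sanity-checked quickly with SQLite via Python (using the table data from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INT, value INT)")
con.executemany("INSERT INTO t VALUES (?, ?)", [(1, 70), (1, 130), (2, 60)])

# NOT IN form: exclude every id that ever had value 70.
not_in = con.execute("""
SELECT * FROM t
WHERE id NOT IN (SELECT DISTINCT id FROM t WHERE value = 70)
""").fetchall()

# Correlated NOT EXISTS form: same result.
not_exists = con.execute("""
SELECT * FROM t t1
WHERE NOT EXISTS (SELECT 1 FROM t WHERE id = t1.id AND value = 70)
""").fetchall()
print(not_in, not_exists)  # [(2, 60)] [(2, 60)]
```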
```
SELECT tb.* FROM table tb
WHERE tb.ID NOT IN
(
SELECT a.ID FROM table a where a.Value = 70
)
``` | Exclude rows based on column value when second column value is found | [
"mysql",
"sql"
] |
I am looking to simplify this confusing left outer join statement into something easier to read by possibly using a where-statement
```
Select approved.orderId
From (select distinct orderId from OrderLine where isApproved = 1 and isDismissed = 0) approved
left outer join
(select distinct orderId from OrderLine where isApproved = 0 and isDismissed = 0) unapproved
on approved.orderId = unapproved.orderId
Where unapproved.orderId Is Null
```
To something like:
```
Select orderId
From (select distinct orderId from OrderLine where (isApproved = 1 and isDismissed = 0) OR (isApproved = 0 and isDismissed = 0 and orderId is null))
``` | This should do:
```
SELECT DISTINCT O.orderId
FROM OrderLine O
WHERE O.isDismissed = 0
AND O.isApproved = 1
AND NOT EXISTS( SELECT 1 FROM OrderLine
WHERE isApproved = 0
AND isDismissed = 0
AND orderId = O.OrderId)
``` | `except` should work pretty fast for your usecase, and remove the need for both joins and inner queries:
```
SELECT DISTINCT orderId
FROM orderline
WHERE isApproved = 1 and isDismissed = 0
EXCEPT
SELECT orderId
FROM orderline
WHERE isApproved = 0 and isDismissed = 0
``` | Convert left outer join to where statement | [
"sql",
"sql-server"
] |
I'm trying to make a SQL query that will search for user id and populate the query with the username.
These are my tables:
* Table Names: 'users' and 'schedule'

This is how I want it to look like, where 'schedule' table now shows the username instead of the user's ID
 | This is the query you are looking for:
```
SELECT s.REFID, s.jobnum, s.customer, u1.username AS engineer, u2.username AS sales
FROM schedule s, users u1, users u2
WHERE s.engineer=u1.id
AND s.sales=u2.id
```
You need to reference the `users` table two separate times, since you are checking in one sub-query for the engineer's username, and then checking in a separate sub-query for the salesperson's username.
Here is a link to an [SQLFiddle](http://sqlfiddle.com/#!2/84bf8f/1) that shows the result of the query's execution. It matches up with what you were looking for. I hope this helps. | Following Query will give you the expected result:
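A runnable sketch of joining the `users` table twice, using SQLite via Python (the usernames are invented, since the original data was shown only in screenshots):

```python
import sqlite3

# Each reference to users gets its own alias: u1 resolves the engineer id,
# u2 resolves the sales id.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE users (id INTEGER, username TEXT);
CREATE TABLE schedule (refid INTEGER, jobnum TEXT, customer TEXT,
                       engineer INTEGER, sales INTEGER);
INSERT INTO users VALUES (1, 'alice'), (2, 'bob');
INSERT INTO schedule VALUES (10, 'J1', 'acme', 1, 2);
""")

rows = con.execute("""
SELECT s.refid, s.jobnum, s.customer,
       u1.username AS engineer, u2.username AS sales
FROM schedule s
JOIN users u1 ON s.engineer = u1.id
JOIN users u2 ON s.sales = u2.id
""").fetchall()
print(rows)  # [(10, 'J1', 'acme', 'alice', 'bob')]
```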
```
SELECT
s.refid as refid,
s.jobnum as jobnum,
s.customer as customer,
u_engg.username as engineer,
u_sales.username as sales
FROM
user u_engg join schedule s on u.id = s.engineer join
user u_sale on u_sale.id = s.sales
``` | MySQL Query phpmyadmin | [
"mysql",
"sql"
] |
I have a script inserting a formula into a field in my database. The script looks like this:
```
insert into Export..DataDictionary
values(100001, 'Modifier 3', 'Modifier 3', 476, 'IsNull(Modifier_3, '''')', 'Char', NULL, NULL, 'Y', NULL, NULL, NULL, NULL ,'N', NULL, 'Y', 'Modifier_3')
```
In my test server, it comes out correctly and the string put in the field is: IsNull(Modifier\_3, '')
However, when I run the same insert on the client, the string inserts with one less apostrophe and comes out as: IsNull(Modifier\_3, ')
What could be the issue here? | When I run into situations with quotes inside strings I use QUOTENAME.
In your case I would do like this
```
DECLARE @item As varchar(100) = 'IsNull(Modifier_3, {0})';
SET @item = REPLACE(@item, '{0}', QUOTENAME('', ''''));
INSERT INTO Export..DataDictionary
VALUES (..., @item, ...)
```
You can of course make it a one-liner if you like
```
REPLACE('IsNull(Modifier_3, {0})', '{0}', QUOTENAME('', ''''))
If you want to end up with two apostrophes, the right way, I believe, is to put six apostrophes in your query. I really don't know how it works on your own server.
```
insert into Export..DataDictionary
values(100001, 'Modifier 3', 'Modifier 3', 476, 'IsNull(Modifier_3, '''''')', 'Char', NULL, NULL, 'Y', NULL, NULL, NULL, NULL ,'N', NULL, 'Y', 'Modifier_3')
``` | SQL inserting one less apostrophe in to client than on my test server | [
"sql",
"string",
"t-sql",
"insert",
"apostrophe"
] |
I have a postgres table that looks something like this:
```
proposal_id | nih_budget_start | nih_budget_end | nsf_start_date | nsf_end_date | award_amount
proposal_A | 03/01/2000 | 12/31/2000 | | | 10,000
proposal_B | | | 08/01/2005 | 07/31/2009 | 5,000,000
proposal_C | 06/27/2012 | 11/17/2013 | | | 420,000
```
The dates have the `date` data type.
I'd like to create a view that tells me each year the proposal was funded, and what the average award amount was. So, the view might look something like this (option 1):
```
proposal_id | start_year | end_year | average_award
proposal_A | 2000 | 2000 | 10,000
proposal_B | 2005 | 2009 | 1,000,000
proposal_C | 2012 | 2013 | 210,000
```
Or -- even better -- this (option 2):
```
proposal_id | year | award
proposal_A | 2000 | 10,000
proposal_B | 2005 | 1,000,000
proposal_B | 2006 | 1,000,000
proposal_B | 2007 | 1,000,000
proposal_B | 2008 | 1,000,000
proposal_B | 2009 | 1,000,000
proposal_C | 2012 | 210,000
proposal_C | 2023 | 210,000
```
Also, it might be nice to have the award amount prorated for partial-year funding, but this isn't completely necessary.
Based on an answer suggested below, I'm currently doing this, which seems to be working as expected to get option 1 above:
```
CREATE VIEW award_per_year AS
select t1.proposal_id,t1.START_DATE,t1.END_DATE,
(t1.adjusted_award_amount/((t1.END_DATE - t1.START_DATE) + 1.)) avg_award
from
(select t2.proposal_id,
(extract(year from START_DATE)) START_DATE,
(extract(year from END_DATE)) END_DATE,
t2.adjusted_award_amount from
(select proposal_id,
case when nih_budget_start is not NULL then nih_budget_start else nsf_start_date end start_date,
case when nih_budget_end is not NULL then nih_budget_end else nsf_end_date end end_date,
adjusted_award_amount from proposal)t2)t1
``` | Option 1: use [`COALESCE`](http://www.postgresql.org/docs/current/interactive/functions-conditional.html#FUNCTIONS-COALESCE-NVL-IFNULL)
```
SELECT proposal_id, start_year, end_year
, award_amount/((end_year - start_year) + 1.0) AS avg_award
FROM (
SELECT proposal_id
, extract(year FROM COALESCE(nih_budget_start, nsf_start_date))::int AS start_year
, extract(year FROM COALESCE(nih_budget_end, nsf_end_date))::int AS end_year
, award_amount
FROM proposal
) sub;
```
Option 2: use [`generate_series()`](http://www.postgresql.org/docs/current/interactive/functions-srf.html)
```
SELECT proposal_id
, generate_series(start_year, end_year) AS year
, award_amount/((end_year - start_year) + 1.0) AS avg_award
FROM (
SELECT proposal_id
, extract(year FROM COALESCE(nih_budget_start, nsf_start_date))::int AS start_year
, extract(year FROM COALESCE(nih_budget_end, nsf_end_date))::int AS end_year
, award_amount
FROM proposal
) sub;
```
[**-> SQLfiddle**](http://www.sqlfiddle.com/#!15/010fd/3) | From your clarification it seems it would be better for you to get a list like below:
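Where `generate_series()` is not available (for instance outside PostgreSQL), the same per-year expansion can be done with a recursive CTE. A sketch with SQLite via Python, starting from already-extracted years:

```python
import sqlite3

# Expand each proposal into one row per funded year, splitting the award
# evenly across the years (hypothetical pre-extracted start/end years).
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE proposal
               (proposal_id TEXT, start_year INT, end_year INT, award_amount REAL)""")
con.executemany("INSERT INTO proposal VALUES (?,?,?,?)",
                [('proposal_A', 2000, 2000, 10000),
                 ('proposal_B', 2005, 2009, 5000000)])

rows = con.execute("""
WITH RECURSIVE per_year(proposal_id, year, end_year, avg_award) AS (
    SELECT proposal_id, start_year, end_year,
           award_amount / (end_year - start_year + 1.0)
    FROM proposal
    UNION ALL
    SELECT proposal_id, year + 1, end_year, avg_award
    FROM per_year
    WHERE year < end_year
)
SELECT proposal_id, year, avg_award
FROM per_year
ORDER BY proposal_id, year
""").fetchall()
print(rows)
```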
```
proposal_ID | years_funded | average_award
proposal_A | 2000 | 10,000
proposal_B | 2005 | 1,000,000
proposal_B | 2006 | 1,000,000
proposal_B | 2007 | 1,000,000
proposal_B | 2008 | 1,000,000
proposal_B | 2009 | 1,000,000
proposal_C | 2012 | 210,000
proposal_C | 2013 | 210,000
```
On the front end you can then use this list to display year wise funding of proposal. Please confirm.
Based on your input here is a query which can achieve first output result set you want:
```
SELECT proposal_id,
TO_CHAR(COALESCE(nih_budget_start, nsf_start_date),'YYYY') AS start_year,
TO_CHAR(COALESCE(nih_budget_end, nsf_end_date),'YYYY') AS end_year,
award_amount/(TO_CHAR(COALESCE(nih_budget_end, nsf_end_date),'YYYY')::INT - TO_CHAR(COALESCE(nih_budget_start, nsf_start_date),'YYYY')::INT+1) AS average_award
FROM Proposals
```
Following query can achieve the second result set you need using recursive CTE:
```
WITH RECURSIVE dates AS
(
SELECT proposal_id,nih_budget_start,nsf_start_date,nih_budget_end,nsf_end_date, TO_CHAR(COALESCE(nih_budget_start, nsf_start_date),'YYYY')::INT AS Dt,
award_amount/(TO_CHAR(COALESCE(nih_budget_end, nsf_end_date),'YYYY')::INT - TO_CHAR(COALESCE(nih_budget_start, nsf_start_date),'YYYY')::INT+1) AS average_award
FROM proposals
UNION ALL
SELECT proposal_id,nih_budget_start,nsf_start_date,nih_budget_end,nsf_end_date, d1.dt + 1, average_award FROM dates d1
WHERE d1.dt < TO_CHAR(COALESCE(nih_budget_end, nsf_end_date),'YYYY')::INT
)
SELECT proposal_id, dt AS year, average_award FROM dates d ORDER BY proposal_id,dt
```
See the code at [**SQLFiddle**](http://www.sqlfiddle.com/#!15/5e144/22) | Enumerate years between two dates with calculated values | [
"sql",
"database",
"postgresql",
"view",
"generate-series"
] |
1) The below query is run over 1 million records minimum. Right now it returns results in `70ms to 1s`. Is that performance good? What is the ideal time in which the query should return results? (MySQL InnoDB; all columns are in the same table.)
```
SELECT
*
FROM
rgb
WHERE
((tcount = 16 AND r1 = 3 AND r2 = 5
AND r3 = 8)
OR (r1 = 8 AND r2 = 5)
OR (r1 = 5 AND r2 = 8)
OR (r1 = 3 AND r2 = 8)
OR (r1 = 5 AND r2 = 3)
OR (r1 = 3 AND r2 = 5)
OR (r1 = 8 AND r2 = 3))
order by case
when
(tcount = 16 AND r1 = 3 AND r2 = 5
AND r3 = 8)
then
1
else 2
end
```
2) Can I have a lengthy condition in my WHERE clause? The entire data is to be scanned from a single table. | I would try to have a covering index for your query of ( r1, r2, r3, tcount )
and have the WHERE clause something like
```
where
r1 in ( 3, 5, 8 )
AND r2 in ( 3, 5, 8 )
AND r1 != r2
OR ( r1 = 3 and r2 = 5 and r3 = 8 and tcount = 16 )
``` | why not use this simpler one ? instead of using a bunch of `OR`s
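A quick equivalence check of the rewritten pair predicate against the original `OR` list, using SQLite via Python (`r3`/`tcount` are omitted since only the pair logic is being compared):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE rgb (r1 INT, r2 INT)")
con.executemany("INSERT INTO rgb VALUES (?, ?)",
                [(a, b) for a in range(1, 10) for b in range(1, 10)])

# The six OR'ed pairs from the question...
pairs = con.execute("""
SELECT r1, r2 FROM rgb
WHERE (r1 = 8 AND r2 = 5) OR (r1 = 5 AND r2 = 8) OR (r1 = 3 AND r2 = 8)
   OR (r1 = 5 AND r2 = 3) OR (r1 = 3 AND r2 = 5) OR (r1 = 8 AND r2 = 3)
ORDER BY r1, r2
""").fetchall()

# ...are exactly the ordered pairs of distinct values drawn from {3, 5, 8}.
compact = con.execute("""
SELECT r1, r2 FROM rgb
WHERE r1 IN (3, 5, 8) AND r2 IN (3, 5, 8) AND r1 <> r2
ORDER BY r1, r2
""").fetchall()
print(pairs == compact, len(pairs))  # True 6
```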
```
SELECT
*
FROM
rgb
WHERE
    ((tcount = 16 AND r1 = 3 AND r2 = 5
    AND r3 = 8)
    OR (r1, r2) IN ((8,5),(5, 8), (3, 8),(5,3),(3,5),(8,3)))
order by case
when
(tcount = 16 AND r1 = 3 AND r2 = 5
AND r3 = 8)
then
1
else 2
end
``` | SQL query optimization for 1million record | [
"mysql",
"sql"
] |
I am getting TF400997 when I am trying to configure TFS Scheduled backup using TFS Admin console. Here is my setup:
SQL Server name: Machine1
SQL Server service running under: sqlserverservice@domain.com (which is a domain user and local admin on Machine1 and Machine2)
SQL Server agent service running under: sqlserverservice@domain.com
TFS server name: Machine2
TFS service running under: tfsservice@domain.com (local admin on Machine2 and Machine1)
Network backup path: `\\Machine2\Backup` (gave full permission to sqlserverservice@domain.com on the Backup folder)
This is what I am getting while verifying the settings and it does not allow me to setup scheduled backup:

I Googled this already and it looks like my setup should work. Can anyone please help?
* configure your SQL Account to run under `domain\tfsservice` instead of `tfsservice@domain.com`
* configure your TFSBackup | TF400997: Configuring Scheduled backup with TFS 2013 | [
"sql",
"tfs",
"backup"
] |
I have a problem creating a field named `status` of type BOOLEAN with length 1.

When I press the Go button, this message appears:
```
SQL query:
ALTER TABLE `abcd` ADD `status` BOOLEAN( 1 ) BINARY NOT NULL DEFAULT NULL
MySQL said: Documentation
#1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '(1) BINARY NOT NULL DEFAULT NULL' at line 1
```
I can't solve the problem. I don't know where the error is occurring.
Please Help me to solve the problem.
Thank you. | ```
ALTER TABLE `abcd`
ADD COLUMN `status` BIT NOT NULL DEFAULT 0
```
1. If you want to create only a single bit field use `BIT`. You had 2 data types in your statement.
2. Your default value was `null` but you don't want to allow that: `NOT NULL`. Use `0` as default value instead. | The correct syntax for [`ALTER TABLE`](http://dev.mysql.com/doc/refman/5.1/en/alter-table.html) requires `COLUMN` after `ADD`
```
ALTER TABLE `abcd` ADD COLUMN ...
``` | Error in creatinng a field in the table | [
"mysql",
"sql"
] |
Table Structure
```
Name qty
Milk 12
Bread 10
Pen 100
Pencl 100
```
My sql code to get the maximum qty
```
SELECT Name, MAX(qty) FROM item
```
But it is giving Milk 100 as the answer. How do I get the correct answer?
```
SELECT Name FROM item
WHERE qty = (SELECT MAX(qty) FROM item)
``` | ```
SELECT Name,qty FROM item ORDER BY qty DESC LIMIT 1
```
**[Fiddle](http://sqlfiddle.com/#!2/3c66e/1)** | getting mysql max() value | [
"mysql",
"sql"
] |
I Have 2 tables
```
Users
--------------------------------------------
Uid | Name |
--------------------------------------------
1 | sdfa |
2 | dsf |
```
And Second Table as
```
Log
--------------------------------------------
Uid | log | size | file | time
--------------------------------------------
1 | dea | 2 | sadf | timestamp
1 | dea | 2 | sadf | timestamp
2 | eff | 25 | sadf | timestamp
```
The log table gets updated about **3 times every 5 seconds** per user.

I need an optimized query to get name, log, size, file and
time per user, with only the latest updated record for each user.
That is, only each user's most recent update should be shown, not all of them.
I tried
```
SELECT DISTINCT userid, log, size,file,time FROM log
WHERE userid IN (SELECT uid FROM users) As b ORDER BY time ASC
```
The problem here is that this doesn't return just the latest record per user; the ORDER BY only sorts the whole result set.
So I tried,
```
SELECT a.userid,a.name, b.log, b.size,b.file,b.time
FROM users a LEFT JOIN log b ON a.userid = b.userid
ORDER BY b.time ASC
```
But the interviewer told this is also wrong, please advice. | There's no distinct `rank` function in MYSQL. But you could utilize a variable in that matter.
Check on [SQLFIDDLE HERE](http://sqlfiddle.com/#!2/50105/1) for the sample. The time stamp is not formatted, which you may do on your own.
Table users
```
UID NAME
1 a
2 b
```
Table logs
```
UID LOG SIZE FILE TIME
1 lg1 2 f1 20140220173550
1 lg2 2 f2 20140220173551
2 lg3 25 f3 20140220173552
```
Check the **execution plan** for each of the answers to find the fastest one for your case.
```
SELECT x.uid, u.name, x.log, x.size, x.file,
x.time FROM users u INNER JOIN
(
SELECT l.uid, l.log, l.size,
l.file, l.time,
@curRank := @curRank + 1 AS rank
FROM logs l, (SELECT @curRank := 0) r
ORDER BY rank DESC
) AS x
ON u.uid = x.uid
GROUP BY u.uid
ORDER BY x.time DESC
```
Results
```
UID NAME LOG SIZE FILE TIME
2 b lg3 25 f3 20140220173552
1 a lg2 2 f2 20140220173551
``` | This will give you the latest log entry for each user. If your tables are properly indexed, it should be fast.
```
SELECT a.*
FROM log a
INNER JOIN
(
SELECT uid, MAX(timestamp) max_date
FROM log
GROUP BY uid
) b ON a.uid = b.uid AND
a.timestamp = b.max_date
join users u on u.uid = a.uid
``` | SQL query for to get values from 2 combined table at very fast rate | [
"mysql",
"sql"
] |
I am trying to find which values of a column in one table are not present in a column of another table.
e.g
```
Product_Name(tb_new_purchase)
1.car
2.bus
3.truck
Product_Name(tb_new_product_Name_id)
1.car
```
I need to select bus and truck.
Here is my code
```
SELECT Product_Name
FROM tb_new_purchase
WHERE NOT EXISTS (SELECT Product_Name FROM tb_new_product_Name_id )
```
But its not returning any values.where i made error? thanks.... | I know there are already 5 answers here....but just to throw this out here, you can do this as a join and not a where exists. Subqueries can really hinder performance and runtime. In MSSQL, the runtimes should be about the same...is MySQL, this join syntax works far better.
```
Select a.product_name
from tb_new_purchase a
left join tb_new_product_Name_id b on a.product_name = b.product_name
where b.product_name is null
```
Using a left join here will produce a null in the b.product\_name column whenever the product\_name is not in the b table. The `where b.product_name is null` condition then filters out all the lines that do have an entry in the 'b' table.
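A runnable sketch of this anti-join pattern, using SQLite via Python with the question's data (identifiers lower-cased for the demo):

```python
import sqlite3

# Rows with no match in the right table get NULL there, and the
# IS NULL filter keeps exactly those rows.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tb_new_purchase (product_name TEXT);
CREATE TABLE tb_new_product_name_id (product_name TEXT);
INSERT INTO tb_new_purchase VALUES ('car'), ('bus'), ('truck');
INSERT INTO tb_new_product_name_id VALUES ('car');
""")

rows = con.execute("""
SELECT a.product_name
FROM tb_new_purchase a
LEFT JOIN tb_new_product_name_id b ON a.product_name = b.product_name
WHERE b.product_name IS NULL
ORDER BY a.product_name
""").fetchall()
print(rows)  # [('bus',), ('truck',)]
```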
```
SELECT Product_Name
FROM tb_new_purchase
WHERE Product_Name NOT IN(SELECT Product_Name FROM tb_new_product_Name_id)
```
or
```
SELECT Product_Name
FROM tb_new_purchase
WHERE NOT EXISTS (SELECT Product_Name FROM tb_new_product_Name_id
WHERE tb_new_product_Name_id = tb_new_purchase.Product_Name )
``` | how to use NOT EXISTS in sql server | [
"sql",
"sql-server"
] |
I have a bunch of strings that should have been stored as value pairs but were not. Now I need to replace every other comma with a semicolon to make them pairs. Hoping to find a simple way of doing this, but there might not be one.
ex:
> -1328.89,6354.22,-1283.94,6242.96,-1172.68,6287.91,-1217.63,6399.18
should be:
> -1328.89,6354.22;-1283.94,6242.96;-1172.68,6287.91;-1217.63,6399.18 | ```
create function f_tst(@a varchar(100)) -- use right size of field
returns varchar(100) -- make sure you use the right size of field
begin
declare @pos int = charindex(',', @a) + 1
;while 0 < charindex(',', @a, @pos)
select @a = stuff(@a, charindex(',', @a, @pos), 1, ';'),
@pos = charindex(',', @a, charindex(',', @a, @pos + 1)) + 1
return @a
end
go
declare @a varchar(100) = '-1328.89,6354.22,-1283.94,6242.96,-1172.68,6287.91,-1217.63,6399.18'
select dbo.f_tst(@a)
```
Or in your example
```
update <table>
set <field> = dbo.f_tst(<field>)
``` | Surely not so simple as you want, but a CHARINDEX/SUBSTRING solution:
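If the transformation can be done outside SQL (for instance in a one-off cleanup script), the same fix is only a couple of lines of Python. A hedged sketch:

```python
def pair_up(s: str) -> str:
    """Replace every second comma with a semicolon, turning a flat
    coordinate list into ';'-separated 'x,y' pairs."""
    parts = s.split(',')
    return ';'.join(','.join(parts[i:i + 2]) for i in range(0, len(parts), 2))

src = '-1328.89,6354.22,-1283.94,6242.96,-1172.68,6287.91,-1217.63,6399.18'
print(pair_up(src))
# -1328.89,6354.22;-1283.94,6242.96;-1172.68,6287.91;-1217.63,6399.18
```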
```
Declare @input nvarchar(max) = '-1328.89,6354.22,-1283.94,6242.96,-1172.68,6287.91,-1217.63,6399.18'
Declare @i int = 0, @t int = 0, @isComma bit = 1
Declare @output nvarchar(max) = ''
Select @i = CHARINDEX(',', @input)
While (@i > 0)
Begin
Select @output = @output + SUBSTRING(@input, @t + 1, @i - @t - 1) + CASE @isComma WHEN 1 THEN ',' ELSE ';' END
Select @t = @i
Select @i = CHARINDEX(',', @input, @i + 1), @isComma = 1 - @isComma
End
Select @output = @output + SUBSTRING(@input, @t + 1, 1000)
Select @output
``` | SQL replace every other comma with a semicolon | [
"sql",
"sql-server",
"replace"
] |
I am trying to write a query that will return the number of "proactive call cases" made by each employee per month as well as the total number of all cases. Essentially, I am looking to return 3 columns; an EmployeeName, a count of the Proactive Calls for that month, and a count of the total calls that month. To try and accomplish this I used a subquery. Here is what I have so far:
```
Select OpenedByName AS EmployeeName, Count(OpenedByName) As NumberOfProactiveCallsMONTH,
(Select Count(OpenedByName)
From table1
Where OpenDate Between '2/1/2014' and '3/1/2014' Group By OpenedByName) AS
TotalTicketsMONTH
From table1
Where OpenDate Between '2/1/2014' and '3/1/2014' AND ProblemType = 17
Group By OpenedByName
```
After running the query I got the error saying that:
```
"Msg 512, Level 16, State 1, Procedure SalesMTDSubQuery, Line 7 Subquery returned
more than 1 value. This is not permitted when the subquery follows =, !=, <, <= ,
>, >= or when the subquery is used as an expression."
```
I think it returned this error due to the GroupBy statement that I added in the subquery. When I remove the GroupBy it will just return the same value for every employee. I'm fairly new to using SQL so I'm not really sure of a good way around this. Any help would be much appreciated.
Thanks! | How about something like
```
SELECT OpenedByName AS EmployeeName,
SUM(CASE WHEN ProblemType = 17 THEN 1 ELSE 0 END) AS NumberOfProactiveCallsMONTH,
COUNT(1) TotalTicketsMONTH
FROM Table1
Where OpenDate Between '2/1/2014' and '3/1/2014'
GROUP BY OpenedByName
```
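The conditional-aggregation trick above (a SUM over a CASE expression next to a plain COUNT) can be sanity-checked outside SQL Server. A small sketch using Python's sqlite3 with invented ticket data (table and values are hypothetical, not from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Table1 (OpenedByName TEXT, ProblemType INTEGER)")
conn.executemany(
    "INSERT INTO Table1 VALUES (?, ?)",
    [("Ann", 17), ("Ann", 3), ("Ann", 17), ("Bob", 5)],
)

# SUM(CASE ...) counts only the proactive (type 17) tickets,
# while COUNT(1) counts every ticket in the group.
rows = conn.execute("""
    SELECT OpenedByName,
           SUM(CASE WHEN ProblemType = 17 THEN 1 ELSE 0 END) AS Proactive,
           COUNT(1) AS Total
    FROM Table1
    GROUP BY OpenedByName
    ORDER BY OpenedByName
""").fetchall()
print(rows)  # [('Ann', 2, 3), ('Bob', 0, 1)]
```

Both counts come from a single pass over the table, which is the advantage over the subquery approach.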
The error you received was due to the fact that when using a sub query as a column, it cannot return more than 1 value.
Now the way you had it structured, it would have returned all the `COUNT` values per OpenedByName, instead of a single value per OpenedBy.
If you really wish to use your query, you need to limit the lookup in the column subquery to the actual OpenedByName. So something like
```
Select OpenedByName AS EmployeeName, Count(OpenedByName) As NumberOfProactiveCallsMONTH,
(Select Count(OpenedByName)
From table1 t
Where OpenDate Between '2/1/2014' and '3/1/2014'
AND t.OpenedByName = t1.OpenedByName) AS
TotalTicketsMONTH
From table1 t1
Where OpenDate Between '2/1/2014' and '3/1/2014' AND ProblemType = 17
Group By OpenedByName
``` | You can use like this
```
Select a.OpenedByName AS EmployeeName, Count(a.OpenedByName) As NumberOfProactiveCallsMONTH,
b.TotalTicketsMONTH AS TotalTicketsMONTH
From table1 a
INNER JOIN
(Select OpenedByName ,Count(OpenedByName) AS TotalTicketsMONTH From table1
Where OpenDate Between '2/1/2014' and '3/1/2014' Group By OpenedByName) b ON
a.OpenedByName = b.OpenedByName
Where a.OpenDate Between '2/1/2014' and '3/1/2014' AND a.ProblemType = 17
Group By a.OpenedByName
``` | SQL Group By statement within subquery | [
"",
"sql",
"sql-server",
"group-by",
"subquery",
""
] |
I just started learning SQL; I've created a table. Learned insert command and inserted values in 2 rows. However I've inserted null values in 3rd.
Now I want to delete the third row which has 2 columns with no values in it.
I'm using the following query:
```
delete employee where city=null;
```
It doesn't seem to be working! | According to the **SQL 92** standard, many logical operations with *null values*, such as
```
> null
= null
and null
or null
not null
```
should *always return null* (and *never true*). Some DBMSs (e.g. Oracle) follow this rule rigorously; some (MS SQL) can have a mode in which `null = null` returns *true* rather than the required *null*. In order to be *compatible with SQL 92*, and so with (almost) all DBMSs, you should use the standard `is null` or `is not null` comparisons, in your case
```
delete from employee
where city is null -- <- Standard comparison
``` | You need the `is null` "operator":
```
delete from employee where city is null;
```
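The difference between `= null` and `is null` is easy to demonstrate mechanically. A quick sketch with Python's sqlite3 (table and data invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (id INTEGER, city TEXT)")
conn.executemany("INSERT INTO employee VALUES (?, ?)",
                 [(1, "Oslo"), (2, None), (3, None)])

# city = NULL evaluates to NULL (not true) for every row, so nothing matches.
eq_null = conn.execute("SELECT id FROM employee WHERE city = NULL").fetchall()

# IS NULL is the standard way to test for missing values.
is_null = conn.execute("SELECT id FROM employee WHERE city IS NULL").fetchall()

conn.execute("DELETE FROM employee WHERE city IS NULL")
remaining = conn.execute("SELECT COUNT(*) FROM employee").fetchone()[0]
print(eq_null, is_null, remaining)  # [] [(2,), (3,)] 1
```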
This is because in SQL, *nothing* is equal to `NULL`. | How to delete a row in SQL based on a NULL condition | [
"",
"sql",
"null",
""
] |
I am using views for query convenience. The view is a join between three tables, using `INNER JOIN` and `OUTER RIGHT` joins. The overall result set from the view could be 500,000 records. I then perform other queries off of this view, similar to:
```
SELECT colA, colB, colC FROM vwMyView WHERE colD = 'ABC'
```
This query might return only 30 or so results. How will this be for performance? Internally in the SQL engine will the view always be executed, then the `WHERE` clause applied after, or is SQL Server smart enough to apply the `WHERE` clause first so that the `JOIN` operations are only done on a subset of records?
If I'm only returning 30 records to the middle tier, do I need to worry too much that the SQL Server had to trawl through 500,000 records to get to those 30 records? I have indexes applied on all important columns on the base tables.
Using MS SQL Server, view is not materialized | Usually, a view is treated in much the same way as a macro might be in other languages - the body of the view is "expanded out" into the query it's a part of, before the query is optimized. So your concern about it first computing all 500,000 results is unfounded.
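The "expanded out" behavior can be observed behaviorally: filtering a view gives exactly the same rows as filtering the inlined view body yourself. A tiny sketch with Python's sqlite3 (schema invented; SQL Server's optimizer internals differ, but the equivalence of the results is the point):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (colA TEXT, colD TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [("a", "ABC"), ("b", "XYZ"), ("c", "ABC")])
conn.execute("CREATE VIEW vwMyView AS SELECT colA, colD FROM t")

# Querying through the view...
via_view = conn.execute(
    "SELECT colA FROM vwMyView WHERE colD = 'ABC' ORDER BY colA").fetchall()

# ...is equivalent to inlining the view's body into the query.
inlined = conn.execute(
    "SELECT colA FROM (SELECT colA, colD FROM t) WHERE colD = 'ABC' ORDER BY colA"
).fetchall()
print(via_view == inlined, via_view)  # True [('a',), ('c',)]
```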
The exception to the above is if the view is e.g. an indexed view (SQL Server; the query has to use appropriate hints or you have to be using a high-level edition) or a materialized view (Oracle; not sure on the requirements) where the view isn't expanded out - the results have already been computed beforehand and are being stored much like a real table's rows are - so again, there shouldn't be too much concern whilst actually querying. | When there is no materialized view, the SQL behind your view will always be executed when the view is used, e.g. inside the FROM part. Of course, some caching may be possible, but that depends on your DBMS and its configuration.
To see what the database is doing in the background, you might like to start by using `EXPLAIN ANALYZE <your query>`. | SQL view performance | [
"",
"sql",
"sql-server",
"sql-view",
""
] |
All of the examples I've seen for using `sql.Row`, access return values from queries by **position**:`sql.Rows.scan()` requires a correctly typed variable correctly **positioned** in the `scan()` arguments corresponding to the appropriate column, to retrieve each column value returned, such as in the following example:
[**Example Based on GoDocs (with small mod):**](http://golang.org/pkg/database/sql/#pkg-examples)
```
rows, err := db.Query("SELECT name,age FROM users WHERE age>=50")
if err != nil {
log.Fatal(err)
}
for rows.Next() {
var name string
var age int
if err := rows.Scan(&name,&age); err != nil {
log.Fatal(err)
}
fmt.Printf("%s is %d\n", name, age)
}
if err := rows.Err(); err != nil {
log.Fatal(err)
}
```
`&name` and `&age` must be positioned correctly (columns 0 and 1) for Rows.Scan() to retrieve the correct values with the correct types.
Over my years of development for production systems, I have come to studiously avoid this practice because it's not robust: A database change in the layout of the columns will easily break your code if it's based on column positions.
It is far more robust to use **column names** for retrieving values - this insulates you from changes to the database that add or remove columns that screw up your position based code. For example, in Delphi and C#, all dataSets, including columns returning values from queries, support `FieldByName('age').asInteger` or `fields['age'].value,` etc.
Any way to accomplish this in Go? If not, this is a big drawback in Go database support and a serious disappointment - not at all safe, as mentioned.
***Edit:***
Also *(perhaps this is a new question)*: The examples I've seen seem to require you to retrieve all the columns returned by the query, or the positions of the columns will be skewed.
Suppose there is utility query in a locked-down database that I cannot modify or add to, and it retrieves several columns, but I only need one of them for my current task. Based on the current `sql.Rows.Scan()` model, I have to retrieve all the values from the query in my application code, even though I don't need them, whereas if I could query `"columnByName"` that would not be necessary - I could just bring into my application code the data I need. Any work-around for this? | Yes, it is possible to do this without having to manually match up the column positions. There are some third-party libraries you can use to do this, such as [sqlx](https://github.com/jmoiron/sqlx) or [gorp](https://github.com/coopernurse/gorp). I would recommend sticking with one of these instead of rolling your own.
Named matching does have a slight penalty: it is no different from matching up the column positions yourself, except that the work is done for you at runtime, possibly on every query execution. This is true in any other language as well.
Why at runtime? The query is written as a string. It has to be parsed to determine the position.
If you were to make your own library, how do you do this on your own?
* [Rows.Columns](http://golang.org/pkg/database/sql/#Rows.Columns) to get column names and positions.
* Passing a slice of pointers `[]interface{}` to [Rows.Scan](https://golang.org/pkg/database/sql/#Rows.Scan) to get the values.
* [reflect.Value](https://golang.org/pkg/reflect/#Value) and [Value.Addr](https://golang.org/pkg/reflect/#Value.Addr) to get a pointer to the destination value.
* [Value.FieldByName](https://golang.org/pkg/reflect/#Value.FieldByName) to get the `Value` of a struct field if you want to map to struct fields.
Ok, so let's see how this works.
```
type Person struct {
Id int
Name string
}
rows, err := db.Query("SELECT id, name FROM person;")
if err != nil {
// handle err
log.Fatal(err)
}
columnNames, err := rows.Columns() // []string{"id", "name"}
if err != nil {
// handle err
log.Fatal(err)
}
people := make([]Person, 0, 2)
for rows.Next() {
person := Person{}
// person == Person{0, ""}
pointers := make([]interface{}, len(columnNames))
// pointers == `[]interface{}{nil, nil}`
    structVal := reflect.ValueOf(&person).Elem() // a pointer's Elem() is addressable, so Addr() below works
for i, colName := range columnNames {
fieldVal := structVal.FieldByName(strings.Title(colName))
if !fieldVal.IsValid() {
log.Fatal("field not valid")
}
pointers[i] = fieldVal.Addr().Interface()
}
// pointers == `[]interface{}{&int, &string}`
err := rows.Scan(pointers...)
if err != nil {
// handle err
log.Fatal(err)
}
// person == Person{1, "John Doe"}
people = append(people, person)
}
``` | The only sane & clean way to do this is to use: <https://github.com/jmoiron/sqlx>
Let's say you have a Place struct:
```
type Place struct {
Country string
City sql.NullString
TelephoneCode int `db:"telcode"`
}
```
You scan it easily:
```
rows, err := db.Queryx("SELECT * FROM place")
for rows.Next() {
var p Place
err = rows.StructScan(&p)
}
```
More info: <http://jmoiron.github.io/sqlx/> | Is it possible to retrieve a column value by name using GoLang database/sql | [
"",
"sql",
"go",
""
] |
I am creating a SQL table to hold transactions:
```
create table dbo.TRANSACTIONS
(
Id int identity not null,
Amount money not null
);
```
For currency (I am using euros) should I use money, decimal, or numeric?
I have seen the three being applied to currency columns so I am not sure anymore.
Money would be the obvious choice ... But I have seen decimal and numeric to.
By the way, I am using SQL Server 2012.
Thank You | First of all, Decimal and Numeric have the same functionality ([MSDN info about it](http://msdn.microsoft.com/en-us/library/ms187746.aspx))
To answer the new question money VS decimal, there is already a Stackoverflow question about it: [Should you choose the MONEY or DECIMAL(x,y) datatypes in SQL Server?](https://stackoverflow.com/questions/582797/should-you-choose-the-money-or-decimalx-y-datatypes-in-sql-server) - the short answer was:
> Never ever should you use money it is not precise and it is pure garbage, always use decimal/numeric
by [SQLMenace](https://stackoverflow.com/users/740/sqlmenace) | [Decimal and Numeric](http://technet.microsoft.com/en-us/library/aa258832(v=sql.80).aspx) are for [almost all purposes](https://stackoverflow.com/questions/1841915/difference-betweeen-decimal-and-numeric), [equivalent](https://stackoverflow.com/questions/759401/is-there-any-difference-between-decimal-and-numeric-in-sql-server)
Money is not a standard Sql type, and there may be [other reasons](https://stackoverflow.com/questions/582797/should-you-choose-the-money-or-decimalx-y-datatypes-in-sql-server) to avoid it as well.
So choose an appropriate scale and precision and use `DECIMAL`.
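The precision point is easy to demonstrate in miniature with Python's `decimal` module (the amounts are made up): binary floats accumulate representation error, while fixed-point decimals, like a `DECIMAL` column, stay exact:

```python
from decimal import Decimal

# Binary floats cannot represent 0.10 exactly, so sums drift...
float_total = sum([0.10] * 3)            # 0.30000000000000004

# ...while fixed-point decimals stay exact, as a DECIMAL(19,4) column would.
dec_total = sum([Decimal("0.10")] * 3)   # Decimal('0.30')

print(float_total == 0.3, dec_total == Decimal("0.30"))  # False True
```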
**Re : DECIMAL(19,4) vs (20,4)**
The [precision vs storage table here](http://technet.microsoft.com/en-us/library/aa258832(v=sql.80).aspx) sums things up.
`19,4` will store a value of `999 999 999 999 999.9999`, at a cost of 9 bytes. A precision of 20 would require `13` bytes, which IMO would be a waste unless you needed the extra precision (in which case you can go to a precision of 28 with the same storage).
Also, for the same 9 bytes, you could also use e.g. `19,2`, which will store
`99 999 999 999 999 999.99` | Money, Decimal or Numeric for Currency Columns | [
"",
"sql",
"sql-server",
""
] |
I have a table in MySQL with a birth date column that is saved as a unix timestamp (bigint).
I want to write query like this:
```
SELECT ... as `age` FROM `tableName`
```
And
```
SELECT * FROM `tableName` WHERE ... > 30
```
Where the 3 dots are the function that calculates the age from the birth date.
I know about the function `TIMESTAMPDIFF`, but it's not good if I save the birth date as unix timestamp.
What can I do?
Thanks | From MySQL's [date and time functions](http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html), we can combine `TIMESTAMPDIFF`, `NOW`, and `FROM_UNIXTIME`.
Supposing that `dob` is a Unix timestamp representing date of birth:
```
TIMESTAMPDIFF(YEAR, FROM_UNIXTIME(dob), NOW())
```
From there, it's simple enough to add a `WHERE` clause based on the column value.
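The same full-years computation can be mirrored in application code. A Python sketch (helper name and sample dates are mine) of what `TIMESTAMPDIFF(YEAR, ...)` computes: the calendar-year difference, decremented if the birthday hasn't occurred yet this year:

```python
from datetime import datetime, timezone

def age_from_unix(dob_ts, now):
    """Full calendar years between a Unix birth timestamp and `now`."""
    dob = datetime.fromtimestamp(dob_ts, tz=timezone.utc)
    years = now.year - dob.year
    # Subtract one year if the birthday hasn't happened yet this year.
    if (now.month, now.day) < (dob.month, dob.day):
        years -= 1
    return years

now = datetime(2014, 3, 1, tzinfo=timezone.utc)
dob = datetime(1984, 6, 15, tzinfo=timezone.utc).timestamp()
print(age_from_unix(dob, now))  # 29 (turns 30 on 2014-06-15)
```

Dividing elapsed seconds by 86400*365 instead, as in the other answer, drifts by roughly a day per leap year.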
[Live demo on SQLFiddle.](http://sqlfiddle.com/#!2/08177/2) | ```
SELECT FLOOR((unix_timestamp() - birthday)/(86400*365)) AS age FROM table;
``` | Calculate age from birth date in mysql | [
"",
"mysql",
"sql",
""
] |
I have a procedure where a `CURSOR c1` is declared. Then I want to output columns in each row `c1` fetches.
```
FOR rec IN c1
LOOP
DBMS_OUTPUT.PUT_LINE (rec.branchno || CHR(9) || rec.street || CHR(9)
|| rec.city || CHR(9) || rec.postcode);
END LOOP;
```
I'm now using `CHR(9)` to output exactly one tab. However, sometimes `rec.city` has more than 8 characters and thus a tab will push the following `postcode` right by one tab, as in the second line of output shown below.
```
B003 163 Main St. Glasgow G11 9QX
B007 16 Argyll St. Aberdeen AB2 3SU
B005 22 Deer Rd. London SW1 4EH
```
Is there a way to make `DBMS_OUTPUT` adjust the tabs automatically? For example,
```
IF STRLEN(rec.city) < 8
PUT ( CHR(9) || CHR(9) )
ELSE
PUT ( CHR(9) )
END IF;
```
If no, what's the industry convention of generating such formatted output?
Thank you for your help. | Using `DBMS_OUTPUT` to produce formatted reports is probably the wrong approach.
SQL\*Plus is pretty good at producing fixed-width text reports. It's not obvious whether the stored procedure is adding some value (in which case you could have it return a `sys_refcursor` to SQL\*Plus) or whether it would just make more sense to put the SQL statement in a SQL\*Plus script along with appropriate `column format` commands to control the output of the data. Any number of other tools (SQL Developer, for example) support enough SQL\*Plus commands to be able to generate a simple fixed width report as well. | I had the same problem when I used a `CURSOR` within a `PROCEDURE`. I used code similar to the following to solve my problem. Hope this might help you.
```
DBMS_OUTPUT.PUT_LINE(RPAD(rec.branchno,10)||RPAD(rec.street,20)||RPAD(rec.city,20)||RPAD(rec.postcode,20));
```
The `RPAD` function pads the right side of an expression to a specified length. The padding can be done with a specified character, but that is optional. If the expression to be padded is longer than the specified length, only the portion that fits will be shown.
`RPAD`( expression, padded\_length, padded\_character(optional) );
```
RPAD('school',10);
```
OUTPUT:`school`
```
RPAD('school',6);
```
OUTPUT:`school`
```
RPAD('school',2);
```
OUTPUT:`sc`
```
RPAD('school',10,'1');
```
OUTPUT:`school1111` | DBMS_OUTPUT.PUT_LINE: Deciding how many tabs to put according to output length | [
"",
"sql",
"oracle",
"plsql",
"dbms-output",
""
] |
if I have this sql:
```
SELECT A FROM B WHERE C IN
(
SELECT D FROM E
)
```
If my internal select don't return any result (0 rows) my where statement will be true or false?
I'm using SQL Server | This is your query:
```
SELECT A
FROM B
WHERE C IN (SELECT D FROM E);
```
The `where` statement is quite simple. It filters out rows where there is no match between `B.C` and `E.D`. By your statement, there is no match, so all rows are filtered out. The query returns no rows.
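This can be checked directly. A sketch with Python's sqlite3, mirroring the question's tables (left empty on purpose), which also shows the contrasting `NOT IN` case:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE B (A TEXT, C INTEGER)")
conn.execute("CREATE TABLE E (D INTEGER)")  # deliberately left empty
conn.executemany("INSERT INTO B VALUES (?, ?)", [("x", 1), ("y", 2)])

# IN against an empty subquery is false for every row: no rows come back.
rows = conn.execute("SELECT A FROM B WHERE C IN (SELECT D FROM E)").fetchall()

# By contrast, NOT IN against an empty set is true for every row.
not_in = conn.execute("SELECT A FROM B WHERE C NOT IN (SELECT D FROM E)").fetchall()
print(rows, not_in)  # [] [('x',), ('y',)]
```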
The `where` statement is not "true" or "false" in general. It is "true" or "false" for a given row in `B` (in this case). With no matches, the `where` clause will be uniformly false for all rows. | Ignore the tables for the moment and think about what this does:
```
SELECT 'Yes' WHERE 1 IN (SELECT 1 WHERE 1=0)
``` | Empty IN sql statement | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
In my project I have locations of the product, which may look like this:
```
- Location 1
-- SubLocation 1
-- SubLocation 2
- Location 2
-- SubLocation 3
-- SubLocation 4
```
Imagine zones with subzones in a facility.
I need to store that in the DB and then retrieve it sometime later, like this: `SubLocation 1 at Location 1`.
My first guess is to have two tables with a one-to-many relationship, but that won't scale if later I need to have something like this:
```
- Location 2
-- SubLocation 3
-- SubLocation 4
---- SubLocation 5
---- SubLocation 6
```
So my question is: what's the best way to store such a structure in a relational database? | You can define a parent\_id FK referencing another record's id (roots have a null parent\_id).
To define the hierarchy and retrieve a whole subtree in one query, you can define an additional path field (VARCHAR). The field should hold the full path of ids separated with '\_'
In your case SubLocation 5 has the path="2\_4\_5"
To retrieve all the children of SubLocation 4 you can use
```
select *
from myTable
where path like '2_4%';
```
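A quick check of the path idea with Python's sqlite3 (data invented). One caveat worth knowing: in SQL `LIKE` patterns `_` matches any single character, so a separator such as `/` (used in the sketch below, my choice) avoids accidental matches without needing an `ESCAPE` clause:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE locations (id INTEGER, name TEXT, path TEXT)")
conn.executemany("INSERT INTO locations VALUES (?, ?, ?)", [
    (2, "Location 2",    "2"),
    (3, "SubLocation 3", "2/3"),
    (4, "SubLocation 4", "2/4"),
    (5, "SubLocation 5", "2/4/5"),
    (6, "SubLocation 6", "2/4/6"),
])

# Everything under SubLocation 4, fetched with one prefix query.
subtree = conn.execute(
    "SELECT name FROM locations WHERE path LIKE '2/4/%' ORDER BY id").fetchall()
print(subtree)  # [('SubLocation 5',), ('SubLocation 6',)]
```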
There is a depth restriction (the size of the path, in fact), but for most cases it should work. | Dealing with hierarchical data is difficult in MySQL. So, although you might store the data in recursive tables, querying the data is (in general) not easy.
If you have a fixed set of hierarchies, such as three (I'm thinking "city", "state", "country"), then you can have a separate table for each entity. This works and is particularly useful in situations where the elements can change over time.
Alternatively, you can have a single table that flattens out the dimensions. So, "city", "state", and "country" are all stored on a single row. This flattens out the data, so it is no longer normalized. Updates become tedious. But if the data is rarely updated, then that is not an issue. This form is a "dimensional" form and used for OLAP solutions.
There are hybrid approaches, where you store each element in a single table, in a recursive form. However, the table also contains the "full path" to the top. For instance in your last example:
```
/location2/sublocation3
/location2/sublocation4
/location2/sublocation4/sublocation5
/location2/sublocation4/sublocation6
```
This facilitates querying the data. But it comes at the cost of maintenance. Changing something such as `sublocation4` requires changing many rows. Think triggers.
The easiest solution is to use different tables for different entities, if you can. | How do I store nested locations? | [
"",
"mysql",
"sql",
"database-design",
"relational-database",
""
] |
Our company has a need to store and compute analytics related to content creation, review/approval and publishing workflow for documents. We are looking at something like Amazon SimpleDB.
We will store "events" which correspond to actions that users take in the system. For instance:
* [User B] requested [document B] be reviewed at [Time] by [User A]
* [User A] approved [document B] at [Time]
* [User B] edited [document B] at [Time]
* [User B] published [document B] at [Time]
Then we want to be able to create graphs (histogram/line plot) of this activity for given time periods. For instance:
* Edits vs Time
* Approvals vs Time
* Publishes vs Time
* Approvals vs Publishes vs Time
In SQL I assume this would be done by grouping results into "buckets". However, I am having a hard time figuring out how to do this with a NoSQL db like AWS Simpledb without batching this processing using Hadoop/Map Reduce. This has to be realtime so doing any batch processing is out of the question.
We are also looking at Neo4J so if someone has a solution for Neo I would be interested as well.
Thanks | You would use "Action-Nodes" to model Approval, Publication, and Edits so you can connect more than two things to each action.
For modeling time I'd recommend an ordered list of events or even a time tree: <http://docs.neo4j.org/chunked/milestone/cypher-cookbook-path-tree.html>
I created a small GraphGist to show it; check it out:
<http://gist.neo4j.org/?9263624> | In Neo4j's Cypher, you can collect things into buckets with CASE/WHEN and aggregation syntax. | NoSQL - How to generate histograms for ranges of data | [
"",
"sql",
"nosql",
"neo4j",
"amazon-simpledb",
""
] |
I am trying to extract the text between two characters using t-sql. I have been able to write it where it pulls the information close to what I want, but for some reason I am not getting what i am expecting(suprise, suprise). Could really use alittle help refining it. I am trying to extract part of the table name that is located between two [ ]. An example of the column data is as follows(this is a table that records all changes made to the database so the column text is basically SQL statements):
```
ALTER TABLE [TABLENAME].[MYTABLE] ADD
[VIP_CUSTOMER] [int] NULL
```
I am trying to extract part of the table name, in this example I just want 'MYTABLE'
Right now I am using:
```
select SUBSTRING(db.Event_Text, CHARINDEX('.', db.Event_Text) + 2, (CHARINDEX(']', db.Event_Text)) - CHARINDEX('', db.Event_Text) + Len(']')) as OBJName
FROM DBA_AUDIT_EVENT DB
WHERE DATABASE_NAME = 'XYZ'
```
But when I use this, I don't always get the results needed. Sometimes I get 'MYTABLE] ADD' and sometimes I get the part of the name I want, and sometimes depending on the length of the tablename I only get part the first part of the name with part of the name cut off at the end. Is there anyway to get this right, or is there a better way of writing it? Any help would be greatly appreciated. Thanks in advance. | Long, but here's a formula using the brackets:
```
Declare @text varchar(200);
Select @text='ALTER TABLE [TABLENAME].[MYTABLE] ADD [VIP_CUSTOMER] [int] NULL';
Select SUBSTRING(@text,
CHARINDEX('[', @text, CHARINDEX('[', @text) + 1 ) +1,
CHARINDEX(']', @text, CHARINDEX('[', @text, CHARINDEX('[', @text) + 1 ) ) -
CHARINDEX('[', @text, CHARINDEX('[', @text) + 1 ) - 1 );
```
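The nested CHARINDEX logic above is easier to follow translated into ordinary string operations. A Python sketch of the same idea (find the second `[` and the `]` that follows it; the helper name is mine):

```python
def table_name(event_text):
    """Return the text between the second '[' and the following ']'."""
    first = event_text.index("[")
    second = event_text.index("[", first + 1)
    close = event_text.index("]", second)
    return event_text[second + 1:close]

text = "ALTER TABLE [TABLENAME].[MYTABLE] ADD [VIP_CUSTOMER] [int] NULL"
print(table_name(text))  # MYTABLE
```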
Replace @text with your column name. | This is a pretty ugly way to get the length, but I've used something like this before:
```
select SUBSTRING(db.Event_Text,
CHARINDEX('.', db.Event_Text) + 2,
charindex('] ADD',db.Event_Text) - CHARINDEX('.',db.Event_Text)-2)
```
Give it a try, it may work for you. | Extracting text between two characters in SQL | [
"",
"sql",
"t-sql",
""
] |
I am struggling for a few days with this issue and I can't figure out how can I fix it.
I would like to `group by` my table on values `1`,`2`,`3`,`4`,`5` so I have created a ***temporary table*** with this values.
Now I have to `INNER JOIN` this table with other tables on `a.value = #myTempTable.num`.
BUT `a.value` is `ntext` so I need to `CONVERT` it what I actually did, but I am getting an error:
> Conversion failed when converting the varchar value 'simple, ' to data
> type int. (on line 7)
```
Create table #myTempTable
(
num int
)
insert into #myTempTable (num) values (1),(2),(3),(4),(5)
SELECT a.name, CONVERT(INT, CONVERT(VARCHAR(12), a.value)) AS value, COUNT(*) AS pocet
FROM
(SELECT item.name, value.value
FROM mdl_feedback AS feedback
INNER JOIN mdl_feedback_item AS item
ON feedback.id = item.feedback
INNER JOIN mdl_feedback_value AS value
ON item.id = value.item
WHERE item.typ = 'multichoicerated' AND item.feedback IN (43)
) AS a
INNER JOIN #myTempTable
on CONVERT(INT, CONVERT(VARCHAR(12), a.value)) = #myTempTable.num
GROUP BY a.name, CONVERT(INT, CONVERT(VARCHAR(12), a.value)) ORDER BY a.name
drop table #myTempTable
```
I am not getting this error without the last `INNER JOIN`
```
INNER JOIN #myTempTable on CONVERT(INT, CONVERT(VARCHAR(12), a.value))
= #myTempTable.num
```
Could someone help me please?
Thanks. | To avoid such errors you could use `CASE` + `ISNUMERIC` to handle scenarios where you cannot convert to int.
Change
```
CONVERT(INT, CONVERT(VARCHAR(12), a.value))
```
To
```
CONVERT(INT,
CASE
WHEN IsNumeric(CONVERT(VARCHAR(12), a.value)) = 1 THEN CONVERT(VARCHAR(12),a.value)
ELSE 0 END)
```
Basically this says: if the value cannot be converted to int, assign a value of 0 (in my example).
Alternatively you can look at this article about creating a custom function that will check if `a.value` is number: <http://www.tek-tips.com/faqs.cfm?fid=6423> | If you are converting a varchar to int make sure you do not have decimal places.
For example, if you are converting a varchar field with value (12345.0) to an integer then you get this conversion error. In my case I had all my fields with .0 as ending so I used the following statement to globally fix the problem.
```
CONVERT(int, replace(FIELD_NAME,'.0',''))
``` | Conversion failed when converting the varchar value 'simple, ' to data type int | [
"",
"sql",
"sql-server",
"t-sql",
"group-by",
""
] |
I have been trying for a couple of hours to figure this out, but it still does not work..
I have 2 tables:
* Prices
* Sales
The records are something like:
Prices
```
product_name price
--------------------
Milk 0.80
Cheese 1.00
Bread 1.50
```
Sales
```
customer_id product_name number_purchases
-------------------------------------------
15 Milk 2
15 Cheese 1
2 Butter 2
2 Candy 4
80 Bread 1
...
...
15 Bread 2
15 Milk 1
```
The sales are tracked per week, a customer can occur multiple times in the database with a purchase of the same goods (like in the example customer 15 buys milk twice a week, so customer 15 bought 3 packs of milk.
I want to get for a certain customer:
Each product he/she bought, with the corresponding total number of purchases of that product, and the corresponding price of the product.
This is what I have so far without errors:
```
SELECT product_name, SUM(number_purchases)
FROM sales S
WHERE customer_id = 80
GROUP BY product_name;
```
But when I want to add some lines to the code, to get the corresponding prices too, it does not work. One of the things I tried:
```
SELECT product_name, SUM(number_purchases), price
FROM sales S, prices P
WHERE S.product_name = P.productname
AND customer_id = 80
GROUP BY product_name;
```
Is this not possible in only one query or do I miss something?
Thanks a lot | Use aliases
```
SELECT s.product_name, SUM(number_purchases), price
FROM sales S, prices P
WHERE S.product_name = P.productname
AND customer_id = 80
GROUP BY s.product_name;
```
The DBMS doesn't know which product\_name to pick.
P.S. I think you also have to add price to the GROUP BY. | Your last query is grouping by `customer_id`. Try changing this to `product_name`:
```
SELECT S.product_name, SUM(S.number_purchases), P.price
FROM sales S, prices P
WHERE S.product_name = P.product_name
AND S.customer_id = 80
GROUP BY S.product_name;
```
You should also *never* use commas in the `from` clause. Instead use explicit joins as in your first example and aggregate on `price` as well (or move `price` to the `group by` clause):
```
SELECT S.product_name, SUM(S.number_purchases), sum(P.price) total_price
FROM sales S join
prices P
ON S.product_name = P.product_name
WHERE S.customer_id = 80
GROUP BY S.product_name;
``` | MySQL - How to join those tables to get the correct result? | [
"",
"mysql",
"sql",
""
] |
I have this table
```
CREATE TABLE staff (
`id` int(11) NOT NULL AUTO_INCREMENT,
`firstname` varchar(32),
`surname` varchar(32),
`dateEnrollment` date,
`dateCompletion` date,
PRIMARY KEY (`id`)
);
INSERT INTO staff (`firstname`, `surname`, `dateEnrollment`, `dateCompletion`) VALUES
('Demo', 'Demo', '2010-01-13', '2010-02-13'),
('Jone', 'Borek', '2010-02-14', '2011-03-13'),
('Denis', 'Koszi', '2010-02-15', '2010-06-13');
```
And I need a SQL query that returns my longest period without any activity. E.g.: Jone Borek's completion 2011-03-13 back to Denis Koszi's completion 2010-06-13 - in this period the company was without activity... How can I achieve it? Many thanks for help | Either of the following will work,
(I prefer the latter):
```
SELECT
MAX(DATEDIFF(
(SELECT MIN(s2.dateCompletion)
FROM staff s2
WHERE s2.dateCompletion >= s.dateCompletion AND s2.id != s.id)
, dateCompletion))
from staff s;
```
In the above example, for each record, you find the next completed project, do a datediff, and then take the max.
In the example below, I use joins to do the same thing. If your dataset is really big, you might be better off creating a temporary table and getting rid of the derived table.
```
SELECT
MAX(DATEDIFF(s2.dateCompletion, s.dateCompletion))
FROM staff s
JOIN staff s2 ON s2.dateCompletion = (SELECT MIN(s3.dateCompletion)
FROM staff s3
WHERE s3.dateCompletion >= s.dateCompletion
AND s3.id != s.id)
```
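The gap logic itself (for each completion, the distance to the next one, then the maximum) is easy to verify in a few lines of Python on the question's sample dates:

```python
from datetime import date

# Completion dates copied from the question's staff table.
completions = sorted([
    date(2010, 2, 13),   # Demo
    date(2011, 3, 13),   # Jone
    date(2010, 6, 13),   # Denis
])

# Gap from each completion to the next; the answer is the maximum gap.
gaps = [(b - a).days for a, b in zip(completions, completions[1:])]
print(max(gaps))  # 273 (2010-06-13 to 2011-03-13)
```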
Also, since you're measuring the maximum period of inactivity, if you would also like to include the date difference between MAX(dateCompletion) and CURDATE(), use the following:
```
SELECT
MAX(DATEDIFF(COALESCE(s2.dateCompletion, CURDATE()), s.dateCompletion))
FROM staff s
JOIN staff s2 ON s2.dateCompletion = (SELECT MIN(s3.dateCompletion)
FROM staff s3
WHERE s3.dateCompletion >= s.dateCompletion
AND s3.id != s.id)
``` | Try:
```
SELECT s1.id id1,
s1.firstname firstname1,
s1.surname surname1,
s1.`dateCompletion` dateCompletion1,
s2.id id2,
s2.firstname firstname2,
s2.surname surname2,
s2.`dateCompletion` dateCompletion2,
datediff( s1.`dateCompletion`, s2.`dateCompletion` )
FROM staff s1
JOIN staff s2
ON s1.`dateCompletion` = (
SELECT min(dateCompletion)
FROM staff s3
WHERE s3.dateCompletion > s2.`dateCompletion`
)
ORDER BY datediff( s2.`dateCompletion`, s1.`dateCompletion` )
LIMIT 1
```
demo: <http://sqlfiddle.com/#!2/c1e939/8> | MySQL get longest gap without activity | [
"",
"mysql",
"sql",
"datediff",
""
] |
I have a (SQL Server) table similar to the following:
SalesSummary
Year | Team | Person | Person Sales | Team Sales/Yr
2013 1 Jim $10 ??
2013 1 Anna $0 ??
2013 2 John $8 ??
2013 3 Todd $4 ??
2013 3 Alan $1 ??
2014 3 Alan $22 ??
I'm trying to sum over this example SalesSummary table and insert the proper values into the Team Sales column. In this example, I would want $10 in the 1st and 2nd columns, $8 in the 3rd, $5 in the 4th/5th and $22 in the 6th column slot. Forgive my ignorance of SQL, but I settled on what I'm told is a poor solution as follows:
```
UPDATE SalesSummary SET TeamSales = sum.TeamSales
FROM (SELECT Team, Year, SUM(PersonSales) OVER (Partition By Team, Year) as TeamSales
FROM SalesSummary
GROUP BY Team, Year, PersonSales
) AS sum, SalesSummary as SS
WHERE sum.Team = ss.Team AND sum.Year = ss.Year
```
I was hoping someone might be able to show me how best to perform this type of update. I appreciate any help, tips, or examples. Apologies if this is obvious. | Assuming you are using SQL Server, I think you want something like this:
```
WITH toupdate AS
(SELECT team, year,
Sum(personsales) OVER (partition BY team, year) AS newTeamSales
FROM salessummary
)
UPDATE toupdate
SET teamsales = newteamsales;
```
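The update can be exercised end-to-end with Python's sqlite3. SQLite lacks updatable CTEs, so this sketch uses an equivalent correlated subquery; the data is copied from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE SalesSummary
                (Year INT, Team INT, Person TEXT, PersonSales REAL, TeamSales REAL)""")
conn.executemany("INSERT INTO SalesSummary VALUES (?, ?, ?, ?, NULL)", [
    (2013, 1, "Jim", 10), (2013, 1, "Anna", 0),
    (2013, 2, "John", 8),
    (2013, 3, "Todd", 4), (2013, 3, "Alan", 1),
    (2014, 3, "Alan", 22),
])

# Correlated subquery: each row gets the sum of PersonSales for its Team/Year.
conn.execute("""UPDATE SalesSummary
                SET TeamSales = (SELECT SUM(s.PersonSales)
                                 FROM SalesSummary s
                                 WHERE s.Team = SalesSummary.Team
                                   AND s.Year = SalesSummary.Year)""")

rows = conn.execute(
    "SELECT Person, TeamSales FROM SalesSummary ORDER BY Year, Team, Person").fetchall()
print(rows)
# [('Anna', 10.0), ('Jim', 10.0), ('John', 8.0), ('Alan', 5.0), ('Todd', 5.0), ('Alan', 22.0)]
```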
Your original query has several issues and suspicious constructs. First, an aggregation subquery is not updatable. Second, you are doing an aggregation and using a window function, which, although allowed, is unusual. Third, you are aggregating by `PersonSales` and taking the `sum()`. Once again, allowed, but unusual. | Try this.
```
UPDATE SalesSummary SET TeamSales = (Select Sum(PersonSales)
From SalesSummary S
Where S.Year=SalesSummary.Year AND S.Team=SalesSummary.Team)
``` | SQL How to Update SUM of column over group in same table | [
"",
"sql",
"sql-server",
"optimization",
""
] |
I have a SQL query which returns data as listed in table1. I want to compare the column table1.Count with table2.Count\_Range. The Value corresponding to the range in which the Count falls should be returned.
table1:
```
Time Count
2014/02/24 00:00 23.3
2014/02/24 01:00 43.1
2014/02/24 02:00 93.5
2014/02/24 03:00 123.9
2014/02/24 04:00 173.0
2014/02/24 05:00 223.7
...
...
```
table2:
```
Count_Range Value
10 1
20 2
30 3
40 4
50 5
...
...
```
For eg. at 00:00 the value is 23.3, from table2 it falls in the range between 20 and 30 so the value returned should be 2. Similarly for 01:00 the value returned should be 4.
The final table should be like below
Proposed output table3:
```
Time Count Value
2014/02/24 00:00 23.3 2
2014/02/24 01:00 43.1 4
2014/02/24 02:00 93.5 ...
2014/02/24 03:00 123.9 ...
2014/02/24 04:00 173.0 ...
2014/02/24 05:00 223.7 ...
```
Thanks to all in advance... btw this is my first question in this forum, in fact first for any forum on earth!! | You can do this with a correlated subquery:
```
select t1.*,
(select top 1 t2.value
from table2 t2
where t2.count_range < t1."count"
order by t2.count_range desc
) as value
from table1 t1;
```
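To sanity-check the lookup logic, here is a hedged sketch in Python's `sqlite3`, using `LIMIT 1` in place of T-SQL's `TOP 1` (tables and sample values follow the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 ("Time" TEXT, "Count" REAL);
CREATE TABLE table2 (Count_Range INT, Value INT);
INSERT INTO table1 VALUES ('2014/02/24 00:00', 23.3), ('2014/02/24 01:00', 43.1);
INSERT INTO table2 VALUES (10, 1), (20, 2), (30, 3), (40, 4), (50, 5);
""")

# For each row, find the largest Count_Range below the Count value
# and return its Value.
rows = conn.execute("""
SELECT t1."Time", t1."Count",
       (SELECT t2.Value
        FROM table2 t2
        WHERE t2.Count_Range < t1."Count"
        ORDER BY t2.Count_Range DESC
        LIMIT 1) AS Value
FROM table1 t1
""").fetchall()
values = [r[2] for r in rows]
print(values)  # [2, 4]
```

For 23.3 the largest smaller boundary is 20 (Value 2), and for 43.1 it is 40 (Value 4), matching the expected table3.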
Alternatively, SQL Server offers the `apply` keyword for this sort of thing:
```
select t1.*, t2.value
from table1 t1 cross apply
(select top 1 t2.value
from table2 t2
where t2.count_range < t1."count"
order by t2.count_range desc
) t2;
```
I prefer the first method because it is standard SQL (apart from the `top` keyword). | Another way to do this is to first break the Count\_Range column into a low value and a high value for each range:
```
LowRange  HighRange  Value
0         9          0
10        19         1
20        29         2
30        39         3
40        49         4
50        59         5
...       ...        ...
```
And then use the following query:
```
SELECT Table1.Count, Table2.Value
FROM Table2 INNER JOIN Table1
    ON Table1.Count >= Table2.LowRange
   AND Table1.Count <= Table2.HighRange;
``` | SQL: Compare each record of column with a range of values | [
"",
"sql",
"sql-server-2008",
""
] |
In TSql what is the recommended approach for grouping data containing nulls?
Example of the type of query:
```
Select Group, Count([Group])
From [Data]
Group by [Group]
```
It appears that the ~~count(\*) and~~ count(Group) both result in the null group displaying `0`.
Example of the expected table data:
```
Id, Group
---------
1 , Alpha
2 , Null
3 , Beta
4 , Null
```
Example of the expected result:
```
Group, Count
---------
Alpha, 1
Beta, 1
Null, 0
```
This is the desired result which can be obtained by count(Id). **Is this the best way to get this result and why does ~~count(\*) and~~ count(Group) return an "incorrect" result?**
```
Group, Count
---------
Alpha, 1
Beta, 1
Null, 2
```
edit: I don't remember why I thought count(\*) did this, it may be the answer I'm looking for.. | The best approach is to use count(\*) which behaves exactly like count(1) or any other constant.
The \* will ensure every row is counted.
```
Select Group, Count(*)
From [Data]
Group by [Group]
```
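To see the difference between the two counts concretely, here is a hedged sketch with Python's `sqlite3` (not part of the original answer; the column is renamed `grp` because `group` is a reserved word):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Data (Id INT, grp TEXT);
INSERT INTO Data VALUES (1, 'Alpha'), (2, NULL), (3, 'Beta'), (4, NULL);
""")

# COUNT(*) counts rows; COUNT(grp) counts only non-NULL values of grp.
rows = conn.execute("""
SELECT grp, COUNT(*), COUNT(grp), COUNT(Id)
FROM Data
GROUP BY grp
ORDER BY grp
""").fetchall()
print(rows)
```

In the `NULL` group, `COUNT(*)` and `COUNT(Id)` both give 2 while `COUNT(grp)` gives 0, which is the behavior the question describes.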
The reason `count([Group])` shows `0` for the `NULL` group is that `count(expression)` only counts rows where the expression is not null; in that group every value is `NULL`, so nothing is counted. `count(*)` (like `count(Id)`) counts rows regardless of nulls, which is why it returns `2`. | Just do
```
SELECT [group], count([group])
FROM [Data]
GROUP BY [group]
```
[**SQL Fiddle Demo**](http://www.sqlfiddle.com/#!3/1c284/8)
Count(id) doesn't give the expected result mentioned in the question; it gives a value of 2 for the NULL group. | Count of group for null is always 0 (zero) | [
"",
"sql",
"t-sql",
""
] |
I have a table called list\_details
```
listId tier
1 5
1 5
1 6
2 4
2 5
```
I want the following result:
```
listId tier
1 5(2), 6(1)
2 4(1),5(1)
```
I tried the following query:
```
SELECT ld.listId,count(ld.tier)
from list_details ld
group by ld.listId
```
and it gives me :
```
listId tier
1 3
2 2
```
But I don't know how to put in the condition (or whatever is needed) to categorize based on the tiers.
Any help is appreciated
Thanks Mike:
Your query result is:
```
ListId tierData
1      5(2), 6(1), 4(1),5(1)
```
But I want:
```
listId tier
1      5(2), 6(1)
2      4(1),5(1)
``` | I am assuming that you don't really need the response in the exact format `5(2), 6(1)`; if you do need that format for display, you could produce it in the application layer.
You should simply add multiple groupings:
```
SELECT
listId,
tier,
COUNT(1) AS `tierCount`
FROM list_details
GROUP BY listId, tier
ORDER BY listId ASC, tier ASC
```
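To make this concrete, here is a hedged sketch via Python's `sqlite3`, with the question's sample rows; the `5(2),6(1)` display string is assembled with SQLite's `group_concat`, similar in spirit to the MySQL functions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE list_details (listId INT, tier INT);
INSERT INTO list_details VALUES (1, 5), (1, 5), (1, 6), (2, 4), (2, 5);
""")

# Grouping on BOTH columns gives one row per (listId, tier) pair.
counts = conn.execute("""
SELECT listId, tier, COUNT(*) AS tierCount
FROM list_details
GROUP BY listId, tier
ORDER BY listId, tier
""").fetchall()
print(counts)  # [(1, 5, 2), (1, 6, 1), (2, 4, 1), (2, 5, 1)]

# Then collapse each list's pairs into one display string per listId.
display = conn.execute("""
SELECT listId, GROUP_CONCAT(tier || '(' || tierCount || ')')
FROM (SELECT listId, tier, COUNT(*) AS tierCount
      FROM list_details
      GROUP BY listId, tier)
GROUP BY listId
ORDER BY listId
""").fetchall()
print(display)
```

The inner query does the per-tier counting; the outer query only formats, which keeps the two concerns separate.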
If you need that EXACT text result, you can do something like this:
```
SELECT
a.listID AS listID,
GROUP_CONCAT(
CONCAT(
a.tier,
'(',
a.tierCount,
')'
)
) AS tierData
FROM (
SELECT
listId,
tier,
COUNT(1) AS `tierCount`
FROM list_details
GROUP BY listId, tier
) AS a
GROUP BY listID
ORDER BY listID ASC
``` | ```
SELECT ld.listId, ld.tier, count(ld.tier)
FROM list_details ld
GROUP by ld.listId, ld.tier;
```
Will yield:
```
list_id tier count
1 5 2
1 6 1
```
etc | categorize by groupby-SQL | [
"",
"mysql",
"sql",
""
] |
I have an Access 2007 database that will be housing tables which refer to the bill of materials for multiple products. On the main form I want a user to be able to select one of the products - OK, easy. Now, I want two queries to run once they press a button after choosing their product from a dropdown. The first query is a simple delete query to delete all information on a table. The second query is where I'm having my issue with my SQL syntax. I want the information from a static table to be appended to the table where the delete query just removed everything from.
Now, each table that houses the bill of material for each product is labeled with the product's name. So I want the dropdown (`combo0`) to be the reference point for the table name in the `FROM` clause within the SQL string. Code is as follows:
```
INSERT INTO tblTempSassyInfo (Concat, TableName, AddressName, PartNumber, [L/R], FeederSize, QtyPerBoard, SASSYname, RawBoard)
SELECT TableName & AddressName & PartNumber, TableName, AddressName, PartNumber, [L/R], FeederSize, QtyPerBoard, SassyName, RawBoard
FROM [FORMS]![DASHBOARD]![Combo0];
```
So you can see where I'm trying to reference the product name in the dropdown on the form as the table name. Please let me know if this is possible. | *"... I'm trying to reference the product name in the dropdown on the form as the table name. Please let me know if this is possible."*
It is not possible with Access SQL.
The db engine can only accept the actual table name --- it isn't equipped to reference a form control to find the table name nor to accept any other type of parameter to obtain the table name.
You could change the query to include your combo's value as the table name and then rewrite the SQL from the combo's after update event.
```
"SELECT * FROM [" & [FORMS]![DASHBOARD]![Combo0] & "]"
```
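The same rewrite-the-SQL idea works outside Access too. Table names cannot be supplied as query parameters in any SQL API, so the string must be built at runtime, and the name should be validated first. A hedged sketch in Python's `sqlite3` (the table and helper names are illustrative, not from the original):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE WidgetBOM (PartNumber TEXT, Qty INT);
INSERT INTO WidgetBOM VALUES ('P-100', 4), ('P-200', 2);
""")

def select_bill_of_materials(conn, table_name):
    # Identifiers cannot be bound like values, so check the name
    # against the catalog before splicing it into the SQL string.
    known = {row[0] for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")}
    if table_name not in known:
        raise ValueError("unknown table: %r" % table_name)
    return conn.execute('SELECT * FROM "%s"' % table_name).fetchall()

rows = select_bill_of_materials(conn, "WidgetBOM")
print(rows)  # [('P-100', 4), ('P-200', 2)]
```

Validating against the catalog is what keeps this dynamic-SQL pattern safe from injection when the name comes from user input such as a combo box.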
A similar approach could keep Access happy. But it may not be the best fit for your application. | So, the user essentially wants 2 queries to run. A `DELETE * FROM Table` query, and an `Append` query. The user wants to know what table to utilize for the `Append` query by using the Combobox (may just be my assumption/interpretation). That being said, why not use something along the lines of:
```
If IsNull(Me.[Combo0].Value) Then
MsgBox "Please select something."
Me.[Combo0].SetFocus
Cancel = True
Else
Select Case Me.Form!Combo0
Case 1
DoCmd.OpenQuery "DeleteMaterialsTableData" 'Query to delete appropriate table data dependent on Combobox selection'
DoCmd.OpenQuery "QueryNameMaterial1" 'Append records to appropriate table dependent on Combo0 selection'
Case 2
DoCmd.OpenQuery "DeleteMaterialsTableData" 'Query to delete appropriate table data dependent on Combobox selection'
            DoCmd.OpenQuery "QueryNameMaterial2" 'Append records to appropriate table dependent on Combo0 selection'
    End Select
End If
```
This is just trying to use the users' combobox values to determine which table to run the queries against, instead of the user trying to use the Combobox's value as a table name. | Reference a field on a form within a query using SQL | [
"",
"sql",
"ms-access",
"ms-access-2007",
""
] |
I have a database with three tables: `users`, `users_addresses` and `countries`.
When I'm selecting user record and binding it to the model I'm using the following statement:
```
SELECT
`u`.`id`, `u`.`first_name`, `u`.`last_name`,
`a`.`address_1`, `a`.`address_2`,
`a`.`town`, `a`.`region`, `a`.`post_code`, `a`.`country`,
`c`.`name` AS `country_name`,
`c`.`eu` AS `eu_member`
FROM `users` `u`
LEFT JOIN `users_addresses` `a`
ON `a`.`user` = `u`.`id`
LEFT JOIN `countries` `c`
ON `c`.`id` = `a`.`country`
WHERE `a`.`default` = 1
AND `u`.`id` = 3
```
The problem I'm having is that if the `users_addresses` table does not contain a corresponding record for the user, then I get an empty result. If there is a record, it should only return the one marked `default` = 1, but obviously it would be better to ensure that it always returns just one in case, for any reason, one user has more than one address marked as default.
So my question is: how can I make sure that even if there is no corresponding record in the `users_addresses` table I will still get at least the user record, and how can I ensure that the query will always match just one address record?
Any help would be very much appreciated. | `LEFT JOIN` will include entries from the left table if there's no corresponding entry in the right tables.
However, in your case you then filter by `a.default = 1`, which removes entries with no default address.
To avoid that, you will need to either join with a subquery
```
LEFT JOIN (
SELECT * FROM user_addresses
WHERE `default` = 1
) a
ON a.user = u.id
```
With this option you can limit to at most one 'default' address per user by using a `GROUP BY user` in the subselect.
Or you could use the `a.default = 1` as a join condition and not a where condition, i.e.
```
LEFT JOIN user_addresses a
ON a.user = u.id and a.default = 1
```
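The difference between filtering in the `ON` clause and in the `WHERE` clause is easy to verify; a hedged sketch with Python's `sqlite3` (the flag column is named `dflt` here to sidestep the `default` keyword):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INT);
CREATE TABLE users_addresses (user INT, dflt INT, town TEXT);
INSERT INTO users VALUES (1), (2);                 -- user 2 has no address
INSERT INTO users_addresses VALUES (1, 1, 'Oslo');
""")

# Filtering in WHERE turns the LEFT JOIN into an inner join:
# rows where the address columns are NULL fail "dflt = 1" and disappear.
in_where = conn.execute("""
SELECT u.id, a.town FROM users u
LEFT JOIN users_addresses a ON a.user = u.id
WHERE a.dflt = 1
ORDER BY u.id
""").fetchall()

# Filtering in ON keeps every user; unmatched ones get NULL columns.
in_on = conn.execute("""
SELECT u.id, a.town FROM users u
LEFT JOIN users_addresses a ON a.user = u.id AND a.dflt = 1
ORDER BY u.id
""").fetchall()

print(in_where)  # [(1, 'Oslo')]
print(in_on)     # [(1, 'Oslo'), (2, None)]
```

User 2 survives only in the second query, which is exactly the behavior the asker wants.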
Not 100% sure about that last suggestion, but I'm pretty confident this would work.
Edit: and you obviously also have the option suggested by @steinmas, i.e. extending the filter on default to accept also *null* values.
To ensure you get at most one default address by user, you'll most likely need a `GROUP BY user` command at some point | Try changing your WHERE clause to this:
```
WHERE (`a`.`default` = 1 OR `a`.`default` IS NULL)
AND `u`.`id` = 3
``` | MySQL return record where left joined table has no record | [
"",
"mysql",
"sql",
"left-join",
""
] |
I am trying to add incremented values to a new column in a table.
Here is a sample structure of table
```
---------------------
Name - class - id
---------------------
abbc - 2 - null
efg - 4 - null
ggh - 6 - null
---------------------
```
I want to write a query that will generate unique ids for all records in the table.
Here is the query I have tried, but the id column still shows null:
```
set @i=0;
update table1 set id =(@i:=@i+1);
``` | What you have shown should work; the id column should be getting assigned values.
I tested your statement; I verified it works on my database. Here's the test case I ran:
```
CREATE TABLE table1 (`name` VARCHAR(4), class TINYINT, id INT);
INSERT INTO table1 (`name`,class) VALUES ('abbc',2),('efg',4),('ggh',6);
SET @i=0;
UPDATE table1 SET id =(@i:=@i+1);
SELECT * FROM table1;
```
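As an aside, engines without user variables can produce the same numbering with a correlated count; a hedged sketch (not part of the original MySQL answer) using Python's `sqlite3` and its implicit `rowid`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (name TEXT, class INT, id INT);
INSERT INTO table1 (name, class) VALUES ('abbc', 2), ('efg', 4), ('ggh', 6);
""")

# Each row's id becomes the number of rows at or before it,
# i.e. 1, 2, 3, ... in insertion order.
conn.execute("""
UPDATE table1
SET id = (SELECT COUNT(*) FROM table1 t2 WHERE t2.rowid <= table1.rowid)
""")

ids = [r[0] for r in conn.execute("SELECT id FROM table1 ORDER BY rowid")]
print(ids)  # [1, 2, 3]
```

This avoids any session-variable state entirely, at the cost of an O(n²) correlated scan on large tables.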
Note that MySQL user variables are specific to a database session. If the SET is running in one session, and the UPDATE is running another session, that would explain the behavior you are seeing. (You didn't mention what client you ran the statements from; most clients reuse the same connection, and don't churn connections for each statement, I'm just throwing that out as a possibility.)
To ensure that the `@i` variable is actually initialized when the UPDATE statement runs, you can do the initialization in the UPDATE statement itself by doing something like this:
```
UPDATE table1 t
CROSS
JOIN (SELECT @i := 0) s
SET t.id =(@i:=@i+1);
```
I tested that, and that also works on my database. | Try this query, my friend:
```
set @i=0;
update table1 set id =(select @i:=@i+1);
```
## [SQL Fiddle](http://sqlfiddle.com/#!2/73e736/1) | Adding incremented value to new column | [
"",
"mysql",
"sql",
"multiple-insert",
""
] |
I have a table with a composite primary key like this:
```
Field1 PK int not null
Field2 PK int not null
```
When I try to map this table, I get this error:
```
Error 3 Error 3034: Problem in mapping fragments starting at lines 2212, 2218:
Two entities with possibly different keys are mapped to the same row.
Ensure these two mapping fragments map both ends of the AssociationSet to the corresponding columns.
```
I also tried deleting and re-creating that table inside the model. When I add the table it gives another error, but in the end I always get this error.
How can i handle this problem please help me... | When i investigate the problem, i realised something becuse i read an article before for this data model situation and i understand the why the problem occur. (Problem in mapping fragments in Entity Framework)
Actually problem comes from table mapping because i said that table has many to many relationship so That article says if you put that table in the model design it always gives us this error and finally when we delete that table on design side and add table silently in data model so program can be build. I ‘m telling you this because maybe you can make an idea for this because i learned this and when i add new view in model and vs doesnt give me any error.
This is the old model picture when the delete that table inside red box The program doesn't give any error and continiue to use bottom table

 | If nothing else works, try deleting and recreating the whole .edmx-file.
I got this problem when I added a couple of many-to-many tables to an existing Entity Framework 6.2 project.
I tried removing and regenerating all tables in the model. Nothing seemed to work. I believe there was some "junk" somewhere in the EF project causing the problem.
When regenerating the edmx, I added all tables in the project, including the many-to-many tables. | Two entities with possibly different keys are mapped to the same row | [
"",
"sql",
"entity-framework",
"entity-framework-mapping",
""
] |
Imagine the following structure:
```
Group Color ColorDesc
----- ----- -----
1 'Red' 'The cool name of Red Color'
1 'Green' 'The cool name of Green Color'
2 'Blue' 'The cool name of Blue Color'
2 'Yellow' 'The cool name of Yellow Color'
2 'Purple' 'The cool name of Purple Color'
3 'Pink' 'The cool name of Pink Color'
```
I would like to group the rows on the `Group` field, and if there is only one row in a group, I need to output the ColorDesc column (as for `Group`=3 below), but if there is more than one row, I would like to get a delimited string on the Color field (as for 1 and 2). Desired output:
```
Group GroupedColor
----- -----
1 'Red', 'Green'
2 'Blue', 'Yellow', 'Purple'
3 'The cool name of Pink Color'
```
I can create a multi-parameter CLR aggregate and live happily, but is there an efficient way to achieve this with native T-SQL? | ```
Declare @t table(Groups int,Color varchar(50),ColorDesc varchar(50))
insert into @t
select 1 ,'Red', 'The cool name of Red Color' union all
select 1,'Green', 'The cool name of Green Color' union all
select 2,'Blue', 'The cool name of Blue Color' union all
select 2,'Yellow', 'The cool name of Yellow Color' union all
select 2,'Purple', 'The cool name of Purple Color' union all
select 3,'Pink', 'The cool name of Pink Color'
;with cte as
(
select groups,count(*) cnt from @t group by groups
)
select distinct b.groups,case when cnt=1 then a.ColorDesc
else stuff((select ',' + color from @t c where c.groups=b.Groups for xml path('') ),1,1,'') end
from cte b inner join @t a on a.Groups=b.Groups
-- Alternative without DISTINCT (test both with a lot of data):
Select * from
(select ROW_NUMBER() over(partition by b.groups order by b.groups) rn, b.groups,case when cnt=1 then a.ColorDesc
else stuff((select ',' + color from @t c where c.groups=b.Groups for xml path('') ),1,1,'') end colorDesc
from cte b inner join @t a on a.Groups=b.Groups )t4 where rn=1
``` | Try like this
```
SELECT DISTINCT Group,
STUFF((
SELECT ',' + Color
FROM Table1 S
WHERE T.Group = S.Group
FOR XML path('')
), 1, 1, '') [GroupedColor]
FROM Table1 T
``` | How to output a single column or concatenate another one if multiple rows are in group in T-SQL | [
"",
"sql",
"sql-server",
"t-sql",
"sqlclr",
""
] |
How can I avoid having to type the same `CASE WHEN...` over and over in a `WHERE` clause?
`CASE WHEN rawGlass.blockHeight > rawGlass.blockWidth THEN rawGlass.blockWidth ELSE rawGlass.blockHeight END` repeats quite frequently in this query. I'd like to only have to do that once. I already know referencing `[Glass Height]` in the `WHERE` clause will result in an `Invalid column...` error.
I realize there may be other solutions that don't involve using a `CASE WHEN` that will work for this particular instance and I welcome them, but I would really like an answer to the question at hand since I've come across this problem before with other queries.
```
SELECT
parts.pgwName AS [Part Name],
parts.active AS [Active],
cutVinyl.boardName AS [Vinyl Board],
cutVinyl.rollWidth AS [Roll Width],
CASE WHEN rawGlass.blockHeight > rawGlass.blockWidth THEN rawGlass.blockWidth ELSE rawGlass.blockHeight END AS [Glass Height]
FROM
parts
LEFT JOIN cutVinyl ON parts.vinylBoard = cutVinyl.boardName
LEFT JOIN rawGlass ON parts.outerSku = rawGlass.sku
WHERE
(
(cutVinyl.stretchRadius IS NOT NULL OR cutVinyl.stretchRadius = 0) AND
cutVinyl.rollWidth < (CASE WHEN rawGlass.blockHeight > rawGlass.blockWidth THEN rawGlass.blockWidth ELSE rawGlass.blockHeight END)
) ...
```
***Note:** The list of conditions goes on a while, so wrapping the whole `WHERE` clause in one `CASE WHEN...` statement and then repeating the conditions twice doesn't really help any.* | Use CROSS APPLY
```
SELECT
pgwName AS [Part Name],
active AS [Active],
boardName AS [Vinyl Board],
rollWidth AS [Roll Width],
[Glass Height]
FROM
parts
LEFT JOIN cutVinyl ON parts.vinylBoard = cutVinyl.boardName
LEFT JOIN rawGlass ON parts.outerSku = rawGlass.sku
CROSS APPLY (
SELECT CASE WHEN blockHeight > blockWidth THEN blockWidth
ELSE blockHeight
END AS [Glass Height]
) AS CA1
WHERE
(
(stretchRadius IS NOT NULL OR stretchRadius = 0) AND
rollWidth < [Glass Height]
) ...
``` | You can choose to write an outer query with current as a sub-query like this:
```
SELECT * FROM
(
SELECT
pgwName AS [Part Name],
active AS [Active],
boardName AS [Vinyl Board],
rollWidth AS [Roll Width],
stretchRadius,
CASE WHEN blockHeight > blockWidth THEN blockWidth ELSE blockHeight END AS [Glass Height]
FROM
parts
LEFT JOIN cutVinyl ON parts.vinylBoard = cutVinyl.boardName
LEFT JOIN rawGlass ON parts.outerSku = rawGlass.sku
) out
WHERE
(
(stretchRadius IS NOT NULL OR stretchRadius = 0) AND
rollWidth < [Glass Height]
) ...
``` | Avoid Repeated Case When | [
"",
"sql",
"case",
"sql-server-2008-express",
"simplify",
""
] |
I have a problem with my table in my database.
Here's the problem:
Why is `'BP99999'` greater than `'BP100000'`?
Whenever I query where this column is ordered by `DESC` it takes `'BP99999'` as the first row instead of the `'BP100000'`
Additional info: my column type is `nvarchar`
Do you have any suggestions for this?
Thank you very much! | Presently, because the values are compared as text, the `9` after the `BP` is greater than the `1`, resulting in the problem you are facing.
The solution is to strip the first two characters from the column value and then sort on the remaining digits as an integer.
You may try something like this:
```
SELECT [BProject_ID]
FROM [Workload].[dbo].[Bending_Projects]
ORDER BY CAST(RIGHT([BProject_ID], LEN([BProject_ID]) - 2) AS INT)
``` | ```
ORDER BY CAST(RIGHT([BProject_ID], LEN([BProject_ID]) - 2) AS INT)
```
Or simpler
```
ORDER BY CAST(SUBSTRING([BProject_ID], 3, 8000) AS INT)
``` | SQL nvarchar order | [
"",
"sql",
"sql-order-by",
""
] |
I'm using this query to calculate yearly finance values.
```
select [Year],
[FinanceValue-2014],
[FinanceValue-2013],
[FinanceValue-2012],
[FinanceValue-2014]-[FinanceValue-2013] as [FinanceValue Variance]
```
Now I need to multiply [FinanceValue Variance] by 2.50, and for that, how can I use the alias name as a column in the query? I tried this, but it doesn't work:
```
select [Year],
[FinanceValue-2014],
[FinanceValue-2013],
[FinanceValue-2012],
[FinanceValue-2014]-[FinanceValue-2013] as [FinanceValue Variance],
[FinanceValue Variance] * 2.50 as [NewVariance]
from Finance
```
The SumofVariance output will be something like 5690.5893656. Also, how can I round the SumofVariance off to 4 decimal places, like 5690.5894? I really appreciate any help on this. | This can be done using table expressions, as below. In any clause of the outer query, you can refer to column aliases that were assigned in the SELECT clause of the inner query. This helps you get around the fact that you can’t refer to column aliases assigned in the SELECT clause in query clauses that are logically processed prior to the SELECT clause (for example, WHERE or GROUP BY).
So Write as:
```
select [Year],
[FinanceValue-2014],
[FinanceValue-2013],
[FinanceValue-2012],
[FinanceValue Variance],
Round([FinanceValue Variance] * 2.50,4) as [NewVariance]
from (
select [Year],
[FinanceValue-2014],
[FinanceValue-2013],
[FinanceValue-2012],
[FinanceValue-2014]-[FinanceValue-2013] as [FinanceValue Variance]
from Finance)T
``` | Use CROSS APPLY to create aliases
```
SELECT [Year],
[FinanceValue-2014],
[FinanceValue-2013],
[FinanceValue-2012],
[FinanceValue Variance],
[FinanceValue Variance] * 2.50 as [NewVariance]
FROM Finance
CROSS APPLY (
SELECT [FinanceValue-2014]-[FinanceValue-2013] as [FinanceValue Variance]
) AS CA1
``` | Use alias name as a column | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have two Select queries.
The first Select query gives the output which has two columns viz.
```
A B
------
1 2
3 4
5 6
7 8
```
The second Select query gives an output which has two columns, viz. Column B and Column C. All the values in Column B of this select statement match the values in Column B of the first Select statement, i.e.:
```
B C
------
2 25
4 50
6 30
8 50
```
Now, I need to merge the outputs of the above two Select queries. i.e
```
A B C
----------
1 2 25
3 4 50
5 6 30
7 8 50
```
I cannot use views to store the output of the two select queries. I need to use the Column B in both select queries to merge. However, I am not able to figure out how to go about it. | If you have *elaborated* queries (not just *tables* to join), you may try using `with` construction
```
with
Query1 as ( -- <- Put your 1st Query text here
select A,
B
...
),
Query2 as ( -- <- Put your 2nd Query text here
select B,
C
...
)
select Query1.A,
Query1.B,
Query2.C
from Query1,
Query2
where Query1.B = Query2.B
```
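The `with` form runs unchanged on most engines; here is a hedged, self-contained check using Python's `sqlite3` with the question's sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TableA (A INT, B INT);
CREATE TABLE TableB (B INT, C INT);
INSERT INTO TableA VALUES (1, 2), (3, 4), (5, 6), (7, 8);
INSERT INTO TableB VALUES (2, 25), (4, 50), (6, 30), (8, 50);
""")

# Wrap each query in a CTE, then join the two on column B.
rows = conn.execute("""
WITH Query1 AS (SELECT A, B FROM TableA),
     Query2 AS (SELECT B, C FROM TableB)
SELECT Query1.A, Query1.B, Query2.C
FROM Query1
JOIN Query2 ON Query1.B = Query2.B
ORDER BY Query1.A
""").fetchall()
print(rows)  # [(1, 2, 25), (3, 4, 50), (5, 6, 30), (7, 8, 50)]
```

Each CTE can be replaced by an arbitrarily complex SELECT, which is the whole point of this approach.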
If your case is not *that complicated*, e.g. both Query1 and Query2 are in fact *tables*, say `Table1` and `Table2` you can do well with a simpler solution:
```
select Table1.A,
Table1.B,
Table2.C
from Table1,
Table2
where Table1.B = Table2.B
``` | Consider your tables having fields like
```
TableA(A ,B) , TableB(B,C)
```
Try using `JOIN` like
```
SELECT TableA.A , TableA.B, TableB.C
FROM TableA
JOIN TableB ON TableA.B = TableB.B;
``` | How to merge(columns) outputs of two SELECT statements in oracle sql? | [
"",
"mysql",
"sql",
"sql-server",
"oracle",
""
] |
I don't know if the title is good, but anyway: I was wondering if we can add up the sums of two tables, matching duplicate rows between them, just like the following example:
Table A:
```
Date BL Client Design Ref1 Ref2 Ref3 Qte
14/01/2013 13011401 A VT VT1 JAUNE XL 3
14/01/2013 13011402 B VT VT2 GRIS L 62
16/01/2013 13011601 D VT VT1 GRIS L 10
19/01/2013 13011903 F VT VT2 JAUNE L 15
```
Table B:
```
Date BL Client Design Ref1 Ref2 Ref3 Qte
14/01/2013 13011401 A VT VT1 JAUNE XL 3
14/01/2013 13011402 B VT VT2 GRIS L 100
16/01/2013 13011601 D VT VT1 GRIS L 10
19/01/2013 13011903 F VT VT2 JAUNE L 15
```
Result:
```
Date BL Client Design Ref1 Ref2 Ref3 Qte
14/01/2013 13011401 A VT VT1 JAUNE XL 6
14/01/2013 13011402 B VT VT2 GRIS L 162
16/01/2013 13011601 D VT VT1 GRIS L 20
19/01/2013 13011903 F VT VT2 JAUNE L 30
```
The condition is that (**Client, Design, Ref1, Ref2, Ref3**) should be the same in the two tables! | You can try using `JOINS`
```
SELECT
TableA.Date,
TableA.BL,
TableA.Client,
TableA.Design,
TableA.Ref1,
TableA.Ref2,
TableA.Ref3,
TableA.Qte + TableB.Qte As Qte
FROM
TableA
JOIN
TableB ON TableA.Client = TableB.Client
AND TableA.Design = TableB.Design
AND TableA.Ref1 = TableB.Ref1
AND TableA.Ref2 = TableB.Ref2
AND TableA.Ref3 = TableB.Ref3;
``` | I guess this is what you want
```
select t.Date,t.BL,t.Client,t.Design,t.Ref1,t.Ref2,t.Ref3, t.Qte+t1.Qte as qte
from table1 t inner join table2 t1
on t.client=t1.client and
t.design=t1.design and
t.ref1=t1.ref1 and
t.ref2=t1.ref2 and
t.ref3=t1.ref3;
``` | add up the result of the sum of two tables | [
"",
"mysql",
"sql",
"sql-server",
""
] |
I need to check whether a variable has a value or not.
```
declare @name varchar(20)
set @name = (SELECT Product_Name
FROM tb_new_product_Name_id
WHERE Product_Name = @productName)
if (@name ) // here I need to check it
```
How to do it? thanks | Try this
```
if (@name is null or @name = '') -- it will indicate whether it contains a value or not
``` | Just do the [`IF`](http://technet.microsoft.com/en-us/library/ms182717.aspx) directly, using [`EXISTS`](http://technet.microsoft.com/en-us/library/ms188336.aspx) instead:
```
IF EXISTS(SELECT * FROM tb_new_product_Name_id
where Product_Name=@productName)
BEGIN
--Do something because a row existed with that name
END
```
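The `EXISTS` probe itself is portable; as a hedged sketch, here is the same check in Python's `sqlite3`, where `EXISTS` evaluates to `1` or `0`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tb_new_product_Name_id (Product_Name TEXT);
INSERT INTO tb_new_product_Name_id VALUES ('Widget');
""")

def product_exists(conn, product_name):
    # EXISTS stops at the first matching row, so this stays cheap
    # even on large tables.
    return conn.execute(
        "SELECT EXISTS(SELECT 1 FROM tb_new_product_Name_id "
        "WHERE Product_Name = ?)", (product_name,)).fetchone()[0] == 1

found = product_exists(conn, "Widget")
missing = product_exists(conn, "Gadget")
print(found, missing)  # True False
```

This mirrors the T-SQL `IF EXISTS(...)` branch without ever materializing the matching rows.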
We may be able to further assist with simplifying your code if you told us more on what you were planning to do having confirmed a row existed in `tb_new_product_Name_id`. It looks like you're writing very procedural code - first I'll do X, then I'll do Y, then I'll do Z, etc. SQL excels as a language where you tell it "what to do" - for the entire data set you want to compute - not "how to do it" in a step by step, row by row fashion. | How to check if a variable has a value in a SQL Server 2008 stored procedure | [
"",
"sql",
"sql-server",
""
] |
I need to select the last inserted row's identity value, so I tried the code below:
```
select SCOPE_IDENTITY() from tb_new_product_Name_id
```
but it's not showing. Where did I make an error? | There are three similar concepts in MS SQL Server: @@IDENTITY, SCOPE\_IDENTITY() and IDENT\_CURRENT().
Here is some sample code:
```
CREATE TABLE TestData (
id INT IDENTITY(1,1) PRIMARY KEY,
name VARCHAR(25) NOT NULL)
INSERT TestData VALUES('one'), ('two')
CREATE TABLE TrigData (
id INT IDENTITY(100,5) PRIMARY KEY,
name VARCHAR(20) NULL)
INSERT TrigData(name)
VALUES('one'), ('two'), ('three')
GO
CREATE TRIGGER Ztrig
ON TestData
FOR INSERT AS
BEGIN
INSERT TrigData VALUES('from trig')
END
GO
INSERT TestData VALUES('dummy')
SELECT @@IDENTITY AS [@@IDENTITY]
SELECT SCOPE_IDENTITY() AS [SCOPE_IDENTITY]
SELECT IDENT_CURRENT('TestData') AS [IDENT_CURRENT]
```
It returns 115, 3 and 3.
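For comparison, other engines expose the same idea under different names. A hedged SQLite sketch via Python, where `last_insert_rowid()` (and the cursor's `lastrowid`) is roughly analogous to `SCOPE_IDENTITY()`, being local to the connection:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")

cur = conn.execute("INSERT INTO t (name) VALUES ('one')")
first_id = cur.lastrowid  # identity of the row just inserted

conn.execute("INSERT INTO t (name) VALUES ('two')")
second_id = conn.execute("SELECT last_insert_rowid()").fetchone()[0]

print(first_id, second_id)  # 1 2
```

Note that neither form takes a table name as an argument, just like SCOPE\_IDENTITY().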
Hope it helps you. | You don't need to specify a table name here.
SCOPE\_IDENTITY() is just a function which returns the most recent identity value generated in the current scope. | how to get last inserted identity value | [
"",
"sql",
"sql-server",
""
] |
I want to display records from the last 4 months, counting from the current date.
I don't want to consider the time portion.
How can I get just the date part in the query below?
```
where OrderDate >= DATEADD(month, -4, GETDATE())
``` | Why not use the simple DATEDIFF function
```
where DATEDIFF(MM, OrderDate, GETDATE()) < 4
``` | If you're using SQL Server 2008, try converting `GETDATE()` to a `DATE` directly.
```
WHERE OrderDate >= DATEADD(month, -4, CONVERT(date, GETDATE()))
```
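The date-only arithmetic translates directly to other engines; a hedged sketch with Python's `sqlite3`, pinning "today" to a fixed date so the result is reproducible:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (OrderDate TEXT);
INSERT INTO orders VALUES ('2014-01-31'), ('2014-02-15'), ('2014-05-01');
""")

# date(..., '-4 months') yields a date with no time portion;
# '2014-06-15' stands in for GETDATE() to keep the demo deterministic.
cutoff = conn.execute(
    "SELECT date('2014-06-15', '-4 months')").fetchone()[0]

recent = [r[0] for r in conn.execute(
    "SELECT OrderDate FROM orders WHERE OrderDate >= ? ORDER BY OrderDate",
    (cutoff,))]
print(cutoff, recent)  # 2014-02-15 ['2014-02-15', '2014-05-01']
```

Computing the cutoff once and comparing against the column keeps the filter index-friendly, unlike wrapping the column itself in a function.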
<http://sqlfiddle.com/#!3/df444/2> | How to get only date part while using dateadd() and getdate() | [
"",
"sql",
"sql-server",
"t-sql",
"date",
""
] |
I have a table called `requests` on which the columns are `id`, `placeId`, `songId`, `userId`
`id` is the primary index of the table. Rest of the columns are only unsigned integers and no other unique key is defined.
I want `placeId` & `songId` pairs to be unique, i.e., if a row has `placeId` : 5 and `songId` : 12, no other rows can have the same combination.
I want this check to happen at the SQL level, so that I can query like `insert into requests (...) values (...) on duplicate key do something else` | you can create a UNIQUE index on multiple columns like this
```
CREATE UNIQUE INDEX placeSong
ON requests (placeId, songId)
``` | Another method is to add an unique constraint to the table :
```
ALTER TABLE requests ADD CONSTRAINT placeSong UNIQUE( placeId , songId );
``` | Using group of columns as a unique key in MySQL | [
"",
"mysql",
"sql",
""
] |
The following are the tables are an example of the issue I am having:
**Companies** *{Id}*
**CompanyPeople** *{CompanyId, PeopleId}*
**People** *{Id}*
**PeopleChildren** *{PeopleId, ChildrenId}*
**Children** *{Id}*
I want to delete all the people of a company and then delete the children of the people.
I cannot delete the people without deleting the items in the join table **CompanyPeople**. However if I do this I cannot find which people are part of the company.
The same issue follows down to **Children**
What is the best way to approach this? | I would normally use [table variables](http://technet.microsoft.com/en-us/library/ms175010.aspx) or [temp tables](http://technet.microsoft.com/en-us/library/ms174979.aspx) to capture the data to be used first, and then perform the deletes:
```
declare @people table (PersonID int not null)
create table #children (ChildrenID int not null)
insert into @people(PersonID)
select PersonID from CompanyPeople
where CompanyID = @CompanyToRemove
insert into #children (ChildrenID)
select ChildrenID from PeopleChildren
where PersonID in (select PersonID from @people)
delete from PeopleChildren where PersonID in (select PersonID from @people)
delete from Children where ChildrenID in (select ChildrenID from #children)
delete from CompanyPeople where PersonID in (select PersonID from @people)
delete from People where PersonID in (select PersonID from @people)
``` | You can first delete the children of the people who are in the company you want to delete, by:
```
delete from Children
where ChildrenID in (select Children.ChildrenID
                     from Children, PeopleChildren, CompanyPeople
                     where Children.ChildrenID = PeopleChildren.ChildrenID
                       and PeopleChildren.PeopleID = CompanyPeople.PeopleID
                       and CompanyPeople.CompanyID = ?)
```
Only then do you delete the people, by:
```
delete from People where PeopleID in (select PeopleID from CompanyPeople where CompanyPeople.CompanyID = ?)
``` | Delete relational join tables | [
"",
"sql",
"sql-server",
"join",
"sql-delete",
""
] |
Yesterday I was looking at queries like this:
```
SELECT <some fields>
FROM Thing
WHERE thing_type_id = 4
```
... and couldn't help but think this was very "readable". What's '4'? What does it mean? I did the same thing in coding languages before, but now I would use constants for this, turning the 4 into a THING\_TYPE\_AVAILABLE or some such name. No arcane number with no meaning anymore!
[I asked about this on here](https://stackoverflow.com/questions/21882384/using-human-readable-constants-in-queries) and got answers as to how to achieve this in SQL.
I'm mostly partial to using JOINS with existing type tables where you have an ID and a Code, with other solutions possibly of use when there are no such tables (not every database is perfect...)
```
SELECT thing_id
FROM Thing
JOIN ThingType USING (thing_type_id)
WHERE thing_type_code IN ('OPENED', 'ONHOLD')
```
So I started using this on a query or two and my colleagues were soon upon me: "hey, you have literal codes in the query!" "Um, you know, we usually go with pks for that".
While I can understand that this method is not the usual method (hey, it wasn't for me either until now), is it really so bad?
What are the pros and cons of doing things this way? My main goal was readability, but I'm worried about performance and would like to confirm whether the idea is sound or not.
**EDIT: Note that I'm not talking about PL/SQL but straight-up queries, the kind that usually starts with a SELECT.**
**EDIT 2:**
To further clarify my situation with fake (but structurally similar) examples, here are the tables I have:
```
Thing
------------------------------------------
thing_id | <attributes...> | thing_type_id
1 3
4 7
5 3
ThingType
--------------------------------------------------
thing_type_id | thing_type_code | <attributes...>
3 'TYPE_C'
5 'TYPE_E'
7 'TYPE_G'
```
thing\_type\_code is just as unique as thing\_type\_id. It is currently also used as a display string, which is a mistake in my opinion, but would be easily fixable by adding a thing\_type\_label field duplicating thing\_type\_code for now, and changeable at any time later on if needed.
Supposedly, filtering with thing\_type\_code = 'TYPE\_C', I'm sure to get that one line which happens to be thing\_type\_id = 3. Joins can (and quite probably should) still be done with the numerical IDs. | Primary key values should **not** be coded as literals in queries.
The reasons are:
* Relational theory says that PKs should not convey any meaning. Not even a specific identity. They should be strictly row identifiers and not relied upon to be a specific value
* Due to operational reasons, PKs are often different in different environments (like dev, qa and prod), even for "lookup" tables
For these reasons, coding literal IDs in queries is brittle.
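That brittleness is easy to demonstrate. Below is a minimal sketch (sqlite3 via Python, reusing the question's hypothetical `Thing`/`ThingType` schema — not tied to any particular server): two environments hand out different surrogate ids for the same codes, and only the code-based query stays correct in both.

```python
import sqlite3

def build_env(id_map):
    # Each "environment" (dev, prod, ...) hands out its own surrogate ids.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE ThingType (thing_type_id INTEGER PRIMARY KEY,"
                " thing_type_code TEXT)")
    con.execute("CREATE TABLE Thing (thing_id INTEGER PRIMARY KEY,"
                " thing_type_id INTEGER)")
    for code, type_id in id_map.items():
        con.execute("INSERT INTO ThingType VALUES (?, ?)", (type_id, code))
    con.execute("INSERT INTO Thing VALUES (1, ?)", (id_map["OPENED"],))
    con.execute("INSERT INTO Thing VALUES (2, ?)", (id_map["ONHOLD"],))
    return con

by_code = """SELECT thing_id FROM Thing
             JOIN ThingType USING (thing_type_id)
             WHERE thing_type_code = 'OPENED'"""

dev = build_env({"OPENED": 3, "ONHOLD": 7})
prod = build_env({"OPENED": 42, "ONHOLD": 9})  # different ids, same codes

print(dev.execute(by_code).fetchall())   # [(1,)]
print(prod.execute(by_code).fetchall())  # [(1,)] -- stable across environments
print(prod.execute(
    "SELECT thing_id FROM Thing WHERE thing_type_id = 3").fetchall())
# [] -- the literal id 3 silently matches nothing in this environment
```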
Coding data literals like `'OPENED'` and `'ONHOLD'` is GOOD practice, because these values are going to be consistent across all servers and environments. If they do change, changing queries to be in sync will be part of the change script. | I assume that the question is about the two versions of the query -- one with the numeric comparison and the other with the join and string comparison.
Your colleagues are correct that the form with `where thing_id in (list of ids)` will perform better than the `join`. The difference in performance, however, might be quite minor if `thing_id` is not indexed. The query will already require a full table scan on the original table.
In most other respects, your version with the `join` is better. In particular, it makes the intent of the query cleaner and overall make the query more maintainable. For a small reference table, the performance hit may not be noticeable. In fact, in some databases, this form could be faster. This would occur when the `in` is evaluated as a series of `or` expressions. If the list is long, it might be faster to do an index lookup.
There is one downside to the `join` approach. If the values in the columns change, then the code also needs to be changed. I wouldn't be surprised if your colleague who suggests using primary keys has had this experience. S/he is working on an application and builds it using `join`s. Great. Lots of code. All clear. All maintainable. Then every week, the users decide to change the definitions of the codes. That can make almost any sane person prefer primary keys over using the reference table. | Is using JOINs to avoid numerical IDs a bad thing? | [
"sql",
"oracle",
"join"
] |
I have a table containing columns like:
```
Prefix | CR
----------------------------------------
g | ;#WR_1;#WR_2;#WR_3;#WR_4;#
v | ;#WR_3;#WR_4;#
j | WR_2
m | WR_1
d | ;#WR_3;#WR_4;#
f9 | WR_3
```
I want to retrieve the data from the **CR** column where it has the longest text string, i.e. in the current table it is **;#WR\_1;#WR\_2;#WR\_3;#WR\_4;#**.
I'm using
```
SELECT max(len(CR)) AS Max_Length_String FROM table1
```
But it returns
```
Max_Length_String
----------------------------------------
26
```
But what I need is not the length (26); I want the value itself, like this:
```
Max_Length_String
----------------------------------------
;#WR_1;#WR_2;#WR_3;#WR_4;#
``` | The easiest way is:
```
select top 1 CR
from table t
order by len(CR) desc
```
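As a quick runnable check of this approach (sqlite via Python, so `top 1`/`len` become `LIMIT 1`/`length`; the data is the table from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1 (Prefix TEXT, CR TEXT)")
con.executemany("INSERT INTO table1 VALUES (?, ?)", [
    ("g", ";#WR_1;#WR_2;#WR_3;#WR_4;#"), ("v", ";#WR_3;#WR_4;#"),
    ("j", "WR_2"), ("m", "WR_1"), ("d", ";#WR_3;#WR_4;#"), ("f9", "WR_3"),
])

# Longest string first, keep only one row.
longest, = con.execute(
    "SELECT CR FROM table1 ORDER BY length(CR) DESC LIMIT 1").fetchone()
print(longest)  # ;#WR_1;#WR_2;#WR_3;#WR_4;#
```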
Note that this will only return one value if there are multiple with the same longest length. | You can:
```
SELECT CR
FROM table1
WHERE len(CR) = (SELECT max(len(CR)) FROM table1)
```
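To make the difference concrete, here is a small runnable illustration (sqlite via Python, with `len` spelled `length`; the two six-character values are made up): when several values tie for the maximum length, this form returns all of them.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1 (CR TEXT)")
con.executemany("INSERT INTO table1 VALUES (?)",
                [("abc",), ("abcdef",), ("uvwxyz",)])

# Both six-character values satisfy the max-length condition.
rows = con.execute("""
    SELECT CR FROM table1
    WHERE length(CR) = (SELECT max(length(CR)) FROM table1)""").fetchall()
print(sorted(rows))  # [('abcdef',), ('uvwxyz',)] -- both ties, unlike TOP 1
```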
Having just received an upvote more than a year after posting this, I'd like to add some information.
* This query gives all values with the maximum length. With a TOP 1 query you get only one of these, which is usually not desired.
* This query must probably read the table twice: a full table scan to get the maximum length and another full table scan to get all values of that length. These operations, however, are very simple operations and hence rather fast. With a TOP 1 query a DBMS reads all records from the table and then sorts them. So the table is read only once, but a sort operation on a whole table is quite some task and can be very slow on large tables.
* One would usually add `DISTINCT` to my query (`SELECT DISTINCT CR FROM ...`), so as to get every value just once. That *would* be a sort operation, but only on the few records already found. Again, no big deal.
* If the string lengths have to be dealt with quite often, one might think of creating a computed column (calculated field) for it. This is available as of Ms Access 2010. But reading up on this shows that you cannot index calculated fields in MS Access. As long as this holds true, there is hardly any benefit from them. Applying `LEN` on the strings is usually not what makes such queries slow. | How to find longest string in the table column data | [
"sql",
"ms-access"
] |
There is a table containing all names:
```
CREATE TABLE Names(
Name VARCHAR(20)
)
```
And there are multiple tables with similar schema.
Let's say:
```
CREATE TABLE T1
(
Name VARCHAR(20),
Description VARCHAR(30),
Version INT
)
CREATE TABLE T2
(
Name VARCHAR(20),
Description VARCHAR(30),
Version INT
)
```
I need to query description for each name, by following priority:
1. any records in T1 with matching name and version = 1
2. any records in T1 with matching name and version = 2
3. any records in T2 with matching name and version = 1
4. any records in T2 with matching name and version = 2
I want results from a lower-priority source only if there are no results from a higher-priority source.
So far that's I've got:
```
SELECT
N.Name AS Name, Description =
CASE
WHEN (T11.Description IS NOT NULL) THEN T11.Description
WHEN (T12.Description IS NOT NULL) THEN T12.Description
WHEN (T21.Description IS NOT NULL) THEN T21.Description
WHEN (T22.Description IS NOT NULL) THEN T22.Description
ELSE NULL
END
FROM Names AS N
LEFT JOIN T1 AS T11 ON T11.Name = N.Name AND T11.Version = 1
LEFT JOIN T1 AS T12 ON T12.Name = N.Name AND T12.Version = 2
LEFT JOIN T2 AS T21 ON T21.Name = N.Name AND T21.Version = 1
LEFT JOIN T2 AS T22 ON T22.Name = N.Name AND T22.Version = 2
```
It's working, but are there too many `JOIN`s here? Is there a better approach?
**[sqlfiddle](http://sqlfiddle.com/#!3/25574/1)**
Sample Input:
```
INSERT INTO Names VALUES('name1')
INSERT INTO Names VALUES('name2')
INSERT INTO Names VALUES('name3')
INSERT INTO Names VALUES('name4')
INSERT INTO Names VALUES('name5')
INSERT INTO Names VALUES('name6')
INSERT INTO T1 VALUES ('name1','name1_T1_1', 1)
INSERT INTO T1 VALUES ('name2','name2_T1_1', 1)
INSERT INTO T1 VALUES ('name3','name3_T1_1', 1)
INSERT INTO T1 VALUES ('name3','name3_T1_2', 2)
INSERT INTO T1 VALUES ('name5','name5_T1_2', 2)
INSERT INTO T2 VALUES ('name1','name1_T2_1', 1)
INSERT INTO T2 VALUES ('name4','name4_T2_1', 1)
```
Expected result:
```
--
-- Expected result:
-- Name Description
-- name1 name1_T1_1
-- name2 name2_T1_1
-- name3 name3_T1_1
-- name4 name4_T2_1
-- name5 name5_T1_2
-- name6 NULL
``` | Well, this is a solution to eliminate the `case` statement and minimize the repetitive part of the query. It requires some joins of its own, of course, so you'd need quite a few tables and/or versions to get any real benefit out of it:
```
;WITH
AllDescriptions AS
(
SELECT 1 AS Rank, * FROM T1
UNION ALL SELECT 2 AS Rank, * FROM T2
-- UNION ALL SELECT 3 AS Rank, * FROM T3
-- UNION ALL SELECT 4 AS Rank, * FROM T4
-- etc
),
Ranks AS
(
SELECT
AllDescriptions.Name,
MIN(AllDescriptions.Rank) AS Rank
FROM
AllDescriptions
GROUP BY
Name
),
Versions AS
(
SELECT
AllDescriptions.Name,
AllDescriptions.Rank,
MIN(AllDescriptions.Version) AS Version
FROM
AllDescriptions
INNER JOIN Ranks
ON Ranks.Name = AllDescriptions.Name
AND Ranks.Rank = AllDescriptions.Rank
GROUP BY
AllDescriptions.Name,
AllDescriptions.Rank
),
Descriptions AS
(
SELECT
AllDescriptions.Name,
AllDescriptions.Description
FROM
AllDescriptions
INNER JOIN Versions
ON Versions.Name = AllDescriptions.Name
AND Versions.Rank = AllDescriptions.Rank
AND Versions.Version = AllDescriptions.Version
)
SELECT
Names.*,
Descriptions.Description
FROM
Names
LEFT OUTER JOIN Descriptions
ON Descriptions.Name = Names.Name
``` | Try this query and it will also give you the expected result.
```
SELECT N.name AS Name,
Description =
CASE
WHEN ( t1.description IS NOT NULL ) THEN t1.description
WHEN ( t2.description IS NOT NULL ) THEN t2.description
ELSE NULL
END
FROM names AS N
LEFT JOIN t1
ON t1.name = N.name
AND t1.version IN( 1, 2 )
LEFT JOIN t2
ON t2.name = N.name
AND t2.version IN ( 1, 2 )
``` | alternative solution to too many JOINs | [
"sql",
"sql-server",
"sql-server-2008",
"t-sql"
] |
I faced this problem while developing a Trigger in Oracle: [ORA-01403: no data found](http://www.techonthenet.com/oracle/errors/ora01403.php). I did some research and understood the root of the problem. Nevertheless [handling the error exception](http://www.orafaq.com/wiki/ORA-01403) prevents the above error, but does not solve my problem.
What I am currently looking for is an *optimal* workaround that performs as few queries as possible and achieves the best performance. I'll try to describe the scenario by creating simple examples of the real structure.
## Scenario
I have a "date reference" table to establish periods of time, say:
```
CREATE TABLE DATE_REFERENCE (
DATE_START DATE NOT NULL,
DATE_END DATE NOT NULL,
-- Several other columns here, this is just a silly example
CONSTRAINT PK_DATE_REFERENCE PRIMARY KEY(DATE_START, DATE_END)
);
```
When the trigger is triggered, I'll have one `DATE` field - say `DATE_GIVEN` (for example sake). What I need is:
1. To find the `DATE_REFERENCE` row in which `DATE_GIVEN BETWEEN DATE_START AND DATE_END` (easy); *OR*
2. If the previous option returns *no data*, I need to find the next closest `DATE_START` to `DATE_GIVEN`.
In both cases, I need to retrieve **the row with all columns** from table `DATE_REFERENCE`, no matter if it matches Opt 1 or 2. That's exactly where I faced the problem described.
I wrote this test block to *test* and try to find a solution. The example below **is not working**, I know; **but it is *exactly* what I want to accomplish** (in concept). I have added comments like `-- Lots of code` to make clear that will be part of a more elaborate trigger:
```
DECLARE
DATE_GIVEN DATE;
RESULTROW DATE_REFERENCE%ROWTYPE;
BEGIN
-- Lots of code
-- Lots of code
-- Lots of code
DATE_GIVEN := TO_DATE('2014-02-26 12:30:00', 'YYYY-MM-DD HH24:MI:SS');
-- This one throws the ORA-01403 exception if no data was found
SELECT
* INTO RESULTROW
FROM
DATE_REFERENCE
WHERE
DATE_GIVEN BETWEEN DATE_START AND DATE_END;
-- If the above didn't throw exceptions, I would continue like so:
IF RESULTROW IS NULL THEN
SELECT
* INTO RESULTROW
FROM
DATE_REFERENCE
WHERE
DATE_START > DATE_GIVEN
AND ROWNUM = 1
ORDER BY DATE_START ASC;
END IF;
-- Now RESULTROW is populated, and the rest of the trigger code gets executed ~beautifully~
-- Lots of code
-- Lots of code
-- Lots of code
END;
```
## Question
Knowing that the above PL/SQL block is more of a ***concept*** than ***working code***, what is the best way to get `RESULTROW` populated, minding performance and keeping the number of queries as low as possible?
Sorry for the long question, but I figured scenario explanation was necessary. Thanks in advance for any help/thoughts! | Just populate the field directly, using ordering and `rownum`:
```
SELECT * INTO RESULTROW
FROM (SELECT *
FROM DATE_REFERENCE
ORDER BY (CASE WHEN DATE_GIVEN BETWEEN DATE_START AND DATE_END
THEN 1 ELSE 0
END) DESC,
(DATE_START - DATE_GIVEN)
) t
WHERE rownum = 1;
```
This will populate the information with one query.
EDIT:
If you want to put a condition in the subquery, it needs to be:
```
SELECT * INTO RESULTROW
FROM (SELECT *
FROM DATE_REFERENCE
WHERE DATE_GIVEN <= DATE_END
ORDER BY (CASE WHEN DATE_GIVEN BETWEEN DATE_START AND DATE_END
THEN 1 ELSE 0
END) DESC,
(DATE_START - DATE_GIVEN)
) t
WHERE rownum = 1;
```
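As a quick sanity check of the corrected query (sqlite via Python — `LIMIT 1` stands in for Oracle's `rownum = 1`, `julianday` for native date subtraction, and the two periods below are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE DATE_REFERENCE (DATE_START TEXT, DATE_END TEXT)")
con.executemany("INSERT INTO DATE_REFERENCE VALUES (?, ?)",
                [("2014-01-01", "2014-01-31"), ("2014-03-01", "2014-03-31")])

def lookup(given):
    # Containing periods sort first; otherwise the smallest forward gap wins.
    return con.execute("""
        SELECT DATE_START, DATE_END
        FROM DATE_REFERENCE
        WHERE ? <= DATE_END
        ORDER BY (CASE WHEN ? BETWEEN DATE_START AND DATE_END
                       THEN 1 ELSE 0 END) DESC,
                 julianday(DATE_START) - julianday(?)
        LIMIT 1""", (given, given, given)).fetchone()

print(lookup("2014-01-15"))  # inside the January period
print(lookup("2014-02-26"))  # between periods -> next closest start (March)
```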
I believe the right condition is `DATE_GIVEN <= DATE_END`. This covers both the `between` condition and should imply `DATE_GIVEN < DATE_START`. This assumes that `DATE_END` is never `NULL`. | I also had similar problem, resolved it like this:
If the row does not exist in the table `LADDER.INCR_PROCESS`, I get `IsPassed` as Null:
```
Declare
IsPassed Integer ;
Begin
Select I.LVL Into IsPassed
From LADDER.INCR_PROCESS I
Right
Join Dual on I.LVL >= 90010 and I.Passed = 0
Where RowNum = 1 ;
....
End;
``` | Oracle PL/SQL - ORA-01403 "No data found" when using "SELECT INTO" | [
"sql",
"oracle",
"plsql",
"triggers",
"oracle11g"
] |
Hitting a record based on multiple where clauses, I would like SQL to return a copy of the row being hit, for every where clause that matches.
```
c1 | c2 | c3
x y z
```
x,y,z would be returned only once with a `where c1=x or c3=z`.
I would like `where c1=x or c3=z` , to return, 2 copies of x,y,z
Can this be achieved?
**Why?**
I am trying to eliminate running multiple queries to get multiples of the same record to do prioritization.
I can and should get two copies of x,y,z if I run two queries, one for c1, and one for c3.
I want these 2 copies of x,y,z by design, but I want to run one query. With my implementation, I will run into situations where I will have to run more than 10 query trips to get what I am after... for one prioritized record.
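For concreteness, here is the behavior being worked around, as a runnable sketch (sqlite via Python): a single `OR` query matches the row on both conditions yet still returns only one copy.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (c1 TEXT, c2 TEXT, c3 TEXT)")
con.execute("INSERT INTO t VALUES ('x', 'y', 'z')")

# Both conditions hit the same row, but it appears once in the result.
rows = con.execute("SELECT * FROM t WHERE c1 = 'x' OR c3 = 'z'").fetchall()
print(rows)  # [('x', 'y', 'z')] -- one copy, not two
```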
I would rather run one single query, than 10. | This would do it although it might not be as elegant as you'd like
```
SELECT c1,c2,c3 FROM [MyTableName] where c1=x
UNION ALL
SELECT c1,c2,c3 FROM [MyTableName] where c3=z
``` | Already answered, but here is an alternative:
```
SELECT c1,c2,c3
FROM [MyTableName]
JOIN (VALUES (1),(2)) t(n)
ON (t.n = 1 AND c1 = 'x') OR (t.n = 2 AND c3 = 'z');
``` | SQL query to return more than one copy of same row, if hit on more than one where clause | [
"sql",
"sql-server",
"search",
"search-engine",
"record"
] |
I have one field in SQL Server containing section, township and range information, each separated by dashes; for example: `18-84-7`. I'd like to have this information broken out by each unit, section as one field, township as one field and range as one field, like: `18 84 7`.
The number of characters varies. It's not always 2 characters or 1 character per unit, so I believe the best way is to split on the dashes, but I'm not sure how to do this. Is there a way this can be done in SQL Server?
Thanks! | There are probably several different ways to do it, some uglier than others. Here's one:
(Note: dat = the string of characters)
```
select *,
substring(dat,1,charindex('-',dat)-1) as Section,
substring(dat,charindex('-',dat)+1,charindex('-',dat,charindex('-',dat)+1)-charindex('-',dat)-1) as TownShip,
reverse(substring(reverse(dat),0,charindex('-',reverse(dat)))) as myRange
from myTable
``` | Please try this more reliable code.
First, create the function below:
```
CREATE FUNCTION dbo.UFN_SEPARATES_COLUMNS(
@TEXT varchar(8000)
,@COLUMN tinyint
,@SEPARATOR char(1)
)RETURNS varchar(8000)
AS
BEGIN
DECLARE @POS_START int = 1
DECLARE @POS_END int = CHARINDEX(@SEPARATOR, @TEXT, @POS_START)
WHILE (@COLUMN >1 AND @POS_END> 0)
BEGIN
SET @POS_START = @POS_END + 1
SET @POS_END = CHARINDEX(@SEPARATOR, @TEXT, @POS_START)
SET @COLUMN = @COLUMN - 1
END
IF @COLUMN > 1 SET @POS_START = LEN(@TEXT) + 1
IF @POS_END = 0 SET @POS_END = LEN(@TEXT) + 1
RETURN SUBSTRING (@TEXT, @POS_START, @POS_END - @POS_START)
END
GO
```
Then try the code below:
```
DECLARE @STRING VARCHAR(20) ='1-668-333'
SELECT
dbo.UFN_SEPARATES_COLUMNS(@STRING, 1, '-') AS VALUE1,
dbo.UFN_SEPARATES_COLUMNS(@STRING, 2, '-') AS VALUE2,
dbo.UFN_SEPARATES_COLUMNS(@STRING, 3, '-') AS VALUE3
```
RESULT
[](https://i.stack.imgur.com/AzMG7.png)
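The same per-column split can be sanity-checked outside SQL Server. A sketch in sqlite via Python (`substr`/`instr` playing the roles of `SUBSTRING`/`CHARINDEX`; `rest` is a helper column holding everything after the first dash):

```python
import sqlite3

con = sqlite3.connect(":memory:")
val = "1-668-333"

# Mirror of the three-column split using sqlite's substr/instr.
first, second, third = con.execute("""
    WITH s(v, rest) AS (
        SELECT :v, substr(:v, instr(:v, '-') + 1)
    )
    SELECT substr(v, 1, instr(v, '-') - 1),
           substr(rest, 1, instr(rest, '-') - 1),
           substr(rest, instr(rest, '-') + 1)
    FROM s""", {"v": val}).fetchone()

print(first, second, third)  # 1 668 333
```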
If you need more details, please see:
> <https://social.technet.microsoft.com/wiki/contents/articles/26937.t-sql-splitting-a-string-into-multiple-columns.aspx> | How to Split String by Character into Separate Columns in SQL Server | [
"sql",
"regex",
"sql-server-2008-r2"
] |
I want to find any female who has visited two or more restaurants.
Here are the tables.
Cust:
```
Name | Gender
--------------
sarah | female
Tim | Male
```
Visits:
```
Name | Restaurant
------------------
sarah | Crab City
Tim | Domino's
sarah | Crab City
sarah | Krusty City
Tim | Domino's
sarah | Crab City
Tim | Domino's
```
Everything that I have tried didn't work...
I am using Access to do the queries. | ```
SELECT Temp_Table.name AS NameOfPerson
FROM (SELECT cust.name,
Count(DISTINCT restaurant) AS UniqueRest
FROM visits,
cust
WHERE cust.name = visits.name
AND cust.gender = 'female'
GROUP BY cust.name) AS Temp_Table
WHERE UniqueRest >= 2
```
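A runnable check of this query shape (sqlite via Python, using the question's sample data; note that Access's Jet SQL does not support `COUNT(DISTINCT ...)` directly, so there it would have to be emulated, e.g. by counting over a `SELECT DISTINCT` subquery):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Cust (Name TEXT, Gender TEXT);
    CREATE TABLE Visits (Name TEXT, Restaurant TEXT);
    INSERT INTO Cust VALUES ('sarah','female'), ('Tim','Male');
    INSERT INTO Visits VALUES
      ('sarah','Crab City'), ('Tim','Domino''s'), ('sarah','Crab City'),
      ('sarah','Krusty City'), ('Tim','Domino''s'), ('sarah','Crab City'),
      ('Tim','Domino''s');
""")

# Females with two or more distinct restaurants visited.
rows = con.execute("""
    SELECT c.Name
    FROM Cust c JOIN Visits v ON c.Name = v.Name
    WHERE c.Gender = 'female'
    GROUP BY c.Name
    HAVING COUNT(DISTINCT v.Restaurant) >= 2""").fetchall()
print(rows)  # [('sarah',)]
```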
This will give you the names of females who visited two or more restaurants. | You can try something as shown below. The gist of it is to join on a subquery where you use GROUP BY/HAVING to return only the >2-visit customers:
```
select name from Cust
INNER JOIN
(
SELECT
Name,
Restaurant,
COUNT(DISTINCT Restaurant) AS UniqueRest
FROM
Visits
GROUP BY Name
HAVING COUNT(DISTINCT Restaurant) > 2
) visitingcusts ON cust.Name = visitingcusts.Name AND Cust.gender='female'
``` | How to find fields with calculating other fields? | [
"sql",
"ms-access"
] |
I have an ASP.NET C# cloud application and I need to encrypt my SQL Azure tables.
Now I have no problem encrypting my stored procedure parameters and decrypting them again when I read them out; **however**, this means I cannot perform a proper search on my cells anymore.
I have no idea how to deal with this except reading the whole thing into memory and filtering it there, but that just feels dirty, and I don't want to lose performance once I start reading a lot of cells.
Do you guys have any idea how to cope with this??
Thanks a lot | The only realistic way to have encryption-at-rest in SQL Server in Azure is to provision an IaaS VM with full SQL Server installed, [as described here](http://www.windowsazure.com/en-us/documentation/articles/virtual-machines-provision-sql-server/). Then you can configure whatever full-blown SQL features you want, including encryption, full-text indexing and all the other stuff that's not available in Windows Azure SQL Database.
But as @spender says, is encryption really necessary in this instance? The encryption-at-rest feature is to protect sensitive data in the event of the physical disk being compromised, and that's not really going to happen in a massive-scale cloud hosting scenario.
Update: full-text search and TDE are now supported in SQL Azure <http://azure.microsoft.com/blog/2015/04/30/full-text-search-is-now-available-for-preview-in-azure-sql-database/>
<http://www.sqlindepth.com/full-text-search-in-sql-azure/>
<http://blogs.msdn.com/b/sqlsecurity/archive/2015/04/29/announcing-transparent-data-encryption-for-azure-sql-database.aspx> | In case anyone else stumbles on this question, note that Azure SQL Database now supports Transparent Data Encryption (TDE, aka "encryption-at-rest").
See: <https://msdn.microsoft.com/en-us/library/dn948096.aspx> | ASP.net C# SQL Azure Encryption | [
"sql",
"azure",
"encryption",
"azure-sql-database"
] |
The query below is failing unexpectedly with an arithmetic overflow error.
```
select IsNull(t2.val, 5005)
from(
SELECT 336.6 as val UNION ALL
SELECT NULL
) as t2
```
"Arithmetic overflow error converting int to data type numeric."
Strangely, if the query is modified to remove the NULL and replace it with the same value as the `IsNull` fallback (5005), it runs without issue:
```
select IsNull(t2.val, 5005)
from(
SELECT 336.6 as val UNION ALL
SELECT 5005
) as t2
```
Also, omitting the SELECT NULL line entirely allows the query to run without issue
```
select IsNull(t2.val, 5005)
from(
SELECT 336.6 as val
) as t2
```
If the coalesce value in the IsNull function is changed to an integer which is small enough to convert to the decimal in the subquery without widening, the query runs
```
select IsNull(t2.val, 500)
from(
SELECT 336.6 as val UNION ALL
SELECT NULL
) as t2
```
Tested this in both SQL Server 2005 and SQL Server 2008.
Ordinarily, combining integers with decimals is seamless, and SQL Server will convert both the integer and the decimal into a decimal type large enough to accommodate both. But for some reason, running a query where the cast occurs from both the UNION and the IsNull causes the cast to fail.
Does anyone know why this is? | Try doing this
```
select * into t2
from(
SELECT 336.6 as val
UNION ALL
SELECT NULL
) as x
```
If you now look at the columns, you see a numeric with numeric precision of 4 and scale of 1
```
select * from INFORMATION_SCHEMA.COLUMNS where TABLE_NAME='T2'
```
SQL made that decision based on the smallest numeric precision to hold 336.6. Now, when you ask it to convert the NULL to 5005, you are saying, convert any NULL values to a number too big to fit in a numeric with the precision of 4 and a scale of 1. The error message indicates that 5005 won't fit in Numeric(4,1)
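The precision rule can be mimicked outside SQL Server with plain Python decimals (`NUMERIC(4,1)` meaning at most 4 significant digits with 1 after the point — a sketch of the rule, not SQL Server's actual implementation):

```python
from decimal import Decimal

def fits_numeric_4_1(value):
    # NUMERIC(4,1): scale 1, precision 4
    d = Decimal(value).quantize(Decimal("0.1"))
    return len(d.as_tuple().digits) <= 4

print(fits_numeric_4_1("336.6"))  # True  -- the union's inferred type holds it
print(fits_numeric_4_1("5005"))   # False -- the ISNULL fallback overflows it
```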
This will work because the table will now generate a larger numeric field, since SQL needs to accommodate 5005. Create the table using the new contents of T2 from below, and the field type should go to Numeric(5,1) allowing the 5005 to fit.
```
select IsNull(t2.val, 5005)
from(
SELECT 336.6 as val UNION ALL
SELECT 5005
) as t2
```
When you run the statement without a NULL in your inner query, SQL never evaluates the 5005, so it doesn't reach a condition where it needs to put 5005 into a numeric(4,1) field.
```
select IsNull(t2.val, 5005)
from(
SELECT 336.6 as val
) as t2
``` | I think the problem is that when SQL Server resolves the union, it decides on a decimal type that is only large enough to fit 336.6 (which is `decimal(4,1)`). Trying to put 5005 into that results in an overflow.
You can get around that specifying the precision of decimal yourself:
```
select IsNull(t2.val, 5005)
from(
SELECT CONVERT(DECIMAL(5,1), 336.6) as val UNION ALL
SELECT NULL
) as t2
``` | Why does this SQL query fail | [
"sql",
"sql-server"
] |