## customers `id` INT PK `name` VARCHAR ## details `id` INT PK `detail_name` VARCHAR ## customers\_details `customer_id` INT FK `detail_id` INT FK `value` INT --- For each `customer` I have a set of `details`. The following query gets all customers that have detail #2 equal to 10: ``` SELECT c.* FROM customers c INNER JOIN customers_details cd ON cd.customer_id = c.id WHERE cd.detail_id = 2 AND cd.value = 10 ``` My problem is that I need to get all `customers` that have 2 or more specific `details`. For example: I want to get all `customers` that have detail #2 = 10 **AND** detail #3 = 20. Is there a simple way to do that using SQL?
I would do this: ``` select c.* from customers_details cd inner join customers c on c.id = cd.customer_id where cd.detail_id in (2, 3) group by cd.customer_id having sum(cd.detail_id = 2 and cd.value = 10) >= 1 and sum(cd.detail_id = 3 and cd.value = 20) >= 1 ``` What I do here is: * Group the details by customer * Sum 1 for every row that satisfies a condition (e.g. detail #2 with value 10) * The HAVING clause keeps only the customers that satisfy both conditions * I use the WHERE clause to reduce the number of rows HAVING has to filter again, trying to avoid a full scan. Regards,
You're just looking for customers that have two matching detail rows, one where detail #2 equals 10 and one where detail #3 equals 20? ``` SELECT c.id, count(*) FROM customers c INNER JOIN customers_details cd ON cd.customer_id = c.id WHERE (cd.detail_id = 2 AND cd.value = 10) OR (cd.detail_id = 3 AND cd.value = 20) GROUP BY c.id HAVING count(*) > 1 ``` This should give you every customer id that has detail #2 = 10 *and* detail #3 = 20
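The conditional-aggregation trick in the answers above is easy to sanity-check with a small sketch (Python's `sqlite3`; the customers and values are invented). Like MySQL, SQLite evaluates a comparison such as `cd.detail_id = 2` to 0 or 1, so it can be summed directly:

```python
import sqlite3

# Hypothetical data: Alice has detail #2 = 10 and detail #3 = 20; Bob only has detail #2 = 10.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE customers_details (customer_id INT, detail_id INT, value INT);
INSERT INTO customers VALUES (1, 'Alice'), (2, 'Bob');
INSERT INTO customers_details VALUES (1, 2, 10), (1, 3, 20), (2, 2, 10);
""")
rows = con.execute("""
SELECT c.id, c.name
FROM customers c
JOIN customers_details cd ON cd.customer_id = c.id
WHERE cd.detail_id IN (2, 3)                      -- pre-filter to the details we care about
GROUP BY c.id, c.name
HAVING SUM(cd.detail_id = 2 AND cd.value = 10) >= 1
   AND SUM(cd.detail_id = 3 AND cd.value = 20) >= 1
""").fetchall()
print(rows)  # [(1, 'Alice')] -- only the customer satisfying both conditions survives
```

In a database without boolean-as-integer semantics, the same thing is usually written as `SUM(CASE WHEN ... THEN 1 ELSE 0 END)`.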
Get records that have 2 or more entries in a joining table matching a condition
[ "mysql", "sql" ]
Say I have a table like this: ``` S_id |ca1 |ca2 |exam 1 | 08 | 12 | 35 1 | 02 | 14 | 32 1 | 08 | 12 | 20 2 | 03 | 11 | 55 2 | 09 | 18 | 45 2 | 10 | 12 | 35 3 | 07 | 12 | 35 3 | 04 | 14 | 37 3 | 09 | 15 | 32 4 | 03 | 11 | 55 4 | 09 | 18 | 45 4 | 10 | 12 | 35 5 | 10 | 12 | 35 5 | 07 | 12 | 35 5 | 09 | 18 | 45 ``` I want to select S\_id and total, and assign a rank to each student based on sum(ca1+ca2+exam), like the following: ``` S_id |total|rank 1 | 143 | 5 2 | 198 | 1 3 | 165 | 4 4 | 198 | 1 5 | 183 | 3 ``` If two students have the same total, like `S_id` 2 and `S_id` 4 with rank 1, I want the next rank to jump to 3. Thanks for helping.
Something like this maybe: ``` with tt(S_id,total) as ( select S_id, sum(ca1) + sum(ca2) + sum(exam) from t group by S_id union select null, 0 ) select s.S_id, s.total, (select count(*)+1 from tt as r where r.total > s.total) as rank from tt as s where S_id is not null; ```
Make the table: ``` sqlite> create table t (S_id, ca1, ca2, exam); sqlite> insert into t values ...> ( 1 , 08 , 12 , 35 ), ...> ( 1 , 02 , 14 , 32 ), ...> ( 1 , 08 , 12 , 20 ), ...> ( 2 , 03 , 11 , 55 ), ...> ( 2 , 09 , 18 , 45 ), ...> ( 2 , 10 , 12 , 35 ), ...> ( 3 , 07 , 12 , 35 ), ...> ( 3 , 04 , 14 , 37 ), ...> ( 3 , 09 , 15 , 32 ), ...> ( 4 , 03 , 11 , 55 ), ...> ( 4 , 09 , 18 , 45 ), ...> ( 4 , 10 , 12 , 35 ), ...> ( 5 , 10 , 12 , 35 ), ...> ( 5 , 07 , 12 , 35 ), ...> ( 5 , 09 , 18 , 45 ); ``` Make a temporary table with the total scores: ``` sqlite> create temp table tt as select S_id, sum(ca1) + sum(ca2) + sum(exam) as total from t group by S_id; ``` Use the temporary table to compute the ranks: ``` sqlite> select s.S_id, s.total, (select count(*)+1 from tt as r where r.total > s.total) as rank from tt as s; 1|143|5 2|198|1 3|165|4 4|198|1 5|183|3 ``` Drop the temporary table: ``` sqlite> drop table tt; ``` ADDENDUM With a recent change (2015-02-09) to SQLite, this formulation now works: ``` with tt (S_id, total) as (select S_id, sum(ca1 + ca2 + exam) as total from t group by S_id) select s.S_id, s.total, (select count(*)+1 from tt as r where r.total > s.total) as rank from tt as s; ```
How to Retrieve Rank Based on Total Mark in SQLite Table
[ "", "sql", "sqlite", "" ]
What's the difference between using the following statements in a WHERE clause in SQL? ``` WHERE STUDENTID=7 ``` or ``` WHERE STUDENTID IN (7) ``` Is there a recommended/optimal choice?
Use `IN` if you're replacing multiple ORs, e.g. ``` Where StudentID = 7 or StudentID = 6 or StudentID = 5 ``` would be ``` Where StudentID IN (5,6,7) ``` Otherwise just use `=`
There is no functional difference; the result is the same. Performance would differ only if the database treated them differently, but most likely the database will recognise that you are using exactly one value in the `in` expression and build the execution plan as if it were the first form. You might want to use the first one either way: it makes it clearer that you intended an exact comparison and didn't just forget to put the other values in the `in` expression.
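A quick way to see the equivalence (a sketch using Python's `sqlite3`; the table is invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE students (studentid INT)")
con.executemany("INSERT INTO students VALUES (?)", [(5,), (6,), (7,)])

# A single-value IN list behaves exactly like = ...
eq_rows = con.execute("SELECT studentid FROM students WHERE studentid = 7").fetchall()
in_rows = con.execute("SELECT studentid FROM students WHERE studentid IN (7)").fetchall()
print(eq_rows == in_rows)  # True

# ... while IN shines when there are several values to match.
multi = con.execute(
    "SELECT studentid FROM students WHERE studentid IN (5, 6, 7) ORDER BY studentid"
).fetchall()
print(multi)  # [(5,), (6,), (7,)]
```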
SQL Difference between using = and IN
[ "sql", "oracle" ]
I'm trying to combine two very similar SQL queries with separate date ranges to produce a single output table. (to compare results from this week with the corresponding week last year.) I've had a bit of a trawl of SO and found some similar questions (e.g. [this one](https://stackoverflow.com/questions/4557722/how-can-i-combine-sql-queries-with-different-expressions)) but still haven't managed to get this working: The two queries are: ``` SELECT [arrpoint] ,COUNT([arrpoint]) AS NumberOfTimesTW FROM [groups] tb1 INNER JOIN [fileinfo] tb2 ON tb1.op_name = tb2.[operator] INNER JOIN [costs] tb4 ON tb2.[fileno] = tb4.[fileno] WHERE [bedbank] = 1 AND [booked] >= DATEADD(wk, DATEDIFF(wk,0,GETDATE()), 0) GROUP BY [arrpoint] HAVING (COUNT([arrpoint])>1) ORDER BY NumberOfTimesTW DESC ``` and: ``` SELECT [arrpoint] ,COUNT([arrpoint]) AS NumberOfTimesTW FROM [groups] tb1 INNER JOIN [fileinfo] tb2 ON tb1.op_name = tb2.[operator] INNER JOIN [costs] tb4 ON tb2.[fileno] = tb4.[fileno] WHERE [bedbank] = 1 AND [booked] >= DateAdd(wk,-52,DATEADD(wk, DATEDIFF(wk,0,GETDATE()), 0)) AND [booked] <= DateAdd(wk,-51,DATEADD(wk, DATEDIFF(wk,0,GETDATE()), 0)) GROUP BY [arrpoint] HAVING (COUNT([arrpoint])>1) ORDER BY NumberOfTimesTW DESC ``` These output: ``` arrpoint | NumberOfTimesTW abc | 3 def | 2 ``` and: ``` arrpoint | NumberOfTimesTWLY ghi | 5 klm | 4 abc | 1 ``` What I'm hoping to get is something like: ``` arrpoint | NumberOfTimesTW | NumberOfTimesTWLY abc | 3 | 1 def | 2 | ghi | | 5 klm | | 4 ``` Not knowing much about SQL I'd originally thought I'd be able to achieve this just by sticking `UNION` between the two queries, but no luck. Can anyone give me some pointers on how to achieve this?
Use a `CASE` inside your aggregation to simplify the query ``` SELECT [arrpoint] ,COUNT( case when [booked] >= DATEADD(wk, DATEDIFF(wk,0,GETDATE()), 0) THEN [arrpoint] END) AS NumberOfTimesTW , COUNT(CASE WHEN ([booked] >= DateAdd(wk,-52,DATEADD(wk, DATEDIFF(wk,0,GETDATE()), 0)) AND [booked] <= DateAdd(wk,-51,DATEADD(wk, DATEDIFF(wk,0,GETDATE()), 0))) THEN [arrpoint] END) AS NumberOfTimesTWLY FROM [groups] tb1 INNER JOIN [fileinfo] tb2 ON tb1.op_name = tb2.[operator] INNER JOIN [costs] tb4 ON tb2.[fileno] = tb4.[fileno] WHERE [bedbank] = 1 AND ( [booked] >= DATEADD(wk, DATEDIFF(wk, 0, GETDATE()), 0) OR( [booked] >= DateAdd(wk, - 52, DATEADD(wk, DATEDIFF(wk, 0, GETDATE()), 0)) AND [booked] <= DateAdd(wk, - 51, DATEADD(wk, DATEDIFF(wk, 0, GETDATE()), 0)) ) ) GROUP BY [arrpoint] HAVING (COUNT([arrpoint])>1) ORDER BY NumberOfTimesTW DESC ```
You can just use a full join (assuming you want all values from both tables) between the two queries. Note that `ORDER BY` is not allowed inside the derived tables, and `COALESCE` keeps an `arrpoint` for rows that only exist on one side: ``` select coalesce(a.[arrpoint], b.[arrpoint]) as arrpoint, NumberOfTimesTW, NumberOfTimesTW1 from (SELECT [arrpoint] ,COUNT([arrpoint]) AS NumberOfTimesTW FROM [groups] tb1 INNER JOIN [fileinfo] tb2 ON tb1.op_name = tb2.[operator] INNER JOIN [costs] tb4 ON tb2.[fileno] = tb4.[fileno] WHERE [bedbank] = 1 AND [booked] >= DATEADD(wk, DATEDIFF(wk,0,GETDATE()), 0) GROUP BY [arrpoint] HAVING (COUNT([arrpoint])>1)) as a full join (SELECT [arrpoint] ,COUNT([arrpoint]) AS NumberOfTimesTW1 FROM [groups] tb1 INNER JOIN [fileinfo] tb2 ON tb1.op_name = tb2.[operator] INNER JOIN [costs] tb4 ON tb2.[fileno] = tb4.[fileno] WHERE [bedbank] = 1 AND [booked] >= DateAdd(wk,-52,DATEADD(wk, DATEDIFF(wk,0,GETDATE()), 0)) AND [booked] <= DateAdd(wk,-51,DATEADD(wk, DATEDIFF(wk,0,GETDATE()), 0)) GROUP BY [arrpoint] HAVING (COUNT([arrpoint])>1)) as b on a.[arrpoint] = b.[arrpoint] order by NumberOfTimesTW desc ```
Combine SQL queries
[ "sql", "sql-server" ]
I am looking to write SQL statements for the problems below: 1. To remove duplicate rows 2. To select rows 10 to 15, ordered by the length of a column They need to be generic SQL statements, not specific to SQL Server or Oracle. Can anybody quickly help me please? So far I tried the below, assuming the table is called `sample` with columns `id int` and `word varchar(50)`. Query #1: ``` delete from [sample] a where a.rowid > any (select b.rowid from [SAMPLE] b where a.word = b.word) ``` Query #2: ``` SELECT * FROM [SAMPLE] WHERE rownum <= 5 AND rowid NOT IN (SELECT ROWID FROM [sample] ORDER BY LENGTH(WORD) WHERE rownum >= 10); ``` Are they correct? I am new to SQL programming. Thanks a lot for your time
(1) Your query is pretty close. In Oracle: ``` delete from sample s where s.rowid > (select min(s2.rowid) from sample s2 where s.word = s2.word) ``` SQL Server doesn't have a `rowid` pseudo-column. If you have a unique id, the following works in both databases (SQL Server does not accept an alias directly after the table name in `DELETE`, so the table name itself is used for the correlation): ``` delete from sample where id > (select min(s2.id) from sample s2 where s2.word = sample.word) ``` (2) The most recent versions of SQL Server and Oracle both support the ANSI standard `FETCH` syntax. Rows 10 to 15 are the 6 rows after row 9, so something like this: ``` select t.* from t order by length(t.col) offset 9 rows fetch next 6 rows only ``` The problem is the `length()` versus `len()` function. These are different in the two databases. Your best bet would be to create a user defined function in one of the databases to mimic the functionality of the other.
For #1 the generic way to remove duplicate rows is as follows: 1. create a copy of the table as a temporary table 2. INSERT in temp table SELECT from base table GROUP BY all columns HAVING COUNT(\*) > 1 3. DELETE FROM baseTable using the rows from the temp table 4. INSERT in base table SELECT from temp For #2 most DBMSes support window functions such as `ROW_NUMBER`: ``` SELECT * FROM ( SELECT ROW_NUMBER() OVER (ORDER BY CHAR_LENGTH(word)) AS rn ,... FROM [SAMPLE] ) dt WHERE rn BETWEEN 10 AND 15 ```
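The temp-table recipe for #1 can be sketched end to end with Python's `sqlite3` (steps 1 and 2 are collapsed into a single `CREATE TEMP TABLE ... AS SELECT`; the table and its rows are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sample (id INT, word TEXT);
INSERT INTO sample VALUES (1,'cat'), (2,'dog'), (3,'cat'), (4,'bird'), (5,'dog');

-- steps 1+2: keep one representative of each duplicated word in a temp table
CREATE TEMP TABLE dups AS
    SELECT MIN(id) AS id, word FROM sample GROUP BY word HAVING COUNT(*) > 1;

-- step 3: delete every duplicated row from the base table
DELETE FROM sample WHERE word IN (SELECT word FROM dups);

-- step 4: re-insert a single copy of each duplicated word
INSERT INTO sample SELECT id, word FROM dups;
""")
words = sorted(r[0] for r in con.execute("SELECT word FROM sample"))
print(words)  # ['bird', 'cat', 'dog'] -- every word now appears exactly once
```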
I am looking for generic SQL statements for the problems below
[ "sql", "oracle", "duplicates", "rows" ]
Having problems understanding how to get the WHERE clause to work with this date structure. Here is the principal logic: I want data only from the previous March 1 onward, ending on yesterday's date. Example #1: So today is Feb 13, 2015. This would mean I need data between (**2014-03-01** and **2015-02-12**) Example #2: Say today is March 20, 2015. This would mean I need data between (**2015-03-01** and **2015-03-19**) The WHERE logic might work, but it doesn't like converting '3/1/' + year, and I'm not sure how else to express it. The first clause is fine; it's the CASE section that is broken. **Query** ``` SELECT [Request Date], [myItem] FROM myTable WHERE [Request Date] < CONVERT(VARCHAR(10), GETDATE(), 102) AND [Request Date] = CASE WHEN CONVERT(VARCHAR(10), GETDATE(), 102) < CONVERT(VARCHAR(12), '3/1/' + DATEPART ( year , GETDATE()) , 114) THEN [Request Date] > CONVERT(VARCHAR(12), '3/1/' + DATEPART ( year , GETDATE()-365) , 114) ELSE [Request Date] > CONVERT(VARCHAR(12), '3/1/' + DATEPART ( year , GETDATE() , 114) END ``` I have also tried ``` AND [Request Date] = CASE WHEN CONVERT(VARCHAR(10), GETDATE(), 102) < '3/1/' + CONVERT(VARCHAR(12), DATEPART ( YYYY , GETDATE())) THEN [Request Date] > '3/1/' + CONVERT(VARCHAR(12), DATEPART ( YYYY , GETDATE()-364)) ELSE [Request Date] > '3/1/' + CONVERT(VARCHAR(12), DATEPART ( YYYY , GETDATE())) END ```
Try this `where` clause. ``` WHERE [Request Date] BETWEEN Cast(CONVERT(VARCHAR(4), Year(Getdate()) - CASE WHEN Month(Getdate()) < 3 THEN 1 ELSE 0 END) + '-03-01' AS DATE) AND Cast(Getdate() - 1 AS DATE) ``` Here the `CASE` picks the year of the most recent March 1: the current year from March onward, the previous year before March. Concatenating `'-03-01'` to that year gives the starting point, and `Getdate() - 1` (cast to `DATE` to drop the time part) defines the ending point.
I'd prefer to create datetime variables for the @from - @to range but if this is for a view I guess you have to do it in the where clause. ``` SELECT [Request Date], [myItem] FROM myTable WHERE [Request Date] < cast(GETDATE() as date) AND [Request Date] >= CASE WHEN GETDATE() < CONVERT(datetime, '3/1/' + cast(Year(GETDATE()) as varchar(4))) THEN CONVERT(datetime, '3/1/' + cast(Year(GETDATE()) - 1 as varchar(4))) ELSE CONVERT(datetime, '3/1/' + cast(Year(GETDATE()) as varchar(4))) END ```
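The boundary logic in both answers reduces to "March 1 of the current year if we have passed it, otherwise March 1 of last year, through yesterday". A small sketch of just that date arithmetic in plain Python (the function name is invented for illustration):

```python
from datetime import date, timedelta

def request_range(today):
    # Most recent March 1: this year's if today is on/after it, else last year's.
    year = today.year if today >= date(today.year, 3, 1) else today.year - 1
    return date(year, 3, 1), today - timedelta(days=1)

# The two examples from the question:
print(request_range(date(2015, 2, 13)))  # (datetime.date(2014, 3, 1), datetime.date(2015, 2, 12))
print(request_range(date(2015, 3, 20)))  # (datetime.date(2015, 3, 1), datetime.date(2015, 3, 19))
```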
SQL Date Logic Clause
[ "sql", "sql-server", "t-sql" ]
> I need a query which will return the list of books from the book table, but it should also show the total number of books and the number available. The total should count the books with the same ISBN, and the available count should count the books with the same ISBN whose status is 'available'. Table structure ``` Bookid booktitle author ISBN edition publisher price supplier volume language status ``` status contains available, issued. I want the result as ``` BookTitle Author Publisher Location Total available ``` Please also explain the query because I need to implement it on many tables. Queries I tried so far: ``` select *,count(1) as Quantity from Book group by Book.BookTitle ``` It successfully gives me the book list with total quantity, and ``` select *,count(1) as Quantity from Book where status='Available' group by Book.BookTitle ``` It successfully gives me the list of books with how many are available, but I need the combination of both total and available.
In brief: ``` select isbn, booktitle, sum(case when status='available' then 1 else 0 end) as avail, count(*) as total from Book group by isbn, booktitle ``` <http://sqlfiddle.com/#!2/80a41/5> I definitely suggest you read about third normal form
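The conditional SUM can be verified with a tiny sketch (Python's `sqlite3`; the ISBNs and titles are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Book (ISBN TEXT, BookTitle TEXT, status TEXT)")
con.executemany("INSERT INTO Book VALUES (?,?,?)", [
    ('111', 'SQL Basics', 'available'),
    ('111', 'SQL Basics', 'issued'),
    ('111', 'SQL Basics', 'available'),
    ('222', 'DB Design', 'issued'),
])
rows = con.execute("""
SELECT ISBN, BookTitle,
       SUM(CASE WHEN status = 'available' THEN 1 ELSE 0 END) AS avail,  -- conditional count
       COUNT(*) AS total                                                -- unconditional count
FROM Book
GROUP BY ISBN, BookTitle
ORDER BY ISBN
""").fetchall()
print(rows)  # [('111', 'SQL Basics', 2, 3), ('222', 'DB Design', 0, 1)]
```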
You must group by all items in the SELECT list. You can either count(\*) or sum(1). Here is how I would do it: ``` SELECT BookId, Name, Available, count(*) as Quantity FROM Book GROUP BY Bookid, Name, Available ```
Sql Query for number of available and total books
[ "sql", "sqlite" ]
This is my table. ``` SELECT * FROM [Message] ``` ![Message Table Data](https://i.stack.imgur.com/puHo3.jpg) Now what I want is the list of only the last message which the user with **Id: 101** has sent to or received from any other user. The query I wrote for it is below ``` SELECT (SELECT TOP 1 [Message_id] FROM [Message] WHERE ([Sender_id] = REC.[Sender_id] AND [Receiver_id] = REC.[Receiver_id]) OR ([Sender_id] = REC.[Receiver_id] AND [Receiver_id] = REC.[Sender_id]) ORDER BY 1 DESC) AS [Message Id], REC.[Sender_id] AS [Sender Id], REC.[Receiver_id] AS [Receiver Id], (SELECT TOP 1 [Message] FROM [Message] WHERE ([Sender_id] = REC.[Sender_id] AND [Receiver_id] = REC.[Receiver_id]) OR ([Sender_id] = REC.[Receiver_id] AND [Receiver_id] = REC.[Sender_id]) ORDER BY 1 DESC) AS [Message] FROM (SELECT DISTINCT [Sender_id], [Receiver_id] FROM [Message] WHERE [Sender_id] = '101') REC ``` And I am getting the following result, which seems fine. ![Query Results](https://i.stack.imgur.com/NAbaC.jpg) I'm new to DB queries and it seems that my query is very inefficient and long. Can anyone please suggest a better way to write this query? Also, I would like to use **JOINS** if that might be a better way to write this query. **Note:** Please consider Message\_id to be just a unique number and not an ordered identity column; in the actual scenario it may be any generated unique alphanumeric code. Thanks.
This should do the trick. ``` WITH Priorities AS ( SELECT Priority = Row_Number() OVER (PARTITION BY X.Party2 ORDER BY Message_id DESC), M.* FROM dbo.Message M OUTER APPLY (VALUES (M.Sender_id, M.Receiver_Id), (M.Receiver_id, M.Sender_Id) ) X (Party1, Party2) WHERE Party1 = '101' ) SELECT * FROM Priorities WHERE Priority = 1 ; ``` # [See this working in a Sql Fiddle](http://sqlfiddle.com/#!6/300a3/2) Explanation: The real problem is that you don't care whether the selected person is the sender or the receiver. This leads to the complication of pulling the value from one column or the other, which can be solved in typical fashion with a `CASE` statement. However, I'm always a fan of solving things like this relationally instead of procedurally, so I simplified the problem by (basically) doubling up the data. For each source row in the table, we're generating two rows, one where the sender comes first, and one where the receiver comes first. We don't care which one is the sender or receiver, and we can just say that we're looking for `Party1` to be id `101`, and then want to find, for each `Party2` that he exchanged a message with (whether he was sender or receiver is irrelevant), the most recent one. `OUTER APPLY` is just a trick for us to avoid doing more CTEs or nested queries (another way to write it). It's like a `LEFT JOIN`, but lets one use outer references (it refers to columns in table `M`). For what it's worth, `Message_id` doesn't seem like a reliable way to choose the latest message. You should have a date column and order by that instead! Just add a `datetime` or `datetime2` column to your table, and change the `ORDER BY` to use it. You never know if messages could be inserted into the table out of order from when they actually occurred; them being out of order should in fact be expected. What if you have to back-insert lost messages? Identity columns are not a good way to guarantee insertion order, in my experience.
My original take was that you wanted the most recent message sent *and* the most recent message received, for each sender and receiver. However, that's not what you asked for. I thought I'd leave this in for posterity since it could also be a useful answer to someone: ``` WITH Priorities AS ( SELECT SNum = Row_Number() OVER (PARTITION BY Sender_id ORDER BY Message_id DESC), PNum = Row_Number() OVER (PARTITION BY Receiver_id ORDER BY Message_id DESC), * FROM Message WHERE '101' IN (Sender_id, Receiver_id) ) SELECT * FROM Priorities WHERE 1 IN (SNum, PNum) ; ```
If you want the most recent message to/from another user, calculate the recency (based on `message_id`) for the *other* user as both sender and receiver. The trick is to partition using the *other* user as the partitioning key. Then choose the first one in the outer query: ``` select m.* from (select m.*, row_number() over (partition by (case when sender_id = 101 then receiver_id else sender_id end) order by message_id desc) as seqnum from message m where 101 in (sender_id, receiver_id) ) m where seqnum = 1; ```
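The partition-by-the-other-party idea can be checked with a sketch in Python's `sqlite3` (window functions need SQLite 3.25+; the messages are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE message (message_id INT, sender_id TEXT, receiver_id TEXT, message TEXT)")
con.executemany("INSERT INTO message VALUES (?,?,?,?)", [
    (1, '101', '102', 'hi'),
    (2, '102', '101', 'hello'),
    (3, '101', '103', 'hey'),
])
rows = con.execute("""
SELECT message_id, message
FROM (
    SELECT m.*,
           ROW_NUMBER() OVER (
               -- partition by the *other* party, whichever column they are in
               PARTITION BY CASE WHEN sender_id = '101' THEN receiver_id ELSE sender_id END
               ORDER BY message_id DESC) AS seqnum
    FROM message m
    WHERE '101' IN (sender_id, receiver_id)
)
WHERE seqnum = 1
ORDER BY message_id
""").fetchall()
print(rows)  # [(2, 'hello'), (3, 'hey')] -- latest message per conversation partner
```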
Better/Right way to write a complex query
[ "sql", "sql-server", "sqlite" ]
I have two tables `Items` and `Transactions`. In the items table, all the items are listed. The transactions table is where a particular employee can request an item in the quantity that he/she needs. How do I use joins to merge the data from the two tables and compute the balance quantity of each item? Note: (Quantity Balance = Quantity - SUM(TR\_Qty)) `ITEMS` table: ``` | ID | ITEM | UNIT | QUANTITY | PRICE | | 1 | Perfume | btl. | 500 | 200.00 | | 2 | Battery | pc. | 100 | 25.00 | | 3 | Milk | can | 250 | 70.00 | | 4 | Soap | pack | 400 | 150.00 | ``` `TRANSACTIONS` table: ``` | ID | ITEM_ID | TR_QTY | REQUESTOR | PROCESSOR | Date |Time | | 1 | 1 | 20 | A. Jordan | K. Koslav | 12-22-2014 |09:00| | 2 | 2 | 8 | B. Wilkins | Z. Flores | 12-22-2014 |10:03| | 3 | 3 | 80 | C. Potran | A. Mabag | 12-26-2014 |14:23| | 4 | 3 | 45 | D. Korvak | D. Sanchez | 12-28-2014 |15:33| | 5 | 4 | 22 | C. Carvicci | A. Flux | 12-31-2014 |16:02| | 6 | 1 | 18 | F. Sansi | N. Mahone | 01-22-2015 |08:45| | 7 | 4 | 14 | Z. Gorai | M. Sucre | 01-30-2015 |16:33| | 8 | 2 | 7 | L. ZOnsey | P. Panchito | 02-11-2015 |17:22| ``` Desired output: ``` | ID | ITEM | QUANTITY BALANCE| | 1 | Perfume | 462 | | 2 | Battery | 85 | | 3 | Milk | 125 | | 4 | Soap | 364 | ```
Try this: ``` DECLARE @Items TABLE(ID INT, Item NVARCHAR(10), Q INT) DECLARE @Transactions TABLE(ID INT, ItemID INT, TQ INT) INSERT INTO @Items VALUES (1, 'Perfume', 500), (2, 'Battery', 100), (3, 'Milk', 250), (4, 'Soap', 400) INSERT INTO @Transactions VALUES (1, 1, 20), (2, 2, 8), (3, 3, 80), (4, 3, 45), (5, 4, 22), (6, 1, 18), (7, 4, 14), (8, 2, 7) SELECT i.ID, i.Item, MAX(i.Q) - ISNULL(SUM(t.TQ), 0) AS Balance FROM @Items i LEFT JOIN @Transactions t ON t.ItemID = i.ID GROUP BY i.ID, i.Item ORDER BY i.ID ``` Output: ``` ID Item Balance 1 Perfume 462 2 Battery 85 3 Milk 125 4 Soap 364 ```
``` SELECT ITEM , ITEMS.QUANTITY - ( SELECT ISNULL(SUM(TRANSACTIONS.TR_QTY), 0) FROM TRANSACTIONS WHERE TRANSACTIONS.ITEM_ID = ITEMS.ID ) AS [QUANTITY BALANCE] FROM ITEMS ``` Field names and table names are as you mentioned in the question. Note that a column alias containing a space must be bracketed or quoted.
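Both the join-and-group and the correlated-subquery approaches can be cross-checked against the question's data with Python's `sqlite3` (`COALESCE` guards against items that have no transactions at all):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE items (id INT, item TEXT, quantity INT);
CREATE TABLE transactions (id INT, item_id INT, tr_qty INT);
INSERT INTO items VALUES (1,'Perfume',500),(2,'Battery',100),(3,'Milk',250),(4,'Soap',400);
INSERT INTO transactions VALUES (1,1,20),(2,2,8),(3,3,80),(4,3,45),(5,4,22),(6,1,18),(7,4,14),(8,2,7);
""")
rows = con.execute("""
SELECT item,
       quantity - COALESCE((SELECT SUM(t.tr_qty)
                            FROM transactions t
                            WHERE t.item_id = items.id), 0) AS balance
FROM items
ORDER BY id
""").fetchall()
print(rows)  # [('Perfume', 462), ('Battery', 85), ('Milk', 125), ('Soap', 364)]
```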
How to use joins and sum() in SQL Server query
[ "sql", "sql-server" ]
I have a table called **cia** with 2 columns: Column 1 (**'Name'**) has the names of all countries in the world. Column 2 (**'area'**) has the size of those countries in m^2. I want to find the biggest and smallest country. To find those I need to enter the following queries: ``` SELECT Name, MAX(area) FROM cia ``` My other query: ``` SELECT Name, MIN(area) FROM cia ``` Now obviously I could do ``` SELECT MIN(area), MAX(area) FROM cia ``` however, I wouldn't get the corresponding name to my values then. Is it possible to get an output like this? **Country | Area** Afghanistan | lowest value of column 'area' China | highest value of column 'area'
This is the minimum size: ``` select min(area) from cia; ``` And this the maximum: ``` select max(area) from cia; ``` So: ``` select * from cia where area = (select min(area) from cia) or area = (select max(area) from cia) order by area; ```
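The min/max subquery approach can be checked quickly (a sketch with Python's `sqlite3`; the countries and areas are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE cia (name TEXT, area INT)")
con.executemany("INSERT INTO cia VALUES (?,?)", [
    ('China', 9596961), ('Monaco', 2), ('Italy', 301340),
])
rows = con.execute("""
SELECT name, area
FROM cia
WHERE area = (SELECT MIN(area) FROM cia)
   OR area = (SELECT MAX(area) FROM cia)
ORDER BY area
""").fetchall()
print(rows)  # [('Monaco', 2), ('China', 9596961)] -- smallest first, largest last
```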
You can try this query: ``` CREATE TABLE area (name varchar(50), area int); insert into area values ('Italy', 1000); insert into area values ('China', 10000); insert into area values ('San Marino', 10); insert into area values ('Ghana', 3333); select main.* from area main where not exists( SELECT 'MINIMUM' FROM area a2 where a2.area < main.area ) or not exists( SELECT 'MAXIMUM' FROM area a3 WHERE a3.area > main.area ) order by area desc ``` This way you only write two subqueries, and no GROUP BY is needed to show the name, which can fail on other DBMSs. Go to [Sql Fiddle](http://sqlfiddle.com/#!2/d0705/1/0)
DISPLAY MIN() and MAX() Values in MySQL
[ "mysql", "sql", "database", "select" ]
So here is the problem I am facing. I have a table "A" as below: ``` Name AmountPaid Type NAME1 5 1 NAME1 10 1 NAME1 3 1 NAME2 5 2 NAME2 9 2 NAME2 8 2 ``` Here we have two names, NAME1 and NAME2, with types 1 and 2. I am trying to write a query that returns only a number showing how many unique names exist in the table. In the above example I am expecting the result: 2. I tried the query ``` Select NAME, count(Name) from A group by Name ``` however, this returns the 2 records below: ``` NAME1 3 NAME2 3 ``` Any help is deeply appreciated.
If I am not wrong, this is what you need. This assumes you don't want to `count` a `name` unless it is duplicated: ``` SELECT Count(DISTINCT NAME) FROM (SELECT *, Row_number()OVER(partition BY NAME ORDER BY type) rn FROM Yourtable) a WHERE rn > 1 ```
Please try below query: ``` Select count(distinct(NAME)) from A ```
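The simplest reading of the question is just `COUNT(DISTINCT name)`; a minimal check with Python's `sqlite3`, using the sample rows from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE a (name TEXT, amountpaid INT, type INT)")
con.executemany("INSERT INTO a VALUES (?,?,?)", [
    ('NAME1', 5, 1), ('NAME1', 10, 1), ('NAME1', 3, 1),
    ('NAME2', 5, 2), ('NAME2', 9, 2), ('NAME2', 8, 2),
])
(n,) = con.execute("SELECT COUNT(DISTINCT name) FROM a").fetchone()
print(n)  # 2 -- one number, not one row per name
```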
Counting only number of rows a record being repeated
[ "sql", "sql-server", "t-sql" ]
I've been trying to work this one out for a while now, maybe my problem is coming up with the correct search query. I'm not sure. Anyway, the problem I'm having is that I have a table of data that has a new row added every second (imagine the structure {id, timestamp(datetime), value}). I would like to do a single query for **MySQL** to go through the table and output only the first value of each minute. I thought about doing this with multiple queries with LIMIT and datetime >= (beginning of minute) but with the volume of data I'm collecting that is a lot of queries so it would be nicer to produce the data in a single query. Sample data: ``` id datetime value 1 2015-01-01 00:00:00 128 2 2015-01-01 00:00:01 127 3 2015-01-01 00:00:04 129 4 2015-01-01 00:00:05 127 ... 67 2015-01-01 00:00:59 112 68 2015-01-01 00:01:12 108 69 2015-01-01 00:01:13 109 ``` Where I would want the result to select the rows: ``` 1 2015-01-01 00:00:00 128 68 2015-01-01 00:01:12 108 ``` Any ideas? Thanks! EDIT: Forgot to add, the data, whilst every second, is not reliably on the first second of every minute - it may be :30 or :01 rather than :00 seconds past the minute EDIT 2: A nice-to-have (definitely not required for answer) would be a query that is flexible to also take an arbitrary number of minutes (rather than one row each minute)
``` SELECT t2.* FROM ( SELECT MIN(`datetime`) AS dt FROM tbl GROUP BY DATE_FORMAT(`datetime`,'%Y-%m-%d %H:%i') ) t1 JOIN tbl t2 ON t1.dt = t2.`datetime` ``` [**SQLFiddle**](http://sqlfiddle.com/#!2/96508/2) Or ``` SELECT * FROM tbl WHERE dt IN ( SELECT MIN(dt) AS dt FROM tbl GROUP BY DATE_FORMAT(dt,'%Y-%m-%d %H:%i')) ``` [**SQLFiddle**](http://sqlfiddle.com/#!2/96508/4) ``` SELECT t1.* FROM tbl t1 LEFT JOIN ( SELECT MIN(dt) AS dt FROM tbl GROUP BY DATE_FORMAT(dt,'%Y-%m-%d %H:%i') ) t2 ON t1.dt = t2.dt WHERE t2.dt IS NOT NULL ``` [**SQLFiddle**](http://sqlfiddle.com/#!2/96508/9)
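The same grouping idea can be sketched with Python's `sqlite3`, where `strftime` plays the role of MySQL's `DATE_FORMAT` (ids and values follow the question's sample):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl (id INT, dt TEXT, value INT)")
con.executemany("INSERT INTO tbl VALUES (?,?,?)", [
    (1, '2015-01-01 00:00:00', 128),
    (2, '2015-01-01 00:00:01', 127),
    (67, '2015-01-01 00:00:59', 112),
    (68, '2015-01-01 00:01:12', 108),
    (69, '2015-01-01 00:01:13', 109),
])
rows = con.execute("""
SELECT id, dt, value
FROM tbl
WHERE dt IN (SELECT MIN(dt)                            -- earliest timestamp ...
             FROM tbl
             GROUP BY strftime('%Y-%m-%d %H:%M', dt))  -- ... per minute bucket
ORDER BY dt
""").fetchall()
print(rows)  # [(1, '2015-01-01 00:00:00', 128), (68, '2015-01-01 00:01:12', 108)]
```

For the nice-to-have of an arbitrary N-minute bucket, one option (an assumption, not from the answers) is to group on the epoch seconds divided by the bucket size, e.g. `GROUP BY CAST(strftime('%s', dt) AS INTEGER) / (60 * N)`.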
In MS SQL Server I would use `CROSS APPLY`, but as far as I know MySQL doesn't have it, so we can emulate it. Make sure that you have an index on your `datetime` column. Create a [table of numbers](http://web.archive.org/web/20150411042510/http://sqlserver2000.databases.aspfaq.com/why-should-i-consider-using-an-auxiliary-numbers-table.html), or in your case a table of minutes. If you have a table of numbers starting from 1 it is trivial to turn it into minutes in the necessary range. ``` SELECT tbl.ID ,tbl.`dt` ,tbl.value FROM ( SELECT MinuteValue , ( SELECT tbl.id FROM tbl WHERE tbl.`dt` >= Minutes.MinuteValue ORDER BY tbl.`dt` LIMIT 1 ) AS ID FROM Minutes ) AS IDs INNER JOIN tbl ON tbl.ID = IDs.ID ``` For each minute find one row that has timestamp greater than the minute. I don't know how to return the full row, rather than one column in MySQL in the nested `SELECT`, so at first I'm making a temp table with two columns: `Minute` and `id` from the original table and then explicitly look up rows from original table knowing their `IDs`. [SQL Fiddle](http://sqlfiddle.com/#!2/ae216/8) I've created a table of Minutes in the SQL Fiddle with the necessary values to make example simple. In real life you would have a more generic table. Here is [SQL Fiddle](http://sqlfiddle.com/#!2/b75d9b/4) that uses a table of numbers, just for illustration. In any case, you do need to know in advance somehow the range of dates/numbers you are interested in. It is trivial to make it work for any interval of minutes. If you need results every 5 minutes, just generate a table of minutes that has values not every 1 minute, but every 5 minutes. The main query would remain the same. It may be more efficient, because here you don't join the big table to itself and you don't make calculations on the `datetime` column, so the server should be able to use the index on it. The example that I made assumes that for each minute there is at least one row in the big table. 
If it is possible that there are some minutes that don't have any data at all, you'd need to add an extra check in the `WHERE` clause to make sure that the found row is still within that minute.
Selecting first value of every minute in table
[ "mysql", "sql" ]
I have `table_A` with an `ID` column. ``` ----- ID ----- id1 id2 id3 ----- ``` I have another `table_B` which has `latest_sub_id` for every `id` in `table_A` along with a date. ``` --------------------------------- ID SUB_ID Date --------------------------------- id1 sub_id_1 2015-01-03 id1 sub_id_2 2015-01-10 id2 sub_id_1 2015-01-02 id2 sub_id_2 2015-01-10 id2 sub_id_2 2015-01-12 id2 sub_id_3 2015-01-15 id3 sub_id_1 2015-01-09 id3 sub_id_2 2015-01-25 --------------------------------- ``` I want to write a join query between the two tables, such that I get the `SUB_ID` value for given `ID` value on the `MAX(DATE)`. The result should be: ``` --------------------------------------------------------------- ID SUB_ID --------------------------------------------------------------- id1 sub_id_2 ---> as 10th Jan is the latest date for id1 id2 sub_id_3 ---> as 15th Jan is the latest date for id2 id3 sub_id_2 ---> as 25th Jan is the latest date for id3 --------------------------------------------------------------- ```
You don't really seem to need the first table. One way using a `where` clause is: ``` select b.* from tableb b where not exists (select 1 from tableb b2 where b2.id = b.id and b2.date > b.date); ```
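The `NOT EXISTS` query is easy to verify with Python's `sqlite3` (a subset of the question's rows; the string dates compare correctly here because they are in ISO format):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tableb (id TEXT, sub_id TEXT, date TEXT)")
con.executemany("INSERT INTO tableb VALUES (?,?,?)", [
    ('id1', 'sub_id_1', '2015-01-03'), ('id1', 'sub_id_2', '2015-01-10'),
    ('id2', 'sub_id_2', '2015-01-12'), ('id2', 'sub_id_3', '2015-01-15'),
    ('id3', 'sub_id_1', '2015-01-09'), ('id3', 'sub_id_2', '2015-01-25'),
])
rows = con.execute("""
SELECT b.id, b.sub_id
FROM tableb b
WHERE NOT EXISTS (SELECT 1 FROM tableb b2        -- no later row for the same id ...
                  WHERE b2.id = b.id AND b2.date > b.date)
ORDER BY b.id
""").fetchall()
print(rows)  # [('id1', 'sub_id_2'), ('id2', 'sub_id_3'), ('id3', 'sub_id_2')]
```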
Try ... ``` SELECT A.ID, B.SUB_ID, B.Date FROM table_A AS A INNER JOIN table_B AS B ON (B.ID = A.ID) INNER JOIN ( SELECT ID, MAX(Date) AS MaxDate FROM table_B GROUP BY ID ) AS MX ON (MX.ID = B.ID AND MX.MaxDate = B.Date) ```
SQL join between two tables with MAX in where clause
[ "sql", "db2" ]
Suppose I have a table like this: ``` link_ids | length ------------+----------- {1,4} | {1,2} {2,5} | {0,1} ``` How can I find the min length for each `link_ids`? So the final output looks something like: ``` link_ids | length ------------+----------- {1,4} | 1 {2,5} | 0 ```
Assuming a table like: ``` CREATE TABLE tbl ( link_ids int[] PRIMARY KEY -- which is odd for a PK , length int[] , CHECK (length <> '{}'::int[] IS TRUE) -- rules out null and empty in length ); ``` Query for Postgres 9.3 or later: ``` SELECT link_ids, min(len) AS min_length FROM tbl t, unnest(t.length) len -- implicit LATERAL join GROUP BY 1; ``` **Or** create a tiny function (Postgres 8.4+): ``` CREATE OR REPLACE FUNCTION arr_min(anyarray) RETURNS anyelement LANGUAGE sql IMMUTABLE PARALLEL SAFE AS 'SELECT min(i) FROM unnest($1) i'; ``` Only add `PARALLEL SAFE` in Postgres 9.6 or later. Then: ``` SELECT link_ids, arr_min(length) AS min_length FROM t; ``` The function can be inlined and is *fast*. **Or**, for **`integer`** arrays of *trivial length*, use the additional module [`intarray`](https://www.postgresql.org/docs/current/intarray.html) and its built-in [`sort()` function](https://www.postgresql.org/docs/current/intarray.html#INTARRAY-FUNC-TABLE) (Postgres 8.3+): ``` SELECT link_ids, (sort(length))[1] AS min_length FROM t; ```
Assuming that the table name is `t` and each value of `link_ids` is unique. ``` select link_ids, min(len) from (select link_ids, unnest(length) as len from t) as t group by link_ids; link_ids | min ----------+----- {2,5} | 0 {1,4} | 1 ```
Postgres - find min of array
[ "sql", "arrays", "postgresql", "min", "unnest" ]
By experimentation, and surprisingly, I have found out that LEFT JOINing a point-table is much faster on large tables than simply assigning a single value to a column. By a point-table I mean a table 1x1 (1 row and 1 column). **Approach 1.** By a *simple assignment of a value*, I mean this (slower): ``` SELECT A.*, 'Value' as NewColumn FROM Table1 A ``` **Approach 2.** By *left-joining* a point-table, I mean this (faster): ``` WITH B AS (SELECT 'Value' as 'NewColumn') SELECT * FROM Table1 A LEFT JOIN B ON A.ID <> B.NewColumn ``` ![Join table to a point](https://i.stack.imgur.com/fGuav.png) Now the core of my question. Can someone advise me how to get rid of the whole ON clause: `ON A.ID <> B.NewColumn`? Checking the joining condition seems an unnecessary waste of time, because the key of table A must not equal the key of table B. It would also throw rows out of the results if A.ID had the same value as 'Value'. Removing that condition, or maybe changing `<>` to the `=` sign, seems further room to improve the join's performance. **Update February 23, 2015** Bounty question addressed to performance experts: which of the approaches mentioned in my question and the answers is the fastest? **Approach 1** Simple assignment of a value, **Approach 2** Left-joining a point-table, **Approach 3** Cross-joining a point-table (thanks to the answer of Gordon Linoff), **Approach 4** Any other approach which may be suggested during the bounty period. As I have empirically measured the query execution time in seconds for the 3 approaches, the second approach with LEFT JOIN is the fastest, then the CROSS JOIN method, and last the simple assignment of a value. Surprising as it is. A performance expert with a Solomon's sword is needed to confirm or deny it.
I'm surprised this is faster for a simple expression, but you seem to want a `cross join`: ``` WITH B AS (SELECT 'Value' as NewColumn) SELECT * FROM Table1 A CROSS JOIN B; ``` I use this construct to put "parameters" in queries (values that can easily be changed). However, I don't see why it would be faster. If the expression is more complicated (such as a subquery or very complicated calculation), then this method only evaluates it once. In the original query, it would normally be evaluated only once, but there might be cases where it is evaluated for each row.
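As a quick sanity check of the CROSS JOIN shape (not of its relative speed), here is a minimal sketch using Python's bundled sqlite3 with a made-up two-row `Table1`:

```python
import sqlite3

# Hypothetical two-row stand-in for Table1.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Table1 (ID INTEGER, Name TEXT)")
conn.executemany("INSERT INTO Table1 VALUES (?, ?)", [(1, "a"), (2, "b")])

# CROSS JOIN against a one-row derived table attaches the constant
# to every row; no ON clause is needed and no rows are lost.
rows = conn.execute("""
    WITH B AS (SELECT 'Value' AS NewColumn)
    SELECT A.ID, A.Name, B.NewColumn
    FROM Table1 A CROSS JOIN B
""").fetchall()
print(rows)  # [(1, 'a', 'Value'), (2, 'b', 'Value')]
```

Because B has exactly one row, the cross product has the same cardinality as Table1, which is why no join condition is required.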
You can also try with `CROSS APPLY`: ``` SELECT A.*, B.* FROM Table1 A CROSS APPLY (SELECT 'Value' as 'NewColumn') B ```
Improving performance of adding a column with a single value
[ "", "sql", "sql-server", "join", "" ]
I'm trying to figure out how to make it so that the second `s_pin` returns its contents in reverse. So `"Coded Pin"` needs to be `s_pin` in reverse. ``` SELECT s_last||', '|| s_first||LPAD(ROUND(MONTHS_BETWEEN(SYSDATE, s_dob)/12,0),22,'*' ) AS "Student Name and Age", s_pin AS "Pin", Reverse(s_pin) AS "Coded Pin" FROM student ORDER BY s_last; ``` Output would look like this: ![enter image description here](https://i.stack.imgur.com/DHnXO.png)
Oracle's `reverse` function accepts a `char`, not a `number`, so you'd have to convert it: ``` SELECT s_last||', '|| s_first||LPAD(ROUND(MONTHS_BETWEEN(SYSDATE, s_dob)/12,0),22,'*' ) AS "Student Name and Age", s_pin AS "Pin", REVERSE(TO_CHAR(s_pin)) AS "Coded Pin" FROM student ORDER BY s_last; ``` **NOTE** `REVERSE` is an undocumented function. If you are using it in your application, you might have a risk in future, "IF" this feature is removed in a later version that you wish to upgrade to. And it's reasonably likely that they might end up being documented functions in future, who knows. So, use it at your own risk.
`TO_NUMBER (REVERSE( '' || s_pin)) AS "Coded Pin"` should work for you. If you don't need to calculate with *Coded Pin*, you can omit the `TO_NUMBER` function See documentation of [`TO_NUMBER`](http://docs.oracle.com/cd/B28359_01/olap.111/b28126/dml_functions_2117.htm#OLADM695) and [`REVERSE`](http://psoug.org/definition/reverse.htm) ***p.s.*** as per comment: note that the use of `TO_CHAR (s_pin)` is the better option. Thx!
Reversing numbers in Oracle SQL
[ "", "sql", "oracle", "select", "" ]
I have two tables: ``` EMPLOYEES ===================================================== ID NAME SUPERVISOR LOCATION SALARY ----------------------------------------------------- 34 John AL 100000 17 Mike 34 NY 75000 5 Alan 34 LE 25000 10 Dave 5 NY 20000 BONUS ======================================== ID Bonus ---------------------------------------- 17 5000 34 5000 10 2000 ``` I have to write a query which returns a list of the highest paid employee in each location, with their names, salary and salary+bonus. Ranking should be based on salary plus bonus. So I wrote this query: ``` select em.name as name, em.salary as salary, bo.bonus as bonus, max(em.salary+bo.bonus) as total from employees as em join bonus as bo on em.empid = bo.empid group by em.location ``` But I'm getting wrong names, and the query doesn't return the one employee without a bonus (empid = 5 in the employees table) who has the highest salary in his location (25000 + 0 bonus).
You can either do ``` select em.location, em.name as name, em.salary as salary, IFNULL(bo.bonus,0) as bonus, max(em.salary+IFNULL(bo.bonus,0)) as total from employees as em left join bonus as bo on em.empid = bo.empid group by em.location; ``` This query however relies on a group by behavior that is specific to MySQL and would fail in most other databases (and also in later versions of MySQL if the setting `ONLY_FULL_GROUP_BY` is enabled). I would suggest a query like below instead: ``` select em.location, em.name as name, em.salary as salary, IFNULL(bo.bonus,0) as bonus, highest.total from employees as em left join bonus as bo on em.empid = bo.empid join ( select em.location, max(em.salary+IFNULL(bo.bonus,0)) as total from employees as em left join bonus as bo on em.empid = bo.empid group by em.location ) highest on em.LOCATION = highest.LOCATION and em.salary+IFNULL(bo.bonus,0) = highest.total; ``` Here you determine the highest salary+bonus for each location and use that result as a derived table in a join, keeping only the employee with the highest total for each location. See this [SQL Fiddle](http://www.sqlfiddle.com/#!2/631378/1)
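The derived-table version is portable enough to check with Python's sqlite3; the sample rows below mirror the question's data, so the employee without a bonus row (Alan) must still appear:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (empid INT, name TEXT, location TEXT, salary INT);
    CREATE TABLE bonus (empid INT, bonus INT);
    INSERT INTO employees VALUES (34,'John','AL',100000), (17,'Mike','NY',75000),
                                 (5,'Alan','LE',25000),  (10,'Dave','NY',20000);
    INSERT INTO bonus VALUES (17,5000), (34,5000), (10,2000);
""")

# Highest salary+bonus per location; LEFT JOIN keeps bonus-less employees.
rows = conn.execute("""
    SELECT em.location, em.name, em.salary + IFNULL(bo.bonus, 0) AS total
    FROM employees em
    LEFT JOIN bonus bo ON em.empid = bo.empid
    JOIN (SELECT em.location, MAX(em.salary + IFNULL(bo.bonus, 0)) AS total
          FROM employees em
          LEFT JOIN bonus bo ON em.empid = bo.empid
          GROUP BY em.location) highest
      ON em.location = highest.location
     AND em.salary + IFNULL(bo.bonus, 0) = highest.total
    ORDER BY em.location
""").fetchall()
print(rows)
```

Alan shows up for LE with a total of 25000 even though he has no bonus row, which was exactly the asker's missing case.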
Maybe try using a `left join`: ``` select em.name as name, em.salary as salary, bo.bonus as bonus, max(em.salary+bo.bonus) as total from employees as em left join bonus as bo on em.empid = bo.empid group by em.location ```
Select maximum value from one column by second column
[ "", "mysql", "sql", "" ]
I'm trying to to skip the first and last rows in my SQL query (i.e. skip where `sequence=0` and `sequence=4` in this case) but my SQL query does not seem to be working. Any idea why? The logic seems correct: ``` SELECT * FROM waypoint WHERE id NOT IN( (SELECT MIN(ID) FROM waypoint), (SELECT MAX(ID) FROM waypoint) ) AND booking_id="1"; ``` [**MY SQL FIDDLE IS HERE**](http://sqlfiddle.com/#!2/82f84/8)
You have 7 rows. Their id's are from 1 to 7. Your `not in` clause filters out 1 and 7. If you want to skip first and last with `booking_id=1` you should add this clause to subselects: ``` SELECT * FROM waypoint WHERE id NOT IN( (SELECT MIN(ID) FROM waypoint where booking_id="1"), (SELECT MAX(ID) FROM waypoint where booking_id="1") ) AND booking_id="1"; ```
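The fix is easy to verify outside MySQL; a small sketch with Python's sqlite3, using made-up waypoint rows in the spirit of the fiddle:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE waypoint (id INT, booking_id INT);
    INSERT INTO waypoint VALUES (1,2),(2,1),(3,1),(4,1),(5,1),(6,2),(7,2);
""")

# The subselects must carry the same booking_id filter, otherwise
# MIN/MAX come from the whole table (1 and 7) instead of 2 and 5.
rows = conn.execute("""
    SELECT id FROM waypoint
    WHERE id NOT IN ((SELECT MIN(id) FROM waypoint WHERE booking_id = 1),
                     (SELECT MAX(id) FROM waypoint WHERE booking_id = 1))
      AND booking_id = 1
    ORDER BY id
""").fetchall()
print(rows)  # [(3,), (4,)]
```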
You need to copy the second part of your `where` clause into each `select` of your `not in` clause, because each `select` is evaluated on its own and has access to the whole dataset. So you need to add the restriction logic to each `select`. Otherwise your `MAX(ID)` will return 7 because it sees the whole dataset: ``` SELECT * FROM waypoint WHERE id NOT IN( (SELECT MIN(ID) FROM waypoint where booking_id="1"), (SELECT MAX(ID) FROM waypoint where booking_id="1") ) AND booking_id="1"; ```
SELECT to Skip First and Last Rows and Select in Between
[ "", "mysql", "sql", "" ]
I am attempting to take a delimited string and return each substring between delimiters. This is used in a bigger function I am writing thus the delimiter is usually a variable. A very common delimiter that we use is ', ' and thus that has been my number one test case. I have different problems depending on how I format the delimiter in the regular expression. The following are the different things I have tried and the results: ``` select REGEXP_SUBSTR ('foo bar', '[^' || '(, )' || ']+', 1, LEVEL) item from dual connect by REGEXP_SUBSTR ('foo bar', '[^' || '(, )' || ']+', 1, LEVEL select REGEXP_SUBSTR ('foo bar', '[^' || '(,\s)' || ']+', 1, LEVEL) item from dual connect by REGEXP_SUBSTR ('foo bar', '[^' || '(,\s)' || ']+', 1, LEVEL select REGEXP_SUBSTR ('foo bar', '[^' || '(,[:blank:])' || ']+', 1, LEVEL) item from dual connect by REGEXP_SUBSTR ('foo bar', '[^' || '(,[:blank:])' || ']+', 1, LEVEL ``` The first and third attempt separates 'foo' and 'bar' on the space even though there is no comma. The latter attempt works as hoped keeping 'foo' and 'bar' on the same line, but if the string has an s in it (e.g. horse) the result is 'hor' 'e'. My understanding of regular expressions and regexp\_substr tells me that ``` '[^(,\s)]+' ``` should separate the strings whenever it comes across a comma and then whitespace. But clearly this is not happening. I have yet to find anyone with a similar issue as me. Any help would be much appreciated For reference I am working in SQL Developer on an Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
You're confused about how the matching character list works. [From the documentation](http://docs.oracle.com/cd/E11882_01/appdev.112/e41502/adfns_regexp.htm#CHDIEGEI): > [char...] Matching Character List > > Matches any single character in the list within the brackets. In the list, all > operators except these are treated as literals: > > Range operator: - > POSIX character class: [: :] > POSIX collation element: [. .] > POSIX character equivalence class: [= =] So in your pattern `'[^(,\s)]+'` each of those characters are treated as literals; the `\` is not making the `s` be treated as a whitespace character, it's just an `s`, so it is matched in `horse`. And the parentheses are also literals, so they are not enclosing the pair of characters in your delimiter, each just matches an actual parenthesis in your string. In your first and third attempt you get a match on just a space because each character in the match list is independent, they aren't combined by the parentheses as you're expecting. As far as I'm aware you can't negate a pair of values (though regex isn't a strong point so there's a good chance I'm wrong about that). One option is to replace all appearances of your delimiter with a character you know won't be present - depending on your actual data, you might have to pick an unprintable character or an obscure Unicode character - and then use that in the regex. For example, using bind variables for brevity and a hash as a character I know isn't present: ``` variable string varchar2(20); variable delimiter varchar2(2); exec :string := 'foo bar, the cad, left'; exec :delimiter := ', '; select regexp_substr(replace(:string, :delimiter, '#'), '[^#]+', 1, level) as item from dual connect by regexp_substr(replace(:string, :delimiter, '#'), '[^#]+', 1, level) is not null; ITEM -------------------- foo bar the cad left ```
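The character-class pitfall isn't Oracle-specific; a short Python sketch makes the same point (note that unlike Oracle, Python's `re` does treat `\s` specially inside a class, so a literal space is used here to mimic the Oracle behaviour):

```python
import re

s = "foo bar, horse, left"

# A bracket list negates single characters, not the sequence ", ",
# so this also splits "foo bar" apart on its space:
naive = re.findall(r"[^, ]+", s)

# Workaround from the answer: replace the two-character delimiter with
# a single character known not to occur, then split on that:
fixed = re.findall(r"[^#]+", s.replace(", ", "#"))

print(naive)  # ['foo', 'bar', 'horse', 'left']
print(fixed)  # ['foo bar', 'horse', 'left']
```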
**Use a text pattern which utilizes a non-greedy quantifier** March through a string looking for multiple occurances of the pattern, `'(.+?)(, |$)'`: * The pattern,`(.+?)`, is a character group. The `.` refers to any/all characters and the `+?` is a non-greedy quantifier for 1 or more characters. * The pattern, `(, |$)`, looks for an occurrance of the `', '` or (alternation operator, `|`) the end of string, `$`. This is the 2nd character group. Finally, use a sub-expression to reference only the 1st character group ``` SCOTT@dev> VAR tval VARCHAR2(500); SCOTT@dev> EXECUTE :tval := 'foo,bar, great'; PL/SQL procedure successfully completed. SCOTT@dev> SELECT regexp_substr(:tval,'(.+?)(, |$)', 1, LEVEL, NULL, 1) t_val 2 FROM dual 3 CONNECT BY regexp_substr(:tval,'(.+?)(, |$)', 1, LEVEL, NULL, 1) IS NOT NULL 4 / T_VAL -------- foo,bar great SCOTT@dev> VAR tval VARCHAR2(500); SCOTT@dev> EXECUTE :tval := 'foo, bar, great'; PL/SQL procedure successfully completed. SCOTT@dev> / T_VAL -------- foo bar great SCOTT@dev> VAR tval VARCHAR2(500); SCOTT@dev> EXECUTE :tval := 'foo,bar,great'; PL/SQL procedure successfully completed. SCOTT@dev> / T_VAL -------- foo,bar,great SCOTT@dev> VAR tval VARCHAR2(500); SCOTT@dev> EXECUTE :tval := ',foo, bar, great'; PL/SQL procedure successfully completed. SCOTT@dev> / T_VAL -------- ,foo bar great ```
Regular Expression Substring in SQL on two character delimiter
[ "", "sql", "regex", "oracle", "substring", "whitespace", "" ]
I have a table with values as given below: ``` MemberID Location DateJoined 79925 183 2013-07-01 00:00:00.000 79925 184 2013-07-02 00:00:00.000 65082 184 2012-07-22 00:00:00.000 72046 183 2013-05-01 00:00:00.000 72046 184 2013-05-10 00:00:00.000 ... ``` Here i need to check if the above table has locationID 183 & 184. Based on these results i need to create a new table as below given ``` MemberID Benifit 79925 Yes 65082 No 72046 Yes ```
If I understand you correctly: ``` select MemberID, case when Sum(x) = 2 then 'Yes' else 'No' end Benifit from ( SELECT *, CASE WHEN Location in (183,184) THEN 1 ELSE 0 END AS x FROM MyTable ) t group by MemberID ```
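A runnable sketch of the flag-and-sum idea, using Python's sqlite3 and the question's rows (keeping the question's `Benifit` spelling; it assumes each (MemberID, Location) pair appears at most once):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE MyTable (MemberID INT, Location INT);
    INSERT INTO MyTable VALUES (79925,183),(79925,184),(65082,184),
                               (72046,183),(72046,184);
""")

# One flag per row for the two locations of interest; a member holding
# both 183 and 184 sums to 2.
rows = conn.execute("""
    SELECT MemberID,
           CASE WHEN SUM(CASE WHEN Location IN (183,184) THEN 1 ELSE 0 END) = 2
                THEN 'Yes' ELSE 'No' END AS Benifit
    FROM MyTable
    GROUP BY MemberID
    ORDER BY MemberID
""").fetchall()
print(rows)  # [(65082, 'No'), (72046, 'Yes'), (79925, 'Yes')]
```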
``` INSERT INTO table2 (MemberID, Benifit) SELECT t1.MemberID, CASE WHEN t2.MemberID IS NULL OR t3.MemberID IS NULL THEN 'NO' ELSE 'YES' END AS Benefit FROM table1 t1 LEFT JOIN table1 t2 ON (t2.MemberID = t1.MemberID AND t2.Location = 183) LEFT JOIN table1 t3 ON (t3.MemberID = t1.MemberID AND t3.Location = 184) WHERE t1.Location IN (183,184) GROUP BY t1.MemberID, t2.MemberID, t3.MemberID ``` <http://sqlfiddle.com/#!3/ece758/7>
How to group a value(s) in a table based on a condition
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I would like to ask what exactly this PL/SQL code is doing: ``` DECLARE table_doesnot_exists EXCEPTION; PRAGMA EXCEPTION_INIT (table_doesnot_exists, -942); BEGIN EXECUTE IMMEDIATE 'DROP TABLE TABLE_NAME'; EXCEPTION WHEN table_doesnot_exists THEN NULL; END; ``` I partially understand this, but I'm curious about this `-942`.
I've gone through and commented the block. Basically your code tests custom error handling. ``` DECLARE --Define the custom exception table_doesnot_exists EXCEPTION; /* *In order to use the custom exception we have to declare it (typically used with ORA messages *that have no predefined name). Syntax: PRAGMA EXCEPTION_INIT (exception_name, -ORA_ERR_#); *In this case: 00942, 00000, "table or view does not exist" *Original code cited 942 but the oerr code is actually 00942, the leading 0's in this case are irrelevant */ PRAGMA EXCEPTION_INIT (table_doesnot_exists, -942); BEGIN --Attempt to drop the table EXECUTE IMMEDIATE 'DROP TABLE test_table_does_not_exist'; EXCEPTION --If the table we attempted does not exist then our custom exception will catch the ORA-00942 WHEN table_doesnot_exists --Now lets throw out a debug output line THEN DBMS_OUTPUT.PUT_LINE('table does not exist'); END; / ```
It is just the oracle exception code for table does not exist From the oracle documentation: ``` ORA-00942 table or view does not exist Cause: The table or view entered does not exist, a synonym that is not allowed here was used, or a view was referenced where a table is required. Existing user tables and views can be listed by querying the data dictionary. Certain privileges may be required to access the table. If an application returned this message, the table the application tried to access does not exist in the database, or the application does not have access to it. ``` for more info <http://www.techonthenet.com/oracle/errors/ora00942.php>
Asking for an explanation of this PL/SQL code
[ "", "sql", "oracle", "plsql", "" ]
I have two tables with a Bit column, coming from different selects in SQL Server **Table 1** ``` ID Name Bit .... ............ ..... 1 Enterprise 1 False 2 Enterprise 2 True 3 Enterprise 3 False ``` **Table 2** ``` ID Name Bit .... ............ ....... 1 Enterprise 1 True 2 Enterprise 2 False 3 Enterprise 3 False ``` **expected result** ``` ID Name Bit .... ............ ...... 1 Enterprise 1 True 2 Enterprise 2 True 3 Enterprise 3 False ``` The problem is making a union of the two tables in which, for the Bit column, True values prevail. Any ideas?
I would suggest casting it to an int: ``` select id, name, cast(max(bitint) as bit) as bit from ((select id, name, cast(bit as int) as bitint from table1 ) union all (select id, name, cast(bit as int) as bitint from table2 ) ) t12 group by id, name; ``` With your data, you can also do it using `join`: ``` select t1.id, t1.name, (t1.bit | t2.bit) as bit from table1 t1 join table2 t2 on t1.id = t2.id and t1.name = t2.name; ``` This assumes all the rows match between the two tables (as in your sample data). You can do something similar with a `full outer join` if they don't.
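Both shapes can be tried quickly with Python's sqlite3 (SQLite has no bit type, so 0/1 integers stand in for it); here is the join form:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t1 (id INT, name TEXT, bit INT);
    CREATE TABLE t2 (id INT, name TEXT, bit INT);
    INSERT INTO t1 VALUES (1,'Enterprise 1',0),(2,'Enterprise 2',1),(3,'Enterprise 3',0);
    INSERT INTO t2 VALUES (1,'Enterprise 1',1),(2,'Enterprise 2',0),(3,'Enterprise 3',0);
""")

# Bitwise OR across the join: the result is 1 if either side is 1.
rows = conn.execute("""
    SELECT t1.id, t1.name, (t1.bit | t2.bit) AS bit
    FROM t1 JOIN t2 ON t1.id = t2.id AND t1.name = t2.name
    ORDER BY t1.id
""").fetchall()
print(rows)
```

This reproduces the expected result table: True wherever either source row was True.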
You can make a left join on the other table to exclude the records that should be used from the other table: ``` select t1.ID, t1.Name, t1.Bit from [Table 1] t1 left join [Table 2] t2 on t2.ID = t1.ID where t1.Bit = 1 or t2.Bit = 0 union all select t2.ID, t2.Name, t2.Bit from [Table 2] t2 left join [Table 1] t1 on t1.ID = t2.ID where t1.bit = 0 and t2.Bit = 1 ``` (If there is a `True` in both tables or a `False` in both tables for an item, the record from `Table 1` is used.)
How to make a union of two columns with different field values
[ "", "sql", "sql-server", "database", "" ]
This is from a school assignment with the following restrictions: 1. No nested queries 2. No aggregate queries Essentially I need to get all names that appear 3 or more times in a table. For example, if my table looks like this: ``` uniqueid |name | some other stuff ----------------------- 0 |Bob | ... 1 |Bob | ... 2 |Bob | ... 3 |Tim | ... 4 |Tim | ... 5 |John | ... 6 |John | ... 7 |Bill | ... 8 |Tim | ``` My desired output is: ``` name ----- Bob Tim ``` This is trivial with aggregate queries: ``` SELECT name FROM table GROUP BY name HAVING COUNT(*) >= 3; ``` But for whatever reason the assignment explicitly prevents me from using (what I perceive to be) the most appropriate tool for the job, and I can't figure out how it could be done.
No aggregate functions, no analytic functions, no nested queries, works on any Oracle you can probably get. :) ``` select distinct name from tbl where level = 3 connect by prior rowid < rowid and prior name = name and level < 4 ``` [fiddle](http://sqlfiddle.com/#!4/85e8e/19)
Turns out the solution that they were looking for was to just brute-force it: ``` SELECT DISTINCT t1.name FROM table t1, table t2, table t3 WHERE t1.name = t2.name AND t2.name = t3.name AND t1.name = t3.name AND t1.uniqueid <> t2.uniqueid AND t2.uniqueid <> t3.uniqueid AND t1.uniqueid <> t3.uniqueid; ```
SQLPlus - get number of records without aggregate queries
[ "", "sql", "oracle", "" ]
I have an `Employee` table and how it works is, when a new employee is added, the column `[DOR]` will be null and `[Status]` will be 1. When the employee is being relieved from the company, the `[DOR]` column value will be the date which he/she left the company and `[Status]` is set to 0. I need to fetch the details of all the employees who were available in a given date. The employees with `Status` as 1 and those who are not yet relieved till that date have to be fetched. But I am not able to do the same as when equating with `DOR`, its null value and not returning any of the rows. ![Data in Employee table](https://i.stack.imgur.com/tsNSf.png) If I give the input as `2015-02-10`, it should fetch the two records and when I give `2015-02-15`, it should fetch only first record. ``` CREATE TABLE [Employee] ( [EmployeeId] [int] IDENTITY(1000,1) NOT NULL, [Name] [varchar](50) NOT NULL, [RoleId] [int] NOT NULL, [Email] [varchar](50) NULL, [Contact] [varchar](50) NULL, [DOJ] [date] NOT NULL, [DOR] [date] NULL, [Status] [bit] NOT NULL, [Salary] [decimal](18, 2) NULL ) INSERT [dbo].[Employee] ([EmployeeId], [Name], [RoleId], [Email], [Contact], [DOJ], [DOR], [Status], [Salary]) VALUES (1001, N'Employee 1', 3, N'', N'', CAST(0x8D390B00 AS Date), NULL, 1, CAST(6000.00 AS Decimal(18, 2))) INSERT [dbo].[Employee] ([EmployeeId], [Name], [RoleId], [Email], [Contact], [DOJ], [DOR], [Status], [Salary]) VALUES (1002, N'Employee 2', 7, N'', N'', CAST(0x8D390B00 AS Date), CAST(0x9A390B00 AS Date), 0, CAST(4000.00 AS Decimal(18, 2))) ```
You need to use `IS NULL` operator instead of `= NULL` in your condition, like this: ``` SELECT * FROM Employee WHERE DOJ <= '2015-02-15' AND (DOR IS NULL OR DOR > '2015-02-15') ```
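The difference between comparing with `= NULL` (never true) and `IS NULL` is easy to demonstrate with Python's sqlite3; the dates below are made up but shaped like the question's:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Employee (EmployeeId INT, Name TEXT, DOJ TEXT, DOR TEXT);
    INSERT INTO Employee VALUES (1001,'Employee 1','2015-01-01',NULL),
                                (1002,'Employee 2','2015-01-01','2015-02-14');
""")

def active_on(day):
    # Employees who had joined by `day` and either are still active
    # (DOR IS NULL) or left after `day`.
    return [r[0] for r in conn.execute(
        "SELECT Name FROM Employee "
        "WHERE DOJ <= ? AND (DOR IS NULL OR DOR > ?) ORDER BY EmployeeId",
        (day, day))]

print(active_on("2015-02-10"))  # ['Employee 1', 'Employee 2']
print(active_on("2015-02-15"))  # ['Employee 1']
```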
Something like this? ``` select EmployeeId, Name from Employee where DOJ <= @searchDate and (DOR is null or DOR > @searchDate) ```
SQL Server where clause with null column
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I'm quite confused about the best practice regarding handling unknown exceptions in Oracle. I could do something like this: ``` BEGIN --do something EXCEPTION WHEN NO_DATA_FOUND THEN raise_application_error etc WHEN OTHERS THEN raise; END; ``` This seems to be recommended in a fair few blogs and sites, and is even discussed in the [Oracle documentation](http://docs.oracle.com/cd/E11882_01/appdev.112/e25519/errors.htm#LNPLS99871): > Avoid unhandled exceptions by including an OTHERS exception handler at > the top level of every PL/SQL program. > > Make the last statement in the OTHERS exception handler either RAISE > or an invocation of the RAISE\_APPLICATION\_ERROR procedure. (If you do > not follow this practice, and PL/SQL warnings are enabled, then you > get PLW-06009.) For information about RAISE or an invocation of the > RAISE\_APPLICATION\_ERROR, see "Raising Exceptions Explicitly". But I also know it's been mentioned in a few places that this is fairly horrible to do, e.g. [Ask Tom](https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:1155066278457): > I truly wish we didn't even support WHEN OTHERS. > > You should only catch the exceptions you are expecting and can do > something about. Let the others propagate out so you can detect them > (so you see them) So my question is simply: Do I need a *when others* clause which logs and raises *every, single time* I have some use of *data manipulation language* (e.g. an insert/update/delete)? If not, when would I want to avoid it?
As with anything, it depends. Generally, my bias would be to only catch those exceptions that you can either handle or that you can add additional information/ context to. If you know that you have a `SELECT INTO` that may return 0 rows, for example, it makes sense to handle the `no_data_found` exception if you can provide a reasonable default and continue running. If you can add additional context to the exception, generally by making the text of the error message more meaningful ("Customer cannot be found" rather than "No data found") or by including things like the value of local variables that would be helpful for debugging. It may make sense to design your code such that you always have a `WHEN OTHERS` exception handler that catches unexpected exceptions, logs them to a table (or a file) along with appropriate context (the values of local variables, for example), **and then re-throws them**. If you do this consistently, you'll end up with some pretty verbose error logging that gives you a lot of information about the program state at the time an unexpected exception was thrown. Unfortunately, in the vast majority of cases, the teams that implement and maintain these sorts of systems lose their discipline along the way and the use of `WHEN OTHERS` leads to far less maintainable systems. If you have a generic `WHEN OTHERS` that does not end with a `RAISE` (or `RAISE_APPLICATION_ERROR`), your code will silently swallow exceptions. The caller won't know that something went wrong and will continue along thinking that everything is OK. Inevitably, though, some future step will fail because the earlier silent failure left the system in an unexpected state. If you have a `WHEN OTHERS` at the end of a large block that has dozens of SQL statements and just a generic `RAISE`, you'll lose the information about what line the actual error occurred on.
Catching all unhandled exceptions at a particular tier can be appropriate in these scenarios (likely not a complete list): * You want to log the exception and then rethrow it. * You want to rethrow it with a more specific contextual error message. For example you might want to provide a message that provides information such as what parameters were passed to the procedure. * You want to hide the details from the caller. Perhaps due to security concerns and want to ensure an application doesn't have access to the real exception that might reveal sensitive details. If these procedures are called from an application, it is probably best to let all of them bubble up to the application, and let the application decide where/when to handle/log/wrap them. Usually the application employs a similar technique. It often has a handler for all unhandled exceptions, logs the full exception/stack, and then wraps them in a generic error to display to the user, thus hiding potentially sensitive information from the original error, and providing the user more concrete direction such as "If errors persist, contact support". **Here's where you can cause headaches for application programmers:** You catch an exception at the SP layer, then rethrow a generic error. While it's always best to code defensively and avoid exceptions, sometimes the application programmer has no choice but to literally `try`, knowing that in certain circumstances an exception will occur, and then write code specifically to handle it. If you wrap the exception in a generic exception, then the programmer can't address specific error scenarios, because you've hidden them all under the same bucket. Additionally, the log at the application level would usually contain the full stack trace, and at the deepest level will be the error bubbled up from the database call, which will be wrapped in your generic error, thus hiding away what the true cause of the problem was. This can be a huge problem when trying to address difficult to reproduce errors, when you really need detailed logs that allow you to see the true error so you have an idea of what the problem might be. Of course not all app programmers will think that way, because they don't all employ the same technique. However, any decent programmer should know how to wrap the errors that come from the database in a generic fashion, if that is what they choose to do. Unwrapping exceptions, on the other hand, is often more difficult or impossible depending on what was omitted when they were originally wrapped. This is why IMO it's better to err on the side of not wrapping exceptions until you are at the layer that interacts with the user.
Do I need when others exception handling for all DML?
[ "", "sql", "oracle", "plsql", "exception", "" ]
![example](https://i.stack.imgur.com/zqrvd.png) I need to get all states (1,2,3,4) without duplicates: ``` 1 xxx 2 yyy 3 zzz 4 jjj ``` My first and only idea was to get all states and group them: ``` select state1 as state,desc1 as desc from table where id=X Union select state2 as state,desc2 as desc from table where id=X ``` This gets 6 rows in the example. So, to discard the duplicates I tried to use an alias: ``` select state,desc from ( select state1 as state,desc1 as desc from table where id=X Union select state2 as state,desc2 as desc from table where id=X ) group by state; ``` But I got the error "not a GROUP BY expression". I saw similar questions but I can't resolve the problem.
The UNION should remove any duplicates. If it doesn't then you should check the data -- maybe you have extra spaces in the text columns. Try something like this. ``` select state1 as state,TRIM(desc1) as desc from table where id=X Union select state2 as state,TRIM(desc2) as desc from table where id=X ```
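This is easy to reproduce with Python's sqlite3: a trailing space defeats UNION's duplicate removal, and TRIM restores it (`desc` is renamed `descr` here because it is a reserved word):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (state1 INT, desc1 TEXT, state2 INT, desc2 TEXT)")
conn.execute("INSERT INTO t VALUES (1, 'xxx', 1, 'xxx ')")  # trailing space in desc2

# Without TRIM, 'xxx' and 'xxx ' are distinct, so UNION keeps both:
dup = conn.execute("""
    SELECT state1 AS state, desc1 AS descr FROM t
    UNION
    SELECT state2, desc2 FROM t
""").fetchall()

# With TRIM, the values match and UNION collapses them to one row:
clean = conn.execute("""
    SELECT state1 AS state, TRIM(desc1) AS descr FROM t
    UNION
    SELECT state2, TRIM(desc2) FROM t
""").fetchall()

print(len(dup), len(clean))  # 2 1
```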
All the select-list items have to either be in the group by, or be aggregates. You could include both `state, desc` in the group-by clause, but it would be neater to use `distinct` instead; however, `union` (without `all`) suppresses duplicates anyway, so neither is needed here. As Bluefeet mentioned elsewhere, `desc` is a keyword and has a meaning in order-by clauses, so it is not a good name for a column or alias. This gets four rows, not six: ``` select state1 as state_x, desc1 as desc_x from t42 where id = 'X' union select state2 as state_x, desc2 as desc_x from t42 where id = 'X'; | STATE_X | DESC_X | |---------|--------| | 1 | xxx | | 2 | yyy | | 3 | zzz | | 4 | jjj | ``` [SQL Fiddle](http://sqlfiddle.com/#!4/5b900/2). It isn't clear why you think you're getting six rows, or what you are really doing differently.
group by alias in Oracle
[ "", "sql", "oracle", "group-by", "oracle10g", "" ]
My stored procedure is: ``` ALTER PROCEDURE [dbo].[Asbabbazi_A] @name_product nvarchar(50), @first_price int, @final_price int, @collection_1 nvarchar(30), @id_state tinyint AS BEGIN DECLARE @SQLstring nvarchar(1000) DECLARE @PARAMS nvarchar(1000) set @SQLstring = 'SELECT IDproduct,name_product,first_price,final_price,max_registered_price, date_record_shamsi,final_date_view_shamsi, count_views,image_1,collection_1 from Table_asbabbazi where active=0 ' if(@name_product != 'no name') set @SQLstring = @SQLstring + ' AND (name_product LIKE %@name_product%)' if (@final_price != 0) set @SQLstring = @SQLstring + ' AND ( first_price between @first_price AND @final_price )' if (@collection_1 != 'انتخاب کنید') set @SQLstring = @SQLstring + ' AND (collection_1 = @collection_1 )' if (@id_state != 0) set @SQLstring = @SQLstring + ' AND (id_state = @id_state )' execute @SQLstring END ``` When I execute it, it shows this error: The name 'SELECT IDproduct,name\_product,first\_price,final\_price,max\_registered\_price, date\_record\_shamsi,final\_date\_view\_shamsi, count\_views,image\_1,collection\_1 from Table\_asbabbazi where active=0 AND (name\_product LIKE %@name\_product%) AND (collection\_1 = @collection\_1 )' is not a valid identifier. Please help.
Parameter names embedded in a concatenated query string are not parsed as parameters, and since you are using dynamic SQL, the string must be executed with the EXECUTE sp_executesql statement. This is the correct way to execute dynamic SQL: ``` ALTER PROCEDURE [dbo].[Asbabbazi_A] @name_product nvarchar(50), @first_price int, @final_price int, @collection_1 nvarchar(30), @id_state tinyint AS BEGIN DECLARE @SQLstring nvarchar(1000) DECLARE @PARAMS nvarchar(1000) set @SQLstring = 'SELECT IDproduct,name_product,first_price,final_price,max_registered_price, date_record_shamsi,final_date_view_shamsi, count_views,image_1,collection_1 from Table_asbabbazi where active=0 ' if(@name_product != 'no name') set @SQLstring = @SQLstring + ' AND name_product LIKE ''%' + @name_product + '%''' + ' ' if (@final_price != 0) set @SQLstring = @SQLstring + ' AND first_price between ' + CONVERT(nvarchar(1000), @first_price) + ' AND ' + CONVERT(nvarchar(1000), @final_price) + ' ' if (@collection_1 != 'انتخاب کنید') set @SQLstring = @SQLstring + ' AND collection_1 = ''' + @collection_1 + ''' ' if (@id_state != 0) set @SQLstring = @SQLstring + ' AND id_state = ' + CONVERT(nvarchar(1000), @id_state) + ' ' EXECUTE sp_executesql @SQLstring END ```
**In brief,** put declared SQLstring inside Parenthesis when execute stored procedure like this **EXECUTE usp\_executeSql (@SQLstring)** Example: **False ❌** ``` EXECUTE usp_executeSql @SQLstring ``` **True ✔** ``` EXECUTE usp_executeSql (@SQLstring) ```
"The name is not a valid identifier" error in dynamic stored procedure
[ "", "sql", "asp.net", "stored-procedures", "" ]
I have one table with a StatusID column, the StatusHistory table. One customer can have multiple StatusIDs. I need to find the previous StatusID, that is, the second one: the status held just before the current one. I am getting the current one this way: ``` SELECT top 1 StatusIDHeld FROM dbo.UserStatusHistory WHERE userid=2154 ORDER BY tatusChangedOn DESC ``` **Question:** I need the 2nd StatusID, meaning the previous one. How do I find the second value (StatusID) from a table?
There's no such thing as the "second value" of a table; physical order depends on many factors, like indexes, etc. To be able to get the 1st, 2nd or n-th record for a given sort order, use the [ROW_NUMBER()](https://msdn.microsoft.com/en-us/library/ms186734.aspx) function. ``` SELECT StatusIDHeld FROM ( SELECT StatusIDHeld, ROW_NUMBER () OVER(ORDER by StatusIDHeld) as RowNo FROM UserStatusHistory ) AS t where t.RowNo = 2 ``` Another way is to use the TOP instruction twice: ``` SELECT TOP(1) StatusIDHeld FROM ( SELECT TOP(2) StatusIDHeld FROM UserStatusHistory WHERE userid=2154 ORDER BY tatusChangedOn ASC ) AS t ORDER BY StatusIDHeld DESC ```
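The same idea ports to other engines; as a hedged sketch, here is the "second most recent row" expressed in SQLite via LIMIT/OFFSET, with made-up history rows (the column is spelled StatusChangedOn in this sample):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE UserStatusHistory (userid INT, StatusIDHeld INT, StatusChangedOn TEXT);
    INSERT INTO UserStatusHistory VALUES (2154, 7, '2015-01-01'),
                                         (2154, 9, '2015-02-01'),
                                         (2154, 4, '2015-03-01');
""")

# Sort newest first, skip the current row, take the next one.
row = conn.execute("""
    SELECT StatusIDHeld FROM UserStatusHistory
    WHERE userid = 2154
    ORDER BY StatusChangedOn DESC
    LIMIT 1 OFFSET 1
""").fetchone()
print(row)  # (9,)
```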
``` select StatusIDHeld from (select StatusIDHeld, ROW_NUMBER () over (order by tatusChangedOn DESC) as num from dbo.UserStatusHistory where userid=2154 ) T where T.num = 2 ```
How to find the second value from a table?
[ "", "sql", "sql-server", "" ]
I have SQL code that I am running and am getting an error when I pass in certain information. ``` select * from OBX.BTOCUST --where [CUSTID] like 'sci' --order by BRANDING desc where BRANDING not like '0x4767374ADABABBAB9B96865669F9D9DE4E3E3838182ACABAB9E9D9DE3E3E3848182231F20000000FFFFFFFFFFFF00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000021F90401000026002C000000000D023C000006FF409370482C1A8FC8A472C96C3A9FD0A8744AAD5AAFD8AC76CBED7ABFE0B0784C2E9BCFE8B47ACD6EBBDFF0B87C4EAFDBEFF8BC7ECFEFFBFF8081828384858687884A0D03238D8E8F9091928E1520178998999A9B9C61148F1403A2A3A4A5A68CA1A49F941A9DAEAFB0B19A0C8C8D031925B9BABBBCBDB9231BBD17138F0CB2C7C8C9CA710C8D1505BED1D225CD1DD21AAB13CBDBDCDDDE57C4230CD3BC171916161ABD1613E40CE1DFF1F2F3F18D13E3E4BF8FE8C41B161C8D2EE42BF18982317A08132AD4147060090C8E0658CB550092C38A231A2CDCC8B1239F108D1C426C84A1D7C8900E378CA0E0B1A5CB976C2A8C2839F051B40EAB1CEA83C9B3A7CFFF2D204708CC77C166340FC07496687444410005044C18C882202A9600010008D17A27C0D49F607F3698E910A92369D59432024184C001AB041670B5F28044042B0B489018428084803912B826207120AC619E1039A434EA6B98D212163212A94BC4C0DC2A042024C0AC570291057FE590982B40C1E1D32D19411B98E19187C7D32A6A1C2280F083219B8538BD6D820080DF261064FDCDF5B76500B98560B52A4438F32100F44230325D7780E4267E47BD9E1D000103DC8960259200387821049C561592E0008907067C0367FF60BD89F200A6064080BA7F32D50E7500092EB0F922DB647A91C01F797209C05F667A99A0575D119AA0800006E4151A01A54170FF406E0840A0C0874540306112092C306268128E2601612D3E901709D5C58580020E14A797027A6DE600027915D6A25E0140B89709260610819000B827416D47FA27E52B003A140981BC4C32823407F23523090E9826D501A60DA6557424BCB8978942F66642007E09915776605A4842543012E1805E975546A6092F5AD5D99E85E9B50001115438A78424B6E8C09E0920E0806F15B6A8E391242CD09B82424089E67353868A4995034536A02F66356201395D1281807B9D59FA1B8C685E566175708656A183B2562A44827D0E5161749AB61800117C6E4AC2541526F86B94161E60008F51264BE75E2F1E6B426DA6D576ACB5A2868B0
8A9039DE40896BA68C0FF18979219E156827D9180D5BCD716C127A2CAC5D9A2BDF2D2FB995E0B1A316CB3A3215BB0A5FB0EF9AC1119566A2D9ADBCADBA9C4DE222CEEC585905BEE95BC9C54413EAD6E8B5DA26806BC95AFC1193A44AE0B1BFC15C37A3980C4C0325BFC2C577A318B29C195D108F1CD27AF2971C4FC550C2EC648FFA1713EEF70ACCB2A23BC960BBABB842C008BD915E6DEA44278F6335F4EAEACAFB578DA859E116A5E96C06D7BEA4623D0C2161C6FCB31B73CF1ACD51E0C310243E73595D107272D381F4B0F148E96E71600113EBD58ED80009BBD4A5B82DA52889D54770D9128D77CEB45A2017B122666110F380900021124A724010EB47962DC0200505BB123C697FF267ABECE48AD5C48D669028542EC09009C51B16EF6EB83278F47E14C1F7E6E073249C2EE6CCAF9F6A4042F9B108100D581E72F1197CDBBA0015733C72186491080C0D559952880572B63F5329FEB3FC01CF9B10F215CFB42907FDB855CE19067BCA72D023C807B0BDA5F56E6953DE539300ECC2347AA1EA198622C0525066AD7318EF6C00E2A248225D8C0C77821204904C328C4186106A9270B0E7AF085F20061D4CA01354850E08228890C4D56980CE4C1F087DF90E144A6060A8C48628491615CE33408C4263A510C32DCC505A046810B18311234F1C00062C3C4277AF18B58882052B6A80F5BE4425D93D049C8C0C8C6363E2182E138E3238672920906FFE3224C24CE6F2E0707F56DE1387C74A320953682D5F402205ACA45334882C8460C11645DFCDA12B425866269EF3D5B60D920370908E6A9441D68DC870756010DE729658D924C1F25C310A50440005456D0242767B907E68DA01DB9A8E12330B01A8BA83192283B425C56F905BE854196B44CA61D4048114890314B8EB8E12F59582F2138094A034C509C52149D0799285366A350E8C4149711DDE9597C6AD2D00EA0A20384E69BAFCA5366DA79A87C614D99F8846021F3D1C846480D9A24392530A3E42967454C5B06F8D3C0C099B99CD5A55810B80B9C3C53CD834A8504A609D4B52230B7DE49F4766FD2573E47FA066696A09F235047C72C38CD227C0D70FF47AA58AF8E94CA08FD2C009A7A51685E2AAF8E12AB9A1582534EF5854C921A350D26BDE23E7911BDA408949A3CFDD6AE8646247FD5742F920480868236B9AC2C14A8D0DAD656437ACFA39A950C2625C82354AA0B5339F231A8AC144C27A62D80B9146536AD108F84A3AFA81A806660D58D7BC86ACFB31AB60C6955892356B58BA66E09AE03EDAAC22C6A82AD3DE7AAF57A17612B5AB1B6D9693A3FAB10AC365BD4C39A960B696D8D237658C64628913591A5EB646B2301DFC0295317E2AAC1EA8526058CF553B2E59156E03495D0EEEC4E632DED69977B85B1B0761AAAC
5E02944A19611B0257EE27B13568627BFE6684673F993CAF7B4BB407A012002FC09FD0004BEE2BFE0581502EFFBCABCCACB9FF3668E7BD9192F73F75B0575A9301FEBBA86950AC9DF021F4626CF95C614CF350D0A28A6262330B084C13216690EA4288B8DC63F02C494097B98270DD1497457C2810258600314C0C06BAFD188063000040DF8C08767CC1167AC18BA8A7D4405FE3990823020C51FA88036684C647A8CE5960542073A86A293458280011F10C5073650E42ACB63911430648174820D7B984023231800950B60E53277E370CFDC3224ED610C306B200426B88499E7BC0C5D4EF7CE781605242AD08A215440232AA1B3A0930165E721EED087967111821C66390FFAD1908EB4A4274DE94A5BFAD298CEB4A637CDE94E7B5A994100003B' ``` When I uncomment the `--where [CUSTID] like 'sci'` and comment out branding the query runs and am able to see results. But when I run where branding I get an error: > Msg 8116, Level 16, State 1, Line 1 > Argument data type varchar is invalid for argument 2 of like function. also another thing is when i uncomment the order by BRANDING desc it gives me another error. that error is > Msg 306, Level 16, State 2, Line 3 The text, ntext, and image data > types cannot be compared or sorted, except when using IS NULL or LIKE > operator. What do I need to do to get the command to actually work?
This error message is what you get if `BRANDING` is of type `IMAGE`, which is incomparable (as in, literally, it cannot be compared in any way, not even to another `IMAGE`). To overcome the limitations of this type, SQL Server 2005 introduced the `VARBINARY(MAX)` type, which has the same purpose but isn't burdened with the special-case handling that `IMAGE` requires (likewise, `(N)VARCHAR(MAX)` was introduced to replace `(N)TEXT`).

`IMAGE` should not be used for new work; `VARBINARY(MAX)` is superior in all respects. If existing `IMAGE` columns can be changed to `VARBINARY(MAX)`, do so. If that isn't possible, the `IMAGE` can still be converted on the fly. In the query above:

```
select *
from OBX.BTOCUST
where CONVERT(VARBINARY(MAX), BRANDING) <> 0x476737.....00003B
```

Here the `0x476737....` is a `BINARY` literal. To convert a hex string to a binary, use `CONVERT(VARBINARY(MAX), @string, 1)` (with leading "0x") or `CONVERT(VARBINARY(MAX), @string, 2)` (without leading "0x").
Convert your image or text or ntext column data type to varbinary(max), varchar(max) or nvarchar(max). The image, text, and ntext data types are deprecated and will be removed in a future version of SQL Server. They are very difficult and awkward to work with. The varchar(max) and nvarchar(max) have all the benefits of nearly unlimited string size, and none of the drawbacks of text or ntext. They also work with all the normal string functions you'd expect.
Argument type varchar invalid
[ "", "sql", "sql-server", "" ]
I have a table with about 8000 rows and 15 columns. After inserting the data I saw that it was wrong: after a certain number of records (let's say 1000), some column values belong to the previous record, something like this:

```
A B C (A+B)
==================================
1 1 2
2 2 4
3 3 6
4 4 8
5 5
6 6 10
7 7 12
8 8 14
9 9 16
```

Now I have to move some column values one record back or forward, and I don't have much room for testing; I'm afraid I may overwrite some data and ruin the whole table. I should do something like this, but for about 7000 records:

```
update table1
set B = (select B from table1 where id = 1000)
where id = 999
```

Any ideas?
If you know the ids are sequential with no gaps, you can use a `join` to look up the value you want:

```
update t1
    set c = t2.c
    from table1 t1 join
         table1 t2
         on t1.id = t2.id - 1
    where t1.id > 1000;
```

If you cannot trust the ids, you can create the appropriate sequential number without gaps using `row_number()`:

```
with toupdate as (
      select t.*, row_number() over (order by id) as seqnum
      from table1 t
     )
update t1
    set c = t2.c
    from toupdate t1 join
         toupdate t2
         on t1.seqnum = t2.seqnum - 1
    where t1.id > 1000;
```
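The answer above is SQL Server syntax. As a rough, engine-neutral illustration of the same idea (pulling the value from the row with the next `id`), here is a sketch using Python's built-in `sqlite3`. The table contents and the cutoff id are made up for the demo, and a temporary snapshot table is used so every read is deterministic (older SQLite lacks `UPDATE ... FROM`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1 (id INTEGER PRIMARY KEY, c INTEGER)")
# Made-up data: from id 3 on, the correct value of c sits one row later.
con.executemany("INSERT INTO table1 VALUES (?, ?)",
                [(1, 10), (2, 20), (3, 999), (4, 30), (5, 40)])

# Snapshot first so reads are deterministic, then pull c from the next id.
con.execute("CREATE TEMP TABLE snap AS SELECT id, c FROM table1")
con.execute("""
    UPDATE table1
    SET c = (SELECT s.c FROM snap s WHERE s.id = table1.id + 1)
    WHERE id >= 3
""")

rows = con.execute("SELECT id, c FROM table1 ORDER BY id").fetchall()
print(rows)  # [(1, 10), (2, 20), (3, 30), (4, 40), (5, None)]
```

Note the last affected row ends up `NULL` because it has no successor; in the real data that trailing row would need to be fixed by hand.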
Create another table with the same fields as the table in question. Insert the bad records. Fix the data in the new table. Update the real table from the new one.
shifting some columns one record back or forward
[ "", "sql", "sql-server", "" ]
I have a table Product where I have fields id, name, selling\_price and special\_selling\_price. I want to fetch "special\_selling\_price" of all records BUT if

```
`special_selling_price`=0 OR `special_selling_price`=null
```

then it should fetch "selling\_price". If my records are

```
id  name  selling_price  special_selling_price
1   A     100            75
2   B     200            0
3   C     300
4   D     400            150
```

Then it should fetch

```
75
200
300
150
```
Use `CASE`:

```
SELECT CASE WHEN (special_selling_price IS NULL OR special_selling_price = 0)
            THEN selling_price
            ELSE special_selling_price
       END AS SellPrice
FROM TableName
```
``` SELECT coalesce(nullif(special_selling_price, 0), selling_price) FROM product ```
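Both answers work on any engine that supports `CASE` and `COALESCE`/`NULLIF`. A quick sanity check of the shorter form using Python's built-in `sqlite3`, with the table and data copied from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE product (
    id INTEGER, name TEXT, selling_price INTEGER, special_selling_price INTEGER)""")
con.executemany("INSERT INTO product VALUES (?, ?, ?, ?)",
                [(1, 'A', 100, 75), (2, 'B', 200, 0),
                 (3, 'C', 300, None), (4, 'D', 400, 150)])

# NULLIF turns 0 into NULL, then COALESCE falls back to selling_price.
prices = [r[0] for r in con.execute(
    "SELECT COALESCE(NULLIF(special_selling_price, 0), selling_price) "
    "FROM product ORDER BY id")]
print(prices)  # [75, 200, 300, 150]
```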
Sql: select another field if one field is blank or 0
[ "", "mysql", "sql", "sql-server", "" ]
I am trying to generate a random number that is not present in another table. For example: if a table called `randomNums` has the values `10,20,30,40,50`, I want to generate a number apart from those values. I tried the following query.

**Query**

```
;WITH CTE AS
(
    SELECT FLOOR(RAND()*100) AS rn
)
SELECT rn
FROM CTE
WHERE rn NOT IN (SELECT num FROM randomNums);
```

But sometimes this query returns nothing, because it generates a number which is already in the table `randomNums`. How to solve this issue?

**[Fiddle for reference](http://sqlfiddle.com/#!6/d7ce21/1)**
Yet another option. I've always liked `NEWID()` for random ordering, and cross joins create many rows very efficiently:

```
;with cte AS (SELECT 1 n UNION ALL SELECT 1)
,cte2 AS (SELECT TOP 100 ROW_NUMBER() OVER(ORDER BY a.n) n
          FROM cte a, cte b, cte c, cte d, cte e, cte f, cte g)
SELECT TOP 1 n
FROM cte2 a
WHERE NOT EXISTS (SELECT 1
                  FROM randomNums b
                  WHERE a.n = b.num)
ORDER BY NEWID()
```

Demo: [SQL Fiddle](http://www.sqlfiddle.com/#!3/faeec/9/0)
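`NEWID()` is SQL Server specific, but the underlying idea — build the candidate range, exclude the used values, pick one at random — ports to other engines. Here is a sketch with Python's built-in `sqlite3`, using a recursive CTE for the 0–99 range and `ORDER BY RANDOM()` in place of `NEWID()` (table name and contents taken from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE randomNums (num INTEGER)")
con.executemany("INSERT INTO randomNums VALUES (?)",
                [(10,), (20,), (30,), (40,), (50,)])

# Generate candidates 0..99, drop the used ones, pick one at random.
row = con.execute("""
    WITH RECURSIVE nums(n) AS (
        SELECT 0 UNION ALL SELECT n + 1 FROM nums WHERE n < 99
    )
    SELECT n FROM nums
    WHERE n NOT IN (SELECT num FROM randomNums)
    ORDER BY RANDOM()
    LIMIT 1
""").fetchone()
print(row[0])
```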
If you don't want to use a `WHILE` loop then you might look into this solution which employs a recursive `CTE`:

```
;WITH CTE AS (
   SELECT FLOOR(RAND()*100) AS rn
   UNION ALL
   SELECT s.rn
   FROM (
      SELECT rn
      FROM CTE
      WHERE rn NOT IN (SELECT num FROM randomNums)
   ) t
   CROSS JOIN (SELECT FLOOR(RAND()*100) AS rn) AS s
   WHERE t.rn IS NULL
)
SELECT rn
FROM CTE
```

**EDIT:** As stated in comments below, the above does not work: If the first generated number (from the `CTE` anchor member) is a number *already present* in `randomNums`, then the `CROSS JOIN` of the recursive member will return `NULL`, hence the number from the anchor member will be returned.

Here is a different version, based on the same idea of using a recursive `CTE`, that works:

```
DECLARE @maxAttempts INT = 100

;WITH CTE AS (
   SELECT FLOOR(RAND()*100) AS rn, 1 AS i
   UNION ALL
   SELECT FLOOR(RAND(CHECKSUM(NEWID()))*100) AS rn, i = i + 1
   FROM CTE AS c
   INNER JOIN randomNums AS r ON c.rn = r.num
   WHERE (i = i) AND (i < @maxAttempts)
)
SELECT TOP 1 rn
FROM CTE
ORDER BY i DESC
```

Here, the anchor member of the `CTE` firstly generates a random number. If this number is already present in `randomNums` the `INNER JOIN` of the recursive member will succeed, hence yet another random number will be generated. Otherwise, the `INNER JOIN` will fail and the recursion will terminate.

A couple of things more to note:

* `i` variable is used to record the number of attempts made to generate a *'unique'* random number.
* The value of `i` is used in the `INNER JOIN` operation of the recursive member so as to join with the random value of the *immediately preceding* recursion **only**.
* Since repetitive calls of [`RAND()`](https://msdn.microsoft.com/en-us/library/ms177610.aspx) with the same seed value return the same results, we have to use `CHECKSUM(NEWID())` as the seed of `RAND()`.
* `@maxAttempts` can optionally be used to specify the maximum number of attempts made in order to generate a 'unique' random number.
[SQL Fiddle Demo here](http://sqlfiddle.com/#!3/1038ec/5)
Generate a random number which is not there in a table in sql server
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have this problem:

**TABLE1**:

```
| ID | DESCRIPTION |
| 10 | Apple       |
| 20 | Banana      |
| 33 | Pineapple   |
| 47 | Orange      |
```

**TABLE2**:

```
| ID | FRUIT1 | FRUIT2 |
| 1  | 10     | 47     |
| 2  | 47     | 10     |
| 3  | 33     | 20     |
| 4  | 20     | 33     |
```

If I select all data in `TABLE2`, I want in output the name of the fruits (`TABLE1.DESCRIPTION`) for `TABLE2.FRUIT1` and `TABLE2.FRUIT2`, and not the ID. How can I do it?
Join `table1` twice with different alias names:

```
select t2.id, f1.description, f2.description
from table2 t2
left join table1 f1 on f1.id = t2.fruit1
left join table1 f2 on f2.id = t2.fruit2
```
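The double-join pattern (one alias of the lookup table per foreign key) is standard SQL and runs unchanged on most engines. A quick check with Python's built-in `sqlite3`, using the sample fruit data from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table1 (id INTEGER, description TEXT);
CREATE TABLE table2 (id INTEGER, fruit1 INTEGER, fruit2 INTEGER);
INSERT INTO table1 VALUES (10,'Apple'),(20,'Banana'),(33,'Pineapple'),(47,'Orange');
INSERT INTO table2 VALUES (1,10,47),(2,47,10),(3,33,20),(4,20,33);
""")

# Each alias (f1, f2) resolves one of the two foreign keys independently.
rows = con.execute("""
    SELECT t2.id, f1.description, f2.description
    FROM table2 t2
    LEFT JOIN table1 f1 ON f1.id = t2.fruit1
    LEFT JOIN table1 f2 ON f2.id = t2.fruit2
    ORDER BY t2.id
""").fetchall()
print(rows)
```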
You can join back to `Table1` twice; you just need to give different table aliases: ``` SELECT ID, T1a.Description [Fruit1], T1b.Description [Fruit2] FROM Table2 T2 INNER JOIN Table1 T1a ON T2.Fruit1 = T1a.ID INNER JOIN Table1 T1b ON T2.Fruit1 = T1b.ID ```
(T-SQL) Join two columns to one common column
[ "", "sql", "sql-server", "t-sql", "" ]
We have a million rows in one table. Our select:

```
select * from tableA
where column1 in ("a", "b", "c", "d")
and column2 = "abc";
```

and we have a unique index on column1 and column2 combined. I was told to switch to:

```
select * from tableA where column1 = "a" and column2 = "abc"
union
select * from tableA where column1 = "b" and column2 = "abc"
union
select * from tableA where column1 = "c" and column2 = "abc"
union
select * from tableA where column1 = "d" and column2 = "abc";
```

We could have from 1 to 100 different values in the IN clause. So is it better to run one statement with an IN clause, or run 100 statements and perform a union?
If you have a unique index on `column1, column2` -- in that order -- then the version with `union` will definitely take advantage of the index.

As mentioned in a comment, you should use `union all` rather than `union`. This eliminates the step of removing duplicates (even if there are none). This would be a handful of index lookup operations and should go quite fast.

Whether Oracle uses an index as desired for the first version is somewhat open:

```
where column1 in ('a', 'b', 'c', 'd') and column2 = 'abc'
```

Most databases would *not* use an index optimally in this case. If a database used the index, it would use the index for `column1` lookups and then scan the index comparing values to `column2`. Oracle might have some additional smarts that will use an index effectively here.

However, it is easy to fix things. If you have an index on `column2, column1`, then that index would be used for the `where` clause.
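Whatever the optimizer does, the two forms must return the same rows (with `union all`, and given the unique index no duplicates are possible). A quick equivalence check in Python's built-in `sqlite3`, with a made-up toy table standing in for the real one:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tableA (column1 TEXT, column2 TEXT, "
            "UNIQUE(column1, column2))")
con.executemany("INSERT INTO tableA VALUES (?, ?)",
                [('a', 'abc'), ('b', 'abc'), ('c', 'xyz'),
                 ('d', 'abc'), ('e', 'abc')])

in_rows = con.execute(
    "SELECT * FROM tableA WHERE column1 IN ('a','b','c','d') "
    "AND column2 = 'abc' ORDER BY column1").fetchall()

union_rows = con.execute(
    "SELECT * FROM tableA WHERE column1 = 'a' AND column2 = 'abc' "
    "UNION ALL SELECT * FROM tableA WHERE column1 = 'b' AND column2 = 'abc' "
    "UNION ALL SELECT * FROM tableA WHERE column1 = 'c' AND column2 = 'abc' "
    "UNION ALL SELECT * FROM tableA WHERE column1 = 'd' AND column2 = 'abc' "
    "ORDER BY column1").fetchall()

print(in_rows == union_rows)  # True
```

Which form is faster is a separate question and depends on the engine and its statistics, as the answer explains.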
Who told you to switch? Did they provide some evidence that the second approach is actually more efficient in your environment with your data? Unless your statistics are woefully incorrect or there is way more going on, I would find it very unlikely that the `UNION` approach would be more efficient than an `IN`. If you were going to break up the query, using `UNION ALL` would be more efficient than using `UNION` because it wouldn't force extra sorts to check for and eliminate (non-existent) duplicate rows. Assuming a relatively recent version of Oracle, I would expect the optimizer to be able to internally rewrite the `UNION ALL` query as an `IN`. Given that you have the table in question, though, you should be able to evaluate the actual performance of the two options in your actual environment. You should be able to see whether one approach consistently outperforms the other, whether one does less logical I/O than the other, etc. You should also be able to determine whether the two queries actually generate different plans. If the `UNION ALL` approach is more efficient, I'd strongly consider looking at the statistics that have been gathered on your table and index(es) to determine why the optimizer isn't finding the more efficient plan with the `IN` statement.
I have million rows in one table. I am tolde to use union instead on in clause for performance reason. is it true?
[ "", "sql", "oracle", "performance", "query-performance", "" ]
I have a SQL Server database full of the following (fictional) data in the following structure:

```
ID | PatientID | Exam | (NON DB COLUMN FOR REFERENCE)
------------------------------------
1  | 12345     | CT   | OK
2  | 11234     | CT   | OK (Same PID but Different Exam)
3  | 11234     | MRI  | OK (Same PID but Different Exam)
4  | 11123     | CT   | BAD (Same PID, Same Exam)
5  | 11123     | CT   | BAD (Same PID, Same Exam)
6  | 11112     | CT   | BAD (Conflicts With ID 8)
7  | 11112     | MRI  | OK (Same PID but different Exam)
8  | 11112     | CT   | BAD (Conflicts With ID 6)
9  | 11123     | CT   | BAD (Same PID, Same Exam)
10 | 11123     | CT   | BAD (Same PID, Same Exam)
```

I am trying to write a query which will go through and identify everything that isn't bad, as per my example above. Overall, a patient (identified by `PatientId`) can have many rows, but may not have 2 or more rows with the same exam! I have attempted various modifications of examples I found on here, but still with no luck. Thanks.
You seem to want to identify duplicates, ranking them as `good` or `bad`. Here is a method using window functions:

```
select t.id, t.patientid, t.exam,
       (case when cnt > 1 then 'BAD' else 'OK' end)
from (select t.*,
             count(*) over (partition by patientid, exam) as cnt
      from table t
     ) t;
```
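The `COUNT(*) OVER (PARTITION BY ...)` approach runs unchanged on any engine with window functions, including SQLite 3.25+ (bundled with recent Pythons). A sketch with Python's built-in `sqlite3` and the first few rows of the question's data; the table name `exams` is made up for the demo:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE exams (id INTEGER, patientid TEXT, exam TEXT)")
con.executemany("INSERT INTO exams VALUES (?, ?, ?)",
                [(1, '12345', 'CT'), (2, '11234', 'CT'), (3, '11234', 'MRI'),
                 (4, '11123', 'CT'), (5, '11123', 'CT')])

# cnt counts rows sharing (patientid, exam); any count > 1 marks a duplicate.
rows = con.execute("""
    SELECT id, patientid, exam,
           CASE WHEN cnt > 1 THEN 'BAD' ELSE 'OK' END AS status
    FROM (SELECT e.*, COUNT(*) OVER (PARTITION BY patientid, exam) AS cnt
          FROM exams e)
    ORDER BY id
""").fetchall()
print(rows)
```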
use `Count() over()` : ``` select *,case when COUNT(*) over(partition by PatientID, Exam) > 1 then 'bad' else 'ok' from yourtable ```
Compare Multiple rows In SQL Server
[ "", "sql", "sql-server", "" ]
This is the code we have at the moment, which works fine as a select query:

```
SELECT DISTINCT Reference,
  (SELECT amount FROM tbl_DDTransactions
   WHERE DueDate = '2015-01-15' AND Reference = 'MAIN0134') AS LastMonth,
  (SELECT amount FROM tbl_DDTransactions
   WHERE DueDate = '2015-02-15' AND Reference = 'MAIN0134') AS CurrentMonth
FROM tbl_DDTransactions
WHERE Reference = 'MAIN0134'
```

The table we are pulling this information from can have any number of entries (each row actually relates to a transaction, with the company reference as MAINxxxx).

What we would like to do is get a list of distinct MAIN references in the table, then have it loop through the code above, generating a row for each MAIN reference. We're not quite sure how to express this in SQL though. Any help appreciated.
```
select r.reference, a1.amount as lastMonth, a2.amount as currentMonth
from (select distinct reference from tbl_DDTransactions) as r
left join tbl_DDTransactions a1
  on a1.reference = r.reference and a1.DueDate = '2015-01-15'
left join tbl_DDTransactions a2
  on a2.reference = r.reference and a2.DueDate = '2015-02-15'
```
At first glance this looks like a simple aggregate:

```
SELECT Reference,
       sum(case when DueDate = '2015-01-15' then amount end) AS LastMonth,
       sum(case when DueDate = '2015-02-15' then amount end) AS CurrentMonth
FROM tbl_DDTransactions
GROUP BY Reference
```
Select query that iterates through the results of a first query
[ "", "sql", "sql-server", "" ]
I have a table countries, and I want to display all the neighbouring countries to Sweden, and I have two problems. My code is like this:

```
SELECT b.cntry_name
FROM countries as a
JOIN countries as b
ON ST_Distance(a.the_geom, b.the_geom) < 10000
WHERE a.cntry_name = 'Sweden'
GROUP BY b.cntry_name
```

It returned a table that looks like this:

![enter image description here](https://i.stack.imgur.com/eUDHq.jpg)

I want to delete Sweden from this table, so I tried to use `NOT EXISTS`:

```
SELECT b.cntry_name
FROM countries as a
JOIN countries as b
ON ST_Distance(a.the_geom, b.the_geom) < 10000
WHERE a.cntry_name = 'Sweden'
AND NOT EXISTS (SELECT b.cntry_name FROM b WHERE b.cntry_name = 'Sweden')
GROUP BY b.cntry_name
```

However, this returned a blank webpage (I'm doing PostGIS online), which means there's something wrong in my code. So the first question is: how can I remove the row Sweden after the selection?

The same thing happened when I tried to add `countries.svg` after `b.cntry_name`:

```
SELECT b.cntry_name, countries.svg
FROM countries as a
JOIN countries as b
ON ST_Distance(a.the_geom, b.the_geom) < 10000
WHERE a.cntry_name = 'Sweden'
GROUP BY b.cntry_name
```

This also returned a blank webpage. Any tips where I went wrong when trying to display svg?
No need for NOT EXISTS:

```
SELECT b.cntry_name
FROM countries as a
JOIN countries as b
ON ST_Distance(a.the_geom, b.the_geom) < 10000
WHERE a.cntry_name = 'Sweden'
AND a.cntry_name <> b.cntry_name
--GROUP BY b.cntry_name -- should work without GROUP BY
```
Skip your subquery:

```
SELECT b.cntry_name
FROM countries as a
JOIN countries as b
ON ST_Distance(a.the_geom, b.the_geom) < 10000
AND b.cntry_name <> 'Sweden'
WHERE a.cntry_name <> 'Sweden'
GROUP BY b.cntry_name
```

Explanation: your subquery *always* returns rows (since table `countries` has data on Sweden), thus the where condition can never be fulfilled. You also had a syntax error in your subquery, when you only gave the alias `b` in the `from` clause instead of the table name.
NOT EXISTS doesn't work
[ "", "sql", "postgis", "" ]
Since PostgreSQL came out with the ability to do `LATERAL` joins, I've been reading up on it since I currently do complex data dumps for my team with lots of inefficient subqueries that make the overall query take four minutes or more. I understand that `LATERAL` joins may be able to help me, but even after reading articles like [this one](http://blog.heapanalytics.com/postgresqls-powerful-new-join-type-lateral/) from Heap Analytics, I still don't quite follow. What is the use case for a `LATERAL` join? What is the difference between a `LATERAL` join and a subquery?
### What *is* a `LATERAL` join?

The feature was introduced with PostgreSQL 9.3. [The manual](https://www.postgresql.org/docs/current/queries-table-expressions.html#QUERIES-LATERAL):

> Subqueries appearing in `FROM` can be preceded by the key word `LATERAL`. This allows them to reference columns provided by preceding `FROM` items. (Without `LATERAL`, each subquery is evaluated independently and so cannot cross-reference any other `FROM` item.)
>
> Table functions appearing in `FROM` can also be preceded by the key word `LATERAL`, but for functions the key word is optional; the function's arguments can contain references to columns provided by preceding `FROM` items in any case.

Basic code examples are given there.

### More like a *correlated* subquery

A `LATERAL` join is more like a [correlated subquery](https://en.wikipedia.org/wiki/Correlated_subquery), not a plain subquery, in that expressions to the right of a `LATERAL` join are evaluated once for each row left of it - just like a *correlated* subquery - while a plain subquery (table expression) is evaluated *once* only. (The query planner has ways to optimize performance for either, though.)

Related answer with code examples for both side by side, solving the same problem:

* [Optimize GROUP BY query to retrieve latest row per user](https://stackoverflow.com/questions/25536422/optimize-group-by-query-to-retrieve-latest-record-per-user/25536748#25536748)

For returning *more than one column*, a `LATERAL` join is typically simpler, cleaner and faster. Also, remember that the equivalent of a correlated subquery is **`LEFT JOIN LATERAL ... ON true`**:

* [Call a set-returning function with an array argument multiple times](https://stackoverflow.com/questions/26107915/call-a-set-returning-function-with-an-array-argument-multiple-times/26514968#26514968)

### Things a subquery can't do

There *are* things that a `LATERAL` join can do, but a (correlated) subquery cannot (easily). A correlated subquery can only return a single value, not multiple columns and not multiple rows - with the exception of bare function calls (which multiply result rows if they return multiple rows). But even certain set-returning functions are only allowed in the `FROM` clause. Like `unnest()` with multiple parameters in Postgres 9.4 or later. [The manual:](https://www.postgresql.org/docs/current/functions-array.html#ARRAY-FUNCTIONS-TABLE)

> This is only allowed in the `FROM` clause;

So this works, but cannot (easily) be replaced with a subquery:

```
CREATE TABLE tbl (a1 int[], a2 int[]);
SELECT * FROM tbl, unnest(a1, a2) u(elem1, elem2);  -- implicit LATERAL
```

The comma (`,`) in the `FROM` clause is short notation for `CROSS JOIN`. `LATERAL` is assumed automatically for table functions. About the special case of `UNNEST( array_expression [, ... ] )`:

* [How do you declare a set-returning-function to only be allowed in the FROM clause?](https://dba.stackexchange.com/a/160310/3684)

### Set-returning functions in the `SELECT` list

You can also use set-returning functions like `unnest()` in the `SELECT` list directly. This used to exhibit surprising behavior with more than one such function in the same `SELECT` list up to Postgres 9.6. [But it has finally been sanitized with Postgres 10](https://www.postgresql.org/docs/10/xfunc-sql.html#XFUNC-SQL-FUNCTIONS-RETURNING-SET) and is a valid alternative now (even if not standard SQL). See:

* [What is the expected behaviour for multiple set-returning functions in SELECT clause?](https://stackoverflow.com/questions/39863505/what-is-the-expected-behaviour-for-multiple-set-returning-functions-in-select-cl/39864815#39864815)

Building on the above example:

```
SELECT *, unnest(a1) AS elem1, unnest(a2) AS elem2
FROM tbl;
```

Comparison: [fiddle for pg 9.6](https://dbfiddle.uk/asUCZoT5) - [fiddle for pg 10](https://dbfiddle.uk/C42oryLC)

**To note:** a (combination of) set-returning function(s) in the `SELECT` list that produces **no rows** eliminates the row. Internally it translates to `CROSS JOIN LATERAL ROWS FROM ...`, not to `LEFT JOIN LATERAL ... ON true`! [fiddle for pg 16](https://dbfiddle.uk/AeiIKq95) demonstrating the difference.

### Clarify misinformation

[The manual:](https://www.postgresql.org/docs/current/sql-select.html#SQL-FROM)

> For the `INNER` and `OUTER` join types, a join condition must be specified, namely exactly one of `NATURAL`, `ON` ***join_condition***, or `USING` (***join_column*** [, ...]). See below for the meaning. For `CROSS JOIN`, none of these clauses can appear.

So these two queries are valid (even if not particularly useful):

```
SELECT *
FROM   tbl t
LEFT   JOIN LATERAL (SELECT * FROM b WHERE b.t_id = t.t_id) t ON true;

SELECT *
FROM   tbl t, LATERAL (SELECT * FROM b WHERE b.t_id = t.t_id) t;
```

While this one is not:

```
SELECT *
FROM   tbl t
LEFT   JOIN LATERAL (SELECT * FROM b WHERE b.t_id = t.t_id) t;
```

That's why [Andomar's](https://stackoverflow.com/a/28551339/939860) code example is correct (the `CROSS JOIN` does not require a join condition) and [Attila's](https://stackoverflow.com/a/28550962/939860) was not.
The difference between a non-`lateral` and a `lateral` join lies in whether you can look to the left-hand table's row. For example:

```
select  *
from    table1 t1
cross join lateral
        (
        select  *
        from    t2
        where   t1.col1 = t2.col1 -- Only allowed because of lateral
        ) sub
```

This "outward looking" means that the subquery has to be evaluated more than once. After all, `t1.col1` can assume many values.

By contrast, the subquery after a non-`lateral` join can be evaluated once:

```
select  *
from    table1 t1
cross join
        (
        select  *
        from    t2
        where   t2.col1 = 42 -- No reference to outer query
        ) sub
```

As is required without `lateral`, the inner query does not depend in any way on the outer query. A `lateral` query is an example of a `correlated` query, because of its relation with rows outside the query itself.
What is the difference between a LATERAL JOIN and a subquery in PostgreSQL?
[ "", "sql", "postgresql", "join", "subquery", "lateral-join", "" ]
How do I look in Table C for those inspectors who have a parent but no child? Table A has both parent and child data: ParentID 0 marks a parent, and a child row holds its parent's ID. In Table C, one inspector can have many parents and many children. I need to run a query to find those inspectors who have parents but no children.

```
Table A              Table B              Table C
--------             -------              -------
DisciplineID (PK)    InspectorID (PK)     ID (PK)
ParentID             DisciplineID (FK)    InspectorID (FK)
```

![enter image description here](https://i.stack.imgur.com/FTmqo.png)

```
Table A          Table C
```

In the above data, inspectors 7239 and 7240 only have a parent but no child, so the query should return those two, and not 7242, because he has both a parent and children.
Use `EXISTS` and `NOT EXISTS`:

```
SELECT c.ID, c.InspectorID, c.DisciplineID
FROM dbo.TableC c
WHERE EXISTS (
    SELECT 1
    FROM dbo.TableA a
    WHERE a.DisciplineID = c.DisciplineID
      AND a.ParentID = 0 -- parent exists
)
AND NOT EXISTS (
    SELECT 1
    FROM dbo.TableC c2
    WHERE c.InspectorID = c2.InspectorID
      AND c.ID <> c2.ID -- look for another record with this InspectorID
      AND EXISTS (
          SELECT 1
          FROM dbo.TableA a
          WHERE a.DisciplineID = c2.DisciplineID
            AND a.ParentID <> 0 -- no child exists
      )
)
```
I would start with a pre-qualifying query per discipline, based on those having a count of entries that HAVE a ParentID = 0 but also no records as child. Join that result to your TableC:

```
SELECT c.ID, c.InspectorID, c.DisciplineID
FROM dbo.TableC c
JOIN (
    select a.DisciplineID
    from TableA a
    group by a.DisciplineID
    having sum( case when a.ParentID = 0 then 1 else 0 end ) > 0
       AND sum( case when a.ParentID > 0 then 1 else 0 end ) = 0
) qual on c.DisciplineID = qual.DisciplineID
```
Look for data where child doesn't exisit
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I am trying to convert a date passed to a stored procedure through a parameter. I am unable to get this date into British format; here's what I've tried and the results:

```
select N'2015-01-17 11:49:54.253',
  left(N'2015-01-17 11:49:54.253',10),
  cast(left(N'2015-01-17 11:49:54.253',10) as datetime),
  convert(datetime,convert(datetime, cast(left(N'2015-01-17 11:49:54.253',10) as datetime)),103),
  convert(datetime,convert(datetime, cast(left(N'2015-01-17 11:49:54.253',10) as datetime)),113),
```

RESULTS:

```
2015-01-17 11:49:54.253
2015-01-17
2015-01-17 00:00:00.000
2015-01-17 00:00:00.000
2015-01-17 00:00:00.000
```
I don't see a British format for datetime, so I'm guessing you want this format:

```
dd/mm/yyyy hh:mi:ss:mmm
```

`FORMAT` syntax (can be used on SQL Server 2012+):

```
SELECT FORMAT(cast('2015-01-17 11:49:54.253' as datetime),'dd/MM/yyyy hh:mm:ss.fff')
```

Result:

```
17/01/2015 11:49:54.253
```
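`FORMAT` is only available on SQL Server 2012+. When the value is headed to application code anyway, the same day-first rendering can be produced client-side; here is a sketch with Python's `datetime`, where the millisecond handling is spelled out because `strftime`'s `%f` yields microseconds:

```python
from datetime import datetime

dt = datetime.strptime("2015-01-17 11:49:54.253", "%Y-%m-%d %H:%M:%S.%f")

# %d/%m/%Y is the day-first (British) order; trim microseconds to .fff
british = dt.strftime("%d/%m/%Y %H:%M:%S.") + f"{dt.microsecond // 1000:03d}"
print(british)  # 17/01/2015 11:49:54.253
```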
You're converting to a `datetime`, you wanna be converting to a `varchar` (or `char` etc.) to do any formatting: ``` select convert(varchar(255), cast(left(N'2015-01-17 11:49:54.253',10) as datetime), 103) ```
Convert to british date format issue
[ "", "sql", "sql-server-2014", "" ]
I am working on a simple update query and I see the below error while executing it. I am quite sure this should not be a length issue at all. What may be the problem?

Error:

> The identifier that starts with identifier is too long. Maximum length is 128

My Query:

```
update dbo.DataSettings
set Query ="/Details?$filter=(Status ne 'yes' and Status ne 'ok')&$expand=name,Address/street,phone/mobile&$orderby=details/Id desc"
where id=5
```
Use single quotes, and escape the quotes inside the text with two single quotes:

```
update dbo.DataSettings
set Query = '/Details?$filter=(Status ne ''yes'' and Status ne ''ok'')&$expand=name,Address/street,phone/mobile&$orderby=details/Id desc'
where id=5
```
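The doubled-quote rule (`''`) is how SQL string literals escape embedded quotes. From application code, parameter binding sidesteps the escaping entirely and avoids this class of error; a sketch with Python's built-in `sqlite3`, with the table recreated just for the demo:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE DataSettings (id INTEGER, Query TEXT)")
con.execute("INSERT INTO DataSettings VALUES (5, NULL)")

text = ("/Details?$filter=(Status ne 'yes' and Status ne 'ok')"
        "&$expand=name,Address/street,phone/mobile&$orderby=details/Id desc")

# The driver quotes the value itself; no doubling of ' in application code.
con.execute("UPDATE DataSettings SET Query = ? WHERE id = ?", (text, 5))

stored = con.execute("SELECT Query FROM DataSettings WHERE id = 5").fetchone()[0]
print(stored == text)  # True
```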
You should use single quotes `'`(and escape those that are in your string with backslash `\`), because now you are assigning `Query` to the identifier (in that case, column name) and if it was even the right size for the identifier, you would probably get error like invalid column name : ``` UPDATE dbo.DataSettings SET Query ='/Details?$filter=(Status ne \'yes\' and Status ne \'ok\')&$expand=name,Address/street,phone/mobile&$orderby=details/Id desc' WHERE id = 5 ```
Sql Query throws Identifier is too long. Maximum length is 128
[ "", "sql", "sql-server", "sql-update", "" ]
If I wanted to display a list of names in ascending order, except for a few defaults, how could I achieve this with a SQL ORDER BY clause?

E.g. result:

```
place
-------
United States
United Kingdom
Bahrain
Australia
Fiji
Indonesia
Japan
Korea
....
```

Where United States, United Kingdom and Bahrain are some defaults we want to come before the regular ascending-order list from the database -- all of these (including the defaults) are fetched from the database and not hardcoded. The defaults should be in a defined order as above (not asc or desc).

Thanks
Try this one. I used a conditional `ORDER BY` using a `CASE` statement in my query:

```
SELECT place
FROM #yourtable
ORDER BY (CASE WHEN place='United States'  THEN '1'
               WHEN place='United Kingdom' THEN '2'
               WHEN place='Bahrain'        THEN '3'
               ELSE place
          END)
```
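The `CASE`-in-`ORDER BY` trick is portable; it relies on the digit keys `'1'`, `'2'`, `'3'` sorting before letters, which holds under the default binary/ASCII collation. A quick check with Python's built-in `sqlite3` (a plain table stands in for the temp table `#yourtable`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE places (place TEXT)")
con.executemany("INSERT INTO places VALUES (?)",
                [('Australia',), ('Bahrain',), ('Fiji',),
                 ('United Kingdom',), ('United States',)])

rows = [r[0] for r in con.execute("""
    SELECT place FROM places
    ORDER BY (CASE WHEN place = 'United States'  THEN '1'
                   WHEN place = 'United Kingdom' THEN '2'
                   WHEN place = 'Bahrain'        THEN '3'
                   ELSE place END)
""")]
print(rows)
```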
I think the correct way to do this would be to add a column to the table, `default_seq int`. Set it to `0` for non-default rows, and to a positive value for the default ones, with the highest number being the one to come first. Then the query becomes:

```
select place
from your_table
order by default_seq desc, place asc
```

This has the advantage that defaults may be changed without changing any code.
SQL order by with default options appearing above
[ "", "mysql", "sql", "" ]
I'm using Oracle SQL. I have the following table:

```
Timestamp             | A   | B
13-11-14 06.49.54.004 | 50  | 70
13-11-14 06.49.54.005 | NULL| 80
13-11-14 06.49.54.006 | NULL| NULL
13-11-14 06.49.54.007 | 40  | 70
13-11-14 06.49.54.008 | 20  | 90
13-11-14 06.49.54.009 | 30  | NULL
```

How can I replace the `NULL` values with the last non-NULL values of each column? Here is the expected output table:

```
Timestamp             | A   | B
13-11-14 06.49.54.004 | 50  | 70
13-11-14 06.49.54.005 | 50  | 80
13-11-14 06.49.54.006 | 50  | 80
13-11-14 06.49.54.007 | 40  | 70
13-11-14 06.49.54.008 | 20  | 90
13-11-14 06.49.54.009 | 30  | 90
```

Please advise.
You could use [the `first_value()` analytic function](http://docs.oracle.com/cd/E11882_01/server.112/e25554/analysis.htm#sthref1167) with a windowing clause, using the timestamp column for ordering and ignoring nulls:

```
select timestamp, a, b,
  first_value(a) ignore nulls over (order by timestamp desc
    rows between current row and unbounded following) as new_a,
  first_value(b) ignore nulls over (order by timestamp desc
    rows between current row and unbounded following) as new_b
from table_name
order by timestamp;

TIMESTAMP                             A          B      NEW_A      NEW_B
---------------------------- ---------- ---------- ---------- ----------
13-NOV-14 06.49.54.004000000         50         70         50         70
13-NOV-14 06.49.54.005000000                    80         50         80
13-NOV-14 06.49.54.006000000                               50         80
13-NOV-14 06.49.54.007000000         40         70         40         70
13-NOV-14 06.49.54.008000000         20         90         20         90
13-NOV-14 06.49.54.009000000         30                    30         90
```

Or going the other way with `last_value()` instead:

```
select timestamp, a, b,
  last_value(a) ignore nulls over (order by timestamp
    rows between unbounded preceding and current row) as new_a,
  last_value(b) ignore nulls over (order by timestamp
    rows between unbounded preceding and current row) as new_b
from table_name
order by timestamp;
```

The window includes the current row, so if that value is not null then it will be used; otherwise it'll use the first/last not-null value (because of the `ignore nulls` clause) as it traverses the window in the specified order.
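`IGNORE NULLS` is an Oracle feature. On engines without it, the same forward fill can be sketched with a correlated subquery that grabs the latest non-NULL value at or before each row; here is an illustration using Python's built-in `sqlite3`, with made-up table and column names and the timestamps shortened to their last component:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE readings (ts TEXT, a INTEGER, b INTEGER)")
con.executemany("INSERT INTO readings VALUES (?, ?, ?)",
                [('004', 50, 70), ('005', None, 80), ('006', None, None),
                 ('007', 40, 70), ('008', 20, 90), ('009', 30, None)])

# For each row, take the latest non-NULL value with ts <= this row's ts.
rows = con.execute("""
    SELECT ts,
           (SELECT a FROM readings r2
            WHERE r2.ts <= r.ts AND r2.a IS NOT NULL
            ORDER BY r2.ts DESC LIMIT 1) AS new_a,
           (SELECT b FROM readings r2
            WHERE r2.ts <= r.ts AND r2.b IS NOT NULL
            ORDER BY r2.ts DESC LIMIT 1) AS new_b
    FROM readings r
    ORDER BY ts
""").fetchall()
print(rows)
```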
Use coalesce to update NULLs only, and use sub-selects to find the most recent values. ``` update tablename t1 set a = coalesce(a,(select max(a) from tablename where Timestamp < t1.Timestamp)), b = coalesce(b,(select max(b) from tablename where Timestamp < t1.Timestamp)) where a is null or b is null ``` Now edited! (MAX is not necessarily the most recent value, so order the sub-select by timestamp instead...) ``` update tablename t1 set a = coalesce(a,(select a from tablename where Timestamp < t1.Timestamp order by Timestamp desc fetch first 1 row only)), b = coalesce(b,(select b from tablename where Timestamp < t1.Timestamp order by Timestamp desc fetch first 1 row only)) where a is null or b is null ``` A newer Oracle version (12c or later) is required for FETCH FIRST.
Updating values instead of NULLs
[ "", "sql", "oracle", "null", "" ]
Background: I have a MS SQL Server database and I want to track changes to it. For example if a column needed to be added or removed or a table needed to be dropped. Something similar to Version control for regular code. The problem: While looking around I saw that there were some tools that can be used: * RedGate SQL Source Control * Visual Studio Database project I am more interested in knowing if either of these tools will track changes to my database? More specifically I have a TFS server that is the source control for my MVC code, can I use either of these with TFS? Will it allow us to restore from older versions? Will it allow multiple developers to work on the database simultaneously?
Being in the database version control space for 5 years (as director of product management at [DBmaestro](http://www2.dbmaestro.com/l/11742/2014-12-31/2grnfp)) and having worked as a DBA for over two decades, I can tell you the simple fact that you cannot treat database objects the way you treat your Java, C# or other files and save the changes in simple DDL scripts. There are many reasons and I'll name a few: * Files are stored locally on the developer's PC and the changes s/he makes do not affect other developers. Likewise, the developer is not affected by changes made by her colleague. In the database this is (usually) not the case and developers share the same database environment, so any change that was committed to the database affects others. * Publishing code changes is done using Check-In / Submit Changes / etc. (depending on which source control tool you use). At that point, the code from the local directory of the developer is inserted into the source control repository. A developer who wants to get the latest code needs to request it from the source control tool. In the database the change already exists and impacts other developers even if it was not checked in to the repository. * During the file check-in, the source control tool performs a conflict check to see if the same file was modified and checked in by another developer while you were modifying your local copy. Again, there is no such check in the database. If you alter a procedure from your local PC and at the same time I modify the same procedure with code from my local PC, then we override each other's changes. * The build process for code is done by getting the label / latest version of the code into an empty directory and then performing a build – compile. The output is binaries, which we copy over the existing ones; we don't care what was there before. In the database we cannot recreate the database, as we need to maintain the data!
Also, the deployment executes SQL scripts which were generated in the build process. * When executing the SQL scripts (with the DDL, DCL, DML (for static content) commands) you assume the current structure of the environment matches the structure it had when you created the scripts. If not, your scripts can fail, as you may be trying to add a new column which already exists. * Treating SQL scripts as code and manually generating them will cause syntax errors, database dependency errors, and scripts that are not reusable, which complicates the task of developing, maintaining and testing those scripts. In addition, those scripts may run on an environment which is different from the one you thought they would run on. * Sometimes the script in the version control repository does not match the structure of the object that was tested, and then errors will happen in production! There are many more, but I think you got the picture. What I found that works is the following: 1. Use an enforced version control system that enforces check-out/check-in operations on the database objects. This makes sure the version control repository matches the code that was checked in, because it reads the metadata of the object during the check-in operation rather than in a separate, manual step. It also allows several developers to work in parallel on the same database while preventing them from accidentally overriding each other's code. 2. Use an impact analysis that utilizes baselines as part of the comparison to identify conflicts and to determine whether a change (when comparing the object's structure between the source control repository and the database) is a real change that originates from development, or a change that originated from a different path and should therefore be skipped, such as a different branch or an emergency fix. An article I wrote on this was published [here](http://www2.dbmaestro.com/l/11742/2014-12-31/2grnfr), you are welcome to read it.
For this type of work, ApexSQL Source Control has shown to be all that you need. With this SSMS add-in you can work directly on a database, and all of your [changes will be tracked in real time](https://solutioncenter.apexsql.com/using-sql-source-control-to-track-database-changes/). Yes, several developers can work at the same time on the same database. When one developer works on one or several objects, other developers can see which objects those are, and until the first one finishes changing an object the others cannot change it; they will not be allowed to. If an object is changed incorrectly, the previous version or any earlier version can be restored at any moment. This add-in has all the necessary options and features to allow developers to work without losing time checking the changes made against an object, since the add-in does that for them. And you can always see by whom and when a change was made, and what it was.
Track changes made to a database
[ "", "sql", "sql-server", "database", "version-control", "" ]
I am writing a simple query to fetch the Amount for a particular date. The query works well without the where clause, but after adding the where clause it does not fetch any records. Please help. My query is ``` Select OSTotal as RevenueDaily, systemlastedittime as Lastedittime from AccTransactionHeader where systemlastedittime = '09/02/2015' ``` Also, the datatype of `systemlastedittime` is `DT`, and I am not aware of its format, whether it is `ddmmyyyy` or `mmddyyyy`.
Format should be `yyyy-MM-dd` ``` Select OSTotal as RevenueDaily, systemlastedittime as Lastedittime from AccTransactionHeader where CAST(systemlastedittime as DATE) ='2015-02-09' ```
you might also try ``` set dateformat ymd ``` before your select statement. taken from MSDN <https://msdn.microsoft.com/en-us/library/ms189491.aspx>
MS SQL Query not working when i put where clause
[ "", "sql", "sql-server", "where-clause", "" ]
I have a query that returns multiple tables, something like this: ``` SELECT TableName, DatabaseName +'.'+ TableName, ColumnName FROM DBC.Columns WHERE ColumnName = 'id' ``` And I need to loop through these tables, looking at the information stored in them, in order to get only specific tables. I tried something like the code below, using 'LOOP' and a cursor, but it says that `Query is invalid` (the code has been taken from [here](http://www.dwhpro.com/teradata-sql-stored-procedures-cursors/)): ``` DECLARE cursor_Tables CURSOR FOR SELECT DatabaseName || '.' || TableName FROM DBC.Columns WHERE ColumnName ='id'; OPEN cursor_Tables; label1: LOOP FETCH cursor_Tables into tbName; IF (SQLSTATE ='02000') THEN LEAVE label1; END IF; CASE WHEN ( SELECT COUNT(*) FROM prd3_db_tmd.K_PTY_NK01 WHERE id = 0 ) > 0 THEN tbName END END LOOP label1; CLOSE cursor_Tables; END; ``` How can I actually deal with this problem? Do I need to use a stored procedure in addition? The DBMS is Teradata
You need a Stored Procedure because this is the only place where you can use a cursor in Teradata. ``` REPLACE PROCEDURE testproc() DYNAMIC RESULT SETS 1 BEGIN DECLARE tbName VARCHAR(257); DECLARE SqlStr VARCHAR(500); -- temporary table to store the result set CREATE VOLATILE TABLE _vt_(tbName VARCHAR(257)) ON COMMIT PRESERVE ROWS; -- your existing query to return the table name -- Better use ColumnsV instead of Columns FOR cursor_Tables AS SELECT DatabaseName || '.' || TABLENAME AS tbName FROM DBC.ColumnsV WHERE ColumnName ='id' DO -- prepare the dynamic SQL ... SET SqlStr = 'insert into _vt_ select ''' || cursor_tables.tbName || ''' from ' || cursor_tables.tbName || ' where id = 0 having count(*) > 0; '; -- ... and run it EXECUTE IMMEDIATE SqlStr; END FOR; BEGIN -- return the result set DECLARE resultset CURSOR WITH RETURN ONLY FOR S1; SET SqlStr = 'SELECT * FROM _vt_;'; PREPARE S1 FROM SqlStr; OPEN resultset; END; -- drop the same volatile table that was created above DROP TABLE _vt_; END; ```
If this is SQL Server you can check following [SQL cursor](http://www.kodyaz.com/articles/cursor.aspx), I edited the cursor declaration and the code within Although they may differ from your requirement, I think you can modify easily ``` declare @sql nvarchar(max) declare @tablename nvarchar(100) DECLARE cursor_Tables CURSOR FOR SELECT s.name + '.' + o.name --s.name [schema], o.name [table] FROM sys.Columns c inner join sys.objects o on c.object_id = o.object_id inner join sys.schemas s on s.schema_id = o.schema_id WHERE c.Name ='id' and o.type = 'U' /* SELECT TableName, DatabaseName +'.'+ TableName, ColumnName FROM DBC.Columns WHERE ColumnName = 'id' */ OPEN cursor_Tables; FETCH NEXT FROM cursor_Tables INTO @tablename WHILE @@FETCH_STATUS = 0 BEGIN -- print @tablename set @sql = 'select case when count(*) > 0 then ''' + @tablename + ''' else '''' end from ' + @tablename exec sp_executesql @sql FETCH NEXT FROM cursor_Tables INTO @tablename END CLOSE cursor_Tables; DEALLOCATE cursor_Tables; ```
How get information from multiple tables using cursor?
[ "", "sql", "cursor", "teradata", "" ]
This is going to be a little complicated. Let me start with my tables. ``` clients [src = 0] --------- clientID code company --------- ------- --------- 1 ABC ABC Corp 2 DEF DEF Corp carriers [src = 1] --------- clientID code company --------- ------- ------- 1 ABC ABC Inc. 2 JHI JHI Inc. link -------- contactID uID src --------- ----- ---- 1 1 0 1 1 1 1 2 0 contact info -------------- contactID fname lname --------- ------- -------- 1 John Smith 2 Quincy Jones ``` So, i'm trying to do a search for say "ABC" on the link table. The link table needs to basically join to either the carriers or clients table depending on the link.src column. It should find two matches, one in the clients and one in the carriers, but since both resolve to contactID (links table) of 1, i should then query the contact info table and return Found 1 record(s): John Smith I hope this makes sense. Any help is greatly appreciated!
Here is one approach using `left join` (joining the link table's `uID` to the matching table's `clientID` based on `src`, then filtering on either code): ``` select distinct co.* from link l left join clients cl on l.src = 0 and l.uID = cl.clientID left join carriers ca on l.src = 1 and l.uID = ca.clientID left join contacts co on l.contactID = co.contactID where 'ABC' in (cl.code, ca.code) ```
Here is another approach. First, you `UNION` the `Clients` and `Carriers` tables and add a new column `ContactType` to differentiate one from the other. Use `0` for `Clients` and `1` for `Carriers`, the same as `src`. Then you perform a `LEFT JOIN` to get the desired result. ``` ;WITH Clients(ClientID, Code, Company) AS( SELECT 1, 'ABC', 'ABC Corp' UNION ALL SELECT 2, 'DEF', 'DEF Corp' ) ,Carriers(ClientID, Code, Company) AS( SELECT 1, 'ABC', 'ABC Inc.' UNION ALL SELECT 2, 'JHI', 'JHI Inc.' ) ,Link(ContactId, UID, Src) AS( SELECT 1, 1, 0 UNION ALL SELECT 1, 1, 1 UNION ALL SELECT 1, 2, 0 ) ,ContactInfo(ContactID, FName, LName) AS( SELECT 1, 'John', 'Smith' UNION ALL SELECT 2, 'Quincy', 'Jones' ) -- START ,Contact(ContactID, ContactType, Code, Company) AS( SELECT ClientID, 0, Code, Company FROM Clients UNION ALL SELECT ClientID, 1, Code, Company FROM Carriers ) SELECT DISTINCT ci.FName, ci.LName FROM Link l LEFT JOIN Contact c ON c.ContactID = l.UID AND c.ContactType = l.src LEFT JOIN ContactInfo ci ON ci.ContactID = c.ContactID WHERE c.Code = 'ABC' ```
grabbing information by joining multiple tables
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have a 2 tables with names and balances ex Table A ``` Name | Amount ----------------------- Abraham | 500 Abraham | 100 George | -200 George | 200 Thomas | 300 Thomas | -100 ``` Table B ``` Name | Amount ------------------------- Abraham | 100 Abraham | 200 George | -200 George | 50 Thomas | 400 Thomas | 300 Albert | -100 Albert | -200 ``` So I need to add the 2 tables up , every name has a positive and a negative, I dont know if Table A and table B will have the same amount of unique names or not so it could be either or. When I run the query ``` Select sum(Amount) from table_A group by Name ``` This query Works for either Table A or Table B individually but when I try to run a query like this I get an outrageous number ``` Select a.Name , sum(a.amount) + sum(b.amount) from table_A full outer join table_B b on a.Name = b.Name group by a.Name ``` is there a way to do this in one query ?
My query works fine using `UNION ALL`. See my query and SQL Fiddle Demo below: ``` SELECT Name,SUM(Amount)Amount FROM( SELECT Name, Amount FROM TableA UNION ALL SELECT Name, Amount FROM TableB) AS A GROUP BY Name ``` **[SQL FIDDLE DEMO HERE](http://sqlfiddle.com/#!2/d3d56/14/0)**
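As a quick sanity check of the `UNION ALL` + `GROUP BY` pattern (sketched here against SQLite's in-memory engine rather than SAP IQ, so treat dialect details as assumptions):

```python
import sqlite3

# Recreate the sample tables from the question in memory.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TableA(Name TEXT, Amount INT);
CREATE TABLE TableB(Name TEXT, Amount INT);
INSERT INTO TableA VALUES ('Abraham',500),('Abraham',100),
  ('George',-200),('George',200),('Thomas',300),('Thomas',-100);
INSERT INTO TableB VALUES ('Abraham',100),('Abraham',200),
  ('George',-200),('George',50),('Thomas',400),('Thomas',300),
  ('Albert',-100),('Albert',-200);
""")
totals = dict(conn.execute("""
    SELECT Name, SUM(Amount) FROM
      (SELECT Name, Amount FROM TableA
       UNION ALL
       SELECT Name, Amount FROM TableB)
    GROUP BY Name
"""))
```

Note that `UNION ALL` (not `UNION`) is essential here: a plain `UNION` would deduplicate identical (Name, Amount) rows across the two tables and skew the sums.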
**In ORACLE I would do** ``` SELECT Name, sum(Amount) AS Amount FROM (SELECT Name, Amount FROM table_A UNION ALL SELECT Name, Amount FROM table_B ) GROUP BY Name ``` Check whether it is possible to do something similar in Sybase. And I would turn the inline view into a "real" view. **If I did not understand your intentions I am very sorry.**
query to Sum product of two tables
[ "", "sql", "database", "sap-iq", "" ]
I'm looping through LDAP data placing some data from that into an MS SQL database. Out of the 7k+ LDAP records, a few have been causing an issue when trying to place the **lastlogin** into the database that has a DateTime format. The problem is the date it has, **12/31/1600 7:00:00 PM**, is not correct and causes an error of **The conversion of a varchar data type to a datetime data type resulted in an out-of-range value. The statement has been terminated.** I've been checking the date format before it inserts it into the database. ``` If Not IsDate(empInfo.lastLogon) Then empInfo.lastLogon = Format(Now.AddYears(-1), "MM/dd/yyyy HH:MM:ss") End If ``` But doesn't seem to catch the year of **1600** which seems to be causing that error above. Is there any code I can use to detect those weird years and replace it with a random, legit date so it will place it into the database?
I think your problem is caused because what is a valid date in the code is not a valid date in the database, probably because the database can store an earliest date of 1753-01-01 (the minimum for SQL Server's `datetime` type), whereas the code allows much earlier dates (such as the 1600 you cited). Try adding a test for the year to your IF statement: ``` If Not IsDate(empInfo.lastLogon) OrElse empInfo.lastLogon.year < 2000 Then empInfo.lastLogon = Format(Now.AddYears(-1), "MM/dd/yyyy HH:mm:ss") End If ``` Instead of using a default of 1 year ago, you might want to use a marker value like `01/01/2000 00:00:00`. This will let you easily identify the ones which didn't have a "real" value. But I don't know your business rules, so YMMV.
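For context: the 12/31/1600 value is almost certainly the zero of the Windows FILETIME epoch (1601-01-01 UTC) rendered in a UTC-5 local time, which is what LDAP's `lastLogon` holds when a user has never logged on. As a hedged, language-neutral sketch of the same range guard (the function name and fallback marker are made up for illustration):

```python
from datetime import datetime

# SQL Server's datetime type cannot store values before this date.
SQL_SERVER_DATETIME_MIN = datetime(1753, 1, 1)

def sanitize_last_logon(value, fallback):
    """Return value if it fits in a SQL Server datetime column, else fallback."""
    if value is None or value < SQL_SERVER_DATETIME_MIN:
        return fallback
    return value
```

Using a fixed marker such as `datetime(2000, 1, 1)` as the fallback makes never-logged-on accounts easy to spot later.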
I think you want something like this: ``` If Not IsDate(empInfo.lastLogon) OrElse DateTime.Parse(empInfo.lastLogon).Year < 2000 Then empInfo.lastLogon = Format(Now.AddYears(-1), "MM/dd/yyyy HH:mm:ss") End If ```
VB.net checking date (year) to see if its an weird year
[ "", "sql", "vb.net", "datetime", "ldap", "" ]
I'm working with a legacy tables in MS SQL Server 2008 and need to create a `view` to display the data in a way a new system needs it. Here's the legacy table. ## Table ``` id userid sport1 sport1level sport2 sport2level ---------------------------------------------------------------------------------------------------------------------------------- 1 11 Baseball Varsity Baseball Recreational 2 22 Baseball,Basketball Varsity,Junior Varsity Baseball Varsity 3 33 Soccer Varsity Soccer,Track & Field Recreational,Intramural 4 44 null null Tennis Varsity 5 55 Volleyball Varsity null null 6 66 Baseball,Basketball Varsity,Varsity Soccer,Football Varsity,Varsity 7 77 Baseball,Basketball,Rowing Varsity,Varsity,Varsity Soccer,Football,Volleyball Varsity,Varsity,Recreational ``` This is the result we are looking for: ## Result ``` id userid sport sportlevel1 sportlevel2 --------------------------------------------------------------------------------------- 1_1 11 Baseball Varsity Recreational 2_1 22 Baseball Varsity Varsity 2_2 22 Basketball Junior Varsity null 3_1 33 Soccer Varsity Recreational 3_2 33 Track & Field null Intramural 4_1 44 Tennis null Varsity 5_1 55 Volleyball Varsity null 6_1 66 Baseball Varsity null 6_2 66 Basketball Varsity null 6_3 66 Soccer null Varsity 6_4 66 Football null Varsity 7_1 77 Baseball Varsity null 7_2 77 Basketball Varsity null 7_3 77 Rowing Varsity null 7_4 77 Soccer null Varsity 7_5 77 Football null Varsity 7_6 77 Volleyball null Recreational ``` Key things to note: * the original table may contain more than 2 comma separated values (I added a 7th row to show this * `id` from legacy table is an `int` but not necessarily needed this way in new table * you may have noticed that the `id` for the new table is a concatenation of the `{original id}_{incremental sport count per user}`. Where `{incremental sport count per user}`, is a `sub id` if you will, for each sport chosen by a user. 
e.g.: `userid = 2` has 2 `distinct` sports selected: **baseball** and **basketball**, even though **baseball** falls in two columns. If I have to create helper functions or whatever, please let me know. If you have any questions or need more info, please let me know. Please don't try to ask why it's structured this way or try to give a better structure to the new format. Thanks
Not too elegant but it does the job: ``` WITH Data AS( SELECT * FROM ( VALUES ( 1, 11, 'Baseball ', 'Varsity ', 'Baseball ', 'Recreational ' ) , ( 2, 22, 'Baseball,Basketball', 'Varsity,Junior Varsity', 'Baseball ', 'Varsity ' ) , ( 3, 33, 'Soccer ', 'Varsity ', 'Soccer,Track & Field', 'Recreational,Intramural' ) , ( 4, 44, NULL, NULL, 'Tennis ', 'Varsity ' ) , ( 5, 55, 'Volleyball ', 'Varsity ', NULL, NULL ) , ( 6, 66, 'Baseball,Basketball', 'Varsity,Varsity ', 'Soccer,Football ', 'Varsity,Varsity ' ) , ( 7, 77, 'Baseball,Football,Rugby,Wrestling', 'Varsity,Varsity,Varsity,Junior Varsity', 'Rugby', 'Recreational' ) ) AS T(id, userid, sport1, sport1level, sport2, sport2level) ), SplitValues AS( -- Substring logic is in the Anchor record ommited to prevent repetition of -- code, therefor level 0 needs to be ignored SELECT id , userid , [level] = 0 , sport1 = sport1 , sport1level = sport1level , sport2 = sport2 , sport2level = sport2level , sport1Remainder = sport1 , sport1levelRemainder = sport1level , sport2Remainder = sport2 , sport2levelRemainder = sport2level FROM data UNION ALL SELECT id , userid , [level] = [level] + 1 , sport1 = SUBSTRING(sport1Remainder, 1, ISNULL(NULLIF(CHARINDEX(',', sport1Remainder)- 1, -1), LEN(sport1Remainder))) , sport1level = SUBSTRING(sport1levelRemainder, 1, ISNULL(NULLIF(CHARINDEX(',', sport1levelRemainder)- 1, -1), LEN(sport1levelRemainder))) , sport2 = SUBSTRING(sport2Remainder, 1, ISNULL(NULLIF(CHARINDEX(',', sport2Remainder)- 1, -1), LEN(sport2Remainder))) , sport2level = SUBSTRING(sport2levelRemainder, 1, ISNULL(NULLIF(CHARINDEX(',', sport2levelRemainder)- 1, -1), LEN(sport2levelRemainder))) , sport1Remainder = SUBSTRING(sport1Remainder, NULLIF(CHARINDEX(',', sport1Remainder)+1, 1), LEN(sport1Remainder)) , sport1levelRemainder = SUBSTRING(sport1levelRemainder, NULLIF(CHARINDEX(',', sport1levelRemainder)+1, 1), LEN(sport1levelRemainder)) , sport2Remainder = SUBSTRING(sport2Remainder, NULLIF(CHARINDEX(',', sport2Remainder)+1, 
1), LEN(sport2Remainder)) , sport2levelRemainder = SUBSTRING(sport2levelRemainder, NULLIF(CHARINDEX(',', sport2levelRemainder)+1, 1), LEN(sport2levelRemainder)) FROM SplitValues WHERE sport1Remainder IS NOT NULL OR sport2Remainder IS NOT NULL ), SplitRowsWithDifferentSport AS( SELECT id , userid , sport1 , sport1level , sport1level2 = CASE WHEN sport1 = sport2 THEN sport2level END FROM SplitValues WHERE [level] <> 0 UNION ALL SELECT id , userid , sport2 , null , sport1level2 = sport2level FROM SplitValues WHERE ISNULL(sport1, '') <> sport2 AND [level] <> 0 ) SELECT id = CAST(S.id AS VARCHAR(max)) + '_' + CAST(ROW_NUMBER() OVER (PARTITION BY S.userid ORDER BY s.id) AS VARCHAR(max)) , S.sport1 , sport1level1 = MAX(S.sport1level) , sport1level2 = MAX(S.sport1level2) FROM SplitRowsWithDifferentSport AS S WHERE S.sport1 IS NOT NULL GROUP BY S.ID, S.userid, S.sport1 ORDER BY id ``` **EDIT:** Changed the SplitValues CTE to allow for multiple sports in a single column. A maximum of 99 sports per row is now supported. If you need to go even higher than that, add `OPTION(MAXRECURSION 0)` to have no limit at all. **EDIT2:** Added group by to get rid of same sport on multiple rows.
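Both answers ultimately rely on pairing the n-th item of the comma-separated sport list with the n-th item of the level list. As a minimal, language-neutral sketch of that pairing logic outside T-SQL (the function name is hypothetical):

```python
def split_pairs(sports, levels):
    """Pair comma-separated sports with their levels; NULL input yields no pairs."""
    if not sports:
        return []
    return list(zip(sports.split(","), levels.split(",")))
```

Note that entries like "Track & Field" contain no comma themselves, so a plain split on "," is safe for this data.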
The output you expect is not optimized, because it provides a way to produce nulls in SportLevel2. You should store each sport and each level as a separete row, for example: ``` nid userid SportName SportLevel 1_1 11 Baseball Varsity 1_2 11 Baseball Recreational 2_1 22 Baseball Varsity 2_2 22 Baseball Varsity 3_1 33 Soccer Varsity 3_2 33 Soccer Recreational 3_3 33 Track & Field Recreation 4_2 44 Tennis Varsity 5_1 55 Volleyball Varsity 6_1 66 Baseball Varsity 6_2 66 Soccer Varsity 6_2 66 Basketball Varsity 6_3 66 Football Varsity ``` To achieve that, you can use CTE as follow: ``` DECLARE @tmp TABLE(id INT IDENTITY(1,1), userid INT , sport1 VARCHAR(150), sport1level VARCHAR(150), sport2 VARCHAR(150), sport2level VARCHAR(150)) INSERT INTO @tmp (userid, sport1, sport1level, sport2, sport2level) VALUES(11, 'Baseball', 'Varsity', 'Baseball', 'Recreational'), (22, 'Baseball,Basketball', 'Varsity,Junior Varsity', 'Baseball', 'Varsity'), (33, 'Soccer', 'Varsity', 'Soccer,Track & Field', 'Recreational,Intramural'), (44, null, null, 'Tennis', 'Varsity'), (55, 'Volleyball', 'Varsity', null, null), (66, 'Baseball,Basketball', 'Varsity,Varsity', 'Soccer,Football', 'Varsity,Varsity') ;WITH Sports AS ( --1) initial value -- a) no commas in sport1 SELECT id, userid, 1 AS sportid, sport1 AS SportName, sport1level AS SportLevel, NULL AS SportNameRemainder, NULL AS SportLevelRemainder FROM @tmp WHERE CHARINDEX(',', sport1)=0 AND CHARINDEX(',', sport1level)=0 UNION ALL -- b) no commas in sport2 SELECT id, userid, 2 AS sportid, sport2 AS SportName, sport2level AS SportLevel, NULL AS SportNameRemainder, NULL AS SportLevelRemainder FROM @tmp WHERE CHARINDEX(',', sport2)=0 AND CHARINDEX(',', sport2level)=0 UNION ALL -- c) commas in sport1 SELECT id, userid, 1 AS sportid, LEFT(sport1, CHARINDEX(',', sport1)-1) AS SportName, LEFT(sport1level , CHARINDEX(',', sport1level)-1) AS SportLevel, RIGHT(sport1, LEN(sport1) - CHARINDEX(',', sport1)) AS SportNameRemainder, LEFT(sport1level , 
LEN(sport1level) - CHARINDEX(',', sport1level)) AS SportLevelRemainder FROM @tmp WHERE CHARINDEX(',', sport1)>0 AND CHARINDEX(',', sport1level)>0 UNION ALL -- d) commas in sport2 SELECT id, userid, 2 AS sportid, LEFT(sport2, CHARINDEX(',', sport2)-1) AS SportName, LEFT(sport2level , CHARINDEX(',', sport2level)-1) AS SportLevel, RIGHT(sport2, LEN(sport2) - CHARINDEX(',', sport2)) AS SportNameRemainder, LEFT(sport2level , LEN(sport2level) - CHARINDEX(',', sport2level)) AS SportLevelRemainder FROM @tmp WHERE CHARINDEX(',', sport2)>0 AND CHARINDEX(',', sport2level)>0 UNION ALL --2) recursive part SELECT id, userid, sportid +1 AS sportid, SportNameRemainder AS SportName, SportLevelRemainder AS SportLevel, NULL AS SportNameRemainder, NULL AS SportLevelRemainder FROM Sports WHERE CHARINDEX(',', SportNameRemainder)=0 AND CHARINDEX(',', SportLevelRemainder)=0 ) SELECT CONCAT(CONVERT(VARCHAR(5), id), '_', CONVERT(VARCHAR(5), sportid)) AS nid, userid, SportName, SportLevel FROM Sports ORDER BY id, userid, sportid ``` Feel free to change it to your needs. Note: I'd suggest to replace string values in *SportLevel* to its numeric values and does not concatenate *id* with *SportLevel*, for example: Varsity might have value 1, Recreational - 2, etc. The same logic should be used to *SportName*. It might be necessary to join data from 2 tables. If you need help, call ;)
TSQL Splitting Comma Separated Field into Multiple Rows with a Row per user Column
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "" ]
I have the following table [SQLFiddle](http://sqlfiddle.com/#!2/7dbf1e/4) What I'm attempting to do is to select three random images but to make sure that no two images have the same object, what I attempted to do is to do a GROUP BY along with an ORDER BY rand() but that is failing as it is always giving me cat1.jpg, dog1.jpg, box1.jpg (All images whose path ends with 1 and not the others) The fiddle includes the query I ran and how it is not working.
What you need is a Random aggregate function. Usually there are no such functions in the current RDBMSs. Similar question has [been asked](https://stackoverflow.com/questions/1890155/how-do-i-create-a-table-alias-in-mysql). So the basic idea is shuffle the elements, then group by, and then for every group just select the first row for every group. If we modify one of answers provided on the link we get this. ``` select object_id, name, image_path from (SELECT images.image_path AS image_path, objects.id AS object_id, objects.name FROM objects LEFT JOIN images ON images.object_id = objects.id ORDER BY RAND()) as z group by z.object_id, z.name ```
You can't get a random image as MySQL always returns that data based on the time of insert (first come, first serve), i.e. internal order. But you can get a random result using following approach ([fiddle](http://sqlfiddle.com/#!2/7dbf1e/29/0)): ``` SELECT images.image_path AS image_path, objects.name FROM objects LEFT JOIN ( SELECT object_id, SUBSTRING_INDEX(GROUP_CONCAT(image_path order by rand()), ',', 1) AS image_path FROM images GROUP BY object_id ) as images ON images.object_id = objects.id GROUP BY objects.name ``` If there's a restrictive WHERE-condition on the objects table you might get a better performance when you join first and the GROUP\_CONCAT.
How to select a random row with a group by clause?
[ "", "mysql", "sql", "random", "group-by", "" ]
I'm trying to solve this query where I have two players per game with their scores. The query should return a chart of the players with the number of games played; if the score was 3:0 or 0:3 then the player that scored 3 gets 3 points, otherwise any win is worth 2 points. My table: ``` player1 player2 player1score player2score ------- ------- ------------- ------------ john lee 3 0 maria andy 1 3 andy john 1 3 ``` the desired table should be like this: ``` players gamesplayed points ------- ----------- ------ john 2 5 andy 2 2 maria 1 0 lee 1 0 ``` I have this query: ``` SELECT DISTINCT players, count(*) gamesplayed FROM ( SELECT player1 AS players FROM table UNION SELECT player2 AS players FROM table ) players ``` But the query above returns the count of all players together and not individually in `gamesplayed`. Can anyone see the problem? I'm also not sure how to implement the `points` column. Many thanks
Now I think this is a fitting MySQL statement; I just changed it to an inline view. I have no MySQL environment at hand, so please excuse it if the syntax isn't fully right: ``` SELECT player, COUNT(player) AS gamesPlayed, SUM(score) score FROM (SELECT player1 AS player, CASE WHEN player1score - player2score > 2 THEN + 3 WHEN player1score - player2score > 0 THEN +2 ELSE + 0 END as score FROM playerTable UNION ALL SELECT player2 AS player, CASE WHEN player2score - player1score > 2 THEN + 3 WHEN player2score - player1score > 0 THEN +2 ELSE + 0 END AS score FROM playerTable) AS playerSel Group by player; ```
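For a quick verification, here is the same `UNION ALL` + `CASE` scoring pattern run against an in-memory SQLite database (a sketch; SQLite's dialect differs from MySQL in other respects):

```python
import sqlite3

# Sample games from the question.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE games(player1 TEXT, player2 TEXT,
                   player1score INT, player2score INT);
INSERT INTO games VALUES
  ('john','lee',3,0), ('maria','andy',1,3), ('andy','john',1,3);
""")
result = {row[0]: (row[1], row[2]) for row in conn.execute("""
    SELECT player, COUNT(*) AS gamesplayed, SUM(score) AS points FROM (
      SELECT player1 AS player,
             CASE WHEN player1score - player2score > 2 THEN 3
                  WHEN player1score - player2score > 0 THEN 2
                  ELSE 0 END AS score
      FROM games
      UNION ALL
      SELECT player2,
             CASE WHEN player2score - player1score > 2 THEN 3
                  WHEN player2score - player1score > 0 THEN 2
                  ELSE 0 END
      FROM games)
    GROUP BY player
""")}
```

This matches the desired table in the question: john played 2 games for 5 points, andy 2 games for 2 points, and maria and lee each played 1 game for 0 points.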
Done in SQL-Server, dunno if it works with MySql: ``` WITH playerSel AS( SELECT player1 AS player, CASE WHEN player1score - player2score > 2 THEN + 3 WHEN player1score - player2score > 0 THEN +2 ELSE + 0 END as score FROM playerTable UNION ALL SELECT player2 AS player, CASE WHEN player2score - player1score > 2 THEN + 3 WHEN player2score - player1score > 0 THEN +2 ELSE + 0 END as score FROM playerTable) SELECT player, COUNT(player) AS gamesPlayed, SUM(score) score FROM playerSel Group by player; ``` I Retrieved the following Information: andy;2;2 john;2;5 lee;1;0 maria;1;0
MYSQL - how to return unique count on two columns
[ "", "mysql", "sql", "database", "" ]
How can I merge multiple rows with same `ID` into one row. When value in first and second row in the same column is the same or when there is value in first row and `NULL` in second row. I don't want to merge when value in first and second row in the same column is different. I have table: ``` ID |A |B |C 1 NULL 31 NULL 1 412 NULL 1 2 567 38 4 2 567 NULL NULL 3 2 NULL NULL 3 5 NULL NULL 4 6 1 NULL 4 8 NULL 5 4 NULL NULL 5 ``` I want to get table: ``` ID |A |B |C 1 412 31 1 2 567 38 4 3 2 NULL NULL 3 5 NULL NULL 4 6 1 NULL 4 8 NULL 5 4 NULL NULL 5 ```
I think there's a simpler solution to the above answers (which is also correct). It basically gets the merged values that can be merged within a CTE, then merges that with the data not able to be merged. ``` WITH CTE AS ( SELECT ID, MAX(A) AS A, MAX(B) AS B, MAX(C) AS C FROM dbo.Records GROUP BY ID HAVING MAX(A) = MIN(A) AND MAX(B) = MIN(B) AND MAX(C) = MIN(C) ) SELECT * FROM CTE UNION ALL SELECT * FROM dbo.Records WHERE ID NOT IN (SELECT ID FROM CTE) ``` SQL Fiddle: <http://www.sqlfiddle.com/#!6/29407/1/0>
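The `MAX`/`MIN` `HAVING` trick can be exercised quickly outside SQL Server; as a hedged cross-check, here is the same query shape run against in-memory SQLite using the unambiguous part of the sample data (IDs 1-3):

```python
import sqlite3

# Rows for IDs 1-3 from the question; aggregates ignore NULLs,
# so MAX(A) = MIN(A) holds exactly when non-null A values agree.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Records(ID INT, A INT, B INT, C INT);
INSERT INTO Records VALUES
  (1, NULL, 31, NULL), (1, 412, NULL, 1),
  (2, 567, 38, 4),     (2, 567, NULL, NULL),
  (3, 2, NULL, NULL),  (3, 5, NULL, NULL);
""")
rows = conn.execute("""
WITH CTE AS (
  SELECT ID, MAX(A) AS A, MAX(B) AS B, MAX(C) AS C
  FROM Records
  GROUP BY ID
  HAVING MAX(A) = MIN(A) AND MAX(B) = MIN(B) AND MAX(C) = MIN(C)
)
SELECT * FROM CTE
UNION ALL
SELECT * FROM Records WHERE ID NOT IN (SELECT ID FROM CTE)
ORDER BY ID
""").fetchall()
```

IDs 1 and 2 collapse into single merged rows, while ID 3 (whose A values conflict) comes through unmerged.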
``` WITH Collapsed AS ( SELECT ID, A = Min(A), B = Min(B), C = Min(C) FROM dbo.MyTable GROUP BY ID HAVING EXISTS ( SELECT Min(A), Min(B), Min(C) INTERSECT SELECT Max(A), Max(B), Max(C) ) ) SELECT * FROM Collapsed UNION ALL SELECT * FROM dbo.MyTable T WHERE NOT EXISTS ( SELECT * FROM Collapsed C WHERE T.ID = C.ID ); ``` # [See this working in a SQL Fiddle](http://sqlfiddle.com/#!6/780e0/3) This works by creating all the mergeable rows through the use of `Min` and `Max`--which should be the same for each column within an `ID` and which usefully exclude `NULL`s--then appending to this list all the rows from the table that couldn't be merged. The special trick with `EXISTS ... INTERSECT` allows for the case when a column has all `NULL` values for an `ID` (and thus the `Min` and `Max` are `NULL` and can't equal each other). That is, it functions like `Min(A) = Max(A) AND Min(B) = Max(B) AND Min(C) = Max(C)` but allows for `NULL`s to compare as equal. Here's a slightly different (earlier) solution I gave that may offer different performance characteristics, and being more complicated, I like less, but being a single flowing query (without a `UNION`) I kind of like more, too. ``` WITH Collapsible AS ( SELECT ID FROM dbo.MyTable GROUP BY ID HAVING EXISTS ( SELECT Min(A), Min(B), Min(C) INTERSECT SELECT Max(A), Max(B), Max(C) ) ), Calc AS ( SELECT T.*, Grp = Coalesce(C.ID, Row_Number() OVER (PARTITION BY T.ID ORDER BY (SELECT 1))) FROM dbo.MyTable T LEFT JOIN Collapsible C ON T.ID = C.ID ) SELECT ID, A = Min(A), B = Min(B), C = Min(C) FROM Calc GROUP BY ID, Grp ; ``` This is also in the above SQL Fiddle. This uses similar logic as the first query to calculate whether a group should be merged, then uses this to create a grouping key that is either the same for all rows within an `ID` or is different for all rows within an `ID`. 
With a final `Min` (`Max` would have worked just as well) the rows that should be merged are merged because they share a grouping key, and the rows that shouldn't be merged are not because they have distinct grouping keys over the `ID`. Depending on your data set, indexes, table size, and other performance factors, either of these queries may perform better, though the second query has some work to do to catch up, with two sorts instead of one.
Merge multiple rows with same ID into one row
[ "", "sql", "sql-server", "t-sql", "" ]
I have a query where I need to get rows from a table where any of two foreign keys exists in another query. Here is the simplified SQL: ``` Select MainID From MainTable Where Key1 In (Select SubID From SubTable Where UserID=@UserID) Or Key2 In (Select SubID From SubTable Where UserID=@UserID) ``` As you can see, the sub-query is duplicated. Is the SQL compiler intelligent enough to recognize this and run the sub-query once only or does it run twice? Is there a better way I can write this SQL? Update: I should have mentioned this originally - SubID is the primary key on SubTable.
You would replace the IN clause with an EXISTS clause: ``` Select MainID From MainTable Where Exists ( Select * From SubTable Where UserID = @UserID And SubID in (MainTable.Key1, MainTable.Key2) ); ```
You can use a common table expression: ``` with subid_data as ( Select SubID From SubTable Where UserID=@UserID ) Select MainID From MainTable Where Key1 In (select SubID from subid_data) Or Key2 In (select SubID from subid_data); ```
How to eliminate duplicate of subquery in ".. where X in (S) or Y in (S)"?
[ "", "sql", "sql-server", "subquery", "" ]
Suppose I have a table with the definition below:

```
CREATE TABLE Test
(
a INT,
b INT
)
```

After that, I create a view on the table:

```
CREATE VIEW ViewTest
AS
SELECT *
FROM Test
```

When I query the view, it returns two columns, i.e. `A & B`. Later, I update the definition of the table and add a new column to it:

```
ALTER TABLE Test
ADD c INT
```

But now when I query the view, it still returns the same two columns, not three. **I just want to know why. Since I used the Select \* statement, shouldn't it return all of the columns every time?**
When you create a view it stores the view definition as metadata in the system tables. Even if you use `SELECT * FROM` it will store the exact column names e.g. `SELECT a, b FROM` If you then update the base table the metadata definition for the view still remains the same so any new columns won't be picked up. You need to either drop and recreate the view or run [sp\_refreshview](https://msdn.microsoft.com/en-GB/library/ms187821.aspx) to update the definition
When you add new columns to Test, ViewTest already exists, and only has columns A and B. You'd have to drop ViewTest and re-create it for your "SELECT \*" statement to retrieve the new column C.
Why view is not updated if my table definition is updated?
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I am trying to create a username for each person in the marketing table. I want an update statement that inserts the new username into the username column. I want to take the first letter of the firstname, join it to the first 5 letters of the surname, and save this into the username column of the marketing table. I need to put this inside a procedure somehow too. Here's my code:

```
SELECT CONCAT(m.firstname, 1)||SUBSTR(m.surname, 5)
INTO username 
FROM marketing
WHERE marketing_id = 10001;
```

However, I am getting errors that I cannot seem to fix, like firstname and surname are invalid identifiers. Any help here is appreciated.
```
update marketing
set username = concat(substr(firstname, 1, 1), substr(surname, 1, 5))
WHERE marketing_id = 10001;
```
You seem to be confusing the `concat` function, `substr` function and the concatenation operator (`||`). You aren't using `substr` to get the first character of the first name, and to restrict the length of both you need to provide [both the starting position and the substring length](http://docs.oracle.com/cd/E11882_01/server.112/e41084/functions181.htm). You're also referring to `m.firstname` without defining `m` as an alias for your table name; an alias isn't really needed here as there is only one table but if you use it, use it consistently. To get the first letter of the firstname and the first five letters of the surname you could use:

```
SELECT SUBSTR(m.firstname, 1, 1) || SUBSTR(m.surname, 1, 5)
FROM marketing m
WHERE m.marketing_id = 10001;
```

or

```
SELECT CONCAT(SUBSTR(m.firstname, 1, 1), SUBSTR(m.surname, 1, 5))
FROM marketing m
WHERE m.marketing_id = 10001;
```

If you're updating a column in the same table though, rather than using a PL/SQL `into` clause, you need an update not a select:

```
UPDATE marketing
SET username = SUBSTR(firstname, 1, 1) || SUBSTR(surname, 1, 5)
WHERE marketing_id = 10001;
```

[SQL Fiddle demo](http://sqlfiddle.com/#!4/e2191f/1).
How to use SQL CONCAT/SUBSTR?
[ "", "sql", "oracle", "concatenation", "substr", "" ]
I have implemented a calendar type of application where some predefined time ranges exist in my database. My Predefined Hours Table looks something like this:

> ```
> Id          int           Unchecked
> StartTime   varchar(50)   Unchecked
> EndTime     varchar(50)   Unchecked
> ```

And the values:

> Id StartTime EndTime
>
> 1 00:00 10:29
>
> 2 10:30 12:59
>
> 3 13:00 15:59
>
> 4 16:00 23:59

How can I determine, using SQL, which range the current time falls into? Thanks in advance
```
DECLARE @t char(5)='23:10'

SELECT * FROM YourTable
WHERE @t BETWEEN StartTime AND EndTime
```
You should use a [date and time data type](https://msdn.microsoft.com/en-us/library/ms186724.aspx#DateandTimeDataTypes) instead of `varchar`, because SQL can understand the value of those types. `varchar` is just a string and SQL has no idea how to compare those. With a time data type, you can then use [BETWEEN](https://msdn.microsoft.com/en-us/library/ms187922.aspx) for your condition, something like: ``` (…) CURRENT_TIMESTAMP BETWEEN StartTime AND EndTime (…) ```
How to determine if the time of day is between a certain range of hours
[ "", "sql", "sql-server", "" ]
I am writing a procedure in which I have a SQL statement that will insert values into the `TEMP_WF_WORKFLOW` table using a `CASE WHEN` statement. The condition is: when `STATUS_ID` is `0` then `EVENT_ID=10003`, and when `STATUS_ID` is `1` then `EVENT_ID=10018`. When I try to use `CASE WHEN` for this it gives me a "missing keyword" error. I don't know if there is any other way to do this without using a `CASE WHEN` statement. I am thinking about using a `cursor` but don't know how to do this. Here is my query:

```
CREATE OR REPLACE PROCEDURE ext_self_10003_sigwf
AS
BEGIN
  -- first empty TEMP_WF_WORKFLOW table
   EXECUTE IMMEDIATE 'TRUNCATE TABLE TEMP_WF_WORKFLOW';
   -- get WF_WORKFLOW table data
   INSERT INTO temp_wf_workflow (status_id, event_id, order_number)
      SELECT wf.status_id,
             CASE
                WHEN wf.status_id = 0 THEN event_id = 10003
                WHEN wf.status_id = 1 THEN event_id = 10018
             END
                AS eventid,
             tsm.order_number
        FROM wf_workflow@fonic_retail wf
             JOIN tmp_soap_monitoring_ids tsm
                ON tsm.subscription_id = wf.subscription_id
       WHERE     tsm.order_type = 'SELF_REGISTRATION'
             AND wf.name = 'SIGNUP_MOBILE_PRE_PAID';

   COMMIT;
END ext_self_10003_sigwf;
```
> ```
> CASE EVENT_ID WHEN WF.STATUS_ID=0 THEN EVENT_ID=10003 
> WHEN WF.STATUS_ID=1 THEN EVENT_ID=10018 END AS EVENTID
> ```

You have mixed two different **syntaxes** of the **CASE** statement.

**1.simple\_case\_statement**

```
CASE [ expression ]
WHEN value_1 THEN result_1
WHEN value_2 THEN result_2
...
WHEN value_n THEN result_n
ELSE result
END
```

**2.searched\_case\_statement**

```
CASE
WHEN condition_1 THEN result_1
WHEN condition_2 THEN result_2
...
WHEN condition_n THEN result_n
ELSE result
END
```

Change your expression to -

```
CASE WHEN WF.STATUS_ID=0 THEN 10003
     WHEN WF.STATUS_ID=1 THEN 10018
END AS EVENTID
```

Follow this [link](https://docs.oracle.com/database/121/LNPLS/case_statement.htm#LNPLS01304) to see the documentation for both syntaxes.

**Update**

OP says he still gets the **missing keyword error**. This is a test case to show it is not true. The missing keyword will be fixed with the correct CASE statement.

```
SQL> CREATE OR REPLACE
  2  PROCEDURE EXT_SELF_10003_SIGWF
  3  AS
  4  BEGIN
  5    -- first empty TEMP_WF_WORKFLOW table
  6    EXECUTE IMMEDIATE 'TRUNCATE TABLE TEMP_WF_WORKFLOW';
  7    -- get WF_WORKFLOW table data
  8    INSERT
  9    INTO TEMP_WF_WORKFLOW
 10      (
 11        STATUS_ID,
 12        EVENT_ID,
 13        ORDER_NUMBER
 14      )
 15    SELECT WF.STATUS_ID,
 16      CASE
 17        WHEN WF.STATUS_ID=0
 18        THEN 10003
 19        WHEN WF.STATUS_ID=1
 20        THEN 10018
 21      END AS EVENTID,
 22      TSM.ORDER_NUMBER
 23    FROM WF_WORKFLOW@FONIC_RETAIL WF
 24    JOIN TMP_SOAP_MONITORING_IDS TSM
 25    ON TSM.SUBSCRIPTION_ID=WF.SUBSCRIPTION_ID
 26    WHERE TSM.order_type ='SELF_REGISTRATION'
 27    AND WF.NAME ='SIGNUP_MOBILE_PRE_PAID';
 28    COMMIT;
 29  END EXT_SELF_10003_SIGWF;
 30  /

Warning: Procedure created with compilation errors.

SQL> sho err
Errors for PROCEDURE EXT_SELF_10003_SIGWF:

LINE/COL ERROR
-------- --------------------------------------------------
7/3      PL/SQL: SQL Statement ignored
23/8     PL/SQL: ORA-00942: table or view does not exist
SQL>
```
Put only the value in the `THEN` branch:

```
CASE WHEN WF.STATUS_ID=0 THEN 10003
     WHEN WF.STATUS_ID=1 THEN 10018 END
```
missing keyword error in oracle CASE WHEN sql statement
[ "", "sql", "oracle", "insert", "procedure", "case-when", "" ]
How do I alter a SQL varchar column to a decimal column when there are nulls in the data? I thought:

```
ALTER TABLE table1
ALTER COLUMN data decimal(19,6)
```

But I just get an error, I assume because of the nulls:

```
Error converting data type varchar to numeric.
The statement has been terminated.
```

So I thought to remove the nulls I could just set them to zero:

```
ALTER TABLE table1
ALTER COLUMN data decimal(19,6) NOT NULL DEFAULT 0
```

but I don't seem to have the correct syntax. What's the best way to convert this column?

*edit* People have suggested it's not the nulls that are causing me the problem, but non-numeric data. Is there an easy way to find the non-numeric data and either disregard it, or highlight it so I can correct it?
If it were just the presence of NULLs, I would just opt for doing this before the `alter column`: ``` update table1 set data = '0' where data is null ``` That would ensure all nulls are gone and you could successfully convert. --- However, I wouldn't be *too* certain of your assumption. It seems to me that your new column is perfectly capable of handling NULL values since you haven't specified `not null` for it. What *I'd* be looking for is values that *aren't* NULL but also aren't something you could turn in to a real numeric value, such as what you get if you do: ``` insert into table1 (data) values ('paxdiablo is good-looking') ``` though some may argue that should be treated a 0, a false-y value :-) The presence of non-NULL, non-numeric data seems *far* more likely to be causing your specific issue here. --- As to how to solve that, you're going to need a `where` clause that can recognise whether a `varchar` column is a valid numeric value and, if not, change it to `'0'` or `NULL`, depending on your needs. I'm not sure if SQL Server has regex support but, if so, that'd be the first avenue I'd investigate. Alternatively, provided you understand the limitations (a), you could use [`isnumeric()`](https://msdn.microsoft.com/en-us/library/ms186272.aspx) with something like: ``` update table1 set data = NULL where isnumeric(data) = 0 ``` This will force all non-numeric values to NULL before you try to convert the column type. And, please, for the love of whatever deities you believe in, back up your data before attempting any of these operations. If none of those above solutions work, it may be worth adding a brand new column and populating bit by bit. In other words set it to NULL to start with, and then find a *series* of updates that will copy `data` to this new column. Once you're happy that all data has been copied, you should then have a series of updates you can run in a single transaction if you want to do the conversion in one fell swoop. 
Drop the new column and then do the whole lot in a single operation: * create new column; * perform all updates to copy data; * drop old column; * rename new column to old name. --- (a) From the linked page: > ISNUMERIC returns 1 for some characters that are not numbers, such as plus (+), minus (-), and valid currency symbols such as the dollar sign ($).
Possible solution: ``` CREATE TABLE test ( data VARCHAR(100) ) GO INSERT INTO test VALUES ('19.01'); INSERT INTO test VALUES ('23.41'); ALTER TABLE test ADD data_new decimal(19,6) GO UPDATE test SET data_new = CAST(data AS decimal(19,6)); ALTER TABLE test DROP COLUMN data GO EXEC sp_RENAME 'test.data_new' , 'data', 'COLUMN' ```
Alter column from varchar to decimal when nulls exist
[ "", "sql", "sql-server", "" ]
So this might be more of a theoretical question about how joins in MySQL work, but I'd love some guidance. Let's say I have three tables, table a, b and c, where table a and b are fact tables and table c is table b's dimension table. If I want to left join table b to table a (I want to keep all of the contents of table a, but also want matching contents in table b), can I still inner join table c to table b even though table b is left joined? Or do I have to left join table c to table b? Or would both of these for all intents and purposes produce the same result?

```
select a.column, c.name
from tablea a
left join tableb b on a.id = b.id
inner join (?) tablec c on b.name_id = c.name
```
MySQL supports syntax that allows you to achieve what you want: ``` select a.column, c.name from tablea a left join tableb b inner join tablec c on b.name_id = c.name on a.id = b.id ; ``` In this case tables `tableb` and `tablec` are joined first, then the result of their join is outer-joined to `tablea`. The final result set, however, would be same as with [@simon at rcl's solution](https://stackoverflow.com/a/28590798/297408).
In this case if there is no tablec entry for a tableb, then the whole join will fail and the tablea row will not be included. To include the tablea entry you would need to make the join to tablec a left join:

```
select a.column, c.name
from tablea a
left join tableb b on a.id = b.id
left join tablec c on b.name_id = c.name
```

That will get you every tablea row even when there is no matching tableb row, and also every tablea and tableb when there is no tablec row.
SQL inner joining to left joined table
[ "", "mysql", "sql", "left-join", "" ]
I'm trying to write 1 query to display a list of forum threads ordered by the latest post date. This is quite a simple process getting the data from the database however due to the `joins` and `group by`, the `order by` I wish to use isn't possible. # Table Structures *ForumRooms* ``` | id | title | |----------------| | 1 | Room 1 | | 2 | Room 2 | ``` *ForumThreads* ``` | id | title | forum_room_id | |------------------|-----------------| | 1 | Thread 1 | 1 | | 2 | Thread 2 | 2 | | 3 | Thread 3 | 1 | ``` *ForumPosts* ``` | id | content | forum_thread_id | post_time | user_id | |------------------|------------------|-------------|-----------| | 1 | A post 1 | 1 | 15/02/2015 | 1 | | 2 | A post 2 | 2 | 16/02/2015 | 2 | | 3 | A post 3 | 1 | 17/02/2015 | 1 | | 4 | A post 4 | 1 | 18/02/2015 | 2 | | 5 | A post 5 | 2 | 19/02/2015 | 1 | ``` # Desired Output This is the output I'd ideally have, the threads are sorted by the most recent thread post. ``` | Thread Title | Last Post User | Last Post Time | Creator User | |---------------|------------------|-----------------|----------------| | Thread 2 | 1 | 19/02/2015 | 2 | | Thread 1 | 2 | 18/02/2015 | 1 | ``` # The Problem At the moment I can get everything in 1 query however there is an issue when trying to `ORDER BY ForumPosts.id`, I think this is due to the `GROUP BY` combined with the joins. This means I have all of the required data but it's not in the order of the latest forum post. 
**Current incorrect query** ``` SELECT ForumThreads.title, SUBSTRING_INDEX(GROUP_CONCAT(ForumPosts.user_id ORDER BY ForumPosts.id ASC), ',', 1), MAX(ForumPosts.`post_time`), SUBSTRING_INDEX(GROUP_CONCAT(ForumPosts.user_id ORDER BY ForumPosts.id DESC), ',', 1) FROM forumThreads INNER JOIN `ForumRooms` ON ForumThreads.forum_room_id = ForumRooms.id INNER JOIN `ForumPosts` ON ForumThreads.id = ForumPosts.`forum_thread_id` WHERE ForumRooms.id = 1 GROUP BY ForumThreads.id ORDER BY ForumPosts.id DESC ``` How can I write a query that will achieve the above desired output? --- **Update 1** I've tried implementing the sub query approach as suggested by *Jean-François Savard* but the `ORDER BY a.id` is ignored and it still orders by `ForumTopics.id`
Only a small change to the original query is needed - either add `MAX(ForumPosts.id) AS max_post_id` to your SELECT and then order by that: ``` SELECT ForumThreads.title, SUBSTRING_INDEX(GROUP_CONCAT(ForumPosts.user_id ORDER BY ForumPosts.id ASC), ',', 1), MAX(ForumPosts.`post_time`), SUBSTRING_INDEX(GROUP_CONCAT(ForumPosts.user_id ORDER BY ForumPosts.id DESC), ',', 1), MAX(ForumPosts.id) AS max_post_id FROM forumThreads INNER JOIN `ForumRooms` ON ForumThreads.forum_room_id = ForumRooms.id INNER JOIN `ForumPosts` ON ForumThreads.id = ForumPosts.`forum_thread_id` WHERE ForumRooms.id = 1 GROUP BY ForumThreads.id ORDER BY max_post_id DESC ``` Or you can just alias `MAX(ForumPosts.post_time)` and order by that: ``` SELECT ForumThreads.title, SUBSTRING_INDEX(GROUP_CONCAT(ForumPosts.user_id ORDER BY ForumPosts.id ASC), ',', 1), MAX(ForumPosts.`post_time`) AS max_post_time, SUBSTRING_INDEX(GROUP_CONCAT(ForumPosts.user_id ORDER BY ForumPosts.id DESC), ',', 1) FROM forumThreads INNER JOIN `ForumRooms` ON ForumThreads.forum_room_id = ForumRooms.id INNER JOIN `ForumPosts` ON ForumThreads.id = ForumPosts.`forum_thread_id` WHERE ForumRooms.id = 1 GROUP BY ForumThreads.id ORDER BY max_post_time DESC ```
You could use a subquery ``` select * from( SELECT ForumPosts.id, group_concat(ForumThreads.title), SUBSTRING_INDEX(GROUP_CONCAT(ForumPosts.user_id ORDER BY ForumPosts.id ASC), ',', 1), MAX(ForumPosts.`post_time`), SUBSTRING_INDEX(GROUP_CONCAT(ForumPosts.user_id ORDER BY ForumPosts.id DESC), ',', 1) FROM forumThreads INNER JOIN `ForumRooms` ON ForumThreads.forum_room_id = ForumRooms.id INNER JOIN `ForumPosts` ON ForumThreads.id = ForumPosts.`forum_thread_id` WHERE ForumRooms.id = 1 GROUP BY ForumThreads.id) a ORDER BY a.id DESC ```
Order by ignored when using group by on joined tables
[ "", "mysql", "sql", "" ]
I have a table `pickles` with several hundred pickle types. Table `pickles` is structured with at least: `id, name, likes`

Starting with the query:

```
SELECT * FROM pickles WHERE likes > 100 LIMIT 10
```

Let's say I need at LEAST 10 pickles to show to the user at any one time and let's say only 4 pickles are liked over 100 times. This means although there are several hundred available pickles only 4 will be pulled from this query.

How can I manipulate this query to pull up to 10 'over 100 liked' pickles and then if there are not 10 available then to fill the rest with random pickles?
You could pick the pickles with the highest number of likes: ``` SELECT * FROM pickles ORDER BY likes DESC LIMIT 10; ``` The `100` is rather irrelevant if you always want 10 rows returned.
Use `ORDER BY` on likes to sort the result in descending order; this gives you the rows with the highest number of likes first, and then you can limit the records. This should work:

```
SELECT *
FROM pickles
ORDER BY likes DESC
LIMIT 10
```
SQL query against multiple tables
[ "", "mysql", "sql", "" ]
I have a SQL statement which runs in a Windows Forms application. I am using SQL Server 2008 to develop and test. Everything is alright here.

```
SqlCommand command1 = new SqlCommand("insert into payment_detail ( " +
             "transaction_id, enrolment_id, hkid_1, section_r, " +
             "classes, section_fee, assist_amt, additional_assist_amt, " +
             "discount_amt, coupon_amt, coupon_amt_no_number, " +
             "paid_amt, last_updated_by, last_updated_date) values " +
             "(@transaction_id, @enrolment_id, @hkid_1, @section_r, " +
             "@classes, @section_fee, (select section_fee - adjusted_section_fee from coursenew where coursecode = @courseCode and section_r = @section_r), @additional_assist_amt, " +
             "@discount_amt, @coupon_amt, @coupon_amt_no_number, " +
             "@paid_amt, @last_updated_by, GETDATE())"
             , myConnection);
```

But when moving to another workstation which is using SQL Server 2005, it prompts an error like the one below:

Subqueries are not allowed in this context. Only scalar expressions are allowed.

I am sorry, but since my computer can only have one version of SQL Server installed, I cannot test the script with SQL Server 2005. Please help and thanks. The full SQL statement is listed below:

```
insert into payment_detail ( 
transaction_id, enrolment_id, hkid_1, section_r, 
classes, section_fee, assist_amt, additional_assist_amt, 
discount_amt, coupon_amt, coupon_amt_no_number, 
paid_amt, last_updated_by, last_updated_date) values 
(@transaction_id, @enrolment_id, @hkid_1, @section_r, 
@classes, @section_fee, (select section_fee - adjusted_section_fee from coursenew where coursecode = @courseCode and section_r = @section_r), @additional_assist_amt, 
@discount_amt, @coupon_amt, @coupon_amt_no_number, 
@paid_amt, @last_updated_by, GETDATE())
```
You can turn it into a query that inserts from a select instead of using a subquery:

```
insert into payment_detail ( 
    transaction_id, enrolment_id, hkid_1, section_r, 
    classes, section_fee, assist_amt, additional_assist_amt, 
    discount_amt, coupon_amt, coupon_amt_no_number, 
    paid_amt, last_updated_by, last_updated_date
) 
select 
    @transaction_id, @enrolment_id, @hkid_1, @section_r, 
    @classes, @section_fee, section_fee - adjusted_section_fee, 
    @additional_assist_amt, 
    @discount_amt, @coupon_amt, @coupon_amt_no_number, 
    @paid_amt, @last_updated_by, GETDATE()
from coursenew 
where coursecode = @courseCode and section_r = @section_r
```
The easiest way around this is to put the result of your select statement into a local variable before running your insert, and then use that variable in place of the sub-query. <https://msdn.microsoft.com/en-us/library/ms188927.aspx> Edit: specific example of selecting into a local variable. <https://msdn.microsoft.com/en-us/library/ms187330.aspx>
Subqueries are not allowed in this context -- SQL statement from 2008 to 2005
[ "", "sql", "sql-server", "sql-server-2008", "sql-server-2005", "" ]
This is the code I have ``` SET SERVEROUTPUT ON; DECLARE ACOUNTER INTEGER; IS_PRIME INTEGER; BEGIN IS_PRIME := 1; FOR NUM IN 1000..2000 LOOP FOR D IN 2..NUM-1 LOOP IF MOD(NUM,D) = 0 THEN IS_PRIME := 0; END IF; END LOOP; END LOOP; IF IS_PRIME = 1 THEN ACOUNTER := ACOUNTER +1; DBMS_OUTPUT.PUT_LINE('THE # OF PRIME NUMBERS BETWEEN 1000-2000 ARE: ' || ACOUNTER); END; ``` I get an error: > Error starting at line : 2 in command - Error report - ORA-06550: line > 18, column 4: PLS-00103: Encountered the symbol ";" when expecting one > of the following: > > if > 06550. 00000 - "line %s, column %s:\n%s" > \*Cause: Usually a PL/SQL compilation error. > \*Action: > END;
You don't have an `END IF`; given the line number the error is reported on, you have an `END;` after what you've shown, so the last few lines are:

```
...
IF IS_PRIME = 1 THEN
ACOUNTER := ACOUNTER +1;
DBMS_OUTPUT.PUT_LINE('THE # OF PRIME NUMBERS BETWEEN 1000-2000 ARE: ' || ACOUNTER);
END;
/
```

When the compiler sees that `END;` it's expecting the preceding `IF` to be closed, so it throws an exception when it sees the `;`. Just add an `END IF` after the prime check:

```
...
IF IS_PRIME = 1 THEN
ACOUNTER := ACOUNTER +1;
END IF;
DBMS_OUTPUT.PUT_LINE('THE # OF PRIME NUMBERS BETWEEN 1000-2000 ARE: ' || ACOUNTER);
END;
/
```

Consistent indentation makes this sort of thing more obvious and easier to track down. You're also setting and checking your flag in the wrong place - both should be inside the first loop - and you have to initialise your counter before you start, as it will default to null:

```
DECLARE
ACOUNTER INTEGER := 0;
IS_PRIME INTEGER;
BEGIN
FOR NUM IN 1000..2000
LOOP
   IS_PRIME := 1;
   FOR D IN 2..NUM-1
   LOOP
      IF MOD(NUM,D) = 0 THEN
         IS_PRIME := 0;
      END IF;
   END LOOP;
   IF IS_PRIME = 1 THEN
      ACOUNTER := ACOUNTER +1;
   END IF;
END LOOP;
DBMS_OUTPUT.PUT_LINE('THE # OF PRIME NUMBERS BETWEEN 1000-2000 ARE: ' || ACOUNTER);
END;
/

anonymous block completed
THE # OF PRIME NUMBERS BETWEEN 1000-2000 ARE: 135
```

See @ruudvan's comments about your algorithm too though; that version gets the same answer 135, but more efficiently (in a tenth of the time on my system; about 0.06 seconds versus 0.60 with this simpler approach).
There is no `END IF` for this IF - `IF IS_PRIME = 1 THEN` There is no `END;` for the complete `BEGIN...END` block. The error tells you right there what action you have to take - > Action: END; which usually means that you have to put the END keyword somewhere. Some suggestions about the algorithm to find a prime number - 1. When checking for a prime number, you don't have to loop until num-1. Looping until square-root(num) will work fine. [Proof](http://en.wikipedia.org/wiki/Prime_number#Testing_primality_and_integer_factorization) 2. When you find a number that is not prime, exit the loop. No need to check for any further divisors for that number. EDIT: Mistakes in your logic which will give you incorrect results - 1. Inside the first loop, re-initialize the variable IS\_PRIME to 1. 2. The counter increment should happen inside the first loop. This is the correct program - ``` SET SERVEROUTPUT ON; DECLARE ACOUNTER INTEGER; IS_PRIME INTEGER; BEGIN ACOUNTER := 0; IS_PRIME := 1; FOR NUM IN 1000 .. 2000 LOOP IS_PRIME := 1; FOR D IN 2 .. SQRT (num) LOOP IF MOD (NUM, D) = 0 THEN IS_PRIME := 0; EXIT; END IF; END LOOP; IF IS_PRIME = 1 THEN ACOUNTER := ACOUNTER + 1; END IF; END LOOP; DBMS_OUTPUT.PUT_LINE ( 'THE # OF PRIME NUMBERS BETWEEN 1000-2000 ARE: ' || ACOUNTER); END; / ```
How to get prime numbers between 1000-2000 in Oracle SQL
[ "", "sql", "oracle", "plsql", "" ]
I have been trying for some time now to create a query that checks if a record exists in a table; if it does, do nothing, else insert the record. I have seen multiple examples for this task. From these examples I have created a query.

**Query:**

```
CREATE PROCEDURE Answer ()
IF EXISTS (SELECT * FROM answer 
          WHERE a = 'a'
          && b = 'b'
          && e = 'e '
          && timestamp = '1423828119' )

THEN
BEGIN
SELECT 'This record already exists!'
END
IF
ELSE
BEGIN
SELECT 'Record Added'
INSERT INTO answer(a, b, c, d, e, timestamp)
VALUES ('a','b','c','d','e','1423828119')
END
```

**Error :** `syntax to use near ' 'IF ELSE BEGIN SELECT 'Record Added' INSERT INTO answer(a, b,'`

Right now I am getting an error on the IF condition in MySQL. Please just guide me in the right direction.
Use `EXISTS` to check if a query returns anything:

```
if exists(
    select *
    from answer
    where a = 'a' and b = 'b' and e = 'e' and timestamp = '1423828119'
) then
    select 'This record already exists!';
else
    select 'Record Added';
    insert into answer(a, b, c, d, e, timestamp)
    values ('a','b','c','d','e','1423828119');
end if;
```
In most DBMSes there's a `MERGE` statement, but MySQL doesn't support it. The following query doesn't need a Stored Procedure:

```
INSERT INTO answer(a, b, c, d, e, timestamp)
SELECT 'a','b','c','d','e','1423828119'
FROM DUAL
WHERE NOT EXISTS (
    SELECT *
    FROM answer
    WHERE a = 'a'
      AND b = 'b'
      AND e = 'e'
      AND timestamp = '1423828119'
);
```
IF ELSE condition in SQL TO INSERT records
[ "", "mysql", "sql", "" ]
I have the following table "installation\_tracking" in my database. ![Intallation Tracking Table](https://i.stack.imgur.com/h9j1K.png) Now what I want is : 1. Total count of mac\_id in every case 2. Total count of distinct mac\_id every case 3. Total count of mac\_id if uninstalled\_date is not '0000-00-00 00:00:00' 4. Total count of distinct mac\_id if uninstalled\_date is not '0000-00-00 00:00:00' This whole should be group by rs\_id I tried this out by the following query but it did not work. ``` SELECT count(it.mac_id) AS total_installations, count(distinct it.mac_id) AS unique_installations, CASE WHEN it.uninstalled_date <> '0000-00-00 00:00:00' THEN count(it.mac_id) END as total_uninstallations, CASE WHEN it.uninstalled_date <> '0000-00-00 00:00:00' THEN count(distinct it.mac_id) END as total_uninstallations FROM installation_tracking it GROUP BY it.rs_id ```
Try this: ``` SELECT COUNT(it.mac_id) AS total_installations, COUNT(DISTINCT it.mac_id) AS unique_installations, COUNT(CASE WHEN it.uninstalled_date <> '0000-00-00 00:00:00' THEN it.mac_id ELSE NULL END) as total_uninstallations, COUNT(DISTINCT CASE WHEN it.uninstalled_date <> '0000-00-00 00:00:00' THEN it.mac_id ELSE NULL END) as unique_uninstallations FROM installation_tracking it GROUP BY it.rs_id ``` By returning `NULL` when the value does not match the criterion, you prevent the irrelevant values from being counted. As funkwurm stated, the `ELSE NULL` parts are optional, as the `CASE` clause will return `NULL` by default anyway. I personally find them more readable though.
The outcome of `COUNT()` is not influenced by `CASE WHEN`; you'd have to put that in the `WHERE`, but that influences all your results. There's a little trick to use if you wanna conditionally count without a `WHERE`. You actually do a `SUM()` and the input for that will be 1 or 0 depending on the outcome of the condition:

```
SELECT 
count(it.mac_id) AS total_installations, 
count(distinct it.mac_id) AS unique_installations,
SUM(CASE WHEN it.uninstalled_date <> '0000-00-00 00:00:00' THEN 1 ELSE 0 END) AS total_uninstallations,
SUM(CASE WHEN it.uninstalled_date <> '0000-00-00 00:00:00' THEN 1 ELSE 0 END) AS unique_uninstallations
FROM installation_tracking it 
GROUP BY it.rs_id
```

Of course like this the last one doesn't do `DISTINCT`. I don't have a solution for that off the top of my head. Might have to be its own query, but hopefully this helps.
CASE Clause Is Not Working In SQL Query
[ "", "mysql", "sql", "" ]
I am using this code

```
SELECT DATENAME(month, GETDATE()) AS 'Month Name'
```

to get the month name. But how can I limit the month name to only 3 characters? For example, `Feb` for this month.
Assuming you're using SQL Server, you can use the `SUBSTRING` function, i.e. ``` SELECT SUBSTRING(DATENAME(month, GETDATE()), 1, 3) AS 'Month Name' ```
Assuming SQL server: ``` Select FORMAT(getdate(),'MMM') as shortMonthName ```
How to return month short form from date in SQL Server?
[ "", "sql", "sql-server", "date", "select", "monthcalendar", "" ]
I have two tables, a parent table and a child-table. The child table is a vertical designed table (meaning it stores and `Id`, `ParentId`, `Property` and `PropertyValue`). Naturally the `PropertyValue` can hold all types of data. I'm trying to filter this set but I'm struggling with faulty dates and empty fields. I'm unable to create functions due to read-only access so I have to do everything in the actual query. I tried using a subquery but I'm experiencing I'm not getting the results from the subquery to work with in the outer query. So far I've got this: ``` DECLARE @Year Int SET @Year = 2015 SELECT COUNT(Parent.ID), YEAR(PropertyValue), MONTH(PropertyValue) FROM Parent INNER JOIN Child ON Parent.ID = Child.ParentID WHERE Parent.ID IN (SELECT ParentID FROM Child WHERE Child.Property = 'MyDateField' AND ISDATE(Child.PropertyValue) = 1) AND Child.Property = 'MyDateField' AND YEAR(Child.PropertyValue) = @Year GROUP BY YEAR(Child.PropertyValue), MONTH(Child.PropertyValue) ``` Any suggestions on how to cut out the faulty date rows and proceed with the desired dataset?
Replace Child.PropertyValue with the following expression: ``` (CASE WHEN ISDATE(Child.PropertyValue) = 1 THEN CAST(Child.PropertyValue AS datetime) ELSE NULL END) ``` EDIT: Here you have the query: ``` SELECT COUNT(Parent.ID), YEAR(CASE WHEN ISDATE(Child.PropertyValue) = 1 THEN CAST(Child.PropertyValue AS datetime) ELSE NULL END), MONTH(CASE WHEN ISDATE(Child.PropertyValue) = 1 THEN CAST(Child.PropertyValue AS datetime) ELSE NULL END) FROM Parent INNER JOIN Child ON Parent.ID = Child.ParentID WHERE Parent.ID IN ( SELECT ParentID FROM Child WHERE Child.Property = 'MyDateField' AND ISDATE(CASE WHEN ISDATE(Child.PropertyValue) = 1 THEN CAST(Child.PropertyValue AS datetime) ELSE NULL END) = 1 ) AND Child.Property = 'MyDateField' AND YEAR(CASE WHEN ISDATE(Child.PropertyValue) = 1 THEN CAST(Child.PropertyValue AS datetime) ELSE NULL END) = @Year GROUP BY YEAR(CASE WHEN ISDATE(Child.PropertyValue) = 1 THEN CAST(Child.PropertyValue AS datetime) ELSE NULL END), MONTH(CASE WHEN ISDATE(Child.PropertyValue) = 1 THEN CAST(Child.PropertyValue AS datetime) ELSE NULL END) ```
Try this instead. Everything else seems redundant. You don't need to check the dates in the SELECT- or GROUP BY part. Your IN statement is already included in the WHERE clause. ``` SELECT COUNT(Parent.ID), YEAR(Child.PropertyValue), MONTH(Child.PropertyValue) FROM Parent INNER JOIN Child ON Parent.ID = Child.ParentID WHERE Child.Property = 'MyDateField' AND CASE WHEN ISDATE(Child.PropertyValue) = 1 THEN YEAR(Child.PropertyValue) END = @Year GROUP BY YEAR(Child.PropertyValue), MONTH(Child.PropertyValue) ```
Struggling with faulty date fields in SQL Server
[ "", "sql", "sql-server", "database", "t-sql", "sql-server-2005", "" ]
I have this query that needs data for yesterday. What I have below returns results for the last 24 hours, which is different from yesterday 00:00 - 23:59. Here is what I have, but it doesn't solve the problem:

```
Select * from message where now() - arrival_timestamp <= interval '24 hour'
```
you can use the [DATE\_PART](http://www.sqlines.com/postgresql/how-to/datediff) function ``` select * from message where DATE_PART('day', now() - arrival_timestamp) <= 1 ```
You could cast the `timestamp` to `date` with the syntax `expression::type` (more info on [The Type Casts section](http://www.postgresql.org/docs/current/interactive/sql-expressions.html#SQL-SYNTAX-TYPE-CASTS) of The PostgreSQL Documentation). Sufficient tools for making the comparison between dates can be found from the section [9.9. Date/Time Functions and Operators](http://www.postgresql.org/docs/current/interactive/functions-datetime.html): ``` SELECT * FROM message WHERE arrival_timestamp::date = current_date - 1; ``` If you have an index on `arrival_timestamp` the cast to date would render the index unusable in the query. In that case use other comparison operators: ``` SELECT * FROM message WHERE arrival_timestamp >= current_date - 1 AND arrival_timestamp < current_date; ```
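The half-open-interval comparison (`>=` midnight yesterday, `<` midnight today) can be exercised quickly. Below is a sketch using Python's built-in `sqlite3` with ISO string timestamps; the table and sample values are invented for illustration:

```python
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE message (id INTEGER, arrival_timestamp TEXT)")

now = datetime(2015, 3, 10, 14, 30)               # pretend "now"
samples = [
    (1, now - timedelta(hours=2)),                # today (inside last 24 h)
    (2, now - timedelta(days=1, hours=2)),        # yesterday
    (3, now - timedelta(days=2)),                 # two days ago
]
conn.executemany("INSERT INTO message VALUES (?, ?)",
                 [(i, ts.isoformat(sep=" ")) for i, ts in samples])

yesterday = (now.date() - timedelta(days=1)).isoformat()
today = now.date().isoformat()

# Half-open interval: >= yesterday 00:00 AND < today 00:00.
got = conn.execute(
    "SELECT id FROM message "
    "WHERE arrival_timestamp >= ? AND arrival_timestamp < ?",
    (yesterday, today)).fetchall()
print(got)   # -> [(2,)]
```

Only the row from yesterday matches; the "last 24 hours" row from today is excluded, which is exactly the difference the question is about.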
Returning results for the whole of yesterday and not the last 24 hours
[ "", "sql", "postgresql", "" ]
Say I have a table like this:

![enter image description here](https://i.stack.imgur.com/xzQQH.png)

And I want to create a select which combines every non-null row with every other value, such that I end up with:

![enter image description here](https://i.stack.imgur.com/XHv7n.png)

etc., all the way up to 3 - 3 - 3. Can this be done in one select statement?
You can do it with two `CROSS JOIN`'s: ``` DECLARE @tb AS TABLE ( column1 INT ,column2 INT ,column3 INT ); INSERT INTO @tb VALUES (1, NULL, NULL); INSERT INTO @tb VALUES (2, NULL, NULL); INSERT INTO @tb VALUES (3, NULL, NULL); INSERT INTO @tb VALUES (NULL, 1, NULL); INSERT INTO @tb VALUES (NULL, 2, NULL); INSERT INTO @tb VALUES (NULL, 3, NULL); INSERT INTO @tb VALUES (NULL, NULL, 1); INSERT INTO @tb VALUES (NULL, NULL, 2); INSERT INTO @tb VALUES (NULL, NULL, 3); SELECT tb1.column1, tb2.column2, tb3.column3 FROM @tb tb1 CROSS JOIN @tb AS tb2 CROSS JOIN @tb AS tb3 WHERE tb1.column1 IS NOT NULL AND tb2.column2 IS NOT NULL AND tb3.column3 IS NOT NULL ORDER BY tb1.column1, tb2.column2, tb3.column3; ```
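The same double-`CROSS JOIN` construction can be verified quickly in SQLite (same made-up nine rows: three non-null values per column), confirming it yields all 3 × 3 × 3 combinations:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (column1 INT, column2 INT, column3 INT);
    INSERT INTO t VALUES
        (1,NULL,NULL),(2,NULL,NULL),(3,NULL,NULL),
        (NULL,1,NULL),(NULL,2,NULL),(NULL,3,NULL),
        (NULL,NULL,1),(NULL,NULL,2),(NULL,NULL,3);
""")
combos = conn.execute("""
    SELECT t1.column1, t2.column2, t3.column3
    FROM t t1 CROSS JOIN t t2 CROSS JOIN t t3
    WHERE t1.column1 IS NOT NULL
      AND t2.column2 IS NOT NULL
      AND t3.column3 IS NOT NULL
    ORDER BY 1, 2, 3
""").fetchall()
print(len(combos), combos[0], combos[-1])   # -> 27 (1, 1, 1) (3, 3, 3)
```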
Since you want to combine values that are now on different rows, you first need to separate them into subqueries (the `with` statements); then you can cross join these (cartesian product) to find all the different combinations.

```
with c1 as (select column1 from table where column1 is not null),
c2 as (select column2 from table where column2 is not null),
c3 as (select column3 from table where column3 is not null)
select * from c1, c2, c3 -- no join condition returns all possible combinations
```
Combining rows in SQL Server against all other rows
[ "", "sql", "sql-server", "" ]
Here is my stored procedure:

```
ALTER PROCEDURE Delete
    @ID nvarchar(64),
    @value int = 0 output
AS
BEGIN
    IF(EXISTS(SELECT * FROM A where Ap = @ID))
    BEGIN
       set @value = 1
    END
    ELSE IF(EXISTS(SELECT * FROM B where Bp = @ID))
    BEGIN
       set @value = 2
    END
    ELSE
    BEGIN
       select * from Table_D
    END
END

RETURN @value
```

The problem is that when I execute it, it does not return any value.
There are multiple ways of returning status information from a stored procedure to an application. Each has its pros and cons; no single technique can definitely be said to be the right one in all circumstances. Even so, I'll start off with: ## TL;DR: recommendation Use `RAISERROR` if your stored procedure runs into trouble and cannot return the data it normally returns. Use `OUTPUT` parameters for information the client isn't free to ignore, but which isn't logically part of your result. Use the return value if you have an informational status code that the client is free to ignore. Use additional result sets only if you know what you're doing. ## RAISERROR If your stored procedure encounters an error and can't return any data, you can use `RAISERROR` to terminate execution and cause an exception to be raised on the client side. ``` CREATE PROCEDURE [Delete] @ID nvarchar(64) AS BEGIN IF(EXISTS(SELECT * FROM A where Ap = @ID)) BEGIN RAISERROR('Wrong. Try again.', 11, 1); RETURN; END ELSE IF(EXISTS(SELECT * FROM B where Bp = @ID)) BEGIN RAISERROR('Wrong in a different way. Try again.', 11, 2); RETURN; END ELSE BEGIN select * from Table_D END END ``` The second parameter (severity) must be set to at least 11 to make the error propagate as an exception, otherwise it's just an informational message. Those can be captured too, but that's out of the scope of this answer. The third parameter (state) can be whatever you like and could be used to pass the code of the error, if you need to localize it, for example. User-generated message always have SQL error code 50000, so that can't be used to distinguish different errors, and parsing the message is brittle. The C# code to process the result: ``` try { using (var reader = command.ExecuteReader()) { while (reader.Read()) { ... 
} } } catch (SqlException e) { Console.WriteLine( "Database error executing [Delete] (code {0}): {1}", e.State, e.Message ); } ``` This is a natural fit for errors because the code to actually process the data stays what it is, and you can handle the exception at the right location (rather than propagating a status code everywhere). But this method is not appropriate if the stored procedure is *expected* to return a status that is informational and not an error, as you would be catching exceptions all the time even though nothing's wrong. ## Output parameter A stored procedure can set parameter values as well as receive them, by declaring them `OUTPUT`: ``` CREATE PROCEDURE [Delete] @ID nvarchar(64), @StatusCode INT OUTPUT AS BEGIN IF(EXISTS(SELECT * FROM A where Ap = @ID)) BEGIN SET @StatusCode = 1; END ELSE IF(EXISTS(SELECT * FROM B where Bp = @ID)) BEGIN SET @StatusCode = 2; END ELSE BEGIN SET @StatusCode = 0; select * from Table_D END END ``` From C#, this is captured in a parameter marked as an output parameter: ``` SqlParameter statusCodeParameter = command.Parameters.Add( new SqlParameter { ParameterName = "@StatusCode", SqlDbType = SqlDbType.Int, Direction = ParameterDirection.Output } ); using (var reader = command.ExecuteReader()) { int statusCode = (int) statusCodeParameter.Value; if (statusCode != 0) { // show alert return; } while (reader.Read()) { ... } } ``` The benefits here are that the client cannot forget to declare the parameter (it must be supplied), you're not restricted to a single `INT`, and you can use the value of the parameter to decide what you want to do with the resul set. Returning structured data is cumbersome this way (lots of `OUTPUT` parameters), but you could capture this in a single `XML` parameter. ## Return value Every stored procedure has a return value, which is a single `INT`. If you don't explicitly set it using `RETURN`, it stays at 0. 
``` CREATE PROCEDURE [Delete] @ID nvarchar(64) AS BEGIN IF(EXISTS(SELECT * FROM A where Ap = @ID)) BEGIN RETURN 1 END ELSE IF(EXISTS(SELECT * FROM B where Bp = @ID)) BEGIN RETURN 2 END ELSE BEGIN select * from Table_D END END ``` From C#, the return value has to be captured in a single special parameter marked as the return value: ``` SqlParameter returnValueParameter = command.Parameters.Add( new SqlParameter { Direction = ParameterDirection.ReturnValue } ); using (var reader = command.ExecuteReader()) { // this could be empty while (reader.Read()) { ... } } int returnValue = (int) returnValueParameter.Value; ``` It's important to note that the return value will not be available until you've processed all other result sets that the stored procedure generates (if any), so if you're using it for a status code that indicates there are no rows, you must still process the empty result set first before you have the status code. You cannot return anything other than an `INT`. Frameworks/OR mappers often have no support for the return value. Finally, note that the client is not required to do anything with the return value, so you have to carefully document its intended use. ## Result set The stored procedure can simply return what it wants as the result set, just like it's returning the other data. A stored procedure is allowed to return multiple result sets, so even if your status is logically separate from the other data, you can return it as a row. 
``` CREATE PROCEDURE [Delete] @ID nvarchar(64) AS BEGIN DECLARE @StatusCode INT = 0; IF(EXISTS(SELECT * FROM A where Ap = @ID)) BEGIN SET @StatusCode = 1; END ELSE IF(EXISTS(SELECT * FROM B where Bp = @ID)) BEGIN SET @StatusCode = 2; END SELECT @StatusCode AS StatusCode; IF @StatusCode = 0 BEGIN select * from Table_D END END ``` To process this with C#, we need `SqlDataReader.NextResult`: ``` using (var reader = command.ExecuteReader()) { if (!reader.Read()) throw new MyException("Expected result from stored procedure."); statusCode = reader.GetInt32(reader.GetOrdinal("StatusCode")); if (statusCode != 0) { // show alert return; } reader.NextResult(); while (reader.Read()) { // use the actual result set } } ``` The main drawback here is that it's not intuitive for a stored procedure to return a variable number of result sets, and very few data frameworks/OR mappers support it, so you'll nearly always end up writing manual code like this. Returning multiple result sets is not really a good fit for returning a single piece of data like a status code, but it might be an alternative to returning structured data in an `XML` output parameter (especially if there's lots).
The return seems to be out of scope of the procedure. Try: ``` ALTER PROCEDURE Delete @ID nvarchar(64), @value int=0 output AS BEGIN IF(EXISTS(SELECT * FROM A where Ap=@ID)) BEGIN set @value=1 END ELSE IF(EXISTS(SELECT * FROM B where Bp=@ID)) BEGIN set @value=2 END ELSE BEGIN set @value=5 end --end if RETURN @value end --end procedure ``` This is where using tabbing properly makes the code a lot more readable, and these problems more obvious
SQL Server 2008 stored procedure return value
[ "", "sql", "sql-server", "sql-server-2008", "" ]
The following query works fine in SQL Server 2012, but when I try to run it on Oracle it gives **ORA-00907: missing right parenthesis**. I cannot figure out what I did wrong here.

```
SELECT p1.ACTION_ID,
       STUFF( (SELECT ', '+EMP_DISPLAY_NAME
               FROM ( SELECT DISTINCT EMP.EMP_DISPLAY_NAME,ACTS.ACTION_ID
                      FROM HS_HR_OB_ACT_ALT_RECIPIENT ALT
                      LEFT JOIN HS_HR_EMPLOYEE EMP ON EMP.EMP_NUMBER=ALT.ACTALT_EMP_NUMBER
                      LEFT JOIN HS_HR_OB_ACTION ACTS ON ACTS.ACTION_ID=ALT.ACTION_ID) p2
               WHERE p2.ACTION_ID = p1.ACTION_ID
               ORDER BY EMP_DISPLAY_NAME
               FOR XML PATH(''), TYPE).value('.', 'varchar(max)')
            ,1,3,'') AS NAME
FROM ( SELECT DISTINCT EMP.EMP_DISPLAY_NAME,ACTS.ACTION_ID
       FROM HS_HR_OB_ACT_ALT_RECIPIENT ALT
       LEFT JOIN HS_HR_EMPLOYEE EMP ON EMP.EMP_NUMBER=ALT.ACTALT_EMP_NUMBER
       LEFT JOIN HS_HR_OB_ACTION ACTS ON ACTS.ACTION_ID=ALT.ACTION_ID) p1
GROUP BY ACTION_ID
```
From your query I deduced this minimal data structure:

```
HS_HR_OB_ACTION (action_id number)
HS_HR_EMPLOYEE (EMP_NUMBER number, emp_display_name varchar2(20))
HS_HR_OB_ACT_ALT_RECIPIENT (ACTALT_EMP_NUMBER number, action_id number)
```

For such data the following query works in Oracle 10g. For Oracle 11g, uncomment the second line and comment out the third line. Please compare the results with SQL Server.

```
SELECT p1.ACTION_ID,
-- (select LISTAGG(emp_display_name, ', ') WITHIN GROUP (ORDER BY emp_display_name)
  (SELECT wmsys.wm_concat(EMP_DISPLAY_NAME)
   FROM ( SELECT DISTINCT EMP.EMP_DISPLAY_NAME, ACTS.ACTION_ID
          FROM HS_HR_OB_ACT_ALT_RECIPIENT ALT
          LEFT JOIN HS_HR_EMPLOYEE EMP ON EMP.EMP_NUMBER = ALT.ACTALT_EMP_NUMBER
          LEFT JOIN HS_HR_OB_ACTION ACTS ON ACTS.ACTION_ID = ALT.ACTION_ID
        ) p2
   WHERE p2.ACTION_ID = p1.ACTION_ID) AS NAME
FROM ( SELECT DISTINCT EMP.EMP_DISPLAY_NAME, ACTS.ACTION_ID
       FROM HS_HR_OB_ACT_ALT_RECIPIENT ALT
       LEFT JOIN HS_HR_EMPLOYEE EMP ON EMP.EMP_NUMBER = ALT.ACTALT_EMP_NUMBER
       LEFT JOIN HS_HR_OB_ACTION ACTS ON ACTS.ACTION_ID = ALT.ACTION_ID
     ) p1
GROUP BY ACTION_ID
```

---

I think this query could be simplified further, but I didn't want to interfere too much with the original version at first.

```
SELECT distinct ACTION_ID, wmsys.wm_concat(EMP_DISPLAY_NAME) -- or listagg(...)
FROM HS_HR_OB_ACT_ALT_RECIPIENT ALT
LEFT JOIN HS_HR_EMPLOYEE EMP ON EMP.EMP_NUMBER = ALT.ACTALT_EMP_NUMBER
LEFT JOIN HS_HR_OB_ACTION ACTS ON ACTS.ACTION_ID = ALT.ACTION_ID
GROUP BY ACTION_ID
```
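For illustration, the same string-aggregation idea can be sketched in SQLite, whose `group_concat` plays the role of Oracle's `LISTAGG`/`wm_concat` (sample data invented; note that, unlike `LISTAGG ... WITHIN GROUP`, SQLite does not guarantee the concatenation order):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE emp_actions (action_id INT, emp_display_name TEXT);
    INSERT INTO emp_actions VALUES
        (1, 'Alice'), (1, 'Bob'), (2, 'Carol');
""")
# group_concat collapses each group's names into one delimited string,
# the same shape the STUFF/FOR XML PATH trick produces in SQL Server.
rows = conn.execute("""
    SELECT action_id, group_concat(emp_display_name, ', ') AS names
    FROM emp_actions
    GROUP BY action_id
    ORDER BY action_id
""").fetchall()
print(rows)
```

Each `action_id` ends up with a single comma-separated list of names.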
> SELECT ', '+EMP\_DISPLAY\_NAME It is not a valid Oracle syntax. You need to use the concatenation operator `||` For example, `SELECT EMP_DISPLAY_NAME||' ,'` Test case - ``` SQL> SELECT 'Employee ID is '||empno FROM emp; 'EMPLOYEEIDIS'||EMPNO ---------------------------------------------- Employee ID is 7369 Employee ID is 7499 Employee ID is 7521 Employee ID is 7566 Employee ID is 7654 Employee ID is 7698 Employee ID is 7782 Employee ID is 7788 Employee ID is 7839 Employee ID is 7844 Employee ID is 7876 Employee ID is 7900 Employee ID is 7902 Employee ID is 7934 14 rows selected. SQL> ```
Missing right parenthesis error in Oracle
[ "", "sql", "sql-server", "oracle", "t-sql", "plsql", "" ]
I have a problem with an Oracle DB link connection (Oracle 11g). Let us consider the following situation:

1. We are connected to database DATABASE_A as user USER1,
2. We create a new private database link to DATABASE_B, with username USER2 for the connection:

```
CREATE DATABASE LINK "CHECK_CONNECTION"
CONNECT TO USER2
IDENTIFIED BY "password1"
USING 'DATABASE_B';
```

3. The connection test fails - the password or username is incorrect:

```
SELECT * FROM DUAL@CHECK_CONNECTION

Error at line 1
ORA-01017: invalid username/password; logon denied
```

4. We change the password:

```
DROP DATABASE LINK CHECK_CONNECTION

CREATE DATABASE LINK "CHECK_CONNECTION"
CONNECT TO USER2
IDENTIFIED BY "password2"
USING 'DATABASE_B';
```

5. The connection test is successful:

```
SELECT * FROM DUAL@CHECK_CONNECTION

DUMMY
-----
X
1 row selected.
```

6. We change the password again, back to the old one:

```
DROP DATABASE LINK CHECK_CONNECTION

CREATE DATABASE LINK "CHECK_CONNECTION"
CONNECT TO USER2
IDENTIFIED BY "password1"
USING 'DATABASE_B';
```

7. The connection still works, in spite of the wrong password:

```
SELECT * FROM DUAL@CHECK_CONNECTION

DUMMY
-----
X
1 row selected.
```

8. Only creating a new DB link with a different name detects the incorrect password:

```
CREATE DATABASE LINK "CHECK_CONNECTION_2"
CONNECT TO USER2
IDENTIFIED BY "password1"
USING 'DATABASE_B';

SELECT * FROM DUAL@CHECK_CONNECTION_2

Error at line 1
ORA-01017: invalid username/password; logon denied
```

Any idea why the connection still works in spite of the wrong password?
From the manual on the command [alter session close database link](http://docs.oracle.com/database/121/SQLRF/statements_2015.htm#SQLRF00901): > When you issue a statement that uses a database link, Oracle Database > creates a session for you on the remote database using that link. The > connection remains open until you end your local session... It's useful that Oracle does not re-connect every time a database link is used. But it does seem like a minor bug to keep the connection alive even when the database link is changed. I've verified that this still happens in 12c. It shouldn't be a big deal because database links should remain fairly static. Just like the way an application should not re-connect to the database for each query, database sessions should not be changing links frequently. A lot of weird things happen over database links. Keep your remote process as simple as possible.
You need to check the plan for the calls. It is possible that for the first time, the value was fetched from the database but when it was called for the second time, it was read from the cache. Take small example: ``` SQL> set autotrace traceonly; SQL> select * from dual; Execution Plan ---------------------------------------------------------- Plan hash value: 272002086 -------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | -------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 1 | 2 | 2 (0)| 00:00:01 | | 1 | TABLE ACCESS FULL| DUAL | 1 | 2 | 2 (0)| 00:00:01 | -------------------------------------------------------------------------- Statistics ---------------------------------------------------------- 1 recursive calls 0 db block gets 3 consistent gets 2 physical reads 0 redo size 522 bytes sent via SQL*Net to client 523 bytes received via SQL*Net from client 2 SQL*Net roundtrips to/from client 0 sorts (memory) 0 sorts (disk) 1 rows processed SQL> select * from dual; Execution Plan ---------------------------------------------------------- Plan hash value: 272002086 -------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | -------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 1 | 2 | 2 (0)| 00:00:01 | | 1 | TABLE ACCESS FULL| DUAL | 1 | 2 | 2 (0)| 00:00:01 | -------------------------------------------------------------------------- Statistics ---------------------------------------------------------- 0 recursive calls 0 db block gets 3 consistent gets 0 physical reads 0 redo size 522 bytes sent via SQL*Net to client 523 bytes received via SQL*Net from client 2 SQL*Net roundtrips to/from client 0 sorts (memory) 0 sorts (disk) 1 rows processed SQL> ``` Its obvious from the trace that during the first call there were 
2 physical reads. But when the same statement was called again, there were 0 physical reads, which means the result was read from the cache. In your case, although the definition of the db link was changed, the basic SQL remained the same, and hence a cache hit took place.
Oracle 11g DB-Link connection stays open even after re-creating database link
[ "", "sql", "oracle", "oracle11g", "connection", "dblink", "" ]
I have a lovely form and a lovely table in MS access (I promise). I would like to insert into this table at the press of a button using where not exists but I am getting a not-so-friendly run-time error 3067: "Query input must contain at least one table or query." My query already does... ``` strSQL = "insert into tbl_MAP_systemTask (TaskID, SystemID) " & _ " Values (" & taskID & ", " & sysID & _ ") where not exists " & _ " (select M.TaskID, M.SystemID from tbl_MAP_systemTask as M where M.TaskID = " & taskID & _ " and M.SystemID = " & sysID & ");" Debug.Print strSQL DoCmd.RunSQL (strSQL) ``` strSQL is now ``` insert into tbl_MAP_systemTask (TaskID, SystemID) Values (1, 1) where not exists (select M.TaskID, M.SystemID from tbl_MAP_systemTask as M where M.TaskID = 1 and M.SystemID = 1); ``` Can anyone shed any light on a) what I broke? b) how to fix it?
Well, instead of using a subquery, you could use a domain function to get this going:

```
If DCount("*", "tbl_MAP_systemTask", "TaskID = " & taskID & " AND SystemID = " & sysID) = 0 Then
    strSQL = "INSERT INTO tbl_MAP_systemTask (TaskID, SystemID) " & _
             " VALUES (" & taskID & ", " & sysID & ")"

    CurrentDb.Execute strSQL
Else
    MsgBox "The Data already exists in the table, so nothing was added."
End If
```
Try this: ``` strSQL = "insert tbl_MAP_systemTask (TaskID, SystemID) " & _ " select " & taskID & ", " & sysID & _ " where not exists " & _ " (select M.TaskID, M.SystemID from tbl_MAP_systemTask as M where M.TaskID = " & taskID & _ " and M.SystemID = " & sysID & ");" ``` => ``` insert tbl_MAP_systemTask (TaskID, SystemID) select 1, 1 where not exists (select M.TaskID, M.SystemID from tbl_MAP_systemTask as M where M.TaskID = 1 and M.SystemID = 1); ``` and seems to work in my case. Seems like the `where not exists` needs a select statement, so you have to model your insert like this.
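The `insert ... select ... where not exists` shape works outside Access, too. Here is a runnable sketch in Python's built-in `sqlite3` (table name borrowed from the question, data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE map (TaskID INT, SystemID INT)")

def insert_if_absent(task_id, sys_id):
    # The row is only added when no identical (TaskID, SystemID)
    # pair is already present in the table.
    conn.execute("""
        INSERT INTO map (TaskID, SystemID)
        SELECT ?, ?
        WHERE NOT EXISTS (SELECT 1 FROM map
                          WHERE TaskID = ? AND SystemID = ?)
    """, (task_id, sys_id, task_id, sys_id))

insert_if_absent(1, 1)
insert_if_absent(1, 1)        # duplicate: silently skipped
insert_if_absent(2, 1)
total = conn.execute("SELECT COUNT(*) FROM map").fetchone()[0]
print(total)   # -> 2
```

The duplicate insert is a no-op, which is exactly the behavior the question asks for.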
Using insert into where not exists in VBA
[ "", "sql", "vba", "ms-access", "" ]
I have a question regarding how to use an IF-style condition in SQL. I have 3 tables:

* user
* mechanics
* exchanges

The fields in each of the tables are as follows:

```
user       mechanics    exchanges
------     ----------   ---------
name       name         id_user
id_user    id_mecha     id_mecha
                        message
```

I want to use a condition like the following, in which I select the name of the `user` or the `mechanic`, together with the corresponding `message`, whenever their `id` matches the one that `exchanges` stores for each of them (`user` or `mechanic`):

```
SELECT CASE
    WHEN mechanics.id_mecha = exchanges.id_mecha
        THEN mechanics.name, exchanges.message
    WHEN users.id_user = exchanges.id_user
        THEN users.name, exchanges.message
FROM users
JOIN mechanics
JOIN exchanges
```
The double `JOIN` will produce a full cross product between the mechanics and users. This is probably not what you want. You should use a single join with each table, and then combine them with `UNION`.

```
SELECT m.name, e.message
FROM mechanics AS m
JOIN exchanges AS e ON m.id_mecha = e.id_mecha
UNION
SELECT u.name, e.message
FROM users AS u
JOIN exchanges AS e ON u.id_user = e.id_user
```
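The join-per-table-plus-`UNION` approach can be sketched end to end in SQLite (sample rows invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users     (id_user INT, name TEXT);
    CREATE TABLE mechanics (id_mecha INT, name TEXT);
    CREATE TABLE exchanges (id_user INT, id_mecha INT, message TEXT);
    INSERT INTO users     VALUES (1, 'Ana');
    INSERT INTO mechanics VALUES (7, 'Bob');
    INSERT INTO exchanges VALUES (1, NULL, 'hi from user'),
                                 (NULL, 7, 'hi from mechanic');
""")
# One join per table, combined with UNION -- no cross product.
rows = conn.execute("""
    SELECT m.name, e.message
    FROM mechanics m JOIN exchanges e ON m.id_mecha = e.id_mecha
    UNION
    SELECT u.name, e.message
    FROM users u JOIN exchanges e ON u.id_user = e.id_user
    ORDER BY 1
""").fetchall()
print(rows)   # -> [('Ana', 'hi from user'), ('Bob', 'hi from mechanic')]
```

Each exchange is matched to exactly one name, user or mechanic, depending on which id it carries.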
If I understand correctly you need to `END` the [CASE](http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#operator_case) in `SELECT` and you need to have `ON` for `JOIN` like below ``` SELECT CASE WHEN mechanics.id_mecha = exchanges.id_mecha THEN mechanics.name WHEN users.id_user = exchanges.id_user THEN users.name END, exchanges.message FROM users u JOIN mechanics m ON users.id_user = exchanges.id_user JOIN exchanges e ON mechanics.id_mecha = exchanges.id_mecha ```
Using IF/CASE conditioning in SQL
[ "", "mysql", "sql", "" ]
I have a need to add a case statement into a select, but I cannot seem to get the syntax right, could anyone help? ``` SELECT uuid = pnt.ID ,extras = (CASE (SELECT pnt.TypeID as [type], pnt.Source as source) WHEN source = 7 THEN 'a' WHEN source = 1 AND [type] = 0 THEN 'b' WHEN source = 8 THEN 'c' WHEN source = 2 AND [type] = 0 THEN 'd' WHEN source = 3 AND [type] IN (5,6,7,8) THEN 'e' ELSE NULL END) FROM Mydata as pnt ``` There are multiple problems, the select within the case is incorrect, the condition source = 7 is wrong, the combining conditions with an AND is wrong and the condition using IN is incorrect. I used the answer to [Multiple Criteria In Case Statement](https://stackoverflow.com/questions/27513588/multiple-criteria-in-case-statement) as a basis for the case statement.
There are two types of case statements:

* A simple case statement that compares an expression to a set of simple expressions to return specific values.
* A searched case statement that evaluates a set of Boolean expressions to return specific values.

<https://msdn.microsoft.com/en-us/library/ms144841(v=sql.105).aspx>

In your script you're "mixing" them, which is why your script doesn't work. This could be a good solution:

```
SELECT pnt.ID
      ,CASE WHEN pnt.source = 7 THEN 'a'
            WHEN pnt.source = 1 AND pnt.TypeID = 0 THEN 'b'
            WHEN pnt.source = 8 THEN 'c'
            WHEN pnt.source = 2 AND pnt.TypeID = 0 THEN 'd'
            WHEN pnt.source = 3 AND pnt.TypeID IN (5, 6, 7, 8) THEN 'e'
            ELSE NULL
       END
FROM @Mydata AS pnt
```

Warning! If you need to populate single variables (uuid, extras), you have to be sure that your query's result will have only 1 record.
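The searched-`CASE` form can be verified with a quick SQLite sketch (sample rows invented; the column names follow the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Mydata (ID INT, TypeID INT, Source INT);
    INSERT INTO Mydata VALUES (1, 0, 7), (2, 0, 1), (3, 6, 3), (4, 9, 3);
""")
# Searched CASE: each WHEN holds a full Boolean expression,
# including compound AND / IN conditions.
rows = conn.execute("""
    SELECT ID,
           CASE WHEN Source = 7                         THEN 'a'
                WHEN Source = 1 AND TypeID = 0          THEN 'b'
                WHEN Source = 8                         THEN 'c'
                WHEN Source = 2 AND TypeID = 0          THEN 'd'
                WHEN Source = 3 AND TypeID IN (5,6,7,8) THEN 'e'
                ELSE NULL END AS extras
    FROM Mydata ORDER BY ID
""").fetchall()
print(rows)   # -> [(1, 'a'), (2, 'b'), (3, 'e'), (4, None)]
```

Row 4 falls through every branch and gets `NULL`, matching the `ELSE NULL` in the answer.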
``` SELECT @uuid = pnt.ID ,@extras = (CASE WHEN source = 7 THEN 'a' WHEN source = 1 AND [type] = 0 THEN 'b' WHEN source = 8 THEN 'c' WHEN source = 2 AND [type] =0 THEN 'd' WHEN source = 3 AND [type] IN (5,6,7,8) THEN 'e' ELSE NULL END ) FROM Mydata as pnt ```
SQL Server CASE statement with mupltiple conditionals syntax
[ "", "sql", "sql-server", "sql-server-2012", "" ]
The following script is very slow when it's run, and I have no idea how to improve its performance. Even as a view it takes quite a few minutes. If you have any ideas, please share them.

```
SELECT DISTINCT ( id )
FROM ( SELECT DISTINCT
              ct.id AS id
       FROM   [Customer].[dbo].[Contact] ct
              LEFT JOIN [Customer].[dbo].[Customer_ids] hnci ON ct.id = hnci.contact_id
       WHERE  hnci.customer_id IN (
              SELECT DISTINCT ( [Customer_ID] )
              FROM   [Transactions].[dbo].[Transaction_Header]
              WHERE  actual_transaction_date > '20120218' )
       UNION
       SELECT DISTINCT
              contact_id AS id
       FROM   [Customer].[dbo].[Restaurant_Attendance]
       WHERE  ( created > '2012-02-18 00:00:00.000'
                OR modified > '2012-02-18 00:00:00.000' )
              AND ( [Fifth_Floor_London] = 1
                    OR [Fourth_Floor_Leeds] = 1
                    OR [Second_Floor_Bristol] = 1 )
       UNION
       SELECT DISTINCT ( ct.id )
       FROM   [Customer].[dbo].[Contact] ct
              INNER JOIN [Customer].[dbo].[Wifinity_Devices] wfd ON ct.wifinity_uniqueID = wfd.[CustomerUniqueID]
                                                                    AND startconnection > '2012-02-17'
       UNION
       SELECT DISTINCT
              comdt.id AS id
       FROM   [Customer].[dbo].[Complete_dataset] comdt
              LEFT JOIN [Customer].[dbo].[Aggregate_Spend_Counts] agsc ON comdt.id = agsc.contact_id
       WHERE  agsc.contact_id IS NULL
              AND ( opt_out_Mail <> 1
                    OR opt_out_email <> 1
                    OR opt_out_SMS <> 1
                    OR opt_out_Mail IS NULL
                    OR opt_out_email IS NULL
                    OR opt_out_SMS IS NULL )
              AND ( address_1 IS NOT NULL
                    OR email IS NOT NULL
                    OR mobile IS NOT NULL )
       UNION
       SELECT DISTINCT ( contact_id ) AS id
       FROM   [Customer].[dbo].[VIP_Card_Holders]
       WHERE  VIP_Card_number IS NOT NULL
     ) AS tbl
```
Try this; a temp table should help you:

```
IF OBJECT_ID('Tempdb..#Temp1') IS NOT NULL
    DROP TABLE #Temp1

--Low performance because of "WHERE hnci.customer_id IN ( .... )" - it forces a loop join,
--and this "where" condition will apply to two tables after the left join,
--so the result will be the same as with two inner joins, but with bad performance
--SELECT DISTINCT
--        ct.id AS id
--INTO    #temp1
--FROM    [Customer].[dbo].[Contact] ct
--        LEFT JOIN [Customer].[dbo].[Customer_ids] hnci ON ct.id = hnci.contact_id
--WHERE   hnci.customer_id IN (
--        SELECT DISTINCT
--                ( [Customer_ID] )
--        FROM    [Transactions].[dbo].[Transaction_Header]
--        WHERE   actual_transaction_date > '20120218' )
--------------------------------------------------------------------------------
--this will give the same result but with better performance than the previous one
--------------------------------------------------------------------------------
SELECT DISTINCT
        ct.id AS id
INTO    #temp1
FROM    [Customer].[dbo].[Contact] ct
        JOIN [Customer].[dbo].[Customer_ids] hnci ON ct.id = hnci.contact_id
        JOIN ( SELECT DISTINCT
                        ( [Customer_ID] )
               FROM     [Transactions].[dbo].[Transaction_Header]
               WHERE    actual_transaction_date > '20120218' ) T ON hnci.customer_id = T.[Customer_ID]
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------

INSERT INTO #temp1 ( id )
SELECT DISTINCT
        contact_id AS id
FROM    [Customer].[dbo].[Restaurant_Attendance]
WHERE   ( created > '2012-02-18 00:00:00.000'
          OR modified > '2012-02-18 00:00:00.000'
        )
        AND ( [Fifth_Floor_London] = 1
              OR [Fourth_Floor_Leeds] = 1
              OR [Second_Floor_Bristol] = 1
            )

INSERT INTO #temp1 ( id )
SELECT DISTINCT
        ( ct.id )
FROM    [Customer].[dbo].[Contact] ct
        INNER JOIN [Customer].[dbo].[Wifinity_Devices] wfd ON ct.wifinity_uniqueID = wfd.[CustomerUniqueID]
                                                              AND startconnection > '2012-02-17'

INSERT INTO #temp1 ( id )
SELECT DISTINCT
        comdt.id AS id
FROM    [Customer].[dbo].[Complete_dataset] comdt
LEFT JOIN [Customer].[dbo].[Aggregate_Spend_Counts] agsc ON comdt.id = agsc.contact_id WHERE agsc.contact_id IS NULL AND ( opt_out_Mail <> 1 OR opt_out_email <> 1 OR opt_out_SMS <> 1 OR opt_out_Mail IS NULL OR opt_out_email IS NULL OR opt_out_SMS IS NULL ) AND ( address_1 IS NOT NULL OR email IS NOT NULL OR mobile IS NOT NULL ) INSERT INTO #temp1 ( id ) SELECT DISTINCT ( contact_id ) AS id FROM [Customer].[dbo].[VIP_Card_Holders] WHERE VIP_Card_number IS NOT NULL SELECT DISTINCT id FROM #temp1 AS T ```
Wow, where to start... ``` --this distinct does nothing. Union is already distinct --SELECT DISTINCT -- ( id ) --FROM ( SELECT DISTINCT [Customer_ID] as ID FROM [Transactions].[dbo].[Transaction_Header] where actual_transaction_date > '20120218' ) UNION SELECT contact_id AS id FROM [Customer].[dbo].[Restaurant_Attendance] -- not sure that you are getting the date range you want. Should these be >= -- if you want everything that occurred on the 18th or after you want >= '2012-02-18 00:00:00.000' -- if you want everything that occurred on the 19th or after you want >= '2012-02-19 00:00:00.000' -- the way you have it now, you will get everything on the 18th unless it happened exactly at midnight WHERE ( created > '2012-02-18 00:00:00.000' OR modified > '2012-02-18 00:00:00.000' ) AND ( [Fifth_Floor_London] = 1 OR [Fourth_Floor_Leeds] = 1 OR [Second_Floor_Bristol] = 1 ) -- all of this does nothing because we already have every id in the contact table from the first query -- UNION -- SELECT -- ( ct.id ) -- FROM [Customer].[dbo].[Contact] ct -- INNER JOIN [Customer].[dbo].[Wifinity_Devices] wfd ON ct.wifinity_uniqueID = wfd.[CustomerUniqueID] -- AND startconnection > '2012-02-17' UNION -- cleaned this up with isnull function and coalesce SELECT comdt.id AS id FROM [Customer].[dbo].[Complete_dataset] comdt LEFT JOIN [Customer].[dbo].[Aggregate_Spend_Counts] agsc ON comdt.id = agsc.contact_id WHERE agsc.contact_id IS NULL AND ( isnull(opt_out_Mail,0) <> 1 OR isnull(opt_out_email,0) <> 1 OR isnull(opt_out_SMS,0) <> 1 ) AND coalesce(address_1 , email, mobile) IS NOT NULL UNION SELECT ( contact_id ) AS id FROM [Customer].[dbo].[VIP_Card_Holders] WHERE VIP_Card_number IS NOT NULL -- ) AS tbl ```
How to improve sql script performance
[ "", "sql", "sql-server-2008", "t-sql", "" ]
```
SELECT service.*
FROM star_service
INNER JOIN service ON service.code = star_service.service
UNION
SELECT service.*
FROM service
```

How can I modify the above query so that the results from the first SELECT are shown first, followed by the results of the second SELECT in the union?
Add an additional column to the result set, then use that column for ordering. Like this:

```
SELECT service.*, 1 as OBCol
FROM star_service
INNER JOIN service ON service.code = star_service.service
UNION
SELECT service.*, 2 as OBCol
FROM service
ORDER BY OBCol
```
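A minimal SQLite sketch of the extra-ordering-column trick (sample data invented; a secondary sort key is added only to keep the output deterministic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE service      (code INT, name TEXT);
    CREATE TABLE star_service (service INT);
    INSERT INTO service VALUES (1, 'plain'), (2, 'starred');
    INSERT INTO star_service VALUES (2);
""")
# The literal column ob tags which half of the union a row came from,
# so ORDER BY ob puts the starred matches first.
rows = conn.execute("""
    SELECT s.name, 1 AS ob
    FROM star_service ss JOIN service s ON s.code = ss.service
    UNION
    SELECT name, 2 AS ob FROM service
    ORDER BY ob, name
""").fetchall()
print(rows)   # -> [('starred', 1), ('plain', 2), ('starred', 2)]
```

The row from the first SELECT sorts ahead of everything from the second, which is the ordering the question asks for.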
I would skip the `union` altogether. If you want everything from `service` with the ones in `star_service` first, then just use `left join` and `order by`: ``` select s.* from service s left join star_service ss on s.code = ss.service order by (ss.service is not null) desc; ``` EDIT: If there are duplicates in `star_service`, then you are better off using `exists`: ``` select s.*, (exists (select 1 from start_service ss where s.code = ss.service) ) as in_ss from service s order by (is_ss is not null) desc; ``` These versions (with the proper indexes) should perform much, much better than your original version or anything with a `union`.
MySQL Union - Select data from first table in union first
[ "", "mysql", "sql", "" ]
What `grant` option/trick do I need to give to the current user ("*userA*") to allow him to change the owner of an object belonging to another user ("*userC*")?

More precisely, the *contact* table is owned by *userC*, and when I perform the following query to change the owner to *userB*, connected as *userA*:

```
ALTER TABLE contact OWNER TO userB;
```

I get this error:

> ERROR: must be owner of relation contact

But *userA* has all the rights that should normally be needed to do that (the "*create on schema*" grant alone should be enough):

```
grant select,insert,update,delete on all tables in schema public to userA;
grant select,usage,update on all sequences in schema public to userA;
grant execute on all functions in schema public to userA;
grant references, trigger on all tables in schema public to userA;
grant create on schema public to userA;
grant usage on schema public to userA;
```

---

Command line output:

```
root@server:~# psql -U userA myDatabase
myDataBase=>\dt contact
        List of relations
Schema |  Name   |  Type  | Owner
-------+---------+--------+-------
public | contact | table  | userC
(1 row)

myDataBase=>
myDataBase=>alter table contact owner to userB;
ERROR: must be owner of relation public.contact
myDataBase=>
```
Thanks to Mike's comment, I've re-read the doc and I've realised that my current user (i.e. userA that already has the *create* privilege) wasn't a *direct/indirect member of the new owning role*... So the solution was quite simple - I've just done this grant: ``` grant userB to userA; ``` That's all folks ;-) --- **Update:** Another requirement is that the object has to be owned by user *userA* before altering it...
This solved my problem: an `ALTER TABLE` statement to change the ownership. ``` ALTER TABLE databasechangelog OWNER TO arwin_ash; ALTER TABLE databasechangeloglock OWNER TO arwin_ash; ```
Postgresql: error "must be owner of relation" when changing a owner object
[ "", "sql", "database", "postgresql", "sql-grant", "owner", "" ]
I am trying to query a receipts table, to pull order numbers from a database whose first transaction falls within a certain time range. Here is what I have so far: ``` SELECT id, order_number, ( SELECT id FROM receipts WHERE received < r.received AND order_number = r.order_number LIMIT 1 ) priorPayment FROM receipts r WHERE received > 1416384000 AND received < 1416470399 ORDER BY id DESC ``` This works fine, except I want to filter out anything that has a priorPayment returned (so only show rows where this subquery returns null); right now it returns everything. Any suggestions?
Although you can use a subquery for this or a `having` clause, I would recommend simply using a `left join` ``` SELECT r.id, r.order_number FROM receipts r LEFT JOIN receipts r2 ON r2.received < r.received and r2.order_number = r.order_number WHERE r.received > 1416384000 AND r.received < 1416470399 AND r2.order_number is NULL ORDER BY r.id DESC; ```
Another way is to just move the `subquery` from the `select` into the `where` clause: ``` SELECT id, order_number, null as priorPayment FROM receipts r WHERE received > 1416384000 AND received < 1416470399 and ( SELECT id FROM receipts WHERE received < r.received AND order_number = r.order_number LIMIT 1 ) is null ORDER BY id DESC ```
Only show rows where mysql subquery returns null
[ "", "mysql", "sql", "" ]
I have a SQL table like this: ``` >id| date |payed |amount >1 |2015-02-02|1|5 >2 |2015-02-02|0|2 >3 |2015-02-02|1|4 >4 |2015-03-02|1|1 ``` The payed column can be either 0 or 1. How can I return the sum of "amount" for each option of "payed" (0 and 1) based on the date? ``` SELECT (sum of amount for both options of payed) FROM events WHERE date = ? ``` In the example above for date "2015-02-02" I should get 2 and 9. Thank you very much
You can use GROUP BY all by itself if you'd like, but you can also SUM(CASE) if you want the results as 2 columns of the same row. ``` SELECT date, SUM(case when payed = 1 then amount else 0 end) as PaidAmount, SUM(case when payed = 0 then amount else 0 end) as UnPaidAmount FROM events GROUP BY DATE; ```
``` SELECT [payed] ,[value] from table where date = 'target date' group by payed,value order by payed desc, value desc ``` This will generate a result of all transactions on that day, grouped by whether they paid and then by how much. A quick manual sum and you are good to go.
Select sum of same column based on a condition
[ "", "sql", "sum", "" ]
I have created this procedure in Oracle, to assign a role to a user based on the grade stored in the grade column of the marketing table. However, when I run it I get errors. **Initial Problem** ``` CREATE OR REPLACE PROCEDURE proc_assign_role IS vn_grade NUMBER(5); CURSOR cur_user_grade IS SELECT grade, username FROM marketing WHERE grade BETWEEN 1 AND 3; BEGIN FOR rec_cur_user_grade IN cur_user_grade vn_grade:= IF grade= 1 THEN GRANT ROLE admin_staff; ELSIF grade= 2 THEN GRANT ROLE marketing_staff; ELSIF grade= 3 THEN GRANT ROLE event_staff; END IF; DBMS_OUTPUT.PUT_LINE(username||'YOU ARE A GRADE '||vn_grade|| 'USER'); END proc_assign_role; / ``` This is the error I get: ``` ERROR at line 11: PLS-00103: Encountered the symbol "VN_GRADE" when expecting one of the following: . ( * @ % & - + / at loop mod remainder range rem .. || multiset 1. CREATE OR REPLACE PROCEDURE proc_assign_role IS 2. vn_grade NUMBER(5); ```
`vn_grade:=` You need to assign a value to that line, or get rid of it. You can't assign an IF statement to a number variable. Probably get rid of it, then change your IF statement to look at the grade from the cursor. You also need to end your loop. Additionally, you can't do a grant directly within a PL/SQL code block. You have to use the execute immediate statement for that. And you have to tell it who you're granting the role to. ``` FOR rec_cur_user_grade IN cur_user_grade LOOP IF rec_cur_user_grade.grade= 1 THEN execute immediate 'GRANT ROLE admin_staff to ' || rec_cur_user_grade.username; ELSIF rec_cur_user_grade.grade= 2 THEN execute immediate 'GRANT ROLE marketing_staff to ' || rec_cur_user_grade.username; ELSIF rec_cur_user_grade.grade= 3 THEN execute immediate 'GRANT ROLE event_staff to ' || rec_cur_user_grade.username; END IF; DBMS_OUTPUT.PUT_LINE(rec_cur_user_grade.username||' YOU ARE A GRADE '||rec_cur_user_grade.grade||' USER'); END LOOP; ```
I'm seeing a few things that would keep this from working: * After your `FOR` statement, there's no `LOOP` statement (which is what the error is complaining about). There's also no `END LOOP` after your `DBMS_OUTPUT`. * `vn_grade` is followed by the `:=` assignment operator, but nothing is being assigned to it. * The `GRANT` statements are written as bare DDL, which isn't allowed in PL/SQL. They need to be wrapped in `EXECUTE IMMEDIATE`. * `grade` and `username` need to be qualified by the cursor variable (e.g., `rec_cur_user_grade.grade` and `rec_cur_user_grade.username`). Try something like this (which runs as an anonymous block, rather than a procedure, and uses an implicit cursor): ``` BEGIN FOR rec_cur_user_grade IN ( SELECT grade, username FROM marketing WHERE grade BETWEEN 1 AND 3 ) LOOP CASE rec_cur_user_grade.grade WHEN 1 THEN EXECUTE IMMEDIATE 'GRANT ROLE admin_staff TO ' || rec_cur_user_grade.username; WHEN 2 THEN EXECUTE IMMEDIATE 'GRANT ROLE marketing_staff TO ' || rec_cur_user_grade.username; WHEN 3 THEN EXECUTE IMMEDIATE 'GRANT ROLE event_staff TO ' || rec_cur_user_grade.username; END CASE; DBMS_OUTPUT.PUT_LINE(rec_cur_user_grade.username || ' YOU ARE A GRADE ' || rec_cur_user_grade.grade || ' USER'); END LOOP; END; / ```
Oracle Procedure with an IF-THEN-ELSE produces error
[ "", "sql", "oracle", "if-statement", "procedure", "" ]
I don't understand the reason why this happens. Could somebody explain it? ``` SELECT * FROM users WHERE name = '' ``` returns **0** records as it should ``` SELECT * FROM users WHERE name = 'janet' ``` returns **1** record as it should ``` SELECT * FROM users WHERE name = ''-SLEEP(3) ``` returns **EVERY** record in the table, why?? ``` SLEEP(3) returns 0 ``` using `name = ''-0` produces the same result (returning every record) This is not for practical use it's because I'm testing time-based SQL injections
The [documentation for the sleep function](http://dev.mysql.com/doc/refman/5.1/en/miscellaneous-functions.html#function_sleep) states: > Sleeps (pauses) for the number of seconds given by the duration argument, then returns 0. If SLEEP() is interrupted, it returns 1. The duration may have a fractional part. So it returns an integer, so what you actually have is `'' - 0`, here `''` is implicitly converted to an integer so the two types are comparable, so you then have `0 - 0` which equals 0, so your where clause is actually: ``` WHERE Name = 0; ``` Since you are comparing a string with an integer, type conversion must take place. Below is from the [MySQL Docs](http://dev.mysql.com/doc/refman/5.0/en/type-conversion.html) > The following rules describe how conversion occurs for comparison operations: > > * If one or both arguments are NULL, the result of the comparison is NULL, except for the NULL-safe <=> equality comparison operator. For NULL <=> NULL, the result is true. No conversion is needed. > * If both arguments in a comparison operation are strings, they are compared as strings. > * If both arguments are integers, they are compared as integers. > * Hexadecimal values are treated as binary strings if not compared to a number. > * If one of the arguments is a TIMESTAMP or DATETIME column and the other argument is a constant, the constant is converted to a timestamp before the comparison is performed. This is done to be more ODBC-friendly. Note that this is not done for the arguments to IN()! To be safe, always use complete datetime, date, or time strings when doing comparisons. For example, to achieve best results when using BETWEEN with date or time values, use CAST() to explicitly convert the values to the desired data type. > * If one of the arguments is a decimal value, comparison depends on the other argument. 
The arguments are compared as decimal values if the other argument is a decimal or integer value, or as floating-point values if the other argument is a floating-point value. > * In all other cases, the arguments are compared as floating-point (real) numbers. Since you have one argument that is a string, and one that is an integer, it is the final clause that applies. Converting a string (that is not a number) to a floating point number yields 0, which you can test using something as simple as: ``` SELECT CAST('A String' AS DECIMAL(10,5)); ``` Which will give you `0.00000`. So your final where clause after conversions have taken place is: ``` WHERE 0 = 0; ``` So all rows are returned.
MySQL is truncating the varchar value and is evaluating 0=0 in the Where clause. Execute the query and then execute `show warnings;` to see that the varchar values are truncated. MySQL implicit conversions: <http://dev.mysql.com/doc/refman/5.5/en/type-conversion.html>
SLEEP(x) role in SQL query clarification
[ "", "mysql", "sql", "sleep", "" ]
I have the following text to search: ``` #S7Z OK #Wed Feb 18 07:16:26 GMT 2015 expiration=10.0 lastModified=1424192425832 length=466472 path=/name/_master_/563/5638ad54-8079-4399-ba2b-3257b6e6c7fd.pdf userType= ``` The words proceeding each = are the names of properties. For each property name, I'd like to get the property value. That means I'm looking for a regular expression to use with regexp\_substr to get the value of each known property. Something like this: ``` SELECT REGEXP_SUBSTR( '#S7Z OK #Wed Feb 18 07:16:26 GMT 2015 expiration=10.0 lastModified=1424192425832 length=466472 path=/name/_master_/563/5638ad54-8079-4399-ba2b-3257b6e6c7fd.pdf userType=', 'path=.+') FROM dual ``` which returns: path=/name/*master*/563/5638ad54-8079-4399-ba2b-3257b6e6c7fd.pdf But I only want the value, that is "/name/*master*/563/5638ad54-8079-4399-ba2b-3257b6e6c7fd.pdf ". It should also work for expiration, lastModified etc., that is, I don't just want to search for a url but any kind of value. How can I achieve that in one regular expression?
``` SELECT REGEXP_SUBSTR( '#S7Z OK #Wed Feb 18 07:16:26 GMT 2015 expiration=10.0 lastModified=1424192425832 length=466472 path=/name/_master_/563/5638ad54-8079-4399-ba2b-3257b6e6c7fd.pdf userType=', 'path=(.+)', 1, 1, null, 1) FROM dual; ```
Here is how you might capture all of the `name=value` pairs all at once. Note that I use an explicit quantifier `{1,10}` in the regular expression to prevent catastrophic backtracking. (This particular regex might not actually be subject to that, in which case you could replace the explicit quantifier with `+`. But best not to take chances!) ``` WITH s1 AS ( SELECT '#S7Z OK #Wed Feb 18 07:16:26 GMT 2015 expiration=10.0 lastModified=1424192425832 length=466472 path=/name/_master_/563/5638ad54-8079-4399-ba2b-3257b6e6c7fd.pdf userType=' AS str FROM dual ) SELECT SUBSTR(name_value, 1, INSTR(name_value, '=') - 1) AS myname , SUBSTR(name_value, INSTR(name_value, '=') + 1, LENGTH(name_value)) AS myvalue FROM ( SELECT REGEXP_SUBSTR(REGEXP_SUBSTR(s1.str,'(\S+=\S*\s*){1,10}'), '\S+', 1, LEVEL) AS name_value FROM s1 CONNECT BY REGEXP_SUBSTR(REGEXP_SUBSTR(s1.str,'(\S+=\S*\s*){1,10}'), '\S+', 1, LEVEL) IS NOT NULL ); ``` Output as follows: ``` MYNAME | MYVALUE ------------------------------------------------------------------------- expiration | 10.0 lastModified | 1424192425832 length | 466472 path | /name/_master_/563/5638ad54-8079-4399-ba2b-3257b6e6c7fd.pdf userType | (null) ``` [Please see SQL Fiddle here.](http://sqlfiddle.com/#!4/d41d8/40797) Note that I could have used `REGEXP_SUBSTR(name_value, '^[^=]+')`, etc., in the outer query but I figured there were enough regexes in this query (it's a bit expensive, I am sure `SUBSTR()` plus `INSTR()` is cheaper!). Also, note that if you're using Oracle 11g or higher, the `CONNECT BY` clause can be replaced by the following: ``` CONNECT BY LEVEL <= REGEXP_COUNT(REGEXP_SUBSTR(s1.str,'(\S+=\S*\s*){1,10}'), '\S+') ``` [See revised SQL Fiddle.](http://sqlfiddle.com/#!4/d41d8/40798)
Regular Expression for REGEXP_SUBSTR in Oracle
[ "", "sql", "regex", "oracle", "regexp-substr", "" ]
I have a simple `SELECT` query that works fine and returns one row, which is the last occurrence of a specific value in `order_id` column. I want to update this row. However, I cannot combine this `SELECT` query with the `UPDATE` query. This is the working query that returns one row, which I want to update: ``` SELECT * FROM ( SELECT *, ROW_NUMBER() OVER(PARTITION BY order_id ORDER BY start_hour DESC) rn FROM general_report WHERE order_id = 16836 ) q WHERE rn = 1 ``` And I tried many combinations to update the row returned by this statement. For example, I tried to remove `SELECT *`, and update the table `q` as in the following, but it didn't work telling me that relation q does not exist. ``` UPDATE q SET q.cost = 550.01685 FROM ( SELECT *, ROW_NUMBER() OVER(PARTITION BY order_id ORDER BY start_hour DESC) rn FROM general_report WHERE order_id = 16836 ) q WHERE rn = 1 ``` How can I combine these codes with a correct `UPDATE` syntax? In case needed, I test my codes at SQL Manager for PostgreSQL.
Try something like this. I am not sure on PostgreSQL syntax: ``` UPDATE general_report AS d SET cost = 550.01685 FROM ( SELECT * FROM ( SELECT *, ROW_NUMBER() OVER(PARTITION BY order_id ORDER BY start_hour DESC) rn FROM general_report WHERE order_id = 16836 ) q WHERE rn = 1 ) s WHERE d.id = s.id ```
An alternative method for updating the most recent record is to use `NOT EXISTS` (no even more recent row exists): ``` UPDATE general_report dst SET cost = 550.01685 WHERE order_id = 16836 AND NOT EXISTS ( SELECT * FROM general_report nx WHERE nx.order_id = dst.order_id AND nx.start_hour > dst.start_hour ); ```
SQL update last occurrence
[ "", "sql", "database", "postgresql", "sql-update", "" ]
# SQLite database table `table1` ``` user command date location ---------- ---------- ---------- ---------- user1 cmd1 2015-01-01 xxxdeyyy user2 cmd1 2015-01-01 zzzfrxxx user3 cmd1 2015-01-01 yyyukzzz user1 cmd1 2015-01-01 xxxdezzz ... ``` # Expected output Output for `where command='cmd1'`: ``` month users_de users_fr users_es -------- -------- -------- -------- 01 1 0 5 02 2 0 0 03 0 2 1 04 5 0 15 05 1 0 4 06 11 1 2 07 9 0 3 08 1 0 5 09 0 0 5 10 0 0 0 11 1 0 0 12 1 4 5 ``` It is grouped by month (from column `date`) and also grouped by a substring in location (from column `location`). # Actual output I can achieve this (per location): ``` month users_de -------- -------- 01 1 02 2 03 0 ... 12 1 ``` using this query: ``` select strftime('%m',date) as month, count(distinct user) as users_de from table1 where command='cmd1' and location like '%de%' group by strftime('%m',date); ``` I then repeat this query for the other locations (`where ... and location='fr'`): ``` month users_fr -------- -------- 01 0 02 0 03 2 ... 12 4 ``` and (`where ... and location='es'`); ``` month users_es -------- -------- 01 5 02 0 03 1 ... 12 5 ``` Is there a way to have all the `users_xx` columns in one table (as output from SQLite and not through any external (downstream) processing)? Am I thinking about this in the wrong way (grouping instead of subqueries in the top `select`)?
I think you want conditional aggregation: ``` select strftime('%m',date) as month, count(distinct CASE WHEN location like '%de%' THEN user END) as users_de, count(distinct CASE WHEN location like '%fr%' THEN user END) as users_fr, count(distinct CASE WHEN location like '%es%' THEN user END) as users_es from table1 where command = 'cmd1' group by strftime('%m',date); ``` Two notes: * `like` possibly isn't safe in this context. You have the country code embedded in the string, but the characters "de", "es", or "fr" could appear elsewhere in the string. Your question is not clear on better logic for this. * You should include the year in the date string, but your question specifically includes only the month.
You can use the case statement to match each location and then if matches count the user. ``` select strftime('%m',date) as month, CASE WHEN location='de' THEN count(distinct user) END users-de, CASE WHEN location='fr' THEN count(distinct user) END users-fr, CASE WHEN location='es' THEN count(distinct user) END users-es, from table1 where command='cmd1' group by strftime('%m',date),location; ```
How do I `group by` rows and columns in SQLite3?
[ "", "sql", "sqlite", "" ]
Here is a sample table: ``` A | B ---------- DF RUI EF RUI AF FRO EF FRO ``` I want to get all results except WHERE (A = 'EF' AND B = 'RUI'), like this: ``` A | B ---------- DF RUI AF FRO EF FRO ``` But is it possible to do this without a subquery? EDIT: I have added some extra results to show what I want to get. I want to get a row if A = 'EF' or B = 'RUI', but I don't want to get a row if A = 'EF' AND B = 'RUI'.
Just add a `NOT` around the condition in the `WHERE` clause: ``` SELECT A,B FROM table_name WHERE NOT (A = 'EF' AND B = 'RUI') ```
``` SELECT A, B FROM table WHERE NOT (A = 'EF' AND B = 'RUI') ``` or ``` SELECT A, B FROM table WHERE (A <> 'EF' OR B <> 'RUI') ``` The where clause is essentially a boolean expression, so you can do any kind of boolean transforms that you're used to (it's a bit more complicated if it involved NULL values, which I assumed will not be the case for your example; if it does, you might need to add some additional rules or check the behaviour under [SQLs terinary logic](http://en.wikipedia.org/wiki/Null_(SQL)#Comparisons_with_NULL_and_the_three-valued_logic_.283VL.29)).
How to avoid using a subquery
[ "", "sql", "subquery", "where-clause", "" ]
I have this homework assignment : ``` Write an anonymous PL/SQL block that accepts a string as input and removes all of the vowels (a.e.i.o.u) from the string, then outputs the results. The output should look like this: Run the Program SQL>@rm_vowels Enter the String: A penny for your thoughts SQL>**************************** SQL>The new string is: pnny fr yr thghts ``` And this does look like something really easy to do but I'm really lacking some PL/SQL experience to get this done. From my searches so far I've realized I need to use something similar to this : ``` REGEXP_REPLACE(name,'[a,e,i,o,u,A,E,I,O,U]','') ``` Right ?
Technically, the assignment calls for an anonymous pl/sql block, and prompting user for input. So you'd have something like this: ``` set serveroutput on set verify off accept vstring prompt "Please enter your string: "; declare vnewstring varchar2(100); begin vnewstring := regexp_replace('&vstring', '[aeiouAEIOU]',''); dbms_output.put_line('The new string is: ' || vnewstring); end; / ``` You can put this in a file called "my\_homework\_from\_SO.sql" and from the same directory the file is located, login to sqlplus and run it: @my\_homework\_from\_SO.sql ``` Please enter your string: This is a test The new string is: Ths s tst PL/SQL procedure successfully completed. ```
You can also use the translate function, which might be marginally faster than regexp\_replace: ``` select translate('A penny for your thoughts', 'xaeiouAEIOU', 'x') new_str from dual; NEW_STR ------------------ pnny fr yr thghts ``` You may wish to put a trim around to remove any leading/trailing spaces.
Removing all the vowels from a string - PL/SQL
[ "", "sql", "oracle", "plsql", "" ]
I want to find the last inserted row that matches a query (ie, find the row that has the largest id among the matching rows.) for instance, suppose the following query matches 3 rows. with ids, 1,2,and 3. I want to get only the row with id 3. ``` SELECT * FROM `table` WHERE `mail` = 'someone@example.com' ``` How do I do this?
You need to write your query like this ``` SELECT * FROM table_name WHERE `mail` = 'someone@example.com' ORDER BY id DESC LIMIT 1 ```
**Query** ``` SELECT * FROM tbl WHERE `mail` = 'someone@example.com' AND id= ( SELECT MAX(id) FROM tbl WHERE `mail` = 'someone@example.com' ); ``` [Fiddle demo](http://sqlfiddle.com/#!9/39ab6/1)
Find the last inserted row that matches a query
[ "", "mysql", "sql", "" ]
I need help with a query. I have two tables: `ADDRESS` and `CUSTOM`. I can link the two tables with `ADDRESS.ID = CUSTOM.ADDRESSID`. I want to select all the addresses where `CUSTOM.NAME = 'NUMBER'` and `CUSTOM.VALUE = '5'`, but also, for these addresses, display the field `CUSTOM.VALUE` where `CUSTOM.NAME = 'INFO'`. Here are some examples. ADDRESS: ``` ID NAME STREET 1 Paul 65 blue street 2 John 50 red street ``` CUSTOM: ``` ID IDADDRESS NAME VALUE 1 1 NUMBER 5 2 1 INFO 1st floor 3 2 NUMBER 6 4 2 INFO no info ``` The result I want is: Paul - 65 blue street - 1st floor. Can someone help me with this query, please? I tried: `SELECT * FROM ADDRESS, CUSTOM WHERE ADDRESS.ID=CUSTOM.ADDRESSID AND ((CUSTOM.NAME='NUMBER' AND CUSTOM.VALUE='5') OR CUSTOM.NAME='INFO')`
Try the following, joining CUSTOM twice ``` SELECT * FROM ADDRESS, CUSTOM c1, CUSTOM c2 WHERE ADDRESS.ID = c1.ADDRESSID AND c1.ADDRESSID = c2.ADDRESSID and c1.NAME <> c2.NAME ``` Maybe I removed some conditions still needed...
The table 'CUSTOM' appears to hold 2 different types of data, NUMBER and INFO. This means you should essentially treat both types of data as different tables, and join each separately. Your query could look like this: ``` SELECT a.NAME , a.STREET , inf.VALUE FROM ADDRESS a JOIN CUSTOM num ON a.ID = num.IDADDRESS AND num.NAME = 'NUMBER' JOIN CUSTOM inf ON a.ID = inf.IDADDRESS AND inf.NAME = 'INFO' WHERE num.VALUE = '5' ``` If you have the option, try making the table CUSTOM into 2 actual tables, each only containing a single type of data. This will make it clear to other developers what type of data is in the table, and how to use the tables correctly.
SQL help different value on the same field
[ "", "sql", "oracle", "" ]
Depending on a complicated condition, I need to insert one or the other set of fields. What I would like to do is (pseudo code) ``` IF (condition) DO INSERT INTO table set columnA= "foo", columnB = "bar" ELSE INSERT INTO table set columnA= "no foo", columnB = "no bar" ENDIF ``` Unfortunately I can't use stored procedures and I have to fit this into a regular statement. I could repeat the condition for every column ``` INSERT INTO table (columnA, columnB) SELECT IF(condition, "foo", "no foo"), IF(condition, "bar", "no bar") ``` But the conditiion itself is complex and nested, so I'm trying not to repeat it. Is there a syntax that lets a single IF or CASE decide multiple columns ?
Not exactly. But it would appear that your condition is independent of the rows. You could do something like this: ``` INSERT INTO table(columnA, columnB) select (case when flag = 1 then 'foo' else 'no foo' end), (case when flag = 1 then 'bar' else 'no bar' end) from (select <condition> as flag) x; ```
Variables are your friends ;-) ``` INSERT INTO table (columnA, columnB) SELECT IF(@condition:=<your condition goes here>, "foo", "no foo"), IF(@condition, "bar", "no bar"); ``` Note that you can use `EXISTS(SELECT something)` as a valid condition
MySQL IF deciding multiple fields?
[ "", "mysql", "sql", "if-statement", "conditional-statements", "case", "" ]
I have table `T1` with this structure and sample data: ``` ID CID Reference Rdate -------------------------------- 1 123 REF65798 11/11/2011 2 123 REF65798 11/11/2011 3 156 REF65798 11/3/2011 4 156 REF65798 11/11/2011 5 181 REF65798 11/5/2011 6 181 REF65798 11/10/2011 ``` Now in my procedure I check, for different reference numbers, whether any duplicate records are present with the same `Rdate`: ``` declare @Duplicate int select top 1 @Duplicate = count(*) from (select Rdate from t1 where Reference = 'REF65798' and CID in (123, 156, 181) order by Rdate desc) A group by a.Rdate ``` Here the 'Reference' number and 'CID' values change for each record (I have shown it for a single record), and I have to consider only the latest `Rdate`. The above query takes a lot of time when I am processing 10000 records in a loop with different validations. How can I improve the query's performance?
``` ;WITH CTE as ( SELECT ID, CID, Reference, Rdate, row_number() over(partition by CID order by Rdate DESC) rn FROM yourtable WHERE Reference = 'REF65798' and CID in (123, 156, 181) ) SELECT ID, CID, Reference, Rdate FROM CTE WHERE rn = 1 ```
Use an EXISTS to find rows that have an entry for the same date with an earlier id: ``` select * from t1 as t1main where Reference = 'REF65798' and CID in (123, 156, 181) and exists (select 1 from t1 where Reference = 'REF65798' and CID in (123, 156, 181) and Rdate = t1main.Rdate and id < t1main.id) ```
Finding duplicate record without looping in SP - SQL Server
[ "", "sql", "sql-server", "t-sql", "" ]
I am running a query to retrieve some game levels from a MySQL database. The query itself takes around 0.00025 seconds to execute on a base that contains 40 level strings. I thought it was satisfactory, until I got a message from the website host telling me to optimise the below-mentioned query, or the script will be removed since it is pushing a lot of strain onto their servers. I tried optimising by using explain and explain extended and adjusting the columns accordingly(adding indexes), but am always getting the same performance. What I noticed also is that MySQL didn't use indexes where they were available but instead did a full-table scan. Results from EXPLAIN EXTENDED: ``` table id select_type type possible_keys key key_len ref rows Extra users 1 SIMPLE ALL PRIMARY,id NULL NULL NULL 7 Using temporary; Using filesort AllTime 1 SIMPLE ref PRIMARY,userid PRIMARY 4 Test.users.id 1 ``` query: ``` SELECT users.nickname, AllTime.userid, AllTime.id, AllTime.levelname, AllTime.levelstr FROM AllTime INNER JOIN users ON AllTime.userid=users.id ORDER BY AllTime.id DESC LIMIT ($value_from_php),20; ``` The tables: users ``` | id(int) | nickname(varchar) | | (Primary, Auto_increment) | | |---------------------------|-------------------| | 1 | username1 | | 2 | username2 | | 3 | username3 | | ... | ... | ``` and AllTime ``` | id(int) | userid(int) | levelname(varchar) | levelstr(text) | | (Primary, Auto_increment) | (index) | | | |---------------------------|-------------|--------------------|----------------| | 1 | 2 | levelname1 | levelstr1 | | 2 | 2 | levelname2 | levelstr2 | | 3 | 3 | levelname3 | levelstr3 | | 4 | 1 | levelname4 | levelstr4 | | 5 | 1 | levelname5 | levelstr5 | | 6 | 1 | levelname6 | levelstr6 | | 7 | 2 | levelname7 | levelstr7 | ``` Is there a way to optimize this query or would I be better off by calling two consecutive queries from php just to avoid the warning? 
I am just learning MySQL, so please take that information into account when replying, thank you :)
I'm assuming you're using InnoDB. For an INNER JOIN, MySQL typically starts with the table with the fewest rows, in this case `users`. However, since you just want the latest 20 `AllTime` records joined with the corresponding `user` records, you actually should start with `AllTime` since with the `LIMIT`, it will be the smaller data set. Use STRAIGHT\_JOIN to force the join order: ``` SELECT users.nickname, AllTime.userid, AllTime.id, AllTime.levelname, AllTime.levelstr FROM AllTime STRAIGHT_JOIN users ON users.id = AllTime.userid ORDER BY AllTime.id DESC LIMIT ($value_from_php),20; ``` It should be able to use the primary key on the `AllTime` table and follow it in descending order. It'll grab all the data on the same pages as it goes. It should also use the primary key on the users table to grab the id and nickname. If there are more than just two columns, you might add a multi-column covering index on (id, nickname) to improve the speed. If you can, convert the `levelstr` column to `VARCHAR` so that the data is stored on the same page as the rest of the data, otherwise, it has to go fetch the text columns separately. This assumes that your columns are under the 8000 byte row limit for InnoDB. There is no way to avoid the `USING TEMPORARY` unless you get rid of the text column. Most likely, your host has identified this query by using the slow query log, which can identify all queries that don't use an index, or they may have red flagged it because of the `Using temporary`.
it doesn't look like the query has a problem. 1. Review the application code. Most likely the issue is in the code 2. Check [MySQL query execution plan](http://dev.mysql.com/doc/refman/5.5/en/using-explain.html) * possibly you are missing an index 3. Make sure you cache the data in Application and [Database](http://dev.mysql.com/doc/refman/5.1/en/query-cache.html) (fyi, sometimes you can load the whole database into Application memory) 4. Make sure you use a connection pool 5. Create a view (a very small chance for improvement) 6. Try to remove the "Order By" clause (again a very small chance it will improve the performance)
Can't optimise mySQL query
[ "", "mysql", "sql", "query-optimization", "" ]
This query returns the results below. ``` Select BookTitle,TotalNumberInStock From CurrentStock Where (Year<2000 OR Year >2010); ``` **Table 1** ``` BookTitle TotalNumberInStock The Tower 2 Orange Goblins 1 The future of Metal 3 Chronicles of the Banjo 2 Opera 4 Advanced SQL 5 The GAA 4 ``` I want to write a subquery that sums the TotalNumber In Stock so I used this statement: ``` Select Sum(TotalNumberInstock) From ( Select BookTitle,TotalNumberInStock From CurrentStock Where (Year<2000 OR Year >2010) ); ``` I get an error saying: > Incorrect syntax near ';'. What is wrong with this code?
You need to give an alias name to your sub query (aka derived table): ``` select Sum(TotalNumberInstock) From (Select BookTitle,TotalNumberInStock From CurrentStock Where (Year<2000 OR Year >2010)) x ; ``` This correction makes your query work, but you don't need such complexity to get your desired result. You can simply get what you want with this query: ``` select sum(totalNumberInStock) from CurrentStock where year < 2000 or year > 2010 ```
I'm not sure why you need a sub query. ``` select sum(totalNumberInStock) from CurrentStock where year < 2000 or year > 2010 ```
I want to write a subquery to sum a column
[ "", "sql", "" ]
I am looking to find the count of rows where the entered answer is the same as the correct answer. Here's an example: ``` WorkerID Answer Correct 1 A A 1 B C 2 A D ``` I would then get the following result: ``` WorkerID AnswerCount # Correct 1 2 1 2 1 0 ``` So far I have (conceptually): ``` SELECT worker_id, count(*), count(Answer == Correct) FROM answer_table GROUP BY WorkerID ``` What would be the correct query here?
You don't want `count()`, you want `sum()`: ``` SELECT worker_id, count(*) as AnswerCount, sum(Answer = Correct) as NumCorrect FROM answer_table GROUP BY WorkerID; ``` `count()` counts the number of non-`NULL` values that the expression takes on. You want to count the number of matches, which is the number of trues.
I think this is what you want : ``` select count(*) from yourTable where answer = correct group by workerId ``` Basically, what you need to do is * Select all where answer = correct. * group them by workerId. * count the num of rows (where answer = correct) for each group. **Edit :** To answer to your edited question, ``` select count(*), count(b.workerId) from yourTable left join (select * from yourTable where answer = correct) b using(workerId) group by workerId ```
Count rows with equivalent value in two columns
[ "mysql", "sql" ]
I used the following query to insert values. The `DateOfLoan` data type is `date`. There is an extra column in my table called `DateDueBack`, a computed column which adds 14 days to the date in the `DateOfLoan` column.

```
Insert CurrentLoans(LoanID, BookISBN, MemberID, DateOfLoan)
Values (101, 'MB00001', 2, '20/10/2014'),
       (102, 'AO00001', 5, '13/10/2014'),
       (103, 'AH00002', 5, '13/10/2014'),
       (104, 'DK10100', 2, '23/10/2014'),
       (105, 'EP00666', 1, '12/10/2014'),
       (106, 'HH10189', 4, '01/01/2014');
```

I get the following error message:

> Msg 241, Level 16, State 1, Line 1
> Conversion failed when converting date and/or time from character string.

When I remove the quotes around the date values I get an error message saying Int is incompatible with `Date`. Can someone explain what's wrong with the code above and provide a solution? Thanks in advance for any help offered.
Dates in SQL Server should use a format like the following:

```
YYYY-MM-dd or MM/dd/YYYY
```

So try it like this:

```
Insert CurrentLoans(LoanID, BookISBN, MemberID, DateOfLoan)
Values (101, 'MB00001', 2, '10/20/2014')
```

or

```
Insert CurrentLoans(LoanID, BookISBN, MemberID, DateOfLoan)
Values (101, 'MB00001', 2, '2014-10-20')
```

Or use SQL [Convert](https://msdn.microsoft.com/en-us/library/ms187928.aspx):

```
Insert CurrentLoans(LoanID, BookISBN, MemberID, DateOfLoan)
Values (101, 'MB00001', 2, convert(date,'20/10/2014',103))
```
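If you'd rather rewrite the literals than rely on `CONVERT`, the day-first strings from the question can be translated to unambiguous ISO dates before inserting. A small Python sketch of that translation (and of what the computed `DateDueBack` column would produce for the first loan):

```python
from datetime import datetime, timedelta

# The question's values are day-first ('20/10/2014'), which SQL Server
# rejects under the default US language setting.
raw_dates = ["20/10/2014", "13/10/2014", "01/01/2014"]

# Rewrite as ISO 'YYYY-MM-DD', which SQL Server always accepts for date
iso_dates = [datetime.strptime(d, "%d/%m/%Y").strftime("%Y-%m-%d") for d in raw_dates]
print(iso_dates)  # ['2014-10-20', '2014-10-13', '2014-01-01']

# DateDueBack is DateOfLoan + 14 days
due_back = datetime.strptime(iso_dates[0], "%Y-%m-%d") + timedelta(days=14)
print(due_back.strftime("%Y-%m-%d"))  # 2014-11-03
```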
Try this:

```
Insert into CurrentLoans(LoanID,BookISBN,MemberID,DateOfLoan)
Values (101, 'MB00001', 2, '2014-10-20'),
       (102, 'AO00001', 5, '2014-10-13'),
       (103, 'AH00002', 5, '2014-10-13'),
       (104, 'DK10100', 2, '2014-10-23'),
       (105, 'EP00666', 1, '2014-10-12'),
       (106, 'HH10189', 4, '2014-01-01');
```
I get error messages when entering date values into a column with the Date data type
[ "sql", "sql-server" ]
We are using MySQL as our DB. The following queries run on a MySQL table (approx 25 million records). I pasted the queries here. They run too slowly, and I was wondering if better composite indexes might improve the situation. Any idea what the best composite indexes would be? And are composite indexes required for these queries?

FIRST QUERY

```
EXPLAIN SELECT log_type,
       count(DISTINCT subscriber_id) AS distinct_count,
       count(*) as total_count
FROM stats.campaign_logs
WHERE domain = 'xxx'
  AND campaign_id = '12345'
  AND log_type IN ('EMAIL_SENT', 'EMAIL_CLICKED', 'EMAIL_OPENED', 'UNSUBSCRIBED')
  AND log_time BETWEEN CONVERT_TZ('2015-02-12 00:00:00','+05:30','+00:00')
                   AND CONVERT_TZ('2015-02-19 23:59:58','+05:30','+00:00')
GROUP BY log_type
```

EXPLAIN of above query

```
id: 1
select_type: SIMPLE
table: campaign_logs
type: index_merge
possible_keys: campaign_id_index,domain_index,log_type_index,log_time_index
key: campaign_id_index,domain_index
key_len: 153,153
ref: NULL
rows: 35683
Extra: Using intersect(campaign_id_index,domain_index); Using where; Using filesort
```

SECOND QUERY

```
SELECT campaign_id, subscriber_id, campaign_name, log_time, log_type, message,
       UNIX_TIMESTAMP(log_time) AS time
FROM campaign_logs
WHERE domain = 'xxx'
  AND log_type = 'EMAIL_OPENED'
ORDER BY log_time DESC
LIMIT 20;
```

EXPLAIN of above query

```
id: 1
select_type: SIMPLE
table: campaign_logs
type: index_merge
possible_keys: domain_index,log_type_index
key: domain_index,log_type_index
key_len: 153,153
ref: NULL
rows: 118392
Extra: Using intersect(domain_index,log_type_index); Using where; Using filesort
```

**THIRD QUERY**

```
EXPLAIN SELECT *, UNIX_TIMESTAMP(log_time) AS time
FROM stats.campaign_logs
WHERE domain = 'xxx'
  AND log_type <> 'EMAIL_SLEEP'
  AND subscriber_id = '123'
ORDER BY log_time DESC
LIMIT 100
```

EXPLAIN of above query

```
id: 1
select_type: SIMPLE
table: campaign_logs
type: ref
possible_keys: subscriber_id_index,domain_index,log_type_index
key: subscriber_id_index
key_len: 153
ref: const
rows: 35
Extra: Using where; Using filesort
```

If you need any other details, I can provide them here.

**UPDATE (2016/April/22):**

Now we want to add one more column to the existing table: node_id. One campaign can have multiple nodes, and whatever reports we generate on campaigns, we now also need per individual node. For example:

```
SELECT log_type,
       count(DISTINCT subscriber_id) AS distinct_count,
       count(*) as total_count
FROM stats.campaign_logs
WHERE domain = 'xxx'
  AND campaign_id = '12345'
  AND node_id = '34567'
  AND log_type IN ('EMAIL_SENT', 'EMAIL_CLICKED', 'EMAIL_OPENED', 'UNSUBSCRIBED')
  AND log_time BETWEEN CONVERT_TZ('2015-02-12 00:00:00','+05:30','+00:00')
                   AND CONVERT_TZ('2015-02-19 23:59:58','+05:30','+00:00')
GROUP BY log_type
```

```
CREATE TABLE `camp_logs` (
  `domain` varchar(50) DEFAULT NULL,
  `campaign_id` varchar(50) DEFAULT NULL,
  `subscriber_id` varchar(50) DEFAULT NULL,
  `message` varchar(21000) DEFAULT NULL,
  `log_time` datetime DEFAULT NULL,
  `log_type` varchar(50) DEFAULT NULL,
  `level` varchar(50) DEFAULT NULL,
  `campaign_name` varchar(500) DEFAULT NULL,
  KEY `subscriber_id_index` (`subscriber_id`),
  KEY `log_type_index` (`log_type`),
  KEY `log_time_index` (`log_time`),
  KEY `campid_domain_logtype_logtime_subid_index` (`campaign_id`,`domain`,`log_type`,`log_time`,`subscriber_id`),
  KEY `domain_logtype_logtime_index` (`domain`,`log_type`,`log_time`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
```

**SIZE issue.** As we have two composite indexes, the index file is increasing rapidly. The table's current stats are:

Data size: 30 GB
Index size: 35 GB

For reports on node_id we want to update our existing composite index **from**

```
KEY `campid_domain_logtype_logtime_subid_index` (`campaign_id`,`domain`,`log_type`,`log_time`,`subscriber_id`),
```

**to**

```
KEY `campid_domain_logtype_logtime_subid_nodeid_index` (`campaign_id`,`domain`,`log_type`,`log_time`,`subscriber_id`,`node_id`)
```

Could you suggest suitable composite indexes for both campaign-level and node-level reports?

Thanks
This is your first query:

```
SELECT A.log_type, count(*) as distinct_count, sum(A.total_count) as total_count
from (SELECT log_type, count(subscriber_id) as total_count
      FROM stats.campaign_logs
      WHERE domain = 'xxx'
        AND campaign_id = '12345'
        AND log_type IN ('EMAIL_SENT', 'EMAIL_CLICKED', 'EMAIL_OPENED', 'UNSUBSCRIBED')
        AND DATE(CONVERT_TZ(log_time,'+00:00','+05:30'))
            BETWEEN DATE('2015-02-12 00:00:00') AND DATE('2015-02-19 23:59:58')
      GROUP BY subscriber_id, log_type) A
GROUP BY A.log_type;
```

It is better written as:

```
SELECT log_type, count(DISTINCT subscriber_id) as total_count
FROM stats.campaign_logs
WHERE domain = 'xxx'
  AND campaign_id = '12345'
  AND log_type IN ('EMAIL_SENT', 'EMAIL_CLICKED', 'EMAIL_OPENED', 'UNSUBSCRIBED')
  AND DATE(CONVERT_TZ(log_time, '+00:00', '+05:30'))
      BETWEEN DATE('2015-02-12 00:00:00') AND DATE('2015-02-19 23:59:58')
GROUP BY log_type;
```

The best index on this is probably `campaign_logs(domain, campaign_id, log_type, log_time, subscriber_id)`. This is a covering index for the query. The first three keys should be used for the `where` filtering.
For query 1, @Gordon Linoff's index is excellent (at least after the rewritten SELECT):

```
INDEX(domain, campaign_id, log_type, log_time, subscriber_id)
INDEX(campaign_id, domain, log_type, log_time, subscriber_id)  -- equally good
```

For query 2: "index_merge" is a sign that you could probably benefit from a "compound index". The second query is best handled by either of the following, which (I think) will compute the resultset with only 20 reads, not 118K, as estimated by EXPLAIN.

```
INDEX(domain, log_type, log_time)
INDEX(log_type, domain, log_time)
```

Keep in mind that, when you add indexes, you should get rid of redundant ones. For example, `INDEX(domain, ...)` makes `KEY domain_index (domain)` redundant, so the latter can be DROPped.

Overall, I would recommend:

```
DROP INDEX(campaign_id_index),
ADD INDEX(campaign_id, domain, log_type, log_time, subscriber_id),
DROP INDEX(domain),
ADD INDEX(domain, log_type, log_time)
PRIMARY KEY(id, log_time)  -- if you also add PARTITIONing; see below
```

Other recommendations:

* InnoDB must have a PRIMARY KEY. (A 6-byte hidden one was provided for you.) Recommend `ADD COLUMN id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY`.
* Consider changing log_type from a bulky VARCHAR to an ENUM.
* If subscriber_id is really a number, then consider INT UNSIGNED.
* Will you eventually need to purge 'old' records? PARTITION BY RANGE(TO_DAYS(log_time)) is probably the best way. See <http://mysql.rjweb.org/doc.php/partitionmaint>. (And note that the PK would need to be (id, log_time).)
* "Partition pruning" cannot happen because log_time is buried in a pair of functions. Use @axiac's rephrasing.
* innodb_buffer_pool_size should be set to about 70% of available RAM.
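The covering-index effect recommended above can be illustrated with SQLite's planner (the engine differs from MySQL, but the principle is the same): with the leading columns of the composite index matched by equality, the lookup never touches the base table. A minimal sketch, using the column names from the question but no data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE campaign_logs (domain TEXT, campaign_id TEXT, log_type TEXT, "
    "log_time TEXT, subscriber_id TEXT)"
)
# The composite index recommended for query 1
conn.execute(
    "CREATE INDEX idx_cover ON campaign_logs "
    "(campaign_id, domain, log_type, log_time, subscriber_id)"
)

plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT log_type, COUNT(DISTINCT subscriber_id) FROM campaign_logs "
    "WHERE domain = 'xxx' AND campaign_id = '12345' "
    "AND log_type IN ('EMAIL_SENT', 'EMAIL_OPENED') "
    "GROUP BY log_type"
).fetchall()

for row in plan:
    # e.g. "SEARCH campaign_logs USING COVERING INDEX idx_cover (...)"
    print(row[-1])
```

The plan mentions the composite index rather than a table scan plus filesort, which is the behaviour the MySQL recommendation is aiming for.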
MySql composite index
[ "mysql", "sql" ]
I haven't had any luck with my SQL query to get users with the role subscriber:

```
SELECT ID, display_name
FROM 'wp_users'
INNER JOIN 'wp_usermeta'
ON 'wp_users'.ID = 'wp_usermeta'.user_id
WHERE 'wp_usermeta'.meta_key = 'wp_capabilities'
AND ('wp_usermeta'.meta_value LIKE 'subscriber')
ORDER BY display_name
```

Can anybody help me?
I have got the answer to my question:

```
SELECT wp_users.ID, wp_users.user_nicename
FROM wp_users
INNER JOIN wp_usermeta ON wp_users.ID = wp_usermeta.user_id
WHERE wp_usermeta.meta_key = 'wp_capabilities'
AND wp_usermeta.meta_value LIKE '%subscriber%'
ORDER BY wp_users.user_nicename
```

If anybody is struggling with the same issue, please use my SQL query above.
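The `%` wildcards are the key change here: WordPress stores `wp_capabilities` as a serialized PHP array, so an exact `LIKE 'subscriber'` never matches. A small SQLite sketch (the two meta rows below mimic the typical serialized format, not real data) shows the difference:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE wp_usermeta (user_id INT, meta_key TEXT, meta_value TEXT)")
# meta_value is a serialized PHP array, e.g. a:1:{s:10:"subscriber";b:1;}
conn.executemany(
    "INSERT INTO wp_usermeta VALUES (?, ?, ?)",
    [
        (1, "wp_capabilities", 'a:1:{s:10:"subscriber";b:1;}'),
        (2, "wp_capabilities", 'a:1:{s:13:"administrator";b:1;}'),
    ],
)

# Exact match against the serialized blob finds nothing
exact = conn.execute(
    "SELECT user_id FROM wp_usermeta WHERE meta_key = 'wp_capabilities' "
    "AND meta_value LIKE 'subscriber'"
).fetchall()

# Wildcard match finds the subscriber row
fuzzy = conn.execute(
    "SELECT user_id FROM wp_usermeta WHERE meta_key = 'wp_capabilities' "
    "AND meta_value LIKE '%subscriber%'"
).fetchall()

print(exact, fuzzy)  # [] [(1,)]
```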
Here's a slight variant of @qqruza's answer that includes the user's email and role and returns users for all roles.

```
SELECT wp_users.ID, wp_users.user_nicename, wp_users.user_email, wp_usermeta.meta_value
FROM wp_users
JOIN wp_usermeta ON wp_users.ID = wp_usermeta.user_id
WHERE wp_usermeta.meta_key = 'wp_capabilities'
ORDER BY wp_users.user_nicename
```

If you have a WordPress multisite installation, to get the roles for all child sites, use:

```
SELECT wp_users.ID, wp_users.user_nicename, wp_users.user_email, wp_usermeta.meta_key, wp_usermeta.meta_value
FROM wp_users
JOIN wp_usermeta ON wp_users.ID = wp_usermeta.user_id
WHERE wp_usermeta.meta_key LIKE 'wp_%capabilities'
ORDER BY wp_users.user_nicename
```

Of course, you'll need to look at the `wp_usermeta.meta_key` value to determine which child site (blog) the record applies to.
SQL Query to Get Users With Role Subscriber
[ "sql", "wordpress" ]
I need your help. I have a ListView and a search EditText. When I type to search for a record and it matches something in the db, all matching records are displayed. However, if no record matches, it just displays nothing. What I want to do is display a message: **No Records Found**.

This is what I've tried so far:

```
etSearch.addTextChangedListener(new TextWatcher() {
    public void afterTextChanged(Editable arg0) {
    }

    public void beforeTextChanged(CharSequence arg0, int arg1, int arg2, int arg3) {
    }

    public void onTextChanged(CharSequence cs, int arg1, int arg2, int arg3) {
        adapter.getFilter().filter(cs.toString());
    }
});

adapter.setFilterQueryProvider(new FilterQueryProvider() {
    @Override
    public Cursor runQuery(CharSequence constraint) {
        String searchBy = constraint.toString();
        if (cursor.getCount() == 0) {
            Log.e("Cursor", "Empty result");
        }
        return dbHelper.searchProduct(searchBy, telco);
    }
});
```

**UPDATE:**

```
public Cursor searchProduct(String product, String telco) {
    final String MY_QUERY = "SELECT * "
            + "FROM products "
            + "WHERE telco_code = '" + telco + "' "
            + "AND product_code LIKE '%" + product + "%' "
            + "AND product_description LIKE '%" + product + "%';";

    Cursor localCursor = this.myDataBase.rawQuery(MY_QUERY, null);
    if (localCursor != null)
        localCursor.moveToFirst();
    return localCursor;
}
```

Any ideas? I would gladly appreciate your help. Thanks.
I solved the problem using this:

```
adapter.setFilterQueryProvider(new FilterQueryProvider() {
    @Override
    public Cursor runQuery(CharSequence constraint) {
        String searchBy = constraint.toString();
        Cursor newCursor = dbHelper.searchProduct(searchBy, telco);

        if (newCursor.getCount() == 0) {
            Log.e("Cursor", "Empty");
            runOnUiThread(new Runnable() {
                @Override
                public void run() {
                    tvHeader.setVisibility(View.VISIBLE);
                    tvHeader.setText(getResources().getString(R.string.no_products));
                }
            });
        } else {
            runOnUiThread(new Runnable() {
                @Override
                public void run() {
                    tvHeader.setVisibility(View.GONE);
                }
            });
        }
        return newCursor;
    }
});
```
Replace

```
if (cursor == null)
```

with

```
if (cursor != null && cursor.getCount() == 0)
```

**EDIT:** Implement the `runQuery()` method this way:

```
@Override
public Cursor runQuery(CharSequence constraint) {
    String searchBy = constraint.toString();
    Cursor cursor = dbHelper.searchProduct(searchBy, telco);

    if (cursor != null && cursor.getCount() == 0) {
        new AlertDialog.Builder(this)
                .setTitle("No Records Found")
                .setMessage("No records are available.")
                .setPositiveButton(android.R.string.ok, null)
                .setIcon(android.R.drawable.ic_dialog_alert)
                .show();
    }
    return cursor;
}
```

Try this. This should work.
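Independent of the Android plumbing, the SQL inside `searchProduct` can be exercised on its own to confirm when the cursor count would be zero. The sketch below uses SQLite with invented sample rows, and swaps the question's string concatenation for bound parameters (which also avoids the SQL-injection risk):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE products (telco_code TEXT, product_code TEXT, product_description TEXT)"
)
conn.execute("INSERT INTO products VALUES ('ABC', 'P100', 'P100 prepaid load')")

def search_product(product, telco):
    # Parameterised version of the searchProduct query from the question;
    # note both product_code AND product_description must match.
    return conn.execute(
        "SELECT * FROM products WHERE telco_code = ? "
        "AND product_code LIKE '%' || ? || '%' "
        "AND product_description LIKE '%' || ? || '%'",
        (telco, product, product),
    ).fetchall()

print(len(search_product("P100", "ABC")))  # 1 -> show the list
print(len(search_product("ZZZZ", "ABC")))  # 0 -> show "No Records Found"
```

A zero-length result here corresponds exactly to the `cursor.getCount() == 0` branch in the answers above.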
Android get current count returned by cursor
[ "android", "sql", "sqlite" ]
I have 2 queries I'd like to run. The idea here is to run a query on the transaction table by the transaction "type". Based on these results, I want to run another query to see the customer's last transaction of a specific type and check whether the service ID was the same. If it's not the same, I want to flag it as "upgraded".

Here is the initial query that pulls the results from a transactions table based on a transaction type:

```
Select customerid, serviceid
from Transactions
where (dtcreated > @startdate and dtcreated < @enddate)
and (transactiontype = 'Cust Save')
```

The output for this is:

```
Customerid ServiceID
1          11
2          21
3          21
4          11
5          12
6          11
```

What I'd like to do next is run this query, matching the customerID to see what the customer's last charge was:

```
Select serviceID, Max(dtcreated) as MostRecent
From Transactions
Where (transactiontype = 'Cust Purchase')
Group By serviceID
```

My final output combining the two queries would be:

```
Customerid ServiceID Last Purchase Upgraded?
1          11        11            No
2          21        11            Yes
3          21        12            Yes
4          11        10            Yes
5          12        12            No
6          11        11            No
```

I thought this might work, but it doesn't quite give me what I want. It returns too many results, so the query is obviously not correct:

```
Select serviceID, Max(dtcreated) as MostRecent
From Transactions
Where (transactiontype = 'Cust Purchase')
AND EXISTS (Select customerid, serviceid
            from Transactions
            where (dtcreated > @startdate and dtcreated < @enddate)
            and (transactiontype = 'Cust Save'))
GROUP BY serviceid
```
If I understand the requirements properly, you can use [`ROW_NUMBER`](https://msdn.microsoft.com/en-GB/library/ms186734.aspx) to determine which record is the latest per customerID. Then you can JOIN this back to the transactions table to determine if there is a match in ServiceID:

```
SELECT  t.CustomerID,
        t.ServiceID,
        t.dtCreated,
        Upgraded = CASE WHEN t.ServiceID = cp.ServiceID THEN 0 ELSE 1 END
FROM    Transactions AS t
        INNER JOIN
        (   SELECT  CustomerID,
                    ServiceID,
                    dtCreated,
                    RowNumber = ROW_NUMBER() OVER (PARTITION BY CustomerID ORDER BY dtCreated DESC)
            FROM    Transactions
            WHERE   transactiontype = 'Cust Purchase'
        ) AS cp
            ON  cp.CustomerID = t.CustomerID
            AND cp.RowNumber = 1
WHERE   t.dtcreated > @startdate
AND     t.dtcreated < @enddate
AND     t.transactiontype = 'Cust Save'
```
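The same pattern can be checked end to end with SQLite (3.25+ for window functions). The two-customer data set below is made up, date parameters are replaced with literals, and the T-SQL `alias = expr` syntax is rewritten as `expr AS alias`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Transactions (CustomerID INT, ServiceID INT, "
    "transactiontype TEXT, dtCreated TEXT)"
)
conn.executemany(
    "INSERT INTO Transactions VALUES (?, ?, ?, ?)",
    [
        (1, 11, "Cust Purchase", "2015-05-01"),
        (1, 11, "Cust Save",     "2015-06-01"),
        (2, 10, "Cust Purchase", "2015-04-01"),
        (2, 11, "Cust Purchase", "2015-05-02"),
        (2, 21, "Cust Save",     "2015-06-01"),
    ],
)

rows = conn.execute("""
    SELECT t.CustomerID, t.ServiceID, cp.ServiceID AS LastPurchase,
           CASE WHEN t.ServiceID = cp.ServiceID THEN 'No' ELSE 'Yes' END AS Upgraded
    FROM Transactions AS t
    INNER JOIN (
        SELECT CustomerID, ServiceID,
               ROW_NUMBER() OVER (PARTITION BY CustomerID ORDER BY dtCreated DESC) AS rn
        FROM Transactions
        WHERE transactiontype = 'Cust Purchase'
    ) AS cp ON cp.CustomerID = t.CustomerID AND cp.rn = 1
    WHERE t.dtCreated > '2015-05-15' AND t.dtCreated < '2015-07-01'
      AND t.transactiontype = 'Cust Save'
    ORDER BY t.CustomerID
""").fetchall()

print(rows)  # [(1, 11, 11, 'No'), (2, 21, 11, 'Yes')]
```

Customer 1 saved on the same service as their latest purchase (not upgraded); customer 2 saved on service 21 after last purchasing service 11 (upgraded).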
I think you need something like this. Here t1 gives you the max dtcreated per customer; t2 contains all transactions in the date range given; t3 gives you the last purchase per customer.

```
select t1.customerid,
       t3.serviceid as Last_Purchase_ServiceID,
       t1.dtcreated as Last_Purchase_DateCreated,
       t2.ServiceID as Current_Purchase_ServiceID,
       t2.dtcreated as Current_Purchase_DateCreated
from (
       select customerid, max(dtcreated) as dtcreated
       from Transactions
       group by customerid
     ) t1
join (
       select customerid, serviceid, dtcreated
       from Transactions
       where (dtcreated > @startdate and dtcreated < @enddate)
       and (transactiontype = 'Cust Save')
     ) t2 on t1.customerid = t2.customerid
join Transactions t3
  on t1.customerid = t3.customerid
  and t1.dtcreated = t3.dtcreated
```
How to run a subquery based on results of a query SQL
[ "sql", "sql-server", "sql-server-2008", "sql-server-2005" ]