Columns: Prompt (string, 10–31k chars) · Chosen (string, 3–29.4k chars) · Rejected (string, 3–51.1k chars) · Title (string, 9–150 chars) · Tags (list, 3–7 items)
Working on a point redemption app. I have a query returning point ranges as below: ``` Point ----- 50 100 150 ``` I have a local variable holding accumulated points. Say if I have 65 points then I will be able to redeem 50 points only. If I have 110 then I will be able to redeem 100, and so on. I tried to use `select case` & `between` but couldn't get it to work correctly. Kindly help.
``` SELECT TOP 1 p.* FROM Points p WHERE p.Point <= @PointVar -- or < @PointVar ORDER BY p.Point DESC ```
Try using a subquery as below: ``` select points from tab where points = (select max(points) from tab where points <= @local_variable) ```
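Both answers rely on the same idea: take the greatest tier that does not exceed the accumulated total. A minimal SQLite sketch (table and column names follow the question; the data is made up) to illustrate:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Points (Point INTEGER)")
conn.executemany("INSERT INTO Points VALUES (?)", [(50,), (100,), (150,)])

def redeemable(points):
    # Greatest redemption tier at or below the accumulated points
    return conn.execute(
        "SELECT MAX(Point) FROM Points WHERE Point <= ?", (points,)
    ).fetchone()[0]

print(redeemable(65), redeemable(110))  # 50 100
```

With 65 accumulated points this returns 50, and with 110 it returns 100, matching the examples in the question.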
Finding nearest value in T-SQL from array of elements
[ "sql", "t-sql", "sql-server-2012" ]
Which is more efficient in SQL: MIN or TOP? ``` select MIN(salary) from ABC select top 1 salary from ABC order by salary asc ```
Tests made on SQL Server 2012 (you didn't ask for a specific version, and this is what I have): ``` -- Create the table CREATE TABLE ABC (salary int) -- insert sample data DECLARE @I int = 0 WHILE @i < 1000000 -- that's right, a million records.. BEGIN INSERT INTO ABC VALUES (@i) SET @I = @I + 1 END ``` I included execution plans and ran both queries: ``` select MIN(salary) from ABC select top 1 salary from ABC order by salary asc ``` Results: * Without indexes: query cost for top 1 was 94% and for min was 6%. * With an index on salary - query cost for both was 50% (doesn't matter if the index is clustered or not). **Without an index:** ![Execution plan without an index](https://i.stack.imgur.com/t0lW9.jpg) **With an index:** (clustered and non-clustered resulted in the same execution plan) ![Execution plan with an index](https://i.stack.imgur.com/MwjYD.jpg)
You are looking for the minimum salary. So use `MIN(salary)`; it is made for exactly this purpose, and your query is thus as readable and maintainable as possible. I have never seen anybody run into performance problems because of using MIN or MAX. Besides, the task of finding a minimum in a list is very simple. The task of sorting a list (only to keep the first row) is quite another. If you are lucky, the DBMS sees through this and doesn't sort at all, but simply looks up the minimum value for you, and then you are where you were with MIN already. If by some miracle the DBMS performs the TOP 1 query faster than the MIN query, then consider this a flaw, stick with the MIN query anyway, and wait for a future version to perform better :-) Use TOP n queries only when you need more than the MIN or MAX value from the records.
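For what it's worth, the equivalence of the two forms is easy to check on any engine. A small SQLite sketch (SQLite spells `TOP 1` as `LIMIT 1`; sample salaries are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ABC (salary INTEGER)")
conn.executemany("INSERT INTO ABC VALUES (?)", [(300,), (100,), (200,)])

via_min = conn.execute("SELECT MIN(salary) FROM ABC").fetchone()[0]
via_sort = conn.execute(
    "SELECT salary FROM ABC ORDER BY salary ASC LIMIT 1"  # TOP 1 equivalent
).fetchone()[0]
print(via_min, via_sort)  # 100 100
```

Both forms return the same value; the debate above is purely about plan cost and readability.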
Which is more efficient in SQL: MIN or TOP?
[ "sql", "sql-server" ]
I'm trying to count the number of appointments in each month for the year 2014, where the date formatting is like '22-JAN-14'. I've tried variations of these two queries so far and both come up with `"YEAR": invalid identifier`. CODE 1 ``` SELECT count(*), MONTH(dateofappointment) as month, YEAR(dateofappointment) as year FROM appointment GROUP BY month HAVING YEAR(dateofappointment) ='14' ORDER BY month; ``` and CODE 2 ``` select count(*) from appointment group by month(dateofappointment) having year(dateofappointment) = '14'; ``` Any help is appreciated.
Use `EXTRACT`: ``` SELECT EXTRACT(month FROM dateofappointment) "Month", count(*) FROM appointment WHERE EXTRACT(YEAR FROM dateofappointment) = '2014' GROUP BY EXTRACT(month FROM dateofappointment) ORDER BY EXTRACT(month FROM dateofappointment); ``` To get a month name rather than a number: ``` SELECT TO_CHAR(TO_DATE(EXTRACT(month FROM dateofappointment), 'MM'), 'MONTH') "Month", count(*) FROM appointment WHERE EXTRACT(YEAR FROM dateofappointment) = '2014' GROUP BY EXTRACT(month FROM dateofappointment) ORDER BY EXTRACT(month FROM dateofappointment); ```
Data by month and year, sorted by date from oldest to newest: ``` SELECT EXTRACT(year FROM dateofappointment) "Year", EXTRACT(month FROM dateofappointment) "Month", count(*) FROM appointment GROUP BY EXTRACT(year FROM dateofappointment), EXTRACT(month FROM dateofappointment) ORDER BY 1, 2; ```
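The same filter-by-year, group-by-month shape can be sanity-checked in SQLite, where `strftime` plays the role of Oracle's `EXTRACT` (a sketch with made-up sample dates):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE appointment (dateofappointment TEXT)")
conn.executemany("INSERT INTO appointment VALUES (?)",
                 [("2014-01-22",), ("2014-01-30",),
                  ("2014-03-05",), ("2013-12-31",)])

rows = conn.execute("""
    SELECT strftime('%m', dateofappointment) AS month, COUNT(*)
    FROM appointment
    WHERE strftime('%Y', dateofappointment) = '2014'  -- EXTRACT(YEAR ...) analogue
    GROUP BY month
    ORDER BY month
""").fetchall()
print(rows)  # [('01', 2), ('03', 1)]
```

The 2013 row is excluded by the year filter, and the two January appointments collapse into one group.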
Count records per month
[ "sql", "sql-server", "sql-server-2008", "count" ]
I have a `Ticket_Date` column that is in the format `YYYY-MM-DD HH:MI:SS`. I want to check if the `Ticket_date` is in the current month. So far I have: ``` Ticket_date >= '2015-04-01' and Ticket_date < '2015-04-30' ``` but I need it to work with the current month rather than hardcoded dates.
``` YEAR(Ticket_date) = YEAR(getdate()) and MONTH(Ticket_date) = MONTH(getdate()) ``` but I would also save the current date in a variable to guarantee that the result of `getdate()` doesn't change while executing a long query at midnight: ``` declare @today datetime = getdate() ... YEAR(Ticket_date) = YEAR(@today) and MONTH(Ticket_date) = MONTH(@today) ```
``` MONTH(Ticket_date) = MONTH(GETDATE()) AND YEAR(Ticket_date) = YEAR(GETDATE()) ```
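The year-plus-month comparison both answers use can be mirrored in plain Python, which makes the edge cases (time of day, month boundaries) easy to verify; `in_current_month` is a hypothetical helper name:

```python
from datetime import datetime

def in_current_month(ticket_date, today):
    # Same test as the T-SQL YEAR()/MONTH() comparison above
    return (ticket_date.year, ticket_date.month) == (today.year, today.month)

ref = datetime(2015, 4, 15)
print(in_current_month(datetime(2015, 4, 29, 23, 59), ref))  # True
print(in_current_month(datetime(2015, 5, 1), ref))           # False
```

Note the 23:59 timestamp on April 29 still matches, which is exactly why comparing year and month beats the hardcoded `< '2015-04-30'` bound in the question (that bound silently drops April 30).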
How to see if date is in current month SQL Server?
[ "sql", "sql-server", "date" ]
I have a table with a column defined as `time CHAR(6)` with values like `'18:00'` which I need to convert from `char` to `time`. I searched [here](http://www.postgresql.org/docs/current/static/functions-formatting.html), but didn't succeed.
You can use the `::` syntax to cast the value: ``` SELECT my_column::time FROM my_table ```
If the value really is a valid time, you can just cast it: ``` select '18:00'::time ```
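As a cross-check outside the database: the sample literal parses as a time of day with Python's `strptime`, which is the equivalent of what the `::time` cast does server-side (a sketch; the `'%H:%M'` format matches the sample value only):

```python
from datetime import datetime, time

# '18:00' -> a time object, analogous to '18:00'::time in Postgres
t = datetime.strptime("18:00", "%H:%M").time()
print(t)  # 18:00:00
```

If some rows hold malformed values, the cast in Postgres will raise an error for those rows, just as `strptime` raises `ValueError` here.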
How to convert from CHAR to TIME in postgresql
[ "sql", "postgresql", "casting" ]
I have to write a delete statement for the customer table: delete customers that have not put in any orders, using a subquery and the EXISTS operator. I'm having trouble with the proper way this query should be written; this is what I tested, with no luck. Can anyone tell me how to fix this statement? ``` delete customers from dbo.customers WHERE (customerID NOT exist (SELECT customerID FROM dbo.Orders where customerid = ordersid)) ```
Your syntax for using the `exists` operator is slightly off, although the general idea is in the right direction: ``` DELETE FROM dbo.customers WHERE NOT EXISTS (SELECT * FROM dbo.orders WHERE dbo.customers.customer_id = dbo.orders.customer_id) ```
I think your problem lies in the where clause of your subselect. You're comparing customerID to ordersid, which you most likely don't want to do. Try simply removing the where clause from the subselect, because you want to ensure that the customerID is not in the unfiltered Orders table; and instead of a not exists, do a not in. You will also want to make sure that you have customerID indexed for a statement like this. ``` delete customers from dbo.customers where customerID not in (SELECT customerID FROM dbo.Orders) ```
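Either form can be exercised quickly in SQLite; this sketch (hypothetical three-customer data set) uses the `NOT EXISTS` variant and shows that only the customer with no orders is removed:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (customer_id INTEGER PRIMARY KEY);
    CREATE TABLE orders (order_id INTEGER PRIMARY KEY, customer_id INTEGER);
    INSERT INTO customers VALUES (1), (2), (3);
    INSERT INTO orders VALUES (10, 1), (11, 3);   -- customer 2 never ordered
""")
conn.execute("""
    DELETE FROM customers
    WHERE NOT EXISTS (SELECT 1 FROM orders
                      WHERE orders.customer_id = customers.customer_id)
""")
remaining = [r[0] for r in
             conn.execute("SELECT customer_id FROM customers ORDER BY 1")]
print(remaining)  # [1, 3]
```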
Delete statement with a subquery
[ "sql" ]
I can't make out from [the documentation](https://msdn.microsoft.com/en-us/library/ms187928.aspx) why SQL Server parses a text in a format other than the specified style. Regardless of whether I provide text in the expected format: ``` SELECT CONVERT(DATETIME, N'20150601', 112) ``` or incorrect format (for style `113`): ``` SELECT CONVERT(DATETIME, N'20150601', 113) ``` The results are the same: `2015-06-01 00:00:00.000` I would expect the latter to fail to convert the date (correctly). What rules does it employ when trying to convert a `VARCHAR` to `DATETIME`? I.e. why does the latter (incorrect format style) still correctly parse the date? **EDIT**: It seems I've not been clear enough. Style `113` should expect `dd mon yyyy hh:mi:ss:mmm(24h)` but it happily converts values in the format `yyyymmdd` for some reason.
Because the date is in a canonical format, i.e. (20150101), the database engine falls back to converting it implicitly. This is a compatibility feature. If you swapped these around to UK or US date formats, you would receive conversion errors, because they cannot be implicitly converted. EDIT: You could actually tell it to convert it to a pig, and it would still implicitly convert it to datetime: ``` select convert(datetime,'20150425',99999999) select convert(datetime,'20150425',100) select convert(datetime,'20150425',113) select convert(datetime,'20150425',010) select convert(datetime,'20150425',8008135) select convert(datetime,'20150425',000) ``` And proof of concept that this is a compatibility feature: ``` select convert(datetime2,'20150425',99999999) ``` You can still implicitly convert datetime2 objects, but the style must be within the scope of the conversion chart.
The reason the date `N'20150601'` is converted to a valid datetime is that the literal `N'20150601'` is the universal notation for datetime in SQL Server. That means if you state a datetime value in the format `N'yyyymmdd'`, SQL Server knows that it is the universal datetime format and knows how to read it, in which order the parts appear.
Why does SQL Server convert VARCHAR to DATETIME using an invalid style?
[ "sql", "sql-server", "t-sql" ]
I have a loop query with scenario below: * orderqty = increase value 1 each loop * runningstock = decrease value 1 each loop * allocateqty = case when orderqty > 0 and runningstock > 0 then 1 else 0 * loop till runningstock=0 or total allocation=stockqty Query: ``` DECLARE @RESULT TABLE (priority int,partcode nvarchar(50),orderqty int, runningstock int, allocateqty int) DECLARE @ORDER TABLE(priority int,partcode nvarchar(50),orderqty int) DECLARE @STOCK TABLE(partcode nvarchar(50),stockqty int) INSERT INTO @ORDER (priority,partcode,orderqty) VALUES(1,'A',2), (2,'A',10), (3,'A',3), (4,'A',8); INSERT INTO @STOCK(partcode,stockqty) VALUES('A',20); DECLARE @allocateqty int=1 DECLARE @runningstock int=(SELECT stockqty FROM @stock) WHILE @runningstock>=0 BEGIN INSERT INTO @RESULT(priority,partcode,orderqty,runningstock,allocateqty) SELECT priority, partcode, orderqty, @runningstock, CASE WHEN @runningstock > 0 AND orderqty > 0 THEN 1 ELSE 0 END FROM @order SET @runningstock -=1 END SELECT * FROM @Result GO ``` Result: ``` priority partcode orderqty runningstock allocateqty 1 A 2 20 1 2 A 10 20 1 3 A 3 20 1 4 A 8 20 1 1 A 2 19 1 2 A 10 19 1 3 A 3 19 1 4 A 8 19 1 1 A 2 18 1 2 A 10 18 1 3 A 3 18 1 4 A 8 18 1 1 A 2 17 1 2 A 10 17 1 3 A 3 17 1 4 A 8 17 1 1 A 2 16 1 2 A 10 16 1 3 A 3 16 1 4 A 8 16 1 1 A 2 15 1 2 A 10 15 1 3 A 3 15 1 4 A 8 15 1 1 A 2 14 1 2 A 10 14 1 3 A 3 14 1 4 A 8 14 1 1 A 2 13 1 2 A 10 13 1 3 A 3 13 1 4 A 8 13 1 1 A 2 12 1 2 A 10 12 1 3 A 3 12 1 4 A 8 12 1 1 A 2 11 1 2 A 10 11 1 3 A 3 11 1 4 A 8 11 1 1 A 2 10 1 2 A 10 10 1 3 A 3 10 1 4 A 8 10 1 1 A 2 9 1 2 A 10 9 1 3 A 3 9 1 4 A 8 9 1 1 A 2 8 1 2 A 10 8 1 3 A 3 8 1 4 A 8 8 1 1 A 2 7 1 2 A 10 7 1 3 A 3 7 1 4 A 8 7 1 1 A 2 6 1 2 A 10 6 1 3 A 3 6 1 4 A 8 6 1 1 A 2 5 1 2 A 10 5 1 3 A 3 5 1 4 A 8 5 1 1 A 2 4 1 2 A 10 4 1 3 A 3 4 1 4 A 8 4 1 1 A 2 3 1 2 A 10 3 1 3 A 3 3 1 4 A 8 3 1 1 A 2 2 1 2 A 10 2 1 3 A 3 2 1 4 A 8 2 1 1 A 2 1 1 2 A 10 1 1 3 A 3 1 1 4 A 8 1 1 1 A 2 0 0 2 A 10 0 0 3 A 3 0 0 4 A 8 0 0 ``` the 
correct one should be like this: ``` priority partcode orderqty runningstock allocateqty 1 A 2 20 1 2 A 10 19 1 3 A 3 18 1 4 A 8 17 1 1 A 1 16 1 2 A 9 15 1 3 A 2 14 1 4 A 7 13 1 1 A 0 12 0 2 A 8 12 1 3 A 1 11 1 4 A 6 10 1 1 A 0 9 0 2 A 7 9 1 3 A 0 8 0 4 A 5 8 1 1 A 0 7 0 2 A 6 7 1 3 A 0 6 0 4 A 4 6 1 1 A 0 5 0 2 A 5 5 1 3 A 0 4 0 4 A 3 4 1 1 A 0 3 0 2 A 4 3 1 3 A 0 2 0 4 A 2 2 1 1 A 0 1 0 2 A 3 1 1 3 A 0 0 0 ```
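The desired output is a round-robin allocation: each pass hands one unit to every priority that still has demand, until stock runs out. A plain-Python simulation of that rule (quantities taken from the question) makes the target easy to verify:

```python
# priority -> remaining orderqty, from the question's @ORDER rows
orders = {1: 2, 2: 10, 3: 3, 4: 8}
stock = 20
rows = []

while stock > 0 and any(qty > 0 for qty in orders.values()):
    for priority in sorted(orders):
        qty = orders[priority]
        # Allocate one unit if this priority still wants some and stock remains
        allocate = 1 if stock > 0 and qty > 0 else 0
        rows.append((priority, qty, stock, allocate))
        orders[priority] = qty - allocate
        stock -= allocate

print(rows[:4])                  # first pass: one unit to each priority
print(sum(r[3] for r in rows))   # total allocated units
```

The first pass yields `(1, 2, 20, 1), (2, 10, 19, 1), (3, 3, 18, 1), (4, 8, 17, 1)`, matching the expected table, and exactly 20 units (the stock) are allocated in total.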
You can also loop through your priorities and make sure the orderqty is updated accordingly. I'd do something like this: ``` DECLARE @allocatedqty int = 0 DECLARE @allocateqty int = 1 DECLARE @runningstock int = (SELECT stockqty FROM @stock) WHILE @runningstock>=0 BEGIN DECLARE @priority int SELECT TOP 1 @priority = priority FROM @order ORDER BY priority ASC WHILE @priority <= (SELECT MAX(priority) FROM @order) BEGIN DECLARE @orderqty int SELECT @orderqty = orderqty - @allocatedqty FROM @order WHERE priority = @priority SELECT @allocateqty = CASE WHEN @runningstock > 0 AND @orderqty > 0 THEN 1 ELSE 0 END INSERT INTO @RESULT(priority,partcode,orderqty,runningstock,allocateqty) SELECT @priority, partcode, CASE WHEN @orderqty >= 0 THEN @orderqty ELSE 0 END AS orderqty, @runningstock, @allocateqty FROM @order WHERE priority = @priority SET @priority += 1 SET @runningstock = @runningstock - @allocateqty END SET @allocatedqty += 1 IF (@runningstock <= 0) BREAK END SELECT * FROM @Result GO ```
Looping in sql should be avoided when possible. This will give you the described result (except you have 2 errors in your countdown) : ``` DECLARE @ORDER TABLE(priority int,partcode nvarchar(50),orderqty int) INSERT INTO @ORDER (priority,partcode,orderqty) VALUES(1,'A',2), (2,'A',10), (3,'A',3), (4,'A',8); DECLARE @runningstock INT = 20 ;WITH CTE AS ( SELECT priority, partcode, orderqty, @runningstock + 1 - priority runningstock, sign(orderqty) allocateqty FROM @ORDER UNION ALL SELECT priority, partcode, orderqty - sign(orderqty), runningstock - 4, sign(orderqty - sign(orderqty)) FROM CTE WHERE runningstock > 3 ) SELECT priority, partcode, orderqty, runningstock, allocateqty FROM CTE ORDER BY runningstock desc, priority ``` Result: ``` priority partcode orderqty runningstock allocateqty 1 A 2 20 1 2 A 10 19 1 3 A 3 18 1 4 A 8 17 1 1 A 1 16 1 2 A 9 15 1 3 A 2 14 1 4 A 7 13 1 1 A 0 12 0 2 A 8 11 1 3 A 1 10 1 4 A 6 9 1 1 A 0 8 0 2 A 7 7 1 3 A 0 6 0 4 A 5 5 1 1 A 0 4 0 2 A 6 3 1 3 A 0 2 0 4 A 4 1 1 1 A 0 0 0 ```
How to use WHILE LOOP to add value to list with condition, SQL Server 2008
[ "sql", "sql-server", "while-loop" ]
I have two tables: `players` and `history_of_players`. I want to make some kind of history for the players, so if someone updates `'club'` in players table, I want the old data (old club) to be saved into `'history_of_players'` with `'player_ID'`, `'old_club'` and `'year'`(actual year).
Haven't used MySQL for a while, but this should be what you are looking for: ``` delimiter // CREATE TRIGGER historise BEFORE UPDATE ON players FOR EACH ROW BEGIN INSERT INTO history_of_players (player_ID, old_club, year) VALUES (OLD.player_ID, OLD.club, YEAR(CURDATE())); END;// delimiter ; ``` `OLD.club` is the value before the update, and `YEAR(CURDATE())` supplies the current year.
You will probably want to create a trigger on the `players` table, using the `OLD` row values since you want to save the data as it was before the update. For example: ``` DELIMITER $$ CREATE TRIGGER playersHistoryTrigger AFTER UPDATE ON players FOR EACH ROW BEGIN INSERT INTO playersHistory VALUES (OLD.ID, OLD.Name, ...); END$$ DELIMITER ; ```
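The same save-the-old-row-on-update pattern can be tried out end to end in SQLite (trigger syntax differs slightly from MySQL; the year comes from `strftime` here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE players (player_ID INTEGER PRIMARY KEY, club TEXT);
    CREATE TABLE history_of_players (player_ID INTEGER, old_club TEXT, year INTEGER);

    -- Before the club changes, archive the old value
    CREATE TRIGGER historise BEFORE UPDATE OF club ON players
    FOR EACH ROW BEGIN
        INSERT INTO history_of_players
        VALUES (OLD.player_ID, OLD.club, CAST(strftime('%Y', 'now') AS INTEGER));
    END;

    INSERT INTO players VALUES (1, 'Ajax');
    UPDATE players SET club = 'Barcelona' WHERE player_ID = 1;
""")
row = conn.execute(
    "SELECT player_ID, old_club FROM history_of_players").fetchone()
print(row)  # (1, 'Ajax')
```

After the update, the live row holds 'Barcelona' while the history table holds the pre-update 'Ajax'.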
SQL on UPDATE save old data into another table
[ "mysql", "sql", "database", "triggers", "phpmyadmin" ]
I have a table `DRIVER` with a column having the following specification: `BranchPostedAt int not null`. In another table, viz. `CompanyBranch`, I have the following column: `BranchId int not null`. In the second table, viz. `CompanyBranch`, the primary key is set on two columns as under: ``` CompanyId int not null BranchId int not null ``` Is there any way by which I can set a foreign key reference on `dbo.Driver.BranchPostedAt` referencing `dbo.CompanyBranch.BranchId` at the time of creating `dbo.Driver`? Any help?
An alternative would be to [create a trigger](https://msdn.microsoft.com/en-GB/library/ms189799.aspx), but this is a little messy. I prefer the previous poster's answer: create a new identity primary key on CompanyBranch [CompanyBranchId] and reference that as a foreign key. Something like: ``` CREATE TRIGGER trDriver ON DRIVER AFTER INSERT, UPDATE AS BEGIN SET NOCOUNT ON; IF NOT EXISTS(Select * from CompanyBranch cb Inner join inserted i on i.BranchPostedAt = cb.BranchId ) BEGIN RAISERROR ('The BranchPostedAt is not in the CompanyBranch table.', 16, 1); ROLLBACK TRANSACTION; RETURN END END GO ```
No, it is not possible. To create a foreign key, the key that you "reference" to in the other table must be a UNIQUE or PRIMARY KEY constraint. You can consider few options: 1. Make BranchId in CompanyBranch table as Unique 2. Create a new PRIMARY KEY in CompanyBranch - CompanyBranchId, and reference it in your Driver table
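SQLite enforces the same rule, which makes the failure mode easy to see: the child table can be declared, but the first write is rejected because `BranchId` alone has no unique index behind it (a sketch with made-up data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
    CREATE TABLE CompanyBranch (
        CompanyId INTEGER NOT NULL,
        BranchId  INTEGER NOT NULL,
        PRIMARY KEY (CompanyId, BranchId)
    );
    -- References a column that is neither a PK nor UNIQUE by itself
    CREATE TABLE Driver (
        BranchPostedAt INTEGER NOT NULL REFERENCES CompanyBranch (BranchId)
    );
    INSERT INTO CompanyBranch VALUES (1, 10);
""")
try:
    conn.execute("INSERT INTO Driver VALUES (10)")
    rejected = False
except sqlite3.Error:
    # SQLite reports a "foreign key mismatch" error here
    rejected = True
print(rejected)  # True
```

Making `BranchId` unique (or referencing a single-column primary key), as the answer suggests, is what makes the insert succeed.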
how to validate values of a column with column of another table which is not a primary key
[ "sql", "sql-server" ]
Is there a way to insert into a table two values using two "FROM" clauses? I try to insert percentile values - Exposure and Awareness: ``` INSERT INTO tbReport (Exposure, Awareness) SELECT MAX([q_Exposure]) FROM (SELECT TOP 30 PERCENT [q_Exposure] FROM tbQuestions WHERE q_Exposure IS NOT NULL ORDER BY [q_Exposure]), MAX([q_Awareness]) FROM (SELECT TOP 30 PERCENT [q_Awareness] FROM tbQuestions WHERE q_Awareness IS NOT NULL ORDER BY [q_Awareness]); ```
I am pretty sure you can't use two SELECT statements like that; you could do something like this instead: ``` INSERT INTO tbReport (Exposure, Awareness) SELECT Max(tmpQ.Exposure) As MaxExpo, Max(tmpQ.Awareness) As MaxAware FROM (SELECT MAX([q_Exposure]) As Exposure, 0 As Awareness FROM (SELECT TOP 30 PERCENT [q_Exposure] FROM tbQuestions WHERE q_Exposure IS NOT NULL ORDER BY [q_Exposure]) As T1 UNION ALL SELECT 0 As Exposure, MAX([q_Awareness]) As Awareness FROM (SELECT TOP 30 PERCENT [q_Awareness] FROM tbQuestions WHERE q_Awareness IS NOT NULL ORDER BY [q_Awareness]) As T2) As tmpQ; ```
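SQLite has no `TOP n PERCENT`, but the shape of the problem — two independent aggregates feeding one insert — can also be handled with scalar subqueries in the SELECT list, which avoids the `UNION ALL` scaffolding. A sketch with made-up data, approximating the 30% cut-off with `LIMIT`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tbQuestions (q_Exposure INTEGER, q_Awareness INTEGER);
    CREATE TABLE tbReport (Exposure INTEGER, Awareness INTEGER);
    INSERT INTO tbQuestions VALUES (10, 5), (20, 15), (30, 25), (40, 35);
""")
# Each parenthesised SELECT yields one scalar; together they form one row
conn.execute("""
    INSERT INTO tbReport (Exposure, Awareness)
    SELECT
      (SELECT MAX(q_Exposure) FROM
         (SELECT q_Exposure FROM tbQuestions
          WHERE q_Exposure IS NOT NULL ORDER BY q_Exposure LIMIT 2)),
      (SELECT MAX(q_Awareness) FROM
         (SELECT q_Awareness FROM tbQuestions
          WHERE q_Awareness IS NOT NULL ORDER BY q_Awareness LIMIT 2))
""")
print(conn.execute("SELECT * FROM tbReport").fetchone())  # (20, 15)
```

Whether scalar subqueries in the SELECT list are accepted depends on the dialect (they are fine in SQLite and SQL Server; Access is pickier), which is why the `UNION ALL` form above is the safer Access answer.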
I don't think the syntax you mentioned will work, because the typical insert syntax is: ``` INSERT INTO table_name (col_names) VALUES (col_values); ``` Could you give a clearer picture of what you want from the above query?
Insert into a table two values using two "FROM" clauses
[ "sql", "ms-access", "sql-insert" ]
I would like to do a SQL query in SQL Server to get a table: Table1: ``` id t value 1 R 2412 1 Q 98797 2 R 132 2 Q 7589 ``` I need to get table: ``` id R_value Q_value 1 2412 98797 2 132 7589 ``` I used case and when, but I got ``` id R_value Q_value 1 2412 null 1 null 98797 ``` Any help would be appreciated.
Use conditional aggregation: [**SQL Fiddle**](http://sqlfiddle.com/#!6/70e30/1/0) ``` SELECT id, MAX(CASE WHEN t = 'R' THEN value END) AS R_value, MAX(CASE WHEN t = 'Q' THEN value END) AS Q_value FROM YourTable GROUP BY id ```
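The conditional-aggregation pattern is easy to verify with the question's own rows (SQLite sketch; `tbl` is a stand-in table name):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (id INTEGER, t TEXT, value INTEGER)")
conn.executemany("INSERT INTO tbl VALUES (?, ?, ?)",
                 [(1, "R", 2412), (1, "Q", 98797),
                  (2, "R", 132), (2, "Q", 7589)])

rows = conn.execute("""
    SELECT id,
           MAX(CASE WHEN t = 'R' THEN value END) AS R_value,
           MAX(CASE WHEN t = 'Q' THEN value END) AS Q_value
    FROM tbl
    GROUP BY id
    ORDER BY id
""").fetchall()
print(rows)  # [(1, 2412, 98797), (2, 132, 7589)]
```

The `GROUP BY id` is what collapses the two half-filled rows the asker was getting into one row per id; `MAX` simply picks the non-NULL value in each group.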
You can use `max` or `min` with the `group by` to get rid of `null` values and aggregate rows with the same `id`: ``` select id , min(case when t = 'R' then value end) as R_value , min(case when t = 'Q' then value end) as Q_value from tbl group by id ```
SQL query design for getting a table with column depending on a column value in SQL Server
[ "sql", "sql-server", "sql-server-2008", "windows-7", "pivot" ]
**Update 05/18/15** I have a column with customer names listed in it. How can I get a percentage based on one customer per date? For example: ``` CustomerName Date Sam 04/29/15 Joy 04/29/15 Tom 04/29/15 Sam 04/29/15 Oly 04/29/15 Joy 04/29/15 04/29/15 Sam 04/29/15 04/29/15 Sam 04/29/15 Oly 04/29/15 Sam 04/29/15 Oly 04/30/15 Joy 05/01/15 ``` Notice that my column has 12 records, 2 of them blanks, but the blanks won't count towards the percentage, just the ones that have a name. I would like to know what percentage Sam represents of the total (in this case 10 records, so Sam's % will be 50). The query should return: ``` Date Percentage 04/29/15 50 04/30/15 0 05/01/15 0 ``` **Update** I don't really care about the other customers, so let's treat them as one. I just need to know what percentage Sam is of the total list. Any help will be really appreciated. Thanks
Everyone seems to be using subqueries or derived tables. This should perform well and it's easy to follow. Try it out: ``` DECLARE @CustomerName VARCHAR(5) = 'Sam'; SELECT [Date], @CustomerName AS CustomerName, percentage = CAST(CAST(100.0 * SUM(CASE WHEN CustomerName = @CustomerName THEN 1 ELSE 0 END)/COUNT(*) AS INT) AS VARCHAR(20)) + '%' FROM @yourTable WHERE CustomerName != '' GROUP BY [Date] ``` Results: ``` Date CustomerName percentage ---------- ------------ --------------------- 2015-04-29 Sam 50% 2015-04-30 Sam 0% 2015-05-01 Sam 0% ```
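The grouped-percentage logic is easy to reproduce with the question's data in SQLite (`visits` is a hypothetical table name; integer division keeps the result whole, as in the expected output):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE visits (CustomerName TEXT, Date TEXT)")
conn.executemany("INSERT INTO visits VALUES (?, ?)", [
    ("Sam", "2015-04-29"), ("Joy", "2015-04-29"), ("Tom", "2015-04-29"),
    ("Sam", "2015-04-29"), ("Oly", "2015-04-29"), ("Joy", "2015-04-29"),
    ("",    "2015-04-29"), ("Sam", "2015-04-29"), ("",    "2015-04-29"),
    ("Sam", "2015-04-29"), ("Oly", "2015-04-29"), ("Sam", "2015-04-29"),
    ("Oly", "2015-04-30"), ("Joy", "2015-05-01"),
])
rows = conn.execute("""
    SELECT Date,
           100 * SUM(CASE WHEN CustomerName = 'Sam' THEN 1 ELSE 0 END) / COUNT(*)
    FROM visits
    WHERE CustomerName != ''        -- blanks don't count towards the total
    GROUP BY Date
    ORDER BY Date
""").fetchall()
print(rows)  # [('2015-04-29', 50), ('2015-04-30', 0), ('2015-05-01', 0)]
```

On 04/29 there are 10 named rows, 5 of them Sam, giving 50%; the other two dates have no Sam rows at all.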
You could calculate the numbers per person+day in a subquery: ``` select Date , CustomerName , 100.0 * cnt / sum(cnt) over (partition by date) from ( select Date , CustomerName , count(*) cnt from table1 where CustomerName <> '' group by Date , CustomerName ) t1 ``` This prints: ``` Date CustomerName ----------------------- ------------ --------------------------------------- 2015-04-29 00:00:00.000 Joy 20.000000000000 2015-04-29 00:00:00.000 Oly 20.000000000000 2015-04-29 00:00:00.000 Sam 50.000000000000 2015-04-29 00:00:00.000 Tom 10.000000000000 (4 row(s) affected) ```
how to get a percentage depending on a column value?
[ "sql", "t-sql", "sql-server-2008-r2" ]
Given a column namely `a` which is a result of `array_to_string(array(some_column))`, how do I count the occurrences of a value in it? Say I have `'1,2,3,3,4,5,6,3'` as the value of a column. How do I get the number of occurrences of the value `'3'`?
I solved it myself. Thank you for all the ideas! ``` SELECT count(something) FROM unnest( string_to_array( '1,2,3,3,4,5,6,3' , ',') ) something WHERE something = '3' ```
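The approach above — split the string, then count matching elements — is the same thing Python does in two lines, which is a handy cross-check (`count_occurrences` is a hypothetical helper name):

```python
def count_occurrences(csv_string, target):
    # Split on commas and count exact matches, like unnest + WHERE in the SQL
    return sum(1 for item in csv_string.split(",") if item == target)

print(count_occurrences("1,2,3,3,4,5,6,3", "3"))  # 3
```

Note the comparison is on whole elements, so `'3'` does not accidentally match `'13'` or `'31'`, just as the SQL `WHERE something = '3'` filter does not.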
It seems you need to use **unnest** together with **string_to_array**, since `a` is a comma-separated string rather than an array. Try this: ``` select idTable, (select sum(case x when '3' then 1 else 0 end) from unnest(string_to_array(a, ',')) as dt(x)) as counts from yourTable; ```
I want to count the number of occurrences of a value in a string
[ "sql", "arrays", "postgresql", "postgresql-8.4", "unnest" ]
I'm getting the following warning in my Visual Studio 2013 project: > SQL71502 - Procedure has an unresolved reference to object ![Warning SQL71502 ](https://i.stack.imgur.com/gVxdB.png)
This can be resolved by adding a database reference to the master database. 1. Add Database Reference: ![Add DB reference](https://i.stack.imgur.com/45PjH.png) 2. Select the `master` database and click OK: ![Select database](https://i.stack.imgur.com/cYou7.png) 3. Result: ![Result](https://i.stack.imgur.com/mDn2L.png)
You just have to set 71502 in the Build / Suppress Transact-SQL warnings field: [![See picture in the link](https://i.stack.imgur.com/I74Uc.png)](https://i.stack.imgur.com/I74Uc.png)
Warning SQL71502 - Procedure <name> has an unresolved reference to object <name>
[ "sql", "sql-server", "visual-studio", "visual-studio-2015", "database-project" ]
I have two tables: orders and orderProducts. They both have a column called 'order\_id'. orders has a column named 'date\_created'; orderProducts has a column named 'SKU'. I want to SELECT SKUs within a date range. My query so far is: ``` SELECT `SKU` FROM `orderProducts` INNER JOIN orders ON orderproducts.order_id = orders.order_id WHERE orders.order_id in (SELECT id FROM orders WHERE date_created BETWEEN '2014-10-01' AND '2015-03-31' ORDER BY date_created DESC) ``` The query runs but it returns nothing. What am I missing here?
Try putting the `date` condition in the `where` clause; there is no need for the subquery: ``` select op.`SKU` from `orderProducts` op join `orders` o using(`order_id`) where o.`date_created` between '2014-10-01' and '2015-03-31' ```
Try using a **between** clause in your **where** condition for the date; there is no need to use a subquery. ``` SELECT `SKU` FROM `orderProducts` INNER JOIN orders ON orderproducts.order_id = orders.order_id WHERE date_created BETWEEN '2014-10-01' AND '2015-03-31' ORDER BY date_created DESC; ```
Results from query as argument for WHERE statement in MySQL
[ "mysql", "sql" ]
I want to select the day after tomorrow's date in SQL. That is, I want to make a query which selects the date two days ahead. If I select today's date from the calendar (29-04-2015) then it should show the date in the other textbox as (01-05-2015). I want a query which retrieves the day after tomorrow's date. What I have done so far is below: ``` SELECT VALUE_DATE FROM DLG_DEAL WHERE VALUE_DATE = GETDATE()+2 ``` Thanks in advance
Note that if you have a date field containing time information, you will need to strip the time part using [DATEADD](https://msdn.microsoft.com/fr-fr/library/ms186819.aspx): ``` dateadd(d, 0, datediff(d, 0, VALUE_DATE)) ``` To compare 2 dates ignoring the time part you could just use [DATEDIFF](https://msdn.microsoft.com/fr-fr/library/ms189794.aspx): ``` SELECT VALUE_DATE FROM DLG_DEAL WHERE datediff(d, VALUE_DATE, getdate()) = -2 ``` or ``` SELECT VALUE_DATE FROM DLG_DEAL WHERE datediff(d, getdate(), VALUE_DATE) = 2 ```
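The underlying date arithmetic — "today plus two days" — looks like this outside the database (Python sketch; `day_after_tomorrow` is a hypothetical helper name):

```python
from datetime import date, timedelta

def day_after_tomorrow(d):
    # GETDATE() + 2 in the T-SQL above is just "add two days"
    return d + timedelta(days=2)

print(day_after_tomorrow(date(2015, 4, 29)))  # 2015-05-01
```

Using a calendar-aware type handles the month rollover (April 29 + 2 = May 1) automatically, which is exactly the case in the question.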
Try like this: ``` SELECT VALUE_DATE FROM DLG_DEAL WHERE VALUE_DATE = convert(varchar(11),(Getdate()+2),105) ``` **[SQL FIDDLE DEMO](http://sqlfiddle.com/#!6/9eecb/5266)**
retrieve day after tomorrow date query
[ "sql", "sql-server" ]
In SQL, I need to calculate the age of horses that have died, then multiply it by three to get what would be their human age. There is a column called 'BORN' and a column called 'PASSED', each containing only a year. So I need to calculate the age of the horses from the BORN column (year they were born) to the PASSED column (year they passed away) to get their age, then multiply that number by three. Then I just need to list their horse\_id, name, and age in human years, but I know how to do that select statement myself.
Try this code: ``` SELECT horse_id, name, ((PASSED-BORN) * 3) As humanAge FROM horses WHERE PASSED IS NOT NULL; ```
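The arithmetic itself is just a subtraction and a multiply; a quick check with hypothetical years:

```python
def human_age(born, passed):
    # (PASSED - BORN) * 3, as in the query above
    return (passed - born) * 3

print(human_age(2005, 2015))  # 30
```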
You can also use the following, if BORN and PASSED are date columns: ``` SELECT horse_id, name, (DATEDIFF(YEAR, BORN, PASSED) * 3) As humanAge FROM horses WHERE PASSED IS NOT NULL; ```
Need to calculate the age of horses
[ "mysql", "sql" ]
I am trying to build a query that will load records from and to a specific date, comparing 2 fields - the start\_time and the end\_time. ``` SELECT start_time ,end_time ,DATEDIFF(end_time, start_time) AS DiffDate FROM my_tbl WHERE start_time >= '2015-04-27 00:00:00' AND end_time <= '2015-04-28 00:00:00' AND end_time >= '2015-04-27 00:00:00' AND DiffDate < 100 LIMIT 1000; ``` Unfortunately DiffDate always returns 0. The ideal scenario was to calculate the difference between start\_time and end\_time when inserting the end\_time, but I can't make any changes to the database. What am I doing wrong here? Even if DiffDate was working, would it be considered a good solution?
From the condition in the where clause it appears that you are trying to get data for the same date; however, using `datediff` for the same day always results in 0: ``` mysql> select datediff('2015-04-27 12:00:00','2015-04-27 00:00:00') as diff ; +------+ | diff | +------+ | 0 | +------+ 1 row in set (0.03 sec) ``` You may need other means of calculation, perhaps using [**timestampdiff**](https://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_timestampdiff): ``` mysql> select timestampdiff(minute ,'2015-04-27 00:00:00','2015-04-27 12:00:00') as diff ; +------+ | diff | +------+ | 720 | +------+ 1 row in set (0.00 sec) ``` Also, you are using an alias in the where clause, which is not allowed; you have to move that condition to a having clause: ``` SELECT start_time ,end_time ,timestampdiff(minute,start_time,end_time) AS DiffDate FROM my_tbl WHERE start_time >= '2015-04-27 00:00:00' AND end_time <= '2015-04-28 00:00:00' AND end_time >= '2015-04-27 00:00:00' having DiffDate < 100 LIMIT 1000; ```
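The distinction the answer draws — whole days apart versus minutes apart — can be checked against Python's `datetime` subtraction using the same sample timestamps:

```python
from datetime import datetime

start = datetime(2015, 4, 27, 0, 0, 0)
end = datetime(2015, 4, 27, 12, 0, 0)

# Whole calendar days apart (what MySQL DATEDIFF counts)
days = (end.date() - start.date()).days
# Minutes apart (what TIMESTAMPDIFF(MINUTE, ...) counts)
minutes = int((end - start).total_seconds() // 60)
print(days, minutes)  # 0 720
```

Twelve hours on the same calendar day is 0 days but 720 minutes, which is exactly why the asker's `DATEDIFF` was always 0.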
Coming from MS SQL I was using DATEDIFF but the solution is: ``` SELECT start_time ,end_time ,TIMESTAMPDIFF(SECOND,start_time,end_time) AS DiffDate FROM my_tbl WHERE start_time >= '2015-04-27 00:00:00' AND end_time <= '2015-04-28 00:00:00' AND end_time >= '2015-04-27 00:00:00' AND DiffDate < 100 LIMIT 1000; ``` I would like to know if there is a better solution from that.
MySQL query with DATEDIFF
[ "mysql", "sql", "datediff" ]
I've researched this problem extensively for a pretty long time; so far nothing similar has come up. **tl;dr below** Here's my problem. I'm trying to create a `SELECT` statement in SQLite with conditional filtering that works somewhat like a function. Sample pseudo-code below: ``` SELECT col_date, col_hour FROM table1 JOIN table2 ON table1.col_date = table2.col_date AND table1.col_hour = table2.col_hour AND table1.col_name = table2.col_name WHERE IF table2.col_name = "a" THEN {filter these records further such that its table2.col_volume >= 600} AND IF table2.col_name = "b" THEN {filter these records further such that its table2.col_volume >= 550} BUT {if any of these two statements are not met completely, do not get any of the col_date, col_hour} ``` \*I know SQLite does not support the `IF` statement, but this is just to demonstrate my intention. Here's what I've been doing so far. According to [this](http://weblogs.sqlteam.com/jeffs/archive/2003/11/14/513.aspx) article, it is possible to transform `CASE` clauses into boolean logic, as you will see here: ``` SELECT table1.col_date, table1.col_hour FROM table1 INNER JOIN table2 ON table1.col_date = table2.col_date AND table1.col_hour = table2.col_hour AND table1.col_name = table2.col_name WHERE ((NOT table2.col_name = "a") OR table2.col_volume >= 600) AND ((NOT table2.col_name = "b") OR table2.col_volume >= 550) ``` With this syntax, the problem is that I still get `col_date`s and `col_hour`s where at least one `col_name`'s `col_volume` for that specific `col_date` and `col_hour` did not meet its requirement. (E.g. I still get a record entry with `col_date = 2010-12-31` and `col_hour = 5`, but `col_name = "a"`'s `col_volume = 200` while `col_name = "b"`'s `col_volume = 900`. This date and hour should not appear in the query results, because "a" has a volume which is not >= 600, even though "b" met its volume requirement of >= 550.)
## For tl;dr If all these are getting confusing, here are sample tables with the sample correct query results so you can just forget everything above and go right on ahead: table1 ``` col_date,col_hour,col_name,extra1,extra2 2010-12-31,4,"a","hi",1 2010-12-31,4,"a","he",1 2010-12-31,4,"a","ho",1 2010-12-31,5,"a","hi",1 2010-12-31,5,"a","he",1 2010-12-31,5,"a","ho",1 2010-12-31,6,"a","hi",1 2010-12-31,6,"a","he",1 2010-12-31,6,"a","ho",1 2010-12-31,4,"b","hi",1 2010-12-31,4,"b","he",1 2010-12-31,4,"b","ho",1 2010-12-31,5,"b","hi",1 2010-12-31,5,"b","he",1 2010-12-31,5,"b","ho",1 2010-12-31,6,"b","hi",1 2010-12-31,6,"b","he",1 2010-12-31,6,"b","ho",1 ``` table2 ``` col_date,col_hour,col_name,col_volume 2010-12-31,4,"a",750 2010-12-31,4,"b",750 2010-12-31,5,"a",200 2010-12-31,5,"b",900 2010-12-31,6,"a",700 2010-12-31,6,"b",800 ``` The correct query results (with `col_volume` filters: 600 for 'a' and 550 for 'b') should be: ``` 2010-12-31,4 2010-12-31,6 ```
I think I have an answer (BIG HELP to **@mucio**'s "HAVING" clause; looks like I have to brush up on that). Apparently the approach is a simple sub-query which the outer query joins on. It's a work-around (not really a direct answer to the problem I posted; I had to reorganize my program flow with this approach), but it got the job done. Here's the sample code: ``` SELECT table1.col_date, table1.col_hour FROM table1 INNER JOIN ( SELECT col_date, col_hour FROM table2 WHERE (col_name = 'a' AND col_volume >= 600) OR (col_name = 'b' AND col_volume >= 550) GROUP BY col_date, col_hour HAVING COUNT(1) = 2 ) tb2 ON table1.col_date = tb2.col_date AND table1.col_hour = tb2.col_hour GROUP BY table1.col_date, table1.col_hour ```
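The filter-then-require-both logic of that inner query can be replayed directly against the tl;dr data (SQLite sketch, using only `table2` since it decides which date/hour pairs qualify):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table2 (col_date TEXT, col_hour INTEGER,
                         col_name TEXT, col_volume INTEGER);
    INSERT INTO table2 VALUES
        ('2010-12-31', 4, 'a', 750), ('2010-12-31', 4, 'b', 750),
        ('2010-12-31', 5, 'a', 200), ('2010-12-31', 5, 'b', 900),
        ('2010-12-31', 6, 'a', 700), ('2010-12-31', 6, 'b', 800);
""")
rows = conn.execute("""
    SELECT col_date, col_hour
    FROM table2
    WHERE (col_name = 'a' AND col_volume >= 600)
       OR (col_name = 'b' AND col_volume >= 550)
    GROUP BY col_date, col_hour
    HAVING COUNT(1) = 2     -- both 'a' and 'b' must have survived the filter
    ORDER BY col_hour
""").fetchall()
print(rows)  # [('2010-12-31', 4), ('2010-12-31', 6)]
```

Hour 5 drops out because only 'b' passes its threshold there, matching the expected query results in the question.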
try this: ``` SELECT table1.col_date, table1.col_hour FROM table1 INNER JOIN table2 ON table1.col_date = table2.col_date AND table1.col_hour = table2.col_hour AND table1.col_name = table2.col_name WHERE EXISTS ( -- here I'm applying the filter logic select col_date, col_hour from table2 sub where ((sub.col_name = 'a' and sub.col_volume >= 600) or (sub.col_name = 'b' and sub.col_volume >= 550)) and sub.col_date = table2.col_date and sub.col_hour = table2.col_hour group by col_date, col_hour having count(1) = 2 -- I assume there could be only two rows: -- one for 'a' and one for 'b' ) ``` Note the parentheses around the two OR branches, so that the correlation conditions apply to both of them; correlating on `col_name` as well would restrict the group to a single name and make `count(1) = 2` unreachable. You can check this demo in [SQLfiddle](http://sqlfiddle.com/#!7/62df3/8) Last thing: you show the same columns from `Table1` that you use for the join, but I imagine this is just for the sake of this example
SQL function-like WHERE statement
[ "sql", "sqlite" ]
For example, let's say I have `5/2/2015` as my end point, and I want to know the `max(Thursday)` before my end point. How can I end up at `4/30/2015`? Also, for this I do not have a column to select from, so my WHERE clause needs to specify '5/2/2015'. SQL Server 2012.
You can try using this While loop to achieve what you're after: ``` Declare @Date date = '2015-05-02' While DATENAME(WEEKDAY,@Date) <> 'Thursday' begin Set @Date = DATEADD(dd, -1,@Date) end Select @Date ```
You can use the WEEKDAY function ``` SELECT TOP 1 column FROM table WHERE WEEKDAY(column) = 5 AND column<'2015-05-02' ORDER BY column DESC ```
SQL - How to select a date by day before a specific datetime?
[ "", "sql", "sql-server-2012", "" ]
I need to find the frequency of a string in a column, irrespective of its case and any white space. For example, if my string is `My Tec Bits` and the values occur in my table as shown below:

```
061 MYTECBITS 12123 
102 mytecbits 24324 
103 MY TEC BITS 23432 
247 my tec bits 23243 
355 My Tec Bits 23424 
454 My Tec BitS 23432 
```

Then the output should be `6`, because, with white space removed and case ignored, all those strings are identical. Is there any `grep()` equivalent in `SQL` as there is in `R`?
If your concern is only with the **SPACE** and the **CASE**, then you need two functions: * **REPLACE** * **UPPER/LOWER** For example, ``` SQL> WITH DATA AS( 2 SELECT 'MYTECBITS' STR FROM DUAL UNION ALL 3 SELECT 'mytecbits' STR FROM DUAL UNION ALL 4 SELECT 'MY TEC BITS' STR FROM DUAL UNION ALL 5 SELECT 'my tec bits' STR FROM DUAL UNION ALL 6 SELECT 'MY TEC BITS' STR FROM DUAL UNION ALL 7 SELECT 'MY TEC BITS' STR FROM DUAL 8 ) 9 SELECT UPPER(REPLACE(STR, ' ', '')) FROM DATA 10 / UPPER(REPLA ----------- MYTECBITS MYTECBITS MYTECBITS MYTECBITS MYTECBITS MYTECBITS 6 rows selected. SQL> ``` > Then, the output should be 6 So, based on that, you need to use it in the **filter predicate** and **COUNT(\*)** the rows returned: ``` SQL> WITH DATA AS( 2 SELECT 'MYTECBITS' STR FROM DUAL UNION ALL 3 SELECT 'mytecbits' STR FROM DUAL UNION ALL 4 SELECT 'MY TEC BITS' STR FROM DUAL UNION ALL 5 SELECT 'my tec bits' STR FROM DUAL UNION ALL 6 SELECT 'MY TEC BITS' STR FROM DUAL UNION ALL 7 SELECT 'MY TEC BITS' STR FROM DUAL 8 ) 9 SELECT COUNT(*) FROM DATA 10 WHERE UPPER(REPLACE(STR, ' ', '')) = 'MYTECBITS' 11 / COUNT(*) ---------- 6 SQL> ``` **NOTE** The `WITH` clause is only to build the **sample table** for **demonstration** purpose. In our actual query, remove the entire WITH part, and use your **actual table\_name** in the **FROM clause**. So, you just need to do: ``` SELECT COUNT(*) FROM YOUR_TABLE WHERE UPPER(REPLACE(STR, ' ', '')) = 'MYTECBITS' / ```
You could use something like

```
UPPER(REPLACE(userString, ' ', '')) 
```

to compare values in upper case with the white space removed.
SQL: match a string pattern irrespective of its case, whitespaces in a column
[ "", "mysql", "sql", "regex", "agrep", "" ]
I have the following link: `/ABCDEF/ABCDEF/ABC/8921/154535` I need to insert only the last 6 numbers i.e. `154535` in a column in a table.
Try below code: ``` Declare @s varchar(100) = '/ABCDEF/ABCDEF/ABC/8921/154535' select REVERSE(SUBSTRING(REVERSE(@s),0,CHARINDEX('/',REVERSE(@s)))) ```
You are assigning multiple rows to a variable, so you get the error `returned more than 1 query`.

Try the simple solution below:

```
select DISTINCT REVERSE(SUBSTRING(REVERSE(@s),0,CHARINDEX('/',REVERSE(@s)))) 
from [dbo].[No_of_Views] 
```

And if you want to `insert`, then:

```
INSERT INTO table_name --your table name
select DISTINCT REVERSE(SUBSTRING(REVERSE(@s),0,CHARINDEX('/',REVERSE(@s)))) 
from [dbo].[No_of_Views] 
```
Removing a specific set of data from a link
[ "", "sql", "sql-server", "" ]
I have an issue in our database (AS400 - DB2): in one of our tables all the rows were deleted. I do not know if it was a program or SQL that a user executed. All I know is it happened at +- 3am in the morning. I did check for any scheduled jobs at that time. We managed to get the data back from backups, but I want to investigate what deleted the records, or which user.

Are there any logs on the AS400 for physical tables to check what SQL executed, and when, on a specified table? This will help me determine what caused this. I tried checking in System i Navigator but could not find any logs...

Is there a way of getting transactional data on a table using System i Navigator or the green screen? And if so, can I get the SQL that executed in that timeline?

Any help would be appreciated.
There was no mention of how the time was inferred\determined, but for lack of journaling, I would suggest a good approach is immediately to gather information about the file and member; DSPOBJD for both \*SERVICE and \*FULL, DSPFD for \*ALL, DMPOBJ, and perhaps even a copy of the row for the TABLE from the catalog [to include the LAST\_ALTERED\_TIMESTAMP for ALTEREDTS column of SYSTABLES or the based-on field DBXATS from the QADBXREF]. Gathering those, worthwhile almost only if done before any other activity [esp. before any recovery activity], can help establish the time of the event and perhaps allude to what was the event; most timestamps are reflective of only of the most recent activity against the object [rather than as a historical log], so any recovery activity is likely to effect loss of any timestamps that would be reflective of the prior event\activity. Even if there was no journal for the file and nothing in the plan cache, there may have been [albeit unlikely] an active SQL Monitor. An active monitor should be available visible somewhere in the iNav GUI as well. I am not sure of the visibility of a monitor that may have been active in a prior time-frame. Similarly despite lack of journaling, there may be some system-level object or user auditing in effect for which the event was tracked either as a command-string or as an action on the file.member; combined with the inferred timing, all audit records spanning just before until just after can be reviewed. Although there may have been nothing in the scheduled jobs, the History Log (DSPLOG) since that time may show jobs that ended, or [perhaps soon] prior to that time show jobs that started, which are more likely to have been responsible. In my experience, often the name of the job may be indicative; for example the name of the job as the name of the file, perhaps due only to the request having been submitted from PDM. 
Any spooled [or otherwise still available] joblogs could be reviewed for possible reference to the file and\or member name; perhaps a completion message for a CLRPFM request. If the action may have been from a program, the file may be recorded as a reference-object such that output from DSPPGMREF may reveal programs with the reference, and any [service] program that is an SQL program could have their embedded SQL statements revealed with PRTSQLINF; the last-used for those programs could be reviewed for possible matches. Note: module and program sources can also be searched, but there is no way to know into what name they were compiled or into what they may have been bound if created only temporarily for the purpose of binding.
Using System i Navigator, expand Databases. Right click on your system database. Select SQL Plan Cache-> Show Statements. From here, you can filter based on a variety of criteria.
How to get SQL executed or transaction history on a Table (AS400) DB2 for IBM i
[ "", "sql", "db2", "ibm-midrange", "" ]
I am trying to cast my result as varchar but it keeps giving me this error, **Code** ``` CREATE FUNCTION [dbo].[GetQuality](@FruitID VARCHAR(200)) RETURNS varchar(200) AS BEGIN DECLARE @Result varchar(200); WITH latest AS ( SELECT * FROM (SELECT TOP 1 * FROM Fruits_Crate WHERE FruitID like @FruitID ORDER BY ExpiryDate DESC) a ), result AS ( SELECT latest.ExpiryDate as LatestExpiryDate, latest.Size as LatestSize, latest.Weight as LatestWeight, previous.ExpiryDate as PreviousExpiryDate, previous.Size as PreviousSize, previous.Weight as PreviousWeight, DATEDIFF(DAY, latest.ExpiryDate, CURRENT_TIMESTAMP) As LastMovementInDays, COALESCE((Select MovementInMonths from Fruits_Settings Where PolicyID like '178'),3) * 30 AS MM, CASE SIGN((latest.Weight * latest.Size) - (previous.Weight * previous.Size)) WHEN 1 THEN 'Increased' WHEN 0 THEN 'Static' WHEN -1 THEN 'Decreased' ELSE 'Static' END AS Movement FROM (SELECT TOP 1 * FROM (SELECT TOP 2 * FROM Fruits_Crate WHERE FruitID like @FruitID ORDER BY ExpiryDate DESC) x ORDER BY ExpiryDate) previous FULL OUTER JOIN latest ON previous.FruitID = latest.FruitID ) SELECT @Result = CAST( CASE WHEN LastMovementInDays <= MM AND (Movement = 'Increased' OR Movement = 'Decreased') THEN Movement ELSE 'Static' END) AS VARCHAR FROM result; RETURN @Result END ``` **Error** > Msg 1035, Level 15, State 10, Line 25 Incorrect syntax near 'CAST', > expected 'AS'. Issue is with Cast I am trying to use here.
You don't need to specify the CAST:

```
SELECT @Result = CASE
                   WHEN LastMovementInDays <= MM
                        AND (Movement = 'Increased' OR Movement = 'Decreased')
                   THEN Movement
                   ELSE 'Static'
                 END
FROM result;
```

Note that `THEN Movement` (the column) is intended here, not the string literal `'Movement'`. Also note that the default length for a `CAST` to `varchar` is 30 characters; since you have defined `@Result` as `VARCHAR(200)`, a length-less `CAST` can silently truncate longer values. Run for example:

```
SELECT CAST('the quick brown fox jumped over the lazy dog' AS VARCHAR)
```
You have to put `AS VARCHAR` in `CAST()` function as below ``` SELECT @Result = CAST( CASE WHEN LastMovementInDays <= MM AND (Movement = 'Increased' OR Movement = 'Decreased') THEN Movement ELSE 'Static' END AS VARCHAR) FROM result; ```
SQL Case not working
[ "", "sql", "sql-server-2008", "function", "" ]
I have two tables, **A and B**. In table A (Oracle SQL), a unique column (not a primary key) **code** may have some records in table B.

**Example:** code "A" has 3 entries, code "B" has 2 entries and code "C" has 0 entries in table B. I want the query to display each code and its count of records in table B:

```
A 3
B 2
C 0
```

But I am not getting the code with zero records in table B, **i.e. C 0.** Can anyone help me with the query?
`GROUP BY` with `LEFT JOIN` solution: ``` select a.code, a.name, count(b.code) from A a LEFT JOIN B b ON a.code = b.code group by a.code, a.name ``` Correlated sub-query solution: ``` select a.code, a.name, (select count(*) from B b where a.code = b.code) from A a ``` Perhaps you need to do `SELECT DISTINCT` here.
You are doing something incorrectly. This works for me: ``` select A.code, Count(B.code) from A left join B on A.code = b.code group by A.code ``` Fiddle: <http://sqlfiddle.com/#!4/f13e1/2>
display Count of one column from another table even when the count is zero
[ "", "sql", "oracle", "left-join", "nvl", "" ]
I have the following table and data:

```
CREATE TABLE customer_wer(
    id_customer NUMBER,
    name VARCHAR2(10),
    surname VARCHAR2(20),
    date_from DATE,
    date_to DATE NOT NULL,
    CONSTRAINT customer_wer_pk PRIMARY KEY (id_customer, date_from));

INSERT INTO customer_wer VALUES (4, 'Karolina', 'Komuda', '01-JAN-00', '31-MAR-00');
INSERT INTO customer_wer VALUES (4, 'Karolina', 'Komuda', '01-APR-00', '30-JUN-00');
INSERT INTO customer_wer VALUES (4, 'Karolina', 'Komuda', '15-JUN-00', '30-SEP-00');
INSERT INTO customer_wer VALUES (4, 'Karolina', 'Komuda', '01-OCT-00', '31-DEC-00');
INSERT INTO customer_wer VALUES (4, 'Karolina', 'Komuda', '01-JAN-01', '31-MAR-01');
INSERT INTO customer_wer VALUES (4, 'Karolina', 'Komuda', '01-APR-01', '30-JUN-01');
INSERT INTO customer_wer VALUES (4, 'Karolina', 'Komuda', '01-JUL-01', '5-OCT-01');
INSERT INTO customer_wer VALUES (4, 'Karolina', 'Komuda', '01-OCT-01', '31-DEC-01');
```

I need a `SELECT` query to find the records with overlapping dates. In the example above, I should get four records in the result:

```
number
2
3
7
8
```

Thank you in advance. I am using Oracle DB.
Try this:

```
select *
from t t1 join t t2
on (t1.datefrom > t2.datefrom and t1.datefrom < t2.dateto)
or (t1.dateto > t2.datefrom and t1.dateto < t2.dateto)
```

Thank you for this example. After modification it is working:

```
SELECT *
FROM customer_wer k
JOIN customer_wer w
ON k.id_customer = w.id_customer
WHERE (k.date_from > w.date_from AND k.date_from < w.date_to)
OR (k.date_to > w.date_from AND k.date_to < w.date_to);
```
The earlier answer does not account for situations where t2 lies entirely within t1:

```
select *
from t t1 join t t2
on (t1.datefrom > t2.datefrom and t1.datefrom < t2.dateto )
or (t1.dateto > t2.datefrom and t1.dateto < t2.dateto )
or (t1.dateto > t2.dateto and t1.datefrom < t2.datefrom)
```
Find records with overlapping date range in SQL
[ "", "sql", "oracle", "" ]
I am a newbie to SQL. I have these tables:

**Table Individual:**

```
IndividualId IndividualName
Ind1 Mark
Ind2 Paul
ind3 Spencer
ind4 Mike
ind5 Hilary
```

**Table Agent:**

```
AgentId IndId(foreignKey)
Agent1 ind1
Agent1 ind2
Agent2 ind1
Agent3 ind4
```

**Transaction Table** (a transaction can be made via an agent or by the individual himself):

```
TranId IndId(foreignKey/Never Null) AgentId(foreignKey) TranAmount
Tran1 ind1 Agent1 $100
Tran2 ind1 NULL $500
tran3 ind3 NULL $200
tran4 ind4 Agent3 $50
tran5 ind4 NULL $30
```

The result should look like this (if a transaction is associated with an agent, display it on the same line; otherwise display it on a separate line):

```
IndividualId IndividualName AgentId TranId TranAmount
Ind1 Mark Agent1 Tran1 $100
Ind1 Mark Agent2 NULL NULL
Ind1 Mark NULL Tran2 $500
Ind2 Paul Agent1 NULL NULL
Ind3 Spencer NULL Tran3 $200
ind4 Mike Agent3 tran4 $50
ind4 Mike NULL tran5 $30
ind5 Hilary NULL NULL NULL
```

As per some comments, I used the following query:

```
SELECT DISTINCT IndividualId, IndividualName, TranId, Agent.AgentId, Amount
FROM Individual
LEFT JOIN Transactions ON Individual.IndividualId = Transactions.IndId
LEFT JOIN Agent ON Transactions.AgentId = Agent.AgentId
```

which gave the results below, which are not correct:

```
IndividualId IndividualName TranId AgentId Amount
ind1 Mark Tran1 Agent1 100
ind1 Mark Tran2 null 500
ind2 Paul null null null
ind3 Spencer Tran3 null 200
ind4 Mike Tran4 Agent3 50
ind4 Mike Tran5 null 30
ind5 Hilary null null null
```
Try this:

```
SELECT DISTINCT IndividualId, IndividualName, Agent.AgentId, TranId, Amount
FROM Individual
LEFT JOIN Agent
ON Individual.IndividualId = Agent.IndId
LEFT JOIN Transactions
ON Individual.IndividualId = Transactions.IndId
AND Transactions.AgentId = Agent.AgentId

UNION

SELECT DISTINCT IndividualId, IndividualName, NULL AS AgentId, TranId, Amount
FROM Individual
LEFT JOIN Transactions
ON Individual.IndividualId = Transactions.IndId
AND Transactions.AgentId IS NULL
```
I think the issue is that you are joining the Transaction table to the Agent table even though not every transaction had an agent; this means your result set will not include transactions without agents. Since the Transaction table itself already has an AgentId column, you shouldn't need to union results to see the agent for the transaction.

I'm making the assumption that the columns in the expected result have the following associations (since the same columns appear in more than one table, the assumption is important): IndividualId and IndividualName are from the Individual table, and TranId, AgentId, and TranAmount are from the Transaction table.

**I am only including a join to the Agent table in case you decide you want the agent name returned**; otherwise you don't even need it.

```
SELECT I.IndividualId, I.IndividualName, T.AgentId, T.TranId, T.TranAmount
FROM Individual I
LEFT JOIN Agent A ON I.IndividualId = A.IndId
LEFT JOIN Transactions T ON I.IndividualId = T.IndId
```

Hope this is what you are looking for!
Stuck with SQL joins
[ "", "sql", "sql-server", "join", "" ]
How can I insert the text of a procedure stored in ALL_SOURCE.TEXT into a record? This part of the code gives me the error "missing comma":

```
execute immediate ' insert into results(SrcProcedure) values(' '' || tabela_temporare(1).text || '' ')';
```

I think this is due to the unescaped characters contained in the text of the stored procedure, but I can't find a way to fix it.
If that text field contains actual stored PL/SQL code, the only character likely to cause problems is the single quote. So replace each instance with two single quotes, which will then get parsed as a properly escaped single quote. Of course, you have to escape the quotes in the replace call itself to get it to work. Try this:

```
execute immediate 'insert into results(SrcProcedure) values('''
    || replace(tabela_temporare(1).text, '''', '''''')
    || ''')';
```
If you want to have the string delimiting character (apostrophe: ') inside a Varchar, you must use two consecutive apostrophes and then another one to end the string. It gives you an error because you ended the string first. Try this piece of code: ``` EXECUTE IMMEDIATE 'INSERT INTO results(SrcProcedure) values(''' || tabela_temporare(1).text || ''')'; ``` EDIT: Better use Bind Variables, see my sample Code: ``` CREATE TABLE RESULTS (SRCPROCEDURE VARCHAR2 (300)); DECLARE v_SQL VARCHAR2(4000) := 'INSERT INTO results(SrcProcedure) VALUES(:1)'; BEGIN EXECUTE IMMEDIATE v_SQL USING 'something2'; END; ``` I hope it works now! :)
Inserting the text of a stored procedure into a record
[ "", "sql", "oracle", "stored-procedures", "plsql", "" ]
I am a noob at Spring, but I have not found any material on this type of injection.

Having a **Queries.xml**:

```
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
    <entry key="sql.accountdao.select">
    </entry>
    <entry key="sql.accountdao.insert">
    </entry>
</properties>
```

Also have a **properties-config.xml**, which is containing the queries.xml:

```
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:util="http://www.springframework.org/schema/util"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans-3.1.xsd
       http://www.springframework.org/schema/util
       http://www.springframework.org/schema/util/spring-util-3.1.xsd">

    <bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_OVERRIDE"/>
        <property name="locations">
            <array>
                <value>classpath:sql/Queries.xml</value>
            </array>
        </property>
    </bean>
</beans>
```

And naturally have a dao-config.xml, where I want to reference the Queries.xml:

```
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean id="namedParamTemplate" class="org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate">
        <constructor-arg ref="dataSource"/>
    </bean>

    <!-- Either annotations or xml will be correct -->
    <bean id="accountDao" class="dao.impl.AccountDAOImpl">
        <property name="namedParameterJdbcTemplate" ref="namedParamTemplate" />
        <property name="insertSQL" ????? />
        <property name="selectSQL" ????
/>
    </bean>
</beans>
```

I tried this way also, in **AccountDAOImpl**.java:

```
public class AccountDAOImpl implements AccountDAO {

    private NamedParameterJdbcTemplate namedParameterJdbcTemplate;

    @Value("${sql.accountdao.select}")
    private String selectSql;

    @Value("${sql.accountdao.insert}")
    private String insertSql;

    @Required
    public void setSelectSql(String selectSql) {
        this.selectSql = selectSql;
    }

    @Required
    public void setInsertSql(String insertSql) {
        this.insertSql = insertSql;
    }
```

...

Annotation-based injection just did not work. Does anybody have any idea? Both files are on the classpath via web.xml.

**Exception**:

```
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'accountDao' defined in class path resource [META-INF/spring/root/dao-config.xml]: Initialization of bean failed; nested exception is org.springframework.beans.factory.BeanInitializationException: Properties 'insertSql' and 'selectSql' are required for bean 'accountDao'
```
The solution was simple: use `${}` placeholders in **config-dao.xml**:

```
<property name="insertSql" value="${sql.accountdao.insert}" />
<property name="selectSql" value="${sql.accountdao.select}" />
```
These XML files must be declared in web.xml:

```
<context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>
        /WEB-INF/[Your DIR]/[Your XMLFiles].xml
    </param-value>
</context-param>
```
Spring inject xml values to properties-config.xml then to a bean
[ "", "sql", "xml", "spring", "dependency-injection", "" ]
I have this query I can't seem to perfect. Any assistance would be greatly appreciated. Using SQL Server 2008 R2. I am attempting to show all of the communities in the results even if there are no records for that community. ``` SELECT COALESCE(SUM(D.FulfillmentAmt),0) as Approved, DB.Budget, DC.CommunityName FROM Donations D LEFT JOIN DCommunity DC ON D.Market = DC.CID LEFT JOIN DBudget DB ON D.Market = DB.Community WHERE D.StatusId = '1' AND DB.[Year] = year(getdate()) GROUP BY DC.CommunityName, DB.Budget ORDER BY DC.CommunityName ``` Currently displays the following results: ``` Approved | Budget | CommunityName 10 | 2000 | City1 2400 | 3000 | City2 2358 | 5000 | City3 1855 | 2000 | City5 2200 | 3000 | City6 5600 | 7000 | City8 ``` As you can see it is missing City4 and City7 because there are no records within the dbo.Donations table for those cities. I would still like those two to show up with the Approved amount of 0 even if they have no records.
Since you want all communities, as a matter of style, I'd always make that my first table in the join (easier for me to think through). Then you need to move the WHERE conditions into the JOIN clause appropriately, because keeping them in the WHERE forces it to behave like an INNER JOIN. Also, the join on DBudget needed to be updated to refer back to the DCommunity table.

```
SELECT COALESCE(SUM(D.FulfillmentAmt),0) as Approved, DB.Budget, DC.CommunityName
FROM DCommunity DC
LEFT JOIN DBudget DB ON DC.CID = DB.Community AND DB.[Year] = year(getdate())
LEFT JOIN Donations D ON D.Market = DC.CID AND D.StatusId = '1'
GROUP BY DC.CommunityName, DB.Budget
ORDER BY DC.CommunityName
```
Then start your `left join` with `DCommunity`, if that is what you want to preserve. Also, you need to move some conditions to `on` clauses, so the `left join` does not become an `inner join`:

```
SELECT COALESCE(SUM(D.FulfillmentAmt), 0) as Approved, DB.Budget, DC.CommunityName
FROM DCommunity DC LEFT JOIN
     Donations D
     ON D.Market = DC.CID LEFT JOIN
     DBudget DB
     ON D.Market = DB.Community AND
        D.StatusId = '1' AND
        DB.[Year] = year(getdate())
GROUP BY DC.CommunityName, DB.Budget
ORDER BY DC.CommunityName;
```
Show 0 even if there are no records to display
[ "", "sql", "sql-server", "" ]
I'm having trouble creating a query to pull data from 3 tables. I have a `job_skus` table, `skus` table, and `stage` table, and I'm trying to select all the job skus for the whole month regardless of `jobNo`, for example to show a list of all job skus for the month of April. The tables' structure is

```
Job_Sku (example data)
JobNo   Sku (references Products.Sku)   Quantity   Stage
J4454   93339                           500        1
J4454   9947                            600        2

Products
Sku (primary key)   Description
93339               Galvanised Nails
9947                Steel Rods 900mm

Stage
jobNo   stage   date
J4454   1       04/04/2015
J4454   2       12/04/2015
```

and so on. I have come up with this query

```
SELECT jm.sku,jm.quantity,p.description
FROM stage as s, products as p, job_sku as jm
WHERE s.Date BETWEEN '2015-04-01' AND '2015-04-30'
AND jm.stage = s.stage
AND p.sku = jm.sku
```

but it seems to be getting duplicate data in the query. Do I need to use a join of some sort, or maybe extract all the stages for that date period and join the result onto the `job_sku` table?
Here's one suggestion. Others in the thread have suggested `distinct`, which will give the same result, but using an `EXISTS` predicate clarifies the intention of the query (IMO):

```
SELECT jm.sku
     , jm.quantity
     , p.description 
FROM products as p 
JOIN job_sku as jm 
  ON p.sku = jm.sku 
WHERE EXISTS (
    SELECT 1 
    FROM stage s 
    WHERE s.Date BETWEEN '2015-04-01' AND '2015-04-30' 
      AND jm.stage = s.stage
);
```

As others have also mentioned, ANSI joins will make the query a bit easier to understand. Vendors will continue to support the "," join for as long as we live, so the code won't break, but ANSI joins will make it easier to maintain.
First, use proper joins. the join style you use is deprecated. [Explicit joins](https://dev.mysql.com/doc/refman/5.0/en/join.html) are a part of ANSI SQL for over 20 years now. Second, if you get the correct results but some rows are duplicated, all you need to add is [`DISTINCT`](https://dev.mysql.com/doc/refman/5.0/en/distinct-optimization.html) ``` SELECT DISTINCT jm.sku,jm.quantity,p.description FROM stage as s INNER JOIN job_sku as jm ON(jm.stage = s.stage) INNER JOIN products as p ON(p.sku = jm.sku) WHERE s.Date BETWEEN '2015-04-01' AND '2015-04-30' ```
SQL Select Query between 3 tables
[ "", "mysql", "sql", "subquery", "" ]
I have a table field `AccID` where I have to concatenate `Name` with `Date` like 'MyName-010415' in a SQL query. The date format is `01-04-2015` or `01/04/2015`, but I want to display it as `010415`.
For the date part, to get the format you want you, try this: ``` SELECT RIGHT(REPLICATE('0', 2) + CAST(DATEPART(DD, accid) AS VARCHAR(2)), 2) + RIGHT(REPLICATE('0', 2) + CAST(DATEPART(MM, accid) AS VARCHAR(2)), 2) + RIGHT(DATEPART(YY, accid), 2) AS CustomFormat FROM yourtablename ... ``` The `DATEPART(DD, accid)` will give you the day part and the same for `mm` and `yy` will give you the month and the year parts. Then I added the functions `RIGHT(REPLICATE('0', 2) + CAST(... AS VARCHAR(2)), 2)` to add the leading zero, instead of `1` it will be `01`. * [SQL Fiddle Demo](http://sqlfiddle.com/#!6/dae0a0/1) --- As [@bernd-linde](https://stackoverflow.com/users/3864353/bernd-linde) suggested, you can use this function to concatenate it with the name part like: ``` concat(Name, ....) AS ... ``` Also you can just `SELECT` or `UPDATE` depending on what you are looking for. As in [@bernd-linde](https://stackoverflow.com/users/3864353/bernd-linde)'s [fiddle](http://sqlfiddle.com/#!6/eeb1c/3).
I am not sure which language you are using. Let's take PHP as an example:

```
$AccID = $name.'-'.date('dmy'); 
```

Or format the date before you insert the data into the database, or write a trigger on insert.
Two Digit date format in SQL
[ "", "sql", "t-sql", "" ]
Getting the error using PostgreSQL 9.3:

```
select 'hjhjjjhjh'mnmnmnm'mn' 
```

Error:

> ERROR: syntax error at or near "'mn'" 
> SQL state: 42601 
> Character: 26

I tried to replace the single quote inside the text with:

```
select REGEXP_REPLACE('hjhjjjhjh'mnmnmnm'mn', '\\''+', '''', 'g') 
```

and

```
select '$$hjhjjjhjh'mnmnmnm'mn$$' 
```

but it did not work.

Below is the real code:

```
CREATE OR REPLACE FUNCTION generate_mallet_input2() RETURNS VOID AS $$
DECLARE
    sch name;
    r record;
BEGIN
FOR sch IN select schema_name from information_schema.schemata where schema_name not in ('test','summary','public','pg_toast','pg_temp_1','pg_toast_temp_1','pg_catalog','information_schema')
LOOP
  FOR r IN EXECUTE 'SELECT rp.id as id,g.classified as classif, concat(rp.summary,rp.description,string_agg(c.message, ''. '')) as mess
      FROM ' || sch || '.report rp
      INNER JOIN ' || sch || '.report_comment rc ON rp.id=rc.report_id
      INNER JOIN ' || sch || '.comment c ON rc.comments_generatedid=c.generatedid
      INNER JOIN ' || sch || '.gold_set g ON rp.id=g.key
      WHERE g.classified = any (values(''BUG''),(''IMPROVEMENT''),(''REFACTORING''))
      GROUP BY g.classified,rp.summary,rp.description,rp.id'
  LOOP
    IF r.classif = 'BUG' THEN
      EXECUTE format('Copy( select REPLACE(''%s'', '''', '''''''') as m ) To ''/tmp/csv-temp/BUG/'|| quote_ident(sch) || '-' || r.id::text || '.txt ''',r.mess);
    ELSIF r.classif = 'IMPROVEMENT' THEN
      EXECUTE format('Copy( select REPLACE(''%s'', '''', '''''''') as m ) To ''/tmp/csv-temp/IMPROVEMENT/'|| quote_ident(sch) || '-' || r.id || '.txt '' ',r.mess);
    ELSIF r.classif = 'REFACTORING' THEN
      EXECUTE format('Copy( select REPLACE(''%s'', '''', '''''''') as m ) To ''/tmp/csv-temp/REFACTORING/'|| quote_ident(sch) || '-' || r.id || '.txt '' ',r.mess);
    END IF;
  END LOOP;
END LOOP;
RETURN;
END;
$$ LANGUAGE plpgsql STRICT;

select * FROM generate_mallet_input2();
```

Error:

> ERROR: syntax error at or near "mailto" 
> LINE 1: ...e.http.impl.conn.SingleClientConnManager$HTTPCLIENT-803). 
>
> ERROR: syntax error at or near "mailto" 
> SQL state: 42601 
> Context: PL/pgSQL function generate_mallet_input2() line 31 at EXECUTE statement

The retrieved content is a long text about project issues in software repositories, and it can contain HTML. The quote characters inside that HTML text are causing the problem.
It is not the *content* of the string that needs to be escaped, but its *representation* within the SQL you are sending to the server. In order to *represent* a single `'`, you need to write two in the SQL syntax: `''`. So, `'IMSoP''s answer'` represents the string `IMSoP's answer`, `''''` represents `'`, and `''''''` represents `''`. But the crucial thing is you need to do this *before* trying to run the SQL. You can't paste an invalid SQL command into a query window and tell it to heal itself. Automation of the escaping therefore depends entirely how you are creating that SQL. Based on your updated question, we now know that you are creating the SQL statement using pl/pgsql, in this `format()` call: ``` format('Copy( select REPLACE(''%s'', '''', '''''''') as m ) To ''/tmp/csv-temp/BUG/'|| quote_ident(sch) || '-' || r.id::text || '.txt ''',r.mess) ``` Let's simplify that a bit to make the example clearer: ``` format('select REPLACE(''%s'', '''', '''''''') as m', r.mess) ``` If `r.mess` was `foo`, the result would look like this: ``` select REPLACE('foo', '', ''''') as m ``` This replace won't do anything useful, because the first argument is an empty string, and the second has 3 `'` marks in; but even if you fixed the number of `'` marks, it won't work. If the value of `r.mess` was instead `bad'stuff`, you'd get this: ``` select REPLACE('bad'stuff', '', ''''') as m ``` That's invalid SQL; no matter where you try to run it, it won't work, because Postgres thinks the `'bad'` is a string, and the `stuff` that comes next is invalid syntax. Think about how it will look if `r.mess` is `SQL injection'); DROP TABLE users --`: ``` select REPLACE('SQL injection'); DROP TABLE users; --', '', ''''') as m ``` Now we've got valid SQL, but it's probably not what you wanted! 
So what you need to do is escape the `'` marks in `r.mess` **before** you mix it into the string: ``` format('select '%s' as m', REPLACE(r.mess, '''', '''''')) ``` Now we're changing `bad'stuff` to `bad''stuff` before it goes into the SQL, and ending up with this: ``` select 'bad''stuff' as m ``` This is what we wanted. There's actually a few better ways to do this, though: Use the `%L` modifier to [the `format` function](http://www.postgresql.org/docs/current/interactive/functions-string.html#FUNCTIONS-STRING-FORMAT), which outputs an escaped and quoted string literal: ``` format('select %L as m', r.mess) ``` Use the [`quote_literal()` or `quote_nullable()` string functions](http://www.postgresql.org/docs/current/interactive/functions-string.html#FUNCTIONS-STRING-OTHER) instead of `replace()`, and concatenate the string together like you do with the filename: ``` 'select ' || quote_literal(r.mess) || ' as m' ``` Finally, if the function really looks like it does in your question, you can avoid the whole problem by not using a loop at all; just copy each set of rows into a file using an appropriate `WHERE` clause: ``` EXECUTE 'Copy SELECT concat(rp.summary,rp.description,string_agg(c.message, ''. '')) as mess FROM ' || sch || '.report rp INNER JOIN ' || sch || '.report_comment rc ON rp.id=rc.report_id INNER JOIN ' || sch || '.comment c ON rc.comments_generatedid=c.generatedid INNER JOIN ' || sch || '.gold_set g ON rp.id=g.key WHERE g.classified = ''BUG'' -- <-- Note changed WHERE clause GROUP BY g.classified,rp.summary,rp.description,rp.id ) To ''/tmp/csv-temp/BUG/'|| quote_ident(sch) || '-' || r.id::text || '.txt ''' '; ``` Repeat for `IMPROVEMENT` and `REFACTORING`. I can't be sure, but in general, acting on a set of rows at once is more efficient than looping over them. Here, you'll have to do 3 queries, but the `= any()` in your original version is probably fairly inefficient anyway.
I'm taking a stab at this now that I think I know what you are asking. You have a field in a table such that when you run `SELECT <field> from <table>` you are returned the result: ``` 'This'is'a'test' ``` You want, instead, this result to look like: ``` 'This''is''a''test' ``` So: ``` CREATE Table test( testfield varchar(30)); INSERT INTO test VALUES ('''This''is''a''test'''); ``` You can run: ``` SELECT '''' || REPLACE(Substring(testfield FROM 2 FOR LENGTH(testfield) - 2),'''', '''''') || '''' FROM Test; ``` This will get only the bits inside the first and last single-quote, then it will replace the inner single-quotes with doubled single-quotes. Finally it concatenates single-quotes back onto the beginning and end. **SQL Fiddle:** <http://sqlfiddle.com/#!15/a99e6/4> If it's not doubled single-quotes that you are looking for in the interior of your string result, then you can change the REPLACE() function to the appropriate character(s). Also, if it's not single-quotes you are looking for to encapsulate the string, then you can change those in the concatenation.
Error with single quotes inside text in select statement
[ "", "sql", "database", "postgresql", "postgresql-9.3", "" ]
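Editor's note, not part of the original thread: the underlying lesson of both answers, that quotes must be escaped (or better, bound) before the SQL text exists, is easy to demonstrate outside PostgreSQL. A minimal sketch using Python's `sqlite3` module as a stand-in (the doubling rule and driver-side parameter binding work the same way in `psycopg2`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
value = "IMSoP's answer"

# Manual escaping: double every ' before splicing the value into the SQL text
literal = "'" + value.replace("'", "''") + "'"
(echoed,) = conn.execute("SELECT " + literal).fetchone()
assert echoed == value

# Safer: let the driver bind the value; no hand-rolled escaping at all
(echoed2,) = conn.execute("SELECT ?", (value,)).fetchone()
assert echoed2 == value
```

Both round-trip the embedded quote intact; the second form is the one to reach for whenever the driver supports placeholders.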
I'm in the process of transferring lots of embedded SQL in some SSRS reports to functions. The process generally involves taking the current select query, adding an INSERT INTO part and returning a results table. Something like this: ``` CREATE FUNCTION [dbo].[MyReportFunction] ( @userid varchar(255), @location varchar(255), more params here... ) RETURNS @Results TABLE ( Title nvarchar(max), Location nvarchar(255), more columns here... ) AS BEGIN INSERT INTO @Results (Title, Location, more columns...) SELECT tblA.Title, tblB.Location, more columns... FROM TableA tblA INNER JOIN TableB tblB ON tblA.Id = tblB.Id WHERE tblB.Location = @location RETURN END ``` As part of this I have to put the columns into the @Results table and give them the correct size and type as per what's returned from the SELECT query. Now getting the type is fine as I can just copy and paste the existing SELECT query into a new query and mouse over the column name to get that e.g. column Title (nvarchar, null). However, I also need to know the size. Is there an easy way of doing that without having to go to the particular tables and looking at the column list? Is there perhaps a way of editing the tooltip and the information it displays on mouse over? If I've got a lot of joins to different tables and a long list of tables to scroll through this can get tiresome. I'm using SQL Server 2008 R2
As [Zohar](https://stackoverflow.com/users/3094533/zohar-peled) suggests, you can use [INFORMATION\_SCHEMA.COLUMNS](https://msdn.microsoft.com/en-us/library/ms188348.aspx), and then let the query engine do the heavy lifting for you ``` SELECT TOP 0 * INTO #Temp FROM ( OriginalQuery ) o SELECT column_name+' ' + data_type + case data_type when 'sql_variant' then '' when 'text' then '' when 'ntext' then '' when 'xml' then '' when 'decimal' then '(' + cast(numeric_precision as varchar) + ', ' + cast(numeric_scale as varchar) + ')' else coalesce('('+case when character_maximum_length = -1 then 'MAX' else cast(character_maximum_length as varchar) end +')','') end + case when exists ( select id from syscolumns where object_name(id)=TABLE_NAME and name=column_name and columnproperty(id,name,'IsIdentity') = 1 ) then ' IDENTITY(' + cast(ident_seed(TABLE_NAME) as varchar) + ',' + cast(ident_incr(TABLE_NAME) as varchar) + ')' else '' end + (case when IS_NULLABLE = 'No' then ' NOT ' else ' ' end ) + 'NULL' + case when tempdb.INFORMATION_SCHEMA.COLUMNS.COLUMN_DEFAULT IS NOT NULL THEN ' DEFAULT '+ tempdb.information_schema.columns.COLUMN_DEFAULT ELSE '' END + ',' FROM tempdb.INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME LIKE '#Temp%' DROP TABLE #TEMP ```
You could simply query the [INFORMATION\_SCHEMA.COLUMNS](https://msdn.microsoft.com/en-us/library/ms188348.aspx) view: ``` SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME IN('TableA', 'TableB') AND COALESCE(CHARACTER_MAXIMUM_LENGTH, -1) > 0 ``` I've filtered the results a little further by adding the condition on `CHARACTER_MAXIMUM_LENGTH` to the result set. This will give you only the columns that have a maximum length (varchar, char, nvarchar etc.). I hope this helps a little more.
Is there a quick way to view the size of an existing column when writing a query?
[ "", "sql", "sql-server", "sql-server-2008-r2", "sql-function", "" ]
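Editor's note, not part of the original thread: the "query the metadata instead of hovering" idea is portable. SQLite has no `INFORMATION_SCHEMA`, but `PRAGMA table_info` plays a similar role, returning each column's declared type (length included) as text. A small sketch via Python's `sqlite3`, with an invented table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TableA (Id INTEGER, Title NVARCHAR(100), Location NVARCHAR(255))")

# PRAGMA table_info rows are (cid, name, type, notnull, dflt_value, pk);
# the declared type string carries the length, like CHARACTER_MAXIMUM_LENGTH would
info = {name: coltype for _, name, coltype, *_ in conn.execute("PRAGMA table_info(TableA)")}
assert info["Title"] == "NVARCHAR(100)"
assert info["Location"] == "NVARCHAR(255)"
```

The same pattern, run once per table in the join, replaces a lot of tooltip-hovering.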
I'm trying to perform a select insert where I need to use a sequence and a distinct clause. I know we can't use sequences with distinct, so I've nested a subquery inside the FROM, like this: ``` INSERT INTO SchemaName.XXSON_ST_B2B_CTR_PRICING (SEQ_NO,SEEB_KEY,MESSAGE_ID,MESSAGE_TYPE,CREATE_DTIME,CHANGE_DTIME,STATUS) SELECT XXSON.XXSON_B2B_CTR_PRICING_SEQ.NEXTVAL AS SEQ_NO, 'SESS_NO' AS SEEB_KEY, srp.ITEM AS MESSAGE_ID, 'MESSAGE_TYPE' AS MESSAGE_TYPE, sysdate AS CREATE_DTIME, sysdate AS CHANGE_DTIME, 'I' AS STATUS FROM (SELECT distinct srp.ITEM FROM SchemaName.NB_PRICELIST_CTRL pctrl, SchemaName.NB_PRICELIST_SRP srp WHERE STATUS = 'W' AND SOURCE_TABLE = 'NB_PRICELIST_SRP'); ``` However, with this, I can't select srp.ITEM as MESSAGE\_ID because I get an invalid identifier error (**ORA-00904**), no surprise here. Is there any workaround so I can do the insert properly?
To get rid of `ORA-00904: invalid identifier srp.ITEM` you have to assign an `alias` to a `derived table` which serves as a normal table stored in memory. An alias in `FROM` clause assigned inside a block is not visible outside. ``` INSERT INTO SchemaName.XXSON_ST_B2B_CTR_PRICING (SEQ_NO,SEEB_KEY,MESSAGE_ID,MESSAGE_TYPE,CREATE_DTIME,CHANGE_DTIME,STATUS) SELECT XXSON.XXSON_B2B_CTR_PRICING_SEQ.NEXTVAL AS SEQ_NO, 'SESS_NO' AS SEEB_KEY, alias.ITEM AS MESSAGE_ID, 'MESSAGE_TYPE' AS MESSAGE_TYPE, sysdate AS CREATE_DTIME, sysdate AS CHANGE_DTIME, 'I' AS STATUS FROM (SELECT distinct srp.ITEM FROM SchemaName.NB_PRICELIST_CTRL pctrl, SchemaName.NB_PRICELIST_SRP srp WHERE STATUS = 'W' AND SOURCE_TABLE = 'NB_PRICELIST_SRP') alias; ```
The error message you provided deals with missing or invalid columns. I believe the specific problem with the error message is the srp alias in ``` srp.item as Message_ID ``` since you haven't aliased the combination of the 2 tables in the parenthesis (you aliased the individual tables inside, but this does not get carried out). A simple fix is to remove 'srp.' from the front of 'item' in the top statement. ``` INSERT INTO SchemaName.XXSON_ST_B2B_CTR_PRICING (SEQ_NO,SEEB_KEY,MESSAGE_ID,MESSAGE_TYPE,CREATE_DTIME,CHANGE_DTIME,STATUS) SELECT XXSON.XXSON_B2B_CTR_PRICING_SEQ.NEXTVAL AS SEQ_NO, 'SESS_NO' AS SEEB_KEY, ITEM AS MESSAGE_ID, 'MESSAGE_TYPE' AS MESSAGE_TYPE, sysdate AS CREATE_DTIME, sysdate AS CHANGE_DTIME, 'I' AS STATUS FROM (SELECT distinct srp.ITEM FROM SchemaName.NB_PRICELIST_CTRL pctrl, SchemaName.NB_PRICELIST_SRP srp WHERE STATUS = 'W' AND SOURCE_TABLE = 'NB_PRICELIST_SRP') ``` On a separate note, I didn't quite understand the inclusion pctrl.
Select insert with sequence and distinct
[ "", "sql", "oracle", "select", "insert", "distinct", "" ]
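Editor's note, not part of the original thread: the scoping rule both answers rely on, that an alias defined inside a derived table is invisible outside it, holds in other engines too. A quick sketch using Python's `sqlite3` with invented table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE srp_tbl (item TEXT)")
conn.executemany("INSERT INTO srp_tbl VALUES (?)", [("A",), ("A",), ("B",)])

# The inner alias 'srp' is out of scope outside the parentheses...
try:
    conn.execute("SELECT srp.item FROM (SELECT DISTINCT srp.item FROM srp_tbl srp)")
    failed = False
except sqlite3.OperationalError:
    failed = True

# ...so give the derived table its own alias and reference that instead
rows = conn.execute(
    "SELECT d.item FROM (SELECT DISTINCT srp.item FROM srp_tbl srp) AS d ORDER BY d.item"
).fetchall()
assert rows == [("A",), ("B",)]
```

The accepted answer's `alias` after the closing parenthesis does exactly this in the Oracle version.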
I'm trying to find the Cartesian product of 4 columns which have data separated by delimiter Example ``` ID ID2 String String2 1234 33423,43222,442224,213432 Sample;repeat;example;multiple second; possible;delimiter 2345 12354; 55633; 343534;65443;121121 data;set;sample;find answer;combination;by ``` and I get an error Subscript out of range with the below code. Can anyone help with where it is going wrong? ``` Sub Cartesian() Dim MyStr1 As Variant, MyStr2 As Variant, MyStr3 As Variant, MyStr4 As Variant, _ Str1 As Variant, Str2 As Variant, Str3 As Variant, Str4 As Variant, X As Long, _ OrigString1 As Variant, OrigString2 As Variant, OrigString3 As Variant, _ OrigString4 As Variant, Y As Long OrigString1 = Range("A2:A" & Range("A" & Rows.Count).End(xlUp).Row) OrigString2 = Range("B2:B" & Range("A" & Rows.Count).End(xlUp).Row) OrigString3 = Range("C2:C" & Range("A" & Rows.Count).End(xlUp).Row) OrigString4 = Range("D2:D" & Range("A" & Rows.Count).End(xlUp).Row) X = 2 For Y = LBound(OrigString1) To UBound(OrigString1) MyStr1 = Split(OrigString1(Y), ";") MyStr2 = Split(OrigString2(Y), ";") MyStr3 = Split(OrigString3(Y), ";") MyStr4 = Split(OrigString4(Y), ";") For Each Str1 In MyStr1 For Each Str2 In MyStr2 For Each Str3 In MyStr3 For Each Str4 In MyStr4 Range("A" & X).Formula = Str1 Range("B" & X).Formula = Str2 Range("C" & X).Formula = Str3 Range("D" & X).Formula = Str4 X = X + 1 Next Next Next Next Next End Sub ``` Or is there a better way to deal with this using sql? Or any better way to achieve the Cartesian product of every row using VBA?
Something like this works. I could not find a more elegant solution. ``` Sub Cartesian() Dim MyStr1() As String Dim MyStr2() As String Dim MyStr3() As String Dim MyStr4() As String Dim X As Long Dim OrigString1() As String Dim OrigString2() As String Dim OrigString3() As String Dim OrigString4() As String Dim Y As Long Dim sht As Worksheet Set sht = Worksheets("Sheet1") ReDim OrigString1(1 To Range("A" & Rows.Count).End(xlUp).Row - 1) For Y = 1 To UBound(OrigString1) OrigString1(Y) = CStr(Range("A" & CStr(Y + 1)).Value) Next ReDim OrigString2(1 To Range("B" & Rows.Count).End(xlUp).Row - 1) For Y = 1 To UBound(OrigString2) OrigString2(Y) = CStr(Range("B" & CStr(Y + 1)).Value) Next ReDim OrigString3(1 To Range("C" & Rows.Count).End(xlUp).Row - 1) For Y = 1 To UBound(OrigString3) OrigString3(Y) = CStr(Range("C" & CStr(Y + 1)).Value) Next ReDim OrigString4(1 To Range("D" & Rows.Count).End(xlUp).Row - 1) For Y = 1 To UBound(OrigString4) OrigString4(Y) = CStr(Range("D" & CStr(Y + 1)).Value) Next X = 2 For Y = LBound(OrigString1) To UBound(OrigString1) MyStr1() = Split(OrigString1(Y), ";") MyStr2() = Split(OrigString2(Y), ";") MyStr3() = Split(OrigString3(Y), ";") MyStr4() = Split(OrigString4(Y), ";") For Each Str1 In MyStr1 For Each Str2 In MyStr2 For Each Str3 In MyStr3 For Each Str4 In MyStr4 Range("A" & X).Formula = Str1 Range("B" & X).Formula = Str2 Range("C" & X).Formula = Str3 Range("D" & X).Formula = Str4 X = X + 1 Next Next Next Next Next End Sub ```
The first row contains commas and not semi-colons; that is what's messing up the dimensionality of the vector
Subscript out of range error during Cartesian product
[ "", "sql", "vba", "excel", "" ]
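Editor's note, not part of the original thread: the four nested `For Each` loops are a hand-rolled Cartesian product. For reference, the same expansion of one row's delimited cells can be written with `itertools.product` in Python (cell values taken from the question's sample; the `strip` handles the stray spaces around the semicolons):

```python
from itertools import product

row = {
    "String":  "Sample;repeat;example;multiple",
    "String2": "second; possible;delimiter",
}
# Split each cell on the delimiter, trimming whitespace around each part
parts = [[p.strip() for p in cell.split(";")] for cell in row.values()]
combos = list(product(*parts))

assert len(combos) == 4 * 3              # every pairing of the two cells
assert combos[0] == ("Sample", "second")
```

With four columns, `parts` simply has four lists and `product` yields every 4-tuple, which is what the VBA loops write out row by row.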
I'm trying to come up with an expression to join two tables (the first two) to get something like the third one. I want to SELECT 'Sitepage' and 'Medium' and JOIN the first two tables ON the rows where the RIGHT 5 characters of 'Sitepage' match 'Campaign ID'. Additionally, IF there is a match, THEN 'Program' will REPLACE 'Medium'. What would be the syntax? ``` Sitepage | Medium xyz.com/campaign=12345 | A xyz.com/campaign=23456 | C Campaign ID | Program 12345 | B Sitepage | Medium xyz.com/campaign=12345 | B xyz.com/campaign=23456 | C ``` <https://i.stack.imgur.com/pq35n.png>
I based my answer off of @Juan's, but I had to make some adjustments to get it to work. ``` SELECT SitePage, COALESCE(t2.Program, t1.Medium) as Medium FROM Table1 t1 LEFT JOIN Table2 t2 ON RIGHT(t1.Sitepage, 5) = COALESCE(t2.`Campaign ID`, -1); ``` @Abhik was heading in the right direction too. It's more generic than the one above, which assumes that the last 5 characters of `SitePage` will be the only pertinent ones. With that said, I would have gone with... ``` SELECT SitePage, COALESCE(t2.Program, t1.Medium) as Medium FROM Table1 t1 LEFT JOIN Table2 t2 ON SUBSTRING_INDEX(t1.Sitepage,'=',-1) = COALESCE(t2.`Campaign ID`, -1); ``` [SQL Fiddle example](http://sqlfiddle.com/#!9/afaa0/13)
You can use `substring_index` and then update by join ``` update table1 t1 join table2 t2 on t2.`Campaign ID` = substring_index(t1.`Sitepage`,'=',-1) set t1.`Medium` = t2.`Program` ```
Join Two Tables and Replace Values Where Conditions Meet
[ "", "mysql", "sql", "" ]
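Editor's note, not part of the original thread: the accepted answer's `LEFT JOIN` plus `COALESCE` fallback is easy to verify with the question's own sample data. A sketch via Python's `sqlite3` (SQLite's `substr(x, -5)` stands in for `RIGHT(x, 5)`; table names invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (sitepage TEXT, medium TEXT);
CREATE TABLE t2 (campaign_id TEXT, program TEXT);
INSERT INTO t1 VALUES ('xyz.com/campaign=12345', 'A'), ('xyz.com/campaign=23456', 'C');
INSERT INTO t2 VALUES ('12345', 'B');
""")

rows = conn.execute("""
    SELECT t1.sitepage,
           COALESCE(t2.program, t1.medium) AS medium   -- Program wins when matched
    FROM t1
    LEFT JOIN t2 ON substr(t1.sitepage, -5) = t2.campaign_id
    ORDER BY t1.sitepage
""").fetchall()

assert rows == [('xyz.com/campaign=12345', 'B'),
                ('xyz.com/campaign=23456', 'C')]
```

The matched row picks up `Program` ('B'); the unmatched row falls back to its original `Medium` ('C'), exactly the third table in the question.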
I need to select a substring that is found between (). The starting and ending position will vary, as well as the length of the substring. I have had moderate success with the following, but not 100%. It works for some values but not for others: it returns blanks, and it also changes the value's capitalization format; in other words, if the value is 'TEST' it will display as 'Test'. ``` SELECT SUBSTRING(columnName, CHARINDEX('(', LEN(columnName)), CHARINDEX(')', columnName) - CHARINDEX('(',columnName)) AS INPUT FROM tableName ``` *Update* There is only 1 set of parentheses ()
This will work provided you only have a single occurence of `(` and `)`: ``` SELECT SUBSTRING(columnName, CHARINDEX('(', columnName)+1, CHARINDEX(')', columnName) - CHARINDEX('(', columnName)-1) FROM tableName ``` If you have values that do not have any `(...)` content, add this where clause: ``` WHERE CHARINDEX('(', columnName) > 0 AND CHARINDEX(')', columnName) > CHARINDEX('(', columnName) ```
To account for no, nested or incomplete `()` ``` ;with t(f) as ( select 'aaa(bbb)ccc' union select 'aaa(bbbccc' union select 'aaabbb)ccc' union select 'aaa()ccc' union select '(aaa(?))ccc' ) select f, case when patindex('%(%)%', f) > 0 then substring(f, charindex('(', f ) + 1, (len(f) - charindex(')', reverse(f))) - charindex('(', f )) else '' end from t >> f (No column name) aaa()ccc aaa(bbb)ccc bbb (aaa(?))ccc aaa(?) aaa(bbbccc aaabbb)ccc ```
Selecting Substring SQL
[ "", "sql", "sql-server", "select", "substring", "" ]
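Editor's note, not part of the original thread: the accepted answer's offset arithmetic carries over to other dialects. A sketch via Python's `sqlite3`, where `instr`/`substr` roughly correspond to `CHARINDEX`/`SUBSTRING` (sample value invented); note the case of the extracted text is preserved, so the capitalization change the asker saw must have come from elsewhere:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
val = "widget (TEST) model"

# start = position after '(' ; length = distance between the parens minus 1
(inner,) = conn.execute(
    "SELECT substr(:v, instr(:v, '(') + 1, instr(:v, ')') - instr(:v, '(') - 1)",
    {"v": val},
).fetchone()
assert inner == "TEST"   # parentheses stripped, case untouched
```

The original bug in the question is visible by comparison: it passed `LEN(columnName)` as the start position instead of `CHARINDEX('(', columnName) + 1`.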
I'm trying to figure out a way to update a record without having to list every column name that needs to be updated. For instance, it would be nice if I could use something similar to the following: ``` // the parts inside braces are what I am trying to figure out UPDATE Employee SET {all columns, without listing each of them} WITH {this record with id of '111' from other table} WHERE employee_id = '100' ``` If this can be done, what would be the most straightforward/efficient way of writing such a query?
It's not possible. What you're trying to do is not part of the SQL specification and is not supported by any database vendor. See the specifications of the SQL UPDATE statement for [MySQL](https://dev.mysql.com/doc/refman/5.0/en/update.html), [Postgresql](http://www.postgresql.org/docs/9.1/static/sql-update.html), [MSSQL](https://technet.microsoft.com/en-us/library/aa260662%28v=sql.80%29.aspx), [Oracle](http://docs.oracle.com/cd/B19306_01/appdev.102/b14261/update_statement.htm), [Firebird](http://www.firebirdsql.org/refdocs/langrefupd21-update.html), [Teradata](http://www.info.teradata.com/edownload.cfm?itemid=131540030). Every one of those supports only the syntax below: ``` UPDATE table_reference SET column1 = {expression} [, column2 = {expression}] ... [WHERE ...] ```
This is not possible, but... you can do it: ``` begin tran delete from table where CONDITION insert into table select * from EqualDesingTabletoTable where CONDITION commit tran ``` Be careful with identity fields.
How to UPDATE all columns of a record without having to list every column
[ "", "sql", "sql-update", "" ]
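Editor's note, not part of the original thread: the second answer's delete-and-reinsert workaround only avoids listing columns if the two tables have identical shapes, and it must run inside one transaction. A sketch of the pattern via Python's `sqlite3` (table and column names invented; here the column list still appears once, in the `SELECT`, because the key has to be preserved):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employee (employee_id TEXT PRIMARY KEY, name TEXT, dept TEXT);
CREATE TABLE template (employee_id TEXT, name TEXT, dept TEXT);
INSERT INTO employee VALUES ('100', 'Old Name', 'Old Dept');
INSERT INTO template VALUES ('111', 'New Name', 'New Dept');
""")

with conn:  # one transaction: delete the target row, re-create it from the other table
    conn.execute("DELETE FROM employee WHERE employee_id = '100'")
    conn.execute("""INSERT INTO employee
                    SELECT '100', name, dept FROM template WHERE employee_id = '111'""")

assert conn.execute(
    "SELECT name, dept FROM employee WHERE employee_id = '100'"
).fetchone() == ("New Name", "New Dept")
```

If the transaction is skipped and the insert fails, the row is simply gone, which is why the answer's `begin tran` / `commit tran` wrapper matters.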
I have two tables ``` emplyoee (first table) id primary key auto increment emp_name varchar student(second table) id foreign key emplyoee.id st_name varchar ``` I want to insert multiple student records for a single `employeeid`. My code is attached below, but it only inserts one student record per employee. How can I write a stored procedure for this? I am new to SQL Server and stored procedures. Could you please help me? ``` create procedure empst_Sp @emp_name varchar(50), @st_name varchar(50) as begin insert into emplyoee (emp_name) values (@emp_name) insert into student(id,st_name) values(SCOPE_IDENTITY(),@st_name) end ```
For your case, you can try the code below (I'm using an XML parameter type): ``` CREATE PROCEDURE EmployeeIns @EmployeeName NVARCHAR(50), @Students XML AS /* @Students : <Students> <Student Name='Studen 1'/> <Student Name='Studen 1'/> </Students> */ BEGIN DECLARE @StudenTable TABLE(Name NVARCHAR(50)) DECLARE @EmployeeId INT INSERT INTO @StudenTable SELECT Tbl.Col.value('@Name', 'NVARCHAR(50)') FROM @Students.nodes('//Student') Tbl(Col) INSERT INTO Emplyoee VALUES(@EmployeeName) SET @EmployeeId = SCOPE_IDENTITY() INSERT INTO Student SELECT @EmployeeId, Name FROM @StudenTable END ``` **Update 1 :** Your table design should look like this: ``` CREATE TABLE [dbo].[Emplyoee]( [Id] [int] IDENTITY(1,1) NOT NULL, [Name] [nvarchar](150) NULL, CONSTRAINT [PK_Emplyoee] PRIMARY KEY CLUSTERED ( [Id] ASC )) CREATE TABLE [dbo].[Student]( [EmployeeId] [int] NULL, [Name] [nvarchar](150) NULL, [Id] [int] IDENTITY(1,1) NOT NULL, CONSTRAINT [PK_Student] PRIMARY KEY CLUSTERED ( [Id] ASC )) ``` The execute code: ``` EXEC EmployeeIns @EmployeeName='trungtin1710', @Students = '<Students><Student Name="Studen 1"/><Student Name="Studen 1"/></Students>' ```
As I understand: If emplyoee with @emp\_name is already exists then insert student records with ID of the emplyoee, if there is not any emplyoee with @emp\_name then need to insert new emplyoee and student with ID of the new emplyoee. Yes? ``` CREATE PROCEDURE empst_Sp @emp_name varchar(50), @st_name varchar(50) AS BEGIN DECLARE @EmplyoeeId int SET @EmplyoeeId = NULL select @EmplyoeeId = id from emplyoee where emp_name = @emp_name IF @EmplyoeeId IS NULL BEGIN insert into emplyoee (emp_name) values (@emp_name) SET @EmplyoeeId = SCOPE_IDENTITY() END insert into student(id, st_name) values(@EmplyoeeId, @st_name) END ```
Insert multiple records using stored procedure
[ "", "sql", "asp.net", "sql-server", "sql-server-2008", "" ]
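Editor's note, not part of the original thread: both answers share the same skeleton, insert the parent once, capture its generated id, then insert each child with that id. A sketch via Python's `sqlite3`, where `cursor.lastrowid` plays the role of `SCOPE_IDENTITY()` (table names kept from the question, helper function invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emplyoee (id INTEGER PRIMARY KEY AUTOINCREMENT, emp_name TEXT);
CREATE TABLE student (id INTEGER, st_name TEXT);
""")

def add_employee_with_students(emp_name, student_names):
    cur = conn.execute("INSERT INTO emplyoee (emp_name) VALUES (?)", (emp_name,))
    emp_id = cur.lastrowid            # the generated key, captured once
    conn.executemany("INSERT INTO student (id, st_name) VALUES (?, ?)",
                     [(emp_id, s) for s in student_names])
    return emp_id

emp_id = add_employee_with_students("alice", ["s1", "s2", "s3"])
count = conn.execute("SELECT COUNT(*) FROM student WHERE id = ?",
                     (emp_id,)).fetchone()[0]
assert count == 3
```

The original procedure only ever inserted one student per call because it took a single `@st_name`; the fix in either answer is to pass the whole set of names in one parameter.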
I am having a tough time understanding why `LEFT JOIN` / `IS NULL` eliminates records which are in one table and not in the other. Here is an example ``` SELECT l.id, l.value FROM t_left l LEFT JOIN t_right r ON r.value = l.value WHERE r.value IS NULL ``` Why should `r.value = NULL` eliminate records? I am not understanding. I know I am missing something very basic, but at present I can't figure out even that. I would appreciate it if someone explained it to me in detail. I want a very basic explanation.
This could be explained with the following ``` mysql> select * from table1 ; +------+------+ | id | val | +------+------+ | 1 | 10 | | 2 | 30 | | 3 | 40 | +------+------+ 3 rows in set (0.00 sec) mysql> select * from table2 ; +------+------+ | id | t1id | +------+------+ | 1 | 1 | | 2 | 2 | +------+------+ 2 rows in set (0.00 sec) ``` Here `table1.id <-> table2.t1id` Now when we do a `left join` with the joining key and if the left table is table1 then it will get all the data from table1 and in non-matching record on table2 will be set to null ``` mysql> select t1.* , t2.t1id from table1 t1 left join table2 t2 on t2.t1id = t1.id ; +------+------+------+ | id | val | t1id | +------+------+------+ | 1 | 10 | 1 | | 2 | 30 | 2 | | 3 | 40 | NULL | +------+------+------+ 3 rows in set (0.00 sec) ``` See that table1.id = 3 does not have a value in table2 so its set as null When you apply the where condition it will do further filtering ``` mysql> select t1.* , t2.t1id from table1 t1 left join table2 t2 on t2.t1id = t1.id where t2.t1id is null; +------+------+------+ | id | val | t1id | +------+------+------+ | 3 | 40 | NULL | +------+------+------+ 1 row in set (0.00 sec) ```
Let's assume the `l` table is employees, and the `r` table is computers. Some employees don't have computers. Some computers are not assigned to anyone yet. 1. Inner join: ``` SELECT l.*, r.* FROM employees l JOIN computers r ON r.id = l.comp_id ``` gives you the list of all employees who HAVE a computer, and the info about the computer assigned to each of them. The employees without a computer will NOT appear on this list. 2. Left join: ``` SELECT l.*, r.* FROM employees l LEFT JOIN computers r ON r.id = l.comp_id ``` gives you the list of ALL employees. The employees with a computer will show the computer info. The employees without computers will appear with NULLs instead of computer info. 3. Finally ``` SELECT l.*, r.* FROM employees l LEFT JOIN computers r ON r.id = l.comp_id WHERE r.id IS NULL ``` Left join with the WHERE clause will start with the same list as the left join **(2)**, but then it will keep only those employees that do not have corresponding information in the computer table, that is, employees without computers. In this case, selecting anything from the `r` table will be just nulls, so you can leave those fields out and select only stuff from the `l` table: ``` SELECT l.* FROM ... ``` Try this sequence of selects and observe the output. Each next step builds on the previous one. Please let me know if this explanation is understandable, or if you'd like me to elaborate some more. 
EDITED TO ADD: Here's sample code to create the two tables used above: ``` CREATE TABLE employees ( id INT NOT NULL PRIMARY KEY, name VARCHAR(20), comp_id INT); INSERT INTO employees (id, name, comp_id) VALUES (1, 'Becky', 1); INSERT INTO employees (id, name, comp_id) VALUES (2, 'Anne', 7); INSERT INTO employees (id, name, comp_id) VALUES (3, 'John', 3); INSERT INTO employees (id, name) VALUES (4, 'Bob'); CREATE TABLE computers ( id INT NOT NULL PRIMARY KEY, os VARCHAR(20) ); INSERT INTO computers (id, os) VALUES (1,'Windows 7'); INSERT INTO computers (id, os) VALUES (2,'Windows XP'); INSERT INTO computers (id, os) VALUES (3,'Unix'); INSERT INTO computers (id, os) VALUES (4,'Windows 7'); ``` There are 4 employees. Becky and John have computers. Anne and Bob do not have a computer. (Anne, has a comp\_id 7, which doesn't correspond to any row in computers table - so, she doesn't really have a computer.)
How does Left Join / IS NULL eliminate records which are there in one table and not in the other?
[ "", "mysql", "sql", "left-join", "isnull", "" ]
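Editor's note, not part of the original thread: the anti-join from the question is easy to reproduce outside MySQL. A sketch via Python's `sqlite3`, with invented values shaped like the question's tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t_left  (id INTEGER, value INTEGER);
CREATE TABLE t_right (value INTEGER);
INSERT INTO t_left VALUES (1, 10), (2, 20), (3, 30);
INSERT INTO t_right VALUES (10), (30);
""")

# Unmatched left rows get NULL for every right-side column; keeping only
# those NULLs leaves exactly the rows present on the left but not the right.
rows = conn.execute("""
    SELECT l.id, l.value
    FROM t_left l
    LEFT JOIN t_right r ON r.value = l.value
    WHERE r.value IS NULL
""").fetchall()

assert rows == [(2, 20)]   # 20 is the only left value with no right-side match
```

So the `WHERE r.value IS NULL` is not comparing the original data against NULL; it filters on the NULLs that the left join itself manufactured for unmatched rows.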
I am having a weird result when I am trying to get the LAST\_VALUE from a table in SQL Server 2012. This is the table I have ``` PK | Id1 | Id2 1 | 2 | 5 2 | 2 | 6 3 | 2 | 5 4 | 2 | 6 ``` This is my query ``` SELECT Id1, Id2, LAST_VALUE(PK) OVER (PARTITION BY Id1 ORDER BY Id2) AS LastValue FROM @Data ``` This is the result I am expecting ``` Id1 | Id2 | LastValue 2 | 5 | 3 2 | 5 | 3 2 | 6 | 4 2 | 6 | 4 ``` This is what I am receiving ``` Id1 | Id2 | LastValue 2 | 5 | 3 2 | 5 | 3 2 | 6 | 2 2 | 6 | 2 ``` Here is a demonstration of the problem <http://sqlfiddle.com/#!6/5c729/1> Is there anything wrong with my query?
SQL Server doesn't know or care about the order in which rows were inserted into the table. If you need specific order, always use `ORDER BY`. In your example `ORDER BY` is ambiguous, unless you include `PK` into the `ORDER BY`. Besides, `LAST_VALUE` function can return odd results if you are not careful - see below. You can get your expected result using `MAX` or `LAST_VALUE` ([SQLFiddle](http://sqlfiddle.com/#!6/5c729/24/0)). They are equivalent in this case: ``` SELECT PK, Id1, Id2 ,MAX(PK) OVER (PARTITION BY Id1, Id2) AS MaxValue ,LAST_VALUE(PK) OVER (PARTITION BY Id1, Id2 ORDER BY PK rows between unbounded preceding and unbounded following) AS LastValue FROM Data ORDER BY id1, id2, PK ``` Result of this query will be the same regardless of the order in which rows were originally inserted into the table. You can try to put `INSERT` statements in different order in the fiddle. It doesn't affect the result. Also, `LAST_VALUE` behaves not quite as you'd intuitively expect with default window (when you have just `ORDER BY` in the `OVER` clause). Default window is `ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW`, while you'd expected it to be `ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING`. Here is a SO answer with a [good explanation](https://stackoverflow.com/questions/15388892/sql-last-value-returns-wrong-result-but-first-value-works-fine/26806428#26806428). The link to this SO answer is on MSDN page for [`LAST_VALUE`](https://msdn.microsoft.com/en-us/library/hh231517.aspx). So, once the row window is specified explicitly in the query it returns what is needed. --- If you want to know the order in which rows were inserted into the table, I think, the most simple way is to use [`IDENTITY`](https://msdn.microsoft.com/en-us/library/ms186775.aspx). 
So, definition of your table would change to this: ``` CREATE TABLE Data (PK INT IDENTITY(1,1) PRIMARY KEY, Id1 INT, Id2 INT) ``` When you `INSERT` into this table you don't need to specify the value for `PK`, the server would generate it automatically. It guarantees that generated values are unique and growing (with positive increment parameter), even if you have many clients inserting into the table at the same time simultaneously. There may be gaps between generated values, but the relative order of the generated values will tell you which row was inserted after which row.
> It is never a good idea to rely on implicit order caused by the particular implementation of the underlying database engine. I don't know why, running the query ``` SELECT * FROM @Data ORDER BY Id2 ``` the result will be ``` +----+-----+-----+ | PK | id1 | id2 | +----+-----+-----+ | 1 | 2 | 5 | | 3 | 2 | 5 | | 4 | 2 | 6 | | 2 | 2 | 6 | +----+-----+-----+ ``` which means SQL Server decided the order of rows in a way that is different from the insert order. That's why the `LAST_VALUE` behavior is different from expected, but is consistent with the SQL Server sort method. **But how SQL Server sort your data?** The best answer we have is the accepted answer of this [question](https://stackoverflow.com/questions/2040724/how-does-sql-server-sort-your-data) (from where I took the sentence in the beginning of my answer).
LAST_VALUE in SQL Server 2012 is returning weird results
[ "", "sql", "sql-server", "sql-server-2012", "" ]
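Editor's note, not part of the original thread: the accepted answer's `MAX(PK) OVER (PARTITION BY ...)` alternative sidesteps `LAST_VALUE`'s default frame entirely. A quick check with the question's exact data via Python's `sqlite3`, using a correlated subquery as a frame-free stand-in for the per-partition `MAX`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE data (pk INTEGER, id1 INTEGER, id2 INTEGER);
INSERT INTO data VALUES (1,2,5), (2,2,6), (3,2,5), (4,2,6);
""")

# "Last PK per (id1, id2) group", with no window frame to get wrong
rows = conn.execute("""
    SELECT id1, id2,
           (SELECT MAX(d2.pk) FROM data d2
            WHERE d2.id1 = d.id1 AND d2.id2 = d.id2) AS last_value
    FROM data d
    ORDER BY id2, pk
""").fetchall()

assert rows == [(2, 5, 3), (2, 5, 3), (2, 6, 4), (2, 6, 4)]
```

This matches the output the asker expected, which is why the accepted answer's point about spelling out `ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING` (or just using `MAX`) is the right fix.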
Hi, I have a table with 4 columns. The table is as below ``` sampleId totalAmount discount netAmount 1 120 40 80 2 200 50 150 3 400 100 300 ``` Now I want a totals summary row at the bottom of the table. Please look at the image below. How can I achieve this? ![enter image description here](https://i.stack.imgur.com/0YFC4.jpg)
You can use [`UNION ALL`](https://dev.mysql.com/doc/refman/5.7/en/union.html) as below ``` select cast(sampleId as char(10)) as sampleId, totalAmount,discount, netAmount from tab union all select 'Total', sum(totalAmount),sum(discount), sum(netAmount) from tab ``` **[SqlFiddle](http://sqlfiddle.com/#!9/74c45/2/0) Demo** 1st column is converted to varchar becouse you want to `Total` word atthe bottom. Columns types in `UNION` must be the same type.
You may do `union all` ``` select * from tablename union all select 'Totals' as sampleId, sum(totalAmount) as totalAmount, sum(discount) as discount, sum(netAmount) as netAmount from tablename ``` Here is a demo ``` mysql> select * from test ; +------+--------+----------+-----------+ | id | amount | discount | net_total | +------+--------+----------+-----------+ | 1 | 120 | 40 | 80 | | 2 | 200 | 50 | 150 | | 3 | 500 | 100 | 300 | +------+--------+----------+-----------+ 3 rows in set (0.00 sec) mysql> select * from test union all select 'Totals' as id,sum(amount) as amount,sum(discount) as discount,sum(net_total) as net_total from test ; +--------+--------+----------+-----------+ | id | amount | discount | net_total | +--------+--------+----------+-----------+ | 1 | 120 | 40 | 80 | | 2 | 200 | 50 | 150 | | 3 | 500 | 100 | 300 | | Totals | 820 | 190 | 530 | +--------+--------+----------+-----------+ ```
adding summary row to the end of the table
[ "", "mysql", "sql", "" ]
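Editor's note, not part of the original thread: the `UNION ALL` totals trick, including the cast that both answers gloss over, can be checked with the question's numbers. A sketch via Python's `sqlite3` (table name invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (sampleId INTEGER, totalAmount INTEGER, discount INTEGER, netAmount INTEGER);
INSERT INTO sales VALUES (1,120,40,80), (2,200,50,150), (3,400,100,300);
""")

rows = conn.execute("""
    SELECT CAST(sampleId AS TEXT), totalAmount, discount, netAmount FROM sales
    UNION ALL
    SELECT 'Total', SUM(totalAmount), SUM(discount), SUM(netAmount) FROM sales
""").fetchall()

# Three detail rows plus the summary row with the column sums
assert len(rows) == 4
assert ('Total', 720, 190, 530) in rows
```

The cast matters because the id column and the literal `'Total'` must share a type in a `UNION`; strictly speaking, an `ORDER BY` would be needed to guarantee the summary row sorts last.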
I have a procedure that receives a specific date as a parameter, i.e. **Exec ProcDB '20150428'**. Frequently I need to run this procedure for many dates, and usually I retype **Exec ProcDB 'date1' GO Exec ProcDB 'date2' GO**..... That isn't smart, since I can get the valid list of dates using **Select Distinct [dates] From Table1 Order By [dates]**. So I want to create a new procedure that receives **Start\_Dt** and **End\_Dt** and loops over all the dates my select distinct returns between Start\_Dt and End\_Dt, inclusive. ``` ie something like: Create ProcDBlist Start_Dt as date, End_Dt as date For each date in: Select Distinct [date] from [table1] where [date] >= @Start_Dt and [date] <= @End_dt Do: Exec ProcDB 'Date n' End ``` **UPDATED:** Final solution: ``` Create procedure [dbo].[ProcessDBRange] (@Start_dt as varchar(15) =null, @End_dt as varchar(15) =null) As Begin DECLARE @date as varchar(15) DECLARE Cursor_ProcessDB CURSOR FOR Select Distinct Convert(varchar(15), [date], 112) as [date] From [Prices] Where [date] >= @Start_dt and [date] <= @End_dt Order By [date] OPEN Cursor_ProcessDB FETCH next FROM Cursor_ProcessDB INTO @date WHILE @@FETCH_STATUS = 0 BEGIN Exec ProcessDB @date FETCH next FROM Cursor_ProcessDB INTO @date END CLOSE Cursor_ProcessDB DEALLOCATE Cursor_ProcessDB End ```
This can be accomplished by using a [cursor.](https://msdn.microsoft.com/en-us/library/ms180169.aspx) Basically, it goes like this: ``` DECLARE @Date datetime -- a local variable to get the cursor's result DECLARE DatesCursor CURSOR FOR Select Distinct [dates] From Table1 Where [dates] between @Start_Dt and @End_Dt Order By [dates] -- the query that the cursor iterates over OPEN DatesCursor FETCH NEXT FROM DatesCursor INTO @Date WHILE @@FETCH_STATUS = 0 -- this will be 0 as long as the cursor returns a result BEGIN Exec ProcDB @Date FETCH NEXT FROM DatesCursor INTO @Date -- don't forget to fetch the next result inside the loop as well! END -- cleanup - Very important! CLOSE DatesCursor DEALLOCATE DatesCursor ``` **Edit** I've just read the [link](https://stackoverflow.com/questions/477064/is-it-possible-to-execute-a-stored-procedure-over-a-set-without-using-a-cursor) that [zimdanen](https://stackoverflow.com/users/128217/zimdanen) gave you in the comments; I must say I think in this case it may be better than using a cursor. **Edit #2** First, change `OPEN sub` to `OPEN cursor_name`. Second, use [CONVERT](https://msdn.microsoft.com/en-us/library/ms187928.aspx) to get the date as a string. Make sure you convert with the correct style, otherwise you are prone to get incorrect dates and/or exceptions.
You will want to use a cursor. I believe this is a good resource: <http://www.codeproject.com/Tips/277847/How-to-use-Cursor-in-Sql> I tried to make an example with the info you provided. ``` DECLARE @Start_dt DATE; DECLARE @End_dt DATE; DECLARE @date DATE; DECLARE cursor_name CURSOR FOR SELECT DISTINCT Date FROM [table1] WHERE Date >= @Start_dt and Date <= @End_dt ORDER BY Date OPEN cursor_name FETCH next FROM cursor_name INTO @date WHILE @@FETCH_STATUS = 0 BEGIN DECLARE @date2 VARCHAR(15) SET @date2 = (CAST ( @date AS varchar(15) )) Exec ProcDB @date_parameter_name = @date2 FETCH next FROM cursor_name INTO @date END CLOSE cursor_name DEALLOCATE cursor_name ```
SQL, How to loop for a date list using each as parameter for an Procedure?
[ "", "sql", "loops", "stored-procedures", "sql-server-2014-express", "" ]
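Editor's note, not part of the original thread: the cursor pattern, fetch the distinct dates, call the procedure once per date, looks like this when driven from client code instead of T-SQL. A sketch via Python's `sqlite3`, with an invented stand-in for `Exec ProcessDB @date`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE prices (date TEXT);
INSERT INTO prices VALUES ('20150427'), ('20150428'), ('20150428'), ('20150429');
""")

processed = []
def process_db(d):              # plays the role of EXEC ProcessDB @date
    processed.append(d)

# One query for the distinct dates in range, then one call per date
for (d,) in conn.execute(
        "SELECT DISTINCT date FROM prices WHERE date BETWEEN ? AND ? ORDER BY date",
        ("20150427", "20150428")):
    process_db(d)

assert processed == ["20150427", "20150428"]
```

The duplicate `'20150428'` row collapses to one call, which is exactly what the `SELECT DISTINCT` in the cursor declaration is for.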
In SQL Server 2008 R2 I know how to create a primary key column that keeps my lines unique (column3 in my example below), but I also need a column that auto-fills a repeating set of integers (column2 in my example below). I don't know how to create a column like this. Searches suggest it can be done with "reseeding", but I'm not sure how to actually create this column or if it's even possible to do this automatically. I'm looking for something like this, with Column2 being auto-filled every time a new line is created: ``` Column1 Column2 Column3 name1 1 1 name2 2 2 name3 3 3 name4 4 4 name5 5 5 name6 1 6 name7 2 7 name8 3 8 name9 4 9 name10 5 10 name11 1 11 name12 2 12 ```
You could use a computed (persisted) column by doing something along these lines: ``` ALTER TABLE Table_Name ADD Column2 AS (Column3 % 5) ``` This gives you 0-4 instead of 1-5; `((Column3 - 1) % 5) + 1` produces the 1-5 cycle you are looking for. For a constantly updating answer that updates if rows are removed, you could do something like this: ``` SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO CREATE FUNCTION dbo.GetTableValue ( @Index int ) RETURNS int AS BEGIN DECLARE @Result int SELECT @Result = COUNT(*) % 5 FROM dbo.TableName WHERE PrimaryKey < @Index RETURN @Result END GO ALTER TABLE dbo.TableName ADD CalcValue AS (dbo.GetTableValue(PrimaryKey)); ``` That query isn't fully tested but should put you on the right path. Once again, this will hurt performance in a larger database due to the query executing on every read from the DB.
If you are using SQL Server 2012 or 2014, you can (and should) use a [`SEQUENCE`](https://msdn.microsoft.com/en-us/library/ff878091.aspx):

```
CREATE SEQUENCE dbo.FiveCount AS tinyint
    START WITH 1
    INCREMENT BY 1
    CYCLE
    MINVALUE 1
    MAXVALUE 5
;
```

Then define the next value of the `SEQUENCE` as the default value of the column:

```
CREATE TABLE test_table
(
    [Column1] varchar(6),
    [Column2] int CONSTRAINT DF_testTable_col2 DEFAULT NEXT VALUE FOR FiveCount,
    [Column3] int
);
```

[Here is a SQLFiddle](http://sqlfiddle.com/#!6/779da/1) of this `SEQUENCE` being created, followed by a series of inserts, then a select all to show the results of the insertions.

EDIT: Note that inserting the results into the table like this means that the sequence will be broken if a row is deleted. A more dynamic approach might be something like the following (assuming the sequence is already in place):

```
CREATE TABLE test_table
(
    [Column1] varchar(6),
    [Column3] int
);
```

Then, when you want to select from the table:

```
SELECT Column1,
       NEXT VALUE FOR FiveCount AS Column2,
       Column3
FROM test_table
```

This query would ensure that you always get an unbroken sequence, regardless of the state of `test_table`.

If it is important that the `SEQUENCE` always begin with 1, you can `RESTART` it like so:

```
ALTER SEQUENCE dbo.FiveCount RESTART WITH 1
```

[Here's a SQLFiddle](http://sqlfiddle.com/#!6/10768/3) of the `SEQUENCE` being restarted, followed by the select query, followed by a deletion from `test_table`, followed by another instance of the restart and select query.
SQL Server : create a column as an Identity that repeats a pattern
[ "", "sql", "sql-server", "identity", "" ]
I want to select my data by date - from one date until another - so I have this query:

```
SELECT * FROM mytalbe WHERE date BETWEEN '2014-10-09' AND '2014-10-10'
```

But this query only returns the data in `'2014-10-09'`, excluding the data in `'2014-10-10'`, unless I change the query to this:

```
SELECT * FROM mytalbe WHERE date BETWEEN '2014-10-09' AND '2014-10-11'
```

This is not an ideal solution. How can I select the data including the data in `'2014-10-10'`?

**NOTE:** I think my problem is different from other duplicate questions because:

1. My date type is `TEXT`.
2. I need to select the date's data without its time.
3. It is an SQLite database.

My sample data:

```
    sid    nid   timestamp    date
1   20748  5     1412881193   2014-10-09 14:59:53
2   20749  5     1412881300   2014-10-09 15:01:40
3   20750  5     1412881360   2014-10-09 15:02:40
```
You could also just not use `between`.

```
select * from mytable
where `date` >= '2014-10-09' and `date` <= '2014-10-10'
```

Example:

```
mysql> create table dd (id integer primary key auto_increment, date text);
Query OK, 0 rows affected (0.11 sec)

mysql> insert into dd(date) values ('2014-10-08'), ('2014-10-09'), ('2014-10-10'), ('2014-10-11');
Query OK, 4 rows affected (0.05 sec)
Records: 4  Duplicates: 0  Warnings: 0

mysql> select * from dd where date >= "2014-10-09" and date <= "2014-10-10";
+----+------------+
| id | date       |
+----+------------+
|  2 | 2014-10-09 |
|  3 | 2014-10-10 |
+----+------------+
2 rows in set (0.01 sec)
```

Since it includes time, and you don't want the time, this:

```
select substring(date, 1, 10)
from dd
where substring(date, 1, 10) between '2014-10-09' and '2014-10-10';
```

**question updated again, additional answer**

Ugh. You have timestamp fields? In that case this:

```
select date(from_unixtime(timestamp))
from mytable
where date(from_unixtime(timestamp)) between '2014-10-09' and '2014-10-10'
```

**finally we have arrived at sqlite**

```
select date(datetime(timestamp, 'unixepoch'))
from mytable
where date(datetime(timestamp, 'unixepoch')) between '2014-10-09' and '2014-10-10';
```
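Since the question is ultimately about SQLite, here is a quick self-contained check of that last query through Python's built-in `sqlite3` module. The table name and the two extra timestamps are invented for the demo, and `date(ts, 'unixepoch')` is used as shorthand for the answer's `date(datetime(ts, 'unixepoch'))` - the two are equivalent:

```python
import sqlite3

# In-memory table mirroring the question's unix-timestamp column
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (timestamp INTEGER)")
conn.executemany("INSERT INTO mytable VALUES (?)", [
    (1412881193,),  # 2014-10-09 14:59:53 UTC (from the question's sample data)
    (1412967600,),  # 2014-10-10 15:00:00 UTC -- inside the range
    (1413100000,),  # 2014-10-12              -- outside the range
])

# date(..., 'unixepoch') strips the time part, so BETWEEN keeps both end dates
result = [r[0] for r in conn.execute(
    "SELECT date(timestamp, 'unixepoch') FROM mytable "
    "WHERE date(timestamp, 'unixepoch') BETWEEN '2014-10-09' AND '2014-10-10' "
    "ORDER BY timestamp"
)]
print(result)
```

Both in-range rows come back, and neither end date is excluded.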
IF date is a timestamp, you'll need to do like: ``` SELECT * FROM mytalbe WHERE date BETWEEN '2014-10-09 00:00:00' AND '2014-10-10 23:59:59' ``` Or you can do, I believe: ``` SELECT * FROM mytalbe WHERE DATE(date) BETWEEN '2014-10-09' AND '2014-10-10' ``` Or, since it is a text field: ``` SELECT * FROM mytalbe WHERE DATE_FORMAT(date,'%Y-%m-%d') BETWEEN '2014-10-09' AND '2014-10-10' ```
SQLite database - select the data between two dates?
[ "", "sql", "select", "sqlite", "" ]
I use SQL Server 2014. I tried the following query to select between two dates in the same table; the datatype is `nvarchar`. When I executed the query it showed me only three rows, such as `('30/03/2015', '30/04/2015', '30/04/2015')`, but in reality there are `('29/02/2015', '30/03/2015', '31/04/2015', '30/04/2015', '30/04/2015')`:

```
select RegisteredDate
from Student
where Student.RegisteredDate between convert(nvarchar, '30/01/2014', 103)
                                 and convert(nvarchar, '30/04/2015', 103)
```
As i have read the other answers and comments, i could recommend you to firstly change the datatype of "RegisteredDate" from "nvarchar" to "date". Secondly use this standard 'yyyy-MM-dd' below code is what you need ``` select RegisteredDate from Student where Student.RegisteredDate between '2014-01-30' and '2015-04-30' ``` you will not be in need of any conversions, this is how i do it for myself
Cast the other way round, otherwise you are comparing strings:

```
select RegisteredDate
from Student
where convert(date, Student.RegisteredDate, 103) between '20140130' and '20150430'
```

The fact is that those dates saved as strings are ordered as:

```
'29/02/2015',
'30/03/2015',
'30/04/2015',
'30/04/2015',
'31/04/2015'
```

Now imagine where would you add filter values?

```
'29/02/2015',
'30/01/2014'    --start date
/-------------\
|'30/03/2015',|
|'30/04/2015',|
|'30/04/2015',|
\-------------/
'30/04/2015'    --end date
'31/04/2015'
```

So `between` will return you those three rows. Also you have `29/02/2015` in your data. In `2015`, February ends on `28` (you already have incorrect data in your tables). You would never be able to insert such values if you chose types correctly. So the conclusion is: **Use appropriate data types for your data!**
Why my selection between two dates in the same table doesn't work?
[ "", "sql", "sql-server", "sql-server-2014-express", "" ]
I have the following table with two fields:

```
create table test_t
(
    cola varchar(10),
    coldate date
);
```

Inserting some records:

```
insert into test_t values('A','1-1-2010'),
                         ('A','2-1-2010'),
                         ('A','4-1-2010'),
                         ('B','6-1-2010'),
                         ('B','8-1-2010'),
                         ('C','10-1-2010'),
                         ('D','11-1-2010');
```

**Note**: Now I want to show the `cola` values which belong to 2 to 3 days, and show those dates' day numbers in a comma-separated column, as shown below in the expected output.

**Expected Output**:

```
cola   Dates_Day
------------------
A      1,2,4
B      6,8
```
Try this ``` select t1.cola, stuff((SELECT ',' + right(convert(varchar(5),t2.coldate,3),2) from test_t t2 where t2.cola = t1.cola FOR XML PATH('')),1,1,'') AS Dates_Day from test_t t1 group by t1.cola ```
``` SELECT DISTINCT COLA, STUFF ((SELECT ','+ CONVERT(VARCHAR,RIGHT(LEFT(COLDATE,7),2)) FROM TEST_T B WHERE A.COLA = B.COLA FOR XML PATH('')),1,1,'') AS [Dates_Day] FROM TEST_T A ```
SQL Server 2008 R2: Show the dates day into comma separated column
[ "", "sql", "sql-server", "sql-server-2008-r2", "" ]
I'm trying to do a MySQL SELECT query on a table that looks a bit like this:

```
-------------------------------
| id   | domain       | etc...
|------+--------------+--------
| 1    | dev.howmuch  |
|------+--------------+--------
| 2    | devhowmuch   |
```

At the moment I'm just trying to do the simple

```
SELECT * FROM `tbl_swad_apps` WHERE `domain` = 'devhowmuch'
```

Which works fine, however as soon as I try

```
SELECT * FROM `tbl_swad_apps` WHERE `domain` = 'dev.howmuch'
```

It returns no results. My guess is that it's trying to treat the full stop as a wildcard or as a table reference (`table`.`field`). How can I get it to query purely as a string? Thanks

**EDIT:** I appreciate the help guys, but none of these are working. There are no trailing or leading spaces. If I try `LIKE '%howmuch%'` both results turn up; if I do `LIKE '%.howmuch%'` nothing comes up.

**hex(domain) result as requested:**

```
1   dev.howmuch   646576A9686F776D756368
2   devhowmuch    646576686F776D756368
```
The cell contains this:

```
d  e  v  .  h  o  w  m  u  c  h
64 65 76 A9 68 6F 77 6D 75 63 68
```

[Full stop](http://www.fileformat.info/info/unicode/char/2e/index.htm) should probably be `2E` (it's a 7-bit ASCII character so it's the same byte in many encodings, including UTF-8):

```
mysql> SELECT HEX('.');
+----------+
| HEX('.') |
+----------+
| 2E       |
+----------+
1 row in set (0.00 sec)
```

But you have `A9`. That's not a 7-bit ASCII character and we don't know what encoding your data uses so we can't tell what it is (but it's clearly not a dot). In ISO-8859-1 and Windows-1252 it'd be a copyright symbol (©). In UTF-8 it'd be an invalid character, typically displayed as [REPLACEMENT CHARACTER](http://www.fileformat.info/info/unicode/char/0fffd/index.htm) (�) by many clients.
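The byte values quoted in this answer are easy to double-check from Python:

```python
# '.' really is 0x2E in ASCII (and therefore in UTF-8, latin-1, ...)
dot_hex = format(ord('.'), 'X')

# The stray byte from the table, decoded under two common single-byte encodings
stray = bytes([0xA9])
as_latin1 = stray.decode('latin-1')   # ISO-8859-1
as_cp1252 = stray.decode('cp1252')    # Windows-1252

# Under UTF-8 a lone 0xA9 is simply invalid
try:
    stray.decode('utf-8')
    utf8_ok = True
except UnicodeDecodeError:
    utf8_ok = False

print(dot_hex, as_latin1, as_cp1252, utf8_ok)
```

Both single-byte decodings yield the copyright sign, matching the diagnosis above.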
You are not honest. There is no problem with period. Here is a proof: <http://sqlfiddle.com/#!9/2e380/1> The problem should be with the real value in the table that include some nonprinting characters. So you can choose - fix the value in the table. Or use other query like: ``` SELECT * FROM `tbl_swad_apps` WHERE `domain` LIKE '%dev.howmuch%'; ``` or even: ``` SELECT * FROM `tbl_swad_apps` WHERE `domain` LIKE '%dev%.%howmuch%'; ``` <http://sqlfiddle.com/#!9/2e380/4>
SQL Select - Where search term contains '.'
[ "", "mysql", "sql", "" ]
I have read several posts describing a problem very similar to mine but I haven't been able to solve my issue yet. This is it: There are tables A, B, C and D. Table D is referenced by the other 3 tables and only table A has a 'nullable' foreign key, and what I need to find are the rows in D that are NOT being pointed/referenced by any of the other 3 tables. So far I've been able to filter all rows, including those I want to find, with this code:

```
SELECT D.Id as Id, A.Id as A_Id, B.Id as B_Id, C.Id as C_Id
FROM D
LEFT OUTER JOIN A   --this is the nullable one
    ON D.Id = A.D_Id
LEFT OUTER JOIN B
    ON D.Id = B.D_Id
LEFT OUTER JOIN C
    ON D.Id = C.D_Id
```

Although I can see the rows I need in the result by looking at them, and they display the foreign-key field as being null, if I try to filter by `IS NULL` then I get no results at all. This is the query I used:

```
SELECT D.Id as Id, A.Id as A_Id, B.Id as B_Id, C.Id as C_Id
FROM D
LEFT OUTER JOIN A   --this is the nullable one
    ON D.Id = A.D_Id
LEFT OUTER JOIN B
    ON D.Id = B.D_Id
LEFT OUTER JOIN C
    ON D.Id = C.D_Id
WHERE A.D_Id IS NULL AND B.D_Id IS NULL AND C.D_Id IS NULL
```

Please, any help will be most appreciated. Thanks in advance.

## Edit

I have tried this and it worked, although it was a very different approach:

```
SELECT *
FROM D
WHERE Id not in (Select D_Id from A)
  and Id not in (Select D_Id from B)
  and Id not in (Select D_Id from C)
```
The second query is ~~almost~~ correct! Alternatively Instead of `WHERE A.D_Id IS NULL AND B.D_Id IS NULL AND C.D_Id IS NULL` I would use `WHERE A.Id IS NULL AND B.Id IS NULL AND C.Id IS NULL` ``` SELECT D.Id as Id, A.Id as A_Id, B.Id as B_Id, C.Id as C_Id FROM D LEFT OUTER JOIN A --this is the nullable one ON D.Id = A.D_Id LEFT OUTER JOIN B ON D.Id = B.D_Id LEFT OUTER JOIN C ON D.Id = C.D_Id WHERE A.Id IS NULL AND B.Id IS NULL AND C.Id IS NULL ``` [Sql Fiddle Updated](http://sqlfiddle.com/#!6/8871c/21/0)
Have you tried using an OR instead of AND for the is null to get what you want? It could be that it is null in the join from one of the tables and not null in the other. ``` SELECT D.Id as Id, A.Id as A_Id, B.Id as B_Id, C.Id as C_Id FROM D LEFT OUTER JOIN A --this is the nullable one ON D.Id = A.D_Id LEFT OUTER JOIN B ON D.Id = B.D_Id LEFT OUTER JOIN C ON D.Id = C.D_Id WHERE A.D_Id IS NULL OR B.D_Id IS NULL OR C.D_Id IS NULL ``` Edit: Does this give you the results you want? ``` SELECT * FROM D WHERE D.id NOT IN (SELECT D_id FROM A) AND D.id NOT IN (SELECT D_id FROM B) AND D.id NOT IN (SELECT D_id FROM C) ```
Querying for rows with no foreign key (NOT NULL Foreign key) pointing to them
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I am using Hive (which is similar to SQL, but the syntax can be little different for the SQL users). I have looked at the other stackoverflow, but they seems to be in the SQL with different syntax. I am trying to the get the first day of the month through this query. This one gives me today's day. For example, if today is 2015-04-30, then result would be 2015-04-01. Thanks! ``` select cust_id, FROM_UNIXTIME(UNIX_TIMESTAMP(),'yyyy-MM-dd') as first_day_of_month_transaction --DATEADD(MONTH, DATEDIFF(MONTH, 0, GETDATE()), 0) as first_day_of_month_transaction --SQL format. Not compatible in Hive. from customers; ```
To get the first day of the month, you can use: ``` date_add(<date>, 1 - day(<date>) ) ``` Applied to your expression: ``` date_add(FROM_UNIXTIME(UNIX_TIMESTAMP(), 'yyyy-MM-dd'), 1 - day(FROM_UNIXTIME(UNIX_TIMESTAMP(), 'yyyy-MM-dd')) ) ``` But this will work for any column in the right format.
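The `1 - day(<date>)` trick is plain date arithmetic, so it can be sanity-checked outside Hive; a minimal sketch in Python:

```python
from datetime import date, timedelta

def first_day_of_month(d):
    # Same idea as Hive's date_add(d, 1 - day(d)):
    # stepping back (day - 1) days always lands on the 1st
    return d + timedelta(days=1 - d.day)

print(first_day_of_month(date(2015, 4, 30)))  # 2015-04-01
print(first_day_of_month(date(2015, 4, 1)))   # already the 1st, unchanged
```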
Try this ``` date_format(current_date,'yyyy-MM-01') ```
Selecting the first day of the month in HIVE
[ "", "sql", "hive", "hiveql", "" ]
So I want to turn negative values into 0. I found 2 solutions that do exactly the same thing:

```
SUM(IF(ordered_item.amount < 0, 0, ordered_item.amount)) as purchases
```

And

```
SUM(CASE WHEN ordered_item.amount < 0 THEN 0 ELSE ordered_item.amount END) as purchases
```

They both give me the same result, but which would give me the best *performance*? And is there maybe a *simpler* solution to turn negatives into `0`?
An alternate approach you could use is the [GREATEST()](https://dev.mysql.com/doc/refman/8.0/en/comparison-operators.html#function_greatest) function. ``` SUM(GREATEST(ordered_item.amount, 0)) as purchases ```
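The clamping is easy to watch in action without a MySQL server: SQLite's two-argument scalar `max()` plays the same role as MySQL's `GREATEST()` here, so a quick check from Python (table contents invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ordered_item (amount INTEGER)")
conn.executemany("INSERT INTO ordered_item VALUES (?)", [(5,), (-3,), (10,)])

# In SQLite, max(a, b) with two arguments is a scalar 'greatest' function,
# so the negative amount is replaced by 0 before summing
purchases = conn.execute(
    "SELECT SUM(MAX(amount, 0)) FROM ordered_item"
).fetchone()[0]
print(purchases)  # 5 + 0 + 10
```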
You can define the field as "unsigned" so no conversion would be required ``` CREATE TABLE ordered_item ( order_id INT UNSIGNED NOT NULL AUTO_INCREMENT, amount INT UNSIGNED NOT NULL, PRIMARY KEY (`order_id `) ); ``` <https://dev.mysql.com/doc/refman/5.0/en/integer-types.html>
SQL 0 if negative value
[ "", "mysql", "sql", "mysql-workbench", "" ]
So I have these two tables:

[![img](https://i.stack.imgur.com/4A2zY.png)](https://i.stack.imgur.com/4A2zY.png)
(source: [gyazo.com](http://gyazo.com/284ffb2d834ab5aa25ced78fbd30c77c.png))

The Sites table holds all types of websites to vote on, and the Voted_sites table contains all of the sites that have been voted on. Now, sites should get deleted from Voted_sites when the current date is greater than `date_voted + interval vote_wait_time_hours`. So in my case I would do it like this:

```
DELETE FROM voted_sites
LEFT JOIN sites ON voted_sites.site_id = sites.id
WHERE voted_sites.date_voted < NOW() - INTERVAL sites.vote_wait_time_hours HOUR
```

But unfortunately it gives me a syntax error, because I am guessing that you can't use `LEFT JOIN` like that in a delete query. Basically I need to grab the `vote_wait_time_hours` from `sites` that has the same id key.
You could use the `exists` operator with a subquery instead: ``` DELETE FROM voted_sites WHERE EXISTS (SELECT * FROM sites WHERE voted_sites.site_id = sites.id AND voted_sites.date_voted < NOW() - INTERVAL sites.vote_wait_time_hours HOUR); ```
I believe in this situation you just need to specify which table you want to delete from, `JOIN` is not the issue. ``` DELETE voted_sites FROM voted_sites LEFT JOIN sites ON voted_sites.site_id = sites.id WHERE voted_sites.date_voted < NOW() - INTERVAL sites.vote_wait_time_hours HOUR ```
Mysql DELETE from table using information from another table
[ "", "mysql", "sql", "sql-delete", "" ]
I need to use the '\_' wildcard to find all id that are only one letter which there are a few of. However when I run my query no rows are returned. Heres my query: ``` SELECT * FROM table WHERE id LIKE '_'; ``` I have a table lets call Table1 that has two columns, id and name. id either has 1 or 2 characters to label a name. I'm trying to only find the names where the id is only one character. Heres an example of the table: ``` id name A Alfred AD Andy B Bob BC Bridget ``` I only want to return Alfred and Bob in this example. I don't want the solution but any advice or ideas would be helpful. Here is a screenshot of my query: <https://i.stack.imgur.com/a2mCt.png> And here is a small example of my table: <https://i.stack.imgur.com/APInZ.png> So in this example of my table I would ideally like only East Asia... to be returned. I if I search specifically for the character it works but for some strange reason the '\_' wildcard doesn't. For example: ``` SELECT * FROM icao WHERE prefix_code ='Z'; ``` This works.
In MySQL, the underscore is used to represent a wildcard for a single character. You can read more about that [Pattern Matching](https://dev.mysql.com/doc/refman/5.0/en/pattern-matching.html) here. The way you have it written, your query will pull any rows where the id column is just one single character, you don't need to change anything. Here is an [SQL Fiddle](http://sqlfiddle.com/#!9/012d7/4) example. **EDIT** One trouble shooting tip is to be sure there is no whitespace before/after the prefix code. If there is, and you need to remove it, add TRIM(): ``` SELECT * FROM myTable WHERE TRIM(id) LIKE '_'; ``` Here is an example with [TRIM](http://sqlfiddle.com/#!9/7b2eb/2). **EDIT 2** A little explanation to your weird behavior, hopefully. In MySQL, if there is trailing white space on a character, it will still match if you say `id = 'Z';` as seen by [this](http://sqlfiddle.com/#!9/ad832/2) fiddle now. However, leading white space will not match this, but will still be corrected by TRIM(), because that removes white space on the front and back end of the varchar. TL;DR You have trailing white space after Z and that's causing the problem.
Try using TRIM ``` Select * FROM [Table] where TRIM(ID) LIKE '_'; ```
Having trouble matching a single character in an SQL table
[ "", "mysql", "sql", "wildcard", "" ]
How do I group rows in MySQL into chunks of a specific size? I have a table called `users` which has these fields: `id`, `name`. I want to make a query which groups users into chunks with a size of 3 names per row.

Sample table:

```
1 name1
2 name2
3 name4
4 name5
5 name5
6 name6
7 name7
```

The result should be:

```
name1,name2,name3
name4,name5,name6
name7
```

Using `GROUP_CONCAT`, of course, to do that.
<http://sqlfiddle.com/#!9/a6b42/5> ``` SELECT GROUP_CONCAT(name), (IF(@i = 3 OR @i IS NULL, @i:=1, @i:=@i+1 ))as idx, (IF(@gr_idx IS NULL, @gr_idx:=1, IF(@i = 1, @gr_idx:=@gr_idx+1,@gr_idx) )) as gr_idx FROM users GROUP BY gr_idx ```
``` SELECT GROUP_CONCAT(name SEPARATOR ',') FROM users GROUP BY floor((id - 1) / 3); ``` Here is an SQL fiddle demonstrating this: <http://sqlfiddle.com/#!9/f3158/2/0> If the IDs are not ascending or not succeeding then this query may be it: ``` SELECT names from ( SELECT GROUP_CONCAT(name) as names, @rownum := @rownum + 1 AS rank FROM users u, (SELECT @rownum := 0) r GROUP BY floor(@rownum / 3) ) _users ; ``` Fiddle again: <http://sqlfiddle.com/#!9/f3158/13/0>
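Both answers boil down to grouping on `floor((id - 1) / 3)`. With contiguous ids this is easy to confirm from Python against SQLite (note that `GROUP_CONCAT` makes no ordering promise within a group, so the check below sorts each chunk before comparing):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(i, "name%d" % i) for i in range(1, 8)])

# Integer division truncates, so (id - 1) / 3 buckets ids 1-3, 4-6 and 7
chunks = [row[0] for row in conn.execute(
    "SELECT GROUP_CONCAT(name, ',') FROM users "
    "GROUP BY (id - 1) / 3 ORDER BY (id - 1) / 3"
)]
print(chunks)  # three chunks: 3 names, 3 names, 1 name
```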
How to group rows in mysql to be as chunks with a specific size
[ "", "mysql", "sql", "" ]
I have the following table with two fields:

```
create table teste_r
(
    colx varchar(10),
    coly varchar(10)
);
```

Insertion of records:

```
insert into teste_r values('3','A'),('3','B'),('3','C')
                         ,('2','A'),('2','A'),('2','C')
                         ,('1','A'),('1','D');
```

**Note**: Now I want to show the `colx` values which belong to only and only `A and C`. So according to the requirements the expected result should be:

**Expected Result**:

```
colx
-----
2
```
You can use `GROUP BY` with `HAVING COUNT(DISTINCT coly)` like this ``` SELECT colx FROM teste_r r WHERE r.coly IN('A','C') GROUP BY colx HAVING COUNT(distinct coly) = (SELECT COUNT(distinct coly) FROM teste_r r2 WHERE r2.colx = r.colx) AND COUNT(distinct coly) = 2 ``` This is an example of Relational Division with no Remainder. You can get more info in this [article](https://www.simple-talk.com/sql/t-sql-programming/divided-we-stand-the-sql-of-relational-division/) which is written by Joe Celko. You can also look at this [article](https://www.simple-talk.com/sql/learn-sql-server/high-performance-relational-division-in-sql-server/) by Dwain Camps
This query will results like you wish: ``` SELECT DISTINCT tr.colx FROM dbo.teste_r AS tr WHERE tr.colx IN (SELECT tr2.colx FROM dbo.teste_r AS tr2 WHERE tr2.coly IN ('A', 'C')) AND tr.colx NOT IN (SELECT tr2.colx FROM dbo.teste_r AS tr2 WHERE tr2.coly NOT IN ('A', 'C')) ```
SQL Server 2008 R2: Only and only in condition
[ "", "sql", "sql-server", "sql-server-2008-r2", "" ]
I am working on a query in MS SQL Server 2014 that is basically a lot of unions:

```
select x, y where a = b
union
select x, y where a = b
union
select x, y where a = b
```

It works fine; however, the where clauses are identical on every select, and for ease of maintenance I was wondering if there is a better, cleaner way to do this. I was thinking of somehow selecting the data with the where clauses first, then running all the other queries only on this data. But I'm open to any ideas on how to improve this query.
``` ;WITH Test AS ( SELECT x, y UNION SELECT x, y UNION SELECT x, y ) SELECT * FROM Test WHERE a = b ```
You could use a sub query and take the where clause outside of it for ease of maintenance. Just make sure you bring all the columns through in the sub query that you will need in the where clause. Eg ``` SELECT * FROM ( SELECT x,y,a,b FROM table1 union SELECT x,y,a,b FROM table2 UNION SELECT x,y,a,b FROM table3 )subquery WHERE a=b ```
Select from a subset of data
[ "", "sql", "sql-server", "union", "" ]
I have an Access table containing timecard records for jobs:

```
JobID   HoursWorked   Rate
101     1.25
101     0.75          OT
102     0.33          DT
101     0.50
103     2.00
```

I want a query that returns a single record for each `JobID` that has summed the `HoursWorked` field. If `[Rate] = "OT"` then `HoursWorked` should be multiplied by 1.5, or in the case of DT, multiplied by 2. The result would look like this:

```
JobID   TotalHoursWorked
101     2.875
102     0.66
103     2.00
```

I came up with this query which successfully SUMs the different types of `Rate`:

```
SELECT JobID, Sum(HoursWorked) AS TotalHoursWorked
FROM Timecards
GROUP BY JobID, Rate;
```

But I'm stuck at how to multiply the different rates and return a single total by JobID.
Use a [Switch](https://support.office.com/en-us/article/Switch-Function-d750c10d-0c8e-444c-9e63-f47504f9e379?CorrelationId=52682d26-bf61-44fa-b17d-267a7a2e710d&ui=en-US&rs=en-US&ad=US) expression to compute the multiplier for each *Rate*. Multiply *HoursWorked* by those values and `Sum()` it all up in a `GROUP BY`:

```
SELECT t.JobID,
       Sum(t.HoursWorked * Switch(t.Rate='OT',1.5,t.Rate='DT',2,True,1)) AS TotalHoursWorked
FROM Timecards AS t
GROUP BY t.JobID;
```

That query returned your expected result when tested with your data in Access 2010. Here is the `Switch` expression broken out and formatted. It returns the value from the first expression/value pair where the expression evaluates as True, and ignores any remaining pairs. The final expression/value pair (`True, 1`) returns 1 for any *Rate* which does not match *'OT'* or *'DT'*:

```
Switch(
    t.Rate='OT', 1.5,
    t.Rate='DT', 2,
    True, 1
)
```
You can use the [IIF function](https://support.office.com/en-us/article/IIf-Function-32436ecf-c629-48a3-9900-647539c764e3) to conditionally determine what to aggregate: ``` SELECT JobID, SUM(IIF(rate = 'OT', HoursWorked * 1.5, IIF(rate = 'DT' ,HoursWorked * 2.0, HoursWorked))) AS TotalHoursWorked FROM Timecards GROUP BY JobID; ``` The logic is *iif(boolean test, value if true, value if false)* Since you have two conditions to test you have to nest the iif-expressions so if rate = 'OT' then multiply by 1,5, else check if rate = 'DT' and if so multiply with 2, or use the value as is if rate is something else. Also, when you use aggregate functions you should not group by the column(s) you aggregate. If you do you'll most likely not get the result you want.
Sum straight and overtime in single Access query
[ "", "sql", "ms-access", "ms-access-2013", "jet", "" ]
I have created a datatype in PostgreSQL using the following line:

```
CREATE TYPE ABC AS (A CHARACTER(1), B CHARACTER(2), C BIGINT);
```

I didn't define this datatype. Now I want to delete this prototype. What is the way or command to delete this?
You can remove a data type using ``` DROP TYPE type_name; ``` [Click here for manual reference for DROP TYPE](http://www.postgresql.org/docs/current/static/sql-droptype.html) Remember, that if you have other objects that depend on the type you are trying to delete, it would yield an error ``` ERROR: cannot drop type type_name because other objects depend on it ``` with list of dependencies. If you would also like to `DROP` those objects type ``` DROP TYPE type_name CASCADE; ```
Try removing this `TYPE` this way:

```
DROP TYPE ABC;
```
How to delete a custom data type in PostgreSQL?
[ "", "sql", "postgresql", "" ]
I'm wondering there is a difference between SQL variables and subqueries. Whether one uses more processing power, or one is quicker, or even if one merely is more readable. For (a very basic) example, I like to use variables to hold polygon and transformations in PostGIS: ``` WITH region_polygon AS ( SELECT ST_Transform(wkb_geometry, %(fishnet_srid)d) geom FROM regions LIMIT 1 ), raster_pixels AS ( SELECT (ST_PixelAsPolygons(rast)).* FROM test_regions_raster LIMIT 1 ) SELECT x, y FROM raster_pixels a, region_polygon b WHERE ST_Within(a.geom, b.geom) ``` But would it be better in any way to use subqueries? ``` SELECT x, y FROM ( SELECT ST_Transform(wkb_geometry, %(fishnet_srid)d) geom FROM regions LIMIT 1 ) a, ( SELECT (ST_PixelAsPolygons(rast)).* FROM test_regions_raster LIMIT 1 ) b WHERE ST_Within(a.geom, b.geom) ``` Note that I'm using PostgreSQL.
There's an important syntactic advantage of common table expressions over derived tables when it comes to reuse. Consider the following, equivalent examples using self-joins:

**Using common table expressions**

```
WITH a(v) AS (SELECT 1 UNION SELECT 2)
SELECT *
FROM a AS x, a AS y
```

**Using derived tables**

```
SELECT *
FROM (SELECT 1 UNION SELECT 2) x(v),
     (SELECT 1 UNION SELECT 2) y(v)
```

As you can see, using common table expressions, the view `(SELECT 1 UNION SELECT 2)` can be reused multiple times in your query. With derived tables, you will have to repeat your view declaration. In my example, this is still OK. In your own example, this starts getting a bit more hairy.

### It's all about scope

Views in SQL are all about scoping. There are essentially four levels of declaring views:

* As derived tables. They can be consumed exactly once.
* As common table expressions. They can be consumed several times, but only in one query.
* As views. They can be consumed several times in several queries.
* As materialized views. Same as views, but the data is pre-calculated.

Some databases (in particular PostgreSQL) also know table-valued functions. From a mere syntax perspective, they're just like views - parameterised views.

### Performance

Note that these thoughts only focus on syntax, not query planning. The different approaches may have very different performance implications, depending on the database vendor.
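The self-join example above runs unchanged on most engines that support CTEs; for instance, through SQLite's Python driver:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The CTE is declared once and consumed twice (as x and as y)
rows = conn.execute("""
    WITH a(v) AS (SELECT 1 UNION SELECT 2)
    SELECT x.v, y.v FROM a AS x, a AS y
    ORDER BY x.v, y.v
""").fetchall()
print(rows)  # the 2 x 2 cross product of the CTE with itself
```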
As pointed out this construct is called [Common Table Expression](http://www.postgresql.org/docs/9.1/static/queries-with.html), not a variable. I prefer to use CTE, rather than subquery, because it is way easier to read and write for me, especially when you have several nested CTEs. You can write CTE once and refer to it several times in the rest of the query. With subquery you'll have to repeat the code several times. Important difference of PostgreSQL from other databases (at least from MS SQL Server) is that PostgreSQL evaluates each CTE only once. > A useful property of WITH queries is that they are evaluated only once > per execution of the parent query, even if they are referred to more > than once by the parent query or sibling WITH queries. Thus, expensive > calculations that are needed in multiple places can be placed within a > WITH query to avoid redundant work. Another possible application is to > prevent unwanted multiple evaluations of functions with side-effects. > However, the other side of this coin is that the optimizer is less > able to push restrictions from the parent query down into a WITH query > than an ordinary sub-query. The WITH query will generally be evaluated > as written, without suppression of rows that the parent query might > discard afterwards. (But, as mentioned above, evaluation might stop > early if the reference(s) to the query demand only a limited number of > rows.) MS SQL Server would inline each reference of CTE into the main query and optimize the whole result, but PostgreSQL doesn't. In some sense PostgreSQL is more flexible here. If you want the subquery to be evaluated only once, put it in CTE. If you don't want, put it in subquery and repeat the code. In SQL Server you'd have to use temporary table explicitly. Your example in the question is too simple and most likely both variants are equivalent - check the execution plan. 
--- Official docs mention it, as I quoted above, but Nick Barnes gave a link to a [good article explaining it in more details](http://blog.2ndquadrant.com/postgresql-ctes-are-optimization-fences/) and I thought it is worth putting it in an answer, rather that comment. > When optimising queries in PostgreSQL (true at least in 9.4 and > older), it’s worth keeping in mind that – unlike newer versions of > various other databases – PostgreSQL will always materialise a CTE > term in a query. > > This can have [quite surprising effects for those used to working with > DBs like MS SQL](https://stackoverflow.com/questions/24052356/common-table-expression-with-clause-in-postgresql-error-relation-stkpos-does): > > * A query that should touch a small amount of data instead reads a whole > table and possibly spills it to a tempfile; > * and You cannot UPDATE or > DELETE FROM a CTE term, because it’s more like a read-only temp table > rather than a dynamic view. So, there is no definite answer whether CTE is better than subquery in PostgreSQL. In some cases it can be faster, in some cases it can be slower. But, IMHO, in most cases CTE is easier to write, read and maintain. And, obviously, there is a case when you have no other option, but to use so-called recursive CTE (recursive queries are typically used to deal with hierarchical or tree-structured data).
What are the pros/cons of using SQL variables versus subqueries?
[ "", "sql", "postgresql", "" ]
I have the following table (simplification of the real problem):

```
+----+-------+
| id | value |
+----+-------+
| 1  | T     |
| 2  | T     |
| 3  | F     |
| 4  | T     |
+----+-------+
```

Now a simple `SELECT id FROM Table WHERE value='T';` would get me all the IDs where value is T, but I just need, in the example above, the first 2 (1 and 2). What is the best way to do this? I'd prefer not to use a while loop. I tagged it MySQL, but a solution working for most database engines would be better.

Edit: based on the answers, I probably wasn't clear enough: I only want the first IDs where value is 'T'. This can be anything from no values to all values.

Edit 2: another example:

```
+----+-------+
| id | value |
+----+-------+
| 1  | F     |
| 2  | T     |
| 5  | T     |
| 6  | F     |
| 7  | T     |
| 9  | T     |
+----+-------+
```

The result would be [].

Example 3:

```
+----+-------+
| id | value |
+----+-------+
| 1  | T     |
| 2  | T     |
| 5  | T     |
| 6  | F     |
| 7  | T     |
| 9  | T     |
+----+-------+
```

And the result: [1, 2, 5]
Are you after something as simple as this? just limiting the result? ``` select id from table where value = 'T' order by id asc limit 2 ``` Just change the order to `desc` instead of `asc` if for some reason you want the last two matches, instead of the first two. I see the criteria has changed a little. ``` select id from `table` t where t.id >= (select @min := min(id) from `table` t2 where value = 'T') and not exists (select id from `table` t3 where value = 'F' and id > @min and id < t.id) and value = 'T' ``` [demo here](http://sqlfiddle.com/#!9/ed50f/7) If you want no results when the first value is 'F', then this one: ``` select id from `table` t where not exists (select id from `table` t3 where value = 'F' and id < t.id) and value = 'T' ```
If Ids are auto-incremented then `order by` `limit` could be used ``` select * from table_name where value = 'T' order by id limit 2 ```
How to select only the first elements satisfying a condition?
[ "", "mysql", "sql", "while-loop", "ansi-sql", "" ]
I have two tables: `Students (id, fname, lname)` and `Registrar (id, student_id, class_id)`. I teach a class with class ID 99. I want to find all students in my class, and list those students *and all the other classes they're taking*. I can do the following: ``` SELECT s.fname, s.lname, r.class_id FROM Students AS s JOIN Registrar AS r ON s.id = r.student_id WHERE r.student_id IN (SELECT student_id FROM Registrar WHERE class_id = 99); ``` This works, but it's slow. I don't have a lot of experience with this (and as my title suggests, I have no idea what this is called), but I'm guessing it's a fairly common problem to solve. Any pointers in the right direction are most welcome.
This should work. Join to the registrar table once on the `class_id = 99` condition, and then again with no `class_id` restriction to get ALL the classes for the student.

```
select s.fname, s.lname, r2.class_id
from students s
inner join registrar r on s.id = r.student_id and r.class_id = 99
inner join registrar r2 on s.id = r2.student_id;
```

[demo fiddle here](http://sqlfiddle.com/#!9/37c78/1)

With an index on `(student_id, class_id)` in the registrar table, and on `id` in the students table, this should be quite fast.

**Based on comments:** it's possible classes may be repeated, so we can just group to get rid of those:

```
select s.fname, s.lname, r2.class_id
from students s
inner join registrar r on s.id = r.student_id and r.class_id = 99
inner join registrar r2 on s.id = r2.student_id
group by s.id, r2.class_id
```

This abuses MySQL's handling of `GROUP BY` somewhat, but it's okay because `fname` and `lname` are identical in every row within each group.
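Here's a minimal, self-contained check of the double-join idea, using SQLite and made-up names (the schema mirrors the question's; only the student names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE students (id INTEGER PRIMARY KEY, fname TEXT, lname TEXT);
CREATE TABLE registrar (id INTEGER PRIMARY KEY, student_id INTEGER, class_id INTEGER);
INSERT INTO students VALUES (1,'Ann','Ames'), (2,'Bob','Bell'), (3,'Cat','Cole');
-- Ann takes 99 and 50, Bob takes only 99, Cat is not in class 99 at all
INSERT INTO registrar VALUES (1,1,99), (2,1,50), (3,2,99), (4,3,50);
""")

rows = conn.execute("""
    SELECT s.fname, r2.class_id
    FROM students s
    JOIN registrar r  ON s.id = r.student_id AND r.class_id = 99
    JOIN registrar r2 ON s.id = r2.student_id
    ORDER BY s.id, r2.class_id
""").fetchall()
print(rows)  # [('Ann', 50), ('Ann', 99), ('Bob', 99)]
```

Cat never appears, because the first join only keeps students enrolled in class 99, while the second join fans each of those students out to all their classes.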
`IN` statement is usually slow. Then you can get better performance by using `EXISTS` statement like : ``` SELECT s.fname, s.lname, r.class_id FROM students AS s JOIN registrar AS r ON s.id = r.student_id WHERE EXISTS (SELECT 1 FROM registrar WHERE registrar.student_id = s.id AND registrar.class_id = 99); ```
Get every row in a table if (as a group) they satisfy a condition
[ "", "mysql", "sql", "" ]
I searched around I found that there are ways to transfer/sync data from sql-server to mongodb. I also know that Mongodb contains collections instead of table and the data is stored differently. I want to know whether it is possible to move data from mongodb to sql-server. If yes, then how and what are the tools/topics should I use?
Of course it's possible, but you will need to find a way to force the flexibility of a document DB like MongoDB into an RDBMS like SQL Server. That means you need to define how you want to handle missing fields (will it be a NULL in the column, or a default value?) and other things that usually don't fit well in a relational database.

That said, you can use an ETL tool able to connect to both databases. SSIS is an example if you want to stay in the Microsoft world (you can check [Importing MongoDB Data Using SSIS 2012](http://arcanecode.com/2014/01/14/importing-mongodb-data-using-ssis-2012/) to get an idea), or you can go for an open source tool like [Talend Big Data Integration](http://www.talend.com/products/big-data), which has a connector to MongoDB (and of course to SQL Server).
There is no way to directly move data from MongoDB to SQL Server. Because MongoDB data is non-relational, any such movement must involve defining a target relational data model in SQL Server, and then developing a transformation that can take the data in MongoDB and transform it into the target data model. Most ETL tools such as Kettle or Talend can help you with this process, or if you're a glutton for punishment, you can just write gobs of code. Keep in mind that if you need this transformation process to be online, or applied more than once, you may need to tweak it for any small changes in the structure or types of the data stored in MongoDB. As an example, if a developer adds a new field to a document inside a collection, your ETL process will need rethinking (possibly new data model, new transformation process, etc.). If you are not sold on SQL Server, I'd suggest you consider Postgres, because there is a widely-used open source tool called [MoSQL](https://github.com/stripe/mosql) that has been developed expressly for the purpose of syncing a Postgres database with a MongoDB database. It's primarily used for reporting purposes (getting data out of MongoDB and into an RDBMS so one can layer analytical or reporting tools on top). MoSQL enjoys wide adoption and is well supported, and for badly tortured data, you always have the option of using the Postgres JSON data type, which is not supported by any analytics or reporting tools, but at least allows you to directly query the data in Postgres. Also, and now my own personal bias is showing through, Postgres is 100% open source, while SQL Server is 100% closed source. :-) Finally, if you are *only* extracting the data from MongoDB to make analytics or reporting easier, you should consider [SlamData](http://github.com/slamdata/slamengine), an open source project I started last year that makes it possible to execute ANSI SQL on MongoDB, using 100% in-database execution (it's basically a SQL-to-MongoDB API compiler). 
Most people using the project seem to be using it for analytics or reporting use cases. The advantage is that it works with the data as it is, so you don't have to perform ETL, and of course it's always up to date because it runs directly on MongoDB. A disadvantage is that no one has yet built an ODBC / JDBC driver for it, so you can't directly connect BI tools to SlamData. Good luck!
How to migrate data from MongoDB to SQL-Server?
[ "", "sql", "sql-server", "database", "mongodb", "data-migration", "" ]
We've a database that stores the values of employees per month (for example the part-time percentage): ``` +-----+------+-------+----------+ | emp | year | month | parttime | +-----+------+-------+----------+ | 1 | 2015 | 1 | 100 | | 1 | 2015 | 2 | 100 | | 1 | 2015 | 3 | 100 | | 1 | 2015 | 4 | 100 | | 2 | 2015 | 1 | 80 | | 2 | 2015 | 2 | 100 | | 2 | 2015 | 3 | 100 | | 2 | 2015 | 4 | 80 | | 3 | 2015 | 1 | 60 | | 3 | 2015 | 2 | 60 | | 3 | 2015 | 3 | 80 | | 3 | 2015 | 4 | 100 | +-----+------+-------+----------+ ``` for reporting purposes i need to display the values in a from/until form: ``` +-----+---------+---------+----------+ | emp | from | to | parttime | +-----+---------+---------+----------+ | 1 | 2015.01 | 2015.04 | 100 | | 2 | 2015.01 | 2015.01 | 80 | | 2 | 2015.02 | 2015.03 | 100 | | 2 | 2015.04 | 2015.04 | 80 | | 3 | 2015.01 | 2015.02 | 60 | | 3 | 2015.03 | 2015.03 | 80 | | 3 | 2015.04 | 2015.04 | 100 | +-----+---------+---------+----------+ ``` my first attempt was to solve it with a simple min/max approach. but employee nr. 2 is a bit tricky with the recurrent value of 80. any ideas/examples? Database is based on db/2 or microsoft. thanks Philipp
This would be easier if your db stored a full date instead of just the year/month (or at least an equivalent combined type). Or if you could operate over the original base data: ``` SELECT emp, partTime, MIN(monthStart) AS monthStart, MAX(monthNext) AS monthEnd FROM (SELECT emp, partTime, DATEADD(month, month - 1, DATEADD(year, year - 1, CAST('00010101' AS DATE))) AS monthStart, DATEADD(month, month, DATEADD(year, year - 1, CAST('00010101' AS DATE))) AS monthNext, ROW_NUMBER() OVER(PARTITION BY emp ORDER BY year, month) - ROW_NUMBER() OVER(PARTITION BY emp, partTime ORDER BY year, month) AS groupId FROM Monthly_Hours) AS Grouping GROUP BY emp, partTime, groupId ORDER BY emp, monthStart ``` `SQL Fiddle Example` Note that I'm specifically using an exclusive upper-bound on the range. Date/time/timestamp types, like all positive, contiguous-range types (anything but an explicit integer count) should always be addressed in this fashion (it makes reasoning about and querying them much easier). This answer is slightly deficient, in that missing months aren't directly reported (don't show as `0`) - there are ways to correct this, if necessary, although it takes more work.
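The grouping trick above (the difference of two `ROW_NUMBER()` sequences is constant within each run of equal values) is portable to any engine with window functions. A minimal illustration for employee 2, using SQLite (3.25+ for window functions) and plain month numbers standing in for the dates:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE hours (emp INTEGER, month INTEGER, parttime INTEGER);
INSERT INTO hours VALUES (2,1,80), (2,2,100), (2,3,100), (2,4,80);
""")

rows = conn.execute("""
    SELECT emp, parttime, MIN(month) AS m_from, MAX(month) AS m_to
    FROM (
        SELECT emp, month, parttime,
               ROW_NUMBER() OVER (PARTITION BY emp ORDER BY month)
             - ROW_NUMBER() OVER (PARTITION BY emp, parttime ORDER BY month) AS grp
        FROM hours
    )
    GROUP BY emp, parttime, grp
    ORDER BY m_from
""").fetchall()
print(rows)  # [(2, 80, 1, 1), (2, 100, 2, 3), (2, 80, 4, 4)]
```

The recurring `80` correctly lands in two separate ranges, because its two occurrences get different `ROW_NUMBER()` differences.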
This is called the Gaps and Islands problem. One quick solution for it: ``` DECLARE @Employee TABLE (emp int, year int, month int, parttime int) INSERT INTO @Employee VALUES (1, 2015, 1, 100), (1, 2015, 2, 100), (1, 2015, 3, 100), (1, 2015, 4, 100), (2, 2015, 1, 80), (2, 2015, 2, 100), (2, 2015, 3, 100), (2, 2015, 4, 80), (3, 2015, 1, 60), (3, 2015, 2, 60), (3, 2015, 3, 80), (3, 2015, 4, 100) ;WITH cte AS ( SELECT * ,e.[month] - ROW_NUMBER() OVER (ORDER BY e.emp, e.[parttime]) AS Grp FROM @Employee e ) SELECT emp, CAST([year] AS varchar(50)) + '.' + CAST(MIN([month])AS varchar(50)) AS [from], CAST([year] AS varchar(50)) + '.' + CAST(MAX([month])AS varchar(50)) AS [to], parttime FROM cte GROUP BY emp, parttime, year, Grp ORDER BY emp, [from] ```
sql convert from "record per month" to "record from/until"
[ "", "sql", "sql-server", "t-sql", "db2", "" ]
I have two queries that give me the result. What I am trying to achieve is to get the result of the first query on the top (as the first value), and then the results of the second query under that, so the exact result will come first (`Inlagg.Categori = 'testcategory'`), and then if there is more results with the `LIKE 'testcategory%'` show them under and order them after amount of categories. How do I match these queries together? ``` SELECT Inlagg.Categori, COUNT(Inlagg.Categori) AS 'amount' FROM Inlagg WHERE Inlagg.Date > '2015-04-26' AND Inlagg.Categori = 'testcatego' GROUP BY Inlagg.Categori, Inlagg.Date SELECT Inlagg.Categori, COUNT(Inlagg.Categori) AS 'amount' FROM Inlagg WHERE Inlagg.DatE > '2015-04-26' AND Inlagg.Kategori LIKE 'testcatego%' GROUP BY Inlagg.Categori, Inlagg.Date ORDER BY amount DESC ```
As Parado's answer is on the right track and you need to `ORDER BY amount DESC`, I suggest this query:

```
SELECT *
FROM (
    SELECT Inlagg.Categori, COUNT(Inlagg.Categori) AS 'amount', '0' AS Ord
    FROM Inlagg
    WHERE Inlagg.Date > '2015-04-26'
      AND Inlagg.Categori = 'testcatego'
    GROUP BY Inlagg.Categori, Inlagg.Date

    UNION ALL

    SELECT Inlagg.Categori, COUNT(Inlagg.Categori) AS 'amount', '1' AS Ord
    FROM Inlagg
    WHERE Inlagg.Datum > '2015-04-26'
      AND Inlagg.Kategori LIKE 'testcatego%'
    GROUP BY Inlagg.Categori, Inlagg.Date
) DT
ORDER BY Ord, amount DESC
```

Trick: **Ord** keeps track of which query each row came from, so the exact match sorts first.
Try to use `UNION ALL` as below ``` SELECT Inlagg.Categori, COUNT(Inlagg.Categori) AS 'amount' FROM Inlagg WHERE Inlagg.Date > '2015-04-26' AND Inlagg.Categori = 'testcatego' GROUP BY Inlagg.Categori, Inlagg.Date UNION ALL SELECT Inlagg.Categori, COUNT(Inlagg.Categori) AS 'amount' FROM Inlagg WHERE Inlagg.Datum > '2015-04-26' AND Inlagg.Kategori LIKE 'testcatego%' GROUP BY Inlagg.Categori, Inlagg.Date ```
How to get result of two querys in the same result within SQL Server?
[ "", "sql", "sql-server", "" ]
In a trip there are several stops, (a stop = an adress whereone or multiple orders are loaded, or delivered), in a specific order. For example: ``` Trip A Trip_order Action Place Ordernumber 10 Load Paris 394798 20 Load Milan 657748 30 UnLoad Athens 657748 40 Unload Thessaloniki 394798 50 Load Thessaloniki 10142 60 Load Thessaloniki 6577 70 Unload Athens 6577 80 Unload Athens 10412 90 Load Thessaloniki 975147 100 Unload Paris 975147 ``` I want to see the specific stops, in order of the trip: ``` Load Paris Load Milan Unload Athens Unload Thessaloniki Load Thessaloniki Unload Athens Load Thessaloniki Unload Paris ``` I did look at [This](https://stackoverflow.com/questions/5391564/how-to-use-distinct-and-order-by-in-same-select-statement), but if I do that, I only get the unload Athens, unload Thessaloniki and Load Thessaloniki once. How do I solve this? EDIT: 11:11 (UTC +01:00) To be more specific: these are the tables which present this information: ``` Trips Trip_ID 100001 100002 100003 .... Actions Trip_ID Action MatNr RoOr RoVlg OrderID 100001 1 10 10 1 394798 100001 1 10 20 1 657748 100001 1 10 30 1 657748 100001 1 10 40 1 394798 100001 1 10 50 1 10142 100001 1 10 60 1 6577 100001 1 10 70 1 6577 100001 1 10 80 1 10412 100001 1 10 90 1 975147 100001 1 10 100 1 975147 ``` (Action: 1=load, 4=unload) The combination of MatNr, RoOr and RoVlg is the order of the Trip. ``` Orders OrderID LoadingPlace UnloadingPlace 6577 Thessaloniki Athens 10142 Thessaloniki Athens 394798 Paris Thessaloniki 657748 Milan Athens 975147 Thessaloniki Paris ```
Try this one. No variables, nothing especially fancy:

```
select a1.action, a1.place
from trip_a a1
left join trip_a a2
  on a2.trip_order = (select min(trip_order)
                      from trip_a a3
                      where trip_order > a1.trip_order)
where a1.action != a2.action
   or a1.place != a2.place
   or a2.place is null
```

Demo here: <http://sqlfiddle.com/#!9/4b6dc/13>

Hopefully it works on whatever SQL engine you're using; it should, so long as subqueries are supported. It simply finds the next highest `trip_order` and joins to it, or joins to `null` if there is no higher `trip_order`. It then selects only the rows where the `place`, the `action`, or both are different, or where there is no row in the joined table (`a2.place is null`).

**Edited after the criteria changed completely**

If you want to get the same results built entirely from your base tables, you can do this:

```
select case when a.action = 1 then 'load'
            when a.action = 0 then 'unload' end as action,
       case when a.action = 1 then o.loadingplace
            when a.action = 0 then o.unloadingplace end as place
from trips t
inner join actions a on t.trip_id = a.trip_id
inner join orders o on a.orderid = o.orderid
left join actions a2
  on a2.roor = (select min(roor) from actions a3 where a3.roor > a.roor)
left join orders o2 on a2.orderid = o2.orderid
where a.action != a2.action
   or a2.action is null
   or case when a.action = 1 then o.loadingplace != o2.loadingplace
           when a.action = 0 then o.unloadingplace != o2.unloadingplace end
order by a.roor asc
```

And here's an updated fiddle: <http://sqlfiddle.com/#!9/fdf9c/14>
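On an engine with window functions, `LAG()` gives a simpler alternative to the correlated `min(trip_order)` subquery: keep a stop only when it differs from the previous one. A sketch with the question's sample trip, using SQLite (3.25+ for window functions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE trip (trip_order INTEGER, action TEXT, place TEXT);
INSERT INTO trip VALUES
  (10,'Load','Paris'), (20,'Load','Milan'), (30,'Unload','Athens'),
  (40,'Unload','Thessaloniki'), (50,'Load','Thessaloniki'),
  (60,'Load','Thessaloniki'), (70,'Unload','Athens'), (80,'Unload','Athens'),
  (90,'Load','Thessaloniki'), (100,'Unload','Paris');
""")

rows = conn.execute("""
    SELECT action, place
    FROM (
        SELECT trip_order, action, place,
               LAG(action) OVER (ORDER BY trip_order) AS prev_action,
               LAG(place)  OVER (ORDER BY trip_order) AS prev_place
        FROM trip
    )
    WHERE prev_action IS NULL          -- first stop of the trip
       OR prev_action <> action        -- action changed
       OR prev_place  <> place         -- place changed
    ORDER BY trip_order
""").fetchall()
print(len(rows))  # 8 stops, matching the expected output in the question
```

This keeps the *first* row of each run of identical (action, place) pairs, whereas the correlated-subquery version keeps the *last*; the resulting sequence of stops is the same.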
You don't need and you don't want to use `distinct` for that because we can see in your example that several destinations occur multiple times. What you want: filter out records that match the preceding record in terms of action and place. This could look something like this: ``` SELECT * FROM Trips t1 LEFT JOIN Trips t2 ON t1.Trip_Order = t2.Trip_Order - 10 WHERE t1.Action <> t2.Action OR t1.Place <> t2.Place) ```
Howto select (almost) unique values in a specific order
[ "", "sql", "distinct", "" ]
I need to display each employee's `last_name` and commission amount from the employees table in Oracle SQL, but the condition is that if a `NULL` value is encountered I need to print *"No Commission"*. For the first part I wrote:

```
select last_name, commission_pct from employees;
```

But I cannot figure out how to replace the `NULL` values with *"No Commission"*.
You can use a `case` expression:

```
select last_name,
       case when commission_pct is null then 'No Commission'
            else commission_pct
       end
from employees;
```

or `coalesce`:

```
select last_name, coalesce(commission_pct, 'No Commission')
from employees;
```

or `nvl`:

```
select last_name, nvl(commission_pct, 'No Commission')
from employees;
```

*P.S. If `commission_pct`'s datatype is not `varchar`, you should also use `cast` or `to_char`, e.g. `nvl(to_char(commission_pct), 'No Commission')`.*
For Oracle ``` select last_name, nvl(commission_pct,'No Commission') from employees; ``` For SQL ``` select last_name, isnull(commission_pct,"No Commission") as commission_pct from employees; ```
How to replace null values with a text?
[ "", "sql", "oracle", "null", "" ]
I have two tables, `abb` and `abb_click`. `abb` records which object **ident**s are active. `abb_click` contains information from each **student\_id** regarding each **ident**. I "simply" want to count the active **ident**s in `abb_click` for a specific **student\_id**. The SELECT below seems to count only the active **ident**s, without checking whether the **ident** is in `abb_click`. You can see the fiddle here: <http://sqlfiddle.com/#!9/b7262/1>

---

**The result should be:** for student\_id 945 - 2 active **ident**s

**Question:** how do I "tweak" the SELECT to count the active **ident**s in table `abb` joined with `abb_click`?

---

```
SELECT t.student_id, number_of_idents
FROM `abb_click` AS t
INNER JOIN
    (SELECT ident, COUNT(ident) as number_of_idents
     FROM `abb` AS k
     WHERE k.active = '1'
    ) AS t3
ON t.ident = t3.ident
WHERE t.student_id = '945'
GROUP BY t.student_id
ORDER BY number_of_idents ASC;
```

**Table `abb`**

```
bid, ident, active
```

**Table `abb_click`**

```
kid, ident, student_id, click
```

**Data to table `abb`**

```
1, 'ma53', 1
2, 'ma664', 1
3, 'ma779', 0
4, 'ma919', 1
```

**Data to table `abb_click`**

```
1, 'ma53', 945, 'E'
2, 'ma53', 945, 'E'
3, 'ma53', 945, 'C'
4, 'ma664', 945, 'C'
5, 'ma664', 945, 'A'
6, 'ma664', 945, 'E'
7, 'ma779', 945, 'A'
```
I believe this should do what you want. ``` select student_id, COUNT(distinct ac.ident) as active_idents from abb_click ac join abb on abb.ident = ac.ident where abb.active = 1 --and student_id = 945 group by student_id ```
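A quick check of this approach against the question's data (SQLite sketch; expected answer is 2 active idents for student 945, since `ma779` is inactive and `ma919` was never clicked):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE abb (bid INTEGER, ident TEXT, active INTEGER);
CREATE TABLE abb_click (kid INTEGER, ident TEXT, student_id INTEGER, click TEXT);
INSERT INTO abb VALUES (1,'ma53',1), (2,'ma664',1), (3,'ma779',0), (4,'ma919',1);
INSERT INTO abb_click VALUES
  (1,'ma53',945,'E'), (2,'ma53',945,'E'), (3,'ma53',945,'C'),
  (4,'ma664',945,'C'), (5,'ma664',945,'A'), (6,'ma664',945,'E'),
  (7,'ma779',945,'A');
""")

rows = conn.execute("""
    SELECT ac.student_id, COUNT(DISTINCT ac.ident) AS active_idents
    FROM abb_click ac
    JOIN abb ON abb.ident = ac.ident
    WHERE abb.active = 1
    GROUP BY ac.student_id
""").fetchall()
print(rows)  # [(945, 2)]
```

`COUNT(DISTINCT ...)` is what collapses the three `ma53` clicks and three `ma664` clicks down to two distinct active idents.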
Unless I've miss-understood this should work. ``` SELECT abb.*, COUNT(abb_click.id) as total_abb_clicks FROM abb INNER JOIN abb_click ON abb_click.ident = abb.ident WHERE abb.active = 1 && abb_click.student_id = 945 GROUP BY abb.id ``` -- Edit: Sorry I forgot the student condition. So this should return: bid, ident, active, total\_abb\_clicks 1, 'ma53', 1 , 3 2, 'ma664', 1, 3 4, 'ma919', 1, 1
Group and count returned SQL
[ "", "mysql", "sql", "list", "" ]
I have a SQL Server 2008 database. In this database, I have a result set that looks like the following: ``` ID Name Department LastOrderDate -- ---- ---------- ------------- 1 Golf Balls Sports 01/01/2015 2 Compact Disc Electronics 02/01/2015 3 Tires Automotive 01/15/2015 4 T-Shirt Clothing 01/10/2015 5 DVD Electronics 01/07/2015 6 Tennis Balls Sports 01/09/2015 7 Sweatshirt Clothing 01/04/2015 ... ``` For some reason, my users want to get the results ordered by department, then last order date. However, not by department name. Instead, the departments will be in a specific order. For example, they want to see the results ordered by Electronics, Automotive, Sports, then Clothing. To throw another kink in works, I cannot update the table schema. Is there a way to do this with a SQL Query? If so, how? Currently, I'm stuck at ``` SELECT * FROM vOrders o ORDER BY o.LastOrderDate ``` Thank you!
You can use a `case` expression:

```
order by case when department = 'Electronics' then 1
              when department = 'Automotive' then 2
              when department = 'Sports' then 3
              when department = 'Clothing' then 4
              else 5
         end
```
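This pattern works on any engine, and it composes with the secondary sort on the date. A quick sketch with the question's departments (SQLite, hypothetical rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE vOrders (name TEXT, department TEXT, lastOrderDate TEXT);
INSERT INTO vOrders VALUES
  ('Golf Balls','Sports','2015-01-01'),
  ('Compact Disc','Electronics','2015-02-01'),
  ('Tires','Automotive','2015-01-15'),
  ('T-Shirt','Clothing','2015-01-10');
""")

rows = conn.execute("""
    SELECT department FROM vOrders
    ORDER BY CASE department
               WHEN 'Electronics' THEN 1
               WHEN 'Automotive'  THEN 2
               WHEN 'Sports'      THEN 3
               WHEN 'Clothing'    THEN 4
               ELSE 5
             END,
             lastOrderDate
""").fetchall()
print([r[0] for r in rows])  # ['Electronics', 'Automotive', 'Sports', 'Clothing']
```

The simple `CASE department WHEN ...` form used here is equivalent to the searched `CASE WHEN department = ...` form in the answer.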
create a table for the departments that has the name (or better id) of the department and the display order. then join to that table and order by the display order column. alternatively you can do a order by case: ``` ORDER BY CASE WHEN Department = 'Electronics' THEN 1 WHEN Department = 'Automotive' THEN 2 ... END ``` (that is not recommended for larger tables)
Specific Ordering in SQL
[ "", "sql", "sql-server", "sql-order-by", "" ]
Here's my basic query (SQL Server): ``` SELECT projectID, businessID, sum(number) AS summ FROM table GROUP BY projectID, businessID ORDER BY projectID, sum(number) DESC ``` which produces a table like so: ``` Project ID Business ID Summ 1 1 63 1 2 32 1 3 6 2 3 45 2 1 44 2 2 3 ``` I want to grab the project ID and business ID where the Summ column is greatest for each project ID. So rows 1 and 4 in the example given. How can I tweak the original query to do this?
You can use analytic functions: ``` SELECT projectID, businessID, summ FROM(SELECT projectID, businessID, SUM(number) AS summ, ROW_NUMBER() OVER (PARTITION BY projectID ORDER BY SUM(number) DESC) AS rn FROM table GROUP BY projectID, businessID ) t WHERE rn = 1 ORDER BY projectID; ``` Hope that helps.
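Here's a runnable sketch of the window-function approach (SQLite 3.25+ for window functions; `num` stands in for the question's `number` column, and the data matches the question's aggregated figures):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (projectID INTEGER, businessID INTEGER, num INTEGER);
INSERT INTO t VALUES (1,1,63),(1,2,32),(1,3,6),(2,3,45),(2,1,44),(2,2,3);
""")

rows = conn.execute("""
    WITH totals AS (
        SELECT projectID, businessID, SUM(num) AS summ
        FROM t
        GROUP BY projectID, businessID
    )
    SELECT projectID, businessID, summ
    FROM (
        SELECT totals.*,
               ROW_NUMBER() OVER (PARTITION BY projectID
                                  ORDER BY summ DESC) AS rn
        FROM totals
    )
    WHERE rn = 1
    ORDER BY projectID
""").fetchall()
print(rows)  # [(1, 1, 63), (2, 3, 45)]
```

Aggregating first in a CTE and then numbering the rows keeps the two steps easy to read; the nested form in the accepted answer does the same thing in one level.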
If you might have ties and want to return both rows, you should use: ``` select * from (select projectID, businessID , sum(number) as Tot , max(sum(number)) over (partition by projectID) as MSum from Table group by projectID, businessID) a where a.tot = a.Msum ```
SQL select row where SUM of a column is greatest (with two fields in GROUP BY)
[ "", "sql", "sql-server", "greatest-n-per-group", "" ]
My apologies, as I know this is a question that's been answered many times in various contexts. However, after an hour of trying and failing to adapt the examples I've found to my needs, I'm at the conclusion that I'm an idiot, and need help specific to my data... I have a view that returns data as follows: ``` SELECT * FROM vwPersonMainContactDetails ``` Output: ``` PersonID | ContactMethod | ContactValue ----------+-----------------+----------------- 1 Email Bob@abc.com 1 Mobile 07777 777777 2 Email kate@abc.com 2 Mobile 07766 666666 3 Email jo@abc.com 3 Mobile 07755 555555 ``` What I need is to return the data structured as follows: ``` PersonID | Mobile | Email ----------+----------------+-------------------------- 1 07777 777777 bob@abc.com 2 07766 666666 kate@abc.com 3 07755 555555 jo@abc.com ``` Can anyone help? I know that PIVOT will be the answer, but I'm really struggling to make it work for me... Thanks a lot Andrew
If you are using SQL Server 2005+ you can do this: ``` SELECT * FROM ( SELECT PersonID, ContactMethod, ContactValue FROM vwPersonMainContactDetails ) AS SourceTable PIVOT ( MAX(ContactValue) FOR ContactMethod IN ([Email],[Mobile]) ) AS pvt ``` If you are not using mssql you can do this: ``` SELECT PersonID, MAX(CASE WHEN ContactMethod='Mobile' THEN ContactValue ELSE NULL END) AS Mobile, MAX(CASE WHEN ContactMethod='Email' THEN ContactValue ELSE NULL END) AS Email FROM vwPersonMainContactDetails GROUP BY PersonID ``` **Reference:** * [Using PIVOT and UNPIVOT](https://technet.microsoft.com/en-us/library/ms177410%28v=sql.105%29.aspx)
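The conditional-aggregation version is the portable one, and is easy to verify with the question's data (SQLite sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE contacts (PersonID INTEGER, ContactMethod TEXT, ContactValue TEXT);
INSERT INTO contacts VALUES
  (1,'Email','Bob@abc.com'),  (1,'Mobile','07777 777777'),
  (2,'Email','kate@abc.com'), (2,'Mobile','07766 666666'),
  (3,'Email','jo@abc.com'),   (3,'Mobile','07755 555555');
""")

rows = conn.execute("""
    SELECT PersonID,
           MAX(CASE WHEN ContactMethod = 'Mobile' THEN ContactValue END) AS Mobile,
           MAX(CASE WHEN ContactMethod = 'Email'  THEN ContactValue END) AS Email
    FROM contacts
    GROUP BY PersonID
    ORDER BY PersonID
""").fetchall()
print(rows[0])  # (1, '07777 777777', 'Bob@abc.com')
```

Each `CASE` yields NULL for the non-matching rows, and `MAX` ignores NULLs, so every person collapses to a single row with one column per contact method.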
If we look at the syntax for Pivot: ``` SELECT <non-pivoted column>, [first pivoted column] AS <column name>, [second pivoted column] AS <column name>, ... [last pivoted column] AS <column name> FROM (<SELECT query that produces the data>) AS <alias for the source query> PIVOT ( <aggregation function>(<column being aggregated>) FOR [<column that contains the values that will become column headers>] IN ( [first pivoted column], [second pivoted column], ... [last pivoted column]) ) AS <alias for the pivot table> <optional ORDER BY clause>; ``` 1. Aggregate function in this case can be: `max/min` 2. column being aggregated :`ContactValue` = `Email` and `Mobile` 3. Now in Pivot all columns that are left in source table (here T)are considered for grouping and in this case it will be PersonID so Pivot becomes: `SELECT PersonID, -- <non-pivoted column>, Mobile , --[first pivoted column] AS <column name>, Email--[second pivoted column] AS <column name>, FROM ( SELECT PersonID ,ContactValue,ContactMethod from vwPersonMainContactDetails)-- query that produces the data>) AS T --<alias for the source query> PIVOT ( max(ContactValue) --<aggregation function>(<column being aggregated>) FOR [ContactMethod] --<column that contains the values that will become column headers>] IN ( [Mobile],[Email]--[first pivoted column], [second pivoted column], ) )as pvt order by PersonID asc --<optional ORDER BY clause>;` `DEMO`
TSQL Pivot with strings
[ "", "sql", "sql-server", "string", "t-sql", "pivot", "" ]
For an assignment I have to write several SQL queries for a database stored in a PostgreSQL server running PostgreSQL 9.3.0. However, I find myself blocked on the last query. The query is about associating, with a given spectator, the other spectators that attend the same events every time. The model looks like this: ``` Reservations table id_res | create_date | tickets_presented | id_show | id_spectator | price | category -------+---------------------+---------------------+---------+--------------+-------+---------- 1 | 2015-08-05 17:45:03 | | 1 | 1 | 195 | 1 2 | 2014-03-15 14:51:08 | 2014-11-30 14:17:00 | 11 | 1 | 150 | 2 Spectators table id_spectator | last_name | first_name | email | create_time | age ---------------+------------+------------+----------------------------------------+---------------------+----- 1 | gonzalez | colin | colin.gonzalez@gmail.com | 2014-03-15 14:21:30 | 22 2 | bequet | camille | bequet.camille@gmail.com | 2014-12-10 15:22:31 | 22 Shows table id_show | name | kind | presentation_date | start_time | end_time | id_season | capacity_cat1 | capacity_cat2 | capacity_cat3 | price_cat1 | price_cat2 | price_cat3 ---------+------------------------+--------+-------------------+------------+----------+-----------+---------------+---------------+---------------+------------+------------+------------ 1 | madama butterfly | opera | 2015-09-05 | 19:30:00 | 21:30:00 | 2 | 315 | 630 | 945 | 195 | 150 | 100 2 | don giovanni | opera | 2015-09-12 | 19:30:00 | 21:45:00 | 2 | 315 | 630 | 945 | 195 | 150 | 100 ``` So far I've started by writing a query to get the id of the spectator and the date of the show they're attending; the query looks like this:

```
SELECT Reservations.id_spectator, Shows.presentation_date
FROM Reservations
LEFT JOIN Shows ON Reservations.id_show = Shows.id_show;
```

Could someone help me understand the problem better and hint me towards a solution? Thanks in advance.
So the result I'm expecting should be something like this ``` id_spectator | other_id_spectators -------------+-------------------- 1| 2,3 ``` Meaning that every time spectator with id 1 went to a show, spectators 2 and 3 did too.
> Meaning that every time spectator with id 1 went to a show, spectators 2 and 3 did too.

In other words, you want a list of ... *all spectators that have seen all the shows that a given spectator has seen (and possibly more than the given one)*.

This is a special case of **relational division**. We have assembled an arsenal of basic techniques here:

* [How to filter SQL results in a has-many-through relation](https://stackoverflow.com/questions/7364969/how-to-filter-sql-results-in-a-has-many-through-relation/7774879#7774879)

It is special because the list of shows each spectator has to have attended is dynamically determined by the given prime spectator.

Assuming that `(id_spectator, id_show)` is unique in `reservations`, which has not been clarified. A `UNIQUE` constraint on those two columns (in that order) also provides the most important index. For best performance in queries 2 and 3 below, also create an index with leading `id_show`.

### 1. Brute force

The primitive approach would be to form a sorted array of the shows the given user has seen and compare it to the same array for the others:

```
SELECT 1 AS id_spectator, array_agg(sub.id_spectator) AS id_other_spectators
FROM (
   SELECT id_spectator
   FROM   reservations r
   WHERE  id_spectator <> 1
   GROUP  BY 1
   HAVING array_agg(id_show ORDER BY id_show)
       @> (SELECT array_agg(id_show ORDER BY id_show)
           FROM   reservations
           WHERE  id_spectator = 1)
   ) sub;
```

But this is potentially **very expensive** for big tables. The whole table has to be processed, and in a rather expensive way, too.

### 2. Smarter

Use a [CTE](http://www.postgresql.org/docs/current/interactive/queries-with.html) to determine relevant shows, then only consider those:

```
WITH shows AS (  -- all shows of id 1; 1 row per show
   SELECT id_spectator, id_show
   FROM   reservations
   WHERE  id_spectator = 1  -- your prime spectator here
   )
SELECT sub.id_spectator, array_agg(sub.other) AS id_other_spectators
FROM (
   SELECT s.id_spectator, r.id_spectator AS other
   FROM   shows s
   JOIN   reservations r USING (id_show)
   WHERE  r.id_spectator <> s.id_spectator
   GROUP  BY 1,2
   HAVING count(*) = (SELECT count(*) FROM shows)
   ) sub
GROUP  BY 1;
```

[`@>` is the "contains" operator for arrays](http://www.postgresql.org/docs/current/interactive/functions-array.html#ARRAY-OPERATORS-TABLE) - so we get all spectators that have seen *at least* the same shows.

Faster than *1.* because only relevant shows are considered.

### 3. Real smart

To exclude spectators that are not going to qualify early from the query, use a [recursive CTE](http://www.postgresql.org/docs/current/interactive/queries-with.html):

```
WITH RECURSIVE shows AS (  -- produces exactly 1 row
   SELECT id_spectator, array_agg(id_show) AS shows, count(*) AS ct
   FROM   reservations
   WHERE  id_spectator = 1  -- your prime spectator here
   GROUP  BY 1
   )
, cte AS (
   SELECT r.id_spectator, 1 AS idx
   FROM   shows s
   JOIN   reservations r ON r.id_show = s.shows[1]
   WHERE  r.id_spectator <> s.id_spectator

   UNION ALL
   SELECT r.id_spectator, idx + 1
   FROM   cte c
   JOIN   reservations r USING (id_spectator)
   JOIN   shows s ON s.shows[c.idx + 1] = r.id_show
   )
SELECT s.id_spectator, array_agg(c.id_spectator) AS id_other_spectators
FROM   shows s
JOIN   cte c ON c.idx = s.ct  -- has an entry for every show
GROUP  BY 1;
```

Note that the first CTE is *non-recursive*. Only the second part is recursive (iterative, really).

This should be **fastest for small selections from big tables**. Rows that don't qualify are excluded early. The two indices I mentioned are essential.
[**SQL Fiddle**](http://sqlfiddle.com/#!15/5955f/20) demonstrating all three.
**Note based on comments**: Wanted to make clear that this answer may be of limited use as it was answered in the context of SQL-Server (tag was present at the time) There is probably a better way to do it, but you could do it with the 'stuff 'function. The only drawback here is that, since your ids are ints, placing a comma between values will involve a work around (would need to be a string). Below is the method I can think of using a work around. ``` SELECT [id_spectator], [id_show] , STUFF((SELECT ',' + CAST(A.[id_spectator] as NVARCHAR(10)) FROM reservations A Where A.[id_show]=B.[id_show] AND a.[id_spectator] != b.[id_spectator] FOR XML PATH('')),1,1,'') As [other_id_spectators] From reservations B Group By [id_spectator], [id_show] ``` This will show you all other spectators that attended the same shows.
Find spectators that have seen the same shows (match multiple rows for each)
[ "", "sql", "postgresql", "aggregate-functions", "common-table-expression", "relational-division", "" ]
Consider the following tables: ``` tbl1 ------------ id name creationdate 0 AA 2015.05.11 1 BB 2015.04.27 2 cC 2015.04.18 tbl2 ------------ id name creationdate 0 DD 2015.04.17 1 FF 2015.04.27 2 NN 2015.05.01 3 BV 2015.05.01 ``` What would be the correct syntax for a query to group the count of rows from both table by dates? So that the result is: ``` DATE TBL1_COUNT TBL2_COUNT 2015.04.17 0 1 2015.04.18 1 0 2015.04.27 1 1 2015.05.01 0 2 2015.05.11 1 0 ``` So far I have managed to get the dates from both tables using: ``` select DISTINCT date(creationdate) from tbl1 union select DISTINCT date(creationdate) from tbl2 ``` The next part would be getting count(\*) from both tables and grouping them by the above query's result
I think the most efficient way is to have two queries: one that counts all grouped dates from table1 and sets the count for table2 to 0, another that counts all grouped dates from table2 and sets the count for table1 to 0. Then combine them with a UNION ALL query, group again, and sum the resulting counts:

```
SELECT creationDate, sum(count_t1), sum(count_t2)
FROM (
    SELECT creationDate, COUNT(*) AS count_t1, 0 AS count_t2
    FROM t1
    GROUP BY creationDate

    UNION ALL

    SELECT creationDate, 0 AS count_t1, COUNT(*) AS count_t2
    FROM t2
    GROUP BY creationDate
) s
GROUP BY creationDate
```
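Checking the pattern against the question's data (SQLite sketch, with dates stored as ISO strings):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbl1 (creationdate TEXT);
CREATE TABLE tbl2 (creationdate TEXT);
INSERT INTO tbl1 VALUES ('2015-05-11'), ('2015-04-27'), ('2015-04-18');
INSERT INTO tbl2 VALUES ('2015-04-17'), ('2015-04-27'),
                        ('2015-05-01'), ('2015-05-01');
""")

rows = conn.execute("""
    SELECT creationdate, SUM(count_t1), SUM(count_t2)
    FROM (
        SELECT creationdate, COUNT(*) AS count_t1, 0 AS count_t2
        FROM tbl1 GROUP BY creationdate
        UNION ALL
        SELECT creationdate, 0, COUNT(*)
        FROM tbl2 GROUP BY creationdate
    )
    GROUP BY creationdate
    ORDER BY creationdate
""").fetchall()
print(rows)
# [('2015-04-17', 0, 1), ('2015-04-18', 1, 0), ('2015-04-27', 1, 1),
#  ('2015-05-01', 0, 2), ('2015-05-11', 1, 0)]
```

Each date appears at most once per branch of the UNION ALL, and the outer GROUP BY stitches the two per-table counts together onto one row per date.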
1. You need to use `UNION ALL` or the distinct aspect of union will eliminate the duplicates and negatively impact your counts. 2. You need a way to denote which value is from what table so we add a column to distinguish, so I added 1 for tbl1 and 2 for tbl2 and called the column src. 3. We then add a case statement in the outer select and a sum to get the desired counts and a group by to aggregate the dates.. 4. Polish it off with an order by and you get what you're after. . ``` SELECT creationDate as Date, sum(case when src=1 then 1 else 0 end) as tbl1_count, sum(case when src=2 then 1 else 0 end) as tbl2_count FROM ( SELECT creationdate, 1 as src FROM tbl1 UNION ALL SELECT creationdate, 2 as src FROM tbl2) Sub GROUP BY CreationDate ORDER BY CreationDate ```
Group data by date from two SQL tables
[ "", "mysql", "sql", "" ]
I observe there are two ways to achieve an objective on multiple tables. A column in the resultset will be updated and speed may be a requirement. The resultset is obtained either by: Case 1: ``` select ert.* from eval_rep_track ert inner join ( select erp.evaluation_fk, erp.report_type, LTRIM(erp.assign_group_id, '/site/') course_name from eval_report_dup@prod erp inner join eval_report er on er.id = erp.id where erp.status='queue' and er.status='done' ) cat on ert.eval_id || '.' || ert.report_type || '.' || ert.course_name = cat.evaluation_fk || '.' || cat.report_type || '.' || cat.course_name; ``` OR Case 2: ``` select ert.* from eval_rep_track ert inner join ( select erp.evaluation_fk, erp.report_type, LTRIM(erp.assign_group_id, '/site/') course_name from eval_report_dup@prod erp inner join eval_report er on er.id = erp.id where erp.status='queue' and er.status='done' ) cat on ert.eval_id = cat.evaluation_fk and ert.report_type = cat.report_type and ert.course_name = cat.course_name; ``` with both giving the same result, with only the join condition varying. Which will run/exec faster? eval\_id is `NUMBER`, report\_type and course\_name are `VARCHAR2`. From the developer used, case 1 has the following stats: [SELECT - 3077 row(s), 0.048 secs] Result set fetched ... 1 statement(s) executed, 3077 row(s) affected, exec/fetch time: 0.048/0.236 sec [1 successful, 0 warnings, 0 errors] while case 2: [SELECT - 3077 row(s), 0.019 secs] Result set fetched ... 1 statement(s) executed, 3077 row(s) affected, exec/fetch time: 0.019/0.194 sec [1 successful, 0 warnings, 0 errors] The results suggest case 2 is faster. Will this be universal across any platform (ide, developer) and database? Is this dependent on the data-type or is concatenation always expensive? I don't actually need the result of the concatenation. Thanks.
I think the version with concatenation will practically always be slower. If any of the columns you're comparing individually have indexes, the database will normally be able to use the indexes to optimize the join. When you compare concatenations, it has to perform full table scans, because the result of a calculation won't be in the index. And even if the columns are not indexed, the database can still perform the comparisons more efficiently. It compares one pair of columns at a time, and can stop as soon as one of those comparisons fails. When using the concatenation, it has to first combine all the columns, in both rows, then do a string comparison. Finally, if any of the columns are numeric, the concatenation will require the additional step of converting the number to a string.
Quite simply, joining on the individual columns is correct. Joining to concatenated values is incorrect. Separate from any discussion of performance, you should write correct code. For any particular query, you can probably write a mostly correct query using concatenation. But you'll almost certainly introduce subtle bugs that will bite you when you get data that you don't expect. In this case, as soon as your columns contains a period, there is the potential that you'd match data incorrectly (`'a.b' || '.' || null = 'a' || '.' || 'b.'`). In other cases, you'll have other subtle issues-- dates and numbers may be converted to strings implicitly using different session-level settings that may produce different results (your `NLS_DATE_FORMAT` might contain the time component or it might not so your concatenated value may or may not include a comparison of the time). If you concatenate columns commonly, you're going to end up with lots of queries that have very subtle bugs based on the data in the table and the users executing the code. That's terrible from a maintenance and support perspective. Performance should be, at most, a secondary concern. From a performance perspective, a proper join will almost certainly outperform the concatenation approach. The optimizer would be able to consider normal indexes on the various columns that are part of the join when generating the query plan when you're joining correctly. If you're concatenating values, at best Oracle might be able to do a full scan of a normal index to get all the data that needs to be concatenated together. But this is likely to be much less efficient (particularly when you've got more than a few thousand rows). Is it theoretically possible that the concatenation approach would be more efficient for some query somewhere? Sure. 
A sadistic developer might create a function-based index on the concatenated result, avoid creating an index on the individual columns, and produce a test case where the concatenation approach is more efficient. That would be easily remedied, however, by creating the appropriate corresponding index (or indexes) on the basic columns. Is it possible that concatenation would be more efficient for some query because it prevents the optimizer from using an index that it otherwise wants to use? Sure. But that almost certainly indicates that you have a problem with optimizer settings or statistics that should be addressed rather than throwing a band-aid on the problem.
Join conditions on multiple columns versus single join on concatenated columns?
[ "", "sql", "oracle", "join", "dbvisualizer", "" ]
In SQL Server I am able to create a query that uses both Top and Distinct in the Select clause, such as this one: ``` Select Distinct Top 10 program_name From sampleTable ``` Will the database return the distinct values from the top 10 results, or will it return the top 10 results of the distinct values? Is this behavior consistent in SQL or is it database dependent?
`TOP` is executed last, so your `DISTINCT` runs first, then the `TOP`. <http://blog.sqlauthority.com/2009/04/06/sql-server-logical-query-processing-phases-order-of-statement-execution/>
Use ``` Select Top 10 program_name From sampleTable group by program_name; ``` It will return the top **10 distinct program\_name values.** Your query will also return the 10 distinct program\_name values.
Using Distinct with Top in Select Clause of Query
[ "", "sql", "sql-server", "t-sql", "" ]
I need help in updating a database using a join query.It is not a straight forward query and kind of confusing. I created a select query to select routelist(d.name) which is equal to 'TP-424-RL'. ``` select n.dnorpattern as RoutePattern, rp.name as Partition, d.name as RouteList, rl.selectionOrder, rg.name as RouteGroup from numplan as n inner join routepartition as rp on rp.pkid=n.fkroutepartition inner join devicenumplanmap as dnpm on dnpm.fknumplan=n.pkid inner join device as d on dnpm.fkdevice=d.pkid inner join routelist as rl on rl.fkdevice = d.pkid inner join routegroup as rg on rg.pkid=rl.fkroutegroup and n.tkpatternusage=5 where d.name='TP-424-RL'; ``` Similarly I want to update the routelist(d.name='SME-IT') instead of 'TP-424-RL'. the d.name(name from device table does not give routelist direclty but by the combination of the join query gives the d.name =routelist) I tried this and it didn't work. ``` update ( select n.dnorpattern as RoutePattern, rp.name as Partition, d.name as RouteList , rl.selectionOrder, rg.name as RouteGroup from numplan as n inner join routepartition as rp on rp.pkid=n.fkroutepartition inner join devicenumplanmap as dnpm on dnpm.fknumplan=n.pkid inner join device as d on dnpm.fkdevice=d.pkid inner join routelist as rl on rl.fkdevice = d.pkid inner join routegroup as rg on rg.pkid=rl.fkroutegroup and n.tkpatternusage=5 where d.name='TP-424-RL') set device.name='CME-IT' where device.name='TP-424-RL'; ``` Can someone help me on this?
Check out the syntax of the [UPDATE](http://www-01.ibm.com/support/knowledgecenter/SSGU8G_12.1.0/com.ibm.sqls.doc/ids_sqs_1254.htm) statement in Informix. The join notation updates are not described, which is why you get errors when you attempt to use it. You will need to use an UPDATE statement of the general form: ``` UPDATE device SET name = 'SME-IT' -- Or did you mean CME-IT? The question says both WHERE pkid IN (SELECT ...) ``` and the ... part will be a query that identifies the primary key (or some other candidate (unique) key) of the rows in the Device table for which the current device name is `TP-424-RL` and the rows should be updated. (I'm assuming that it isn't as simple as 'all rows where the device name is `TP-424-RL` should be updated.) A trivial version might be: ``` UPDATE device SET name = 'SME-IT' -- Or did you mean CME-IT? The question says both WHERE pkid IN (SELECT d.pkid FROM numplan AS n JOIN routepartition AS rp ON rp.pkid = n.fkroutepartition JOIN devicenumplanmap AS dnpm ON dnpm.fknumplan = n.pkid JOIN device AS d ON dnpm.fkdevice = d.pkid JOIN routelist AS rl ON rl.fkdevice = d.pkid JOIN routegroup AS rg ON rg.pkid = rl.fkroutegroup AND n.tkpatternusage = 5 WHERE d.name='TP-424-RL' ) ``` My suspicion is that the query can be simplified, but I'm not sure of all the details of your schema. The `AND n.tkpatternusage = 5` condition can be moved around; it seems to belong most logically to the `WHERE` clause of the `SELECT`, or perhaps the `ON` condition joining `n` to `rp` or `dnpm`. The optimizer will move it to where it provides most bang for the buck, which is likely to be filtering on the scan of the `numplan` (aka `n`) table.
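The `UPDATE ... WHERE pkid IN (SELECT ...)` shape is easy to verify on a toy schema. This sketch uses SQLite from Python with made-up rows and only two tables from the join chain, just to show that devices outside the subquery are left alone:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE device (pkid INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE routelist (fkdevice INTEGER);
-- device 1 is on a route list, device 2 is not, device 3 has another name
INSERT INTO device VALUES (1,'TP-424-RL'), (2,'TP-424-RL'), (3,'Other');
INSERT INTO routelist VALUES (1);
""")

# Only devices matched by the subquery's join get renamed.
conn.execute("""
UPDATE device SET name = 'SME-IT'
WHERE pkid IN (
    SELECT d.pkid
    FROM device d
    JOIN routelist rl ON rl.fkdevice = d.pkid
    WHERE d.name = 'TP-424-RL'
)
""")
after = conn.execute("SELECT pkid, name FROM device ORDER BY pkid").fetchall()
print(after)
```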
This will not work since you don't have device.name in your query. Also, to me it looks very strange. I would probably do it like this (this will work with SQL Server; I don't know your RDBMS, so I go with what I know best): ``` update device set name = 'CME-IT' from numplan as n inner join routepartition as rp on rp.pkid=n.fkroutepartition inner join devicenumplanmap as dnpm on dnpm.fknumplan=n.pkid inner join device on dnpm.fkdevice = device.pkid inner join routelist as rl on rl.fkdevice = device.pkid inner join routegroup as rg on rg.pkid=rl.fkroutegroup and n.tkpatternusage=5 where device.name='TP-424-RL' ```
update query using join
[ "", "sql", "join", "sql-update", "informix", "" ]
I want to get unique message (datatype: `TEXT`). I am using this simple query to achieve this: ``` SELECT DISTINCT message FROM `online_communication` ``` Results I am getting are: ``` hello.. what's up? second message.. third message..... third message..... ``` Why am I getting `third message.....` 2 times? I also tried with: ``` SELECT DISTINCT BINARY message FROM `online_communication` ``` But, getting same number of rows in `BLOB` PS: No luck with `GROUP BY` as well. **SQLFiddle:** <http://sqlfiddle.com/#!9/304e4/1>
This is behaving as expected because your values *are* different. From your SQL fiddle: ``` INSERT INTO `online_communication` (`id`, `company_id`, `country_id`, `message`, `date_added`, `time_interval`) VALUES (53, 1, 210, 'third message.....\r\nand here..\r\n', '2015-05-01 11:05:31', '1'), (61, 1, 103, 'third message.....\r\nand here.. \r\n', '2015-05-01 11:06:38', '1'); ``` Notice that the first insert does not have a space after the two periods, but the second one does. They look the same in the results, because the difference just represents white space. **EDIT** By definition, these *are* unique messages, so your query is fine. However, if you want to treat them as the same, you need to find a business rule for what defines them as the same. For example, you could just remove all spaces from the string, and compare them that way: ``` SELECT DISTINCT REPLACE(message, ' ', '') FROM `online_communication` ``` Here is a working [Fiddle](http://sqlfiddle.com/#!9/304e4/5) example. Here are some helpful [string](https://dev.mysql.com/doc/refman/5.0/en/string-functions.html) functions. Notice that you cannot use `TRIM()` here because that function only removes whitespace from the beginning or end of a string. In this example, it is in the middle, so `REPLACE` is more useful.
One of your lines has an extra space: ``` (53, 1, 210, 'third message.....\r\nand here..\r\n', '2015-05-01 11:05:31', '1'), (61, 1, 103, 'third message.....\r\nand here.. \r\n', '2015-05-01 11:06:38', '1'); ``` Remove that and you will get the expected result. **[SQL FIDDLE DEMO](http://sqlfiddle.com/#!9/fc707/1)**
mysql: get distinct value of TEXT
[ "", "mysql", "sql", "" ]
In MySQL I'm tasked with a big dataset, with data from 1970 to 2010. I want to check for consistency: *check if each instance occurs minimum one time per year*. I took a snippet from 1970-1972 as an example to demonstrate my problem. input: ``` id year counts -- ---- --------- 1 1970 1 1 1971 1 2 1970 3 2 1971 8 2 1972 1 3 1970 4 ``` expected: ``` id 1970-1972 -- ---------- 1 no 2 yes 3 no ``` I thought about counting within the date range and then keeping only those that had 3 counts: 1970, 1971, 1972. The following query doesn't force the check on each point in the range though. ``` select id, count(*) from table1 WHERE (year BETWEEN '1970' AND '1972') AND `no_counts` >= 1 group by id ``` What to do?
You can use `GROUP BY` with `CASE` / inline `if`. Using `CASE`. [SQL Fiddle](http://sqlfiddle.com/#!9/2f0df6/3) ``` select id, CASE WHEN COUNT(distinct year) = 3 THEN 'yes' ELSE 'No' END "1970-72" from abc WHERE year between 1970 and 1972 GROUP BY id ``` Using inline `IF`. [SQL Fiddle](http://sqlfiddle.com/#!9/2f0df6/5) ``` select id, IF( COUNT(distinct year) = 3,'yes','No') "1970-72" from abc WHERE year between 1970 and 1972 GROUP BY id ```
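The CASE-over-`COUNT(DISTINCT ...)` idea can be checked quickly in SQLite from Python (the `counts` column from the question is omitted since it doesn't affect the result):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE abc (id INTEGER, year INTEGER);
INSERT INTO abc VALUES
 (1,1970),(1,1971),
 (2,1970),(2,1971),(2,1972),
 (3,1970);
""")

# 'yes' only when an id appears in all three distinct years of the range.
result = conn.execute("""
SELECT id,
       CASE WHEN COUNT(DISTINCT year) = 3 THEN 'yes' ELSE 'no' END
FROM abc
WHERE year BETWEEN 1970 AND 1972
GROUP BY id
ORDER BY id
""").fetchall()
print(result)
```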
You can use a `having` clause with `distinct` count: ``` select `id` from `table1` where `year` between '1970' and '1972' group by id having count(distinct `year`) = 3 ```
Check if instances have occurred minimum once, every year in a specific range
[ "", "mysql", "sql", "" ]
``` products +----+--------+ | id | title | +----+--------+ | 1 | Apple | | 2 | Pear | | 3 | Banana | | 4 | Tomato | +----+--------+ product_variants +----+------------+------------+ | id | product_id | is_default | +----+------------+------------+ | 1 | 1 | 0 | | 2 | 1 | 1 | | 3 | 2 | 1 | | 4 | 3 | 1 | | 5 | 4 | 1 | +----+------------+------------+ properties +----+-----------------+-----------+ | id | property_key_id | value | +----+-----------------+-----------+ | 1 | 1 | Yellow | | 2 | 1 | Green | | 3 | 1 | Red | | 4 | 2 | Fruit | | 5 | 2 | Vegetable | | 6 | 1 | Blue | +----+-----------------+-----------+ property_keys +----+-------+ | id | value | +----+-------+ | 1 | Color | | 2 | Type | +----+-------+ product_has_properties +----+------------+-------------+ | id | product_id | property_id | +----+------------+-------------+ | 1 | 1 | 4 | | 2 | 1 | 3 | | 3 | 2 | 4 | | 4 | 3 | 4 | | 5 | 3 | 4 | | 6 | 4 | 4 | | 7 | 4 | 5 | +----+------------+-------------+ product_variant_has_properties +----+------------+-------------+ | id | variant_id | property_id | +----+------------+-------------+ | 1 | 1 | 2 | | 2 | 1 | 3 | | 3 | 2 | 6 | | 4 | 3 | 4 | | 5 | 4 | 1 | | 6 | 5 | 1 | +----+------------+-------------+ ``` I need to query my DB so it selects `products` which have certain `properties` attached to the product itself **OR** have those properties attached to one of its related `product_variants`. Also should `properties` with the same properties.property\_key\_id be grouped like this: `(pkey1='red' OR pkey1='blue') AND (pkey2='fruit' OR pkey2='vegetable')` Example cases: * Select all products with `(color='red' AND type='vegetable')`. This should return only Tomato. * Select all products with `((color='red' OR color='yellow') AND type='fruit')` should return Apple and Banana Please note that in the example cases above I don't really need to query by properties.value, I can query by properties.id. 
I played around a lot with MySQL query's but the biggest problem I'm struggling with is the properties being loaded through two pivot tables. Loading them is no problem but loading them and combining them with the correct `WHERE`, `AND` and `OR` statements is.
The following code should give you what you're looking for, however you should note that your table currently has a Tomato listed as yellow and a vegetable. Obviously you want the Tomato as red and a Tomato is actually a fruit not a vegetable: ``` Select distinct title from products p inner join product_variants pv on pv.product_id = p.id inner join product_variant_has_properties pvp on pvp.variant_id = pv.id inner join product_has_properties php on php.product_id = p.id inner join properties ps1 on ps1.id = pvp.property_id --Color inner join properties ps2 on ps2.id = php.property_id --Type inner join property_keys pk on pk.id = ps1.property_key_id or pk.id = ps2.property_key_id where ps1.value = 'Red' and ps2.value = 'Vegetable' ``` Here is the SQL Fiddle: <http://www.sqlfiddle.com/#!9/309ad/3/0>
This is a convoluted answer, and it may be possible to do it in a far simpler way. However, given that you seem to want to be able to query by `color = xx` and `type = xx`, we clearly need to have columns with those names, which, as you've intimated, means we need to pivot the data. Furthermore, since we want to get all the combinations of colours and types for each product, we need to perform a sort of cross join to combine them. This leads us to the query - first we get all the types for a product and its variants, then we join that to all the colours for a product and its variants. We use `union` to combine the product and variant properties in order to keep them all in the same column, rather than having multiple columns to check. Of course all products may not have this information specified, so we use `left joins` all the way through. If it is guaranteed that a product will always have at least one colour and at least one type, they can all be changed to `inner joins`. Also, in your example you say `tomato` should have a colour of `red`, yet in the sample data you provide I'm fairly sure the `tomato` has a colour of `yellow`.
Anyway, here's the query: ``` select distinct title from (select q1.title, q1.value as color, q2.value as type from ( select products.id, products.title, properties.value, properties.property_key_id from products left join product_has_properties on products.id = product_has_properties.product_id left join properties on properties.id = product_has_properties.property_id and properties.property_key_id = 1 union select product_variants.product_id, products.title, properties.value, properties.property_key_id from product_variants inner join products on product_variants.product_id = products.id left join product_variant_has_properties on product_variants.id = product_variant_has_properties.variant_id left join properties on properties.id = product_variant_has_properties.property_id and properties.property_key_id = 1 ) q1 left join ( select products.id, products.title, properties.value, properties.property_key_id from products left join product_has_properties on products.id = product_has_properties.product_id left join properties on properties.id = product_has_properties.property_id and properties.property_key_id = 2 union select product_variants.product_id, products.title, properties.value, properties.property_key_id from product_variants inner join products on product_variants.product_id = products.id left join product_variant_has_properties on product_variants.id = product_variant_has_properties.variant_id left join properties on properties.id = product_variant_has_properties.property_id and properties.property_key_id = 2 ) q2 on q1.id = q2.id where q1.value is not null or q2.value is not null ) main where ((color = 'red' or color = 'yellow') and type = 'fruit') ``` And here's a demo: <http://sqlfiddle.com/#!9/d3ded/76> If you were to get more types of property, in addition to colour and type, the query would need to be modified - sorry but that's pretty much what you're stuck with, trying to pivot in mysql
MySQL Query WHERE through multiple pivot tables
[ "", "mysql", "sql", "" ]
This question has bothered me for most of the day, just can't figure it out, and I don't really know which keywords to google for. Let's say I have policies and their date, and I want to know if the policy falls within the first year since its inception or later years. it will have to find the earliest date for each policy, and then return 1 if the future date is within 1 year of the original date. I know that we need to use for while statements but I don't know how to begin. Thanks For example: ``` +--------+-----------+ | Policy | Date | +--------+-----------+ | CC1002 | 5/1/2012 | | CC1002 | 6/1/2012 | | CC1002 | 7/1/2012 | | CC1002 | 8/1/2012 | | CC1002 | 9/1/2012 | | CC1002 | 10/1/2012 | | CC1002 | 11/1/2012 | | CC1002 | 12/1/2012 | | CC1002 | 1/1/2013 | | CC1002 | 2/1/2013 | | CC1002 | 3/1/2013 | | CC1002 | 4/1/2013 | | CC1002 | 5/1/2013 | | CC1002 | 6/1/2013 | | CC1002 | 7/1/2013 | | CC1002 | 8/1/2013 | | CC1008 | 5/1/2012 | | CC1008 | 6/1/2013 | +--------+-----------+ ``` Results ``` +--------+-----------+-------------------+ | Policy | Date | YearfromInception | +--------+-----------+-------------------+ | CC1002 | 5/1/2012 | 1 | | CC1002 | 6/1/2012 | 1 | | CC1002 | 7/1/2012 | 1 | | CC1002 | 8/1/2012 | 1 | | CC1002 | 9/1/2012 | 1 | | CC1002 | 10/1/2012 | 1 | | CC1002 | 11/1/2012 | 1 | | CC1002 | 12/1/2012 | 1 | | CC1002 | 1/1/2013 | 1 | | CC1002 | 2/1/2013 | 1 | | CC1002 | 3/1/2013 | 1 | | CC1002 | 4/1/2013 | 1 | | CC1002 | 5/1/2013 | 2 | | CC1002 | 6/1/2013 | 2 | | CC1002 | 7/1/2013 | 2 | | CC1002 | 8/1/2013 | 2 | | CC1008 | 5/1/2012 | 1 | | CC1008 | 6/1/2013 | 2 | +--------+-----------+-------------------+ ``` Solved. This community is great!
In SQL you don't use loops -- SQL works on sets. In this case you want to get the set of all starting dates for policies and then join that back to the original table to get your results. Like this: ``` SELECT mytable.policy, (datediff(m,sub.inception,mytable.PolicyDate)/12)+1 as yearfrominception FROM mytable JOIN ( SELECT Policy, Min(PolicyDate) as inception FROM mytable GROUP BY Policy ) as sub ON mytable.Policy = sub.Policy ```
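`DATEDIFF(m, ...)` is SQL Server syntax; the same set-based join can be sketched in SQLite via Python, computing the month difference from `strftime` parts. Column names follow the answer's query, with a handful of illustrative rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mytable (Policy TEXT, PolicyDate TEXT);
INSERT INTO mytable VALUES
 ('CC1002','2012-05-01'),('CC1002','2013-04-01'),
 ('CC1002','2013-05-01'),('CC1008','2012-05-01');
""")

# Month difference = (year*12 + month) - (inception year*12 + inception month);
# integer-divide by 12 and add 1 to number the policy years from 1.
rows = conn.execute("""
SELECT m.Policy, m.PolicyDate,
       ((strftime('%Y', m.PolicyDate) * 12 + strftime('%m', m.PolicyDate))
      - (strftime('%Y', sub.inception) * 12 + strftime('%m', sub.inception))) / 12 + 1
FROM mytable m
JOIN (SELECT Policy, MIN(PolicyDate) AS inception
      FROM mytable GROUP BY Policy) sub
  ON m.Policy = sub.Policy
ORDER BY m.Policy, m.PolicyDate
""").fetchall()
print(rows)
```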
Without subqueries: ``` CREATE TABLE #a ( policy VARCHAR(100), pDate DATE ) INSERT INTO #a VALUES ('CC1002' ,'5/1/2012'); INSERT INTO #a VALUES ('CC1002', '6/1/2012'); INSERT INTO #a VALUES ('CC1002', '7/1/2012'); INSERT INTO #a VALUES ('CC1002', '8/1/2012'); INSERT INTO #a VALUES ('CC1002', '9/1/2012'); INSERT INTO #a VALUES ('CC1002', '10/1/2012'); INSERT INTO #a VALUES ('CC1002', '11/1/2012'); INSERT INTO #a VALUES ('CC1002', '12/1/2012'); INSERT INTO #a VALUES ('CC1002', '1/1/2013'); INSERT INTO #a VALUES ('CC1002', '2/1/2013'); INSERT INTO #a VALUES ('CC1002', '3/1/2013'); INSERT INTO #a VALUES ('CC1002', '4/1/2013'); INSERT INTO #a VALUES ('CC1002', '5/1/2013'); INSERT INTO #a VALUES ('CC1002', '6/1/2013'); INSERT INTO #a VALUES ('CC1002', '7/1/2013'); INSERT INTO #a VALUES ('CC1002', '8/1/2013'); INSERT INTO #a VALUES ('CC1008', '5/1/2012'); INSERT INTO #a VALUES ('CC1008', '6/1/2013'); SELECT a.policy , a.pDate , DATEDIFF(DAY, MIN(a.pDate) OVER (PARTITION BY a.policy), a.pDate)/365+ 1 FROM #a a ```
For while statements in SQL
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have two tables `Branch_TB` and `Branch_City`. Branch\_TB : ``` CREATE TABLE Branch_TB( Branch_Id int NULL, Branch_Name varchar(50) NULL ) ``` Whenever there is an entry for `Branch_Name`, I want to add that entry as a column name in `Branch_City`. Is there any way to do this? I don't know how to do this and haven't tried any solution yet. Thanks in advance.
You can achieve that using `AFTER INSERT TRIGGER`. ``` CREATE TABLE Branch_TB( Branch_Id int NULL, Branch_Name varchar(50) NULL ) go --drop table BranchCity create table BranchCity(abc varchar(20)) go create TRIGGER dbo.AddCol ON Branch_TB AFTER INSERT AS BEGIN DECLARE @NewVal VARCHAR(20) DECLARE @AlterSQL VARCHAR(100) CREATE TABLE #New ( VAL VARCHAR(20) ) INSERT INTO #New select Branch_Name from inserted select @NewVal = Val from #New SET @AlterSQL = 'ALTER TABLE BranchCity add ' + @NewVal + ' VARCHAR(20)' exec(@AlterSQL) END go insert into Branch_Tb values(1, 'City1') go insert into Branch_Tb values(2, 'City2') ``` But in my opinion, you should re-evaluate your database design.
I understand this as: you want to duplicate the inserted value into another table's `name` column. If that is the case, you can try the `OUTPUT` clause: ``` INSERT INTO Branch_TB( Branch_Id, Branch_Name ) OUTPUT 'someValue1', Inserted.Branch_Name, 'someValue2' INTO Branch_City ( someCol1, Name, someCol2 ) VALUES ( 1, 'some name' ) ```
Add Column To one table on inserting value to another table
[ "", "sql", "sql-server", "sql-server-2008", "" ]
In SQL, suppose that I have table A ``` ID -- 1 3 5 ``` and table B ``` ID2 --- 1 2 3 4 5 6 ``` To get the result similar to: ``` ID | ID2 ---------- 1 | 1 1 | 2 3 | 3 3 | 4 5 | 5 5 | 6 ``` For an explanation, an element in column ID2 will be mapped to the highest value in the column ID that is less than or equal to the said element in ID2. For example, 4 in column ID2 is mapped to 3 from column ID, because 3 is the largest value in column ID which is less than or equal to 4. Is it possible at all to do this in sql?
What I would do is start by joining the two tables on the condition that the id in the first table is less than or equal to that in the second table, like this: ``` SELECT t1.id, t2.id AS id2 FROM t1 JOIN t2 ON t2.id >= t1.id; ``` Once you have that, you can select the maximum id from the first table, and group by the id from the second table to get the largest pairs: ``` SELECT MAX(t1.id) AS id, t2.id AS id2 FROM t1 JOIN t2 ON t2.id >= t1.id GROUP BY t2.id; ``` SQL Fiddle seems to be down but I will update with a link as soon as I can.
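Here is the same two-step construction run end-to-end in SQLite from Python, using the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (id INTEGER);
CREATE TABLE t2 (id INTEGER);
INSERT INTO t1 VALUES (1),(3),(5);
INSERT INTO t2 VALUES (1),(2),(3),(4),(5),(6);
""")

# Join on <=, then keep the largest t1.id for each t2.id.
pairs = conn.execute("""
SELECT MAX(t1.id) AS id, t2.id AS id2
FROM t1 JOIN t2 ON t2.id >= t1.id
GROUP BY t2.id
ORDER BY t2.id
""").fetchall()
print(pairs)
```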
``` SELECT MAX(A.ID) ID, B.ID2 FROM A INNER JOIN B ON B.ID2 >= A.ID GROUP BY B.ID2 ```
Joining two tables on interval basis
[ "", "mysql", "sql", "" ]
I have a query that looks like this: ``` SELECT ElapsedTime, COUNT(SurgeryID) as numberOfOps FROM Table WHERE Doctor = 'Dr. ABC' AND numberOfOps > 5 GROUP BY ElapsedTime ORDER BY CASE WHEN ElapsedTime = 'Preoperative' THEN 1 WHEN ElapsedTime = '2 Weeks' THEN 2 WHEN ElapsedTime = '6 Weeks' THEN 3 WHEN ElapsedTime = '3 Months' THEN 4 WHEN ElapsedTime = '6 Months' THEN 5 WHEN ElapsedTime = '1 Year' THEN 6 WHEN ElapsedTime = '2 Years' THEN 7 WHEN ElapsedTime = '3 Years' THEN 8 WHEN ElapsedTime = '4 Years' THEN 9 WHEN ElapsedTime = '5 Years' THEN 10 WHEN ElapsedTime = '6 Years' THEN 11 WHEN ElapsedTime = '7 Years' THEN 12 WHEN ElapsedTime = '8 Years' THEN 13 WHEN ElapsedTime = '9 Years' THEN 14 WHEN ElapsedTime = '10 Years' THEN 15 ELSE ElapsedTime END ASC ``` If Dr. ABC doesn't have any results w/ 'DaysElapsed' == 'Preoperative', the rest of the ORDER BY gets ignored - as in the following result: ``` '5 Years,6 Years,7 Years,8 Years,10 Years,2 Weeks,6 Weeks,3 Months,6 Months,1 Year,2 Years,3 Years,4 Years,Unknown' ``` How do I make this ORDER BY clause more robust? Is there any way to include an incrementing variable in the statement, e.g. ``` SELECT ElapsedTime, COUNT(SurgeryID) as numberOfOps FROM Table WHERE Doctor = 'Dr. ABC' GROUP BY Time var i = 0; ORDER BY CASE WHEN ElapsedTime = 'Preoperative' THEN ++i WHEN ElapsedTime = '2 Weeks' THEN ++i WHEN ElapsedTime = '6 Weeks' THEN ++i WHEN ElapsedTime = '3 Months' THEN ++i . . . . . . . . . . . . . . . . WHEN ElapsedTime = '10 Years' THEN ++1 ELSE ElapsedTime END ASC ``` I understand that SQL doesn't allow for such things, but that is the type of functionality which I am seeking. Thanks
Create a reference table, something like: ``` CREATE TABLE ElapsedTimeRef ( timeString VARCHAR(20), orderValue INT ); INSERT INTO ElapsedTimeRef (timeString, orderValue) VALUES ('Preoperative', 1), ('2 Weeks', 2), . . . . . ('9 Years', 14), ('10 Years', 15); ``` Then join on it in your SELECT, like: ``` SELECT ElapsedTime, COUNT(SurgeryID) as numberOfOps FROM foo, ElapsedTimeRef etr WHERE foo.Doctor = 'Dr. ABC' AND foo.numberOfOps > 5 AND foo.ElapsedTime = etr.timeString GROUP BY foo.ElapsedTime ORDER BY etr.orderValue ASC ``` (P.S. I'm assuming your table name is not actually "Table" since `TABLE` is a SQL keyword.)
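A quick check of the reference-table idea in SQLite via Python, with a shortened reference list and a few made-up surgery rows (stages with no surgeries simply drop out of the result):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE surgeries (ElapsedTime TEXT, SurgeryID INTEGER);
CREATE TABLE ElapsedTimeRef (timeString TEXT, orderValue INTEGER);
INSERT INTO ElapsedTimeRef VALUES
 ('Preoperative',1),('2 Weeks',2),('6 Weeks',3),('3 Months',4);
INSERT INTO surgeries VALUES
 ('3 Months',10),('2 Weeks',11),('6 Weeks',12),('2 Weeks',13);
""")

# The sort key lives in the reference table, not in a giant CASE expression.
ordered = conn.execute("""
SELECT s.ElapsedTime, COUNT(s.SurgeryID)
FROM surgeries s
JOIN ElapsedTimeRef etr ON s.ElapsedTime = etr.timeString
GROUP BY s.ElapsedTime, etr.orderValue
ORDER BY etr.orderValue
""").fetchall()
print(ordered)
```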
Try the below query ``` SELECT ElapsedTime, SURGERY_COUNT from (select case ElapsedTime WHEN 'Preoperative' THEN 1 WHEN '2 Weeks' THEN 2 WHEN '2 Years' THEN 3 WHEN '10 Years' THEN 4 ELSE 0 END AS surgery_order , ElapsedTime, COUNT(SurgeryID) as SURGERY_COUNT from test_table where Doctor = 'Dr. ABC' and numberOfOps > 5 GROUP BY surgery_order, ElapsedTime order by surgery_order asc) as DATA_TABLE ```
incrementing an ORDER BY statement
[ "", "mysql", "sql", "sql-order-by", "" ]
I am trying to find a MySQL query that will display the number of occurrences of an ID value within a year. Table: ``` a_id year ---- ---- 1 2010 1 2011 1 2012 1 2012 1 2013 1 2014 1 2015 2 2010 2 2011 2 2013 2 2014 2 2014 2 2015 3 2010 3 2010 3 2011 ``` Expected output: ``` a_id year occurrences ---- ----- ----------- 1 2010 1 1 2011 1 1 2012 2 1 2013 1 1 2014 1 1 2015 1 2 2010 1 2 2011 1 2 2013 1 2 2014 2 2 2015 1 3 2010 2 3 2011 1 ``` I'm trying something along the lines of the following SQL query, but it gives me nothing like the expected output. It's the 3rd column I'm struggling with. ``` SELECT a__id, year, count(distinct a_id) as occurrences FROM table1 GROUP by year ORDER by a_id ``` How can I create that 3rd column?
Since you are grouping by `a_id` and `year`, you of course get only 1 distinct value per group. Simply change `count(distinct a_id)` to `count(*)`. For example you get this group: ``` 1 2012 1 2012 ``` Notice that in this group the number of distinct a_id values is 1, but you want the count of all rows in the group. With `distinct` you will get 1 as the occurrence in every group. **EDIT**: Ok, I had missed that you are grouping only by `year`, so you should group by `a_id` as well. The rest of the answer stays as is. So you end up with: ``` SELECT a__id, year, count(*) as occurrences FROM table1 GROUP by a__id, year ```
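For completeness, the corrected query run against a subset of the question's rows in SQLite via Python (the column is written `a_id` here, matching the table shown in the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (a_id INTEGER, year INTEGER);
INSERT INTO table1 VALUES
 (1,2010),(1,2011),(1,2012),(1,2012),(1,2013),
 (2,2014),(2,2014),(3,2010),(3,2010),(3,2011);
""")

# COUNT(*) per (a_id, year) group gives the occurrences column.
counts = conn.execute("""
SELECT a_id, year, COUNT(*) AS occurrences
FROM table1
GROUP BY a_id, year
ORDER BY a_id, year
""").fetchall()
print(counts)
```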
``` SELECT a__id, year, count(*) as occurrences FROM table1 GROUP by a__id, year ```
Count occurrences of distinct values
[ "", "mysql", "sql", "" ]
I am using Hive, so the SQL syntax might be slightly different. How do I get the data from the previous month? For example, if today is 2015-04-30, I need the data from March in this format 201503? Thanks! ``` select employee_id, hours, previous_month_date--YYYYMM, from employees where previous_month_date = cast(FROM_UNIXTIME(UNIX_TIMESTAMP(),'yyyy-MM-dd') as int) ```
You could do `(year('2015-04-30')*100+month('2015-04-30'))-1`; for the above-mentioned date it will return `201503`. Or use something like `(year(from_unixtime(unix_timestamp()))*100+month(from_unixtime(unix_timestamp())))-1` for today's previous month. Assuming your date column is in 'yyyy-mm-dd' format, you can use the first example and substitute the date string with your table column name; for any other format the second example will do: add the column name in the `unix_timestamp()` operator.
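One caveat: the plain `(year*100 + month) - 1` arithmetic works for the example date but breaks across a January boundary, since `201501 - 1` gives `201500` rather than `201412`. A small month-safe sketch in Python (independent of Hive) for comparison:

```python
from datetime import date

def prev_month_yyyymm(d: date) -> int:
    # Work in "months since year 0" so the year rolls over correctly.
    months = d.year * 12 + d.month - 2  # -1 to zero-base, -1 for previous month
    return (months // 12) * 100 + (months % 12) + 1

print(prev_month_yyyymm(date(2015, 4, 30)))  # 201503
print(prev_month_yyyymm(date(2015, 1, 15)))  # 201412
```

In Hive itself the same guard can be expressed with a CASE on `month(...) = 1`.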
From experience, it's safer to use *DATE\_ADD(Today, -1-Day(Today))* to compute last-day-of-previous-month without having to worry about edge cases. From there you can do what you want e.g. ``` select from_unixtime(unix_timestamp(), 'yyyy-MM-dd') as TODAY, date_add(from_unixtime(unix_timestamp(), 'yyyy-MM-dd'), -1-cast(from_unixtime(unix_timestamp(), 'd') as int)) as LAST_DAY_PREV_MONTH, substr(date_add(from_unixtime(unix_timestamp(), 'yyyy-MM-dd'), -1-cast(from_unixtime(unix_timestamp(), 'd') as int)), 1,7) as PREV_MONTH, cast(substr(regexp_replace(date_add(from_unixtime(unix_timestamp(), 'yyyy-MM-dd'), -1-cast(from_unixtime(unix_timestamp(), 'd') as int)), '-',''), 1,6) as int) as PREV_MONTH_NUM from WHATEVER limit 1 -- today last_day_prev_month prev_month prev_month_num -- 2015-08-13 2015-07-30 2015-07 201507 ``` See Hive documentation about [date functions](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-DateFunctions), [string functions](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-StringFunctions) etc.
Selecting YYYYMM of the previous month in HIVE
[ "", "sql", "hive", "hiveql", "" ]
So I used this piece of sample code to retrieve data from an MS Access database and display it in a few textboxes on the form. The following error occurs at `dr = cmd.ExecuteReader`: > Data type mismatch in criteria expression. This is the sample code - ``` Dim provider As String Dim dataFile As String Dim connString As String Public myConnection As OleDbConnection = New OleDbConnection Public dr As OleDbDataReader Private Sub Form1_Load(sender As Object, e As EventArgs) Handles MyBase.Load provider = "Provider=Microsoft.ACE.OLEDB.12.0;Data Source =" dataFile = "C:\Users\example\Desktop\Data.accdb" ' Change it to your Access Database location connString = provider & dataFile myConnection.ConnectionString = connString End Sub Dim r As Random = New Random Private Sub Button1_Click(sender As Object, e As EventArgs) Handles Button1.Click myConnection.Open() TextBox1.Clear() TextBox2.Clear() TextBox3.Clear() Dim str As String str = "SELECT * FROM Items WHERE (Code = '" & r.Next(1, 3) & "')" Dim cmd As OleDbCommand = New OleDbCommand(str, myConnection) dr = cmd.ExecuteReader While dr.Read() TextBox1.Text = dr("Description").ToString TextBox2.Text = dr("Cost").ToString TextBox3.Text = dr("Price").ToString End While myConnection.Close() End Sub ```
The table name in the file that you provided for download is "Table1" not "Items". Change the query string to: `str = "SELECT * FROM Table1 where (Code = '" & r.Next(1, 3) & "')"`
Try this: ``` str = "SELECT * FROM Items WHERE (Code = '" & (r.Next(1, 3)).ToString() & "')" ```
Read data from a database in vb.net
[ "", "sql", "database", "vb.net", "" ]
Let's assume I have the following table which is created by using the below query. **NEXT() command is not available** ``` ID ClientID Service_Code Date of Service 101 100030 1 2011-04-06 102 100030 6 2011-04-06 103 100030 9 2011-05-26 104 100030 11 2011-09-29 105 100030 6 2011-09-29 ``` and I would like to add a column that will show the next service date For example ``` ID ClientID Service_Code Date of Service Next Service Date 101 100030 1 2011-04-06 2011-04-06 102 100030 6 2011-04-06 2011-05-26 103 100030 9 2011-05-26 2011-09-29 104 100030 11 2011-09-29 2011-10-29 105 100030 6 2011-10-29 NULL ``` I have used the query below but is returning the following ``` ID ClientID Service_Code Date of Service Next Service Date 101 100030 1 2011-04-06 2011-05-26 102 100030 6 2011-04-06 2011-05-26 103 100030 9 2011-05-26 2011-05-26 104 100030 11 2011-09-29 2011-05-26 105 100030 6 2011-10-29 2011-05-26 ``` Any suggestions???
@Yoismel All I did was add an ORDER BY and LIMIT clause to your answer. Your answer does not guarantee that the date returned is the most immediate next date. If the same person had 3 entries: one for day x, day x+1 and day x+30, your answer could return day x+30 instead of day x+1 since you don't include an ORDER BY statement. Also, you need to LIMIT subqueries like this to only return one column from one row or else you'll throw an error if a patient has more than 2 entries in the billing\_tx\_history table. You may also want to handle cases where a patient has only two entries from the same date, if that's a possibility. ``` SELECT btx.PATID, btx.v_patient_name, service_code, ID, date_of_service, ( SELECT date_of_service FROM billing_tx_history btx1 WHERE btx1.PATID = btx.PATID AND date_of_service > btx.date_of_service ORDER BY date_of_service LIMIT 1 ) as next_date_of_service FROM billing_tx_history btx INNER JOIN episode_history ep ON btx.FACILITY = ep.FACILITY AND btx.PATID = ep.PATID AND ep.episode_number= btx.episode_number WHERE (ep.date_of_discharge IS NULL OR ep.date_of_discharge > getDate()) AND service_code NOT LIKE 'DELETE' AND btx.patid = '1000030' ```
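The correlated subquery with `ORDER BY ... LIMIT 1` can be exercised quickly against SQLite as a stand-in for the syntax above; the table and column names here are invented for the illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE svc(id INTEGER, date_of_service TEXT);
INSERT INTO svc VALUES
  (101,'2011-04-06'), (102,'2011-04-06'),
  (103,'2011-05-26'), (104,'2011-09-29');
""")
rows = con.execute("""
SELECT id, date_of_service,
       (SELECT date_of_service FROM svc s2
        WHERE s2.date_of_service > s1.date_of_service
        ORDER BY s2.date_of_service LIMIT 1) AS next_date_of_service
FROM svc s1 ORDER BY id
""").fetchall()
for r in rows:
    print(r)  # the last row has no later date, so its next date is NULL/None
```

The `ORDER BY ... LIMIT 1` inside the subquery is exactly what guarantees the nearest following date rather than an arbitrary later one.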
Without more details, this is the best answer I can give. This works in PostgreSQL 9.4. I'm making a lot of assumptions as to what you want, so feel free to edit your question with more details if this doesn't do it. ``` CREATE TABLE t ( id integer, clientid integer, service_code integer, date_of_service date ); INSERT INTO t VALUES (101, 100030, 1, '2011-04-06'); INSERT INTO t VALUES (102, 100030, 6, '2011-04-06'); INSERT INTO t VALUES (103, 100030, 9, '2011-05-26'); SELECT *, ( SELECT date_of_service FROM t AS b WHERE b.id > a.id ORDER BY id LIMIT 1 ) AS next_service_date FROM t AS a ORDER BY id; ``` Which gives the following result: ``` id clientid service_code date_of_service next_service_date --------------------------------------------------------------------------- 101 10030 1 2011-04-06 2011-04-06 102 10030 6 2011-04-06 2011-05-26 103 10030 9 2011-05-26 <NULL> ```
Selecting next date from a list of services
[ "", "sql", "" ]
I'm trying to generate statistics per value for a column in a table, generating the value itself, number of occurrences of that value in the table, and the % of that from the total. I have a table such as in the following example: ``` Table "public.films" Column | Type | Modifiers --------+-----------------------+----------- code | character(5) | title | character varying(40) | # select * from films; code | title -------+------- a1123 | yo1 a1124 | yo1 a1125 | yo2 a110 | yo3 a110v | yo3 a1a | yo3 a1az | yo3 (7 rows) ``` I tried using rank() and percent\_rank() to accomplish this but it didn't work. Expected outcome for the above example would be: ``` # select * from films; title | title_count | title_percent -------+-------------+------------------- yo1 | 2 | 28% yo2 | 1 | 14% yo3 | 4 | 57% ``` What's the most efficient query to achieve that goal, considering that the table will contain over 100 millions of values? (the column is indexed)
This should help: ``` SELECT title, COUNT(*) AS title_count, ROUND(COUNT(*) / SUM(COUNT(*)) OVER () * 100) AS percent FROM films GROUP BY title ORDER BY title ```
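The same window-function technique runs on SQLite 3.25+ (bundled with recent Python builds), with one caveat: SQLite, unlike PostgreSQL, does integer division on integers, so multiply by 100.0 before dividing. A quick check with the question's data (rounding yields 29% where the question's example truncated to 28%):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE films(code TEXT, title TEXT);
INSERT INTO films VALUES
 ('a1123','yo1'), ('a1124','yo1'), ('a1125','yo2'),
 ('a110','yo3'), ('a110v','yo3'), ('a1a','yo3'), ('a1az','yo3');
""")
rows = con.execute("""
SELECT title,
       COUNT(*)                                        AS title_count,
       ROUND(COUNT(*) * 100.0 / SUM(COUNT(*)) OVER ()) AS title_percent
FROM films GROUP BY title ORDER BY title
""").fetchall()
print(rows)  # [('yo1', 2, 29.0), ('yo2', 1, 14.0), ('yo3', 4, 57.0)]
```

`SUM(COUNT(*)) OVER ()` evaluates the window over the already-grouped rows, so the table is scanned only once.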
And yet another one: ``` WITH code_cnt AS ( SELECT title, count(*) AS title_count FROM films GROUP BY title), gt AS ( SELECT sum(title_count) AS grand_total FROM code_cnt) SELECT title, title_count, (100 * title_count / grand_total) AS title_percent FROM code_cnt, gt ORDER BY title; ``` This version avoids the use of `count(*)` on the entire table, which is a performance issue when the table is large. (Note that the first three answers count all the records in the entire table and then, separately, the records in each group.)
Per-value statistics over table in PostgreSQL
[ "", "sql", "postgresql", "" ]
I would like to do something like the following: ``` CREATE VIEW foo_view AS SELECT foo.id, foo.carryover_id, foo.amount, (SELECT SUM(foo_view.total) FROM foo_view WHERE foo_view.carryover_id = foo.id ) AS carryover_amount, (amount + carryover_amount) AS total; ``` However, this raises an error `relation "foo_view" does not exist`. I would think that `RECURSIVE` would come in handy here, but the [documentation](http://www.postgresql.org/docs/9.4/static/queries-with.html) is more reference than tutorial and there don't seem to be any resources out there to help me. Thoughts? **EDIT** Here's a sqlfiddle with a relevant schema <http://sqlfiddle.com/#!15/6834d/1>
The trick is to populate a table with source > target pairs, then to use that table to calculate the total amount: ``` CREATE RECURSIVE VIEW leaf_view(target_id, source_id) AS ( SELECT id, id FROM foo UNION ALL SELECT leaf_view.target_id, foo.id FROM leaf_view JOIN foo ON foo.carryover_id = leaf_view.source_id ); CREATE VIEW foo_view(id, total) AS SELECT leaf_view.target_id, SUM(bar.amount) FROM leaf_view LEFT JOIN bar ON leaf_view.source_id = bar.foo_id GROUP BY leaf_view.target_id; ``` <http://sqlfiddle.com/#!15/6834d/62> I owe this answer to `RhodiumToad` in #postgresql on freenode. Note that this will have problems for large `foo` since the optimizer generally can't work on recursive queries.
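The target/source-pair trick generalizes to any engine with recursive CTEs. Below is a SQLite sketch with a simplified schema: amounts are stored directly on `foo` rather than in a separate `bar` table, and the numbers are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE foo(id INTEGER PRIMARY KEY, carryover_id INTEGER, amount INTEGER);
INSERT INTO foo VALUES (1, NULL, 10), (2, 1, 5), (3, 2, 2);
""")
rows = con.execute("""
WITH RECURSIVE leaf(target_id, source_id) AS (
  SELECT id, id FROM foo                       -- every row contributes to itself
  UNION ALL
  SELECT l.target_id, f.carryover_id           -- ...plus whatever it carries over
  FROM leaf l JOIN foo f ON f.id = l.source_id
  WHERE f.carryover_id IS NOT NULL
)
SELECT leaf.target_id, SUM(f.amount) AS total
FROM leaf JOIN foo f ON f.id = leaf.source_id
GROUP BY leaf.target_id ORDER BY leaf.target_id
""").fetchall()
print(rows)  # row 3 totals 2 + 5 + 10 across the whole carryover chain
```

The recursive part only materializes (target, source) id pairs; the amounts are summed once, in the final non-recursive join, which is what keeps the optimizer's job tractable.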
You are right, your article is what you needed. I formatted the data first into a fooBar temp table: ``` SELECT b.id , foo_id , ISNULL(f.carryover_id, foo_id) as carryover_id , b.amount , ROW_NUMBER() OVER( PARTITION BY ISNULL(f.carryover_id, foo_id) ORDER BY b.id asc) as rowNmbr INTO #fooBar FROM #bar as b INNER JOIN #foo as f ON b.foo_id = f.id; ``` The first row in the `WITH` is the top level. Then it recursively goes through each level. The rowNmbr is the level. ``` WITH summarizedFoo(rowNmbr, id, foo_id, carryover_id, amount, totalAmount) AS ( SELECT f.rowNmbr , f.id , f.foo_id , f.carryover_id , ISNULL(f.amount,0) as amount , ISNULL(f.amount,0) as totalAmount FROM #fooBar as f WHERE f.rowNmbr = 1 UNION ALL SELECT f.rowNmbr , f.id , f.foo_id , f.carryover_id , ISNULL(f.amount,0) as amount , CAST((ISNULL(f.amount,0) + ISNULL(s.totalAmount,0)) as decimal) as totalAmount FROM #fooBar as f INNER JOIN summarizedFoo as s ON f.carryover_id = s.carryover_id AND f.rowNmbr = s.rowNmbr +1 ) SELECT rowNmbr, id, foo_id, carryover_id, amount, totalAmount FROM summarizedFoo ``` **I did this in T-SQL, but to make this work in your fiddler** Here is the PostgreSQL version: Create a fooBar table (could be a view): ``` INSERT INTO fooBar SELECT CAST(b.id as INTEGER) , CAST(foo_id as INTEGER) , CAST(CASE WHEN f.carryover_id is NULL THEN foo_id ELSE f.carryover_id END as INTEGER) as carryover_id , CAST(b.amount as DECIMAL) , CAST(ROW_NUMBER() OVER(PARTITION BY CASE WHEN f.carryover_id is NULL THEN foo_id ELSE f.carryover_id END ORDER BY b.id asc) as INTEGER) as rowNmbr FROM bar as b INNER JOIN foo as f ON b.foo_id = f.id; ``` Here is the `WITH RECURSIVE`: ``` WITH RECURSIVE summarizedFoo(rowNmbr, id, foo_id, carryover_id, amount, totalAmount) AS ( SELECT f.rowNmbr, f.id, f.foo_id, f.carryover_id , f.amount , f.amount FROM fooBar as f WHERE f.rowNmbr = 1 UNION ALL SELECT f.rowNmbr, f.id, f.foo_id, f.carryover_id , f.amount , f.amount + s.totalAmount FROM fooBar as f INNER JOIN 
summarizedFoo as s ON f.carryover_id = s.carryover_id AND f.rowNmbr = s.rowNmbr +1 ) SELECT rowNmbr, id, foo_id, carryover_id, amount, totalAmount FROM summarizedFoo ``` I tested in SSMS(T-SQL version) and Fiddler(PostgreSQL), and verified the results in Excel. Here is Fiddler of the solution: <http://sqlfiddle.com/#!15/a7822/3>
Reference a view inside of the view in postgresql
[ "", "sql", "postgresql", "" ]
I am trying to create a function which returns the maximal distance between all the items, as a number. I understand it should be like a nested for loop and am having trouble with the query nesting. I have a table: ``` create table MediaItems ( MID NUMBER(9,0), TITLE VARCHAR2 (200), PROD_YEAR NUMBER(4), TITLE_LENGTH NUMBER(4), CONSTRAINT mid_pk PRIMARY KEY (MID) ); ``` and I need to return the maximal result of the distance calculation between all the item pairs. The distance is the squared difference between the production years of the given items (PROD_YEAR). I would appreciate any help/advice on how to nest queries.
For your particular calculation -- assuming the values are all positive -- just square the difference between the minimum and maximum values: ``` select power(max(prod_year) - min(prod_year), 2) from mediaitems mi; ``` This can easily be modified to handle negative values, but that seems unlikely in a column named `prod_year`.
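The identity behind this answer, that the largest pairwise squared difference is just `(max - min)^2`, is easy to verify by brute force. It actually holds for negative values too, since the largest squared difference depends only on the spread between the extremes:

```python
from itertools import product

years = [-3, 1999, 2005, 2021]          # made-up sample, including a negative
brute = max((a - b) ** 2 for a, b in product(years, years))
shortcut = (max(years) - min(years)) ** 2
print(brute, shortcut)  # both 2024**2 = 4096576
```

This is why the single-pass `MIN`/`MAX` query avoids the quadratic cost of comparing every pair.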
A `cross join` should do the trick - it pairs every item with every other item, giving the "nested loops" effect you wanted here. ``` SELECT MAX(POWER(a.prod_year - b.prod_year, 2)) FROM MediaItems a CROSS JOIN MediaItems b ```
Oracle SQL nested max function
[ "", "sql", "oracle", "" ]
I have a query I would like to optimize. This is the query: ``` SELECT CONN.connNum, MIN(INTER.walkingDistanceMinutes) AS minimalWalkingDistance FROM INTER INNER JOIN CONN ON (INTER.IDConn_FK = CONN.IDConn) GROUP BY INTER.IDConn_FK, CONN.connNum; ``` These are the explain plan results: ``` ------------------------------------------------------------------------------------------------ | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ------------------------------------------------------------------------------------------------ | 0 | SELECT STATEMENT | | 3 | 171 | 7 (15)| 00:00:01 | | 1 | HASH GROUP BY | | 3 | 171 | 7 (15)| 00:00:01 | | 2 | NESTED LOOPS | | 3 | 171 | 6 (0)| 00:00:01 | | 3 | NESTED LOOPS | | 3 | 171 | 6 (0)| 00:00:01 | | 4 | TABLE ACCESS FULL | INTER | 3 | 78 | 3 (0)| 00:00:01 | |* 5 | INDEX UNIQUE SCAN | SYS_C002012172 | 1 | | 0 (0)| 00:00:01 | | 6 | TABLE ACCESS BY INDEX ROWID| CONN | 1 | 31 | 1 (0)| 00:00:01 | ------------------------------------------------------------------------------------------------ ``` I've tried using more specific SELECTs, but the results are the same (something like `FROM (SELECT IDConn_FK, walkingDistanceMinutes FROM INTER) I` etc). Can you please show me a way to get the cost down?
It would be very useful to know whether IDConn_FK and connNum are unique in their tables, because this changes a lot. 1. If they're both unique in their tables, you don't need to group the results, because there can't be multiple occurrences of the same value for connNum. In this case, one optimization would be to drop the GROUP BY, because there is only a single value of walkingDistanceMinutes corresponding to each connNum. Removing an unneeded GROUP BY would be the right optimization here. 2. If just connNum is unique on CONN, then one way to optimize this query may be to limit the size of the resources needed to sort the elements during the MIN evaluation. This can be done using a subquery that will also limit the number of rows involved in the join. Here you can use query #1. 3. If only IDConn_FK is unique, then the query is fine as it is. Query #2 may help you a little, but not much. 4. If neither column is unique, you can still try to limit the number of rows involved in the join through a subquery as in case #2, but you will also need to re-evaluate the MIN once more, because you need it corresponding to connNum (which relies on table CONN). Don't assume that grouping twice will be more expensive than doing it at once: this is a divide-and-conquer approach (separate a complex problem into simpler problems, then recombine their results to get the solution for the complex problem). Here you could use query #2. 
Query #1: ``` SELECT CONN.connNum, minimalWalkingDistance FROM ( select INTER.IDConn_FK as IDConn, MIN(INTER.walkingDistanceMinutes) AS minimalWalkingDistance from INTER GROUP BY INTER.IDConn_FK ) inter JOIN CONN using (IDConn) ``` Query #2 ``` SELECT CONN.connNum, MIN(INTER.minimalWalkingDistance) AS minimalWalkingDistance FROM ( select INTER.IDConn_FK as IDConn, MIN(INTER.walkingDistanceMinutes) AS minimalWalkingDistance from INTER GROUP BY INTER.IDConn_FK ) inter JOIN CONN using (IDConn) group by CONN.connNum ``` And last one more thing to know: don't always consider execution plan cost as God's word, there are many times where queries with high cost are more efficient than others with lower cost. Especially when there are a high number of joins and aggregations.
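Query #1's shape, aggregating INTER before the join, can be checked quickly on SQLite with toy data (the table contents below are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE inter(IDConn_FK INTEGER, walkingDistanceMinutes INTEGER);
CREATE TABLE conn(IDConn INTEGER PRIMARY KEY, connNum TEXT);
INSERT INTO inter VALUES (1, 5), (1, 3), (2, 7);
INSERT INTO conn VALUES (1, 'C1'), (2, 'C2');
""")
rows = con.execute("""
SELECT c.connNum, m.minimalWalkingDistance
FROM (SELECT IDConn_FK AS IDConn,
             MIN(walkingDistanceMinutes) AS minimalWalkingDistance
      FROM inter GROUP BY IDConn_FK) m
JOIN conn c USING (IDConn)
ORDER BY c.connNum
""").fetchall()
print(rows)  # [('C1', 3), ('C2', 7)]
```

The inner query collapses INTER to one row per IDConn before the join, so the join and any later grouping touch far fewer rows.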
For your size of data, there is no real optimization possible. For larger data, Oracle should choose other execution paths. You might try this: ``` select c.connNum, (select min(i.walkingDistanceMinutes) from inter i where i.IDConn_FK = c.idConn ) as minimalWalkingDistance from conn c ; ``` I'm not 100% sure this is exactly the same query. I'm assuming that `idConn` is the primary key on the `conn` table.
Optimizing an Oracle group by/aggregate query
[ "", "sql", "oracle", "optimization", "" ]
I have a table as follows ``` +----+---------+-------------+--------+ | id | user_id | ip_address | amount | +----+---------+-------------+--------+ | 1 | 2 | 192.168.1.1 | 0.5 | | 2 | 2 | 192.168.1.1 | 0.5 | | 3 | 2 | 192.168.1.2 | 0.5 | | 4 | 2 | 192.168.1.3 | 0.5 | | 5 | 5 | 192.168.5.1 | 1.5 | | 6 | 5 | 192.168.5.1 | 1.5 | | 7 | 5 | 192.168.5.3 | 0.5 | +----+---------+-------------+--------+ ``` I want to get unique `ip_address` SUM of `amount` and count duplicates as only one. From above table there are 7 rows from which first 4 records are for `user_id` 2 and last 3 records for `user_id` 5. I am trying to SUM of amount field along with user\_id in descending order and want output as follows. ``` user_id total_amount 5 2.0 2 1.5 ```
Try this way: ``` select user_id, sum(amount) from (select user_id, ip_address, amount from tab group by ip_address, amount ) A group by user_id order by user_id desc ``` **[SqlFiddle](http://sqlfiddle.com/#!9/6edba/3/0) Demo**
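The same dedup-then-sum shape works in SQLite. This sketch also adds `user_id` to the inner GROUP BY so that identical (ip, amount) pairs belonging to different users are not merged, which is a latent issue in the query above if two users ever share an IP:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tab(user_id INTEGER, ip_address TEXT, amount REAL);
INSERT INTO tab VALUES
 (2,'192.168.1.1',0.5), (2,'192.168.1.1',0.5), (2,'192.168.1.2',0.5),
 (2,'192.168.1.3',0.5), (5,'192.168.5.1',1.5), (5,'192.168.5.1',1.5),
 (5,'192.168.5.3',0.5);
""")
rows = con.execute("""
SELECT user_id, SUM(amount)
FROM (SELECT user_id, ip_address, amount
      FROM tab GROUP BY user_id, ip_address, amount) a
GROUP BY user_id ORDER BY user_id DESC
""").fetchall()
print(rows)  # [(5, 2.0), (2, 1.5)]
```

The inner GROUP BY is what collapses the duplicate (user, ip, amount) rows before the outer sum.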
GROUP BY, with aggregate function `SUM`, on the DISTINCT rows: ``` select user_id, sum(amount) as total_amount from (select distinct user_id, ip_address, amount from tablename) t group by user_id ``` Not sure if this is MySQL, but test and comment!
SUM of rows excluding duplicates in descending order
[ "", "mysql", "sql", "" ]
I have a table Team ``` Id Name ... 1 Chelsea 2 Arsenal 3 Liverpool ``` Now I need to check whether my Team table has a name like "Chelsea FC". How can I write a select query for this case, where the search string may have extra words? I could try Lucene.net, but it's overkill for such a small use case and it would take time to learn.
You would need to split the string up and search by each word in the string. SQL Server doesn't have a native function to do that, but there are various examples on the web. This function will take a string and a delimiter, and it will split the string by the delimiter and return a table of the resulting values. ``` CREATE FUNCTION dbo.SplitVarchar (@stringToSplit varchar(4000), @delimiter CHAR(1)) RETURNS @Result TABLE(Value VARCHAR(50))AS BEGIN --This CTE will return a table of (INT, INT) that signify the startIndex and stopIndex --of each string between delimiters. WITH SplitCTE(startIndex, stopIndex) AS ( SELECT 1, CHARINDEX(@delimiter, @stringToSplit) --The bounds of the first word UNION ALL SELECT stopIndex + 1, CHARINDEX(@delimiter, @stringToSplit, stopIndex+1) FROM SplitCTE --Recursively call SplitCTE, getting each successive value WHERE stopIndex > 0 ) INSERT INTO @Result SELECT SUBSTRING(@stringToSplit, --String with the delimited data startIndex, --startIndex of a particular word in the string CASE WHEN stopIndex > 0 THEN stopIndex-startIndex --Length of the word ELSE 4000 END --Just in case the delimiter was missing from the string ) AS stringValue FROM SplitCTE RETURN END; ``` Once you turn your delimited string into a table, you can JOIN it with the table you wish to search and compare values that way. ``` DECLARE @TeamName VARCHAR(50)= 'Chelsea FC' SELECT DISTINCT Name FROM Team INNER JOIN (SELECT Value FROM dbo.SplitVarchar(@TeamName, ' ')) t ON CHARINDEX(t.Value, Name) > 0 ``` Results: ``` | Name | |---------| | Chelsea | ``` [SQL Fiddle example](http://sqlfiddle.com/#!6/0d1b8/1) I based my design on Amit Jethva's [Convert Comma Separated String to Table : 4 different approaches](http://blogs.msdn.com/b/amitjet/archive/2009/12/11/sql-server-comma-separated-string-to-table.aspx)
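The matching rule the splitter enables, keeping a team when any word of the search string occurs inside its name, is small enough to prototype in plain Python before committing to the T-SQL function. The helper name is invented, and note that Python's `in` is case-sensitive, unlike `CHARINDEX` under a case-insensitive collation:

```python
def match_teams(teams, search):
    """Teams whose name contains at least one word of the search string."""
    words = [w for w in search.split(' ') if w]
    return sorted({t for t in teams if any(w in t for w in words)})

print(match_teams(['Chelsea', 'Arsenal', 'Liverpool'], 'Chelsea FC'))
# ['Chelsea']: the word 'Chelsea' matches, the extra word 'FC' matches nothing
```

The `DISTINCT` in the SQL version plays the same role as the set comprehension here: a name matched by several words should still appear once.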
You can use `like` this way: ``` declare @s varchar(20) = 'Chelsey FC' select * from Team where name like '%' + @s + '%' or @s like '%' + name + '%' ``` This will filter rows if `@s` contains `Name` or `Name` contains `@s`.
Sql Select similar text
[ "", "sql", "sql-server", "" ]
I need to select a value within a comma delimited string using only SQL. Is this possible? ``` Data A B C 1 Luigi Apple,Banana,Pineapple,,Citrus ``` I need to select specifically the `2nd item in column C`, in this case banana. I need help. I cannot create new SQL functions, I can only use `SQL`. This is the `as400` so the `SQL` is somewhat old tech. Update.. With help from @Sandeep we were able to come up with ``` SELECT xmlcast(xmlquery('$x/Names/Name[2]' passing xmlparse(document CONCAT(CONCAT('<?xml version="1.0" encoding="UTF-8" ?><Names><Name>',REPLACE(ODWDATA,',','</Name><Name>')),'</Name></Names>')) as "x") as varchar(1000)) FROM ACL00 ``` I'm getting this error ``` Keyword PASSING not expected. Valid tokens: ) ,. ``` New update. Problem solved by using UDF of Oracle's INSTR
I am answering my own question now. It is impossible to do this with the [built in functions within AS400](http://publib.boulder.ibm.com/iseries/v5r2/ic2924/index.htm?info/db2/rbafzmstch2func.htm) You have to create an UDF of Oracle's [INSTR](http://www.techonthenet.com/oracle/functions/instr.php) Enter this within STRSQL it will create a new function called INSTRB ``` CREATE FUNCTION INSTRB (C1 VarChar(4000), C2 VarChar(4000), N integer, M integer) RETURNS Integer SPECIFIC INSTRBOracleBase LANGUAGE SQL CONTAINS SQL NO EXTERNAL ACTION DETERMINISTIC BEGIN ATOMIC DECLARE Pos, R, C2L Integer; SET C2L = LENGTH(C2); IF N > 0 THEN SET (Pos, R) = (N, 0); WHILE R < M AND Pos > 0 DO SET Pos = LOCATE(C2,C1,Pos); IF Pos > 0 THEN SET (Pos, R) = (Pos + 1, R + 1); END IF; END WHILE; RETURN (Pos - 1)*(1-SIGN(M-R)); ELSE SET (Pos, R) = (LENGTH(C1)+N, 0); WHILE R < M AND Pos > 0 DO IF SUBSTR(C1,Pos,C2L) = C2 THEN SET R = R + 1; END IF; SET Pos = Pos - 1; END WHILE; RETURN (Pos + 1)*(1-SIGN(M-R)); END IF; END ``` Then to select the nth delimited value within a comma delimited string... in this case the 14th use this query utilizing the new function ``` SELECT SUBSTRING(C,INSTRB(C,',',1,13)+1,INSTRB(C,',',1,14)-INSTRB(C,',',1,13)-1) FROM TABLE ```
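The index arithmetic in that final SELECT is easy to get off by one, so here is the same logic mirrored in Python, with a minimal stand-in for the positive-start case of the INSTR UDF (both helper names are invented):

```python
def instr(s, sub, start, occurrence):
    """1-based position of the nth occurrence of sub, 0 if absent (positive start only)."""
    pos = start - 1
    for _ in range(occurrence):
        pos = s.find(sub, pos)
        if pos == -1:
            return 0
        pos += 1
    return pos

def nth_item(s, n):
    # mirrors SUBSTRING(C, INSTR(C,',',1,n-1)+1, INSTR(C,',',1,n)-INSTR(C,',',1,n-1)-1)
    start = instr(s, ',', 1, n - 1)            # 0 for the first item
    end = instr(s, ',', 1, n) or len(s) + 1    # past the end for the last item
    return s[start:end - 1]

data = 'Apple,Banana,Pineapple,,Citrus'
print(nth_item(data, 2), nth_item(data, 5))  # Banana Citrus
```

Empty fields between consecutive commas come back as empty strings, matching what the SQL expression would return.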
If you want 2nd item only than you can use substring function: ``` DECLARE @TABLE TABLE ( A INT, B VARCHAR(100), C VARCHAR(100) ) DECLARE @NTH INT = 3 INSERT INTO @TABLE VALUES (1,'Luigi','Apple,Banana,Pineapple,,Citrus') SELECT REPLACE(REPLACE(CAST(CAST('<Name>'+ REPLACE(C,',','</Name><Name>') +'</Name>' AS XML).query('/Name[sql:variable("@NTH")]') AS VARCHAR(1000)),'<Name>',''),'</Name>','') FROM @TABLE ```
Select a portion of a comma delimited string in DB2/DB2400
[ "", "sql", "db2", "db2-400", "" ]
My database doesn't have a specific column, so I created one in my query with a CASE expression. What I need is to concatenate this derived column with another column in the database: ``` select certificateDuration , 'DurationType' = case when certificateDurationType = 0 then 'Day' when certificateDurationType = 1 then 'Month' when certificateDurationType = 2 then 'Year' end from Scientific_Certification ```
To concatenate strings in SQL Server you can simply use the [+ operator](https://technet.microsoft.com/en-us/library/aa276862%28v=sql.80%29.aspx). Note that if one of the substrings is null then the entire concatenated string will become null as well. therefor, use [COALESCE](https://msdn.microsoft.com/en-us/library/ms190349.aspx) if you need a result even if one substring is null. ``` select certificateDuration, ' DurationType = '+ COALESCE(case when certificateDurationType = 0 then 'Day' when certificateDurationType = 1 then 'Month' when certificateDurationType = 2 then 'Year' end, '') As DurationType from Scientific_Certification ``` **Note:** I've used coalesce on your case clause since you have no default behavior (specified by `else`). this means that if `certificateDurationType` is not 0, 1 or 2 the case statement will return `null`.
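The NULL-propagation behavior that motivates the COALESCE is easy to demonstrate. SQLite shows the same semantics with its `||` operator, and SQL Server's `+` behaves the same way under its default settings:

```python
import sqlite3

con = sqlite3.connect(":memory:")
bad  = con.execute("SELECT 'Duration: ' || NULL").fetchone()[0]
good = con.execute("SELECT 'Duration: ' || COALESCE(NULL, '')").fetchone()[0]
print(bad, good)  # None 'Duration: '
```

One NULL operand turns the whole concatenation NULL, which is why the CASE result is wrapped in COALESCE before being appended.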
You just need the `+` operator or the `CONCAT` function 1. `+` works on all SQL Server versions ``` 'DurationType = ' + 'some text' ``` 2. `CONCAT` works on SQL Server 2012+ ``` CONCAT('DurationType = ', 'some text') ``` Your query should be something like this ``` SELECT certificateDuration ,'DurationType = ' + CASE certificateDurationType WHEN 0 THEN 'Day' WHEN 1 THEN 'Month' WHEN 2 THEN 'Year' END FROM Scientific_Certification ```
How to concatenate in SQL Server
[ "", "sql", "sql-server", "t-sql", "concatenation", "" ]
I have a table including more than 5 million rows of sales transactions. I would like to find **sum of date intervals between each customer three recent purchases**. Suppose my table looks like this : ``` CustomerID ProductID ServiceStartDate ServiceExpiryDate A X1 2010-01-01 2010-06-01 A X2 2010-08-12 2010-12-30 B X4 2011-10-01 2012-01-15 B X3 2012-04-01 2012-06-01 B X7 2012-08-01 2013-10-01 A X5 2013-01-01 2015-06-01 ``` The Result that I'm looking for may looks like this : ``` CustomerID IntervalDays A 802 B 135 ``` I know the query need to first retrieve 3 resent transactions of each customer (based on `ServiceStartDate`) and then calculate the interval between `startDate` and `ExpiryDate` of his/her transactions.
You want to calculate the difference between the previous row's ServiceExpiryDate and the current row's ServiceStartDate based on descending dates and then sum up the last two differences: ``` with cte as ( select tab.*, row_number() over (partition by customerId order by ServiceStartDate desc , ServiceExpiryDate desc -- don't know if this 2nd column is necessary ) as rn from tab ) select t2.customerId, sum(datediff(day, prevEnd, ServiceStartDate)) as Intervaldays ,count(*) as purchases from cte as t2 left join cte as t1 on t1.customerId = t2.customerId and t1.rn = t2.rn+1 -- previous and current row where t2.rn <= 3 -- last three rows group by t2.customerId; ``` Same result using LEAD: ``` with cte as ( select tab.*, row_number() over (partition by customerId order by ServiceStartDate desc) as rn ,lead(ServiceExpiryDate) over (partition by customerId order by ServiceStartDate desc ) as prevEnd from tab ) select customerId, sum(datediff(day, prevEnd, ServiceStartDate)) as Intervaldays ,count(*) as purchases from cte where rn <= 3 group by customerId; ``` Both will not return the expected result unless you subtract `purchases` (or `max(rn)`) from `Intervaldays`. But as you only sum two differences this seems to be not correct for me either... Additional logic must be applied based on your rules regarding: * customer has less than 3 purchases * overlapping intervals
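The LEAD variant ports to SQLite 3.25+ almost unchanged; `LAG` over an ascending order is equivalent to `LEAD` over a descending one, and `julianday()` stands in for `DATEDIFF(day, ...)`. Recomputing the question's data this way gives 805 days for A and 138 for B, slightly different from the hand-counted 802 and 135 in the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tab(CustomerID TEXT, ServiceStartDate TEXT, ServiceExpiryDate TEXT);
INSERT INTO tab VALUES
 ('A','2010-01-01','2010-06-01'), ('A','2010-08-12','2010-12-30'),
 ('A','2013-01-01','2015-06-01'), ('B','2011-10-01','2012-01-15'),
 ('B','2012-04-01','2012-06-01'), ('B','2012-08-01','2013-10-01');
""")
rows = con.execute("""
WITH g AS (
  SELECT CustomerID, ServiceStartDate,
         LAG(ServiceExpiryDate) OVER (PARTITION BY CustomerID
                                      ORDER BY ServiceStartDate) AS prevEnd
  FROM tab
)
SELECT CustomerID,
       CAST(SUM(julianday(ServiceStartDate) - julianday(prevEnd)) AS INTEGER)
FROM g WHERE prevEnd IS NOT NULL
GROUP BY CustomerID ORDER BY CustomerID
""").fetchall()
print(rows)  # [('A', 805), ('B', 138)]
```

This sketch sums all gaps per customer; restricting to the three most recent purchases would add the `row_number()` filter from the answer above.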
Assuming there are no overlaps, I think you want this: ``` select customerId, sum(datediff(day, ServiceStartDate, ServieEndDate) as Intervaldays from (select t.*, row_number() over (partition by customerId order by ServiceStartDate desc) as seqnum from table t ) t where seqnum <= 3 group by customerId; ```
Finding the interval between dates in SQL Server
[ "", "sql", "sql-server", "datediff", "" ]
SQL Command: ``` UPDATE table1 SET colOne=1 WHERE id EXISTS ( SELECT IF(col1=19,col2,col1) AS id FROM table2 WHERE (col3=19 AND col4='X') OR (col3=0 AND col4='Y' AND (col1=19 OR col2=19)) ) ``` The inner `SELECT` query works fine on its own.
``` UPDATE table1 SET colOne=1 WHERE id IN ( SELECT IF(col1=19,col2,col1) AS id FROM table2 WHERE (col3=19 AND col4='X') OR (col3=0 AND col4='Y' AND (col1=19 OR col2=19)) ) ```
Did you mean to use `=` instead of `EXISTS`? ``` UPDATE table1 SET colOne=1 WHERE id = ( SELECT IF(col1=19,col2,col1) AS id FROM table2 WHERE (col3=19 AND col4='X') OR (col3=0 AND col4='Y' AND (col1=19 OR col2=19)) ) ```
where is my mistake in my SQL code?
[ "", "mysql", "sql", "exists", "" ]
I've been trying to put together a SQL query for a couple hours and can't seem to get it right. Consider the following example tables `Products` and `ProductCategories`: ``` Products -------- ProductId ProductName --------- ----------- 1 | Achilles 2 | Hermes 3 | Apollo 4 | Zeus 5 | Poseidon 6 | Eros ProductCategories ----------------- ProductId Category --------- -------- 1 | Wars 1 | Wars|Trojan 1 | Wars|Trojans|Mortals 1 | Toys|Games 2 | Travel 2 | Travel|Trade 2 | Communication|Language|Writing 5 | Oceanware 6 | Love 6 | Love|Candy 6 | Love|Valentines 3 | Sunshine 4 | Lightning ``` The goal would be to select the product ID, product name and one of the categories associated with the product such that each product ID/name appears once in the results and the category that is selected is the one with the most pipe characters in it. In the case that 2 (or more) categories for a product are tied for the most pipes, then randomly picking either of them will work. In other words, the query should result in this dataset: ``` ProductId ProductName Category --------- ----------- -------- 1 | Achilles | Wars|Trojans|Mortals 2 | Hermes | Communication|Language|Writing 3 | Apollo | Sunshine 4 | Zeus | Lightning 5 | Poseidon | Oceanware 6 | Eros | Love|Valentines ``` (Note, the category returned for Eros could also be Love|Candy, either is acceptable) As of now, I have this SQL, which obviously doesn't work because it returns a row for each Product/Category combination, not just the category with the most pipes: ``` SELECT ProductId, ProductName, Category, MAX(PipeCount) FROM ( SELECT DISTINCT p.ProductId AS ProductId, p.ProductName AS ProductName, c.Category AS Category, LEN(c.CategoryName) - LEN(REPLACE(c.CategoryName, '|', '')) AS PipeCount FROM Products p INNER JOIN ProductCategories c ON p.ProductId = c.ProductId ) Subquery GROUP BY ProductId, ProductName, Category, PipeCount ``` I can't seem to get my query any closer than this, however. 
I want to return only the row for each product where PipeCount is the maximum PipeCount of any row for that product. Any help would be appreciated. Please note this is not my actual data; it's much more complicated than this, but this example should suffice. I am working on SQL Server 2012, but hopefully a good answer would be compatible with virtually any version of SQL.
I ended up solving the problem using various subqueries. One caveat is that it depends on the `ProductCategories` table in my example having a unique column that I did not explicitly specify. In my real data, this column exists already, but facing a similar problem, one could add such a column to make this solution work. Here is the SQL: ``` SELECT Sub1.ProductId, Sub3.Category FROM ( SELECT o.ProductId AS ProductId, MAX(LEN(REPLACE(c.Category, '|', '||')) - LEN(c.Category)) AS MaxPipeCount FROM Products o INNER JOIN ProductCategories c ON o.ProductId = c.ProductId GROUP BY o.ProductID ) Sub1 INNER JOIN ( SELECT o.ProductId AS ProductId, LEN(REPLACE(c.Category, '|', '||')) - LEN(c.Category) AS PipeCount, MAX(c.UniqueId) AS MaxUniqueId FROM Products o INNER JOIN ProductCategories c ON o.ProductId = c.ProductId GROUP BY o.ProductID, LEN(REPLACE(c.Category, '|', '||')) - LEN(c.Category) ) Sub2 ON Sub1.MaxPipeCount = Sub2.PipeCount AND Sub1.ProductId = Sub2.ProductId INNER JOIN ( SELECT DISTINCT o.ProductId, c.Category, LEN(REPLACE(c.Category, '|', '||')) - LEN(c.Category) AS PipeCount, c.UniqueId FROM Products o INNER JOIN ProductCategories c ON o.ProductId = c.ProductId ) Sub3 ON Sub1.MaxPipeCount = Sub3.PipeCount AND Sub2.MaxUniqueId = Sub3.UniqueId AND Sub1.ProductId = Sub3.ProductId ```
You could use `ROW_NUMBER` to get the `ProductId` with the most number of `CategoryName`: [**SQL Fiddle**](http://sqlfiddle.com/#!6/8375f/1/0) ``` SELECT p.*, pc.CategoryName FROM Products p INNER JOIN( SELECT *, RN = ROW_NUMBER() OVER(PARTITION BY ProductId ORDER BY LEN(CategoryName) - LEN(REPLACE(CategoryName, '|', '')) DESC) FROM ProductCategories ) pc ON pc.ProductId = p.ProductId WHERE RN = 1 ```
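The ROW_NUMBER pattern from this answer also runs on SQLite 3.25+. A reduced version of the question's data confirms that the category with the most pipes wins each partition:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE pc(ProductId INTEGER, Category TEXT);
INSERT INTO pc VALUES
 (1,'Wars'), (1,'Wars|Trojans|Mortals'), (1,'Toys|Games'),
 (2,'Travel'), (2,'Communication|Language|Writing');
""")
rows = con.execute("""
SELECT ProductId, Category FROM (
  SELECT ProductId, Category,
         ROW_NUMBER() OVER (
           PARTITION BY ProductId
           ORDER BY LENGTH(Category) - LENGTH(REPLACE(Category,'|','')) DESC
         ) AS rn
  FROM pc
) WHERE rn = 1 ORDER BY ProductId
""").fetchall()
print(rows)
```

Ties on pipe count are broken arbitrarily here, which matches the question's "either is acceptable" requirement; a secondary ORDER BY column would make the choice deterministic.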
SQL - How to aggregate and limit results to one row per ID based on the aggregate
[ "", "sql", "sql-server", "t-sql", "" ]
The following is my query:

```
SELECT *
FROM (
    SELECT f.max, f.min, p.user_id, p.id, p.title, p.rating,
           RANK() OVER (PARTITION BY p.user_id
                        ORDER BY p.rating DESC, p.id DESC) AS rnk
    FROM posts AS p
    INNER JOIN friends AS f ON (p.user_id = f.friend_id)
    WHERE f.user_id = 1
) AS subq
WHERE (subq.rnk <= subq.max)
LIMIT 10
```

It searches for posts of my friends, sorted by their rating and date. The window function in this query lets me limit the number of rows returned for each friend according to the `MAX` field on the `Friends` table. However, I also have a field `MIN`, which specifies the minimum number of posts I want from the query for a given friend. How is that possible?

I also wonder whether SQL is the best option for these types of queries. I already tried the Neo4j graph database, and while it seemed like a good solution, I would rather avoid using two separate databases.

[SQLFiddle](http://sqlfiddle.com/#!15/c9f2a/1)

Schema:

```
CREATE TABLE friends(
    user_id int,
    friend_id int,
    min int,
    max int
);

CREATE TABLE posts(
    id int,
    title varchar(255),
    rating int,
    date date,
    user_id int
);
```

Suppose we have the following data:

```
INSERT INTO friends VALUES
  (1,2,1,3)
, (1,3,0,5)
, (1,4,2,10);

INSERT INTO posts VALUES
  (1, 'posts1', 2, now(), 2)
, (2, 'posts2', 1, now(), 2)
, (3, 'posts3', 5, now(), 2)
, (4, 'posts4', 2, now(), 2)
, (5, 'posts5', 11, now(), 2)
, (6, 'posts6', 7, now(), 2)
, (7, 'posts7', 3, now(), 2)
, (8, 'posts8', 4, now(), 3)
, (9, 'posts9', 1, now(), 3)
, (10, 'posts10', 0, now(), 3)
, (11, 'posts11', 7, now(), 3)
, (12, 'posts12', 3, now(), 3)
, (13, 'posts13', 2, now(), 3)
, (14, 'posts14', 4, now(), 4)
, (15, 'posts15', 9, now(), 4)
, (16, 'posts16', 0, now(), 4)
, (17, 'posts17', 3, now(), 4)
, (18, 'posts18', 2, now(), 4)
, (19, 'posts19', 1, now(), 4)
, (20, 'posts20', 2, now(), 4);
```

Hence I would like to see `(post_id, title, rating, date, friend_id)` combinations with the following conditions, if possible:

1. between 1 and 3 posts from the friend with `id`=2
2. between 0 and 5 posts from the friend with `id`=3
3. between 2 and 10 posts from the friend with `id`=4

So basically, if my friend with `friend_id=2` posted 1 or more articles, I want at least 2 of them. If he posted more than 3 articles, I want no more than 3.
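The question's query can be sanity-checked against the sample data with an in-memory SQLite database as a Postgres stand-in — a sketch with some assumptions: SQLite ≥ 3.25 for window functions, the `date` column omitted (the SQL above populates it with Postgres's `now()`), the `max` column aliased to a hypothetical `max_posts` to sidestep the keyword, and a deterministic `ORDER BY` added because the original `LIMIT 10` otherwise returns 10 arbitrary rows:

```python
import sqlite3

# Recreate the question's schema and sample data (date column omitted).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE friends (user_id int, friend_id int, min int, max int);
CREATE TABLE posts   (id int, title text, rating int, user_id int);
INSERT INTO friends VALUES (1,2,1,3), (1,3,0,5), (1,4,2,10);
INSERT INTO posts VALUES
 (1,'posts1',2,2),  (2,'posts2',1,2),  (3,'posts3',5,2),  (4,'posts4',2,2),
 (5,'posts5',11,2), (6,'posts6',7,2),  (7,'posts7',3,2),
 (8,'posts8',4,3),  (9,'posts9',1,3),  (10,'posts10',0,3),(11,'posts11',7,3),
 (12,'posts12',3,3),(13,'posts13',2,3),
 (14,'posts14',4,4),(15,'posts15',9,4),(16,'posts16',0,4),(17,'posts17',3,4),
 (18,'posts18',2,4),(19,'posts19',1,4),(20,'posts20',2,4);
""")

rows = conn.execute("""
SELECT subq.id, subq.user_id
FROM (
  SELECT p.id, p.user_id, f.max AS max_posts,
         RANK() OVER (PARTITION BY p.user_id
                      ORDER BY p.rating DESC, p.id DESC) AS rnk
  FROM posts AS p
  JOIN friends AS f ON p.user_id = f.friend_id
  WHERE f.user_id = 1
) AS subq
WHERE subq.rnk <= subq.max_posts
ORDER BY subq.user_id, subq.rnk
LIMIT 10
""").fetchall()

# Each friend is capped at `max`, but nothing in the query enforces `min`.
print(rows)
```

This confirms the cap: friend 2 contributes 3 posts and friend 3 contributes 5, while the `min` column never influences the result — which is exactly the gap the question asks about.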
> Let's say I want to have 2-5 posts from you everyday, if you post that
> much. If you post only one, it is alright, and I will have the only
> one post.

Your explanation [in the comment](https://stackoverflow.com/questions/30001333/minimum-number-of-rows-per-group/30013039#comment48128840_30001333) still does not add up. Your `min` number would be noise without effect according to this explanation. This is not what you wrote, but this would make sense:

***Given a maximum of display slots for posts (the outer `LIMIT`), I want to get `min` posts from each friend first (if available). If there are free slots after that, fill up with up to `max` posts per friend.***

In the example that would be 1 (`min`) post from friend 2 with top priority and another 2 (`max - min`) posts if more slots are still available. It would be arbitrary which posts make the cut if there are not enough slots for each priority. I went ahead and assumed that the first post from each should be selected first, etc. The rest is still arbitrary, but can be solved easily if you manage to formulate a requirement.

```
SELECT *
FROM   friends f
     , LATERAL (
        SELECT *
             , row_number() OVER (ORDER BY rating DESC NULLS LAST, id DESC) AS rn
        FROM   posts p
        WHERE  user_id = f.friend_id   -- LATERAL reference
        ORDER  BY rating DESC NULLS LAST, date DESC NULLS LAST
        LIMIT  f.max                   -- LATERAL reference
       ) p
WHERE  f.user_id = 1
ORDER  BY (p.rn > f.min)               -- minimum posts from each first
        , p.rn
LIMIT  10;                             -- arbitrary total maximum
```

[**SQL Fiddle.**](http://sqlfiddle.com/#!15/86052/6)

### Notes

* Assuming `friends.user_id` and `posts.id` to be primary keys. Your table definition is lacking there.
* All other columns should be defined `NOT NULL` to make sense.
* Use a `LATERAL` join to select only `max` postings per friend in the subquery:
  + [What is the difference between LATERAL and a subquery in PostgreSQL?](https://stackoverflow.com/questions/28550679/what-is-the-difference-between-lateral-and-a-subquery-in-postgresql/28557803#28557803)
* Use [`row_number()`, not `rank()`](http://www.postgresql.org/docs/current/interactive/functions-window.html) in the subquery. It's a common mistake to confuse the two.
* You mentioned `date` but it did not show in your query. Maybe you really want:

  ```
  , row_number() OVER (ORDER BY rating DESC NULLS LAST
                              , date   DESC NULLS LAST) AS rn
  ```
* `DESC NULLS LAST` only because `rating` and `date` could be NULL:
  + [PostgreSQL sort by datetime asc, null first?](https://stackoverflow.com/questions/9510509/postgresql-sort-by-datetime-asc-null-first/9511492#9511492)
* In Postgres, you can use a simple boolean expression in `ORDER BY`:

  ```
  ORDER BY (p.rn > f.min), p.rn
  ```
  + [SQL select query order by day and month](https://stackoverflow.com/questions/14650705/sql-select-query-order-by-day-and-month/14651597#14651597)
  + [Sorting null values after all others, except special](https://stackoverflow.com/questions/21891803/sorting-null-values-after-all-others-except-special/21892611#21892611)

  That puts `min` posts per friend first. The second item (`p.rn`) gives each friend an equal chance (first post first etc.).
* Don't use `date` as an identifier. It's a reserved word in standard SQL and a basic type name in Postgres.
I think from a neo4j/cypher perspective, this is really what you want to do...

```
match (u:User {id: 1})-[r:FOLLOWS]->(p:Publisher)
with u, p, r
match p-[:PUBLISHED]-(i:Item)
with u, p, r, i
order by i.name
return u.name, p.name, i.name
skip 5 limit 2
```

You would just need to parameterize out the min and max and bind them at run time, which would involve two queries rather than one, but I think this is still an elegant solution.

I have tried to include properties in `skip` and `limit` before, but cypher obviously does not support that (yet). It wants a parameter or an unsigned integer.

```
match (u:User {id: 1})-[r:FOLLOWS]->(p:Publisher)
with u, p, r
match p-[:PUBLISHED]-(i:Item)
with u, p, r, i
order by i.name
return u.name, p.name, i.name
skip {min} limit {max}
```
Minimum number of rows per group
[ "", "sql", "postgresql", "neo4j", "greatest-n-per-group", "sql-limit", "" ]