I have a table like this ``` id name value 1 Ram a 2 John b 3 Ram c 4 Ram d 5 John e ``` I want the output like this ``` name value Ram a,c,d John b,e ``` Is there any way to perform this query? **UPDATE:** Table format: ``` id field1 value field2 1 val1 a null 2 val2 b null 3 val1 c null 4 val2 d null 5 null e val1 5 null f val1 5 null g val2 5 null h val2 ``` Output: ``` field1 field2 value val1 null a,c val2 null b,d null val1 e,f null val2 g,h ``` Is there any way to perform this?
You can use `group_concat`: ``` select name, group_concat(value separator ',') as value from table_name group by name ``` Also, if you want the values to be ordered, you can use `order by` within `group_concat`: ``` select name, group_concat(value order by value) as value from table_name group by name ```
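MySQL isn't needed to try this out: SQLite ships a `group_concat` with the same name and a comma as its default separator, so the accepted query can be sanity-checked with Python's built-in `sqlite3`. A minimal sketch — table and column names mirror the question's sample data; note SQLite doesn't guarantee the concatenation order, so treat each grouped string as unordered:

```python
import sqlite3

# In-memory database mirroring the question's sample table
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, name TEXT, value TEXT)")
con.executemany(
    "INSERT INTO t VALUES (?, ?, ?)",
    [(1, "Ram", "a"), (2, "John", "b"), (3, "Ram", "c"),
     (4, "Ram", "d"), (5, "John", "e")],
)

# group_concat collapses each group's values into one comma-separated string
rows = con.execute(
    "SELECT name, group_concat(value) AS value FROM t GROUP BY name ORDER BY name"
).fetchall()
print(rows)  # e.g. [('John', 'b,e'), ('Ram', 'a,c,d')]
```

The same shape carries over to MySQL, where `GROUP_CONCAT(value ORDER BY value)` additionally pins the order inside each group.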
Use this: ``` SELECT field1, field2, GROUP_CONCAT(value ORDER BY value SEPARATOR ',') AS value FROM table GROUP BY field1, field2; ```
Concatenate fields in MySQL
[ "mysql", "sql", "select", "string-concatenation" ]
In a query like this one: ``` SELECT * FROM `Order` WHERE `CustID` = '1' ``` My results are displayed like so: ``` | CustID| Order | ----------------- | 1 | Order1| | 1 | Order2| | 1 | Order3| ----------------- ``` How do I write a SQL statement to get a result like this one? ``` | CustID| Order | --------------------------------- | 1 | Order1, Order2, Order3| --------------------------------- ``` In MySQL it's possible with `Group_Concat`, but in SQL Server it raises a syntax error.
Use xml path ([see fiddle](http://sqlfiddle.com/#!3/b4274/6/0)) ``` SELECT distinct custid, STUFF((SELECT ',' +[order] FROM table1 where custid = t.custid FOR XML PATH('')), 1, 1, '') FROM table1 t where t.custid = 1 ``` STUFF replaces the first `,` with an empty string, i.e. removes it. You need a distinct otherwise it'll have a match for all orders since the where is on custid. [FOR XML](https://msdn.microsoft.com/en-us/library/ms178107.aspx) [PATH Mode](https://msdn.microsoft.com/en-us/ms189885.aspx) [STUFF](https://msdn.microsoft.com/en-us/library/ms188043.aspx)
You can use [`Stuff`](https://msdn.microsoft.com/en-us/library/ms188043.aspx) function and [`For xml`](https://msdn.microsoft.com/en-us/library/ms178107.aspx) clause like this: ``` SELECT DISTINCT CustId, STUFF(( SELECT ','+ [Order] FROM [Order] T2 WHERE T2.CustId = T1.CustId FOR XML PATH('') ), 1, 1, '') FROM [Order] T1 ``` [fiddle here](http://sqlfiddle.com/#!3/0dbc1/1) **Note:** Using `order` as a table name or a column name is a very, very bad idea. There is a reason why they called reserved words *reserved*. See [this link](https://stackoverflow.com/a/30132058/3094533) for my favorite way to avoid such things.
SQL Server: Select multiple records in one select statement
[ "sql", "sql-server", "select" ]
I have a table where I need to get the `ID`, for a group(based on `ID` and `Name`) with a `COUNT(*) = 3`, for the latest set of timestamps. So for example below, I want to retrieve `ID 2`. As it has 3 rows, and the latest timestamps (even though `ID 3` has latest timestamps overall, it doesn't have a count of 3). But I don't understand how to order by `Date`, as I cannot contain it in the `Group By` clause, as it is not the same: ``` SELECT TOP 1 ID FROM TABLE GROUP BY ID,Name HAVING COUNT(ID) > 2 AND Name = 'ABC' --ORDER BY Date DESC ``` **Sample Data** ``` ID Name Date 1 ABC 2015-05-27 08:00 1 ABC 2015-05-27 09:00 1 ABC 2015-05-27 10:00 2 ABC 2015-05-27 11:00 2 ABC 2015-05-27 12:00 2 ABC 2015-05-27 13:00 3 ABC 2015-05-27 14:00 3 ABC 2015-05-27 15:00 ```
In SQL Server, you need to aggregate the columns that are not in the group by list: ``` SELECT TOP 1 ID FROM TABLE WHERE Name = 'ABC' GROUP BY ID,Name HAVING COUNT(ID) > 2 ORDER BY MAX(Date) DESC ``` The name filter should be applied before the group by for better performance, if you really need it.
You could do it in a nested query. Subquery: ``` SELECT ID from TABLE GROUP BY ID HAVING Count(ID) > 2 ``` That gives you the IDs you want. Put that in another query: ``` SELECT ID, Data FROM Table Where ID in (Subquery) Order by Date DESC; ```
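Both answers boil down to the same idea: filter, group, keep only groups with three or more rows, then order the groups by their newest timestamp. That logic can be checked with Python's built-in `sqlite3`, swapping SQL Server's `TOP 1` for SQLite's `LIMIT 1` (a sketch using the question's sample rows):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, name TEXT, date TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    (1, "ABC", "2015-05-27 08:00"), (1, "ABC", "2015-05-27 09:00"),
    (1, "ABC", "2015-05-27 10:00"), (2, "ABC", "2015-05-27 11:00"),
    (2, "ABC", "2015-05-27 12:00"), (2, "ABC", "2015-05-27 13:00"),
    (3, "ABC", "2015-05-27 14:00"), (3, "ABC", "2015-05-27 15:00"),
])

# Group, keep groups of 3+ rows, order groups by their newest timestamp
result = con.execute(
    """SELECT id FROM t
       WHERE name = 'ABC'
       GROUP BY id, name
       HAVING COUNT(id) > 2
       ORDER BY MAX(date) DESC
       LIMIT 1"""
).fetchone()
print(result)  # (2,)
```

ID 3 is excluded by the `HAVING` clause (only two rows), and ID 2 beats ID 1 on the latest timestamp, matching the expected answer.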
Group BY Having COUNT, but Order on a column not contained in group
[ "sql", "t-sql" ]
I'm working with SQL 2008 R2. We have third-party software that is passing a string to a stored proc. The string is a date in the format of: ``` 2015-05-27 11:59pm ``` I have no access to this formatting and cannot change it. I need to convert this string to a format SQL can use properly in my stored proc. The problem is that, as is, it ignores the hours and minutes part of the date. An example of what I am trying to accomplish: ``` 2015-05-27 11:59pm = 2015-05-27 23:59:00.000 2015-05-27 01:15am = 2015-05-27 01:15:00.000 ``` I've tried: ``` CONVERT(VARCHAR(24),'2015-05-27 11:59pm',121) ``` which converts it to: ``` 2015-05-27 11:59PM ``` I've tried `CAST('2015-05-27 11:59pm' AS DATETIME)`, which converts it to: ``` 2015-05-27 00:00:00.000 ``` Is there a way I can convert the string and keep the hour and minute portion?
This works for me: ``` SELECT CONVERT(datetime, '2015-05-27 11:59pm', 121) ```
This expression: ``` CONVERT(VARCHAR(24), '2015-05-27 11:59pm', 121) ``` is not correct. It takes the date string, converts it to a date/time using internal settings. Then it converts that date/time to a string. Try converting the value to a `datetime` directly: ``` convert(datetime, @param, 121) ``` However, I think it would be better for your stored procedure to just take a date time parameter rather than a string.
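The underlying task (parse `2015-05-27 11:59pm` into a full timestamp) can be sanity-checked outside SQL Server — a sketch in Python, whose `%I`/`%p` format codes handle the 12-hour clock and the am/pm suffix:

```python
from datetime import datetime

# Parse the third-party string: 12-hour clock with an am/pm suffix
raw = "2015-05-27 11:59pm"
dt = datetime.strptime(raw, "%Y-%m-%d %I:%M%p")
print(dt)  # 2015-05-27 23:59:00

# Render it in the zero-millisecond style the question's examples show
print(dt.strftime("%Y-%m-%d %H:%M:%S.000"))  # 2015-05-27 23:59:00.000
```

This confirms the string itself carries enough information; the SQL-side fix is simply converting to `datetime` directly rather than round-tripping through `VARCHAR`.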
Converting string date to ODBC canonical date
[ "sql", "sql-server", "sql-server-2008-r2" ]
I'm running into a scenario I don't know how to solve. Here's an example table: ``` items Table --------------------------------------- id | name | bool | linked_item_id_fk | --------------------------------------- 1 | test1 | f | null | --------------------------------------- 2 | test2 | t | null | --------------------------------------- 3 | test3 | t | 1 | --------------------------------------- 4 | test4 | f | 5 | --------------------------------------- 5 | test5 | f | null | --------------------------------------- ``` I'm trying to select data from the table when the bool is true. I'd also like to include items that are linked. Here's an example of what I'm using ``` SELECT * FROM items WHERE bool = true ``` This would return: > test2 and test3 But what I want to get is: > test1, test2, and test3 In this scenario even though test4 has a linked item, since it is false we don't retrieve test5. But I would like to retrieve test1 since test3 links to it, even though it is false. Can I do this with a single select statement? I'm sorry I couldn't come up with a creative title for this question ;)
This will give you the exact records: ``` SELECT * FROM items WHERE bool = true OR ID IN (Select linked_item_id_fk From items where bool = true) ```
Try this: ``` SELECT * FROM Table WHERE bool = 't' UNION ALL SELECT t2.* FROM Table t1 JOIN Table t2 ON t1.linked_item_id_fk = t2.id WHERE t1.bool = 't' ```
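Either approach can be verified quickly with Python's built-in `sqlite3`. This sketch uses the simpler `OR ... IN` form from the first answer against the question's sample rows (the boolean is stored as `'t'`/`'f'` text, matching the second answer's assumption):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE items (id INTEGER, name TEXT, bool TEXT, linked_item_id_fk INTEGER)"
)
con.executemany(
    "INSERT INTO items VALUES (?, ?, ?, ?)",
    [(1, "test1", "f", None), (2, "test2", "t", None),
     (3, "test3", "t", 1), (4, "test4", "f", 5), (5, "test5", "f", None)],
)

# True rows, plus any row that a true row links to
names = [r[0] for r in con.execute(
    """SELECT name FROM items
       WHERE bool = 't'
          OR id IN (SELECT linked_item_id_fk FROM items WHERE bool = 't')
       ORDER BY id"""
)]
print(names)  # ['test1', 'test2', 'test3']
```

test5 stays out because its only incoming link comes from test4, which is false — exactly the behavior the question asks for.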
SQL select statement conditional join
[ "sql", "select" ]
I have a table like this and I want to return concatenated strings where the column values are in ('01', '02', '03', '04', '99'). Plus the values will be delimited by a ';'. So row 1 will be 01;04, row 3 will be 01;02;03;04 and row 5 will simply be 01. All leading/trailing ; should be removed. What script would do this successfully? ``` R_NOT_CUR R_NOT_CUR_2 R_NOT_CUR_3 R_NOT_CUR_4 01 NULL 04 NULL 98 56 45 22 01 02 03 04 NULL NULL NULL NULL 01 NULL NULL NULL ```
You can accomplish this using [`COALESCE`](https://msdn.microsoft.com/en-us/library/ms190349.aspx) / [`ISNULL`](https://msdn.microsoft.com/en-us/library/ms184325.aspx) and [`STUFF`](https://msdn.microsoft.com/en-us/library/ms188043.aspx). Something like this. ``` SELECT STUFF( COALESCE(';'+R_NOT_CUR,'') + COALESCE(';'+R_NOT_CUR_2,'') + COALESCE(';'+R_NOT_CUR_3,'') + COALESCE(';'+R_NOT_CUR_4,''),1,1,'') FROM YourTable ``` Stuff will remove the first occurrence of `;`
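SQLite has no `STUFF`, but the same prefix-then-trim pattern can be checked with `substr(..., 2)` standing in for `STUFF(..., 1, 1, '')` — a sketch using Python's built-in `sqlite3`. Column names are shortened to `c1`..`c4` here, and the `IN ('01', '02', ...)` value filtering from the question is omitted, as in the answer above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (c1 TEXT, c2 TEXT, c3 TEXT, c4 TEXT)")
con.executemany(
    "INSERT INTO t VALUES (?, ?, ?, ?)",
    [("01", None, "04", None), ("01", "02", "03", "04"), ("01", None, None, None)],
)

# Prefix each non-NULL column with ';' (';' || NULL is NULL, which COALESCE
# turns into ''), then drop the leading ';' with substr(x, 2)
rows = [r[0] for r in con.execute(
    """SELECT substr(
           coalesce(';' || c1, '') || coalesce(';' || c2, '') ||
           coalesce(';' || c3, '') || coalesce(';' || c4, ''), 2)
       FROM t"""
)]
print(rows)  # ['01;04', '01;02;03;04', '01']
```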
It's not recommended to store integer values in strings but here this should work. Try it out and let me know: ``` DECLARE @yourTable TABLE (R_NOT_CUR VARCHAR(10),R_NOT_CUR_2 VARCHAR(10),R_NOT_CUR_3 VARCHAR(10),R_NOT_CUR_4 VARCHAR(10)); INSERT INTO @yourTable VALUES ('01',NULL,'04',NULL), ('98','56','45','22'), ('01','02','03','04'), (NULL,NULL,NULL,NULL), ('01',NULL,NULL,NULL); WITH CTE_row_id AS ( SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) row_id, --identifies each row R_NOT_CUR, R_NOT_CUR_2, R_NOT_CUR_3, R_NOT_CUR_4 FROM @yourTable ), CTE_unpivot --puts all values in one column so your can apply your where logic AS ( SELECT * FROM CTE_row_id UNPIVOT ( val FOR col IN (R_NOT_CUR,R_NOT_CUR_2,R_NOT_CUR_3,R_NOT_CUR_4) ) unpvt WHERE val IN ('01','02','03','04','99') ) SELECT STUFF( COALESCE(';'+R_NOT_CUR,'') + COALESCE(';'+R_NOT_CUR_2,'') + COALESCE(';'+R_NOT_CUR_3,'') + COALESCE(';'+R_NOT_CUR_4,'') ,1,1,'') AS concat_columns FROM CTE_unpivot PIVOT ( MAX(val) FOR col IN(R_NOT_CUR,R_NOT_CUR_2,R_NOT_CUR_3,R_NOT_CUR_4) ) pvt ``` Results: ``` concat_columns -------------------------------------------- 01;04 01;02;03;04 01 ```
Script to concatenate columns and remove leading/trailing delimiter
[ "sql", "t-sql", "sql-server-2005", "relational-database" ]
I have a table ``` timestamp ip score 1432632348 1.2.3.4 9 1432632434 5.6.7.8 8 1432632447 1.2.3.4 9 1432632456 1.2.3.4 8 1432632460 5.6.7.8 8 1432632464 1.2.3.4 9 ``` The timestamps are consecutive, but don't have any frequency. I want to count, per IP, the number of times the score changed. so in the example the result would be: ``` ip count 1.2.3.4 3 5.6.7.8 1 ``` How can I do that? (note: count distinct does not work: 1.2.3.4 changed 3 times but had 2 distinct scores)
``` select ip, sum(case when score <> (select t2.score from table t2 where t2.timestamp = (select max(timestamp) from table where ip = t2.ip and timestamp < t1.timestamp) and t1.ip = t2.ip) then 1 else 0 end) from table t1 group by ip ```
Although this requirement is not common, it is not rare either. Basically, you need to determine when there is a change in the data column. The data is Relational, therefore the solution is Relational. No Cursors or CTEs or `ROW_NUMBER()s` or temp tables or `GROUP BYs` or scripts or triggers are required. `DISTINCT` will not work. The solution is straightforward. But you have to keep your Relational hat on. ``` SELECT COUNT( timestamp ) FROM ( SELECT timestamp, ip, score, [score_next] = ( SELECT TOP 1 score -- NULL if not exists FROM MyTable WHERE ip = MT.ip AND timestamp > MT.timestamp ORDER BY timestamp ) FROM MyTable MT ) AS X WHERE score != score_next -- exclude unchanging rows AND score_next IS NOT NULL ``` I note that for the data you have given, the output should be: ``` ip count 1.2.3.4 2 5.6.7.8 0 ``` * if you have been counting the last score per ip, which hasn't changed yet, then your figures will be "out-by-1". To obtain your counts, delete that last line of code. * if you have been counting a stated `0` as a starting value, add `1` to the `COUNT()`. If you are interested in more discussion of this not-uncommon problem, I have given a full treatment in [**this Answer**](https://stackoverflow.com/a/30460263/484814).
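The change-counting logic is easy to verify with Python's built-in `sqlite3`, using a correlated subquery to fetch each row's next score per ip (a sketch with the question's sample rows; when no next row exists, the comparison falls to the `ELSE` branch, so the last row per ip never counts as a change):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (ts INTEGER, ip TEXT, score INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    (1432632348, "1.2.3.4", 9), (1432632434, "5.6.7.8", 8),
    (1432632447, "1.2.3.4", 9), (1432632456, "1.2.3.4", 8),
    (1432632460, "5.6.7.8", 8), (1432632464, "1.2.3.4", 9),
])

# For each row, fetch that ip's next score; count rows where it differs
rows = con.execute(
    """SELECT ip,
              SUM(CASE WHEN score <> next_score THEN 1 ELSE 0 END)
       FROM (SELECT ip, score,
                    (SELECT score FROM t t2
                     WHERE t2.ip = t1.ip AND t2.ts > t1.ts
                     ORDER BY t2.ts LIMIT 1) AS next_score
             FROM t t1)
       GROUP BY ip ORDER BY ip"""
).fetchall()
print(rows)  # [('1.2.3.4', 2), ('5.6.7.8', 0)]
```

For 1.2.3.4 the score sequence is 9, 9, 8, 9 — two transitions — which matches the corrected figures in the answer above rather than the question's 3/1.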
count changes based on timestamp
[ "sql" ]
After many changes to my stored procedure, I think it needs to re-factoring , mainly because of code duplication. How to overcome these duplications: ``` IF @transExist > 0 BEGIN IF @transType = 1 BEGIN --INSERT SELECT a.dayDate, a.shiftName, a.limit, b.startTimeBefore, b.endTimeBefore, b.dayAdd, b.name, b.overtimeHours, c.startTime, c.endTime INTO #Residence1 FROM #ShiftTrans a RIGHT OUTER JOIN #ResidenceOvertime b ON a.dayDate = b.dayDate INNER JOIN ShiftDetails c ON c.shiftId = a.shiftId AND c.shiftTypeId = b.shiftTypeId; SET @is_trans = 1; END ELSE BEGIN RETURN ; END END ELSE BEGIN IF @employeeExist > 0 BEGIN SELECT a.dayDate, a.shiftName, a.limit, b.startTimeBefore, b.endTimeBefore, b.dayAdd, b.name, b.overtimeHours, c.startTime, c.endTime INTO #Residence2 FROM #ShiftEmployees a RIGHT OUTER JOIN #ResidenceOvertime b ON a.dayDate = b.dayDate INNER JOIN ShiftDetails c ON c.shiftId = a.shiftId AND c.shiftTypeId = b.shiftTypeId; SET @is_trans = 0; END ELSE BEGIN RETURN; END END; IF @is_trans = 1 BEGIN WITH CTE_Residence_Overtime_trans AS ( SELECT * FROM #Residence1 ) UPDATE t1 SET t1.over_time = t1.over_time + CAST(RIGHT('0'+ CAST(overtimeHours as varchar(2)),2) +':00:00' As Time) + CAST(RIGHT('0'+ CAST(@total_min as varchar(2)),2) +':00:00' As Time), t1.day_flag = t1.day_flag + 'R1', t1.day_desc = 'R::' + CTE_Residence_Overtime_trans.shiftName +'[ ' + CTE_Residence_Overtime_trans.name +' ]' FROM rr_overtime AS t1 INNER JOIN CTE_Residence_Overtime_trans ON t1.[trans_date] = CTE_Residence_Overtime_trans.[dayDate] WHERE t1.emp_num = @empNum; UPDATE rr_overtime SET over_time = CAST(RIGHT('0'+ CAST(0 as varchar(2)),2)+':00:00' As Time), day_flag = day_flag +'R2' WHERE trans_date = @TomorrowDate AND emp_num = @empNum; END ELSE BEGIN WITH CTE_Residence_Overtime AS ( SELECT * FROM #Residence2 ) UPDATE t1 SET t1.over_time = CAST(RIGHT('0'+ CAST(overtimeHours as varchar(2)), 2) +':00:00' As Time) + CAST(RIGHT('0'+ CAST(@total_min as varchar(2)),2)+':00:00' As Time), 
t1.day_flag = t1.day_flag + 'R1', t1.day_desc = 'R::' + CTE_Residence_Overtime.shiftName +'[ ' + CTE_Residence_Overtime.name +' ]' FROM rr_overtime AS t1 INNER JOIN CTE_Residence_Overtime ON t1.[trans_date] = CTE_Residence_Overtime.[dayDate] WHERE t1.emp_num = @empNum ; UPDATE rr_overtime SET over_time = CAST(RIGHT('0'+ CAST(0 as varchar(2)),2)+':00:00' As Time), day_flag = day_flag +'R2' WHERE trans_date = @TomorrowDate AND emp_num = @empNum; END ```
You could try the following to avoid the duplication. Basically create your temporary table first (I've guessed the datatypes), then use only this table for the insert since you're using if's it's going to be one or the other that inserts to this. Then you don't need CTE's if you're doing Select \* from table so just call direct from the table. Since the table will only have data from one select or the other and the field names are the same etc we can just use one update for this and don't need anymore if's: ``` Create table #Residence (dayDate varchar(9), shiftName varchar(20), limit int, startTimeBefore time, endTimeBefore time, dayAdd int, name varchar(30), overtimeHours int, startTime time, endTime time) IF @transExist > 0 BEGIN IF @transType = 1 --INSERT BEGIN Insert into #Residence SELECT a.dayDate,a.shiftName,a.limit,b.startTimeBefore,b.endTimeBefore,b.dayAdd,b.name,b.overtimeHours,c.startTime,c.endTime FROM #ShiftTrans a RIGHT OUTER JOIN #ResidenceOvertime b ON a.dayDate = b.dayDate INNER JOIN ShiftDetails c ON c.shiftId = a.shiftId AND c.shiftTypeId = b.shiftTypeId; END ELSE BEGIN RETURN ; END END ELSE BEGIN IF @employeeExist > 0 BEGIN Insert into #Residence SELECT a.dayDate,a.shiftName,a.limit,b.startTimeBefore,b.endTimeBefore,b.dayAdd,b.name,b.overtimeHours,c.startTime,c.endTime FROM #ShiftEmployees a RIGHT OUTER JOIN #ResidenceOvertime b ON a.dayDate = b.dayDate INNER JOIN ShiftDetails c ON c.shiftId = a.shiftId AND c.shiftTypeId = b.shiftTypeId; END ELSE BEGIN RETURN ; END END; UPDATE t1 SET t1.over_time = t1.over_time + CAST(RIGHT('0'+ CAST(overtimeHours as varchar(2)), 2)+':00:00' As Time) + CAST(RIGHT('0'+ CAST(@total_min as varchar(2)), 2)+':00:00' As Time), t1.day_flag = t1.day_flag + 'R1', t1.day_desc = 'R::' +R.shiftName +'[ '+ R.name +' ]' FROM rr_overtime AS t1 INNER JOIN #Residence R ON t1.[trans_date] = R.[dayDate] WHERE t1.emp_num = @empNum ; UPDATE rr_overtime SET over_time = CAST(RIGHT('0'+ CAST(0 as varchar(2)), 2)+':00:00' As Time), 
day_flag = day_flag +'R2' WHERE trans_date = @TomorrowDate AND emp_num = @empNum; ```
Looking at the code, it looks like this should work: ``` WITH CTE_Residence_Overtime_trans AS ( SELECT a.dayDate, a.shiftName, a.limit, b.startTimeBefore, b.endTimeBefore, b.dayAdd, b.name, b.overtimeHours, c.startTime, c.endTime FROM ( select dayDate, shiftName, limit from #ShiftTrans where (@transExist > 0 and @transType = 1) union all select dayDate, shiftName, limit from #ShiftEmployees where (not (@transExist>0 and @transType=1)) and @employeeExist>0 ) a JOIN #ResidenceOvertime b ON a.dayDate = b.dayDate JOIN ShiftDetails c ON c.shiftId = a.shiftId AND c.shiftTypeId = b.shiftTypeId ) UPDATE t1 SET t1.over_time = t1.over_time + CAST(CAST(overtimeHours as varchar(2))+':00:00' As Time) + CAST(CAST(@total_min as varchar(2))+':00:00' As Time), t1.day_flag = t1.day_flag + 'R1', t1.day_desc = 'R::' + CTE.shiftName +'[ ' + CTE.name +' ]' FROM rr_overtime AS t1 INNER JOIN CTE_Residence_Overtime_trans CTE ON t1.[trans_date] = CTE.[dayDate] WHERE t1.emp_num = @empNum; UPDATE rr_overtime SET over_time = CAST('00:00:00' As Time), day_flag = day_flag +'R2' WHERE trans_date = @TomorrowDate AND emp_num = @empNum; ``` This makes an union all select to both of the temp. tables, but only fetches data from the correct one based on the variables, and uses that as the CTE for the update. I also removed the outer join because the table was also involved in an inner join. Although this can shorten the code, it is not always the best way to do things, because it might cause more complex query plan to be used causing performance issues. I also removed the right(2,...) functions from time conversion, since time conversion works without leading zero too, and the last one was just fixed 00:00:00.
Stored procedure: reduce code duplication using temp tables
[ "sql", "sql-server", "stored-procedures", "refactoring", "temp-tables" ]
I have a scenario (table) like this: ![enter image description here](https://i.stack.imgur.com/oeqnf.png) This is the table (Folder) structure. **I have only records for user\_id = 1 in this table**. Now I need to insert the same folder structure for another user. Sorry, I've updated the question... yes, folder\_id is an identity column (but folder\_id values can be mixed up for a specific userID). I don't know how many child folders can exist. Folder\_Names are unique per user, and folder structures are not the same for all users. Suppose user3 needs the same folder structure as user1, and user4 needs the same folder structure as user2. I'll be provided only the source UserID and destination UserID (assume the destination userID doesn't have any folder structure). How can I achieve this?
You can do the following: ``` SET IDENTITY_INSERT dbo.Folder ON go declare @maxFolderID int select @maxFolderID = max(Folder_ID) from Folder insert into Folder select @maxFolderID + Folder_ID, @maxFolderID + Parent_Folder_ID, Folder_Name, 2 from Folder where User_ID = 1 SET IDENTITY_INSERT dbo.Folder OFF go ``` **EDIT:** ``` SET IDENTITY_INSERT dbo.Folder ON GO ; WITH m AS ( SELECT MAX(Folder_ID) AS mid FROM Folder ), r AS ( SELECT * , ROW_NUMBER() OVER ( ORDER BY Folder_ID ) + m.mid AS rn FROM Folder CROSS JOIN m WHERE User_ID = 1 ) INSERT INTO Folder SELECT r1.rn , r2.rn , r1.Folder_Name , 2 FROM r r1 LEFT JOIN r r2 ON r2.Folder_ID = r1.Parent_Folder_ID SET IDENTITY_INSERT dbo.Folder OFF GO ```
This is as close to set-based as I can make it. The issue is that we cannot know what new identity values will be assigned until the rows are actually in the table. As such, there's no way to insert all rows in one go, with correct parent values. I'm using `MERGE` below so that I can access both the source and `inserted` tables in the `OUTPUT` clause, which isn't allowed for `INSERT` statements: ``` declare @FromUserID int declare @ToUserID int declare @ToCopy table (OldParentID int,NewParentID int) declare @ToCopy2 table (OldParentID int,NewParentID int) select @FromUserID = 1,@ToUserID = 2 merge into T1 t using (select Folder_ID,Parent_Folder_ID,Folder_Name from T1 where User_ID = @FromUserID and Parent_Folder_ID is null) s on 1 = 0 when not matched then insert (Parent_Folder_ID,Folder_Name,User_ID) values (NULL,s.Folder_Name,@ToUserID) output s.Folder_ID,inserted.Folder_ID into @ToCopy (OldParentID,NewParentID); while exists (select * from @ToCopy) begin merge into T1 t using (select Folder_ID,p2.NewParentID,Folder_Name from T1 inner join @ToCopy p2 on p2.OldParentID = T1.Parent_Folder_ID) s on 1 = 0 when not matched then insert (Parent_Folder_ID,Folder_Name,User_ID) values (NewParentID,Folder_Name,@ToUserID) output s.Folder_ID,inserted.Folder_ID into @ToCopy2 (OldParentID,NewParentID); --This would be much simpler if you could assign table variables, -- @ToCopy = @ToCopy2 -- @ToCopy2 = null delete from @ToCopy; insert into @ToCopy(OldParentID,NewParentID) select OldParentID,NewParentID from @ToCopy2; delete from @ToCopy2; end ``` (I've also written this on the assumption that we don't ever want to have rows in the table with wrong or missing parent values) --- In case the logic isn't clear - we first find rows for the old user which have no parent - these we can clearly copy for the new user immediately. On the basis of this insert, we track what new identity values have been assigned against which old identity value. 
We then continue to use this information to identify the next set of rows to copy (in `@ToCopy`) - as the rows whose parents were just copied are the next set eligible to copy. We loop around until we produce an empty set, meaning all rows have been copied. This doesn't cope with parent/child cycles, but hopefully you do not have any of those.
Recursive in SQL Server 2008
[ "sql", "sql-server", "sql-server-2008", "stored-procedures", "sql-function" ]
I have created one SSRS report having begin date and end date. If I provide end date< start date it will execute the report as shown in the image ![enter image description here](https://i.stack.imgur.com/EZhQZ.png) But that condition I need to show a pop up "Please Check the start date and end date provided". How to do this?
Click Report Menu then Report Properties. Go to Code Tab and add similar code as per your requirement: ``` Function CheckDateParameters(StartDate as Date, EndDate as Date) as Integer Dim msg as String msg = "" If (StartDate > EndDate) Then msg="Start Date should not be later than End Date" End If If msg <> "" Then MsgBox(msg, 16, "Report Validation") Err.Raise(6,Report) 'Raise an overflow End If End Function ``` And **Follow the Steps:** 1.) Go the Report Parameters and add a parameter with the datatype is string. 2.) Check the Hidden checkbox and Allow blank value ckeckbox. 3.) From Default Values choose Non-Queried radio button and then press the FX button and paste this code. ``` =CODE.CheckDateParameters(<parameterStartdate>.Value,<parameterEnddate>.Value) ``` Then press OK. See reference Link: [Easy Step by Step SSRS Parameter Validation Using Code & Conditional DataSet](https://gugiaji.wordpress.com/2012/03/26/easy-step-by-step-ssrs-parameter-validation-using-code/)
I'm answering this to chip in another possible solution when working with SQL/Server. If you just want to throw an error then simply amend your query SQL to raise an error on the SQL/Server side by adding something like this to the top of your SQL... ``` IF @ParEndDate < @ParStartDate BEGIN RAISERROR('Please check the start date and end date provided', 16, 1); RETURN; END; ``` The query won't run and the error message will be displayed in the report body.
Report Parameter validation in ssrs report
[ "sql", "sql-server", "sql-server-2008", "reporting-services", "parameters" ]
I'm trying to check if a database exists in the Microsoft SQL Server, what's the simplest way to do that? I just want it to return a true or false value then I would create a database if it doesn't exist. Any help would be appreciated, thanks.
Connect to a system db (master, msdb, tempdb or model) - because you can be sure that they exist! Then you can select the list of databases like this: ``` select * from sys.databases ``` or if you want to know whether a specific db exists: ``` select * from sys.databases where name = 'NameOfYourDb' ``` If you connect without a database name in your connection string (depending on which provider you are using), you should automatically be connected to your default database (which is "master" by default).
Try the one below: ``` Declare @Dbname varchar(100) SET @Dbname = 'MASTER' if exists(select * from sys.databases where name = @Dbname) select 'true' else select 'false' ``` This is specifically for SQL Server.
What's the simplest way to check if database exists in MSSQL using VB.NET?
[ "sql", "sql-server", "database", "vb.net" ]
I'm newish to this and using Oracle SQL. I have the following tables: Table 1 **CaseDetail** ``` CaseNumber | CaseType 1 | 'RelevantToThisQuestion' 2 | 'RelevantToThisQuestion' 3 | 'RelevantToThisQuestion' 4 | 'NotRelevantToThisQuestion' ``` Table 2 **LinkedPeople** ``` CaseNumber | RelationshipType | LinkedPerson 1 | 'Owner' | 123 1 | 'Agent' | 124 1 | 'Contact' | 125 2 | 'Owner' | 126 2 | 'Agent' | 127 2 | 'Contact' | 128 3 | 'Owner' | 129 3 | 'Agent' | 130 3 | 'Contact' | 131 ``` Table 3 **Location** ``` LinkedPerson| Country 123 | 'AU' 124 | 'UK' 125 | 'UK' 126 | 'US' 127 | 'US' 128 | 'UK' 129 | 'UK' 130 | 'AU' 131 | 'UK' ``` I want to count CaseNumbers that are relevant to this question with no LinkedPeople in 'AU'. So the results from the above data would be 1 I've been trying to combine aggregate functions and subqueries but I think I might be over-complicating things. Just need a push in the right direction, thanks!
To get all the records: ``` SELECT COUNT(DISTINCT CaseNumber) FROM LinkedPeople WHERE CaseNumber NOT IN ( SELECT DISTINCT C.CaseNumber FROM CaseDetail C INNER JOIN LinkedPeople P ON C.CaseNumber = P.CaseNumber INNER JOIN Location L ON P.LinkedPerson = L.LinkedPerson WHERE Country = 'AU' AND C.CaseType = 'RelevantToThisQuestion' ) ```
[SQL Fiddle](http://sqlfiddle.com/#!4/87bca4/4) **Oracle 11g R2 Schema Setup**: ``` CREATE TABLE CASEDETAIL ( CaseNumber, CaseType ) AS SELECT 1, 'RelevantToThisQuestion' FROM DUAL UNION ALL SELECT 2, 'RelevantToThisQuestion' FROM DUAL UNION ALL SELECT 3, 'RelevantToThisQuestion' FROM DUAL UNION ALL SELECT 4, 'NotRelevantToThisQuestion' FROM DUAL; CREATE TABLE LINKEDPEOPLE ( CaseNumber, RelationshipType, LinkedPerson ) AS SELECT 1, 'Owner', 123 FROM DUAL UNION ALL SELECT 1, 'Agent', 124 FROM DUAL UNION ALL SELECT 1, 'Contact', 125 FROM DUAL UNION ALL SELECT 2, 'Owner', 126 FROM DUAL UNION ALL SELECT 2, 'Agent', 127 FROM DUAL UNION ALL SELECT 2, 'Contact', 128 FROM DUAL UNION ALL SELECT 3, 'Owner', 129 FROM DUAL UNION ALL SELECT 3, 'Agent', 130 FROM DUAL UNION ALL SELECT 3, 'Contact', 131 FROM DUAL; CREATE TABLE LOCATION ( LinkedPerson, Country ) AS SELECT 123, 'AU' FROM DUAL UNION ALL SELECT 124, 'UK' FROM DUAL UNION ALL SELECT 125, 'UK' FROM DUAL UNION ALL SELECT 126, 'US' FROM DUAL UNION ALL SELECT 127, 'US' FROM DUAL UNION ALL SELECT 128, 'UK' FROM DUAL UNION ALL SELECT 129, 'UK' FROM DUAL UNION ALL SELECT 130, 'AU' FROM DUAL UNION ALL SELECT 131, 'UK' FROM DUAL; ``` **Query 1**: ``` SELECT COUNT( DISTINCT CASENUMBER ) AS Num_Relevant_Cases FROM CASEDETAIL c WHERE CaseType = 'RelevantToThisQuestion' AND NOT EXISTS ( SELECT 1 FROM LINKEDPEOPLE p INNER JOIN LOCATION l ON ( p.LinkedPerson = l.LinkedPerson ) WHERE c.CaseNumber = p.CaseNumber AND l.Country = 'AU' ) ``` **[Results](http://sqlfiddle.com/#!4/87bca4/4/0)**: ``` | NUM_RELEVANT_CASES | |--------------------| | 1 | ```
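The `NOT EXISTS` approach can be replayed outside Oracle — SQLite accepts essentially the same query, so here is a quick check with Python's built-in `sqlite3` (schema and rows copied from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE CaseDetail (CaseNumber INTEGER, CaseType TEXT);
CREATE TABLE LinkedPeople (CaseNumber INTEGER, RelationshipType TEXT, LinkedPerson INTEGER);
CREATE TABLE Location (LinkedPerson INTEGER, Country TEXT);
INSERT INTO CaseDetail VALUES (1,'RelevantToThisQuestion'),(2,'RelevantToThisQuestion'),
  (3,'RelevantToThisQuestion'),(4,'NotRelevantToThisQuestion');
INSERT INTO LinkedPeople VALUES (1,'Owner',123),(1,'Agent',124),(1,'Contact',125),
  (2,'Owner',126),(2,'Agent',127),(2,'Contact',128),
  (3,'Owner',129),(3,'Agent',130),(3,'Contact',131);
INSERT INTO Location VALUES (123,'AU'),(124,'UK'),(125,'UK'),(126,'US'),(127,'US'),
  (128,'UK'),(129,'UK'),(130,'AU'),(131,'UK');
""")

# Relevant cases with no linked person located in 'AU'
count = con.execute(
    """SELECT COUNT(DISTINCT c.CaseNumber)
       FROM CaseDetail c
       WHERE c.CaseType = 'RelevantToThisQuestion'
         AND NOT EXISTS (
             SELECT 1 FROM LinkedPeople p
             JOIN Location l ON l.LinkedPerson = p.LinkedPerson
             WHERE p.CaseNumber = c.CaseNumber AND l.Country = 'AU')"""
).fetchone()[0]
print(count)  # 1
```

Cases 1 and 3 each have an AU-located person, so only case 2 survives — the expected answer of 1.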
Oracle SQL - Not in subquery
[ "sql", "oracle" ]
I have a table that looks something like this: ``` |date_start | date_end |amount | +------------+-------------+-------+ |2015-02-23 | 2015-03-01 |50 | |2015-03-02 | 2015-03-08 |50 | |2015-03-09 | 2015-03-15 |100 | |2015-03-16 | 2015-03-22 |800 | |2015-03-23 | 2015-03-29 |50 | ``` and I'd like to work out the percent increase/decrease for column `amount`, from the previous date. For example the result would be something like this, ``` |date_start | date_end |amount | perc_change | +------------+-------------+-------+-------------+ |2015-02-23 | 2015-03-01 |50 | |2015-03-02 | 2015-03-08 |50 | 0 |2015-03-09 | 2015-03-15 |100 | 50 |2015-03-16 | 2015-03-22 |800 | 700 |2015-03-23 | 2015-03-29 |50 | -750 ``` I've searched and racked my brain for a couple of days now. Usually, I simply do this using server side code but now I need to contain it all within the query.
Try this: ``` SELECT t.*, amount - (SELECT amount FROM transactions prev WHERE prev.date_end < t.date_start ORDER BY date_start DESC LIMIT 1) AS changes FROM transactions t ```
If we assume that the previous row always ends exactly one day before the current begins (as in your sample data), then you can use a `join`. The percentage increase would be: ``` select t.*, 100 * (t.amount - tprev.amount) / tprev.amount from atable t left join atable tprev on tprev.date_end = t.date_start - interval 1 day; ``` However, your results seem to just have the difference, which is easier to calculate: ``` select t.*, (t.amount - tprev.amount) as diff from atable t left join atable tprev on tprev.date_end = t.date_start - interval 1 day; ```
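The join-on-adjacent-dates idea is easy to test with Python's built-in `sqlite3`, where `date(t.date_start, '-1 day')` plays the role of MySQL's `t.date_start - interval 1 day` (a sketch computing the plain difference, as in the question's expected output):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (date_start TEXT, date_end TEXT, amount INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    ("2015-02-23", "2015-03-01", 50), ("2015-03-02", "2015-03-08", 50),
    ("2015-03-09", "2015-03-15", 100), ("2015-03-16", "2015-03-22", 800),
    ("2015-03-23", "2015-03-29", 50),
])

# Join each row to the one whose period ends the day before this one starts
rows = con.execute(
    """SELECT t.date_start, t.amount, t.amount - prev.amount AS diff
       FROM t
       LEFT JOIN t prev ON prev.date_end = date(t.date_start, '-1 day')
       ORDER BY t.date_start"""
).fetchall()
print([r[2] for r in rows])  # [None, 0, 50, 700, -750]
```

The first row has no predecessor, so its difference is NULL (`None`); the rest reproduce the 0, 50, 700, -750 column from the question.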
Calculate percent increase/decrease from previous row value
[ "mysql", "sql" ]
I have an address field that is a single line that looks like this: ``` Dr Robert Ruberry, West End Medical Practice, 38 Russell Street, South Brisbane 4101 ``` I want to write a view that will split that address into Name, Addr1, Addr2, Suburb, and Postcode fields for reporting purposes. I have been trying to use SUBSTRING and CHARINDEX like this, but it doesn't seem to split it correctly. ``` SUBSTRING([address_Field],CHARINDEX(',',[address_Field]),CHARINDEX(',',[address_Field])) ``` Can anyone help? TIA
may be this works for your requirement ``` IF OBJECT_ID('tempdb..#test') IS NOT NULL DROP TABLE #test CREATE TABLE #test(id int, data varchar(100)) INSERT INTO #test VALUES (1,'Dr Robert Ruberry, West End Medical Practice, 38 Russell Street, South Brisbane 4101') DECLARE @pivot varchar(8000) DECLARE @select varchar(8000) SELECT @pivot=coalesce(@pivot+',','')+'[col'+cast(number+1 as varchar(10))+']' FROM master..spt_values where type='p' and number<=(SELECT max(len(data)-len(replace(data,',',''))) FROM #test) SELECT @select=' select p.col1 As Name,p.col2 as Addr1,p.col3 as Addr3,p.col4 as Postcode from ( select id,substring(data, start+2, endPos-Start-2) as token, ''col''+cast(row_number() over(partition by id order by start) as varchar(10)) as n from ( select id, data, n as start, charindex('','',data,n+2) endPos from (select number as n from master..spt_values where type=''p'') num cross join ( select id, '','' + data +'','' as data from #test ) m where n < len(data)-1 and substring(data,n+1,1) = '','') as data ) pvt Pivot ( max(token)for n in ('+@pivot+'))p' EXEC(@select) ```
Here's a couple of options for you. If you're just looking for a quick answer, see this similar question that's already been answered: [T-SQL split string based on delimiter](https://stackoverflow.com/questions/21768321/t-sql-split-string-based-on-delimiter) If you want some more in depth knowledge of the various options, check this out: <http://sqlperformance.com/2012/07/t-sql-queries/split-strings>
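Since the delimiter arithmetic is easy to get wrong in nested SUBSTRING/CHARINDEX calls, it can help to prototype the split outside SQL first. A sketch in Python using the question's sample address — the five target fields are the question's, but the mapping from comma-separated parts to fields (and peeling the postcode off the last part) is an assumption about the data's shape:

```python
addr = "Dr Robert Ruberry, West End Medical Practice, 38 Russell Street, South Brisbane 4101"

# Split on commas, then peel the postcode off the final part
parts = [p.strip() for p in addr.split(",")]
name, addr1, addr2 = parts[0], parts[1], parts[2]
suburb, postcode = parts[3].rsplit(" ", 1)
print(name, "|", suburb, "|", postcode)  # Dr Robert Ruberry | South Brisbane | 4101
```

Once the part boundaries are agreed on, each field translates to one CHARINDEX-delimited SUBSTRING in the view, using the split-string techniques linked above.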
Address String to Address Fields VIEW or SELECT
[ "sql", "sql-server", "t-sql", "street-address" ]
``` SELECT * FROM MyTable WHERE MyRow IN ('100','200','300') ``` Trying to do the above by declaring a local variable like this: ``` DECLARE @What VARCHAR(MAX) SET @What = '100','200','300' SELECT * FROM MyTable WHERE MyRow IN (@What) ``` Is there any way to make this work? I have tried this: ``` SET @What = "'100','200','300'" ``` and this: ``` SET @What = ('100','200','300') ``` The first one seems the most logical, as something like it would work in most languages other than SQL. The length of `@What` will vary, so I cannot just have one variable for each value. How do I declare a local string variable to contain a list of strings?
Here's one way to do it, with a table variable:

```
DECLARE @What TABLE(txt VARCHAR(MAX))
INSERT INTO @What (txt) VALUES ('100'),('200'),('300')

SELECT *
FROM MyTable
WHERE MyRow IN (SELECT txt FROM @What)
```

Here's [a sqlfiddle](http://sqlfiddle.com/#!6/ce029c/1) to demonstrate the above.
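The pitfall behind this question exists outside T-SQL too: a single string is one value, not a list. As a rough sketch (hypothetical table and values), here is the same idea with Python's stdlib `sqlite3`, building a parameterized `IN` list from the actual items rather than one comma-joined string:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE MyTable (MyRow TEXT)")
con.executemany("INSERT INTO MyTable VALUES (?)", [("100",), ("200",), ("999",)])

wanted = ["100", "200", "300"]
placeholders = ",".join("?" * len(wanted))  # builds "?,?,?"
rows = con.execute(
    f"SELECT MyRow FROM MyTable WHERE MyRow IN ({placeholders})", wanted
).fetchall()

# A single comma-joined string is ONE value: IN ('100,200,300') matches nothing
none_rows = con.execute(
    "SELECT MyRow FROM MyTable WHERE MyRow IN (?)", [",".join(wanted)]
).fetchall()
```

The second query illustrates exactly why `IN (@What)` fails when `@What` holds the whole list as one string.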
I think it would be better to pass your list as a comma-separated string and then convert it into a table. Here's a working example:

```
DECLARE @What VARCHAR(MAX) = '100,200,300';

DECLARE @XmlData AS XML = CAST(('<X>' + REPLACE(@What, ',', '</X><X>') + '</X>') AS XML);

DECLARE @Test TABLE (What INT);

INSERT INTO @Test
SELECT N.value('.', 'INT')
FROM @XmlData.nodes('X') AS T(N);

SELECT *
FROM MyTable AS M
WHERE EXISTS
(
    SELECT 1
    FROM @Test AS T
    WHERE T.What = M.MyRow
);
```
How to declare a string of strings?
[ "", "sql", "sql-server", "" ]
I have a table `orders`. How do I subtract previous row minus current row for the column `Incoming`? ![enter image description here](https://i.stack.imgur.com/ezndQ.jpg)
In MySQL:

```
select a.Incoming,
       coalesce(a.Incoming - (select b.Incoming
                              from orders b
                              where b.id = a.id + 1),
                a.Incoming) as difference
from orders a
```
Use the [`LAG`](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions070.htm) function in Oracle. Try this query:

```
SELECT DATE_IN,
       Incoming,
       LAG(Incoming, 1, 0) OVER (ORDER BY Incoming) AS inc_previous,
       LAG(Incoming, 1, 0) OVER (ORDER BY Incoming) - Incoming AS Diff
FROM orders
```

[SQL Fiddle Link](http://www.sqlfiddle.com/#!4/bbab6/1)
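Both answers compute a previous-row-minus-current-row difference; the arithmetic itself can be sketched in plain Python (the `Incoming` values here are made up, since the question's screenshot is not reproduced):

```python
# hypothetical Incoming column, in row order
incoming = [100, 80, 65, 50]

# previous minus current; the first row has no predecessor
diffs = [None] + [incoming[i - 1] - incoming[i] for i in range(1, len(incoming))]
```

This is exactly what `LAG(Incoming) - Incoming` produces row by row, with `None` standing in for the missing predecessor of the first row.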
Subtracting previous row minus current row in SQL
[ "", "sql", "oracle", "plsql", "oracle-apex", "" ]
In a `SELECT` statement would it be possible to evaluate a `Substr` using CASE? Or what would be the best way to return a `sub string` based on a condition? I am trying to retrieve a name from an event description column of a table. The string in the event description column is formatted either like `text text (Mike Smith) text text` or `text text (Joe Schmit (Manager)) text text`. I would like to return the name only, but having some of the names followed by `(Manager)` is throwing off my `SELECT` statement. This is my `SELECT` statement: ``` SELECT * FROM ( SELECT Substr(Substr(eventdes,Instr(eventdes,'(')+1),1, Instr(eventdes,')') - Instr(eventdes,'(')-1) FROM mytable WHERE admintype = 'admin' AND entrytime BETWEEN sysdate - (5/(24*60)) AND sysdate AND eventdes LIKE '%action taken by%' ORDER BY id DESC ) WHERE ROWNUM <=1 ``` This returns things like `Mike Smith` if there is no `(Manager)`, but returns things like `Joe Schmit (Manager` if there is. Any help would be greatly appreciated.
[SQL Fiddle](http://sqlfiddle.com/#!4/0278b9/1) **Oracle 11g R2 Schema Setup**: ``` CREATE TABLE MYTABLE ( id, admintype, entrytime, eventdes ) AS SELECT 1, 'admin', SYSDATE, 'action taken by (John Doe (Manager)) more text' FROM DUAL; ``` **Query 1**: ``` SELECT * FROM ( SELECT REGEXP_SUBSTR( eventdes, '\((.*?)(\s*\(.*?\))?\)', 1, 1, 'i', 1 ) FROM mytable WHERE admintype = 'admin' AND entrytime BETWEEN sysdate - (5/(24*60)) AND sysdate AND eventdes LIKE '%action taken by%' ORDER BY id DESC ) WHERE ROWNUM <=1 ``` **[Results](http://sqlfiddle.com/#!4/0278b9/1/0)**: ``` | REGEXP_SUBSTR(EVENTDES,'\((.*?)(\S*\(.*?\))?\)',1,1,'I',1) | |------------------------------------------------------------| | John Doe | ``` **Edit:** [SQL Fiddle](http://sqlfiddle.com/#!4/f1b00/10) **Oracle 11g R2 Schema Setup**: ``` CREATE TABLE MYTABLE ( id, admintype, entrytime, eventdes ) AS SELECT 1, 'admin', SYSDATE, 'action taken by (Doe, John (Manager)) more text' FROM DUAL; ``` **Query 1**: ``` SELECT SUBSTR( Name, INSTR( Name, ',' ) + 1 ) || ' ' || SUBSTR( Name, 1, INSTR( Name, ',' ) - 1 ) AS Full_Name, REGEXP_REPLACE( Name, '^(.*?),\s*(.*)$', '\2 \1' ) AS Full_Name2 FROM ( SELECT REGEXP_SUBSTR( eventdes, '\((.*?)(\s*\(.*?\))?\)', 1, 1, 'i', 1 ) AS Name FROM mytable WHERE admintype = 'admin' -- AND entrytime BETWEEN sysdate - (5/(24*60)) AND sysdate AND eventdes LIKE '%action taken by%' ORDER BY id DESC ) WHERE ROWNUM <=1 ``` **[Results](http://sqlfiddle.com/#!4/f1b00/10/0)**: ``` | FULL_NAME | FULL_NAME2 | |-----------|------------| | John Doe | John Doe | ```
You could use INSTR to search for the last ')', but I would prefer a regular expression. This extracts everything between the first '(' and the last ')', and the TRIMs remove the brackets (Oracle doesn't support lookaround in regular expressions):

```
RTRIM(LTRIM(REGEXP_SUBSTR(eventdes, '\(.*\)'), '('), ')')
```
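The two-level parentheses pattern from the accepted answer, `\((.*?)(\s*\(.*?\))?\)` with the name in group 1, behaves the same way in any similar regex engine. A quick Python check against the two sample formats from the question:

```python
import re

# group 1 is the name; group 2 optionally swallows a nested "(Manager)" part
pattern = re.compile(r"\((.*?)(\s*\(.*?\))?\)")

plain = pattern.search("text text (Mike Smith) text text").group(1)
nested = pattern.search("text text (Joe Schmit (Manager)) text text").group(1)
```

Both cases yield just the name, which is the behavior the original `SUBSTR`/`INSTR` approach could not achieve for the nested case.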
Oracle SQL: Using CASE in a SELECT statement with Substr
[ "", "sql", "oracle", "" ]
My query: ``` select SeqNo, Name, Qty, Price from vendor where seqNo = 1; ``` outputs like below: ``` SeqNo Name Qty Price 1 ABC 10 11 1 -do- 11 12 1 ditto 13 14 ``` The output above shows the vendor name as ABC in first row which is correct. Later on as users entered for the same vendor name `"ABC"` as either `'-do-'` / `'ditto'`. Now in my final query output I want to replace `-do-` and `ditto` with `ABC` (as in above example) so my final output should look like: ``` SeqNo Name Qty Price 1 ABC 10 11 1 ABC 11 12 1 ABC 13 14 ```
This works in SQL Server for your sample data; I'm not sure what your other rows look like:

```
select SeqNo,
       case when Name in ('-do-','ditto')
            then (select Name from test
                  where Name not in ('-do-','ditto') and SeqNo = 1)
            else Name
       end as Name
from table
where SeqNo = 1
```
Use the `REPLACE` function ``` SELECT SeqNo, REPLACE(REPLACE(Name,'ditto','ABC'),'-do-','ABC'), Qty, Price FROM vendor WHERE seqNo = 1; ```
How to replace a specific text from the query result
[ "", "sql", "sql-server", "" ]
I have a table with a column `meta_key` … I want to delete all rows in my table where `meta_key` matches some string.

![enter image description here](https://i.stack.imgur.com/tIblS.png)

For instance I want to delete all 3 rows with "tel" in it - not just the cell but the entire row. How can I do that with a MySQL statement?
The query below deletes every row whose `meta_key` contains "tel" anywhere in the string:

```
DELETE FROM my_table WHERE meta_key LIKE '%tel%';
```

This is part of [Pattern matching](https://dev.mysql.com/doc/refman/5.0/en/pattern-matching.html). If you want the `meta_key` string to be exactly equal to "tel", you can try this instead:

```
DELETE FROM my_table WHERE meta_key = 'tel'
```

This is a simple [Delete](https://dev.mysql.com/doc/refman/5.0/en/delete.html).
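The same `DELETE … LIKE '%tel%'` pattern can be tried against an in-memory SQLite table; the `meta_key` values here only approximate the screenshot (`alt_tel` and `first_name` are made-up examples):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE my_table (meta_key TEXT)")
con.executemany("INSERT INTO my_table VALUES (?)",
                [("tel",), ("tel",), ("alt_tel",), ("first_name",)])

# LIKE '%tel%' removes every row whose meta_key merely CONTAINS "tel"
con.execute("DELETE FROM my_table WHERE meta_key LIKE '%tel%'")
remaining = [r[0] for r in con.execute("SELECT meta_key FROM my_table")]
```

Note that the pattern match also took out `alt_tel`; use `= 'tel'` instead if only exact matches should be deleted.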
```
DELETE FROM table WHERE meta_key = 'tel'        -- deletes exact matches only
DELETE FROM table WHERE meta_key LIKE '%tel%'   -- deletes any string containing tel
```

Note that the pattern version must use `LIKE`; `= '%tel%'` would compare against the literal string `%tel%`.
MySQL: delete table rows if string in cell is matched?
[ "", "mysql", "sql", "delete-row", "" ]
I have a stored procedure which has a varchar input parameter. The body has several IF conditions which compare against this input parameter. Try as I may, it does not enter one of the IF conditions though everything seems to be fine. Any idea why this is happening? ``` CREATE PROCEDURE [dbo].[spGetSnapshot] @ScenarioEnum varchar(10), @LabelerCodes varchar(200) = null, @StatusFilter varchar(200) = null AS BEGIN IF(@ScenarioEnum = 'ManufacturerAdjudicationSnapshot') PRINT 'Inside the if statement' END IF(@ScenarioEnum = 'Works') PRINT 'Inside the 2ND if statement' END ```
This is happening because the declared length of the input parameter is too small. `@ScenarioEnum` is `varchar(10)`, so any longer argument is silently truncated to its first ten characters: 'ManufacturerAdjudicationSnapshot' arrives inside the procedure as 'Manufactur', which matches neither comparison.

Increase the length reserved for the `@ScenarioEnum` parameter to an appropriate value and this will start working!
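The truncation mechanic itself is easy to see outside SQL Server. A plain-Python sketch (the function is a stand-in for the stored procedure, not its real implementation) of what `varchar(10)` does to the argument:

```python
def sp_get_snapshot(scenario_enum: str) -> str:
    # what a varchar(10) parameter silently does to the incoming argument
    scenario_enum = scenario_enum[:10]
    if scenario_enum == "ManufacturerAdjudicationSnapshot":
        return "first branch"
    if scenario_enum == "Works":
        return "second branch"
    return "no branch"

truncated = "ManufacturerAdjudicationSnapshot"[:10]
result = sp_get_snapshot("ManufacturerAdjudicationSnapshot")
```

The truncated value can never equal the 32-character literal, so neither `IF` branch fires.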
``` CREATE PROCEDURE [dbo].[spGetSnapshot] @ScenarioEnum varchar(10), @LabelerCodes varchar(200) = null, @StatusFilter varchar(200) = null AS BEGIN IF(@ScenarioEnum = 'ManufacturerAdjudicationSnapshot') BEGIN PRINT 'Inside the if statement' END ELSE IF(@ScenarioEnum = 'Works') BEGIN PRINT 'Inside the 2ND if statement' END END ```
SQL Stored procedure does not enter IF and CASE condition
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have a table with the columns and values below:

```
Date                 Nbr    NewValue    OldValue
5/20/2015 14:23:08   123    abc         xyz
5/20/2015 15:02:10   123    xyz         abc
5/21/2015 08:10:02   123    xyz         pqr
5/21/2015 10:10:05   456    lmn         ijk
```

From the above table I want to select `123` from `5/21/2015` and `456` from `5/21/2015`. I don't want to select `nbr 123` from `5/20` because there is no change between `OldValue` and `NewValue` at the end of that day. How do I write a select statement for this kind of requirement?
If you're using at least SQL Server 2008, you can use a CTE to determine which entry represents the first entry for each `[Nbr]` on each day and which entry represents the last, then compare the two to see where an actual change has occurred using a self-join as suggested in some other answers. For instance: ``` -- Sample data from the question. declare @TestData table ([Date] datetime, [Nbr] int, [NewValue] char(3), [OldValue] char(3)); insert @TestData values ('2015-05-20 14:23:08', 123, 'abc', 'xyz'), ('2015-05-20 15:02:10', 123, 'xyz', 'abc'), ('2015-05-21 08:10:02', 123, 'xyz', 'pqr'), ('2015-05-21 10:10:05', 456, 'lmn', 'ijk'); with [SequencingCTE] as ( select *, -- [OrderAsc] will be 1 if and only if a record represents the FIRST change -- for a given [Nbr] on a given day. [OrderAsc] = row_number() over (partition by convert(date, [Date]), [Nbr] order by [Date]), -- [OrderDesc] will be 1 if and only if a record represents the LAST change -- for a given [Nbr] on a given day. [OrderDesc] = row_number() over (partition by convert(date, [Date]), [Nbr] order by [Date] desc) from @TestData ) -- Match the original value for each [Nbr] on each day with the final value of the -- same [Nbr] on the same day, and get only those records where an actual change -- has occurred. select [Last].* from [SequencingCTE] [First] inner join [SequencingCTE] [Last] on convert(date, [First].[Date]) = convert(date, [Last].[Date]) and [First].[Nbr] = [Last].[Nbr] and [First].[OrderAsc] = 1 and [Last].[OrderDesc] = 1 where [First].[OldValue] != [Last].[NewValue]; ```
``` SELECT * FROM YOURTABLE AS T1 INNER JOIN YOURTABLE AS T2 ON T1.NBR=T2.NBR AND T1.OLDVALUE<>T2.NEWVALUE ``` inner join your table with itself
SQL Select questions
[ "", "sql", "sql-server", "" ]
I have a `stops` table. After finding a stop by name, I want to find the `name`, `lat` and `longi` of the rows before and after it.

```
CREATE TABLE IF NOT EXISTS stops (
  stop_id INT(11) NOT NULL AUTO_INCREMENT PRIMARY KEY,
  name varchar(30) NOT NULL,
  lat double(10,6) NOT NULL,
  longi double(10,6) NOT NULL)
```

![enter image description here](https://i.stack.imgur.com/5oHIq.png)

For example, if the name is `TEST` I want to get the name, lat and longi of ABC and sky. It should work even when there are gaps between the `stop_id` values, like `2,5,7,12`. I appreciate any help.
You can use variables to achieve what you want: ``` SELECT stop_id, name, lat, longi, rn FROM ( SELECT stop_id, name, lat, longi, @r:=@r+1 AS rn FROM stops, (SELECT @r:=0) var ORDER BY stop_id, name ) s WHERE name != 'TEST' AND rn >= (SELECT row_number FROM ( SELECT name, @row_number:=@row_number+1 AS row_number FROM stops, (SELECT @row_number:=0) var ORDER BY stop_id, name ) s WHERE name = 'TEST' ) - 1 ORDER BY stop_id LIMIT 2 ``` [**Demo here**](http://sqlfiddle.com/#!9/e11b8/1) This query: ``` SELECT name, @row_number:=@row_number+1 AS row_number FROM stops, (SELECT @row_number:=0) var ORDER BY stop_id, name ``` is used twice to simulate `ROW_NUMBER` window function not available in MySQL. Using it we can identify 'row number' of record having `name = 'TEST'` and use this info to get the records we want.
You can get the ids of the previous and next stops using correlated subqueries, assuming that these are the ones with adjacent ids: ``` select s.*, (select s2.stop_id from stops s2 where s2.stop_id < s.stop_id order by s2.stop_id desc limit 1 ) as prev_stop_id, (select s2.stop_id from stops s2 where s2.stop_id > s.stop_id order by s2.stop_id ASC limit 1 ) as next_stop_id from stops s; ``` You can then use `join` to bring in additional information. It is tempting to use `join`s for getting the prev/next stops, such as: ``` select s.*, prevstop.* from stops s left join stops prevstop on s.stop_id = prevstop.stop_id + 1; ``` However, this assumes that there are no gaps in the values, and that assumption may not be true.
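The correlated-subquery idea — nearest smaller and nearest larger `stop_id` — handles gaps in the ids directly. A sketch against SQLite using the ids `2, 5, 7, 12` from the question (the last stop name is made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE stops (stop_id INTEGER PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO stops VALUES (?, ?)",
                [(2, "ABC"), (5, "TEST"), (7, "sky"), (12, "mount")])

# previous stop = largest id below TEST's id; next stop = smallest id above it
neighbors = [r[0] for r in con.execute("""
    SELECT name FROM stops
    WHERE stop_id = (SELECT MAX(stop_id) FROM stops
                     WHERE stop_id < (SELECT stop_id FROM stops WHERE name = 'TEST'))
       OR stop_id = (SELECT MIN(stop_id) FROM stops
                     WHERE stop_id > (SELECT stop_id FROM stops WHERE name = 'TEST'))
    ORDER BY stop_id
""")]
```

This returns exactly the two rows surrounding `TEST` even though the ids are not consecutive.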
Find the previous and next row in MySQL
[ "", "mysql", "sql", "" ]
My table has data as below, and when I select the data ordered by the first two characters of the string it is not sorted properly:

```
1 - Property Damage
11 - Legal Expenses
Delivery
12 - Computer
14 - Directors
2 - Business
```

Below is the expected output:

```
1 - Property Damage
2 - Business
11 - Legal Expenses
12 - Computer
14 - Directors
Delivery
```

I am using the following query:

```
Select column from table order by Substring(column,1,2);
```

The result set is:

```
1 - Property Damage
11 - Legal Expenses
12 - Computer
14 - Directors
2 - Business
Delivery
```

My understanding is that it is sorting on the basis of the ASCII value. Is there any way to get the required output?
Split your ORDER BY in two. The first key puts rows that don't start with a digit at the bottom - but still sorted - while leaving the numeric rows undetermined (null). The second key inserts leading spaces so all the numeric prefixes have the same length before the first space:

```
SELECT col
FROM (values ('1 - Property Damage '),('11 - Legal Expenses'),('Delivery'),
             ('12 - Computer'),('14 - Directors'),('2 - Business ')) x(col)
ORDER BY CASE WHEN col like '[0-9]%' then null else col end,
         replicate(' ', 10-charindex(' ', col)) + col
```

Result:

```
col
1 - Property Damage
2 - Business
11 - Legal Expenses
12 - Computer
14 - Directors
Delivery
```
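The intent of that ORDER BY — numeric prefixes compared as numbers, everything else sorted after them — can be cross-checked with a Python sort key over the same sample rows:

```python
rows = ["1 - Property Damage", "11 - Legal Expenses", "Delivery",
        "12 - Computer", "14 - Directors", "2 - Business"]

def sort_key(value: str):
    head = value.split(" ")[0]
    # numeric-prefixed rows first, ordered by the number; the rest after them
    return (0, int(head)) if head.isdigit() else (1, 0)

ordered = sorted(rows, key=sort_key)
```

Here `1` and `2` sort before `11`, which is exactly what a plain string sort on the first two characters fails to do.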
A shorter answer ``` Select col from table order by cast((Parsename(replace(col,' - ','.'), 2) ) as Int) ```
Numeric Values not getting sorted by using substring in SQL server 2005
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I'd like to create a database backup using SSMS. The backup file will be a .bak file, but I would like to ignore 1 column in a certain table, because this column isn't necessary, but it takes up 95% of the backup size. The column values should all be replaced by 0x00 (column type is varbinary(max), not null). What's the best way to do this? FYI: I know how to generate a regular backup using Tasks => Back Up..
There is a long way of doing what you ask. It's basically: create a restored copy of the database, remove the unneeded data, then back up that copy.

1. Create a backup of the production database.
2. Restore the backup locally on production with a new name.
3. Update the column with `0x00`.
4. Shrink the database. (Shrink is helpful when doing a restore; it won't reduce the .bak file size.)
5. Take a backup of the new database. (Also use backup compression to reduce the size even more.)
6. FTP the .bak file.

If you only needed a few tables, you could have used `bcp`, but that looks out of the picture for your current requirement.
From SQL Server native backups, you can't. You'd have to restore the database to some other location and then migrate usefull data.
Create database backup, ignore column
[ "", "sql", "sql-server", "ssms", "" ]
Hello, I have two SQL queries and I would like to merge or combine them.

Query1:

```
Select TableA.Name, TableB.Date
from TableA, TableB
where ID = ID_used;
```

Query2:

```
SELECT count(Date)
FROM (SELECT DISTINCT Date FROM TableA, TableB)
group by Date;
```

I tried:

```
Select TableA.Name, TableB.Date
from TableA, TableB
where ID=ID_used
inner join (SELECT count(Date) FROM (SELECT DISTINCT Date FROM TableA, TableB) group by Date)
```

But it gives a syntax error in the union query. The result I need is: Name, count result. Any idea?

Test data:

```
TableA
---------------------
Name        ID
---------------------
John        1001
Peter       1002

TableB
-----------------------
Date             ID_used
-----------------------
2015.05.01.AM    1001
2015.05.01.AM    1001
2015.05.01.AM    1002
2015.05.01.PM    1001
2015.05.01.PM    1001
2015.05.01.PM    1002
2015.05.01.PM    1002
2015.05.02.PM    1002
```

The results have to be:

```
John     2
Peter    3
```
This returns the count of distinct dates from tableB for each Name in tableA: ``` Select a.Name, COUNT(DISTINCT b.Date) from TableA AS a JOIN TableB AS b ON ID=ID_used group by a.Name ```
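The accepted `COUNT(DISTINCT …)` query can be replayed on the question's own test data with `sqlite3`, reproducing the expected `John 2 / Peter 3` result:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE TableA (Name TEXT, ID INT);
    CREATE TABLE TableB (Date TEXT, ID_used INT);
    INSERT INTO TableA VALUES ('John', 1001), ('Peter', 1002);
    INSERT INTO TableB VALUES
        ('2015.05.01.AM', 1001), ('2015.05.01.AM', 1001), ('2015.05.01.AM', 1002),
        ('2015.05.01.PM', 1001), ('2015.05.01.PM', 1001), ('2015.05.01.PM', 1002),
        ('2015.05.01.PM', 1002), ('2015.05.02.PM', 1002);
""")

result = con.execute("""
    SELECT a.Name, COUNT(DISTINCT b.Date)
    FROM TableA AS a JOIN TableB AS b ON a.ID = b.ID_used
    GROUP BY a.Name
    ORDER BY a.Name
""").fetchall()
```

`DISTINCT` is what collapses the repeated `2015.05.01.AM` / `2015.05.01.PM` rows before counting.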
As I understand it what you need is something like this: (Although I am not sure because you didn't add test data and expected results) ~~Select t1.Name,t2.Date, (select count(date) from (select distinct date FROM TableA,TableB GROUP BY Date)) as datecount from TableA t1 join TableB t2 on t1.ID = t2.ID\_used~~ UPDATE: I think this is what you are looking for: [SQLFIDDLE](http://sqlfiddle.com/#!9/2e0940/3) ``` select t1.id, t1.name, count(t2.Date) FROM TableA t1 JOIN ( SELECT Distinct id_used,Date FROM TableB) t2 ON t1.id = t2.id_used GROUP by t1.id ``` EDIT: I tried this in ms access 2010: NOTE: I couldn't use the keyword Date as a column name so changed to mydate ``` select t1.id, t1.name, count(t2.mydate) as datecount FROM TableA t1, (SELECT Distinct id_used,mydate FROM TableB) t2 WHERE t1.id = t2.id_used GROUP by t1.id,t1.name ``` That worked but I don't recommend using joins without the join or on keywords so here is one that is better readable and also works: ``` select t1.id, t1.name, count(t2.mydate) as datecount FROM TableA t1 INNER JOIN ( SELECT Distinct id_used,mydate FROM TableB) t2 ON t1.id = t2.id_used GROUP by t1.id,t1.name ```
Merge two sql queries select and count
[ "", "sql", "" ]
This is my whole script:

```
CREATE TABLE #TEST ( STATE CHAR(2))

INSERT INTO #TEST
SELECT 'ME' UNION ALL
SELECT 'ME' UNION ALL
SELECT 'ME' UNION ALL
SELECT 'SC' UNION ALL
SELECT 'NY' UNION ALL
SELECT 'SC' UNION ALL
SELECT 'NY' UNION ALL
SELECT 'SC'

SELECT * FROM #TEST ORDER BY CHARINDEX(STATE,'SC,NY')
```

I want to display all records starting with `SC` first, then `NY`, and the rest in any order after those. When I execute the above SQL, all the `ME` records come first, even though `ME` is not in my ORDER BY list. Tell me where I am making the mistake. Thanks.
Use `CASE` statement ``` SELECT * FROM #TEST ORDER BY CASE WHEN STATE LIKE 'SC%' THEN 0 WHEN STATE LIKE 'NY%' THEN 1 ELSE 2 END ``` Output ``` STATE ----- SC SC SC NY NY ME ME ME ```
See here for what CHARINDEX returns when your search pattern is Not found. - <https://msdn.microsoft.com/en-GB/library/ms186323.aspx> To avoid dealing with the 0 case, you could use the below. ``` ORDER BY CASE LEFT(state, 2) WHEN 'SC' THEN 1 WHEN 'NY' THEN 2 ELSE 3 END ``` This should be easier to write, read and maintain. It should also use less CPU, though I doubt it will make a *tangible* difference.
Issue regarding order by CHARINDEX Sql Server
[ "", "sql", "sql-server", "t-sql", "" ]
How do I filter rows by the number of matching counts, without building new user functions (i.e. using only built-in functions), on the data below? The requirement is to return rows where every distinct `gw` value appears the same number of times. I.e. it could be all 1s like Sandy's, or it could be Don, since he has '1' two times and '2' two times as well. Voland would not meet the requirement since he has '1' two times but '2' only once. You don't want to count '0' at all.

```
login     gw1   gw2   gw3   gw4   gw5
Peter     1     0     1     0     0
Sandy     1     1     1     1     0
Voland    1     0     1     2     0
Don       1     2     0     1     2
```

Desired output:

```
login     gw1   gw2   gw3   gw4   gw5
Peter     1     0     1     0     0
Sandy     1     1     1     1     0
Don       1     2     0     1     2
```

Values can appear any positive number of times. To match the criteria, each value also has to appear at least twice. I.e. `1 2 3 4 0` is not OK, since every value appears only once; `1 1 0 3 3` is a match.
[**SQL Fiddle**](http://sqlfiddle.com/#!6/fb82f/1/0) ``` WITH Cte(login, gw) AS( SELECT login, gw1 FROM TestData WHERE gw1 > 0 UNION ALL SELECT login, gw2 FROM TestData WHERE gw2 > 0 UNION ALL SELECT login, gw3 FROM TestData WHERE gw3 > 0 UNION ALL SELECT login, gw4 FROM TestData WHERE gw4 > 0 UNION ALL SELECT login, gw5 FROM TestData WHERE gw5 > 0 ), CteCountByLoginGw AS( SELECT login, gw, COUNT(*) AS cc FROM Cte GROUP BY login, gw ), CteFinal AS( SELECT login FROM CteCountByLoginGw c GROUP BY login HAVING MAX(cc) > 1 AND COUNT(DISTINCT gw) = ( SELECT COUNT(*) FROM CteCountByLoginGw WHERE c.login = login AND cc = MAX(c.cc) ) ) SELECT t.* FROM CteFinal c INNER JOIN TestData t ON t.login = c.login ``` --- First you `unpivot` the table without including `gw` that are equal to 0. The result (`CTE`) is: ``` login gw ---------- ----------- Peter 1 Sandy 1 Voland 1 Don 1 Sandy 1 Don 2 Peter 1 Sandy 1 Voland 1 Sandy 1 Voland 2 Don 1 Don 2 ``` Then, you perform a `COUNT(*) GROUP BY login, gw`. The result would be (`CteCountByLoginGw`): ``` login gw cc ---------- ----------- ----------- Don 1 2 Peter 1 2 Sandy 1 4 Voland 1 2 Don 2 2 Voland 2 1 ``` Finally, only get those `login` whose `max(cc)` is greater `1`. This is to eliminate rows like `1,2,3,4,0`. And `login` whose unique `gw` is the same the `max(cc)`. This is to make sure that the occurrence of a `gw` column is the same as others: ``` login gw1 gw2 gw3 gw4 gw5 ---------- ----------- ----------- ----------- ----------- ----------- Peter 1 0 1 0 0 Sandy 1 1 1 1 0 Don 1 2 0 1 2 ```
I know I'm late to the party, I can't type as fast as some and I think I arrived about 40 minutes late but since I done it, I thought I'd share it anyway. My method used unpivot and pivot to achieve the result: ``` Select * from foobar f1 where exists (Select * from (Select login_, Case when [1] = 0 then null else [1] % 2 end Val1, Case when [2] = 0 then null else [2] % 2 end Val2, Case when [3] = 0 then null else [3] % 2 end Val3, Case when [4] = 0 then null else [4] % 2 end Val4, Case when [5] = 0 then null else [5] % 2 end Val5 from (Select * from (select * from foobar) src UNPIVOT (value for amount in (gw1, gw2, gw3, gw4, gw5)) unpvt) src2 PIVOT (count(amount) for value in ([1],[2],[3],[4],[5])) as pvt) res Where 0 in (Val1,Val2, Val3, Val4, Val5) and not exists (select * from foobar where 1 in (Val1, Val2, Val3, Val4, Val5)) and login_ = f1.login_) ``` and here is the fiddle: <http://www.sqlfiddle.com/#!6/b78f8/1/0>
TSQL filtering by character match
[ "", "sql", "sql-server", "t-sql", "sql-server-2012", "" ]
I faced a situation where I got duplicate values from `LEFT JOIN`. I think this might be a desired behavior but unlike from what I want. I have three tables: **`person`**, **`department`** and **`contact`**. **person :** ``` id bigint, person_name character varying(255) ``` **department :** ``` person_id bigint, department_name character varying(255) ``` **contact :** ``` person_id bigint, phone_number character varying(255) ``` **Sql Query :** ``` SELECT p.id, p.person_name, d.department_name, c.phone_number FROM person p LEFT JOIN department d ON p.id = d.person_id LEFT JOIN contact c ON p.id = c.person_id; ``` **Result :** ``` id|person_name|department_name|phone_number --+-----------+---------------+------------ 1 |"John" |"Finance" |"023451" 1 |"John" |"Finance" |"99478" 1 |"John" |"Finance" |"67890" 1 |"John" |"Marketing" |"023451" 1 |"John" |"Marketing" |"99478" 1 |"John" |"Marketing" |"67890" 2 |"Barbara" |"Finance" |"" 3 |"Michelle" |"" |"005634" ``` I know it's what joins do, keeping multiplied with selected rows. But It gives a sense like phone numbers `023451`,`99478`,`67890` are for both departments while they are only related to person john with unnecessary repeated values which will escalate the problem with larger data set. So, here is what I want: ``` id|person_name|department_name|phone_number --+-----------+---------------+------------ 1 |"John" |"Finance" |"023451" 1 |"John" |"Marketing" |"99478" 1 |"John" |"" |"67890" 2 |"Barbara" |"Finance" |"" 3 |"Michelle" |"" |"005634" ``` This is a sample of my situation and I am using a large set of tables and queries. So, kind of need a generic solution.
I like to call this problem **"cross join by proxy"**. Since there is no information (`WHERE` or `JOIN` condition) how the tables `department` and `contact` are supposed to match up, they are cross-joined via the proxy table `person` - giving you the [Cartesian product](https://en.wikipedia.org/wiki/Cartesian_product). Very similar to this one: * [Two SQL LEFT JOINS produce incorrect result](https://stackoverflow.com/questions/12464037/two-sql-left-joins-produce-incorrect-result/12464135#12464135) More explanation there. Solution for your query: ``` SELECT p.id, p.person_name, d.department_name, c.phone_number FROM person p LEFT JOIN ( SELECT person_id, min(department_name) AS department_name FROM department GROUP BY person_id ) d ON d.person_id = p.id LEFT JOIN ( SELECT person_id, min(phone_number) AS phone_number FROM contact GROUP BY person_id ) c ON c.person_id = p.id; ``` You did not define *which* department or phone number to pick, so I arbitrarily chose the *minimum*. You can have it any other way ...
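The "cross join by proxy" multiplication is easy to reproduce: with two departments and three phone numbers for the same person, the double LEFT JOIN returns 2 x 3 = 6 rows. A `sqlite3` sketch with the question's own sample values:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE person (id INTEGER, person_name TEXT);
    CREATE TABLE department (person_id INTEGER, department_name TEXT);
    CREATE TABLE contact (person_id INTEGER, phone_number TEXT);
    INSERT INTO person VALUES (1, 'John');
    INSERT INTO department VALUES (1, 'Finance'), (1, 'Marketing');
    INSERT INTO contact VALUES (1, '023451'), (1, '99478'), (1, '67890');
""")

# nothing relates department rows to contact rows except the shared person,
# so every department row pairs with every contact row
rows = con.execute("""
    SELECT p.person_name, d.department_name, c.phone_number
    FROM person p
    LEFT JOIN department d ON p.id = d.person_id
    LEFT JOIN contact c ON p.id = c.person_id
""").fetchall()
```

Pre-aggregating either side (as in the answer above) collapses the multiplication back to one row per person per aggregate.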
I think you just need to get lists of departments and phones for particular person. So just use [`array_agg`](http://www.postgresql.org/docs/9.3/static/functions-aggregate.html) (or `string_agg` or `json_agg`): ``` SELECT p.id, p.person_name, array_agg(d.department_name) as "department_names", array_agg(c.phone_number) as "phone_numbers" FROM person AS p LEFT JOIN department AS d ON p.id = d.person_id LEFT JOIN contact AS c on p.id = c.person_id GROUP BY p.id, p.person_name ```
Prevent duplicate values in LEFT JOIN
[ "", "sql", "join", "" ]
I need to pass a value downstream from a SQL Server DB which is essentially the difference between two timestamps. This is simple enough with the `DATEDIFF` function:

```
SELECT DATEDIFF(day, '2015-11-06 00:00:00.000','2015-12-25 00:00:00.000') AS DiffDate ;
```

However, at the time of passing the value down, the code only knows an order ID value and not the two timestamps shown above. Therefore I need the timestamp info to come from a subquery, or something similar. The main nuts and bolts of the subquery are here:

```
select O.DATE1 , C.DATE2
from TABLE1 O, TABLE2 C
WHERE O.VALUE1_ID = C.VALUE1
AND O.order_id = '12345678'
```

I've tried a few different ways, but none have been successful. The latest I've tried is below, which looks OK from a syntax perspective, but I get the error:

> Conversion failed when converting date and/or time from character string

which I'm never too sure how to cope with.

```
select DATEDIFF (day,'(select O.VALUE1 from TABLE1 O where O.VALUE1 = 16650476)' ,
  '(SELECT C.VALUE1 from TABLE1 O, TABLE2 C WHERE O.VALUE1 = C.VALUE2 AND O.order_id = 12345678)') AS DIFFDATE;
```

Any pointers or help would be appreciated.
The subqueries shouldn't be strings, so remove the single quotes. Also, you talk about `O.DATE1` and `C.DATE2`, so you probably mean something like this:

```
select DATEDIFF (day,
       (select O.DATE1 from TABLE1 O where O.VALUE1 = 16650476),
       (SELECT C.DATE2 from TABLE1 O, TABLE2 C
        WHERE O.VALUE1 = C.VALUE2 AND O.order_id = 12345678)) AS DIFFDATE;
```
Could you try the below I have used your sub query but more information on the data in the tables "Table1" and "Table2" would be useful ``` SELECT DATEDIFF(day, D.Date1, D.Date2) AS DiffDate FROM ( select O.DATE1 as Date1 , C.DATE2 as Date2 from TABLE1 O, TABLE2 C WHERE O.VALUE1_ID = C.VALUE1 AND O.order_id = '12345678' ) D ``` The reason you are getting the error > Conversion failed when converting date and/or time from character > string is because you are passing strings (below) to the datediff function instead of using a date ``` '(select O.VALUE1 from TABLE1 O where O.VALUE1 = 16650476)' ```
SQL - DATEDIFF with a subquery
[ "", "sql", "sql-server", "datediff", "" ]
I have 2 tables, `tr_testmodule` and `TR_Modulelocationdtl`. ``` select * from tr_testmodule -output like below (nmoduleno is primary key, vlocationno is varchar) nmoduleno vlocationnno 1 3,65,6,9,63 2 13,625,62,91,613 ``` Now I want to insert data from tr\_testmodule to TR\_Modulelocationdtl for each row of tr\_testmodule with only single query. For example I want to insert a number of rows for single moduleno ``` select * from TR_Modulelocationdtl --(nid is pk,nlocationo-int) nid nmoduleno nlocationno 1 1 3 2 1 65 3 1 6 4 1 9 5 1 63 6 2 13 7 2 625 ``` i can split the data like this into temptable (but only for single row) from the temp table i can insert data into my 'TR\_Modulelocationdtl' ``` SELECT * INTO #TR_Modulelocationdtl FROM (SELECT data AS nLocationno FROM dbo.SplitString('1,23,2,3,5',',') ) AS nLocationno select * from #TR_Modulelocationdtl nLocationno 1 23 2 3 5 ```
Try this: ``` DECLARE @t TABLE ( n INT, v VARCHAR(100) ) INSERT INTO @t VALUES ( 1, '3,65,6,9,63' ), ( 2, '13,625,62,91,613' ) SELECT n, s FROM @t CROSS APPLY ( SELECT Split.a.value('.', 'VARCHAR(100)') AS s FROM ( SELECT CAST ('<M>' + REPLACE(v, ',','</M><M>') + '</M>' AS XML) AS s) AS A CROSS APPLY s.nodes('/M') AS Split ( a ) ) ca ``` Output: ``` n s 1 3 1 65 1 6 1 9 1 63 2 13 2 625 2 62 2 91 2 613 ```
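Setting the XML trick aside, the row expansion itself — one `(module, location)` row per comma-separated item — is just a split-and-flatten, sketched here in plain Python with the question's sample values:

```python
source = [(1, "3,65,6,9,63"), (2, "13,625,62,91,613")]

# one output row per (module_no, single location) pair
expanded = [(module_no, int(loc))
            for module_no, csv in source
            for loc in csv.split(",")]
```

The T-SQL answer above achieves the same flattening with `CROSS APPLY` over the split values, one target row per source item.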
This is something rather easy to solve with a stored procedure (or an anonymous block for that matter): Loop through the records currently in tr\_testmodule, split vlocationnno and loop through the locations thus got, then insert into tr\_modulelocationdtl. I don't know SQL Server well enough, so here is an equivalent Oracle script that can serve as pseudo code. It should be rather easy to convert it to SQL Server I guess. ``` begin for rec in (select * from tr_testmodule) loop for loc in (select to_number(column_value) as nlocationno from xmltable(rec.vlocationnno)) loop insert into tr_modulelocationdtl (nmoduleno, nlocationno) values (rec.nmoduleno, loc.nlocationno); end loop; end loop; delete from tr_testmodule; commit; end; ```
how to insert multiple row in one table according to single row of another table?
[ "", "sql", "sql-server", "database", "" ]
I have a query as below: ``` SELECT value FROM table WHERE ((col1+col2+col3)/col4) > 4 AND ((col1+col2+col3)/col4) < 5 UNION ALL SELECT value FROM table WHERE ((col1+col2+col3)/col4) > 3 AND ((col1+col2+col3)/col4) < 4 ``` I'd like to define `((col1+col2+col3)/col4)` as some name, so if I want to change the conditions of the SQL query (eg. maybe I'd like it to be `((col1+col2-col3)*col4)` instead), I don't need to go through the entire text of the SQL query and change every line. ``` DEFINE ((col1+col2+col3)/col4) AS var SELECT value FROM table WHERE var > 4 AND var < 5 UNION ALL SELECT value FROM table WHERE var > 3 AND var < 4 ``` Only one line needs to be altered instead of four in this example. It's easier to read and maintain.
You can do the `union all` before the definition: ``` SELECT ((col1+col2+col3)/col4) as value FROM ((SELECT col1, col2, col3, col4 FROM table1 ) UNION ALL (SELECT col1, col2, col3, col4 FROM table2 ) ) t HAVING value > 4 and value < 5 ``` This use of the `having` clause is specific to MySQL. You would need another subquery or CTE for another database.
``` SELECT tt.value FROM (SELECT ((t.col1+t.col2+t.col3)/t.col4) as value FROM ((SELECT col1, col2, col3, col4 FROM #Temp ) UNION ALL (SELECT col1, col2, col3, col4 FROM #Temp ) ) t)tt GROUP BY tt.value HAVING tt.value > 4 and tt.value < 5 ```
Define column name as another name in SQL?
[ "", "mysql", "sql", "pdo", "" ]
I wrote this function:

```
CREATE FUNCTION ip2int(text) RETURNS bigint AS $$
SELECT split_part($1,'.',1)::bigint*16777216 +
       split_part($1,'.',2)::bigint*65536 +
       split_part($1,'.',3)::bigint*256 +
       split_part($1,'.',4)::bigint;
$$ LANGUAGE SQL IMMUTABLE RETURNS NULL ON NULL INPUT;

SELECT ip2int('200.233.1.2');
```

which works on a Postgres database. But when I run it on Oracle it gives me the following error:

> Error(1,21): PLS-00103: Encountered the symbol ")" when expecting one of the following: in out long double ref char time timestamp interval date binary national character nchar

What is the solution for this? Is there any way to make it database independent?
I realize that in PostgreSQL it's common to refer to one of the more commonly used programming languages as "plsql", but the language supported by PostgreSQL is properly known as "plpgsql", and PL/SQL and PL/pgSQL are two similar-but-different languages.

The syntax of this function is not valid PL/SQL as defined by Oracle. PL/SQL does not use $$ to quote the text of the routine; the `text` parameter is untyped, which is not allowed in PL/SQL; PL/SQL does not support the $1, $2, etc. parameter markers; `bigint` isn't supported in Oracle (although you could define a subtype of NUMBER to handle this); Oracle doesn't support the `::` cast operator; doesn't provide `split_part` (although I suppose you could write your own); and doesn't use the `LANGUAGE SQL IMMUTABLE`... stuff at the end. Oracle requires that a function consist of a statement block (`DECLARE...BEGIN...END`), while this function is purely a SELECT statement; and the SELECT statement in the function isn't valid for Oracle because Oracle requires a FROM clause. So, basically, this function is not valid PL/SQL.

You could transform it into something like

```
CREATE OR REPLACE FUNCTION ip2int(text IN VARCHAR2)
  RETURN NUMBER
AS
  nRetval NUMBER;
BEGIN
  SELECT TO_NUMBER(REGEXP_SUBSTR(text, '[0-9]+', 1, 1)) * 16777216 +
         TO_NUMBER(REGEXP_SUBSTR(text, '[0-9]+', 1, 2)) * 65536 +
         TO_NUMBER(REGEXP_SUBSTR(text, '[0-9]+', 1, 3)) * 256 +
         TO_NUMBER(REGEXP_SUBSTR(text, '[0-9]+', 1, 4))
    INTO nRetval
    FROM DUAL;

  RETURN nRetval;
END IP2INT;
```

(note the pattern must be `'[0-9]+'` so each occurrence matches a whole octet rather than a single digit), but making such radical alterations doesn't seem to be what you want to do.

[The PostgreSQL documentation on converting from PL/SQL to PL/pgSQL](http://www.postgresql.org/docs/9.4/static/plpgsql-porting.html) might give you some ideas of the challenges involved in this. Best of luck.
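For reference, the octet arithmetic both versions implement is the same, so it can be sanity-checked in a few lines of Python against the question's own test case:

```python
def ip2int(ip: str) -> int:
    # a.b.c.d -> a*256^3 + b*256^2 + c*256 + d
    a, b, c, d = (int(part) for part in ip.split("."))
    return a * 16777216 + b * 65536 + c * 256 + d
```

Whatever the host language, this is the value the Postgres original and any Oracle port should agree on.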
``` CREATE OR REPLACE FUNCTION split_part(string VARCHAR2, delimiter VARCHAR2, n NUMBER) RETURN VARCHAR2 IS v_start NUMBER(5) := 1; v_end NUMBER(5); BEGIN -- Find the position of n-th -1 delimiter IF n > 1 THEN v_start := INSTR(string, delimiter, 1, n - 1); -- Delimiter not found IF v_start = 0 THEN RETURN NULL; END IF; v_start := v_start + LENGTH(delimiter); END IF; -- Find the position of n-th delimiter v_end := INSTR(string, delimiter, v_start, 1); -- If not found return until the end of string IF v_end = 0 THEN RETURN SUBSTR(string, v_start); END IF; RETURN SUBSTR(string, v_start, v_end - v_start); END; CREATE OR REPLACE FUNCTION ip2int(text IN VARCHAR2) RETURN NUMBER AS nRetval NUMBER; BEGIN SELECT TO_NUMBER(split_part(text,'.',1)) * 16777216 + TO_NUMBER(split_part(text,'.',2)) * 65536 + TO_NUMBER(split_part(text,'.',3)) * 256 + TO_NUMBER(split_part(text,'.',4)) INTO nRetval FROM DUAL; RETURN nRetval; END IP2INT; ```
Function to convert IP address in Integer using sql which will run on any database
[ "", "sql", "oracle", "postgresql", "" ]
I have very weird behavior. If my query have, ``` ,ROW_NUMBER() OVER (ORDER BY CDF.Id) AS [ROW_Number] ``` Then it will take 1 to 2 seconds. If I have, ``` ,ROW_NUMBER() OVER (ORDER BY CASE '' WHEN '' THEN CDF.Id END) AS [ROW_Number] ``` Then it will take 1 to 2 seconds. But If I have a variable with empty value, ``` DECLARE @SortExpression varchar(50)='' ,ROW_NUMBER() OVER (ORDER BY CASE @SortExpression WHEN '' THEN CDF.Id END) AS [ROW_Number] ``` Then it will take 12 to 16 seconds. In my real query I have some CASE statements CASE WHEN statements in ORDER BY clause. Here is my real query, ``` ,ROW_NUMBER() OVER ( ORDER BY CASE WHEN @SortExpression = 'MerchantName' THEN M.Name END ASC, CASE WHEN @SortExpression = '-MerchantName' THEN M.Name END DESC, CASE WHEN @SortExpression = 'Id' THEN CD.Id END ASC, CASE WHEN @SortExpression = '-Id' THEN CD.Id END DESC, CASE WHEN @SortExpression = 'MerchantProductId' THEN CD.MerchantProductId END ASC, CASE WHEN @SortExpression = '-MerchantProductId' THEN CD.MerchantProductId END DESC, CASE WHEN @SortExpression = 'Sku' THEN CD.Sku END ASC, CASE WHEN @SortExpression = '-Sku' THEN CD.Sku END DESC, CASE WHEN @SortExpression = 'ModelNumber' THEN CD.ModelNumber END ASC, CASE WHEN @SortExpression = '-ModelNumber' THEN CD.ModelNumber END DESC, CASE WHEN @SortExpression = 'Offer' THEN CD.Offer END ASC, CASE WHEN @SortExpression = '-Offer' THEN CD.Offer END DESC, CASE WHEN @SortExpression = 'Price' THEN CD.Price END ASC, CASE WHEN @SortExpression = '-Price' THEN CD.Price END DESC, CASE WHEN @SortExpression = 'NewPrice' THEN CD.NewPrice END ASC, CASE WHEN @SortExpression = '-NewPrice' THEN CD.NewPrice END DESC, CASE WHEN @SortExpression = 'InventoryControlType' THEN CD.InventoryControlType END ASC, CASE WHEN @SortExpression = '-InventoryControlType' THEN CD.InventoryControlType END DESC, CASE WHEN @SortExpression = 'Inventory' THEN CD.Inventory END ASC, CASE WHEN @SortExpression = '-Inventory' THEN CD.Inventory END DESC, CASE WHEN 
@SortExpression = 'Featured' THEN CD.Featured END ASC, CASE WHEN @SortExpression = '-Featured' THEN CD.Featured END DESC, CASE WHEN @SortExpression = 'Visible' THEN CD.Visible END ASC, CASE WHEN @SortExpression = '-Visible' THEN CD.Visible END DESC, CASE WHEN @SortExpression = 'Field1' THEN CD.Field1 END ASC, CASE WHEN @SortExpression = '-Field1' THEN CD.Field1 END DESC, CASE WHEN @SortExpression = 'Field2' THEN CD.Field2 END ASC, CASE WHEN @SortExpression = '-Field2' THEN CD.Field2 END DESC, CASE WHEN @SortExpression = 'Field3' THEN CD.Field3 END ASC, CASE WHEN @SortExpression = '-Field3' THEN CD.Field3 END DESC, CASE WHEN @SortExpression = 'Field4' THEN CD.Field4 END ASC, CASE WHEN @SortExpression = '-Field4' THEN CD.Field4 END DESC, CASE WHEN @SortExpression = 'OutletCode' THEN CD.OutletCode END ASC, CASE WHEN @SortExpression = '-OutletCode' THEN CD.OutletCode END DESC, CASE WHEN @SortExpression = 'Stock' THEN CD.Stock END ASC, CASE WHEN @SortExpression = '-Stock' THEN CD.Stock END DESC, CASE WHEN @SortExpression = 'Order' THEN CD.[Order] END ASC, CASE WHEN @SortExpression = '-Order' THEN CD.[Order] END DESC, CASE WHEN @SortExpression = 'ErrorDescription' THEN CD.[ErrorDescription] END ASC, CASE WHEN @SortExpression = '-ErrorDescription' THEN CD.[ErrorDescription] END DESC, CASE WHEN @SortExpression = 'CreationDateUtc' THEN CD.[CreationDateUtc] END ASC, CASE WHEN @SortExpression = '-CreationDateUtc' THEN CD.[CreationDateUtc] END DESC, CDF.Id, CD.[Order] ) AS [ROW_Number] ```
Just to address the first part, `CASE '' WHEN '' THEN CDF.Id END` will be optimised away at compile to just `CDF.Id`, so your first two queries are equivalent. At compile time the optimiser knows you want to sort by `CDF.Id` so can generate a plan that utilizes an index on this. **Short Answer** Just add the `OPTION (RECOMPILE)` query hint, but this will only help if the column you are sorting on is indexed. --- **The full answer** The problem with your latter example is that the optimiser will create a plan based on an unknown value for `@SortExpression`, therefore cannot plan to use an appropriate index, since the sort column is unknown. I created a simple test DDL: ``` IF OBJECT_ID(N'dbo.T', 'U') IS NOT NULL DROP TABLE dbo.T; CREATE TABLE dbo.T (A INT, B INT, C INT); INSERT dbo.T (A, B, C) SELECT TOP 100000 A = ABS(CHECKSUM(NEWID())) % 1000, B = ABS(CHECKSUM(NEWID())) % 1000, C = ABS(CHECKSUM(NEWID())) % 1000 FROM sys.all_objects AS a CROSS JOIN sys.all_objects AS b; CREATE INDEX IX_T_A ON dbo.T (A); CREATE INDEX IX_T_B ON dbo.T (B); CREATE INDEX IX_T_C ON dbo.T (C); ``` As a control, I ran: ``` SELECT TOP 100 A, B, C, D = ROW_NUMBER() OVER (ORDER BY A) FROM dbo.T ORDER BY D; ``` This gives a plan as such: ![enter image description here](https://i.stack.imgur.com/gwl3R.png) The key here is that the optimiser knows that rather than sort the whole table, it only needs the first 100 rows from the index `IX_T_A`, which is very cheap compared to sorting the table, since the index is already sorted. This is our optimal plan for a sort on an indexed column. So the aim of the game is to get to this plan while using a variable to define the sort. 
Just to explain further, I have used `TOP` because it is representative of what (I assume) you are trying to achieve which is to filter for a certain set of records for paging: ``` SELECT * FROM ( SELECT A, B, C, D = ROW_NUMBER() OVER (ORDER BY A) FROM dbo.T ) T WHERE D BETWEEN 150 AND 250; ``` ![enter image description here](https://i.stack.imgur.com/8Fbqg.png) This gives exactly the same plan, it just means internally the index seek starts further into the index. For the rest of the tests I will continue with `TOP` as it is shorter. As explained above, if I run this with the variable, the query plan cannot use the index scan on `IX_T_A` since it does not know for sure that `A` will be the sort column, so it just uses a plain old table scan, and has to sort the entire table, rather than being able to just sequentially read from a nonclustered index: ``` DECLARE @Sort VARCHAR(10) = 'A'; SELECT TOP 100 A, B, C, D = ROW_NUMBER() OVER (ORDER BY CASE WHEN @Sort = 'A' THEN A END ASC, CASE WHEN @Sort = '-A' THEN A END DESC, CASE WHEN @Sort = 'B' THEN B END ASC, CASE WHEN @Sort = '-B' THEN B END DESC, CASE WHEN @Sort = 'C' THEN C END ASC, CASE WHEN @Sort = '-C' THEN C END DESC) FROM dbo.T ORDER BY D; ``` Query Plan: ![enter image description here](https://i.stack.imgur.com/bOckm.png) The only way around this that I can see is to force recompilation at run-time, so that the redundant sorts can be optimised away, and the correct index used: ``` DECLARE @Sort VARCHAR(10) = 'A'; SELECT TOP 100 A, B, C, D = ROW_NUMBER() OVER (ORDER BY CASE WHEN @Sort = 'A' THEN A END ASC, CASE WHEN @Sort = '-A' THEN A END DESC, CASE WHEN @Sort = 'B' THEN B END ASC, CASE WHEN @Sort = '-B' THEN B END DESC, CASE WHEN @Sort = 'C' THEN C END ASC, CASE WHEN @Sort = '-C' THEN C END DESC) FROM dbo.T ORDER BY D OPTION (RECOMPILE); ``` Query Plan: ![enter image description here](https://i.stack.imgur.com/J5TAQ.png) As you can see, this has reverted to the plan when the sort column was hard coded. 
You will have an additional cost at compile time, but this should be less than the additional 10s+ you are seeing in run time. If you sort on a column without an index then it does not matter whether you recompile or not, it will use the same plan.
The only option that I could think of would require indexes on *all* the columns. I'm not sure if this is really feasible, but you had all such indexes, then the following might perform well: ``` (case when @SortExpression = 'MerchantName' then row_number() over (order by MerchantName) when @SortExpression = '-MerchantName' then row_number() over (order by MerchantName desc) . . . end) ``` SQL Server is smart enough to use indexes for `row_number()`, when possible. And, I'm pretty sure index usage is at the root of the performance difference. It should be smart enough to use indexes even when `row_number()` is an expression in a `case` statement.
CASE WHEN in Order By taking too much time
[ "", "sql", "sql-server", "sql-server-2012", "" ]
I am trying to write a SQL query but am not sure how to properly do what I want to achieve. Right now I have ``` select id from job where status = 'NE' or status = 'RU' and queue = CONCAT('VA-', varVappId) or queue = CONCAT('CL-', varClientId, '||', 'VA-', varVappId); ``` So basically, if the job has a status of NE or a status of RU and the queue matches either concat string, then it should produce a result.
You should use parentheses to group together clauses. ``` select id from job where (status = 'NE' or status = 'RU') and (queue = CONCAT('VA-', varVappId) or queue = CONCAT('CL-', varClientId, '||', 'VA-', varVappId)); ```
Missing Parenthesis. ``` select id from job where (status = 'NE' or status = 'RU' ) and (queue = CONCAT('VA-', varVappId) or queue = CONCAT('CL-', varClientId, '||', 'VA-', varVappId)); ``` or ``` select id from job where status in( 'NE','RU' ) and (queue = CONCAT('VA-', varVappId) or queue = CONCAT('CL-', varClientId, '||', 'VA-', varVappId)); ```
How can I properly write SQL query
[ "", "sql", "" ]
How can I solve this? I need to eliminate the duplicates from the Cartesian product of a table with itself. I thought of using "connect by". Thanks. ``` create table foo ( num number(2) ); insert into foo values (1); insert into foo values (2); insert into foo values (3); select a.num,b.num from foo a, foo b; NUM NUM --- --- 1 1 1 2 1 3 2 1 * duplicated 2 2 2 3 3 1 * duplicated 3 2 * duplicated 3 3 ```
You can try this: ``` select a.num,b.num from foo a cross join foo b where a.num <= b.num ```
``` select a.num,b.num from foo a, foo b where a.num = b.num ```
oracle sql creating combinations same table without repeat
[ "", "sql", "oracle", "cartesian", "" ]
I am trying to build a SQL query which will give me the date range for the dates with same prices. If there is a break in the prices, I expect to see it in a new line. Even if sometime during the month there are same prices, if there is change in the prices sometime in between I want to see it as two separate rows with the specific date range. Sample Data: ``` Date Price 1-Jan 3.2 2-Jan 3.2 3-Jan 3.2 4-Jan 3.2 5-Jan 3.2 6-Jan 3.2 7-Jan 3.2 8-Jan 3.2 9-Jan 3.5 10-Jan 3.5 11-Jan 3.5 12-Jan 3.5 13-Jan 3.5 14-Jan 4.2 15-Jan 4.2 16-Jan 4.2 17-Jan 3.2 18-Jan 3.2 19-Jan 3.2 20-Jan 3.2 21-Jan 3.2 22-Jan 3 23-Jan 3 24-Jan 3 25-Jan 3 26-Jan 3 27-Jan 3 28-Jan 3 29-Jan 3.5 30-Jan 3.5 31-Jan 3.5 ``` Desired Result : ``` Price Date Range 3.2 1-8 3.5 9-13 4.2 14-16 3.2 17-22 3 22-28 3.5 29-31 ```
### Non-relational Solution I don't think any of other answers are correct. * `GROUP BY` won't work * Using `ROW_NUMBER()` forces the data into a Record Filing System structure, which is physical, and then processes it as physical records. At a massive performance cost. Of course, in order to write such code, it forces you to *think* in terms of RFS instead of thinking in Relational terms. * Using CTEs is the same. Iterating through the data, especially data that does not change. At a slightly different massive cost. * Cursors are definitely the wrong thing for a different set of reasons. (a) Cursors require code, and you have requested a View (b) Cursors abandon the set-processing engine, and revert to row-by-row processing. Again, not required. If a developer on any of my teams uses cursors or temp tables on a Relational Database (ie. not a Record Filing System), I shoot them. ### Relational Solution 1. Your **data** is Relational, logical, the two given **data** columns are all that is necessary. 2. Sure, we have to form a View (derived Relation), to obtain the desired report, but that consists of pure SELECTs, which is quite different to processing (converting it to a **file**, which is physical, and then processing the **file**; or temp tables; or worktables; or CTEs; or ROW\_Number(); etc). 3. Contrary to the lamentations of "theoreticians", who have an agenda, SQL handles Relational data perfectly well. And you data is Relational. Therefore, maintain a Relational mindset, a Relational view of the data, and a set-processing mentality. Every report requirement over a Relational Database can be fulfilled using a single SELECT. There is no need to regress to pre-1970 ISAM File handling methods. 
I will assume the Primary Key (the set of columns that give a Relational row uniqueness) is `Date`, and based on the example data given, the Datatype is `DATE`. Try this: ``` CREATE VIEW MyTable_Base_V -- Foundation View AS SELECT Date, Date_Next, Price FROM ( -- Derived Table: project rows with what we need SELECT Date, [Date_Next] = DATEADD( DD, 1, MT.Date ), Price, [Price_Next] = ( SELECT Price -- NULL if not exists FROM MyTable WHERE Date = DATEADD( DD, 1, MT.Date ) ) FROM MyTable MT ) AS X WHERE Price != Price_Next -- exclude unchanging rows GO CREATE VIEW MyTable_V -- Requested View AS SELECT [Date_From] = ( -- Date of the previous row SELECT MAX( Date_Next ) -- previous row FROM MyTable_Base_V WHERE Date_Next <= MT.Date ), [Date_To] = Date, -- this row Price FROM MyTable_Base_V MT GO SELECT * FROM MyTable_V GO ``` ### Method, Generic Of course this is a method, therefore it is generic, it can be used to determine the `From_` and `To_` of any data range (here, a `Date` range), based on any data change (here, a change in `Price`). Here, your `Dates` are consecutive, so the determination of `Date_Next` is simple: increment the `Date` by 1 day. If the PK is increasing but *not* consecutive (eg. `DateTime` or `TimeStamp` or some other Key), change the Derived Table `X` to: ``` -- Derived Table: project rows with what we need SELECT DateTime, [DateTime_Next] = ( -- first row > this row SELECT TOP 1 DateTime -- NULL if not exists FROM MyTable WHERE DateTime > MT.DateTime ORDER BY DateTime ), Price, [Price_Next] = ( -- first row > this row SELECT TOP 1 Price -- NULL if not exists FROM MyTable WHERE DateTime > MT.DateTime ORDER BY DateTime ) FROM MyTable MT ``` Enjoy. Please feel free to comment, ask questions, etc.
You can do this by adding a grouping column. A neat trick for this is the difference of two sequences of numbers -- when the difference is constant, then the price is the same. ``` select price, min(date), max(date) from (select s.*, (row_number() over (order by date) - row_number() over (partition by price order by date) ) as grp from sample s ) grp group by grp, price; ``` Note: be careful that price is stored as a fixed decimal rather than a floating decimal. Otherwise, values that look the same might not actually be the same.
Date Range for set of same data
[ "", "sql", "sql-server", "ssms", "" ]
I am trying to update a column with the current period for a few months of the year. As such, I use a switch statement with a month variable. However, I get the error: Must declare the scalar variable I have tried the following: ``` DECLARE @DATE_AP dateTime DECLARE @month varchar(max) SET @DATE_AP= DATEADD(year, 0, GETDATE()) SET @month_AP = DATENAME(Month, @DATE_AP) SELECT CAST(@month as VARchar(10)) select @periodsetvar = 'select CASE @month when ''October'' then ''Update tbltimes set Periodyr = 01'' when ''November'' then ''Update tbltimes set Periodyr = 02'' when ''December'' then ''Update tbltimes set Periodyr = 03'' when ''January'' then ''Update tbltimes set Periodyr = 04'' when ''February'' then ''Update tbltimes set Periodyr = 05'' END' exec (@periodsetvar) ``` and: ``` DECLARE @DATE_AP dateTime DECLARE @month varchar(max) SET @DATE_AP= DATEADD(year, 0, GETDATE()) SET @month_AP = DATENAME(Month, @DATE_AP) SELECT CAST(@month as VARchar(10)) select @periodsetvar = 'select CASE'+ ' '+ @month+' when ''October'' then ''Update tbltimes set Periodyr = 01'' when ''November'' then ''Update tbltimes set Periodyr = 02'' when ''December'' then ''Update tbltimes set Periodyr = 03'' when ''January'' then ''Update tbltimes set Periodyr = 04'' when ''February'' then ''Update tbltimes set Periodyr = 05'' END' exec (@periodsetvar) ```
Instead of using Dynamic SQL which is susceptible to SQL Injection use a simple statement like this: ``` DECLARE @DATE_AP dateTime Declare @periodsetvar char(2) SET @DATE_AP= DATEADD(year, 0, GETDATE()) Set @periodsetvar = CASE DATENAME(Month, @DATE_AP) when 'October' then '01' when 'November' then '02' when 'December' then '03' when 'January' then '04' when 'February' then '05' end Update tbltimes set Periodyr = @periodsetvar ``` If you are 100% determined to use Dynamic SQL then you should make sure you do it a bit more safely and parameterised, such as below: ``` DECLARE @DATE_AP dateTime DECLARE @month varchar(10) DECLARE @periodsetvar nvarchar(max) SET @DATE_AP= DATEADD(year, 0, GETDATE()) SET @month = DATENAME(Month, @DATE_AP) select @periodsetvar = N'select CASE @month when ''October'' then ''Update tbltimes set Periodyr = 01'' when ''November'' then ''Update tbltimes set Periodyr = 02'' when ''December'' then ''Update tbltimes set Periodyr = 03'' when ''January'' then ''Update tbltimes set Periodyr = 04'' when ''February'' then ''Update tbltimes set Periodyr = 05'' END' exec sp_executesql @periodsetvar, N'@month varchar(10)', @month ``` sp\_executesql will allow you to declare parameters within dynamic sql making it much safer to execute.
You need to add declarations for all of your variables: ``` DECLARE @DATE_AP dateTime DECLARE @month varchar(max) DECLARE @month_AP varchar(max) DECLARE @periodsetvar varchar(max) SET @DATE_AP= DATEADD(year, 0, GETDATE()) SET @month_AP = DATENAME(Month, @DATE_AP) SELECT CAST(@month as VARchar(10)) select @periodsetvar = 'select CASE'+ ' '+ @month+' when ''October'' then ''Update tbltimes set Periodyr = 01'' when ''November'' then ''Update tbltimes set Periodyr = 02'' when ''December'' then ''Update tbltimes set Periodyr = 03'' when ''January'' then ''Update tbltimes set Periodyr = 04'' when ''February'' then ''Update tbltimes set Periodyr = 05'' END' exec (@periodsetvar) ```
Must declare the scalar variable date in case statement
[ "", "sql", "sql-server", "sql-server-2008", "stored-procedures", "" ]
Consider the below tables Table 1: ``` id instance ttime val 1 -1 29-5-2915 08:17:29 2000 2 -1 29-5-2915 09:17:29 2938 3 -1 29-5-2915 10:17:29 2837 4 inst1 29-5-2915 11:17:29 1233 5 inst2 29-5-2915 12:17:29 2837 ``` Table 2: ``` id instance ttime val 1 -1 29-5-2915 13:17:29 10 2 -1 29-5-2915 14:17:29 29 3 -1 29-5-2915 15:17:29 283 4 inst1 29-5-2915 16:17:29 123 5 inst2 29-5-2915 17:17:29 28 ``` I want the total count of ids between 29-5-2915 08:17:29 and 29-5-2915 17:17:29. How do I write the join query to get my expected output?
Maybe something like this: ``` SELECT count(*) FROM ( SELECT id FROM Table1 WHERE ttime BETWEEN '29-5-2915 08:17:29' AND '29-5-2915 17:17:29' UNION ALL SELECT id FROM Table2 WHERE ttime BETWEEN '29-5-2915 08:17:29' AND '29-5-2915 17:17:29' ) AS t ``` Or if you want a distinct count: ``` SELECT count(*) FROM ( SELECT id FROM Table1 WHERE ttime BETWEEN '29-5-2915 08:17:29' AND '29-5-2915 17:17:29' UNION SELECT id FROM Table2 WHERE ttime BETWEEN '29-5-2915 08:17:29' AND '29-5-2915 17:17:29' ) AS t ```
To get the count for one table you could do: ``` SELECT count(*) FROM Table1 WHERE ttime > '29-5-2915 08:17:29' AND ttime < '29-5-2915 17:17:29' ``` You could do that for both table and sum it up: ``` SELECT sum(count) FROM ( SELECT count(*) FROM Table1 WHERE ttime > '29-5-2915 08:17:29' AND ttime < '29-5-2915 17:17:29' UNION SELECT count(*) FROM Table2 WHERE ttime > '29-5-2915 08:17:29' AND ttime < '29-5-2915 17:17:29' ) ```
SQL Query: Unable to get the count of values between a time stamp from multiple tables
[ "", "sql", "postgresql", "" ]
I have an SQL query like so: ``` UPDATE ScheduleTasks SET AssignedResource='360 CLEAN' WHERE (ScheduleID = (SELECT ScheduleID FROM Schedule WHERE Job_No = 'ABC123')) AND Description = 'Filing' ``` my issue is that I have another table called `Resource` with a `Resource_ID` and `Resource_Name` what I am trying to do is update the `scheduleTasks` table with the `Resource_ID` and not the `Resource_Name` which is what `360 CLEAN` is, its `Resource_Name` not `Resource_ID` my question is how can I get the `Resource_ID` From the `Resource_Name`
Try using a `sub-query`: ``` UPDATE ScheduleTasks SET AssignedResource = (SELECT TOP 1 Resource_ID FROM Resource WHERE Resource_Name = '360 CLEAN') WHERE ScheduleID = (SELECT ScheduleID FROM Schedule WHERE Job_No = 'ABC123') AND Description = 'Filing' ```
You can do this by using an `INNER JOIN` on the `UPDATE` statement like so: ``` UPDATE st SET AssignedResource = r.Resource_ID FROM ScheduleTasks st -- change join conditions to your preference INNER JOIN Resource r on r.Resource_Name = '360 CLEAN' WHERE st.Description = 'Filing' AND EXISTS ( SELECT s.ScheduleID FROM Schedule s WHERE s.Job_No = 'ABC123' AND st.ScheduleID = s.ScheduleID ) ``` Also updated your query for optimizations on checking if the `ScheduleID` from `ScheduleTasks` exists in the `Schedule` table.
SQL Update SET sub query?
[ "", "sql", "sql-server", "" ]
Is this statement guaranteed to always return the rows in the order that they are mentioned? In other words, will the elements in the VALUES clause always be evaluated in order from left to right? ``` SELECT ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) AS orderCol, id FROM (VALUES (103), (7), (6), (16), (50), (41), (80), (67), (64)) x(id) ORDER BY orderCol ``` Expected order: ``` 103 7 6 16 50 41 80 67 64 ```
No. There's no such guarantee. SQL is a set-based language, where `{1,3}` and `{3,1}` are the same thing. The only place you get guarantees on ordering is when you use an `ORDER BY` clause that unambiguously specifies an ordering of the values you're working with. Since your `ORDER BY` depends on a `ROW_NUMBER()` expression that, in turn, uses an `ORDER BY` which is *completely* ambiguous, no ordering guarantees exist.
Please see the below as possible answer: > There is no guarantee that the rows returned by a query using > ROW\_NUMBER() will be ordered exactly the same with each execution > unless the following conditions are true. > > 1. Values of the partitioned column are unique. > 2. Values of the ORDER BY columns are unique. > 3. Combinations of values of the partition column and ORDER BY columns are unique. > ROW\_NUMBER() is nondeterministic. For more information, > see Deterministic and Nondeterministic Functions. [ROW\_NUMBER (Transact-SQL)](https://msdn.microsoft.com/en-us/library/ms186734.aspx)
Is the output order of VALUES clause guaranteed to be the same as the order the elements are mentioned
[ "", "sql", "sql-server", "" ]
I have got a table called Aliases in my MySQL database. Looks like this: ``` ------------------ | Id | Alias | ------------------ |1 | 'TabX' | ------------------ |2 | 'TabY' | ... | | | ------------------ ``` And I need to insert to those tables like this: ``` INSERT INTO (SELECT Alias FROM Aliases WHERE id=1) (somevalue) VALUES (value); ``` This doesn't work. Please help.
You can reach it with prepared statements: ``` SET @alias = (SELECT Alias FROM Aliases WHERE id = 1); SET @sql = CONCAT('INSERT INTO ', @alias, ' (somevalue) VALUES (value)'); PREPARE stmt1 FROM @sql; EXECUTE stmt1; DEALLOCATE PREPARE stmt1; ```
You need to make `PROCEDURE` for that. And use prepared statements for this purpose. You need procedure like that: ``` DELIMITER $$ DROP PROCEDURE IF EXISTS `DB`.`INSERT_VAR`$$ CREATE DEFINER=`user`@`host` PROCEDURE INSERT_VAR(IN tableName VARCHAR(200)) BEGIN SET @insert_query=CONCAT("INSERT INTO ", tableName, " (id) SELECT id FROM test"); PREPARE stmtInsert FROM @insertTab; EXECUTE stmtInsert; END$$ DELIMITER ; ``` You need to modify this procedure to fit your needs.
Inserting to selected table name
[ "", "mysql", "sql", "" ]
I am using Oracle 11g and I have normal timestamps (starttime) which produce an output as follows: ``` 23.09.14 05:15:00,000000000 ``` Now I want an output like ``` 23.09.14 05 ``` Also ok would be: ``` 23.09.14 05:00:00,000000000 ``` but when I use something like `round(starttime, 'HH')` or `trunc(starttime, 'HH24')` I always get ``` 23.09.14 ``` with no hours at all. Looking around here at Stack Overflow I found ``` substr(TO_CHAR(starttime),0,LENGTH(TO_CHAR(starttime))-13) ``` which produces the correct output as char, but when I want to sort dates it won't work because it sorts alphabetically (so for example, 1.3., 1.4., 1.5. ... instead of 1.3., 2.3., 3.3., ...). Any idea how I can get a timestamp which is rounded to the full hour? I will have to use the statement in a group by clause. The complete statement would look like: ``` select round(starttime, 'HH24'), sum(counter) from wmsconsolidationorderdwct group by round(starttime, 'HH24') order by round(starttime, 'HH24') desc; ```
This will truncate to the hour: ``` trunc(SYSTIMESTAMP) + extract(hour from SYSTIMESTAMP)/24 ``` Edit: I just tried it and ``` SELECT TRUNC(SYSTIMESTAMP ,'HH24') FROM DUAL; ``` returns the correct result. [Fiddle](http://sqlfiddle.com/#!4/9eecb7/1298)
* If your purpose is to display, then use **TO\_CHAR** with desired **format model**. For example, ``` SQL> SELECT TO_CHAR(SYSTIMESTAMP, 'DD.MM.YY HH24') FROM dual; TO_CHAR(SYS ----------- 28.05.15 15 SQL> ``` * If your purpose is to do **date arithmetic** then you need to leave the data type as date. For example, ``` SQL> alter session set nls_date_format='DD-MM-YYYY HH24:MI:SS' 2 / Session altered. SQL> SELECT TRUNC(SYSTIMESTAMP ,'HH24') FROM DUAL 2 / TRUNC(SYSTIMESTAMP, ------------------- 28-05-2015 15:00:00 SQL> ```
How to use round and trunc to get dd:mm:yy:hh
[ "", "sql", "oracle", "oracle11g", "" ]
How do I trim spaces inside all tables' column values in a particular database? It has to `LTrim` & `RTrim` spaces inside row column values from all tables in the database. How can I do that?
Try the one below: ``` DECLARE @SQL AS VarChar(MAX) SET @SQL = '' SELECT @SQL = @SQL + 'UPDATE T SET T.'+IC.COLUMN_NAME + ' = LTRIM(RTRIM(' + IC.COLUMN_NAME+')) FROM '+ IT.TABLE_SCHEMA + '.[' + IT.TABLE_NAME + '] AS T ;' + CHAR(13) FROM INFORMATION_SCHEMA.TABLES IT JOIN INFORMATION_SCHEMA.COLUMNS IC ON IT.TABLE_NAME = IC.TABLE_NAME AND IT.TABLE_SCHEMA = IC.TABLE_SCHEMA AND IT.TABLE_TYPE = 'BASE TABLE' and IC.DATA_TYPE in( 'varchar','char','nvarchar','nchar') Exec (@SQL) ```
This Solved my issue : ``` DECLARE @SQL AS VarChar(MAX) SET @SQL = '' SELECT @SQL += 'UPDATE ' + IT.TABLE_SCHEMA + '.' + QUOTENAME(IT.TABLE_NAME) + + 'SET '+IC.COLUMN_NAME + ' = LTRIM(RTRIM(' + IC.COLUMN_NAME+')) ' + CHAR(13) FROM INFORMATION_SCHEMA.TABLES IT JOIN INFORMATION_SCHEMA.COLUMNS IC ON IT.TABLE_NAME = IC.TABLE_NAME AND IT.TABLE_SCHEMA = IC.TABLE_SCHEMA AND IC.DATA_TYPE in ('nvarchar','char','varchar','nchar') EXEC (@SQL) ``` *Reference :* Deepshikha's answer.. Thanks Deepshikha
Trim Spaces inside whole Database
[ "", "sql", "database", "sql-server-2012", "trim", "" ]
I have a column that stores a date as char in the format 'YYYYMMDD'. Now I want to convert it to a real date. I tried ``` select cast (DATEFIELD as DATE) as MyDate ``` But it only returns the old `YYYYMMDD` format labeled as 'DATE'. What am I doing wrong?
`SELECT TIMESTAMP_FORMAT("DATEFIELD",'YYYYMMDD') as "MyDate"`
This is how you convert a Normal date to an iSeries (AS/400) Date Format: TO\_NUMBER (TO\_CHAR (SYSDATE, 'YYYYMMDD')) - 19000000 If you need the other way from iSeries to Normal date Format you use this: to\_date(to\_char(19000000+1230721),'YYYYMMDD')
DB2 Convert from YYYYMMDD to Date
[ "", "sql", "date", "casting", "db2", "" ]
I am trying to create a table **using an OPENROWSET selection**. The idea is to only create the headers without content. So my idea was to use `SELECT TOP 0 FROM some_file.txt` as follows: ``` select * into table_name FROM OPENROWSET( 'Microsoft.ACE.OLEDB.12.0', 'Text; HDR=YES; Database=C:\folder_name', 'SELECT TOP 0 * FROM some_file.txt') ``` Unfortunately I have the following error triggered: ``` OLE DB provider "Microsoft.ACE.OLEDB.12.0" for linked server "(null)" returned message "The SELECT statement includes a reserved word or an argument name that is misspelled or missing, or the punctuation is incorrect.". ``` If I put `SELECT TOP 1 FROM some_file.txt` it works, but as expected the first row is added. Important: Do not propose `TRUNCATE TABLE table_name` as a solution. Thanks
What about ...... ``` SELECT TOP 0 * INTO table_name FROM ( select TOP 1 * FROM OPENROWSET( 'Microsoft.ACE.OLEDB.12.0', 'Text; HDR=YES; Database=C:\folder_name', 'SELECT TOP 1 * FROM some_file.txt') )A ```
The usual technique is to add `where 0 = 1` to the query, `select * into table_name FROM OPENROWSET( 'Microsoft.ACE.OLEDB.12.0', 'Text; HDR=YES; Database=C:\folder_name', 'SELECT * FROM some_file.txt WHERE 0 = 1')` or, more generally, `select * into table_name FROM OPENROWSET( 'Microsoft.ACE.OLEDB.12.0', 'Text; HDR=YES; Database=C:\folder_name', 'SELECT * FROM some_file.txt') as A WHERE 0 = 1` but I suspect the latter query will execute the whole OPENROWSET before applying the WHERE clause. The idea here is that SELECT...INTO is atomic and must create the target table if syntactically correct. Because no rows meet the WHERE criteria, none are inserted into it.
SQL Server - How to create table from openrowset with headers only?
[ "", "sql", "sql-server", "" ]
I want to tweak the following SQL so the query will output the type of card (visa, mastercard, etc) as PaymentMethod instead of Credit Card. ``` CASE WHEN pm.PaymentType = 1 THEN 'Cash' WHEN pm.PaymentType = 2 THEN 'Check' WHEN pm.PaymentType = 3 THEN 'Credit Card' WHEN pm.PaymentType = 4 THEN 'EFT' WHEN pm.PaymentType = 5 THEN 'Money Order' WHEN pm.PaymentType = 6 THEN 'Conveyance' ELSE 'Unknown' END AS PaymentMethod, ``` Can someone please point me in the right direction. I tried adding a second conditional to my case statement, but it fails out to 'Unknown' for all payment types of 3. ``` CASE WHEN pm.PaymentType = 1 THEN 'Cash' WHEN pm.PaymentType = 2 THEN 'Check' WHEN pm.PaymentType = 3 and pm.CardTypeMId = 1 THEN 'American Express' WHEN pm.PaymentType = 3 and pm.CardTypeMId = 2 THEN 'Discover' WHEN pm.PaymentType = 3 and pm.CardTypeMId = 3 THEN 'Mastercard' WHEN pm.PaymentType = 3 and pm.CardTypeMId = 4 THEN 'Visa' WHEN pm.PaymentType = 4 THEN 'EFT' WHEN pm.PaymentType = 5 THEN 'Money Order' WHEN pm.PaymentType = 6 THEN 'Conveyance' ELSE 'Unknown' END AS PaymentMethod, ``` Thank you in advance for your assistance.
You could simplify this quite a bit. ``` CASE pm.PaymentType WHEN 1 THEN 'Cash' WHEN 2 THEN 'Check' WHEN 3 THEN CASE pm.CardTypeMId WHEN 1 THEN 'American Express' WHEN 2 THEN 'Discover' WHEN 3 THEN 'Mastercard' WHEN 4 THEN 'Visa' END WHEN 4 THEN 'EFT' WHEN 5 THEN 'Money Order' WHEN 6 THEN 'Conveyance' ELSE 'Unknown' END AS PaymentMethod ```
You can use nested CASE ``` WHEN pm.PaymentType = 3 THEN Case WHEN pm.CardTypeMId = 1 THEN 'American Express' WHEN pm.CardTypeMId = 2 THEN 'Discover' WHEN pm.CardTypeMId = 3 THEN 'Mastercard' WHEN pm.CardTypeMId = 4 THEN 'Visa' END as CreditCard ```
Case Statements with conditionals in SQL server
[ "", "sql", "sql-server", "" ]
Assume I have two tables(below). What is the best way to write a select to get the 2 employees with the highest salary from each department? Assume that there are potentially a lot of departments. output: ``` employee_name | salary | department_id John | 65000 | 1 Sally | 60000 | 1 Lucy | 40000 | 2 James | 80000 | 3 Harry | 65000 | 3 ``` Tables: **Employee** ``` employee_name | salary | department_id John | 65000 | 1 Sally | 60000 | 1 Connor | 55000 | 1 Judy | 55000 | 1 Lucy | 40000 | 2 James | 80000 | 3 Harry | 65000 | 3 Penny | 56000 | 3 ``` **Department** ``` department_id | name 1 | Sales 2 | Marketing 3 | IT ```
The best for such type of selects is `OUTER APPLY`. It is designed for this type of work: ``` select d.department_id, oa.employee_name, oa.salary from Departments d outer apply(select top 2 e.employee_name, e.salary from Employee e where d.department_id = e.department_id order by e.salary desc) oa ``` If you don't want to get departments where there are no employees then just change `outer apply` to `cross apply`
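`OUTER APPLY`/`CROSS APPLY` are SQL Server features. As a portable sanity check of the same "top 2 salaries per department" result — using a correlated count rather than APPLY itself — here is the question's sample data run through SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Employee (employee_name TEXT, salary INTEGER, department_id INTEGER);
    INSERT INTO Employee VALUES
        ('John', 65000, 1), ('Sally', 60000, 1), ('Connor', 55000, 1), ('Judy', 55000, 1),
        ('Lucy', 40000, 2),
        ('James', 80000, 3), ('Harry', 65000, 3), ('Penny', 56000, 3);
""")
rows = conn.execute("""
    SELECT employee_name, salary, department_id
    FROM Employee e
    WHERE (SELECT COUNT(*) FROM Employee e2
           WHERE e2.department_id = e.department_id
             AND e2.salary > e.salary) < 2   -- at most one colleague earns more
    ORDER BY department_id, salary DESC
""").fetchall()
print(rows)
```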
You can use `ROW_NUMBER()` like this. ``` ;WITH CTE as ( SELECT employee_name,Salary,department_id, ROW_NUMBER()OVER(PARTITION BY department_id ORDER BY Salary DESC) rn FROM Employee ) SELECT employee_name,Salary,d.department_id,d.name FROM CTE c INNER JOIN Departments d ON d.department_id = c.department_id WHERE rn <= 2 ```
How to return the X greatest rows for each row in a table
[ "", "sql", "sql-server", "" ]
I have a need to retrieve a hierarchy of managers and the column which stores the manager names for a given person are formatted like this `Smith, Mr. William (Bill)`. I want this output to simply be `William Smith`. So far I have put this together: ``` SELECT DISTINCT RIGHT(u.manager, LEN(u.manager)-(1+CHARINDEX(', ', u.manager))) + ' ' + LEFT(u.manager, CHARINDEX(', ', u.manager) - 1) as ManagerName FROM Users u ``` The current result from that query using my example above is `Mr. William (Bill) Smith`. This CHARINDEX and SUBSTRING stuff always gives me a lot of trouble so I am not really sure what the easiest way to do this is. This is also a one-off, so I am not sure a function would be useful here.
[DEMO](http://sqlfiddle.com/#!6/fe45d/29) ``` SELECT SUBSTRING(manager,0,CHARINDEX(',', manager)) as surname, SUBSTRING(manager,CHARINDEX('. ', manager)+2, LEN(manager)-CHARINDEX(' (', manager)+1) as name, CONCAT(SUBSTRING(manager,CHARINDEX('. ', manager)+2, LEN(manager)-CHARINDEX(' (', manager)+1), ' ', SUBSTRING(manager,0,CHARINDEX(',', manager))) as 'name surname' FROM Users ``` Result: ``` +-------------+-----------+--------------+ | surname | name | name surname | +-------------+-----------+--------------+ Smith William William Smith ```
I took your query and modified a little bit: ``` SELECT ---this is the tricky part: inner part finds the first instance of '(' parenthesis --and substract it from the length of the first name and get only the left part of the first name by subtracting it CONCAT ( LEFT(t.FirstName, LEN(t.FirstName) - (LEN(t.FirstName) - CHARINDEX('(', t.FirstName) + 1)) ,t.LastName ) FROM ( --basically separating your above syntax to two columns SELECT RIGHT('Smith, Mr. William (Bill)', LEN('Smith, Mr. William (Bill)') - CHARINDEX('.', 'Smith, Mr. William (Bill)') - 1) AS FirstName ,LEFT('Smith, Mr. William (Bill)', CHARINDEX(', ', 'Smith, Mr. William (Bill)') - 1) AS LastName ) t ``` Here is the query that should work with your table name and column: ``` SELECT ---Use case when statement to determine if there are any instances of '(' in the first name CONCAT ( CASE WHEN CHARINDEX('(', t.FirstName) > 0 THEN LEFT(t.FirstName, LEN(t.FirstName) - (LEN(t.FirstName) - CHARINDEX('(', t.FirstName) + 1)) ELSE t.FirstName + ' ' END ,t.LastName ) FROM ( SELECT RIGHT(u.manager, LEN(u.manager) - CHARINDEX('.', u.manager) - 1) AS FirstName ,LEFT(u.manager, CHARINDEX(', ', u.manager) - 1) AS LastName from Users u ) t ```
How to format the order of first/last name and remove prefix and nickname
[ "", "sql", "sql-server", "" ]
I'm running around in circles and have been doing so in the last hours. I'm doing a select in a table where have all stock of each store and I'm filtering by product id. What i need is: have a list of all stores even if I have no data for that store, but the select only returns 4 stores or less. Here's an example: That's the query: ``` select store_id, product_id, start_date, quantity from stock where product_id = 407214 ``` That's the result: ``` store_id | product_id | start_date | quantity | 2 | 407214 | 2015-05-26 08:32:53 | 10 | 3 | 407214 | 2015-03-16 12:10:00 | 25 | 4 | 407214 | 2015-01-06 11:45:15 | 16 | 7 | 407214 | 2015-05-14 00:00:00 | 8 | ``` And that's what I want: ``` store_id | product_id | start_date | quantity | 1 | NULL | NULL | NULL | 2 | 407214 | 2015-05-26 08:32:53 | 10 | 3 | 407214 | 2015-03-16 12:10:00 | 25 | 4 | 407214 | 2015-01-06 11:45:15 | 16 | 5 | NULL | NULL | NULL | 6 | NULL | NULL | NULL | 7 | 407214 | 2015-05-14 00:00:00 | 8 | ``` I would really need a help, it's driving me crazy!!
The solution depends on your DB structure. If the `stock` table only contains available products, you need a `left join` between the `stores` and `stock` tables: ``` select s.id as store_id, st.product_id, st.start_date, st.quantity from stores s left join stock st on s.Id = st.store_id and st.product_id = 407214 ```
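A runnable illustration of why the product filter belongs in the `ON` clause rather than the `WHERE` clause, using SQLite through Python (schema trimmed to the columns needed; data is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE stores (id INTEGER);
    INSERT INTO stores VALUES (1), (2), (3);
    CREATE TABLE stock (store_id INTEGER, product_id INTEGER, quantity INTEGER);
    INSERT INTO stock VALUES (2, 407214, 10), (3, 407214, 25), (3, 999, 7);
""")
rows = conn.execute("""
    SELECT s.id, st.product_id, st.quantity
    FROM stores s
    LEFT JOIN stock st
           ON s.id = st.store_id
          AND st.product_id = 407214   -- filtering here keeps store 1 in the result
    ORDER BY s.id
""").fetchall()
print(rows)
```

Moving `st.product_id = 407214` into a WHERE clause would discard the NULL-extended row for store 1, reproducing the problem in the question.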
Add `OR product_id IS NULL` to the `WHERE` clause
SQL Server query doesn't return all stores
[ "", "sql", "sql-server", "" ]
I have a "raw" table that looks like this (among other many fields): ``` team_id | team_name ---------+------------------------- 1 | Team1 1 | Team1 2 | Team2 2 | Team2 ``` I want to extract the team names and their id codes and create another table for them, so I created: ``` CREATE TABLE teams ( team_id integer NOT NULL, team_name varchar(50) NOT NULL, CONSTRAINT team_pkey PRIMARY KEY (team_id) ); ``` And I am planning to copy the data from the old table to the recently created one like this: ``` INSERT INTO teams(team_id,team_name) SELECT team_id,team_name FROM rawtable GROUP BY team_id, team_name; ``` At first I wasn't adding the `GROUP BY` part, and I was getting a message: ``` ERROR: duplicate key value violates unique constraint "team_pkey" ``` I added the `GROUP BY` so it doesn't try to insert more than one row for the same team, but the problem still persist and I keep getting the same message. I don't understand what is causing it. It looks like I am inserting single non duplicate rows into the table. What's the best way to fix this?
If two different teams with the same id are in `raw_table` e.g. `(1, 'foo')` and `(1, 'bar')` the `group by` will still return both, because those two *are* different. If you just want to pick *one* of the rows for duplicate values of `team_id` then you should use something like this: ``` insert into teams (team_id,team_name) select distinct on (team_id) team_id, team_name from rawtable order by team_id; ``` The Postgres specific `distinct on` operator will make sure that only distinct values for `team_id` are returned.
My best guess is that you have the same team\_id for more than one team\_name at least somewhere in your table. Try adding `Having count(*)=1` to your select statement.
Cannot insert into field with primary key constraint in PostgreSQL
[ "", "sql", "postgresql", "constraints", "" ]
I have row, example : `1,2,3,5,9,7` -> not in (3,7) (This character need to delete -> result select `1,2,5,9`. How do it ? For example : ``` drop table test.table_4; create table test.table_4 ( id integer, list_id text ); insert into test.table_4 values(1,'1,2,3,5,9,7'); insert into test.table_4 values(2,'1,2,3,5'); insert into test.table_4 values(3,'7,9'); insert into test.table_4 values(5,'1,2'); insert into test.table_4 values(9,'1'); insert into test.table_4 values(7,'5,7,9'); ``` query : ``` select list_id from test.table_4 where id not in (3,7) --return 4 row id list_id 1. 1 '1,2,3,5,9,7' 2. 2 '1,2,3,5' 3. 5 '1,2' 4. 9 '1' ``` How to remove 3 and 7 in row 1 and 2 ? ``` id 1. 1 '1,2,5,9' 2. 2 '1,2,5' 3. 5 '1,2' 4. 9 '1' ```
The following should deal with 3 or 7 at the start of the string, at the end of the string, or anywhere in the middle. It also ensures that the 3 in `31` and the 7 in `17` don't get replaced: ``` select list_id, regexp_replace(list_id, '(^[37],|,[37](,)|,[37]$)', '\2', 'g') from test.table_4 where id not in (3,7) ``` Explanation: `^[37],` matches a 3 or 7 followed by a comma at the start of the string. This should be replaced with nothing. `,[37](,)` matches a ,3, or ,7, in the middle of the string. This needs to be replaced with a single comma, which is captured by the brackets around it. `,[37]$` matches a 3 or 7 preceded by a comma at the end of the string. This should be replaced with nothing. `\2` is used to replace the string - this is `,` for the second case above, and empty for cases 1 and 3.
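The same pattern can be exercised with Python's `re` module (Python 3.5+, where an unmatched group in the replacement template becomes an empty string), checked against the rows from the question:

```python
import re

def strip_ids(list_id, pattern=r'(^[37],|,[37](,)|,[37]$)'):
    # \2 re-inserts the captured comma for matches in the middle of the list;
    # for start/end matches group 2 is unmatched and substitutes as ''
    return re.sub(pattern, r'\2', list_id)

results = [strip_ids(s) for s in ('1,2,3,5,9,7', '1,2,3,5', '1,2', '1')]
print(results)
```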
You could use the following statements to update all of the records. In the below example the first statement will remove any `,7` found. Then you execute the next statement to find any sting that has the `7` in the front of the string. ``` UPDATE test.table_4 SET list_id = REPLACE(list_id, ',7', '') UPDATE test.table_4 SET list_id = REPLACE(list_id, '7', '') ``` If you also want to remove all occurrences of `3` then execute the following statements: ``` UPDATE test.table_4 SET list_id = REPLACE(list_id, ',3', '') UPDATE test.table_4 SET list_id = REPLACE(list_id, '3', '') ``` However, it is a bad design to store values that you need to search agianst, work with, and etc in a string.
How to delete character from list
[ "", "sql", "regex", "string", "postgresql", "substring", "" ]
I have a SQL query like this: ``` SELECT t1.name, MAX(t2.value) FROM t2 JOIN t1 ON t1.id = t2.t1_id WHERE t2.t1_id = 1 AND t2.text_id = 16; ``` However, when t2 selection is empty, it returns a row containing NULL values (because of MAX function returning NULL when called on an empty set). I would like it to return an empty set instead. How can I achieve it?
Try this in SQL Server: ``` with cte as ( SELECT t1.name, MAX(t2.value) a FROM t2 JOIN t1 ON t1.id = t2.t1_id WHERE t2.t1_id = 1 AND t2.text_id = 16 ) select * from cte where a is not null ``` or this in MySQL: ``` select p.* from ( SELECT t1.name, MAX(t2.value) a FROM t2 JOIN t1 ON t1.id = t2.t1_id WHERE t2.t1_id = 1 AND t2.text_id = 16 ) p where p.a is not null ```
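To see the difference concretely, here is a small SQLite reproduction (schema trimmed to the columns used): the bare aggregate returns a single all-NULL row, while wrapping it and filtering on the aggregate returns an empty set.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t1 (id INTEGER, name TEXT);
    CREATE TABLE t2 (t1_id INTEGER, text_id INTEGER, value INTEGER);
    INSERT INTO t1 VALUES (1, 'widget');
    -- no matching rows in t2, so MAX() alone yields one NULL row
""")
naive = conn.execute("""
    SELECT t1.name, MAX(t2.value)
    FROM t2 JOIN t1 ON t1.id = t2.t1_id
    WHERE t2.t1_id = 1 AND t2.text_id = 16
""").fetchall()
filtered = conn.execute("""
    SELECT name, max_value FROM (
        SELECT t1.name, MAX(t2.value) AS max_value
        FROM t2 JOIN t1 ON t1.id = t2.t1_id
        WHERE t2.t1_id = 1 AND t2.text_id = 16
    ) WHERE max_value IS NOT NULL
""").fetchall()
print(naive, filtered)
```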
Having clause fits perfectly here: ``` SELECT t1.name, MAX(t2.value) FROM t2 JOIN t1 ON t1.id = t2.t1_id WHERE t2.t1_id = 1 AND t2.text_id = 16 -- GROUP BY something? HAVING MAX(t2.value) IS NOT NULL ```
How do I make aggregate query return empty set instead of NULL row?
[ "", "mysql", "sql", "aggregate", "" ]
I understand that SQL uses three valued logic but I am having trouble understanding how to use this in practice, especially why `TRUE || NULL = True` and `FALSE && NULL = False` instead of evaluating to `null`. Here are the three valued truth tables that apply to SQL Server: ![Tri truth tables](https://i.stack.imgur.com/yRBgu.png) I found a couple explanations of three valued logic online but I cannot find any real code examples of this in use. Can someone show me a code example using three valued logic to help me understand this a little better?
An example of `TRUE || NULL = True` would be ``` declare @x as int = null; if 1=1 or @x/1=1 print 'true' ``` An example of `FALSE && NULL = False` would be ``` declare @x as int = null; if not(1=2 and @x/1=1) print 'false' ```
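The same truth-table entries can be read straight off almost any SQL engine; for instance, in SQLite via Python (SQLite's `AND`/`OR` follow the same three-valued rules):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
true_or_null, false_and_null, true_and_null = conn.execute(
    "SELECT (1 = 1) OR NULL, (1 = 2) AND NULL, (1 = 1) AND NULL"
).fetchone()
# TRUE OR NULL -> TRUE (1), FALSE AND NULL -> FALSE (0), TRUE AND NULL -> NULL
print(true_or_null, false_and_null, true_and_null)
```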
`True && NULL` is neither True or False. It's just `NULL`. Whether that will evaluate as True, False, or an Error in a boolean expression depends on what happens on your system when you evaluate `NULL` by itself as a boolean. Sql Server will do everything it can to avoid choosing, but when forced you'll pretty much never see a positive (True) result.
Example of three valued logic in SQL Server
[ "", "sql", "sql-server", "t-sql", "three-valued-logic", "" ]
is input to a bit type 0,1 and NULL ? or is it only 0,1 ? in sql server 2008 R2
It can be either - it depends on if you're "Allowing Nulls" on the column or not. ``` BIT NOT NULL ``` allows for 1 and 0 ``` BIT NULL ``` allows for 1, 0, and null. After looking at the title of the question... I guess that could potentially change things a bit (lolol). if using a bit parameter or variable, then the values can be 0, 1, or null. Sorry I missed the reference to the stored procedure as it was only in the title.
Bit can have values 0,1 and NULL REF: <https://msdn.microsoft.com/nl-be/library/ms177603.aspx>
sql server stored procedure - bit type
[ "", "sql", "sql-server", "stored-procedures", "" ]
I have 3 tables: ``` > HockeyTeam with columns: ID and Name > HockeyGame with columns: ID,Team1ID and Team2ID (both linked to HockeyTeam ID) > GameScores with columns: ID, GameID, Team1Score, Team2Score ``` **HockeyTeam** ``` ID Name 1 Monkeys 2 Beavers 3 Dolphins 4 Snakes ``` **GameScore** ``` ID GameID Team1Score Team2Score 1 1 3 2 2 2 4 0 3 3 2 3 4 4 2 4 ``` **HockeyGame** ``` ID StartTime Team1ID Team2ID 1 2/6/2015 1 2 2 2/13/2015 3 4 3 2/20/2015 2 4 4 2/27/2015 1 3 ``` I need to display total goals for each team. A team can be in both `Team1ID` or `Team2ID`. Here is what I got so far: ``` SELECT ht.Name, SUM(SELECT (gs.Team1Score + gs.Team2Score) FROM GameScores as gs INNER JOIN HockeyGame as hg INNER JOIN HockeyTeam as ht ON ht.ID = hg.Team1ID OR ht.ID = ht.Team2ID ON gs.GameID = hg.ID FROM HockeyTeam as ht ```
Let's take a structure like this below: ``` create table team (id int, name varchar(20)); insert into team values (1, 'H1'), (2, 'H2'), (3, 'H3'); create table game (id int, team1id int, team2id int); insert into game values (11, 1, 2), (12, 1, 3), (13, 2, 3); create table score ( id int, gameid int, team1score int, team2score int ); insert into score values (21, 11, 5, 2), (22, 12, 2, 5), (23, 13, 0, 2); ``` **Display game results (not the answer yet)** ``` -- show readable game results select s.gameid, t1.name as team1, t2.name as team2, team1score, team2score from score s inner join game g on s.gameid = g.id inner join team t1 on g.team1id = t1.id inner join team t2 on g.team2id = t2.id; ``` Data looks like this: ``` gameid team1 team2 team1score team2score 11 H1 H2 5 2 12 H1 H3 2 5 13 H2 H3 0 2 ``` **Let's get the scores now (the answer)** ``` -- show score by team select t.name, sum(score) as goals from team t left join ( -- get score of team1 select t1.id as teamid, sum(team1score) as score from score s inner join game g on s.gameid = g.id inner join team t1 on g.team1id = t1.id group by t1.id union all -- get score of team2 and combine it with results from team1 -- this is because team2 can be team1 for some games select t2.id as teamid, sum(team2score) as score from score s inner join game g on s.gameid = g.id inner join team t2 on g.team2id = t2.id group by t2.id ) t1 on t.id = t1.teamid group by t.name ``` Result will look like this: ``` Name Goals H1 7 H2 2 H3 7 ``` Example: <http://sqlfiddle.com/#!9/aa3cc/15> Although the example is for MySQL (because SQL Server Fiddle was acting up), the SQL statements will remain valid for SQL Server also.
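The UNION ALL approach above can be reproduced end-to-end in SQLite with the same sample data as the fiddle, which makes the expected totals easy to verify:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE team (id INTEGER, name TEXT);
    INSERT INTO team VALUES (1, 'H1'), (2, 'H2'), (3, 'H3');
    CREATE TABLE game (id INTEGER, team1id INTEGER, team2id INTEGER);
    INSERT INTO game VALUES (11, 1, 2), (12, 1, 3), (13, 2, 3);
    CREATE TABLE score (id INTEGER, gameid INTEGER, team1score INTEGER, team2score INTEGER);
    INSERT INTO score VALUES (21, 11, 5, 2), (22, 12, 2, 5), (23, 13, 0, 2);
""")
rows = conn.execute("""
    SELECT t.name, SUM(x.goals) AS goals
    FROM team t
    JOIN (SELECT g.team1id AS teamid, s.team1score AS goals
          FROM score s JOIN game g ON s.gameid = g.id
          UNION ALL                      -- a team can appear on either side
          SELECT g.team2id, s.team2score
          FROM score s JOIN game g ON s.gameid = g.id) x
      ON t.id = x.teamid
    GROUP BY t.name
    ORDER BY t.name
""").fetchall()
print(rows)
```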
You can do this without `unions`: ``` select t.id, t.name, sum(case when oa.team1id = t.id then oa.team1score else 0 end + case when oa.team2id = t.id then oa.team2score else 0 end) from team t outer apply(select g.team1id, g.team2id, s.team1score, s.team2score from game g join score s on g.id = s.gameid where team1id = t.id or team2id = t.id) oa group by t.id, t.name ```
SQL Server: Select and count
[ "", "sql", "sql-server", "" ]
I was wondering if you could help me out I currently have a working query, but I feel it is not as efficient as it could be. Ill go on to explain: I have a car table and a carEvent table. Car table just stores info such as make, model etc of the car. CarEvent table stores events which happened on the car such as car has been crashed or car has been fixed. If no status of "CRASHED" exists on the carEvent table for a given car then it has not been crashed. What my query does is return all cars which have been crashed but not fixed. The way I have wrote it requires two scans of he carEvent table. What im wondering is, is there a more efficient way to do this query? my query is as follows: ``` SELECT * FROM Car c WHERE (select count(ce.id) FROM CarEvent ce where car_id = c.id AND ce.carEventType = 'CRASHED') > 0 AND (select count(ce.id) FROM CarEvent ce where car_id = c.id AND ce.carEventType = 'FIXED') = 0 ``` Any advice is greatly appreciated.
Oh, the infamous `count()` in a subquery. You want to use `exists`, not `count`: ``` SELECT c.* FROM Car c WHERE EXISTS (select 1 FROM CarEvent ce where ce.car_id = c.id AND ce.carEventType = 'CRASHED') AND NOT EXISTS (select 1 FROM CarEvent ce where ce.car_id = c.id AND ce.carEventType = 'FIXED'); ``` For performance, you want an index on `CarEvent(car_id, carEventType)`. Also, be especially sure to use table aliases in correlated subqueries.
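A quick runnable check of the EXISTS / NOT EXISTS rewrite, using SQLite in Python with a few made-up car events (only the crashed-but-never-fixed car survives):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Car (id INTEGER, make TEXT);
    INSERT INTO Car VALUES (1, 'Ford'), (2, 'Audi'), (3, 'Fiat');
    CREATE TABLE CarEvent (car_id INTEGER, carEventType TEXT);
    INSERT INTO CarEvent VALUES (1, 'CRASHED'), (1, 'FIXED'), (2, 'CRASHED');
""")
rows = conn.execute("""
    SELECT c.id, c.make
    FROM Car c
    WHERE EXISTS (SELECT 1 FROM CarEvent ce
                  WHERE ce.car_id = c.id AND ce.carEventType = 'CRASHED')
      AND NOT EXISTS (SELECT 1 FROM CarEvent ce
                      WHERE ce.car_id = c.id AND ce.carEventType = 'FIXED')
""").fetchall()
print(rows)
```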
The number of joins can be reduced from two to one by using `GROUP BY` and `HAVING`: ``` select Car.id from Car join CarEvent on Car.id = CarEvent.car_id group by Car.id having sum(case when carEventType = 'CRASHED' then 1 else 0 end) > 0 and sum(case when carEventType = 'FIXED' then 1 else 0 end) = 0 ```
Improving my query efficiency - reducing full table scans?
[ "", "sql", "oracle", "oracle10g", "" ]
I am creating a regression test query that shall check if tables are populated or not. This is done by getting the first row in each table A,B,C and if its id exist it shall get value "Y". If not then value "N". I created the code below to do the following but I get Syntax error: parse error. Can somebody please tell me what's wrong? ``` SELECT Case When EXISTS( A.Id) THEN 'Y' ELSE 'N' END AS A_exist, Case When EXISTS( B.Id) THEN 'Y' ELSE 'N' END AS B_exist, Case When EXISTS( C.Id) THEN 'Y' ELSE 'N' END AS C_exist FROM (select top 1 * from Table_A) A (select top 1 * from Table_B) B (select top 1 * from Table_C) C ```
You can force a row for empty tables using a `COUNT`: ``` SELECT CASE WHEN a.CNT = 1 THEN 'Y' ELSE 'N' END AS A_exist, CASE WHEN b.CNT = 1 THEN 'Y' ELSE 'N' END AS B_exist, CASE WHEN c.CNT = 1 THEN 'Y' ELSE 'N' END AS C_exist FROM ( SELECT COUNT(*) AS CNT WHERE EXISTS(SELECT * FROM Table_A) ) AS a, ( SELECT COUNT(*) AS CNT WHERE EXISTS(SELECT * FROM Table_B) ) AS b, ( SELECT COUNT(*) AS CNT WHERE EXISTS(SELECT * FROM Table_C) ) AS c; ``` But why don't you return one row per table instead of one column? It's much easier to extend: ``` SELECT 'Table_A' AS "Table", CASE WHEN COUNT(*) = 1 THEN 'Y' ELSE 'N' END AS "Exists?" WHERE EXISTS(SELECT * FROM Table_A) UNION ALL SELECT 'Table_B', CASE WHEN COUNT(*) = 1 THEN 'Y' ELSE 'N' END WHERE EXISTS(SELECT * FROM Table_B) UNION ALL SELECT 'Table_C', CASE WHEN COUNT(*) = 1 THEN 'Y' ELSE 'N' END WHERE EXISTS(SELECT * FROM Table_C) ```
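The row-per-table idea is easy to try outside Teradata. In SQLite a FROM-less SELECT already returns exactly one row, so the COUNT workaround isn't needed and `CASE WHEN EXISTS` suffices (syntax differs slightly from the Teradata form above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Table_A (id INTEGER);
    CREATE TABLE Table_B (id INTEGER);
    INSERT INTO Table_B VALUES (1);
""")
rows = conn.execute("""
    SELECT 'Table_A' AS tbl,
           CASE WHEN EXISTS (SELECT 1 FROM Table_A) THEN 'Y' ELSE 'N' END AS populated
    UNION ALL
    SELECT 'Table_B',
           CASE WHEN EXISTS (SELECT 1 FROM Table_B) THEN 'Y' ELSE 'N' END
    ORDER BY tbl
""").fetchall()
print(rows)
```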
Do you use sql-server? Does this work? ``` SELECT Case When EXISTS(select * from Table_A) THEN 'Y' ELSE 'N' END AS A_exist, Case When EXISTS(select * from Table_B) THEN 'Y' ELSE 'N' END AS B_exist, Case When EXISTS(select * from Table_C) THEN 'Y' ELSE 'N' END AS C_exist ```
Syntax error: parse error, when creating a regression test checking empty tables
[ "", "sql", "teradata", "" ]
I have a query as below: ``` SELECT top 8 (OrderTH_strMovieName) as Top8HotFilms, Convert(char(8), OrderTH_dtmSessionDateTime, 112) as DayOfCount, count( OrderTH_strMovieName)as filmoccurence FROM [MOVIES].[dbo].[tblOrderTicketHistory] where Convert(char(8), OrderTH_dtmSessionDateTime, 112) >= (SELECT DATEADD(dd, 0, DATEDIFF(dd, 0, GETDATE()))) GROUP BY OrderTH_strMovieName , Convert(char(8), OrderTH_dtmSessionDateTime, 112) ORDER BY filmoccurence desc ``` The results are as below: ``` Movies Session Day Occurence .tix *SAN ANDREAS 20150531 2156 *MASSS 20150531 1954 *TOMORROWLAND 20150531 990 SPY 20150531 825 PITCH PERFECT 2 20150531 374 MAD MAX FURY ROAD 20150531 302 *MASSS 20150601 268 *SAN ANDREAS 20150601 257 ``` Question: how do I derive a column called Number, with the end result as below? ``` Number Movies Session Day Occurence .tix 1 *SAN ANDREAS 20150531 2156 2 *MASSS 20150531 1954 3 *TOMORROWLAND 20150531 990 4 SPY 20150531 825 5 PITCH PERFECT 2 20150531 374 6 MAD MAX FURY ROAD 20150531 302 7 *MASSS 20150601 268 8 *SAN ANDREAS 20150601 257 ```
You can try the query below: ``` SELECT top 8 RANK() OVER (ORDER BY count( OrderTH_strMovieName) desc) AS Number, (OrderTH_strMovieName) as Top8HotFilms, Convert(char(8), OrderTH_dtmSessionDateTime, 112) as DayOfCount, count( OrderTH_strMovieName)as filmoccurence FROM [MOVIES].[dbo].[tblOrderTicketHistory] where Convert(char(8), OrderTH_dtmSessionDateTime, 112) >= (SELECT DATEADD(dd, 0, DATEDIFF(dd, 0, GETDATE()))) GROUP BY OrderTH_strMovieName , Convert(char(8), OrderTH_dtmSessionDateTime, 112) ``` Eg: ``` create table test (name varchar(10), age numeric) insert into test values ('John',10); insert into test values('Happy',20); insert into test values ('mary',35); insert into test values ('mary',35); insert into test values ('John',10); SELECT top 2 ROW_NUMBER() over(ORDER BY sum(age)) as number, name as n, sum(age) as age_sum from test group by name order by name ``` Result: ``` number n age_sum 1 Happy 20 2 John 20 ```
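Window functions are portable enough to demonstrate outside SQL Server. Here is the numbering idea with `ROW_NUMBER()` in SQLite (requires SQLite ≥ 3.25, which ships with recent Python builds; sample counts taken from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ticket_history (movie TEXT, occurrences INTEGER);
    INSERT INTO ticket_history VALUES
        ('SAN ANDREAS', 2156), ('MASSS', 1954), ('TOMORROWLAND', 990), ('SPY', 825);
""")
rows = conn.execute("""
    SELECT ROW_NUMBER() OVER (ORDER BY occurrences DESC) AS number,
           movie, occurrences
    FROM ticket_history
    ORDER BY occurrences DESC
""").fetchall()
print(rows)
```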
Select ROW NUMBER over your query:<https://msdn.microsoft.com/en-us/library/ms186734.aspx>
SQL derived column call number
[ "", "sql", "sql-server", "" ]
I've searched all over Stack Overflow and the web for an example of what I need to do and just can't seem to figure out how to apply pivoting to accomplish what I need done. Maybe using Pivot is simply not the right answer, but it just seems like my lack of understanding might be getting in the way. I have a table whose data looks like this: ``` year property type market total taxable total parcel count ---- ------------- ------------ ------------- ------------ 2012 Real 23453 34563 123 2012 Personal 53434 65432 321 2013 Real 24565 23546 345 2013 Personal 64453 45636 342 2014 Real 76586 78645 876 2014 Personal 56775 67556 897 ``` I need to turn it into a result showing all the values as a single row per year like so: ``` year real market real taxable real count pers market pers taxable pers count ---- ----------- ------------ ---------- ----------- ------------ ---------- 2012 23453 34563 123 53434 65432 321 2013 24565 23546 345 64453 45636 342 2014 76586 78645 876 56775 67556 897 ``` All of the pivot table examples I look at though show the column names as actual values from a column in the source data, which doesn't really seem to be the case for me. And I am stuck with not being able to dynamically construct SQL. By the way, the "property type" values are a known set, so I know exactly the number of columns my output will need. Is doing this possible in SQL in a simple way? It just seems like it should be...
I'm not sure what DBMS you are using, but I believe my query should work for you(I'm using SQL Server). ## My Version of Your Table ``` DECLARE @yourTable TABLE ([year] INT,[Property Type] VARCHAR(20),[Market Total] INT,[Taxable Total] INT,[Parcel Count] INT); INSERT INTO @yourTable VALUES (2012,'Real',23453,34563,123), (2012,'Personal',53434,65432,321), (2013,'Real',24565,23546,345), (2013,'Personal',64453,45636,342), (2014,'Real',76586,78645,876), (2014,'Personal',56775,67556,897); ``` ## Actual Query ``` SELECT [year], MAX(CASE WHEN [Property Type] = 'Real' THEN [Market Total] END) AS [Real Market], MAX(CASE WHEN [Property Type] = 'Real' THEN [Taxable Total] END) AS [Real Taxable], MAX(CASE WHEN [Property Type] = 'Real' THEN [Parcel Count] END) AS [Real Count], MAX(CASE WHEN [Property Type] = 'Personal' THEN [Market Total] END) AS [Personal Market], MAX(CASE WHEN [Property Type] = 'Personal' THEN [Taxable Total] END) AS [Personal Taxable], MAX(CASE WHEN [Property Type] = 'Personal' THEN [Parcel Count] END) AS [Personal Count] FROM @yourTable GROUP BY [year] ```
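The conditional-aggregation pivot runs essentially unchanged on most engines. Here it is against a trimmed version of the sample data in SQLite (only two of the three measures shown, to keep it short):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tax (year INTEGER, ptype TEXT, market INTEGER, taxable INTEGER, parcels INTEGER);
    INSERT INTO tax VALUES
        (2012, 'Real', 23453, 34563, 123), (2012, 'Personal', 53434, 65432, 321),
        (2013, 'Real', 24565, 23546, 345), (2013, 'Personal', 64453, 45636, 342);
""")
rows = conn.execute("""
    SELECT year,
           MAX(CASE WHEN ptype = 'Real' THEN market END)      AS real_market,
           MAX(CASE WHEN ptype = 'Real' THEN parcels END)     AS real_count,
           MAX(CASE WHEN ptype = 'Personal' THEN market END)  AS pers_market,
           MAX(CASE WHEN ptype = 'Personal' THEN parcels END) AS pers_count
    FROM tax
    GROUP BY year
    ORDER BY year
""").fetchall()
print(rows)
```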
You can achieve your result with an INNER JOIN query. Here is the MySQL syntax: ``` SELECT t1x.`year`, `real market`, `real taxable`, `real count`, `pers market`, `pers taxable`, `pers count` FROM (SELECT `year`, `market total` AS `real market`, `taxable total` AS `real taxable`, `parcel count` AS `real count` FROM t1 WHERE `property type` = 'Real') t1x INNER JOIN (SELECT `year`, `market total` AS `pers market`, `taxable total` AS `pers taxable`, `parcel count` AS `pers count` FROM t1 WHERE `property type` = 'Personal') t1y ON t1x.`year` = t1y.`year` ``` Here is [SQL Fiddle](http://sqlfiddle.com/#!9/f8309/1) so you can see how it works. Basically, you select values from your table (I call it **t1**) into two derived tables, **t1x** and **t1y**. Into **t1x** you select rows from **t1** where the **property type** column value is **Real**, and into **t1y** where that value is **Personal**. Then you join those two tables on the **year** column. SQL Server syntax is pretty much the same; the only difference is how two-word column names are quoted: in MySQL we use backquotes (`) and in SQL Server we use brackets like this []. Here is how that syntax looks: ``` SELECT t1x.[year], [real market], [real taxable], [real count], [pers market], [pers taxable], [pers count] FROM (SELECT [year], [market total] AS [real market], [taxable total] AS [real taxable], [parcel count] AS [real count] FROM t3 WHERE [property type] = 'Real')AS t1x INNER JOIN (SELECT [year], [market total] AS [pers market], [taxable total] AS [pers taxable], [parcel count] AS [pers count] FROM t1 WHERE [property type] = 'Personal') t1y ON t1x.[year] = t1y.[year] ``` If you use any other database, then at least you have an idea how to deal with this problem... GL!
SQL Pivot all columns and rows to single row
[ "", "sql", "sql-server", "" ]
I have a table of people who have a name, location (where they live), and a parent\_id (parents are stored on another table). So for example: ``` name | location | parent_id --------+-----------+----------- Joe | Chicago | 12 Sammy | Chicago | 13 Bob | SF | 13 Jim | New York | 13 Jane | Chicago | 14 Dave | Portland | 14 Al | Chicago | 15 Monica | Boston | 15 Debbie | New York | 15 Bill | Chicago | 16 Bruce | New York | 16 ``` I need to count of how many people live in Chicago and have siblings (share a parent\_id) that live in New York. So for the example above, the count would be 3. ``` name | location | parent_id --------+-----------+----------- Joe | Chicago | 12 Sammy | Chicago | 13 * sibling Jim lives in New York Bob | SF | 13 Jim | New York | 13 Jane | Chicago | 14 Dave | Portland | 14 Al | Chicago | 15 * sibling Debbie lives in New York Monica | Boston | 15 Debbie | New York | 15 Bill | Chicago | 16 * sibling Bruce lives in New York Bruce | New York | 16 ``` Can someone help me write the SQL to query this count?
The correlated query is a very nice way to go and is very efficient. Avoid the use of distinct as it is an expensive operation. Group by is a nice alternative over the use of distinct. Understand the data and structure the query accordingly. Here is another option that is engine optimized... ``` select count(*) from (select * from #t where Location = 'Chicago') ch inner join (select * from #t where Location = 'New York') ny on ch.ParentID = ny.ParentID ```
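Here is the derived-table join executed against the question's exact data in SQLite — it returns 3, matching the expected count:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE people (name TEXT, location TEXT, parent_id INTEGER);
    INSERT INTO people VALUES
        ('Joe','Chicago',12), ('Sammy','Chicago',13), ('Bob','SF',13), ('Jim','New York',13),
        ('Jane','Chicago',14), ('Dave','Portland',14),
        ('Al','Chicago',15), ('Monica','Boston',15), ('Debbie','New York',15),
        ('Bill','Chicago',16), ('Bruce','New York',16);
""")
count = conn.execute("""
    SELECT COUNT(*)
    FROM (SELECT * FROM people WHERE location = 'Chicago') ch
    JOIN (SELECT * FROM people WHERE location = 'New York') ny
      ON ch.parent_id = ny.parent_id
""").fetchone()[0]
print(count)
```

Note that this counts Chicago/New York sibling *pairs*; with this data each Chicago person has at most one New York sibling, so the pair count and the person count coincide.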
Looks like Minh's answer works great, but here is another example using a Self Join. ``` SELECT Count(DISTINCT a.child_id) FROM people a JOIN people b ON a.parent_id = b.parent_id WHERE a.location = 'Chicago' AND b.location = 'New York' ``` Should produce "3" for just the above table listed. EDIT: Added a DISTINCT a.parent\_id based on Lithis' suggestion. EDIT2: As noted by Uueerdo, a child\_id or some sort of unique id would really help in the case of 2 siblings who live in Chicago and 1 sibling who lives in New York. I have edited the original query to reflect this. Since this is not truly an "answer" to your question, because there is no such child\_id, I would defer to Uueerdo's answer, sorry!
Count SQL records based on sibling property
[ "", "mysql", "sql", "" ]
I tried to fetch values for VarNewIdOne as a bulk collection. I got the values into the collection, but when I then try to use each value to select into another variable (VarOldLevlIdOne), I get a NO DATA FOUND error. ``` SELECT ITEM_ID bulk collect INTO VarNewIdOne FROM BIZZXE_V2_SCH.ITEMS WHERE PARENT_ITEM_ID = VarId; FOR k IN VarNewIdOne.First ..VarNewIdOne.Last LOOP SELECT LEVEL_ID INTO VarOldLevlIdOne FROM BIZZXE_V2_SCH.ITEM_UOM_LEVELS WHERE ITEM_ID IN (VarNewIdOne(K)); DELETE FROM BIZZXE_V2_SCH.ITEM_UOM_LEVEL_CONTROLS WHERE LEVEL_ID=VarOldLevlIdOne; END LOOP; ``` These are my tables ``` CREATE TABLE "ITEMS" ( "ITEM_ID" NUMBER(*,0) NOT NULL ENABLE, "NAME" VARCHAR2(50) NOT NULL ENABLE, "SHORT_NAME" VARCHAR2(25) NOT NULL ENABLE, "CODE" VARCHAR2(25) NOT NULL ENABLE, "HS_CODE_ID" NUMBER(*,0) NOT NULL ENABLE, "BRAND_ID" NUMBER(*,0) NOT NULL ENABLE, "CAT_ID" NUMBER(*,0) NOT NULL ENABLE, "XMLCOL" "SYS"."XMLTYPE" , "REMARKS" VARCHAR2(50), "VARIANT_TEMP_ID" NUMBER NOT NULL ENABLE, "LOCATION_ID" NUMBER(38,0), "TRANS_DATE" DATE, "USER_ID" VARCHAR2(128), "VERSION_ID" NUMBER(7,2), "PARENT_ITEM_ID" NUMBER, CONSTRAINT "ITEM_PK" PRIMARY KEY ("ITEM_ID") USING INDEX ENABLE ) CREATE TABLE "ITEM_UOM_LEVELS" ( "LEVEL_ID" NUMBER(*,0) NOT NULL ENABLE, "ITEM_ID" NUMBER(*,0) NOT NULL ENABLE, "CONV_FACTR" VARCHAR2(50), "IMAGE" BLOB, "STATUS_ID" NUMBER(*,0) NOT NULL ENABLE, "XMLCOL" "SYS"."XMLTYPE" , "UOM_LEVEL" NUMBER NOT NULL ENABLE, "LONG_NAME" VARCHAR2(50) NOT NULL ENABLE, "UOM_ID" NUMBER NOT NULL ENABLE, "SYS_USAGE_ID" NUMBER NOT NULL ENABLE, "UOM_LEVEL_VOLUME" NUMBER, CONSTRAINT "ITEM_LEVEL_PK" PRIMARY KEY ("LEVEL_ID") USING INDEX ENABLE ) ALTER TABLE "ITEM_UOM_LEVELS" ADD CONSTRAINT "ITM_UOM_LEVELS_ITEM_FK" FOREIGN KEY ("ITEM_ID") REFERENCES "ITEMS" ("ITEM_ID") ENABLE ``` sample data ``` insert into ITEMS (Item_Id, Name, Short_Name, Code, Hs_Code_Id, Brand_Id, Cat_Id ,Remarks ,Variant_Temp_Id, Parent_Item_Id) values (203,'small Large updated','SML_LRG','SML_LRGBYPO123', 111, 8, 5, 'Comment',4,202); insert into ITEM_UOM_LEVELS (LEVEL_ID,ITEM_ID,CONV_FACTR,STATUS_ID,UOM_LEVEL, LONG_NAME,UOM_ID,SYS_USAGE_ID,UOM_LEVEL_VOLUME) values (61,71,'Cost',1, 1,'Test',9,1,30) ```
Obviously table ITEM\_UOM\_LEVELS sometimes has no entry for an ITEM\_ID and thus results in a NO DATA FOUND error. So use a subselect instead: ``` SELECT ITEM_ID bulk collect INTO VarNewIdOne FROM BIZZXE_V2_SCH.ITEMS WHERE PARENT_ITEM_ID = VarId; FOR k IN VarNewIdOne.First ..VarNewIdOne.Last LOOP DELETE FROM BIZZXE_V2_SCH.ITEM_UOM_LEVEL_CONTROLS WHERE LEVEL_ID IN ( SELECT LEVEL_ID FROM BIZZXE_V2_SCH.ITEM_UOM_LEVELS WHERE ITEM_ID = VarNewIdOne(K) ); END LOOP; ``` Or even make this a single SQL statement: ``` DELETE FROM BIZZXE_V2_SCH.ITEM_UOM_LEVEL_CONTROLS WHERE LEVEL_ID IN ( SELECT LEVEL_ID FROM BIZZXE_V2_SCH.ITEM_UOM_LEVELS WHERE ITEM_ID IN ( SELECT ITEM_ID FROM BIZZXE_V2_SCH.ITEMS WHERE PARENT_ITEM_ID = VarId ) ); ```
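The single-statement version is easy to validate on a toy schema. SQLite shown below; table names are simplified and the id values are made up, loosely modeled on the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE items (item_id INTEGER, parent_item_id INTEGER);
    INSERT INTO items VALUES (203, 202), (204, 202), (300, 999);
    CREATE TABLE item_uom_levels (level_id INTEGER, item_id INTEGER);
    INSERT INTO item_uom_levels VALUES (61, 203), (62, 204), (63, 300);
    CREATE TABLE item_uom_level_controls (id INTEGER, level_id INTEGER);
    INSERT INTO item_uom_level_controls VALUES (1, 61), (2, 62), (3, 63);
""")
conn.execute("""
    DELETE FROM item_uom_level_controls
    WHERE level_id IN (SELECT level_id FROM item_uom_levels
                       WHERE item_id IN (SELECT item_id FROM items
                                         WHERE parent_item_id = 202))
""")
remaining = conn.execute("SELECT id FROM item_uom_level_controls").fetchall()
print(remaining)
```

Items without levels simply contribute nothing to the inner IN lists, so no NO DATA FOUND-style failure can occur.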
`ORA-01403: no data found` occurs when the **filter predicate** in your SELECT statement is unable to retrieve any rows based on the condition. > WHERE ITEM\_ID IN (VarNewIdOne(K)) If there is no matching ITEM\_ID then **no data found** exception will be raised. For example, ``` SQL> DECLARE 2 cnt NUMBER; 3 BEGIN 4 FOR i IN 5 (SELECT empno FROM emp 6 ) 7 LOOP 8 SELECT empno INTO cnt FROM emp WHERE empno = 1234; 9 END LOOP; 10 END; 11 / DECLARE * ERROR at line 1: ORA-01403: no data found ORA-06512: at line 8 SQL> ```
PL/SQL Loop No data found
[ "", "sql", "plsql", "oracle11g", "oracle-apex", "" ]
I have the following data in a MySQL table called wallet\_txns. ``` wlt_name wlt_txn_type wlt_txn_amount A Income 200 A Expense 100 B Income 100 B Income 500 B Expense 200 ``` I am trying to get the output of the data like below (the sum of income and expense in a single row): ``` Wlt_name Expense Income A 100 200 B 200 600 ``` I have used the following query, but I am not getting the output as expected (the income and expense come out in separate rows). Please help... ``` select wlt_name, if(wlt_txn_type = 'Expense', wlt_txn_amount, 0) as Expense, if(wlt_txn_type = 'Income', wlt_txn_amount, 0) as Income from wallet_txns ; ```
You almost wrote the correct query already, but you forgot about `sum` and `group by`. Here is the query: ``` select wlt_name, sum(if(wlt_txn_type = 'Expense', wlt_txn_amount, 0)) as Expense, sum(if(wlt_txn_type = 'Income', wlt_txn_amount, 0)) as Income from wallet_txns group by wlt_name ; ```
Maybe you can help this part of the code: `SUM(IF(wlt_txn_type = "Income", wlt_txn_amount, 0)) AS IncomeTotal`.
mysql table data output in a single row based on the type of the data
[ "", "mysql", "sql", "" ]
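The accepted query can be verified end-to-end with SQLite standing in for MySQL. Since SQLite has no `IF()`, the portable `CASE WHEN` spelling (which works in MySQL as well) is used; the table and values are taken from the question:

```python
import sqlite3

# Conditional aggregation: pivot income/expense into one row per wallet.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE wallet_txns (wlt_name TEXT, wlt_txn_type TEXT, wlt_txn_amount INTEGER);
INSERT INTO wallet_txns VALUES
  ('A', 'Income', 200), ('A', 'Expense', 100),
  ('B', 'Income', 100), ('B', 'Income', 500), ('B', 'Expense', 200);
""")

rows = conn.execute("""
SELECT wlt_name,
       SUM(CASE WHEN wlt_txn_type = 'Expense' THEN wlt_txn_amount ELSE 0 END) AS Expense,
       SUM(CASE WHEN wlt_txn_type = 'Income'  THEN wlt_txn_amount ELSE 0 END) AS Income
FROM wallet_txns
GROUP BY wlt_name
ORDER BY wlt_name
""").fetchall()
print(rows)  # [('A', 100, 200), ('B', 200, 600)]
```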
I'm trying to build on this code but I don't know how to tackle the 2nd part of it. At the start, the SQL selects a record's caseid, program, and language from table Intake WHERE assignedto is Null. This is working perfectly fine. Once a record is selected, I want more code that checks whether the selected record's caseid matches the caseid of a PREVIOUS record. This is where I am lost. Here is where I am at: ``` Set db = CurrentDb strSQL = "SELECT TOP 1 IntakeID, caseid, [Program], [language] FROM Intake WHERE assignedto Is Null" Set rs = db.OpenRecordset(strSQL, dbOpenDynaset) i = rs!caseid If DLookup("caseid", "Intake", i) Then Call GetPreviousWorker ``` I know that I'm not using DLookup properly; I'm attempting to find a matching record and clearly failing haha. Thank you to anyone who can help :)
I think `IntakeId` is the primary key, incrementing by 1, in the `Intake` table, so your requirement can be achieved as below ``` Set db = CurrentDb strSQL = "SELECT TOP 1 IntakeID, caseid, [Program], [language] FROM Intake WHERE assignedto Is Null" Set rs = db.OpenRecordset(strSQL, dbOpenDynaset) While Not rs.EOF strSQL = "SELECT IntakeID, caseid, [Program], [language],[other required fields] FROM Intake WHERE IntakeId=" & rs!IntakeID - 1 Set rs1 = db.OpenRecordset(strSQL, dbOpenDynaset) If rs!caseid = rs1!caseid Then 'do what is needed in this case... update or something Else 'the current record's caseid is not equal to the previous one End If rs.MoveNext Wend ``` Hope this helps
Why not just `DCount` caseid? ``` If DCount("caseid", "Intake", "caseid=" & i) > 1 Then Call GetPreviousWorker ... ```
Match previous record after SQL Select
[ "", "sql", "ms-access", "vba", "ms-access-2010", "" ]
I've been trying to select records from 3 tables in one database with `UNION` but I get this error: > `#1248` - Every derived table must have its own alias This is the query that I'm trying: ``` SELECT * from (SELECT column1, 'table1' from table1 UNION SELECT column1, 'table2' from table2 UNION SELECT column1, 'table3' from table3) WHERE column1 not like 'abr%' and length(column1) < 8; ``` What exactly does that error mean, and how can I fix it to show the proper results?
You need to give the derived table an alias like it says: ``` SELECT * from (SELECT column1, 'table1' from table1 UNION SELECT column1, 'table2' from table2 UNION SELECT column1, 'table3' from table3) dtAlias WHERE column1 not like 'abr%' and length(column1) < 8; ```
``` SELECT Alis.column1 FROM (SELECT column1, 'table1' as which from table1 UNION ALL SELECT column1, 'table2' from table2 UNION ALL SELECT column1, 'table3' from table3 ) Alis WHERE column1 not like 'abr%' and length(column1) < 8; ```
Select from multiple tables with UNION
[ "", "mysql", "sql", "" ]
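The fix from the accepted answer (giving the derived table an alias, `dt` here) can be demonstrated on SQLite with invented rows; the second column tags which table each value came from:

```python
import sqlite3

# Three one-column tables whose union is filtered in an aliased derived table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (column1 TEXT);
CREATE TABLE table2 (column1 TEXT);
CREATE TABLE table3 (column1 TEXT);
INSERT INTO table1 VALUES ('abr0001'), ('short');
INSERT INTO table2 VALUES ('short'), ('waytoolongvalue');
INSERT INTO table3 VALUES ('tiny');
""")

rows = conn.execute("""
SELECT * FROM (
    SELECT column1, 'table1' AS src FROM table1
    UNION
    SELECT column1, 'table2' FROM table2
    UNION
    SELECT column1, 'table3' FROM table3
) dt
WHERE column1 NOT LIKE 'abr%' AND LENGTH(column1) < 8
ORDER BY column1, src
""").fetchall()
print(rows)  # [('short', 'table1'), ('short', 'table2'), ('tiny', 'table3')]
```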
Hope everyone had a good weekend. I have a table (`tableA`) with one column that has values such as the below: Column1: ``` xyz-12345678 rdr-32343234 fgm-23423423 ``` Then I have another table (`tableB`) with one column that has all the values in `tableA.column1` and then some more, such as the below column1: ``` rfxyz-1234567800012015-01-029 grrdr-3234323485832015-02-037 tyfgm-2342342343432014-12-148 ``` As you can see, `tableA.column1` has its value embedded in `tableB.column1`. I wrote a subselect query such as below to identify all instances in `tableB.column1` that have `tableA.column1` values embedded in them, but I get 0 results and no errors. I can clearly see that there are row values in `tableA.column1` that are present in `tableB.column1`, but I am unsure what I have done wrong here to result in no errors and no results. My SAS proc sql is below: ``` PROC SQL; select i.* from tableA i where exists (select * from tableB where i.column1 like '%'||column1||'%' ) ; quit; ``` Any help on this would be greatly appreciated as my SAS knowledge is not really strong.
Your condition is backwards. `TableB` has the longer column, so you want it on the left side of `LIKE`: ``` PROC SQL; select i.* from tableA i where exists (select * from tableB b where b.column1 like '%' || i.column1 || '%' ) ; quit; ``` I encourage you to use table aliases for all your column references, especially in correlated subqueries.
Your attempt and your description don't quite match up. If your attempt is to be believed you are looking for values in tableA, where the value is contianed in another record in tableB. If that is the case then you just have your where clause slightly mixed up, it should be: ``` select i.* from tableA i where exists (select * from tableB where tableB.column1 like '%'||i.column1||'%' ); ``` Substituting in values what you had before was: ``` WHERE 'xyz-12345678' LIKE '%tyfgm-2342342343432014-12-148%' ``` Which is false, you need it to be: ``` WHERE 'tyfgm-2342342343432014-12-148' LIKE '%xyz-12345678%' ``` If your description is to be believed, then your where clause is right, but your tables are the wrong way round: > *to identify all instances in `tableB.column1` that have `tableA.column1` values embedded in them* So you would want to select from `tableB`: ``` select i.* from tableB i where exists (select * from tableA a where i.column1 like '%'||a.column1||'%' ); ```
SAS - sub select with like
[ "", "sql", "sas", "proc-sql", "" ]
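The corrected condition (longer column on the left of `LIKE`) is plain SQL, so it can be checked with SQLite standing in for SAS proc sql. Values are from the question, plus one invented non-matching row:

```python
import sqlite3

# EXISTS with the concatenated pattern the right way around.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tableA (column1 TEXT);
CREATE TABLE tableB (column1 TEXT);
INSERT INTO tableA VALUES ('xyz-12345678'), ('rdr-32343234'), ('nomatch-000');
INSERT INTO tableB VALUES
  ('rfxyz-1234567800012015-01-029'),
  ('grrdr-3234323485832015-02-037');
""")

rows = [r[0] for r in conn.execute("""
SELECT i.column1 FROM tableA i
WHERE EXISTS (SELECT 1 FROM tableB b
              WHERE b.column1 LIKE '%' || i.column1 || '%')
ORDER BY i.column1
""")]
print(rows)  # ['rdr-32343234', 'xyz-12345678']
```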
I have a table with a couple million records in it which stores product views by SessionGUID (visitor), the table structure is: ID (PK) SessionGUID (uniqueidentifier) ProductID (int) DateTime (datetime) The purpose of the query is to return a list of Product ID's that were viewed by the same people who viewed a specific product ID (ie, customers who looked at X also looked at Y) The query I'm using is as follows: ``` SELECT A.ProductID FROM VISITOR_SESSIONS_PRODUCTVIEWS A WHERE A.SessionGUID IN (SELECT DISTINCT SessionGUID FROM VISITOR_SESSIONS_PRODUCTVIEWS WHERE ProductID = @ProductID) GROUP BY A.ProductID ORDER BY COUNT(A.ProductID) DESC ``` Is there a more efficient way to write this using GROUP BY/HAVING/PARTITON or some other more optimized method of getting the data the way I need it? I am running SQL 2008 on my dev server, but will be running from SQL Azure when it goes live.
Often, `EXISTS` is more efficient than `IN`: ``` SELECT A.ProductID FROM VISITOR_SESSIONS_PRODUCTVIEWS A WHERE EXISTS (SELECT 1 FROM VISITOR_SESSIONS_PRODUCTVIEWS B WHERE B.ProductID = @ProductID AND A.SessionGUID = B.SessionGUID ) GROUP BY A.ProductID ORDER BY COUNT(A.ProductID) DESC; ``` For best performance, you want an index on `VISITOR_SESSIONS_PRODUCTVIEWS(SessionGUID, ProductId)` and perhaps also `VISITOR_SESSIONS_PRODUCTVIEWS(ProductId)`. EDIT: You could *try* writing this using window functions, but I'm not sure the performance would be better: ``` select productid from (select pv.*, sum(case when productid = @productid then 1 else 0 end) over (partition by SessionGUID) as cnt from visitor_sessions_productviews ) pv where cnt > 0 group by productid order by count(*) desc; ``` I'm not sure the performance would be better than the `EXISTS` method.
``` SELECT A.ProductID FROM VISITOR_SESSIONS_PRODUCTVIEWS A JOIN VISITOR_SESSIONS_PRODUCTVIEWS S ON A.SessionGUID = S.SessionGUID AND S.ProductID = @ProductID GROUP B A.ProductID ORDER BY COUNT(DISTINCT(A.ProductID)) DESC ```
Is there a more efficient way to write this SQL query?
[ "", "sql", "sql-server", "t-sql", "azure-sql-database", "" ]
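The `EXISTS` rewrite from the answer can be exercised on SQLite with a tiny invented set of sessions and product IDs (the table keeps the question's name, but the session values are made up):

```python
import sqlite3

# "Customers who viewed product 10 also viewed..." via EXISTS + GROUP BY.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE VISITOR_SESSIONS_PRODUCTVIEWS (SessionGUID TEXT, ProductID INTEGER);
INSERT INTO VISITOR_SESSIONS_PRODUCTVIEWS VALUES
  ('s1', 10), ('s1', 20), ('s1', 30),
  ('s2', 10), ('s2', 20),
  ('s3', 40);
""")

product_id = 10
rows = conn.execute("""
SELECT a.ProductID, COUNT(*) AS views
FROM VISITOR_SESSIONS_PRODUCTVIEWS a
WHERE EXISTS (SELECT 1 FROM VISITOR_SESSIONS_PRODUCTVIEWS b
              WHERE b.ProductID = ? AND b.SessionGUID = a.SessionGUID)
GROUP BY a.ProductID
ORDER BY views DESC, a.ProductID
""", (product_id,)).fetchall()
print(rows)  # [(10, 2), (20, 2), (30, 1)]; session s3 never viewed product 10
```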
I have multiple tables: ``` Person --------------------- IDPerson, Name StickerTransaction --------------------- IDTransac, IDPerson, IDSomeoneElse, NbStickersReceived, NbStickersGiven Purchase --------------------- IDPurchase, IDPerson, NbStickersBought ``` I'm trying to get every person who never made a transaction or currently has 0 stickers. To get the number of stickers a person has, here's the formula: ``` NbStickersBought + NbStickersReceived - NbStickersGiven ``` Here's the query I've got so far. The problem is that in the subsubsubqueries... I do not have access to the person's ID. Is there a way to have access to the person's ID, or is there a better way to write this query? ``` SELECT People.IDPerson, People.Name FROM ( -- Selects people that never made a sticker transaction SELECT p.IDPerson, p.Name FROM Person p LEFT JOIN StickerTransaction sT ON p.IDPerson = sT.IDPerson WHERE sT.IDPerson IS NULL UNION -- Selects people that have currently 0 sticker SELECT p.IDPerson, p.Name FROM Person p WHERE 0 = ( SELECT SUM(NbStickers) AS NbStickers FROM ( -- I do not have access to p.IDPerson here... SELECT (sT.NbStickersReceived - sT.NbStickersGiven) AS NbStickers FROM StickerTransaction sT WHERE sT.IDPerson = p.IDPerson UNION ALL -- I do not have access to p.IDPerson here either... SELECT pu.NbStickersBought AS NbStickers FROM Purchase pu WHERE pu.IDPerson = p.IDPerson ) ) ) People ```
AFAIK you can't correlate into a Derived Table. But you can rewrite your query as a non-correlated subquery, carrying `IDPerson` through the `UNION ALL` so it can be grouped on: ``` SELECT People.IDPerson, People.NAME FROM ( -- Selects people that never made a sticker transaction SELECT p.IDPerson, p.NAME FROM Person p LEFT JOIN StickerTransaction sT ON p.IDPerson = sT.IDPerson WHERE sT.IDPerson IS NULL UNION -- Selects people that currently have 0 stickers SELECT p.IDPerson, p.NAME FROM Person p JOIN ( SELECT dt.IDPerson, SUM(dt.NbStickers) AS NbStickers FROM ( SELECT sT.IDPerson, (sT.NbStickersReceived - sT.NbStickersGiven) AS NbStickers FROM StickerTransaction sT UNION ALL SELECT pu.IDPerson, pu.NbStickersBought AS NbStickers FROM Purchase pu ) dt GROUP BY dt.IDPerson HAVING SUM(dt.NbStickers) = 0 ) sT ON sT.IDPerson = p.IDPerson ) People ```
``` SELECT People.IDPerson, People.Name FROM Person where IDPerson not in (select distinct IDPerson from StickerTransaction) and IDPerson not in (SELECT IDPerson from Purchase group by IDPerson having SUM(NbStickers) > 0) ```
Correlated subqueries: How to get the ID of the outer query?
[ "", "sql", "sum", "subquery", "correlated-subquery", "" ]
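Here is a runnable sketch of the non-correlated rewrite on SQLite. The people, names, and amounts are invented: Ann nets 0 stickers, Bob nets 5, and Cid has no transactions at all:

```python
import sqlite3

# Zero-sticker people plus people with no transactions, via a grouped UNION ALL.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Person (IDPerson INTEGER, Name TEXT);
CREATE TABLE StickerTransaction (IDPerson INTEGER, NbStickersReceived INTEGER, NbStickersGiven INTEGER);
CREATE TABLE Purchase (IDPerson INTEGER, NbStickersBought INTEGER);
INSERT INTO Person VALUES (1, 'Ann'), (2, 'Bob'), (3, 'Cid');
INSERT INTO StickerTransaction VALUES (1, 1, 3), (2, 4, 1);
INSERT INTO Purchase VALUES (1, 2), (2, 2);
""")

rows = conn.execute("""
SELECT p.IDPerson, p.Name FROM Person p
LEFT JOIN StickerTransaction sT ON p.IDPerson = sT.IDPerson
WHERE sT.IDPerson IS NULL
UNION
SELECT p.IDPerson, p.Name FROM Person p
JOIN (
    SELECT dt.IDPerson, SUM(dt.NbStickers) AS NbStickers
    FROM (
        SELECT IDPerson, NbStickersReceived - NbStickersGiven AS NbStickers
        FROM StickerTransaction
        UNION ALL
        SELECT IDPerson, NbStickersBought FROM Purchase
    ) dt
    GROUP BY dt.IDPerson
    HAVING SUM(dt.NbStickers) = 0
) z ON z.IDPerson = p.IDPerson
ORDER BY 1
""").fetchall()
print(rows)  # [(1, 'Ann'), (3, 'Cid')]
```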
I wonder if I can combine columns from different tables into one result. ``` Table A: ID ColA ColB 1 A B Table B: ID ColC ColD ColE 1 C1 D1 E1 1 C2 D2 E2 Table C: ID ColF ColG 1 F G ``` Expected Results: ``` ID ColA ColB ColC ColD ColE ColF ColG 1 A B C1 D1 E1 F G 1 C2 D2 E2 ``` Is there a way to combine columns from different tables into one? Thank you in advance.
A 'simple' solution would be a `full outer join`; however, that would not give you the gaps you have for the second row (because TableA only contains 1 row). So, to solve that, you can make it a little more complex and generate numbers per ID for each of the tables. Then you can make a full outer join on each of those subselects, and use that generated row number as the second field in the join. That way, you won't get a cartesian product, and you'll get the result as you drew it, with gaps for missing rows. I don't have a database at hand to test it, but let me know if it works. ``` select coalesce(A.ID, B.ID, C.ID) as ID, A.ColA, A.ColB, B.ColC, B.ColD, B.ColE, C.ColF, C.ColG from (select ID, row_number() over (partition by ID order by ColA) as rownumber, ColA, ColB from TableA t) A full outer join (select ID, row_number() over (partition by ID order by ColC) as rownumber, ColC, ColD, ColE from TableB t) B on B.ID = A.ID and B.rownumber = A.rownumber full outer join (select ID, row_number() over (partition by ID order by ColF) as rownumber, ColF, ColG from TableC t) C on C.ID = nvl(A.ID, B.ID) and C.rownumber = nvl(A.rownumber, B.rownumber) ```
``` select ID AS "ID", ColA AS "ColA", ColB AS "ColB",NULL AS ColC ,NULL AS ColD ,NULL AS ColE, NULL AS ColF, NULL AS ColG FROM Table A UNION SELECT ID AS "ID", NULL AS "ColA", NULL AS "ColB",ColC AS "ColC" ,ColD AS "ColD" ,ColE AS "ColE", NULL AS ColF, NULL AS ColG FROM Table B UNION SELECT ID AS "ID", NULL AS "ColA", NULL AS "ColB",NULL AS "ColC" ,NULL AS "ColD" ,NULL AS "ColE", ColF AS "ColF", ColG AS "ColG" FROM Table C ``` Hope this is what you expected
Oracle SQL: Can I combine columns from different tables into one table?
[ "", "sql", "oracle", "" ]
I use the `LIKE` statement like `'%/_Description'` but got the following results: 1. Description 2. Something/\_Description Using `'%/[_]Description'`, "Description" is returned. How can I restrict the result only to something like `"Something/_Description"`? Please note this is standard SQL, so it is supposed to run on both SQL Server and Oracle. Thanks,
You can escape the [wildchar characters](https://msdn.microsoft.com/en-US/library/ms179859.aspx) with the character defined in the [`ESCAPE` clause](https://msdn.microsoft.com/en-US/library/ms179859.aspx#code-snippet-8): ``` match_expression [ NOT ] LIKE pattern [ ESCAPE escape_character ] ``` In your case (untested): ``` WHERE ... LIKE '%/\_Description' ESCAPE '\' ``` This syntax is shared by SQL Server and [Oracle](http://docs.oracle.com/cd/B12037_01/server.101/b10759/conditions016.htm).
the `_` char is a special jolly character in Oracle. That's why the rows returned are not as you expected. If this query is generated by a program you can have a substitute the `_` with `[_]`. Alternatively, you can use the `REPLACE` function in oracle: ``` REPLACE( string1, string_to_replace [, replacement_string] ) ``` in your case ``` REPLACE( string_var, '_', [_] ) ``` be aware that this may be a bottleneck if your function is called too many times, so I suggest to use an escape character. example: ``` select REPLACE( 'aaaa_bbb', '_', '[_]' ) from dual ```
SQL LIKE '%/_Description' returns unexpected results
[ "", "sql", "wildcard", "sql-like", "" ]
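The `ESCAPE` clause from the accepted answer can be demonstrated on SQLite, which supports the same syntax as SQL Server and Oracle. The strings are from the question plus one invented row that `_` matches only as a wildcard:

```python
import sqlite3

# LIKE with and without an ESCAPE character for the underscore wildcard.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (name TEXT);
INSERT INTO t VALUES
  ('Something/_Description'),
  ('Something/XDescription'),
  ('Description');
""")

unescaped = [r[0] for r in conn.execute(
    "SELECT name FROM t WHERE name LIKE '%/_Description' ORDER BY rowid")]
escaped = [r[0] for r in conn.execute(
    "SELECT name FROM t WHERE name LIKE '%/\\_Description' ESCAPE '\\' ORDER BY rowid")]
print(unescaped)  # ['Something/_Description', 'Something/XDescription']
print(escaped)    # ['Something/_Description'] since '_' is now literal
```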
Before your brain explodes: yes, I have seen [this](https://stackoverflow.com/questions/13284222/database-function-vs-case-statement). My question is tailored to my specific situation. I'm trying to optimize a very large database and I'm trying to rewrite some calls because it was converted from an Oracle DB to an MS SQL Server DB. I have a function that is being called from a stored proc that I want to replace, and I think it will be fine, but I'm not sure in all cases. This is the current function that is being called. ``` ALTER function [dbo].[GREATEST_DATETIME] ( @i1_p datetime, @i2_p datetime ) returns datetime as begin declare @r_l datetime if @i1_p is null or @i2_p is null return null set @r_l = @i1_p if @i2_p > @r_l set @r_l = @i2_p return @r_l end ``` I want to replace the call to the function entirely with this. ``` select CASE WHEN @date1 >= @date2 THEN @date1 ELSE @date2 END ``` They both determine the largest date out of 2 dates; I'm just not sure if I'm covering all my bases. After testing ``` declare @date1 datetime declare @date2 datetime set @date1 = '2015-05-01' set @date2 = null select CASE WHEN @date1 >= @date2 THEN @date1 ELSE @date2 END ``` This returns null, which is the desired result. Here is my final result that is going to work ``` declare @date1 datetime declare @date2 datetime set @date1 = null set @date2 = '2015-05-01' select CASE WHEN @date1 is null or @date2 is null then null WHEN @date1 >= @date2 THEN @date1 ELSE @date2 END ```
If one of both dates are `null` then the first comparision will fail and return `false`. So you need the extra `null` check ``` select CASE WHEN @date1 is null or @date2 is null then null WHEN @date1 >= @date2 THEN @date1 ELSE @date2 END ```
The new case statement here does not handle `NULL` in the same way as the original. The original would result in `NULL` if either source data was null.
Function vs Case efficiency
[ "", "sql", "sql-server", "t-sql", "" ]
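The NULL pitfall is easy to reproduce outside SQL Server. In this SQLite sketch (the `greatest` helper and its flags are invented for illustration), a comparison with NULL is unknown, so the plain CASE falls through to ELSE and returns whatever sits in the second slot:

```python
import sqlite3

# Plain CASE vs. the NULL-guarded CASE for "greatest of two dates".
conn = sqlite3.connect(":memory:")

def greatest(d1, d2, null_safe):
    if null_safe:
        sql = """SELECT CASE WHEN ? IS NULL OR ? IS NULL THEN NULL
                             WHEN ? >= ? THEN ? ELSE ? END"""
        args = (d1, d2, d1, d2, d1, d2)
    else:
        sql = "SELECT CASE WHEN ? >= ? THEN ? ELSE ? END"
        args = (d1, d2, d1, d2)
    return conn.execute(sql, args).fetchone()[0]

print(greatest('2015-05-01', None, False))  # None, correct only by accident
print(greatest(None, '2015-05-01', False))  # '2015-05-01', wrong: should be NULL
print(greatest(None, '2015-05-01', True))   # None, the explicit guard fixes it
```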
I need help with duplicate rows. I have deleted the duplicate rows from one table using the following method ``` DELETE FROM names a WHERE ROWID > (SELECT MIN(ROWID) FROM names b WHERE b.name=a.name AND b.age=a.age ); ``` It did work with that particular table but I did the same thing with another table which has duplicate reference numbers for each doctor but different unique codes. ``` doc_name ref_no unique_code ------------------------------ abcd 1010 1111 abcd 1010 1112 cdef 1011 1113 cdef 1011 1114 ``` My result should look like this: ``` doc_name ref_no unique_code --------------------------- abcd 1010 1111 cdef 1011 1113 ```
Did you try it like this ? ``` DELETE FROM names a WHERE ROWID > (SELECT MIN(ROWID) FROM names b WHERE b.doc_name=a.doc_name AND b.ref_no=a.ref_no ) ``` try this also ``` SELECT * FROM doc_unique WHERE (DIV_CD, DOC_NAME, B_DT, FT_NM, UNQ_CD, DESG_CD, SPEC_CD) IN (SELECT DIV_CD, DOC_NAME, B_DT, FT_NM, UNQ_CD, DESG_CD, SPEC_CD FROM doc_unique GROUP BY DIV_CD, DOC_NAME, B_DT, FT_NM, UNQ_CD, DESG_CD, SPEC_CD HAVING COUNT(*) > 1) ```
You can use [ROW\_NUMBER()](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions137.htm), to detect duplicate rows and delete them. ``` DELETE tblName WHERE ROWID IN ( SELECT ROWID FROM( SELECT ROW_NUMBER() OVER (PARTITION BY doc_name, ref_no ORDER BY doc_name, ref_no) AS Rn ,doc_name ,ref_no ,unique_code FROM tblName ) WHERE Rn > 1 ) ```
How to delete duplicate rows in Oracle
[ "", "sql", "oracle", "" ]
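The `MIN(ROWID)` de-duplication pattern also runs on SQLite, which exposes an implicit rowid just like Oracle. The rows below are the doctor table from the question (table renamed `docs` here):

```python
import sqlite3

# Delete every row whose rowid is not the smallest within its (name, ref) group.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE docs (doc_name TEXT, ref_no INTEGER, unique_code INTEGER);
INSERT INTO docs VALUES
  ('abcd', 1010, 1111), ('abcd', 1010, 1112),
  ('cdef', 1011, 1113), ('cdef', 1011, 1114);
""")

conn.execute("""
DELETE FROM docs
WHERE rowid > (SELECT MIN(b.rowid) FROM docs b
               WHERE b.doc_name = docs.doc_name
                 AND b.ref_no   = docs.ref_no)
""")

rows = conn.execute(
    "SELECT doc_name, ref_no, unique_code FROM docs ORDER BY ref_no").fetchall()
print(rows)  # [('abcd', 1010, 1111), ('cdef', 1011, 1113)]
```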
I have seen similar but not exactly the same requests. I have the following table ``` Parent Child 1 2 1 3 4 3 5 1 6 1 5 7 8 9 ``` If I selected "1", I would expect back all records where 1 is the parent or child, but also all related parents and children, for instance row "5 , 7", because 5 is the parent of "1". So the result set for 1 would be ``` Parent Child 1 2 1 3 4 3 5 1 6 1 5 7 ``` So it would ***NOT*** include the row ``` Parent Child 8 9 ``` This is as close as I can get so far ``` ;WITH LinksDown AS ( SELECT * FROM RecursiveTable WHERE Parent = 1 UNION ALL SELECT rt.* FROM RecursiveTable rt JOIN LinksDown ld on ld.Child = rt.Parent ), LinksUp AS ( SELECT * FROM RecursiveTable WHERE Child = 1 UNION ALL SELECT rt.* FROM RecursiveTable rt JOIN LinksUp lu on lu.Child = rt.Parent ) select distinct * from LinksDown Union All select distinct * from LinksUp ``` But this has the following output, which is far from what's needed ``` Parent Child 1 2 1 3 1 2 1 3 5 1 6 1 ```
Here are two approaches. The first uses a CTE that is quite inefficient. The problem is that during recursion you cannot examine all of the other rows in the result set. While you can build a list of the rows that have contributed to a given row, you cannot check to see if you already reached that row via another path. The second approach uses a loop to populate a table with relations one step at a time. It is a much better method than the CTE. Left as an exercise for the reader: Will the two methods terminate in the presence of a cycle in the "tree", e.g. 1 > 2 > 3 > 1? ``` -- Sample data. declare @RecursiveTable as Table ( Parent Int, Child Int ); insert into @RecursiveTable ( Parent, Child ) values ( 1, 2 ), ( 1, 3 ), ( 4, 3 ), ( 5, 1 ), ( 6, 1 ), ( 5, 7 ), ( 8, 9 ); select * from @RecursiveTable; -- Walk the tree with a recursive CTE. -- NB: This is woefully inefficient since we cannot promptly detect -- rows that have already been processed. declare @Start as Int = 1; with Pairs as ( select Parent, Child, Cast( Parent as VarChar(10) ) + '/' + Cast( Child as VarChar(10) ) as Pair from @RecursiveTable ), Relations as ( select Parent, Child, Cast( '|' + Pair + '|' as VarChar(1024) ) as Path from Pairs where Parent = @Start or Child = @Start union all select P.Parent, P.Child, Cast( R.Path + P.Pair + '|' as VarChar(1024) ) from Relations as R inner join Pairs as P on P.Child = R.Parent or P.Parent = R.Child or P.Child = R.Child or P.Parent = R.Parent where CharIndex( '|' + P.Pair + '|', R.Path ) = 0 ) -- To see how terrible this is, try: select * from Relations select distinct Parent, Child from Relations order by Parent, Child; -- Try again a loop to add relations to a working table. 
declare @Relations as Table ( Parent Int, Child Int ); insert into @Relations select Parent, Child from @RecursiveTable where Parent = @Start or Child = @Start; while @@RowCount > 0 insert into @Relations select RT.Parent, RT.Child from @Relations as R inner join @RecursiveTable as RT on RT.Child = R.Child or RT.Parent = R.Parent or RT.Child = R.Parent or RT.Parent = R.Child except select Parent, Child from @Relations; select Parent, Child from @Relations order by Parent, Child; ```
I think that you could still do this with a CTE, as part of a stored procedure. (The performance will be lousy, but this should work.) The normal method of using recursive CTE's commonly generates 3 columns: ParentID, ChildID, RecursionLevel. My suggestions is to return one more column... A string that is the concatenation of all of the parents IDs. (Probably with some separator value, like a vertical pipe.) From there, you should be able to select every row where the `IDString` column contains your ID. (In your case, it would be "1".) This should return every record where your search ID occurs somewhere within the hierarchy, and not just as a parent or child. **EDIT:** Here is a sample. I'm using curly brackets { and } as my separators, I also realized that the code would be cleaner if I added an "IsLeaf" indicator to reduce duplication, since the leaf-level records would contain the IDs of all of their ancestors... ``` DECLARE @MyTable TABLE(P int, C int) -- Parent & Child INSERT @MyTable VALUES( 1, 2 ); INSERT @MyTable VALUES( 1, 3 ); INSERT @MyTable VALUES( 3, 4 ); INSERT @MyTable VALUES( 3, 5 ); INSERT @MyTable VALUES( 2, 6 ); INSERT @MyTable VALUES( 5, 7 ); INSERT @MyTable VALUES( 6, 8 ); INSERT @MyTable VALUES( 8, 9 ); -- In order to user a recursive CTE, you need to "know" which records are the 'root' records... INSERT @MyTable VALUES ( null, 1 ); /* 9 / 8 / 6 / 2 / 1 4 Using this example, if the user searched for 1, everything would show up. 
\ / Searching for 3 would return 1, 3, 4, 5, 7 3 Searching for 7 would return 1, 3, 5, 7 \ 5 \ 7 */ WITH RecursiveCTE AS ( SELECT C as ID, 0 as Level, CONVERT(varchar(max), '{' + CONVERT(char(1), C) + '}') as IDList, CASE WHEN EXISTS (SELECT * FROM @MyTable B Where B.P = 1) THEN 0 ELSE 1 END as IsLeaf FROM @MyTable A Where A.P IS NULL UNION ALL SELECT child.C as ID, Level + 1 as Level, IDList + '{' + CONVERT(varchar(max), child.C) + '}' as IDList, CASE WHEN EXISTS (SELECT * FROM @MyTable C Where C.P = child.C) THEN 0 ELSE 1 END as IsLeaf FROM RecursiveCTE as parent INNER JOIN @MyTable child ON child.P = parent.ID ) SELECT IDList -- Every ID listed here is a row that you want. FROM RecursiveCTE WHERE IsLeaf = 1 AND IDList LIKE '%{3}%' ```
Parent Child SQL Recursion
[ "", "sql", "sql-server", "t-sql", "recursion", "parent-child", "" ]
I have a table called OrderItems. I want to write a query that returns rows where the quantity = 1, BUT if there are any other records with the same OrderID, I don't want anything returned. ``` Id OrderID Qty SKU 1 123 1 abc123 2 124 2 sho221 3 125 1 toy903 4 125 3 ball05 5 155 1 gree32 ``` Using the above example data, I want Orders 123 and 155 returned only, because they are for a single SKU with a qty of 1. Order 125 has two SKUs and even though the first one (Id 3) has a qty of 1, the other (Id 4) has a qty of 3, so I don't want those Orders returned. The query I'm playing with so far is: ``` SELECT o.Id FROM Orders o INNER JOIN OrderItems oi ON oi.OrderId = o.Id WHERE oi.Quantity = 1 GROUP BY o.Id ``` but this query does not take into account orders that have more than one SKU, just orders that happen to have SKUs with a qty of 1. **In plain English - I want to find orders that only contain a single SKU and of those, they must have a qty of one.**
I think you could do this also with having: ``` SELECT oi.OrderId, sum(oi.Quantity) FROM Orders o INNER JOIN OrderItems oi ON oi.OrderId = o.Id group by oi.OrderId having sum(oi.Quantity) = 1 and count(*) = 1 ```
You didn't specify your SQL Server version, but using a Group Count will remove the need to join back to `OrderItems`: ``` select * from ( select *, count(*) over (partition by orderId) as cnt from OrderItems ) dt where cnt = 1 and Qty = 1 ``` [Fiddle](http://sqlfiddle.com/#!6/1c46f/8) If this is more efficient depends on the actual data...
SQL query to find unique Orders with a Quantity of 1
[ "", "sql", "sql-server", "" ]
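The `HAVING` idea from the accepted answer can be run against the question's sample rows on SQLite (the join to an `Orders` table is dropped here since only `OrderItems` data is shown): keep orders whose lines collapse to exactly one row with a total quantity of 1.

```python
import sqlite3

# Orders with a single line item and a total quantity of 1.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE OrderItems (Id INTEGER, OrderID INTEGER, Qty INTEGER, SKU TEXT);
INSERT INTO OrderItems VALUES
  (1, 123, 1, 'abc123'), (2, 124, 2, 'sho221'),
  (3, 125, 1, 'toy903'), (4, 125, 3, 'ball05'),
  (5, 155, 1, 'gree32');
""")

single_sku_orders = [r[0] for r in conn.execute("""
SELECT OrderID FROM OrderItems
GROUP BY OrderID
HAVING SUM(Qty) = 1 AND COUNT(*) = 1
ORDER BY OrderID
""")]
print(single_sku_orders)  # [123, 155]
```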
How can I constrain an attribute in a table to only allow values between 1 and 10? This is the statement so far... I have no idea how to make OfficeNumber accept only values in that interval ``` CREATE TABLE OfficeStaff( EID INT PRIMARY KEY, OfficeNumber INT NOT NULL ); ```
Use a `check` constraint: ``` CREATE TABLE OfficeStaff ( EID INT PRIMARY KEY, OfficeNumber INT NOT NULL, CHECK (OfficeNumber BETWEEN 1 AND 10) ); ``` Note, though, that there is another, perhaps better approach. You should have an `OfficeNumbers` table with the valid office numbers. Then you can use a foreign key relationship to enforce the numbering, without having to hard-code the numbers.
You can use domains for this purpose: ``` create domain mydomain as integer check(value between 1 and 10) create table mytable(id serial primary key, md mydomain not null) ``` -- this two will succeed ``` insert into mytable(md) values(1) insert into mytable(md) values(2) ``` -- that one will fail ``` insert into mytable(md) values(12) ``` ERROR: value for domain mydomain violates check constraint "mydomain\_check" \*\*\*\*\*\*\*\*\*\* Error \*\*\*\*\*\*\*\*\*\* ERROR: value for domain mydomain violates check constraint "mydomain\_check" More information can be found here: <http://www.postgresql.org/docs/9.1/static/sql-createdomain.html>
Create table with attribute which must be in range of 1-10 (Postgres)
[ "", "sql", "postgresql", "" ]
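The `CHECK` constraint from the accepted answer can be verified on SQLite (which enforces CHECK the same way): an in-range row inserts cleanly, while an out-of-range row raises an integrity error.

```python
import sqlite3

# CHECK constraint restricting OfficeNumber to 1..10.
conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE OfficeStaff (
    EID INT PRIMARY KEY,
    OfficeNumber INT NOT NULL,
    CHECK (OfficeNumber BETWEEN 1 AND 10)
)""")

conn.execute("INSERT INTO OfficeStaff VALUES (1, 7)")  # accepted
try:
    conn.execute("INSERT INTO OfficeStaff VALUES (2, 12)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True

count = conn.execute("SELECT COUNT(*) FROM OfficeStaff").fetchone()[0]
print(rejected, count)  # True 1
```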
``` **Table Employee** Id Name 1 EmpName1 2 EmpName2 3 EmpName3 **Table EmpDeptHistory** Id EmpId Dept Date 1 1 Housing 2015-03-02 2 2 Finance 2015-01-03 3 1 WareHouse 2015-05-02 4 2 Housing 2015-02-06 5 3 WareHouse 2015-02-02 6 1 Housing 2015-05-01 7 2 Finance 2015-01-02 8 2 Housing 2015-05-04 9 2 Finance 2015-05-02 10 1 WareHouse 2015-03-08 11 1 Housing 2015-02-20 ``` I need to find the most recent dept each employee worked with. I also need to be able to find it for an individual employee by passing EmpId. The following query returns only one employee, not all :( ``` SELECT e.id, edh.dept,edh.date FROM Employee e inner join (select top 1 eh.empid, eh.dept, eh.date from EmpDeptHistory eh order by eh.date desc) as edh on e.id=edh.empid ``` Yes, I understand the top 1 will give the emp id based on date, hence only one employee's details are shown. I am not sure how to get every employee's most recent department. ``` select e.id,edh.dept,edh.date from employee e inner join EmpDeptHistory edh on e.id = (Select eh.empid, eh.dept, eh.date from EmpDeptHistory eh where e.id=eh.empid order by eh.date desc) ``` The above throws > The ORDER BY clause is invalid in views, inline functions, derived > tables, subqueries, and common table expressions, unless TOP, > OFFSET or FOR XML is also specified.
You can use [`CROSS APPLY`](https://msdn.microsoft.com/en-us/library/ms177634.aspx) to run the right-hand subquery once for each left-hand row: ``` SELECT e.id, edh.dept,edh.date FROM Employee e cross apply ( select top 1 eh.empid, eh.dept, eh.date from EmpDeptHistory eh where eh.empid = e.id order by eh.date desc) as edh ```
You can use a CTE and a ranking function like `ROW_NUMBER`: ``` WITH CTE AS ( SELECT e.id, edh.dept, edh.date, rn = ROW_NUMBER() OVER (PARTITION BY edh.EmpId ORDER BY edh.date DESC) FROM Employee e inner join EmpDeptHistory edh on e.id = edh.empid ) SELECT id, dept, date FROM CTE WHERE rn = 1 ``` `DEMO`
Inner join with one row of another table
[ "", "sql", "sql-server", "sql-server-2012", "inner-join", "" ]
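`CROSS APPLY` is SQL Server-specific, but the same result can be checked portably on SQLite with the question's sample data, using a correlated `MAX(date)` subquery in place of the `TOP 1 ... ORDER BY` (this is an equivalent rewrite, not the answer's exact syntax):

```python
import sqlite3

# Latest department per employee via a correlated MAX(date) subquery.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Employee (Id INTEGER, Name TEXT);
CREATE TABLE EmpDeptHistory (Id INTEGER, EmpId INTEGER, Dept TEXT, Date TEXT);
INSERT INTO Employee VALUES (1,'EmpName1'), (2,'EmpName2'), (3,'EmpName3');
INSERT INTO EmpDeptHistory VALUES
  (1,1,'Housing','2015-03-02'), (2,2,'Finance','2015-01-03'),
  (3,1,'WareHouse','2015-05-02'), (4,2,'Housing','2015-02-06'),
  (5,3,'WareHouse','2015-02-02'), (6,1,'Housing','2015-05-01'),
  (7,2,'Finance','2015-01-02'), (8,2,'Housing','2015-05-04'),
  (9,2,'Finance','2015-05-02'), (10,1,'WareHouse','2015-03-08'),
  (11,1,'Housing','2015-02-20');
""")

rows = conn.execute("""
SELECT e.Id, h.Dept, h.Date
FROM Employee e
JOIN EmpDeptHistory h ON h.EmpId = e.Id
WHERE h.Date = (SELECT MAX(h2.Date) FROM EmpDeptHistory h2 WHERE h2.EmpId = e.Id)
ORDER BY e.Id
""").fetchall()
print(rows)  # [(1, 'WareHouse', '2015-05-02'), (2, 'Housing', '2015-05-04'), (3, 'WareHouse', '2015-02-02')]
```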
I have `book` and `store_order` tables. I want to create a trigger (but it contains an error): ``` DELIMITER $$ CREATE TRIGGER t1 BEFORE DELETE ON store_order FOR EACH ROW BEGIN UPDATE book SET number = number + NEW.quantity WHERE ISBN = NEW.ISBN; END $$ DELIMITER ; ```
``` DELIMITER $$ CREATE TRIGGER t2 AFTER delete ON library.store_order FOR EACH ROW BEGIN update library.book set library.book.number = (library.book.number + OLD.quantity) where library.book.ISBN = OLD.ISBN; END$$ DELIMITER ; ```
Use OLD instead of NEW when you want to get the deleted object. for an example of my case. I'm getting the id of the newly added role by calling > NEW.id and getting the same field's value while deleting by calling > OLD.id Example: ``` DELIMITER $$ CREATE TRIGGER after_insert_role AFTER INSERT ON role FOR EACH ROW BEGIN INSERT INTO `sync_mapping` (`operation_type`, `table_name`, `oid`, `end_point`) VALUES ('insert', 'role', NEW.id, 'new/role'); END $$ DELIMITER $$ CREATE TRIGGER after_delete_role AFTER DELETE ON role FOR EACH ROW BEGIN INSERT INTO `sync_mapping` (`operation_type`, `table_name`, `oid`, `end_point`) VALUES ('delete', 'role', OLD.id, 'delete/role'); END $$ ```
I have error "#1363 - There is no NEW row in on DELETE trigger "
[ "", "sql", "mysql", "database", "triggers", "mysql-error", "" ]
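The `OLD`-row fix also works in SQLite triggers, so the restocking behavior can be checked end-to-end with a tiny invented book and order (ISBN and quantities made up):

```python
import sqlite3

# AFTER DELETE trigger using OLD to restore the deleted order's quantity.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE book (ISBN TEXT, number INTEGER);
CREATE TABLE store_order (id INTEGER, ISBN TEXT, quantity INTEGER);
INSERT INTO book VALUES ('978-0', 5);
INSERT INTO store_order VALUES (1, '978-0', 3);

CREATE TRIGGER restock AFTER DELETE ON store_order
BEGIN
    UPDATE book SET number = number + OLD.quantity
    WHERE ISBN = OLD.ISBN;
END;
""")

conn.execute("DELETE FROM store_order WHERE id = 1")
stock = conn.execute("SELECT number FROM book WHERE ISBN = '978-0'").fetchone()[0]
print(stock)  # 8
```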
I am new to SQL and I am using MS SQL Server Management Studio 2014. I have 3 tables called Pizza, Pizza\_Topping and Topping. I want to list the pizzas that have TOPPING1 and TOPPING2 as toppings. This is what I have come up with, ``` select Pizza.pizzaID, Pizza.pizzaName, Topping.toppingName from Pizza left join Pizza_Topping on Pizza.pizzaID = Pizza_Topping.pizzaID left join Topping on Topping.toppingID = Pizza_Topping.toppingID where Topping.toppingName in ('topping1', 'topping2') ``` and this gives me ``` pizzaID pizzaName toppingName -------- ---------------- -------------- PZ002 | PIZZA1 | TOPPING1 PZ002 | PIZZA1 | TOPPING2 PZ010 | PIZZA5 | TOPPING1 PZ010 | PIZZA5 | TOPPING2 PZ011 | PIZZA6 | TOPPING1 PZ012 | PIZZA7 | TOPPING2 ``` I only need the first four rows because the last two pizzas only have one of the toppings, not both. I have tried this as well, ``` GROUP BY Pizza.pizzaID,Pizza.pizzaName, Topping.toppingName HAVING COUNT(toppingName) >= 2 ``` but it didn't give the expected result. That line can be used if I am going to display only `pizzaID` and `pizzaName`, but I want to display `toppingName` as well. To be more clear, this is what I am expecting ``` pizzaID pizzaName toppingName -------- ---------------- -------------- PZ002 | PIZZA1 | TOPPING1 PZ002 | PIZZA1 | TOPPING2 PZ010 | PIZZA5 | TOPPING1 PZ010 | PIZZA5 | TOPPING2 ``` Please tell me how to achieve this result. Thanks
I think this could be done more easily if you can get the `count()` in your subquery and then join back all the pizzaid's that have count greater than 1 something like this: ``` SELECT q1.pizzaID ,q1.PizzaName ,t.ToppingName FROM ( SELECT Pizza.pizzaID ,Pizza.pizzaName ,count(topping.ToppingName) total_count FROM Pizza INNER JOIN Pizza_Topping ON Pizza.pizzaID = Pizza_Topping.pizzaID INNER JOIN Topping ON Topping.toppingID = Pizza_Topping.toppingID GROUP BY Pizza.pizzaID ,Pizza.pizzaName HAVING count(topping.ToppingName) > 1 ) q1 INNER JOIN Pizza_Topping pt ON q1.pizzaID = pt.pizzaID INNER JOIN Topping t ON t.toppingID = pt.toppingID WHERE t.toppingName in ('topping1', 'topping2') ```
You are almost correct. You need to remove `ToppingName` from the `GROUP BY`: ``` GROUP BY Pizza.pizzaID, Pizza.pizzaName HAVING COUNT(toppingName) >= 2 ``` You also need to remove it from the `SELECT` if it is also there: ``` select p.pizzaID, p.pizzaName from Pizza p join Pizza_Topping pt on p.pizzaID = pt.pizzaID join Topping t ont.toppingID = pt.toppingID where t.toppingName in ('topping1', 'topping2') group by p.pizzaID, p.pizzaName having count(*) >= 2; ``` Also notice two things. Because you have a condition on toppings, the `left join` is unnecessary. You are only looking for matches, so `inner join` is appropriate. The use of table aliases makes the query easier to write and to read. This assumes that pizzas cannot have duplicate toppings. If so, then change the last condition to: ``` having count(distinct t.toppingName) >= 2 ```
Where clause with multiple values
[ "", "sql", "sql-server", "" ]
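The `GROUP BY … HAVING` filter plus join-back from the pizza-toppings record above can be checked end to end. The sketch below uses Python's `sqlite3` as a stand-in engine (not SQL Server); the schema and sample rows are assumptions reconstructed from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Pizza (pizzaID TEXT PRIMARY KEY, pizzaName TEXT);
CREATE TABLE Topping (toppingID INTEGER PRIMARY KEY, toppingName TEXT);
CREATE TABLE Pizza_Topping (pizzaID TEXT, toppingID INTEGER);
INSERT INTO Pizza VALUES ('PZ002','PIZZA1'),('PZ010','PIZZA5'),
                         ('PZ011','PIZZA6'),('PZ012','PIZZA7');
INSERT INTO Topping VALUES (1,'TOPPING1'),(2,'TOPPING2');
INSERT INTO Pizza_Topping VALUES ('PZ002',1),('PZ002',2),('PZ010',1),
                                 ('PZ010',2),('PZ011',1),('PZ012',2);
""")

# Keep only pizzas whose group has BOTH requested toppings, then join
# back so each topping still appears as its own row.
rows = con.execute("""
SELECT p.pizzaID, p.pizzaName, t.toppingName
FROM Pizza p
JOIN Pizza_Topping pt ON p.pizzaID = pt.pizzaID
JOIN Topping t        ON t.toppingID = pt.toppingID
WHERE t.toppingName IN ('TOPPING1','TOPPING2')
  AND p.pizzaID IN (
        SELECT pt2.pizzaID
        FROM Pizza_Topping pt2
        JOIN Topping t2 ON t2.toppingID = pt2.toppingID
        WHERE t2.toppingName IN ('TOPPING1','TOPPING2')
        GROUP BY pt2.pizzaID
        HAVING COUNT(DISTINCT t2.toppingName) >= 2)
ORDER BY p.pizzaID, t.toppingName
""").fetchall()
print(rows)
```

Only PZ002 and PZ010 survive the `HAVING` filter, each contributing two topping rows, which matches the expected output in the question.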
I have a table called Employee. The Employee table has `ID, Name, DOB, DOJ, Designation` columns. In a stored procedure, I want something like this: when the input parameter's value is 1, select DOB in the select clause as well as in the where clause; otherwise use DOJ. Here is my query. I am able to manage this in the select clause but not in the where clause. ``` select Name,Designation ,case when @value = 1 then DOB else DOJ end from Employee where DOB >='06/01/2015' and DOB < dateadd(dd,1,'06/01/2015') ``` If @value is anything other than 1, I want to use DOJ in the select and where clauses.
Nest the `CASE` in a Derived Table, a smart optimizer should be able to resolve this to either `DOB` or `DOJ`: ``` select * from ( select Name,Designation, case when @value = 1 then DOB else DOJ end as dt from Employee ) as dt where dt >='06/01/2015' and dt < dateadd(dd,1,'06/01/2015') ```
For the WHERE clause, it's probably easiest to just use two distinct tests: ``` ((@value = 1 AND DOB >='06/01/2015' and DOB < dateadd(dd,1,'06/01/2015')) OR (@value <> 1 AND DOJ >='06/01/2015' and DOJ < dateadd(dd,1,'06/01/2015'))) ```
case statement in sql?
[ "", "sql", "" ]
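The derived-table trick from the accepted answer above (compute the `CASE` once, then filter on its alias) can be demonstrated with Python's `sqlite3` as a stand-in engine. The sample employees and the ISO date literals are assumptions for the sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Employee (ID INTEGER, Name TEXT, DOB TEXT, DOJ TEXT, Designation TEXT);
INSERT INTO Employee VALUES
 (1,'Ann','2015-06-01','2015-01-15','Dev'),
 (2,'Bob','1990-03-10','2015-06-01','QA');
""")

def employees_on(value, day):
    # The CASE picks DOB when the parameter is 1, otherwise DOJ; wrapping
    # it in a derived table lets the WHERE clause filter on whichever
    # column was chosen, via the single alias `dt`.
    return con.execute("""
        SELECT Name, Designation, dt
        FROM (SELECT Name, Designation,
                     CASE WHEN ? = 1 THEN DOB ELSE DOJ END AS dt
              FROM Employee)
        WHERE dt = ?
    """, (value, day)).fetchall()

by_dob = employees_on(1, '2015-06-01')   # matches Ann's DOB
by_doj = employees_on(0, '2015-06-01')   # matches Bob's DOJ
print(by_dob, by_doj)
```

Flipping the parameter switches which date column drives both the projection and the filter, without duplicating the condition.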
I am using SQL Server and have a table set up like below: ``` | id | subject | content | moreContent | modified | | 1 | subj1 | aaaa | aaaaaaaaaaa | 03/03/2015 | | 2 | subj1 | bbbb | aaaaaaaaaaa | 03/05/2015 | | 3 | subj2 | cccc | aaaaaaaaaaa | 03/03/2015 | | 4 | subj1 | dddd | aaaaaaaaaaa | 03/01/2015 | | 5 | subj2 | eeee | aaaaaaaaaaa | 07/02/2015 | ``` I want to select the latest record for each subject heading, so the records to be returned would be: ``` | id | subject | content | moreContent | modified | | 2 | subj1 | bbbb | aaaaaaaaaaa | 03/05/2015 | | 3 | subj2 | cccc | aaaaaaaaaaa | 03/03/2015 | SELECT Subject, MAX(Modified) FROM [CareManagement].[dbo].[Careplans] GROUP BY Subject ``` I could do a query like the one above, but I want to preserve all of the content from the selected rows. To return the content columns I would need to apply an aggregate function, or add them to the group by clause which wouldn't give me the desired effect. I have also looked at nested queries but not found a successful solution yet. If anyone could assist that would be great.
You can use [`ROW_NUMBER()`](https://msdn.microsoft.com/en-us/library/ms186734.aspx): ``` SELECT id, subject, content, moreContent, modified FROM ( SELECT id, subject, content, moreContent, modified, ROW_NUMBER() OVER (PARTITION BY subject ORDER BY modified DESC) AS rn FROM [CareManagement].[dbo].[Careplans] ) t WHERE rn = 1 ``` `rn = 1` will return each record having the latest `modified` date per `subject`. In case there are two or more records sharing the same 'latest' date and you want *all* of these records returned, then you might have a look at `RANK()` window function.
Using ROW\_NUMBER this becomes pretty simple. ``` with myCTE as ( select id , Subject , content , morecontent , Modified , ROW_NUMBER() over (PARTITION BY [Subject] order by Modified desc) as RowNum from [CareManagement].[dbo].[Careplans] ) select id , Subject , content , morecontent , Modified from myCTE where RowNum = 1 ```
SQL query to select the latest records with a distinct subject
[ "", "sql", "sql-server", "group-by", "subquery", "aggregate-functions", "" ]
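The `ROW_NUMBER() OVER (PARTITION BY … ORDER BY … DESC)` pattern from the latest-record-per-subject answers above runs unchanged in SQLite 3.25+ (bundled with recent Python builds), so it can be sketched via `sqlite3`. ISO dates are used here instead of the question's ambiguous `mm/dd/yyyy` strings, so they sort correctly as text:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Careplans (id INTEGER, subject TEXT, content TEXT, modified TEXT);
INSERT INTO Careplans VALUES
 (1,'subj1','aaaa','2015-03-03'),
 (2,'subj1','bbbb','2015-03-05'),
 (3,'subj2','cccc','2015-03-03'),
 (4,'subj1','dddd','2015-03-01');
""")

# Number rows within each subject, newest first; rn = 1 is the latest
# row per subject, with all of its columns preserved.
rows = con.execute("""
SELECT id, subject, content, modified
FROM (SELECT id, subject, content, modified,
             ROW_NUMBER() OVER (PARTITION BY subject
                                ORDER BY modified DESC) AS rn
      FROM Careplans)
WHERE rn = 1
ORDER BY subject
""").fetchall()
print(rows)
```

Exactly one full row per subject comes back, which is what makes this preferable to `GROUP BY subject, MAX(modified)` when the non-key columns must be preserved.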
I'm trying to alter a Wordpress search query so that it returns broader results. **The scenario:** There is a post entry titled "Carousel Watch" in the database. When I search either (or both) of these words, I get the post returned, which is what I'd expect. If, however, I search for "Carousel Watch Gift", I do not get any results. The SQL query for this search is the following: ``` SELECT SQL_CALC_FOUND_ROWS sosen_posts.ID FROM sosen_posts WHERE 1=1 AND (((sosen_posts.post_title LIKE '%carousel%') OR (sosen_posts.post_content LIKE '%carousel%')) AND ((sosen_posts.post_title LIKE '%watch%') OR (sosen_posts.post_content LIKE '%watch%')) AND ((sosen_posts.post_title LIKE '%gift%') OR (sosen_posts.post_content LIKE '%gift%'))) AND (sosen_posts.post_password = '') AND sosen_posts.post_type = 'wp_aff_products' AND (sosen_posts.post_status = 'publish') ORDER BY (CASE WHEN sosen_posts.post_title LIKE '%carousel watch gift%' THEN 1 WHEN sosen_posts.post_title LIKE '%carousel%' AND sosen_posts.post_title LIKE '%watch%' AND sosen_posts.post_title LIKE '%gift%' THEN 2 WHEN sosen_posts.post_title LIKE '%carousel%' OR sosen_posts.post_title LIKE '%watch%' OR sosen_posts.post_title LIKE '%gift%' THEN 3 WHEN sosen_posts.post_content LIKE '%carousel watch gift%' THEN 4 ELSE 5 END), sosen_posts.post_date DESC LIMIT 0, 3 ``` My questions are: 1. What part of the query prevents returning results ("Carousel Watch")? 2. How should I alter it to get the result returned? 3. And finally, what does the `THEN` statement do in this query? Does it set the value to compare in the `WHERE` statement?
The `THEN` keyword is used as part of a `CASE` clause. A `CASE` clause looks like this: ``` CASE WHEN expression THEN value WHEN otherexpression THEN othervalue ... END ``` The result of the expression is a single *value* (sometimes you see people try to use a `CASE` clause to determine what code will execute, and this won't work). In this query, the value from the `CASE` clause is used to determine sort order for posts, and has no bearing on which records are included or not included in the results. To find out which records are included in the results, we need to dissect the `WHERE` clause. I re-formatted the existing clause for easier reading below: ``` WHERE 1=1 AND ( (sosen_posts.post_title LIKE '%carousel%' OR sosen_posts.post_content LIKE '%carousel%') AND (sosen_posts.post_title LIKE '%watch%' OR sosen_posts.post_content LIKE '%watch%') AND (sosen_posts.post_title LIKE '%gift%' OR sosen_posts.post_content LIKE '%gift%') ) AND sosen_posts.post_password = '' AND sosen_posts.post_type = 'wp_aff_products' AND sosen_posts.post_status = 'publish' ``` First, the 1=1 part. This is common for autogenerated code. The generator can put a `WHERE 1=1` at the beginning of the `WHERE` clause whether or not there are any conditions, and the query will still be valid. Then, for each condition, it can always use the form `AND condition`, without worrying about the prior state of the WHERE clause. Moving into the next section, we see it checks each keyword individually against both the title and text. These checks are connected via `AND` operators. This means that if your post does not have the word `gift` in the title or body *somewhere*, it cannot appear in the results. Finally, this code is *incredibly inefficient*. You **never** want to see a `LIKE` operator with a leading wildcard (`%`), because it pretty much guarantees that you can't use any indexes to satisfy that condition.
I'm more of a SQL Server guy, but in SQL Server land, what you want to do instead is create a special kind of index, called a `full text` index, and write the query using a special `CONTAINS()` clause. Failing that, you use a 3rd-party search library such as Lucene. You *never* want to use `LIKE` queries for searches like this. I'm not sure what the MySQL equivalents to these alternatives are, but what you have here is *not* the way you want to be doing this.
1. The LIKE clauses require one or more of each of the input words to exist (I suspect this is how the auto-generator works). If your posts do not contain all 3 words it will not therefore get returned. 2. You can remove the LIKE clauses for "Gift", though I think this will give you the same code as if you just searched for the first two words. 3. THEN comes before what is to be returned by the CASE function IE WHEN {true} THEN {return\_this} Hope this helps.
THEN statement in a SQL query
[ "", "mysql", "sql", "" ]
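The explanation above — one AND-connected `(title LIKE … OR content LIKE …)` group per search word, so every word must appear somewhere — can be reproduced in miniature. This sketch uses Python's `sqlite3` (not MySQL/WordPress) and a hypothetical `posts` table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE posts (post_title TEXT, post_content TEXT)")
con.execute("INSERT INTO posts VALUES ('Carousel Watch', 'a fine watch')")

def search(words):
    # Mimic the generated WordPress WHERE clause: AND together one
    # (title LIKE ? OR content LIKE ?) group per word.
    clause = " AND ".join(
        "(post_title LIKE ? OR post_content LIKE ?)" for _ in words)
    params = [p for w in words for p in (f"%{w}%", f"%{w}%")]
    return con.execute(
        f"SELECT post_title FROM posts WHERE {clause}", params).fetchall()

all_words = search(["carousel", "watch", "gift"])  # 'gift' appears nowhere
two_words = search(["carousel", "watch"])
print(all_words, two_words)
```

The three-word search returns nothing because the post never mentions "gift", while the two-word search matches — the same behavior the question reports.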
I am using more than one `LIKE` condition and trying to order the results by the main keyword "sun screen", but I'm not getting the result I want; please check my query. ``` select productName from products where product like "%sun%" or product like "%screen%" or product like "%sun screen%" order by product like "sun screen %" ``` The result should be: ``` sun screen olyiv sun screen dabaga sun bla screen bla bla ``` Please help me to get this result.
You can try to use `ORDER BY CASE`: ``` SELECT p.productName FROM products p WHERE p.productName LIKE '%sun%' OR p.productName LIKE '%screen%' OR p.productName LIKE '%sun screen%' ORDER BY CASE WHEN p.productName LIKE '%sun screen%' THEN 1 ELSE 2 END ```
You can try something like this : ``` Select productName from products where productName like "%sun%" or productName like "%screen%" or productName like "%sun screen%" Order by CASE WHEN (productName like "sun screen %") THEN NULL ELSE productName END ASC ```
Query with more than one like condition, with order by LIKE
[ "", "mysql", "sql", "database", "plsql", "" ]
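The `ORDER BY CASE` relevance ranking from the accepted answer above can be sketched with Python's `sqlite3` as a stand-in for MySQL. The product names are partly made up for the demo, and within each relevance tier this sketch simply falls back to alphabetical order:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE products (productName TEXT)")
con.executemany("INSERT INTO products VALUES (?)", [
    ("sun bla screen bla bla",),
    ("sun screen olyiv",),
    ("screen only",),
])

# Exact-phrase matches get rank 1 and sort to the top; everything else
# that matched a single word gets rank 2.
rows = [r[0] for r in con.execute("""
SELECT productName
FROM products
WHERE productName LIKE '%sun%' OR productName LIKE '%screen%'
ORDER BY CASE WHEN productName LIKE '%sun screen%' THEN 1 ELSE 2 END,
         productName
""")]
print(rows)
```

The phrase match "sun screen olyiv" comes first even though the other rows also satisfy the `WHERE` clause.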
Let's say I have a table called `nameAge`: ``` ID Name Age 1 X 12 2 Y 12 3 null null 4 Z 12 ``` and when I run a query like: ``` select * from nameAge where Age <> 12 ``` it returns an empty result set, even though I have a row with `id 3` where `age` is `null` (and so not 12). Using `SQL Server 2008 R2`. Any ideas? Edit: The suggested duplicate may overlap at one point, but it does not cover this at all; it shows how to handle null values when compared with null, whereas what I wanted to ask about was the result set, which should include null values.
This is the intended behavior. You cannot compare `NULL` values using `=` or `<>`. You have to use `IS NULL` or `IS NOT NULL`. If you want `NULL` values only use `IS NULL`: ``` select * from nameAge where age IS NULL ``` If you want `NULL` values with `age <> 12` values, use: ``` select * from nameAge where age <> 12 OR age IS NULL ```
The expression ``` WHERE NULL <> 12 ``` does not return `TRUE` or `FALSE`, but actually returns `UNKNOWN`. This means that the third record in your table will not be returned by your query. As @ughai mentioned, you should use `IS NULL` instead to query that record: ``` SELECT * FROM nameAge WHERE age IS NULL ``` Have a look at the [Microsoft SQL Server documentation](https://technet.microsoft.com/en-us/library/aa196339%28v=sql.80%29.aspx) for more information.
unusual sql server query result
[ "", "sql", "sql-server", "database", "sql-server-2008-r2", "" ]
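The three-valued-logic behavior both answers above describe — `NULL <> 12` evaluates to UNKNOWN, so the row is filtered out — is standard SQL and can be reproduced with Python's `sqlite3` as a stand-in for SQL Server:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE nameAge (ID INTEGER, Name TEXT, Age INTEGER)")
con.executemany("INSERT INTO nameAge VALUES (?,?,?)",
                [(1, 'X', 12), (2, 'Y', 12), (3, None, None), (4, 'Z', 12)])

# NULL <> 12 is UNKNOWN, not TRUE, so the NULL row is filtered out here:
plain = con.execute("SELECT ID FROM nameAge WHERE Age <> 12").fetchall()

# An explicit IS NULL test brings the NULL row back:
fixed = con.execute(
    "SELECT ID FROM nameAge WHERE Age <> 12 OR Age IS NULL").fetchall()
print(plain, fixed)
```

The first query returns nothing at all, exactly as the question observed; only the `IS NULL` form reaches row 3.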
I have a query like: ``` SELECT field FROM table WHERE ( SELECT COUNT(*) FROM table2 WHERE table2.field = table.field ) != ( SELECT COUNT(*) FROM table3 WHERE table3.field = table.field ) ``` Now I want to have those `WHERE` subqueries in my field list, like: ``` SELECT field, count1, count2 FROM table WHERE ( SELECT COUNT(*) FROM table2 WHERE table2.field = table.field ) AS Count1 != ( SELECT COUNT(*) FROM table3 WHERE table3.field = table.field ) AS Count2 ``` Is this possible? Of course I could put those subqueries in the field list, but then I can't compare them. Any ideas?
You can do this if you use `Sql Server`: ``` SELECT field, ca2.c2, ca3.c3 FROM table t cross apply(SELECT COUNT(*) c2 FROM table2 t2 WHERE t2.field = t.field)ca2 cross apply(SELECT COUNT(*) c3 FROM table3 t3 WHERE t3.field = t.field)ca3 where ca2.c2 <> ca1.c1 ```
Use correlated sub-selects to count. Wrap up in a derived table: ``` select dt.* from ( SELECT field, (SELECT COUNT(*) FROM table2 WHERE table2.field = table.field) as cnt1, (SELECT COUNT(*) FROM table3 WHERE table3.field = table.field) as cnt2 FROM table ) dt where dt.cnt1 <> dt.cnt2 ```
SQL WHERE Subquery in Field List
[ "", "sql", "subquery", "fieldlist", "" ]
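The derived-table approach above — compute both correlated counts once in the field list, then compare their aliases in an outer `WHERE` — can be sketched with Python's `sqlite3`. The tiny three-table schema is an assumption for the demo:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t  (field TEXT);
CREATE TABLE t2 (field TEXT);
CREATE TABLE t3 (field TEXT);
INSERT INTO t  VALUES ('a'),('b');
INSERT INTO t2 VALUES ('a'),('a'),('b');  -- 'a' appears twice here
INSERT INTO t3 VALUES ('a'),('b');
""")

# The inner SELECT names the two counts; the outer WHERE can then
# compare them by alias, which is not allowed inside a single SELECT.
rows = con.execute("""
SELECT dt.*
FROM (SELECT field,
             (SELECT COUNT(*) FROM t2 WHERE t2.field = t.field) AS cnt1,
             (SELECT COUNT(*) FROM t3 WHERE t3.field = t.field) AS cnt2
      FROM t) dt
WHERE dt.cnt1 <> dt.cnt2
""").fetchall()
print(rows)
```

Only `'a'` survives, because its count differs between `t2` (2) and `t3` (1), and both counts are still visible as output columns.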
I have a SQL request that returns an ORA-01427 error: > single-row subquery returns more than one row ``` INSERT INTO my_table (value0, value1, value2, value3) VALUES((SELECT MAX(value0) FROM my_table), '5', (SELECT DISTINCT(value2) FROM another_table), '8'); ``` The thing is, I need two hard-coded values, I need a value from a select that returns only one row, and I want to do that for each row returned by a second select. I feel like this query would work if I had only one select. Is there a way to do multiple SELECTs inside an INSERT? What would be the syntax? EDIT: my\_table and some\_table are actually the same table, sorry for not being clear in the first place. Actually, I need value0 to be unique, so it needs to retrieve the biggest id each time, not just before the insertion but every time a new row is inserted.
You need to switch to INSERT/SELECT: ``` INSERT INTO my_table (value0, value1, value2, value3) SELECT DISTINCT (SELECT MAX(value0) FROM some_table), '5', value2, '8' FROM another_table; ``` To answer your comment on jarlh's post: "*What if some\_table = my\_table and value0 needs to be incremented each time a value is inserted ?*" ``` INSERT INTO my_table (value0, value1, value2, value3) SELECT (SELECT MAX(value0) FROM my_table) + ROWNUM -- ROW_NUMBER() OVER (ORDER BY whatever you need) ,'5' ,value2 ,'8' FROM ( SELECT DISTINCT value2 FROM another_table ) dt ``` Edit: I switched to `ROWNUM`, but this is proprietary syntax. Oracle also supports Standard SQL's ROW\_NUMBER and it should be working as-is, too.
You can covert these two queries to a single one by cross joining the query from `some_table` with the results of `anoter_table`. The hard coded literal could also be selected. Additionally, note that for inserting a `select` result you don't need the `values` keyword: ``` INSERT INTO my_table (value0, value1, value2, value3) SELECT DISTINCT max_value_0, '5', value2, '8' FROM another_table CROSS JOIN (SELECT MAX(value0) AS max_value_0 FROM some_table) t ```
Insert with multiple selects
[ "", "sql", "oracle", "oracle-sqldeveloper", "sql-insert", "insert-select", "" ]
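The `INSERT … SELECT` shape both answers above recommend (scalar subquery plus literals, one inserted row per `DISTINCT` source row) can be sketched with Python's `sqlite3` rather than Oracle; the Oracle-specific `ROWNUM` increment is omitted, and the sample values are assumptions:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE my_table (value0 INTEGER, value1 TEXT, value2 TEXT, value3 TEXT);
CREATE TABLE another_table (value2 TEXT);
INSERT INTO my_table VALUES (7, 'x', 'x', 'x');
INSERT INTO another_table VALUES ('p'),('q'),('p');
""")

# INSERT ... SELECT instead of INSERT ... VALUES: the scalar subquery
# supplies value0, the literals supply value1/value3, and one row is
# inserted per DISTINCT value2 from the other table.
con.execute("""
INSERT INTO my_table (value0, value1, value2, value3)
SELECT (SELECT MAX(value0) FROM my_table), '5', value2, '8'
FROM (SELECT DISTINCT value2 FROM another_table)
""")
rows = con.execute(
    "SELECT * FROM my_table WHERE value1 = '5' ORDER BY value2").fetchall()
print(rows)
```

Two rows are inserted (for the distinct values `p` and `q`), each carrying the pre-insert maximum of `value0`.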
I have 1 table called errors; it has the following structure: **Errors** ``` | id | UserID | CrashDump | ErrorCode| Timestamp | 1 | user1 | Crash 1 | 100 | 2015-04-08 21:00:00 | 2 | user2 | Crash 2 | 102 | 2015-04-10 22:00:00 | 3 | user3 | Crash 4 | 105 | 2015-05-08 12:00:00 | 4 | user4 | Crash 4 | 105 | 2015-06-02 21:22:00 | 5 | user4 | Crash 4 | 105 | 2015-06-03 04:16:00 ``` I wanted to get a result set with the following data: **Desired resultset** ``` CrashDump | Error Count| Affected Users| Crash 4 | 3 | 2 | Crash 2 | 1 | 1 | Crash 1 | 1 | 1 | ``` The result set would hold the count of each error as error count and affected users (the distinct users who received this error). I have been able to get the desired outcome using the following query; however, it has proven to be very resource intensive, and on huge data sets MySQL crashes. Could you please guide me as to how I can optimize my current query, or guide me towards a better approach in implementing its logic? Any help would be greatly appreciated. **Current Query:** ``` select B.CrashDump as CrashDump, B.B_UID as affected users, C.C_UID as ErrorCount from ( Select count(A.UserID) as B_UID, A.CrashDump, (A.timestamp) as timestmp, (a.errorcode) as errorCde, (a.ID) as uniqueId from ( select UserID , CrashDump, timestamp,errorcode,id from errors where Timestamp >='2015-04-08 21:00:00' and Timestamp <='2015-06-10 08:18:15' group by userID,CrashDump ) as A group by A.CrashDump ) as B left outer join ( select CrashDump , count(UserID) as C_UID from errors where Timestamp >='2015-04-08 21:00:00' and Timestamp <='2015-06-10 08:18:15' group by CrashDump ) as C On B.CrashDump = C.CrashDump order by ErrorCount desc limit 0,10 ```
This is the solution that worked: ``` Select A.CrashDump, sum(A.ErrorCount) as ErrorC, count(A.AffectedUsers) From ( SELECT CrashDump, COUNT(ErrorCode) AS ErrorCount, COUNT(DISTINCT UserID) AS AffectedUsers, UserID FROM errors WHERE Timestamp >='2015-05-13 10:00:00' and Timestamp <='2015-05-14 03:07:00' GROUP BY CrashDump, userID ) AS A group by A.CrashDump order by ErrorC desc limit 0,10 ``` Thank you everyone for helping in achieving the desired result.
Try ``` SELECT CrashDump, COUNT(ErrorCode) AS ErrorCount, COUNT(DISTINCT UserID) AS AffectedUser FROM errors WHERE Timestamp >='2015-04-08 21:00:00' AND Timestamp <='2015-06-10 08:18:15' GROUP BY CrashDump ```
Optimize sql query containing left joins
[ "", "mysql", "sql", "" ]
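The two-level aggregation the accepted self-answer above converged on — group by `(CrashDump, UserID)` first, then roll those groups up per dump — can be checked with Python's `sqlite3` as a stand-in for MySQL, using the sample rows from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE errors (UserID TEXT, CrashDump TEXT, ErrorCode INTEGER)")
con.executemany("INSERT INTO errors VALUES (?,?,?)", [
    ('user1', 'Crash 1', 100),
    ('user2', 'Crash 2', 102),
    ('user3', 'Crash 4', 105),
    ('user4', 'Crash 4', 105),
    ('user4', 'Crash 4', 105),
])

# Inner query: one row per (dump, user) with that user's error count.
# Outer query: summing the counts gives total errors; counting the
# rows gives the number of distinct affected users.
rows = con.execute("""
SELECT CrashDump, SUM(cnt) AS ErrorCount, COUNT(*) AS AffectedUsers
FROM (SELECT CrashDump, UserID, COUNT(*) AS cnt
      FROM errors GROUP BY CrashDump, UserID)
GROUP BY CrashDump
ORDER BY ErrorCount DESC
""").fetchall()
print(rows)
```

"Crash 4" yields 3 errors across 2 users, matching the desired result set; a single `GROUP BY CrashDump` with `COUNT(*)` and `COUNT(DISTINCT UserID)` would give the same numbers in one pass.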
I have a table storing values of A and B for different dates. It looks like this: ``` Date Amount Type 2015 15 A 2014 -3 B 2013 8 B 2013 10 A 2012 5 A ``` etc. How to **most effectively** prepare a table that lists current A and current B value for each date? The idea is that for each date, one of the values is specified directly but the other one needs to be the last previous one. ``` Date A B 2015 15 -3 2014 10 -3 2013 10 8 ``` etc. The dataset will be a substantial one (millions of records), I'll appreciate your help. The solution will be implemented in SAS, so I'll use SAS procedures or the proc sql procedure (it has its limitations when it comes to self-joins).
As Gordon says in the comments, I would use a Data Step and a `RETAIN` statement First, create your data set. Second, sort it in ascending order by DATE Third, use the Data Step and `RETAIN` to create your values. Use the BY statement and the subsetting `IF` to output all values for a given DATE. Finally, sort in descending order by DATE to give you the order you want. ``` data have; input Date Amount Type $; datalines; 2015 15 A 2014 -3 B 2013 8 B 2013 10 A 2012 5 A ; run; proc sort data=have; by date; run; data want(keep=date a b); set have; by date; retain a b; if Type = "A" then a = amount; else if Type = "B" then b = amount; if last.date; run; proc sort data=want; by descending date; run; ``` This produces: ![enter image description here](https://i.stack.imgur.com/kBmGE.png)
``` data have; input Date Amount Type $; cards; 2015 15 A 2014 -3 B 2013 8 B 2013 10 A 2012 5 A ; data want(keep=date A B); merge have have(firstobs=2 rename=(date=_date amount=_amount type=_type)); retain flag; if date-_date=1 then do; if flag=1 then do; call missing(flag); return; end; if type='A' then do; A=amount; B=_amount; end; else if type='B' then do; B=amount; A=_amount; end; output; end; else if date=_date then do; flag=1; if type='A' then do; A=amount; B=_amount; end; else if type='B' then do; B=amount; A=_amount; end; output; end; else if missing(_date) then do; if flag=1 then return; if type='A' then A=amount; else B=amount; output; end; run; ```
Match last values of different variables from a table
[ "", "sql", "sas", "proc-sql", "" ]
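The SAS `RETAIN` + `last.date` logic from the accepted answer above is essentially a forward-fill of the last seen A and B per date. As a language-agnostic illustration (plain Python standing in for the SAS data step, using the question's sample rows):

```python
# Rows of (date, amount, type), as in the question's "have" data set.
have = [(2015, 15, 'A'), (2014, -3, 'B'), (2013, 8, 'B'),
        (2013, 10, 'A'), (2012, 5, 'A')]

a = b = None                    # the "retained" variables
want = {}
for date, amount, typ in sorted(have):   # ascending by date, like PROC SORT
    if typ == 'A':
        a = amount
    else:
        b = amount
    want[date] = (a, b)          # last assignment per date wins (= last.date)

# Final descending sort, like the closing PROC SORT in the answer.
result = [(d, *want[d]) for d in sorted(want, reverse=True)]
print(result)
```

The carried-forward values reproduce the question's expected table: 2015 → (15, −3), 2014 → (10, −3), 2013 → (10, 8), plus a 2012 row where B is still unset.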
I want to get all profiles first that are for certain location(s): ``` SELECT * FROM profile WHERE location IN ('a', 'b', 'c') OR isDefault=1 ORDER BY location IN ('a', 'b') DESC, -- put to the front if location in 'a','b' isDefault DESC, -- for each of both groups, put default profiles before the others. location ASC -- and sort each of the up to four groups by location. ``` This throws the error: "Incorrect syntax near the keyword 'IN'.". If I remove the order clause, results are returned. What is wrong here?
You could rewrite that to give an integer back which is sortable: ``` case when location IN ('a', 'b') then 0 else 1 end DESC ```
You can't use `IN` in the `ORDER BY`. Change it to: ``` ORDER BY location ASC, --will sort a, then b, then c ```
ORDER BY ... IN(...) ASC
[ "", "sql", "sql-server", "" ]
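The fix in the accepted answer above — replace the disallowed `ORDER BY location IN (...)` with a `CASE` that yields a sortable integer — can be sketched with Python's `sqlite3` as a stand-in for SQL Server. The `CASE` here returns 0 for matches so that plain ascending order puts them first; the sample rows are assumptions:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE profile (location TEXT, isDefault INTEGER)")
con.executemany("INSERT INTO profile VALUES (?,?)",
                [('c', 0), ('a', 0), ('b', 1), ('c', 1)])

# 0 sorts before 1, so locations 'a'/'b' come first; within each group,
# default profiles precede the others, then location breaks ties.
rows = con.execute("""
SELECT location, isDefault
FROM profile
ORDER BY CASE WHEN location IN ('a','b') THEN 0 ELSE 1 END,
         isDefault DESC,
         location
""").fetchall()
print(rows)
```

The 'a'/'b' rows lead the result (default first), followed by the remaining locations in the same secondary order.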
I have another issue with my SQL code. Below I will explain by showing one of my tables and how I would like to take information out of it. ![https://i.stack.imgur.com/FqWs6.jpg](https://i.stack.imgur.com/FqWs6.jpg) 1) All employees at a company belong to a department. In this table, for example, employees 00010-00013 belong to Dept1 and 00019-00022 to Dept4. 2) When being paid, the table functions as a record of whether or not the employee has been paid, with status "Paid" or "Unpaid" in the column Employee Status. As you can see, all employees except 00017 and 00022 have been paid. 3) When all employees belonging to a certain department have been paid, the Department Status should change to "Paid", meaning that all employees belonging to that department have been paid. As you can see, all employees at Dept1 have been paid, and the Department Status is set to "Paid". Not all the employees at Dept4 have been paid, and the Department Status is set to "Unpaid". 4) As you can see, there is an "error" made in the Department Status of Dept2. Although all employees have been paid, the Department Status is still set to "Unpaid". I need to construct a query that brings out all the departments like Dept2 that have all the employees paid but still haven't changed their status. This could be done by either omitting all other (correct) departments, or maybe adding a new column in which this is noted so I can sort afterwards. Below I'll add some useless pseudocode that kind of shows what I want to achieve by adding a new column INCOR\_DEP\_STAT in which I tell if the Department Status is incorrect or not. ``` SELECT A.DEPART, A.DEPART_STAT, A.EMPLOY, A.EMPLOY_STAT, 'N/A' as INCOR_DEP_STAT FROM PAYTABLE A FOR all objects in A.DEPART; IF all A.EMPLOY_STAT in A.DEPART == "Paid" AND IF A.DEPART_STAT == "Unpaid" THEN INCOR_DEPT_STAT = "Incorrect" ```
``` SELECT * FROM PAYTABLE A WHERE DEPART NOT IN ( SELECT DEPART FROM PAYTABLE A WHERE EMPLOY_STAT='Unpaid' ) AND DEPART_STAT='Unpaid' ```
I think you are making this way too complex. To identify departments marked as Unpaid with employees marked as paid: ``` select distinct department from PayTable where depart_stat= 'Unpaid' and employ_stat= 'Paid' ``` If you want to derive a column to show this: ``` select ..., case when depart_stat= 'Unpaid' and employ_stat = 'Paid' then 'Incorrect' else 'Correct' end as Incor_Dept_Stat from PayTable ``` EDIT: RE-read your question. If you only want departments where the department status does not equal the employee status for all employees in that group: ``` select dept, dept_status from ( select dept, dept_status, count (employee) as Emps, count (case when Employ_Status <> Dept_Status then 1 else null end) as PaidStatus from PayTable group by dept, dept_status ) t where emps = PaidStatus ```
SQL how to take out data while looping through columns?
[ "", "sql", "sql-server", "loops", "calculated-columns", "" ]
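The `NOT IN` approach from the accepted answer above — departments still flagged Unpaid that have no unpaid employee — can be sketched with Python's `sqlite3`; a `DISTINCT` is added so each department appears once, and the sample rows are assumptions condensed from the question's screenshot:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE PAYTABLE "
            "(DEPART TEXT, DEPART_STAT TEXT, EMPLOY TEXT, EMPLOY_STAT TEXT)")
con.executemany("INSERT INTO PAYTABLE VALUES (?,?,?,?)", [
    ('Dept1', 'Paid',   '00010', 'Paid'),
    ('Dept2', 'Unpaid', '00014', 'Paid'),   # the inconsistent department
    ('Dept2', 'Unpaid', '00015', 'Paid'),
    ('Dept4', 'Unpaid', '00019', 'Paid'),
    ('Dept4', 'Unpaid', '00022', 'Unpaid'),
])

# Flagged Unpaid at department level, but not present in the set of
# departments that still contain an unpaid employee.
rows = con.execute("""
SELECT DISTINCT DEPART
FROM PAYTABLE
WHERE DEPART_STAT = 'Unpaid'
  AND DEPART NOT IN (SELECT DEPART FROM PAYTABLE
                     WHERE EMPLOY_STAT = 'Unpaid')
""").fetchall()
print(rows)
```

Only Dept2 comes back: Dept1's department status is already correct, and Dept4 genuinely still has an unpaid employee.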
Assume a sqlite database with an integer column. Now, it tends to happen that the integer field contains NULL values (=unset) as well. I would like to interpret NULL values as zero (0) when doing queries on that field. Is there a way to tell sqlite that I like NULL handled like 0 in this case, especially when I do comparisons using a SELECT statement? Since I construct the queries dynamically in C-like code, I like to be able to write something like this: ``` query = "SELECT * FROM tbl WHERE intField=" + IntToStr(anIntValue) ``` Currently, I work around this with code as follow, which I like to avoid: ``` if (anIntValue == 0) { query = "SELECT * FROM tbl WHERE intField IS NULL OR intField=0" } else { query = "SELECT * FROM tbl WHERE intField=" + IntToStr(anIntValue) } ``` Maybe there's an operator I can use in the query that converts NULLs to other values I specify? Of course, another way would be to make sure one never ends up with NULL values in that field, but one might want to be able to tell when the value hasn't been set at all yet, so I like to be able to keep the NULL values in the database.
In Standard SQL this is called `COALESCE`: ``` COALESCE(col, 0) ``` Why using a proprietary extension like IFNULL/NVL if there's a Standard which is supported by every DBMS?
Please try the `ifnull` function; see the documentation at <http://www.sqlite.org/lang_corefunc.html#ifnull>
Is there a way to make NULL behave like 0 (zero) or like an empty string in SQL?
[ "", "sql", "sqlite", "" ]
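The `COALESCE(intField, 0)` idiom from the accepted answer above can be verified directly in SQLite through Python's `sqlite3`, which lets the question's single dynamic query shape cover both the zero and the non-zero case:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl (intField INTEGER)")
con.executemany("INSERT INTO tbl VALUES (?)", [(0,), (None,), (3,)])

# COALESCE maps NULL to 0 on the fly, so one query shape covers both
# "unset" and genuinely-zero rows without a special case in the caller.
zeros = con.execute(
    "SELECT COUNT(*) FROM tbl WHERE COALESCE(intField, 0) = 0").fetchone()[0]
threes = con.execute(
    "SELECT COUNT(*) FROM tbl WHERE COALESCE(intField, 0) = 3").fetchone()[0]
print(zeros, threes)
```

Comparing against 0 now catches both the `0` row and the `NULL` row, which is exactly the branch the question was trying to eliminate.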
With T-SQL, I'm trying to find the easiest way to reverse the numbers in a string, so for a string like `Test123Hello` I would get `Test321Hello`. ``` [Before] [After] Test123Hello Test321Hello Tt143 Hello Tt341 Hello 12Hll 21Hll Tt123H3451end Tt321H1543end ```
Just make use of [`PATINDEX`](https://msdn.microsoft.com/library/ms188395.aspx) for searching, append to the result string part by part: ``` CREATE FUNCTION [dbo].[fn_ReverseDigits] ( @Value nvarchar(max) ) RETURNS NVARCHAR(max) AS BEGIN IF @Value IS NULL RETURN NULL DECLARE @TextIndex int = PATINDEX('%[^0-9]%', @Value), @NumIndex int = PATINDEX('%[0-9]%', @Value), @ResultValue nvarchar(max) = '' WHILE LEN(@ResultValue) < LEN(@Value) BEGIN -- Set the index to end of the string if the index is 0 SELECT @TextIndex = CASE WHEN @TextIndex = 0 THEN LEN(@Value) + 1 ELSE LEN(@ResultValue) + @TextIndex END SELECT @NumIndex = CASE WHEN @NumIndex = 0 THEN LEN(@Value) + 1 ELSE LEN(@ResultValue) + @NumIndex END IF @NumIndex < @TextIndex SELECT @ResultValue = @ResultValue + REVERSE(SUBSTRING(@Value, @NumIndex, @TextIndex -@NumIndex)) ELSE SELECT @ResultValue = @ResultValue + (SUBSTRING(@Value, @TextIndex, @NumIndex - @TextIndex)) -- Update index variables SELECT @TextIndex = PATINDEX('%[^0-9]%', SUBSTRING(@Value, LEN(@ResultValue) + 1, LEN(@Value) - LEN(@ResultValue))), @NumIndex = PATINDEX('%[0-9]%', SUBSTRING(@Value, LEN(@ResultValue) + 1, LEN(@Value) - LEN(@ResultValue))) END RETURN @ResultValue END ``` Test SQL ``` declare @Values table (Value varchar(20)) INSERT @Values VALUES ('Test123Hello'), ('Tt143 Hello'), ('12Hll'), ('Tt123H3451end'), (''), (NULL) SELECT Value, dbo.fn_ReverseDigits(Value) ReversedValue FROM @Values ``` Result ``` Value ReversedValue -------------------- -------------------- Test123Hello Test321Hello Tt143 Hello Tt341 Hello 12Hll 21Hll Tt123H3451end Tt321H1543end NULL NULL ```
you can use this function ``` CREATE FUNCTION [dbo].[fn_ReverseDigit_MA] ( @Str_IN nVARCHAR(max) ) RETURNS NVARCHAR(max) AS BEGIN DECLARE @lenstr AS INT =LEN(@Str_IN) DECLARE @lastdigend AS INT=0 while (@lastdigend<@lenstr) BEGIN DECLARE @strPart1 AS NVARCHAR(MAX)=LEFT(@Str_IN,@lastdigend) declare @lenstrPart1 AS INT=LEN(@strPart1) DECLARE @strPart2 AS NVARCHAR(MAX)=RIGHT(@Str_IN,@lenstr-@lastdigend) declare @digidx as int=patindex(N'%[0-9]%' ,@strPart2)+@lenstrPart1 IF(@digidx=@lenstrPart1) BEGIN BREAK; END DECLARE @strStartdig AS NVARCHAR(MAX) = RIGHT(@Str_IN,@lenstr-@digidx+1) declare @NDidx as int=patindex(N'%[^0-9]%' ,@strStartdig)+@digidx-1 IF(@NDidx<=@digidx) BEGIN SET @NDidx=@lenstr+1 END DECLARE @strRet AS NVARCHAR(MAX)=LEFT(@Str_IN,@digidx-1) +REVERSE(SUBSTRING(@Str_IN,@digidx,@NDidx-@digidx)) +RIGHT(@Str_IN,@lenstr-@NDidx+1) SET @Str_IN=@strRet SET @lastdigend=@NDidx-1 END return @Str_IN END ```
Reverse only numerical parts of string in sql server
[ "", "sql", "sql-server", "string", "t-sql", "regexp-replace", "" ]
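Both T-SQL functions above implement the same transformation: reverse each maximal run of digits and leave everything else in place. As a language-agnostic cross-check of that specification (plain Python with a regex, not T-SQL):

```python
import re

def reverse_digit_runs(s):
    # Reverse every maximal digit run; non-digit characters stay put.
    return re.sub(r'\d+', lambda m: m.group()[::-1], s)

cases = {
    'Test123Hello':  'Test321Hello',
    'Tt143 Hello':   'Tt341 Hello',
    '12Hll':         '21Hll',
    'Tt123H3451end': 'Tt321H1543end',
}
results = {k: reverse_digit_runs(k) for k in cases}
print(results)
```

All four Before/After pairs from the question round-trip correctly, which makes this a convenient reference when testing the T-SQL versions.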
``` SELECT DECODE( add1, null, '.', ' ', '.', '{null}','.', LTRIM( RTRIM( REPLACE( add1,'"','.'), ' '), ' ') ), code, add1 FROM addrlist ``` I am trying to replace any null word (any case-insensitive word `null` or `nuLL` or `NulL` etc.) or blank value of add1 (varchar2) with a dot (`.`). The above code does not completely achieve it, could you please suggest a better solution. Update : the column add1 will contain only a single string null in any form which may have only spaces in the beginning or the end of the word null. Please find below some examples: ``` Input | Expected Result ------------------------------------- 'nuLL' | '.' ' NuLL ' | '.' 'null' | '.' NULL | '.' ```
[SQL Fiddle](http://sqlfiddle.com/#!4/d171c/1) **Oracle 11g R2 Schema Setup**: ``` CREATE TABLE SENTENCES ( sentence ) AS SELECT 'Null' FROM DUAL UNION ALL SELECT 'null' FROM DUAL UNION ALL SELECT 'NULL' FROM DUAL UNION ALL SELECT ' Null' || CHR(9) FROM DUAL UNION ALL SELECT NULL FROM DUAL UNION ALL SELECT '{NULL}' FROM DUAL UNION ALL SELECT '""' FROM DUAL UNION ALL SELECT '''''' FROM DUAL UNION ALL SELECT ' ' ||CHR(13)||CHR(10) FROM DUAL UNION ALL SELECT 'Not-Null' FROM DUAL; ``` **Query 1**: Use the regular expression `^\W*(NULL)?\W*$` - matching the start of the string `^` then zero-or-more non-word characters `\W*` then an optional `NULL` string `(NULL)?` then zero-or-more non-word characters `\W*` and finally the end-of-the-string `$` (using a case-insensitive match). ``` SELECT sentence, CASE WHEN sentence IS NULL THEN '.' ELSE REGEXP_REPLACE( sentence, '^\W*(NULL)?\W*$', '.', 1, 0, 'i' ) END AS unnulled_sentence FROM SENTENCES ``` **Query 2**: ``` SELECT sentence, DECODE( sentence, NULL, '.', REGEXP_REPLACE( sentence, '^\W*(NULL)?\W*$', '.', 1, 0, 'i' ) ) AS unnulled_sentence FROM SENTENCES ``` **[Results](http://sqlfiddle.com/#!4/d171c/1/1)**: (Both queries give the same results) ``` | SENTENCE | UNNULLED_SENTENCE | |----------|-------------------| | Null | . | | null | . | | NULL | . | | Null | . | | (null) | . | | {NULL} | . | | "" | . | | '' | . | | | . | | Not-Null | Not-Null | ```
``` select regexp_replace(nvl(add1, 'NULL'), '(^|\s)(NULL)(\s|$)', '\1.\3', 1, 0, 'im') from (select 'NUll sdfsdfasdf null xzxx nullable nil; null' as add1 from dual) ``` Result: > . sdfsdfasdf . xzxx nullable nil; . I assume that you want to replace only whole words NULL but not just combination of letters (that's why I put word "nullable" to leave it unchanged).
How to REPLACE any form of the word null using decode in Oracle?
[ "", "sql", "oracle", "oracle11g", "oracle10g", "" ]
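The accepted answer's pattern `^\W*(NULL)?\W*$` — optional non-word characters around a single case-insensitive NULL word — behaves the same way in Python's `re` module, so its matching logic can be sanity-checked outside Oracle:

```python
import re

# Same pattern as the accepted REGEXP_REPLACE call, case-insensitive.
pattern = re.compile(r'^\W*(NULL)?\W*$', re.IGNORECASE)

def unnull(value):
    # Oracle treats a NULL column like the empty string here; model that
    # explicitly for Python's None.
    if value is None:
        return '.'
    return pattern.sub('.', value)

outputs = [unnull(v) for v in ('nuLL', '  NuLL  ', 'null', None, 'Not-Null')]
print(outputs)
```

Every variant of the lone word "null" (any case, padded with whitespace) collapses to a dot, while real content such as `Not-Null` passes through untouched, matching the question's expected-results table.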
I have created one SSRS report. In the report I am providing one parameter named `startDate`. From the `startDate` I have to display 10 columns like `week of dd-mon-yy`. The first column contains the date of the Monday of the week in which the start date lies; then the upcoming Mondays are used. An example is shown in the image. How can I achieve this in an SSRS report? I need the query and the column values. ![enter image description here](https://i.stack.imgur.com/enk9o.png)
This should be your column name alias for current week ``` select 'Week of '+ CONVERT(VARCHAR(11),dateadd(week, datediff(week, 0, getdate()), 0),106); ``` then alias for next weeks ``` select 'Week of '+ CONVERT(VARCHAR(11),dateadd(week, 1,dateadd(week, datediff(week, 0, getdate()), 0)),106); select 'Week of '+ CONVERT(VARCHAR(11),dateadd(week, 2,dateadd(week, datediff(week, 0, getdate()), 0)),106); select 'Week of '+ CONVERT(VARCHAR(11),dateadd(week, 3,dateadd(week, datediff(week, 0, getdate()), 0)),106); select 'Week of '+ CONVERT(VARCHAR(11),dateadd(week, 4,dateadd(week, datediff(week, 0, getdate()), 0)),106); select 'Week of '+ CONVERT(VARCHAR(11),dateadd(week, 5,dateadd(week, datediff(week, 0, getdate()), 0)),106); select 'Week of '+ CONVERT(VARCHAR(11),dateadd(week, 6,dateadd(week, datediff(week, 0, getdate()), 0)),106); select 'Week of '+ CONVERT(VARCHAR(11),dateadd(week, 7,dateadd(week, datediff(week, 0, getdate()), 0)),106); select 'Week of '+ CONVERT(VARCHAR(11),dateadd(week, 8,dateadd(week, datediff(week, 0, getdate()), 0)),106); select 'Week of '+ CONVERT(VARCHAR(11),dateadd(week, 9,dateadd(week, datediff(week, 0, getdate()), 0)),106); ```
To get date of Monday in the week which the start date lies along with the dates of next 10 mondays you can write a query as: ``` DECLARE @input date SET @input = '12/29/2013' SELECT 'Week of '+ REPLACE(CONVERT(VARCHAR(11), DATEADD(wk, DATEDIFF(wk,0,@input), T.week*7) , 106), ' ', '-') FROM (VALUES (-1),(0),(1),(2),(3),(4),(5),(6),(7),(8))AS T(week) ``` `Demo`
adding date columns in SSRS Report
[ "", "sql", "sql-server", "sql-server-2008", "reporting-services", "" ]
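The date arithmetic both answers above perform — snap to the Monday of the start date's week, then step forward one week per column — is easy to cross-check outside T-SQL. A Python sketch of the same label series (the `dd-Mon-yy` format follows the question's header style):

```python
from datetime import date, timedelta

def week_labels(start_date, count=10):
    # Monday of the week containing start_date (weekday(): Monday == 0),
    # then successive Mondays, formatted like the report's column headers.
    monday = start_date - timedelta(days=start_date.weekday())
    return ['Week of ' + (monday + timedelta(weeks=i)).strftime('%d-%b-%y')
            for i in range(count)]

labels = week_labels(date(2015, 6, 10), count=3)
print(labels)
```

A Wednesday start date snaps back to the preceding Monday (08-Jun-15) and the remaining headers advance by exactly seven days, mirroring the `DATEADD(week, n, …)` chain in the T-SQL answers.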
I have 2 tables First is **addresses** *id                   ad                               datetime 1          121.252.212.121        2015-04-20 08:23:04 2          121.252.212.122       2015-05-20 09:11:06 3          121.252.212.121       2015-05-20 11:56:04 4          121.252.212.123      2015-05-20 09:54:25 5         121.252.212.123      2015-05-20 10:23:04 6        121.252.212.122      2015-05-20 09:50:10 7       121.252.212.123       2015-05-20 12:50:02* Second is **monitor** server\_id                 ad                           label *1                  121.252.212.121                North 2                 121.252.212.122                South 3                121.252.212.123                 East* i use this query SELECT monitor.label, addresses.datetime FROM monitor INNER JOIN addresses ON monitor.ad=addresses.ad ORDER BY monitor.label; **MYOUTPUT** label                   datetime North             2015-04-20 08:23:04 North             2015-05-20 11:56:04 South            2015-05-20 09:11:06 South            2015-05-20 09:50:10 East              2015-05-20 09:54:25 East              2015-05-20 10:23:04 East              2015-05-20 12:50:02 but my desire output is to get the recent row and merge duplicate label example below **MYDESIRE** label                   datetime North              2015-05-20 11:56:04 South             2015-05-20 09:50:10 East               2015-05-20 12:50:02 anyone can help me on this matter? NOTE\* addresses AD is not primary or unique but monitor AD is Primary
This is called the [`mysql group-wise maximum`](https://dev.mysql.com/doc/refman/5.0/en/example-maximum-column-group-row.html) problem. There are many ways of doing it, and one way is:

```
select m.label, a.datetime
from monitor m
join (
  select ad, max(datetime) as datetime
  from addresses
  group by ad
) a on a.ad = m.ad;
```
Simply use `group by` and `max`:

```
SELECT monitor.label, max(addresses.datetime)
FROM monitor
INNER JOIN addresses ON monitor.ad=addresses.ad
group by monitor.label
ORDER BY monitor.label;
```
How do I join the most recent row in one table to another table MYSQL
[ "", "mysql", "sql", "join", "greatest-n-per-group", "" ]
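The accepted join-to-aggregate pattern can be exercised end-to-end in SQLite (used here only as a convenient stand-in for MySQL; the table names and data follow the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE addresses (id INTEGER PRIMARY KEY, ad TEXT, datetime TEXT);
CREATE TABLE monitor (server_id INTEGER, ad TEXT PRIMARY KEY, label TEXT);
INSERT INTO addresses (ad, datetime) VALUES
  ('121.252.212.121', '2015-04-20 08:23:04'),
  ('121.252.212.122', '2015-05-20 09:11:06'),
  ('121.252.212.121', '2015-05-20 11:56:04'),
  ('121.252.212.123', '2015-05-20 09:54:25'),
  ('121.252.212.123', '2015-05-20 10:23:04'),
  ('121.252.212.122', '2015-05-20 09:50:10'),
  ('121.252.212.123', '2015-05-20 12:50:02');
INSERT INTO monitor VALUES
  (1, '121.252.212.121', 'North'),
  (2, '121.252.212.122', 'South'),
  (3, '121.252.212.123', 'East');
""")

# Join each monitor row to the per-ad maximum datetime (group-wise maximum).
rows = conn.execute("""
    SELECT m.label, a.datetime
    FROM monitor m
    JOIN (SELECT ad, MAX(datetime) AS datetime
          FROM addresses GROUP BY ad) a ON a.ad = m.ad
    ORDER BY m.label
""").fetchall()
print(rows)
# [('East', '2015-05-20 12:50:02'), ('North', '2015-05-20 11:56:04'),
#  ('South', '2015-05-20 09:50:10')]
```

Each label appears once, paired with its latest timestamp, matching the desired output in the question (here ordered alphabetically by label).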
My question is somewhat similar to this one, [SQL Delete Rows Based on Another Table](https://stackoverflow.com/questions/1783784/sql-delete-rows-based-on-another-table), except that two entries match if 4 columns match!

So, I have Table1:

```
Field | Type             | Null | Key | Default | Extra
f1    | int(32) unsigned | NO   | PRI | NULL    | auto_increment
f2    | int(32)          | NO   |     | 0       |
f3    | int(32)          | NO   |     | 0       |
f4    | int(32)          | NO   |     | 0       |
```

and another table with f1, f2, f3, f4 + other fields. I want to delete any record in Table1 which has a correspondence in Table2 based on

```
T1.f1=T2.f1 and T1.f2=T2.f2 and T1.f3=T2.f3 and T1.f4=T2.f4
```

What's the most efficient way? I thought of retrieving all T1 records in a program and then making n selects on T2, but there is an average of 5k records each time in T1 and I feel this is not the most efficient way...

**edit:** as there are a lot of records I just made an index, on T2, on one single field (say field1 for argument's sake)
You could use `EXISTS`:

```
DELETE FROM Table1 t1
WHERE EXISTS (
    SELECT 1
    FROM Table2 t2
    WHERE T1.f1=T2.f1
      AND T1.f2=T2.f2
      AND T1.f3=T2.f3
      AND T1.f4=T2.f4
)
```

The most efficient approach depends on your RDBMS and the indexes used.
If there are too many rows to be deleted, then you could also do a **CTAS**, i.e. **`create table as select`**. From `12c` onwards, CTAS doesn't need **gathering statistics**. **CTAS** would be much faster than **DELETE**.

* **Create** a new table with the CTAS approach, with the required rows only
* Recreate the required **indexes** on this new table
* **Drop** the old table
* **Rename** the new table to the old table name

For example,

**Step 1**

```
CREATE TABLE t_new AS
SELECT t1.col1, t1.col2, t1.col3, t1.col4
FROM t1, t2
WHERE t1.col1 <> t2.col1
AND t1.col2 <> t2.col2
AND t1.col3 <> t2.col3
AND t1.col4 <> t2.col4;
```

**Step 2**

```
CREATE INDEX indx_name ON t_new(COLUMN_NAME)
```

**Step 3**

```
DROP TABLE t1 PURGE;
```

**Step 4**

```
ALTER TABLE t_new RENAME TO t1;
```
delete an entry from a table A if that entry is already present in a table B
[ "", "sql", "oracle", "oracle11g", "" ]
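The `DELETE ... WHERE EXISTS` form from the accepted answer is standard enough to run unchanged on SQLite, which makes a small self-contained check possible (the sample rows here are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (f1 INT, f2 INT, f3 INT, f4 INT);
CREATE TABLE t2 (f1 INT, f2 INT, f3 INT, f4 INT, extra TEXT);
INSERT INTO t1 VALUES (1,1,1,1), (2,2,2,2), (3,3,3,3);
-- only (2,2,2,2) matches a t1 row on all four columns
INSERT INTO t2 VALUES (2,2,2,2,'x'), (3,3,3,9,'y');
""")

# Delete every t1 row that has a full four-column correspondence in t2.
conn.execute("""
    DELETE FROM t1
    WHERE EXISTS (SELECT 1 FROM t2
                  WHERE t1.f1=t2.f1 AND t1.f2=t2.f2
                    AND t1.f3=t2.f3 AND t1.f4=t2.f4)
""")
remaining = conn.execute("SELECT f1 FROM t1 ORDER BY f1").fetchall()
print(remaining)  # [(1,), (3,)]
```

The row `(3,3,3,3)` survives because `(3,3,3,9)` differs in `f4` — a partial match is not enough.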
I have a date column `SLA_Date` in the `Orders` table. My `SLA_Date` results should exclude weekends (Saturday & Sunday); only the data for weekdays should be shown. How do I do that in SQL?
You just need to add the following **filter**:

```
WHERE TO_CHAR(date_column, 'DY','NLS_DATE_LANGUAGE=AMERICAN') NOT IN ('SAT', 'SUN')
```

Your query would look like:

```
SELECT SLA_Date
FROM orders
WHERE TO_CHAR(SLA_Date, 'DY','NLS_DATE_LANGUAGE=AMERICAN') NOT IN ('SAT', 'SUN')
```

For example (the WITH clause is only to build a test case), the query below displays only the weekdays (i.e. excluding Sat and Sun) ranging from `1st May 2015` to `31st May 2015`:

```
SQL> WITH DATA AS
  2      (SELECT to_date('05/01/2015', 'MM/DD/YYYY') date1,
  3              to_date('05/31/2015', 'MM/DD/YYYY') date2
  4      FROM dual
  5      )
  6  SELECT date1+LEVEL-1 the_date,
  7         TO_CHAR(date1+LEVEL-1, 'DY','NLS_DATE_LANGUAGE=AMERICAN') day
  8  FROM DATA
  9  WHERE TO_CHAR(date1+LEVEL-1, 'DY','NLS_DATE_LANGUAGE=AMERICAN')
 10  NOT IN ('SAT', 'SUN')
 11  CONNECT BY LEVEL <= date2-date1+1;

THE_DATE  DAY
--------- ---
01-MAY-15 FRI
04-MAY-15 MON
05-MAY-15 TUE
06-MAY-15 WED
07-MAY-15 THU
08-MAY-15 FRI
11-MAY-15 MON
12-MAY-15 TUE
13-MAY-15 WED
14-MAY-15 THU
15-MAY-15 FRI
18-MAY-15 MON
19-MAY-15 TUE
20-MAY-15 WED
21-MAY-15 THU
22-MAY-15 FRI
25-MAY-15 MON
26-MAY-15 TUE
27-MAY-15 WED
28-MAY-15 THU
29-MAY-15 FRI

21 rows selected.

SQL>
```

For a detailed example, see [Generate Date, Month Name, Week Number, Day number between two dates](http://lalitkumarb.wordpress.com/2015/04/15/generate-date-month-name-week-number-day-number-between-two-dates-in-oracle-sql/)
You can try the following condition to filter your data:

```
WHERE to_char (SLA_Date, 'FmDay', 'nls_date_language=english') <> 'Saturday'
  AND to_char (SLA_Date, 'FmDay', 'nls_date_language=english') <> 'Sunday'
```
How do I exclude weekends in SQL?
[ "", "sql", "oracle", "datetime", "date-arithmetic", "" ]
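SQLite has no `TO_CHAR`, but the same weekend filter can be mimicked with `strftime('%w', ...)`, which returns `'0'` for Sunday through `'6'` for Saturday — a hedged analogue of the Oracle predicate above, with invented sample dates:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (sla_date TEXT)")
conn.executemany("INSERT INTO orders VALUES (?)",
                 [("2015-05-01",),   # Friday
                  ("2015-05-02",),   # Saturday
                  ("2015-05-03",),   # Sunday
                  ("2015-05-04",)])  # Monday

# strftime('%w') yields '0' (Sunday) .. '6' (Saturday); exclude both.
rows = conn.execute("""
    SELECT sla_date FROM orders
    WHERE strftime('%w', sla_date) NOT IN ('0', '6')
    ORDER BY sla_date
""").fetchall()
print(rows)  # [('2015-05-01',), ('2015-05-04',)]
```

Filtering on a day number rather than a day name also sidesteps the NLS-language issue that the accepted answer has to guard against with `NLS_DATE_LANGUAGE=AMERICAN`.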
I have a subquery that returns more than one value. Please see the query below. There is a possibility that more than one order exists for a given customer and date. In that case I only want to return one order (it might sound strange, but that is what I want to accomplish). How do I set that limit? I tried to set `LIMIT 1` in the subquery but I didn't manage to make it work. Any ideas? I am using Teradata.

```
SELECT Person.name,
       (SELECT order
        FROM ORDER
        where ORDER.name=Person.name
        AND ORDER.date=(SELECT MAX(date)
                        FROM ORDER
                        WHERE ORDER.name=Person.name)) order
FROM Person
```
As in my comment, here's the answer:

```
SELECT Person.name,
       (SELECT MAX(order)
        FROM ORDER
        where ORDER.name = Person.name
        AND ORDER.date = (SELECT MAX(date)
                          FROM ORDER
                          WHERE ORDER.name = Person.name)) order
FROM Person
```

Note that both `ORDER` and `DATE` are reserved words in ANSI SQL. I suggest you double quote them to avoid future problems and improve portability. (E.g. `"ORDER"`.)
Use a ROW\_NUMBER to find the latest order per name and then join it:

```
SELECT Person.name, ORDER.order
FROM Person AS p
LEFT JOIN
 ( SELECT *
   FROM ORDER
   QUALIFY ROW_NUMBER() OVER (PARTITION BY name ORDER BY date DESC) = 1
 ) as o
ON ORDER.name=Person.name
```

Btw, this will perform much better than nested `MAX`. And it's easier to write :-)
More than one value returned by subquery
[ "", "sql", "teradata", "" ]
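Teradata's `QUALIFY` clause isn't portable, but the same `ROW_NUMBER` idea can be checked in SQLite 3.25+ by moving the filter into the join condition. This is a hedged sketch with made-up data; the names `orders`, `odate`, and `onum` are substitutes, since `ORDER` and `date` are reserved words (as the first answer notes):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE person (name TEXT);
CREATE TABLE orders (name TEXT, odate TEXT, onum TEXT);
INSERT INTO person VALUES ('ann'), ('bob');
INSERT INTO orders VALUES
  ('ann', '2015-01-01', 'A1'),
  ('ann', '2015-02-01', 'A2'),
  ('bob', '2015-03-01', 'B1');
""")

# Latest order per name: number rows per name, newest first, keep rn = 1.
rows = conn.execute("""
    SELECT p.name, o.onum
    FROM person p
    LEFT JOIN (SELECT name, onum,
                      ROW_NUMBER() OVER (PARTITION BY name
                                         ORDER BY odate DESC) AS rn
               FROM orders) o
      ON o.name = p.name AND o.rn = 1
    ORDER BY p.name
""").fetchall()
print(rows)  # [('ann', 'A2'), ('bob', 'B1')]
```

`ann` has two orders but only the newer one (`A2`) survives, which is exactly the "only one order per customer" behaviour the question asks for.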
I am trying to select the highest DATETIME for a specific Request code:

![enter image description here](https://i.stack.imgur.com/M3tng.png)

This is the query I'm using:

```
SELECT SubmitTime
FROM [QTRAXAdmin].[qt_request]
WHERE SubmitTime IN (SELECT Max(SubmitTime)
                     FROM [QTRAXAdmin].[qt_request])
AND requestcode = 'FOREMAN';
```

My issue is that the query above returns nothing. I think it's finding that the highest DATETIME in the SubmitTime column does not belong to a FOREMAN request code. How can I get it to return the latest SubmitTime for a specific Request code?
I think you can just simply select like this, if of course those times don't repeat within `requestcode`:

```
SELECT max(SubmitTime)
FROM [QTRAXAdmin].[qt_request]
WHERE requestcode = 'FOREMAN';
```
Simply put the `requestcode = 'FOREMAN'` predicate in the `WHERE` clause of your subquery:

```
SELECT SubmitTime
FROM [QTRAXAdmin].[qt_request]
WHERE SubmitTime IN (SELECT Max(SubmitTime)
                     FROM [QTRAXAdmin].[qt_request]
                     WHERE requestcode = 'FOREMAN')
AND requestcode = 'FOREMAN';
```

This way, the subquery will return the `MAX(SubmitTime)` for this specific `requestcode`. As I see it now, after having a second look, you don't need the subquery at all:

```
SELECT Max(SubmitTime)
FROM [QTRAXAdmin].[qt_request]
WHERE requestcode = 'FOREMAN'
```

This is enough, if all you want to get is just the `Max(SubmitTime)` for `requestcode = 'FOREMAN'`.
Returning highest DATETIME from row with particular column data
[ "", "sql", "sql-server", "greatest-n-per-group", "" ]
If the user passes 800 as `@WorkShop`, it should return all records having workshop Id 800 and workshop Id 900. If the user passes any other Id as `@WorkShop`, it should return only the records for the chosen `@WorkShop` Id.

I have tried the following:

```
SELECT *
FROM Test
WHERE Workshop IN (
    CASE
        WHEN @WorkShop = N'800' THEN N'900' AND N'800'
        ELSE @WorkShop
    END
)
```

This returns an error:

```
Incorrect syntax near the keyword 'and'.
```

So in conclusion, if the user passes 800 as the `@WorkShop` parameter, it should return something like:

```
Workshop IN ('800', '900')
```
Is this what you want?

```
SELECT *
FROM Test
WHERE ( @WorkShop = '800' AND Workshop IN ('800', '900') )
   OR @WorkShop = Workshop
```
An alternate method:

```
SELECT *
FROM Test
WHERE Workshop IN (
    @WorkShop,
    CASE WHEN @WorkShop = N'800' THEN N'900' END
)
```
T-SQL. CASE expression in WHERE clause using IN operator
[ "", "sql", "sql-server", "t-sql", "case", "where-clause", "" ]
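A `CASE` expression can only yield a single scalar value, never a value list, which is why the accepted answer rewrites the condition with plain `AND`/`OR`. A quick SQLite check of that predicate, with invented sample data and `?` placeholders standing in for `@WorkShop`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (workshop TEXT)")
conn.executemany("INSERT INTO test VALUES (?)",
                 [("800",), ("900",), ("950",)])

def rows_for(workshop):
    """Apply the accepted answer's predicate for a given @WorkShop value."""
    return conn.execute("""
        SELECT workshop FROM test
        WHERE (? = '800' AND workshop IN ('800', '900'))
           OR ? = workshop
        ORDER BY workshop
    """, (workshop, workshop)).fetchall()

print(rows_for("800"))  # [('800',), ('900',)]  -- 800 pulls in 900 too
print(rows_for("950"))  # [('950',)]            -- any other Id matches itself only
```

The special case (`800` implies `800` and `900`) and the general case (exact match) both fall out of one flat boolean expression.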
I have a table with two columns that contain the start and end of a sequence of integers. It's a large list, but I'm pretty sure there are no overlaps in any of the ranges. I want to generate a list of all numbers contained within these range start and end bounds. Basically I want to do the opposite of this question: [How to create number ranges from a list of numbers?](https://stackoverflow.com/questions/23554869/how-to-create-number-ranges-from-a-list-of-numbers)

From what I've read, my best guess at a solution might be some sort of dynamic cross apply, but I'm not really sure how to even begin that.

My table looks something like this:

**Table**

```
RangeStart RangeEnd
200        205
208        209
221        221
222        224
```

I want something like this:

**Desired Result**

```
Sequence
200
201
202
203
204
205
208
209
221
222
223
224
```
You can avoid using a 'numbers' table, by the use of a recursive common table expression:

```
WITH Numbers AS (
    SELECT RangeStart, RangeStart AS Number, RangeEnd
    from RangeTable
    UNION ALL
    SELECT RangeStart, Number + 1, RangeEnd
    FROM Numbers
    WHERE Number < RangeEnd)
SELECT Number
FROM Numbers
ORDER BY Number
```

The first part is the 'anchor' query, which defines the root member (in this case, the range-from number, the first number in the range, and the range-to number). The second part (after the `UNION ALL`) recursively joins to the anchor member and essentially keeps adding one to `Number` and recursing until it hits `RangeEnd`. The final part gets just the numbers from the CTE we have built (we are no longer interested in the range start and end) and makes sure they are in the right order (this might not matter to you, in which case you can omit the `ORDER BY` clause.)

edit - if you are hitting a recursion limit with this, you can fix this by adding `OPTION (MAXRECURSION 0)` on the end of the query. Apologies for missing this!
The easiest way is to start with a list of integers. A convenient one -- if the list is not too long -- is `master..spt_values`:

```
with n as (
      select row_number() over (order by (select null)) - 1 as n
      from master..spt_values
     )
select (rangestart + n.n) as sequence
from ranges r join
     n
     on r.rangestart + n.n <= r.rangeend
order by sequence;
```

If you are concerned about overlaps, then the following is an easy way to get the sequence:

```
select distinct (rangestart + n.n) as sequence
from ranges r join
     n
     on r.rangestart + n.n <= r.rangeend
order by sequence;
```

If the problem has lots of overlaps and values, then you would want a somewhat different approach (but that is not the question you are asking).
Create list of numbers from list of ranges
[ "", "sql", "sql-server", "" ]
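SQLite supports the same recursive CTE (it requires the explicit `RECURSIVE` keyword), so the accepted answer's range expansion can be verified directly against the data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ranges (range_start INT, range_end INT)")
conn.executemany("INSERT INTO ranges VALUES (?, ?)",
                 [(200, 205), (208, 209), (221, 221), (222, 224)])

# Anchor: one row per range, starting at range_start.
# Recursive part: add 1 until range_end is reached.
rows = conn.execute("""
    WITH RECURSIVE numbers AS (
        SELECT range_start AS number, range_end FROM ranges
        UNION ALL
        SELECT number + 1, range_end FROM numbers
        WHERE number < range_end
    )
    SELECT number FROM numbers ORDER BY number
""").fetchall()
sequence = [n for (n,) in rows]
print(sequence)
# [200, 201, 202, 203, 204, 205, 208, 209, 221, 222, 223, 224]
```

The single-value range `(221, 221)` expands to exactly one row, since the recursive branch never fires for it.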
I'm new to SQL and am currently working through a "teach yourself SQL" book. It was mentioned in the book that sometimes you NEED to specify the table name with the column name (immediately after the SELECT keyword) to get your desired result. It was also mentioned that it is often good practice to do this regardless. Here is a specific example:

```
SELECT vend_name, prod_name, prod_price
FROM Vendors, Products
WHERE Vendors.vend_id = Products.vend_id;

SELECT Vendors.vend_name, Products.prod_name, Products.prod_price
FROM Vendors, Products
WHERE Vendors.vend_id = Products.vend_id;
```

Both code blocks achieve the same result. My question is whether there is a performance difference, and whether the fully qualified names are better practice. Thanks in advance.
First, learn proper `join` syntax. Simple rule: Never use commas in the `from` clause.

Second, learn to use table aliases. These should be abbreviations for the table. Table aliases make queries easier to write and to read.

Third, always use qualified column names. Using the column name has no effect on performance. Oh, perhaps you'll make an exception if you have only one table or something like that. But, including the table alias is a very good idea, a best practice. Why?

* You or someone else may look at the query in the future and not want to figure out which names come from which tables.
* You or someone else may add a new column to one of the tables that matches a column in the other. And, the query mysteriously stops working.
* You or someone else may say "what a great query, but I need to add another table". The other table has naming conflicts, just introducing more work.

So, I would write the query as:

```
SELECT v.vend_name, p.prod_name, p.prod_price
FROM Vendors v JOIN
     Products p
     ON v.vend_id = p.vend_id;
```

Or, if you like:

```
SELECT v.vend_name, p.prod_name, p.prod_price
FROM Vendors v JOIN
     Products p
     USING (vend_id)
```
The format below is helpful if you have multiple tables with the same column names, in order to reduce confusion:

```
Vendors.vend_name
```

Let me be **simple and short**:

> There won't be any **performance** issues, but it is **good practice** to follow.
SQL "Table_Name.Column_name" VS "Column_name" performance & syntax
[ "", "mysql", "sql", "mysql-workbench", "" ]
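One concrete reason for qualified names, beyond style: once both tables share a column name, an unqualified reference becomes ambiguous and the query fails outright. A small SQLite illustration (tables invented to match the question's schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE vendors (vend_id INT, vend_name TEXT);
CREATE TABLE products (prod_id INT, vend_id INT, prod_name TEXT);
INSERT INTO vendors VALUES (1, 'Acme');
INSERT INTO products VALUES (10, 1, 'Anvil');
""")

# Qualified names with short aliases: unambiguous and readable.
ok = conn.execute("""
    SELECT v.vend_name, p.prod_name
    FROM vendors v JOIN products p ON v.vend_id = p.vend_id
""").fetchall()
print(ok)  # [('Acme', 'Anvil')]

# An unqualified reference to the shared column is rejected.
try:
    conn.execute("SELECT vend_id FROM vendors v "
                 "JOIN products p ON v.vend_id = p.vend_id")
    failed = False
except sqlite3.OperationalError as e:
    failed = True
    print(e)  # something like: ambiguous column name: vend_id
```

This is the "query mysteriously stops working" scenario from the accepted answer, reproduced on purpose: a column that is unique today may gain a twin tomorrow.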