I have some data like

```
code   amount   month
aaa1   100      1
aaa1   200      2
aaa1   300      3
aaa4   450      2
aaa4   400      3
aaa6   0        2
```

From the above, for each code I want to get the row with max(month):

```
code   amount   month
aaa1   300      3
aaa4   400      3
aaa6   0        2
```

How can I write an MS SQL query for this?
```
;WITH MyCTE AS
(
    SELECT code, amount, month,
           ROW_NUMBER() OVER (PARTITION BY code ORDER BY code, month DESC) AS rownum
    FROM table
)
SELECT *
FROM MyCTE
WHERE rownum = 1
```
You can use the [ranking function](http://technet.microsoft.com/en-us/library/ms189798.aspx) [**`ROW_NUMBER()`**](http://technet.microsoft.com/en-us/library/ms186734.aspx) with `PARTITION BY code ORDER BY month DESC` to do this:

```
WITH CTE AS
(
    SELECT code, amount, month,
           ROW_NUMBER() OVER (PARTITION BY code ORDER BY month DESC) AS RN
    FROM Tablename
)
SELECT code, amount, month
FROM CTE
WHERE RN = 1;
```

This will give you the row with the maximum `month` for each `code`.

* [**SQL Fiddle Demo**](http://www.sqlfiddle.com/#!3/731df/1)
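For readers who want to try the `ROW_NUMBER()` pattern without a SQL Server instance, here is a minimal sketch using Python's built-in `sqlite3` (it assumes SQLite 3.25+ for window function support; the table name `t` is made up, and the inline data mirror the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (code TEXT, amount INT, month INT)")
conn.executemany(
    "INSERT INTO t VALUES (?, ?, ?)",
    [("aaa1", 100, 1), ("aaa1", 200, 2), ("aaa1", 300, 3),
     ("aaa4", 450, 2), ("aaa4", 400, 3), ("aaa6", 0, 2)],
)

# Rank rows within each code by month (latest first), then keep rank 1.
rows = conn.execute("""
    WITH cte AS (
        SELECT code, amount, month,
               ROW_NUMBER() OVER (PARTITION BY code ORDER BY month DESC) AS rn
        FROM t
    )
    SELECT code, amount, month FROM cte WHERE rn = 1
    ORDER BY code
""").fetchall()

print(rows)  # [('aaa1', 300, 3), ('aaa4', 400, 3), ('aaa6', 0, 2)]
```

The same query text works unchanged in SQL Server apart from the table name.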
Get the row with max(column) for distinct key
[ "sql", "sql-server-2008" ]
I need to check in MySQL whether two date intervals intersect. To explain my issue better: I have an event management module. If we have an event that was added like this:

> start date: `'2013-09-09 08:00:00'`
>
> end date: `'2013-09-09 10:00:00'`

and now I want to add another event like this:

## case A:

> start date: `'2013-09-09 09:00:00'`
>
> end date: `'2013-09-09 11:00:00'`

OR like this:

## case B:

> start date: `'2013-09-09 07:00:00'`
>
> end date: `'2013-09-09 12:00:00'`

I shouldn't be able to do this, because an event was already added in that time interval (08-10). For the first example (`case A`), I solved the issue by doing this:

```
SELECT *
FROM `events` as e
where '2013-09-09 08:00:00' between e.ev_date_start and e.ev_date_end -- date start
   OR '2013-09-09 11:00:00' between e.ev_date_start and e.ev_date_end -- date end
```

But for the second case (`case B`) I'm having trouble figuring it out...
Test that the start time of each time block (the already assigned one and the one to be assigned) does not fall between the start and end of the other:

```
select *
from `events` as e
where '2013-09-09 08:00:00' between e.ev_date_start and e.ev_date_end
   or e.ev_date_start between '2013-09-09 08:00:00' and '2013-09-09 11:00:00'
```
The idea is to check if they DO NOT intersect, and then negate:

```
NOT ('2013-09-09 08:00:00' >= e.ev_date_end
     OR e.ev_date_start >= '2013-09-09 11:00:00')
```

which is logically equivalent to

```
'2013-09-09 08:00:00' < e.ev_date_end
AND e.ev_date_start < '2013-09-09 11:00:00'
```
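A quick sanity check of that overlap condition, sketched in plain Python (the helper name `overlaps` is made up; the timestamps compare correctly as strings in this fixed-width format):

```python
def overlaps(a_start, a_end, b_start, b_end):
    # Two intervals intersect exactly when neither ends before the other starts.
    return a_start < b_end and b_start < a_end

existing = ("2013-09-09 08:00:00", "2013-09-09 10:00:00")

case_a = ("2013-09-09 09:00:00", "2013-09-09 11:00:00")  # partial overlap
case_b = ("2013-09-09 07:00:00", "2013-09-09 12:00:00")  # fully contains existing
case_c = ("2013-09-09 10:00:00", "2013-09-09 12:00:00")  # touches at the boundary

print(overlaps(*existing, *case_a))  # True
print(overlaps(*existing, *case_b))  # True
print(overlaps(*existing, *case_c))  # False: a shared endpoint is not an overlap here
```

Whether a shared endpoint should count as a conflict is a business decision; switch `<` to `<=` if it should.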
Check if two date intervals intersect
[ "mysql", "sql", "select" ]
Is there a way I could keep track of modified tables in Oracle? Is there a master table that keeps track of every other table's row count? For example, if I add a row to table1, it would update the row count stating that table1 now has 5 rows. I was thinking of checking either `dba_tables`, `all_tables` or `user_tables`, but I'm not sure which one actually counts the number of rows each table has.
```
SELECT TABLE_NAME, NUM_ROWS
FROM USER_TABLES
```

I highly doubt you're actually using Oracle 3.1. This query works at least in 11g (I don't have other instances to test at the moment). Keep in mind that this is a data dictionary view and it won't update automatically after you insert a row in any schema table. The Gather Statistics procedure must be run to update these records.
You can get an improvement on just querying the user/all/dba statistics views by combining them with information gathered by table monitoring. The views DBA/ALL/USER_TAB_MODIFICATIONS are populated with the number of inserts, updates, deletes and truncates on the table since statistics were last gathered. The view is populated asynchronously, so call DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO to flush the latest in-memory data to the tables. Bear in mind that statistics themselves may be estimated, and although the accuracy is pretty good on most tables even at surprisingly low estimation percentages (even down to 5% or below), if you need accurate numbers you'll have to query the tables themselves with count(*). You can put together a pipelined function to do this for multiple tables with a single query.
Tracking row numbers in Oracle
[ "sql", "oracle" ]
When I use this MySQL code

```
update catalog_product_entity_decimal
set value = "6.0000"
where attribute_id = 75
and entity_in (
    SELECT product_id
    from `catalog_category_product`
    WHERE category_id = 37
);
```

I'm getting this error

```
#1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'SELECT product_id from `catalog_category_product` ' at line 4
```

I couldn't figure out what's wrong with my query. Can someone tell me what's wrong? Thank you.
You are missing the `IN` keyword after the column name: `entity_in` should be `entity_id IN`. Use this:

```
update catalog_product_entity_decimal
set value = "6.0000"
where attribute_id = 75
and entity_id IN (
    SELECT product_id
    from `catalog_category_product`
    WHERE category_id = 37
);
```
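A runnable sketch of the corrected statement, using Python's built-in `sqlite3` as a stand-in for MySQL (the sample rows are invented; only the `entity_id IN (subquery)` shape matters):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE catalog_product_entity_decimal (
        entity_id INT, attribute_id INT, value TEXT);
    CREATE TABLE catalog_category_product (category_id INT, product_id INT);
    INSERT INTO catalog_product_entity_decimal VALUES
        (1, 75, '1.0000'), (2, 75, '2.0000'), (3, 75, '3.0000');
    INSERT INTO catalog_category_product VALUES (37, 1), (37, 3), (99, 2);
""")

# The corrected statement: IN keyword present, column name entity_id.
conn.execute("""
    UPDATE catalog_product_entity_decimal
    SET value = '6.0000'
    WHERE attribute_id = 75
      AND entity_id IN (SELECT product_id
                        FROM catalog_category_product
                        WHERE category_id = 37)
""")

rows = conn.execute(
    "SELECT entity_id, value FROM catalog_product_entity_decimal ORDER BY entity_id"
).fetchall()
print(rows)  # [(1, '6.0000'), (2, '2.0000'), (3, '6.0000')]
```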
I think `catalog_category_product` will not contain the `'` symbol. Try the query without using the `'` symbol.
#1064 Syntax Error : In "Update Query"
[ "mysql", "sql" ]
I'm completely stumped. I have the following block:

```
IF NOT EXISTS(SELECT * FROM sys.schemas WHERE name = 'Test')
BEGIN
    CREATE SCHEMA Test;
END;
```

If I run this against our SQL Server 2008, I get "Msg 156, Level 15, State 1, Line 3: Incorrect syntax near the keyword 'SCHEMA'", but if I run just the `CREATE SCHEMA` command alone, it works. Also, this works:

```
IF NOT EXISTS(SELECT * FROM sys.schemas WHERE name = 'Test')
BEGIN
    PRINT 'CREATE GOES HERE';
END;
```

What am I doing wrong?
The error message is a bit of a red herring here. Execute the following to see what the "real" error is:

```
SELECT * FROM sys.schemas
CREATE SCHEMA Test
```

> Msg 111, Level 15, State 1, Line 2
>
> 'CREATE SCHEMA' must be the first statement in a query batch.

To get around this problem you can use `EXEC`:

```
IF NOT EXISTS (SELECT * FROM sys.schemas WHERE name = 'Test')
BEGIN
    EXEC ('CREATE SCHEMA Test;');
END;
```
If you need to create the schema in a database other than the current one, you can run the following script (`@myScript` must be declared first; `@DBName` holds the target database name):

```
declare @myScript nvarchar(max);
set @myScript = 'exec ' + QUOTENAME(@DBName) + '..sp_executesql N''create schema MySchema''';
execute (@myScript);
```
Can't create schema inside begin block
[ "sql", "sql-server" ]
I have a stored procedure which I plan to use for searching and for getting all values.

**Scenario:** If the parameter passed is `NULL` it should return all the rows of the table, and if the parameter passed is not `NULL` it should return the rows matching the condition in the LIKE.

Query:

```
ALTER procedure [dbo].[usp_GetAllCustomerDetails]
(
    @Keyword nvarchar(20) = null
)
As
Begin
    Select CustomerId, CustomerName, CustomerTypeName, CustomerCode, CategoryName,
           CustomerMobile, CustomerEmail, CustomerAddress, CustomerCity, CustomerState, Pincode
    from tblCustomerMaster CM
    inner join dbo.tblCustomerTypeMaster CTM on CTM.CustomerTypeId = CM.CustomerType
    inner join dbo.tblCategoryMaster CCM on CCM.CategoryId = CM.CustomerCategory
    where CustomerName like '%' + @Keyword + '%'
```

The above query returns no rows when I execute it without a parameter, since concatenating `NULL` with the `%` wildcards yields `NULL`, which `LIKE` never matches. What should I write in the `where` clause to get the desired output?
You can use a condition like this in your `where` clause:

```
where @Keyword is null or CustomerName like '%' + @Keyword + '%'
```
I just want to point out another way of solving this problem. The issue is that the default value for `@Keyword` is `NULL`. If you change the default to `''`, then the problem goes away:

```
ALTER procedure [dbo].[usp_GetAllCustomerDetails]
(
    @Keyword nvarchar(20) = ''
)
```

Any non-NULL customer name would then match `'%%'`.
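A minimal sketch, using Python's built-in `sqlite3` with invented table and data, of how a NULL parameter can be made to return every row (here via an explicit `IS NULL` disjunct; changing the default to `''` as above achieves the same end):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT)")
conn.executemany("INSERT INTO customers VALUES (?)",
                 [("Alice",), ("Bob",), ("Carol",)])

def search(keyword=None):
    # NULL parameter -> first disjunct is true for every row -> all rows returned.
    return conn.execute(
        "SELECT name FROM customers "
        "WHERE ? IS NULL OR name LIKE '%' || ? || '%' "
        "ORDER BY name",
        (keyword, keyword),
    ).fetchall()

print(search())      # [('Alice',), ('Bob',), ('Carol',)]
print(search("ar"))  # [('Carol',)]
```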
LIKE and NULL in WHERE clause in SQL
[ "sql", "sql-server", "sql-server-2008", "where-clause", "sql-like" ]
In something I need to work on, at a given time, I need to find all nodes of a tree (made using the materialized path + adjacent nodes) that have 0 or 1 children. The columns are something like:

```
id          int
parent_node int
treepath    VARCHAR(255)
deepness    int
```

When I try to come up with a solution to this, I'm only able to think of very complex queries that would use too many subqueries or table joins. For the nodes with 0 children, I've been thinking of searching for all nodes that are not referenced by a parent_id and are within a subtree:

```
SELECT *
FROM user_tree
WHERE node_id NOT IN (
    SELECT parent_id
    FROM user_tree
    WHERE parent_id IS NOT NULL
)
AND treepath LIKE (
    SELECT CONCAT(treepath, '/%')
    FROM user_tree
    WHERE node_id = 4
)
```

Using EXPLAIN, this seems to be quite painful for the DB in terms of how long it takes to get the node list. Is there a less painful way (performance-wise) to write a query that finds what I want?

EDIT: As requested, here's a sample of the table:

```
id  parent_id  treepath  deepness
1   NULL       1         0
2   NULL       2         0
3   1          1/3       1
4   1          1/4       1
5   4          1/4/5     2
7   3          1/3/7     2
8   3          1/3/8     2
9   7          1/3/7/11  3
```
Without sample data, this is just a guess:

```
SELECT t1.*, t2.numchildren
FROM user_tree t1
LEFT JOIN (
    SELECT parent_id, COUNT(*) AS numchildren
    FROM user_tree
    GROUP BY parent_id
) t2 ON t1.id = t2.parent_id
WHERE t2.numchildren IS NULL
   OR t2.numchildren = 1;
```

[SQL-Fiddle from sample data](http://sqlfiddle.com/#!2/1dcea/1/0)
Query

```
SELECT parent_id, COUNT(id) AS n
FROM user_tree
GROUP BY parent_id
HAVING n = 1
```

finds parents having one child. You may want to add `INDEX(parent_id)`. Query

```
SELECT id
FROM user_tree
WHERE id NOT IN (SELECT DISTINCT parent_id FROM user_tree)
```

finds leaves. The same index will help.
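The per-parent counting idea behind both answers can be tested end to end. A sketch with Python's built-in `sqlite3`, using the question's sample rows (node 9's treepath is normalized to `1/3/7/9` here, an assumption; leaves and one-child parents come back from a single query):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user_tree (id INT, parent_id INT, treepath TEXT)")
conn.executemany("INSERT INTO user_tree VALUES (?, ?, ?)", [
    (1, None, "1"), (2, None, "2"), (3, 1, "1/3"), (4, 1, "1/4"),
    (5, 4, "1/4/5"), (7, 3, "1/3/7"), (8, 3, "1/3/8"), (9, 7, "1/3/7/9"),
])

# Count children per parent once, then keep nodes with no match (0 children)
# or exactly one child.
rows = conn.execute("""
    SELECT t1.id, COALESCE(c.n, 0) AS children
    FROM user_tree t1
    LEFT JOIN (SELECT parent_id, COUNT(*) AS n
               FROM user_tree GROUP BY parent_id) c
      ON t1.id = c.parent_id
    WHERE c.n IS NULL OR c.n = 1
    ORDER BY t1.id
""").fetchall()

print(rows)  # [(2, 0), (4, 1), (5, 0), (7, 1), (8, 0), (9, 0)]
```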
Find all nodes within a subtree that has 0 or 1 children
[ "mysql", "sql" ]
I am trying to update a database via an update query, but I am getting an error that says

```
Error com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'add=null,col=null,wok=null,pcol=null,pwok=null,bio=null where un ='null'' at line 1.
```

I have been trying for hours to figure out the error in my SQL syntax, but I haven't been able to get through. The update query is:

```
query = "Update users SET em=?, mn=?,add=?,col=?,wok=?,pcol=?,pwok=?,bio=? where un ='"+un+"'";
```

I need help figuring out the error in my update syntax. Thanks.
`ADD` is a reserved word in MySQL; use backticks:

```
`add`=?
```

<http://dev.mysql.com/doc/mysqld-version-reference/en/mysqld-version-reference-reservedwords-5-5.html>
`ADD` is a reserved word. Please quote it or change the field name.
Error in SQL Update Syntax
[ "mysql", "sql", "jsp" ]
Is there a way to drop all objects in a db, when the objects belong to two different schemas? I had previously been working with one schema, so I queried all objects using:

```
Select * From sysobjects Where type=...
```

then dropped everything using

```
Drop Table ...
```

Now that I have introduced another schema, every time I try to drop an object it says something about not having permission or the object not existing. BUT, if I prefix the object as `[schema.object]` it works. I don't know how to automate this, because I don't know which objects exist or which of the two schemas each object belongs to. Does anyone know how to drop all objects inside a db, regardless of which schema they belong to?

(The user is owner of both schemas, and the objects in the DB were created by said user, as was the user who is removing the objects; the drop works if I use the prefix, e.g. `Drop Table Schema1.blah`.)
Use [`sys.objects`](http://technet.microsoft.com/en-us/library/ms190324.aspx) in combination with [`OBJECT_SCHEMA_NAME`](http://technet.microsoft.com/en-us/library/bb326599%28v=sql.110%29.aspx) to build your `DROP TABLE` statements, review, then copy/paste to execute:

```
SELECT 'DROP TABLE ' + QUOTENAME(OBJECT_SCHEMA_NAME(object_id))
       + '.' + QUOTENAME(name) + ';'
FROM sys.objects
WHERE type_desc = 'USER_TABLE';
```

Or use [`sys.tables`](http://technet.microsoft.com/en-us/library/ms187406.aspx) to avoid the need for the `type_desc` filter:

```
SELECT 'DROP TABLE ' + QUOTENAME(OBJECT_SCHEMA_NAME(object_id))
       + '.' + QUOTENAME(name) + ';'
FROM sys.tables;
```

[SQL Fiddle](http://www.sqlfiddle.com/#!6/6345e/1)
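The generate-then-execute pattern is not SQL Server specific. As a hedged sketch of the same idea, here it is against SQLite's `sqlite_master` catalog via Python's built-in `sqlite3` (SQLite has no schemas, so only the statement-generation step is mirrored; the table names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INT);
    CREATE TABLE invoices (id INT);
""")

# Generate one DROP statement per user table from the catalog, then execute.
stmts = [
    f'DROP TABLE "{name}";'
    for (name,) in conn.execute(
        "SELECT name FROM sqlite_master "
        "WHERE type = 'table' AND name NOT LIKE 'sqlite_%'"
    )
]
print(stmts)
for s in stmts:
    conn.execute(s)

remaining = conn.execute(
    "SELECT COUNT(*) FROM sqlite_master WHERE type = 'table'"
).fetchone()[0]
print(remaining)  # 0
```

Building the full statement list before executing any of it mirrors the review step in the answer: you can inspect the generated DDL before running it.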
Neither of the other answers seems to have addressed the **all objects** part of the question. I'm amazed you have to roll your own for this; I expected there to be a `drop schema blah cascade`. Surely every single person who sets up a dev server will have to do this, and having to do some meta-programming before being able to do normal programming is seriously horrible. Anyway... rant over!

I started looking at some of these articles as a way to do it by clearing out a schema: there's an [old article](http://uliasz.com/2010/08/how-to-drop-all-schema-objects-in-ms-sql-2005/) about doing this, however the tables mentioned there are now [marked as deprecated](http://technet.microsoft.com/en-us/library/ms177596.aspx). I've also looked at the [documentation for the new tables](http://technet.microsoft.com/en-us/library/ms190324.aspx) to help understand what is going on here. There's [another answer](https://stackoverflow.com/questions/688425/evaluate-in-t-sql) and a [great dynamic sql resource](http://www.sommarskog.se/dynamic_sql.html) it links to.

After looking at all this stuff for a while, it just seemed a bit too messy. I think the better option is to go for

```
ALTER DATABASE blah SET SINGLE_USER WITH ROLLBACK IMMEDIATE
DROP DATABASE blah
CREATE DATABASE blah
```

instead. The extra incantation at the top is basically to force-drop the database, as [mentioned here](https://dba.stackexchange.com/questions/34264/how-to-force-drop-database-in-sql-server-2008). It feels a bit wrong, but the amount of complexity involved in writing the drop script is a good reason to avoid it, I think. If there turn out to be problems with dropping the database, I might revisit some of the links and post another answer.
Drop all objects in SQL Server database that belong to different schemas?
[ "sql", "sql-server", "database-schema" ]
I want to use the following SQL statements in a VB.NET app:

```
select * from information_schema.parameters where specific_name='GetTaskEvents'

SELECT * FROM INFORMATION_SCHEMA.ROUTINES WHERE ROUTINE_NAME= 'GetTaskEvents'
```

I spoke to the DBA about it today and he seemed unkeen, but did not give a reason. Is it bad practice to do this? I want to be able to do something like this:

```
Public Sub Delete()
    'Loop around about ten databases
    'Call the delete function in each of the databases (check that the stored procedure exists first)
    'Each database has different output parameters. Deal with them differently,
    'e.g. if there is an email parameter, then email the person to say the record was deleted.
End Sub
```

The reason for this is so that I can treat each database the same.
If you know what the stored procedure *is* but are not sure what the parameters are, a better way is to use the [SqlCommandBuilder.DeriveParameters](http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlcommandbuilder.deriveparameters.aspx) method to get the number of parameters as well as their types and names.

*UPDATE* Here is a basic usage example in VB.NET:

```
Dim oCommand As New SqlCommand("SpName", oConnObject)
oCommand.CommandType = CommandType.StoredProcedure
SqlCommandBuilder.DeriveParameters(oCommand)
```

At this point the `oCommand.Parameters` collection will be filled with the parameters for "SpName".
Best practice should come down to what kind of risk you are willing to expose your company to in order to get the type of data you need: a risk/benefit analysis. In this case, the risk I see is that you are making your code depend on Microsoft's behind-the-scenes system tables and schemas. These do tend to change from version to version, so an upgrade from one SQL Server release to the next could break your compiled code. *That is aside from the risks outlined in comments about calling procs you don't understand the effect of.* To reduce that risk, you could put your code in a stored procedure so that you can easily update it, without having to recompile code, if and when the system schema does change. There are other ways to reduce the other risks identified, but that is how I analyze "best practice".
Using system tables in VB.NET
[ "sql", "vb.net" ]
I want to get all the selected columns from a single select statement without actually executing it. The query may use joins or unions, so it may involve more than one table, which means I cannot simply use

```
select column_name, data_type
from user_tab_cols
where table_name = 'my_table_name'
```

to get the result. For example, **input:**

```
select db1.*
from (select dept_code, vital_signs, units, log_time
      from table1
      where dept_code = '111'
      union
      select 'hl06' as dept_code, vital_signs, units, log_time
      from table2
      where isdefault = 1
        and vital_signs not in (select vital_signs
                                from Ward_Vs_Vital_Signs
                                where dept_code = '111')) db1,
     table3 db2
where db1.vital_signs = db2.vital_signs(+)
order by db2.serial_no
```

**output:**

```
column_name | data_type
-------------------------
dept_code   | VARCHAR
vital_signs | NUMBER
units       | VARCHAR
log_time    | DATE
```

My question is: how can I parse the query and ask the database to tell me the columns and data types that will be returned, without executing the statement? Thanks!

PS: I'm using Oracle 10g.
There is no easy way to do it using a single select statement; however, you can look into the `dbms_sql` approach ([DOC](http://docs.oracle.com/cd/B19306_01/appdev.102/b14258/d_sql.htm#i997238)):

```
DECLARE
  c       NUMBER;
  d       NUMBER;
  col_cnt INTEGER;
  rec_tab DBMS_SQL.DESC_TAB;
  col_num NUMBER;

  PROCEDURE print_rec(rec IN DBMS_SQL.DESC_REC) IS
  BEGIN
    DBMS_OUTPUT.NEW_LINE;
    DBMS_OUTPUT.PUT_LINE('col_name = ' || rec.col_name);
    DBMS_OUTPUT.PUT_LINE('col_type = ' || rec.col_type);
  END;
BEGIN
  c := DBMS_SQL.OPEN_CURSOR;
  DBMS_SQL.PARSE(c, 'SELECT dummy, 12345 num, sysdate dt FROM dual', DBMS_SQL.NATIVE);
  d := DBMS_SQL.EXECUTE(c);
  DBMS_SQL.DESCRIBE_COLUMNS(c, col_cnt, rec_tab);
  col_num := rec_tab.first;
  IF (col_num IS NOT NULL) THEN
    FOR i IN 1 .. col_cnt LOOP
      print_rec(rec_tab(i));
    END LOOP;
  END IF;
  DBMS_SQL.CLOSE_CURSOR(c);
END;
/
```

Output:

```
col_name = DUMMY
col_type = 1

col_name = NUM
col_type = 2

col_name = DT
col_type = 12

PL/SQL procedure successfully completed.
```

Notice the `col_type` numeric value above: it is a type code which maps to an actual data type. The mapping is documented here: <http://docs.oracle.com/cd/E11882_01/server.112/e17118/sql_elements001.htm#i54330>
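The same describe-before-fetch idea exists in most client APIs. As a rough analogue, Python's built-in `sqlite3` exposes result column names through `cursor.description` once the statement is executed; adding `LIMIT 0` keeps that cheap. (SQLite does not report data types for arbitrary expressions, so only names are shown; the table here is invented.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (dept_code TEXT, vital_signs INT, log_time TEXT)")

# LIMIT 0: the statement is compiled and described, but no rows are produced.
cur = conn.execute("SELECT dept_code, vital_signs, log_time FROM t LIMIT 0")
cols = [d[0] for d in cur.description]
print(cols)  # ['dept_code', 'vital_signs', 'log_time']
```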
I believe there is no way to find the data types from the query alone. Instead, you could wrap the query in a `create table ... as` or `create view ... as` statement and then use DESC or data dictionary views like `user_tab_cols`.
oracle 10g get all selected columns and datatype from a single select statement
[ "sql", "oracle" ]
Is it possible to write a query that loops through the rows of a two-column table, checks the first column for a certain identifier, and copies the data in the second column into a new table?

Example:

```
tblSurveyData

FirstColumn  SecondColumn
A0           John
A2           Smith
A3           05-01-1973

tblSurveyReport

FirstName  MiddleName  LastName  DateOfBirth
John                   Smith     05-01-1973
```

Here A0 data would go to FirstName, A1 to MiddleName, A2 to LastName and A3 to DateOfBirth. There are many more identifiers and fields, but just as an example, how would you do this with a query in Access, or is VBA a better solution? The only solution I came up with is the following VBA, but this bypasses the two-column table and tries to insert into the tblSurveyReport table directly. Unfortunately, it puts each piece of data into its own row, which doesn't help.

```
If Identifier = "A0" Then
    DoCmd.RunSQL "INSERT INTO tblSurveyReport " _
        & "(FirstName) " _
        & "VALUES ('" & Info & "')"
ElseIf Identifier = "A1" Then
    DoCmd.RunSQL "INSERT INTO tblSurveyReport " _
        & "(MiddleName) " _
        & "VALUES ('" & Info & "')"
ElseIf Identifier = "A2" Then
    DoCmd.RunSQL "INSERT INTO tblSurveyReport " _
        & "(LastName) " _
        & "VALUES ('" & Info & "')"
ElseIf Identifier = "A3" Then
    DoCmd.RunSQL "INSERT INTO tblSurveyReport " _
        & "(DateOfBirth) " _
        & "VALUES ('" & Info & "')"
End If
```

Each piece of data is going into its own row and I need it all in the same row. Any help is greatly appreciated. Thanks in advance.

TC
Use [INSERT INTO](http://msdn.microsoft.com/en-us/library/office/bb208861%28v=office.12%29.aspx) with a SELECT statement:

```
INSERT INTO tblSurveyReport (FirstName)
SELECT SecondColumn FROM tblSurveyData WHERE FirstColumn = 'A0'

INSERT INTO tblSurveyReport (MiddleName)
SELECT SecondColumn FROM tblSurveyData WHERE FirstColumn = 'A1'
```

You could run this using a DoCmd, as a query in Access, etc.
You will need something that links your records together. What happens if the data gets re-sorted? How would you know that all the info in your example should be in the same record? I believe the only way to do something like this would be to create a 3rd field in your first table that determines which data belongs together, something like a UserID. Then the table would look like this:

tblSurveyData

```
FirstColumn  SecondColumn  UserID
A0           John          XX001
A2           Smith         XX001
A3           05-01-1973    XX001
```

Then you could create a preliminary query like:

```
Select DISTINCT UserID from tblSurveyData
```

Use that as your "pointer" query and loop through the results; then you can pull all the records with each UserID out and copy them into the new table. Or, you can inner join the "pointer" query and tblSurveyData, and then use a Crosstab query. The easiest way to do that would be to use the wizard to create it, and then just copy the code into your VBA.

**EDIT:** For easier readability for future readers, the SQL for the query you're asking for in your comment is:

```
SELECT Max(IIf([Identifier]="A0",[IValue],"")) AS FName,
       Max(IIf([Identifier]="A1",[IValue],"")) AS MName,
       Max(IIf([Identifier]="A2",[IValue],"")) AS LName,
       Max(IIf([Identifier]="A3",[IValue],"")) AS BDate
FROM tblSurveyData;
```

You will need to change "First Column" in your sample data to "Identifier", and "Second Column" to "IValue" (or make the corresponding field name changes to the above SQL). I have tested this and it worked perfectly, giving you one record with all 4 values in corresponding fields. Once you've pasted that into a blank query, you can switch to Design View and change the query to an Append or Make Table query, and that will let you export the results to tblSurveyReport.
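Outside Access, the `Max(IIf(...))` pivot is usually written as `MAX(CASE WHEN ...)`. A sketch with Python's built-in `sqlite3` and the question's three sample rows (the column names `identifier`/`ivalue` follow the renaming suggested above; `A1` is absent on purpose, so its column comes back NULL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE survey (identifier TEXT, ivalue TEXT)")
conn.executemany("INSERT INTO survey VALUES (?, ?)", [
    ("A0", "John"), ("A2", "Smith"), ("A3", "05-01-1973"),
])

# Fold the key/value rows into one wide row: each CASE isolates one
# identifier's value; MAX collapses the group, ignoring NULLs.
row = conn.execute("""
    SELECT MAX(CASE WHEN identifier = 'A0' THEN ivalue END) AS first_name,
           MAX(CASE WHEN identifier = 'A1' THEN ivalue END) AS middle_name,
           MAX(CASE WHEN identifier = 'A2' THEN ivalue END) AS last_name,
           MAX(CASE WHEN identifier = 'A3' THEN ivalue END) AS birth_date
    FROM survey
""").fetchone()

print(row)  # ('John', None, 'Smith', '05-01-1973')
```

With a UserID column as suggested above, you would add `GROUP BY UserID` to get one wide row per person.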
Access query or VBA code to move data from one table to another depending on first row unique identifier
[ "sql", "vba", "ms-access" ]
Suppose I have two tables, and each table has one index. If I join the tables on the indexed column, my question is: is there any difference in query time between tables with 100 rows and tables with 1 million rows?
Say you are searching for a book in a library. Is there any difference in time whether the library has 100 books or 100 million books, even if the library nicely organizes the books by category and in alphabetical order? It makes a difference. A computer is no different from a human; it just performs the task a lot faster. Say we have 100 books to search: a binary search takes log2(100) = 6.64 operations, while 100,000,000 books takes log2(100,000,000) = 26.57 operations.
Of course there is a difference. (This is one reason why you should never use a development database that has significantly fewer records than the production database: you don't want to discover performance problems when you push to prod.)

First, assume you are going to return all the records. If you have a 100-row table joined to a 100-row table, the most records that could be returned is 10,000 (this would be a cross join); the most records that could be returned from a million-row table joined to a million-row table is 1,000,000,000,000. Clearly, just returning that number of records across a network connection will take longer, just like printing a million pages would take longer than printing 100 pages.

Next, the indexes on the 100-row table will probably not be used, as all of it can easily fit into memory. The larger tables will likely use the indexes, so there is an extra lookup step (which speeds up the query immeasurably compared to not using an index on a large table); but, more critically, some queries may not be able to use the index at all. Suppose you do a search with the condition `WHERE Field1 LIKE '%test%'`. Now the index can't be used and the contents of a million records must each be checked. Does it take you longer to read a million pages than to read 100? So too will it take the database longer to read a million records.
Different between 100 rows and 100 million rows
[ "mysql", "sql", "sql-server" ]
I have two parameters that allow the user to choose either one to filter the data:

```
SELECT comment, comment_type, user
FROM comments
WHERE (comments.comment_type = @ctype)
   OR (comments.user = @user)
```

The user must be able to select each comment type or user from an available-values dropdown. The problem is SSRS will not let me leave one blank. I tried using `IN`, but it errored out.
How about

```
(comments.comment_type = @ctype AND comments.user IS NOT NULL)
OR (comments.comment_type IS NOT NULL AND comments.user = @user)
```
You should check `Allow null value` in the `Report parameters` window. Then you can try your query, or the query below:

```
SELECT comment, comment_type, user
FROM comments
WHERE (comments.comment_type = coalesce(@ctype, comments.comment_type))
   OR (comments.user = coalesce(@user, comments.user))
```

[**More information**](http://bloggingabout.net/blogs/egiardina/archive/2007/06/26/sql-server-reporting-services-optional-parameters.aspx)
How to implement two parameters that are each optional with available values for each parameter?
[ "sql", "sql-server", "t-sql", "reporting-services" ]
I want an SQL query to perform a data-scrubbing task. I have two tables, both containing names. I want to compare them and list only those names which are in table 2 but not in table 1.

Example:

```
Table 1 = A, B, C
Table 2 = C, D, E
```

The result must be D and E.
```
select T2.Name
from Table2 as T2
where not exists (select *
                  from Table1 as T1
                  where T1.Name = T2.Name)
```

See [**this article**](http://www.sqlperformance.com/2012/12/t-sql-queries/left-anti-semi-join) about the performance of different implementations of anti-join (for SQL Server).
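Both the `NOT EXISTS` form above and a `LEFT JOIN ... IS NULL` form compute the same anti-join. A quick check with Python's built-in `sqlite3` (table and column names simplified to `t1`/`t2`/`name`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t1 (name TEXT);
    CREATE TABLE t2 (name TEXT);
    INSERT INTO t1 VALUES ('A'), ('B'), ('C');
    INSERT INTO t2 VALUES ('C'), ('D'), ('E');
""")

# Anti-join, form 1: rows of t2 with no matching row in t1.
not_exists = conn.execute("""
    SELECT name FROM t2
    WHERE NOT EXISTS (SELECT 1 FROM t1 WHERE t1.name = t2.name)
    ORDER BY name
""").fetchall()

# Anti-join, form 2: outer join, keep the rows that found no partner.
left_join = conn.execute("""
    SELECT t2.name FROM t2
    LEFT JOIN t1 ON t1.name = t2.name
    WHERE t1.name IS NULL
    ORDER BY t2.name
""").fetchall()

print(not_exists, left_join)  # both [('D',), ('E',)]
```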
```
SELECT t2.name
FROM 2 t2
LEFT JOIN 1 t1 ON t1.name = t2.name
WHERE t1.name IS NULL
```
Compare two tables and give the output record which does not exist in 1st table
[ "sql" ]
In my WHERE clause I am using a CASE that returns all rows if the parameter is blank or null. This works fine with single values, but not so well when using the IN clause. For example, this works very well; if there is no match then ALL records are returned:

```
AND Name = CASE WHEN @Name_ != '' THEN @Name_ ELSE Name END
AND
```

This also works, but there is no way to use the CASE expression, so if I do not provide some value I will not see any records:

```
AND Name IN (Select Names FROM Person Where Names like @Name_)
AND
```

Can I use a CASE with the 2nd example, so that if there is no match all names are returned?
Maybe `coalesce` will resolve your problem:

```
AND Name IN (Select Names FROM Person Where Names like coalesce(@Name, Name))
AND
```

As Mattfew Lake said and used, you can also use the `isnull` function:

```
Name IN (Select Names FROM Person Where Names like isnull(@Name, Name))
```
You should almost definitely use an IF statement with two selects, e.g.:

```
IF @Name IS NULL
BEGIN
    SELECT * FROM Person
END
ELSE
BEGIN
    SELECT * FROM Person
    --WHERE Name LIKE '%' + @Name + '%'
    WHERE Name = @Name
END
```

N.B. I've changed LIKE to equals, since `LIKE` [without wildcards is no different to equals](https://stackoverflow.com/q/404226/1048425); it shouldn't make any difference in terms of performance, but it stops ambiguity for the next person that reads your query. If you do want non-exact matches, use the commented-out `WHERE` and remove wildcards as required.

The reason for the `IF` is that the two queries may have very different execution plans, but by combining them into one query you are forcing the optimiser to pick one plan or the other. Imagine this schema:

```
CREATE TABLE Person
(
    PersonID INT IDENTITY(1, 1) NOT NULL,
    Name VARCHAR(255) NOT NULL,
    DateOfBirth DATE NULL
    CONSTRAINT PK_Person_PersonID PRIMARY KEY (PersonID)
);
GO
CREATE NONCLUSTERED INDEX IX_Person_Name ON Person (Name) INCLUDE (DateOfBirth);
GO
INSERT Person (Name)
SELECT DISTINCT LEFT(Name, 50)
FROM sys.all_objects;
GO
CREATE PROCEDURE GetPeopleByName1 @Name VARCHAR(50)
AS
SELECT PersonID, Name, DateOfBirth
FROM Person
WHERE Name IN (SELECT Name FROM Person WHERE Name LIKE ISNULL(@Name, Name));
GO
CREATE PROCEDURE GetPeopleByName2 @Name VARCHAR(50)
AS
IF @Name IS NULL
    SELECT PersonID, Name, DateOfBirth
    FROM Person
ELSE
    SELECT PersonID, Name, DateOfBirth
    FROM Person
    WHERE Name = @Name;
GO
```

Now if I run both procedures, both with a value and without:

```
EXECUTE GetPeopleByName1 'asymmetric_keys';
EXECUTE GetPeopleByName1 NULL;
EXECUTE GetPeopleByName2 'asymmetric_keys';
EXECUTE GetPeopleByName2 NULL;
```

the results are the same for both procedures; however, I get the same plan each time for the first procedure, but two different plans for the second, both of which are much more efficient than the first.
![enter image description here](https://i.stack.imgur.com/W5sXg.png)

If you can't use an `IF` (e.g. if you are using an inline table-valued function), then you can get a similar result by using `UNION ALL`:

```
SELECT PersonID, Name, DateOfBirth
FROM Person
WHERE @Name IS NULL
UNION ALL
SELECT PersonID, Name, DateOfBirth
FROM Person
WHERE Name = @Name;
```

This is not as efficient as using IF, but still more efficient than your first query. The bottom line is that less is not always more: yes, using `IF` is more verbose and may look like it is doing more work, but it is in fact doing a lot less work, and can be much more efficient.
Using a CASE with the IN clause in T-SQL
[ "asp.net", "sql", "t-sql", "case", "where-clause" ]
What kind of relational algebra operator would the [LOAD](http://dev.mysql.com/doc/refman/5.0/en/load-data.html) keyword be mapped onto? If it's not a logical operator but only a physical one, then how is it handled during the logical-to-physical operator transformation process by the database query processor? Or, if it's not mapped onto relational algebra primitives, is it then an implementation-specific relational algebra operator extension?
The `LOAD` keyword is internally mapped in the database after query parsing into a *logical* operator but it's not an *algebra* operator. All algebra operators are logical operators but not all logical operators are algebra operators.
You have got to distinguish between algebraic operators, such as natural join and transitive closure, and keywords/commands of the data manipulation language. Algebraic operators do computations on given values, and return the computed value as their result. And then you have programming and data manipulation languages that are built, in a certain sense, "on top of" algebraic operators. If you have an algebraic operator "integer addition", then you will also have a *symbol* in your language to denote invocations of that operator, say '+'. And you can have *well-formed formulas* (wffs) that consist, among other things, of such symbols, say 'a+b' or 'x+1'. This is where *variables* pop up: algebraic operators alone are not enough, we also need variables (at least in procedural languages). And to manipulate those variables, we need this thing called "assignment"; more longwindedly, we need the *assignment operator*. And just like the assignment "x := 3" can be regarded as a wff denoting an invocation of the integer assignment operator, LOAD can be regarded as a form in which the relational assignment operator can appear. The assignment operator is, obviously, not a read-only operator of the algebra, but it is the operator of your *language* that you use for managing state.
Is LOAD a relational algebra operator?
[ "", "mysql", "sql", "relational-database", "relational-algebra", "first-order-logic", "" ]
I have a table like this: ``` Server CompliancePercentage A 25 B 15 C 45 D 75 E 17 F 82 ``` I want to get from a single query the results in the following way: ``` Conformity% 00-20 20-40 40-60 60-80 80-100 Server Count 2 1 1 1 1 ``` How do I get the above-mentioned result from a nested query? Any help would be appreciated. Thanks a lot in advance. Suvi
You can use an aggregate function with a CASE expression to get the result (note that the last bucket uses `<= 100` so a score of exactly 100 is counted): ``` select 'ServerCount' Conformity, count(case when CompliancePercentage >= 0 and CompliancePercentage <20 then 1 end) Per00_19, count(case when CompliancePercentage >= 20 and CompliancePercentage <40 then 1 end) Per20_39, count(case when CompliancePercentage >= 40 and CompliancePercentage <60 then 1 end) Per40_59, count(case when CompliancePercentage >= 60 and CompliancePercentage <80 then 1 end) Per60_79, count(case when CompliancePercentage >= 80 and CompliancePercentage <=100 then 1 end) Per80_100 from yourtable; ``` See [SQL Fiddle with Demo](http://sqlfiddle.com/#!2/10702d/1)
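As a quick sanity check, the same conditional-aggregation pattern runs unchanged on SQLite; a minimal sketch via Python's built-in `sqlite3` (table and column names `servers`/`srv`/`comp` are the question's, the data is the sample data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE servers (srv TEXT, comp INTEGER)")
conn.executemany("INSERT INTO servers VALUES (?, ?)",
                 [("A", 25), ("B", 15), ("C", 45), ("D", 75), ("E", 17), ("F", 82)])

# COUNT(CASE ... THEN 1 END) counts only rows where the CASE yields non-NULL
row = conn.execute("""
    SELECT COUNT(CASE WHEN comp >=  0 AND comp <  20 THEN 1 END),
           COUNT(CASE WHEN comp >= 20 AND comp <  40 THEN 1 END),
           COUNT(CASE WHEN comp >= 40 AND comp <  60 THEN 1 END),
           COUNT(CASE WHEN comp >= 60 AND comp <  80 THEN 1 END),
           COUNT(CASE WHEN comp >= 80 AND comp <= 100 THEN 1 END)
    FROM servers
""").fetchone()
print(row)  # (2, 1, 1, 1, 1)
```

The CASE has no ELSE branch, so non-matching rows produce NULL and are skipped by COUNT, which is what makes each column an independent bucket count.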
Assuming the following table: ``` create table dummyTable ( srv char(1), comp int ) ``` You can write out the query similar to ``` select [00-19] = (SELECT COUNT(1) FROM dummyTable where comp between 0 and 19), [20-39] = (SELECT COUNT(1) FROM dummyTable where comp between 20 and 39) ``` If your table has a ton of records (ie, > 1M) you may want to consider adding a non-clustered index to the table on the compliance percentage column to avoid a bunch of table scans.
Nested SQL Queries: How to get multiple counts on same elements based on different criteria
[ "", "sql", "" ]
I have a table of customer ID's and a table of customer data. A customer can change ID's, but I want to query their earliest date in some cases, and the latest date in others. For example, I have two tables: ``` tbl_a id active_id a aa b aa c aa d bb e bb tbl_b id date city a 1/1/2012 Dallas b 1/1/2013 Houston c 2/1/2013 Austin d 1/1/2003 San Antonio e 3/3/2013 El Paso tbl_c id value a 50 b 50 b 50 c 50 d 10 e 10 e 10 ``` How would I query the data to return a single active\_id with the EARLIEST date and MOST RECENT city while summing the data from tbl\_c? For example, return the following: ``` active_id date city sum(value) aa 1/1/2012 Austin 200 bb 1/1/2003 El Paso 30 ``` In semantic terms, I'm trying to find the first date and most recent city for any active\_id, while summing the transaction values.
If you can use windowed functions (and don't have access to `first_value` function): ``` with cte as ( select row_number() over(partition by a.active_id order by b.date asc) as rn1, row_number() over(partition by a.active_id order by b.date desc) as rn2, a.active_id, b.date, b.city from tbl_a as a inner join tbl_b as b on b.id = a.id ) select c.active_id, c1.date, c2.city from (select distinct active_id from cte) as c left outer join cte as c1 on c1.active_id = c.active_id and c1.rn1 = 1 left outer join cte as c2 on c2.active_id = c.active_id and c2.rn2 = 1 ``` **`sql fiddle demo`** If you can use `first_value` function: ``` with cte as ( select first_value(b.date) over(partition by a.active_id order by b.date asc) as date, first_value(b.city) over(partition by a.active_id order by b.date desc) as city, a.active_id from tbl_a as a inner join tbl_b as b on b.id = a.id ) select distinct active_id, date, city from cte ```
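The windowed-CTE approach runs as-is on any engine with window functions; a small sketch using Python's `sqlite3` (requires SQLite 3.25+; the sample dates are rewritten in ISO format so they sort correctly as text):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbl_a (id TEXT, active_id TEXT);
CREATE TABLE tbl_b (id TEXT, date TEXT, city TEXT);
INSERT INTO tbl_a VALUES ('a','aa'),('b','aa'),('c','aa'),('d','bb'),('e','bb');
INSERT INTO tbl_b VALUES
  ('a','2012-01-01','Dallas'), ('b','2013-01-01','Houston'),
  ('c','2013-02-01','Austin'), ('d','2003-01-01','San Antonio'),
  ('e','2013-03-03','El Paso');
""")

rows = conn.execute("""
WITH cte AS (
  SELECT a.active_id, b.date, b.city,
         ROW_NUMBER() OVER (PARTITION BY a.active_id ORDER BY b.date ASC)  AS rn1,
         ROW_NUMBER() OVER (PARTITION BY a.active_id ORDER BY b.date DESC) AS rn2
  FROM tbl_a AS a JOIN tbl_b AS b ON b.id = a.id
)
SELECT c1.active_id, c1.date, c2.city  -- earliest date, most recent city
FROM cte AS c1
JOIN cte AS c2 ON c2.active_id = c1.active_id AND c2.rn2 = 1
WHERE c1.rn1 = 1
ORDER BY c1.active_id
""").fetchall()
print(rows)  # [('aa', '2012-01-01', 'Austin'), ('bb', '2003-01-01', 'El Paso')]
```

Each CTE row carries two rankings (ascending and descending by date), so one pass over the join feeds both the "first" and "last" lookups.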
Try something like this: ``` ;WITH DATA AS (SELECT T1.ACTIVE_ID, T2.* FROM TABLEA T1 INNER JOIN TABLEB T2 ON T1.ID = T2.ID), DATA2 AS (SELECT T1.ACTIVE_ID, SUM(T2.VALUE) TOT_VALUE FROM TABLEA T1 INNER JOIN TABLEC T2 ON T1.ID = T2.ID GROUP BY ACTIVE_ID) SELECT T3.ACTIVE_ID, DATE, CITY, T5.TOT_VALUE AS [SUM(VALUE)] FROM DATA T3 INNER JOIN (SELECT MIN(DATE) MAXDATE, ACTIVE_ID FROM DATA GROUP BY ACTIVE_ID)T4 ON T3.ACTIVE_ID = T4.ACTIVE_ID AND T3.DATE = T4.MAXDATE INNER JOIN DATA2 T5 ON T3.ACTIVE_ID = T5.ACTIVE_ID ``` A working example can be found on `SQL Fiddle`. Good Luck!
Most Efficient Way to Process Subquery For First Record
[ "", "sql", "sql-server", "" ]
I have the following 3 Oracle Database Tables: ``` application (app_id, application_name) ``` eg. (1, firstapp) ``` item_request (app_id, item_id, qty_requested) ``` eg. (1, 111, 5), (1, 112, 3), (1, 113, 7) ``` item (item_id, item_code) ``` eg. (111, "COMPUTER"), (112, "PHONE"), (113, "DESK") I want to produce this table: ``` new table (app_id, application_name, qty_requested_for_item_with_item_code_111, qty_requested_for_item_with_item_code_112, qty_requested_for_item_with_item_code_113, etc...) ``` eg. (1, firstapp, 5, 3, 7, etc...) Is this even possible?
You can achieve this using dynamic SQL, e.g. with a ref cursor. You have to: a) iterate over all existing items, b) build your SELECT list by adding all items, and c) perform the pivoting (either by using PIVOT or with the traditional MAX / CASE / GROUP BY approach). ``` create table application(app_id number primary key, application_name varchar2(30)); create table item_request(app_id number, item_id number, qty_requested number); create table item(item_id number primary key, item_code varchar2(30)); insert into application values(1, 'firstapp'); insert into application values(2, 'secondapp'); insert into item values (111, 'Computer'); insert into item values (112, 'Phone'); insert into item values (113, 'Desk'); insert into item_request values (1, 111, 5); insert into item_request values (1, 112, 3); insert into item_request values (1, 113, 7); insert into item_request values (2, 111, 3); -- SQL/Plus syntax for declaring and using a bind variable of type ref cursor var x refcursor; set autoprint on declare l_sql varchar2(4000); l_select varchar2(4000); l_from varchar2(4000); begin l_select := 'select application.app_id, application.application_name'; for cur in (select * from item) loop l_select := l_select || chr(10) || ',max(case when item_code = ''' || cur.item_code || ''' then qty_requested else 0 end) as ' || cur.item_code; end loop; l_sql := l_select || ' from application left join item_request on application.app_id = item_request.app_id left join item on item.item_id = item_request.item_id group by application.app_id, application.application_name'; dbms_output.put_line(l_sql); open :x for l_sql; end; ```
Oracle has limited support for Pivot functions. See e.g. <http://www.oracle.com/technetwork/articles/sql/11g-pivot-097235.html>. In short, for regular output you are stuck with having a static list of columns that you are pivoting into. For dynamic pivot tables you can use the pivot xml function. But this produces the pivot columns in xml, and so will need parsing.
Oracle Database - Dynamic number of columns
[ "", "sql", "oracle", "dynamic", "" ]
How to set row level lock in Informix?
``` CREATE TABLE customer(customer_num serial, lname char(20)...) LOCK MODE ROW; ``` Read more here: [Row and Key Locks](http://publib.boulder.ibm.com/infocenter/idshelp/v10/index.jsp?topic=/com.ibm.perf.doc/perf227.htm)
You can use: ``` ALTER TABLE customer LOCK MODE(ROW); ``` (PAGE is the default) More [here](http://www.ibm.com/support/knowledgecenter/SSGU8G_12.1.0/com.ibm.sqls.doc/ids_sqs_0083.htm). If you want to find the current lock mode of a table: ``` SELECT tablename, lockmode FROM systables ```
How to set row level lock in Informix?
[ "", "sql", "locking", "informix", "" ]
I have a table like the one below: ``` Student ID | History | Maths | Geography 1 A B B 2 C C E 3 D A B 4 E D A ``` How can I find out how many students got A in History, B in Maths, and E in Geography with a single SQL query?
If you want to get number of students who got A in History in one column, number of students who got B in Maths in second column and number of students who got E in Geography in third then: ``` select sum(case when [History] = 'A' then 1 else 0 end) as HistoryA, sum(case when [Maths] = 'B' then 1 else 0 end) as MathsB, sum(case when [Geography] = 'E' then 1 else 0 end) as GeographyC from Table1 ``` If you want to count students who got A in history, B in maths and E in Geography: ``` select count(*) from Table1 where [History] = 'A' and [Maths] = 'B' and [Geography] = 'E' ```
If you want independent counts use: ``` SELECT SUM(CASE WHEN Condition1 THEN 1 ELSE 0 END) AS 'Condition1' ,SUM(CASE WHEN Condition2 THEN 1 ELSE 0 END) AS 'Condition2' ,SUM(CASE WHEN Condition3 THEN 1 ELSE 0 END) AS 'Condition3' FROM YourTable ``` If you want multiple conditions for one count use: ``` SELECT COUNT(*) FROM YourTable WHERE Condition1 AND Condition2 AND Condition3 ``` It sounds like you want multiple independent counts: ``` SELECT SUM(CASE WHEN History = 'A' THEN 1 ELSE 0 END) AS 'History A' ,SUM(CASE WHEN Maths = 'B' THEN 1 ELSE 0 END) AS 'Maths B' ,SUM(CASE WHEN Geography = 'E' THEN 1 ELSE 0 END) AS 'Geography E' FROM YourTable ```
multiple count conditions with single query
[ "", "sql", "" ]
I have a stored procedure where I want to check that a date is between a fixed date and the current date/time (with `GETDATE()`): ``` SELECT a, b FROM myTbl WHERE DATE BETWEEN 'Day start datetime' AND GETDATE() ``` ...for example: ``` WHERE DATE BETWEEN '2013-09-10 00:00:00.00' AND 'GETDATE()' ``` How can I do it?
A pair of `DATEADD`/`DATEDIFF` calls will round a date down to the previous midnight: ``` SELECT a , b FROM myTbl WHERE DATE BETWEEN DATEADD(day,DATEDIFF(day,0,GETDATE()),0) and GETDATE() ``` Alternatively, if you're on SQL Server 2008 or later: ``` SELECT a , b FROM myTbl WHERE DATE BETWEEN CONVERT(date,GETDATE()) and GETDATE() ```
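For illustration, the same "round down to midnight" arithmetic can be expressed outside T-SQL; a minimal Python sketch of the DATEADD/DATEDIFF idiom (the 1900-01-01 epoch mirrors SQL Server's day 0):

```python
from datetime import datetime, timedelta

def day_start(dt: datetime) -> datetime:
    # DATEADD(day, DATEDIFF(day, 0, @dt), 0):
    # count whole days since the epoch, then add them back onto the epoch.
    epoch = datetime(1900, 1, 1)  # SQL Server's "day 0"
    return epoch + timedelta(days=(dt - epoch).days)

now = datetime(2013, 9, 10, 14, 35, 12)
print(day_start(now))  # 2013-09-10 00:00:00
```

The integer day count discards the time-of-day component, which is exactly why the T-SQL pair of calls lands on the previous midnight.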
`'GETDATE()'` is a string literal, `GETDATE()` is a `T-SQL` function. Your query should look like: ``` SELECT a , b FROM myTbl WHERE DATE BETWEEN '2013-09-10 00:00:00.0' and GETDATE() ```
Check date between two dates in T-SQL
[ "", "sql", "sql-server", "t-sql", "between", "" ]
I have a large database with 300,000 rows (1.6 GB). I need to delete them all EXCEPT the ones that have the following features: ``` main_sec=118 main_sec=Companies type=SWOT ``` Here is the code I prepared, but somehow it's deleting all the rows of the table: ``` DELETE FROM `swots` WHERE (main_sec <> '118') OR (main_sec <> 'Companies') OR (type <> 'SWOT'); ``` Please help me to understand where the mistake is.
It would be faster to insert the rows that you want to keep (assuming they are fewer than the remaining rows) in a new table like: ``` INSERT INTO main_sec_new SELECT * FROM main_sec WHERE main_sec IN ('118','Companies') and type = 'SWOT' ``` And then just drop the old table
``` DELETE FROM `swots` WHERE main_sec not in ('118', 'Companies') and type <> 'SWOT' ```
Delete all rows except specifics
[ "", "mysql", "sql", "sql-delete", "" ]
I am trying to update values in a MySQL table and really getting stuck. Basically I want to update a column's value to 1 where another column (in the same row) = "N". It should be quite simple but I can't fathom it. ``` UPDATE household SET allowsDogs=1 WHERE allowsCats="N" ``` In my mind the above query should, for each household where allowsCats="N", set allowsDogs to 1. But instead I get an empty result set. I have also tried variations: ``` Update household set allowsDogs=1 where householdID in (select householdID from household where allowsCats="N") Update household set allowsDogs=1 where householdID in (select householdID from copy_of_household where copy_of_household.allowsCats="N") ``` I'm just about to write a PHP script to read in each row and update one at a time... but there must be an easier way...
The syntax you have specified is correct, and should work. I believe this is where you are getting tripped up: > But instead I get an empty result set. `UPDATE` queries do not return any result set. They do their work and then return an empty result set. However, your client library or application should provide a way for you to see *how many* records were altered (but not which specific ones). Further, note that the database server may skip updating a record if all of the fields you are updating already have the new value you are assigning. For example, if all of the records in the table with the field `allowCats` equal to `"N"` also have their `allowDogs` field equal to `1` then the database server may not include those rows in the total number of rows updated, since they were not actually changed.
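A concrete illustration of the "empty result set" point, using Python's `sqlite3` (any client library behaves similarly): the UPDATE returns no rows, and the cursor instead reports how many rows were changed.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE household (householdID INTEGER, allowsCats TEXT, allowsDogs INTEGER)")
conn.executemany("INSERT INTO household VALUES (?, ?, ?)",
                 [(1, "N", 0), (2, "Y", 0), (3, "N", 0)])

cur = conn.execute("UPDATE household SET allowsDogs = 1 WHERE allowsCats = 'N'")
# An UPDATE produces no result rows; the client reports a change count instead.
print(cur.rowcount)   # 2
print(cur.fetchall()) # [] -- the "empty result set"
```

So an empty result after an UPDATE is normal; check the affected-row count (here `cur.rowcount`) rather than the result set to see whether it worked.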
Presumably, you mean one of the following: ``` UPDATE household SET allowsDogs = 1 WHERE allowsCats = 0; ``` or ``` UPDATE household SET allowsDogs = 'Y' WHERE allowsCats = 'N'; ``` Mixing numbers and characters for flags is like, well, mixing cats and dogs.
How to update a MySql field based upon value of another field in the same record
[ "", "mysql", "sql", "" ]
I have a table with companies and one with categories. I'm using SQL Server Free Text search, and searching companies (by name and description) works fine. But now I also want to include the category table. I want to search for something like: `ABC 24 Supermarket`. Now, `ABC 24` should make a match with the `Name` column in the `company` table, and `Supermarket` is the name of the `category` this company is connected to. Right now I have something like this: ``` DECLARE @SearchString VARCHAR(100) = '"ABC 24 Supermarket"' SELECT * FROM Company CO INNER JOIN Category CA ON CA.CategoryId = CO.CategoryId WHERE CONTAINS((CO.[Description], CO.[Name]), @SearchString) AND CONTAINS(CA.[Description], @SearchString) ``` But this of course, gives me nothing, because my search string cannot be found in either the company or the category table. Does anyone have an idea on how to do a combined search on my company and category table? The idea of splitting strings, as suggested in Lobo's answer below, is not really an option. Because i don't know which part will be the one that should match a category and which part should be used for matching company names/descriptions. Users might just as well type in "Supermarket ABC 24".
Imho the right way to do this is to create an indexed view containing the primary key of the main table ('company' in your example) and a second column containing all the stuff you actually want to search, i.e. ``` create View View_FreeTextHelper with schemabinding as select CO.PrimaryKey, -- or whatever your PK is named CO.description +' '+CA.description +' '+CO.whatever as Searchtext from dbo.company CO join dbo.category CA on CA.CategoryId = CO.CategoryId ``` Note the two-part form of your tables. A few restrictions arise from this, e.g. all involved tables must be in the same table space and, as far as I remember, no TEXT columns are allowed in this kind of concatenation (you may cast them though). Now create a unique index on the `PrimaryKey` column ``` create unique clustered index [View_Index] on View_FreeTextHelper (PrimaryKey ASC) ``` Finally create the fulltext index on the view using the 'Searchtext' column as the only column to index. Of course, you may add more columns if you e.g. wish to distinguish in searching for company name, location, and the names of the managers (you would just concatenate them in a second column). Retrieving your data is now easy: ``` select tbl.RANK, co.* from freetextTable(View_FreeTextHelper,Searchtext,'Your searchtext here') tbl join company co on tbl.key=co.PrimaryKey order by tbl.RANK desc ``` You may also limit the output using `select top 50` as the `freetexttable` clause will eventually return quite a lot of close and not so close results. And finally, don't get confused if you cannot find things like 'off the shelf inc.' Beware of the stop lists. These are lists of words which are very common, have no semantic use (like `the`) and are therefore removed from the text to be indexed. In order to include them, you have to switch off the stoplist feature. A last tip: full text is very powerful but has a lot of features, tricks and caveats. 
It takes quite a bit to fully understand the techniques and get the best results you want. Have fun.
If we assume that the name of columns are unique per row, then you can use below query. The following example returns all rows that contain either the phrase "ABC", "24" or "Supermarket" in each of the columns ``` DECLARE @SearchString nvarchar(100) = N'ABC 24 Supermarket' SET @SearchString = REPLACE(LTRIM(RTRIM(@SearchString)), ' ', '|') SELECT * FROM Company CO JOIN Category CA ON CA.CategoryId = CO.CategoryId WHERE CONTAINS(CO.[Name], @SearchString) AND CONTAINS(CO.[Description], @SearchString) AND CONTAINS(CA.[Description], @SearchString) ``` First of all you need to prepare a search value for the CONTAINS predicate used in the WHERE clause. In this case I replaced spaces between the words on the "|" logic operator(the bar symbol (|) may be used instead of the OR keyword to represent the OR operator.)
SQL Server Free Text search: search words from a phrase in two tables
[ "", "sql", "sql-server", "contains", "freetext", "" ]
I have two tables `users` and `distance`. In a page I need to list all users with a simple query such as `select * from users where active=1 order by id desc`. Sometimes I need to output data from the `distance` table along with this query where the user ID field in `users` is matched in the `distance` table in EITHER of two columns, say `userID_1` and `userID_2`. Also in the `distance` table either of the two mentioned columns must also match a specified id (`$userID`) as well in the where clause. This is the best that I came up with: ``` select a.*, b.distance from users a, distance b where ((b.userID_1='$userID' and a.id=b.userID_2) or (a.id=b.userID_1 and b.userID_2='$userID')) and a.active=1 order by a.id desc ``` The only problem with this query is that if there is no entry in the `distance` table for the where clause to find a match, the query does not return anything at all. I still want it to return the row from the `user` table and return `distance` as null if there are no matches. I cannot figure out if I need to use a JOIN, UNION, SUBQUERY or anything else for this situation. Thanks.
You need a left join between 'users' and 'distance'. As a result (pun not intended), you will always get the rows from the 'users' table along with any matching rows (if any) from 'distance'. I notice that you are using the SQL-89 join syntax ("implicit joins") as opposed to SQL-92 join syntax ("explicit joins"). I wrote about this [once](http://www.nbnewman.blogspot.co.il/2011/08/implicit-vs-explicit-joins-sql.html). I suggest that you change your query to ``` select a.*, b.distance from users a left join distance b on ((b.userID_1='$userID' and a.id=b.userID_2) or (a.id=b.userID_1 and b.userID_2='$userID')) where a.active=1 order by a.id desc ```
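The difference a left join makes is easy to demonstrate; a minimal sketch with Python's `sqlite3` (toy data, parameterized as recommended) showing users without a distance row coming back with a NULL distance instead of vanishing:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER, active INTEGER);
CREATE TABLE distance (userID_1 INTEGER, userID_2 INTEGER, distance REAL);
INSERT INTO users VALUES (1,1),(2,1),(3,1);
INSERT INTO distance VALUES (1,2,5.0);  -- only users 1 and 2 have a distance
""")

rows = conn.execute("""
SELECT a.id, b.distance
FROM users a
LEFT JOIN distance b
  ON (b.userID_1 = ? AND a.id = b.userID_2)
  OR (b.userID_2 = ? AND a.id = b.userID_1)
WHERE a.active = 1
ORDER BY a.id
""", (1, 1)).fetchall()
print(rows)  # [(1, None), (2, 5.0), (3, None)]
```

With an inner join, users 1 and 3 would be dropped entirely; the left join keeps every active user and fills `distance` with NULL when no match exists.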
Use a left join ``` select a.*, b.distance from users a left join distance b on (b.userID_1=? and a.id=b.userID_2) or (b.userID_2=? and a.id=b.userID_1) where a.active=1 order by a.id desc ``` and use a prepared statement. Substituting text into a query is vulnerable to SQL Injection attacks.
Not sure how to approach this with a mySQL query for two tables
[ "", "mysql", "sql", "" ]
I have two tables: one I use to store `Album_Name` and the other I use to store `Album_Photos`. I want to write a query that will get me the following details ``` Album_Name Album_ID Album_Date ImageSmall Album One 1 2013-08-02 100.jpg Album Two 2 2013-09-09 55.jpg ``` I want album details from the `Album_Name` table and the first image, which I want to assign to the album, from the `Album_Photos` table. I tried JOINs, which didn't work, so I then created a view with the following SQL, but this doesn't work either ``` SELECT a.Album_Name AS Album_Name , a.Album_Date AS Album_Date , a.Page_ID AS PageID , p.Image_ID AS Image_ID , p.Image_Small AS Image_Small FROM Album_Name a LEFT OUTER JOIN Album_Photos p ON a.Album_ID = p.Album_ID ``` I tried `DISTINCT Album_Name` with the view; it gets me the same rows as the above statement ``` SELECT DISTINCT [Album_Name], Album_Date, Page_ID, Image_Small FROM vw_AlbumName_AlbumPhotos WHERE Page_ID = 3 ``` Sample data: `Album_Name` & `Album_Photos` tables ``` Album_ID Album_Name Album_Date Page_ID 1 Album One 2013-08-02 3 2 Album Two 2013-09-09 3 3 Album Three 2013-09-10 9 Image_ID Page_ID Album_ID ImageSmall 1 0 1 100.jpg 2 0 1 21.jpg 3 0 1 36.jpg 4 0 1 44.jpg 5 0 2 55.jpg 6 0 2 66.jpg 7 0 3 10.jpg ``` Any help is appreciated.
You are getting duplicate because there are multiple photos per album. To get one, use `row_number()`: ``` SELECT Album_Name AS Album_Name, a.Album_Date AS Album_Date, a.Page_ID AS PageID, p.Image_ID AS Image_ID, p.Image_Small AS Image_Small FROM Album_Name a left outer JOIN (select p.*, row_number() over (partition by Album_Id order by Image_ID) as seqnum from Album_Photos p ) p ON a.Album_ID = p.Album_ID and seqnum = 1; ```
I assume your Album table is called `Album` not `Album_Name` ``` SELECT a.Album_Name AS Album_Name , a.Album_Date AS Album_Date , a.Page_ID AS PageID , p.Image_ID AS Image_ID , p.Image_Small AS Image_Small FROM Album a left outer JOIN Album_Photos p ON a.Album_ID = p.Album_ID WHERE p.Image_ID = ( SELECT MIN(Image_ID) FROM Album_Photos WHERE Album_Photos.Album_ID = Album.Album_ID ) ```
Get Distinct rows from a result of JOIN in SQL Server
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have read and retried rebuilding the below in many ways, but to keep it clear I will show my last attempt. Aim - To get the Max value of the column "UniqueID". The column 'uniqueID' is set as a bigint. I assume the error is in the line with AddWithValue, as I get "int" as the return value. If I run the query `SELECT MAX(UniqueID) FROM tblResults` in SQL it works. Code ``` Dim connection5 As SqlConnection Dim command5 As New SqlCommand Dim ds5 As New DataSet Dim ConnectionString5 As String = System.Configuration.ConfigurationManager.ConnectionStrings("mySQLConnectionString").ToString() connection5 = New SqlConnection(ConnectionString5) connection5.Open() command5.Connection = connection5 command5.Parameters.Clear() command5.CommandText = "spUniqueUserID" command5.Parameters.AddWithValue("@UniqueID", SqlDbType.BigInt) command5.Parameters("@UniqueID").Direction = ParameterDirection.Output command5.CommandType = CommandType.StoredProcedure command5.ExecuteNonQuery() Session.Item("UniqueID") = command5.Parameters("@UniqueID").Value connection5.Close() Dim vShow As String vShow = "" vShow = Session.Item("UniqueID").ToString ``` SP ``` USE [DB] GO /****** Object: StoredProcedure [dbo].[spUniqueUserID] Script Date: 09/10/2013 08:51:57 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[spUniqueUserID] @UniqueID bigint OUTPUT AS BEGIN select @UniqueID = (SELECT MAX(UniqueID) FROM tblResults ) Return @UniqueID END ```
I personally wouldn't mess around with the output parameter. Simply use ``` ALTER PROCEDURE [dbo].[spUniqueUserID] AS BEGIN SELECT MAX(UniqueID) FROM tblResults END ``` in your proc, and ``` Dim sqlConnection1 As New SqlConnection("Your Connection String") Dim cmd As New SqlCommand Dim returnValue As Object cmd.CommandText = "spUniqueUserID" cmd.CommandType = CommandType.StoredProcedure cmd.Connection = sqlConnection1 sqlConnection1.Open() returnValue = cmd.ExecuteScalar() sqlConnection1.Close() ``` In your code. (best with using statements for your connection and command, here missing, for brevity)
Try with ``` command5.Parameters.AddWithValue("@UniqueID", 0L) ``` The AddWithValue determines the DataType of the parameter to pass to the underlying engine looking at the datatype of the second parameter. You pass a string and this is certainly wrong. In alternative you could define explicitly the parameter ``` Dim parameter = new SqlParameter("@UniqueID", SqlDbType.BigInt) parameter.Direction = ParameterDirection.Output parameter.Size = 8 parameter.Value = 0 command5.Parameters.Add(parameter) ``` And, **as last but fundamental steps**, don't forget to specify that this command executes a storedprocedure and then execute the command ``` command5.CommandType = CommandType.StoredProcedure command5.ExecuteNonQuery() ``` As an alterative, but this requires a change to your stored procedure, is to use ExecuteScalar. This method should be used when you need a single result from your code. ``` ALTER PROCEDURE [dbo].[spUniqueUserID] AS BEGIN select SELECT MAX(UniqueID) FROM tblResults END ``` And in your code ``` Using connection5 = New SqlConnection(ConnectionString5) Using command5 As New SqlCommand("spUniqueUserID", connection5) connection5.Open() command5.CommandType = CommandType.StoredProcedure Dim result = command5.ExecuteScalar() ..... End Using End Using ``` But at this point the usefulness of the storedprocedure is really minimal and you could code directly the sql in the constructor of the SqlCommand and remove the CommandType setting ``` Using connection5 = New SqlConnection(ConnectionString5) Using command5 As New SqlCommand("SELECT MAX(UniqueID) FROM tblResults", connection5) connection5.Open() Dim result = command5.ExecuteScalar() ..... End Using End Using ```
Returning Max with a stored procedure
[ "", "sql", "sql-server", "vb.net", "" ]
What query will count the number of rows, but distinct by three parameters? Example: ``` Id Name Address ============================== 1 MyName MyAddress 2 MySecondName Address2 ``` Something like: ``` select count(distinct id,name,address) from mytable ```
To get a count of the number of unique combinations of `id`, `name` and `address`: ``` SELECT Count(*) FROM ( SELECT DISTINCT id , name , address FROM your_table ) As distinctified ```
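For illustration, the derived-table count can be verified with Python's built-in `sqlite3`; the same SQL runs unchanged (duplicate id/name/address combinations collapse to one before counting):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (id INTEGER, name TEXT, address TEXT)")
conn.executemany("INSERT INTO mytable VALUES (?,?,?)",
                 [(1, "MyName", "MyAddress"),
                  (2, "MySecondName", "Address2"),
                  (1, "MyName", "MyAddress")])  # duplicate combination

n = conn.execute("""
    SELECT Count(*) FROM (
        SELECT DISTINCT id, name, address FROM mytable
    ) AS distinctified
""").fetchone()[0]
print(n)  # 2 -- the duplicate row is counted only once
```

The inner DISTINCT does the de-duplication across all three columns at once, and the outer COUNT(*) returns a single number, which is what a GROUP BY alone would not give you.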
Get all distinct `id`, `name` and `address` columns and count the resulting rows. ``` SELECT COUNT(*) FROM mytable GROUP BY id, name, address ```
Count distinct value pairs in multiple columns in SQL
[ "", "sql", "count", "distinct", "" ]
I have the following query ``` with CTE as ( select Barkod, sum(kolicina) as Kolicina from stocks where Barkod = '555' group by Barkod ) select s.Barkod, s.Kategorija, s.Artikal, s.Opis, s.Kolicina, s.N_cena, s.N_Iznos, s.P_cena, s.P_Iznos, s.datum, s.Golemina from Stocks as s join CTE as b on b.Barkod = s.Barkod ``` The results from this query are ``` 555 КОШУЛА QWRSF QWRSF 10 10.00 NULL 20.00 NULL NULL NULL 555 КОШУЛА QWRSF QWRSF 1 10.00 NULL 20.00 NULL NULL NULL ``` I need to get the following result ``` 555 КОШУЛА QWRSF QWRSF 11 10.00 NULL 20.00 NULL NULL NULL ``` So I need to sum up the Kolicina field and get all in one row.
You're very close...just one last grouping. ``` select s.Barkod, s.Kategorija, s.Artikal, s.Opis, sum(s.Kolicina), s.N_cena, s.N_Iznos, s.P_cena, s.P_Iznos, s.datum, s.Golemina from Stocks as s group by s.Barkod, s.Kategorija, s.Artikal, s.Opis, s.N_cena, s.N_Iznos, s.P_cena, s.P_Iznos, s.datum, s.Golemina ``` Any other lines you may want summed can be moved out of the group by and have the sum() put onto it in the select line.
You need to find some way to aggregate the other columns. e.g. ``` select Barkod, sum(kolicina) as Kolicina, Max(Kategorija) as Kategorija, Max(Artikal) as Artikal, Max(Opis) as Opis, Max(N_cena) as N_cena, Max(N_Iznos) as N_Iznos, Max(P_cena) as P_cena, Max(P_Iznos) as P_Iznos, Max(datum) as datum, Max(Golemina) as Golemina from stocks where Barkod = '555' group by Barkod ``` This pattern would tend to suggest your data isn't properly normalized, though. Alternatively, if the non-summed columns always have identical values for the Barkod, you can just do: ``` select Barkod, sum(kolicina) as Kolicina, Kategorija, Artikal, Opis, N_cena, N_Iznos, P_cena, P_Iznos, datum, Golemina from stocks where Barkod = '555' group by Barkod, Kategorija, Artikal, Opis, N_cena, N_Iznos, P_cena, P_Iznos, datum, Golemina ```
Getting correct result with Group By query in SQL Server
[ "", "sql", "sql-server", "" ]
I have 4 sql scripts that I want to run in a DACPAC in PostDeployment, but when I try to build the VS project for 3 of them I get this error: > Only one statement is allowed per batch. A batch separator, such as 'GO', might be required between statements. The scripts contain only `INSERT` statements in different tables on the DB. And all of them are structured like so > IF NOT EXISTS (SELECT 1 FROM dbo.Criteria WHERE Name = 'Mileage') INSERT INTO dbo.Criteria(Name) VALUES ('Mileage'); > only on different tables and with different data. My question is why is VS complaining about 3 of them when all the scripts are the same in terms of syntax and operations? PS: Adding 'GO' between statements as the error suggests doesn't do anything.
I have found the problem. When I added the file in VS I forgot to set `Build Action = None` from the file properties. So changing that fixed the problem and the project now compiles.
Although this seems to be pretty old, I was stuck on it for some hours as well, and I think this could be helpful for many. In a `Database project`, files set as `Build` are considered part of the database structure, so only one statement is allowed in such a file by design. Neither `Go` nor any other batch terminator will change that behavior; that error message is just misleading. More info [here.](https://social.msdn.microsoft.com/Forums/sqlserver/en-US/47d4124b-82cd-48f9-8ba7-c2ae4c73cbcf/sql71006-why-am-i-getting-this-error?forum=ssdt) There are lots of other build options for files in such a project. For your case, `PostDeploy` seems to fit. In such a file you can have various commands like `inserts` etc. Then you can use the output of the Database project as a dacpac file for Data-Tier DB applications (otherwise it's not included).
Error in SQL script: Only one statement is allowed per batch
[ "", "sql", "sql-server", "visual-studio-2012", "dacpac", "" ]
I have a data column which is in a datetime format. The date is shown like this: ``` 2013-09-07 00:00:00.000; 2012-12-09 00:00:00.000; 2013-08-19 00:00:00.000; ``` I want the date to be like this ``` 26-JUN-13; 08-FEB-12; 28-NOV-12; ``` I used following query: ``` select [003 AccptReg].[dbo].[SysDB].[statCngDate_sys] from [003 AccptReg].[dbo].[SysDB] where [statCngDate_sys]= convert(datetime,left([statCngDate_sys],10),103) ``` But it shows following error: > The conversion of a varchar data type to a datetime data type resulted > in an out-of-range value. What am I doing wrong? Also do I have to keep the date column in datetime format rather than in varchar format to search a range of dates?
TRY THIS ``` SELECT CONVERT(varchar(50),(CONVERT(DATETIME,'2013-09-07 00:00:00.000')),106) ``` You don't need the inner conversion to datetime if your column is already stored as datetime.
Use this query : ``` SELECT UPPER(REPLACE(CONVERT(CHAR(9), GETDATE(), 6), ' ', '-')) ``` This should solve your issue.
Convert varchar to date format with a twist
[ "", "sql", "sql-server", "type-conversion", "" ]
I have a single table which I need to pull back the 3 most recent records based on a userID and keying off of documentID (no duplicates). Basically, I'm tracking visited pages and trying to pull back the 3 most recent by user. Sample data: ``` ╔══════════════════════════════════════════════╗ ║UserID DocumentID CreatedDate ║ ╠══════════════════════════════════════════════╣ ║ 71 22 2013-09-09 12:19:37.930 ║ ║ 71 25 2013-09-09 12:20:37.930 ║ ║ 72 1 2012-11-09 12:19:37.930 ║ ║ 99 76 2012-10-10 12:19:37.930 ║ ║ 71 22 2013-09-09 12:19:37.930 ║ ╚══════════════════════════════════════════════╝ ``` Desired query results if UserID = 71: ``` ╔══════════════════════════════════════════════╗ ║UserID DocumentID CreatedDate ║ ╠══════════════════════════════════════════════╣ ║ 71 25 2013-09-09 12:20:37.930 ║ ║ 71 22 2013-09-09 12:19:37.930 ║ ╚══════════════════════════════════════════════╝ ```
``` SELECT TOP 3 UserId, DocumentId, MAX(CreatedDate) FROM MyTable WHERE UserId = 71 GROUP BY UserId, DocumentId ORDER BY MAX(CreatedDate) DESC ```
You could try using a [CTE](http://technet.microsoft.com/en-us/library/ms190766%28v=sql.105%29.aspx) and [ROW_NUMBER](http://technet.microsoft.com/en-us/library/ms186734.aspx). Something like ``` ;WITH Vals AS ( SELECT UserID, DocumentID, ROW_NUMBER() OVER(PARTITION BY UserID, DocumentID ORDER BY CreatedDate DESC) RowID FROM MyTable ) SELECT TOP 3 * FROM Vals WHERE UserID = 71 AND RowID = 1 ```
SQL Server Group By Order By Where
[ "sql", "sql-server", "sql-server-2008", "group-by", "sql-order-by" ]
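The accepted `GROUP BY`/`MAX` approach is portable across engines; here is a minimal sketch reproducing it against the sample data with Python's built-in sqlite3 (the table name `visits` is invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE visits (UserID INT, DocumentID INT, CreatedDate TEXT)")
conn.executemany(
    "INSERT INTO visits VALUES (?, ?, ?)",
    [
        (71, 22, "2013-09-09 12:19:37.930"),
        (71, 25, "2013-09-09 12:20:37.930"),
        (72, 1,  "2012-11-09 12:19:37.930"),
        (99, 76, "2012-10-10 12:19:37.930"),
        (71, 22, "2013-09-09 12:19:37.930"),  # duplicate visit
    ],
)
# One row per distinct document, most recent visit first.
rows = conn.execute(
    """
    SELECT UserID, DocumentID, MAX(CreatedDate)
    FROM visits
    WHERE UserID = 71
    GROUP BY UserID, DocumentID
    ORDER BY MAX(CreatedDate) DESC
    LIMIT 3
    """
).fetchall()
print(rows)
```

On engines without window functions this `GROUP BY`/`MAX` form is the simplest way to dedupe repeat visits while keeping the latest timestamp.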
Why do I get duplication of `Jim` in the results of the following? ``` CREATE TABLE #B (Name VARCHAR(10), Age INT) INSERT INTO #B values ('Jim', 21), ('Jim', 21), ('Jim', 19), ('Jim', 20), ('Nick', 20), ('Nick', 2), ('Nick', 20); SELECT DISTINCT Name, Age FROM #B A WHERE EXISTS ( SELECT 1 FROM #B B WHERE A.Age > B.Age AND A.NAME = B.NAME ) ```
As the SQL Fiddle shows ([here](http://www.sqlfiddle.com/#!6/ee7c6/1)), your query is not doing what you think it is. Instead of getting the `max()`, it is fetching everything except the minimum. This version would get the `max()`: ``` SELECT DISTINCT Name, Age FROM B A WHERE NOT EXISTS ( SELECT 1 FROM B B WHERE A.Age < B.Age AND A.NAME = B.NAME ); ```
Because two different rows for `Jim` satisfy the condition ``` A.Age > B.Age and a.name = b.name ``` namely ``` Jim 20 > Jim 19 Jim 21 > Jim 19 Jim 21 > Jim 20 ``` With the distinct, you get `Jim 20` and `Jim 21` (the duplicate `Jim 21` rows are merged into one). **Solution without EXISTS** ``` select Name, Max(Age) FROM #B GROUP BY Name; ```
Using EXISTS to find rows with Maximum value
[ "sql", "sql-server" ]
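A runnable sketch of the corrected `NOT EXISTS` pattern, using Python's sqlite3 with a regular table standing in for the temp table `#B`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE b (name TEXT, age INT)")
conn.executemany("INSERT INTO b VALUES (?, ?)",
                 [("Jim", 21), ("Jim", 21), ("Jim", 19), ("Jim", 20),
                  ("Nick", 20), ("Nick", 2), ("Nick", 20)])

# NOT EXISTS keeps a row only when no same-name row has a greater age,
# i.e. exactly the per-name maximum; DISTINCT collapses duplicates.
rows = conn.execute(
    """
    SELECT DISTINCT name, age FROM b a
    WHERE NOT EXISTS (SELECT 1 FROM b x WHERE a.age < x.age AND a.name = x.name)
    ORDER BY name
    """
).fetchall()
print(rows)
```

The original `EXISTS` form keeps every row for which *some* smaller row exists, which is why it returns everything except the minimum.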
Not sure what is going on here and why this is not working. I'm receiving the following error: "All expressions in a derived table must have an explicit name" - working with teradata. ``` select clm.c_clm ,clm.c_loc from (select * from pearl_p.TLTC900_CLM clm) as cl left join (select max(av.d_usr_udt_lst) from pearl_p.TLTC913_AVY av group by 1) as avy on cl.i_sys_clm = avy.i_sys_clm ```
Your max(av.d\_usr\_udt\_lst) in your subquery doesn't have an explicit name. You need to alias it like this: ``` max(av.d_usr_udt_lst) as "MaxThing" ``` So the query looks like ``` select clm.c_clm ,clm.c_loc from (select * from pearl_p.TLTC900_CLM clm) as cl left join (select max(av.d_usr_udt_lst) as "MaxThing" from pearl_p.TLTC913_AVY av group by 1) as avy on cl.i_sys_clm = avy.i_sys_clm ```
Apart from that error, you have another error in your join: ``` select clm.c_clm, clm.c_loc from (select * from pearl_p.TLTC900_CLM clm ) cl left join (select max(av.d_usr_udt_lst) from pearl_p.TLTC913_AVY av group by 1 ) as avy on cl.i_sys_clm = avy.i_sys_clm --------------------------^ This variable is not defined. ``` I think you might want something like: ``` select clm.c_clm, clm.c_loc from (select * from pearl_p.TLTC900_CLM clm ) cl left join (select i_sys_clm, max(av.d_usr_udt_lst) as maxdate from pearl_p.TLTC913_AVY av group by i_sys_clm ) avy on cl.i_sys_clm = avy.i_sys_clm and cl.<date column goes here> = avy.maxdate ```
Issue with subquery - All expressions must have explicit name
[ "sql", "teradata" ]
I have a table like this: **Email\_tbl**: ``` plcid Ecode ----------- ----------- 23 001646 24 001646 25 E004 25 2274 25 2208 25 1868 ``` I have another table `Employee`: ``` Ecode Ename E004 jaseem 2274 jasir 2208 deepu 1868 rupa 001646 shafeer ``` I want to get `Ename` of `plcid=25` so I use this query here: ``` SELECT em.Ename FROM dbo.Email_tbl e JOIN dbo.EmployeeMaster_tbl em ON em.Ecode IN (SELECT Ecode FROM Email_tbl WHERE plcid = 25) ``` but my result is coming wrong. What is wrong with my query?
``` SELECT em.Ename FROM dbo.Email_tbl e JOIN dbo.EmployeeMaster_tbl em ON em.Ecode = e.ECode WHERE e.plcid = 25 ```
Why are you even joining the tables? And why are there 3 answers 'fixing' an unnecessary join? ``` SELECT em.Ename FROM dbo.EmployeeMaster_tbl em WHERE em.Ecode IN (SELECT Ecode FROM Email_tbl WHERE plcid = 25) ```
When joining tables duplicate data is being returned from SQL Server
[ "sql", "sql-server" ]
I want to apply a conditional WHERE clause. That is, if my `barcode` parameter comes in null, I want to fetch all the records, and if it comes with a value, I want to fetch only the matching records. For the second part I am able to fetch the matching records, but I am stuck at fetching all records in the case of a null value. I have tried the following: ``` SELECT item FROM tempTable WHERE ((ISNULL(@barcode,0)=1) -- but this is not fetching all the records if barcode is null OR (ISNULL(@barcode,0)!=1 AND tempTable.barcode LIKE @barcode+'%')) -- this is working perfectly ``` Any help would be great.
I might have misunderstood what you ask, but the logical [OR](http://technet.microsoft.com/en-us/library/ms188361.aspx) operator might help: ``` SELECT item FROM tempTable WHERE @barcode IS NULL OR tempTable.barcode LIKE @barcode+'%' ``` If `@barcode` is `NULL`, it returns all the records, and when it is not `NULL`, it returns all of the records that fulfill the condition `LIKE @barcode+'%'` **Important** Also, bear in mind that using the [OR](http://technet.microsoft.com/en-us/library/ms188361.aspx) operator can seemingly cause funny results when used with several complex conditions AND-ed together, and not enclosed properly in parentheses: ``` <A> AND <B> AND <C> OR <D> AND <E> AND <F> ``` Should most likely actually be formulated as: ``` (<A> AND <B> AND <C>) OR (<D> AND <E> AND <F>) ``` Remember, the parser does not know what you want to achieve, you have to describe your intent properly...
I think you could simplify it to: ``` SELECT item FROM tempTable WHERE @barcode IS NULL OR tempTable.barcode LIKE @barcode+'%' ``` so when @barcode is null you'll get everything - i.e. the Like part of the where won't need to execute. If @barcode has a value then the Like will be executed.
Conditional where clause?
[ "sql", "sql-server", "sql-server-2008", "where-clause" ]
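The `@barcode IS NULL OR barcode LIKE @barcode + '%'` pattern ports directly to other engines. A minimal sqlite3 sketch (SQLite concatenates with `||` instead of `+`; the table contents are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tempTable (item TEXT, barcode TEXT)")
conn.executemany("INSERT INTO tempTable VALUES (?, ?)",
                 [("pen", "1001"), ("pad", "1002"), ("cup", "2001")])

def find_items(barcode):
    # A NULL parameter makes the first condition true for every row,
    # so the query degrades gracefully to "fetch everything".
    return [r[0] for r in conn.execute(
        "SELECT item FROM tempTable WHERE ? IS NULL OR barcode LIKE ? || '%'",
        (barcode, barcode)).fetchall()]

print(find_items(None))   # all rows
print(find_items("100"))  # prefix match only
```

One query handles both cases, which also keeps the plan cache happy compared to building two different SQL strings.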
I have a table (change) against which I am trying to create an SQL report on the detail field value. The issue is that the detail value contains a "phrase" and I need to match on this phrase, or a part of it. SQL ``` SELECT * FROM change WHERE change.detail LIKE '%To: [Step Two]%' ``` I want it to display all of the rows where detail contains "To: [1. Step Two]", but the query consistently returns nothing, even though matching rows exist in the table. The following is an example of the full value of the detail field: "[Step] Changed From: [1. Step One] To: [1. Step Two]" The items in [] represent other values in the database as well
Presuming you're using SQL Server/TSQL, the problem you have is that [the square bracket characters have a special meaning:](http://technet.microsoft.com/en-us/library/ms179859.aspx) > [ ] Any single character within the specified range ([a-f]) or set > ([abcdef]). > > WHERE au\_lname LIKE '[C-P]arsen' finds author last names ending with > arsen and starting with any single character between C and P, for > example Carsen, Larsen, Karsen, and so on. In range searches, the > characters included in the range may vary depending on the sorting > rules of the collation. In order to literally match the square bracket characters, you need to [escape them:](http://web.archive.org/web/20150519072547/http://sqlserver2000.databases.aspfaq.com:80/how-do-i-search-for-special-characters-e-g-in-sql-server.html) ``` LIKE '%To: \[Step Two\]%' ESCAPE '\' ```
If this is Microsoft SQL Server then you can escape brackets: ``` SELECT * FROM change WHERE change.detail LIKE '%To: \[Step Two\]%' ESCAPE '\' ``` Brackets used `LIKE` clause stand for a character range, So `[Step Two]` matches a character after `To:` that is either S,t,E,p, ,w, or o. You can read about this on [MSDN](http://technet.microsoft.com/en-us/library/ms179859.aspx).
SQL LIKE for partial sentence value
[ "sql", "sql-like", "phrase" ]
I'm having trouble combining concatenation with ORDER BY in PostgreSQL (9.1.9). Let's say I have a table borders with 3 fields: ``` Table "borders" Column | Type | Modifiers ---------------+----------------------+----------- country1 | character varying(4) | not null country2 | character varying(4) | not null length | numeric | ``` The first two fields are codes of the countries and the third one is the length of the border between those countries. The primary key is defined on the first two fields. I need to compose a select of a column that would have unique values for the whole table; in addition, this column should be selected in decreasing order. For this I concatenate the key fields with a separator character, otherwise two different rows might give the same result, like (AB, C and A, BC). So I run the following query: ``` select country1||'_'||country2 from borders order by 1; ``` However in the result I see that the '_' character is omitted from the sorting. The result looks like this: ``` ?column? ---------- A_CH A_CZ A_D AFG_IR AFG_PK AFG_TAD AFG_TJ AFG_TM AFG_UZB A_FL A_H A_I . . ``` You can see that the result is sorted as if '_' doesn't exist in the strings. If I use a letter (say 'x') as a separator, the order is correct. But I must use some special character that doesn't appear in the country1 and country2 fields, to avoid collisions. What should I do to make the '_' character be taken into account during the sorting? --- ### EDIT It turned out that the concatenation has nothing to do with the problem. The problem is that the ORDER BY simply ignores the '_' character.
``` select country1 || '_' || country2 collate "C" as a from borders order by 1 ``` **`sql fiddle demo`** Notes according to discussion in comments: 1.) `COLLATE "C"` applies in the `ORDER BY` clause as long as it references the expression in the `SELECT` clause by *positional parameter* or *alias*. If you repeat the expression in `ORDER BY` you also need to repeat the `COLLATE` clause if you want to affect the sort order accordingly. **`sql fiddle demo`** 2.) In collations where `_` does not influence the sort order, it is more efficient to use [fog's query](https://stackoverflow.com/a/18706714/939860), even more so because that one makes use of the existing index (`primary key is defined on the first two fields`). However, if `_` has an influence, one needs to sort on the combined expression: **`sql fiddle demo`** Query performance (tested in Postgres 9.2): **`sql fiddle demo`** [PostgreSQL Collation Support in the manual.](http://www.postgresql.org/docs/current/static/collation.html)
Just order by the two columns: ``` SELECT country1||'_'||country2 FROM borders ORDER BY country1, country2; ``` Unless you use aggregates or windows, PostgreSQL allows to order by columns even if you don't include them in the SELECT list. As suggested in another answer you can also change the collation of the combined column but, if you can, sorting on plain columns is faster, especially if you have an index on them.
Combining concatenation with ORDER BY
[ "sql", "postgresql", "sql-order-by", "concatenation", "collation" ]
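The difference between the two sort orders can be reproduced outside the database: Python's plain `sorted()` compares code points, much like `COLLATE "C"`, while stripping the separator before comparing mimics a collation that skips `_` (the values are taken from the question's output):

```python
# Code-point comparison: '_' (0x5F) sorts after the uppercase letters,
# so 'AFG_IR' comes before 'A_CH' and 'A_D'.
combined = ["A_D", "AFG_IR", "A_CH"]

c_order = sorted(combined)                                         # byte-wise
ignore_order = sorted(combined, key=lambda s: s.replace("_", ""))  # '_' ignored

print(c_order)       # ['AFG_IR', 'A_CH', 'A_D']
print(ignore_order)  # ['A_CH', 'A_D', 'AFG_IR']
```

The second ordering is what the question's locale-aware collation produced; `COLLATE "C"` (or sorting on the plain columns) gives the first.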
I have a table with a set of data like ``` record_date | id | category | model | name | alternate_name ------------------------------------------------------------------- 9/1/2012 | N42 | X | ABC | blah | blah 10/1/2011 | N42 | Y | No Code | #N/A | #N/A 6/1/2012 | N42 | X | DEF | xxxx | yyyy 7/1/2011 | N42 | Z | No Data | #N/A | #N/A ``` Since the dataset is not complete I want to fill the missing data (model, name, alternate\_name) with data from the most recent record containing data matching on the id field. i.e. I want it to look something like this ``` record_date | id | category | model | name | alternate_name ------------------------------------------------------------------- 9/1/2012 | N42 | X | ABC | blah | blah 10/1/2011 | N42 | Y | ABC | blah | blah 6/1/2012 | N42 | X | DEF | xxxx | yyyy 7/1/2011 | N42 | Z | ABC | blah | blah ```
Thank you for the suggestions however neither seemed to work quite right. I ended up doing this in two steps. ``` use myDB; drop table if exists tmp_myTable; create temporary table tmp_myTable like myTable; insert into tmp_myTable select * from myTable as t1 where record_date in (select max(record_Date) from myTable where id=t1.id and (model!="No Code" or model="No Data") and name!="#N/A") and (category="X" or category="Y" or category="Z") and (model!="No Code" or model="No Data") and name!="#N/A"; update myTable as t1 join tmp_myTable as t2 on t1.id=t2.id set t1.model=t2.model, t1.name=t2.name, t1.alternate_name=t2.alternate_name where (t1.category="X" or t1.category="Y" or t1.category="Z") and (t1.model="No Code" or t1.model="No Data") and t1.name="#N/A"; ```
Here is one method that uses three correlated subqueries: ``` update "table" t set model = (select model from "table" t2 where t2.id = t.id and t2.record_date < t.record_date and model <> 'No Code' order by t2.record_date limit 1), name = (select name from "table" t2 where t2.id = t.id and t2.record_date < t.record_date and name <> '#N/A' order by t2.record_date limit 1), alternate_name = (select alternate_name from "table" t2 where t2.id = t.id and t2.record_date < t.record_date and alternate_name <> '#N/A' order by t2.record_date limit 1) where model = 'No Code' and name = '#N/A' and alternate_name = '#N/A'; ``` I would recommend that you have a unique id on each row.
MySQL query to fill in a table with other data from that table
[ "mysql", "sql" ]
I am trying to pivot on multiple columns. I am using SQL server 2008. Here is what I have tried so far ``` CREATE TABLE #t ( id int, Rscd varchar(10),Accd varchar(10),position int) INSERT INTO #t Values (10,'A','B',1) INSERT INTO #t Values (10,'C','D',2) Select id,[1],[2],[11],[12] FROM (SELECT id, Rscd,Accd, position , position +10 as Aposition From #t) As query PIVOT (MAX(Rscd ) FOR Position IN ([1],[2])) AS Pivot1 PIVOT (MAX(Accd ) FOR Aposition IN ([11],[12])) AS Pivot2 ``` The below indicated is the result that I am getting ``` id 1 2 11 12 10 NULL C NULL D 10 A NULL B NULL ``` But the result that I am trying to achieve is , ``` id 1 2 11 12 10 A C B D ``` Any help ? what is wrong in my code.
I would unpivot the columns into pairs first, then pivot them. Basically the unpivot process will convert the pairs of columns (`rscd`, `position` and `accd`, `aposition`) into rows, then you can apply the pivot. The code will be: ``` select id, [1], [2], [11], [12] from ( select id, col, value from #t cross apply ( select rscd, position union all select Accd, position + 10 ) c (value, col) ) d pivot ( max(value) for col in ([1], [2], [11], [12]) ) piv; ``` See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/d41d8/20235)
``` Select id,sum([1]),sum([2]),sum([11]),sum([12]) FROM (SELECT id, Rscd,Accd, position , position +10 as Aposition From #t) As query PIVOT (MAX(Rscd ) FOR Position IN ([1],[2])) AS Pivot1 PIVOT (MAX(Accd ) FOR Aposition IN ([11],[12])) AS Pivot2 group by id ```
SQL server Pivot on Multiple Columns
[ "sql", "sql-server", "pivot" ]
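SQLite (and many other engines) have no `PIVOT` keyword, but the same result can be written portably with conditional aggregation, which is also a common alternative on SQL Server itself. A minimal sqlite3 sketch over the question's sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INT, rscd TEXT, accd TEXT, position INT)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?, ?)",
                 [(10, "A", "B", 1), (10, "C", "D", 2)])

# Conditional aggregation: one output row per id, one column per slot.
# MAX ignores the NULLs produced by non-matching CASE branches, so the
# values from both source rows collapse onto a single row.
row = conn.execute(
    """
    SELECT id,
           MAX(CASE WHEN position = 1 THEN rscd END) AS c1,
           MAX(CASE WHEN position = 2 THEN rscd END) AS c2,
           MAX(CASE WHEN position = 1 THEN accd END) AS c11,
           MAX(CASE WHEN position = 2 THEN accd END) AS c12
    FROM t
    GROUP BY id
    """
).fetchone()
print(row)  # (10, 'A', 'C', 'B', 'D')
```

This mirrors the unpivot-then-pivot answer above: everything is keyed on `position`, so both pairs land on one row instead of two NULL-padded ones.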
I get the error "Conversion failed when converting the nvarchar value '23,24,3,45,91,' to data type int." The error seems to be occurring on the ON clause. E.ID is an integer field while F.LegalIssue is a varchar field of integers separated by commas. Below is the code with that error. ``` SELECT F.[FDTitle], E.PrimaryOpID as [FD Primary OP ID], F.County as [FD County], F.Status as [FD Status], F.IssueDate as [FD Date] FROM [dbo].[tbl_FinalDetMain] F LEFT OUTER JOIN [dbo].[tbl_lk_Exemptions_FD] E ON E.ID = F.LegalIssue WHERE F.[FDNbr] = '2013-0041' ``` I have tried the code below for the ON clause, but it only returns one integer value, instead of the entire string of integers. ``` E.ID = cast(LEFT(F.LegalIssue,PATINDEX('%[^0-9]%',F.LegalIssue)-1) as int) ``` The result should include five integers delimited by commas.
If `LegalIssue` contains a string of comma-delimited numbers, then you really want an association table. Lacking that, here is a convenient (but not efficient) way to do the join: ``` SELECT F.[FDTitle], E.PrimaryOpID as [FD Primary OP ID], F.County as [FD County], F.Status as [FD Status], F.IssueDate as [FD Date] FROM [dbo].[tbl_FinalDetMain] F LEFT OUTER JOIN [dbo].[tbl_lk_Exemptions_FD] E ON ','+F.LegalIssue+',' like '%,'+cast(E.ID as varchar(255))+',%' WHERE F.[FDNbr] = '2013-0041'; ``` This wraps the list in leading and trailing commas to avoid false matches, such as finding "10" in "1,100,1000".
Using xml datatype, you can explode your string to integers like this. Good candidate for a user defined function I would say :-) ``` declare @test varchar(max) set @test = '1,2,3,4,5' select T2.item.value('(./text())[1]','int') from (select convert(xml,'<items><t>'+replace(@test,',','</t><t>')+'</t></items>') as xmldoc) as xmltable CROSS APPLY xmltable.xmldoc.nodes('/items/t') as T2(item) ```
T-SQL How to convert comma separated string of numbers to integer
[ "sql", "string", "delimited" ]
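The comma-padding trick from the accepted answer, including the "10 inside 100/1000" pitfall it avoids, can be verified with a small sqlite3 sketch (SQLite concatenates with `||`; the two tables are reduced stand-ins for the real ones):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE f (fdnbr TEXT, legal_issue TEXT)")
conn.execute("CREATE TABLE e (id INT, name TEXT)")
conn.execute("INSERT INTO f VALUES ('2013-0041', '1,100,1000')")
conn.executemany("INSERT INTO e VALUES (?, ?)",
                 [(1, "one"), (10, "ten"), (100, "hundred"), (1000, "thousand")])

# Padding both sides with commas makes every element look like ',x,',
# so id 10 no longer false-matches inside '100' or '1000'.
rows = conn.execute(
    """
    SELECT e.id, e.name
    FROM f
    JOIN e ON ',' || f.legal_issue || ',' LIKE '%,' || CAST(e.id AS TEXT) || ',%'
    ORDER BY e.id
    """
).fetchall()
print(rows)
```

Only ids 1, 100 and 1000 match; without the padding, `'%10%'` would also pull in id 10.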
I will just go straight to the problem. This query is supposed to return a list of records where PFlag is set to 'No'. However, when I run it, I get blank results. If I remove the WHERE clause (where pFlag='No'), I get results. So, the WHERE clause is presenting a problem. Any idea what is wrong? Here is the code I am currently running. ``` SELECT DISTINCT t.username, lg.Email, lg.fullname, c.CourseName, l.location, d.trainingDates, d.trainingTime, t.processedFlag, i.instructorName FROM tblTrainings t INNER JOIN tblCourses c on t.courseId = c.courseId INNER JOIN tblLocations l on t.locationId = l.LocationId INNER JOIN tblTrainingDates d on t.dateid=d.dateid INNER JOIN tblCourseInstructor ic on c.courseId = ic.CourseId INNER JOIN tblInstructors i on ic.instructorId = i.instructorId INNER JOIN tblLogin lg on t.username = lg.username WHERE t.PFlag = 'No' ORDER BY lg.fullname DESC ``` It seems simple enough. Thanks a lot in advance ``` PFlag 0x59006500 0x59006500 0x59006500 ```
try rewriting the WHERE statement like this: ``` WHERE t.PFlag like '%No%' ``` hope it helps.
Check your table column definitions. If the type of field `pFlag` is `CHAR` with a particular length, e.g. `CHAR(4)`, then any string in that column will be padded up to 4 chars. In that case the real data stored in your table will be `No[space][space]`. If this is the case, convert your column type to `VARCHAR` and update that column with `trim()`.
Does anyone know why I am not getting any results with this query?
[ "sql", "sql-server-2008", "sql-server-2005" ]
Essentially, my problem is that I need to run a query in Oracle that unions a static list of values ('Static' meaning it's obtained from somewhere else that I cannot get from the database, but is actually an arbitrary list of values I plug into the query) with a dynamic list of values returned from a query. So, my initial query looks like: ``` select * from (select ('18776') as instanceid from dual) union (<more complex query>) ``` I think, hooray! And then try to do it with a longer list of static values. Turns out, I get 'Missing Right Parenthesis' if I try to run: ``` select ('18776','18775') as instanceid from dual ``` So, my basic issue is how can I integrate a list of static values into this union? NOTE: This is a simplified example of the problem. The actual list is generated from an API before I generate a query, and so this list of "static" values is unpredictably and arbitrarily large. I'm not dealing with just 2 static values, it is an arbitrary list.
``` select '18776' as instanceid from dual union all select '18775' as instanceid from dual ``` or ``` select column_value from table(sys.odcivarchar2list('18776', '18775')) ``` or some sort of hierarchical query that could take your comma separated-string and split it into a set of varchars. Union these to your initial query. **update**: "I'm not dealing with just 2 static values, it is an arbitrary list." Still can pass to a query as a collection (below is just one of many possible approaches) ``` 23:15:36 LKU@sandbox> ed Wrote file S:\spool\sandbox\BUFFER_LKU_39.sql 1 declare 2 cnt int := 10; 3 coll sys.odcivarchar2list := sys.odcivarchar2list(); 4 begin 5 coll.extend(cnt); 6 for i in 1 .. cnt loop 7 coll(i) := dbms_random.string('l', i); 8 end loop; 9 open :result for 'select * from table(:c)' using coll; 10* end; 23:37:03 11 / PL/SQL procedure successfully completed. Elapsed: 00:00:00.50 23:37:04 LKU@sandbox> print result COLUMN_VALUE ------------------------------------------------------------- g kd qdv soth rvwnq uyfhbq xxvxvtw eprralmd edbcajvfq ewveyljsjn 10 rows selected. Elapsed: 00:00:00.01 ```
I think you want to break it into two subqueries: ``` select * from ((select '18776' as instanceid from dual) union (select '18775' as instanceid from dual) union (<more complex query>) ) t; ``` Note that `union all` performs better than `union`. If you know there are no duplicates (or duplicates don't matter) then use `union all` instead.
Selecting static values to union into another query
[ "sql", "oracle", "oracle11g" ]
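When the static list comes from an API, the `UNION ALL` of single-value selects can also be generated programmatically while keeping the values parameterized. A sketch with Python's sqlite3 (Oracle would additionally need `FROM dual` on each branch, and the dynamic part of the query would be appended with one more `UNION ALL`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
static_ids = ["18776", "18775", "18774"]  # arbitrary-length list from elsewhere

# One parameterized SELECT per value, glued together with UNION ALL; the
# values themselves never get concatenated into the SQL text.
union_sql = " UNION ALL ".join("SELECT ? AS instanceid" for _ in static_ids)
rows = conn.execute(union_sql, static_ids).fetchall()
print(rows)
```

Binding the values instead of splicing them into the string sidesteps both quoting bugs and SQL injection, no matter how long the API's list gets.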
I am currently doing this tutorial (<http://sqlzoo.net/wiki/SELECT_within_SELECT_Tutorial>) and I can't answer question 8 : **Some countries have populations more than three times that of any of their neighbours (in the same continent). Give the countries and continents.** .. and my current query won't be accepted as the answer : ``` SELECT x.name, x.continent FROM world x WHERE (x.population * 3) > ALL ( SELECT y.population FROM world y WHERE x.continent = y.continent ) ``` What am I doing wrong ? What is the answer ?
The issue with your query is that you're not excluding the "bigger" country itself from the result in the inner query. The correct query is: ``` SELECT x.name, x.continent FROM world x WHERE x.population > ALL( SELECT (y.population*3) FROM world y WHERE x.continent=y.continent AND x.name<>y.name ) ``` Note the last condition in the inner query where I'm excluding the "x" country from the list of "y" countries by doing `x.name<>y.name`. If that is not done, no rows will be returned in the result. P.S. Usually the "exclusion" of the outer entity from the list of entities in the inner query is done by using an `id` field, but the table on `sqlzoo` does not have `id` fields.
Simple answer: ``` Select name, continent From world x Where population > all(Select max(population)*3 From world y Where x.continent = y.continent AND y.name != x.name) ```
SQLzoo, SELECT within SELECT tutorial
[ "sql" ]
I'm trying to count rows in a table `events` where a date in column `EventDate` occurs between two dates given in another table `customers`. CUSTOMERS ``` ID EventFrom EventTo -- ---------- ----------- 1 2011-01-01 2012-01-01 2 2012-12-10 2013-12-10 3 2010-05-01 2011-05-01 4 2011-01-01 2012-01-01 5 2012-07-30 2013-07-30 6 2011-06-21 2012-06-21 7 2011-06-22 2012-06-22 8 2010-02-19 2011-02-19 ``` EVENTS ``` ID EventDate -- ---------- 2 1999-01-01 2 2012-12-12 2 2012-12-13 3 1900-01-12 4 2011-02-10 4 2011-02-11 4 2011-02-12 ``` RESULT ``` ID EventFrom EventTo Events -- ---------- ----------- ------ 1 2011-01-01 2012-01-01 0 2 2012-12-10 2013-12-10 2 3 2010-05-01 2011-05-01 0 4 2011-01-01 2012-01-01 3 5 2012-07-30 2013-07-30 0 6 2011-06-21 2012-06-21 0 7 2011-06-22 2012-06-22 0 8 2010-02-19 2011-02-19 0 ``` ID 2 appears twice in `events`, but the first date does not lie between `EventTo` and `EventFrom` so it shouldn't be counted. ID 4 appears three times in `events` and all are within the correct range. I can do it, but I end up with a nested join that is very slow. ``` SELECT customers.ID , customers.EventFrom , customers.EventTo , IFNULL(e.Events, 0) AS 'Events' FROM customers LEFT JOIN ( SELECT events.ID, COUNT(events.ID) AS 'Events' FROM events INNER JOIN customers ON customers.ID = events.ID AND events.EventDate BETWEEN customers.EventFrom AND customers.EventTo GROUP BY events.ID ) e ON e.ID = customers.ID ``` I've set `EventDate` as the index in `events`. I tried setting `EventFrom` and `EventTo` as indexes too, but it didn't make a whole lot of difference. And this query is part of a larger query, so I had the indexes set up for the main part. I've also tried this: ``` SELECT customers.ID , customers.EventFrom , customers.EventTo , SUM(IF(events.EventDate BETWEEN customers.EventFrom AND customers.EventTo), 1, 0) AS 'Events' FROM customers LEFT JOIN events ON events.ID = customers.ID ``` Which is also incredibly slow. 
`customers` has about 1.5 million rows, but the query still seems to take an unreasonably long time. Is there a better way of structuring this?
[SQL Fiddle](http://sqlfiddle.com/#!2/b3819/2) **MySQL 5.5.32 Schema Setup**: ``` CREATE TABLE CUSTOMERS (`ID` varchar(2), `EventFrom` varchar(10), `EventTo` varchar(11)) ; INSERT INTO CUSTOMERS (`ID`, `EventFrom`, `EventTo`) VALUES ('1', '2011-01-01', '2012-01-01'), ('2', '2012-12-10', '2013-12-10'), ('3', '2010-05-01', '2011-05-01'), ('4', '2011-01-01', '2012-01-01'), ('5', '2012-07-30', '2013-07-30'), ('6', '2011-06-21', '2012-06-21'), ('7', '2011-06-22', '2012-06-22'), ('8', '2010-02-19', '2011-02-19') ; CREATE TABLE EVENTS (`ID` int, `EventDate` datetime) ; INSERT INTO EVENTS (`ID`, `EventDate`) VALUES (2, '1999-01-01 00:00:00'), (2, '2012-12-12 00:00:00'), (2, '2012-12-13 00:00:00'), (3, '1900-01-12 00:00:00'), (4, '2011-02-10 00:00:00'), (4, '2011-02-11 00:00:00'), (4, '2011-02-12 00:00:00') ; ``` **Query 1**: ``` SELECT c.Id, c.EventFrom, c.EventTo, COUNT(e.ID) FROM CUSTOMERS c LEFT JOIN EVENTS e ON e.ID = c.ID AND e.EventDate BETWEEN c.EventFrom AND c.EventTo GROUP BY c.Id, c.EventFrom, c.EventTo ``` **[Results](http://sqlfiddle.com/#!2/b3819/2/0)**: ``` | ID | EVENTFROM | EVENTTO | COUNT(E.ID) | |----|------------|------------|-------------| | 1 | 2011-01-01 | 2012-01-01 | 0 | | 2 | 2012-12-10 | 2013-12-10 | 2 | | 3 | 2010-05-01 | 2011-05-01 | 0 | | 4 | 2011-01-01 | 2012-01-01 | 3 | | 5 | 2012-07-30 | 2013-07-30 | 0 | | 6 | 2011-06-21 | 2012-06-21 | 0 | | 7 | 2011-06-22 | 2012-06-22 | 0 | | 8 | 2010-02-19 | 2011-02-19 | 0 | ```
User `left join`. Put the date condition in the `on` clause. Then count the matches in the table using `count(e.ID)` (which counts non-NULL values): ``` SELECT c.ID, c.EventFrom, c.EventTo, COUNT(e.ID) as "Events" FROM customers c LEFT JOIN events e ON e.ID = c.ID and e.EventDate BETWEEN c.EventFrom AND c.EventTo GROUP BY c.ID, c.EventFrom, c.EventTo; ```
MySQL count rows in one table based on dates in another table
[ "mysql", "sql" ]
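The accepted pattern (date filter in the `ON` clause, `COUNT(e.ID)` over the nullable side) can be checked with a reduced version of the fiddle data in sqlite3 (ISO-formatted date strings compare correctly with `BETWEEN`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INT, ev_from TEXT, ev_to TEXT)")
conn.execute("CREATE TABLE events (id INT, ev_date TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?, ?)",
                 [(1, "2011-01-01", "2012-01-01"),
                  (2, "2012-12-10", "2013-12-10"),
                  (4, "2011-01-01", "2012-01-01")])
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [(2, "1999-01-01"), (2, "2012-12-12"), (2, "2012-12-13"),
                  (4, "2011-02-10"), (4, "2011-02-11"), (4, "2011-02-12")])

# The date filter lives in the ON clause so customers with no in-range
# events still appear with a 0; COUNT(e.id) counts only non-NULL matches.
rows = conn.execute(
    """
    SELECT c.id, COUNT(e.id)
    FROM customers c
    LEFT JOIN events e
      ON e.id = c.id AND e.ev_date BETWEEN c.ev_from AND c.ev_to
    GROUP BY c.id
    ORDER BY c.id
    """
).fetchall()
print(rows)  # [(1, 0), (2, 2), (4, 3)]
```

Moving the `BETWEEN` into a `WHERE` clause would silently drop the zero-count customers, which is exactly the trap the answers above avoid.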
I have an SQL query that is meant to select a list of things from different tables using a subquery. I am meant to find those things with the lowest value in a particular column. This is the query that I currently have. I know the minimum rate is 350, but I can't use it in my query. Any effort to change it to MIN(rate) has been unsuccessful. ``` SELECT DISTINCT name FROM table1 NATURAL JOIN table2 WHERE table2.code = (SELECT Code FROM rates WHERE rate = '100') ``` How do I change that subquery to find the minimum rate?
``` SELECT DISTINCT name FROM table1 NATURAL JOIN table2 WHERE table2.code = (SELECT CODE FROM RATE WHERE RATE=(SELECT MIN(RATE) FROM RATE)) ``` Considering you are expecting only one record of minimum value.
Most general way to do this would be ``` select distinct name from table1 natural join table2 where table2.code in ( select t.Code from rates as t where t.rate in (select min(r.rate) from rates as r) ) ``` if you have windowed functions, you can use [`rank()`](http://technet.microsoft.com/en-us/library/ms176102.aspx) function: ``` ... where table2.code in ( select t.Code from ( select r.Code, rank() over(order by r.rate) as rn from rates as r ) as t where t.rn = 1 ) ``` in SQL Server you can use [`top ... with ties`](http://msdn.microsoft.com/en-us/library/ms189463%28v=sql.110%29.aspx) syntax: ``` ... where table2.code in ( select top 1 with ties r.Code from rates as r order by r.rate ) ```
SQL selecting minimum value in a sub query when exact value is unknown
[ "sql", "min" ]
Data transfer from Excel to a SQL table. Before the transfer I need to validate the data (basic data validation). 1. Want to check that ColT is not blank or Null 2. Want to check that ColB is integers only 3. Want to check that ColG is alpha only. I kind of have an idea how to do this using staging tables before transferring the data into the final table, but **I'm looking for a different solution, something like using a Script Task that pops up a message as soon as one of the checks fails.** **Basically, iterate over the data in a Script Task and then do the validation in the script. Show a popup as soon as validation fails.** Let me know if that's possible. Thanks, Cindy!!
You can do two things if you don't want to use a Script Task. 1. Get everything into the database and then perform whatever checks you want. 2. Or use a Conditional Split, check these conditions, and only then insert the record. Need any more help? Shout. :)
I didn't get the complete answer but got a start.
SSIS data validation checks for data transfer
[ "sql", "ssis" ]
I have 2 tables, one that contains the final results I need, and another that contains the list of columns I need to select from based on a set level. For example: ``` table_levels level | name | [selected-columns] 1 | startdate | start_date 1 | end date | end_date 1 | contract | contract ``` So if I do the following ``` select column from table_levels where level = '1' ``` Then basically I need to use the column names from this select statement to determine which columns are selected from another statement. This is what I've tried, and of course I know it's wrong, but it gives you an idea of what I'm trying to do. ``` select (select [selected-columns] from table_levels where nwlevel = '1') from table_results ``` In a way I'm trying to dynamically build an SQL query which can be altered by whichever columns I put in the `table_levels` table. This should in theory act the same as the following SQL query ``` select start_date, end_date, contract from table_results ```
I got it to work by doing what @lobo said with a slight change. ``` DECLARE @listStr varchar(MAX); set @liststr = ( select [column] + ',' from dbo.columns where nwlevel = '1' for xml path('') ) DECLARE @query varchar(MAX); set @query = ( 'SELECT ' + LEFT(@listStr, LEN(@listStr)-1) + ' FROM staff' ) execute(@query) ```
My previous answer was for mysql. Since the tag has been updated on the question since then, here is the query for `sql-server-2008`. Build a list of columns from the values in `table_levels`, remove the last `,`, build a query string to get you the results from `table_results`, and then execute. ``` DECLARE @listStr varchar(MAX) = ( select selectColumnName + ',' from table_levels where level = 1 for xml path('')) DECLARE @query varchar(MAX) = 'SELECT ' + LEFT(@listStr, LEN(@listStr)-1) + ' FROM table_results' execute(@query) ``` [Demo for sql server](http://sqlfiddle.com/#!3/c9cde/55) --- Previous answer. Works for `mssql` See [demo for mysql](http://sqlfiddle.com/#!2/1f35a/1) Use `GROUP_CONCAT` to make a string out of the values in `table_levels` and then build a query string to get you the results from `table_results` ``` SET @listStr = ( SELECT GROUP_CONCAT(selectColumnName) FROM table_levels where level = 1); SET @query := CONCAT('SELECT ', @listStr, ' FROM table_results'); PREPARE STMT FROM @query; EXECUTE STMT; ```
Select columns from one table based on the column names from another table
[ "sql", "sql-server-2008" ]
The below is a practice exercise for myself: I've created a table with some values, as: ``` CREATE TABLE my_test AS SELECT ROWNUM ID, TRUNC(SYSDATE)+(LEVEL*5/24/60/60)date_time , 111 person_id FROM dual CONNECT BY LEVEL <= (24*60*60)/5 ORDER BY 1; ``` Now, I updated the table, setting `person_id = 222` for `date_time` between `5:30 am` and `6:30 am`: ``` UPDATE my_test SET person_id = 222 WHERE date_time >= trunc(SYSDATE)+ 5/24 + 30/24/60 AND DATE_TIME <= trunc(sysdate)+ 6/24 + 30/24/60; ``` I need to get the number of persons in each hour, so I've written a query like: ``` SELECT time_range,count(distinct person_id) PERSON_COUNT FROM ( select person_id , date_time, case TO_CHAR(date_time,'hh24') when '00' then '12 AM - 01 AM' when '01' then '01 AM - 02 AM' when '02' then '02 AM - 03 AM' when '03' then '03 AM - 04 AM' when '04' then '04 AM - 05 AM' when '05' then '05 AM - 06 AM' when '06' then '06 AM - 07 AM' when '07' then '07 AM - 08 AM' when '08' then '08 AM - 09 AM' when '09' then '09 AM - 10 AM' when '10' then '10 AM - 11 AM' when '11' then '11 AM - 12 PM' when '12' then '12 PM - 01 PM' when '13' then '01 PM - 02 PM' when '14' then '02 PM - 03 PM' when '15' then '03 PM - 04 PM' when '16' then '04 PM - 05 PM' when '17' then '05 PM - 06 PM' when '18' then '06 PM - 07 PM' when '19' then '07 PM - 08 PM' when '20' then '08 PM - 09 PM' when '21' then '09 PM - 10 PM' when '22' then '10 PM - 11 PM' WHEN '23' THEN '11 PM - 12 AM' end time_range FROM my_test WHERE date_time >= trunc(SYSDATE) AND date_time < trunc(SYSDATE)+1 ORDER BY date_time) GROUP BY time_range; ``` It works, but not with the expected ordering (ordering by time_range does not give the expected order, since it is a string in the outer query). I need to order by time_range; any simple and easy solutions are appreciated. Thanks in advance.
The `ORDER BY` clause is always the last to be interpreted in a single query, here however it is in a subquery so the main query `GROUP BY` will "overwrite" your `ORDER BY`. You could write for instance: ``` SELECT time_range, COUNT(DISTINCT person_id) PERSON_COUNT FROM (SELECT person_id, date_time, CASE TO_CHAR(date_time, 'hh24') WHEN '00' THEN '12 AM - 01 AM' /*[snip]*/ WHEN '23' THEN '11 PM - 12 AM' END time_range FROM my_test WHERE date_time >= trunc(SYSDATE) AND date_time < trunc(SYSDATE) + 1) GROUP BY time_range, TO_CHAR(date_time, 'hh24') ORDER BY TO_CHAR(date_time, 'hh24'); ``` Also I'm not a fan of your `time_range` expression. You could rewrite it simply as: ``` to_char(date_time, 'HH PM - ') || to_char(date_time + 1/24, 'HH PM') time_range ``` Edit: apparently you need the complete query: ``` SELECT time_range, COUNT(DISTINCT person_id) PERSON_COUNT FROM (SELECT person_id, date_time, to_char(date_time, 'HH PM - ') || to_char(date_time + 1/24, 'HH PM') time_range FROM my_test WHERE date_time >= trunc(SYSDATE) AND date_time < trunc(SYSDATE) + 1) GROUP BY time_range, TO_CHAR(date_time, 'hh24') ORDER BY TO_CHAR(date_time, 'hh24'); ```
First, you should have the `order by` in the *outer* query rather than the inner query. `order by` in inner queries is generally not guaranteed to work. But, even if you put: ``` order by date_time; ``` at the end, you still won't get what you want. For that, try ordering by the `date_time` itself, as in: ``` order by max(date_time); ``` Here is an example of the [Oracle documentation](http://docs.oracle.com/javadb/10.8.1.2/ref/rrefsqlj13658.html) on `order by` in subqueries: > An ORDER BY clause allows you to specify the order in which rows > appear in the result set. In subqueries, the ORDER BY clause is > meaningless unless it is accompanied by one or both of the result > offset and fetch first clauses or in conjunction with the ROW\_NUMBER > function, since there is no guarantee that the order is retained in > the outer result set. It is permissible to combine ORDER BY on the > outer query with ORDER BY in subqueries.
Oracle query for time range
[ "", "sql", "oracle", "" ]
I have to query a table with few millons of rows and I want to do it the most optimized. Lets supose that we want to controll the access to a movie theater with multiples screening rooms and save it like this: ``` AccessRecord (TicketId, TicketCreationTimestamp, TheaterId, ShowId, MovieId, SeatId, CheckInTimestamp) ``` To simplify, the 'Id' columns of the data type 'bigint' and the 'Timestamp' are 'datetime'. The tickets are sold at any time and the people access to the theater randomly. And the primary key (so also unique) is TicketId. I want to get for each Movie and Theater and Show (time) the AccessRecord info of the first and last person who accessed to the theater to see a mov. If two checkins happen at the same time, i just need 1, any of them. My solution would be to concatenate the PK and the grouped column in a subquery to get the row: ``` select AccessRecord.* from AccessRecord inner join( select MAX(CONVERT(nvarchar(25),CheckInTimestamp, 121) + CONVERT(varchar(25), TicketId)) as MaxKey, MIN(CONVERT(nvarchar(25),CheckInTimestamp, 121) + CONVERT(varchar(25), TicketId)) as MinKey from AccessRecord group by MovieId, TheaterId, ShowId ) as MaxAccess on CONVERT(nvarchar(25),CheckInTimestamp, 121) + CONVERT(varchar(25), TicketId) = MaxKey or CONVERT(nvarchar(25),CheckInTimestamp, 121) + CONVERT(varchar(25), TicketId) = MinKey ``` The conversion 121 is to the cannonical expression of datatime resluting like this: aaaa-mm-dd hh:mi:ss.mmm(24h), so ordered as string data type it will give the same result as it is ordered as a datetime. As you can see this join is not very optimized, any ideas? --- **Update with how I tested the different solutions**: I've tested all your answers in a real database with SQL Server 2008 R2 with a table over 3M rows to choose the right one. If I get only the first or the last person who accessed: * Joe Taras's solution lasts 10 secs. * GarethD's solution lasts 21 secs. 
If I run the same queries but with the result ordered by the grouping columns:

* Joe Taras's solution lasts 10 secs.
* GarethD's solution lasts 46 secs.

If I get both (the first and the last) people who accessed, with an ordered result:

* Joe Taras's (doing a union) solution lasts 19 secs.
* GarethD's solution lasts 49 secs.

The rest of the solutions (even mine) lasted more than 60 secs in the first test, so I canceled them.
Try this: ``` select a.* from AccessRecord a where not exists( select 'next' from AccessRecord a2 where a2.movieid = a.movieid and a2.theaterid = a.theaterid and a2.showid = a.showid and a2.checkintimestamp > a.checkintimestamp ) ``` In this way you pick the last row as timestamp for the same movie, teather, show. Ticket (I suppose) is different for each row
Using analytical functions may speed up the query, more specifically [ROW\_NUMBER](http://technet.microsoft.com/en-us/library/ms186734.aspx); it should reduce the number of reads:

```
WITH CTE AS
(   SELECT  TicketId,
            TicketCreationTimestamp,
            TheaterId,
            ShowId,
            MovieId,
            SeatId,
            CheckInTimestamp,
            RowNumber = ROW_NUMBER() OVER(PARTITION By MovieId, TheaterId, ShowId ORDER BY CheckInTimestamp, TicketID),
            RowNumber2 = ROW_NUMBER() OVER(PARTITION By MovieId, TheaterId, ShowId ORDER BY CheckInTimestamp DESC, TicketID)
    FROM    AccessRecord
)
SELECT  TicketId,
        TicketCreationTimestamp,
        TheaterId,
        ShowId,
        MovieId,
        SeatId,
        CheckInTimestamp
FROM    CTE
WHERE   RowNumber = 1
OR      RowNumber2 = 1;
```

However, as always with optimisation, you are best suited to tune your own queries: you have the data to test with and all the execution plans. Try the query with different indexes; if you show the actual execution plan, SSMS will even suggest indexes to help your query. I would expect an index on `(MovieId, TheaterId, ShowId)` that includes `CheckInTimestamp` as a non-key column would help.
Performance of SQL query
[ "", "sql", "sql-server", "performance", "datetime", "" ]
In reference to my [question](https://stackoverflow.com/questions/18695466/stored-procedure-confusion#18695836) I am trying this select statement and getting error ``` Select * From (SELECT * FROM FruitCrate WHERE FruitName IN (' +@FruitNames+ ')) WHERE FruitCrate.Eatable= 1 ``` Error > Incorrect syntax near the keyword 'WHERE'. **Edit** ``` USE [DataBaseName] GO SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[SPName] ( @FruitNames ) AS exec('SELECT * FROM FruitCrate WHERE FruitName IN (' +@FruitNames+ ') and Eatable = 1') ```
You can't pass a comma-separated string to the `in` operator. And you need just one query if you use `and` for the conditions.

```
exec('SELECT * FROM FruitCrate WHERE FruitName IN (' +@FruitNames+ ') and FruitCrate.Eatable= 1')
```

If you want to execute a dynamic query (which is the case here) then you have to use `exec`.
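The concatenation above answers the question as asked, but splicing a caller-supplied string into SQL is injection-prone. Outside T-SQL, a variable-length `IN` list is usually built from bound placeholders instead. A minimal sketch using SQLite through Python's `sqlite3` (the table and values are invented for illustration, not from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE FruitCrate (FruitName TEXT, Eatable INTEGER)")
con.executemany("INSERT INTO FruitCrate VALUES (?, ?)",
                [("apple", 1), ("banana", 1), ("holly berry", 0)])

fruit_names = ["apple", "banana", "cherry"]  # caller-supplied list

# Build one "?" per value instead of splicing the strings into the SQL text.
placeholders = ", ".join("?" for _ in fruit_names)
sql = ("SELECT FruitName FROM FruitCrate "
       f"WHERE FruitName IN ({placeholders}) AND Eatable = 1")
rows = [r[0] for r in con.execute(sql, fruit_names)]
print(rows)
```

Here `placeholders` expands to `?, ?, ?` and the driver binds the values safely, so no quoting and no dynamic `exec` is needed.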
You should name your subquery as below ``` Select * From (SELECT * FROM FruitCrate WHERE FruitName IN (' +@FruitNames+ ')) tab WHERE tab.Eatable= 1 ``` and you shoud use the code as dynamic sql ``` exec('Select * From (SELECT * FROM FruitCrate WHERE FruitName IN (' +@FruitNames+ ')) tab WHERE tab.Eatable= 1') ``` it will work for list like `'apple','strawberry','banana'` ``` Select * From (SELECT * FROM FruitCrate WHERE FruitName IN ('apple','strawberry','banana')) tab WHERE tab.Eatable= 1 ```
What's wrong with this SQL Statement
[ "", "sql", "sql-server", "stored-procedures", "" ]
I have a table called EMP which has a column called JOB. Now, the JOB column has various entries like CLERK, SALESMAN, MANAGER, etc. I want to fetch all rows (& columns) where JOB = 'CLERK', but instead of displaying 'CLERK' I want to display 'NEW CLERK'. I am not supposed to change the value in the table, only display it differently while querying. Is this possible? [Sorry if it doesn't make sense - it's a homework question]
It is possible. Something like this should work: ``` select case JOB when 'CLERK' then 'NEW CLERK' else JOB end as JOB from your_table ```
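The rewrite-on-output behaviour is easy to check on a toy table. A small sketch using SQLite via Python's `sqlite3` (the sample rows are invented; `CASE` works the same way in Oracle):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp (ename TEXT, job TEXT)")
con.executemany("INSERT INTO emp VALUES (?, ?)",
                [("SMITH", "CLERK"), ("ALLEN", "SALESMAN"), ("KING", "MANAGER")])

# Rewrite the value on the way out; the stored rows are untouched.
rows = list(con.execute("""
    SELECT ename,
           CASE job WHEN 'CLERK' THEN 'NEW CLERK' ELSE job END AS job
    FROM emp
"""))
print(rows)

stored = [r[0] for r in con.execute("SELECT job FROM emp WHERE ename = 'SMITH'")]
```

The `stored` check confirms the table still contains `'CLERK'`; only the query result shows `'NEW CLERK'`.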
One way to do this is with a CASE expression, for example:

```
SELECT CASE WHEN t.job = 'CLERK' THEN 'NEW CLERK' ELSE t.job END AS job
     , t.other_col
FROM emp t
```

This will return the value from the `JOB` column, except replace occurrences of `'CLERK'` with the literal value of `'NEW CLERK'`.

If you have several different replacements to do, the simple form of the CASE expression can be handy:

```
SELECT CASE t.job
         WHEN 'CLERK' THEN 'NEW CLERK'
         WHEN 'SALESMAN' THEN 'GOOD SALESMAN'
         ELSE t.job
       END AS job
     , t.other_col
FROM emp t
```

The simple form is just a shorter way of writing the searched form:

```
SELECT CASE
         WHEN t.job = 'CLERK' THEN 'NEW CLERK'
         WHEN t.job = 'SALESMAN' THEN 'GOOD SALESMAN'
         ELSE t.job
       END AS job
     , t.other_col
FROM emp t
```

---

If you are only returning rows that have 'CLERK' in the JOB column, then you could simply return a literal in the statement:

```
SELECT 'NEW CLERK' AS job
     , t.other_col
FROM emp t
WHERE t.job = 'CLERK'
```

--or--

```
SELECT 'NEW '||t.job AS job
     , t.other_col
FROM emp t
WHERE t.job = 'CLERK'
```
How to format query result in Oracle?
[ "", "sql", "oracle", "select", "" ]
So I have a MySQL database that tracks events on a calendar. I have a table called calendar that has all the dates that concern me. At my job we have contracts that have certain values; for the sake of simplification we will use only 1 and 2. These contracts have a start date and a end date. I want to take all the contracts for the whole calendar and add up the contracts values for each day. So far I have: ``` SELECT calendar.datefield AS DATE, contract.time_slots AS Slots_taken FROM contract RIGHT JOIN calendar ON calendar.datefield BETWEEN contract.start_time AND contract.end_time LIMIT 600 , 30 ``` This almost gives me what I want. This query produces a tables like this: ``` DATE | Slots_taken ------------|-------------- 2013-08-29 | 1 2013-08-30 | 1 2013-08-31 | 1 2013-09-01 | 1 2013-09-01 | 2 2013-09-02 | 1 2013-09-02 | 2 2013-09-03 | 1 2013-09-03 | 2 ``` But here I am getting repeat dates. I want only one row per date with the sum of the values for that date. So it should be: ``` DATE | Slots_taken ------------|-------------- 2013-08-29 | 1 2013-08-30 | 1 2013-08-31 | 1 2013-09-01 | 3 2013-09-02 | 3 2013-09-03 | 3 ```
``` SELECT calendar.datefield AS DATE, SUM(contract.time_slots) AS Slots_taken FROM contract RIGHT JOIN calendar ON calendar.datefield BETWEEN contract.start_time AND contract.end_time GROUP BY DATE LIMIT 600 , 30 ```
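The fix above (moving the `SUM` and adding `GROUP BY`) can be sanity-checked on a tiny dataset. A hedged sketch using SQLite via Python's `sqlite3` (dates are stored as ISO strings so `BETWEEN` compares correctly; the rows are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE calendar (datefield TEXT);
CREATE TABLE contract (start_time TEXT, end_time TEXT, time_slots INTEGER);
INSERT INTO calendar VALUES ('2013-08-31'), ('2013-09-01'), ('2013-09-02');
INSERT INTO contract VALUES ('2013-08-29', '2013-09-03', 1),
                            ('2013-09-01', '2013-09-03', 2);
""")

# One row per calendar date, summing the slots of every contract whose
# [start, end] range covers that date.
rows = list(con.execute("""
    SELECT c.datefield, SUM(t.time_slots)
    FROM calendar c
    LEFT JOIN contract t
      ON c.datefield BETWEEN t.start_time AND t.end_time
    GROUP BY c.datefield
    ORDER BY c.datefield
"""))
print(rows)
```

The two overlapping contracts produce `3` on the dates where both are active, matching the desired output in the question.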
```
SELECT calendar.datefield date
     , SUM(contract.time_slots) slots_taken
FROM calendar
LEFT JOIN contract
       ON calendar.datefield BETWEEN contract.start_time AND contract.end_time
GROUP BY date
LIMIT 600 , 30;
```
How to sum rows after paging?
[ "", "mysql", "sql", "" ]
I have a non clustered index I would like to drop (it is a duplicate of the clustered index). However, it is being used by a foreign key constraint. I would like to be able to identify which constraint is using the index, so I can update it to use the primary key index. When I try to drop it: ``` DROP INDEX [idx_duplicate] ON [dbo].[MyTable] ``` I get an error: > An explicit DROP INDEX is not allowed on index 'dbo.MyTable.idx\_duplicate'. It is being used for FOREIGN KEY constraint enforcement. I tried to find the culprit with the following query but no luck: ``` SELECT name FROM sys.foreign_keys WHERE OBJECT_NAME (referenced_object_id) = 'idx_duplicate' ```
Something like ``` Select f.name, object_name(f.parent_object_id) From sys.foreign_keys f inner join sys.indexes i on f.referenced_object_id = i.object_id and f.key_index_id = i.index_id Where i.name = 'idx_duplicate' and i.object_id = object_id('[dbo].[MyTable]') ```
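The same kind of catalog lookup exists in other engines. For comparison, a sketch of finding which tables' foreign keys reference a given table in SQLite via Python's `sqlite3` (the schema mirrors the question; `PRAGMA foreign_key_list` plays the role of `sys.foreign_keys`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE films (movie_id INTEGER PRIMARY KEY, movie_title TEXT);
CREATE TABLE rated (rated_id INTEGER PRIMARY KEY,
                    movie_id INTEGER REFERENCES films(movie_id));
""")

def fks_referencing(con, target):
    """List (child_table, child_column, parent_column) triples pointing at target."""
    hits = []
    tables = [r[0] for r in con.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    for t in tables:
        # Row layout: (id, seq, table, from, to, on_update, on_delete, match)
        for fk in con.execute(f"PRAGMA foreign_key_list('{t}')"):
            if fk[2] == target:
                hits.append((t, fk[3], fk[4]))
    return hits

refs = fks_referencing(con, "films")
print(refs)
```

This reports that `rated.movie_id` references `films.movie_id`, which is the information needed before dropping or repointing the index.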
This will tell you the tables, the foreign key and the columns involved: ``` select f.name , parentTable = o.name , parentColumn = c.name , foreignTable = ofr.name , foreignColumn = cfr.name from sys.foreign_keys f inner join sys.foreign_key_columns fc on f.object_id = fc.constraint_object_id inner join sys.objects o on fc.parent_object_id = o.object_id inner join sys.columns c on fc.parent_column_id = c.column_id and o.object_id = c.object_id inner join sys.objects ofr on fc.referenced_object_id = ofr.object_id inner join sys.columns cfr on fc.referenced_column_id = cfr.column_id and ofr.object_id = cfr.object_id inner join sys.indexes i on ofr.object_id = i.object_id where i.name = 'MyIndex' ``` [SQL Fiddle with demo](http://sqlfiddle.com/#!3/41782/2).
How to find what foreign key references an index on table
[ "", "sql", "sql-server", "" ]
I have a problem writing a query. I'd like to select the first row of each group of rows. My table is `Transactions`:

```
userID | Date | StoreID --------------------------- 1 | 8-9-2013 | 10 1 | 9-9-2013 | 10 1 | 10-9-2013| 20 2 | 7-9-2013 | 30 2 | 8-9-2013 | 10 2 | 9-9-2013 | 20 1 | 11-9-2013| 10 2 | 10-9-2013| 20
```

and I tried this SQL statement:

```
Select tr.userID , Min(tr.TransactionDate) FirstDate From Transactions tr Group By tr.userID
```

I get this output:

```
userID | Date ------------------ 1 | 8-9-2013 2 | 7-9-2013
```

But I need the `StoreID` of every first transaction. I need it to be like this:

```
userID | Date | StoreID ------------------------- 1 | 8-9-2013 | 10 2 | 7-9-2013 | 30
```

Can anyone help me?
[SQL Fiddle](http://sqlfiddle.com/#!3/e536a/34) **MS SQL Server 2008 Schema Setup**: ``` CREATE TABLE Transactions ([userID] int, [Date] datetime, [StoreID] int) ; INSERT INTO Transactions ([userID], [Date], [StoreID]) VALUES (1, '2013-08-09 00:00:00', 10), (1, '2013-09-09 00:00:00', 10), (1, '2013-10-09 00:00:00', 20), (2, '2013-07-09 00:00:00', 30), (2, '2013-08-09 00:00:00', 10), (2, '2013-09-09 00:00:00', 20), (1, '2013-11-09 00:00:00', 10), (2, '2013-10-09 00:00:00', 20) ; ``` **Query 1**: ``` SELECT tr.userID , Min(tr.Date) FirstDate , tr2.storeid FROM Transactions tr inner join Transactions tr2 on tr.userid = tr2.userid and tr2.date = (select top 1 date from transactions t where t.userid = tr2.userid order by date asc) GROUP BY tr.userID, tr2.storeid ``` **[Results](http://sqlfiddle.com/#!3/e536a/34/0)**: ``` | USERID | FIRSTDATE | STOREID | |--------|-------------------------------|---------| | 1 | August, 09 2013 00:00:00+0000 | 10 | | 2 | July, 09 2013 00:00:00+0000 | 30 | ```
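A portable alternative that avoids both the self-join above and window functions is a correlated subquery on each user's minimum date. A sketch using SQLite via Python's `sqlite3` (rows adapted from the question; note that a tie on the minimum date would return more than one row per user):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE transactions (userID INTEGER, date TEXT, storeID INTEGER);
INSERT INTO transactions VALUES
  (1, '2013-09-08', 10), (1, '2013-09-09', 10), (1, '2013-09-10', 20),
  (2, '2013-09-07', 30), (2, '2013-09-08', 10);
""")

# Keep only the row whose date equals that user's earliest date.
rows = list(con.execute("""
    SELECT t.userID, t.date, t.storeID
    FROM transactions t
    WHERE t.date = (SELECT MIN(date) FROM transactions
                    WHERE userID = t.userID)
    ORDER BY t.userID
"""))
print(rows)
```

Each user keeps exactly their earliest transaction along with its `storeID`, which is the output the question asks for.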
You could use Row\_Number(). ``` select UserId, Date, StoreId from (select row_number() over(partition by UserId order by date) as RowNumber, UserId, Date, StoreId from Transactions ) as View1 where RowNumber = 1 ``` <http://sqlfiddle.com/#!6/e536a/7>
How To get the First Row Form SQL Group Query?
[ "", "sql", "sql-server", "sql-server-2008", "sql-server-2005", "" ]
When I try to delete a movie from my database I get the above error. I believe I have somehow made the rated table take precedence over the films table. How do I make the films table take precedence over the rated table?

```
DELETE FROM `film`.`films` WHERE `films`.`movie_id` =16

--
-- Table structure for table `films`
--

CREATE TABLE IF NOT EXISTS `films` (
  `movie_id` int(4) NOT NULL AUTO_INCREMENT,
  `movie_title` varchar(100) NOT NULL,
  `actor` varchar(100) NOT NULL,
  PRIMARY KEY (`movie_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=17 ;

CREATE TABLE IF NOT EXISTS `rated` (
  `rated_id` int(4) NOT NULL AUTO_INCREMENT,
  `rated_name` varchar(40) NOT NULL,
  `movie_id` int(4) DEFAULT NULL,
  PRIMARY KEY (`rated_id`),
  KEY `movie_id` (`movie_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=3 ;

ALTER TABLE `rated`
  ADD CONSTRAINT `rated_ibfk_1` FOREIGN KEY (`movie_id`) REFERENCES `films` (`movie_id`);
```
The foreign key you have defined on `movie_id` by default restricts the deletion: with the current schema **you cannot delete a film as long as it has ratings.** You can automatically delete the ratings when you delete the film using *cascading delete*. Whether this is the best option for your application is for you to decide... ``` ALTER TABLE `rated` ADD FOREIGN KEY (`movie_id`) REFERENCES `films` (`movie_id`) ON DELETE CASCADE; ```
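The cascading behaviour is easy to demonstrate on a toy copy of the schema. A sketch using SQLite via Python's `sqlite3` (SQLite only enforces foreign keys after `PRAGMA foreign_keys = ON`; the sample rows are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # required for FK enforcement in SQLite
con.executescript("""
CREATE TABLE films (movie_id INTEGER PRIMARY KEY, movie_title TEXT);
CREATE TABLE rated (rated_id INTEGER PRIMARY KEY,
                    rated_name TEXT,
                    movie_id INTEGER REFERENCES films(movie_id) ON DELETE CASCADE);
INSERT INTO films VALUES (16, 'Some Film');
INSERT INTO rated VALUES (1, 'PG', 16);
""")

# Deleting the parent film now deletes its ratings automatically.
con.execute("DELETE FROM films WHERE movie_id = 16")
remaining = con.execute("SELECT COUNT(*) FROM rated").fetchone()[0]
print(remaining)
```

Without `ON DELETE CASCADE` the `DELETE` would fail with a constraint error, which is the SQLite analogue of the MySQL error in the question.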
Since there is a foreign key constraint you have to first delete any child records before you can delete the parent.
Cannot delete or update a parent row
[ "", "mysql", "sql", "" ]
We restored databases to our `UAT` environment and set up `AlwaysOn`. Whenever I try and backup the log I'm getting "The log was not truncated because records at the beginning of the log are pending replication or Change Data Capture. Ensure the Log Reader Agent or capture job is running or use `sp_repldone` to mark transactions as distributed or captured." I've removed CDCs from the database and removed all replication but I'm still getting the above error. The `log_reuse_wait_desc` is showing up as `AVAILABILITY_REPLICA` or `REPLICATION`. I've also tried running the following. ``` EXEC sys.sp_replflush EXEC sp_removedbreplication EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1 ``` Any suggestions? Thanks, Tim
The production database, which is what this DB was restored from, does have replication set up. Even though this copy was restored without any replication there was a transaction in the log still from prod. To fix this I set up a new publication with one table from the problem database, didn't even create a subscription, then deleted the publication and the problem went away. Thanks, Tim
Running the below solved the problem for us: ``` use myDB go EXEC sys.sp_cdc_enable_db go EXEC sys.sp_cdc_disable_db go EXEC sp_removedbreplication myDB ```
Log Can't be truncated but no CDCs or Replication exists
[ "", "sql", "sql-server", "" ]
I have a SQL Server table with several columns, an index starting from 1 as primary key. Now I need to clone this table but without the data and with a different index, let's say starting from 100 000. I have never done this before so I am quite unsure about how to do it. I know how to create a new table, ``` CREATE TABLE new_table AS (SELECT * FROM old_table); ``` but I do not know how to start the index from 100 000 and clone it without the data from the original table keeping the structure and the datatype exactly the same. Some help will be appreciated. I am using SQL Server 2012 Express that came with Visual Studio 2012.
`SQL Server` does not support `CREATE TABLE AS SELECT`. Use this ``` SELECT * INTO new_table FROM old_table ``` or ``` SELECT TOP 0 * INTO new_table FROM old_table ```
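The same "structure without rows" trick exists in engines without `SELECT INTO` or `TOP`. A sketch using SQLite via Python's `sqlite3`: a false `WHERE` clause copies the column list but no data (and, like `SELECT INTO`, it does not copy keys or constraints):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE old_table (id INTEGER, name TEXT)")
con.executemany("INSERT INTO old_table VALUES (?, ?)", [(1, "a"), (2, "b")])

# WHERE 0 is never true, so the new table gets the columns but zero rows.
con.execute("CREATE TABLE new_table AS SELECT * FROM old_table WHERE 0")

cols = [r[1] for r in con.execute("PRAGMA table_info(new_table)")]
count = con.execute("SELECT COUNT(*) FROM new_table").fetchone()[0]
print(cols, count)
```

Starting the new identity at 100 000 is a separate step in SQL Server; it is typically done when defining the column (`IDENTITY(100000,1)`) or afterwards with `DBCC CHECKIDENT`.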
Try this ``` CREATE TABLE new_table AS SELECT TOP 0 * FROM old_table ```
SQL Server Express : clone table with a different index but without data
[ "", "sql", "sql-server-express", "" ]
I have two tables, `TableA` and `TableB`. `TableA` has columns `newValue` and `oldValue`. `TableB` has a column `value`. I want the following result: get all the rows from `TableA` where `newValue` exists in `TableB` and `oldValue` does not exist in `TableB`. Both tables have many rows, so performance is important! How would you accomplish that in SQL Server 2010/2012? (u.i.: I found no proper title for this question).

**Edit:** The first thing I tried is:

```
SELECT newValue, oldValue
FROM TableA
WHERE newValue IN (SELECT value FROM TableB)
AND oldValue NOT IN (SELECT value FROM Table B)
```

But this is inefficient and has poor performance on my database. I'm looking for other solutions.
``` Select Distinct A.* From TableA A Inner Join TableB B on A.newValue = B.Value Where A.OldValue NOT In (Select Value From TableB) ```
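An `EXISTS` / `NOT EXISTS` phrasing of the same filter is often a strong performer, since each subquery can stop at the first match, and `NOT EXISTS` is also safe with `NULL`s (unlike `NOT IN`). A sketch using SQLite via Python's `sqlite3` with invented rows (this is an alternative formulation, not the answer's exact query):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE TableA (newValue TEXT, oldValue TEXT);
CREATE TABLE TableB (value TEXT);
INSERT INTO TableA VALUES ('x', 'gone'), ('y', 'x'), ('missing', 'x');
INSERT INTO TableB VALUES ('x'), ('y');
""")

# Keep rows whose newValue is in TableB AND whose oldValue is not.
rows = list(con.execute("""
    SELECT a.newValue, a.oldValue
    FROM TableA a
    WHERE EXISTS (SELECT 1 FROM TableB b WHERE b.value = a.newValue)
      AND NOT EXISTS (SELECT 1 FROM TableB b WHERE b.value = a.oldValue)
"""))
print(rows)
```

Only `('x', 'gone')` survives: its `newValue` is present in `TableB` and its `oldValue` is absent.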
I think avoiding `EXISTS` and `IN` helps with speed here: ``` SELECT * FROM TableA WHERE (SELECT TOP 1 1 FROM TableB WHERE value = newValue) IS NOT NULL AND (SELECT TOP 1 1 FROM TableB WHERE value = oldValue) IS NULL ```
SQL query - how to use exist and not exist conditions at the same time
[ "", "sql", "sql-server", "" ]
I've been reading up on how to use MySQL insert on duplicate key to see if it will allow me to avoid Selecting a row, checking if it exists, and then either inserting or updating. As I've read the documentation however, there is one area that confuses me. This is what the documentation says: > If you specify ON DUPLICATE KEY UPDATE, and a row is inserted that would cause a duplicate value in a UNIQUE index or PRIMARY KEY, an UPDATE of the old row is performed The thing is, I don't want to know if this will work for my problem, because the 'condition' I have for not inserting a new one is the existence of a row that has two columns equal to a certain value, not necessarily that the primary key is the same. Right now the syntax I'm imagining is this, but I don't know if it will always insert instead of replace: ``` INSERT INTO attendance (event_id, user_id, status) VALUES(some_event_number, some_user_id, some_status) ON DUPLICATE KEY UPDATE status=1 ``` The thing is, event\_id and user\_id aren't primary keys, but if a row in the table 'attendance' already has those columns with those values, I just want to update it. Otherwise I would like to insert it. Is this even possible with ON DUPLICATE? If not, what other method might I use?
If I were you, I would make a primary key out of event\_id and user\_id. That will make this extremely easy with `ON DUPLICATE`. [SQLFiddle](http://sqlfiddle.com/#!2/39957/1) ``` create table attendance ( event_id int, user_id int, status varchar(100), primary key(event_id, user_id) ); ``` Then with ease: ``` insert into attendance (event_id, user_id, status) values(some_event_number, some_user_id, some_status) on duplicate key update status = values(status); ```
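Other engines expose the same upsert idea under different spellings. A sketch using SQLite via Python's `sqlite3`: its long-standing `INSERT OR REPLACE` deletes the conflicting row and re-inserts it (so every column must be supplied); newer SQLite versions also have a closer `ON CONFLICT ... DO UPDATE` form. Table and values are illustrative:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE attendance (
    event_id INTEGER, user_id INTEGER, status INTEGER,
    PRIMARY KEY (event_id, user_id))""")

def check_in(event_id, user_id, status):
    # Conflict on the composite PK (event_id, user_id) replaces the row
    # instead of inserting a duplicate.
    con.execute("INSERT OR REPLACE INTO attendance VALUES (?, ?, ?)",
                (event_id, user_id, status))

check_in(5, 42, 0)
check_in(5, 42, 1)   # same (event, user): row is replaced, not duplicated
rows = list(con.execute("SELECT * FROM attendance"))
print(rows)
```

As in the answer, the key point is that the composite primary key is what makes the "duplicate" detectable at all.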
The quote includes "a duplicate value in a UNIQUE index". So, your values do not need to be the primary key: ``` create unique index attendance_eventid_userid on attendance(event_id, user_id); ``` Presumably, you want to update the existing record because you don't want duplicates. If you want duplicates sometimes, but not for this particular insert, then you will need another method.
MySQL "Insert ... On Duplicate Key" with more than one unique key
[ "", "mysql", "sql", "sql-insert", "on-duplicate-key", "" ]
I am attempting to migrate/update an old table that allowed a nullable varchar in a supposed "date" field. I want to find all rows that don't match this format: **`%e-%b-%y`**. How can I accomplish this query?

**EDIT:** I should mention that the field does contain a few "CANCELLED", null, or other string values instead of the more common e-b-y format. I am looking for those rows so I can update them to the format I want (%e-%b-%y).
One more approach is to try to recover as many dates as possible with different formats, using `STR_TO_DATE()` (which returns `NULL` if the extracted value is invalid) and `COALESCE()` to chain the different date formats. To show only rows with unrecoverable dates: ``` SELECT * FROM table1 WHERE COALESCE(STR_TO_DATE(NULLIF(dt, ''), '%e-%b-%Y'), STR_TO_DATE(NULLIF(dt, ''), '%e-%b-%y'), STR_TO_DATE(NULLIF(dt, ''), '%Y-%m-%d'), STR_TO_DATE(NULLIF(dt, ''), '%m/%d/%Y'), STR_TO_DATE(NULLIF(dt, ''), '%m/%d/%y')) IS NULL; ``` To see what you will get after converting the dates: ``` SELECT *, COALESCE(STR_TO_DATE(NULLIF(dt, ''), '%e-%b-%Y'), STR_TO_DATE(NULLIF(dt, ''), '%e-%b-%y'), STR_TO_DATE(NULLIF(dt, ''), '%Y-%m-%d'), STR_TO_DATE(NULLIF(dt, ''), '%m/%d/%Y'), STR_TO_DATE(NULLIF(dt, ''), '%m/%d/%y')) new_date FROM table1; ``` **Note:** * You can chain as many format strings as you need. * Use four-digit year formats (`%Y`) before two-digit ones (`%y`). Otherwise you'll get incorrect dates. If you were to have the following sample data: ``` | ID | DT | |----|-------------| | 1 | CANCELLED | | 2 | 02-Mar-12 | | 3 | (null) | | 4 | 5-Aug-13 | | 5 | | | 6 | 2013-09-12 | | 7 | 10/23/2013 | | 8 | 13-Aug-2012 | ``` Then the second query produces the following output: ``` | ID | DT | NEW_DATE | |----|-------------|----------------------------------| | 1 | CANCELLED | (null) | | 2 | 02-Mar-12 | March, 02 2012 00:00:00+0000 | | 3 | (null) | (null) | | 4 | 5-Aug-13 | August, 05 2013 00:00:00+0000 | | 5 | | (null) | | 6 | 2013-09-12 | September, 12 2013 00:00:00+0000 | | 7 | 10/23/2013 | October, 23 2013 00:00:00+0000 | | 8 | 13-Aug-2012 | August, 13 2012 00:00:00+0000 | ``` Here is **[SQLFiddle](http://sqlfiddle.com/#!2/36c6d/1)** demo
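The same "try formats in order" idea works when cleaning the data outside SQL. A sketch in plain Python (note the ordering caveat is reversed here: Python's `%Y` happily accepts a 2-digit year, so the 2-digit formats must come first; `%b` also assumes an English-month locale):

```python
from datetime import datetime

# Two-digit-year formats first, otherwise "%Y" would parse "13" as year 13.
FORMATS = ["%d-%b-%y", "%d-%b-%Y", "%Y-%m-%d", "%m/%d/%y", "%m/%d/%Y"]

def recover_date(text):
    """Return a date for the first matching format, else None."""
    if not text:
        return None
    for fmt in FORMATS:
        try:
            return datetime.strptime(text.strip(), fmt).date()
        except ValueError:
            pass
    return None

good = recover_date("5-Aug-13")
bad = recover_date("CANCELLED")
print(good, bad)
```

Values like `"CANCELLED"`, empty strings, and `None` all fall through to `None`, mirroring the `COALESCE(...) IS NULL` test in the SQL version.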
You can use regular expressions in MySQL; see <http://dev.mysql.com/doc/refman/5.1/en/regexp.html#operator_not-regexp> Here's an expression that returns rows where the date field (dt) is null or does not match 1-2 digits + dash + 3 alphabetic characters + dash + 2 digits (for example, 06-Sep-13) ``` select * from table_name where dt is null or dt not rlike '[[:digit:]]{1,2}-[[:alpha:]]{3}-[[:digit:]]{2}'; ```
Select all rows that dont match a date format?
[ "", "mysql", "sql", "" ]
I would like to have a model that belongs to another one. In my controller, I'd like to get all items in that model, but I want the attributes from the table it belongs to as well. For example: ``` class Comment extends Eloquent { public function post() { return $this->belongsTo('Post'); } } ``` In my controller I can do the following to get the comment data: ``` $comments = Comment::first(); ``` However, that will only give me data from the comments table (and no joined data from the posts table). I would like the data from the posts table for the row that each comment belongs to available as attributes to my Comment model. I know that I can also do the following to get the data from the posts table: $comments = Comment::first(); The issue with doing it this way is that it uses two database queries (#1 to get the comment, and #2 to get the post data that it belongs to). Is there a way that I get the data from both tables into my model, equivalent to a join statement: ``` SELECT * FROM comments LEFT JOIN posts ON comments.post_id = posts.id ``` I Know that I build a join query manually without using my Comment or Post models, but I have several methods in my Comment model that I'd like to be able to use with the data that is retrieved. Anyone know how I can accomplish this?
From the [documentation on eager loading](http://laravel.com/docs/eloquent#eager-loading): > Thankfully, we can use eager loading to drastically reduce the number of queries. The relationships that should be eager loaded may be specified via the with method [...] Using the `with()` parameter will use a **constant number**† of queries in a one-to-many relationship. Therefore, this query should retrieve your comments with their related posts: ``` $comments = Comment::with('post')->get(); ``` † A constant number of queries, as opposed to a query count that increases linearly with the number of comments
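What `with()` does under the hood is the classic fix for the N+1 problem: one query for the comments, then one batched `IN (...)` query for all the related posts. A language-agnostic sketch of that batching using SQLite via Python's `sqlite3` (illustrative schema, not Laravel's actual implementation):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE comments (id INTEGER PRIMARY KEY, post_id INTEGER, body TEXT);
INSERT INTO posts VALUES (1, 'first'), (2, 'second');
INSERT INTO comments VALUES (1, 1, 'a'), (2, 1, 'b'), (3, 2, 'c');
""")

# Query 1: the comments themselves.
comments = list(con.execute("SELECT id, post_id, body FROM comments"))

# Query 2: ALL related posts at once, not one query per comment.
post_ids = sorted({c[1] for c in comments})
placeholders = ", ".join("?" for _ in post_ids)
posts = {pid: title for pid, title in con.execute(
    f"SELECT id, title FROM posts WHERE id IN ({placeholders})", post_ids)}

joined = [(body, posts[pid]) for _, pid, body in comments]
print(joined)
```

Two queries total, regardless of how many comments there are, which is exactly the "constant number of queries" claim in the answer.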
If you want to get the results only with one query done on the background, without using `with()`, you can use `fluent` and `join()`. ``` SELECT * FROM comments LEFT JOIN posts ON comments.post_id = posts.id ``` is equal to: ``` DB::table('comments') ->join('posts','comments.post_id','=','posts.id','left') ->get(); ``` But I think also adding a `groupBy()` statement will give you better results: Example: ``` DB::table('comments') ->join('posts','comments.post_id','=','posts.id','left') ->groupBy('comments.id') ->get(); ``` But I'd prefer to use the other answer in my projects: ``` Comment::with('post')->get(); ```
Laravel 4 model with attributes from multiple tables
[ "", "sql", "join", "orm", "laravel", "laravel-4", "" ]
I'm trying to write a query to select users of a database whose birthdays are in the next 7 days. I've done a lot of research but I can't come up with a working solution. The birthday field is stored as a varchar, e.g. '04/16/93'. Is there any way to work with this? This is what I have so far:

```
SELECT * FROM `PERSONS` WHERE `BIRTHDAY` > DATEADD(DAY, -7, GETDATE())
```

I should have made it clearer: I'm trying to find birthdays, not dates of birth. So I'm just looking at days and months, not years.
To get all birthdays in next 7 days, add the year difference between the date of birth and today to the date of birth and then find if it falls within next seven days. ``` SELECT * FROM persons WHERE DATE_ADD(birthday, INTERVAL YEAR(CURDATE())-YEAR(birthday) + IF(DAYOFYEAR(CURDATE()) > DAYOFYEAR(birthday),1,0) YEAR) BETWEEN CURDATE() AND DATE_ADD(CURDATE(), INTERVAL 7 DAY); ``` If you want to exclude today's birthdays just change `>` to `>=` ``` SELECT * FROM persons WHERE DATE_ADD(birthday, INTERVAL YEAR(CURDATE())-YEAR(birthday) + IF(DAYOFYEAR(CURDATE()) >= DAYOFYEAR(birthday),1,0) YEAR) BETWEEN CURDATE() AND DATE_ADD(CURDATE(), INTERVAL 7 DAY); -- Same as above query with another way to exclude today's birthdays SELECT * FROM persons WHERE DATE_ADD(birthday, INTERVAL YEAR(CURDATE())-YEAR(birthday) + IF(DAYOFYEAR(CURDATE()) > DAYOFYEAR(birthday),1,0) YEAR) BETWEEN CURDATE() AND DATE_ADD(CURDATE(), INTERVAL 7 DAY) AND DATE_ADD(birthday, INTERVAL YEAR(CURDATE())-YEAR(birthday) YEAR) <> CURDATE(); -- Same as above query with another way to exclude today's birthdays SELECT * FROM persons WHERE DATE_ADD(birthday, INTERVAL YEAR(CURDATE())-YEAR(birthday) + IF(DAYOFYEAR(CURDATE()) > DAYOFYEAR(birthday),1,0) YEAR) BETWEEN CURDATE() AND DATE_ADD(CURDATE(), INTERVAL 7 DAY) AND (MONTH(birthday) <> MONTH(CURDATE()) OR DAY(birthday) <> DAY(CURDATE())); ``` Here is a [DEMO](http://sqlfiddle.com/#!2/7d8769/11) of all queries
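The "move the birthday into the current year, roll to next year if it already passed" logic is the heart of the query and can be checked in isolation. A sketch in plain Python (Feb 29 is mapped to Mar 1 in non-leap years, which is one of several reasonable conventions):

```python
from datetime import date

def has_birthday_within(dob, today, days=7):
    """True if the next occurrence of dob's month/day is within `days` of today."""
    def occurrence(year):
        try:
            return dob.replace(year=year)
        except ValueError:            # Feb 29 in a non-leap year
            return date(year, 3, 1)
    nxt = occurrence(today.year)
    if nxt < today:                   # birthday already passed this year
        nxt = occurrence(today.year + 1)
    return (nxt - today).days <= days

today = date(2013, 9, 10)
print(has_birthday_within(date(1993, 4, 16), today))  # April birthday, far away
print(has_birthday_within(date(1990, 9, 15), today))  # five days out
```

Rolling into the next year also handles the late-December edge case that a naive month/day string comparison (as in the other answer) gets wrong.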
It's very easy and simple. No need for any IF conditions or anything else; you just need MySQL's DATE_FORMAT() function. Here is my SQL query:

```
SELECT id,email ,dob FROM `users` where DATE_FORMAT(dob, '%m-%d') >= DATE_FORMAT(NOW(), '%m-%d') and DATE_FORMAT(dob, '%m-%d') <= DATE_FORMAT((NOW() + INTERVAL +7 DAY), '%m-%d')
```
mySQL SELECT upcoming birthdays
[ "", "mysql", "sql", "database", "" ]
I'd like to view grants on redshifts. I found [this view for postgres](http://postgresql.1045698.n5.nabble.com/quot-SHOW-GRANTS-FOR-username-quot-or-why-z-is-not-enough-for-me-td5714952.html): ``` CREATE OR REPLACE VIEW view_all_grants AS SELECT use.usename as subject, nsp.nspname as namespace, c.relname as item, c.relkind as type, use2.usename as owner, c.relacl, (use2.usename != use.usename and c.relacl::text !~ ('({|,)' || use.usename || '=')) as public FROM pg_user use cross join pg_class c left join pg_namespace nsp on (c.relnamespace = nsp.oid) left join pg_user use2 on (c.relowner = use2.usesysid) WHERE c.relowner = use.usesysid or c.relacl::text ~ ('({|,)(|' || use.usename || ')=') ORDER BY subject, namespace, item ``` Which doesn't work because the `::text` cast of `relacl` fails with the following: ``` ERROR: cannot cast type aclitem[] to character varying [SQL State=42846] ``` Modifying the query to ``` CREATE OR REPLACE VIEW view_all_grants AS SELECT use.usename as subject, nsp.nspname as namespace, c.relname as item, c.relkind as type, use2.usename as owner, c.relacl -- , (use2.usename != use.usename and c.relacl::text !~ ('({|,)' || use.usename || '=')) as public FROM pg_user use cross join pg_class c left join pg_namespace nsp on (c.relnamespace = nsp.oid) left join pg_user use2 on (c.relowner = use2.usesysid) WHERE c.relowner = use.usesysid -- or c.relacl::text ~ ('({|,)(|' || use.usename || ')=') ORDER BY subject, namespace, item ``` Allows the view to be created, but I'm concerned that this is not showing all relevant data. How can I modify the view to work on redshift or is there an better/alternative way to view grants on redshift ? **UPDATE:** Redshift has the HAS\_TABLE\_PRIVILEGE function to check grants. (see [here](http://docs.aws.amazon.com/redshift/latest/dg/r_HAS_TABLE_PRIVILEGE.html))
Another variation be like: ``` SELECT * FROM ( SELECT schemaname ,objectname ,usename ,HAS_TABLE_PRIVILEGE(usrs.usename, fullobj, 'select') AND has_schema_privilege(usrs.usename, schemaname, 'usage') AS sel ,HAS_TABLE_PRIVILEGE(usrs.usename, fullobj, 'insert') AND has_schema_privilege(usrs.usename, schemaname, 'usage') AS ins ,HAS_TABLE_PRIVILEGE(usrs.usename, fullobj, 'update') AND has_schema_privilege(usrs.usename, schemaname, 'usage') AS upd ,HAS_TABLE_PRIVILEGE(usrs.usename, fullobj, 'delete') AND has_schema_privilege(usrs.usename, schemaname, 'usage') AS del ,HAS_TABLE_PRIVILEGE(usrs.usename, fullobj, 'references') AND has_schema_privilege(usrs.usename, schemaname, 'usage') AS ref FROM ( SELECT schemaname, 't' AS obj_type, tablename AS objectname, schemaname + '.' + tablename AS fullobj FROM pg_tables WHERE schemaname not in ('pg_internal','pg_automv') UNION SELECT schemaname, 'v' AS obj_type, viewname AS objectname, schemaname + '.' + viewname AS fullobj FROM pg_views WHERE schemaname not in ('pg_internal','pg_automv') ) AS objs ,(SELECT * FROM pg_user) AS usrs ORDER BY fullobj ) WHERE (sel = true or ins = true or upd = true or del = true or ref = true) and schemaname='<opt schema>' and usename = '<opt username>'; ```
Something along the lines of: ``` select tablename, HAS_TABLE_PRIVILEGE(tablename, 'select') as select, HAS_TABLE_PRIVILEGE(tablename, 'insert') as insert, HAS_TABLE_PRIVILEGE(tablename, 'update') as update, HAS_TABLE_PRIVILEGE(tablename, 'delete') as delete, HAS_TABLE_PRIVILEGE(tablename, 'references') as references from pg_tables where schemaname='public' order by tablename; ``` gives me all I need.
How do I view grants on Redshift
[ "", "sql", "amazon-redshift", "" ]
We at college are making an application to generate PDF documents from Excel sheet records using Java SE. I have thought about two approaches to design the database. In one approach, there will be one table that will contain a lot of records (50K every year). In the other approach, a lot of tables (1000 every year) will be created at runtime and each table will contain at most 50 records. Which approach is more efficient in terms of overall time performance?
When building a relational database the basic rule would be to **avoid redundancy**. Look over your data and try to separate things that tend to repeat. If you notice a column or a group of columns that repeat across multiple entries create a new table for them. This way you will achieve the best performance when querying. Otherwise, if the values are unique across the entries just keep the minimum number of tables. You should just look for some [**design rules**](http://en.wikipedia.org/wiki/Database_normalization) for relational databases. You will find some examples as well.
Multiple tables of identical structure almost never makes sense. Databases are designed to have many records in few tables.
Which database structure is efficient? One table with 10,000 records or 1000 tables with 10 records?
[ "", "sql", "" ]
I have a table with two columns, name and year of birth (from 1960-1980). How can I write a single SQL query which lists each year and the number of people born that year? Something like: ``` Year Number of people born 1960 5 1961 3 1962 4 ... .. 1980 4 ```
``` Select Year, Count(*) as 'Number of people born' From SomeTable GROUP BY Year ORDER BY Year ASC ```
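The `GROUP BY`/`COUNT(*)` pattern above is portable enough to sanity-check outside SQL Server. Here is a quick sketch using Python's built-in sqlite3 as a stand-in; the table name and sample rows are invented for the demo:

```python
import sqlite3

# Hypothetical people(name, year) table with a few sample rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT, year INTEGER)")
conn.executemany("INSERT INTO people VALUES (?, ?)",
                 [("a", 1960), ("b", 1960), ("c", 1961),
                  ("d", 1962), ("e", 1962), ("f", 1962)])

# One row per year with the number of people born that year.
counts = conn.execute(
    "SELECT year, COUNT(*) FROM people GROUP BY year ORDER BY year"
).fetchall()
print(counts)  # [(1960, 2), (1961, 1), (1962, 3)]
```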
``` SELECT YEAROFBIRTH, COUNT(*) FROM YOURTABLE GROUP BY YEAROFBIRTH ```
sql list number of people born every year
[ "", "sql", "" ]
I wanted to create a table using dynamic SQL. If I create a table using ``` CREATE Table TodayTemp(id varchar(20)) DROP TABLE TodayTemp ``` then there is no problem; it works fine. But the problem with this approach is that I can't create columns dynamically. Hence I tried storing the create script in a variable and then executing it using the `EXEC` command, like ``` Declare @CreateTableCmd varchar(max) SET @CreateTableCmd = 'CREATE Table TodayTemp(id varchar(20))' Exec @CreateTableCmd ``` But this causes an error > Msg 2812, Level 16, State 62, Line 6 > Could not find stored procedure 'CREATE Table TodayTemp(id varchar(20))'.
Add parentheses around your variable when executing ``` Declare @CreateTableCmd varchar(max) SET @CreateTableCmd = 'CREATE Table TodayTemp (id varchar(20))' Exec (@CreateTableCmd) ^---------------^--------here ``` ## [SQLFiddle demo](http://sqlfiddle.com/#!3/fa387)
if you want to exec your script with exec, call it like this: ``` Exec (@CreateTableCmd) ``` Another way to do this is to use [sp\_executesql](http://technet.microsoft.com/en-us/library/ms188001.aspx) stored procedure: ``` exec sp_executesql @stmt = @CreateTableCmd ``` there're many links comparing this two approaches: * <https://dba.stackexchange.com/questions/4559/difference-between-exec-and-sp-executesql-with-no-parameters> * <http://www.sqlskills.com/blogs/kimberly/exec-and-sp_executesql-how-are-they-different/>
Msg 2812 : "Create Table Using Dynamic Script"
[ "", "sql", "sql-server", "sql-server-2008", "dynamic-tables", "" ]
I have a phonenumber field like: ``` **phonenumbers** 0729643482 0723412678 0734231567 0745297334 0729643482 0720606706 0729643482 0720606706 ``` There are thousands of entries. I wanted to get top 10 phonenumbers with greatest count. this can be displayed as ``` **phonenumber count** 0729643482 3 0720606706 2 . . . (entry 10) 1 ``` From some of the few related questions i understand i can use rank() then group by but i have never done this before. Here is what i have: ``` select phonenumber,cnt FROM (select phonenumber, cnt, rank() over (partition by phonenumber order by cnt desc) rnk from (select distinct phonenumber, count(phonenumber) cnt from ozekiout group by phonenumber order by phonenumber, count(phonenumber) desc) ) where rnk = 1; ```
You don't need the rank function, you can use a normal count with `TOP 10`: ``` SELECT TOP 10 phonenumber, [count] = COUNT(*) FROM ozekiout GROUP BY Phonenumber ORDER BY [count] DESC; ``` If you want to include more than 10 results if there are ties e.g. ``` Phonenumber count 01111111111 18 01111111112 15 01111111113 15 01111111114 14 01111111115 13 01111111116 13 01111111117 12 01111111118 12 01111111119 10 01111111120 10 01111111121 10 01111111122 10 .... CUT OFF 01111111122 9 ``` you can use: ``` SELECT TOP 10 WITH TIES phonenumber, [count] = COUNT(*) FROM ozekiout GROUP BY Phonenumber ORDER BY [count] DESC; ```
Try this : ``` select phonenumber,Count(*) as count from ozekiout group by phonenumber order by count desc limit 10; ```
Get Count() for DISTINCT values for each field SQL Server
[ "", "sql", "sql-server", "" ]
Is there a way to count the number of columns in a temp table (#temptable) in SQL Server?
``` SELECT COUNT(*) FROM tempdb.sys.columns WHERE object_id = object_id('tempdb..#mytemptable') ```
**Query to get column counts for specified table** ``` SELECT Count(*) as cnt into #TempTable FROM INFORMATION_SCHEMA.Columns where TABLE_NAME = 'TableName' ``` **Query to get column names for specified table** ``` SELECT COLUMN_NAME into #TempTable FROM INFORMATION_SCHEMA.Columns where TABLE_NAME = 'TableName' ``` **Query to get column count for #TempTable** ``` SELECT COUNT(*) as Cnt FROM tempdb.sys.columns WHERE object_id = object_id('tempdb..#TempTable') DROP table #TempTable ```
Count of Columns in temp table in SQL Server
[ "", "sql", "sql-server", "" ]
I want to select some data using a simple SQL statement and insert that data into another table. Both tables are the same: the data types and column names are all identical. They are simply temporary tables of the master tables. Using a single SQL statement I want to insert the data into the other table, with the where condition checking E_ID=?. Another problem is that sometimes there may be no matching rows in the table. In that case, could it throw a SQL exception? Another problem is that there may be multiple matching rows; that means one E_ID may have multiple rows. As an example, my attachment_master and attachments_temp tables have multiple rows for one single ID. How do I solve those problems? I have another problem. I can insert my master table data into the temp table using the following code, but I want to change only one column and keep the other data the same, because I want to change the temp table's status column. ``` insert into dates_temp_table SELECT * FROM master_dates_table where e_id=?; ``` Here all the data is inserted into my dates_temp_table, but I want to insert all the column data and change only the dates_temp_table status column to "Modified". How should I change this code?
You can use merge statement if I understand your requirement correctly. [Documentation](http://docs.oracle.com/cd/B28359_01/server.111/b28286/statements_9016.htm) As I do not have your table structure below is based on assumption, see whether this cater your requirement. I am assuming that e\_id is primary key or change as per your table design. ``` MERGE INTO dates_temp_table trgt USING (SELECT * FROM master_dates_table WHERE e_id=100) src ON (trgt.prm_key = src.prm_key) WHEN NOT MATCHED THEN INSERT (trgt.col, trgt.col2, trgt.status) VALUES (src.col, src.col2, 'Modified'); ``` More information and examples [here](http://www.oracle-base.com/articles/10g/merge-enhancements-10g.php)
You could try this: ``` insert into table1 ( col1, col2, col3,.... ) SELECT col1, col2, col3, .... FROM table2 where (you can check any condition here on table1 or table2 or mixed) ``` For more info have a look [here](https://stackoverflow.com/questions/5907206/mysql-insert-into-tbl-select-from-another-table-and-some-default-values) and [this similar question](https://dba.stackexchange.com/questions/2973/how-to-insert-values-into-a-table-from-a-select-query-in-postgresql) Hope it may help you. EDit : If I understand your requirement properly then this may be a helpful solution for you: ``` insert into table1 ( col-1, col-2, col-3,...., col-n, <Your modification col name here> ) SELECT col-1, col-2, col-3,...., col-n, 'modified' FROM table2 where table1.e_id=<your id value here> ``` As per your comment in above other answer: > "I send my E\_ID. I don't want to matching and get. I send my E\_ID and > if that ID available I insert those data into my temp table and change > temp table status as 'Modified' and otherwise don't do anything." As according to your above statements, If given e\_id is there it will copy all the columns values to your table1 and will place a value 'modified' in the 'status' column of your table1 For more info look [here](https://stackoverflow.com/questions/5907206/mysql-insert-into-tbl-select-from-another-table-and-some-default-values)
How to select data and insert those data using single sql?
[ "", "sql", "database", "select", "insert", "oracle10g", "" ]
I have a database which is used to store information about different matches for a game that I pull in from an external source. Due to a few issues, there are occasional gaps (which could be anywhere from 1 missing ID to a few hundred) in the database. I want to have the program pull in the data for the missing games, but I need to get that list first. Here is the format of the table: ``` id (pk-identity) | GameID (int) | etc. | etc. ``` I had thought of writing a program to run through a loop and query for each GameID starting at 1, but it seems like there should be a more efficient way to get the missing numbers. Is there an easy and efficient way, using SQL Server, to find all the missing numbers from the range?
The idea is to look at where the gaps start. Let me assume you are using SQL Server 2012, and so have the `lag()` and `lead()` functions. The following gets the next `id`: ``` select t.*, lead(id) over (order by id) as nextid from t; ``` If there is a gap, then `nextid <> id+1`. You can now characterize the gaps using `where`: ``` select id+1 as FirstMissingId, nextid - 1 as LastMissingId from (select t.*, lead(id) over (order by id) as nextid from t ) t where nextid <> id+1; ``` EDIT: Without the `lead()`, I would do the same thing with a correlated subquery: ``` select id+1 as FirstMissingId, nextid - 1 as LastMissingId from (select t.*, (select top 1 id from t t2 where t2.id > t.id order by t2.id ) as nextid from t ) t where nextid <> id+1; ``` Assuming the `id` is a primary key on the table (or even that it just has an index), both methods should have reasonable performance.
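The gaps query above translates almost directly to any engine with window functions. Below is a small sketch using Python's sqlite3 (SQLite 3.25+ supports `LEAD()`); the ids are made up for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,), (7,), (8,), (12,)])

# A gap starts wherever the next id is not id + 1; the last row's
# nextid is NULL, so it drops out of the comparison automatically.
gaps = conn.execute("""
    SELECT id + 1 AS first_missing, nextid - 1 AS last_missing
    FROM (SELECT id, LEAD(id) OVER (ORDER BY id) AS nextid FROM t)
    WHERE nextid <> id + 1
""").fetchall()
print(gaps)  # [(4, 6), (9, 11)]
```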
I like the "gaps and islands" approach. It goes a little something like this: ``` WITH Islands AS ( SELECT GameId, GameID - ROW_NUMBER() OVER (ORDER BY GameID) AS [IslandID] FROM dbo.yourTable ) SELECT MIN(GameID), MAX(Game_id) FROM Islands GROUP BY IslandID ``` That query will get you the list of contiguous ranges. From there, you can self-join that result set (on successive IslandIDs) to get the gaps. There is a bit of work in getting the IslandIDs themselves to be contiguous though. So, extending the above query: ``` WITH cte1 AS ( SELECT GameId, GameId - ROW_NUMBER() OVER (ORDER BY GameId) AS [rn] FROM dbo.yourTable ) , cte2 AS ( SELECT [rn], MIN(GameId) AS [Start], MAX(GameId) AS [End] FROM cte1 GROUP BY [rn] ) ,Islands AS ( SELECT ROW_NUMBER() OVER (ORDER BY [rn]) AS IslandId, [Start], [End] from cte2 ) SELECT a.[End] + 1 AS [GapStart], b.[Start] - 1 AS [GapEnd] FROM Islands AS a LEFT JOIN Islands AS b ON a.IslandID + 1 = b.IslandID ```
Find all integer gaps in SQL
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I want to check whether a date is between two dates, where either boundary can also be passed as `NULL`, using stored proc code: ``` declare @fromDate date = null declare @toDate date = null select * from Mytable where date between @fromDate and @toDate OR NULL (how to check for both parameters) ``` I have 2 other parameters, so if @toDate and @fromDate are NULL the result should be displayed irrespective of the dates. Please help.
Try with coalesce function as below ``` declare @fromDate date = null declare @toDate date = null select * from Mytable where date between coalesce(@fromDate,date) and coalesce(@toDate,date) ```
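The `COALESCE` trick is easy to verify on any engine. A minimal sketch with Python's sqlite3, using string dates and invented sample data, shows that `NULL` bounds let every row through:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (d TEXT)")
conn.executemany("INSERT INTO mytable VALUES (?)",
                 [("2013-01-05",), ("2013-06-15",), ("2013-12-20",)])

def fetch(from_date, to_date):
    # A NULL bound falls back to the row's own date, so it never filters.
    return conn.execute(
        "SELECT d FROM mytable "
        "WHERE d BETWEEN COALESCE(?, d) AND COALESCE(?, d) ORDER BY d",
        (from_date, to_date)).fetchall()

print(fetch(None, None))          # all three rows
print(fetch("2013-06-01", None))  # [('2013-06-15',), ('2013-12-20',)]
```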
Pretty much convert your English to SQL: ``` where (date is null or date betweeen @fromDate and @toDate) ```
check date between two dates or NULL in sql
[ "", "mysql", "sql", "sql-server", "t-sql", "" ]
I am trying to join two tables based on whether or not a string from the first table is contained in part of a long string in the second table. I am using PROC SQL in SAS, but could also use a data step instead of a SQL query. This code works fine on smaller datasets, but rapidly gets bogged down since it has to make a ton of comparisons. It would be fine if it were a simple equality check, but having to use the `index()` function makes it tough. ``` proc sql noprint; create table matched as select A.*, B.* from search_notes as B, names as A where index(B.notes,A.first) or index(B.notes,A.last) order by names.name, notes.id; quit; run; ``` B.notes is a 2000 character (sometimes fully populated) block of text, and I am looking for any result that contains either the first or last name from A. I don't think I get any speed advantage from doing it in two steps since it already has to compare every line of A with every line of B (so checking for both the first and last name isn't the bottleneck). When I run it, I get `NOTE: The execution of this query involves performing one or more Cartesian product joins that can not be optimized.` in my log. Running it with A=4000 observations and B=100,000 observations takes 30 minutes to produce ~1000 matches. Is there any way to optimize this?
This is a partial answer that makes it run 4-5X faster, but it isn't ideal (it helps in my case, but wouldn't necessarily work in the general case of optimizing a Cartesian product join). I originally had 4 separate index() statements like in my example (my simplified sample had 2 for A.first and A.last). I was able to refactor all 4 of those index() statements (plus a 5th I was going to add) into a regular expression that solves the same problem. It won't return an identical result set, but I think it actually returns *better* results than the 5 separate indexes since you can specify word edges. In the datastep where I clean the names for matching, I create the following pattern: ``` pattern = cats('/\b(',substr(upcase(first_name),1,1),'|',upcase(first_name),').?\s?',upcase(last_name),'\b/'); ``` This should create a regex along the lines of `/\b(F|FIRST).?\s?LAST\b/` which will match anything like F. Last, First Last, flast@email.com, etc (there are combinations that it doesn't pick up, but I was only concerned with combinations that I observe in my data). Using '\b' also doesn't allow things where FLAST happens to be the same as the start/end of a word (such as "Edward Lo" getting matched to "Eloquent") which I find hard to avoid with index() Then I do my sql join like this: ``` proc sql noprint; create table matched as select B.*, prxparse(B.pattern) as prxm, A.* from search_text as A, search_names as B where prxmatch(calculated prxm,A.notes) order by A.id; quit; run; ``` Being able to compile the regex once per name in B, and then run it on each piece of text in A seems to be dramatically faster than a couple of index statements (not sure about the case of a regex vs a single index). Running it with A=250,000 Obs and B=4,000 Obs, took something like 90 minutes of CPU time for the index() method, while doing the same with prxmatch() took only 20 minutes of CPU time.
The Cartesian product might be best for your data but here is something to try. What I am doing is using CALL EXECUTE() in a data step to build the step matching into a data step. This means you only have to tranverse each table once. However, you will have 4000 IF/THEN clauses in your written data step. Doing this brings the runtime on my example data from 55 seconds to 40 seconds. That would represent about 24 minutes down from your 30 minutes if the ratio holds. I would leave this question open. Maybe someone can come up with a better method. ``` %let n=50; data B; format notes $&n..; choose = "ABCDEFGHIJLKMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"; do j=1 to 9000000; notes = ""; do i=1 to floor(5 + ranuni(123)*(&n-5)); r = floor(ranuni(123)*62+1); notes = catt(notes,substr(choose,r,1)); end; output; drop r choose i; end; run; data a; choose = "ABCDEFGHIJLKMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"; format first last $2.; do i=1 to 62 by 2; first = strip(substr(choose,i,1)); first = catt(first,first); last = strip(substr(choose,i+1,1)); last = catt(last,last); output; end; drop choose ; run; proc sql noprint; create table matched as select A.*, B.* from B as B, A as A where index(B.notes,A.first) or index(B.notes,A.last) order by B.notes, a.i; quit; options nosource; data _null_; set a end=l; if _n_ = 1 then do; call execute("data matched2; set B;"); call execute("format First Last $2. i best.;"); end; format outStr $200.; outStr = "if index(notes,'" || first || "') or index(notes,'" || last || "') then do;"; call execute(outStr); outStr = "first = '" || first || "';"; call execute(outStr); outStr = "last = '" || last || "';"; call execute(outStr); outStr = "i = " || i || ";"; call execute(outStr); call execute("output; end;"); if l then do; call execute("run;"); end; run; proc sort data=matched2; by notes i; run; ```
Efficiently joining/merging based on matching part of a string
[ "", "sql", "sas", "" ]
I have a database table which holds error logs reported from an application. If certain errors occur, the application requires human intervention before becoming active again. I need to sort through the logs and determine the total amount of time that has accrued between every pair of events. So when the app goes into ERROR state where intervention is required at a certain time, I need to find the elapsed time to the next error log where the app was restarted. Then I need to the sum of the total elapsed time between every pair of events. The table looks like this: ``` ErrorID | ErrorMessage | ErrorDateTime --------------------------------------------- 20 | ex. msg 1 | 2013-09-01 00:10:10 21 | ex. msg 2 | 2013-09-01 00:10:15 22 | ex. msg 3 | 2013-09-01 00:10:20 23 | ERROR | 2013-09-01 00:10:25 24 | ex. msg 4 | 2013-09-01 00:10:30 25 | ex. msg 5 | 2013-09-01 00:10:35 26 | ex. msg 6 | 2013-09-01 00:10:37 27 | App Restarted | 2013-09-01 00:11:30 28 | ex. msg 7 | 2013-09-01 00:11:35 29 | ex. msg 8 | 2013-09-01 00:11:40 30 | ex. msg 9 | 2013-09-01 00:11:43 31 | ERROR | 2013-09-01 00:11:45 32 | ex. msg 10 | 2013-09-01 00:12:10 33 | ex. msg 11 | 2013-09-01 00:12:20 34 | ex. msg 12 | 2013-09-01 00:12:22 35 | App Restarted | 2013-09-01 00:13:30 ``` So basically I need to find the difference between the timestamps of every ERROR and the subsequent App Restarted log message. Then get the sum of all of these durations Can anyone point me in the right direction?
``` ;WITH x AS ( SELECT ErrorID, ErrorMessage, ErrorDateTime, rn = ROW_NUMBER() OVER (ORDER BY ErrorDateTime, ErrorID) FROM dbo.YourLogTable WHERE ErrorMessage IN ('ERROR', 'App Restarted') ) SELECT y.ErrorID, x.ErrorID, [Back_Up] = y.ErrorDateTime, SecondsDown = DATEDIFF(SECOND, y.ErrorDateTime, x.ErrorDateTime) FROM x LEFT OUTER JOIN x AS y ON x.rn = y.rn + 1 WHERE x.ErrorMessage = 'App Restarted'; ``` That gives you each downtime duration. I'm not sure what value the `SUM` has - over the lifetime of the app? Limited to a certain time frame? Something else? But you can get it this way: ``` ;WITH x AS ( SELECT ErrorID, ErrorMessage, ErrorDateTime, rn = ROW_NUMBER() OVER (ORDER BY ErrorDateTime) FROM dbo.YourLogTable WHERE ErrorMessage IN ('ERROR', 'App Restarted') ) SELECT TotalDowntime = SUM(DATEDIFF(SECOND, y.ErrorDateTime, x.ErrorDateTime)) FROM x LEFT OUTER JOIN x AS y ON x.rn = y.rn + 1 WHERE x.ErrorMessage = 'App Restarted'; ```
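The pairing logic above (number the filtered events, then join each "App Restarted" row to the row before it) can be checked with a few sample rows. This sketch uses Python's sqlite3, with `strftime('%s', ...)` standing in for `DATEDIFF(SECOND, ...)`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (id INTEGER, msg TEXT, ts TEXT)")
conn.executemany("INSERT INTO logs VALUES (?, ?, ?)", [
    (23, "ERROR",         "2013-09-01 00:10:25"),
    (27, "App Restarted", "2013-09-01 00:11:30"),
    (31, "ERROR",         "2013-09-01 00:11:45"),
    (35, "App Restarted", "2013-09-01 00:13:30"),
])

# Number the interesting events, then diff each restart against the prior error.
total = conn.execute("""
    WITH x AS (
        SELECT id, msg, ts, ROW_NUMBER() OVER (ORDER BY ts, id) AS rn
        FROM logs WHERE msg IN ('ERROR', 'App Restarted')
    )
    SELECT SUM(strftime('%s', x.ts) - strftime('%s', y.ts))
    FROM x JOIN x AS y ON x.rn = y.rn + 1
    WHERE x.msg = 'App Restarted'
""").fetchone()[0]
print(total)  # 65 + 105 = 170 seconds of downtime
```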
The following query gets the restart time for each error: ``` select l.*, (select top 1 ErrorDateTime from logs l2 where l2.ErrorId > l.ErrorId and l2.ErrorMessage = 'App Restarted' order by l2.ErrorId ) as RestartTime from logs l where l.ErrorMessage = 'ERROR'; ``` To get the sum requires summing times. Here is the sum in seconds: ``` with errors as ( select l.*, (select top 1 ErrorDateTime from logs l2 where l2.ErrorId > l.ErrorId and l2.ErrorMessage = 'App Restarted' order by l2.ErrorId ) as RestartTime from logs l where l.ErrorMessage = 'ERROR' ) select sum(datediff(second, ErrorDateTime, RestartTime)) as SecondsDown from errors; ```
How can I determine down time from logs in database table
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have to get an instructor approval date from a table called section. If that table doesn't have the date (null), then I have to get a date from offering table, and even if that table doesn't have what I am looking for, then from Term table. AND if all three are null, then I need to get absoluteExpireDate in same manners as instructorapprovaldate. AND if absoluteExpireDate is also null in all three tables, then I need to get WaitList date in similar fashion. How do I create a case statement to handle that? so far here's what I have: ``` SELECT @dInstructApprDate = case when a.InstructorApprovalDate is null then select @dInstructApprDate = instructorapprovaldate from SSS_OfferingAcademicPeriods where SSS_OfferingRegPeriods.SSS_OfferingsID = @lSSS_OfferingsID ``` I am not sure why it doesn't like me using select statement within "THEN" Any help would be appreciated. Here is what I got so far for the function: ``` CREATE FUNCTION [dbo].[SSS_GetInstructorApprovalDate](@lSSS_SectionsID INT) RETURNS VARCHAR(20) AS BEGIN DECLARE @dInstructApprDate DATETIME, @dAddDropDate DATETIME, @lTemp INT, @lSSS_OfferingsID INT, @lSSS_TermsID INT SET @lTemp = 0 SELECT @lTemp = 1 WHERE EXISTS (SELECT 1 FROM SSS_SectionAcademicPeriods WITH (NOLOCK) WHERE SSS_SectionsID = @lSSS_SectionsID) --Fetch from section level, if present - Begin IF @lTemp = 1 BEGIN SELECT @dInstructApprDate = case when a.InstructorApprovalDate is null then (select instructorapprovaldate from SSS_OfferingAcademicPeriods where SSS_OfferingRegPeriods.SSS_OfferingsID = @lSSS_OfferingsID) else InstructorApprovalDate end FROM SSS_SectionAcademicPeriods a WITH (NOLOCK) where SSS_SectionsID = @lSSS_SectionsID ```
Given you're checking for null, you could use `Coalesce`: ``` select coalesce ( (select 1 a where 1=2) --returns null , (select 2 a where 2=3) --returns null , (select 3 a where 4=4) --returns result , 100 --default ) x ``` For me this would be cleaner / easier to read than a case statement, and I suspect would perform just as well. Based on the code & description included in your question, for you this would look something like the following: ``` CREATE FUNCTION [dbo].[SSS_GetInstructorApprovalDate](@lSSS_SectionsID INT) RETURNS VARCHAR(20) AS BEGIN DECLARE @dInstructApprDate DATETIME , @dAddDropDate DATETIME , @lSSS_OfferingsID INT , @lSSS_TermsID INT --, @lTemp INT = 0 --I suspect you don't want this bit; but uncomment if it's required (i.e. if you only want a value when there's a matching record in the secion table, but the record's approval date's null --SELECT top 1 @lTemp = 1 --FROM SSS_SectionAcademicPeriods WITH (NOLOCK) --WHERE SSS_SectionsID = @lSSS_SectionsID --Fetch from section level, if present - Begin --IF @lTemp = 1 --BEGIN SELECT @dInstructApprDate = coalesce ( ( SELECT InstructorApprovalDate FROM SSS_SectionAcademicPeriods with(nolock) where SSS_SectionsID = @lSSS_SectionsID ) , ( select InstructorApprovalDate from SSS_OfferingAcademicPeriods where SSS_OfferingsID = @lSSS_OfferingsID ) , ( select InstructorApprovalDate from SSS_TermsAcademicPeriods where SSS_OfferingsID = @lSSS_TermsID ) , ( SELECT AbsoluteExpireDate FROM SSS_SectionAcademicPeriods with(nolock) where SSS_SectionsID = @lSSS_SectionsID ) , ( select AbsoluteExpireDate from SSS_OfferingAcademicPeriods where SSS_OfferingsID = @lSSS_OfferingsID ) , ( select AbsoluteExpireDate from SSS_TermsAcademicPeriods where SSS_OfferingsID = @lSSS_TermsID ) , ( SELECT WaitListDate FROM SSS_SectionAcademicPeriods with(nolock) where SSS_SectionsID = @lSSS_SectionsID ) , ( select WaitListDate from SSS_OfferingAcademicPeriods where SSS_OfferingsID = @lSSS_OfferingsID ) , ( select WaitListDate from 
SSS_TermsAcademicPeriods where SSS_OfferingsID = @lSSS_TermsID ) ) --END return cast(@dInstructApprDate as varchar(20)) --probably END ``` NB: Depending on how long each query takes you may want to approach it slightly differently. Here's an alternate / let me know how it suits: ``` CREATE FUNCTION [dbo].[SSS_GetInstructorApprovalDate](@lSSS_SectionsID INT) RETURNS VARCHAR(20) AS BEGIN DECLARE @dInstructApprDate DATETIME , @dInstructApprDate2 DATETIME , @dInstructApprDate3 DATETIME , @dAddDropDate DATETIME , @lSSS_OfferingsID INT , @lSSS_TermsID INT --, @lTemp INT = 0 --I suspect you don't want this bit; but uncomment if it's required (i.e. if you only want a value when there's a matching record in the section table, but the record's approval date's null --SELECT top 1 @lTemp = 1 --FROM SSS_SectionAcademicPeriods WITH (NOLOCK) --WHERE SSS_SectionsID = @lSSS_SectionsID --Fetch from section level, if present - Begin --IF @lTemp = 1 --BEGIN SELECT @dInstructApprDate = InstructorApprovalDate , @dInstructApprDate2 = AbsoluteExpireDate , @dInstructApprDate3 = WaitListDate FROM SSS_SectionAcademicPeriods with(nolock) where SSS_SectionsID = @lSSS_SectionsID if @dInstructApprDate is null select @dInstructApprDate = InstructorApprovalDate , @dInstructApprDate2 = isnull(@dInstructApprDate2, AbsoluteExpireDate) , @dInstructApprDate3 = isnull(@dInstructApprDate3, WaitListDate) from SSS_OfferingAcademicPeriods where SSS_OfferingsID = @lSSS_OfferingsID if @dInstructApprDate is null select @dInstructApprDate = InstructorApprovalDate , @dInstructApprDate2 = isnull(@dInstructApprDate2, AbsoluteExpireDate) , @dInstructApprDate3 = isnull(@dInstructApprDate3, WaitListDate) from SSS_TermsAcademicPeriods where SSS_OfferingsID = @lSSS_TermsID set @dInstructApprDate = coalesce(@dInstructApprDate, @dInstructApprDate2, @dInstructApprDate3) --END return cast(@dInstructApprDate as varchar(20)) --probably END ```
a bit hard to say without having the whole query, what `a` stands for? Looks like your `case` is part of the bigger query, but ``` SELECT @dInstructApprDate = case when a.InstructorApprovalDate is null then ( select o.InstructorApprovalDate from SSS_OfferingAcademicPeriods as o where o.SSS_OfferingsID = @lSSS_OfferingsID ) -- ... -- you have from clause here? -- ... ``` I think your query could be greatly simplified, but can't say until I see the whole query **update** ``` select @dInstructApprDate = InstructorApprovalDate from SSS_SectionAcademicPeriods where SSS_SectionsID = @lSSS_SectionsID if @dInstructApprDate is null select @dInstructApprDate = instructorapprovaldate from SSS_OfferingAcademicPeriods where SSS_OfferingsID = @lSSS_OfferingsID ```
Case within Case statement
[ "", "sql", "sql-server-2008", "case", "" ]
I have a table with almost 20000 records with these columns: ``` Id SubjectId UniqueId 1 54 1 1 58 2 1 59 3 1 60 4 2 54 5 2 58 6 2 59 7 2 60 8 2 60 9 3 54 10 3 70 11 ``` I want to select those records which are repeating, so the result is like: ``` Id SubjectId UniqueId 2 60 8 2 60 9 7 54 15 7 54 18 7 54 30 ``` How could I do this?
use `EXISTS()` ``` SELECT a.* FROM tableName a WHERE EXISTS ( SELECT 1 FROM tableName b WHERE a.ID = b.ID AND a.SubjectID = b.subjectID GROUP BY Id, SubjectId HAVING COUNT(*) > 1 ) ``` * [SQLFiddle Demo](http://sqlfiddle.com/#!2/10f1e/1)
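The `EXISTS`-plus-`HAVING` approach runs unchanged on most engines. Here is a small self-contained check with Python's sqlite3, using a subset of the question's rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER, subjectid INTEGER, uniqueid INTEGER)")
conn.executemany("INSERT INTO records VALUES (?, ?, ?)", [
    (1, 54, 1), (1, 58, 2), (2, 60, 8), (2, 60, 9),
    (7, 54, 15), (7, 54, 18), (7, 54, 30), (3, 70, 11),
])

# Keep only rows whose (id, subjectid) pair occurs more than once.
dupes = conn.execute("""
    SELECT a.* FROM records a
    WHERE EXISTS (
        SELECT 1 FROM records b
        WHERE a.id = b.id AND a.subjectid = b.subjectid
        GROUP BY b.id, b.subjectid
        HAVING COUNT(*) > 1
    )
    ORDER BY a.id, a.uniqueid
""").fetchall()
print(dupes)  # [(2, 60, 8), (2, 60, 9), (7, 54, 15), (7, 54, 18), (7, 54, 30)]
```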
You can utilize analytic `COUNT()` since you're using **SQL Server 2008** ``` SELECT id, subjectid, uniqueid FROM ( SELECT id, subjectid, uniqueid, COUNT(*) OVER (PARTITION BY id, subjectid) cnt FROM table1 ) q WHERE cnt > 1 ``` or another way ``` SELECT t.* FROM ( SELECT id, subjectid FROM table1 GROUP BY id, SubjectId HAVING COUNT(*) > 1 ) q JOIN table1 t ON q.id = t.id AND q.subjectid = t.subjectid ``` Output for both queries: ``` | ID | SUBJECTID | UNIQUEID | |----|-----------|----------| | 2 | 60 | 8 | | 2 | 60 | 9 | | 7 | 54 | 15 | | 7 | 54 | 18 | | 7 | 54 | 30 | ``` Here is **[SQLFiddle](http://sqlfiddle.com/#!3/923ad/4)** demo
How do I find repeating records In a SQL table
[ "", "sql", "sql-server", "sql-server-2008-r2", "" ]
I have a SQL Server table in which there are 2 columns, and I want to update either one of their values according to a flag sent to the stored procedure along with the new value, something like: ``` UPDATE table_Name SET CASE WHEN @flag = '1' THEN column_A += @new_value WHEN @flag = '0' THEN column_B += @new_value END AS Total WHERE ID = @ID ``` What is the correct SQL Server code to do this?
The current answers are fine and should work ok, but what's wrong with the more simple, more obvious, and more maintainable: ``` IF @flag = 1 UPDATE table_name SET column_A = column_A + @new_value WHERE ID = @ID; ELSE UPDATE table_name SET column_B = column_B + @new_value WHERE ID = @ID; ``` This is much easier to read albeit this is a very simple query. Here's a working example courtesy of @snyder: [SqlFiddle](http://sqlfiddle.com/#!3/74fcd/12).
Something like this should work: ``` UPDATE table_Name SET column_A = CASE WHEN @flag = '1' THEN column_A + @new_value ELSE column_A END, column_B = CASE WHEN @flag = '0' THEN column_B + @new_value ELSE column_B END WHERE ID = @ID ```
if condition in sql server update query
[ "", "sql", "sql-server", "sql-server-2008", "stored-procedures", "" ]
I am having trouble finding the correct syntax to make this work. I have a table with the columns id and color, and I want to print out all of the ids for a specific color. ``` if exists(select id from mytable where color = 'red') print id if exists(select id from mytable where color = 'red') print SCOPE_IDENTITY() --which won't work because I'm using select rather than insert ```
`PRINT` only prints one value. It looks like you want to loop through the results and print each value, which will require a cursor: ``` DECLARE @id INT DECLARE id_cursor CURSOR FOR SELECT id from mytable where color = 'red' OPEN id_cursor FETCH NEXT FROM id_cursor INTO @id WHILE @@FETCH_STATUS = 0 BEGIN PRINT @id FETCH NEXT FROM id_cursor INTO @id END CLOSE id_cursor; DEALLOCATE id_cursor; ``` Which seems ridiculous to do in SQL and is why I said in my comment that SQL is not designed to easily print the results of a query - you may be better off returning a result set and letting the *consumer* print the results.
`id` isn't defined. The `id` in the exists subquery doesn't exist outside that subquery. You could do this: ``` declare @id int; select @id = id from table_name; if (@id is not null) print @id; ```
Cannot access if exists select variable
[ "", "sql", "sql-server", "" ]
I have a multi-table join (only two shown in example) where I need to retain all rows from the base table. Obviously I use a LEFT JOIN to include all rows on the base table. Without the WHERE clause it works great – When a row doesn’t exist in the Right table the row from the Left table still shows, just with a 0 from the column in the Right table. The first two rows in the dataset are Labels from the Left table and Count of rows from the Right table, grouped by Label. All I want is a count of 0 when a label does not have a value from Table2 assigned. **Table1** ``` Label | FK ---------- Blue | 1 Red | 2 Green | 3 ``` **Table2** ``` Values | pk | Date --------------------------- Dog | 1 | 02/02/2010 Cat | 2 | 02/02/2010 Dog | 1 | 02/02/2010 Cat | 2 | 02/02/2010 ``` **Query**: ``` SELECT 1.Label, COUNT(2.values) FROM Table1 1 LEFT JOIN Table2 2 ON 1.fk = 2.pk GROUP BY 1.Label ``` Good Result Set - No filters ``` Blue | 2 Red | 2 Green | 0 ``` Great! My issue is that when I add filtering criteria to remove rows from the Right table the row is removed for my Left join rows (zeroing them out), the Left rows are dropped. I need the Left rows to remain even if their count is filtered down to zero. ``` SELECT 1.Label, COUNT(2.values) FROM Table1 1 LEFT JOIN Table2 2 ON 1.fk = 1.pk WHERE 2.Date BETWEEN '1/1/2010' AND '12/31/2010' GROUP BY 1.Label ``` Bummer Result Set - After Filters ``` Blue | 2 Red | 2 ``` Dukes! So, what the hell? Do I need to get a temp table with the filtered dataset THEN join it to the Left table? What am I missing? Thanks! Do a second join or recursive join. Get my “good” join table, get a second “filtered” table, then LEFT JOIN them
You are filtering on the second table in the `where`. The values could be `NULL` and `NULL` fails the comparisons. Move the `where` condition to the `on` clause: ``` SELECT 1.Label, COUNT(2.values)
FROM Table1 1
LEFT JOIN Table2 2 ON 1.fk = 2.pk AND
                      2.Date BETWEEN '1/1/2010' AND '12/31/2010'
GROUP BY 1.Label ``` Note: The date formats retain the dates from the question. However, I don't advocate using `BETWEEN` for dates and the conditions should use standard date formats: ``` SELECT 1.Label, COUNT(2.values)
FROM Table1 1
LEFT JOIN Table2 2 ON 1.fk = 2.pk AND
                      2.Date >= '2010-01-01' AND
                      2.Date < '2011-01-01'
GROUP BY 1.Label; ``` Some databases support the SQL Standard keyword `DATE` to identify date constants.
Simply move the condition in the `WHERE` clause to the `ON` clause. ``` LEFT JOIN Table2 2 ON 1.fk = 2.pk 
                  AND 2.Date BETWEEN '1/1/2010' AND '12/31/2010' ```
SQL Left Join losing rows after filtering
[ "", "sql", "left-join", "" ]
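The WHERE-versus-ON behaviour described above is easy to verify with an in-memory SQLite copy of the sample tables (dates written in ISO format, as the accepted answer recommends):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (label TEXT, fk INTEGER);
CREATE TABLE t2 (val TEXT, pk INTEGER, d TEXT);
INSERT INTO t1 VALUES ('Blue',1),('Red',2),('Green',3);
INSERT INTO t2 VALUES ('Dog',1,'2010-02-02'),('Cat',2,'2010-02-02'),
                      ('Dog',1,'2010-02-02'),('Cat',2,'2010-02-02');
""")

# Filtering in WHERE turns the LEFT JOIN into an inner join:
# the unmatched left row ('Green') disappears.
where_rows = conn.execute("""
    SELECT t1.label, COUNT(t2.val) FROM t1
    LEFT JOIN t2 ON t1.fk = t2.pk
    WHERE t2.d >= '2010-01-01' AND t2.d < '2011-01-01'
    GROUP BY t1.label ORDER BY t1.label
""").fetchall()

# Filtering in ON keeps the unmatched left row, with a zero count.
on_rows = conn.execute("""
    SELECT t1.label, COUNT(t2.val) FROM t1
    LEFT JOIN t2 ON t1.fk = t2.pk
        AND t2.d >= '2010-01-01' AND t2.d < '2011-01-01'
    GROUP BY t1.label ORDER BY t1.label
""").fetchall()
```

`where_rows` drops Green; `on_rows` keeps it with a count of 0.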
I have a table and I want to write a `query` which will give duplicate rows of that table based on the value of column `quantity` in that table. Suppose the table is one given below ``` name | quantity ------|---------- a | 1 b | 1 c | 3 d | 2 e | 1 ``` And I want to be able to write a `query` in `T-SQL` so that it would give a result which would look like follows; ``` name | number | quantity ------|--------|---------- a | 1 | 1 b | 1 | 1 c | 1 | 3 c | 2 | 3 c | 3 | 3 d | 1 | 2 d | 2 | 2 e | 1 | 1 ``` So in this result, there are 3 rows for "c" as its quantity is 3 and the number increments as the line appears for nth time. I have found [this question](https://stackoverflow.com/questions/12816520/sql-select-selective-row-multiple-times) which has been answered and accepted, but I don't quite understand how to apply it in to my scenario. Any help on this is much appreciated..!
try this: ``` With Ints(n) As (Select 1 Union All Select n + 1 From Ints Where n < 1000) Select t.Name, i.n from myTable t join Ints i on i.n <= t.Quantity option(MaxRecursion 1000) ```
Create a numbers table and do a join: ``` with numbers as (
      select 1 as n
      union all
      select 1 + n
      from numbers
      where 1 + n <= 50
     )
select t.name, numbers.n, t.quantity
from t join
     numbers
     on numbers.n <= t.quantity; ``` This assumes the maximum quantity is 50. You could put in `(select max(quantity) from t)` if you wanted it more flexible. If `quantity` can be big, you might want to add `OPTION (MAXRECURSION 0)`.
Add multiple rows in a SQL SELECT query
[ "", "sql", "sql-server", "t-sql", "select", "" ]
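Both answers rely on the same numbers-table idea. A quick check in SQLite (which requires the `RECURSIVE` keyword that T-SQL omits) confirms the join direction `i.n <= t.quantity`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (name TEXT, quantity INTEGER);
INSERT INTO t VALUES ('a',1),('b',1),('c',3),('d',2),('e',1);
""")

# Generate 1..100 with a recursive CTE, then repeat each row quantity times.
rows = conn.execute("""
WITH RECURSIVE ints(n) AS (
    SELECT 1 UNION ALL SELECT n + 1 FROM ints WHERE n < 100
)
SELECT t.name, i.n AS number, t.quantity
FROM t JOIN ints i ON i.n <= t.quantity
ORDER BY t.name, i.n
""").fetchall()
```

The row for 'c' (quantity 3) appears three times, numbered 1 to 3, matching the desired output in the question.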
I have the following SQL, ``` SELECT TD.MyTempTableID
      ,TD.ID
      ,TD.Name
      ,TD.PhoneNumber
      ,TD.Featured
      ,TD.Price
      ,TD.Available
      ,TD.ModelNumber
      ,TD.Searchable
      ,TD.Brand
      ,TD.Tags
      ,TD.ShortDescriptions
      ,TD.Variations
      ,TD.Promotion
      ,TD.Archive
      ,TD.UPC
      ,TD.Status
  FROM MyTempTable TD
  WHERE TD.Brand = @Brand ``` Now I need to validate all fields of all these rows in my SP. How to do this?
I would add a column to the temporary table `IsValid BIT DEFAULT(1)`. Next I would run each validation routine per column: ``` UPDATE MyTempTable SET IsValid = 0 WHERE IsNumeric(PhoneNumber) = 0 -- don't process rows that have already failed AND IsValid = 1 ``` Repeat the above for each column, replacing the WHERE clause with what would make that value be NOT valid. Once done, you can query for `WHERE IsValid = 1` to get the rows that passed validation, or `WHERE IsValid = 0` for those that failed one or more tests. For when you want something to describe why it failed you could add another column `ErrorReason VARCHAR(MAX) DEFAULT('')`. ``` UPDATE MyTempTable SET IsValid = 0, -- note I'm forcing a line-break here inside the string so each reason is on a new line ErrorReason += 'PhoneNumber must be numeric ' WHERE IsNumeric(PhoneNumber) = 0 ``` Note that in this case we're not including the `AND IsValid = 1` condition, as we want to get multiple reasons for failure. Feel free to add that if you only want the first reason for failure. Now when you're done, rows with `IsValid = 0` will also have one or more reasons too.
``` Declare @ID as varchar(500)
Declare @Name as varchar(30)
Declare @Mobile as varchar(20)
-- ...declare one variable for every column you want to validate

Declare MY_data CURSOR FOR
SELECT TD.ID
      ,TD.Name
      ,TD.PhoneNumber
FROM MyTempTable TD
WHERE TD.Brand = @Brand
-- the SELECT list and the FETCH ... INTO list must contain the same columns, in the same order

OPEN MY_data
FETCH NEXT FROM MY_data INTO @ID, @Name, @Mobile
WHILE @@FETCH_STATUS = 0
BEGIN
    -- validate each row here; its values are in the variables above, e.g.
    IF IsNumeric(@Mobile) = 0
        DELETE FROM MyTempTable WHERE ID = @ID

    FETCH NEXT FROM MY_data INTO @ID, @Name, @Mobile
END
CLOSE MY_data
DEALLOCATE MY_data ```
How to validate list of records in a SP?
[ "", "sql", "sql-server", "t-sql", "sql-server-2008-r2", "" ]
I have to prepare an update statement dynamically in a stored procedure based on 4 input variables. Suppose `test`, `test1`, `test2` and `test3` are the input parameters. Then I have to prepare a query like this (please help me build it): ``` update emp set empid='1' 
where test = test and
      test1 = test1 and
      test2 = test2 and
      test3 = test3 ``` If the test and test1 values are null and the test2 and test3 values are not null, then I should prepare the update statement like below: ``` update emp set empid='1' 
where test is null and
      test1 is null and
      test2 = test2 and
      test3 = test3 ```
If there is a chance that one or more of the parameters you are passing to the procedure are going to be `NULL`, you could write your `UPDATE` statement as follows. There is really no need to use dynamic SQL. In this example the procedure's parameters are prefixed with `p_`: ``` update emp set empid='1' 
where (test  = p_test  or (test  is null and p_test  is null)) and
      (test1 = p_test1 or (test1 is null and p_test1 is null)) and
      (test2 = p_test2 or (test2 is null and p_test2 is null)) and
      (test3 = p_test3 or (test3 is null and p_test3 is null)) ```
Create a SQL statement based on your input parameters, like ``` 'SELECT column_name FROM table_name WHERE ' || 'var1 = ' || var2 || ..... ``` then use `EXECUTE IMMEDIATE` to execute this query
preparing update statement based on input parameters dynamically
[ "", "sql", "oracle", "" ]
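The NULL-safe predicate from the accepted answer is engine-agnostic; a small sketch using Python and SQLite (table and sample rows invented) shows a single static statement handling a NULL parameter:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (empid TEXT, test TEXT, test1 TEXT)")
conn.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                 [("0", None, "x"), ("0", "a", "x")])

# One static statement handles NULL and non-NULL parameters alike:
# (col = :p OR (col IS NULL AND :p IS NULL)) is the NULL-safe equality.
conn.execute("""
    UPDATE emp SET empid = '1'
    WHERE (test = :t OR (test IS NULL AND :t IS NULL))
      AND (test1 = :t1 OR (test1 IS NULL AND :t1 IS NULL))
""", {"t": None, "t1": "x"})

empids = [r[0] for r in conn.execute("SELECT empid FROM emp ORDER BY rowid")]
```

Only the row whose `test` column is NULL is updated; a plain `test = :t` with a NULL parameter would have matched nothing.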
In a Postgres DB, I need to filter a set of several hundred thousand rows in a table A by including only those rows for which an IP address column (of type inet) in the row matches any of several thousand IP address blocks (of type cidr) in another table B. I've tried various indexes on the inet addresses in the first table and the cidr ranges in the second, but no matter what I do, the planner does a nested sequential scan, applying the << operator to every pair of IP addresses and prefixes. Is there a way to speed this up with indexes or other clever tricks? (I can resort to external procedural scripting, but I was wondering if it's doable within Postgres.) Thanks!
Case closed. To make things fast, do the following: * Use the ip4r types available from <http://pgfoundry.org/projects/ip4r>, as pointed out by user bma. This type supports indexing where Postgres's (up to Postgres 9.3) native ones don't. * Do not use the ip4r type directly, but expand it into lower and upper values as suggested by user caskey and mentioned in the ip4r docs: <https://github.com/petere/ip4r-cvs/blob/master/README.ip4r#L187> Given the above, if you're using type ip4 (assuming you're dealing with v4 addresses) for all compared addresses, then the planner will leverage indexes on those columns. Thanks for the help, guys!
This is an old question but prominent in Google results, so posting my 2 cents here: With Postgres 9.4 and later you can use GIST indexes for inet and cidr: <https://www.postgresql.org/docs/current/static/gist-builtin-opclasses.html> E.g. the following query will use the gist index (assuming a table from MaxMind's free dataset): ``` create index on geolite2_city_ipv4_block using gist (network inet_ops); select * from geolite2_city_ipv4_block where network >>= '8.8.8.8'; ```
Speeding up checking of IP address membership in CIDR ranges, for large datasets
[ "", "sql", "postgresql", "indexing", "ip-address", "cidr", "" ]
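The bounds trick that closed this case (store each block as a lower/upper pair and range-scan) can be sketched outside the database with Python's standard `ipaddress` and `bisect` modules; the block list is invented and the blocks are assumed non-overlapping:

```python
import bisect
import ipaddress

# Hypothetical CIDR blocks; real data would come from the blocks table.
blocks = [ipaddress.ip_network(c) for c in ("10.0.0.0/8", "192.168.1.0/24")]

# Pre-compute (lower, upper) integer bounds, sorted by lower bound,
# mirroring the "expand into lower and upper values" advice.
bounds = sorted((int(n.network_address), int(n.broadcast_address))
                for n in blocks)

def in_any_block(addr: str) -> bool:
    a = int(ipaddress.ip_address(addr))
    # Binary search on the lower bounds, mirroring an index range scan.
    i = bisect.bisect_right(bounds, (a, float("inf"))) - 1
    return i >= 0 and bounds[i][0] <= a <= bounds[i][1]
```

This is the same shape as the indexed lookup the ip4r extension enables inside Postgres: one ordered search instead of a nested sequential scan over every (address, block) pair.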
I've installed SQL Developer on my system. No connections are being shown in my system yet. How do I create a new connection. Must I create a database first? If yes, then how do I create a new database. The SQL Query Editor window is not opening because there is no connection. All of this because there is no database. How do I create an empty database and then connect to it.
This tutorial should help you: [Getting Started with Oracle SQL Developer](http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/11g/r2/prod/appdev/sqldev/sqldev_mngdb/sqldev_mngdb_otn.htm) See the prerequisites: 1. Install Oracle SQL Developer. ***You already have it.*** 2. Install the Oracle Database. Download available [here](http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html). 3. Unlock the HR user. Login to SQL\*Plus as the **SYS** user and execute the following command: `alter user hr identified by hr account unlock;` 4. Download and unzip the **sqldev\_mngdb.zip** file that contains all the files you need to perform this tutorial. --- Another version from May 2011: [Getting Started with Oracle SQL Developer](http://www.oracle.com/technetwork/developer-tools/sql-developer/getting-started-155046.html) --- For more info check this related question: [How to create a new database after initally installing oracle database 11g Express Edition?](https://stackoverflow.com/q/9534136/114029)
1. Connect to sys. 2. Give your password for sys. 3. Unlock hr user by running following query: `alter user hr identified by hr account unlock;` 4. Then, Click on new connection 5. Give connection name as HR\_ORCL `Username: hr` `Password: hr` `Connection Type: Basic` `Role: default` `Hostname: localhost` `Port: 1521` `SID: xe` 6. Click on test and Connect
Creating a new database and new connection in Oracle SQL Developer
[ "", "sql", "oracle", "database-connection", "oracle-sqldeveloper", "" ]
I would like to group all the merchant transactions from a single table, and just get a count. The problem is, the merchant, let's say redbox, will have a redbox plus the store number added to the end (redbox 4562, redbox*1234). I will also include the category for grouping purpose. ``` Category     Merchant  
restaurant   bruger king 123 main st  
restaurant   burger king 456 abc ave  
restaurant   mc donalds * 45877d2d  
restaurant   mc 'donalds *888544d  
restaurant   subway 454545  
travelsubway MTA  
gas station  mc donalds gas  
travel       nyc taxi  
travel       nyc-taxi ``` The question: How can I group the merchants when they have address or store locations added on to them. All I need is a count for each merchant.
The short answer is there is no way to accurately do this, especially with just pure SQL. You can find exact matches, and you can find wildcard matches using the `LIKE` operator or a (potentially huge) series of regular expressions, but you cannot find *similar* matches nor can you find potential misspellings of matches. There are a few potential approaches I can think of to solve this problem, depending on what type of application you're building. **First, normalize the merchant data in your database.** I'd recommend *against* storing the exact, unprocessed string such as *Bruger King* in your database. If you come across a merchant that doesn't match a known set of merchants, ask the user if it already matches something in your database. When data goes in, process it then and match it to an existing known merchant. **Store a similarity coefficient**. You might have some luck using something like a [Jaccard index](http://en.wikipedia.org/wiki/Jaccard_index) to judge how *similar* two strings are. Perhaps after stripping out the numbers, this could work fairly well. At the very least, it could allow you to create a user interface that can attempt to guess what merchant it is. Also, some database engines have full-text indexing operators that can describe things like *similar to* or *sounds like*. Those could potentially be worth investigating. **Remember merchant matches per user**. If a user corrects *bruger king 123 main st* to *Burger King*, store that relation and remember it in the future without having to prompt the user. This data could also be used to help other users correct their data. **But what if there is no UI?** Perhaps you're trying to do some automated data processing. I really see no way to handle this without some sort of human intervention, though some of the techniques described above could help automate this process. I'd also look at the source of your data. 
Perhaps there's a distinct merchant ID you can use as a key, or perhaps there exists *somewhere* a list of all known merchants (maybe credit card companies provide this API?) If there's boat loads of data to process, another option would be to partially automate it using a service such as Amazon's [Mechanical Turk](https://www.mturk.com/).
You can use LIKE ``` SELECT COUNT(*) AS "COUNT", 'BURGER KING'
FROM <tables>
WHERE merchant LIKE '%king%'
UNION ALL
SELECT COUNT(*) AS "COUNT", 'JACK IN THE BOX'
FROM <tables>
WHERE merchant LIKE 'jack in the box%' ``` You may have to move the wildcards around depending on how the records were spelled out.
TSQL grouping on fuzzy column
[ "", "sql", "sql-server", "t-sql", "grouping", "fuzzy", "" ]
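To make the Jaccard suggestion from the accepted answer concrete, here is a rough Python sketch: normalize the raw merchant string, then pick the known merchant with the highest character-bigram Jaccard score. The `known` list and the bigram choice are illustrative assumptions, not part of the original answer:

```python
import re

def normalize(merchant: str) -> str:
    # Strip digits and punctuation before comparing, keeping letters/spaces.
    return re.sub(r"[^a-z ]", " ", merchant.lower()).strip()

def jaccard(a: str, b: str) -> float:
    # Character-bigram Jaccard index: |intersection| / |union| of bigram sets.
    grams = lambda s: {s[i:i + 2] for i in range(len(s) - 1)}
    ga, gb = grams(a), grams(b)
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0

# Hypothetical list of canonical merchants, maintained by hand or by users.
known = ["burger king", "mc donalds", "subway", "nyc taxi"]

def best_match(raw: str) -> str:
    cleaned = normalize(raw)
    return max(known, key=lambda k: jaccard(cleaned, k))
```

Even the misspelled "bruger king" still scores highest against "burger king", which is exactly why a similarity coefficient beats exact or `LIKE` matching here; a real system would also apply a minimum-score threshold before accepting a match.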
I have a set of records which store weekly events by using a datetime... `start_at` from the first week they began. It's not the REAL DATE, it's storing the DAY (DOW) and TIME, but is not the date I'm actually looking for. I'm trying to figure out how I can convert/abstract the `start_at` date in a way so using that I can find all of the next events that fall within the next 24 hours. START\_AT -> 2010-09-12 16:00:00 -> CONVERT into 2013-09-08 16:00:00 but I'm not sure how to abstract the `start_at` for the upcoming day of the week. I figured out how to order them but I'm not sure how to do what I want: ``` SELECT * FROM events ORDER BY (EXTRACT(dow FROM start_at)::int + 7 - EXTRACT(dow FROM now())::int) % 7,start_at::time ``` Any help would be greatly appreciated!
If I understand you correctly you may be overthinking it. If all you're looking for is a recurring datetime within roughly the next 24 hours, you could do something like: ``` SELECT * FROM events 
WHERE EXTRACT(dow FROM start_at) = 1 ``` Where 1 is the DOW (for Monday)... you abstract the DOW from the date ``` The day of the week as Sunday(0) to Saturday(6) ``` and unless you're very fussed about the time, you could just search for the DOW before the day you want (~24 hours).
If you mean the next 24 hours: ``` SELECT * FROM events WHERE start_at BETWEEN now() AND now() + INTERVAL '1 day' ``` If you mean the next 24 hour period starting at midnight (i.e. all day tomorrow): ``` SELECT * FROM events WHERE start_at BETWEEN now()::date + INTERVAL '1 day' AND now()::date + INTERVAL '2 days' ```
SQL abstract out date and find the next one this week
[ "", "sql", "postgresql", "postgresql-9.2", "" ]
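If the projection is moved into application code, the "keep only the DOW and time-of-day" idea can be sketched like this in Python; the function name and exact semantics are my own invention, and note that Python uses Monday=0 for `weekday()` while Postgres `dow` uses Sunday=0:

```python
from datetime import datetime, timedelta

def next_occurrence(template: datetime, now: datetime) -> datetime:
    """Project a stored start_at onto its next weekly occurrence after now."""
    # Keep only the day-of-week and time-of-day from the stored template.
    days_ahead = (template.weekday() - now.weekday()) % 7
    candidate = (now + timedelta(days=days_ahead)).replace(
        hour=template.hour, minute=template.minute,
        second=template.second, microsecond=0)
    if candidate < now:
        candidate += timedelta(days=7)
    return candidate

# The event stored as 2010-09-12 16:00 (a Sunday) recurs every Sunday 16:00.
nxt = next_occurrence(datetime(2010, 9, 12, 16, 0),
                      datetime(2013, 9, 8, 10, 0))  # now: also a Sunday
```

Once each event is projected this way, "events in the next 24 hours" is a plain comparison of `next_occurrence(...)` against `now + 24h`.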
I have a row in which two different teams are displayed with their ID and name. I want to display them in a dropdown. For that I need to display them in a format where one is above the other. This is my query, and the image shows the current result I am getting: ``` SELECT Match_Schedule.Match_Serno as 'Id', FirstHomeTeam.Serno as 'HomeTeamID', FirstHomeTeam.Team_Name as 'HomeTeam',SecondHomeTeam.Serno as 'AwayTeamID',SecondHomeTeam.Team_Name as 'AwayTeam' 
FROM Match_Schedule 
INNER JOIN Team_Detail AS FirstHomeTeam ON Match_Schedule.HomeTeam = FirstHomeTeam.Serno 
INNER JOIN Team_Detail AS SecondHomeTeam ON Match_Schedule.AwayTeam = SecondHomeTeam.Serno 
where Match_Serno=436 ``` The result I get is ![enter image description here](https://i.stack.imgur.com/W9pu6.png) But the result I want is: ![enter image description here](https://i.stack.imgur.com/mcAjq.png) Thank you.
You achieve this by doing an `UNPIVOT` on your `Match_Schedule` table and then doing a `JOIN` to the `Team_Detail` table to get the team name. ``` SELECT U.TeamID, TD.Team_Name
FROM Match_Schedule
UNPIVOT (TeamID FOR TeamName IN (HomeTeam, AwayTeam)) U
JOIN Team_Detail TD
    ON U.TeamID = TD.Serno
WHERE Match_Serno = 436 ```
You need to massage your data into producing two rows, then query those rows with an outer query like this: ``` SELECT Id, TeamID
FROM
(
  SELECT Serno as ID, Team_Name as TeamID, 'H' as TeamType
  FROM Team_Detail
  WHERE Serno = (SELECT HomeTeam FROM Match_Schedule WHERE Match_Serno = 436)
  UNION ALL
  SELECT Serno as ID, Team_Name as TeamID, 'A' as TeamType
  FROM Team_Detail
  WHERE Serno = (SELECT AwayTeam FROM Match_Schedule WHERE Match_Serno = 436)
) AS T
ORDER BY TeamType DESC ```
Displaying values one above the other instead of in the same line which belong to the same row
[ "", "sql", "sql-server", "rows", "" ]
I’m trying to execute a query like this one: ``` SELECT (17 + 4) AS foo, foo * 2 AS bar; ``` It fails, returning `Unknown column 'foo' in 'field list'`. So I did this: ``` SELECT (@foo := 17 + 4) AS foo, @foo * 2 AS bar; ``` Which works. The problem now is that I want to integrate this computed value (`foo`) in the `WHERE` clause: ``` SELECT (@foo := 17 + 4) AS foo, @foo * 2 AS bar FROM lorem WHERE foo = 21; ``` Fails: `Unknown column 'foo' in 'where clause'`. If I replace `foo` with `@foo` in the `WHERE`, it works. But if I replace the literals by a column, it stops working: ``` SELECT (@foo := ipsum) AS foo, @foo * 2 AS bar FROM lorem WHERE @foo = 33; ``` Returns an empty set with this content in table `lorem`: ``` SELECT * FROM lorem; +-------+ | ipsum | +-------+ | 33 | +-------+ ```
I found the solution: it works using the `HAVING` clause instead of `WHERE`, like this: ``` SELECT
    (@foo := ipsum * 2) AS foo,
    @foo * 2 AS bar
FROM lorem
HAVING foo = 66; ``` Returns: ``` +------+------+
| foo  | bar  |
+------+------+
|   66 |  132 |
+------+------+ ``` With: ``` SELECT * FROM lorem;
+-------+
| ipsum |
+-------+
|    33 |
|    41 |
+-------+ ``` This works because the `HAVING` clause is *evaluated* after the `SELECT` clause, while the `WHERE` clause is *evaluated* before it, just after the `FROM`.
Subselect! ``` SELECT foo , foo * 2 As bar FROM ( SELECT (17 + 4) AS foo ) As hey_look_at_me WHERE foo = 42 ```
Accessing SELECT's computed values in WHERE clause
[ "", "mysql", "sql", "" ]
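The `@foo` variable and `HAVING` tricks above are MySQL-specific; the portable route hinted at by the second answer is a derived table, which any engine accepts. A quick check against SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE lorem (ipsum INTEGER);
INSERT INTO lorem VALUES (33), (41);
""")

# Compute the value once in an inner query, then the alias is a real
# column in the outer query, usable in SELECT and WHERE alike.
rows = conn.execute("""
    SELECT foo, foo * 2 AS bar
    FROM (SELECT ipsum * 2 AS foo FROM lorem) AS q
    WHERE foo = 66
""").fetchall()
```

Only the `ipsum = 33` row survives the filter, yielding `(66, 132)`, the same result the `HAVING` version produces in MySQL.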
I get an error;Invalid column name 'phase'. I've tried every permutation I can think of. ``` <cfargument name="locationFilter" default="" /> <cfargument name="educationFilter" default="" /> <cfargument name="workFilter" default="" /> <cfargument name="diversityFilter" default="" /> <cfargument name="phaseFilter" default="" /> <cfquery name="QMentors" datasource="#request.dsn_live#"> SELECT (case when datepart(year,getdate())-cast(yearstarted as integer) > 29 then '5' when datepart(year,getdate())-cast(yearstarted as integer) > 13 then '4' when datepart(year,getdate())-cast(yearstarted as integer) > 5 then '3' when datepart(year,getdate())-cast(yearstarted as integer) > 1 then '2' else '1' end) as phase, * FROM mentors WHERE 0=0 AND mentortype='mentor' AND approved='true' AND active='true' AND mentorcat LIKE '%general%' <cfif arguments.locationFilter neq ""> AND location LIKE <cfqueryparam value="%#arguments.locationFilter#%" /></cfif> <cfif arguments.educationFilter neq ""> AND educationhistory LIKE <cfqueryparam value="%#arguments.educationFilter#%" /></cfif> <cfif arguments.workFilter neq ""> AND workhistory LIKE <cfqueryparam value="%#arguments.workFilter#%" /></cfif> <cfif arguments.diversityFilter eq "Diversity"> AND mentorcat LIKE <cfqueryparam value="%#arguments.diversityFilter#%" /></cfif> <cfif arguments.phaseFilter neq ""> AND phase = <cfqueryparam value="#arguments.phaseFilter#" /></cfif> ORDER BY lastname </cfquery> ``` I thought I had better show the entire query.
try this ``` SELECT (case 
    when datepart(year,getdate())-cast(yearstarted as integer) > 29 then '5'
    when datepart(year,getdate())-cast(yearstarted as integer) > 13 then '4'
    when datepart(year,getdate())-cast(yearstarted as integer) > 5 then '3'
    when datepart(year,getdate())-cast(yearstarted as integer) > 1 then '2'
    else '1'
    end) as phase, *
FROM table ``` EDIT: You need to repeat the whole CASE in the WHERE clause ``` AND (case 
    when datepart(year,getdate())-cast(yearstarted as integer) > 29 then '5'
    when datepart(year,getdate())-cast(yearstarted as integer) > 13 then '4'
    when datepart(year,getdate())-cast(yearstarted as integer) > 5 then '3'
    when datepart(year,getdate())-cast(yearstarted as integer) > 1 then '2'
    else '1'
    end) = <cfqueryparam value="#arguments.phaseFilter#" /></cfif> ``` Or wrap your query in a derived table and filter on `phase` in the outer query: `SELECT * FROM (your query) t WHERE phase = condition` [Alias in where](https://stackoverflow.com/questions/260399/using-an-alias-column-in-the-where-clause-in-ms-sql-2000)
Your syntax is wrong. This code should run: ``` SELECT *
FROM 
(SELECT case 
    when datepart(year,getdate())-cast(yearstarted as integer) > 29 then '5'
    when datepart(year,getdate())-cast(yearstarted as integer) > 13 then '4'
    when datepart(year,getdate())-cast(yearstarted as integer) > 5 then '3'
    when datepart(year,getdate())-cast(yearstarted as integer) > 1 then '2'
    else '1'
    end AS phase,
    col1,
    col2,
    col3,
    ... -- The list of columns that you want to select
FROM table) t
WHERE phase = ... ``` When you want to select a case expression, you can define the case expression and then give the column alias using the `AS` keyword as shown above. If you want to update the column named `phase` in a table then your expression (as provided in the question) would be the right one. **EDIT** For the sake of maintainability, writing the entire `CASE` construct in the `WHERE` clause is quite tedious. Instead, use the main query inside an inline view (`SELECT` statement in the `FROM` clause) and filter on `phase = ...`.
Why SQL case isn't working
[ "", "sql", "sql-server-2008", "coldfusion", "case", "cfquery", "" ]
I want to store the top 2 results in 2 variables. ``` create table t(id int); insert into t (id) values (1),(2),(3),(4); declare @id1 int declare @id2 int select top 2 @id1 = first id, @id2 = next id from t ``` ## [SQLFiddle](http://sqlfiddle.com/#!6/89b4e/4) Can I do it in one query without using a loop?
``` declare @id1 int,@id2 int ;with cte as ( select top (2) id from t order by id ) select @id1 = min(id), @id2 = max(id) from cte select @id1,@id2 ``` **[Fiddle demo](http://sqlfiddle.com/#!3/d95e6/6)**
``` with cte as ( select top 2 id, row_number() over(order by id) as rn from t order by id ) select @id1 = (select id from cte where rn = 1), @id2 = (select id from cte where rn = 2) ``` or ``` with cte as ( select top 2 id, row_number() over(order by id) as rn from t order by id ) select @id1 = max(case when rn = 1 then id end), @id2 = max(case when rn = 2 then id end) from cte ``` **`sql fiddle demo`**
Select TOP 2 results in variables without loop
[ "", "sql", "sql-server", "sql-server-2008", "variables", "" ]
I'm trying to use an inner join on 2 tables sharing the same column name. The first table is a temp table, but its column name equals the other table's primary key column name. How can I join them without changing the names of the column (in the temp table, of course)? ``` SELECT * FROM [dbo].[Apps] 
INNER JOIN #statsForManagerApps on [Apps].[AppId] = #statsForManagerApps.AppId
WHERE AppId IN
(SELECT AppId FROM [AppsForManagers] WHERE [Managerid] = @ManagerId) 
AND [Enabled]=1 ``` This is what my join looks like. Using \* because I need all data in the Apps table.
You need to specify the table in your `WHERE` clause: ``` WHERE Apps.AppId IN ``` You can also alias the tables, as shown below: ``` SELECT * FROM [dbo].[Apps] App INNER JOIN #statsForManagerApps on App.AppId = #statsForManagerApps.AppId WHERE App.AppId IN (SELECT AppId FROM [AppsForManagers] WHERE [Managerid] = @ManagerId) AND [Enabled]=1 ```
You should specify all columns with table aliases as below ``` SELECT A.col1,A.col2, TA.col1,TA.col2
FROM [dbo].[Apps] A 
INNER JOIN #statsForManagerApps TA on A.AppId = TA.AppId
WHERE A.AppId IN
  (SELECT AppId FROM [AppsForManagers] WHERE [Managerid] = @ManagerId) 
AND A.Enabled=1 ``` If you really can't list all the columns (as you said, there are 30), try one of the select statements below instead: ``` SELECT * ``` or ``` SELECT A.*, TA.* ```
SQL inner join on specified columns
[ "", "sql", "join", "" ]
The goal here is to: 1. Fetch the row with the most recent date from EACH store for EACH ingredient. 2. From this result, compare the prices to find the cheapest store for EACH ingredient. I can accomplish either the first or second goal in separate queries, but not in the same. How can i filter out a selection and then apply another filter on the previous result? EDIT: I've been having problems with results that i get from MAX and MIN since it just fetches the rest of the data arbitrarily. To avoid this im supposed to join tables on multiple columns (i guess). Im not sure how this will work with duplicate dates etc. I've included an image of a query and its output data. ![enter image description here](https://i.stack.imgur.com/M9gjb.png) If we use ingredient1 as an example, it exists in three separate stores (in one store twice on different dates). In this case the cheapest current price for ingredient1 would be store3. If the fourth row dated 2013-05-25 was even cheaper, it would still not "win" due to it being out of date. (Disregard brandname, they dont really matter in this problem.) Would appreciate any help/input you can offer!
This question is really interesting! So, first, we get the row with the most recent date from EACH store for EACH ingredient. (It is possible that the most recent dates from each store can be different.) Then, we compare the prices from each store (regardless of the date) to find the least price for each ingredient. The query below uses the GROUP\_CONCAT function in good measure. Here's a [SO question](https://stackoverflow.com/questions/1379565/mysql-first-and-last-record-of-a-grouped-record-aggregate-functions) regarding the use of the function. ``` SELECT i.name as ingredient_name , MIN(store_price.price) as price , SUBSTRING_INDEX( GROUP_CONCAT(store_price.date ORDER BY store_price.price), ',', 1 ) as date , SUBSTRING_INDEX( GROUP_CONCAT(s.name ORDER BY store_price.price), ',', 1 ) as store_name , SUBSTRING_INDEX( GROUP_CONCAT(b.name ORDER BY store_price.price), ',', 1 ) as brand_name FROM ingredient i JOIN (SELECT ip.ingredient_id as ingredient_id , stip.store_id as store_id , btip.brand_id as brand_id , CONVERT(SUBSTRING_INDEX( GROUP_CONCAT(ip.ingredient_price_id ORDER BY ip.date DESC), ',', 1 ), UNSIGNED INTEGER) as ingredient_price_id , MAX(ip.date) as date , CONVERT(SUBSTRING_INDEX( GROUP_CONCAT(ip.price ORDER BY ip.date DESC), ',', 1 ), DECIMAL(5,2)) as price FROM ingredient_price ip JOIN store_to_ingredient_price stip ON ip.ingredient_price_id = stip.ingredient_price_id JOIN brand_to_ingredient_price btip ON ip.ingredient_price_id = btip.ingredient_price_id GROUP BY ip.ingredient_id , stip.store_id) store_price ON i.ingredient_id = store_price.ingredient_id JOIN store s ON s.store_id = store_price.store_id JOIN brand b ON b.brand_id = store_price.brand_id GROUP BY store_price.ingredient_id; ``` You can check the implementation on this [SQL Fiddle](http://sqlfiddle.com/#!2/decc6/9). 
The version below, which ignores the brand, is slightly smaller: ``` SELECT i.name as ingredient_name , MIN(store_price.price) as price , SUBSTRING_INDEX( GROUP_CONCAT(store_price.date ORDER BY store_price.price), ',', 1 ) as date , SUBSTRING_INDEX( GROUP_CONCAT(s.name ORDER BY store_price.price), ',', 1 ) as store_name FROM ingredient i JOIN (SELECT ip.ingredient_id as ingredient_id , stip.store_id as store_id , CONVERT(SUBSTRING_INDEX( GROUP_CONCAT(ip.ingredient_price_id ORDER BY ip.date DESC), ',', 1 ), UNSIGNED INTEGER) as ingredient_price_id , MAX(ip.date) as date , CONVERT(SUBSTRING_INDEX( GROUP_CONCAT(ip.price ORDER BY ip.date DESC), ',', 1 ), DECIMAL(5,2)) as price FROM ingredient_price ip JOIN store_to_ingredient_price stip ON ip.ingredient_price_id = stip.ingredient_price_id GROUP BY ip.ingredient_id , stip.store_id) store_price ON i.ingredient_id = store_price.ingredient_id JOIN store s ON s.store_id = store_price.store_id GROUP BY store_price.ingredient_id; ``` References: [Simulating First/Last aggregate functions in MySQL](http://www.zonalivre.org/2009/10/12/simulating-firstlast-aggregate-functions-in-mysql/)
This probably needs a couple of sub queries joined together. This isn't tested (as I don't have your table definitions, nor any test data), but something like this:- ``` SELECT i.name AS ingredient, ip.price, ip.date, s.name AS storename, b.name AS brandname
FROM ingredient i
INNER JOIN ingredient_price ip ON i.ingredient_id = ip.ingredient_id
INNER JOIN store_to_ingredient_price stip ON ip.ingredient_price_id = stip.ingredient_price_id
INNER JOIN store s ON stip.store_id = s.store_id
INNER JOIN brand_to_ingredient_price btip ON ip.ingredient_price_id = btip.ingredient_price_id
INNER JOIN brand b ON btip.brand_id = b.brand_id
INNER JOIN
(
    SELECT i.ingredient_id, stip.store_id, ip.date, MIN(ip.price) AS lowest_price
    FROM ingredient i
    INNER JOIN ingredient_price ip ON i.ingredient_id = ip.ingredient_id
    INNER JOIN store_to_ingredient_price stip ON ip.ingredient_price_id = stip.ingredient_price_id
    INNER JOIN
    (
        SELECT i.ingredient_id, stip.store_id, MAX(ip.date) AS latest_date
        FROM ingredient i
        INNER JOIN ingredient_price ip ON i.ingredient_id = ip.ingredient_id
        INNER JOIN store_to_ingredient_price stip ON ip.ingredient_price_id = stip.ingredient_price_id
        GROUP BY i.ingredient_id, stip.store_id
    ) Sub1
    ON i.ingredient_id = Sub1.ingredient_id
    AND stip.store_id = Sub1.store_id
    AND ip.date = Sub1.latest_date
    GROUP BY i.ingredient_id, stip.store_id, ip.date
) Sub2
ON i.ingredient_id = Sub2.ingredient_id
AND stip.store_id = Sub2.store_id
AND ip.date = Sub2.date
AND ip.price = Sub2.lowest_price ```
MySQL Filter result again
[ "", "mysql", "sql", "" ]
In the Security/Users folder in my database, I have a bunch of security groups, include "MyApplication Users". I need to check if I am (or another user is) in this group, but I have no idea how to query for it or where I could see this information. I tried looking in the properties, but couldn't find anything. Any ideas?
Checking yourself or the current user: ``` SELECT IS_MEMBER('[group or role]') ``` A result of 1 = yes, 0 = no, and null = the group or role queried is not valid. To get a list of the users, try xp\_logininfo if extended procs are enabled and the group in question is a Windows group: ``` EXEC master..xp_logininfo 
  @acctname = '[group]',
  @option = 'members' ```
For a quick view of which groups / roles the current user is a member of; ``` select [principal_id] , [name] , [type_desc] , is_member(name) as [is_member] from [sys].[database_principals] where [type] in ('R','G') order by [is_member] desc,[type],[name] ```
Check users in a security group in SQL Server
[ "", "sql", "sql-server", "security", "sql-server-2005", "active-directory", "" ]
I have two related tables, data(no, name, citycode, age) and city(code, city) Table city ``` +------+------------+
| code | city       |
+------+------------+
| A1   | Jakarta    |
| A2   | Bali       |
| A3   | Semarang   |
| A4   | Surabaya   |
| C1   | Dili       |
| C2   | Jayapura   |
| C3   | Yogyakarta |
| C4   | Bandung    |
+------+------------+ ``` Table Data ``` +----+--------+----------+------+
| no | name   | citycode | age  |
+----+--------+----------+------+
|  1 | Ony    | A3       |   27 |
|  2 | Abri   | A3       |   28 |
|  3 | Denny  | C4       |   27 |
|  4 | Febri  | C1       |   27 |
|  5 | Galih  | C3       |   28 |
|  6 | Yulia  | A2       |   26 |
|  7 | Zening | A1       |   25 |
+----+--------+----------+------+ ``` I want to count the number of employees who are age 27, by city. My query: ``` select city.city , count(data.name) as Nmb_of_employees 
from city 
left join data on data.citycode = city.code 
where data.age = 27 
group by city.city; ``` The result ``` +----------+------------------+
| city     | Nmb_of_employees |
+----------+------------------+
| Bandung  |                1 |
| Dili     |                1 |
| Semarang |                1 |
+----------+------------------+ ``` but the result I want is like this ``` +------------+------------------+
| city       | Nmb_of_employees |
+------------+------------------+
| Jakarta    |                0 |
| Bali       |                0 |
| Semarang   |                1 |
| Surabaya   |                0 |
| Dili       |                1 |
| Jayapura   |                0 |
| Yogyakarta |                0 |
| Bandung    |                1 |
+------------+------------------+ ``` What query should I use to get the result above?
You need to remove the `WHERE` clause, which filters out the cities that have no employee of age `27`. `SUM(age = 27)` is a MySQL-specific expression that simply sums up the boolean result of the comparison. It can be rewritten using `CASE`, which is more portable across RDBMSs: `SUM(CASE WHEN age = 27 THEN 1 ELSE 0 END)`. ``` SELECT a.City, IFNULL(SUM(age = 27), 0) Nmb_of_employees FROM city a LEFT JOIN Data b ON a.code = b.cityCode GROUP BY a.City ``` * [SQLFiddle Demo](http://sqlfiddle.com/#!2/3d6ca/2)
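The boolean-sum trick above is easy to sanity-check. Below is a minimal sketch that runs essentially the same query against an in-memory SQLite database (SQLite is only a stand-in here — the question targets MySQL — but it also evaluates `age = 27` to 0/1, so `SUM`/`IFNULL` behave identically on this sample data):

```python
import sqlite3

# Load the question's sample data into an in-memory database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE city (code TEXT, city TEXT);
CREATE TABLE data (no INTEGER, name TEXT, citycode TEXT, age INTEGER);
INSERT INTO city VALUES ('A1','Jakarta'),('A2','Bali'),('A3','Semarang'),
 ('A4','Surabaya'),('C1','Dili'),('C2','Jayapura'),('C3','Yogyakarta'),('C4','Bandung');
INSERT INTO data VALUES (1,'Ony','A3',27),(2,'Abri','A3',28),(3,'Denny','C4',27),
 (4,'Febri','C1',27),(5,'Galih','C3',28),(6,'Yulia','A2',26),(7,'Zening','A1',25);
""")

# LEFT JOIN keeps every city; SUM(b.age = 27) counts matching employees,
# and IFNULL turns the NULL sum for employee-less cities into 0.
rows = conn.execute("""
SELECT a.city, IFNULL(SUM(b.age = 27), 0) AS Nmb_of_employees
FROM city a LEFT JOIN data b ON a.code = b.citycode
GROUP BY a.city
""").fetchall()

result = dict(rows)
print(result)  # all 8 cities present; Semarang/Dili/Bandung have 1, the rest 0
```

Running this reproduces the expected output: every city appears, including those with zero 27-year-old employees.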
I think you want the 'age' check inside the `ON` clause: ``` SELECT city.city , count(data.name) AS Nmb_of_employees FROM city LEFT JOIN data ON data.citycode = city.code AND data.age = 27 GROUP BY city.city; ``` Otherwise you correctly get the table (with blanks) but then filter out the blanks with your `WHERE` clause.
bring all categories of data using where
[ "", "mysql", "sql", "" ]
I have to write a `SELECT DISTINCT [COL1] from {TABLE}` query that finds every value of [COL1] that has 'A' in [COL2] but does not have 'B' in [COL2]. In this case '3' and '4' should be the result. **TABLE** ``` [COL1] [COL2] ---- ---- '1' 'A' '1' 'B' '1' 'C' '2' 'B' '2' 'C' '3' 'A' '3' 'C' '4' 'A' '4' 'D' ```
``` SELECT [COL1] from {TABLE} WHERE [COL2] = 'A' EXCEPT SELECT [COL1] from {TABLE} WHERE [COL2] = 'B' ``` Or ``` SELECT [COL1] FROM {TABLE} GROUP BY [COL1] HAVING MAX(CASE WHEN [COL2] = 'A' THEN 1 ELSE 0 END) = 1 AND MAX(CASE WHEN [COL2] = 'B' THEN 1 ELSE 0 END) = 0 ```
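Both variants above can be checked against the question's sample rows. The sketch below runs them on an in-memory SQLite database (an assumption purely for portability — the question is about SQL Server, but `EXCEPT` and the `HAVING MAX(CASE …)` pattern are standard enough to behave the same here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (COL1 TEXT, COL2 TEXT);
INSERT INTO t VALUES ('1','A'),('1','B'),('1','C'),('2','B'),('2','C'),
 ('3','A'),('3','C'),('4','A'),('4','D');
""")

# Variant 1: set difference — COL1 values with 'A', minus those with 'B'.
except_rows = conn.execute("""
SELECT COL1 FROM t WHERE COL2 = 'A'
EXCEPT
SELECT COL1 FROM t WHERE COL2 = 'B'
""").fetchall()

# Variant 2: one pass over the table, flagging presence of 'A' and 'B' per group.
having_rows = conn.execute("""
SELECT COL1 FROM t
GROUP BY COL1
HAVING MAX(CASE WHEN COL2 = 'A' THEN 1 ELSE 0 END) = 1
   AND MAX(CASE WHEN COL2 = 'B' THEN 1 ELSE 0 END) = 0
""").fetchall()

print(sorted(r[0] for r in except_rows))  # ['3', '4']
print(sorted(r[0] for r in having_rows))  # ['3', '4']
```

Both queries return '3' and '4', as the question expects.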
``` SELECT DISTINCT [COL1] FROM {TABLE} t1 WHERE t1.[COL2] = 'A' AND NOT EXISTS ( SELECT 1 FROM {TABLE} t2 WHERE t2.[COL1] = t1.[COL1] AND t2.[COL2] = 'B' ) ```
SQL Server : SELECT DISTINCT [COL1] from {TABLE} WHERE [COL2] = 'A' AND [COL2] <> 'B'
[ "", "sql", "sql-server", "select", "distinct", "multiple-records", "" ]
Here's what I'm trying to do. Let's say I have this table t: ``` key_id | id | record_date | other_cols 1 | 18 | 2011-04-03 | x 2 | 18 | 2012-05-19 | y 3 | 18 | 2012-08-09 | z 4 | 19 | 2009-06-01 | a 5 | 19 | 2011-04-03 | b 6 | 19 | 2011-10-25 | c 7 | 19 | 2012-08-09 | d ``` For each id, I want to select the row containing the minimum `record_date`. So I'd get: ``` key_id | id | record_date | other_cols 1 | 18 | 2011-04-03 | x 4 | 19 | 2009-06-01 | a ``` The only solutions I've seen to this problem assume that all `record_date` entries are distinct, but that is not the case in my data. Using a subquery and an inner join with two conditions would give me duplicate rows for some ids, which I don't want: ``` key_id | id | record_date | other_cols 1 | 18 | 2011-04-03 | x 5 | 19 | 2011-04-03 | b 4 | 19 | 2009-06-01 | a ```
How about something like: ``` SELECT mt.* FROM MyTable mt INNER JOIN ( SELECT id, MIN(record_date) AS MinDate FROM MyTable GROUP BY id ) t ON mt.id = t.id AND mt.record_date = t.MinDate ``` This gets the minimum date per ID, and then gets the values based on those values. The only time you would have duplicates is if there are duplicate minimum record\_dates for the same ID.
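The join-on-the-minimum pattern above can be verified with the question's sample rows. Here is a hedged sketch using in-memory SQLite in place of the original database (the SQL itself is plain enough to run unchanged):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE MyTable (key_id INTEGER, id INTEGER, record_date TEXT, other_col TEXT);
INSERT INTO MyTable VALUES
 (1,18,'2011-04-03','x'),(2,18,'2012-05-19','y'),(3,18,'2012-08-09','z'),
 (4,19,'2009-06-01','a'),(5,19,'2011-04-03','b'),(6,19,'2011-10-25','c'),
 (7,19,'2012-08-09','d');
""")

# Derive (id, MIN(record_date)) once, then join back to pick the full rows.
rows = conn.execute("""
SELECT mt.key_id, mt.id, mt.record_date, mt.other_col
FROM MyTable mt
INNER JOIN (
    SELECT id, MIN(record_date) AS MinDate
    FROM MyTable
    GROUP BY id
) t ON mt.id = t.id AND mt.record_date = t.MinDate
ORDER BY mt.id
""").fetchall()

print(rows)  # one row per id, each with the minimum record_date
```

Note that the shared date `2011-04-03` across ids 18 and 19 causes no duplicates, because the join matches on both `id` and the per-id minimum date.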
I could get to your expected result just by doing this in MySQL: ``` SELECT id, min(record_date), other_cols FROM mytable GROUP BY id ``` Does this work for you?
Group by minimum value in one field while selecting distinct rows
[ "", "sql", "group-by", "max", "distinct", "min", "" ]
UPDATE: Table and index definition ``` desc activities; +----------------+--------------+------+-----+---------+ | Field | Type | Null | Key | Default | +----------------+--------------+------+-----+---------+ | id | int(11) | NO | PRI | NULL | | trackable_id | int(11) | YES | MUL | NULL | | trackable_type | varchar(255) | YES | | NULL | | owner_id | int(11) | YES | MUL | NULL | | owner_type | varchar(255) | YES | | NULL | | key | varchar(255) | YES | | NULL | | parameters | text | YES | | NULL | | recipient_id | int(11) | YES | MUL | NULL | | recipient_type | varchar(255) | YES | | NULL | | created_at | datetime | NO | | NULL | | updated_at | datetime | NO | | NULL | +----------------+--------------+------+-----+---------+ show indexes from activities; +------------+------------+-----------------------------------------------------+--------------+----------------+-----------+-------------+----------+--------+------+------------+ | Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | +------------+------------+-----------------------------------------------------+--------------+----------------+-----------+-------------+----------+--------+------+------------+ | activities | 0 | PRIMARY | 1 | id | A | 7263 | NULL | NULL | | BTREE | | activities | 1 | index_activities_on_trackable_id_and_trackable_type | 1 | trackable_id | A | 7263 | NULL | NULL | YES | BTREE | | activities | 1 | index_activities_on_trackable_id_and_trackable_type | 2 | trackable_type | A | 7263 | NULL | NULL | YES | BTREE | | activities | 1 | index_activities_on_owner_id_and_owner_type | 1 | owner_id | A | 7263 | NULL | NULL | YES | BTREE | | activities | 1 | index_activities_on_owner_id_and_owner_type | 2 | owner_type | A | 7263 | NULL | NULL | YES | BTREE | | activities | 1 | index_activities_on_recipient_id_and_recipient_type | 1 | recipient_id | A | 2421 | NULL | NULL | YES | BTREE | | activities | 1 | index_activities_on_recipient_id_and_recipient_type | 2 | recipient_type | A | 3631 | NULL | NULL | YES | BTREE | +------------+------------+-----------------------------------------------------+--------------+----------------+-----------+-------------+----------+--------+------+------------+ select count(id) from activities; +-----------+ | count(id) | +-----------+ | 7117 | +-----------+ ``` This is what my current query looks like: ``` SELECT act.*, group_concat(act.owner_id order by act.created_at desc) as owner_ids FROM (select * from activities order by created_at desc) as act INNER JOIN users on users.id = act.owner_id WHERE (users.city_id = 1 and act.owner_type = 'User') GROUP BY trackable_type, recipient_id, recipient_type order by act.created_at desc limit 20 offset 0; ``` Doing an explain ![Explain](https://i.stack.imgur.com/yUTgu.png) I have played around with this query a lot, including indexes etc. Is there any way to optimize this query?
MySQL sometimes works in surprising ways, so I will give this a shot. I'm assuming `id` is the primary key on the `users` table. ``` SELECT act.trackable_type, act.recipient_id, act.recipient_type, max(act.created_at) as max_created_at, group_concat(act.owner_id order by act.created_at DESC) as owner_ids FROM activities act WHERE act.owner_id in (select id from users where city_id = 1) AND act.owner_type = 'User' GROUP BY trackable_type, recipient_id, recipient_type ORDER BY max_created_at DESC LIMIT 20 ```
I think you don't need `offset 0` at all, and it looks like you can live without the subquery too. If you don't use fields from the `users` table, you can use `in` (or `exists`) to make that clear: ``` select a.trackable_type, a.recipient_id, a.recipient_type, max(a.created_at) as max_created_at, group_concat(a.owner_id order by a.created_at desc) as owner_ids from activities as a where a.owner_type = 'User' and a.owner_id in (select u.id from users as u where u.city_id = 1) group by a.trackable_type, a.recipient_id, a.recipient_type order by max_created_at desc limit 20; ``` Also, it looks like your query could definitely get a performance boost if you create an index on `(owner_type, owner_id)` on `activities` (your existing `(owner_id, owner_type)` index will not work well for this query) and an index on `city_id` on `users`.
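Both answers converge on the same rewrite: filter owners with an `IN` subquery, group once, and order by `MAX(created_at)`. The sketch below demonstrates that shape on made-up sample data in SQLite (an assumption for runnability; note that `GROUP_CONCAT(… ORDER BY …)` inside the aggregate is MySQL syntax, so the sketch drops the inner `ORDER BY` for portability):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, city_id INTEGER);
CREATE TABLE activities (id INTEGER PRIMARY KEY, owner_id INTEGER, owner_type TEXT,
 trackable_type TEXT, recipient_id INTEGER, recipient_type TEXT, created_at TEXT);
-- Hypothetical sample rows: users 1 and 2 live in city 1, user 3 does not.
INSERT INTO users VALUES (1,1),(2,1),(3,2);
INSERT INTO activities VALUES
 (1,1,'User','Post',10,'User','2020-01-01'),
 (2,2,'User','Post',10,'User','2020-01-03'),
 (3,3,'User','Post',10,'User','2020-01-02'),
 (4,1,'User','Comment',11,'User','2020-01-04');
""")

# Filter first via IN, then group and sort by the per-group latest timestamp.
rows = conn.execute("""
SELECT a.trackable_type, a.recipient_id, a.recipient_type,
       MAX(a.created_at) AS max_created_at,
       GROUP_CONCAT(a.owner_id) AS owner_ids
FROM activities a
WHERE a.owner_type = 'User'
  AND a.owner_id IN (SELECT u.id FROM users u WHERE u.city_id = 1)
GROUP BY a.trackable_type, a.recipient_id, a.recipient_type
ORDER BY max_created_at DESC
LIMIT 20
""").fetchall()

print(rows)
```

Activity 3 (owner from city 2) is filtered out before grouping, and the groups come back newest-first, which is what the original `order by … desc limit 20` was after.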
mysql query optimization with multiple groupings or order by
[ "", "mysql", "sql", "query-optimization", "" ]
I have 2 tables, Table A and B. Table A data: ``` Enquiry_Id Ckeck_In check_Out Place 129 2013-09-20 2013-09-23 Delhi 130 2013-08-09 2013-08-10 Agra 130 2013-08-10 2013-08-12 Punjab 130 2013-08-12 2013-08-13 Haryana ``` Table B data: ``` Enquiry_Id Arrives_On Arrives_Details 129 2013-09-20 NA 130 2013-08-09 NA ``` I want that when I update table B's Arrives_On date from 2013-08-09 to 2013-08-14 where Enquiry_Id is 130, table A's check_In and check_Out dates change automatically, like: ``` Enquiry_Id Ckeck_In check_Out Place 129 2013-09-20 2013-09-23 Delhi 130 2013-08-14 2013-08-15 Agra 130 2013-08-15 2013-08-17 Punjab 130 2013-08-17 2013-08-18 Haryana ```
Trigger on TableB where you calculate the difference between the previous (`DELETED`) and updated (`INSERTED`) value and add that difference to your `TableA` columns. ``` CREATE TRIGGER tgTableBUpdate ON TableB AFTER UPDATE AS BEGIN UPDATE a SET a.Ckeck_In = DATEADD(DD, DATEDIFF(DD, d.Arrives_On, i.Arrives_On), a.Ckeck_In) ,a.check_Out = DATEADD(DD, DATEDIFF(DD, d.Arrives_On, i.Arrives_On), a.check_Out) FROM INSERTED i INNER JOIN DELETED d ON i.Enquiry_Id = d.Enquiry_Id INNER JOIN dbo.TableA a ON i.Enquiry_Id = a.Enquiry_Id END ``` **[SQLFiddle DEMO](http://sqlfiddle.com/#!3/afcbb/4)**
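The trigger above uses SQL Server's `DATEADD`/`DATEDIFF`. As a runnable illustration of the same shift-by-the-difference idea, here is a sketch translated to SQLite (an assumption purely for demonstration — `julianday()` and date-string modifiers stand in for the T-SQL functions, and `OLD`/`NEW` play the role of `DELETED`/`INSERTED` for the single-row update in the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TableA (Enquiry_Id INTEGER, Ckeck_In TEXT, check_Out TEXT, Place TEXT);
CREATE TABLE TableB (Enquiry_Id INTEGER, Arrives_On TEXT);
INSERT INTO TableA VALUES
 (129,'2013-09-20','2013-09-23','Delhi'),
 (130,'2013-08-09','2013-08-10','Agra'),
 (130,'2013-08-10','2013-08-12','Punjab'),
 (130,'2013-08-12','2013-08-13','Haryana');
INSERT INTO TableB VALUES (129,'2013-09-20'),(130,'2013-08-09');

-- Shift TableA's dates by the same number of days that Arrives_On moved.
CREATE TRIGGER tgTableBUpdate AFTER UPDATE ON TableB
BEGIN
    UPDATE TableA
    SET Ckeck_In  = date(Ckeck_In,
          (julianday(NEW.Arrives_On) - julianday(OLD.Arrives_On)) || ' days'),
        check_Out = date(check_Out,
          (julianday(NEW.Arrives_On) - julianday(OLD.Arrives_On)) || ' days')
    WHERE Enquiry_Id = NEW.Enquiry_Id;
END;
""")

# The update from the question: move enquiry 130's arrival forward by 5 days.
conn.execute("UPDATE TableB SET Arrives_On = '2013-08-14' WHERE Enquiry_Id = 130")
rows = conn.execute("SELECT * FROM TableA ORDER BY Enquiry_Id, Ckeck_In").fetchall()
print(rows)
```

All three rows for enquiry 130 shift by 5 days while enquiry 129 is untouched, matching the expected result in the question.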
You can create an update trigger in the database on table B: ``` CREATE TRIGGER trgAfterUpdate ON [dbo].[B] FOR UPDATE AS declare @ArrivesOn datetime; declare @id int; SELECT @id = i.Enquiry_Id, @ArrivesOn = i.Arrives_On from inserted i; -- perform the update/insert here, e.g. -- Insert into A(Enquiry_Id, NewDate) values(@id, @ArrivesOn) GO ```
update date full column on changing only one datafield
[ "", "asp.net", "sql", "sql-server", "sql-server-2008", "c#-4.0", "" ]