How can I write my query to return only rows whose date is today or in the future? In other words, any row whose date has already passed should not be shown. I have this query, but my syntax might be wrong. Please assist me: ``` strSQL = "SELECT * FROM kiosk_directory.tbldetails where start_date='" & today & "' andalso start_date!<='" & today & "';" ``` I want rows whose date equals today, plus rows whose date is not less than today.
``` SELECT * FROM kiosk_directory.tbldetails where start_date >= Cast(Now() As Date) ``` Or use any date literal in an accepted MySQL format. I usually go with 'YYYY-MM-DD' because, for whatever reason, I don't mix up the days and months that way (I grew up learning both MM-DD and DD-MM). ``` SELECT * FROM kiosk_directory.tbldetails where start_date >= Cast('2014-7-1' As Date) ``` Though you don't need the cast then: ``` SELECT * FROM kiosk_directory.tbldetails where start_date >= '2014-7-1' ```
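To see the accepted predicate in action, here is a minimal, reproducible sketch using Python's `sqlite3`. The table name echoes the question, but the schema and sample rows are invented for illustration, and the date column is assumed to hold ISO `YYYY-MM-DD` text so that string comparison matches date order:

```python
import sqlite3
from datetime import date, timedelta

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbldetails (id INTEGER, start_date TEXT)")  # invented schema

today = date.today()
conn.executemany("INSERT INTO tbldetails VALUES (?, ?)", [
    (1, (today - timedelta(days=1)).isoformat()),  # yesterday: should be excluded
    (2, today.isoformat()),                        # today: should be kept
    (3, (today + timedelta(days=1)).isoformat()),  # tomorrow: should be kept
])

# ISO-formatted dates sort correctly as strings, so >= filters out the past.
kept = conn.execute(
    "SELECT id FROM tbldetails WHERE start_date >= ? ORDER BY id",
    (today.isoformat(),),
).fetchall()
```

`kept` contains only the rows dated today or later.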
You haven't said which RDBMS you're using, but this would be the query in MySQL: `SELECT * FROM kiosk_directory.tbldetails where TO_DAYS(start_date) >= TO_DAYS(NOW())` I'm using TO\_DAYS assuming start\_date is a timestamp. If it only contains dates without a time component, then it's not necessary to use TO\_DAYS().
SQL statement for starting date and over
[ "mysql", "sql" ]
I have the following query: ``` SELECT p.CategoryID ,p.Category_Name ,p.IsParent ,p.ParentID ,p.Sort_Order ,p.Active ,p.CategoryID AS sequence FROM tbl_Category p WHERE p.IsParent = 1 UNION SELECT c.CategoryID ,' - ' + c.Category_Name AS Category_Name ,c.IsParent ,c.ParentID ,c.Sort_Order ,c.Active ,c.ParentID as sequence FROM tbl_Category c WHERE c.IsParent = 0 ORDER BY sequence, ParentID, Sort_Order ``` This results in: ``` Parent - child Parent - child - child ``` etc. What I'm finding difficult is getting the results to obey the Sort\_Order so that the Parents are in proper sort order, and the children under those parents are in proper sort order. Right now it's sorting based on the ID of the Parent category. Not sure about advanced grouping or how to handle it.
This is how I would do it, assuming the tree that the parent-child relationship represents is only two levels deep. ``` SELECT * FROM ( SELECT p.CategoryID , p.Category_Name , p.IsParent , p.ParentID , p.Active , p.Sort_Order as Primary_Sort_Order , NULL as Secondary_Sort_Order FROM tbl_Category p WHERE p.IsParent = 1 UNION SELECT c.CategoryID , ' - ' + c.Category_Name AS Category_Name , c.IsParent , c.ParentID , c.Active , a.Sort_Order as Primary_Sort_Order , c.Sort_Order as Secondary_Sort_Order FROM tbl_Category c JOIN tbl_Category a on c.ParentID = a.CategoryID WHERE c.IsParent = 0 AND a.IsParent = 1 ) x ORDER BY Primary_Sort_Order ASC , (CASE WHEN Secondary_Sort_Order IS NULL THEN 0 ELSE 1 END) ASC , Secondary_Sort_Order ASC ``` Primary\_Sort\_Order orders the parents and its children as a group first. Then within the primary group, enforce NULL values of Secondary\_Sort\_Order to come first, and afterwards order by regular non-NULL values of Secondary\_Sort\_Order.
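A runnable sketch of this two-level ordering using Python's `sqlite3` (SQLite uses `||` for string concatenation instead of SQL Server's `+`; the sample categories are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE tbl_Category "
    "(CategoryID INTEGER, Category_Name TEXT, IsParent INTEGER,"
    " ParentID INTEGER, Sort_Order INTEGER)"
)
conn.executemany("INSERT INTO tbl_Category VALUES (?,?,?,?,?)", [
    (1, "B", 1, None, 2),   # parent, sorts second
    (2, "A", 1, None, 1),   # parent, sorts first
    (3, "B1", 0, 1, 1),     # child of B
    (4, "A2", 0, 2, 2),     # child of A, sorts after A1
    (5, "A1", 0, 2, 1),     # child of A, sorts first
])

# Parents carry a NULL secondary key so each one sorts ahead of its children.
names = [r[0] for r in conn.execute("""
    SELECT Category_Name FROM (
        SELECT p.Category_Name, p.Sort_Order AS primary_so, NULL AS secondary_so
        FROM tbl_Category p WHERE p.IsParent = 1
        UNION ALL
        SELECT ' - ' || c.Category_Name, a.Sort_Order, c.Sort_Order
        FROM tbl_Category c JOIN tbl_Category a ON c.ParentID = a.CategoryID
        WHERE c.IsParent = 0
    ) x
    ORDER BY primary_so,
             CASE WHEN secondary_so IS NULL THEN 0 ELSE 1 END,
             secondary_so
""").fetchall()]
```

`names` comes back as each parent followed immediately by its children, both levels in sort order.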
After a lot of searching I have come up with a solution for problems like this; it should resolve your issue. Here I add another sort-order column which selects the same sort order as the record's parent. Using this secondary sort order, the records are ordered first, and then the child records are grouped under their parent. ``` CASE WHEN ParentID = 0 THEN CategoryID ELSE ParentID END, CASE WHEN ParentID = 0 THEN 0 ELSE Primary_Sort_Order END, ``` This is what groups the child rows under their parent. The final solution looks like: ``` SELECT * FROM ( SELECT p.CategoryID , p.Category_Name , p.IsParent , p.ParentID , p.Active , p.Sort_Order AS Primary_Sort_Order , CASE WHEN p.IsParent = 0 THEN (SELECT Sort_Order FROM tbl_Category WHERE CategoryID = p.ParentID) ELSE p.Sort_Order END AS Secondary_Sort_Order FROM tbl_Category p ) x ORDER BY Secondary_Sort_Order, CASE WHEN ParentID = 0 THEN CategoryID ELSE ParentID END, CASE WHEN ParentID = 0 THEN 0 ELSE Primary_Sort_Order END, Primary_Sort_Order ASC ``` Hope it helps!
SQL Order by, parent, child, and sort order
[ "sql", "sql-server", "sql-server-2008" ]
I found some weird behaviour with Active Record's pluck. My query is ``` Friend.joins('INNER JOIN users ON friends.friend_id = users.id').where("user_id=? AND (status=? or status=?)", 4,"true","").pluck("users.first_name, users.last_name") ``` It joins friends with users to get each user's first name and last name. The generated SQL command is ``` SELECT users.first_name, users.last_name FROM "friends" INNER JOIN users ON friends.friend_id = users.id WHERE (user_id=4 AND (status='true' or status='')) ``` If I run the above command in the sqlite browser tool I get a response like ```
first_name  last_name
user4       y
user5       y
``` but from the command line with pluck I get ``` ["y", "y"] ``` and from the command line with find\_by\_sql ``` [#<Friend >, #<Friend >] ``` What's wrong in my code, or is it a problem with pluck and find\_by\_sql? How can I resolve the problem? Thanks in advance.
config/initializers/pluck\_all.rb ```
module ActiveRecord
  class Relation
    def pluck_all(*args)
      args.map! do |column_name|
        if column_name.is_a?(Symbol) && column_names.include?(column_name.to_s)
          "#{connection.quote_table_name(table_name)}.#{connection.quote_column_name(column_name)}"
        else
          column_name.to_s
        end
      end

      relation = clone
      relation.select_values = args
      klass.connection.select_all(relation.arel).map! do |attributes|
        initialized_attributes = klass.initialize_attributes(attributes)
        attributes.each do |key, attribute|
          attributes[key] = klass.type_cast_attribute(key, initialized_attributes)
        end
      end
    end
  end
end
``` ``` Friend.joins('INNER JOIN users ON friends.friend_id = users.id').where("user_id=? AND (status=? or status=?)", 4,"true","").pluck_all("users.first_name","users.last_name") ``` resolves my issue, it's purely a pluck problem. thanks for a [great tutorial](http://meltingice.net/2013/06/11/pluck-multiple-columns-rails/)
If you're using Rails 4, then instead of ``` .pluck("users.first_name, users.last_name") ``` try ``` .pluck("users.first_name", "users.last_name") ``` In Rails 3 you'll want to use select to fetch those specific fields: ``` .select("users.first_name", "users.last_name") ```
Retrieving multiple records/columns using joins and pluck
[ "sql", "ruby-on-rails", "activerecord", "join", "pluck" ]
I am asking for your help with a join I have to do. I have a field **Date** formatted as DD/MM/YYYY. I have to join it to another table on a field with data like **A14**, meaning year 2014 for example. How can I take only the last two characters of the **Date** field, prepend the character **A**, and use the result to join to my other table? Thanks for your help!
If your date field is of DATE data type, then ``` SELECT 'A' || TO_CHAR (DATEFIELD, 'YY') FROM MY_TABLE; ``` **Example:** ``` SELECT 'A' || TO_CHAR (SYSDATE, 'YY') FROM MY_TABLE; ``` But I assume from your example that DATE is a string in format DD/MM/YYYY. You can try something like, 1. Convert the string to a date and get the year alone: `SELECT * FROM OTHER_TABLE WHERE FIELD = (SELECT 'A' || TO_CHAR (TO_DATE(DATEFIELD,'DD/MM/YYYY'), 'YY') FROM MY_TABLE);` 2. Just use the substr string function as below: `SELECT * FROM OTHER_TABLE WHERE FIELD = (SELECT 'A' || substr (DATEFIELD, -2) FROM MY_TABLE);` For using a join, you can do something like: ``` SELECT * FROM MY_TABLE, OTHER_TABLE WHERE OTHER_TABLE.FIELD = ('A' || SUBSTR(MY_TABLE.DATEFIELD, -2)); ``` or ``` SELECT * FROM MY_TABLE JOIN OTHER_TABLE ON OTHER_TABLE.FIELD = ('A' || SUBSTR(MY_TABLE.DATEFIELD, -2)); ```
As an alternative to concatenation, and if your date field is actually a `DATE` not a string, you can include fixed values in a date format model if they're in double-quotes: ``` select to_char(sysdate, '"A"YY') from dual; TO_ --- A14 ``` So to join with that you could do: ``` select <columns> from my_table join other_table on other_table.string_column = to_char(my_table.date_column, '"A"YY'); ```
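The `substr`-based variant can also be sketched with Python's `sqlite3`, which supports `||` concatenation and negative `substr` offsets much like Oracle (table names and rows here are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (datefield TEXT)")            # DD/MM/YYYY strings
conn.execute("CREATE TABLE other_table (field TEXT, label TEXT)")
conn.execute("INSERT INTO my_table VALUES ('01/07/2014')")
conn.executemany("INSERT INTO other_table VALUES (?, ?)",
                 [("A14", "year 2014"), ("A13", "year 2013")])

# substr with a negative start counts from the end of the string,
# so substr('01/07/2014', -2) is '14' and the join key becomes 'A14'.
row = conn.execute(
    "SELECT o.label FROM my_table m "
    "JOIN other_table o ON o.field = 'A' || substr(m.datefield, -2)"
).fetchone()
```

The join matches only the `A14` row, as expected.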
SQL Oracle - YY from date and add character
[ "sql", "oracle", "join" ]
I want to display `department_id`s along with a count, where the count is more than 5, and I only want employees who were not hired in January. I tried the query below: ``` SELECT * FROM EMPLOYEES WHERE DEPARTMENT_ID IN ( SELECT DEPARTMENT_ID FROM EMPLOYEES GROUP BY DEPARTMENT_ID HAVING COUNT(*)>5 ) AND HIRE_DATE NOT LIKE '%JAN%'; ``` but it doesn't return the count, and I want the count as well.
```
SELECT department_ID, count(employee_id) AS "# of Employees"
FROM EMPLOYEES
WHERE DEPARTMENT_ID IN (
    SELECT DEPARTMENT_ID
    FROM EMPLOYEES
    GROUP BY DEPARTMENT_ID
    HAVING COUNT(*) > 5
)
AND HIRE_DATE NOT LIKE '%JAN%'
GROUP BY department_ID;
``` This query returns the department\_id, and because it groups by department\_id, the count of employees belonging to each department is returned as well. The output will look something like this: ```
Department_Id | # of Employees
1             | 7
2             | 6
4             | 9
```
If you want the dept id and the count of employees (where the employee hire date is not in January) then something like the following should work. I say "something like the following" because I suspect the `WHERE hire_date NOT LIKE '%JAN%'` could be improved, but that depends on the format of the column. ```
SELECT DEPARTMENT_ID, COUNT(*)
FROM EMPLOYEES
WHERE HIRE_DATE NOT LIKE '%JAN%'
GROUP BY DEPARTMENT_ID
HAVING COUNT(*) > 5;
``` If you also want to list the individual employees along with these departments, then something like this might work: ```
SELECT a.*, b.emp_count
FROM EMPLOYEES a
INNER JOIN (
    SELECT DEPARTMENT_ID, COUNT(*) AS emp_count
    FROM EMPLOYEES
    WHERE HIRE_DATE NOT LIKE '%JAN%'
    GROUP BY DEPARTMENT_ID
    HAVING COUNT(*) > 5) b
ON a.department_id = b.department_id
WHERE a.HIRE_DATE NOT LIKE '%JAN%';
``` Again, though, I think you can leverage your schema to improve the WHERE clause on HIRE\_DATE. A LIKE/NOT LIKE clause is generally going to be pretty slow.
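The group-and-filter pattern can be demonstrated with Python's `sqlite3`. To keep the toy data small, the threshold is lowered to 2 and the hire month is stored as a number rather than filtering a date string with LIKE; everything here is invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE employees "
    "(employee_id INTEGER, department_id INTEGER, hire_month INTEGER)"
)
# dept 10: three non-January hires; dept 20: one non-January and one January hire
conn.executemany("INSERT INTO employees VALUES (?,?,?)",
                 [(1, 10, 3), (2, 10, 5), (3, 10, 7), (4, 20, 2), (5, 20, 1)])

# Filter out January first, then group and keep only large departments.
result = conn.execute(
    "SELECT department_id, COUNT(*) FROM employees "
    "WHERE hire_month <> 1 "
    "GROUP BY department_id HAVING COUNT(*) > 2"
).fetchall()
```

Only department 10 survives the HAVING filter, together with its count.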
Employees department-wise with count of employees more than 5
[ "sql", "oracle" ]
I need to set the `CONTEXT_INFO` variable in SQL Server to a user session's unique identifier, the GUID of the user's session to be exact. But I cannot perform this operation in a more or less short and clean one-liner. I'm obliged to create a new binary variable and only then assign it to `CONTEXT_INFO`. It looks like this: ``` DECLARE @sessionId binary(16) = CAST(CAST('A53BEEF9-4AFF-937A-857A-2C27B845B755' AS uniqueidentifier) AS binary(16)) SET CONTEXT_INFO @sessionId ``` Is it possible to put everything into a one-line statement? A straightforward attempt like the following: ``` SET CONTEXT_INFO CAST(CAST('A53BD5F9-4AFF-E211-857A-2C27D745B005' AS uniqueidentifier) AS binary(16)) ``` does not work, unfortunately, and I cannot work out the reason for this behavior. **EDIT:** `sessionId` will be generated on the fly, so hard-coding a constant binary value won't work...
According to: <http://msdn.microsoft.com/en-us/library/ms187768.aspx> "SET CONTEXT\_INFO does not accept expressions other than constants or variable names. To set the context information to the result of a function call, you must first include the result of the function call in a binary or varbinary variable. "
Transact-SQL is a *very* simple language. In most modern languages, you'd expect to be able to supply an arbitrary expression in most locations where a simple value was valid - but that's not the case with T-SQL. So, if we look at the syntax diagram for [`SET CONTEXT_INFO`](http://msdn.microsoft.com/en-us/library/ms187768.aspx): ``` SET CONTEXT_INFO { binary_str | @binary_var } ``` You are allowed to supply exactly one of: ``` binary_str ``` > Is a **binary** constant, or a constant that is implicitly convertible to **binary**, to associate with the current session or connection. ``` @binary_var ``` > Is a **varbinary** or **binary** variable holding a context value to associate with the current session or connection. And not any arbitrary expression that happens to *compute* a binary value. --- Of course, without the `CAST`s, this would be a constant, so you could just write: ``` SET CONTEXT_INFO 0xF9EE3BA5FF4A7A93857A2C27B845B755 ```
Why can't I set the T-SQL CONTEXT_INFO variable in a one-line statement?
[ "sql", "sql-server" ]
I want to check if table A has any entries, and ONLY THEN check whether table A's column value is present in table B. If there is no data in TableA then just get the data from TableB. In other words, I want an EXISTS clause on the TableB select query if and only if there is data in TableA; otherwise it should be just a plain select query on TableB. An inner join won't work, because there is a possibility that TableA has no data at all; not even a left join will. How do I do that in a single query? Something like this: ``` select Id from TableB where if( select count(*) from TableA ) > 0 then Id in (select col from TableA) ```
``` select b.* from tableB b, tableA a Where b.id = a.id ``` A simple join is enough for this. Another way is using `EXISTS` and putting your conditions inside it: ``` select * from tableB b Where exists (Select 'x' from tableA a Where a.id = b.id) ``` **EDIT:** I hope this is what you need: ```
/* first part returns results only when tableA has data */
select b.* from tableB b, tableA a Where b.id = a.id
Union
/* Now we explicitly check for the presence of data in tableA;
   only if there is none do we query tableB */
Select * from tableB Where not exists (select 'x' from tableA)
``` Only one part of the query will return results at a time, and since both parts produce the same result shape, the `UNION` set operation makes this work. If you want condition-based results, PL/SQL could be a great choice: ```
VARIABLE MYCUR REFCURSOR;  /* defines a ref cursor in SQL*Plus */

DECLARE
  V_CHECK NUMBER;
BEGIN
  SELECT COUNT(*) INTO V_CHECK FROM TABLEA;
  IF (V_CHECK > 0) THEN
    OPEN :MYCUR FOR
      select b.* from tableB b, tableA a Where b.id = a.id;
  ELSE
    OPEN :MYCUR FOR
      select * from tableB;
  END IF;
END;
/

/* prints the cursor result in SQL*Plus */
print :MYCUR
```
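The UNION trick can be verified with Python's `sqlite3` (toy tables invented for illustration): when tableA is empty the NOT EXISTS branch returns all of tableB, and once tableA has data only the join branch produces rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_a (id INTEGER)")
conn.execute("CREATE TABLE table_b (id INTEGER)")
conn.executemany("INSERT INTO table_b VALUES (?)", [(1,), (2,), (3,)])

query = """
    SELECT b.id FROM table_b b JOIN table_a a ON b.id = a.id
    UNION
    SELECT id FROM table_b WHERE NOT EXISTS (SELECT 1 FROM table_a)
    ORDER BY id
"""

empty_case = conn.execute(query).fetchall()     # table_a empty: all of table_b

conn.execute("INSERT INTO table_a VALUES (2)")
filtered_case = conn.execute(query).fetchall()  # table_a populated: matches only
```

Exactly one branch of the UNION contributes rows in each case.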
``` select b.id from table_a a , table_b b where a.col = b.id and a.col_val = 'XXX'; ```
SQL: IF condition in WHERE clause
[ "sql", "oracle" ]
I have a `users` table ``` Table "public.users" Column | Type | Modifiers --------+---------+----------- user_id | integer | style | boolean | id | integer | ``` and an `access_rights` table ``` Table "public.access_rights" Column | Type | Modifiers --------+---------+----------- user_id | integer | id | integer | ``` I have a query joining users on access right and I want to count the number of values in the style column that are true. From this answer: [postgresql - sql - count of `true` values](https://stackoverflow.com/questions/5396498/postgresql-sql-count-of-true-values), I tried doing ``` SELECT COUNT( CASE WHEN style THEN 1 ELSE null END ) from users join access_rights on access_rights.user_id = users.user_id ; ``` But that counts duplicate values when a user has multiple rows for access\_rights. How can I count values only once when using a join?
If you are interested in the number of users that have *at least* 1 row with (`style IS TRUE`) in `access_rights`, aggregate `access_rights` before you join: ```
SELECT count(style OR NULL) AS style_ct
FROM   users
JOIN  (
   SELECT user_id, bool_or(style) AS style
   FROM   access_rights
   GROUP  BY 1
   ) u USING (user_id);
``` Using `JOIN`, since users without any entries in `access_rights` don't count in this case. Using the aggregate function [`bool_or()`](http://www.postgresql.org/docs/current/interactive/functions-aggregate.html). **Or** even simpler: ```
SELECT count(*) AS style_ct
FROM  (
   SELECT user_id
   FROM   access_rights
   GROUP  BY 1
   HAVING bool_or(style)
   ) sub;
``` This is assuming a foreign key enforcing referential integrity, so there is no `access_rights.user_id` without a corresponding row in `users`. Also assuming no `NULL` values in `access_rights.user_id`, which would increase the count by 1 - and can be countered by using `count(user_id)` instead of `count(*)`. **Or** (if that assumption is not true) use an `EXISTS` semi-join: ```
SELECT count(
          EXISTS (
             SELECT 1
             FROM   access_rights
             WHERE  user_id = u.user_id
             AND    style  -- boolean value evaluates on its own
             ) OR NULL
       )
FROM   users u;
``` I am using the capabilities of true `boolean` values to simplify the count and the `WHERE` clause. Details: [Compute percents from SUM() in the same SELECT sql query](https://stackoverflow.com/questions/15644713/compute-percents-from-sum-in-the-same-select-sql-query/15650897#15650897)
Try something like this (per [the documentation](http://www.postgresql.org/docs/7.4/static/datatype-boolean.html)): ``` select sum( case style when TRUE then 1 else 0 end ) as style_count from public.users u join public.access_rights ar on ar.user_id = u.user_id ``` Or, considering your problem statement, "*I want to count the number of values in the style column that are true*", you could do this: ``` select count(*) as style_count from public.users u join public.access_rights ar on ar.user_id = u.user_id where u.style = TRUE ``` **Edited To Note:** On re-reading your question, it sounds like what you really want is a count of distinct users whose `style` attribute is `true` and who have an access right. You can get that with: ``` select count(distinct u.user_id) from public.users u join public.access_rights ar on ar.user_id = u.user_id where u.style = TRUE ; ``` Another way to get there would be like this: ``` select count(*) from public.users u where u.style = TRUE and exists ( select * from public.access_rights ar where ar.user_id = u.user_id ) ``` I would vote for the latter as it more clearly shows your intent.
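A quick sketch of the duplicate-counting pitfall and the `COUNT(DISTINCT ...)` fix, using Python's `sqlite3`. SQLite has no boolean type or `bool_or()`, so `style` is stored as 0/1 here, and all data is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (user_id INTEGER, style INTEGER)")
conn.execute("CREATE TABLE access_rights (user_id INTEGER, id INTEGER)")
conn.executemany("INSERT INTO users VALUES (?,?)", [(1, 1), (2, 1), (3, 0)])
# user 1 has two access_rights rows, so a plain COUNT over the join sees it twice
conn.executemany("INSERT INTO access_rights VALUES (?,?)",
                 [(1, 10), (1, 11), (2, 12)])

naive = conn.execute(
    "SELECT COUNT(*) FROM users u JOIN access_rights ar "
    "ON ar.user_id = u.user_id WHERE u.style = 1"
).fetchone()[0]

dedup = conn.execute(
    "SELECT COUNT(DISTINCT u.user_id) FROM users u JOIN access_rights ar "
    "ON ar.user_id = u.user_id WHERE u.style = 1"
).fetchone()[0]
```

The naive count inflates to 3 because of the duplicated join rows, while the distinct count gives the intended 2.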
SQL counting unique true values on join table
[ "sql", "postgresql", "join", "count", "aggregate-functions" ]
There is a procedure which tries to fetch the details of project(s) from the PROJECTS table. The snippet goes here: ``` PROCEDURE GET_PROJECTS ( P_PROJECT_ID_LIKE IN VARCHAR2 DEFAULT '%', P_SEPARATOR IN VARCHAR2 DEFAULT '-=-' ) AS CURSOR PROJECTS_CURSOR IS ..... WHERE PROJECT_ID LIKE P_PROJECT_ID_LIKE ``` The concern is: ``` PROJECT_ID has a datatype - NUMBER. P_PROJECT_ID_LIKE has a datatype - VARCHAR2. ``` I am wondering how LIKE can be used on PROJECT\_ID? It works perfectly fine for ``` GET_PROJECTS('%','-=-'); GET_PROJECTS('28','-=-') ``` Any insight would be a great help!
An implicit type conversion will take place. Notice `1 - filter(TO_CHAR("N") LIKE 'asdf%')` in the predicate information section. ```
15:13:51 (133)LKU@sandbox> create table t (n number);

Table created.

Elapsed: 00:00:00.10
15:14:22 (133)LKU@sandbox> select * from t where n like 'asdf%'
15:14:37   2
15:14:37 (133)LKU@sandbox> @xplan

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id | Operation         | Name | Rows | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|  0 | SELECT STATEMENT  |      |    1 |    13 |     2   (0)| 00:00:01 |
|* 1 |  TABLE ACCESS FULL| T    |    1 |    13 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$1 / T@SEL$1

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(TO_CHAR("N") LIKE 'asdf%')

Column Projection Information (identified by operation id):
-----------------------------------------------------------
   1 - "N"[NUMBER,22]
``` In any case, it doesn't make much sense to filter identifiers using the `like` operator. If you want to get all values when a certain condition is met, then you should probably do it sort of this way: ``` where project_id = P_PROJECT_ID or P_PROJECT_ID = -1 ``` using basically any numeric value that is not a valid project id.
As @be here now suggested, an implicit conversion will take place. This will work 99% of the time, but there are some caveats when reaching big numbers. Take this scenario, for example: ```
SQL> select to_char(power(2,140)) from dual;

TO_CHAR(POWER(2,140))
----------------------------------------
1.3937965749081639463459823920405226E+42
``` The number was converted to char with exponential notation, so some strings might not match. If you don't reach these numbers you should be fine. Although this is an Oracle question, take some advice from the [Zen of Python](http://legacy.python.org/dev/peps/pep-0020/) > Explicit is better than implicit.
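SQLite performs the same kind of implicit number-to-text conversion for LIKE, so the behaviour discussed above is easy to reproduce with Python's `sqlite3` (sample ids invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE projects (project_id INTEGER)")
conn.executemany("INSERT INTO projects VALUES (?)", [(28,), (280,), (31,)])

# The numeric column is implicitly converted to text before LIKE is applied,
# so both 28 and 280 match the pattern '28%'.
matches = conn.execute(
    "SELECT project_id FROM projects "
    "WHERE project_id LIKE '28%' ORDER BY project_id"
).fetchall()
```

This also shows why LIKE on an id column is risky: the pattern matches more ids than the exact value intended.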
Is it allowed to use LIKE for NUMBER datatype?
[ "sql", "oracle", "sql-like" ]
I once made a silly mistake at work on one of our in-house test databases. I was updating a record I had just added (because I made a typo), but it resulted in many records being updated: in the WHERE clause I had used the foreign key instead of the unique id of the particular record I had just added. One of our senior developers told me to do a SELECT first to test which rows will be affected before actually editing them. Besides this, is there a way I can execute my query and see the results, but not have it commit to the db until I tell it to do so? Next time I might not be so lucky. It's a good job only senior developers can do live updates!
First, assume you will make a mistake when updating a db, so never do it unless you know how to recover; if you don't, don't run the code until you do. The most important idea is that it is a dev database: expect it to get messed up, so make sure you have a quick way to reload it. Doing a SELECT first is always a good idea to see which rows are affected. However, for a quicker way back to a good state of the database, which I would do anyway for a simple update etc., use transactions. Do a `begin transaction`, then do all the updates etc., and then select to check the data. The database will not be affected, as far as others can see, until you do a final commit, which you only do when you are sure all is correct, or a rollback to get back to the state at the beginning.
It seems to me that you just need to get into the habit of opening a transaction: ```
BEGIN TRANSACTION;

UPDATE [TABLENAME]
SET [Col1] = 'something', [Col2] = '..'
OUTPUT DELETED.*, INSERTED.*  -- So you can see what your update did
WHERE ....;

ROLLBACK;
``` Than you just run again after seeing the results, changing ROLLBACK to COMMIT, and you are done! If you are using Microsoft SQL Server Management Studio you can go to `Tools > Options... > Query Execution > ANSI > SET IMPLICIT_TRANSACTIONS` and SSMS will open the transaction automatically for you. Just dont forget to commit when you must and that you may be blocking other connections while you dont commit / rollback close the connection.
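The inspect-then-rollback workflow can be rehearsed with Python's `sqlite3`, whose connections keep an implicit transaction open for DML until you commit or roll back (toy table invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?,?)", [(1, 100), (2, 200)])
conn.commit()  # the known-good state

# Run the risky UPDATE inside the open transaction and inspect the result.
conn.execute("UPDATE accounts SET balance = 0")  # oops: no WHERE clause
affected = conn.execute(
    "SELECT COUNT(*) FROM accounts WHERE balance = 0"
).fetchone()[0]

# Both rows were hit, which is not what we wanted, so undo it.
conn.rollback()
restored = conn.execute("SELECT balance FROM accounts ORDER BY id").fetchall()
```

After the rollback, the table is back to its committed state.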
How to test your query before running it in SQL Server
[ "sql", "sql-server" ]
I'm not sure if what I'm looking to do is possible with a union, or if I need to use a nested query and join of some sort. ``` select c1,c2 from t1 union select c1,c2 from t2 // with some sort of condition where t1.c1 = t2.c1 ``` Example: ```
t1
| 100 | regular |
| 200 | regular |
| 300 | regular |
| 400 | regular |

t2
| 100 | summer |
| 200 | summer |
| 500 | summer |
| 600 | summer |

Desired Result
| 100 | regular |
| 100 | summer |
| 200 | regular |
| 200 | summer |
``` I've tried something like: ``` select * from (select * from t1) as q1 inner join (select * from t2) as q2 on q1.c1 = q2.c1 ``` But that joins the records into a single row like this: ```
| 100 | regular | 100 | summer |
| 200 | regular | 200 | summer |
```
Try: ``` select c1, c2 from t1 where c1 in (select c1 from t2) union all select c1, c2 from t2 where c1 in (select c1 from t1) ``` Based on your edit, try the query below. MySQL doesn't have the WITH clause, which would allow you to refer to your t1 and t2 subqueries multiple times. You might want to create both t1 and t2 as views in your database so that you can refer to them as t1 and t2 multiple times throughout a single query. Even still, the query below honestly looks very bad and could probably be optimized a lot if we knew your database structure, i.e. a list of the tables, all columns on each table and their data types, a few example rows from each, and your expected result. For instance, in your t1 subquery you have an outer join with the LESSON table, but then you have criteria in your WHERE clause (lesson.dayofweek >= 0) which would not allow for nulls, effectively turning your outer join into an inner join. Also, you have subqueries that only check for the existence of studentid using criteria that would suggest several of the tables used don't actually need to be used to produce your desired result. However, without knowing your database structure and some example data with an expected result, it's hard to advise further. Even still, I believe the below will probably get you what you want, just not optimally.
```
select * from
  (select distinct students.student_number as "StudentID",
          concat(students.first_name, ' ', students.last_name) as "Student",
          general_program_types.general_program_name as "Program Category",
          program_inventory.program_code as "Program Code",
          std_lesson.studio_name as "Studio",
          concat(teachers.first_name, ' ', teachers.last_name) as "Teacher"
   from lesson_student
   left join lesson on lesson_student.lesson_id = lesson.lesson_id
   left join lesson_summer on lesson_student.lesson_id = lesson_summer.lesson_id
   inner join students on lesson_student.student_number = students.student_number
   inner join studio as std_primary on students.primary_location_id = std_primary.studio_id
   inner join studio as std_lesson on (lesson.studio_id = std_lesson.studio_id or lesson_summer.studio_id = std_lesson.studio_id)
   inner join teachers on (lesson.teacher_id = teachers.teacher_id or lesson_summer.teacher_id = teachers.teacher_id)
   inner join lesson_program on lesson_student.lesson_id = lesson_program.lesson_id
   inner join program_inventory on lesson_program.program_code_id = program_inventory.program_code_id
   inner join general_program_types on program_inventory.general_program_id = general_program_types.general_program_id
   inner join accounts on students.ACCOUNT_NUMBER = accounts.ACCOUNT_NUMBER
   inner join account_contacts on students.ACCOUNT_NUMBER = account_contacts.ACCOUNT_NUMBER
   /** NOTE: the WHERE condition is the only **/
   /** difference between subquery1 & subquery2 **/
   where lesson.dayofweek >= 0
   order by students.STUDENT_NUMBER) t1
where StudentID in
  (select StudentID from
    (select distinct students.student_number as "StudentID",
            concat(students.first_name, ' ', students.last_name) as "Student",
            general_program_types.general_program_name as "Program Category",
            program_inventory.program_code as "Program Code",
            std_lesson.studio_name as "Studio",
            concat(teachers.first_name, ' ', teachers.last_name) as "Teacher"
     from lesson_student
     left join lesson on lesson_student.lesson_id = lesson.lesson_id
     left join lesson_summer on lesson_student.lesson_id = lesson_summer.lesson_id
     inner join students on lesson_student.student_number = students.student_number
     inner join studio as std_primary on students.primary_location_id = std_primary.studio_id
     inner join studio as std_lesson on (lesson.studio_id = std_lesson.studio_id or lesson_summer.studio_id = std_lesson.studio_id)
     inner join teachers on (lesson.teacher_id = teachers.teacher_id or lesson_summer.teacher_id = teachers.teacher_id)
     inner join lesson_program on lesson_student.lesson_id = lesson_program.lesson_id
     inner join program_inventory on lesson_program.program_code_id = program_inventory.program_code_id
     inner join general_program_types on program_inventory.general_program_id = general_program_types.general_program_id
     inner join accounts on students.ACCOUNT_NUMBER = accounts.ACCOUNT_NUMBER
     inner join account_contacts on students.ACCOUNT_NUMBER = account_contacts.ACCOUNT_NUMBER
     /** NOTE: the WHERE condition is the only **/
     /** difference between subquery1 & subquery2 **/
     where lesson_summer.dayofweek >= 0
     order by students.STUDENT_NUMBER) t2)
UNION ALL
select * from
  (select distinct students.student_number as "StudentID",
          concat(students.first_name, ' ', students.last_name) as "Student",
          general_program_types.general_program_name as "Program Category",
          program_inventory.program_code as "Program Code",
          std_lesson.studio_name as "Studio",
          concat(teachers.first_name, ' ', teachers.last_name) as "Teacher"
   from lesson_student
   left join lesson on lesson_student.lesson_id = lesson.lesson_id
   left join lesson_summer on lesson_student.lesson_id = lesson_summer.lesson_id
   inner join students on lesson_student.student_number = students.student_number
   inner join studio as std_primary on students.primary_location_id = std_primary.studio_id
   inner join studio as std_lesson on (lesson.studio_id = std_lesson.studio_id or lesson_summer.studio_id = std_lesson.studio_id)
   inner join teachers on (lesson.teacher_id = teachers.teacher_id or lesson_summer.teacher_id = teachers.teacher_id)
   inner join lesson_program on lesson_student.lesson_id = lesson_program.lesson_id
   inner join program_inventory on lesson_program.program_code_id = program_inventory.program_code_id
   inner join general_program_types on program_inventory.general_program_id = general_program_types.general_program_id
   inner join accounts on students.ACCOUNT_NUMBER = accounts.ACCOUNT_NUMBER
   inner join account_contacts on students.ACCOUNT_NUMBER = account_contacts.ACCOUNT_NUMBER
   /** NOTE: the WHERE condition is the only **/
   /** difference between subquery1 & subquery2 **/
   where lesson_summer.dayofweek >= 0
   order by students.STUDENT_NUMBER) t3
where StudentID in
  (select StudentID from
    (select distinct students.student_number as "StudentID",
            concat(students.first_name, ' ', students.last_name) as "Student",
            general_program_types.general_program_name as "Program Category",
            program_inventory.program_code as "Program Code",
            std_lesson.studio_name as "Studio",
            concat(teachers.first_name, ' ', teachers.last_name) as "Teacher"
     from lesson_student
     left join lesson on lesson_student.lesson_id = lesson.lesson_id
     left join lesson_summer on lesson_student.lesson_id = lesson_summer.lesson_id
     inner join students on lesson_student.student_number = students.student_number
     inner join studio as std_primary on students.primary_location_id = std_primary.studio_id
     inner join studio as std_lesson on (lesson.studio_id = std_lesson.studio_id or lesson_summer.studio_id = std_lesson.studio_id)
     inner join teachers on (lesson.teacher_id = teachers.teacher_id or lesson_summer.teacher_id = teachers.teacher_id)
     inner join lesson_program on lesson_student.lesson_id = lesson_program.lesson_id
     inner join program_inventory on lesson_program.program_code_id = program_inventory.program_code_id
     inner join general_program_types on program_inventory.general_program_id = general_program_types.general_program_id
     inner join accounts on students.ACCOUNT_NUMBER = accounts.ACCOUNT_NUMBER
     inner join account_contacts on students.ACCOUNT_NUMBER = account_contacts.ACCOUNT_NUMBER
     /** NOTE: the WHERE condition is the only **/
     /** difference between subquery1 & subquery2 **/
     where lesson.dayofweek >= 0
     order by students.STUDENT_NUMBER) t4);
```
A `UNION` is okay. You want to run the same join **twice**, except that the first time you take the left side and the second time the right side. ``` SELECT t1.* FROM t1 JOIN t2 USING (c1) UNION SELECT t2.* FROM t1 JOIN t2 USING (c1) ``` Of course, if at all possible, you could run a single query, save the left side in memory, display the right side, then append the saved left side at the end. It needs much more memory than a cursor, but runs the query in half the time (somewhat less actually, due to disk and resource caching). See the sample SQLFiddle [here](http://sqlfiddle.com/#!2/7b632/2).
Adding conditions to a UNION in MySQL
[ "mysql", "sql" ]
Having executed a DB deploy (from a VS SQL Server database project) on a local database, which failed, the database has been left in a state where it has single user mode left on (the deploy runs as single user mode). When I connect to it from SSMS and try something like the following: ``` ALTER DATABASE MyDatabase SET MULTI_USER; GO ``` I get the error: > Changes to the state or options of database 'MyDatabase' cannot be made at this time. The database is in single-user mode, and a user is currently connected to it. I tried taking the database offline, which SSMS tells me succeeds, but it doesn't appear to actually do anything. So far, I've only been able to get around this by dropping and recreating the database (which is kind of okay, because it's only a local test database). However, I'd like to be able to reset the status. How can I convince SQL Server to take this database out of single user mode?
First, run the following query in the master database:

```
exec sp_who
```

If you can't find the culprit, try:

```
SELECT request_session_id
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID('YourDatabase')
```

Then kill every process that is using your database:

```
KILL spid
```

Then run the following:

```
USE Master
ALTER DATABASE YourDatabase SET MULTI_USER
```
Try the commands below. First run these three:

```
USE [master]
SET DEADLOCK_PRIORITY HIGH
exec sp_dboption MyDBName, 'single user', 'FALSE';
```

Then run these two:

```
ALTER DATABASE MyDBName SET MULTI_USER WITH NO_WAIT
ALTER DATABASE MyDBName SET MULTI_USER WITH ROLLBACK IMMEDIATE
```
SQL Server 2008 R2 Stuck in Single User Mode
[ "", "sql", "sql-server", "database", "sql-server-2008", "database-deployment", "" ]
I want to update one column for all rows in my table with this calculation: the column's current value multiplied by a random number between 0.9 and 1.2.

```
UPDATE TABLE
SET ABC = (SELECT ABC*(RAND()*((1.2-0.9)+0.9)) FROM TABLE)
```

I don't know how to write this in proper T-SQL, and I didn't get the expected result in SQL Server 2008 R2 with the query above. I need your help to solve this. Alternatively, you can give me another solution using a cursor.
```
SET NOCOUNT ON

DECLARE @ABC_ID INT, @RANDOM_GEN_NO VARCHAR(50)

DECLARE ABC_CURSOR CURSOR FOR
    SELECT ABC_ID, RANDOM_GEN_NO
    FROM ABCTABLE
    FOR UPDATE OF ABC

OPEN ABC_CURSOR
FETCH NEXT FROM ABC_CURSOR INTO @ABC_ID, @RANDOM_GEN_NO

WHILE (@@FETCH_STATUS = 0)
BEGIN
    -- RAND() is called once per UPDATE, so each row gets its own multiplier
    UPDATE ABCTABLE
    SET ABC = ABC * (RAND() * (1.2 - 0.9) + 0.9)
    WHERE CURRENT OF ABC_CURSOR

    FETCH NEXT FROM ABC_CURSOR INTO @ABC_ID, @RANDOM_GEN_NO
END

CLOSE ABC_CURSOR
DEALLOCATE ABC_CURSOR

SET NOCOUNT OFF
```

Or, without a cursor, a plain UPDATE statement works too: `UPDATE ABCTable SET ABC = ABC * yourcalculation`. There is no need for a SELECT inside an UPDATE statement.
I think you have a parenthesis problem (in the `RAND()` "range") rather than a problem with how to write the query (if I understood well):

```
update table
set abc = abc * (RAND() * (1.2 - 0.9) + 0.9)
```

**CAUTION**: the "random" multiplier will be the same for all the rows updated by this statement, as noticed by Damien_The_Unbeliever.
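For what it's worth, here is a small runnable sketch of the corrected formula (my own demo, using SQLite through Python rather than SQL Server, with an invented `items` table). Unlike a single T-SQL `RAND()` call, SQLite's `random()` is evaluated once per row, so every row gets its own multiplier:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, abc REAL)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [(i, 100.0) for i in range(1, 101)])

# SQLite evaluates random() once per row, so each row gets its own
# multiplier; note the parenthesis placement for the 0.9 .. 1.2 range.
conn.execute("""
    UPDATE items
    SET abc = abc * ((abs(random()) % 1000) / 1000.0 * (1.2 - 0.9) + 0.9)
""")

values = [row[0] for row in conn.execute("SELECT abc FROM items")]
print(min(values), max(values))
```

Every starting value of 100.0 ends up somewhere in the 90-120 range, and the rows differ from one another, which is usually what is wanted here.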
How to update 1 column only but to all rows with a calculation?
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have a field in Oracle that has a blank character at the end, so instead of reading "12345" it reads "12345 ". The length of this field differs between records, but any record may also have a blank at the end. How can I write SQL to identify the records that have a blank at the end?
Try this:

```
Select ColumnName
from tablename
where columnname like '% '
```
This untested query should do what you want:

```
select *
from tabler
where length(col) > length(rtrim(col));
```

This query compares the length of the string including trailing blanks with the length after trailing blanks have been removed by the RTRIM function, and only returns rows where these two lengths differ.
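Both detection approaches can be sanity-checked quickly. A small sketch (my own, run against SQLite via Python with made-up values; Oracle's `length`/`rtrim` behave the same way for this check):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (val TEXT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [("12345",), ("12345 ",), ("9 9",), ("77 ",)])

# Pattern match: values ending in a space (internal spaces don't match)
like_hits = [r[0] for r in conn.execute(
    "SELECT val FROM t WHERE val LIKE '% ' ORDER BY val")]

# Length comparison: trailing blanks make length(col) > length(rtrim(col))
len_hits = [r[0] for r in conn.execute(
    "SELECT val FROM t WHERE length(val) > length(rtrim(val)) ORDER BY val")]

print(like_hits)  # ['12345 ', '77 ']
```

Note that `'9 9'` is not flagged by either query, since its blank is internal rather than trailing.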
SQL fields with blank space at end of fields
[ "", "sql", "oracle", "" ]
I have four tables in Access 2010, each with the same primary key. I'd like to join all the data in all four tables into one table with columns for each value tied to the primary key over all the tables. So, for example:

```
Table1
ID  Value1
1   10
2   7
3   4
4   12

Table 2
ID  Value2
1   33
2   8
6   19
7   4

Table 3
ID  Value3
1   99
2   99
5   99
7   99
```

I'd like to create:

```
Table 4
ID  Value1  Value2  Value3
1   10      33      99
2   7       8       99
3   4
4   12
5                   99
6           19
7           4       99
```

I'm using MS Access and I know I have to basically use 3 joins (left, right, inner) to get a full join, but I'm not exactly sure how to structure the query. Could someone please give me some example SQL code to point me in the right direction as to how to produce this result? Here is what I have so far. This combines all the tables, but it looks like I'm still missing some data. Have I done something wrong?

```
SELECT Coventry.cptcode, Coventry.[Fee Schedule], CT6002.[Fee Schedule],
       Medicare.[Fee Schedule], OFSP.[Fee Schedule]
FROM ((Coventry LEFT JOIN CT6002 ON Coventry.cptcode = CT6002.cptcode)
LEFT JOIN Medicare ON CT6002.cptcode = Medicare.cptcode)
LEFT JOIN OFSP ON Medicare.cptcode = OFSP.cptcode
UNION
SELECT Coventry.cptcode, Coventry.[Fee Schedule], CT6002.[Fee Schedule],
       Medicare.[Fee Schedule], OFSP.[Fee Schedule]
FROM ((Coventry RIGHT JOIN CT6002 ON Coventry.cptcode = CT6002.cptcode)
RIGHT JOIN Medicare ON CT6002.cptcode = Medicare.cptcode)
RIGHT JOIN OFSP ON Medicare.cptcode = OFSP.cptcode
UNION
SELECT Coventry.cptcode, Coventry.[Fee Schedule], CT6002.[Fee Schedule],
       Medicare.[Fee Schedule], OFSP.[Fee Schedule]
FROM ((Coventry INNER JOIN CT6002 ON Coventry.cptcode = CT6002.cptcode)
INNER JOIN Medicare ON CT6002.cptcode = Medicare.cptcode)
INNER JOIN OFSP ON Medicare.cptcode = OFSP.cptcode;
```
How about this?

```
SELECT id
     , max(v1) as value1
     , max(v2) as value2
     , max(v3) as value3
FROM (
    select id
         , value1 as v1
         , iif(true,null,value1) as v2
         , iif(true,null,value1) as v3
    from Table1
    union
    select id, null, value2, null
    from Table2
    union
    select id, null, null, value3 as v3
    from Table3
)
group by id
order by id
```

How it works:

* Rather than doing a join, I put all of the results into one "table" (my sub query), but with value1, value2 and value3 in their own columns, set to null for the tables that don't have those columns.
* The `iif` statements in the first query are there to say that I want v2 and v3 to be the same data type as v1. It's a dodgy hack but seems to work (sadly Access works out the type from the first statement in the query, and casting with `clng(null)` didn't work). They work because the result of the iif must be of the same type as the last two parameters, and only the last parameter has a type, so that gets inferred; the first parameter being true means the value returned will only ever be the second parameter.
* The outer query then squashes these results down to one line per id; since fields with a value are greater than null and we have at most one field with a value per id, we get that value for that column.

I'm not sure how performance compares with the MS article's way of doing this, but if you're using Access I suspect you have other things to worry about ;).

SQL Fiddle: <http://sqlfiddle.com/#!6/6f93b/2> (for SQL Server, since Access is not available, but I've tried to make it as similar as possible)
Create an union of all the ids from all tables, this way you get the id column in table4. Then encapsulate this union in a subselect (ids) and create left joins between this subselect (the parent join) and table1, table2, table3 (the childs join). Then select what you need... ``` SELECT ids.id, t1.Value1, t2.Value2, t3.Value3 FROM (( (select id from table1 union select id from table2 union select id from table3) AS ids LEFT JOIN Table1 AS t1 ON ids.id = t1.ID) LEFT JOIN Table2 AS t2 ON ids.id = t2.ID) LEFT JOIN Table3 AS t3 ON ids.id = t3.ID; ```
SQL Joins and MS Access - How to combine multiple tables into one?
[ "", "sql", "ms-access", "join", "ms-access-2010", "" ]
I have a table in which I store a log of every request to a web site. Every time a page is requested, a record is inserted. I now want to analyze the data in the log to detect possible automated (non-human) requests. The criteria I need to use is x number of requests within y seconds by an individual user. So, the data looks like this:

| Page | UserId | Date |
| --- | --- | --- |
| /Page1.htm | 001 | 2014-06-02 11:03 AM |
| /Page2.htm | 001 | 2014-06-02 11:03 AM |
| /Page1.htm | 002 | 2014-06-02 11:04 AM |
| /Page3.htm | 001 | 2014-06-02 11:04 AM |
| /Page2.htm | 002 | 2014-06-02 11:05 AM |
| /Page4.htm | 001 | 2014-06-02 11:05 AM |
| /Page5.htm | 001 | 2014-06-02 11:07 AM |
| /Page3.htm | 002 | 2014-06-02 11:15 AM |

So, I want to get all UserIDs that made 5 or more requests within any 5 second timespan. How can I get that? Is this even possible with SQL alone? I don't have access to the web server logs or anything else other than the SQL Server database.
Here is the query which you are looking for:

```
SELECT T1.Page, T1.UserId, T1.Date,
       MIN(T2.Date) AS Date2,
       DATEDIFF(minute, T1.Date, MIN(T2.Date)) AS DaysDiff,
       COUNT(*) RequestCount
FROM [STO24541450] T1
LEFT JOIN [STO24541450] T2
    ON T1.UserId = T2.UserId AND T2.Date > T1.Date
GROUP BY T1.Page, T1.UserId, T1.Date
HAVING DATEDIFF(minute, T1.Date, MIN(T2.Date)) >= 5
   AND COUNT(*) >= 5;
```
I would probably group by the time range and UserId and grab any with a count greater than 5.

```
select count(*), UserId,
       dateadd(SECOND, DATEDIFF(SECOND, '01-jan-1970', [date])/5*5, '01-jan-1970')
from [LogTable]
group by UserId, DATEDIFF(SECOND, '01-jan-1970', [date])/5
having count(1) > 5
```

The above will return the same UserId for each period in which the user has made more than 5 requests. If you're only interested in the UserIds and not when or how many times they breached the conditions, you could simplify the above to:

```
select distinct(UserId)
from [LogTable]
group by UserId, DATEDIFF(SECOND, '01-jan-1970', [date])/5
having count(1) > 5
```
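The fixed-bucket idea is easy to demonstrate outside SQL Server. A sketch (my own, in SQLite via Python, with invented data; note that fixed buckets can miss a burst that straddles a bucket boundary):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE LogTable (UserId TEXT, Date TEXT)")

# 'bot' fires 6 requests inside one 5-second window; 'human' is spread out
rows = [("bot", f"2014-06-02 11:03:0{s}") for s in (0, 0, 1, 2, 3, 4)]
rows += [("human", f"2014-06-02 11:03:{s:02d}") for s in (0, 10, 20, 30, 40, 50)]
conn.executemany("INSERT INTO LogTable VALUES (?, ?)", rows)

# Same bucketing trick: integer-divide the epoch seconds by the window size
flagged = [r[0] for r in conn.execute("""
    SELECT DISTINCT UserId
    FROM LogTable
    GROUP BY UserId, CAST(strftime('%s', Date) AS INTEGER) / 5
    HAVING COUNT(*) > 5
""")]
print(flagged)  # ['bot']
```

Only the user whose six requests land in the same 5-second bucket is reported.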
How to query for records that have a certain count within a timespan
[ "", "sql", "sql-server", "" ]
I have been trying to run debugging within SQl server management studio and for some reason the debugger has just stopped working. This is the message I get: > Unable to start the Transact-SQL debugger, could not connect to the > database engine instance 'server-sql'. Make sure you have enabled the > debugging firewall exceptions and are using a login that is a member > of the sysadmin fixed server role. The RPC server is unavailable. Before this I get two messages, one requesting firewall permissions and the next says 'usage' with some text that makes little sense. I have looked at the other similar answers on there for the same message which suggest adding the login as a sysadmin but that is already set. I also tried adding sysadmin to another account but that also didn't work.
In the end I was able to start it by right clicking and selecting run as administrator.
I encountered this issue while connected to SQL using a SQL Server Authenticated user. Once I tried using a Windows Authenticated user I was able to debug without issue. That user must also be assigned the sysadmin role.
Unable to start the Transact-SQL debugger, could not connect to the database engine instance
[ "", "sql", "t-sql", "debugging", "" ]
I'm trying to find an intuitive way of enforcing mutual uniqueness across two columns in a table. I am not looking for composite uniqueness, where duplicate *combinations* of keys are disallowed; rather, I want a rule where any of the keys cannot appear again in *either* column. Take the following example:

```
CREATE TABLE Rooms
(
    Id INT NOT NULL PRIMARY KEY,
)

CREATE TABLE Occupants
(
    PersonName VARCHAR(20),
    LivingRoomId INT NULL REFERENCES Rooms (Id),
    DiningRoomId INT NULL REFERENCES Rooms (Id),
)
```

A person may pick any room as their living room, and any other room as their dining room. Once a room has been allocated to an occupant, it cannot be allocated again to another person (whether as a living room or as a dining room). I'm aware that this issue can be resolved through data normalization; however, I cannot make breaking changes to the schema.

**Update**: In response to the proposed answers: Two unique constraints (or two unique indexes) will not prevent duplicates *across* the two columns. Similarly, a simple `LivingRoomId != DiningRoomId` check constraint will not prevent duplicates across *rows*. For example, I want the following data to be forbidden:

```
INSERT INTO Rooms VALUES (1), (2), (3), (4)

INSERT INTO Occupants VALUES ('Alex', 1, 2)
INSERT INTO Occupants VALUES ('Lincoln', 2, 3)
```

Room 2 is occupied simultaneously by Alex (as a living room) and by Lincoln (as a dining room); this should not be allowed.

**Update2**: I've run some tests on the three main proposed solutions, timing how long they would take to insert 500,000 rows into the `Occupants` table, with each row having a pair of random unique room ids. Extending the `Occupants` table with unique indexes and a check constraint (that calls a scalar function) causes the insert to take around three times as long. The implementation of the scalar function is incomplete, only checking that new occupants' living room does not conflict with existing occupants' dining room.
I couldn't get the insert to complete in reasonable time if the reverse check was performed as well. Adding a trigger that inserts each occupant's room as a new row into another table decreases performance by 48%. Similarly, an indexed view takes 43% longer. In my opinion, using an indexed view is cleaner, since it avoids the need for creating another table, as well as allows SQL Server to automatically handle updates and deletes as well. The full scripts and results from the tests are given below: ``` SET STATISTICS TIME OFF SET NOCOUNT ON CREATE TABLE Rooms ( Id INT NOT NULL PRIMARY KEY IDENTITY(1,1), RoomName VARCHAR(10), ) CREATE TABLE Occupants ( Id INT NOT NULL PRIMARY KEY IDENTITY(1,1), PersonName VARCHAR(10), LivingRoomId INT NOT NULL REFERENCES Rooms (Id), DiningRoomId INT NOT NULL REFERENCES Rooms (Id) ) GO DECLARE @Iterator INT = 0 WHILE (@Iterator < 10) BEGIN INSERT INTO Rooms SELECT TOP (1000000) 'ABC' FROM sys.all_objects s1 WITH (NOLOCK) CROSS JOIN sys.all_objects s2 WITH (NOLOCK) CROSS JOIN sys.all_objects s3 WITH (NOLOCK); SET @Iterator = @Iterator + 1 END; DECLARE @RoomsCount INT = (SELECT COUNT(*) FROM Rooms); SELECT TOP 1000000 RoomId INTO ##RandomRooms FROM ( SELECT DISTINCT CAST(RAND(CHECKSUM(NEWID())) * @RoomsCount AS INT) + 1 AS RoomId FROM sys.all_objects s1 WITH (NOLOCK) CROSS JOIN sys.all_objects s2 WITH (NOLOCK) ) s ALTER TABLE ##RandomRooms ADD Id INT IDENTITY(1,1) SELECT 'XYZ' AS PersonName, R1.RoomId AS LivingRoomId, R2.RoomId AS DiningRoomId INTO ##RandomOccupants FROM ##RandomRooms R1 JOIN ##RandomRooms R2 ON R2.Id % 2 = 0 AND R2.Id = R1.Id + 1 GO PRINT CHAR(10) + 'Test 1: No integrity check' CHECKPOINT; DBCC FREEPROCCACHE WITH NO_INFOMSGS; DBCC DROPCLEANBUFFERS WITH NO_INFOMSGS; SET NOCOUNT OFF SET STATISTICS TIME ON INSERT INTO Occupants SELECT * FROM ##RandomOccupants SET STATISTICS TIME OFF SET NOCOUNT ON TRUNCATE TABLE Occupants PRINT CHAR(10) + 'Test 2: Unique indexes and check constraint' CREATE UNIQUE INDEX UQ_LivingRoomId 
ON Occupants (LivingRoomId) CREATE UNIQUE INDEX UQ_DiningRoomId ON Occupants (DiningRoomId) GO CREATE FUNCTION CheckExclusiveRoom(@occupantId INT) RETURNS BIT AS BEGIN RETURN ( SELECT CASE WHEN EXISTS ( SELECT * FROM Occupants O1 JOIN Occupants O2 ON O1.LivingRoomId = O2.DiningRoomId -- OR O1.DiningRoomId = O2.LivingRoomId WHERE O1.Id = @occupantId ) THEN 0 ELSE 1 END ) END GO ALTER TABLE Occupants ADD CONSTRAINT ExclusiveRoom CHECK (dbo.CheckExclusiveRoom(Id) = 1) CHECKPOINT; DBCC FREEPROCCACHE WITH NO_INFOMSGS; DBCC DROPCLEANBUFFERS WITH NO_INFOMSGS; SET NOCOUNT OFF SET STATISTICS TIME ON INSERT INTO Occupants SELECT * FROM ##RandomOccupants SET STATISTICS TIME OFF SET NOCOUNT ON ALTER TABLE Occupants DROP CONSTRAINT ExclusiveRoom DROP INDEX UQ_LivingRoomId ON Occupants DROP INDEX UQ_DiningRoomId ON Occupants DROP FUNCTION CheckExclusiveRoom TRUNCATE TABLE Occupants PRINT CHAR(10) + 'Test 3: Insert trigger' CREATE TABLE RoomTaken ( RoomId INT NOT NULL PRIMARY KEY REFERENCES Rooms (Id) ) GO CREATE TRIGGER UpdateRoomTaken ON Occupants AFTER INSERT AS INSERT INTO RoomTaken SELECT RoomId FROM ( SELECT LivingRoomId AS RoomId FROM INSERTED UNION ALL SELECT DiningRoomId AS RoomId FROM INSERTED ) s GO CHECKPOINT; DBCC FREEPROCCACHE WITH NO_INFOMSGS; DBCC DROPCLEANBUFFERS WITH NO_INFOMSGS; SET NOCOUNT OFF SET STATISTICS TIME ON INSERT INTO Occupants SELECT * FROM ##RandomOccupants SET STATISTICS TIME OFF SET NOCOUNT ON DROP TRIGGER UpdateRoomTaken DROP TABLE RoomTaken TRUNCATE TABLE Occupants PRINT CHAR(10) + 'Test 4: Indexed view with unique index' CREATE TABLE TwoRows ( Id INT NOT NULL PRIMARY KEY ) INSERT INTO TwoRows VALUES (1), (2) GO CREATE VIEW OccupiedRooms WITH SCHEMABINDING AS SELECT RoomId = CASE R.Id WHEN 1 THEN O.LivingRoomId ELSE O.DiningRoomId END FROM dbo.Occupants O CROSS JOIN dbo.TwoRows R GO CREATE UNIQUE CLUSTERED INDEX UQ_OccupiedRooms ON OccupiedRooms (RoomId); CHECKPOINT; DBCC FREEPROCCACHE WITH NO_INFOMSGS; DBCC DROPCLEANBUFFERS WITH NO_INFOMSGS; 
SET NOCOUNT OFF SET STATISTICS TIME ON INSERT INTO Occupants SELECT * FROM ##RandomOccupants SET STATISTICS TIME OFF SET NOCOUNT ON DROP INDEX UQ_OccupiedRooms ON OccupiedRooms DROP VIEW OccupiedRooms DROP TABLE TwoRows TRUNCATE TABLE Occupants DROP TABLE ##RandomRooms DROP TABLE ##RandomOccupants DROP TABLE Occupants DROP TABLE Rooms /* Results: Test 1: No integrity check SQL Server Execution Times: CPU time = 5210 ms, elapsed time = 10853 ms. (500000 row(s) affected) Test 2: Unique indexes and check constraint SQL Server Execution Times: CPU time = 21996 ms, elapsed time = 27019 ms. (500000 row(s) affected) Test 3: Insert trigger SQL Server parse and compile time: CPU time = 5663 ms, elapsed time = 11192 ms. SQL Server Execution Times: CPU time = 4914 ms, elapsed time = 4913 ms. (1000000 row(s) affected) SQL Server Execution Times: CPU time = 10577 ms, elapsed time = 16105 ms. (500000 row(s) affected) Test 4: Indexed view with unique index SQL Server Execution Times: CPU time = 10171 ms, elapsed time = 15777 ms. (500000 row(s) affected) */ ```
You could create an "external" constraint in the form of an indexed view: ``` CREATE VIEW dbo.OccupiedRooms WITH SCHEMABINDING AS SELECT r.Id FROM dbo.Occupants AS o INNER JOIN dbo.Rooms AS r ON r.Id IN (o.LivingRoomId, o.DiningRoomId) ; GO CREATE UNIQUE CLUSTERED INDEX UQ_1 ON dbo.OccupiedRooms (Id); ``` The view is essentially unpivoting the occupied rooms' IDs, putting them all in one column. The unique index on that column makes sure it does not have duplicates. Here are demonstrations of how this method works: * [failed insert](http://sqlfiddle.com/#!3/24f5e/2); * [successful insert](http://sqlfiddle.com/#!3/24f5e/3). **UPDATE** As [hvd has correctly remarked](https://stackoverflow.com/questions/24562733/enforcing-mutual-uniqueness-across-multiple-columns/24703792?noredirect=1#comment38334232_24703792), the above solution does not catch attempts to insert identical `LivingRoomId` and `DiningRoomId` when they are put on the same row. This is because the `dbo.Rooms` table is matched only once in that case and, therefore, the join produces produces just one row for the pair of references. One way to fix that is suggested in the same comment: additionally to the indexed view, use a CHECK constraint on the `dbo.OccupiedRooms` table to prohibit rows with identical room IDs. The suggested `LivingRoomId <> DiningRoomId` condition, however, will not work for cases where both columns are NULL. To account for that case, the condition could be expanded to this one: ``` LivingRoomId <> DinindRoomId AND (LivingRoomId IS NOT NULL OR DinindRoomId IS NOT NULL) ``` Alternatively, you could change the view's SELECT statement to catch all situations. 
If `LivingRoomId` and `DinindRoomId` were `NOT NULL` columns, you could avoid a join to `dbo.Rooms` and unpivot the columns using a cross-join to a virtual 2-row table: ``` SELECT Id = CASE x.r WHEN 1 THEN o.LivingRoomId ELSE o.DiningRoomId END FROM dbo.Occupants AS o CROSS JOIN (SELECT 1 UNION ALL SELECT 2) AS x (r) ``` However, as those columns allow NULLs, this method would not allow you to insert more than one single-reference row. To make it work in your case, you would need to filter out NULL entries, but only if they come from rows where the other reference is not NULL. I believe adding the following WHERE clause to the above query would suffice: ``` WHERE o.LivingRoomId IS NULL AND o.DinindRoomId IS NULL OR x.r = 1 AND o.LivingRoomId IS NOT NULL OR x.r = 2 AND o.DinindRoomId IS NOT NULL ```
I think the only way to do this is to use a constraint and a function. Pseudo code (I haven't done this for a long time):

```
CREATE FUNCTION CheckExlusiveRoom RETURNS bit

declare @retval bit
set @retval = 0

select retval = 1
from Occupants as Primary
join Occupants as Secondary
    on Primary.LivingRoomId = Secondary.DiningRoomId
where Primary.ID <> Secondary.ID
   or (Primary.DiningRoomId = Secondary.DiningRoomId
       or Primary.LivingRoomId = Secondary.LivingRoomID)

return @retval
GO
```

Then, use this function in a check constraint.

An alternative would be to use an intermediate table OccupiedRoom, where you would always insert rooms that are used (by trigger, for instance?) and FK to it instead of the Room table.

**Reaction to comment:**

Do you need to enforce it directly on the table, or is a constraint violation happening in reaction to insert/update enough? Because then I am thinking like this:

1. Create a simple table:

```
create table RoomTaken (RoomID int primary key references Room (Id))
```

2. Create a trigger on insert/update/delete that makes sure that any Room used in Occupants is kept in RoomTaken as well.
3. If you try to duplicate room usage, the RoomTaken table will throw a PK violation.

Not sure if this is enough and/or how it would compare in speed to the UDF (I assume it would be superior). And yes, I see the problem that RoomTaken would not FK to the usage in Occupants, but... really, you are working under some constraints and there is no perfect solution - it's speed (UDF) vs 100% integrity enforcement, in my opinion.
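The intermediate-table/trigger alternative can be sketched quickly in SQLite (my own demo; schema simplified, names invented). The PRIMARY KEY on `RoomTaken` is what rejects the second use of room 2:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Rooms (Id INTEGER PRIMARY KEY);
    CREATE TABLE Occupants (PersonName TEXT, LivingRoomId INT, DiningRoomId INT);
    CREATE TABLE RoomTaken (RoomId INTEGER PRIMARY KEY);  -- PK = one use per room

    CREATE TRIGGER OccupantsRoomTaken AFTER INSERT ON Occupants
    BEGIN
        INSERT INTO RoomTaken VALUES (NEW.LivingRoomId);
        INSERT INTO RoomTaken VALUES (NEW.DiningRoomId);
    END;

    INSERT INTO Rooms VALUES (1),(2),(3),(4);
    INSERT INTO Occupants VALUES ('Alex', 1, 2);
""")

# Lincoln tries to reuse room 2: the trigger's insert into RoomTaken
# violates the primary key, which also rolls back the Occupants row.
try:
    conn.execute("INSERT INTO Occupants VALUES ('Lincoln', 2, 3)")
    blocked = False
except sqlite3.IntegrityError:
    blocked = True
print(blocked)  # True
```

This mirrors the forbidden example from the question: the second insert is rejected and the `Occupants` table keeps only Alex's row.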
Enforcing mutual uniqueness across multiple columns
[ "", "sql", "sql-server", "t-sql", "" ]
I am new to server side programming and am trying to understand relational databases a little better. Whenever I read about MYSQL vs SQLite people always talk about SQLite not being able to have multiple users. However, when I program with the Django Framework I am able to create multiple users on the sqlitedb. Can someone explain what people mean by multi-user? Thanks!
When people talk about multiple users in this context, they are talking about simultaneous connections to the database. The users in this case are threads in the web server that are accessing the database. Different databases have different solutions for handling multiple connections working with the database at once. Generally reading is not a problem, as multiple reading operations can overlap without disturbing each other, but only one connection can write data in a specific unit at a a time. The difference between concurrency for databases is basically how large units they lock when someone is writing. MySQL has an advanced system where records, blocks or tables can be locked depending on the need, while SQLite has a simpler system where it only locks the entire database. The impact of this difference is seen when you have multiple threads in the webserver, where some threads want to read data and others want to write data. MySQL can read from one table and write into another at the same time without problem. SQLite has to suspend all incoming read requests whenever someone wants to write something, wait for all current reads to finish, do the write, and then open up for reading operations again.
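A quick demonstration of that behavior (my own sketch in Python; file path and table invented). While one connection holds a write transaction, a second connection can still read, but its write is refused immediately:

```python
import os
import sqlite3
import tempfile

# A shared on-disk database (in-memory DBs are private to one connection)
path = os.path.join(tempfile.mkdtemp(), "demo.db")

writer = sqlite3.connect(path, timeout=0, isolation_level=None)
writer.execute("CREATE TABLE t (x INTEGER)")

# Writer opens a write transaction and holds it open
writer.execute("BEGIN IMMEDIATE")
writer.execute("INSERT INTO t VALUES (1)")

reader = sqlite3.connect(path, timeout=0, isolation_level=None)

# Reads still work while the write lock is held (and see the old state)...
count_seen = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]

# ...but a second writer is turned away immediately (timeout=0)
try:
    reader.execute("INSERT INTO t VALUES (2)")
    locked = False
except sqlite3.OperationalError:  # "database is locked"
    locked = True

writer.execute("COMMIT")
print(locked, count_seen)  # True 0
```

With a nonzero `timeout`, the second connection would instead wait for the lock to be released, which is the "suspend incoming requests" behavior described above.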
As you can read [here](https://stackoverflow.com/a/5102105/2108339), SQLite supports multiple users, but it locks the whole database. SQLite is usually used for development; MySQL is a better choice for production because it has better support for concurrent reads and writes, which SQLite doesn't. Hope this helps.
What is the difference between MYSQL and SQLite multi-user functionality?
[ "", "mysql", "sql", "database", "sqlite", "" ]
I'm trying to make a method that returns a value from the database, and I'm getting a CursorIndexOutOfBounds exception. (Line 94 is: `String count = c.getString(1);`)

```
public String getCount() {
    // TODO Auto-generated method stub
    String[] columns = new String[] { KEY_ROWID, KEY_COUNT };
    Cursor c = ourDatabase.query(DATABASE_TABLE, columns, KEY_ROWID + "=1", null, null, null, null);
    if (c != null) {
        c.moveToFirst();
        String count = c.getString(1);
        return count;
    }
    return null;
}
```

LogCat:

```
07-07 19:51:53.408: E/AndroidRuntime(27923): FATAL EXCEPTION: main
07-07 19:51:53.408: E/AndroidRuntime(27923): android.database.CursorIndexOutOfBoundsException: Index 0 requested, with a size of 0
07-07 19:51:53.408: E/AndroidRuntime(27923): at android.database.AbstractCursor.checkPosition(AbstractCursor.java:424)
07-07 19:51:53.408: E/AndroidRuntime(27923): at android.database.AbstractWindowedCursor.checkPosition(AbstractWindowedCursor.java:136)
07-07 19:51:53.408: E/AndroidRuntime(27923): at android.database.AbstractWindowedCursor.getString(AbstractWindowedCursor.java:50)
07-07 19:51:53.408: E/AndroidRuntime(27923): at com.example.fartsound.MyDataBase.getCount(MyDataBase.java:94)
```
Move `c.moveToFirst()` into the if statement. If it returns false, the cursor is empty and `getString` would throw exactly that error:

```
if (c != null && c.moveToFirst())
```
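The Android `Cursor` API can't run outside a device, but the same guard-before-read idea can be illustrated with Python's `sqlite3` (my own sketch, table and column names invented): check that the fetch actually produced a row before reading columns from it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE details (rowid_key INTEGER, cnt TEXT)")

def get_count(db):
    # The query may match no rows; check the result before reading column 1,
    # just like checking moveToFirst() before Cursor.getString().
    row = db.execute(
        "SELECT rowid_key, cnt FROM details WHERE rowid_key = 1").fetchone()
    if row is not None:
        return row[1]
    return None

print(get_count(conn))  # None (table is empty, no crash)
conn.execute("INSERT INTO details VALUES (1, '42')")
print(get_count(conn))  # 42
```

Reading `row[1]` without the `None` check is the Python equivalent of calling `getString` on an empty cursor.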
cursor starts from '0' use this: ``` String count = c.getString(0); ```
Cursor Index Out Of Bounds while trying to get a value from the database
[ "", "android", "sql", "" ]
With this schema:

```
CREATE TABLE BAR (id INT PRIMARY KEY);
CREATE TABLE FOO (id INT PRIMARY KEY,
                  rank INT UNIQUE,
                  fk INT,
                  FOREIGN KEY (fk) REFERENCES Bar(id));

INSERT INTO BAR (id) VALUES (1);
INSERT INTO BAR (id) VALUES (2);

-- sample values
INSERT INTO FOO (id, rank, fk) VALUES (1, 10, 1);
INSERT INTO FOO (id, rank, fk) VALUES (2, 3, 1);
INSERT INTO FOO (id, rank, fk) VALUES (3, 9, 2);
INSERT INTO FOO (id, rank, fk) VALUES (4, 5, 1);
INSERT INTO FOO (id, rank, fk) VALUES (5, 12, 1);
INSERT INTO FOO (id, rank, fk) VALUES (6, 14, 2);
```

How can I query for certain rows of `FOO` *and* the rows linked to the same row of `BAR` with the next highest `rank`? That is, I want to search for certain rows ("targets"), and for each target row I also want to find another row ("secondary"), such that secondary has the highest rank of all rows with `secondary.fk = target.fk` *and* `secondary.rank < target.rank`. For example, if I target all rows (no where clause), I would expect this result:

```
TARGET_ID  TARGET_RANK  SECONDARY_ID  SECONDARY_RANK
---------  -----------  ------------  --------------
1          10           4             5
2          3            NULL          NULL
3          9            NULL          NULL
4          5            2             3
5          12           1             10
6          14           3             9
```

When the target row has `id` 2 or 3, there is no secondary row because no row has the same `fk` as the target row and a lower `rank`. I tried this:

```
SELECT F1.id AS TARGET_ID, F1.rank as TARGET_RANK,
       F2.id AS SECONDARY_ID, F2.rank AS SECONDARY_RANK
FROM FOO F1
LEFT JOIN FOO F2
    ON F2.rank = (SELECT MAX(S.rank)
                  FROM FOO S
                  WHERE S.fk = F1.fk AND S.rank < F1.rank);
```

...but I got `ORA-01799: a column may not be outer-joined to a subquery`. Next I tried this:

```
SELECT F1.id AS TARGET_ID, F1.rank AS TARGET_RANK,
       F2.id AS SECONDARY_ID, F2.rank AS SECONDARY_RANK
FROM FOO F1
LEFT JOIN (SELECT S1.rank, S1.fk
           FROM FOO S1
           WHERE S1.rank = (SELECT MAX(S2.rank)
                            FROM FOO S2
                            WHERE S2.rank < F1.rank AND S2.fk = F1.fk)
          ) F2 ON F2.fk = F1.fk;
```

...but I got `ORA-00904: "F1"."FK": invalid identifier`. Surely there's **some** way to do this in a single query?
Oracle doesn't like the correlated subquery inside the derived table. The trick is to left join all the secondary rows with `rank` less than the target's `rank`, then use the WHERE clause to filter out all but the max, while being sure not to filter out target rows with no secondary.

```
select F1.id   as TARGET_ID,
       F1.rank as TARGET_RANK,
       F2.id   as SECOND_ID,
       F2.rank as SECOND_RANK
from FOO F1
left join FOO F2
    on F1.fk = F2.fk
   and F2.rank < F1.rank
where F2.rank is null
   or F2.rank = (select max(S.rank)
                 from FOO S
                 where S.fk = F1.fk
                   and S.rank < F1.rank);
```
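This query can be replayed against the question's sample data (my own sketch, using SQLite via Python; the `BAR` table is omitted since the query never reads it). It reproduces the expected result table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE FOO (id INT PRIMARY KEY, rank INT UNIQUE, fk INT);
    INSERT INTO FOO VALUES (1,10,1),(2,3,1),(3,9,2),(4,5,1),(5,12,1),(6,14,2);
""")

rows = conn.execute("""
    SELECT F1.id, F1.rank, F2.id, F2.rank
    FROM FOO F1
    LEFT JOIN FOO F2 ON F1.fk = F2.fk AND F2.rank < F1.rank
    WHERE F2.rank IS NULL
       OR F2.rank = (SELECT MAX(S.rank) FROM FOO S
                     WHERE S.fk = F1.fk AND S.rank < F1.rank)
    ORDER BY F1.id
""").fetchall()

for r in rows:
    print(r)
```

Each tuple is (target id, target rank, secondary id, secondary rank), with `None` for the targets that have no secondary row.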
You need [Analytic Functions](http://docs.oracle.com/cd/E11882_01/server.112/e26088/functions004.htm):

```
select id,
       lead(rank) over (partition by fk order by rank) next_rank,
       lag(rank) over (partition by fk order by rank) prev_rank
from foo
```

It is more efficient than self joins. But if you want to torture your database, you may try:

```
select id,
       (select min(f2.rank) from foo f2
        where f2.fk = f1.fk and f2.rank > f1.rank) next_rank,
       (select max(f2.rank) from foo f2
        where f2.fk = f1.fk and f2.rank < f1.rank) prev_rank
from foo f1
```

Ok, I misunderstood what needs to be in the output. Here is an example:

```
select id, rank, prev_rank_id
from (
    select id, rank,
           lag(id) over (partition by fk order by rank) prev_rank_id
    from foo
)
where id in (1, 3)
```

Analytic functions work on the OUTPUT dataset. You need to wrap them in another select statement to limit your actual output.
How can I reference an outer table in a subquery for a self left join?
[ "", "sql", "oracle", "self-join", "correlated-subquery", "" ]
I have a table like:

**Items**

* ID (int)
* Name (varchar)
* Date (DateTime)
* Value (int)

I would like to write a SQL command that will update only the values which are not null.

Pseudo code of what I would like to do:

```
UPDATE Items
SET
    IF @NewName IS NOT NULL { Items.Name = @NewName }
    IF @NewDate IS NOT NULL { Items.Date = @NewDate }
    IF @NewValue IS NOT NULL { Items.Value = @NewValue }
WHERE Items.ID = @ID
```

Is it possible to write a query like this?
```
UPDATE Items
SET Name   = ISNULL(@NewName, Name),
    [Date] = ISNULL(@NewDate, [Date]),
    Value  = ISNULL(@NewValue, Value)
WHERE ID = @ID
```
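`ISNULL` is SQL Server specific; the portable equivalent is `COALESCE`. A sketch of the same pattern (my own, in SQLite via Python, with invented sample values):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Items (ID INTEGER PRIMARY KEY, Name TEXT, Date TEXT, Value INTEGER)")
conn.execute("INSERT INTO Items VALUES (1, 'old', '2014-01-01', 10)")

def update_item(db, item_id, name=None, date=None, value=None):
    # A NULL parameter falls through to the column's current value
    db.execute("""
        UPDATE Items
        SET Name  = COALESCE(?, Name),
            Date  = COALESCE(?, Date),
            Value = COALESCE(?, Value)
        WHERE ID = ?
    """, (name, date, value, item_id))

update_item(conn, 1, name="new")  # only Name changes
row = conn.execute("SELECT Name, Date, Value FROM Items WHERE ID = 1").fetchone()
print(row)  # ('new', '2014-01-01', 10)
```

Passing only the fields you want to change leaves the rest untouched, which is exactly the behavior asked for in the question.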
You can't use an IF statement in an UPDATE, but you can use a CASE expression for a conditional update:

```
UPDATE Items
SET Items.Name  = (case when @NewName is not null then @NewName else Items.Name end),
    Items.Date  = (case when @NewDate is not null then @NewDate else Items.Date end),
    Items.Value = (case when @NewValue is not null then @NewValue else Items.Value end)
WHERE Items.ID = @ID
```
SQL Update command with various variables
[ "", "sql", "sql-server", "t-sql", "sql-update", "" ]
I have the following requirement for writing a query in Oracle. I need to fetch all the records from a table T1 (it has two date columns, D1 and D2) based on two dynamic values, V1 and V2, which are passed in from the application. The possible values for V1 are 'Less than' or 'Greater than'. The possible value for V2 is an integer.

The query I need to write: if V1 is passed as 'Less than' and V2 is passed as 5, then I need to return all the rows in T1 WHERE D1-D2 < 5. If V1 is passed as 'Greater than' and V2 is passed as 8, then I need to return all the rows in T1 WHERE D1-D2 > 8.

I think this can be done using a CASE statement in the WHERE clause, but I'm not sure how to start. Any help is greatly appreciated. Thanks
You could write this as:

```
select *
from t1
where (v1 = 'Less Than' and D1 - D2 < v2)
   or (v1 = 'Greater Than' and D1 - D2 > v2)
```

A `case` statement isn't needed.
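A sketch of the same predicate with bound parameters (my own, in SQLite via Python; dates simplified to day numbers so that `D1 - D2` is a day difference, as it is for Oracle `DATE` arithmetic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (id INTEGER, d1 INTEGER, d2 INTEGER)")
conn.executemany("INSERT INTO t1 VALUES (?, ?, ?)",
                 [(1, 10, 7), (2, 20, 11), (3, 9, 8)])  # diffs: 3, 9, 1

def fetch(db, v1, v2):
    # The OR of two mutually exclusive branches selects exactly one comparison
    return [r[0] for r in db.execute("""
        SELECT id FROM t1
        WHERE (? = 'Less than' AND d1 - d2 < ?)
           OR (? = 'Greater than' AND d1 - d2 > ?)
        ORDER BY id
    """, (v1, v2, v1, v2))]

print(fetch(conn, "Less than", 5))     # [1, 3]
print(fetch(conn, "Greater than", 8))  # [2]
```

Only one branch of the OR can be true for a given `V1`, so the other branch simply drops out.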
Try this:

```
select *
from T1
where case when V1 = 'LESS THAN' then D1 - D2 < V2
           else D1 - D2 > V2
      end
```

This assumes that if V1 is not LESS THAN, the only other value is greater than. If necessary you can use more than one CASE statement, but this should get you started.
SQL query with decoding and comparison in where clause
[ "", "sql", "oracle", "" ]
Probably the wrong title, but I can't summarise what I'm trying to do nicely.

I have a list of `Discounts`, and a list of `TeamExclusiveDiscounts` (`DiscountId, TeamId`). I call a stored procedure passing in `@TeamID (int)`. What I want is all `Discounts` except if they're in `TeamExclusiveDiscounts` and don't have `TeamID` matching `@TeamId`.

So the data is something like

Table `Discount`:

```
DiscountID  Name
-----------------------
1           Test 1
2           Test 2
3           Test 3
4           Test 4
5           Test 5
```

Table `TeamExclusiveDiscount`:

```
DiscountID  TeamID
-----------------------
1           10
2           10
2           4
3           8
```

Expected results:

* searching for `TeamID = 10` I should get discounts 1, 2, 4, 5
* searching for `TeamID = 5` I should get discounts 4, 5
* searching for `TeamID = 8` I should get discounts 3, 4, 5

I've tried a variety of joins, or trying to update a temp table to set whether the discount is allowed or not, but I just can't seem to get my head around this issue. So I'm after the T-SQL for my stored procedure that will select the correct discounts (SQL Server). Thanks!
``` SELECT D.DiscountID FROM Discounts D LEFT JOIN TeamExclusiveDiscount T ON D.DiscountID=T.DiscountID WHERE T.TeamID=@TeamID OR T.TeamID IS NULL ``` [SQLFIDDLE for TEST](http://www.sqlfiddle.com/#!3/5beff/14)
Can you try this - it only selects records where there is a teamdiscount record with the team or no teamdiscount record at all. ``` SELECT * FROM Discounts D WHERE EXISTS ( SELECT 1 FROM TeamExclusiveDiscount T WHERE T.DiscountID = D.DiscountID AND TeamID = @TeamID ) OR NOT EXISTS ( SELECT 1 FROM TeamExclusiveDiscount T WHERE T.DiscountID = D.DiscountID ) ```
SQL Server - tsql join/filtering issue
[ "sql", "sql-server", "t-sql" ]
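A runnable check of the accepted `LEFT JOIN` query against the sample data from the question, sketched with Python's `sqlite3`; a `DISTINCT` is added as a guard against duplicates in case a discount were ever exclusive to several teams:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Discount (DiscountID INTEGER, Name TEXT);
    INSERT INTO Discount VALUES
        (1,'Test 1'),(2,'Test 2'),(3,'Test 3'),(4,'Test 4'),(5,'Test 5');
    CREATE TABLE TeamExclusiveDiscount (DiscountID INTEGER, TeamID INTEGER);
    INSERT INTO TeamExclusiveDiscount VALUES (1,10),(2,10),(2,4),(3,8);
""")

def discounts_for(team_id):
    # Non-exclusive discounts have no join partner (TeamID IS NULL);
    # exclusive ones survive only when their TeamID matches.
    sql = """
        SELECT DISTINCT D.DiscountID
        FROM Discount D
        LEFT JOIN TeamExclusiveDiscount T ON D.DiscountID = T.DiscountID
        WHERE T.TeamID = ? OR T.TeamID IS NULL
        ORDER BY D.DiscountID
    """
    return [r[0] for r in con.execute(sql, (team_id,))]

print(discounts_for(10))  # [1, 2, 4, 5]
print(discounts_for(5))   # [4, 5]
print(discounts_for(8))   # [3, 4, 5]
```

The three calls reproduce exactly the expected results listed in the question.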
I am working on automating the SMS sending part in my web application. [**SQL Fiddle Link**](http://sqlfiddle.com/#!3/a18de/2) **DurationType** table stores whether the sms should be sent out an interval of Hours, Days, Weeks, Months. Referred in SMSConfiguration ``` CREATE TABLE [dbo].[DurationType]( [Id] [int] NOT NULL PRIMARY KEY, [DurationType] VARCHAR(10) NOT NULL ) ``` **Bookings** table Contains the original Bookings data. For this booking I need to send the SMS based on configuration. ![enter image description here](https://i.stack.imgur.com/hhmxd.png) **SMS Configuration**. Which defines the configuration for sending automatic SMS. It can be send before/after. Before=0, After=1. `DurationType can be Hours=1, Days=2, Weeks=3, Months=4` ![enter image description here](https://i.stack.imgur.com/JVcej.png) Now I need to find out the list of bookings that needs SMS to be sent out at the current time based on the SMS Configuration Set. **Tried SQL using UNION** ``` DECLARE @currentTime smalldatetime = '2014-07-12 11:15:00' -- 'SMS CONFIGURED FOR HOURS BASIS' SELECT B.Id AS BookingId, B.StartTime AS BookingStartTime,@currentTime As CurrentTime, SMS.SMSText FROM Bookings B INNER JOIN SMSConfiguration SMS ON SMS.CategoryId = B.CategoryId OR SMS.CategoryId IS NULL WHERE (DATEDIFF(HOUR, @currentTime, B.StartTime) = SMS.Duration AND SMS.DurationType=1 AND BeforeAfter=0) OR (DATEDIFF(HOUR, B.StartTime, @currentTime) = SMS.Duration AND SMS.DurationType=1 AND BeforeAfter=1) --'SMS CONFIGURED FOR DAYS BASIS' UNION SELECT B.Id AS BookingId, B.StartTime AS BookingStartTime,@currentTime As CurrentTime, SMS.SMSText FROM Bookings B INNER JOIN SMSConfiguration SMS ON SMS.CategoryId = B.CategoryId OR SMS.CategoryId IS NULL WHERE (DATEDIFF(DAY, @currentTime, B.StartTime) = SMS.Duration AND SMS.DurationType=2 AND BeforeAfter=0) OR (DATEDIFF(DAY, B.StartTime, @currentTime) = SMS.Duration AND SMS.DurationType=2 AND BeforeAfter=1) --'SMS CONFIGURED FOR WEEKS BASIS' UNION SELECT B.Id 
AS BookingId, B.StartTime AS BookingStartTime, @currentTime As CurrentTime, SMS.SMSText FROM Bookings B INNER JOIN SMSConfiguration SMS ON SMS.CategoryId = B.CategoryId OR SMS.CategoryId IS NULL WHERE (DATEDIFF(DAY, @currentTime, B.StartTime)/7 = SMS.Duration AND SMS.DurationType=3 AND BeforeAfter=0) OR (DATEDIFF(DAY, B.StartTime, @currentTime)/7 = SMS.Duration AND SMS.DurationType=3 AND BeforeAfter=1) --'SMS CONFIGURED FOR MONTHS BASIS' UNION SELECT B.Id AS BookingId, B.StartTime AS BookingStartTime, @currentTime As CurrentTime, SMS.SMSText FROM Bookings B INNER JOIN SMSConfiguration SMS ON SMS.CategoryId = B.CategoryId OR SMS.CategoryId IS NULL WHERE (dbo.FullMonthsSeparation(@currentTime, B.StartTime) = SMS.Duration AND SMS.DurationType=4 AND BeforeAfter=0) OR (dbo.FullMonthsSeparation(B.StartTime, @currentTime) = SMS.Duration AND SMS.DurationType=4 AND BeforeAfter=1) ``` **Result** ![enter image description here](https://i.stack.imgur.com/Xwakn.png) **Problem:** The SQL procedure will be running every 15 mins. Current query keep returning days/weeks/months records even for the current time '2014-07-12 11:30:00', '2014-07-12 11:45:00', etc > I want a single query that takes care of all Hours/Days/Weeks/Months > calculation and I should be get records only one time when they meet > the correct time. Otherwise I will be sending sms again and again > every 15 mins for day/week/months records matched. It should consider the following scenarios. 1. Hour, If booking is 10:15 A.M same day 9:15 A.M if it is before 1 hour configured 2. Day(24 Hour difference), If booking is 10:15 A.M 3rd day morning 10:15 A.M if it is configured after 3 days in SMSConfiguration 3. Match Week. If booking is 10:15 A.M today(Wednesday) morning then after 14 days morning 10:15 A.M if it is configured after 2 weeks. 4. Month also same logic like above.
Try the simplified version [**FIDDLE**](http://sqlfiddle.com/#!3/24708/65) , removed Union and used OR conditions **LIST FOR HOURS - RUN Every 15 mins** ``` DECLARE @currentTime smalldatetime = '2014-07-12 11:15:00' SELECT B.Id AS BookingId, C.Id, C.Name, B.StartTime AS BookingStartTime,@currentTime As CurrentTime, SMS.SMSText FROM Bookings B INNER JOIN SMSConfiguration SMS ON SMS.CategoryId = B.CategoryId OR SMS.CategoryId IS NULL LEFT JOIN Category C ON C.Id=B.CategoryId WHERE (DATEDIFF(MINUTE, @currentTime, B.StartTime) = SMS.Duration*60 AND SMS.DurationType=1 AND BeforeAfter=0) OR (DATEDIFF(MINUTE, B.StartTime, @currentTime) = SMS.Duration*60 AND SMS.DurationType=1 AND BeforeAfter=1) Order BY B.Id GO ``` **LIST FOR DAYS/WEEKS/MONTHS - RUN Every Day Morning Once** ``` DECLARE @currentTime smalldatetime = '2014-07-12 08:00:00' SELECT B.Id AS BookingId, C.Id, C.Name, B.StartTime AS BookingStartTime,@currentTime As CurrentTime, SMS.SMSText FROM Bookings B INNER JOIN SMSConfiguration SMS ON SMS.CategoryId = B.CategoryId OR SMS.CategoryId IS NULL LEFT JOIN Category C ON C.Id=B.CategoryId WHERE (((DATEDIFF(DAY, @currentTime, B.StartTime) = SMS.Duration AND SMS.DurationType=2) OR (DATEDIFF(DAY, @currentTime, B.StartTime) = SMS.Duration*7 AND SMS.DurationType=3) OR (DATEDIFF(DAY, @currentTime, B.StartTime) = SMS.Duration*30 AND SMS.DurationType=4)) AND BeforeAfter=0) OR (((DATEDIFF(DAY, B.StartTime, @currentTime) = SMS.Duration AND SMS.DurationType=2) OR (DATEDIFF(DAY, @currentTime, B.StartTime) = SMS.Duration*7 AND SMS.DurationType=3) OR (DATEDIFF(DAY, @currentTime, B.StartTime) = SMS.Duration*30 AND SMS.DurationType=4)) AND BeforeAfter=1) Order BY B.Id ```
You seem to have forgotten to pick some columns to return. ``` DECLARE @currentTime smalldatetime = '2014-07-12 09:15:00'; SELECT [col1], [col2], [etcetera] FROM Bookings B INNER JOIN SMSConfiguration SMS ON SMS.CategoryId=B.CategoryId WHERE DATEDIFF(hour,B.StartTime,@currentTime)=1 ```
Get records based on Current Date and Configured Data
[ "sql", "sql-server", "sql-server-2008-r2", "datediff" ]
I have table online\_users\_changes where I store whenever user go online/idle/offline. ``` CREATE TABLE online_users_changes(id INT PRIMARY KEY AUTO_INCREMENT, user_id INT, timestamp TIMESTAMP, status TEXT); INSERT INTO online_users_changes(user_id, timestamp, status) VALUES (1, "2014-07-05 9:00", "idle"), (1, "2014-07-05 12:00", "online"), (1, "2014-07-05 15:00", "offline"), (2, "2014-07-05 7:00", "offline"), (2, "2014-07-05 13:00", "online"), (2, "2014-07-05 14:00", "offline"); ``` I want to create chart, where I will show user's status in specific period of time. So I need to fetch all changes in this period + the most recent changes before it. ``` SELECT * FROM online_users_changes WHERE timestamp >= :start and timestamp < :end ORDER BY user_id, timestamp ``` This query returns only changes between :start and :end, but I also need the most recent change before :start for each user. How can I get the data I want? <http://sqlfiddle.com/#!2/8147a9/2> e.g. This query ``` SELECT * FROM online_users_changes WHERE timestamp BETWEEN "2014-07-05 11:00" and "2014-07-05 22:00" ORDER BY user_id, timestamp; ``` returns ``` | USER_ID | TIMESTAMP | STATUS | |---------|-----------------------------|---------| | 1 | July, 05 2014 12:00:00+0000 | online | | 1 | July, 05 2014 15:00:00+0000 | offline | | 2 | July, 05 2014 13:00:00+0000 | online | | 2 | July, 05 2014 14:00:00+0000 | offline | ``` But I want to obtains something like this. ``` | USER_ID | TIMESTAMP | STATUS | |---------|-----------------------------|---------| | 1 | July, 05 2014 09:00:00+0000 | idle | | 1 | July, 05 2014 12:00:00+0000 | online | | 1 | July, 05 2014 15:00:00+0000 | offline | | 2 | July, 05 2014 07:00:00+0000 | offline | | 2 | July, 05 2014 13:00:00+0000 | online | | 2 | July, 05 2014 14:00:00+0000 | offline | ```
You could try this (very similar to bitsm post): ``` select user_id, Max(timestamp) as timestamp, status FROM online_users_changes WHERE timestamp < "2014-07-05 11:00" group by user_id, status union all SELECT user_id, timestamp, status FROM online_users_changes WHERE timestamp BETWEEN "2014-07-05 11:00" and "2014-07-05 22:00" ORDER BY user_id, timestamp; ``` Here's the sqlfiddle for you. <http://sqlfiddle.com/#!2/8147a9/8> **UPDATE** I tried the following update, based on new data given. It may need some cleanup though, as I personally think the query looks nasty, despite it's success in working. ``` SELECT user_id, max(timestamp) as timestamp, status from (SELECT user_id, timestamp, status FROM online_users_changes WHERE timestamp < "2014-07-05 11:00" GROUP BY user_id, status ) a group by user_id union all (SELECT user_id, timestamp, status FROM online_users_changes WHERE timestamp BETWEEN "2014-07-05 11:00" and "2014-07-05 22:00" ) ORDER BY user_id, timestamp; ``` SQL Fiddle here. <http://sqlfiddle.com/#!2/0bd390/44> IF anyone has any other suggestions, feel free to provide another suggestion.
"the most recent event before a given date" = "the first event that took place before a given date, when sorting events in reverse chronological order" = ``` SELECT * FROM online_users_changes WHERE timestamp < "your date and time here" ORDER BY timestamp DESC LIMIT 1 ``` Then `UNION` this row with the main query. Beware, using `ORDER BY` clauses together with `UNION` is somewhat tricky, you'll need parenthesis at the right places (the [manual pages](http://dev.mysql.com/doc/refman/5.6/en/union.html) have it all).
How to get the first row before specific date for each user and the rows after in one query?
[ "mysql", "sql", "fetch", "mariadb" ]
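Here is one way to express the accepted idea ("latest change before the window, per user, plus all rows inside the window") as a single statement, sketched with Python's `sqlite3` as an adaptation of the MySQL answer. A correlated `MAX` subquery replaces the grouped first branch of the `UNION`; note the hours are zero-padded so plain string comparison orders the timestamps correctly:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE online_users_changes (
        id INTEGER PRIMARY KEY, user_id INT, timestamp TEXT, status TEXT);
    INSERT INTO online_users_changes (user_id, timestamp, status) VALUES
        (1, '2014-07-05 09:00', 'idle'),
        (1, '2014-07-05 12:00', 'online'),
        (1, '2014-07-05 15:00', 'offline'),
        (2, '2014-07-05 07:00', 'offline'),
        (2, '2014-07-05 13:00', 'online'),
        (2, '2014-07-05 14:00', 'offline');
""")

sql = """
    SELECT user_id, timestamp, status
    FROM online_users_changes o
    WHERE timestamp < :start
      AND timestamp = (SELECT MAX(timestamp) FROM online_users_changes i
                       WHERE i.user_id = o.user_id AND i.timestamp < :start)
    UNION ALL
    SELECT user_id, timestamp, status
    FROM online_users_changes
    WHERE timestamp >= :start AND timestamp < :end
    ORDER BY user_id, timestamp
"""
rows = con.execute(sql, {"start": "2014-07-05 11:00",
                         "end": "2014-07-05 22:00"}).fetchall()
for r in rows:
    print(r)
```

This returns all six rows from the question's desired output: the most recent pre-window change per user plus every change inside the window.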
I have a column that is a maximum size of 9 characters and I want to do a query that will look at all rows which are less than 9. This: ``` SELECT * FROM tEmpCourseAssoc WHERE ({ fn LENGTH(EmployeeID) } < 9) ``` Now I want to add zeros to the front of the column EmployeeID if it's less than 9 characters so it adds up to nine characters. For example if the column EmployeeID is 1234567 it would get updated to 001234567 and if it's 12345678 it would get updated to 012345678 I think it's something like: ``` SELECT RIGHT('0000000' + CAST(myField AS VARCHAR), 9) ``` But I'm not sure how to implement this. I plan on excuting the query straight from Enterprise Manager. The column EmployeeID is type varchar(10)
You can try this ``` UPDATE tEmpCourseAssoc SET EmployeeID= RIGHT(REPLICATE('0',9) + EmployeeID, 9) WHERE LEN(EmployeeID) < 9 ```
``` SELECT RIGHT(REPLICATE('0', 9) + EmployeeID, 9) AS Expr1 FROM tEmpCourseAssoc WHERE ({ fn LENGTH(EmployeeID) } < 9) ```
Add zeros to front of column
[ "sql", "sql-server" ]
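The accepted `RIGHT(REPLICATE(...))` padding is T-SQL; as a runnable illustration, here is the same update translated to SQLite (which lacks `RIGHT` and `REPLICATE`) via Python's `sqlite3`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tEmpCourseAssoc (EmployeeID VARCHAR(10))")
con.executemany("INSERT INTO tEmpCourseAssoc VALUES (?)",
                [("1234567",), ("12345678",), ("123456789",)])

# substr(x, -9) takes the last 9 characters, mirroring
# RIGHT(REPLICATE('0', 9) + EmployeeID, 9) in T-SQL.
con.execute("""
    UPDATE tEmpCourseAssoc
    SET EmployeeID = substr('000000000' || EmployeeID, -9)
    WHERE length(EmployeeID) < 9
""")
padded = [r[0] for r in
          con.execute("SELECT EmployeeID FROM tEmpCourseAssoc ORDER BY EmployeeID")]
print(padded)  # ['001234567', '012345678', '123456789']
```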
In my `info` DB I have 4 tables, two of which are named `girls` and `boys`, and both have an `id` column. `boys` contains 12 `id`s (12 records) and `girls` contains 8 `id`s. When I apply this query: ``` SELECT * FROM boys NATURAL JOIN girls ``` I get the output ``` Empty Set ``` How could this happen? (Both `id` columns are `INT, NOT NULL, AUTO_INCREMENT`.)
`NATURAL JOIN` is shorthand for an INNER JOIN with equality predicates on columns of the same name. For example, given two tables "m" and "f", which have columns named "id" and "name" in common, a NATURAL JOIN like this ``` SELECT ... FROM m NATURAL JOIN f ``` Is equivalent to: ``` SELECT ... FROM m JOIN f ON f.id = m.id AND f.name = m.name ``` Only rows that "match" in the two tables will be returned in the result, all other rows are not returned. For example, if we had table contents: ``` m: id name -- --------------- 1 peter 2 paul 4 michael jackson f: id name -- --------------- 3 mary 4 michael jackson ``` The queries above would return only the rows from "m" and "f" that have id=4 (as a single row), because those are the only rows that "match".
Natural join will only work if the same named columns' values are same. ie if ID and Name both are same in both the tables. If you need result like this : ![Result](https://i.stack.imgur.com/KsyQX.png) Then you must name the 'name' col different in both the tables. If you want a set of all boys and girls, you should use UNION. ``` select * from boys union select * from girls ``` Example : Boys Table ![Boys Table](https://i.stack.imgur.com/HXpBX.png) Girls Table ![Girls Table](https://i.stack.imgur.com/qYQy2.png) Natural Join ![Natural Join](https://i.stack.imgur.com/k2gsN.png) Union Result ![Union](https://i.stack.imgur.com/coaF9.png)
why natural join give output as empty set in following situation?
[ "mysql", "sql" ]
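A small demonstration of why the result is empty, using Python's `sqlite3` with invented two-row tables: `NATURAL JOIN` equates every column the tables share, so with both `id` and `name` in common, a row pair must match on both at once.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE boys  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE girls (id INTEGER PRIMARY KEY, name TEXT);
    INSERT INTO boys  VALUES (1, 'peter'), (2, 'paul');
    INSERT INTO girls VALUES (1, 'mary'),  (2, 'jane');
""")

# NATURAL JOIN equates every same-named column: here both id AND name
# would have to match, and no (id, name) pair exists in both tables.
natural = con.execute("SELECT * FROM boys NATURAL JOIN girls").fetchall()
print(natural)  # []

# UNION combines the two row sets instead of matching them.
union = con.execute(
    "SELECT * FROM boys UNION SELECT * FROM girls ORDER BY name").fetchall()
print(len(union))  # 4
```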
I have a table with a large number of records ( > 300,000). The most relevant fields in the table are: * CREATE\_DATE * MOD\_DATE Those are updated every time a record is added or updated. I now need to query this table to find the date of the record that was modified last. I'm currently using ``` SELECT mod_date FROM table ORDER BY mod_date DESC LIMIT 1; ``` But I'm wondering if this is the most efficient way to get the answer. I've tried adding a `where clause` to limit the date to the last month, but it looks like that's actually slower (and I need the most recent date, which could be older than the last month). I've also tried the suggestion I read elsewhere to use: ``` SELECT UPDATE_TIME FROM information_schema.tables WHERE TABLE_SCHEMA = 'db' AND TABLE_NAME = 'table'; ``` But since I might be working on a dump of the original that query might result into NULL. And it looks like this is actually slower than the original query. I can't resort to `last_insert_id()` because I'm not updating or inserting. I just want to make sure I have the most efficient query possible.
The most efficient way for this query would be to use an index for the column `MOD_DATE`. From [How MySQL Uses Indexes](http://dev.mysql.com/doc/refman/5.6/en/mysql-indexes.html) > **8.3.1 How MySQL Uses Indexes** > Indexes are used to find rows with specific column values quickly. > Without an index, MySQL must begin with the first row and then read > through the entire table to find the relevant rows. The larger the > table, the more this costs. If the table has an index for the columns > in question, MySQL can quickly determine the position to seek to in > the middle of the data file without having to look at all the data. If > a table has 1,000 rows, this is at least 100 times faster than reading > sequentially. You can use ``` SHOW CREATE TABLE your_table; ``` to get the CREATE statement and see if an index on MOD_DATE is defined. To add an index you can use ``` CREATE [UNIQUE|FULLTEXT|SPATIAL] INDEX index_name [index_type] ON tbl_name (index_col_name,...) [index_option] [algorithm_option | lock_option] ... ``` see <http://dev.mysql.com/doc/refman/5.6/en/create-index.html>
Make sure that both of those fields are indexed. Then I would just run - ``` select max(mod_date) from table ``` or create\_date, whichever one. Make sure to create 2 indexes, one on each date field, not a compound index on both. As for a discussion of the difference between this and using limit, see [MIN/MAX vs ORDER BY and LIMIT](https://stackoverflow.com/questions/426731/min-max-vs-order-by-and-limit)
Most efficient query to get last modified record in large table
[ "mysql", "sql" ]
Let me start first with description of the table I have. One column is a company id column (integer value), another column is a date in format yyyymmdd (integer value). These two columns taken together uniquely identify the entries in my table. The table (the way I think of it at least) is ordered by Company\_id, Date. The table has several other columns. I will call the one I am interested in mycolumn (integer value). Also sorry for the formatting below but I don't know how to create a proper table over here. ``` Company_id Date mycolumn 1 20121015 1 1 20121113 1 1 20130108 2 1 20130207 2 1 20130409 2 1 20130815 1 2 20050611 7 2 20080719 7 4 20091114 3 4 20091215 3 4 20100304 5 4 20110215 5 ``` What I am interested in is changes in the mycolumn for each company id and the dates around the change. For example for company with id 1 there are 2 changes (from 1 to 2 and then from 2 to 1), for company with id 2 there are no changes and for company with id 4 there is a single change from 3 to 5. The output table should be: ``` Company_id Date mycolumn 1 20121113 1 1 20130108 2 1 20130409 2 1 20130815 1 4 20091215 3 4 20100304 5 ``` I know I can do an intermediary step like selecting companies with more than 1 mycolumn value and then use a join statement to exclude the ones with no changes from my table. But I don't know what to do next... Well, I did come up with something, but it is both messy and not working properly. What I did initially was adding 2 columns with the first and last dates showing up for each company id - mycolumn combination. Then I used several steps to get where I wanted. That works well for companies like the last one where you go from value 3 to value 5, but it messes up for companies like the first one where you go from 1 to 2 and then back to 1... Thanks for any help!
You can use `lead` and `lag`. ``` with C as ( select *, lag(mycolumn) over(partition by company_id order by Date) as lagmycolumn, lead(mycolumn) over(partition by company_id order by Date) as leadmycolumn from YourTable ) select company_id, Date, mycolumn from C where mycolumn <> lagmycolumn or mycolumn <> leadmycolumn ```
Try this: ``` select company_id, mycolumn, max(date) from tableName group by company_id, mycolumn ``` Cheers !!
Select all rows from a table where value on a column differs from previous value on that column
[ "sql", "sql-server" ]
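The accepted `lag`/`lead` query can be verified against the question's sample data with Python's `sqlite3` (window functions need SQLite 3.25+, which ships with recent Python builds):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t (Company_id INT, Date INT, mycolumn INT);
    INSERT INTO t VALUES
        (1,20121015,1),(1,20121113,1),(1,20130108,2),(1,20130207,2),
        (1,20130409,2),(1,20130815,1),(2,20050611,7),(2,20080719,7),
        (4,20091114,3),(4,20091215,3),(4,20100304,5),(4,20110215,5);
""")

sql = """
    WITH C AS (
        SELECT *,
               LAG(mycolumn)  OVER (PARTITION BY Company_id ORDER BY Date) AS prev_val,
               LEAD(mycolumn) OVER (PARTITION BY Company_id ORDER BY Date) AS next_val
        FROM t
    )
    SELECT Company_id, Date, mycolumn
    FROM C
    -- a NULL lag/lead comparison never makes a row qualify on its own
    WHERE mycolumn <> prev_val OR mycolumn <> next_val
    ORDER BY Company_id, Date
"""
rows = con.execute(sql).fetchall()
print(rows)
# [(1, 20121113, 1), (1, 20130108, 2), (1, 20130409, 2),
#  (1, 20130815, 1), (4, 20091215, 3), (4, 20100304, 5)]
```

The six returned rows match the expected output table in the question, including the company with no changes (id 2) being absent.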
I have a table named grades. A column named Students, Practical, Written. I am trying to figure out the top 5 students by total score on the test. Here are the queries that I have not sure how to join them correctly. I am using oracle 11g. This get's me the total sums from each student: ``` SELECT Student, Practical, Written, (Practical+Written) AS SumColumn FROM Grades; ``` This gets the top 5 students: ``` SELECT Student FROM ( SELECT Student, , DENSE_RANK() OVER (ORDER BY Score DESC) as Score_dr FROM Grades ) WHERE Student_dr <= 5 order by Student_dr; ```
The approach I prefer is data-centric, rather than row-position centric: ``` SELECT g.Student, g.Practical, g.Written, (g.Practical+g.Written) AS SumColumn FROM Grades g LEFT JOIN Grades g2 on g2.Practical+g2.Written > g.Practical+g.Written GROUP BY g.Student, g.Practical, g.Written HAVING COUNT(*) < 5 ORDER BY g.Practical+g.Written DESC ``` This works by joining each student with all students that have greater scores, then using a HAVING clause to keep only those with fewer than 5 higher scorers - giving you the top 5. The left join is needed to return the top scorer(s), which have no other students with greater scores to join to. Ties are all returned, leading to more than 5 rows in the case of a tie for 5th. By not using row-position logic, which varies from database to database, this query is also completely portable. Note that the ORDER BY is optional.
You can easily include the projection of the first query in the sub-query of the second. ``` SELECT Student , Practical , Written , tot_score FROM ( SELECT Student , Practical , Written , (Practical+Written) AS tot_score , DENSE_RANK() OVER (ORDER BY (Practical+Written) DESC) as Score_dr FROM Grades ) WHERE Student_dr <= 5 order by Student_dr; ``` One virtue of analytic functions is that we can just use them in any query. This distinguishes them from aggregate functions, where we need to include all non-aggregate columns in the GROUP BY clause (at least with Oracle).
Trying to figure out how to join these queries
[ "sql", "oracle", "top-n" ]
I have requirement where in I am creating a table dynamically(number of columns may change based on input parameter to my procedure through which I am creating this table) with data in the table like below. ``` PK col1 col2 col3 A null 1-2 3-4 B null null 4-5 C null 5-6 null ``` Now the requirement is I want to extract only the columns where at least there should be 1 record without null and spool the whole data into a file. My output should be like below (col1 exlluded from output as it has all nulls). ``` PK col2 col3 A 1-2 3-4 B null 4-5 C 5-6 null ``` Can anybody provide any hints to achieve this. Thanks in advance.
I used num\_nulls from all\_tab\_cols and achieved result as per my requirement. Thank you.
This won't be very efficient I suspect, but you can use COUNT() to determine if there are any only NULLS in a column because COUNT(column\_here) will only add 1 for each non-null value. Hence if the count is zero that column is only NULLs. This can then be combined into a query to generate a valid select statement and then that executed immediate (being careful of course to avoid sql injection). Anyway, here's an example: ``` select 'select ' || substr(( select case when count(COL1) > 0 then ',col1' else '' end || case when count(COL2) > 0 then ',col2' else '' end || case when count(COL3) > 0 then ',col3' else '' end from a_table ),2,8000) || ' from ' || ' a_table' as sql_string from dual ; ``` see [this sqlfiddle](http://sqlfiddle.com/#!4/1669a/2) result by the above is: ``` | SQL_STRING | |--------------------------------| | select col2,col3 from a_table | ```
Customize SQL query
[ "sql", "oracle" ]
There are data in a date column like: > '2014-03-01', '2014-05-01', '2014-07-01' ... I want to get distinct years with the IF clause. For example, if the date is in 'YEAR-07-01' ~ '(YEAR+1)-06-30', then it returns YEAR, else it returns YEAR-1. For Example, if the data of the date column is like: > '2000-06-01', '2000-07-01', '2005-07-01', '2005-08-01', '2008-07-01' I want to get '1999', '2000', '2005', '2008' I am thinking ``` SELECT ( IF t.DATE IN YEAR(t.DATE) + '-07-01' AND YEAR(t.DATE)+1 + '-06-30' THEN YEAR(t.DATE) ELSE YEAR(t.DATE)+1 ) FROM THE_TABLE t WHERE PRIMARY_KEY = 123; ``` But I don't know how to write it correctly in SQL Server.
It appears to me you're doing a calculation based on a fiscal year ending 6/30 and starting 7/1. Try this: ``` select distinct case when Month(t.DATE) >=7 then Year(t.DATE) else Year(t.DATE)-1 end as FiscalYear from THE_TABLE t order by FiscalYear desc ```
Even better would be to create and use a calendar table. Not only is it easier to work with it can then be flexible across different years. Take a look at this article. <http://www.sqlservercentral.com/articles/T-SQL/70482/>
Get value with the clause IF and IN
[ "sql", "sql-server" ]
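A runnable check of the fiscal-year `CASE` logic against the question's sample dates, sketched with Python's `sqlite3` (`strftime` stands in for SQL Server's `Month()`/`Year()`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (d TEXT)")
con.executemany("INSERT INTO t VALUES (?)", [
    ("2000-06-01",), ("2000-07-01",), ("2005-07-01",),
    ("2005-08-01",), ("2008-07-01",),
])

# Fiscal year starting July 1: months 7-12 keep the calendar year,
# months 1-6 belong to the previous fiscal year.
sql = """
    SELECT DISTINCT
           CASE WHEN CAST(strftime('%m', d) AS INTEGER) >= 7
                THEN CAST(strftime('%Y', d) AS INTEGER)
                ELSE CAST(strftime('%Y', d) AS INTEGER) - 1
           END AS fiscal_year
    FROM t
    ORDER BY fiscal_year
"""
years = [r[0] for r in con.execute(sql)]
print(years)  # [1999, 2000, 2005, 2008]
```

The result matches the '1999', '2000', '2005', '2008' set requested in the question.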
I have 2 tables right now and they look like the following: The `points` table looks like the following: ``` id country points 1 4 1 2 7 5 3 8 4 ``` The `sample` table looks like the following: ``` id iso pts 1 UK 100 2 US 300 3 AU 700 4 BO 1200 5 BA 1500 6 BR 2000 7 HR 5000 8 TD 10000 9 CA 15000 ``` What I basically want is to select ALL data from the `points` table where the `points.country` and `points.points` corresponds to the `sample.iso` and `sample.pts`. So the result I'd like to achieve is: ``` id country points 1 BO 100 2 HR 1500 3 TD 1200 ``` Is this actually achievable? If yes, how?
You will have to join to the `sample` table twice in order to get the information you are after ([**SQL FIDDLE**](http://sqlfiddle.com/#!2/114ec/4/0)): ``` SELECT p.id, s1.iso AS country, s2.pts AS points FROM points p INNER JOIN sample s1 ON p.country = s1.id INNER JOIN sample s2 ON p.points = s2.id ```
with the help of left join or any join you can do it ``` select t1.id,t1.country,t1.points from table1 as t1 left join table2 as t2 on t1.id=t2.country ```
How to SELECT multiple columns with MySQL based on multiple records?
[ "mysql", "sql", "join" ]
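The accepted double join can be verified against the question's sample data with Python's `sqlite3`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE points (id INT, country INT, points INT);
    INSERT INTO points VALUES (1,4,1),(2,7,5),(3,8,4);
    CREATE TABLE sample (id INT, iso TEXT, pts INT);
    INSERT INTO sample VALUES
        (1,'UK',100),(2,'US',300),(3,'AU',700),(4,'BO',1200),(5,'BA',1500),
        (6,'BR',2000),(7,'HR',5000),(8,'TD',10000),(9,'CA',15000);
""")

# Join the lookup table twice: s1 resolves the country id, s2 the points id.
sql = """
    SELECT p.id, s1.iso AS country, s2.pts AS points
    FROM points p
    JOIN sample s1 ON p.country = s1.id
    JOIN sample s2 ON p.points  = s2.id
    ORDER BY p.id
"""
rows = con.execute(sql).fetchall()
print(rows)  # [(1, 'BO', 100), (2, 'HR', 1500), (3, 'TD', 1200)]
```

This reproduces the expected BO/HR/TD result from the question.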
I have three tables as follows: table a with (id, name) as primary key; table b with (id, name, country) as primary key and c_id as a foreign key; table c with (id) as primary key and another column, type. I would like to select id, name, type from these tables, where country='CA'. My database is MySQL. I wrote the SQL as follows: ``` select n.activity_date,n.advertiser_id,c.type as type from a JOIN b as n on (a.id=b.id,a.name=b.name) JOIN c on n.c_id=c.id where b.country='CA' limit 10; ``` The error is ``` ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'as n on (a.activity_date=d.activity_date,a.advertiser_id=d.advertiser_id) JOIN t' at line 2 ```
You have to use AND for multiple criteria joins, not a comma. ``` select n.activity_date,n.advertiser_id,c.type as type from a JOIN b as n on a.id=n.id AND a.name=n.name JOIN c on n.c_id=c.id where n.country='CA' limit 10; ```
Your query should be ``` select n.activity_date,n.advertiser_id,c.type as type from a JOIN b as n -- because of the alias name you've got to reference table b as n -- and you've omitted the AND operator on (a.id=n.id AND a.name=n.name) JOIN c on n.c_id=c.id where n.country='CA' -- you've got to use n instead of b limit 10; ```
How to join three tables with multiple primary key
[ "mysql", "sql" ]
When I run the following mysql statement, I get **127 rows** returned: ``` SELECT t.strat_id, t.candle_id, t.spot_entry, t.spot_close, t.f,t.fa,t.b,t.we,t.direct,t.date_created,t.date_closed,t.notes,t.results FROM cp_user_trades AS t INNER JOIN candles2 AS c ON t.candle_id = c.id WHERE t.user_login = "user" AND t.active=0 AND t.deleted=0 ORDER BY t.date_closed DESC ``` When I remove the `INNER JOIN` I get **131 rows** returned: ``` SELECT t.strat_id, t.candle_id, t.spot_entry, t.spot_close, t.f,t.fa,t.b,t.we,t.direct,t.date_created,t.date_closed,t.notes,t.results FROM cp_user_trades AS t WHERE t.user_login = "user" AND t.active=0 AND t.deleted=0 ORDER BY t.date_closed DESC ``` How can I `SELECT` the **4 rows** that are not being returned in the first statement? Thank you!
Assuming `c.id` is the primary key of the `candles2` table: change the `INNER JOIN` to a `LEFT JOIN` and add `WHERE c.id IS NULL`. The inner join drops the 4 trade rows whose `candle_id` has no match in `candles2`; a left join keeps them with NULL values for the `candles2` columns, so filtering on `c.id IS NULL` returns exactly those 4 missing rows.
To find ids between to tables, I use `union all` and `group by`: ``` select in_cut, in_c2, count(*) as cnt, min(candle_id), max(candle_id) from (select candle_id, sum(in_cut) as in_cut, sum(in_c2) as in_c2 from ((select candle_id, 1 as in_cut, 0 as in_c2 from cp_user_trades ) union all (select id, 0, 1 from candles2 ) ) cc group by candle_id ) c group by in_cut, in_c2; ``` This gives you the three possibilities of ids in the two tables (in the first, in the second, and in both). It shows whether ids are duplicated in either table, and it gives examples of the ids.
Find missing ids between mysql tables
[ "mysql", "sql" ]
For now I have the following query: ``` SELECT "type" AS "modifiedType" FROM "Table" WHERE "type" = 'type1' OR "type" = 'type2' ``` What I want is to return `modifiedType` like this: ``` if "type" = 'type1' then 'modifiedType1' else if "type" = 'type2' then 'modifiedType2' ``` So I just want to replace the column value with another value based on the original column value. The type column is an ENUM, not a string. I am using Postgres 9.3 (or 9.4?).
Use a `CASE` statement: ``` select type, case when type = 'type1' then 'modifiedType1' when type = 'type2' then 'modifiedType2' else type end as modifiedType from the_table WHERE type in ('type1', 'type2') ``` Btw: `type` is not a good name for a column
A [**simple `CASE`**](http://www.postgresql.org/docs/current/interactive/functions-conditional.html#FUNCTIONS-CASE#) is more efficient with multiple alternatives for a single condition: ``` SELECT CASE type WHEN 'type1' THEN 'modifiedType1' WHEN 'type2' THEN 'modifiedType2' ELSE type END AS modified_type FROM tbl; ``` BTW, for a number of alternatives, it's shorter to write your `WHERE` clause like this: ``` WHERE type IN ('type1', 'type2', 'type3', ...) ``` Or even ``` WHERE type = ANY('{type1, type2, type3, ...}') ``` All variants are translated to the *same* `OR` list internally.
How to transform SELECT output column to arbitrary value based on this column?
[ "sql", "postgresql" ]
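A minimal runnable sketch of the simple-`CASE` mapping using Python's `sqlite3`; the ENUM column is approximated as TEXT, since SQLite has no enum types:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (type TEXT)")  # the real column is a Postgres ENUM
con.executemany("INSERT INTO t VALUES (?)",
                [("type1",), ("type2",), ("type1",)])

# A simple CASE tests one expression against several values.
sql = """
    SELECT CASE type
             WHEN 'type1' THEN 'modifiedType1'
             WHEN 'type2' THEN 'modifiedType2'
           END AS modified_type
    FROM t
    WHERE type IN ('type1', 'type2')
"""
mapped = [r[0] for r in con.execute(sql)]
print(mapped)  # ['modifiedType1', 'modifiedType2', 'modifiedType1']
```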
I am trying to count the same column for different conditions, but I am getting the later count result for the other conditions also. For this SQL: ``` select count(eqa.ID) as gold_yes from EC_SECTION_ANSWER esa , EC_QUESTION_ANSWER eqa where esa.SC_NUMBER = 1079 and esa.ID = eqa.SC_ID and eqa.QU_ANSWER = 'Yes' and eqa.QU_TYPE = 'abc'; ``` I am getting the output `gold_yes:` `1`. Again, if I run the SQL with a different condition: ``` select count(eqa.ID) as gold_no from EC_SECTION_ANSWER esa , EC_QUESTION_ANSWER eqa where esa.SC_NUMBER = 1079 and esa.ID = eqa.SC_ID and eqa.QU_ANSWER = 'No' and eqa.QU_TYPE = 'abc'; ``` I am getting the output `gold_no:` `0`. Now if I run the combined SQL: ``` select count(eqa.ID) as gold_yes ,count(eqa_2.ID) as gold_no from EC_SECTION_ANSWER esa , EC_QUESTION_ANSWER eqa ,EC_SECTION_ANSWER esa_2 , EC_QUESTION_ANSWER eqa_2 where esa.SC_NUMBER = 1079 and esa.ID = eqa.SC_ID and eqa.QU_ANSWER = 'Yes' and eqa.QU_TYPE = 'abc' and esa_2.SC_NUMBER = 1079 and esa_2.ID = eqa_2.SC_ID and eqa_2.QU_ANSWER = 'No' and eqa_2.QU_TYPE = 'abc'; ``` I am getting the output: `gold_yes:` `0` and `gold_no:` `0`, which should be `gold_yes:` `1` and `gold_no:` `0`. Can anybody please point out the mistake I am making? Thanks in advance.
The `WHERE` condition in your query are the problem: you can't have both `eqa.QU_ANSWER = 'Yes'` and `eqa.QU_ANSWER = 'No'`, so the query return 0 rows, and the count of the values of 0 rows is also 0. You can change the condition to `OR`, or you can take the same condition in the `COUNT`, while doing that you should also change the query to use explicit `JOIN` instead of writing the join condition in the `WHERE` clause. ``` SELECT COUNT(CASE WHEN eqa.QU_ANSWER = 'Yes' THEN 1 ELSE NULL END) as gold_yes , COUNT(CASE WHEN eqa.QU_ANSWER = 'No' THEN 1 ELSE NULL END) as gold_no FROM EC_SECTION_ANSWER esa INNER JOIN EC_QUESTION_ANSWER eqa ON esa.ID = eqa.SC_ID WHERE esa.SC_NUMBER = 1079 AND eqa.QU_TYPE = 'abc'; ```
Put your conditions from the `where` clause into a `case` and sum it up ``` select sum(case when esa.SC_NUMBER = 1079 and eqa.QU_ANSWER = 'Yes' and eqa.QU_TYPE = 'abc' then 1 else 0 end) as gold_yes ... from EC_SECTION_ANSWER esa join EC_QUESTION_ANSWER eqa on esa.ID = eqa.SC_ID ```
SQL count from same column but different conditions fetching later count result for all
[ "sql" ]
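To see the conditional-aggregation fix in action, here is a sketch with Python's `sqlite3` on invented rows (table and column names are shortened from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE esa (id INT, sc_number INT);
    INSERT INTO esa VALUES (10, 1079), (11, 1079), (12, 9999);
    CREATE TABLE eqa (id INT, sc_id INT, qu_answer TEXT, qu_type TEXT);
    INSERT INTO eqa VALUES
        (1, 10, 'Yes', 'abc'), (2, 11, 'No', 'abc'),
        (3, 11, 'Yes', 'xyz'), (4, 12, 'Yes', 'abc');
""")

# COUNT(expr) ignores NULLs, so each CASE counts only its own answer
# value in a single pass over the join -- no second copy of the tables.
sql = """
    SELECT COUNT(CASE WHEN eqa.qu_answer = 'Yes' THEN 1 END) AS gold_yes,
           COUNT(CASE WHEN eqa.qu_answer = 'No'  THEN 1 END) AS gold_no
    FROM esa JOIN eqa ON esa.id = eqa.sc_id
    WHERE esa.sc_number = 1079 AND eqa.qu_type = 'abc'
"""
counts = con.execute(sql).fetchone()
print(counts)  # (1, 1)
```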
I have data like this: ``` a b c -------|--------|-------- 100 | 3 | 50 100 | 4 | 60 101 | 3 | 70 102 | 3 | 70 102 | 4 | 80 102 | 5 | 90 ``` `a` : key `b` : sub_id `c` : value I want to set `c` to NULL in every row that does not hold the maximum `c` value within its `a` group. My resulting table must look like: ``` a b c -------|--------|-------- 100 | 3 | NULL 100 | 4 | 60 101 | 3 | 70 102 | 3 | NULL 102 | 4 | NULL 102 | 5 | 90 ``` How can I do this with an SQL query? **#UPDATE** My relational table has about a billion rows; please keep that in mind when answering. I cannot wait a couple of hours or a day for the query to execute.
Updated after the requirement was changed to "update the table": ``` with max_values as ( select a, b, max(c) over (partition by a) as max_c from the_table ) update the_table set c = null from max_values mv where mv.a = the_table.a and mv.b = the_table.b and mv.max_c <> the_table.c; ``` SQLFiddle: <http://sqlfiddle.com/#!15/1e739/1> Another possible solution, which might be faster (but you need to check the execution plan) ``` update the_table t1 set c = null where exists (select 1 from the_table t2 where t2.a = t1.a and t2.b = t2.b and t1.c < t2.c); ``` SQLFiddle: <http://sqlfiddle.com/#!15/1e739/2> But with "billion" rows there is no way this is going to be really fast.
``` DECLARE @TAB TABLE (A INT,B INT,C INT) INSERT INTO @TAB VALUES (100,3,50), (100,4,60), (101,3,70), (102,3,70), (102,4,80), (102,5,90) UPDATE X SET C = NULL FROM @TAB X LEFT JOIN ( SELECT A,MAX(C) C FROM @TAB GROUP BY A) LU ON X.A = LU.A AND X.C = LU.C WHERE LU.A IS NULL SELECT * FROM @TAB ``` Result: ![enter image description here](https://i.stack.imgur.com/YU2ub.jpg) this approach will help you
Discard rows which is not MAX in that group
[ "", "sql", "database", "postgresql", "" ]
I am creating a stored procedure (with Sql Server) involving a single table. I'd like to apply the conditions in my WHERE clause to a pair of columns, i.e. I would like something like this:

```
CREATE PROCEDURE afficherCreationLuns (@start DATE, @end DATE)
AS
    SELECT dateMin = MIN(t1.dateCollecte),
           displayName,
           label,
           date = DATEADD(day, -1, MIN(dateCollecte)),
           capacityInKB = MIN(capacityInKB)
    FROM Vsp
    WHERE dateCollecte > @start
      AND dateCollecte <= @end
      AND (label, displayName) NOT IN (SELECT label, displayName
                                       FROM Vsp
                                       WHERE dateCollecte = @start)
    GROUP BY displayName, label
```

After some research, I changed the code to use the EXISTS clause, which seemed to better suit my needs. Here is what I have now:

```
CREATE PROCEDURE afficherCreationLuns (@start DATE, @end DATE)
AS
    SELECT dateMin = MIN(t1.dateCollecte),
           t1.displayName,
           t1.label,
           date = DATEADD(day, -1, MIN(t1.dateCollecte)),
           capacityInKB = MIN(t1.capacityInKB)
    FROM Vsp t1
    WHERE EXISTS (SELECT t2.displayName, t2.label
                  FROM Vsp t2
                  WHERE t1.displayName = t2.displayName
                    AND t1.label = t2.label
                    AND t2.dateCollecte > @start
                    AND t2.dateCollecte <= @end
                    AND (t1.label NOT IN (SELECT t3.label
                                          FROM Vsp t3
                                          WHERE t3.dateCollecte = @start)
                         OR t1.displayName NOT IN (SELECT t4.displayName
                                                   FROM Vsp t4
                                                   WHERE t4.dateCollecte = @start)
                        )
                 )
    GROUP BY t1.displayName, t1.label
```

But it didn't change anything.
You can replace `NOT IN` with `NOT EXISTS`, like this: ``` SELECT dateMin = MIN(t1.dateCollecte) , t1.displayName , t1.label , date = DATEADD(day, -1, MIN(t1.dateCollecte)) , capacityInKB = MIN(t1.capacityInKB) FROM Vsp t1 WHERE t1.dateCollecte > @start AND t1.dateCollecte <= @end AND NOT EXISTS ( SELECT * FROM Vsp t2 WHERE t2.dateCollecte = @start AND t2.label=t1.label AND t2.displayName=t1.displayName ) GROUP BY t1.displayName, t1.label ``` The idea is to run the inner select with the additional condition that matches `label` and `displayName` to the corresponding fields of the outer select, and drop records of the outer select where such a match can be found.
You could concatenate the two columns together? Like so: ``` SELECT * FROM [TABLE] WHERE COLUMN1+' '+COLUMN2 NOT IN ( SELECT COLUMN1+' '+COLUMN2 FROM [OTHER-TABLE]) ``` Thanks.
Can we use NOT IN on multiple columns?
[ "", "sql", "sql-server", "stored-procedures", "where-clause", "" ]
If I have two tables like this: ``` CREATE TABLE Director ( `DirectorNr` INTEGER NOT NULL, `First Name` VARCHAR (20), `Last Name` VARCHAR (30), `Age` INTEGER, `Movies` INTEGER, PRIMARY KEY(DirectorNr) ); CREATE TABLE Movies ( `MovieNr` INTEGER NOT NULL, `Title` VARCHAR (100), `Genre` VARCHAR (30), `USK` INTEGER, `Director` INTEGER NOT NULL, `Length` INTEGER, `Release` DATE, PRIMARY KEY (MovieNr), FOREIGN KEY (Director) REFERENCES Director (DirectorNr) ); INSERT INTO Director VALUES (1, 'Peter', 'Jackson', 52, 17); INSERT INTO Movies VALUES (1, 'The Hobbit: An Unexpected Journey', 'Fantasy', 12, 1, 169, '2012/12/12'); ``` how can i get the Director's name through the foreign key, with a SELECT? Something like this: ``` SELECT `Director.First Name` `Director.Last Name` FROM Movies ... ```
Join the `Director` table and specify the relation in the `on` condition:

```
SELECT m.title, d.`First Name`, d.`Last Name`
FROM Movies m
JOIN Director d ON m.Director = d.DirectorNr
```

Beware: with that design you can only have a single director for a movie.

To escape table or column names, use backticks only around the name of the element, not around the combination of table and column.

Wrong:

```
`director.first name`
```

Correct:

```
director.`first name`
```
``` SELECT d.`First Name`, d.`Last Name` FROM Director d WHERE DirectorNr IN (SELECT Director FROM Movies WHERE Director=value ); ```
Get values of a foreign key reference
[ "", "mysql", "sql", "" ]
I have 3 drop-down lists on my page and a data-list. All of them are populated from a SQL database, each based on the selected item of the previous drop-down. AutoPostBack is set to true on all three dropdowns, and on first page load they are displayed correctly, i.e. the first item is selected in all three dropdowns and the result is displayed in the data-list based on that selection. But when I change the selection in the first drop-down, only the second drop-down gets updated (sometimes it updates the third, but it's inconsistent), and to get the third one to update I need to change the selection on the second one. The result in the data-list mostly remains the same, and only updates when I change the selection in the third drop-down. What do I need to do to get the data-list updated when I change the selection in the first drop-down?
Without looking at your code it is difficult to answer the question, but from the description it looks like you are setting AutoPostBack=true without handling the OnSelectedIndexChanged event. First of all, you should add an OnSelectedIndexChanged event handler for all three dropdowns, like below.

```
protected void dropDownList1_SelectedIndexChanged(object sender, EventArgs e)
{
}
```

In each SelectedIndexChanged event you need to bind the next dropdown list. You could also have one common function for binding the data-list, which you call from each SelectedIndexChanged event.

If you just set AutoPostBack=true and do not handle SelectedIndexChanged, it will not work as expected.
To get the data-list updated when the first drop-down changes, you must trigger the update manually for the second and third drop-downs. Assuming the data in the third drop-down is based on the selection in the second: try to have the first item or a default item selected, which will populate the third list. A good practice is to have a default element that is not specific; it is also a matter of design. If it is a typical list where the second and third lists change often locally, then it is best to pre-fetch the data and store it in a local JSON object.
Dependent dropdowns populated by sql database in ASP.NET
[ "", "asp.net", "sql", "" ]
I have a MySQL table `mysql_table`. There is a column `Cid` for category id. I have already uploaded a lot of data into that table.

```
In the first row, I uploaded ~1~2~3~ in Cid
In the second row, I uploaded ~3~5~ in Cid
In the third row, I uploaded ~1~ in Cid
```

Now I want to update the column Cid of table mysql\_table without affecting the existing data. I want to append 15~ to each Cid value, so I need the final result to be:

```
first row,  ~1~2~3~15~
second row, ~3~5~15~
third row,  ~1~15~
```

How can I achieve this with a MySQL update query?
You can use **[concat](http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_concat)** to update the existing column as ``` update mysql_table set Cid = concat(Cid,'15~'); ```
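The same append can be reproduced on SQLite, where string concatenation is spelled `||` rather than `CONCAT`; a minimal sketch with the question's sample values:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (Cid TEXT);
INSERT INTO t VALUES ('~1~2~3~'), ('~3~5~'), ('~1~');
""")

# SQLite uses the || operator; in MySQL this is CONCAT(Cid, '15~').
conn.execute("UPDATE t SET Cid = Cid || '15~'")

rows = [r[0] for r in conn.execute("SELECT Cid FROM t ORDER BY rowid")]
print(rows)  # ['~1~2~3~15~', '~3~5~15~', '~1~15~']
```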
Try using [**CONCAT**](http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_concat) function. ``` UPDATE `mysql_table` set Cid = concat(Cid,'15~'); ```
How to update mysql table without deleting existing content
[ "", "mysql", "sql", "" ]
I have a table that contains an even number of rows. The rows form pairs with the same information but with the content of the first two columns swapped. Here is an example with three columns:

```
1 2 3 ======= A B W B A W C D X D C X E F Y H G Z F E Y G H Z ```

My actual table has many more columns, but within a pair the content is always the same. **I'm looking for an SQL statement that gets rid of one row of each pair.** The result should look like this:

```
1 2 3 ======= A B W C D X E F Y G H Z ```

My table is generated by a script (which I can't change), so I assume my input is correct (every row has a partner, and columns 3 and up are the same within each pair). A statement that could check these preconditions would be extra cool.
For me, the code below works:

```
DECLARE @TEST TABLE (A CHAR(1),B CHAR(1),C CHAR(1)) INSERT INTO @TEST VALUES ('A','B','W'), ('B','A','W'), ('C','D','X'), ('D','C','X'), ('E','F','Y'), ('H','G','Z'), ('F','E','Y'), ('G','H','Z') SELECT MIN(A) [1], MAX(A) [2], C [3] FROM @TEST GROUP BY C ```

**Result:**

![enter image description here](https://i.stack.imgur.com/aPzlf.png)

This approach should help you.
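Note that the MIN/MAX grouping trick relies on column 3 identifying each pair. A small sketch against SQLite makes that assumption explicit (column names invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (c1 TEXT, c2 TEXT, c3 TEXT);
INSERT INTO t VALUES
    ('A','B','W'),('B','A','W'),('C','D','X'),('D','C','X'),
    ('E','F','Y'),('H','G','Z'),('F','E','Y'),('G','H','Z');
""")

# Within each pair the two rows agree on c3, so grouping by c3 and
# taking MIN/MAX of the swapped column collapses each pair to one row.
rows = conn.execute(
    "SELECT MIN(c1), MAX(c1), c3 FROM t GROUP BY c3 ORDER BY c3"
).fetchall()
print(rows)
```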
If every row has a counterpart where c1 and c2 are swapped then just select rows where c1 and c2 are in a certain order (i.e. c1 < c2). The `EXISTS` part makes sure that only rows that have a counterpart are shown. If you want to show all unique rows regardless of whether or not they have a counterpart, then change the last condition from `AND EXISTS` to `OR NOT EXISTS`. ``` SELECT * FROM myTable t1 WHERE c1 < c2 AND EXISTS ( SELECT * FROM myTable t2 WHERE t2.c1 = t1.c2 AND t2.c2 = t1.c1 AND t2.c3 = t1.c3 ) ORDER BY c1 ```
Combine row pairs with swapped columns
[ "", "sql", "oracle", "" ]
I'm trying to write a subquery with SUM() and LIMIT. This works fine with the following code:

```
SELECT id, ( SELECT SUM(number) FROM ( SELECT number FROM t2 WHERE u_id = '1' ORDER BY time ASC LIMIT 30 ) AS temp ) AS test FROM t1 ```

But of course I want to do it dynamically, using the current row's ID. I changed the query to the following:

```
SELECT id, ( SELECT SUM(number) FROM ( SELECT number FROM t2 WHERE u_id = p.id ORDER BY time ASC LIMIT 30 ) AS temp ) AS test FROM t1 p ```

This gives the following error:

```
Unknown column 'p.id' in 'where clause' ```

Any ideas how to make it work?
Unfortunately, MySQL limits the scope of table aliases: an outer alias cannot be referenced from inside a derived table. Oracle is another database that does this. You can phrase your query as a complicated join:

```
select p.id, sum(t2.number)
from t1 p join
     t2
     on p.id = t2.u_id
where 30 >= (select count(*)
             from t2 t22
             where t22.u_id = t2.u_id and
                   t22.time <= t2.time
            )
group by p.id;
```

Or you can do this with variables:

```
select p.id, sum(number)
from t1 p join
     (select t2.*,
             @rn := if(@u = t2.u_id, @rn + 1,
                       if((@u := t2.u_id) is not null, 1, 0)
                      ) as rn
      from t2 cross join
           (select @u := 0, @rn := 0) vars
      order by t2.u_id, time
     ) t2
     on p.id = t2.u_id
where rn <= 30
group by p.id;
```
why not just change p.id to t1.id? I'm pretty sure it's because you are aliasing t1 in the first select, and it isn't defined in the subquery. Try an inner join instead. ``` SELECT id, ( SELECT SUM(number) FROM ( SELECT number FROM t2 INNER JOIN t1 p on u_id = p.id ORDER BY time ASC LIMIT 30 ) AS temp ) AS test FROM t1 p ```
MySQL Subquery SUM Limit
[ "", "mysql", "sql", "" ]
I'm starting to learn SQL, but I got stuck on the problem below.

I have data in a table named 'data':

```
+------+----------------------+ | id | date | +------+----------------------+ | 1 | 2009-05-10 09:17:25 | | 2 | 2010-04-09 09:17:25 | | 3 | 2010-12-12 09:17:25 | | 4 | 2011-01-11 09:17:25 | | 5 | 2012-03-19 09:17:25 | | 7 | 2012-05-20 09:17:25 | | 8 | 2013-02-21 09:17:25 | | 9 | 2013-02-02 09:17:25 | +------+----------------------+ ```

I want to write an SQL statement to get the last 3 years relative to sysdate, so the result I want is:

```
+------+ | date | +------+ | 2011 | | 2012 | | 2013 | +------+ ```

Assume that we don't know exactly how many distinct values or columns there are; can you kindly guide me on how to deal with this problem? Thanks in advance.
You can try this query: ``` WITH years([year]) AS ( SELECT DISTINCT TOP 3 YEAR([date]) AS [date] FROM Table1 ORDER BY YEAR([date]) DESC ) SELECT TOP 3 [year] FROM years ORDER BY [year] ASC; ``` [SQL FIDDLE DEMO](http://sqlfiddle.com/#!3/3aa13/52)
You could use `top`: ``` SELECT DISTINCT TOP 3 YEAR([date]) AS [date] FROM my_table ORDER BY YEAR([date]) DESC ```
sql statement query last data
[ "", "sql", "sql-server", "date", "" ]
I have the following table structure and data:

```
TransID TransType Product Qty OrderRef Date ------- --------- ------- --- -------- ---- C123 Credit Prod1 1 Order8 2014-07-08 C123 Credit Prod2 5 Order8 2014-07-08 Inv111 Invoice Prod1 1 Order8 2014-07-08 Inv111 Invoice Prod2 5 Order8 2014-07-08 C999 Credit Prod1 6 Order8 2014-07-08 C999 Credit Prod2 9 Order8 2014-07-08 Inv666 Invoice Prod1 6 Order8 2014-07-08 ```

What I want to do is to identify those Credit records that have an exactly matching group of Invoice records. By exactly matching I mean the same Product, OrderRef, Qty and Date. In the above data, C123 would match Inv111, but C999 would not match Inv666, as Inv666 is missing a row.

I want to delete both the Credit and Invoice records that have an exact match. There is no link between Invoices and Credits apart from the OrderRef.

I've played around with the EXCEPT statement, something like this:

```
;with CreditToInvoice(Product, Qty, OrderRef, Date) as
(
    select Product, Qty, OrderRef, Date
    from @t t1
    where t1.TransType = 'Credit'
    group by TransactionID, OrderRef, Product, Date, Qty

    EXCEPT

    select Product, Qty, OrderRef, Date
    from @t t2
    where t2.TransType = 'Invoice'
    group by TransactionID, OrderRef, Product, Date, Qty
)
```

which gives me everything in table a not in table b, as I would expect. The problem is I really need the TransactionIDs so that I can proceed to delete correctly.

Is EXCEPT the wrong statement for this? Could I use a MERGE?
I think a `LEFT JOIN` and some `GROUP`ing is the most obvious way to deal with this requirement:

```
SELECT cr.TransID,
       MAX(inv.TransID) as InvoiceID,
       MAX(CASE WHEN inv.TransID is NULL THEN 1 ELSE 0 END) as Unsatisfied
FROM @t cr
     left join
     @t inv
     on cr.Product = inv.Product and
        cr.OrderRef = inv.OrderRef and
        cr.Qty = inv.Qty and
        cr.Date = inv.Date and
        inv.TransType = 'Invoice'
WHERE cr.TransType = 'Credit'
GROUP BY cr.TransID
HAVING MAX(CASE WHEN inv.TransID is NULL THEN 1 ELSE 0 END) = 0
```

That is, we join together all of the matching rows between a credit and an invoice, and then we only select the result if all credit rows achieved a match. You can place this in a subquery or CTE and perform an unpivot if you need both `TransID` values in a single column for the next part of your processing.
The resulting TransIDs should be the ones you need to delete ``` DECLARE @Trans TABLE ([TransID] varchar(6), [TransType] varchar(7), [Product] varchar(5), [Qty] int, [OrderRef] varchar(6), [Date] datetime) ; INSERT INTO @Trans ([TransID], [TransType], [Product], [Qty], [OrderRef], [Date]) VALUES ('C123', 'Credit', 'Prod1', 1, 'Order8', '2014-07-08 00:00:00'), ('C123', 'Credit', 'Prod2', 5, 'Order8', '2014-07-08 00:00:00'), ('Inv111', 'Invoice', 'Prod1', 1, 'Order8', '2014-07-08 00:00:00'), ('Inv111', 'Invoice', 'Prod2', 5, 'Order8', '2014-07-08 00:00:00'), ('C999', 'Credit', 'Prod1', 6, 'Order8', '2014-07-08 00:00:00'), ('C999', 'Credit', 'Prod2', 9, 'Order8', '2014-07-08 00:00:00'), ('Inv666', 'Invoice', 'Prod1', 6, 'Order8', '2014-07-08 00:00:00') ; DECLARE @TransUnique TABLE ([TransID] varchar(6) ) INSERT INTO @TransUnique SELECT DISTINCT TransID FROM @Trans --Remove Credits DELETE t FROM @TransUnique t INNER JOIN ( select t1.*,t2.TransID [TransId2],t2.TransType [TransType2] From @Trans t1 LEFT JOIN @Trans t2 ON t1.OrderRef=t2.OrderRef AND t1.Date=t2.Date AND t1.Qty=t2.Qty AND t1.Product=t2.Product AND t2.TransType='Invoice' WHERE t1.TransType='Credit' ) joined ON t.TransID=joined.TransId AND joined.TransId2 IS NULL --Remove Invoices DELETE t FROM @TransUnique t INNER JOIN ( select t1.*,t2.TransID [TransId2],t2.TransType [TransType2] From @Trans t1 LEFT JOIN @Trans t2 ON t1.OrderRef=t2.OrderRef AND t1.Date=t2.Date AND t1.Qty=t2.Qty AND t1.Product=t2.Product AND t2.TransType='Invoice' LEFT JOIN @TransUnique tu ON tu.TransID=t1.TransID WHERE t1.TransType='Credit' AND tu.TransID IS NULL ) joined ON t.TransID=joined.TransId2 SELECT * FROM @TransUnique ```
TSQL - Exact match on subset of records
[ "", "sql", "sql-server", "t-sql", "" ]
I'm having a bit of trouble with truncating data. I'm using SQL's GETDATE() function to get the current date and time and enter them into a database. However, I only want to save the date and time up until the minute. In other words, I want dd/mm/yyyy hh:mm:00.000 or dd/mm/yyyy hh:mm to be saved when I input new data. How can I go about doing this? I should note I'm using MS-SQL.
There are a number of ways to go about doing this. For example, you could convert the generated `datetime` from `GetDate()` to a `smalldatetime` first, à la:

```
CAST(GetDate() AS smalldatetime) ```

To be clear, this will round the generated seconds up (or down) to the nearest minute, depending on the value of the current second.

**EDIT:** Alternatively, you can have SQL Server truncate a `datetime` for you for a "cleaner" (READ: no rounding, since the value is pre-truncated) conversion to `smalldatetime`:

```
CAST(DateAdd(minute, DateDiff(minute, 0, GetDate()), 0) AS smalldatetime) ```
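The truncate-don't-round idea carries over to other engines too. As a hypothetical illustration, SQLite can pin the seconds with `strftime` (the `DATEADD`/`DATEDIFF` trick above is the SQL Server-native equivalent):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Force the seconds (and anything finer) to zero by reformatting.
val = conn.execute(
    "SELECT strftime('%Y-%m-%d %H:%M:00', '2014-07-08 13:46:55')"
).fetchone()[0]
print(val)  # 2014-07-08 13:46:00
```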
For truncation: ``` SELECT SMALLDATETIMEFROMPARTS( datepart(year ,dt) ,datepart(month ,dt) ,datepart(day ,dt) ,datepart(hour ,dt) ,datepart(minute,dt) ) FROM (SELECT GETDATE()) t(dt) ```
Truncate seconds and milliseconds in SQL
[ "", "sql", "sql-server", "date", "datetime", "" ]
I want to optimize a query but I have no idea how I could do that. Here is the table I want to query:

```
Device table: Id || PushId || created abc aaa 10/10/13 def aaa 10/12/13 efg abb 9/9/12 ```

The query I want is the following: I want to find duplicate PushIds and remove the older entry of the two from the table. This is what I have so far (a select instead of a delete, because I am still at the testing stage):

```
select m.* from (select pushId, created from Device group by pushId having count(*) >1) as m inner join Device mm on mm.pushId = m.pushId and mm.created = m.created; ```

This correctly returns the rows that should be deleted, but it is very, very slow. Is there a faster way of doing this? Is there a way to do this without a temporary table, i.e. with a single scan?

EDIT: This is MySQL; I mistakenly put an MS-SQL tag in there. Apologies, guys.
This will provide the equivalent of row\_number() in MySQL using @ variables. Here it locates all but the 2 most recent rows for each PushId ``` SELECT PushId , Id , created FROM ( SELECT @row_num :=IF(@prev_value = d.PushId,@row_num+1,1)AS RN , d.PushId , d.Id , d.created , @prev_value := d.PushId FROM tblDevices d CROSS JOIN(SELECT @row_num :=1, @prev_value :='') vars ORDER BY d.PushId , d.created DESC ) SQ WHERE RN > 2 ; ``` You can alter the outcome by changing the order (eg to ASC) to locate the oldest records. Note the cross join is used simply to "attach" 2 @ vars to each row & as there is only one row it has no impact on the actual number of records. The vars are then set within the query.
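On engines with window functions (MySQL 8+, or SQLite 3.25+ as used in this illustrative sketch), `ROW_NUMBER()` replaces the `@row_num`/`@prev_value` bookkeeping entirely. Here it keeps only the newest row per `PushId`, which is what the question asked for:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE device (id TEXT, pushId TEXT, created TEXT);
INSERT INTO device VALUES
    ('abc', 'aaa', '2013-10-10'),
    ('def', 'aaa', '2013-10-12'),
    ('efg', 'abb', '2012-09-09');
""")

# Rows numbered 2, 3, ... within each pushId (newest first) are the
# duplicates to delete.
doomed = conn.execute("""
    SELECT id FROM (
        SELECT id,
               ROW_NUMBER() OVER (PARTITION BY pushId
                                  ORDER BY created DESC) AS rn
        FROM device
    ) AS ranked
    WHERE rn > 1
""").fetchall()
print(doomed)  # [('abc',)]
```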
There may need to be some manipulation to get it to fit into your delete statement, but try the following using the MIN function to find the lowest date+id combination, where there is more than one entry. then remove the date from the result, giving the correct id only: ``` delete from Device where id in ( select right(min(cast(cast(created as unsigned) as char(5)) + id),3) from Device group by pushid having count(*) > 1 ) ```
Optimizing a query, duplicate rows that have oldest date
[ "", "mysql", "sql", "performance", "query-optimization", "" ]
My problem is that I have 4 different SELECTs with

```
SELECT COUNT (*) AS regular WHERE experience = 1 AND bl = 1 SELECT COUNT (*) AS rptmm WHERE experience = 1 AND bl = 0 SELECT COUNT (*) AS new WHERE experience = 0 AND bl = 0 SELECT COUNT (*) AS rptss WHERE experience = 0 AND bl = 1 ```

I want the results to appear together, with their respective names, like:

```
regular rptmm new rptss 10 5 2 6 ```
**Firstly, I'd suggest not using `COUNT(*)`.** There are many answers on this site explaining why, so I am not going to repeat it. Instead, I'd suggest you use a query like this:

```
SELECT (SELECT COUNT (tab.someColumnName) FROM TableName tab WHERE tab.experience = 1 AND tab.bl = 1) AS 'Regular', (SELECT COUNT (tab.someColumnName) FROM TableName tab WHERE tab.experience = 1 AND tab.bl = 0) AS 'rptmm', (SELECT COUNT (tab.someColumnName) FROM TableName tab WHERE tab.experience = 0 AND tab.bl = 0) AS 'New', (SELECT COUNT (tab.someColumnName) FROM TableName tab WHERE tab.experience = 0 AND tab.bl = 1) AS 'rptss' ```

Hope this helps!
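An alternative worth noting (not in either answer above): conditional aggregation computes all four counts in one scan of the table instead of four scalar subqueries. A sketch with invented sample data, run against SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE visits (experience INT, bl INT);
INSERT INTO visits VALUES (1,1),(1,1),(1,0),(0,0),(0,1),(0,1);
""")

# One pass over the table; each SUM counts rows matching its condition.
row = conn.execute("""
    SELECT SUM(CASE WHEN experience = 1 AND bl = 1 THEN 1 ELSE 0 END) AS regular,
           SUM(CASE WHEN experience = 1 AND bl = 0 THEN 1 ELSE 0 END) AS rptmm,
           SUM(CASE WHEN experience = 0 AND bl = 0 THEN 1 ELSE 0 END) AS "new",
           SUM(CASE WHEN experience = 0 AND bl = 1 THEN 1 ELSE 0 END) AS rptss
    FROM visits
""").fetchone()
print(row)  # (2, 1, 1, 2)
```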
Just put UNION ALL between your four statements and you will get four rows, with each count on its own row. However, you will lose the column names. You could also use a join to get one row with four columns; just put the keyword JOIN between the SQL statements.

```
SELECT COUNT (*) AS regular WHERE experience = 1 AND bl = 1 JOIN SELECT COUNT (*) AS rptmm WHERE experience = 1 AND bl = 0 JOIN SELECT COUNT (*) AS new WHERE experience = 0 AND bl = 0 JOIN SELECT COUNT (*) AS rptss WHERE experience = 0 AND bl = 1 ```
Union different tables with differents columns and data
[ "", "sql", "sql-server", "select", "" ]
Quick question: is exiting out of TOAD (for Oracle) while it is trying to cancel a pending query harmful? Should I be letting this dialog box run its course?

I did have a screenshot, but I am unable to post pics until I have 10 rep.

EDIT: It has been going for around 30 minutes now.

EDIT2: I should mention it is not an update query, purely a search.

Thanks,
When this happens and I've already waited long enough (and the **`Cancel`** button has no effect), I open *Task Manager* and apply the *"End Process Tree"* command to the `Toad.exe` process. If a database connection is lost, all uncommitted changes are automatically rolled back by the database. **So it is not harmful.**

I once investigated this by looking at the sessions list. It looks like this happens when Toad somehow loses the connection to the server in the middle of executing a query. When you wonder why the query is taking so long (when it shouldn't) and click the **`Cancel`** button, Toad enters a state of *"limbo"* where it waits for the result of the *cancel* operation from the server, unaware of the connection loss. The problem is that there is no way to stop this *waiting* and go back to normal. This is a bug in Toad, and there is no other way around it. I am not sure when they will fix it, if at all.
I just had the same problem. [LIKE THIS FOR DAYS](https://i.stack.imgur.com/3Z08s.png)

One way to cancel a running process in TOAD that is stuck (processing or cancelling for hours on end) is to simply disable the network connection, which automatically cancels the Toad process. Afterwards, go to Session > Test Connection to reconnect to the server.
Exiting TOAD while cancel pending?
[ "", "sql", "oracle", "toad", "" ]
I have data like this:

```
CODE_VD N_10_19_xxx N_0_3_xxx N_121_131_xxx N_100_120_xxx N_80_90_xxx N_20_29_xxx ```

As you can see, I need to sort by just the first number after N\_, but I don't know how to extract that number. I have tried substring(CODE\_VD,2,3), but it's not exactly what I expected.

I want to get this:

```
CODE_VD N_0_3_xxx N_10_19_xxx N_20_29_xxx N_80_90_xxx N_100_120_xxx N_121_131_xxx ```

How can I do that?
``` DECLARE @MyTable TABLE ( CODE_VD VARCHAR(20) ) INSERT INTO @MyTable ( CODE_VD ) VALUES ('N_10_19_xxx'), ('N_0_3_xxx'), ('N_121_131_xxx'), ('N_100_120_xxx'), ('N_80_90_xxx'), ('N_20_29_xxx'); SELECT * FROM ( SELECT *, CONVERT(INT, SUBSTRING(mt.CODE_VD, 3, CHARINDEX('_', mt.CODE_VD, 3) - 3)) ConvCol FROM @MyTable mt ) mt ORDER BY mt.ConvCol ``` I converted to int to get the sort to work correctly, because 100 > 20
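A portable version of the same idea, sketched on SQLite (whose `instr`/`substr` stand in for `CHARINDEX`/`SUBSTRING`): extract the digits between the first two underscores and sort them as integers, because sorting as text would put '100' before '20':

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE codes (code_vd TEXT);
INSERT INTO codes VALUES ('N_10_19_xxx'),('N_0_3_xxx'),('N_121_131_xxx'),
                         ('N_100_120_xxx'),('N_80_90_xxx'),('N_20_29_xxx');
""")

# substr(code_vd, 3) skips the leading 'N_'; instr finds the next '_',
# and CAST makes the comparison numeric.
rows = [r[0] for r in conn.execute("""
    SELECT code_vd FROM codes
    ORDER BY CAST(substr(code_vd, 3,
                         instr(substr(code_vd, 3), '_') - 1) AS INTEGER)
""")]
print(rows)
```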
``` SELECT SUBSTRING(CODE_VD,3, CHARINDEX('_',CODE_VD, 3)-3) ```
sorting SQL with substring on string
[ "", "sql", "sql-server", "sql-server-2008-r2", "" ]
I'm grabbing the first day of the next quarter as `StartDate`. How can I add 90 days to it? The query below gives me an "invalid column" error for StartDate within `EndDate`.

```
SELECT DATEADD(qq, DATEDIFF(qq, 0, MAX(Time_Stamp)) + 1, 0) as StartDate, DATEADD(day, 90, MAX(StartDate)) as EndDate FROM Survey WHERE MainHospital = 'Hospital1' ```

column

```
Time_Stamp ----------- 2014-06-04 16:01:14.000 2014-06-04 15:55:33.000 2014-06-04 15:45:05.000 2014-06-04 15:36:15.000 2014-06-04 15:00:34.000 2014-06-04 14:35:24.000 2014-06-04 14:04:50.000 2014-06-04 13:46:55.000 2014-06-04 13:23:57.000 2014-06-04 11:27:51.000 ```

Just for reference, I found this, which was very useful: <http://zarez.net/?p=2484>

So I ended up with something like this:

```
SELECT --Returns first day of next quarter DATEADD(qq, DATEDIFF(qq, 0, MAX(Time_Stamp)) + 1, 0) as StartDate, --Returns last day of next quarter DATEADD (dd, -1, DATEADD(qq, DATEDIFF(qq, 0, MAX(Time_Stamp)) +2, 0)) as EndDate FROM Survey WHERE MainHospital = 'Hospital1' ```
A SQL query doesn't know about a column alias defined in its own SELECT list; it is a self-referencing error. The SELECT statement creates the alias, so it can't itself use the alias. The only clause evaluated after SELECT is ORDER BY, which is why an alias can be used there.

I would reuse your logic to avoid needing a subquery. Since ninety days is just another quarter, change 1 to 2 to add another quarter to your Time\_Stamp:

```
SELECT DATEADD(qq, DATEDIFF(qq, 0, MAX(Time_Stamp)) + 1, 0) as StartDate, DATEADD(qq, DATEDIFF(qq, 0, MAX(Time_Stamp)) + 2, 0) as EndDate FROM Survey WHERE MainHospital = 'Hospital1' ```
It's invalid because StartDate is an alias for your date calculation. An alias is only usable once you have results; in other words, it refers to the dataset returned from the server, so it can't be referenced elsewhere in the same SELECT. You can, for example, use a subquery:

```
SELECT StartDate, DATEADD(day, 90, MAX(StartDate)) as EndDate FROM ( SELECT DATEADD(qq, DATEDIFF(qq, 0, MAX(Time_Stamp)) + 1, 0) as StartDate FROM Survey WHERE MainHospital = 'Hospital1' ) DS ```
Adding 90 days to created date
[ "", "sql", "sql-server", "t-sql", "datetime", "" ]
I have the following SQL select. How can I convert it to a delete statement so it keeps 1 of the rows but deletes the duplicate? ``` select s.ForsNr, t.* from [testDeleteDublicates] s join ( select ForsNr, period, count(*) as qty from [testDeleteDublicates] group by ForsNr, period having count(*) > 1 ) t on s.ForsNr = t.ForsNr and s.Period = t.Period ```
Try using one of the following. Note that Method 1 assumes the table has a unique `RowID` column.

**Method 1:**

```
DELETE FROM Mytable WHERE RowID NOT IN (SELECT MIN(RowID) FROM Mytable GROUP BY Col1,Col2,Col3) ```

**Method 2:**

```
;WITH cte AS (SELECT ROW_NUMBER() OVER (PARTITION BY ForsNr, period ORDER BY ( SELECT 0)) RN FROM testDeleteDublicates) DELETE FROM cte WHERE RN > 1 ```

Hope this helps!

**NOTE:** *Please change the table & column names according to your needs!*
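Method 1 can be sanity-checked on SQLite, where every ordinary table already has an implicit `rowid` playing the role of `RowID`; a sketch with invented data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dup (ForsNr TEXT, period TEXT);
INSERT INTO dup VALUES ('A','2014-01'),('A','2014-01'),
                       ('A','2014-02'),('B','2014-01'),('B','2014-01');
""")

# Keep the smallest rowid per (ForsNr, period) key; delete the rest.
conn.execute("""
    DELETE FROM dup WHERE rowid NOT IN
        (SELECT MIN(rowid) FROM dup GROUP BY ForsNr, period)
""")

rows = conn.execute(
    "SELECT ForsNr, period FROM dup ORDER BY ForsNr, period"
).fetchall()
print(rows)  # [('A', '2014-01'), ('A', '2014-02'), ('B', '2014-01')]
```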
This is easy as long as you have a generated primary key column (which is a good idea). You can simply select the min(id) of each duplicate group and delete everything else - Note that I have removed the having clause so that the ids of non-duplicate rows are also excluded from the delete. ``` delete from [testDeleteDublicates] where id not in ( select Min(Id) as Id from [testDeleteDublicates] group by ForsNr, period ) ``` If you don't have an artificial primary key you may have to achieve the same effect using row numbers, which will be a bit more fiddly as their implementation varies from vendor to vendor.
Delete duplicates but keep 1 with multiple column key
[ "", "sql", "sql-server", "duplicates", "delete-row", "" ]
```
WHERE StartDate BETWEEN CAST(FLOOR(CAST(DATEADD("month", - 12, ISNULL(RunDate.testdate, GETDATE()) - DAY(ISNULL(RunDate.testdate, GETDATE())) + 1) AS FLOAT)) AS DATETIME) AND CAST(FLOOR(CAST(ISNULL(RunDate.testdate, GETDATE()) - DAY(ISNULL(RunDate.testdate, GETDATE())) + 1 AS FLOAT)) AS DATETIME) /************************************************* SQL Server 2005 (9.0 SP4) StartDate is of TYPE DATETIME StartDate date looks like 2012-07-05 12:45:10.227 RunDate is a TABLE with no records RunDate.testdate is always NULL **************************************************/ ```

This `WHERE` clause works and produces the results the business expects, but seems excessively untidy. All it is attempting to achieve is to bring across all records for the last 12 months when run in July of any year: so, records from 1 July 2013 to 30 June 2014.
If you [abandon the `x BETWEEN a AND b` pattern](https://sqlblog.org/2011/10/19/what-do-between-and-the-devil-have-in-common "What do BETWEEN and the devil have in common? ") in favour of one that could roughly be expressed as

```
x BEGINS WITH a ENDS BEFORE b ```

(which, of course, is not valid SQL syntax but `x >= a AND x < b` would be a decent replacement), you will end up with both `a` and `b` having the same granularity (specifically, months, because each of the two points would be the beginning of a month). In this case that is very opportune, because it allows you to factorise the calculation of both `a` and `b` nicely.

But before I get there, let me tell (or remind) you about this [date/time truncation technique in SQL Server](https://stackoverflow.com/questions/2639051/what-is-the-best-way-to-truncate-a-date-in-sql-server "What is the best way to truncate a date in SQL Server?"):

```
DATEADD(unit, DATEDIFF(unit, SomeFixedDate, YourDateTime), SomeFixedDate) ```

In your case, the *`unit`* would be `MONTH`. As for `SomeFixedDate`, there is an established practice of using 0 there for brevity, 0 representing the date of 1900-01-01¹.

So, using this technique, the beginning of the current month would be calculated as

```
DATEADD(MONTH, DATEDIFF(MONTH, 0, GETDATE()), 0) ```

That would be the end of your interval. The beginning of the interval would be the same, except you would subtract 12 from the DATEDIFF's result, and so the complete condition to match the range would look like this:

```
WHERE StartDate >= DATEADD(MONTH, DATEDIFF(MONTH, 0, GETDATE()) - 12, 0) AND StartDate < DATEADD(MONTH, DATEDIFF(MONTH, 0, GETDATE()), 0) ```

If that still looks like a mouthful to you, you could try getting rid of the repetition of `DATEDIFF(MONTH, 0, GETDATE())`. You could view it as some abstract month number and reference it as such in both DATEADD calls.
The CROSS APPLY syntax would help you with that:

```
… CROSS APPLY ( SELECT DATEDIFF(MONTH, 0, GETDATE()) ) AS x (MonthNo) WHERE StartDate >= DATEADD(MONTH, x.MonthNo-12, 0) AND StartDate < DATEADD(MONTH, x.MonthNo , 0) ```

Yes, that seems factorised all right. However, it might not lend itself well to readability. Someone (even yourself, at some later point) might look at it and say: "Huh, `MonthNo`? What in the world is that? Ah, *that* number…"

If I were to factorise your interval calculation, I might consider this instead:

```
… CROSS APPLY ( SELECT DATEADD(MONTH, DATEDIFF(MONTH, 0, GETDATE()), 0) ) AS x (ThisMonth) WHERE StartDate >= DATEADD(YEAR, -1, x.ThisMonth) AND StartDate < x.ThisMonth ```

I admit, though, that readability is not a perfectly objective criterion, and sometimes one chooses to write one way or the other based on one's personal tastes/preferences.

---

¹ Although hard-coding a date as an integer value is not a normal practice, I would argue that this truncation technique is a special case where using 0 in place of a datetime value should be okay: that usage appears to be so widespread that it may by now have become a "fixed expression" among those using it and is, therefore, unlikely to cause much confusion.
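As a side note, the half-open `[start, end)` idea translates to other engines too. A hypothetical SQLite sketch, pinned to a fixed run date so the bounds are deterministic:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# First day of the month 12 months back, and first day of the current
# month: together they bound "the last 12 full months", half-open.
start, end = conn.execute("""
    SELECT date('2014-07-15', 'start of month', '-12 months'),
           date('2014-07-15', 'start of month')
""").fetchone()
print(start, end)  # 2013-07-01 2014-07-01
```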
Here is another expression:

```
where startdate >  cast(dateadd(year, -1, getdate() - day(getdate())) as date) and
      startdate <= cast(getdate() - day(getdate()) as date)
```

When you subtract a number from a `datetime` it is treated as a number of days, which can be easier to read.

EDIT:

Oh, that's 2005, not 2008. The date logic goes like this . . . :

```
where startdate >  dateadd(year, -1, dateadd(day, - day(getdate()), getdate())) and
      startdate <= dateadd(day, - day(getdate()), getdate())
```

But you will have times on the dates, and there actually isn't an easy way to convert these to dates in 2005. You can do this more readably with a subquery:

```
from (select dateadd(day, 0, datediff(day, 0, getdate())) as today) const
where startdate >  dateadd(year, -1, dateadd(day, - day(today), today)) and
      startdate <= dateadd(day, - day(today), today)
```
SQL Server 2005 Query Date Range
[ "sql", "sql-server", "datetime", "sql-server-2005" ]
I have some enormous tables of values, and dates, that I want to compress using run length encoding. The most obvious way (to me) to do this is to select all the distinct value combinations, and the minimum and maximum dates. The problem with this is that it would miss any instances where a mapping stops, and then starts again. ``` Id | Value1 | Value2 | Value3 | DataDate ------------------------------------------ 01 | 1 | 2 | 3 | 2000-01-01 01 | 1 | 2 | 3 | 2000-01-02 01 | 1 | 2 | 3 | 2000-01-03 01 | 1 | 2 | 3 | 2000-01-04 01 | A | B | C | 2000-01-05 01 | A | B | C | 2000-01-06 01 | 1 | 2 | 3 | 2000-01-07 ``` Would be encoded this way as ``` Id | Value1 | Value2 | Value3 | FromDate | ToDate ----------------------------------------------------- 01 | 1 | 2 | 3 | 2000-01-01| 2000-01-07 01 | A | B | C | 2000-01-05| 2000-01-06 ``` Which is clearly wrong. What I'd like is a query that would return each set of continuous dates that exist for each set of values. Alternatively, if I'm looking at this arse-backwards, any other advice would be appreciated.
Try this: ``` DECLARE @MyTable TABLE ( Id INT, Value1 VARCHAR(10), Value2 VARCHAR(10), Value3 VARCHAR(10), DataDate DATE ); INSERT @MyTable SELECT 01, '1', ' 2', '3', '2000-01-01' UNION ALL SELECT 01, '1', ' 2', '3', '2000-01-02' UNION ALL SELECT 01, '1', ' 2', '3', '2000-01-03' UNION ALL SELECT 01, '1', ' 2', '3', '2000-01-04' UNION ALL SELECT 01, 'A', ' B', 'C', '2000-01-05' UNION ALL SELECT 01, 'A', ' B', 'C', '2000-01-06' UNION ALL SELECT 01, '1', ' 2', '3', '2000-01-07' SELECT Id, Value1, Value2, Value3, MIN(DataDate) AS FromDate, MAX(DataDate) AS ToDate FROM ( SELECT x.Id, x.Value1, x.Value2, x.Value3, x.DataDate, GroupNum = DATEDIFF(DAY, 0, x.DataDate) - ROW_NUMBER() OVER(PARTITION BY x.Id, x.Value1, x.Value2, x.Value3 ORDER BY x.DataDate) FROM @MyTable x ) y GROUP BY Id, Value1, Value2, Value3, GroupNum ``` Results: ``` Id Value1 Value2 Value3 FromDate ToDate -- ------ ------ ------ ---------- ---------- 1 1 2 3 2000-01-01 2000-01-04 1 1 2 3 2000-01-07 2000-01-07 1 A B C 2000-01-05 2000-01-06 ```
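If you want to sanity-check the gaps-and-islands trick above outside SQL Server, here is a minimal runnable sketch using Python's built-in sqlite3 (my own illustration, not part of the answer): `julianday()` stands in for `DATEDIFF(DAY, 0, …)`, and window functions require SQLite 3.25 or newer.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (id INT, v1 TEXT, v2 TEXT, v3 TEXT, d TEXT);
INSERT INTO t VALUES
 (1,'1','2','3','2000-01-01'),(1,'1','2','3','2000-01-02'),
 (1,'1','2','3','2000-01-03'),(1,'1','2','3','2000-01-04'),
 (1,'A','B','C','2000-01-05'),(1,'A','B','C','2000-01-06'),
 (1,'1','2','3','2000-01-07');
""")

# Day number minus the per-group row number is constant within one contiguous
# run, so grouping by it splits each (id, v1, v2, v3) combination into runs.
rows = con.execute("""
SELECT id, v1, v2, v3, MIN(d), MAX(d)
FROM (
  SELECT *, CAST(julianday(d) AS INT)
            - ROW_NUMBER() OVER (PARTITION BY id, v1, v2, v3 ORDER BY d) AS grp
  FROM t
)
GROUP BY id, v1, v2, v3, grp
ORDER BY MIN(d)
""").fetchall()
```

The second `(1, '1', '2', '3')` run on 2000-01-07 comes out as its own island, which is exactly the behavior the question asks for.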
You'll probably want to use windowing functions. Try something like this: ``` select id, value1, value2, value3, from_date=update_date, to_date=lead(update_date) over (partition by id order by update_date) from ( select t.* ,is_changed= case when value1 <> lag(value1) over (partition by id order by update_date) or (lag(value1) over (partition by id order by update_date) is null and value1 is not null) or value2 <> lag(value2) over (partition by id order by update_date) or (lag(value2) over (partition by id order by update_date) is null and value2 is not null) or value3 <> lag(value3) over (partition by id order by update_date) or (lag(value3) over (partition by id order by update_date) is null and value3 is not null) then 1 else 0 end from test t ) t2 where is_changed = 1 order by id, update_date ``` Please note that this query relies on the `LAG()` function, and two other things: * Separate tests for each "value" column; if you have a lot of columns to test, you might consider creating a single hash value to simplify the equality checks * The "to\_date" is identical to the next record's "from\_date", which means you might need to test for values using `>= from_date` and `< to_date` to make the run-lengths mutually exclusive Note that I used the following sample data in my testing: ``` create table test(id int, value1 varchar(3), value2 varchar(3), value3 varchar(3), update_date datetime) insert into test values (1, 'A', 'B', 'C', '1/1/2014'), (1, 'A', 'B', 'C', '2/1/2014'), (1, 'X', 'Y', 'Z', '3/1/2014'), (1, 'A', 'B', 'C', '4/1/2014'), (2, 'D', 'E', 'F', '1/1/2014'), (2, 'D', 'E', 'F', '6/1/2014') ``` Good luck!
Select Contiguous Rows for Run Length Encoding
[ "sql", "sql-server", "run-length-encoding" ]
I'm building a tool to query a database from an HTML form, and I defined a data type to capture the form values. For example: ``` data BookSearchParams = BookSearchParams { title :: Maybe Text , titleSearchType :: SearchType , author :: Maybe Text , authorSearchType :: SearchType } ``` The columns of the database are represented as fields in the data type, and I'm using Maybe because the user may or may not enter that value in the form. The SearchType fields are used to specify exact match, regex, or like. I'd like to turn this into a SQL select statement. In an imperative language, I could just add each field to a string if the field was passed down: ``` if params.title: sql += params.title ``` In Haskell, I could do it with a lot of case statements to create a function with type: ``` bookSearchParamsToSql :: BookSearchParams -> Text ``` However, it would be a messy function. How should this function be written?
I asked a similar [question](https://stackoverflow.com/q/23838630/376113) a couple of weeks ago, and the elegant answer was to use a `Writer` monad. (In fact, it was exactly the same problem, building a SQL query, but I transformed it into a more general question.) ``` import Control.Monad.Writer hiding (forM_) import Data.Foldable bookSearchSQL :: BookSearchParams -> Text bookSearchSQL p = execWriter $ do tell "header of the query" forM_ (title p) $ \t -> tell ("title = " ++ show t) tell ("type = " ++ show (searchType p)) forM_ (author p) $ \a -> tell ("author = " ++ show a) ``` (I haven't checked that it works or even compiles.) The trick is to use `Data.Foldable.forM_` instead of the more traditional `Control.Monad.forM_`. The latter takes a list as argument, but the foldable version takes any `Foldable`, in our case a `Maybe`. Note that in the lambdas, `t` and `a` are not `Maybe` but `Text`, and the lambda is only called if the `Maybe` is not `Nothing`.
I would use Monoid to solve this problem. ``` import Data.Monoid ((<>)) titleSQL, authorSQL :: BookSearchParams -> Text bookSearchSQL p = titleSQL p <> authorSQL p ```
Convert value to sql select in Haskell
[ "sql", "haskell" ]
I am converting my mysql script to sql server script. I want to sort data and then collapse the data by some criteria. Lets take below example: in Mysql: [sql fiidle](http://sqlfiddle.com/#!2/f26cf/2) lets say we have table "test" and 3 column (colA,colB and colC) Now I want to sort data by colA and colB and then want to pick first row of group colA,colB I also like to find MIN(colC) for the group colA,colB I have achieved in mysql by below query: ``` select colA, colB, min(ColC) from ( select * from test order by colA,colB) tempalias group by colA, colB ``` Now I case of sql server: when I try to execute same query it gives me below error: **The ORDER BY clause is invalid in views, inline functions, derived tables, subqueries, and common table expressions, unless TOP, OFFSET or FOR XML is also specified.** can anyone help me how to get same functionality in sql server same as I am doing in mysql. Thanks in advance..
I think this should be the trick. Don't know if it's very efficient but you could give it a try ``` SELECT t1.cola, t1.colb, t1.colc FROM (SELECT *, ROW_NUMBER() OVER(ORDER BY cola, colb) AS RowNumber FROM test) t1 JOIN (SELECT cola, colb, MIN(RowNumber) AS RowNumber FROM (SELECT *, ROW_NUMBER() OVER(ORDER BY cola, colb) AS RowNumber FROM test) AS SUBQUERY_01 GROUP BY cola, colb) t2 ON t1.cola = t2.cola AND t1.colb = t2.colb AND t1.RowNumber = t2.RowNumber ``` If you want to change your order by criteria you have to edit both *OVER(ORDER BY cola, colb)* areas
Your question as is doesn't really make sense. Either you want the smallest value for **colc** for every group or the first group-value of **colc** else. The solution of Nesuke for example does exactly what your fiddle example does. So why do you want to order by **cola** and **colb** before grouping?
Collapse Data using Order By and Group BY
[ "mysql", "sql", "sql-server" ]
I would like to introduce a new column that holds the calculated sum of the next three Periods. This is the sample data ``` Company Value Period ------- ----- ------ MGT 9 1401 MGT 2 1402 MGT 3 1403 MGT 4 1404 MGT 5 1405 MGT 6 1406 MGT 7 1407 MGT 8 1408 MGT 9 1409 MGT 10 1410 MGT 11 1411 MGT 12 1412 MGT 3 1501 ``` This is the Expected data ``` Company Value Period New_Column ------- ----- ------ ---------- MGT 9 1401 9 MGT 2 1402 12 MGT 3 1403 15 MGT 4 1404 18 MGT 5 1405 21 MGT 6 1406 24 MGT 7 1407 27 MGT 8 1408 30 MGT 9 1409 33 MGT 10 1410 26 MGT 11 1411 15 MGT 12 1412 3 MGT 3 1501 0 ``` The New_Column contains the addition of the values from the next three months. For example, consider Period **1403 (March'14)**, where the Value is **3**; the New_Column should now have the addition of the values from the next three months, **i.e., the values of [1404(April'14) + 1405(May'14) + 1406(June'14)]** [4 + 5 + 6] **= 15**
I used the following code in MS SQL Server 2008 ``` select t1.Company, t1.value value, t1.period period, isnull(t2.value,0) + isnull(t3.value,0) + isnull(t4.value,0) New_Column from @NOA t1 left join @NOA t2 on (t1.period % 100 < 12 and t1.period = t2.period - 1) or (t1.period % 100 >= 12 and t1.period = t2.period - 89) left join @NOA t3 on (t1.period % 100 < 11 and t1.period = t3.period - 2) or (t1.period % 100 >= 11 and t1.period = t3.period - 90) left join @NOA t4 on (t1.period % 100 < 10 and t1.period = t4.period - 3) or (t1.period % 100 >= 10 and t1.period = t4.period - 91) ``` Delivers the result expected ``` Company Value Period New_Column ------- ----- ------ ---------- MGT 9 1401 9 MGT 2 1402 12 MGT 3 1403 15 MGT 4 1404 18 MGT 5 1405 21 MGT 6 1406 24 MGT 7 1407 27 MGT 8 1408 30 MGT 9 1409 33 MGT 10 1410 26 MGT 11 1411 15 MGT 12 1412 3 MGT 3 1501 0 ``` Thank you @all :)
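As a hedged alternative sketch (mine, not from either answer above): mapping the YYMM periods onto a linear month index avoids the year-boundary juggling in the triple join, because 1412 and 1501 become consecutive integers. Illustrated with Python's built-in sqlite3 and the sample data from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (company TEXT, value INT, period INT)")
data = [9, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
con.executemany("INSERT INTO t VALUES ('MGT', ?, ?)",
                [(v, 1400 + m) for m, v in enumerate(data, start=1)])
con.execute("INSERT INTO t VALUES ('MGT', 3, 1501)")

# (period/100)*12 + period%100 turns 1412 into 180 and 1501 into 181,
# so "the next three periods" is just a BETWEEN on the linear index.
rows = con.execute("""
SELECT a.period, a.value,
       COALESCE((SELECT SUM(b.value) FROM t b
                 WHERE b.company = a.company
                   AND (b.period/100)*12 + b.period%100
                       BETWEEN (a.period/100)*12 + a.period%100 + 1
                           AND (a.period/100)*12 + a.period%100 + 3), 0)
FROM t a ORDER BY (a.period/100)*12 + a.period%100
""").fetchall()
```

The same month-index idea ports back to SQL Server or MySQL; only the date arithmetic helpers differ.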
``` DROP TABLE IF EXISTS my_table; CREATE TABLE my_table (company CHAR(3) NOT NULL ,dt DATE NOT NULL ,value INT NOT NULL ,PRIMARY KEY(company,dt) ); INSERT INTO my_table VALUES ('MGT',20140101,9), ('MGT',20140201,2), ('MGT',20140301,3), ('MGT',20140401,4), ('MGT',20140501,5), ('MGT',20140601,6), ('MGT',20140701,7), ('MGT',20140801,8), ('MGT',20140901,9), ('MGT',20141001,10), ('MGT',20141101,11), ('MGT',20141201,12), ('MGT',20150101,3); mysql> SELECT * FROM my_table; +---------+------------+-------+ | company | dt | value | +---------+------------+-------+ | MGT | 2014-01-01 | 9 | | MGT | 2014-02-01 | 2 | | MGT | 2014-03-01 | 3 | | MGT | 2014-04-01 | 4 | | MGT | 2014-05-01 | 5 | | MGT | 2014-06-01 | 6 | | MGT | 2014-07-01 | 7 | | MGT | 2014-08-01 | 8 | | MGT | 2014-09-01 | 9 | | MGT | 2014-10-01 | 10 | | MGT | 2014-11-01 | 11 | | MGT | 2014-12-01 | 12 | | MGT | 2015-01-01 | 3 | +---------+------------+-------+ SELECT x.* , COALESCE(SUM(y.value),0) new_val FROM my_table x LEFT JOIN my_table y ON y.company = x.company AND y.dt BETWEEN x.dt + INTERVAL 1 MONTH AND x.dt + INTERVAL 3 MONTH GROUP BY x.company , x.dt; +---------+------------+-------+---------+ | company | dt | value | new_val | +---------+------------+-------+---------+ | MGT | 2014-01-01 | 9 | 9 | | MGT | 2014-02-01 | 2 | 12 | | MGT | 2014-03-01 | 3 | 15 | | MGT | 2014-04-01 | 4 | 18 | | MGT | 2014-05-01 | 5 | 21 | | MGT | 2014-06-01 | 6 | 24 | | MGT | 2014-07-01 | 7 | 27 | | MGT | 2014-08-01 | 8 | 30 | | MGT | 2014-09-01 | 9 | 33 | | MGT | 2014-10-01 | 10 | 26 | | MGT | 2014-11-01 | 11 | 15 | | MGT | 2014-12-01 | 12 | 3 | | MGT | 2015-01-01 | 3 | 0 | +---------+------------+-------+---------+ ```
Introduce calculated values in a new column
[ "mysql", "sql", "sql-server-2008" ]
Suppose I have a table like below: ``` row_id record_id tag_id 1 1 2 2 1 3 3 2 2 4 2 4 5 3 2 6 3 3 ``` I want to get those record_id values that have a record with tag_id 2 but not with tag_id 3; in this case, I want to get record_id 2. I can only think of a SQL statement with 3 nested selects, but it seems bulky. Is there any simpler, faster way to achieve this? Thanks. Edit: The SQL I got: ``` SELECT record_id FROM table_A WHERE record_id NOT IN (SELECT record_id FROM table_A WHERE record_id IN (SELECT record_id FROM table_A WHERE tag_id = 2) AND tag_id = 3) AND record_id IN (SELECT record_id FROM table_A WHERE tag_id = 2) GROUP BY record_id ``` And each record_id may have any number of tag_id values.
This can be simply written as ``` SELECT record_id FROM table_A WHERE tag_id = 2 EXCEPT SELECT record_id FROM table_A WHERE tag_id = 3; ```
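To see EXCEPT in action end to end, here is a small runnable check using Python's built-in sqlite3 (my illustration; SQLite implements EXCEPT with the same set semantics, including deduplication):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE a (row_id INT, record_id INT, tag_id INT)")
con.executemany("INSERT INTO a VALUES (?,?,?)",
                [(1, 1, 2), (2, 1, 3), (3, 2, 2),
                 (4, 2, 4), (5, 3, 2), (6, 3, 3)])

# ids tagged 2, minus ids tagged 3; EXCEPT also deduplicates, so no GROUP BY
rows = con.execute("""
SELECT record_id FROM a WHERE tag_id = 2
EXCEPT
SELECT record_id FROM a WHERE tag_id = 3
""").fetchall()
```

Only record_id 2 survives, matching the expected output in the question.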
You can use **[bool\_or()](http://www.postgresql.org/docs/9.2/static/functions-aggregate.html)** function in your query in postgresql: ``` select record_id from table1 group by record_id having bool_or(tag_id = 2) and not bool_or(tag_id = 3); ``` ## [SQL Fiddle](http://sqlfiddle.com/#!15/1f8c1/5)
SQL - How to select records with value A but not B? (A and B belong to different rows)
[ "sql", "postgresql", "selection", "multiple-value" ]
Although the question seems to be somehow obvious, I could not find a clear specification for how duplicate elimination of UNION DISTINCT is defined (I hope it's defined in general and not per database system): When writing ``` select * from table A union distinct select * from table B ``` and A and B have duplicate values (with regard to the key fields), can I rely on a precedence of the records from A over the records of B ? For example, lets assume A has one record with the key field with value 1 and a data field with value 'x' and B also has one record, also with key value 1, but with data field value 'y'. Can I be sure, that the result will have the value 'x' in the data field?
There is no way to detect the order of elimination, because all columns, not only the key ones, are taken into consideration when performing a `UNION`: ``` create table a(k int primary key,x int); create table b(k int primary key,y int); insert into a(k,x) values (1,1),(2,2); insert into b(k,y) values (1,3),(2,2); select * from a union distinct select * from b ``` The above produces three rows, not two ([demo](http://sqlfiddle.com/#!2/56f2a/1)): ``` 1 1 2 2 1 3 ``` Even though the primary key of `1` is present in both tables, the non-key column in these tables have different values. This preserves both rows - `{1, 1}` and `{1, 3}` in the result of the `UNION`.
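The whole-row deduplication is easy to reproduce; here is the same example run through Python's built-in sqlite3 (a sketch of mine, using the answer's own tables):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE a (k INT PRIMARY KEY, x INT);
CREATE TABLE b (k INT PRIMARY KEY, y INT);
INSERT INTO a VALUES (1,1),(2,2);
INSERT INTO b VALUES (1,3),(2,2);
""")

# UNION compares entire rows: {1,1} and {1,3} both survive, {2,2} collapses
rows = con.execute(
    "SELECT * FROM a UNION SELECT * FROM b ORDER BY 1, 2").fetchall()
```

Three rows come back, not two, because the key column alone never decides which duplicate "wins".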
UNION [DISTINCT] removes duplicate result rows. So if you select columns (1,x) and (1,y) from your tables, no matter how often they occur and in which of the tables, you will get both rows, and each row just once. The result is the same as ``` select distinct * from (select * from a union all select * from b); ``` just shorter and probably faster. BTW: The keyword DISTINCT after UNION is not allowed in all dbms. Usually you would just write UNION. On the other hand some dbms require an alias for any derived table as in `from (select * from a union all select * from b) dummy;` for above query.
SQL UNION: Is the order of duplicate elimination defined?
[ "sql" ]
I have 2 tables in SQL like below and need to run some test queries on the data. Here are the tables with some sample data: ``` Table A ID groupName value id2000 groupA 1 id2000 groupA 2 id2000 groupA 3 id3000 groupB 1 id3000 groupB 2 Table B ID groupNameB valueB id2000 groupA 1 id2000 groupA 2 id2000 groupA -9 id3000 groupB 1 id3000 groupB -9 ``` The tables are similar but actually are different (they have different data and I just shortened it for stack overflow). What I need is a query that joins the two but gives me the values that are in Table A but not in table B. So using the above data I would get a resultset of: ``` id2000 groupA 3 id3000 groupB 2 ``` It's like a NOT UNION, I guess?
You want to use [EXCEPT](http://msdn.microsoft.com/en-us/library/ms188055.aspx). It removes the rows from the first query that are also in the second query. It does not return rows in query2 that were not in query1. ``` select id, groupName, value from tableA except select id, groupNameB, valueB from tableB ```
``` select a.ID, a.groupName, a.Value from [Table A] a left join [Table B] b on a.ID = b.ID and a.groupName = b.groupNameB and a.Value = b.ValueB where b.ID is null ```
SQL query with UNION or INTERSECT
[ "sql", "sql-server", "sql-server-2005" ]
Hi, I'm trying to return a result set from one table. However, I want to add another column to the result set that comes from a different table. ``` SELECT ID, NAME, CASE (SELECT STATUS FROM OTHERTABLE O WHERE O.ID = M.ID) WHEN '1' THEN 'In progress' WHEN '2' THEN 'Complete' END as STATUS FROM MAINTABLE M ``` I'm getting the error: Subquery returned more than 1 value. This is not permitted...etc
Use join and organize your case ``` SELECT M.ID, M.NAME, CASE WHEN O.STATUS = '1' THEN 'In progress' WHEN O.STATUS = '2' THEN 'Complete' END AS STATUS FROM MAINTABLE M JOIN OTHERTABLE O ON (O.ID = M.ID) ```
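The join version is easy to check end to end; here is a minimal runnable sketch using Python's built-in sqlite3 (my illustration, with made-up one-to-one sample rows):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE main  (id INT, name TEXT);
CREATE TABLE other (id INT, status TEXT);
INSERT INTO main  VALUES (1,'alpha'),(2,'beta');
INSERT INTO other VALUES (1,'1'),(2,'2');
""")

# the searched CASE runs once per joined row, so no subquery can return
# "more than 1 value"
rows = con.execute("""
SELECT m.id, m.name,
       CASE WHEN o.status = '1' THEN 'In progress'
            WHEN o.status = '2' THEN 'Complete'
       END AS status
FROM main m JOIN other o ON o.id = m.id
ORDER BY m.id
""").fetchall()
```

Note this assumes one status row per id; with duplicates in the other table, the join would multiply rows instead of erroring.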
This is because your MainTable Id has more then one record in OtherTable try it ``` SELECT ID, NAME, CASE WHEN O.STATUS = '1' THEN 'In progress' WHEN O.STATUS = '2' THEN 'Complete' END as STATUS FROM MAINTABLE M JOIN OTHERTABLE O ON O.Id = M.ID ```
select statement on CASE
[ "mysql", "sql", "select", "case" ]
Wondering if someone could explain why my query is not giving the desired result in SQLite I have two tables Table A contains many lines with a column called picklist\_id in which the value is repeated, I am trying to get the distinct value of these and then check to see if this picklist id exists in table two. This is what I have but I am just getting an empty result ``` SELECT distinct(pt.picklist_id) from picklisttable pt LEFT JOIN pickingtable p on pt.picklist_id = p.picklist_id where p.picklist_id = null ``` Could someone point me in the right direction? I want the picklist\_id from table a returned if it does not exist in table b
This should be IS NULL, not = NULL. ``` SELECT pt.picklist_id FROM picklisttable pt LEFT JOIN pickingtable p on pt.picklist_id = p.picklist_id WHERE p.picklist_id IS NULL; ``` When asking for the existence of data, the EXISTS clause is usually more readable, or the IN clause when dealing with one column only. ``` SELECT pt.picklist_id FROM picklisttable pt WHERE NOT EXISTS (select * from pickingtable p where pt.picklist_id = p.picklist_id); ``` or ``` SELECT pt.picklist_id FROM picklisttable pt WHERE pt.picklist_id NOT IN (select p.picklist_id from pickingtable p); ``` Or deal with sets and use EXCEPT, which I consider even more readable: ``` SELECT picklist_id FROM picklisttable EXCEPT SELECT picklist_id FROM pickingtable; ```
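The anti-join pattern runs unchanged in SQLite, so here is a quick self-contained check using Python's sqlite3 (sample rows are my own):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE picklisttable (picklist_id INT);
CREATE TABLE pickingtable  (picklist_id INT);
INSERT INTO picklisttable VALUES (1),(2),(3);
INSERT INTO pickingtable  VALUES (2);
""")

# LEFT JOIN keeps every picklist row; IS NULL then keeps only unmatched ones
rows = con.execute("""
SELECT pt.picklist_id
FROM picklisttable pt
LEFT JOIN pickingtable p ON pt.picklist_id = p.picklist_id
WHERE p.picklist_id IS NULL
ORDER BY pt.picklist_id
""").fetchall()
```

With `= null` instead of `IS NULL`, the same query returns an empty result, which is exactly the symptom in the question.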
Try this. Same query except instead of `= NULL` it's `IS NULL` ``` SELECT distinct(pt.picklist_id) from picklisttable pt LEFT JOIN pickingtable p on pt.picklist_id = p.picklist_id where p.picklist_id IS null ```
LEFT Join Query failure
[ "sql", "sqlite" ]
I have the following table: ``` +----+-------+---------+ | id | media | status | +----+-------+---------+ | 1 | FOO | ACTIVE | | 1 | FOO | PENDING | | >1 | BAR | ACTIVE | | 2 | FOO | ACTIVE | | 2 | FOO | PENDING | | >3 | BAR | ACTIVE | +----+-------+---------+ ``` What I need to get is a list of `id`, which have `ACTIVE` records with certain `media`, but have no `PENDING` record with the same `media`. In my example `id=1` have covered `FOO`, but uncovered `BAR`. So the resulting table should be: ``` +----+ | id | +----+ | 1 | | 3 | +----+ ``` The only solution I see is to create two tables with `ACTIVE` and `PENDING` records separately, then find records, which are only in `ACTIVE` and not in `PENDING`. But I have no idea how to construct the request. Please advise.
You can use a NOT EXISTS clause (a NOT IN would also work). ``` select distinct id from table t where status = 'ACTIVE' and not exists (select null from table where t.id = id and t.media = media and status = 'PENDING') ``` see [sqlFiddle](http://sqlfiddle.com/#!2/e70b30/1)
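Here is the NOT EXISTS version run against the question's sample data, as a self-contained sketch using Python's built-in sqlite3 (my illustration, not part of the original answer):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INT, media TEXT, status TEXT)")
con.executemany("INSERT INTO t VALUES (?,?,?)",
    [(1, 'FOO', 'ACTIVE'), (1, 'FOO', 'PENDING'), (1, 'BAR', 'ACTIVE'),
     (2, 'FOO', 'ACTIVE'), (2, 'FOO', 'PENDING'), (3, 'BAR', 'ACTIVE')])

# keep an id if it has at least one ACTIVE media with no PENDING row
# for that same id + media combination
rows = con.execute("""
SELECT DISTINCT id FROM t a
WHERE status = 'ACTIVE'
  AND NOT EXISTS (SELECT 1 FROM t b
                  WHERE b.id = a.id AND b.media = a.media
                    AND b.status = 'PENDING')
ORDER BY id
""").fetchall()
```

id 1 qualifies through its BAR row even though its FOO media is pending, matching the expected output.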
``` SELECT DISTINCT id FROM t GROUP BY id, media HAVING MAX(status) = 'ACTIVE'; ``` This assumes that an id that has a pending media should still be included if it also has a different media type without pending. If that's incorrect, or not applicable, you can remove `DISTINCT` from the SELECT and `media` from the GROUP BY.
MySQL query to find unique records
[ "mysql", "sql" ]
I’m going nuts. It seems everyone has had this same problem by the number of results I get when I google ‘left join not working’. I’ve studied them all and despite best efforts I cannot get my specific problem to work. Please help. I have two tables; an animals\_Table and an animalMilestones\_Table. Some of the animals in the animals\_Table have intake records in the animalMilestones\_Table and some do not. I want to join the two tables on animalID and show all rows in the animals\_Table including those that have no matching ‘Intake’ record in the animalMilestones\_Table. Here are the two things I’ve tried that I thought would give the best chance for success but alas, neither work. ``` SELECT animals_Table.animalID, animalMilestones_Table.milestoneType FROM animals_Table LEFT JOIN animalMilestones_Table ON animals_Table.animalID=animalMilestones_Table.animalID WHERE animalMilestones_Table.milestoneType="Intake" SELECT animalID FROM animals_Table LEFT JOIN (SELECT milestoneType FROM animalMilestones_Table WHERE animalMilestones_Table.milestoneType = "Intake") ON animals_Table.animalID = animalMilestones_Table.animalID ```
It seems to me that you want a list of all [animalID] values from [animals\_Table] along with a column that indicates whether or not that [animalID] has a "milestone" equal to 'Intake'. If so, then I would suggest this: ``` SELECT at.animalID, amt.milestoneType FROM animals_Table at INNER JOIN animalMilestones_Table amt ON at.animalID = amt.animalID WHERE amt.milestoneType = 'Intake' UNION SELECT animalID, NULL AS milestoneType FROM animals_Table WHERE animalID NOT IN ( SELECT animalID FROM animalMilestones_Table WHERE milestoneType = 'Intake' ) ORDER BY 1 ``` Come to think of it, this should work too, and may prove to be faster: ``` SELECT at.animalID, amt.milestoneType FROM animals_Table at LEFT JOIN ( SELECT animalID, milestoneType FROM animalMilestones_Table WHERE milestoneType = 'Intake' ) amt ON at.animalID = amt.animalID ```
animalMilestones\_Table.milestoneType will be NULL in cases where there is no record. You need to check for it being NULL or the value you are querying for. ``` animalMilestones_Table.milestoneType="Intake" or animalMilestones_Table.milestoneType is null ``` Not sure on the exact SQL syntax in Access
Left join including a where condition not working
[ "sql", "vb.net", "ms-access", "left-join" ]
I have a field named `Modified_Dt` of type `Datetime` in all of my tables, to keep track of the last modified date and time for a record. Now, let's say I need to know which tables have records that have been modified recently (like today). How do I write a query for that? How do I query multiple tables? By the way, I am using `MS SQL Server 2008 R2`.
``` declare @T table (T_Name nvarchar(255), M datetime) declare @T_Name nvarchar(255), @SQLT nvarchar(max) declare c cursor for select name from sys.tables open c fetch next from c into @T_Name while @@fetch_status = 0 begin set @SQLT = 'select top 1 ''' + @T_Name + ''', Modified_Dt from ' + @T_Name + ' order by Modified_Dt desc' insert @T exec sp_executesql @SQLT fetch next from c into @T_Name end close c deallocate c select * from @T where M >= dateadd(day,datediff(day,0,getdate()),0) ```
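The catalog-plus-dynamic-SQL idea ports to any database with a system catalog. As a hedged, self-contained sketch (mine, using Python's sqlite3 with `sqlite_master` standing in for `sys.tables`; table and column names are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders    (modified_dt TEXT);
CREATE TABLE customers (modified_dt TEXT);
INSERT INTO orders    VALUES (date('now'));
INSERT INTO customers VALUES (date('now', '-3 day'));
""")

# enumerate user tables from the catalog, then probe each with dynamic SQL;
# the interpolated name comes from the catalog itself, never from user input
touched_today = []
for (name,) in con.execute("SELECT name FROM sqlite_master WHERE type = 'table'"):
    hit = con.execute(
        f"""SELECT 1 FROM "{name}" WHERE modified_dt >= date('now') LIMIT 1"""
    ).fetchone()
    if hit is not None:
        touched_today.append(name)
```

This is the same loop-collect-filter shape as the cursor answer above, just with the host language doing the looping.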
``` USE MASTER GO DECLARE @ObjectName NVARCHAR(255) DECLARE TablesList CURSOR FOR select object_name(object_id, db_id('DBStackExchange')) from [DBStackExchange].sys.columns where name = 'Modified_Dt' OPEN TablesList FETCH NEXT FROM TablesList INTO @ObjectName WHILE @@FETCH_STATUS = 0 BEGIN exec ( 'If exists ( SELECT 1 FROM DBStackExchange.dbo.[' + @ObjectName + '] Where convert(varchar(20),Modified_Dt,103)>=convert(varchar(20),getdate(),103)) Print ''' + @ObjectName + ''' ' ) FETCH NEXT FROM TablesList INTO @ObjectName END CLOSE TablesList DEALLOCATE TablesList ``` Note: Replace 'DBStackExchange' with your Database name
List tables with recently modified records
[ "sql", "sql-server", "sql-server-2008-r2" ]
I need to select all the `rows` AFTER the first 50 rows from a `table`. I can't seem to figure out an efficient way to do it. I could select the first 50 `rows`, put their `IDs` in an array, then make another `query` that selects everything except them, but that seems wasteful. I can't seem to find anything here that does what I need: <http://www.w3schools.com/sql/default.asp> Is there anyway to do this in one `SQL query`? Thanks for the help!
You are probably looking for `OFFSET`: For example: ``` SELECT * FROM users ORDER BY id LIMIT 100 OFFSET 50 ``` Note that this returns at most 100 rows. To fetch *all* rows after the first 50, MySQL still requires a row count alongside the offset, so you have to supply a very large one (the MySQL manual suggests `18446744073709551615`).
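A quick runnable check of the offset behavior, using Python's built-in sqlite3 (my illustration; note that `LIMIT -1`, meaning "no limit", is a SQLite convenience, while MySQL needs a large literal row count instead):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO users VALUES (?)", [(i,) for i in range(1, 101)])

# skip the first 50 rows and return everything that follows
rows = con.execute(
    "SELECT id FROM users ORDER BY id LIMIT -1 OFFSET 50").fetchall()
```

An ORDER BY is essential here: without it, "the first 50 rows" is not a well-defined set in any SQL engine.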
You can [define](https://dev.mysql.com/doc/refman/5.0/en/select.html) an offset: > [LIMIT {[offset,] row\_count | row\_count OFFSET offset}] Like this: ``` SELECT * FROM table1 LIMIT 5, 10; # Retrieve rows 6-15 ```
Select after first 50 with MYSQL and PHP
[ "mysql", "sql" ]
I have this query ``` Select Model_Number From Guitar.Fenders Where Model_number LIKE ('5%' OR '6%') ``` This doesn't work. I only want the query to return model numbers that begin with 5 or 6.
There are a couple of ways you can do this, one by making use of `LIKE` and the other by making use of `IN` : `LIKE` ``` Select Model_Number From Guitar.Fenders Where Model_number LIKE '5%' OR Model_Number LIKE '6%' ``` `IN` ``` Select Model_Number From Guitar.Fenders Where LEFT(Model_number, 1) IN ('5','6') ``` I'd prefer to use the `IN` statement in case the 'conditions' are likely to increase. If not, you could simplify this by making use of: ``` Select Model_Number From Guitar.Fenders WHERE Model_number >= '5' AND Model_number < '7' ``` You can choose any of these three which suits your needs. Hope this helps!!!
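Here is the two-LIKE form run against some invented model numbers, as a self-contained sketch using Python's built-in sqlite3 (the table and values are hypothetical, just for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE fenders (model_number TEXT)")
con.executemany("INSERT INTO fenders VALUES (?)",
                [('5150',), ('62RI',), ('40TH',), ('57CU',)])

# each LIKE needs its own column reference; LIKE ('5%' OR '6%') is not valid SQL
rows = con.execute("""
SELECT model_number FROM fenders
WHERE model_number LIKE '5%' OR model_number LIKE '6%'
ORDER BY model_number
""").fetchall()
```

Only the models starting with 5 or 6 come back; the `40TH` row is filtered out.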
``` Select Model_Number From Guitar.Fenders Where (Model_number LIKE '5%' OR Model_number LIKE '6%') ```
How do I pass multiple options for a like statement?
[ "sql", "db2" ]
We are new to Postgres. We have the following query, by which we can select the top N records from each category. ``` create table temp ( gp char, val int ); insert into temp values ('A',10); insert into temp values ('A',8); insert into temp values ('A',6); insert into temp values ('A',4); insert into temp values ('B',3); insert into temp values ('B',2); insert into temp values ('B',1); select a.gp,a.val from temp a where a.val in ( select b.val from temp b where a.gp=b.gp order by b.val desc limit 2); ``` The output of the above query is something like this ``` gp val ---------- A 10 A 8 B 3 B 2 ``` But our requirement is different: we want to select the top n% of records from each category, where n is not fixed; n is based on some percentage of the elements in each group.
To retrieve the rows based on the percentage of the number of rows in each group you can use two window functions: one to count the rows and one to give them a unique number. ``` select gp, val from ( select gp, val, count(*) over (partition by gp) as cnt, row_number() over (partition by gp order by val desc) as rn from temp ) t where rn / cnt <= 0.75; ``` SQLFiddle example: <http://sqlfiddle.com/#!15/94fdd/1> --- Btw: using `char` is almost always a bad idea because it is a fixed-length data type that is padded to the defined length. I hope you only did that for setting up the example and don't use it in your real table.
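The same two-window-function idea can be exercised outside Postgres; here is a sketch of mine using Python's built-in sqlite3 (SQLite 3.25+ for window functions; the `* 1.0` guards against integer division, which the ratio in the answer above relies on the engine to avoid):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (gp TEXT, val INT)")
con.executemany("INSERT INTO t VALUES (?,?)",
    [('A', 10), ('A', 8), ('A', 6), ('A', 4),
     ('B', 3), ('B', 2), ('B', 1)])

# keep each row whose rank within its group is inside the top 50%
rows = con.execute("""
SELECT gp, val FROM (
  SELECT gp, val,
         COUNT(*)     OVER (PARTITION BY gp)                   AS cnt,
         ROW_NUMBER() OVER (PARTITION BY gp ORDER BY val DESC) AS rn
  FROM t
)
WHERE rn * 1.0 / cnt <= 0.5
ORDER BY gp, val DESC
""").fetchall()
```

Group A (4 rows) keeps 2 rows and group B (3 rows) keeps 1, so the cutoff really is proportional to each group's size.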
Referencing the response from a\_horse\_with\_no\_name, you can achieve something similar using *[percent\_rank()](https://www.postgresql.org/docs/11/functions-window.html)* ``` SELECT gp, val, pct_rank FROM ( SELECT gp, val, percent_rank() over (order by val desc) as pct_rank FROM variables.temp ) t WHERE pct_rank <= 0.75; ``` You can then set the final WHERE clause to return data at whatever *percent\_rank()* threshold you require.
Postgresql : How do I select top n percent(%) entries from each group/category
[ "sql", "database", "postgresql", "postgresql-9.1" ]
SQL: ``` WITH joined AS ( SELECT * FROM table_a a JOIN table_b b ON (a.a_id = b.a_id) ) SELECT a_id FROM joined ``` returns invalid identifier. How can you select joined column when using WITH clause? I have tried aliases, prefixing and nothing worked. I know I can use: ``` WITH joined AS ( SELECT a.a_id FROM table_a a JOIN table_b b ON (a.a_id = b.a_id) ) SELECT a_id FROM joined ``` but I need this alias to cover all fields. Only way I managed to meet this condition is using: ``` WITH joined AS ( SELECT a.a_id a_id_alias, a.*, b.* FROM table_a a JOIN table_b b ON (a.a_id = b.a_id) ) SELECT a_id_alias FROM joined ``` but it is not perfect solution...
You can use the effect of the `USING` clause when joining the tables. When you join tables where the join columns have the same name (as it is the case with your example), the `USING` clause will return the join column only once, so the following works: ``` with joined as ( select * from table_a a join table_b b using (a_id) ) select a_id from joined; ``` SQLFiddle example: <http://sqlfiddle.com/#!4/e7e099/2>
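SQLite also implements `JOIN … USING`, so the column-collapsing effect can be checked without an Oracle instance. A small sketch of mine using Python's sqlite3:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE ta (a_id INT, x INT);
CREATE TABLE tb (a_id INT, y INT);
INSERT INTO ta VALUES (1, 10);
INSERT INTO tb VALUES (1, 20);
""")

# USING collapses the two a_id columns into one, so SELECT * yields 3 columns
cur = con.execute("SELECT * FROM ta JOIN tb USING (a_id)")
cols = [d[0] for d in cur.description]

# and the derived table now has an unambiguous a_id to select from
row = con.execute(
    "SELECT a_id FROM (SELECT * FROM ta JOIN tb USING (a_id))").fetchone()
```

With `ON ta.a_id = tb.a_id` instead, `SELECT *` would expose two `a_id` columns and the outer `SELECT a_id` would be ambiguous, which is the error in the question.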
I don't think you can do this without aliases. The result of the "joined" query has two fields, both named a\_id. Unless you alias one (or both), as you did in your final query, the outer query has no idea which a\_id you are referring to. Why is your final query not a "perfect" solution?
Oracle SQL WITH clause select joined column
[ "sql", "oracle", "join", "with-clause" ]
The table used is here: <http://sqlzoo.net/wiki/SUM_and_COUNT> There's a thing I can't understand. If I use this: ``` select continent, count(name) from world group by continent having population >= 10000000 ``` I get: Unknown column 'population' in 'having clause'. But this other query is considered fine: ``` select continent from world group by continent having sum(population) >= 100000000 ``` Why? Why is the population column recognized here, but not in the first query?
The syntax for the HAVING clause is: ``` SELECT column_name, aggregate_function(column_name) FROM table_name WHERE column_name operator value GROUP BY column_name HAVING aggregate_function(column_name) operator value; ``` There has to be an aggregate function (or an already-aggregated column) after HAVING.
I am guessing this is MySQL, since other DBMSs will throw a different error (e.g. SQL Server: `Column 'world.population' is invalid in the HAVING clause because it is not contained in either an aggregate function or the GROUP BY clause`, or PostgreSQL: `column "world.population" must appear in the GROUP BY clause or be used in an aggregate function`) MySQL is slightly different and [extends the GROUP BY and HAVING clauses](http://dev.mysql.com/doc/refman/5.0/en/group-by-extensions.html), the docs state: > A similar MySQL extension applies to the HAVING clause. In standard SQL, a query that includes a GROUP BY clause cannot refer to nonaggregated columns in the HAVING clause that are not named in the GROUP BY clause. A MySQL extension permits references to such columns to simplify calculations. This extension assumes that the nongrouped columns will have the same group-wise values. Otherwise, the result is indeterminate. So in the absence of an aggregate function, and because `population` is not in the group by clause, MySQL assumes you are using a column alias, and there is no column with an alias of `population`, therefore you get an unknown column error. For example, if you give your column an alias of `population`, the query compiles: ``` select continent, count(name) as population from world group by continent having population >= 10000000 ``` And is the semantic equivalent of ``` select continent, count(name) as population from world group by continent having count(name) >= 10000000 ``` --- I am not sure of your exact criteria, but I suspect you either want **Continents with a total population over 10,000,000, and a count of countries in those continents** ``` select continent, count(name) as Countries from world group by continent having sum(population) >= 10000000 ``` Or **A count of countries in each continent, counting only countries with a population over 10,000,000** ``` select continent, count(name) as Countries from world where population >= 10000000 group by continent; ```
SQL: why is the HAVING clause rejected in this case?
[ "", "sql", "group-by", "sum", "having", "" ]
Can anyone point out why this insert trigger is not inserting the new rows into the IVRTestB table? ``` If Object_ID('MyTrig3', 'TR') IS Not Null Drop Trigger MyTrig3; GO Alter Trigger MyTrig3 On dbo.IVRTest After Insert, update AS Begin SET NOCOUNT ON; Insert Into [dbo].[IVRTestB] (IVRAID, IVRName, DayNumber, OpenFlag) Select 'i.IVRAID', 'i.IVRName', 'i.DayNumber', 'i.OpenFlag' From inserted i INNER JOIN dbo.IVRTestB b On i.IVRAID = b.IVRAID END ```
By putting every column of `Inserted` into single quotes, you're effectively inserting **string literals** into your destination table - not the column values! Use this code instead: ``` INSERT INTO [dbo].[IVRTestB] (IVRAID, IVRName, DayNumber, OpenFlag) SELECT i.IVRAID, i.IVRName, i.DayNumber, i.OpenFlag -- *NO* single quotes here!!!! FROM inserted i -- change this WHERE clause to insert those rows that AREN'T already in IVRTestB ! WHERE i.IVRAID NOT IN (SELECT DISTINCT IVRAID FROM dbo.IVRTestB) ```
``` Alter Trigger MyTrig3 On dbo.IVRTest After Insert AS Begin SET NOCOUNT ON; IF EXISTS ( SELECT 1 FROM INSERTED WHERE INSERTED.DayNumber IS NULL ) Insert Into [dbo].[IVRTestB] (IVRAID, IVRName, DayNumber, OpenFlag) Select i.IVRAID, i.IVRName, i.DayNumber, i.OpenFlag From inserted i WHERE INSERTED.DayNumber IS NULL END ```
Insert trigger not inserting new rows in other table
[ "", "sql", "sql-server", "" ]
I need to find same-digit phone numbers in a table and delete them. The phone numbers look like this: ``` +999999999999 11111111 0000000000000 44444444 ``` I am following [this answer](https://stackoverflow.com/questions/24251440/regexp-find-numbers-that-have-all-same-digits) to solve this. I am trying: ``` select * from tblPhone where PhoneNo like '^([0-9a-z])\1+$' ``` But it did not succeed. `PhoneNo` is varchar. How can I achieve this?
Try this: ``` select * from tblPhone where substring(replace(PhoneNo,'+',''),1,len(replace(PhoneNo,'+',''))-1) = substring(replace(PhoneNo,'+',''),2,len(replace(PhoneNo,'+',''))) ``` The idea is that if substring from 1st to second-last position matches the one from 2nd to last, the string must be composed of identical characters. [Demo](http://rextester.com/LYMU82849)
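To see why the substring comparison works, here is a sketch using Python's built-in `sqlite3` with hypothetical numbers (SQLite spells the functions `substr`/`length` rather than SQL Server's `substring`/`len`, but the logic is identical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tblPhone (PhoneNo TEXT)")
con.executemany("INSERT INTO tblPhone VALUES (?)",
                [("+999999999999",), ("11111111",), ("0000000000000",),
                 ("44444444",), ("+441234567890",), ("5551234",)])

# Strip the leading '+', then compare the string minus its last
# character with the string minus its first character; they are equal
# only when every character is identical.
rows = con.execute("""
    SELECT PhoneNo FROM tblPhone
    WHERE substr(replace(PhoneNo, '+', ''), 1,
                 length(replace(PhoneNo, '+', '')) - 1)
        = substr(replace(PhoneNo, '+', ''), 2)
""").fetchall()
print([r[0] for r in rows])
# -> ['+999999999999', '11111111', '0000000000000', '44444444']
```

The two mixed-digit numbers drop out because shifting the string by one position exposes the first differing pair.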
Here is an idea. There is a problem with the first character, which can be a `+` or number. Let's substitute the second character for an empty string and look at the result: ``` where replace(PhoneNo, substring(PhoneNo, 2, 1), '') in ('+', '') and (PhoneNo not like '%[^0-9]%' and PhoneNo like '[0-9]%'or PhoneNo not like '+%[^0-9]%') and PhoneNo like '+%' ) ```
using regex to find same digit phone numbers
[ "", "sql", "sql-server", "regex", "sql-server-2008", "" ]
Consider the following table `#temp`: ``` objectid field1 field2 field3 -------------------------------- 1 X 001 foo 2 Y 022 bar 2 Z 033 baz 3 A 111 abc 3 B 222 def 3 C 333 ghi 4 Q 900 tom ``` I need to concatenate `field1`, `field2` and `field3` for each objectid: ``` objectid field1 field2 field3 ------------------------------------------ 1 X 001 foo 2 Y;Z 022;033 barbaz 3 A;B;C 111;222;333 abc;def;ghi 4 Q 900 tom ``` I tried this first for only `field1` as follows: ``` select c.FLUSObjectNumber, [field1] = stuff( ( select ';' + field1 from #temp as c2 where c2.objectid = c.objectid and c2.field1 = c.field1 for xml path(''), type ).value('.', 'varchar(max)'), 1, 1, '' ) from #temp c group by c.objectid, c.field1 ; ``` However, this returns the list I already have in `#temp`. What am I doing wrong? How can I generate the desired output?
``` declare @t table (objectid INT, field1 VARCHAR(50), field2 VARCHAR(50), field3 VARCHAR(50)) INSERT INTO @t (objectid,field1,field2,field3)values (1,'X','001','foo') INSERT INTO @t (objectid,field1,field2,field3)values (2,'Y','022','bar') INSERT INTO @t (objectid,field1,field2,field3)values (2,'Z','033','baz') INSERT INTO @t (objectid,field1,field2,field3)values (3,'A','111','abc') INSERT INTO @t (objectid,field1,field2,field3)values (3,'B','222','def') INSERT INTO @t (objectid,field1,field2,field3)values (3,'C','333','ghi') INSERT INTO @t (objectid,field1,field2,field3)values (4,'Q','900','tom') Select distinct t.objectid , STUFF((Select distinct ',' + t1.field1 from @t t1 where t.objectid = t1.objectid ORDER BY 1 FOR XML PATH('')) , 1, 1, '')As field, STUFF((Select distinct ',' + t2.field2 from @t t2 where t.objectid = T2.objectid ORDER BY 1 FOR XML PATH(''),TYPE).value('.', 'NVARCHAR(MAX)') , 1, 1, ' ')As field2, STUFF((Select distinct ',' + T3.field3 from @t t3 where t.objectid = t3.objectid ORDER BY 1 FOR XML PATH(''),TYPE).value('.', 'NVARCHAR(MAX)') , 1, 1, ' ')As field3 from @t t group by t.objectid ```
``` DECLARE @Temp TABLE ( objectid INT, field1 VARCHAR(20), field2 VARCHAR(20), field3 VARCHAR(20) ) INSERT INTO @Temp ( objectid, field1, field2, field3 ) VALUES (1 ,'X' ,'001' ,'foo'), (2 ,'Y' ,'022' ,'bar'), (2 ,'Z' ,'033' ,'baz'), (3 ,'A' ,'111' ,'abc'), (3 ,'B' ,'222' ,'def'), (3 ,'C' ,'333' ,'ghi'), (4 ,'Q' ,'900' ,'tom'); SELECT DISTINCT t.objectid, f1.f1, f2.f2, f3.f3 FROM @Temp t OUTER APPLY ( SELECT STUFF((SELECT ',' + Field1 FROM @Temp WHERE objectid = t.objectid ORDER BY field1 FOR XML PATH('')),1,1,'') f1 ) f1 OUTER APPLY ( SELECT STUFF((SELECT ',' + Field2 FROM @Temp WHERE objectid = t.objectid ORDER BY field2 FOR XML PATH('')),1,1,'') f2 ) f2 OUTER APPLY ( SELECT STUFF((SELECT ',' + Field3 FROM @Temp WHERE objectid = t.objectid ORDER BY field1 FOR XML PATH('')),1,1,'') f3 ) f3 ``` OUTPUT: ``` objectid f1 f2 f3 1 X 001 foo 2 Y,Z 022,033 bar,baz 3 A,B,C 111,222,333 abc,def,ghi 4 Q 900 tom ```
How can I concatenate values from 3 column per id?
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have this query ``` SELECT Month(date) AS Monthly, Year(date) AS Annual, COUNT(idcad) AS NumCad, SUM(CONVERT(FLOAT, valorpag)) AS Valor FROM PI_AS WHERE (status = 'Paid' OR status = 'Available') AND platform = 'Sales' GROUP BY Year(date), Month(date) ORDER BY Year(date), Month(date) ``` sample result: ``` Monthly | Annual | NumCad | Valor 3 | 2014 | 62 | 72534 4 | 2014 | 7 | 8253.6 5 | 2014 | 42 | 45356.39 6 | 2014 | 36 | 33343.19 7 | 2014 | 5 | 4414.6 ``` and this query ``` SELECT Month(date) AS Monthly, Year(date) AS Annual, COUNT(idcad) AS NumCad, SUM(CONVERT(FLOAT, valorpag)) AS Valor FROM PI_PP WHERE (status = 'Completed') AND platform = 'Sales' GROUP BY Year(date), Month(date) ORDER BY Year(date), Month(date) ``` sample result: ``` Monthly | Annual | NumCad | Valor 4 | 2014 | 6 | 2572.80 5 | 2014 | 8 | 7828 6 | 2014 | 3 | 3891.60 7 | 2014 | 2 | 278.3 ``` I tried to UNION the queries: ``` SELECT Month(date) AS Monthly, Year(date) AS Annual, COUNT(idcad) AS NumCad, SUM(CONVERT(FLOAT, valorpag)) AS Valor FROM PI_AS WHERE (status = 'Paid' OR status = 'Available') AND platform = 'Sales' GROUP BY Year(date), Month(date) UNION SELECT Month(date) AS Monthly, Year(date) AS Annual, COUNT(idcad) AS NumCad, SUM(CONVERT(FLOAT, valorpag)) AS Valor FROM PI_PP WHERE (status = 'Completed') AND platform = 'Sales' GROUP BY Year(date), Month(date) ORDER BY Year(date), Month(date) ``` But when I do this, it repeats the rows with the same month... I want the SUM of `NumCad` and `Valor` for each month. The UNION results in something like this: ``` Monthly | Annual | NumCad | Valor 6 | 2014 | 3 | 3891.60 6 | 2014 | 36 | 33343.19 7 | 2014 | 5 | 4414.6 7 | 2014 | 2 | 278.3 ``` but I want this: ``` Monthly | Annual | NumCad | Valor 6 | 2014 | 39 | 37234.79 7 | 2014 | 7 | 4692.9 ``` Any idea?
First you need to run union on data and then do the aggregation: ``` SELECT Month(data.date) AS Monthly, Year(data.date) AS Annual, COUNT(data.idcad) AS NumCad, SUM(CONVERT(FLOAT, data.valorpag)) AS Valor FROM (SELECT date, idcad, valorpag FROM PI_AS WHERE (status = 'Paid' OR status = 'Available') AND platform = 'Sales' UNION ALL SELECT date, idcad, valorpag FROM PI_PP WHERE (status = 'Completed') AND platform = 'Sales' ) data GROUP BY Year(data.date), Month(data.date) ORDER BY Year(data.date), Month(data.date) ```
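The union-first, aggregate-second shape can be verified on a miniature version of the data. A sketch using Python's built-in `sqlite3` with made-up one-row-per-month tables (SQLite has no `CONVERT`, so the cast is dropped, and `ROUND` keeps the floats tidy):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE PI_AS (mth INT, valorpag REAL);
    CREATE TABLE PI_PP (mth INT, valorpag REAL);
    INSERT INTO PI_AS VALUES (6, 33343.19), (7, 4414.6);
    INSERT INTO PI_PP VALUES (6, 3891.60), (7, 278.3);
""")

# Union the raw rows first, then aggregate, so months shared by the
# two tables collapse into a single group instead of two rows.
rows = con.execute("""
    SELECT mth, COUNT(*) AS NumCad, ROUND(SUM(valorpag), 2) AS Valor
    FROM (SELECT mth, valorpag FROM PI_AS
          UNION ALL
          SELECT mth, valorpag FROM PI_PP) data
    GROUP BY mth
    ORDER BY mth
""").fetchall()
print(rows)  # -> [(6, 2, 37234.79), (7, 2, 4692.9)]
```

Note the `UNION ALL`: a plain `UNION` would silently drop a row if both tables happened to produce identical values for a month.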
This should fix what you want: ``` SELECT Monthly, Annual, SUM(NumCad), SUM(Valor) FROM ( SELECT Month(date) AS Monthly, Year(date) AS Annual, COUNT(idcad) AS NumCad, SUM(CONVERT(FLOAT, valorpag)) AS Valor FROM PI_AS WHERE (status = 'Paid' OR status = 'Available') AND platform = 'Sales' GROUP BY Year(date), Month(date) UNION SELECT Month(date) AS Monthly, Year(date) AS Annual, COUNT(idcad) AS NumCad, SUM(CONVERT(FLOAT, valorpag)) AS Valor FROM PI_PP WHERE (status = 'Completed') AND platform = 'Sales' GROUP BY Year(date), Month(date) ) s GROUP BY Annual, Monthly ```
SUM some columns of SQL Selects with UNION of two tables
[ "", "sql", "sql-server", "" ]
I'm using phpMyAdmin for submitting queries. When using GROUP BY in a subquery, the whole application just hangs without errors until I restart the browser. I have three tables: `files` stores information about uploaded files, `file_category` defines the available categories for files, and `file_category_r` stores relations between files and categories. I want to count how many files each category has, but some files can have multiple entries in the files table, so I need to group them by `files.filename`. I tried two different approaches, both resulting in a hang: ``` SELECT fc.*, (SELECT COUNT(*) FROM file_category_r WHERE file_category_r.category_id = fc.id AND file_category_r.file_id IN (SELECT f2.id FROM (SELECT * FROM files f3 GROUP BY f3.filename) f2 WHERE f2.mandant_id = 1) ) as file_count FROM file_category fc ORDER BY name ASC ``` or ``` SELECT fc.*, (SELECT COUNT(*) FROM file_category_r WHERE file_category_r.category_id = fc.id AND file_category_r.file_id IN (SELECT id FROM files WHERE mandant_id = 1 GROUP BY filename) ) as file_count FROM file_category fc ORDER BY name ASC ``` I don't see a problem with my queries; running the subquery alone works OK. Even removing the GROUP BY returns a result, but the result is wrong, because it counts duplicate values.
Here is the table schema: ``` CREATE TABLE IF NOT EXISTS `files` ( `id` bigint(20) unsigned NOT NULL, `project_id` bigint(20) unsigned DEFAULT NULL, `customer_id` bigint(20) unsigned DEFAULT NULL, `opportunity_id` int(11) DEFAULT NULL, `task_id` bigint(20) unsigned DEFAULT NULL, `calendar_event_id` bigint(20) unsigned DEFAULT NULL, `mandant_id` tinyint(4) DEFAULT NULL, `time` timestamp NULL DEFAULT CURRENT_TIMESTAMP, `size` float NOT NULL, `mime_type` varchar(100) NOT NULL, `filename` text NOT NULL, `file` longblob NOT NULL, `folder_id` int(11) DEFAULT NULL, `user_id` int(11) DEFAULT NULL, `is_public` tinyint(1) unsigned NOT NULL DEFAULT '0', `description` text, `file_link` varchar(500) DEFAULT NULL ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=104832 ; CREATE TABLE IF NOT EXISTS `file_category` ( `id` int(11) NOT NULL, `name` varchar(200) NOT NULL, `parent` int(11) DEFAULT NULL ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=445 ; CREATE TABLE IF NOT EXISTS `file_category_r` ( `id` bigint(20) unsigned NOT NULL, `file_id` bigint(20) unsigned NOT NULL, `category_id` int(11) NOT NULL ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=300346 ; ``` What am I doing wrong? The tables are quite big, is it possible the request is too heavy? I'm out of ideas, please help! Thanks!
``` select fc.name, count(*) from file_category fc inner join file_category_r fcr on fc.id = fcr.category_id group by fc.name ``` Not quite sure about that "some files can have multiple entries in the files table, so I need to group them by files.filename", though. Maybe you need something like ``` select fc.name, count(distinct f.filename) from file_category fc inner join file_category_r fcr on fc.id = fcr.category_id inner join files f on fcr.file_id = f.id group by fc.name ```
Often, the use of `in` can result in inefficient query plans. You can try `exists` instead: ``` SELECT fc.*, (SELECT COUNT(*) FROM file_category_r fcr WHERE fcr.category_id = fc.id AND exists (select 1 from files f where f.mandant_id = 1 and fcr.file_id = f.id) ) as file_count FROM file_category fc ORDER BY name ASC; ``` Now, you should add indexes. Start with `file_category_r(category_id, file_id)` and `files(id, mandant_id)`.
How to use GROUP BY in MySQL subquery
[ "", "mysql", "sql", "group-by", "subquery", "" ]
I have a mammal attribute table with the following elements: ``` MammalAttributeID AttributeID MammalID ``` Some mammals have an attribute record of "cat" (AttributeID 234), some have an attribute record of "feline" (AttributeID 456) and some have both "feline" and "cat" attribute records. We want to replace the attribute of "cat" with "feline" in the mammal attribute table and remove the "cat" attribute. So if a mammal has only the "feline" attribute record it remains unchanged. If it has both "cat" and "feline" attribute records, the "cat" attribute record is deleted. If it has only the "cat" attribute record, a "feline" attribute record is inserted and the "cat" attribute record is deleted. I can't wrap my brain around the SQL necessary to do this. Can some SQL expert point me in the right direction?
I think Olga's response would miss the case where a mammal is a cat but not a feline, because the delete happens first. ``` --update cats to felines Update "mammal attribute table" set AttributeID = 456 where AttributeID=234 go --delete cat attribute delete from attributetable where AttributeID=234 go --delete duplicate feline attribute delete from "mammal attribute table" where mammalattributeid in( select max(mammalattributeid) from "mammal attribute table" group by mammalid,attributeid having count(*)>1) ```
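As a sanity check, the update-then-dedupe sequence above can be replayed on the question's three cases with Python's built-in `sqlite3`. This is a sketch with hypothetical row IDs; the delete from the attribute-definitions table is omitted since only the mammal-attribute rows matter here:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE ma (MammalAttributeID INTEGER PRIMARY KEY,
                     AttributeID INT, MammalID INT);
    -- mammal 1: feline only; mammal 2: cat only; mammal 3: both
    INSERT INTO ma (AttributeID, MammalID) VALUES
        (456, 1), (234, 2), (234, 3), (456, 3);
""")

# Step 1: turn every 'cat' (234) into 'feline' (456).
con.execute("UPDATE ma SET AttributeID = 456 WHERE AttributeID = 234")

# Step 2: drop the duplicate 'feline' row created for the mammal
# that had both attributes (at most one duplicate per mammal here).
con.execute("""
    DELETE FROM ma WHERE MammalAttributeID IN (
        SELECT MAX(MammalAttributeID) FROM ma
        GROUP BY MammalID, AttributeID HAVING COUNT(*) > 1)
""")

rows = con.execute(
    "SELECT MammalID, AttributeID FROM ma ORDER BY MammalID").fetchall()
print(rows)  # -> [(1, 456), (2, 456), (3, 456)]
```

All three mammals end up with exactly one "feline" record, including the cat-only one that the delete-first ordering would have lost.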
Step 1. Delete rows with 'cat' attribute for mammals that have 'feline' attribute too. ``` DELETE FROM [YOUR_TABLE] WHERE MammalID in (SELECT MammalID FROM [YOUR_TABLE] WHERE AttributeID = 234 -- cat INTERSECT SELECT MammalID FROM [YOUR_TABLE] WHERE AttributeID = 456) --feline AND AttributeID = 234 ``` Step 2. Replace all remains 'cat' with 'feline' ``` UPDATE [YOUR_TABLE] SET AttributeID = 456 WHERE AttributeID = 234 ```
Deleting and replacing entries in a table using SQL Server
[ "", "sql", "sql-server", "insert", "sql-delete", "" ]
Say I've got a query like this: ``` select table1.id, table1.name from table1 inner join table2 on table1.id = table2.id where table1.name = "parent" and table2.status = 1 ``` Is it true that, since there's an inner join, I can refer to table2's status column even through table1? Like this: ``` select table1.id, table1.name from table1 inner join table2 on table1.id = table2.id where table1.name = "parent" and table1.status = 1 ``` And if yes, which of the two ways is better?
If I am not mistaken, you are asking whether, in an inner join, two fields with the same name, data type and length become one field in the query. Technically that is not the case. Regardless, `Table1.Status` refers to `Table1`'s value and `Table2.Status` refers to `Table2`'s. The two queries above CAN produce different results from each other. A good rule is to put your conditions on the base table - `Table1` in this case. If a field is exclusive to another table, that's when you use that table's field.
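That the two queries can disagree is easy to demonstrate with a toy example where the two status columns hold different values for the same id. A sketch in Python's built-in `sqlite3` with hypothetical data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table1 (id INT, name TEXT, status INT);
    CREATE TABLE table2 (id INT, status INT);
    INSERT INTO table1 VALUES (1, 'parent', 0);
    INSERT INTO table2 VALUES (1, 1);
""")

# Same join, same data; only the table qualifying 'status' changes.
q = """SELECT table1.id FROM table1
       INNER JOIN table2 ON table1.id = table2.id
       WHERE table1.name = 'parent' AND {}.status = 1"""

n2 = len(con.execute(q.format("table2")).fetchall())  # filter on table2
n1 = len(con.execute(q.format("table1")).fetchall())  # filter on table1
print(n2, n1)  # -> 1 0
```

Filtering on `table2.status` finds the row; filtering on `table1.status` finds nothing, because the join does not merge the two columns.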
No, that's not true. By Inner join what you are doing is say if you have table1 with m rows and table two with n rows then the third `SET` that will be produced by joining the two tables will have m\*n rows based on match condition that you have mentioned in where clause. It's not m+n rows or infact columns of the two tables are not getting merged at database level. status column will remain in the table it has been defined. Hope that helps!!!
sql - what's the faster/better way to refer to columns in a where clause with inner joins?
[ "", "sql", "performance", "syntax", "inner-join", "where-clause", "" ]
I have a table in SQL Server 2012, with columns like: ``` Client_ID, Sale_Date ``` I need 2 additional flag columns: ``` FirstPurchase, ActiveWithin90Days ``` "FirstPurchase" should take one of these 2 values: **new** or **old**. **New** means the Client\_ID value is identified for the first time given the existing time frame. **Old** means the value has already been identified for the first time in the past. "ActiveWithin90Days" should take one of these 2 values: **1** or **0**. Value **1** means the Client\_ID value exists in the previous 90 days. Value **0** means it does NOT exist in the past 90 days. The desired output looks like this: ``` Client_ID Sale_Date FirstPurchase ActiveWithin90Days 1 2013-03-01 new 1 2 2013-04-01 new 1 1 2013-05-01 old 1 1 2013-09-01 old 0 3 2013-10-01 new 1 ```
You need to have a subquery that finds the first sale date. Something like: ``` select s.ClientID, s.Sale_Date, case when s.Sale_Date = m.MinSale_Date then 1 else 0 end AS IsFirstSale, case when dateadd(day, -90, getdate()) < m.MinSale_Date then 1 else 0 end AS IsNewAccount from Sales s inner join (select m.ClientId, MIN(m.Sale_Date) MinSale_Date from Sales m group by m.ClientId) m on s.ClientID = m.ClientId ```
Try this: ``` WITH CTE AS ( SELECT *, RN = ROW_NUMBER() OVER(PARTITION BY ClientID ORDER BY Sale_Date) FROM YourTable ) SELECT A.ClientID, A.Sale_Date, CASE WHEN RN = 1 THEN 'new' ELSE 'old' END Flag1, CASE WHEN B.ClientID IS NULL THEN 1 ELSE 0 END Flag2 FROM CTE A OUTER APPLY (SELECT TOP 1 * FROM YourTable WHERE ClientID = A.ClientID AND Sale_Date <= DATEADD(DAY,-90,A.Sale_Date)) B ``` [**Here is**](http://sqlfiddle.com/#!6/35809/7) an sqlfiddle with a demo. And the results are: ``` ╔══════════╦════════════╦═══════╦═══════╗ ║ ClientID ║ Sale_Date ║ Flag1 ║ Flag2 ║ ╠══════════╬════════════╬═══════╬═══════╣ ║ 1 ║ 2013-03-01 ║ new ║ 1 ║ ║ 1 ║ 2013-05-01 ║ old ║ 1 ║ ║ 1 ║ 2013-09-01 ║ old ║ 0 ║ ║ 2 ║ 2013-04-01 ║ new ║ 1 ║ ║ 3 ║ 2013-10-01 ║ new ║ 1 ║ ╚══════════╩════════════╩═══════╩═══════╝ ```
SQL: flag if value exists in the past
[ "", "sql", "sql-server", "" ]
I wrote the below query which shows me ApplicationIDs associated with two specific tables. I need the results to return the number of times each Applications.AppID appears in those tables next to the row with the application name. I've used distinct because in my results I only want the name to appear once but have a number next to it indicating how many times it has been used. Examples below. I've written count conditions before but only for single tables. ``` SELECT 0 AppId , 'Select an Application' ApplicationName union all select .1 , '--All--' union all SELECT DISTINCT Applications.AppId , Applications.ApplicationName FROM ImpactedApplications , SupportingApplications JOIN applications ON SupportingApplications.Appid = applications.appid JOIN ImpactedApplications Apps on SupportingApplications.AppId = Applications.AppId ``` Returns something like this: ``` 0.0 Select an Application 0.1 --All-- 12.0 APP A 59.0 APP B 60.0 APP C 71.0 APP D 74.0 APP E 121.0 APP F 124.0 APP G 130.0 APP H ``` I want it to return something like this: ``` 0.0 Select an Application 0.1 --All-- 12.0 APP A 1 59.0 APP B 2 60.0 APP C 1 71.0 APP D 4 74.0 APP E 3 121.0 APP F 1 124.0 APP G 2 130.0 APP H 2 ``` Any help is appreciated, thank you. Adding Results from Help Query ``` 12 APP A 17161 59 APP B 51483 60 APP C 85805 71 APP D 17161 ```
First, you do realize that order is **unspecified** unless you order the result set using `order by`? That means there is no guarantee that the first two selects in your `union all` will come first. So, let's strip those two out as they are really extraneous to the actual problem. Let us consider the core `select`: ``` select distinct Applications.AppId , Applications.ApplicationName from ImpactedApplications , SupportingApplications JOIN applications ON SupportingApplications.Appid = applications.appid JOIN ImpactedApplications Apps on SupportingApplications.AppId = Applications.AppId ``` and dissect it. **Problem #1.** `select distinct` is often a *code smell* indicating that you don't have the correct join criteria or you don't correctly understand the cardinality of the relationships involved. **Problem #2.** Indeed, this is the case. You are mixing old-school, pre-ISO/ANSI joins with ISO/ANSI joins. Since the first two tables in the FROM clause are joined pre-ISO/ANSI style, and you have no `where` clause with criteria to join them, the above `select` statement is exactly identical to ``` select distinct a.AppId , a.ApplicationName from ImpactedApplications ia cross join SupportingApplications sa join applications a on sa.Appid = a.appid join ImpactedApplications Apps on sa.AppId = a.AppId ``` I'm pretty sure you didn't intend to generate the Cartesian product of the 2 tables. You haven't described the table schema, but my suspicion, from your problem statement > I need the results to return the number of times each Applications.AppID appears in those tables next to the row with the application name.
is that you want something more along these lines: ``` select AppId = a.AppId , AppName = a.ApplicationName , ImpactedCount = coalesce( ia.Cnt , 0 ) , SupportingCount = coalesce( sa.Cnt , 0 ) , Total = coalesce( ia.Cnt , 0 ) + coalesce( sa.Cnt , 0 ) from Applications a left join ( select AppId = t.AppId , Cnt = count(*) from ImpactedApplications t group by t.AppId ) ia on ia.AppId = a.AppId left join ( select AppId = t.AppId , Cnt = count(*) from SupportingApplications t group by t.AppId ) sa on sa.AppId = a.AppId ``` If you want to restrict the results to just those rows with non-zero values, you could change the `left join` clauses to `join`, but that would mean you would only get those rows that have a non-zero value for both. Instead, add a where clause to restrict the result set: ``` where sa.Cnt > 0 OR ia.Cnt > 0 ``` In addition to filtering out any rows where both counts are zero, it also removes rows where both counts are `null`, indicating that no match occurred in the `left join`.
DISTINCT is logically equivalent to a GROUP BY: ``` SELECT Applications.AppId, Applications.ApplicationName ,COUNT(*) FROM SupportingApplications INNER JOIN applications ON SupportingApplications.Appid = applications.appid INNER JOIN ImpactedApplications as Apps on SupportingApplications.AppId = Applications.AppId GROUP BY Applications.AppId, Applications.ApplicationName ```
How can I count and show the number of times a row appears within multiple joined tables?
[ "", "sql", "sql-server-2008", "" ]
For experimental purposes only: I would like to build a query that does not read data from any table, but instead queries data provided in the query itself. Like: ``` select numbers.* from (1, 2, 3) as numbers; ``` or ``` select numbers.* from (field1 = 1, field2 = 2, field3 = 3) as numbers; ``` so I can do things like ``` select numbers.* from (field1 = 1, field2 = 2, field3 = 3) as numbers where numbers.field1 > 1; ``` If the solution is specific to a database engine, that would be interesting too.
If you wanted the values to be on separate rows instead of three fields of the same row, the method is the same, just one row per value linked with a `union all`. ``` select * from( select 1 as FieldName union all select 2 union all select 3 union all select 4 union all -- we could continue this for a long time select 5 -- the end ) as x; select numbers.* from( select 1 ,2, 3 union select 3, 4, 5 union select 6, 7, 8 union select 9, 10, 11 -- we could continue this for a long time union select 12, 13, 14 -- the end ) as numbers; ``` This works with MySQL *and* Postgres (and most others as well). [Edit] Use `union all` rather than just `union` as you do not need to remove duplicates from a list of constants. Give the field(s) in the first `select` a meaningful name. Otherwise, you can't specify a specific field later on: `where x.FieldName = 3`. If you don't provide meaningful names for the fields (as in the second example), the system (at least MySQL where this was tested) will assign the name "1" for the first field, "2" as the second and so on. So, if you want to specify one of the fields, you have to write expressions like this: ``` where numbers.1 = 3 ```
Use the `values` row constructor: ``` select * from (values (1),(2),(3)) as numbers(nr); ``` or using a CTE: ``` with numbers (nr) as ( values (1),(2),(3) ) select * from numbers where nr > 2; ``` **Edit:** I just noticed that you also tagged your question with `mysql`: the above will not work with MySQL, only with Postgres (and a few other DBMS)
SQL, build a query using data provided in the query itself
[ "", "mysql", "sql", "postgresql", "" ]
I'm working on creating a report page for an ASP.NET project; the URL to that page is ``` 'localhost/reports' ``` But for some reason, the SQL Server reporting page is turned on and now there is a URL conflict. So my URL now redirects me to the following URL: ``` 'localhost/Reports/Pages/Folder.aspx' ``` It seems the behavior of '/reports' is overridden by the SQL Reporting Services site. I would like to know how to turn the DB service (report browsing) site on/off manually or by a DDL change.
You can turn off the entire reporting service in the `SQL Server Configuration Manager`, or you can go to the `Reporting Services Configuration Manager` and change the `Report Manager URL` to point to a different location. The default is /Reports.
If you look under "Internet Information Services (IIS) Management", I usually just search for it from start menu, you will see an app pool for reporting services/SSRS with a Reports entry under it. You will probably also see an entry under Default Website. Unfortunately different versions of IIS manager will look different. If you find the "Reports" entry under the Websites/Sites entry, and right click you may have a Properties or Manage Virtual Directory option. Basically IIS handles routing requests to different web applications. So each web application that wants to handle a certain base URL must be configured here. Often times this is done without your knowledge through a deployment or install process, but for more complex setups you might modify IIS directly. Basically you have a situation where your app and SSRS both have a path in IIS that responds to /reports/. As JodyT answered, the correct solution is to modify the Reporting Manager URL(which likely in turn modifies IIS). I wouldn't be surprised if you had to reboot or run IISReset to get this change picked up. It's hard to say sometimes, but hopefully not. ![enter image description here](https://i.stack.imgur.com/KC2LL.jpg)
How does /reports redirect to the SQL server reporting page
[ "", "asp.net", "sql", "sql-server", "" ]
Query: ``` select * from v_fi_trans_logs where case_num like '105' AND TRANS_DT=CAST('09-JUL-2014' AS DATE); ``` This doesn't return any records. But if I do a select such as ``` select TRANS_DT,CAST('09-JUL-2014' AS DATE) from v_fi_trans_logs where case_num like '105' ``` it returns values that are equal: ``` TRANS_DT CAST('09-JUL-2014' AS DATE) ---------------------------------------------- 09-JUL-14 09-JUL-14 09-JUL-14 09-JUL-14 09-JUL-14 09-JUL-14 09-JUL-14 09-JUL-14 09-JUL-14 09-JUL-14 09-JUL-14 09-JUL-14 09-JUL-14 09-JUL-14 ``` Can anyone help me out with the issue here? Any input will be of great help.
I found the solution by trial and error. It worked with: ``` select * from v_fi_trans_logs where case_num like '105' AND TRANS_DT LIKE CAST('09-JUL-2014' AS DATE); ``` That is, replacing **"="** with **"LIKE"** solves the issue.
I think that the problem is that you are using : ``` like '105' AND TRANS_DT=CAST('09-JUL-2014' AS DATE); ``` You can use like as a comparation of numbers and strings… only once. But if you are trying to compare ´105´ AND TRANS\_DT=CAST('09-JUL-2014' AS DATE) logically. You cannot do it. The translation will be ( 7 & true ) for example. Check out this solution the error is very similar. [How can I use "OR" condition in MySQL CASE expression?](https://stackoverflow.com/questions/24630603/how-can-i-use-or-condition-in-mysql-case-expression/24630776#24630776)
Select Query in Oracle giving issue
[ "", "sql", "oracle", "" ]
Table1: ``` BrID HQID ------------ Br1 HQ1 HQ1 HQ2 Br2 HQ1 Br3 HQ2 ``` Table2: ``` ID Name BrID IDt2 ------------------------------ 11 OthName11 Br1 22 HQName111 HQ1 33 HQName222 HQ2 44 OthName22 Br2 55 OthName33 Br3 ``` I need to UPDATE Table2.IDt2 for each row from Table1 that has an HQID value, using the Table2.ID connected by Table1.HQID = Table2.BrID. In other words, for the first row: ``` SELECT HQID FROM Table1 WHERE BrID = 'Br1' SELECT ID WHERE Table2.BrID = HQID (from previous SELECT) UPDATE Table2 SET IDt2 = ID(from previous SELECT) WHERE Table2.BrID = 'Br1' (from first SELECT) ``` This should be the resulting Table2: ``` ID Name BrID IDt2 ------------------------------ 11 OthName11 Br1 22 22 HQName111 HQ1 33 HQName222 HQ2 44 OthName22 Br2 22 55 OthName33 Br3 33 ``` Is it possible to do it in one UPDATE? How else can I do this?
You just need multiple `JOIN` statements: ``` UPDATE a SET a.IDT2 = c.ID FROM Table2 a JOIN Table1 b ON a.BRID = b.BRID JOIN TAble2 c ON b.HQID = c.BRID ``` Demo: [SQL Fiddle](http://sqlfiddle.com/#!6/4784b/5/2)
Try this: ``` UPDATE A SET A.IDt2 = A1.ID FROM table2 A INNER JOIN table1 T1 ON A.BrID = T1.BrId INNER JOIN table2 A1 ON T1.HQID = A1.BrId; ```
UPDATE and SELECT to/from the same table
[ "", "sql", "sql-server", "select", "join", "sql-update", "" ]
I find myself needing to retrieve matches for, on average, ~1.5m rows from a remote database. There are two tables (ITEM1 and ITEM2) that have dated item information. There should always be at least one record for an item in ITEM1, and there may be 0 to many records for the same item in ITEM2. I have to find the latest record from either table, and if it exists in ITEM2, use that information instead of ITEM1. #TEMPA is the table that has the initial ~1.5m ItemNumbers. Below is the query: ``` SELECT GETDATE() AS DateElement, A.SourceStore, COALESCE(FR.original_cost,CO.original_cost) AS Cost FROM #TEMPA A INNER JOIN REMOTEDB.ITEM1 CO ON CO.item_id = A.ItemNumber AND CO.month_ending >= (SELECT MAX(month_ending) FROM REMOTEDB.ITEM1 CO2 WHERE CO2.item_id = A.ItemNumber) LEFT JOIN REMOTEDB.ITEM2 FR ON FR.item_id = A.ItemNumber AND FR.month_ending >= (SELECT MAX(month_ending) FROM REMOTEDB.ITEM2 FR2 WHERE FR2.item_id = A.ItemNumber) WHERE CO.item_id IS NOT NULL OR FR.item_id IS NOT NULL ``` There are unique clustered indexes on item\_id and month\_ending on both ITEM tables. I realize the subqueries are probably a big performance hit, but I can't think of any other way to do it. Each item could potentially have a different max month\_ending date. Currently it returns the correct information, but it takes ~2.6 hrs to do so. Any help in optimizing this query to perform better would be appreciated. Edit: I should mention the query is also being run READ UNCOMMITTED already. I tried both answer queries using ROW\_NUMBER and they both ran in ~20 minutes on the remote server itself. Using my original query it finishes in ~2 minutes. My original query runs in ~17 minutes over linked server. I cancelled the other queries once they went over an hour. Thoughts? 
Answer Queries: <http://content.screencast.com/users/CWhittem/folders/Jing/media/ed55352b-9799-4dec-94f0-764e2670884f/2014-07-09_0957.png> Original Query: <http://content.screencast.com/users/CWhittem/folders/Jing/media/4991aa7d-a05c-4fb1-afad-52b07f896d5e/2014-07-09_1014.png> Thanks!
So after much testing and experimentation I have come up with the following that outperforms everything else I have tried: ``` SELECT DISTINCT oInv.Item_ID, oInv.Month_Ending, oInv.Original_Cost FROM ( SELECT Item_ID, Month_Ending, Original_Cost FROM ho_data.dbo.CO_Ho_Inven UNION ALL SELECT Item_ID, Month_Ending, Original_Cost FROM ho_data.dbo.FR_Ho_Inven ) OInv INNER JOIN ( SELECT UInv.Item_ID, MAX(UInv.Month_ending) AS Month_Ending, MAX(original_cost) AS original_cost FROM ( SELECT Item_ID, Month_Ending, original_cost FROM ho_data.dbo.CO_Ho_Inven UNION ALL SELECT Item_ID, Month_Ending, original_cost FROM ho_data.dbo.FR_Ho_Inven ) UInv GROUP BY UInv.Item_ID ) UINv ON OInv.Item_ID = UInv.Item_ID AND OInv.Month_Ending = UInv.Month_Ending AND OInv.original_cost = UINv.original_cost ```
Rewrite the Correlated Subqueries using MAX with ROW\_NUMBERs: ``` SELECT GETDATE() AS DateElement, A.SourceStore, COALESCE(FR.original_cost,CO.original_cost) AS Cost FROM #TEMPA A INNER JOIN ( SELECT * FROM ( SELECT original_cost, item_id, ROW_NUMBER() OVER (PARTITION BY item_id ORDER BY month_ending DESC) AS rn FROM REMOTEDB.ITEM1 ) as dt WHERE rn = 1 ) AS CO ON CO.item_id = A.ItemNumber LEFT JOIN ( SELECT * FROM ( SELECT original_cost, item_id, ROW_NUMBER() OVER (PARTITION BY item_id ORDER BY month_ending DESC) AS rn FROM REMOTEDB.ITEM2 ) as dt WHERE rn = 1 ) as FR ON FR.item_id = A.ItemNumber ```
Optimizing A Slow Complicated Remote SQL Query
[ "", "sql", "sql-server", "database", "performance", "query-optimization", "" ]
As we know, in Oracle no data change is committed until we call a commit. I want similar behaviour with SQL Server: I want to run action queries on the db without the data being changed permanently, just letting me see the data before and after my action queries. Alternatively, is there any way to mirror the database on the same server so that I can test my queries on a secondary database without any impact on the primary database?
You can use the [`IMPLICIT_TRANSACTIONS`](http://msdn.microsoft.com/en-gb/library/ms187807.aspx) setting to achieve a similar function to what Oracle does: > Transactions that are automatically opened as the result of this setting being `ON` must be explicitly committed or rolled back by the user at the end of the transaction. Otherwise, the transaction and all of the data changes it contains are rolled back when the user disconnects. You can, if you so choose, change a setting in Management Studio so that this setting is always in force when you open new query windows: ![enter image description here](https://i.stack.imgur.com/k7EmR.png)
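The same preview-then-discard workflow can be sketched with SQLite's transactions (any table and values here are illustrative):

```python
import sqlite3

# Run an action query inside a transaction, inspect the result,
# then ROLLBACK so the data is never changed permanently.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, val TEXT)")
con.execute("INSERT INTO t VALUES (1, 'before')")
con.commit()

con.execute("UPDATE t SET val = 'after' WHERE id = 1")    # not committed yet
preview = con.execute("SELECT val FROM t").fetchone()[0]  # 'after' inside the txn
con.rollback()                                            # discard the change
final = con.execute("SELECT val FROM t").fetchone()[0]    # back to 'before'
print(preview, final)  # after before
```

As in the answer above, the uncommitted change is only visible from the connection that made it; other connections keep seeing the committed data.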
You can do it within the same query window: write `BEGIN TRAN`, then do not commit until you have confirmed the results. You can execute any number of queries within that window, which will show you a preview of your changes before committing the transaction. This only works within the same query window; you cannot view the preview from another query window.
How to use sql server database in preview mode?
[ "", "sql", "sql-server", "" ]
I have quite a large DB, but I've simplified it for the purpose of this question: ![A sample chunk of my DB](https://i.stack.imgur.com/YEpNj.png) Basically, every time a user clicks something on my site, it gets logged as a row in my DB: the `UserID` field is a cookie that is used to identify the user, and `Stuff` and `MoreStuff` are data about the click. Note that, obviously, these are likely to be different every time; "Foo" and "Bar" are just placeholders. What I want to do is this: with an SQL query, filter out either all of the first visits, or all of the repeated visits (I assume if I can do one, I can invert my filter for the other). So, if I were to filter out all of the repeat visits on my sample, I'd get this: ![The filtered DB](https://i.stack.imgur.com/Ba1jF.png) with green representing the selected rows and red representing the rejected ones. How can I do this with just SQL?
You could use the aggregate function `COUNT()` together with a `HAVING` clause, like: ``` SELECT userID, COUNT(userID) FROM tbl GROUP BY userID HAVING COUNT(userID) >= 2 ``` You can then re-use the above query if you want to find the repeat visitors, like: ``` SELECT * FROM tbl WHERE EXISTS ( SELECT userID, COUNT(userID) FROM tbl GROUP BY userID HAVING COUNT(userID) >= 2 ) GROUP BY userID ``` If you want to see first-time visitors you could change `HAVING COUNT(userID) >= 2` to `HAVING COUNT(userID) = 1`, or of course use `NOT EXISTS` instead of `EXISTS`. See `Demo`
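The `HAVING` split between repeat and first-time visitors can be sketched with SQLite (table and values are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE clicks (userID INTEGER, stuff TEXT);
INSERT INTO clicks VALUES (1,'a'),(1,'b'),(2,'c'),(3,'d'),(3,'e'),(3,'f');
""")
repeat = [r[0] for r in con.execute(
    "SELECT userID FROM clicks GROUP BY userID"
    " HAVING COUNT(userID) >= 2 ORDER BY userID")]
first_time = [r[0] for r in con.execute(
    "SELECT userID FROM clicks GROUP BY userID"
    " HAVING COUNT(userID) = 1 ORDER BY userID")]
print(repeat, first_time)  # [1, 3] [2]
```

Flipping the `HAVING` condition inverts the filter, which is exactly the one-query-for-each-direction idea from the question.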
You can do this with a `group by` and `min(timestamp)` query. Then put that in a view or temporary aliased table and join. Example - get each user's first visit timestamp: ``` SELECT userID, min(TIMESTAMP) FROM info GROUP BY userID ``` Example - get all the information: ``` SELECT * FROM info t, (SELECT userID, min(TIMESTAMP) as timestamp FROM info GROUP BY userID) AS t2 WHERE t.userID = t2.userID AND t.TIMESTAMP = t2.TIMESTAMP; ``` SQLFiddle: <http://sqlfiddle.com/#!2/6fd238/3>
Separate new and returned users in database
[ "", "mysql", "sql", "database", "" ]
![enter image description here](https://i.stack.imgur.com/ikeiq.png) From the above table I need, in a single query, the **event\_ids** that **tag\_id** 21 and **tag\_id** 5 are both involved in. I want only those event\_ids that both 21 and 5 visited.
If you need events that have both of these tags, then you can do it like this: ``` select * from table where tag_id IN (21,5) group by event_id having count(distinct tag_id) = 2 ```
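This "relational division" pattern can be sketched with SQLite (table and values are illustrative); note that `COUNT(DISTINCT tag_id)` is what keeps an event with the same tag twice from slipping through:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE event_tags (event_id INTEGER, tag_id INTEGER);
INSERT INTO event_tags VALUES (100,21),(100,5),(101,21),(102,5),(102,5);
""")
events = [r[0] for r in con.execute("""
    SELECT event_id FROM event_tags
    WHERE tag_id IN (21, 5)
    GROUP BY event_id
    HAVING COUNT(DISTINCT tag_id) = 2
""")]
print(events)  # [100]
```

Event 102 has tag 5 listed twice, but only event 100 has both distinct tags, so only 100 is returned.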
``` SELECT * FROM TABLE_NAME WHERE tag_id=21 OR tag_id=5; ``` OR ``` SELECT * FROM TABLE_NAME WHERE tag_id IN (21,5); ```
Find all possible event_id that were attended by tag_id 5 and 21
[ "", "mysql", "sql", "" ]
I want to add a string to my SQL database using JDBC. The problem is that whenever the string contains a double quote, the SQL command is interpreted completely differently and a "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax" error gets thrown. For example, String sql = "INSERT INTO tableName VALUES (\""+ string + "\")"; if string = "abc", then sql = INSERT INTO tableName VALUES ("abc") But if string = "ab\"cd", then sql = INSERT INTO tableName VALUES ("ab"cd") And hence for a string that contains a double quote, the SQL command is interpreted completely differently. How can I add such a string to the database? PS. I cannot afford to change the double quote to a single quote. There may be other hacks to add such a string, but I want to know if there really is no way of adding such a string directly.
I tried adding the string by escaping the double quote but that didn't work. The only way I could add the string is by using PreparedStatement instead of a Statement class. ``` String string = "abc\"abc"; String sql = "INSERT INTO tableName VALUES(?)"; connection = DriverManager.getConnection(DB_URL, USER, PASS); preparedStatement = connection.prepareStatement(sql); preparedStatement.setString(1, string); preparedStatement.executeUpdate(); ``` This code segment successfully adds the string containing the double quote into the database.
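The same point holds in any language with parameterised statements: the driver binds the value, so no quoting or escaping is needed. A SQLite sketch of the idea (table name is illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tableName (col TEXT)")
tricky = 'ab"cd'  # contains a double quote
con.execute("INSERT INTO tableName VALUES (?)", (tricky,))  # placeholder, no escaping
stored = con.execute("SELECT col FROM tableName").fetchone()[0]
print(stored)  # ab"cd
```

The `?` placeholder plays the same role as the `?` in the JDBC `PreparedStatement` above, and also protects against SQL injection.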
You need to escape the string value to make a string literal in the query. When you are using quotation marks to delimiter the string: * A backslash (`\`) in the string should be replaced by two backslashes. * A quotation mark (`"`) in the string should be replaced by a backslash and a quotation mark. If you would use apostrophes (`'`) to delimiter the string (which is more common in SQL), you would escape the apostrophes instead of the quotation marks. If possible, you should use parameterised queries, instead of concatenating the values into the queries.
How to add a string containg a double quote to a sql database?
[ "", "mysql", "sql", "database", "jdbc", "" ]
I have two columns: * id (nchar) * name (varchar) Something simple, but I just can't figure out how. ``` select * from plant.carrier where carrier_id like 'NT' ``` This gives an expression error ``` SELECT * FROM plant.carrier WHERE RTrim(CAST([carrier_id] As VarChar)) = 'NT' ``` id column ![enter image description here](https://i.stack.imgur.com/muIsJ.png) ``` CREATE TABLE [plant].[carrier]( [carrier_id] [nchar](4) NOT NULL, [name] [nvarchar](30) NOT NULL, CONSTRAINT [PK_carrier] PRIMARY KEY CLUSTERED ( [carrier_id] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY] ) ON [PRIMARY] GO ```
To compare nchar and varchar you need to convert the char to varchar and compare two varchars: ``` RTrim(CAST([nCharColumn] As VarChar)) ``` **Updated** Here is a full example: ``` create table test ( f nchar(20)) insert into test values('nt') insert into test values('st') declare @s varchar(20) = 'nt' select * from test where rtrim(cast(f as varchar)) = @s ``` **Updated 2** ``` CREATE TABLE [carrier]( [carrier_id] [nchar](4) NOT NULL, [name] [nvarchar](30) NOT NULL, CONSTRAINT [PK_carrier] PRIMARY KEY CLUSTERED ( [carrier_id] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY] ) ON [PRIMARY] GO insert into carrier values('nt', 'nt') insert into carrier values('st', 'st') declare @s varchar(20) = 'nt' select * from carrier where rtrim(cast([carrier_id] as varchar)) = @s ``` This works fine for me too. It returns one record.
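The underlying issue is that a fixed-width `nchar(4)` value is blank-padded, so `'NT  '` is not equal to `'NT'` as plain strings. A small SQLite sketch of why the `RTRIM` matters (SQLite has no padded char type, so the padding is simulated):

```python
import sqlite3

con = sqlite3.connect(":memory:")
padded = 'NT  '  # stands in for an nchar(4) value padded with blanks
match = con.execute("SELECT RTRIM(?) = ?", (padded, 'NT')).fetchone()[0]
raw = con.execute("SELECT ? = ?", (padded, 'NT')).fetchone()[0]
print(match, raw)  # 1 0
```

Without the trim the comparison fails; with it the trailing blanks are removed and the values compare equal.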
If you [**Read The Documentation**](http://msdn.microsoft.com/en-us/library/ms191530.aspx), you'll see that conversion between `[var]char` and `n[var]char` is implicit, so saying something like ``` where x.some_char_column = y.some_nchar_column ``` should work just fine...unless the columns differ in their defined collation. Then you might have a problem. ![T-SQL data type conversion table](https://i.stack.imgur.com/ahYxl.gif)
SQL Comparing nchar to string
[ "", "sql", "sql-server", "t-sql", "" ]
I have a table with a column1 nvarchar(50) null. I want to insert this into a more 'tight' table with a nvarchar(30) not null. My idea was to insert a derived column task between source and destination task with this expression: Replace column1 = **(DT\_WSTR,30)Column1** I get the "truncation may occur error" and I am not allowed to insert the data into the new tighter table. Also I am 100% sure that no values are over 30 characters in the column. Moreover I do not have the possibility to change the column data type in the source. What is the best way to create the ETL process?
Consider changing the truncation error disposition of the Derived Column component within your Data Flow. By default, a truncation will cause the Derived Column component to fail. You can configure the component to ignore or redirect rows which are causing a truncation error. To do this, open the Derived Column Transformation editor and click the 'Configure Error Output...' button in the bottom-left of the dialog. From here, change the setting in the 'Truncation' column for any applicable columns as required. ![Row Error Disposition Settings](https://i.stack.imgur.com/nsbFr.gif) Be aware that any data which is truncated for columns ignoring failure will not be reported by SSIS during execution. It sounds like you've already done this, but it's important to be sure you've analysed your data as it currently stands and taken into consideration any possible future changes to the nature of the data before disabling truncation reporting.
JotaBe recommended using a data conversion transformation. Yes, that is another way to achieve the same thing, but it will also error out if truncation occurs. Your way *should* work (I tried it), provided the input data really is less than 30 characters. You could modify your derived column expression to ``` (DT_WSTR,30)Substring([Column1], 1, 30) ```
Insert data via SSIS package and different datatypes
[ "", "sql", "sql-server", "ssis", "type-conversion", "" ]
I have a column named "`Firstname`" which contains ``` Firstname Lastname ``` and I want to create another column called `Lastname`, cut the last name out of the "`Firstname`" column and paste it into the "`Lastname`" column, for multiple rows. before ``` Firstname --------- Bob Weller ``` after ``` Firstname | Lastname ----------|--------- Bob | Weller ``` **edit:** after the last name there might be a number or other strings such as ( ) etc., which should also go into the Lastname column
This query splits it into FirstName and LastName at the first space. ``` SELECT FirstNameLastName, substring(FirstNameLastName, 1, charindex(' ', FirstNameLastName) - 1) as FirstName, substring(FirstNameLastName, charindex(' ', FirstNameLastName) + 1, len(FirstNameLastName)) as LastName from [Table] ``` To modify your table and move the old column into new columns, you can do the following: Alter your table first to create your new columns: ``` Alter Table [TableName] Add FirstName varchar(50), LastName Varchar(50) ``` Then use an update statement to move the values into it ``` Update [TableName] Set FirstName = substring(FirstNameLastName, 1, charindex(' ', FirstNameLastName) - 1), LastName = substring(FirstNameLastName, charindex(' ', FirstNameLastName) + 1, len(FirstNameLastName)) ``` This should move the values from the old column into the two new columns.
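The split-at-first-space idea can be checked with SQLite, where `instr` plays the role of `CHARINDEX` and `substr` the role of `SUBSTRING` (table and data are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE p (fullname TEXT)")
con.execute("INSERT INTO p VALUES ('Bob Weller')")
first, last = con.execute("""
    SELECT substr(fullname, 1, instr(fullname, ' ') - 1),
           substr(fullname, instr(fullname, ' ') + 1)
    FROM p
""").fetchone()
print(first, last)  # Bob Weller
```

As in the T-SQL version, this assumes every row contains at least one space between the two parts.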
If you want then split into 2 *columns* not *rows* then something like this should do: ``` SELECT SUBSTRING(WholeName, 1, CHARINDEX(' ', WholeName) - 1) AS Firstname, SUBSTRING(WholeName, CHARINDEX(' ', WholeName) + 1, len(FirstName)) AS LastName FROM MyTable ``` NOTE: This is assuming that EVERY record has a space between first and last names.
How to split a row into two strings and copy the second string in a separate column in SQL?
[ "", "sql", "string", "split", "" ]
I made a little pl/pgsql script to rename some sequences (prefixes adding) and set their schema to 'public'. However I don't understand why but my 'ELSE' instructions are executed only once in the loop, which is not logical because I have many rows whose value of 'nspname' is other than 'docuprocess' : ``` CREATE OR REPLACE FUNCTION move_schemas_to_public(target_schemas text[]) RETURNS integer AS $procedure$ DECLARE rec RECORD; sql text; newname text; nbreq integer := 0; tabsize integer := array_length(target_schemas, 1); i integer := 1; debug boolean := false; BEGIN -- [...] FOR rec in select nspname, c.relname from pg_class c inner join pg_namespace ns on (c.relnamespace = ns.oid) where c.relkind = 'S' and ns.nspname = any(target_schemas) order by 1, 2 LOOP IF rec.nspname = 'docuprocess' THEN newname := rec.relname; ELSE -- Why these instructions are executed only once : ----- newname := rec.nspname||'_'||rec.relname; sql := 'ALTER SEQUENCE '||rec.nspname||'.'||rec.relname||' RENAME TO '||newname; RAISE NOTICE '%', sql; IF debug is not true THEN EXECUTE sql; END IF; nbreq := nbreq + 1; -------------------------------------------------------- END IF; sql := 'ALTER SEQUENCE '||rec.nspname||'.'||newname||' SET SCHEMA public'; RAISE NOTICE '%', sql; IF debug is not true THEN EXECUTE sql; END IF; nbreq := nbreq + 1; END LOOP; -- [...] 
RETURN nbreq; END; select move_schemas_to_public( -- schemas list ARRAY[ 'docufacture', 'docuprocess', 'formulaire', 'notification' ] ); ``` Here is the result for the loop's SQL query : ``` [nspname];[relname] "docufacture";"exportdoc_idexportdoc_seq" "docufacture";"tableau_idcolonne_seq" "docuprocess";"dp_action_champsdocuged_seq" "docuprocess";"dp_action_commentaire_seq" "docuprocess";"dp_action_docuged_seq" "docuprocess";"dp_action_email_id_seq" "docuprocess";"dp_action_formulaire_seq" "docuprocess";"dp_action_id_seq" "docuprocess";"dp_action_imprimer_id_seq" "docuprocess";"dp_action_lancer_processus_id_seq" "docuprocess";"dp_action_lancer_programme_id_seq" "docuprocess";"dp_action_seq" "docuprocess";"dp_action_transfert_fichier_id_seq" "docuprocess";"dp_deroulement_etape_seq" "docuprocess";"dp_deroulement_processus_seq" "docuprocess";"dp_etape_seq" "docuprocess";"dp_indisponibilite_seq" "docuprocess";"dp_intervenant_seq" "docuprocess";"dp_processus_seq" "docuprocess";"dp_type_action_seq" "formulaire";"champ_id_seq" "formulaire";"fond_id_seq" "formulaire";"formulaire_id_seq" "formulaire";"modele_id_seq" "notification";"notification_id_seq" ``` Thanks in advance for precious help.
I finally found the source of the problem! At the beginning of my function (masked part "[...]"), I have a loop which renames tables in the schemas passed as parameters, and moves these tables to schema 'public'. At this point, sequences owned by tables present in the 'docufacture' and 'notification' schemas are automatically moved into the public schema. So, I just have to rename the sequences for these schemas, not move them. However I don't really understand why the sequences of 'docuprocess' and 'formulaire' aren't moved in the same manner! Indeed, if I try to execute the following request after moving the tables... ``` ALTER SEQUENCE docufacture.exportdoc_idexportdoc_seq RENAME TO docufacture_exportdoc_idexportdoc_seq ``` ...I get this error: ``` ERROR: relation "docufacture.exportdoc_idexportdoc_seq" does not exist ``` ...because "exportdoc_idexportdoc_seq" has been moved to the public schema. And if I try to execute the following request after moving the tables... ``` ALTER SEQUENCE exportdoc_idexportdoc_seq SET SCHEMA public; ``` ...I get this error: ``` ERROR: cannot move an owned sequence into another schema ``` If someone has an explanation for that, it would be really appreciated. Thanks a lot!
**EDIT :** So, one solution is to proceed in 3 steps : * Rename all sequences * Move tables * Move remaining sequences Here is the code : ``` CREATE OR REPLACE FUNCTION move_schemas_to_public(target_schemas text[]) RETURNS integer AS $procedure$ DECLARE rec RECORD; sql text; newname text; nbreq integer := 0; tabsize integer := array_length(target_schemas, 1); i integer := 1; debug boolean := false; BEGIN SET lc_messages TO 'en_US.UTF-8'; -- sequences renamming FOR rec in select ns.nspname, c.relname from pg_class c inner join pg_namespace ns on (c.relnamespace = ns.oid) where c.relkind = 'S' and ns.nspname = any(target_schemas) LOOP IF rec.nspname != 'docuprocess' THEN newname := quote_ident(rec.nspname||'_'||rec.relname); sql := 'ALTER SEQUENCE '||quote_ident(rec.nspname)||'.'||quote_ident(rec.relname)||' RENAME TO '||newname; RAISE NOTICE '%', sql; IF debug is not true THEN EXECUTE sql; END IF; nbreq := nbreq + 1; END IF; END LOOP; -- END sequences -- tables FOR rec in SELECT table_schema, table_name from information_schema.tables where table_type = 'BASE TABLE' and table_schema = any(target_schemas) LOOP IF rec.table_schema = 'docuprocess' THEN newname := rec.table_name; ELSE newname := rec.table_schema||'_'||rec.table_name; sql := 'ALTER TABLE '||rec.table_schema||'.'||rec.table_name||' RENAME TO '||newname; RAISE NOTICE '%', sql; IF debug is not true THEN EXECUTE sql; END IF; nbreq := nbreq + 1; END IF; sql := 'ALTER TABLE '||rec.table_schema||'.'||newname||' SET SCHEMA public'; RAISE NOTICE '%', sql; IF debug is not true THEN EXECUTE sql; END IF; nbreq := nbreq + 1; END LOOP; -- END tables -- remaining sequences shifting FOR rec in select ns.nspname, c.relname from pg_class c inner join pg_namespace ns on (c.relnamespace = ns.oid) where c.relkind = 'S' and ns.nspname = any(target_schemas) LOOP sql := 'ALTER SEQUENCE '||quote_ident(rec.nspname)||'.'||quote_ident(rec.relname)||' SET SCHEMA public'; RAISE NOTICE '%', sql; IF debug is not true THEN EXECUTE sql; 
END IF; nbreq := nbreq + 1; END LOOP; -- END sequences -- [...] Move functions, drop empty schemas RETURN nbreq; END; $procedure$ LANGUAGE plpgsql; select move_schemas_to_public( -- schemas list ARRAY[ 'docufacture', 'docuprocess', 'formulaire', 'notification' ] ); ``` To finish, I would like to address special thanks to "Erwin Brandstetter" for his advanced help and advices.
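The rename step of the procedure above boils down to generating one `ALTER SEQUENCE ... RENAME TO ...` statement per sequence outside the kept schema. A hypothetical Python sketch of that statement-building logic (names are illustrative, and real code should use proper identifier quoting such as `quote_ident`):

```python
# Build the rename statements; sequences in keep_schema keep their names.
def rename_stmts(seqs, keep_schema="docuprocess"):
    out = []
    for schema, name in seqs:
        if schema == keep_schema:
            continue  # only renamed-and-moved later, name unchanged
        out.append('ALTER SEQUENCE "%s"."%s" RENAME TO "%s_%s"'
                   % (schema, name, schema, name))
    return out

stmts = rename_stmts([("docufacture", "exportdoc_idexportdoc_seq"),
                      ("docuprocess", "dp_action_seq")])
print(stmts)
```

Generating the statements first also makes it easy to log them (the `RAISE NOTICE` role) before executing anything.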
There is a space missing in this line after `RENAME TO`: ``` sql := 'ALTER SEQUENCE '||rec.nspname||'.'||rec.relname||' RENAME TO '||newname; ``` As a result, upon the first sequence that is not in schema `docuprocess` the `sql` statement is executed and raises an error which aborts the loop. Note also that you do not have to `ORDER BY` the `rec` query because you are evaluating record properties in the loop and not using the ordering of the qualifying records.
Issue with a control structure in pl/pgsql script loop
[ "", "sql", "postgresql", "plpgsql", "" ]
At wits' end with this: how do I pass a NULL value to my stored procedure within SSMS? I have set two of my parameters to optional, e.g. ``` --This is inside my storedproc @name varchar(50), @surname varchar(50) = null, @text varchar(255) = null, ``` Now to execute ``` --Now execute without the 'text' EXEC sp_bla 'Noob Question Name', 'Noob Surname' ```
``` EXEC sp_bla NULL, 'Noob Question Name', 'Noob Surname' ``` Basically, just pass NULL as parameter value in the required position
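The same positional idea can be seen with parameter binding in SQLite: an explicit NULL simply occupies its slot (the values are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
row = con.execute("SELECT ?, ?, ?",
                  (None, 'Noob Question Name', 'Noob Surname')).fetchone()
print(row)  # (None, 'Noob Question Name', 'Noob Surname')
```

`None` binds to SQL NULL, just as `NULL` fills the first positional parameter in the `EXEC` above.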
Try this. If you want to specify all three parameters: ``` EXEC sp_bla @name = 'Noob Question',@surname = 'Noob',@text = 'Newtext' ``` If you want to specify any two: ``` EXEC sp_bla @name = 'Noob Question',@surname = 'Noob' ``` or ``` EXEC sp_bla @name = 'Noob Question',@text = 'Text' ``` If you don't want to specify either of the other two: ``` EXEC sp_bla @name = 'Noob' ```
Pass explicit NULL value to stored procedure from T-SQL not from application
[ "", "sql", "sql-server", "database", "t-sql", "" ]
I'm working on a project for school where I have to work with asp.net and sql databases. I'm wondering if I could use this ``` SqlConnection connection = new SqlConnection(ConfigurationManager.ConnectionStrings["connectionNameWhateverItIs"].ConnectionString); SqlCommand command = new SqlCommand("functionName", connection); command.CommandType = CommandType.StoredProcedure; SqlParameter Parameters = command.CreateParameter(); //parameters //parameters connection.Open(); command.ExecuteNonQuery(); connection.Close(); ``` in many functions with different parameters, just by calling it one time only. Like keeping the user connected to the database all the time. Would this slow down the user's navigation? Is what I'm trying to do right?
You can certainly re-use that same code with different parameters; here's one way of doing that ``` public void RunMyFunction(IDictionary<string, object> parameters) { using (SqlConnection con = new SqlConnection(...)) using (SqlCommand cmd = new SqlCommand("functionName", con)) { cmd.CommandType = CommandType.StoredProcedure; cmd.Parameters.AddRange(parameters.Select(x => new SqlParameter(x.Key, x.Value)).ToArray()); con.Open(); cmd.ExecuteNonQuery(); } } ... RunMyFunction( new Dictionary<string, object>(2) { { "@param1", "value1" }, { "@param2", "value2" }, } ); RunMyFunction( new Dictionary<string, object>(2) { { "@param1", "value3" }, { "@param2", "value4" }, } ); ``` However, "*In many functions with different parameters **just by calling this one time only***", unfortunately, is not possible.
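The same dictionary-of-parameters helper pattern can be sketched in Python with sqlite3's named parameters (table and values are illustrative):

```python
import sqlite3

def run(con, sql, params):
    # re-usable helper: same code, different named parameters each call
    with con:  # commits on success, rolls back on error
        con.execute(sql, params)

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a TEXT, b TEXT)")
run(con, "INSERT INTO t VALUES (:p1, :p2)", {"p1": "value1", "p2": "value2"})
run(con, "INSERT INTO t VALUES (:p1, :p2)", {"p1": "value3", "p2": "value4"})
count = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 2
```

Each call opens, uses, and releases its own transaction, mirroring the open-per-call `using` blocks in the C# version.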
Yes, you can re-use your db connection. Keeping the connection open for re-use vs. re-establishing the connection every time will depend on your application. With lots of simultaneous users, it may be a good idea to re-establish a connection every time the db needs to be accessed, as keeping multiple connections open while not really doing anything may cause your application to consume more resources than it has to. If you are doing a lot of heavy db access, it may be a good idea to keep a connection open and re-use it... You will have to find / decide on a balance based on what you are doing... I personally always instantiate a new connection whenever I need to access the db in my apps... but I will group certain functions in the same stack to use the same connection object if it makes sense to do so...
Database connection on a class
[ "", "asp.net", "sql", "sql-server", "database", "" ]
I would like to align my column 'Gesamt': ``` SELECT k.KUNDENID AS ID, vorname AS Vorname, nachname AS Nachname, CONCAT(IFNULL(ROUND(SUM(Betrag), 2), 0), LPAD('Euro', 4, ' ')) AS Gesamt FROM kunde k LEFT JOIN verkauf v ON k.KUNDENID = v.KUNDENID GROUP BY k.KUNDENID; ``` **Output:** ``` +----+---------+-----------+-------------+ | ID | Vorname | Nachname | Gesamt | +----+---------+-----------+-------------+ | 1 | Homer | Simpson | 20.95Euro | | 2 | Doug | Heffernan | 120.50Euro | | 3 | Dexter | Morgan | 350.30Euro | | 4 | Sheldon | Cooper | 1500.00Euro | | 5 | Maurice | Moss | 0.00Euro | +----+---------+-----------+-------------+ ``` **So my desired output looks like:** ``` +----+---------+-----------+--------------+ | ID | Vorname | Nachname | Gesamt | +----+---------+-----------+--------------+ | 1 | Homer | Simpson | 20.95 Euro | | 2 | Doug | Heffernan | 120.50 Euro | | 3 | Dexter | Morgan | 350.30 Euro | | 4 | Sheldon | Cooper | 1500.00 Euro | | 5 | Maurice | Moss | 0.00 Euro | +----+---------+-----------+--------------+ ``` How can I align it correctly? Thanks for your help!
Simplify the concat by concatenating `'<space>Euro'` instead of `LPAD('Euro',4,' ')` and add a LPAD in the whole concat to make it shift right (you can decide based on your data how many characters are needed in order to align): ``` SELECT k.KUNDENID AS ID, vorname AS Vorname, nachname AS Nachname, LPAD( CONCAT(IFNULL(ROUND(SUM(Betrag), 2), 0), ' Euro') , 13, ' ') AS Gesamt FROM kunde k LEFT JOIN verkauf v ON k.KUNDENID = v.KUNDENID GROUP BY k.KUNDENID; ```
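The essential point, padding the whole concatenated string rather than just the `'Euro'` suffix, can be shown with a plain Python equivalent (field width of 13 is taken from the answer above):

```python
def gesamt(amount):
    # pad the whole "amount Euro" string, mirroring LPAD around the CONCAT
    return ("%.2f Euro" % amount).rjust(13)

print(repr(gesamt(20.95)))   # '   20.95 Euro'
print(repr(gesamt(1500.0)))  # ' 1500.00 Euro'
```

Padding only the suffix leaves the numbers ragged; padding the full string right-aligns every row to the same width.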
Move `LPAD()`, you need it to work on the whole result, not only on `Euro` : ``` SELECT k.KUNDENID AS ID, vorname AS Vorname, nachname AS Nachname, LPAD(CONCAT(IFNULL(ROUND(SUM(Betrag), 2), 0), ' Euro'), 15) AS Gesamt FROM kunde k LEFT JOIN verkauf v ON k.KUNDENID = v.KUNDENID GROUP BY k.KUNDENID; ``` **Result:** ``` +----+---------+-----------+-----------------+ | ID | Vorname | Nachname | Gesamt | +----+---------+-----------+-----------------+ | 1 | Homer | Simpson | 20.95 Euro | | 2 | Doug | Heffernan | 120.50 Euro | | 3 | Dexter | Morgan | 350.30 Euro | | 4 | Sheldon | Cooper | 1500.00 Euro | | 5 | Maurice | Moss | 0.00 Euro | +----+---------+-----------+-----------------+ ```
LPAD/RPAD: Align column
[ "", "mysql", "sql", "concatenation", "" ]
This may seem silly, but I am working on an old system where we don't have a separate table for projects. We have a table like this. ``` | PROJECTNAME | EMPID | |--------------------|-------| | casio calc new | 1 | | calc new | 2 | | tech | 3 | | financial calc new | 4 | | casio | 5 | ``` Now what we want is to select EmpID from the above table where ProjectName is either casio or calc. We have user input for multiple selections, like `casio, calc`. Meaning if the user inputs `casio, calc` it should find `ProjectName LIKE '%casio%' OR '%calc%'`. I really have no idea how this can be done. ``` SELECT * FROM ProjectDetails WHERE ProjectName LIKE 'casio, calc'; ``` I searched for [SQL LIKE with IN](https://stackoverflow.com/search?q=SQL%20LIKE%20with%20IN) but I couldn't find any solution. Does anyone have an idea how I can achieve this? Or any other approach to use? I am trying this on [this fiddle](http://sqlfiddle.com/#!3/33e57b/2).
Here you go. You can create a dynamic query with the help of the `REPLACE` function and a little bit of a *hack*, then use `EXEC` to execute that query. ``` DECLARE @str VARCHAR(MAX) SELECT @str='casio,calc' SELECT @str='LIKE ''%'+REPLACE(@str,',','%'' OR ProjectName LIKE ''%')+'%''' EXEC('SELECT * FROM ProjectDetails WHERE ProjectName ' + @str +''); ``` Output: ``` | PROJECTNAME | EMPID | |--------------------|-------| | casio calc new | 1 | | calc new | 2 | | financial calc new | 4 | | casio | 5 | ``` ### [SQL Fiddle Demo](http://sqlfiddle.com/#!3/33e57b/44) Thanks to [Joe G Joseph](https://stackoverflow.com/a/13529347/1369235) for his hint.
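Since the dynamic-SQL version splices user input straight into the statement, a safer variant worth sketching builds one `LIKE ?` per term and binds the values as parameters. A SQLite sketch (table, data, and the trusted column name are illustrative):

```python
import sqlite3

def like_any(column, csv):
    # one LIKE per comma-separated term, with bound parameters;
    # `column` is assumed to be a trusted identifier, not user input
    terms = [t.strip() for t in csv.split(",")]
    clause = " OR ".join("%s LIKE ?" % column for _ in terms)
    params = ["%" + t + "%" for t in terms]
    return clause, params

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ProjectDetails (ProjectName TEXT, EmpID INTEGER)")
con.executemany("INSERT INTO ProjectDetails VALUES (?, ?)",
                [("casio calc new", 1), ("tech", 3), ("casio", 5)])
clause, params = like_any("ProjectName", "casio, calc")
hits = con.execute("SELECT EmpID FROM ProjectDetails WHERE " + clause,
                   params).fetchall()
print(hits)  # [(1,), (5,)]
```

Only the fixed `LIKE ? OR LIKE ?` skeleton is built dynamically; the search terms themselves never touch the SQL string.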
The solution is to use several `LIKE` conditions: ``` SELECT * FROM ProjectDetails WHERE ProjectName LIKE '%casio%' OR ProjectName LIKE '%calc%'; ```
LIKE condition with comma separated values
[ "", "sql", "sql-server", "sql-like", "sql-in", "" ]
I'm wanting to return the total number of times an `exerciseID` is listed, but filter it so each `exerciseID` may only be counted once per date. For this reason I believe I cannot do a `group by date`. ``` id | exerciseid | date 1 | 105 | 2014-01-01 00:00:00 2 | 105 | 2014-02-01 00:00:00 3 | 105 | 2014-03-11 00:00:00 4 | 105 | 2014-03-11 00:00:00 5 | 105 | 2014-03-11 00:00:00 6 | 127 | 2014-01-01 00:00:00 7 | 127 | 2014-02-02 00:00:00 8 | 127 | 2014-02-02 00:00:00 // 105 = 5 total rows but 3 unique // 127 = 3 total rows but 2 unique $db->query("SELECT exerciseid as id, sum(1) as total FROM `users exercises` as ue WHERE userid = $userid GROUP BY exerciseid ORDER BY date DESC"); ``` Current Output: ``` Array ( [id] => 105 [date] => 2014-05-06 [total] => 5 ) Array ( [id] => 127 [date] => 2014-05-06 [total] => 3 ) ``` As you can see it's not merging the rows where the date and exerciseid are the same. Expected Result: ``` Array ( [id] => 105 [date] => 2014-05-06 [total] => 3 ) Array ( [id] => 127 [date] => 2014-05-06 [total] => 2 ) ```
For the v2.0 (edited) question: ``` select exerciseid , count(distinct date) as exercise_count from user_exercises group by exerciseid ; | EXERCISEID | EXERCISE_COUNT | |------------|----------------| | 54 | 1 | | 85 | 3 | | 420 | 2 | ``` see [this sqlfiddle](http://sqlfiddle.com/#!9/324d1/4)
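The `COUNT(DISTINCT date)` trick can be verified with SQLite using the sample data from the question (timestamps shortened to dates for brevity):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE ue (exerciseid INTEGER, d TEXT);
INSERT INTO ue VALUES (105,'2014-01-01'),(105,'2014-02-01'),
  (105,'2014-03-11'),(105,'2014-03-11'),(105,'2014-03-11'),
  (127,'2014-01-01'),(127,'2014-02-02'),(127,'2014-02-02');
""")
rows = con.execute("""
    SELECT exerciseid, COUNT(DISTINCT d) AS total
    FROM ue GROUP BY exerciseid ORDER BY exerciseid
""").fetchall()
print(rows)  # [(105, 3), (127, 2)]
```

This matches the expected result in the question: repeated rows on the same date collapse into one, while total rows would have given 5 and 3.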
If you want count how many group you have on group by : ``` $db->query("SELECT e.id as id, e.name, count(id) as total, ue.date FROM `users exercises` as ue LEFT JOIN `exercises` as e ON exerciseid = e.id WHERE ue.`userid` = $userid GROUP BY id ASC ORDER BY total DESC"); ``` else if you want take previous total for addition, create a procedure like this (I think there are errors in my procedure) ``` CREATE PROCEDURE name DECLARE record your_table%ROWTYPE; nb int DEFAULT 0; BEGIN FOR record IN SELECT e.id as id, e.name as name, count(id) as nbid, ue.date as date FROM `users exercises` as ue LEFT JOIN `exercises` as e ON exerciseid = e.id WHERE ue.`userid` = $userid GROUP BY id ASC ORDER BY total DESC LOOP set nb := nb + record.nbid; SELECT record.id,record.name,nb,date; END LOOP; END ``` regards Dragondark De Lonlindil
SQL: Increment for every GROUP BY
[ "", "mysql", "sql", "count", "group-by", "" ]
Can you please assist me in writing a query to remove duplicates? See below: looking at the results, I have a Status column that I added manually. The Category column is what determines whether a record is a duplicate or not. In this case our main focus is cancellations. If for a member a cancellation has a reinstatement after it, as for member Y0007, it is not considered a duplicate. But if a member has more than one cancellation in a row, then it is considered a duplicate, because if we were to count the number of cancellations both of them would be counted, which would give incorrect results. We need to count a member only once. The cancellation can be done either by User or by User2, and it is possible that User2 has done more than one cancellation. Can you please assist me in writing a query that ensures no duplicates are shown: only one record for that member, not two records both counted as duplicates. ``` CreateYear CreateMonth CreateDay Category Member Status 2014 July 1 Cancellation by User Y0007 2014 July 1 Reinstatement by User Y0007 not duplicate 2014 July 2 Cancellation by User Y0007 2014 July 2 Reinstatement by User Y0007 2014 July 1 Cancellation by User O0031 not duplicate 2014 July 8 Reinstatement by User O0031 2014 July 1 Cancellation by User O0135 not duplicate 2014 July 8 Reinstatement by User O0135 2014 July 3 Cancellation by User P0422 duplicate 2014 July 4 Cancellation by User2 P0422 2014 July 4 Cancellation by User E3488 not duplicate 2014 July 8 Reinstatement by User E3488 ```
**Strategy** In order to remove the duplicates, let's start by writing some code that only selects the duplicates. If you can write the proper select statement, it is easy to convert it into a delete statement. Now, the select statement is a little tricky because we need to compare the dates, but your dates are split into three columns and the month is not stored as a number. However, if we concatenate the three columns together and cast them as dates, we can compare the date values and find cancellations that occurred several times in a row. **Query** ``` Select * From testtable t Where category = 'Cancellation' and exists (Select 1 From testtable t2 Where t2.category = t.category and t2.Member = t.Member and Cast(t2.CreateMonth + ' '+ cast(t2.CreateDay as varchar(2)) + ' ' + Cast(t2.CreateYear as varchar(4)) as date) > Cast(t.CreateMonth + ' '+ cast(t.CreateDay as varchar(2)) + ' ' + Cast(t.CreateYear as varchar(4)) as date) and not exists (Select 1 From testtable t3 Where t3.category = 'Reinstatement' and t3.Member = t.Member and Cast(t3.CreateMonth + ' '+ cast(t3.CreateDay as varchar(2)) + ' ' + Cast(t3.CreateYear as varchar(4)) as date) >= Cast(t.CreateMonth + ' '+ cast(t.CreateDay as varchar(2)) + ' ' + Cast(t.CreateYear as varchar(4)) as date) and Cast(t2.CreateMonth + ' '+ cast(t2.CreateDay as varchar(2)) + ' ' + Cast(t2.CreateYear as varchar(4)) as date) >= Cast(t3.CreateMonth + ' '+ cast(t3.CreateDay as varchar(2)) + ' ' + Cast(t3.CreateYear as varchar(4)) as date) ) ) ``` **Explanation** This query looks a little bit intense, but let's talk through it. So first we have the `Select * From testtable t Where category = 'Cancellation'`. All of this you are probably used to seeing, except perhaps the alias. I have named this particular instance of `testtable` t. I have essentially given it the nickname t so that instead of referring to the table by its name (and getting confused because I am using the same table three times), I can refer to it as t.
Now, the `exists`. This may be syntax you haven't seen before. `Exists` returns true if the parentheses that follow it contain something and false if they contain nothing. I am using this `exists` to check the table for a different record that is newer than the cancellation we are looking at and is also a cancellation. This is where the date comes in: we only want to know about other cancellations whose date is later than the cancellation we are looking at. For member P0422, when we are determining whether the cancellation on July 3 is a duplicate, we want to find the cancellation on July 4. In the subselect I am using `not exists`: the last thing we need to check is whether there is a reinstatement between the two cancellations. If there is one, we want to ignore this cancellation and move on to check the next row.

[See the query in action](http://sqlfiddle.com/#!6/f9c66/9)

**Cast as date in-depth look**

I have used syntax like `Cast(t2.CreateMonth + ' ' + cast(t2.CreateDay as varchar(2)) + ' ' + Cast(t2.CreateYear as varchar(4)) as date)` several times in this query to work out the date of a record, because the date of each cancellation (or reinstatement) has been broken up into three columns. Let's look at what this does for the following record.

> ```
> CreateYear CreateMonth CreateDay Category              Member Status
> 2014       July        1         Cancellation by User  Y0007
> ```

First is the concatenation: `t2.CreateMonth + ' ' + cast(t2.CreateDay as varchar(2)) + ' ' + Cast(t2.CreateYear as varchar(4))` grabs the pieces of the date from each of the columns and sticks them together. The year and the day are ints, not varchars, so first I convert them to varchars. The result of this expression is `July 1 2014`. Then we `Cast(... as date)`. Casting lets you take a piece of information and treat it as a different data type.
So this tells SQL to treat `July 1 2014` as a date instead of a string. All of this was done so we can compare the dates: SQL knows how to tell which date is more recent, which is why I converted these values. Instead of doing this, you could compare each piece of the date separately, but either way is a fair amount of work.

**Let's Delete!**

Now that we have found all of the rows that are duplicates, we can change the query into a delete very easily.

```
Delete t
From testtable t
Where category = 'Cancellation'
  and exists (Select 1
              From testtable t2
              Where t2.category = t.category
                and t2.Member = t.Member
                and Cast(t2.CreateMonth + ' ' + cast(t2.CreateDay as varchar(2)) + ' ' + Cast(t2.CreateYear as varchar(4)) as date) >
                    Cast(t.CreateMonth + ' ' + cast(t.CreateDay as varchar(2)) + ' ' + Cast(t.CreateYear as varchar(4)) as date)
                and not exists (Select 1
                                From testtable t3
                                Where t3.category = 'Reinstatement'
                                  and t3.Member = t.Member
                                  and Cast(t3.CreateMonth + ' ' + cast(t3.CreateDay as varchar(2)) + ' ' + Cast(t3.CreateYear as varchar(4)) as date) >=
                                      Cast(t.CreateMonth + ' ' + cast(t.CreateDay as varchar(2)) + ' ' + Cast(t.CreateYear as varchar(4)) as date)
                                  and Cast(t2.CreateMonth + ' ' + cast(t2.CreateDay as varchar(2)) + ' ' + Cast(t2.CreateYear as varchar(4)) as date) >=
                                      Cast(t3.CreateMonth + ' ' + cast(t3.CreateDay as varchar(2)) + ' ' + Cast(t3.CreateYear as varchar(4)) as date)
                               )
             )
```
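If you want to try the correlated-`exists` logic outside SQL Server, here is a sketch using Python's built-in sqlite3 (an assumption on my part; the answer itself is T-SQL). To keep it runnable, the three date columns are collapsed into a single ISO-format column, so plain string comparison stands in for the `Cast`, but the shape of the delete is the same:

```python
import sqlite3

# Translation of the correlated-EXISTS delete to SQLite.
# The three date columns become one ISO-format column so that
# string comparison orders the dates correctly.
con = sqlite3.connect(":memory:")
con.executescript("""
Create Table testtable (CreateDate text, Category text, Member text);
Insert Into testtable Values
 ('2014-07-01', 'Cancellation',  'Y0007'),
 ('2014-07-01', 'Reinstatement', 'Y0007'),
 ('2014-07-03', 'Cancellation',  'P0422'),
 ('2014-07-04', 'Cancellation',  'P0422'),
 ('2014-07-04', 'Cancellation',  'E3488'),
 ('2014-07-08', 'Reinstatement', 'E3488');
""")

# Delete cancellations that are followed by a later cancellation
# with no reinstatement in between, same member.
con.execute("""
Delete From testtable
Where Category = 'Cancellation'
  and exists (Select 1 From testtable t2
              Where t2.Category = 'Cancellation'
                and t2.Member = testtable.Member
                and t2.CreateDate > testtable.CreateDate
                and not exists (Select 1 From testtable t3
                                Where t3.Category = 'Reinstatement'
                                  and t3.Member = testtable.Member
                                  and t3.CreateDate >= testtable.CreateDate
                                  and t3.CreateDate <= t2.CreateDate))
""")

left = con.execute(
    "Select Member, CreateDate From testtable Where Category = 'Cancellation'"
).fetchall()
print(sorted(left))  # the '2014-07-03' P0422 row is gone
```

P0422's July 3 cancellation is deleted because July 4 holds a later cancellation with no reinstatement in between; the other cancellations survive.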
```
;With TempCte As
(
    Select CreateYear, CreateMonth, CreateDay, Category, Member,
           MemberCount = ROW_NUMBER() Over (Partition By CreateYear, CreateMonth, CreateDay,
                                                         Category, Member, Status
                                            Order By CreateYear)
    From TableName
)
Delete TempCte
Where MemberCount > 1
```
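To see what `ROW_NUMBER() Over (Partition By ...)` produces, here is a sketch using Python's sqlite3 (SQLite 3.25+ is assumed for window-function support). Note two deliberate departures from the answer above: the partition list here is narrowed to Category and Member so that cancellations on different days are numbered together, and SQLite cannot delete through a CTE, so only the select side is shown; in SQL Server the `Delete TempCte Where MemberCount > 1` works directly.

```python
import sqlite3

# Number each member's cancellations; rows with MemberCount > 1
# are the candidate duplicates.
con = sqlite3.connect(":memory:")
con.executescript("""
Create Table TableName (CreateDate text, Category text, Member text);
Insert Into TableName Values
 ('2014-07-03', 'Cancellation', 'P0422'),
 ('2014-07-04', 'Cancellation', 'P0422'),
 ('2014-07-04', 'Cancellation', 'E3488');
""")

rows = con.execute("""
With TempCte As (
    Select Member, CreateDate,
           Row_Number() Over (Partition By Category, Member
                              Order By CreateDate) As MemberCount
    From TableName
    Where Category = 'Cancellation'
)
Select Member, CreateDate, MemberCount From TempCte
Order By Member, CreateDate
""").fetchall()

print(rows)
# → [('E3488', '2014-07-04', 1), ('P0422', '2014-07-03', 1),
#    ('P0422', '2014-07-04', 2)]
```

P0422's second cancellation gets `MemberCount = 2`, so it is the row a delete would target. Be aware, though, that unlike the accepted answer, a plain ROW_NUMBER partition does not account for a reinstatement sitting between two cancellations.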
Remove duplicate records
[ "", "sql", "sql-server", "" ]