Fields per record: Prompt, Chosen, Rejected, Title, Tags
I am calculating total hours/minutes but I would like to get rid of the decimals and only show something like 2.00 hours or 2.5 hours. Right now I am getting something like 2.000000 and want to limit it to 2 decimal places. ``` select DATEDIFF(minute, Min(FullDatetime), Max(FullDatetime)) / 60.0 as hours from myTable where userid = 123 ```
You can do it by rounding but the easiest is to format for output using FORMAT(). ``` select FORMAT(DATEDIFF(minute, Min(FullDatetime), Max(FullDatetime)) / 60.0, 'N2') as hours from myTable where userid = 123 ``` Helpful original documentation: [here](https://msdn.microsoft.com/en-us/library/ee634924.aspx)
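As a quick, runnable check of the arithmetic and formatting (a sketch using SQLite from Python as a stand-in; SQLite's `printf()` plays the role of SQL Server's `FORMAT(..., 'N2')` here, and the 150-minute value is made up):

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()

# 150 minutes / 60.0 = 2.5 hours; printf('%.2f', ...) renders two decimals,
# analogous to SQL Server's FORMAT(..., 'N2')
hours = cur.execute("SELECT printf('%.2f', 150 / 60.0)").fetchone()[0]
print(hours)  # '2.50'
```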
Try using a cast: ``` cast(value as decimal(18,2)) -- 2 decimal places select Cast(DATEDIFF(minute, Min(FullDatetime), Max(FullDatetime)) / 60.0 as decimal(18,2)) as hours from myTable where userid = 123 ```
how to get rid of decimals from sql results
[ "sql", "sql-server", "t-sql" ]
Basically I have a table that has accnt and code. the codes are 4 digit codes CAWE CPEE CWWE CBEW etc. Each time an accnt is accessed it leaves a code on it. So account 30040 can be in this table 500 times. I am trying to figure out how I can pull the accounts that have the CBEW code and only 1 accessed entry. Please help me :) I'm new to SQL so please be nice!
You can use `group by` and `having`: ``` select accnt from table group by accnt having count(*) = 1 and max(code) = 'CBEW'; ``` This finds accounts with just one row and ensures that that row has the code you are looking for.
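A runnable sketch of this `group by`/`having` approach (SQLite via Python; the table name and sample data below are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE access_log (accnt INTEGER, code TEXT)")
cur.executemany("INSERT INTO access_log VALUES (?, ?)", [
    (30040, "CAWE"), (30040, "CBEW"),   # two rows -> excluded
    (30050, "CBEW"),                    # exactly one row, and it is CBEW
    (30060, "CPEE"),                    # one row, but the wrong code
])
# count(*) = 1 keeps single-access accounts; max(code) over a one-row
# group is just that row's code
rows = cur.execute("""
    SELECT accnt
    FROM access_log
    GROUP BY accnt
    HAVING COUNT(*) = 1 AND MAX(code) = 'CBEW'
""").fetchall()
print(rows)  # [(30050,)]
```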
This should do what you are asking ``` SELECT accnt FROM mytable WHERE code = 'CBEW' GROUP BY code HAVING COUNT(code) = 1 ```
SQL count? Not sure how to go about it
[ "sql", "count" ]
This is the homework question I am having issues with: Using the BOOKS table and the SUBSTR function, create a query that will return a list of book ISBN codes and the book title for all the computer books. Format the ISBN code to display as 9-999-99999-9 (e.g., 1059831198 would display as 1-059-83119-8). My code so far is below: ``` SELECT ISBN, Title,Category SUBSTR(ISBN,1,3,4,1 ) FROM BOOKS; WHERE Category = 'Computer'; ``` The error I got was: ERROR at line 2: ORA-00923: FROM keyword not found where expected. After playing around with it for a bit I realized that I'm not allowed to have that many numbers in my `SUBSTR` to split up the ISBN, but I've looked in my textbook and online and I cannot figure out how to do my `SUBSTR` with that many dashes between all those numbers. Also, I tried the expression with just two numbers and it worked, but my WHERE clause is giving me issues too and I don't know why; even though I researched the error, I'm not understanding it. The course I am in is an entry-level SQL\*Plus course, so these are very basic things that I have just never done before. I already know I'm not allowed to put that many numbers in `SUBSTR`, so I would appreciate it if someone could point me in the right direction on how to complete this substring.
You will have to build up the string section by section, e.g. `SUBSTR(ISBN,1,1) || '-' || SUBSTR(ISBN,2,3 ) || '-'` etc
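The piece-by-piece concatenation can be tried outside SQL\*Plus; SQLite's `substr()` and `||` behave the same way as Oracle's here (a runnable sketch from Python, using the ISBN from the question):

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
# Build 9-999-99999-9 piece by piece: SUBSTR(isbn, start, length) joined with '-'
formatted = cur.execute("""
    SELECT substr(?, 1, 1) || '-' || substr(?, 2, 3) || '-'
        || substr(?, 5, 5) || '-' || substr(?, 10, 1)
""", ("1059831198",) * 4).fetchone()[0]
print(formatted)  # 1-059-83119-8
```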
The template for your solution could be: ``` SELECT Title -- reformatted isbn deliberately omitted - replace this line with your own work FROM BOOKS WHERE Category = 'Computer' ; ``` Note: Consider this answer to be a comment that wishes to benefit from advanced formatting options ...
Creating a SUBSTR in SQL Plus
[ "sql", "oracle" ]
My application uses a single query to return all permissions for a user, and this single query has 10 INNER JOINs to build the entire result set. Here is a preview of the query (I had to change the table names because of confidential information):

```
SELECT TABLE9.CONTINENT, TABLE9.COD_COUNTRY, TABLE9.DES_COUNTRY, TABLE9.COD_ISO,
       TABLE7.ID_DEL, TABLE7.COD_DEL, TABLE7.DES_DEL, TABLE7.DES_ZONE,
       TABLE7.GMT_MINUTES, TABLE7.CANT_MIN_INI, TABLE7.CANT_MIN_SALIDA, TABLE7.CANT_MET_BASE,
       TABLE5.ID_TS, TABLE5.COD_TS,
       TABLE2.ID_ROLE, TABLE2.TIMEOUT_SESION,
       TABLE11.ID_PERMISSION,
       TABLE3.COD_APLICATION, TABLE3.DES_APLICATION,
       TABLE6.ID_PLANT, TABLE6.COD_PLANT, TABLE6.DES_PLANT
FROM TABLE1
INNER JOIN TABLE2 ON TABLE2.ID_ROLE = TABLE1.ID_ROLE
INNER JOIN TABLE3 ON TABLE3.ID_APLICATION = TABLE2.ID_APLICATION
INNER JOIN TABLE4 ON TABLE4.ID_PTS = TABLE1.ID_PTS
INNER JOIN TABLE5 ON TABLE4.ID_TS = TABLE5.ID_TS
INNER JOIN TABLE6 ON TABLE6.ID_PLANT = TABLE4.ID_PLANT
INNER JOIN TABLE7 ON TABLE7.ID_DEL = TABLE6.ID_DEL
INNER JOIN TABLE8 ON (TABLE8.ID_USER = TABLE1.ID_USER)
INNER JOIN TABLE9 ON TABLE9.ID_COUNTRY = TABLE7.ID_COUNTRY
INNER JOIN TABLE10 ON TABLE10.ID_ROLE = TABLE2.ID_ROLE
INNER JOIN TABLE11 ON (TABLE11.ID_PERMISSION = TABLE10.ID_PERMISSION AND TABLE11.ID_APLICATION = TABLE3.ID_APLICATION)
WHERE TABLE11.COD_PERMISSION <> 'PermissionCode'
  AND TABLE8.ID_USER_AD = 'e5def917-73e6-4b4e-8b5b-436794768c4b'
  AND TABLE8.BOL_ENABLED = 1
```

Here is the execution plan (the cost has decreased after creating some indexes; however, it still takes 39 seconds to return 58k rows):

```
------------------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name        | Rows | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |             |  129 |  118K |   62   (9) | 00:00:01 |
|   1 |  SORT ORDER BY                          |             |  129 |  118K |   62   (9) | 00:00:01 |
|   2 |   NESTED LOOPS                          |             |  129 |  118K |   61   (7) | 00:00:01 |
|*  3 |    HASH JOIN                            |             | 3461 | 2926K |   61   (7) | 00:00:01 |
|*  4 |     TABLE ACCESS FULL                   | TABLE11     |  262 | 24890 |    4   (0) | 00:00:01 |
|*  5 |     HASH JOIN                           |             |  185 |  139K |   57   (8) | 00:00:01 |
|   6 |      TABLE ACCESS FULL                  | TABLE3      |   14 |   840 |    4   (0) | 00:00:01 |
|*  7 |      HASH JOIN                          |             |  185 |  128K |   52   (6) | 00:00:01 |
|   8 |       TABLE ACCESS FULL                 | TABLE2      |   65 |  5785 |    4   (0) | 00:00:01 |
|*  9 |       HASH JOIN                         |             |  185 |  112K |   48   (7) | 00:00:01 |
|  10 |        TABLE ACCESS FULL                | TABLE5      |   56 |  2800 |    4   (0) | 00:00:01 |
|* 11 |        HASH JOIN                        |             |  185 |  103K |   43   (5) | 00:00:01 |
|  12 |         TABLE ACCESS FULL               | TABLE9      |    1 |    70 |    3   (0) | 00:00:01 |
|* 13 |         HASH JOIN                       |             |  185 | 92870 |   40   (5) | 00:00:01 |
|  14 |          TABLE ACCESS FULL              | TABLE7      |   43 |  5375 |    3   (0) | 00:00:01 |
|* 15 |          HASH JOIN                      |             |  185 | 69745 |   36   (3) | 00:00:01 |
|  16 |           TABLE ACCESS FULL             | TABLE6      |   43 |  4128 |    3   (0) | 00:00:01 |
|* 17 |           HASH JOIN                     |             |  185 | 51985 |   33   (4) | 00:00:01 |
|  18 |            NESTED LOOPS                 |             |  193 | 35126 |   20   (0) | 00:00:01 |
|* 19 |             TABLE ACCESS BY INDEX ROWID | TABLE8      |    1 |    77 |    2   (0) | 00:00:01 |
|* 20 |              INDEX UNIQUE SCAN          | AK_TABLE8_2 |    1 |       |    1   (0) | 00:00:01 |
|  21 |             TABLE ACCESS BY INDEX ROWID | ADPR_TABLE1 |  193 | 20265 |   18   (0) | 00:00:01 |
|* 22 |              INDEX RANGE SCAN           | IX_TABLE1   |  193 |       |    2   (0) | 00:00:01 |
|  23 |            INDEX FAST FULL SCAN         | IX_TABLE4   | 2281 |  220K |   12   (0) | 00:00:01 |
|* 24 |    INDEX UNIQUE SCAN                    | AK_TABLE10  |    1 |    73 |    0   (0) | 00:00:01 |
------------------------------------------------------------------------------------------------------------
```

What can I do to improve this query?

---

UPDATE

Here are the indexes I have created:

```
create index IX_TABLE11 on TABLE11 (ID_PERMISSION, ID_APLICATION) ONLINE;
create index IX_TABLE8  on TABLE8  (ID_USER, ID_USER_AD, BOL_ACTIVE) ONLINE;
create index IX_TABLE6  on TABLE6  (ID_PLANT, ID_DEL) ONLINE;
create index IX_TABLE4  on TABLE4  (ID_PTS, ID_TS, ID_PLANT) ONLINE;
create index IX_TABLE2  on TABLE2  (ID_ROLE, ID_APLICATION) ONLINE;
```
Thanks for the note of the indexes added. To optimize the query based on your primary criteria of Table8, you want the columns associated with the WHERE clause up front, and ancillary fields AFTER. Since your criteria is about a specific user via "Table8", I have restructured the query slightly to put that into the primary position and updated the WHERE slightly. I have also included indexes I would have on the respective tables noting the ones you provided and those that should be slightly adjusted/added. ``` SELECT -- Columns FROM TABLE8 INNER JOIN TABLE1 ON TABLE8.ID_USER = TABLE1.ID_USER INNER JOIN TABLE2 ON TABLE1.ID_ROLE = TABLE2.ID_ROLE INNER JOIN TABLE3 ON TABLE2.ID_APLICATION = TABLE3.ID_APLICATION INNER JOIN TABLE10 ON TABLE2.ID_ROLE = TABLE10.ID_ROLE INNER JOIN TABLE11 ON TABLE10.ID_PERMISSION = TABLE11.ID_PERMISSION AND TABLE3.ID_APLICATION = TABLE11.ID_APLICATION AND TABLE11.COD_PERMISSION <> 'PermissionCode' INNER JOIN TABLE4 ON TABLE1.ID_PTS = TABLE4.ID_PTS INNER JOIN TABLE5 ON TABLE4.ID_TS = TABLE5.ID_TS INNER JOIN TABLE6 ON TABLE4.ID_PLANT = TABLE6.ID_PLANT INNER JOIN TABLE7 ON TABLE6.ID_DEL = TABLE7.ID_DEL INNER JOIN TABLE9 ON TABLE7.ID_COUNTRY = TABLE9.ID_COUNTRY WHERE TABLE8.BOL_ENABLED = 1 AND TABLE8.ID_USER_AD = 'e5def917-73e6-4b4e-8b5b-436794768c4b' Table Index TABLE1 (ID_USER, ID_ROLE, ID_PTS) TABLE2 (ID_ROLE, ID_APPLICATION) <- index already exists TABLE3 (ID_APLICATION ) TABLE4 (ID_PTS, ID_TS, ID_PLANT ) <- index already exists TABLE5 (ID_TS ) TABLE6 (ID_PLANT, ID_DEL) <- index already exists TABLE7 (ID_DEL, ID_COUNTRY) TABLE8 (ID_USER_AD, BOL_ENABLED, ID_USER ) <- Added BOL_ENABLED, ID_USER as LAST column index TABLE10 (ID_ROLE, ID_PERMISSION ) TABLE11 (ID_PERMISSION, ID_APLICATION, COD_PERMISSION ) <-- add COD_PERMISSION ``` From the adjusted indexes, and your comment about it still taking too long, I would offer the following. It appears your application is browser-based. If so, your table has specific applications. 
What I would SUGGEST doing is the following. Strip down your query to get DISTINCT applications a person has access to. They probably have something on the screen that allows them to choose from... Then, once the user picks the SPECIFIC application they want, THEN run the query but also include the criteria for the SINGLE application they select. So if you have 10 applications, your 58k permissions may now be down to 5-6k records for permissions. So the first query might be stripped down to the code and description of available applications for the user. ``` SELECT DISTINCT TABLE3.COD_APLICATION, TABLE3.DES_APLICATION FROM TABLE8 INNER JOIN TABLE1 ON TABLE8.ID_USER = TABLE1.ID_USER INNER JOIN TABLE2 ON TABLE1.ID_ROLE = TABLE2.ID_ROLE INNER JOIN TABLE3 ON TABLE2.ID_APLICATION = TABLE3.ID_APLICATION WHERE TABLE8.BOL_ENABLED = 1 AND TABLE8.ID_USER_AD = 'e5def917-73e6-4b4e-8b5b-436794768c4b' ``` Then, once the specific application is selected from the user interface, add that specific application to the main query (notice change just at the join to table2) ``` SELECT DISTINCT TABLE9.CONTINENT, TABLE9.COD_COUNTRY, TABLE9.DES_COUNTRY, TABLE9.COD_ISO, TABLE7.ID_DEL, TABLE7.COD_DEL, TABLE7.DES_DEL, TABLE7.DES_ZONE, TABLE7.GMT_MINUTES, TABLE7.CANT_MIN_INI, TABLE7.CANT_MIN_SALIDA, TABLE7.CANT_MET_BASE, TABLE5.ID_TS, TABLE5.COD_TS, TABLE2.ID_ROLE, TABLE2.TIMEOUT_SESION, TABLE11.ID_PERMISSION, TABLE3.COD_APLICATION, TABLE3.DES_APLICATION, TABLE6.ID_PLANT, TABLE6.COD_PLANT, TABLE6.DES_PLANT FROM TABLE8 INNER JOIN TABLE1 ON TABLE8.ID_USER = TABLE1.ID_USER INNER JOIN TABLE2 ON TABLE1.ID_ROLE = TABLE2.ID_ROLE AND TABLE2.ID_APLICATION = [specific application user selected] INNER JOIN TABLE3 ON TABLE2.ID_APLICATION = TABLE3.ID_APLICATION INNER JOIN TABLE10 ON TABLE2.ID_ROLE = TABLE10.ID_ROLE INNER JOIN TABLE11 ON TABLE10.ID_PERMISSION = TABLE11.ID_PERMISSION AND TABLE3.ID_APLICATION = TABLE11.ID_APLICATION AND TABLE11.COD_PERMISSION <> 'PermissionCode' INNER JOIN TABLE4 ON 
TABLE1.ID_PTS = TABLE4.ID_PTS INNER JOIN TABLE5 ON TABLE4.ID_TS = TABLE5.ID_TS INNER JOIN TABLE6 ON TABLE4.ID_PLANT = TABLE6.ID_PLANT INNER JOIN TABLE7 ON TABLE6.ID_DEL = TABLE7.ID_DEL INNER JOIN TABLE9 ON TABLE7.ID_COUNTRY = TABLE9.ID_COUNTRY WHERE TABLE8.BOL_ENABLED = 1 AND TABLE8.ID_USER_AD = 'e5def917-73e6-4b4e-8b5b-436794768c4b' ```
Assuming you're using most of the intermediary tables for joins only and not pulling data from them, create additional indexes in each that index both ids into a single index. That way when the db gathers data to do the join, it looks in the index and then has all the data it needs. All those "TABLE ACCESS FULL" records go away, you'll have only index accesses. Example indexes: ``` TABLE2 (ID_ROLE, ID_APLICATION) TABLE4 (ID_PTS, ID_TS) TABLE6 (ID_PLANT, ID_DEL) etc... ```
How to optimize query with 10+ joins?
[ "sql", "oracle" ]
I do not know how to formulate a query. To simplify the problem I have created a silly but simple example. Here are the specifics: * TableColor: this is a table of colors that looks like: > ``` > ColorID Color > 1 Red > 2 Green > 3 Blue > ``` * TableClothes: this is a table that logs the pants and shirt color worn each day that looks like: > ``` > Day PantsColorID ShirtColorID > 1 2 3 (Day 1 wore green pants and a blue shirt) > 2 3 1 (Day 2 wore blue pants and a red shirt) > ``` How do I define a query to return a dataset that looks like: > ``` > Day PantsColorID PantsColor ShirtColorID ShirtColor > 1 2 Green 3 Blue > 2 3 Blue 1 Red > ``` This query gets me close: ``` select TableClothes.Day, TableClothes.PantsColorID, TableColor.Color as 'color of pants', TableClothes.ShirtColorID, TableColor.Color as 'color of shirt', TableColor.ColorID from TableClothes, TableColor where TableClothes.PantsColorID = TableColor.ColorID ``` > ``` > Day PantsColorID PantsColor ShirtColorID ShirtColor > 1 2 Green 3 Green > 2 3 Blue 1 Blue > ``` Of course, this query returns the correct pants color, but it shows that same color as the shirt color, which is wrong. How do I construct a query to return the correct color for both pants and shirt? Thank you.
First off, you're using old-style joins, which is bad for [all of these reasons](https://sqlblog.org/2009/10/08/bad-habits-to-kick-using-old-style-joins). Second, you have identifiers with spaces (not the best choice; you could use underscores instead) and, even worse, you're using single quotes for these identifiers: `'color of pants'`. That is not ANSI standard, it is quite confusing since single quotes are used for string literals, and it is deprecated. See [another set of good reasons](https://sqlblog.org/2012/01/23/bad-habits-to-kick-using-as-instead-of-for-column-aliases). So it's better to use double quotes (or brackets) for identifiers: `"color of pants"`. Third, we'll add the statement separator (`;`), because it should be obvious where a statement ends, and because SQL Server, while happy to let you omit these separators and do the dirty job of finding out where one statement ends and the next starts, will get confused when the next statement starts with `WITH`. Help it (and the next developer who reads your code) stay sane. So if we fix those, your query will look like this: ``` select TableClothes.Day, TableClothes.PantsColorID, TableColor.Color as "color of pants", TableClothes.ShirtColorID, TableColor.Color as "color of shirt", TableColor.ColorID from TableClothes inner join TableColor on TableClothes.PantsColorID = TableColor.ColorID ; ``` The reason this isn't giving you the results you want is that you also need to join on both the shirt and the pants ColorID; that way you can get the descriptor information for both. ``` select TableClothes.Day, TableClothes.PantsColorID, TableColor.Color as "color of pants", TableClothes.ShirtColorID, TableColor.Color as "color of shirt", TableColor.ColorID from TableClothes inner join TableColor on TableClothes.PantsColorID = TableColor.ColorID inner join TableColor on TableClothes.ShirtColorID = TableColor.ColorID ``` Oh, but wait: that doesn't compile.
That's because when you refer to TableColor twice like this, the database system has no clue which one you are referring to in your `SELECT` and `JOIN` clauses. So we're going to use a technique called aliasing, which not only solves this but also makes your code easier to read. ``` select C.Day, C.PantsColorID, P.Color as "color of pants", C.ShirtColorID, S.Color as "color of shirt", C.ColorID from TableClothes as C inner join TableColor as P on C.PantsColorID = P.ColorID inner join TableColor as S on C.ShirtColorID = S.ColorID ; ``` There: now we have a functional, clean, easy-to-read query.
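A runnable version of the aliased self-join, using the sample data from the question (SQLite via Python as a stand-in for SQL Server):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
    CREATE TABLE TableColor (ColorID INTEGER, Color TEXT);
    INSERT INTO TableColor VALUES (1,'Red'),(2,'Green'),(3,'Blue');
    CREATE TABLE TableClothes (Day INTEGER, PantsColorID INTEGER, ShirtColorID INTEGER);
    INSERT INTO TableClothes VALUES (1,2,3),(2,3,1);
""")
# Join TableColor twice, once per lookup, under different aliases
rows = cur.execute("""
    SELECT C.Day, C.PantsColorID, P.Color, C.ShirtColorID, S.Color
    FROM TableClothes AS C
    JOIN TableColor AS P ON C.PantsColorID = P.ColorID
    JOIN TableColor AS S ON C.ShirtColorID = S.ColorID
    ORDER BY C.Day
""").fetchall()
print(rows)  # [(1, 2, 'Green', 3, 'Blue'), (2, 3, 'Blue', 1, 'Red')]
```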
You use an alias, like this: ``` select TableClothes.Day, TableClothes.PantsColorID, Color1.Color as 'color of pants', TableClothes.ShirtColorID, Color2.Color as 'color of shirt' from TableClothes, TableColor as Color1, TableColor as Color2 where TableClothes.PantsColorID = Color1.ColorID and TableClothes.ShirtColorID = Color2.ColorID ``` It is more common to use modern join syntax (which I think makes it clearer): ``` select TableClothes.Day, TableClothes.PantsColorID, Color1.Color as 'color of pants', TableClothes.ShirtColorID, Color2.Color as 'color of shirt' from TableClothes join TableColor as Color1 on TableClothes.PantsColorID = Color1.ColorID join TableColor as Color2 on TableClothes.ShirtColorID = Color2.ColorID ```
How to define a SQL query to "look up and return a value from a table twice"
[ "sql" ]
I have the following 2 tables > Table 1 - Questions > > Contains questions and marks allotted for each questions ``` ID| Questions | Marks ________________________________________ 1 | What is your name? | 2 2 | How old are you? | 2 3 | Where are you from? | 2 4 | What is your father's name? | 2 5 | Explain about your project? | 5 6 | How was the training session?| 5 ``` > Table 2 - Question Format > > Contains how many questions (count) to be extracted for a set of Marks ``` Mark | Count ------------- 2 | 2 5 | 1 ``` I want the random questions to be picked up from the table [Questions] as per the [count] set in the table [Question\_Format]. ``` ID | Question ---------------------------- 2 | How old are you? 3 | Where are you from? 6 | How was the training session? ```
Here is the idea. Enumerate the questions for each "mark" by using `row_number()`. Then use this sequential number to select the random questions: ``` select q.* from (select q.*, row_number() over (partition by marks order by newid()) as seqnum from questions q ) q join marks m on q.marks = m.mark and q.seqnum <= m.count; ```
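A runnable sketch of this idea (SQLite via Python, which needs SQLite >= 3.25 for window functions; `newid()` becomes `random()`, and the table/column names are adapted from the question). The picked rows vary run to run, but the per-mark counts always match the format table:

```python
import sqlite3  # window functions need SQLite >= 3.25
from collections import Counter

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
    CREATE TABLE Questions (id INTEGER, question TEXT, marks INTEGER);
    INSERT INTO Questions VALUES
        (1,'What is your name?',2),(2,'How old are you?',2),
        (3,'Where are you from?',2),(4,'What is your father''s name?',2),
        (5,'Explain about your project?',5),(6,'How was the training session?',5);
    CREATE TABLE QuestionFormat (mark INTEGER, cnt INTEGER);
    INSERT INTO QuestionFormat VALUES (2,2),(5,1);
""")
# Number questions randomly within each mark group, then keep the first
# cnt of each group
picked = cur.execute("""
    SELECT q.id, q.question, q.marks
    FROM (SELECT id, question, marks,
                 ROW_NUMBER() OVER (PARTITION BY marks ORDER BY random()) AS rn
          FROM Questions) q
    JOIN QuestionFormat f ON f.mark = q.marks AND q.rn <= f.cnt
""").fetchall()
per_mark = Counter(m for _, _, m in picked)
print(per_mark)  # two mark-2 questions and one mark-5 question
```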
``` with cte as ( select *, row_number() over(partition by Marks order by newid()) as rn from Questions ) select q.id, q.Questions from cte as q inner join QuestionFormat as qf on qf.Mark = q.Marks where q.rn <= qf.[Count] ``` **`sql fiddle demo`**
query to extract random rows from a table
[ "sql", "sql-server", "sql-server-2008", "select", "stored-procedures" ]
I need to build an SQL query in MySQL to solve a problem as part of an assignment for class. I've been working at this problem for awhile, but I'm having a hard time figuring out how to structure this query properly. I'm rather new to the SQL language and databases in general and I'm stumped on this question. I have posted what I've come up with so far, but unfortunately I have not been able to get the results I'm looking for. If anyone could give me some guidance on how to accomplish this I would greatly appreciate it. Here's what my table structure looks like: > course(**cid**, fid, room) > > enroll(**cid**, **sid**, grade) > > faculty(**fid**, fname, dept, rank) > > student(**sid**, sname, major, credits) Here's the query I need to build: > Show the faculty id and faculty name for all faculty that have taught all computer science majors (major = 'CSC'). Here's what I've tried so far: ``` select f.fid, f.fname from faculty f join course c on f.fid = c.fid join enroll e on c.cid = e.cid join student s on e.sid = s.sid where s.sid = ALL (select sid from student where major = 'CSC'); select f.fid, f.fname from faculty f join course c on f.fid = c.fid join enroll e on c.cid = e.cid join student s on e.sid = s.sid group by f.fid, s.sid having s.sid = ALL (select sid from student where major = 'CSC')); ``` The logical hurdle I'm having a hard time understanding is how to make sure that the faculty member is teaching ALL of the current CSC majors. You can see that I've tried to add some logic to check each record returned, but I'm afraid I may be misunderstanding the syntax. These queries will run, but they return empty sets. Thanks for the help.
I agree the question may be unclear, and they might just be after all faculty that have taught *any* CSC major. However, just in case you still need all the faculty that have taught *all* CSC major, this should work: The following query tells us the pairs of faculty and CSC majors: ``` select f.fid, s.sid from faculty f inner join course c on f.fid = c.fid inner join enroll e on e.cid = c.cid inner join student s on e.sid = s.sid where s.major = 'CSC' group by f.fid, s.sid ``` Therefore, if we know the count of students who are computer science majors: ``` select count(1) from student s where s.major = 'CSC' ``` Then we can add up the number of CSC majors taught by each faculty member, and check it's equal to the total number of CSC majors: ``` select b.fid, b.fname from ( select a.fid, a.fname, count(1) as taught_count from ( select f.fid, f.fname, s.sid from faculty f inner join course c on f.fid = c.fid inner join enroll e on e.cid = c.cid inner join student s on e.sid = s.sid where s.major = 'CSC' group by f.fid, s.sid ) a group by a.fid, a.fname ) b where b.taught_count = ( select count(1) from student s where s.major = 'CSC' ) ```
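A compact variant of the same counting idea, runnable in SQLite via Python: count the distinct CSC majors each faculty member has taught in a `HAVING` clause, and compare against the total number of CSC majors. The sample data below is made up (Prof X has taught both CSC majors, Prof Y only one):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
    CREATE TABLE student (sid INTEGER, sname TEXT, major TEXT);
    INSERT INTO student VALUES (1,'Ann','CSC'),(2,'Bob','CSC'),(3,'Cat','ART');
    CREATE TABLE faculty (fid INTEGER, fname TEXT);
    INSERT INTO faculty VALUES (10,'Prof X'),(20,'Prof Y');
    CREATE TABLE course (cid INTEGER, fid INTEGER);
    INSERT INTO course VALUES (100,10),(101,10),(200,20);
    CREATE TABLE enroll (cid INTEGER, sid INTEGER);
    -- Prof X (10) has taught both CSC majors; Prof Y (20) only one
    INSERT INTO enroll VALUES (100,1),(101,2),(101,3),(200,1);
""")
rows = cur.execute("""
    SELECT f.fid, f.fname
    FROM faculty f
    JOIN course c  ON c.fid = f.fid
    JOIN enroll e  ON e.cid = c.cid
    JOIN student s ON s.sid = e.sid
    WHERE s.major = 'CSC'
    GROUP BY f.fid, f.fname
    HAVING COUNT(DISTINCT s.sid) =
           (SELECT COUNT(*) FROM student WHERE major = 'CSC')
""").fetchall()
print(rows)  # [(10, 'Prof X')]
```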
Try with ``` select f.fid, f.fname from faculty f join course c on f.fid = c.fid join enroll e on c.cid = e.cid join student s on e.sid = s.sid where s.sid IN (select sid from student where major = 'CSC'); ```
How do i write this SQL query for an ALL condition?
[ "mysql", "sql", "database", "group-by", "having" ]
I have 2 tables. Accounts: ``` ID | Deleted? | Type 1 | 0 | Father 2 | 0 | Son 3 | 1 | Son 4 | 1 | Son 5 | 0 | Father 6 | 0 | Father 7 | 1 | Son 8 | 0 | Son 9 | 0 | Father 10 | 1 | Son ``` Rel\_Accounts: ``` ID | SON | FATHER 1 | 4 | 6 2 | 3 | 6 3 | 2 | 5 4 | 4 | 1 5 | 7 | 1 6 | 8 | 9 7 | 10 | 9 ``` I want to select only the IDs of the active (Deleted = 0) fathers all of whose sons have Deleted = 1: ``` FATHERS 6 1 ``` How do you get these records when the father has Deleted = 0 but all of his sons have Deleted = 1? I have tried the following but it did not work: ``` SELECT A.ID, case when A.DELETED = 0 THEN (SELECT AH.SONS FROM ACCOUNTS_REL AH WHERE AH.FATHER = A.ID AND A.DELETED = 1) END FROM ACCOUNTS A WHERE A.TYPE = 'Father' ``` The expected results are 1 and 6 because they are active fathers and *all* of their sons are deleted.
If you want all active fathers all of whose sons are deleted, that is the same as saying: all active fathers who have sons but no active son. Try the query below: ``` select distinct rc.father from accounts a join rel_accounts rc on a.id=rc.father where a.deleted=0 and rc.father not in (select qrc.father from accounts qa join rel_accounts qrc on qa.id=qrc.father where qa.deleted=0 and qrc.son in (select qracc.son from rel_accounts qracc join accounts qacc on qracc.son=qacc.id where qacc.deleted=0)) order by father desc ``` See the [SQLFIDDLE DEMO](http://www.sqlfiddle.com/#!2/edf94/3)
This is a case where you will need to use multiple joins to create the data set you want. If I am understanding you correctly, you want to filter out account records that are deleted from the result set, then return only rows representing fathers that have a son. Something like this should suffice: ``` Select distinct F.id from accounts F join rel_accounts R on R.father=F.id join accounts S on S.id=R.son where F.deleted=0 and S.deleted=0; ``` The joins themselves do the work of filtering out results you don't want, then you can simply exclude the deleted rows from the result set. Someone else might be able to throw together a slightly cleaner version for you.
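For the record, the requirement ("active fathers that have sons, none of which is active") can also be phrased with `EXISTS`/`NOT EXISTS`; a runnable sketch with the question's data (SQLite via Python):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
    CREATE TABLE accounts (id INTEGER, deleted INTEGER, type TEXT);
    INSERT INTO accounts VALUES
        (1,0,'Father'),(2,0,'Son'),(3,1,'Son'),(4,1,'Son'),(5,0,'Father'),
        (6,0,'Father'),(7,1,'Son'),(8,0,'Son'),(9,0,'Father'),(10,1,'Son');
    CREATE TABLE rel_accounts (id INTEGER, son INTEGER, father INTEGER);
    INSERT INTO rel_accounts VALUES
        (1,4,6),(2,3,6),(3,2,5),(4,4,1),(5,7,1),(6,8,9),(7,10,9);
""")
# Active fathers that have at least one son, and no active (deleted=0) son
fathers = [r[0] for r in cur.execute("""
    SELECT a.id
    FROM accounts a
    WHERE a.type = 'Father' AND a.deleted = 0
      AND EXISTS (SELECT 1 FROM rel_accounts r WHERE r.father = a.id)
      AND NOT EXISTS (SELECT 1
                      FROM rel_accounts r
                      JOIN accounts s ON s.id = r.son
                      WHERE r.father = a.id AND s.deleted = 0)
    ORDER BY a.id
""")]
print(fathers)  # [1, 6]
```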
Joining tables and selecting foreign key where all rows meet condition in first table
[ "mysql", "sql", "compare", "case" ]
Let's assume this table: ``` Dpt ctd dte A 1 2014-01-06 A 2 2014-01-07 A 1 2014-01-07 B 1 2014-01-06 B 1 2014-01-07 A 2 2014-01-09 B 1 2014-01-10 A 1 2014-01-11 B 1 2014-01-13 A 2 2014-01-13 ``` I would like to calculate the running sum on every Sunday: ``` A 1 2014-01-06 B 1 2014-01-06 A 9 2014-01-13 B 4 2014-01-13 ``` How can I do this using an SQL query? Running PostgreSQL 9.3
You get a running total with SUM OVER. As you can have multiple records per day for a dpt, you must group by dpt and day first and run the total over the SUM(ctd). Afterwards remove days that are not Sunday. ``` select * from ( select dpt, dte, sum(sum(ctd)) over (partition by dpt order by dte) as total from mytable group by dpt, dte ) distinct_days where to_char(dte,'D') = '1' -- Sunday is '1', Monday is '2', etc. order by dte, dpt; ``` (You can achieve the same by using SUM OVER on all records first and remove duplicates in your results with DISTINCT. To me, however, grouping first feels more natural.)
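A runnable sketch of the group-then-running-total approach (SQLite via Python, needing SQLite >= 3.25 for window functions). Note that the question's sample dates 2014-01-06 and 2014-01-13 are actually Mondays, so the filter below keeps Mondays; in SQLite's `strftime('%w', ...)`, '0' is Sunday and '1' is Monday:

```python
import sqlite3  # window functions need SQLite >= 3.25

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE mytable (dpt TEXT, ctd INTEGER, dte TEXT)")
cur.executemany("INSERT INTO mytable VALUES (?, ?, ?)", [
    ('A',1,'2014-01-06'),('A',2,'2014-01-07'),('A',1,'2014-01-07'),
    ('B',1,'2014-01-06'),('B',1,'2014-01-07'),('A',2,'2014-01-09'),
    ('B',1,'2014-01-10'),('A',1,'2014-01-11'),('B',1,'2014-01-13'),
    ('A',2,'2014-01-13'),
])
rows = cur.execute("""
    WITH daily AS (            -- collapse multiple records per (dpt, day)
        SELECT dpt, dte, SUM(ctd) AS day_sum
        FROM mytable GROUP BY dpt, dte
    ),
    running AS (               -- running total per dpt over the daily sums
        SELECT dpt, dte,
               SUM(day_sum) OVER (PARTITION BY dpt ORDER BY dte) AS total
        FROM daily
    )
    SELECT dpt, total, dte
    FROM running
    WHERE strftime('%w', dte) = '1'   -- '1' = Monday in SQLite
    ORDER BY dte, dpt
""").fetchall()
print(rows)
```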
This does not require entries to exist on Sundays (and it really works with Sundays, not with Mondays): ``` with t (dpt, ctd, dte) as ( values ('a', 2, date '2014-01-05'), ('a', 1, '2014-01-06'), ('a', 2, '2014-01-07'), ('a', 1, '2014-01-07'), ('b', 1, '2014-01-06'), ('b', 1, '2014-01-07'), ('a', 2, '2014-01-09'), ('b', 1, '2014-01-10'), ('a', 1, '2014-01-11'), ('b', 1, '2014-01-13'), ('a', 2, '2014-01-13') ) select dpt, monday + 6 sunday, sum(ctd_sum) over(partition by dpt order by monday) total from ( select dpt, date(date_trunc('week', dte)) monday, sum(ctd) ctd_sum from t group by 1, 2 ) sub; dpt | dte | total -----+------------+------- a | 2014-01-05 | 2 a | 2014-01-12 | 9 a | 2014-01-19 | 11 b | 2014-01-12 | 3 b | 2014-01-19 | 4 ```
SQL running sum on particular dates
[ "sql", "postgresql" ]
I have a table that contains: ``` itemid inventdimid datephysical transrefid 10001 123 2015-01-02 300002 10002 123 2015-01-03 3566 10001 123 2015-02-05 55555 10002 124 2015-02-01 4545 ``` The result I want: ``` itemid inventdimid datephysical transrefid 10001 123 2015-02-05 55555 10002 123 2015-01-03 3566 10002 124 2015-02-01 4545 ``` My query: ``` SELECT a.itemid,a.inventdimid,max(a.datephysical),a.transrefid FROM a where dataareaid = 'ermi' group by a.itemid,a.inventdimid ``` It fails because the column is invalid in the select list: it is not contained in either an aggregate function or the GROUP BY clause.
Use the ANSI standard `row_number()` function: ``` select t.* from (select t.*, row_number() over (partition by itemid, inventdimid order by datephysical desc) as seqnum from table t ) t where seqnum = 1; ```
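A runnable sketch of the `row_number()` approach with the question's data (SQLite via Python, needing SQLite >= 3.25 for window functions; the `dataareaid` filter is omitted since the sample rows don't show that column):

```python
import sqlite3  # ROW_NUMBER needs SQLite >= 3.25

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("""CREATE TABLE a (itemid INTEGER, inventdimid INTEGER,
                               datephysical TEXT, transrefid INTEGER)""")
cur.executemany("INSERT INTO a VALUES (?, ?, ?, ?)", [
    (10001, 123, '2015-01-02', 300002),
    (10002, 123, '2015-01-03', 3566),
    (10001, 123, '2015-02-05', 55555),
    (10002, 124, '2015-02-01', 4545),
])
# seqnum = 1 marks the newest row within each (itemid, inventdimid) group
rows = cur.execute("""
    SELECT itemid, inventdimid, datephysical, transrefid
    FROM (SELECT t.*,
                 ROW_NUMBER() OVER (PARTITION BY itemid, inventdimid
                                    ORDER BY datephysical DESC) AS seqnum
          FROM a t) t
    WHERE seqnum = 1
    ORDER BY itemid, inventdimid
""").fetchall()
print(rows)
```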
Find max(a.datephysical) for each itemid, inventdimid combination, select all rows from that date. ``` SELECT itemid, inventdimid, datephysical, transrefid FROM a a1 where dataareaid = 'ermi' and datephysical = (select max(datephysical) from a a2 where a1.itemid = a2.itemid and a1.inventdimid = a2.inventdimid and a2.dataareaid = 'ermi') ```
sql group by and max and other values
[ "sql", "group-by" ]
Assuming I have this data in a table: ``` id | thing | operation | timestamp ----+-------+-----------+----------- 0 | foo | add | 0 0 | bar | add | 1 1 | baz | remove | 2 1 | dim | add | 3 0 | foo | remove | 4 0 | dim | add | 5 ``` Is there any way to construct a Postgres SQL query that will group by id and operation but without grouping rows with a higher timestamp value over those with lower? I want to get this out of the query: ``` id | things | operation ----+----------+----------- 0 | foo, bar | add 1 | baz | remove 1 | dim | add 0 | foo | remove 0 | dim | add ``` Basically group by, but only over adjacent rows sorted by timestamp.
This is a [gaps and islands](http://www.manning.com/nielsen/SampleChapter5.pdf) problem (although the article is directed at SQL Server, it describes the problem very well, so it still applies to PostgreSQL), and it can be solved using ranking functions: ``` SELECT id, thing, operation, timestamp, ROW_NUMBER() OVER(ORDER BY timestamp) - ROW_NUMBER() OVER(PARTITION BY id, operation ORDER BY Timestamp) AS groupingSet, ROW_NUMBER() OVER(ORDER BY timestamp) AS PositionInSet, ROW_NUMBER() OVER(PARTITION BY id, operation ORDER BY Timestamp) AS PositionInGroup FROM T ORDER BY timestamp; ``` As you can see, by taking the overall position within the set and deducting the position in the group, you can identify the islands, where each unique combination of `(id, operation, groupingset)` represents an island: ``` id thing operation timestamp groupingSet PositionInSet PositionInGroup 0 foo add 0 0 1 1 0 bar add 1 0 2 2 1 baz remove 2 2 3 1 1 dim add 3 3 4 1 0 foo remove 4 4 5 1 0 dim add 5 3 6 3 ``` Then you just need to put this in a subquery, group by the relevant fields, and use string\_agg to concatenate your things: ``` SELECT id, STRING_AGG(thing, ', ') AS things, operation FROM ( SELECT id, thing, operation, timestamp, ROW_NUMBER() OVER(ORDER BY timestamp) - ROW_NUMBER() OVER(PARTITION BY id, operation ORDER BY Timestamp) AS groupingSet FROM T ) AS t GROUP BY id, operation, groupingset; ```
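The island computation can be tried out in SQLite via Python (>= 3.25 for window functions; `GROUP_CONCAT` stands in for Postgres's `STRING_AGG`, and the `timestamp` column is renamed `ts`):

```python
import sqlite3  # window functions need SQLite >= 3.25

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE t (id INTEGER, thing TEXT, operation TEXT, ts INTEGER)")
cur.executemany("INSERT INTO t VALUES (?, ?, ?, ?)", [
    (0, 'foo', 'add', 0), (0, 'bar', 'add', 1), (1, 'baz', 'remove', 2),
    (1, 'dim', 'add', 3), (0, 'foo', 'remove', 4), (0, 'dim', 'add', 5),
])
# Overall position minus position within (id, operation) is constant
# across each island of adjacent rows
rows = cur.execute("""
    SELECT id, GROUP_CONCAT(thing) AS things, operation, MIN(ts) AS first_ts
    FROM (SELECT id, thing, operation, ts,
                 ROW_NUMBER() OVER (ORDER BY ts)
               - ROW_NUMBER() OVER (PARTITION BY id, operation ORDER BY ts) AS grp
          FROM t)
    GROUP BY id, operation, grp
    ORDER BY first_ts
""").fetchall()
for r in rows:
    print(r)  # five islands, in timestamp order
```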
Perhaps this works, if your sample data is good enough: ``` select id, string_agg(thing,',') as things, operation from tablename group by id, operation ``` I.e. use id and operation to find things to concat. Edited, now using string\_agg instead of group\_concat.
Postgres SQL GROUP BY without jumping rows?
[ "sql", "postgresql", "group-by" ]
I thought that selecting values from a subquery in SQL would only yield values from that subset, until I found a very nasty bug in my code. Here is an example of my problem. 1. I'm selecting the rows that contain the latest (max) date per function. This correctly returns 4 rows with the latest check-in of each function. ``` select *, max(date) from cm where file_id == 5933 group by function_id; ``` ``` file_id function_id date value max(date) 5933 64807 1407941297 1 1407941297 5933 64808 1407941297 11 1407941297 5933 895175 1306072348 1306072348 5933 895178 1363182349 1363182349 ``` 2. When selecting only the value from the subset above, it returns function values from previous dates, i.e. rows that don't belong in the subset above. You can see the result below, where the dates are older than in the first subset. ``` select temp.function_id, temp.date, temp.value from (select *, max(date) from cm where file_id == 5933 group by function_id) as temp; ``` ``` function_id date value 64807 1306072348 1 <- outdated row, not in first subset 64808 1306072348 17 <- outdated row, not in first subset 895175 1306072348 895178 1363182349 ``` What am I doing fundamentally wrong? Shouldn't selects performed on subqueries only return possible results from those subqueries?
Perhaps my question was not formulated correctly, but this post had the solutions I was essentially looking for: <https://stackoverflow.com/a/123481/2966951> <https://stackoverflow.com/a/121435/2966951> Filtering out the most recent row was my problem. I was surprised that selecting from a subquery with a max value could yield anything other than that value.
SQLite allows you to use MAX() to select the row to be returned by a GROUP BY, but this works only if the MAX() is actually computed. When you throw the `max(date)` column away, this no longer works. In this case, you actually want to use the `date` value, so you can just keep the MAX(): ``` SELECT function_id, max(date) AS date, value FROM cm WHERE file_id = 5933 GROUP BY function_id ```
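This bare-column behavior is easy to demonstrate from Python (a minimal sketch with made-up rows; with a lone `MAX()` in the select list, SQLite takes the other columns from the row holding the maximum):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE cm (function_id INTEGER, date INTEGER, value TEXT)")
cur.executemany("INSERT INTO cm VALUES (?, ?, ?)", [
    (64807, 1306072348, 'old'),
    (64807, 1407941297, 'new'),
])
# With a single MAX() aggregate kept in the SELECT, the bare column
# 'value' comes from the row that holds the maximum date
row = cur.execute("""
    SELECT function_id, MAX(date) AS date, value
    FROM cm GROUP BY function_id
""").fetchone()
print(row)  # (64807, 1407941297, 'new')
```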
Subqueries: What am I doing fundamentally wrong?
[ "sql", "sqlite", "subquery", "subset" ]
I have this table ``` | date | sum | |--------------|-------| | 2015-02-19 | 10000 | | 2015-02-19 | 10000 | | 2015-02-20 | 15000 | | 2015-02-20 | 15000 | | 2015-02-21 | 18000 | | 2015-02-21 | 18000 | ``` I want to select top 2 rows from the table, but only different ones, meaning my result should return `2015-02-20` and `2015-02-21`. ``` SELECT TOP 2 distinct date FROM stock ``` Using this gives me an error: > Incorrect syntax near the keyword 'distinct'. Help would be highly appreciated.
You can try like this. Note that in SQL Server the derived table needs an alias, and an `ORDER BY` makes the choice of two rows deterministic: ``` select top 2 * from ( select distinct date FROM stock ) t order by t.date desc ```
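`TOP` is SQL Server syntax; SQLite, MySQL and PostgreSQL spell the same idea `LIMIT`. A runnable sketch with the question's data using Python's built-in `sqlite3` (the descending `ORDER BY` is an assumption, added so that "top 2" deterministically means the two newest dates):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE stock (date TEXT, "sum" INTEGER);
INSERT INTO stock VALUES
  ('2015-02-19', 10000), ('2015-02-19', 10000),
  ('2015-02-20', 15000), ('2015-02-20', 15000),
  ('2015-02-21', 18000), ('2015-02-21', 18000);
""")

# DISTINCT collapses the duplicate dates, ORDER BY picks which two,
# LIMIT 2 plays the role of SQL Server's TOP 2.
rows = conn.execute("""
    SELECT DISTINCT date
    FROM stock
    ORDER BY date DESC
    LIMIT 2
""").fetchall()
print(rows)
```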
Try something like: ``` SELECT TOP 2 date FROM stock GROUP BY date ```
Select top 2 rows different from each other
[ "", "sql", "sql-server", "" ]
I'm using Oracle SQL and I need help with a query. I need to count how many rows have at least one value of `NA`. My table has more than 300 columns (these are not the real column names): ``` col1 varchar, col2 varchar, col3 varchar, ... ... ... col300 varchar ``` Here is an example of a table: ``` col1 | col2 | col3 | col4 10 | 15 | 55 | NA NA | 15 | 55 | NA 10 | 15 | 55 | 33 10 | NA | 55 | 58 10 | 15 | 55 | 33 10 | 15 | 55 | 84 10 | 15 | NA | 40 Output expected result: 4 ``` Please advise.
Well, I've found the solution. ``` select sum(cnt) from ( select case when col1 = 'NA' or col2 = 'NA' or col3 = 'NA' or col4 = 'NA' then 1 end cnt from MyTable ) ```
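A sketch of the same counting idea using Python's built-in `sqlite3` (the data comes from the question; `'NA' IN (col1, col2, ...)` is an alternative spelling of the long `OR`/`CASE` chain that also works in Oracle, handy when there are many columns):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE MyTable (col1 TEXT, col2 TEXT, col3 TEXT, col4 TEXT);
INSERT INTO MyTable VALUES
  ('10', '15', '55', 'NA'),
  ('NA', '15', '55', 'NA'),
  ('10', '15', '55', '33'),
  ('10', 'NA', '55', '58'),
  ('10', '15', '55', '33'),
  ('10', '15', '55', '84'),
  ('10', '15', 'NA', '40');
""")

# COUNT(expr) ignores NULLs, so counting the CASE directly gives the
# same answer as wrapping it in SUM; each row is counted at most once.
(n,) = conn.execute("""
    SELECT COUNT(CASE WHEN 'NA' IN (col1, col2, col3, col4) THEN 1 END)
    FROM MyTable
""").fetchone()
print(n)
```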
You can do a Sum of a sub select using a case to get this: ``` Select SUM(NA) From (Select case when col1 = 'NA' then 1 when col2 = 'NA' then 1 when col3 = 'NA' then 1 when col4 = 'NA' then 1 end as NA from MyTable) a ```
Count how many rows have at least one certain value
[ "", "sql", "oracle", "count", "" ]
Using SQL Server 2012 (LocalDB), I have three tables: ``` BESEXT.COMPUTER BESEXT.ANALYSIS_PROPERTY BESEXT.ANALYSIS_PROPERTY_RESULT ``` These contains following info: * BESEXT.COMPUTER: Mapping between ComputerIDs and ComputerNames * BESEXT.ANALYSIS\_PROPERTY: List of properties that can be mapped to a computer * BESEXT.ANALYSIS\_PROPERTY\_RESULT: List of values of properties for a computer First, I perform the following query: ``` SELECT AR.ComputerID, AP.Name, AR.Value FROM BESEXT.ANALYSIS_PROPERTY_RESULT AR JOIN BESEXT.ANALYSIS_PROPERTY AP ON AP.ID = AR.PropertyID AND AP.ID IN (1672, 1673, 1674) ORDER BY AR.ComputerID, AP.Name ``` Which yields the following result: ``` ComputerID Name Value ---------- ---- ----- 595640 DisplayName Windows 8.1 x64 - Mobile Device Image - v3.2 595640 SequenceName Windows 8.1 x64 - Mobile Device Image 595640 SequenceVersion 3.2 631459 DisplayName Windows 8.1 x64 - Mobile Device Image - v3.2 631459 SequenceName Windows 8.1 x64 - Mobile Device Image 631459 SequenceVersion 3.2 ``` In BESEXT.COMPUTER I have the following values: ``` ID ComputerID ComputerName -- ---------- ------------ 1 595640 PO121203866 2 631459 PO121201739 3 1101805 PO121201100 ``` I want to perform a left outer join of all my computer objects on the first select, so that I know which computers I do not have a value for. 
So, first I do a simple inner join on the previous selection: ``` SELECT C.ComputerName, R.ComputerID, R.Name, R.Value FROM ( SELECT AR.ComputerID, AP.Name, AR.Value FROM BESEXT.ANALYSIS_PROPERTY_RESULT AR JOIN BESEXT.ANALYSIS_PROPERTY AP ON AP.ID = AR.PropertyID AND AP.ID IN (1672, 1673, 1674) ) R JOIN BESEXT.COMPUTER C ON C.ComputerID = R.ComputerID ORDER BY R.ComputerID, R.Name ``` Which, predictably, yields the following resultset: ``` ComputerName ComputerID Name Value ------------ ---------- ---- ----- PO121203866 595640 DisplayName Windows 8.1 x64 - Mobile Device Image - v3.2 PO121203866 595640 SequenceName Windows 8.1 x64 - Mobile Device Image PO121203866 595640 SequenceVersion 3.2 PO121201739 631459 DisplayName Windows 8.1 x64 - Mobile Device Image - v3.2 PO121201739 631459 SequenceName Windows 8.1 x64 - Mobile Device Image PO121201739 631459 SequenceVersion 3.2 ``` Now, for the grand finale, let's do the **LEFT OUTER JOIN**: ``` SELECT C.ComputerName, R.ComputerID, R.Name, R.Value FROM ( SELECT AR.ComputerID, AP.Name, AR.Value FROM BESEXT.ANALYSIS_PROPERTY_RESULT AR JOIN BESEXT.ANALYSIS_PROPERTY AP ON AP.ID = AR.PropertyID AND AP.ID IN (1672, 1673, 1674) ) R -- LEFT OUTER JOIN ADDED HERE! LEFT OUTER JOIN BESEXT.COMPUTER C ON C.ComputerID = R.ComputerID ORDER BY R.ComputerID, R.Name ``` This yields **the exact same resultset as with the inner join**! This is not what I'm looking for, and it is not at all what I was expecting. Now it's pretty late here and I'm basically doing some work out of insomnia, but I think this is about as simple and example of an Outer Join as it can get, right? 
The result I'm looking to achieve with the outer join is this: ``` ComputerName ComputerID Name Value PO121203866 595640 DisplayName Windows 8.1 x64 - Mobile Device Image - v3.2 PO121203866 595640 SequenceName Windows 8.1 x64 - Mobile Device Image PO121203866 595640 SequenceVersion 3.2 PO121201739 631459 DisplayName Windows 8.1 x64 - Mobile Device Image - v3.2 PO121201739 631459 SequenceName Windows 8.1 x64 - Mobile Device Image PO121201739 631459 SequenceVersion 3.2 PO121201100 NULL NULL NULL PO121201100 NULL NULL NULL PO121201100 NULL NULL NULL ``` P.S.: To be completely honest, the result I'm looking for is more like this, but I feel that would be a different question altogether: ``` ComputerName Name Value ------------ ---- ----- PO121203866 DisplayName Windows 8.1 x64 - Mobile Device Image - v3.2 PO121203866 SequenceName Windows 8.1 x64 - Mobile Device Image PO121203866 SequenceVersion 3.2 PO121201739 DisplayName Windows 8.1 x64 - Mobile Device Image - v3.2 PO121201739 SequenceName Windows 8.1 x64 - Mobile Device Image PO121201739 SequenceVersion 3.2 PO121201100 DisplayName NULL PO121201100 SequenceName NULL PO121201100 SequenceVersion NULL ```
The query you're looking can be written simply as this: ``` SELECT ComputerName, A.ComputerID, Name, Value FROM BESEXT.COMPUTER A CROSS JOIN (SELECT * FROM BESEXT.ANALYSIS_PROPERTY WHERE ID BETWEEN 1672 AND 1674) B LEFT JOIN BESEXT.ANALYSIS_PROPERTY_RESULT C ON A.ComputerId = C.ComputerId AND B.ID = C.PropertyId ORDER BY ComputerId, Name ``` Start by getting all of the computer-property combinations you care about: ``` SELECT * FROM BESEXT.COMPUTER A CROSS JOIN (SELECT * FROM BESEXT.ANALYSIS_PROPERTY WHERE ID BETWEEN 1672 AND 1674) B ``` This yields the results: ``` ID ComputerId ComputerName ID Name -- ---------- ------------ -- ---- 1 595640 PO121203866 1672 DisplayName 2 631459 PO121201739 1672 DisplayName 3 1101805 PO121201100 1672 DisplayName 1 595640 PO121203866 1673 SequenceName 2 631459 PO121201739 1673 SequenceName 3 1101805 PO121201100 1673 SequenceName 1 595640 PO121203866 1674 SequenceVersion 2 631459 PO121201739 1674 SequenceVersion 3 1101805 PO121201100 1674 SequenceVersion ``` From there, you simply perform a left join on `BESEXT.ANALYSIS_PROPERTY_RESULT` to get your values, and you include the `ORDER BY` clause to sort it.
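The cross-join-then-left-join shape can be exercised end to end with Python's built-in `sqlite3`. This is a sketch: table and column names are simplified from the question's BESEXT schema, and the property values are made up.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE computer (id INT, ComputerID INT, ComputerName TEXT);
CREATE TABLE analysis_property (ID INT, Name TEXT);
CREATE TABLE analysis_property_result (ComputerID INT, PropertyID INT, Value TEXT);
INSERT INTO computer VALUES
  (1, 595640, 'PO121203866'), (2, 631459, 'PO121201739'), (3, 1101805, 'PO121201100');
INSERT INTO analysis_property VALUES
  (1672, 'DisplayName'), (1673, 'SequenceName'), (1674, 'SequenceVersion');
INSERT INTO analysis_property_result VALUES
  (595640, 1672, 'v3.2'), (595640, 1673, 'Image'), (595640, 1674, '3.2'),
  (631459, 1672, 'v3.2'), (631459, 1673, 'Image'), (631459, 1674, '3.2');
""")

# CROSS JOIN builds every computer/property pair first; the LEFT JOIN
# then leaves Value as NULL for pairs with no result row.
rows = conn.execute("""
    SELECT c.ComputerName, p.Name, r.Value
    FROM computer c
    CROSS JOIN analysis_property p
    LEFT JOIN analysis_property_result r
      ON r.ComputerID = c.ComputerID AND r.PropertyID = p.ID
    ORDER BY c.ComputerID, p.Name
""").fetchall()
missing = [(name, prop) for name, prop, value in rows if value is None]
print(missing)
```

Computer PO121201100 has no result rows, so its three properties surface with NULL values, which is the output the asker wanted.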
You can do this by using a cross join to set up the properties for all computers and then a left join to connect to the actual property values for those computers that have values set: ``` SELECT * FROM ( SELECT C.ComputerName, C.ComputerID, AP.Name, AP.ID FROM BESEXT.COMPUTER C CROSS JOIN BESEXT.ANALYSIS_PROPERTY AP WHERE AP.ID IN (1672, 1673, 1674) ) AP LEFT JOIN BESEXT.ANALYSIS_PROPERTY_RESULT AR ON AP.ComputerID = AR.ComputerID AND AP.ID = AR.PropertyID ORDER BY AP.ComputerName DESC, AP.Name ```
Left outer join in SQL Server 2012 LocalDB not working?
[ "", "sql", "sql-server", "outer-join", "sql-server-2012-localdb", "" ]
I have written a finder as follows: ``` @cars = @cars.joins(:manufacturers).where("manufacturers.name ILIKE ?", params[:manufacturer].gsub!(/-/, ' ')) ``` `params[:manufacturer]` comes through in a form of a string that has been `.parameterize`d by Rails. The problem is that a string with an "'" or an "&" in it doesn't get matched by ILIKE correctly. So as an example, some strings that are stored in my DB and their parameterized versions: 1. "This is a test" parameterized: "this-is-a-test" gsubbed: "this is a test" 2. "He didn't do it" parameterized: "he didn-t-do-it" gsubbed: "he didn t do it" 3. "This & That" parameterized: "this-that" gsubbed: "this that" So when I do ILIKE between the first part of 2 and the third part of 2, it does not create a match. Same with 3. 1 obviously works fine though. Any ideas how to get a correct match even with special characters in the strings?
Since this is a lot similar to a slug system, you should just add a new field and call it whatever you find suitable, just don't forget to add an `index` so you don't waste time searching in strings. Also you could add a `before_create` or `before_save` callback to auto create it when you save the object, in the format you are planning to search for.
This is the way to do it: ``` @cars = @cars.joins(:manufacturers).where("manufacturers.name LIKE ?", "%#{params[:manufacturer].parameterize}%") ``` By the way, you can do this, it looks cleaner: ``` search = params[:manufacturer].parameterize @cars = @cars.joins(:manufacturers).where("manufacturers.name LIKE ?", "%#{search}%") ```
Rails Query with ILIKE
[ "", "sql", "ruby-on-rails", "parameters", "find", "" ]
So I have a 'recently\_viewed' table with columns ``` product|user|time ``` However, if a user views a product they have already viewed, it will create a new row. What is the most efficient way of covering this possibility (while keeping the possibility that it is the first time they viewed it)? Overwriting the old TIME they viewed it, or deleting the old table row (and simultaneously creating the new row)? I can think of some inefficient ways to do it (querying first to see if it's in the table, but this requires multiple statements), but I suspect there is something far more efficient. INSERT INTO recently\_viewed ... Sincere thanks for any help. It is greatly appreciated from an amateur.
You can define (product, user) as UNIQUE; for example, you can set it as the PRIMARY KEY of your table: ``` CREATE TABLE tablename ( product INT, user INT, time DATETIME, PRIMARY KEY (product, user) ); ``` (or you can also create a UNIQUE index) and then use an [INSERT INTO ... ON DUPLICATE](http://dev.mysql.com/doc/refman/5.1/en/insert-on-duplicate.html) query: ``` INSERT INTO tablename (product, user, `time`) VALUES (1, 1, '2015-01-01 10:00:00') ON DUPLICATE KEY UPDATE `time`=VALUES(`time`); ``` Please see a working example [here](http://sqlfiddle.com/#!2/e7ed75/1).
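`ON DUPLICATE KEY UPDATE` is MySQL-specific; SQLite (3.24 or newer, which current Python builds bundle) spells the same upsert `ON CONFLICT ... DO UPDATE`. A sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE recently_viewed (
        product INTEGER,
        user    INTEGER,
        time    TEXT,
        PRIMARY KEY (product, user)
    )
""")

# One statement covers both cases: insert on first view, refresh the
# timestamp on a repeat view of the same (product, user) pair.
upsert = """
    INSERT INTO recently_viewed (product, user, time) VALUES (?, ?, ?)
    ON CONFLICT (product, user) DO UPDATE SET time = excluded.time
"""
conn.execute(upsert, (1, 1, '2015-01-01 10:00:00'))  # first view: insert
conn.execute(upsert, (1, 1, '2015-06-30 12:00:00'))  # repeat view: update
rows = conn.execute("SELECT product, user, time FROM recently_viewed").fetchall()
print(rows)
```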
You can check whether a record for the user already exists in `recently_viewed`: ``` IF EXISTS (SELECT * FROM recently_viewed WHERE user = "user_id") BEGIN #UPDATE query END ELSE BEGIN #INSERT query END ```
How do I INSERT a new row or UPDATE an existing row effeciently?
[ "", "mysql", "sql", "" ]
What would be the SQL to remove all numbers found in an otherwise string column using Sqlite (an Oracle example would be appreciated too)? Example : I would like to remove all numbers from entries like this : `291 HELP,1456 CALL` Expected output: `HELP,CALL` edit: I have edited the question because it is not only from one entry that I want to remove numbers but many of them.
Either you do it in the language, you embedded sqlite, or you use this SQLite code, that removes all numbers: ``` UPDATE table SET column = replace(column, '0', '' ); UPDATE table SET column = replace(column, '1', '' ); UPDATE table SET column = replace(column, '2', '' ); UPDATE table SET column = replace(column, '3', '' ); UPDATE table SET column = replace(column, '4', '' ); UPDATE table SET column = replace(column, '5', '' ); UPDATE table SET column = replace(column, '6', '' ); UPDATE table SET column = replace(column, '7', '' ); UPDATE table SET column = replace(column, '8', '' ); UPDATE table SET column = replace(column, '9', '' ); ```
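The repeated-`replace()` approach is easy to drive from a host language instead of writing ten statements by hand; a sketch with Python's built-in `sqlite3` (the trailing `trim()` is an addition that drops the space left behind by the removed digits):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (col TEXT);
INSERT INTO t VALUES ('291 HELP'), ('1456 CALL');
""")

# One UPDATE per digit does the same work as the ten hand-written
# statements in the answer above.
for digit in '0123456789':
    conn.execute("UPDATE t SET col = replace(col, ?, '')", (digit,))
conn.execute("UPDATE t SET col = trim(col)")
rows = [r[0] for r in conn.execute("SELECT col FROM t ORDER BY col")]
print(rows)
```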
Using **TRANSLATE** and **REPLACE** ``` SQL> WITH DATA AS( 2 SELECT '291 HELP' str FROM dual UNION ALL 3 SELECT '1456 CALL' str FROM dual 4 ) 5 SELECT REPLACE(translate(str, '0123456789', ' '), ' ', NULL) str 6 FROM DATA 7 / STR --------- HELP CALL SQL> ``` Using **REGEXP\_REPLACE** ``` SQL> WITH DATA AS( 2 SELECT '291 HELP' str FROM dual UNION ALL 3 SELECT '1456 CALL' str FROM dual 4 ) 5 SELECT trim(regexp_replace(str, '[0-9]+')) str 6 FROM DATA 7 / STR --------- HELP CALL SQL> ``` **POSIX character class** ``` SQL> WITH DATA AS( 2 SELECT '291 HELP' str FROM dual UNION ALL 3 SELECT '1456 CALL' str FROM dual 4 ) 5 SELECT trim(regexp_replace(str, '^[[:digit:]]+')) str 6 FROM DATA 7 / STR --------- HELP CALL SQL> ``` **Perl-extensions** ``` SQL> WITH DATA AS( 2 SELECT '291 HELP' str FROM dual UNION ALL 3 SELECT '1456 CALL' str FROM dual 4 ) 5 SELECT trim(regexp_replace(str, '\d+')) str 6 FROM DATA 7 / STR --------- HELP CALL SQL> ```
Remove numbers found in string column
[ "", "sql", "oracle", "sqlite", "replace", "regexp-replace", "" ]
This may be a novice question, but I cannot for the life of me figure this out. Before I explain my stored procedure I will describe my tables and show examples. There is the `TicketBook` table that houses all TicketBooks that have been issued to a unit. TicketBooks are 25 Tickets long. eg: ``` TicketBooks ----------- TicketBookNum | TicketNum | UnitID | FirstTicket | LastTicket | Used --------------------------------------------------------------------------- 101 101 120 101 126 Yes 101 102 120 101 126 Yes 101 103 120 101 126 No 101 104 120 101 126 Yes etc... ``` This table has all tickets that are used and not used in the database. **Different TicketBooks can be issued to different units at the same time. So TicketBook 101 can be issued to Unit 120, while TicketBook 151 can be issued to 140.** What my stored procedure needs to do is return all Tickets that are marked as not used, but also have a ticket that is greater than it that is marked Used. Example: ``` TicketBookNum | TicketNum | Used ---------------------------------- 101 101 Yes 101 102 Yes 101 103 No 101 104 Yes 101 105 Yes 101 106 No 101 107 No etc.. ``` So the stored procedure returns Ticket `103`, but not `106` and `107` because there are no tickets inside that ticketbook after them that are marked as Used. This is currently the stored procedure that I am using: ``` select TicketNum from TicketBooks where Used='No' and TicketNum between 92226 and 92251 and TicketNum < (select top 1 TicketNum from TicketBooks where Used='Yes' and TicketNum between 92226 and 92251 order by TicketNum desc) order by TicketNum desc ``` This will return the missing tickets from the TicketBook 82226. If I try to change the parameters on the `between` clause then it does not return the correct results. Any help will be appreciated.
I believe I understand what you are trying to do. I think the thing you are missing is that your subquery does not map to the ticket book from your outer query. So you want something like this: ``` select TicketNum from TicketBooks a where Used='No' and TicketNum < (select MAX(b.TicketNum) from TicketBooks b where b.Used='Yes' and b.TicketBookNum = a.TicketBookNum) order by TicketNum desc ```
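The correlated-subquery answer can be checked against the question's sample book with Python's built-in `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TicketBooks (TicketBookNum INT, TicketNum INT, Used TEXT);
INSERT INTO TicketBooks VALUES
  (101, 101, 'Yes'), (101, 102, 'Yes'), (101, 103, 'No'),
  (101, 104, 'Yes'), (101, 105, 'Yes'), (101, 106, 'No'),
  (101, 107, 'No');
""")

# The subquery finds the highest used ticket per book, so only unused
# tickets below that high-water mark come back.
rows = conn.execute("""
    SELECT TicketNum
    FROM TicketBooks a
    WHERE Used = 'No'
      AND TicketNum < (SELECT MAX(b.TicketNum)
                       FROM TicketBooks b
                       WHERE b.Used = 'Yes'
                         AND b.TicketBookNum = a.TicketBookNum)
""").fetchall()
print(rows)
```

Only ticket 103 is returned; 106 and 107 sit above the book's highest used ticket (105), so they are excluded, matching the expected result in the question.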
You could use a common table expression to wrap the range limit in and do a exists query like this: ``` ;with cte as ( select * from TicketBooks where TicketNum between 101 and 126 ) select * from cte where Used = 'no' and exists ( select 1 from cte t where t.TicketNum > cte.ticketnum and t.used='yes' ) ``` This is still restricted to a hard coded range; a better option might be to build the range from the FirstTicket/LastTicket values in the source table.
Make SQL function run for every value in table
[ "", "sql", "sql-server", "missing-data", "" ]
A `book` `has_many` `pages`. Pages have attribute `page_no`. I want to addPageAt (z) between any two pages (x, y) of the book and to do that I have to first update all pages of the book from y to book.pages.count and then create->save page(z). Here's the SQL for single run update: ``` sql = "update pages set page_no = page_no+1 where book_id =" + (@book.id).to_s + " and page_no >" + (@addPageAt - 1).to_s records_arraty = ActiveRecord::Base.connection.execute(sql) ``` It works! But to do that I had to remove the following validations from `pages` table: ``` validates :book_id, presence: true, on: :save validates :page_no, presence: true, uniqueness: true, on: :save validates_uniqueness_of :page_no, :scope => :book_id ``` I want to keep these validations, particularly the third one, in place. How to do that?
You would need to either: 1. Use a [SERIALIZABLE](http://www.postgresql.org/docs/9.4/static/transaction-iso.html) Transaction when running Gordon's SQLs above (and have the Caller expect failures (in case someone else attempted a conflicting transaction) and thereby loop to retry the transaction. OR 2. Run both UPDATEs in a single SQL statement like this: ``` "WITH a AS ( UPDATE pages SET page_no = - (page_no + 1) WHERE book_id = " + (@book.id).to_s + " AND page_no > " + (@addPageAt - 1).to_s + " RETURNING * ), b AS ( UPDATE pages SET page_no = - page_no WHERE book_id = " + (@book.id).to_s + " AND page_no < 0 RETURNING * ) SELECT 1 FROM a, b LIMIT 1;" ```
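The negate-then-flip trick used in these answers also works in SQLite, which likewise enforces a UNIQUE constraint row by row during an UPDATE. A sketch with Python's built-in `sqlite3` (the book id and page numbers are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE pages (book_id INT, page_no INT, UNIQUE (book_id, page_no));
INSERT INTO pages VALUES (7, 1), (7, 2), (7, 3), (7, 4);
""")

# Shift pages 3..n up by one without ever colliding with the UNIQUE
# constraint: park the shifted values as negatives, then flip the sign.
add_page_at = 3
with conn:  # one transaction, so the negative page numbers are never visible
    conn.execute("UPDATE pages SET page_no = -(page_no + 1) "
                 "WHERE book_id = 7 AND page_no >= ?", (add_page_at,))
    conn.execute("UPDATE pages SET page_no = -page_no "
                 "WHERE book_id = 7 AND page_no < 0")
    conn.execute("INSERT INTO pages VALUES (7, ?)", (add_page_at,))
pages = [r[0] for r in conn.execute(
    "SELECT page_no FROM pages WHERE book_id = 7 ORDER BY page_no")]
print(pages)
```

A naive single `UPDATE ... SET page_no = page_no + 1` would fail the moment page 3 is renumbered to 4 while the old page 4 still exists; the sign flip sidesteps that.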
One method is two separate updates: ``` update pages set page_no = - (page_no + 1) where book_id = " + (@book.id).to_s + " and page_no > " + (@addPageAt - 1).to_s + "; update pages set page_no = - page_no where book_id = " + (@book.id).to_s + " and page_no < 0; ``` You can do this inside a single transaction, so the negative page numbers are never visible.
How to run multiple UPDATE in single SQL statement without removing uniqueness validation?
[ "", "sql", "ruby-on-rails", "postgresql", "" ]
With Oracle SQL query, can we do the following? ``` Input Output 'aaaabcd' ---> 'a' '0001001' ---> '0' ``` That is, find the character which is occurring the greatest number of times in the string?
Yes, this is possible through the use of `CONNECT BY`. A bit complicated, though: ``` SELECT xchar, xcount FROM ( SELECT xchar, COUNT(*) AS xcount, RANK() OVER ( ORDER BY COUNT(*) DESC) AS rn FROM ( SELECT SUBSTR('aaaabcd', LEVEL, 1) AS xchar FROM dual CONNECT BY LEVEL <= LENGTH('aaaabcd') ) GROUP BY xchar ) WHERE rn = 1; ``` What we do in the innermost query is break the string into its individual characters. Then we just get the `COUNT()` grouped by the character, and use `RANK()` to find the max (note that this will return more than one result if there is a tie for the most frequently occurring character). The above query returns both the character appearing most often and the number of times it appears. If you have a table of multiple strings, then you'll want to do something like the following: ``` WITH strlen AS ( SELECT LEVEL AS strind FROM dual CONNECT BY LEVEL <= 30 ) SELECT id, xchar, xcount FROM ( SELECT id, xchar, COUNT(*) AS xcount, RANK() OVER ( PARTITION BY id ORDER BY COUNT(*) DESC) AS rn FROM ( SELECT s.id, SUBSTR(s.str, sl.strind, 1) AS xchar FROM strings s, strlen sl WHERE LENGTH(s.str) >= sl.strind ) GROUP BY id, xchar ) WHERE rn = 1; ``` where `30` is a magic number that is equal to the length of your longest string, or greater. [**See SQL Fiddle here.**](http://sqlfiddle.com/#!4/00a95/7) Alternately, you could do the following to avoid the magic number: ``` WITH strlen AS ( SELECT LEVEL AS strind FROM dual CONNECT BY LEVEL <= ( SELECT MAX(LENGTH(str)) FROM strings ) ) SELECT id, xchar, xcount FROM ( SELECT id, xchar, COUNT(*) AS xcount, RANK() OVER ( PARTITION BY id ORDER BY COUNT(*) DESC) AS rn FROM ( SELECT s.id, SUBSTR(s.str, sl.strind, 1) AS xchar FROM strings s, strlen sl WHERE LENGTH(s.str) >= sl.strind ) GROUP BY id, xchar ) WHERE rn = 1; ``` [**Updated SQL Fiddle.**](http://sqlfiddle.com/#!4/00a95/8)
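`CONNECT BY` is Oracle-only. As a sketch of the same character split in SQLite, a recursive CTE peels off one character per step (ties for the top count are resolved arbitrarily by `LIMIT 1`); runnable via Python's built-in `sqlite3`:

```python
import sqlite3

def most_frequent_char(s):
    # Peel one character per recursion step, then GROUP BY and keep
    # the highest count.
    conn = sqlite3.connect(":memory:")
    return conn.execute("""
        WITH RECURSIVE chars(ch, rest) AS (
            SELECT substr(:s, 1, 1), substr(:s, 2)
            UNION ALL
            SELECT substr(rest, 1, 1), substr(rest, 2)
            FROM chars WHERE rest <> ''
        )
        SELECT ch, COUNT(*) AS cnt
        FROM chars
        GROUP BY ch
        ORDER BY cnt DESC
        LIMIT 1
    """, {"s": s}).fetchone()

print(most_frequent_char('aaaabcd'))
print(most_frequent_char('0001001'))
```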
Here's one way - assuming you want to show all rows that have the highest number of characters per string: ``` with sample_data as (select 'aaaabcd' str from dual union all select '0001001' str from dual union all select '11002' str from dual), pivoted as (select str, substr(str, level, 1) letter from sample_data connect by level <= length(str) and prior str = str and prior dbms_random.value is not null), grp as (select str, letter, count(*) cnt from pivoted group by str, letter), ranked as (select str, letter, dense_rank() over (partition by str order by cnt desc) dr from grp) select str, letter from ranked where dr = 1; STR LETTER ------- ------ 0001001 0 11002 1 11002 0 aaaabcd a ``` If you wanted to only show one of the letters in the event of a tie, change the `dense_rank()` in the query above for a `row_number`. If you wanted to show all tied letters in a single row (e.g. comma separated) then use listagg in the final query to group the rows into one.
Find which character occurs the greatest number of times in a string
[ "", "sql", "oracle", "" ]
I want to change my column value in `categoriID` from numbers to text. Is this possible? ``` SELECT name, CAST(categoriID AS char(10)) FROM customer WHERE categoriID = 1 AS 'new_text' ``` Here is a link of a pic how i want it: <https://i.stack.imgur.com/NVdXR.png>
1) Simplest solution would be a simple join thus: ``` SELECT c.name, c.categoryID, category.name AS category_name FROM customer c INNER JOIN -- or LEFT JOIN if categoryID allows NULLs ( SELECT 1, 'First category' UNION ALL SELECT 2, 'Second category' UNION ALL SELECT 3, 'Third category' ) category(categoryID, name) ON c.categoryID = category.categoryID ``` I would use this solution if the list of categories is small, static and if it is needed only for this query. 2) Otherwise, I would create a new table thus ``` CREATE TABLE category -- or dbo.category (note: you should use object's/table's schema) ( categoryID INT NOT NULL, CONSTRAINT PK_category_categoryID PRIMARY KEY(categoryID), name NVARCHAR(50) NOT NULL -- you should use the proper type (varchar maybe) and max length (100 maybe) --, CONSTRAINT IUN_category_name UNIQUE(name) -- uncomment this line if you want to have unique categories (no duplicate values in column [name]) ); GO ``` plus I would create a foreign key in order to be sure that categories from [customer] table exist also in [category] table: ``` ALTER TABLE customer ADD CONSTRAINT FK_customer_categoryID FOREIGN KEY (categoryID) REFERENCES category(categoryID) GO INSERT category (categoryID, name) SELECT 1, 'First category' UNION ALL SELECT 2, 'Second category' UNION ALL SELECT 3, 'Third category' GO ``` and your query will be ``` SELECT c.name, c.categoryID, ctg.name AS category_name FROM customer c INNER JOIN category ctg ON c.categoryID = ctg.categoryID -- or LEFT JOIN if c.categoryID allows NULLs ``` I would use solution #2.
From [this possible duplicate SO](https://stackoverflow.com/questions/16296622/rename-column-sql-server-2008) on SQL Server 2008: ``` EXEC sp_RENAME table_name , old_name, new_name ``` Or you could do [this](http://www.techonthenet.com/sql/tables/alter_table.php): ``` ALTER TABLE table_name RENAME COLUMN old_name to new_name; ``` **EDIT:** Just got the question. To change the datatype of the column to a text datatype, you could execute this: ``` ALTER TABLE table_name MODIFY column_name CHARACTER(n); ``` where `n` is the number of characters in the string, or: ``` ALTER TABLE table_name MODIFY column_name VARCHAR(n) ``` where `n` is the maximum number of characters in the string. Note that these will edit the original table. If you just want to select the column as a particular datatype, you'll need to copy the table using: ``` SELECT column_name(s) INTO newtable [IN externaldb] FROM table1; ``` Then, you can modify the column datatype as shown above, and `DROP` the new table if you want to. Another way to do this without a separate table would be to use [`CAST` or `CONVERT`](https://msdn.microsoft.com/en-us/library/ms187928.aspx)
Change column datatype in SELECT in SQL Server
[ "", "sql", "sql-server", "" ]
I am having a problem with SQL. I want to join two tables, employee and class instructor. The condition is that employee has a unid column with values like 'u0871457', whereas class instructor has EmplId as '00871457'. I just want to replace the first character of EmplId with 'u' so the join matches the string coming from unid. How can I do that? I have tried this so far: ``` select e.name, i.name from Employee e inner join Instructor i on SUBSTR(e.id, 1, LENGTH(e.id)) = SUBSTR(i.id, 1, LENGTH(i.id)) ``` but this results in an empty result set. Any help will be appreciated. Thanks for your time!
So many ways to do this. It would be a good idea to look at the explain plan for various ways before committing to a particular method. For example, if there is a function-based index on `EMPLOYEE` such as `SUBSTR(id, 2, LENGTH(id) - 1)` then you'll want to use that in your query: ``` SELECT e.name, i.name FROM employee e INNER JOIN instructor i ON SUBSTR(e.id, 2, LENGTH(e.id) - 1) = SUBSTR(i.id, 2, LENGTH(i.id) - 1); ``` Another question is if the values in the `id` column are always the same length in `EMPLOYEE` *and* `INSTRUCTOR`. What if they are of differing lengths? Maybe one has more padding than another. Also, will they always be *digits* apart from a leading `u`? If so, then it might be worthwhile to try a safe `TO_NUMBER()` conversion: ``` SELECT e.name, i.name FROM employee e INNER JOIN instructor i ON TO_NUMBER(REGEXP_SUBSTR(e.id, '\d+$')) = TO_NUMBER(REGEXP_SUBSTR(i.id, '\d+$')); ``` One other thing you may want to consider, however -- is there a reason for the leading `u` in the `EMPLOYEE` `id` column? Can there be other leading characters? Does the leading `u` stand for something (violating first normal form, but that happens)?
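A quick check of the drop-the-first-character join with Python's built-in `sqlite3` (the names and ids are made up; SQLite's two-argument `substr(x, 2)` reads from position 2 to the end, matching the `substr(id, 2, length(id) - 1)` form above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employee   (id TEXT, name TEXT);
CREATE TABLE instructor (id TEXT, name TEXT);
INSERT INTO employee   VALUES ('u0871457', 'Alice'), ('u0999999', 'Bob');
INSERT INTO instructor VALUES ('00871457', 'CS-101');
""")

# Drop the first character on both sides before comparing, so
# 'u0871457' and '00871457' both reduce to '0871457'.
rows = conn.execute("""
    SELECT e.name, i.name
    FROM employee e
    JOIN instructor i
      ON substr(e.id, 2) = substr(i.id, 2)
""").fetchall()
print(rows)
```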
Oracle uses 1 as the base of its indexes, so `substr('aaa',1,3)` is equivalent to `'aaa'`. You need to use 2 as the second parameter of `substr` in order to accomplish what you're attempting. --- Beyond that, you'd probably be better off only changing one side, if you can. If the prefix characters are consistent, you could do this: ``` SELECT e.name, i.name FROM employee e INNER JOIN instructor i ON REPLACE (e.id, 'u', '0') = i.id ``` This would potentially allow the database to use an index on `instructor`, which would not be possible with your solution.
How to join two tables based on substring values of fields?
[ "", "sql", "oracle", "join", "inner-join", "" ]
I have the following query. ``` SELECT DISTINCT propertylist.propertyid ,propertylist.price ,propertylist.publicremarks ,address.addressline1 ,address.streetaddress ,address.city ,address.postalcode ,alternateurl.maplink ,building.bathroomtotal ,building.bedroomtotal ,building.constructeddate ,building.sizeinterior ,building.type ,building.basementfeatures ,building.basementtype ,building.constructionstyleattachment ,propertylist.ammenitiesnearby ,propertylist.features ,propertylist.transactiontype ,propertylist.lastupdated ,propertylist.communityfeatures ,land.acreage FROM propertylist ,address ,building ,alternateurl ,land WHERE propertylist.propertyid = address.propertyid AND address.propertyid = building.propertyid AND building.propertyid = alternateurl.propertyid AND alternateurl.propertyid = land.propertyid ``` I want to know the total number of records that will be derived from this query so that I can implement paging in my website. If I try to execute this without '**limit**' it takes so much time and the execution time runs out. The result of Explain sql is ``` Generation Time: Feb 21, 2015 at 01:06 PM Generated by: phpMyAdmin 4.2.7.1 / MySQL 5.5.39 SQL query: EXPLAIN SELECT DISTINCT COUNT(*) FROM propertylist , address , building , alternateurl ,land WHERE propertylist.propertyid = address.propertyid AND address.propertyid = building.propertyid AND building.propertyid = alternateurl.propertyid AND alternateurl.propertyid = land.propertyid; Rows: 5 
id  select_type  table         type  possible_keys  key   key_len  ref   rows   Extra
1   SIMPLE       alternateurl  ALL   NULL           NULL  NULL     NULL  12947
1   SIMPLE       address       ALL   NULL           NULL  NULL     NULL  13338  Using where; Using join buffer
1   SIMPLE       building      ALL   NULL           NULL  NULL     NULL  13389  Using where; Using join buffer
1   SIMPLE       propertylist  ALL   NULL           NULL  NULL     NULL  13614  Using where; Using join buffer
1   SIMPLE       land          ALL   NULL           NULL  NULL     NULL  13851  Using where; Using join buffer
```
You can modify your query using [`COUNT`](http://www.tutorialspoint.com/mysql/mysql-count-function.htm) (note that `DISTINCT` before `COUNT(*)` would be redundant, since the query returns a single aggregate row): ``` SELECT COUNT(*) FROM propertylist , address , building , alternateurl ,land WHERE propertylist.propertyid = address.propertyid AND address.propertyid = building.propertyid AND building.propertyid = alternateurl.propertyid AND alternateurl.propertyid = land.propertyid ```
To get the count of your query you shall use `count`. So your Query will be ``` SELECT count (DISTINCT propertylist.propertyid ,propertylist.price ,propertylist.publicremarks ,address.addressline1 ,address.streetaddress ,address.city ,address.postalcode ,alternateurl.maplink ,building.bathroomtotal ,building.bedroomtotal ,building.constructeddate ,building.sizeinterior ,building.type ,building.basementfeatures ,building.basementtype ,building.constructionstyleattachment ,propertylist.ammenitiesnearby ,propertylist.features ,propertylist.transactiontype ,propertylist.lastupdated ,propertylist.communityfeatures ,land.acreage ) FROM propertylist ,address ,building ,alternateurl ,land WHERE propertylist.propertyid = address.propertyid AND address.propertyid = building.propertyid AND building.propertyid = alternateurl.propertyid AND alternateurl.propertyid = land.propertyid ``` In Simple ``` SELECT COUNT(DISTINCT column_name) FROM table_name; ``` You shall also refer [here](http://www.w3schools.com/sql/sql_func_count.asp)
how to count the number of rows fetched from the query consists of data from many tables
[ "", "mysql", "sql", "" ]
I have two SQLite tables. I want to update a column in `table1` with a value from `table2`. Table 1, `table1 (id INTEGER AUTOINCREMENT, status TEXT, name TEXT);`: ``` | id | status | name | |----|-----------|------| | 1 | pending | xyz | | 2 | completed | abc | ``` Table 2, `table2 (status TEXT, name TEXT, trans_id INTEGER);`: ``` | trans_id | status | name | |----------|-----------|------| | 1 | refunded | cvb | | 2 | cancelled | asd | ``` I want to update status and name from table2 to table1 where `table1.id = table2.trans_id`. I have this query: ``` UPDATE table1 SET status = (SELECT t2.status FROM table1 t1,table2 t2 WHERE t1.id = t2.trans_id) , name = (SELECT t2.name FROM table1 t1,table2 t2 WHERE t1.id = t2.trans_id) WHERE id IN (SELECT trans_id FROM table1 t1,table2 t2 WHERE t1.id = t2.trans_id) ``` It populates table1 wrongly. This is the resultant table1 ``` | id | status | name | |----|----------|------| | 1 | refunded | cvb | | 2 | refunded | cvb | ``` My requirement is this: ``` | id | status | name | |----|-----------|------| | 1 | refunded | cvb | | 2 | cancelled | asd | ``` Whats wrong with my query? How can I achieve it?
I am assuming that `t2.trans_id` is unique or the primary key in `table2`. If not, and the subquery returns multiple results, the update query will blow up. In that case you either need to apply a further filter in the `WHERE`, or add `LIMIT 1` (SQLite has no `TOP`) if any matching result will do. ``` UPDATE table1 SET status = (SELECT t2.status FROM table2 t2 WHERE t2.trans_id = id), NAME = (SELECT t2.NAME FROM table2 t2 WHERE t2.trans_id = id) WHERE id IN (SELECT trans_id FROM table2 t2 WHERE t2.trans_id = id) ```
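Combining the two answers (fully qualified correlation names plus a simplified `IN` list) gives a version that can be verified with Python's built-in `sqlite3`, using the question's sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (id INTEGER PRIMARY KEY, status TEXT, name TEXT);
CREATE TABLE table2 (trans_id INTEGER PRIMARY KEY, status TEXT, name TEXT);
INSERT INTO table1 VALUES (1, 'pending', 'xyz'), (2, 'completed', 'abc');
INSERT INTO table2 VALUES (1, 'refunded', 'cvb'), (2, 'cancelled', 'asd');
""")

# Each correlated subquery is re-evaluated per table1 row, so every
# row picks up its own matching table2 values.
conn.execute("""
    UPDATE table1
    SET status = (SELECT t2.status FROM table2 t2 WHERE t2.trans_id = table1.id),
        name   = (SELECT t2.name   FROM table2 t2 WHERE t2.trans_id = table1.id)
    WHERE id IN (SELECT trans_id FROM table2)
""")
rows = conn.execute("SELECT id, status, name FROM table1 ORDER BY id").fetchall()
print(rows)
```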
The previous answer will fail if there is an id column in table2. It is safer to use the fully qualified name table1.id: ``` UPDATE table1 SET status = (SELECT t2.status FROM table2 t2 WHERE t2.trans_id = table1.id), name = (SELECT t2.name FROM table2 t2 WHERE t2.trans_id = table1.id) WHERE id IN (SELECT trans_id FROM table2 t2 WHERE t2.trans_id = table1.id); ```
Update column with value from another table using SQLite?
[ "", "sql", "sqlite", "sql-update", "" ]
I'm not 100% sure how to phrase the question, but I'm pretty much trying to do this: say I have two tables: table a: ``` a1 a2 ``` and table b: ``` b1 b2 ``` I want to combine them and create a table such as: ``` a1 b1 a1 b2 a2 b1 a2 b2 ``` (for every row in table a, create one row for each row in table b) I figure I'd be able to do this using a loop of some sort, but I was wondering if there was any way to do this with set logic?
The syntax you're looking for is a `cross join`: ``` SELECT a.*, b.* FROM a CROSS JOIN b ```
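A minimal check of the Cartesian product using Python's built-in `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE a (v TEXT);
CREATE TABLE b (v TEXT);
INSERT INTO a VALUES ('a1'), ('a2');
INSERT INTO b VALUES ('b1'), ('b2');
""")

# CROSS JOIN pairs every row of a with every row of b: 2 x 2 = 4 rows.
rows = conn.execute(
    "SELECT a.v, b.v FROM a CROSS JOIN b ORDER BY a.v, b.v"
).fetchall()
print(rows)
```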
You don't need any loops. This is a very simple task in SQL. You can do: ``` select a.*, b.* from a cross join b ``` or: ``` select a.*, b.* from a inner join b on (1=1) ```
Concatenating two tables distributively
[ "", "sql", "sql-server", "select", "" ]
I am using this code to get the ABC count from all tables; there are 72 of them: ``` declare @SQL nvarchar(max) declare @Countt bigint SELECT @SQL = STUFF(( SELECT ' ; SELECT COUNT(ABC) FROM ' + INFORMATION_SCHEMA.TABLES.TABLE_NAME FROM INFORMATION_SCHEMA.TABLES LEFT OUTER JOIN INFORMATION_SCHEMA.COLUMNS ON INFORMATION_SCHEMA.TABLES.TABLE_NAME = INFORMATION_SCHEMA.COLUMNS.TABLE_NAME where INFORMATION_SCHEMA.TABLES.TABLE_TYPE =N'BASE TABLE' AND INFORMATION_SCHEMA.COLUMNS.COLUMN_NAME =N'ABC' FOR XML PATH('')),1,2,'') SET @SQL = @SQL PRINT @SQL EXECUTE (@SQL) ``` But I am getting 72 results one by one, and I just want the sum of all 72 results. For example, if ABC has 10 rows in each of 4 tables, it should return 40. Please suggest where I am wrong, or any better way.
Everyone is right; you just need to add the schema if there are different ones: ``` declare @SQL nvarchar(max) declare @Countt bigint SELECT @SQL = STUFF(( SELECT DISTINCT ' UNION ALL SELECT COUNT(ABC) AS CountAmount FROM ' + INFORMATION_SCHEMA.TABLES.TABLE_SCHEMA + '.' + INFORMATION_SCHEMA.TABLES.TABLE_NAME AS [text()] FROM INFORMATION_SCHEMA.TABLES LEFT OUTER JOIN INFORMATION_SCHEMA.COLUMNS ON INFORMATION_SCHEMA.TABLES.TABLE_NAME = INFORMATION_SCHEMA.COLUMNS.TABLE_NAME WHERE INFORMATION_SCHEMA.TABLES.TABLE_TYPE =N'BASE TABLE' AND INFORMATION_SCHEMA.COLUMNS.COLUMN_NAME =N'ABC' FOR XML PATH('')),1,11,'') SET @SQL = 'SELECT SUM( CountAmount ) AS TotalSum FROM (' + @SQL + ' ) AS T ' PRINT @SQL EXECUTE (@SQL) ```
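The same build-a-UNION-ALL-then-SUM shape can be sketched outside SQL Server — here with Python's built-in `sqlite3`, using its `sqlite_master` catalog as a stand-in for `INFORMATION_SCHEMA` (the table names `t1`…`t4` and row counts are invented to match the question's 10-rows-in-4-tables example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
for name in ("t1", "t2", "t3", "t4"):
    cur.execute(f"CREATE TABLE {name} (abc INTEGER)")
    cur.executemany(f"INSERT INTO {name} VALUES (?)",
                    [(i,) for i in range(10)])

# Same shape as the dynamic T-SQL: one COUNT per table, glued together
# with UNION ALL, then an outer SELECT SUM over the per-table counts.
tables = [r[0] for r in cur.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
inner = " UNION ALL ".join(f"SELECT COUNT(abc) AS c FROM {t}" for t in tables)
total = cur.execute(f"SELECT SUM(c) FROM ({inner})").fetchone()[0]
print(total)
# → 40
```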
``` declare @SQL nvarchar(max) declare @Countt bigint SELECT @SQL = STUFF(( SELECT ' UNION ALL SELECT COUNT(ABC) AS noCount FROM ' + INFORMATION_SCHEMA.TABLES.TABLE_NAME FROM INFORMATION_SCHEMA.TABLES LEFT OUTER JOIN INFORMATION_SCHEMA.COLUMNS ON INFORMATION_SCHEMA.TABLES.TABLE_NAME = INFORMATION_SCHEMA.COLUMNS.TABLE_NAME where INFORMATION_SCHEMA.TABLES.TABLE_TYPE =N'BASE TABLE' AND INFORMATION_SCHEMA.COLUMNS.COLUMN_NAME =N'ABC' FOR XML PATH('')),1,10,'') SET @SQL = 'SELECT COUNT(*) FROM (' + @SQL + ')A' PRINT @SQL EXECUTE (@SQL) ```
How to get count across multiple tables
[ "", "sql", "sql-server", "" ]
I'm using the following query to get an overview of my results. At the moment the query shows 4 different rows instead of 2. I want to use the first 3 columns as the key for my query. Any suggestions on how to do this? ``` select r.ID, m.MATERIALID, m.LOT, ms.AMMOUNT as aantal, m.NETPRICE as prijs, (ms.AMMOUNT * m.NETPRICE) as Total from rc_recall r inner join RC_RECALLMATSTORE ms on r.ID = ms.RECALLID inner join RC_RECALLMATERIAL m ON ms.RECALLID = m.RECALLID and ms.LINE = m.LINE where r.ID = '2015073' and d.LANG = 'FR' group by r.ID, m.MATERIALID, m.LOT, ms.AMMOUNT, m.NETPRICE; ``` The result I get: ``` 2015073 | 100654643 | 1 | 2 | 0.9200 | 1.8400000 2015073 | 100654643 | 1 | 5 | 0.9200 | 4.6000000 2015073 | 100654643 | 2 | 3 | 0.9200 | 2.7600000 2015073 | 100654643 | 2 | 5 | 0.9200 | 4.6000000 ``` Is it possible to sum together the rows that have the same r.ID, m.MATERIALID, m.LOT? Result I want: ``` 2015073 | 100654643 | 1 | 7 | 0.9200 | 6.4400000 2015073 | 100654643 | 2 | 8 | 0.9200 | 7.3600000 ```
You probably want to use `sum()` and other aggregation functions: ``` select r.ID, m.MATERIALID, m.LOT, sum(ms.AMMOUNT) as aantal, sum(m.NETPRICE )as prijs, sum(ms.AMMOUNT * m.NETPRICE) as Total from rc_recall r inner join RC_RECALLMATSTORE ms on r.ID = ms.RECALLID inner join RC_RECALLMATERIAL m ON ms.RECALLID = m.RECALLID and ms.LINE = m.LINE where r.ID = '2015073' and d.LANG = 'FR' group by r.ID, m.MATERIALID, m.LOT; ``` You also need to fix the `group by` to be at the right granularity.
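To see the aggregation at work, here is a small sketch with Python's built-in `sqlite3`, loading the four detail rows from the question into a single hypothetical `lines` table (the joins are omitted so the focus stays on `SUM` and `GROUP BY` granularity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE lines (id TEXT, materialid TEXT, lot INTEGER, "
            "amount INTEGER, netprice REAL)")
cur.executemany("INSERT INTO lines VALUES (?, ?, ?, ?, ?)", [
    ("2015073", "100654643", 1, 2, 0.92),
    ("2015073", "100654643", 1, 5, 0.92),
    ("2015073", "100654643", 2, 3, 0.92),
    ("2015073", "100654643", 2, 5, 0.92),
])

# Grouping by (id, materialid, lot) collapses the four detail rows into
# two groups, and SUM() aggregates within each group.
rows = cur.execute("""
    SELECT id, materialid, lot,
           SUM(amount)            AS aantal,
           SUM(amount * netprice) AS total
    FROM lines
    GROUP BY id, materialid, lot
    ORDER BY lot
""").fetchall()
print([(r[2], r[3], round(r[4], 2)) for r in rows])
# → [(1, 7, 6.44), (2, 8, 7.36)]
```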
Remove "ms.AMMOUNT" from your GROUP BY and add some SUM() functions should do the work : ``` select r.ID, m.MATERIALID, m.LOT , SUM(ms.AMMOUNT) as aantal , m.NETPRICE as prijs , SUM(ms.AMMOUNT * m.NETPRICE) as Total from rc_recall r inner join RC_RECALLMATSTORE ms on r.ID = ms.RECALLID inner join RC_RECALLMATERIAL m ON ms.RECALLID = m.RECALLID and ms.LINE = m.LINE where r.ID = '2015073' and d.LANG = 'FR' group by r.ID, m.MATERIALID, m.LOT, m.NETPRICE; ```
SQL counting multi lines with each other using group by
[ "", "sql", "group-by", "sum", "" ]
I have a question about the IF condition in SQL. Is it possible to use the following syntax in an SQL query? I'm interested in an IF condition inside a GROUP BY statement: ``` "SELECT * FROM TABLE WHERE... IF(order_id !=0, GROUP BY order_id, GROUP BY other_field)" ```
``` SELECT * FROM TABLE GROUP BY case when order_id <> 0 then order_id else other_field end ```
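The CASE-in-GROUP-BY trick is portable across engines; a minimal sketch with Python's built-in `sqlite3` and made-up sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (order_id INTEGER, other_field TEXT)")
cur.executemany("INSERT INTO t VALUES (?, ?)",
                [(1, "x"), (1, "y"), (0, "x"), (0, "x"), (0, "y")])

# Rows with order_id != 0 are grouped by order_id; the rest fall back
# to grouping by other_field.
rows = cur.execute("""
    SELECT COUNT(*) AS n
    FROM t
    GROUP BY CASE WHEN order_id <> 0 THEN order_id ELSE other_field END
""").fetchall()
counts = sorted(r[0] for r in rows)
print(counts)
# → [1, 2, 2]
```

The two `order_id = 1` rows form one group, while the `order_id = 0` rows split into groups `'x'` (two rows) and `'y'` (one row).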
First, you shouldn't be doing `select *` with `group by`. The query would (normally) be rejected with a syntax error in most databases. Second, the SQL standard is `case`. Perhaps you want something like this: ``` select (case when order_id != 0 then order_id end) as order_id, (case when order_id = 0 then other_field end) as other_field, count(*) from table t group by (case when order_id != 0 then order_id end), (case when order_id = 0 then other_field end); ``` Note that I split the logic into two `case` statements. This just makes it easier if the types of the fields are not the same -- you don't have to deal with things like how to convert from one type to another.
SQL IF condition if GROUP BY statement
[ "", "sql", "" ]
I want to combine two queries side by side, not stacked vertically the way a normal UNION works. ``` select Code, Name, Category, SubCategory, p.Brand, Model, Price, Currency, Unit, Template, F1, F2,P1, P2, DiscountPercent as PrivateDiscount from Products as p left join ProductTemplates as pt on pt.TemplateName=p.Template left join Discounts as d on d.Brand = p.Brand and d.Status=1 and d.CustomerID='1198' and d.DiscountType=0 select DiscountPercent as PublicDiscount from Products as p left join ProductTemplates as pt on pt.TemplateName=p.Template left join Discounts as d on d.Brand = p.Brand and d.Status=1 and d.DiscountType=1 ``` out: query1 ``` column name | Code | Name | Category | Brand | PrivateDiscount | 1.row _____ | ssff3 | adcdd | affcdcddd | assfdd | %6 | 2.row _____ | scf3 | adcdd | affcdcddd | assfdd | %6 | ``` out: query2 ``` column name | PublicDiscount | 1.row _____| %2 | 2.row _____| %3 | ``` --- I want this result: ``` column name : | Code | Name | Category | Brand | PrivateDiscount | PublicDiscount | 1.row _____ | ssff3 | adcdd | affcdcddd | assfdd | %6 | %2 | 2.row _____ | scf3f | adcdd | affcdcddd | assfdd | %6 | %3 | ``` As a result, the column "PublicDiscount" should appear next to the column "PrivateDiscount". How can I do that?
Simply join the table Discounts twice: ``` select Code, Name, Category, SubCategory, p.Brand, Model, Price, Currency, Unit, Template, F1, F2,P1, P2, privd.DiscountPercent as PrivateDiscount, pupld.DiscountPercent as PublicDiscount from Products as p left join ProductTemplates as pt on pt.TemplateName=p.Template left join Discounts as privd on privd.Brand = p.Brand and privd.Status=1 and privd.CustomerID='1198' and privd.DiscountType=0 left join Discounts as pupld on pupld.Brand = p.Brand and pupld.Status=1 and pupld.DiscountType=1; ```
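A compact way to verify the join-the-same-table-twice idea, using Python's built-in `sqlite3` with the tables stripped down to the columns that matter (values are taken from the question's sample output):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE products (code TEXT, brand TEXT)")
cur.execute("CREATE TABLE discounts (brand TEXT, discounttype INTEGER, "
            "discountpercent INTEGER)")
cur.execute("INSERT INTO products VALUES ('ssff3', 'assfdd')")
cur.executemany("INSERT INTO discounts VALUES (?, ?, ?)",
                [("assfdd", 0, 6), ("assfdd", 1, 2)])

# Two aliases of the same table pull the private (type 0) and public
# (type 1) discounts into separate columns of a single row.
row = cur.execute("""
    SELECT p.code, privd.discountpercent, pupld.discountpercent
    FROM products p
    LEFT JOIN discounts privd
           ON privd.brand = p.brand AND privd.discounttype = 0
    LEFT JOIN discounts pupld
           ON pupld.brand = p.brand AND pupld.discounttype = 1
""").fetchone()
print(row)
# → ('ssff3', 6, 2)
```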
Join the queries like this: [Note that I used `ProductId` as primary key, please replace it with actual primary key field name of `Products` table] ``` SELECT Code, Name, Category, SubCategory, p.Brand, Model, Price, Currency, Unit, Template, F1, F2,P1, P2, PrivateDiscount, PublicDiscount FROM ( SELECT Code, Name, Category, SubCategory, p.Brand, Model, Price, Currency, Unit, Template, F1, F2,P1, P2, DiscountPercent as PrivateDiscount, p.ProductId from Products as p left join ProductTemplates as pt on pt.TemplateName=p.Template left join Discounts as d on d.Brand = p.Brand and d.Status=1 and d.CustomerID='1198' and d.DiscountType=0 ) as tbl1 INNER JOIN ( SELECT DiscountPercent as PublicDiscount, p.ProductId from Products as p left join ProductTemplates as pt on pt.TemplateName=p.Template left join Discounts as d on d.Brand = p.Brand and d.Status=1 and d.DiscountType=1 ) as tbl2 ON tbl1.ProductId=tbl2.ProductId; ``` Hope this will work, thank you.
Combine two queries side by side rather than with a vertical UNION
[ "", "mysql", "sql", "" ]
I am trying to insert multiple rows into SQL. The table contains an external ID column that increases by one whenever the app is used. The external ID is not the primary key, but another ID in the table. Currently the last external ID is 544. I want to insert 1600 additional rows and have the external ID increase by 1 for every row inserted. I have tried the following, but all of the external IDs end up being 100. ``` INSERT INTO tableA (externalid,tableuiduid) VALUES ((select ISNULL(MAX(EXTERNALID) +1, 0) from tableA),newid()); ``` I have also tried this, but it ends up inserting a duplicate external ID, as there are gaps in the numbers. ``` INSERT INTO tableA (ExternalID,tableAuid) VALUES ((select count (externalid) + 1 from tableA),newid()); ``` Please let me know what I need to use to have this increase by 1 and not insert a duplicate ID.
The way you're doing it now, depending on your RDBMS, you can use an `INSERT INTO ... SELECT` statement: ``` INSERT INTO tableA (externalid, tableuiduid) select ISNULL(MAX(EXTERNALID) + 1, 0), newid() from tableA; ``` But you'll need to execute that 1600 times. If you have an auxiliary number table or use a recursive CTE, you could use that to generate 1600 rows at once, but without knowing your RDBMS a precise implementation is very difficult. You could define the field as an automatically incrementing field or sequence, but I get the impression that that isn't a good idea because you're not always going to be determining what the `externalid` value is.
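The MAX-plus-one pattern can be sketched with Python's built-in `sqlite3` — note that `IFNULL` replaces T-SQL's `ISNULL`, a Python `uuid4` stands in for `newid()`, and this per-row approach is not safe under concurrent writers:

```python
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tableA (externalid INTEGER, tableuiduid TEXT)")
cur.execute("INSERT INTO tableA VALUES (544, 'seed')")

# Each INSERT ... SELECT recomputes MAX(externalid) + 1 at insert time,
# so repeated executions keep incrementing (IFNULL covers an empty table).
for _ in range(3):
    cur.execute(
        "INSERT INTO tableA (externalid, tableuiduid) "
        "SELECT IFNULL(MAX(externalid), 0) + 1, ? FROM tableA",
        (str(uuid.uuid4()),))

ids = [r[0] for r in cur.execute(
    "SELECT externalid FROM tableA ORDER BY externalid")]
print(ids)
# → [544, 545, 546, 547]
```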
You should use the +1 outside of `ISNULL`. And use `INSERT INTO .. SELECT`. Try this way: ``` DECLARE @Cnt as int SET @Cnt = 0 WHILE (@Cnt < 1600) BEGIN INSERT INTO tableA (externalid,tableuiduid) select ISNULL(MAX(EXTERNALID),0) + 1,newid() from tableA SET @Cnt = @Cnt + 1 END ```
How to have SQL increase ID by 1 on a column using an insert statement
[ "", "sql", "sql-insert", "" ]
Ok, so I have real difficulty with the following question. Table 1: Schema for the bookworm database. Primary keys are underlined. There are some foreign key references to link the tables together; you can make use of these with natural joins. For each publisher, show the publisher’s name and the average price per page of books published by the publisher. Average price per page here means the total price divided by the total number of pages for the set of books; it is not the average of (price/number of pages). Present the results sorted by average price per page in ascending order. ``` Author(aid, alastname, afirstname, acountry, aborn, adied). Book(bid, btitle, pid, bdate, bpages, bprice). City(cid, cname, cstate, ccountry). Publisher(pid, pname). Author_Book(aid, bid). Publisher_City(pid, cid). ``` So far I have tried: ``` SELECT pname, bpages, AVG(bprice) FROM book NATURAL JOIN publisher GROUP BY AVG(bpages) ASC; ``` and receive > ERROR: syntax error at or near "asc" > LINE 3: group by avg(bpages) asc;
You can't group by an aggregate, at least not like that. Also don't use natural join, it's bad habit to get into because most of the time you'll have to specify join conditions. It's one of those things you see in text books but almost never in real life. OK with that out of the way, and this being homework so I don't want to just give you an answer without an explanation, aggregate functions (sum in this case) affect all values for a column within a group as limited by the where clause and join conditions, so unless your doing every row you have to specify what column contains the values you are grouping by. In this case our group is Publisher name, they want to know per publisher, what the price per page is. Lets work out a quick select statement for that: ``` select Pname as Publisher , Sum(bpages) as PublishersTotalPages , sum(bprice) as PublishersTotalPrice , sum(bprice)/Sum(bpages) as PublishersPricePerPage ``` Next up we have to determine where to get the information and how the tables relate to eachother, we will use books as the base (though due to the nature of left or right joins it's less important than you think). We know there is a foreign key relation between the column PID in the book table and the column PID in the Publisher table: ``` From Book B Join Publisher P on P.PID = B.PID ``` That's what is called an explicit join, we are explicitly stating equivalence between the two columns in the two tables (vs. implying equivalence if it's done in the where clause). This gives us a many to one relation ship, because each publisher has many books published. To see that just run the below: ``` select b.*, p.* From Book B Join Publisher P on P.PID = B.PID ``` Now we get to the part that seems to have stumped you, how to get the many to one relationship between books and the publishers down to one row per publisher and perform an aggregation (sum in this case) on the page count per book and price per book. 
The aggregation portion was already done in our selection section, so now we just have to state what column the values our group will come from, since they want to know a per publisher aggregate we'll use the publisher name to group on: ``` Group by Pname Order by PublishersPricePerPage Asc ``` There is a little gotcha in that last part, publisherpriceperpage is a column alias for the formula sum(bprice)/Sum(bpages). Because order by is done after all other parts of the query it's unique in that we can use a column alias no other part of a query allows that, without nesting the original query. so now that you have patiently waded through my explanation, here is the final product: ``` select Pname as Publisher , Sum(bpages) as PublishersTotalPages , sum(bprice) as PublishersTotalPrice , sum(bprice)/Sum(bpages) as PublishersPricePerPage From Book B Join Publisher P on P.PID = B.PID Group by Pname Order by PublishersPricePerPage Asc ``` Good luck and hope the explanation helped you get the concept.
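To make the sum-of-totals vs. average-of-ratios distinction from the assignment concrete, here is a tiny sketch with Python's built-in `sqlite3` (the publisher and book values are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE book (pid INTEGER, bpages INTEGER, bprice REAL)")
cur.execute("CREATE TABLE publisher (pid INTEGER, pname TEXT)")
cur.execute("INSERT INTO publisher VALUES (1, 'Acme Press')")
cur.executemany("INSERT INTO book VALUES (?, ?, ?)",
                [(1, 100, 10.0), (1, 300, 15.0)])

# Total price / total pages: (10 + 15) / (100 + 300) = 0.0625.
rows = cur.execute("""
    SELECT p.pname, SUM(b.bprice) / SUM(b.bpages) AS price_per_page
    FROM book b
    JOIN publisher p ON p.pid = b.pid
    GROUP BY p.pname
    ORDER BY price_per_page ASC
""").fetchall()
print(rows)
# → [('Acme Press', 0.0625)]
```

Averaging the per-book ratios instead would give (10/100 + 15/300) / 2 = 0.075 — a different number, which is exactly why the assignment insists on dividing the totals.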
You need an ORDER BY clause, not GROUP BY, to sort records. So change your query to: ``` SELECT pname, AVG(bprice) FROM book NATURAL JOIN publisher GROUP by pname ORDER BY AVG(bpages) ASC; ```
SQL Aggregation AVG statement
[ "", "sql", "" ]
I am working with Oracle, and below I have a basic SQL query where I select the schools that have a team using a `join`. However I am struggling with how to select the opposite (schools that don’t have a team) using a join as well. I tried using the keywords `outer`, `inner`, `left` and `right`, but to no avail — I don't get the results that I desired. How can I list the schools that don’t have a team? Query: ``` SELECT DISTINCT a.school_name, a.school_id, b.school_id FROM School a RIGHT OUTER JOIN Team b ON a.school_id = b.school_id; ``` Results: ``` SCHOOL_NAME SCHOOL_ID SCHOOL_ID ------------------------- ---------- ---------- Panthers University 92022 92022 Blue Rays Middle School 22149 22149 ``` Tables: ``` schools SCHOOL_ID SCHOOL_NAME SCHOOL_TYPE ---------- ----------------------------- ------------ 92022 Panthers University University 22149 Blue Rays Middle School Middle SchooL 21719 Bull Dogs High School High SchooL 42736 Saint Lucia Grade School Middle SchooL school_teams TEAM_ID SPORT_ID SCHOOL_ID TEAM_NAME ---------- ---------- ---------- ---------- 1 1 92022 Panthers 2 1 22149 Blue Rays ```
An inner join selects only rows that match, and your right outer join preserves every Team row rather than every School row — so neither query shows you the rows that DON'T match. The simplest fix is a subquery. Try this: ``` SELECT DISTINCT school_id, school_name FROM school WHERE school_id not in (SELECT school_id FROM school_teams) ```
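Both the subquery form and the join-based anti-join (`LEFT JOIN … IS NULL`, since the question asked for a join) can be checked with Python's built-in `sqlite3` using the sample data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE schools (school_id INTEGER, school_name TEXT)")
cur.execute("CREATE TABLE school_teams (team_id INTEGER, school_id INTEGER)")
cur.executemany("INSERT INTO schools VALUES (?, ?)", [
    (92022, "Panthers University"), (22149, "Blue Rays Middle School"),
    (21719, "Bull Dogs High School"), (42736, "Saint Lucia Grade School")])
cur.executemany("INSERT INTO school_teams VALUES (?, ?)",
                [(1, 92022), (2, 22149)])

# Subquery form: schools whose id never appears in school_teams.
no_team = cur.execute("""
    SELECT school_id FROM schools
    WHERE school_id NOT IN (SELECT school_id FROM school_teams)
    ORDER BY school_id
""").fetchall()

# Join form (anti-join): LEFT JOIN, then keep the rows with no match.
no_team_join = cur.execute("""
    SELECT s.school_id FROM schools s
    LEFT JOIN school_teams t ON t.school_id = s.school_id
    WHERE t.school_id IS NULL
    ORDER BY s.school_id
""").fetchall()
print(no_team)
# → [(21719,), (42736,)]
```

Both queries return the two schools with no team.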
Use `Not exists` to do this. ``` SELECT a.school_name, a.school_id FROM School a where not exists (select 1 from Team b where a.school_id = b.school_id) ```
Usage of joins: getting the opposite result
[ "", "sql", "" ]
I am trying to replace the nth character in SQL Server. I tried using `replace()`: ``` SELECT REPLACE(ABC,0,1) FROM XXX ``` In the above code all zeros will be replaced with ones, but I only want to change the character at a particular position, and sometimes that position can change.
Use `STUFF`. The STUFF function inserts a string into another string: it deletes a specified length of characters in the first string at the start position and then inserts the second string into the first string at that position. ``` select STUFF(ABC, starting_index, 1, 'X') from XXX ``` **Note:** (Thanks to [pcnate](https://stackoverflow.com/users/3851647/pcnate) for the suggestion) `starting_index` is your int position to replace — substitute whatever position number you need, and the character at that position will be replaced.
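`STUFF` is T-SQL only, but the mechanics are easy to verify elsewhere — a sketch with Python's built-in `sqlite3`, emulating `STUFF(s, n, 1, 'X')` with `substr()` (the position `n` and replacement `'X'` are arbitrary examples):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

n = 3  # hypothetical 1-based position to replace

# substr() version of STUFF(s, n, 1, 'X'): keep everything before
# position n, insert 'X', keep everything after position n.
row = cur.execute(
    "SELECT substr(?1, 1, ?2 - 1) || 'X' || substr(?1, ?2 + 1)",
    ("1059831198", n)).fetchone()
print(row[0])
# → '10X9831198'
```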
You're looking for [`STUFF`](https://msdn.microsoft.com/en-us/library/ms188043.aspx): ``` select STUFF(ABC, @n, 1, 'X') from XXX ``` This would replace the `@n`th character with an `X`. *Technically it seeks into the original string at column `ABC` starting at position `@n`, deletes `1` character, then inserts the string `'X'` at that position.*
How to replace nth character in sql server
[ "", "sql", "sql-server", "sql-server-2008", "" ]
Rows in table `Balance`: ![enter image description here](https://i.stack.imgur.com/R3vBu.png) Column `Amount` has type decimal(10,1). We run this select: ``` SELECT u.UserID as UserID, (ifnull(b.BalanceDebit,0) - ifnull(bc.BalanceCredit, 0)) as Balance, ifnull(bc.BalanceCredit, 0) as BalanceCredit FROM Users as u LEFT JOIN ( SELECT UserID, SUM(Amount) as BalanceDebit FROM Balance WHERE Operation='debit' AND StatusPay = 'success' GROUP BY UserID ) as b ON u.UserID = u.UserID LEFT JOIN ( SELECT UserID, SUM(Amount) as BalanceCredit FROM Balance WHERE Operation='credit' AND StatusPay = 'success' GROUP BY UserID ) as bc ON bc.UserID = u.UserID WHERE u.UserID != '12' ``` After the select, the result shows these rows: ![enter image description here](https://i.stack.imgur.com/cL09V.png) But the resulting balance is not right. The result shows that all users have 10.0 on their balance, but only one user (UserID = 40) has 10.0. Please tell me where the error in the SQL query is. What is the right way to select the sum?
Your issue is your first LEFT OUTER JOIN: ``` ON u.UserID = u.UserID ``` should be ``` ON u.UserID = b.UserID ``` Also, if you are trying to find the balance, I think you would take all of the credits, and subtract all the debits. So instead of this: ``` ifnull(b.BalanceDebit,0) - ifnull(bc.BalanceCredit, 0) ``` Shouldn't it be this? ``` ifnull(b.BalanceCredit,0) - ifnull(bc.BalanceDebit, 0) ```
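The always-true join condition is easy to demonstrate with Python's built-in `sqlite3` (a cut-down, hypothetical two-table version of the schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (userid INTEGER)")
cur.execute("CREATE TABLE balance (userid INTEGER, amount REAL)")
cur.executemany("INSERT INTO users VALUES (?)", [(40,), (41,)])
cur.execute("INSERT INTO balance VALUES (40, 10.0)")

# Bug: u.userid = u.userid is always true, so every balance row
# attaches to every user -- a disguised cross join.
buggy = cur.execute("""
    SELECT u.userid, b.amount FROM users u
    LEFT JOIN balance b ON u.userid = u.userid
    ORDER BY u.userid
""").fetchall()

# Fix: actually correlate the two tables.
fixed = cur.execute("""
    SELECT u.userid, b.amount FROM users u
    LEFT JOIN balance b ON b.userid = u.userid
    ORDER BY u.userid
""").fetchall()
print(buggy, fixed)
# → [(40, 10.0), (41, 10.0)] [(40, 10.0), (41, None)]
```

With the buggy condition, user 41 picks up user 40's balance, matching the symptom in the question.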
There's no need for the subqueries, the same result can be achieved like this: ``` SELECT u.UserID, SUM(CASE WHEN b.operation = 'debit' THEN b.operation ELSE 0 END) balance_debit, SUM(CASE WHEN b.operation = 'credit' THEN b.operation ELSE 0 END) balance_credit, SUM(CASE WHEN b.operation = 'debit' THEN b.operation ELSE 0 END) - SUM(CASE WHEN b.operation = 'credit' THEN b.operation ELSE 0 END) balance FROM users u LEFT JOIN balance b ON u.UserID = b.UserID AND b.StatusPay = 'success' WHERE u.UserID <> 12 GROUP BY u.USERID ```
Why does SQL give the wrong sum in the result?
[ "", "mysql", "sql", "left-join", "" ]
In my db, DOB is saved in dd/mm/yyyy format. I want to change the DOB date format to MM/dd/yyyy. How can I do that?
First, I want to know which datatype you are using for saving your date value — there is nothing provided with your question: no sample code, no table details, nothing. Anyway, assuming your 'date of birth' field's datatype is datetime, you can use the following example: ``` create table checktable( ID int, name nvarchar (30), dob datetime); ``` > Example data inserted into the table ``` insert into checktable(ID,name,dob) values(10,'myname','03/01/2014'); ``` //.......... ``` select * from checktable ``` > // Use CONVERT(); it will give you the desired output ``` SELECT TOP 1 ID, dob,CONVERT(varchar,dob,101) 'mm/dd/yyyy' FROM checktable ``` **UPDATE** If your datatype is varchar, and it is currently in dd/mm/yyyy format and you want to change it into mm/dd/yyyy format, then the following example will help you: ``` create table checktable1( ID int, name nvarchar (30), dob varchar(20)); ``` > // insert sample data ``` insert into checktable1(ID,name,dob) values(10,'myname','21/05/2010'); select * from checktable1 ``` > // change the format using substring() ``` select * FROM checktable1 select dob,substring(dob,4,3)+substring(dob, 1, 3)+substring(dob, 7, 4) from checktable1 ``` > It will give you the result in 05/21/2010 (mm/dd/yyyy) format
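The substring arithmetic in the varchar branch can be sanity-checked with Python's built-in `sqlite3`, which offers `substr()` and `||` in place of T-SQL's `substring()` and `+`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Swap the two leading 3-character chunks (digits plus trailing slash)
# to turn 'dd/mm/yyyy' into 'mm/dd/yyyy'.
row = cur.execute(
    "SELECT substr(?1, 4, 3) || substr(?1, 1, 3) || substr(?1, 7, 4)",
    ("21/05/2010",)).fetchone()
print(row[0])
# → '05/21/2010'
```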
Microsoft SQL: <https://msdn.microsoft.com/en-us/library/ms187928.aspx> Syntax for CONVERT: ``` CONVERT ( data_type [ ( length ) ] , expression [ , style ] ) ``` use example: ``` SELECT CONVERT(VARCHAR(10), GETDATE(), 101) AS [MM/DD/YYYY] ``` For other databases search **Date and Time Functions** in documentation.
How to update/Change the date format to MM/dd/yyyy from dd/MM/yyyy format in sql server?
[ "", "sql", "sql-server", "sql-update", "date-formatting", "" ]
I have two tables as shown in figure 1: ![enter image description here](https://i.stack.imgur.com/y8kCg.png) And I need this result: ![enter image description here](https://i.stack.imgur.com/7WEe5.png) I tried it with joins, but I am unable to get the needed result. ``` SELECT company_info.t_id, company_info.company_name, company_info.remark, tender.company_selected, company_info.company_document FROM company_info LEFT OUTER JOIN tender ON company_info.t_id = tender.t_id; ``` **Note:** It is showing Company Document in both rows, but I need it only for the company which is selected. Please check the snapshot 2 for reference.
This code worked ``` SELECT company_info.t_id, company_info.company_name, company_info.remark, CASE WHEN tender.company_selected =company_info.company_name THEN tender.company_selected ELSE '' END, CASE WHEN tender.company_selected =company_info.company_name THEN company_info.company_document ELSE '' END FROM company_info LEFT OUTER JOIN tender ON company_info.t_id = tender.t_id ```
This will get you what you are looking for : ``` SELECT c.t_id, c.company_name, c.remark, IFNULL(t.company_selected,'') as company_selected, CASE WHEN t.company_selected IS NULL THEN '' ELSE c.company_document END AS company_document FROM company_info c LEFT JOIN tender t ON c.t_id=t.t_id AND c.company_name=t.company_selected ``` **Explanation:** This query will do the following: * selecting `t_id`,`comapany_name`,`remark` from company table. * Selecting company name from tender table. If there is no record, this column will be empty. * If there is no record in tender table, this column will be empty, otherwise, it will select `company_document` from company table.
Need partial record from 2 tables
[ "", "sql", "sql-server", "join", "" ]
I have a complex query which is really over the top of my head. I think RANK of the RANK-ing is needed, but there must be a better, and an existing way. Here I have a simple table: ``` Manufacturer DateOF Status Prefer Dell 05-2014 ComputerInstalled 30 Dell 05-2014 ComputerUninstalled 70 Dell 05-2014 ComputerUninstalled 70 Dell 05-2014 ComputerUninstalled 70 Dell 05-2014 ComputerInstalled 30 Dell 05-2014 ComputerUninstalled 70 Dell 05-2014 ComputerNew 26 Dell 05-2014 ComputerNew 26 Dell 05-2014 ComputerInstalled 30 Dell 05-2014 ComputerInstalled 30 ``` What I need to do is to GROUP BY the table by MANUFACTURER and DATEOF columns, then choose the rows with the lowest PREFER number (26 in this case). It's easy with the RANK function: ``` SELECT sq.* FROM ( SELECT *, RANK() OVER (PARTITION BY Manufacturer,DateOF ORDER BY Prefer) AS RankPrefer FROM table1 ) sq WHERE sq.RankPrefer = 1 ``` So I will have the result of 2 rows with Status ComputerNew. ``` Manufacturer DateOF Status Prefer Dell 05-2014 ComputerNew 26 Dell 05-2014 ComputerNew 26 ``` That's easy, and not the question. **The question is:** I have to implement the following rule: If the rows with the lowest **Prefer** values (e.g.: 26) turn out to have **ComputerNew** value in their **Status** field, then I have to include more rows with **ComputerInstalled** values. The result should be this: ``` Manufacturer DateOF Status Prefer Dell 05-2014 ComputerInstalled 30 Dell 05-2014 ComputerInstalled 30 Dell 05-2014 ComputerNew 26 Dell 05-2014 ComputerNew 26 Dell 05-2014 ComputerInstalled 30 Dell 05-2014 ComputerInstalled 30 ``` Similar to this rule, I have one more: If the rows with the lowest **Prefer** values (e.g.: 26) turn out to have **ComputerOld** value in their **Status** field, then I have to include more rows with **ComputerUninstalled** values. I think RANK of RANKING would solve this, but now I am really lost. Any help is appreciated on this riddle. 
Thank you --- **Edit1:** Gordon's solution is almost good, but not perfect. I give you more test data, there you can see where it fails. SQLFiddle to test is [here](http://sqlfiddle.com/#!3/527f6/1). I include the test data here as well: ``` INSERT Table1 VALUES ('HP10011','04/01/2014','ComputerUninstalled',70) INSERT Table1 VALUES ('HP10011','04/04/2014','ComputerOld',26) INSERT Table1 VALUES ('HP10011','04/04/2014','ComputerOld',26) INSERT Table1 VALUES ('HP10011','04/30/2014','ComputerUninstalled',70) INSERT Table1 VALUES ('HP10011','05/23/2014','QuickDispose',10) INSERT Table1 VALUES ('HP10011','06/03/2014','QuickDispose',10) INSERT Table1 VALUES ('HP10077','04/01/2014','ComputerUninstalled',70) INSERT Table1 VALUES ('HP1910','04/25/2014','QuickDispose',10) INSERT Table1 VALUES ('HP1910','05/01/2014','ComputerInstalled',30) INSERT Table1 VALUES ('HP1910','05/01/2014','ComputerInstalled',30) INSERT Table1 VALUES ('HP1910','05/01/2014','ComputerInstalled',30) INSERT Table1 VALUES ('HP1910','05/01/2014','ComputerInstalled',30) INSERT Table1 VALUES ('HP1910','05/01/2014','ComputerUninstalled',70) INSERT Table1 VALUES ('HP1910','05/01/2014','ComputerUninstalled',70) INSERT Table1 VALUES ('HP1910','05/01/2014','ComputerUninstalled',70) INSERT Table1 VALUES ('HP1910','05/01/2014','ComputerUninstalled',70) INSERT Table1 VALUES ('HP1910','05/02/2014','ComputerInstalled',30) INSERT Table1 VALUES ('HP1910','05/02/2014','ComputerInstalled',30) INSERT Table1 VALUES ('HP3720','05/07/2014','ComputerInstalled',30) INSERT Table1 VALUES ('HP3720','05/07/2014','ComputerInstalled',30) INSERT Table1 VALUES ('HP3720','05/07/2014','ComputerUninstalled',70) INSERT Table1 VALUES ('HP3720','05/07/2014','ComputerUninstalled',70) INSERT Table1 VALUES ('HP3720','05/07/2014','ComputerUninstalled',70) INSERT Table1 VALUES ('HP3720','05/07/2014','ComputerUninstalled',70) INSERT Table1 VALUES ('HP3720','05/08/2014','ComputerInstalled',30) INSERT Table1 VALUES 
('HP3720','05/08/2014','ComputerInstalled',30) INSERT Table1 VALUES ('HP3720','05/08/2014','ComputerInstalled',30) INSERT Table1 VALUES ('HP3720','05/08/2014','ComputerUninstalled',70) INSERT Table1 VALUES ('HP3720','06/06/2014','ComputerUninstalled',70) INSERT Table1 VALUES ('HP3720','06/06/2014','ComputerUninstalled',70) INSERT Table1 VALUES ('HP3720','06/10/2014','ComputerOld',26) INSERT Table1 VALUES ('HP3720','06/10/2014','ComputerUninstalled',70) INSERT Table1 VALUES ('HP3720','06/10/2014','ComputerUninstalled',70) INSERT Table1 VALUES ('HP3720','06/11/2014','ComputerOld',26) INSERT Table1 VALUES ('HP3720','06/11/2014','ComputerUninstalled',70) INSERT Table1 VALUES ('HP3720','06/11/2014','ComputerUninstalled',70) ``` The query returns both rows ComputerInstalled and ComputerUninstalled for the following data: ``` 'HP1910','05/01/2014','ComputerInstalled',30 'HP1910','05/01/2014','ComputerUninstalled',70 ``` It should choose ComputerInstalled only, because for that Manufacturer, in the same month, it should choose the lowest Prefer (30). 
The result for this dataset should be this: ``` Manufacturer DateOF Status Prefer HP10011 2014-04-01 ComputerUninstalled 70 HP10011 2014-04-04 ComputerOld 26 HP10011 2014-04-04 ComputerOld 26 HP10011 2014-04-30 ComputerUninstalled 70 HP10011 2014-05-23 QuickDispose 10 HP10011 2014-06-03 QuickDispose 10 HP10077 2014-04-01 ComputerUninstalled 70 HP1910 2014-04-25 QuickDispose 10 HP1910 2014-05-01 ComputerInstalled 30 HP1910 2014-05-01 ComputerInstalled 30 HP1910 2014-05-01 ComputerInstalled 30 HP1910 2014-05-01 ComputerInstalled 30 HP3720 2014-05-07 ComputerInstalled 30 HP3720 2014-05-07 ComputerInstalled 30 HP3720 2014-05-08 ComputerInstalled 30 HP3720 2014-05-08 ComputerInstalled 30 HP3720 2014-05-08 ComputerInstalled 30 HP3720 2014-06-06 ComputerUninstalled 70 HP3720 2014-06-06 ComputerUninstalled 70 HP3720 2014-06-10 ComputerOld 26 HP3720 2014-06-10 ComputerUninstalled 70 HP3720 2014-06-10 ComputerUninstalled 70 HP3720 2014-06-11 ComputerOld 26 HP3720 2014-06-11 ComputerUninstalled 70 HP3720 2014-06-11 ComputerUninstalled 70 ```
Here is one idea. Figure out the preference ranking for the rows. Then get determine whether the rows with rank = 1 meet your criteria using `exists`. The final query would look like: ``` with r as ( select t.*, rank() over (partition by manufacturer, dateof order by Prefer) as seqnum from table1 t ), r1 as ( select r.* from r where seqnum = 1 ) select r.* from r where r.seqnum = 1 or (exists (select 1 from r1 where status = 'ComputerNew' and r1.dateof = r.dateof) and r.status = 'ComputerInstalled' or exists (select 1 from r1 where status = 'ComputerOld' and r1.dateof = r.dateof) and r.status = 'ComputerUninstalled' ); ```
Okay, now that you've made some edits to the question, I have a different answer that I believe will resolve the question. Here is the query: ``` ;with r as ( select t.*, CAST(MONTH(dateof) AS VARCHAR(2)) + '-' + CAST(YEAR(dateof) AS VARCHAR(4)) AS EffDate, rank() over (partition by manufacturer, CAST(MONTH(dateof) AS VARCHAR(2)) + '-' + CAST(YEAR(dateof) AS VARCHAR(4)) order by Prefer) as seqnum from Table1 t ), r1 as ( select r.* from r where seqnum = 1 ) select r.* from r where r.seqnum = 1 or ( r.Status = 'ComputerUninstalled' and exists ( Select 1 from r1 where r1.Manufacturer = r.Manufacturer and r1.EffDate = r.EffDate and r1.Status = 'ComputerOld' ) and r.seqNum = ( Select Min(SeqNum) From r as r2 Where r2.Manufacturer = r.Manufacturer And r2.EffDate = r.EffDate And r2.SeqNum > 1 ) ) or ( r.Status = 'ComputerInstalled' and exists ( Select 1 from r1 where r1.Manufacturer = r.Manufacturer and r1.EffDate = r.EffDate and r1.Status = 'ComputerNew' ) and r.seqNum = ( Select Min(SeqNum) From r as r2 Where r2.Manufacturer = r.Manufacturer And r2.EffDate = r.EffDate And r2.SeqNum > 1 ) ); ``` Note: I get 2 more records than your expected resultset indicates. But from what you described, I believe that you made a mistake in your expected results. There are 6 "ComputerInstalled" for HP1910 in May, 2014 with a Prefer of 30. 4 of them have May 1st as their date, 2 of them have May 2nd. You left out the May 2nd records. Other than that, this result set matches your expected results and should work for larger datasets, I believe.
Complex RANK in SQL
[ "", "sql", "t-sql", "" ]
I have this query: ``` SELECT `id` FROM `accounts` WHERE AES_DECRYPT(`email`, :salt) = CONCAT_WS('@',:mailbox,:host) OR AES_DECRYPT(`email`, :salt) LIKE CONCAT('%',:host) ``` I have 2 records in that table: ``` id email 1 test@test.com 2 other@test.com ``` When I'm running this query like this: ``` SELECT `id` FROM `accounts` WHERE AES_DECRYPT(`email`, '123') = CONCAT_WS('@','test','test.com') OR AES_DECRYPT(`email`, '123') LIKE CONCAT('%','test.com') ``` I get this as a result: ``` id email 2 other@test.com 1 test@test.com ``` **Question:** What I want is this: I want to have the best match as the first result, without using fulltext search. Is this possible, if so, how can I do this?
You can readily order the results by the number of matches: ``` SELECT `id` FROM `accounts` WHERE AES_DECRYPT(`email`, '123') = CONCAT_WS('@', 'test', 'test.com') OR AES_DECRYPT(`email`, '123') LIKE CONCAT('%','test.com') ORDER BY ( (AES_DECRYPT(`email`, '123') = CONCAT_WS('@', 'test', 'test.com')) + (AES_DECRYPT(`email`, '123') LIKE CONCAT('%','test.com')) ); ``` This will work for your example.
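SQLite treats comparison results as 1/0 just like MySQL, so the ordering trick can be sketched with Python's built-in `sqlite3` (the AES functions and bound parameters are dropped to keep it runnable; `||` replaces `CONCAT`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE accounts (id INTEGER, email TEXT)")
cur.executemany("INSERT INTO accounts VALUES (?, ?)",
                [(1, "test@test.com"), (2, "other@test.com")])

# Each comparison evaluates to 1 or 0, so their sum counts how many
# conditions a row satisfies; DESC puts the best match first.
rows = cur.execute("""
    SELECT id FROM accounts
    WHERE email = 'test' || '@' || 'test.com'
       OR email LIKE '%' || 'test.com'
    ORDER BY (email = 'test' || '@' || 'test.com')
           + (email LIKE '%' || 'test.com') DESC
""").fetchall()
print(rows)
# → [(1,), (2,)]
```

The exact match (id 1) scores 2, the suffix-only match (id 2) scores 1, so id 1 comes first.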
To get records in a specific order, use an ORDER BY clause. ``` SELECT `id` FROM `accounts` WHERE AES_DECRYPT(`email`, :salt) = CONCAT_WS('@',:mailbox,:host) OR AES_DECRYPT(`email`, :salt) LIKE CONCAT('%',:host) order by AES_DECRYPT(`email`, :salt) = CONCAT_WS('@',:mailbox,:host) desc; ``` Here we are using a MySQL special. A boolean expression that evaluates to TRUE results in 1. A boolean expression that evaluates to FALSE results in 0. In another DBMS you could write this instead: ``` order by case when AES_DECRYPT(`email`, :salt) = CONCAT_WS('@',:mailbox,:host) then 1 else 0 end desc; ```
Mysql select by best match with like
[ "", "mysql", "sql", "" ]
I have a stored procedure that returns an integer 1 or 0 depending on specific criteria. It currently uses three select statements and it will be used heavily by multiple users across multiple locations. There has to be a more efficient way of doing this. In short the query checks first to see if all checklist items on an order are completed (a separate table), then it checks to see if a field named BreakOutGuest (a bit field) is a 1 or 0. Depending on that result it checks to see if the total guest count is greater than 0 and the order total is zero. It returns the one or zero on all this criteria. Is there a more efficient way to do this? A temp table so I only have to hit the actual tables once? Below is the code. ``` @ORDERID INT AS BEGIN DECLARE @AUTO_CLOSE INT SET NOCOUNT ON; --If all checklist items are marked complete move on, if not set @AUTO_CLOSE=0 IF NOT EXISTS(SELECT ORDERID FROM dbo.orderchecklistitems WHERE OrderID=@ORDERID AND CompletedON IS NULL) BEGIN --if BreakOutGuestFees is 1 only sum Guest_Count_1 + Guest_Count_2 IF EXISTS(SELECT * FROM dbo.Orders WHERE (GuestCount_1 + GuestCount_2)>1 AND OrderTotal=0 AND BreakoutGuestFees=1) BEGIN SET @AUTO_CLOSE=1 END ELSE SET @AUTO_CLOSE=0 --if BreakOutGuestFees is 0 only consider Guest_Count_1 IF EXISTS(SELECT * FROM dbo.Orders WHERE (GuestCount_1)>1 AND OrderTotal=0 AND BreakoutGuestFees=0) BEGIN SET @AUTO_CLOSE=1 END ELSE SET @AUTO_CLOSE=0 END ELSE SET @AUTO_CLOSE=0 END ```
If I am not wrong, you can combine the two `if` clauses into a single `if` clause by using `AND`/`OR` logic. Try this.

```
IF NOT EXISTS(SELECT ORDERID
              FROM dbo.orderchecklistitems
              WHERE OrderID = @ORDERID
                AND CompletedON IS NULL)
BEGIN
    IF EXISTS(SELECT *
              FROM dbo.Orders
              WHERE ( ( GuestCount_1 + GuestCount_2 > 1
                        AND BreakoutGuestFees = 1 )
                      OR ( BreakoutGuestFees = 0
                           AND GuestCount_1 > 1 ) )
                AND OrderTotal = 0
                AND OrderID = @ORDERID)
        SET @AUTO_CLOSE=1
    ELSE
        SET @AUTO_CLOSE=0
END
ELSE
    SET @AUTO_CLOSE=0
```
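As a quick sanity check, here is a small Python sketch (with made-up guest/total rows) confirming that the combined predicate is logically equivalent to the two separate `IF EXISTS` branches:

```python
# Made-up rows: (GuestCount_1, GuestCount_2, OrderTotal, BreakoutGuestFees)
rows = [
    (2, 1, 0, 1),   # branch 1 should match
    (2, 0, 0, 0),   # branch 2 should match
    (2, 1, 5, 1),   # nonzero total -> no match
    (1, 5, 0, 0),   # fees=0 ignores GuestCount_2 -> no match
]

def branch1(g1, g2, total, fees):
    return g1 + g2 > 1 and total == 0 and fees == 1

def branch2(g1, g2, total, fees):
    return g1 > 1 and total == 0 and fees == 0

def combined(g1, g2, total, fees):
    return ((g1 + g2 > 1 and fees == 1)
            or (fees == 0 and g1 > 1)) and total == 0

# The combined predicate agrees with "branch1 OR branch2" on every row.
matches = [combined(*r) == (branch1(*r) or branch2(*r)) for r in rows]
print(matches)
```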
You can perform your selection check with only one query ``` SELECT (SELECT sum(1) FROM dual WHERE EXISTS (SELECT ORDERID FROM dbo.orderchecklistitems WHERE OrderID=@ORDERID AND CompletedON IS NULL)), (SELECT sum(1) FROM dual WHERE EXISTS (SELECT 1 FROM dbo.Orders WHERE (GuestCount_1 + GuestCount_2)>1 AND OrderTotal=0 AND BreakoutGuestFees=1)), (SELECT sum(1) FROM dual WHERE EXISTS (SELECT 1 FROM dbo.Orders WHERE (GuestCount_1)>1 AND OrderTotal=0 AND BreakoutGuestFees=0)) INTO result1, result2, result3 from dual ``` then check results
Less expensive query?
[ "", "sql", "" ]
IN SQLite I wrote : ``` UPDATE MYTABLE SET MYFIELD = TRIM(MYFIELD); ``` What to do to have this on SQL Server 2014 ?
``` UPDATE MYTABLE SET MYFIELD = LTRIM(RTRIM(MYFIELD)); ``` However, field type must be varchar() and not text. Otherwise you get "Argument data type text is invalid for argument 1 of rtrim function"
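SQLite ships the same two functions, so the composition can be checked there as a quick sketch (not SQL Server itself, but the trimming behaviour is the same):

```python
import sqlite3

# LTRIM strips leading spaces, RTRIM strips trailing ones; composed they
# behave like the TRIM() that SQL Server 2014 lacks.
conn = sqlite3.connect(":memory:")
trimmed = conn.execute("SELECT LTRIM(RTRIM('   hello world   '))").fetchone()[0]
print(repr(trimmed))  # inner spaces are kept
```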
You need functions `LTRIM` (to trim from left) and `RTRIM` (to trim from right): ``` UPDATE MYTABLE SET MYFIELD = LTRIM(RTRIM(MYFIELD)); ```
TRIM function on SQL Server 2014
[ "", "sql", "sql-server", "sqlite", "" ]
I have two tables, for example: TableA with columns GUID,ProgKey,UserKey,ProgName. TableB with columns GUID,AppKey,ModeName. Foreign Key: GUID->TableA.GUID *I need to delete TableB.AppKey with specific value but only with the condition where TableA.UserKey= specific value.* I tried something simple like this but it didn't work: ``` Delete from TableB b, TableA a where b.AppKey=? and a.UserKey=? ``` And tried this, also didn't work: ``` Delete from TableB (AppKey) Select ? From TableB b, TableA a where a.UserKey=? ``` ?=appkeyValue000 ?=userkeyValue000 GUID is a primary key.
<http://dev.mysql.com/doc/refman/5.0/en/delete.html> ``` DELETE tableB FROM tableB JOIN tableA ON (tableB.GUID = tableA.GUID) WHERE tableA.UserKey = 'userkeyValue000' AND tableB.AppKey = 'appkeyValue000' ``` The deletion occurs in the table(s) occurring before the FROM keyword.
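SQLite has no multi-table `DELETE ... JOIN` syntax, but the same deletion can be sketched with an `IN` subquery over made-up rows, which also illustrates exactly which rows the join version removes:

```python
import sqlite3

# Hypothetical data: two users in TableA, three app rows in TableB.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TableA (GUID INTEGER, UserKey TEXT)")
conn.execute("CREATE TABLE TableB (GUID INTEGER, AppKey TEXT)")
conn.executemany("INSERT INTO TableA VALUES (?, ?)", [(1, "u1"), (2, "u2")])
conn.executemany("INSERT INTO TableB VALUES (?, ?)",
                 [(1, "app1"), (1, "app2"), (2, "app1")])

# Delete AppKey 'app1' only where the linked TableA row has UserKey 'u1'.
conn.execute("""
    DELETE FROM TableB
    WHERE AppKey = 'app1'
      AND GUID IN (SELECT GUID FROM TableA WHERE UserKey = 'u1')
""")
remaining = conn.execute(
    "SELECT GUID, AppKey FROM TableB ORDER BY GUID, AppKey").fetchall()
print(remaining)
```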
delete from table where id in (select id from table2 where y=234)?
SQL Delete with relation to another table
[ "", "mysql", "sql", "delete-row", "" ]
I have the following table named **Table: activity** ``` userid appid type 1 a imp 2 a imp 2 a click 3 a imp 4 a imp 4 a click 5 b imp 5 b click ``` I am trying to calculate the click-through rate for each `appid`. In this instance we define a click-through as the (number of clicks) / (number of impressions). I have written the following SQL: ``` SELECT appid, type, count(*) from activity group by appid, type ``` and got the following outcome: **Output:** ``` appid type count(*) a click 2 a imp 4 b click 1 b imp 1 ``` The next step is to do a row-wise division. Ultimately, I would like to achieve the following: **Goal:** ``` appid click-through a .5 # 2/4 = .5 b 1 # 1/1 = 1 ``` How is this achieved? Ideally I want this to be done in one query, is that possible?
You can use conditional aggregation to do this: ``` select appid , SUM(CASE WHEN type = 'click' THEN 1 END)*1.0 / SUM(CASE WHEN type = 'imp' THEN 1 END) AS click_through from activity group by appid ``` Demo: [SQL Fiddle](http://sqlfiddle.com/#!2/ad1aa/2/0) If using MySQL you can further simplify with: ``` select appid , SUM(type = 'click')*1.0 / SUM(type = 'imp') AS click_through from activity group by appid ```
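The conditional-aggregation query can be run against the sample data from the question in SQLite as a quick check (the `* 1.0` keeps the division from being integer division):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE activity (userid INTEGER, appid TEXT, type TEXT)")
conn.executemany("INSERT INTO activity VALUES (?, ?, ?)", [
    (1, "a", "imp"), (2, "a", "imp"), (2, "a", "click"),
    (3, "a", "imp"), (4, "a", "imp"), (4, "a", "click"),
    (5, "b", "imp"), (5, "b", "click"),
])

# CASE without ELSE yields NULL, which SUM ignores, so each SUM is a count.
ctr = conn.execute("""
    SELECT appid,
           SUM(CASE WHEN type = 'click' THEN 1 END) * 1.0 /
           SUM(CASE WHEN type = 'imp' THEN 1 END) AS click_through
    FROM activity
    GROUP BY appid
    ORDER BY appid
""").fetchall()
print(ctr)
```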
Just count the impressions and clicks in subqueries and join them together: ``` select appid, num_impressions, num_clicks, cast(num_clicks as float) / num_impressions as ctr from( select appid, count(1) as num_impressions from activity where type = 'imp' group by appid )a join( select appid, count(1) as num_clicks from activity where type = 'click' group by appid )b on (a.appid = b.appid); ``` Note the type cast on `num_clicks` in `ctr` to avoid integer division.
SQL divide data in rows
[ "", "sql", "aggregate-functions", "" ]
I have a table with 3 columns: ``` Index, Time_start, Time_stop ``` The `Time` columns are of type `time(7)`. I need a view (or select statement) that will list the three columns along with a fourth column that will state `In` or `Out` if the moment when the script is run is or not between `Time_start` and `Time_stop` for each row. I tried using a case statement: ``` case getdate() between Time_start and Time_stop then 'Yes' ``` but it's not working.
You can do it by converting the value of `GETDATE()` to a `TIME`: ``` -- you may need to change these times to values relevant to your local time DECLARE @time1 AS TIME(7) = '12:15' DECLARE @time2 AS TIME(7) = '16:15' SELECT @time1 AS Time_start, @time2 AS Time_stop INTO #tmp_time SELECT *, CASE WHEN CONVERT(TIME(7), GETDATE()) BETWEEN Time_start AND Time_stop THEN 'In' ELSE 'Out' END FROM #tmp_time DROP TABLE #tmp_time ``` **Caveat** This will not filter dates, so I assume that all records in the recordset you are querying are for the same day, otherwise this will take records from other days too.
You have the right idea with a `CASE` expression, but you need the `TIME` portion of the `GETDATE()` as mentioned in comments: ``` SELECT *,CASE WHEN CAST(GETDATE() AS TIME) BETWEEN Time_start AND Time_stop THEN 'In' ELSE 'Out' END AS In_Out FROM YourTable ```
Establish if time portion of getdate() is in a specific interval
[ "", "sql", "sql-server", "t-sql", "" ]
I have the following table: ``` +-----------+-----------+-------+ | ItemCode1 | ItemCode2 | Value | +-----------+-----------+-------+ | X1 | Y1 | 1 | | X2 | Y1 | 50 | | X3 | Y3 | 1 | | X4 | Y4 | 20 | | X5 | Y4 | 1 | +-----------+-----------+-------+ ``` And I'd like to select 1 ItemCode1 for each distinct ItemCode2, based on the highest value. I.E, the output table should look like: ``` +-----------+-----------+-------+ | ItemCode1 | ItemCode2 | Value | +-----------+-----------+-------+ | X2 | Y1 | 50 | | X3 | Y3 | 1 | | X4 | Y4 | 20 | +-----------+-----------+-------+ ``` I know it should be quite easy but for some reason, I can't get this one... Help would be truly appreciated!
Select all rows where value = max(value) for the ItemCode2. ``` select ItemCode1, ItemCode2, Value from tablename t1 where Value = (select max(Value) from tablename t2 where t1.ItemCode2 = t2.ItemCode2) ``` Note that if several rows have same max value, they will all be returned.
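A quick SQLite check of the correlated subquery against the question's sample rows (the table name `items` is made up for the sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (ItemCode1 TEXT, ItemCode2 TEXT, Value INTEGER)")
conn.executemany("INSERT INTO items VALUES (?, ?, ?)", [
    ("X1", "Y1", 1), ("X2", "Y1", 50), ("X3", "Y3", 1),
    ("X4", "Y4", 20), ("X5", "Y4", 1),
])

# Keep only rows whose Value equals the max Value for their ItemCode2.
best = conn.execute("""
    SELECT ItemCode1, ItemCode2, Value
    FROM items t1
    WHERE Value = (SELECT MAX(Value) FROM items t2
                   WHERE t1.ItemCode2 = t2.ItemCode2)
    ORDER BY ItemCode2
""").fetchall()
print(best)
```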
``` select t1.itemcode1, t1.itemcode2, t1.value from your_table t1 join ( select max(value) mvalue, itemcode2 from your_table group by itemcode2 ) t2 on t1.value = t2.mvalue and t1.itemcode2 = t2.itemcode2 ```
SQL - Getting specific rows by complicated condition
[ "", "sql", "sql-server", "" ]
How can I add the LastDocumentID column like so: ``` +------------+-----------+----------------+ | DocumentID | Reference | LastDocumentID | +------------+-----------+----------------+ | 1 | A | NULL | | 2 | A | 1 | | 3 | A | 2 | | 4 | B | NULL | | 5 | B | 4 | | 6 | C | NULL | | 7 | C | 6 | | 8 | C | 7 | | 9 | C | 8 | +------------+-----------+----------------+ ``` The table could be in a random order, but in the Last Document ID I essentially want it to get the Max Document ID that is less than that row's Document ID for that row's Reference.
You can get any value from the "last document" this way: ``` SELECT D.DocumentID, D.Reference, LastDocumentID = R.DocumentID FROM dbo.Documents D OUTER APPLY ( SELECT TOP 1 * FROM dbo.Documents R WHERE D.Reference = R.Reference AND R.DocumentID < D.DocumentID ORDER BY R.DocumentID DESC ) R ; ``` # [See this working in a SQL Fiddle](http://sqlfiddle.com/#!3/9fc74/2) Though having identical logic to similar methods that compute just the column value in a subquery in the `WHERE` clause, this allows you to pull multiple columns from the previous document, and demonstrates `OUTER APPLY`. Change to `CROSS APPLY` if you want the equivalent `INNER` join (excluding rows that have no previous). For reference, here's the single-value way to do it. You basically put the query contained in the `OUTER APPLY` into parentheses, and only select one column: ``` SELECT D.DocumentID, D.Reference, LastDocumentID = ( SELECT TOP 1 R.DocumentID FROM dbo.Documents R WHERE D.Reference = R.Reference AND R.DocumentID < D.DocumentID ORDER BY R.DocumentID DESC ) FROM dbo.Documents D ; ``` Alternately, you can just use `Max`: ``` SELECT D.DocumentID, D.Reference, LastDocumentID = ( SELECT Max(R.DocumentID) FROM dbo.Documents R WHERE D.Reference = R.Reference AND R.DocumentID < D.DocumentID ) FROM dbo.Documents D ; ``` If you were using SQL Server 2012 and up, you could do it this way using its more advanced syntax available for windowing functions: ``` SELECT D.DocumentID, D.Reference, LastDocumentID = Max(D.DocumentID) OVER ( PARTITION BY D.Reference ORDER BY D.DocumentID ASC ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING ) FROM dbo.Documents D ; ```
In SQL Server 2012+ you can use `lag()`. In SQL Server 2008, you can use a correlated subquery or outer apply. Here is one method: ``` select documentid, reference, (select top 1 documentid from table t2 where t2.reference = t.reference and t2.documentid < t.documentid order by documentid desc ) as LastDocumentId from table t; ```
SQL Get Last Occurrence of Field Against Each Row
[ "", "sql", "sql-server-2008", "" ]
Using SQL Server 2008, say I have a table called `testing` with 80 columns and I want to find a value called `foo`. I can do: ``` SELECT * FROM testing WHERE COLNAME = 'foo' ``` Is it possible I can query all 80 columns and return all the results where `foo` is contained in any of the 80 columns?
You can use `in`: ``` SELECT * FROM testing WHERE 'foo' in (col1, col2, col3, . . . ); ```
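SQLite accepts the same `IN (col1, col2, ...)` form, so the pattern can be tried with a three-column stand-in for the 80 columns:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE testing (col1 TEXT, col2 TEXT, col3 TEXT)")
conn.executemany("INSERT INTO testing VALUES (?, ?, ?)", [
    ("foo", "x", "y"),
    ("a", "b", "c"),
    ("p", "q", "foo"),
])

# A row matches if ANY listed column holds the value 'foo'.
hits = conn.execute("""
    SELECT col1, col2, col3 FROM testing
    WHERE 'foo' IN (col1, col2, col3)
    ORDER BY col1
""").fetchall()
print(hits)
```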
**First Method(Tested)** First get list of columns in string variable separated by commas and then you can search 'foo' using that variable by use of `IN` Check stored procedure below which first gets columns and then searches for string: ``` DECLARE @TABLE_NAME VARCHAR(128) DECLARE @SCHEMA_NAME VARCHAR(128) ----------------------------------------------------------------------- -- Set up the name of the table here : SET @TABLE_NAME = 'testing' -- Set up the name of the schema here, or just leave set to 'dbo' : SET @SCHEMA_NAME = 'dbo' ----------------------------------------------------------------------- DECLARE @vvc_ColumnName VARCHAR(128) DECLARE @vvc_ColumnList VARCHAR(MAX) IF @SCHEMA_NAME ='' BEGIN PRINT 'Error : No schema defined!' RETURN END IF NOT EXISTS (SELECT * FROM sys.tables T JOIN sys.schemas S ON T.schema_id=S.schema_id WHERE T.Name=@TABLE_NAME AND S.name=@SCHEMA_NAME) BEGIN PRINT 'Error : The table '''+@TABLE_NAME+''' in schema '''+ @SCHEMA_NAME+''' does not exist in this database!' RETURN END DECLARE TableCursor CURSOR FAST_FORWARD FOR SELECT CASE WHEN PATINDEX('% %',C.name) > 0 THEN '['+ C.name +']' ELSE C.name END FROM sys.columns C JOIN sys.tables T ON C.object_id = T.object_id JOIN sys.schemas S ON S.schema_id = T.schema_id WHERE T.name = @TABLE_NAME AND S.name = @SCHEMA_NAME ORDER BY column_id SET @vvc_ColumnList='' OPEN TableCursor FETCH NEXT FROM TableCursor INTO @vvc_ColumnName WHILE @@FETCH_STATUS=0 BEGIN SET @vvc_ColumnList = @vvc_ColumnList + @vvc_ColumnName -- get the details of the next column FETCH NEXT FROM TableCursor INTO @vvc_ColumnName -- add a comma if we are not at the end of the row IF @@FETCH_STATUS=0 SET @vvc_ColumnList = @vvc_ColumnList + ',' END CLOSE TableCursor DEALLOCATE TableCursor -- Now search for `foo` SELECT * FROM testing WHERE 'foo' in (@vvc_ColumnList ); ``` **2nd Method** In sql server you can get object id of table then using that object id you can fetch columns. 
In that case it will be as below: **Step 1:** First get Object Id of table ``` select * from sys.tables order by name ``` **Step 2:** Now get columns of your table and search in it: ``` select * from testing where 'foo' in (select name from sys.columns where object_id =1977058079) ``` Note: object\_id is what you get fetch in first step for you relevant table
SQL Server SELECT where any column contains 'x'
[ "", "sql", "sql-server", "sql-server-2008", "select", "" ]
I have a string such as `'kj,,,,,,,,,sdkdsd,das,das,,,,dasdasd,,,,,ad'` and I want to replace each run of multiple `commas` with a single one, giving `'kj,sdkdsd,das,das,dasdasd,ad'`. Note: best performance matters.
Use this: ``` DECLARE @mystring VARCHAR(50) = 'kj,,,,,,,,,sdkdsd,das,das,,,,dasdasd,,,,,ad' SELECT REPLACE(REPLACE(REPLACE(@mystring, ',', '{}'), '}{', ''), '{}', ',') ``` **Output:** `kj,sdkdsd,das,das,dasdasd,ad`
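The trick is easiest to see traced step by step; the same three replacements in Python (plain string semantics match T-SQL's `REPLACE` here):

```python
s = "kj,,,,,,,,,sdkdsd,das,das,,,,dasdasd,,,,,ad"

step1 = s.replace(",", "{}")       # every comma becomes a bracket pair
step2 = step1.replace("}{", "")    # adjacent pairs in a run merge into one pair
result = step2.replace("{}", ",")  # each surviving pair becomes one comma
print(result)
```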
This will do it for you: ``` declare @str varchar(100)='kj,,,,,,,,,sdkdsd,das,das,,,,dasdasd,,,,,ad' declare @strBefore varchar(100)='' WHILE @strBefore<>@str BEGIN SET @strBefore=@str SET @str=REPLACE(@str,',,',',') END print @str ```
how to replace multiple continuous string with single string in sql
[ "", "sql", "sql-server", "sql-server-2008", "" ]
A query of the format `select * from student where id in(1,2,3,4)` works like ``` select * from student where id=1; select * from student where id=2; select * from student where id=3; select * from student where id=4; ``` *This is what I know to the best of my knowledge.* **Now I want to fire a query with multiple WHERE IN CLAUSE** ``` SELECT Count(*) NO_OF_PO, QUANTITY FROM INTER_DEFECT idp WHERE DEFECT_ID IN(1,2,3) AND PO_NUMBER IN (4,5,6) AND SUPP_ID IN (7,8,9) AND LINE_NO IN (10,11,12) AND RECEIPT_NO IN(13,14,15) ``` I want this to fire like this in reality ``` SELECT Count(*) NO_OF_PO, QUANTITY FROM INTER_DEFECT idp WHERE DEFECT_ID =1 AND PO_NUMBER =4 AND SUPP_ID = 7 AND LINE_NO = 10 AND RECEIPT_NO=13 SELECT Count(*) NO_OF_PO, QUANTITY FROM INTER_DEFECT idp WHERE DEFECT_ID =2 AND PO_NUMBER =5 AND SUPP_ID = 8 AND LINE_NO = 11 AND RECEIPT_NO=14 ``` etc etc. So you see the **Parameter order should not change** like DEFECT\_ID =1 should fire with PO\_NUMBER =4 for example `DEFECT_ID =1 AND PO_NUMBER =4 ....` And DEFECT\_ID =2 should fire with PO\_NUMBER =5 for example `DEFECT_ID =2 AND PO_NUMBER =5 ....` **Is this possible in Oracle or any other Database ?**
Oracle supports `in` clauses with multiple expressions. That is, you can do: ``` SELECT Count(*) NO_OF_PO, QUANTITY FROM INTER_DEFECT idp WHERE (DEFECT_ID, PO_NUMBER, SUPP_ID, LINE_NO, RECEIPT_NO) IN ( (1, 4, 7, 10, 13), (2, 5, 8, 11, 14), (3, 6, 9, 12, 15) ); ``` Not all databases support this construct. Normally, you would just use `AND` and `OR`: ``` WHERE (DEFECT_ID = 1 and PO_NUMBER = 4 . . . ) OR (DEFECT_ID = 2 and PO_NUMBER = 5 . . . ) OR (DEFECT_ID = 3 and PO_NUMBER = 6 . . . ) ``` Or use `join` with a CTE or subquery that has the right values.
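For what it's worth, SQLite (3.15 and later) also accepts the multi-column `IN` form, so the construct can be tried outside Oracle; this sketch shortens the five columns to two:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inter_defect (defect_id INTEGER, po_number INTEGER)")
conn.executemany("INSERT INTO inter_defect VALUES (?, ?)", [
    (1, 4), (2, 5), (3, 6), (1, 5), (2, 4),
])

# Only exact (defect_id, po_number) pairs match -- (1,5) and (2,4) do not.
matched = conn.execute("""
    SELECT defect_id, po_number
    FROM inter_defect
    WHERE (defect_id, po_number) IN (VALUES (1, 4), (2, 5), (3, 6))
    ORDER BY defect_id
""").fetchall()
print(matched)
```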
``` select * from student where id in(1,2,3,4) ``` works as ``` select * from student where id = 1 or id = 2 or id = 3 or id = 4 ``` as for your 2nd question, try this ``` SELECT Count(*) NO_OF_PO, QUANTITY FROM INTER_DEFECT idp WHERE (DEFECT_ID =1 AND PO_NUMBER =4 AND SUPP_ID = 7 AND LINE_NO = 10 AND RECEIPT_NO=13 or DEFECT_ID =2 AND PO_NUMBER =5 AND SUPP_ID = 8 AND LINE_NO = 11 AND RECEIPT_NO=14) ```
Can we fire a sql query with multiple where IN clause , if yes how does it work?
[ "", "sql", "where-in", "" ]
I have user1 who exchanged messages with user2 and user4 (these parameters are known). I now want to select the latest sent or received message for each conversation (i.e. LIMIT 1 for each conversation). **[SQLFiddle](http://sqlfiddle.com/#!2/96407/1)** Currently my query returns all messages for all conversations: ``` SELECT * FROM message WHERE (toUserID IN (2,4) AND userID = 1) OR (userID IN (2,4) AND toUserID = 1) ORDER BY message.time DESC ``` The returned rows should be messageID 3 and 6.
Assuming that higher `id` values indicate more recent messages, you can do this: * Find all messages that involve user 1 * Group the results by the other user id * Get the maximum message id per group ``` SELECT * FROM message WHERE messageID IN ( SELECT MAX(messageID) FROM message WHERE userID = 1 -- optionally filter by the other user OR toUserID = 1 -- optionally filter by the other user GROUP BY CASE WHEN userID = 1 THEN toUserID ELSE userID END ) ORDER BY messageID DESC ``` [Updated SQLFiddle](http://sqlfiddle.com/#!2/6962b/1)
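A small SQLite sketch of this approach, using made-up message rows (user 1 talking to users 2 and 4), picks out the newest message id per conversation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE message (messageID INTEGER, userID INTEGER, toUserID INTEGER)")
conn.executemany("INSERT INTO message VALUES (?, ?, ?)", [
    (1, 1, 2), (2, 2, 1), (3, 2, 1),   # conversation between users 1 and 2
    (4, 1, 4), (5, 4, 1), (6, 1, 4),   # conversation between users 1 and 4
])

# Group by "the other user" regardless of direction, keep the max id per group.
latest = conn.execute("""
    SELECT messageID FROM message
    WHERE messageID IN (
        SELECT MAX(messageID) FROM message
        WHERE userID = 1 OR toUserID = 1
        GROUP BY CASE WHEN userID = 1 THEN toUserID ELSE userID END
    )
    ORDER BY messageID DESC
""").fetchall()
print(latest)
```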
There are **two parts** of your query in the **following order**: 1. You want the latest outgoing or incoming message for a conversation between two users 2. You want these latest messages for two different pairs of users, i.e. conversations. So, lets get the latest message for a conversation between UserID a and UserID b: ``` SELECT * FROM message WHERE (toUserID, userID) IN ((a, b), (b, a)) ORDER BY message.time DESC LIMIT 1 ``` Then you want these to be combined for the two conversations between UserIDs 1 and 2 and UserIDs 1 and 4. This is where the union comes into play (we do not need to check for duplicates, thus we use UNION ALL, thanks to Marcus Adams, who brought that up first). So a **complete and straightforward solution** would be: ``` (SELECT * FROM message WHERE (toUserID, userID) IN ((2, 1), (1, 2)) ORDER BY message.time DESC LIMIT 1) UNION ALL (SELECT * FROM message WHERE (toUserID, userID) IN ((4, 1), (1, 4)) ORDER BY message.time DESC LIMIT 1) ``` And as expected, you get message 3 and 6 in your [SQLFiddle](http://sqlfiddle.com/#!2/96407/26).
Select most recent record based on two conditions
[ "", "mysql", "sql", "" ]
I'm pretty much a noob when it comes to SQL so any help would be appreciated. I have a large data set that I am filtering through for a hospital. I am pulling data from 6 different tables and one of my tables has duplicate rows for each visit. I only want to pull in one row for each visit (it doesn't matter which row is pulled in). I know I need to use a DISTINCT, or GROUP BY clause but my syntax must be wrong. ``` SELECT ADV.[VisitID] AS VisitID ,ADV.[Name] AS Name ,ADV.[UnitNumber] AS UnitNumber ,CONVERT(DATE,ADV.[BirthDateTime]) AS BirthDate ,ADV.[ReasonForVisit] AS ReasonForVisit ,ADV.[AccountNumber] AS AccountNumber ,DATEDIFF(day, ADV.ServiceDateTime, DIS.DischargeDateTime) AS LOS ,ADV.[HomePhone] AS PhoneNumber ,ADV.[ServiceDateTime] AS ServiceDateTime ,ADV.[Status] AS 'Status' ,PRV.[PrimaryCareID] AS PCP ,LAB.[TestMnemonic] AS Test ,LAB.[ResultRW] AS Result ,LAB.[AbnormalFlag] AS AbnormalFlag ,LAB.[ResultDateTime] AS ResultDateTime ,DIS.[Diagnosis] AS DischargeDiagnosis ,DIS.[ErDiagnosis] AS ERDiagnosis ,DCP.[TextLine] AS ProblemList FROM Visits ADV LEFT JOIN Tests LAB ON ( LAB.VisitID = ADV.VisitID AND LAB.SourceID = ADV.SourceID ) LEFT JOIN Discharge DIS ON ( DIS.VisitID = LAB.VisitID AND DIS.SourceID = LAB.SourceID ) LEFT JOIN Providers PRV ON ( PRV.VisitID = DIS.VisitID AND PRV.SourceID = DIS.SourceID ) LEFT JOIN ProblemListVisits EPS ON ( EPS.VisitID = PRV.VisitID AND EPS.SourceID = PRV.SourceID ) LEFT JOIN ProblemList DCP ON ( DCP.PatientID = EPS.PatientID AND DCP.SourceID = EPS.SourceID ) WHERE ( DCP.[TextLine] LIKE '%Diabetes%' OR DCP.[TextLine] LIKE '%Diabetic%' OR DCP.[TextLine] LIKE '%DM2%' OR DCP.[TextLine] LIKE '%DKA%' OR DCP.[TextLine] LIKE '%Hyperglycemia%' OR DCP.[TextLine] LIKE '%Hypoglycemia%' ) AND ( LAB.[TestMnemonic] = 'GLU' OR LAB.[TestMnemonic] = '%HA1C' ) AND ADV.[Status] != 'DIS CLI' ) ``` So this works okay, but when the doctor goes into the patient's Problem List and makes a change it refiles the whole list, which populates the 
ProblemList table again. So for 1 visit, I may get 4 duplicate entries thanks to the ProblemList and I only want one. It doesn't matter which one either. I tried referencing other questions and nest another SELECT statement in but I just kept getting syntax errors. This is what the duplicate values look like: ``` 1111111111 SMITH,JOHN 1111 1/1/1901 CHEST PAIN 1111 2 111-111-1111 1/1/1901 12:15 DIS IN DOEJO GLU 120 H 1/2/1901 6:35 NULL CHEST PAIN Diabetes type 2, controlled 1111111111 SMITH,JOHN 1111 1/1/1901 CHEST PAIN 1111 2 111-111-1111 1/1/1901 12:15 DIS IN DOEJO GLU 120 H 1/2/1901 6:35 NULL CHEST PAIN Diabetes type 2, controlled 1111111111 SMITH,JOHN 1111 1/1/1901 CHEST PAIN 1111 2 111-111-1111 1/1/1901 12:15 DIS IN DOEJO GLU 120 H 1/2/1901 6:35 NULL CHEST PAIN Diabetes type 2, controlled 1111111111 SMITH,JOHN 1111 1/1/1901 CHEST PAIN 1111 2 111-111-1111 1/1/1901 12:15 DIS IN DOEJO GLU 120 H 1/2/1901 6:35 NULL CHEST PAIN Diabetes type 2, controlled ``` At the very end, the 'Diabetes type 2, controlled' is what's causing the duplicates. If I remove the ProblemListVisit and ProblemList tables from the query I get only one row of data. What's most important is getting all of the unique Test results but NOT all the duplicate entries of the problem list (just want to know what type of diabetes they have, ONCE). Thanks!
In lieu of `DISTINCT` which I think would be the quickest way of achieving this, you could also move each of your tables that generate multiple lines into a subquery wherein you GROUP BY the values that you seek for your JOINS and SELECTS. There are two advantages here: 1. You gain more control over the output from these more granular tables and 2. you reduce the overhead on the JOIN, which will cut your I/O and CPU usage, when you restrict what they allow through with the WHERE clause inside the subquery. Code: ``` SELECT ADV.[VisitID] AS VisitID ,ADV.[Name] AS Name ,ADV.[UnitNumber] AS UnitNumber ,CONVERT(DATE,ADV.[BirthDateTime]) AS BirthDate ,ADV.[ReasonForVisit] AS ReasonForVisit ,ADV.[AccountNumber] AS AccountNumber ,DATEDIFF(day, ADV.ServiceDateTime, DIS.DischargeDateTime) AS LOS ,ADV.[HomePhone] AS PhoneNumber ,ADV.[ServiceDateTime] AS ServiceDateTime ,ADV.[Status] AS 'Status' ,PRV.[PrimaryCareID] AS PCP ,LAB.[TestMnemonic] AS Test ,LAB.[ResultRW] AS Result ,LAB.[AbnormalFlag] AS AbnormalFlag ,LAB.[ResultDateTime] AS ResultDateTime ,DIS.[Diagnosis] AS DischargeDiagnosis ,DIS.[ErDiagnosis] AS ERDiagnosis ,DCP.[TextLine] AS ProblemList FROM Visits ADV LEFT JOIN Tests LAB ON ( LAB.VisitID = ADV.VisitID AND LAB.SourceID = ADV.SourceID ) LEFT JOIN Discharge DIS ON ( DIS.VisitID = LAB.VisitID AND DIS.SourceID = LAB.SourceID ) LEFT JOIN Providers PRV ON ( PRV.VisitID = DIS.VisitID AND PRV.SourceID = DIS.SourceID ) LEFT JOIN ( SELECT VisitID, SourceID, PatientID FROM ProblemListVisits GROUP BY VisitID, SourceID, PatientID ) EPS ON ( EPS.VisitID = PRV.VisitID AND EPS.SourceID = PRV.SourceID ) LEFT JOIN ( SELECT PatientID, SourceID, TextLine FROM ProblemList WHERE [TextLine] LIKE '%Diabetes%' OR [TextLine] LIKE '%Diabetic%' OR [TextLine] LIKE '%DM2%' OR [TextLine] LIKE '%DKA%' OR [TextLine] LIKE '%Hyperglycemia%' OR [TextLine] LIKE '%Hypoglycemia%' GROUP BY PatientID, SourceID, TextLine ) DCP ON ( DCP.PatientID = EPS.PatientID AND DCP.SourceID = EPS.SourceID ) WHERE 
( LAB.[TestMnemonic] = 'GLU' OR LAB.[TestMnemonic] = '%HA1C' ) AND ADV.[Status] != 'DIS CLI' ) ``` In the event that you are still getting multiples it suggests that [TextLine] has more than one value for each VisitID/PatientID combination in your ProblemList table. At that point you can remove that one from your GROUP BY clause and use some sort of aggregation on that field like `MAX([TextLine])` in your subquery. I suspect, though, that you won't have duplicates after using `DISTINCT` or using this subquery method.
the `Distinct` clause should do the trick but if not you can change ``` LEFT JOIN ProblemList DCP ON ( DCP.PatientID = EPS.PatientID AND DCP.SourceID = EPS.SourceID ) ``` for ``` OUTER APPLY (Select top 1 DCP.[TextLine] FROM ProblemList DCP WHERE DCP.PatientID = EPS.PatientID AND DCP.SourceID = EPS.SourceID) DCP ```
SQL Duplicate Rows Multiple Joins
[ "", "sql", "duplicates", "left-join", "" ]
I try to change the value of LAST\_NUMBER in a sequence in sql developer v4 using the graphical interface only. When I click the edit icon next to the value I am unable to change the field. What I see is following: ![enter image description here](https://i.stack.imgur.com/KKixv.jpg) My question is: is there a way to edit the value inline using only the graphical interface?
You can't change `LAST_NUMBER`, it's the database's internal record of the [highest value reserved in the cache and written to disk](http://docs.oracle.com/cd/E11882_01/server.112/e40402/statviews_2062.htm) for crash recovery. You generally can't (and shouldn't) change anything in the data dictionary. If you want to reset the sequence to 1 then you can change the increment to a negative value (equal to the current value) and call nextval, then change the increment back to 1; or drop and recreate the sequence; or from 12c you can explicitly [restart it](https://stackoverflow.com/a/28714053/266304). Since you're on 11g see [How do I reset a sequence in Oracle?](https://stackoverflow.com/q/51470/266304). You could do some of that from the SQL Developer object viewer, but not in one step. You can change the increment by clicking on the edit button right under the 'details' tab, but would then have to call nextval somewhere else before changing back. And you could drop the sequence from the 'actions' drop-down, but then you'd need to recreate it as a separate action.
You can get the SQL from the 3rd tab shown in the screenshot, change the initial value in it, and then create the sequence again with the same name to reset it from 1.
Edit Sequence values using sql developer interface
[ "", "sql", "oracle", "oracle11g", "sequence", "oracle-sqldeveloper", "" ]
I am using Aqua Data Studio 7.0.39 for my Database Stuff. I have a 20 SQL files(all contains sql statements, obviously). I want to execute all rather than copy-paste contains for each. Is there any way in Aqua to do such things. Note: I am using Sybase Thank you !!
I'm also not sure how to do this in Aqua, but it's very simple to create a batch/PowerShell script to execute .sql files. You can use the SAP/Sybase `isql` utility to execute files, and just create a loop to cover all the files you wish to execute. Check my answer here for more information: [Running bulk of SQL Scripts in Sybase through batch](https://stackoverflow.com/questions/16300180/running-bulk-of-sql-scripts-in-sybase-through-batch/16303043#16303043)
In the latest versions of ADS there is an integrated shell named FluidShell where you can achieve what you are looking for. See an overview here: <https://www.aquaclusters.com/app/home/project/public/aquadatastudio/wikibook/Documentation15/page/246/FluidShell>

The command you are looking for is [source](https://www.aquaclusters.com/app/home/project/public/aquadatastudio/wikibook/Documentation15/page/289/source):

```
NAME
    source - execute commands or SQL statements from a file

SYNOPSIS
    source [OPTION...] FILE [ARGUMENT...]
    source [OPTION...]

DESCRIPTION
    Read and execute commands or SQL statements from FILE in the current
    shell environment.
```
How to execute SQL queries from text files
[ "", "sql", "database", "sybase", "aquafold", "" ]
I know this is an amateur question, but I've searched every resource I can think of, and now I'm at my wit's end. The following query works perfectly on most of my tables, but for some reason it is not working on the tables that I desperately need it for: `SELECT COUNT(*) FROM radio_r1_own_it WHERE daypart LIKE 'AM';` The query works exactly how I want it to for nearly all of my tables, but for some reason it is returning a value of "0" on the tables I need it for (even though there are over 20 instances of "AM" in the "daypart" column on this table). I have checked and double-checked everything I can think of. I'm relatively new to SQL but I've never encountered a problem like this before. Anyone have any ideas or resources that might help? Thanks so much for your time! EDIT: I don't have enough reputation points to post a screen shot on here... but here's a link where you can see one: <https://i.stack.imgur.com/u9jUp.jpg> There are 29 columns in this table. If there's any other info that might help just let me know, thanks!
Try this. ``` SELECT COUNT(*) FROM radio_r1_own_it WHERE daypart LIKE '%AM%'; ``` If you want to order it using the count, ``` SELECT COUNT(*) FROM radio_r1_own_it WHERE daypart LIKE '%AM%' ORDER BY COUNT(*) DESC; ``` DESC - Descending order ASC - Ascending order
You need to add wildcards to the pattern, as shown below:

```
where column_name like '%AM%'
```

When you write `like 'AM'`, it searches for an exact full match only.
Why is my count function only working on certain tables?
[ "", "mysql", "sql", "" ]
I have had a look through the other questions and can't quite find what i'm looking for I have an SQL Database and in it a table called InventoryAllocations. In the table I have multiple entries for DocumentID's and want to retrieve the last entry for each unique DocumentID. I can retrieve just one by doing ``` SELECT top(1) [UID] ,[RecordStatusID] ,[CreatedDate] ,[CreatedTime] ,[CreatedByID] ,[OperationType] ,[InventoryLocationID] ,[DocumentTypeID] ,[DocumentID] ,[SOJPersonnelID] ,[InventorySerialisedItemID] ,[TransactionQty] ,[TransactionInventoryStatusID] ,[Completed] ,[CreatedByType] ,[RecordTimeStamp] FROM [CPData].[dbo].[InventoryAllocations] order by DocumentID desc ``` but I want it to bring back a list containing all the unique DocumentID's.I hope you can help. Many Thanks Hannah x
``` SELECT TOP 1 WITH TIES [UID] ,[RecordStatusID] ,[CreatedDate] ,[CreatedTime] ,[CreatedByID] ,[OperationType] ,[InventoryLocationID] ,[DocumentTypeID] ,[DocumentID] ,[SOJPersonnelID] ,[InventorySerialisedItemID] ,[TransactionQty] ,[TransactionInventoryStatusID] ,[Completed] ,[CreatedByType] ,[RecordTimeStamp] FROM [CPData].[dbo].[InventoryAllocations] ORDER BY ROW_NUMBER() OVER(PARTITION BY DocumentID ORDER BY [RecordTimeStamp] DESC); ``` `TOP 1` works with `WITH TIES` here. `WITH TIES` means that when `ORDER BY = 1`, then `SELECT` takes this record (because of `TOP 1`) and all others that have `ORDER BY = 1` (because of `WITH TIES`).
``` You can use a RowNumber() Window Function. SELECT * FROM( SELECT ROW_NUMBER() OVER(PARITION BY [DOCUMENTID] ORDER BY [RecordTimeStamp] DESC) AS RowNumber, ,[RecordStatusID] ,[CreatedDate] ,[CreatedTime] ,[CreatedByID] ,[OperationType] ,[InventoryLocationID] ,[DocumentTypeID] ,[DocumentID] ,[SOJPersonnelID] ,[InventorySerialisedItemID] ,[TransactionQty] ,[TransactionInventoryStatusID] ,[Completed] ,[CreatedByType] ,[RecordTimeStamp] FROM [CPData].[dbo].[InventoryAllocations] ) as A WHERE RowNumber = 1 ```
SQL SELECT TOP 1 FOR EACH GROUP
[ "", "sql", "sql-server", "greatest-n-per-group", "" ]
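`TOP 1 WITH TIES` and `ROW_NUMBER()` are SQL Server features; the same "latest row per DocumentID" result can be sketched portably with a correlated subquery. A minimal demo in SQLite via Python, with invented sample rows (note: if two rows share the maximum timestamp for a document, this variant returns both):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE InventoryAllocations "
             "(UID INTEGER, DocumentID INTEGER, RecordTimeStamp TEXT)")
conn.executemany("INSERT INTO InventoryAllocations VALUES (?,?,?)", [
    (1, 100, "2015-01-01"), (2, 100, "2015-02-01"),
    (3, 200, "2015-01-15"), (4, 300, "2015-03-01"),
])

# Keep each row whose timestamp is the maximum for its DocumentID.
latest = conn.execute("""
    SELECT UID, DocumentID, RecordTimeStamp
    FROM InventoryAllocations AS a
    WHERE RecordTimeStamp = (SELECT MAX(b.RecordTimeStamp)
                             FROM InventoryAllocations AS b
                             WHERE b.DocumentID = a.DocumentID)
    ORDER BY DocumentID
""").fetchall()
# -> [(2, 100, '2015-02-01'), (3, 200, '2015-01-15'), (4, 300, '2015-03-01')]
```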
I have a following table (a simplified example, in fact the table contains multiple IDs, with variable numbers of dates, and variable number of events for each date): ``` IDs Date Event 102 1996-10-16 00:00:00 A 102 1996-10-23 00:00:00 A 102 1996-10-23 00:00:00 B 102 1997-01-14 00:00:00 A 103 1997-01-14 00:00:00 D 103 1997-01-15 00:00:00 A 103 1997-01-16 00:00:00 A 103 1997-01-16 00:00:00 B 103 1997-01-16 00:00:00 C ``` I am trying to get a table where I will have distinct IDs/Date pairs, with the rows for which there have been multiple events recoded being transposed into columns. So, I'm looking for a table which for this example would look like this: ``` IDs Date Event Event2 Event3 102 1996-10-16 00:00:00 A NULL NULL 102 1996-10-23 00:00:00 A B NULL 102 1997-01-14 00:00:00 A NULL NULL 103 1997-01-14 00:00:00 D NULL NULL 103 1997-01-15 00:00:00 A NULL NULL 103 1997-01-16 00:00:00 A B C ``` I'm sorry for not posting any code, but I frankly don't even know how to start with this.
Details about the [PIVOT](https://technet.microsoft.com/library/ms177410(v=sql.105).aspx) method. And helpful answers: [Using PIVOT in SQL Server 2008](https://stackoverflow.com/questions/1677645/using-pivot-in-sql-server-2008) [MSSQL dynamic pivot column values to column header](https://stackoverflow.com/questions/19688697/mssql-dynamic-pivot-column-values-to-column-header) Try this code:

```
-- Temporary table...
create table ##myTable
(
    IDs int
    ,[Date] datetime
    ,[Event] varchar(1)
)

-- ... with sample data
insert ##myTable
select 102, '2010-01-01', 'A' union
select 102, '2010-01-01', 'B' union
select 102, '2010-01-01', 'C' union
select 102, '2010-01-01', 'E' union
select 103, '2010-01-01', 'A' union
select 104, '2010-01-01', 'B' union
select 104, '2010-01-01', 'C' union
select 105, '2010-01-01', 'F'

-- Variables
DECLARE @cols AS NVARCHAR(MAX)
    ,@query AS NVARCHAR(MAX)

-- Build the column names for our result.
-- The ROW_NUMBER() operator gives us the rank of the event for
-- the combination of IDs and Date. With that, event B for IDs 104
-- will have rank 1, and then will appear in the 1st column.
SELECT @cols = STUFF(
    (SELECT DISTINCT ',' + QUOTENAME('Event' + LTRIM(STR(
        ROW_NUMBER() OVER (
            PARTITION BY IDs, [Date]
            ORDER BY IDs, [Date]
        )
    )))
    FROM ##myTable
    FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)')
    , 1, 1, '')

set @query = '
SELECT IDs, [Date], ' + @cols + '
FROM
(
    SELECT IDs
        ,[Date]
        ,[Event]
        ,''Event'' + LTRIM(STR(
            ROW_NUMBER() OVER (
                PARTITION BY IDs, [Date]
                ORDER BY IDs, [Date]
            )
        )) as [EventNo]
    FROM ##myTable
) x
PIVOT
(
    MAX([Event])
    FOR [EventNo] IN (' + @cols + ')
) p'

execute sp_executesql @query

-- Remove temporary table
drop table ##myTable
```

And the result:

![enter image description here](https://i.stack.imgur.com/vVuhf.jpg)
If you only have two events, you can do this with `min()`, `max()`, and some additional logic: ``` select ids, date, min(event) as event, (case when min(event) <> max(event) then max(event) end) as event2 from table t group by ids, date; ``` This is standard SQL so it should work in any database.
Transposing rows into columns based on a condition sql
[ "", "sql", "duplicates", "transpose", "" ]
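The same transposition can be sketched without `PIVOT`, using conditional aggregation over a per-group rank. Here it is in SQLite via Python, with a subset of the question's sample rows; the rank is computed with a correlated `COUNT`, which assumes events are unique within each ID/date pair:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (IDs INTEGER, d TEXT, Event TEXT)")
conn.executemany("INSERT INTO events VALUES (?,?,?)", [
    (102, "1996-10-16", "A"), (102, "1996-10-23", "A"), (102, "1996-10-23", "B"),
    (103, "1997-01-16", "A"), (103, "1997-01-16", "B"), (103, "1997-01-16", "C"),
])

wide = conn.execute("""
    SELECT IDs, d,
           MAX(CASE WHEN rn = 1 THEN Event END) AS Event,
           MAX(CASE WHEN rn = 2 THEN Event END) AS Event2,
           MAX(CASE WHEN rn = 3 THEN Event END) AS Event3
    FROM (SELECT IDs, d, Event,
                 -- rank of this event within its (IDs, d) group
                 (SELECT COUNT(*) FROM events b
                  WHERE b.IDs = a.IDs AND b.d = a.d AND b.Event <= a.Event) AS rn
          FROM events AS a) AS sub
    GROUP BY IDs, d
    ORDER BY IDs, d
""").fetchall()
# -> [(102, '1996-10-16', 'A', None, None),
#     (102, '1996-10-23', 'A', 'B', None),
#     (103, '1997-01-16', 'A', 'B', 'C')]
```

Missing slots come back as NULL (Python `None`), matching the desired output in the question.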
In PostgreSQL, I would like a query to return NULL, or an empty value, instead of NaN (this NaN was inserted by Python's pandas to fill an empty value). Example: Select name, age From "People" I would like to get: John 24 Emily Laura 50 Instead of: John 24 Emily NaN Laura 50
If you would like null-values instead of 'NaN' you could simply use: ``` SELECT name, NULLIF(age, 'NaN') From "People" ```
I think you need to use:

```
SELECT name, coalesce(age, '') as age From "People" ``` Coalesce replaces a null value (first parameter) by the second parameter (here, an empty string). If you need to test more complex cases than NULLs, you could also use CASE/WHEN:

```
SELECT name, CASE WHEN age IS NULL THEN '' ELSE age END AS age from "People" ``` but COALESCE has better readability for simple null handling.
Return Null in a query, instead of NaN
[ "", "sql", "postgresql", "replace", "nan", "" ]
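A small demonstration of the `NULLIF` mechanics from the accepted answer (SQLite via Python). Since SQLite has no NaN float literal, this sketch stores the ages as text with a 'NaN' sentinel; in PostgreSQL, `NULLIF(age, 'NaN')` works directly on a float column because 'NaN' is a valid double precision value there:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT, age TEXT)")
conn.executemany("INSERT INTO people VALUES (?,?)",
                 [("John", "24"), ("Emily", "NaN"), ("Laura", "50")])

# NULLIF(a, b) returns NULL when a = b, otherwise a.
rows = conn.execute(
    "SELECT name, NULLIF(age, 'NaN') FROM people ORDER BY name"
).fetchall()
# -> [('Emily', None), ('John', '24'), ('Laura', '50')]
```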
I have two tables like this:

```
TableOne(id, name, attr)
{
1, Mark, A
2, Peter, B
3, John, A
4, Mary, A
}

TableTwo(id, id_TableOne, path)
{
1, 1, one.png
2, 1, two.png
3, 1, three.png
4, 3, foo.png
5, 3, bar.png
}

SELECT TableOne.name, TableTwo.path
FROM TableOne, TableTwo
WHERE TableOne.attr = 'A'
AND TableOne.id = TableTwo.id_TableOne

That outputs:

{
Mark, one.png
Mark, two.png
Mark, three.png
John, foo.png
John, bar.png
}
```

I want to limit the subquery, and show only one path per name. The desired output would be:

```
{
Mark, one.png
John, foo.png
Mary, NULL
}
```

I tried using LIMIT, and writing a select within another select, but I did not get the result I want. I also read something that worries me:

> "MySQL does not support LIMIT in subqueries for certain subquery operators"

from <http://dev.mysql.com/doc/refman/5.0/en/subquery-restrictions.html> Can anyone guide me to solve my problem? Please.
Try this:

```
SELECT TableOne.name, MIN(TableTwo.path)
FROM TableOne
LEFT JOIN TableTwo ON (TableOne.id = TableTwo.id_TableOne)
WHERE TableOne.attr = 'A'
GROUP BY TableOne.name
```

`MIN()` returns a single path per name (the alphabetically first one), and the `LEFT JOIN` keeps names that have no path at all, returning `NULL` for them.
Which path do you want to return if there are more than one? If any path is fine, then you can use this query: ``` SELECT TableOne.name, TableTwo.path FROM TableOne INNER JOIN TableTwo ON TableOne.id = TableTwo.id_TableOne WHERE TableOne.attr = 'A' GROUP BY TableOne.name ``` if you want to return the first path, then you need to use this: ``` SELECT TableOne.name, TableTwo.path FROM TableOne INNER JOIN ( SELECT id_TableOne, MIN(id) as min_id FROM TableTwo GROUP BY id_TableOne ) m ON TableOne.id = m.id_TableOne INNER JOIN TableTwo ON TableOne.id = TableTwo.id_TableOne AND m.min_id = TableTwo.id WHERE TableOne.attr = 'A' ```
How to limit a field in a query in MySQL
[ "", "mysql", "sql", "" ]
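A runnable sketch of the accepted LEFT JOIN + MIN approach against the question's sample data (SQLite via Python). Note that `MIN()` picks the alphabetically first path, so John gets 'bar.png' rather than 'foo.png'; if "first inserted" matters, you would instead pick by the minimum `TableTwo.id`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE TableOne (id INTEGER, name TEXT, attr TEXT);
    CREATE TABLE TableTwo (id INTEGER, id_TableOne INTEGER, path TEXT);
    INSERT INTO TableOne VALUES (1,'Mark','A'),(2,'Peter','B'),(3,'John','A'),(4,'Mary','A');
    INSERT INTO TableTwo VALUES (1,1,'one.png'),(2,1,'two.png'),(3,1,'three.png'),
                                (4,3,'foo.png'),(5,3,'bar.png');
""")

rows = conn.execute("""
    SELECT t1.name, MIN(t2.path)
    FROM TableOne t1
    LEFT JOIN TableTwo t2 ON t2.id_TableOne = t1.id
    WHERE t1.attr = 'A'
    GROUP BY t1.name
    ORDER BY t1.name
""").fetchall()
# -> [('John', 'bar.png'), ('Mark', 'one.png'), ('Mary', None)]
```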
I've searched and tried a lot of procedures but none of them worked. CREATE PROCEDURE did not work, SET @variable is not acceptable, and a couple more approaches failed. This is my last "query" code that still didn't work.

```
qry = "select * from employeefile where empfname LIKE '%''" + input + "''%'"
```

empfname is the name of an employee from the table employeefile and can consist of up to three words. The input could be the first word, second word, third word, or the entire name. When I tried to input any word within the name, the program would still prompt, "no records found." When I tried to change the query into

```
qry = "select * from employeefile where empfname LIKE '%existingname%'"
```

and my input is "existingname", the program runs just as I want it to. This code is one of those that I searched for but it still didn't work. [T-SQL and the WHERE LIKE %Parameter% clause](https://stackoverflow.com/questions/14237755/t-sql-and-the-where-like-parameter-clause) [How to use like clause in MySQL 5.0 Statement](https://stackoverflow.com/questions/15926724/how-to-use-like-clause-in-mysql-5-0-statement) [T-SQL and the WHERE LIKE %Parameter% clause](https://stackoverflow.com/questions/14237755/t-sql-and-the-where-like-parameter-clause) The problem here is when I use a variable... I'm probably using it the wrong way in the query. Please help me. I am new here by the way.
I got the answer. It turns out that I just overdid the single quotes. It must be written this way:

```
qry = "select * from employeefile where empfname LIKE '%" + input + "%'"
```
If I'm understanding your question correctly: ``` Dim command As New MySqlCommand("SELECT * FROM emplyeefile WHERE empfname LIKE '%' + @empfname + '%'", connection) command.Parameters.AddWithValue("@empfname", input) ``` or: ``` Dim command As New MySqlCommand("SELECT * FROM emplyeefile WHERE empfname LIKE @empfname", connection) command.Parameters.AddWithValue("@empfname", "%" & input & "%") ``` You have to concatenate the wildcards with the input text, either in your SQL code or your VB code.
How to query using LIKE clause in mysql using vb.net comparing to a varied string input?
[ "", "mysql", "sql", "vb.net", "sql-like", "clause", "" ]
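The fix of concatenating `%` around the input is safer done with a bound parameter than with string splicing (which is also what the rejected answer's second variant suggests). A sketch in SQLite via Python; the employee names are invented:

```python
import sqlite3

def find_employees(conn, fragment):
    # Build the wildcard pattern in the host language and bind it as
    # one parameter; never splice raw user input into the SQL string.
    pattern = "%" + fragment + "%"
    return conn.execute(
        "SELECT empfname FROM employeefile WHERE empfname LIKE ?",
        (pattern,),
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employeefile (empfname TEXT)")
conn.executemany("INSERT INTO employeefile VALUES (?)",
                 [("Juan Dela Cruz",), ("Maria Clara",)])

rows = find_employees(conn, "Dela")  # -> [('Juan Dela Cruz',)]
```

Parameter binding also removes the doubled-quote confusion from the original query, since the driver handles all quoting.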
Table 1: Schema for the bookworm database. Primary keys are underlined. There are some foreign key references to link the tables together; you can make use of these with natural joins.

```
Author(aid, alastname, afirstname, acountry, aborn, adied).
Book(bid, btitle, pid, bdate, bpages, bprice).
City(cid, cname, cstate, ccountry).
Publisher(pid, pname).
Author_Book(aid, bid).
Publisher_City(pid, cid).
```

I need to reduce the prices of all of Charles Dickens's books by 20 percent, using only one update statement. Tried using...

```
update book set bprice=bprice * .2 where alastname = 'Dickens';
```

but no luck; I get this error:

```
ERROR: column "alastname" does not exist
LINE 3: where alastname = 'Dickens';
```

Not sure how to use subselects or 'nested select queries' to find the primary keys of the tuples that I need to update.
The simple join query would be this - ``` update Book set bprice = bprice * 0.8 where bid IN (select bid from Author_Book ab join Author a on ab.aid = a.aid where a.alastname = 'Dickens'); ``` Note that you have to reduce by 20%, not make it 20%.
Try this: ``` update book b set bprice=bprice * 0.2 where bid in ( select aid, bid from Author a inner join Author_Book ab ON a.aid = ab.aid where alastname = 'Dickens' ) ```
SQL Update statement Database modify
[ "", "sql", "postgresql", "sql-update", "" ]
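A runnable check of the accepted update (SQLite via Python; the two books and prices are invented sample data). Only the Dickens title drops to 80% of its price:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Author (aid INTEGER, alastname TEXT);
    CREATE TABLE Book (bid INTEGER, btitle TEXT, bprice REAL);
    CREATE TABLE Author_Book (aid INTEGER, bid INTEGER);
    INSERT INTO Author VALUES (1,'Dickens'),(2,'Austen');
    INSERT INTO Book VALUES (10,'Oliver Twist',20.0),(11,'Emma',15.0);
    INSERT INTO Author_Book VALUES (1,10),(2,11);
""")

# Reduce BY 20 percent (multiply by 0.8), not TO 20 percent.
conn.execute("""
    UPDATE Book
    SET bprice = bprice * 0.8
    WHERE bid IN (SELECT ab.bid
                  FROM Author_Book ab
                  JOIN Author a ON a.aid = ab.aid
                  WHERE a.alastname = 'Dickens')
""")

prices = conn.execute("SELECT btitle, bprice FROM Book ORDER BY bid").fetchall()
# -> [('Oliver Twist', 16.0), ('Emma', 15.0)]
```

The original error happened because `alastname` lives on `Author`, not `Book`; the subquery is what bridges the two tables.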
I have two tables, `Values` and `SpecialValues`. `Values` has two columns, `RecordID` and `ValueName`. `SpecialValues` is a table which contains a single row, and thirty columns named `SpecialValueName1`, `SpecialValueName2`, `SpecialValueName3`, etc. There are obvious database design problems with this system. That aside, can someone explain to me how to query `SpecialValues` so that I can get a collection of all the values of every row from the table, and exclude them from a Select from `Values`? There's probably some easy way to do this or create a View for it or something, but I think looking at this code might have broken me for the moment... **EDIT:** I'd like a query to get all the individual values from every row and column of a given table (in this case the `SpecialValues` table) so that the query does not need to be updated the next time someone adds another column to the `SpecialValues` table.
This creates a `@SpecialValuesColumns` table variable to store all the column names from `SpecialValues`. It then loops over those columns, using dynamic SQL to insert all the values from each of them into a temporary table `#ProtectedValues`. It then uses a `NOT IN` query to exclude all of those values from a query to `Values`. This code is bad and I feel bad for writing it, but it seems like the least-worst option open to me right now.

```
DECLARE @SpecialColumnsCount INT;
DECLARE @Counter INT;
DECLARE @CurrentColumnName VARCHAR(255);
DECLARE @ExecSQL VARCHAR(1024);
SET @Counter = 1;

CREATE TABLE #ProtectedValues(RecordID INT IDENTITY(1,1) PRIMARY KEY NOT NULL, Value VARCHAR(255));

DECLARE @SpecialValuesColumns TABLE (RecordID INT IDENTITY(1,1) PRIMARY KEY NOT NULL, ColumnName VARCHAR(255));

INSERT INTO @SpecialValuesColumns (ColumnName)
SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'SpecialValues' AND DATA_TYPE = 'varchar' AND CHARACTER_MAXIMUM_LENGTH = 255

SELECT @SpecialColumnsCount = COUNT(*) FROM @SpecialValuesColumns

WHILE @Counter <= @SpecialColumnsCount
BEGIN
    SELECT @CurrentColumnName = ColumnName FROM @SpecialValuesColumns WHERE RecordID = @Counter;

    SET @ExecSQL = 'INSERT INTO #ProtectedValues (Value) SELECT ' + @CurrentColumnName + ' FROM SpecialValues'
    EXEC (@ExecSQL)

    SET @Counter = @Counter + 1;
END

SELECT * FROM Values WHERE ValueName NOT IN (SELECT Value COLLATE DATABASE_DEFAULT FROM #ProtectedValues)

DROP TABLE #ProtectedValues;
```
I might have misunderstood but doesn't this do it? ``` SELECT * FROM Values WHERE ValueName NOT IN ( SELECT SpecialValueName1 FROM SpecialValues UNION SELECT SpecialValueName2 FROM SpecialValues UNION SELECT SpecialValueName3 FROM SpecialValues etc.. ) ``` You could of course make the subquery into a view instead. \*Edit: This is quite ugly but should solve your problem: First Create procedure #1 ``` CREATE PROCEDURE [dbo].[SP1] As DECLARE @Query nvarchar(MAX), @Table nvarchar(255), @Columns nvarchar(255) CREATE TABLE #TempTable (Value nvarchar(255)) SET @Table = 'SpecialValues' SELECT [COLUMN_NAME] FROM [INFORMATION_SCHEMA].[COLUMNS] WHERE [TABLE_NAME] = @Table DECLARE Table_Cursor CURSOR FOR SELECT COLUMN_NAME FROM [INFORMATION_SCHEMA].[COLUMNS] WHERE [TABLE_NAME] = @Table OPEN Table_Cursor FETCH NEXT FROM Table_Cursor INTO @Columns WHILE @@FETCH_STATUS = 0 BEGIN INSERT INTO #TempTable EXEC SP2 @Columns = @Columns, @Table = @Table FETCH NEXT FROM Table_Cursor INTO @Columns END CLOSE Table_Cursor DEALLOCATE Table_Cursor SELECT ValueName FROM Value WHERE Value NOT IN (SELECT * FROM #TempTable) TRUNCATE TABLE #TempTable DROP TABLE #TempTable ``` Then Create procedure #2 ``` CREATE PROCEDURE [dbo].[SP2] @Columns nvarchar(255) = '', @Table nvarchar(255) AS DECLARE @Query nvarchar(MAX) SET @Query = 'SELECT TOP 1 CONVERT(nvarchar, ' + @Columns + ') FROM ' + @Table EXEC (@Query) ``` Then lastly execute the procedure ``` EXEC SP1 ```
How do I get a collection of every value in every column of a table?
[ "", "sql", "sql-server", "" ]
I'm wondering if I can create a variable in T-SQL which refreshes itself during the flow of script. I want to create a variable which references to other 2 variables' summation. For instance: ``` DECLARE @Number1 INT DECLARE @Number2 INT DECLARE @Total INT SET @Number1 = 5 SET @Number2 = 10 SET @Total = @Number1 + @Number2 SELECT @Total ``` This will show 15. However, if I set in later lines of the script : ``` SET @Number1 = 15 SET @Number2 = 25 SELECT @Total ``` It will still show 15. I know that during the assignment, it got the value of @Number1 + @Number2 . Is there any way to update/refresh @Total variable's value without doing : `SET @Total = @Number1 + @Number2` to refresh its value? I'm looking for, once either @Number1 or @Number2's value is changes (or both changed), @Total is also recalculated automatically. I have looked around but couldn't find any info regarding that. Maybe it was kinda hard to make a "search term" on google for what I'm looking. Thanks.
No, there is no way to do this with a variable. 2 alternatives, which probably won't help but will show you how unwieldy it will be to try to do what you're asking:

You could do it with a computed column in a table, where you would replace all three variables with columns in the table, and instead of setting and getting variables you would insert and select from the table, or

You could do it with a UDF, where every time you wanted to reference the value of Total, you would call fnTotal(Number1, Number2).
You cannot create a trigger (or an event) on a TSQL variable. There are triggers that work somewhat like what you are asking for that pertains to columns, and or rows, in a table.
How to create SQL Variable byRef (Refreshing its value) in T SQL?
[ "", "sql", "sql-server", "t-sql", "" ]
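The accepted answer's first alternative (a computed column) can also be sketched with a view over a one-row table: the total is recomputed on every read, which is exactly the "refresh" behavior that plain variables cannot give you. A minimal demo in SQLite via Python, with the question's sample numbers:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE nums (number1 INTEGER, number2 INTEGER);
    INSERT INTO nums VALUES (5, 10);
    -- The view's expression is re-evaluated on every SELECT.
    CREATE VIEW totals AS SELECT number1 + number2 AS total FROM nums;
""")

first = conn.execute("SELECT total FROM totals").fetchone()[0]   # -> 15

conn.execute("UPDATE nums SET number1 = 15, number2 = 25")
second = conn.execute("SELECT total FROM totals").fetchone()[0]  # -> 40
```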
I know this has been asked before, but the solutions given did not work for me unfortunately. I have several queries (they will be 42 in total, but let's try with 2 for this example) looking into one Table and returning results with different conditions. How can I simply put the results in adjacent columns with SQL? The queries are:

```
SELECT Column5 as Alias1 FROM Table WHERE Column2 = 1 AND Column3 = 1 AND Column4 =1
SELECT Column5 as Alias2 FROM Table WHERE Column2 = 1 AND Column3 = 1 AND Column4 =2
```

... (all combinations of values in Columns 2, 3 and 4 which happen to be 42)

```
SELECT Column5 as Alias42 FROM Table WHERE Column2 = 7 AND Column3 = 3 AND Column4 =3
```

Each of the above queries works as expected and returns one column with 44 lines. All I want to do is have the queries return the results in side by side columns (so I need 42 columns with 44 lines each). Any ideas? I have tried the following: Based on this: [How do i combine multiple select statements in separate columns?](https://stackoverflow.com/questions/20606374/how-do-i-combine-multiple-select-statements-in-separate-columns)

```
SELECT TMP1.Alias1,TMP2.Alias2
FROM
(SELECT Column5 as Alias1 FROM Table WHERE Column2 = 1 AND Column3 = 1 AND Column4 =1) AS TMP1,
(SELECT Column5 as Alias2 FROM Table WHERE Column2 = 1 AND Column3 = 1 AND Column4 =2) AS TMP2
```

This returns 44*44 lines instead of 44. Based on this: [Merge result of two sql queries in two columns](https://stackoverflow.com/questions/13578983/merge-result-of-two-sql-queries-in-two-columns)

```
SELECT q1.Alias1, q2.Alias2
FROM
(
(SELECT Column5 as Alias1 FROM Table WHERE Column2 = 1 AND Column3 = 1 AND Column4 =1) q1)
JOIN
(SELECT Column5 as Alias2 FROM Table WHERE Column2 = 1 AND Column3 = 1 AND Column4 =2) q1) q2
ON q1.Alias1 = q2.Alias2
```

Doesn't work, since I don't want to join the tables with any conditions, I just want to have the results next to each other. Also, doesn't compile.
Similar to the above (suggested from a friend): ``` SELECT Table1.Column5, Table2.Column5 FROM Table AS Table1, Table AS Table2 WHERE Column2 = 1 AND Column3 = 1 AND Column4 =1 AND Column2 = 1 AND Column3 = 1 AND Column4 =2 ``` Doesn't work, since it returns 44\*44 instead of 44 lines (it's unnecessarily joining tables). Also this: [How Do I Combine Multiple SQL Queries?](https://stackoverflow.com/questions/4441590/how-do-i-combine-multiple-sql-queries) is a combination of the above. To give some context, I'm trying to reformat a set of data in Excel from a long form to a wide form so as to perform statistical tests on them. So I am kind of limited by the Excel SQL functionality (Access syntax). Any help will be greatly appreciated. EDIT: I am not posting this as an answer, since it's not solving my problem fully with SQL, but it is solving my problem. I used Jim Sosa's solution and modified it and I have: ``` select iif([Column2]=1 AND [Column3]=1 AND [Column4]=1,Column5,null) as column1, iif([Column2]=1 AND [Column3]=1 AND [Column4]=2,Column5,null) as column2 ... (40 more iffs) from Table ``` Then I get what I want, but with extra nulls. I then get rid of those nulls, like so: <http://exceltactics.com/automatically-delete-blank-cells-organize-data/> and that's it. Thank you for all the responses. I appreciate your comments that this is not a typical SQL problem :) Cheers
I haven't used access in a long time but I believe there's a couple ways to do this. Though one of the more entertaining would be this: ``` select Max(iif(Column2 = 1 AND Column3 = 1 AND Column4 =1, column5, 0)) as column1, Max(iif(Column2 = 1 AND Column3 = 1 AND Column4 =2, column5, 0)) as column2, ... Max(iif(Column2 = 7 AND Column3 = 3 AND Column4 =4, column5, 0)) as column42 from table ``` I am assuming here that column5 is a positive number though it may work even if it's a string. If it doesn't you may have to change the 0 to an empty string or some such. The aggregate functions will ensure you only get one row back. You could also try multiple sub queries in your select clause, but I'm not sure that access even supports that.
Quit making it hard on yourself! :P SQL isn't always the answer. I'm hitting myself right now because I try to use it to solve the data woes I run into as well... but in this case, and especially when dealing with a dynamic number of columns, Excel pivot tables work great in this manner... In my opinion this is nothing more than a concatenation of the "Categories" followed by a pivot table. Since columns 2, 3, and 4 are just a category denoting a specific response for a user... create a pseudo column to pivot upon that category and pivot as below. I used a '.' to separate out the values in case we get into multi-digit values. You can parse it out later if needed. In Col F all I did was concatenate columns b,c,d to give me a unique value as a column header which would be the same for each user (col1). I then pivoted on the data... voila.

![enter image description here](https://i.stack.imgur.com/xMocD.png)

Each category gets its own column per response/user and each user has its own row, and now the row header actually ties to the category without having to go look up what you did in the SQL... As more combinations exist, more columns are added. Users lacking data for specific categories simply get a blank in that row/column intersection. Such as 10 and category 3.1.1. Remove the grand totals, or change how they relate if needed... Min/Max, Sum... whatever. The only gotcha I can foresee is if col2, col3, col4 don't actually have the same values.... (like an extra space or something on some...) but as the values are numeric and you're trying to use case statements to filter by them... I think this will work...
Combining results of SQL queries in adjacent columns
[ "", "sql", "excel", "ms-access", "" ]
What is the problem with this command. ``` SELECT * INTO database2.table2 FROM database1.table1; ``` It returns this error: > ERROR 1327 (42000): Undeclared variable: database2 The goal is to transfer data in `table1` to `database2`.
`select ... into` is for storing values in variables. See the [documentation](http://dev.mysql.com/doc/refman/5.6/en/select-into.html) for more details. What you need is an `insert into .. select ..`

```
insert into database2.table2 (select * from database1.table1)
```
Use: `CREATE TABLE database2.table2 AS select * from database1.table1;` MySQL probably does not support `SELECT ... INTO ... FROM` syntax.
MySQL ERROR 1327 when doing a INTO FROM
[ "", "mysql", "sql", "sql-server", "database", "" ]
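A minimal sketch of the accepted `INSERT INTO ... SELECT` fix (SQLite via Python, with invented table contents). In MySQL, `SELECT ... INTO` expects variable names after `INTO`, which is why `database2` was reported as an undeclared variable:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (id INTEGER, val TEXT);
    CREATE TABLE table2 (id INTEGER, val TEXT);
    INSERT INTO table1 VALUES (1,'a'),(2,'b');
""")

# INSERT INTO ... SELECT copies rows between existing tables.
conn.execute("INSERT INTO table2 SELECT * FROM table1")

copied = conn.execute("SELECT COUNT(*) FROM table2").fetchone()[0]  # -> 2
```

The `CREATE TABLE ... AS SELECT` variant from the rejected answer works too when the destination table does not exist yet.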
`orders` table has a `billing_state` field and a `shipping_state` field. I need to get orders which were shipped to NY state. Suppose an order was billed and shipped to NY state. In this case the `billing_state` field has the value NY and `ship_state` is `null`. One order was placed from CA and shipped to NY. The `billing_state` value is CA and `shipping_state` is NY. Now what should the query be to get both rows in the result?

```
SELECT * 
FROM orders 
WHERE dateord>='2014-02-25' 
AND dateord<='2014-02-25' 
AND activeFlag=1 
AND (ship_state='NY' 
OR (billing_state = 'NY' AND ship_state='')) 
order by ordId
```
If `ship_state` is NULL, comparing it against an empty string won't generate a match. If `dateord` is a timestamp, *i.e.*, it includes the time portion, you will also need to modify the date filter. Try the following WHERE statement if `ship_state` is NULL and `dateord` is a timestamp. ``` WHERE (dateord >= '2014-02-25') AND (dateord < '2014-02-25' + INTERVAL 1 DAY) AND (activeFlag = 1) AND (COALESCE(`ship_state`, `billing_state`) = 'NY') ```
You were almost there, ``` SELECT * FROM orders WHERE dateord>='2014-02-25' AND dateord<='2014-02-25' AND activeFlag=1 AND (ship_state='NY' OR (billing_state = 'NY' AND ship_state IS NULL)) order by ordId ``` With most DBMSs (Oracle being an exception, MySQL being included) `''` and `NULL` are different values. You might want to re-check the validation ``` WHERE dateord>='2014-02-25' AND dateord<='2014-02-25' ``` As it is the same of saying ``` WHERE dateord = '2015-02-25' ```
Get all orders shipped to NY
[ "", "mysql", "sql", "select", "" ]
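A runnable sketch of the `COALESCE` approach from the accepted answer (SQLite via Python; the order rows are invented to mirror the two cases in the question, plus one non-NY order):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (ordId INTEGER, billing_state TEXT, ship_state TEXT)")
conn.executemany("INSERT INTO orders VALUES (?,?,?)", [
    (1, "NY", None),  # billed and shipped to NY; ship_state left NULL
    (2, "CA", "NY"),  # billed in CA, shipped to NY
    (3, "CA", None),  # never touched NY
])

# COALESCE falls back to billing_state when ship_state is NULL,
# which is what the '' comparison in the original query missed.
ny = conn.execute("""
    SELECT ordId FROM orders
    WHERE COALESCE(ship_state, billing_state) = 'NY'
    ORDER BY ordId
""").fetchall()
# -> [(1,), (2,)]
```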
Suppose I have one table with the following values and columns: ``` ID1 | ID2 1 | 1 2 | 1 3 | 1 4 | 1 4 | 2 3 | 3 4 | 3 4 | 4 4 | 4 ``` I'd like to retrieve the ID2 values that belong **exclusively** to records where ID1 = 4. So for the above example, I'd like to see the following response: ``` ID1 | ID2 4 | 2 4 | 4 ```
Try working it out [contrapositively](http://en.wikipedia.org/wiki/Contraposition) like this. Finding all elements where ID1 **is only** 4 is the same as finding all elements that **don't not** have ID1 = 4. ``` CREATE TABLE #temp (ID1 NVARCHAR(10), ID2 NVARCHAR(10)) INSERT INTO #temp(ID1,ID2) VALUES (N'1',N'1') INSERT INTO #temp(ID1,ID2) VALUES (N'2',N'1') INSERT INTO #temp(ID1,ID2) VALUES (N'3',N'1') INSERT INTO #temp(ID1,ID2) VALUES (N'4',N'1') INSERT INTO #temp(ID1,ID2) VALUES (N'4',N'2') INSERT INTO #temp(ID1,ID2) VALUES (N'3',N'3') INSERT INTO #temp(ID1,ID2) VALUES (N'4',N'3') INSERT INTO #temp(ID1,ID2) VALUES (N'4',N'4') INSERT INTO #temp(ID1,ID2) VALUES (N'4',N'4') SELECT * FROM #temp AS t SELECT DISTINCT * FROM #temp AS t WHERE id2 NOT IN (SELECT ID2 FROM #temp AS t WHERE ID1 <> 4) ```
These queries will probably be useful to you for the more general cases (and by general I mean when ID1 is something other than 4): ``` select distinct t1.id1, t1.id2 from T as t1 where not exists ( select 1 from T as t2 where t2.ID1 <> t1.ID1 and t2.ID2 = t1.ID2 ) select t1.id1, count(distinct t1.id2) from T as t1 where not exists ( select 1 from T as t2 where t2.ID1 <> t1.ID1 and t2.ID2 = t1.ID2 ) group by t1.id1 ```
SQL Query to retrieve values that belong exclusively to a group
[ "", "sql", "oracle", "" ]
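The accepted answer's `NOT IN` filter, run against the question's sample data (SQLite via Python, with an `ORDER BY` added for a deterministic result):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (ID1 INTEGER, ID2 INTEGER)")
conn.executemany("INSERT INTO t VALUES (?,?)", [
    (1, 1), (2, 1), (3, 1), (4, 1), (4, 2), (3, 3), (4, 3), (4, 4), (4, 4),
])

# Keep ID2 values that never appear alongside any ID1 other than 4.
rows = conn.execute("""
    SELECT DISTINCT ID1, ID2 FROM t
    WHERE ID2 NOT IN (SELECT ID2 FROM t WHERE ID1 <> 4)
    ORDER BY ID2
""").fetchall()
# -> [(4, 2), (4, 4)]
```

One caveat worth knowing: if ID2 could contain NULLs, `NOT IN` against a set containing NULL returns no rows, so the `NOT EXISTS` form in the rejected answer is the safer general pattern.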
I've seen quite a few questions/forum posts regarding this scenario but I either don't understand the solutions or the solutions provided are too specific to that particular question and I don't know how to apply it to my situation. I have the following query: ``` SELECT DISTINCT d.* FROM Data d JOIN Customers c ON c.Customer_Name = d.Customer_Name AND c.subMarket = d.subMarket JOIN Sort s ON s.Market = c.Market ORDER BY d.Customer_Name, d.Category, d.Tab, d.SubMarket, CASE s.sortBy WHEN 'Comp_Rank' THEN d.Comp_Rank WHEN 'Market_Rank' THEN d.Market_Rank ELSE d.Other_Rank END ``` I used that exact query on my MySQL database and it worked perfectly. We recently switched over to a SQL Server database and now it doesn't work and I get the error: ``` ORDER BY items must appear in the select list if SELECT DISTINCT is specified. ``` **I've tried adding s.\* to the SELECT (since s.sortBy is in the CASE) and that didn't change anything and I also tried listing out every single field in Data and Sort in SELECT and that resulted in the same exact error.** There actually aren't duplicates in Data, but when I do the joins it results in 4 exact duplicate rows for every single item and I don't know how to fix that so that's why I originally added the DISTINCT. I tried variations of LEFT JOINs, INNER JOINs, etc... and couldn't get a different result. Anyway, a solution to either issue would be fine but I'm assuming more information would be needed to figure out the JOIN duplicate issue. Edit: I just realized that I mistakenly typed some of the fields in the ORDER BY (example, n.Category, n.Tab should have been d.Category, d.Tab). EVERYTHING in the ORDER BY is from the Data table which I've selected \* from. As I said, I also tried listing out every field in the SELECT and that didn't help.
As the error suggests, when you use `select distinct`, you have to order by the expressions in the `select` clause. So, your `case` is an issue as well as all the columns not from `d`. You can fix this by using `group by` instead, and including the columns that you want to sort by. Because the case includes a column from `s`, you need to include the `case` (or at least that column) in the `group by`: ``` SELECT d.* FROM Data d JOIN Customers c ON c.Customer_Name = d.Customer_Name AND c.subMarket = d.subMarket JOIN Sort s ON s.Market = c.Market GROUP BY "d.*", (CASE s.sortBy WHEN 'Comp_Rank' THEN d.Comp_Rank WHEN 'Market_Rank' THEN d.Market_Rank ELSE d.Other_Rank END) ORDER BY d.Customer_Name, d.Category, d.Tab, d.SubMarket, (CASE s.sortBy WHEN 'Comp_Rank' THEN d.Comp_Rank WHEN 'Market_Rank' THEN d.Market_Rank ELSE d.Other_Rank END) ``` Note that `"d.*"` is in quotes. You need to list out all the columns in the `group by`.
Try this: ``` SELECT DISTINCT d.Customer_Name, d.Category, d.Tab, d.SubMarket, CASE s.sortBy WHEN 'Comp_Rank' THEN d.Comp_Rank WHEN 'Market_Rank' THEN d.Market_Rank ELSE d.Other_Rank END FROM Data d JOIN Customers c ON c.Customer_Name = d.Customer_Name AND c.subMarket = d.subMarket JOIN Sort s ON s.Market = c.Market ORDER BY d.Customer_Name, d.Category, d.Tab, d.SubMarket, CASE s.sortBy WHEN 'Comp_Rank' THEN d.Comp_Rank WHEN 'Market_Rank' THEN d.Market_Rank ELSE d.Other_Rank END ```
SQL Server Select Distinct and Order By with CASE
[ "", "sql", "sql-order-by", "case", "distinct", "" ]
I get an error for the below query: ``` SELECT mt.tag_id, count(mt.tag_id) as bcount, bcount / t.count as rel, t.count as t FROM tags.media_tag as mt, tags.tags as t WHERE mt.media_id in (SELECT mt.media_id FROM tags.media_tag as mt WHERE mt.tag_id = 'tag') GROUP BY mt.tag_id ORDER BY rel LIMIT 1000; ``` Error: ``` Error Code: 1054. Unknown column 'bcount' in 'field list' ``` I'd like to use the count in the select again as it is quite an expensive query. I can just do `count(mt.tag_id)` but that seems to be unbearably slow. Any ideas to optimize or achieve alias with the count? Tables: `medias` (shown for completeness, not needed in query) ``` +----+-------+-------+---------+ | id | attr1 | attr2 | attrEtc | +----+-------+-------+---------+ | 11 | | | | | 22 | | | | +----+-------+-------+---------+ ``` `media_tag` ``` +----------+--------+ | media_id | tag_id | +----------+--------+ | 11 | tag | +----------+--------+ ``` `tags` ``` +-----+-------+ | id | count | +-----+-------+ | tag | 1 | +-----+-------+ ``` **UPDATE**: Sample Data: <http://sqlfiddle.com/#!2/b8085> Expected Response where target tag, or in query above `mt.tag_id = tagB`: ``` +--------+--------+--------+-----------------+ | tag_id | tcount | bcount | bcount / tcount | +--------+--------+--------+-----------------+ | tagB | 2 | 2 | 1 | | tagC | 2 | 1 | 0.5 | | tagA | 3 | 1 | 0.333 | +--------+--------+--------+-----------------+ ``` bcount = number of shared media\_id between tag\_id and tagB (target)
This is the query I ended up using. @NoDisplayName got close but not all the way there. This SQL actually completes running in seconds and is much much faster than my original query. ``` SELECT mt.tag_id, COUNT(mt.tag_id) / t.count as rel, COUNT(mt.tag_id) AS bcount, t.count as tcount FROM tags.media_tag AS mt INNER JOIN (SELECT smt.media_id FROM tags.media_tag as smt WHERE tag_id = 'tag') meds ON mt.media_id = meds.media_id INNER JOIN tags.tags t ON t.id = mt.tag_id GROUP BY mt.tag_id ORDER BY rel; ```
Instead of `alias` directly use the `Count(mt.tag_id)` ``` SELECT mt.tag_id, Count(mt.tag_id) AS bcount, Count(mt.tag_id) / t.count AS rel, t.count AS t FROM tags.media_tag AS mt, tags.tags AS t WHERE mt.media_id IN (SELECT mt.media_id FROM tags.media_tag AS mt WHERE mt.tag_id = 'tag') GROUP BY mt.tag_id ORDER BY rel LIMIT 1000; ``` **Update:** Try changing your query like this. You can avoid the `sub-query` in `where` clause and use proper `Inner Join` ``` SELECT mt.tag_id, Count(mt.tag_id) AS bcount, Count(mt.tag_id) / t.count AS rel, t.count AS t FROM tags.media_tag AS mt INNER JOIN tags.tags AS t ON mt.tag_id = t.id WHERE mt.tag_id = 'tag' GROUP BY mt.tag_id ORDER BY rel ```
How to use COUNT alias in SELECT?
[ "", "mysql", "sql", "sql-optimization", "" ]
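The core of the fix is that a `SELECT`-list alias like `bcount` can't be referenced elsewhere in the same `SELECT`, so the aggregate expression is simply repeated. A small sketch of that pattern (SQLite via Python; the tag counts are tiny invented data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE media_tag (media_id INTEGER, tag_id TEXT);
    CREATE TABLE tags (id TEXT, cnt INTEGER);
    INSERT INTO media_tag VALUES (1,'tagA'),(1,'tagB'),(2,'tagB');
    INSERT INTO tags VALUES ('tagA',3),('tagB',2);
""")

rows = conn.execute("""
    SELECT mt.tag_id,
           COUNT(*) AS bcount,
           COUNT(*) * 1.0 / t.cnt AS rel  -- repeat the aggregate, not the alias
    FROM media_tag mt
    JOIN tags t ON t.id = mt.tag_id
    GROUP BY mt.tag_id, t.cnt
    ORDER BY mt.tag_id
""").fetchall()
# tagA appears once out of cnt=3, tagB twice out of cnt=2
```

Repeating `COUNT(*)` does not rescan the table; the engine computes each distinct aggregate once per group.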
For this query: ``` SELECT `user_id`, `user_name` FROM users WHERE `user_id` IN (SELECT user_one, user_two FROM friends WHERE user_one='$my_id' OR user_two = '$my_id') ``` I get this error message: > Cardinality violation: 1241 Operand should contain 1 column(s) Would it be possible to use 2 columns in a join instead of a subquery? And if yes, how?
Why not use a union like this: ``` SELECT `user_id`, `user_name` FROM users WHERE `user_id` IN (SELECT user_one FROM friends where user_one = '$my_id' UNION SELECT user_two FROM friends where user_two = '$my_id') ```
If you want records from the `users` table where your user\_id is in user\_one or user\_two of the `friends` table, you can do the following: ``` SELECT `user_id`, `user_name` FROM users WHERE `user_id` IN ( SELECT user_one FROM friends WHERE user_one = '$my_id' ) UNION ALL SELECT `user_id`, `user_name` FROM users WHERE `user_id` IN ( SELECT user_two FROM friends WHERE user_two = '$my_id' ); ```
Turn subquery into join query
[ "", "mysql", "sql", "" ]
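The union-in-`IN` shape can be sketched with stdlib `sqlite3` (a stand-in for MySQL, with invented sample data). Note this variant picks the column on the *other* side of each `friends` row, so it returns the friends rather than the searched id itself:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (user_id INTEGER, user_name TEXT);
    CREATE TABLE friends (user_one INTEGER, user_two INTEGER);
    INSERT INTO users VALUES (1,'alice'),(2,'bob'),(3,'carol'),(4,'dave');
    INSERT INTO friends VALUES (1,2),(3,1),(2,4);
""")

my_id = 1
# Each UNION branch contributes one of the two columns, so the subquery
# stays single-column, avoiding the cardinality violation.
rows = conn.execute("""
    SELECT user_id, user_name FROM users
    WHERE user_id IN (SELECT user_one FROM friends WHERE user_two = ?
                      UNION
                      SELECT user_two FROM friends WHERE user_one = ?)
    ORDER BY user_id
""", (my_id, my_id)).fetchall()
print(rows)  # [(2, 'bob'), (3, 'carol')]
```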
I have the following query. ``` $query = $this->db->query('SELECT COUNT(*) FROM iplog.persons WHERE begin_date LIKE '2014%''); ``` I need to count the number of columns with a begin\_date in the year 2014. When I run this script I'm getting an error: > Parse error: syntax error, unexpected '2014' (T\_LNUMBER) in C:\xampp\htdocs\iPlog2\application\controllers\stat.php on line 12 I was trying to change my CI script to ``` $query = $this->db->query('SELECT COUNT(*) FROM iplog.persons WHERE begin_date LIKE "2014%"'); ``` but it caused an error.
You mean, count ROWS: So for that, just count the number of rows you have based on a condition: ``` $year = '2014' $this->db->from('iplog'); $this->db->like('begin_date', $year); $query = $this->db->get(); $rowcount = $query->num_rows(); ```
First, you have a simple typo regarding the use of single quotes. Your complete sql string should be double quoted so that your value-quoting can be single quoted. Second, you are using inappropriate query logic. When you want to make a comparison on a DATE or DATETIME type column, you should NEVER be using `LIKE`. There are specific MYSQL functions dedicated to handling these types. In your case, you should be using `YEAR()` to isolate the year component of your `begin_date` values. Resource: <https://www.w3resource.com/mysql/date-and-time-functions/mysql-year-function.php> You could write the raw query like this: (`COUNT(*)` and `COUNT(1)` are equivalent) ``` $count = $this->db ->query("SELECT COUNT(1) FROM persons WHERE YEAR(begin_date) = 2014") ->row() ->COUNT; ``` Or if you want to employ Codeigniter methods to build the query: ``` $count = $this->db ->where("YEAR(begin_date) = 2014") ->count_all_results("persons"); ``` --- You could return all of the values in all of the rows that qualify, but that would mean asking the database for values that you have no intention of using -- this is not best practice. **I do not recommend the following**: ``` $count = $this->db ->get_where('persons', 'YEAR(begin_date) = 2014') ->num_rows(); ``` For this reason, you should not be generating a fully populated result set then calling `num_rows()` or `count()` when you have no intention of using the values in the result set.
How to count the number of rows with a date from a certain year in CodeIgniter?
[ "", "sql", "codeigniter", "" ]
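The year filter itself can be checked with stdlib `sqlite3`, with SQLite's `strftime('%Y', …)` playing the role of MySQL's `YEAR()` (sample rows invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE persons (id INTEGER, begin_date TEXT);
    INSERT INTO persons VALUES (1,'2014-03-01'),(2,'2014-11-20'),(3,'2013-05-05');
""")

# Compare on the extracted year rather than a LIKE pattern over a date.
count = conn.execute(
    "SELECT COUNT(*) FROM persons WHERE strftime('%Y', begin_date) = '2014'"
).fetchone()[0]
print(count)  # 2
```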
I'm struggling to find if this is possible to use SQL Server 2008 to assign a sequence without having to use cursors. Let's say I have the following table which defines a driver's driving route going from one location to another (null means he is going from home): ``` RouteID SourceLocationID DestinationLocationID DriverID Created Updated ------- ---------------- --------------------- -------- ------- ------- 1 NULL 219 1 10:20 10:23 2 219 266 1 10:21 10:24 3 266 NULL 1 10:22 10:25 4 NULL 54 2 10:23 10:26 5 54 NULL 2 10:24 10:27 6 NULL 300 1 10:25 10:28 7 300 NULL 1 10:26 10:29 ``` I want to group the records between the rows where sourceLID is NULL and the destinationLID is null, so I get the following (generating a sequence number for each grouping set): ``` DriverID DestinationLocationID TripNumber -------- --------------------- ---------- 1 219 1 (his first trip) 1 266 1 1 300 2 (his second trip) 2 54 1 ``` Is there a way I could use GROUP BY here rather than cursors?
a quick try: ``` with cte as ( select DestinationLocationID , DriverID , tripid = row_number() over ( partition by driverid order by DestinationLocationID) from table1 where sourcelocationid is NULL UNION ALL select table1.DestinationLocationID , table1.DriverID , cte.tripid from table1 join cte on table1.SourceLocationID=cte.DestinationLocationID and table1.DriverID=cte.DriverID where cte.DestinationLocationID is not null ) select * from cte ```
Try this, ``` Declare @t table(RouteID int, SourceLocationID int,DestinationLocationID int ,DriverID int,Created time, Updated time) insert into @t values(1, NULL, 219, 1, '10:20','10:23'), (2 ,219,266, 1, '10:21','10:24'), (3,266, NULL, 1, '10:22','10:25'), (4, NULL, 54, 2, '10:23','10:26'), (5,54, NULL, 2, '10:24','10:27'), (6,NULL,300, 1, '10:25','10:28'), (7,300,NULL, 1, '10:26','10:29') ; WITH CTE AS ( SELECT * ,ROW_NUMBER() OVER ( PARTITION BY DriverID ORDER BY Created ) RN FROM @t ) ,CTE1 AS ( SELECT * ,1 TripNumber FROM CTE WHERE RN = 1 UNION ALL SELECT A.* ,CASE WHEN A.SourceLocationID IS NULL THEN B.TripNumber + 1 ELSE B.TripNumber END FROM CTE1 B INNER JOIN CTE A ON B.DriverID = A.DriverID WHERE A.RN > B.RN ) SELECT DISTINCT DestinationLocationID ,DriverID ,TripNumber FROM CTE1 WHERE DestinationLocationID IS NOT NULL ORDER BY DriverID ```
SQL Server 2008 Group Based on a Sequence
[ "", "sql", "sql-server", "t-sql", "group-by", "" ]
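The grouping rule itself (a NULL source starts a new trip for that driver) is easy to check outside SQL; a plain-Python sketch over the question's rows, kept in `Created` order:

```python
from collections import defaultdict

# (driver_id, source, destination) in Created order; None marks home.
routes = [
    (1, None, 219), (1, 219, 266), (1, 266, None),
    (2, None, 54),  (2, 54, None),
    (1, None, 300), (1, 300, None),
]

trip_no = defaultdict(int)   # per-driver trip counter
result = []
for driver, src, dst in routes:
    if src is None:          # leaving home: a new trip begins
        trip_no[driver] += 1
    if dst is not None:      # rows returning home carry no destination
        result.append((driver, dst, trip_no[driver]))
print(result)
```

This yields the same (driver, destination, trip number) triples as the question's expected output.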
I have some kind of a tree stored in a table. It has 2 key columns `id` and `parent_id`. And some abstract data for example `name` and `mtime`. Lets say this is a file system. I can select all children or all parents from a paticular `id`. (Like described in [this answer](https://stackoverflow.com/questions/6585631/how-can-you-detect-a-parent-with-a-nested-relationship-in-a-database-using-sql)) The question is how can I update(or delete) such a subtree? For example I want to update the modification time of some node and all of it children (or node and all it's parents upto root). Or delete this node with children. What is the best approach in terms of performance? This table can be really large 100M+ or records. DDL ``` create table test ( id int not null primary key, parent_id int not null, name varchar(100), mtime timestamp default current_timestamp ); insert into test(id, parent_id, name) values(1, 0, "row1"); insert into test(id, parent_id, name) values(2, 1, "row2"); insert into test(id, parent_id, name) values(3, 2, "row3"); insert into test(id, parent_id, name) values(4, 2, "row4"); insert into test(id, parent_id, name) values(5, 4, "row5"); insert into test(id, parent_id, name) values(6, 4, "row6"); insert into test(id, parent_id, name) values(7, 6, "row7"); ``` What makes for us this tree: ``` row1 | row2--row4--row5 | | row3 row6 | row7 ``` --- Update Try1 --- Tried this as Vladimir suggested: ``` create procedure upd_test (start_id integer) as begin WITH RECURSIVE CTE (id) AS ( SELECT T.id FROM test AS T WHERE T.id = :start_id UNION ALL SELECT T.id FROM test AS T INNER JOIN CTE ON CTE.id = T.parent_id ) UPDATE test SET mtime = '2001-02-03 10:11:12' WHERE id IN (SELECT id FROM CTE); end ``` got: ``` Invalid token. Dynamic SQL Error. SQL error code = -104. Token unknown - line 19, column 5. UPDATE. ```
Make sure that you have indexes on `id` and on `parent_id` for this to work efficiently. After reading docs on Firebird (<http://www.firebirdsql.org/file/documentation/reference_manuals/reference_material/html/langrefupd25-select.html#langrefupd25-select-cte>) > * The maximum recursion depth is 1024 **(so, you need to check if it is enough for your data)** > * When enclosed in parentheses, CTE constructs can be used as subqueries in SELECT statements, but also in UPDATEs, MERGEs etc. **UPDATE** I have installed the latest Firebird 2.5.3 on Windows 7 64bit to test the syntax. Based on the above, the query to update timestamp of some node (for example, with `ID = 4`) and all of its children to some value (for example, to `2001-02-03 10:11:12`) looks like this: ``` UPDATE TEST SET MTIME = '2001-02-03 10:11:12' WHERE ID IN ( WITH RECURSIVE CTE (id) AS ( SELECT T.id FROM test AS T WHERE T.id = 4 UNION ALL SELECT T.id FROM test AS T INNER JOIN CTE ON CTE.id = T.parent_id ) SELECT id FROM CTE ); ``` I checked and it worked as expected (rows with IDs 4, 5, 6, 7 have been updated). **DELETE** The same approach, i.e.: ``` DELETE FROM TEST WHERE ID IN ( WITH RECURSIVE CTE (id) AS ( SELECT T.id FROM test AS T WHERE T.id = 4 UNION ALL SELECT T.id FROM test AS T INNER JOIN CTE ON CTE.id = T.parent_id ) SELECT id FROM CTE ); ``` ran without syntax errors, but it deleted only **one** row with `id = 4`. I would call it a bug. **DELETE with temporary table** The following works correctly. Create a [`global temporary table`](http://www.firebirdsql.org/refdocs/langrefupd21-ddl-table.html) in advance. The temporary is only data in the table, not the table itself, so it has to be created in advance and it will remain in the database. By default the data in such temporary table will be cleaned up upon transaction end. 
``` CREATE GLOBAL TEMPORARY TABLE ToDelete (id int not null primary key); ``` Insert results of the recursive CTE into the temporary table and then use it to delete found IDs from the main table. Make sure these two statements run inside the same transaction. ``` INSERT INTO ToDelete WITH RECURSIVE CTE (id) AS ( SELECT T.id FROM test AS T WHERE T.id = 4 UNION ALL SELECT T.id FROM test AS T INNER JOIN CTE ON CTE.id = T.parent_id ) SELECT id FROM CTE ; DELETE FROM TEST WHERE ID IN (SELECT ID FROM ToDelete) ; ``` I checked, this worked as expected (rows with IDs 4, 5, 6, 7 have been deleted).
As you said you are able to get all the Id's you want. So one solution would be: 1. Select all the IDs to be updated and store them to #tmpid table 2. Update / Delete ``` UPDATE t SET t.mtime = GETDATE() FROM dbo.TreeTable t INNER JOIN #tmpid i ON t.id = i.id DELETE t FROM dbo.TreeTable t INNER JOIN #tmpid i ON t.id =i.id ``` But: NOT TESTED! Please check if this is ok with your amount of data... To reach best performance it's always necessary to have meaningful index: ``` CREATE UNIQUE CLUSTERED INDEX idx_treetable_id ON dbo.TreeTable(id); CREATE UNIQUE INDEX idx_treetable_unique ON dbo.TreeTable(id,parent_id) CREATE NONCLUSTERED INDEX idx_parent_id ON dbo.TreeTable(parent_id); GO ```
How to write a "recursive update"?
[ "", "sql", "firebird", "" ]
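SQLite (3.8.3+) accepts a `WITH RECURSIVE` prefix on `DELETE` directly, so the subtree-delete idea above can be exercised end to end with stdlib `sqlite3` (same table and rows as the question; this is a stand-in for Firebird, not its syntax):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE test (id INTEGER PRIMARY KEY, parent_id INTEGER, name TEXT);
    INSERT INTO test VALUES (1,0,'row1'),(2,1,'row2'),(3,2,'row3'),(4,2,'row4'),
                            (5,4,'row5'),(6,4,'row6'),(7,6,'row7');
""")

# Collect node 4 and all of its descendants, then delete them in one go.
conn.execute("""
    WITH RECURSIVE cte(id) AS (
        SELECT id FROM test WHERE id = 4
        UNION ALL
        SELECT t.id FROM test AS t JOIN cte ON t.parent_id = cte.id
    )
    DELETE FROM test WHERE id IN (SELECT id FROM cte)
""")

remaining = sorted(r[0] for r in conn.execute("SELECT id FROM test"))
print(remaining)  # [1, 2, 3]
```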
Just looking at a query and notice it uses `NVL(table.column, '')` with nothing between the quotes. What function would this serve? Isn't that equivalent to a NULL?
Yes it would be bad programming practice to write that in oracle. It essentially keeps the same value as-is no matter what the column value is. It might even hurt you in performance by not using any indexes that you might have on those columns if this is in the where clause.
In Oracle it would do nothing, as an empty string IS NULL. In other DBs, though, it would return a NOT NULL value of an empty string if table.column were NULL. [Here](https://stackoverflow.com/questions/203493/why-does-oracle-9i-treat-an-empty-string-as-null) is a link to a more thorough explanation of this behaviour.
What would NVL(table.column, '') do?
[ "", "sql", "oracle", "" ]
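The portability point is easy to demonstrate with stdlib `sqlite3`, where (unlike Oracle) an empty string is not NULL, so the `NVL`-style fallback (spelled `COALESCE` here) really does replace NULLs:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# In SQLite, '' and NULL are distinct values, so the fallback fires on NULL only.
val_null = conn.execute("SELECT COALESCE(NULL, '')").fetchone()[0]
val_set = conn.execute("SELECT COALESCE('abc', '')").fetchone()[0]
print(repr(val_null), repr(val_set))  # '' 'abc'
```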
Inside a DATA Flow task, I have OLEDB source, data conversion task and excel destination. I could see data moving from OLEDB source to EXCEL through DATA CONVERSION task. I switched on data viewer and I could see data moving. I replaced the Excel with a Flat File. The flat file is getting loaded with the data. But if my destination is EXCEL, then I am not able to see data in that excel file. Total count of rows is around 600,000 and my destination excel is 2007(.xlsx) I am running it in 32bit. Can anyone please help me out? Please I need it. Thank you so much in advance.
Excel 2007 row limit is 65,536. I know the source here is Wikipedia, but it is accurate. [Source: Wikipedia](http://en.wikipedia.org/wiki/65536_%28number%29) Excel 2010 is a million something [MS Excel Specs](https://support.office.com/en-nz/article/Excel-specifications-and-limits-16c69c74-3d6a-4aaf-ba35-e6eb276e8eaa). Might be time for an upgrade.
In case you haven't already checked, page/scroll down to the end of the spreadsheet to confirm the data hasn't just been appended below rows that previously held data. Carl's answer is probably the right fit, but thought I'd share this just in case. I had a similar outcome while developing an SSIS package today. I tried to transfer data to an Excel sheet that previously had data in the first 1400 rows. I deleted the data in the Excel sheet prior to running the package. The package ran to completion (all green) and said it wrote 1400 rows. Went back to check the file but there was nothing. Made some tweaks to the package and ran it a few more times with the same result. Upon closer inspection of the destination Excel sheet, I found that the data actually did get over to the Excel sheet but it didn't start until row 1401...even though there was nothing in rows 1-1400. Did some research but found no solutions that would be worth the time. I ended up just exporting the data to a new file.
SSIS package completed successfully but data is not getting loaded into Excel Destination
[ "", "sql", "excel", "ssis", "" ]
I'm trying to restore a database from a BAK file using the following command to perform unit test on a clean copy of the db: ``` RESTORE DATABASE MyDbUnitTest FROM DISK = 'c:\db\MyDb.bak'; ``` it tries to restore the database bu throws an error that MyDb.mdf is in use - and it's correct - it is - by the original database that's used for development on my machine. Is there a way to specify the name of the MDF file that it will import it along side the development db?
This **might be** because you have a tail-log backup being done on the restore. Change to this: ``` RESTORE DATABASE [MyDB2] FROM DISK = N'C:\db\MyDb.BAK' WITH FILE = 1, MOVE N'MyDb' TO N'C:\db\MyDb2.mdf', MOVE N'Mydb_log' TO N'D:\SQLLogs\MyDb2_log.ldf', NORECOVERY, NOUNLOAD, STATS = 5 RESTORE LOG [MyDB2] FROM [MyDB_Log] WITH FILE = 3, NOUNLOAD, STATS = 5 ``` **Please note the numbers listed pertain to an example of my environment, so you need to ensure you modify this to fit your needs.** Another way to do this without a script is to simply right click on the database name in management studio and select **Tasks->Restore->Database**. Then on the options tab remove the "Take tail-log backup before restore" option. Easiest way to do that is to do it from within management studio and have management studio generate the script for you as shown: ![enter image description here](https://i.stack.imgur.com/oX0Pe.png)
You need to know what's in this backup file. Therefore you need to check the content of this file by doing two types of restores before you can actually restore a database from this backup. **HEADERONLY RESTORE** ``` RESTORE HEADERONLY FROM DISK = N'c:\db\MyDb.bak' GO ``` This will give the list of backups in this backup file. You should be looking for database `MyDb` with Backup type = 1. Get the position of that file and do the following **FILELISTONLY RESTORE** ``` RESTORE FILELISTONLY FROM DISK = N'c:\db\MyDb.bak' WITH FILE = 1 --<-- this will be the position of the backup from headerOnly restore GO ``` This will return all the files for that database in that backup: the primary file, any .ndf files, and the log file name. Use these file names and write the restore command like this: **Database Restore** ``` RESTORE DATABASE [DatabaseName] FROM DISK = N'c:\db\MyDb.bak' WITH RECOVERY , MOVE N'PrimaryFile' TO N'C:\Folder\MyDB.mdf', --<-- should be a valid path MOVE N'LogFile' TO N'C:\Folder\MyDB_Logs.ldf' --<-- should be a valid path -- any more files ...... GO ```
SQL Server specify mdf file name when restoring database
[ "", "sql", "sql-server", "t-sql", "" ]
I'm trying to query 2 separate key-value pairs as follows: AttributeTable contains 3 columns [of interest] ``` ID KeyColumn ValueColumn 12345 Sector Sector 1 12345 Group Group X ``` I need to query to get the following output: ``` ID Sector Group 12345 Sector 1 Group X ``` This is what I have so far: ``` SELECT ID, (SELECT VALUECOLUMN WHERE KEYCOLUMN = 'Sector') as Sector,(SELECT VALUECOLUMN WHERE KEYCOLUMN = 'Group') as Group FROM AttributeTable GROUP BY ID, KEYCOLUMN, VALUECOLUMN ``` Obviously, this is wrong since it gives me duplicates [1 row each for the 3 columns per ID] How would I do this to get the desired output?
**Test Data** ``` DECLARE @TABLE TABLE (ID INT, KeyColumn VARCHAR(10), ValueColumn VARCHAR(10)) INSERT INTO @TABLE VALUES (12345 ,'Sector' , 'Sector 1'), (12345 ,'Group' , 'Group X') ``` **Query** ``` SELECT * FROM @TABLE PIVOT (MAX(ValueColumn) FOR KeyColumn IN ([Sector] , [Group]))p ``` **Result** ``` ╔═══════╦══════════╦═════════╗ ║ ID ║ Sector ║ Group ║ ╠═══════╬══════════╬═════════╣ ║ 12345 ║ Sector 1 ║ Group X ║ ╚═══════╩══════════╩═════════╝ ```
If you need an alternative to `PIVOT`, the views I've seen used for Entity-Attribute-Value tables look like this: ``` SELECT DISTINCT a.ID f1.ValueColumn AS "Sector", f2.ValueColumn AS "Group" FROM AttributeTable a LEFT JOIN AttributeTable f1 ON f1.ID = a.ID AND f1.KeyColumn = 'Sector' LEFT JOIN AttributeTable f2 ON f2.ID = a.ID AND f2.KeyColumn = 'Group' ``` And so on with one join for each field. Dynamically generating them is a huge pain in the ass, and `PIVOT` is only slightly better since that isn't dynamic either. If you have a reason to avoid aggregates (if `ValueColumn` is a `BIT`, for example) this might help.
How should I query key-value pairs in this scenario?
[ "", "sql", "sql-server", "subquery", "" ]
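Engines without a `PIVOT` keyword express the same key-value flattening with conditional aggregation; a stdlib `sqlite3` sketch over the question's rows (column names simplified for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE attrs (id INTEGER, key_col TEXT, value_col TEXT);
    INSERT INTO attrs VALUES (12345,'Sector','Sector 1'),(12345,'Group','Group X');
""")

# MAX(CASE ...) picks the single matching value per key within each id group.
row = conn.execute("""
    SELECT id,
           MAX(CASE WHEN key_col = 'Sector' THEN value_col END) AS sector,
           MAX(CASE WHEN key_col = 'Group'  THEN value_col END) AS grp
    FROM attrs
    GROUP BY id
""").fetchone()
print(row)  # (12345, 'Sector 1', 'Group X')
```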
What I'm trying to accomplish is display distinct data but also display a field on each row if a particular row has a certain data. The thing thats confusing me is the fact that I still need it to be distinct and if I attempt to do another join i get more rows.. I just wish to keep the same results i'm retrieving but with an additional column that tells me - This equipment (row) needs repairs because at least one of its properties stated so... Hope this makes sense, not sure if I'm explaining myself clearly here. **The Main Table (Inspection Table)** ![enter image description here](https://i.stack.imgur.com/hwdNR.png) In the above table, Notice `FK_Sequence` Each entry has around 17 which the user is required to answer OK or REPAIR (this is the `FK_Status`) My current Query is the following and results just an Employee and the equipment they worked on. ``` SELECT DISTINCT a.EnteredDate, bb.EmployeeId, bb.EmployeeName, dd.EquipmentId, dd.EquipmentName FROM dbo.PIT_Inspection a INNER JOIN dbo.EmployeeName bb ON a.FK_EmployeeName = bb.PK_EmployeeName INNER JOIN dbo.EquipmentName dd ON a.FK_EquipmentName = dd.PK_EquipmentName ``` Results for above query: ![enter image description here](https://i.stack.imgur.com/atYsv.png) But then here is where my question comes in. Notice the Main Table - `FK_Status` of `2` on Line 19.. I would like to detect this and for this particular Employee Display Repair in an additional column in the table above named `StatusName`.
You can do this by using `GROUP BY` instead of `DISTINCT`, then you can use a conditional count to see how many rows have a status of 2, if it is more than none then display `REPAIR` ``` SELECT a.EnteredDate, bb.EmployeeId, bb.EmployeeName, dd.EquipmentId, dd.EquipmentName, StatusName = CASE WHEN COUNT(CASE WHEN a.FK_Status = 2 THEN 1 END) > 0 THEN 'REPAIR' ELSE '' END FROM dbo.PIT_Inspection a INNER JOIN dbo.EmployeeName bb ON a.FK_EmployeeName = bb.PK_EmployeeName INNER JOIN dbo.EquipmentName dd ON a.FK_EquipmentName = dd.PK_EquipmentName GROUP BY a.EnteredDate, bb.EmployeeId, bb.EmployeeName, dd.EquipmentId, dd.EquipmentName; ```
Try this: ``` SELECT a.EnteredDate, bb.EmployeeId, bb.EmployeeName, dd.EquipmentId, dd.EquipmentName, CASE WHEN SUM(CASE FK_Status WHEN 2 THEN 1 ELSE 0 END) > 0 THEN 'Repair' ELSE 'OK' END AS StatusName FROM dbo.PIT_Inspection a INNER JOIN dbo.EmployeeName bb ON a.FK_EmployeeName = bb.PK_EmployeeName INNER JOIN dbo.EquipmentName dd ON a.FK_EquipmentName = dd.PK_EquipmentName GROUP BY a.EnteredDate, bb.EmployeeId, bb.EmployeeName, dd.EquipmentId, dd.EquipmentName ```
SQL Display Distinct records and include a field if certain data is found
[ "", "sql", "sql-server", "t-sql", "" ]
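The conditional-count trick above, reduced to a minimal stdlib `sqlite3` example (two equipment ids, one of which has a status-2 row; data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE inspection (equipment_id INTEGER, fk_status INTEGER);
    INSERT INTO inspection VALUES (10,1),(10,1),(10,2),(20,1),(20,1);
""")

# One row per equipment; flag REPAIR when any of its rows has status 2.
rows = conn.execute("""
    SELECT equipment_id,
           CASE WHEN SUM(CASE WHEN fk_status = 2 THEN 1 ELSE 0 END) > 0
                THEN 'REPAIR' ELSE 'OK' END AS status_name
    FROM inspection
    GROUP BY equipment_id
    ORDER BY equipment_id
""").fetchall()
print(rows)  # [(10, 'REPAIR'), (20, 'OK')]
```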
There are tables like these: 1. pictures: ``` wall_id|picture_id|user_id|likes 1| 1| 1| 2 1| 2| 1| 0 2| 1| 1| 1 2| 2| 2| 2 ``` Pair (wall\_id, picture\_id) is unique 2. likers: ``` wall_id|picture_id|user_id 1| 1| 3 1| 1| 2 2| 1| 2 2| 2| 4 2| 2| 3 ``` I want to get something like that: ``` user_id|pictures_count|likes_count|likers_count 1| 3| 3| 2 2| 1| 2| 2 ``` I tried this: ``` select p.user_id as user_id, count(p.user_id) as pictures_count, sum(p.likes) as likes_count, count(distinct l.user_id) as likers_count from pictures p left join likers l on p.wall_id = l.wall_id and p.picture_id = l.picture_id group by p.user_id ``` and ``` select pictures.user_id, count(pictures.user_id) as pictures_count, sum(pictures.likes) as likes_count, count(distinct likers.user_id) as likers_count from pictures, likers where pictures.picture_id = likers.picture_id and pictures.user_id = likers.user_id group by pictures.user_id ``` But I get such result: ``` user_id|pictures_count|likes_count|likers_count 1| 4| 6| 2 2| 2| 4| 2 ``` What should I do to get right result?
`Join`s are curious things. When you have a key and multiple rows match on *both* sides, then you get more rows than you expect. The solution is to pre-aggregate the rows on each side. This is a bit complicated with your data model, because you need the `join` to look up the user id for the `likes` table. ``` select p.user_id as user_id, p.pictures_count, p.likes_count, l.likers_count from (select p.user_id, count(*) as pictures_count, sum(likes) as likes_count from pictures p group by p.user_id ) p left join (select p.user_id, count(distinct l.user_id) as likers_count from pictures p left join likers l on p.wall_id = l.wall_id and p.picture_id = l.picture_id group by p.user_id ) l on p.user_id = l.user_id; ``` Notice that because the aggregations are done in subqueries, no aggregation is needed in the outer query.
Try this: ``` SELECT T1.user_id,T1.pictures_count,T1.likes_count,T2.likers_count FROM (select p.user_id, count(*) AS pictures_count, SUM(p.likes) as likes_count from pictures p group by p.user_id) T1 JOIN (select p.user_id,count(distinct l.user_id) as likers_count from pictures p join likers l on p.wall_id = l.wall_id and p.picture_id = l.picture_id group by p.user_id) T2 on T1.user_id=T2.user_id ``` Result: ``` USER_ID PICTURES_COUNT LIKES_COUNT LIKERS_COUNT 1 3 3 2 2 1 2 2 ``` See result in [**SQL Fiddle**](http://www.sqlfiddle.com/#!2/f19960/15).
sql count() from multiple tables
[ "", "sql", "sqlite", "count", "" ]
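The accepted pre-aggregate-then-join query runs essentially unchanged on stdlib `sqlite3` with the question's rows, and reproduces the desired counts:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE pictures (wall_id INTEGER, picture_id INTEGER,
                           user_id INTEGER, likes INTEGER);
    CREATE TABLE likers (wall_id INTEGER, picture_id INTEGER, user_id INTEGER);
    INSERT INTO pictures VALUES (1,1,1,2),(1,2,1,0),(2,1,1,1),(2,2,2,2);
    INSERT INTO likers VALUES (1,1,3),(1,1,2),(2,1,2),(2,2,4),(2,2,3);
""")

# Aggregate each side separately so the join cannot multiply rows.
rows = conn.execute("""
    SELECT p.user_id, p.pictures_count, p.likes_count, l.likers_count
    FROM (SELECT user_id, COUNT(*) AS pictures_count, SUM(likes) AS likes_count
          FROM pictures GROUP BY user_id) AS p
    LEFT JOIN (SELECT p.user_id, COUNT(DISTINCT l.user_id) AS likers_count
               FROM pictures AS p
               JOIN likers AS l ON p.wall_id = l.wall_id
                               AND p.picture_id = l.picture_id
               GROUP BY p.user_id) AS l
      ON p.user_id = l.user_id
    ORDER BY p.user_id
""").fetchall()
print(rows)  # [(1, 3, 3, 2), (2, 1, 2, 2)]
```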
Using PostgreSQL 9.3, I want to convert the calculated values to data type `float`. My first attempt: ``` SELECT float(SUM(Seconds))/-1323 AS Averag; ``` Gives me this error: > ``` > syntax error at or near "SUM" > ``` My second attempt: ``` SELECT to_float(SUM(Seconds))/-1323 AS Averag; ``` Gives me this error: > ``` > function to_float(bigint) does not exist > ```
You need to use the `cast` syntax: ``` SELECT CAST (SUM(Seconds) AS FLOAT)/-1323 AS Averag; ```
I use the shorthand cast syntax almost everywhere: ``` SELECT sum(seconds)::float / -1323 AS averag; ``` More details: * [Postgres data type cast](https://stackoverflow.com/questions/13676816/postgres-data-type-cast/13676871#13676871)
Cast syntax to convert a sum to float
[ "", "sql", "postgresql", "casting", "type-conversion", "postgresql-9.3", "" ]
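The difference the cast makes is visible in stdlib `sqlite3` too (SQLite spells the type `REAL`; with two integer operands, `/` truncates):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (seconds INTEGER);
    INSERT INTO t VALUES (400),(500),(423);   -- SUM = 1323
""")

int_div = conn.execute("SELECT SUM(seconds) / 2 FROM t").fetchone()[0]
real_div = conn.execute(
    "SELECT CAST(SUM(seconds) AS REAL) / 2 FROM t"
).fetchone()[0]
print(int_div, real_div)  # 661 661.5
```

Postgres additionally offers the `sum(seconds)::float` shorthand from the second answer, but only `CAST` is portable.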
I have 2 MySQL tables. One is `pastsergicalhistory_type` and the other one is `pastsurgicalhistory` Below is `pastsergicalhistory_type` ``` CREATE TABLE `pastsergicalhistory_type` ( `idPastSergicalHistory_Type` int(11) NOT NULL AUTO_INCREMENT, `idUser` int(11) DEFAULT NULL, `Name` varchar(45) NOT NULL, PRIMARY KEY (`idPastSergicalHistory_Type`), KEY `fk_PastSergicalHistory_Type_User1_idx` (`idUser`), CONSTRAINT `fk_PastSergicalHistory_Type_User1` FOREIGN KEY (`idUser`) REFERENCES `user` (`idUser`) ON DELETE NO ACTION ON UPDATE NO ACTION ) ENGINE=InnoDB AUTO_INCREMENT=13 DEFAULT CHARSET=utf8 ``` Below is `pastsurgicalhistory` ``` CREATE TABLE `pastsurgicalhistory` ( `idPastSurgicalHistory` int(11) NOT NULL AUTO_INCREMENT, `idPatient` int(11) NOT NULL, `idPastSergicalHistory_Type` int(11) NOT NULL, `Comment` varchar(45) DEFAULT NULL, `ActiveStatus` tinyint(1) NOT NULL, `LastUpdated` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, PRIMARY KEY (`idPastSurgicalHistory`), KEY `fk_PastSurgicalHistory_Patient1_idx` (`idPatient`), KEY `fk_PastSurgicalHistory_PastSergicalHistory_Type1_idx` (`idPastSergicalHistory_Type`), CONSTRAINT `fk_PastSurgicalHistory_PastSergicalHistory_Type1` FOREIGN KEY (`idPastSergicalHistory_Type`) REFERENCES `pastsergicalhistory_type` (`idPastSergicalHistory_Type`) ON DELETE NO ACTION ON UPDATE NO ACTION, CONSTRAINT `fk_PastSurgicalHistory_Patient1` FOREIGN KEY (`idPatient`) REFERENCES `patient` (`idPatient`) ON DELETE NO ACTION ON UPDATE NO ACTION ) ENGINE=InnoDB AUTO_INCREMENT=11 DEFAULT CHARSET=utf8 ``` Now my requirement is as this, I will explain it in point form. 1. Get all the data from `pastsergicalhistory_type` where `idUser` is `NULL` or `idUser` is 1. 2. Get all the data from `pastsurgicalhistory` where `idPatient` is 2. as you can see, the foreign key of `pastsurgicalhistory` is the primary key of `pastsergicalhistory_type`. I tried the below query, but it gave me the wrong results. 
It only displayed what is available in `pastsurgicalhistory`. The data in `pastsergicalhistory_type` (which follows the condition in point 1) which is not in `pastsurgicalhistory` is not displayed. ``` SELECT pastsergicalhistory_type.*, pastsurgicalhistory.* FROM pastsergicalhistory_type LEFT JOIN pastsurgicalhistory ON pastsurgicalhistory.`idPastSergicalHistory_Type` = pastsergicalhistory_type.`idPastSergicalHistory_Type` WHERE pastsergicalhistory_type.idUser = NULL OR pastsergicalhistory_type.idUser=1 AND pastsurgicalhistory.idPatient=2 ``` So, how can I solve this problem? **EDIT** If I use the `AND pastsurgicalhistory.idPatient=2` in my where clause, it actually filters the "entire" result set. This will give me results where `idPatient` is related to 2. But as I mentioned, I need data which is not available in `pastsurgicalhistory` table as well.
Try ``` SELECT pastsergicalhistory_type.*, pastsurgicalhistory.* FROM pastsergicalhistory_type LEFT JOIN pastsurgicalhistory ON (pastsurgicalhistory.`idPastSergicalHistory_Type` = pastsergicalhistory_type.`idPastSergicalHistory_Type` and pastsurgicalhistory.idPatient=2) WHERE (pastsergicalhistory_type.idUser IS NULL OR pastsergicalhistory_type.idUser=1) ; ```
Use parentheses? ``` WHERE pastsergicalhistory_type.idUser = NULL OR pastsergicalhistory_type.idUser=1 AND pastsurgicalhistory.idPatient=2 ``` I believe this would return results where idUser is 1 **and** idPatient is 2, **or** idUser is null. **Try this:** ``` WHERE (pastsergicalhistory_type.idUser = NULL OR pastsergicalhistory_type.idUser=1) AND pastsurgicalhistory.idPatient=2 ``` **If I understand you correctly?** ``` SELECT pastsergicalhistory_type.*, pastsurgicalhistory.* FROM pastsergicalhistory_type RIGHT JOIN pastsurgicalhistory ON pastsurgicalhistory.`idPastSergicalHistory_Type` = pastsergicalhistory_type.`idPastSergicalHistory_Type` WHERE (pastsergicalhistory_type.idUser = NULL OR pastsergicalhistory_type.idUser=1) AND pastsurgicalhistory.idPatient=2 ``` Even if it works without parentheses for you, I would say it's better to use them to make it more readable.
Collect data from 2 different tables, following conditions
[ "", "mysql", "sql", "database", "join", "subquery", "" ]
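The key move in the accepted answer, filtering the right-hand table inside `ON` rather than `WHERE` so the `LEFT JOIN` keeps unmatched rows, can be seen side by side with stdlib `sqlite3` (simplified invented schema; also note `IS NULL` is used here, since `idUser = NULL` never matches in SQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE hist_type (id INTEGER, id_user INTEGER, name TEXT);
    CREATE TABLE hist (id_type INTEGER, id_patient INTEGER);
    INSERT INTO hist_type VALUES (1,NULL,'A'),(2,1,'B'),(3,2,'C');
    INSERT INTO hist VALUES (1,2),(2,5);
""")

# Patient filter inside ON: type 'B' survives with a NULL history row.
on_filter = conn.execute("""
    SELECT t.name, h.id_patient
    FROM hist_type AS t
    LEFT JOIN hist AS h ON h.id_type = t.id AND h.id_patient = 2
    WHERE t.id_user IS NULL OR t.id_user = 1
    ORDER BY t.name
""").fetchall()

# Patient filter in WHERE: it discards the NULL-extended row, so 'B' is lost.
where_filter = conn.execute("""
    SELECT t.name, h.id_patient
    FROM hist_type AS t
    LEFT JOIN hist AS h ON h.id_type = t.id
    WHERE (t.id_user IS NULL OR t.id_user = 1) AND h.id_patient = 2
    ORDER BY t.name
""").fetchall()
print(on_filter, where_filter)
```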
I am trying to find a way to detect the users who have duplicate accounts and have ordered, from a table I have in my database. I've made some complex queries and haven't been able to solve my problem. My first thought was to group by `ip` and `count(user_id)`, but users can have more than one order, so these will be counted as well (e.g. if I use the above, the ip "192.168.1.1" will return 3 and not 2, which is what I want). My data is like ``` | order_id | user_id | ip | --------------------------------------- | 1001 | 2 | 192.168.1.1 | | 1002 | 5 | 192.168.1.1 | | 1003 | 2 | 192.168.1.1 | | 1004 | 12 | 18.15.0.1 | | 1005 | 9 | 10.0.0.1 | ``` Result needed: IP 192.168.1.1, because it has 2 different user_id values. Any help is appreciated
You are looking for IPs given to more than one user? Then group by IP and count distinct users. ``` select ip from mytable group by ip having count(distinct user_id) > 1; ``` EDIT: To get the users associated with that ``` select user_id, ip from mytable where ip in ( select ip from mytable group by ip having count(distinct user_id) > 1 ); ``` And here is the same with an EXISTS clause: ``` select user_id, ip from mytable where exists ( select * from mytable other where other.ip = mytable.ip and other.user_id <> mytable.user_id ); ```
use `Distinct` inside your `Count` ``` SELECT COUNT(Distinct user_ID) FROM table GROUP BY ip ```
Get duplicate users from orders table
[ "", "mysql", "sql", "database", "" ]
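The accepted `HAVING COUNT(DISTINCT …)` query runs as-is under stdlib `sqlite3` with the question's rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (order_id INTEGER, user_id INTEGER, ip TEXT);
    INSERT INTO orders VALUES (1001,2,'192.168.1.1'),(1002,5,'192.168.1.1'),
                              (1003,2,'192.168.1.1'),(1004,12,'18.15.0.1'),
                              (1005,9,'10.0.0.1');
""")

# DISTINCT is the crucial part: repeat orders by one user don't count.
ips = [r[0] for r in conn.execute("""
    SELECT ip FROM orders
    GROUP BY ip
    HAVING COUNT(DISTINCT user_id) > 1
""")]
print(ips)  # ['192.168.1.1']
```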
I have a table which name is 'world'. That table has columns in this order: name, continent, area, population and gdp. The query i want to write is for the next problem: Some countries have populations more than three times that of any of their neighbours (in the same continent). Give the countries and continents. My query is like this: ``` SELECT name, continent FROM world as x WHERE x.population/3 > ALL (SELECT population FROM world as y WHERE x.continent = y.continent) ``` But it doesn't work as it is supposed to. What can be the possible problem?
Personally, I find the use of the `any`, `all`, and `some` keywords to be a bit hard to follow. I find it easier to interpret such a query when it uses `min()` and `max()` explicitly: ``` SELECT w.name, w.continent FROM world w WHERE w.population > (SELECT 3 * MAX(w2.population) FROM world w2 WHERE w2.continent = w.continent AND w2.name <> w.name ); ``` In addition, the use of `min()` and `max()` in the subquery makes it behave more intuitively when there are `NULL` values that the subquery might return. Note that I have included table aliases for each table, and *used* them to qualify column names.
You can use `WHERE NOT EXISTS`: ``` SELECT x.name, x.continent FROM world x WHERE NOT EXISTS ( SELECT 1 FROM world y WHERE y.continent = x.continent AND y.name <> x.name AND y.population >= x.population/3 ); ``` In other words, get all countries where there is not another country on the same continent with even 1/3 the population. The advantage this has over using an aggregate with a subquery is that it will return values for continents with only one country. **[See SQL Fiddle Demo here](http://sqlfiddle.com/#!4/d41d8/40981) (WHERE EXISTS) vs [this one](http://sqlfiddle.com/#!4/d41d8/40982) (MAX)**
SQL query for finding countries in the world with 3 times bigger population than all of the countries at the same continent
[ "", "sql", "" ]
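The accepted `MAX`-subquery form can be checked with stdlib `sqlite3` on a small invented world (one continent with a clear winner, one without):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE world (name TEXT, continent TEXT, population INTEGER);
    INSERT INTO world VALUES ('Big','X',100),('Small1','X',30),('Small2','X',20),
                             ('Even','Y',50),('Close','Y',40);
""")

# A country qualifies when it exceeds triple the largest of its neighbours.
rows = conn.execute("""
    SELECT w.name, w.continent
    FROM world AS w
    WHERE w.population > (SELECT 3 * MAX(w2.population)
                          FROM world AS w2
                          WHERE w2.continent = w.continent
                            AND w2.name <> w.name)
""").fetchall()
print(rows)  # [('Big', 'X')]
```

Multiplying by 3 instead of dividing by 3 also sidesteps integer-division surprises.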
I need to write a query in which I select all people who have a date of birth over 30 years ago. Unfortunately, as I am using Oracle I cannot use the `DATEADD()` function. I have currently got this, but obviously this isn't dynamic and won't change as the years pass: ``` SELECT Name, DOB FROM Employee WHERE DOB <= DATE '1985-01-01'; ```
Use [`Add_MONTHS`](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions004.htm) to *add* `(- 12 * 30)`. ``` SELECT Name, DOB FROM Employee WHERE DOB <= ADD_MONTHS(SYSDATE, -(12 * 30)); ```
Other way, using intervals: ``` SELECT Name, DOB FROM Employee WHERE DOB <= sysdate - interval '30' year; ```
Subtracting 30 Years from Current Date in Oracle SQL
[ "", "sql", "oracle", "" ]
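SQLite has neither `DATEADD` nor `ADD_MONTHS`; its date modifiers express the same subtraction, which makes for a quick stdlib `sqlite3` check (a fixed "today" is used so the result is deterministic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employee (name TEXT, dob TEXT);
    INSERT INTO employee VALUES ('old','1980-01-01'),('young','2000-01-01');
""")

cutoff = conn.execute("SELECT date('2015-06-01', '-30 years')").fetchone()[0]
names = [r[0] for r in conn.execute(
    "SELECT name FROM employee WHERE dob <= date('2015-06-01', '-30 years')"
)]
print(cutoff, names)  # 1985-06-01 ['old']
```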
I have an SQL statement that yields something like this: ``` Amounts $101.45 $1000.56 $20978.44 $2.98 ``` The SQL that got me this is: ``` select CHANGE_EFFECTIVE_AMOUNT from V_Rpt where ACCOUNT_NUMBER = '100' and CHANGE_TYPE_CODE = 2 order by TRAN_SEQUENCE_NUMBER desc ``` How can I structure my statement so that it only returns the second value in this column? Basically, something like "select the 2nd in a column from..." The value will change all the time but it will always be the value after the top one. I am using Microsoft SQL Server 2008 R2, in case it matters. And, this will eventually go into MS Access 2007 as a pass-through query.
You could achieve this using the `ROW_NUMBER()` function. ``` ;WITH CTE AS( SELECT CHANGE_EFFECTIVE_AMOUNT, RN = ROW_NUMBER() OVER(ORDER BY TRAN_SEQUENCE_NUMBER DESC) FROM V_Rpt WHERE ACCOUNT_NUMBER = '100' AND CHANGE_TYPE_CODE = 2 ) SELECT CHANGE_EFFECTIVE_AMOUNT FROM CTE WHERE RN = 2 ``` *This will return nothing if your query has only 1 row.* --- Using a SUBQUERY: ``` SELECT t.CHANGE_EFFECTIVE_AMOUNT FROM( SELECT CHANGE_EFFECTIVE_AMOUNT, RN = ROW_NUMBER() OVER(ORDER BY TRAN_SEQUENCE_NUMBER DESC) FROM V_Rpt WHERE ACCOUNT_NUMBER = '100' AND CHANGE_TYPE_CODE = 2 )t WHERE t.RN = 2 ```
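The `ROW_NUMBER()` approach can be tried outside SQL Server too: SQLite 3.25+ supports the same window function, so a small in-memory check (with invented sequence numbers and amounts) looks like this:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (seq INTEGER, amount REAL)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(4, 2.98), (3, 20978.44), (2, 1000.56), (1, 101.45)])

row = conn.execute("""
    WITH ranked AS (
        SELECT amount,
               ROW_NUMBER() OVER (ORDER BY seq DESC) AS rn
        FROM t
    )
    SELECT amount FROM ranked WHERE rn = 2
""").fetchone()
print(row)  # (20978.44,)
```

The second row in descending `seq` order comes back, no matter what its value is.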
Try to add this `OFFSET 1 ROW FETCH NEXT 1 ROW ONLY` to your query. ``` select CHANGE_EFFECTIVE_AMOUNT from V_Rpt where ACCOUNT_NUMBER = '100' and CHANGE_TYPE_CODE = 2 order by TRAN_SEQUENCE_NUMBER desc OFFSET 1 ROW FETCH NEXT 1 ROW ONLY; ``` `OFFSET` skip 1 row and return only the next row from the result set.
Query for select 2nd row of a column
[ "", "sql", "sql-server", "t-sql", "" ]
Let's say an internet user searches for *"trouble with gmail"*. How can I return entries with *"problem|problems|issues|issue|trouble|troubles with gmail|googlemail|google mail"*? I don't want to manually add these links between different keywords, so the links between *"issue <> problem <> trouble"* and *"gmail <> googlemail <> google mail"* are completely unknown. They should be found in an automated process. **Approach to solve the problem** I provide a synonyms/thesaurus platform like thesaurus.com, synonym.com, etc. or use a synonyms database/API and use this user-generated input for my queries on a third website. But this won't cover all synonyms like the "gmail" example. Which other options do I have? Maybe something based on the given data and logged search phrases of the past?
You have to think of it ignoring the language. When you show a baby the same thing using two words, he understands that those words are synonyms. He might not have understood perfectly, but he will learn when this is repeated. You type "problem with gmail". Two choices: 1. Your search gives results: you click on one item. The system identifies that this item was already clicked before when searching for "google mail bug". That's a match, and we will call it a "relative search". 2. Your search gives poor results: we search our history for a matching search and propose: "did you mean trouble with yahoo mail? yes/no". You click no, and that's a "no match". We might then propose other suggestions, like a list of known "relative searches" or a list of possibly related ones, playing with both full-text search over our history and Levenshtein distance. When a term is scored highly enough, you can consider it a "synonym". The algorithm might be wrong, but in fact it depends on what you really expect. If I search "sending a message is difficult with google" and "gmail issue", nothing is a synonym, but the searches are relatively the same. This is more important to me than true synonyms. And if you really want to extract synonyms, I would do it in a second phase, comparing words inside "relative searches", and would include a manual check. I think Google's algorithm uses synonyms mainly to highlight search terms in page results, but not to do an actual search where they use the relative search terms, except in known situations, as the results for "gmail" and "google mail" are not the same. But if you identify 10 relative searches for "gmail" which all contain "google mail", that is a good starting point to guess they are synonyms.
This is a bit long for a comment. What you are looking for is called a "thesaurus" or "synonyms" list in the world of text searching. Apparently, there is a proposal for such functionality in MySQL. It is not yet implemented. ([Here](https://stackoverflow.com/questions/3265396/how-to-add-synonym-dictionary-to-mysql-fulltext-search) is a related question on Stack Overflow, although the link in the question doesn't seem to work.) The work-around would be to modify queries before sending them to the database. That is, parse the query into words, then look up all the synonyms for those words, and reconstruct the query. This works better for the natural language searches than the boolean searches (which require more careful reconstruction). Pseudo-code for getting the final word list with synonyms would be something like: ``` select @finalwords = concat_ws(' ', group_concat(synonyms separator ' ') ) from synonyms s where find_in_set(s.baseword, @words) > 0; ```
How to realize a context search based on synomyns?
[ "", "mysql", "sql", "search", "full-text-search", "" ]
Trying to convert seconds to a minute:seconds format. e.g. 207 seconds would be 3:27 I have a table with column length that has the length of songs stored in seconds. Using this query almost works, however, when a song should be 3:03 it will instead show 3:3 ``` select concat(Length/60, ':', Length%60) as Length from songs ```
``` SELECT Convert(nvarchar, (Length/60)) + ':' + RIGHT('0' + Convert(nvarchar, Length%60), 2) as Length from songs ```
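The same zero-padding logic, written as a tiny Python helper for comparison (the function name is mine):

```python
def mmss(total_seconds):
    # divmod splits seconds into whole minutes and the remainder;
    # :02d pads the seconds to two digits, so 183 becomes "3:03", not "3:3".
    minutes, seconds = divmod(total_seconds, 60)
    return f"{minutes}:{seconds:02d}"

print(mmss(207))  # 3:27
print(mmss(183))  # 3:03
```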
You can use dateadd to do that: ``` select convert(varchar, dateadd(second, 187, 0), 108) ``` This returns "00:03:07" so you can cut away hours if you don't need them.
Select length as minute:seconds format
[ "", "sql", "sql-server", "sql-server-2008", "" ]
Here are my tables: **Teacher:** ![enter image description here](https://i.stack.imgur.com/FwJDZ.png) **Student:** ![enter image description here](https://i.stack.imgur.com/LwxwN.png) **Test** ![enter image description here](https://i.stack.imgur.com/D5vS7.png) a teacher has students, a student takes tests. ``` SELECT t.name as teacherName, s.name as studentName, t.id as teacherID, s.id as studentID, MIN(tt.grade) as grade FROM teacher t JOIN student s ON s.`teacher_id` = t.id JOIN test tt ON tt.student_id = s.id GROUP BY studentID; ``` so here I get teacher -> student -> lowest grade (6 rows) ![enter image description here](https://i.stack.imgur.com/VFyQ2.png) What I want is teacher -> student with lowest grade -> the grade (2 rows) I can get teacher -> lowest grade per teacher but then the student name becomes ambiguous since I'm not grouping by that..
``` SELECT t.name as teacherName, s.name as studentName, t.id as teacherID, s.id as studentID, tt.grade FROM teacher t JOIN student s ON s.`teacher_id` = t.id JOIN test tt ON tt.student_id = s.id JOIN ( SELECT t.id teacherID, MIN(grade) AS grade FROM teacher t JOIN student s ON s.`teacher_id` = t.id JOIN test tt ON tt.student_id = s.id GROUP BY t.id ) j ON j.teacherID = t.id AND j.grade = tt.grade; ``` arghh
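The join-against-a-MIN-subquery shape is easy to verify on a toy dataset in SQLite (all rows below are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE teacher (id INTEGER, name TEXT);
    CREATE TABLE student (id INTEGER, name TEXT, teacher_id INTEGER);
    CREATE TABLE test    (student_id INTEGER, grade INTEGER);
    INSERT INTO teacher VALUES (1, 'T1'), (2, 'T2');
    INSERT INTO student VALUES (1, 'S1', 1), (2, 'S2', 1), (3, 'S3', 2);
    INSERT INTO test VALUES (1, 80), (1, 60), (2, 90), (3, 70);
""")

rows = conn.execute("""
    SELECT t.name, s.name, tt.grade
    FROM teacher t
    JOIN student s ON s.teacher_id = t.id
    JOIN test tt   ON tt.student_id = s.id
    JOIN (
        SELECT t.id AS teacher_id, MIN(tt.grade) AS grade
        FROM teacher t
        JOIN student s ON s.teacher_id = t.id
        JOIN test tt   ON tt.student_id = s.id
        GROUP BY t.id
    ) j ON j.teacher_id = t.id AND j.grade = tt.grade
    ORDER BY t.name
""").fetchall()
print(rows)  # [('T1', 'S1', 60), ('T2', 'S3', 70)]
```

Note that if two students of the same teacher tie for the lowest grade, both rows come back; add a tie-breaker if you want exactly one.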
You need to make a subtable getting the lowest grade by teacher, and then join to that. Like so: ``` select t.name teacher_name, s.name student_name, x.min_grade from teacher t join student s on t.id = s.teacher_id join test ts on s.id = ts.student_id join ( select sx.teacher_id, min(tsx.grade) min_grade from student sx join test tsx on sx.id = tsx.student_id group by 1) x on t.id = x.teacher_id and ts.grade = x.min_grade; ```
Get student with the lowest grade per teacher
[ "", "mysql", "sql", "group-by", "subquery", "aggregate", "" ]
Gurus, I have XML Column value like below in SQL 2012 ``` '<XMLDoc> <AAA> <Name>Name_A</Name> <Value>Val_A</Value> </AAA> <AAA> <Name>Name_B</Name> <Value>Val_B</Value> </AAA> <AAA> <Name>Name_C</Name> <Value>Val_C</Value> </AAA> <AAA> <Name>Name_D</Name> <Value>Val_D</Value> </AAA> <AAA> <Name>Name_E</Name> <Value>Val_E</Value> </AAA> : : <AAA> <Name>Name_Z</Name> <Value>Val_Z</Value> </AAA> </XMLDoc>' ``` We have requirement to store first 2 nodes in individual columns and the remaining nodes (length may be 0 to n) as delimited string in third column. I have tried SQL as below, need help to populate 3rd column ``` SELECT Col1 = Col.value('(/AAA/Value)[1]', 'varchar(255)') , Col2 = Col.value('(/AAA/Value)[2]', 'varchar(255)') FROM table ``` We need Output as ``` Col1 Col2 Col3 Val_A Val_B Val_c,Val_D,Val_E....Val_n ```
Gurus, I was able to solve this with the SQL below: ``` SELECT Col1 = Col.value('(/XMLDoc/AAA/Value)[1]', 'varchar(255)') , Col2 = Col.value('(/XMLDoc/AAA/Value)[2]', 'varchar(255)') , Col3 = STUFF( (SELECT ',' + x.value('(Value)[1]', 'varchar(50)') FROM Col.nodes('/XMLDoc/AAA[position()>2]') AS Node(x) FOR XML PATH('')), 1, 1, '') FROM table ``` Let me know if there is any better solution
The following T-SQL should solve your problem in SQL Server 2012, using a subquery with FOR XML PATH ``` SELECT Col1 = Col.value('(/XMLDoc/AAA/Value)[1]', 'varchar(255)'), Col2 = Col.value('(/XMLDoc/AAA/Value)[2]', 'varchar(255)'), Col3 = (SELECT ', '+P.N.value('text()[1]', 'varchar(max)') FROM [table].Col.nodes('/XMLDoc/AAA[position()>2]/Value') P(N) FOR XML PATH(''), type).value('substring(text()[1], 3)', 'varchar(max)') FROM [table] ```
SQL Server - convert XML Column to delimited string
[ "", "sql", "sql-server", "sqlxml", "" ]
I am trying to do this query. This is what I have. My table is: Table ``` StudyID FacultyID Year Access1 Access2 Access3 1 1 2014 4 8 5 1 2 2014 8 4 7 1 1 2013 5 4 4 2 3 2014 4 6 5 2 5 2013 5 8 10 2 4 2014 5 5 7 3 7 2013 9 4 7 ``` I want to group by StudyID and Year and get the minimum value of each field Access1, Access2 and Access3, and show only the last year, I mean for each group the first row. Here is the result: ``` StudyID Year Access1 Access2 Access3 1 2014 4 4 5 2 2014 4 5 5 3 2013 9 4 7 ``` This is my query: ``` SELECT DISTINCT T.StudyID, T.Year, MIN(T.Access1), MIN(T.Access2), MIN(T.Access3) FROM T GROUP BY T.StudyID, T.Year ORDER BY T.StudyID, T.Year DESC ``` I also tried this one: ``` ;WITH MyQuery AS ( SELECT DISTINCT T.StudyID, T.Year, MIN(T.Access1), MIN(T.Access2), MIN(T.Access3), ROW_NUMBER() OVER (PARTITION BY T.StudyID, T.Year ORDER BY T.StudyID, T.Year DESC) AS rownumber FROM T GROUP BY T.StudyID, T.Year ORDER BY T.StudyID, T.Year DESC ) SELECT * FROM MyQuery WHERE rownumber = 1 ``` No success; I know I am missing something, but don't know what. Thanks in advance!!!!
You can `GROUP BY StudyID, Year` and then in an outer query select the first row from each `StudyID, Year` group: ``` SELECT StudyID, Year, minAccess1, minAccess2, minAccess3 FROM ( SELECT StudyID, Year, min(Access1) minAccess1, min(Access2) minAccess2, min(Access3) minAccess3, ROW_NUMBER() OVER (PARTITION BY StudyID ORDER BY Year DESC) AS rn FROM mytable GROUP BY StudyID, Year ) t WHERE t.rn = 1 ``` `ROW_NUMBER` is used to assign an ordering number to each `StudyID` group according to `Year` values. The row with the maximum `Year` value is assigned a `rn = 1`.
Try this: ``` SELECT DISTINCT T.StudyID, T.Year, MIN(T.Access1), MIN(T.Access2), MIN(T.Access3) FROM myTable T WHERE T.Year = (SELECT MAX(T2.Year) FROM myTable T2 WHERE T2.StudyID = T.StudyID) GROUP BY T.StudyID ``` It's giving the result you wanted in SQLite, but perhaps SQL Server needs an alias; I'm not sure. Can't test it right now.
SQL Server Group By Query Select first row each group
[ "", "sql", "sql-server", "" ]
I'm creating a flat file with header. I'm getting an ORA-00904 error, which I think is because the headers I created do not have the field 'employee_name' (am I correct with that assumption?). If yes, how can I sort my query without the headers? The error I get: ``` ORA-00904: "employee_id": invalid identifier ``` My code: ``` select 'FILENAME' as filerec from dual UNION ALL select 'FILEDATE: ' || to_char(sysdate,'mm/dd/yyyy') as filerec from dual UNION ALL select employee_id || employee_name from employee_database as filerec order by employee_id; ``` This is the output I want to produce: ``` FILENAME FILEDATE: 02/27/2015 200125Ruth Chan 200126Dan Gonzales 200135Lisa Mayoral ```
In **SQL**, you cannot name the columns in **ORDER BY** with a **composite statement**, you have to use **column position**. Or, **Project the columns explicitly**. In your case, since you have concatenated the columns, you cannot project them explicitly, also, column position would not be of any sense. You could therefore play a small trick. Add a pseudo column, with required values to the rows you want to be sorted first, and then use **NULL** value in the pseudo column for which you want the sorting after the first column. So that, the NULLs are always placed in the end of the sort. For example, ``` SQL> SELECT filerec FROM ( 2 SELECT 'FILENAME' AS filerec, 1 col FROM dual 3 UNION ALL 4 SELECT 'FILEDATE: ' || to_char(SYSDATE,'mm/dd/yyyy') as filerec, 2 col FROM dual 5 UNION ALL 6 SELECT empno || ename AS filerec, NULL col FROM emp 7 ORDER BY 2,1 8 ); FILEREC -------------------------------------------------- FILENAME FILEDATE: 02/27/2015 7369SMITH 7499ALLEN 7521WARD 7566JONES 7654MARTIN 7698BLAKE 7782CLARK 7788SCOTT 7839KING 7844TURNER 7876ADAMS 7900JAMES 7902FORD 7934MILLER 16 rows selected. SQL> ```
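The same sort-key trick can be checked in SQLite; since SQLite sorts `NULL`s first in ascending order (the opposite of Oracle's default), the sketch below sorts on `col IS NULL` explicitly. The table and rows are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (empno INTEGER, ename TEXT)")
conn.executemany("INSERT INTO emp VALUES (?, ?)", [(7369, "SMITH"), (7499, "ALLEN")])

rows = [r[0] for r in conn.execute("""
    SELECT filerec, col FROM (
        SELECT 'FILENAME' AS filerec, 1 AS col
        UNION ALL
        SELECT 'FILEDATE: 02/27/2015', 2
        UNION ALL
        SELECT empno || ename, NULL FROM emp
    )
    ORDER BY (col IS NULL), col, filerec
""")]
print(rows)  # ['FILENAME', 'FILEDATE: 02/27/2015', '7369SMITH', '7499ALLEN']
```

The two header rows sort first on their explicit keys, and the data rows follow in their own order.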
Try ordering by column position number instead of column name: ``` select 'FILENAME' as filerec from dual UNION ALL select 'FILEDATE: ' || to_char(sysdate,'mm/dd/yyyy') as filerec from dual UNION ALL select employee_id || employee_name as filerec from employee_database order by 1; ``` or use the alias: ``` select 'FILENAME' as filerec from dual UNION ALL select 'FILEDATE: ' || to_char(sysdate,'mm/dd/yyyy') as filerec from dual UNION ALL select employee_id || employee_name as filerec from employee_database order by filerec; ```
ORA-00904: ORDER BY with UNION ALL
[ "", "sql", "oracle-sqldeveloper", "" ]
I'm new to databases. I have a table in Oracle with dates stored as INT; in my SELECT statement I include TO_DATE() so I can see the date. I then want to add a WHERE ... BETWEEN using dates, but I feel like there has to be a cleaner, more efficient way to do this. I tried using the alias for the column but it gives me an error saying it's expecting an integer, which is why I have TO_DATE() again in the WHERE. ``` SELECT to_date('1-Jan-1970 00:00:00','dd-mon-yyyy hh24:mi:ss') + (IRF.START_DATE_TIME_KEY/60/60/24) AS START_DATE_TIME_KEY WHERE to_date('1-Jan-1970 00:00:00','dd-mon-yyyy hh24:mi:ss') + (IRF.START_DATE_TIME_KEY/60/60/24) BETWEEN TO_DATE('20-Feb-2015 00:00:01','dd-mon-yyyy hh24:mi:ss') AND TO_DATE('20-Feb-2015 23:59:59','dd-mon-yyyy hh24:mi:ss'); ```
Couple things. One, using `BETWEEN` is not that great for dates because dates aren't exactly discrete. It is better to use something like: ``` WHERE the_date >= TRUNC(start_dt) AND the_date < TRUNC(end_dt) + 1; ``` Two, it looks like you're storing the date as the number of seconds that has passed since 1/1/1970? You might be better off at least using ANSI date literals and intervals: ``` SELECT DATE'1970-01-01' + NUMTODSINTERVAL(ifs_start_date_time_key, 'SECOND') ``` You can use an alias for the above, but you can't refer to it in the `WHERE` clause unless you use a subquery. So putting all of this together: ``` SELECT start_date_time_key FROM ( SELECT DATE'1970-01-01' + NUMTODSINTERVAL(irf.start_date_time_key, 'SECOND') AS start_date_time_key FROM mytable irf ) WHERE start_date_time_key >= DATE'2015-02-20' AND start_date_time_key < DATE'2015-02-20' + INTERVAL '1' DAY; -- or just DATE'2015-02-20' + 1; ``` --- Someone asked in the comments why not use `BETWEEN` with dates. Well, in this case it almost certainly wouldn't matter because there isn't going to be an index on `start_date_time_key` when it's converted to a `DATE`, but it will matter in cases where there is an index on the `DATE` column, so avoiding `BETWEEN` for dates is just a good habit to get into. I just tried the following on a medium-sized table in my DB: ``` SELECT * FROM mytable WHERE TRUNC(date_created) BETWEEN TRUNC(SYSDATE-2) AND TRUNC(SYSDATE-1); ``` The above gave me a full table scan with a high CPU cost. Then I did this: ``` SELECT * FROM mytable WHERE date_created >= TRUNC(SYSDATE-2) AND date_created < TRUNC(SYSDATE); ``` That gave me a range scan on the index (because there is an index on `date_created`) and a fair CPU cost. I can imagine the contrast would be even greater for a "big" table with millions of rows. Alternately, one could put a function-based index on the `DATE` column (e.g., `TRUNC(mydate)`), but that won't help you if your date values also have time portions. 
Just eschew `BETWEEN` -- using `>=` and `<` isn't that much more typing. --- Another thought just struck me. If the column `IRF.START_DATE_TIME_KEY` is itself indexed, then it might be better to convert the dates to similar integers and use those. ``` SELECT DATE'1970-01-01' + NUMTODSINTERVAL(irf.start_date_time_key, 'SECOND') AS start_date_time_key FROM mytable irf WHERE irf.start_date_time_key >= (DATE'2015-02-20' - DATE'1970-01-01') * 86400 AND irf.start_date_time_key < (DATE'2015-02-20' + 1 - DATE'1970-01-01') * 86400 ``` It's not pretty, but it would have the advantage of using the index on `IRF.START_DATE_TIME_KEY` in the event there is one.
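For the index-friendly variant, the integer bounds can be precomputed on the client. A Python sketch for one UTC day, assuming the stored INT really is seconds since 1970-01-01 UTC:

```python
from datetime import datetime, timedelta, timezone

def day_bounds_epoch(year, month, day):
    # Half-open [start, end) range in epoch seconds covering one UTC day.
    start = datetime(year, month, day, tzinfo=timezone.utc)
    end = start + timedelta(days=1)
    return int(start.timestamp()), int(end.timestamp())

lo, hi = day_bounds_epoch(2015, 2, 20)
print(lo, hi)  # 1424390400 1424476800
# ...then: WHERE start_date_time_key >= :lo AND start_date_time_key < :hi
```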
It could be done by a view, but since you said the DB is read-only it could be "nested" in your query sort of like this: ``` select REAL_DATE_COL from (select *, --code to convert int date to real date-- as REAL_DATE_COL from MY_TABLE) where REAL_DATE_COL between date1 and date2 ``` This way you only have that ugly conversion code once.
Oracle BETWEEN two dates stored as INT
[ "", "sql", "oracle", "date", "int", "between", "" ]
I have a problem in making SQL query. I am making a small Search Engine in which the word to page mapping or indexes are kept like this. Sorry I wasn't able to post images here so I tried writing the output like this. ``` +---------+---------+-----------+--------+ | word_id | page_id | frequency | degree | +---------+---------+-----------+--------+ | 2331 | 29 | 2 | 1 | | 2332 | 29 | 7 | 1 | | 2333 | 29 | 4 | 1 | | 2334 | 29 | 1 | 1 | | 2335 | 29 | 1 | 1 | | 2336 | 29 | 1 | 1 | | 2337 | 29 | 2 | 1 | | 2338 | 29 | 7 | 1 | | 2343 | 29 | 1 | 3 | | 2344 | 29 | 1 | 3 | ...... ...... ...... and so on. ``` Word\_id points to Words present in other table and page\_id points to URLs present in other table. Now Suppose I want to search "Rapid 3D Prototyping Services". I brought the union of results corresponding to individual words by query -> ``` select * from words_detail where word_id=2353 or word_id=2364 or word_id=2709 or word_id=2710; ``` In above query the word\_ids corresponds to the 4 words in the search query and the results are as below. Union of page\_id corresponding to individual words... mysql> ``` select * from words_detail where word_id=2353 or word_id=2364 or word_id=2709 or word_id=2710; +---------+---------+-----------+--------+ | word_id | page_id | frequency | degree | +---------+---------+-----------+--------+ | 2353 | 29 | 2 | 4 | | 2353 | 33 | 2 | 2 | | 2353 | 36 | 5 | 9 | | 2353 | 40 | 1 | 4 | | 2353 | 41 | 1 | 9 | | 2353 | 45 | 4 | 9 | | 2353 | 47 | 2 | 9 | | 2353 | 49 | 4 | 9 | | 2353 | 52 | 1 | 4 | | 2353 | 53 | 1 | 9 | | 2353 | 66 | 2 | 9 | | 2364 | 29 | 1 | 4 | | 2364 | 34 | 1 | 4 | | 2364 | 36 | 9 | 2 | | 2709 | 36 | 1 | 9 | | 2710 | 36 | 1 | 9 | +---------+---------+-----------+--------+ 16 rows in set (0.00 sec) ``` But I want the result to be sorted according to maximum match. The earlier result should be where all 4 words match, next result should be with 3 match and so on. 
In other words earlier results should have those page\_id which are common to 4 word\_ids, next should be those which are common in 3 words\_ids and so on. I checked [here](https://stackoverflow.com/questions/3289095/order-by-maximum-condition-match) but this is not working in my case because in my case OR conditions are not matched in a single row. How can such a query can be designed?
Use the occurrence count of your `page_id` as the matching count and then order by it. ``` select * from words_detail A inner join (SELECT PAGE_ID , COUNT(PAGE_ID) matchCount from words_detail where word_id=2353 or word_id=2364 or word_id=2709 or word_id=2710 group by PAGE_ID) B on A.PAGE_ID=B.PAGE_ID where word_id=2353 or word_id=2364 or word_id=2709 or word_id=2710 order by matchCount desc ```
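The heart of this approach is "count how many of the searched words each page matched, then sort by that count". A minimal SQLite check with invented word and page ids:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE words_detail (word_id INTEGER, page_id INTEGER)")
conn.executemany("INSERT INTO words_detail VALUES (?, ?)",
                 [(1, 36), (2, 36), (3, 36), (4, 36),   # page 36 matches all 4 words
                  (1, 29), (2, 29),                     # page 29 matches 2 words
                  (1, 33)])                             # page 33 matches 1 word

rows = conn.execute("""
    SELECT page_id, COUNT(*) AS matches
    FROM words_detail
    WHERE word_id IN (1, 2, 3, 4)
    GROUP BY page_id
    ORDER BY matches DESC, page_id
""").fetchall()
print(rows)  # [(36, 4), (29, 2), (33, 1)]
```

Pages matching all four query words rank first, then three, and so on.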
Try this ``` select p.* from words_detail p , (select word_id, count(1) as count from words_detail where word_id in (2353,2364,2709,2710) group by word_id) t where p.word_id = t.word_id order by t.count desc; ```
SQL Query to Sort the result according to maximum common results
[ "", "mysql", "sql", "" ]
**Title changed! The problem is to merge rows to columns.** I tag images (table Image) in a data base using a table (ImageAttribute) which has basically two rows - one for attribute type and the other for the value. The database is: ``` Image.id Image.URI ImageAttribute.imageId ImageAttribute.attributeType ImageAttribute.attributeValue ``` ImageAttribute.imageId refers to Image.id One image may have many attributes. e.g: ``` imageId | attributeType | attributeValue --------+---------------+--------------- 1 |COLOR | blue 1 |QUALITY | good 1 |MEMO | some notes for image 1 2 |COLOR | red 2 |QUALITY | good 2 |OBJECTS | cars, trees ``` From image to image the attributes set may differ. Is it possible to select all images having attribute COLOR with any value and having QUALITY='good' and display this information in one row e.g.: ``` id | COLOR | QUALITY --------+---------------+--------------- 1 |blue | good 2 |red | good ```
One option is to use conditional aggregation to basically `pivot` your results: ``` select imageid, max(case when attributetype = 'color' then attributevalue end) Color, max(case when attributetype = 'quality' then attributevalue end) Quality from imageattribute group by imageid ``` * [SQL Fiddle Demo](http://sqlfiddle.com/#!2/44e67/1) --- If you need to filter only results with `quality = good`, then you can add a `having` statement: ``` having max(case when attributetype = 'quality' then attributevalue end) = 'good' ``` * [More Fiddle](http://www.sqlfiddle.com/#!2/59d9a/1)
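The conditional-aggregation pivot ports directly to SQLite for a quick check. The rows are invented, and the `HAVING` clause repeats the `MAX(CASE …)` expression rather than the alias for portability:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE imageattribute (imageid INT, attributetype TEXT, attributevalue TEXT)")
conn.executemany("INSERT INTO imageattribute VALUES (?, ?, ?)",
    [(1, 'COLOR', 'blue'), (1, 'QUALITY', 'good'),
     (2, 'COLOR', 'red'),  (2, 'QUALITY', 'good')])

rows = conn.execute("""
    SELECT imageid,
           MAX(CASE WHEN attributetype = 'COLOR'   THEN attributevalue END) AS color,
           MAX(CASE WHEN attributetype = 'QUALITY' THEN attributevalue END) AS quality
    FROM imageattribute
    GROUP BY imageid
    HAVING MAX(CASE WHEN attributetype = 'QUALITY' THEN attributevalue END) = 'good'
    ORDER BY imageid
""").fetchall()
print(rows)  # [(1, 'blue', 'good'), (2, 'red', 'good')]
```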
This statement also seems to work (and you can use the SQL fiddle link provided by sgedded which is very usefull to test queries). ``` SELECT im1.imageId as id, im1.attributeValue as COLOR, im2.attributeValue as QUALITY FROM imageattribute im1, imageattribute im2 WHERE im1.imageId = im2.imageId AND im1.attributeType = "COLOR" AND im2.attributeType = "QUALITY" AND im2.attributeValue = "good" ``` Alex.
Make one row in result table from two rows in SQLite
[ "", "sql", "sqlite", "" ]
Here is the simplified table: ``` id - company_id - report_year - code 1 - 123456 - 2013 - ASD 2 - 123456 - 2013 - SDF 3 - 123456 - 2012 - ASD 4 - 123456 - 2012 - SDF ``` I would like to get all codes for the highest report_year available for the specified company_id. So I should get: ``` 1 - 123456 - 2013 - ASD 2 - 123456 - 2013 - SDF ``` But I cannot hard-code `WHERE year = 2013`, because for some companies the latest report year may be 2012 or 2009, for example. So I need to get data based on the latest year available. So far I have a query like this: ``` SELECT id, company_id, report_year, code, FROM `my_table` WHERE company_id= 123456 ``` I have tried some mixtures of group by and max() but I couldn't get what I need; this is the first time I am facing such a request, and it's confusing. Any ideas? I am using MySQL.
You could do this using a join on the same table which returns the max year per company like so: ``` select my_table.id, my_table.company_id, my_table.report_year, my_table.code from my_table inner join ( select max(report_year) as maxYear, company_id from my_table group by company_id ) maxYear ON my_table.report_year = maxYear.maxYear and my_table.company_id = maxYear.company_id ``` To limit this to a specific company, just add your `where` clause back: ``` select my_table.id, my_table.company_id, my_table.report_year, my_table.code from my_table inner join ( select max(report_year) as maxYear, company_id from my_table where my_table.company_id= 123456 group by company_id ) maxYear ON my_table.report_year = maxYear.maxYear and my_table.company_id = maxYear.company_id ```
Use a correlated sub-query to find latest year for a company: ``` SELECT id, company_id, report_year, code, FROM `my_table` t1 WHERE company_id = 123456 AND report_year = (select max(report_year) from `my_table` t2 where t1.company_id = t2.company_id) ```
How to get all data from a table only for the latest year, while many rows may be associated with that year
[ "", "mysql", "sql", "greatest-n-per-group", "" ]
I tried the following ways, but the result is not as required (I get the same text back after running the replace query). How can I replace that special character everywhere it occurs? Here is the query: ``` select REPLACE(description,'‚'COLLATE Latin1_General_BIN, N'&#x201A;') from Zstandars_25Feb2015 where Colname = 56 ```
By keeping `N`, I am able to fetch the data: ``` select REPLACE(description, N'‚' COLLATE Latin1_General_BIN, N'‚') from Zstandars_25Feb2015 where Colname = 56 ``` Thanks all for the help.
``` -- Removes special characters from a string value. -- All characters except 0-9, a-z and A-Z are removed and -- the remaining characters are returned. -- Author: Christian d'Heureuse, www.source-code.biz create function dbo.RemoveSpecialChars (@s varchar(256)) returns varchar(256) with schemabinding begin if @s is null return null declare @s2 varchar(256) set @s2 = '' declare @l int set @l = len(@s) declare @p int set @p = 1 while @p <= @l begin declare @c int set @c = ascii(substring(@s, @p, 1)) if @c between 48 and 57 or @c between 65 and 90 or @c between 97 and 122 set @s2 = @s2 + char(@c) set @p = @p + 1 end if len(@s2) = 0 return null return @s2 end ```
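For comparison, the same character whitelist in Python is a one-line regex rather than a character-by-character loop (the function name is mine):

```python
import re

def remove_special_chars(s):
    # Keep only 0-9, a-z and A-Z, mirroring the T-SQL loop; an empty result maps to None.
    if s is None:
        return None
    cleaned = re.sub(r"[^0-9A-Za-z]", "", s)
    return cleaned or None

print(remove_special_chars("Name_A, #1!"))  # NameA1
print(remove_special_chars("!!"))           # None
```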
How to replace this special character at all the places
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I need help writing a aging report on oracle. The report should be like: ``` aging file to submit total 17 aging file to submit 0-2 days 3 aging file to submit 2-4 days 4 aging file to submit 4-6 days 4 aging file to submit 6-8 days 2 aging file to submit 8-10 days 4 ``` I can create a query for each section and then union all the the results like: ``` select 'aging file to submit total ' || count(*) from FILES_TO_SUBMIT where trunc(DUE_DATE) > trunc(sysdate) -10 union all select 'aging file to submit 0-2 days ' || count(*) from FILES_TO_SUBMIT where trunc(DUE_DATE) <= trunc(sysdate) and trunc(DUE_DATE) >= trunc(sysdate-2) union all select 'aging file to submit 2-4 days ' || count(*) from FILES_TO_SUBMIT where trunc(DUE_DATE) <= trunc(sysdate-2) and trunc(DUE_DATE) >= trunc(sysdate-4) ; ``` I was wondering if there is a better way using oracle analytic functions or any other query that will get better performance? Sample data: ``` CREATE TABLE files_to_submit(file_id int, file_name varchar(255),due_date date); INSERT INTO FILES_TO_SUBMIT(FILE_ID,FILE_NAME,DUE_DATE) VALUES ( 1, 'file_' || 1, sysdate); INSERT INTO FILES_TO_SUBMIT(FILE_ID,FILE_NAME,DUE_DATE) VALUES ( 2, 'file_' || 2, sysdate -5); INSERT INTO FILES_TO_SUBMIT(FILE_ID,FILE_NAME,DUE_DATE) VALUES ( 3, 'file_' || 3, sysdate -4); INSERT INTO FILES_TO_SUBMIT(FILE_ID,FILE_NAME,DUE_DATE) VALUES ( 4, 'file_' || 4, sysdate); INSERT INTO FILES_TO_SUBMIT(FILE_ID,FILE_NAME,DUE_DATE) VALUES ( 5, 'file_' || 5, sysdate-3); INSERT INTO FILES_TO_SUBMIT(FILE_ID,FILE_NAME,DUE_DATE) VALUES ( 6, 'file_' || 6, sysdate-7); INSERT INTO FILES_TO_SUBMIT(FILE_ID,FILE_NAME,DUE_DATE) VALUES ( 7, 'file_' || 7, sysdate-10); INSERT INTO FILES_TO_SUBMIT(FILE_ID,FILE_NAME,DUE_DATE) VALUES ( 8, 'file_' || 8, sysdate-12); INSERT INTO FILES_TO_SUBMIT(FILE_ID,FILE_NAME,DUE_DATE) VALUES ( 9, 'file_' || 9, sysdate-3); INSERT INTO FILES_TO_SUBMIT(FILE_ID,FILE_NAME,DUE_DATE) VALUES ( 10, 'file_' || 10, sysdate-5); INSERT INTO 
FILES_TO_SUBMIT(FILE_ID,FILE_NAME,DUE_DATE) VALUES ( 11, 'file_' || 11, sysdate-6); INSERT INTO FILES_TO_SUBMIT(FILE_ID,FILE_NAME,DUE_DATE) VALUES ( 12, 'file_' || 12, sysdate-7); INSERT INTO FILES_TO_SUBMIT(FILE_ID,FILE_NAME,DUE_DATE) VALUES ( 13, 'file_' || 13, sysdate-5); INSERT INTO FILES_TO_SUBMIT(FILE_ID,FILE_NAME,DUE_DATE) VALUES ( 14, 'file_' || 14, sysdate-4); INSERT INTO FILES_TO_SUBMIT(FILE_ID,FILE_NAME,DUE_DATE) VALUES ( 15, 'file_' || 15, sysdate-2); INSERT INTO FILES_TO_SUBMIT(FILE_ID,FILE_NAME,DUE_DATE) VALUES ( 16, 'file_' || 16, sysdate-6); INSERT INTO FILES_TO_SUBMIT(FILE_ID,FILE_NAME,DUE_DATE) VALUES ( 17, 'file_' || 17, sysdate-6); INSERT INTO FILES_TO_SUBMIT(FILE_ID,FILE_NAME,DUE_DATE) VALUES ( 18, 'file_' || 18, sysdate-5); INSERT INTO FILES_TO_SUBMIT(FILE_ID,FILE_NAME,DUE_DATE) VALUES ( 19, 'file_' || 19, sysdate-10); INSERT INTO FILES_TO_SUBMIT(FILE_ID,FILE_NAME,DUE_DATE) VALUES ( 20, 'file_' || 20, sysdate-9); DROP TABLE files_to_submit; ```
Allow me to suggest [WIDTH_BUCKET](https://docs.oracle.com/cd/E11882_01/server.112/e41084/functions234.htm#SQLRF06163). This will divide the date range into equal-sized buckets. Since you want a 10-day range in groups of 2 days, the number of buckets will be 10 / 2 = 5. Query: ``` SELECT CASE GROUPING(bucket) WHEN 1 THEN 'aging file to submit Total' ELSE 'aging file to submit ' || (bucket-1)*2 || '-' || (bucket)*2 || ' days' END AS bucket_number, COUNT(1) AS files FROM ( SELECT WIDTH_BUCKET(due_date, sysdate, sysdate-10, 5) bucket FROM files_to_submit WHERE due_date >= sysdate-10 ) GROUP BY ROLLUP(bucket) ORDER BY bucket NULLS FIRST; ``` Result: ``` BUCKET_NUMBER FILES ------------------------------------ ---------- aging file to submit Total 17 aging file to submit 0-2 days 2 aging file to submit 2-4 days 3 aging file to submit 4-6 days 6 aging file to submit 6-8 days 5 aging file to submit 8-10 days 1 ```
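A rough Python model of what `WIDTH_BUCKET` does with this descending range. It ignores Oracle's exact boundary and overflow rules, so treat it as an illustration only:

```python
def width_bucket(value, low, high, buckets):
    # Oracle-style bucketing, simplified. A descending range (low > high)
    # is flipped to ascending by negating all three endpoints.
    if low > high:
        value, low, high = -value, -low, -high
    width = (high - low) / buckets
    return int((value - low) // width) + 1

# "now" is day 0; due dates 0..-10 days old fall into 5 buckets of 2 days each
for days_old in (0.5, 3, 9.5):
    print(width_bucket(-days_old, 0, -10, 5))  # 1, then 2, then 5
```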
you can use this simple approach to get the report for all days (without total): ``` select 'aging file to submit '|| trunc(dist/2)*2 ||'-'|| (trunc(dist/2)*2+2) || ' days: ' || count(*) from ( select trunc(sysdate) - trunc(DUE_DATE) as dist from FILES_TO_SUBMIT --where trunc(DUE_DATE) > trunc(sysdate) -10 ) group by trunc(dist/2) order by trunc(dist/2); ``` The only thing that matters is the number of days (the dist(ance) field). If you also want the Total in the same scan: ``` select 'aging file to submit '|| case when trunc(dist/2) is null then 'Total ' else trunc(dist/2)*2 ||'-'|| (trunc(dist/2)*2+2) || ' days: ' end || count(*) from ( select trunc(sysdate) - trunc(DUE_DATE) as dist from FILES_TO_SUBMIT where trunc(DUE_DATE) > trunc(sysdate) -10 ) group by rollup(trunc(dist/2)) order by trunc(dist/2) nulls first; ``` Hint: if you have hundreds of days of history, an index would be useful. (Pay attention: if your table is very big, >100 million rows, the creation of the index will take some time.) ``` create index index_name on files_to_submit(due_date); ``` and then change the condition to: ``` where DUE_DATE > trunc(sysdate) - 10 ``` This will speed up your query.
Sql (on Oracle) aging report by days
[ "", "sql", "oracle", "oracle11g", "oracle-analytics", "" ]
I'm trying to query a database but excluding the first and last rows from the table. Here's a sample table: ``` id | val -------- 1 1 2 9 3 3 4 1 5 2 6 6 7 4 ``` In the above example, I'd first like to order it by `val` and then exclude the first and last rows for the query. ``` id | val -------- 4 1 5 2 3 3 7 4 6 6 ``` This is the resulting set I would like. Note row 1 and 2 were excluded as they had the lowest and highest `val` respectively. I've considered LIMIT, TOP, and a couple of other things but can't get my desired result. If there's a method to do it (even better with first/last % rather than first/last n), I can't figure it out.
You can try this mate: ``` SELECT * FROM numbers WHERE id NOT IN ( SELECT id FROM numbers WHERE val IN ( SELECT MAX(val) FROM numbers ) OR val IN ( SELECT MIN(val) FROM numbers ) ); ```
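Running this idea on the question's own numbers in SQLite shows one subtlety worth knowing: both `val = 1` rows disappear, because the exclusion is by value, not by a single row, which differs slightly from the asker's sample output:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE numbers (id INTEGER, val INTEGER)")
conn.executemany("INSERT INTO numbers VALUES (?, ?)",
                 [(1, 1), (2, 9), (3, 3), (4, 1), (5, 2), (6, 6), (7, 4)])

rows = conn.execute("""
    SELECT id, val FROM numbers
    WHERE val <> (SELECT MIN(val) FROM numbers)
      AND val <> (SELECT MAX(val) FROM numbers)
    ORDER BY val
""").fetchall()
print(rows)  # [(5, 2), (3, 3), (7, 4), (6, 6)]
```

If exactly one top row and one bottom row should go regardless of ties, an offset-based approach over the sorted rows is needed instead.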
You can try this: ``` Select * from table where val!=(select val from table order by val asc LIMIT 1) and val!=(select val from table order by val desc LIMIT 1) order by val asc; ``` You can also use UNION and avoid the 2 val!=(query)
Exclude top and bottom n rows in SQL
[ "", "mysql", "sql", "pdo", "" ]
I have the following data in a `MySQL` table: ``` +-------------------------------------+-----------------+ | DATE | SipResponseCode | +-------------------------------------+-----------------+ | 20 Feb | 200 | | 20 Feb | 500 | | 20 Feb | 200 | | 20 Feb | 200 | | 20 Feb | 487 | | 20 Feb | 200 | | 20 Feb | 200 | | 20 Feb | 500 | | 20 Feb | 500 | | 20 Feb | 487 | | 20 Feb | 200 | | 20 Feb | 200 | | 20 Feb | 200 | | 20 Feb | 500 | | 20 Feb | 200 | | 20 Feb | 200 | | 20 Feb | 200 | | 20 Feb | 200 | | 20 Feb | 200 | | 20 Feb | 500 | | 21 Feb | 200 | | 21 Feb | 487 | | 21 Feb | 200 | | 21 Feb | 487 | | 21 Feb | 487 | | 21 Feb | 487 | | 21 Feb | 487 | | 21 Feb | 200 | | 21 Feb | 200 | | 21 Feb | 487 | | 21 Feb | 487 | | 21 Feb | 500 | ``` I want to write a SQL query that gives me the count of each SipResponseCode (`200`, `487`, `500`) per day: ``` +-------------------------------------+------------+-----------+--------+ | DATE | 200 | 487 | 500 | +-------------------------------------+------------+-----------+--------+ | 20 Feb | 14 | 2 | 5 | | 21 Feb | 4 | 7 | 1 | ``` I have tried but cannot get correct results.
If the values 200, 487, and 500 are constants that you know at the time of writing the query, you can use them in a CASE statement in your SELECT clause. Something a little more readable than a CASE statement is using the SUM() function with a condition, which essentially counts the number of rows that meet that condition. Try this:

```
SELECT dateColumn,
       SUM(SipResponseCode = 200) AS '200',
       SUM(SipResponseCode = 487) AS '487',
       SUM(SipResponseCode = 500) AS '500'
FROM myTable
GROUP BY dateColumn;
```
The following query will give you the results, not exactly in the shape you want per se, but in a form that's easy to parse later in your language (PHP, C#, etc.):

```
SELECT DATE,
       GROUP_CONCAT(CAST(TEST AS CHAR(10000) CHARACTER SET utf8) SEPARATOR ",") AS myCol
FROM (
    SELECT DATE, CONCAT(SipResponseCode, "^", COUNT(*)) AS TEST
    FROM table1
    GROUP BY DATE, SipResponseCode
) a
GROUP BY DATE
```
MySQL Query count with multiple group per date/time
[ "", "mysql", "sql", "" ]
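A runnable sketch of the chosen SUM(condition) trick, using Python's sqlite3 on a small subset of the data (the `calls`, `dt`, and `code` names are shortened stand-ins for the question's table and columns); the comparison evaluates to 1 or 0, so SUM counts matching rows per day:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE calls (dt TEXT, code INTEGER)")
# A small subset of the question's data.
conn.executemany("INSERT INTO calls VALUES (?, ?)", [
    ("20 Feb", 200), ("20 Feb", 500), ("20 Feb", 487), ("20 Feb", 200),
    ("21 Feb", 487), ("21 Feb", 200), ("21 Feb", 487), ("21 Feb", 500),
])

# Conditional aggregation: each comparison yields 1 or 0, so SUM()
# counts the rows matching that response code, per day.
rows = conn.execute("""
    SELECT dt,
           SUM(code = 200) AS "200",
           SUM(code = 487) AS "487",
           SUM(code = 500) AS "500"
    FROM calls
    GROUP BY dt
    ORDER BY dt
""").fetchall()
print(rows)  # [('20 Feb', 2, 1, 1), ('21 Feb', 1, 2, 1)]
```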
I have the following table, and below are the expected results. Please let me know if there is an easy way to get the expected results in SQL Server.

```
EmpNo Name Benefit StartDate  Status
--------------------------------------------
0001  ABC  Medical 01/01/2014 Active
0001  ABC  Dental  02/02/2013 Inactive
0001  ABC  Vision  03/03/2012 Active
0002  XYZ  Medical 01/01/2014 Active
0002  XYZ  Dental  02/02/2008 Inactive
```

The results should be like below:

```
EmpNo Name MedicalStart MedStatus DenStart   DenStatus VisionStart VisStatus
---------------------------------------------------------------------------
0001  ABC  01/01/2014   Active    02/02/2013 Inactive  03/03/2012  Active
0002  XYZ  01/01/2014   Active    02/02/2008 Inactive
```

**I forgot to put a few notes in my initial post.**

1) There are 10 benefit plans available, so an employee may enroll in any number of plans up to ten (all plans, a few plans, or no plans at all).

2) There will be only one row with the same benefit plan per EmpNo/Name.

3) Also, there are several fields associated with each row, for example, election option (Self, Family, etc.) and many more. To keep it simple, I have not included them in the question.
**Sample data:**

```
CREATE TABLE #Test
(
    EmpNo INT
    , Name VARCHAR(255)
    , Benefit VARCHAR(255)
    , StartDate DATETIME2
    , Status VARCHAR(255)
);

INSERT INTO #Test (EmpNo, Name, Benefit, StartDate, Status)
VALUES (0001, 'ABC', 'Medical', '01/01/2014', 'Active')
     , (0001, 'ABC', 'Dental', '02/02/2013', 'Inactive')
     , (0001, 'ABC', 'Vision', '03/03/2012', 'Active')
     , (0002, 'XYZ', 'Medical', '01/01/2014', 'Active')
     , (0002, 'XYZ', 'Dental', '02/02/2008', 'Inactive')
```

All it takes is a simple GROUP BY clause with conditional aggregation.

**Actual query (if there are historical records). Using ROW_NUMBER lets you find the latest record for each user and its benefit:**

```
SELECT  T.EmpNo
      , T.Name
      , MAX(CASE WHEN T.Benefit = 'Medical' THEN CONVERT(VARCHAR(10), CONVERT(DATE, T.StartDate, 106), 103) END) AS MedStart
      , MAX(CASE WHEN T.Benefit = 'Medical' THEN T.Status END) AS MedStatus
      , MAX(CASE WHEN T.Benefit = 'Dental' THEN CONVERT(VARCHAR(10), CONVERT(DATE, T.StartDate, 106), 103) END) AS DenStart
      , MAX(CASE WHEN T.Benefit = 'Dental' THEN T.Status END) AS DenStatus
      , MAX(CASE WHEN T.Benefit = 'Vision' THEN CONVERT(VARCHAR(10), CONVERT(DATE, T.StartDate, 106), 103) END) AS VisStart
      , MAX(CASE WHEN T.Benefit = 'Vision' THEN T.Status END) AS VisStatus
FROM    (
            SELECT  ROW_NUMBER() OVER (PARTITION BY EmpNo, Name, Benefit ORDER BY StartDate DESC) AS RowNo
                  , EmpNo
                  , Benefit
                  , Name
                  , StartDate
                  , Status
            FROM    #Test
        ) AS T
WHERE   T.RowNo = 1
GROUP BY T.EmpNo
       , T.Name
```

**Query using dynamic SQL if the number of Benefits is unknown.
Might not be very efficient:**

```
DECLARE @SQL NVARCHAR(MAX) = 'SELECT T.EmpNo, T.Name'
      , @Benefit VARCHAR(MAX);

SELECT @SQL += ', MAX(CASE WHEN T.Benefit = ''' + Benefit + ''' THEN CONVERT(VARCHAR(10), CONVERT(DATE, T.StartDate, 106), 103) END) AS ' + LEFT(Benefit, 3) + 'Start
    , MAX(CASE WHEN T.Benefit = ''' + Benefit + ''' THEN T.Status END) AS ' + LEFT(Benefit, 3) + 'Status'
FROM (SELECT DISTINCT Benefit FROM #Test) AS T

SET @SQL += ' FROM (
    SELECT ROW_NUMBER() OVER (PARTITION BY EmpNo, Name, Benefit ORDER BY StartDate DESC) AS RowNo,
        EmpNo, Benefit, Name, StartDate, Status
    FROM #Test
) AS T
WHERE T.RowNo = 1
GROUP BY T.EmpNo, T.Name'

EXEC sp_executesql @SQL
```

**Query (if there are no historical records):**

```
SELECT  T.EmpNo
      , T.Name
      , MAX(CASE WHEN T.Benefit = 'Medical' THEN CONVERT(VARCHAR(10), CONVERT(DATE, T.StartDate, 106), 103) END) AS MedStart
      , MAX(CASE WHEN T.Benefit = 'Medical' THEN T.Status END) AS MedStatus
      , MAX(CASE WHEN T.Benefit = 'Dental' THEN CONVERT(VARCHAR(10), CONVERT(DATE, T.StartDate, 106), 103) END) AS DenStart
      , MAX(CASE WHEN T.Benefit = 'Dental' THEN T.Status END) AS DenStatus
      , MAX(CASE WHEN T.Benefit = 'Vision' THEN CONVERT(VARCHAR(10), CONVERT(DATE, T.StartDate, 106), 103) END) AS VisStart
      , MAX(CASE WHEN T.Benefit = 'Vision' THEN T.Status END) AS VisStatus
FROM #Test AS T
GROUP BY T.EmpNo
       , T.Name
```

**Output:**

```
EmpNo  Name  MedStart    MedStatus  DenStart    DenStatus  VisStart    VisStatus
--------------------------------------------------------------------------------
1      ABC   01/01/2014  Active     02/02/2013  Inactive   03/03/2012  Active
2      XYZ   01/01/2014  Active     02/02/2008  Inactive   NULL        NULL
```
`PIVOT` solution on the `StartDate` field:

```
DECLARE @tb AS TABLE
(
    EmpNo INT
    ,Name NVARCHAR(25)
    ,Benefit NVARCHAR(25)
    ,StartDate DATE
    ,[Status] NVARCHAR(25)
);

INSERT INTO @tb VALUES (1, 'ABC', 'Medical', '01/01/2014', 'Active');
INSERT INTO @tb VALUES (1, 'ABC', 'Dental', '02/02/2013', 'Inactive');
INSERT INTO @tb VALUES (1, 'ABC', 'Vision', '03/03/2012', 'Active');
INSERT INTO @tb VALUES (2, 'XYZ', 'Medical', '01/01/2014', 'Active');
INSERT INTO @tb VALUES (2, 'XYZ', 'Dental', '02/02/2012', 'Inactive');

SELECT EmpNo
      ,Name
      ,MAX(MedicalStart) AS MedicalStart
      ,MAX(MedStatus) AS MedStatus
      ,MAX(DenStart) AS DenStart
      ,MAX(DenStatus) AS DenStatus
      ,MAX(VisionStart) AS VisionStart
      ,MAX(VisStatus) AS VisStatus
FROM (
    SELECT EmpNo
          ,Name
          ,[Medical] AS MedicalStart
          ,CASE WHEN [Medical] IS NOT NULL AND [Status] = 'Active' THEN 'Active'
                WHEN [Medical] IS NOT NULL AND [Status] = 'Inactive' THEN 'Inactive'
                ELSE NULL END AS MedStatus
          ,[Dental] AS DenStart
          ,CASE WHEN [Dental] IS NOT NULL AND [Status] = 'Active' THEN 'Active'
                WHEN [Dental] IS NOT NULL AND [Status] = 'Inactive' THEN 'Inactive'
                ELSE NULL END AS DenStatus
          ,[Vision] AS VisionStart
          ,CASE WHEN [Vision] IS NOT NULL AND [Status] = 'Active' THEN 'Active'
                WHEN [Vision] IS NOT NULL AND [Status] = 'Inactive' THEN 'Inactive'
                ELSE NULL END AS VisStatus
          ,[Status]
    FROM @tb
    PIVOT (
        MAX(StartDate)
        FOR Benefit IN ([Medical], [Dental], [Vision])
    ) AS pivotTableDate
) AS tb
GROUP BY EmpNo, Name;
```

You can check [this link 'PIVOT on two or more fields in SQL Server'](http://blogs.msdn.com/b/kenobonn/archive/2009/03/22/pivot-on-two-or-more-fields-in-sql-server.aspx) for information about a full PIVOT solution.
How to combine multiple rows of employee into single row in SQL Server
[ "", "sql", "sql-server", "pivot", "" ]
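The MAX(CASE ...) conditional-aggregation pivot from the accepted answer is portable beyond SQL Server; here is a sketch with Python's sqlite3 on the question's rows (the `benefits` table name is an assumption, and the date-formatting CONVERT calls are dropped since the dates are stored as text here). Benefits an employee never enrolled in come back as NULL (None in Python):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE benefits "
             "(EmpNo TEXT, Name TEXT, Benefit TEXT, StartDate TEXT, Status TEXT)")
conn.executemany("INSERT INTO benefits VALUES (?, ?, ?, ?, ?)", [
    ("0001", "ABC", "Medical", "01/01/2014", "Active"),
    ("0001", "ABC", "Dental",  "02/02/2013", "Inactive"),
    ("0001", "ABC", "Vision",  "03/03/2012", "Active"),
    ("0002", "XYZ", "Medical", "01/01/2014", "Active"),
    ("0002", "XYZ", "Dental",  "02/02/2008", "Inactive"),
])

# MAX(CASE ...) turns one row per benefit into one column pair per benefit;
# MAX ignores the NULLs produced by non-matching CASE branches.
rows = conn.execute("""
    SELECT EmpNo, Name,
           MAX(CASE WHEN Benefit = 'Medical' THEN StartDate END) AS MedStart,
           MAX(CASE WHEN Benefit = 'Medical' THEN Status    END) AS MedStatus,
           MAX(CASE WHEN Benefit = 'Dental'  THEN StartDate END) AS DenStart,
           MAX(CASE WHEN Benefit = 'Dental'  THEN Status    END) AS DenStatus,
           MAX(CASE WHEN Benefit = 'Vision'  THEN StartDate END) AS VisStart,
           MAX(CASE WHEN Benefit = 'Vision'  THEN Status    END) AS VisStatus
    FROM benefits
    GROUP BY EmpNo, Name
    ORDER BY EmpNo
""").fetchall()
print(rows[1])  # ('0002', 'XYZ', '01/01/2014', 'Active', '02/02/2008', 'Inactive', None, None)
```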
I am using this query to calculate the number of business days for a certain date range:

```
WITH cte AS
(
    SELECT [Date] AS WorkingDay,
           ROW_NUMBER() OVER (ORDER BY [Date] ASC) AS RN
    FROM DimDate
    WHERE IsHolidayUSA = 0
      AND IsWeekday = 1
)
SELECT DateStarted, DateCompleted, c2.RN - c1.RN AS CycleTime
FROM MyTable t
INNER JOIN cte c1 ON t.DateStarted = c1.WorkingDay
INNER JOIN cte c2 ON t.DateCompleted = c2.WorkingDay
```

This works fine if DateStarted and DateCompleted are both business days. If either of them is not a business day, the join finds no match and the result is null. So the idea is to apply the row number of the following business day to weekend/holiday dates. For example:

```
Date        RN
2015-02-23  1  -- Mon
2015-02-24  2  -- Tue
2015-02-25  3  -- Wed
2015-02-26  4  -- Thu
2015-02-27  5  -- Fri
2015-02-28  6  -- Sat (applied row number of next business day)
2015-03-01  6  -- Sun (applied row number of next business day)
2015-03-02  6  -- Mon
2015-03-03  7  -- Tue
2015-03-04  8  -- Wed
2015-03-05  9  -- Thu
```

EDIT: Extracted the ROW_NUMBER query and pointed to the part which needs to be handled:

```
SELECT [Date] AS WorkingDay,
       RN = CASE WHEN IsHolidayUSA = 0 AND IsWeekday = 1
                 THEN ROW_NUMBER() OVER (ORDER BY [Date] ASC)
                 ELSE 1 -- need to modify this one
            END
FROM DimDate
```
You still need to get the row_number() only on the working days, but the trick is to then join all dates to this working-day CTE and also look up the next working day for the non-working days. (confusing)...

```
with dn as
(
    select *,
        IsWorkingDay = cast(case when IsHolidayUSA = 0 AND IsWeekday = 1 then 1 else 0 end as bit)
    from DimDate
    where [Date] between '2/23/2015' and '3/5/2015'
),
wd as
(
    select [Date],
        WorkingDayNum = row_number() over (order by [Date] asc)
    from dn
    where IsWorkingDay = 1
),
d as
(
    select dn.[Date],
        [WorkingDayNum] = coalesce(wd.WorkingDayNum, n.WorkingDayNum)
    from dn
    left outer join wd on wd.[Date] = dn.[Date]
    outer apply
    (
        select top 1 wd.WorkingDayNum
        from wd
        where wd.[Date] > dn.[Date]
        order by wd.[Date]
    ) n
)
select * from d order by [Date]
```
You can use the `LEAD()` function to pull the `RN` value from subsequent rows, and rather than exclude dates based on the holiday/weekday fields, you can just conditionally apply the `ROW_NUMBER()` to them:

```
;WITH cte AS
(SELECT [Date] AS WorkingDay
      , CASE WHEN IsHolidayUSA <> 0 OR IsWeekday <> 1 THEN NULL
             ELSE ROW_NUMBER() OVER(PARTITION BY CASE WHEN IsHolidayUSA <> 0 OR IsWeekday <> 1 THEN 1 END
                                    ORDER BY [Date])
        END AS RN
 FROM DimDate
)
SELECT *
     , RN = COALESCE(RN, LEAD(RN,1) OVER(ORDER BY WorkingDay)
                       , LEAD(RN,2) OVER(ORDER BY WorkingDay))
FROM cte
ORDER BY WorkingDay
```

You could add more `LEAD()` functions to accommodate 3 or 4 day weekends if needed.

Here's a working example to demonstrate on non-existing tables:

```
;WITH cal AS
(SELECT CAST('2013-03-01' AS DATE) dt
 UNION ALL
 SELECT DATEADD(DAY,1,dt)
 FROM cal
 WHERE dt < '2013-03-31')
,RN AS
(SELECT *
      , CASE WHEN DATENAME(WEEKDAY,dt) IN ('Saturday','Sunday') THEN NULL
             ELSE ROW_NUMBER() OVER(PARTITION BY CASE WHEN DATENAME(WEEKDAY,dt) IN ('Saturday','Sunday') THEN 1 END
                                    ORDER BY dt)
        END AS RN
 FROM cal
)
SELECT *
     , RN = COALESCE(RN, LEAD(RN,1) OVER(ORDER BY dt)
                       , LEAD(RN,2) OVER(ORDER BY dt))
FROM RN
ORDER BY dt
```
SQL - Apply same ROW_NUMBER to non-matching rows
[ "", "sql", "row-number", "" ]
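The desired numbering (weekend/holiday dates borrowing the next business day's row number) can be sketched in plain Python; a generated Monday-Friday calendar stands in for DimDate, and holidays are omitted for brevity:

```python
from datetime import date, timedelta

# Calendar range taken from the question's sample output.
start, end = date(2015, 2, 23), date(2015, 3, 5)
days = [start + timedelta(d) for d in range((end - start).days + 1)]

rn, pending, numbered = 0, [], []
for d in days:
    if d.weekday() < 5:          # Mon-Fri: a working day gets the next number
        rn += 1
        for p in pending:        # queued weekend days take this day's number
            numbered.append((p, rn))
        pending = []
        numbered.append((d, rn))
    else:                        # Sat/Sun: wait for the next working day
        pending.append(d)

for d, n in numbered:
    print(d, n)                  # 2015-02-28 and 2015-03-01 both print 6
```

Note that if the range ended on a weekend, the trailing days left in `pending` would need a fallback (e.g. the last working day's number), which mirrors why the rejected answer chains several `LEAD()` calls inside `COALESCE`.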