Dataset columns: Prompt (string, length 10–31k), Chosen (string, 3–29.4k), Rejected (string, 3–51.1k), Title (string, 9–150), Tags (list, 3–7 items).
I have attendance in the following table called `Attendance` ![enter image description here](https://i.stack.imgur.com/GCpIl.jpg) `EID` is the employee ID and, in the shift column, `D` denotes a Day shift and `N` denotes a Night shift. Now I'm trying to get the following data for each employee. No. of Day shifts - count of `D`; No. of Night shifts - count of `N`; No. of Days worked - the number of days an employee has worked either shift or both shifts (even if an employee worked both the Day and Night shift on the same day, it is counted as one day). I can get all three pieces of information in three different results as follows... ``` WITH CTE (EID, in_time, shift) AS ( SELECT EID, in_time, shift FROM Attendance WHERE (in_time BETWEEN CONVERT(DATETIME, '2014-01-07 00:00:00', 102) AND CONVERT(DATETIME, '2014-07-31 00:00:00', 102)) AND PID = 'A002' ) SELECT EID, COUNT(*) AS DayTotal FROM CTE WHERE (shift = 'D') GROUP BY EID SELECT EID, COUNT(*) AS NightTotal FROM Attendance WHERE (shift = 'N') GROUP BY EID ; WITH CTE2 (EID, in_time, shift) AS ( SELECT EID, in_time, shift FROM Attendance WHERE (in_time BETWEEN CONVERT(DATETIME, '2014-01-07 00:00:00', 102) AND CONVERT(DATETIME, '2014-07-31 00:00:00', 102)) AND PID = 'A002' ) SELECT EID, COUNT ( DISTINCT CONVERT (DATE, in_time)) AS [Days] FROM CTE2 WHERE (shift = 'D' OR shift = 'N') GROUP BY EID ``` But I want to have this in a single result (table). So I tried the following query, but it's not giving the intended output. ``` WITH CTE (EID, in_time, shift) AS ( SELECT EID, in_time, shift FROM Attendance WHERE (in_time BETWEEN CONVERT(DATETIME, '2014-01-07 00:00:00', 102) AND CONVERT(DATETIME, '2014-07-31 00:00:00', 102)) AND PID = 'A002' ) SELECT EID, CASE WHEN Shift = 'D' THEN COUNT(Shift) END AS [Day], CASE WHEN Shift = 'N' THEN COUNT(Shift) END AS [Night], COUNT ( DISTINCT CONVERT (DATE, in_time)) AS [Days] FROM CTE GROUP BY EID, shift ``` Could you please let me know a way to do this?
The intended result ![enter image description here](https://i.stack.imgur.com/ZYsDX.jpg)
I think you can get what you want using conditional aggregation: ``` SELECT EID, sum(case when shift = 'd' then 1 else 0 end) as dayshifts, sum(case when shift = 'n' then 1 else 0 end) as nightshifts, count(*) as total FROM Attendance a WHERE (in_time BETWEEN CONVERT(DATETIME, '2014-01-07 00:00:00', 102) AND CONVERT(DATETIME, '2014-07-31 00:00:00', 102)) AND PID = 'A002' GROUP BY EID; ``` EDIT: If you want counts of distinct dates for the total, then use `count(distinct)`: ``` SELECT EID, sum(case when shift = 'd' then 1 else 0 end) as dayshifts, sum(case when shift = 'n' then 1 else 0 end) as nightshifts, count(distinct case when shift in ('d', 'n') then cast(in_time as date) end) as total FROM Attendance a WHERE (in_time BETWEEN CONVERT(DATETIME, '2014-01-07 00:00:00', 102) AND CONVERT(DATETIME, '2014-07-31 00:00:00', 102)) AND PID = 'A002' GROUP BY EID; ```
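The conditional-aggregation pattern above is portable across engines; here is a minimal, self-contained sketch in Python/SQLite (table and sample data invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Attendance (EID TEXT, in_time TEXT, shift TEXT);
INSERT INTO Attendance VALUES
  ('E1', '2014-07-01 07:00', 'D'),
  ('E1', '2014-07-01 19:30', 'N'),   -- same day, both shifts
  ('E1', '2014-07-02 07:00', 'D'),
  ('E2', '2014-07-01 19:30', 'N');
""")

rows = conn.execute("""
SELECT EID,
       SUM(CASE WHEN shift = 'D' THEN 1 ELSE 0 END) AS dayshifts,
       SUM(CASE WHEN shift = 'N' THEN 1 ELSE 0 END) AS nightshifts,
       COUNT(DISTINCT DATE(in_time))                AS days_worked
FROM Attendance
GROUP BY EID
ORDER BY EID
""").fetchall()
print(rows)  # E1 worked both shifts on 2014-07-01, so days_worked counts it once
```

Note how `COUNT(DISTINCT DATE(in_time))` collapses a double-shift day into one, which is exactly the asker's third requirement.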
@Chathuranga, since the Day and Night shifts of a day should be counted as one, please let me know if the below solution works for you. ``` DECLARE @Attendance TABLE (EID INT, PID CHAR(4), In_Time DATETIME, Out_Time DATETIME, Shift CHAR(1)) INSERT INTO @Attendance VALUES ('100', 'A001', '2014-07-01 07:00:00.000', '2014-07-01 19:30:00.000', 'D'), ('102', 'A001', '2014-07-01 19:30:00.000', '2014-07-02 07:00:00.000', 'N'), ('100', 'A001', '2014-07-01 19:30:00.000', '2014-07-02 07:00:00.000', 'N'), ('104', 'A001', '2014-07-02 07:00:00.000', '2014-07-02 19:30:00.000', 'D'), ('100', 'A001', '2014-07-03 19:30:00.000', '2014-07-04 07:00:00.000', 'N'), ('102', 'A001', '2014-07-03 19:30:00.000', '2014-07-04 07:00:00.000', 'N'), ('104', 'A001', '2014-07-03 07:00:00.000', '2014-07-03 19:30:15.000', 'D'), ('102', 'A001', '2014-07-04 07:00:00.000', '2014-07-04 19:30:00.000', 'D'), ('100', 'A001', '2014-07-04 07:00:00.000', '2014-07-04 19:30:10.000', 'D') SELECT EID, SUM(CASE WHEN Shift = 'D' THEN 1 ELSE 0 END) AS DayShift, SUM(CASE WHEN Shift = 'N' THEN 1 ELSE 0 END) AS NightShift, COUNT(DISTINCT CAST(In_Time AS DATE)) AS DayTotal FROM @Attendance GROUP BY EID ```
Shift manipulation in SQL to get counts
[ "sql", "sql-server", "ado.net", "case", "common-table-expression" ]
1. `select field from table where field = 'value'` 2. `select field from table where field in ('value')` The reason I'm asking is that the second version allows me to use the same syntax for null values, while in the first version I need to change the condition to `where field is null`...
When you compare a field to a null with `field_name = NULL`, you are comparing a known value to an unknown one. In SQL, `NULL` means "unknown", and any ordinary comparison involving an unknown value evaluates to `UNKNOWN` rather than `TRUE`, so the row is never returned — even when the value of the field actually is `NULL`. The `IS NULL` predicate exists for exactly this case: it tests for the null marker itself, and so always yields `TRUE` or `FALSE` depending on the actual value of the field. See the reference [here](http://en.wikipedia.org/wiki/Null_(SQL)) regarding the issue of `NULL` in computer science and [here](https://stackoverflow.com/questions/3777230/is-there-any-difference-between-is-null-and-null) for a question similar to yours. Now, for the `IN` clause (i.e. `IN (NULL)`): since `IN` is shorthand for a chain of `=` comparisons, it has the same problem — when I tried it with MS SQL and MySQL it returned nothing. See [MS SQL example](http://sqlfiddle.com/#!3/11b3b/1) and [MySQL example](http://sqlfiddle.com/#!2/11b3b/1).
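The three-valued-logic behaviour described above is easy to verify; a minimal Python/SQLite sketch (table and values invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (field TEXT);
INSERT INTO t VALUES ('value'), (NULL);
""")

# '=' and 'IN' against NULL evaluate to UNKNOWN, so no rows qualify;
# only the IS NULL predicate actually finds the null row.
eq  = conn.execute("SELECT COUNT(*) FROM t WHERE field = NULL").fetchone()[0]
inn = conn.execute("SELECT COUNT(*) FROM t WHERE field IN (NULL)").fetchone()[0]
isn = conn.execute("SELECT COUNT(*) FROM t WHERE field IS NULL").fetchone()[0]
print(eq, inn, isn)  # 0 0 1
```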
There is no difference in your example. The second, slightly longer form is not usually used for a single value; it is usually seen with multiple values, such as ``` select field from table where field in ('value1', 'value2') ```
is there any difference between the queries
[ "sql", "select", "database-performance" ]
I am inserting customer records into a table where, if a record with the same name already exists, I assign the same ID to the newly inserted record. Assume table T has these records: ``` ID | Name | Phone_Number | Date_Inserted 105| Sam | 111111 | 04/03/2014 106| Rita | 222222 | 04/03/2014 ``` And I'm inserting this from table A: ``` Name| Phone_Number Sam | 333333 ``` Then after insertion, table T should have: ``` ID | Name | Phone_Number | Date_Inserted 105| Sam | 111111 | 04/03/2014 106| Rita | 222222 | 04/03/2014 105| Sam | 333333 | 04/04/2014 ``` Without the ID-matching logic, the insert would look like: ``` INSERT INTO T SELECT CustID.nextval,Name,Phone_Number,SYSDATE FROM A; ``` I was thinking of using: ``` INSERT INTO T SELECT CASE WHEN NOT EXISTS(select null from T WHERE T.Name=A.Name) THEN CustID.nextVal ELSE (select ID from T where T.Name=A.Name) END, Name, Phone_Number, SYSDATE FROM A; ``` But I'm not sure if it'll work, and it seems redundant/bad for performance. If there's a preferred way to do this, please let me know.
The characterisation of anything as "bad" is subjective. As long as the results are correct, something is only "bad" if it takes too long or uses too many system resources. *You* define "long" and "too many". If something is returning the correct results, in an acceptable time, using an acceptable amount of system resources, then there is no need to change it. There are, however, a number of things that you can look out for (assuming that altering your data-model is not an acceptable solution): * You're going to want an index on `NAME, ID` as you're selecting on `NAME` and returning `ID`. * Your second correlated sub-query, `(select ID from T where T.Name=A.Name)`, is returning multiple rows, which is going to cause an error. You either need to limit the result set to a single row, or to utilise some aggregate function. It seems better to add an additional condition `where rownum < 2` to limit the results, as adding an aggregate will force Oracle to perform a range scan over every row that has that name whereas you only need to find whether it exists. * CASE claims that it performs short-circuit evaluation; [this isn't necessarily true when you get sequences involved.](https://stackoverflow.com/questions/20550512/case-and-coalesce-short-circuit-evaluation-works-with-sequences-in-pl-sql-but-no) * I don't think it will affect your INSERT statement but it might be worth changing your `DATE_INSERTED` column to have a default; it means that you don't need to add it to every query and you can't forget to do so: ``` alter table t modify date_inserted date default sysdate; ``` Putting these (pretty small) changes together your query might look like: ``` insert into t (id, name, phone_number) select coalesce( (select id from t where name = a.name and rownum < 2) , custid.nextval ) , name , phone_number from a ``` *Only you can tell whether this is acceptable or not.* I do something very similar - for one analytical database I have to maintain an old data-based primary key.
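The shape of that reuse-or-mint-an-ID INSERT can be exercised outside Oracle; a minimal Python/SQLite sketch, where `MAX(id)+1` stands in for `custid.nextval` (SQLite has no sequences) and `LIMIT 1` stands in for `rownum < 2`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (id INTEGER, name TEXT, phone TEXT);
INSERT INTO t VALUES (105, 'Sam', '111111'), (106, 'Rita', '222222');
""")

def insert_customer(name, phone):
    # Reuse the existing ID for this name if there is one, else mint a new one.
    conn.execute("""
        INSERT INTO t (id, name, phone)
        SELECT COALESCE((SELECT id FROM t WHERE name = ? LIMIT 1),
                        (SELECT MAX(id) + 1 FROM t)),
               ?, ?
    """, (name, name, phone))

insert_customer('Sam', '333333')   # existing name -> reuses id 105
insert_customer('Bob', '444444')   # new name -> gets id 107
final = conn.execute("SELECT id, name FROM t ORDER BY rowid").fetchall()
print(final)
```

The `MAX(id)+1` trick is only safe single-threaded; a real system would use a sequence or identity column as the answer above does.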
The only way I could get the thing to work was running it in a background job every minute, using correlated sub-queries and explicitly adding a rownum restriction on the number of potential rows. I know that it's "better" to maintain this in the INSERT statement but the execution time was unacceptable. I know that the code can only deal with at most 10,000 rows a minute but it doesn't matter as I only add at most 5,000 rows a minute to the table. These numbers might change in the future and as the table grows the execution plan might change as well - when it does I'll deal with the problem then rather than attempting to solve a problem that doesn't exist. ### tl;dr Every bit of code is okay until it isn't. While knowledge and experience can help code to remain okay for longer [don't prematurely optimise if there's no need to optimise](https://softwareengineering.stackexchange.com/questions/80084/is-premature-optimization-really-the-root-of-all-evil).
If your schema is not set in stone, I would perhaps reconfigure it so that there is a "person" table and a separate "person phone number" table. With that sort of set up, you can associate multiple phone numbers with one person, and you won't be stomping on IDs, or creating confusing secondary ID columns that aren't primary keys.
Query to assign the same ID to row being inserted if it it already exists in the table
[ "sql", "oracle", "plsql" ]
How can I subtract the values in two variables? Here is what I have done so far: ``` DECLARE @A NVARCHAR(MAX) DECLARE @B NVARCHAR(MAX) SET @A='select count(abc) from x' EXEC (@A) PRINT @A SET @B= ' select count(xyz) from y' EXEC (@B) PRINT @B DECLARE @C INT SET @C = CAST(@A AS INT) - CAST(@B AS INT) EXEC (@C) PRINT @C ``` I am getting a conversion error: > Msg 245, Level 16, State 1, Line 20 > Conversion failed when converting the nvarchar value
Of course you get a conversion error, because the variable is a string. Even if the string is a query, if you execute it you are not storing the result back to your variable. What you need to do is something along the lines of: ``` DECLARE @A INT, @B INT, @C INT; SELECT @A = COUNT(abc) FROM x; SELECT @B = COUNT(xyz) FROM y; SET @C = @A - @B; ```
In your code, `@A` and `@B` will hold the sql statements and not their results. If you want to run a dynamic sql statement and get some output from it, you need to check out [sp\_executesql](http://msdn.microsoft.com/en-us/library/ms188001.aspx), where you may define output parameters. A proper implementation of your code would be the following: ``` DECLARE @A NVARCHAR(MAX) DECLARE @B NVARCHAR(MAX) DECLARE @SQLString nvarchar(500); DECLARE @ParmDefinition nvarchar(500); SET @SQLString = N'select @test = count(abc) from x' SET @ParmDefinition = N'@test int OUTPUT'; EXECUTE sp_executesql @SQLString, @ParmDefinition, @test=@A OUTPUT; SET @SQLString = N'select @test = count(xyz) from y' EXECUTE sp_executesql @SQLString, @ParmDefinition, @test=@B OUTPUT; DECLARE @C INT SET @C = CAST(@A AS INT) - CAST(@B AS INT) PRINT @C ``` Another note: declaring `@A` and `@B` as `NVARCHAR(MAX)` is overkill. It would be better to declare them as `INT` and avoid the additional cast: ``` DECLARE @A INT DECLARE @B INT DECLARE @SQLString nvarchar(500); DECLARE @ParmDefinition nvarchar(500); SET @SQLString = N'select @test = count(abc) from x' SET @ParmDefinition = N'@test int OUTPUT'; EXECUTE sp_executesql @SQLString, @ParmDefinition, @test=@A OUTPUT; SET @SQLString = N'select @test = count(xyz) from y' EXECUTE sp_executesql @SQLString, @ParmDefinition, @test=@B OUTPUT; DECLARE @C INT SET @C = @A - @B ```
subtract variables in SQL
[ "sql", "sql-server", "sql-server-2008" ]
I have a table that has a column defined as a varray of a defined type. The production table is way more complicated than the following example. I am able to select the single columns within the type of the varray, but I would like to update the table with a simple update statement (rather than going through a pl/sql routine). If this is not possible (and I must go through a pl/sql routine), what is a smart and easy way to code this? ``` update (select l.id, t.* from my_object_table l, table(l.object_list) t) set value2 = 'obj 4 upd' where value1 = 10 ORA-01733: virtual column not allowed here ``` Here is the full example of types etc. ``` create or replace type my_object as object( value1 number, value2 varchar2(10), value3 number); create or replace type my_object_varray as varray(100000000) of my_object; create table my_object_table (id number not null, object_list my_object_varray); insert into my_object_table values (1, my_object_varray ( my_object(1,'object 1',10), my_object(2,'object 2',20), my_object(3,'object 3',30) ) ); insert into my_object_table values (2, my_object_varray ( my_object(10,'object 4',10), my_object(20,'object 5',20), my_object(30,'object 6',30) ) ); select l.id, t.* from my_object_table l, table(l.object_list) t; Type created. Type created. Table created. 1 row created. 1 row created. ID VALUE1 VALUE2 VALUE3 ---------- ---------- ---------- ---------- 1 1 object 1 10 1 2 object 2 20 1 3 object 3 30 2 10 object 4 10 2 20 object 5 20 2 30 object 6 30 6 rows selected. ```
I don't believe you can update a single object's value within a `varray` from plain SQL, as there is no way to reference the `varray` index. (The link Alessandro Rossi posted seems to support this, though not necessarily for that reason). I'd be interested to be proven wrong though, of course. I know you aren't keen on a PL/SQL approach but if you do have to then you could do this to just update that value: ``` declare l_object_list my_object_varray; cursor c is select l.id, l.object_list, t.* from my_object_table l, table(l.object_list) t where t.value1 = 10 for update of l.object_list; begin for r in c loop l_object_list := r.object_list; for i in 1..l_object_list.count loop if l_object_list(i).value1 = 10 then l_object_list(i).value2 := 'obj 4 upd'; end if; end loop; update my_object_table set object_list = l_object_list where current of c; end loop; end; / anonymous block completed select l.id, t.* from my_object_table l, table(l.object_list) t; ID VALUE1 VALUE2 VALUE3 ---------- ---------- ---------- ---------- 1 1 object 1 10 1 2 object 2 20 1 3 object 3 30 2 10 obj 4 upd 10 2 20 object 5 20 2 30 object 6 30 ``` [SQL Fiddle](http://sqlfiddle.com/#!4/51f29/1). 
If you're updating other things as well then you might prefer a function that returns the object list with the relevant value updated: ``` create or replace function get_updated_varray(p_object_list my_object_varray, p_value1 number, p_new_value2 varchar2) return my_object_varray as l_object_list my_object_varray; begin l_object_list := p_object_list; for i in 1..l_object_list.count loop if l_object_list(i).value1 = p_value1 then l_object_list(i).value2 := p_new_value2; end if; end loop; return l_object_list; end; / ``` Then call that as part of an update; but you still can't update your in-line view directly: ``` update ( select l.id, l.object_list from my_object_table l, table(l.object_list) t where t.value1 = 10 ) set object_list = get_updated_varray(object_list, 10, 'obj 4 upd'); SQL Error: ORA-01779: cannot modify a column which maps to a non key-preserved table ``` You need to update based on the relevant ID(s): ``` update my_object_table set object_list = get_updated_varray(object_list, 10, 'obj 4 upd') where id in ( select l.id from my_object_table l, table(l.object_list) t where t.value1 = 10 ); 1 rows updated. select l.id, t.* from my_object_table l, table(l.object_list) t; ID VALUE1 VALUE2 VALUE3 ---------- ---------- ---------- ---------- 1 1 object 1 10 1 2 object 2 20 1 3 object 3 30 2 10 obj 4 upd 10 2 20 object 5 20 2 30 object 6 30 ``` [SQL Fiddle](http://sqlfiddle.com/#!4/9c1c23/1). 
If you wanted to hide the complexity even further you could create a view with an instead-of trigger that calls the function: ``` create view my_object_view as select l.id, t.* from my_object_table l, table(l.object_list) t / create or replace trigger my_object_view_trigger instead of update on my_object_view begin update my_object_table set object_list = get_updated_varray(object_list, :old.value1, :new.value2) where id = :old.id; end; / ``` Then the update is pretty much what you wanted, superficially at least: ``` update my_object_view set value2 = 'obj 4 upd' where value1 = 10; 1 rows updated. select * from my_object_view; ID VALUE1 VALUE2 VALUE3 ---------- ---------- ---------- ---------- 1 1 object 1 10 1 2 object 2 20 1 3 object 3 30 2 10 obj 4 upd 10 2 20 object 5 20 2 30 object 6 30 ``` [SQL Fiddle](http://sqlfiddle.com/#!4/55f80/1).
As the Oracle documentation states [here](http://docs.oracle.com/database/121/ADOBJ/adobjcol.htm#ADOBJ7278) > While nested tables can also be changed in a piecewise fashion, > varrays cannot. There is no way to modify a VARRAY in piecewise fashion. The only things you could do are: * Convert the data type of your field into a NESTED TABLE (CREATE TYPE xxx AS TABLE OF yyy) * Fetch the varray of the row you want to change, modify it in your client language, then update the row to set the modified value on it.
How to update a varray type within a table with a simple update statement?
[ "sql", "oracle", "plsql" ]
I have a very weird scenario which I am sure there is an explanation for. I am trying to calculate something and here are the values: ``` SELECT 145.28/63 *(8 * 100) / 100 -- returns: 18.4482480000 ``` The problem is that this is not correct. If I do the calculation with Windows Calculator then it returns a more precise value: 145,28/63 \*(8 - 0 \* 100) / 100 = 18,44825396825397 I had to change a lot to get it to work in SQL Server: ``` SELECT 145.28 * 100 * 8 / (100 * 63) --RETURNS: 18.4482539682539 ``` How come I need to change the numbers in SQL Server but not in Windows? In Navision programming it returns the correct value, which is 18.4482539682539682
The default datatype that is automatically assigned simply doesn't hold the precision you want. This can easily be fixed by explicitly casting your first number to a more precise datatype like this: ``` SELECT CAST(145.28 AS decimal(38,35))/63 *(8 * 100) / 100 --returns 18.448253968253968253968253 ``` From there on out SQL Server understands not to downcast it, so the result becomes 18.448253968253968253968253. The syntax for decimal is decimal(precision, scale): the total number of digits, of which scale digits fall after the decimal point. Here is a [link](http://msdn.microsoft.com/en-us/library/ms187752.aspx) to the docs. If you want to limit the decimals you can use ROUND like this: ``` SELECT ROUND(CAST(145.28 AS float)/63 *(8 * 100) / 100,12) ``` Per the linked docu from MS: **decimal [ (p[ ,s] )] and numeric[ (p[ ,s] )]** Fixed precision and scale numbers. When maximum precision is used, valid values are from - 10^38 +1 through 10^38 - 1. The ISO synonyms for decimal are dec and dec(p, s). numeric is functionally equivalent to decimal.
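The effect of intermediate precision can be reproduced with Python's `decimal` module (an analogy only — SQL Server's exact precision/scale inference rules differ, see the linked docs):

```python
from decimal import Decimal, localcontext

def calc():
    # Same expression as the SQL: 145.28 / 63 * (8 * 100) / 100
    return Decimal("145.28") / 63 * (8 * 100) / 100

with localcontext() as ctx:
    ctx.prec = 6    # cramped precision: each intermediate result is rounded
    low = calc()

with localcontext() as ctx:
    ctx.prec = 28   # roomy precision: the repeating digits survive
    high = calc()

print(low)   # digits are lost in the first division and never come back
print(high)  # 18.448253968...
```

This is the same phenomenon the answer fixes with the `CAST` to `decimal(38,35)`: widening the operand before the division keeps the intermediate quotient from being truncated.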
The result's precision is calculated [based on some factors](http://msdn.microsoft.com/en-us/library/ms190476.aspx) that may give you a scale lower than what you expect. Just try this one: ``` SELECT CONVERT(DECIMAL(38, 35), 145.28)/63 *(8 * 100) ``` which will give you ``` 1844.825396825396825396825397 ```
SQL Server calculation gives unexpected results
[ "sql", "sql-server" ]
I am trying to convert the following query: ``` select * from employees where emp_id not in (select distinct emp_id from managers); ``` into a form where I represent the subquery as a join. I tried doing: ``` select * from employees a, (select distinct emp_id from managers) b where a.emp_id!=b.emp_id; ``` I also tried: ``` select * from employees a, (select distinct emp_id from managers) b where a.emp_id not in b.emp_id; ``` But it does not give the same result. I have tried the 'INNER JOIN' syntax as well, but to no avail. I have become frustrated with this seemingly simple problem. Any help would be appreciated.
Try this: ``` select e.* from employees e left join managers m on e.emp_id = m.emp_id where m.emp_id is null ``` This will join the two tables. Then we discard all rows where we found a matching manager and are left with employees who aren't managers.
Assume an employee data set of ``` Emp_ID 1 2 3 4 5 6 7 ``` Assume a manager data set of ``` Emp_ID 1 2 3 4 5 8 9 select * from employees where emp_id not in (select distinct emp_id from managers); ``` The above isn't joining tables, so no Cartesian product is generated... you just have 7 records you're looking at... The above would result in 6 and 7. Why? Only 6 and 7 from the employee data aren't in the managers table. 8 and 9 in managers are ignored, as you're only returning data from employees. ``` select * from employees a, (select distinct emp_id from managers) b where a.emp_id!=b.emp_id; ``` The above didn't work because a Cartesian product is generated... all of employees to all of managers (assuming 7 records in each table, 7\*7=49), so instead of just evaluating the employee data as in the first query, you now also evaluate all managers against all employees, so select \* results in ``` 1,1 1,2 1,3 1,4 1,5 1,8 1,9 2,1 2,2... ``` minus the rows the where clause removes... so 7\*7-7, or 42, and while this may be the answer to life, the universe and everything, it's not what you wanted. You also tried: ``` select * from employees a, (select distinct emp_id from managers) b where a.emp_id not in b.emp_id; ``` Again a Cartesian... all of employees to all of managers. So this is why a left join works: ``` SELECT e.* FROM employees e LEFT OUTER JOIN managers m on e.emp_id = m.emp_id WHERE m.emp_id is null ``` This says join on ID first... so don't generate a Cartesian, but actually join on a value to limit the results. But since it's a LEFT join, return EVERYTHING from the LEFT table (employees) and only those that match from managers. So in our example, joining on e.emp\_id = m.emp\_id would return ``` 1,1 2,2 3,3 4,4 5,5 6,NULL 7,NULL ``` and now the where clause, so ``` 6,Null 7,NULL are retained... ``` Older, pre-ANSI syntax for left joins would have been \*= in the where clause... 
``` select * from employees a, managers b where a.emp_id *= b.emp_id --I never remember if the * is the LEFT so it may be =* and b.emp_ID is null; ``` But I find this notation harder to read as the join can get mixed in with the other limiting criteria...
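The worked example above runs as-is in any SQL engine; a minimal Python/SQLite sketch using the same sample IDs:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employees (emp_id INTEGER);
CREATE TABLE managers  (emp_id INTEGER);
INSERT INTO employees VALUES (1),(2),(3),(4),(5),(6),(7);
INSERT INTO managers  VALUES (1),(2),(3),(4),(5),(8),(9);
""")

# Anti-join: keep employees for whom the LEFT JOIN found no manager row.
rows = conn.execute("""
SELECT e.emp_id
FROM employees e
LEFT OUTER JOIN managers m ON e.emp_id = m.emp_id
WHERE m.emp_id IS NULL
ORDER BY e.emp_id
""").fetchall()
non_managers = [r[0] for r in rows]
print(non_managers)  # [6, 7]
```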
Representing 'not in' subquery as join
[ "sql", "oracle", "join" ]
I am using a database schema with the following column ``` TRAFFIC_DIRECTION tinyint The direction of traffic. Enum ( unknown = 0; inbound = 1; outbound = 2) ``` When I run a query, TRAFFIC\_DIRECTION displays as ``` TRAFFIC_DIRECTION 1 2 2 1 1 1 2 2 2 1 1 1 1 1 ``` How do I make it so that, instead of outputting 1 it outputs "inbound", and instead of 2, it outputs "outbound"?
Some RDBMSs support an ENUM type, but all of them support a `CASE` statement. ``` SELECT id, CASE TRAFFIC_DIRECTION WHEN 0 THEN 'Unknown' WHEN 1 THEN 'Inbound' WHEN 2 THEN 'Outbound' ELSE 'ERROR' END FROM Table1 ```
``` select case when traffic_direction = 1 then 'inbound' when traffic_direction = 2 then 'outbound' else 'unknown' end as direction from the_table; ```
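Both answers map the enum with `CASE`; a quick Python/SQLite check of the simple-`CASE` form (sample values invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE traffic (traffic_direction INTEGER)")
conn.executemany("INSERT INTO traffic VALUES (?)", [(1,), (2,), (0,)])

rows = conn.execute("""
SELECT CASE traffic_direction
         WHEN 0 THEN 'unknown'
         WHEN 1 THEN 'inbound'
         WHEN 2 THEN 'outbound'
         ELSE 'ERROR'
       END
FROM traffic
ORDER BY rowid
""").fetchall()
labels = [r[0] for r in rows]
print(labels)  # ['inbound', 'outbound', 'unknown']
```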
Format SQL output where value is either 0, 1, or 2
[ "sql" ]
I have a table with posts in it. Website visitors can upvote or downvote such a post. I want to order a certain sql query by the score of the post, but my `posts` table doesn't have a `score` column - I keep the upvotes and downvotes in a different `votes` table, because that tells me who voted on what. I could add a `score` column to my `posts` table, and update it every time someone votes on a post, but I'd rather not do this, as the score is something I can work out by subtracting the downvotes from the upvotes anyway. Do you have any suggestions? Or should I just go ahead and add a `score` column to my table? **Edit** My `posts` table has a `post_id` column (among other irrelevant columns) and my `votes` table has columns `post_id`, `user_id` and `positive` (the latter is a BOOLEAN, being 1 when the vote is an upvote and 0 when the vote is a downvote). I can easily determine the score of a post 'by hand', by first querying the number of upvotes of that post, then the number of downvotes, and calculating their difference. However, I would like to query my `posts` table and order by the score of that post, so I want to know how/if I can query the `votes` table in the `ORDER BY` clause while querying the `posts` table.
No, you do not have to create a score column. You can order by the calculated score, as below: Since you do have the upvotes and downvotes in a different table, you need to join, as Tim Schmelter has explained. ``` SELECT p.* FROM Post p INNER JOIN Votes v ON p.PostID = v.PostID ORDER BY (v.upvotes - v.downvotes); ``` If you want to get the query to perform better, you could add a function-based index for `(v.upvotes - v.downvotes)`. **EDIT**: Based on the updated information about the `posts` and the `votes` table, the following query can be used. The score is calculated within an inline view using a `CASE` statement. Then, this inline view is joined with the posts table, ordering the rows by the score. Note that an `INNER JOIN` is used, so only posts that have votes would be listed. To list all posts, a `LEFT JOIN` could be used instead. ``` SELECT p.* FROM posts p INNER JOIN ( SELECT post_id, SUM ( CASE WHEN positive = 0 THEN -1 ELSE 1 END ) score FROM votes v GROUP BY post_id ) scores ON p.post_id = scores.post_id ORDER BY scores.score; ```
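A runnable sketch of the accepted approach (Python/SQLite; schema per the question's edit, vote data invented, ordered highest-scoring first):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE posts (post_id INTEGER);
CREATE TABLE votes (post_id INTEGER, user_id INTEGER, positive INTEGER);
INSERT INTO posts VALUES (1), (2), (3);
INSERT INTO votes VALUES
  (1, 10, 1), (1, 11, 1), (1, 12, 0),   -- post 1: score +1
  (2, 10, 0), (2, 11, 0),               -- post 2: score -2
  (3, 10, 1), (3, 11, 1), (3, 12, 1);   -- post 3: score +3
""")

rows = conn.execute("""
SELECT p.post_id, scores.score
FROM posts p
INNER JOIN (
    SELECT post_id,
           SUM(CASE WHEN positive = 0 THEN -1 ELSE 1 END) AS score
    FROM votes
    GROUP BY post_id
) scores ON p.post_id = scores.post_id
ORDER BY scores.score DESC
""").fetchall()
print(rows)  # [(3, 3), (1, 1), (2, -2)]
```

As the answer notes, the `INNER JOIN` drops posts with no votes at all; swap it for a `LEFT JOIN` (treating a missing score as 0) to keep them.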
You have to link both tables via a `JOIN`. Presuming that the `Score` table has a column `PostID`: ``` SELECT p.*, Score = s.Upvotes - s.DownVotes FROM Post p INNER JOIN Score s ON p.PostID = s.PostID ORDER BY Score ```
SQL ORDER BY something else than one of the table's columns
[ "sql" ]
My event table has a starteventdate and an endevent date. ``` Id EventName starteventdate endevent --------------------------------------- 1 a 7/6/2014 8/6/2014 2 b 9/6/2014 10/6/2014 3 c 10/6/2014 15/6/2014 ``` My search screen has a fromdate and a todate, so I want to search for the events that are active between the two dates. > If I'm searching events between 7/6/2014-10/6/2014, the result --> 1,2,3. > > If I'm searching events between 10/6/2014-12/6/2014, the result --> 2,3. > > If I'm searching events between 8/6/2013-20/6/2014, the result --> 1,2,3 I tried this query, but I'm only able to search within one date like: ``` SELECT e.EventsID,e.EventDesc AS 'Event Description' FROM dEvents e WHERE convert(datetime,convert(varchar(10),StartEvent,101),101) BETWEEN convert(datetime,convert(varchar(10),@FromDate,101),101) AND convert(datetime,convert(varchar(10),@ToDate,101),101). ``` I want only the active events which fall between the search dates. Please help me sort this out.
I would do something like this assuming you have `DATETIME` types to work with, instead of strings. You should be able to put the `CONVERT`s back in though, if necessary. ``` SELECT e.EventsID, e.EventDesc AS 'Event Description' FROM dEvents e WHERE EndEvent >= @FromDate AND StartEvent <= @ToDate ```
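The predicate in this answer is the standard interval-overlap test: two ranges overlap iff each one starts no later than the other ends. A Python/SQLite sketch with the question's events, using ISO dates to sidestep the `CONVERT` gymnastics (table name per the question; storage format is an assumption):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dEvents (EventsID INTEGER, StartEvent TEXT, EndEvent TEXT);
INSERT INTO dEvents VALUES
  (1, '2014-06-07', '2014-06-08'),
  (2, '2014-06-09', '2014-06-10'),
  (3, '2014-06-10', '2014-06-15');
""")

def active_events(frm, to):
    # Overlap test: the event ends on/after the window start
    # and starts on/before the window end.
    rows = conn.execute("""
        SELECT EventsID FROM dEvents
        WHERE EndEvent >= ? AND StartEvent <= ?
        ORDER BY EventsID
    """, (frm, to)).fetchall()
    return [r[0] for r in rows]

print(active_events('2014-06-07', '2014-06-10'))  # [1, 2, 3]
print(active_events('2014-06-10', '2014-06-12'))  # [2, 3]
```

Both searches match the expected results listed in the question.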
Please try this. Observation: you are storing dd/mm/yyyy, so use style 103 for conversion. **Sample Data:** ``` IF OBJECT_ID(N'dEvents')>0 BEGIN DROP TABLE dEvents END CREATE TABLE dEvents (Id INT,EventName VARCHAR(10),StartEvent VARCHAR(20),endevent VARCHAR(20)) INSERT INTO dEvents VALUES ('1','a','7/6/2014','8/6/2014'), ('2','b','9/6/2014','10/6/2014'), ('3','c','10/6/2014','15/6/2014') ``` **Query:** ``` DECLARE @FromDate VARCHAR(20) = '7/6/2014' DECLARE @ToDate VARCHAR(20) = '10/6/2014' SELECT e.ID,e.EventName AS 'Event Description' FROM dEvents e WHERE CONVERT(DATE,StartEvent,103) BETWEEN CONVERT(DATE,@FromDate,103) AND CONVERT(DATE,@ToDate,103) OR CONVERT(DATE,endevent,103) BETWEEN CONVERT(DATE,@FromDate,103) AND CONVERT(DATE,@ToDate,103) ``` **Cleanup:** ``` IF OBJECT_ID(N'dEvents')>0 BEGIN DROP TABLE dEvents END ```
How to write search query in sql server
[ "sql", "sql-server" ]
I have a table similar to the following in my database ``` +----+----+----+---------------------+ | id | a | b | date_created | +----+----+----+---------------------+ | 1 | 22 | 33 | 2014-07-31 14:38:17 | | 2 | 11 | 9 | 2014-07-30 14:40:19 | | 3 | 8 | 4 | 2014-07-29 14:40:34 | +----+----+----+---------------------+ ``` I'm trying to write a query that subtracts `sum(b)` from each `a`. However, the values of `b` included in `sum(b)` should be only those that are earlier than (or the same time as) the `a` they are being subtracted from. In other words, the results returned by the query should be those shown below ``` 22 - (33 + 9 + 4) 11 - (9 + 4) 8 - (4) ``` is it possible to calculate this in a single query?
``` SELECT x.* , x.a - SUM(y.b) FROM my_table x JOIN my_table y ON y.date_created <= x.date_created GROUP BY x.id; ```
``` select id, a, a - (select sum(b) from My_TABLE T2 where T2.date_created <= T1.date_created) from MY_TABLE T1; ```
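Both answers compute, for each row, the sum of `b` over all rows dated at or before it; a Python/SQLite check of the self-join form with the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE my_table (id INTEGER, a INTEGER, b INTEGER, date_created TEXT);
INSERT INTO my_table VALUES
  (1, 22, 33, '2014-07-31 14:38:17'),
  (2, 11,  9, '2014-07-30 14:40:19'),
  (3,  8,  4, '2014-07-29 14:40:34');
""")

# Each row x is paired with every row y dated at or before it,
# then SUM(y.b) is subtracted from x.a.
rows = conn.execute("""
SELECT x.id, x.a - SUM(y.b) AS result
FROM my_table x
JOIN my_table y ON y.date_created <= x.date_created
GROUP BY x.id
ORDER BY x.id
""").fetchall()
print(rows)  # 22-(33+9+4), 11-(9+4), 8-(4)
```

This matches the arithmetic laid out in the question: -24, -2 and 4.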
query that subtracts sum of one column from another
[ "mysql", "sql" ]
I have scoured stack overflow for this with no success. I'm working on converting some access views, stored procedures, etc to SQL Server 2008. [I am not very well versed in Access] This is one statement that has me stumped: ``` IIf([Column1]=0,[Column2],[Column1]) ``` Using the `SELECT CASE` statement, I can get the above statement suitably converted: ``` select case when TABLE1.[COLUMN1]=0 then TABLE1.[COLUMN2] else TABLE1.[COLUMN1] end as SomeColumn ``` ...but this is also present in the GROUP BY clause. That makes using `SELECT CASE` difficult since I can't have a subquery in `GROUP BY`. Thoughts?
There is no reason you can't put a CASE expression into the GROUP BY clause - ``` select case when TABLE1.[COLUMN1] = 0 then TABLE1.[COLUMN2] else TABLE1.[COLUMN1] end as SomeColumn, another_col, another_col2, sum(a_dif_col) as sum_a_col from TABLE1 group by case when TABLE1.[COLUMN1] = 0 then TABLE1.[COLUMN2] else TABLE1.[COLUMN1] end, another_col, another_col2 ``` The above is perfectly valid syntax.
You can use a subquery: ``` SELECT data.SomeColumn FROM ( select case when TABLE1.[COLUMN1]=0 then TABLE1.[COLUMN2] else TABLE1.[COLUMN1] end as SomeColumn from TABLE1 ) data GROUP BY data.SomeColumn ```
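A quick Python/SQLite check that grouping by the same `CASE` expression behaves as described (hypothetical data; row `(0, 5)` falls back to the second column):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (column1 INTEGER, column2 INTEGER)")
conn.executemany("INSERT INTO table1 VALUES (?, ?)",
                 [(0, 5), (3, 9), (3, 2)])

rows = conn.execute("""
SELECT CASE WHEN column1 = 0 THEN column2 ELSE column1 END AS SomeColumn,
       COUNT(*)
FROM table1
GROUP BY CASE WHEN column1 = 0 THEN column2 ELSE column1 END
ORDER BY SomeColumn
""").fetchall()
print(rows)  # the two column1=3 rows collapse into one group
```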
What is the SQL Server equivalent of the following Access statement
[ "sql", "sql-server", "ms-access" ]
I have a dynamic SQL query which returns rows like below with string values & numeric values. ``` EMP col1 col2 col3 col4 col5 ---------------------------- A1 4 4 3 3 3 A2 4 2 5 3 3 A3 sd 3 3 1 sd A4 3 4 3 3 3 ``` Now I need a new column which sums col1 to col5 and creates a total sum column where it should ignore the string values as in row 3. There are no NULL values How could I achieve this? Using `ISNUMERIC` might be the solution, but I'm not sure how to use it in such a scenario.
You can use a [CASE](http://msdn.microsoft.com/en-us/library/ms181765(v=sql.105).aspx) Expression to determine whether the value is a number. If it is a number then either cast the value to an INT or DECIMAL data type, otherwise use 0 so it doesn't affect the sum. ``` SELECT CASE WHEN ISNUMERIC(col1) = 1 THEN CAST(col1 as INT) ELSE 0 END + CASE WHEN ISNUMERIC(col2) = 1 THEN CAST(col2 as INT) ELSE 0 END + CASE WHEN ISNUMERIC(col3) = 1 THEN CAST(col3 as INT) ELSE 0 END + CASE WHEN ISNUMERIC(col4) = 1 THEN CAST(col4 as INT) ELSE 0 END + CASE WHEN ISNUMERIC(col5) = 1 THEN CAST(col5 as INT) ELSE 0 END as SumValue FROM MyTable ```
If you're on SQL Server 2012, [TRY\_CONVERT](http://msdn.microsoft.com/en-us/library/hh230993.aspx) avoids [pitfalls commonly encountered with ISNUMERIC](http://classicasp.aspfaq.com/general/what-is-wrong-with-isnumeric.html): ``` SELECT col1, col2, col3, col4, col5, ISNULL(TRY_CONVERT(int, col1), 0) + ISNULL(TRY_CONVERT(int, col2), 0) + ISNULL(TRY_CONVERT(int, col3), 0) + ISNULL(TRY_CONVERT(int, col4), 0) + ISNULL(TRY_CONVERT(int, col5), 0) AS total FROM Employee ``` [SQLFiddle](http://www.sqlfiddle.com/#!6/91fbf/6)
SQL Server Sum rows with string value
[ "sql", "sql-server", "t-sql", "pivot" ]
I'm drawing a blank on the best way to write this SQL. For simplicity's sake I'm only interested in one column. Table A column = [Supplier]. Table B column = [AuthorizedSuppliers]. Table A contains many rows with the same value in [Supplier]. ``` Table A ------------------------------ ID |Supplier | Company | ---|--------------|----------| 1 | Warehouse 1 | Company 1| 2 | Warehouse 1 | Company 2| 3 | Warehouse 1 | Company 3| 4 | Warehouse 2 | Company 4| 5 | Warehouse 2 | Company 5| 6 | Warehouse 3 | Company 6| 7 | Warehouse 3 | Company 7| 8 | Warehouse 3 | Company 8| ``` Table B contains a single row per authorized [Supplier] value from Table A. ``` Table B ------------------------------ ID |AuthorizedSupplier | ---|-------------------------| 1 | Warehouse 1 | 2 | Warehouse 2 | ``` I need to return the DISTINCT values of [Supplier] with an additional column, let's call it [Authorized], that is marked as true for those values that were in Table B. So the result set of the above sample should look like this ``` Result Set ------------------------------ Supplier |Authorized | ------------|----------------| Warehouse 1 | True | Warehouse 2 | True | Warehouse 3 | False | ```
Use `Distinct` and `Left Outer Join`: ``` SELECT DISTINCT Supplier, Authorized = CASE WHEN B.AuthorizedSupplier IS NULL THEN 'False' ELSE 'True' END FROM TableA A LEFT OUTER JOIN TableB B ON A.Supplier = B.AuthorizedSupplier ``` [**SQL-Fiddle Demo**](http://sqlfiddle.com/#!3/80d63/6/0)
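As a runnable cross-check of the LEFT JOIN + CASE pattern, here is the question's sample data replayed through Python's sqlite3 (a stand-in for SQL Server; note that a matched row must map to 'True' and an unmatched one to 'False'):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TableA (ID INTEGER, Supplier TEXT, Company TEXT);
CREATE TABLE TableB (ID INTEGER, AuthorizedSupplier TEXT);
INSERT INTO TableA VALUES
 (1,'Warehouse 1','Company 1'),(2,'Warehouse 1','Company 2'),
 (3,'Warehouse 1','Company 3'),(4,'Warehouse 2','Company 4'),
 (5,'Warehouse 2','Company 5'),(6,'Warehouse 3','Company 6'),
 (7,'Warehouse 3','Company 7'),(8,'Warehouse 3','Company 8');
INSERT INTO TableB VALUES (1,'Warehouse 1'),(2,'Warehouse 2');
""")

# Unmatched left-join rows carry NULL on the B side, hence 'False'.
rows = conn.execute("""
    SELECT DISTINCT A.Supplier,
           CASE WHEN B.AuthorizedSupplier IS NULL THEN 'False' ELSE 'True' END AS Authorized
    FROM TableA A
    LEFT OUTER JOIN TableB B ON A.Supplier = B.AuthorizedSupplier
    ORDER BY A.Supplier
""").fetchall()
print(rows)
# [('Warehouse 1', 'True'), ('Warehouse 2', 'True'), ('Warehouse 3', 'False')]
```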
``` SELECT A.Supplier , CASE WHEN B.AuthorizedSupplier IS NULL THEN 'False' ELSE 'True' END AS Authorized FROM (Select Distinct Supplier FROM TableA) A LEFT OUTER JOIN TableB B ON A.Supplier = B.AuthorizedSupplier ```
Distinct SQL that will return true if in a table
[ "sql", "sql-server", "t-sql" ]
The method I am using may not be efficient (or possible); if so please let me know. I am trying to use `SELECT INTO` to select two values and then attach them to a variable (v1) which will be returned by the function as one cell. Inserting `UNION ALL` between the two `SELECT INTO` statements results in an `ERROR: syntax error at or near "UNION"`. **EDIT** - the function provides two unique values (not null) which are specified for these two SELECT INTO statements. **desired output:** ``` v1 v2 v3 etc. 2678, 2987 ``` **excerpt from function:** ``` SELECT value INTO v1 FROM table WHERE year <= parameteryear; SELECT value INTO v1 FROM table WHERE yearinteger >= parameteryear; data := v1; RETURN NEXT; END; $$ LANGUAGE plpgsql; ```
You can fetch both values in a single statement by joining the table to itself and concatenating the two results: ``` SELECT cast(t1.value as varchar(20)) || ', ' || cast(t2.value as varchar(20)) INTO v1 FROM table as t1, table as t2 WHERE t1.year <= parameteryear AND t2.yearinteger >= parameteryear; ```
If you want to use a union all query, it must be in a derived table. ``` select value into v1 from (union query goes here) derivedTable ```
Combining multiple SELECT INTO results into one variable
[ "sql", "postgresql", "plpgsql", "postgresql-9.3" ]
I am working on a SQL query which performs some calculations and returns the difference of two columns that belong to two different rows of a single table when certain values in the other columns are not equal. For example I have the following data in a table ``` id Market Grade Term Bid Offer CP 1 Heavy ABC Jun14 -19.5 -17 BA 2 Heavy ABC Jul14 -20 -17.5 BB 3 Sour XYZ Jun14 -30 -17 NULL 4 Sour XYZ Jul14 -32 -27 NULL 5 Sweet XY Jun14 -30 -17 PV 6 Sweet XY Jul14 -32 -27 PV ``` Now, I want the following results. As `Market` and `Grade` are the same and the CPs are not the same for `Id=1,2`, it should calculate ``` Bid of Id=1 - Offer of Id=2 Offer of Id=1- Bid of Id=2 ``` As `Market` and `Grade` are the same for `Id=3,4` and their CPs are both NULL, logically I still want to calculate as I did in the previous case ``` Bid of Id=3 - Offer of Id=4 Offer of Id=3- Bid of Id=4 ``` And, finally I don't want to calculate anything for the records with Ids 5 and 6 as their CPs are the same. Something like the following should be the result ``` Market Term Bid Offer Heavy/ABC Jun14/Jul14 (-19.5-(-17.5))=-2 (-17-(-20))=3 Sour/XYZ Jun14/Jul14 (-30-(-27))=-3 (-17-(-32))=15 ``` I was able to figure out most of this except the case when the CPs of the two records are both NULL, since `NULL != NULL` never evaluates to true, so the rows are effectively treated as equal ``` ;with numbered as ( select id, market, grade, term, bid, offer, CP, row_number() OVER (Partition BY Market, Grade ORDER BY Bid desc) i from things ) --select * from numbered select r1.market + '/' + r1.grade as Market, r1.term + '/' + r2.term as Term, r1.Bid - r2.Offer [Bid], r1.Offer - r2.Bid [Offer] from numbered r1 join numbered r2 on r1.market = r2.market and r1.grade = r2.grade and r1.i < r2.i and r1.CP!=r2.CP ``` How can I treat both NULLs as not equal?
Can you not just change: ``` and r1.CP!=r2.CP ``` to: ``` and ISNULL(r1.CP, 'X') != ISNULL(r2.CP, 'Y') ``` Edit. If you want to be really safe and live a little dangerously you could even do this: ``` and ISNULL(r1.CP, CONVERT(VARCHAR(36), NEWID())) != ISNULL(r2.CP, CONVERT(VARCHAR(36), NEWID())) ```
I'm not quite throwing this out as an answer because it is an awful solution, but you could replace NULLS with a stand-in value with the [ISNULL](http://msdn.microsoft.com/en-us/library/ms184325.aspx) function. ``` ;with numbered as ( select id, market, grade, term, bid, offer, row_number() OVER (Partition BY Market, Grade ORDER BY Bid desc) i from things ) --select * from numbered select r1.market + '/' + r1.grade as Market, r1.term + '/' + r2.term as Term, r1.Bid - r2.Offer [Bid], r1.Offer - r2.Bid [Offer] from numbered r1 join numbered r2 on r1.market = r2.market and r1.grade = r2.grade and r1.i < r2.i and ISNULL(r1.CP, 1) != ISNULL(r2.CP,2) ```
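Both answers rely on substituting a different sentinel value for NULL on each side of the comparison. A small runnable sketch of that idea, using sqlite's IFNULL via Python (the sentinel strings here are arbitrary placeholders, with the same collision caveat discussed above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, CP TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(1, 'BA'), (2, 'BB'), (3, None), (4, None), (5, 'PV'), (6, 'PV')])

# Different fallback values per side make NULL vs NULL come out "not equal".
pairs = conn.execute("""
    SELECT r1.id, r2.id
    FROM t r1 JOIN t r2 ON r1.id < r2.id
    WHERE IFNULL(r1.CP, 'sentinel-1') != IFNULL(r2.CP, 'sentinel-2')
    ORDER BY r1.id, r2.id
""").fetchall()

print((3, 4) in pairs, (5, 6) in pairs)  # True False
```

The NULL/NULL pair (ids 3 and 4) is now treated as unequal, while the matching 'PV'/'PV' pair (ids 5 and 6) is still excluded.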
Treating null values of two records as not equal in SQL query
[ "sql", "sql-server-2012" ]
I know this has been a topic that has been discussed in [AWS forums before](https://forums.aws.amazon.com/message.jspa?messageID=396906) and SO [How to setup sessionState with SQL Server mode on Amazon RDS](https://stackoverflow.com/questions/12636919/how-to-setup-sessionstate-with-sql-server-mode-on-amazon-rds) As mentioned on the above thread and on a couple of Stack Overflow articles, it would seem like there is a way to get a SQL session using AWS RDS. I used the above Stack Overflow answer as guidance to try and set up my ASPState database. Using the pastebin script in the above article for a "jobless" InstallSqlState.sql still caused me issues as it tries to use "tempdb" and "master". I don't have access to these databases, neither can I grant myself permissions to do so. However, I had a working site that used the sessions table on an EC2 server. As per the above article I used the SQL import/export tool. The database seemed to copy over okay; tables and stored procedures all seem to be present and correct. In my web.config I have: ``` <sessionState mode="SQLServer" allowCustomSqlDatabase="true" cookieless="false" timeout="45" sqlConnectionString="data source=RDSIP;initial catalog=ASPState;user id=myuser;password=mypassword" /> ``` However when I run my site I get the error: ``` Invalid object name 'tempdb.dbo.ASPStateTempApplications'. ``` Is there anyone who has managed to achieve a session state using SQL on AWS RDS, or can point me to a resource that can explain the steps I need to take?
After some digging around I realised that the stored procedures being generated by the [pastebin script](http://pastebin.com/QJDXC093) still make reference to tempdb. Doing a simple find-and-replace of [tempdb] with [ASPState] and then re-running the script recreated the stored procedures with the correct DB name. I also changed the "USE" statements to databases I had permissions for.
I've solved my issue creating the session DB on Amazon RDS. Step 1: Create the ASPState DB on AWS RDS and do the database and schema migration using the following tool: SQLAzureMW v5.15.6 Release Binary for SQL Server 2014. Step 2: Change the DB name in sessionState mode="SQLServer" allowCustomSqlDatabase="true" sqlConnectionString="data source=amazon-server-name;initial catalog=ASPState;persist security info=True;user id=userid;password=password" cookieless="false" timeout="100"> Please feel free to contact us about any of your queries. Thanks, Amit Verma
Using ASP Session State DB on AWS RDS
[ "sql", "amazon-web-services", "session-state", "rds", "aspstate" ]
I tried connecting to the database server using the command: ``` psql -h host_ip -d db_name -U user_name --password ``` It displays the following line and refuses to connect. ``` psql: FATAL: too many connections for role "user_name". ``` How to close the active connections? I do not have admin rights for the database. I am just an ordinary user.
### From inside any DB of the cluster: Catch 22: you need to be connected to a database first. Maybe you can connect as another user? (By default, some connections are reserved for superusers with the [`superuser_reserved_connections`](https://www.postgresql.org/docs/current/runtime-config-connection.html#GUC-SUPERUSER-RESERVED-CONNECTIONS) setting.) To get detailed information for each connection by this user: ``` SELECT * FROM pg_stat_activity WHERE usename = 'user_name'; ``` As the **same user** or as superuser you can cancel all (other) connections of a user: ``` SELECT pg_cancel_backend(pid) -- (SIGINT) -- pg_terminate_backend(pid) -- the less patient alternative (SIGTERM) FROM pg_stat_activity WHERE usename = 'user_name' AND pid <> pg_backend_pid(); ``` Better be sure it's ok to do so. You don't want to terminate important queries (or connections) that way. [`pg_cancel_backend()` and `pg_terminate_backend()` in the manual.](https://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-ADMIN-SIGNAL-TABLE) ### From a Linux shell Did you start those other connections yourself? Maybe a hanging script of yours? You should be able to kill those (if you are sure it's ok to do so). You can investigate with `ps` which processes might be at fault: ``` ps -aux ps -aux | grep psql ``` If you identify a process to kill (better be sure, you do *not* want to kill the server): ``` kill 123457689 # pid of process here. ``` Or with `SIGKILL` instead of `SIGTERM`: ``` kill -9 123457689 ```
I'm pretty new to pgAdmin, but so far I have not utilized the command line. I had the same issue and I found the easiest way to resolve the issue in my case was to simply delete the processes listed in "Database Activity" in the Dashboard. ![pgAdmin dashboard](https://i.stack.imgur.com/BJkwi.png) (just click the X on the left side of the PID) It's a bit tedious since you must delete each process individually, but doing so should free up your available connections. Hope this is useful.
psql: FATAL: too many connections for role
[ "sql", "postgresql", "connection", "psql" ]
I need to select data from a table that looks something like this: ``` monthStart monthEnd newPhones totalPhones oblid 1 1 1 2 1 2 2 1 2 2 1 2 2 2 3 2 2 1 1 4 2 3 0 3 5 ``` So I want to select 4 fields: month, a count of oblid per month based on monthStart, the sum of newPhones based on monthEnd, and the sum of totalPhones based on monthEnd. So for this data I need to select this: ``` month count totalPhones newPhones 1 3 2 1 2 2 5 4 3 0 3 0 ``` count for the 1st month = 3 as we have 3 rows with monthStart = 1, but we have only one row with monthEnd = 1, so totalPhones for month 1 = 2, newPhones = 1. count for the 3rd month = 0 as we have 0 rows with monthStart = 3, but we have 3 totalPhones and 0 newPhones for monthEnd = 3 - we should show this data. I'm stuck on this. I've tried selecting from the result of this select: ``` SELECT monthStart, monthEnd, count(1) as uploaded, sum(newPhones) as newPhones, sum(totalPhones) as totalPhones from TestGB group by monthEnd, monthStart ``` but I can't get the desired result. Thank you for your help!
So if there is enough data that all months are represented, I think StevieG's answer works. With smaller data sets like the sample data given, where month 3 appears in monthEnd but not in monthStart, there is a problem. Then you need something to make sure all months are represented, which I did with c; the coalesces are just to make things pretty. ``` SELECT c.month, coalesce(a.mycount,0), coalesce(b.totalPhones,0), coalesce(b.newPhones,0) FROM (SELECT monthStart as month FROM TestGB UNION SELECT monthEnd as month FROM TestGB) c LEFT OUTER JOIN (SELECT monthStart as month, count(distinct oblid) as mycount FROM TestGB GROUP BY monthStart) a ON a.month = c.month LEFT OUTER JOIN (SELECT monthEnd as month, sum(newPhones) as newPhones, sum(totalPhones) as totalPhones FROM TestGB GROUP BY monthEnd) b ON b.month = c.month ```
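Here is a runnable check of this UNION-of-months approach in Python's sqlite3 (standing in for MySQL). One assumption: the fourth sample row is taken as monthStart = 1 rather than 2, which is the only reading that makes the question's sample data consistent with its expected output; also note the count subquery uses `oblid` and the sum subquery groups on `monthEnd`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE TestGB
                (monthStart INT, monthEnd INT, newPhones INT, totalPhones INT, oblid INT)""")
conn.executemany("INSERT INTO TestGB VALUES (?, ?, ?, ?, ?)",
                 [(1, 1, 1, 2, 1), (2, 2, 1, 2, 2), (1, 2, 2, 2, 3),
                  (1, 2, 1, 1, 4),   # assumed monthStart = 1 (see lead-in)
                  (2, 3, 0, 3, 5)])

rows = conn.execute("""
    SELECT c.month,
           COALESCE(a.mycount, 0),
           COALESCE(b.totalPhones, 0),
           COALESCE(b.newPhones, 0)
    FROM (SELECT monthStart AS month FROM TestGB
          UNION
          SELECT monthEnd FROM TestGB) c
    LEFT OUTER JOIN (SELECT monthStart AS month, COUNT(DISTINCT oblid) AS mycount
                     FROM TestGB GROUP BY monthStart) a ON a.month = c.month
    LEFT OUTER JOIN (SELECT monthEnd AS month, SUM(newPhones) AS newPhones,
                            SUM(totalPhones) AS totalPhones
                     FROM TestGB GROUP BY monthEnd) b ON b.month = c.month
    ORDER BY c.month
""").fetchall()
print(rows)  # [(1, 3, 2, 1), (2, 2, 5, 4), (3, 0, 3, 0)]
```

Month 3 survives via the UNION even though it never appears in monthStart, which is the whole point of the extra derived table.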
Maybe something like this.... ``` CREATE TABLE #monthstartdata ( themonth INT, themonthcount int ) CREATE TABLE #monthenddata ( themonth INT, totalphones INT, newphones INT ) INSERT INTO #monthstartdata ( themonth, themonthcount ) SELECT monthStart, COUNT(*) FROM TestGB GROUP BY monthStart INSERT INTO #monthenddata ( themonth, totalphones, newphones ) SELECT monthEnd, SUM(totalphones), SUM(newPhones) FROM TestGB GROUP BY monthEnd SELECT #monthstartdata.themonth, themonthcount, totalphones, newphones FROM #monthstartdata INNER JOIN #monthenddata ON #monthstartdata.themonth = #monthenddata.themonth ```
MySQL get COUNT and SUM on different GROUP BY in one SELECT
[ "mysql", "sql", "group-by" ]
I am building a Monthly License Report for our IT department and it receives a list of License information in Euros. I have an XML feed that provides Exchange rate data daily but the job will only be set to run once a month. I have a conditional split set up in my SSIS package that splits off the USD exchange rate and discards everything else. What I have left to do is to return only the exchange rate for the 1st of every month; I don't need every day, just the day the report runs. Is there an SSIS expression that will return only that line out of the XML feed? Or is there a T-SQL script that will return only the data for the first of the month? The date format is MM/DD/YYYY; I have a sequence of Derived Columns that creates a [DateKey] for my [DIMDate] that is formatted as YYYYMMDD.
I actually found a solution for this online that was a little simpler. I have built it, tested it and implemented it. I had to put my foot down with our Network guy and request an outside connection to the internet in order to make it happen (something he wasn't happy about doing) but once we worked that out it worked like a charm. Here is the link to the site that I used. <http://technet.microsoft.com/en-us/sqlserver/ff686773.aspx> Thank you everyone for all the great ideas. I'm sure they will come in handy for other projects.
For the XML posted, this Xpath query will bring you the first element (group of currencies for the first date): ``` '//Cube/Cube[1]/.' ``` if you need to bring only the USD part then: ``` '//Cube/Cube[1]/Cube[@currency="USD"]/.' ``` In SSIS you can apply Xpath transformations through XML task in control flow.
Exchange Rate Return
[ "sql", "sql-server", "ssis" ]
I am having a real issue with trying to query my database by selecting the data between 2 dates. Below is the query I am using within my VB .NET project. `WHERE [BuildID] > 0 AND [EndDate] BETWEEN '01/07/2014' and '31/07/2014'` The date format is set to dd/mm/yyyy and the column type is VARCHAR(10) When I run this query, it brings back all the data in my database, basically it selects all. I have tried several methods such as `[EndDate] >= '01/07/2014' and [EndDate] <= '31/07/2014'` and it brings back the same result, it disregards the dates and brings back the entire database. I have a feeling that it is because the column is set to VARCHAR but I am not sure. If someone could point me in the right direction that would be most helpful. Thanks for taking the time to read this.
> The column type is `VARCHAR(10)` To have dates treated as dates, you'd need the column type to be `date` (or `datetime` if using older versions of SQL). If you don't have control over the column type, you'd need to convert to date. Since your strings are stored as dd/mm/yyyy, use `CONVERT` with style 103 so they parse correctly regardless of language settings, but note that converting in the WHERE clause is going to perform badly with any sizable data set: ``` WHERE [BuildID] > 0 AND CONVERT(date, [EndDate], 103) BETWEEN '2014-07-01' and '2014-07-31' ``` NB: I also changed the date format of the strings above - that's not required, but it's my preference to avoid confusion between US and UK formats (especially useful if your code's maintained by an international team).
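To see why the VARCHAR comparison misbehaves, here is a small demonstration using Python's sqlite3 (which, like any engine, compares text lexicographically; the sample dates are invented, and the substr shuffling merely emulates what a style-aware date conversion does in SQL Server):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (EndDate TEXT)")  # dd/mm/yyyy stored as text
conn.executemany("INSERT INTO t VALUES (?)",
                 [('15/07/2014',), ('20/12/2013',), ('05/01/2015',)])

# Lexicographic comparison on dd/mm/yyyy: every row "matches" July 2014.
wrong = conn.execute("""SELECT COUNT(*) FROM t
                        WHERE EndDate BETWEEN '01/07/2014' AND '31/07/2014'""").fetchone()[0]

# Rearranged to ISO yyyy-mm-dd, text order equals date order: only July 2014 matches.
right = conn.execute("""
    SELECT COUNT(*) FROM
      (SELECT substr(EndDate, 7, 4) || '-' || substr(EndDate, 4, 2) || '-' ||
              substr(EndDate, 1, 2) AS d FROM t) x
    WHERE x.d BETWEEN '2014-07-01' AND '2014-07-31'""").fetchone()[0]

print(wrong, right)  # 3 1
```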
Always **always** use [parameterized SQL queries](https://stackoverflow.com/questions/542510/how-do-i-create-a-parameterized-sql-query-why-should-i). ``` Dim date1 As Date = Date.Parse("2014-07-01") Dim date2 As Date = Date.Parse("2014-07-31") mycmd.CommandText = ".... WHERE [BuildID] > @bid AND [EndDate] BETWEEN @firstDate AND @lastDate" mycmd.Parameters.AddWithValue("@bid", 0I) mycmd.Parameters.AddWithValue("@firstDate", date1) mycmd.Parameters.AddWithValue("@lastDate", date2) ```
SQL SELECT Between 2 Dates
[ "sql", "sql-server", "vb.net" ]
I have two tables as follows: table: **recipe** fields: **recipe\_ID**, **title** --- table: **recipe\_ingredient** fields: **recipe\_ID**, **ingredient\_ID** --- I would like to show only recipes which contain certain ingredients (I managed to do that part), however I also want to **exclude** recipes which contain certain ingredients. So far I managed to do this query, it is working but it only shows recipes which contain certain ingredients. ``` SELECT DISTINCT r.recipe_ID, r.title FROM recipe r JOIN recipe_ingredient ri ON (ri.recipe_ID = r.recipe_ID) WHERE ri.ingredient_ID IN (4, 7) GROUP BY r.recipe_ID HAVING COUNT(ri.ingredient_ID) = 2 ``` How do I make it to also exclude recipes with certain ingredients? I tried some methods but I failed. Note: The 4, 7 and Count values are static for demonstration purposes. Please ask if you need any more info or anything. Thanks a lot!
You could use a sub query that gets any `recipe_ID` which contains the ingredients you **don't** want and then exclude those `recipe_ID`s in the main query: ``` SELECT r.recipe_ID, r.title FROM recipe r JOIN recipe_ingredient ri ON ri.recipe_ID = r.recipe_ID WHERE ri.ingredient_ID IN (4, 7) AND r.recipe_ID NOT IN ( SELECT rs.recipe_ID FROM recipe rs JOIN recipe_ingredient ris ON ris.recipe_ID = rs.recipe_ID WHERE ris.ingredient_ID IN (8, 2) ) GROUP BY r.recipe_ID HAVING COUNT(ri.ingredient_ID) = 2 ```
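A runnable sketch of this include/exclude pattern, played through Python's sqlite3 with invented sample data (the exclusion subquery is slightly simplified here, selecting straight from recipe_ingredient, and COUNT(DISTINCT ...) guards against duplicate ingredient rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE recipe (recipe_ID INTEGER, title TEXT);
CREATE TABLE recipe_ingredient (recipe_ID INTEGER, ingredient_ID INTEGER);
INSERT INTO recipe VALUES (1,'Soup'),(2,'Stew'),(3,'Salad');
INSERT INTO recipe_ingredient VALUES (1,4),(1,7),(2,4),(2,7),(2,8),(3,4);
""")

# Must contain both 4 and 7, must not contain 8 or 2.
rows = conn.execute("""
    SELECT r.recipe_ID, r.title
    FROM recipe r
    JOIN recipe_ingredient ri ON ri.recipe_ID = r.recipe_ID
    WHERE ri.ingredient_ID IN (4, 7)
      AND r.recipe_ID NOT IN (
            SELECT recipe_ID FROM recipe_ingredient WHERE ingredient_ID IN (8, 2))
    GROUP BY r.recipe_ID
    HAVING COUNT(DISTINCT ri.ingredient_ID) = 2
""").fetchall()
print(rows)  # [(1, 'Soup')]
```

Recipe 2 is dropped by the NOT IN (it contains ingredient 8) and recipe 3 fails the HAVING count, leaving only recipe 1.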
You can just rewrite this part of your query. ``` WHERE ri.ingredient_ID not in (4,7) ``` I believe this is what you are looking for since you say your query returns recipes with only certain ingredients.
SQL Query With & Without
[ "mysql", "sql" ]
I have a PostgreSQL 9.3 database with a users table that stores usernames in their case-preserved format. All queries will be case insensitive, so I should have an index that supports that. Additionally, usernames must be unique, regardless of case. This is what I have come up with: ``` forum=> \d users Table "public.users" Column | Type | Modifiers ------------+--------------------------+------------------------ name | character varying(24) | not null Indexes: "users_lower_idx" UNIQUE, btree (lower(name::text)) ``` Expressed in standard SQL syntax: ``` CREATE TABLE users ( name varchar(24) NOT NULL ); CREATE UNIQUE INDEX "users_lower_idx" ON users (lower(name)); ``` With this schema, I've satisfied all my constraints, albeit without a primary key. The SQL standard doesn't support functional primary keys, so I cannot promote the index: ``` forum=> ALTER TABLE users ADD PRIMARY KEY USING INDEX users_lower_idx; ERROR: index "users_lower_idx" contains expressions LINE 1: ALTER TABLE users ADD PRIMARY KEY USING INDEX users_lower_id... ^ DETAIL: Cannot create a primary key or unique constraint using such an index. ``` But, I already have the UNIQUE constraint, and the column is already marked "NOT NULL." If I had to have a primary key, I could construct the table like this: ``` CREATE TABLE users ( name varchar(24) PRIMARY KEY ); CREATE UNIQUE INDEX "users_lower_idx" ON users (lower(name)); ``` But then I'll have two indexes, and that seems wasteful and unnecessary to me. So, does PRIMARY KEY mean anything special to postgres beyond "UNIQUE NOT NULL," and am I missing anything by not having one?
First off, practically ***every*** table should have a primary key. ### [`citext`](http://www.postgresql.org/docs/current/interactive/citext.html) The additional module provides a data type of the same name. "ci" for case insensitive. Per documentation: > The `citext` module provides a case-insensitive character string type, > `citext`. Essentially, it internally calls `lower` when comparing > values. Otherwise, it behaves almost exactly like `text`. It is intended for exactly the purpose you describe: > The `citext` data type allows you to eliminate calls to lower in SQL > queries, and allows a **primary key to be case-insensitive**. Bold emphasis mine. Be sure to read [the manual about **limitations**](http://www.postgresql.org/docs/current/interactive/citext.html#AEN147965) first. Install it once per database with ``` CREATE EXTENSION citext; ``` ### `text` If you don't want to go that route, I suggest you add a [**`serial`**](https://stackoverflow.com/questions/14649682/safely-and-cleanly-rename-tables-that-use-serial-primary-key-columns-in-postgres/14651788#14651788) as **surrogate primary key**. ``` CREATE TABLE users ( user_id serial PRIMARY KEY , username text NOT NULL ); ``` I would use `text` instead of `varchar(24)`. Use a `CHECK` constraint if you need to enforce a maximum length (that may change at a later time). Details: * [Any downsides of using data type "text" for storing strings?](https://stackoverflow.com/questions/20326892/any-downsides-of-using-data-type-text-for-storing-strings/20334221#20334221) * [Change PostgreSQL columns used in views](https://stackoverflow.com/questions/8524873/change-postgresql-columns-used-in-views/8527792#8527792) Along with the `UNIQUE` index in your original design (without type cast): ``` CREATE UNIQUE INDEX users_username_lower_idx ON users (lower(username)); ``` The underlying `integer` of a `serial` is small and fast and does not have to waste time with `lower()` or the collation of your database. That's particularly useful for foreign key references. I mostly prefer that over some natural primary key with varying properties. Both solutions have pros and cons.
I would suggest using a primary key, as you have stated you want something that is unique, and as you have demonstrated that you can put unique constraints on a username. I will assume that since this is a unique, not-null username, you will use this to track your users in other parts of the database, as well as allow usernames to be changed. This is where a primary key will come in handy: instead of having to go into all of your tables and change the value of the Username column, you will only have one place to change it. Example ``` Without primary key: Table users Username 'Test' Table thingsdonebyUsers RandomColumn AnotherColumn Username RandomValue RandomValue Test ``` Now assume your user wants to change his username to Test1; well, now you have to go find everywhere you used Username and change that to the new value before you change it in your users table, since I'm assuming you will have a constraint there. ``` With Primary Key Table users PK Username 1 'Test' Table thingsdonebyUsers RandomColumn AnotherColumn PK_Users RandomValue RandomValue 1 ``` Now you can just change your users table and be done with the change. You can still enforce unique and not null on your username column as you demonstrated. This is just one of the many advantages of having normalized tables, which requires your tables to have a primary key that is an unrelated value (a surrogate key). As for what a PK actually signifies, it is just a non-nullable unique column that identifies the row, so in this sense you already have a primary key on your table. The thing is that usually PKs are INT values, for the reasons I explained above.
What does PRIMARY KEY actually signify, and does my table need one?
[ "sql", "database", "postgresql", "database-schema", "postgresql-9.3" ]
I need a way to use a particular set of where clauses depending on the comparison of 2 cells in a table, so that if the active date is less than the submit date the count will be based off the submit date where clause; otherwise the opposite is done. I have looked at trying to use a CASE but couldn't figure out how to utilise it in such a situation. The reason I am trying to do this in SQL and not code is because a fast response speed is required. Thanks ``` SELECT COUNT(Work_Order_ID) AS OutSLA FROM WOI_WorkOrder WHERE ASGRP IN ('WOIQUE') AND (Status = 'Assigned' OR Status = 'Pending' OR Status = 'In Progress') AND IF(Active_Date < Submit_Date) Submit_Date < '2014-07-28 00:00:00' ELSE Active_Date < '2014-07-28 00:00:00' ```
``` SELECT SUM(case when Active_Date < Submit_Date and Submit_Date < '2014-07-28' then 1 when Active_Date >= Submit_Date and Active_Date < '2014-07-28' then 1 end) AS OutSLA FROM WOI_WorkOrder WHERE ASGRP IN ('WOIQUE') AND (Status = 'Assigned' OR Status = 'Pending' OR Status = 'In Progress') ```
Although the other answers are correct, the `WHERE` clause doesn't have to be this difficult. If you think about your `IF` statement: whichever branch is taken, the date being checked is the later of the two, so whenever it is before the cutoff the other date is before the cutoff as well. So the `WHERE` clause simplifies to `Active_Date < '2014-07-28' AND Submit_Date < '2014-07-28'`, which is the same thing. This will not only improve readability, but also query performance. Your whole query will then look like this: ``` SELECT COUNT(Work_Order_ID) as OutSLA FROM WOI_WorkOrder WHERE ASGRP IN ('WOIQUE') AND Status IN ('Assigned', 'Pending', 'In Progress') AND Submit_Date < '2014-07-28' AND Active_Date < '2014-07-28'; ```
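The equivalence argued above can be brute-force checked in a few lines of Python (the cutoff and date values are just small integers standing in for the real dates):

```python
from itertools import product

CUTOFF = 5  # stands in for '2014-07-28'

def branch_logic(active, submit):
    # The IF/ELSE from the question: check submit when active < submit, else check active.
    return (submit < CUTOFF) if active < submit else (active < CUTOFF)

# The simplified WHERE clause: both dates below the cutoff.
all_equal = all(branch_logic(a, s) == (a < CUTOFF and s < CUTOFF)
                for a, s in product(range(10), repeat=2))
print(all_equal)  # True
```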
Use a particular set of where clauses depending on the comparision of 2 cells
[ "sql", "sql-server" ]
I have the following problem: I want to execute a query on multiple databases on my SQL Server. Every customer has a separate database. Those all have exactly the same table and their names are similar. So there is a database `kde_01_Miller`, then a `kde_02_Mueller` and so on ... I want to execute a query in every one of those databases. Here's what I have tried: ``` DECLARE @name VARCHAR(100) -- database name DECLARE @dothis nvarchar(200) DECLARE db_cursor CURSOR FOR SELECT name FROM master.dbo.sysdatabases WHERE name like 'kde_0%' order by name OPEN db_cursor FETCH NEXT FROM db_cursor INTO @name WHILE @@FETCH_STATUS = 0 BEGIN set @dothis = 'use [' + @name + ']' exec sp_executesql @dothis /* Start query */ select description from dbo.basicdata /* End query */ FETCH NEXT FROM db_cursor INTO @name END CLOSE db_cursor DEALLOCATE db_cursor ``` The problem is that the query does not work properly. The `use` statement seems not to be working. I get a result for every database I have, but the result is always the same one, dependent on the database I'm currently doing a query for. I've also tried the following and it worked: Instead of my while-loop I did this: ``` WHILE @@FETCH_STATUS = 0 BEGIN set @dothis= 'select description from ' + QUOTENAME(@name) + '.dbo.basicdata' exec sp_executesql @dothis FETCH NEXT FROM db_cursor INTO @name END ``` But I don't like this way, because you need the `quotename(@name)` for every table. How do I make the first example work?
That's not possible, since sp_executesql executes its string as its own self-contained **batch**. That means you did actually "use" the other databases, but only inside those batches I mentioned earlier. I'll try to be more clear: this code of yours is one **batch**, since there's no "**GO**" command inside (read my SQL comments): ``` DECLARE @name VARCHAR(100) -- database name DECLARE @dothis nvarchar(200) DECLARE db_cursor CURSOR FOR SELECT name FROM master.dbo.sysdatabases WHERE name like 'kde_0%' order by name OPEN db_cursor FETCH NEXT FROM db_cursor INTO @name WHILE @@FETCH_STATUS = 0 BEGIN set @dothis = 'use [' + @name + ']' -- this will create another batch and execute the @dothis -- it'll have nothing to do with your current executing batch, -- which is calling the sp_executesql exec sp_executesql @dothis /* Start query */ select description from dbo.basicdata /* End query */ FETCH NEXT FROM db_cursor INTO @name END CLOSE db_cursor DEALLOCATE db_cursor ``` So, there's only one way left: write whatever you want to do with the database inside @dothis: ``` declare @dothis nvarchar(max) set @dothis = ' use [' + @name + '] -- query start Select description from dbo.basicdata -- query end ' exec sp_executesql @dothis ```
While this question has already been answered, I thought I'd provide a second answer I believe is better. Instead of using a Cursor you can generate dynamic SQL to query multiple databases. ``` DECLARE @sql NVARCHAR(Max); SELECT @sql = COALESCE(@sql, '') + 'SELECT * FROM ' + [name] + '.sys.tables' + CHAR(13) FROM sys.databases PRINT @sql EXEC sp_executesql @sql ``` The above SQL will Generate the following SQL. ``` SELECT * FROM master.sys.tables SELECT * FROM tempdb.sys.tables SELECT * FROM model.sys.tables SELECT * FROM msdb.sys.tables SELECT * FROM StackOverflow.sys.tables SELECT * FROM AdventureWorks2012.sys.tables SELECT * FROM AdventureWorksDW2012.sys.tables ``` As you see I was able to run a query against multiple databases. I could even UNION the data together if I'd like.
Script -> Query for multiple databases
[ "sql", "sql-server", "database", "t-sql" ]
I have a MySQL table containing 100K mobile numbers (India), but some of them are invalid. I want to delete all mobile numbers that are not exactly 10 digits long or that start with a digit less than 7. Can you help me with an SQL query for this?
You could do this in 2 simple deletes. ``` DELETE FROM yourtable WHERE Length(MobileNumber) <> 10; DELETE FROM yourtable WHERE Left(MobileNumber,1) < 7; ``` Or in 1 ``` DELETE FROM yourtable WHERE Length(MobileNumber) <> 10 OR Left(MobileNumber,1) < 7; ```
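The same two-condition delete can be exercised in Python's sqlite3 with made-up numbers. One sqlite-specific wrinkle: TEXT never compares less than an INTEGER there, so the first character is cast explicitly, whereas MySQL coerces the string to a number for you:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE phones (MobileNumber TEXT)")
conn.executemany("INSERT INTO phones VALUES (?)",
                 [('9876543210',), ('612345',), ('5234567896',), ('7123456789',)])

# Delete rows that are not 10 characters long OR start with a digit below 7.
conn.execute("""DELETE FROM phones
                WHERE LENGTH(MobileNumber) <> 10
                   OR CAST(SUBSTR(MobileNumber, 1, 1) AS INTEGER) < 7""")

left = [r[0] for r in conn.execute("SELECT MobileNumber FROM phones ORDER BY 1")]
print(left)  # ['7123456789', '9876543210']
```

Only the two well-formed numbers starting with 7 or above survive.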
Here is how you can check the length and the starting digit: ``` mysql> select length(5234567896); +--------------------+ | length(5234567896) | +--------------------+ | 10 | +--------------------+ 1 row in set (0.00 sec) mysql> select substr(5234567896,1,1); +------------------------+ | substr(5234567896,1,1) | +------------------------+ | 5 | +------------------------+ 1 row in set (0.00 sec) ``` So combining the above 2 queries, you can construct the delete query as ``` delete from table_name where length(column_name) < 10 OR substr(column_name,1,1) < 7 ``` UPDATE: From Kay Nelson's note, we should delete if the length is not 10 (i.e. 9, 11, etc.). In that case the condition should be ``` length(column_name) <> 10 ```
query to delete invalid mobile numbers from mysql database
[ "", "mysql", "sql", "" ]
Table structure **employee\_salary** ``` salary_id | emp_id | salary ``` **employee table structure** ``` emp_id | first_name | last_name | gender | email | mobile | dept_id | is_active ``` **Department:** ``` dept_id | dept_name | manager_name | is_active ``` **Question:** Display, department-wise, the employee getting the highest and the lowest salary amount. I am using this query ``` SELECT max(salary) salary, dept_name, first_name, dept_id , 'MAX' Type FROM ( SELECT a.salary, c.dept_name, b.first_name, b.dept_id, a.salary_id FROM employee_salary a LEFT JOIN employee b ON a.emp_id = b.emp_id LEFT JOIN department c ON c.dept_id = b.dept_id ) t GROUP BY dept_id UNION ALL SELECT min(salary) salary, dept_name, first_name, dept_id , 'MIN' Type FROM ( SELECT a.salary, c.dept_name, b.first_name, b.dept_id, a.salary_id FROM employee_salary a LEFT JOIN employee b ON a.emp_id = b.emp_id LEFT JOIN department c ON c.dept_id = b.dept_id ) t GROUP BY dept_id ORDER BY dept_id ``` The output I am getting is shown below. **I am not able to get the respective first name from the employee table; all the other fields show correct values.** ``` salary dept_name first_name dept_id Type 30000 dept_1 Paul 1 MIN 98000 dept_1 Paul 1 MAX 51000 dept_2 Aron 2 MAX 20000 dept_2 Aron 2 MIN 40000 dept_3 Steve 3 MAX 40000 dept_3 Steve 3 MIN 64000 dept_4 Henry 4 MAX 64000 dept_4 Henry 4 MIN ```
Order the salary DESC in the subquery for finding the maximum value, and ASC for finding the minimum salary. This works because MySQL returns the value from the first matching row for the non-aggregated columns, so the subquery's ordering controls which first_name is picked: ``` SELECT max(salary) salary, dept_name, first_name, dept_id , 'MAX' Type FROM ( SELECT a.salary, c.dept_name, b.first_name, b.dept_id, a.salary_id FROM employee_salary a LEFT JOIN employee b ON a.emp_id = b.emp_id LEFT JOIN department c ON c.dept_id = b.dept_id ORDER BY a.salary DESC) t group by dept_id UNION ALL SELECT min(salary) salary, dept_name, first_name, dept_id , 'MIN' Type FROM ( SELECT a.salary, c.dept_name, b.first_name, b.dept_id, a.salary_id FROM employee_salary a LEFT JOIN employee b ON a.emp_id = b.emp_id LEFT JOIN department c ON c.dept_id = b.dept_id ORDER BY a.salary ASC) t group by dept_id ORDER BY dept_id ```
You cannot have in your select list a value that is not an aggregate function or included in your group by. Select the name after you have determined that the employeeid has the highest or lowest salary.
MySQL query to get the max, min with GROUP BY and multiple table joins in separate rows
[ "", "mysql", "sql", "" ]
For testing purposes, I provide my own implementation of the `now()` function, which is `public.now()`. Using `search_path` to override the default `pg_catalog.now()` with my own version mostly works, but I have a table with a default expression of `now()`. Showing the table produces something akin to the following: ``` start_date | date | not null default now() ``` However, after a schema save and restore (to a testing DB), the same show table produces ``` start_date | date | not null default pg_catalog.now() ``` I assume from this that, initially, the function in the default expression is not bound to any schema and the search\_path will be used to find the correct one. However, a dump/restore operation seems to "bind" the function to the current one. Is my understanding of the "bind state" of the function correct? Is there a way to keep the unbound-ness of the function across dump/restore boundaries?
Default values are parsed at creation time (early binding!). What you see in psql, pgAdmin or other clients is a text representation but, in fact, the `OID` of the function `now()` at the time of creating the column default is stored in the system catalog [`pg_attrdef`](https://www.postgresql.org/docs/current/catalog-pg-attrdef.html). I quote: > `adbin` `pg_node_tree` > > The column default value, in `nodeToString()` representation. Use > `pg_get_expr(adbin, adrelid)` to convert it to an SQL expression. Changing the [`search_path`](https://stackoverflow.com/a/9067777/939860) may cause Postgres to *display* the name of the function schema-qualified since it would not be resolved correctly any more with the current `search_path`. Dump and restore are not concerned with your custom `search_path` setting. They set it explicitly. So what you see is not related to the dump / restore cycle. ### Override built-in functions Placing `public` before `pg_catalog` in the `search_path` is a **game of hazard**. Underprivileged users (including yourself) are often allowed to write there and create functions that may inadvertently overrule system functions - with arbitrary (or malicious) outcome. You want a *dedicated schema with restricted access* to override built-in functions. Use something like this instead: ``` SET search_path = override, pg_catalog, public; ``` Details in this [related answer on dba.SE](https://dba.stackexchange.com/a/70009/3684).
The default function is "bound" at the time the default constraint is created. The view showing the unqualified name is simply abbreviating it. This can be demonstrated by inserting rows before and after shadowing the function: ``` Set search_path to public,pg_catalog; Create Temp Table foo ( test date not null default now() ); Insert Into foo default values; Create Function public.now() Returns timestamp with time zone Language SQL As $$ -- No idea why I chose this date. Select '1942-05-09'::timestamp with time zone; $$; Insert Into foo default values; Select * from foo; ``` Note that both rows (inserted before and after function creation) contain today's date, not the fake date. Furthermore, creating a table with the above function already in scope, then trying to delete the function, results in a dependency error: ``` Set search_path to public,pg_catalog; Create Function public.now() Returns timestamp with time zone Language SQL As $$ Select '1942-05-09'::timestamp with time zone; $$; Create Temp Table bar ( test date not null default now() ); Insert Into bar default values; Select * from bar; -- Single row containing the dummy date rather than today Drop Function public.now(); -- ERROR: cannot drop function now() because other objects depend on it ``` If the binding happened only on insert, there would be no such dependency.
When / how are default value expression functions bound with regard to search_path?
[ "", "sql", "postgresql", "default", "search-path", "" ]
I would like to build a query that returns entries that have been entered in two consecutive weeks. For example: ``` Name | Country | Date Name1 | Country1 | 2014-07-29 Name2 | Country2 | 2014-08-08 Name1 | Country2 | 2014-08-07 ``` I want to be able to select the entries that are entered on two consecutive weeks. In this case, my query would return only Name1. I recently asked a similar question about querying records entered on two separate dates and this is what I have for that: ``` SELECT Name FROM Table GROUP BY Name HAVING COUNT(DISTINCT Date) > 1 ``` But this checks that the record was inserted on more than one date, not that it has been entered at least once in two consecutive weeks.
If your definition of week is "7 days later", then you can do something like: ``` select t.name, t.date from table t where exists (select 1 from table t2 where t2.name = t.name and t.date between t2.date + 7 and t2.date + 14 ); ``` Note that different databases handle dates differently, so `t.date + 7` may not work in all databases. There is some similar construct that does.
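As a concrete illustration of the "7 days later" definition, here is a runnable sketch using Python's sqlite3 with the question's sample rows. In SQLite, `date + 7` is spelled `date(d, '+7 days')`, which is exactly the kind of per-engine difference the answer warns about; the table and column names are simplified for the demo.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entries (Name TEXT, Country TEXT, d TEXT)")
conn.executemany("INSERT INTO entries VALUES (?,?,?)", [
    ("Name1", "Country1", "2014-07-29"),
    ("Name2", "Country2", "2014-08-08"),
    ("Name1", "Country2", "2014-08-07"),
])

# An entry qualifies if the same name has another entry 7-14 days earlier.
rows = conn.execute("""
    SELECT DISTINCT t.Name
    FROM entries t
    WHERE EXISTS (
        SELECT 1 FROM entries t2
        WHERE t2.Name = t.Name
          AND t.d BETWEEN date(t2.d, '+7 days') AND date(t2.d, '+14 days')
    )
""").fetchall()
names = [r[0] for r in rows]
```

Name1's 2014-08-07 entry falls 9 days after its 2014-07-29 entry, so only Name1 is returned, matching the expected output.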
You haven't said what DBMS you are using, and "week" is handled differently in them. The Stack Overflow question [Getting week number of date in SQL](https://stackoverflow.com/questions/15432591/getting-week-number-of-date-in-sql) is a good starting point, or the SQL2003 standard at <http://users.atw.hu/sqlnut/sqlnut2-chp-4-sect-4.html> and search within it for EXTRACT. But working with SQL-Server, a working example would be ``` --Input your sample data DECLARE @T TABLE (N varchar(50), C varchar(50), D Date) INSERT INTO @T VALUES('Name1', 'Country1', '2014-07-29'), ('Name2', 'Country2', '2014-08-08'), ('Name1', 'Country2', '2014-08-07') --Code to search it for entries in consecutive weeks SELECT DISTINCT T1.N FROM @T as T1 INNER JOIN @T as T2 ON T1.N = T2.N WHERE (DATEPART(year, T1.D) * 52 + DATEPART(week, T1.D)) - (DATEPART(year, T2.D) * 52 + DATEPART(week, T2.D)) = 1 ``` Note that you really need to use the week function, because an entry on Monday of one week and Friday of the same week are farther apart than an order on Friday of one week and Monday of the next week. Also note that you can't *just* use the week, otherwise an entry in the first week of January of 2005 would "match" with an entry in the second week of January of 2006, or would miss a match in the last week of December of 2004.
SQL query for checking entries in two consecutive weeks
[ "", "sql", "" ]
I am getting this error: "Incorrect syntax near '<'." What's the correct way of using a CASE expression with a comparison operator in a WHERE clause? Here is my SQL query: ``` SELECT COUNT(AL.PKAPPLICATIONID) FROM APPLICATIONLOGO AL LEFT JOIN THIRDPARTYSESSIONKEY TSK ON AL.PKAPPLICATIONID = TSK.APPLICATIONSOURCEID WHERE CASE WHEN TSK.CREATEDON IS NULL THEN TSK.CREATEDON < GETDATE() ELSE 1=1 END ```
This is your `where` clause: ``` WHERE CASE WHEN TSK.CREATEDON IS NULL THEN TSK.CREATEDON < GETDATE() ELSE 1=1 END ``` I assume you mean `IS NOT NULL`; otherwise the logic doesn't make much sense. In general, the best way to use `case` in a `where` clause is to avoid it. The following is easier to read and probably what you intend: ``` WHERE TSK.CREATEDON IS NULL OR TSK.CREATEDON < GETDATE() ``` The original logic for your version is simply: ``` WHERE TSK.CREATEDON IS NOT NULL ``` And, the specific answer to your question is that a `case` statement returns a value and a boolean result is not a value. So, the following does not work: ``` where (case when c = 'a' then a = b else c = b end) ``` The following does: ``` where (case when c = 'a' then a else c end) = b ``` But as I say, better to just use the basic logical operators in most cases.
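To show that the plain boolean form behaves as intended, here is a runnable sketch with Python's sqlite3. The table and data are invented, and SQLite's `datetime('now')` stands in for SQL Server's `GETDATE()`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tsk (id INTEGER, createdon TEXT)")
conn.executemany("INSERT INTO tsk VALUES (?,?)", [
    (1, None),            # NULL        -> matched by IS NULL
    (2, "2000-01-01"),    # past date   -> matched by < now
    (3, "2999-01-01"),    # future date -> excluded
])

# The recommended rewrite: basic logical operators instead of CASE.
ids = [r[0] for r in conn.execute(
    "SELECT id FROM tsk WHERE createdon IS NULL OR createdon < datetime('now')"
)]
```

The NULL row and the past-dated row match; the future-dated row is filtered out, which is what the `CASE` attempt was presumably trying to express.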
Use an `OR`: ``` SELECT Count(AL.Pkapplicationid) FROM APPLICATIONLOGO AL LEFT JOIN THIRDPARTYSESSIONKEY TSK ON AL.Pkapplicationid = TSK.Applicationsourceid WHERE TSK.Createdon IS NULL OR TSK.Createdon < Getdate() ``` I assume that this is the actual logic since yours makes not sense: ``` WHEN TSK.Createdon IS NULL THEN TSK.Createdon < Getdate() ``` or do you actually want to use a different column in case `Createdon IS NULL`? Then you could use `COALESCE` / `ISNULL`: ``` WHERE COALESCE(TSK.Createdon, TSK.OtherColumn) < Getdate() ```
What's the correct way of using case in where clause
[ "", "sql", "sql-server", "" ]
I've been trying to come up for answer but no success. **Need to get:** price for the first purchase and price for the last purchase and grouped by SKU. Query results should be like this: ``` sku first_purchase_price Last_purchase_price BC123 3.09 6.68 QERT1 9.09 13.23 ``` **My Query** ``` SELECT sku,PRICE,MAX(purchase_DATE),MIN (purchase_DATE) FROM store ORDER By sku ``` **keep getting:** ``` SQL Error: ORA-00979: not a GROUP BY expression 00979. 00000 - "not a GROUP BY expression" ``` *or* ``` SQL Error: ORA-00937: not a single-group group function 00937. 00000 - "not a single-group group function" ``` Any help is greatly appreciated. ``` SKU TRANSAC_ID purchase_DATE PRICE ---------------------------------------------- BC123 CHI0018089 21-OCT-09 6.98 BC123 CHI0031199 11-MAR-13 6.68 BC123 NAP1000890 22-JAN-08 3.09 BC123 NAP1011123 21-DEC-11 89.9 QQQ789 NAP1000891 22-JAN-08 4.01 QERT1 JOL0400090 8-MAR-12 13.23 QERT1 NAP1000990 22-FEB-08 9.09 QERT1 NAP1001890 28-FEB-09 2.09 WW000 CHI0031208 11-MAR-13 200.01 WW000 CHI0031298 11-MAR-13 200.01 YZV11 JOL0200080 10-OCT-06 230.23 YZV11 AUR0700979 14-APR-13 6.68 YZV11 CHI0018189 03-OCT-09 556.98 YZV11 JOL0300080 10-MAR-11 300 ```
You can make use of [FIRST/LAST](http://docs.oracle.com/cd/E11882_01/server.112/e26088/functions065.htm#SQLRF51413) aggregate functions to simplify this type of query. [SQL Fiddle](http://sqlfiddle.com/#!4/d9ea1/1) **Query**: ``` select sku, max(price) keep (dense_rank first order by purchase_date) first_purchase_price, max(price) keep (dense_rank last order by purchase_date) last_purchase_price from store group by sku; ``` **[Results](http://sqlfiddle.com/#!4/d9ea1/1/0)**: ``` | SKU | FIRST_PURCHASE_PRICE | LAST_PURCHASE_PRICE | |--------|----------------------|---------------------| | BC123 | 3.09 | 6.68 | | QERT1 | 9.09 | 13.23 | | QQQ789 | 4.01 | 4.01 | | WW000 | 200.01 | 200.01 | | YZV11 | 230.23 | 6.68 | ```
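Oracle's `KEEP (DENSE_RANK FIRST/LAST)` has no direct SQLite equivalent, but the same result can be emulated with `FIRST_VALUE` window functions over ascending and descending date order. A runnable sketch with Python's sqlite3, using a subset of the question's data (this is an emulation of the technique, not Oracle's exact semantics for ties):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE store (sku TEXT, d TEXT, price REAL)")
conn.executemany("INSERT INTO store VALUES (?,?,?)", [
    ("BC123", "2009-10-21", 6.98),
    ("BC123", "2013-03-11", 6.68),
    ("BC123", "2008-01-22", 3.09),
    ("QERT1", "2012-03-08", 13.23),
    ("QERT1", "2008-02-22", 9.09),
])

# FIRST_VALUE over asc/desc order plays the role of KEEP FIRST / KEEP LAST;
# DISTINCT collapses the per-row window results to one row per sku.
rows = conn.execute("""
    SELECT DISTINCT sku,
        FIRST_VALUE(price) OVER (PARTITION BY sku ORDER BY d ASC)  AS first_price,
        FIRST_VALUE(price) OVER (PARTITION BY sku ORDER BY d DESC) AS last_price
    FROM store
    ORDER BY sku
""").fetchall()
```

This reproduces the expected output for BC123 (3.09 → 6.68) and QERT1 (9.09 → 13.23).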
It seems your query is missing a GROUP BY clause indeed, if your intention is to obtain the maximum and minimum "purchase\_DATE" values for a certain value of "sku". You need to group the columns you're referring to obtain the maximum and minimum values. With the sample data above, you could try this query: ``` SELECT sku, MAX(purchase_DATE), MIN (purchase_DATE) FROM store GROUP BY sku ORDER by sku ``` that would give you the max and min purchase dates for each SKU value, like: ``` BC123 2013-03-11 2000-01-22 -- max and min "purchase dates" for "BC123" sku QERT1 2012-03-08 2008-02-22 -- ... "QERT1" sku QQQ789 2008-01-22 2008-01-22 -- ... "QQQ789" sku ... ... ... ``` If you would like the minimum and maximum prices for a certain "sku", you could use a very similar query.
sql - oracle - selecting the first and last record for specific group/sku
[ "", "sql", "oracle", "" ]
I have this table ``` **Original Table** year month duration amount per month 2012 5 3 2000 ``` and I want to get this ``` **Result table** year month duration amount per month 2012 5 1 2000 2012 6 1 2000 2012 7 1 2000 ``` Note how the duration of a project (this is a project) is 3 and the "amount per month" is 2000, so I added two more rows to show that the next months (6 and 7) will have an "amount per month" as well. How do I do that with sql/tsql?
Try this for SQL Server; I have included my test temp table: ``` declare @temp as table ( [year] int , [month] int , [duration] int , [amount] int ) insert into @temp ( [year] , [month] , [duration] , [amount] ) VALUES( 2012 ,5 ,3 ,2000 ) SELECT [year] ,[month] + n.number ,1 ,[amount] , '1' + SUBSTRING(CAST([duration] AS varchar(10)), 2, 1000) AS Items FROM @temp JOIN master..spt_values n ON n.type = 'P' AND n.number < CONVERT(int, [duration]) ```
Please see the script below that may work for your requirement. I have also compensated for calendar year and month increments. Please test and let me know. ``` DECLARE @temp AS TABLE([Year] INT,[Month] INT,Duration INT,Amount INT) INSERT INTO @temp([year], [month], Duration, Amount) VALUES (2011, 5, 3, 2000),(2012, 11, 3, 3000),(2013, 9, 12, 1000); ;WITH cte_datefix AS ( SELECT [Year], [Month], Duration, Amount, CAST(CAST([Year] AS VARCHAR(4)) + RIGHT('00' + CAST([Month] AS VARCHAR(2)), 2) + '01' AS DATE) AS [Date] FROM @temp ), cte_Result AS (SELECT [Year], [Month], Duration, Amount, [Date], 1 AS Months FROM cte_datefix UNION ALL SELECT t.[Year], t.[Month], t.Duration, t.Amount, DATEADD(M, Months, t.[Date]) AS [Date], cr.Months + 1 AS Months FROM cte_Result AS cr INNER JOIN cte_datefix AS t ON t.[Year] = cr.[Year] WHERE cr.Months < cr.Duration ) SELECT YEAR([Date]) AS [Year], MONTH([Date]) AS [Month], 1 AS Duration, Amount FROM cte_Result ORDER BY [Date] ```
altering a table using sql
[ "", "mysql", "sql", "sql-server", "t-sql", "" ]
I currently have two tables in a database, called Email and Unsuscribed. Both tables have a column called Email. Now I want to compare these two tables and, wherever the email matches, update the column in the Email table called Email\_status\_id to 2. The query I am using is ``` UPDATE Email E SET E.Email_status_id = 2 WHERE E.Email IN ( SELECT U.Email FROM UNSUSCRIBED U); ``` I am currently using MySQL. The Email table has 2704569 rows and the Unsuscribed table has 12102 rows, and the query execution time is taking forever. Any suggestion to reduce the query execution time?
The first thing is to create an index on `Unsubscribed(Email)`: ``` create index idx_unsubscribed_email on unsubscribed(email); ``` Or, even better, declare it as the `primary key`, particularly if it is the only column in the table. Then, MySQL sometimes does a poor job of implementing `in`. There are a variety of ways to write the query making use of the index. `Exists` is a typical method: ``` update email e set email_status_id = 2 where exists (select 1 from unsubscribed u where u.email = e.email); ``` The `join` version should have similar performance with the index. EDIT: An index on `email(email)` could also help the query. For some reason, I assumed that this would already be a key in the table.
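A runnable sketch of the `EXISTS` rewrite using Python's sqlite3 (emails and row counts are invented for the demo; declaring `email` as the primary key of `unsubscribed` gives it the index the answer recommends):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE email (email TEXT, email_status_id INTEGER)")
conn.execute("CREATE TABLE unsubscribed (email TEXT PRIMARY KEY)")  # PK doubles as the index
conn.executemany("INSERT INTO email VALUES (?,?)",
                 [("a@x.com", 1), ("b@x.com", 1), ("c@x.com", 1)])
conn.executemany("INSERT INTO unsubscribed VALUES (?)", [("a@x.com",), ("c@x.com",)])

# Correlated EXISTS: for each email row, probe the unsubscribed index.
conn.execute("""
    UPDATE email SET email_status_id = 2
    WHERE EXISTS (SELECT 1 FROM unsubscribed u WHERE u.email = email.email)
""")
flagged = [r[0] for r in conn.execute(
    "SELECT email FROM email WHERE email_status_id = 2 ORDER BY email")]
```

Only the two unsubscribed addresses get status 2; the third row is untouched.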
You're doing string comparisons over a large amount of data in an `In` clause. Since you don't actually need the data returned, you can do this in an `Exists`: ``` Update Email E Set E.Email_status_id = 2 Where Exists ( Select 1 From Unsubscribed U Where U.Email = E.Email ) ``` Aside from that, proper [indexing](http://dev.mysql.com/doc/refman/5.0/en/create-index.html) on the `Email` column in both the `Email` and `Unsubscribed` tables would up your performance as well.
Query execution is taking too long
[ "", "mysql", "sql", "performance", "mysql-workbench", "database-administration", "" ]
SQL noob here. I'm kind of beat. So I have a list of data of sales employees like this ``` Employee Name| Sales Timestamp | Item Sold --------------------- Jackie Chan | 07/11/2014 | TV Jessica Alba | 08/01/2014 | Sofa Jessica Alba | 07/25/2014 | Stereo System Will Ferrell | 06/30/2014 | Sofa Will Ferrell | 07/15/2014 | TV ``` I want to return each employee, the latest date they sold something, and the item sold. I have tried ``` Select [Employee Name], MAX([Sales Timestamp]), [ITEM SOLD] FROM Sales GROUP BY [Employee Name], [ITEM SOLD] ``` but that gives me each employee's latest sale of EACH item, whereas I just want the latest item they sold. Please help!
``` ;WITH SalesCTE AS ( SELECT [Employee Name], [Sales Timestamp], [ITEM SOLD], ROW_NUMBER() OVER (PARTITION BY [Employee Name] ORDER BY [Sales Timestamp] DESC) AS rn FROM Sales ) SELECT [Employee Name], [Sales Timestamp], [ITEM SOLD] FROM SalesCTE WHERE rn = 1 ```
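The same `ROW_NUMBER() OVER (PARTITION BY ... ORDER BY ... DESC)` pattern runs unchanged in SQLite, so here is a runnable check with Python's sqlite3 and the question's data (column names shortened to `emp`/`ts`/`item`, dates rewritten as ISO strings so they sort correctly as text):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (emp TEXT, ts TEXT, item TEXT)")
conn.executemany("INSERT INTO sales VALUES (?,?,?)", [
    ("Jackie Chan",  "2014-07-11", "TV"),
    ("Jessica Alba", "2014-08-01", "Sofa"),
    ("Jessica Alba", "2014-07-25", "Stereo System"),
    ("Will Ferrell", "2014-06-30", "Sofa"),
    ("Will Ferrell", "2014-07-15", "TV"),
])

# rn = 1 marks each employee's newest sale.
rows = conn.execute("""
    WITH ranked AS (
        SELECT emp, ts, item,
               ROW_NUMBER() OVER (PARTITION BY emp ORDER BY ts DESC) AS rn
        FROM sales
    )
    SELECT emp, ts, item FROM ranked WHERE rn = 1 ORDER BY emp
""").fetchall()
```

One row per employee comes back, carrying the latest timestamp and the item from that same row — which is exactly what the plain `GROUP BY` attempt could not do.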
[**SQL Fiddle**](http://sqlfiddle.com/#!6/bed7e/2/0): ``` SELECT s.[Employee Name], s.[Sales Timestamp], s.[ITEM SOLD] FROM ( Select [Employee Name] AS emp, MAX([Sales Timestamp]) AS ts FROM Sales GROUP BY [Employee Name] ) AS ss INNER JOIN Sales s ON ss.emp = s.[Employee Name] AND ss.ts = s.[Sales Timestamp] ```
sql server: how to return the latest sales data by employee
[ "", "sql", "sql-server", "" ]
``` declare @nr1 decimal(20,19), @nr2 decimal(20,19) set @nr1 = EXP(1.0) set @nr2 = PI(); print @nr1/@nr2 ``` As EXP and PI are irrational ("infinite") numbers, you should always have enough decimals to print. The result for this query is `0.865255979432265082` For the query: ``` declare @nr12 decimal(34,25), @nr22 decimal(34,25) set @nr12 = EXP(1.0) set @nr22 = PI(); print @nr12/@nr22 ``` I get the result: `0.865255` So my question is: why is the first query more precise than the second one? `decimal(p,s)`, as it is defined on MSDN, tells me that the second query should be more precise.
This link will help: <http://msdn.microsoft.com/en-us/library/ms190476.aspx> According to this, the scale of the result of a division e1/e2 is given by the formula max(6, s1 + p2 + 1), and it also includes this note: * The result precision and scale have an absolute maximum of 38. When a result precision is greater than 38, the corresponding scale is reduced to prevent the integral part of a result from being truncated. You will probably be better off using decimal(19,16), given that the scale of exp() and pi() is 16 in both cases.
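A worked computation makes the asymmetry concrete. The sketch below is a simplified Python model of the documented rules (precision = p1 - s1 + s2 + max(6, s1 + p2 + 1), scale = max(6, s1 + p2 + 1), with the over-38 cap trimming scale but not below 6); the function name and the exact trimming rule are my assumptions, but the model reproduces both observed outputs — 18 decimals for the first query, 6 for the second:

```python
def division_result_type(p1, s1, p2, s2, max_p=38):
    """Simplified model of SQL Server's decimal division result type."""
    scale = max(6, s1 + p2 + 1)
    precision = p1 - s1 + s2 + scale
    if precision > max_p:
        # Cap precision at 38 and trim scale by the overflow, never below 6.
        scale = max(6, scale - (precision - max_p))
        precision = max_p
    return precision, scale

# decimal(20,19) / decimal(20,19): raw (60, 40) -> capped to (38, 18).
p_a = division_result_type(20, 19, 20, 19)
# decimal(34,25) / decimal(34,25): raw (94, 60) -> capped to (38, 6).
p_b = division_result_type(34, 25, 34, 25)
```

So the larger declared scale pushes the raw precision so far past 38 that almost all of the result's scale is sacrificed — which is why `0.865255979432265082` collapses to `0.865255`.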
There is a great explanation here [T-SQL Decimal Division Accuracy](https://stackoverflow.com/questions/423925/t-sql-decimal-division-accuracy) ``` --Precision = p1 - s1 + s2 + max(6, s1 + p2 + 1) --Scale = max(6, s1 + p2 + 1) --Scale = 15 + 38 + 1 = 54 --Precision = 30 - 15 + 9 + 54 = 72 --Max P = 38, P & S are linked, so (72,54) -> (38,20) --So, we have 38,20 output (but we don use 20 d.p. for this sum) = 11.74438969709659 SELECT CONVERT(DECIMAL(30,15),146804871.212533)/CONVERT(DECIMAL (38,9),12499999.9999) --Scale = 15 + 38 + 1 = 54 --Precision = 30 - 15 + 15 + 54 = 84 --Max P = 38, P & S are linked, so (84,54) -> (38,8) --So, we have 38,8 output = 11.74438969 SELECT CONVERT(DECIMAL(30,15),146804871.212533)/CONVERT(DECIMAL (38,15),12499999.9999) ```
SQL Server Decimal precision
[ "", "sql", "sql-server", "decimal", "" ]
I have this table ``` CREATE TABLE [dbo].[Room] ( [Id] INT IDENTITY (1, 1) NOT NULL, [Name] NVARCHAR (32) NOT NULL, [Capacity] TINYINT NOT NULL, [CapacityOccupied] TINYINT NOT NULL, CONSTRAINT [PK_Room] PRIMARY KEY CLUSTERED ([Id] ASC) ); ``` And the data in the table is: ![enter image description here](https://i.stack.imgur.com/8z1OB.png) I use a query to find the capacity status, like this: * **if the room is full, show Full** * **if the room is empty, show Empty** * **else show the room's free capacity** I use this query, but I don't know how to show the free capacity. ``` SELECT CASE WHEN (Capacity-CapacityOccupied)= Capacity THEN 'Empty' when (Capacity-CapacityOccupied)=0 then 'Full' END as Salable, * FROM Room ``` And the result is: ![enter image description here](https://i.stack.imgur.com/HwMXj.png) **Rows 2, 3, 4 -> how to show the free capacity?** For example: - Empty - 3 - 2 - 1 - Full
Try this way: ``` SELECT CASE WHEN (Capacity-CapacityOccupied)= Capacity THEN 'Empty' when (Capacity-CapacityOccupied)=0 then 'Full' else cast((Capacity-CapacityOccupied) as varchar(10)) END as Salable, * FROM Room ```
Try : ``` SELECT CASE WHEN (Capacity-CapacityOccupied)= Capacity THEN 'Empty' WHEN (Capacity-CapacityOccupied)=0 THEN 'Full' ELSE CAST((Capacity-CapacityOccupied) AS nvarchar(100)) END AS Salable, * FROM Room ```
use IF…THEN Sql query and return int and String
[ "", "sql", "sql-server", "if-statement", "" ]
I have the following table of messages (sid = sending correspondent ID, rid = receiving correspondent ID, mdate = message date, mtext = message text) representing a correspondence among parties: ``` sid|rid| mdate | mtext ---+---+------------+---------- 1 | 2 | 01-08-2014 | message1 <-- 1st m. in corresp. between id=1 and id=2 2 | 1 | 02-08-2014 | message2 <-- 2nd m. in corresp. between id=1 and id=2 1 | 2 | 04-08-2014 | message3 <-- last m. in corresp. between id=1 and id=2 2 | 3 | 02-08-2014 | message4 <-- not id=1 correspondence at all 1 | 3 | 03-08-2014 | message5 <-- 1st m. in corresp. between id=1 and id=3 3 | 1 | 04-08-2014 | message6 <-- 2nd m. in corresp. between id=1 and id=3 3 | 1 | 05-08-2014 | message7 <-- last m. in corresp. between id=1 and id=3 5 | 1 | 03-08-2014 | message8 <-- last m. in corresp. between id=1 and id=5 ``` The requested MySQL query should return, for one correspondent (whether sender or receiver), only the last message (sent or received) of each correspondence with other parties. So, from the previous table of messages, this query for the correspondent with id=1 should return the last correspondence messages (last sent or received): ``` sid|rid| mdate | mtext ---+---+------------+---------- 1 | 2 | 04-08-2014 | message3 3 | 1 | 05-08-2014 | message7 5 | 1 | 03-08-2014 | message8 ``` How can I make such a query in MySQL?
Group by sid if rid=1 (or rid if sid=1) to find the max date, then join: ``` select a.* from messages a join ( select if(sid=1, rid, sid) id, max(mdate) d from messages where sid = 1 or rid = 1 group by id) b on ((a.sid=1 and a.rid=b.id) or (a.sid=b.id and a.rid=1)) and a.mdate = b.d; ``` [demo](http://sqlfiddle.com/#!2/cedb8f/3)
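To see the grouping trick in action, here is a runnable sketch with Python's sqlite3 and the question's data. SQLite has no `IF()`, so a `CASE` expression stands in for it, and the dates are stored as ISO strings so `MAX()` compares them correctly:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (sid INT, rid INT, mdate TEXT, mtext TEXT)")
conn.executemany("INSERT INTO messages VALUES (?,?,?,?)", [
    (1, 2, "2014-08-01", "message1"),
    (2, 1, "2014-08-02", "message2"),
    (1, 2, "2014-08-04", "message3"),
    (2, 3, "2014-08-02", "message4"),
    (1, 3, "2014-08-03", "message5"),
    (3, 1, "2014-08-04", "message6"),
    (3, 1, "2014-08-05", "message7"),
    (5, 1, "2014-08-03", "message8"),
])

# Normalize each row to "the other party", take the max date per party, join back.
rows = conn.execute("""
    SELECT a.mtext
    FROM messages a
    JOIN (SELECT CASE WHEN sid = 1 THEN rid ELSE sid END AS other,
                 MAX(mdate) AS d
          FROM messages
          WHERE sid = 1 OR rid = 1
          GROUP BY other) b
      ON ((a.sid = 1 AND a.rid = b.other) OR (a.sid = b.other AND a.rid = 1))
     AND a.mdate = b.d
    ORDER BY a.mtext
""").fetchall()
texts = [r[0] for r in rows]
```

The three last messages (message3, message7, message8) come back, matching the expected result table.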
Assuming that there are no messages with the exact same timestamp between two correspondents, you can use a filtering join: ``` select * from messages m join ( select case when sid > rid then sid else rid end r1 , case when sid <= rid then sid else rid end r2 , max(mdate) as max_mdate from messages where 1 in (sid, rid) group by r1 , r2 ) as filter on m.sid in (filter.r1, filter.r2) and m.rid in (filter.r1, filter.r2) and m.mdate = filter.max_mdate ``` [Example on SQL Fiddle.](http://sqlfiddle.com/#!2/26dfb2/3/0)
Last message in correspondence
[ "", "mysql", "sql", "" ]
I have the following table: ``` | id | msgType | user | job_id | project_id | |+++++++++++++++++++++++++++++++++++++++++++| | 1 | 1 | 1 | 1 | 1 | | 2 | 1 | 1 | 2 | 1 | | 3 | 2 | 1 | 3 | 1 | | 4 | 2 | 1 | 4 | 1 | | 5 | 1 | 1 | 5 | 2 | ``` This is my query: ``` SELECT msgType ,user, job_id, project_id FROM mail GROUP BY msgType,user,project_id,job_id ``` The output with this query: ``` | msgType | user | job_id | project_id | |++++++++++++++++++++++++++++++++++++++| | 1 | 1 | 1 | 1 | | 1 | 1 | 2 | 1 | | 2 | 1 | 3 | 1 | | 2 | 1 | 4 | 1 | | 1 | 1 | 5 | 2 | ``` My desired output would be: ``` | msgType | user | job_id | project_id | |++++++++++++++++++++++++++++++++++++++| | 1 | 1 | 1 | 1 | | 2 | 1 | 3 | 1 | | 2 | 1 | 4 | 1 | | 1 | 1 | 5 | 2 | ``` So basically, `if msgType = 1` then I want to see 1 row for every user in every different project\_id; `if msgType=2` then I want to see every job\_id in a "project". Any help is greatly appreciated. UPDATE: I created a fiddle for this: <http://sqlfiddle.com/#!2/5708b8/4>
``` SELECT * FROM ( SELECT msgType ,user, min(job_id), project_id FROM mail WHERE msgType = 1 GROUP BY msgType,user,project_id UNION ALL SELECT msgType ,user, job_id, project_id FROM mail WHERE msgType <> 1 )T ORDER BY T.msgType Desc ``` Fiddle here: <http://sqlfiddle.com/#!2/5708b8/7>
To do it all in one statement you could do this: <http://sqlfiddle.com/#!2/5708b8/16> ``` SELECT msgType , min(user) user , min(job_id) job_id , project_id FROM mail group by msgType , project_id , case when msgType = 1 then user else job_id end ``` Though I suspect the `union all` methods above are simpler to understand, so may be better from a maintainability perspective.
MySQL if statement in GROUP BY
[ "", "mysql", "sql", "" ]
I have my table below: ``` | id | version | type | test_id | value | +------+---------+----------+---------+--------------------+ | 1235 | 0 | 600 | 111 | 2 | | 1236 | 0 | 600 | 111 | 2 | | 1237 | 0 | 600 | 111 | 2 | | 1238 | 0 | 601 | 111 | 1 | | 1239 | 0 | 602 | 111 | 1 | | 1240 | 0 | 600 | 111 | 1 | | 1241 | 0 | 601 | 111 | 1 | ``` I'm trying to retrieve counts depending on the column value. Type 600 has three rows with value 2 and one row with value 1, so I need the results 3 and 1. My co-worker told me to use distinct, but I think I'm using the wrong syntax? ``` (select distinct a.type from Answer a where type = 600) union (select distinct a.value from Answer a where type = 600) union (select count(value) from Answer where type = 600 and value = 2); ```
You can group by type and value. [SQL Fiddle Example](http://sqlfiddle.com/#!2/98091/2) ``` select type, value, count(*) from answer -- add your where clause group by type, value ```
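A runnable check of the `GROUP BY type, value` approach with Python's sqlite3, seeded with the rows from the question (only the two relevant columns are kept):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE answer (type INT, value INT)")
conn.executemany("INSERT INTO answer VALUES (?,?)",
                 [(600, 2), (600, 2), (600, 2), (601, 1), (602, 1), (600, 1), (601, 1)])

# One output row per (type, value) combination, with its count.
rows = conn.execute("""
    SELECT type, value, COUNT(*) FROM answer
    WHERE type = 600
    GROUP BY type, value
    ORDER BY value
""").fetchall()
```

For type 600 this yields one row with value 1 and three rows with value 2 — the 1 and 3 the question asks for, with no `UNION` or `DISTINCT` gymnastics needed.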
``` select value, count(*) from Answer where type=600 group by value ```
Mysql logic to count
[ "", "mysql", "sql", "count", "" ]
I have a field in the database that contains a lot of empty/null rows. I am trying to create a case statement that changes this to "Empty Field" or something similar, so that when it is displayed to the user they will have an indication that the row has no data for that column. But for some reason it is not working. When I select the data without the case statement, I get either a string or an empty field, as expected. When I add the case statement, every row comes back as `{null}` regardless of what is in that column. I have used similar case statements many times, but this has me baffled. ``` SELECT CASE WHEN col_name = TRIM('') THEN 'Empty Field' END col_name FROM table_name ``` I also tried `LIKE` in place of the equals sign, as well as `NULL` instead of `TRIM('')`. Working solution: ``` SELECT NVL(TRIM(col_name), 'Empty Field') AS col_name FROM table_name ```
You should use `nvl()` as codenheim suggested, but further to the answers about your original solution, you have no `else` clause - so nothing is matching against `= null` and you never get `Empty field`, but everything that *doesn't* match is then defaulted to null anyway. You'd need to do: ``` SELECT CASE WHEN col_name IS NULL THEN 'Empty Field' ELSE col_name END AS col_name FROM table_name ``` But that would be the same as ``` SELECT NVL(col_name, 'Empty Field') AS col_name FROM table_name ``` You suggested that still isn't working but weren't very specific; but your original use of `trim` makes me wonder if you have some values that are not null but only consist of spaces - so they would still just be spaces afterwards. If that is the case and you want to treat all-spaces as null then you could do: ``` SELECT NVL(TRIM(col_name), 'Empty Field') AS col_name FROM table_name ``` If you have other whitespace - just a `tab` perhaps - then you could use a regular expression to strip that out instead, e.g.: ``` SELECT NVL(REGEXP_REPLACE(col_name, '^[[:space:]]+$', NULL), 'Empty Field') AS col_name FROM table_name; ``` [SQl Fiddle demo](http://sqlfiddle.com/#!4/2347b/1).
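For a runnable illustration of the whitespace case, here is a sketch with Python's sqlite3. SQLite has no `NVL` and, unlike Oracle, does *not* treat `''` as NULL, so `COALESCE` plus `NULLIF(TRIM(...), '')` reproduces the Oracle behaviour of the final query (table name and data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col_name TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("value",), (None,), ("   ",)])

# NULLIF(TRIM(x), '') turns all-whitespace strings into NULL first
# (in Oracle, TRIM of all spaces is already NULL); COALESCE supplies the default.
rows = [r[0] for r in conn.execute(
    "SELECT COALESCE(NULLIF(TRIM(col_name), ''), 'Empty Field') FROM t")]
```

Both the NULL row and the spaces-only row come back as 'Empty Field', while real values pass through untouched.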
In Oracle, `''` in a VARCHAR context is treated as NULL, so you can just use NVL(col\_name, 'Default Value'): ``` select NVL(col_name, 'Empty Field') from table_name ``` TRIM('') is NULL in Oracle SQL, so it has no effect. ``` SQL> set null NULL SQL> select trim('') from dual; T - N U L L ``` As far as your CASE statement goes, the problem lies in your use of equality with NULL. You can't say WHERE NULL = anything; it is false. NULL is not equal to anything, not even itself. You need to use: ``` WHERE col_name IS NULL ``` IS NULL and = NULL are not the same. ``` SQL> select case when id = trim('') then 'Empty' end from nulls; CASEW ----- NULL SQL> select case when id = null then 'Empty' end from nulls; CASEW ----- NULL SQL> select case when id is null then 'Empty Field' end from nulls; CASEW ----- Empty Field <--- DATA! ``` If this does not work, perhaps you have spaces in your data. Test that by using the SQL*Plus SET NULL command shown above to verify the value is truly NULL.
Case statement not working as expected in Oracle
[ "", "sql", "oracle", "" ]
I am looking for a way to only select the value of a column if it is unique. Example of data in table: ``` Name City Age ----------------------- James New York 20 Charles New York 21 Tom New York 22 ``` Example of output of select all query: ``` Name City Age ----------------------- James New York 20 Charles New York 21 Tom New York 22 ``` But if Charles lived in London I want no city to be printed or a default value. Example of data in table: ``` Name City Age ------------------------ James New York 20 Charles Londen 21 Tom New York 22 ``` Example of output of a select all query. ``` Name City Age ----------------------- James / 20 Charles / 21 Tom / 22 ``` I hope you can make the sql magic happen... I tried doing it with a case structure but without any success. I am using SQL Server.
Use a join to other cities, and mask duplicates: ``` select distinct p.name, case when p2.city is not null then null else p.city end city, p.age from person p left join person p2 on p.name = p2.name and p.city != p2.city ``` `distinct` makes only one row result for duplicate cities. This query relies on the age being consistent for a given name between cities. If it's not, use something like `max(age)` and group by the other columns. To provide a default value instead of `null` for duplicates, change `then null` to `then 'Various'` (eg).
I don't believe that the accepted answer addresses OP's question, because it only checks the uniqueness across the cities of person with the same name. It also doesn't satisfy the second provided dataset. The following query checks the uniqueness **across the entire column** and satisfies the example output for the inputs provided by OP. ``` SELECT p.name, CASE WHEN stats.unique_cities > 1 THEN '/' ELSE p.city END city, p.age FROM person p CROSS JOIN (SELECT COUNT(DISTINCT city) unique_cities FROM person) stats; ``` `CROSS JOIN` (ANSI-92 syntax) could also be replaced with the `,` syntax (ANSI-89 syntax), i.e. `FROM person p, (SELECT ...`.
Select Column if Unique in SQL
[ "", "sql", "sql-server", "" ]
I have a question about `SHOW COLUMNS FROM table LIKE 'column name'`. I have already run some tests, and it seems to behave like `WHERE column_name = 'column'`. However, I would just like to confirm; thank you very much in advance. Also, why can't I use `SHOW COLUMNS FROM table = 'columnname'`?
It's more like ``` WHERE column_name LIKE 'column name' ``` Since it uses `LIKE`, you can put wildcard patterns in the parameter, e.g. ``` SHOW COLUMNS FROM table LIKE '%id' ``` will find all columns that end in `id`. If there are no wildcard characters, then `LIKE` is equivalent to `=`. If you don't want to use `LIKE`, you can use `WHERE`: ``` SHOW COLUMNS FROM table WHERE field = 'column name'; ``` In the `SHOW COLUMNS` output, the `field` column contains the column names. The `WHERE` clause also permits testing other attributes, e.g. ``` SHOW COLUMNS FROM table WHERE type LIKE 'varchar%' ``` will find all `VARCHAR` columns.
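For reference, the same lookup can be written against `information_schema`, which is handy when `SHOW COLUMNS` filtering is not flexible enough. The schema and table names below are placeholders:

```sql
-- Equivalent to SHOW COLUMNS FROM mytable LIKE '%id',
-- but with full WHERE-clause flexibility
SELECT column_name, column_type, is_nullable
FROM information_schema.columns
WHERE table_schema = 'mydb'
  AND table_name   = 'mytable'
  AND column_name LIKE '%id';
```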
Maybe this: `SHOW COLUMNS FROM accounts LIKE 'id'`
mysql query "SHOW COLUMNS FROM table like 'columnname'":questions
[ "", "mysql", "sql", "" ]
I have a FACULTY table which contains columns holding other tables' ids separated by commas. I want to join those with the respective tables. `faculty` table: ``` id | name | course_id | subject_id a | smith | 2,3 | 1,2 ``` `course` table: ``` id | name 1 | bcom 2 | mcom 3 | bba ``` `subject` table: ``` id | name 1 | account 2 | state 3 | economics ``` I want to get a result from these tables like: ``` faculty.id, faculty.name, course.name(using faculty.course_id), subject.name(using faculty.subject_id) ``` I have tried a lot of queries and also searched Google, but nothing gave me a proper result.
You can do the following query ``` select * from faculty F JOIN course C on CHARINDEX((','+CAST(c.id as varchar(10))+','), (','+f.courseid+',')) > 0 JOIN subject s on CHARINDEX((','+CAST(s.id as varchar(10))+','), (','+f.subjectid+',')) > 0 ```
I do not think the performance will be too nice, but it's worth trying. This solution would work in `SQL SERVER`: ``` SELECT * FROM faculty F JOIN course C ON ','+F.course_id+',' LIKE '%,'+CONVERT(VARCHAR,C.ID) +',%' JOIN subject S ON ','+F.subject_id+',' LIKE '%,'+CONVERT(VARCHAR,S.ID) +',%' ``` Based on `Albin Sunnanbo`'s comment I would also suggest you add some many-to-many tables: **fcourses** ``` facultyId courseId ``` and **fsubjects** ``` facultyId subjectId ``` That way you could do a proper join: ``` SELECT * FROM faculty F JOIN fcourses FC ON F.Id = FC.facultyId JOIN course C ON FC.courseId = C.ID JOIN fsubjects FS ON F.Id = FS.facultyId JOIN subject S ON FS.subjectId = S.ID ```
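A minimal sketch of the suggested many-to-many (junction) tables; the types and constraints are assumptions, since the original schema does not state them:

```sql
CREATE TABLE fcourses (
  facultyId VARCHAR(10) NOT NULL,   -- matches faculty.id
  courseId  INT NOT NULL,           -- matches course.id
  PRIMARY KEY (facultyId, courseId)
);

CREATE TABLE fsubjects (
  facultyId VARCHAR(10) NOT NULL,
  subjectId INT NOT NULL,
  PRIMARY KEY (facultyId, subjectId)
);

-- One-time migration for the sample row ('a', courses 2,3, subjects 1,2):
INSERT INTO fcourses  (facultyId, courseId)  VALUES ('a', 2), ('a', 3);
INSERT INTO fsubjects (facultyId, subjectId) VALUES ('a', 1), ('a', 2);
```

After migrating, the comma-separated columns can be dropped and plain equality joins become possible.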
Join table using columns which contains other table's id's separated by commas
[ "", "sql", "join", "" ]
I'm working on a database structure and trying to imagine the best way to split up a host of related records into tables. Records all have the same base type they inherit from, but each then expands on it for their particular use. These 4 properties are present for every type. ``` id, name, groupid, userid ``` Here are the types that expand off those 4 properties. ``` "Static": value "Increment": currentValue, maxValue, overMaxAllowed, underNegativeAllowed "Target": targetValue, result, lastResult ``` What I tried initially was to create a "records" table with the 4 base properties in it. I then created 3 other tables named "records\_static/increment/target", each with their specific properties as columns. I then forged relationships between a "rowID" column in each of these secondary tables with the main table's "id". Populating the tables with dummy data, I am now having some major problems attempting to extract the data with a query. The only parameter is the userid, beyond that what I need is a table with all of the columns and data associated with the userid. I am unsure if I should abandon that table design, or if I just am going about the query incorrectly. I hope I explained that well enough, please let me know if you need additional detail.
Make the design as simple as possible. First I'd try a single table that contains all attributes that might apply to a record. Irrelevant attributes can be `null`. You can enforce `null` values for a specific type with a `check` constraint. If that doesn't work out, you can create three separate tables, one per record type, without a common table. If that doesn't work out, you can create a base table with 1:1 extension tables. Be aware that querying that is much harder, requiring a `join` for every operation: ``` select * from fruit f left join apple a on a.fruit_id = f.id left join pear p on p.fruit_id = f.id left join ... ``` The more complex the design, the more room for an inconsistent database state. With the second option you could have a pear and an apple with the same id. With the third option you can have missing rows in either the base or the extension table. Or the tables can contradict each other, for example a base row saying "pear" with an extension row in the `Apple` table. I fully trust end users to find a way to get that into your database :) Throw out the complex design and start with the simplest one. Your first attempt was not a failure: you now know the cost of adding relations between tables. Which can look deceptively trivial (or even "right") at design time.
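A hedged sketch of the first (single-table) option, with a `CHECK` constraint that forbids attributes that do not belong to the row's type. The column names follow the question; the constraint syntax may need adjusting per engine, and some engines (e.g. MySQL before 8.0.16) parse but ignore `CHECK`:

```sql
CREATE TABLE records (
  id           INT PRIMARY KEY,
  name         VARCHAR(50) NOT NULL,
  groupid      INT,
  userid       INT,
  rectype      VARCHAR(10) NOT NULL,   -- 'static', 'increment' or 'target'
  -- static
  value        INT,
  -- increment
  currentValue INT,
  maxValue     INT,
  -- target
  targetValue  INT,
  result       INT,
  CHECK (
    (rectype = 'static'    AND currentValue IS NULL AND targetValue IS NULL)
    OR (rectype = 'increment' AND value IS NULL AND targetValue IS NULL)
    OR (rectype = 'target'    AND value IS NULL AND currentValue IS NULL)
  )
);
```

Querying by user then needs no joins at all: `SELECT * FROM records WHERE userid = 1;`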
This is a typical "object-oriented to relational" mapping problem. You can find books about this. Also a lot of google hits like <http://www.ibm.com/developerworks/library/ws-mapping-to-rdb/> The easiest for you to implement is to have one table containing all columns necessary to store all your types. Make sure you define them as nullable. Only the common columns can be not null if necessary.
Multiple record types and how to split them amongst tables
[ "", "sql", "" ]
I need to update a table from the result of a query on another table. I have these tables: ``` Table people |id|level| 1 0 2 0 3 0 ``` and Table games ``` |id | idman | win_or_lose | game | 1 1 win 1 2 2 lose 1 3 1 win 2 4 3 lose 2 5 2 win 3 6 3 lose 3 ``` I need to create a query which changes the level in the people table. I tried to use the query below, but it's wrong. Please help me fix it. My query: ``` UPDATE people p, (select count(id) as count from games where and idman = p.id and win_or_lose = 'win') g SET p.level = g.count ```
I would do this like this: (**Fiddle:** <http://sqlfiddle.com/#!2/1c29e/1/0>) ``` update people p join (select idman, count(*) as max_game from games where win_or_lose = 'win' group by idman) x on p.id = x.idman set p.level = x.max_game ```
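One caveat with the inner join above: people with no `'win'` rows are simply not updated, so their old `level` survives. If those rows should be reset to 0 as well, a left-join variant (untested sketch) is:

```sql
UPDATE people p
LEFT JOIN (SELECT idman, COUNT(*) AS wins
           FROM games
           WHERE win_or_lose = 'win'
           GROUP BY idman) x ON p.id = x.idman
SET p.level = COALESCE(x.wins, 0);  -- 0 when the player never won
```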
You could try this (I believe idman is the FK of people's Id column): ``` UPDATE people JOIN (SELECT idman FROM games WHERE win_or_lose="win" GROUP BY game)a ON people.id=a.idman SET level=level+1 ; ``` If the same person wins several different games, the level counter will keep on increasing.
Fix query to mysql
[ "", "mysql", "sql", "" ]
I have a situation where I need to convert a datetime value stored as a string to a timestamp. *I am using an Oracle database.* This actually works for me: `select TO_DATE('11-27-2013 21:28:41', 'MM-DD-YYYY HH24:MI:SS') from dual;` But my date value now is different from the above: `select TO_DATE('Sunday 6/1/2014 8:00AM', 'MM-DD-YYYY HH24:MI:SS') from dual;` - failed. I have 'Sunday' inside my date.
You have to specify a correct format mask to the `TO_DATE` function. See a full list of format masks along with documentation for the function here: <http://www.techonthenet.com/oracle/functions/to_date.php> You can correct your problem by: ``` SELECT TO_DATE('Sunday 6/1/2014 8:00AM', 'DAY MM/DD/YYYY HH:MIAM') FROM DUAL; ```
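A hedged note on the mask: Oracle's format model has no single-letter month or day-of-month elements (`D` actually means day of week, 1-7), so spell them as `MM`/`DD`. In the default non-`FX` mode those elements still match single-digit input such as `6/1`:

```sql
-- Matches 'Sunday 6/1/2014 8:00AM' despite the single-digit month/day,
-- because exact-width matching (FX) is off by default
SELECT TO_DATE('Sunday 6/1/2014 8:00AM', 'DAY MM/DD/YYYY HH:MIAM')
FROM dual;
```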
Try using the correct format. I think this will work: ``` select TO_DATE('Sunday 6/1/2014 8:00AM', 'DAY MM/DD/YYYY HH:MI AM') ``` [Here](http://www.sqlfiddle.com/#!4/d41d8/33525) is a SQL Fiddle.
Converting String Date to Date Time in Oracle
[ "", "sql", "oracle", "date", "datetime", "" ]
I have 2 tables in sybase **Account\_table** ``` Id account_code 1 A 2 B 3 C ``` **Associate\_table** ``` id account_code 1 A 1 B 1 C 2 A 2 B 3 A 3 C ``` I have this sql query ``` SELECT * FROM account_table account, associate_table assoc WHERE account.account_code = assoc.account_code ``` This query will return 7 rows. What I want is to return the rows from associate\_table that is only common to the 3 accounts like this: ``` account id account_code Assoc Id 1 A 1 2 B 1 3 C 1 ``` Can anyone help what kind of join should I do?
``` SELECT b.id account_id,a.code account_code,a.id assoc_id FROM associate a, account b WHERE a.code = b.code AND a.id IN (SELECT a.id FROM associate a, account b WHERE a.code = b.code GROUP BY a.id HAVING Count(*) = (SELECT Count(*) FROM account)); ``` NOTE: this query works only if you have unique values in the Id and account_code columns of the account table. Also, your associate_table should contain only unique combinations of (id, account_code), i.e. the associate table should not contain (1, A) or any other pair twice.
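The same "common to all accounts" requirement (relational division) can also be expressed with plain aggregation, using the question's original table and column names; a hedged alternative:

```sql
-- Ids in associate_table that reference every account_code in account_table
SELECT a.id
FROM associate_table a
GROUP BY a.id
HAVING COUNT(DISTINCT a.account_code) =
       (SELECT COUNT(*) FROM account_table);
```

With the sample data this yields id 1, the only associate linked to all of A, B and C; join the result back to the two tables to pull the remaining columns.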
Try this ``` SELECT AC.ID,AC.account_code,ASS.ID FROM account_table AC INNER JOIN associate_table AS ASS ON AC.account_code = ASS.account_code ```
SQL joining two tables with common row
[ "", "sql", "sybase", "" ]
I have lost a few hours looking for a one-row solution and didn't manage to find it. I have a value `(2013/01/03 07:13:26.000)` and I want to extract the time part to get this: `07:13:26`, so I can store it in another database as a stage layer in an ETL process. When I try `select my_datetime::datetime hour to second`, I still get the full timestamp. Thanks. **EDIT: Sorry, the source column is `datetime`, NOT timestamp.**
Ok, I managed to do this: ``` SELECT to_char(extend (my_datetime_column, hour to second),'%H:%M:%S') as my_time FROM my_table ``` Hope it will help someone!
Try this: ``` SELECT CAST(datetime_field AS DATETIME HOUR TO SECOND) ... ``` Or ``` SELECT to_char(datetime_field, "%H:%M:%S") ... ```
Extract TIME from DATETIME - informix
[ "", "sql", "timestamp", "informix", "etl", "" ]
I'm new to SQL and am not a programmer. I've been struggling with the following; finally I gave up and am asking for your help. ``` SELECT `AccountNo`, -SUM(`Amount`) AS "Charged" FROM `databasenamex`.`tablenamex` WHERE (`Charge progress` BETWEEN 1 AND 2) OR IFNULL(`Charge progress` BETWEEN 1 AND 2, 0) GROUP BY `AccountNo`; ``` This query returns only a two-row result corresponding to the summation of records where the stated condition is met. I thought the `... OR IFNULL(...) ...` section would also include all other AccountNo values where the BETWEEN condition is not met. There are a total of 10 selectable AccountNo entries, and only two have a non-zero sum. Btw, the query result is the same without the `... OR IFNULL(...) ...` section. The table has columns (among others) containing Amount, AccountNo and Charge progress. What is the proper way to set up this query?
Not sure I understood your problem, but here's a try: ``` SELECT `AccountNo`, -SUM(`Amount`) AS "Charged" FROM `databasenamex`.`tablenamex` WHERE COALESCE(`Charge progress`, 1) BETWEEN 1 AND 2 GROUP BY `AccountNo`; ``` COALESCE is a function that returns its leftmost non-null value, so if `Charge progress` is null, 1 is the result, which lies between 1 and 2. EDIT: updated with a table definition, some sample data and a modified query (note that a column name containing a space must be quoted with backticks in MySQL): ``` CREATE TABLE tablesql ( ID int(10) NOT NULL AUTO_INCREMENT , AccountNo tinyint(3) NOT NULL , Amount smallint(5) DEFAULT NULL , `Charge progress` tinyint(2) DEFAULT NULL , PRIMARY KEY (ID) , UNIQUE KEY ID (ID) -- redundant ) ENGINE=InnoDB AUTO_INCREMENT=11 DEFAULT CHARSET=latin1; INSERT INTO tablesql VALUES (1,1,5,NULL),(2,23,6,3),(3,23,7,2),(4,3,8,1),(5,3,9,2),(6,3,10,2) , (7,3,11,3),(8,3,12,NULL),(9,4,13,NULL),(10,4,14,3); select AccountNo, sum( case when `Charge progress` is null then 0 else amount end ) as amount from tablesql where COALESCE(`Charge progress`, 1) BETWEEN 1 AND 2 group by AccountNo; +-----------+--------+ | AccountNo | amount | +-----------+--------+ | 1 | 0 | | 3 | 27 | | 4 | 0 | | 23 | 7 | +-----------+--------+ 4 rows in set (0.00 sec) select * from resultquery; +-----------+--------+ | AccountNo | Amount | +-----------+--------+ | 1 | 0 | | 23 | 7 | | 3 | 27 | | 4 | 0 | +-----------+--------+ 4 rows in set (0.00 sec) ```
This seems to be what you want: ``` SELECT `AccountNo`, -SUM(`Amount`) AS "Charged" FROM `databasenamex`.`tablenamex` WHERE `Charge progress` BETWEEN 1 AND 2 OR `Charge progress` IS NULL GROUP BY `AccountNo`; ``` Your question, though, says that you want all of the rows. Why are you using a `where` clause at all if you want all accounts?
sql query to include null
[ "", "mysql", "sql", "" ]
I want to determine the AND clause based on the value of the current row. So for example, my job table has 4 date columns: offer\_date, accepted\_date, start\_date, reported\_date. I want to check against an exchange rate based on the date. I know the reported\_date is never null, but it's my last resort, so I have a priority order for which to join against the exchange\_rate table. I'm not quite sure how to do this with a CASE statement, if that's even the right approach. ``` SELECT * FROM job j INNER JOIN exchange_rate er ON j.currency_id = er.currency_id AND er.date = ( -- use offer_date if not null -- use accepted_date if above is null -- use start_date if above two are null -- use reported_date if above three are null ) ```
Should just be a simple coalesce, so something like: ``` SELECT * FROM job j INNER JOIN exchange_rate er ON j.currency_id = er.currency_id AND er.date = COALESCE( offer_date, accepted_date, start_date, reported_date ) ```
A CASE expression in the join condition: ``` CASE WHEN offer_date is not null THEN offer_date WHEN accepted_date IS NOT NULL THEN accepted_date WHEN start_date IS NOT NULL THEN start_date WHEN reported_date IS NOT NULL THEN reported_date ELSE '' END ``` It will work for your required condition.
SQL Conditional AND
[ "", "sql", "join", "case", "conditional-statements", "" ]
I'm looking for an efficient way to find all the intersections between sets of timestamp ranges. It needs to work with PostgreSQL 9.2. Let's say the ranges represent the times when a person is available to meet. Each person may have one or more ranges of times when they are available. I want to find *all* the time periods when a meeting can take place (ie. during which all people are available). This is what I've got so far. It seems to work, but I don't think it's very efficient, since it considers one person's availability at a time. ``` WITH RECURSIVE td AS ( -- Test data. Returns: -- ["2014-01-20 00:00:00","2014-01-31 00:00:00") -- ["2014-02-01 00:00:00","2014-02-20 00:00:00") -- ["2014-04-15 00:00:00","2014-04-20 00:00:00") SELECT 1 AS entity_id, '2014-01-01'::timestamp AS begin_time, '2014-01-31'::timestamp AS end_time UNION SELECT 1, '2014-02-01', '2014-02-28' UNION SELECT 1, '2014-04-01', '2014-04-30' UNION SELECT 2, '2014-01-15', '2014-02-20' UNION SELECT 2, '2014-04-15', '2014-05-05' UNION SELECT 3, '2014-01-20', '2014-04-20' ) , ranges AS ( -- Convert to tsrange type SELECT entity_id, tsrange(begin_time, end_time) AS the_range FROM td ) , min_max AS ( SELECT MIN(entity_id), MAX(entity_id) FROM td ) , inter AS ( -- Ranges for the lowest ID SELECT entity_id AS last_id, the_range FROM ranges r WHERE r.entity_id = (SELECT min FROM min_max) UNION ALL -- Iteratively intersect with ranges for the next higher ID SELECT entity_id, r.the_range * i.the_range FROM ranges r JOIN inter i ON r.the_range && i.the_range WHERE r.entity_id > i.last_id AND NOT EXISTS ( SELECT * FROM ranges r2 WHERE r2.entity_id < r.entity_id AND r2.entity_id > i.last_id ) ) -- Take the final set of intersections SELECT * FROM inter WHERE last_id = (SELECT max FROM min_max) ORDER BY the_range; ```
I created the `tsrange_interception_agg` aggregate ``` create function tsrange_interception ( internal_state tsrange, next_data_values tsrange ) returns tsrange as $$ select internal_state * next_data_values; $$ language sql; create aggregate tsrange_interception_agg (tsrange) ( sfunc = tsrange_interception, stype = tsrange, initcond = $$[-infinity, infinity]$$ ); ``` Then this query ``` with td (id, begin_time, end_time) as ( values (1, '2014-01-01'::timestamp, '2014-01-31'::timestamp), (1, '2014-02-01', '2014-02-28'), (1, '2014-04-01', '2014-04-30'), (2, '2014-01-15', '2014-02-20'), (2, '2014-04-15', '2014-05-05'), (3, '2014-01-20', '2014-04-20') ), ranges as ( select id, row_number() over(partition by id) as rn, tsrange(begin_time, end_time) as tr from td ), cr as ( select r0.tr tr0, r1.tr as tr1 from ranges r0 cross join ranges r1 where r0.id < r1.id and r0.tr && r1.tr and r0.id = (select min(id) from td) ) select tr0 * tsrange_interception_agg(tr1) as interseptions from cr group by tr0 having count(*) = (select count(distinct id) from td) - 1 ; interseptions ----------------------------------------------- ["2014-02-01 00:00:00","2014-02-20 00:00:00") ["2014-01-20 00:00:00","2014-01-31 00:00:00") ["2014-04-15 00:00:00","2014-04-20 00:00:00") ```
If you have a fixed number of entities you want to cross reference, you can use a cross join for each of them, and build the intersection (using the `*` operator on ranges). Using a cross join like this is probably less efficient, though. The following example has more to do with explaining the more complex example below. ``` WITH td AS ( SELECT 1 AS entity_id, '2014-01-01'::timestamp AS begin_time, '2014-01-31'::timestamp AS end_time UNION SELECT 1, '2014-02-01', '2014-02-28' UNION SELECT 1, '2014-04-01', '2014-04-30' UNION SELECT 2, '2014-01-15', '2014-02-20' UNION SELECT 2, '2014-04-15', '2014-05-05' UNION SELECT 4, '2014-01-20', '2014-04-20' ) ,ranges AS ( -- Convert to tsrange type SELECT entity_id, tsrange(begin_time, end_time) AS the_range FROM td ) SELECT r1.the_range * r2.the_range * r3.the_range AS r FROM ranges r1 CROSS JOIN ranges r2 CROSS JOIN ranges r3 WHERE r1.entity_id=1 AND r2.entity_id=2 AND r3.entity_id=4 AND NOT isempty(r1.the_range * r2.the_range * r3.the_range) ORDER BY r ``` In this case a multiple cross join is probably less efficient because you don't actually need to have all the possible combinations of every range in reality, since `isempty(r1.the_range * r2.the_range)` is enough to make `isempty(r1.the_range * r2.the_range * r3.the_range)` true. I don't think you can avoid going through each person's availability one at a time, since you want them all to meet anyway. What may help is to build the set of intersections incrementally, by cross joining each person's availability to the previous subset you've calculated using another recursive CTE (`intersections` in the example below).
You then build the intersections incrementally and get rid of the empty ranges, both stored arrays: ``` WITH RECURSIVE td AS ( SELECT 1 AS entity_id, '2014-01-01'::timestamp AS begin_time, '2014-01-31'::timestamp AS end_time UNION SELECT 1, '2014-02-01', '2014-02-28' UNION SELECT 1, '2014-04-01', '2014-04-30' UNION SELECT 2, '2014-01-15', '2014-02-20' UNION SELECT 2, '2014-04-15', '2014-05-05' UNION SELECT 4, '2014-01-20', '2014-04-20' ) ,ranges AS ( -- Convert to tsrange type SELECT entity_id, tsrange(begin_time, end_time) AS the_range FROM td ) ,ranges_arrays AS ( -- Prepare an array of all possible intervals per entity SELECT entity_id, array_agg(the_range) AS ranges_arr FROM ranges GROUP BY entity_id ) ,numbered_ranges_arrays AS ( -- We'll join using pos+1 next, so we want continuous integers -- I've changed the example entity_id from 3 to 4 to demonstrate this. SELECT ROW_NUMBER() OVER () AS pos, entity_id, ranges_arr FROM ranges_arrays ) ,intersections (pos, subranges) AS ( -- We start off with the infinite range. SELECT 0::bigint, ARRAY['[,)'::tsrange] UNION ALL -- Then, we unnest the previous intermediate result, -- cross join it against the array of ranges from the -- next row in numbered_ranges_arrays (joined via pos+1). -- We take the intersection and remove the empty array. SELECT r.pos, ARRAY(SELECT x * y FROM unnest(r.ranges_arr) x CROSS JOIN unnest(i.subranges) y WHERE NOT isempty(x * y)) FROM numbered_ranges_arrays r INNER JOIN intersections i ON r.pos=i.pos+1 ) ,last_intersections AS ( -- We just really want the result from the last operation (with the max pos). SELECT subranges FROM intersections ORDER BY pos DESC LIMIT 1 ) SELECT unnest(subranges) r FROM last_intersections ORDER BY r ``` I'm not sure whether this is likely to perform better, unfortunately. You'd probably need a larger dataset to have meaningful benchmarks.
Find all intersections of all sets of ranges in PostgreSQL
[ "", "sql", "postgresql", "date-range", "" ]
I am trying to find the total count of distinct clientIds in my table. Here are the queries: ``` $stmt = $conn->prepare("SELECT DISTINCT count(clientId) as totalrecords FROM quotes WHERE storeEmail = '". $store['email']. "'"); $stmt->execute(); $totalRecords = $stmt->fetch(PDO::FETCH_COLUMN); echo 'Total Users Found with Request Quote : ' . $totalRecords . "<br /><br />"; echo 'Press Send button to send notifications to all users <br /><br />'; $query = "SELECT DISTINCT clientId FROM quotes WHERE storeEmail = '". $store['email']. "'"; $stmt = $conn->prepare($query); $stmt->execute(); $clients = $stmt->fetchAll(PDO::FETCH_COLUMN); ``` The first query gives me 147, whereas the second query gives me 60. What is wrong with the first query?
[`COUNT DISTINCT`](http://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html#function_count) is what you're looking for: ``` $query = "SELECT COUNT(DISTINCT clientId) FROM quotes WHERE storeEmail = '". $store['email']. "'"; ```
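The difference between the two queries in a nutshell: `DISTINCT COUNT(clientId)` counts every matching row first and only then de-duplicates the single number it produced, while `COUNT(DISTINCT clientId)` de-duplicates before counting:

```sql
-- Suppose quotes holds clientId values 7, 7, 9 for this store
SELECT DISTINCT COUNT(clientId) FROM quotes;  -- 3: total rows
SELECT COUNT(DISTINCT clientId) FROM quotes;  -- 2: distinct clients
```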
`SELECT DISTINCT count(clientId)` will return the total count of clientId rows whether you use DISTINCT or not, whereas the second query returns the distinct clientIds. You should use `SELECT count(DISTINCT clientId)` instead.
SELECT Distinct table items from a table in mysql
[ "", "mysql", "sql", "" ]
I am sure this was asked before, but I just can't figure out how to search for it. I have a table table1: ``` | RefId_1 | RefId_2 | --------------------- | 1 | 133 | | 3 | 12 | | 4 | 144 | | 4 | 22 | | 3 | 123 | ``` I need to get a list of `RefId_1` values which have a reference to one list of `RefId_2` values but don't have any references to another list of `RefId_2` values. For example, I need the `RefId_1` values which reference any of the `RefId_2` list (133, 22, 44) but which do not reference any of the `RefId_2` list (12, 144, 111). The result should be just (1), because (4) references the forbidden 144. Thanks in advance
**UPDATED**: (**UNTESTED**) ``` SELECT DISTINCT RefId_1 FROM table1 WHERE RefId_2 IN (133, 22, 44) AND RefId_1 NOT IN (SELECT RefId_1 FROM table1 WHERE RefId_2 IN (12, 144, 111)) ```
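A side note: `NOT IN (subquery)` returns no rows at all if the subquery can produce a NULL. `RefId_1` presumably is never NULL here, but a `NOT EXISTS` formulation of the same idea (untested sketch) is immune to that:

```sql
SELECT DISTINCT t.RefId_1
FROM table1 t
WHERE t.RefId_2 IN (133, 22, 44)
  AND NOT EXISTS (SELECT 1
                  FROM table1 f
                  WHERE f.RefId_1 = t.RefId_1
                    AND f.RefId_2 IN (12, 144, 111));
```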
Try this; it should do what you want: ``` SELECT DISTINCT RefId_1 FROM table1 WHERE RefId_1 NOT IN (SELECT RefId_1 FROM table1 WHERE RefId_2 IN (12, 144)); ```
Get rows which do have references to list of values and at the same time don't have any references to list of other values
[ "", "mysql", "sql", "" ]
I have a SQL Server 2008 query that displays `Start_of_Attack` and `End_of_Attack`: ``` SELECT CAST (DATEADD(SECOND, dbo.AGENT_SECURITY_LOG_1.BEGIN_TIME /1000 + 8*60*60, '19700101') AS VARCHAR(50)) as Start_of_Attack, CAST (DATEADD(SECOND, dbo.AGENT_SECURITY_LOG_1.END_TIME /1000 + 8*60*60, '19700101') AS VARCHAR(50)) as End_of_Attack FROM dbo.AGENT_SECURITY_LOG_1 WHERE dbo.AGENT_SECURITY_LOG_1.TIME_STAMP > DATEDIFF(second, '19700101', DATEADD(day, -1, GETDATE())) * CAST(1000 as bigint) ``` I would like to display a third column, `Duration_of_Attack`. How do I display the time difference between `Start_of_Attack` and `End_of_Attack`?
Since you're storing your date as an integer, you can just do: ``` SELECT (dbo.AGENT_SECURITY_LOG_1.END_TIME - dbo.AGENT_SECURITY_LOG_1.BEGIN_TIME )/1000 as Duration_Of_Attack ``` If the values were stored as `DATETIME` you could use `DATEDIFF()` easily, as it is you can still use `DATEDIFF()` but it requires casting your existing date as `DATETIME()` instead of `VARCHAR()` and is needless: ``` SELECT CAST (DATEADD(SECOND, dbo.AGENT_SECURITY_LOG_1.BEGIN_TIME /1000 + 8*60*60, '19700101') AS VARCHAR(50)) as Start_of_Attack, CAST (DATEADD(SECOND, dbo.AGENT_SECURITY_LOG_1.END_TIME /1000 + 8*60*60, '19700101') AS VARCHAR(50)) as End_of_Attack, DATEDIFF(ss,CAST(DATEADD(SECOND, dbo.AGENT_SECURITY_LOG_1.BEGIN_TIME /1000 + 8*60*60, '19700101') AS DATETIME) ,CAST(DATEADD(SECOND, dbo.AGENT_SECURITY_LOG_1.END_TIME /1000 + 8*60*60, '19700101') AS DATETIME)) FROM dbo.AGENT_SECURITY_LOG_1 WHERE dbo.AGENT_SECURITY_LOG_1.TIME_STAMP > DATEDIFF(second, '19700101', DATEADD(day, -1, GETDATE())) * CAST(1000 as bigint) ``` It's advisable to use one of the appropriate `DATETIME` data types, but you've got another 22 years or so before you'll run into problems.
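If the millisecond difference should be displayed as `hh:mm:ss` rather than raw seconds, one hedged option (valid as long as an attack lasts under 24 hours) is `CONVERT` style 108:

```sql
SELECT CONVERT(VARCHAR(8),
               DATEADD(SECOND, (END_TIME - BEGIN_TIME) / 1000, 0),
               108) AS Duration_of_Attack
FROM dbo.AGENT_SECURITY_LOG_1;
```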
Your times look like they are in a Unix format of milliseconds since a reference time. So just take the difference: ``` SELECT (dbo.AGENT_SECURITY_LOG_1.END_TIME - dbo.AGENT_SECURITY_LOG_1.BEGIN_TIME) / 1000 as diff_in_seconds ```
SQL Server 2008 - Display time difference
[ "", "sql", "sql-server-2008", "datetime", "" ]
I'm working with SAP Business One in a formatted search and have run into some syntax I am unfamiliar with or haven't come across before. This is probably pretty basic but can anybody shed some light on this please? Ignore the $[system objects]. ``` DECLARE @parent nVarChar(15) SET @parent = (SELECT T0.FNum FROM OACT T0 WHERE T0.ACode = $[DLN1.ACode]) ``` > ``` > @parent ='40099' > ``` ``` begin if $[OCRD.ctry] = 'UK' and $[OCRD.zip] not like 'BBT_%%' begin select 19 end else if $[ocrd.country] = 'SA' or $[ocrd.zipcode] like 'BBT_%%' begin select 31 end else if $[ocrd.country] = 'NZ' begin select 41 end else if $[ocrd.country] = 'AUS' begin select 51 end else select 21 end ``` The full query currently doesn't work, but my question is, typically what does the "Select 21" or "Select 41" or "Select 51" or select xxx number return? Can anybody provide an example which will help me understand? Thanks
Select [VALUE] will return just a value. **Oracle:** ``` select 'Hello' from Dual ``` **SQL Server:** ``` Select 'Hello' ``` So what your code seems to be doing is setting/displaying a value, nothing more. An example of another way you can use it, multiple items within a select: <http://sqlfiddle.com/#!2/a2581/28218/0> This works both in SQL Server and Oracle, no need for Dual.
`Select XX` will return `XX`. For example, I have three rows in my table bookings; if I do `select 21 from tablename` then I'll get the following result: ![enter image description here](https://i.stack.imgur.com/ZNvMq.jpg) and if I just do `select 21`, it will give me this: ![enter image description here](https://i.stack.imgur.com/3kePS.jpg)
Basic SQL syntax SELECT [number]
[ "", "sql", "select", "sapb1", "" ]
Hi there guys, basically I have an exercise asking me to do this: What is the lowest rated review in the database? List the review title, date of post, author username, category name, and rating. I've written this: ``` SELECT * from reviews where rating like '%1%' ``` Is that right? It's not showing anything, but I am unsure why. Please excuse me as I'm trying to learn SQL. My reviews table: <http://gyazo.com/e48a0be08782af79b4fbf0fec5481eba> THANKS!!
`SELECT * from reviews where rating = (select min(rating) from reviews)` Here you go!
``` SELECT * from reviews where rating = (SELECT MIN(rating) from reviews ) ```
Struggling with an SQL query
[ "", "mysql", "sql", "" ]
Can anybody tell me how to overcome this error when taking a backup in SQL Server 2008 Express? ``` Backup failed for Server '\SQLEXPRESS'. (Microsoft.SqlServer.SmoExtended) For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=10.50.1447.4+((KJ_RTM).100213-0103+)&EvtSrc=Microsoft.SqlServer.Management.Smo.ExceptionTemplates.FailedOperationExceptionText&EvtID=Backup+Server&LinkId=20476 System.Data.SqlClient.SqlError: The media loaded on "D:\Project 2014\PayrollBackup" is formatted to support 2 media families, but 1 media families are expected according to the backup device specification. (Microsoft.SqlServer.Smo) For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=10.50.1447.4+((KJ_RTM).100213-0103+)&LinkId=20476 ``` I just want to take a single backup of my database, but I always have to give two destinations, and then when restoring I can't restore the database using a single backup file. Is there anything to configure in SSMS? If so, could you please explain clearly? **EDIT:** ``` BACKUP DATABASE [Payroll] TO DISK = N'D:\Project 2014\PayrollBackup' WITH NOFORMAT, NOINIT, NAME = N'Payroll-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10 GO ```
The backup statement is using the `NOINIT` clause. This causes each successive backup to append to the existing backup file. You are attempting to backup to one media set (one file), but it appears the existing backup you are attempting to append to consisted of two media sets (two files). For a quick turnaround, specify a new filename: `BACKUP DATABASE [Payroll] TO DISK = N'D:\Project 2014\<insert new filename>' WITH ...`
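If the old two-family backup set at that path is disposable, the other option is to reinitialize the media set in place. `FORMAT` rewrites the media header as a fresh single-family set; be aware it destroys every backup already in that file:

```sql
BACKUP DATABASE [Payroll]
TO DISK = N'D:\Project 2014\PayrollBackup'
WITH FORMAT, INIT,
     NAME = N'Payroll-Full Database Backup',
     SKIP, NOREWIND, NOUNLOAD, STATS = 10;
```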
A simple solution has arisen (via SSMS), based on the [answer of @DMason](https://stackoverflow.com/questions/25086022/backup-failed-for-server-the-media-is-formatted-to-support-2-media-families-err/25086435#25086435) and [answer of @Carol Baker West](https://dba.stackexchange.com/a/33320/127892). [![enter image description here](https://i.stack.imgur.com/5xXV1.jpg)](https://i.stack.imgur.com/5xXV1.jpg) Just create a new media set for a ***single media family***. ![https://i.stack.imgur.com/C6eNN.gif](https://i.stack.imgur.com/C6eNN.gif)
Backup failed for Server, The media is formatted to support 2 media families Error
[ "", "sql", "t-sql", "sql-server-express", "database-backups", "" ]
I'm trying to create a view based on an existing table. The table will be like: ``` +------+------+------+------+ | col1 | col2 | col3 | col4 | +------+------+------+------+ | 1 | a1 | a2 | a3 | | 2 | b1 | b2 | b3 | | 3 | c1 | c2 | c3 | | 4 | d1 | d2 | d3 | | 5 | e1 | e2 | e3 | | 6 | f1 | f2 | f3 | +------+------+------+------+ ``` The resultant view should have rows in the following manner: ``` +------+------+------+ | col1 | col2 | col3 | +------+------+------+ | 1 | a1 | a3 | | 2 | a2 | a3 | | 3 | b1 | b3 | | 4 | b2 | b3 | | 5 | c1 | c3 | | 6 | c2 | c3 | +------+------+------+ ``` My SQL query looks like this: ``` (select col1 as "col 1",col2 as "col 2",col4 as "col 3" from t1) union (select col1 as "col 1",col3 as "col 2",col4 as "col 3" from t1) order by "col 2","col 3" ``` But the `order by` doesn't seem to work. I'm getting the same order of rows for whatever alias I give after the `order by` Thanks in advance
It is very likely that your MySQL instance is configured to [treat double-quoted strings as string literals rather than quoted identifiers](http://dev.mysql.com/doc/refman/5.0/en/sql-mode.html#sqlmode_ansi_quotes "Server SQL Modes – ANSI_QUOTES"). If you use MySQL's native backticks to delimite the names in ORDER BY, [you will get the expected result](http://sqlfiddle.com/#!2/53646/2 "Demo at SQL Fiddle"): ``` (select col1 as "col 1",col2 as "col 2",col4 as "col 3" from t1) union (select col1 as "col 1",col3 as "col 2",col4 as "col 3" from t1) order by `col 2`,`col 3` ```
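You can confirm the diagnosis on your own server; if `ANSI_QUOTES` appears in `sql_mode`, double-quoted names are identifiers and the original `ORDER BY` works as written (session-level sketch):

```sql
SELECT @@sql_mode;  -- look for ANSI_QUOTES in the output

-- Enable it for this session only; "col 2" then names a column, not a string
SET SESSION sql_mode = CONCAT(@@sql_mode, ',ANSI_QUOTES');
```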
Try: ``` select * from ( (select col1 as col1,col2 as col2,col4 as col3 from t1) union (select col1 as col1,col3 as col2,col4 as col3 from t1) ) as t order by t.col2,t.col3 ``` In your case only the second query will order the result.
SQL ORDER BY with UNION
[ "", "mysql", "sql", "sql-order-by", "union", "" ]
I have this problem I've been working on for a while, and I just haven't been able to figure out what's wrong with my query. Today I finally got a bit closer, and I think I'm now not far from succeeding, but I need your help to spot the mistake in this query, because I've been staring at it for too long. This is the error I get: "The objects "webpages\_UsersInRoles" and "webpages\_UsersInRoles" in the FROM clause have the same exposed names. Use correlation names to distinguish them." ``` SELECT * FROM Users INNER JOIN webpages_UsersInRoles ON webpages_UsersInRoles.UserId = Users.UserId INNER JOIN webpages_UsersInRoles ON webpages_UsersInRoles.RoleId = webpages_Roles.RoleId WHERE Users.UserId = 1 ``` Thank you in advance!
I think the query below is the final query you need. I have used the aliases `TAB1`, `TAB2` and `TAB3` for clarity and to avoid confusion. Change them as required. ``` SELECT * FROM Users TAB1 INNER JOIN webpages_UsersInRoles TAB2 ON TAB2.UserId = TAB1.UserId INNER JOIN webpages_Roles TAB3 ON TAB2.RoleId = TAB3.RoleId WHERE TAB1.UserId = 1 ``` Btw, just for info: you probably get the "`The objects "webpages_UsersInRoles" and "webpages_UsersInRoles" in the FROM clause have the same exposed names. Use correlation names to distinguish them.`" error at the line marked below in your original query: ``` SELECT * FROM Users INNER JOIN webpages_UsersInRoles ON webpages_UsersInRoles.UserId = Users.UserId INNER JOIN webpages_UsersInRoles ON webpages_UsersInRoles.RoleId -----> Not able to identify which webpages_UsersInRoles is being referenced (first inner join or second inner join) = webpages_Roles.RoleId WHERE Users.UserId = 1 ```
You have to use an alias for the tables: ``` SELECT * FROM Users INNER JOIN webpages_UsersInRoles w1 ON w1.UserId = Users.UserId INNER JOIN webpages_UsersInRoles w2 ON w1.RoleId = w2.RoleId WHERE Users.UserId = 1 ```
Two inner joins with same tables
[ "", "sql", "sql-server", "inner-join", "" ]
I have a table that has multiple rows for the same ID. I need to return a count of the distinct IDs which have no rows that contain a value. I have tried something like this: ``` SELECT COUNT(DISTINCT studID) AS 'ID' FROM all_classes WHERE absences=0 AND (SELECT DISTINCT studID FROM all_classes WHERE absences<>0) <>studID; ``` In this example, I am trying to write a query to find the number of students who have zero absences in all classes. If they have an absence in one class but zero in the other classes, they should be ignored (which is what I am using the nested query for, but it doesn't work). Thank you in advance for any suggestions.
I think you were very close. I would have used a NOT IN subquery, and I don't think the first WHERE criterion was necessary. This should pull any student ID where all rows for that ID have absences=0: ``` SELECT COUNT(DISTINCT studID) AS 'ID' FROM all_classes WHERE studID NOT IN (SELECT DISTINCT studID FROM all_classes WHERE absences<>0); ```
The correct way to rewrite your query is like this: ``` SELECT COUNT(DISTINCT studID) AS 'ID' -- 'total students' makes more sense? FROM all_classes WHERE studID NOT IN( SELECT studID FROM all_classes WHERE absences <> 0 ) ``` However, when you're not actually selecting something using a subquery it's better to use exists: ``` SELECT COUNT(DISTINCT studID) AS 'ID' FROM all_classes a WHERE NOT EXISTS( SELECT * FROM all_classes a1 WHERE a1.studID = a.studID AND absences > 0 -- I assume absences can't be negative ) ```
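Both the `NOT IN` and `NOT EXISTS` formulations can be sanity-checked with a tiny in-memory SQLite database; the sample students below are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE all_classes (studID INTEGER, absences INTEGER)")
conn.executemany("INSERT INTO all_classes VALUES (?, ?)",
                 [(1, 0), (1, 0),   # student 1: no absences anywhere
                  (2, 0), (2, 3),   # student 2: absent in one class
                  (3, 0)])          # student 3: no absences

# Count students for whom no row with absences <> 0 exists.
(count,) = conn.execute("""
    SELECT COUNT(DISTINCT studID)
    FROM all_classes a
    WHERE NOT EXISTS (SELECT 1 FROM all_classes a1
                      WHERE a1.studID = a.studID AND a1.absences <> 0)
""").fetchone()
print(count)
```

Students 1 and 3 qualify, so the query returns 2; student 2 is excluded by the single nonzero row.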
Get a count of distinct rows that does not include a value in any row
[ "", "sql", "sql-server", "" ]
I will try to present the situation as a minimal example: Say we have a table of tickets, defined as follows: ``` CREATE TABLE ticket_id_list ( ticket_id NUMBER, issued_to VARCHAR2(100), CONSTRAINT ticket_id_list_pk PRIMARY KEY (ticket_id) ); ``` The system is used for some time and data is added to the table: ``` INSERT INTO ticket_id_list (ticket_id, issued_to) VALUES (1, 'Arthur'); INSERT INTO ticket_id_list (ticket_id, issued_to) VALUES (2, 'Ford'); ``` Later, the following requirement pops up: > We need to store a reference number to some other thing in the ticket table from this point onwards. The reference needs to be ***non null***. But old records **must** have `NULL` values. (Stupid though this sounds, it is a real requirement.) Now if we do this: ``` ALTER TABLE ticket_id_list ADD ref_no VARCHAR2(6) NOT NULL; ``` the constraint will be violated immediately and we will get: `ORA-01758: table must be empty to add mandatory (NOT NULL) column`. We can of course add a check in the business logic, but that is cumbersome. And for unexplainable reasons, we cannot use a default value. Is there a way to ***add a NOT NULL constraint that is only applicable to new records?***
You could add a check constraint so that values after a certain point cannot be null; something like: ``` alter table ticket_id_list add constraint nul_ref check (ticket_id < 123456 or ref_no is not null); ``` This does presume that ticket\_ids only go up over time; if you happen to have a date field in your table, it may be clearer to use that.
I found a neat trick: [`NOVALIDATE`](http://richardfoote.wordpress.com/2008/07/28/novalidate-constraints-no-really/). Add a nullable column: ``` ALTER TABLE ticket_id_list ADD ref_no VARCHAR2(6); ``` And then add a table constraint, but make sure to specify `NOVALIDATE`: ``` ALTER TABLE ticket_id_list ADD CONSTRAINT new_ref_nonull CHECK(ref_no IS NOT NULL) NOVALIDATE; ``` Now let's check it: ``` INSERT INTO ticket_id_list (ticket_id, issued_to, ref_no) VALUES (3, 'Zaphod', '2A4252'); 1 row inserted INSERT INTO ticket_id_list (ticket_id, issued_to, ref_no) VALUES (4, 'Marvin', NULL); ORA-02290: check constraint (NEW_REF_NONULL) violated ``` Just as expected. ***Edit***: `NOVALIDATE` works only as long as you never update the old rows; if you update an old row, the constraint will fail for it. For this case, though, that is not an issue.
Make NOT NULL constraint apply only to new rows
[ "", "sql", "oracle", "constraints", "" ]
I have a numeric value in Foo.A, and it has its equivalent in Bar but with a string prefix ("Z"). I'm trying to **append the "Z" to the Bar.A col value**. I also tried `CONCAT`, but without any success. The following code returns "Unknown column Z". ``` UPDATE Foo, Bar SET Foo.B = Bar.B WHERE Foo.A = Z + Bar.A ``` For example, 14 (Foo.A) = Z14 (Bar.A).
If your syntax works, then it is likely you are using MySQL. In any case, the problem is that you need quotes around string constants. So try this: ``` UPDATE Foo join Bar on Foo.A = concat('Z', Bar.A) SET Foo.B = Bar.B; ``` You should always use single quotes for string and date constants, regardless of the database. That is the ANSI standard and it reduces the possibility of error.
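A quick way to verify the concatenation idea locally: SQLite spells `CONCAT` as `||` and does not support MySQL's multi-table `UPDATE`, so this sketch uses a correlated subquery instead. Given the example data (14 vs Z14), the prefix is applied to `Foo.A` here so the comparison lines up; the rows are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Foo (A TEXT, B TEXT)")
conn.execute("CREATE TABLE Bar (A TEXT, B TEXT)")
conn.execute("INSERT INTO Foo VALUES ('14', NULL)")
conn.execute("INSERT INTO Bar VALUES ('Z14', 'matched')")

# 'Z' || Foo.A plays the role of CONCAT('Z', Foo.A); the quoted 'Z'
# is a string constant, not a column reference.
conn.execute("""
    UPDATE Foo
    SET B = (SELECT Bar.B FROM Bar WHERE Bar.A = 'Z' || Foo.A)
""")
(b,) = conn.execute("SELECT B FROM Foo").fetchone()
print(b)
```

Without the quotes, `Z` would be parsed as an identifier, which is exactly the "Unknown column Z" error from the question.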
You are missing the single quotes around Z i.e. your code should be: ``` UPDATE Foo, Bar SET Foo.B = Bar.B WHERE Foo.A = CONCAT('Z', Bar.A); ```
SQL CONCAT String + Column in WHERE clause
[ "", "mysql", "sql", "concatenation", "where-clause", "" ]
I have the following two tables: ``` CREATE TABLE [dbo].[ApplicationServers] ( [ServerName] [nchar](10) NOT NULL, [ApplicationName] [nchar](10) NULL ) GO CREATE TABLE [dbo].[Alerts] ( [ServerName] [nchar](10) NULL, [AlertDescrption] [nchar](10) NULL, [AlertStatus] [int] NULL ) GO ``` Sample data: ``` INSERT [dbo].[ApplicationServers] ([ServerName], [ApplicationName]) VALUES (N'Server1 ', N'App1 ') INSERT [dbo].[ApplicationServers] ([ServerName], [ApplicationName]) VALUES (N'Server2 ', N'App1 ') INSERT [dbo].[ApplicationServers] ([ServerName], [ApplicationName]) VALUES (N'Server3 ', N'App1 ') INSERT [dbo].[ApplicationServers] ([ServerName], [ApplicationName]) VALUES (N'Server4 ', N'App2 ') INSERT [dbo].[ApplicationServers] ([ServerName], [ApplicationName]) VALUES (N'Server5 ', N'App2 ') INSERT [dbo].[Alerts] ([ServerName], [AlertDescrption], [AlertStatus]) VALUES (N'Server1 ', NULL, 1) INSERT [dbo].[Alerts] ([ServerName], [AlertDescrption], [AlertStatus]) VALUES (N'Server3 ', NULL, 2) ``` I'm trying to form a query that would show 3 columns. ``` Server Name | Application Name | Server Status ``` Server Status would be calculated by joining on the Alerts table and if that particular ServerName has records in there with only a 1, then it would return "Warning", but if it has records in there with a "2" then it would show "Error". If there are no records at all for that ServerName, it would return "Normal". Could someone help with that?
``` SELECT a.*, CASE ax.Highest WHEN 2 THEN 'Error' WHEN 1 THEN 'Warning' ELSE 'Normal' END FROM ApplicationServers a OUTER APPLY ( SELECT MAX(AlertStatus) AS Highest FROM Alerts b WHERE a.ServerName = b.ServerName ) ax ``` OUTER APPLY involves two tables A and B and is helpful for doing requests on table B for each row of table A. We look for the highest value of any alert. Depending of that value, the status is calculated.
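SQLite has no `OUTER APPLY`, but the same per-server `MAX(AlertStatus)` can be expressed as a `LEFT JOIN` onto an aggregated subquery, which makes the idea easy to test locally. The sketch below uses a trimmed version of the question's schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ApplicationServers (ServerName TEXT, ApplicationName TEXT);
    CREATE TABLE Alerts (ServerName TEXT, AlertStatus INTEGER);
    INSERT INTO ApplicationServers VALUES
        ('Server1','App1'), ('Server2','App1'), ('Server3','App1');
    INSERT INTO Alerts VALUES ('Server1', 1), ('Server3', 2);
""")

# LEFT JOIN keeps servers with no alerts; their Highest is NULL,
# which falls through to the ELSE branch of the CASE.
rows = conn.execute("""
    SELECT s.ServerName,
           CASE m.Highest WHEN 2 THEN 'Error'
                          WHEN 1 THEN 'Warning'
                          ELSE 'Normal' END AS Status
    FROM ApplicationServers s
    LEFT JOIN (SELECT ServerName, MAX(AlertStatus) AS Highest
               FROM Alerts GROUP BY ServerName) m
      ON m.ServerName = s.ServerName
    ORDER BY s.ServerName
""").fetchall()
print(rows)
```

Taking the maximum first means a server with both a warning and an error correctly reports "Error".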
You can also do this by using LEFT OUTER JOIN. [SQL Fiddle here](http://sqlfiddle.com/#!3/a7942/11) ``` SELECT aps.ServerName, aps.ApplicationName, CASE WHEN a.AlertStatus = 1 THEN 'Warning' WHEN a.AlertStatus = 2 THEN 'Error' ELSE 'Normal' END AS AlertStatus FROM ApplicationServers aps LEFT OUTER JOIN Alerts a ON a.ServerName = aps.ServerName ```
Results Dependent on Values in other table
[ "", "sql", "sql-server", "t-sql", "" ]
I have a table of winners vs losers (TABLE1) e.g. ``` +----+--------+-------+ | ID | Winner | Loser | +----+--------+-------+ | 1 | 2 | 3 | | 2 | 1 | 2 | +----+--------+-------+ ``` In the most recent game between Item 1 and Item 2, 1 won (ID 2). For this example, I'll refer to these as Current Winner and Current Loser. I'm trying to build a query that works out inferences from past results. e.g. if 2>3 and 1>2, then I need to record a value for 1>3. The query I'm building would find multiple inferred losers against the current winner. The ideal query would return an array of "losers", which I can loop through and record in the table as inferred results. In this case "3". The table would be updated to: ``` +----+--------+-------+ | ID | Winner | Loser | +----+--------+-------+ | 1 | 1 | 2 | | 2 | 2 | 3 | | 3 | 1 | 3 | +----+--------+-------+ ``` And if the query was run again, it would return nothing. The process I have so far is: * Look up everything the Current Loser has previously beaten (previous losers to the Current Loser) * Check the table to see if any of the previous losers to the Current Loser has ever played the Current Winner. * Any previous loser that has should be removed To get the list of things the Current Loser has beaten I use: ``` select * from TABLE1 where winner = 2 ``` Then for the second bullet point, I've got two nested queries: ``` select * from TABLE1 where winner = 1 and loser = (select loser from rp_poss where winner = 2) select * from TABLE1 where loser = 1 and winner = (select loser from rp_poss where winner = 2) ``` I really can't work out how to put these together to remove the rows I don't want. Can somebody let me know the best and most efficient query for this, for example a nested query or some kind of join? My pea brain is really struggling with this. Thanks in advance
You can do it this way, by explicitly looking for certain records (a match between the two items) and counting to see if there are zero of them. CURRENTLOSER and CURRENTWINNER are placeholders for variables or whatever. ``` select previous.loser from table1 previous where previous.winner=CURRENTLOSER and ( select count(*) from table1 ancient where (ancient.winner=CURRENTWINNER and ancient.loser=previous.loser) or (ancient.loser=CURRENTWINNER and ancient.winner=previous.loser) ) = 0 ``` Aliasing tables ("from table1 ancient") will help get the algorithm clear in your head.
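Here is a runnable sketch of the same inference step using SQLite, with `NOT EXISTS` standing in for the `COUNT(*) = 0` check (equivalent logic). The current winner/loser values 1 and 2 are hard-coded as in the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (id INTEGER PRIMARY KEY, winner INT, loser INT)")
conn.executemany("INSERT INTO results (winner, loser) VALUES (?, ?)",
                 [(2, 3), (1, 2)])   # 2 beat 3, then 1 beat 2

# Everything the current loser (2) has beaten that has never played
# the current winner (1) in either direction.
infer = """
    SELECT previous.loser FROM results previous
    WHERE previous.winner = 2
      AND NOT EXISTS (SELECT 1 FROM results old
                      WHERE (old.winner = 1 AND old.loser = previous.loser)
                         OR (old.loser = 1 AND old.winner = previous.loser))
"""
losers = conn.execute(infer).fetchall()
conn.executemany("INSERT INTO results (winner, loser) VALUES (1, ?)", losers)
again = conn.execute(infer).fetchall()   # inferred result recorded, so empty now
print(losers, again)
```

The first run infers 1>3; after recording it, re-running the query returns nothing, which matches the behaviour the question asks for.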
The query below will insert inferred losers for the most recent match between 1 and 2 the first time it's run. The second time it won't insert any new rows. Initially the `not exists` subquery had `where id < current.id` to remove previous losers, however, since inferred games are inserted with 'future' ids (i.e. 3 in your example), if you ran the query again, it would reinsert the rows, so I changed it to `where id <> current.id`, which means it will also exclude 'future' losers. ``` insert into mytable (winner, loser) select current.winner, previous.loser from (select id, winner, loser from mytable where (winner = 1 and loser = 2) or (winner = 2 and loser = 1) order by id desc limit 1) current join mytable previous on previous.winner = current.loser and previous.id < current.id where not exists (select 1 from mytable where id <> current.id and ((winner = current.winner and loser = previous.loser) or (winner = previous.loser and loser = current.winner))) ```
Excluding results of nested SQL query
[ "", "mysql", "sql", "" ]
I have this query ``` SELECT COUNT(*) from `login_log` where from_unixtime(`date`) >= DATE_SUB(NOW(), INTERVAL 1 WEEK); ``` and the same one with one difference: it's not **1 WEEK** but **1 MONTH**. How can I combine those two and assign them to aliases?
I would do this with conditional aggregation: ``` SELECT SUM(from_unixtime(`date`) >= DATE_SUB(NOW(), INTERVAL 1 WEEK)), SUM(from_unixtime(`date`) >= DATE_SUB(NOW(), INTERVAL 1 MONTH)) FROM `login_log`; ``` MySQL treats boolean values as integers, with `1` being "true" and `0` being "false". So, using `sum()` you can count the number of matching values. (In other databases, you would do something similar using `case`.)
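The `SUM(condition)` trick is easy to demonstrate: comparisons also evaluate to 0/1 in SQLite, so the same pattern works there. In this sketch the epoch values are invented and plain second arithmetic stands in for `DATE_SUB`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE login_log (date INTEGER)")
now = 10_000_000                       # pretend "current" unix time
day = 86_400
conn.executemany("INSERT INTO login_log VALUES (?)",
                 [(now - 3 * day,),    # within the last week (and month)
                  (now - 20 * day,),   # within the last month only
                  (now - 45 * day,)])  # older than both windows

# Each comparison yields 0 or 1, so SUM(condition) counts matching rows
# in a single pass over the table.
week, month = conn.execute("""
    SELECT SUM(date >= ? - 7 * 86400),
           SUM(date >= ? - 30 * 86400)
    FROM login_log
""", (now, now)).fetchone()
print(week, month)
```

One scan produces both counts, which is the whole point of conditional aggregation over running two separate queries.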
Even though it's pretty tough to understand what you ask: If you want them in the same column use `OR` ``` SELECT COUNT(*) from 'login_log' where from_unixtime('date') >= DATE_SUB(NOW(), INTERVAL 1 WEEK) OR from_unixtime('date') >= DATE_SUB(NOW(), INTERVAL 1 MONTH) ; ``` If you don't want duplicate answers: use `GROUP BY`
Combine 2 MySQL queries
[ "", "mysql", "sql", "" ]
I have 3 tables, for example: **Parent Table: TEST\_SUMMARY** **Child Tables: TEST\_DETAIL, TEST\_DETAIL2** ![Image Attached](https://i.stack.imgur.com/sEK2P.png) I have the data shown in the image, and want the output result shown in the image. I tried the 2 queries below, but they are not giving the expected output. ``` SELECT s.NAME, sum(s.AMT), sum(d.d_amt), sum(d2.d2_amt) FROM TEST_SUMMARY s LEFT OUTER JOIN TEST_DETAIL d ON s.ID = d.SUMMARY_ID LEFT OUTER JOIN TEST_DETAIL2 d2 ON s.ID =d2.SUMMARY_ID GROUP BY s.NAME ORDER BY s.NAME; select rs1.*,rs2.total1,rs3.total2 FROM (select id, name,amt from TEST_SUMMARY a) RS1, (select SUMMARY_ID, sum(d_amt) over(partition by summary_id ) total1 from TEST_DETAIL a) RS2, (select SUMMARY_ID, sum(d2_amt) over(partition by summary_id ) total2 from TEST_DETAIL2 a) RS3 where rs1.id(+)= RS2.SUMMARY_ID and rs1.id(+)= RS3.SUMMARY_ID; ``` Create-table and insert-data test queries: ``` CREATE TABLE TEST_SUMMARY(ID NUMBER, NAME VARCHAR2(20 BYTE),AMT NUMBER(10,2)); CREATE TABLE TEST_DETAIL (ID NUMBER, SUMMARY_ID NUMBER, NAME VARCHAR(20), D_AMT NUMBER(10,2)); CREATE TABLE TEST_DETAIL2 (ID NUMBER, SUMMARY_ID NUMBER, NAME VARCHAR(20), D2_AMT NUMBER(10,2)); INSERT INTO TEST_SUMMARY VALUES (1, 'NAME1', 100); INSERT INTO TEST_SUMMARY VALUES (4, 'NAME1', 150); INSERT INTO TEST_SUMMARY VALUES (6, 'NAME1', 50); INSERT INTO TEST_SUMMARY VALUES (2, 'NAME2', 200); INSERT INTO TEST_SUMMARY VALUES (3, 'NAME3', 300); INSERT INTO TEST_DETAIL VALUES (1, 1, 'NAME11', 11); INSERT INTO TEST_DETAIL VALUES (2, 1, 'NAME12', 12); INSERT INTO TEST_DETAIL2 VALUES (1, 1, 'NAME_2_11', 1); INSERT INTO TEST_DETAIL2 VALUES (2, 1, 'NAME_2_12', 1); ```
One way to solve it for both MySQL and Oracle is to use subqueries to help solve the duplication for you by aggregating the sums from the details tables by name, so you can summarise with a normal join; ``` SELECT ts.name, SUM(ts.amt) amt1, MAX(td1.amt) amt2, MAX(td2.amt) amt3 FROM TEST_SUMMARY ts LEFT JOIN ( SELECT ts.name, SUM(td.d_amt) amt FROM TEST_DETAIL td JOIN TEST_SUMMARY ts ON td.summary_id = ts.id GROUP BY ts.name) td1 ON ts.name = td1.name LEFT JOIN ( SELECT ts.name, SUM(td.d2_amt) amt FROM TEST_DETAIL2 td JOIN TEST_SUMMARY ts ON td.summary_id = ts.id GROUP BY ts.name) td2 ON ts.name = td2.name GROUP BY ts.name ORDER BY ts.name ``` [A MySQL SQLfiddle](http://sqlfiddle.com/#!9/a0834/1) and [an Oracle SQLfiddle](http://sqlfiddle.com/#!4/cdd49/2) to test with.
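To see why the pre-aggregation matters, here is the accepted query reduced to SQLite with a cut-down version of the question's data (only the columns the query touches). A naive double LEFT JOIN would multiply detail rows against each other before summing; aggregating each detail table by name first avoids that:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE TEST_SUMMARY (ID INT, NAME TEXT, AMT REAL);
    CREATE TABLE TEST_DETAIL (SUMMARY_ID INT, D_AMT REAL);
    CREATE TABLE TEST_DETAIL2 (SUMMARY_ID INT, D2_AMT REAL);
    INSERT INTO TEST_SUMMARY VALUES (1, 'NAME1', 100), (4, 'NAME1', 150);
    INSERT INTO TEST_DETAIL VALUES (1, 11), (1, 12);
    INSERT INTO TEST_DETAIL2 VALUES (1, 1), (1, 1);
""")

# Each detail sum is computed per name in its own subquery, so joining
# them back cannot inflate the totals.
rows = conn.execute("""
    SELECT ts.NAME, SUM(ts.AMT), MAX(td1.amt), MAX(td2.amt)
    FROM TEST_SUMMARY ts
    LEFT JOIN (SELECT s.NAME, SUM(d.D_AMT) AS amt
               FROM TEST_DETAIL d JOIN TEST_SUMMARY s ON d.SUMMARY_ID = s.ID
               GROUP BY s.NAME) td1 ON ts.NAME = td1.NAME
    LEFT JOIN (SELECT s.NAME, SUM(d.D2_AMT) AS amt
               FROM TEST_DETAIL2 d JOIN TEST_SUMMARY s ON d.SUMMARY_ID = s.ID
               GROUP BY s.NAME) td2 ON ts.NAME = td2.NAME
    GROUP BY ts.NAME
""").fetchall()
print(rows)
```

The totals come out as 250, 23 and 2, matching a by-hand sum of the sample rows.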
You could try this: ``` SELECT TEST_SUMMARY.NAME, TEST_SUMMARY.AMT AS AMT1, ( SELECT SUM(TEST_DETAIL.D_AMT) FROM TEST_DETAIL WHERE TEST_DETAIL.SUMMARY_ID=TEST_SUMMARY.ID ) AS AMT2, ( SELECT SUM(TEST_DETAIL2.D2_AMT) FROM TEST_DETAIL2 WHERE TEST_DETAIL2.SUMMARY_ID=TEST_SUMMARY.ID ) AS AMT3 FROM TEST_SUMMARY ``` **Update** You could basically do this if you have many names that are the same. But the question becomes what to do with the other fields (AMT1, AMT2): should you sum them for the same name, or is a max enough? It depends on what your requirements are: ``` SELECT TEST_SUMMARY.NAME, SUM(TEST_SUMMARY.AMT) AS AMT, SUM(tblAMT2.AMT2) AS AMT2, SUM(tblAMT3.AMT3) AS AMT3 FROM TEST_SUMMARY LEFT JOIN ( SELECT SUM(TEST_DETAIL.D_AMT) AS AMT2, TEST_DETAIL.SUMMARY_ID FROM TEST_DETAIL GROUP BY TEST_DETAIL.SUMMARY_ID ) AS tblAMT2 ON TEST_SUMMARY.ID=tblAMT2.SUMMARY_ID LEFT JOIN ( SELECT SUM(TEST_DETAIL2.D2_AMT) AS AMT3, TEST_DETAIL2.SUMMARY_ID FROM TEST_DETAIL2 GROUP BY TEST_DETAIL2.SUMMARY_ID ) AS tblAMT3 ON TEST_SUMMARY.ID=tblAMT3.SUMMARY_ID GROUP BY TEST_SUMMARY.NAME ```
Select query with left outer join and sum with group by
[ "", "mysql", "sql", "oracle", "left-join", "" ]
Being new to phpMyAdmin, I want to create a relation between a column (`column_1`) of one of my tables (`Table_1`) and a column (`column_2`) in another table (`Table_2`). I first index `column_1` in `Table_1`, and when I create the relation in the relation view tab of `Table_1`, in front of `column_1` I first choose the database from the first drop-down menu and `Table_1` in the second drop-down menu, but the third drop-down menu (which should seemingly list `column_2`) has no select options. My phpMyAdmin version is 4.1.6. Below is the result of `SHOW CREATE TABLE` for each table: Table\_1: ``` CREATE TABLE `Table_1` ( `column_1` varchar(50) COLLATE utf8_unicode_ci NOT NULL, PRIMARY KEY (`column_1`), KEY `FK_COLUMN1` (`column_1`)) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci ``` Table\_2: ``` CREATE TABLE `Table_2` ( `column_2` varchar(50) COLLATE utf8_unicode_ci NOT NULL, PRIMARY KEY (`column_2`), KEY `FK1_COLUMN2` (`column_2`)) ENGINE=InnoDB AUTO_INCREMENT=5 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci ```
I don't know what's wrong with PMA. The following query should create the reference. If it fails, the error message will explain more ;) ``` ALTER TABLE `this_table` ADD CONSTRAINT `name_of_the_constraint` FOREIGN KEY (`this_column`) REFERENCES `external_table` (`external_column`) ON DELETE CASCADE ON UPDATE CASCADE; ``` `ON DELETE CASCADE` means that if the row in `external_table` is deleted, the row in `this_table` will also be removed. Other valid options are `RESTRICT` and `SET NULL`. The same applies for `UPDATE` (the row in `this_table` will also be updated). **Edit:** To answer your comments: I've run the following SQL on a new empty database/schema and it all runs perfectly fine without errors. If it doesn't on your machine, I don't know what went wrong (I can't reproduce it). If the following also runs on yours, what's different from what you were trying before? ``` CREATE TABLE `Table_1` ( `column_1` varchar(50) COLLATE utf8_unicode_ci NOT NULL, PRIMARY KEY (`column_1`), KEY `FK_COLUMN1` (`column_1`)) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci; CREATE TABLE `Table_2` ( `column_2` varchar(50) COLLATE utf8_unicode_ci NOT NULL, PRIMARY KEY (`column_2`), KEY `FK1_COLUMN2` (`column_2`)) ENGINE=InnoDB AUTO_INCREMENT=5 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci; INSERT INTO `Table_1` (`column_1`) VALUES ( '1' ), ( '2' ); /* Add some both valid as invalid relations */ INSERT INTO `Table_2` (`column_2`) VALUES ( '1' ), ( '3' ); /* Add some both valid as invalid relations */ SET foreign_key_checks = 0; ALTER TABLE Table_1 ADD FOREIGN KEY (column_1) REFERENCES Table_2(column_2) ON DELETE RESTRICT ON UPDATE RESTRICT; ALTER TABLE Table_2 ADD FOREIGN KEY (column_2) REFERENCES Table_1(column_1) ON DELETE RESTRICT ON UPDATE RESTRICT; ``` NB. Normally you only create the foreign key in one direction. I've added both to show that either direction should work in this setup.
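For a quick, database-agnostic feel for what the constraint buys you, here is a SQLite sketch (note that SQLite only enforces foreign keys when the pragma is switched on; table and column names mirror the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.executescript("""
    CREATE TABLE Table_1 (column_1 TEXT PRIMARY KEY);
    CREATE TABLE Table_2 (column_2 TEXT PRIMARY KEY
                          REFERENCES Table_1(column_1) ON DELETE CASCADE);
    INSERT INTO Table_1 VALUES ('1');
    INSERT INTO Table_2 VALUES ('1');
""")

# Inserting a child value with no matching parent violates the constraint.
try:
    conn.execute("INSERT INTO Table_2 VALUES ('99')")
    violated = False
except sqlite3.IntegrityError:
    violated = True

# Deleting the parent cascades to the child row.
conn.execute("DELETE FROM Table_1 WHERE column_1 = '1'")
remaining = conn.execute("SELECT * FROM Table_2").fetchall()
print(violated, remaining)
```

The orphan insert fails and the cascade empties the child table, which is exactly the behaviour the `ON DELETE CASCADE` clause above declares.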
To clarify: Table\_1 has an index column1 Table\_2 has a column2 that you want to use as a foreign key into Table\_1 I think you need to make column1 a Primary Key for Table\_1 in order for this to work. Once you do that, the third drop-down in relationship view for Table\_2 will have column\_1 as a value that you can select. Hope this helps.
Creating a relationship in phpMyAdmin
[ "", "mysql", "sql", "phpmyadmin", "" ]
We have a table that contains all of our products. In it is a column called UNITMEASURE. This column is populated with values such as "EA, PR, PK, YD, etc". Occasionally a 1 will get populated into this column. I would like to run an update statement to change the 1's to whatever the other values are. For example: ``` PRODUCT UNITMEASURE 1234 1 1234 1 1234 PR 1234 1 1234 1 4321 1 4321 1 4321 EA 4321 1 4321 1 ``` I want to be able to update the UNITMEASURE for all lines of PRODUCT 1234 to PR and PRODUCT 4321 to EA (from 1) with one query, but am completely stumped. There are over 8,000 products that this could happen to, so I would want to look up the UNITMEASURE <> '1' value and update from that for each product.
Try this way: ``` update producttable set UNITMEASURE = case WHEN PRODUCT = 1234 THEN 'PR' WHEN PRODUCT = 4321 THEN 'EA' ELSE UNITMEASURE END ``` For the updated question: ``` update PT set PT.UNITMEASURE = PT2.UNITMEASURE from producttable PT join (select UNITMEASURE, PRODUCT from producttable where UNITMEASURE <> 1) PT2 on PT.PRODUCT = PT2.PRODUCT where PT.UNITMEASURE = 1 ```
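An alternative phrasing of the same generic update that needs no hard-coded product list is a correlated subquery that copies each product's non-'1' unit onto its '1' rows. A hypothetical SQLite sketch (it assumes every product has exactly one distinct non-'1' unit):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (PRODUCT INT, UNITMEASURE TEXT)")
conn.executemany("INSERT INTO products VALUES (?, ?)",
                 [(1234, '1'), (1234, 'PR'), (4321, '1'), (4321, 'EA')])

# For every '1' row, look up the same product's real unit of measure.
conn.execute("""
    UPDATE products SET UNITMEASURE =
        (SELECT p2.UNITMEASURE FROM products p2
         WHERE p2.PRODUCT = products.PRODUCT AND p2.UNITMEASURE <> '1')
    WHERE UNITMEASURE = '1'
""")
rows = conn.execute(
    "SELECT DISTINCT PRODUCT, UNITMEASURE FROM products ORDER BY PRODUCT"
).fetchall()
print(rows)
```

After the update, every row for a product carries that product's real unit, which is the behaviour the question describes for its 8,000 products.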
Try this way: ``` update yourtable set UNITMEASURE = (case when PRODUCT=1234 then 'PR' when PRODUCT=4321 then 'EA' else UNITMEASURE end) where UNITMEASURE <> 1 ```
SQL Update Multiple Rows value based on same row
[ "", "sql", "sql-server", "sql-update", "" ]
Let me first show my code: ``` SELECT REPLACE(SUBSTRING(CreateDate,3,8),'/','') as registerdate FROM TatEstelam.dbo.tblInquiryRealForeigners RL LEFT JOIN TatEstelam.dbo.CustomerInfo CINFO ON RL.IdentificationDocumentNumber = CINFO.NationalID WHERE -- NationalID <> '0001' IdentificationDocumentNumber <> '' AND registerdate BETWEEN '910404' AND '950505' ``` My problem is that I can't use the alias as a column. I mean, in the last line, I want to reference the alias but I can't. For example, I want to use this code for the date filter: ``` AND registerdate BETWEEN '910404' AND '950505' ```
As you said, you can't use an alias in the WHERE clause; that's true. You should use the whole expression instead, as below. ``` SELECT REPLACE(SUBSTRING(CreateDate,3,8),'/','') as registerdate FROM TatEstelam.dbo.tblInquiryRealForeigners RL LEFT JOIN TatEstelam.dbo.CustomerInfo CINFO ON RL.IdentificationDocumentNumber = CINFO.NationalID WHERE -- NationalID <> '0001' IdentificationDocumentNumber <> '' AND REPLACE(SUBSTRING(CreateDate,3,8),'/','') BETWEEN '910404' AND '950505' ```
``` SELECT * FROM ( SELECT RL.IdentificationDocumentNumber, REPLACE(SUBSTRING(CreateDate,3,8),'/','') as registerdate FROM TatEstelam.dbo.tblInquiryRealForeigners RL LEFT JOIN TatEstelam.dbo.CustomerInfo CINFO ON RL.IdentificationDocumentNumber = CINFO.NationalID ) AS T WHERE -- NationalID <> '0001' IdentificationDocumentNumber <> '' AND registerdate BETWEEN '910404' AND '950505' ```
using alias name in sql server 2012
[ "", "sql", "sql-server-2012", "" ]
I'm in the process of creating a temporary procedure in SQL because I have a table value that is written in markdown, so that it appears as rendered HTML in the web browser *(markdown to HTML conversion)*. The string in the column currently looks like this: ``` Questions about **general computing hardware and software** are off-topic for Stack Overflow unless they directly involve tools used primarily for programming. You may be able to get help on [Super User](http://superuser.com/about) ``` I'm currently working with bold and italic text. This means *(in the case of bold text)* I need to replace odd-numbered occurrences of the pattern `**` with `<b>` and even-numbered occurrences with `</b>`. I saw [replace()](http://msdn.microsoft.com/fr-fr/library/ms186862.aspx) but it performs the replacement on all occurrences in the string. So how can I replace a sub-string only at odd or only at even occurrences? ***Update:*** Some people wondered what schema I'm using, so just take a look [here](https://data.stackexchange.com/stackoverflow/query/193616/highest-flag-close-votes-per-questions-with-more-control-than-in-the-flag-queue?Limit=38369&opt.withExecutionPlan=true). ***One more extra if you want:*** converting a markdown-style hyperlink to an HTML hyperlink doesn't look so simple.
Using the `STUFF` function and a simple `WHILE` loop: ``` CREATE FUNCTION dbo.fn_OddEvenReplace(@text nvarchar(500), @textToReplace nvarchar(10), @oddText nvarchar(10), @evenText nvarchar(500)) RETURNS varchar(max) AS BEGIN DECLARE @counter tinyint SET @counter = 1 DECLARE @switchText nvarchar(10) WHILE CHARINDEX(@textToReplace, @text, 1) > 0 BEGIN SELECT @text = STUFF(@text, CHARINDEX(@textToReplace, @text, 1), LEN(@textToReplace), IIF(@counter%2=0,@evenText,@oddText)), @counter = @counter + 1 END RETURN @text END ``` And you can use it like this: ``` SELECT dbo.fn_OddEvenReplace(column, '**', '<b>', '</b>') FROM table ``` UPDATE: This is re-written as an SP: ``` CREATE PROC dbo.##sp_OddEvenReplace @text nvarchar(500), @textToReplace nvarchar(10), @oddText nvarchar(10), @evenText nvarchar(10), @returnText nvarchar(500) output AS BEGIN DECLARE @counter tinyint SET @counter = 1 DECLARE @switchText nvarchar(10) WHILE CHARINDEX(@textToReplace, @text, 1) > 0 BEGIN SELECT @text = STUFF(@text, CHARINDEX(@textToReplace, @text, 1), LEN(@textToReplace), IIF(@counter%2=0,@evenText,@oddText)), @counter = @counter + 1 END SET @returnText = @text END GO ``` And to execute: ``` DECLARE @returnText nvarchar(500) EXEC dbo.##sp_OddEvenReplace '**a** **b** **c**', '**', '<b>', '</b>', @returnText output SELECT @returnText ```
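Outside the database, the same odd/even substitution is a few lines of application code. This hypothetical Python helper mirrors the stored procedure's counter logic: each occurrence bumps a counter, and the parity of the counter picks the replacement text:

```python
import re

def odd_even_replace(text, token, odd, even):
    """Replace the 1st, 3rd, ... occurrence of token with `odd`
    and the 2nd, 4th, ... occurrence with `even`."""
    count = 0
    def sub(match):
        nonlocal count
        count += 1
        return odd if count % 2 else even
    return re.sub(re.escape(token), sub, text)

print(odd_even_replace("**a** and **b**", "**", "<b>", "</b>"))
```

`re.escape` is what lets a regex-special token like `**` be matched literally, the same role the exact-match `CHARINDEX` plays in the T-SQL version.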
As per OP's request I have modified my earlier answer to perform as a temporary stored procedure. I have left my earlier answer as I believe the usage against a table of strings to be useful also. If a Tally (or Numbers) table is known to already exist with at least 8000 values, then the marked section of the CTE can be omitted and the CTE reference *tally* replaced with the name of the existing Tally table. ``` create procedure #HtmlTagExpander( @InString varchar(8000) ,@OutString varchar(8000) output )as begin declare @Delimiter char(2) = '**'; create table #t( StartLocation int not null ,EndLocation int not null ,constraint PK unique clustered (StartLocation desc) ); with -- vvv Only needed in absence of Tally table vvv E1(N) as ( select 1 from (values (1),(1),(1),(1),(1), (1),(1),(1),(1),(1) ) E1(N) ), --10E+1 or 10 rows E2(N) as (select 1 from E1 a cross join E1 b), --10E+2 or 100 rows E4(N) As (select 1 from E2 a cross join E2 b), --10E+4 or 10,000 rows max tally(N) as (select row_number() over (order by (select null)) from E4), -- ^^^ Only needed in absence of Tally table ^^^ Delimiter as ( select len(@Delimiter) as Length, len(@Delimiter)-1 as Offset ), cteTally(N) AS ( select top (isnull(datalength(@InString),0)) row_number() over (order by (select null)) from tally ), cteStart(N1) AS ( select t.N from cteTally t cross join Delimiter where substring(@InString, t.N, Delimiter.Length) = @Delimiter ), cteValues as ( select TagNumber = row_number() over(order by N1) ,Location = N1 from cteStart ), HtmlTagSpotter as ( select TagNumber ,Location from cteValues ), tags as ( select Location = f.Location ,IsOpen = cast((TagNumber % 2) as bit) ,Occurrence = TagNumber from HtmlTagSpotter f ) insert #t(StartLocation,EndLocation) select prev.Location ,data.Location from tags data join tags prev on prev.Occurrence = data.Occurrence - 1 and prev.IsOpen = 1; set @outString = @Instring; update this set @outString = stuff(stuff(@outString,this.EndLocation,2,'</b>'),this.StartLocation,2,'<b>') from #t this with (tablockx) option (maxdop 1); end go ``` Invoked like this: ``` declare @InString varchar(8000) ,@OutString varchar(8000); set @inString = 'Questions about **general computing hardware and software** are off-topic **for Stack Overflow.'; exec #HtmlTagExpander @InString,@OutString out; select @OutString; set @inString = 'Questions **about** general computing hardware and software **are off-topic** for Stack Overflow.'; exec #HtmlTagExpander @InString,@OutString out; select @OutString; go drop procedure #HtmlTagExpander; go ``` It yields as output: ``` Questions about <b>general computing hardware and software</b> are off-topic **for Stack Overflow. Questions <b>about</b> general computing hardware and software <b>are off-topic</b> for Stack Overflow. ```
How can I replace odd patterns inside a string?
[ "", "sql", "sql-server", "sql-server-2014", "dataexplorer", "" ]
I have this little query that is working fine: ``` SELECT * FROM Components AS T1 WHERE Cost=null or Cost='' Order by Code ``` Now I also need to grab the field "description" from other tables where code=code ``` SELECT * FROM Components AS T1 WHERE Cost=null or Cost='' LEFT JOIN on Table_321 AS T2 where T1.Code=T2.Code Order by Code ``` But it gives me a syntax error around "LEFT", which I have not been able to solve, and I am not sure if such a JOIN is the correct way to get it. Some help indicating how to solve the problem would be really appreciated. Moreover, I also have another table, "Table\_621", from which I need to take the description. How can I add this second table to the query?
``` SELECT * FROM Components T1 LEFT JOIN Table_321 T2 ON T1.Code=T2.Code LEFT JOIN Table3 T3 ON T3.Code = T1.Code WHERE T1.Cost=null or T1.Cost='' ORDER BY T1.Code ```
``` SELECT * FROM Components AS T1 LEFT JOIN Table_321 AS T2 ON T1.Code=T2.Code WHERE Cost=null or Cost='' Order by Code ```
Query with JOIN on multiple tables
[ "", "sql", "" ]
I have a query that regularly returns "nothing", and I would like to run a different query when this is the case, but I don't know of a way to do this. If anyone could help, please. Here is the current code I am using: ``` SELECT * FROM cfg_users JOIN cfg_ash ON cfg_users.iUserId = cfg_ash.iUserid WHERE iTeamId='0' AND sDisabled IS NULL AND iStatusId > 0 AND sDate = '2014-08-01' GROUP BY cfg_users.iUserId ORDER BY iStatusId, sName ``` I basically want to say: ``` IF <my code> IS NULL THEN <do other code>, IF <my code> IS NOT NULL THEN return the result. ``` Thanks
A way you can do it is like this * set two variables equal to the queries you want to execute. * set another variable equal to the correct query when the first is not null. * execute that query with a stored procedure. **STORED PROCEDURE:** ``` DELIMITER $$ CREATE PROCEDURE `dynamic_query`(in input varchar(255)) BEGIN SET @a := input; PREPARE stmt FROM @a; EXECUTE stmt; DEALLOCATE PREPARE stmt; END $$ DELIMITER ; ``` **THE TWO SELECTS YOU WANT TO EXECUTE:** ``` SET @A := "SELECT * FROM cfg_users JOIN cfg_ash ON cfg_users.iUserId = cfg_ash.iUserid WHERE iTeamId='0' AND sDisabled IS NULL AND iStatusId > 0 AND sDate = '2014-08-01' GROUP BY cfg_users.iUserId ORDER BY iStatusId, sName"; SET @B := "your other select here"; ``` **THE DEFINER TO GET THE CORRECT QUERY:** ``` SET @C := ( SELECT CASE WHEN EXISTS ( SELECT * FROM cfg_users JOIN cfg_ash ON cfg_users.iUserId = cfg_ash.iUserid WHERE iTeamId='0' AND sDisabled IS NULL AND iStatusId > 0 AND sDate = '2014-08-01' GROUP BY cfg_users.iUserId ORDER BY iStatusId, sName ) THEN @A ELSE @B END ); ``` **EXECUTE THE STATEMENT:** ``` CALL dynamic_query(@C); ``` * [DEMO WHEN THE QUERY EXISTS](http://sqlfiddle.com/#!2/c586c/2) * [DEMO WHEN THE QUERY DOESN'T EXIST](http://sqlfiddle.com/#!2/c586c/3)
There is a simple way using only SQL. Define your first query as a CTE; with UNION ALL, filter the second query by the CTE's row count. ``` with temp as (select * from t1 where 1=0) select * from temp union all select * from t2 where (select count(*) from temp) =0 ``` This query will return the second table's records. ``` with temp as (select * from t1 ) select * from temp union all select * from t2 where (select count(*) from temp) =0 ``` And if the temp query has results, only the temp query's rows are returned. You can test with [sql fiddle here](http://www.sqlfiddle.com/#!6/4dc8c/5).
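The CTE-with-fallback pattern also works in SQLite, which makes it easy to check both branches (empty and non-empty primary query); the tables and rows below are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t1 (v TEXT);
    CREATE TABLE t2 (v TEXT);
    INSERT INTO t2 VALUES ('fallback');
""")

# The UNION ALL branch only contributes rows when the CTE is empty.
query = """
    WITH temp AS (SELECT v FROM t1)
    SELECT v FROM temp
    UNION ALL
    SELECT v FROM t2 WHERE (SELECT COUNT(*) FROM temp) = 0
"""
empty_case = conn.execute(query).fetchall()     # t1 empty -> t2 rows
conn.execute("INSERT INTO t1 VALUES ('primary')")
filled_case = conn.execute(query).fetchall()    # t1 populated -> t1 rows only
print(empty_case, filled_case)
```

A single statement thus returns either the primary result or the fallback, with no stored procedure or application-side branching needed.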
SQL IF SELECT query is null then do another query
[ "", "mysql", "sql", "select", "null", "" ]
I am tasked with keeping several tables updated with information brought in from an external source. To this end, I've been hunting online for ways to pass table names as parameters, and all the answers are complex and/or throw errors, such as the attempt below, which fails with "Incorrect syntax near 'Table'":

```
CREATE PROCEDURE sp_Insert_Delta
    -- Add the parameters for the stored procedure here
    @tableName Table READONLY
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    -- Delete rows in MIRROR database where ID exists in the DELTA database
    Delete from [S1].[MIRROR].[dbo].@tableName
    Where [ID] in (Select [ID] from [S2].[DELTAS].[dbo].@tableName)

    -- Insert all deltas
    Insert Into [S1].[MIRROR].[dbo].@tableName
    Select * from [S2].[DELTAS].[dbo].@tableName
END
GO
```

This script works just fine when the table is named explicitly, so how can I parameterize the table name?

Thank you, Nate
The short answer is that you can't parameterize the table name. The longer answer is that you *can* accomplish what you want via *dynamic SQL*. It looks like you're using SQL Server. See the question *[Dynamic SQL - EXEC(@SQL) versus EXEC SP\_EXECUTESQL(@SQL)](https://stackoverflow.com/questions/548090/dynamic-sql-execsql-versus-exec-sp-executesqlsql)* for details. One should note, however, that a need to drop arbitrary table names into a query is a code smell, indicating to me that you've got architectural problems with your database design and probably with your E-R model.
Use dynamic SQL. As an example:

```
DECLARE @sql as varchar(4000)
SET @sql = 'Delete from [S1].[MIRROR].[dbo].' + @tableName
         + ' Where [ID] in (Select [ID] from [S2].[DELTAS].[dbo].' + @tableName + ')'
EXEC(@sql)
```

Note that concatenating a table name into a SQL string is open to SQL injection, so validate `@tableName` against a known list of tables first.
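Since table names can't be bound as parameters by client drivers either, the usual client-side compromise is the same: validate the name against a whitelist before formatting it into the statement. Here's a hypothetical sketch in Python with SQLite; the table names and the `_delta` suffix are invented for illustration.

```python
import sqlite3

ALLOWED_TABLES = {"mirror_orders", "mirror_customers"}  # hypothetical names

def delete_then_copy(conn, table):
    # Table names cannot be bound as parameters, so validate against a
    # whitelist before formatting them into the SQL text.
    if table not in ALLOWED_TABLES:
        raise ValueError(f"unknown table: {table}")
    conn.execute(f"DELETE FROM {table} WHERE id IN (SELECT id FROM {table}_delta)")
    conn.execute(f"INSERT INTO {table} SELECT * FROM {table}_delta")

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE mirror_orders (id INTEGER, qty INTEGER);
    CREATE TABLE mirror_orders_delta (id INTEGER, qty INTEGER);
    INSERT INTO mirror_orders VALUES (1, 10), (2, 20);
    INSERT INTO mirror_orders_delta VALUES (2, 99);
""")
delete_then_copy(conn, "mirror_orders")
print(conn.execute("SELECT * FROM mirror_orders ORDER BY id").fetchall())
# [(1, 10), (2, 99)]
```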
Passing a Table name as a parameter
[ "", "sql", "sql-server", "stored-procedures", "" ]
Does `select * into B from A` also copy constraints of A on B ? If not, then how can I copy constraints?
No, not in SQL-Server. You would need to specify the constraints and indexes on the new table manually. If you're using SSMS, using the `Script As... Create To` functionality can get you a sample script to create a new table with the same constraints and structure. <https://learn.microsoft.com/en-us/sql/t-sql/queries/select-into-clause-transact-sql?view=sql-server-ver15> > Indexes, constraints, and triggers defined in the source table are not transferred to the new table, nor can they be specified in the SELECT...INTO statement. If these objects are required, you can create them after executing the SELECT...INTO statement.
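SQLite's `CREATE TABLE ... AS SELECT` behaves the same way, which makes the point quick to demonstrate. A minimal sketch (table names invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
# CREATE TABLE ... AS SELECT copies column names and data, but no constraints.
conn.execute("CREATE TABLE b AS SELECT * FROM a")

ddl = dict(conn.execute("SELECT name, sql FROM sqlite_master WHERE type = 'table'"))
print(ddl["a"])  # keeps PRIMARY KEY and NOT NULL
print(ddl["b"])  # a bare column list, no constraints
```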
You cannot directly copy the constraints from one table to another table; you first have to copy the table structure with its indexes and constraints. To do this, follow the instructions below:

1. In SSMS, right click on the table and script the CREATE statement.
2. Change the name in the generated script to `NewTable`, run it, then `insert into NewTable select * from OldTable` -- note that this may be slow if the old table is big enough.
Copying table constraints from select * into
[ "", "sql", "sql-server", "select", "constraints", "select-into", "" ]
Good day. I am working on an existing SQL Server database where the developers kept the date and time separate. The date is in `DATETIME` format (what I want) but the time is stored incorrectly: if it is 14:30 it shows as 1430, and when it's 09:25 it shows as 925. I am trying to combine the date and time into a single date-time value in a view for a program I am writing on top of this database. I have formatted the time like this:

```
CASE WHEN LEN(T0.BeginTime) = 3
     THEN '0' + LEFT(T0.BeginTime, 1) + ':' + RIGHT(T0.BeginTime, 2)
     ELSE LEFT(T0.BeginTime, 2) + ':' + RIGHT(T0.BeginTime, 2)
END AS 'NEW Start Time'
```

The time now looks like it's in the correct format, but when I combine the date and time I get a `VARCHAR` to `DATETIME` error. How can I fix this? This is the error:

> The conversion of a varchar data type to a datetime data type resulted in an out-of-range value (ONLY RAN 804 RECORDS)

Thanks
A different approach is to convert the time column in minutes and add it to the date ``` DATEADD(minute, T0.BeginTime / 100 * 60 + T0.BeginTime % 100 , CONVERT(VARCHAR, T0.BeginDate, 112)) ``` with that the length of the time column doesn't matter
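The same divmod arithmetic works on the client side too. A hypothetical Python sketch (the function name is invented):

```python
from datetime import date, datetime, timedelta

def combine(day, begin_time):
    # begin_time is an integer like 1430 or 925: split it into hours and
    # minutes with divmod instead of fiddling with string lengths.
    hours, minutes = divmod(begin_time, 100)
    return datetime.combine(day, datetime.min.time()) + timedelta(hours=hours, minutes=minutes)

print(combine(date(2014, 8, 4), 1430))  # 2014-08-04 14:30:00
print(combine(date(2014, 8, 4), 925))   # 2014-08-04 09:25:00
```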
This should do the trick, Hope it helps. ``` DECLARE @DateTime TABLE ( DateWithTime DATE, BeginTime INT); INSERT INTO @DateTime VALUES ('2014-08-04', '1525'), ('2014-08-04', '525'), ('2014-08-04', '15'), ('2014-08-04', '5'), ('2014-08-04', '0'), ('2014-08-04', '90') ;WITH cte_BeginTimeFix AS ( SELECT CONVERT(VARCHAR(10), DateWithTime, 120) AS DateWithTime, RIGHT('0000' + CAST(BeginTime AS VARCHAR(4)), 4) AS BeginTime FROM @DateTime ) , cte_DateString AS ( SELECT DateWithTime, BeginTime, DateWithTime + ' ' + STUFF(STUFF('00:00:00.000', 4, 2, RIGHT(BeginTime, 2)), 1, 2, LEFT(BeginTime, 2)) AS DateTimeStr FROM cte_BeginTimeFix ) SELECT DateWithTime, BeginTime, CASE WHEN ISDATE(DateTimeStr) = 1 THEN CAST(DateTimeStr AS DATETIME) ELSE NULL END AS DateTimeStr FROM cte_DateString ```
CAST Correct VARCHAR to DateTime
[ "", "sql", "sql-server", "date", "datetime", "" ]
I have the query below:

```
select Count(*), FUNCTIONAL_CLASS
from RDF_NAV_LINK
where (LINK_ID NOT IN (Select DEST_LINK_ID from RDF_SIGN_DESTINATION)
   AND LINK_ID NOT IN (Select ORIGINATING_LINK_ID from RDF_SIGN_ORIGIN))
GROUP BY FUNCTIONAL_CLASS
```

I'd like the query output to look like this:

```
Count(*)      FUNCTIONAL_CLASS
152/252       1
2563/2655     2
251/5485      3
451/562       4
542/562       5
```

Here 152 is the number of records which satisfy the where condition and 252 is the total number of records which satisfy FUNCTIONAL\_CLASS = 1. Is that possible?
With the WHERE clause, you have thrown away all information about the ignored rows. To get the total count of rows, you have to count them separately: ``` SELECT count(*) || '/' || (SELECT count(*) FROM RDF_NAV_LINK AS link2 WHERE link2.FUNCTIONAL_CLASS = link.FUNCTIONAL_CLASS), FUNCTIONAL_CLASS FROM RDF_NAV_LINK AS link WHERE LINK_ID NOT IN (SELECT DEST_LINK_ID FROM RDF_SIGN_DESTINATION) AND LINK_ID NOT IN (SELECT ORIGINATING_LINK_ID FROM RDF_SIGN_ORIGIN) GROUP BY FUNCTIONAL_CLASS ```
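Here's a hypothetical check of the correlated-count approach using Python's `sqlite3`, with a made-up miniature version of the schema (one class with three links, one of which is referenced elsewhere):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE links (link_id INTEGER, functional_class INTEGER);
    CREATE TABLE signed (link_id INTEGER);
    INSERT INTO links VALUES (1, 1), (2, 1), (3, 1), (4, 2);
    INSERT INTO signed VALUES (2);  -- link 2 is referenced elsewhere
""")

rows = conn.execute("""
    SELECT COUNT(*) || '/' ||
           (SELECT COUNT(*) FROM links l2
             WHERE l2.functional_class = l.functional_class) AS ratio,
           functional_class
    FROM links l
    WHERE link_id NOT IN (SELECT link_id FROM signed)
    GROUP BY functional_class
    ORDER BY functional_class
""").fetchall()
print(rows)  # [('2/3', 1), ('1/1', 2)]
```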
You could consider using LEFT JOINs to determine the LINK\_IDs that are missing in the ORIGIN and DESTINATION tables, as below: ``` SELECT fn_class_groups.FUNCTIONAL_CLASS, CONCAT(fn_class_groups.not_in_cnt,'/',fn_class_groups.total_cnt) FROM (SELECT FUNCTIONAL_CLASS, COUNT(rnl.LINK_ID) total_cnt, SUM (CASE WHEN rsd.DEST_LINK_ID IS NULL AND rso.ORIGINATING_LINK_ID IS NULL THEN 1 ELSE 0 END) not_in_cnt FROM RDF_NAV_LINK rnl LEFT JOIN (SELECT DISTINCT DEST_LINK_ID FROM RDF_SIGN_DESTINATION) rsd ON rnl.LINK_ID = rsd.DEST_LINK_ID LEFT JOIN (SELECT DISTINCT ORIGINATING_LINK_ID FROM RDF_SIGN_ORIGIN) rso ON rnl.LINK_ID = rso.ORIGINATING_LINK_ID GROUP BY FUNCTIONAL_CLASS ) fn_class_groups ORDER BY fn_class_groups.FUNCTIONAL_CLASS; ```
Show Query output / Number of Total Records
[ "", "sql", "sqlite", "" ]
I have a dataset that has a structure similar to this (greatly simplified for expository purposes): ``` CREATE TABLE FOO ( CUI CHAR(8), SAB VARCHAR(40), CODE VARCHAR(50), KEY X_CUI (CUI) ); ``` There is no primary key: there are multiple rows with the same CUI value: ``` C0000039 MSH D015060 C0000039 NDFRT N0000007747 C0000039 LNC LP15542-1 C0074393 RCD da5.. C0074393 RXNORM 36437 C0074393 SNOMEDCT_US 96211006 ``` I want to find all unique CUI values that do not have a SNOMEDCT\_US SAB. In the above example C0000039 would be in the result set, but C0074393 would not. Unfortunately this exceeds my rudimentary SQL skills: could someone suggest an approach in SQL? The full dataset contains 11,633,065 rows with 2,973,458 unique CUIs. I am using MySQL 5.6.19.
Try this: ``` select distinct cui from foo where CUI not in (select distinct CUI from foo where SAB = 'SNOMEDCT_US') ``` [SQL Demo](http://sqlfiddle.com/#!2/795ef/22)
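A quick way to sanity-check the accepted query is to run it against the question's sample rows in SQLite; a sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (cui TEXT, sab TEXT, code TEXT)")
conn.executemany("INSERT INTO foo VALUES (?, ?, ?)", [
    ("C0000039", "MSH", "D015060"),
    ("C0000039", "NDFRT", "N0000007747"),
    ("C0000039", "LNC", "LP15542-1"),
    ("C0074393", "RCD", "da5.."),
    ("C0074393", "RXNORM", "36437"),
    ("C0074393", "SNOMEDCT_US", "96211006"),
])

# Only CUIs that never appear with a SNOMEDCT_US row survive the NOT IN.
rows = conn.execute("""
    SELECT DISTINCT cui FROM foo
    WHERE cui NOT IN (SELECT cui FROM foo WHERE sab = 'SNOMEDCT_US')
""").fetchall()
print(rows)  # [('C0000039',)]
```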
Try this query : ``` SELECT DISTINCT CUI FROM FOO WHERE SAB != 'SNOMEDCT_US' AND CUI NOT IN (SELECT DISTINCT CUI FROM FOO WHERE SAB = 'SNOMEDCT_US') ```
A query to return a row only when that row lacks a particular value
[ "", "mysql", "sql", "" ]
I am using an INSERT ... SELECT query in my MySQL. Now, in my database is a column named "medDate" which I use for my medicine Inventories app. This has a type Varchar and is formatted in this way, "July 2014". Now I want to use the insert...select query to copy the previous month's records. But as I test my query to MySQL, there's an error which says incorrect datetime value. Can you help me with this? This is my query. ``` INSERT INTO medicinesinventory (itemName, compCode, classID, medDate, price, beginningIn, newIn, outMed, sales) SELECT DISTINCT(itemName), compCode, classID, CURDATE(), price, 0.0, 0, 0.0, 0.0 FROM medicinesinventory WHERE YEAR(medDate) = DATE_FORMAT(CURRENT_DATE - INTERVAL 1 MONTH, '%M %Y') AND MONTH(medDate) = DATE_FORMAT(CURRENT_DATE - INTERVAL 1 MONTH, '%M %Y'); ``` **SAMPLE DATA** ``` compCode medID classID medDate itemname price beginningIn newIn outMed sales GOLDEN 148 20 July 2014 sample 0.00 0 0.00 0.00 6.00 ```
The functions `year()` and `month()` require dates. You can get the dates using `str_to_date()` because MySQL supports "partial" dates (i.e. those without days. So, try this: ``` WHERE YEAR(str_to_date(meddate, '%M %Y')) = YEAR(CURRENT_DATE - INTERVAL 1 MONTH) AND MONTH(str_to_date(meddate, '%M %Y')) = MONTH(CURRENT_DATE - INTERVAL 1 MONTH) ``` Alternatively, you seem to want to format the previous month in the same format and do the comparison. That can also work: ``` WHERE medDate = DATE_FORMAT(CURRENT_DATE - INTERVAL 1 MONTH, '%M %Y') ```
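If you ever need to parse the same "July 2014" strings on the client, MySQL's `%M %Y` format corresponds to Python's `%B %Y`. A minimal sketch (assumes an English locale for month names):

```python
from datetime import datetime

def month_year(s):
    # Parse a "Month YYYY" string such as "July 2014"; the day defaults to 1.
    d = datetime.strptime(s, "%B %Y")
    return d.year, d.month

print(month_year("July 2014"))  # (2014, 7)
```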
As `medDate` is a string and not a DATETIME you can't use YEAR on this column, but you could convert the right side with DATE\_FORMAT and to be consistent you should store the values in the same format. ``` INSERT INTO medicinesinventory (itemName, compCode, classID, medDate, price, beginningIn, newIn, outMed, sales) SELECT DISTINCT(itemName), compCode, classID, DATE_FORMAT(CURDATE(), '%M %Y'), -- formatted this date price, 0.0, 0, 0.0, 0.0 FROM medicinesinventory -- and converted those in the where clause WHERE medDate = DATE_FORMAT(CURRENT_DATE - INTERVAL 1 MONTH, '%M %Y') ``` **Note** It would be better, if you could change the data type of the medDate column to DATE.
Using INSERT ... SELECT with Date format
[ "", "mysql", "sql", "database", "" ]
Today I need help with this really long select query. I need to get all records from one table, "dispositivos", and then LEFT JOIN "monitoreo" and some other tables. The most important part of this query is joining the table "incidencia": one "dispositivo" can have many "incidencia", and from all of them I need the one where the column "alerta\_id" has the lowest value, with "incidencia.fecha\_incidente" = "monitoreo.fecha" and "incidencia.fecha\_correccion" being NULL.

Since I need just one record from "incidencia", I use LIMIT. However, when I print the results on the console, using LIMIT inside the subselect for "incidencia" brings back the 5 columns that come from it as NULL; if I don't use it, the query actually brings the whole data, but of course it repeats the "dispositivos" information for each "incidencia" that it finds.

Here's my query; if anyone can help I'd appreciate it. If you don't understand something about my query or the text above, don't hesitate to ask. By the way, all the quotes and plus signs are there because the query is for a node.js project and I need them to better visualize my code.
``` SELECT d.id , d.tipo_dispositivo_id , d.unidad_adscripcion_id , d.nro_cns , d.cod_inter , d.nombre_senal , d.lat as lat_origen , d.long as long_origen , d.direccion_latitud_id , d.direccion_longitud_id , d.descripcion , d.alc , d.ele , d.cod_iala_id , d.sistema_acceso_id , d.color_bombillo_id , d.modelo_bombillo_id , d.cant_bateria , d.volt_grupo_bateria , d.nro_panel_solar , d.corriente_panel as cor_pan , d.forma_estructura , d.altura_torre , d.nro_telf , d.lat_gps , d.long_gps , d.activo , mon.tipo_evento_id , mon.fecha fecha_hora , mon.voltaje_pila , mon.voltaje_panel , mon.consumo_linterna , mon.corriente_panel , mon.estado , mon.temperatura , mon.gpsval , mon.lat lat_mon , mon.dlat , mon.long long_mon , mon.dlong , mon.velocidad , mon.rumbo , mon.dispositivo_id disp_check , i.* FROM dispositivos d LEFT JOIN monitoreo_actual mon ON mon.dispositivo_id = d.id LEFT JOIN ( SELECT dispositivo_id disp_inc , alerta_id , fecha_incidente , hora , fecha_correccion FROM incidencia inc WHERE fecha_correccion IS NULL ORDER BY alerta_id ASC LIMIT 1 ) i ON i.disp_inc = d.id ```
``` 'SELECT d.id, d.tipo_dispositivo_id, d.unidad_adscripcion_id, d.nro_cns, d.cod_inter, d.nombre_senal, d.lat as lat_origen, d.long as long_origen, '+ 'd.direccion_latitud_id, d.direccion_longitud_id, d.descripcion, d.alc, d.ele, d.cod_iala_id, d.sistema_acceso_id, d.color_bombillo_id, '+ 'd.modelo_bombillo_id, d.cant_bateria, d.volt_grupo_bateria, d.nro_panel_solar, d.corriente_panel as cor_pan, d.forma_estructura, d.altura_torre, '+ 'd.nro_telf, d.lat_gps, d.long_gps, d.activo, '+ 'mon.tipo_evento_id, mon.fecha fecha_hora, mon.voltaje_pila, mon.voltaje_panel, mon.consumo_linterna, mon.corriente_panel, mon.estado, '+ 'mon.temperatura, mon.gpsval, mon.lat lat_mon, mon.dlat, mon.long long_mon, mon.dlong, mon.velocidad, mon.rumbo, mon.dispositivo_id disp_check, '+ 'iala.cod, '+ 'adsc.descripcion uni_ads_desc, '+ 'acceso.descripcion acceso_desc, '+ 'color.descripcion color_bombillo, '+ 'modelo.caracteristicas, '+ 'marca.descripcion marca_bombillo, '+ 'evento.descripcion evento_alerta, '+ 'inc.alerta_id, '+ 'alert.descripcion descripcion_alerta '+ 'FROM dispositivos d '+ 'LEFT JOIN monitoreo_actual mon ON mon.dispositivo_id = d.id '+ 'LEFT JOIN m_cod_iala iala ON iala.id = d.cod_iala_id '+ 'LEFT JOIN m_unidad_adscripcion adsc ON adsc.id = d.unidad_adscripcion_id '+ 'LEFT JOIN m_sistema_acceso acceso ON acceso.id = d.sistema_acceso_id '+ 'LEFT JOIN m_color_bombillo color ON color.id = d.color_bombillo_id '+ 'LEFT JOIN m_modelo_bombillo modelo ON modelo.id = d.modelo_bombillo_id '+ 'LEFT JOIN m_marca_bombillo marca ON marca.id = modelo.marca_bombillo_id '+ 'LEFT JOIN m_tipo_evento evento ON evento.id = mon.tipo_evento_id '+ 'LEFT JOIN (SELECT dispositivo_id, MIN(alerta_id) AS alerta_id, fecha_incidente, fecha_correccion '+ 'FROM incidencia '+ 'WHERE fecha_correccion IS NULL '+ 'GROUP BY dispositivo_id) inc ON inc.dispositivo_id = d.id AND fecha_incidente = mon.fecha '+ 'LEFT JOIN m_alerta alert ON alert.id = inc.alerta_id' ```
One approach to solving this problem is to use an inline view that returns the unique identifier of the rows from "incidencia" you want to return, and then join that to the inidencia table to get the rows. e.g. ``` LEFT JOIN ( SELECT l.fecha_incidente , MIN(l.alerta_id) AS lowest_alerta_id FROM incidencia l WHERE l.fecha_correccion IS NULL GROUP BY l.fecha_incidente ) m ON m.fecha_incidente = monitoreo.fecha LEFT JOIN incidencia n ON n.fecha_incidente = m.fecha_incidente AND n.alerta_id = m.lowest_alerta_id ```
MySQL Query with LEFT JOIN (SELECT) ORDER and LIMIT
[ "", "mysql", "sql", "join", "sql-order-by", "limit", "" ]
So I have two tables. The first is `users_`:

```
Name
------
Carol
Sue
```

and the second is `interests_`:

```
Name        Interest
----------------------
Carol       Books
Carol       Dancing
Carol       Sports
Sue         Books
Sue         Dancing
```

The user will be presented with checkboxes to select a match based on criteria for similar interests, like this:

![enter image description here](https://i.stack.imgur.com/h5VGk.png)

So if the user selected Books and Dancing as interests for their match, what type of SQL query would I build to combine multiple rows in `interests_` and ensure that the result is Sue, since she had Books and Dancing as interests but NOT Sports?

Any help will go a long way. Thanks!
Here is the simple way to do it ``` select i.name from interests i where i.interest in ('Books','Dancing') and not exists ( select 1 from interests i1 where interest not in ('Books','Dancing') AND i.name = i1.name ) group by i.name having count(*) = 2 ``` **[DEMO](http://www.sqlfiddle.com/#!9/08e54/1)**
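The double condition above (right count of wanted interests, and nothing outside the wanted set) can be checked against the sample data in SQLite. A sketch using bound parameters for the wanted interests:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE interests (name TEXT, interest TEXT)")
conn.executemany("INSERT INTO interests VALUES (?, ?)", [
    ("Carol", "Books"), ("Carol", "Dancing"), ("Carol", "Sports"),
    ("Sue", "Books"), ("Sue", "Dancing"),
])

wanted = ("Books", "Dancing")
rows = conn.execute("""
    SELECT i.name FROM interests i
    WHERE i.interest IN (?, ?)
      AND NOT EXISTS (SELECT 1 FROM interests i1
                       WHERE i1.name = i.name AND i1.interest NOT IN (?, ?))
    GROUP BY i.name
    HAVING COUNT(*) = 2
""", wanted + wanted).fetchall()
print(rows)  # [('Sue',)] -- Carol is excluded because of Sports
```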
So basically, to do this, build a "negative" list of users that have interests beyond those two, and then select the others:

```
SELECT u.name FROM users_ u
JOIN interests_ i ON i.name = u.name
JOIN (
    SELECT u.name FROM users_ u
    JOIN interests_ i ON i.name = u.name
    WHERE i.interest NOT IN('Books', 'Dancing')
) t
WHERE u.name <> t.name
AND i.interest IN('Books', 'Dancing')
GROUP BY u.name
HAVING COUNT(u.name) = 2;
```

[DEMO](http://sqlfiddle.com/#!2/15393/5)
SQL select one row from table 1 joining on multiple rows in table 2
[ "", "mysql", "join", "sql", "" ]
I'm trying to write a nested joins query with a condition. The query I have right now is: ``` Event.joins(:store => :retailer).where(store: {retailer: {id: 2}}) ``` Which outputs the following SQL: ``` SELECT "events".* FROM "events" INNER JOIN "stores" ON "stores"."id" = "events"."store_id" INNER JOIN "retailers" ON "retailers"."id" = "stores"."retailer_id" WHERE "store"."retailer_id" = '--- :id: 2 ' ``` And also the following error: ``` SQLite3::SQLException: no such column: store.retailer_id: SELECT "events".* FROM "events" INNER JOIN "stores" ON "stores"."id" = "events"."store_id" INNER JOIN "retailers" ON "retailers"."id" = "stores"."retailer_id" WHERE "store"."retailer_id" = '--- :id: 2 ' ``` It's telling me there is no column store.retailer\_id, however, I can run the following query and it will work just fine: ``` Event.first.store.retailer_id Event Load (0.2ms) SELECT "events".* FROM "events" ORDER BY "events"."id" ASC LIMIT 1 Store Load (0.1ms) SELECT "stores".* FROM "stores" WHERE "stores"."id" = ? LIMIT 1 [["id", 28958]] => 4 ```
Looks like you don't need nested joins here. Try to use something like ``` Event.joins(:store).where(stores: {retailer_id: 2}) ``` Nested join should also work using `stores` ``` Event.joins(:store => :retailer).where(stores: {retailer: {id: 2}}) ```
A simpler way, instead of using curly brackets, is to pass the condition as a string:

```
Event.joins(:store => :retailer).where('stores.retailer_id = ?', 2)
```
Rails Nested Joins Activerecord with conditions
[ "", "sql", "ruby-on-rails", "rails-activerecord", "" ]
In a database that has Orders and Customers tables, I need the relation between these two tables to be a zero-or-one to many relationship, simply because in my application an order can exist without being assigned to a customer.

One first idea that comes to my mind is to make the foreign key nullable; this way, when creating an order, the foreign key can be null, but I don't know how to do so. I'm working on a model so I can present it to my teacher, so I need to know how to do it using the EER diagram.

Thanks in advance.
Create your 1:n relationship as normal. Then double click the table to bring up the details. On your new FK field, uncheck the "NN" column. Now your FK can be NULL.
Mandatory option in "Foreign Key" tab in "Relationship" tab appears to be the way to control optionality.
How to create a zero or one to many relationship in Mysql workbench
[ "", "mysql", "sql", "mysql-workbench", "" ]
Hi there guys. Basically I have this query, and I sort of get half of it but not all of it. Can someone please help me work it out?

[Query](http://gyazo.com/8e372fe56c9f9c5c28f77739bfa10e9c)

Thanks in advance! :)!
``` SELECT username,first_name, last_name from members ORDER BY join_date DESC LIMIT 5; ```
> List the username, first name, last name and date of joining of the five most recent members to register an account. List the results with the most recent registration first. ``` SELECT `username`,`firstname`,`lastname`,`join_date` FROM `members` ORDER BY `join_date` DESC LIMIT 5 ``` > @hd Oops sorry but here is the question basically.. The new member Fredrick Williams needs to be deleted from the database. Write an SQL statement to delete all Fredrick’s details from the database ``` DELETE FROM `members` WHERE `firstname` = 'Fredrick' AND `lastname` = 'Williams'; ``` *Backup your database beforehand ;)*
How do I fix this MYSQL query
[ "", "mysql", "sql", "" ]
I have two tables in SQL Server: Household and People. Household represents a home and People represents the people living in the home: **Household** ``` Id Address City State Zip ------------------------------------------------------ 1 123 Main Anytown CA 90121 ``` **People** ``` Id HouseholdId Name Age ------------------------------------------- 1 1 John 32 2 1 Jane 29 ``` I want to query the two tables and end up with a result set like below, but I'm not sure how best to approach this: ``` Id Address City State Zip Person1Name Person1Age Person2Name Person2Age ---------------------------------------------------------------------------------------------------------------------------- 1 123 Main Anytown CA 90121 John 32 Jane 29 ``` Of course, "PersonXName and PersonXAge" should repeat based on how many people there are. How can I write a query that would accomplish this? Simplicity is preferred over performance as this is a one-off report I need to come up with.
This is done using a dynamic cross tab. For reference: <http://www.sqlservercentral.com/articles/Crosstab/65048/> ``` CREATE TABLE HouseHold( ID INT, Address VARCHAR(20), City VARCHAR(20), State CHAR(2), Zip VARCHAR(10) ) CREATE TABLE People( ID INT, HouseHoldID INT, Name VARCHAR(20), Age INT ) INSERT INTO HouseHold VALUES (1, '123 Main', 'Anytown', 'CA', '90121'); INSERT INTO People VALUES (1, 1, 'John', 32), (2, 1, 'Jane', 29); DECLARE @sql1 VARCHAR(4000) = '' DECLARE @sql2 VARCHAR(4000) = '' DECLARE @sql3 VARCHAR(4000) = '' SELECT @sql1 = 'SELECT ID ,Address ,City ,State ,Zip' + CHAR(10) SELECT @sql2 = @sql2 + ' ,MAX(CASE WHEN RN = ' + CONVERT(VARCHAR(10), RN) + ' THEN Name END) AS [Person' + CONVERT(VARCHAR(10), RN) + 'Name] ,MAX(CASE WHEN RN = ' + CONVERT(VARCHAR(10), RN) + ' THEN Age END) AS [Person' + CONVERT(VARCHAR(10), RN) + 'Age] ' FROM( SELECT DISTINCT RN = ROW_NUMBER() OVER(PARTITION BY p.HouseHoldID ORDER BY p.ID) FROM People p )t SELECT @sql3 = 'FROM( SELECT h.* ,p.Name ,p.Age ,RN = ROW_NUMBER() OVER(PARTITION BY h.ID ORDER BY p.ID) FROM Household h INNER JOIN People p ON p.HouseHoldId = h.ID )t GROUP BY ID, Address, City, State, Zip ORDER BY ID' PRINT(@sql1 + @sql2 + @sql3) EXEC (@sql1 + @sql2 + @sql3) DROP TABLE HouseHold DROP TABLE People ``` **RESULT** ``` ID Address City State Zip Person1Name Person1Age Person2Name Person2Age ----------- -------------------- -------------------- ----- ---------- -------------------- ----------- -------------------- ----------- 1 123 Main Anytown CA 90121 John 32 Jane 29 ```
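For a one-off report it can also be simpler to pivot on the client instead of building dynamic SQL. A hypothetical Python/SQLite sketch with a simplified version of the schema (lower-case names invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE household (id INTEGER, address TEXT);
    CREATE TABLE people (id INTEGER, household_id INTEGER, name TEXT, age INTEGER);
    INSERT INTO household VALUES (1, '123 Main');
    INSERT INTO people VALUES (1, 1, 'John', 32), (2, 1, 'Jane', 29);
""")

# One flat row per household: [address, name1, age1, name2, age2, ...]
flat = {}
for hid, addr, name, age in conn.execute("""
        SELECT h.id, h.address, p.name, p.age
        FROM household h JOIN people p ON p.household_id = h.id
        ORDER BY h.id, p.id"""):
    row = flat.setdefault(hid, [addr])
    row += [name, age]

print(flat)  # {1: ['123 Main', 'John', 32, 'Jane', 29]}
```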
This is adapted from a script I use with a similar requirement. Probably don't want to use if the People table has a million rows, but works well enough for my use case with about 20000 rows: ``` DECLARE @id int, @householdid int, @name varchar(50), @age int, @currentid int, @peoplecount int; DECLARE @colsql nvarchar(1000), @datasql nvarchar(1000), @RunSql nvarchar(1000); CREATE TABLE #ReturnTable (HouseholdId int, Address varchar(50)) INSERT #ReturnTable SELECT Id, Address FROM Household; -- these are split into two dynamic queries -- so that columns exist when we try the insert SET @colsql = 'IF (SELECT COUNT(*) FROM TempDB.INFORMATION_SCHEMA.COLUMNS WHERE COLUMN_NAME = ''Person{Number}Name'' AND TABLE_NAME LIKE ''#ReturnTable'') = 0 BEGIN ALTER TABLE #ReturnTable ADD Person{Number}Name VARCHAR(50) END IF (SELECT COUNT(*) FROM TempDB.INFORMATION_SCHEMA.COLUMNS WHERE COLUMN_NAME = ''Person{Number}Age'' AND TABLE_NAME LIKE ''#ReturnTable'') = 0 BEGIN ALTER TABLE #ReturnTable ADD Person{Number}Age INT END' set @datasql = 'UPDATE #ReturnTable SET Person{Number}Name = @name, Person{Number}Age = @age WHERE HouseholdId = @householdid' DECLARE PeopleCursor CURSOR FOR SELECT p.Id, p.HouseholdId, p.Name, p.Age FROM People p ORDER BY p.HouseholdId, p.Age OPEN PeopleCursor; FETCH NEXT FROM PeopleCursor INTO @id, @householdid, @name, @age SET @currentid = @id SET @peoplecount = 1; WHILE @@FETCH_STATUS = 0 BEGIN IF @currentid <> @id BEGIN SET @peoplecount = 1 SET @currentid = @id END ELSE SET @peoplecount = @peoplecount + 1; SET @RunSql = REPLACE(@colsql, '{Number}', CAST(@peoplecount AS VARCHAR(3))); EXEC dbo.sp_ExecuteSql @RunSql SET @RunSql = REPLACE(@datasql, '{Number}', CAST(@peoplecount AS VARCHAR(3))); EXEC dbo.sp_ExecuteSql @RunSql, N'@householdid int, @name varchar(50), @age int', @householdid = @householdid, @name = @name, @age = @age; FETCH NEXT FROM PeopleCursor INTO @id, @householdid, @name, @age END CLOSE PeopleCursor DEALLOCATE PeopleCursor SELECT * FROM 
#ReturnTable

drop table #ReturnTable
```
Flatten parent child relationship in SQL Server
[ "", "sql", "sql-server", "" ]
I have a bit parameter ``` @IsInRetry ``` If it is false I have to set the where condition to ``` attempts = 0 ``` else if it is true I have to set the where condition to ``` attempts > 0 ``` How could this be done?
Try this way: ``` ( (@IsInRetry = 0 and attempts = 0) or (@IsInRetry = 1 and attempts > 0) ) ```
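The parameterized-OR trick ports directly to other databases. A hypothetical sketch in Python with SQLite (table and column names invented), binding the flag twice since SQLite uses positional placeholders:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id INTEGER, attempts INTEGER)")
conn.executemany("INSERT INTO jobs VALUES (?, ?)", [(1, 0), (2, 3), (3, 0)])

sql = """
    SELECT id FROM jobs
    WHERE (? = 0 AND attempts = 0)
       OR (? = 1 AND attempts > 0)
    ORDER BY id
"""
print(conn.execute(sql, (0, 0)).fetchall())  # [(1,), (3,)] -- flag off
print(conn.execute(sql, (1, 1)).fetchall())  # [(2,)]       -- flag on
```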
Try this: ``` WHERE (attempts = @IsInRetry or (@IsInRetry = 1 and attempts > 0)) ```
SQL Server : IF Condition in WHERE clause
[ "", "sql", "sql-server", "" ]
I have one table with `empname`, `age`, and `gender`:

```
empname     age    gender
Nana        24     F
Jillian     28     M
Sally       29     F
David       30     M
Mike        35     M
Daisy       37     F
```

The question is: how do I count, from the table called `employee`, the employees who are female and under 30, female and over 30, male and under 30, and male and over 30?

```
female<30    female>30    male<30    male>30
2            1            1          2
```
The easiest way would be using multiple sub-queries (note that the gender values must be quoted as string literals):

```
SELECT
(SELECT COUNT(*) FROM employee WHERE gender = 'F' AND age<30) AS 'Female Below 30',
(SELECT COUNT(*) FROM employee WHERE gender = 'F' AND age>=30) AS 'Female Above 30',
(SELECT COUNT(*) FROM employee WHERE gender = 'M' AND age<30) AS 'Male Below 30',
(SELECT COUNT(*) FROM employee WHERE gender = 'M' AND age>=30) AS 'Male Above 30'
```

I think using PIVOT is better for such cases:

```
SELECT [FB30], [FA30], [MB30], [MA30]
FROM
(
  SELECT empname,
         gender + CASE WHEN age<30 THEN 'B30' ELSE 'A30' END AS SexAge
  FROM Employee
) AS SourceTable
PIVOT
(
  COUNT(empname)
  FOR SexAge IN ([FB30],[FA30],[MB30],[MA30])
) AS PivotTable;
```

Check out this **[SQL SERVER Fiddle Demo](http://sqlfiddle.com/#!6/68eb1/2)**

After a bit of search (I am not into Oracle...) I ended up with this for the PIVOT:

```
WITH T AS
(
  SELECT "empname",
         "gender" || CASE WHEN "age"<30 THEN 'B30' ELSE 'A30' END AS "SexAge"
  FROM Employee
)
SELECT SUM(FB30) AS FB30,SUM(FA30) AS FA30,SUM(MB30) AS MB30,SUM(MA30) AS MA30
FROM T
PIVOT
(
  COUNT(*)
  FOR ("SexAge") IN ('FB30' AS FB30,'FA30' AS FA30,'MB30' AS MB30,'MA30' AS MA30)
);
```

[**Oracle PIVOT Demo**](http://sqlfiddle.com/#!4/6ebda/30)

And since you asked for a `DECODE` example:

```
WITH T AS
(
  SELECT "empname",
         "gender" || DECODE(TRUNC("age"/30),0,'B30','A30') AS "SexAge"
  FROM Employee
)
SELECT SUM(FB30) AS FB30,SUM(FA30) AS FA30,SUM(MB30) AS MB30,SUM(MA30) AS MA30
FROM T
PIVOT
(
  COUNT(*)
  FOR ("SexAge") IN ('FB30' AS FB30,'FA30' AS FA30,'MB30' AS MB30,'MA30' AS MA30)
);
```

[**Oracle DECODE PIVOT Demo**](http://sqlfiddle.com/#!4/6ebda/32)
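The single-pass conditional-aggregation idea (a SUM or COUNT over a CASE expression) is easy to verify with the question's sample rows. A sketch in Python with SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (empname TEXT, age INTEGER, gender TEXT)")
conn.executemany("INSERT INTO employee VALUES (?, ?, ?)", [
    ("Nana", 24, "F"), ("Jillian", 28, "M"), ("Sally", 29, "F"),
    ("David", 30, "M"), ("Mike", 35, "M"), ("Daisy", 37, "F"),
])

# One scan of the table, one column per gender/age bucket.
row = conn.execute("""
    SELECT SUM(CASE WHEN gender = 'F' AND age < 30 THEN 1 ELSE 0 END),
           SUM(CASE WHEN gender = 'F' AND age >= 30 THEN 1 ELSE 0 END),
           SUM(CASE WHEN gender = 'M' AND age < 30 THEN 1 ELSE 0 END),
           SUM(CASE WHEN gender = 'M' AND age >= 30 THEN 1 ELSE 0 END)
    FROM employee
""").fetchone()
print(row)  # (2, 1, 1, 2)
```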
You can query like this:

```
select count(case when gender='F' and age >= 30 then 1 end) "Female>=30",
       count(case when gender='F' and age < 30 then 1 end) "Female<30",
       count(case when gender='M' and age >= 30 then 1 end) "Male>=30",
       count(case when gender='M' and age < 30 then 1 end) "Male<30"
from ...
```

This reads the table **only once** to get the result you want, which can make a great difference when the table is large.
How to count multiple things in 1 column
[ "", "sql", "" ]
I have a table with a lot of identity ids missing, and I would like to start the numbering back at 1 with:

```
DBCC CHECKIDENT('Customer', RESEED, 0)
```

However, there is existing data in the table, so will this affect anything? Say I have IDs 2, 3, 5 in the table, with 1 and 4 gone. If I execute the SQL above, does the next new row in that table get ID 1, and the one after that ID 4, without bothering 2, 3, 5, and so on?
RESEEDING to 0 on table with existing data will cause duplicates of existing ID or raise error when Unique Constraints is available. Please see POC for further clarification of the same **TestData:** ``` IF OBJECT_ID(N'Table1') > 0 BEGIN DROP TABLE Table1 END CREATE TABLE Table1 (ID INT IDENTITY(1, 1), Col1 VARCHAR(3)) INSERT INTO Table1 VALUES('1F'),('3A'),('2A'),('4G'),('5X') SELECT * FROM Table1 DELETE FROM Table1 WHERE ID IN( 2, 4) SELECT * FROM Table1 DBCC CHECKIDENT('Table1', RESEED, 0) INSERT INTO Table1 VALUES ('12F'),('34A'),('23A'),('45G'),('56X') SELECT * FROM Table1 DELETE FROM Table1 DBCC CHECKIDENT('Table1', RESEED, 0) ALTER TABLE Table1 ADD UNIQUE (Id) INSERT INTO Table1 VALUES('1F'),('3A'),('2A'),('4G'),('5X') SELECT * FROM Table1 DELETE FROM Table1 WHERE ID IN( 2, 4) DBCC CHECKIDENT('Table1', RESEED, 0) SELECT * FROM Table1 INSERT INTO Table1 VALUES ('12F'),('34A'),('23A'),('45G'),('56X') IF OBJECT_ID(N'Table1') > 0 BEGIN DROP TABLE Table1 END IF OBJECT_ID(N'Table1') > 0 BEGIN DROP TABLE Table1 END ```
Although missing ids are not aesthetically pleasing to the eye, they make absolutely no difference to the database performance. If you are using these values to display something like a Customer Ref to the user, then you should use a separate field for this. Database IDs should not be relied upon for display, as the rules for your database, may differ from your rules for your Customer Ref etc. I would suggest if this is your problem that you add an additional field for the Customer Ref - populate it with the current Database Id for default values, then you can do whatever you want to this new field (as far as ensuring that all values are used sequentially) without interfering with the performance and integrity of the database.
Reseed back 1 with existing data in SQL Server
[ "", "sql", "sql-server", "identity-column", "" ]
I have this query for inserting a list of data into a table, and now I want to avoid inserting duplicate records. How can I do that in my code? Please help me.

```
-- Create a table type to match your input parameters
CREATE TYPE IdNameTable AS TABLE
(
  ID INT,
  Name NVARCHAR(50)
);
GO

-- change your stored procedure to accept such a table type parameter
ALTER PROCEDURE [dbo].[Register]
  @Values IdNameTable READONLY
AS
BEGIN
  BEGIN TRY
    INSERT INTO dbo.Group (Id, Name)
      -- get the values from the table type parameter
      SELECT Id, Name
      FROM @Values
    SELECT 0
  END TRY
  BEGIN CATCH
    SELECT -1
  END CATCH
END
GO
```
Try this way: ``` INSERT INTO dbo.Group (Id, Name) SELECT Id, Name FROM @Values WHERE ID NOT IN (select ID from Group) ``` with full code: ``` -- Create a table type to match your input parameters CREATE TYPE IdNameTable AS TABLE ( ID INT, Name NVARCHAR(50) ); GO -- change your stored procedure to accept such a table type parameter ALTER PROCEDURE [dbo].[Register] @Values IdNameTable READONLY AS BEGIN BEGIN TRY INSERT INTO dbo.Group (Id, Name) SELECT Id, Name FROM @Values WHERE ID NOT IN (select ID from Group) SELECT 0 END TRY BEGIN CATCH SELECT -1 END CATCH END GO ```
I have used a NOT EXISTS to check whether the ID is already present in the dbo.Group table, and insert only when no match is found. The assumption is that ID is unique. Please modify the filter inside the NOT EXISTS to tailor it to your needs. Hope it helps.

```
CREATE TYPE IdNameTable AS TABLE (ID INT, NAME NVARCHAR(50));
GO
-- change your stored procedure to accept such a table type parameter
ALTER PROCEDURE [dbo].[Register] @Values IdNameTable READONLY
AS
BEGIN
  BEGIN TRY
    INSERT INTO dbo.Group (Id,NAME)
    -- get the values from the table type parameter
    SELECT v.Id, v.NAME
    FROM @Values v
    WHERE NOT EXISTS (SELECT 1 FROM dbo.Group gp WHERE gp.id=v.id)
    SELECT 0
  END TRY
  BEGIN CATCH
    SELECT -1
  END CATCH
END
GO
```
how to avoid inserting duplicate records in table type sql server
[ "", "sql", "sql-server", "" ]
Given a really simple table in MS SQL like this:
```
User ID | Status
----------------
1         False
2         True
2         False
3         False
3         False
```
I'm having trouble wrapping my head around how I would select only users that *don't* have any rows with a `Status` set to `True`. The result would return User IDs 1 and 3. I have a feeling it requires more than a straightforward `WHERE` selector, and I have been experimenting with GROUP BY and COUNT without success.
You can use a `GROUP BY` with a `HAVING` clause to get the result. In your `HAVING` you can use a `CASE` expression with an aggregate to filter out any rows that have a `Status = 'True'`: ``` select [user id] from table1 group by [user id] having sum(case when status = 'true' then 1 else 0 end) < 1; ``` See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/a057d/2)
Try this way: ``` select distinct userid from yourtable where userid not in (select userid from yourtable where status = 'True') ```
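One caveat on the `NOT IN` form: if the subquery's `userid` column could ever contain a NULL, `NOT IN` would return no rows at all, because every comparison against that NULL evaluates to UNKNOWN. `NOT EXISTS` does not have that problem, so a safer variant of the same idea (a sketch using the same `yourtable`/`userid` names as above) is:

```
select distinct userid
from yourtable t
where not exists (select 1
                  from yourtable x
                  where x.userid = t.userid
                    and x.status = 'True')
```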
Selecting rows in SQL only where *all* values do not meet a certain criteria
[ "", "sql", "t-sql", "group-by", "" ]
Here's my query to find ids for userid 2. I want to run a query that finds entries where userid = 2 and amount > 1, AND userid 1 has none of that id:
```
SELECT id, amount
FROM collection
WHERE userid='2'
AND amount>1
```
I'm not sure how to do an IF statement inside a SQL query, but there has to be a way to do this easily. Any help would be appreciated.
If I understand right, you want the list of rows having user ID = 2 and amount > 1. This list should ignore the records whose ID also exists for user ID = 1.

**Sample Input/Output:**

```
ID  UserID  Amount  Returned?
1   2       0       No (Amount is 0)
2   2       10      Yes
3   1       10
3   2       5       No, since ID = 3 exists with UserID = 1
```

The query below should help you with it.

```
SELECT C.ID,
       C.AMOUNT
FROM COLLECTION C
WHERE C.USERID = 2
  AND C.AMOUNT > 1
  AND C.ID NOT IN
    ( SELECT D.ID
      FROM COLLECTION D
      WHERE D.USERID = 1 );
```
## **[Fiddle here](http://www.sqlfiddle.com/#!2/517d1/4)**
*"where userid=2 and amount>1 AND userid 1 has none of that id"* You can use `NOT EXISTS`: ``` SELECT id, amount FROM collection c WHERE c.userid = '2' AND c.amount > 1 AND NOT EXISTS ( SELECT 1 FROM collection c2 WHERE c2.userid = '1' AND c.id = c2.id ) ```
SQL Select where user1 has more than one and user2 has none
[ "", "mysql", "sql", "" ]
I have a database table where I have some data in an XML datatype column. The following is an example of the data in that column.
```
<locale en-US="Test &amp; Data" />
```
Is there any way to extract only the words "Test & Data" in SQL Server? Is there any built-in function?
Try something like this. If you have an XML variable:
```
declare @xml XML = '<locale en-US="Test &amp; Data" />';

select data.node.value('@en-US', 'varchar(11)') my_column
from @xml.nodes('locale') data(node);
```
In your case, for a table's column (sorry for not giving this example first):
```
create table dbo.example_xml (
    my_column XML not null
);
go

insert into dbo.example_xml values('<locale en-US="Test &amp; Data" />');
go

select my_column.value('(/locale/@en-US)[1]', 'varchar(11)') [en-US]
from dbo.example_xml;
go
```
Hope it helps.
``` IF OBJECT_ID('tempdb..#Holder') IS NOT NULL begin drop table #Holder end CREATE TABLE #Holder (ID INT , MyXml xml ) /* simulate your insert */ INSERT INTO #HOLDER (ID , MyXml) select 1 , '<locale en-US="Test &amp; Data" />' union all select 2 , '<locale en-US="Barney &amp; Friends" />' /* with other values in the table */ SELECT holderAlias.ID , pre.value('(@en-US)[1]', 'Varchar(50)') AS 'ItemID' FROM #Holder holderAlias CROSS APPLY MyXml.nodes('/locale') AS MyAlias(pre) /* OR */ SELECT [MyShreddedValue] = holderAlias.MyXml.value('(/locale/@en-US)[1]', 'varchar(50)') FROM #Holder holderAlias IF OBJECT_ID('tempdb..#Holder') IS NOT NULL begin drop table #Holder end ```
Converting XML in SQL Server
[ "", "sql", "sql-server", "xml", "" ]
I encountered a strange behavior of `binary_double` rounding in Oracle SQL. `binary_double` values should be rounded `half even` according to the [documentation](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions135.htm), but when testing this with the following queries, it seems that there are some inconsistencies. **All the queries below should give the same last digit**, respectively, i.e., 0.x00008 and 0.x00006 (rounded to 6 digits) or 0.x0008 and 0.x0006 (rounded to 5 digits) with x in (0,1,2,3,4,5,6,7,8,9). **The problem is that they do not**. Any help in understanding why the rounding results depend on the first digit after the separator point and/or the number of digits in the original number is kindly appreciated.
```
select 1,(round( cast (0.0000075 as binary_double ) ,6)), (round( cast (0.0000065 as binary_double ) ,6)) from dual union
select 2,(round( cast (0.1000075 as binary_double ) ,6)), (round( cast (0.1000065 as binary_double ) ,6)) from dual union
select 3,(round( cast (0.2000075 as binary_double ) ,6)), (round( cast (0.2000065 as binary_double ) ,6)) from dual union
select 4,(round( cast (0.3000075 as binary_double ) ,6)), (round( cast (0.3000065 as binary_double ) ,6)) from dual union
select 5,(round( cast (0.4000075 as binary_double ) ,6)), (round( cast (0.4000065 as binary_double ) ,6)) from dual union
select 6,(round( cast (0.5000075 as binary_double ) ,6)), (round( cast (0.5000065 as binary_double ) ,6)) from dual union
select 7,(round( cast (0.6000075 as binary_double ) ,6)), (round( cast (0.6000065 as binary_double ) ,6)) from dual union
select 8,(round( cast (0.7000075 as binary_double ) ,6)), (round( cast (0.7000065 as binary_double ) ,6)) from dual union
select 9,(round( cast (0.8000075 as binary_double ) ,6)), (round( cast (0.8000065 as binary_double ) ,6)) from dual union
select 10,(round( cast (0.9000075 as binary_double ) ,6)), (round( cast (0.9000065 as binary_double ) ,6)) from dual union
select 11,(round( cast
(0.000075 as binary_double ) ,5)), (round( cast (0.000065 as binary_double ) ,5)) from dual union select 12,(round( cast (0.100075 as binary_double ) ,5)), (round( cast (0.100065 as binary_double ) ,5)) from dual union select 13,(round( cast (0.200075 as binary_double ) ,5)), (round( cast (0.200065 as binary_double ) ,5)) from dual union select 14,(round( cast (0.300075 as binary_double ) ,5)), (round( cast (0.300065 as binary_double ) ,5)) from dual union select 15,(round( cast (0.400075 as binary_double ) ,5)), (round( cast (0.400065 as binary_double ) ,5)) from dual union select 16,(round( cast (0.500075 as binary_double ) ,5)), (round( cast (0.500065 as binary_double ) ,5)) from dual union select 17,(round( cast (0.600075 as binary_double ) ,5)), (round( cast (0.600065 as binary_double ) ,5)) from dual union select 18,(round( cast (0.700075 as binary_double ) ,5)), (round( cast (0.700065 as binary_double ) ,5)) from dual union select 19,(round( cast (0.800075 as binary_double ) ,5)), (round( cast (0.800065 as binary_double ) ,5)) from dual union select 20,(round( cast (0.900075 as binary_double ) ,5)), (round( cast (0.900065 as binary_double ) ,5)) from dual; ``` The bottom line, is this: Why in the following query, there is a difference between the two values: ``` SELECT (round( CAST (0.0000065 AS BINARY_DOUBLE ) ,6)), (round( cast (0.1000065 as binary_double ) ,6)) FROM dual; ``` Following @zerkms advice, I [convert](http://www.binaryconvert.com/convert_double.html?decimal=048046049048048048048054053) the numbers into binary format, and I get: ``` 0.0000065 -> 6.49999999999999959998360846147E-6 0.1000065 -> 1.00006499999999998173905169097E-1 ``` The query rounds this up to 6 digits. Surprisingly, for me, I see that the rounding results in: ``` 0.0000065 -> 0.000006 (execute the query above to see this) 0.1000065 -> 0.100007 (execute the query above to see this) ``` Why is that? 
I could understand it if I tried rounding to more than 12 digits, where the series of digits in the binary representations starts to differ, but how come the difference becomes visible at such an early stage?
Let's have a look at the first example, since the others are pretty much similar:

The `0.0000075` in double precision IEEE 754 is presented as `7.50000000000000019000643072808E-6`

The `0.0000065` is presented as `6.49999999999999959998360846147E-6`

When you round both by 6 - the former becomes `8e-6`, the latter `6e-6`

There is no "consistent" behaviour, because different numbers are factorized to divisors of 2 differently.

So, even though you do `SELECT 0.0000065 FROM DUAL` and see `0.0000065` as a result - that's not how it's represented internally in binary form; it's already "broken" and less than that number by a tiny fraction. Then it's rounded for you during output formatting.

The double IEEE 754 provides [15-16 significant digits](http://en.wikipedia.org/wiki/Floating_point#Internal_representation). So for output purposes they become: `7.500000000000000e-6` and `6.499999999999999e-6`, which is rounded to `6.5e-6`

**UPD**: `6.49999999999999959998360846147E-6` == `0.00000649999999999999959998360846147`. If you round it by 6 - it equals `0.000006`, because it's followed by `4`, which is less than `5`

`1.00006499999999998173905169097E-1` == `0.100006499999999998173905169097` is rounded by 6 to `0.100006`, because the next digit is `4`, which is less than `5`.

And I do see the difference with the actual result. And honestly I don't have a good explanation here. I suspect it's an Oracle issue, since:

* C# runs "as expected": <http://ideone.com/Py9aer>
* Go also runs "as expected": <http://ideone.com/OEJBoA>
* Python also runs "as expected": <http://ideone.com/I0ADOR>
* Javascript (in console): `parseFloat(0.1000065).toFixed(6) // 0.100006`

**UPD 2**: after even more research with fellows from a Skype chat I've been given a good example that the result depends on the rounding mode chosen:

```
flock.core> (import '[org.apache.commons.math3.util Precision])
flock.core> (Precision/round (Double.
0.1000065) 6 BigDecimal/ROUND_CEILING) 0.100007 flock.core> (Precision/round (Double. 0.1000065) 6 BigDecimal/ROUND_DOWN) 0.100006 flock.core> (Precision/round (Double. 0.1000065) 6 BigDecimal/ROUND_UP) 0.100007 flock.core> (Precision/round (Double. 0.1000065) 6 BigDecimal/ROUND_HALF_DOWN) 0.100006 flock.core> (Precision/round (Double. 0.1000065) 6 BigDecimal/ROUND_HALF_EVEN) 0.100006 flock.core> (Precision/round (Double. 0.1000065) 6 BigDecimal/ROUND_HALF_UP) 0.100007 flock.core> (Precision/round (Double. 0.1000065) 6 BigDecimal/ROUND_FLOOR) 0.100006 ``` **Conclusion**: there is no "correct" or "incorrect" result in this case, they are all correct and strongly depend on the implementation (+ options you use when you perform arithmetic operations). References: * Online decimal to IEEE 754 double converter: [0.0000065](http://www.binaryconvert.com/result_double.html?decimal=048046048048048048048054053) and [0.0000075](http://www.binaryconvert.com/result_double.html?decimal=048046048048048048048055053) * <http://en.wikipedia.org/wiki/Floating_point#Internal_representation>
You should use the DECIMAL data type instead, to avoid rounding issues. More info here: <http://docs.oracle.com/javadb/10.6.2.1/ref/rrefsqlj15260.html>

Try this one:
```
select 1,round(cast (0.0000075 as decimal(15,7)),6),
round(cast (0.0000065 as decimal(15,7)),6)
from dual;
```
Since I have no Oracle database installed I could not test it, but it should work. One important remark: if the decimal has a scale smaller than the actual number, the excess decimals will be truncated. So you might want to cast to decimal(17,8) to be safe.
Oracle SQL rounding misbehavior
[ "", "sql", "oracle", "casting", "rounding", "" ]
I have two tables `Tbl_A` and `Tbl_B`, and the data is as follows:

Tbl\_A

```
Company_nm   Website
Airindia     www.airindia.coin/index.html
Spicejet     www.spicjet.com
indigo       indigo.com/
```

Tbl\_B

```
Company      URL
Airindia     www.airindia.co.in/index
Spicejet     www.spicjet.com/index.html
indigo       www.indigo.com/
```

My task is to select all the names of the companies by joining the two tables on the Website/URL columns, replacing/removing the additional characters after the domain name. The final output should be all 3 companies.
You can use `substring` and `charindex` to get the first part of the url: ``` select * from tbl_a a join tbl_b b on case when charindex('/', a.website) > 0 then substring(a.website, 0, charindex('/', a.website)) else a.website end = case when charindex('/', b.url) > 0 then substring(b.url, 0, charindex('/', b.url)) else b.url end ```
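One more wrinkle in the sample data: `indigo` is stored as `indigo.com/` in `Tbl_A` but `www.indigo.com/` in `Tbl_B`, so cutting each URL at the first `/` is not enough by itself; a leading `www.` also has to be normalized away before the two sides can match. A sketch of one way to do that in SQL Server is to wrap each side of the join condition in an expression like:

```
-- strip a leading 'www.' if present, before the charindex/substring logic
CASE WHEN a.website LIKE 'www.%'
     THEN STUFF(a.website, 1, 4, '')  -- remove the first 4 characters ('www.')
     ELSE a.website
END
```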
I think you should use a pattern-matching approach for your SELECT query. Have a look at [Regular Expressions](http://msdn.microsoft.com/en-us/magazine/cc163473.aspx) and [Pattern Matching (Regex)](http://www.sqllion.com/2010/12/pattern-matching-regex-in-t-sql/) for more information.
Replace in Joins
[ "", "sql", "sql-server-2008", "" ]
Given a `Dictionary` table {`word_id`, `word`}. `word` is char(32), `word_id` is int, IDENTITY and PRIMARY KEY. I need to ensure some words exist in the `Dictionary` table. If they don't exist, they need to be added. I am having a hard time trying to wrap a NOT EXISTS clause around several constant values. Suppose the Dictionary has 3 entries:
```
test
test2
test3
```
I am trying to ensure the words `test1`, `test2`, and `hello` are there. Of those listed, only `test2` is in the database, so `test1` and `hello` need to be inserted. Here is the query I am currently stuck with; I don't know how to reference the synthetic column (#1 in the UNION ALL result set):
```
INSERT INTO Dictionary (word)
SELECT 'test1'
UNION ALL
SELECT 'test2'
UNION ALL
SELECT 'hello'
WHERE NOT EXISTS (SELECT 1 FROM Dictionary d WHERE d.word = {column1?})
```
Don't worry about SQL injection for now - in the real-world app a set of words will be passed via a parameterized query. I am just trying to grasp the concept for now.
Another way is using a [CTE](http://msdn.microsoft.com/en-us/library/ms175972.aspx), either by providing the column name in the expression: ``` WITH Data(Word) AS ( SELECT 'test1' UNION ALL SELECT 'test2' UNION ALL SELECT 'hello' ) SELECT Word FROM Data WHERE NOT EXISTS( SELECT 1 FROM Dictionary d WHERE d.Word = Data.Word) ``` or via alias in the query: ``` WITH Data AS ( SELECT Word = 'test1' UNION ALL SELECT Word = 'test2' UNION ALL SELECT Word = 'hello' ) SELECT Word FROM Data WHERE NOT EXISTS( SELECT 1 FROM Dictionary d WHERE d.Word = Data.Word) ```
You can wrap the `UNION ALL` in another `SELECT`, give an alias to the word column, and put parentheses around the nested query, so that the `WHERE` clause applies to all rows of the union:
```
SELECT *
FROM (
      SELECT 'test1' AS word
      UNION ALL
      SELECT 'test2' AS word
      UNION ALL
      SELECT 'hello' AS word
     ) ww
WHERE NOT EXISTS (SELECT 1 FROM Dictionary d WHERE d.word = ww.word)
```
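If you are on SQL Server 2008 or later, the stack of single-row `SELECT`s can also be replaced with a `VALUES` row constructor; the `NOT EXISTS` logic stays the same (a sketch):

```
INSERT INTO Dictionary (word)
SELECT v.word
FROM (VALUES ('test1'), ('test2'), ('hello')) AS v(word)
WHERE NOT EXISTS (SELECT 1 FROM Dictionary d WHERE d.word = v.word);
```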
NOT EXISTS with several constant rows
[ "", "sql", "sql-server", "constants", "not-exists", "" ]
I need to extract some data to analyse exceptions/logs, and I'm stuck at a point. I have a table with a column called `CallType`, and a status which can be **Success** or **Failure**. This table also has a column called `SessionId`. I need to do this: Select all the `SessionId`'s where all the `CallType = 'A'` are marked as **Success**, but there is at least one `CallType = 'B'` having a **Failure** for that session. There will be a where clause to filter out some stuff. I'm thinking something like: ``` select top 10 * from Log nolock where ProviderId=48 -- add more conditions here group by SessionId having --? what should go over here? ```
I would do this with conditional aggregation in the `having` clause:
```
select top 10 SessionId
from Log with (nolock)
where ProviderId=48 -- add more conditions here
group by SessionId
having sum(case when CallType = 'A' and Status = 'Failure' then 1 else 0 end) = 0 and
       sum(case when CallType = 'B' and Status = 'Failure' then 1 else 0 end) > 0 and
       sum(case when CallType = 'A' and Status = 'Success' then 1 else 0 end) > 0;
```
The `having` clause checks for three conditions by counting the number of rows that meet each one. If `= 0`, then no such records are allowed. If `> 0`, then such records are required.

* That `CallType A` has no failures.
* That `CallType B` has at least one failure.
* That at least one `CallType A` success exists.

The third condition is ambiguous -- it is not clear whether you actually need `CallType A`s to be in the data, based on the question.
Having clause can only operate on aggregates within the group so this isn't the correct way to go about it since you are filtering out other rows you want to check against. I'd use `EXISTS` for this e.g. **edit**: corrected the query ``` SELECT * FROM Log L WITH(NOLOCK) WHERE ProviderId = 48 AND CallType = 'A' AND Status = 'Success' AND EXISTS(SELECT * FROM Log WHERE L.SessionId = SessionId AND CallType = 'B' AND Status = 'Failure') ``` You can essentially filter out rows in the `EXISTS` part of the query using the aliased `Log` table (`aliased L`), matching all rows with the same session ID and seeing if any match the filters you required (failed with call type B)
How can I write this select query in SQL Server?
[ "", "sql", "sql-server", "" ]
I have created a temporary table named "table1". I am trying to list the columns of my temp table, but I am not getting any values. Here is my MySQL query:
```
SELECT column_name
FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'table2';
```
Can anybody help me, please? Thanks
You cannot get a temporary table's columns using the [INFORMATION\_SCHEMA](http://dev.mysql.com/doc/refman/5.5/en/tables-table.html). The only way you can go is with `SHOW CREATE TABLE table2`; "INFORMATION\_SCHEMA.COLUMNS" does not contain the columns of temporary tables.
You can use SHOW COLUMNS to achieve this.

Example Table:
```
CREATE TEMPORARY TABLE SalesSummary (
    product_name VARCHAR(50) NOT NULL,
    total_sales DECIMAL(12,2) NOT NULL DEFAULT 0.00,
    avg_unit_price DECIMAL(7,2) NOT NULL DEFAULT 0.00,
    total_units_sold INT UNSIGNED NOT NULL DEFAULT 0
);
```
Command: SHOW COLUMNS FROM SalesSummary;

Output:
```
mysql> SHOW COLUMNS FROM SalesSummary;
+------------------+------------------+------+-----+---------+-------+
| Field            | Type             | Null | Key | Default | Extra |
+------------------+------------------+------+-----+---------+-------+
| product_name     | varchar(50)      | NO   |     | NULL    |       |
| total_sales      | decimal(12,2)    | NO   |     | 0.00    |       |
| avg_unit_price   | decimal(7,2)     | NO   |     | 0.00    |       |
| total_units_sold | int(10) unsigned | NO   |     | 0       |       |
+------------------+------------------+------+-----+---------+-------+
4 rows in set (0.00 sec)
```
More details are in the manual, Section 13.7.5.5 for MySQL 5.7. [Link](http://dev.mysql.com/doc/refman/5.7/en/show-columns.html)

Another possibility is using SHOW CREATE TABLE:
```
mysql> SHOW CREATE TABLE SalesSummary\G
*************************** 1. row ***************************
       Table: SalesSummary
Create Table: CREATE TEMPORARY TABLE `SalesSummary` (
  `product_name` varchar(50) NOT NULL,
  `total_sales` decimal(12,2) NOT NULL DEFAULT '0.00',
  `avg_unit_price` decimal(7,2) NOT NULL DEFAULT '0.00',
  `total_units_sold` int(10) unsigned NOT NULL DEFAULT '0'
) ENGINE=InnoDB DEFAULT CHARSET=latin1
1 row in set (0.00 sec)
```
Here are some more [details](http://dev.mysql.com/doc/refman/5.7/en/show-create-table.html). In MySQL 5.7 there is a separate table called INNODB\_TEMP\_TABLE\_INFO to achieve this.
List temporary table columns in mysql
[ "", "mysql", "sql", "" ]
``` SELECT e.cempnoee, e.cempname, e.cempemail, s.csrvposition, s.csrvbranch, s.csrvcostcentr, b.cbrndesc, p.caddnophoneh FROM emas e inner join esrvc s on e.cempnoee = s.csrvnoee inner join cbranch b on s.csrvbranch = b.cbrncode inner join eaddrs p on e.cempnoee = p.caddnoee ``` The SQL dialect I am using (Lianja/VFP) does not support multiple inner join statements as of the time of writing. How do I split the above SQL statement into multiple select statements as a workaround?
After corresponding with the author of Lianja, splitting the SQL statement works:
```
SELECT emas.cempnoee, emas.cempname,
emas.cempemail, esrvc.csrvposition,
esrvc.csrvbranch, esrvc.csrvcostcentr
FROM emas inner join esrvc on emas.cempnoee = esrvc.csrvnoee
into cursor esc

SELECT *, cbranch.cbrndesc
FROM esc inner join cbranch on esc.csrvbranch = cbranch.cbrncode
into cursor esbc

SELECT *, eaddrs.caddnophoneh
FROM esbc inner join eaddrs on esbc.cempnoee = eaddrs.caddnoee
```
Actually Lianja does support multiple joins; it just uses slightly unusual syntax.

Try something like this, and it should work:
```
SELECT emas.cempnoee, emas.cempname,;
    emas.cempemail, esrvc.csrvposition,;
    esrvc.csrvbranch, esrvc.csrvcostcentr,;
    cbranch.cbrndesc, eaddrs.caddnophoneh;
    FROM emas INNER JOIN esrvc ON emas.cempnoee = esrvc.csrvnoee,;
    esrvc INNER JOIN cbranch ON esrvc.csrvbranch = cbranch.cbrncode,;
    emas INNER JOIN eaddrs ON emas.cempnoee = eaddrs.caddnoee
```
[Lianja SQL\_SELECT](http://www.lianja.com/doc/index.php/SQL_SELECT) and [Lianja docs](http://www.lianja.com/doc/index.php/Using_Lianja_SQL)
How do I split two or more inner joins into multiple select statements
[ "", "sql", "" ]
So I'm working on a system that uses default date values of '1900-01-01 00:00:00.000' rather than allowing nulls. I've got a stored procedure that displays all members of an association and the date and amounts charged to them. If a member has not received charges, I need to display a zero amount and empty string for the date. I've tried every permutation of `CASE` and `CONVERT` I can think of, but I can't seem to suppress the default date. It's the column `trandate` I'm trying to manipulate. Any pointers are greatly appreciated! Here is my current code, and below that the current results. ``` ALTER PROCEDURE [dbo].[usp_ApplyCharges_GetAllAssociationsDebtors] @tCode int, @AssnKey int WITH RECOMPILE AS SELECT c.pkey, c.lname + ', ' + c.fname as 'Name', c.address1, c.address2, c.assnkey, cs.camount as 'LastAmt', coalesce(c.AssmtChrgAmt, 0) as 'DefaultAmt', cs.trancode, a.name as 'AssnName' , a.assmtchrgamt, a.latefee, CASE WHEN cs.trandate = '1900-01-01 00:00:00.000' THEN '' ELSE cs.trandate END AS lasttrandate, CASE a.assmtchrgfreq WHEN 'A' THEN 'Annual' WHEN 'D' THEN 'Daily' WHEN 'B' THEN 'Bi-Monthly' WHEN 'M' THEN 'Monthly' WHEN 'Q' THEN 'Quarterly' WHEN 'S' THEN 'Semi-Annual' WHEN 'T' THEN 'Tri-Annual' ELSE 'N/A' END AS AssessmentFrequency FROM Cases c LEFT OUTER JOIN CaseSumm cs ON c.pkey = cs.casekey INNER JOIN vw_CaseSumm_GetLastpKeyByTrantype v ON cs.pkey = v.pkey INNER JOIN assnctrl a ON c.assnkey = a.pkey WHERE v.trancode = @tcode and v.assnkey = @assnkey and c.active = 1 UNION SELECT c2.pkey, c2.lname + ', ' + c2.fname as 'Name', c2.address1, c2.address2, c2.assnkey, 0, 0, 0, a.name as 'AssnName', a.assmtchrgamt, a.latefee, '', CASE a.assmtchrgfreq WHEN 'A' THEN 'Annual' WHEN 'D' THEN 'Daily' WHEN 'B' THEN 'Bi-Monthly' WHEN 'M' THEN 'Monthly' WHEN 'Q' THEN 'Quarterly' WHEN 'S' THEN 'Semi-Annual' WHEN 'T' THEN 'Tri-Annual' ELSE 'N/A' END As AssessmentFrequency FROM Cases c2 INNER JOIN AssnCtrl a ON c2.assnkey = a.pkey WHERE c2.assnkey = @assnkey and 
c2.active = 1 and c2.pkey NOT IN (SELECT casekey FROM vw_CaseSumm_GetLastpKeyByTrantype WHERE assnkey = @assnkey and trancode = @tcode) ORDER BY Name ``` Result example: ``` 142373 Smith, John 1234 Main St. 84 0.00 0 0 Ashley Place Condominium, Inc. 333.00 0 1900-01-01 00:00:00.000 Tri-Annual ```
The problem is that all possible values of a CASE statement must have the same DataType, and the most restrictive possibility will be used. You are trying to return an empty string in one case, but in the other case you return a DateTime type. That means that the empty string will be implicitly converted to a DateTime, which will default to 1900-01-01. If you CAST the datetime value to a string, then both sides will return a string and you will be able to return an empty string. Like so: ``` CASE WHEN cs.trandate = '1900-01-01 00:00:00.000' THEN '' ELSE CONVERT(varchar(31),cs.trandate) END AS lasttrandate, ``` Of course this is only a useful answer if you can accept a having a string for an output datatype for that column. If you need the datatype to be a date-time, then you will have to accept the 1900-01-01 output and handle it in your front end.
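You can see the implicit conversion in isolation (a quick check, SQL Server syntax; `datetime` has higher data type precedence than character types, so the string branch of the CASE is converted to `datetime`, and an empty string converts to the base date):

```
SELECT CAST('' AS datetime);
-- 1900-01-01 00:00:00.000

SELECT CASE WHEN 1 = 0 THEN GETDATE() ELSE '' END;
-- also 1900-01-01 00:00:00.000: the '' is implicitly cast to datetime
```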
Change this: ``` WHEN cs.trandate = '1900-01-01 00:00:00.000' THEN '' ``` into this ``` WHEN CAST(cs.trandate AS DATE) = '19000101' THEN NULL ``` Comparing date-times is always fraught with error for the same reason comparing floats is. This just pulls the actual date part and does the comparison, which is good enough I think for your case. Also, since the column type is 'datetime', the THEN '' will be cast to a date time, which will end up being "mindate"... or 1900-01-01. You really want NULL here, and to adjust the rest of the query as necessary to deal with NULLs.
Stored procedure to suppress default values of '1900-01-01 00:00:00.000'
[ "", "sql", "sql-server", "case", "" ]
I have a list of names: john, sam, peter, jack.

I want to run the same SQL with each of the above as the filter. Each query will give me a unique employee id, which I want to use to delete some other records.
```
select emp_id from employee where emp_name like '%john%';
```
Let's say for the first query, I get the id 1001. The delete queries would then be like the following.
```
delete from account_details where emp_id = 1001;
delete from hr_details where emp_id = 1001;
delete from pay_role_details where emp_id = 1001;
```
I have to repeat this for a list of employees. The pseudocode would be like the following.
```
var emp_list = ['john', 'jack', 'kate', 'peter', 'sam',...]

for each :employee_name in emp_list

select emp_id as :var_emp_id from employee where emp_name like '%:employee_name%';

delete from account_details where emp_id = :var_emp_id;
delete from hr_details where emp_id = :var_emp_id;
delete from pay_role_details where emp_id = :var_emp_id;

end loop
```
I want a PL/SQL query to do this. Please help. Thanks.

What I tried is something like the following.
```
set serveroutput on;
begin
loop x in ('john','jack', 'kate') loop as :name
select emp_id as var_emp_id from employee where emp_name like '%:name%';
// delete queries
end loop;
end;
```
P.S. Although, according to the question, the LIKE query may return multiple records, in the actual scenario it is guaranteed to be only one record. The reason I use LIKE is that in the actual scenario it is a list of reference numbers instead of names. The reference numbers have some other pre and post text, and my comma-separated list has only the numbers.
Perhaps the following will help: ``` BEGIN FOR aName IN (SELECT 'john' AS EMP_NAME FROM DUAL UNION ALL SELECT 'sam' AS EMP_NAME FROM DUAL UNION ALL SELECT 'peter' AS EMP_NAME FROM DUAL UNION ALL SELECT 'jack' AS EMP_NAME FROM DUAL) LOOP FOR emp IN (SELECT * FROM EMPLOYEE WHERE EMP_NAME LIKE '%' || aName.EMP_NAME || '%') LOOP DELETE FROM ACCOUNT_DETAILS a WHERE a.EMP_ID = emp.EMP_ID; DELETE FROM HR_DETAILS h WHERE h.EMP_ID = emp.EMP_ID; DELETE FROM PAY_ROLE_DETAILS p WHERE p.EMP_ID = emp.EMP_ID; DBMS_OUTPUT.PUT_LINE('Deleted data for employee with EMP_ID=' || emp.EMP_ID); END LOOP; -- emp END LOOP; -- aName END; ``` Study this until you understand how and why it works. Share and enjoy.
Do you really need a cursor to do so? Try to skip the cursor if possible, to avoid poor performance/memory usage on huge data. Note that Oracle's DELETE does not accept a join, so express the lookup as an IN subquery instead (CONTAINS also assumes an Oracle Text index exists on emp_name):
```
delete from account_details
 where emp_id in (select emp_id from employee
                   where contains(emp_name, '"John" OR "Sam" OR "Max"', 1) > 0);

delete from hr_details
 where emp_id in (select emp_id from employee
                   where contains(emp_name, '"John" OR "Sam" OR "Max"', 1) > 0);

delete from pay_role_details
 where emp_id in (select emp_id from employee
                   where contains(emp_name, '"John" OR "Sam" OR "Max"', 1) > 0);
```
PL SQL loop through list of ids
[ "", "sql", "oracle", "plsql", "oracle-sqldeveloper", "plsqldeveloper", "" ]
I have a table with 3 rows, where I filled in all the columns with values. I can't understand why, when I do this:
```
SELECT count(*) FROM person WHERE NULL is NULL;
```
it gives me the result 3. Can you please explain this to me?
In SQL, `NULL = NULL` is not true. However, `NULL IS NULL` is indeed true, and equivalent to `1=1`. Therefore, you get all the rows. For example, as per MSDN documentation for SQL Server: > If the value of expression is NULL, IS NULL returns TRUE; otherwise, it returns FALSE. > > If the value of expression is NULL, IS NOT NULL returns FALSE; otherwise, it returns TRUE. Here, the value `NULL` is a valid expression for `IS NULL`, and so returns `TRUE`. Therefore, you get all the rows from table.
Because for every row the condition
```
NULL is NULL
```
is always true, there is no reason to filter out any rows.

The opposite of the above would be a condition such as
```
1=0
```
With this condition you will get 0 rows, because for every row the condition is false.
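The three-valued logic behind this is easy to see with two one-line checks (`NULL = NULL` evaluates to UNKNOWN, which a `WHERE` clause or `CASE` treats the same as false, while `NULL IS NULL` is plain TRUE):

```
SELECT CASE WHEN NULL = NULL  THEN 'true' ELSE 'not true' END;  -- 'not true'
SELECT CASE WHEN NULL IS NULL THEN 'true' ELSE 'not true' END;  -- 'true'
```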
Understand the is null query in sql
[ "", "sql", "null", "" ]