I am looking for a neat way to combine consecutive number ranges in a single select statement. Say my table has these records:

```
first_number  last_number
0             9
10            19
20            29
40            49
50            59
70            79
```

Then the output shall be as follows:

```
first_number  last_number
0             29
40            59
70            79
```

This is what I've come up with:

```
select first_number, last_number_of_range
from (
  select first_number, is_continuing, is_continued,
         nvl(lead(last_number, 1, null) over (order by first_number), last_number) as last_number_of_range
  from (
    select *
    from (
      select first_number, last_number,
             case when lag(last_number, 1, null) over (order by first_number) + 1 = first_number
                  then 1 else 0 end as is_continuing,
             case when lead(first_number, 1, null) over (order by last_number) - 1 = last_number
                  then 1 else 0 end as is_continued
      from (
        select 0 as first_number, 9 as last_number from dual union all
        select 10 as first_number, 19 as last_number from dual union all
        select 20 as first_number, 29 as last_number from dual union all
        select 40 as first_number, 49 as last_number from dual union all
        select 50 as first_number, 59 as last_number from dual union all
        select 70 as first_number, 79 as last_number from dual
      )
    )
    where is_continuing = 0 or is_continued = 0 -- remove all but first and last of consecutive records
  )
)
where is_continuing = 0 -- now at last remove those records that gave us the last_number_of_range
;
```

This works fine. Only, it looks so complicated for so small a task. I would be interested to know whether there is a much more straightforward way than mine.
Please try:

```
with T1 as (
  select row_number() over (order by first_number) RNum,
         first_number, last_number
  from yourtable
),
T (RNUM, first_number, last_number, CNT) as (
  select T1.*, 1 CNT
  from T1
  where RNum = 1
  union all
  select b.RNUM, b.first_number, b.last_number,
         (case when b.first_number = T.last_number + 1 then T.CNT else T.CNT + 1 end) CNT
  from T1 b
  inner join T on b.RNum = T.RNum + 1
)
select min(first_number) as first_number,
       max(last_number) as last_number
from T
group by T.CNT
```

[SQL fiddle demo](http://sqlfiddle.com/#!4/e0c64/1)
Here is another approach, which will give you the desired output.

```
select min(first_number) as first_number,
       max(last_number) as last_number
from (
  select first_number, last_number,
         sum(grp) over (order by first_number) as grp
  from (
    select first_number, last_number,
           case when first_number <> lag(last_number) over (order by first_number) + 1
                then 1 else 0 end as grp
    from t1
  )
)
group by grp
order by 1
```

Result:

```
FIRST_NUMBER LAST_NUMBER
------------ -----------
           0          29
          40          59
          70          79
```

[**SQLFiddle demo**](http://sqlfiddle.com/#!4/1d799/21)
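For intuition, the flag-plus-running-sum trick used in the answer above (the classic "gaps and islands" pattern) can be mirrored in plain Python. The function name and data here are purely for illustration:

```python
# Pure-Python sketch of the "gaps and islands" trick: flag each row that
# starts a new group, keep a running sum of the flags, then take
# min(first)/max(last) per group -- the same steps as the SQL query above.

def merge_consecutive(ranges):
    ranges = sorted(ranges)
    groups = {}
    grp = -1
    prev_last = None
    for first, last in ranges:
        # flag = 1 when this range does not continue the previous one;
        # grp is the running sum of flags so far
        if prev_last is None or first != prev_last + 1:
            grp += 1
        groups.setdefault(grp, []).append((first, last))
        prev_last = last
    # min(first)/max(last) per group corresponds to the GROUP BY grp step
    return [(grp_rows[0][0], grp_rows[-1][1]) for grp_rows in groups.values()]

ranges = [(0, 9), (10, 19), (20, 29), (40, 49), (50, 59), (70, 79)]
print(merge_consecutive(ranges))
# -> [(0, 29), (40, 59), (70, 79)]
```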
How to combine consecutive number ranges
[ "sql", "oracle" ]
I have this query: ``` INSERT INTO emailevents (shopid, userid, emailid, campaignid, variationid, type, createdon) VALUES ($1, (SELECT id FROM users WHERE mongoid=$2), (SELECT id FROM emails WHERE mongoid=$3), (SELECT id FROM campaigns WHERE mongoid=$4) AS cid, (SELECT id FROM campaignvariations WHERE templateid=(SELECT id FROM templates WHERE mongoid=$5) AND campaignid=cid), $6, to_timestamp($7)) ``` and I'm getting this error:`Query failed: ERROR: syntax error at or near "AS"` I've tried putting cid inside the bracket, without success. How should I use the alias?
Try changing the `values` to a `select`, as below:

```
INSERT INTO emailevents (shopid, userid, emailid, campaignid, variationid, type, createdon)
select $1,
       (SELECT id FROM users WHERE mongoid=$2),
       (SELECT id FROM emails WHERE mongoid=$3),
       (SELECT id FROM campaigns WHERE mongoid=$4) AS cid,
       (SELECT id FROM campaignvariations
        WHERE templateid=(SELECT id FROM templates WHERE mongoid=$5)
          AND campaignid=(SELECT id FROM campaigns WHERE mongoid=$4)),
       $6,
       to_timestamp($7)
```

I think you should replace `cid` with `(SELECT id FROM campaigns WHERE mongoid=$4)` for the `variationid` column, since a select-list alias is not visible to its sibling expressions.
One fix is to change the `values` to a `select`:

```
INSERT INTO emailevents (shopid, userid, emailid, campaignid, variationid, type, createdon)
select $1,
       (SELECT id FROM users WHERE mongoid=$2),
       (SELECT id FROM emails WHERE mongoid=$3),
       (SELECT id FROM campaigns WHERE mongoid=$4) AS cid,
       (SELECT id FROM campaignvariations
        WHERE templateid=(SELECT id FROM templates WHERE mongoid=$5) AND campaignid=cid),
       $6,
       to_timestamp($7);
```

I prefer using `insert . . . select` rather than `insert . . . values` in general. You might be able to just remove the `as cid` if Postgres supports subqueries in the `values` statement.

EDIT: The above fixes the `as` problem, but not the overall problem (the `cid` alias still cannot be referenced from a sibling expression). Let's use `select` and move most of the subqueries to the `from` clause:

```
INSERT INTO emailevents (shopid, userid, emailid, campaignid, variationid, type, createdon)
select const.shopid, u.id, e.id, c.id,
       (SELECT id FROM campaignvariations
        WHERE templateid=(SELECT id FROM templates WHERE mongoid=$5)
          AND campaignid=c.id
       ),
       const.type, const.createdon
from (select $1 as shopid, $6 as type, to_timestamp($7) as createdon) const
     left outer join (SELECT id FROM users WHERE mongoid=$2) u on true
     cross join (SELECT id FROM emails WHERE mongoid=$3) e
     cross join (SELECT id FROM campaigns WHERE mongoid=$4) c;
```
Syntax error for insert query alias in Postgres
[ "", "mysql", "sql", "postgresql", "" ]
**I have this query:**

```
SELECT id, id1, title
FROM tablename
LEFT JOIN tablename AS parent ON tablename.id1 = parent.id
WHERE parent.id is NULL
```

What I'm trying to achieve is that only rows whose parent actually exists are shown.

**Table layout and content**

```
id  id1(parent)  title
1   0            parent
2   1            child1
3   1            child2
4   100          orphan
5   1            child3
6   1            child4
```

In this example I would query all rows but leave out the one which has no existing parent row (row 4, the orphan, whose parent 100 does not exist in tablename).
This

```
SELECT child.id, child.id1, child.name
FROM table AS child       -- alias names, as column names would be ambiguous
LEFT JOIN table AS parent
ON child.id1 = parent.id
WHERE parent.id is NULL   -- only rows with no parent!
```

shows the *orphans*. To get all non-orphans:

```
SELECT child.id, child.id1, child.name
FROM table AS child
JOIN table AS parent      -- JOIN takes care of getting only the records with parents
ON child.id1 = parent.id
```

Why does this work this way? `LEFT JOIN` is for joining tables where we would like to receive rows of the left-hand side table even when the right-hand side table does not have a record that fulfills the join criteria. The columns pertaining to the right-hand side table are all `NULL` in that case. By using a simple `JOIN`, only those rows are shown from the first table that have a matching record in the table on the right-hand side of the join.

**Why did you get the wrong result?**

Column names can become ambiguous when joining tables, and always do so when self-joining. You have to distinguish between them by using alias names.

**Ideas to consider**

Maintainability. Keep this in mind, even for examples. Name your objects properly: `table` is not a descriptive name (and is a keyword too); use `PERSON` instead. For columns, `id1` is not a descriptive name; use `PARENT_ID` instead...
Your query looks correct, except you need table aliases for the columns in the `select`: ``` SELECT table.id, table.id1, table.name FROM table LEFT JOIN table AS parent ON table.id1 = parent.id WHERE parent.id is NULL ; ``` The problem is that both `table` and `parent` have the same columns. The engine doesn't know which you really want without the table alias.
MySQL: only select rows with an existing parent
[ "mysql", "sql", "subquery", "left-join" ]
**Background**

* Framework: CodeIgniter/PyroCMS

I have a DB that stores a list of products. I have a `duplicate` function in my application that first looks for the common product name so it can add a '`suffix`' value to the duplicated product.

**Code in my Products model class**

```
$product = $this->get($id);
$count = $this->db->like('name', $product->name)->get('products')->num_rows();
$new_product->name = $product->name . ' - ' . $count;
```

On the second line the application fails, but only when `$product->name` contains quotes. I was under the impression that CodeIgniter escaped all strings, so I don't know why I get this error. I also tried to use MySQL's escape string function, but that didn't help either.

**The Error Message**

```
A Database Error Occurred
Error Number: 1064
You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 's Book%'' at line 3
SELECT * FROM `products` WHERE `name` LIKE '%Harry\\'s Book%'
```

**var\_dump**

Below is the output of doing a `var_dump` on `$product->name` before and after the line in question:

```
string 'Harry's Book' (length=12)

A Database Error Occurred
Error Number: 1064
You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 's Book%'' at line 3
SELECT * FROM `products` WHERE `name` LIKE '%Harry\\'s Book%'
```
Let's do some testing on this. Here is what you are doing:

```
$count = $this->db->like('name', $product->name)->get('products')->num_rows();
```

And I suspect `$product->name` contains this:

Harry's Book

As we know, this is coming from the database table. Where you use the query mentioned above, it wraps the value in single quotes and produces this result:

```
SELECT * FROM `products` WHERE `name` LIKE '%Harry\\'s Book%'
```

As you see, it is escaping the apostrophe to signal that it is not the end of the string, and is therefore escaping it with two slashes: one for the apostrophe and one for being inside single quotes. What you have to do is wrap the parameter in double quotes before passing it to the query:

```
$product_name = "$product->name";
```

And now pass it to the query:

```
$count = $this->db->like('name', $product_name)->get('products')->num_rows();
```

The output will be this:

```
SELECT * FROM `products` WHERE `name` LIKE '%Harry\'s Book%'
```

You see the difference here: it contains a single slash now, and the record will be found.
Other answers didn't work for me, this does though: ``` $count = $this->db->query("SELECT * FROM `default_firesale_products` WHERE `title` LIKE '".addslashes($product['title'])."'")->num_rows(); ``` Whenever CI Active Record mangles your queries you can always just put a raw query in instead and have full control.
Active record query failed - Escape quote from query
[ "sql", "codeigniter", "activerecord", "pyrocms" ]
I wish to replace all occurrences of `?` with a single empty space within a table, for all columns. Example:

```
This is a?string
```

would become

```
This is a string
```

Currently I have found out how to do it for one column only:

```
UPDATE tableName SET columnName = REPLACE(columnName,'"','\'')
```

Is there a way to select and apply it to all columns? I don't wish to just 'try' it, as if the table becomes corrupted or deleted it will cause a lot of upset. Any help would be greatly appreciated.
Try before you buy:

```
SELECT Replace(columnName1, '?', ' ')
     , Replace(columnName2, '?', ' ')
     , Replace(columnName3, '?', ' ')
FROM your_table
```

Then update if you're happy with the results:

```
UPDATE your_table
SET columnName1 = Replace(columnName1, '?', ' ')
  , columnName2 = Replace(columnName2, '?', ' ')
  , columnName3 = Replace(columnName3, '?', ' ')
```

---

## Update

You may wish to limit the effect of your query. As it stands it will apply to every row in your table, regardless of whether any question marks exist in the column values or not. You should therefore consider adding a `WHERE` clause that checks for their existence:

```
SELECT Replace(columnName1, '?', ' ')
FROM your_table
WHERE Locate('?', columnName1) > 0
```
You cannot do it for all columns automatically; you'll need to list the columns individually. You can still do it with one statement, though:

```
UPDATE tableName SET
    columnName1 = REPLACE(columnName1,'"','\''),
    columnName2 = REPLACE(columnName2,'"','\''),
    columnName3 = REPLACE(columnName3,'"','\''),
    ...
    columnNameN = REPLACE(columnNameN,'"','\'')
```
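Since the column list has to be written out by hand, one option for tables with many columns is to generate the repetitive `SET` clause with a small throwaway script. Here is a Python sketch; the table and column names are placeholders:

```python
# Build the repetitive "col = REPLACE(col, '?', ' ')" SET clause for a list
# of columns, instead of typing each assignment by hand.
columns = ["columnName1", "columnName2", "columnName3"]

set_clause = ",\n    ".join(
    f"{c} = Replace({c}, '?', ' ')" for c in columns
)
sql = f"UPDATE your_table\nSET {set_clause}"
print(sql)
```

You could paste the printed statement into your client after eyeballing it, which also fits the "try before you buy" advice above.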
String replace within SQL statement
[ "mysql", "sql" ]
I'm having a problem getting the rows I need from my database. The question is simple: I have a table with customer data in it. Now I want to do a mailing (via post) to all my customers, but I only want to send it to every address once. So if any customers live on the same address (street, number, postal code, city) I only want to send the letter to the oldest person living at that address. The fields I need are title, last name, first name, street, number, ... (for exporting purposes). I tried using group by on the address fields but then I get the error I need to use an aggregate function on the other fields (name, ...) and I don't want to do that... Any suggestions?
```
SELECT title, last_name, first_name, address_line_1, address_line_2, etc
FROM (
    SELECT title, last_name, first_name, address_line_1, address_line_2, etc,
           Row_Number() OVER (PARTITION BY address_line_1, address_line_2, etc
                              ORDER BY date_of_birth ASC) As row_num
    FROM your_table
) As all_duplicatified
WHERE row_num = 1
```

This gives every row a row number. The row number is "reset" on every partition (which in this case is our set of address fields) and the ordering of the numbers within a partition is determined by age (date of birth). Therefore, if we only keep the rows where `row_num = 1`, we get just the eldest person's entry for each address.
Here's how I would do something like this in Oracle: ``` --Create testing table CREATE TABLE UniqueValTest ( fname NVARCHAR2(100), lname NVARCHAR2(100), address NVARCHAR2(100), city NVARCHAR2(50), state NVARCHAR2(2), zip NVARCHAR2(5), age NUMBER, recid NUMBER ); --Create sample data INSERT INTO UniqueValTest (fname, lname, address, city, state, zip, age, recid) VALUES ('JOHN', 'SMITH', '123 MAIN ST', 'JAMESTOWN', 'LA', '12345', 28, 1); INSERT INTO UniqueValTest (fname, lname, address, city, state, zip, age, recid) VALUES ('JENNIFER', 'SMITH', '123 MAIN ST', 'JAMESTOWN', 'LA', '12345', 30, 2); INSERT INTO UniqueValTest (fname, lname, address, city, state, zip, age, recid) VALUES ('RACHEL', 'ALLEN', '225 MAIN ST', 'JAMESTOWN', 'LA', '12345', 25, 3); INSERT INTO UniqueValTest (fname, lname, address, city, state, zip, age, recid) VALUES ('JOSEPH', 'ALLEN', '225 MAIN ST', 'JAMESTOWN', 'LA', '12345', 25, 4); INSERT INTO UniqueValTest (fname, lname, address, city, state, zip, age, recid) VALUES ('MARK', 'MCBRIDE', '228 MAIN ST', 'JAMESTOWN', 'LA', '12345', 55, 5); --Here's the real part, pulling the data with the dedupe and priority CREATE TABLE TestDataPull AS SELECT T.*, ROW_NUMBER() OVER (PARTITION BY lname, address, zip ORDER BY lname, address, zip, age DESC NULLS LAST) AS dupeid FROM UNIQUEVALTEST T; --Now you can easily select your data SELECT fname, lname, address, city, state, zip FROM TestDataPull WHERE dupeid = 1; ```
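For intuition, the `ROW_NUMBER() OVER (PARTITION BY ... ORDER BY ...)` dedupe in both answers can be mimicked in plain Python: group the rows by address, then keep the "row 1" of each group. Field names and rows below are illustrative only:

```python
# Group rows by address and keep the oldest person per group -- the same
# effect as PARTITION BY address ORDER BY age DESC ... WHERE row_num = 1.
from itertools import groupby
from operator import itemgetter

rows = [
    {"name": "JOHN",     "address": "123 MAIN ST", "age": 28},
    {"name": "JENNIFER", "address": "123 MAIN ST", "age": 30},
    {"name": "MARK",     "address": "228 MAIN ST", "age": 55},
]

rows.sort(key=itemgetter("address"))          # groupby needs sorted input
oldest_per_address = [
    max(group, key=itemgetter("age"))         # "ORDER BY age DESC ... row 1"
    for _, group in groupby(rows, key=itemgetter("address"))
]
print([r["name"] for r in oldest_per_address])
# -> ['JENNIFER', 'MARK']
```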
SQL select rows with unique values
[ "sql" ]
SO, **The problem**

I have an issue with row multiplication. In SQL, there is a `SUM()` function which calculates the sum of some field over a set of rows. I want to get the multiplication (product), i.e. for the table

```
+------+
| data |
+------+
|    2 |
|   -1 |
|    3 |
+------+
```

that will be `2*(-1)*3 = -6` as a result. I'm using the *DOUBLE* data type for storing my data values.

**My approach**

From school math it is known that `log(A x B) = log(A) + log(B)` - so that can be used to create the desired expression like:

```
SELECT IF(COUNT(IF(SIGN(`col`)=0,1,NULL)),0,
          IF(COUNT(IF(SIGN(`col`)<0,1,NULL))%2,-1,1) *
          EXP(SUM(LN(ABS(`col`))))) as product
FROM `test`;
```

Here you see the weakness of this method: since `log(X)` is undefined when `X<=0`, I need to count the negative signs before calculating the whole expression. Sample data and a query for this are given [in this fiddle](http://sqlfiddle.com/#!2/96f5c/1). Another weakness is that we need to find out whether there is a 0 among the column values (since this is only a sample; in the real situation I'm going to select the product for some subset of table rows with some condition(s), i.e. I can not simply remove the 0-s from my table, because a zero product is a valid and expected result for some row subsets).

**Specifics**

And now, finally, the main part of my question: how to handle the situation when we have an expression like `X*Y*Z` where `X < MAXF`, `Y < MAXF`, but `X*Y > MAXF` and `X*Y*Z < MAXF` - i.e. a possible data type overflow (here `MAXF` is the limit for the *DOUBLE* MySQL data type). The sample is [here](http://sqlfiddle.com/#!2/1f67d/1). The query above works well, but can I always be sure that it will handle that properly? I.e. maybe there is another case with an overflow issue where some sub-products cause overflow but the entire product is fine (without overflow). Or maybe there is another way to find the row product? Also, the table may possibly contain millions of records (`-1.1<X<=1.1` mainly, but probably with values such as 100 or 1000, i.e. high enough to overflow *DOUBLE* if multiplied a certain number of times, if we have the issue that I've described above) - maybe calculating via `log` will be slow?
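As a sanity check of the sign/zero/log decomposition described above, here is the same computation in plain Python (purely illustrative, not MySQL). Note that summing logarithms also sidesteps the intermediate-overflow worry, since the partial sums stay small even when partial products would not:

```python
# What EXP(SUM(LN(ABS(x)))) with a zero guard and a sign count computes.
import math

def product_via_logs(values):
    if any(v == 0 for v in values):
        return 0.0                                # the COUNT(SIGN(col)=0) guard
    sign = -1 if sum(v < 0 for v in values) % 2 else 1
    return sign * math.exp(sum(math.log(abs(v)) for v in values))

print(product_via_logs([2, -1, 3]))               # -6.0, up to float rounding
```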
If you need this type of calculations often, I suggest you store the signs and the logarithms in separate columns. The signs can be stored as `1` (for positives), `-1` (for negatives) and `0` (for zero.) The logarithm can be assigned for zero as `0` (or any other value) but it should not be used in calculations. Then the calculation would be: ``` SELECT CASE WHEN EXISTS (SELECT 1 FROM test WHERE <condition> AND datasign = 0) THEN 0 ELSE (SELECT 1-2*(SUM(datasign=-1)%2) FROM test WHERE <condition>) END AS resultsign, CASE WHEN EXISTS (SELECT 1 FROM test WHERE <condition> AND datasign = 0) THEN -1 -- undefined log for result 0 ELSE (SELECT SUM(datalog) FROM test WHERE <condition> AND datasign <> 0) END AS resultlog ; ``` This way, you have no overflow problems. You can check the `resultlog` if it exceeds some limits or just try to calculate `resultdata = resultsign * EXP(resultlog)` and see if an error is thrown.
I guess this would work (note that `LOG` needs the absolute value, since it returns NULL for negative inputs, and the negatives have to be counted with `SUM`, not `COUNT`):

```
SELECT IF(MOD(SUM(data < 0), 2) = 1
        , EXP(SUM(LOG(ABS(data)))) * -1
        , EXP(SUM(LOG(ABS(data))))) x
FROM my_table;
```
Get rows product (multiplication)
[ "mysql", "sql" ]
I have following tables: **Mall:** ``` +-----------+----------------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-----------+----------------------+------+-----+---------+----------------+ | MallID | smallint(5) unsigned | NO | PRI | NULL | auto_increment | | Name | varchar(45) | NO | | NULL | | +-----------+----------------------+------+-----+---------+----------------+ ``` **Store:** ``` +------------+----------------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +------------+----------------------+------+-----+---------+----------------+ | StoreID | smallint(5) unsigned | NO | PRI | NULL | auto_increment | | MallID | smallint(5) unsigned | NO | MUL | NULL | | | Name | varchar(45) | NO | | NULL | | | Revenue | int(10) | NO | | NULL | | +------------+----------------------+------+-----+---------+----------------+ ``` **Customer:** ``` +------------+----------------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +------------+----------------------+------+-----+---------+----------------+ | CustomerID | smallint(5) unsigned | NO | PRI | NULL | auto_increment | | StoreID | smallint(5) unsigned | NO | MUL | NULL | | | Name | varchar(45) | NO | | NULL | | | Age | smallint(3) | NO | | NULL | | +------------+----------------------+------+-----+---------+----------------+ ``` `Store.MallID` is foreign key of `Mall.MallID` and `Customer.StoreID` is foreign key of `Store.StoreID` 1 - I want to select all Malls which have total store revenue greater than 100000. `select * from Mall where 100000 < ( select sum(Revenue) from Store where Mall.MallID = Store.MallID);` 2 - I want to select name of Malls which doesn't have any customers. `select Name from Mall where 0 = ( select count(*) from Customer, Store where Mall.MallID = Store.MallID and Store.StoreID = Customer.StoreID);` **Are these queries correct?**
Your queries do what you expect. I would write the second using ANSI-standard `join` syntax and encourage you to use abbreviations as table aliases:

```
select Name
from Mall m
where 0 = (select count(*)
           from Customer c join
                Store s
                on s.MallID = m.MallID
           where s.StoreID = c.StoreID
          );
```

An alternative formulation is to do it all as a single join and aggregation:

```
select m.Name
from Mall m left outer join
     Store s
     on s.MallID = m.MallID left outer join
     Customer c
     on s.StoreID = c.StoreID
group by m.Name
having count(c.CustomerID) = 0;
```

In MySQL I would discourage you from doing the aggregation in a subquery and then joining it in. Although that is a very good SQL solution in general, MySQL actually materializes the derived table for such subqueries, which can sometimes have a negative impact on performance.
They're correct, but there are better (IMHO) ways to write them: ``` SELECT m.* FROM Mall m JOIN (SELECT MallID, SUM(Revenue) totalRev FROM Store GROUP BY MallID HAVING totalRev > 100000) s ON s.MallID = m.MallID SELECT m.Name FROM Mall m LEFT JOIN (SELECT DISTINCT s.MallID FROM Customer c JOIN Store s ON c.StoreID = s.StoreID) s ON m.MallID = s.MallID WHERE s.MallID IS NULL ```
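Conceptually, the "malls without customers" query in both answers is an anti-join, i.e. a set difference: all malls, minus the malls reachable through store -> customer. A tiny Python sketch with made-up rows:

```python
# Anti-join in plain Python: malls that no customer's store belongs to.
malls = {1: "North Mall", 2: "South Mall"}   # MallID -> Name
stores = {10: 1, 11: 2}                      # StoreID -> MallID
customers = [(100, 10)]                      # (CustomerID, StoreID)

# Malls reachable via a customer -> store -> mall chain
malls_with_customers = {stores[store_id] for _, store_id in customers}

no_customer_malls = [name for mall_id, name in malls.items()
                     if mall_id not in malls_with_customers]
print(no_customer_malls)
# -> ['South Mall']
```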
Are these queries correct?
[ "mysql", "sql", "foreign-keys", "subquery" ]
I have written a program where I can make a request for an identification card. There are different types of identification cards (Red, Blue, Green). During the request, the program should generate identification numbers. The numbers (the range of the numbers) depend on which card is requested.

```
Red Card: 1 - 50000
Blue Card: 50001 - 100000
Green Card: 100001 - 150000
```

If I add new identification cards to the system, the sequence should automatically create a new range of numbers for that newly added identification card. The numbers should not recur; one number should only be used one time. How can I do that? Can anyone help me with that?
Edit #1: I updated trigger (`IF UPDATE`), stored procedure and last two examples. ``` CREATE TABLE dbo.CustomSequence ( CustomSequenceID INT IDENTITY(1,1) PRIMARY KEY, SequenceName NVARCHAR(128) NOT NULL, -- or SYSNAME UNIQUE(SequenceName), RangeStart INT NOT NULL, RangeEnd INT NOT NULL, CHECK(RangeStart < RangeEnd), CurrentValue INT NULL, CHECK(RangeStart <= CurrentValue AND CurrentValue <= RangeEnd) ); GO CREATE TRIGGER trgIU_CustomSequence_VerifyRange ON dbo.CustomSequence AFTER INSERT, UPDATE AS BEGIN IF (UPDATE(RangeStart) OR UPDATE(RangeEnd)) AND EXISTS ( SELECT * FROM inserted i WHERE EXISTS ( SELECT * FROM dbo.CustomSequence cs WHERE cs.CustomSequenceID <> i.CustomSequenceID AND i.RangeStart <= cs.RangeEnd AND i.RangeEnd >= cs.RangeStart ) ) BEGIN ROLLBACK TRANSACTION; RAISERROR(N'Range overlapping error', 16, 1); END END; GO --TRUNCATE TABLE dbo.CustomSequence INSERT dbo.CustomSequence (SequenceName, RangeStart, RangeEnd) SELECT N'Red Card', 1, 50000 UNION ALL SELECT N'Blue Card', 50001, 100000 UNION ALL SELECT N'Green Card', 100001, 150000; GO -- Test for overlapping range INSERT dbo.CustomSequence (SequenceName, RangeStart, RangeEnd) VALUES (N'Yellow Card', -100, +100); GO /* Msg 50000, Level 16, State 1, Procedure trgIU_CustomSequence_VerifyRange, Line 20 Range overlapping error Msg 3609, Level 16, State 1, Line 1 The transaction ended in the trigger. The batch has been aborted. */ GO -- This procedure tries to reserve CREATE PROCEDURE dbo.SequenceReservation ( @CustomSequenceID INT, -- You could use also @SequenceName @IDsCount INT, -- How many IDs do we/you need ? (Needs to be greather than 0) @LastID INT OUTPUT ) AS BEGIN DECLARE @StartTranCount INT, @SavePoint VARCHAR(32); SET @StartTranCount = @@TRANCOUNT; IF @StartTranCount = 0 -- There is an active transaction ? 
BEGIN BEGIN TRANSACTION -- If not then it starts a "new" transaction END ELSE -- If yes then "save" a save point -- see http://technet.microsoft.com/en-us/library/ms188378.aspx BEGIN DECLARE @ProcID INT, @NestLevel INT; SET @ProcID = @@PROCID; SET @NestLevel = @@NESTLEVEL; SET @SavePoint = CONVERT(VARCHAR(11), @ProcID) + ',' + CONVERT(VARCHAR(11), @NestLevel); SAVE TRANSACTION @SavePoint; END BEGIN TRY UPDATE dbo.CustomSequence SET @LastID = CurrentValue = ISNULL(CurrentValue, 0) + @IDsCount WHERE CustomSequenceID = @CustomSequenceID; IF @@ROWCOUNT = 0 RAISERROR(N'Invalid sequence', 16, 1); COMMIT TRANSACTION; END TRY BEGIN CATCH IF @StartTranCount = 0 BEGIN ROLLBACK TRANSACTION; END ELSE -- @StartTranCount > 0 BEGIN ROLLBACK TRANSACTION @SavePoint END DECLARE @ErrorMessage NVARCHAR(2048), @ErrorSeverity INT, @ErrorState INT; SELECT @ErrorMessage = ERROR_MESSAGE(), @ErrorSeverity = ERROR_SEVERITY(), @ErrorState = ERROR_STATE(); RAISERROR (@ErrorMessage, @ErrorSeverity, @ErrorState); END CATCH; END; GO SELECT * FROM dbo.CustomSequence; GO -- Example usage #1 DECLARE @LastID INT; EXEC dbo.SequenceReservation @CustomSequenceID = 1, -- Red Card @IDsCount = 2, -- How many IDs ? @LastID = @LastID OUTPUT; SELECT @LastID - 2 + 1 AS [FirstID], @LastID AS [LastID]; GO -- Example usage #2 DECLARE @LastID INT; EXEC dbo.SequenceReservation @CustomSequenceID = 1, -- Red Card @IDsCount = 7, -- How many IDs ? @LastID = @LastID OUTPUT; SELECT @LastID - 7 + 1 AS [FirstID], @LastID AS [LastID]; SELECT * FROM dbo.CustomSequence; GO ``` Results: ``` CustomSequenceID SequenceName RangeStart RangeEnd CurrentValue ---------------- ------------ ----------- ----------- ------------ 1 Red Card 1 50000 9 2 Blue Card 50001 100000 NULL 3 Green Card 100001 150000 NULL ```
You can use an `INSTEAD OF INSERT` trigger for this:

```
create table Cards_Types (Color nvarchar(128) primary key, Start int);
create table Cards (ID int primary key, Color nvarchar(128));

insert into Cards_Types
select 'RED', 0 union all
select 'BLUE', 50000 union all
select 'GREEN', 100000;

create trigger utr_Cards_Insert on Cards
instead of insert
as
begin
    insert into Cards (id, Color)
    select
        isnull(C.id, CT.Start) + row_number() over(partition by i.Color order by i.id),
        i.Color
    from inserted as i
        left outer join Cards_Types as CT on CT.Color = i.Color
        outer apply
        (
            select max(id) as id
            from Cards as C
            where C.Color = i.Color
        ) as C
end
```

**`sql fiddle demo`**

It allows you to insert many rows at once:

```
insert into Cards (Color)
select 'GREEN' union all
select 'GREEN' union all
select 'RED' union all
select 'BLUE'
```

Note that you'd better have an index on the Cards columns `Color, ID`. Also note that this way you can insert only 50000 records for each type. You can instead use different seeds, for example 1 for 'RED', 2 for 'BLUE' and so on, and reserve room for, say, 100 **types** of cards:

```
create table Cards_Types (Color nvarchar(128) primary key, Start int);
create table Cards (ID int primary key, Color nvarchar(128));

insert into Cards_Types
select 'RED', 1 union all
select 'BLUE', 2 union all
select 'GREEN', 3;

create trigger utr_Cards_Insert on Cards
instead of insert
as
begin
    insert into Cards (id, Color)
    select
        isnull(C.id, CT.Start - 100) + row_number() over(partition by i.Color order by i.id) * 100,
        i.Color
    from inserted as i
        left outer join Cards_Types as CT on CT.Color = i.Color
        outer apply
        (
            select max(id) as id
            from Cards as C
            where C.Color = i.Color
        ) as C
end;
```

**`sql fiddle demo`**

This way an ID for 'RED' will always end in 1, an ID for 'BLUE' in 2, and so on.
Create Sequence in MS SQL Server 2008
[ "sql", "sql-server", "sql-server-2008" ]
I have a problem with MySQL version 5.6. Sorry if my English is bad, and sorry if the format is not the right one; I'm in a hurry. MySQL throws me this error, "#1241 - Operand should contain 1 column(s)", from this query:

```
SELECT DISTINCT (P.`P_nombre`, P.`P_raza`)
FROM `Perros` AS P, `Adiestramientos` as A
WHERE P.`P_codigo` = A.`P_codigo`
  AND A.`A_nroLegajo` = '1500'
  AND P.`P_codigo` NOT IN(
    SELECT A.`P_codigo`
    FROM `Adiestramientos` as A
    WHERE A.`A_nroLegajo`= '4600'
  )
```

It seems to work fine on MySQL 5.0; the problem seems to be with the IN operator. Hope you can help me. Thanks!
The parentheses after `DISTINCT` make MySQL parse `(P.P_nombre, P.P_raza)` as a single row expression with two columns, which is what triggers the "Operand should contain 1 column(s)" error, so drop them. You're also using the `A` identifier for the same table twice, in both the inner and the outer query; it's clearer to change one into something different, e.g. `A2`:

```
SELECT DISTINCT P.`P_nombre`, P.`P_raza`
FROM `Perros` AS P, `Adiestramientos` as A
WHERE P.`P_codigo` = A.`P_codigo`
  AND A.`A_nroLegajo` = '1500'
  AND P.`P_codigo` NOT IN(
    SELECT A2.`P_codigo`
    FROM `Adiestramientos` as A2
    WHERE A2.`A_nroLegajo`= '4600'
  )
```
The [error](http://dev.mysql.com/doc/refman/5.0/en/subquery-errors.html) you get means that MySQL expected a single-column operand but found a row expression: wrapping the select list in parentheses after `DISTINCT` makes `(a, b)` one operand with two columns. Remove those parentheses, and also rename the `A` identifier, which is used twice. Try this:

```
SELECT DISTINCT P.`P_nombre`, P.`P_raza`
FROM `Perros` AS P, `Adiestramientos` as A
WHERE P.`P_codigo` = A.`P_codigo`
  AND A.`A_nroLegajo` = '1500'
  AND P.`P_codigo` NOT IN(
    SELECT A1.`P_codigo`
    FROM `Adiestramientos` as A1
    WHERE A1.`A_nroLegajo`= '4600'
  )
```
#1241 - Operand should contain 1 column(s) IN
[ "mysql", "sql", "distinct", "mysql-error-1241" ]
The SQL below fails and gives me an error:

```
Incorrect syntax near '-'.
```

The variable @valu1 is a GUID that has "-" characters in it. I am using it as a varchar.

```
Declare @valu1 as varchar(1000)
Declare @valu2 as varchar(200)
Declare @sqlStr as nvarchar(2000)

Select @valu1 = GUID, @valu2 = RationaleText
From dbo.tblll
Where column = 'New'

SET @sqlStr = N'Insert Into dbo.table1
Select newid() as ChangeID,
       GETDATE() as DateModified,
       a.col1 as col1,
       c.col2 as col2,
       d.col3 as col3, ' + @valu1 + ' as valu1, ' + @valu2 + ' as valu2
From table1 a
Inner Join table2 b On a.col1 = b.col7
Left Join table2 c On b.col1 = c.col2
Left Join table3 d On b.col1 = d.col3
Order By a.col1'

PRINT @sqlStr
Exec(@sqlStr)
```

When I print the above SQL, I get:

```
Insert Into dbo.table1
Select newid() as ChangeID, GETDATE() as DateModified, a.col1 as col1, c.col2 as col2, d.col3 as col3, DD989A2A-5B9C-4369-81E1-13C638F1C555 as valu1, New as valu2
From table1 a
Inner Join table2 b On a.col1 = b.col7
Left Join table2 c On b.col1 = c.col2
Left Join table3 d On b.col1 = d.col3
Order By a.col1
```
``` SET @sqlStr =N'Insert Into dbo.table1 Select newid() as ChangeID, GETDATE() as DateModified, a.col1 as col1, c.col2 as col2, d.col3 as col3, ''' + @valu1 + ''' as valu1, ''' + @valu2 + ''' as valu2 From table1 a ``` Notice the extra quotes. Also note that this is incredibly vulnerable to sql injection attacks. You should investigate using [sp\_executesql](http://technet.microsoft.com/en-us/library/ms188001.aspx) instead of concatenation.
Please try putting more quotes around the variables: ``` ''' + @valu1 + ''' as valu1 ```
Dynamic T-SQL error
[ "sql", "sql-server", "t-sql" ]
I am new to Oracle SQL and facing an issue: I want to create a temporary table inside a procedure, like:

```
CREATE PROCEDURE P AS
BEGIN
  CREATE TEMPORARY TABLE A(ID int);
END P;
```

But this is giving me an error. How can I create a temporary table inside a procedure? I have seen other answers on Stack Overflow but they don't answer my question properly. Can you please help me out?
You could use dynamic SQL with [EXECUTE IMMEDIATE](http://docs.oracle.com/cd/B28359_01/appdev.111/b28370/executeimmediate_statement.htm) (note that Oracle temporary tables are *global* temporary tables, so the DDL is `CREATE GLOBAL TEMPORARY TABLE`):

```
CREATE OR REPLACE PROCEDURE p AS
BEGIN
  EXECUTE IMMEDIATE 'CREATE GLOBAL TEMPORARY TABLE A(id NUMBER)...etc';
END p;
```

Edit: Obviously you'll have to ensure your syntax is correct within the EXECUTE IMMEDIATE statement. Hope it helps.
Why do you want to create a temporary table in a stored procedure in the first place? It is relatively common to create temporary tables in other databases (SQL Server and MySQL, for example). It is very, very rare to do the same thing in Oracle. In almost every case where you are tempted to create a temporary table in Oracle, there is a better architectural approach. There is a thread over on the DBA stack that discusses [alternatives to temporary tables](https://dba.stackexchange.com/questions/34279/temporary-table-inside-procedure-oracle/34322#34322) and why they are not commonly needed in Oracle. Programmatically, you can create objects using dynamic SQL ``` CREATE OR REPLACE PROCEDURE dont_do_this AS BEGIN EXECUTE IMMEDIATE 'CREATE GLOBAL TEMPORARY TABLE a( id INTEGER )'; END; ``` If you create a temporary table dynamically, however, every reference to that table will also need to be via dynamic SQL-- you won't be able to write simple `SELECT` statements against the table. And the definition of a temporary table in Oracle is global so it is visible to every session. If you have two different sessions both trying to create the same table, the second session will get an error. If you expect the table to have a different definition in different sessions, you've got even more problems.
Oracle SQL: Procedure which can create temporary tables inside it
[ "", "sql", "oracle", "stored-procedures", "" ]
I have a table my_table that has columns state, month, ID, and sales. My goal is to merge different rows that have the same state, month, and ID into one row while summing the sales column of these selected rows into the merged row. For example: ```
state  month    ID    sales
---------------------------
FL     June     0001  12,000
FL     June     0001   6,000
FL     June     0001   3,000
FL     July     0001   6,000
FL     July     0001   4,000
TX     January  0050   1,000
MI     April    0032   5,000
MI     April    0032   8,000
CA     April    0032   2,000
``` This is what I am supposed to get ```
state  month    ID    sales
---------------------------
FL     June     0001  21,000
FL     July     0001  10,000
TX     January  0050   1,000
CA,MI  April    0032  15,000
```
You need to use `GROUP_CONCAT` for that: ``` SELECT GROUP_CONCAT(DISTINCT State) AS State , Month, ID, SUM(Sales) FROM Table1 GROUP BY Month, ID; ``` ### See [this SQLFiddle](http://sqlfiddle.com/#!2/a90ef/6) --- ### Update (For SQL Server) For SQL Server you can use `STUFF()` for that: ``` SELECT state = STUFF((SELECT DISTINCT ' , ' + state FROM Table1 b WHERE b.ID = a.ID AND b.month = a.month FOR XML PATH('')), 1, 2, '') ,month, ID, SUM(sales) AS Sales FROM Table1 a GROUP BY month,ID; ``` ### See [this SQLFiddle](http://sqlfiddle.com/#!3/49bf2/4)
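As a quick check of that grouping logic, here is a sketch using Python's stdlib `sqlite3` module; SQLite provides the same `group_concat()` aggregate as MySQL, and the data mirrors the question:

```python
import sqlite3

# SQLite ships the same group_concat() aggregate as MySQL, so the chosen
# query can be exercised directly on the question's sample data.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (state TEXT, month TEXT, id TEXT, sales INTEGER);
INSERT INTO sales VALUES
 ('FL','June','0001',12000), ('FL','June','0001',6000), ('FL','June','0001',3000),
 ('FL','July','0001',6000), ('FL','July','0001',4000),
 ('TX','January','0050',1000),
 ('MI','April','0032',5000), ('MI','April','0032',8000),
 ('CA','April','0032',2000);
""")
rows = conn.execute("""
    SELECT group_concat(DISTINCT state) AS state, month, id, SUM(sales)
    FROM sales
    GROUP BY month, id
""").fetchall()
for row in rows:
    print(row)
```

The April group comes back as a single row with the distinct states concatenated (the order inside `group_concat` is not guaranteed) and the sales summed to 15,000.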
Please try this: ```
SELECT B.STATE, A.ID, SUM(A.SALES) AS SALES
FROM MYTABLE A
INNER JOIN (SELECT DISTINCT CASE WHEN A.state = B.STATE THEN A.STATE
                                 ELSE A.STATE+','+B.STATE END AS STATE, ID
            FROM MYTABLE A
            INNER JOIN MYTABLE B ON A.ID = B.ID) B ON A.STATE = B.STATE
GROUP BY B.state, A.month, A.ID
``` Let me know in case the query needs any rectification.
Aggregate with row merge
[ "", "mysql", "sql", "" ]
This query works fine for each and every test case if you choose the person's last name: ``` SELECT P.Patient_ID, P.Person_FirstName, P.Person_LastName , H.Height , W.Weight , CASE when H.Height is not null then W.Weight / (H.Height * H.Height) else null END as BMI FROM VIEW_BillPatient P LEFT JOIN H on P.Patient_ID = H.Patient_ID LEFT JOIN W on P.Patient_ID = W.Patient_ID WHERE P.Person_LastName = 'ZZtest' ``` This even works when H.Height is `null` or W.Weight is `null`. Unfortunately, as soon as I take the `WHERE` clause off and try to run it for everybody, I get a divide by zero error. ``` SELECT P.Patient_ID, P.Person_FirstName, P.Person_LastName , H.Height , W.Weight , CASE when H.Height is not null then W.Weight / (H.Height * H.Height) else null END as BMI FROM VIEW_BillPatient P LEFT JOIN H on P.Patient_ID = H.Patient_ID LEFT JOIN W on P.Patient_ID = W.Patient_ID ``` Error: ``` Msg 8134, Level 16, State 1, Line 1 Divide by zero error encountered. ``` (Sub-queries H and W respectively return a Patient\_ID and the last value for Height or Weight in the database, as usually there is more than one recorded (or null if none is found). The formula is to calculate Body Mass Index as seen in the code) What am I doing wrong? (MS SQL SERVER 2008-R2)
Change it to this: ``` SELECT P.Patient_ID, P.Person_FirstName, P.Person_LastName , H.Height , W.Weight , CASE when H.Height != 0 then W.Weight / (H.Height * H.Height) else null END as BMI FROM VIEW_BillPatient P LEFT JOIN H on P.Patient_ID = H.Patient_ID LEFT JOIN W on P.Patient_ID = W.Patient_ID ``` When H.Height is NULL, the condition `H.Height != 0` does not evaluate to true, so you still get NULL in that case; and now you also get NULL for a height of 0 instead of a divide-by-zero error.
You could use `nullif(expr1,expr2)`. > Returns a null value if the two specified expressions are equal. In this case, `BMI` will be `null` if `height` is `0` or `null` or if `weight` is `null` ``` SELECT P.Patient_ID, P.Person_FirstName, P.Person_LastName , H.Height , W.Weight , W.Weight / nullif(H.Height * H.Height, 0) as BMI FROM VIEW_BillPatient P LEFT JOIN H on P.Patient_ID = H.Patient_ID LEFT JOIN W on P.Patient_ID = W.Patient_ID WHERE P.Person_LastName = 'ZZtest' ``` ... or simply change your case condition to explicitly avoid `0`. You'll get `null` if `height` is `0` or `null` or if `weight` is `null` ``` SELECT P.Patient_ID, P.Person_FirstName, P.Person_LastName , H.Height , W.Weight , case when H.Height != 0 then W.Weight / (H.Height * H.Height) end as BMI FROM VIEW_BillPatient P LEFT JOIN H on P.Patient_ID = H.Patient_ID LEFT JOIN W on P.Patient_ID = W.Patient_ID WHERE P.Person_LastName = 'ZZtest' ```
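The `NULLIF` pattern can be sketched with Python's stdlib `sqlite3` module. Note this only illustrates the pattern: unlike SQL Server, SQLite itself already yields NULL rather than an error on division by zero, and the table and values here are made up:

```python
import sqlite3

# NULLIF(height * height, 0) turns a zero denominator into NULL, and any
# arithmetic involving NULL yields NULL, so no divide-by-zero can occur.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patient (weight REAL, height REAL)")
conn.executemany("INSERT INTO patient VALUES (?, ?)",
                 [(80.0, 1.8), (70.0, 0.0), (65.0, None)])
bmi = dict(conn.execute(
    "SELECT weight, weight / NULLIF(height * height, 0) FROM patient"
).fetchall())
print(bmi)
```

The rows with a zero or missing height come back with a NULL BMI instead of raising an error.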
Divide by zero with SQL
[ "", "sql", "sql-server", "" ]
This is just purely for eye candy while working with SQLite in the terminal, but is there a way to format column width with headers so that each header is resized appropriately (and independently of the other columns)? In other words, here's the output with ``` .width auto ``` for a simple table ``` Id Name Price ---------- ---------- ---------- 1 Audi 52642 2 Mercedes 57127 3 Skoda 9000 4 Volvo 29000 5 Bentley 350000 6 Citroen 21000 7 Hummer 41400 8 Volkswagen 21600 ``` It does what I'd expect. It resizes each column so that the longest item in any one column can be displayed. However, I'd like to automatically have the output formatted such that each column is wide enough for the longest item in *only its* column. In other words, I don't want to have to type in ``` .width 2 10 5 ``` after the fact to get this output ``` Id Name Price -- ---------- ----- 1 Audi 52642 2 Mercedes 57127 3 Skoda 9000 4 Volvo 29000 5 Bentley 35000 6 Citroen 21000 7 Hummer 41400 8 Volkswagen 21600 ``` Is there something I can do to automate column sizing correctly?
The `sqlite3` tool has no such function. You would have to compute the column widths by hand (`SELECT max(length(col1)) ...`).
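A sketch of that by-hand computation, using Python's stdlib `sqlite3` module and the cars table from the question; it also widens each column to fit its header, then emits a `.width` command you could paste into the shell:

```python
import sqlite3

# Compute each column's display width as the larger of the longest value
# (SELECT max(length(col))) and the header name, then emit a .width command.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE cars (id INTEGER, name TEXT, price INTEGER);
INSERT INTO cars VALUES
 (1,'Audi',52642),(2,'Mercedes',57127),(3,'Skoda',9000),(4,'Volvo',29000),
 (5,'Bentley',350000),(6,'Citroen',21000),(7,'Hummer',41400),(8,'Volkswagen',21600);
""")
cols = ["id", "name", "price"]
widths = []
for col in cols:
    longest = conn.execute(f"SELECT max(length({col})) FROM cars").fetchone()[0]
    widths.append(max(longest, len(col)))  # the header has to fit as well
command = ".width " + " ".join(str(w) for w in widths)
print(command)
```

For this data the widest values are the two-character header `id`, the ten-character `Volkswagen`, and the six-digit `350000`.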
For "human readable" output, you can use `column` mode, and turn `header` output on. That will get you something similar to the `sqlplus` output in your examples: ``` sqlite> select * from foo; 234|kshitiz|dba.se sqlite> .mode column sqlite> select * from foo; 234 kshitiz dba.se sqlite> .headers on sqlite> select * from foo; bar baz baf ---------- ---------- ---------- 234 kshitiz dba.se ```
SQLite3 Terminal Output Formatting .width
[ "", "sql", "sqlite", "terminal", "formatting", "" ]
I have 2 procedures: `A` and `B`. `A` calls `B` and `B` calls `A`. I need to block calling if the procedure is already called from another one. How to check that?
MySQL [user-defined variables](http://dev.mysql.com/doc/refman/5.6/en/user-variables.html) are global to your session. So you could set a variable to TRUE in procedure A and then check it in procedure B. Procedure A: ``` BEGIN IF NOT @called = 1 THEN SET @called := 1; CALL B(); SET @called := NULL; END IF; END ``` Procedure B: ``` BEGIN IF NOT @called = 1 THEN SET @called := 1; CALL A(); SET @called := NULL; END IF; END ``` These variables are global to the session, that is each session gets its own version of the variable. So you don't have a problem if multiple user sessions are calling procedures. But you do have a problem that a user-defined variable persists until the end of the current session. That's why I show setting the variable to NULL after the call to the other proc. However, if the proc is interrupted before setting the variable to NULL, you could get some wrong results the next time you try calling a proc.
As a variant, you may check some value from a table, e.g. IsExecFromA, and check this in proc B: if IsExecFromA = true, then do not execute proc A again, and so on.
How to break recursive calls between procedures on MySQL?
[ "", "mysql", "sql", "" ]
I'm trying to pull all the records for a given engineer that have an overall score of 16 or higher. value_money, quality_service, friendliness, and response_time are all fields in the same table. I'm trying to add these together and filter on the sum in a WHERE clause. This is what I've got so far: ```
SELECT *, value_money+quality_service+friendliness+response_time AS overall
FROM feedback
WHERE engineer LIKE '%Chris Gauden%'
AND overall < 16
ORDER BY date DESC LIMIT 0, 15
``` And this is the error I'm getting: ```
#1054 - Unknown column 'overall' in 'where clause'
```
Alias cannot be used in `WHERE`. Do either ``` SELECT *, value_money+quality_service+friendliness+response_time AS overall FROM feedback WHERE engineer LIKE '%Chris Gauden%' AND value_money+quality_service+friendliness+response_time < 16 ORDER BY date DESC LIMIT 0, 15 ``` or ``` SELECT * FROM ( SELECT *, value_money+quality_service+friendliness+response_time AS overall FROM feedback WHERE engineer LIKE '%Chris Gauden%' )a WHERE overall < 16 ORDER BY date DESC LIMIT 0,15 ```
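The derived-table form can be verified with Python's stdlib `sqlite3` module (the sample feedback rows here are made up for illustration):

```python
import sqlite3

# The derived-table form: compute the alias in an inner query, then filter
# on it in the outer WHERE clause, where it is a real column.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE feedback (engineer TEXT, value_money INT, quality_service INT,
                       friendliness INT, response_time INT);
INSERT INTO feedback VALUES
 ('Chris Gauden', 3, 3, 3, 3),
 ('Chris Gauden', 5, 5, 5, 5),
 ('Someone Else', 2, 2, 2, 2);
""")
rows = conn.execute("""
    SELECT * FROM (
        SELECT *, value_money + quality_service + friendliness + response_time AS overall
        FROM feedback
        WHERE engineer LIKE '%Chris Gauden%'
    ) a
    WHERE overall < 16
""").fetchall()
print(rows)
```

Only the Chris Gauden row whose components sum below 16 survives the outer filter.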
You cannot use an alias in the WHERE clause. You have to use the full `value_money+quality_service+friendliness+response_time` expression instead.
Where multi column adds up over 16
[ "", "mysql", "sql", "" ]
I'll explain what I need to do with an example. First of all, we have a simple table like this one, named *table*: ```
id | name
===+=====
1  | foo
1  | bar
1  | foobar
2  | foo
2  | bar
2  | foobar
``` Now the query: ``` SELECT t.* FROM table t GROUP BY t.id ``` Will get us a result similar to this one: ```
id | name
===+=====
1  | foo
2  | foo
``` But is it possible to collect all values of *name* to have a result like this? ```
id | name
===+=================
1  | foo, bar, foobar
2  | foo, bar, foobar
```
Using MySQL you can use [GROUP\_CONCAT(expr)](http://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html#function_group-concat) > This function returns a string result with the concatenated non-NULL > values from a group. It returns NULL if there are no non-NULL values. > The full syntax is as follows: ``` GROUP_CONCAT([DISTINCT] expr [,expr ...] [ORDER BY {unsigned_integer | col_name | expr} [ASC | DESC] [,col_name ...]] [SEPARATOR str_val]) ``` Something like ``` SELECT ID, GROUP_CONCAT(name) GroupedName FROM Table1 GROUP BY ID ``` ## [SQL Fiddle DEMO](http://www.sqlfiddle.com/#!2/afbca/2)
For SQL Server (**before** 2017) use [`FOR XML`](https://learn.microsoft.com/en-us/sql/relational-databases/xml/for-xml-sql-server) clause and [`STUFF()`](https://learn.microsoft.com/en-us/sql/t-sql/functions/stuff-transact-sql) function for that: ``` SELECT distinct id, name = STUFF((SELECT ' , ' + name FROM Table1 b WHERE b.id = a.id FOR XML PATH('')), 1, 2, '') FROM Table1 a GROUP BY id; ``` ### UPDATE With SQL Server 2017, you can simply use [`STRING_AGG()`](https://learn.microsoft.com/en-us/sql/t-sql/functions/string-agg-transact-sql) function to achieve that: ``` SELECT ID, STRING_AGG (name, ', ') AS Name FROM Table1 GROUP BY ID ``` ### See [this SQLFiddle](http://sqlfiddle.com/#!18/bee79/1)
GROUP BY but get all values from other column
[ "", "sql", "group-by", "" ]
What would be an effective way to sum an integer column over the rows where a certain value is the same? For instance, say I have a table with scores of two different players. ```
id | score | player_id
1      5       1
2      6       1
3      9       2
4      3       2
``` How can I add up a player's scores based on the id? I'm not sure about the last part of this selection: ``` SELECT sum(scores.score) FROM scores WHERE player_id = player_id; ```
I think you want `GROUP BY` ``` SELECT player_id, SUM(score) totalScore FROM tablename GROUP BY player_id ``` * [SQLFiddle Demo](http://sqlfiddle.com/#!2/1d899/2) OUTPUT ``` ╔═══════════╦════════════╗ ║ PLAYER_ID ║ TOTALSCORE ║ ╠═══════════╬════════════╣ ║ 1 ║ 11 ║ ║ 2 ║ 12 ║ ╚═══════════╩════════════╝ ```
You may try it like this: ``` SELECT player_id, SUM(score) as Sums FROM tablename GROUP BY player_id ```
Effective way to add Integers based on table values
[ "", "mysql", "sql", "" ]
I have two tables in SQL Server 2008 - ``` Sales.SalesOrderHeader --> CustomerID(FK, int, not null), OrderDate (datetime, not null), etc... Sales.Individual --> CustomerID(PK, FK, int, not null), ContactID (FK, int, not null), etc... ``` I have to find the the customers (ie CustomerID and the Corresponding ContactID) that ordered something on the last day orders were placed. This query is used to find the last OrderDate ``` select MAX(Soh.OrderDate) from Sales.SalesOrderHeader as Soh ``` Now, the next thing to do is get the CustomerID and Contact ID. I thought of two ways - Using only sub queries and where clauses OR Join and one sub query. The two approaches are shown below: --Style1: Using only subquery ``` select Si.CustomerID, Si.ContactID from Sales.Individual as Si where Si.CustomerID in ( select Soh.CustomerID from Sales.SalesOrderHeader as Soh where Soh.OrderDate = ( select MAX(Soh.OrderDate) from Sales.SalesOrderHeader as Soh ) ) order by Si.CustomerID, Si.ContactID ``` --Style 2: Using inner join ``` select CustOnLastDay.CustomerID, Si.ContactID from ( select Soh.CustomerID, Soh.ContactID from Sales.SalesOrderHeader as Soh where Soh.OrderDate = ( select MAX(Soh.OrderDate) from Sales.SalesOrderHeader as Soh ) ) as CustOnLastDay inner join Sales.Individual as Si on CustOnLastDay.ContactID = Si.ContactID order by Si.CustomerID, Si.ContactID ``` **Question -** Which is better, sub queries only or joins (In general and this case) ? Btw, most of my tables don't have more than 14-15k rows. Thanks.
With JOINs, the RDBMS can create an execution plan that is often faster than the equivalent subquery, and in many cases you will find that JOINs are comparatively faster. However, when the two forms are functionally equivalent, they will perform the same. A subquery may have to be processed for each row of the outer query. [MSDN](http://msdn.microsoft.com/en-us/library/ms189575%28v=sql.105%29.aspx) says: > Many Transact-SQL statements that include subqueries can be > alternatively formulated as joins. Other questions can be posed only > with subqueries. In Transact-SQL, there is usually no performance > difference between a statement that includes a subquery and a > semantically equivalent version that does not. However, in some cases > where existence must be checked, a join yields better performance. > Otherwise, the nested query must be processed for each result of the > outer query to ensure elimination of duplicates. In such cases, a join > approach would yield better results. For example, if you are doing something like this: ``` select * from table1 where exists (select * from table2 where table2.parent = table1.id) ``` then it is better to use a **JOIN**. Check this [Example](http://blog.sqlauthority.com/2010/06/06/sql-server-subquery-or-join-various-options-sql-server-engine-knows-the-best/) which explains the difference between SUBQUERY and JOIN performance: ```
USE AdventureWorks
GO
-- use of =
SELECT * FROM HumanResources.Employee E
WHERE E.EmployeeID = (
 SELECT EA.EmployeeID FROM HumanResources.EmployeeAddress EA
 WHERE EA.EmployeeID = E.EmployeeID)
GO
-- use of in
SELECT * FROM HumanResources.Employee E
WHERE E.EmployeeID IN (
 SELECT EA.EmployeeID FROM HumanResources.EmployeeAddress EA
 WHERE EA.EmployeeID = E.EmployeeID)
GO
-- use of exists
SELECT * FROM HumanResources.Employee E
WHERE EXISTS (
 SELECT EA.EmployeeID FROM HumanResources.EmployeeAddress EA
 WHERE EA.EmployeeID = E.EmployeeID)
GO
-- Use of Join
SELECT * FROM HumanResources.Employee E
INNER JOIN HumanResources.EmployeeAddress EA ON E.EmployeeID = EA.EmployeeID
GO
``` Now compare the execution plans: ![enter image description here](https://i.stack.imgur.com/9cDiJ.jpg)
Joins are generally preferred to multi-level nesting with subqueries. In general joins are faster, as the SQL Server engine can better optimize this type of query.
Which is better - subqueries only or joins?
[ "", "sql", "sql-server", "join", "subquery", "" ]
I'm having a difficult time understanding "on delete cascade". If I had the following example: create table X (id int primary key, name char(10)); create table Y (bid int primary key, aid int references X(id) on delete cascade); X contains one row (111, 'Mike') and Y contains two rows (1000, 111), (2000, 111). If I removed row (2000, 111) in table Y, what would happen? Would that row just be deleted, or would it refuse to let me delete anything because of the reference to the parent table? Thanks
It would be deleted and nothing else would happen. Cascading deletes only go from the referenced table to the referencing table. So a delete on table X will cascade a delete down to table y, while a delete on table y has no impact on table x.
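Both cases can be checked concretely with SQLite via Python's stdlib `sqlite3` module (an illustrative stand-in; note that SQLite only enforces foreign keys when `PRAGMA foreign_keys = ON` is set), using the X/Y tables from the question:

```python
import sqlite3

# Reproduce the X/Y example; deleting a child row cascades nothing, while
# deleting the parent row removes the remaining child automatically.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE x (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE y (bid INTEGER PRIMARY KEY,
                aid INTEGER REFERENCES x(id) ON DELETE CASCADE);
INSERT INTO x VALUES (111, 'Mike');
INSERT INTO y VALUES (1000, 111), (2000, 111);
""")

def counts(conn):
    return (conn.execute("SELECT count(*) FROM x").fetchone()[0],
            conn.execute("SELECT count(*) FROM y").fetchone()[0])

conn.execute("DELETE FROM y WHERE bid = 2000")   # child delete: no cascade
after_child_delete = counts(conn)
conn.execute("DELETE FROM x WHERE id = 111")     # parent delete: cascades
after_parent_delete = counts(conn)
print(after_child_delete, after_parent_delete)
```

After the child delete, both tables still have one row each; after the parent delete, both are empty.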
Nothing else will happen; only if you delete a row from table X will the rows in table Y referencing it be deleted.
"On Delete Cascade" if deleting a row from the child table
[ "", "mysql", "sql", "foreign-keys", "sqlcommand", "cascading-deletes", "" ]
I want to make sure that an email is sent exactly once, so I'm using the following statement in Oracle SQL: ``` update mytable set mail_sent = 't' where id = ? and mail_sent = 'f' ``` and examining the number of rows modified. If no rows were modified, then another process did the same thing first and will send the mail. If 1 row was modified, I send the mail. (Of course, if sending the mail fails, I reset mail_sent. There's a tiny chance of the process crashing and leaving mail_sent at 't', so no mail is sent. I'll live with it.) I can't quite convince myself this is safe against race conditions (process 1 reads 'f' and process 2 reads 'f' before process 1 has written 't', so both processes think they modified the row and 2 emails are sent). I'm setting the isolation level to SERIALIZABLE to avoid the problem, but is this actually necessary, or am I safe without it?
There's a set of Tom Kyte's excellent articles about what happens during a concurrent update, which are worth reading: * <http://tkyte.blogspot.ru/2005/08/something-different-part-i-of-iii.html> * <http://tkyte.blogspot.ru/2005/08/part-ii-seeing-restart.html> * <http://tkyte.blogspot.ru/2005/09/part-iii-why-is-restart-important-to.html> Long story short, in case of two statements doing a concurrent update, the latter one: 1. performs a consistent read (a row's version as of moment of a statement start) 2. checks if the row fits the where condition of your update 3. if it does, it does current mode read - gets the latest committed version of the row - and checks if it's still the same row as in step 1(!), so we're no updating something we didn't intend to 4. If it is not, then the row doesn't get updated and the whole update statement gets restarted, but that's whole other story. As a result, in case your first update commits 't', the second update will never update this row again. You can check it with `sql%rowcount`. A simple test case (36 and 37 are two concurrent sessions here): ``` -- first session updates, locks the row 00:41:44 LKU@sandbox(36)> update mail set mail_sent = 't' where id = 1 and mail_sent = 'f'; 1 row updated. Elapsed: 00:00:00.21 -- second session tries to update the same row, it hangs as the row is locked 00:58:13 LKU@sandbox(37)> update mail set mail_sent = 't' where id = 1 and mail_sent = 'f'; -- first session commits 00:58:27 LKU@sandbox(36)> commit; Commit complete. Elapsed: 00:00:00.00 -- no rows updated in second! 00:58:13 LKU@sandbox(37)> update mail set mail_sent = 't' where id = 1 and mail_sent = 'f'; 0 rows updated. Elapsed: 00:00:33.12 -- time of me switching between sqlplus tabs and copy-pasting text here ;) ``` So, I can conclude that in case you check the amount of rows updated by a session after you perform the update - you are safe.
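A single-connection sketch of the rowcount check, using Python's stdlib `sqlite3` module. SQLite here is only a stand-in: the cross-session safety argument above is specific to Oracle's locking and restart behavior, but the sketch shows why checking the affected-row count works at all:

```python
import sqlite3

# The guarded UPDATE: only the caller whose statement actually flips
# mail_sent from 'f' to 't' sees rowcount == 1; any later attempt
# matches no rows because the WHERE condition no longer holds.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (id INTEGER PRIMARY KEY, mail_sent TEXT)")
conn.execute("INSERT INTO mytable VALUES (1, 'f')")

def claim_mail(conn, row_id):
    cur = conn.execute(
        "UPDATE mytable SET mail_sent = 't' WHERE id = ? AND mail_sent = 'f'",
        (row_id,))
    return cur.rowcount == 1  # True only for the update that changed the row

first_attempt = claim_mail(conn, 1)
second_attempt = claim_mail(conn, 1)
print(first_attempt, second_attempt)
```

Only the first attempt "wins" the row; the second sees zero rows updated and would skip sending the mail.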
One safe way to do this is to select the row for update, which takes an exclusive lock on the row, send the email, then update the record to 't' and commit. The locking of the record is a deliberate design aim of this method. Until the email is confirmed to be sent you do not want to indicate that you have sent it, otherwise you need a recovery process to indicate that transmission actually failed. Similarly when you have started the process of sending an email you do not want another session to start that process. If it is necessary to avoid a longer term lock then I'd suggest breaking out the process into two steps -- setting a flag to confirm that the email transmission process has started (and actually I'd timestamp that), and setting it again (or setting another) to confirm transmission. That's not a bad method in itself, as it allows monitoring of how long it took to get the confirmation, and in my experience some internet requests can be a significant proportion of application time.
Are update statements safe from race conditions?
[ "", "sql", "oracle", "" ]
I have this query and I want to remove results that are = 0. ``` declare @Teacher as nvarchar(50) ='Professor David' select 'Science Class' as 'Study Type', (select Count(Distinct StudentID) from Table_class_SClass where Grade = 'Passed' and Teacher= @Teacher) as 'Number of Passing Students' union select 'Science Lab' as 'Study Type', (select Count(Distinct StudentID) from Table_class_SLab where Grade = 'Passed' and Teacher= @Teacher) as 'Number of Passing Students' union select 'Science Field' as 'Study Type', (select Count(Distinct StudentID) from Table_class_field where Grade = 'Passed' and Teacher= @Teacher) as 'Number of Passing Students' ``` I want to store this as a stored procedure, but I want to eliminate the rows coming out of the union where the declared teacher 'Professor David' has a count of 0. The results showing are: ```
Study Type      Number Of passing Students
Science Class   8
Science Lab     0
Science Field   1
``` The results required are: ```
Study Type      Number Of passing Students
Science Class   8
Science Field   1
``` As you can see, I want to eliminate Science Lab because the number of passing students is 0.
You can take your existing query and make it a subquery. Then, you can use a `where` clause to do the filtering: ``` select t.* from (select 'Science Class' as [Study Type], (select Count(Distinct StudentID) from Table_class_SClass where Grade = 'Passed' and Teacher= @Teacher) as [Number of Passing Students] union all select 'Science Lab' as [Study Type], (select Count(Distinct StudentID) from Table_class_SLab where Grade = 'Passed' and Teacher= @Teacher) as [Number of Passing Students] union all select 'Science Field' as [Study Type], (select Count(Distinct StudentID) from Table_class_field where Grade = 'Passed' and Teacher= @Teacher) as [Number of Passing Students] ) t where [Number of Passing Students] > 0; ``` I also changed the `union` to `union all`. `union all` is more efficient because it does *not* remove duplicates, so it is a good idea to use that by default (unless you want to remove duplicates). Also, I changed the column aliases to use square brackets rather than single quotes. Use single quotes only for string constants and not for names of columns within a query.
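The wrap-and-filter idea can be sketched with Python's stdlib `sqlite3` module (simplified table and column names, made-up data, and the `@Teacher` variable replaced by a literal, since SQLite has no T-SQL variables):

```python
import sqlite3

# Three scalar-subquery counts combined with UNION ALL, wrapped in a
# derived table so the outer WHERE can drop the zero-count rows.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE class_rows (student_id INT, grade TEXT, teacher TEXT);
CREATE TABLE lab_rows   (student_id INT, grade TEXT, teacher TEXT);
CREATE TABLE field_rows (student_id INT, grade TEXT, teacher TEXT);
INSERT INTO class_rows VALUES (1,'Passed','Professor David'), (2,'Passed','Professor David');
INSERT INTO lab_rows   VALUES (9,'Failed','Professor David');
INSERT INTO field_rows VALUES (3,'Passed','Professor David');
""")
rows = conn.execute("""
    SELECT * FROM (
        SELECT 'Science Class' AS study_type,
               (SELECT count(DISTINCT student_id) FROM class_rows
                 WHERE grade = 'Passed' AND teacher = 'Professor David') AS n
        UNION ALL
        SELECT 'Science Lab',
               (SELECT count(DISTINCT student_id) FROM lab_rows
                 WHERE grade = 'Passed' AND teacher = 'Professor David')
        UNION ALL
        SELECT 'Science Field',
               (SELECT count(DISTINCT student_id) FROM field_rows
                 WHERE grade = 'Passed' AND teacher = 'Professor David')
    ) t
    WHERE n > 0
""").fetchall()
print(rows)
```

The lab row, whose count is 0, is filtered out by the outer query.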
You can try adding this line at the end of each query `group by teacher having count(Distinct StudentID) > 0` This will return only records which has count more than 1 ``` select 'Science Class' as [Study Type], (select Count(Distinct StudentID) from Table_class_SClass where Grade = 'Passed' and Teacher= @Teacher group by teacher having count(Distinct StudentID) > 0) as [Number of Passing Students] union all select 'Science Lab' as 'Study Type', (select Count(Distinct StudentID) from Table_class_SLab where Grade = 'Passed' and Teacher= @Teacher group by teacher having count(Distinct StudentID) > 0) as [Number of Passing Students] union all select 'Science Field' as 'Study Type', (select Count(Distinct StudentID) from Table_class_field where Grade = 'Passed' and Teacher= @Teacher group by teacher having count(Distinct StudentID) > 0) as [Number of Passing Students] ```
How To Remove Results from multiple selection statements that are equal to 0
[ "", "sql", "count", "distinct", "" ]
I am trying to add a group by SC_FRAMES.GROUPPRODUCTTYPE to this statement: ``` SELECT SC_JOBS.CREATIONDATE, (SELECT SUM(SC_JOBS.GROSSEXCLVAT) FROM SC_JOBS WHERE SC_FRAMES.GROUPPRODUCTTYPE = 'ABC' AND SC_FRAMES.JOBID = SC_JOBS.JOBSID AND SC_JOBS.INVOICEDATE < '1990-01-01') AS Product1, (SELECT SUM(SC_JOBS.GROSSEXCLVAT) FROM SC_JOBS WHERE SC_FRAMES.GROUPPRODUCTTYPE = 'XYZ' AND SC_FRAMES.JOBID = SC_JOBS.JOBSID AND SC_JOBS.INVOICEDATE < '1990-01-01') AS Product2 FROM SC_JOBS INNER JOIN SC_FRAMES ON SC_FRAMES.JOBID = SC_JOBS.JOBSID WHERE SC_JOBS.CREATIONDATE BETWEEN :StartDate AND :EndDate ORDER BY SC_JOBS.CREATIONDATE ``` Any suggestions please?
I think you want a query like this: ``` SELECT f.GROUPPRODUCTTYPE, MIN(j.CREATIONDATE), SUM(case when j.INVOICEDATE < '1990-01-01' then j.GROSSEXCLVAT else 0 end) as Product1 FROM SC_JOBS j INNER JOIN SC_FRAMES f ON f.JOBID = j.JOBSID WHERE j.CREATIONDATE BETWEEN :StartDate AND :EndDate GROUP BY f.GROUPPRODUCTTYPE ORDER BY min(j.CREATIONDATE); ``` It replaces the subqueries with conditional aggregation, based on the invoice date.
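The conditional-aggregation idea can be sketched with Python's stdlib `sqlite3` module (a simplified schema with made-up rows; the date-range parameters are omitted for brevity):

```python
import sqlite3

# SUM(CASE WHEN ... THEN amount ELSE 0 END) per group replaces the
# correlated scalar subqueries: rows invoiced after the cutoff still
# belong to the group, but contribute 0 to the sum.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sc_jobs   (jobsid INT, creationdate TEXT, invoicedate TEXT, grossexclvat REAL);
CREATE TABLE sc_frames (jobid INT, groupproducttype TEXT);
INSERT INTO sc_jobs VALUES
 (1, '1989-05-01', '1989-06-01', 100.0),
 (2, '1989-05-02', '1995-01-01', 200.0),
 (3, '1989-05-03', '1989-07-01', 50.0);
INSERT INTO sc_frames VALUES (1, 'ABC'), (2, 'ABC'), (3, 'XYZ');
""")
rows = conn.execute("""
    SELECT f.groupproducttype,
           SUM(CASE WHEN j.invoicedate < '1990-01-01'
                    THEN j.grossexclvat ELSE 0 END) AS gross_before_1990
    FROM sc_jobs j
    INNER JOIN sc_frames f ON f.jobid = j.jobsid
    GROUP BY f.groupproducttype
    ORDER BY f.groupproducttype
""").fetchall()
print(rows)
```

Job 2 was invoiced after the cutoff, so the ABC group sums only job 1's amount.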
Try this: ``` SELECT SC_JOBS.CREATIONDATE, (SELECT SUM(SC_JOBS.GROSSEXCLVAT) FROM SC_JOBS WHERE SC_FRAMES.GROUPPRODUCTTYPE = 'ATI' AND SC_FRAMES.JOBID = SC_JOBS.JOBSID AND SC_JOBS.INVOICEDATE < '1990-01-01') AS Product 1, (SELECT SUM(SC_JOBS.GROSSEXCLVAT) FROM SC_JOBS WHERE SC_FRAMES.GROUPPRODUCTTYPE = 'ATI' AND SC_FRAMES.JOBID = SC_JOBS.JOBSID AND SC_JOBS.INVOICEDATE < '1990-01-01') AS Product 2 FROM SC_JOBS INNER JOIN SC_FRAMES ON SC_FRAMES.JOBID = SC_JOBS.JOBSID WHERE SC_JOBS.CREATIONDATE BETWEEN :StartDate AND :EndDate GROUP BY SC_FRAMES.GROUPPRODUCTTYPE ORDER BY SC_JOBS.CREATIONDATE ```
how to group by with a sql that has multiple subqueries
[ "", "mysql", "sql", "fastreport", "" ]
I'm using CakePHP to create a todo application. CakePHP creates the queries for you, which is why there should be no typo in them. The error: ``` Error: SQLSTATE[42S22]: Column not found: 1054 Unknown column 'Projecttask.projecttasks_name' in 'field list' ``` The query: ``` SQL Query: SELECT `Itemrequirement`.`itemreq_id`, `Projecttask`.`projecttasks_name` FROM `gtd`.`itemrequirements` AS `Itemrequirement` LEFT JOIN `gtd`.`projecttasks` AS `ProjecttaskParent` ON (`Itemrequirement`.`itemreqs_rel_projectparents` = `ProjecttaskParent`.`projecttasks_id`) LEFT JOIN `gtd`.`projecttasks` AS `ProjecttaskChild` ON (`Itemrequirement`.`itemreqs_rel_projectchilds` = `ProjecttaskChild`.`projecttasks_id`) WHERE 1 = 1 ORDER BY `Itemrequirement`.`itemreq_id` asc LIMIT 10000 ``` The database: ![part of database](https://i.stack.imgur.com/hdunp.png) I'm growing quite clueless, as I've tried loads of things in phpMyAdmin manually.
You are using `projecttasks` twice ``` LEFT JOIN `gtd`.`projecttasks` AS `ProjecttaskParent` ... LEFT JOIN `gtd`.`projecttasks` AS `ProjecttaskChild` ... ``` but with aliases `ProjecttaskParent` and `ProjecttaskChild`, so you must use an alias instead of the table name ``` `ProjecttaskParent`.`projecttasks_name` ``` or ``` `ProjecttaskChild`.`projecttasks_name` ``` Your query should look as below (with the `ProjecttaskChild` alias, for example) ``` SELECT `Itemrequirement`.`itemreq_id`, `ProjecttaskChild`.`projecttasks_name` FROM `gtd`.`itemrequirements` AS `Itemrequirement` LEFT JOIN `gtd`.`projecttasks` AS `ProjecttaskParent` ON (`Itemrequirement`.`itemreqs_rel_projectparents` = `ProjecttaskParent`.`projecttasks_id`) LEFT JOIN `gtd`.`projecttasks` AS `ProjecttaskChild` ON (`Itemrequirement`.`itemreqs_rel_projectchilds` = `ProjecttaskChild`.`projecttasks_id`) WHERE 1 = 1 ORDER BY `Itemrequirement`.`itemreq_id` asc LIMIT 10000 ```
You are selecting with the table name `Projecttask`, but the query joins `projecttasks` under the aliases `ProjecttaskParent` and `ProjecttaskChild`. Your query should be: ``` SELECT `Itemrequirement`.`itemreq_id`, `ProjecttaskParent`.`projecttasks_name` FROM `gtd`.`itemrequirements` AS `Itemrequirement` LEFT JOIN `gtd`.`projecttasks` AS `ProjecttaskParent` ON (`Itemrequirement`.`itemreqs_rel_projectparents` = `ProjecttaskParent`.`projecttasks_id`) LEFT JOIN `gtd`.`projecttasks` AS `ProjecttaskChild` ON (`Itemrequirement`.`itemreqs_rel_projectchilds` = `ProjecttaskChild`.`projecttasks_id`) WHERE 1 = 1 ORDER BY `Itemrequirement`.`itemreq_id` asc LIMIT 10000 ``` Change `Projecttask`.`projecttasks_name` to `ProjecttaskParent`.`projecttasks_name` or `ProjecttaskChild`.`projecttasks_name`
Column not found: 1054 Unknown column - NO TYPOS NOR WRONG ' ` "
[ "", "mysql", "sql", "database", "cakephp", "" ]
I have a table like this: ```
-------------------------------
EMP ID|Country    |Emp Level  |
------|-----------|-----------|
102   |UK         |Staff      |
103   |US         |Admin Staff|
104   |CA         |Staff      |
105   |NL         |Admin Staff|
106   |MN         |Intern     |
107   |IN         |Staff      |
108   |UK         |Staff      |
109   |US         |Admin Staff|
110   |IN         |Admin Staff|
------------------------------
``` I need to count the number of employees in each category in each country, given the following condition: if the country is not in `('UK', 'US', 'CA')`, then consider it 'Global'. So our answer should be: ```
------------------------------
|Country    |Emp Level  |Count|
|-----------|-----------|-----
|UK         |Staff      |2
|US         |Admin Staff|2
|CA         |Staff      |1
|Global     |Admin Staff|2
|Global     |Intern     |1
|Global     |Staff      |1
``` So far I can count the number of employees in each category in each country, but I cannot combine the countries not in the given set and count and display them as 'Global'.
``` SELECT Country, EmpLevel, COUNT(*) AS Count FROM my_table WHERE Country IN ('UK', 'US', 'CA') GROUP BY Country, EmpLevel UNION SELECT 'Global', EmpLevel, COUNT(*) AS Count FROM my_table WHERE Country NOT IN ('UK', 'US', 'CA') GROUP BY EmpLevel ``` See it on [sqlfiddle](http://sqlfiddle.com/#!2/2c44a/1/0).
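The UNION approach can be checked with Python's stdlib `sqlite3` module, using the employee rows from the question:

```python
import sqlite3

# First branch counts per real country for UK/US/CA; second branch
# lumps everything else into a single 'Global' bucket per level.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emp (emp_id INT, country TEXT, emp_level TEXT);
INSERT INTO emp VALUES
 (102,'UK','Staff'), (103,'US','Admin Staff'), (104,'CA','Staff'),
 (105,'NL','Admin Staff'), (106,'MN','Intern'), (107,'IN','Staff'),
 (108,'UK','Staff'), (109,'US','Admin Staff'), (110,'IN','Admin Staff');
""")
rows = conn.execute("""
    SELECT country, emp_level, count(*) AS cnt
    FROM emp
    WHERE country IN ('UK', 'US', 'CA')
    GROUP BY country, emp_level
    UNION
    SELECT 'Global', emp_level, count(*) AS cnt
    FROM emp
    WHERE country NOT IN ('UK', 'US', 'CA')
    GROUP BY emp_level
""").fetchall()
for row in rows:
    print(row)
```

The result matches the six rows of the expected output in the question.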
Have you tried this? ``` select country as cou, emp_level as emp, count(*) from your_table where country in ('UK', 'US', 'CA') group by cou, emp order by cou, emp union select 'global', emp_level as emp, count(*) from your_table where country not in ('UK', 'US', 'CA') group by emp order by emp ```
Group countries together and count
[ "", "mysql", "sql", "sql-server", "database", "" ]
I am curious about what is making my query slow and a question came into my mind. Which one is faster and more efficient? `LEFT()` or `SUBSTRING()`?
SQL Server is a database. You do not ask questions of which string processing function is 'faster'. You ask the questions 'which can use an index?' and 'do I have the required index?'. It's all about data access, because disks are sloooooow, not about shifting CPU registers. So, **Which can use an index?** (which one is [sargable](http://en.wikipedia.org/wiki/Sargable)?). In theory `LEFT` could use an index, but in practice it usually does not. `SUBSTRING` cannot. Instead of `SUBSTRING` use [Full Text](https://learn.microsoft.com/en-us/sql/relational-databases/search/full-text-search). Design your data model to take advantage of sargable expressions, index accordingly. That's all there is to it, there is no magic bullet. Avoid scans.
There is no difference at all between `left` and `substring` because `left` is translated to `substring` in the execution plan. For example: ``` select substring(col, 1, 2), left(col, 3) from YourTable ``` will look like this in the execution plan ``` <DefinedValue> <ColumnReference Column="Expr1004" /> <ScalarOperator ScalarString="substring([col],(1),(2))"> <Intrinsic FunctionName="substring"> <ScalarOperator> <Identifier> <ColumnReference Column="col" /> </Identifier> </ScalarOperator> <ScalarOperator> <Const ConstValue="(1)" /> </ScalarOperator> <ScalarOperator> <Const ConstValue="(2)" /> </ScalarOperator> </Intrinsic> </ScalarOperator> </DefinedValue> <DefinedValue> <ColumnReference Column="Expr1005" /> <ScalarOperator ScalarString="substring([col],(1),(3))"> <Intrinsic FunctionName="substring"> <ScalarOperator> <Identifier> <ColumnReference Column="col" /> </Identifier> </ScalarOperator> <ScalarOperator> <Const ConstValue="(1)" /> </ScalarOperator> <ScalarOperator> <Const ConstValue="(3)" /> </ScalarOperator> </Intrinsic> </ScalarOperator> </DefinedValue> ```
Performance of SUBSTRING vs LEFT in SQL Server
[ "", "sql", "sql-server", "" ]
I am trying to figure out the SQL to do a running total for a daily quota system. The system works like this... Each day a user gets a quota of 2 "consumable things". If they use them all up, the next day they get another 2. If they somehow over use them (use more than 2), the next day they still get 2 (they can't have a negative balance). If they don't use them all, the remainder carries to the next day (which can carry to the next, etc...). Here is a chart of data to use as validation. It's laid out as quota for the day, amount used that day, amount left at the end of the day: ``` 2 - 2 - 0 2 - 0 - 2 4 - 3 - 1 3 - 0 - 3 5 - 7 - 0 2 - 1 - 1 3 - 0 - 3 5 - 2 - 3 5 - 1 - 4 6 - 9 - 0 ``` The SQL to start of with would be: ``` WITH t(x, y) AS ( VALUES (2, '2013-09-16'), (0, '2013-09-17'), (3, '2013-09-18'), (0, '2013-09-19'), (7, '2013-09-20'), (1, '2013-09-21'), (0, '2013-09-22'), (2, '2013-09-23'), (1, '2013-09-24'), (9, '2013-09-25') ) ``` For the life of me, trying recursive with statements and window aggregates, I cannot figure out how to make it work (but I can certainly see the pattern). It should be something like 2 - x + SUM(previous row), but I don't know how to put that in to SQL.
Try creating custom aggregate function like: ``` CREATE FUNCTION quota_calc_func(numeric, numeric, numeric) -- carry over, daily usage and daily quota RETURNS numeric AS $$ SELECT GREATEST(0, $1 + $3 - $2); $$ LANGUAGE SQL STRICT IMMUTABLE; CREATE AGGREGATE quota_calc( numeric, numeric ) -- daily usage and daily quota ( SFUNC = quota_calc_func, STYPE = numeric, INITCOND = '0' ); WITH t(x, y) AS ( VALUES (2, '2013-09-16'), (0, '2013-09-17'), (3, '2013-09-18'), (0, '2013-09-19'), (7, '2013-09-20'), (1, '2013-09-21'), (0, '2013-09-22'), (2, '2013-09-23'), (1, '2013-09-24'), (9, '2013-09-25') ) SELECT x, y, quota_calc(x, 2) over (order by y) FROM t; ``` May contain bugs, haven't tested it.
> they can't have a negative balance

That triggered my memory :-) I had a similar problem >10 years ago on a Teradata system. The logic could be easily implemented using recursion; for each row do:

> add 2 "new" and subtract x "used" quota; if this is less than zero, use zero instead.

I can't remember how I found that solution, but I finally implemented it using simple cumulative sums:

```
SELECT dt.*,
   CASE -- used in following calculation, this is just for illustration
     WHEN MIN(quota_raw) OVER (ORDER BY datecol ROWS UNBOUNDED PRECEDING) >= 0
     THEN 0
     ELSE MIN(quota_raw) OVER (ORDER BY datecol ROWS UNBOUNDED PRECEDING)
   END AS correction,
   quota_raw -
   CASE
     WHEN MIN(quota_raw) OVER (ORDER BY datecol ROWS UNBOUNDED PRECEDING) >= 0
     THEN 0
     ELSE MIN(quota_raw) OVER (ORDER BY datecol ROWS UNBOUNDED PRECEDING)
   END AS quota_left
FROM
 (
   SELECT quota, datecol,
      SUM(quota) OVER (ORDER BY datecol ROWS UNBOUNDED PRECEDING) AS quota_used,
      2*COUNT(*) OVER (ORDER BY datecol ROWS UNBOUNDED PRECEDING) AS quota_available,
      quota_available - quota_used AS quota_raw
   FROM t
  ) AS dt
ORDER BY datecol
```

The secret sauce is the moving min "correction" which adjusts negative results to zero.
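The moving-min trick above is easy to sanity-check outside the database. Here is a small Python sketch (not SQL, just an illustration of the same arithmetic) that reproduces the validation chart from the question:

```python
# Reproduce the quota logic: raw = cumulative (2 - used); any negative
# running balance is clamped away by subtracting the running minimum.
usage = [2, 0, 3, 0, 7, 1, 0, 2, 1, 9]

raw = 0          # cumulative (quota_available - quota_used), i.e. quota_raw
moving_min = 0   # smallest raw value seen so far (the "correction")
left = []
for used in usage:
    raw += 2 - used
    moving_min = min(moving_min, raw)
    left.append(raw - moving_min)

print(left)  # -> [0, 2, 1, 3, 0, 1, 3, 3, 4, 0]
```

The output matches the third column ("amount left at the end of the day") of the chart in the question.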
Running total... with a twist
[ "", "sql", "postgresql", "" ]
I was wondering, what is the difference between these two scripts? ``` SELECT * FROM ##TEMP ``` and this ``` SELECT * FROM #TEMP ```
`##TEMP` is a global temporary table; `#TEMP` is a local one.

**Local temporary tables** are visible only to their creators during the same connection to an instance of SQL Server as when the tables were first created or referenced. Local temporary tables are deleted after the user disconnects from the instance of SQL Server.

**Global temporary tables** are visible to any user and any connection after they are created, and are deleted when all users that are referencing the table disconnect from the instance of SQL Server.

See the [documentation](http://technet.microsoft.com/en-us/library/ms186986(v=sql.105).aspx).

Actually, here is almost the same question with an answer: [Local and global temporary tables in SQL Server](https://stackoverflow.com/questions/2920836/local-and-global-temporary-table-in-sql-server).
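For illustration only, the "local to one session" behaviour can be demonstrated with SQLite from Python: SQLite has no `##global` temp tables, but its `TEMP` tables behave like SQL Server's local `#TEMP` — visible only to the connection that created them. This is a sketch, not SQL Server itself:

```python
import os
import sqlite3
import tempfile

# Two connections to the same database file, to mimic two sessions.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
a = sqlite3.connect(path)
b = sqlite3.connect(path)

a.execute("CREATE TABLE shared (x INTEGER)")        # ordinary table
a.execute("CREATE TEMP TABLE private (x INTEGER)")  # session-local, like #TEMP

print(b.execute("SELECT COUNT(*) FROM shared").fetchone())   # (0,)
try:
    b.execute("SELECT COUNT(*) FROM private")
except sqlite3.OperationalError as exc:
    print("other session cannot see it:", exc)  # no such table: private
```

The ordinary table is visible from the second connection; the `TEMP` table is not.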
The first one (##TEMP) is global - anyone can access its content, and you can also reach it from different sessions (think of tabs in SQL Server Management Studio). The other is visible only to you.
MS SQL Temporary table
[ "", "sql", "sql-server", "t-sql", "temp-tables", "global-temp-tables", "" ]
I have a table that records users' answers for a number of questions:

**tableA**

```
user_id | question_id | date answered | correct?
-------------------------------------------------
66        345           timestamp       1
34        654           timestamp       0
34        654           timestamp       1
```

Every question attempt by every user is stored in the database. I then also have a list of categories and the question\_ids that go in that category, e.g.

**tableB**

```
category_id | question_id
--------------------------------
1             34
1             44
1             23
2             99
2             44
```

I am trying to write a query to work out the percentage of questions in the category that the user has previously answered correctly (where `correct? = 1`) and also the percentage of questions correct of the last 20 questions answered in the category. So far I can do the first part, but not the second:

```
SELECT category_id, COUNT(*), COUNT(correct?)
FROM tableA
LEFT JOIN tableB USING (question_id)
WHERE user_id = 1
GROUP BY category_id
```

this gives me the number of questions in total in the category and the number of questions the user has answered correctly in the category. Something like this:

```
cat_id | total_questions | answered_correctly
-------------------------------------------------
1          455               323
2          334               123
```

However, for each category, I also want to look at the last 20 questions answered in a category and retrieve the number that were correct. So I want something like this:

```
cat_id | total_questions | answered_correctly | questions_correct_in_last_20_answered
-------------------------------------------------------------------------------------
1          455               323                  12
2          334               123                  8
```
To add the last twenty questions answered, you need to pick the last twenty rows and then count the correct answers, but `GROUP BY` and `LIMIT` don't go so well together, and you can't attach the last twenty rows unless you are checking for only one category at a time. MySQL doesn't allow you to join a table when one of the subqueries references the table being joined on. So the query below is a workaround: it gets all answers for the category sorted by timestamp, makes a list, takes the first twenty and then counts the number of correct answers. Tricky, but it gets the job done.

```
SELECT category_id,
       Total_Q_Tried,
       Total_Unique_Q_Tried,
       Total_Answered_Correctly,
       Total_Answered_Correctly / Total_Q_Tried*100 Total_Correct_Answer_Percentage,
       Total_Answered_Correctly_In_Last20,
       Total_Answered_Correctly_In_Last20 / LEAST(20,Total_Q_Tried)*100 Total_Correct_Answer_Last20_Percentage
FROM
(
  SELECT B.category_id,
         COUNT(B.question_id) Total_Q_Tried,
         COUNT(DISTINCT B.question_id) Total_Unique_Q_Tried,
         SUM(A.correct) Total_Answered_Correctly,
         (SELECT length(SUBSTRING_INDEX(GROUP_CONCAT(AA.correct ORDER BY AA.date_answered DESC SEPARATOR ',' ), ',', 20))
                 - length(replace(SUBSTRING_INDEX(GROUP_CONCAT(AA.correct ORDER BY AA.date_answered DESC SEPARATOR ',' ), ',', 20),'1', ''))
          FROM tableA AA
          INNER JOIN tableB BB ON AA.question_id = BB.question_id
          WHERE BB.category_id = B.category_id
            AND AA.user_id = A.user_id
         ) Total_Answered_Correctly_In_Last20
  FROM tableA A
  LEFT JOIN tableB B ON B.question_id = A.question_id
  WHERE A.user_id = 34
  GROUP BY B.category_id
) FinalNumbers
```

If you want the percentage of correct answers in the last twenty, you would need to use the smaller of 20 and `TOTAL_Q_TRIED`, together with `TOTAL_ANSWERED_CORRECTLY_IN_LAST20` as calculated in the query. I couldn't try it, but performance might not be good if there are lots and lots of rows.
``` | USER_ID | QUESTION_ID | DATE_ANSWERED | CORRECT | |---------|-------------|--------------------------------|---------| | 66 | 1 | January, 01 2013 00:00:00+0000 | 1 | | 34 | 1 | January, 02 2013 00:00:00+0000 | 1 | | 34 | 2 | January, 03 2013 00:00:00+0000 | 1 | | 34 | 3 | January, 04 2013 00:00:00+0000 | 0 | | 34 | 4 | January, 05 2013 00:00:00+0000 | 1 | | 34 | 6 | January, 06 2013 00:00:00+0000 | 0 | | CATEGORY_ID | QUESTION_ID | |-------------|-------------| | 1 | 1 | | 2 | 2 | | 2 | 3 | | 2 | 4 | | 2 | 5 | | 3 | 6 | | CATEGORY_ID | TOTAL_Q_TRIED | TOTAL_UNIQUE_Q_TRIED | TOTAL_ANSWERED_CORRECTLY | TOTAL_CORRECT_ANSWER_PERCENTAGE | TOTAL_ANSWERED_CORRECTLY_IN_LAST20 | TOTAL_CORRECT_ANSWER_LAST20_PERCENTAGE | |-------------|---------------|----------------------|--------------------------|---------------------------------|------------------------------------|----------------------------------------| | 1 | 1 | 1 | 1 | 100 | 1 | 100 | | 2 | 3 | 3 | 2 | 66.6667 | 2 | 66.6667 | | 3 | 1 | 1 | 0 | 0 | 0 | 0 | ``` --- Per comment below - add total of unique questions answered correctly. This gets tougher and tougher. I'm joining on every column including the timestamp in the latest query added to get the unique answers. See below. 
```
SELECT category_id,
       Total_Q_Tried,
       Total_Unique_Q_Tried,
       Total_Answered_Correctly,
       Total_Unique_Answered_Correctly,
       Total_Answered_Correctly / Total_Q_Tried*100 Total_Correct_Answer_Percentage,
       Total_Answered_Correctly_In_Last20,
       Total_Answered_Correctly_In_Last20 / LEAST(20,Total_Q_Tried)*100 Total_Correct_Answer_Last20_Percentage
FROM
(
  SELECT B.category_id,
         COUNT(B.question_id) Total_Q_Tried,
         COUNT(DISTINCT B.question_id) Total_Unique_Q_Tried,
         SUM(A.correct) Total_Answered_Correctly,
         SUM(UniqueA.correct) Total_Unique_Answered_Correctly,
         (SELECT length(SUBSTRING_INDEX(GROUP_CONCAT(AA.correct ORDER BY AA.date_answered DESC SEPARATOR ',' ), ',', 20))
                 - length(replace(SUBSTRING_INDEX(GROUP_CONCAT(AA.correct ORDER BY AA.date_answered DESC SEPARATOR ',' ), ',', 20),'1', ''))
          FROM tableA AA
          INNER JOIN tableB BB ON AA.question_id = BB.question_id
          WHERE BB.category_id = B.category_id
            AND AA.user_id = A.user_id
         ) Total_Answered_Correctly_In_Last20
  FROM tableA A
  LEFT JOIN tableB B ON B.question_id = A.question_id
  LEFT JOIN (select user_id, question_id, MAX(date_answered) date_answered, correct
             from tableA
             GROUP BY user_id, question_id, correct
            ) UniqueA
    ON A.user_id = UniqueA.user_id
   AND A.question_id = UniqueA.question_id
   AND A.date_answered = UniqueA.date_answered
  WHERE A.user_id = 34
  GROUP BY B.category_id
) FinalNumbers;
```

This might not work out right for the % of the last 20 questions answered correctly. Please test it out. If it doesn't, replace `tableA A` and `tableA AA` with `UniqueA`'s select query to work on only unique answers, and remove the latest left join added.
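Outside of SQL, the quantity the `GROUP_CONCAT`/`SUBSTRING_INDEX` trick computes — the number of correct answers among a user's last N attempts in a category — can be sketched in a few lines of Python, which may help when testing the query's output (the toy data here is made up):

```python
# (timestamp, correct) rows for one user in one category, in any order.
attempts = [
    (1, 1), (2, 0), (3, 1), (4, 1), (5, 0), (6, 1),
]

def correct_in_last(attempts, n=20):
    """Count correct answers among the n most recent attempts."""
    latest = sorted(attempts, key=lambda row: row[0], reverse=True)[:n]
    return sum(correct for _, correct in latest)

print(correct_in_last(attempts, 3))  # -> 2 (t=6,5,4 gives correct 1,0,1)
print(correct_in_last(attempts))     # -> 4 (all six attempts, four correct)
```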
Hey friend, look at this:

> ```
> SELECT foo.c, COUNT(*) * t.factor AS pct
> FROM foo JOIN (SELECT 100/COUNT(*) AS factor FROM foo) AS t
> GROUP BY foo.c;
> ```

Oops! So it was enough to make a JOIN to get the total number of users, and apply some math to it. Returning to my practical-metaphorical situation, we have:

```
SELECT COUNT(id) * t.factor AS pct, good_person
FROM people
JOIN (SELECT 100/COUNT(*) AS factor FROM people) AS t
GROUP BY good_person;
```

The original link (in Portuguese) is here: [MySQL Blog](http://danielneis.wordpress.com/2008/02/08/mysql-e-porcentagens/)
Query on timestamps to select only recent rows for each group
[ "", "mysql", "sql", "" ]
I have a table Hobby, whose snippet is as follows:

```
Name    Activity   Hours
John    Hiking     .5
Sam     Cycling    .5
Sam     Swimming   1
Sam     Hiking     .5
John    Running    1
Sam     Sailing    1
```

For every person X in (X, Y), I would like to find the sum of hours for activities that X and Y don't have in common. For example, if John = X and Sam = Y, then it would yield 1, since Running is the only activity John has that Sam doesn't. My code is as follows:

```
select a.Name, b.Name, sum(a.Hours)
from Hobby a, Hobby b
where a.Name <> b.Name and a.Activity <> b.Activity
group by a.Name, b.Name;
```

However, this gave me a wrong answer. What is wrong with my code?
I find this to be a tricky question. My original approach was going to use a `full outer join`. But then I realized that if there is no match on the activity in one name, then I'm not going to have the name either. So, the following query works by getting a list of all pairs of names. This is an ordered list, so a given pair of names only appears once. Then this is joined to the `Hobby` table twice, using `left outer join` to get the matches. The key, though, is that when there is no match, the row with `Activity` on it is still present, but with a `NULL` value. The `where` clause finds all `Activity`s that have a `NULL` in either table. These are the ones that don't match. Then it is a simple matter of just adding up the hours: ``` select names.Name1, names.Name2, sum(coalesce(h1.hours, h2.hours)) from (select distinct h1.Name as name1, h2.Name as name2 from Hobby h1 cross join Hobby h2 where h1.Name < h2.Name ) names left outer join Hobby h1 on names.name1 = h1.name left outer join Hobby h2 on names.name2 = h2.name and h1.Activity = h2.Activity where h1.Activity is null or h2.Activity is null group by names.Name1, names.Name2; ```
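For anyone who wants to verify the logic, the query above happens to run unchanged on SQLite, so here is a small Python harness loaded with the question's data. Note that for a pair ordered as (name1 < name2) it reports the hours of name1's unmatched activities — with this data that is John's Running, matching the example in the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Hobby (Name TEXT, Activity TEXT, Hours REAL);
INSERT INTO Hobby VALUES
 ('John','Hiking',0.5), ('Sam','Cycling',0.5), ('Sam','Swimming',1),
 ('Sam','Hiking',0.5), ('John','Running',1), ('Sam','Sailing',1);
""")

rows = conn.execute("""
select names.Name1, names.Name2, sum(coalesce(h1.Hours, h2.Hours))
from (select distinct h1.Name as Name1, h2.Name as Name2
      from Hobby h1 cross join Hobby h2
      where h1.Name < h2.Name) names
left outer join Hobby h1 on names.Name1 = h1.Name
left outer join Hobby h2 on names.Name2 = h2.Name
                        and h1.Activity = h2.Activity
where h1.Activity is null or h2.Activity is null
group by names.Name1, names.Name2
""").fetchall()
print(rows)  # -> [('John', 'Sam', 1.0)]
```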
Your from clause reads

```
FROM Hobby a, Hobby b
```

Putting a comma in the from clause means "CROSS JOIN", which means each row in the first table is paired with every row in the second table. Given your where clause, I would think this gives some pretty big numbers. Your query needs to be a bit different:

```
select sum(hours)
from hobby
where name = 'John'
and activity not in (
  select activity
  from hobby
  where name = 'Sam'
)
```
SQL sum values from the same table
[ "", "sql", "oracle", "" ]
I need to filter all dates which are greater than, say, 01 January 2011.

```
select * from table_name where date > '01/01/2011';
```

The problem is that the `date` field stores `int` values; here is an example:

```
1339011098
1336717439
1339010538
```

How do I convert the `date` field in the SQL query (from the `int` format to a date format)? I need to convert it to a valid date so that I can compare it against the above date. Thanks.
You can use [`UNIX_TIMESTAMP()`](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_unix-timestamp) ``` select * from table_name where date > unix_timestamp('2011-01-01') ``` Or conversely use [`FROM_UNIXTIME()`](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_from-unixtime) ``` select * from table_name where FROM_UNIXTIME(date, "%Y-%m-%d") > '2011-01-01' ```
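If you'd rather sanity-check the arithmetic outside MySQL, here is a small Python sketch of the same idea (compute the cut-off epoch once, then compare raw integers), using the sample values from the question:

```python
import calendar
import datetime

# Convert the cut-off date to an epoch value once, then compare the raw
# integers -- the same idea as unix_timestamp('2011-01-01') in MySQL.
cutoff = calendar.timegm((2011, 1, 1, 0, 0, 0))   # 2011-01-01 00:00 UTC
stored = [1339011098, 1336717439, 1339010538]     # sample column values

recent = [t for t in stored if t > cutoff]
print(cutoff)   # 1293840000
print(recent)   # all three values are from 2012, so all of them pass
print(datetime.datetime.fromtimestamp(stored[0], datetime.timezone.utc))
# -> 2012-06-06 19:31:38+00:00
```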
You're going the wrong direction. Rather than converting potentially millions of records for the compare, try converting your target date, which you only need to do once. Those look like unix timestamps, so the resulting query should look like this: ``` SELECT * FROM `Table_name` WHERE date > unix_timestamp('01/01/2011') ``` Or, if you can control this, try using the ISO date format, which avoids confusion with european date formats for dates like 3/2/13: ``` SELECT * FROM `Table_name` WHERE date > unix_timestamp('2011-01-01') ```
convert date from numeric value to date valid format
[ "", "mysql", "sql", "date", "" ]
## Background Info I have a database with two tables: `Phones`, and `Carriers` Phones -> Carriers `Phones` (Primary key: `Phones.ID`; Foreign Key: `Phones.CarrierID`) is linked to `Carriers` (Primary key: `Carriers.ID`; Foreign Key: `Carriers.RegionID`). The data types for both `Phones.CarrierID` and `Carriers.ID` are bigint. *Sorry if that's confusing!* ## Problem I have a record in my Phones table called Nokia Lumia 1020. I need to be able to link it to multiple records in the Carriers Table via the `Phones.CarrierID` column. How would I do this without creating multiple records for the Nokia Lumia 1020 in the `Phones` table?
You have a Many-to-Many relationship. Typically this is implemented by adding a table in between the two data tables: ``` Phones -> PhoneCarriers -> Carriers ``` `PhoneCarrier` will look something like: ``` PhoneCarrierID PhoneID (FK) CarrierID (FK) ``` You won't have a foreign key directly from `Phone` to `Carrier` in that scenario.
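Here is a runnable sketch of that junction-table shape, using SQLite from Python purely for illustration (the carrier names and IDs are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Phones   (ID INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE Carriers (ID INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE PhoneCarriers (            -- the junction ("in between") table
    PhoneCarrierID INTEGER PRIMARY KEY,
    PhoneID   INTEGER REFERENCES Phones(ID),
    CarrierID INTEGER REFERENCES Carriers(ID)
);
INSERT INTO Phones   VALUES (1, 'Nokia Lumia 1020');
INSERT INTO Carriers VALUES (10, 'CarrierA'), (20, 'CarrierB');
-- One phone row linked to two carriers, without duplicating the phone:
INSERT INTO PhoneCarriers VALUES (NULL, 1, 10), (NULL, 1, 20);
""")

rows = conn.execute("""
SELECT c.Name FROM Phones p
JOIN PhoneCarriers pc ON pc.PhoneID = p.ID
JOIN Carriers c       ON c.ID = pc.CarrierID
WHERE p.Name = 'Nokia Lumia 1020' ORDER BY c.Name
""").fetchall()
print(rows)  # -> [('CarrierA',), ('CarrierB',)]
```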
You need to use a PhoneCarriers table: ``` PhoneCarriers ------------- PhoneCarrierID <-- Unimportant PK PhoneID <-- FK to Phones Table CarrierID <-- FK to Carrier Table. ``` This is where your multiple entries exist.
Link one record to multiple records in separate table
[ "", "sql", "sql-server", "database", "" ]
In one of my database tables, I want to know if there is at least one record corresponding to a condition. The query I wrote is `Count(*) from table where (condition)`, and in my program I can check whether the result is a non-zero value. It works fine. How can we optimize this? I don't want to wait till it finds the total count of records matching the condition.
```
SELECT TOP 1 1 AS found
FROM tablename
WHERE ...
```

Then check whether the query returns a row or not. In this case the engine will return a result as soon as it finds the first row (assuming you don't add `ORDER BY`).
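The same short-circuit idea can be tried out in SQLite (which spells `TOP 1` as `LIMIT 1`); a minimal Python sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(5)])

# LIMIT 1 plays the role of SQL Server's TOP 1: the scan can stop at the
# first matching row instead of counting every match.
hit  = conn.execute("SELECT 1 FROM t WHERE n > 3 LIMIT 1").fetchone()
miss = conn.execute("SELECT 1 FROM t WHERE n > 9 LIMIT 1").fetchone()
print(hit, miss)  # -> (1,) None
```

In application code you then test for `None` rather than comparing a count to zero.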
SQL has `exists` which can be used for this. This will return 1 if the query returns a result, and 0 otherwise. ``` Select Case When Exists (<query>) Then 1 Else 0 End as X ```
SQL: How to find if there is at least one record matching a condition
[ "", "sql", "optimization", "count", "" ]
I have a list of tables (i.e. productsA, productsB, productsN, ...). Each product in these tables may have a comment (stored in the comments table). If I had to select the top 10 ordered comments, which of these is the best solution to adopt (in terms of performance and speed)?

using UNION: <http://www.sqlfiddle.com/#!3/bc382/1>

```
select TOP 10 comment_product, product_name, comment_date FROM
(
  select comment_product, product_name, comment_date
  from comments inner join productsA on product_id = id_product
  WHERE product_type = 'A'
  UNION
  select comment_product, product_name, comment_date
  from comments inner join productsB on product_id = id_product
  WHERE product_type = 'B'
  UNION
  select comment_product, product_name, comment_date
  from comments inner join productsC on product_id = id_product
  WHERE product_type = 'C'
) as temp
ORDER BY comment_date DESC
```

using CASE: <http://www.sqlfiddle.com/#!3/bc382/2>

```
select TOP 10 comment_product, comment_date,
CASE product_type
  when 'A' then (select product_name from productsA as sub where sub.id_product = com.product_id)
  when 'B' then (select product_name from productsB as sub where sub.id_product = com.product_id)
  when 'C' then (select product_name from productsC as sub where sub.id_product = com.product_id)
END
FROM comments as com
ORDER BY comment_date DESC
```
The second query would most probably use an index scan on `comment_date` with nested loops over the product tables, i.e. at most 10 logical seeks plus whatever it takes to read 10 records from `comments`.

The first query would most probably use an index scan and sort over each of the queries, then a `MERGE UNION` of their results.

If you have indexes on `comment_date` and `id_product` in all product tables, the second query would be much faster.
I'd suggest you need neither `UNION` nor `CASE` and can just `JOIN` to comments multiple times:

```
SELECT TOP 10
       comment_product
     , COALESCE(a.product_name,b.product_name,c.product_name) AS product_name
     , comment_date
FROM comments z
LEFT JOIN productsA a ON z.product_id = a.id_product AND z.product_type = 'A'
LEFT JOIN productsB b ON z.product_id = b.id_product AND z.product_type = 'B'
LEFT JOIN productsC c ON z.product_id = c.id_product AND z.product_type = 'C'
WHERE COALESCE(a.id_product,b.id_product,c.id_product) IS NOT NULL
ORDER BY z.comment_date DESC
```
which one have I to choose between "union" and "case"?
[ "", "sql", "sql-server", "select", "case", "union", "" ]
My query is

```
SELECT CASE
         WHEN clnt_ntnlty =0 THEN 'Aboriginal'
         WHEN clnt_ntnlty =1 THEN 'Torres Strait Islander'
         WHEN clnt_ntnlty =2 THEN 'Both Aboring & Torres Strait'
         WHEN clnt_ntnlty =3 THEN 'Neither Aboring OR Torres Strait'
         ELSE 'Not Provided'
       END AS Identy,
       ISNULL(COUNT(clnt_ntnlty), 0) AS Counts
FROM dbo.clientInfo
group by clnt_ntnlty
```

I am getting this result:

![enter image description here](https://i.stack.imgur.com/TMvy7.png)

I want all five rows returned, with a zero count where there is no value. I have tried `group by all`, but it is not working.
Please try: ``` SELECT DISTINCT Identy, COUNT(clnt_ntnlty) OVER (PARTITION BY Identy) Counts FROM ( SELECT 0 Val, 'Aboriginal' Identy UNION SELECT 1 Val, 'Torres Strait Islander' Identy UNION SELECT 2 Val, 'Both Aboring & Torres Strait' Identy UNION SELECT 3 Val, 'Neither Aboring OR Torres Strait' Identy ) x left join dbo.clientInfo t on t.clnt_ntnlty=x.Val ```
You can have a subquery which has four values you want and do a `LEFT JOIN` on it. ``` SELECT a.Identy, COALESCE(b.Counts, 0) Counts FROM ( SELECT 'Aboriginal' Identy UNION ALL SELECT 'Torres Strait Islander' Identy UNION ALL SELECT 'Both Aboring & Torres Strait' Identy UNION ALL SELECT 'Neither Aboring OR Torres Strait' Identy ) a LEFT JOIN ( SELECT CASE WHEN clnt_ntnlty =0 THEN 'Aboriginal' WHEN clnt_ntnlty =1 THEN 'Torres Strait Islander' WHEN clnt_ntnlty =2 THEN 'Both Aboring & Torres Strait' WHEN clnt_ntnlty =3 THEN 'Neither Aboring OR Torres Strait' ELSE 'Not Provided' END AS Identy, ISNULL(COUNT(clnt_ntnlty), 0) AS Counts FROM dbo.clientInfo group by clnt_ntnlty ) b ON a.Identy = b.Identy ```
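The left-join-to-a-value-list idea from both answers is easy to test in SQLite; this Python sketch condenses it to a plain `GROUP BY` with `COUNT(column)` (which ignores NULLs), over made-up data where nobody picked categories 1 or 2:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE clientInfo (clnt_ntnlty INTEGER);
INSERT INTO clientInfo VALUES (0), (0), (3);   -- nobody answered 1 or 2
""")

rows = conn.execute("""
SELECT x.Identy, COUNT(t.clnt_ntnlty) AS Counts
FROM (SELECT 0 AS Val, 'Aboriginal' AS Identy
      UNION ALL SELECT 1, 'Torres Strait Islander'
      UNION ALL SELECT 2, 'Both Aboring & Torres Strait'
      UNION ALL SELECT 3, 'Neither Aboring OR Torres Strait') AS x
LEFT JOIN clientInfo t ON t.clnt_ntnlty = x.Val
GROUP BY x.Val, x.Identy
ORDER BY x.Val
""").fetchall()
for identy, counts in rows:
    print(identy, counts)
# The two categories with no matching rows come back with a count of 0.
```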
How to get zeros for no values in sql
[ "", "sql", "sql-server-2005", "" ]
I have two table account and balance ``` /---------------------\ | cid | name | mobile | |---------------------| | 1 | ABC | 12345 | |---------------------| | 2 | XYZ | 98475 | \---------------------/ /----------------------------\ | date | cid | balance | |----------------------------| | 2013-09-19 | 1 | 5000 | |----------------------------| | 2013-09-19 | 2 | 7000 | |----------------------------| | 2013-09-20 | 1 | 300 | |----------------------------| | 2013-09-20 | 2 | 4500 | |----------------------------| | 2013-09-21 | 2 | 600 | \----------------------------/ ``` I would like to join this two table and get the balance of the maximum date for a particular cid. Output result as - ``` /--------------------------------------------\ | cid | name | mobile | date | balance | |--------------------------------------------| | 1 | ABC | 12345 | 2013-09-20 | 300 | |--------------------------------------------| | 2 | XYZ | 98475 | 2013-09-21 | 600 | \--------------------------------------------/ ```
You need to use two sub-queries like this: ``` SELECT a.cid, a.name, a.mobile, b.date, b.balance FROM account a JOIN ( SELECT b1.* FROM balance b1 JOIN ( SELECT cid, MAX(Date) As maxDate FROM balance GROUP BY cid ) b2 ON b1.cid = b2.cid AND b1.date = b2.maxDate ) b ON a.cid = b.cid; ``` Output: | CID | NAME | MOBILE | DATE | BALANCE | | --- | --- | --- | --- | --- | | 1 | ABC | 12345 | September, 20 2013 00:00:00+0000 | 300 | | 2 | XYZ | 98475 | September, 21 2013 00:00:00+0000 | 600 | See this [SQLFiddle](http://sqlfiddle.com/#!9/1cd550/1) ### Edit As discussed in the comments, this query can also be written with only one subquery: ``` SELECT a.cid, a.name, a.mobile, b1.date, b1.balance FROM account a JOIN balance b1 ON a.cid = b1.cid JOIN ( SELECT cid, MAX(Date) As maxDate FROM balance GROUP BY cid ) b2 ON b1.cid = b2.cid AND b1.date = b2.maxDate ``` See the adjusted [SQLFiddle](http://sqlfiddle.com/#!9/1cd550/98)
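The greatest-date-per-group pattern above runs as-is on SQLite, so the expected output can be verified with a few lines of Python (the dates compare correctly here because they are ISO-formatted strings):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE account (cid INTEGER, name TEXT, mobile TEXT);
CREATE TABLE balance (date TEXT, cid INTEGER, balance INTEGER);
INSERT INTO account VALUES (1,'ABC','12345'), (2,'XYZ','98475');
INSERT INTO balance VALUES
 ('2013-09-19',1,5000), ('2013-09-19',2,7000), ('2013-09-20',1,300),
 ('2013-09-20',2,4500), ('2013-09-21',2,600);
""")

rows = conn.execute("""
SELECT a.cid, a.name, a.mobile, b1.date, b1.balance
FROM account a
JOIN balance b1 ON a.cid = b1.cid
JOIN (SELECT cid, MAX(date) AS maxDate FROM balance GROUP BY cid) b2
  ON b1.cid = b2.cid AND b1.date = b2.maxDate
ORDER BY a.cid
""").fetchall()
print(rows)
# -> [(1, 'ABC', '12345', '2013-09-20', 300),
#     (2, 'XYZ', '98475', '2013-09-21', 600)]
```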
```
SELECT a.cid, a.name, a.mobile, MAX(b.date), b.balance
FROM account AS a INNER JOIN balance AS b
WHERE a.cid=b.cid
GROUP BY cid;
```

Sorry, I didn't notice the balance column in the 3rd table.

```
SELECT a.cid, a.name, a.mobile, b.date, b.balance
FROM account AS a
INNER JOIN (
    SELECT c.date, c.cid, c.balance
    FROM balance AS c
    INNER JOIN (
        SELECT cid AS cid2, MAX(date) AS date2
        FROM balance
        GROUP BY cid2) AS d
    ON c.cid=d.cid2 AND c.date=d.date2
    ) AS b
ON a.cid=b.cid
GROUP BY cid;
```
MySQL join two table with the maximum value on another field
[ "", "mysql", "sql", "view", "max", "" ]
I have a database called schoolDB and 2 tables, `student` and `education`.

**Create student table:**

```
USE [schoolDB]
GO
/****** Object:  Table [dbo].[tblStudent]    Script Date: 09/22/2013 17:30:11 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[tblStudent](
    [STUDENTNUMBER] [varchar](50) NOT NULL,
    [STUDENTNAME] [varchar](50) NULL,
    [EDUCATIONID] [varchar](50) NULL,
 CONSTRAINT [PK_tblStudent] PRIMARY KEY CLUSTERED
(
    [STUDENTNUMBER] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
```

**Create Education table:**

```
USE [schoolDB]
GO
/****** Object:  Table [dbo].[tblEducation]    Script Date: 09/22/2013 17:31:30 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[tblEducation](
    [EDUCATIONID] [varchar](50) NOT NULL,
    [STUDENTNUMBER] [varchar](50) NULL,
    [INSTITUTIONNAME] [varchar](50) NULL,
    [COURSENAME] [varchar](50) NULL,
    [GRADE] [varchar](50) NULL,
    [YEAROFLEAVING] [varchar](50) NULL
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
```

Here is a screenshot of the data:

![enter image description here](https://i.stack.imgur.com/qQzyN.png)

I want to be able to find everyone who has been to an institution named `Secondary School` AND who has another education record with a course name `like` `biol`. Not just limited to biology, I want to find all the sciences, so I need to put in multiple like statements.
**I have tried this:** ``` SELECT COUNT(*) AS 'Our Students', DTOurStudents.STUDENTNAME FROM (SELECT TOP 2 TBLSTUDENT.STUDENTNUMBER, TBLSTUDENT.STUDENTNAME, TBLEDUCATION.INSTITUTIONNAME, TBLEDUCATION.COURSENAME FROM TBLEDUCATION INNER JOIN TBLSTUDENT ON TBLEDUCATION.STUDENTNUMBER = TBLSTUDENT.STUDENTNUMBER WHERE TBLEDUCATION.INSTITUTIONNAME LIKE '%Secondary School%') DTOurStudents GROUP BY DTOurStudents.STUDENTNAME ``` **SQL FIDDLE:** <http://sqlfiddle.com/#!3/666f8/2>
A simple answer. 1. I select all rows of interest (i.e. 'Secondary School' or by coursename). 2. I count the number of rows per student. 3. I filter to keep the students that have at least 2 rows. 4. I use an expression in the `ORDER BY` to make sure secondary school shows up first. What your question left unclear is what should happen when there are more than 2 rows. In this case they all show up, but the query is easy enough to adjust (add a row\_number and filter on rn <= 2). Fiddle: <http://sqlfiddle.com/#!3/666f8/89/0> ``` WITH cte as ( SELECT STUDENTNUMBER, COURSENAME, INSTITUTIONNAME, COUNT(*) OVER (PARTITION BY STUDENTNUMBER) AS RecordCount FROM tblEducation WHERE INSTITUTIONNAME = 'Secondary School' OR COURSENAME like 'biol%' OR COURSENAME like 'math%' OR COURSENAME like 'etc%' ) select * from cte where RecordCount >= 2 order by studentnumber, case when institutionname = 'Secondary School' then 1 else 2 end ``` **EDIT** A comment correctly points out that the query doesn't check that there is at least one secondary school and one other education. There could be two secondary schools, or no secondary school at all! Those cases can be handled with the slightly more complicated query below: ``` WITH cte as ( SELECT STUDENTNUMBER, COURSENAME, INSTITUTIONNAME, SUM(CASE INSTITUTIONNAME WHEN 'Secondary School' THEN 1 END) OVER (PARTITION BY STUDENTNUMBER) AS SecondarySchoolCount, SUM(CASE WHEN INSTITUTIONNAME <> 'Secondary School' AND COURSENAME LIKE 'biol%' THEN 1 END) OVER (PARTITION BY STUDENTNUMBER) AS CourseCount FROM tblEducation WHERE INSTITUTIONNAME = 'Secondary School' OR COURSENAME like 'biol%' OR COURSENAME like 'math%' OR COURSENAME like 'etc%' ) select * from cte where SecondarySchoolCount >= 1 AND CourseCount >= 1 order by studentnumber, case when institutionname = 'Secondary School' then 1 else 2 end ```
This will give you a list of students and a count of college courses (per college), by joining the institution table with itself.

```
SELECT STUDENTNUMBER, SCHOOL_NAME, COLLEGE_NAME, count(*) as COLLEGE_COURSES
FROM
(
    SELECT school.STUDENTNUMBER, school.INSTITUTIONNAME AS SCHOOL_NAME,
        college.INSTITUTIONNAME AS COLLEGE_NAME
    FROM dbo.tblEducation as school
    INNER JOIN dbo.tblEducation as college
        ON school.STUDENTNUMBER = college.STUDENTNUMBER
    WHERE school.INSTITUTIONNAME = 'Secondary School'
        AND college.INSTITUTIONNAME <> 'Secondary School'
        AND (college.COURSENAME like 'biol%'
            OR college.COURSENAME like 'math%'
            OR college.COURSENAME like 'etc%')
) AS c
GROUP BY STUDENTNUMBER, SCHOOL_NAME, COLLEGE_NAME
```

If you want the college course name, then you can return that in the inner query. But since there is only one record per college course, the outer `select` and the `group by` would be redundant.

```
SELECT school.STUDENTNUMBER, school.INSTITUTIONNAME AS SCHOOL_NAME,
    college.INSTITUTIONNAME AS COLLEGE_NAME, college.COURSENAME
FROM dbo.tblEducation as school
INNER JOIN dbo.tblEducation as college
    ON school.STUDENTNUMBER = college.STUDENTNUMBER
WHERE school.INSTITUTIONNAME = 'Secondary School'
    AND college.INSTITUTIONNAME <> 'Secondary School'
    AND (college.COURSENAME like 'biol%'
        OR college.COURSENAME like 'math%'
        OR college.COURSENAME like 'etc%')
```
Displaying results based on record count using SQL
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I wondered if there is any table or view that stores last updates, scripts run by SQL*Plus, transactions, or any changes made by the system. If you look at SQL Developer, it has a SQL history. I need something very similar to that, but containing information with time and date, and maybe the number of executions, etc.
Changed Data Capture (CDC) allows Oracle Data Integrator to track changes in source data caused by other applications. When running integration interfaces, thanks to CDC, Oracle Data Integrator can avoid processing unchanged data in the flow. See the [CDC chapter on the Oracle website](http://docs.oracle.com/cd/E21764_01/integrate.1111/e12643/data_capture.htm).
There are several data dictionary views that contain information about SQL statements, transactions etc: * V$SQL * V$SESSION * V$TRANSACTION ... see [Asktom on V$SQL views](http://asktom.oracle.com/pls/asktom/f?p=100:11:0%3a%3a%3a%3ap11_question_id:385254862843) and [Oracle documentation on V$SQL](http://docs.oracle.com/cd/B19306_01/server.102/b14237/dynviews_2113.htm)
Is there any log file or view stores data about user last transaction with database?
[ "", "sql", "oracle", "oracle11g", "oracle-sqldeveloper", "" ]
I have every column set to `NOT NULL` but for some reason I am still able to add a `NULL` value in each column. Here is my table info (create syntax): ``` CREATE TABLE `addresses` ( `id` int(11) NOT NULL AUTO_INCREMENT, `address` varchar(100) NOT NULL, `city` varchar(100) NOT NULL, `state` varchar(4) NOT NULL, `zip` varchar(30) NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB AUTO_INCREMENT=4545 DEFAULT CHARSET=utf8; ``` Here is a sample `INSERT` that works: ``` INSERT INTO `addresses` (`street`, `city`, `state`, `zip`) VALUES ('', '', '', ''); ``` Any ideas as to why this is happening?
You are inserting empty strings, and empty strings are not `NULL`. To see the `NULL` error, do:

```
INSERT INTO `addresses` (`street`, `city`, `state`, `zip`) VALUES (NULL, NULL, NULL, NULL);
```

and you will see the error. A `NOT NULL` constraint only rejects values that are `NULL`, not empty strings. To prevent empty strings you either have to use [triggers](http://dev.mysql.com/doc/refman/5.1/en/triggers.html), or do the checks in a server-side programming language to convert empty strings to `NULL` before performing the `INSERT` query. An example trigger for `INSERT` may look like this (it is just an example):

```
CREATE TRIGGER avoid_empty BEFORE INSERT ON addresses
FOR EACH ROW
BEGIN
  IF NEW.street = '' THEN
    SET NEW.street = NULL;
  END IF;
END;
```
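The distinction between `''` and `NULL` holds in any engine that enforces `NOT NULL`; here is a quick SQLite sketch from Python (illustration only — the trigger part is MySQL-specific and not reproduced here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE addresses (street TEXT NOT NULL)")

conn.execute("INSERT INTO addresses VALUES ('')")    # accepted: '' is a value
try:
    conn.execute("INSERT INTO addresses VALUES (NULL)")
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)                          # NOT NULL constraint failed

print(conn.execute("SELECT COUNT(*) FROM addresses").fetchone())  # (1,)
```

Only the empty-string row makes it into the table; the `NULL` insert is rejected.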
Try the below:

```
INSERT INTO `addresses` (`street`, `city`, `state`, `zip`) VALUES (NULL, NULL, NULL, NULL);
```

The above will not work.

```
INSERT INTO `addresses` (`street`, `city`, `state`, `zip`) VALUES ('', '', '', '');
```

The above inserts empty strings. The empty string specifically means that the value was set to be empty; null means that the value was not set. To prevent inserting empty strings, check this SO question: [I'm looking for a constraint to prevent the insert of an empty string in MySQL](https://stackoverflow.com/questions/2514178/im-looking-for-a-constraint-to-prevent-the-insert-of-an-empty-string-in-mysql)
MySQL column set to NOT NULL but still allowing NULL values
[ "", "mysql", "sql", "" ]
I have two columns, column1 and column2, with 10 records each, for example: Column1 (A, A, A, B, B, B, B, C, C, D) and Column2 (1, 1, 2, 3, 3, 2, 1, 4, 4, 1) I need to create a third column, column3, from these two columns such that column3 = (A\_1\_1\_2, B\_3\_3\_2\_1, C\_4\_4, D\_1) Please help me figure out how to do this; I have tried group by statements and also concatenations but am not able to work it out.
Here you go. You need to retain the 3rd column and only output on the last observation of each Col1 group: ``` data test; input Col1 $ Col2; datalines; A 1 A 1 A 2 B 3 B 3 B 2 B 1 C 4 C 4 D 1 ;; run; data test_out; set test; by Col1; format Col3 $64.; retain Col3; if first.Col1 then Col3 = strip(Col1); Col3 = catx("_",col3,col2); if last.Col1 then output; run; ```
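For comparison, the group-then-concatenate logic can be sketched outside SAS or SQL in a few lines of Python with `itertools.groupby`, shown here only to make the expected output concrete:

```python
from itertools import groupby

col1 = ['A', 'A', 'A', 'B', 'B', 'B', 'B', 'C', 'C', 'D']
col2 = [1, 1, 2, 3, 3, 2, 1, 4, 4, 1]

# groupby works on adjacent runs, which matches the data's ordering here.
col3 = ['_'.join([key] + [str(v) for _, v in grp])
        for key, grp in groupby(zip(col1, col2), key=lambda row: row[0])]
print(col3)  # -> ['A_1_1_2', 'B_3_3_2_1', 'C_4_4', 'D_1']
```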
I used the work found in [this previous question](https://stackoverflow.com/questions/194852/concatenate-many-rows-into-a-single-text-string) for most of my answer. ``` CREATE TABLE MyTable (COLUMN1 char(1) NULL, COLUMN2 char(1) NULL); INSERT INTO MyTable VALUES ('A', '1'), ('A', '1'), ('A', '2'), ('B', '3'), ('B', '3'), ('B', '2'), ('B', '1'), ('C', '4'), ('C', '4'), ('D', '1'); SELECT COLUMN1 + COLUMN2 AS COLUMN3 FROM( SELECT DISTINCT a.COLUMN1, (SELECT '_' + b.COLUMN2 FROM MyTable b WHERE a.COLUMN1 = b.COLUMN1 ORDER BY b.COLUMN1 FOR XML PATH ('')) AS COLUMN2 FROM MyTable a )c ``` [demo](http://sqlfiddle.com/#!6/f7f9b/1) Hopefully this should get you to a point that you can modify the query to serve your needs. As a plus, this method is able to avoid the dreaded cursor.
concatenating two column records to get the aggregated third column
[ "", "sql", "sas", "" ]
I have a simple stored procedure that is dynamically obtaining values from a cursor. The issue is that at various points some of the bound values can be NULL. I'd like to be able to use these bound values in another query later, for example: ``` select * from table where column = value; ``` The issue is that the `value` is NULL which breaks the query. I realize I need to do a `where column is null` and in the past I've created dynamic queries after evaluating the value. How can I simply do this comparison to cover both a NULL value and a populated VARCHAR2 value?
One straightforward solution is: ``` where ((value is null and column is null) or (value is not null and column = value)) ``` You might like to make sure that indexes on column are actually on (column,0) so that null values are included (and the optimiser knows that they are included).
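The predicate can be exercised quickly with a few rows. This sketch uses SQLite through Python purely for illustration (the question targets Oracle, but the NULL-comparison semantics exercised here are the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("abc",), (None,), ("xyz",)])

def match(value):
    # NULL-safe comparison: NULL matches only NULL rows,
    # a non-NULL value matches by plain equality.
    sql = ("SELECT COUNT(*) FROM t WHERE "
           "((? IS NULL AND col IS NULL) OR (? IS NOT NULL AND col = ?))")
    return conn.execute(sql, (value, value, value)).fetchone()[0]

hits_for_abc = match("abc")
hits_for_null = match(None)
```

Both a concrete value and `None` (bound as SQL `NULL`) find exactly their one matching row.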
If you want `NULL` values to compare as equals: ``` select * from table where (column = value or column is null and value is null); ```
Convert "column = null" to "column is null"?
[ "", "sql", "oracle", "syntax", "null", "oracle10g", "" ]
What I need is a date for the next given day (Monday, Tuesday, Wed...) following today's date. The user is allowed to select what day following they want and that is stored as an int in a table. "Call me next Tuesday (3)" ``` Sunday = 1 Monday = 2 Tuesday = 3 ... ``` So my table looks like this. ``` UserID, NextDayID ``` What I have come up with is: ``` select dateadd(dd,(7 - datepart(dw,GETDATE()) + NextDayID ) % 7, getdate()) ``` It seems to work and will return today's date if you ask for the next whatever day today is which I can add a week if needed. What I am wondering is, is that a good solution or is there something that I'm missing?
1) Your solution uses a non-deterministic function: `datepart(dw...)`. Because of this, changing the `DATEFIRST` setting will give different results. For example, you should try: ``` SET DATEFIRST 7; your solution; ``` and then ``` SET DATEFIRST 1; your solution; ``` 2) The following solution is independent of the `DATEFIRST`/`LANGUAGE` settings: ``` DECLARE @NextDayID INT = 0 -- 0=Mon, 1=Tue, 2 = Wed, ..., 5=Sat, 6=Sun SELECT DATEADD(DAY, (DATEDIFF(DAY, @NextDayID, GETDATE()) / 7) * 7 + 7, @NextDayID) AS NextDay ``` Result: ``` NextDay ----------------------- 2013-09-23 00:00:00.000 ``` This solution is based on the following property of the `DATETIME` type: * Day 0 = `19000101` = Mon * Day 1 = `19000102` = Tue * Day 2 = `19000103` = Wed ... * Day 5 = `19000106` = Sat * Day 6 = `19000107` = Sun So, converting the INT value 0 to DATETIME gives `19000101`. If you want to find the next `Wednesday` then you should start from day 2 (`19000103`/`Wed`), compute the days between day 2 and the current day (`20130921`; 41534 days), divide by 7 (in order to get the number of full weeks; 5933 weeks), multiply by 7 (41531 days; the number of days in full weeks between the first `Wednesday`/`19000103` and the last `Wednesday`) and then add 7 days (one week; 41538 days; in order to get the following `Wednesday`). Add this number (41538 days) to the starting date: `19000103`. Note: my current date is `20130921`. **Edit #1:** ``` DECLARE @NextDayID INT; SET @NextDayID = 1; -- Next Sunday SELECT DATEADD(DAY, (DATEDIFF(DAY, ((@NextDayID + 5) % 7), GETDATE()) / 7) * 7 + 7, ((@NextDayID + 5) % 7)) AS NextDay ``` Result: ``` NextDay ----------------------- 2013-09-29 00:00:00.000 ``` Note: my current date is `20130923`.
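The underlying "days until the next occurrence of a weekday" arithmetic can be checked outside SQL. A sketch in Python, using the same 0=Monday convention as the T-SQL above (this is the modular-arithmetic idea, not a translation of the DATEADD/DATEDIFF expression itself):

```python
from datetime import date, timedelta

def next_weekday(today, target):
    """Next date strictly after `today` whose weekday() equals `target` (0=Mon .. 6=Sun)."""
    # `% 7` wraps the difference into 0..6; `or 7` turns "same day" into "next week".
    days_ahead = (target - today.weekday()) % 7 or 7
    return today + timedelta(days=days_ahead)

# 2013-09-21 was a Saturday; the next Monday (target 0) is 2013-09-23,
# which matches the NextDay result shown above.
nxt = next_weekday(date(2013, 9, 21), 0)
```

When today already is the target weekday, the function returns the same weekday one week later, just as the SQL expression does.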
A calendar table is an alternative to using a bunch of date functions and date arithmetic. A minimal calendar table for this particular problem might look something like this. ``` 2013-09-20 Fri 2013-09-21 Sat 2013-09-22 Sun 2013-09-23 Mon 2013-09-24 Tue ... ``` So a query to get the next Monday might look like this. ``` select min(cal_date) from calendar where cal_date > current_date and day_of_week = 'Mon'; ``` In practice, you'll probably want a lot more columns in the calendar table, because you'll find a lot of uses for it. Also, code that uses a calendar table can usually be seen to be *obviously* correct. Reading the code above is simple: select the minimum calendar date that's after today and that falls on Monday. It's pretty rare to see code that relies on date functions and date arithmetic that's obviously correct. [A calendar table in PostgreSQL](https://stackoverflow.com/a/5030686/562459)
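A miniature version of the calendar-table approach can be spun up in a few lines. This sketch uses SQLite through Python purely for illustration (the question targets SQL Server; table and column names follow the answer above):

```python
import sqlite3
from datetime import date, timedelta

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE calendar (cal_date TEXT, day_of_week TEXT)")

# Populate two weeks of dates with their weekday names (0=Mon in Python).
names = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
start = date(2013, 9, 20)
for i in range(14):
    d = start + timedelta(days=i)
    conn.execute("INSERT INTO calendar VALUES (?, ?)",
                 (d.isoformat(), names[d.weekday()]))

# "Today" is Saturday 2013-09-21; the next Monday should be 2013-09-23.
next_monday = conn.execute(
    "SELECT MIN(cal_date) FROM calendar "
    "WHERE cal_date > ? AND day_of_week = 'Mon'",
    ("2013-09-21",),
).fetchone()[0]
```

The query reads exactly like the answer describes: the smallest calendar date after today that falls on a Monday.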
SQL: get next relative day of week. (Next Monday, Tuesday, Wed.....)
[ "", "sql", "date", "" ]
Using a Rails 3 Active Record model (postgres db) I want to take a user search input of something like this: ``` "some string, some other string, final string" ``` From there I want to split strings on the commas and build an SQL query that looks something like this ``` SELECT * FROM items WHERE item_name SIMILAR TO '%(some string)%' OR item_name SIMILAR TO '%(some other string)%' OR item_name SIMILAR TO '%(final string)%' ``` I'm struggling to come up with a way to build this query as I am rather unfamiliar with the syntax of Ruby.
I would skip SIMILAR TO and go straight to [POSIX regexes](http://www.postgresql.org/docs/current/interactive/functions-matching.html#FUNCTIONS-POSIX-REGEXP), I'm pretty sure SIMILAR TO will be translated to a regex internally so why bother? Also, `~` will let you use [`ANY`](http://www.postgresql.org/docs/current/interactive/functions-comparisons.html#AEN18479) to produce a nice readable expression. You could do something like this: ``` str = 'some string, some other string, final string' items = Item.where('item_name ~ any(array[?])', str.split(/\s*,\s*/)) ``` That will end up running SQL like this: ``` select "items".* from "items" where item_name ~ any(array['some string', 'some other string', 'final string']) ``` and that will produce the same results as your SIMILAR TO version without a bunch of string wrangling. If you're faced with a CSV string that can contain regex metacharacters then you probably want to throw some escaping in the mix. Backslashing anything that isn't alphanumeric should be safe enough: ``` str = 'some string, some other string, final string' pats = str.split(/\s*,\s*/) .map { |s| s.gsub(/\p{^Alnum}/) { '\\' + $& } } items = Item.where('item_name ~ any(array[?])', pats) ``` Switching to LIKE is also an option, then you'd only have to worry about `_` and `%`: ``` str = 'some string, some other string, final string' pats = str.split(/\s*,\s*/) .map { |s| s.gsub(/[_%]/, '%' => '\\%', '_' => '\\_') } .map { |s| '%' + s + '%' } items = Item.where('item_name like any(array[?])', pats) ``` In real life you'd bust the escaping mess (and the "add LIKE percent signs" mess) out into utility methods to make your code cleaner. If you don't care about case then you can use `~*` or `ilike` for case insensitive pattern matching.
Try this way, ``` string = 'some string, some other string, final string' patterns = string.split(", ").map{|str| "%#{str}%" } items = Item.where('item_name like any(array[?])', patterns) ```
Rails: How can I take comma separated values and build an SQL query?
[ "", "sql", "ruby-on-rails", "ruby", "postgresql", "" ]
Currently what I am trying to achieve is to create a graph within LINQPad from a SQL Datasource. I believe it is possible to do, however I am not 100% sure on how exactly to do it. Does anyone have any ideas on a method to do this? (Even if it includes using NuGet packages, I don't mind)
**Edit:** charting is now a built-in feature in LINQPad. See [this answer](https://stackoverflow.com/a/51048641/46223). Yes, you can use any NuGet charting library, or the built-in Windows Forms library in `System.Windows.Forms.DataVisualization.Charting`. Simply call Dump on the chart control after creating it, such as in [this example](http://share.linqpad.net/do9bu3.linq). Another option is to use the Google Chart API: ``` Util.Image ("http://chart.apis.google.com/chart?cht=p3&chd=s:Uf9a&chs=350x140&chl=January|February|March|April").Dump(); ``` with this result: ![enter image description here](https://i.stack.imgur.com/a54bB.png)
LINQPad 5.31 comes with a built-in Chart extension. ``` var customers = new[] { new { Name = "John", TotalOrders = 100 }, new { Name = "Mary", TotalOrders = 130 }, new { Name = "Sara", TotalOrders = 140 }, new { Name = "Paul", TotalOrders = 125 }, }; customers.Chart (c => c.Name, c => c.TotalOrders).Dump(); ``` [![enter image description here](https://i.stack.imgur.com/D9zGG.png)](https://i.stack.imgur.com/D9zGG.png) For more examples, click LINQPad's Samples tab (bottom left), **LINQPad Tutorial and Reference** > **Scratchpad features** > **Charting with Chart()**
Create Graphs in LINQPad
[ "", "sql", "linq", "graph", "linqpad", "graph-visualization", "" ]
I need to combine two result sets and I feel I am so close but just don't see how to wrap this all up: Here's one query with a small result set (give me inactives): ``` SELECT MAX(set_date) AS most_recent_inactive, key_value, statusid FROM status_history WHERE base_table = 'userinfo' AND statusid = 10 AND set_date > TO_DATE('2012-10-01', 'YYYY-MM-DD') GROUP BY key_value, statusid ``` **Output:** ``` recent_inactive key_value statusid 2013-01-30 15 10 2013-06-04 261 10 2013-06-18 352 10 2012-10-04 383 10 2013-01-22 488 10 2013-03-04 711 10 2013-06-19 749 10 2013-03-05 806 10 ``` Another query with a small result set (give me actives): ``` SELECT MAX (set_date) AS most_recent_active, key_value, statusid FROM status_history WHERE base_table = 'userinfo' AND statusid =11 GROUP BY key_value, statusid ``` **Output:** ``` recent_active key_value statusid 2002-01-01 3 11 2002-01-01 5 11 2002-01-01 14 11 2002-01-01 15 11 2002-01-01 21 11 2002-01-01 23 11 2002-01-01 25 11 2002-01-01 26 11 ``` I want to get all of the actives and inactives together, so I union them all: ``` SELECT NULL AS most_recent_active, MAX(set_date) AS most_recent_inactive, key_value, statusid FROM status_history WHERE base_table = 'userinfo' AND statusid = 10 AND set_date > TO_DATE('2012-10-01', 'YYYY-MM-DD') GROUP BY key_value, statusid UNION all SELECT MAX(set_date) AS most_recent_active, NULL AS most_recent_inactive, key_value, statusid FROM status_history WHERE base_table = 'userinfo' AND statusid = 11 GROUP BY key_value, statusid ORDER by key_value ``` **Output:** ``` recent_active recent_inactive key_value statusid 2002-01-01 null 3 11 2002-01-01 null 5 11 2002-01-01 null 14 11 null 2013-01-30 15 10 2002-01-01 null 15 11 2002-01-01 null 21 11 2002-01-01 null 23 11 2002-01-01 null 25 11 2002-01-01 null 26 11 2002-01-01 null 27 11 2002-01-01 null 29 1 ``` The problem is key\_value 15 is duplicated. The values are correct, but I want that record and all subsequent duplicates "flattened": row 15 and all other matches coming through as one record with both date fields set. Again, I feel I'm so close, but how do I wrap this all up? Thank you!
This is assuming that your inactive status\_id is always less than your active status\_id. This may not work if there are other possible status\_id values. ``` select max(most_recent_active), max(most_recent_inactive), key_value, min(statusid) from (select null as most_recent_active, max(set_date) as most_recent_inactive, key_value,statusid from status_history where base_table = 'userinfo' and statusid = 10 and set_date > to_date('2012-10-01', 'YYYY-MM-DD') group by key_value,statusid UNION all select max(set_date) as most_recent_active, null as most_recent_inactive, key_value,statusid from status_history where base_table = 'userinfo' and statusid = 11 group by key_value,statusid order by key_value) group by key_value ```
You can use a CASE statement to split out which are inactive and which are active. The default, if the case is not fulfilled is NULL so you'll get what you want. ``` select max(case when statusid = 10 then set_date end) as most_recent_inactive , max(case when statusid = 11 then set_date end) as most_recent_active , key_value , max(statusid) as statusid from status_history where base_table = 'userinfo' and statusid in (10, 11) group by key_value, statusid ``` The date thing you've got going on in the inactive is a little strange but if you want to restrict just put this in the CASE for inactive dates: ``` select max(case when statusid = 10 and set_date > to_date('2012-10-01', 'YYYY-MM-DD') then set_date end) as most_recent_inactive , max(case when statusid = 11 then set_date end) as most_recent_active , key_value , max(statusid) as statusid from status_history where base_table = 'userinfo' and statusid in (10, 11) group by key_value, statusid ``` I've assumed that if something is both active and inactive you want to display that it's active. If you want to display it as inactive use `min(statusid)`; if you want to display both then you need another column... follow the same logic; use a CASE statement. If you want neither then remove it from the SELECT and GROUP BY clauses completely.
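The conditional-aggregation idea can be exercised with just a few rows. This sketch uses SQLite through Python purely for illustration (the question targets Oracle; dates are simplified to strings, and grouping is on `key_value` alone so each key collapses to one row):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE status_history
                (key_value INTEGER, statusid INTEGER, set_date TEXT)""")
rows = [(15, 10, "2013-01-30"),   # key 15: one inactive row...
        (15, 11, "2002-01-01"),   # ...and one active row
        (3, 11, "2002-01-01")]    # key 3: active only
conn.executemany("INSERT INTO status_history VALUES (?, ?, ?)", rows)

# CASE routes each date into the right column; MAX collapses the group.
result = conn.execute("""
    SELECT key_value,
           MAX(CASE WHEN statusid = 10 THEN set_date END) AS most_recent_inactive,
           MAX(CASE WHEN statusid = 11 THEN set_date END) AS most_recent_active
    FROM status_history
    GROUP BY key_value
    ORDER BY key_value
""").fetchall()
```

Key 15 comes back as a single row with both dates filled in, which is exactly the "flattening" the question asks for; key 3 gets `NULL` in the inactive column.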
How do I combine multiple queries?
[ "", "sql", "oracle", "" ]
I have a query with a joined subquery. If the subquery returns null, I want it to be ignored and I want to the rest of the query to work normally. Currently I have something like: ``` SELECT a, b, c, d FROM tblOne JOIN tblTwo ON tblOne.a = tblTwo.a --this works fine JOIN (SELECT a FROM tblThree) ON tblThree.a = tblOne.a ``` The problem is that if tblThree.a is null, the entire query returns null. So, I only want to use the subquery if tblThree.a is not null. Can I do something with [CASE](http://technet.microsoft.com/en-us/library/ms181765.aspx) or COALESCE, or some other way? Please give code examples.
Use a `LEFT OUTER JOIN` instead of an `INNER JOIN`. This will return all rows for the rest of the query, even if `tblThree` returns no matching rows. In this case the columns for `tblThree` will all be `NULL`. Using your query (although I have added the required alias for the derived table): ``` SELECT a, b, c, d FROM tblOne INNER JOIN tblTwo ON tblOne.a = tblTwo.a LEFT OUTER JOIN ( SELECT a, e, f FROM tblThree ) tblThree ON tblThree.a = tblOne.a ``` Note, as @491243 points out, the derived-table subquery here does not really make sense. You also most likely have an ambiguous column `a` in the SELECT clause. I'm guessing this is just an extrapolation of your real query though.
Try changing your third join to a LEFT JOIN. ``` LEFT JOIN (SELECT a, e, f FROM tblThree) ON tblThree.a = tblOne.a ``` Then if a is null you will still get the original rows prior to the attempted join. Another option would be to add ``` WHERE a IS NOT NULL ``` to your subquery to only return rows where a has a value.
sql conditional subquery - ignore subquery if it returns null?
[ "", "sql", "sql-server", "" ]
I'm new to triggers and I can't seem to find an answer to this question. I need to insert calculated rows in a table and have no access to the application source code. I thought a workaround might be to create a trigger on the table to calculate the values based on the inserted values and then insert them into the table, though I'm not really sure how this works. I have the following trigger; it works for updates, but doesn't work for inserts. If I remove "INSERT" in "FOR INSERT, UPDATE" it will insert the account\_id field (which is the only non-null field), but not the rest. If I leave "INSERT" in, it will not insert any of the fields. How can I make this work for both inserts and updates? ``` ALTER TRIGGER [dbo].[SelfCalFieldsTrigger] ON [dbo].[dynTable] FOR INSERT, UPDATE AS BEGIN Declare @acctID uniqueidentifier Declare @invDiscPremPerc decimal(9,6) Declare @investorSRP decimal(9,6) Declare @purchWireAmt decimal(11,2) set @acctID = (Select account_id from inserted) set @invDiscPremPerc = (Select Coalesce(INVESTOR_DISC_PREM_DOLLAR, 0)/ Coalesce(INVESTOR_PRIN_BAL_PURCHASED, 0)*100 from inserted) set @investorSRP = (Select Coalesce(INVESTOR_SRP_PREM_DOLLAR, 0)/ Coalesce(INVESTOR_PRIN_BAL_PURCHASED, 0)*100 from inserted) set @purchWireAmt = (Select Coalesce(INVESTOR_PRIN_BAL_PURCHASED, 0)- Coalesce(INVESTOR_ADMIN_FEE, 0)- Coalesce(INVESTOR_WIRE_FEE, 0)- Coalesce(INVESTOR_FLOOD_FEE, 0)- Coalesce(INVESTOR_TAX_SERVICE_FEE, 0) - Coalesce(INVESTOR_OTHER_FEE, 0) from inserted) UPDATE [dbo].[dynTable] set INVESTOR_DISC_PREM_PERCENT = @invDiscPremPerc, INVESTOR_SRP = @investorSRP, PURCHASE_WIRE_AMOUNT = @purchWireAmt Where [dbo].[dynTable].Account_ID = @acctID END ```
You need to change the logic of your trigger. Right now, you are assuming that there is only one row inserted or updated, but that's not how it works. `INSERTED` is a pseudo table that contains all rows inserted or updated. So, try this instead: ``` ALTER TRIGGER [dbo].[SelfCalFieldsTrigger] ON [dbo].[dynTable] FOR INSERT, UPDATE AS BEGIN UPDATE A SET INVESTOR_DISC_PREM_PERCENT = ISNULL(B.INVESTOR_DISC_PREM_DOLLAR / NULLIF(B.INVESTOR_PRIN_BAL_PURCHASED,0)*100,0), INVESTOR_SRP = ISNULL(B.INVESTOR_SRP_PREM_DOLLAR / NULLIF(B.INVESTOR_PRIN_BAL_PURCHASED,0)*100,0), PURCHASE_WIRE_AMOUNT = COALESCE(B.INVESTOR_PRIN_BAL_PURCHASED, 0)- COALESCE(B.INVESTOR_ADMIN_FEE, 0)- COALESCE(B.INVESTOR_WIRE_FEE, 0)- COALESCE(B.INVESTOR_FLOOD_FEE, 0)- COALESCE(B.INVESTOR_TAX_SERVICE_FEE, 0)- COALESCE(B.INVESTOR_OTHER_FEE, 0) FROM [dbo].[dynTable] A INNER JOIN INSERTED B ON A.Account_ID = B.Account_ID END ```
Try it like this, it should fix both problems: ``` ALTER TRIGGER [dbo].[SelfCalFieldsTrigger] ON [dbo].[dynTable] FOR INSERT, UPDATE AS BEGIN UPDATE [dbo].dynTable SET INVESTOR_DISC_PREM_PERCENT = Coalesce(INVESTOR_DISC_PREM_DOLLAR, 0)/ Coalesce(INVESTOR_PRIN_BAL_PURCHASED, 0)*100, INVESTOR_SRP = Coalesce(INVESTOR_SRP_PREM_DOLLAR, 0)/ Coalesce(INVESTOR_PRIN_BAL_PURCHASED, 0)*100, PURCHASE_WIRE_AMOUNT = Coalesce(INVESTOR_PRIN_BAL_PURCHASED, 0)- Coalesce(INVESTOR_ADMIN_FEE, 0)- Coalesce(INVESTOR_WIRE_FEE, 0)- Coalesce(INVESTOR_FLOOD_FEE, 0)- Coalesce(INVESTOR_TAX_SERVICE_FEE, 0) - Coalesce(INVESTOR_OTHER_FEE, 0) WHERE Account_id IN(Select i.Account_id From inserted i) END ```
SQL After update trigger seems to prevent insert from working
[ "", "sql", "sql-server", "triggers", "" ]
I was asked this question in an interview; this is the table: ``` Roll | Sub | Marks 1 A 20 1 B 21 2 A 15 2 B 19 3 A 21 3 B 22 ``` Now I have to find the roll number and marks for the 2nd highest total marks obtained by a student, so I answered with this: ``` declare @trytable table ( roll int, total int ) insert @trytable select Roll, SUM(Marks) from Student group by Roll Select * from @trytable t where t.total in (select MAX(total) from @trytable where total not in ( select MAX(total) from @trytable)) ``` which gives the correct answer, but the interviewer wanted this done in a single query, without using the table variable. The result should be: ``` Roll | Total Marks 1 41 ``` So how can I do that? Please let me know.
Below query gives the roll numbers who obtained 2nd highest marks summing the two subject marks. ``` SELECT TOP 1 Roll, Marks FROM ( SELECT DISTINCT TOP 2 Roll, SUM(Marks) over (Partition by Roll) Marks FROM YourTable ORDER BY marks DESC ) temp ORDER BY Marks ``` OR ``` SELECT DISTINCT Roll, Marks, SRANK FROM ( SELECT Roll, Marks, DENSE_RANK() OVER( ORDER BY Marks DESC) AS SRANK FROM ( SELECT Roll, SUM(Marks) over (Partition by Roll) Marks FROM YourTable )x )x WHERE SRANK=2 ```
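A portable way to double-check the expected result (roll 1 with total 41) is plain `LIMIT ... OFFSET`, which avoids both `TOP` and window functions. This is a different technique from the `TOP`/`DENSE_RANK` queries above, shown only as a cross-check; the sketch uses SQLite through Python with the data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Student (Roll INTEGER, Sub TEXT, Marks INTEGER)")
rows = [(1, "A", 20), (1, "B", 21), (2, "A", 15),
        (2, "B", 19), (3, "A", 21), (3, "B", 22)]
conn.executemany("INSERT INTO Student VALUES (?, ?, ?)", rows)

# Totals are 1 -> 41, 2 -> 34, 3 -> 43; skip the highest, take the next one.
second = conn.execute("""
    SELECT Roll, SUM(Marks) AS Total
    FROM Student
    GROUP BY Roll
    ORDER BY Total DESC
    LIMIT 1 OFFSET 1
""").fetchone()
```

Note the `LIMIT ... OFFSET` form is MySQL/SQLite/PostgreSQL syntax; SQL Server would need `OFFSET 1 ROWS FETCH NEXT 1 ROWS ONLY` instead.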
If I understand you correctly, you just want to get the total score for the second highest student, and student is identified by roll? If so: ``` select roll, sum(Marks) from Student group by roll order by total limit 1,1; ``` Not 100% sure about the 1,1 - what you are saying is, I only want 1 row, and not the first.
How to get the 2nd highest from a table where it need to be added first in sql server in a single query?
[ "", "sql", "sql-server", "" ]
I'm trying to create a query that'll update `table_1` where column `id_owner` has more than `5 rows` with the same `owner id`, it needs to set column "`active`" to `3` on all rows those users have. I've tried several different methods and turned up empty with each. Any ideas?
Use this `UPDATE` query with `JOIN` to achieve this: ``` UPDATE table1 t1 JOIN ( SELECT id_owner FROM table1 GROUP BY id_owner HAVING COUNT(*) > 5 ) t2 ON t1.id_owner = t2.id_owner SET t1.active = 3; ``` ### See [this sample SQLFiddle](http://sqlfiddle.com/#!9/f9a5b7b/1)
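The same "update owners with more than 5 rows" filter can be demonstrated end to end. This sketch uses SQLite through Python purely for illustration; since SQLite has no `UPDATE ... JOIN`, the filter is expressed with `IN` instead (equivalent logic, different syntax):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (id_owner INTEGER, active INTEGER DEFAULT 0)")
# Owner 7 has six rows (more than 5); owner 8 has only two.
conn.executemany("INSERT INTO table1 (id_owner) VALUES (?)",
                 [(7,)] * 6 + [(8,)] * 2)

# Flag every row belonging to an owner that appears more than 5 times.
conn.execute("""
    UPDATE table1 SET active = 3
    WHERE id_owner IN (SELECT id_owner FROM table1
                       GROUP BY id_owner HAVING COUNT(*) > 5)
""")
flagged = conn.execute(
    "SELECT COUNT(*) FROM table1 WHERE active = 3").fetchone()[0]
```

All six of owner 7's rows get `active = 3`; owner 8's rows are left alone.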
You may try this:- ``` update table_1 set active = 3 where owner_id in ( select * from ( select owner_id from table_1 group by owner_id having count(*) > 5 ) a ) ```
Update if count is greater than 5 in MySQL
[ "", "mysql", "sql", "sql-update", "" ]
I have the following data: ``` Id , TagNo , Revision 100 , 20001 , A 101 , 20001 , B 102 , 20001 , C 103 , 20002 , B 104 , 20002 , A 105 , 20003 , B ``` If I pass B for Revision, I want the following records: ``` 101 , 20001 , B 103 , 20002 , B 105 , 20003 , B ``` If I pass A for Revision, I should have the following records: ``` 100 , 20001 , A 104 , 20002 , A ``` And if I pass C for Revision, the following should be my result: ``` 102 , 20001 , C 103 , 20002 , B 105 , 20003 , B ``` I couldn't make it work with T-SQL. Can anybody help me? Thank you
``` with cte as ( select Id, TagNo, Revision, row_number() over(partition by TagNo order by Revision desc) as rn from Table1 where Revision <= @Revision ) select Id, TagNo, Revision from cte where rn = 1 ``` **`sql fiddle demo`**
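The same "highest revision not exceeding the requested one, per TagNo" rule can be expressed without window functions, using a correlated subquery. This is a portable alternative to the CTE above, shown as a cross-check; the sketch uses SQLite through Python with the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Table1 (Id INTEGER, TagNo INTEGER, Revision TEXT)")
rows = [(100, 20001, "A"), (101, 20001, "B"), (102, 20001, "C"),
        (103, 20002, "B"), (104, 20002, "A"), (105, 20003, "B")]
conn.executemany("INSERT INTO Table1 VALUES (?, ?, ?)", rows)

def latest_at_or_below(rev):
    # For each TagNo keep the highest Revision that does not exceed `rev`;
    # TagNos with no qualifying revision drop out (the subquery yields NULL).
    return conn.execute("""
        SELECT Id, TagNo, Revision
        FROM Table1 t
        WHERE Revision = (SELECT MAX(Revision) FROM Table1
                          WHERE TagNo = t.TagNo AND Revision <= ?)
        ORDER BY TagNo
    """, (rev,)).fetchall()

for_c = latest_at_or_below("C")
```

Passing `C` yields exactly the third expected result set, and passing `A` correctly skips TagNo 20003, whose only revision is B.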
Are you just looking for ``` select * from mytable where Revision = 'A' ``` or ``` select * from mytable where Revision = 'B' ``` or ``` select * from mytable where Revision = 'C' ```
TSQL making doesn't work
[ "", "sql", "sql-server", "t-sql", "" ]
I am joining 2 tables like this: ``` RIGHT JOIN Sheet.dbo.tbl_M_Tam TD ON TD.Name = EU.Name ``` The problem is that sometimes EU.Name is like 'name1' and TD.Name is 'name1 KO'; in this case the join won't work. So I want to make it work by appending the string 'KO' to EU.Name when it is a one-word string. Is there a simple way to do that? I am using SQL Server.
If this is just for the one specific string ('KO') then this should work. ``` RIGHT JOIN Sheet.dbo.tbl_M_Tam TD ON TD.Name = CASE WHEN EU.Name LIKE '% %' THEN EU.Name ELSE EU.Name + ' KO' END ```
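The conditional join key can be demonstrated with a couple of rows. This sketch uses SQLite through Python purely for illustration (the question targets SQL Server; SQLite concatenates with `||` rather than `+`, and table contents are invented to cover both branches of the CASE):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE EU (Name TEXT);
    CREATE TABLE TD (Name TEXT, Info TEXT);
    INSERT INTO EU VALUES ('name1'), ('two words');
    INSERT INTO TD VALUES ('name1 KO', 'a'), ('two words', 'b');
""")

# One-word EU names (no embedded space) get ' KO' appended before comparing.
rows = conn.execute("""
    SELECT EU.Name, TD.Info
    FROM EU
    JOIN TD ON TD.Name = CASE WHEN EU.Name LIKE '% %'
                              THEN EU.Name ELSE EU.Name || ' KO' END
    ORDER BY EU.Name
""").fetchall()
```

Both the one-word name (matched as 'name1 KO') and the multi-word name (matched as-is) find their row.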
``` RIGHT JOIN Sheet.dbo.tbl_M_Tam TD ON TD.Name LIKE EU.Name +'%' ``` But be warned that this is not the most efficient way of doing a `JOIN` on many rows in a system with high usage. If you want to be efficient, try to clean up your data in some way so that you can do a straight col1-equals-col2 join.
how to count how many words when using a join?
[ "", "sql", "sql-server", "join", "" ]
Given the table `users` below: ``` +----+---------+--------+ | id | name | office | +----+---------+--------+ | 1 | David | 1 | | 2 | Roz | 1 | | 3 | Patrick | 2 | | 4 | Chris | 3 | | 5 | Agnes | 3 | | 6 | Freya | 3 | +----+---------+--------+ ``` I want to select the first user of any given office, but ONLY if there's more than one user, so: * Office 1 = User 1 (David) * Office 2 = **NULL** * Office 3 = User 4 (Chris) Something along the lines of: ``` SET @office_id = 2; SELECT * FROM `users` WHERE `office` = @office_id AND number-of-users-for-office > 1 ORDER BY `id` ASC LIMIT 1; ```
``` SELECT a.office, MAX( CASE WHEN b.ID IS NULL THEN NULL ELSE a.Name END) Name FROM Tablename a LEFT JOIN ( SELECT office, MIN(id) ID FROM Tablename GROUP BY office HAVING COUNT(*) > 1 ) b ON a.office = b.office AND a.ID = b.ID -- WHERE ....... -- (if you have extra conditions) GROUP BY a.office ``` * [SQLFiddle Demo](http://sqlfiddle.com/#!2/55473/3) OUTPUT ``` ╔════════╦════════╗ ║ OFFICE ║ NAME ║ ╠════════╬════════╣ ║ 1 ║ David ║ ║ 2 ║ (NULL) ║ ║ 3 ║ Chris ║ ╚════════╩════════╝ ``` The purpose of the subquery is to get the least ID for every `Office`. The extra `HAVING` clause filters only records that has more than one employee within the certain office. Table `User` is then joined on the subquery via `LEFT JOIN` to get all the office within the table. The records are aggregated using `MAX()` (*or `MIN()`*) to get single record for every `office`.
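The core idea (LEFT JOIN each office against its lowest id, but only for offices with more than one user) can be verified with the question's data. This sketch uses SQLite through Python purely for illustration, restated slightly from the T-SQL above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT, office INTEGER)")
rows = [(1, "David", 1), (2, "Roz", 1), (3, "Patrick", 2),
        (4, "Chris", 3), (5, "Agnes", 3), (6, "Freya", 3)]
conn.executemany("INSERT INTO users VALUES (?, ?, ?)", rows)

# Offices with more than one user contribute their lowest id; others give NULL.
result = conn.execute("""
    SELECT u.office, b.name
    FROM (SELECT DISTINCT office FROM users) u
    LEFT JOIN (SELECT office, MIN(id) AS id FROM users
               GROUP BY office HAVING COUNT(*) > 1) m
           ON m.office = u.office
    LEFT JOIN users b ON b.id = m.id
    ORDER BY u.office
""").fetchall()
```

Office 2, having only Patrick, comes back with `NULL` for the name, matching the expected output in the answer.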
I came up with another solution that I'm jotting for posterity. ``` -- Office id... SET @office_id = 1; -- Find the latest user SELECT `id` INTO @latest_user_id FROM `users` WHERE `office` = @office_id ORDER BY `id` DESC LIMIT 1; -- Find the first user that isn't the first SELECT `id` INTO @latest_user_id FROM `users` WHERE `office` = @office_id AND `id` != @latest_user_id ORDER BY `id` ASC LIMIT 1; ``` [SQL Fiddle Demo](http://sqlfiddle.com/#!2/12c59/6) This one selects the latest, then tries to select the first where it's not the same as the latest. Thus if there's only 1 row you'll get NULL as desired.
Select first row but only if there's more than one match?
[ "", "mysql", "sql", "" ]
I have read **Database system concepts**, 6th edition, *Silberschatz*. I'm going to implement the university database system shown in chapter 2 on OS X on MySQL. But I have trouble creating the table `course`. The table `department` looks like: ``` mysql> select * from department -> ; +------------+----------+-----------+ | dept_name | building | budget | +------------+----------+-----------+ | Biology | Watson | 90000.00 | | Comp. Sci. | Taylor | 100000.00 | | Elec. Eng. | Taylor | 85000.00 | | Finance | Painter | 120000.00 | | History | Painter | 50000.00 | | Music | Packard | 80000.00 | | Physics | Watson | 70000.00 | +------------+----------+-----------+ mysql> show columns from department -> ; +-----------+---------------+------+-----+---------+-------+ | Field | Type | Null | Key | Default | Extra | +-----------+---------------+------+-----+---------+-------+ | dept_name | varchar(20) | NO | PRI | | | | building | varchar(15) | YES | | NULL | | | budget | decimal(12,2) | YES | | NULL | | +-----------+---------------+------+-----+---------+-------+ ``` Creating the table `course` causes the following error. ``` mysql> create table course -> (course_id varchar(7), -> title varchar (50), -> dept_name varchar(20), -> credits numeric(2,0), -> primary key(course_id), -> foreign key (dept_name) references department); ERROR 1215 (HY000): Cannot add foreign key constraint ``` After searching Google for 'foreign key constraint', I have learned that the term 'foreign key constraint' indicates that data in the foreign key column of the table `course` must exist in the primary key column of the table `department`. But I would have expected to meet this error when inserting data. If not, why does the author make me execute that SQL statement? If I really did execute an erroneous SQL statement, do I have to designate `dept_name` in the `course` table as a foreign key after inserting some data? **EDIT** : typing `set foreign_key_checks=0` into `mysql>` does not fix the error. ``` ------------------------ LATEST FOREIGN KEY ERROR ------------------------ 2013-09-21 16:02:20 132cbe000 Error in foreign key constraint of table university/course: foreign key (dept_name) references department): Syntax error close to: ) mysql> set foreign_key_checks=0 -> ; Query OK, 0 rows affected (0.00 sec) mysql> create table course -> (course_id varchar(7), -> title varchar(50), -> dept_name varchar(20), -> credits numeric(2,0), -> primary key(course_id), -> foreign key (dept_name) references department); ERROR 1215 (HY000): Cannot add foreign key constraint ```
The syntax of `FOREIGN KEY` for `CREATE TABLE` is structured as follows: ``` FOREIGN KEY (index_col_name) REFERENCES table_name (index_col_name,...) ``` So your MySQL DDL should be: ``` create table course ( course_id varchar(7), title varchar(50), dept_name varchar(20), credits numeric(2 , 0 ), primary key (course_id), FOREIGN KEY (dept_name) REFERENCES department (dept_name) ); ``` Also, in the `department` table `dept_name` should be `VARCHAR(20)` More information can be found in the [MySQL documentation](http://dev.mysql.com/doc/refman/5.6/en/create-table-foreign-keys.html)
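For a runnable illustration of the corrected DDL (the `REFERENCES` clause naming its column), here is a sketch using SQLite through Python. Note this only demonstrates the general foreign-key mechanics, not MySQL's specific ERROR 1215 behaviour; SQLite also requires the pragma to enforce foreign keys at all:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.executescript("""
    CREATE TABLE department (dept_name VARCHAR(20) PRIMARY KEY);
    CREATE TABLE course (
        course_id VARCHAR(7) PRIMARY KEY,
        dept_name VARCHAR(20),
        FOREIGN KEY (dept_name) REFERENCES department (dept_name)
    );
    INSERT INTO department VALUES ('Biology');
""")

conn.execute("INSERT INTO course VALUES ('BIO-101', 'Biology')")  # parent exists
try:
    conn.execute("INSERT INTO course VALUES ('CS-101', 'Comp. Sci.')")  # no parent row
    fk_enforced = False
except sqlite3.IntegrityError:
    fk_enforced = True
```

The insert with a matching `department` row succeeds; the one referencing a department that does not exist is rejected.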
When you get this vague error message, you can find out the more specific error by running ``` SHOW ENGINE INNODB STATUS; ``` The most common reasons are that when creating a foreign key, both the referenced field and the foreign key field need to match: * **Engine** should be the same *e.g. InnoDB* * **Datatype** should be the same, and with same length. *e.g. VARCHAR(20) or INT(10) UNSIGNED* * **Collation** should be the same. *e.g. utf8* * **Unique** - Foreign key should refer to field that is unique *(usually private)* in the reference table. Another cause of this error is: *You have defined a SET NULL condition though some of the columns are defined as NOT NULL.*
MySQL : ERROR 1215 (HY000): Cannot add foreign key constraint
[ "", "mysql", "sql", "database", "" ]
Been puzzling over this for a while now. Basically here's my statement: ``` SELECT CandidateID, Town, Candidates.SalaryMin, CandidateExperience, CandidateExperience.divTagExp, PrimarySector, Candidates.SalaryMin, CandidateSalary.divTagSal, CASE WHEN following.RecID =1 THEN 'block' ELSE 'none' END AS divFollow FROM Candidates LEFT JOIN CandidateExperience ON CandidateExperience.CandidateExpID = Candidates.CandidateExperience LEFT JOIN CandidateSalary ON Candidates.SalaryMin >= CandidateSalary.SalaryMin LEFT JOIN following ON following.RecID = Candidates.CandidateID AND Candidates.SalaryMin <= CandidateSalary.SalaryMax ``` Here's my Candidates table: ![enter image description here](https://i.stack.imgur.com/r3cel.png) Here's my following table: ![enter image description here](https://i.stack.imgur.com/9lCop.png) Here's my results of the query: ![enter image description here](https://i.stack.imgur.com/Afybn.png) I know there's something wrong with the join, but I've tried left, right, inner, outer, and none of them is giving me what I want. What I do want is one entry for each entry in Candidates, with the divFollow field showing 'block' if there's a matching entry in 'following' and 'none' if there isn't. What am I missing? Thank you! UPDATE: New result set after adjusting RecID to FollowingID ![enter image description here](https://i.stack.imgur.com/55m8Q.png)
``` SELECT *, CASE WHEN candidateId IN ( SELECT followId FROM following ) THEN 'block' ELSE 'none' END AS divFollow FROM Candidates ```
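The `IN`-subquery flag column is easy to demonstrate with toy data. This sketch uses SQLite through Python purely for illustration (the question targets MySQL; ids are invented, with candidate 12 followed twice to show duplicates in `following` do not duplicate output rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Candidates (CandidateID INTEGER);
    CREATE TABLE following (followID INTEGER);
    INSERT INTO Candidates VALUES (11), (12), (13);
    INSERT INTO following VALUES (12), (12);
""")

# One row per candidate: 'block' if any following row matches, else 'none'.
rows = conn.execute("""
    SELECT CandidateID,
           CASE WHEN CandidateID IN (SELECT followID FROM following)
                THEN 'block' ELSE 'none' END AS divFollow
    FROM Candidates
    ORDER BY CandidateID
""").fetchall()
```

Because the match happens inside `IN` rather than in a join, every candidate appears exactly once, which is the behaviour the question asks for.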
``` select Candidates.*, ISNULL(Following.divFollow, 'none') as divFollow from Candidates left join (select distinct followID from Following) as Following on CandidateID = followID ``` Further inner joins can be made to get data from Following table. Left outer join ensures that all entries in the former table (left side of join operator) will be in the resulting table at least one. If there is no correspondent in the later (right side of the join operator) table the cells coming from the later table are filled with NULL. This gets the job done for you since you want for each Candidate having a correspondent in Following to get a 'block', otherwise 'none', value on divFollow.
MySQL - Something wrong with my join
[ "", "mysql", "sql", "" ]
Is it possible to add such a thing as the last second of the day into a date? Let's say that I have dates with different times, and I need every date's time set to 23:59:59... Is it possible? Thanks
``` update the_table set the_date_column = to_date(to_char(the_date_column, 'yyyy-mm-dd')||' 23:59:59', 'yyyy-mm-dd hh24:mi:ss'); ```
In case the [solution proposed by @a\_horse\_with\_no\_name](https://stackoverflow.com/a/18916694/1667004) proves to be slow, it should be possible to do it this way: +1 day, -1 second is the logic I'd follow to get that result, without string concatenation: ``` SELECT trunc(SYSDATE) + 1 - (INTERVAL '1' SECOND) FROM DUAL ``` [SQL fiddle](http://sqlfiddle.com/#!4/d41d8/17563) Translated into an UPDATE: ``` UPDATE MY_TABLE SET MY_DATE_COLUMN = trunc(MY_DATE_COLUMN) + 1 - (INTERVAL '1' SECOND) ``` **However** Keep in mind that maintainability is of key importance when writing software, and reading this is much harder than the other solution proposed. # Recommended Reading * [TRUNC](http://docs.oracle.com/cd/B28359_01/server.111/b28286/functions209.htm) * [INTERVAL](http://psoug.org/definition/INTERVAL.htm)
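The "truncate to midnight, add one day, subtract one second" arithmetic is easy to check outside Oracle. A sketch in Python purely for illustration:

```python
from datetime import datetime, timedelta

def end_of_day(ts):
    """Mirror trunc(d) + 1 - INTERVAL '1' SECOND: same day, time 23:59:59."""
    midnight = ts.replace(hour=0, minute=0, second=0, microsecond=0)
    return midnight + timedelta(days=1) - timedelta(seconds=1)

# A date with an arbitrary time of day lands on 23:59:59 of the same day.
eod = end_of_day(datetime(2013, 9, 21, 14, 30, 5))
```

Whatever the original time of day, the result is the last second of that same date.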
How to add last second into date?
[ "", "sql", "oracle", "" ]
How will I use SQL to convert the first table into the second table in Oracle? The first table has three columns called Date, TotalUsers and Department. The Department column only has two kinds of values: one is \*, the other is null. I need to turn TotalUsers with \* into TotalUsersStar, and TotalUsers with null into TotalUsersNull in the second table. Please see the second table. First table: ``` Date TotalUsers Department 199905 1234 * 199912 2345 * 200005 8923 (null) 200012 6783 (null) ``` Second table: ``` Date TotalUsersNull TotalUsersStar 199905 1234 199912 2345 200005 8923 200012 6783 ```
It's very simple - ``` CREATE TABLE second AS SELECT date , CASE WHEN department IS NULL THEN totalUsers END AS TotalUsersNull , CASE WHEN department = '*' THEN totalUsers END AS TotalUsersStar FROM first; ``` (Note that the simple `CASE department WHEN NULL` form would never match, because `NULL = NULL` evaluates to unknown; the searched `CASE WHEN department IS NULL` form is needed here.) For more information [click here](http://www.dba-oracle.com/t_create_table_select_ctas.htm) or [here](http://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_7002.htm).
This is just a query. ``` SELECT Date_, CASE WHEN Department IS NULL THEN TotalUsers END TotalUsersNull, CASE WHEN Department = '*' THEN TotalUsers END TotalUsersStar FROM First; ``` Please note that I'm using the field name `Date_`, since Oracle does not like data type names being used as column names. To create a new table you will need something like this (assuming that you have **all the rights**): ``` CREATE TABLE Second (Date_ int, TotalUsersNull int, TotalUsersStar int) ; INSERT INTO Second (Date_, TotalUsersNull, TotalUsersStar) SELECT Date_, CASE WHEN Department IS NULL THEN TotalUsers END TotalUsersNull, CASE WHEN Department = '*' THEN TotalUsers END TotalUsersStar FROM First; ``` For simplicity I defined the `Date_` field as `int`. See my [**SQLFiddle DEMO**](http://sqlfiddle.com/#!4/c370b/1).
How to use SQL to convert one table into another in Oracle
[ "", "sql", "oracle", "" ]
I am trying to add a foreign key on my other table, but this gives me the error `#1072 - Key column 'role_id' doesn't exist in table`. I created a table named `role` like this: ``` create table role ( role_id varchar(15) primary key (role_id) ) ``` then when I try to alter my `user` table: ``` alter table user add foreign key (role_id) references role(role_id) ``` I get this error: `#1072 - Key column 'role_id' doesn't exist in table`
You have to have the column you reference in `add foreign key (`**role\_id**`)` inside your user table. Otherwise you get that error. You would have to have inside your user table something like: ``` create table user( ... role_id varchar(15) ... ) ``` Or if you don't have it, you have to do: ``` ALTER TABLE user ADD COLUMN role_id VARCHAR(15) ``` Before you set it as a foreign key.
You need to add `role_id` to your new table as well and then set it as the foreign key: ``` alter table user add column role_id varchar(15); alter table user add foreign key (role_id) references role(role_id); ``` Edit: I had a similar error, so just declare `role_id` in the new table first and it should work.
#1072 - Key column 'role_id' doesn't exist in table
[ "", "mysql", "sql", "" ]
How will I properly do this: Customer ``` CusID, CusLname, CusFname, CusMname, CusAddress, CusEmailAdd ``` Order ``` OrderID, Order, CusID ``` SQL ``` Select Count(OrderID), o.CusID, CusLname, CusFname, CusMname, CusAddress, CusEmailAdd From Customer c Inner join Order o On c.CusID = o.CusID Group By o.CusID ``` Error `Column 'CusLname, CusFname, CusMname, CusAddress, CusEmailAdd' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.` Why do I need to add those columns in the Group By clause? What can I do to select those columns without having them in the Group By, because I only want to group by CusID?
A little bit different version (from the one Gordon Linoff suggests): ``` select cnt, c.CusID, CusLname, CusFname, CusMname, CusAddress, CusEmailAdd from Customer c join ( select count(OrderID) cnt, CusID from Order o group by CusID ) o on c.CusID = o.CusID order by c.CusID -- ordering by OrderID or Order would make no sense here :) ```
Just include them in the `group by` clause: ``` Select Count(OrderID), o.CusID, CusLname, CusFname, CusMname, CusAddress, CusEmailAdd From Customer c Inner join Order o On c.CusID = o.CusID Group By o.CusID, CusLname, CusFname, CusMname, CusAddress, CusEmailAdd; ```
Column is invalid in select
[ "", "sql", "t-sql", "sql-server-2008-r2", "" ]
I'm having some trouble translating an MS Access query to SQL Server: ``` SELECT id, col1, col2, col3 FROM table1 LEFT OUTER JOIN table2 ON table1.id = table2.id LEFT OUTER JOIN table3 ON table1.id = table3.id ``` so far so good, but here's the (CASE) part where I get stuck: ``` CASE WHEN table3.col3 IS NULL THEN table2.col3 AS col4 ELSE table3.col3 as col4 ``` I know the above line doesn't work, but hopefully it hints at what I'm trying to accomplish. Thanks! UPDATE: All of the suggestions so far have resulted in an "Incorrect syntax near the keyword 'AS'" error, so maybe there's something else I'm missing. Below is the actual query. The issue is that we have two tables, both with an EUID column. If dbo.EU\_Admin3.EUID is not NULL, it takes precedence in the join. If dbo.EU\_Admin3.EUID is NULL, use dbo.EU\_Admin2.EUID instead. Hope that clarifies this. ``` SELECT dbo.AdminID.CountryID, dbo.AdminID.CountryName, dbo.AdminID.RegionID, dbo.AdminID.[Region name], dbo.AdminID.DistrictID, dbo.AdminID.DistrictName, dbo.AdminID.ADMIN3_ID, dbo.AdminID.ADMIN3 (CASE WHEN dbo.EU_Admin3.EUID IS NULL THEN dbo.EU_Admin2.EUID ELSE dbo.EU_Admin3.EUID END AS EUID) FROM dbo.AdminID LEFT OUTER JOIN dbo.EU_Admin2 ON dbo.AdminID.DistrictID = dbo.EU_Admin2.DistrictID LEFT OUTER JOIN dbo.EU_Admin3 ON dbo.AdminID.ADMIN3_ID = dbo.EU_Admin3.ADMIN3_ID ```
Try this: ``` CASE WHEN table3.col3 IS NULL THEN table2.col3 ELSE table3.col3 END as col4 ``` The `as col4` should go at the end of the CASE the statement. Also note that you're missing the `END` too. Another probably more simple option would be: ``` IIf([table3.col3] Is Null,[table2.col3],[table3.col3]) ``` Just to clarify, MS Access does not support COALESCE. If it would that would be the best way to go. **Edit after radical question change:** To turn the query into SQL Server then you can use COALESCE (so it was technically answered before too): ``` SELECT dbo.AdminID.CountryID, dbo.AdminID.CountryName, dbo.AdminID.RegionID, dbo.AdminID.[Region name], dbo.AdminID.DistrictID, dbo.AdminID.DistrictName, dbo.AdminID.ADMIN3_ID, dbo.AdminID.ADMIN3, COALESCE(dbo.EU_Admin3.EUID, dbo.EU_Admin2.EUID) FROM dbo.AdminID ``` BTW, your CASE statement was missing a `,` before the field. That's why it didn't work.
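If it helps to see why COALESCE can replace the whole CASE here: it simply returns its first non-NULL argument. A minimal sketch using Python's built-in sqlite3 (the EUID values are made up; the behaviour is the same in SQL Server):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# COALESCE returns its first non-NULL argument, so the Admin3 value wins
# when present and the Admin2 value is used as the fallback.
with_admin3 = con.execute("SELECT COALESCE('EU3-42', 'EU2-7')").fetchone()[0]
without_admin3 = con.execute("SELECT COALESCE(NULL, 'EU2-7')").fetchone()[0]
print(with_admin3)     # EU3-42
print(without_admin3)  # EU2-7
```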
That looks like it might belong in the select statement: ``` SELECT id, col1, col2, col3, (CASE WHEN table3.col3 IS NULL THEN table2.col3 ELSE table3.col3 END) as col4 FROM table1 LEFT OUTER JOIN table2 ON table1.id = table2.id LEFT OUTER JOIN table3 ON table1.id = table3.id ``` (The `as col4` alias has to go after `END`, not inside the branches.)
where to place CASE WHEN column IS NULL in this query
[ "", "sql", "ms-access", "null", "case", "" ]
I've got 2 date columns in my table `(start_date, end_date)`. I've tried `Datediff(day, start_date, end_date)`, but I was prompted with: > **invalid column name** --- **How can I calculate the date difference between these 2 columns?**
``` select DATEDIFF(day, start_date, end_date) from yourtablename; ```
Should be `Datediff(day, start_date, end_date)`. There is no 's' at the end of the `day` * <http://technet.microsoft.com/en-us/library/ms189794.aspx>
Datediff between 2 columns in same table
[ "", "sql", "sql-server-2008", "t-sql", "datediff", "" ]
I have a large MySQL database and I want to remove every record that is empty, not null, in a certain column. What is the best way to write a SQL query for this? Currently I have tried: ``` DELETE FROM Businesses WHERE WEBADDRESS IS NULL ``` But it did not delete anything. There are 44,000 records and almost 80% of them are null in that column.
``` DELETE FROM myTable WHERE myColumn IS NULL ``` Link to MySQL page for DELETE syntax: <http://dev.mysql.com/doc/refman/5.7/en/delete.html> If the column is not `NULL` but just blank you would need to do something like: ``` DELETE FROM myTable WHERE myColumn = '' ``` Based on the information you also provided in the comments, the values are likely being loaded as empty (`''`) and not `NULL`: <http://dev.mysql.com/doc/refman/5.7/en/problems-with-null.html> The second query should work.
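A quick way to convince yourself that NULL and '' are distinct values, sketched with Python's built-in sqlite3 (the table is a made-up miniature of the one in the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Businesses (name TEXT, WEBADDRESS TEXT)")
con.executemany("INSERT INTO Businesses VALUES (?, ?)",
                [("a", None), ("b", ""), ("c", "http://x")])

# IS NULL only matches the real NULL row ("a"); the empty string survives.
con.execute("DELETE FROM Businesses WHERE WEBADDRESS IS NULL")
assert con.execute("SELECT COUNT(*) FROM Businesses").fetchone()[0] == 2

# = '' is what removes the blank-but-not-NULL row ("b").
con.execute("DELETE FROM Businesses WHERE WEBADDRESS = ''")
remaining = [r[0] for r in con.execute("SELECT name FROM Businesses")]
print(remaining)  # ['c']
```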
``` delete from your_table where certain_column is null ```
mySQL Query Remove Null Records in Column
[ "", "mysql", "sql", "" ]
I have the tables Fruits: ``` |ID|FruitName| |1 |Banana | |2 |Orange | |3 |Apple | ``` And I also have the table Sales: ``` |ID|Month|Sold| |1 |Jan |20 | |1 |Feb |10 | |1 |Mar |30 | |2 |Apr |15 | |2 |Jan |25 | |3 |Jul |25 | |3 |Jun |18 | ``` Now I want to display this: ``` 1|Banana|Mar|30| 2|Orange|Jan|25| 3|Apple |Jul|25| ```
You need to join the `fruit` table with `sales` once to get the month and then again to `sales` table to get the `maxSold` count using which you can filter out the unneeded records on which the `sold` count is not equal to the `maxSold`. ``` SELECT f.id, f.name, s.month, maxSold FROM fruit f LEFT JOIN sales s ON f.id = s.id LEFT OUTER JOIN (SELECT id, max(sold) maxSold FROM sales GROUP BY id) salesMax ON salesMax.id = f.id WHERE s.sold = maxSold ``` See [demo](http://sqlfiddle.com/#!2/912d5/27)
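The same join-against-a-grouped-subquery idea can be tried end to end with Python's built-in sqlite3 (sqlite here only as a stand-in for MySQL; the query shape is the same):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE fruits (id INTEGER, fruitname TEXT);
CREATE TABLE sales (id INTEGER, month TEXT, sold INTEGER);
INSERT INTO fruits VALUES (1,'Banana'),(2,'Orange'),(3,'Apple');
INSERT INTO sales VALUES (1,'Jan',20),(1,'Feb',10),(1,'Mar',30),
                         (2,'Apr',15),(2,'Jan',25),(3,'Jul',25),(3,'Jun',18);
""")
# Join each fruit's sales rows, then keep only the row whose sold count
# equals that fruit's maximum from the grouped subquery.
rows = con.execute("""
    SELECT f.id, f.fruitname, s.month, s.sold
    FROM fruits f
    JOIN sales s ON s.id = f.id
    JOIN (SELECT id, MAX(sold) AS maxSold FROM sales GROUP BY id) m
      ON m.id = s.id AND s.sold = m.maxSold
    ORDER BY f.id
""").fetchall()
print(rows)  # [(1, 'Banana', 'Mar', 30), (2, 'Orange', 'Jan', 25), (3, 'Apple', 'Jul', 25)]
```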
Try this: ``` SELECT f.id ,f.fruitname ,s.month ,s.sold FROM fruits f LEFT JOIN (SELECT a.* FROM Sales a LEFT JOIN Sales b ON a.Id = b.Id AND a.Sold < b.sold WHERE b.Sold IS NULL) AS s ON s.id = f.id ``` The inner left join between the sales tables gets the max values; the outer left join then just adds the fruit names.
How to join two tables showing max() of other table
[ "", "mysql", "sql", "" ]
Why is this throwing an error on the equal sign? ``` select IIF((SUBSTRING('A1234', 1, 1) = 'A'), TRUE, FALSE) as IsAustraliaUser ``` Error: > Msg 102, Level 15, State 1, Line 1 > Incorrect syntax near '='.
[`IIF`](http://technet.microsoft.com/en-us/library/hh213574.aspx) is a SQL Server 2012 feature, you'll need to use [`CASE`](http://technet.microsoft.com/en-us/library/ms181765.aspx) ``` SELECT CASE SUBSTRING('A1234', 1, 1) WHEN 'A' THEN 'TRUE' ELSE 'FALSE' END ```
You should replace IIF with CASE; also, TRUE and FALSE don't exist in SQL Server, you can use VARCHAR or BIT: ``` select CASE WHEN SUBSTRING('A1234', 1, 1) = 'A' THEN 'TRUE' ELSE 'FALSE' END as IsAustraliaUser ```
What's wrong with this T-SQL statement?
[ "", "sql", "sql-server", "t-sql", "" ]
I have a problem involving a SUM, LEFT OUTER JOIN and GROUP BY commands, but can't figure out where my error is. I have two tables, one for customer transactions and one for customer claims. A customer can have multiple transactions and multiple claims, but in both tables the rows are unique. Customers can also have no claims. Example transactions table: ``` Transactions: Customer | Transaction Year | Amount ------------------------------------- A | 2007 | 100 A | 2008 | 80 A | 2008 | 50 A | 2009 | 210 ``` Example claims table: ``` Claims: Customer | Claim Year | Amount ------------------------------- A | 2007 | 30 A | 2007 | 40 A | 2009 | 110 ``` The desired output is to sum the two amounts, and produce a row for each unique combination of Customer and Year. ``` Desired Output: Customer | Year | Transaction Amount | Claim Amount ---------------------------------------------------- A | 2007 | 100 | 70 A | 2008 | 130 | NULL A | 2009 | 210 | 110 ``` I have used a LEFT OUTER JOIN command with a GROUP BY command for the Customer and Year values. But what I am getting is a duplication of the Transaction Amount value, and the multiple relates to the number of matching rows in the Claims table. So using my example data I get the following: ``` Actual Output: Customer | Year | Transaction Amount | Claim Amount ---------------------------------------------------- A | 2007 | 200 | 70 A | 2008 | 130 | NULL A | 2009 | 210 | 110 ``` In the year 2007, there are two claims, which have resulted in the Transactions.Amount value being multiplied by two (when there are three claims, the Transaction.Amount is tripled, etc). My code is as follows: ``` SELECT Transactions.Customer, Transactions.Year, sum(Transactions.Transaction Amount), sum(Claims.Claim Amount) FROM Transactions LEFT JOIN Claims ON Claims.Customer = Transactions.Customer AND Transactions.Year = Claims.Year GROUP BY Transactions.Customer, Transactions.Year ``` Does the answer lie in subqueries? I am not familiar with them, so any pointers would be great. Thanks.
So the first step to see what is happening is to remove the SUMs and just select the transaction amount and claim amount. That way you can see what data is being returned. You'll see that the join on A/2007 will have the transaction amount twice, since it's joining each row to the claims table. One solution is to use subquerys, like you said, to do the SUMs separately prior to joining. ``` SELECT Transactions.Customer, Transactions.Year, SumTransaction, SumClaim FROM ( select Customer, Year, sum(Transaction Amount) SumTransaction from Transactions group by Customer, Year ) Transactions LEFT JOIN ( select Customer, Year, sum(Claim Amount) sumClaim from Claims group by Customer, Year ) Claims ON Claims.Customer = Transactions.Customer AND Transactions.Year = Claims.Year ``` Another possible solution given your restrictions: ``` SELECT Transactions.Customer, Transactions.Year, SUM(Transaction Amount), (SELECT SUM(Claim Amount) from Claims where Claims.Customer = Transactions.Customer and Claims.Year = Transactions.Year) FROM Transactions GROUP BY Customer, Year ``` Third possible solution!! This one does not require any subqueries! See this [SQL Fiddle](http://www.sqlfiddle.com/#!2/d1865/1/0) ``` select t.Customer, t.Year, sum(distinct t.Amount), sum(c.Amount) from Transactions t left join Claims c on t.Customer = c.Customer and t.Year = c.year group by t.Customer, t.Year ```
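The first variant (pre-aggregating each table before joining) can be checked against the question's sample data with Python's built-in sqlite3:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE transactions (customer TEXT, year INTEGER, amount INTEGER);
CREATE TABLE claims       (customer TEXT, year INTEGER, amount INTEGER);
INSERT INTO transactions VALUES ('A',2007,100),('A',2008,80),('A',2008,50),('A',2009,210);
INSERT INTO claims VALUES ('A',2007,30),('A',2007,40),('A',2009,110);
""")
# Summing inside each derived table first means the join is one row to one
# row per (customer, year), so no amount gets multiplied by the row count
# of the other table.
rows = con.execute("""
    SELECT t.customer, t.year, t.total, c.total
    FROM (SELECT customer, year, SUM(amount) AS total
          FROM transactions GROUP BY customer, year) t
    LEFT JOIN (SELECT customer, year, SUM(amount) AS total
               FROM claims GROUP BY customer, year) c
      ON c.customer = t.customer AND c.year = t.year
    ORDER BY t.year
""").fetchall()
print(rows)  # [('A', 2007, 100, 70), ('A', 2008, 130, None), ('A', 2009, 210, 110)]
```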
The query counts the transaction amount twice for 2007, because there are two claims that year; i.e. the returned data being used is: ``` Customer | Transaction Year | Transaction Amount | Claim Amount ---------------------------------------------------------------- A | 2007 | 100 | 30 A | 2007 | 100 | 40 A | 2008 | 80 | A | 2008 | 50 | A | 2009 | 210 | 110 ``` Something like the following, although not pretty, should resolve the problem: ``` SELECT t.Customer ,t.Year ,[Transaction Amount] = SUM(t.[Transaction Amount]) ,[Claim Amount] = c.[Claim Amount] FROM Transactions t LEFT JOIN ( SELECT Customer ,Year ,SUM([Claim Amount]) AS [Claim Amount] FROM Claims GROUP BY Customer, Year ) c ON c.Customer = t.Customer AND c.Year = t.Year GROUP BY t.Customer, t.Year, c.[Claim Amount] ```
Duplication involving SUM, LEFT JOIN and GROUP BY
[ "", "mysql", "sql", "" ]
If I have a column in a table of type `TIMESTAMP` which has CURRENT\_TIMESTAMP as its default: does this column get updated to the current timestamp if I update the value of *any* other column in the same row? It seems that it does not, but I am not sure if this is what should happen. I can not understand what this means ([from MySQL documentation](http://dev.mysql.com/doc/refman/5.0/en/timestamp-initialization.html)): > If the column is auto-updated, it is automatically updated to the > current timestamp when the value of any other column in the row is > changed from its current value. The column remains unchanged if all > other columns are set to their current values. To prevent the column > from updating when other columns change, explicitly set it to its > current value. To update the column even when other columns do not > change, explicitly set it to the value it should have.
Give the command `SHOW CREATE TABLE whatever` Then look at the [table definition](https://dev.mysql.com/doc/refman/8.0/en/timestamp-initialization.html). It probably has a line like this ``` logtime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, ``` in it. `DEFAULT CURRENT_TIMESTAMP` means that any `INSERT` without an explicit time stamp setting uses the current time. Likewise, `ON UPDATE CURRENT_TIMESTAMP` means that any update without an explicit timestamp results in an update to the current timestamp value. You can control this default behavior when creating your table. Or, if the timestamp column wasn't created correctly in the first place, you can change it. ``` ALTER TABLE whatevertable CHANGE whatevercolumn whatevercolumn TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP; ``` This will cause both INSERT and UPDATE operations on the table automatically to update your timestamp column. If you want to update `whatevertable` without changing the timestamp, that is, > To prevent the column from updating when other columns change then you need to issue this kind of update. ``` UPDATE whatevertable SET something = 'newvalue', whatevercolumn = whatevercolumn WHERE someindex = 'indexvalue' ``` This works with `TIMESTAMP` and `DATETIME` columns. (Prior to MySQL version 5.6.5 it only worked with `TIMESTAMP`s) When you use `TIMESTAMP`s, time zones are accounted for: on a correctly configured server machine, those values are always stored in UTC and translated to local time upon retrieval.
I think you have to define the timestamp column like this ``` CREATE TABLE t1 ( ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP ); ``` See [here](http://dev.mysql.com/doc/refman/5.0/en/timestamp-initialization.html)
When is a timestamp (auto) updated?
[ "", "mysql", "sql", "timestamp", "" ]
I'm looking for a SQL function that can extract a query-string value from a URL that is stored in a variable. Actually, I need to create a trigger that will update the column containing the URL and encode a particular query-string value of the URL, so to encode that value I first need to extract it. Let's say ``` @url = 'mypage.php?name=This is test&address=Test Address&phone=+9779854125896' ``` Then I need a SQL function like ``` select getURLparams('name', @url) ``` that should return ``` This is test ``` And how can I delete a particular query-string parameter, so I can append the updated one? I am on SQL Server 2008 R2.
Please try below query which returns querystring value with querystring name as parameter: ``` declare @url nvarchar(max) = 'mypage.php?name=This is test&address=Test Address&phone=+9779854125896' declare @param nvarchar(max)='phone' SELECT (CASE WHEN CHARINDEX('&', v)>0 THEN SUBSTRING(v, 0, CHARINDEX('&', v)) ELSE V END) FROM( SELECT SUBSTRING(@url, CHARINDEX(@param+'=', @url)+LEN(@param)+1, len(@url)) v WHERE CHARINDEX('&'+@param+'=', REPLACE(@url, '?', '&'))>0 )x ``` For changing the value of an existing querystring, please try: ``` declare @url nvarchar(max) = 'mypage.php?name=This is test&address=Test Address&phone=+9779854125896' declare @param nvarchar(max)='name' declare @NewValu nvarchar(max)='Test' SELECT REPLACE(@url, Ch+vv, Ch+@NewValu) FROM( SELECT (CASE WHEN CHARINDEX('&', v)>0 then SUBSTRING(v, 0, CHARINDEX('&', v)) ELSE V END) vv, (CASE WHEN CHARINDEX('?'+@param+'=', @url)>0 THEN '?'+@param+'=' ELSE '&'+@param+'=' END) Ch FROM( SELECT SUBSTRING(@url, CHARINDEX(@param+'=', @url)+LEN(@param)+1, len(@url)) v WHERE CHARINDEX('&'+@param+'=', REPLACE(@url, '?', '&'))>0 )x )xx ```
You need to create a CLR function that uses Regex. Read [SQL Server: Regular Expressions](http://msdn.microsoft.com/en-us/magazine/cc163473.aspx) for more information.
Get and remove query string with SQL
[ "", "sql", "sql-server", "database", "t-sql", "get", "" ]
I have tables(columns) `A(a)` and `B(b, a)`. Field `a` is primary in `A`, `b` is primary in `B`. `A` represents a set of classes, `B` represents a set of elements, each of them is part of one class. The task is to retrieve all the classes (`A.a`) is first column and elements of those classes in second column. If some class does not contain elements, it should be retrieved with null in second column. Right query is: ``` select A.a, B.b from A left join B on A.a = B.a ``` This does exactly what I need. But, having read the documentation of left join, I tried to repeat this result manually with query: ``` select B.a, B.b as "b" from B union select A.a, null as "b" from A where A.a not in (select B.a from B) ``` First line selects all the pairs I need, 3rd-4th lines select specifically those classes which are not presented in table `B` with null in second column. Those two queries return different number of rows, and I do not understand why. Could somebody explain me that? Is it something with my understanding of left joining behaviour or something else? (Unfortunately, I do not have much access to the server I performed it on, I may only see the number of rows returned.)
*(Note: this was first posted as a comment.)* If `B.a` can have NULLs and some rows do have NULLs in that column, the second query's second part will return nothing. That is because `x NOT IN (list)` essentially translates to ``` x <> value1 AND x <> value2 AND ... ``` and when at least one of the values is NULL, that particular `<>` predicate evaluates to UNKNOWN and, in case all other predicates have evaluated to TRUE, the entire condition becomes UNKNOWN too (TRUE AND UNKNOWN yields UNKNOWN, according to [three-valued logic](http://en.wikipedia.org/wiki/Three-valued_logic "Three-valued logic (Wikipedia)")). Depending on the ratio between `B` rows with NULLs in `B.a` and `A` rows not referenced by any `B` row, the entire second query can return [more](http://sqlfiddle.com/#!3/906a5/1) or [fewer rows](http://sqlfiddle.com/#!3/a0cf0/6) than the first query, although it is possible for the two queries to return [same number of rows](http://sqlfiddle.com/#!3/d2188/1) too. One other possibility is presence of references to non-existent `A` rows in `B` (which implies absence of a foreign key to guarantee referential integrity). If there are such rows, the first query will return [fewer rows](http://sqlfiddle.com/#!3/714ff/1) than the second one, because it will exclude invalid references, while the second query will include them.
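The NOT IN behaviour described above is easy to reproduce (a sketch using Python's built-in sqlite3, which follows the same three-valued logic):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE A (a INTEGER);
CREATE TABLE B (b INTEGER, a INTEGER);
INSERT INTO A VALUES (1),(2),(3);
INSERT INTO B VALUES (10,1),(11,NULL);
""")
# One NULL in the subquery makes every "a NOT IN (...)" test UNKNOWN,
# so the first query returns nothing at all.
with_null = con.execute(
    "SELECT a FROM A WHERE a NOT IN (SELECT a FROM B) ORDER BY a").fetchall()
# Filtering the NULLs out restores the intended "classes with no elements".
without_null = con.execute(
    "SELECT a FROM A WHERE a NOT IN (SELECT a FROM B WHERE a IS NOT NULL) "
    "ORDER BY a").fetchall()
print(with_null)     # []
print(without_null)  # [(2,), (3,)]
```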
UNION removes duplicate rows (and it treats identical rows of NULL values as duplicates of each other too), and that may not be what you are expecting. There are two ways around this: 1. Use UNION ALL. 2. Use non-distinct "type" fields: ``` SELECT Type = 'Type1', Amount FROM Table UNION SELECT Type = 'Type2', Amount FROM Table ```
SQL left join and manual null addition
[ "", "sql", "" ]
I have a Postgres table with a string column carrying numeric values. I need to convert these strings to numbers for math, but I need both `NULL` values as well as empty strings to be interpreted as `0`. I can [convert empty strings into null values](https://stackoverflow.com/a/14035890/405017): ``` # select nullif('',''); nullif -------- (1 row) ``` And I can [convert null values into a `0`](https://stackoverflow.com/a/7452522/405017): ``` # select coalesce(NULL,0); coalesce ---------- 0 (1 row) ``` And I can [convert strings into numbers](https://stackoverflow.com/a/10518415/405017): ``` # select cast('3' as float); float8 -------- 3 (1 row) ``` But when I try to combine these techniques, I get errors: ``` # select cast( nullif( coalesce('',0), '') as float); ERROR: invalid input syntax for integer: "" LINE 1: select cast( nullif( coalesce('',0), '') as float); # select coalesce(nullif('3',''),4) as hi; ERROR: COALESCE types text and integer cannot be matched LINE 1: select coalesce(nullif('3',''),4) as hi; ``` What am I doing wrong?
The types of values need to be consistent; coalescing the empty string to a 0 means that you cannot then compare it to `null` in the `nullif`. So either of these works: ``` # create table tests (orig varchar); CREATE TABLE # insert into tests (orig) values ('1'), (''), (NULL), ('0'); INSERT 0 4 # select orig, cast(coalesce(nullif(orig,''),'0') as float) as result from tests; orig | result ------+-------- 1 | 1 | 0 | 0 0 | 0 (4 rows) # select orig, coalesce(cast(nullif(orig,'') as float),0) as result from tests; orig | result ------+-------- 1 | 1 | 0 | 0 0 | 0 (4 rows) ```
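The nesting can be checked against the sample table with Python's built-in sqlite3 (sqlite spells the float type REAL, but the order of operations is exactly the first working query above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tests (orig TEXT)")
con.executemany("INSERT INTO tests VALUES (?)",
                [("1",), ("",), (None,), ("0",)])
# NULLIF turns '' into NULL, COALESCE turns any NULL into '0', and only
# then is the result cast to a number, so types stay consistent throughout.
result = con.execute("""
    SELECT CAST(COALESCE(NULLIF(orig, ''), '0') AS REAL)
    FROM tests ORDER BY rowid
""").fetchall()
print(result)  # [(1.0,), (0.0,), (0.0,), (0.0,)]
```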
You could also use ``` cast( case when coalesce(orig, '') = '' then '0' else orig end as float ) ``` You could also unwrap that a bit since you're being fairly verbose anyway: ``` cast( case when orig is null then '0' when orig = '' then '0' else orig end as float ) ``` or you could put the cast inside the CASE: ``` case when coalesce(orig, '') = '' then 0.0 else cast(orig as float) end ``` A CASE makes it a bit easier to account for any other special conditions, this also seems like a clearer expression of the logic IMO. OTOH, personal taste and all that.
Cast string to number, interpreting null or empty string as 0
[ "", "sql", "postgresql", "syntax", "" ]
I have a contact number field which stores the number as countrycode + ' ' + phonenumber. Now I want to strip leading zeroes from the phone number. I tried using ``` UPDATE [dbo].[User] SET PhoneNumber = REPLACE(LTRIM(REPLACE([PhoneNumber], '0', ' ')), ' ', '0') ``` but this replaces the space in between with '0'. Any suggestions?
Try converting the value to `int` or `numeric` Eg: ``` select '91 004563' as Input, CONVERT(INT, SUBSTRING('91 004563',CHARINDEX(' ','91 004563')+1,100)) as Output ``` This gives the result ``` Input Output --------- ------ 91 004563 4563 ```
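The same split-and-convert idea, translated to sqlite's instr/substr/CAST for a quick runnable check (CHARINDEX/SUBSTRING/CONVERT are the SQL Server spellings of the same functions):

```python
import sqlite3

con = sqlite3.connect(":memory:")
phone = "91 004563"
# Take everything after the space and cast it to an integer; the cast is
# what drops the leading zeroes, while the country code is left untouched.
code, number = con.execute(
    "SELECT substr(?1, 1, instr(?1, ' ') - 1),"
    "       CAST(substr(?1, instr(?1, ' ') + 1) AS INTEGER)",
    (phone,)
).fetchone()
print(code, number)  # 91 4563
```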
Try this: `SUBSTRING(PhoneNumber, PATINDEX('%[^0 ]%', PhoneNumber + ' '), LEN(PhoneNumber))`
strip leading zeroes in sql in string after space
[ "", "sql", "sql-server", "" ]
I'd like to insert the results of this query into the table I created below. ``` IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[Entity]') AND type in (N'U')) DROP TABLE [Entity] Go Create Table Entity ([EntCode] [nvarchar](8) NOT NULL, [Name] [nvarchar](80) NOT NULL, [CompanyRegistration] [nvarchar](80) NULL, [Active] [int] NOT NULL, [AccessLevel] [int] NOT NULL , [SiteURN] [nvarchar](128) NOT NULL, [CompanyURN] [nvarchar](128) NOT NULL, [SiteName] [nvarchar](30) NOT NULL, [SiteDesc] [nvarchar](60) NULL, [SiteURL] [nvarchar](512) NOT NULL) ``` And I'd like to insert the data from this query using this; however, I get the error "Insert Error: Column name or number of supplied values does not match table definition" because I have an extra column [CompanyRegistration] nvarchar NULL, which I declared NULL upon creation of the table. How can I insert these query results with my [CompanyRegistration] column set to NULL in the table? ``` insert into ResourceTaskFact.dbo.Entity Select e.EntCode, e.Name, e.Active , e.AccessLevel, ss.SiteURN, ss.CompanyURN, ss.SiteName , ss.SiteDesc , ss.SiteURL from SMECSite ss, SMECLegalEnt e where ss.localsiteflag = 1 and e.active = 1 ``` How do I solve this? I need your help, guys. Thank you in advance! Beau
``` insert into ResourceTaskFact.dbo.Entity (EntCode, Name, Active, AccessLevel, SiteURN, CompanyURN, SiteName, SiteDesc, SiteURL) Select e.EntCode, e.Name, e.Active, e.AccessLevel, ss.SiteURN, ss.CompanyURN, ss.SiteName, ss.SiteDesc, ss.SiteURL from SMECSite ss, SMECLegalEnt e where ss.localsiteflag = 1 and e.active = 1 ``` The column list of an INSERT names columns of the target table, so it must not carry the `e.`/`ss.` aliases; listing only these nine columns lets [CompanyRegistration] take its default NULL.
You need to explicitly set `CompanyRegistration` to `NULL` in your `SELECT` statement: ``` insert into ResourceTaskFact.dbo.Entity Select e.EntCode, e.Name, NULL AS CompanyRegistration, -- Put column name in to demonstrate why you're selecting NULL here e.Active , e.AccessLevel, ss.SiteURN, ss.CompanyURN, ss.SiteName , ss.SiteDesc , ss.SiteURL from SMECSite ss, SMECLegalEnt e where ss.localsiteflag = 1 and e.active = 1 ```
How do I insert into a table when one of the columns should be left NULL
[ "", "sql", "sql-server", "" ]
I'm having trouble with the performance of my Rails application. This might be the effect of a poor database structure, and the page is frequently viewed by users, considering that there are 5k+ records. ``` Current System: Model: person has_many payment Controller: @total_payments = 0 person.each do |p| @total_payments += p.payments.map(&:value).sum end View: @total_payments ``` * Is it better to have a separate counter column than to search and count through the records? * What database structure do you recommend/advise?
Assuming `persons` is an ActiveRecord relation: ``` persons.joins(:payments).sum(:value) ``` If `persons` is already an array (less preferred): ``` Payment.where(:person_id => persons.map(&:id)).sum(:value) ``` If the total payment is the only attribute needed from the associated payments in this view, then the fastest method would be to make `total_payments` a field in the `person` table and update it whenever an associated `payment` is made, similar to a counter cache. Then you won't need to make a SQL query for payments at all.
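For reference, the single query that `persons.joins(:payments).sum(:value)` boils down to is roughly the following (table and column names are assumed from the model names; demonstrated with Python's built-in sqlite3):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE people (id INTEGER PRIMARY KEY);
CREATE TABLE payments (person_id INTEGER, value INTEGER);
INSERT INTO people VALUES (1),(2);
INSERT INTO payments VALUES (1,100),(1,50),(2,25);
""")
# One aggregate query over the join replaces the per-person Ruby loop,
# so the database is hit once instead of once per person.
total = con.execute(
    "SELECT SUM(value) FROM people "
    "JOIN payments ON payments.person_id = people.id"
).fetchone()[0]
print(total)  # 175
```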
You could try doing something like Payment.select('value') as a base query so that you are only doing one query and mapping over that instead.
Best way to Query Records
[ "", "mysql", "sql", "ruby-on-rails", "ruby-on-rails-3", "" ]
Following is my query to generate the pivoted result: ``` SELECT '# of Corrective Actions open and overdue' as [Corrective Actions breakdown], [405], [2865], [3142], [405]+[2865]+[3142] as [Total] FROM (Select il.Locationid , ca.CorrectiveActionsid, il.locOrder, ca.isDeleted caDeleted, i.isDeleted iDeleted, i.CompanyId companyid,ca.CADateBy, ca.Status from IncidentLocation il inner join incident i on il.IncidentId = i.IncidentId inner join CorrectiveActions ca on i.IncidentId = ca.IncidentId ) AS SourceTable PIVOT ( COUNT(CorrectiveActionsid) FOR LocationId IN ([405],[2865],[3142]) ) PivotTable where locOrder = 0 and caDeleted =0 and iDeleted = 0 and companyId = 210 and CADateBy <= '2013-01-01' and [Status] = 'Open' ``` I was expecting the count to return 0 for blank data, but I am getting no result at all. Please guide me on what I am doing wrong and what I should do to get zero instead of blank for all the counted values.
looks like your query doesn't return any row, you can fix it like this (I've not fixed other parts of query, just want to show you a workaround): ``` select A.[Corrective Actions breakdown], coalesce([405], 0) as [405], coalesce([2865], 0) as [2865], coalesce([3142], 0) as [3142], coalesce([405], 0) + coalesce([2865], 0) + coalesce([3142], 0) as [Total] from (select '# of Corrective Actions open and overdue' as [Corrective Actions breakdown]) as A outer apply ( SELECT [405], [2865], [3142] FROM (Select il.Locationid , ISNULL(ca.CorrectiveActionsid, 0) AS CorrectiveActionsid, il.locOrder, ca.isDeleted caDeleted, i.isDeleted iDeleted, i.CompanyId companyid,ca.CADateBy, ca.Status from IncidentLocation il inner join incident i on il.IncidentId = i.IncidentId inner join CorrectiveActions ca on i.IncidentId = ca.IncidentId ) AS SourceTable PIVOT ( COUNT(CorrectiveActionsid) FOR LocationId IN ([405],[2865],[3142]) ) PivotTable where locOrder = 0 and caDeleted =0 and iDeleted = 0 and companyId = 210 and CADateBy <= '2013-01-01' and [Status] = 'Open' ) as p ```
I would expect the `count()` to return 0. If it doesn't, you can try using `coalesce()`: ``` coalesce([405], 0) as [405] ```
SQL Pivot table for Blank data
[ "", "sql", "sql-server", "sql-server-2008", "pivot", "" ]
How can I find a column's position in a table in MS SQL? E.g., if a table consists of 3 columns named column1, column2 and column3, I need a query that returns the position of column3 as **3**.
try : ``` SELECT ORDINAL_POSITION FROM information_schema.columns WHERE table_name = 'YourTableName' AND COLUMN_NAME = 'YourColumnName' ```
You can get all of this from information_schema: ``` select ordinal_position from information_schema.columns where table_catalog = 'databasename' and table_name = 'tablename' and column_name = 'columnname' ``` (In SQL Server, `information_schema.columns` exposes the database name as `table_catalog` and the schema as `table_schema`; there is no `schema_name` column.)
Position of a column in a table
[ "sql", "sql-server", "t-sql" ]
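The `INFORMATION_SCHEMA.COLUMNS` approach above is SQL Server-specific. As a rough, portable sketch of the same idea, SQLite (used here via Python's stdlib `sqlite3` with a made-up table) exposes column positions through `PRAGMA table_info`, whose `cid` field is 0-based:

```python
import sqlite3

# Hypothetical three-column table, mirroring the question's example.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE demo (column1 TEXT, column2 TEXT, column3 TEXT)")

# PRAGMA table_info returns one row per column: (cid, name, type, ...).
# cid is 0-based, so add 1 to match INFORMATION_SCHEMA's 1-based ORDINAL_POSITION.
positions = {row[1]: row[0] + 1 for row in con.execute("PRAGMA table_info(demo)")}
position_of_column3 = positions["column3"]
```

The same dictionary lookup works for any column name in the table.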
I have a table like this. Table name: emailvalue ``` -------------------------------------- | email | value | ----------------------------------- | ab@gmail.com | A | -------------------------------------- | ab@gmail.com | B | -------------------------------------- | ab@gmail.com | C | -------------------------------------- | cccc@gmail.com | F | -------------------------------------- | dd@gmail.com | G | -------------------------------------- | dd@gmail.com | A | -------------------------------------- | dd@gmail.com | H | -------------------------------------- ``` I want to delete all rows for the same email except the first row. How do I write the SQL for this in MySQL? The SQL should delete these 2 rows ``` -------------------------------------- | ab@gmail.com | B | -------------------------------------- | ab@gmail.com | C | -------------------------------------- and also -------------------------------------- | dd@gmail.com | A | -------------------------------------- | dd@gmail.com | H | -------------------------------------- ```
Use it: ``` DELETE t1 FROM emailvalue t1 LEFT JOIN ( SELECT email, value FROM emailvalue GROUP BY email ) t2 on t2.email = t1.email AND t2.value = t1.value WHERE t2.value is null; ```
``` delete t from your_table t left join ( select email, min(value) as minv from your_table group by email ) x on x.email = t.email and x.minv = t.value where x.minv is null ```
Want to delete multiple row but not first row for same property in mysql?
[ "mysql", "sql" ]
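Without an ordering column, "first row" is not well-defined, which is why both answers above anchor on one value per email. As a hedged sketch (SQLite via Python's stdlib `sqlite3` standing in for MySQL, with the question's sample data), the same keep-one-per-group idea can be written as a correlated subquery that keeps the smallest `value` per email:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emailvalue (email TEXT, value TEXT)")
con.executemany("INSERT INTO emailvalue VALUES (?, ?)", [
    ("ab@gmail.com", "A"), ("ab@gmail.com", "B"), ("ab@gmail.com", "C"),
    ("cccc@gmail.com", "F"),
    ("dd@gmail.com", "G"), ("dd@gmail.com", "A"), ("dd@gmail.com", "H"),
])

# Keep only the alphabetically smallest value per email; delete the rest.
# Note this differs from "first inserted" when rows arrive out of order.
con.execute("""
    DELETE FROM emailvalue
    WHERE value <> (SELECT MIN(value) FROM emailvalue e2
                    WHERE e2.email = emailvalue.email)
""")
remaining = sorted(con.execute("SELECT email, value FROM emailvalue"))
```

One row per email survives; to keep "first inserted" instead, you would need an auto-increment id or timestamp column to anchor `MIN()` on.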
I have 2 tables: User and UserFriends User userID, Guid UserFriends ActiveID, Guid - PassiveID, Guid So you can be in my friend list but I don't need to be in yours. For an online list of all users, first I get all users and iterate over them with a foreach to do what I have to do here. ``` List<DAL.User> onlineList = DAL.UserDAL.GetAllOnlineUser(); foreach (DAL.User onlineUser in onlineList) { //do things here if (DAL.UserDAL.CheckForFriends(myUserID, onlineUser.UserID)) { //Do things here } } ``` Server code: ``` public static List<User> GetAllOnlineUser() { using (RPGDataContext dc = new RPGDataContext()) { return (from a in dc.Users where a.RefreshPing > DateTime.Now.AddMinutes(-30) orderby a.Username select a).ToList(); } } public static bool CheckForFriends(Guid userActive, Guid userPassive) { using (RPGDataContext dc = new RPGDataContext()) { UserFriend uf = (from a in dc.UserFriends where a.UserIDActive == userActive && a.UserIDPassive == userPassive select a).SingleOrDefault(); return (uf != null); } } ``` In this foreach, I must check the friendship and, if it exists, show a friend symbol in the online list. That means with 20 online users besides me, I have to go to the database and back 20 times. How can I avoid that?
Are you working with a relational database (e.g. Oracle, MySql, MsSql, etc.)? If so, let the database do what it does best: write a single query that returns the appropriate results from the database. **UPDATE 1** So for example, a MS SQL query might look something like: ``` SELECT u.userId, u.refreshPing FROM userFriends f JOIN [user] u ON u.userId = f.userFriendId WHERE DATEDIFF(MINUTE, u.refreshPing, GetDate()) < 6000 AND f.userId = 1 ``` For the sake of time, my query assumes that: * the keys are integers (not GUIDs) * user table = userId (int), refreshPing (DateTime) * userFriends = userId (int), userFriendId (int) * in my example userFriends.userFriendId is a foreign key back into the user table * this query has not been tested You would have to modify the query to match your schema... I have no idea how you are managing the friend relationships. **UPDATE 2** You could: 1. write a similar query using entity framework (I think this is the technology you are using) 2. create a view on the database and query the results 3. create a stored procedure and call it from your code I would lean towards option #1. **UPDATE 3** Download LINQPad: www.linqpad.net
Create a query to return all of your friends before entering the `foreach` then just check each online user against that collection
How to avoid a SQL query in a foreach
[ "asp.net", "sql" ]
I'm trying to put some data together for a Highcharts bar chart using ASP.NET. Basically, I have three users for whom I need to track when they have logged into the system. The variants to be used are: 1) Today 2) This Week 3) Last Week 4) Last Month So, I've created individual T-SQL scripts for today and last week, but I'm now a little stuck on how to combine the two statements, which will eventually be four. ``` SELECT Count(*) as CountToday from hitsTable WHERE Convert(date,hitDate) = Convert(date,GETDATE()) Group by UserId SELECT count(*) as CountLatWeek from hitsTable where hitDate between (DATEADD(week, DATEDIFF (week,0,GETDATE()),-1)) AND getDate() Group by UserId ``` Searching on Google leads me to nested select statements, which all seem to form dependencies between the two statements. However, what I need to do is produce a table of results like this: ![enter image description here](https://i.stack.imgur.com/sb4KA.png) ***EDIT*** I've set up a SQL Fiddle so we can test out the examples: <http://www.sqlfiddle.com/#!6/a21ec> The fiddle has T-SQL for today and T-SQL for last week (which may need some tweaking).
``` Select Distinct UserId , ( Select Count(*) as CountToday from hitsTable h2 Where h2.UserId = h1.UserId And Convert(date,hitDate) = Convert(date,GETDATE()) ) As CountToday , ( Select count(*) as CountLatWeek from hitsTable h2 Where h2.UserId = h1.UserId And hitDate Between DATEADD(dd, -(DATEPART(dw, GetDate())-1)-7, GetDate()) And DATEADD(dd, 7-(DATEPART(dw, GetDate()))-7, GetDate()) ) As CountLastWeek FROM hitsTable h1 ```
Try this query ``` select id, sum(case when Convert(date,hitDate) = Convert(date,GETDATE()) then 1 else 0 end) as CountToday, sum(case when hitDate between (DATEADD(week, DATEDIFF (week,0,GETDATE()),-1)) AND getDate() then 1 else 0 end) as CountLatWeek, ...... -- Add more conditions from hitsTable group by UserId ``` Edit ``` select userid, sum(case when Convert(date,hitDate) = Convert(date,GETDATE()) then 1 else 0 end) as cnt from hitstable group by userid ``` ## **[FIDDLE](http://sqlfiddle.com/#!6/a21ec/39)** ``` | USERID | CNT | |--------|-----| | User1 | 3 | | User2 | 0 | ```
Multiple Selects into one select
[ "sql", "select" ]
I have inherited a large database project with thousands of views. Many of the views are invalid. They reference columns that no longer exist. Some of the views are very complex and reference many columns. Is there an easy way to track down all the incorrect column references?
This answer finds the underlying columns that were originally defined in the views by looking at `sys.views`, `sys.columns` and `sys.depends` (to get the underlying column if the column has been aliased). It then compares this with the data held in `INFORMATION_Schema.VIEW_COLUMN_USAGE` which appears to have the current column usage. ``` SELECT SCHEMA_NAME(v.schema_id) AS SchemaName, OBJECT_NAME(v.object_id) AS ViewName, COALESCE(alias.name, C.name) As MissingUnderlyingColumnName FROM sys.views v INNER JOIN sys.columns C ON C.object_id = v.object_id LEFT JOIN sys.sql_dependencies d ON d.object_id = v.object_id LEFT JOIN sys.columns alias ON d.referenced_major_id = alias.object_id AND c.column_id= alias.column_id WHERE NOT EXISTS ( SELECT * FROM Information_Schema.VIEW_COLUMN_USAGE VC WHERE VIEW_NAME = OBJECT_NAME(v.object_id) AND VC.COLUMN_NAME = COALESCE(alias.name, C.name) AND VC.TABLE_SCHEMA = SCHEMA_NAME(v.schema_id) ) ``` For the following view: ``` create table test ( column1 varchar(20), column2 varchar(30)) create view vwtest as select column1, column2 as column3 from test alter table test drop column column1 ``` The query returns: ``` SchemaName ViewName MissingUnderlyingColumnName dbo vwtest column1 ``` This was developed with the help of this [Answer](https://stackoverflow.com/questions/10048056/find-the-real-column-name-of-an-alias-used-in-a-view "Answer")
**UPDATED TO RETRIEVE ERROR DETAILS** So this answer gets you what you want but it isn't the greatest code. A cursor is used (yes I know :)) to execute a `SELECT` from each view in a TRY block to find ones that fail. Note I wrap each statement with a `SELECT * INTO #temp FROM view X WHERE 1 = 0` this is to stop the `EXEC` returning any results and the 1=0 is so that SQL Server can optimize the query so that it is in effect a NO-OP. I then return a list of any views whose sql has failed. I haven't performed lots of testing on this, but it appears to work. I would like to get rid of the execution of each SELECT from View. So here it is: ``` DECLARE curView CURSOR FOR SELECT v.name AS ViewName FROM sys.views v INNER JOIN sys.sql_modules m on v.object_id = m.object_id OPEN curView DECLARE @viewName SYSNAME DECLARE @failedViews TABLE ( FailedViewName SYSNAME, ErrorMessage VARCHAR(MAX) ) FETCH NEXT FROM curView INTO @ViewName WHILE @@FETCH_STATUS = 0 BEGIN BEGIN TRY exec ('SELECT * INTO #temp FROM ' + @viewName + ' WHERE 1=0' ) END TRY BEGIN CATCH INSERT INTO @failedViews VALUES (@viewName, ERROR_MESSAGE()) END CATCH FETCH NEXT FROM curView INTO @ViewName END CLOSE curView DEALLOCATE curView SELECT * FROM @failedViews ``` An example of an ERROR returned is: ``` FailedViewName ErrorMessage --------------- ------------- vwtest Invalid column name 'column1'. ```
Is there a way to find all invalid columns that are referenced in a view using SQL Server 2012?
[ "sql", "sql-server", "view" ]
I have this table: ``` idFilmAward Name -------------------------------- 1 Academy Awards 2 Saturn Award .. ... ``` Lets say I also have this table: ``` ############### ###First Way### ############### idAward Name idFilmAward ------------------------------------------ 1 Best Director 1 2 Best Director 2 .. .... .. 100 Best Actor 1 ``` Lets say in "First Way" the "Best Director" repeated only two times so it mean I don't need to use "Second Way": ``` ################ ###Second Way### ################ idAward idAwardName idFilmAward ------------------------------------------ 1 1 1 2 1 2 .. .... .. 100 2 1 idAwardName Name idAward ------------------------------------------- 1 Best Director 1 2 Best Actor 2 ... ... ... ``` But lets say "Best Director" repeated 100 times, then I need to do use "Second Way", right? My question is how many repeats it's ok to have so to use "First Way" and after how many repeats I should use "Second Way"?
Strictly speaking, [Database Normalization](http://en.wikipedia.org/wiki/Database_normalization) says that you should always use second way. You should avoid repeating the same data in different rows in a "perfect" relational database design. In this way, if you want to change the name of the award from "Best Director" to "Best Senior Director" you should update many rows (in the first way) and just one row (in the second way). But in Data warehouse design, where you use snowflake design, the first is the best way.
It is not a problem of how many times it repeats. The database has to be normalized (third normal form at least). So "if it repeats even once" or better "could repeat" in your case use a separate table. (This is not a notion) One of the isolated cases when you do not want a normalized database is when using performance critical operations on a large amount of data in which case a de-normalized database would achieve faster(better) results.
When do I need to create a new table and connect it to another table using a foreign key?
[ "mysql", "sql" ]
I've been trying to make a SQL LIKE statement for a search button to display data, but it doesn't work. I've tried searching about it and even copy-pasted some working code, but still nothing. I'm really new to this, so please excuse my poor coding. (VB.NET 2008 + MS Access 2003) Here's my code: ``` item = txtsearch.Text Dim con As New OleDb.OleDbConnection Dim ds As New DataSet Dim da As OleDb.OleDbDataAdapter Dim dr As OleDbDataReader con.Open() cmd = New OleDbCommand("SELECT * FROM moviedb WHERE Title Like = '%" & item & "%' ", con) dr = cmd.ExecuteReader() If (dr.Read() = True) Then da = New OleDbDataAdapter("SELECT * FROM moviedb WHERE Title = '%" & item & "%' ", con) da.Fill(ds, "List") DataGridView1.DataSource = ds.Tables("List") con.Close() ``` I've also tried changing the "%" to "\*", but it still doesn't work. It says: Syntax error (missing operator) in query expression 'Title LIKE = '%@item%''.
There are some things wrong in your code: First, where do you set the connection string? ``` Dim con = new OleDbConnection(.....connection string here ....) ``` The connection string is essential to open the database, without it you cannot call the Open method on the connection. This site contains a lot of connection strings examples, included the one for Access 2003 [ConnectionStrings](http://www.connectionstrings.com/microsoft-jet-ole-db-4-0/) Second, you need to use a parameterized query to avoid problems if your text to search for contains a single quotes (and of course to avoid Sql Injections) Third, you don't need to query two times the database just to discover if there are records returned by the query. You could directly bind the return to your grid or test the number of rows returned in the table `List` of the dataset ``` Dim ds As New DataSet Using con = new OleDbConnection(......constring....) Using cmd = new OleDbCommand("SELECT * FROM moviedb WHERE Title Like ?", con) con.Open() cmd.Parameters.AddWithValue("@p1", "%" & item & "%") Using da = New OleDbDataAdapter(cmd) da.Fill(ds, "List") DataGridView1.DataSource = ds.Tables("List") End Using End Using End Using ```
I was looking for the same, but here is the correct way: ``` cmd = New OleDbCommand("SELECT * FROM moviedb WHERE Title Like '%" & item & "%' ", con) ``` You have to remove the `=` sign
SELECT * from table like from text box. (vb.net 2008 + ms access 2003)
[ "sql", "vb.net", "ms-access-2003" ]
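The parameterized approach in the accepted answer generalizes beyond OleDb. As a hedged sketch (Python's stdlib `sqlite3` standing in for OleDb/Access, with made-up movie titles): no `=` after `LIKE`, and the wildcards travel inside the bound parameter, which also protects against SQL injection and embedded quotes:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE moviedb (Title TEXT)")
con.executemany("INSERT INTO moviedb VALUES (?)",
                [("The Matrix",), ("Matrix Reloaded",), ("Inception",)])

item = "matrix"  # what the user typed into the search box

# LIKE takes a pattern directly (no '='); the %...% wildcards are folded
# into the bound value instead of being concatenated into the SQL string.
rows = con.execute("SELECT Title FROM moviedb WHERE Title LIKE ?",
                   ("%" + item + "%",)).fetchall()
titles = [r[0] for r in rows]
```

SQLite's `LIKE` is case-insensitive for ASCII by default, so lowercase "matrix" still matches both titles here; Access's `Like` with `?` parameters behaves analogously.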
I'm having a data column named **test\_duration bigint(12)**. I'm storing time in seconds into the database. Now when I fetch record from the table I want the time converted into HH:MM:SS format. How should I achieve this? Thanks in advance.
You can use **MySQL** function [SEC\_TO\_TIME()](http://www.w3resource.com/mysql/date-and-time-functions/mysql-sec_to_time-function.php). Example: ``` SELECT SEC_TO_TIME(2378); ``` Output is: ``` 00:39:38 ``` So in your case: ``` SELECT SEC_TO_TIME(test_duration) as `Time` FORM YOUR_TABLE; ```
Do you store the time in Unixtime (Unix seconds?). If so, use: ``` FROM_UNIXTIME(unix_timestamp, '%h:%i:%s') ```
How to convert time in seconds to HH:MM:SS format in MySQL?
[ "mysql", "sql", "time" ]
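`SEC_TO_TIME()` is MySQL-specific. As a hedged sketch of the same conversion elsewhere (SQLite via Python's stdlib `sqlite3`), `time(n, 'unixepoch')` formats a second count as HH:MM:SS for durations under 24 hours, and a few lines of arithmetic cover the general case:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# SQLite has no SEC_TO_TIME, but time(n, 'unixepoch') renders n seconds
# after midnight 1970-01-01 as HH:MM:SS -- same result for durations < 24h.
(formatted,) = con.execute("SELECT time(2378, 'unixepoch')").fetchone()

# Pure-Python equivalent that also handles durations beyond 24 hours.
def sec_to_time(seconds: int) -> str:
    h, rem = divmod(seconds, 3600)
    m, s = divmod(rem, 60)
    return f"{h:02d}:{m:02d}:{s:02d}"
```

Both produce "00:39:38" for the 2378-second example used in the accepted answer.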
So I have this table ``` Col1 Col2 Col3 A 34 X B 43 L A 36 L ``` Now if I query ``` select * from Table1 where col1 in ('A','B','C') ``` I am expecting something like ``` Col1 Col2 Col3 A 34 X B 43 L A 36 L C - - ``` Is it possible ? P.S: the `-` in row C are just to show that the column is empty.
You could create a nested table schema object type: ``` create type T_List1 as table of varchar2(100); ``` And then construct your query as follows: ``` select s.column_value as col1 , nvl(to_char(t.col2), '-') as col2 , nvl(col3, '-') as col3 from Table1 t right join table(T_List1('A', 'B', 'C')) s on (t.col1 = s.column_value) ``` Example: ``` -- sample of data from your question with Table1(Col1, Col2, Col3) as( select 'A', 34, 'X' from dual union all select 'B', 43, 'L' from dual union all select 'A', 36, 'L' from dual ) -- actual query select s.column_value as col1 , nvl(to_char(t.col2), '-') as col2 , nvl(col3, '-') as col3 from Table1 t right join table(T_List1('A', 'B', 'C')) s --< here list your values on (t.col1 = s.column_value) -- as you would using `IN` clause ``` Result: ``` COL1 COL2 COL3 ------------------------ A 36 L A 34 X B 43 L C - - ``` [**SQLFiddle Demo**](http://sqlfiddle.com/#!4/bc89e/1)
To do this you can use a driver table that has all the values you want returned in it ie: ``` col1 A B C D E ``` Then `LEFT JOIN` to your table. ``` SELECT * FROM driver d LEFT JOIN Table1 t ON d.col1 = t.col1 WHERE d.col1 in ('A','B','C') ```
Display record even if it doesn't exist
[ "sql", "oracle11g" ]
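Both answers above drive the join from a constructed list of wanted keys. A hedged sketch of the same idea outside Oracle (SQLite via Python's stdlib `sqlite3`, with the question's sample rows): a `VALUES` list plays the role of `table(T_List1(...))`, and a `LEFT JOIN` fills the missing key with placeholders. Note SQLite names the columns of a bare `VALUES` list `column1`, `column2`, ...:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (col1 TEXT, col2 INTEGER, col3 TEXT)")
con.executemany("INSERT INTO t1 VALUES (?, ?, ?)",
                [("A", 34, "X"), ("B", 43, "L"), ("A", 36, "L")])

# The VALUES list is the driver: every requested key appears once,
# and LEFT JOIN leaves NULLs where t1 has no match, shown as '-'.
rows = con.execute("""
    SELECT d.column1,
           COALESCE(CAST(t1.col2 AS TEXT), '-') AS col2,
           COALESCE(t1.col3, '-')               AS col3
    FROM (VALUES ('A'), ('B'), ('C')) AS d
    LEFT JOIN t1 ON t1.col1 = d.column1
    ORDER BY d.column1, col2
""").fetchall()
```

Row 'C' survives in the output even though it does not exist in the table, which is the behavior the question asked for.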
I am getting `ORA-00907: missing right parenthesis` Error while creating a table on oracle here is what I did: ``` create table customers( cust_num number(4), company varchar2(20), cust_rep number(3), credit_limit number(15), custraint cust_num_pk primary key(cust_num)); ``` whats wrong ??
There's nothing called `Custraint`. It's `Constraint`. It should be: ``` create table customers( cust_num number(4), company varchar2(20), cust_rep number(3), credit_limit number(15), constraint cust_num_pk primary key(cust_num) ); ```
Check your syntax, see the below statement works fine, ``` create table customers( cust_num number(4), company varchar2(20), cust_rep number(3), credit_limit number(15), constraint cust_num_pk primary key(cust_num)); ```
ORA-00907: missing right parenthesis Error while creating a table on oracle
[ "sql", "oracle" ]
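The corrected `CONSTRAINT <name> PRIMARY KEY (...)` syntax from the answers above is not Oracle-only. As a hedged demo (SQLite via Python's stdlib `sqlite3`, types adapted since SQLite has no `number`/`varchar2`), the named constraint is both accepted and enforced:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Same shape as the corrected Oracle DDL: CONSTRAINT <name> PRIMARY KEY (...).
con.execute("""
    CREATE TABLE customers (
        cust_num     INTEGER,
        company      TEXT,
        cust_rep     INTEGER,
        credit_limit INTEGER,
        CONSTRAINT cust_num_pk PRIMARY KEY (cust_num)
    )
""")
con.execute("INSERT INTO customers VALUES (1, 'Acme', 10, 5000)")

# A duplicate key must be rejected by the primary-key constraint.
try:
    con.execute("INSERT INTO customers VALUES (1, 'Dup', 11, 100)")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True

count = con.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
```

The misspelled keyword (`custraint`) was the only error; the clause placement in the original DDL was already correct.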
I have an update trigger that updates some important status fields in a table `tblCurrent`. When I first upload the daily batch of records into `tblCurrent` (circa 10K records), I perform some UPDATEs via three separate stored procedures when these are first uploaded and only then. How can I prevent the update trigger from running during these three initial UPDATEs?
You could temporary disable triggers for the table and then enable them back. [MSDN article](http://msdn.microsoft.com/en-us/library/ms189748%28v=sql.105%29.aspx). DISABLE TRIGGER { [ schema\_name . ] trigger\_name [ ,...n ] | ALL } ON { object\_name | DATABASE | ALL SERVER } [ ; ] ENABLE TRIGGER { [ schema\_name . ] trigger\_name [ ,...n ] | ALL } ON { object\_name | DATABASE | ALL SERVER } [ ; ] For example to disable all triggers for given table run following statement: ``` DISABLE TRIGGER ALL ON tblCurrent; ```
One way to do this would be to have some data in `tblCurrent` that enables you to detect the situation you describe as "first uploaded". For example, a BIT column "FirstUploaded", or a column that is NULL when first uploaded, or even a BIT column "DontFireTrigger". Then write your trigger to detect this condition, and conditionally update the status fields. Admittedly this looks like a nasty hack, but perhaps no worse than other solutions.
How can you `UPDATE` an SQL Server table without triggering the UPDATE trigger
[ "sql", "sql-server", "triggers" ]
I am working on modifying the existing SQL Server stored procedure. I added two new columns to the table and modified the stored procedure as well to select these two columns as well. Although the columns are available in the table, I keep getting this error: > Invalid column name 'INCL\_GSTAMOUNT' ![enter image description here](https://i.stack.imgur.com/5uFVZ.png) Can anyone please tell me what's wrong here?
Whenever this happens to me, I press `Ctrl`+`Shift`+`R` which refreshes `intellisense`, close the query window (save if necessary), then start a new session which usually works quite well.
Could also happen if putting string in double quotes instead of single.
SQL Server: Invalid Column Name
[ "sql", "sql-server", "sql-server-2008", "stored-procedures" ]
Sometimes it works anyway if I forget the `;`. But sometimes it doesn't. And in JDBC and Android SQLite, it seems that I don't need `;` at all. I am confused. When should I use a semicolon?
The semicolon indicates the end of a statement, so if there are multiple statements you should use semicolons; otherwise a single statement works fine without one. I generally use a semicolon as a matter of practice; it can be useful even when you are running queries in a SQL client. E.g. in SQL Developer, using semicolons is very helpful if you have multiple statements on a worksheet, as you can simply go to a particular statement and press F9 to execute it; without semicolons this is not possible.
Usually the semicolon is not part of the actual syntax of a statement (as most database internal APIs execute a single statement at a time). Instead the semicolon is an 'end-of-statement' marker or statement separator that is - usually - defined in CLI or scripting tools for the database. This allows that tool to know when a statement ends, so it can send that single statement to the database for execution. On the other hand, the JDBC API is intended to execute a single(!) statement at a time, therefore you don't need such a separator (the statement is the whole string). This means that a semicolon is not needed, and as it is not part of the actual statement syntax for a lot of database it is also a syntax error to include it. Some JDBC drivers will strip the last `;` from a statement to 'fix' that, some drivers don't. Some drivers allow - contrary to the JDBC specification - multiple statements to be executed as a single string, this usually has to be enabled with a connection property, for example for MySQL it is the option `allowMultiQueries` (see the [MySQL properties](http://dev.mysql.com/doc/refman/5.6/en/connector-j-reference-configuration-properties.html) for details).
Is the semicolon necessary in SQL?
[ "sql", "jdbc", "android-sqlite" ]
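Python's stdlib `sqlite3` behaves much like the JDBC description in the accepted answer, so it makes a convenient hedged demo of the separator idea: a single statement runs identically with or without a trailing semicolon, while the multi-statement API relies on semicolons to split statements:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (n INTEGER)")

# A single statement works the same with or without a trailing semicolon.
con.execute("INSERT INTO t VALUES (1)")
con.execute("INSERT INTO t VALUES (2);")

# executescript() is the multi-statement API: here the semicolon is the
# required statement separator, just as in a CLI tool.
con.executescript("INSERT INTO t VALUES (3); INSERT INTO t VALUES (4);")

total = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
```

This mirrors the distinction drawn above: the semicolon belongs to the tool's statement-splitting layer, not to the SQL statement syntax itself.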
I have used the following query: ``` select tblclass.classname,tblattendance.id from tblclass,tblattendance where tblclass.classcode=tblattendance.classcode and tblattendance.attdate='2013-07-01' ``` Output of this query is as follows: ![enter image description here](https://i.stack.imgur.com/9SjQn.png) Now, rather than the above result, I want a count per class, like IB-2, IC-5. Please tell me what modifications I need to make in my query to get the desired result.
Use the [Group By](http://www.w3schools.com/sql/sql_groupby.asp) SQL clause and add the aggregate function [Count](http://www.w3schools.com/sql/sql_func_count.asp) ``` select tblclass.classname, Count(tblattendance.id) as counter from tblclass,tblattendance where tblclass.classcode=tblattendance.classcode and tblattendance.attdate='2013-07-01' group by tblclass.classname ```
Try this ``` select count(tblattendance.id),tblclass.classname from tblclass,tblattendance where tblclass.classcode=tblattendance.classcode and tblattendance.attdate='2013-07-01' group by tblclass.classname ```
How to get count of distinct rows in MySQL?
[ "mysql", "sql", "count" ]
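A hedged, runnable version of the `GROUP BY` + `COUNT` pattern from the answers above, using SQLite via Python's stdlib `sqlite3` with invented attendance rows (two for IB, five for IC on the queried date):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tblclass (classcode INTEGER, classname TEXT);
    CREATE TABLE tblattendance (id INTEGER, classcode INTEGER, attdate TEXT);
    INSERT INTO tblclass VALUES (1, 'IB'), (2, 'IC');
    INSERT INTO tblattendance VALUES
        (1, 1, '2013-07-01'), (2, 1, '2013-07-01'),
        (3, 2, '2013-07-01'), (4, 2, '2013-07-01'), (5, 2, '2013-07-01'),
        (6, 2, '2013-07-01'), (7, 2, '2013-07-01'),
        (8, 1, '2013-07-02');
""")

# GROUP BY collapses the per-row join result into one COUNT per class name;
# the row for 2013-07-02 is excluded by the WHERE filter.
counts = dict(con.execute("""
    SELECT c.classname, COUNT(a.id)
    FROM tblclass c
    JOIN tblattendance a ON c.classcode = a.classcode
    WHERE a.attdate = '2013-07-01'
    GROUP BY c.classname
"""))
```

The result is exactly the IB-2, IC-5 shape the question asked for.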
I have a table from which I want to select data. One column 'myCol' has datatype tinyint. It has values from 1 to 8. In my select I have a variable @myVar with datatype varchar, that has values like '1,2' or '3,4'. Now I am trying to do something like this: ``` select * from myTable where myCol in (@myVar) ``` Unfortunately I get the following error: > Conversion failed when converting the varchar value '2,3' to data type tinyint. How can I change the select so that it works as it should? It's very important to keep the select performance as high as possible!
Since you only have values from 1 to 8 you can use a string search method. Something like ``` select * from myTable where CHARINDEX(cast(mycol as varchar), @myVar) > 0 ``` ## [SQLFiddle demo](http://sqlfiddle.com/#!3/a28f8/3)
If you create a function similar to the accepted answer in [Splitting of comma separated values](https://stackoverflow.com/questions/5487961/splitting-of-comma-separated-values), you will then be able to do: ``` select * from myTable t inner join dbo.fnSplitStringAsTable(@myVar, ',') s on t.myCol = s.Value ``` Note that I'm assuming SQL Server, based on your syntax and the error message.
SQL select with 'in' casting from string to tinyint
[ "sql", "sql-server", "select" ]
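When the comma-separated list originates in application code, a third option beyond the two answers above is to split it there and bind one placeholder per value. This is a hedged sketch (SQLite via Python's stdlib `sqlite3`, invented table contents); it sidesteps the varchar-to-tinyint conversion error and string-search pitfalls such as '1' matching '12':

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE myTable (myCol INTEGER)")
con.executemany("INSERT INTO myTable VALUES (?)", [(n,) for n in range(1, 9)])

my_var = "2,3"  # the varchar parameter from the question

# Split once in application code, then bind each value separately:
# IN (?, ?) compares integers to integers, so no conversion error.
values = [int(v) for v in my_var.split(",")]
placeholders = ",".join("?" * len(values))
rows = con.execute(
    f"SELECT myCol FROM myTable WHERE myCol IN ({placeholders})", values
).fetchall()
matched = sorted(r[0] for r in rows)
```

Only the placeholder count is interpolated into the SQL text; every actual value still travels as a bound parameter.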
I have a table, VehicleModelYear, containing columns id, year, make, and model. The following two queries work as expected: ``` SELECT DISTINCT make, model FROM VehicleModelYear SELECT COUNT(DISTINCT make) FROM VehicleModelYear ``` However, this query doesn't work ``` SELECT COUNT(DISTINCT make, model) FROM VehicleModelYear ``` It's clear the answer is the number of results returned by the first query, but just wondering what is wrong with this syntax or why it doesn't work.
`COUNT()` in `SQL Server` accepts the following syntax ``` COUNT(*) COUNT(colName) COUNT(DISTINCT colName) ``` You can have a subquery which returns unique set of `make` and `model` that you can count with. ``` SELECT COUNT(*) FROM ( SELECT DISTINCT make, model FROM VehicleModelYear ) a ``` The "a" at the end is not a typo. It's an alias without which SQL will give an error `ERROR 1248 (42000): Every derived table must have its own alias`.
Try combining them into a single field: ``` SELECT COUNT(DISTINCT make + ' ' + model) FROM VehicleModelYear ```
SELECT COUNT(DISTINCT... ) error on multiple columns?
[ "sql", "sql-server", "t-sql" ]
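SQLite shares SQL Server's single-argument restriction on `COUNT(DISTINCT ...)`, so the accepted answer's subquery workaround can be demonstrated there. A hedged sketch via Python's stdlib `sqlite3` with invented vehicle rows (three distinct make/model pairs across four rows):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE VehicleModelYear (year INTEGER, make TEXT, model TEXT)")
con.executemany("INSERT INTO VehicleModelYear VALUES (?, ?, ?)", [
    (2012, "Ford", "Focus"), (2013, "Ford", "Focus"),
    (2013, "Ford", "Fiesta"), (2013, "Honda", "Civic"),
])

# COUNT(DISTINCT ...) takes a single expression, so count the rows of a
# DISTINCT subquery instead; the derived table needs an alias ("a").
(pair_count,) = con.execute("""
    SELECT COUNT(*)
    FROM (SELECT DISTINCT make, model FROM VehicleModelYear) a
""").fetchone()
```

The duplicate (Ford, Focus) pair is collapsed by `DISTINCT` before the outer count runs.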
UPDATE: This is resolved, I was making a syntax error. --- Can I join and filter across two columns in a left join? For example: ``` tbl_people id food side value a pizza fries 10 b pizza shake 2 c burger fries 3 tbl_sides food side pizza fries burger fries ``` Then using SQL: ``` SELECT id, food, side, value FROM tbl_people AS people LEFT JOIN tbl_sides AS sides ON sides.food = people.food AND sides.side = people.side ``` Can I add a flag so that I can determine whether or not the food pair is in joined or if it's NULL? I don't want to inner join, because I need to count total food/sides per person, and also matching food/side pairs per person. I tried: ``` SELECT id, food, side, value, CASE WHEN side.side IS NOT NULL AND side.food IS NOT NULL THEN 1 ELSE 0 END AS match_flag FROM tbl_people AS people LEFT JOIN tbl_sides AS sides ON sides.food = people.food AND sides.side = people.side ``` But it's not working. Basically I just need to flag when the join isn't applied but I'm having trouble.
I think what you want is this: ``` SELECT id, food, side, value, CASE WHEN side.side = people.side THEN 1 ELSE 0 END AS match_flag FROM tbl_people AS people LEFT JOIN tbl_sides AS sides ON people.food = sides.food ```
With MySQL an expression will return a boolean true/false 1/0, works as a shorthand `CASE` statement when looking for boolean output. This will work to flag the non-matching with 1: ``` SELECT people.id, people.food, people.side, people.value ,sides.food IS NULL AS match_flag FROM tbl_people AS people LEFT JOIN tbl_sides AS sides ON sides.food = people.food AND sides.side = people.side ``` Demo: [SQL Fiddle](http://sqlfiddle.com/#!2/05fe9/3/0) Or this to flag the non-matching as 0: ``` SELECT people.id, people.food, people.side, people.value ,COALESCE(sides.food = people.food,0) AS match_flag FROM tbl_people AS people LEFT JOIN tbl_sides AS sides ON sides.food = people.food AND sides.side = people.side ``` Demo: [SQL Fiddle](http://sqlfiddle.com/#!2/05fe9/6/0)
Sql left join across two columns with filter
[ "mysql", "sql" ]
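The question's own `CASE` idea was sound; the syntax error was likely the alias mismatch (`side.side` versus the declared alias `sides`). As a hedged, runnable check (SQLite via Python's stdlib `sqlite3`, using the question's sample rows), a two-column `LEFT JOIN` with a consistent alias and a `CASE` flag works as intended:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tbl_people (id TEXT, food TEXT, side TEXT, value INTEGER);
    CREATE TABLE tbl_sides  (food TEXT, side TEXT);
    INSERT INTO tbl_people VALUES
        ('a', 'pizza', 'fries', 10),
        ('b', 'pizza', 'shake', 2),
        ('c', 'burger', 'fries', 3);
    INSERT INTO tbl_sides VALUES ('pizza', 'fries'), ('burger', 'fries');
""")

# LEFT JOIN on both columns; the CASE turns "join matched" into a 1/0 flag,
# and unmatched people ('b') still appear with flag 0.
flags = dict(con.execute("""
    SELECT p.id,
           CASE WHEN s.food IS NOT NULL THEN 1 ELSE 0 END AS match_flag
    FROM tbl_people p
    LEFT JOIN tbl_sides s
           ON s.food = p.food AND s.side = p.side
    ORDER BY p.id
"""))
```

The MySQL-specific shorthand in the second answer (`sides.food IS NULL` as a boolean 0/1) compresses the same `CASE` expression.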
I'm looking for a way to get the user-friendly MSSQL product name. I've tried: ``` select @@version ``` but it returns too much information (I don't want to parse it now) `Microsoft SQL Server 2008 R2 (RTM) - 10.50.1617.0 (X64) Apr 22 2011 19:23:43 Copyright (c) Microsoft Corporation Developer Edition (64-bit) on Windows NT 6.1 <X64> (Build 7600: )` Another try was ``` SELECT SERVERPROPERTY('productversion'), SERVERPROPERTY ('productlevel'), SERVERPROPERTY ('edition') ``` which returns `10.50.1617.0 RTM Developer Edition (64-bit)` I tried `SERVERPROPERTY` for every property from this [list](http://technet.microsoft.com/en-us/library/ms174396.aspx), but couldn't find the one I need. Is there a way to get the string **Microsoft SQL Server 2008 R2** only? Thanks
How about ``` SELECT LEFT(@@version, CHARINDEX(' - ', @@version)) ProductName; ``` *Note: you can obviously adjust it to your needs (like trim RTM if you have to etc.)* Sample output SQL Server 2008: ``` | PRODUCTNAME | |-------------------------------------| | Microsoft SQL Server 2008 R2 (RTM) | ``` Here is **[SQLFiddle](http://sqlfiddle.com/#!3/d41d8/21242)** demo Sample output SQL Server 2012: ``` | PRODUCTNAME | |----------------------------| | Microsoft SQL Server 2012 | ``` Here is **[SQLFiddle](http://sqlfiddle.com/#!6/d41d8/7467)** demo
Try this (if you need to remove `(RTM)`): ``` select case when charindex('-', @@version,0) < charindex('(', @@version,0) then left(@@version, charindex('-', @@version,0)-1) else left(@@version, charindex('(', @@version,0)-1) end as myserver --Results Microsoft SQL Server 2008 R2 Microsoft SQL Server 2012 ``` Else ``` select left(@@version, charindex('-', @@version,0)-1) as myserver --Results Microsoft SQL Server 2008 R2 (RTM) Microsoft SQL Server 2012 ``` **[Fiddle demo](http://sqlfiddle.com/#!3/d41d8)**
Get user friendly MSSQL server's product name
[ "sql", "sql-server" ]
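The `LEFT`/`CHARINDEX` trick above translates directly to other dialects. A hedged sketch (SQLite via Python's stdlib `sqlite3`): `substr`/`instr` play the same roles, applied here to an illustrative banner string rather than a real server's `@@version`:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Illustrative @@version-style string, not an actual server banner.
version = "Microsoft SQL Server 2008 R2 (RTM) - 10.50.1617.0 (X64)"

# instr finds the ' - ' separator; substr keeps everything before it,
# matching the LEFT(@@version, CHARINDEX(' - ', @@version)) idea.
(product,) = con.execute(
    "SELECT substr(?1, 1, instr(?1, ' - ') - 1)", (version,)
).fetchone()
```

As in the first answer, the result still carries the "(RTM)" suffix; stripping that needs the extra `CASE` shown in the second answer.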
My database is like this - There are multiple Courses in a College. A Course is further divided into multiple Streams. A Stream has multiple Subjects, which in turn have multiple Topics. Finally there are multiple Notes for a Course, Stream, Subject and Topic. Course -> Streams -> Subjects -> Topics -> Notes I want to write a select query to get a bird's-eye view of the number of notes in a given Course. I want this - MBA (Total Notes Count - 5) - Course HR (Total Notes Count - 5) - Streams Sub A (Total Notes Count - 5) - Subject Topic 1 (Total Notes Count - 2) - Topic Topic 2 (Total Notes Count - 3) In the above example, Sub A and Sub B have a total of 5 notes, which is shown against HR. Please help me write a query for this. The query needs to be very fast. I am attaching my script. I can think of writing multiple subqueries but I don't think that would be an optimized method. ``` select MC.CourseName,MS.StreamName,MSub.SubjectName,MT.TopicName,MN.NoteName from Master_Course MC JOIN Master_Stream MS ON MC.CourseId = MS.CourseId JOIN Master_Subject MSub ON MS.StreamId = MSub.StreamId JOIN Master_Topics MT ON MSub.SubjectId = MT.SubjectId JOIN Master_Notes MN ON MT.TopicId = MN.TopicId ```
``` select MC.CourseName, MCN.NoteCount MS.StreamName, MS.NoteCount MSub.SubjectName, MT.TopicName, MN.NoteName from Master_Course MC JOIN (select MasterCourseID, count(*) as NoteCount from Master_Course MC JOIN Master_Stream MS ON MC.CourseId = MS.CourseId JOIN Master_Subject MSub ON MS.StreamId = MSub.StreamId JOIN Master_Topics MT ON MSub.SubjectId = MT.SubjectId JOIN Master_Notes MN ON MT.TopicId = MN.TopicId JOIN Master_Stream MS ON MC.CourseId = MS.CourseId) MCN on MC.MasterCourseID = MCN.MasterCourseID JOIN Master_Subject MSub ON MS.StreamId = MSub.StreamId JOIN Master_Topics MT ON MSub.SubjectId = MT.SubjectId JOIN Master_Notes MN ON MT.TopicId = MN.TopicId ``` See subquery for getting top-level note count; you'll need to repeat this for each level of the hierarchy. It should be pretty fast, as you're joining on primary keys and counting; if it runs too slowly, you might capture the note count logic in a (materialized) view.
Not 100% sure if this is what you want. However, you can get some counting adding the following after your JOIN statements: ``` GROUP BY CourseID, StreamID, SubjectID, TopicID WITH ROLLUP ``` or ``` GROUP BY CourseID, StreamID, SubjectID, TopicID WITH CUBE ``` This is of course the guessed names of you unique identifiers. The differences between ROLLUP and CUBE is: ROLLUP will count the following combinations: ``` CourseID, StreamID, SubjectID, TopicID CourseID, StreamID, SubjectID CourseID, StreamID CourseID ``` CUBE will do all combination of one to all four of the given elements like ``` CourseID CourseID, SubjectID StreamID, SubjectID, TopicID .... ``` So if you need all combinations anyway, use CUBE, else use ROLLUP. ROLLUP will give you a count of all unique courses, all unique combinations with the same CourseID and StreamID (but with different SubjectID and TopicID) and so on for CourseID, StreamID and SubjectID and all four. I hope that this is an answer to your question.
Please help with this SQL query
[ "sql", "database" ]
I have a column containing values like this, example: Farari - Made in 2013 Mercedes - Made in 2012 Jaguar - Made in 1978 I want to return the car with the highest or recent make year: Something like this will give me the year but obviously will truncate the rest of the string: ``` SELECT MAX(RIGHT(CarProfile, 4)) FROM mySchema.Car; ``` How do I get the highest year but maintain the full string? In this case: *Farari - Made in 2013*
``` SELECT * FROM Car WHERE RIGHT(CarProfile, 4) = (SELECT MAX(RIGHT(CarProfile, 4)) FROM Car) ``` * [SQLFiddle Demo](http://sqlfiddle.com/#!6/bd2ec/1) You should normalize your table properly. My suggested schema change would be to add a separate column for the year and index it, which would give much better query performance. * [SQLFiddle Demo](http://sqlfiddle.com/#!6/a2efe/1)
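Until you can add that year column, the max-by-suffix trick ports to other engines too; here's a quick sketch in Python's built-in SQLite, where `substr(x, -4)` plays the role of T-SQL's `RIGHT(x, 4)` and the `CAST` makes the comparison numeric rather than lexicographic:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Car (CarProfile TEXT)")
conn.executemany("INSERT INTO Car VALUES (?)", [
    ("Farari - Made in 2013",),
    ("Mercedes - Made in 2012",),
    ("Jaguar - Made in 1978",),
])

# substr(x, -4) is SQLite's stand-in for T-SQL's RIGHT(x, 4).
row = conn.execute("""
    SELECT CarProfile
    FROM Car
    WHERE CAST(substr(CarProfile, -4) AS INTEGER) =
          (SELECT MAX(CAST(substr(CarProfile, -4) AS INTEGER)) FROM Car)
""").fetchone()
print(row[0])  # Farari - Made in 2013
```

Note that this returns the full, untruncated string while still selecting on the year.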
``` SELECT top 1 * FROM Car order by RIGHT(CarProfile, 4)*1 desc ```
Return string based on values within the string without truncating
[ "sql", "sql-server", "t-sql" ]
``` $consulta3 = "SELECT * FROM Dept INNER JOIN Userinfo INNER JOIN Checkinout on Dept.DeptName = '$departamento' where Dept.Deptid = Userinfo.Deptid AND Userinfo.Name = Checkinout.name"; ``` What I'm trying to do with this query: I have a table called Dept that holds all the department names and ids, a table Userinfo that has a user name, a department id and an id, and a third table called Checkinout that has a user id with a time. Using the department's name, I want to bring back all the data from the Checkinout table for the people who belong to that department, but when I try that query it gives me an error: ``` syntax error in from clause. sql state 37000 in sqlexecdirect ``` and I can't solve it. I'm using an Access database. Thank you very much for your help.
Access absolutely requires parentheses in the `FROM` clause of any query which includes more than one join. If you have Access available, create and test a new query in the query designer. One of the reasons the designer is useful is that it knows the rules for parentheses which keep the db engine happy. Start with a query similar to this. Don't worry about filtering based on `Dept.DeptName` at this point. Just make sure the joins are set up correctly. ``` SELECT * FROM (Dept INNER JOIN Userinfo ON Dept.Deptid = Userinfo.Deptid) INNER JOIN Checkinout ON Userinfo.Name = Checkinout.name ``` After you have the joins set up correctly, add in your filter constraint (`WHERE Dept.DeptName ...`).
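Access is the odd one out in *demanding* the parentheses, but the join logic itself is engine-agnostic. As a quick sanity check of the shape (and of passing `$departamento` as a bound parameter instead of splicing it into the string), here's the same query against Python's built-in SQLite; table names are copied from the question, while `clock_time` and the sample rows are made up for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Dept       (Deptid INTEGER, DeptName TEXT);
CREATE TABLE Userinfo   (Deptid INTEGER, Name TEXT);
CREATE TABLE Checkinout (name TEXT, clock_time TEXT);
INSERT INTO Dept       VALUES (1, 'Sales');
INSERT INTO Userinfo   VALUES (1, 'Ana');
INSERT INTO Checkinout VALUES ('Ana', '09:00');
""")

# The parenthesized join Access insists on; the ? placeholder replaces
# the interpolated '$departamento' variable from the question.
rows = conn.execute("""
    SELECT Checkinout.*
    FROM (Dept INNER JOIN Userinfo ON Dept.Deptid = Userinfo.Deptid)
         INNER JOIN Checkinout ON Userinfo.Name = Checkinout.name
    WHERE Dept.DeptName = ?
""", ("Sales",)).fetchall()
print(rows)  # [('Ana', '09:00')]
```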
I think it should be : ``` SELECT * FROM (Dept INNER JOIN Userinfo ON (Dept.Deptid = Userinfo.Deptid)) INNER JOIN Checkinout ON (Userinfo.Name = Checkinout.name) WHERE Dept.DeptName = '$departamento' ```
Access query error ("syntax error in from clause")
[ "sql", "ms-access", "ms-access-2007" ]
So basically, which is better in terms of performance and space: ``` idPerson Name Gender Age 1 John Male 19 2 Anna Female 22 ``` or this way: ``` idPerson Name Age idGender 1 John 19 1 2 Anna 22 2 idGender Gender 1 Male 2 Female ``` I think the second one is better because repeating Male/Female each time will consume more space, but I suspect it may be slower because of the join. Am I wrong? Is there a better way?
Assuming gender is a binary field, I would go with a third option: name the column `MALE` and the valid values will be either `Y` or `N` (or `1`/`0`). Things to consider: other genders like transgender ;)
> which is better in terms of performance and space Most decisions in Computer Science are usually a trade-off between time and space. In your case, the first solution (storing a string up to 6 characters in the table) will ultimately use more space but results in that string being directly available to queries upon the table; whereas the second solution (storing an integer representation whose meaning can be looked up in a second table) will use less space but requires more work to discover the string (although it does enable one to restrict the possible values by defining a [foreign key constraint](http://dev.mysql.com/doc/en/create-table-foreign-keys.html)). Another approach might be to use MySQL's [`ENUM`](http://dev.mysql.com/doc/en/enum.html) type, which enables you to work with strings whilst effectively storing only an integer "under the hood". You may wish to consider Chris Komlenic's article [8 Reasons Why MySQL's ENUM Data Type Is Evil](http://komlenic.com/244/8-reasons-why-mysqls-enum-data-type-is-evil/) before deciding upon this course of action.
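For what it's worth, the lookup-table option from the question can be exercised end-to-end in a few lines. This sketch uses Python's built-in SQLite rather than MySQL (so no `ENUM`), with a foreign key standing in for the value restriction:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce the FK constraint in SQLite
conn.executescript("""
CREATE TABLE Gender (idGender INTEGER PRIMARY KEY, Gender TEXT NOT NULL);
CREATE TABLE Person (
    idPerson INTEGER PRIMARY KEY,
    Name     TEXT,
    Age      INTEGER,
    idGender INTEGER REFERENCES Gender(idGender)
);
INSERT INTO Gender VALUES (1, 'Male'), (2, 'Female');
INSERT INTO Person VALUES (1, 'John', 19, 1), (2, 'Anna', 22, 2);
""")

# The join needed to recover the string is the time cost of the space saving.
rows = conn.execute("""
    SELECT p.Name, g.Gender
    FROM Person p JOIN Gender g ON p.idGender = g.idGender
    ORDER BY p.idPerson
""").fetchall()
print(rows)  # [('John', 'Male'), ('Anna', 'Female')]
```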
Use new table for gender or add it as field in person table
[ "mysql", "sql" ]
I have a table, something like ``` STUDENTNAME ISMANUALLYADDED ISEDITED ----------- --------------- -------- APPLE 0 0 ANT 0 1 BELL 0 0 DOLL 1 0 ``` Here, I am trying to sort the data by `studentname` first, by `ismanuallyadded`, and then by `isedited`. I am expecting the result like: ``` StudentName ----------- APPLE BELL DOLL ANT ``` For this, I am trying like, ``` select studentname from table1 order by studentname, ismanuallyadded, isedited ``` but, this query gives the result like, ``` Student ------- APPLE ANT BELL DOLL ``` Is it possible to order by `studentname` first and then order by `ismanuallyadded`? I am trying to display all the studentnames (neither `manually added` nor `isedited`) alphabetically, then only `ismanuallyadded` students should come, and then `isedited`.
The second criteria in the `order by` is only used when different rows have the same value in the previous criteria. Your rows all have different `studentname` values, so the second and third criteria are never used. Try this: ``` select studentname from table1 order by isedited, ismanuallyadded, studentname ```
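You can watch that tie-breaking in action with Python's built-in SQLite; the sample rows below are the ones from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (studentname TEXT, ismanuallyadded INT, isedited INT)")
conn.executemany("INSERT INTO table1 VALUES (?, ?, ?)", [
    ("APPLE", 0, 0), ("ANT", 0, 1), ("BELL", 0, 0), ("DOLL", 1, 0),
])

# Flags first, name last: the name only breaks ties among equal flags.
names = [r[0] for r in conn.execute(
    "SELECT studentname FROM table1 ORDER BY isedited, ismanuallyadded, studentname"
)]
print(names)  # ['APPLE', 'BELL', 'DOLL', 'ANT']
```

`studentname` only decides the order between APPLE and BELL, the two rows whose flag values are identical.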
Try this ``` select studentname from table1 order by studentname ASC, ismanuallyadded ASC, isedited ASC ```
SQL 'order by' first, and then next
[ "sql", "sql-server-2008" ]
I have a PL/SQL statement like this (it is not a stored procedure or function): ``` DECLARE curstatus VARCHAR(2); person_id varchar(6) := 100; BEGIN SELECT cur_status INTO curstatus FROM Persons WHERE PID=cast(person_id as number(6)); IF curstatus='F' THEN SELECT PID FROM Reports WHERE Report_Type='F' AND PID = person_id; END IF; END; ``` I want to return all the PID from reports (display them to screen), however I receive an error: `an INTO clause is expected`. Please note that this is not a stored procedure or function. How can I return the table?
Surely a pure SQL solution, something like the following, would work? ``` SELECT r.PID FROM Reports r, Persons p WHERE r.Report_Type = 'F' AND r.PID = p.PID AND p.cur_status = 'F' AND p.PID = 100; ```
The reason for the error message is that there is no `into` in the second select statement. ``` DECLARE curstatus VARCHAR(16); person_id varchar(6) := 100; p_pid VARCHAR(32); BEGIN SELECT cur_status INTO curstatus FROM Persons WHERE PID=cast(person_id as number(6)); IF curstatus='F' THEN SELECT PID into p_pid FROM Reports WHERE Report_Type='F' AND PID = person_id; END IF; END; ``` To return a result set, you can use a function which returns a `sys_refcursor` (a cursor opened with `OPEN ... FOR` takes no `INTO` clause). E.g. ``` CREATE OR REPLACE FUNCTION testfunc RETURN SYS_REFCURSOR AS curstatus persons.cur_status%TYPE; person_id VARCHAR (6) := 100; r_cursor SYS_REFCURSOR; BEGIN SELECT cur_status INTO curstatus FROM persons WHERE pid = CAST (person_id AS NUMBER (6)); IF curstatus = 'F' THEN OPEN r_cursor FOR SELECT pid FROM reports WHERE report_type = 'F' AND pid = person_id; END IF; RETURN r_cursor; END; ```
PLSQL return table set
[ "sql", "oracle", "plsql" ]
Is there a way in SQL Server 2012 to grant execute on all stored procedures in one schema? For additional info, these stored procedures only do a select.
Try something like this. It creates a new role and grants it execute permission on a schema. ``` CREATE ROLE db_executor GRANT EXECUTE ON SCHEMA::schema_name TO db_executor exec sp_addrolemember 'db_executor', 'Username' ``` Replace `schema_name` with your schema and `'Username'` with your user.
For granting execute permission on all of the stored procedures in one schema, the query by @szymon is enough. The query below will grant execute permission on a single procedure to the selected user, provided the user already exists: ``` GRANT EXECUTE ON OBJECT::[schema].[procedurename] TO [user] AS [schema]; GO ```
How to grant execute permissions to the stored procedures in a specific schema?
[ "sql", "sql-server", "sql-server-2012" ]
I am using PostgreSQL and created the below table: ``` CREATE TABLE "TrainingMatrix" ( payroll text NOT NULL, "TrainingName" text NOT NULL, "Institute" text, "TrainingDate" date NOT NULL, "ExpiryDate" date, "RecorderName" text, "EnteringDate" date, CONSTRAINT "TrainingMatrix_pkey" PRIMARY KEY (payroll, "TrainingName", "TrainingDate") ) ``` I want `EnteringDate` to be filled in automatically with the machine's current date for each inserted record.
``` CREATE TABLE "TrainingMatrix" ( payroll text NOT NULL, "TrainingName" text NOT NULL, "Institute" text, "TrainingDate" date NOT NULL, "ExpiryDate" date, "RecorderName" text, "EnteringDate" date NOT NULL default current_date, CONSTRAINT "TrainingMatrix_pkey" PRIMARY KEY (payroll, "TrainingName", "TrainingDate") ) ``` The `default current_date` clause fills the column with the current date whenever a row is inserted without an explicit value.
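The same `DEFAULT` mechanism exists in most engines. As a quick, runnable illustration (Python's built-in SQLite rather than PostgreSQL; note SQLite evaluates `CURRENT_DATE` in UTC and stores it as `YYYY-MM-DD` text):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE TrainingMatrix (
    payroll      TEXT NOT NULL,
    EnteringDate TEXT NOT NULL DEFAULT CURRENT_DATE
)
""")
# EnteringDate is omitted on purpose; the default fills it in at insert time.
conn.execute("INSERT INTO TrainingMatrix (payroll) VALUES ('P100')")
entered, = conn.execute("SELECT EnteringDate FROM TrainingMatrix").fetchone()
print(entered)  # the date the row was inserted, e.g. '2013-06-01'
```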
``` CREATE TABLE "TrainingMatrix" ( payroll text NOT NULL, "TrainingName" text NOT NULL, "Institute" text, "TrainingDate" date NOT NULL, "ExpiryDate" date, "RecorderName" text, "EnteringDate" date not null default current_timestamp, CONSTRAINT "TrainingMatrix_pkey" PRIMARY KEY (payroll, "TrainingName", "TrainingDate") ) ``` `current_timestamp` is equivalent to `now()` and gives the current date & time (including time zone); since the column is of type `date`, only the date part is stored. <http://www.postgresql.org/docs/8.1/static/functions-datetime.html> Defining the column as not null will also help ensure that no records are explicitly entered without a timestamp.
Date field to be filled automatically
[ "sql", "database", "postgresql" ]
I have data set as a varchar(500), but I only know if it's numeric or character. I need to count the max spaces of the length of a column AND the max spaces after a decimal point. For Example: ``` ColumnA 1234.56789 123.4567890 ``` would return 11 spaces total AND 7 spaces after the decimal. It can be two separate queries.
``` SELECT LEN(ColumnA ) ,CHARINDEX('.',REVERSE(ColumnA ))-1 FROM Table1 ``` If a value has no decimal, the above will return -1 for the spaces after decimal, so you could use: ``` SELECT LEN(ColumnA) ,CASE WHEN ColumnA LIKE '%.%' THEN CHARINDEX('.',REVERSE(ColumnA))-1 ELSE 0 END FROM Table1 ``` Demo of both: [SQL Fiddle](http://sqlfiddle.com/#!3/a70ff/5/0) If you just wanted the `MAX()` then you'd just wrap the above in `MAX()`: ``` SELECT MAX(LEN(ColumnA )) ,MAX(CHARINDEX('.',REVERSE(ColumnA ))-1) FROM Table1 ```
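The `REVERSE`/`CHARINDEX` arithmetic is easy to verify outside the database; here is the same logic in plain Python against the sample values from the question:

```python
def total_and_decimal_lengths(value: str):
    """Mirror the T-SQL logic: LEN(value), and CHARINDEX('.', REVERSE(value)) - 1."""
    total = len(value)
    # str.find on the reversed string is 0-based, so it equals
    # CHARINDEX('.', REVERSE(value)) - 1 directly; guard the no-decimal case.
    decimals = value[::-1].find(".") if "." in value else 0
    return total, decimals

values = ["1234.56789", "123.4567890"]
print(max(len(v) for v in values))                           # 11
print(max(total_and_decimal_lengths(v)[1] for v in values))  # 7
```

The two maxima match the expected answers from the question: 11 characters total, 7 after the decimal point.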
``` SELECT len(ColumnA), len(columnA) - charIndex('.',ColumnA) FROM theTable ```
Counting spaces before and after a decimal point
[ "sql", "sql-server-2005" ]